
x@Data

that is an indication that x is an S4 object.
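For instance, here is a quick sketch of such an object; the class name "Pt" and the slot name "Data" are invented for illustration, not taken from any package:

setClass("Pt", representation(Data = "numeric"))
x <- new("Pt", Data = c(1, 2, 3))
x@Data             # slots are extracted with '@', not '$'
slot(x, "Data")    # the same extraction via the slot function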

By now you will have noticed that S4 methods are driven by the class attribute just as S3 methods are. This commonality perhaps makes the two systems appear more similar than they are. In S3 the decision of what method to use is made in real-time when the function is called. In S4 the decision is made when the code is loaded into the R session; there is a table that charts the relationships of all the classes. The showMethods function is useful to see the layout.
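For example (in an ordinary interactive session, where the methods package is attached):

showMethods("show")               # all S4 methods defined for the 'show' generic
showMethods(classes = "numeric")  # methods that involve class "numeric"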

S4 has inheritance, as does S3. But, again, there are subtle differences. For example, a concept in S4 that doesn't resonate in S3 is contains. If S4 class "B" has all of the slots that are in class "A", then class "B" contains class "A".
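Here is a minimal sketch of contains, with invented class names:

setClass("A", representation(x = "numeric"))
setClass("B", representation(y = "numeric"), contains = "A")
b <- new("B", x = 1, y = 2)   # a "B" object has both slots
is(b, "A")                    # TRUE: "B" contains "A"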

7.2.3 discussion

Will S4 ever totally supplant S3? Highly unlikely. One reason is backward compatibility; there is a whole lot of code that depends on S3 methods. Additionally, S3 methods are convenient. It is very easy to create a plot or summary method for a specific computation (a simulation, perhaps) that expedites analysis.

So basically S3 and S4 serve different purposes. S4 is useful for large, industrial-strength projects. S3 is useful for ad hoc projects.

If you are planning on writing S4 (or even S3) methods, then you can definitely do worse than getting the book Software for Data Analysis: Programming with R by John Chambers. Don't misunderstand: this book can be useful even if you are not using methods.

Two styles of object orientation are hardly enough. Luckily, there are the OOP, R.oo and proto packages that provide three more.

7.3 Namespaces

Namespaces don't really have much to do with object-orientation. To the casual user they are related in that both seem like an unwarranted complication. They are also related in the sense that that seeming complexity is actually simplicity in disguise.

Suppose that two packages have a function called recode. You want to use a particular one of these two. There is no guarantee that the one you want will always be first on the search list. That is the problem for which namespaces are the answer.
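As a sketch of the mechanism, take a base function that is routinely masked (stats::filter standing in for the hypothetical recode):

x <- 1:10
# even if another attached package (dplyr, say, if it happens to be loaded)
# defines its own filter, the '::' prefix pins down the one that is meant
stats::filter(x, rep(1/3, 3))   # the moving-average filter from 'stats'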

To understand namespaces, let's consider an analogy of a function that returns a named list. There are some things in the environment of the function that you get to see (the components that it returns), and possibly some objects that you can't see (the objects created in the function but not returned). A namespace exports one or more objects so that they are visible, but may have some objects that are private.

The way to specify an object from a particular namespace is to use the '::' operator:

> stats::coef
function (object, ...)
UseMethod("coef")
<environment: namespace:stats>

This operator fails if the name is not exported:

> stats::coef.default

Error: 'coef.default' is not an exported object from 'namespace:stats'

There are ways to get the non-exported objects, but you have to promise not to use them except to inspect the objects. You can use ':::' or the getAnywhere function:

> stats:::coef.default
function (object, ...)
object$coefficients
<environment: namespace:stats>

> getAnywhere('coef.default')
A single object matching 'coef.default' was found
It was found in the following places
  registered S3 method for coef from namespace stats
  namespace:stats
with value

function (object, ...)
object$coefficients
<environment: namespace:stats>

There can be problems if you want to modify a function that is in a namespace. Functions assignInNamespace and unlockBinding can be useful in this regard.
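A hedged sketch of how that might look; myCoef below is a toy replacement, and a real patch would start from a copy of the original function:

myCoef <- function(object, ...) {
    cat("patched coef.default\n")   # just to show the patch is in effect
    object$coefficients
}
assignInNamespace("coef.default", myCoef, ns = "stats")

# the lower-level route: unlock the binding, assign, then lock it again
# unlockBinding("coef.default", asNamespace("stats"))
# assign("coef.default", myCoef, envir = asNamespace("stats"))
# lockBinding("coef.default", asNamespace("stats"))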

The existence of namespaces, S3 methods, and especially S4 methods makes R more suitable to large, complex applications than it would otherwise be. But R is not the best tool for every application. And it doesn't try to be. One of the design goals of R is to make it easy to interact with other software, so that the best tool can be used for each task.


Circle 8

Believing It Does as Intended

In this Circle we came across the fraudulent, each trapped in their own flame.

This Circle is wider and deeper than one might hope. Reasons for this include:

Backwards compatibility. There is roughly a two-decade history of compatibility to worry about. If you are a new user, you will think that rough spots should be smoothed out no matter what. You will think differently if a new version of R breaks your code that has been working. The larger splinters have been sanded down, but this still leaves a number of annoyances to adjust to.

R is used both interactively and programmatically. There is tension there. A few functions make special arrangements to make interactive use easier. These functions tend to cause trouble if used inside a function. They can also promote false expectations.

R does a lot.

In this Circle we will meet a large number of ghosts, chimeras and devils. These can often be exorcised using the browser function. Put the command:

browser()

at strategic locations in your functions in order to see the state of play at those points. A close alternative is:

recover()


browser allows you to look at the objects in the function in which the browser call is placed. recover allows you to look at those objects as well as the objects in the caller of that function and all other active functions.
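A tiny sketch (f and g are made-up functions) of where browser drops you in:

g <- function(y) {
    browser()       # pauses here with g's objects in scope
    y^2
}
f <- function(x) g(x + 1)
# f(2)    # at the Browse prompt, ls() lists the objects in g's frame;
#         # calling recover() there would additionally offer f's frame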

Liberal use of browser, recover, cat and print while you are writing functions allows your expectations and R's expectations to converge.

A very handy way of doing this is with trace. For example, if browsing at the end of the myFun function is convenient, then you can do:

trace(myFun, exit=quote(browser()))

You can customize the tracing with a command like:

trace(myFun, edit=TRUE)

If you run into an error, then debugging is the appropriate action. There are at least two approaches to debugging. The first approach is to look at the state of play at the point where the error occurs. Prepare for this by setting the error option. The two most likely choices are:

options(error=recover)

or

options(error=dump.frames)

The difference is that with recover you are automatically thrown into debug mode, but with dump.frames you start debugging by executing:

debugger()

In either case you are presented with a selection of the frames (environments) of active functions to inspect.
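A small demonstration of the workflow; buggy is a made-up function that fails on purpose:

buggy <- function(x) log(x) + notDefined   # 'notDefined' does not exist
options(error = recover)
# buggy(3)    # the error brings up a numbered menu of active frames;
#             # choose one to browse its environment, or 0 to exit
options(error = NULL)   # restore the default behaviour afterwards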

You can force R to treat warnings as errors with the command:

options(warn=2)

If you want to set the error option in your .First function, then you need a trick since not everything is in place at the time that .First is executed:

options(error=expression(recover()))

or

options(error=expression(dump.frames()))
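So a .First using the trick might look like this sketch (placed in your .Rprofile, or saved in the workspace, however you usually arrange it):

.First <- function() {
    options(error = expression(recover()))
}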

The second idea for debugging is to step through a function as it executes. If you want to step through function myfun, then do:

debug(myfun)

and then execute a statement involving myfun. When you are done debugging, do:

undebug(myfun)
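If only a single call needs inspecting, debugonce avoids the need for undebug:

debugonce(myfun)   # the debugging flag is cleared after one call
# myfun(...)       # that call is stepped through; subsequent calls run normally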

A more sophisticated version of this sort of debugging may be found in the debug package.
