ppm {spatstat}                                                R Documentation
Fits a point process model to an observed point pattern
Usage:

ppm(Q, trend = ~1, interaction = Poisson(), ...,
    covariates = NULL, covfunargs = list(),
    correction = "border", rbord = reach(interaction),
    use.gam = FALSE, method = "mpl", forcefit = FALSE,
    project = FALSE, nd = NULL, gcontrol = list(),
    nsim = 100, nrmh = 1e5, start = NULL,
    control = list(nrep = nrmh), verb = TRUE)
Arguments:

Q: A data point pattern (of class "ppp") to which the model will be fitted, or a quadrature scheme (of class "quad") containing this pattern.

trend: An R formula object specifying the spatial trend to be fitted. The default formula, ~1, indicates that the model is stationary and no trend is fitted.

interaction: An object of class "interact" describing the point process interaction structure, or NULL indicating that a Poisson process (stationary or nonstationary) should be fitted.

...: Ignored.

covariates: The values of any spatial covariates (other than the Cartesian coordinates) required by the model. Either a data frame, or a list whose entries are images, functions, windows or single numbers. See Details.

covfunargs: A named list containing the values of any additional arguments required by covariate functions.

correction: The name of the edge correction to be used. The default is "border", indicating the border correction. Other possibilities include "periodic", "translate", "isotropic" ("Ripley") and "none", depending on the interaction.

rbord: If correction = "border", this argument specifies the width of the margin trimmed from the observation window to form the domain of integration of the pseudolikelihood.

use.gam: Logical flag; if TRUE, computations are performed using gam instead of glm.

method: The method used to fit the model. Options are "mpl" (the default) for the method of maximum pseudolikelihood, and "ho" for the Huang-Ogata approximate maximum likelihood method.

forcefit: Logical flag for internal use. If forcefit=FALSE, some trivial models are fitted by a shortcut; if forcefit=TRUE, the generic fitting procedure is always used.

project: Logical. Setting project=TRUE ensures that the fitted model is always a valid point process, by projecting it onto the space of valid models if necessary. See Details.

nd: Optional. Integer or pair of integers. The dimension of the grid of dummy points used to evaluate the integral in the pseudolikelihood, when Q is a point pattern.

gcontrol: Optional. List of parameters controlling the model-fitting algorithm, passed to glm.control (or to gam.control if use.gam=TRUE).

nsim: Number of simulated realisations to generate (for method="ho").

nrmh: Number of Metropolis-Hastings iterations for each simulated realisation (for method="ho").

start,control: Arguments passed to rmh, controlling the behaviour of the Metropolis-Hastings algorithm (for method="ho").

verb: Logical flag indicating whether to print progress reports (for method="ho").
Details:

This function fits a point process model to an observed point pattern. The model may include spatial trend, interpoint interaction, and dependence on covariates.
In basic use, Q is a point pattern dataset (an object of class "ppp") to which we wish to fit a model.

The syntax of ppm() is closely analogous to the R functions glm and gam. The analogy is:

  glm        ppm
  formula    trend
  family     interaction

The point process model to be fitted is specified by the arguments trend and interaction, which are respectively analogous to the formula and family arguments of glm().
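For example, a call of the following form fits a model with a log-linear trend in the x coordinate and Strauss interpoint inhibition. This is a minimal sketch: the built-in cells dataset and the interaction radius 0.1 are illustrative assumptions, not prescribed by this page.

data(cells)
# assumed: Strauss radius 0.1 chosen purely for illustration
fit <- ppm(cells, trend = ~x, interaction = Strauss(r = 0.1))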
Systematic effects (spatial trend and/or dependence on spatial covariates) are specified by the argument trend. This is an R formula object, which may be expressed in terms of the Cartesian coordinates x and y, the marks (marks), and/or the variables in covariates (if supplied). It specifies the logarithm of the first order potential of the process. The formula should not use any names beginning with .mpl as these are reserved for internal use. If trend is absent or equal to the default, ~1, then the model to be fitted is stationary (or at least, its first order potential is constant).
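For instance (a sketch only; the dataset and the polynomial degree are illustrative assumptions), acceptable trend formulas include:

ppm(cells, ~1)                  # stationary: constant first order potential
ppm(cells, ~x + y)              # log-linear trend in the Cartesian coordinates
ppm(cells, ~polynom(x, y, 2))   # log-quadratic trend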
Stochastic interactions between random points of the point process are defined by the argument interaction. This is an object of class "interact" which is initialised in a very similar way to the usage of family objects in glm and gam. The models currently available are: Poisson, AreaInter, BadGey, DiggleGatesStibbard, DiggleGratton, Fiksel, Geyer, Hardcore, LennardJones, MultiStrauss, MultiStraussHard, OrdThresh, Ord, Pairwise, PairPiece, Saturated, SatPiece, Softcore, Strauss and StraussHard. See the examples below.
If interaction is missing or NULL, then the model to be fitted has no interpoint interactions, that is, it is a Poisson process (stationary or nonstationary according to trend). In this case the method of maximum pseudolikelihood coincides with maximum likelihood.
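To illustrate the distinction (a sketch; the interaction radius and saturation parameter are arbitrary choices):

ppm(cells, ~1)                             # Poisson process: no interaction
ppm(cells, ~1, Geyer(r = 0.07, sat = 2))   # assumed parameters for a Geyer saturation model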
The fitted point process model returned by this function can be printed (by the print method print.ppm) to inspect the fitted parameter values. If a nonparametric spatial trend was fitted, this can be extracted using the predict method predict.ppm.
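A brief sketch of this workflow (assuming the default behaviour of predict.ppm, which returns a pixel image of the fitted trend):

fit <- ppm(cells, ~x)
fit                     # print the fitted coefficients
lam <- predict(fit)     # pixel image of the fitted first order trend
plot(lam)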
To fit a model involving spatial covariates
other than the Cartesian coordinates x and y,
the values of the covariates should be supplied in the
argument covariates
.
Note that it is not sufficient to have observed
the covariate only at the points of the data point pattern;
the covariate must also have been observed at other
locations in the window.
Typically the argument covariates
is a list,
with names corresponding to variables in the trend
formula.
Each entry in the list is either a pixel image
(giving the values of a spatial covariate at
a fine grid of locations), or a function (which can be evaluated
at any location (x,y)
to obtain the value of the spatial
covariate), or a window (interpreted as a logical variable
which is TRUE
inside the window and FALSE
outside it)
or a single number (indicating a covariate that is
constant in this dataset).
The software will look up the pixel values of each image (an object of class "im"; see im.object) at the required locations (quadrature points).
In the case of a function(x, y, ...)
, the arguments
x
and y
are implicit, and
any additional arguments ...
should be given in
covfunargs
.
Note that, for covariate functions, only the name of the function appears in the trend formula. A covariate function is treated as if it were a single variable. The function arguments do not appear in the trend formula. See the Examples.
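As a brief sketch, the covariate function below (dist2origin, with its extra argument p) is hypothetical, invented here for illustration; only the list name Z appears in the trend formula, and the extra argument is supplied through covfunargs:

# hypothetical covariate function with one extra argument 'p'
dist2origin <- function(x, y, p) { (x^2 + y^2)^(p/2) }
ppm(cells, ~Z, covariates = list(Z = dist2origin), covfunargs = list(p = 1))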
If covariates
is a list,
the list entries should have names corresponding to
the names of covariates in the model formula trend
.
The variable names x
, y
and marks
are reserved for the Cartesian
coordinates and the mark values,
and these should not be used for variables in covariates
.
If covariates
is a data frame, Q
must be a
quadrature scheme (see under Quadrature Schemes below).
Then covariates
must have
as many rows as there are points in Q
.
The ith row of covariates
should contain the values of
spatial variables which have been observed
at the ith point of Q
.
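A minimal sketch of the data frame case (the covariate values here are generated at random purely for illustration):

Q  <- quadscheme(cells)
df <- data.frame(Z = runif(n.quad(Q)))   # one invented covariate value per quadrature point
ppm(Q, ~Z, covariates = df)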
In advanced use, Q
may be a ‘quadrature scheme’.
This was originally just a technicality but it has turned out
to have practical uses, as we explain below.
Quadrature schemes are required for our implementation of the method of maximum pseudolikelihood. The definition of the pseudolikelihood involves an integral over the spatial window containing the data. In practice this integral must be approximated by a finite sum over a set of quadrature points. We use the technique of Baddeley and Turner (2000), a generalisation of the Berman-Turner (1992) device. In this technique the quadrature points for the numerical approximation include all the data points (points of the observed point pattern) as well as additional ‘dummy’ points.
A quadrature scheme is an object of class "quad"
(see quad.object
)
which specifies both the data point pattern and the dummy points
for the quadrature scheme, as well as the quadrature weights
associated with these points.
If Q
is simply a point pattern
(of class "ppp"
, see ppp.object
)
then it is interpreted as specifying the
data points only; a set of dummy points specified
by default.dummy()
is added,
and the default weighting rule is
invoked to compute the quadrature weights.
Finer quadrature schemes (i.e. those with more dummy points) generally yield a better approximation, at the expense of higher computational load.
An easy way to fit models using a finer quadrature scheme
is to let Q
be the original point pattern data,
and use the argument nd
to determine the number of dummy points in the quadrature scheme.
Complete control over the quadrature scheme is possible.
See quadscheme
for an overview.
Use quadscheme(X, D, method="dirichlet")
to compute
quadrature weights based on the Dirichlet tessellation,
or quadscheme(X, D, method="grid")
to compute
quadrature weights by counting points in grid squares,
where X
and D
are the patterns of data points
and dummy points respectively.
Alternatively use pixelquad
to make a quadrature
scheme with a dummy point at every pixel in a pixel image.
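A sketch of these options (the numbers of dummy points and the interaction radius are arbitrary choices):

Q1 <- quadscheme(cells, nd = 64)                  # default dummy grid of roughly 64 x 64 points
D  <- runifpoint(500, win = as.owin(cells))       # an arbitrary dummy pattern
Q2 <- quadscheme(cells, D, method = "dirichlet")  # Dirichlet tessellation weights
Q3 <- quadscheme(cells, D, method = "grid")       # grid-counting weights
ppm(Q2, ~1, Strauss(r = 0.1))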
A practical advantage of quadrature schemes arises when we want to fit
a model involving covariates (e.g. soil pH). Suppose we have only been
able to observe the covariates at a small number of locations.
Suppose cov.dat
is a data frame containing the values of
the covariates at the data points (i.e. cov.dat[i,]
contains the observations for the i
th data point)
and cov.dum
is another data frame (with the same columns as
cov.dat
) containing the covariate values at another
set of points whose locations are given by the point pattern Y
.
Then setting Q = quadscheme(X,Y)
combines the data points
and dummy points into a quadrature scheme, and
covariates = rbind(cov.dat, cov.dum)
combines the covariate
data frames. We can then fit the model by calling
ppm(Q, ..., covariates)
.
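A schematic version of this workflow (the dummy pattern Y and the soil pH values are invented for illustration):

X <- cells                                           # data point pattern
Y <- runifpoint(200, win = as.owin(X))               # stand-in for the covariate-observation sites
cov.dat <- data.frame(pH = runif(npoints(X), 4, 7))  # invented pH values at the data points
cov.dum <- data.frame(pH = runif(npoints(Y), 4, 7))  # invented pH values at the dummy points
Q <- quadscheme(X, Y)
ppm(Q, ~pH, covariates = rbind(cov.dat, cov.dum))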
The model may be fitted either by the method of maximum pseudolikelihood (Besag, 1975) or by the approximate maximum likelihood method of Huang and Ogata (1999). Maximum pseudolikelihood is much faster, but has poorer statistical properties.
In either case, the algorithm will begin by fitting the model by maximum pseudolikelihood. By default the algorithm returns the maximum pseudolikelihood fit.
Maximum pseudolikelihood is equivalent to maximum likelihood for Poisson point processes.
Note that the method of maximum pseudolikelihood is believed to be inefficient and biased for point processes with strong interpoint interactions. In such cases, the Huang-Ogata approximate maximum likelihood method should be used, although maximum pseudolikelihood may also be used profitably for model selection in the initial phases of modelling.
If method="ho"
then the model will be fitted using
the Huang-Ogata (1999) approximate maximum likelihood method.
First the model is fitted by maximum pseudolikelihood as
described above, yielding an initial estimate of the parameter
vector theta0.
From this initial model, nsim
simulated
realisations are generated. The score and Fisher information of
the model at theta=theta0
are estimated from the simulated realisations. Then one step
of the Fisher scoring algorithm is taken, yielding an updated
estimate theta1. The corresponding model is
returned.
Simulated realisations are generated using rmh
.
The iterative behaviour of the Metropolis-Hastings algorithm
is controlled by the arguments start
and control
which are passed to rmh
.
As a shortcut, the argument
nrmh
determines the number of Metropolis-Hastings
iterations run to produce one simulated realisation (if
control
is absent). Also
if start
is absent or equal to NULL
, it defaults to
list(n.start=N)
where N
is the number of points
in the data point pattern.
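A sketch contrasting the two fitting methods (the interaction radius and simulation settings are arbitrary):

fit.mpl <- ppm(cells, ~1, Strauss(r = 0.1))               # maximum pseudolikelihood (fast)
fit.ho  <- ppm(cells, ~1, Strauss(r = 0.1), method = "ho",
               nsim = 20, nrmh = 1e5)                     # Huang-Ogata one-step refinement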
Edge correction should be applied to the sufficient statistics
of the model, to reduce bias.
The argument correction
is the name of an edge correction
method.
The default correction="border"
specifies the border correction,
in which the quadrature window (the domain of integration of the
pseudolikelihood) is obtained by trimming off a margin of width
rbord
from the observation window of the data pattern.
Not all edge corrections are implemented (or implementable)
for arbitrary windows.
Other options depend on the argument interaction
, but these
generally include correction="periodic"
(the periodic or toroidal edge
correction in which opposite edges of a rectangular window are
identified) and correction="translate"
(the translation correction,
see Baddeley 1998 and Baddeley and Turner 2000).
For pairwise interaction models
there is also Ripley's isotropic correction,
identified by correction="isotropic"
or "Ripley"
.
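For example (a sketch; the Strauss radius and rbord value are arbitrary):

ppm(cells, ~1, Strauss(r = 0.1), correction = "border", rbord = 0.1)
ppm(cells, ~1, Strauss(r = 0.1), correction = "translate")
ppm(cells, ~1, Strauss(r = 0.1), correction = "isotropic")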
Value:

An object of class "ppm" describing a fitted point process model. See ppm.object for details of the format of this object and methods available for manipulating it.
Apart from the Poisson model, every point process model fitted by
ppm
has parameters that determine the strength and
range of ‘interaction’ or dependence between points.
These parameters are of two types. A parameter phi is called regular if the log likelihood is a linear function of theta, where theta = theta(phi) is some transformation of phi; theta is then called the canonical parameter. Other parameters are called irregular.
Typically, regular parameters determine the ‘strength’ of the interaction, while irregular parameters determine the ‘range’ of the interaction. For example, the Strauss process has a regular parameter gamma controlling the strength of interpoint inhibition, and an irregular parameter r determining the range of interaction.
The ppm
command is only designed to estimate regular
parameters of the interaction.
It requires the values of any irregular parameters of the interaction
to be fixed. For example, to fit a Strauss process model to the cells
dataset, you could type ppm(cells, ~1, Strauss(r=0.07))
.
Note that the value of the irregular parameter r
must be given.
The result of this command will be a fitted model in which the
regular parameter gamma has been estimated.
To determine the irregular parameters, there are several
practical techniques, but no general statistical theory available.
One useful technique is maximum profile pseudolikelihood, which
is implemented in the command profilepl
.
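A sketch of the profile pseudolikelihood approach (the grid of candidate radii is an arbitrary choice):

rr <- data.frame(r = seq(0.05, 0.15, by = 0.01))   # candidate Strauss radii
p  <- profilepl(rr, Strauss, cells, ~1)            # fits ppm(cells, ~1, Strauss(r)) for each r
p                                                  # reports the optimal r and the fitted model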
The implementation of the Huang-Ogata method is experimental; several bugs were fixed in spatstat 1.19-0.
See the comments above about the possible inefficiency and bias of the maximum pseudolikelihood estimator.
The accuracy of the Berman-Turner approximation to the pseudolikelihood depends on the number of dummy points used in the quadrature scheme. The number of dummy points should at least equal the number of data points.
The parameter values of the fitted model
do not necessarily determine a valid point process.
Some of the point process models are only defined when the parameter
values lie in a certain subset. For example the Strauss process only
exists when the interaction parameter gamma
is less than or equal to 1,
corresponding to a value of ppm()$theta[2]
less than or equal to 0
.
By default (if project=FALSE
) the algorithm
maximises the pseudolikelihood
without constraining the parameters, and does not apply any checks for
sanity after fitting the model.
This is because the fitted parameter value
could be useful information for data analysis.
To constrain the parameters to ensure that the model is a valid
point process, set project=TRUE
. See also the functions
valid.ppm
and project.ppm
.
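A sketch of these checks (assuming valid.ppm can be called directly on the fitted model, and with an arbitrary Strauss radius):

fit  <- ppm(cells, ~1, Strauss(r = 0.1))
valid.ppm(fit)                                             # do the fitted parameters define a valid process?
fit2 <- ppm(cells, ~1, Strauss(r = 0.1), project = TRUE)   # constrain the fit to be valid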
The trend
formula should not use any variable names
beginning with the prefixes .mpl
or Interaction
as these names are reserved
for internal use. The data frame covariates
should have as many rows
as there are points in Q
. It should not contain
variables called x
, y
or marks
as these names are reserved for the Cartesian coordinates
and the marks.
If the model formula involves one of the functions
poly()
, bs()
or ns()
(e.g. applied to spatial coordinates x
and y
),
the fitted coefficients can be misleading.
The resulting fit is not to the raw spatial variates
(x
, x^2
, x*y
, etc.)
but to a transformation of these variates. The transformation is implemented
by poly()
in order to achieve better numerical stability.
However the
resulting coefficients are appropriate for use with the transformed
variates, not with the raw variates.
This affects the interpretation of the constant term in the fitted model, log(beta).
Conventionally, beta is the background intensity, i.e. the
value taken by the conditional intensity function when all predictors
(including spatial or “trend” predictors) are set equal to 0.
However the coefficient actually produced is the value that the
log conditional intensity takes when all the predictors,
including the transformed
spatial predictors, are set equal to 0
, which is not the same thing.
Worse still, the result of predict.ppm
can be
completely wrong if the trend formula contains one of the
functions poly()
, bs()
or ns()
. This is a weakness of the underlying
function predict.glm
.
If you wish to fit a polynomial trend,
we offer an alternative to poly()
,
namely polynom()
, which avoids the
difficulty induced by transformations. It is completely analogous
to poly
except that it does not orthonormalise.
The resulting coefficient estimates then have
their natural interpretation and can be predicted correctly.
Numerical stability may be compromised.
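A sketch of the difference (using the nztrees dataset that also appears in the Examples below):

data(nztrees)
fit.raw  <- ppm(nztrees, ~polynom(x, 2))   # coefficients refer to the raw variates x, x^2
fit.orth <- ppm(nztrees, ~poly(x, 2))      # coefficients refer to orthogonalised variates; see warning above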
Values of the maximised pseudolikelihood are not comparable
if they have been obtained with different values of rbord
.
Author(s):

Adrian Baddeley Adrian.Baddeley@csiro.au http://www.maths.uwa.edu.au/~adrian/ and Rolf Turner r.turner@auckland.ac.nz
References:

Baddeley, A. and Turner, R. Practical maximum pseudolikelihood for spatial point patterns. Australian and New Zealand Journal of Statistics 42 (2000) 283–322.
Berman, M. and Turner, T.R. Approximating point process likelihoods with GLIM. Applied Statistics 41 (1992) 31–38.
Besag, J. Statistical analysis of non-lattice data. The Statistician 24 (1975) 179-195.
Diggle, P.J., Fiksel, T., Grabarnik, P., Ogata, Y., Stoyan, D. and Tanemura, M. On parameter estimation for pairwise interaction processes. International Statistical Review 62 (1994) 99-117.
Huang, F. and Ogata, Y. Improvements of the maximum pseudo-likelihood estimators in various spatial statistical models. Journal of Computational and Graphical Statistics 8 (1999) 510-530.
Jensen, J.L. and Moeller, M. Pseudolikelihood for exponential family models of spatial point processes. Annals of Applied Probability 1 (1991) 445–461.
Jensen, J.L. and Kuensch, H.R. On asymptotic normality of pseudo likelihood estimates for pairwise interaction processes, Annals of the Institute of Statistical Mathematics 46 (1994) 475-486.
See Also:

ppm.object for details of how to print, plot and manipulate a fitted model; ppp and quadscheme for constructing data.
Interactions: Poisson, AreaInter, BadGey, DiggleGatesStibbard, DiggleGratton, Geyer, Fiksel, Hardcore, LennardJones, MultiStrauss, MultiStraussHard, OrdThresh, Ord, Pairwise, PairPiece, Saturated, SatPiece, Softcore, Strauss and StraussHard.
See profilepl
for advice on
fitting nuisance parameters in the interaction,
and ippm
for irregular parameters in the trend.
See valid.ppm
and project.ppm
for
ensuring the fitted model is a valid point process.
Examples:

data(nztrees)
ppm(nztrees)
# fit the stationary Poisson process
# to point pattern 'nztrees'

## Not run:
Q <- quadscheme(nztrees)
ppm(Q)
# equivalent.
## End(Not run)

ppm(nztrees, nd=128)

fit1 <- ppm(nztrees, ~ x)
# fit the nonstationary Poisson process
# with intensity function lambda(x,y) = exp(a + bx)
# where x,y are the Cartesian coordinates
# and a,b are parameters to be estimated
fit1
coef(fit1)
coef(summary(fit1))

ppm(nztrees, ~ polynom(x,2))
# fit the nonstationary Poisson process
# with intensity function lambda(x,y) = exp(a + bx + cx^2)

## Not run:
library(splines)
ppm(nztrees, ~ bs(x,df=3))
## End(Not run)
# WARNING: do not use predict.ppm() on this result
# Fits the nonstationary Poisson process
# with intensity function lambda(x,y) = exp(B(x))
# where B is a B-spline with df = 3

ppm(nztrees, ~1, Strauss(r=10), rbord=10)
# Fit the stationary Strauss process with interaction range r=10
# using the border method with margin rbord=10

ppm(nztrees, ~ x, Strauss(13), correction="periodic")
# Fit the nonstationary Strauss process with interaction range r=13
# and exp(first order potential) = activity = beta(x,y) = exp(a+bx)
# using the periodic correction.

# Huang-Ogata fit:
## Not run: ppm(nztrees, ~1, Strauss(r=10), method="ho")

# COVARIATES
#
X <- rpoispp(42)
weirdfunction <- function(x,y){ 10 * x^2 + 5 * sin(10 * y) }
#
# (a) covariate values as function
ppm(X, ~ y + Z, covariates=list(Z=weirdfunction))
#
# (b) covariate values in pixel image
Zimage <- as.im(weirdfunction, unit.square())
ppm(X, ~ y + Z, covariates=list(Z=Zimage))
#
# (c) covariate values in data frame
Q <- quadscheme(X)
xQ <- x.quad(Q)
yQ <- y.quad(Q)
Zvalues <- weirdfunction(xQ,yQ)
ppm(Q, ~ y + Z, covariates=data.frame(Z=Zvalues))
# Note Q not X

# COVARIATE FUNCTION WITH EXTRA ARGUMENTS
#
f <- function(x,y,a){ y - a }
ppm(X, ~x + f, covariates=list(f=f), covfunargs=list(a=1/2))

## MULTITYPE POINT PROCESSES ###
data(lansing)
# Multitype point pattern --- trees marked by species
# fit stationary marked Poisson process
# with different intensity for each species
## Not run: ppm(lansing, ~ marks, Poisson())

# fit nonstationary marked Poisson process
# with different log-cubic trend for each species
## Not run: ppm(lansing, ~ marks * polynom(x,y,3), Poisson())