
AniMove HowTo: how to analyze location data
 Home Range plugin for QGIS
 GRASS and QGIS integration

R and Adehabitat
 Installation and startup

Analyses

Summary Statistics
 Sample Size:
 Minimum X and Y:
 Mean of X and Y:
 XY Variance:
 Distance:
 Minimum and maximum distance:
 Minimum distance:
 Maximum distance:
 Distance between successive relocations:
 Minimum date:
 Maximum date:
 Minimum speed (units/day):
 Maximum speed (units/day):
 Mean daily speed:
 Distance (m) between successive observations:
 Total distance (m) per animal:
 Distance (m) between the first and last relocation:
 Mean distance (m) per animal:
 Linearity:
 Schoener's ratio:
 Home range analyses

Summary Statistics
 Commands to obtain a file
 Commands to convert into shapefile
 GRASS/R Interface
 Analyses with GRASS
 How to store geometry in GRASS but attributes in PostgreSQL
 Alternative approaches
 About kernel estimates
 About LoCoH (kNNCH) UD Analysis
 R HR randomisation scripts
 Batch homerange calculations
 References
AniMove HowTo: how to analyze location data
A set of manuals to explain how to analyze animal movements using free and open source software.
Home Range plugin for QGIS
A tool to calculate home ranges is available as a QGIS plugin. It provides an intuitive graphical interface that collects all parameters of the analysis and calls R to perform it. Available analyses are: MCP, kernel href, kernel hLSCV, and kernel with adjusted h (Wauters et al., 2007). Batch analysis is also available.
You can download it via the Plugin Installer. The plugin is hosted on the user-contributed plugin repository.
Refer also to the wiki page of the plugin. This page contains up-to-date information on how to install it on various platforms. Feel free to contribute by reporting errors and updates, preferably on Trac, or alternatively on the AniMove mailing list. Thanks!
Sample data here (SRID code: 3003).
The development of additional plugins is currently underway.
GRASS and QGIS integration
GRASS scripting is easy and powerful, and can generate GUIs on the fly through the simple Tcl/Tk language. A script for GRASS: v.animove.
First of all you have to install the libraries "adehabitat" and "shapefiles" in R (just follow the instructions according to your operating system).
To install animove, just copy the file v.animove into your /usr/lib/grass/scripts directory (in Windows, something like C:\Programmi\Quantum_GIS\grass\scripts); then you can run it from GRASS by typing v.animove from the GRASS shell.
This approach is being improved by integrating the new GRASS script into QGIS, so as to gain extra flexibility and usability. To use this script in QGIS you must:
 download this file, this file, and this file and copy them into your /usr/share/qgis/grass/modules directory (in Windows, something like C:\Programmi\Quantum_GIS\grass\modules)
 download the help file and copy it in /usr/share/doc/grassdoc/html/ (or the Windows equivalent)
 modify the file /usr/share/qgis/grass/config/default.qgc (in Windows, something like C:\Programmi\Quantum_GIS\grass\config\default.qgc), adding the row <grass name="v.animove"/> inside a section (preferably inside <section label="Delaunay triangulation, Voronoi diagram and convex hull">).
In bash terms:
cd /usr/lib/grass/scripts
wget http://www.faunalia.it/animov/scripts/v.animove
chmod a+x v.animove
cd /usr/share/doc/grassdoc/html/
wget http://www.faunalia.it/animov/scripts/v.animove.html
cd /usr/share/qgis/grass/modules/
wget http://www.faunalia.it/animov/scripts/v.animove.qgm
wget http://www.faunalia.it/animov/scripts/v.animove.1.png
wget http://www.faunalia.it/animov/scripts/v.animove.2.png
nano /usr/share/qgis/grass/config/default.qgc
[in the previous file, add <grass name="v.animove"/> NB: be careful to write straight ", not the curly ones]
Now you can find and run it from the GRASS toolbox in QGIS. You can find a sample file of animal locations here (SRID code: 3003).
A first result:
Help in testing and improving it will be appreciated.
v.energy
A little script to calculate the average energy available for each home range, by Anne Ghisla et al.
R and Adehabitat
Installation and startup
Follow the instructions appropriate for your Operating System. R is packaged for all major GNU/Linux distributions, and for Windows. You can download packages from the R web site. For Debian you can simply type (as root):
apt-get install r-base
To install all necessary libraries, you will need to download adehabitat with these commands:
Open a shell and write
su
start R as superuser, then:
install.packages("adehabitat", dependencies=TRUE, repos="your_favorite_mirror")
Some mirrors:
http://microarrays.unife.it/CRAN/ (temporarily disabled)
http://cran.arsmachinandi.it/
http://rm.mirror.garr.it/mirrors/CRAN/
http://dssm.unipa.it/CRAN/
Then exit the root shell.
Please note: due to frequent updates and improvements, package names change often; in case the links above do not work, please search for the packages on the R web site (contributed packages).
Otherwise, check adehabitat's dependencies, download them from the R web site and install them manually:
install.packages(pkgs="path to the package you downloaded, including the file name", repos=NULL)
Please note: some interfaces of R provide easier ways for installing additional packages. See their respective instructions.
Then start R (specific instruction depend on your Operating System) and load all the relevant libraries:
library(adehabitat)
Analyses
Now let's play! You can load the sample data from the adehabitat package (dataset puechabon), or use your own.
data(puechabon) #loads the data [note: comments to commands start with "#"]
xy <- puechabon$locs[, c("X","Y")] #takes the columns named X and Y from the section "locs" of the puechabon dataset, and stores them in a matrix named "xy"
id <- puechabon$locs[, c("Name")] #takes the column named Name from the section "locs" of the puechabon dataset, and stores it in a vector named "id"
#then typing:
> xy
#we can see the values of the x and y coordinates
       X       Y
1 699889 3161559
2 700046 3161541
3 698840 3161033
4 699809 3161496
. . . .
#typing
> id
#as you can see, data refer to four animals (Brock, Calou, Chou, Jean)
  [1] Brock Brock Brock Brock Brock Brock Brock Brock Brock Brock Brock Brock
. . .
 [37] Calou Calou Calou Calou Calou Calou Calou Calou Calou Calou Calou Calou
. . .
 [61] Chou Chou Chou Chou Chou Chou Chou Chou Chou Chou Chou Chou
. . .
[109] Jean Jean Jean Jean Jean Jean Jean Jean Jean Jean Jean
Levels: Brock Calou Chou Jean
To use your data, you can start with a text file (R can also link to a database, use other file formats, etc., in case you need this).
For example to import a file fagiani in R you can write:
> data <- read.table(file="the path to your file, including the file's name", header=T, sep=" ")
#data is the name that you give to your data
#header=T indicates that the first line of the file contains the variable names
#sep=" " means that the columns are separated by a space; if you want, you can separate the data with tabs by writing sep="\t"
You should have a matrix with at least 2 columns, one for the x coordinates and one for the y; additional columns allow you to run separate analyses (e.g. for different individuals, seasons, activities, etc.).
To remove missing data, just type na.omit(yourdata) when loading the data, instead of yourdata, e.g. locs <- na.omit(puechabon$locs). Better do it as a first step, to avoid having misaligned vectors or matrices (e.g. if you have a missing xy, but a known id).
Summary Statistics
Once you have your data in an R object (we have called it xy), you can start getting some results.
Sample Size:
Total number of records (observations) in the dataset.
#Sample Size
nrow(xy)
#per animal
table(id)
Minimum X and Y:
Minimum value of the x coordinate (easting) and the y coordinate (northing).
R commands are:
#Minimum X and Y
apply(xy, 2, min)
#Minimum X and Y per animal
apply(xy, 2, function(x) tapply(x, id, min))
Mean of X and Y:
Mean value of the x coordinate (easting) and the y coordinate (northing).
#Mean of X and Y
apply(xy, 2, mean)
#Mean of X and Y per animal
apply(xy, 2, function(x) tapply(x, id, mean))
XY Variance:
The XY variance (not the covariance).
#Variance of X and Y
apply(xy, 2, var)
#Variance of X and Y per animal
apply(xy, 2, function(x) tapply(x, id, var))
Distance:
Distance (m) between relocations.
di <- as.matrix(dist(xy))
diag(di) <- NA
#Distance between relocations, per animal
xyperan <- split(xy, id)
dibis <- lapply(xyperan, function(x) as.matrix(dist(x)))
dibis <- lapply(dibis, function(x) {diag(x) <- NA; x})
Minimum and maximum distance:
Minimum and maximum distance (m) between relocations.
min(c(di), na.rm=TRUE)
max(c(di), na.rm=TRUE)
Minimum distance:
Minimum distance (m) between relocations.
#min distance
lapply(dibis, function(x) min(c(x), na.rm=TRUE))
Maximum distance:
Maximum distance (m) between relocations.
#max distance
lapply(dibis, function(x) max(c(x), na.rm=TRUE))
Distance between successive relocations:
To perform the following analyses, your data must contain a date column, as in the dataset puechabon.
First you must transform the dataset into an object of class POSIXct, and then into an object of class traj.
The commands are:
da <- as.POSIXct(strptime(as.character(puechabon$locs$Date), "%y%m%d"))
tr <- as.traj(id = id, xy = xy, date = da)
tr
Minimum date:
Earliest observation date per animal.
kk <- tapply(tr$date, tr$id, min)
class(kk) <- "POSIXct"
kk
Maximum date:
Latest observation date per animal.
kk <- tapply(tr$date, tr$id, max)
class(kk) <- "POSIXct"
kk
Minimum speed (units/day):
Minimum number of meters traveled per day.
#Compute speed
sp <- speed(tr)
#Minimum speed
tapply(sp$speed, sp$id, min)
Maximum speed (units/day):
Maximum number of meters traveled per day.
#Compute speed
sp <- speed(tr)
#Maximum speed
tapply(sp$speed, sp$id, max)
Mean daily speed:
Mean number of meters traveled per day (distance / number of days in the dataset).
#Compute speed
sp <- speed(tr)
#Mean speed
tapply(sp$speed, sp$id, mean)
Distance (m) between successive observations:
disuc <- sp$speed * sp$dt
Total distance (m) per animal:
ditot <- tapply(disuc, sp$id, sum)
Distance (m) between the first and last relocation:
dd <- function(x) {
  xfirst <- x[1,]
  xend <- x[nrow(x),]
  di <- sqrt((xfirst[1]-xend[1])^2 + (xfirst[2]-xend[2])^2)
  return(di)
}
dilin <- unlist(lapply(xyperan, dd))
Mean distance (m) per animal:
tapply(disuc, sp$id, mean)
Linearity:
The ratio of the distance between the travel path endpoints to the total distance traveled.
dilin/ditot
Schoener's ratio:
Schoener's ratio, for examining autocorrelation.
The ratio is t2/r2, where t2 is the mean squared distance between successive observations and r2 (the mean squared distance from the center of activity) is a measure of the dispersion of the data.
t2 <- lapply(split(disuc, sp$id), function(x) sum(x^2)/length(x))
r2 <- lapply(xyperan, function(x) var(x[,1]) + var(x[,2]))
ratio <- unlist(t2)/unlist(r2)
Home range analyses
Minimum Convex Polygon Home Range
Also called convex hull: calculates a minimum convex polygon home range based on either the selected records or, if none are selected, the entire data set.
A polygon shapefile theme is created. The location point statistics menu choice will output an MCP as a graphic object, if this is desired rather than a shapefile.
If only the area is desired, then location point statistics with nothing selected will output the MCP area.
This calculates the area enclosing all points in the dataset.
hr <- mcp(xy, id)
area.plot(hr)
The graph below shows the MCP area per animal.
jj <- mcp.area(xy, id)
jj
      Brock    Calou     Chou     Jean
20  2.01000  8.06000 15.74195  1.13685
25  2.05565  9.96440 16.11330  1.48680
30  2.44080 13.27520 16.67075  1.69070
35  6.46570 13.65330 18.04590  2.33475
40  7.01845 14.88035 22.65340  4.09565
45  9.23435 15.86955 26.36545  5.55715
50  9.68735 16.31155 26.76340  5.72115
55  9.85540 16.31155 28.51200  5.99155
60 12.84635 17.49680 29.97835  6.64385
65 13.12805 19.74600 31.75860  9.05275
70 15.66545 20.05745 32.25850 10.58390
75 16.14625 21.53025 34.77430 16.40945
80 19.06725 22.15015 47.81925 24.29185
85 20.22535 26.75180 54.76550 35.31225
90 21.82440 37.12975 67.18565 52.59315
95 22.64670 41.68815 71.93205 55.88415
#In this table the first column shows the % of home range and the others indicate the home range size (ha) for each animal.
#On the basis of this you can obtain the following plot by writing:
plot(jj)
Kernel home range
Kernel Home Range calculates a fixed kernel home range utilization distribution (Worton 1989) as a grid coverage, using either an ad hoc calculation of the smoothing parameter, least squares cross validation (Silverman 1986), or a user-supplied value for the smoothing parameter (H). The bivariate normal density kernel is used, as suggested by Worton (1989). The least squares cross validation (LSCV) in this version takes significant processing time despite using the golden-section minimizing routine. Most users will find that the ad hoc calculations are very close to the LSCV for exploratory analysis (for most datasets Hopt is usually between 0.7 and 0.9 of Href [ad hoc] in this implementation). The ad hoc calculation is based on Silverman (1986) rather than Worton (1989) or Seaman & Powell (1996). The problem of discretization errors (zero-length distances, caused by rounding location positions, giving a minimized h of zero) is handled slightly differently than in Tufto (1996): distance measures are placed uniformly at random between 0 and (href/100) when, and only when, the distance measurements are 0. This adjusts the locations only when necessary and allows for different projection and distance systems. The kernel is based on the non-corrected data. The program asks the user whether they would like to adjust the LSCV or the ad hoc H. Worton (1994) suggests adjusting H by 0.8 for normal distributions and 0.5 for uniform ones. Work by Seaman & Powell (1996) suggests that this is not necessary with the LSCV. It is our experience that the original ad hoc and LSCV smoothing parameters provide a less biased estimator than a user-selected value or Worton's corrections.
This routine outputs: a grid coverage with the Utilization Distribution (probabilities), a polygon shapefile containing individual polygons for each selected probability, an associated attribute table containing probability and area fields for each set of probability polygons, and a message box displaying the area calculations of each probability.
Note that the default probabilities are 50 and 95, and that the view must be zoomed out sufficiently to encompass the larger probability areas (95%, etc.) in order to create the polygon shapefile.
hr <- kernelUD(xy, id, h="LSCV")
hr
image(hr)
The graphic shows the kernel home range utilization distribution.
jj <- kernel.area(xy, id)
jj
       Brock     Calou      Chou      Jean
20  15.50560  12.33984  18.82767  14.77481
25  20.35111  15.97852  24.34613  19.20725
30  25.77807  19.85449  30.18920  24.37843
35  31.59267  24.04688  36.35689  30.28835
40  37.79491  28.55566  43.17380  36.19827
45  44.77243  33.53906  49.99072  43.58567
50  52.13760  39.07617  57.78148  50.97308
55  60.27804  45.40430  66.54609  60.57670
60  69.19376  52.68164  76.28454  70.91906
65  78.88477  60.82910  86.99684  83.47765
70  89.93251  69.76758  99.65682  97.51371
75 102.91845  79.73438 114.26450 114.50474
80 118.42406  91.20410 132.11833 135.18947
85 138.19370 105.20508 156.13985 161.78411
90 165.32851 123.79395 194.44442 203.89231
95 209.71331 153.14062 261.63974 281.46004
#In this table the first column shows the % of home range and the others indicate the home range size (ha) for each animal.
#On the basis of this you can obtain the following plot by writing:
plot(jj)
Incremental core area analysis
The procedure (download) is similar to the one outlined in Hanski et al. (2000). Home range size is evaluated at different levels (percentages of locations if using clusterhr or mcp, or different choropleths if using kernel), with a one-way ANOVA followed by Tukey's "Honestly Significantly Different" pairwise comparison test between subsequent levels. A summary of the Tukey HSD test is printed, and a plot like the one presented in Hanski et al. (2000) is produced. The function returns a list-class object containing the analysis of variance, the Tukey HSD test and the plot data.
data(puechabon)
loc <- puechabon$locs[, c("X", "Y")]
id <- puechabon$locs[, "Name"]
cores <- corearea(loc, id, method = "mcp")
References: Hanski, I. K., Stevens, P. C., Ihalempia, P. & Selonen, V. 2000. Home-range size, movements, and nest-site use in the Siberian flying squirrel, Pteromys volans. Journal of Mammalogy 81: 798-809.
Standard deviation
The standard deviation is computed for each axis (major and minor axis length).
## First perform a PCA
pc <- lapply(xyperan, dudi.pca, scale=FALSE, scannf=FALSE)
#Then the standard deviation is computed for the primary and secondary axes
sde <- lapply(pc, function(x) apply(x$li, 2, sd))
sde <- do.call("rbind", sde)
Eccentricity
The ratio between the minor and the major axis.
apply(sde, 1, function(x) x[2]/x[1])
Ellipse
This implements the Jennrich-Turner (1969) bivariate normal home range.
This method of home range calculation is seriously flawed in its dependence on a bivariate normal distribution of locations, but is valuable for its lack of sensitivity to sample size, the simplicity of the underlying probability theory, the ability to obtain confidence limits on the estimates, and to derive the principal axes of the location coordinates. Other than in the rare circumstances where the data are bivariate normal, the main utility of this program lies in developing statistically simple home range-habitat models and in comparisons with home range estimates obtained using this method in other studies.
The package ellipse contains various routines for drawing ellipses and ellipselike confidence regions.
library(ellipse)
foo <- function(x) {
  u <- ellipse(cov(x), centre=apply(x, 2, mean))
  plot(u, ty="n", axes=FALSE)
  points(x, pch=16)
  lines(u)
  box()
}
par(mfrow=c(2,2), mar=c(0,0,0,0))
lapply(xyperan, foo)
The graphic shows the 95% confidence ellipse:
Commands to obtain a file
To save to a file an object created in R, you must write:
png("the directory's address in which you want to save the file, plus the filename", width = 800, height = 800)
#Obviously you can change width and height as you want.
#Then write the R commands which you used to create the object, and afterwards:
graphics.off()
For example, if you want to save the graphic that shows the MCP area per animal to a file named MCP, you must write:
png("the directory's address in which you want to save the graphic/MCP", width = 800, height = 800)
area.plot(hr)
graphics.off()
Commands to convert into shapefile
To convert objects of class "kver" (kernels) or "area" (MCPs), there is a newer procedure.
Requirements: adehabitat and rgdal packages, and a dataframe whose rownames are the same as the list of animals in the kver/area object.
spol <- kver2spol(kver)
#or
spol <- area2spol(area)
#then
spdf <- SpatialPolygonsDataFrame(spol, dataframe, match.ID = TRUE)
writeOGR(spdf, dir, name, "ESRI Shapefile")
Note: the SpatialPolygonsDataFrame function is very choosy about its input data. See its documentation for common issues.
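As a hedged sketch of the requirement above (assuming the kver object is a list named by animal, as in the examples earlier on this page; the column name "id" is purely illustrative), one way to build a matching attribute table is:

```r
# Build a one-column attribute data frame whose row names match the
# animal names stored in the kver object, so match.ID succeeds
dataframe <- data.frame(id = names(kver), row.names = names(kver))
```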
GRASS/R Interface
For the integration of R in GRASS, you need to run R from the GRASS shell environment. The interface loads compiled GIS library functions into the R executable environment. Then GRASS metadata are transferred to R.
For example, you can take the result of a kernel analysis, transform it into a shapefile, and then import it into GRASS.
Analyses with GRASS
To import a shapefile you can use the line command:
v.in.ogr
http://www.grass.itc.it/grass60/manuals/html60_user/v.in.ogr.html
To make a minimum convex polygon in GRASS you can use the line command:
v.hull
http://www.grass.itc.it/grass60/manuals/html60_user/v.hull.html
To make a kernel analysis in GRASS you can use the line command:
v.kernel
http://www.grass.itc.it/grass60/manuals/html60_user/v.kernel.html
How to store geometry in GRASS but attributes in PostgreSQL
By Markus Neteler and Ferdinando Urbano
A simple system to update geographic data (slope, land cover, ...) in a table of fixes (animal locations) stored in PostgreSQL. It uses GRASS tools (it is possible to call them through the QGIS interface, both in Linux and Windows).
As v.external can only read data from PostgreSQL, you must use v.in.db, v.db.connect and then v.what.rast to get the result. Here are the commands:
Check current settings for attribute storage:
db.connect -p
Import the table from PostgreSQL into a new map (NOTE: the output map name needs to differ from the table name in case GRASS is connected to PostgreSQL):
v.in.db driver=pg database="host=localhost,dbname=meteo" table=mytable x=lon y=lat key=cat out=mytable
v.db.connect map=mytable -p
Cancel table connection between map and attribute table:
v.db.connect map=mytable -d
v.db.connect map=mytable -p
Drop table which was replicated due to import:
db.tables -p
echo "DROP TABLE mytable" | db.execute
db.tables -p
Reconnect the map to the table in PostgreSQL:
v.db.connect map=mytable driver=pg database="host=localhost,dbname=meteo" table=mytable key=cat
Now the geometry is stored in GRASS while the attributes are stored in PostgreSQL.
Alternative approaches
Web analyses
This approach has the enormous advantage of not requiring any software installation, and the disadvantage of not allowing much flexibility for the user. A first approach (fully functional, but for now limited to one kind of analysis) is: http://locoh.palary.org [now apparently not functional] Thanks to Scott Fortmann-Roe for this.
R GUI
A good candidate is R Commander. My guesstimate is that it could take just a few hours of work to have something simple running. With this approach, GIS integration will be limited to import/export of shapefiles.
Menu structure for a possible Graphical User Interface
File
  Open script file
  Save script
  Save script as
  Save output
  Save output as
  Save R workspace
  Save R workspace as
  Exit
    from Commander
    from Commander and R
Edit
  Clear Window
  Cut
  Copy
  Paste
  Delete
  Find
  Select all
Data
  New data set
  Import data
    from text file
    from database
    from geodatabase
    from shapefiles (points)
    from GRASS (points)
    from SPSS data set
    from Minitab data set
    from STATA data set
  Data in packages
    List data sets in packages
    Read data set from attached package
  Active data set
    Select active data set
    Help on active data set (if available)
    Variables in active data set
    Set case names
    Subset active data set
    Remove cases with missing data
    Export active data set
  Manage variables in active data set
    Recode variable
    Compute new variable
    Standardize variables
    Convert numeric variable to factor
    Bin numeric variable
    Reorder factor levels
    Define contrasts for a factor
    Rename variables
Statistics
  [statistics from adehabitat]
  Home Ranges
    MCP
    Kernel
    ...
Save as
  Shapefiles
    MCP
    kernel (choose %)
    ...
  Raster (Utilization Distribution)
    GRASS
    ...
  Save graph to file
    as bitmap
    as PDF/Postscript/EPS
Help
  Commander help
  About Rcmdr
  Introduction to the R Commander
  Help on active data set (if available)
About kernel estimates
The choice of the smoothing parameter h is the crucial point for kernel analyses. Here are some very interesting notes about this.
Comments from Clément Calenge
Actually, "h=href" corresponds to a smoothing parameter estimated under the hypothesis that the underlying utilisation distribution is normally distributed (UD unimodal, i.e. one center of activity, and symmetrical). This is not the case in your example. When the UD is not bivariate normal, the use of href greatly overestimates the UD (see Silverman 1986, Worton 1995 J. Wildl. Manage.). I never compared the estimates of adehabitat with other software when smoothing values are estimated with the ad hoc method. But before developing adehabitat, I was using the software RANGES V, which also returned greatly overestimated home ranges for "multicenter" home ranges. So, in my opinion, the reference method would return similar results whatever the software (though not exactly identical, because the different programs do not use exactly the same algorithms: not the same size of the grid, not the same way to compute the limits of the HR from the UD). I agree that it would be interesting to perform such a comparison... One alternative, when several centers of activity are present, is to use the LSCV method, but, again, the results would differ among programs (and more dramatically). For example, I used the dataset "puechabon" of the package adehabitat (all animals pooled), and I estimated the UD with different home range estimation programs. The smoothing values (in metres) returned by these programs are:
 83 m: adehabitat
 310 m: ArcView Animal Movement Analysis (AAMA)
 44 m: Ranges V
 802 m: Calhome
And two other programs use one smoothing parameter for x and one for y:
 59 m for X and 161 m for Y: The Home Ranger
 131 m for X and 364 m for Y: kernelHR
As you can see, there is much variation in the estimation. The main cause of variation is that the different programs do not use the same algorithms to smooth the UD. In addition, the algorithm often fails to minimise the smoothing error, so that bad results are returned by the function (this is a property of the method; see the help page). Finally, the last method (which I personally prefer) is to specify a value for the smoothing parameter (the same for all animals), based on some visual exploration of the data.
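A minimal sketch of this last approach, using the xy and id objects defined earlier on this page (the value 100 is purely illustrative; choose it from your own visual exploration of the data):

```r
# Fix the smoothing parameter manually; h is expressed in the map units of xy
# (metres for the puechabon dataset)
hr <- kernelUD(xy, id, h=100)
image(hr)  # inspect the resulting UD before settling on a value
```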
A few questions
What unit is the grid cell size in?
The units are defined by the dataset passed to the function: for example, if you pass a data frame xy with coordinates given in meters, h will be computed in meters. Concerning the size of the grid, it is given in number of pixels. If "grid=40", then a grid of 40*40 pixels is used for the estimation. You may also prefer to pass a grid that you have defined (e.g. using the function ascgen() ) to this parameter.
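For example (a hedged sketch using the sample objects above), to use a finer grid than the default 40*40 pixels:

```r
# Estimate the UD on a 100*100 pixel grid instead of the default 40*40
hr <- kernelUD(xy, id, h="href", grid=100)
```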
Why do the algorithms differ so much between programs? And why isn't there documentation for this? Is there a logical method for choosing among software platforms with different algorithms?
The main reference about the kernel method is:
Silverman, B. W. 1986. Density estimation for statistics and data analysis. Chapman and Hall, London.
The kernel method is a very general statistical method. For this reason, the R environment contains a lot of functions allowing kernel smoothing:
 The package MASS contains the function kde2d()
 The package splancs contains a function kernel2d()
 The package fields contains one function smooth.2d()
 The packages GenKern, ks, KernSmooth and kernlab contain numerous functions allowing kernel smoothing
 And, of course, adehabitat contains one function kernelUD() for kernel home range estimation
The differences between these functions mainly come from the context in which they are used: for example, kernel2d is used to smooth a point pattern distributed in space, to explore its structures visually; kde2d is used to smooth a cloud of points (a set of observations for two variables X and Y, not necessarily spatial), also to explore its structures visually; and kernelUD is used to smooth several sets of relocations collected using radiotracking, to estimate animals' home ranges. The data passed to, and the results returned by, all these functions differ, which explains the need for several functions.
But there are also differences between programs using the kernel method for one aim, i.e. estimating home range size. These differences come from the fact that the kernel method is a very general statistical method used to smooth raw data, that is, to estimate a probability density function (pdf) from a set of bivariate observations (which is reflected by the number of packages developed around this method in the R environment). Many variations exist around this method, and depending on the choices made by the user for the numerous parameters of the method, the result may vary.
The kernel method smooths the point pattern defined by the N relocations by summing the N kernel functions placed above each relocation. The kernel functions are bivariate, radially symmetrical functions (some kind of "3D bells"), with the volume under the surface equal to 1/N. The width of such a function is controlled by the smoothing parameter. Therefore, the variation in the estimation may be due to:
 The choice of the kernel function: numerous functions are encountered, but the most frequent are the bivariate normal kernel (in adehabitat) and the Epanechnikov kernel. The choice of one particular function won't greatly affect the estimates of home range size. However, the values of the smoothing parameters won't be comparable (they do not refer to the same parameters in different kernel functions). The Epanechnikov kernel is faster, and Silverman (1986) indicated that it is slightly more efficient, though this difference is negligible. I chose the bivariate normal kernel for two reasons: first, it is used in RANGES V, which at the time of programming was the software I was using for home range estimation; and second, the algorithm of estimation was precisely described in Worton, B. J. 1995. Using Monte Carlo simulation to evaluate kernel-based home range estimators. Journal of Wildlife Management 59:794-800. Please note: one can also specify the Epanechnikov kernel (through the parameter kern="epa"). However, the use of LSCV is not programmed for this kernel; therefore, only the bivariate normal kernel is available for both href and LSCV.
 The size of the grid used for the estimation: because the pdf is continuous over the study area, one needs to estimate its value in each pixel of a regular grid (the default value is, as in RANGES V, a grid of 40*40 pixels). The size of the grid affects the estimation only slightly. HR estimation programs differ in the default value used for this size.
 The choice of the smoothing parameter: in general, there is one smoothing parameter, but some programs use two smoothing parameters (one for the x dimension and one for the y dimension, for example kernelhr, see http://fwie.fw.vt.edu/wsb/cse/wsb2695/wsb2695.html). The default algorithm for the computation of the smoothing parameter also differ among programs. For adehabitat, it is the ad hoc method, but for example Arcview Animal Movement Analysis uses LSCV. In addition, as noted above, the smoothing parameters have a meaning only with a given kernel function. As noted in a previous mail, one cannot compare the smoothing parameters estimated with a bivariate normal kernel and those estimated with an Epanechnikov kernel.
 The computation of the limits of the HR from the estimated UD: there are two ways to compute the home range limits from the estimated UD. The first way is to search for a horizontal plane, parallel to the XY plane, such that the volume comprised under the surface of the UD and *above this horizontal plane* is equal to the requested percentage; the home range limits then correspond to the intersection between this plane and the UD. The second, more classical, is to search for the horizontal plane parallel to the XY plane such that the volume comprised under the surface of the UD, above the XY plane, and inside the contour delimited by the intersection of the plane and the UD, is equal to the requested percentage. adehabitat uses the second.
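In adehabitat, the home range contour at a given percentage can be extracted from the estimated UD with getverticeshr(); a hedged sketch, using the sample objects from earlier sections (check ?getverticeshr for the exact arguments in your version):

```r
# Estimate the UD, then extract the 95% home range limits as polygons
hr <- kernelUD(xy, id, h="href")
ver <- getverticeshr(hr, lev=95)  # returns an object of class "area"
area.plot(ver)
```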
About LoCoH (kNNCH) UD Analysis
Described in Getz and Wilmers, LoCoH (Local Convex Hulls) is a method for generating home ranges that finds the UD by following a number of simple steps:
 Locates the k-1 nearest neighbors for each point in the dataset.
 Constructs a convex hull for each set of nearest neighbors and the original data point.
 Merges these hulls together from smallest to largest.
 Divides the merged hulls into isopleths where the 10% isopleth contains 10% of the original data points, the 100% isopleth contains all the points, etc.
The LoCoH method has a number of strong points:
 It generates a density distribution denoting more and less dense regions.
 As more data are added, the home range becomes more focused and accurate.
 It handles 'sharp' features such as lakes and fences well (unlike kernel methods).
 The generated home range has a finite region (unlike kernel methods).
The most important decision when generating a home range with LoCoH is choosing a suitable value for k. k is bounded by 3 (the number of points needed to make a hull) on the low side and the number of points in the dataset on the high side. A general rule of thumb is to take k as the square root of the number of points in your dataset. As k increases, the area covered by the home range increases and converges towards the minimum convex polygon. A useful feature of the R LoCoH methods is the ability to analyze multiple k's at once and then compare the different home ranges generated. LoCoH has a number of implementations. The adehabitat library has an implementation that can be called using the 'NNCH' (another name for LoCoH) method. An improved R script (that should hopefully be integrated back into adehabitat in a month or two) can be found at LoCoH R Script. A number of additional applications have been based on the R script, including a LoCoH Web Application.
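A hedged sketch of the rule of thumb above, using the xy and id objects from earlier sections and adehabitat's NNCH implementation (check ?NNCH for the exact arguments in your version):

```r
# Rule-of-thumb starting value for k: the square root of the number of fixes,
# never less than 3 (the minimum needed to make a hull)
k <- max(3, ceiling(sqrt(nrow(xy))))
hrl <- NNCH(xy, id=id, k=k)
```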
R HR randomisation scripts
Note from the authors: The following functions were developed by the authors while working on an article comparing different home range estimators (see References, Huck et al., 2008). The functions are likely to be clumsily written, since none of the authors are experienced programmers, and any suggestions for improvements would be welcomed. We nonetheless hope that the functions might prove of some use to people planning similar analyses.
Randomise the distance between central points of two home ranges
by John Davison (& Maren Huck), cp. Huck et al. 2008
Need to load ‘shapefiles’ package
distance(file1="filename1", file2="filename2",esri=T)
 Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R.
 If using an ESRI shapefile esri=T, otherwise esri=F.
Returns a vector with
 distance between centre of two ranges
 smallest distance between range centres returned by simulation
 largest distance between range centres returned by simulation
 95% confidence interval (upper) for distance between range centres (i.e., 95% CI of random overlap).
Further note: To change the number of randomisations you have to change the line
for (i in 1:500){
accordingly, and then also change the line
pval <- length(subset(int, int > dist1)) * 0.002
accordingly. For example, if you want to use 1000 randomisations the lines would read
for (i in 1:1000){
pval <- length(subset(int, int > dist1)) * 0.001
(i.e., the multiplier is the reciprocal of the number of randomisations), as well as changing
CI95 <- int[475]
to
CI95 <- int[950]
e.g. [to be used after loading the function],
a <- distance(file1="frida_sum05_ag", file2="frida_aut05_ag", esri=T)
#start of DISTANCESCRIPT
distance <- function(file1, file2, esri) {
  if (esri == T) {
    if (!require(shapefiles)) stop("package 'shapefiles' is required")
    shp11 <- read.shapefile(file1)
    locs11 <- as.data.frame(shp11$shp$shp)
    shp12 <- read.shapefile(file2)
    locs12 <- as.data.frame(shp12$shp$shp)
  }
  if (esri == F) {
    shp11 <- read.table(file1, header=T)
    locs11 <- as.data.frame(shp11)
    shp12 <- read.table(file2, header=T)
    locs12 <- as.data.frame(shp12)
  }
  coords11 <- as.data.frame(cbind(locs11$x, locs11$y))
  coords12 <- as.data.frame(cbind(locs12$x, locs12$y))
  mean11 <- colMeans(coords11)   # column means of x and y
  mean12 <- colMeans(coords12)
  b <- mean11 - mean12
  c <- as.data.frame(b)
  dist1 <- sqrt((c[1, ]^2) + (c[2, ]^2))
  Dataf <- data.frame(1, 2)
  names(Dataf)[1] <- "i"
  names(Dataf)[2] <- "distance"
  for (i in 1:500) {
    ll <- rbind(coords11, coords12)
    cds <- as.data.frame(ll)
    l <- length(locs11$x)
    l2 <- length(locs12$x)
    sub <- sample(nrow(cds), floor(nrow(cds) * l / (l + l2)))
    set1 <- cds[sub, ]
    set2 <- cds[-sub, ]   # the complementary subset of relocations
    coords <- as.data.frame(set1)
    mean1 <- colMeans(coords)
    coords2 <- as.data.frame(set2)
    mean2 <- colMeans(coords2)
    b <- mean1 - mean2
    c <- as.data.frame(b)
    dist <- sqrt((c[1, ]^2) + (c[2, ]^2))
    Dataf[i, 1] <- i
    Dataf[i, 2] <- dist
  }
  int <- sort(Dataf[, 2])
  lowest <- int[1]
  highest <- int[500]
  CI95 <- int[475]
  pval <- length(subset(int, int > dist1)) * 0.002
  print(cbind(dist1, lowest, highest, CI95, pval))
  result <- cbind(dist1, lowest, highest, CI95, pval)
}
#end of DISTANCESCRIPT
Randomise the overlap between two home ranges calculated using the kernel, LoCoH or MCP method
by John Davison (& Maren Huck), cp. Huck et al. 2008
Overlap functions: Need the 'shapefiles', 'adehabitat', 'maptools', 'gpclib' & 'spatstat' packages
General note: To change the number of randomisations you have to change the lines
for (i in 1:500){
pval <- length(subset(overprop1, overprop1 < proportion)) * 0.002
accordingly. For example, if you want to use 1000 randomisations the lines would read
for (i in 1:1000){
pval <- length(subset(overprop1, overprop1 < proportion)) * 0.001
(i.e., the multiplier is the reciprocal of the number of randomisations). Likewise, you would have to change
highest <- overprop1[500]
to
highest <- overprop1[1000]
and
CI5 <- overprop1[25]
to
CI5 <- overprop1[50]

Assessing overlap: kernels
overlapK(file1, file2, esri, hval, gridval, levelval)
 Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R.
 If using ESRI shapefiles esri=T, otherwise esri=F.
 Text files should start with a column containing the x-locations (labeled "x"), and the second column should contain the y-locations (labeled "y").
 hval is the value for the parameter h used in kernel analyses – it can be an absolute value, "href" or "LSCV" – see adehabitat package
 gridval is resolution of the grid used in kernel analyses – see adehabitat package
 levelval is the desired isopleth for use in kernel analyses – see adehabitat package
Returns a vector with
 area (ha) of range 1 [areaHR1]
 area of range 2 [areaHR2]
 area of overlap between ranges 1 and 2 [overlap]
 overlap as a proportion of smaller range [proportion]
 lowest level of overlap returned by simulation [lowest]
 highest level of overlap returned by simulation [highest]
 95% confidence interval (lower) for expected level of overlap (i.e., 95% CI of random overlap [CI5]).
e.g. [to be used after loading the function],
a <- overlapK(file1="frida_sum05_ag", file2="frida_aut05_ag", esri=T, hval=15, gridval=100, levelval=95)
b <- overlapK(file1="brightmain.txt", file2="brightall.txt", esri=F, hval="LSCV", gridval=100, levelval=95)
#start of OVERLAPSCRIPT for Kernels
overlapK <- function(file1, file2, esri, hval, gridval, levelval) {
  if (!require(shapefiles)) stop("package 'shapefiles' is required")
  if (!require(adehabitat)) stop("package 'adehabitat' is required")
  if (!require(maptools)) stop("package 'maptools' is required")
  if (!require(gpclib)) stop("package 'gpclib' is required")
  if (!require(spatstat)) stop("package 'spatstat' is required")
  "intersectpolys" <- function(polyA, polyB, type="binary") {
    if (!require(gpclib)) stop("package 'gpclib' is required")
    if (!inherits(polyA, "polylist")) stop("Non convenient data")
    if (!inherits(polyB, "polylist")) stop("Non convenient data")
    nA <- length(polyA)
    nB <- length(polyB)
    if (nA == 0 | nB == 0) stop("Non convenient data")
    res <- matrix(0, nA, nB)
    for (i in 1:nA) {
      nbpartA <- attr(polyA[[i]], "nParts")
      for (j in 1:nB) {
        nbpartB <- attr(polyB[[j]], "nParts")
        for (ii in 1:nbpartA) {
          for (jj in 1:nbpartB) {
            gpcA <- as(polyA[[i]][attr(polyA[[i]], "pstart")$from[ii]:attr(polyA[[i]], "pstart")$to[ii], ], "gpc.poly")
            gpcB <- as(polyB[[j]][attr(polyB[[j]], "pstart")$from[jj]:attr(polyB[[j]], "pstart")$to[jj], ], "gpc.poly")
            interAB <- intersect(gpcA, gpcB)
            nAB <- length(interAB@pts)
            if (nAB == 0) res[i, j] <- 0 else res[i, j] <- ifelse(type == "area", area.poly(interAB), 1)
          }
        }
      }
    }
    res <- as.data.frame(res)
    if (!is.null(attr(polyA, "region.id"))) row.names(res) <- attr(polyA, "region.id") else row.names(res) <- paste("A", 1:length(polyA), sep="")
    if (!is.null(attr(polyB, "region.id"))) names(res) <- attr(polyB, "region.id") else names(res) <- paste("B", 1:length(polyB), sep="")
    return(res)
  }
  if (esri == T) {
    shp11 <- read.shapefile(file1)
    locs11 <- as.data.frame(shp11$shp$shp)
    shp12 <- read.shapefile(file2)
    locs12 <- as.data.frame(shp12$shp$shp)
  }
  if (esri == F) {
    shp11 <- read.table(file1, header=T)
    locs11 <- as.data.frame(shp11)
    shp12 <- read.table(file2, header=T)
    locs12 <- as.data.frame(shp12)
  }
  coords11 <- as.data.frame(cbind(locs11$x, locs11$y))
  coords12 <- as.data.frame(cbind(locs12$x, locs12$y))
  k11 <- kernelUD(coords11, id=NULL, h=hval, grid=gridval)
  k12 <- kernelUD(coords12, id=NULL, h=hval, grid=gridval)
  area11 <- kernel.area(coords11, id=NULL, h=hval, grid=gridval, level=levelval)
  area12 <- kernel.area(coords12, id=NULL, h=hval, grid=gridval, level=levelval)
  ver11 <- getverticeshr(k11, lev=levelval)
  ver12 <- getverticeshr(k12, lev=levelval)
  Shapefile11 <- kver2shapefile(ver11, which=names(ver11))
  Shapefile12 <- kver2shapefile(ver12, which=names(ver12))
  ver11s <- shape2poly(Shapefile11, region.id=NULL)
  ver12s <- shape2poly(Shapefile12, region.id=NULL)
  inter1 <- intersectpolys(ver11s, ver12s, type="area")
  tt1 <- sum(inter1)
  tt <- tt1 / 10000
  smaller11 <- if (area11 <= area12) area11 else area12
  prop11 <- tt / smaller11
  Dataf1 <- data.frame(1, 2)
  names(Dataf1)[1] <- "i"
  names(Dataf1)[2] <- "overlap"
  for (i in 1:500) {
    ll <- rbind(coords11, coords12)
    cds <- as.data.frame(ll)
    l <- length(locs11$x)
    l2 <- length(locs12$x)
    sub <- sample(nrow(cds), floor(nrow(cds) * l / (l + l2)))
    set1 <- cds[sub, ]
    set2 <- cds[-sub, ]
    coords1 <- as.data.frame(set1)
    coords2 <- as.data.frame(set2)
    k1 <- kernelUD(coords1, id=NULL, h=hval, grid=gridval)
    k2 <- kernelUD(coords2, id=NULL, h=hval, grid=gridval)
    area1 <- kernel.area(coords1, id=NULL, h=hval, grid=gridval, level=levelval)
    area2 <- kernel.area(coords2, id=NULL, h=hval, grid=gridval, level=levelval)
    smaller <- if (area1 <= area2) area1 else area2
    ver1 <- getverticeshr(k1, lev=levelval)
    ver2 <- getverticeshr(k2, lev=levelval)
    Shapefile1 <- kver2shapefile(ver1, which=names(ver1))
    Shapefile2 <- kver2shapefile(ver2, which=names(ver2))
    ver1s <- shape2poly(Shapefile1, region.id=NULL)
    ver2s <- shape2poly(Shapefile2, region.id=NULL)
    inter1 <- intersectpolys(ver1s, ver2s, type="area")
    intersum1 <- sum(inter1)
    intersumHA1 <- intersum1 / 10000
    prop <- intersumHA1 / smaller
    Dataf1[i, 1] <- i
    Dataf1[i, 2] <- prop
    print(i)
  }
  overprop1 <- sort(Dataf1[, 2])
  areaHR1 <- area11[1, 1]
  areaHR2 <- area12[1, 1]
  overlap <- tt[1]
  proportion <- prop11[1, 1]
  lowest <- overprop1[1]
  highest <- overprop1[500]
  CI5 <- overprop1[25]
  pval <- length(subset(overprop1, overprop1 < proportion)) * 0.002
  print(cbind(areaHR1, areaHR2, overlap, proportion, lowest, highest, CI5, pval))
  res <- cbind(areaHR1, areaHR2, overlap, proportion, lowest, highest, CI5, pval)
}
#end of OVERLAPSCRIPT for Kernels

############## CONVEX HULL #######
overlapNN(file1, file2, esri, a1, a2, levelval)
 Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R.
 If using an ESRI shapefile esri=T, otherwise esri=F.
 Text files should start with a column containing the x-locations (labeled "x"), and the second column should contain the y-locations (labeled "y").
 Where a1 and a2 are ‘a’ parameters for adaptive local convex hull analysis of the location estimates contained in files 1 and 2 respectively.
 levelval is the desired isopleth for use in convex hull analyses – see adehabitat package
Returns a vector with
 area (ha) of range 1 [areaHR1]
 area of range 2 [areaHR2]
 area of overlap between ranges 1 and 2 [overlap]
 overlap as a proportion of smaller range [proportion]
 lowest level of overlap returned by simulation [lowest]
 highest level of overlap returned by simulation [highest]
 95% confidence interval (lower) for expected level of overlap (i.e., 95% CI of random overlap [CI5]).
e.g. [to be used after loading the function],
a <- overlapNN(file1="frida_sum05_ag", file2="frida_aut05_ag", esri=T, a1=344, a2=344, levelval=95)

#start of overlap LoCoH script
overlapNN <- function(file1, file2, esri, a1, a2, levelval) {
  if (!require(shapefiles)) stop("package 'shapefiles' is required")
  if (!require(adehabitat)) stop("package 'adehabitat' is required")
  if (!require(maptools)) stop("package 'maptools' is required")
  if (!require(gpclib)) stop("package 'gpclib' is required")
  if (!require(spatstat)) stop("package 'spatstat' is required")
  "intersectpolys" <- function(polyA, polyB, type="binary") {
    if (!require(gpclib)) stop("package 'gpclib' is required")
    if (!inherits(polyA, "polylist")) stop("Non convenient data")
    if (!inherits(polyB, "polylist")) stop("Non convenient data")
    nA <- length(polyA)
    nB <- length(polyB)
    if (nA == 0 | nB == 0) stop("Non convenient data")
    res <- matrix(0, nA, nB)
    for (i in 1:nA) {
      nbpartA <- attr(polyA[[i]], "nParts")
      for (j in 1:nB) {
        nbpartB <- attr(polyB[[j]], "nParts")
        for (ii in 1:nbpartA) {
          for (jj in 1:nbpartB) {
            gpcA <- as(polyA[[i]][attr(polyA[[i]], "pstart")$from[ii]:attr(polyA[[i]], "pstart")$to[ii], ], "gpc.poly")
            gpcB <- as(polyB[[j]][attr(polyB[[j]], "pstart")$from[jj]:attr(polyB[[j]], "pstart")$to[jj], ], "gpc.poly")
            interAB <- intersect(gpcA, gpcB)
            nAB <- length(interAB@pts)
            if (nAB == 0) res[i, j] <- 0 else res[i, j] <- ifelse(type == "area", area.poly(interAB), 1)
          }
        }
      }
    }
    res <- as.data.frame(res)
    if (!is.null(attr(polyA, "region.id"))) row.names(res) <- attr(polyA, "region.id") else row.names(res) <- paste("A", 1:length(polyA), sep="")
    if (!is.null(attr(polyB, "region.id"))) names(res) <- attr(polyB, "region.id") else names(res) <- paste("B", 1:length(polyB), sep="")
    return(res)
  }
  if (esri == T) {
    shp11 <- read.shapefile(file1)
    locs11 <- as.data.frame(shp11$shp$shp)
    shp12 <- read.shapefile(file2)
    locs12 <- as.data.frame(shp12$shp$shp)
  }
  if (esri == F) {
    shp11 <- read.table(file1, header=T)
    locs11 <- as.data.frame(shp11)
    shp12 <- read.table(file2, header=T)
    locs12 <- as.data.frame(shp12)
  }
  coords11 <- as.data.frame(cbind(locs11$x, locs11$y))
  coords12 <- as.data.frame(cbind(locs12$x, locs12$y))
  CH11 <- NNCH(coords11, a=a1, duplicates="delete")
  area11 <- NNCH.area(CH11, percent=levelval, plotit=F)
  area11 <- area11 / 10000
  CH12 <- NNCH(coords12, a=a2, duplicates="delete")
  area12 <- NNCH.area(CH12, percent=levelval, plotit=F)
  area12 <- area12 / 10000
  smaller11 <- if (area11 <= area12) area11 else area12
  NNCH11 <- NNCH.shapefile(CH11, percent=levelval)
  ver11s <- shape2poly(NNCH11, region.id=NULL)
  NNCH12 <- NNCH.shapefile(CH12, percent=levelval)
  ver12s <- shape2poly(NNCH12, region.id=NULL)
  inter1 <- intersectpolys(ver11s, ver12s, type="area")
  tt1 <- sum(inter1)
  tt <- tt1 / 10000
  prop11 <- tt / smaller11
  Dataf1 <- data.frame(1, 2)
  names(Dataf1)[1] <- "i"
  names(Dataf1)[2] <- "overlap"
  for (i in 1:500) {
    ll <- rbind(coords11, coords12)
    cds1 <- as.data.frame(ll)
    l11 <- length(locs11$x)
    l12 <- length(locs12$x)
    sub1 <- sample(nrow(cds1), floor(nrow(cds1) * l11 / (l11 + l12)))
    set1 <- cds1[sub1, ]
    set2 <- cds1[-sub1, ]
    coords1 <- as.data.frame(set1)
    coords2 <- as.data.frame(set2)
    CH1 <- NNCH(coords1, a=a1, duplicates="delete")
    CH2 <- NNCH(coords2, a=a2, duplicates="delete")
    area1 <- NNCH.area(CH1, percent=levelval, plotit=F)
    area2 <- NNCH.area(CH2, percent=levelval, plotit=F)
    smaller <- if (area1 <= area2) area1 else area2
    NNCH1 <- NNCH.shapefile(CH1, percent=levelval)
    ver1s <- shape2poly(NNCH1, region.id=NULL)
    NNCH2 <- NNCH.shapefile(CH2, percent=levelval)
    ver2s <- shape2poly(NNCH2, region.id=NULL)
    inter1 <- intersectpolys(ver1s, ver2s, type="area")
    intersum1 <- sum(inter1)
    intersumHA1 <- intersum1
    prop <- intersumHA1 / smaller
    Dataf1[i, 1] <- i
    Dataf1[i, 2] <- prop
    print(i)
  }
  overprop1 <- sort(Dataf1[, 2])
  areaHR1 <- area11[1, 1]
  areaHR2 <- area12[1, 1]
  overlap <- tt[1]
  proportion <- prop11[1, 1]
  lowest <- overprop1[1]
  highest <- overprop1[500]
  CI5 <- overprop1[25]
  pval <- length(subset(overprop1, overprop1 < proportion)) * 0.002
  print(cbind(areaHR1, areaHR2, overlap, proportion, lowest, highest, CI5, pval))
  res <- cbind(areaHR1, areaHR2, overlap, proportion, lowest, highest, CI5, pval)
}
#end of OVERLAPSCRIPT for LoCoH

############## MCP #######
overlapMCP(file1, file2, esri, levelval)
 Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R.
 If using an ESRI shapefile esri=T, otherwise esri=F.
 Text files should start with a column containing the x-locations (labeled "x"), and the second column should contain the y-locations (labeled "y").
 levelval is the desired isopleth for use in MCP/convex hull analyses – see adehabitat package
Returns a vector with
 area (ha) of range 1 [areaHR1]
 area of range 2 [areaHR2]
 area of overlap between ranges 1 and 2 [overlap]
 overlap as a proportion of smaller range [proportion]
 lowest level of overlap returned by simulation [lowest]
 highest level of overlap returned by simulation [highest]
 95% confidence interval (lower) for expected level of overlap (i.e., 95% CI of random overlap [CI5]).
e.g. [to be used after loading the function],
a <- overlapMCP(file1="frida_sum05_ag", file2="frida_aut05_ag", esri=T, levelval=100)

#start of MCP overlap script
overlapMCP <- function(file1, file2, esri, levelval) {
  if (!require(shapefiles)) stop("package 'shapefiles' is required")
  if (!require(adehabitat)) stop("package 'adehabitat' is required")
  if (!require(maptools)) stop("package 'maptools' is required")
  if (!require(gpclib)) stop("package 'gpclib' is required")
  if (!require(spatstat)) stop("package 'spatstat' is required")
  "intersectpolys" <- function(polyA, polyB, type="binary") {
    if (!require(gpclib)) stop("package 'gpclib' is required")
    if (!inherits(polyA, "polylist")) stop("Non convenient data")
    if (!inherits(polyB, "polylist")) stop("Non convenient data")
    nA <- length(polyA)
    nB <- length(polyB)
    if (nA == 0 | nB == 0) stop("Non convenient data")
    res <- matrix(0, nA, nB)
    for (i in 1:nA) {
      nbpartA <- attr(polyA[[i]], "nParts")
      for (j in 1:nB) {
        nbpartB <- attr(polyB[[j]], "nParts")
        for (ii in 1:nbpartA) {
          for (jj in 1:nbpartB) {
            gpcA <- as(polyA[[i]][attr(polyA[[i]], "pstart")$from[ii]:attr(polyA[[i]], "pstart")$to[ii], ], "gpc.poly")
            gpcB <- as(polyB[[j]][attr(polyB[[j]], "pstart")$from[jj]:attr(polyB[[j]], "pstart")$to[jj], ], "gpc.poly")
            interAB <- intersect(gpcA, gpcB)
            nAB <- length(interAB@pts)
            if (nAB == 0) res[i, j] <- 0 else res[i, j] <- ifelse(type == "area", area.poly(interAB), 1)
          }
        }
      }
    }
    res <- as.data.frame(res)
    if (!is.null(attr(polyA, "region.id"))) row.names(res) <- attr(polyA, "region.id") else row.names(res) <- paste("A", 1:length(polyA), sep="")
    if (!is.null(attr(polyB, "region.id"))) names(res) <- attr(polyB, "region.id") else names(res) <- paste("B", 1:length(polyB), sep="")
    return(res)
  }
  if (esri == T) {
    shp11 <- read.shapefile(file1)
    locs11 <- as.data.frame(shp11$shp$shp)
    shp12 <- read.shapefile(file2)
    locs12 <- as.data.frame(shp12$shp$shp)
  }
  if (esri == F) {
    shp11 <- read.table(file1, header=T)
    locs11 <- as.data.frame(shp11)
    shp12 <- read.table(file2, header=T)
    locs12 <- as.data.frame(shp12)
  }
  coords11 <- as.data.frame(cbind(locs11$x, locs11$y))
  coords12 <- as.data.frame(cbind(locs12$x, locs12$y))
  MCP11 <- NNCH(coords11, k=nrow(coords11), duplicates=0.01)
  area11 <- NNCH.area(MCP11, percent=levelval, plotit=F)
  area11 <- area11 / 10000
  MCP12 <- NNCH(coords12, k=nrow(coords12), duplicates=0.01)
  area12 <- NNCH.area(MCP12, percent=levelval, plotit=F)
  area12 <- area12 / 10000
  smaller11 <- if (area11 <= area12) area11 else area12
  mcp11 <- NNCH.shapefile(MCP11, percent=levelval)
  ver11s <- shape2poly(mcp11, region.id=NULL)
  mcp12 <- NNCH.shapefile(MCP12, percent=levelval)
  ver12s <- shape2poly(mcp12, region.id=NULL)
  inter1 <- intersectpolys(ver11s, ver12s, type="area")
  tt1 <- sum(inter1)
  tt <- tt1 / 10000
  prop11 <- tt / smaller11
  Dataf1 <- data.frame(1, 2)
  names(Dataf1)[1] <- "i"
  names(Dataf1)[2] <- "overlap"
  for (i in 1:500) {
    ll <- rbind(coords11, coords12)
    cds1 <- as.data.frame(ll)
    l11 <- length(locs11$x)
    l12 <- length(locs12$x)
    sub1 <- sample(nrow(cds1), floor(nrow(cds1) * l11 / (l11 + l12)))
    set11 <- cds1[sub1, ]
    set12 <- cds1[-sub1, ]
    coords1 <- as.data.frame(set11)
    coords2 <- as.data.frame(set12)
    MCP1 <- NNCH(coords1, k=nrow(coords1), duplicates=0.01)
    area1 <- NNCH.area(MCP1, percent=levelval, plotit=F)
    MCP2 <- NNCH(coords2, k=nrow(coords2), duplicates=0.01)
    area2 <- NNCH.area(MCP2, percent=levelval, plotit=F)
    smaller <- if (area1 <= area2) area1 else area2
    NNmcp1 <- NNCH.shapefile(MCP1, percent=levelval)
    ver1s <- shape2poly(NNmcp1, region.id=NULL)
    NNmcp2 <- NNCH.shapefile(MCP2, percent=levelval)
    ver2s <- shape2poly(NNmcp2, region.id=NULL)
    inter1 <- intersectpolys(ver1s, ver2s, type="area")
    intersum1 <- sum(inter1)
    prop <- intersum1 / smaller
    Dataf1[i, 1] <- i
    Dataf1[i, 2] <- prop
    print(i)
  }
  overprop1 <- sort(Dataf1[, 2])
  areaHR1 <- area11[1, 1]
  areaHR2 <- area12[1, 1]
  overlap <- tt[1]
  proportion <- prop11[1, 1]
  lowest <- overprop1[1]
  highest <- overprop1[500]
  CI5 <- overprop1[25]
  pval <- length(subset(overprop1, overprop1 < proportion)) * 0.002
  print(cbind(areaHR1, areaHR2, overlap, proportion, lowest, highest, CI5, pval))
  res <- cbind(areaHR1, areaHR2, overlap, proportion, lowest, highest, CI5, pval)
}
#end of OVERLAPSCRIPT for MCP
Batch homerange calculations
Here is an R script to be sourced (too long to place here, please download it), which provides some useful functions for home range calculations in batch mode, starting from a point locations table (say, an attribute table from a GIS).
Some of the functions provided are actually used in the QGIS Home Range plugin.
For questions, hints, patches, insults, whatever, contact Damiano G. Preatoni
Usage
 download HR_engine.zip and place HR_engine.R in a convenient directory.
 invoke adehabitat, e.g. with require(adehabitat).
 add the functions from HR_engine.R with source("<path to HR_engine.R")
Function overview
HR_cruncher(x, method=c("mcp","href","lscv","hadj","clusthr","NNCH"), prc=95, grid=40)
Wrapper function to calculate home ranges with almost all methods available in adehabitat.
The function is intended for a non-R-proficient user, who will need to call just this one.
Mind that x should be a sahrlocs object, which can easily be obtained by reshaping any kind of point location data into a data frame structured like this:
$ Name: Factor    animal unique identifier
$ X   : numeric   point location, easting
$ Y   : numeric   point location, northing
...               any other field you need (e.g. animal sex, age, whatever)
The function returns a list containing a method.volumes element for each method used, which holds the UDs, and a method.areas element which contains the home range surface area estimates.
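A hedged usage sketch: the element names hr$mcp.areas and hr$href.volumes below are an assumption based on the method.areas/method.volumes naming described above, and locs is a hypothetical sahrlocs-style data frame with the Name, X and Y columns just listed.

```r
# Hypothetical batch run: MCP and href-kernel home ranges for all animals
# in 'locs' (assumed sahrlocs-style data frame, see structure above).
source("HR_engine.R")   # path to the downloaded script
hr <- HR_cruncher(locs, method = c("mcp", "href"), prc = 95, grid = 40)
hr$mcp.areas            # home range surface area estimates (assumed name)
hr$href.volumes         # UDs for the kernel method (assumed name)
```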
kerneloverlap.spot(x, method=c("HR","PHR","VI","BA","UDOI","HD"), lev=95, conditional=FALSE, ...)
As one can see, the interface is nearly identical to the original adehabitat kerneloverlap function.
This one, instead, calculates overlap starting from already existing khrud or khr objects. Unlike the original kerneloverlap, this function does not recalculate the UDs: often you already have them, and they may have taken some time to calculate, so there is no need to do it again.
corearea(xy, id=NULL, levels=seq(20,95, by=5), method="mcp", unin="m", unout="m", plot=TRUE)
This is a reimplementation of the incremental core area function discussed above. See Hanski et al. (2000) for the methodology.
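A usage sketch with the signature above (xy is a hypothetical two-column data frame of one animal's relocations, not data from this manual):

```r
# Incremental core-area analysis (cf. Hanski et al. 2000): the home range
# area is computed at a ladder of isopleth levels, to look for the break
# point that separates the core area from the rest of the range.
ca <- corearea(xy, id = NULL, levels = seq(20, 95, by = 5),
               method = "mcp", unin = "m", unout = "m", plot = TRUE)
```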
exporter(x, attr, field="ID", out="allavailable", level=95, export=c("global", "separated"), shapename, dir=getwd(), owr=TRUE)
This is the khrUDtoShapefile function contained in the QGIS Home Range plugin.
References
Bullard, F. (1991). Estimating the home range of an animal: a Brownian bridge approach. M.Sc. thesis, Department of Statistics, University of North Carolina at Chapel Hill. View PDF
Burgman, M.A. & Fox, J.C. (2003). Bias in species range estimates from minimum convex polygons: implications for conservation and options for improved planning. Animal Conservation 6: 19-28.
Getz, W. and C. Wilmers. 2004. A local nearest-neighbor convex-hull construction of home ranges and utilization distributions. Ecography 27: 489-505. View PDF
Getz WM, Fortmann-Roe S, Cross PC, Lyons AJ, Ryan SJ, et al. (2007) LoCoH: Nonparametric Kernel Methods for Constructing Home Ranges and Utilization Distributions. PLoS ONE 2(2): e207. doi:10.1371/journal.pone.0000207 View PDF
Hanski IK, Stevens PC, Ihalempia P, Selonen V (2000) Home-range size, movements, and nest-site use in the Siberian flying squirrel, Pteromys volans. Journal of Mammalogy 81: 789-809.
Huck, M., Davison, J., Roper, T.J. (2008) Comparison of two sampling protocols and four home range estimators using radio-tracking data from urban badgers. Wildlife Biology 14: 467-477.
Wauters LA, Preatoni DG, Molinari A, Tosi G (2007) Radiotracking squirrels: Performance of home range density and linkage estimators with small range and sample size. Ecological Modelling 202(3-4): 333-344.