Changes between Version 3 and Version 4 of AnimoveHowto


Timestamp:
Apr 23, 2012, 3:17:00 PM
Author:
trac

  • AnimoveHowto

    v3 v4  
    1 = !AniMove !HowTo: how to analyze location data =
    2 
    3 Please copy here the material from https://wiki.faunalia.it/dokuwiki/doku.php/public/animove_howto, improve and update it.
     1=  !AniMove !HowTo: how to analyze location data =
     2'''A set of manuals to explain how to analyze animal movements using free and open source software.'''
     3
     4==  Home Range plugin for QGIS  ==
     5
     6A tool to calculate home ranges is available as a [http://qgis.org QGIS] plugin. It provides an intuitive graphical interface that collects all parameters of the analysis and calls R to perform it. Available analyses are: MCP, kernel href, kernel hLSCV, kernel with adjusted h (Wauters et al., 2007). Batch analysis is also available.
     7
     8You can download it via the Plugin Installer. The plugin is hosted on the user-contributed plugin repository.
     9
     10Refer also to the [http://www.qgis.org/wiki/!HomeRange_plugin wiki page] of the plugin. This page contains up-to-date information on how to install it on various platforms. Feel free to contribute by reporting errors and updates, preferably on Trac, or alternatively, on the Animove mailing list. Thanks!
     11
     12Sample data [http://www.faunalia.it/animov/scripts/sample.csv here] (SRID code: 3003).
     13
     14The development of additional plugins is currently underway: [wiki:Public/animoveHowtoAnimoveQgisPlugins]
     15
     16==  GRASS and QGIS integration  ==
     17
     18[http://grass.osgeo.org/ GRASS] scripting is easy and powerful, and can generate GUIs on the fly through the simple Tcl/Tk language. A script for GRASS: [http://www.faunalia.it/animov/scripts/v.animove v.animove].
     19
     20First of all you have to install the libraries "adehabitat" and "shapefiles" in R (just follow the instructions according to your operating system).
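
If R is already installed, a minimal sketch of that step from within R (any CRAN mirror will do; dependencies=TRUE pulls in what the two packages need):
<code>
# install the two libraries used by v.animove
install.packages(c("adehabitat", "shapefiles"), dependencies=TRUE)
</code>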
     21
     22To install animove, just copy the file '''v.animove''' into your '''/usr/lib/grass/scripts''' directory (in Windows, something like C:\Programmi\Quantum_GIS\grass\scripts), then you can run it from GRASS by typing '''v.animove''' in the GRASS shell.
     23
     24This approach is being improved by integrating the new GRASS script into [http://qgis.org/ QGIS], so as to gain extra flexibility and usability.
     25To use this script in QGIS you must:
     26  1. download [http://www.faunalia.it/animov/scripts/v.animove.qgm this file], [http://www.faunalia.it/animov/scripts/v.animove.1.png this file], and [http://www.faunalia.it/animov/scripts/v.animove.2.png this file] and copy them into your '''/usr/share/qgis/grass/modules''' directory (in Windows, something like C:\Programmi\Quantum_GIS\grass\modules)
     27  1. download [http://www.faunalia.it/animov/scripts/v.animove.html the help file] and copy it in '''/usr/share/doc/grass-doc/html/''' (or the Windows equivalent)
     28  1. modify the file '''/usr/share/qgis/grass/config/default.qgc''' (in Windows, something like C:\Programmi\Quantum_GIS\grass\config\default.qgc), adding the row '''<grass name="v.animove"/>''' inside a section (preferably inside '''<section label="Delaunay triangulation, Voronoi diagram and convex hull">''').
     29
     30In bash terms:
     31<code>
     32cd /usr/lib/grass/scripts
     33wget http://www.faunalia.it/animov/scripts/v.animove
     34chmod a+x v.animove
     35cd /usr/share/doc/grass-doc/html/
     36wget http://www.faunalia.it/animov/scripts/v.animove.html
     37cd /usr/share/qgis/grass/modules/
     38wget http://www.faunalia.it/animov/scripts/v.animove.qgm
     39wget http://www.faunalia.it/animov/scripts/v.animove.1.png
     40wget http://www.faunalia.it/animov/scripts/v.animove.2.png
     41nano /usr/share/qgis/grass/config/default.qgc
     42[add the line <grass name="v.animove"/>; NB: be careful to type straight quotes ("), not curly ones]
     43</code>
     44
     45Now you can find and run it from the GRASS toolbox in QGIS. You can find a sample file of animal locations [http://www.faunalia.it/animov/scripts/sample.csv here] (SRID code: 3003).
     46
     47A first result:
     48
     49[[Image(public:primokernel_small.png)]]
     50
     51Help in testing and improving it will be appreciated.
     52
     53===  v.energy  ===
     54A [http://www.faunalia.it/animov/scripts/v.energy-2.0 little script] to calculate the average energy available for each home range, by Anne Ghisla et al.
     55
     56==  R and Adehabitat  ==
     57
     58===  Installation and startup  ===
     59Follow the instructions appropriate for your operating system. R is packaged for all major GNU/Linux distributions, and for Windows. You can download packages from the [http://www.r-project.org/ R web site]. For Debian you can simply type (as root):[[BR]]
     60 
     61apt-get install r-base
     62 
     63
     64To install all necessary libraries, you will need to download [http://cran.r-project.org/src/contrib/Descriptions/adehabitat.html adehabitat] with these commands:[[BR]]
     65
     66Open a shell and write[[BR]]
     67 
     68su
     69 
     70
     71start R as superuser, then:[[BR]]
     72
     73 
     74install.packages("adehabitat", dependencies=TRUE, repos="your_favorite_mirror")
     75 
     76
     77Some mirrors:[[BR]]
     78http://microarrays.unife.it/CRAN/ (temporarily unavailable) [[BR]]
     79http://cran.arsmachinandi.it/ [[BR]]
     80http://rm.mirror.garr.it/mirrors/CRAN/ [[BR]]
     81http://dssm.unipa.it/CRAN/
     82
     83Then exit the root shell.
     84
     85'''Please note:''' due to frequent updates and improvements, package names change often; in case the links above do not work, please search for the packages on the [http://cran.r-project.org/src/contrib/ R web site (contributed packages)].
     86
     87Otherwise, check adehabitat's dependencies, download them from the R website and install them manually:
     88
     89 
     90install.packages(pkgs="path to the package you downloaded, including the file name", repos=NULL)
     91 
     92
     93
     94'''Please note:''' some interfaces of R provide easier ways for installing additional packages. See their respective instructions.
     95
     96Then start R (specific instructions depend on your operating system) and load all the relevant libraries:
     97 
     98library(adehabitat)
     99 
     100
     101===  Analyses  ===
     102Now let's play! You can load the sample data from the adehabitat package (dataset ''puechabon''), or use your own.
     103 
     104data(puechabon)
     105#loads the data [note: comments to commands start with "#"]
     106xy<-puechabon$locs[,c("X","Y")]
     107#takes the columns named X and Y from the section "locs" of the puechabon dataset, and stores them in a data frame named "xy"
     108id<-puechabon$locs[,c("Name")]
     109#takes the column named Name from the section "locs" of the puechabon dataset, and stores it in a vector named "id"
     110#then typing:
     111> xy
     112#we can see the values of the x and y coordinates
     113X Y
     1141 699889 3161559
     1152 700046 3161541
     1163 698840 3161033
     1174 699809 3161496
     118.   . . .
     119#typing
     120> id
     121#as you can see, data refer to four animals (Brock Calou Chou Jean)
     122[1] Brock Brock Brock Brock Brock Brock Brock Brock Brock Brock Brock Brock
     123. . .
     124[37] Calou Calou Calou Calou Calou Calou Calou Calou Calou Calou Calou Calou
     125. . .
     126[61] Chou  Chou  Chou  Chou  Chou  Chou  Chou  Chou  Chou  Chou  Chou  Chou
     127. . .
     128[109] Jean  Jean  Jean  Jean  Jean  Jean  Jean  Jean  Jean  Jean  Jean
     129Levels: Brock Calou Chou Jean
     130 
     131
     132To use your data, you can start with a text file (R can also link to a database, use other file formats, etc., in case you need this).[[BR]]
     133For example, to import the file [http://www.faunalia.it/animov/sample/fagiani.txt fagiani] into R you can write:
     134 
     135data<-read.table(file="the path to your file, including the file name", header=T, sep=" ")
     136 
     137
     138 
     139#data is the name that you give to your data
     140#header=T indicates that the file's first line contains the variable names
     141#sep=" " means that the columns are separated by a space; if your columns are separated by tabs, write sep="\t" instead
     142 
     143
     144You should have a matrix with at least 2 columns, one for the x coordinates and one for the y; additional columns allow you to run separate analyses (e.g. of different individuals, seasons, activities, etc.).[[BR]]
     145To remove missing data, just wrap the data in ''na.omit()'' when loading them, e.g. xy<-na.omit(yourdata) instead of xy<-yourdata. Better to do this as a first step, to avoid ending up with misaligned vectors or matrices (e.g. if you have a missing xy but a known id).
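
For instance, a minimal sketch of that first cleaning step, assuming a table ''yourdata'' with columns named x, y and id (the names are only examples):
<code>
# drop rows with missing values before splitting into xy and id, so the two objects stay aligned
yourdata <- na.omit(yourdata)
xy <- yourdata[, c("x", "y")]
id <- yourdata[, "id"]
</code>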
     146
     147====  Summary Statistics  ====
     148Once you have your data in an R object (we have called it xy),
     149you can start getting some results.
     150
     151=====  Sample Size:  =====
     152Total number of records (observations) in the dataset.
     153 
     154#Sample Size
     155nrow(xy)
     156#per animal
     157table(id)
     158 
     159 
     160=====  Minimum X e Y:  =====
     161Minimum value of the x coordinate (easting) and the y coordinate (northing).[[BR]]
     162R commands are:
     163 
     164#Minimum X and Y
     165apply(xy, 2, min)
     166#Minimum X and Y per animal
     167apply(xy, 2, function(x) tapply(x, id, min))
     168 
     169 
     170=====  Mean of X and Y:  =====
     171Mean value of the x coordinate (easting) and the y coordinate (northing).
     172 
     173#Mean of X and Y
     174apply(xy, 2, mean)
     175#Mean of X and Y per animal
     176apply(xy, 2, function(x) tapply(x, id, mean))
     177 
     178
     179=====  XY Variance:  =====
     180The XY variance (not the covariance).
     181 
     182#Variance of X and Y
     183apply(xy, 2, var)
     184#Variance of X and Y per animal
     185apply(xy, 2, function(x) tapply(x, id, var))
     186 
     187 
     188=====  Distance:  =====
     189Distance (m) between relocations.
     190 
     191di<-as.matrix(dist(xy))
     192diag(di)<-NA
     193#Distance between relocations, per animal
     194xyperan<-split(xy, id)
     195dibis<-lapply(xyperan, function(x) as.matrix(dist(x)))
     196dibis<-lapply(dibis, function(x) {diag(x)<-NA; x})
     197 
     198 
     199=====  Minimum and maximum distance:  =====
     200Minimum and maximum distance (m) between relocations.
     201 
     202min(c(di), na.rm=TRUE)
     203max(c(di), na.rm=TRUE)
     204 
     205
     206=====  Minimum distance:  =====
     207Minimum distance (m) between relocations.
     208 
     209#min distance
     210lapply(dibis, function(x) min(c(x), na.rm=TRUE))
     211 
     212
     213=====  Maximum distance:  =====
     214Maximum distance (m) between relocations.
     215 
     216#max distance
     217lapply(dibis, function(x) max(c(x), na.rm=TRUE))
     218 
     219
     220=====  Distance between successive relocations:  =====
     221To run the following analyses your data must contain a date column, as in the dataset puechabon.[[BR]]
     222First you must transform the dates into an object of class POSIXct, and then build an object of class traj.[[BR]]
     223The commands are:[[BR]]
     224 
     225da <- as.POSIXct(strptime(as.character(puechabon$locs$Date),
     226                 "%y%m%d"))
     227tr <- as.traj(id = id, xy = xy, date = da)
     228tr
     229 
     230 
     231=====  Minimum date:  =====
     232Earliest observation date, per animal.
     233 
     234kk<-tapply(tr$date, tr$id, min)
     235class(kk)<-"POSIXct"
     236kk
     237 
     238
     239=====  Maximum date:  =====
     240Latest observation date, per animal.
     241 
     242kk<-tapply(tr$date, tr$id, max)
     243class(kk)<-"POSIXct"
     244kk
     245 
     246
     247=====  Minimum speed (units/day):  =====
     248Minimum number of meters traveled per day.
     249 
     250#Compute speed
     251sp<-speed(tr)
     252#Minimum speed
     253tapply(sp$speed, sp$id, min)
     254 
     255
     256=====  Maximum speed (units/day):  =====
     257Maximum number of meters traveled per day.
     258 
     259#Compute speed
     260sp<-speed(tr)
     261#Maximum speed
     262tapply(sp$speed, sp$id, max)
     263 
     264 
     265=====  Mean daily speed:  =====
     266Mean number of meters traveled per day, i.e. distance / number of days in the dataset.
     267 
     268#Compute speed
     269sp<-speed(tr)
     270#Mean speed
     271tapply(sp$speed, sp$id, mean)
     272 
     273 
     274=====  Distance (m) between successive observations:  =====
     275 
     276disuc<-sp$speed*sp$dt
     277 
     278
     279=====  Total distance (m) per animal:  =====
     280 
     281ditot<-tapply(disuc, sp$id, sum)
     282 
     283
     284=====  Distance (m) between the first and last relocation:  =====
     285 
     286dd<-function(x) {
     287        xfirst<-x[1,]
     288        xend<-x[nrow(x),]
     289        di<-sqrt((xfirst[1]-xend[1])^2 +
     290            (xfirst[2]-xend[2])^2)
     291        return(di)
     292}
     293dilin<-unlist(lapply(xyperan, dd))
     294 
     295
     296=====  Mean distance (m) per animal:  =====
     297 
     298tapply(disuc, sp$id, mean)
     299 
     300
     301=====  Linearity:  =====
     302The ratio between the distance separating the travel path endpoints and the total distance traveled.
     303 
     304dilin/ditot
     305 
     306
     307=====  Schoener's ratio:  =====
     308Schoener's ratio for examining autocorrelation.[[BR]]
     309It is the ratio t2/r2, where t2 is the mean squared distance between successive observations and r2 (the mean squared distance from the centre of activity) is a measure of the dispersion of the data.[[BR]]
     310 
     311t2 <- lapply(split(disuc, sp$id), function(x) sum(x^2)/length(x))
     312r2 <- lapply(xyperan, function(x) var(x[,1])+var(x[,2]))
     313ratio <- unlist(t2)/unlist(r2)
     314 
     315
     316====  Home range analyses  ====
     317
     318=====  Minimum Convex Polygon Home Range  =====
     319Also called '''convex hull''': calculates a minimum convex polygon home range based on either the selected records or, if none are selected, the entire data set. [[BR]]
     320A polygon shapefile theme is created. The location point statistics menu choice will output an MCP as a graphic object, if this is desired rather than a shapefile.[[BR]]
     321If only the area is desired, then location point statistics with nothing selected will output the MCP area.[[BR]]
     322This calculates the area enclosed by all points in the dataset.
     323 
     324hr<-mcp(xy, id)
     325area.plot(hr)
     326#The following graphic shows the MCP area per animal.
     327 
     328[[Image(public:mcp_area_small.png)]]
     329
     330 
     331jj<-mcp.area(xy, id)
     332jj
     333Brock  Calou  Chou  Jean
     33420  2.01000  8.06000 15.74195  1.13685
     33525  2.05565  9.96440 16.11330  1.48680
     33630  2.44080 13.27520 16.67075  1.69070
     33735  6.46570 13.65330 18.04590  2.33475
     33840  7.01845 14.88035 22.65340  4.09565
     33945  9.23435 15.86955 26.36545  5.55715
     34050  9.68735 16.31155 26.76340  5.72115
     34155  9.85540 16.31155 28.51200  5.99155
     34260 12.84635 17.49680 29.97835  6.64385
     34365 13.12805 19.74600 31.75860  9.05275
     34470 15.66545 20.05745 32.25850 10.58390
     34575 16.14625 21.53025 34.77430 16.40945
     34680 19.06725 22.15015 47.81925 24.29185
     34785 20.22535 26.75180 54.76550 35.31225
     34890 21.82440 37.12975 67.18565 52.59315
     34995 22.64670 41.68815 71.93205 55.88415
     350#In this table the first column shows the % of home range and the others indicate the home range size (ha) for each animal.
     351#On the basis of this one you can obtain the following plot by writing:
     352plot(jj)
     353 
     354[[Image(public:mcp_plot_small.png)]]
     355
     356=====  Kernel home range  =====
     357Kernel Home Range calculates a fixed kernel home range utilization distribution (Worton 1989) as a grid coverage, using either an ad hoc calculation of the smoothing parameter, least squares cross validation (Silverman 1986), or a user-supplied value for the smoothing
     358parameter (H). The bivariate normal density kernel is used, as suggested by Worton (1989). The least squares cross validation (LSCV)
     359in this version takes significant processing time despite using the golden-section minimizing routine. Most users will find that the ad hoc
     360calculation is very close to the LSCV for exploratory analysis (for most datasets Hopt is usually between 0.7 and 0.9 of
     361Href [ad hoc] in this implementation). The ad hoc calculation is based on Silverman (1986) rather than Worton (1989) or
     362Seaman & Powell (1996). The problem of discretization errors (zero-length distances caused by rounding location positions,
     363giving a minimized h of zero) is handled slightly differently from Tufto (1996): distance measures are placed uniformly at random
     364between 0 and (href/100) when, and only when, the distance measurements are 0. This adjusts the locations only when necessary and
     365allows for different projection and distance systems. The kernel is based on the non-corrected data. The program asks the
     366user whether they would like to adjust the LSCV or the ad hoc H. Worton (1994) suggests adjusting H by 0.8 for normal distributions
     367and 0.5 for uniform ones. Work by Seaman & Powell (1996) suggests that this is not necessary with the LSCV. It is our experience
     368that the original ad hoc and LSCV smoothing parameters provide a less biased estimator than a user-selected value or Worton's corrections.[[BR]]
     369This routine outputs: a grid coverage with the utilization distribution (probabilities), a polygon
     370shapefile containing individual polygons for each selected probability, an associated attribute table containing probability
     371and area fields for each set of probability polygons, and a message box displaying the area calculations of each probability.
     372Note that the default probabilities are 50 and 95, and that the view must be zoomed out sufficiently to encompass the larger probability areas (95%, etc.) to create the polygon shapefile.
     373
     374 
     375hr<-kernelUD(xy, id, h="LSCV")
     376hr
     377image(hr)
     378#The graphic shows the kernel home range utilization distribution.
     379 
     380
     381[[Image(public:kernel_image_hr_.png)]]
     382
     383
     384 
     385jj<-kernel.area(xy, id)
     386jj
     387Brock  Calou  Chou  Jean
     38820  15.50560  12.33984  18.82767  14.77481
     38925  20.35111  15.97852  24.34613  19.20725
     39030  25.77807  19.85449  30.18920  24.37843
     39135  31.59267  24.04688  36.35689  30.28835
     39240  37.79491  28.55566  43.17380  36.19827
     39345  44.77243  33.53906  49.99072  43.58567
     39450  52.13760  39.07617  57.78148  50.97308
     39555  60.27804  45.40430  66.54609  60.57670
     39660  69.19376  52.68164  76.28454  70.91906
     39765  78.88477  60.82910  86.99684  83.47765
     39870  89.93251  69.76758  99.65682  97.51371
     39975 102.91845  79.73438 114.26450 114.50474
     40080 118.42406  91.20410 132.11833 135.18947
     40185 138.19370 105.20508 156.13985 161.78411
     40290 165.32851 123.79395 194.44442 203.89231
     40395 209.71331 153.14062 261.63974 281.46004
     404#In this table the first column shows the % of home range and the others indicate the home range size (ha) for each animal.
     405#On the basis of this one you can obtain the following plot by writing:
     406plot(jj)
     407 
     408[[Image(public:kernel_plot_jj_.png)]]
     409
     410=====  Incremental core area analysis  =====
     411The procedure ([http://biocenosi.dipbsf.uninsubria.it/outgoing/scripts/R/Incremental_Kernel_Analysis.R download]) is similar to the one outlined in Hanski ''et al.'' (2000).
     412Home range size is evaluated at different levels (percentages of locations if using clusterhr or mcp, or different isopleths if using kernel), and a one-way ANOVA followed by Tukey's "Honestly Significant Difference" pairwise comparison test is performed between successive levels.
     413A summary of the Tukey HSD test is printed, and a plot like the one presented in Hanski ''et al.'' (2000) is produced.
     414The function returns a list-class object containing the analysis of variance, the Tukey HSD test and the plot data.
     415
     416 
     417data(puechabon)
     418loc <- puechabon$locs[, c("X", "Y")]
     419id <- puechabon$locs[, "Name"]
     420cores <- corearea(loc, id, method = "mcp")
     421 
     422
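As a rough illustration of the idea (not the corearea() script itself), the level-by-level comparison can be sketched on plain mcp.area() output; the object names below are only examples, and xy and id are the objects built earlier:
<code>
# home range size at several percentage levels, one column per animal
jj <- mcp.area(xy, id, percent=seq(50, 100, by=5))
# reshape to long format: one row per (animal, level) combination
long <- data.frame(area=unlist(jj),
                   level=factor(rep(rownames(jj), ncol(jj))),
                   animal=rep(colnames(jj), each=nrow(jj)))
# one-way ANOVA of home range size against level, followed by Tukey's HSD test
fit <- aov(area ~ level, data=long)
TukeyHSD(fit)
</code>
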
     423'''References:'''
     424Hanski, I. K., Stevens, P. C., Ihalempia, P. & Selonen, V. Home-range size, movements, and nest-site use in the Siberian flying squirrel, ''Pteromys volans''. Journal of Mammalogy, 2000, 81, 798-809.
     425
     426=====  Standard deviation  =====
     427The standard deviation is computed for each axis (major and minor axis length).
     428 
     429## First perform a PCA
     430pc<-lapply(xyperan, dudi.pca,
     431           scale=FALSE, scannf=FALSE)
     432 
     433 
     434 
     435#Then, the standard deviation is computed for primary and secondary axes length
     436sde<-lapply(pc, function(x) apply(x$li, 2, sd))
     437sde<-do.call("rbind", sde)
     438 
     439 
     440=====  Eccentricity  =====
     441The ratio between the minor and the major axis.
     442 
     443apply(sde, 1, function(x) x[2]/x[1])
     444 
     445
     446=====  Ellipse  =====
     447This implements the Jennrich-Turner (1969) bivariate normal home range.
     448This method of home range calculation is seriously flawed in its
     449dependence on a bivariate normal distribution of locations, but is
     450valuable due to its lack of sensitivity to sample size, the simplicity of the
     451underlying probability theory, the ability to get confidence limits on the
     452estimates, and to derive the principal axes of the location
     453coordinates. Other than in the rare circumstances where the data are
     454bivariate normal, the main utility of this program lies in developing
     455statistically simple home range-habitat models and in comparisons with
     456home range estimates obtained using this method in other studies.[[BR]]
     457The package ellipse contains various routines for drawing
     458ellipses and ellipse-like confidence regions.
     459
     460 
     461library(ellipse)
     462foo<-function(x) {
     463   u<-ellipse(cov(x), centre=apply(x,2,mean))
     464   plot(u, ty="n", axes=FALSE)
     465   points(x, pch=16)
     466   lines(u)
     467   box()
     468}
     469par(mfrow=c(2,2), mar=c(0,0,0,0))
     470lapply(xyperan, foo)
     471#This graphic shows the 95% confidence ellipse:
     472 
     473
     474[[Image(public:ellipse.png)]]
     475
     476===  Commands to obtain a file  ===
     477To save an object created in R to a file, you must write:
     478 
     479png("the directory's address in which you want to save the file and the filename", width = 800, height = 800)
     480#Obviously you can change width and height as you want.
     481#Then write the R commands which you used to create the object, and afterwards call
     482graphics.off()
     483 
     484For example, if you want to save the graphic that shows the MCP area per animal to a file named MCP, you must write:
     485 
     486png("the directory's address in which you want to save the graphic/MCP", width = 800, height = 800)
     487area.plot(hr)
     488graphics.off()
     489 
     490
     491===  Commands to convert into shapefile  ===
     492
     493To convert objects of class "kver" (kernels) or "area" (MCPs) into shapefiles, there is a newer procedure. [[BR]]
     494Requirements: the adehabitat, sp and rgdal packages, and a data frame whose row names are the same as the list of animals in the kver/area object.
     495
     496 
     497 
     498spol <- kver2spol(kver)
     499 
     500
     501or
     502
     503 
     504spol <- area2spol(area)
     505 
     506 
     507then
     508
     509 
     510spdf <- SpatialPolygonsDataFrame(spol, dataframe, match.ID = TRUE)
     511writeOGR(spdf,dir,name, "ESRI Shapefile")
     512 
     513
     514Note: the !SpatialPolygonsDataFrame function is very picky about its input data. See its documentation for common issues.
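
A minimal end-to-end sketch, using the xy and id objects from the R section above (the layer name is only an example; if match.ID complains, compare the row names of the data frame with the polygon IDs as done below):
<code>
library(adehabitat)
library(sp)
library(rgdal)
hr   <- kernelUD(xy, id, h="href")
ver  <- getverticeshr(hr, lev=95)      # object of class "kver"
spol <- kver2spol(ver)
# read the polygon IDs and build a data frame whose row names match them
ids  <- sapply(slot(spol, "polygons"), function(p) slot(p, "ID"))
df   <- data.frame(animal=ids, row.names=ids)
spdf <- SpatialPolygonsDataFrame(spol, df, match.ID=TRUE)
writeOGR(spdf, dsn=".", layer="kernel95", driver="ESRI Shapefile")
</code>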
     515
     516
     517==  GRASS/R Interface  ==
     518For the integration of R in GRASS, you need to run R from the GRASS shell environment. The interface loads compiled GIS library functions into the R executable environment. Then GRASS metadata are transferred to R.[[BR]]
     519For example you can take the result of a kernel analysis, transform it into a shapefile, and then import it into GRASS.
     520
     521==  Analyses with GRASS  ==
     522To import a shapefile you can use the command:
     523 
     524v.in.ogr
     525 
     526http://www.grass.itc.it/grass60/manuals/html60_user/v.in.ogr.html
     527
     528To make a minimum convex polygon in GRASS you can use the command:
     529 
     530v.hull
     531 
     532http://www.grass.itc.it/grass60/manuals/html60_user/v.hull.html
     533
     534To make a kernel analysis in GRASS you can use the command:
     535 
     536v.kernel
     537 
     538http://www.grass.itc.it/grass60/manuals/html60_user/v.kernel.html
     539
     540
     541==  How to store geometry in GRASS but attributes in PostgreSQL  ==
     542By Markus Neteler and Ferdinando Urbano[[BR]]
     543A simple system to update geographic data (slope, land cover, ...) in a
     544table of fixes (animal locations) stored in PostgreSQL. It uses GRASS
     545tools (it is possible to call them through the QGIS interface, both in Linux and Windows).
     546As v.external can only read data from PostgreSQL, you must use v.in.db, v.db.connect and then v.what.rast to get the result. Here are the commands:
     547
     548Check current settings for attribute storage:
     549 
     550db.connect -p
     551 
     552
     553Import the table from PostgreSQL into a new map (NOTE: the output map name needs to be different from the table name if GRASS is connected to PostgreSQL):
     554 
     555v.in.db driver=pg database="host=localhost,dbname=meteo" table=mytable x=lon y=lat key=cat out=mytable
     556v.db.connect map=mytable -p
     557 
     558
     559Cancel table connection between map and attribute table:
     560 
     561v.db.connect map=mytable -d
     562v.db.connect map=mytable -p
     563 
     564
     565Drop table which was replicated due to import:
     566 
     567db.tables -p
     568echo "DROP TABLE mytable" | db.execute
     569db.tables -p
     570 
     571
     572Reconnect the map to the table in PostgreSQL:
     573 
     574v.db.connect map=mytable driver=pg database="host=localhost,dbname=meteo" table=mytable key=cat
     575 
     576
     577Now the geometry is stored in GRASS while the attributes are stored in PostgreSQL.
     578
     579==  Alternative approaches  ==
     580
     581===  Web analyses  ===
     582
     583This approach has the enormous advantage of not requiring any software installation, and the disadvantage of not allowing much flexibility for the user. A first approach (fully functional, but for now limited to one kind of analysis) is: http://locoh.palary.org [now apparently not functional]
     584Thanks to Scott Fortmann-Roe for this.
     585
     586===  R GUI  ===
     587A good candidate is [http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/ R Commander]. My guesstimate is that it could take just a few hours of work to have something simple running. With this approach, GIS integration will be limited to import/export of shapefiles.
     588
     589====  Menu structure for a possible Graphical User Interface  ====
     590
     591 
     592File
     593    Open script file
     594    Save script
     595    Save script as
     596    Save output
     597    Save output as
     598    Save R workspace
     599    Save R workspace as
     600    Exit
     601       from Commander
     602       from Commander and R
     603Edit
     604    Clear Window
     605    Cut
     606    Copy
     607    Paste
     608    Delete
     609    Find
     610    Select all
     611Data
     612    New data set
     613    Import data
     614       from text file
     615       from database
     616       from geodatabase
     617       from shapefiles (points)
     618       from GRASS (points)
     619       from SPSS data set
     620       from Minitab data set
     621       from STATA data set
     622    Data in packages
     623       List data sets in packages   
     624       Read data set from attached package
     625       Active data set
     626          Select active data set
     627          Help on active data set (if available)
     628          Variables in active data set
     629          Set case names
     630          Subset active data set
     631          Remove cases with missing data
     632           Export active data set
     633     Manage variables in active data set
     634       Recode variable
     635       Compute new variable
     636       Standardize variables
     637       Convert numeric variable to factor
     638       Bin numeric variable
     639       Reorder factor levels
     640       Define contrasts for a factor
     641       Rename variables
     642Statistics
     643    [statistics from adehabitat]
     644Home Ranges
     645       MCP
     646       Kernel
     647       ...
     648  Save as
     649    Shapefiles
     650       MCP
     651       kernel (choose %)
     652       ...
     653    Raster (Utilization Distribution)
     654       GRASS
     655       ...
     656    Save graph to file
     657       as bitmap
     658       as PDF/Postscript/EPS 
     659 Help
     660    Commander help
     661    About Rcmdr
     662    Introduction to the R Commander
     663    Help on active data set (if available)
     664 
     665
     666==  About kernel estimates  ==
     667The choice of the smoothing parameter `h` is the crucial point for kernel analyses. Here are some very interesting notes about this.
     668
     669===  Comments from Clément Calenge  ===
     670Actually, "h=href" corresponds to a smoothing parameter estimated under the hypothesis that the underlying utilisation distribution is normally distributed (UD unimodal, i.e. one center of activity, and symmetrical). This is not the case in your example. When the UD is not bivariate normal, the use of href greatly overestimates the UD (see Silverman 1986, Worton 1995 J. Wild. Manage.).
     671I never compared the estimates of adehabitat with those of other software packages when smoothing values are estimated with the ad hoc method. But before developing adehabitat, I was using the software RANGES V, which also returned greatly overestimated home ranges for "multi-center" home ranges. So, in my opinion, the reference method would return similar results whatever the software (though not exactly identical, because the different packages do not use exactly the same algorithms: not the same grid size, not the same way to compute the limits of the HR from the UD). I agree that it would be interesting to perform such a comparison...
     672One alternative, when several centers of activity are present, is to use the LSCV method, but, again, the results would differ among software packages (and more dramatically). For example, I used the dataset "puechabon" of the package adehabitat (all animals pooled), and I estimated the UD with different home range estimation programs. The smoothing values (in metres) returned by these programs are:
     673  * 83 m       adehabitat
     674  * 310 m      Arcview - animal movement analysis (AAMA)
     675  * 44 m       ranges V
     676  * 802 m      Calhome
     677
     678And two other programs use one smoothing parameter for x and one for y:
     679  * 59 m  for X and 161 m for Y    The home ranger
     680  * 131 m for X and 364 m for Y    kernelHR
     681
     682As you can see, there is much variation in the estimation. The main cause of variation is that the different software packages do not use the same algorithms to smooth the UD. In addition, the algorithm often fails to minimise the smoothing error, so that bad results are returned by the function (this is a property of the method, see the help page).
     683Finally, the last method (which I personally prefer) is to specify a value for the smoothing parameter (the same for all animals), based on some visual exploration of the data.
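
A quick sketch of that visual exploration, using the xy and id objects built earlier (the candidate h values are arbitrary and should be adapted to the units and spread of your own data):
<code>
# inspect the UD obtained with a few fixed smoothing values
for (h in c(50, 100, 200)) {
    image(kernelUD(xy, id, h=h))
}
</code>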
     684
     685===  A few questions  ===
     686====  What unit is the grid cell size in?  ====
     687
     688The units are defined by the dataset passed to the function: for example, if you pass a data frame xy with coordinates given in meters, h will be computed in meters. Concerning the size of the grid, it is given in number of pixels. If "grid=40", then a grid of 40*40 pixels is used for the estimation. You may also prefer to pass a grid that you have defined (e.g. using the function ascgen() ) to this parameter.
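
A short sketch of the two options, again with the xy and id objects built earlier (the 50 m cell size is only an example; adapt it to your coordinate units):
<code>
# default: estimation on a 40*40 pixel grid
hr1 <- kernelUD(xy, id, h="href", grid=40)
# alternative: pass a grid built with ascgen(); cellsize is in the units of the coordinates
gr  <- ascgen(xy, cellsize=50)
hr2 <- kernelUD(xy, id, h="href", grid=gr)
</code>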
     689
     690====  Why do the algorithms differ so much between software packages? And why isn't there documentation for this? Is there a logical method for choosing between software platforms with different algorithms?  ====
     691
     692
     693The main reference about the kernel method is:
     694
     695''Silverman, B. W.'' 1986. Density estimation for statistics and data analysis. Chapman and Hall, London.
     696
     697The kernel method is a very general statistical method. For this reason, the R environment contains a lot of functions allowing kernel smoothing:
     698
     699  * The package MASS contains the function kde2d()
     700  * The package splancs contains a function kernel2d()
     701  * The package fields contains one function smooth.2d()
     702  * The packages GenKern, ks, KernSmooth and kernlab contain numerous functions for kernel smoothing
     703  * And, of course, adehabitat contains one function kernelUD() for kernel home range estimation
     704
     705The differences between these functions mainly come from the context in which they are used: for example, kernel2d is used to smooth a point pattern distributed in space, to explore its structures visually; kde2d is used to smooth a cloud of points (a set of observations for two variables X and Y, not necessarily spatial), also to explore its structures visually; and kernelUD is used to smooth several sets of relocations collected using radio-tracking, to estimate animals' home ranges. The data passed to, and the results returned by, all these functions differ, which explains the need for several functions.
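
To make the distinction concrete, a minimal sketch with the xy and id objects built earlier (kde2d() only sees a cloud of points, kernelUD() estimates one utilization distribution per animal):
<code>
library(MASS)          # for kde2d()
library(adehabitat)    # for kernelUD()
d1 <- kde2d(xy$X, xy$Y, n=40)              # generic bivariate density of the point cloud
ud <- kernelUD(xy, id, h="href", grid=40)  # one UD per animal, for home range estimation
</code>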
     706
     707But there are also differences between programs using the kernel method for one aim, i.e. estimating the home range size. This difference comes from the fact that the kernel method is a very general statistical method used to smooth raw data, that is, to estimate a probability density function (pdf) from a set of bivariate observations (which is reflected by the number of packages developed around this method in the R environment). Many variations exist around this method, and depending on the choices made by the user for the numerous parameters of the method, the result may vary.
     708
     709The kernel method smooths the point pattern defined by the N relocations by summing the N kernel functions placed above each relocation. The kernel functions are bivariate, radially symmetrical functions (a kind of "3D bell"), with the volume under the surface equal to 1/N. The width of such a function is controlled by the smoothing parameter. Therefore, the variation in the estimation may be due to:
     710  * The choice of the '''kernel function''': numerous functions are encountered, but the most frequent are the bivariate normal kernel (in adehabitat) and the Epanechnikov kernel. The choice of one particular function won't greatly affect the estimates of home range size. However, the values of the smoothing parameters won't be comparable (they do not refer to the same parameters in different kernel functions). The Epanechnikov kernel is faster and Silverman (1986) indicated that it is slightly more efficient, though this difference is negligible. I chose the bivariate normal kernel for two reasons: first, it is used in RANGES V, which was, at the time of programming, the software I was using for home range estimation; and second, the algorithm of estimation was precisely described in ''Worton, B. J.'' 1995. Using Monte Carlo simulation to evaluate kernel-based home range estimators. Journal of Wildlife Management 59:794-800. Please note: one can also specify the Epanechnikov kernel (through the parameter kern="epa"). However, the use of LSCV is not programmed for this kernel. Therefore, only the bivariate normal kernel is available for both href and LSCV (see the short example after this list).
     711  * The size of the '''grid used''' for the estimation. Because the pdf is continuous over the study area, one needs to estimate its value in each pixel of a regular grid (the default value is, as in RANGES V, a grid of 40*40 pixels). The size of the grid used affects only slightly the estimation. HR estimation programs differ in the default value used for this size.
     712  * The choice of the '''smoothing parameter''': in general, there is one smoothing parameter, but some programs use two smoothing parameters (one for the x dimension and one for the y dimension, for example kernelhr, see http://fwie.fw.vt.edu/wsb/cse/wsb2695/wsb2695.html). The default algorithm for the computation of the smoothing parameter also differs among programs. For adehabitat, it is the ad hoc method, but for example Arcview Animal Movement Analysis uses LSCV. In addition, as noted above, the smoothing parameters have a meaning only with a given kernel function. As noted in a previous mail, one cannot compare the smoothing parameters estimated with a bivariate normal kernel and those estimated with an Epanechnikov kernel.
     713  * The '''computation of the limits''' of the HR from the estimated UD. There are two ways to compute the home range limits from the estimated UD. The first way is to search for a horizontal plane, parallel to the X-Y plane, so that the volume comprised under the surface of the UD and ''above this horizontal plane'' is equal to the requested percentage. The home range limits then correspond to the intersection between this plane and the UD. The second, more classical, way is to search for the horizontal plane parallel to the XY plane, so that the volume comprised under the surface of the UD, ''above the XY plane'' and inside the contour delimited by the intersection of the plane and the UD, is equal to the requested percentage. adehabitat uses the second.
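
A short sketch of that kernel choice in adehabitat, with the xy and id objects built earlier (as noted above, only href is available with the Epanechnikov kernel):
<code>
# bivariate normal kernel (default): href or LSCV are both possible
hr.biv <- kernelUD(xy, id, h="href")
# Epanechnikov kernel: LSCV is not implemented for it
hr.epa <- kernelUD(xy, id, h="href", kern="epa")
</code>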
     714
     715
     716==  About !LoCoH (k-NNCH) UD Analysis  ==
     717Described in ''Getz and Wilmers'', !LoCoH (Local Convex Hulls) is a method for generating home ranges that finds the UD by following a number of simple steps:
     718  1. Locates the k-1 nearest neighbors for each point in the dataset.
     719  1. Constructs a convex hull for each set of nearest neighbors and the original data point.
     720  1. Merges these hulls together from smallest to largest.
     721  1. Divides the merged hulls into isopleths where the 10% isopleth contains 10% of the original data points, the 100% isopleth contains all the points, etc.
     722The !LoCoH method has a number of strong points:
     723  * It generates a density distribution denoting more and less dense regions.
     724  * As more data are added, the home range becomes more focused and accurate.
     725  * It handles 'sharp' features such as lakes and fences well (unlike kernel methods).
     726  * The generated home range has a finite region (unlike kernel methods).
     727The most important decision when generating a home range with !LoCoH is choosing a suitable value for k. k is bounded by 3 (the number of points needed to make a hull) on the low side and the number of data points in the dataset on the high side. A general rule-of-thumb estimate for k is the square root of the number of points in your dataset. As k increases, the area covered by the home range increases and converges towards the Minimum Convex Polygon. A useful feature of the R !LoCoH methods is to analyze multiple k's at once and then run a comparison on the different home ranges generated.
     728!LoCoH has a number of implementations. The adehabitat library has an implementation that can be called using the 'NNCH' (another name for !LoCoH) method. An improved R script (that should hopefully be integrated back into adehabitat in a month or two) can be found at [http://locoh.palary.org/rtutorial !LoCoH R Script]. A number of additional applications have been based on the R script including a [http://locoh.palary.org/ !LoCoH Web Application].
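
A minimal sketch with adehabitat's NNCH implementation, using the rule-of-thumb starting value for k mentioned above (xy is the matrix of locations built earlier):
<code>
# rule-of-thumb starting value: square root of the number of locations (at least 3)
k0 <- max(3, round(sqrt(nrow(xy))))
hr <- NNCH(xy, k=k0, duplicates="delete")
NNCH.area(hr, percent=95, plotit=FALSE)
</code>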
     729
     730=  R HR randomisation scripts  =
     731
     732Note from the authors: The following functions were developed by the authors while working on an article comparing different home-range estimators (see References, Huck et al., 2008). The functions are likely to be clumsily written since none of the authors are experienced programmers, and any suggestions for improvements would be welcomed. We nonetheless hope that the functions might prove of some use to people planning similar analyses.
     733
     734==  Randomise the distance between central points of two home-ranges  ==
     735by John Davison (& Maren Huck), cp. Huck et al. 2008
     736
     737Need to load ‘shapefiles’ package
     738<code>
     739distance(file1="filename1", file2="filename2",esri=T)
     740</code>
     741
     742  * Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R.
     743  * If using an ESRI shapefile esri=T, otherwise esri=F.
     744
     745Returns a vector with
     746  1. distance between centre of two ranges
     747  1. smallest distance between range centres returned by simulation
     748  1. largest distance between range centres returned by simulation
     749  1. 95% confidence interval (upper) for distance between range centres (i.e., 95% CI of random overlap). 
     750
     751Further note: To change the number of randomisations you have to change the line
     752 
     753for (i in 1:500){
     754 
     755accordingly, and then also change the line
     756
     757 
     758pval<-(length(subset(int,int>dist1))*0.002)
     759 
     760accordingly. For example if you want to use 1000 randomisations the lines would read
     761 
     762for (i in 1:1000){
     763 
     764and
     765 
     766pval<-(length(subset(int,int>dist1))*0.001)
     767 
     768(i.e., multiplied by the reciprocal of the number of randomisations)
     769as well as
     770 
     771CI95<-int[475]
     772 
     773into
     774 
     775CI95<-int[950] 
     776 
     777
     778e.g. [to be used after loading the function],
     779
     780 
     781a<-distance(file1="frida_sum05_ag", file2="frida_aut05_ag",esri=T)
     782 
     783
     784#start of DISTANCE-SCRIPT
     785<code>
     786distance<-function(file1,file2,esri){
     787if (esri==T){
     788if (! require(shapefiles))
     789 
     790      stop("package 'shapefiles' is required")
     791 
     792shp11<-read.shapefile(file1)
     793locs11<-as.data.frame(shp11$shp$shp)
     794shp12<-read.shapefile(file2)
     795locs12<-as.data.frame(shp12$shp$shp)
     796}
     797if (esri==F){
     798shp11<-read.table(file1, header=T)
     799locs11<-as.data.frame(shp11)
     800shp12<-read.table(file2, header=T)
     801locs12<-as.data.frame(shp12)
     802}
     803coords11<-(as.data.frame(cbind(locs11$x, locs11$y)))
     804coords12<-(as.data.frame(cbind(locs12$x, locs12$y)))
     805mean11<-colMeans(coords11)
     806mean12<-colMeans(coords12)
     807b<-mean11-mean12
     808c<-as.data.frame(b)
     809dist1<-sqrt((c[1,]^2)+(c[2,]^2))
     810
     811Dataf<-data.frame(1,2)
     812names(Dataf)[1]<-"i"
     813names(Dataf)[2]<-"distance"
     814
     815for (i in 1:500){
     816ll<-rbind(coords11,coords12)
     817cds<-as.data.frame(ll)
     818sample(cds,replace=F)
     819l<-length(locs11$x)
     820l2<-length(locs12$x)
     821sub <- sample(nrow(cds), floor(nrow(cds) * (l)/(l+l2)))
     822set1 <- cds[sub, ]
     823set2 <- cds[-sub, ]
     824coords<-as.data.frame(set1)
     825mean1<-colMeans(coords)
     826coords2<-as.data.frame(set2)
     827mean2<-colMeans(coords2)
     828b<-mean1-mean2
     829c<-as.data.frame(b)
     830dist<-sqrt((c[1,]^2)+(c[2,]^2))
     831Dataf[i,1]<-i
     832Dataf[i,2]<-dist
     833}
     834int<-sort(Dataf[,2])
     835lowest<-int[1]
     836highest<-int[500]
     837CI95<-int[475]
     838pval<-(length(subset(int,int>dist1))*0.002)
     839print(cbind(dist1,lowest,highest,CI95,pval))
     840result<-cbind(dist1, lowest,highest,CI95,pval)
     841}
     842</code>
     843#end of DISTANCE-SCRIPT
     844
     845==  Randomise the overlap between two home-ranges calculated using kernel, !LoCoH or MCP method  ==
     846by John Davison (& Maren Huck), cp. Huck et al. 2008
     847
     848Overlap functions: need the 'shapefiles', 'adehabitat', 'maptools', 'gpclib' & 'spatstat' packages
     849
     850General note: To change the number of randomisations you have to change the line
     851 
     852for (i in 1:500){
     853 
     854and
     855 
     856pval<-(length(subset(overprop1,overprop1<proportion))*0.002)
     857 
     858accordingly. For example if you want to use 1000 randomisations the lines would read
     859 
     860for (i in 1:1000){
     861 
     862and
     863 
     864pval<-(length(subset(overprop1,overprop1<proportion))*0.001)
     865 
     866(i.e., times the reciprocal value of the number of randomisations)
     867Likewise, you would have to change
     868 
     869highest<-overprop1[500]
     870 
     871to
     872 
     873 highest<-overprop1[1000]
     874 
     875, and
     876 
     877CI5<-overprop1[25]
     878 
     879to
     880 
     881CI5<-overprop1[50]
     882 
     883
     884
     885#Assessing overlap: kernels
     886
     887 
     888overlapK(file1, file2,esri,hval,gridval,levelval)
     889 
     890
     891  * Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R.
     892  * If using ESRI shapefiles esri=T, otherwise esri=F.
     893  * ''' Textfiles should start with the x-locations labeled "x", and the second column should contain the y-locations (labeled "y"). '''
     894  * hval is the value for the parameter h used in kernel analyses – can be an absolute value, "href" or "LSCV" - see adehabitat package
     895  * gridval is resolution of the grid used in kernel analyses – see adehabitat package
     896  * levelval is the desired isopleth for use in kernel analyses – see adehabitat package
     897
     898
     899Returns a vector with
     900  1. area (ha) of range 1 [areaHR1]
     901  1. area of range 2 [areaHR2]
     902  1. area of overlap between ranges 1 and 2 [overlap]
     903  1. overlap as a proportion of smaller range [proportion]
     904  1. lowest level of overlap returned by simulation [lowest]
     905  1. highest level of overlap returned by simulation [highest]
     906  1. 95% confidence interval (lower) for expected level of overlap (i.e., 95% CI of random overlap [CI5]).
     907
     908
     909e.g. [to be used after loading the function],
     910
     911 
     912a<-overlapK(file1="frida_sum05_ag", file2="frida_aut05_ag", esri=T, hval=15, gridval=100, levelval=95)
     913 
     914
     915 
     916b<-overlapK(file1="brightmain.txt", file2="brightall.txt", esri=F, hval="LSCV", gridval=100, levelval=95)
     917 
     918
     919#start of OVERLAP-SCRIPT for Kernels
     920<code>
     921overlapK<-function(file1,file2,esri,hval,gridval,levelval){
     922if (! require(shapefiles))
     923 
     924      stop("package 'shapefiles' is required")
     925 
     926if (! require(adehabitat))
     927 
     928      stop("package 'adehabitat' is required")
     929 
     930if (! require(maptools))
     931 
     932      stop("package 'maptools' is required")
     933 
     934if (! require(gpclib))
     935 
     936      stop("package 'gpclib' is required")
     937 
     938if (! require(spatstat))
     939 
     940      stop("package 'spatstat' is required")
     941 
     942"intersectpolys" <-
     943function(polyA,polyB,type="binary") {
     944 
     945  if (!require(gpclib))
     946      stop("package 'gpclib' is required")
     947  if (!inherits(polyA, "polylist"))
     948      stop("Non convenient data")
     949  if (!inherits(polyB, "polylist"))
     950      stop("Non convenient data")
     951  nA <- length(polyA)
     952  nB <- length(polyB)
     953  if(nA==0 || nB==0)
     954      stop("Non convenient data")
     955  res <- matrix(0,nA,nB)
     956  for (i in 1:nA) {     
     957      nbpartA<-attr(polyA[[i]],"nParts")
     958      for (j in 1:nB) {
     959        nbpartB<-attr(polyB[[j]],"nParts")
     960        for(ii in 1:nbpartA){     
     961          for(jj in 1:nbpartB){     
     962            gpcA <- as(polyA[[i]][attr(polyA[[i]],"pstart")$from[ii]:attr(polyA[[i]],"pstart")$to[ii],],"gpc.poly")     
     963            gpcB <- as(polyB[[j]][attr(polyB[[j]],"pstart")$from[jj]:attr(polyB[[j]],"pstart")$to[jj],],"gpc.poly")     
     964            interAB <- intersect(gpcA,gpcB)
     965            nAB=length(interAB@pts)
     966            if(nAB==0) res[i,j] <- 0
     967            else res[i,j] <- ifelse(type=="area",area.poly(interAB),1) 
     968        }     
     969        }       
     970      }
     971  }
     972  res <- as.data.frame(res)     
     973  if (!is.null(attr(polyA, "region.id")))
     974      row.names(res) <- attr(polyA, "region.id")
     975  else row.names(res) <- paste("A", 1:length(polyA), sep = "")
     976  if (!is.null(attr(polyB, "region.id")))
     977      names(res) <- attr(polyB, "region.id")
     978  else names(res) <- paste("B", 1:length(polyB), sep = "")
     979  return(res)
     980 
     981}
     982if (esri==T){
     983shp11<-read.shapefile(file1)
     984locs11<-as.data.frame(shp11$shp$shp)
     985shp12<-read.shapefile(file2)
     986locs12<-as.data.frame(shp12$shp$shp)
     987}
     988if (esri==F){
     989shp11<-read.table(file1, header=T)
     990locs11<-as.data.frame(shp11)
     991shp12<-read.table(file2, header=T)
     992locs12<-as.data.frame(shp12)
     993}
     994coords11<-(as.data.frame(cbind(locs11$x, locs11$y)))
     995coords12<-(as.data.frame(cbind(locs12$x, locs12$y)))
     996k11<-kernelUD(coords11, id=NULL, h=hval, grid=gridval)
     997k12<-kernelUD(coords12, id=NULL, h=hval, grid=gridval)
     998area11<- kernel.area(coords11, id=NULL, h=hval, grid=gridval, level=levelval)
     999area12<- kernel.area(coords12, id=NULL, h=hval, grid=gridval, level=levelval)
     1000ver11<-getverticeshr(k11, lev = levelval)
     1001ver12<-getverticeshr(k12, lev = levelval)
     1002Shapefile11<-kver2shapefile(ver11, which = names(ver11))
     1003Shapefile12<-kver2shapefile(ver12, which = names(ver12))
     1004ver11s<-shape2poly(Shapefile11, region.id = NULL)
     1005ver12s<-shape2poly(Shapefile12, region.id = NULL)
     1006inter1<-intersectpolys(ver11s,ver12s,type = "area")
     1007tt1<-sum(inter1)
     1008tt<-tt1/10000
     1009smaller11<-if(area11<=area12)area11 else area12
     1010prop11<-tt/smaller11
     1011Dataf1<-data.frame(1,2)
     1012names(Dataf1)[1]<-"i"
     1013names(Dataf1)[2]<-"overlap"
     1014for (i in 1:500){
     1015ll<-rbind(coords11,coords12)
     1016cds<-as.data.frame(ll)
     1017sample(cds,replace=F)
     1018l<-length(locs11$x)
     1019l2<-length(locs12$x)
     1020sub <- sample(nrow(cds), floor(nrow(cds) * (l)/(l+l2)))
     1021set1 <- cds[sub, ]
     1022set2 <- cds[-sub, ]
     1023coords1<-as.data.frame(set1)
     1024coords2<-as.data.frame(set2)
     1025k1<-kernelUD(coords1, id=NULL, h=hval, grid=gridval)
     1026k2<-kernelUD(coords2, id=NULL, h=hval, grid=gridval)
     1027area1<- kernel.area(coords1, id=NULL, h=hval, grid=gridval,level=levelval)
     1028area2<- kernel.area(coords2, id=NULL, h=hval, grid=gridval,level=levelval)
     1029smaller<-if(area1<=area2)area1 else area2
     1030ver1<-getverticeshr(k1, lev = levelval)
     1031ver2<-getverticeshr(k2, lev = levelval)
     1032Shapefile1<-kver2shapefile(ver1, which = names(ver1))
     1033Shapefile2<-kver2shapefile(ver2, which = names(ver2))
     1034ver1s<-shape2poly(Shapefile1, region.id = NULL)
     1035ver2s<-shape2poly(Shapefile2, region.id = NULL)
     1036inter1<-intersectpolys(ver1s,ver2s,type = "area")
     1037intersum1<-sum(inter1)
     1038intersumHA1=intersum1/10000
     1039prop<-intersumHA1/smaller
     1040Dataf1[i,1]<-i
     1041Dataf1[i,2]<- prop
     1042print(i)
     1043}
     1044overprop1<-sort(Dataf1[,2])
     1045areaHR1<-area11[1,1]
     1046areaHR2<-area12[1,1]
     1047overlap<-tt[1]
     1048proportion<-prop11[1,1]
     1049lowest<-overprop1[1]
     1050highest<-overprop1[500]
     1051CI5<-overprop1[25]
     1052pval<-(length(subset(overprop1,overprop1<proportion))*0.002)
     1053print(cbind(areaHR1,areaHR2,overlap,proportion,lowest,highest,CI5,pval))
     1054res<-cbind(areaHR1,areaHR2,overlap,proportion,lowest,highest,CI5,pval)
     1055}
     1056</code>
     1057#end of OVERLAP-SCRIPT for Kernels
     1058
     1059
     1060#Assessing overlap: local convex hulls (!LoCoH / NNCH)
     1061
     1062 
     1063overlapNN(file1, file2, esri, a1, a2, levelval)
     1064 
     1065
     1066  * Where file1 and file2 are the two files containing location estimates. These should be in the format of a text file containing two columns with the headings ‘x’ and ‘y’, or an ESRI shapefile (in which case, esri=T). Shapefile names should not include any extension (.shp). These files should be saved in the working directory of R. 
     1067  * If using an ESRI shapefile esri=T, otherwise esri=F.
     1068  * ''' Textfiles should start with the x-locations labeled "x", and the second column should contain the y-locations (labeled "y").'''
     1069  * Where a1 and a2 are ‘a’ parameters for adaptive local convex hull analysis of the location estimates contained in files 1 and 2 respectively
     1070  * levelval is the desired isopleth for use in convex hull analyses – see adehabitat package
     1071
     1072Returns a vector with
     1073  1. area (ha) of range 1 [areaHR1][[BR]]
     1074  1. area of range 2 [areaHR2][[BR]]
     1075  1. area of overlap between ranges 1 and 2 [overlap][[BR]]
     1076  1. overlap as a proportion of smaller range [proportion][[BR]]
     1077  1. lowest level of overlap returned by simulation [lowest][[BR]]
     1078  1. highest level of overlap returned by simulation [highest][[BR]]
     1079  1. 95% confidence interval (lower) for expected level of overlap (i.e., 95% CI of random overlap [CI5]).[[BR]]   
     1080
     1081e.g. [to be used after loading the function],
     1082
     1083 
     1084a<-overlapNN(file1="frida_sum05_ag", file2="frida_aut05_ag", esri=T, a1=344, a2=344, levelval=95)
     1085 
     1086
     1087
     1088#start of overlap !LoCoH-script
     1089<code>
     1090overlapNN<-function(file1,file2,esri,a1,a2,levelval){
     1091if (! require(shapefiles))
     1092 
     1093      stop("package 'shapefiles' is required")
     1094 
     1095if (! require(adehabitat))
     1096 
     1097      stop("package 'adehabitat' is required")
     1098 
     1099if (! require(maptools))
     1100 
     1101      stop("package 'maptools' is required")
     1102 
     1103if (! require(gpclib))
     1104 
     1105      stop("package 'gpclib' is required")
     1106 
     1107if (! require(spatstat))
     1108 
     1109      stop("package 'spatstat' is required")
     1110 
     1111"intersectpolys" <-
     1112function(polyA,polyB,type="binary") {
     1113 
     1114  if (!require(gpclib))
     1115      stop("package 'gpclib' is required")
     1116  if (!inherits(polyA, "polylist"))
     1117      stop("Non convenient data")
     1118  if (!inherits(polyB, "polylist"))
     1119      stop("Non convenient data")
     1120  nA <- length(polyA)
     1121  nB <- length(polyB)
     1122  if(nA==0 || nB==0)
     1123      stop("Non convenient data")
     1124  res <- matrix(0,nA,nB)
     1125  for (i in 1:nA) {     
     1126      nbpartA<-attr(polyA[[i]],"nParts")
     1127      for (j in 1:nB) {
     1128        nbpartB<-attr(polyB[[j]],"nParts")
     1129        for(ii in 1:nbpartA){     
     1130          for(jj in 1:nbpartB){     
     1131            gpcA <- as(polyA[[i]][attr(polyA[[i]],"pstart")$from[ii]:attr(polyA[[i]],"pstart")$to[ii],],"gpc.poly")     
     1132            gpcB <- as(polyB[[j]][attr(polyB[[j]],"pstart")$from[jj]:attr(polyB[[j]],"pstart")$to[jj],],"gpc.poly")     
     1133            interAB <- intersect(gpcA,gpcB)
     1134            nAB=length(interAB@pts)
     1135            if(nAB==0) res[i,j] <- 0
     1136            else res[i,j] <- ifelse(type=="area",area.poly(interAB),1) 
     1137        }     
     1138        }     
     1139      }
     1140  }
     1141  res <- as.data.frame(res)     
     1142  if (!is.null(attr(polyA, "region.id")))
     1143      row.names(res) <- attr(polyA, "region.id")
     1144  else row.names(res) <- paste("A", 1:length(polyA), sep = "")
     1145  if (!is.null(attr(polyB, "region.id")))
     1146      names(res) <- attr(polyB, "region.id")
     1147  else names(res) <- paste("B", 1:length(polyB), sep = "")
     1148  return(res)
     1149 
     1150}
     1151if (esri==T){
     1152shp11<-read.shapefile(file1)
     1153locs11<-as.data.frame(shp11$shp$shp)
     1154shp12<-read.shapefile(file2)
     1155locs12<-as.data.frame(shp12$shp$shp)
     1156}
     1157if (esri==F){
     1158shp11<-read.table(file1, header=T)
     1159locs11<-as.data.frame(shp11)
     1160shp12<-read.table(file2, header=T)
     1161locs12<-as.data.frame(shp12)
     1162}
     1163coords11<-(as.data.frame(cbind(locs11$x, locs11$y)))
     1164coords12<-(as.data.frame(cbind(locs12$x, locs12$y)))
     1165CH11<-NNCH(coords11,a=a1,duplicates="delete")
     1166area11<- NNCH.area(CH11, percent=levelval, plotit=F)
     1167area11<-area11/10000
     1168CH12<-NNCH(coords12,a=a2,duplicates="delete")
     1169area12<- NNCH.area(CH12, percent=levelval, plotit=F)
     1170area12<-area12/10000
     1171smaller11<-if(area11<=area12)area11 else area12
     1172NNCH11<-NNCH.shapefile(CH11,percent=levelval)
     1173ver11s<-shape2poly(NNCH11, region.id = NULL)
     1174NNCH12<-NNCH.shapefile(CH12, percent=levelval)
     1175ver12s<-shape2poly(NNCH12, region.id = NULL)
     1176inter1<-intersectpolys(ver11s,ver12s,type = "area")
     1177tt1<-sum(inter1)
     1178tt=tt1/10000
     1179prop11<-tt/smaller11
     1180Dataf1<-data.frame(1,2)
     1181names(Dataf1)[1]<-"i"
     1182names(Dataf1)[2]<-"overlap"
     1183for (i in 1:500){
     1184ll<-rbind(coords11,coords12)
     1185cds1<-as.data.frame(ll)
     1186sample(cds1,replace=F)
     1187l11<-length(locs11$x)
     1188l12<-length(locs12$x)
     1189sub1 <- sample(nrow(cds1), floor(nrow(cds1) * (l11)/(l11+l12)))
     1190set1 <- cds1[sub1, ]
     1191set2 <- cds1[-sub1, ]
     1192coords1<-as.data.frame(set1)
     1193coords2<-as.data.frame(set2)
     1194CH1<-NNCH(coords1,a=a1,duplicates="delete")
     1195CH2<-NNCH(coords2,a=a2,duplicates="delete")
     1196area1<- NNCH.area(CH1, percent=levelval, plotit=F)
     1197area2<- NNCH.area(CH2, percent=levelval, plotit=F)
     1198smaller<-if(area1<=area2)area1 else area2
     1199NNCH1<-NNCH.shapefile(CH1,percent=levelval)
     1200ver1s<-shape2poly(NNCH1, region.id = NULL)
     1201NNCH2<-NNCH.shapefile(CH2, percent=levelval)
     1202ver2s<-shape2poly(NNCH2, region.id = NULL)
     1203inter1<-intersectpolys(ver1s,ver2s,type = "area")
     1204intersum1<-sum(inter1)
     1205intersumHA1<-intersum1
     1206prop<-intersumHA1/smaller
     1207Dataf1[i,1]<-i
     1208Dataf1[i,2]<- prop
     1209print(i)
     1210}
     1211overprop1<-sort(Dataf1[,2])
     1212areaHR1<-area11[1,1]
     1213areaHR2<-area12[1,1]
     1214overlap<-tt[1]
     1215proportion<-prop11[1,1]
     1216lowest<-overprop1[1]
     1217highest<-overprop1[500]
     1218CI5<-overprop1[25]
     1219pval<-(length(subset(overprop1,overprop1<proportion))*0.002)
     1220print(cbind(areaHR1,areaHR2,overlap,proportion,lowest,highest,CI5,pval))
     1221res<-cbind(areaHR1,areaHR2,overlap,proportion,lowest,highest,CI5,pval)
     1222}
     1223</code>
     1224#end of OVERLAP-SCRIPT for !LoCoH
     1225
     1226
     1227
     1228##############MCP#######
     1229
     1230 
     1231overlapMCP<-function(file1,file2,esri,levelval)
     1232 
     1233
     1234  * file1 and file2 are the two files containing location estimates. These should be either text files with two columns headed ‘x’ and ‘y’, or ESRI shapefiles (in which case, esri=T). Shapefile names should not include the extension (.shp). These files should be saved in the working directory of R.
     1235  * If using an ESRI shapefile set esri=T, otherwise esri=F.
     1236  * '''Text files should contain the x-locations in the first column (labeled "x") and the y-locations in the second column (labeled "y").'''
     1237  * levelval is the desired isopleth for the MCP/convex hull analysis – see the adehabitat package.
     1238
     1239Returns a vector with
     1240  1. area (ha) of range 1 [areaHR1]
     1241  1. area of range 2 [areaHR2]
     1242  1. area of overlap between ranges 1 and 2 [overlap]
     1243  1. overlap as a proportion of smaller range [proportion]
     1244  1. lowest level of overlap returned by simulation [lowest]
     1245  1. highest level of overlap returned by simulation [highest]
     1246  1. 95% confidence interval (lower) for the expected level of overlap (i.e., 95% CI of random overlap) [CI5]
  1. proportion of the 500 randomized overlaps that are smaller than the observed overlap [pval]
     1247
     1248For example (to be used after loading the function):
     1249
     1250 
     1251a<-overlapMCP(file1="frida_sum05_ag", file2="frida_aut05_ag",esri=T, levelval=100)
     1252 
     1253
     1254#start of MCP-overlap script
     1255<code>
     1256overlapMCP<-function(file1,file2,esri,levelval){
     1257if (! require(shapefiles))
     1258 
     1259      stop("package 'shapefiles' is required")
     1260 
     1261if (! require(adehabitat))
     1262 
     1263      stop("package 'adehabitat' is required")
     1264 
     1265if (! require(maptools))
     1266 
     1267      stop("package 'maptools' is required")
     1268 
     1269if (! require(gpclib))
     1270 
     1271      stop("package 'gpclib' is required")
     1272 
     1273if (! require(spatstat))
     1274 
     1275      stop("package 'spatstat' is required")
     1276 
     1277"intersectpolys" <-
     1278function(polyA,polyB,type="binary") {
     1279 
     1280  if (!require(gpclib))
     1281      stop("package 'gpclib' is required")
     1282  if (!inherits(polyA, "polylist"))
     1283      stop("Non convenient data")
     1284  if (!inherits(polyB, "polylist"))
     1285      stop("Non convenient data")
     1286  nA <- length(polyA)
     1287  nB <- length(polyB)
     1288  if(nA==0 || nB==0)
     1289      stop("Non convenient data")
     1290  res <- matrix(0,nA,nB)
     1291  for (i in 1:nA) {     
     1292      nbpartA<-attr(polyA[[i]],"nParts")
     1293      for (j in 1:nB) {
     1294        nbpartB<-attr(polyB[[j]],"nParts")
     1295        for(ii in 1:nbpartA){     
     1296          for(jj in 1:nbpartB){     
     1297            gpcA <- as(polyA[[i]][attr(polyA[[i]],"pstart")$from[ii]:attr(polyA[[i]],"pstart")$to[ii],],"gpc.poly")     
     1298            gpcB <- as(polyB[[j]][attr(polyB[[j]],"pstart")$from[jj]:attr(polyB[[j]],"pstart")$to[jj],],"gpc.poly")     
     1299            interAB <- intersect(gpcA,gpcB)
     1300            nAB=length(interAB@pts)
     1301            if(nAB==0) res[i,j] <- 0
     1302            else res[i,j] <- ifelse(type=="area",area.poly(interAB),1)             
     1303        }     
     1304        }             
     1305      }
     1306  }
     1307  res <- as.data.frame(res)     
     1308  if (!is.null(attr(polyA, "region.id")))
     1309      row.names(res) <- attr(polyA, "region.id")
     1310  else row.names(res) <- paste("A", 1:length(polyA), sep = "")
     1311  if (!is.null(attr(polyB, "region.id")))
     1312      names(res) <- attr(polyB, "region.id")
     1313  else names(res) <- paste("B", 1:length(polyB), sep = "")
     1314  return(res)
     1315 
     1316}
     1317if (esri==T){
     1318shp11<-read.shapefile(file1)
     1319locs11<-as.data.frame(shp11$shp$shp)
     1320shp12<-read.shapefile(file2)
     1321locs12<-as.data.frame(shp12$shp$shp)
     1322}
     1323if (esri==F){
     1324shp11<-read.table(file1, header=T)
     1325locs11<-as.data.frame(shp11)
     1326shp12<-read.table(file2, header=T)
     1327locs12<-as.data.frame(shp12)
     1328}
     1329coords11<-(as.data.frame(cbind(locs11$x, locs11$y)))
     1330coords12<-(as.data.frame(cbind(locs12$x, locs12$y)))
     1331MCP11<-NNCH(coords11,k=nrow(coords11),duplicates=0.01)
     1332area11<- NNCH.area(MCP11, percent=levelval, plotit=F)
     1333area11=area11/10000
     1334MCP12<-NNCH(coords12,k= nrow(coords12),duplicates=0.01)
     1335area12<- NNCH.area(MCP12, percent=levelval, plotit=F)
     1336area12=area12/10000
     1337smaller11<-if(area11<=area12)area11 else area12
     1338mcp11<-NNCH.shapefile(MCP11,percent=levelval)
     1339ver11s<-shape2poly(mcp11, region.id = NULL)
     1340mcp12<-NNCH.shapefile(MCP12, percent=levelval)
     1341ver12s<-shape2poly(mcp12, region.id = NULL)
     1342inter1<-intersectpolys(ver11s,ver12s,type = "area")
     1343tt1<-sum(inter1)
     1344tt=tt1/10000
     1345prop11<-tt/smaller11
     1346Dataf1<-data.frame(1,2)
     1347names(Dataf1)[1]<-"i"
     1348names(Dataf1)[2]<-"overlap"
     1349for (i in 1:500){
     1350ll<-rbind(coords11,coords12)
     1351cds1<-as.data.frame(ll)
     1352sample(cds1,replace=F)
     1353l11<-length(locs11$x)
     1354l12<-length(locs12$x)
     1355sub1 <- sample(nrow(cds1), floor(nrow(cds1) * (l11)/(l11+l12)))
     1356set11 <- cds1[sub1, ]
     1357set12 <- cds1[-sub1, ]
     1358coords1<-as.data.frame(set11)
     1359coords2<-as.data.frame(set12)
     1360MCP1<-NNCH(coords1,k=nrow(coords1),duplicates=0.01)
     1361area1<- NNCH.area(MCP1, percent=levelval, plotit=F)
     1362MCP2<-NNCH(coords2,k= nrow(coords2),duplicates=0.01)
     1363area2<- NNCH.area(MCP2, percent=levelval, plotit=F)
     1364smaller<-if(area1<=area2)area1 else area2
     1365NNmcp1<-NNCH.shapefile(MCP1,percent=levelval)
     1366ver1s<-shape2poly(NNmcp1, region.id = NULL)
     1367NNmcp2<-NNCH.shapefile(MCP2, percent=levelval)
     1368ver2s<-shape2poly(NNmcp2, region.id = NULL)
     1369inter1<-intersectpolys(ver1s,ver2s,type = "area")
     1370intersum1<-sum(inter1)
     1371prop<-intersum1/smaller
     1372Dataf1[i,1]<-i
     1373Dataf1[i,2]<- prop
     1374print(i)
     1375}
     1376overprop1<-sort(Dataf1[,2])
     1377areaHR1<-area11[1,1]
     1378areaHR2<-area12[1,1]
     1379overlap<-tt[1]
     1380proportion<-prop11[1,1]
     1381lowest<-overprop1[1]
     1382highest<-overprop1[500]
     1383CI5<-overprop1[25]
     1384pval<-(length(subset(overprop1,overprop1<proportion))*0.002)
     1385print(cbind(areaHR1,areaHR2,overlap,proportion,lowest,highest,CI5,pval))
     1386res<-cbind(areaHR1,areaHR2,overlap,proportion,lowest,highest,CI5,pval)
     1387}
     1388</code>
     1389#end of OVERLAP-SCRIPT for MCP
     1390
     1391=  Batch home-range calculations  =
     1392Here is an R script to be sourced (too long to place here, please [ftp://gis.dipbsf.uninsubria.it/HREngine.zip download] it), which provides some useful functions for home range calculations in batch mode, starting from a point locations table (say, an attribute table from a GIS).
     1393
     1394Some of the functions provided are actually used in the QGis Home Range plugin.
     1395
     1396For questions, hints, patches, insults, whatever, contact [mailto:Prea@uninsubria.it Damiano G. Preatoni]
     1397
     1398==  Usage  ==
     1399  1. [ftp://gis.dipbsf.uninsubria.it/HREngine.zip download] `HR_engine.zip` and place `HR_engine.R` in a convenient directory.
     1400  1. load adehabitat, e.g. with `require(adehabitat)`.
     1401  1. add the functions from `HR_engine.R` with `source("<path to HR_engine.R>")` (see the sketch below).
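A minimal sketch of these steps in an R session (the path to `HR_engine.R` and the locations file name are placeholders; adjust them to your own setup):

<code>
# load adehabitat (the home-range functions rely on it)
require(adehabitat)

# source the batch functions (placeholder path -- use the directory where HR_engine.R was placed)
source("C:/animove/HR_engine.R")

# a point-locations table, e.g. exported from a GIS attribute table (placeholder file name)
locs <- read.table("locations.txt", header = TRUE)
</code>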
     1402
     1403==  Function overview  ==
     1404`'''HR_cruncher'''(x, method=c("mcp","href","lscv","hadj","clusthr","NNCH"), prc=95, grid=40)`
     1405
     1406Wrapper function to calculate home ranges with almost all the methods available in `adehabitat`.
     1407
     1408The function is intended for users not proficient in R, who will need to call just this one function.
     1409
     1410Mind that `x` should be a `sahrlocs` object, which can easily be obtained by reshaping any kind of point location data into a dataframe structured like this:
     1411
     1412  * `$ Name: Factor` animal unique identifier
     1413  * `$ X   : numeric` point location, easting
     1414  * `$ Y   : numeric` point location, northing
     1415  * ` ...  ` any other field you need (e.g. animal sex, age, whatever).
     1416
     1417The function returns a list containing a `method.volumes` element for each method used, which holds the UDs, and a `method.areas` element which contains the home-range surface area estimates.
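For illustration only (animal names and coordinates are made up, a real dataset would have many more fixes per animal, and whether `HR_cruncher` takes the dataframe directly or needs it converted to a `sahrlocs` object first should be checked in `HR_engine.R`), a call could look like this:

<code>
# point locations with the fields listed above
locs <- data.frame(Name = factor(c("frida", "frida", "frida", "ugo", "ugo")),
                   X    = c(1510230, 1510310, 1510280, 1512040, 1512110),
                   Y    = c(5034120, 5034180, 5034150, 5035500, 5035560))

# MCP and kernel href home ranges at the 95% isopleth
hr <- HR_cruncher(locs, method = c("mcp", "href"), prc = 95, grid = 40)

# the list holds a *.volumes element (UDs) and a *.areas element (surface areas)
# per method; the exact element naming is assumed here
hr$href.areas
</code>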
     1418
     1419`'''kerneloverlap.spot'''(x, method=c("HR","PHR","VI","BA","UDOI","HD"), lev=95, conditional=FALSE, ...)`
     1420
     1421As one can see, the interface is nearly identical to adehabitat's original `kerneloverlap` function.
     1422
     1423This one, however, calculates overlap starting from already existing `khrud` or `khr` objects. Unlike the original `kerneloverlap`, this function ''does not'' recalculate the UDs: often you already have them, and they may have taken some time to compute, so there is no need to do it again.
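A small sketch, assuming `ud` is a `khrud` object previously estimated with adehabitat's `kernelUD` (so the UDs are not recomputed here):

<code>
# xy: data frame of coordinates, id: factor of animal identifiers
ud <- kernelUD(xy, id = id, h = "href", grid = 40)

# overlap indices computed directly from the existing UDs
ov <- kerneloverlap.spot(ud, method = "UDOI", lev = 95, conditional = FALSE)
</code>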
     1424
     1425`'''corearea'''(xy, id=NULL, levels=seq(20,95, by=5), method="mcp", unin="m", unout="m", plot=TRUE)`
     1426
     1427This is a reimplementation of the incremental core area function discussed above. See Hanski et al. (2000) for the methodology.
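A minimal sketch, using the defaults from the signature above (`xy` and `id` as in the previous example):

<code>
# incremental core-area analysis over isopleth levels 20-95%
ca <- corearea(xy, id = id, levels = seq(20, 95, by = 5),
               method = "mcp", unin = "m", unout = "m", plot = TRUE)
</code>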
     1428
     1429`'''exporter'''(x, attr, field="ID", out="allavailable", level=95, export=c("global", "separated"), shapename, dir=getwd(), owr=TRUE)`
     1430
     1431This is the khrUD-to-shapefile function contained in the QGis Home Range Plugin.
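A hedged sketch of a call (the meaning of `attr` and `shapename` is assumed here: an attribute table to attach to the polygons and the base name of the output shapefile; check the function's source for the details):

<code>
# export the 95% isopleths of a khrud object to a shapefile in the working directory
# (attr, field and shapename values are assumptions, not documented defaults)
exporter(ud, attr = locs, field = "Name", out = "allavailable", level = 95,
         export = "global", shapename = "homeranges_95", dir = getwd(), owr = TRUE)
</code>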
     1432
     1433=  References  =
     1434
     1435Bullard, F. (1991). Estimating the home range of an animal: a Brownian bridge approach. Department of Statistics, University of North Carolina at Chapel Hill. M.Sc. thesis. [http://www.faunalia.it/animov/docs/Bullard1991.pdf View PDF]
     1436
     1437Burgman, M.A. & Fox, J.C. (2003). Bias in species range estimates from minimum convex polygons: implications for conservation and options for improved planning. Animal Conservation 6: 19-28.
     1438
     1439
     1440Getz, W. and C. Wilmers. 2004. A local nearest-neighbor convex-hull construction of home ranges and utilization distributions. Ecography 27: 489-505. [http://www.faunalia.it/animov/docs/Getz&!WilmersEcography04.pdf View PDF]
     1441
     1442Getz WM, Fortmann-Roe S, Cross PC, Lyons AJ, Ryan SJ, et al (2007) !LoCoH: Nonparameteric Kernel Methods for Constructing Home Ranges and Utilization Distributions. PLoS ONE 2(2): e207. doi:10.1371/journal.pone.0000207 [http://www.faunalia.it/animov/docs/Getz_et_al_journal_pone_207.pdf View PDF]
     1443
     1444Hanski IK, Stevens PC, Ihalempia P, Selonen V (2000) Home-range size, movements, and nest-site use in the Siberian flying squirrel, ''Pteromys volans''. Journal of Mammalogy, 81: 789-809.
     1445
     1446Huck, M., Davison, J., Roper, T.J., 2008. Comparison of two sampling protocols and four home range estimators using radio-tracking data from urban badgers. Wildlife Biology 14, 467-477.
     1447
     1448Wauters LA, Preatoni DG, Molinari A, Tosi G (2007) Radio-tracking squirrels: Performance of home range density and linkage estimators with small range and sample size. Ecological Modelling, 202(3-4): 333-344.