I find myself drawn back to the blogosphere by a nice short comment about simulation in ecology by Vıtezslav Moudry (sorry – can’t seem to get the accents over the relevant letters!) in Journal of Biogeography. The comment highlights the impact that poor data quality can have on species distribution models, points to the value of simulation studies for understanding the behaviour of SDMs under different types of data bias, and notes the utility of some approaches for dealing with poor data quality. It mentions a couple of my favourite studies, including two papers led by postdocs in our group, José Lahoz-Monfort and Gurutzeta Guillera-Arroita. Jose’s paper I reviewed in a previous (now quite old) blog, but Guru’s paper I’m yet to advertise! So here it is.

The paper, published in Global Ecology and Biogeography and titled ‘Is my species distribution model fit for purpose? Matching data and models to applications‘, looks at how various data quality issues impact SDMs and what the specific implications might be for conservation decisions (including spatial prioritisation of spending, biosecurity, and population viability analysis). I love this paper because it draws a direct line from the technical, geeky problems to what we really care about, which is doing good analysis that supports good conservation decisions. After all, who cares about imperfect detection, sampling bias, false positives, spatial autoblahblah if it doesn’t actually mess with a decision we need to make? Well, ecologists might care if it impacts ecological inference – but unless that inference is central to a conservation decision, I’m not sure that I really care about that too much either.

While we’re on simulation – I’d like to draw your attention to a recent simulation study where we show that recent (highly cited) criticisms of autologistic regression are unfounded, and that the honour of the autologistic is intact! Vive la autologistic! Tchussie.
‘Imperfect detection impacts the performance of species distribution models’ is online early with Global Ecology and Biogeography… It’s one of a couple of papers we’re working on that aim to clarify and quantify the influence of imperfect detection on inference and prediction from models fitted to biological survey data, including SDMs. Keep your eye out for another exciting paper by Gurutzeta Guillera-Arroita… coming soon.
Terry Walshe, Kirsten Parris, Mick McCarthy and I have a new paper out in the journal Diversity and Distributions that tackles the issue of how to determine the probability that a species occupies a given location after a number of failed attempts to find it there (check it out). It’s not quite as simple as you would think, even if you have a good handle on the ‘detectability’ literature. In the paper, we describe an approach to determining how many non-detections would be necessary to be X% sure that a species is not present at a given location. This method is particularly useful for designing impact assessment surveys in which the probability of failing to detect an endangered species that occupies the site needs to be set to some comfortably low level (e.g. 5% chance).
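The core idea can be sketched with Bayes’ rule. This is a minimal illustration, not the paper’s exact method: it assumes a prior occupancy probability `psi`, a constant per-survey detection probability `p` (given presence), and independent surveys — the variable names and the target level `alpha` are my own notation for this sketch.

```python
import math

def posterior_presence(psi, p, n):
    """Posterior probability the species is present after n surveys
    with no detections, given prior occupancy psi and per-survey
    detectability p (assuming independent surveys)."""
    miss = (1 - p) ** n  # chance of n consecutive misses if present
    return psi * miss / (psi * miss + (1 - psi))

def surveys_needed(psi, p, alpha):
    """Smallest number of failed surveys so that the posterior
    probability of presence drops to alpha or below."""
    # Solve psi*(1-p)^n / (psi*(1-p)^n + 1-psi) <= alpha for n.
    x = alpha * (1 - psi) / (psi * (1 - alpha))
    return math.ceil(math.log(x) / math.log(1 - p))

# Example: prior occupancy 0.5, detectability 0.3 per visit,
# and we want at most a 5% residual chance the species is present.
n = surveys_needed(0.5, 0.3, 0.05)  # → 9 failed surveys
```

Note how quickly the required effort grows when detectability is low: under these assumptions, halving `p` roughly doubles the number of failed surveys you need before you can comfortably declare absence.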
Terry Walshe wrote a nice piece about the paper in our Decisions Hub’s magazine, Decision Point (check it out).