It was a pleasure to read this paper on the effects of biodiversity on disease prevalence (Suzan et al. 2009. Experimental evidence for reduced rodent diversity causing increased hantavirus prevalence. PLoS ONE 4(5):e5461). The investigators tested whether reduced biodiversity would lead to higher disease prevalence (the proportion of rodents that are infected) via density-dependent transmission (through higher host density) or via frequency-dependent transmission (through more frequent host-to-host encounters). Their experimental treatment "removed" non-competent rodent hosts via trapping, allowing the remaining two species (competent hosts) to increase in abundance through competitive release. They found evidence for both mechanisms: removing non-competent host species increased both the density and the relative frequency of competent hosts, and prevalence in the experimental removal areas increased significantly with both host density and host frequency.
Questions and comments:
-- To where did they remove the non-competent rodents? To "rodent heaven"? Another forest patch?
-- They got their interpretation of Simpson's index of diversity and index of dominance backwards. Dominance, D, is the probability that two individuals drawn at random belong to the SAME species (not different species, as stated by Suzan et al.). It is 1-D (diversity) that is the probability that two individuals belong to different species. 1/D (the inverse, used by Suzan et al. as diversity) has no probabilistic interpretation; it is instead an entropy-like measure, akin to richness and the Shannon-Wiener index, whereas 1-D (the complement) retains the probabilistic interpretation.
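For concreteness, a minimal Python sketch (with made-up abundances) of the three quantities in question; only D and 1-D carry the probabilistic readings described above:

```python
from collections import Counter

def simpson_indices(individuals):
    """Return Simpson's dominance D, diversity 1-D, and inverse 1/D."""
    counts = Counter(individuals)
    n = sum(counts.values())
    p = [c / n for c in counts.values()]
    # D = sum(p_i^2): probability two draws (with replacement) are the SAME species
    D = sum(pi ** 2 for pi in p)
    return D, 1 - D, 1 / D

# Hypothetical community: 4 species with uneven abundances
community = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 5
D, diversity, inverse = simpson_indices(community)
print(D, diversity, inverse)  # D = 0.365, 1-D = 0.635, 1/D ~ 2.74
```

Note that 1/D behaves like an "effective number of species" (here about 2.74, not a probability), which is why it groups with richness and Shannon-Wiener rather than with 1-D.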
-- A standard approach to testing their hypotheses would have been to compare mean prevalences in control vs. treatment plots. That approach would have required that their treatments consistently increased host density and frequency. Instead, they compared the continuous linear relationships between prevalence (y) and density and relative density (x). This required only that their treatment (removals) increased maximum abundance. In this fashion, their treatment merely extended the range of the x-variables, which is a good way to increase the power of a regression. An alternative approach would have been to test mean prevalence, but with a more appropriate error distribution, like the Gamma distribution, which hugs zero and whose variance increases with the mean.
-- I cannot interpret their Table 1. What are the rows? If the rows are treatments, I cannot see what they see. I see no interaction - I simply don't believe it.
-- The caption in figure 4 is incorrect. They have it backwards.
-- This seemed like a really long paper for this journal, although I am not all that familiar with their format. It is seven pages of the smallest font imaginable. I was expecting a Science or Nature-type paper. They included info that I did not think was necessary (e.g., "...and several weeks can go by with no rain at all.")
Very interesting, and the number of mistakes (i) makes me wonder what all of the NINE authors were doing, and (ii) helps me relax, knowing that other people make mistakes too.
Wednesday, November 18, 2009
Tuesday, November 10, 2009
Sir Ronald Fisher vs. Rev. Bayes -- a comment on Kremen, Williams, and Thorp. 2002. Crop pollination...
A great little PNAS paper from 2002 (Kremen et al. 2002. Crop pollination from native bees at risk from agricultural intensification. PNAS 99:16812-16816). They compared management (organic vs. conventional farms) and isolation from natural habitat (near vs. far) with regard to pollinator visitation rates and efficacy.
This would be a nice paper for any undergrad class in ecology (nonmajors or majors).
My only issue is very minor: they state that "the effect of isolation from natural habitat appeared potentially to be more important than that of management" and cite a bunch of P-values from pairwise comparisons. I would argue that importance should be judged by effect size (or possible effect size) and definitely NOT by P-values. Clearly the two are related, and in this case they appear consistent with each other. However, confidence intervals, or better yet, credible intervals, would be better.
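The distinction matters because P-values confound effect size with sample size: a tiny effect measured on a large sample can look just as "significant" as a large effect measured on a small one. A small sketch on simulated (hypothetical) data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Two hypothetical comparisons: a tiny shift with a big sample,
# versus a big shift with a small sample
small_effect = rng.normal(loc=0.1, scale=1.0, size=2000)
large_effect = rng.normal(loc=1.0, scale=1.0, size=30)

for name, x in [("tiny effect, n=2000", small_effect),
                ("large effect, n=30", large_effect)]:
    t, p = stats.ttest_1samp(x, 0.0)
    ci = stats.t.interval(0.95, len(x) - 1, loc=x.mean(), scale=stats.sem(x))
    print(f"{name}: mean={x.mean():.2f}, "
          f"95% CI=({ci[0]:.2f}, {ci[1]:.2f}), p={p:.2g}")
```

Both comparisons can come out "significant," but the confidence intervals make it immediately clear which effect is actually large, which is exactly the information a list of P-values hides.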
A totally uninspiring post....