Essential 10

7. Statistical methods

Provide details of the statistical methods used for each analysis, including software used.

Explanation

The statistical analysis methods implemented will reflect the goals and the design of the experiment; they should be decided in advance, before data are collected (see item 19 – Protocol registration). Both exploratory and hypothesis-testing studies might use descriptive statistics to summarise the data (e.g. mean and SD, or median and range). In exploratory studies where no specific hypothesis was tested, reporting descriptive statistics is important for generating new hypotheses that may be tested in subsequent experiments, but it does not allow conclusions beyond the data. In addition to descriptive statistics, hypothesis-testing studies might use inferential statistics to test a specific hypothesis.
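
As an illustration only, the minimal sketch below (Python, with simulated data) shows this distinction: descriptive statistics summarise each group, while an inferential test addresses a hypothesis specified in advance. The data, group names and choice of test are assumptions made for the example, not a recommendation.

```python
# Hypothetical sketch: descriptive statistics for each group, then an
# inferential test of a pre-specified hypothesis. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=10.0, scale=2.0, size=12)  # illustrative outcome values
treated = rng.normal(loc=12.0, scale=2.0, size=12)

# Descriptive statistics: appropriate for exploratory and hypothesis-testing studies alike
for name, grp in [("control", control), ("treated", treated)]:
    print(f"{name}: mean={grp.mean():.2f}, SD={grp.std(ddof=1):.2f}, "
          f"median={np.median(grp):.2f}, range=({grp.min():.2f}, {grp.max():.2f})")

# Inferential statistics: only meaningful when the hypothesis was specified in advance
t, p = stats.ttest_ind(treated, control, equal_var=False)
print(f"Welch's t-test: t={t:.2f}, p={p:.4f}")
```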

Reporting the analysis methods in detail is essential to ensure readers and peer-reviewers can assess the appropriateness of the methods selected and judge the validity of the output. The description of the statistical analysis should provide enough detail so that another researcher could re-analyse the raw data using the same method and obtain the same results. Make it clear which method was used for which analysis.

Analysing the data using different methods and selectively reporting those with statistically significant results constitutes p-hacking and introduces bias in the literature [1,2]. Report all analyses performed in full. Relevant information to describe the statistical methods includes:

  • the outcome measures
  • the independent variables of interest
  • the nuisance variables taken into account in each statistical test (e.g. as blocking factors or covariates)
  • what statistical analyses were performed and references for the methods used
  • how missing values were handled
  • adjustment for multiple comparisons
  • the software package and version used, including computer code if available [3] (the sketch after this list illustrates the last two points)
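
As one hypothetical illustration of the last two points, the sketch below adjusts a set of p-values for multiple comparisons (using the Holm method via statsmodels, as an example choice) and prints the package versions that would be reported alongside the methods. The p-values and the chosen adjustment method are placeholders.

```python
# Hypothetical sketch: adjust p-values for multiple comparisons and record
# the software versions used, so the analysis can be reported and reproduced.
import scipy
import statsmodels
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.021, 0.048, 0.310]  # illustrative per-comparison p-values
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print("Holm-adjusted p-values:", p_adjusted.round(3))

# Report package names and versions alongside the statistical methods
print("scipy", scipy.__version__, "| statsmodels", statsmodels.__version__)
```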

The outcome measure is potentially affected not only by the treatments or interventions being tested but also by other factors, such as the properties of the biological samples (sex, litter, age, weight, etc.) and technical considerations (cage, time of day, batch, experimenter, etc.). To reduce the risk of bias, some of these factors can be taken into account in the design of the experiment, for example by using blocking factors in the randomisation (see item 4 – Randomisation). Factors deemed to affect the variability of the outcome measure should also be handled in the analysis, for example as a blocking factor (e.g. batch of reagent or experimenter) or as a covariate (e.g. starting tumour size at the point of randomisation).
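
A minimal sketch of this idea, assuming a formula-based linear model in statsmodels with made-up data: 'batch' enters as a blocking factor and 'baseline' (e.g. starting tumour size) as a covariate, alongside the treatment of interest. The variable names, data and model are illustrative only.

```python
# Hypothetical sketch: a nuisance variable (batch) as a blocking factor and
# a covariate (baseline) included in the analysis alongside the treatment.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome":   [4.1, 5.2, 6.0, 5.5, 7.1, 6.4, 4.8, 6.9],
    "treatment": ["ctrl", "ctrl", "drug", "drug", "drug", "drug", "ctrl", "ctrl"],
    "batch":     ["A", "B", "A", "B", "A", "B", "A", "B"],
    "baseline":  [1.2, 1.5, 1.1, 1.4, 1.6, 1.3, 1.2, 1.5],
})

model = smf.ols("outcome ~ C(treatment) + C(batch) + baseline", data=df).fit()
print(model.summary())
```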

Furthermore, to conduct the analysis appropriately, it is important to recognise the hierarchy that can exist in an experiment. The hierarchy can induce a clustering effect; for example, cage, litter or animal effects can occur where the outcomes measured for animals from the same cage/litter, or for cells from the same animal, are more similar to each other. This relationship has to be managed in the statistical analysis by including cage/litter/animal effects in the model or by aggregating the outcome measure to the cage/litter/animal level. Thus, describing the reality of the experiment and the hierarchy of the data, along with the measures taken in the design and the analysis to account for this hierarchy, is crucial to assessing whether the statistical methods used are appropriate.
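
The hypothetical sketch below shows both options for a cage-level hierarchy: aggregating the outcome to one value per cage, or keeping animal-level data and including a random cage effect in a mixed model (here via statsmodels, as an example). The data and grouping structure are invented for illustration.

```python
# Hypothetical sketch: two ways to respect clustering of animals within cages.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cage":      ["c1"] * 4 + ["c2"] * 4 + ["c3"] * 4 + ["c4"] * 4,
    "treatment": ["ctrl"] * 8 + ["drug"] * 8,
    "outcome":   [5.1, 5.3, 4.9, 5.2, 6.0, 6.2, 5.8, 6.1,
                  7.0, 7.3, 6.8, 7.1, 6.4, 6.6, 6.3, 6.5],
})

# Option 1: aggregate the outcome to the cage level, then analyse cage means
cage_means = df.groupby(["cage", "treatment"], as_index=False)["outcome"].mean()
print(cage_means)

# Option 2: keep animal-level data and include cage as a random effect
mixed = smf.mixedlm("outcome ~ C(treatment)", data=df, groups=df["cage"]).fit()
print(mixed.summary())
```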

For bespoke analysis, for example regression analysis with many terms, it is essential to describe the analysis pipeline in detail. This could include detailing the starting model and any model simplification steps.
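
For example, such a pipeline description might be accompanied by code along the lines of the sketch below (hypothetical data and variables), in which a starting model containing an interaction is compared with a reduced model, and the comparison itself is reported rather than only the final model.

```python
# Hypothetical sketch: document a model-simplification step by reporting the
# comparison between the starting (full) model and the reduced model.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "outcome": [3.9, 4.4, 5.1, 5.6, 6.2, 6.8, 4.1, 4.7, 5.3, 5.9, 6.4, 7.0],
    "dose":    [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
    "sex":     ["F", "F", "F", "M", "M", "M"] * 2,
})

full    = smf.ols("outcome ~ dose * sex", data=df).fit()   # starting model
reduced = smf.ols("outcome ~ dose + sex", data=df).fit()   # after dropping the interaction
print(anova_lm(reduced, full))                             # report the comparison, not just the final model
```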

When reporting descriptive statistics, explicitly state which measure of central tendency is reported (e.g. mean or median) and which measure of variability is reported (e.g. standard deviation, range, quartiles or interquartile range). Also describe any modification made to the raw data before analysis (e.g. relative quantification of gene expression against a housekeeping gene). For further guidance on statistical reporting, refer to the SAMPL (Statistical Analyses and Methods in the Published Literature) guidelines [4].
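
A small hypothetical sketch of both points: target-gene expression is normalised to a housekeeping gene before analysis, and the descriptive statistics reported (median and interquartile range) are stated explicitly. The gene names and values are placeholders.

```python
# Hypothetical sketch: a pre-analysis transformation (normalisation to a
# housekeeping gene) followed by clearly stated descriptive statistics.
import numpy as np

target       = np.array([220., 340., 410., 290., 515.])   # raw target-gene signal
housekeeping = np.array([100., 160., 190., 140., 250.])   # raw housekeeping-gene signal

relative = target / housekeeping                           # modification made before analysis
q1, med, q3 = np.percentile(relative, [25, 50, 75])
print(f"relative expression: median={med:.2f}, IQR={q1:.2f}-{q3:.2f}")
```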

 

References

  1. Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, Howells DW, Al-Shahi Salman R, Macleod MR and Ioannidis JP (2013). Evaluation of excess significance bias in animal studies of neurological diseases. PLoS Biol. doi: 10.1371/journal.pbio.1001609
  2. Head ML, Holman L, Lanfear R, Kahn AT and Jennions MD (2015). The Extent and Consequences of P-Hacking in Science. PLOS Biology. doi: 10.1371/journal.pbio.1002106
  3. British Ecological Society (2017). A guide to reproducible code in ecology and evolution. Available at: https://www.britishecologicalsociety.org/wp-content/uploads/2017/12/guide-to-reproducible-code.pdf
  4. Lang TA and Altman DG (2015). Basic statistical reporting for articles published in biomedical journals: the "Statistical Analyses and Methods in the Published Literature" or the SAMPL Guidelines. Int J Nurs Stud. doi: 10.1016/j.ijnurstu.2014.09.006