Look before you leap
In today’s staff blog post, Policy and Communications Officer Dr Liz Harley addresses the concerns about animal research raised by a recent article in the BMJ, and explains how the sector is not only aware of the issues but actively pursuing solutions.
In a recent article for the British Medical Journal, sociologist Pandora Pound and epidemiologist Michael Bracken ask whether “animal research is sufficiently evidence based to be a cornerstone of biomedical research.” They argue that low standards of self-assessment, training and reporting are hampering medical progress and potentially harming patients.
They outline a number of areas in which the sector can improve its practices. It is important to note that these are issues that much of the sector is already aware of, and as a result many of the authors' suggestions for improvement are already being addressed.
In summary, their suggestions are –
“In addition to intensifying the systematic review effort, providing training in experimental design and adhering to higher standards of research conduct and reporting, prospective registration of preclinical studies, and the public deposition of (both positive and negative) findings would be steps in the right direction.”
The criticisms could equally be applied to the entirety of basic research, and as such can be considered fair comment on problems inherent to working science. However, it is disingenuous of the authors to imply that the sector is either unaware of or, worse, actively ignoring these problems. No one would argue that the business of science is perfect, but no one knows that better than the people working in it.
Systematic reviews are an important tool for research. Not only do they give scientists an overview of what has already been done, they can also point to new potential avenues for further study. They can reduce duplication of data, which is important in the case of animal research where duplicated results equal duplicated animal use. The same is also true of publishing negative results; having as much information available as possible enables scientists to make the most informed decisions about their research.
Scientists like Steven Perrin, writing for Nature on the lack of review for mouse models of amyotrophic lateral sclerosis (ALS), are calling for an increase in the number of systematic reviews. These voices are still in the minority, but the 2011 research referenced by Pound and Bracken, in which Korevaar and colleagues estimated that the number of preclinical systematic reviews was doubling every three years, would suggest that this is a problem that the sector is slowly starting to address.
As Dr Michael Festing noted in his 2013 paper “We Are Not Born Knowing How to Design and Analyse Scientific Experiments” –
“Unfortunately, very few scientists get any formal training in the necessary methods… It would be much more cost effective to ensure that scientists are properly trained in the first place.”
This is a genuine issue that the sector is aware of, and as such there are efforts underway to address the problem. In the delivery plan "Working to Reduce the Use of Animals in Scientific Research", the Coalition placed key emphasis on the importance of good experimental design, and outlined a number of initiatives in place to improve and increase the uniformity of practices across the sector. The authors note the efforts of organisations like FRAME and the NC3Rs in this area, but omit to mention that these efforts are endorsed and promoted by the regulators.
The issue of how to get negative results published is a sticky one that causes much head-scratching among publishers and scientists alike. Journals are typically short on space, and rely on subscriptions to make money. As a result they tend to favour ‘sexy’ results that promise to rewrite the rules of science as we know it rather than studies with low or zero impact outcomes. With grant allocations and career progression hinging on publication record, researchers are often forced to put their negative results to one side in favour of those that the journals will actually accept.
Some publishers are seeking to address this problem. There are now journals that exist solely to publish negative results or small datasets that would not otherwise make it into a major journal. But there are not enough of these to accommodate every single negative result ever produced.
Critical appraisal and refinement is important for the continued development of the sector. However, articles that only give a partial overview of the current landscape are hardly conducive to this process. This is perfectly illustrated by one of Pound and Bracken’s final comments, this time focussing on institutional ethical review processes –
“Greater public accountability might be provided by including lay people in some of the processes of preclinical research such as ethical review bodies and setting research priorities.”
In fact the involvement of lay members is common practice within UK institutional Animal Welfare and Ethical Review Bodies (AWERBs). The RSPCA run training courses and produce guides to assist lay members with their work, and actively support and encourage the vital role that they play in providing an external perspective on animal research.
In their conclusion, Pound and Bracken state that –
“…animal research is no longer as immune from challenge or criticism as it once was.”
Animal research is not considered a ‘special’ or separate science from, say, particle physics or chemical engineering. In that respect it is subject to the same internal scrutiny as all scientific processes, and has never been ‘immune’ to criticism. An industry that is based upon asking questions is by its very nature self-critical and self-refining.
We should always be critical of scientific processes, and we should always ask how something could be done better, particularly when animal welfare is at stake. But it is unfair to state the problems without making readers aware of the steps that are being taken to address them.