Recently, it has come to light that the oil company Exxon has known since the 1970s about the risks its products posed to the climate. Although its own internally produced research accurately predicted global warming and its effects, the company publicly denied such claims to protect its industry. Corporate practices like these are compounded by personal cases in which researchers publish data and papers that are false or inaccurate, as happened with the psychologist Diederik Stapel: to advance his own career, he fabricated or falsified almost 50 of his published studies. These examples, taken with many others, help explain an understandable erosion of trust in science. They also point to a particular concern about how science is produced; more specifically, about how conflicts of interest affect the science that is conducted and promoted or, as in the case of Exxon, hidden from view.
As part of the Inspiring and Anchoring Trust in Science project, members of our team have been looking into different issues surrounding conflicts of interest. This work examines which conflicts of interest can arise, what they entail, and, most importantly, how they may be avoided or responded to appropriately. This matters for minimising risks to research and to the integrity of researchers, as well as for protecting people's perception of research and their trust in science.
In addition to the corporate or personal interests indicated in the examples above, even processes presumed to be less biased or less subject to external influence have proven not to be free from conflicts of interest. The selection process for scientific publications is itself increasingly under scrutiny. Peer review, the standard mechanism of the publication process, has shown itself to have significant weaknesses: reviewers often bring their own biases into the process, stemming from personal or business relationships or political perspectives. In addition, research results can be appropriated by third parties. Findings from psychological research, for example, could be used by others to develop aggressive interrogation techniques, including torture. The fact that the intentions of third parties can differ greatly from the scientific aims of researchers is one of the risks of the so-called dual use of research data.
All of these issues have been further compounded by technological developments in recent years, which often complicate research integrity and create new or shifting types of conflicts of interest within scientific fields. The use of AI programmes such as ChatGPT, for instance, makes it easier to write fake research results in a convincing way. An awareness of the ways in which technology affects scientific developments is therefore critical in discussions about levels of trust in science. In this work, the project team aims to collect potential solutions that will minimise conflicts across different areas of science, from medical research to the social sciences to technological innovation. We want to understand how these types of conflicts, from fraud to funding, together shape people's trust in science, and to mitigate their negative effects where possible.