Published in Vol 5 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/57779.
Conflicts of Interest Publication Disclosures: Descriptive Study

Original Paper

1The Department of Rhetoric & Writing, The University of Texas at Austin, Austin, TX, United States

2The Moody College of Communication, The University of Texas at Austin, Austin, TX, United States

3Department of Communication, The University of Illinois Urbana-Champaign, Champaign, IL, United States

4Department of Communication, North Dakota State University, Fargo, ND, United States

5Department of Neurology & O'Donnell Brain Institute, The University of Texas Southwestern Medical Center, Dallas, TX, United States

Corresponding Author:

S Scott Graham, PhD

The Department of Rhetoric & Writing

The University of Texas at Austin

Parlin Hall 29

Mail Code: B5500

Austin, TX, 78712

United States

Phone: 1 5124759507

Email: ssg@utexas.edu


Background: Multiple lines of previous research have documented that author conflicts of interest (COI) can compromise the integrity of the biomedical research enterprise. However, continued research into why, how, and in what circumstances COI are most risky is stymied by the difficulty of accessing disclosure statements, which are not widely represented in available databases.

Objective: In this study, we describe a new open access dataset of COI disclosures extracted from published biomedical journal papers.

Methods: To develop the dataset, we used ClinCalc’s Top 300 drugs lists for 2017 and 2018 to identify 319 of the most commonly used drugs. Search strategies for each product were developed using the National Library of Medicine’s MeSH (Medical Subject Headings) browser and deployed using the eUtilities application programming interface in April 2021. We identified the 150 most relevant papers for each product and extracted COI disclosure statements from PubMed, PubMed Central, or retrieved full-text papers as necessary.

Results: Conflicts of Interest Publication Disclosures (COIPonD) is a new dataset that captures author-reported COI disclosures for biomedical research papers published in a wide range of journals and subspecialties. COIPonD captures author-reported disclosure information (including lack of disclosure) for over 38,000 PubMed-indexed papers published between 1949 and 2022. The collected papers are indexed by discussed drug products with a focus on the 319 most commonly used drugs in the United States.

Conclusions: COIPonD should accelerate research efforts to understand the effects of COI on the biomedical research enterprise. In particular, this dataset should facilitate new studies of COI effects across disciplines and subspecialties.

JMIR Data 2024;5:e57779

doi:10.2196/57779

Multiple lines of research have documented the effects that author conflicts of interest (COI) can have on the biomedical research enterprise [1-5]. Author COI have been shown to increase the likelihood of positive findings [1] and influence study design [6-9], and they may be associated with diminished product safety [5,10]. To mitigate the effects of COI, journals, universities, professional societies, and academic medical centers have adopted disclosure requirements and policies designed to support transparency around COI [2,11]. Although such transparency endeavors are essential for continued efforts to understand the nature, prevalence, and effects of author COI, research in this area remains limited by the lack of a comprehensive centralized repository of author COI disclosure data for a few reasons [12-15]. First, the vast majority of COI disclosures are not readily available for analysis. Although PubMed has included COI disclosure statements as an available data category since 2017 [16], most journals either do not participate at all or deposit statements for only some papers [17-19]. Related research in scientometrics also indicates that these data are frequently incomplete, incorrect, or both in commercial publication databases such as the Web of Science [13-15]. In the COIPonD dataset, for example, journals deposited disclosure statements for an average of 6.5% (SD 0.21%) of papers, and 86% (3242/3769) of journals deposited no disclosure statements for any paper sought for retrieval. Similarly, the US Centers for Medicare and Medicaid Services Open Payments Database can support research into COI, but not all COIs are reflected in this resource. Second, the data that do exist may be difficult to match with individual journal authors, and third, the Open Payments data include only US health care providers, greatly limiting their applicability to nonproviders, researchers, and those working in other jurisdictions.
In sum, in its current state, much of the published research on COI is grounded in datasets limited to specific subspecialties, disciplines, or publication venues [20,21] due, in part, to the difficulty in accessing disclosure statements.

To address these issues, the Conflicts of Interest Publication Disclosures (COIPonD) dataset provides a comprehensive database of author-reported COI collected from published research pertaining to the most commonly used drug products in the United States. Specifically, the dataset provides author-reported COI data on over 38,000 individual papers published between 1949 and 2022. The entries include specific disclosures and, where relevant, information about the absence of disclosures. The data come from over 3500 English-language journals published across medical subspecialties. COIPonD offers a unique, extensive, and otherwise nearly impossible to obtain collection of disclosed COIs.

The dataset offers numerous advantages for continued and rigorous research on COI. Researchers and policy makers can use these data to improve and expand efforts to assess variability in disclosure requirements and thresholds across multiple publication domains and timelines. In addition, this dataset was developed intentionally to provide a comprehensive view of PubMed-indexed literature. Therefore, COIPonD includes, by design, not only rigorous studies published in high-quality journals but also non–peer-reviewed content and opinion pieces, which have been shown to be a common vector for COI-induced biases [22], and publications in predatory journals, which increasingly influence the state of science through consultation and citation [23]. The dataset spans more than 70 years of publication and includes over 38,000 papers, making it both broad and long-term in scale for this search context. Furthermore, the dataset extends beyond the currently common focus on individual disciplines or subspecialties by focusing instead on a diverse range of drug products. It can also support future research into more effective policy solutions for addressing the risks of COI. For example, existing research suggests that different types of COI involve different risks of bias but has not yet specified the nature of those risks or how they might be mitigated, in part because of the absence of a dataset such as this one [2]. Larger datasets, like this one, are necessary for appropriately powered subsample analyses. Understanding the effects of COI and how to manage them effectively across research contexts is critical for ensuring the integrity of the biomedical research enterprise. This dataset can facilitate the development of knowledge to reach these aims.


Study Identification

As previously mentioned, research on COI generally focuses on datasets of clinical trials or systematic reviews grounded in specific subspecialties or disciplinary journals [2]. For example, evaluations of COI’s effects on research have focused on psychiatry [24], oncology [25], and plastic surgery [26]. In contrast, the aim of our work was to capture information about COI related to the most frequently prescribed drug products across all subspecialties and publication types. In adopting this approach, we sought to mirror common practices for conducting a review of the relevant literature on a specific product. Specifically, for each identified product, we conducted a targeted search of PubMed for the most relevant papers. Our search focused on the most commonly used drug products as indexed by ClinCalc [27]. ClinCalc curates an annually updated list of the Top 300 most commonly used drugs; the list lags 2-3 years behind the current year. Drug use data are derived from ClinCalc’s analysis of the Agency for Healthcare Research and Quality’s annual Medical Expenditure Panel Survey, which surveys US residents on medical drug use (prescription and over-the-counter). To develop the COIPonD dataset, we used ClinCalc’s Top 300 drugs lists for 2017 and 2018 (the latest available at the time of query) to develop a list of target products. As expected, there was significant overlap in the most commonly used drugs between these 2 years, and so the final number of target products was 319.

For each of these 319 products, we used the National Library of Medicine’s (NLM’s) MeSH (Medical Subject Headings) browser to identify the preferred or supplementary concept, as most relevant. The MeSH-controlled vocabulary was selected for this project as it represents the primary ontology used by the NLM to support topic indexing and search retrieval. While alternative vocabularies, such as RxNorm, can be useful for improving interoperability between heterogeneous datasets, those alternative vocabularies are not engineered into the MEDLINE information architecture. MeSH terms group synonymous records in categories called “concepts” [28]. For most products, the generic name is the preferred concept, and associated trade names map automatically to that generic name in the MeSH ontology. In some cases (eg, combination neomycin, polymyxin B, and dexamethasone), the generic names map to a different preferred concept, such as the trade name. Occasionally, the preferred concept does not match the actual product in use, so we identified appropriate supplementary concepts (eg, the preferred concept for levothyroxine is thyroxine). Finally, in a few cases, no preferred or supplementary concepts were available. In these cases, we developed queries that would search for corresponding trade and generic names.

Our overall approach was to target, as closely as possible, highly relevant papers for each drug product. We identified the 150 most relevant papers per product, according to the PubMed relevance algorithm [29]. We intentionally used this algorithm to order search results so as to mirror what researchers would encounter in a typical search. In April 2021, we deployed an iterative query development strategy through which we identified the relevant preferred concept. If a preferred concept was available, we searched narrowly among MeSH terms and assessed whether the available results were sufficiently large to secure a sample of 150 papers. If no preferred concept was available, we attempted to identify a relevant supplementary concept and conducted the search again. In cases where the number of returned results for MeSH terms or supplementary concepts was insufficient (fewer than 150 papers returned), we expanded our search to the title and abstract fields. If the search was still insufficient, we expanded it again to all PubMed fields. Finally, in cases where the generic and trade names were not mapped to each other in MeSH, we conducted independent searches for each and aggregated the results. A complete list of primary and secondary search terms is available in Multimedia Appendix 1. For all searches, we collected up to 200 articles from PubMed, sorted by relevance. In cases where there were multiple searches per product (eg, where MeSH did not map the trade name to the generic name), searches were aggregated, and the number of times each article appeared was used to re-sort the aggregated results by relevance. All searches were executed using Rentrez, an R package designed to access the National Center for Biotechnology Information’s (NCBI’s) eUtilities application programming interface (API) [30]. The final dataset includes up to 150 of the most relevant papers per product.
Some papers in the COIPonD database were identified for inclusion multiple times as part of different product searches. In such cases, the paper’s association with multiple products is recorded in the dataset. A total of 9/319 drug products returned fewer than 50 results even after this iterative search process, and these products were excluded from further data collection.
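As an illustration only, the iterative field-expansion strategy described above might be sketched as follows. The study's pipeline was written in R using Rentrez; this Python sketch queries the same NCBI eUtilities esearch endpoint, and the `count_hits` helper and any example drug query strings are hypothetical.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_hits(term):
    """Number of PubMed records matching a query (live network call)."""
    url = ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0})
    with urlopen(url, timeout=30) as resp:
        return int(json.load(resp)["esearchresult"]["count"])

def widen_query(concept, hits=count_hits, minimum=150):
    """Widen the search field until at least `minimum` records are returned:
    MeSH heading -> title/abstract -> all PubMed fields."""
    for field in ("[MeSH Terms]", "[Title/Abstract]", "[All Fields]"):
        query = concept + field
        if hits(query) >= minimum:
            return query
    return concept + "[All Fields]"  # widest search, even if still sparse
```

The `hits` parameter is injectable so the widening logic can be exercised without network access; in practice each returned query would then be rerun with `retmax` set to 200 and sorted by relevance.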

Data Extraction

Once target papers were identified, we used the eUtilities API to extract available PubMed metadata for each paper. Our disclosure statement data collection protocol proceeded in three steps: (1) in cases where a PubMed-indexed disclosure statement was available, it was included in the final dataset; (2) if no PubMed disclosure statement was available in the metadata, an automated tool we developed attempted to locate and extract the relevant data from PubMed Central; and (3) if target disclosure statements were not available on either PubMed or PubMed Central, the paper was referred for manual collection.
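A minimal sketch of this three-step fallback, in Python for illustration (the study's pipeline was written in R, and the lookup callable here is a placeholder for the automated PubMed Central extraction tool):

```python
def resolve_disclosure(pmid, pubmed_statement, pmc_lookup):
    """Return (provenance, statement) for one paper, following the protocol:
    PubMed <CoiStatement>, then PubMed Central, then manual collection."""
    if pubmed_statement:                  # step 1: PubMed-indexed statement
        return ("pubmed", pubmed_statement)
    pmc_statement = pmc_lookup(pmid)      # step 2: automated PMC extraction
    if pmc_statement:
        return ("pmc", pmc_statement)
    return ("manual", None)               # step 3: refer for manual collection
```

Recording the provenance label alongside each statement is what later allows missing-rate and source analyses of the kind reported in the Results.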

While PubMed indexes all available disclosure statements under the XML tag <CoiStatement>, PubMed Central uses a variety of different possible tags. Therefore, for each paper indexed in PubMed Central, we searched all available XML tags for the following terms: conflict of interest, funding, conflicts of interest, disclosure statement, disclosure, author disclosure statement, and declaration of interest. All PubMed and PubMed Central data extraction was also performed using the Rentrez R package accessing the NCBI eUtilities API. In some cases, the multi-tag query approach led to multiple results being returned for a single paper. Any paper with multiple results was manually reviewed. In cases where the results were duplicates, they were reduced to a single entry. In cases where the results were unique, the results were combined into a single statement. This aggregation was most often required when a given paper’s XML schema separated each author’s disclosure into separate statements rather than combining them.
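The multi-tag scan over PubMed Central XML might be sketched as follows (in Python with the standard library's ElementTree, for illustration only; the study used R, and actual PMC JATS documents carry many more elements and attributes than this toy example):

```python
import xml.etree.ElementTree as ET

TARGET_TERMS = (
    "conflict of interest", "conflicts of interest", "funding",
    "disclosure statement", "disclosure", "author disclosure statement",
    "declaration of interest",
)

def extract_statements(xml_text):
    """Collect text from any element whose tag, fn-type attribute, or section
    title matches a target term; drop exact duplicates and combine unique
    hits (eg, per-author statements) into a single statement."""
    root = ET.fromstring(xml_text)
    hits = []
    for elem in root.iter():
        label = (elem.get("fn-type", "") + " " + elem.tag).lower()
        title = elem.findtext("title", default="").lower()
        if any(t in label or t in title for t in TARGET_TERMS):
            text = " ".join(elem.itertext()).strip()
            if text and text not in hits:   # reduce duplicate results
                hits.append(text)
    return " ".join(hits)                   # combine unique statements
```

In the study, papers yielding multiple unique hits from a scan like this were flagged for manual review rather than combined automatically.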

For each paper that did not have a disclosure statement available on PubMed or PubMed Central, we sought to locate the full text and extract the relevant data. Journal conventions for displaying disclosure statements vary widely, so we adopted the following heading prioritization schema to help ensure more uniform results. Specifically, we extracted all text under any heading of “Conflict of Interests,” “Conflicts of Interest,” “Competing Interests,” or “Duality of Interests.” If no such heading was available, the data extraction team then looked to assess the appropriateness of available information under “Disclosures,” “Funding,” or “Acknowledgements” headings. We extracted all data that specifically mentioned the terms “conflict of interests,” “conflicts of interest,” “competing interests,” or “duality of interests,” and any text that identified author-level financial relationships. We did not collect data from secondary headings if those data only identified study funding sources (as opposed to author-level relationships) or provided more general acknowledgments and statements of appreciation. In some cases, disclosure statements were hyperlinks to authors’ completed ICMJE (International Committee of Medical Journal Editors) forms. In these cases, the link was collected, but we did not seek to evaluate the data in the PDFs. Finally, if no disclosure information was available, the absence of available disclosures was recorded in the dataset.
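The heading prioritization schema for manual full-text extraction can be approximated programmatically. The sketch below (Python, illustrative only) assumes the full text has already been segmented into (heading, body) pairs; that segmentation step is journal-specific and is not shown:

```python
import re

PRIMARY = ("conflict of interests", "conflicts of interest",
           "competing interests", "duality of interests")
SECONDARY = ("disclosures", "funding", "acknowledgements")
COI_CUES = re.compile(
    r"conflicts? of interests?|competing interests|duality of interests",
    re.IGNORECASE)

def extract_disclosure(sections):
    """Apply the heading prioritization schema to a paper segmented into
    (heading, body) pairs."""
    # 1. Take all text under a primary COI heading.
    for heading, body in sections:
        if heading.strip().lower() in PRIMARY:
            return body.strip()
    # 2. Fall back to secondary headings, keeping only bodies that mention
    #    COI terms (checks for author-level financial relationships would
    #    also belong here).
    for heading, body in sections:
        if heading.strip().lower() in SECONDARY and COI_CUES.search(body):
            return body.strip()
    return None  # record the absence of disclosure information
```

Returning `None` rather than an empty string preserves the distinction, important in the dataset, between a negative disclosure and no locatable disclosure at all.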

Quality Assurance

Once data collection was complete, we conducted a quality assurance protocol to evaluate data accuracy (Table 1). This protocol involved first drawing a random sample of papers (N=381). The sample size was determined using the Cochran equation to ensure representativeness at a 95% confidence level and a 5% margin of error [31]. Where possible, the full text of each paper in the sample was located, and relevant data were re-extracted per the same data collection protocol described above. Re-extracted data were compared against previously collected data. This protocol found that data collected for 349/381 (91.6%) of the papers were correct. Among the errors, 16/381 (4.2%) were classified as innocuous incomplete capture that would not adversely affect subsequent analyses; typically, these involved disclosure statements where additional assurances were provided and data entry captured only the reported COI. Another 15/381 (3.9%) involved cases where data would have been partially or entirely missed, and 1/381 (0.3%) was a case where a null result was incorrectly classified (no COI statement instead of non-English).

Table 1. Data collection error rates by error type.

Category                              n (%)
Correct capture                       349 (91.6)
Innocuous incomplete capture          16 (4.2)
Relevant data not captured            15 (3.9)
Incorrectly classified null result    1 (0.3)
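The reported sample size follows from Cochran's formula with a finite population correction. As a sketch (the study cites [31] for the method; the parameter names here are ours):

```python
import math

def cochran_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # correct for finite N
    return math.ceil(n)

print(cochran_sample_size(38_705))  # prints 381
```

With 38,705 papers, a 95% confidence level (z=1.96), a 5% margin of error, and maximum-variance p=0.5, this yields the N=381 sample used in the quality assurance protocol.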

Ethical Considerations

This project does not involve human subjects or animal research, and no ethics review was required.


The total number of papers included in the dataset is 38,705. The final dataset includes papers indexed to 319 unique drug products, and the number of papers located per drug product ranges from 32 to 150 with an average of 139 (SD 18.99). The paper sample size is greater than 100 for 296 of the target drug products. Recall that 9/319 drug products returned fewer than 50 results and were excluded from the study and further data collection. The collected papers were published in 3769 unique journals. Each journal is represented between 1 and 488 times, with an average of 10.27 (SD 23.26) papers per journal. The most commonly represented journals are PLoS One (n=488), Scientific Reports (n=454), BMJ Case Reports (n=351), Medicine (n=319), and the International Journal of Pharmaceutics (n=309). Paper publication years range from 1949 to 2022, with an average publication date of 2016. Notably, the dataset includes 10,134 papers with no locatable disclosure statement. Missing rates vary widely by publication year (14%-100% with an average of 78%). As would be expected, we see a sharp decrease in missing rates as journal COI policies started to proliferate in the 1990s. Figure 1 presents the details (due to the timing of the query, there is only 1 article from 2022 in the dataset).

Notably, the presence of a disclosure statement does not definitively indicate that a COI was disclosed. In many cases, authors disclosed no specific relationships, typically asserting that there were no disclosures or indicating affirmatively that there was nothing to disclose. Accordingly, COIPonD contains approximately 15,500 entries that indicate no conflict existed, including many variations of “none” or “nothing to disclose.” These values are approximate because they are based on an analysis of disclosure statements that repeat exactly 2 or more times in the dataset. It is probable that a number of unique disclosure statements documenting either the absence of disclosure or the absence of COI are available in the data. These likely include author-specific phraseology such as “RV has nothing to disclose.” In terms of the repeated negative disclosures, we document over 600 variations, including “none” (n=1024), “the authors declare no competing interests” (n=430), and “no competing financial interests exist” (n=112). In addition, approximately 200 records contain URLs, and a substantial proportion of these are links to PDF disclosure forms maintained by the publishing journal. This leaves approximately 12,500 entries reporting the presence of COI of some variety.
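The repeated-statement analysis described above can be approximated by grouping exact duplicates after light normalization, for example (an illustrative Python sketch; the normalization rules here are assumptions, not the study's exact procedure):

```python
from collections import Counter

def repeated_statements(disclosures, min_repeats=2):
    """Count exact-duplicate disclosure strings (case-folded, trailing
    period stripped) and keep those appearing `min_repeats`+ times, as in
    the analysis of repeated negative disclosures."""
    normalized = (d.strip().lower().rstrip(".") for d in disclosures if d)
    counts = Counter(normalized)
    return {s: n for s, n in counts.items() if n >= min_repeats}
```

Because author-specific phrasing such as “RV has nothing to disclose” appears only once, it falls outside any exact-duplicate count, which is why the reported totals are approximate.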

Most statements were not available on PubMed or PubMed Central. Of the 38,705 papers for which we sought to retrieve disclosure statements, only 3184 were available on PubMed (8%). Of those not available on PubMed, approximately 10% (3871/38,705) were retrieved from PubMed Central. The vast majority of disclosure statements had to be retrieved manually.

The current release of the COIPonD dataset can be found on its Texas Data Repository page [32]. The data are provided in three data tables: (1) paper metadata, (2) disclosures, and (3) drug look-up (Textbox 1).

Figure 1. Missing disclosure statement rates (%) by year. Fit with a Loess regression model for ease of visualization.
Textbox 1. The article, disclosure, and drug look-up data structures.

Paper metadata

  • PMID: the unique PubMed identifier for each paper.
  • Title: the title of the paper.
  • Journal: the International Organization for Standardization (ISO) journal abbreviation.
  • Pub date: the paper’s publication date.

Disclosure data structure

  • PMID: the unique PubMed identifier for each paper.
  • COI: the author-reported disclosure statement or indication of its absence, in its original form.

Drug look-up data structure

  • PMID: the unique PubMed identifier for each paper.
  • Drug ID: the unique identifier for each drug product.
  • Drug name: the canonical name of each product (uses the name provided on ClinCalc, as derived from the Medical Expenditure Panel Survey).
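The three tables share PMID as their key, with the drug look-up table holding a many-to-many mapping between papers and products. A typical linkage might look like the following (an illustrative sketch with an in-memory SQLite database; the schema and rows are hypothetical, and the repository release defines the actual file layout):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE metadata    (pmid INTEGER PRIMARY KEY, title TEXT,
                          journal TEXT, pub_date TEXT);
CREATE TABLE disclosure  (pmid INTEGER, coi TEXT);
CREATE TABLE drug_lookup (pmid INTEGER, drug_id INTEGER, drug_name TEXT);
""")
conn.executemany("INSERT INTO metadata VALUES (?,?,?,?)",
                 [(1, "A", "PLoS One", "2018"), (2, "B", "Sci Rep", "2019")])
conn.executemany("INSERT INTO disclosure VALUES (?,?)",
                 [(1, "None declared"), (2, "AB reports grants")])
conn.executemany("INSERT INTO drug_lookup VALUES (?,?,?)",
                 [(1, 10, "Metformin"), (1, 11, "Lisinopril"),
                  (2, 10, "Metformin")])

# Papers per drug with their disclosure statements, joined on the shared PMID.
rows = conn.execute("""
SELECT d.drug_name, m.title, s.coi
FROM drug_lookup AS d
JOIN metadata    AS m USING (pmid)
JOIN disclosure  AS s USING (pmid)
ORDER BY d.drug_name
""").fetchall()
```

A paper identified by multiple product searches (pmid 1 here) simply appears once per product in the drug look-up table, which is how the dataset records multi-product associations.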

COIPonD indexes COI disclosure statements for the 38,705 most relevant papers related to 319 unique drug products that were the most commonly used in the United States in 2017 and 2018. The data come from journals across medical subspecialties. The dataset is a unique, extensive collection of disclosed COI that has otherwise been very difficult, if not impossible, to obtain. When combined with the associated article metadata and relevant product table, these data can serve as a foundation for new research on COI and the subsequent development of evidence-based COI management policies. COIPonD affords a range of benefits to investigators and can support a variety of study designs. Primarily, it offers researchers and policy makers an assortment of data for assessing potential bias and differential effects based on the type of COI. Results from these studies might be used to develop or implement evidence-based COI policy reform in biomedical publishing. Because the dataset focuses on author-level disclosures across a diverse range of drug products, it is possible to evaluate drug safety and risk in relation to disclosed COI and by type of COI [5]. COIPonD could be used to establish an evidence-based understanding of variability in disclosure requirements and authors’ understandings of those requirements.

In addition, the data can assist government and institutional policy makers in making better-informed, evidence-based decisions about implementing new disclosure policies. Much of the current research on COI policy is limited to comparisons of (1) the presence or absence of COI and (2) associated study features, such as positive results or methodological rigor. In contrast, the organization of this dataset provides opportunities for more granular and rigorous examinations of outcomes associated with different COI types. Subsequent analysis with COIPonD data could help address persistent and misleading calls for yet more evidence that COI has adverse effects on the biomedical research enterprise. As such, it can facilitate more robust COI research and reform. Associated metadata about disclosure statement provenance may also be helpful in evaluating the transparency of certain publications, publishers, and government transparency initiatives. These features of the dataset are relevant to researchers interested in developing an effective public management system for COI disclosure. For example, researchers might use it to conduct more comprehensive audits of studies comparing disclosed COI to data in the Open Payments Database. In addition, research using these data may contribute to the development of more robust transparency and reporting frameworks and data collection and aggregation systems. For example, while research has suggested the US NLM and the ORCID (Open Researcher and Contributor ID) researcher registry are particularly suited to hosting a public registry, this dataset provides a starting point for those interested in building the necessary infrastructure and systems to support new institutional practices by making disclosures more accessible and streamlined in PubMed and PubMed Central [12,17].

These data were collected to pursue specific research questions and thus are not suitable for use in all study designs. For example, research on industry influence on the biomedical enterprise may wish to consider the effects of study sponsorship in addition to the disclosed COI. This dataset does not include information specifically related to study funding or sponsorship but may contribute to such research in combination with other data sources. The data are also limited to the articles related to the most commonly used drug products in the United States at a specific time (2017-2018). Data on new, less commonly used products and medical devices are not available. In particular, less frequently prescribed but more expensive biological products are not included in this dataset, even though their economic effects may be significant and are an important direction for future work. Although these are important components of broader research efforts related to industry influence on the biomedical research enterprise, they fall beyond the scope of the research effort that produced this dataset. The papers included in this dataset are limited to those available through a PubMed search at the time the query was initiated (April 2021). As millions of new papers are published each year and considering ongoing efforts of drug repurposing, we expect the COI picture to change with time, even for the same drug products.

Finally, the results of this research point to the need for more robust data management approaches for collecting information about COI disclosures. Although PubMed has included COI disclosure statements in its system since 2017, participation is optional and therefore low. PubMed Central offers some additional access to disclosure statements but is reliant on individual publisher data structures, leading to inconsistent field names for available disclosures. The fee-based Web of Science offers a further resource but has been noted for similar information gaps and low data quality with respect to COI [13-15]. These serious gaps in data availability required substantial human labor to complete the COIPonD dataset. It would be a significant benefit to the medical, bioethics, and scientometric research communities if repositories could explore ways to incentivize or even compel more complete participation from major publishers. For example, participation in citation metrics might be made contingent on the full reporting of COI data. Researchers and policy makers alike would benefit from ready access to COI disclosures as well as the ability to filter results based on the presence or absence of these relationships. However, even with greater participation in COI reporting systems, additional computational or labor investments would be required for researchers to make the most productive use of available data. We would, therefore, also endorse previous recommendations [18,33] to require COI disclosure reporting in structured formats that cleanly identify and link funders, recipients, and mechanisms of disbursement.
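A structured disclosure record of the kind recommended here might look like the following (a hypothetical sketch, not an existing standard; the field names are ours, and the ORCID iD shown is ORCID's published example identifier):

```json
{
  "pmid": "12345678",
  "author_orcid": "0000-0002-1825-0097",
  "funder": "Example Pharma Inc.",
  "mechanism": "consulting fees",
  "period": "2019-2021"
}
```

Records in this shape would let repositories link funders, recipients, and disbursement mechanisms directly, and would let researchers filter papers on the presence or absence of specific relationship types without manual extraction.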

Acknowledgments

Dataset development was supported by the National Institute of General Medical Sciences of the National Institutes of Health (award R01GM141476).

Code Availability

The PubMed and PubMed Central query and data extraction code base is available at GitHub [34].

Authors' Contributions

SSG, ZPM, JBB, and JFR contributed to conceptualization and project administration. SSG, NS, and JSE performed data curation and validation and contributed to writing-original draft. SSG and NS conducted formal analysis and assisted with the software. SSG, ZPM, JBB, and JFR assisted with funding acquisition. SSG contributed to methodology, supervision, and visualization. SSG, ZPM, JBB, JFR, NS, and JSE assisted with writing-review and editing.

Conflicts of Interest

SSG reports grant funding from the National Institute of General Medical Sciences (NIGMS) and the Texas Health and Human Services Commission. ZPM reports grant funding from NIGMS. JBB reports grant funding from NIGMS, NSF, and Blue Cross Blue Shield/Health Care Service Corporation. JFR reports grant funding from NIGMS, NIMH, NIAID, NLM, Health Care Cost Institute, Austin Public Health, Texas Child Mental Health Care Consortium, Texas Alzheimer’s Research and Care Consortium, and the Michael and Susan Dell Foundation. JFR also reports receiving an award from the NIH Division of Loan Repayment.

Multimedia Appendix 1

Primary and secondary search queries for included products.

DOCX File , 48 KB

  1. Lundh A, Lexchin J, Mintzes B, Schroll JB, Bero L. Industry sponsorship and research outcome. Cochrane Database Syst Rev. 2017;2(2):MR000033. [FREE Full text] [CrossRef] [Medline]
  2. Graham SS, Karnes MS, Jensen JT, Sharma N, Barbour JB, Majdik ZP, et al. Evidence for stratified conflicts of interest policies in research contexts: a methodological review. BMJ Open. 2022;12(9):e063501. [FREE Full text] [CrossRef] [Medline]
  3. Bero LA, Rennie D. Influences on the quality of published drug studies. Int J Technol Assess Health Care. 1996;12(2):209-237. [CrossRef] [Medline]
  4. Waqas A, Baig AA, Khalid MA, Aedma KK, Naveed S. Conflicts of interest and outcomes of clinical trials of antidepressants: an 18-year retrospective study. J Psychiatr Res. 2019;116:83-87. [CrossRef] [Medline]
  5. Graham SS, Majdik ZP, Barbour JB, Rousseau JF. Associations between aggregate NLP-extracted conflicts of interest and adverse events by drug product. Stud Health Technol Inform. 2022;290:405-409. [FREE Full text] [CrossRef] [Medline]
  6. Fraguas D, Díaz-Caneja CM, Pina-Camacho L, Umbricht D, Arango C. Predictors of placebo response in pharmacological clinical trials of negative symptoms in schizophrenia: a meta-regression analysis. Schizophr Bull. 2019;45(1):57-68. [FREE Full text] [CrossRef] [Medline]
  7. Gao Y, Ge L, Ma X, Shen X, Liu M, Tian J. Improvement needed in the network geometry and inconsistency of Cochrane network meta-analyses: a cross-sectional survey. J Clin Epidemiol. 2019;113:214-227. [CrossRef] [Medline]
  8. Kapelios CJ, Naci H, Vardas PE, Mossialos E. Study design, result posting, and publication of late-stage cardiovascular trials. Eur Heart J Qual Care Clin Outcomes. 2022;8(3):277-288. [CrossRef] [Medline]
  9. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003;326(7400):1167-1170. [FREE Full text] [CrossRef] [Medline]
  10. Gyawali B, Tessema FA, Jung EH, Kesselheim AS. Assessing the justification, funding, success, and survival outcomes of randomized noninferiority trials of cancer drugs: a systematic review and pooled analysis. JAMA Netw Open. 2019;2(8):e199570. [FREE Full text] [CrossRef] [Medline]
  11. Mialon M, Vandevijvere S, Carriedo-Lutzenkirchen A, Bero L, Gomes F, Petticrew M, et al. Mechanisms for addressing and managing the influence of corporations on public health policy, research and practice: a scoping review. BMJ Open. 2020;10(7):e034082. [FREE Full text] [CrossRef] [Medline]
  12. Dunn AG. Set up a public registry of competing interests. Nature. 2016;533(7601):9. [CrossRef] [Medline]
  13. Lewison G, Sullivan R. Conflicts of interest statements on biomedical papers. Scientometrics. 2014;102(3):2151-2159. [CrossRef]
  14. Yegros YA, Lamers W, Díaz-Faes AA. Research integrity at stake: conflicts of interest and industry ties in scientific publications. 2022. URL: https://digital.csic.es/handle/10261/304326 [accessed 2024-08-23]
  15. Álvarez-Bornstein B, Bordons M. Industry involvement in biomedical research: authorship, research funding and conflicts of interest. In: 17th International Conference on Scientometrics & Informetrics, Proceedings Volume II. 2019. Presented at: 17th International Conference on Scientometrics & Informetrics; 2-5 September 2019:1746-1751; Rome, Italy. URL: https://digital.csic.es/bitstream/10261/240184/1/ISSI-2019-Industry_involvement_in_biomedical_research_authorship%2C_research_funding_and_conflicts_of_interest.pdf
  16. Collins M. NLM Technical Bulletin. U.S. National Library of Medicine. 2017. URL: https://www.nlm.nih.gov/pubs/techbull/tb.html [accessed 2022-08-20]
  17. Graham SS, Majdik ZP, Clark D, Kessler MM, Hooker TB. Relationships among commercial practices and author conflicts of interest in biomedical publishing. PLoS One. 2020;15(7):e0236166. [FREE Full text] [CrossRef] [Medline]
  18. Grundy Q, Imahori D, Mahajan S, Garner G, Timothy R, Sud A, et al. Cannabis companies and the sponsorship of scientific research: a cross-sectional Canadian case study. PLoS One. 2023;18(1):e0280110. [FREE Full text] [CrossRef] [Medline]
  19. Falciola L, Barbieri M. Disclosure of patenting activities within scientific publications as potential conflicts-of-interest: evidences from biomedical literature. World Patent Information. 2024;76:102251. [CrossRef]
  20. Tisherman RT, Wawrose RA, Chen J, Donaldson WF, Lee JY, Shaw JD. Undisclosed conflict of interest is prevalent in Spine literature. Spine (Phila Pa 1976). 2020;45(21):1524-1529. [CrossRef] [Medline]
  21. Probst P, Hüttner FJ, Klaiber U, Diener MK, Büchler MW, Knebel P. Thirty years of disclosure of conflict of interest in surgery journals. Surgery. 2015;157(4):627-633. [CrossRef] [Medline]
  22. Nejstgaard CH, Bero L, Hróbjartsson A, Jørgensen AW, Jørgensen KJ, Le M, et al. Association between conflicts of interest and favourable recommendations in clinical guidelines, advisory committee reports, opinion pieces, and narrative reviews: systematic review. BMJ. 2020;371:m4234. [FREE Full text] [CrossRef] [Medline]
  23. Oermann MH, Nicoll LH, Ashton KS, Edie AH, Amarasekara S, Chinn PL, et al. Analysis of citation patterns and impact of predatory sources in the nursing literature. J Nurs Scholarsh. 2020;52(3):311-319. [CrossRef] [Medline]
  24. Ahmer S, Arya P, Anderson D, Faruqui R. Conflict of interest in psychiatry. Psychiatr Bull. 2018;29(8):302-304. [CrossRef]
  25. Hampson LA, Joffe S, Fowler R, Verter J, Emanuel EJ. Frequency, type, and monetary value of financial conflicts of interest in cancer clinical research. J Clin Oncol. 2007;25(24):3609-3614. [CrossRef] [Medline]
  26. Lopez J, Musavi L, Quan A, Calotta N, Juan I, Park A, et al. Trends, frequency, and nature of surgeon-reported conflicts of interest in plastic surgery. Plast Reconstr Surg. 2017;140(4):852-861. [CrossRef] [Medline]
  27. The top 300 of 2020. ClinCalc DrugStats. URL: https://clincalc.com/DrugStats/Top300Drugs.aspx [accessed 2023-09-21]
  28. Concept Structure in MeSH. U.S. National Library of Medicine. URL: https://www.nlm.nih.gov/mesh/concept_structure.html [accessed 2023-10-04]
  29. Updated algorithm for the PubMed best match sort order. U.S. National Library of Medicine. URL: https://www.nlm.nih.gov/pubs/techbull/tb.html [accessed 2023-10-04]
  30. Winter DJ. rentrez: an R package for the NCBI eUtils API. The R Journal. 2017. URL: https://journal.r-project.org/archive/2017/RJ-2017-058/RJ-2017-058.pdf [accessed 2024-09-27]
  31. Cochran WG. Sampling Techniques. New York, NY: John Wiley & Sons; 1977.
  32. Graham SS, Sharma N, Barbour JB, Edward JS, Majdik ZP, Rousseau JF. Texas Data Repository. Conflicts of interest publication disclosures (COIPonD) data set. 2023. URL: https://dataverse.tdl.org/dataset.xhtml?persistentId=doi:10.18738/T8/GBSTTH [accessed 2024-09-27]
  33. Dunn AG, Coiera E, Mandl KD, Bourgeois FT. Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency. Res Integr Peer Rev. 2016;1(1):1. [FREE Full text] [CrossRef] [Medline]
  34. GitHub. COIPonD. URL: https://github.com/sscottgraham/COIPonD [accessed 2024-09-26]


API: application programming interface
COI: conflicts of interest
COIPonD: Conflicts of Interest Publication Disclosures
ICMJE: International Committee of Medical Journal Editors
MeSH: Medical Subject Headings
NCBI: National Center for Biotechnology Information
NLM: National Library of Medicine
ORCID: Open Researcher and Contributor ID


Edited by A Coristine; submitted 26.02.24; peer-reviewed by T Burke, Y Jiang; comments to author 22.08.24; revised version received 28.08.24; accepted 30.08.24; published 31.10.24.

Copyright

©S Scott Graham, Jade Shiva, Nandini Sharma, Joshua B Barbour, Zoltan P Majdik, Justin F Rousseau. Originally published in JMIR Data (https://data.jmir.org), 31.10.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Data, is properly cited. The complete bibliographic information, a link to the original publication on https://data.jmir.org/, as well as this copyright and license information must be included.