Use of Bibliometric Methods to Verify the Credentials of Expert Witnesses in Forensic Science and Legal Medicine and the Dilemma of Multiple-authorship


Author Information
Division of Clinical Chemistry and Pharmacology, Department of Biomedical and Clinical Sciences, Faculty of Medicine and Health Sciences, University of Linköping, SE-58183 Linköping, Sweden
* Author to whom correspondence should be addressed.
Perspectives in Legal and Forensic Sciences 2024, 1 (1), 10002;  https://doi.org/10.35534/plfs.2024.10002

Received: 13 September 2023 Accepted: 01 November 2023 Published: 07 November 2023


© 2024 by the authors; licensee SCIEPublish, SCISCAN co. Ltd. This article is an open access article distributed under the CC BY license (https://creativecommons.org/licenses/by/4.0/).

ABSTRACT: Expert testimony is an important part of criminal and civil litigation whenever scientific evidence needs to be explained and interpreted for the judge and jury. Those appearing in court as expert witnesses must possess the necessary qualifications, skill, training, and experience for the task in hand. Unlike a lay-witness, an expert witness is allowed to render an opinion based on their own specialized knowledge and research. In the adversarial system of justice, expert witnesses are hired by opposing sides in a case, and this causes confusion when they disagree about the strengths and weaknesses of the scientific evidence presented. Choosing the best expert witness is often a difficult task, and making a wrong decision has sometimes led to wrongful convictions and miscarriages of justice. Success in science is tightly linked to the quantity and quality of a person’s scholarly publications in academic journals in some particular area of research and scholarship. This article describes the use of a publicly available citation database to investigate the publication and citation records of British forensic practitioners with “legal and forensic medicine” (LFM) as their primary or secondary research category. How to attribute credit to individual names on multi-authored articles is a major problem in science and academia. Six different citation metrics, including authorship position on highly cited articles, were used to derive a composite citation score (c-score) for each highly cited scientist. Such bibliometric methods could prove useful in jurisprudence when reviewing the qualifications of people suggested to serve as expert witnesses in court cases.
Keywords: Authorship; Bibliometrics; Citation analysis; Expert evidence; Forensic science; Legal medicine; Scholarly publishing

1. Introduction

Quantitative evaluation of scholarly publications in terms of authorship and citations belongs to the discipline of bibliometrics or scientometrics [1,2]. Citation databases are commonly used in academia when university staff are considered for promotion (tenure) from assistant to associate professor, or when decisions are made about research funding and/or the award of prizes and scholarships [3,4,5]. Besides counting the number of articles listed on a person’s CV, which reflects productivity, a more important metric is the number of times these papers are cited in articles penned by other scientists [6,7]. The total number of citations to a person’s publications and their citation impact (cites/article) are considered quality indicators of prestige and recognition in a particular research field or subject category. A citation is a form of acknowledgment and draws attention to certain information contained in a scholarly publication, such as the methodology used, the experimental results, and/or the interpretation and conclusions reached [8,9]. Examples of citation databases include Web of Science, SCOPUS, ResearchGate, and Google Scholar, all of which can be used to verify a person’s publication track record [10,11]. The first two of these require a subscription, whereas the latter two are gratis to use. The results obtained using these various databases differ somewhat, depending on the particular journal coverage and whether or not self-citations are included in the metrics. The latter occurs when a person cites one of their own previous publications. A new approach to evaluating a scientist’s publications was devised by a team of researchers from Stanford University, under the leadership of John Ioannidis, who introduced the concept of a composite citation score (c-score) derived using six different citation metrics [12,13]. Each metric was subjected to a (log + 1) transformation, and a mathematical formula was then used to calculate each person’s c-score.
This provides a single number that can be used as a yardstick to compare and contrast the career-long contributions of scientists belonging to different subject categories or research domains. One special feature of the Stanford University citation database was the consideration given to the pattern of co-authorship and whether the person was listed as a single author, first author, or last author on the scholarly publication. To the best of my knowledge, citation databases have not previously been used in jurisprudence, such as when the qualifications of people proposed as expert witnesses are considered. According to the US Supreme Court ruling in the case of Daubert v. Merrell Dow Pharmaceuticals, “peer review and publication” are important criteria for admission of scientific evidence [14,15]. However, the court also opined that publication was not the sine qua non of admissibility, probably recognizing that peer review is not infallible and some scientific journals have more rigorous manuscript peer review than others [16]. In this age of electronic journals and on-line open access publishing, there has been an upsurge of new scientific journals, and some of these have gained a dubious reputation, being referred to as predatory journals. They bombard scientists with unsolicited e-mails begging them to submit their next article for publication, often expecting a hefty open-access publication fee [17,18]. Also known as opportunistic journals, they seem more interested in making money than in advancing knowledge and scholarly publishing [19]. Furthermore, the peer-reviewing of manuscripts submitted for publication in these journals has been called into question [20]. When presenting scientific evidence in court, it is obviously important to know something about the prestige of the journal where an article was published and whether it was written or researched by the expert witness who interprets this evidence.
In this connection, it would also be useful to know how many citations the article tendered in evidence had received since it was first published [21,22,23]. Highly cited articles are considered more authoritative than those that are seldom or never cited. Within all scientific disciplines, some articles are more highly regarded than others and are referred to as citation classics [24,25]. The basic premise is that the more citations a paper accrues over time, the more influential it has become in the eyes of the relevant scientific community [26]. Measuring scholarly impact in criminology and criminal justice was the subject of a recent special issue of a journal [27], although not much attention has been given to the use of citation databases for evaluating specialists in LFM. The present article describes the use of a citation database to evaluate the publication and citation records of millions of publishing scientists using information gleaned from the SCOPUS database. This database was used to identify the most highly cited British forensic practitioners (highest c-score) within the subject category LFM. Also discussed is the age-old problem in academia of attributing credit to individual names on journal articles with multiple authors.
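Based on the verbal description in refs [12,13], the c-score combines six (log + 1)-transformed citation metrics, each normalized against the highest value of that metric within the person’s subject category. A sketch of the formula follows; the exact normalization used in the published database should be checked against ref [12]:

```latex
% x_i       : the i-th of the six citation metrics for a given scientist
% x_i^{max} : the largest value of that metric in the same subject category
c\text{-score} = \sum_{i=1}^{6} \frac{\ln(1 + x_i)}{\ln\bigl(1 + x_i^{\max}\bigr)}
```

Each of the six fractions lies between 0 and 1, so the c-score can range from 0 to 6, which is consistent with the scores between 2.650 and 3.803 reported later for the British cohort.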

2. Methods

2.1. Citation Databases

The citation database used to prepare this article was constructed from the information contained in Elsevier’s SCOPUS database, and the results were available as a series of downloadable EXCEL files [12]. SCOPUS contains publication and citation data derived from thousands of academic journals that began publishing after 1960. Included in the database were ~7 million scientists with at least five entries in SCOPUS. These individuals were classified into 22 scientific fields and 176 sub-fields, based on the types of journals where their papers were mainly published [12]. Each person was then allocated to a primary, secondary, and tertiary research field or category. Forensic science did not exist as a separate subject category, so this article focused on people with LFM as their primary or secondary discipline. Five versions of the Stanford University citation database are available on-line, containing publication and citation records up to the end of 2017, 2018, 2019, 2020, and 2021, respectively. The first two versions (2017 and 2018) listed the names of the top 100,000 most highly cited scientists in all subject categories. More recent versions (2019, 2020, and 2021) also included the names of people within the top-cited 2% of their primary research discipline. This increased the number of highly cited scientists from 100,000 to over 200,000. Further details of the methodology used to construct the citation database and the calculation of a person’s composite c-score have been published elsewhere [12,13].
The EXCEL file used for this study contained the career-long publication and citation records of over 200,000 publishing scientists worldwide. This file was searched by country, and those people with an address somewhere in Great Britain (gbr) were selected for further, more detailed evaluation. They were sorted in decreasing order of their c-score.

2.2. Citation Metrics

Instead of simply counting the total number of citations to a person’s published work, the Stanford University database used six different citation metrics, which were combined in a mathematical formula to calculate each person’s c-score [28]. The six individual metrics are defined below:
1. Total number of citations to all articles in the database with that person’s name as author or co-author.
2. The person’s H-index or Hirsch index.
3. The person’s H-index adjusted for the number of co-authors on each cited paper (Hm-index).
4. Citations to single-author papers.
5. Citations to single- and first-author papers.
6. Citations to single-, first-, and last-author papers.
The H-index is a popular metric for comparing and contrasting individual scientists and combines information about productivity (number of papers) and importance (number of citations) [29]. For example, a person with an H-index of 50 has his or her name on 50 articles, each of which has been cited at least 50 times. The H-index is robust because it is influenced neither by a few very highly cited papers nor by a set of papers that are hardly ever cited [29,30]. Several alternatives to the H-index have been proposed that adjust for a person’s age and years of active research and publishing [31]. Another alternative, denoted the Hm-index, takes into consideration the number of co-authors on each of the highly cited articles [32]. This requires that each paper is fractionalized according to the number of co-authors (e.g., two authors 0.5, three authors 0.33, four authors 0.25, etc.).
These fractions are then added together until the number obtained matches the article number with the same number of citations. A person’s Hm-index is almost always less than the H-index, sometimes appreciably so.

2.3. Composite Citation Score

Each of the six citation metrics was incorporated into a mathematical formula to derive the person’s c-score. This was done by making a (log + 1) transformation of each metric and comparing this with the person in the same subject category in the database with the highest score for that particular metric. These six fractions were then added together to give the person’s c-score. Only those individuals within the top 2% of their main subfield discipline were considered highly cited and included for further evaluation. Of the ~7 million scientists in the database, 13,388 had LFM as their primary research category, but of these only 282 were within the top 2% according to their c-score. Another 99 individuals had LFM as a secondary research category, making a total of 381 highly cited forensic practitioners, 41 of whom (10.7%) had an address somewhere in Great Britain.
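As a concrete illustration, the six metrics and the composite score described above can be sketched in a few lines of Python. The paper records, the author-position encoding, and the field maxima below are invented placeholders for illustration only; the published databases [12,13,28] define the authoritative procedure.

```python
import math

# Hypothetical per-paper records: (citations, number of authors, position),
# where position is "single", "first", "middle", or "last" (assumed encoding).
papers = [
    (120, 1, "single"),
    (80, 3, "first"),
    (45, 5, "last"),
    (30, 2, "middle"),
    (10, 4, "first"),
]

def h_index(cites):
    """Largest h such that h papers each have at least h citations [29]."""
    h = 0
    for rank, c in enumerate(sorted(cites, reverse=True), start=1):
        if c >= rank:
            h = rank
    return h

def hm_index(papers):
    """Schreiber's Hm-index [32]: rank papers by citations, but let each paper
    contribute only 1/n_authors to the effective rank; Hm is the largest
    effective rank still covered by the citation count."""
    eff_rank, hm = 0.0, 0.0
    for cites, n_authors, _ in sorted(papers, key=lambda p: p[0], reverse=True):
        eff_rank += 1.0 / n_authors           # fractionalized contribution
        if cites >= eff_rank:
            hm = eff_rank
    return hm

# The six metrics combined in the c-score [28]:
nc    = sum(c for c, _, _ in papers)                                       # 1. total citations
h     = h_index([c for c, _, _ in papers])                                 # 2. H-index
hm    = hm_index(papers)                                                   # 3. Hm-index
ncs   = sum(c for c, _, p in papers if p == "single")                      # 4. single-author
ncsf  = sum(c for c, _, p in papers if p in ("single", "first"))           # 5. single + first
ncsfl = sum(c for c, _, p in papers if p in ("single", "first", "last"))   # 6. + last

def c_score(metrics, field_maxima):
    """Sum of (log + 1) fractions, each normalized by the highest value of
    that metric in the same subject category [12]."""
    return sum(math.log(1 + x) / math.log(1 + m)
               for x, m in zip(metrics, field_maxima))

# field_maxima are made-up values standing in for the category maxima.
score = c_score([nc, h, hm, ncs, ncsf, ncsfl],
                [50000, 150, 60, 5000, 20000, 30000])
```

Because each normalized fraction is at most 1, the resulting score necessarily falls between 0 and 6, matching the range of c-scores reported in the Results section.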

3. Results

3.1. Top-cited British Forensic Practitioners

Table 1 lists the names of the 41 highly cited British forensic practitioners, arranged in decreasing order of their c-scores, which ranged from 2.650 to 3.803. Table 1 also contains information about the number of papers with that person’s name as author or co-author according to the SCOPUS database (since 1960). This is followed by the total citation count for these publications (since 1996), and the next two columns show the person’s H-index and Hm-index. The last three columns give the number of papers and citation counts for single-author, single + first-author, and single + first + last-author publications. Of the 41 highly cited forensic practitioners from Great Britain, eight were female (19.5%). The various branches of forensic science these 41 people represented included anthropology, toxicology, odontology (dentistry), genetics/DNA, psychiatry, pathology, and statistics/probability.

3.2. Summary Statistics

Table 2 presents summary statistics for the various citation metrics of these 41 individuals, and one notices that much depends on the particular metric. This was especially evident for single-authored papers and citations to these works. The most prolific author had his name on 239 articles, compared with only 28 articles for the least productive author (median 97 articles). Also shown in Table 2 are the percentages of self-citations by each of the 41 forensic scientists, which ranged from 3.9% to 37.5% (median 13%). Note that the citation metrics reported in Table 1 and Table 2 do not include self-citations, because excessive citation of one’s own previously published work can skew the citation metrics and enhance the c-score.
Table 1. The names of 41 highly cited forensic practitioners from Great Britain in the LFM subject category arranged after their composite c-score (last column).
Table 2. Summary statistics for the citation metrics of 41 highly cited British forensic practitioners listed in Table 1.

4. Discussion

Expert testimony often plays a crucial role in many types of criminal and civil litigation, especially when the outcome of a case rests on some aspect of scientific, technical, and/or medical evidence [33]. Within the adversarial system of justice, which operates mainly in the USA, UK, Canada, Australia, and some other nations, it is normal practice for the prosecution and defense sides in a case to hire their own expert witnesses [34]. Problems arise, however, when the experts reach different conclusions about the significance and evidential value of certain key pieces of the forensic evidence in a case [35]. The use of a joint expert appointed by the court, although a possibility, is less frequently encountered in the adversarial system of justice, especially during criminal prosecutions [36]. In the inquisitorial system of justice, which operates in continental Europe and some other nations, it is more common that the investigating judge or court official appoints the scientific experts, and there is less of a risk that they reach conflicting opinions [37]. Disagreement between expert witnesses appearing on each side in a case is nothing new and was highlighted by a statement made by the respected US federal judge Learned Hand (1872–1961), who in 1901 wrote the following [38]: “The trouble with conflicting expert testimony is that it is setting the jury to decide where doctors disagree. The whole object of the expert is to tell the jury, not facts, as we have seen, but general truths derived from his specialized experience. But how can the jury judge between two statements each founded upon an experience confessedly foreign in kind to their own?
It is just because they are incompetent for such a task that an expert is necessary at all.” In situations like these, one approach would be to compare and contrast the qualifications of each expert on the relevant scientific issue under consideration by the court using bibliometric methods, such as the publication and citation database described in this article. The person with a well-documented record of scholarly achievement, in terms of highly cited scientific papers, should obviously be taken seriously. However, in this connection it is worth remembering that the best researchers do not necessarily make the best expert witnesses. More important than the number of published papers is information about how many times these articles have been cited in papers penned by other scientists. Highly cited publications in international peer-reviewed journals are considered more influential than articles that are seldom or never cited. However, the relationship between scientific research, publication, and expert witness testimony is complex, and some people can present useful scientific evidence and further information to the court without being a prolific researcher [39]. Bibliometric methods and citation analysis are well established in academia, although to the best of my knowledge they are not widely used in jurisprudence, such as when the qualifications of expert witnesses are considered before they are instructed to appear in court and interpret the forensic evidence. The use of a citation database is exemplified here by looking at the career-long publication records of British forensic practitioners in the LFM subject category [5]. It sometimes happens that an expert witness strays outside their own area of expertise, which has misled juries and resulted in wrongful convictions and miscarriages of justice [40,41]. One of the biggest problems in evaluating a person’s published work is attributing credit to the individual names on multi-authored papers.
In this connection, the prestige positions are first, last, and/or corresponding author of the article, and what the other people listed as authors contributed is hard to know [42,43]. Accordingly, ways are needed to attribute credit to individual names on multi-authored papers, such as when considering people for the award of prizes, scholarships, membership in learned societies, and even as expert witnesses [44,45]. One of the pioneers in the discipline of bibliometrics, Derek J. de Solla Price (1922–1983), commented on multiple authorship as follows [46]: “The payoff in brownie points of publications or citations must be divided among all authors listed on the by-line, and in the absence of evidence to the contrary it must be divided equally among them. Thus each author of a three-author paper gets credit for one third of a publication and one third of the ensuing citations. If this is strictly enforced it can act perhaps as a deterrent to the otherwise pernicious practice of coining false brownie points by awarding each author full credit for the whole thing.” Over the past 50 years, multi-authored scientific papers have increased significantly, especially in journals devoted to basic science and medical specialities [47,48]. This makes it increasingly difficult to figure out who did what to produce the final published article and how the credit should be sub-divided between the co-authors [43,49]. For example, should all co-authors, regardless of positioning, be given full credit for all the citations a paper might accrue over time, or is some type of fractionalization (1/N) necessary [50]? Many journals now require authorship declarations, which are submitted along with the manuscript and are intended to spell out exactly what each person listed as an author contributed. These statements are then published as endnotes after the main text, before the list of references [51,52].
More transparency would be welcomed in the way these declarations are formulated, because many people listed as authors seem to have made more or less the same contribution. In fact, advice and encouragement towards a research project, or the loan of some equipment, would be better recognized in the acknowledgement section of the article rather than being rewarded with co-authorship. The importance of being listed as first author on a published article has led to statements in the final publication to the effect that AB and BC share first authorship, which seems ridiculous when there are 8–10 other people listed as co-authors on the same paper [53]. The importance attached to publishing papers in high-impact scientific journals cannot be overstated, such as when people apply for a new job in science and academia, or when research grants are awarded [54]. The drive to publish in prestigious journals has led to a proliferation of multi-authored articles, and the phenomena of ghost and guest authorship have arisen [55]. The latter is a dubious practice, considered unethical and bordering on scientific misconduct [56,57]. The question of who should be included as an author on a published paper, and the relative ordering of names, sometimes becomes a contentious issue leading to disputes and animosity within a research group [58,59]. The way that authorship is assigned seems to differ between institutions, research groups, and countries, and efforts to standardize it are urgently needed [60,61,62]. Many years ago, the International Committee of Medical Journal Editors (ICMJE) issued guidelines about what contribution should be made to be listed as an author, as opposed to being mentioned in the acknowledgment section [45]. Unfortunately, it does not appear that these recommendations were taken very seriously, considering the inflation in the number of authors per paper published in leading medical journals [63,64].
Self-citations inflate a person’s total citation count, because people preferentially cite their own previously published work. However, the citation data reported in Table 1 and used to calculate each person’s c-score did not include self-citations. Note that citations to single-author, first-author, and last-author papers were included in the c-score calculation, which is a unique feature of the Stanford University database. Being designated as the corresponding author on a paper submitted for publication is a prestige position, because this individual vouches for the integrity of the work presented, communicates with the journal editor, and receives the peer-review reports [60]. In Great Britain, with a population of ~65 million, there were 41 highly cited people in the LFM discipline, compared with 43 highly cited forensic practitioners in Germany (pop. 83 million), 28 from Italy (pop. 59 million), and 14 from France (pop. 67 million). The most highly cited British scientist was Mark A. Jobling (University of Leicester), a specialist in genetics/DNA, who had a composite citation score of 3.8037. He was ranked 20th worldwide among all highly cited individuals in the LFM subject category. The British pioneer in DNA fingerprinting, Sir Alec Jeffreys, was credited with 245 published articles between 1974 and 2014; his H-index was 58 (Hm-index 29.8) and his c-score was 4.066. Sir Alec’s primary and secondary research disciplines were “genetics and heredity” and “developmental biology,” respectively, and therefore he was not among the LFM people in Table 1. A plethora of citation metrics exist, and love them or hate them, they are an important part of scholarly publishing whenever the significance of a person’s contributions to research is evaluated. Bibliometric methods have already been used to evaluate the fields of criminology [65,66], forensic science [67], and legal and forensic medicine [68].
The point made in this article is that bibliometric methods might also prove useful in jurisprudence, such as when expert witnesses disagree about the evidential value of certain scientific evidence. The expert witness who can document his or her own research and publications on the scientific topic being litigated obviously deserves more credibility compared with one without such qualifications. Prolific authorship of papers that subsequently become highly cited enhances a person’s reputation and expertise when they serve as expert witnesses.

Ethics Statement

Not applicable.

Funding

No funding was applied for or received to produce this article.

Declaration of Competing Interest

The author declares no competing interests, financial or otherwise, that might have influenced the material presented in this article.

References

1. Adam D. The counting house. Nature 2002, 415, 726–729.
2. Ball R. Introduction to Bibliometrics, Developments and Trends; Chandos Publishing: Cambridge, UK, 2018.
3. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden manifesto for research metrics. Nature 2015, 520, 429–431.
4. Ioannidis JP, Boyack KW. Citation metrics for appraising scientists: Misuse, gaming and proper use. Med. J. Aust. 2020, 212, 237.
5. Joshi MA. Bibliometric indicators for evaluating the quality of scientific publications. J. Contemp. Dent. Pract. 2014, 15, 258–262.
6. Garfield E. How can impact factors be improved? BMJ 1996, 313, 411–413.
7. Garfield E. Journal impact factor: A brief review. Can. Med. Assoc. J. 1999, 161, 979–980.
8. Fong EA, Wilhite AW. Authorship and citation manipulation in academic research. PLoS ONE 2017, 12, e0187394.
9. Garfield E. When to cite. Libr. Q. 1996, 6, 449–458.
10. Van Noorden R. Hundreds of scientists have peer-reviewed for predatory journals. Nature. 11 March 2020. Available online: https://www.nature.com/articles/d41586-020-00709-x (accessed on 14 September 2023).
11. Garfield E. The evolution of the science citation index. Int. Microbiol. 2007, 10, 65–69.
12. Ioannidis JP, Baas J, Klavans R, Boyack KW. A standardized citation metrics author database annotated for scientific field. PLoS Biol. 2019, 17, e3000384.
13. Ioannidis JP, Boyack KW, Baas J. Updated science-wide author databases of standardized citation indicators. PLoS Biol. 2020, 18, e3000918.
14. Keierleber JA, Bohan TL. Ten years after Daubert: The status of the states. J. Forensic Sci. 2005, 50, 1154–1163.
15. Ireland J, Beaumont J. Admitting scientific expert evidence in the UK: Reliability challenges and the need for revised criteria – proposing an abridged Daubert. J. Forensic Pract. 2015, 17, 3–12.
16. Fournier LR. The Daubert guidelines: Usefulness, utilization, and suggestions for improving quality control. J. Appl. Res. Memory Cogn. 2016, 5, 308–313.
17. Clemons M, de Costa ESM, Joy AA, Cobey KD, Mazzarello S, Stober C, et al. Predatory invitations from journals: More than just a nuisance. Oncologist 2017, 22, 236–240.
18. Memon AR. Predatory journals spamming for publications: What should researchers do? Sci. Eng. Ethics 2018, 24, 1617–1639.
19. Elmore SA, Weston EH. Predatory journals: What they are and how to avoid them. Toxicol. Pathol. 2020, 48, 607–610.
20. Björk BC, Kanto-Karvonen S, Harviainen T. How frequently are articles in predatory open access journals cited? Publications 2020, 8, 17.
21. Cobey KD, Grudniewicz A, Lalu MM, Rice DB, Raffoul H, Moher D. Knowledge and motivations of researchers publishing in presumed predatory journals: A survey. BMJ Open 2019, 9, e026516.
22. Grudniewicz A, Moher D, Cobey KD, Bryson GL, Cukier S, Allen K, et al. Predatory journals: No definition, no defence. Nature 2019, 576, 210–212.
23. Haug CJ. Peer-review fraud: Hacking the scientific publication process. N. Engl. J. Med. 2015, 373, 2393–2395.
24. Martínez MA, Herrera M, López-Gijón J, Herrera-Viedma E. H-classics: Characterizing the concept of citation classics through h-index. Scientometrics 2017, 98, 1971–1983.
25. Garfield E. 100 citation classics from the Journal of the American Medical Association. JAMA 1987, 257, 52–59.
26. Garfield E, Welljams-Dorof A. Of Nobel class: A citation perspective on high impact research authors. Theor. Med. 1992, 13, 117–135.
27. Cohn EG, Worrall JL. Evaluating citation analysis: Introduction to the special issue. J. Contemp. Crim. Justice 2023, 39, 324–326.
28. Ioannidis JP, Klavans R, Boyack KW. Multiple citation indicators and their composite across scientific disciplines. PLoS Biol. 2016, 14, e1002501.
29. Hirsch JE. An index to quantify an individual’s scientific research output. Proc. Natl. Acad. Sci. USA 2005, 102, 16569–16572.
30. Opthof T, Wilde AA. The Hirsch-index: A simple, new tool for the assessment of scientific output of individual scientists: The case of Dutch professors in clinical cardiology. Neth. Heart J. 2009, 17, 145–154.
31. Bihari A, Tripathi S, Deepak A. A review on h-index and its alternative indices. J. Inf. Sci. 2023, 49, 624–665.
32. Schreiber M. A modification of the h-index: The hm-index accounts for multi-authored manuscripts. J. Informetr. 2008, 2, 211–216.
33. Friston M. Roles and responsibilities of medical expert witnesses. BMJ 2005, 331, 305–306.
34. Havard JD. Expert scientific evidence under the adversarial system. A travesty of justice? J. Forensic Sci. Soc. 1992, 32, 225–235.
35. Hackman L, Raitt F, Black S. The Expert Witness, Forensic Science and the Criminal Justice Systems of the UK; Taylor & Francis: Boca Raton, FL, USA, 2019.
36. Samuels A. The single joint expert. Med. Sci. Law 2003, 43, 9–12.
37. Margot P. The role of the forensic scientist in an inquisitorial system of justice. Sci. Justice 1998, 38, 71–73.
38. Hand L. Historical and practical considerations regarding expert testimony. Harv. Law Rev. 1901, 14, 53–65.
39. Milroy CM. A brief history of the expert witness. Acad. Forensic Pathol. 2017, 7, 516–526.
40. Turvey BE, Cooley CM. Miscarriages of Justice; Elsevier: Amsterdam, The Netherlands, 2014.
41. Walton D. When expert opinion evidence goes wrong. Artif. Intell. Law 2019, 27, 369–401.
42. Boyer S, Ikeda T, Lefort MC, Malumbres-Olarte J, Schmidt JM. Percentage-based author contribution index: A universal measure of author contribution to scientific articles. Res. Integr. Peer Rev. 2017, 2, 18.
43. Greenblatt DJ. Authorship. Clin. Pharmacol. Drug Dev. 2022, 11, 1362–1366.
44. Huth EJ. Abuses and uses of authorship. Ann. Intern. Med. 1986, 104, 266–267.
45. Huth EJ. Guidelines on authorship of medical papers. Ann. Intern. Med. 1986, 104, 269–274.
46. Price D. Multiple authorship. Science 1981, 212, 986.
47. Plummer S, Sparks J, Broedel-Zaugg K, Brazeau DA, Krebs K, Brazeau GA. Trends in the number of authors and institutions in papers published in AJPE 2015–2019. Am. J. Pharm. Educ. 2022, 87, 8972.
48. Shaban S. Multiple authorship trends in prestigious journals from 1950 to 2005. Saudi Med. J. 2007, 28, 927–932.
49. Helgesson G, Eriksson S. Authorship order. Learn. Publ. 2019, 32, 106–112.
50. Dance A. Authorship: Who’s on first? Nature 2012, 489, 591–593.
51. McNutt MK, Bradford M, Drazen JM, Hanson B, Howard B, Jamieson KH, et al. Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proc. Natl. Acad. Sci. USA 2018, 115, 2557–2560.
52. Lundberg GD, Flanagin A. New requirements for authors: Signed statements of authorship responsibility and financial disclosure. JAMA 1989, 262, 2003–2004.
53. Lapidow A, Scudder P. Shared first authorship. J. Med. Libr. Assoc. 2019, 107, 618–620.
54. Baum MA, Braun MN, Hart A, Huffer VI, Messmer JA, Weigl M, et al. The first author takes it all? Solutions for crediting authors more visibly, transparently, and free of bias. Br. J. Soc. Psychol. 2022, 61, 1605–1620.
55. Rennie D, Flanagin A. Authorship! Authorship! Guests, ghosts, grafters, and the two-sided coin. JAMA 1994, 271, 469–471.
56. Schofferman J, Wetzel FT, Bono C. Ghost and guest authors: You can’t always trust who you read. Pain Med. 2015, 16, 416–420.
57. Wislar JS, Flanagin A, Fontanarosa PB, Deangelis CD. Honorary and ghost authorship in high impact biomedical journals: A cross sectional survey. BMJ 2011, 343, d6128.
58. Fleming N. The authorship rows that sour scientific collaborations. Nature 2021, 594, 459–462.
59. Strange K. Authorship: Why not just toss a coin? Am. J. Physiol. Cell Physiol. 2008, 295, C567–C575.
60. Helgesson G. The two faces of the corresponding author and the need to separate them. Learn. Publ. 2021, 34, 679–681.
61. Helgesson G. Authorship order and effects of changing bibliometrics practices. Res. Ethics 2020, 16, 1–7.
62. Helgesson G, Bulow W, Eriksson S, Godskesen TE. Should the deceased be listed as authors? J. Med. Ethics 2019, 45, 331–338.
63. ICMJE. Defining the role of authors and contributors. International Committee of Medical Journal Editors 2018. Available online: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html (accessed on 14 September 2023).
64. Bhattacharya S. Authorship issue explained. Indian J. Plast. Surg. 2010, 43, 233–234.
65. Cohn EG, Farrington DP, Iratzoqui A. The Most Cited Scholars in Five International Criminology Journals, 2006–2010; Springer: Berlin/Heidelberg, Germany, 2014.
66. Cohn EG, Farrington DP. Who are the most-cited scholars in major American criminology and criminal justice journals? J. Crim. Justice 1994, 22, 517–534.
67. Jones AW. Which articles and which topics in the forensic sciences are most highly cited? Sci. Justice 2005, 45, 175–182.
68. Jones AW. Scientometric evaluation of highly cited scientists in the field of forensic science and legal medicine. Int. J. Legal Med. 2021, 135, 701–707.