Methods: This study gathered and
synthesized 10 metrics for almost all AAMC medical schools (n = 123): (1) total number of
published articles per medical school, (2) total number of citations to
published articles per medical school, (3) average number of citations per
article, (4) institutional impact indices, (5) institutional percentages of
articles with zero citations, (6) annual average number of faculty per medical
school, (7) total amount of NIH funding per medical school, (8) average amount
of NIH grant money awarded per faculty member, (9) average number of articles
per faculty member, and (10) average number of citations per faculty member.
Using principal components analysis, the author examined whether and how these measures were related.
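As a rough illustration of this kind of analysis (not the author's actual code or data), a principal components analysis of 10 school-level bibliometric metrics might be set up as follows; the metric names, the synthetic data, and the use of scikit-learn are assumptions made only for the sketch.

```python
# Illustrative sketch only: PCA on 10 school-level bibliometric metrics,
# analogous in spirit to the study's method. The data below are synthetic
# placeholders, not the study's dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

metrics = [
    "total_articles", "total_citations", "citations_per_article",
    "impact_index", "pct_zero_citation", "avg_faculty",
    "total_nih_funding", "nih_per_faculty",
    "articles_per_faculty", "citations_per_faculty",
]

# Placeholder matrix: 123 schools x 10 metrics (random values stand in for real data).
X = rng.lognormal(mean=3.0, sigma=1.0, size=(123, len(metrics)))

# The metrics sit on very different scales, so standardize before extracting components.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=3)          # the study reports 3 major clusters of variables
scores = pca.fit_transform(X_std)  # school-level component scores

print("Variance explained by each component:", pca.explained_variance_ratio_)
print("Cumulative variance explained:", pca.explained_variance_ratio_.sum())

# Loadings show which metrics dominate each component, i.e., which variables cluster together.
for i, comp in enumerate(pca.components_, start=1):
    top = sorted(zip(metrics, comp), key=lambda t: abs(t[1]), reverse=True)[:4]
    print(f"Component {i}:", [(m, round(w, 2)) for m, w in top])
```

Standardizing first keeps high-magnitude measures such as total NIH funding from dominating the components purely because of their scale.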
Results: Principal components analysis
revealed 3 major clusters of variables that accounted for 91% of the total
variance: (1) institutional research productivity, (2) research influence or
impact, and (3) individual faculty research productivity. Grouping the variables into these clusters allows medical school research to be evaluated in a more nuanced way. Significant correlations exist among the extracted factors, indicating that all of the variables are interrelated. Total NIH funding may relate more strongly to the quality of the research than to its quantity. Eliminating the medical schools that were outliers on 1 or more indicators (n = 20) altered the analysis considerably.
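A sensitivity check of the kind described here, re-running the analysis after dropping schools that are extreme on any indicator, could be sketched as follows, continuing the illustration above (it reuses X, metrics, np, PCA, and StandardScaler from that block); the |z| > 3 cutoff is an assumption, since the study does not state its outlier criterion.

```python
# Illustrative sketch only: exclude schools that are outliers on any metric and
# refit the PCA, mirroring the sensitivity analysis described in the Results.
# The |z| > 3 threshold is an assumed criterion for this sketch.
from scipy import stats

z = np.abs(stats.zscore(X, axis=0))
keep = (z <= 3).all(axis=1)          # keep schools with no outlying metric
print("Schools removed:", (~keep).sum())

pca_trimmed = PCA(n_components=3)
pca_trimmed.fit(StandardScaler().fit_transform(X[keep]))
print("Variance explained after trimming:", pca_trimmed.explained_variance_ratio_)
```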
Conclusions: Though popular, ordinal rankings cannot adequately describe the multidimensional nature of a medical school's research productivity and impact. This study provides statistics that, used in conjunction with other sound methodologies, can offer a more authentic view of a medical school's research. The large variance in the collected data suggests that refining bibliometric data by discipline, peer group, or journal information may yield a more precise assessment.