
Medical journal puts itself, competitors under microscope

By Lindsey Tanner, The Associated Press
Wednesday, June 5, 2002

CHICAGO – One of the world’s leading medical journals has put itself and its competitors under the microscope with research showing that published studies are sometimes misleading and frequently fail to mention weaknesses. 

Some problems can be traced to biases and conflicts of interest among peer reviewers, who are outside scientists tapped by journal editors to help decide whether a research paper should be published, according to several articles in this week’s Journal of the American Medical Association. 

Other problems originate in news releases some journals prepare to call attention to what they believe are newsworthy studies. The releases do not routinely mention study limitations or industry funding and may exaggerate the importance of findings, according to one JAMA study. 

Wednesday’s JAMA, devoted entirely to such issues, “is our attempt to police ourselves, to question ourselves and to look at better ways to make sure that we’re honest and straightforward and maintain the integrity of the journals,” said Dr. Catherine DeAngelis, JAMA’s editor. 

The articles “underscore that the findings presented in the press and medical journals are not always facts or as certain as they seem,” said Rob Logan, director of the Science Journalism Center at the University of Missouri-Columbia. 

DeAngelis said problems are most likely to occur in research funded by drug companies, which have a vested interest in findings that make their products look good. 

Journal editors are concerned that manufacturers sometimes unduly influence how researchers report study results, and even suppress unfavorable findings. 

Many top journals require researchers to disclose any ties to drug companies, and Dr. Jeffrey Drazen, editor of the New England Journal of Medicine, said editors rely on researchers to be truthful. 

“I imagine that from time to time we screw up” and fail to adequately mention drug company ties, but that is infrequent, Drazen said. 

One JAMA report found that medical journal studies on new treatments often use only the most favorable statistic in reporting results, said author Dr. Jim Nuovo of the University of California at Davis. 

His study reviewed 359 studies published between 1989 and 1998 in JAMA, The New England Journal of Medicine, The Lancet, the British Medical Journal and Annals of Internal Medicine. Only 26 of them reported straightforward statistics that clearly conveyed the effect on patients.

Most reported only the "relative risk reduction" linked to a specific treatment, the percentage by which the event rate among drug-treated patients fell compared with the rate in a placebo group. That figure is more misleading than the "absolute risk reduction," which measures the actual difference in outcomes between the treatment group and the placebo group, Nuovo said.

For example, if 4.1 percent of placebo-treated patients had heart attacks compared with 2.7 percent of drug patients, the absolute risk reduction in the drug group would be 1.4 percentage points. But researchers could use the relative risk reduction to claim that the drug lowers the risk of a heart attack by 34 percent, which sounds a lot more impressive.
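
The arithmetic behind those two figures is simple enough to spell out. The short Python sketch below is not part of the article or of Nuovo's study; it is only an illustration using the example heart-attack rates quoted above, with function names chosen here for clarity.

    # Illustration of the two statistics discussed above, using the
    # example event rates from the article (4.1% placebo, 2.7% drug).

    def absolute_risk_reduction(placebo_rate, treated_rate):
        # Plain difference in event rates, in percentage points.
        return placebo_rate - treated_rate

    def relative_risk_reduction(placebo_rate, treated_rate):
        # The same reduction expressed as a share of the placebo group's rate.
        return (placebo_rate - treated_rate) / placebo_rate

    placebo = 0.041   # 4.1 percent of placebo patients had heart attacks
    treated = 0.027   # 2.7 percent of drug-treated patients did

    print(f"Absolute risk reduction: {absolute_risk_reduction(placebo, treated):.1%}")
    print(f"Relative risk reduction: {relative_risk_reduction(placebo, treated):.0%}")
    # Prints 1.4% and 34%: the same drug effect, stated two different ways.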