Assessing and Avoiding Publication Bias in Meta-analyses

Publication bias distorts meta-analyses by inflating effect estimates, so it must be identified and corrected. Funnel plots and Egger's Test help detect bias, and trim-and-fill can correct for it, though each method has limitations, making sensitivity analyses essential.

Updated on November 29, 2024

Publication bias – when studies with significant results are more likely to be published than studies with non-significant findings – has serious consequences for meta-analyses. It inflates pooled effect estimates, which can in turn misinform policies or even clinical decisions.

Addressing publication bias in meta-analyses is crucial for ensuring the reliability and validity of the synthesized evidence.

There are ways to identify and correct for publication bias in meta-analyses. After defining what we’re up against, we’ll take a look at specific methods for dealing with it.

Understanding publication bias

Meta-analyses are particularly affected by publication bias because of its potential impact on the accuracy and reliability of the synthesized findings. Unlike an individual study, a meta-analysis combines multiple studies to generate a pooled effect estimate, so its cumulative nature amplifies the impact of any bias in the underlying literature.

Definition and overview of publication bias

Publication bias occurs when statistically significant results are more likely to be published than non-significant ones. These published results are then more likely to be identified, and therefore included, by researchers conducting meta-analyses and other research on that topic.

Publication bias is closely linked to statistical power. Studies with high statistical power can detect a small effect when it actually exists, whereas studies with low statistical power (“small studies”) need to observe very large effects to reach significance. This puts small studies at the greatest risk of generating non-significant findings and remaining unpublished.

Causes of publication bias in meta-analysis

The main cause of publication bias is selective reporting, or the notion that only statistically significant results are interesting and therefore “publishable.” This practice has become commonplace because researchers seek to secure funding and build their reputations, and publishing null findings is often seen as a “failure.” In a meta-analysis, which accumulates many studies, the same danger exists on a greater scale.

Another cause of publication bias is outcome reporting bias, where researchers selectively report, or switch to, the specific outcomes or analyses in a study that produced statistically significant results.

Editorial bias also plays a role. Journal editors or reviewers may be biased toward statistically significant findings due to pressure to publish “novel” and “exciting” research. The idea is that these papers will attract a greater readership and make a larger impact on the scientific community.

This selective publication can distort the overall findings of a meta-analysis, leading to overrepresented positive results and potentially biased conclusions.

Consequences of publication bias

Meta-analyses aim to synthesize all relevant literature on a particular topic so that informed decisions can be made about the research question (e.g., a treatment’s effectiveness). If only statistically significant results are published, the meta-analysis misses a substantial body of important evidence.

This omission distorts the true findings and inflates the apparent effectiveness or significance of a particular intervention. This, in turn, can have devastating consequences for clinicians and policymakers, because they’re not getting the full picture that the meta-analysis should be providing.

In their controversial article, "The Emperor's New Drugs," Kirsch et al. used the Freedom of Information Act to access unpublished antidepressant trial data that pharmaceutical companies provided to the US Food and Drug Administration.

They discovered that when including the previously unpublished data, the benefits of antidepressants compared with placebos were clinically insignificant. They argued that this resulted from selective publication, where studies with unfavorable results were withheld while those with favorable findings were published. This finding challenged the widely held belief in the effectiveness of antidepressants in the scientific and public domains. You can imagine the implications – from the researchers, to the clinicians, to the pharmaceutical companies, and all the way down to the “end-consumer.”

Identifying publication bias

The most straightforward way to assess publication bias is through what are known as small-study effects (see Sterne et al., 2000, for a deeper dive), in which small studies show larger, outsized effects.

Publication bias occurs when only significant results are published, and because the probability of obtaining a significant result increases with sample size, publication bias mainly affects small studies. A small study can only produce a statistically significant result if it happens to observe a large effect.

Small-study effects can be examined, and publication bias corrected for, using the visual and statistical methods described below.
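
To make this mechanism concrete before turning to those methods, here is a minimal simulation sketch in Python. All numbers (the true effect, the sample size range, the significance cutoff) are illustrative assumptions, not data from any real meta-analysis: studies of varying size are drawn around a modest true effect, only significant results are “published,” and the published small studies end up with inflated effects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2                    # assumed true standardized mean difference (hypothetical)
sample_sizes = rng.integers(10, 200, size=2000)  # hypothetical per-group sample sizes

published_n, published_d = [], []
for n in sample_sizes:
    se = np.sqrt(2 / n)              # rough SE of a standardized mean difference, equal groups
    d = rng.normal(true_effect, se)  # observed effect for this simulated study
    p = 2 * stats.norm.sf(abs(d / se))
    if p < 0.05:                     # selective publication: significant results only
        published_n.append(n)
        published_d.append(d)

published_n = np.array(published_n)
published_d = np.array(published_d)
print(f"True effect: {true_effect:.2f}")
print(f"Published small studies (n < 50): mean d = {published_d[published_n < 50].mean():.2f}")
print(f"Published large studies (n >= 50): mean d = {published_d[published_n >= 50].mean():.2f}")
```

The small published studies report a mean effect well above the true value, while the large published studies stay close to it – exactly the small-study effect that the methods below are designed to catch.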

Visual methods

The funnel plot is the most common visual representation of publication bias. Forest plots are not specifically designed for this purpose, but they are also used.

Funnel plot

A funnel plot is a scatterplot of each study’s effect size on the x-axis against its standard error on the y-axis.

When there is no publication bias, the data points form a symmetrical, upside-down funnel, as in Graph A, with larger, more precise studies clustered near the top and smaller studies fanning out toward the bottom. This shape suggests that effect sizes are distributed evenly around the pooled estimate.

When publication bias exists, the points are skewed to one side, as in Graph B.

[Graph A: a symmetrical funnel plot, indicating no publication bias]

[Graph B: an asymmetrical funnel plot, suggesting publication bias]

Images from: https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/pub-bias.html

Note that funnel plot asymmetry can be assessed visually, but it should also be quantified statistically, for example with Egger’s Test (see below).
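
As an illustration, here is a minimal funnel plot sketch using matplotlib. The `effects` and `ses` arrays are hypothetical placeholders; substitute your own study-level effect sizes and standard errors.

```python
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([0.41, 0.35, 0.62, 0.18, 0.55, 0.29, 0.48, 0.70])  # hypothetical
ses = np.array([0.05, 0.08, 0.20, 0.10, 0.18, 0.07, 0.15, 0.25])      # hypothetical

pooled = np.average(effects, weights=1 / ses**2)  # fixed-effect pooled estimate

fig, ax = plt.subplots()
ax.scatter(effects, ses)
ax.axvline(pooled, linestyle="--", label=f"Pooled estimate = {pooled:.2f}")

# Pseudo 95% confidence limits: the funnel widens as the standard error grows.
se_grid = np.linspace(0, ses.max(), 100)
ax.plot(pooled - 1.96 * se_grid, se_grid, color="grey")
ax.plot(pooled + 1.96 * se_grid, se_grid, color="grey")

ax.invert_yaxis()  # convention: precise (small-SE) studies sit at the top
ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
ax.legend()
plt.show()
```

If the points cluster outside the funnel on one side, or one bottom corner is conspicuously empty, that asymmetry is worth investigating.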

Forest plot

While not specifically designed for identifying publication bias, forest plots are commonly used in meta-analyses to visually present the individual study effect sizes along with their confidence intervals.

By observing the spread and distribution of the effect sizes, researchers can assess for a lack of smaller studies with null or negative results, which may suggest potential publication bias.

[Image: a forest plot of individual study effect sizes with 95% confidence intervals]

Image from: https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/forest.html
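
A basic forest plot can be sketched with matplotlib’s errorbar function; the study names, effect sizes, and standard errors below are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

studies = ["Study A", "Study B", "Study C", "Study D", "Study E"]
effects = np.array([0.30, 0.55, 0.12, 0.47, 0.38])  # hypothetical
ses = np.array([0.10, 0.22, 0.09, 0.18, 0.12])      # hypothetical

y = np.arange(len(studies))[::-1]  # first study at the top
fig, ax = plt.subplots()
ax.errorbar(effects, y, xerr=1.96 * ses, fmt="s", capsize=3)  # points with 95% CIs
ax.axvline(0, linestyle=":", color="grey")  # line of no effect
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Effect size (95% CI)")
plt.show()
```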

Statistical methods

Egger's regression test

Egger’s Test is a statistical tool for quantifying funnel plot asymmetry (see the PDF of the original paper on Egger’s Test of the Intercept). It performs a weighted regression of the effect size estimates on a measure of their precision (based on their standard errors).

The metric of interest here is the regression intercept, denoted b. An intercept that differs significantly from zero (p < 0.05) suggests publication bias.
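
As a rough illustration, here is one common formulation of Egger’s Test in Python: regress each study’s standardized effect (effect divided by its standard error) on its precision (one over the standard error) and test whether the intercept differs from zero. The data are hypothetical, and `intercept_stderr` requires SciPy 1.7 or later.

```python
import numpy as np
from scipy import stats

effects = np.array([0.41, 0.35, 0.62, 0.18, 0.55, 0.29, 0.48, 0.70])  # hypothetical
ses = np.array([0.05, 0.08, 0.20, 0.10, 0.18, 0.07, 0.15, 0.25])      # hypothetical

snd = effects / ses   # standardized effects
precision = 1 / ses

res = stats.linregress(precision, snd)  # intercept_stderr needs SciPy >= 1.7
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(effects) - 2)

print(f"Egger's intercept b = {res.intercept:.3f}, p = {p_value:.3f}")
# An intercept significantly different from zero (p < 0.05) suggests asymmetry.
```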

Rank correlation test

Similar to Egger’s Test, the rank correlation test (also known as Begg and Mazumdar’s test) examines the relationship between the effect sizes (i.e., their “ranks”) and a measure of study precision (i.e., their standard errors). This test assesses whether there is a correlation between the ranked effect sizes and study precision, which can indicate the presence of publication bias (see this article for the original paper on this test).

Kendall's tau is typically used as the correlation coefficient for this test. If the correlation coefficient is statistically significant, it indicates funnel plot asymmetry, which can be caused by publication bias.
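
Here is a simplified sketch of the rank correlation test on hypothetical data; the variance correction follows Begg and Mazumdar’s standardization, though a dedicated meta-analysis package should be preferred in practice.

```python
import numpy as np
from scipy import stats

effects = np.array([0.41, 0.35, 0.62, 0.18, 0.55, 0.29, 0.48, 0.70])  # hypothetical
ses = np.array([0.05, 0.08, 0.20, 0.10, 0.18, 0.07, 0.15, 0.25])      # hypothetical

variances = ses**2
weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect pooled estimate
pooled_var = 1 / np.sum(weights)

# Standardized deviates with Begg & Mazumdar's variance correction.
deviates = (effects - pooled) / np.sqrt(variances - pooled_var)

tau, p_value = stats.kendalltau(deviates, variances)
print(f"Kendall's tau = {tau:.3f}, p = {p_value:.3f}")
# A significant positive tau suggests smaller (less precise) studies
# report larger effects, consistent with publication bias.
```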

Trim-and-fill method

If funnel plot asymmetry is detected using Egger’s Test or the rank correlation test, the trim-and-fill method can be used to correct for it. This method assumes that the funnel plot should be symmetrical, and that the observed asymmetry reflects studies that haven’t been published because of publication bias (see this PDF for the seminal paper on the trim-and-fill method).

This method works by first "trimming" (removing) studies from one side of the funnel plot that contribute to the asymmetry, then "filling" in the missing studies by mirroring the removed studies on the opposite side. These filled studies represent hypothetical unpublished studies that could have contributed to the observed asymmetry.

A new effect size estimate is then calculated from the original studies plus the imputed missing studies. This provides an adjusted estimate of the overall effect size, referred to as the "corrected" effect size.
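
For intuition, here is a highly simplified, single-iteration sketch of the trim-and-fill idea based on Duval and Tweedie’s L0 estimator. The full procedure iterates the trimming and re-estimation until the number of missing studies stabilizes, so dedicated software should be used for real analyses; the data are hypothetical and the asymmetry is assumed to be on the right side of the plot.

```python
import numpy as np
from scipy import stats

effects = np.array([0.41, 0.35, 0.62, 0.18, 0.55, 0.29, 0.48, 0.70])  # hypothetical
ses = np.array([0.05, 0.08, 0.20, 0.10, 0.18, 0.07, 0.15, 0.25])      # hypothetical

weights = 1 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect pooled estimate

# L0 estimator: rank the absolute deviations from the pooled estimate and
# sum the ranks of the studies on the (assumed overrepresented) right side.
dev = effects - pooled
ranks = stats.rankdata(np.abs(dev))
n = len(effects)
t_n = ranks[dev > 0].sum()
k0 = max(0, round((4 * t_n - n * (n + 1)) / (2 * n - 1)))
print(f"Estimated number of missing studies: k0 = {k0}")

if k0 > 0:
    # "Fill": mirror the k0 most extreme right-side studies across the pooled estimate.
    mirrored = np.argsort(dev)[-k0:]
    filled_effects = np.concatenate([effects, 2 * pooled - effects[mirrored]])
    filled_ses = np.concatenate([ses, ses[mirrored]])
    filled_weights = 1 / filled_ses**2
    corrected = np.sum(filled_weights * filled_effects) / np.sum(filled_weights)
    print(f"Original pooled estimate:    {pooled:.3f}")
    print(f"'Corrected' pooled estimate: {corrected:.3f}")
```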

Limitations and considerations of methods for identifying bias

Although helpful in detecting publication bias, each method has drawbacks because every test makes different assumptions about the data.

For instance, Egger’s Test and the rank correlation test assume that publication bias is the underlying cause of funnel plot asymmetry. They can therefore flag asymmetry that actually stems from other sources, such as between-study heterogeneity.

Likewise, the trim-and-fill procedure is not robust when there’s large between-study heterogeneity in the meta-analysis. It’s also criticized for filling in the plot with “made up” studies, and it is often outperformed by other more advanced methods of publication bias correction, including PET-PEESE or selection models.

Next steps after identifying bias

If you identify bias in your meta-analysis, the next step is to conduct sensitivity analyses by running several publication bias correction tests. This will help you assess the reliability of the results under different assumptions and correction methods.

Considering various bias correction methods also helps you obtain a more accurate estimate of the true effect size and increases the transparency and robustness of your results.
