Educational Entry

Publication Bias: The File Drawer Problem

Journals preferentially publish positive findings; negative results languish unpublished. This creates a false consensus favoring treatments. Rosenthal's file drawer problem quantifies the distortion; trial registration is a partial solution.

How this entry is structured
Definitions first, then mechanisms, and finally "what does this imply?". If you're in a hurry, skim the headings and the highlighted boxes.
Not medical advice.
Educational content only. If symptoms are severe, persistent, or worrying, consult a healthcare professional.

The Studies You Never See

Robert Rosenthal's 1979 "file drawer problem" remains the most elegant description of publication bias. Imagine researchers testing whether a supplement improves cognition. Twenty research teams conduct identical studies. Two teams (by random chance) find p < 0.05 "positive" results; 18 find nothing. The two positive studies get published; 18 studies slide into desk drawers, never reported.
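The 20-team scenario can be simulated directly. This is a sketch under stated assumptions (a known-variance z-test, a true effect of exactly zero, 30 subjects per arm); it "publishes" only the studies that cross p < 0.05 and compares the published record against the full record:

```python
import random
import statistics

random.seed(0)

def one_study(n=30, true_effect=0.0):
    """Simulate one two-group supplement study with no real effect.
    Returns (observed mean difference, whether p < 0.05 two-sided)."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2 / n) ** 0.5          # known-variance z-test on the difference
    return diff, abs(diff / se) > 1.96

published, all_effects = [], []
for _ in range(2000):            # many 20-team "literatures"
    studies = [one_study() for _ in range(20)]
    all_effects += [d for d, _ in studies]
    published += [d for d, significant in studies if significant]

print(f"mean effect, all studies:      {statistics.mean(all_effects):+.3f}")
print(f"mean |effect|, published only: {statistics.mean(abs(d) for d in published):.3f}")
```

Because only the extreme draws cross the significance threshold, the published-only record shows a sizable average effect magnitude even though the true effect is exactly zero.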

A meta-analyst reviews published literature and concludes the supplement works. But they've examined 2 published studies while ignoring 18 null studies in drawers—a fundamentally skewed evidence base. This is publication bias: the tendency for positive findings to be published and negative findings to vanish.
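Rosenthal's own quantification of the distortion is the "fail-safe N": the number of unpublished studies averaging z = 0 that would have to be sitting in file drawers to drag a Stouffer-combined result (sum(z) / sqrt(k)) below significance. A minimal sketch — the two z-scores are hypothetical illustrations for the two published studies above:

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: number of averaged-null unpublished
    studies needed to pull the Stouffer combined z-score below
    z_alpha (one-tailed alpha = .05 by default)."""
    k = len(z_scores)
    return sum(z_scores) ** 2 / z_alpha ** 2 - k

# hypothetical z-scores for the two published "positive" studies
print(round(fail_safe_n([2.1, 2.4]), 1))   # → 5.5
```

Here fewer than six drawer studies would already erase significance — and the scenario has eighteen.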

Studies documenting this bias abound. Ioannidis reviewed NIH-funded trials comparing two therapies. When trials favored one therapy, the publication rate was high; trials showing equivalent efficacy were published less frequently. The resulting literature exaggerated the superiority of the favored approach.

Outcome reporting bias compounds the problem. Researchers conducting a probiotic trial measure ten outcomes: bloating, pain, stool frequency, quality of life, biomarkers (CRP, fecal calprotectin, diversity metrics), and safety events. Only three outcomes show statistically significant improvement. When publishing, researchers emphasize these three, burying results for the seven non-significant outcomes. The published trial appears positive; the full dataset reveals mixed effects.
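The arithmetic behind this is ordinary multiple-comparison inflation: with ten outcomes and no real effects, the chance that at least one crosses p < 0.05 is about 40% (assuming independent outcomes, which real trial endpoints rarely are exactly):

```python
alpha, k = 0.05, 10                  # per-outcome significance level, outcomes measured
p_at_least_one = 1 - (1 - alpha) ** k
print(round(p_at_least_one, 3))      # → 0.401
```

So a "three of ten outcomes significant" trial is well within what chance alone produces.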

Post-hoc switching of primary outcomes is a particularly egregious form of this bias. A trial pre-specifies cognition as the primary outcome, which is not significantly improved (p = 0.08). Upon analyzing secondary outcomes, the researchers discover a significant benefit for mood (p = 0.03) and present mood as the primary finding in the abstract. Readers believe mood was the pre-planned outcome, not an afterthought.

Funnel plots visualize publication bias in meta-analyses. Plot effect size (x-axis) against study precision (y-axis). Unbiased literature produces a symmetric funnel shape: small studies scatter widely (low precision), large studies cluster near the true effect (high precision). Publication bias creates asymmetry: missing small, null-finding studies on one side, producing a lopsided funnel.

Egger's test quantifies funnel plot asymmetry statistically. Asymmetrical funnels suggest bias (though other factors like heterogeneity can cause asymmetry too). When Egger's test signals asymmetry, meta-analysts should investigate whether truly null studies exist unpublished or whether other explanations apply.
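A stripped-down version of Egger's idea fits in a few lines: regress the standardized effect (effect/SE) on precision (1/SE) and inspect the intercept. This is a simplified, unweighted sketch with simulated data — the canonical test also computes a t-statistic for the intercept — shown once on an unbiased literature and once after "publishing" only one-tailed p < .05 results:

```python
import numpy as np

def egger_intercept(effects, ses):
    """Sketch of Egger's regression: standardized effect (effect/SE)
    regressed on precision (1/SE). An intercept far from zero suggests
    funnel asymmetry (possible bias -- or heterogeneity)."""
    y = np.asarray(effects) / np.asarray(ses)   # per-study z-scores
    x = 1.0 / np.asarray(ses)                   # per-study precision
    X = np.column_stack([np.ones_like(x), x])
    (intercept, _slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return intercept

rng = np.random.default_rng(42)
ses = rng.uniform(0.05, 0.5, 200)                # per-study standard errors
effects = 0.3 + rng.normal(0.0, 1.0, 200) * ses  # true effect 0.3, no bias

full = egger_intercept(effects, ses)             # all studies: ~symmetric funnel
sig = effects / ses > 1.645                      # keep only one-tailed p < .05
pub = egger_intercept(effects[sig], ses[sig])    # published-only: asymmetric

print(f"intercept, all studies:    {full:+.2f}")
print(f"intercept, published only: {pub:+.2f}")
```

Selecting on significance lifts the imprecise studies' standardized effects far more than the precise ones', which is exactly what a nonzero intercept detects.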

ClinicalTrials.gov registration offers partial solutions. Since 2007, NIH-funded trials must register before enrollment. The registry specifies primary outcomes and sample size, preventing post-hoc outcome switching. In theory, investigators cannot "shop" for significant results and hide failures. However, registration's impact remains mixed—many trials still go unpublished years after completion, and selective reporting of registered outcomes persists.

The AllTrials campaign advocates mandatory public reporting of all trials, registered or not. This transparency would expose the file-drawer effect. Some journals now require data sharing; others mandate pre-registration of analysis plans. These practices reduce, but don't eliminate, publication bias.

Better practice: search trial registries directly before reviewing the published literature. Unpublished null results often reside in ClinicalTrials.gov, revealing a fuller picture than published papers alone. Microbiome researchers increasingly register prospectively, improving transparency.

