Systematic reviews are the gold standard for synthesizing evidence — and also one of the most time-consuming tasks in research. A well-run systematic review typically takes 9-18 months and a team of 2-4 people. Much of that time is spent on steps that could be accelerated with the right tooling. This guide covers the workflow that most efficient teams use in 2026.
If you haven't done a systematic review before, start with the PRISMA 2020 guidance — it's the global standard, and every journal will expect you to conform to it. This guide is a practical overlay on PRISMA, not a replacement.
Before you search anything, register your protocol. For health research, that's PROSPERO. For other fields, check OSF Registries or field-specific platforms (such as INPLASY or the Campbell Collaboration).

Your protocol should answer:

- What is the research question (population, intervention or exposure, comparator, outcomes)?
- What are the inclusion and exclusion criteria?
- Which databases and sources will you search, and with what strategy?
- How will records be screened, data extracted, and findings synthesized?
This sounds bureaucratic, but it's what keeps reviewers from accusing you of "changing the question" mid-way. Once registered, you can't quietly drop an inconvenient inclusion criterion.
In 2026, a comprehensive search means at least three databases — usually more like five or six for a rigorous review. Different databases index different journals: Web of Science misses many open-access journals, and PubMed didn't index SSRN at all until 2023.

Common database combinations:

- Health and medicine: PubMed/MEDLINE, Embase, and Cochrane CENTRAL, plus Scopus or Web of Science
- Social sciences: Scopus, Web of Science, and a field database such as PsycINFO or ERIC
- Computer science and engineering: Scopus, Web of Science, IEEE Xplore, and the ACM Digital Library
Run the same query in each database (adapted for that database's syntax), and export every result.
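As one concrete example, here is a minimal sketch of running a query against PubMed through NCBI's E-utilities `esearch` endpoint. The endpoint and parameters are NCBI's documented API; the query string itself is a placeholder to replace with your protocol's search.

```python
import requests

# NCBI E-utilities search endpoint (documented public API; no key
# needed for low-volume use).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Placeholder query: substitute the search string from your protocol,
# translated into PubMed's field-tag syntax.
query = '"exercise therapy"[MeSH Terms] AND "depression"[MeSH Terms]'

resp = requests.get(ESEARCH, params={
    "db": "pubmed",
    "term": query,
    "retmax": 10000,   # max IDs per request; page with retstart beyond that
    "retmode": "json",
})
resp.raise_for_status()
result = resp.json()["esearchresult"]

print("Total hits:", result["count"])
pmids = result["idlist"]   # feed these into your export and dedup steps
```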
Supplement with:

- Preprint servers and grey literature (theses, reports, trial registries such as ClinicalTrials.gov)
- Backward and forward citation chasing on your included studies
- Hand-searching the key journals in your field
- Contacting authors for unpublished or in-press work
Export everything. A typical systematic review ends with 5,000-20,000 unique records at this stage.
You will have massive overlap between databases. A paper indexed in PubMed, Web of Science, and Scopus will appear three times — with slightly different formatting each time.
The standard workflow:

1. Import every database export into a single reference manager or dedup tool.
2. Auto-merge exact duplicates on DOI.
3. Flag near-duplicates on normalized title, year, and first author, and resolve those by hand (a sketch of the matching logic follows below).
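A minimal sketch of that matching logic, assuming records have already been exported as dicts with `doi`, `title`, and `year` fields; the field names and the 0.95 similarity threshold are illustrative choices, not a standard.

```python
import re
from difflib import SequenceMatcher

def norm_title(title):
    """Lowercase, strip punctuation, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", title.lower())).strip()

def dedupe(records):
    """Merge exact DOI duplicates; queue fuzzy title matches for human review."""
    seen_dois, kept, needs_review = set(), [], []
    for rec in records:
        doi = (rec.get("doi") or "").lower().strip()
        if doi and doi in seen_dois:
            continue                # exact duplicate: drop
        if doi:
            seen_dois.add(doi)
        for other in kept:          # fuzzy comparison against kept records
            same_year = rec.get("year") == other.get("year")
            similar = SequenceMatcher(None, norm_title(rec["title"]),
                                      norm_title(other["title"])).ratio() > 0.95
            if same_year and similar:
                needs_review.append((rec, other))  # near-duplicate: human decides
                break
        else:
            kept.append(rec)
    return kept, needs_review
```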
Expect to remove 30-50% of your initial count. Document the numbers for your PRISMA flow diagram.
Screening happens in two rounds: title/abstract, then full-text. The standard is two independent reviewers screening each record, with a third adjudicator for disagreements.
The 2026 tooling for this is much better than it was:

- Rayyan and Covidence handle blinded dual-reviewer screening and conflict resolution out of the box.
- Active-learning tools such as ASReview reorder the queue so likely-relevant records surface first.
A warning about AI-assisted screening: it's a prioritization tool, not a replacement for human judgment. The current literature shows it can reduce the number of records you need to screen to find all relevant papers, but it doesn't change the final set of included studies. You still screen every record that looks promising — the AI just orders them better.
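To make "prioritization, not replacement" concrete, here is a minimal sketch of the underlying idea using scikit-learn: train on the records you have already screened, then rank the rest by predicted relevance. This is a stand-in for what tools like ASReview do with proper active-learning loops, and all the example texts and labels are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Abstracts you have already screened, with the human include/exclude labels...
screened_texts = ["rct of exercise therapy for depression ...",
                  "review of bridge maintenance schedules ..."]
labels = [1, 0]                     # 1 = include, 0 = exclude
# ...and the abstracts still waiting in the queue.
unscreened = ["cohort study of physical activity and mood ...",
              "survey of asphalt wear in cold climates ..."]

vectorizer = TfidfVectorizer(max_features=20000)
X = vectorizer.fit_transform(screened_texts)
model = LogisticRegression(max_iter=1000).fit(X, labels)

# Rank the queue by predicted relevance: every record still gets read,
# just in a smarter order.
scores = model.predict_proba(vectorizer.transform(unscreened))[:, 1]
ranked = sorted(zip(scores, unscreened), key=lambda pair: pair[0], reverse=True)
```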
Keep careful records of your exclusion reasons. PRISMA requires you to report how many records were excluded and why at each stage.
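The record-keeping itself can be as simple as an append-only CSV; here is a minimal sketch, where the column set and ID scheme are one reasonable choice rather than a PRISMA requirement.

```python
import csv

def log_decision(path, record_id, stage, decision, reason=""):
    """Append one screening decision; PRISMA counts become a simple tally."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([record_id, stage, decision, reason])

# Illustrative call: the reason vocabulary is yours to define in the protocol.
log_decision("screening_log.csv", "pmid:38001234", "title_abstract",
             "exclude", "wrong population")
```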
For each included study, extract a structured set of fields. The minimum:

- Citation details (authors, year, journal, DOI)
- Study design and setting
- Population and sample size
- Intervention or exposure, and comparator
- Outcomes measured and main results
- Funding source and declared conflicts
This usually lives in a spreadsheet, but increasingly in dedicated tools (DistillerSR, SRDR+, Covidence's extraction module). Two reviewers extract independently; a third adjudicates.
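If you do stay in plain files, typing each extraction as a structured record makes the dual-extraction comparison mechanical. A sketch, with illustrative field names mirroring the minimum set above:

```python
from dataclasses import dataclass, asdict

@dataclass
class Extraction:
    study_id: str
    design: str
    n_participants: int
    intervention: str
    comparator: str
    outcomes: str
    funding: str

def disagreements(a: Extraction, b: Extraction) -> dict:
    """Fields where two independent extractors differ: send to the adjudicator."""
    da, db = asdict(a), asdict(b)
    return {f: (da[f], db[f]) for f in da if da[f] != db[f]}
```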
If your review is quantitative (meta-analysis), extract effect sizes, confidence intervals, and enough information to calculate pooled estimates later. RevMan is the standard tool for the meta-analysis itself.
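One calculation worth automating at extraction time: when a paper reports only a 95% confidence interval, the standard error you need for pooling can be recovered from the interval width. This is standard practice for approximately normal effect estimates.

```python
def se_from_ci95(lower: float, upper: float) -> float:
    """SE from a 95% CI of a normally distributed estimate: width / (2 * 1.96).
    For ratio measures (OR, RR), pass log-transformed bounds."""
    return (upper - lower) / (2 * 1.96)

se_from_ci95(0.10, 0.90)   # -> 0.204...
```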
Every included study needs a risk-of-bias assessment. The tool depends on your field:

- Randomized trials: Cochrane RoB 2
- Non-randomized intervention studies: ROBINS-I
- Observational studies: Newcastle-Ottawa Scale
- Diagnostic accuracy studies: QUADAS-2
- Qualitative studies: CASP checklists
Again, two assessors, third adjudicator. The output is usually a structured table in your review.
Depending on your review type:

- Quantitative (meta-analysis): pool effect sizes statistically. RevMan or R (metafor, meta packages) are standard.
- Qualitative or too-heterogeneous evidence: a structured narrative or thematic synthesis.

Don't force a meta-analysis if the underlying studies are too heterogeneous. A well-structured narrative synthesis is better than a misleading pooled estimate.
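For intuition about what RevMan or metafor compute under the hood, here is a self-contained sketch of inverse-variance pooling with the DerSimonian-Laird random-effects adjustment and the I² heterogeneity statistic. It is illustrative, not a substitute for those tools.

```python
def pool(effects, variances):
    """Inverse-variance pooling: fixed effect, DL tau^2, random effects, I^2."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    # Cochran's Q and DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1 / (v + tau2) for v in variances]
    random = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return {"fixed": fixed, "random": random, "tau2": tau2, "I2": i2}

print(pool([0.20, 0.35, 0.12], [0.01, 0.02, 0.015]))
```

A large I² is exactly the "too heterogeneous" warning above: the higher it is, the less a single pooled number means.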
The structure is dictated by PRISMA 2020. Your paper must include:

- A title that identifies the work as a systematic review
- A structured abstract
- Methods covering eligibility criteria, information sources, full search strategies, and the screening, extraction, and risk-of-bias procedures
- A PRISMA flow diagram accounting for every record
- Registration details and any deviations from the protocol
Your reference list gets long. A systematic review typically has 50-300 included studies, plus another 50-100 citations for the methods and discussion. At this scale, formatting by hand is impossible. Cite as you write with Zotero, EndNote, or DEEPNOTIS, and switch styles only when you submit.
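Treating the bibliography as data also means you can lint it. A tiny sketch that flags duplicate citation keys in a .bib export; this is a rough regex check rather than a full BibTeX parser, and the filename is a placeholder.

```python
import re
from collections import Counter

def duplicate_bib_keys(path):
    """Citation keys that appear more than once in a .bib file."""
    text = open(path, encoding="utf-8").read()
    keys = re.findall(r"@\w+\s*\{\s*([^,\s]+)\s*,", text)
    return [k for k, n in Counter(keys).items() if n > 1]

print(duplicate_bib_keys("review.bib"))   # "review.bib" is a placeholder
```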
Recent tool improvements that will save weeks:
- AI-assisted screening prioritization (covered above) cuts title/abstract screening time substantially.
- Dedicated extraction modules in Covidence, DistillerSR, and SRDR+ replace spreadsheet wrangling.
- Reference managers now export your entire library as a clean .bib file, so the bibliography stays machine-readable from search to submission.

Systematic reviews have two bottlenecks: searching thoroughly and screening carefully. Spend time on the protocol so the search is comprehensive, use AI-assisted tools to prioritize but not replace screening, and treat your bibliography as data from day one so that the final write-up is a formatting exercise, not a copy-paste ordeal. The whole process takes most teams 9-18 months; the fastest teams we've seen, using the toolchain above well, finish in 4-6 months.