A literature review is where most graduate projects quietly fall apart. Not because the researcher can't read, but because the workflow for holding hundreds of papers, themes, and arguments in a single coherent structure is not taught — and most of the tools used to teach it were designed in 2005.
This guide walks through a workflow that actually scales, one that has been refined across several cohorts of doctoral students and researchers handling 200-500 sources. It assumes you are writing a review because you need to write one — not as an abstract exercise.
Before you read a single paper, write one paragraph answering three questions:
The single biggest waste of time in a literature review is reading sources that fall outside a scope you never clearly defined. The scope paragraph is what protects you.
Once the scope is clear, you need a search that is systematic and reproducible. This matters whether or not you're doing a formal systematic review — a reader will always ask "how did you find these sources?", and "I searched around on Google Scholar" is not a good answer.
A workable search strategy has three elements:
Save the results of each database search as a .ris or .bib file. You will deduplicate them in the next step.
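For orientation, a RIS file is just tagged plain text. A journal-article record looks roughly like this (the values here are invented placeholders, not a real paper):

```
TY  - JOUR
AU  - Smith, Jane
TI  - An example article title
PY  - 2020
JO  - Journal of Examples
DO  - 10.1000/example.doi
ER  -
```

Every database's export dialog can produce this format, which is why it's the safest common denominator when you're pulling from several sources.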
Export every search result into a reference manager — Zotero, Mendeley, or DEEPNOTIS. Whatever you use, the critical step is deduplication before you start reading. The same paper will appear in three database searches under slightly different titles or author orderings, and de-duplicating after you've already annotated 200 PDFs is miserable.
DEEPNOTIS ships a free reference deduplicator that takes a mixed BibTeX/RIS file and groups duplicates by DOI first, then by fuzzy matching on title, first author, and year. Zotero's built-in "Duplicate Items" view does something similar but is less forgiving of author-name variations.
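The DOI-first, fuzzy-second strategy is easy to approximate yourself. Here is a minimal sketch in Python, assuming each record has already been parsed into a dict (the field names, the `0.9` similarity threshold, and the use of `difflib` are all my assumptions, not any tool's actual internals):

```python
from difflib import SequenceMatcher


def dedupe(records, title_threshold=0.9):
    """Group bibliographic records into duplicate clusters.

    Each record is a dict with optional keys: 'doi', 'title',
    'first_author', 'year'. A shared DOI always merges records;
    otherwise two records merge when first author and year agree
    and their titles are sufficiently similar.
    """
    clusters = []   # each cluster is a list of duplicate records
    by_doi = {}     # normalized DOI -> cluster it belongs to

    for rec in records:
        doi = (rec.get("doi") or "").strip().lower()

        # Exact DOI match wins immediately.
        if doi and doi in by_doi:
            by_doi[doi].append(rec)
            continue

        # Otherwise, fuzzy-match against each cluster's first record.
        placed = None
        for cluster in clusters:
            head = cluster[0]
            same_meta = (
                rec.get("first_author", "").lower()
                == head.get("first_author", "").lower()
                and rec.get("year") == head.get("year")
            )
            ratio = SequenceMatcher(
                None,
                rec.get("title", "").lower(),
                head.get("title", "").lower(),
            ).ratio()
            if same_meta and ratio >= title_threshold:
                cluster.append(rec)
                placed = cluster
                break

        if placed is None:
            placed = [rec]
            clusters.append(placed)
        if doi:
            by_doi[doi] = placed

    return clusters
```

Running this over a merged export collapses case differences in DOIs and small punctuation differences in titles, which covers the most common cross-database duplicates. Anything it misses, you catch by eye during screening.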
The goal at the end of this stage is a single, clean list of unique candidates, with no duplicates and no obvious out-of-scope work. This list will typically be two to three times larger than your final reference list.
This is where most people improvise, and most workflows break. You should have a note structure before you read the first paper, and you should use it consistently.
A good literature-review note template has roughly these fields:
Whether you use Obsidian, Notion, a plain folder of Markdown files, or your reference manager's built-in notes is less important than using the same structure for every paper. The connections field is the most important — it's what turns a reading list into an argument.
By the time you've read 30-50 papers, themes will emerge. Write them down as they do, before you've read everything. A common mistake is to try to "wait until I've read everything before I organize" — you'll never finish reading, and you'll forget what you thought about the first papers.
Synthesis is the step where you convert your notes into structure. A good way: color-code your notes by theme, then open all notes on one theme at once and write a paragraph summarizing what that body of work says. Those paragraphs become sections of your review.
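If your notes live as Markdown files, "open all notes on one theme at once" can be automated. The sketch below assumes a convention of my own invention: each note contains a line like `themes: measurement, replication`. It builds an index from theme to note files:

```python
from collections import defaultdict
from pathlib import Path


def notes_by_theme(notes_dir):
    """Map each theme tag to the note files that carry it.

    Assumes every Markdown note in notes_dir contains a line of
    the form 'themes: tag1, tag2' (a hypothetical convention).
    """
    themes = defaultdict(list)
    for path in sorted(Path(notes_dir).glob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if line.lower().startswith("themes:"):
                # Everything after the colon is a comma-separated tag list.
                for tag in line.split(":", 1)[1].split(","):
                    themes[tag.strip().lower()].append(path.name)
                break
    return dict(themes)
```

Pointing this at your notes folder gives you, per theme, the exact set of files to open side by side while you draft that theme's paragraph.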
Themes are usually some mix of:
Four themes is too few. Twelve is too many. Aim for five to eight.
A literature review is not a summary. It is an argument about the state of knowledge. Every section should answer two questions: what do we know? and what do we not yet know? The gap you identify at the end — the thing we don't yet know — is the justification for your own research.
The structure that works best for most reviews:
Resist the urge to go paper-by-paper. A review that reads "Smith (2020) found X. Then Jones (2021) found Y. Then Patel (2022) found Z" is a summary, not a review. A proper review reads "Early work in this area focused on X (Smith 2020, Jones 2021); later contributions complicated this picture by showing Y (Patel 2022, Lee 2023)."
To pull everything above together, here's a minimal modern toolkit:
The entire toolkit costs under $10 a month in 2026, or free if you're willing to self-host where possible. What you save, compared to the old manual workflow, is dozens of hours per review.
There is no tool that will read papers for you. AI summarizers are tempting, and for screening abstracts they are fine. But citing a paper you have not actually read — especially if you're relying on an AI-generated summary — is one of the fastest ways to get a reviewer to doubt your scholarship. A paper is a whole argument, with assumptions and caveats you'll miss if you only skim.
Read the papers you cite. The tool's job is to handle the metadata, the organization, the cross-references, and the formatting. Your job is still the thinking.