Most citation mistakes are accidents — a missing comma, the wrong year, an outdated style. This guide is about the other kind: the citation practices that cross from sloppy into unethical. They're rarely caught by reviewers. They're almost never intentional. And they can still end a career.
This is the conversation most supervisors mean to have with their graduate students and never quite do.
The academic term is "ghost citation" — citing a paper based on someone else's description of it, without having read the original. It's extremely common, especially for older or foundational works that everyone cites but few read.
Why it's a problem: you're vouching for a claim you can't verify. If the author you're relying on misrepresented the original, you perpetuate that misrepresentation. When a field does this at scale, it produces citation chains that have drifted far from what the original paper actually said.
The fix: If you genuinely cannot access the original, cite as secondary — APA 7 uses (Smith, 1985, as cited in Jones, 2020). Most styles have equivalent formulations. You're not penalized for using a secondary source; you are penalized for pretending it was a primary source.
Practical rule: every claim you cite, you've read the source of. If that's impossible for a specific citation, make the secondary source explicit.
A step beyond ghost citation: citing a paper for a claim it doesn't support, or even contradicts. This ranges from misreading a nuanced finding to deliberately misrepresenting inconvenient results.
The AI-generated research summaries now floating around have amplified this problem. If you ask ChatGPT to summarize a paper and cite the result, you've outsourced your scholarly judgment to a tool that can hallucinate. Don't.
The fix: when you cite a paper for a specific claim, that claim should be findable in the paper, in the author's words or a close paraphrase. If you have to stretch to make the paper support your point, it doesn't support your point.
A good test: open the paper to the exact passage you're citing. If you can't locate it in under 30 seconds, you probably don't remember what the paper actually said. Re-read before you submit.
You know a paper that contradicts your thesis. You leave it out of your review. No one would ever notice.
This is the ethical failure that most affects the scientific record. It's how fields develop biased consensus: every study that supports the prevailing view is cited, every contrary result is quietly ignored.
The fix: cite the contrary evidence and engage with it. A review that says "findings are mixed: X and Y support position A (Smith 2020, Jones 2022); however, Z has argued against this (Patel 2023)" is stronger than one that ignores Patel. Reviewers and readers respect honesty more than the appearance of consensus.
A good test: if a reader who disagreed with you read your review, would they find their strongest arguments represented? If not, you've narrowed the literature to fit your conclusion.
Some self-citation is normal — you're building on your own prior work. Excessive self-citation is a known problem, especially since it inflates your h-index.
In 2026, most journals track self-citation rates during peer review, and some reject papers with excessive self-citation outright. Elsevier journals have flagged authors whose self-citation rate exceeds 25%.
The fix: self-cite only when the prior work is directly relevant, and say so. "In previous work (Smith 2021), we showed X; here we extend that to Y" is fine. Tacking on five of your own papers to a statement any paper could support is not.
A good test: if you remove a self-citation, does the reader lose information they need? If not, it's probably inflation.
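If you want a number to go with that test, the rate journals look at is simple arithmetic: self-citations divided by total references. Here's a minimal sketch in Python, assuming you can list the DOIs in your bibliography and the DOIs of your own published work; the function name and the example DOIs are invented for illustration, and any threshold (like the 25% figure above) is journal-specific, not universal.

```python
def self_citation_rate(references, own_works):
    """Fraction of a reference list that points to the author's own papers.

    references: iterable of DOIs cited in the manuscript
    own_works: set of DOIs the author has published
    """
    refs = list(references)
    if not refs:
        return 0.0
    self_cited = sum(1 for doi in refs if doi in own_works)
    return self_cited / len(refs)

# Demo with made-up DOIs: 2 of 4 references are self-citations.
mine = {"10.1000/me.2021", "10.1000/me.2022"}
bibliography = ["10.1000/me.2021", "10.1000/other.2019",
                "10.1000/other.2020", "10.1000/me.2022"]
rate = self_citation_rate(bibliography, mine)
print(f"{rate:.0%}")  # prints "50%"
```

A rate like this is a prompt for the "does the reader lose information" question above, not a verdict on its own.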
Citation cartels are groups of authors who systematically cite each other's work to boost impact factors and citation counts. The practice is sometimes organized (journal editors coercing authors to cite the journal itself) and sometimes emergent (communities of authors who all cite each other's work regardless of fit).
Clarivate has been tracking this and has de-listed journals for participation. Authors who participate in cartels face reputational damage — and increasingly, formal sanctions.
The fix: cite based on relevance, not relationship. If you'd cite the paper if it were written by a stranger, cite it. If you're only citing it because someone sent you an email asking for a citation, don't.
Coercive citation is related, but operates at the editorial level. Some journals (less common now, but it still happens) ask you during revision to add citations to papers in their journal. Sometimes the suggestions are legitimate; sometimes they're clearly coercive.
The fix: add the citation only if you would have cited the paper anyway. If a reviewer suggests "add these three papers from our journal" and none of them are on-topic, politely decline in your response to reviewers. You're allowed to.
A paper gets retracted — fraud, error, methodological failure. But old citations to it remain in the literature, and new researchers sometimes cite the retracted paper without knowing it's been retracted.
The fix: use Retraction Watch or your reference manager's retraction-flagging feature (Zotero and DEEPNOTIS both check reference lists against the Retraction Watch database). Before submission, run a check for any retracted references in your bibliography. If you must cite a retracted paper (e.g., to explain why the field changed direction), flag it explicitly: [Retracted] in the reference and a note in the text.
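The pre-submission check above is easy to script. A minimal sketch in Python, assuming you have a local CSV export of the Retraction Watch dataset (freely available via Crossref) and a list of the DOIs you cite; the column name "OriginalPaperDOI" and all example DOIs here are assumptions for illustration, so check the header of the file you actually download.

```python
import csv

def load_retracted_dois(csv_path):
    """Build a set of retracted DOIs from a local CSV export.

    ASSUMPTION: the dataset names its DOI column "OriginalPaperDOI";
    verify against the real file before relying on this.
    """
    retracted = set()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            doi = (row.get("OriginalPaperDOI") or "").strip().lower()
            if doi:
                retracted.add(doi)
    return retracted

def flag_retracted(bibliography, retracted):
    """Return the cited DOIs that appear in the retracted set."""
    return [doi for doi in bibliography if doi.strip().lower() in retracted]

# Demo with a hand-built set instead of the real download:
retracted = {"10.1000/bad.2015"}
cited = ["10.1000/bad.2015", "10.1000/fine.2020"]
print(flag_retracted(cited, retracted))  # prints "['10.1000/bad.2015']"
```

Reference managers with built-in retraction flagging do the same lookup for you; a script like this is only worth it if your workflow sits outside those tools.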
Using ChatGPT, Claude, or any generative AI to draft portions of your paper without disclosing it is now a research-integrity violation at most journals. Most major publishers (Nature, Elsevier, Springer, Wiley) now require explicit disclosure.
The fix: disclose. A sentence in the methods or acknowledgments covering what AI was used for. If the AI generated specific content that ended up in the paper, cite the AI properly (see our guide to citing AI-generated content).
If the AI was used purely for polishing prose or summarizing your own notes, a disclosure statement is sufficient — no formal citation needed.
Harder to define: taking the structure of someone else's literature review and citing the same papers in the same order, without doing your own independent reading. You haven't stolen their words, but you've stolen their scholarship.
The fix: when you find a well-written review, use it as a starting point, not a template. Go to each paper it cites, read it, decide whether it belongs in your own review, and look for papers that weren't in theirs. Your finished review should include sources their review lacked.
Found an incorrect citation in your own published work? Don't wait. Journals have correction processes precisely for this. A submitted correction is a signal of integrity, not weakness. The worst outcome is someone else noticing first.
For errors in work you haven't yet submitted: fix and move on. No stigma.
Before submitting any paper with citations you're unsure about, ask:

1. Have I read every source I cite, or clearly marked the secondary citations?
2. Does each paper actually say what I cite it for?
3. Have I represented the contrary evidence, not just the supporting studies?
4. Does every self-citation earn its place?
5. Have I checked my bibliography against the retraction databases?
If you answer yes to all five, you're in good shape. If any of them gives you pause, fix it before submitting. Your future self will thank you.
Citation ethics is something academia has always been sloppy about, and digital tools have made some problems worse while solving others. But most researchers aren't malicious — they're busy, they have too much to read, and they're working in a system that rewards output volume over care.
The best thing you can do for your field is model careful citation practices in your own work. Grad students watch what their advisors cite. Reviewers watch what submitted papers cite. Citation norms don't change with policies; they change with examples.