Study Finds More Than Half Of AI's References Are Fabricated Or Contain Errors
Nov 17, 2025 | 7:02 PM

Deakin University researchers found that ChatGPT's GPT-4o model fabricated nearly 20% of academic citations when writing mental health literature reviews. Of 176 citations generated, only 77 were both real and accurate, meaning 56.2% contained errors or were completely fake. Lead author Jake Linardon found that citation accuracy varied with topic familiarity: depression citations were 94% real, while binge eating disorder and body dysmorphic disorder saw fabrication rates near 30%. Among fabricated citations with DOIs, 64% linked to real but unrelated papers, making the errors harder to detect. The study, published in JMIR Mental Health, tested three psychiatric conditions with varying volumes of published research. Recent surveys show nearly 70% of mental health scientists use ChatGPT for research tasks, raising concerns about the need to verify AI-generated references.
