TL;DR
ArXiv announced it will ban authors for one year if they submit papers containing AI-generated content without proper verification. The move aims to curb low-quality, AI-produced research and enforce responsibility.
ArXiv has announced that authors who submit research papers containing AI-generated content without proper verification will face a one-year ban from the platform, marking a significant step in regulating AI use in scientific publishing.
Thomas Dietterich, chair of arXiv’s computer science section, stated that if a submission contains incontrovertible evidence that authors did not verify AI-generated results—such as hallucinated references or unverified comments from language models—the authors will face a one-year suspension. After the ban expires, authors must have subsequent submissions accepted by reputable peer-reviewed venues before posting on arXiv again.
Importantly, the policy does not ban the use of AI tools outright; rather, it emphasizes that authors must take full responsibility for any content generated or influenced by AI, including plagiarized, biased, or erroneous material. The decision is described as a “one-strike” rule: moderators flag issues, and section chairs must confirm the evidence before enforcement. Authors will also have the right to appeal bans.
Why It Matters
This move underscores the growing concern within the scientific community over AI-generated misinformation and fabricated citations, which have been linked to declining research integrity. By enforcing accountability, arXiv aims to maintain the quality and trustworthiness of preprint research, especially in fields like computer science and mathematics where rapid dissemination of ideas is critical.
The policy highlights a broader shift toward responsible AI use in academia, potentially influencing other repositories and publishers to adopt similar standards, thereby shaping future norms in scientific publishing.
Background
ArXiv, a leading open-access preprint repository for fields such as computer science and mathematics, has been grappling with the rise of AI-generated research. It has previously implemented measures such as requiring endorsement for first-time submitters to curb low-quality submissions. The move to enforce stricter penalties for unverified AI content reflects ongoing concerns about the integrity of scientific communication amid increasing AI adoption.
This policy update follows recent studies indicating a rise in fabricated citations, especially in biomedical research, attributed to large language models. The platform’s shift also coincides with its transition from hosting by Cornell to becoming an independent nonprofit, potentially giving it more flexibility to address emerging challenges like AI misuse.
“If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.”
— Thomas Dietterich
“This will be a one-strike rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty.”
— Dietterich

What Remains Unclear
What Remains Unclear
It is not yet clear how strictly the policy will be enforced across all submissions or how often bans will be applied. Details about the specific evidence required to trigger a ban and the appeals process are still being finalized.
What’s Next
ArXiv will begin implementing the policy immediately, with moderation teams monitoring submissions for AI-related issues. Further guidance on evidence standards and appeal procedures is expected in the coming weeks. The platform may also review and adjust the policy based on initial experiences and community feedback.

Key Questions
Can authors still use AI tools in their research?
Yes, but they must verify and take responsibility for any AI-generated content, ensuring it is accurate and properly cited.
What constitutes incontrovertible evidence of unverified AI use?
This includes hallucinated references, unverified comments from language models, or content that cannot be substantiated or checked by the authors.
Will this policy affect all submissions equally?
It primarily targets papers where AI-generated content is unverified or misleading; enforcement will depend on moderation and evidence detection.
How can authors appeal a ban?
Authors will have the right to submit an appeal, which will be reviewed by the moderation team before any ban is finalized.
Does this mean AI use is banned on arXiv?
No, AI use is not banned outright; the policy emphasizes responsible use and verification of AI-generated content.