After months of heated debate and previous attempts to restrict the use of large language models on Wikipedia, on March 20 volunteer editors accepted a new policy that prohibits using them to create articles for the online encyclopedia.
“Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies,” Wikipedia’s new policy states. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”
The new policy, which was accepted in an overwhelming 40 to 2 vote among editors, allows editors to use LLMs to suggest basic copyedits to their own writing, which can be incorporated into the article or rewritten after human review, as long as the LLM doesn’t generate entirely new content on its own.
“Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” the policy states. “The use of LLMs to translate articles from another language’s Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation.”
I previously reported about editors using LLMs to translate Wikipedia articles and introducing errors to those articles in the process.
Wikipedia editor Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia and who proposed the guideline, said that it previously seemed unlikely such a policy would pass because the editor community had been divided on the issue. However, Lebleu said, “the mood was shifting, with holdouts of cautious optimism turning to genuine worry.”
“A few months ago, a much more bare-bones guideline had passed, only banning the creation of brand new articles with LLMs,” Lebleu told me in an email. “A follow-up proposal to reword it into something more substantial failed to pass, but was noted to have ‘consensus for better guidelines along the lines of and/or in the spirit of this draft.’ In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed.”
The policy was written with the help of WikiProject AI Cleanup, a group of Wikipedia editors dedicated to finding and removing AI-generated errors on the site. Editors have been dealing with an increasing number of AI-generated articles or edits lately, and have made some minor adjustments to the site’s guidelines as a result, like streamlining the process for removing AI-generated articles. Editors’ position, as well as the position of the Wikimedia Foundation, has been not to make blanket rules against AI, because Wikipedia already uses some forms of automation, and because AI tools could assist editors in the future.
The new policy doesn’t ban the use of other automated tools that are already in use, or future implementations, but it does show that the Wikipedia community is less optimistic about the benefits of AI-generated content and is taking a stand against it.
“In context, this has implications far beyond Wikipedia,” Lebleu said. “The same flood of AI-generated content has been seen from social media to open-source projects, where agents submit pull requests much faster than human reviewers can keep up with. StackOverflow and the German Wikipedia paved the way in recent months with similar policies, and, as anxiety over the AI bubble grows, I foresee a domino effect, empowering communities on other platforms to decide whether AI should be welcome. On their own terms.”
About the author
Emanuel Maiberg is interested in little known communities and processes that shape technology, troublemakers, and petty beefs. Email him at emanuel@404media.co

