Artificial Intelligence Policy


Undark neither accepts nor publishes editorial text generated or edited by artificial intelligence tools (e.g., ChatGPT or Copilot). This prohibition covers entire stories as well as excerpts, boilerplate explanations, and newsletter copy. The only exception is when the use of AI itself is the subject of the story, in which case we will disclose its use and clearly flag any errors for readers.

While AI may be employed for limited non-editorial purposes, such as marketing emails or generating headline and social media post suggestions, human editors will always retain responsibility for final choices. Undark contributors may also experiment with AI in brainstorming or as a research aid — much as one might use Google Search or Wikipedia — but all original reporting, sourcing, and written expression must come directly from the journalist.

As always, journalists should fact-check their own work against multiple reliable sources prior to submitting a draft. At Undark, that work will be further fact-checked internally by our research team. Freelance or staff contributors who submit editorial work that is discovered to have been substantially produced by AI will not be invited to write for Undark again.

This policy exists for both practical and ethical reasons. Current AI systems are prone to factual mistakes, bias, plagiarism, and uninspired prose. More importantly, writing and editing are matters of judgment and craft, requiring careful thought about how best to convey complex ideas to our readers — something AI cannot replicate. Reporters must therefore verify any information obtained through AI tools against original sources, and under no circumstances should AI-produced text appear in our journalism without disclosure.

Undark will continue to evaluate new technologies as they evolve, but our standards for accuracy, originality, and integrity remain unchanged.

If you have questions about this policy, please send an email to [email protected].