Judge Restores NEH Grants After DOGE Used ChatGPT to Screen for DEI
A federal judge restored more than $100 million in NEH grants after DOGE used ChatGPT and keyword searches to screen for DEI.
A federal judge has restored more than $100 million in humanities grants after finding that the Department of Government Efficiency used ChatGPT and keyword searches to help cancel awards tied to diversity, equity, and inclusion. The Verge reported that US District Judge Colleen McMahon ruled the cancellations unconstitutional in a 143-page decision issued Thursday.
According to The Verge, the case grew out of a 2025 lawsuit filed by humanities groups after DOGE moved to eliminate National Endowment for the Humanities funding. McMahon wrote that DOGE used the presence of protected characteristics to disqualify grants from continued funding, restoring awards that had been cut through what the court described as a prejudicial process.
The decision gives a rare courtroom view into how generative AI can become part of official decision-making without clear definitions, review, or accountability. The Verge cited testimony from DOGE staffer Justin Fox, who said he asked ChatGPT to decide whether grant descriptions related to DEI and to answer in under 120 characters. Fox testified that he did not define DEI for ChatGPT and did not know how the chatbot understood the term.
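The testimony describes a prompt pattern worth pausing on: a classification question built around a term the operator never defined, with a hard cap on the answer length. The exact wording DOGE used is not public; the sketch below is a hypothetical reconstruction that only illustrates that structure.

```python
# Hypothetical reconstruction of the screening-prompt pattern described in
# testimony. The actual prompt text is not public; this shows only the
# structure: an undefined term ("DEI") and a 120-character answer cap.

def build_screening_prompt(grant_description: str) -> str:
    """Build a yes/no screening prompt that never defines 'DEI'."""
    return (
        "Does the following grant description relate to DEI? "
        "Answer in under 120 characters.\n\n"
        f"Grant description: {grant_description}"
    )

prompt = build_screening_prompt("Oral histories of rural farming communities.")
```

The flaw is visible in the prompt itself: because "DEI" is never defined, the model supplies its own interpretation, and the operator, by his own testimony, cannot say what that interpretation was.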
The Verge also reported that Fox and colleague Nate Cavanaugh helped eliminate 97 percent of NEH grants. The ruling described additional searches for what Fox called “Detection Codes,” including terms such as BIPOC, Native, Indigenous, immigrant, LGBTQ, homosexual, and gay. McMahon said those searches turned protected characteristics into operative criteria for revoking federal support.
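The "Detection Codes" search amounts to treating a flat term list as a decision rule. The term list below comes from the ruling as reported; the matching logic is an assumption for illustration, chosen to show how crude substring scans sweep in unrelated grants.

```python
# Hypothetical sketch of the keyword triage the ruling describes. The term
# list is from the reported "Detection Codes"; the case-insensitive
# substring matching is an illustrative assumption.

DETECTION_CODES = [
    "BIPOC", "Native", "Indigenous", "immigrant",
    "LGBTQ", "homosexual", "gay",
]

def flag_for_cancellation(description: str) -> list[str]:
    """Return the detection codes found in a grant description.

    Substring matching means 'Native' fires on 'native grasses' just as
    readily as on a grant about Native communities: the scan turns the
    mere presence of a protected-characteristic term into an operative
    criterion, which is what the court objected to.
    """
    text = description.lower()
    return [code for code in DETECTION_CODES if code.lower() in text]

# A botany grant is flagged by the same rule as a cultural-history grant.
flag_for_cancellation("Restoring native grasses of the Great Plains")
```

Nothing in this pipeline distinguishes context, intent, or legal relevance; it only reports that a string appeared, which is exactly why the court treated it as evidence rather than analysis.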
The ruling matters beyond one grant program because it shows how quickly AI-assisted triage can cross legal lines when public agencies treat model output or crude keyword scans as a decision system. The Verge reported that the court tied the process directly to unconstitutional grant cancellations, not merely sloppy internal analysis.
For government technology teams, the lesson is blunt: automation does not launder a bad policy. If an agency cannot explain the definitions, data, and human review behind an AI workflow, a court may treat the system less like efficiency software and more like evidence of discrimination.