At the heart of this initiative is the desire to speed up a notoriously slow and complex process. Today, writing a new regulation can take months or even years. By integrating Gemini, an advanced large language model developed by Google, the US Department of Transportation hopes to generate entire regulatory drafts in less than 20 minutes. As reported by ProPublica, the agency aims to automate up to 90% of the writing process, with human experts tasked with editing and correcting the final text.
Gregory Zerzan, a legal advisor to the department, is among those backing the project. He sees it as a way to overcome bureaucratic inertia. But his reported statement, “We don’t need perfect regulation on XYZ. We want something good enough. We want to flood the market,” has sparked significant concern over the potential risks of prioritizing speed over safety.
Gemini to Draft Majority of Regulatory Texts
According to internal presentations revealed by ProPublica, Gemini is being positioned as the primary engine for generating legal texts related to traffic safety and transportation laws. In this scenario, the AI would produce a first draft covering most of the regulation, leaving officials with a document to fine-tune rather than build from scratch.
While this may seem efficient on paper, experts point out that Gemini, like all current AI models, is prone to factual inaccuracies and “hallucinations”: errors in which the system produces convincing but false information.
In regulatory contexts, such mistakes could have real-world consequences, especially when the text deals with road rules, signage, or enforcement procedures. Former government official Mike Horton compared the idea to “letting a high school intern write the rules,” underlining skepticism even from within the administration’s own ranks.

Internal Rhetoric Questions Regulatory Priorities
Zerzan’s remarks have further fueled the debate. His focus on output volume over precision reflects a shift in priorities that some say is ill-suited to public safety regulation. “Good enough” might suffice in marketing or software development, but not in traffic laws, where ambiguity or oversight could directly affect drivers, pedestrians, and law enforcement.
The internal positioning of Gemini suggests that some officials are looking to AI not just as a tool, but as a central actor in policymaking. Critics worry this reliance could dilute the quality of regulation, especially when complex legal language must account for edge cases, precedent, and enforcement implications. The risk, they argue, lies in underestimating the human judgment traditionally required to write clear, enforceable rules.
Efficiency Comes with Democratic and Technical Trade-Offs
The move also raises questions about transparency and the role of public oversight. Traditionally, the regulatory process involves multiple layers of review, including expert consultation and public comment. Introducing AI into this framework, especially at the foundational stage of drafting, could shift power away from democratic mechanisms toward technical systems that are not always understandable or verifiable by the public.
The Department insists that human officials will still play a vital role. Yet without a clear framework defining how AI outputs will be validated, revised, or challenged, there is concern that speed could eclipse scrutiny. The use of AI might reduce bottlenecks, but if it also sidelines expert input or flattens complex issues into generic policy templates, it could undermine the very purpose of regulation.