By: Nikki Zeichner and Amy Perez (DOL), Olivia Martin, Faiz Surani,

Varun Magesh, Kit Rodolfa, Daniel E. Ho, and Mihir Bhaskar

(The RegLab, Stanford University)


Artificial intelligence (AI) is increasingly at the forefront of the IT modernization marketplace. The White House also made the responsible use of AI a priority for federal agencies with Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, 2023. The Department of Labor is conducting a prototyping initiative to help UI programs and policymakers better understand the risks and benefits of AI in the context of their work.

At the onset of the COVID-19 pandemic, states saw a dramatic increase in Unemployment Insurance (UI) claims in a matter of weeks, at a time when their staffing levels were at their lowest in decades. Initial claims spiked from 220,000 per week to more than 6 million, an increase of roughly 3,000 percent, and stayed above 1 million per week for a year. Responding to this sudden and dramatic increase was extremely difficult for state UI programs, with limited staffing, limited resources, and aging technology among the biggest challenges.

While immensely burdensome, this sudden increase in UI claims also exposed an opportunity for states to become more efficient in processing claims by leveraging recent developments in artificial intelligence and AI-driven products, which are becoming increasingly prominent in the public sector marketplace. Like many other agencies, the Department of Labor (DOL) believes AI has the potential to be one of many tools that could help address the challenges described above.

DOL is working to shape a vision for implementing AI within UI administration in responsible and trustworthy ways. In response to growing interest from states in using AI-driven products, DOL has begun to research the possibilities and risks of applying AI in the UI space. Specifically, DOL has initiated a research initiative to explore how AI might help UI adjudicators with their work. Adjudication is one of the primary engines of the UI program: if it falters, the rest of the agency is impacted.

The Laborious Work of Adjudicators

In UI, adjudication is the process of reviewing claims to determine whether they meet eligibility criteria according to state and federal regulations. Adjudicators review applications but often need additional information to determine eligibility. A significant part of an adjudicator's duties is to conduct fact-finding efforts such as interviewing claimants and employers and submitting requests for additional information. Some eligibility issues require significant fact-finding while others require minimal or none. Being able to separate claims based on how much fact-finding they require could bring significant efficiencies. What if we could develop an AI model that could differentiate simpler claims from those that require additional fact-finding? And what if AI could improve how UI agencies conduct fact-finding?

By streamlining the adjudication process, AI could ultimately prevent unnecessary back-and-forth between a claimant and a state UI agency. This back-and-forth stresses an already strained system, can delay eligibility determinations and UI benefit payments, and sometimes leaves claimants waiting for weeks or months.

A New Research Initiative: Artificial Intelligence Adjudicator Assistance (AIAA) 

As mentioned above, DOL has begun a research initiative to explore questions posed by states about how AI might help adjudicators with their work. Through this initiative, AIAA, we are prototyping an AI model that could support adjudicators with non-discretionary parts of their work, such as sorting claims into those that need additional fact-finding and those that do not.

We are using historical, high-quality determinations in a locked environment to train the AI model, and we will use historical claims to test the model's ability to assist adjudicators. By prototyping with historical claims and using a closed model, we are creating a low-risk space for experimentation and learning so that the UI community can gain insights that would not be possible through conversation or desk research alone. We will document our work to help states learn about the process of developing an AI model, including what such a model does well and what it does not.

Goals and Deliverables of the AIAA

We plan to communicate how AI is applied on a broader scale across government through a series of blog posts aimed at UI practitioners and state policymakers. These blog posts will initially touch on questions such as:

  • What is AI? What are some ways in which AI has been used in administration for non-UI programs?
  • What do AI projects need? What are the pre-requisites, in terms of tech/data infrastructure and personnel?
  • What does trustworthy and responsible AI mean, and what steps should organizations concretely take to minimize the risks and harms of such tools?

Through the course of the prototyping effort, the focus of this pilot is to better understand what it looks like to scope and develop an AI system in the UI context, how such a system could support UI adjudication processes, and what risks or limitations may arise in doing so.

AIAA Partners: Stanford University’s RegLab and Colorado’s Department of Labor and Employment (CDLE)

Colorado’s Department of Labor and Employment (CDLE) is an integral member of this effort, providing historical claims data to train the AI model. Colorado has generously agreed to make historical data available and to work with the Department’s research partners at Stanford University to test how AI could have assisted with that universe of past claims, comparing the model’s results to human expertise past and present. In addition to data and technical infrastructure, CDLE is providing critical expertise to help design, build, and test this prototype model, with their feedback, insights, and needs at the forefront.

To build the AI prototype, explore use cases, and communicate our findings through the blog series, we are partnering with the multidisciplinary team at the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford University. The RegLab partners with government agencies and non-profits on high-impact projects aimed at modernizing government. The group has worked with several agencies to help them explore, learn, and deploy responsible uses of frontier technologies to better serve their missions and the public interest.

In addition to our partnership around UI, the RegLab is currently working with DOL both to pilot tools that assist with adjudication of workers' compensation claims and to contribute to the development of the agency's trustworthy AI guide. For the RegLab, this partnership is an exciting opportunity to build upon these experiences and continue to learn how tools like AI might alleviate some of the pain points faced by agency staff working to support people at the times when they need it most.


In the upcoming year, we look forward to sharing more about this effort. One thing we are certain about is that we want to support states in making decisions around AI that they can feel confident about, ones that will bring value both to the programs and to the claimants who rely on them. We are excited to use prototyping as a low-risk way to learn and grow in how we understand the role that new technologies should play in UI administration. We are also excited about the lessons that might help states shape future UI system improvements and ensure that they are safe and trustworthy.