The figures below show the overall indicator scores and overall program scores for each state. The overall indicator score is the average of a particular individual indicator score across programs. The overall program score is the average of all individual indicator scores within a program. A state has a performance failure if an overall score is below 90%.
There is one figure for each indicator that can be assessed in PY 2024.
There is one figure for each program that can be assessed in PY 2024.
The figures below show the individual indicator scores. Each tab contains all the individual indicators within that program. A state has a performance failure if an individual indicator score is below 50%.
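The scoring rules above can be sketched in a few lines of Python. The scores, program names, and indicator names below are hypothetical; in the report, each individual score reflects a state's actual results relative to its adjusted level of performance.

```python
# Minimal sketch of the scoring rules, with hypothetical values.
# Keys are (program, indicator); values are individual scores in percent.
individual_scores = {
    ("Adult", "Employment Rate 2nd Quarter"): 95.0,
    ("Adult", "Median Earnings 2nd Quarter"): 105.0,
    ("Youth", "Employment Rate 2nd Quarter"): 85.0,
    ("Youth", "Median Earnings 2nd Quarter"): 92.0,
}

def overall_indicator_score(scores, indicator):
    """Average of one indicator's individual scores across programs."""
    vals = [s for (prog, ind), s in scores.items() if ind == indicator]
    return sum(vals) / len(vals)

def overall_program_score(scores, program):
    """Average of all individual indicator scores within one program."""
    vals = [s for (prog, ind), s in scores.items() if prog == program]
    return sum(vals) / len(vals)

def has_failure(scores):
    """Failure: any overall score below 90% or any individual score below 50%."""
    indicators = {ind for (_, ind) in scores}
    programs = {prog for (prog, _) in scores}
    return (
        any(overall_indicator_score(scores, i) < 90 for i in indicators)
        or any(overall_program_score(scores, p) < 90 for p in programs)
        or any(s < 50 for s in scores.values())
    )
```

In this hypothetical example, the Youth program averages (85.0 + 92.0) / 2 = 88.5%, below the 90% threshold, so the state would have a performance failure even though no individual score falls below 50%.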
These data tables list the values for each of the following data points, which are used to derive the performance scores. The definitions are as follows:
Click the button to download the full target level data shown in the tables below for all states, programs, and indicators.
This table shows the overall indicator score data for each performance indicator.
This table shows the overall program score data for each program.
This table shows the individual indicator performance data for the WIOA Adult program indicators.
This table shows the individual indicator performance data for the WIOA Dislocated Worker program indicators.
This table shows the individual indicator performance data for the **WIOA Youth** program indicators.
This table shows the individual indicator performance data for the Adult Education program indicators.
This table shows the individual indicator performance data for the Wagner-Peyser program indicators.
This table shows the individual indicator performance data for the Vocational Rehabilitation program indicators.
The analysis below gives an overview of how well the models performed in adjusting and assessing performance in PY 2024. This section summarizes the overall performance of the models and highlights the model variables that had a significant impact.
The figures below provide information on how well the models performed using PY 2024 data. While there are a few outlier results, these appear to be driven by state performance rather than the performance of the model. Click on the tabs below to see the figures for each of the program indicator models.
The distribution plots show how the adjustment process (using the statistical adjustment model) affected states' performance scores. The blue distribution shows states' performance scores in the hypothetical scenario of not using the statistical adjustment model to adjust negotiated levels; in this scenario, actual performance is compared to the original negotiated level. The green distribution shows performance scores when a state's actual performance is compared to the adjusted level of performance. Overall, the scores using the model adjustments are more in line with performance expectations: in most cases there are fewer high and low outlier scores, and the higher peak of the adjusted (green) distribution indicates less variance, with a larger concentration of scores around 100%.
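The two distributions being compared can be illustrated with a short sketch. All values here are hypothetical; the point is that a score is the same actual result measured against two different targets, and the adjusted scores cluster more tightly around 100%.

```python
import statistics

# Hypothetical actual results and targets for three states on one indicator.
actual = [61.2, 58.4, 70.1]
negotiated = [65.0, 65.0, 65.0]  # original negotiated levels (blue)
adjusted = [60.5, 59.0, 68.0]    # model-adjusted levels (green)

# A performance score compares the actual result to a target.
unadjusted_scores = [100 * a / t for a, t in zip(actual, negotiated)]
adjusted_scores = [100 * a / t for a, t in zip(actual, adjusted)]

# The adjusted scores show less spread around 100%, matching the
# narrower, higher-peaked green distribution in the plots.
spread_unadjusted = statistics.pstdev(unadjusted_scores)
spread_adjusted = statistics.pstdev(adjusted_scores)
```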
Below the distribution plots are scatter plots that show how the model predictions using the actual PY 2024 data (i.e., Estimate1) compare to the actual performance results reported for each program indicator. These plots highlight the variance in performance across the different program indicator models. Overall, the better-performing models have smaller residuals (i.e., points closer to the line, indicating that the prediction matches actual performance) and evenly distributed residuals (i.e., points spread evenly around the line).
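The residual checks described above can be sketched as follows. The predicted and actual values are invented for illustration; the calculations (residuals, their mean, and root-mean-square error) are standard summaries of how closely points sit to the prediction line.

```python
import statistics

# Hypothetical model predictions vs. reported results for one indicator
# across four states (values invented for illustration).
predicted = [62.0, 58.5, 70.0, 66.3]
actual = [61.2, 59.4, 70.1, 64.8]

# Residual = actual minus predicted; small residuals mean points near
# the line, and a mean near zero means they are spread evenly around it.
residuals = [a - p for a, p in zip(actual, predicted)]
mean_residual = statistics.mean(residuals)
rmse = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
```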
Plots of the Employment Rate 2nd Quarter after Exit indicator models for each program.
Plots of the Employment Rate 4th Quarter after Exit indicator models for each program.
Plots of the Median Earnings 2nd Quarter after Exit indicator models for each program.
Plots of the Credential Attainment Rate indicator models for each program.
Plots of the Measurable Skill Gains indicator models for each program.
Some model variables had a larger impact in determining the adjustment factor and resulting performance score for each program indicator model. A variable can have a large impact for several reasons, including the relative importance of that element to a particular performance outcome and the variance in how states report data for that variable. The bar plots below show the variables with the largest average adjustment for each model.
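One way to compute the quantity shown in the bar plots can be sketched as below. This is an assumption about the mechanics, not the Departments' exact method: it treats each variable's adjustment as its model coefficient times the gap between a state's value and a reference average, then averages the absolute adjustments across states. All variable names, coefficients, and state values are hypothetical.

```python
# Hypothetical coefficients and reference averages for two model variables.
coefficients = {"unemployment_rate": -1.8, "pct_low_income": -0.6}
reference_avg = {"unemployment_rate": 4.0, "pct_low_income": 30.0}
state_values = {
    "State A": {"unemployment_rate": 5.5, "pct_low_income": 35.0},
    "State B": {"unemployment_rate": 3.0, "pct_low_income": 28.0},
}

# Average absolute adjustment per variable across states: the kind of
# summary a "largest average adjustment" bar plot could rank.
avg_abs_adjustment = {
    var: sum(
        abs(coefficients[var] * (vals[var] - reference_avg[var]))
        for vals in state_values.values()
    ) / len(state_values)
    for var in coefficients
}
```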
Plots of the variables in the Employment Rate 2nd Quarter after Exit indicator models for each program.
Plots of the variables in the Employment Rate 4th Quarter after Exit indicator models for each program.
Plots of the variables in the Median Earnings 2nd Quarter after Exit indicator models for each program.
Plots of the variables in the Credential Attainment Rate indicator models for each program.
Plots of the variables in the Measurable Skill Gains indicator models for each program.
This report assesses state performance based on state reported performance indicator results for PY 2024 compared to the adjusted negotiated levels of performance. The negotiated levels are adjusted based on the estimates from the statistical adjustment models and the actual characteristics of the participants served and economic conditions within the state in PY 2024. This assessment focuses specifically on those indicators and scores that are being formally assessed in 2024.
The Departments of Labor and Education issued a notice indicating which primary indicators of performance will be assessed for PY 2024. The notice was published as Training and Employment Notice (TEN) 01-25, Program Memorandum 24-7, and FAQ 24-02 for each agency.
For PY 2024, the performance indicators being formally assessed are:
Quarterly Census of Employment and Wages (QCEW) data from January 1, 2025, through June 30, 2025, are significantly different from previous periods and contain an increased number of non-disclosed observations due to the lapse in federal funding. The impact of this period of QCEW data is limited to assessing Measurable Skill Gains (MSG). To ensure fair and accurate performance assessments, the Departments tested several approaches to address the missing QCEW data, including various imputation strategies and performing assessments with and without the two most recent quarters. MSG assessment results shown in this report were generated without the two most recent quarters and the calculated performance score failures are robust to the different data scenarios (i.e., performance was consistently below the required threshold).
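The robustness check described above can be sketched as a simple comparison across data scenarios. The scenario names and scores below are hypothetical; the logic mirrors the description: the MSG performance score is recomputed under each treatment of the missing QCEW data, and a failure determination is robust if every scenario yields the same outcome.

```python
# Hypothetical MSG performance scores (percent) under each data scenario.
scenario_scores = {
    "drop_last_two_quarters": 84.0,  # the approach used in this report
    "impute_missing_qcew": 86.5,
    "full_series_as_reported": 85.2,
}
THRESHOLD = 90.0  # overall-score failure threshold

# A scenario "fails" if its score is below the threshold; the
# determination is robust if all scenarios agree.
outcomes = {name: score < THRESHOLD for name, score in scenario_scores.items()}
robust = len(set(outcomes.values())) == 1
```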
For PY 2024, there are three ways that a state can have a performance failure:
WIOA performance assessments are calculated using: