Oklahoma can have rigorous standards, honest reporting, and useful assessment data—but not if we keep pretending one test can do everything.

In 2017, the Oklahoma State Department of Education hit the reset button on how the public should interpret state test results. The department called that year “a total reset” that would establish “a new baseline year.” That wasn’t just a technical adjustment; it was a deliberate attempt to redefine what “proficient” meant in Oklahoma.

The state’s own communication made the intention clear. OSDE recommended performance labels—“Basic” and “Below Basic”—because they “correspond to those used” by the National Assessment of Educational Progress (NAEP). And it went further, declaring that Oklahoma’s test results finally had “national comparability” aligned with the ACT/SAT “as well as NAEP.”

That may sound responsible—who doesn’t want honesty?—but it introduced a long-term problem we’re still living with: Oklahoma tied its public definition of “proficiency” to an external benchmark that does not measure exactly what Oklahoma teaches and tests. NAEP is valuable, but it is not Oklahoma’s standards or test blueprint. When a state assessment is built to measure the Oklahoma Academic Standards, “proficient” should primarily mean mastery of those standards—not a proficiency rate that looks right compared to NAEP.

What makes this more concerning is that the NAEP anchor did not fade away after 2017. In 2025, the Office of Educational Quality and Accountability (OEQA) described its decision to restore cut scores, in part, as a move to “align state assessments with national benchmarks like NAEP.” OEQA’s own explanation defines the “honesty gap” as the difference between the percent of students deemed proficient on state tests versus NAEP proficiency. And it explicitly frames 2017 as the moment Oklahoma aligned its standards and assessments with NAEP (and ACT/SAT), then describes 2024 as a departure from that alignment and 2025 as the restoration.

In other words, even when Oklahoma changes cut scores, the debate keeps circling back to the same premise: the “right” proficiency bar is the one that matches NAEP. That is not a measurement strategy; it’s a policy choice. And if policymakers want that choice, they owe the public more honesty about what it means.

Here is the hard truth: tying proficiency to NAEP does not solve alignment problems—it just moves labels around.

Cut scores can be tightened or loosened, and proficiency rates can rise or fall. But none of that changes what students actually know. It changes only how many students we call “proficient.” If the state’s goal is to improve NAEP outcomes, the solution is not to redefine proficiency so Oklahoma’s numbers resemble NAEP’s. The solution is to improve instruction, curriculum quality, and learning opportunities—and to ensure students routinely do the kind of reading and problem-solving NAEP demands. Redefining proficiency creates headlines; it does not create learning.

There’s another problem hiding in plain sight: the OSTP itself is not designed to do what too many people want it to do.

Oklahoma’s technical documentation is remarkably candid about this. The OSTP scores, it says, are a “point-in-time indicator” of student knowledge and skills relative to the Oklahoma Academic Standards.

That is a summative snapshot—not a diagnostic evaluation of which exact skills each child has mastered.

The report also explains why: statewide tests are generally designed to report performance on a “unidimensional” scale.

They can support an overall score, but they are not built to “tease out” fine-grained differences across subdomains. Most importantly, it warns that “differences in subscores are likely” due to measurement error rather than something “educationally meaningful,” and it plainly states that neither summative nor broad interim assessments are designed to guide “detailed instructional planning.”

That is a devastating but necessary admission for our accountability system: even if we wanted OSTP to be a tool for individual skill mastery, it cannot reliably do that job. When we treat strand scores like a checklist of mastered or unmastered skills, we misuse the instrument—and we mislead educators and parents.

So where does that leave Oklahoma?

First, Oklahoma should stop selling “proficiency” as a single, stable truth when it is obviously a policy decision. In 2017, the state redefined proficiency in the name of national comparability, and in 2025 OEQA reaffirmed that NAEP benchmarking remains a central reference point. If state leaders want to use NAEP as a benchmark, fine—but they must clearly separate two claims: (1) how students performed against Oklahoma standards, and (2) how Oklahoma compares to national measures. Those are not the same thing.

Second, if improving NAEP is a true priority, Oklahoma should do the work NAEP cannot do for us: align curriculum materials, strengthen disciplinary literacy instruction across science and social studies, build coherent knowledge-rich content, and provide teachers with the training and time needed to make those shifts. Don’t manipulate cut scores and call it improvement.

Third, Oklahoma should stop expecting the OSTP to diagnose individual students’ skill mastery. The technical documentation itself warns against it.

Use OSTP for what it is: a broad accountability snapshot. Then invest in classroom-level diagnostics and curriculum-embedded assessments that actually can pinpoint skill gaps and guide instruction.

Oklahoma can demand high expectations and honest results. But we cannot get there by outsourcing the meaning of “proficiency” to NAEP—or by pretending a 50-item summative test can tell parents exactly which skills their child has mastered.

If we want real improvement, we need real alignment and real instructional support—not another round of shifting cut scores and changing labels. Plain and simple, that’s going to take more money. The legislature likes to brag about increased investment in education in Oklahoma, but much of that money has been tied up in badly needed increases in teacher compensation.

It’s handy for politicians, school-choice advocates, and the educational hierarchy to point to a crisis in education. It plays well with voters, undermines traditional public schools, and justifies the employment of state- and national-level reformers. Teachers and school leaders see the struggles students face daily and have a far better understanding of each child’s strengths and weaknesses. We are all frustrated by a system designed to rank schools when we know the metrics are heavily influenced by the socioeconomic status of the school’s population.

Take, for instance, the Urban Institute’s demographically adjusted state rankings on the 2024 NAEP. Oklahoma, which sits among the bottom 10 states economically, looks much better in this ranking, placing 21st in 4th-grade math and 27th in 4th-grade reading.

Teachers, more than anyone, have long known that learning begins at home, and too many of our students come to us with needs the school is not equipped to meet. For real improvement, Oklahoma and the rest of the country need to invest in strengthening families. What would that look like? Enhance mental health supports at county health departments. Make housing more affordable and available for the lowest-income earners. Dedicate real money to building the infrastructure and personnel needed to train the growing pool of teachers who have not gone through a formal school of education. Reform the state’s assessment system to require regular benchmarking through the 10th grade, aligned to reading and math improvement. And, finally, let’s have absolute transparency with the public about what the state’s accountability system actually does: rank communities based mainly on the socioeconomic status of their student populations.

