Aligning achievement levels and assessments
This is the second of three posts based on a workshop presented in April 2015 at the annual conference of the New England Educational Research Organization.
Proficient. The passage of NCLB made it a national goal that 100% of students would be Proficient by 2014, and the law made it clear that proficiency would be determined by performance on a state assessment. What was not clear from the law was exactly what was meant by the term Proficient. We know that NCLB left it up to individual states to adopt their own content standards and to set their own criteria for Proficient performance on those standards. We also know that there was wide variation among the states in the level of student performance classified as Proficient; but that variation, and the claims of honesty gaps and truth-telling that flow from it, are not the focus of this discussion.
The focus of this piece is on the more basic question of what Proficient means within an individual state or consortium; specifically, the extent to which there is consistency among the following:
- the meaning and interpretation of proficient performance among educators, parents, and the general public,
- the state’s definition of proficient performance on its state assessment, and
- the design of the state assessment and the methods used to classify student performance as proficient.
When asked to describe the performance of a Proficient student, those not directly involved in the development of state assessments often describe performance that is consistent across standards (and over time) and that reflects a certain degree of mastery of those standards. Consistency and mastery appear to be two critical elements of common interpretations of Proficient performance.
Consistency and mastery are also often found as key components of state policy-level descriptions of student performance at the achievement level corresponding to Proficient.
When we dig a little deeper, state content-area descriptions often go into great detail about the knowledge and skills of students performing at the Proficient level. The following example describes the performance of students proficient in grade 6 mathematics.
Students in grade six at the proficient level have a good understanding of the concepts that underlie grade six mathematics, including integers, percentages, and proportions. They solve problems involving the addition of negative and positive integers, compare and order integers using visual representation, calculate percentages, and set up proportions from concrete situations. Their skills in algebra and geometry include solving one-step equations, writing expressions from word problems, solving problems involving rate, solving for the missing angle in a triangle or a supplementary angle, and identifying types of triangles. Proficient students also understand the basic concepts of probability and measures of central tendency. (California Department of Education)
The problem is that in most cases there is a disconnect between the specific claims made in those descriptions of Proficient performance and the design of the state assessment. In general, state assessments are comprehensive surveys of the range of content standards within a grade level. They usually contain no more than a few questions measuring a particular skill such as solving for the missing angle in a triangle or setting up proportions from concrete situations; certainly not enough items to claim consistency or mastery of the skill. The table below shows the percentage and number of points assigned to the four broad clusters of mathematics standards on the 2011–2012 NECAP sixth grade mathematics test.
Content Cluster                     Number of Points   Percentage of Points
Numbers and Operations                     27                  41%
Geometry and Measurement                   16                  24%
Functions and Algebra                      13                  20%
Data, Statistics, and Probability          10                  15%
Total                                      66                 100%
The Numbers and Operations category contains a sufficient number of points to support a claim of mastery or consistent performance within the cluster, but it is still not possible to determine student mastery of a particular skill. The 27 points in the cluster are distributed across multiple standards, each of which may include a variety of skills, as shown in standard 5-2:
M(N&O)–5–2 Demonstrates understanding of the relative magnitude of numbers by ordering, comparing, or identifying equivalent positive fractional numbers, decimals, or benchmark percents within number formats (fractions to fractions, decimals to decimals, or percents to percents); or integers in context using models or number lines.
How many different skills are contained in that single standard, and how many items would be needed to determine that a student had mastered each of them?
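To put a rough number on that question, here is a small back-of-the-envelope sketch (not from the original post; the per-item probabilities and the 2-of-3 decision rule are hypothetical) using a simple binomial model of how well a handful of items can separate students who have a skill from those who do not:

```python
from math import comb

def p_at_least_k(k: int, n: int, p: float) -> float:
    """Probability of answering at least k of n items correctly,
    assuming each item is answered correctly with independent
    probability p (a simplifying assumption)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical rule: "mastery" is credited when a student answers
# at least 2 of the 3 items measuring a skill correctly.
guesser = p_at_least_k(2, 3, 0.5)  # student with a 50% chance per item
master = p_at_least_k(2, 3, 0.9)   # student with a 90% chance per item
print(f"non-master passes: {guesser:.0%}, master passes: {master:.0%}")
# → non-master passes: 50%, master passes: 97%
```

Under these assumptions, half of the students with only a coin-flip chance on each item would still appear to "demonstrate" the skill; reliably separating masters from non-masters takes many more items per skill than a comprehensive survey test can afford.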
Most important, to be classified as Proficient on a state assessment, a student may need to earn only 65%–70% of the points on the test. It is difficult to make claims of mastery and consistency when a student has earned 70% of the points on a test. Let’s consider two students, Sue and Kevon, who each earned 70% of the points on the test. We do not know whether Sue has mastered calculating percentages, adding negative integers, and understanding measures of central tendency. We do not know whether Kevon performed consistently across all of the standards or clusters of standards. We do not know which items Sue and Kevon answered correctly or, perhaps more important, which items each answered incorrectly to earn their score of 70%. We do know that there are more ways to earn a score of 70% on the sixth grade mathematics test than there are students in the state; but this topic will be considered in more detail in the final installment of this three-part series of posts: One in a Million, A Million to One.
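The scale of that last claim is easy to check with a quick calculation. As a simplification (actual NECAP tests include multi-point items, so this understates nothing essential but is not the exact count), treat the test as 66 one-point items and count the distinct sets of items a student could answer correctly to earn exactly 46 points, roughly 70% of the total:

```python
from math import comb

TOTAL_POINTS = 66                    # total points on the grade 6 math test
TARGET = round(0.70 * TOTAL_POINTS)  # ≈ 46 points for a 70% score

# Simplifying assumption: every item is worth one point, so each
# distinct set of 46 correct items is one way to earn the score.
ways = comb(TOTAL_POINTS, TARGET)
print(f"{ways:,} ways to earn {TARGET}/{TOTAL_POINTS} points")
```

Even under this simplified model, the count runs to many orders of magnitude more than the student population of any state, which is the point: two students with identical scores can have answered very different sets of items correctly.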
So what do we know when a student has earned 70% of the points on the sixth grade mathematics test? We know that the student earned 70% of the points on a test that contains items assessing a representative sample of grade level standards, and that the state has determined that level of performance to be Proficient. We expect Proficient students to be prepared to succeed at the next grade level, but based solely on the state assessment we cannot know what specific skills an individual student has mastered.
CCR is the new Proficient
Now we have moved on from Proficient to College- and Career-Ready. That sounds like progress. College- and Career-Ready feels much more grounded in reality, and more tangible, than Proficient. There can be no race to the bottom with college- and career-readiness. We should be able to easily verify that students classified as ready for college or a career were, in fact, ready. If the majority of students labeled college-ready on the state assessment must enroll in non-credit-bearing courses in college, then it is safe to conclude that something is not right with the CCR score on the state assessment. Perhaps the cut score was set too low. Perhaps the test measured the wrong content knowledge and skills. Or maybe it is something we hadn’t thought of before! Maybe college readiness doesn’t come from a score. Maybe college readiness… perhaps… means a little bit more. (Acknowledgements and apologies to David Conley and Dr. Seuss)
Whatever the reason, we can determine how well performance on a mathematics and English language arts assessment predicts whether students are ready for college – even if they never plan to take a formal mathematics or English course in college. There is much less room for misinterpretation with college-ready than there was with Proficient. A college-ready student is a student who is ready for college. Sure, we can worry about things like what kind of college and how ready, but why quibble over details?
What we may have to give up by focusing on the predictive power of the test rather than its content, however, are claims of a tight link between a student’s test score and the specific knowledge and skills that she has mastered.
Then again, as they say, can you really miss something you never had to begin with?