CONTEXT IS KING
Context matters. It matters for students and it matters for schools.
As educators, we know there’s a lot to consider when supporting a student, including:
- Home life
At the same time, we know we can’t make excuses for students. Instead, we need to focus on what it will take to help this individual student be successful.
Polarys takes the same approach to analyzing school accountability data. We understand there are multiple factors beyond an administrator’s control that impact their outcome data. These can include:
- Student readiness to learn
- Neighborhood economics
- Access to high-speed internet
- Parental education levels
We believe school benchmarks and outcomes analysis should take into account the most powerful factors that impact student outcomes but are NOT driven by the school. Every school leader needs meaningful benchmarks that reflect their unique opportunities and challenges, with a goal of continuous improvement.
Raw data is incomplete information.
Research shows that student test scores are impacted by many factors beyond the school’s sphere of influence. These can include food insecurity, parental education level, neighborhood context, and many more. Some researchers find that these factors explain almost 75% of the variation in an institution’s scores on state exams*. Despite this, aggregated exam scores are used by states, districts, and the public to judge the effectiveness of schools.
The Polarys algorithms reduce the impact associated with these factors on school test results by two-thirds.
Leaders Need Personalized Support, Too
The key to Polarys is in creating individualized accountability models for each school. Polarys builds unique models and benchmarks for individual schools, based on their students and their context.
Our approach is to statistically control for the six non-institutional factors most commonly linked to student outcomes:
- Student demographics
- English language learner status
- Student poverty status
- Special education status
- Test variability
- Grade levels served
This allows us to home in on the portion of a school’s aggregated test score that is attributable to the school itself. The unique benchmarks created by Polarys allow school leaders to:
- Identify the school’s specific impact on student learning
- Increase focus on learning by letting go of the worry that demographic shifts, redistricting or changes in test focus or difficulty will drive down test results
- Track progress over time
UNDERSTANDING POLARYS SCORES
State accountability systems typically compare schools to one another. Polarys Scores are a totally different approach.
Polarys Scores are generated by creating a demographically identical model school for every school, using statewide data, and then asking “did the real school get more or fewer points than its model on the state exams?” [see Building Model Schools below]
Your Polarys Score is the number of points your school scored above or below its unique model school on the state exam.
Say your school, Union Elementary, got its aggregated state scores back. They show that your school was 68% proficient in math and 66% proficient in reading.
Typically, school leaders would focus their attention on reading programs to increase performance. But what if the reading test was especially hard? Or your school happens to have a particularly high LEP population? Looking at raw test data without factoring in these variables can encourage schools to focus on the wrong areas for improvement.
By creating model schools based on the school’s population and using real student performance data from the very same test, Polarys provides a unique and useful benchmark for school performance.
For example, if your school scored 66% on the state reading exam and your model school scored 65%, you would have a Polarys Score of +1%. This means that your school was 1 percentage point more proficient than its model school on the reading exam. This suggests your programming is working! You have room to improve, but don’t assume you need to go back to the drawing board.
Now let’s consider your math scores. If your school scored 68% on the state math exam and your model school scored 71%, you would get a Polarys Score of -3%. This means that you should consider focusing on math programming, even though your raw math score was higher than your raw reading score.
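The arithmetic behind these examples is simply the school’s proficiency minus its model school’s proficiency. A minimal sketch (the function name and the Union Elementary figures are illustrative, not part of the Polarys product):

```python
def polarys_score(school_pct: float, model_pct: float) -> float:
    """Points above (+) or below (-) the model school benchmark."""
    return school_pct - model_pct

# Union Elementary figures from the example above
reading = polarys_score(school_pct=66, model_pct=65)  # +1: outperforming the model
math = polarys_score(school_pct=68, model_pct=71)     # -3: trailing the model
print(reading, math)
```

A positive score means the school beat its benchmark; a negative score flags an area worth a closer look, regardless of how the raw percentages compare.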
PROGRAM ANALYSIS: EFFECTIVENESS & EFFICIENCY
Polarys allows you to choose which demographic variables are most important to you. Let’s go back to our Union Elementary example. Now that you know you need to focus on math programming, can we narrow further?
It’s essential to understand that Polarys analysis always includes ALL students. By focusing on just one variable, you are asking “how efficient and effective is our programming plan when we focus on one subgroup?” We are NOT asking how well this subgroup performed on its own.
At Union Elementary, we can see that the school scores best when only adjusting for poverty and worst when only adjusting for LEP population.
This means that the programming choices around IEP services and around supporting low-income students are more efficient and effective than the programming choices around serving LEP students.
This could mean:
- You have a typical to high LEP population served by an ineffective program, OR
- You have a small LEP population, but are investing heavily in support services. This could be a drain on overall programming investments, resulting in an inefficient program.
This is a useful lens when considering how to allocate limited school resources. In scenario 1 (ineffective LEP services), the school will want to look for better programs and/or reallocate resources to improve the impact of the services they do provide. In scenario 2 (inefficient LEP services), the school will want to consider how they can reallocate resources from LEP services to improve programming for all students.
UNDER THE HOOD
Polarys & School Accountability
Polarys is not intended to act as an assessment measure; it is designed to support targeted school improvement. Technically speaking, a Polarys Score is a “status measure,” meaning it represents a point-in-time measure of student proficiency, not a “growth measure” that indicates the change in student performance year-over-year. It is also not a Value-Added Measure (VAM), which uses past data to build predictive models of expected individual student performance.
Instead, Polarys Scores answer the question “how successful is this school in educating its unique group of students?” which is different from the questions answered by standard status or growth measures.
The Polarys modeling approach uses broadly administered, single year test scores. This allows us to remove the impact of:
- Test variability
- Changes in student body profile
- Grade level variability
Polarys analysis can be applied to any outcomes data (testing and non-testing) that is consistently and broadly collected.
Building Model Schools
Polarys analytics are based on our use of Model Schools. It’s important to know that Model Schools are NOT real-world schools! Polarys uses statewide testing data and student demographic data to model the performance on state exams of a hypothetical school serving a student body with the same demographic profile.
For every school in Polarys, we build a model school with its exact demographic profile, using state-wide student demographic data. Using state-wide student performance patterns, we then give each model school the proficiency scores they would have received, given the population they serve. This becomes the real school’s unique benchmark score.
Finally, we subtract the number of points the Model School got from the number of proficiency points the Real School received. Statistically, this is known as the residual. In the Polarys interface, they’re known as Polarys Scores.
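One common way to realize this idea statistically is ordinary least squares: fit a statewide model of proficiency as a function of demographic shares, take each school’s fitted value as its model school score, and take the residual as its score. The sketch below illustrates that general approach with made-up data; it is an assumption about the technique, not Polarys’s actual algorithm.

```python
import numpy as np

# Hypothetical statewide data: one row per school.
# Columns: share low-income, share LEP, share special education.
demographics = np.array([
    [0.40, 0.10, 0.12],
    [0.75, 0.30, 0.15],
    [0.20, 0.05, 0.10],
    [0.55, 0.20, 0.14],
    [0.65, 0.25, 0.11],
])
proficiency = np.array([70.0, 52.0, 82.0, 63.0, 58.0])  # % proficient

# Fit the statewide relationship between demographics and proficiency
# (linear regression with an intercept column).
X = np.column_stack([np.ones(len(demographics)), demographics])
coef, *_ = np.linalg.lstsq(X, proficiency, rcond=None)

# Each school's model-school score is its fitted value; the residual
# (actual minus model) plays the role of the Polarys Score.
model_scores = X @ coef
scores = proficiency - model_scores
```

In this framing, a school whose actual proficiency exceeds its fitted value gets a positive residual, and one design property of including the intercept is that residuals average to zero statewide, so scores are naturally centered.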
Polarys Scores Over Time
Because Polarys Scores are calculated for each year independently of prior years’ performance, they simplify longitudinal analysis of school progress. Polarys Scores can be tracked year-over-year, smoothing away the impact of test changes (including when states recalibrate exams, switch exams, or use multiple exams), demographic shifts, or redistricting, and capturing the impact of programming and structure on student learning.
*Sutton, A. and I. Soderstrom (1999). “Predicting elementary and secondary school achievement with school-related and demographic factors.” The Journal of Educational Research 92(6): 330.