Quantitative study of school programs

On reviewing last winter’s issue of Independent School Magazine, I was struck by stories of schools conducting rigorous studies of their own practice, particularly quantitative studies. Granted, the issue’s theme was “Assessing What We Value,” but turning the lens of assessment inward onto school practice seemed to me a significant additional step.

In the article “The Role of Noncognitive Assessment in Admissions,” the author described several schools that are collecting new kinds of information about students: traits that might help predict school success. One school, Choate Rosemary Hall, found statistically significant correlations between student-reported self-efficacy, locus of control, and intrinsic motivation and GPA.
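To make the kind of analysis concrete, here is a minimal sketch of computing a Pearson correlation between a student-reported trait score and GPA. The numbers below are invented for illustration; they are not Choate’s data, and a real study would also test significance and control for confounds.

```python
# Sketch: Pearson correlation between a survey trait score and GPA.
# All data here is hypothetical, for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical self-efficacy scores (1-5 scale) and GPAs for eight students.
self_efficacy = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.4]
gpa           = [2.9, 3.6, 2.7, 3.8, 3.4, 2.5, 3.9, 3.1]

r = pearson(self_efficacy, gpa)
print(f"r = {r:.2f}")
```

A correlation coefficient near 1 would suggest the trait tracks GPA closely; whether it *predicts* success is a further question requiring out-of-sample validation.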

Winners of 2013 E. E. Ford grant awards included Castilleja School, which received funding to support the development of “meaningful and valid assessments of experiential learning, to apply these tools to improve the effectiveness of innovative experiential programs, and to share these best practices with other educators.” A $1 million budget, three-quarters of it raised by the school, supports this effort.

I am following a similar path here at U Prep. Whether the question is the predictive power of standardized assessments or the meeting agendas of our instructional leadership team, I find myself quantifying behavioral data, seeking patterns, and sharing the results with colleagues. Is this just coincidence?

While I have not rigorously studied and confirmed the possible existence of a trend toward quantitative program analysis (irony intended), it seems to me that several contributing factors might exist. Quantitative data is more easily collected, processed, and shared than ever before. Setting up a Google Form is trivial compared to the “old days” (actually just ten years ago), when we wrote online forms in Perl on our school web server. Data visualization has matured as a field, to the point where major news organizations prominently feature beautiful, illustrative graphic representations of data, and programming libraries make producing them easier. Publication and presentation tools easily incorporate such graphics. And the use of data to support conclusions has remained a respectable practice, notwithstanding occasional misuse.
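As a small illustration of how low the barrier has become, the sketch below tallies survey responses from the kind of CSV export a Google Form produces. The column names and sample rows are hypothetical; an inline string stands in for the downloaded file.

```python
# Sketch: summarizing form responses from a CSV export.
# The column names and data below are hypothetical.
import csv
import io
from collections import Counter

# In practice this text would come from a form's CSV download;
# an inline sample stands in for the file here.
sample = """Timestamp,Grade,How motivated do you feel? (1-5)
2013-10-01,9,4
2013-10-01,10,3
2013-10-02,9,5
2013-10-02,11,3
"""

ratings = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    ratings[row["How motivated do you feel? (1-5)"]] += 1

# Print a crude text histogram of the ratings.
for score, count in sorted(ratings.items()):
    print(f"{score}: {'#' * count}")
```

A task like this once meant writing and hosting a custom Perl CGI script; today it is a few lines against a spreadsheet export.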

In years past, schools rarely conducted quantitative studies of their own work without substantial external help or an internal reassignment. This lent a measure of respectability to the work: one would expect valid results from a consultant or from a faculty or staff member dedicated to the task. Now, with people like me studying school practice within the scope of our full-time jobs, the risk exists that we will reach conclusions that are not well supported by the data, or not well compared against results from other institutions. We have to be careful, as well as thorough.