PBL vs Doing The Same Thing: The Research Is Clear
Professor and author Scott McLeod just wrote an article (I share an excerpt below, but you can read the entire piece here) on recent research about common educational practices around testing and data.
As I wrote in my last article, the results of this report come as no surprise to many of us in education. Yet here we are, still doing the same things over and over again. Here is what Scott has to say about the recent report:
The Hechinger Report just published an article on how having teachers study student data doesn’t actually result in better student learning outcomes.
Think about that for a minute. That finding is pretty counterintuitive, right? For at least two decades now we have been asking teachers to take summative and formative data and analyze the heck out of them. We create data teams and data walls. We implement benchmarking assessments and professional learning communities (PLCs). We make graphs and charts and tables. We sort and rank students, and we flag and color-code their data… And yet, research study after research study confirms that all of it has no positive impact on student learning:
[Heather Hill, professor at the Harvard Graduate School of Education] “reviewed 23 student outcomes from 10 different data programs used in schools and found that the majority showed no benefits for students.” … Similarly, “another pair of researchers also reviewed studies on the use of data analysis in schools, much of which is produced by assessments throughout the school year, and reached the same conclusion. ‘Research does not show that using interim assessments improves student learning,’ said Susan Brookhart, professor emerita at Duquesne University and associate editor of the journal Applied Measurement in Education.”
All of that time. All of that energy. All of that effort. Most of it for nothing. NOTHING.
No wonder the long-term reviews of standards-, testing-, and data-oriented educational policy and reform efforts have concluded that they are mostly a complete waste. We’re not closing gaps with other countries on international assessments. Instead, our own country’s achievement gaps are widening. The same patterns are occurring with our own national assessments here in the United States. Similarly, our efforts to ‘toughen’ teacher evaluations also show no positive impact on students. It’s all pointless.
Why do we never ask for the “research” to support our current practices?
When I'm presenting on PBL or working with teachers around the country on implementing project-based learning, I'm often asked to share the research that supports using this approach in schools.
I have no problem sharing the research (there is plenty to support this work from K-12 and beyond), but I'm often left scratching my head at why we don't ask the alternative question: Where is the research to support what we are currently doing?
As shared in Scott's blog post, the recent research on testing, data use, teacher evaluations, and instructional practices points to a general NEED to shift our approach.
The problem: Many educators have been asking for this shift for a long time. Some have just gone ahead and shifted on their own because they know PBL, inquiry, and design thinking put kids in learning situations where they can grow and achieve.
Here is what I said in my last article, and it bears repeating:
There are too many people who want school to stay the same, even as many of us educators are shouting from the rooftops that things have to change.
The solution: If we look at the research, it already paints a clear picture of what works (PBL, inquiry, choice, and experiential learning). Check out my research roundup here.
Can we change these practices and shift to a PBL approach? How can we make this happen under the current circumstances? I'll be answering both of those questions in my next post.
Thanks for all you do to shift the work!