One-size-fits-all math homework may be more helpful than you think

On the strength of those results, an MIT research organization singled out ASSISTments as one of the rare ed tech tools proven to help students. The Department of Education’s What Works Clearinghouse, which reviews education evidence, said the research behind ASSISTments was so strong that it received the highest stamp of approval: “without reservations.”

Still, Maine is an unusual state with a population that is more than 90% white and so small that everyone could fit inside the city limits of San Diego. It had distributed laptops to every middle school student years before the ASSISTments experiment. Would an online math platform work in conditions where computer access is uneven?

The Department of Education commissioned a $3 million replication study in North Carolina, in which 3,000 seventh graders were randomly assigned to use ASSISTments. The study, which was set to measure how well the students had learned math in spring 2020, was derailed by the pandemic. But a private foundation salvaged it: before the pandemic, Arnold Ventures had agreed to fund an additional year of the North Carolina study, to see whether students would continue to be better at math in eighth grade. (Arnold Ventures is among the many funders of The Hechinger Report.)

Those longer-term results were published in June 2023, and they were good. Even a year later, on year-end eighth grade math tests, the 3,000 students who had used ASSISTments in seventh grade outperformed 3,000 peers who hadn’t. The eighth graders had moved on to new math topics and were no longer using ASSISTments, but their practice time on the platform a year earlier was still paying dividends.

Researchers found that the lingering effect of practicing math on ASSISTments was similar in size to the long-term benefits of Saga Education’s intensive, in-person tutoring, which costs $3,200 to $4,800 per year for each student. The cost of ASSISTments is a tiny fraction of that, less than $100 per student. (That cost is covered by private foundations and federal grants. Schools use it free of charge.)

Another surprising result is that students, on average, benefited from solving the same problems as their classmates; no one assigned easier problems to weaker students or harder ones to stronger students.

How is it that this rather simple piece of software is succeeding while more sophisticated ed tech has often shown mixed results and failed to gain traction?

The studies can’t explain that exactly. ASSISTments, criticized for its “bland” design and for sometimes being “frustrating,” doesn’t appear to be luring kids into doing enormous amounts of homework. In North Carolina, students typically used it for only 18 minutes a week, usually split across two or three sessions.

From a student’s perspective, the main feature is instant feedback. ASSISTments marks each problem immediately, like a robo-grader. A green check appears when a student gets a problem right on the first try; an orange check, when it takes more than one attempt. Students can try as many times as they wish, or simply ask for the correct answer.
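
For readers curious about the mechanics, here is a minimal sketch in Python of what that feedback loop might look like. The class, fields and return values are illustrative assumptions, not ASSISTments’ actual code.

```python
# Hypothetical sketch of instant-feedback grading; not ASSISTments' code.
from dataclasses import dataclass, field

@dataclass
class ProblemSession:
    correct_answer: str
    attempts: list = field(default_factory=list)  # every answer submitted
    revealed: bool = False  # True once the student asks for the answer

    def submit(self, answer: str) -> str:
        """Grade an attempt immediately and return a feedback mark."""
        self.attempts.append(answer)
        if answer.strip() == self.correct_answer:
            # Green check for a first-try success, orange for a later one.
            return "green_check" if len(self.attempts) == 1 else "orange_check"
        return "try_again"  # retries are unlimited

    def show_answer(self) -> str:
        """Students can also just ask for the correct answer."""
        self.revealed = True
        return self.correct_answer
```

A session created as `ProblemSession("3/4")` would return “green_check” if the first call to `submit` is “3/4”, and “orange_check” if a correct answer arrives only on a later try.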

Nearly every online math platform gives instant feedback. It’s a well-established principle of cognitive science that students learn better when they can see and sort out their mistakes immediately, rather than waiting days for the teacher to grade their work and return it.

The secret sauce may be the easy-to-digest feedback that teachers are getting: a simple data report showing which problems students are getting right and wrong.

ASSISTments encourages teachers to project anonymized homework results on a whiteboard and review the problems that many students got wrong, though not every teacher does that. On the teacher’s back end, the system also highlights the common mistakes students are making. In surveys, teachers said it changed how they review homework.

Other math platforms generate data reports too, and teachers ought to be able to use them to inform their instruction. But when 30 students are each working on 20 different, customized problems, it’s a lot harder to figure out which of those 600 problems should be reviewed in class.
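
The arithmetic of that aggregation is worth spelling out. Below is a hedged sketch in Python of the kind of tally a teacher-facing report might run; the record format and function name are assumptions for illustration, not ASSISTments’ actual internals. With a common assignment, 30 students’ work collapses into one row per problem, and the most-missed problems, along with the most common wrong answer on each, float to the top.

```python
# Hypothetical sketch of a homework report over a shared problem set.
from collections import Counter

def homework_report(submissions):
    """
    submissions: one record per student per problem, as
    (student_id, problem_id, answer, was_correct) tuples.
    Returns (problem_id, miss_rate, most_common_wrong_answer) rows,
    sorted so the problems most students missed come first.
    """
    tried = Counter()   # problem_id -> students who attempted it
    missed = Counter()  # problem_id -> students who got it wrong
    wrong = {}          # problem_id -> Counter of wrong answers

    for _student, problem_id, answer, was_correct in submissions:
        tried[problem_id] += 1
        if not was_correct:
            missed[problem_id] += 1
            wrong.setdefault(problem_id, Counter())[answer] += 1

    rows = []
    for pid, n in tried.items():
        top = wrong.get(pid, Counter()).most_common(1)
        rows.append((pid, missed[pid] / n, top[0][0] if top else None))
    return sorted(rows, key=lambda row: row[1], reverse=True)
```

With 600 individualized problems, most rows in such a report would rest on a single student’s answer, so there would be no common mistakes to surface and nothing obvious to review with the whole class.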

There are other advantages to having a class work on a common set of problems. It allows kids to work together, something that motivates many extroverted tweens and teens to do their homework. It can also trigger worthwhile class discussions, in which students explain how they solved the same problem differently.

ASSISTments has drawbacks. Many students don’t have good internet connections at home, and many teachers don’t want to devote precious minutes of class time to screen time. In the North Carolina study, some teachers had students do the homework in school.

Teachers are restricted to the math problems that Heffernan’s team has uploaded to the ASSISTments library. It currently includes problems from three middle school math curricula: Illustrative Mathematics, Open Up Resources and Eureka Math (also known as EngageNY). For the Maine and North Carolina studies, the ASSISTments team uploaded math questions that teachers were familiar with from their textbooks and binders. But outside of a study, teachers who want to use their own math questions will have to wait until next year, when ASSISTments plans to let them build their own problems or edit existing ones.

Teachers can assign longer open-response questions, but ASSISTments doesn’t give instant feedback on them. Heffernan is currently testing how to use AI to evaluate students’ written explanations.

There are other bells and whistles inside the ASSISTments system, too. Many problems include “hints” for students who are struggling and can show step-by-step worked examples. There are also optional “skill builders” for practicing rudimentary skills, such as adding fractions with unlike denominators. It’s unclear how important these extra features are; in the North Carolina study, students generally didn’t use them.

There’s every reason to believe that students can learn more from personalized instruction, but so far the research is mixed. Many students don’t spend as much practice time on the software as they should, and many teachers want more control over what the computer assigns. Researchers are starting to see good results from pairing differentiated practice work with tutoring, which could make catching up a lot more cost-effective.

I rarely hear about “personalized learning” anymore in a classroom context. One thing we’ve all learned during the pandemic is that learning is a profoundly human give and take between student and teacher and among peers. One-size-fits-all instruction may not be perfect, but it keeps the humans in the picture.
