
In the second half of Failure to Disrupt: Why Technology Alone Cannot Transform Education, author Justin Reich moves on to describe the “four as-yet intractable dilemmas” of learning at scale: The Curse of the Familiar, The Edtech Matthew Effect, The Trap of Routine Assessment, and The Toxic Power of Data and Experiments.
I had many “aha!” moments as I read this portion of the book, recalling my own examples of each dilemma. In this reflection, I’ll summarize these dilemmas and zoom in on a few personal examples.
Dilemma 1: The Curse of the Familiar
“The curse of the familiar poses a two-sided dilemma: reproduce the ordinary and get adoption but not change, or attempt to do something different and either confuse your intended audience or have them take your novel approach and transform it into something conventional.” (pg. 134)
Reich defines this first dilemma as a two-sided struggle: create a tool that mirrors familiar classroom practices to ensure adoption but risk minimal change, or introduce something innovative that confuses users or gets retrofitted into familiar forms. This framing is both sobering and reassuring, acknowledging that no tech solution can bypass this trade-off entirely.
Faculty, much like their students, benefit from scaffolding when integrating new tools into their teaching. True transformation happens gradually, and mimicry, or using new tools to replicate old practices, is one crucial step along that path. In this chapter, Reich cites research by Judith Sandholtz and colleagues outlining the stages of technology adoption among teachers:
- Entry
- Adoption
- Adaptation
- Appropriation
- Invention
These stages offer a more concrete way to view this extended, gradual process of technology adoption. Using a timeline model also removes some of the moralizing and skill-shaming surrounding tech adoption, normalizing that it’s okay for teachers to be early in their implementation.
The key to edtech implementation: Teacher communities
“What’s needed to encourage the design, transmission, and adoption of new ideas is a large, thriving community of teachers who are committed to progressive pedagogical change and designers who are excited about seeing this community as partners.” (pg. 136)
If I had to pull one quote from the entire book to symbolize my takeaways, this would be it. I think intentional, faculty-driven instructional communities are desperately needed in higher education. That’s not to say these communities don’t exist, but they are the exception rather than the rule. More often, faculty and designers occupy separate, opposing bubbles. This leads to designers becoming disconnected from the practical needs and realities of teaching, while faculty become detached from big-picture initiatives and see new technologies as top-down mandates where their input is not considered. Naturally, this erodes the trust and relationships that are so crucial to effective work.
Teachers trust other teachers’ experiences. As designers and support staff, we need to lean into building these communities and getting excited about what excites faculty.
Example 1: Low-pressure opportunities for connection
One of my favorite things about working in my school is that our scale (40-50 faculty) is small enough to allow me to know each faculty member by name and have a high-level understanding of the teaching styles, disciplines, preferences, and tools used by each person. When an instructional technology question comes up in our departmental Slack group, I like to supplement my responses by tagging other faculty who I know used the same solution, had a similar problem, or who might like to commiserate. This creates a low-pressure opportunity for connection, while demonstrating that I care about and remember faculty input and experiences.
Example 2: Faculty-driven communities of practice
Another personal example comes from a new “AI in curriculum” community of practice formed within my department this fall. This voluntary, open community of faculty has generated more conversation about pedagogy and technology than I’ve ever seen before. It started as a heated faculty-meeting discussion that turned into a Slack channel and is now effectively a voluntary committee. Faculty use the Slack channel to vent about AI frustrations, share ideas and links about how to teach with AI, and share discipline-specific AI articles and resources.
I think there are two key reasons why this particular community has taken off:
- Faculty-driven: This community came about voluntarily due to a genuine need raised by faculty, in contrast to their mandatory committee assignments or other top-down initiatives. Faculty see the value in this community, and therefore have a stake in contributing to it. This has a snowball effect.
- Locality = trust: There were already a few UMN-wide AI communities and many more resources posted online. However, I got the impression that some faculty felt misunderstood, overwhelmed, and shamed by the overall message and volume of this communication. In contrast, a small, immediate community gives faculty a trusted space to discuss the real issues they are facing day-to-day. The content is also more localized, so faculty can discuss discipline-specific AI developments.
Dilemma 2: The Edtech Matthew Effect
“The edtech Matthew effect posits that this pattern is quite common in the field of education technology and learning at scale: new resources—even free, online resources—are more likely to benefit already affluent learners with access to networked technology and access to networks of people who know how to take advantage of free online resources.” (pg. 149)
Sociologists describe this phenomenon as a Matthew effect, drawing from the biblical verse: “For whoever has will be given more, and they will have an abundance. Whoever does not have, even what they have will be taken away.” In education, this means that privileged students with internet access, digital literacy, and access to knowledgeable mentors benefit disproportionately from free online resources, while others fall further behind.
Three Edtech Equity Myths
- Myth: Technology disrupts systems of inequality. Reality: Technology reproduces the inequality embedded in systems.
- Myth: Free and open technologies promote equality. Reality: Free things benefit those with the means to take advantage of them.
- Myth: Expanding access will bridge digital divides. Reality: Social and cultural barriers are the chief obstacles to equitable participation.
In this chapter, Reich draws on Paul Attewell’s definitions of two types of digital divides: access and usage. While the first divide concerns unequal access to technology, the usage divide is even more concerning. Affluent students often use technology for creative, meaningful tasks guided by mentors, while marginalized students are more likely to face drill-and-practice routines with limited support. Addressing these divides requires more than simply distributing devices.
Example: Technology donations without training or implementation support
Reading this chapter immediately reminded me of something I observed years ago while working as a reading tutor at an after-school program in north Minneapolis. One day, I arrived and a TV crew was set up to film. A business had donated significant funds to build a music recording studio for the students, and the TV crew was there to capture the students’ reactions for a news segment.
Unfortunately, yet unsurprisingly, this expensive studio went unused because none of the staff had the training or the time to help students use the equipment. Many of the staff were volunteers serving for a limited time, so the shiny equipment sat in a locked room gathering dust. Corporate philanthropy addressed the access divide but overlooked the usage divide.
Dilemma 3: The Trap of Routine Assessment
“Much of what we can assess at large scale are routine tasks, as opposed to the complex communication and unstructured problem-solving tasks that will define meaningful and valuable human work in the future.” (pg. 197)
Again, this third dilemma of learning at scale highlights the importance of centering human factors in education, while acknowledging the gains and unique abilities of computer-graded assessments. In short, if a computer can effectively do the task, it might be able to assess the task, but only after substantial training and human-driven design. While machine learning has expanded what computers can assess, such as grading essays or evaluating language pronunciation, progress remains limited. Automated systems can evaluate syntax but lack the ability to grasp meaning or context.
The core challenge persists: while computers excel at automating routine tasks, the most valuable human skills (creative thinking, nuanced communication, solving unstructured problems) remain beyond their reach. To escape the trap of routine assessment, educators must combine automated tools with thoughtful, human-driven evaluation that fosters deeper, more meaningful learning.
Example: Autograded essays
In my final year of undergrad, I took a psychology course that assigned weekly auto-graded essays, through a Pearson tool if I remember correctly. This was clearly a decision made for scale, as the course served hundreds of students with just one faculty instructor and a small league of TAs. The economics of the arrangement were clear to me and my classmates, and that bred a lot of resentment. I recall despising these essays; they felt like useless busywork disconnected from the course content. No feedback was given, only a score. You could resubmit for an improved score, but it was a guessing game of figuring out what the auto-grader wanted. Eventually, I learned tricks for earning a better score, like reusing the keywords from the prompt as many times as possible.
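To illustrate why keyword stuffing works, here is a purely hypothetical sketch of a keyword-matching scorer. I have no idea how the actual Pearson tool was built, but any grader that scores surface features rather than meaning can be gamed in roughly this way:

```python
import re

# Hypothetical keyword-overlap "essay scorer": a toy model of why keyword
# stuffing can inflate an automated score. This is NOT the real grader;
# it only counts surface features, with no notion of meaning or context.
def score_essay(essay: str, prompt_keywords: list[str], points_per_hit: int = 10) -> int:
    """Award points for each occurrence of a prompt keyword, capped at 100."""
    words = re.findall(r"[a-z']+", essay.lower())
    hits = sum(words.count(keyword) for keyword in prompt_keywords)
    return min(100, hits * points_per_hit)

keywords = ["memory", "encoding", "retrieval"]
thoughtful = "Encoding ties new material to prior knowledge, which later supports retrieval."
stuffed = "Memory memory encoding encoding encoding retrieval retrieval retrieval memory memory."

print(score_essay(thoughtful, keywords))  # 20: few keyword hits despite a real idea
print(score_essay(stuffed, keywords))     # 100: pure keyword stuffing, no meaning at all
```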
Today, I wonder if this program is still in use. And if so, it’s hilarious yet depressing to think about the auto-grader now assessing the AI-generated student submissions. It’s just AI all the way down, man!
Dilemma 4: The Toxic Power of Data and Experimentation
“Of all these quiet heroes, I most admire technology developers and advocates who subject their designs and interventions to rigorous study, then take the evidence from those studies to improve their products. Improving complex systems through technology will not come via lightning-bolt breakthroughs but rather from these kinds of shoulder-to-the-wheel approaches, especially when conducted in close partnership with practicing educators.” (pg. 198)
In describing this final dilemma, Reich outlines major risks and limitations of educational data harvesting while still arguing for its value when wielded responsibly. He describes hoarded educational data as a “toxic asset,” borrowing the term from computer security researcher Bruce Schneier, who warns that simply holding such data poses risks for both companies and users.
Student privacy advocates raise valid concerns about the dangers of collecting sensitive information about students. Anti-cheating software, for example, often tracks eye and body movements, creating surveillance-like conditions that provoke ethical questions. As data analysis techniques become more sophisticated, datasets once considered harmless can reveal far more personal information than users initially realized.
Algorithms trained on historical data can also reinforce structural biases. Tools like Naviance, a popular college guidance system, can inadvertently reproduce privilege unless carefully managed. Yet research funding often prioritizes flashy new projects while neglecting to study the long-term effects of widely adopted systems already influencing millions of students.
Reich advocates for a more thoughtful approach: leveraging data responsibly while respecting student autonomy. Ethical data use requires transparency, community engagement, and long-term thinking. Recognizing that data can be both an asset and a liability is essential if educational technology is to serve learners equitably and effectively.
So… what about AI?
After reading through the examples Reich provides, it’s hard not to ask, “What would he say about the generative AI boom?” Thankfully, Reich offers frameworks for evaluating emerging technologies and shares his specific opinions on AI in more recent interviews. I’ve also begun reading AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor. This book feels like an extension of Reich’s perspective and complements this conversation well.
In an interview with The Learning Professional, Justin Reich says, “The rhetoric about AI is often disempowering or frightening. But in 10 minutes, you can help educators understand what the systems are actually doing, and when they see how it works, they tend to find it quite empowering. To understand ChatGPT and similar tools, the most important word [to use with instructors] is ‘predict.’” I appreciate that he acknowledges the fear many instructors feel in response to AI, while offering practical solutions for technologists working with teachers. These small rhetorical moves, even changing the words we use, can have a big impact on shifting mindsets. I also wholeheartedly agree that a ten-minute conversation can be transformative. In our world of emails, links, and endless self-paced resources, I’ve found nothing more effective in shifting mindsets than one-on-one conversations.
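To make the word “predict” concrete, here is a minimal, purely illustrative sketch of my own (not something from Reich or the interview): it counts which word most often follows each word in a tiny text, then “writes” by repeatedly predicting the next word. ChatGPT does something far more sophisticated, predicting tokens with a large neural network trained on vastly more text, but the core move of predicting what comes next is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learn which word most often follows each word in a
# tiny corpus, then generate text by repeatedly predicting the next word.
corpus = (
    "students learn by doing and teachers learn by teaching "
    "and teachers learn by listening to students"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

word = "teachers"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)  # e.g., "teachers learn by doing and"
print()
```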
Further, Reich acknowledges legitimate concerns about student AI use, while reminding us of the existing tools we’ve adapted to:
“Before AI came along, educators had developed a series of tasks and exercises that provoked students to do useful cognition that led to learning. And now we have this machine that can do a bunch of that work for them. Bypassing that cognition is a problem. But we have decades of technologies that help students bypass cognition, and we have learned to work with and around them. We have encyclopedias, calculators, Google Translate, Course Hero, and the list goes on.”
The mention of web-based “study” tools like Course Hero is such an apt comparison. Faculty have been frustrated with students misusing these tools for years, and strategies that address that misuse can also address AI-created problems.
In AI Snake Oil, the authors compare generative AI tools to the internet in general, and I’d also agree with that comparison. Like the internet, AI tools make information more readily available and transformable, while offering no reliable mechanism for verifying its accuracy. Educators have learned to teach internet research methods, media literacy, and other internet skills alongside their course material. Many have moved away from assessments that are easily “Google-able,” and the same will need to happen for AI tools.
It won’t be perfect, and it won’t be fast. But, as long as the cautiously optimistic tinkerers carry on in education, incremental improvements will continue. This mindset feels refreshing, practical, and reassuring.
References
- Attewell, P. (2001). Comment: The First and Second Digital Divides. Sociology of Education, 74(3), 252–259.
- Bouffard, S. (2024). What educators need to know about AI: Q&A with Justin Reich. The Learning Professional, 45(2), 36–39.
- Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press.
- Reich, J. (2020). Failure to disrupt: Why technology alone can’t transform education. Harvard University Press.
- Sandholtz, J. H., Ringstaff, C., & Dwyer, D. C. (1997). Teaching with technology: Creating student-centered classrooms. Teachers College Press.
- Schneier, B. (2016). Data is a toxic asset, so why not throw it out? CNN Wire Service.