Colleges Are Preparing to Self-Lobotomize
The skills that students will need in an age of automation are precisely those that are eroded by inserting AI into the educational process.

After three years of doing essentially nothing to address the rise of generative AI, colleges are now scrambling to do too much. Over the summer, Ohio State University, where I teach, announced a new initiative promising to “embed AI education into the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them—no matter their major.” Similar initiatives are being rolled out at other universities, including the University of Florida and the University of Michigan. Administrators understandably want to “future proof” their graduates at a time when the workforce is rapidly transforming. But such policies represent a dangerously hasty and uninformed response to the technology. Based on the available evidence, the skills that future graduates will most need in the AI era—creative thinking, the capacity to learn new things, flexible modes of analysis—are precisely those that are likely to be eroded by inserting AI into the educational process.
Before embarking on a wholesale transformation, the field of higher education needs to ask itself two questions: What abilities do students need to thrive in a world of automation? And does the incorporation of AI into education actually provide those abilities?
The skills needed to thrive in an AI world might counterintuitively be exactly those that the liberal arts have long cultivated. Students must be able to ask AI questions, critically analyze its written responses, identify possible weaknesses or inaccuracies, and integrate new information with existing knowledge. The automation of routine cognitive tasks also places greater emphasis on creative human thinking. Students must be able to envision new solutions, make unexpected connections, and judge when a novel concept is likely to be fruitful. Finally, students must be comfortable with, and adept at, grasping new concepts. This requires a flexible intelligence, driven by curiosity. Perhaps this is why the unemployment rate for recent art-history graduates is half that of recent computer-science graduates.
Each of these skills represents a complex cognitive capacity that comes from years of sustained educational development. Let’s take, for example, the most common way a person interfaces with a large language model such as ChatGPT: by asking it a question. What’s a good question? Knowing what to ask and how to ask it is one of the key abilities that professors cultivate in their students. Skilled prompters don’t simply get the machine to supply basic, Wikipedia-level information. Rather, they frame their question so that it elicits information that can inform a solution to a problem, or lead to a deeper grasp of a topic. Skilled questioners rely on their background knowledge of a subject, their sense of how different pieces of a field relate to one another, in order to open up novel connections. The framing of a powerful question involves organizing one’s thoughts and rendering one’s expression lucid and economical.
For example, the neuroscientists Kent Berridge and Terry Robinson transformed our understanding of addiction by asking if there is a difference between the brain “liking” something and “wanting” it. It seems in retrospect like an easy and even obvious question. But much of the previous research had operated under the assumption that we want things simply because we like the way they make us feel. It took Berridge and Robinson’s familiarity with psychology, understanding of dopamine dynamics, and awareness of certain dead ends in the study of addiction to judge that this was a fruitful question to pursue. Without this background knowledge, they couldn’t have posed the question as they did, and we wouldn’t have come to understand addiction as, in part, a pathology of the brain’s “wanting” circuitry.
This is how innovation happens. The chemist and philosopher of science Michael Polanyi argued that academic breakthroughs happen only when researchers have patiently struggled to master the skills and knowledge of their disciplines. “I find that judicious and careful use of AI helps me at work, but that is because I completed my education decades ago and have been actively studying ever since,” the sociologist Gabriel Rossman has written. “My accumulated knowledge gives me inspiration for new research questions and techniques.”
Will a radically new form of AI-infused education develop these skills? A growing body of research suggests that it will not. For example, a team of scientists at MIT recently divided subjects into three groups and asked them to write a number of short essays over the course of several months. The first group used ChatGPT to assist its writing, the second used Google Search, and the third used no technology. The scientists analyzed the essays that each group produced and recorded the subjects’ brain activity using EEG. They found that the subjects who used ChatGPT produced vague, poorly reasoned essays; showed the lowest levels of brain activity; and, as time went on, tended to compose their work simply by cutting and pasting material from other sources. “While LLMs offer immediate convenience, our findings highlight potential cognitive costs,” the authors concluded. “Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.” Other studies have found a negative correlation between AI use and cognitive abilities.
Such research is still in its early phases, and some studies suggest that AI can play a more positive role in learning. A study published in Proceedings of the National Academy of Sciences, for instance, found that highly structured uses of generative AI, with built-in safeguards, can mitigate negative effects like those the MIT researchers found, at least when used in certain kinds of math tutoring. But the current push to integrate AI into all aspects of curricula is proceeding without proper attention to these safeguards, or sufficient research into AI’s impact on most fields of study.
Professors with the most experience teaching students to use technology believe that no one yet understands how to integrate AI into curricula without risking terrible educational consequences. In a recent essay for The Chronicle of Higher Education titled “Stop Pretending You Know How to Teach AI,” Justin Reich, the director of the Teaching Systems Lab at MIT, examines the track record of rushed educational efforts to incorporate new technology. “This strategy has failed regularly,” he concludes, “and sometimes catastrophically.” Even Michael Bloomberg—hardly a technology skeptic—recently wrote of the sorry history of tech in education: “All the promised academic benefits of laptops in schools never materialized. Just the opposite: Student test scores have fallen to historic lows, as has college readiness.”
To anyone who has closely observed how students interact with AI, the conclusions of studies like the experiment at MIT make perfect sense. When you allow a machine to summarize your reading, to generate the ideas for your essay, and then to write that essay, you’re not learning how to read, think, or write. It’s very difficult to imagine a robust market for university graduates whose thinking, interpreting, and communicating have been offloaded to a machine. What value can such graduates possibly add to any enterprise?
We don’t have good evidence that the introduction of AI early in college helps students acquire the critical- and creative-thinking skills they need to flourish in an ever more automated workplace, and we do have evidence that the use of these tools can erode those skills. This is why initiatives—such as those at Ohio State and Florida—to embed AI in every dimension of the curriculum are misguided. Rather than repeat the mistakes of past technology-literacy campaigns, we should engage in cautious and reasoned speculation about the best ways to prepare our students for this emerging world.
The most responsible way for colleges to prepare students for the future is to teach AI skills only after building a solid foundation of basic cognitive ability and advanced disciplinary knowledge. The first two to three years of university education should encourage students to develop their minds by wrestling with complex texts, learning how to distill and organize their insights in lucid writing, and absorbing the key ideas and methods of their chosen discipline. These are exactly the skills that will be needed in the new workforce. Only by patiently learning to master a discipline do we gain the confidence and capacity to tackle new fields. Classroom discussions, coupled with long hours of closely studying difficult material, will help students acquire that magic key to the world of AI: asking a good question.
Once students have acquired this foundation, AI tools can be integrated, in their final year or two, into a sequence of courses leading to senior capstone projects. Then students can benefit from AI’s capacity to streamline and enhance the research process. By this point, students will (hopefully) possess the foundational skills required to use—rather than be used by—automated tools. Even if students continue to enter college underprepared and overreliant on tech that has impeded their cognitive development, universities have a responsibility to prepare them for an uncertain future. And although our higher-education institutions are not suited to predicting how a new technology will evolve, we do have centuries of experience in endowing young minds with the deep knowledge and flexible intelligence needed to thrive in a world of unceasing technological change.