I read Ruth and Peco Gaskovski’s recent piece on AI and education with great interest. As a college professor, I’ve been thinking about these issues a lot lately. I know my students are already using AI, particularly ChatGPT, for a variety of tasks. Ruth and Peco are right to highlight the concern that the rapid spread of AI into education seriously threatens students’ capacity to participate in slow, deep thinking.
In my course Happiness and Human Flourishing at the University of Pittsburgh, we spend several class sessions thinking about trade-offs involved in using any tool, not just generative AI. I encourage students to reflect on what exactly we’re doing when we pick up a tool and how that choice connects to living a flourishing life. I then pose a series of questions designed to help them think more critically about the tools they use and the kinds of people they are becoming through their use. I’ll share the basic structure of this lecture and class discussion here. I believe my students have found it helpful, and I hope our readers will as well.
It is important, in this sense, that I understand ChatGPT as a tool. But how do I define a tool? I define it as “a physical or digital object that assists a person in achieving a goal in the world.”
In my class, I ask students to think about tools in a broad sense by first considering how we pursue goals in the world more generally. I frame it this way:
A person is motivated to achieve a goal, and to do so, they exercise some capability—or a cluster of capabilities—to achieve that goal.
Why then would someone choose to use a tool? It’s because they find it difficult to achieve a particular goal relying solely on their own capabilities. So they turn to a tool to help accomplish that goal more easily and efficiently. This is generally the level at which most people understand tool use. We tend to be hyper-focused on tool use in the objective sense—that is, what goal does the tool help me achieve, and how efficiently and easily does it do that?
Take, for instance, a student whose goal is to produce a decent term paper. To do this, they must engage—by my count—at least seven high-level human capabilities:
search for,
collate, and
read essays and articles;
then think about what they’ve read,
generate ideas, and
write out those ideas in their own essay, and finally
edit what they’ve written to ensure that the ideas are clear and coherent.
That is a very hard process, and it is difficult to execute each of these capabilities well.
In this light, ChatGPT is an exceptionally effective tool. It allows a student to meet the goal of producing a decent paper without exercising a single one of the seven demanding capabilities traditionally required. In fact, I recently input the final paper prompt for my course Health Policy and Human Flourishing into ChatGPT, along with some basic background material. The program generated what I would consider to be roughly an A paper. What would have taken my students hours of effort and intellectual labor was completed in less than thirty seconds. In terms of objective performance, ChatGPT is remarkably good at producing a decent paper.
I often hear people say, “ChatGPT is just a tool, and tools can’t be good or bad.” This reflects an underlying belief that tools are somehow neutral. But I fundamentally disagree. The nature of a tool—and how it is used—shapes the person using it. As a good personalist, I’m especially concerned with what a tool does to the person, particularly in how it shapes, strengthens, or diminishes human capabilities.
In this way, I’m interested in tools in the subjective sense—recognizing that people are always changed by the tools they use, especially as tool use profoundly impacts our capabilities.
Before we can truly understand how our capabilities are being impacted by the use of certain tools, we first need to make an inventory of the capabilities engaged when we attempt to achieve a goal in the world, like what I did above.1
So the first question we must ask is:
Q1: What are the relevant capabilities I am relying on to achieve this goal?
Answering this requires a great deal of insight. It demands that we seriously investigate what we are actually doing—what mental, emotional, and physical capabilities we are drawing on—when we go about achieving our goals.
Once we have a sense of the capabilities involved in pursuing a particular goal, we can begin to interrogate how a tool actually impacts those capabilities. As I see it, tools interact with human capacities in two primary ways.
First, a tool can strengthen and extend our capabilities. A good example of this is the bicycle. Suppose my goal is to get to work on time (goal). I used to walk (capability), but now I use a bicycle (tool), which enables me to reach my destination more quickly and efficiently. What’s interesting, though, is that even though I’m walking less, regular bicycle use can actually improve my ability to walk. It builds muscle strength, enhances balance, and improves cardiovascular endurance—thus extending and even fortifying my original capabilities, rather than diminishing them. This seems like a great tool.
Second, however, there is also a significant risk that a tool bypasses human capabilities. By this, I mean that the tool allows us to achieve a goal without engaging in a particular capability at all. When we stop using a capability, it tends to weaken over time—or may even disappear altogether.
I’m thinking here of something like GPS navigation, which has made many of us incapable of navigating without digital assistance or developing a strong internal sense of direction. The tool may help us achieve the goal, but it does so at the expense of capacities we once relied on and developed through repeated use.
It’s crucial, then, to consider whether we actually want to lose a given capability in the first place. If we don’t actively think about this, we lose those capabilities before we’ve had a chance to consciously decide if we’re willing to let them go. The reality is that, unless we’re exceptionally attentive, we often don’t have a choice about losing these capabilities.
The second question that we need to ask is:
Q2: What likely happens to the capabilities that I currently use to achieve this goal when I adopt this tool?
It’s true that tools have drastically reduced our capabilities in areas like asking girls out on dates, remembering phone numbers, and navigating the library card catalogue. However, we might be willing to let some of these capabilities go. Personally, I think that I am fine with the fact that, thanks to online book and journal catalogues, I no longer need to visit the library, search through card catalogues, or photocopy sections of books. These are capabilities I may be comfortable losing—my time may be better spent thinking and writing instead (I think).
disagrees with me on this. He laments the death of the card catalogue.

In this way, something like ChatGPT opens up some possibilities. For instance, if it can search the internet, identify relevant papers, and collate them, it may serve as a genuinely useful application. We might be willing to trade off certain capabilities—like the ability to sift through endless Google search results—for the sake of efficiency. I’m even open to allowing ChatGPT to do very light editing, such as correcting punctuation and spelling. However, as I’ll discuss below, we need to be cautious. Once you begin relying on ChatGPT for light editing, it becomes easier to allow it to handle heavier editing—and eventually even generate original writing. Slippery slope.
Similarly, I’m open to the idea that tools like ChatGPT might relieve us of certain capabilities that we only developed to meet the demands of the Machine—tasks like navigating bureaucracies or performing administrative duties. Filling out forms or generating funding reports, for example, often seems to add little value but persist due to bureaucratic bloat. I’m more than happy to shed the capability to complete a quarterly compliance report if it means I can redirect that energy toward work that is more likely to contribute to human flourishing.
I make these arguments for potential use cases very tentatively, because—as I argue below—once we open ourselves to the use of generative AI, it becomes very difficult to set meaningful limits. Again, slippery slope.
That being said, it seems clear to me that if ChatGPT is regularly used to perform tasks requiring high-level capabilities—such as reading, thinking, and writing term papers—then much is at stake. Consistent reliance on it for these purposes poses a real and existential threat to our ability to read, think, and write for ourselves. It’s no exaggeration to say that these are among the most essential human capabilities. To read, reason, and write is central to what it means to be a human being.
Sacrificing these foundational skills for the sake of convenience or efficiency seems like an extraordinary—and deeply troubling—trade-off. In fact, given the centrality of reading, writing, and thinking to human flourishing, widespread adoption of generative AI could, in a real sense, lead to the devolution of the species. I truly believe that.
So, the third question we should ask is:
Q3: Do I want these capabilities to weaken or disappear?
There are plenty of reasons to be concerned about the loss of capabilities, but chief among them is this: as we lose capabilities, we become increasingly fragile and dependent on the world around us. We become less capable and less autonomous. It seems obvious to me that whenever we pick up a tool, we run the risk of becoming more dependent on someone or something else.
But I want to be clear—dependence is not a bad thing as such. In my understanding of what it means to be human, I remind students that we are, by nature, social creatures. We rely on others both instrumentally and constitutively. We are not—and should not aspire to be—the fictive autonomous individuals imagined by liberal ideology.
The concern, then, is not so much that I am becoming increasingly dependent on others, but rather who I am dependent on—and what I am to them. For example, I currently own a reel mower for my small urban backyard. It has no power source other than my son. It’s a very simple tool: I can see all the parts without opening anything up, and I can generally fix it myself.
Lately, I’ve been toying with the idea of purchasing an older gas-powered mower. But this kind of tool creates new dependencies. I’d now rely on a repair shop, though I’d love to learn how to repair small engines myself. Fortunately, there’s one in my neighborhood, run by a neighbor who knows me and sees me not as a client, but as a neighbor and a friend. Gaining that kind of relationship seems like a real benefit, even if I lose some autonomy to repair the mower myself.
Conversely, in our increasingly globalized and digital society, the loss of capabilities often makes us more dependent on multinational surveillance corporations that see us not as persons to be loved, but as instruments to be used. We are cash registers. Meta does not love me—it uses me. This dynamic only intensifies as more tools are built with planned obsolescence, designed to be unrepairable by the average person. Every time one of these tools breaks, I find myself once again dependent on the corporation that made it.
This is particularly true with ChatGPT. As I lose the ability to read, write, and think critically, I become increasingly dependent on a distant corporation to handle these tasks for me. And though OpenAI was originally created as a nonprofit, it has since been at least partially absorbed by profit-motivated corporations. When I rely extensively on ChatGPT, I have sold off central capabilities and, in the process, become an economic unit for Microsoft. Slaves to the Machine. Seems like a bad trade.
So, the last question might be:
Q4: If I lose a capability, on whom am I now dependent and what is my relationship to this person/institution?
It seems to me that a great deal is at stake here, especially for young students. Asking the right questions and carefully discerning which capacities are relevant—and how they are impacted by a tool like ChatGPT—likely requires exceptional wisdom and a surgeon’s scalpel. This may be beyond the capabilities of the average teenager or young adult. I suspect it’s even beyond me.
The particular threat to my students lies in the fact that they are still developing these capabilities. I have a close friend who uses ChatGPT to integrate his learning from the Bible and spiritual reading to generate ideas for updating his Prayer Rule. He has found it tremendously helpful. However, this is less concerning to me because my friend is 60 years old, extremely mature, has already read these texts carefully, and has spent years growing in wisdom and knowledge. While I would not use ChatGPT in this way myself, he believes that he genuinely benefits from it.
Young people, by contrast, are still in the process of developing these capabilities. When such capabilities are outsourced to AI, students may never have the opportunity to develop them in the first place. I remain deeply skeptical that the hypothetical objective gains offered by ChatGPT outweigh the subjective costs for my students. Again, the irony, of course, is that the gains possible from ChatGPT are likely truly accessible only to those who have already developed these capabilities in the real world—further reinforcing the gap between the more and less capable and deepening relationships of dependence.
A significant concern that I have for my students is that they will get caught up in this dependence before they have even had a chance to consider the stakes. Last November, by a stroke of great fortune, I found myself riding through central Pennsylvania and upstate New York in the backseat of a 1970s Cadillac with Paul Kingsnorth. It was a time of extended conversation, full of the kinds of themes we often explore at the Savage Collective and the Abbey of Misrule. It was a rich and formative experience.

At one point, our discussion turned to ChatGPT. Paul said something that has stuck with me: “I suspect that ChatGPT is the apple that cannot be unbitten.” I think he meant something like this: ChatGPT is exceedingly tempting, offering the promise of limitless knowledge. You may begin by using it for small, seemingly insignificant tasks, but over time, you start relying on it for increasingly meaningful ones. Before you realize it, you’ve surrendered core human capabilities—and become fully dependent. There’s no going back.
I suspect he’s quite right. I don’t let my own children near it.
1. Although I do not directly address character in this essay, I am also especially interested in how tools shape our character, making it more or less easy for us to act virtuously. The capacity to act virtuously is always at stake when we use a tool.