Healing Agents
Hi, I’m Geoff. I love to teach, and I love to code.
geoffreychallen.com, until recently
I started writing code in 1999. I dabbled in programming before then, but I’d date the beginning of my identity as a programmer to my first college programming course: CS50(1). From then on I wrote code most days. Based on my GitHub history, since arriving at Illinois in 2017, I committed to projects on 4 out of every 5 days(2).
I stopped writing code in 2026. I began using Claude Code in July 2025. Initially I would review edits and sometimes make changes by hand. Eventually the agents got better, and I got better at collaborating with them. I turned on YOLO mode and stopped reading diffs. At some point I realized that I couldn’t remember opening an IDE, and removed them from my launcher. I started small, fun projects done entirely through conversation, and then larger, more serious projects done the same way.
I’m much happier now. It’s not because I’m not working. I’m working more than ever. My challenge now is stopping.
Initially I thought I was happier because I was making more progress and doing better at the things I care about: creating new courses, improving existing ones, building new educational technology. But as I’ve become accustomed to what I can accomplish with AI, I’ve realized something else: I’m happier because I’m not writing code.
Did I ever love to code? I loved creating things. I loved being good at it. I loved that it made me feel special.
But did I ever love just doing it? I know developers who continue to participate in coding even while collaborating with AI: approving edits, writing their own tests, or debugging by hand. I don’t. I’m thrilled by the opportunity to work with coding agents through pure conversation. I’m excited by their growing abilities to complete development tasks. I want this to work. I don’t want to write, read, debug, or look at code anymore.
That leads to an uncomfortable question: Was all the coding I did not only not fun, but actually dehumanizing? In one of my favorite essays on coding agents, Kailash Nadh enumerates some of the costs of traditional software development:
After all, the real bottleneck is good old physical and biological human constraints—cognitive bandwidth, personal time and resources, and most importantly, the biological cost and constraints of having to sit for indefinite periods, writing code with one’s own hands line by line even if it is all in one’s head, while juggling and context-switching through the mental map of large systems.
These costs are real. But I’m realizing that, at least for me, they were dwarfed by the emotional and psychological toll of writing code. The most common end to my development days wasn’t physical or mental fatigue. It was anger and frustration. Typos. Bugs. Mistakes. Errors. All the result of being forced to communicate with a machine in its language, with inhuman precision. Isn’t this the definition of dehumanizing?
I did this for over 25 years. What effect did it have on me? Now that I’ve stopped, I can feel myself changing, recovering, healing. I’m slowly and happily becoming a better human: more patient, more thoughtful, more interesting(3). Qualities that help me be a better developer. Qualities that help me be a better teacher. Maybe I never loved to code. But I do love to teach—now more than ever.
I’m not claiming my experience is representative. Some people do enjoy coding—but should only they be creating technology with significant societal impact? Not everyone is harmed by coding—but shouldn’t we worry about using technology created by people who are? Maybe if more people enjoyed creating technology, and the process were healthier, we’d have better technology? AI coding agents provide a chance to find out. We should take it.