"The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought."
— J.C.R. Licklider, 1960
I've always built software to make my courses more effective (cs124.org, learncs.online, backend infrastructure).
"We Solve Problems First" — Siebel Center
903
prompts I sent (~1/min)
17
hours of work
63
commits shipped
16,266
lines of code added
5,801
lines deleted
$1,000+
API cost
I wrote ZERO lines of code by hand.
Hover over blocks for details, press ↓ to dive deeper
| 7:15am | E2E Testing | Student journey tests, reviews display fix |
| 8:15am | Dependencies | Deps update, e2e fixes, version bump |
| 9:42am | Hiring | k8s:restart, hiring improvements, UI fixes |
| 11:42am | Test Infra | data-testid refactor, student journey tests |
| 1:06pm | Discourse | Migration fixes, WWC group management |
| 2:03pm | Appointments | Staff hiring automation tool (2+ hours) |
| 5:21pm | Qdrant | Sync fixes, version bumps, lesson dates |
| 6:25pm | K8s Logs | Log aggregation utility |
| 7:54pm | Performance | planprogress CPU issue fix |
| 11:43pm | AI Chat | RAG improvements |
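The "K8s Logs" entry above mentions a log aggregation utility. As a hedged sketch of the core idea only — not the actual tool — merging timestamped log lines from several pods into one chronological stream can be done in a few lines of Python (pod names and log format here are hypothetical):

```python
from datetime import datetime

def merge_pod_logs(pod_logs: dict[str, list[str]]) -> list[str]:
    """Merge per-pod log lines (each prefixed with an ISO-8601 timestamp)
    into one chronological stream, tagging each line with its pod name."""
    merged = []
    for pod, lines in pod_logs.items():
        for line in lines:
            stamp, _, message = line.partition(" ")
            merged.append((datetime.fromisoformat(stamp), pod, message))
    merged.sort(key=lambda entry: entry[0])  # chronological order across pods
    return [f"[{pod}] {stamp.isoformat()} {msg}" for stamp, pod, msg in merged]

# Hypothetical example: two pods whose lines interleave in time.
logs = {
    "web-1": ["2025-01-01T08:00:02 request handled",
              "2025-01-01T08:00:05 request handled"],
    "web-2": ["2025-01-01T08:00:03 cache miss"],
}
for line in merge_pod_logs(logs):
    print(line)
```

A real utility would stream from `kubectl logs` rather than in-memory lists, but the merge step is the same.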
| 8:15am | Integrations | LDAP, Microsoft Graph, roster generation |
| 8:50am | Data Reorg | Semester format, course content migration |
| 10:02am | Welcome Emails | Outlook MCP, email workflow → 24 sent |
| 10:12am | Upgrades | Node.js 24, dependencies |
| 11:22am | Framework | Next.js 16, Tailwind v4, dev server |
| 12:20pm | Database | Docker PostgreSQL, NextAuth config |
| 1:07pm | Redesign | Terminal aesthetic, typography |
| 2:35pm | RAG Setup | Embedding tool, dev server |
| 5:47pm | Agents | Markdown-based agent definitions |
| 6:18pm | Auth Migration | NextAuth → Better Auth (2+ hours) |
| 11:06pm | RAG v2 | Hierarchical chunking, hybrid search |
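The "RAG v2" entry above mentions hierarchical chunking and hybrid search. As an illustrative sketch — not the cs124.org implementation — hierarchical chunking indexes small child chunks that each point back to a larger parent section, and hybrid search blends a keyword score with a similarity score. Here the "vector" score is a bag-of-words cosine standing in for real embedding similarity; the document and query are invented:

```python
import math
from collections import Counter

def chunk_hierarchically(doc: str) -> list[dict]:
    """Split a document into parent sections (blank-line separated) and
    child chunks (sentences); each child keeps a pointer to its parent."""
    chunks = []
    for section in doc.strip().split("\n\n"):
        for sentence in section.replace("\n", " ").split(". "):
            if sentence.strip():
                chunks.append({"text": sentence.strip().rstrip("."),
                               "parent": section.strip()})
    return chunks

def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms that appear in the chunk."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def vector_score(query: str, text: str) -> float:
    """Stand-in for embedding similarity: bag-of-words cosine.
    A real system would compare dense embeddings instead."""
    qc, tc = Counter(query.lower().split()), Counter(text.lower().split())
    dot = sum(qc[w] * tc[w] for w in qc)
    norm = (math.sqrt(sum(v * v for v in qc.values()))
            * math.sqrt(sum(v * v for v in tc.values())))
    return dot / norm if norm else 0.0

def hybrid_search(query: str, chunks: list[dict], alpha: float = 0.5) -> dict:
    """Score children by a blend of both signals; return the best child,
    whose parent section supplies wider context to the model."""
    return max(chunks, key=lambda c: alpha * keyword_score(query, c["text"])
                                     + (1 - alpha) * vector_score(query, c["text"]))

doc = """Office hours are held in Siebel 0403. Bring your laptop.

Quizzes run in the CBTF. They count for most of the grade."""
hit = hybrid_search("where are office hours held", chunk_hierarchically(doc))
```

Retrieving the small chunk but returning its parent is what makes the hierarchy useful: precise matching, broad context.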
Actual exchanges from yesterday's work sessions:
Press ↓ to see examples
Press ↓ to see the actual plan
Before
Iterate over
implementations
write → test → fix → repeat
Now
Iterate over
plans
design → refine → approve → agent implements
Human time shifts from writing code to designing systems
Source code is the new assembly code — do you read compiler output?
Key insight: Clear, direct feedback keeps agents on track.
They respond well to correction — no ego, just adjustment.
5,801 lines deleted = fearless pivoting
When implementation is fast, you spend more time thinking and designing.
Bad decisions are cheap to throw away.
My actual messages from yesterday:
8:01am: "No no no. This needs to work."
11:04am: "NO NO NO NO NO NO! You need to shut down the servers cleanly!"
9:19pm: "No no no no no. This is totally wrong."
Why frustration is rising: Expectations are rising.
The baseline is so good that mistakes feel jarring.
"Working with agents requires understanding source code, but differently than before. I supervise and approve edits, but the amount of code I write by hand has declined to near zero."
Give agents good targets and boundaries — they excel.
Yesterday's example:
7 testing-related commits in cs124.org alone
These are NOT coding tasks — they're course management tasks.
Yesterday was infrastructure-heavy, but I regularly use these for content review too.
The problem: Claude completes traditional assignments easily.
| Traditional: | Instructor idea → Spec → Student writes code |
| Spring 2026: | Student idea → Student spec → AI writes code |
"My Project" — each student builds their own Android app
First-semester students who have never programmed building real Android apps. That's amazing.
Press ↓ for more details
We tried AI on traditional scaffolded projects (no creativity). Students had... feelings.
"I didn't build it, AI did."
"I just completely rely on the coding agent and I don't know how to complete the code without it. That makes me feel pretty bad."
"The focus on AI in an intro level course is extremely misplaced... it is blocking a lot of important learning."
My take: We asked them to do the wrong thing. Students are scared and looking to us for guidance. We need to show more leadership.
The old anti-cheating approach:
We've built that tool. It's called a coding agent.
If the agent can do the entire assignment...
...the student is doing little to nothing.
Accurate specification → AI can do it all
Vague specification → Student frustration
This is almost impossible to resolve with traditional assignments.
Idea Development (in discussion sections)
This is the uniquely human part — AI can't come up with your idea.
Build with Claude
The key insight: AI enables beginners to build real apps.
Personal investment → deeper learning.
CBTF quizzes: 80% of grade
Why keep it?
Like a crossword puzzle for the mind — mental training that transfers.
We need to teach students to work WITH AI, not against it.
A model to consider: Mathematics
Perhaps everyone will learn "programming" — meaning prompting agents to build things.
But CS majors will do something... different.
Am I teaching a dead skill? We need to adapt together.
A previous speaker suggested AI can only perform knowledge work of "limited scope and complexity".
I hope today's examples challenge that assumption.
We can't teach these tools if we don't know how to use them.
We don't know how to use them unless we are using them.
Even if AI coding agents stopped improving today,
software development has changed
forever.
Start small. Try Claude for your next task—writing, coding, analysis, anything.
The learning curve is surprisingly gentle when you just... start.
Let's talk while Claude works in the background.
(What should we build?)
Geoffrey Challen
geoffreychallen.com
[email protected]