An LLM-Powered Pomodoro Timer
AI is the Clock Now
I had a weird thought: what if, instead of running a timer on my computer, I made an LLM be the timer? Not just kick off a `setTimeout`, but actually live and breathe the countdown itself. A deeply silly, slightly cursed, and wonderful idea.
So I built it. It’s a Pomodoro timer where Claude is the clock. My little Node.js script is just its hands, doing nothing but what the AI tells it to. The entire logic (the 25-minute focus blocks, the 5-minute breaks) lives inside Claude’s “mind.”
It’s an odd couple. You have the host, a dumb puppet, and the LLM, the brains in the jar pulling the strings.
The Host: A Glorified setTimeout
The Node.js script is the simplest thing I could get away with. It's a kernel with only two jobs: listen for commands and execute them blindly. It has no idea what a Pomodoro is. It just knows how to `sleep` and how to `notify`.
The main loop is basically this:
```ts
// Main loop: ask the model, do what it says, repeat.
while (true) {
  const response = await anthropic.messages.create({ /* model, tools, messages */ });

  // Pull any tool calls out of the response content.
  const toolCalls = response.content.filter((block) => block.type === "tool_use");

  for (const tc of toolCalls) {
    if (tc.name === "sleep") {
      // Cap the requested duration so a confused model can't hang us forever.
      const capped = Math.min(tc.input.ms, MAX_SLEEP_MS);
      // The host just waits here, no questions asked.
      await sleep(capped);
    } else if (tc.name === "notify") {
      // Print a pretty message.
      console.log(`🔔 ${tc.input.message}`);
    }
  }

  // Tell the model we're done and ask what's next
  // (append a tool_result for each call to `messages` before looping).
}
```
When Claude says `sleep({ ms: 1500000 })`, the script physically pauses for 25 minutes. It does nothing else. Once the time is up, it pokes Claude back awake with a simple message: "Okay, I slept. Now what?"
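A minimal sketch of that handoff, assuming a promise-wrapped `setTimeout` and the Messages API's `tool_result` convention (the helper name is mine, not necessarily the repo's):

```ts
// Hypothetical helper: setTimeout, promisified.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// After waking, the host answers with a tool_result in a user turn,
// which is how the Messages API expects tool output to come back.
messages.push({
  role: "user",
  content: [
    {
      type: "tool_result",
      tool_use_id: tc.id,
      content: "Okay, I slept. Now what?",
    },
  ],
});
```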
That's it. All the intelligence is somewhere else.
The LLM: The Ghost Running the Show
This is where it gets fun. In this setup, Claude isn't just a chatbot; it’s the entire runtime. The system prompt turns it into a "Pomodoro-VM," a specialized machine whose only purpose is to keep time.
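The prompt itself doesn't need to be long. A paraphrased sketch of its shape (my wording, not the repo's exact text):

```ts
// A paraphrased sketch of the system prompt, not the repo's exact wording.
const SYSTEM_PROMPT = `
You are Pomodoro-VM, a virtual machine whose only purpose is to keep time.
Run 4 cycles: 25 minutes of focus, then a 5-minute break.
Your syscalls are sleep, notify, and log. Announce each phase with notify,
then sleep for its duration. Your only memory is this conversation;
reread it to work out which cycle you are on.
`;
```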
Its whole world is defined by three "syscalls": `sleep`, `notify`, and `log`.
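On the wire, those syscalls are ordinary tool definitions passed to the Messages API. Roughly like this, though the input schemas are my guesses at the shapes, not copied from the repo:

```ts
// Sketch of the three "syscall" tools; input shapes are assumptions.
const tools = [
  {
    name: "sleep",
    description: "Block the host process for the given number of milliseconds.",
    input_schema: {
      type: "object",
      properties: { ms: { type: "number" } },
      required: ["ms"],
    },
  },
  {
    name: "notify",
    description: "Show the user a message, e.g. a phase starting or ending.",
    input_schema: {
      type: "object",
      properties: { message: { type: "string" } },
      required: ["message"],
    },
  },
  {
    name: "log",
    description: "Record a line for the model's own bookkeeping.",
    input_schema: {
      type: "object",
      properties: { line: { type: "string" } },
      required: ["line"],
    },
  },
];
```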
The craziest part is the state management. The LLM has no database or variables in the traditional sense. Its "memory" is just the conversation history. It knows it’s on cycle 2 of 4 because it can read the transcript of what it did before. It wakes up, scans the chat log, and figures out what to do next.
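Concretely, that "memory" is just the messages array the host keeps appending to. A heavily abbreviated trace (real entries are full `tool_use` and `tool_result` blocks):

```ts
// Abbreviated transcript; each real entry is a tool_use or tool_result block.
const messages = [
  { role: "assistant", content: "notify: 🍅 Focus 1/4 starting" },
  { role: "assistant", content: "sleep: 1500000 ms" },
  { role: "user", content: "Okay, I slept. Now what?" },
  // ...on wake, Claude rereads everything above and infers: break 1 of 4 is next.
];
```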
The execution flow is a conversation:
- Claude: "Okay, starting a focus phase." (
notify
) - Claude: "Now, wait for 25 minutes." (
sleep
) - (25 minutes of silence)
- Host: "Done sleeping."
- Claude: "Right, where was I? Ah, focus is over. Time for a break." (
notify
) - Claude: "Wait for 5 minutes." (
sleep
)
This back-and-forth continues until the LLM decides the session is over. The logic doesn't live in my TypeScript files; it lives in Claude's reasoning process.
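How does the host know when to stop? One simple convention, and my assumption about how the repo handles it: when Claude replies without calling a tool, the response's `stop_reason` is no longer `"tool_use"`, and the loop can exit:

```ts
// If Claude answered without requesting a tool, the session is over.
if (response.stop_reason !== "tool_use") {
  console.log("🍅 Session complete. Claude has clocked out.");
  break;
}
```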
Why Bother?
This whole thing flips the script on how agents work. The LLM isn't delegating tasks; it is the task. It's a tangible, weird example of the "LLM as an operating system" idea. By giving the model a few primitive tools, we let it manage its own long-running processes.
What other dumb things could we make an LLM be? A dungeon master that doesn't just narrate the world but also runs the game clock? A personal assistant that doesn't just set reminders but actively "waits" with you during a tedious task?
I don't know, but it's a fun premise. And my Pomodoros have never been spookier.
Check out the code on GitHub.
not made by a 🤖