Can cheap AI models turn a podcast into a lesson?

At work, I build AI agents (trigger warning: this is about LLMs) designed to help teachers avoid as much day-to-day drudgery as possible. We provide a system that, counterintuitively, is geared toward reducing the amount of time a teacher needs to spend in the app spelunking around. Need the latest assessment grades for Mrs. Doe’s 3rd period? Just ask the AI; it’ll go off and grab that information for you while you pour another cup of coffee. We use top-of-the-line models from Anthropic to be as accurate, unbiased, and error-free as possible. These models require API keys and five figures in engineering time just to put the guardrails in place that allow a teacher or administrator to use them in relative safety. ...

December 5, 2025 · 8 min · 1494 words · Zac Orndorff<https://orndorff.dev>

Building a Non-Deterministic Merge Game with LLMs

What I Built and Why: I’ve always enjoyed those element-combining merge games like Doodle God or Little Alchemy. You know the ones - Water + Fire = Steam, Earth + Water = Mud, that sort of thing. There’s something satisfying about discovering combinations, but after playing a few, I started noticing a fundamental limitation: every combination is predetermined. Everyone who plays gets exactly the same results. The discovery phase is fun, but once you know the combinations, there’s no variance. ...

November 1, 2025 · 5 min · 1026 words · Zac Orndorff<https://orndorff.dev>

AI in the Classroom: Product Blueprints from the 'Hard Fork' Podcast

For AI engineering leaders, the annual back-to-school season isn’t just a cultural milestone; it’s a market signal. It marks a massive influx of users engaging with digital tools, testing the limits of existing platforms, and revealing unmet needs. The recent “Hard Fork” podcast episode on AI in education serves as a potent source of raw user research, offering a direct line into the mindsets of educators, innovators, and the students who form the next generation of knowledge workers. ...

September 5, 2025 · 6 min · 1276 words · Zac Orndorff<https://orndorff.dev>

That terrible presentation, the enshittification of OpenAI

Thinking about the GPT-5 presentation fiasco yesterday (friends don’t let friends use DALL·E for charts) and the resulting, almost overwhelmingly negative reaction to both the style of the speakers and the substance. I’m wondering if what we’re seeing is less a problem with LLMs having hit a ‘wall’ and more the ‘enshittification’ of OpenAI itself. They’ve never been particularly strong on the pure research side of things. Their main strength has always been productizing scientific breakthroughs into consumer products. Take the fundamental ‘Attention Is All You Need’ paper and the transformer architecture: neither was an OpenAI breakthrough. Instead, their incredibly talented early team identified ways to capitalize on those important insights with their own breakthroughs in model training and scaling. ...

August 8, 2025 · 4 min · 666 words · Zac Orndorff<https://orndorff.dev>

What 50 First Dates can teach us about LLM memory

You’ve been there. You and your AI coding buddy are in the zone. It’s feeding you perfect snippets of code, it understands your weirdly named variables, it’s practically reading your mind. You’ve built half a dozen functions, and the project is humming along. Then you close the window. You come back an hour later, open a new chat, and ask it to build the next piece of the puzzle. The AI stares back at you with the digital equivalent of a blank expression. It has no idea what your project is, what a user_auth_service is, or why you keep muttering about the global_config.json. It has, for all intents and purposes, become incredibly dumb. ...
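The "blank expression" in a fresh chat comes down to statelessness: nothing carries the message history across sessions unless the client persists it itself. A minimal sketch of that persistence step (the file name and message shape here are hypothetical, not from any particular API):

```python
import json
from pathlib import Path

# Hypothetical on-disk store for the running conversation.
HISTORY = Path("chat_history.json")

def save_history(messages: list[dict]) -> None:
    """Persist the message list so a later session can pick up the thread."""
    HISTORY.write_text(json.dumps(messages))

def load_history() -> list[dict]:
    """Reload prior messages; without a step like this, every new chat
    starts from a blank slate and the model 'forgets' your project."""
    if HISTORY.exists():
        return json.loads(HISTORY.read_text())
    return []
```

Prepending the loaded history (or a summary of it) to the next request is what gives the model the appearance of memory; the model itself remembers nothing between calls.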

August 2, 2025 · 5 min · 899 words · Zac Orndorff<https://orndorff.dev>