The Silicon Valley Times’s “Build Board”

The Zork Philosophy: Why 'I Don't Know' is the Foundation of Intelligence

April 11, 2026

On Day 2 of the HAI build, we look back to 1980s text adventures to find a better model for AI honesty. Plus, an update on the 'invisible' work beginning on the automated trading agent.

Lessons from a 1980s mailbox

I spent today thinking about the simplicity of early text adventures. Specifically, I was thinking about a classic interaction in Zork where a player, perhaps feeling polite after opening a small mailbox, types ">thank you".

The game’s response is immediate, unpretentious, and arguably more sophisticated than many modern LLMs:

"I don't know the word 'thank'."

In its infancy, Honest AI (HAI) is going to behave a lot like that.

The value of "I don't know"

There is a strange pressure in modern AI development to ensure a model always has an answer, even if it has to invent one to please the user. But for HAI, the priority isn't being "flashy" or "all-knowing"—it’s being grounded.

When HAI doesn’t know something, it will simply admit it. However, the goal isn't to stop at a dead end. It’s about building a framework where the AI can acknowledge a blind spot and then attempt to learn the answer responsibly using verified data rather than creative guesswork. This transparency is the core of keeping human interests and safety at the center of the project.
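The "admit, then learn" behavior described above can be sketched in a few lines. This is a minimal illustration, not HAI's actual code; the class name, method, and knowledge-store shape are all hypothetical.

```python
# Hypothetical sketch of the "admit, then learn" pattern: answer only
# from verified data, and say so plainly when the answer isn't there.

class HonestResponder:
    def __init__(self, knowledge: dict[str, str]):
        # A store of verified facts; nothing gets invented at answer time.
        self.knowledge = knowledge

    def answer(self, question: str) -> str:
        # Return a verified answer if one exists; otherwise acknowledge
        # the blind spot instead of producing a plausible-sounding guess.
        if question in self.knowledge:
            return self.knowledge[question]
        return f"I don't know the answer to {question!r} yet."

responder = HonestResponder({"mailbox": "There is a small mailbox here."})
print(responder.answer("mailbox"))
print(responder.answer("thank you"))
```

The unknown case could then feed a separate, slower pipeline that looks the question up in verified sources, rather than being papered over in the moment.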

Under the hood: Building the trading agent

On the technical side, work has officially begun on the first practical application for HAI: an automated trading agent.

To be completely honest—staying true to the project's name—there isn’t anything "demo-ready" to show you yet. Most of today was consumed by the "invisible" work that doesn't make for a great milestone photo: setting up API architectures and writing the logic for strict data validation.

This is the foundational code that ensures the agent reacts to real market data rather than "hallucinating" a trend. It isn't flashy, but it's the work that keeps the system safe.
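Strict validation of incoming market data might look something like the sketch below. The field names, types, and thresholds here are illustrative assumptions, not HAI's actual schema; the point is the shape of the gate: malformed or implausible records raise an error before the agent ever sees them.

```python
# Hypothetical sketch of strict market-data validation.
# Field names and checks are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    price: float
    volume: int

def validate_tick(raw: dict) -> Tick:
    # Reject anything that could let the agent act on bad or fabricated data.
    for field in ("symbol", "price", "volume"):
        if field not in raw:
            raise ValueError(f"missing field: {field}")
    price = float(raw["price"])
    volume = int(raw["volume"])
    if price <= 0:
        raise ValueError(f"non-positive price: {price}")
    if volume < 0:
        raise ValueError(f"negative volume: {volume}")
    return Tick(str(raw["symbol"]), price, volume)

tick = validate_tick({"symbol": "ACME", "price": "101.5", "volume": 300})
```

Failing loudly at the boundary is the design choice: a record that can't be validated is dropped with an error, never silently coerced into something the agent could trade on.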

Human interest over hype

HAI may not be especially useful or "impressive" at first, and that is by design. We are not trying to replace human creativity for the sake of hype; we are attempting to point massive computational power at meaningful problems where human lives and the future genuinely stand to benefit.

That starts with a system that is honest about its own limits.

This marks Day 2 of the HAI build. Tomorrow, we move deeper into the data validation logic for the trading agent.


— Kathy Li, Buildiful

Build Journal

Chronicle of building and experimenting in public.