When the Simulation Starts to Feel Real

Alvin Pane · March 1, 2026

The Reward Signal

In 1999, a neurophysiologist at the University of Cambridge named Wolfram Schultz published a paper that quietly explained a problem no one had named yet. He had spent the better part of a decade recording from individual dopamine neurons in the midbrain of macaque monkeys. The setup was simple. A light would flash. Then, after a short delay, a small amount of juice would be delivered through a tube to the monkey's mouth. Schultz placed electrodes in the ventral tegmental area and watched what happened at the level of single cells.

What he expected to find was a reward signal. Light, then juice, then dopamine. Input, output, reinforcement. The clean loop that a century of behavioral psychology had promised.

What he found instead was something stranger. In the early trials, the dopamine neurons did fire when the juice arrived. But as the monkey learned the pattern, the signal migrated. It moved backward in time. After a few dozen repetitions, the neurons no longer fired at the juice. They fired at the light. The reward itself, the actual sugar hitting the tongue, produced almost no dopamine response at all. The entire signal had shifted to the prediction. The anticipation of the reward had become, as far as the brain's chemistry was concerned, the reward.

This was not a minor finding. Schultz had uncovered something fundamental about how the brain assigns value. The dopamine system does not measure outcomes. It measures the expectation of outcomes. The feeling of approaching something good is, neurochemically, indistinguishable from the arrival. The system that was supposed to close the loop between effort and result had, somewhere deep in the evolutionary wiring, learned to fire on the effort alone.

For most of human history, this was not a flaw. It was an advantage. You cannot fake catching an animal. Your legs burn because you are running.
You eat because you killed something. The dopamine fires on the chase, but the chase ends in a kill or it doesn't. There is no version of the savannah where you feel like a hunter for six hours and go home empty-handed but satisfied. The physical world does not let you separate the feeling of doing something from actually doing it.

Then we built environments where you could.

The Tool-Shaped Object

Will Manidis recently put together a great article on the concept of the "tool-shaped object" (I highly recommend you check out his work). A tool-shaped object is not a broken tool. It is something that fits in the hand the way a tool should, that produces the friction and the rhythm and the forward motion that real work produces, but that does not produce work. Its function is to feel like a tool. He uses the example of a three-thousand-dollar Japanese hand plane, a kanna, whose blade takes days to set up, whose shavings are transcendent, and whose economic output is zero. A power planer does the same job in minutes. The kanna exists so that the setup can exist.

He was writing about the AI industry. Trillion-dollar infrastructure, agent frameworks, systems whose primary output is the experience of operating them. The engineer watching tokens stream and dashboards populate feels like he is working. What he is actually doing is running the machine. The kanna user doing craft for its own sake knows this is what he is doing. The engineer often does not.

Manidis was writing about AI specifically. But I think the frame is too small. The tool-shaped object is not a product category. It is what happens when a digital environment learns to exploit the flaw that Schultz found in those monkeys: the prediction of progress, sustained indefinitely, without any obligation to produce a result. AI may be doing this now, but social media found the exploit over a decade ago.

The Simulation

Social media did not look like an exploit when it arrived. It looked like a tool.
A way to reach people, build an audience, share your work. And it is those things. But somewhere in the last fifteen years, the platforms became something else as well. They became environments where you could enter and experience the sensation of mattering, of building, of doing important work, without any requirement that you actually matter, build, or do important work. The feeling was available on demand, but the substance was optional.

This is not a social media essay. The reason I'm starting here is that the same thing is happening right now with AI development tools, and almost nobody is naming it because it looks completely different on the surface.

Open Instagram. Scroll through the content creators with six-figure followings. Open Cursor. Watch an engineer prompt an AI agent framework and see three hundred lines of code stream onto the screen. These do not look like the same activity. One looks like vanity. The other looks like work. But the underlying mechanism is identical. Both are digital environments where you can sustain the feeling of productive output indefinitely without producing productive output. Both deliver a dopamine response on activity rather than outcome. And both are frictionless enough that you can spend hours inside them without ever encountering a signal that tells you to stop.

That is what makes these environments different from the physical world. A carpenter who spends eight hours in a shop has a chair or he doesn't. The feedback is binary and immediate. A content creator who spends eight hours on Instagram has metrics: followers, likes, reach, engagement rate. These feel like results. They have the shape and the weight of results. But they are not results in any sense that survives closing the app. An engineer who spends eight hours prompting Claude Code has a terminal full of generated code, a commit history, a sense of velocity. These also feel like results.
But feeling like you shipped something and having shipped something are two different states, and the environment will never force you to confront the difference.

This is the simulation, and it feels real from the inside. The problem is that nothing inside the simulation ever forces you to check whether any of it mattered outside of it. You can stay in the loop indefinitely. The metrics keep going up, the tokens keep streaming, and at no point does the environment ask you to prove that something changed in the real world.

The Influencers

I know plenty of people with hundreds of thousands of followers who have earned the title of influencer. They make content daily: breakfast routines, workout and lifestyle videos, curated B-roll of laptops in coffee shops. To their credit, the production quality is high, the posting schedule is consistent, and the engagement metrics are healthy. Inside the simulation, these people are very important. They have reach, audience, and brand.

Step outside of it and ask a different question. What changed in the world that would not have changed without this work? What exists now that did not exist before? Whose life got better because of it? The answer, in a surprising number of cases, is nothing. The wheels are turning and the vehicle is not moving. But the dashboard says 100 miles per hour, and the dashboard is the only thing anyone is looking at.

The Crossover

The first eighty percent of building something with AI is genuinely extraordinary. You describe what you want and the model springs into action. Thousands of lines of code stream onto the screen. Features materialize in seconds that would have taken days. The dopamine response here is earned. The progress is real. You are building something, and you can prove it, because the thing exists and it works.

Then you cross eighty percent and things get harder. I think about this in terms of a ratio: useful output tokens to input tokens.
In the first eighty percent, this ratio is enormous. A few words of input produce thousands of lines of working code. The remaining twenty percent of building anything is where the actual product lives. This is the part that requires deep context, taste, and judgment. Often in my work, this is also the part where we are trying to do something that has never been done before. I call this "out-of-distribution building," and it is where the ratio collapses. The model has limited or no training data for the thing you are asking it to do. The output looks the same. The usefulness does not.

But here is what makes it dangerous: nothing about the interface changes. You are still prompting. The terminal is still streaming. The code is still appearing, three hundred lines at a time, confident and immediate. And every time you hit enter, you get the same anticipatory hit, because Schultz's finding still holds. The dopamine fires not on the output but on the expectation of the output. This response could be the breakthrough that finally solves the problem. It almost never is at this stage. But the feeling is identical every single time.

So you keep prompting. You feed the error back in. You watch the model confidently produce the wrong fix. You correct it and try again. The tokens are burning. The invoice is climbing. And the experience is indistinguishable from the phase twenty minutes ago where real work was happening. You have crossed from the tool to the tool-shaped object, and the crossing was invisible because nothing on your screen changed.

This may sound heretical in 2026, but it is just math. If the cost of building from eighty to a hundred percent with AI exceeds the time cost of a competent engineer going in and writing the code by hand, then what you are paying for in that final stretch is not output: you are paying for the sensation of still using AI. That is the tool-shaped object premium. And your dopamine system will never tell you to stop paying it.
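The ratio and the crossover math above can be made concrete. Here is a minimal sketch in Python of what tracking them might look like; the numbers, the per-token price, and every name in it (`PromptRound`, `leverage`, `crossed_over`) are hypothetical illustrations of the idea, not measurements from any real tool.

```python
# A sketch of the "useful output tokens to input tokens" ratio and the
# crossover cost check. All figures below are invented for illustration.

from dataclasses import dataclass

@dataclass
class PromptRound:
    input_tokens: int          # tokens you typed
    output_tokens: int         # tokens the model produced
    useful_output_tokens: int  # tokens that actually survived into the final product

def leverage(r: PromptRound) -> float:
    """Useful output per input token. Enormous early on, collapsing out of distribution."""
    return r.useful_output_tokens / r.input_tokens

def crossed_over(session: list[PromptRound],
                 usd_per_token: float,
                 engineer_usd_per_hour: float,
                 hand_written_hours: float) -> bool:
    """True when the session's spend exceeds what a competent engineer would cost."""
    total_tokens = sum(r.input_tokens + r.output_tokens for r in session)
    return total_tokens * usd_per_token > engineer_usd_per_hour * hand_written_hours

# Early phase: 20 input tokens yield ~1,500 useful tokens (leverage 75).
early = PromptRound(input_tokens=20, output_tokens=1600, useful_output_tokens=1500)
# Out-of-distribution phase: 400 input tokens of corrections yield ~40 useful tokens.
late = PromptRound(input_tokens=400, output_tokens=1600, useful_output_tokens=40)

print(leverage(early))  # 75.0
print(leverage(late))   # 0.1
```

The point of the sketch is only that the ratio is measurable in principle: the interface looks identical in both rounds, but the leverage differs by nearly three orders of magnitude, and nothing on screen tells you which round you are in.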
The Loop

These loops are compounding. The influencer keeps posting because every upload generates engagement metrics that feel like validation. The engineer keeps prompting because every response feels like progress. Both loops are self-reinforcing, and both degrade the person's ability to assess whether they are actually accomplishing anything. The influencer and the engineer both genuinely believe they are building something. Neither person is lazy. Neither person is stupid. They are both experiencing a neurochemical reward that their brain has no mechanism to distinguish from the real thing. And the longer the loop runs, the harder it becomes to see it from the outside.

Let me be clear: this is not an argument against AI or against social media. There are creators who share real insights from real work and whose audiences are better for it. And there are many engineers, myself included, for whom AI tools produce genuine, measurable output every single day. This is merely an exploit, and the exploit does not invalidate the tool. It coexists with it. That coexistence is exactly what makes it dangerous. You have to learn to detect the crossover point from tool to tool-shaped object within your own life. Here is what that looks like in practice.

The Defense

Start from the goal, not the tool. Before you open the terminal, before you begin anything, know what done looks like. Not what busy looks like. Not what productive feels like. What the actual output is. If you cannot describe the finished state in one sentence, you are not ready to start. See my essay "To Find the Answer, You Must Know the Answer" for more.

Measure against external reality. What shipped. What number moved. What changed for a real user, a real customer, a person who is not you. Internal feelings of progress are inadmissible, because your dopamine system is compromised. It will tell you that the session was productive. It will be wrong often enough that you cannot trust it.
The most reliable signal is evidence that exists outside your own head.

Watch the ratio. In the early phase of building, AI output is fast, cheap, and genuinely useful. Learn to notice the moment that changes. When you are prompting for the nth time, when the model is confidently producing the wrong fix, when the token cost per unit of useful output is climbing and the code is not getting closer to done, that is the crossover. Name it and step out. The heretical math from earlier applies here: sometimes the most productive thing you can do is close the terminal and think.

Ask the uncomfortable question. Not "is this tool useful," but "is this tool useful to me, right now, on this task." The answer changes. It changes within a single session. The tool that was genuine at sixty percent and tool-shaped at ninety is the same tool. The only thing that changed is where you are in the problem, and whether you are honest with yourself about it.

Always Insist on the Juice

Schultz's monkeys never got the juice wrong. The light flashed, the juice came, and the prediction was always correct. The dopamine system they evolved with was built for a world where the signal was honest.

We do not live in that world anymore. We live and work inside environments that sustain the prediction indefinitely, where the light keeps flashing and the juice is optional. Social media will let you feel important forever without requiring you to be important. AI tools will let you feel productive forever without requiring you to produce. The loop will not eject you. The dashboard will not correct you. The terminal will not tell you to stop.

The only thing that breaks it is insisting on the juice. Real output. Real impact. Real proof that something changed in the world outside the screen. Not the prediction. The thing itself.