Designed to be used, not understood.
Growing up, one of my earliest thrills was figuring out how things worked. I would dismantle an alarm clock just to see what lay inside. Could I put it back together again? Would it still work? And if it didn’t, could I assemble it well enough that my parents wouldn’t immediately notice? Radios followed. Then anything else that could be opened without doing permanent damage.
In hindsight, this wasn’t just about curiosity. It was about confidence — the belief that if something didn’t make sense, it was probably because I hadn’t yet broken it down far enough. That most systems, however intimidating at first glance, could be understood at some basic level if one was willing to take them apart and sit with the mess for a while. That belief quietly shaped how I approached problems well beyond objects.
When systems stop revealing themselves
Fast forward to today, and that instinct feels oddly out of place. Most of the products and services we rely on — especially digital ones — are no longer designed to be understood, only to be used. Their inner workings are abstracted away behind layers of software, algorithms, and now AI. We don’t even keep toolkits handy anymore — not because things don’t break, but because breaking them open no longer feels like a realistic option.
The same holds true for many everyday skills we increasingly outsource or abandon: reading and writing at length, cooking for ourselves, even trying to grow a plant. What these activities share is not nostalgia but friction: effort, feedback, and a visible connection between action and outcome.
The so-called knowledge era of the 1990s, driven by the software boom, began to create new asymmetries between those who built systems and those who simply used them. With AI now at the centre of the 2020s, that distance has widened further. Increasingly, we delegate not just execution but thinking itself. You are only a prompt away from producing text, code, or analysis that once required direct engagement.
You could argue that this is merely the natural course of progress. After all, we have always consumed things without fully understanding how they are made. But what feels different today is not consumption alone, but the growing distance from causality — the sense that understanding how something works is neither expected nor necessary.
What the brain learns from making
Neuroscience offers a useful, if still incomplete, lens for examining this shift, not in moral terms but in terms of how repeated patterns of engagement shape the brain over time. One of its most robust ideas is neuroplasticity: the brain continuously rewires itself based on what it is repeatedly asked to do. Skills that are practiced strengthen their neural pathways; those that are neglected gradually weaken. This applies not just to motor skills but to ways of thinking: problem decomposition, causal reasoning, spatial intuition, even patience.
Hands-on engagement — tinkering, repairing, assembling, experimenting — activates a rich combination of neural systems. It integrates sensory feedback, motor control, spatial reasoning, prediction, error correction, and delayed gratification. When you dismantle something and attempt to put it back together, the brain is constantly forming and testing internal models of how the world works. Mistakes are not abstract; they are tangible and corrective.
I was reminded of this when I read Matthew Crawford’s The Case for Working with Your Hands. Crawford argues that separating mental work from manual engagement was always artificial. Fixing things feels grounding not because it is nostalgic, but because it restores a direct relationship between effort and outcome.
In contrast, many modern digital experiences — mediated by software, algorithms, or AI — abstract away causality. Outcomes appear without visible mechanisms. Over time, this can shift us from a maker mindset to a more managerial or supervisory one. We learn to prompt, select, approve, and discard, rather than build, diagnose, or repair. While undeniably efficient, this mode of engagement offers fewer opportunities to develop deep causal intuition.
None of this suggests that modern tools are making us less intelligent. Intelligence is not diminishing; it is being reallocated. We are becoming better at navigating complexity and operating at higher layers of abstraction. But the trade-off may be a gradual weakening of first-principles thinking — the ability to mentally disassemble a problem into elemental parts and reconstruct it independently.
What may be eroding isn’t skill, but belief
The long-term impact, if there is one, is unlikely to be dramatic or catastrophic. It is subtler: a population fluent in outcomes but less anchored in origins; comfortable with consumption but distanced from creation; adept at using tools whose inner logic feels inaccessible, even irrelevant.
Whether this becomes a problem depends less on technology itself and more on whether we continue to preserve spaces for effort, friction, and making — in education, in work, and in everyday life. If not, what we may lose is not capability, but confidence: the quiet belief that one can still take something apart, understand it, and put it back together again.
And perhaps that belief, more than the skill itself, is what once shaped how many of us came to understand the world.
