4: Broken Lenses

We have more knowledge than ever. So why do we understand less?

5/1/2026 · 5 min read

Here is a strange fact about the world we live in: we have never had more information, and we have never been more confused about how things actually work.

Vaclav Smil, one of the most cited scientists alive, calls this the comprehension deficit. We are surrounded by systems we depend on completely: food, energy, materials, ecosystems. Most of us could not begin to explain how any of them function. We consume the outputs and remain untouched by the machinery.

This is not a new problem. It is an old one that knowledge itself helped create. And understanding how that happened might be the most important thing we can do before we hand even more of our thinking to machines.

When knowledge was a living thing

There was a time, in ancient India among other places, when knowledge was not stored but performed. Scholars would gather and debate at the edge of what they understood, governed by strict rules of discourse. To win was not to humiliate but to illuminate. Knowledge was decided by those who had pressed furthest into the unknown.

This produced something remarkable: a systemic intelligence that was collective, living, and constantly tested. It was knowledge as a practice, not a possession.

Then, gradually, it became a possession.

The shift is visible across history. Knowledge began determining who deserved education, who could enter which rooms, who counted as intelligent at all. J.C. Bose, one of India's greatest scientists, watched his peers race to patent their discoveries. He refused. He believed knowledge grew by being shared, not hoarded. He believed the greatest motivation for scientists was creative service. His critics dismissed his work on plant intelligence as philosophy, not science, as if the two were opposites rather than companions.

Knowledge stopped being a torch and became a gate. And the person holding the gate decided who deserved the light.

The three things I think knowledge can be

The first: it saves time. A phone call replaces a day of travel. A washing machine frees hours. This is knowledge as compression, reducing the friction between wanting something and getting it. Useful, obviously. But compressing time is not the same as deepening understanding. And it raises an uncomfortable question: where is all that saved time going? If we are freeing ourselves from effort, what are we doing with the freedom?

The second: it reveals what we cannot see alone. X-rays, telescopes, microscopes, the mapping of fungal networks beneath forest floors, these are extraordinary. They show us a reality our unaided senses cannot reach. And when that reality imprints on the imagination, it changes what we believe is possible. This kind of knowledge has driven some of the most important discoveries in human history. It has also cost lives: Bose's road into plant consciousness was walked almost alone, dismissed, underfunded. Few walk roads not taken.

The third, and the one Smil is writing about: knowledge as a system that quietly produces its own ignorance. The more specialized our disciplines become, the more we excel in narrow channels while losing sight of the whole. A climate scientist may understand atmospheric carbon flows in extraordinary detail and have no idea where their food comes from. A software engineer can build systems that touch millions of lives without understanding the electrical grid that powers them. We have built an architecture of expertise that makes deep comprehension increasingly rare.

The systems we built to know things — and what they got wrong

Our knowledge systems contain their own admissions of failure, if we look closely.

Gödel's incompleteness theorems showed that any sufficiently complex formal system will contain truths it cannot prove within its own rules. Mathematics is incomplete by nature. This is not a flaw to be fixed; it is a structural reality. And yet we treat mathematics as the gold standard of certainty, the language in which real knowledge is expressed.
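For readers who want the precise claim, the first incompleteness theorem is usually stated along these lines (a standard textbook formulation, not Smil's wording or this essay's):

```latex
% First incompleteness theorem (informal schema).
% T ranges over consistent, effectively axiomatized theories
% that interpret basic arithmetic.
\text{If } T \text{ is such a theory, there is a sentence } G_T \text{ with}
\quad T \nvdash G_T \quad \text{and} \quad T \nvdash \lnot G_T.
```

In words: the system can neither prove the sentence nor refute it, so its notion of provable truth is necessarily narrower than truth itself.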

Educational institutions were built to transmit and widen access to knowledge. Literacy was one of humanity's most powerful tools for doing this. But somewhere along the way, the institutions stopped being places of deep inquiry into the unknown and became places that rank people by how well they already know what is known. The badge of honour replaced the question. The credential replaced the curiosity.

Our own perception does this too. Our eyes discard an enormous amount of visual information before it ever reaches our brain. Try sketching something from careful observation — really looking at it — and you will feel the gap between what you thought you saw and what is actually there. We have always seen through broken lenses. The lenses just keep multiplying.

And our social systems: the instinct for group cohesion that once protected us from predators now shapes how we filter information, who we trust, which truths we are willing to consider. Group empathy preserved us at one point in history. Today, unchecked, it makes our lenses narrower.

"We have never had so much information at our fingertips and yet most of us do not know how the world really works." — Vaclav Smil

What this has to do with AI

Here is what worries me about the conversation happening right now. Many people speak of AI as an intellectual threat, the moment of surrender, the end of human thinking. But this assumes we were doing a great deal of unfiltered, unbiased, deeply curious thinking to begin with.

We were not. Our thinking was already running through broken lenses, shaped by which knowledge got funded, which questions got asked, which people got to be scholars. The comprehension deficit did not begin with large language models. It has been growing for centuries.

But here is what I think AI could do, if we are honest about what we want from it. J.C. Bose believed knowledge grows by being shared. Our brains are built to extend cognition outward, into tools, environments, the people we think alongside. AI could be part of that extension: a way to access knowledge without the gatekeeping that historical institutions built around it, a way to hold more of the whole in view at once.

The risk is not that AI thinks for us. The risk is that we ask it to do exactly what our knowledge systems have always done: compress the world into manageable outputs, filter out the uncomfortable and the uncertain, reward what is measurable and ignore what is not.

The questions worth sitting with

Before we can fix anything, we need to understand what we are actually looking at. And I think the questions below are more urgent than most of the conversations happening right now about the future of intelligence.

  • What do we actually think with: just our brains, or also our tools, environments, and the people we think alongside?

  • If intelligence is distributed across minds and context, what does it mean to measure it in isolation?

  • What is actually artificial about artificial intelligence, and what does that question reveal about what we mean by natural intelligence?

  • What ecosystems (biological, social, cultural) are quietly sustaining us while we remain completely unaware of them?

  • What actions and environments could genuinely reduce our comprehension deficit, rather than just managing its symptoms?

  • What would it take to fix the centuries-old lenses we wear, the ones we inherited and the ones we built ourselves?