Does AI “appreciate” its own ignorance?

An appreciation of our own ignorance is vital to our humility, our creativity and the safety of the decisions we make. Does AI have this quality, and if not, what does this mean?

Since the early stages of the quest for artificial intelligence, there has been a debate about what is uniquely human and what can be replicated by technology.  To what extent can Artificial Intelligence match or surpass Human Intelligence?

This article isn’t trying to predict the future. I’m not a deep AI expert. Nor am I a neuroscientist, a biopsychologist, or much more than an amateur philosopher. However, I am trying to take a different perspective by asking a few more philosophical questions about potential limitations in current AI capabilities, and whether those limitations could have significant consequences if we give AI too much power.

Over the last few weeks, two podcast series have given me pause for thought on how “human” current-day AI is. The first is season 3 of “Google DeepMind: The Podcast” [1], presented by Hannah Fry, which describes the current state of the art in AI. The second is season 2 of Rory Stewart’s “The Long History of… Ignorance” [2], which has nothing to do with technology and is more about humans’ philosophical relationship with their ignorance.

For clarity, I’m not an AI Luddite. I’m fantastically impressed by what these tools can already do. I regularly use tools like Google’s NotebookLM and OpenAI’s ChatGPT when conducting research and comparative analysis. But like most people, I have concerns and doubts about over-extending the trust we give to these tools, particularly as agentic systems that can take action are introduced.

A celebration of ignorance

Rory Stewart’s podcast is a fascinating exploration of the value that we gain from ignorance. It is based on the thesis that ignorance is not just the absence of knowledge; it brings a power all of its own. It feeds humility and is essential to the most creative endeavours that humans have achieved. We ignore it at our peril in complex systems, such as government and society.

In the rest of this article, I’ll describe my interpretation of the importance of ignorance and its role in our decisions and actions in order to assess its relevance to AI.

The core question I believe we need to consider is whether current AI appreciates its ignorance. This is not a question of whether it can define the word, or whether it recognises that it doesn’t know everything. It is whether AI embraces, respects and correctly recognises its ignorance; whether it doesn’t just learn through hindsight but becomes wiser; and whether, when it makes decisions and offers conclusions, it is fundamentally influenced by the knowledge that it is doing so from a position of ignorance.

The Rumsfeldian Trinity of Knowns

The late Donald Rumsfeld is most popularly remembered for his theory of knowns. He observed that there are things we know we know; things we know we don’t know; and things we don’t know we don’t know [3]. Stewart makes multiple references to this in his podcast. At the time Rumsfeld made the statement, I felt it was blindingly obvious. Since then, I have not only learnt that there is frequently significant value in stating the obvious, but have also used the Rumsfeldian trinity of knowns on numerous occasions in a wide variety of contexts.

Understanding our known knowns is relatively easy. And I’d suggest that current AI is better than any of us at knowing what it knows, so we can tick the first category off as solved!

Even the known unknowns should be pretty straightforward. If someone asks me a question and I don’t know the answer, then I know that this is a known unknown. AI should be able to handle this concept. Both human and artificial intelligence can sometimes make things up when the facts to support an answer aren’t known, but that problem should not be insurmountable.

As Rumsfeld said, though, the final category of unknown unknowns tends to contain the difficult ones (another statement of the obvious!). These aren’t missing facts that you can easily deduce are missing. They are situations where you have no reason even to believe that something might exist. It is the area of big misunderstandings, such as accepting that the world is flat because nobody has even considered that it might be spherical. It is expecting Isaac Newton to understand the concept of particle physics and the existence of the Higgs boson when he theorised about gravity. Or simply following one course of action because there was no reason to believe that there might be another available: all evidence in my known universe points to Plan A, so Plan A must be the only viable option (blind to the fact that my universe may not be everything).

I explored unknown unknowns with AI by playing some games with ChatGPT. The impression I got was that it recognises it doesn’t know everything (perhaps in a rather humble-bragging sort of way), but it seemed more focused on coping with known unknowns than on recognising the existence of unknown unknowns. When asked how it handles unknown unknowns, it explained that it would ask clarifying questions or acknowledge when something is beyond its knowledge. These appear to me to be techniques for dealing with known unknowns, not unknown unknowns.
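If you want to repeat this sort of probe programmatically rather than in the chat window, a minimal sketch along the following lines would do it, assuming the official openai Python package (v1 or later) and an OPENAI_API_KEY in your environment. The model name and prompts below are placeholders for illustration, not the exact exchange I had:

```python
# Illustrative probe of the three Rumsfeldian categories via the OpenAI API.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

questions = [
    "Give an example of something you know that you know.",
    "Give an example of something you know that you don't know.",
    "How would you recognise an unknown unknown in a question I ask you?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}")
    print(f"A: {response.choices[0].message.content}\n")
```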

The more we learn, the more we understand how much we don’t know

Like everyone, I started life knowing little more than how to draw a breath.  Then, throughout my childhood, I was taught that the more I knew and understood, the more successful life would be.  Not knowing a fact or principle was not something to be proud of, and it needed to be addressed by learning the missing knowledge and then learning more to avoid failure in the future.  In education we were (and perhaps still are) encouraged to value knowledge more than anything else.

But then life happens. And as the years go on, I look back on past decisions that I made with absolute conviction, based on my knowledge at the time, which turned out to be fundamentally ill-informed. The more often we see our mistakes in hindsight, the more conscious we become of how little we actually know.

If AI currently doesn’t appreciate or respect this fundamental concept of ignorance, then what flaws and hazards might exist in the decisions it makes or the advice it gives?

The peril of hubris

To feel we can understand all aspects of a complex system is hubris. Rory Stewart touches on this from his experience in government. It is a fallacy to believe that by understanding more details and absorbing more facts about the characteristics of society, we can solve the really hard problems. As Stewart says, this leads to brittle, deterministic solutions based on the known facts, with maybe a measure of tolerance for the known unknowns. Their vulnerability to the “law of unintended consequences” is proven repeatedly when the solution is found to be fundamentally flawed because of facts that were never, and probably could never have been, anticipated.

These unknown unknowns might be known elsewhere but out of sight to the person making the decision. For instance, there is undoubtedly a raft of things I don’t know about how AI works today. But I’m sure if I spoke to the right people and the conversations happened to go in the right direction, I’d become enlightened. However, many things are universally unknown at any moment in time. Many of the laws of physics we accept today weren’t known by anyone a few decades ago.

The super-power of humility

When we understand and respect our ignorance, we gain humility as a counter to arrogance and hubris. Taken to an extreme, this humility would risk paralysing us, but for most of us it is that nagging feeling of caution we have when making big, important decisions. At Sandhurst, we were taught that humility is one of the principles of leadership. As young officer cadets we understood it, but perhaps with a slightly arrogant view of where it stood in the order of things. From a personal perspective, humility is something I have come to value increasingly over time, not just in leadership but also in general decision-making.

The basis of true creativity

Stewart dedicates an entire episode to ignorance’s contribution to creativity, bringing in the views and testimony of great artists of our time like Antony Gormley. If creativity is more than the incremental improvement of what has existed before, how can it be possible without being mindful of the expanse of everything you don’t know? This is not a new theory. If you search for “the contribution that ignorance makes to human thinking and creativity” you will find numerous sources that discuss it, with references ranging from Buddhism to Charles Dickens. Stewart describes Gormley’s process of trying to empty his mind of everything in order to set the conditions for creativity.

Creativity is vital to more than creating works of art. It is an essential part of complex decision-making. We use metaphors like “brainstorming” and “blue-sky thinking” to describe the importance of opening your mind and not being constrained by bias, preconception or past experience. This is useful not just for coming up with new solutions, but also for “war-gaming” previously unforeseen scenarios that might present hazards to those solutions.

What would you entrust to the “Village Smarty Pants”?

So, if respecting and appreciating our undefined and unbounded ignorance is vital to making good and responsible decisions as humans, where does this leave AI? Is AI currently able to learn from hindsight – not just learn the corrected fact, but learn from the very act of having been wrong? In turn, can this learning make it more conscious of its shortcomings when considering things with foresight? Or are we creating an arrogant super genius, unscarred by its past mistakes and unable to think outside of its own box? How will this hubris affect the advice it offers and the decisions it takes?

What if we lived in a village where the candidates for leader were a wise elder and a know-it-all? The wise elder had experienced many different situations, including war, famine, joy and happiness; they had improvised solutions to the problems they faced, and had learnt in the process that a closed mind stifles creativity; they knew the mistakes they had made, and therefore knew their eternal limitations. The village “Smarty Pants” was highly educated, having been to the finest university in the land, felt that they knew everything about everything, and had never made a bad decision in their life. Who would you vote for? I know I’d be more concerned about the “Village Smarty Pants” being unconcerned about making an ill-informed decision than I would be about the elder’s lack of degree credentials.

Conclusion

The concepts I’ve described may already be in hand. They shouldn’t be insurmountable, and there are far more expert philosophers and ethicists than me grappling with these issues in places like Google DeepMind. The current models may have a degree of caution built into them to dampen the more extreme enthusiasm. But I’d argue that caution when making decisions based on what you know is not the same as creatively exploring the “what if” scenarios in the vast expanse of what you don’t know.

I’ll go out on a limb just in case it is necessary to state the obvious – we should be cautious of the advice we take from these models and what we empower them to do, until we are satisfied that they are wise and creative as well as intelligent. Some tasks don’t require wisdom or creativity, and we can and should exploit the benefits that these technologies bring in this context. But does it take both qualities to decide which ones do? I’ll leave you with that little circular conundrum to ponder.

1. Hannah Fry, “Google DeepMind: The Podcast”, Season 3, 2024.
2. Rory Stewart, “The Long History of… Ignorance”, Season 2, 2024.
3. Donald Rumsfeld, “There are Unknown Unknowns”, February 2002.
