The Future of AI Is Unpredictable, and That’s a Good Thing

Aleksandra Hadzic
5 min read · Dec 9, 2021


The question of whether a computer can have a mind is as old as computing itself. When Alan Turing proposed his now-famous “imitation game,” a test designed to demonstrate machine intelligence, it was widely believed that the machines of the day couldn’t possibly pass.

The best books on science and technology also attempt to shift our perspective.

These books aim to force us out of our analytical, reductionist boxes and into a more holistic way of seeing.

Some of the very best achieve this by crossing disciplinary boundaries, while others are so packed with hard facts that they manage to make you think even if you disagree with what’s being said.

Thirty years ago, the famed artificial intelligence (AI) scientist Herbert A. Simon predicted that humans would have thinking machines in the twenty-first century. But we don’t. The sloppy but clever term “artificial intelligence” has fueled misdirection and misunderstanding, and Simon deserves credit for exposing this and anticipating our failure to date. In his last book, The Sciences of the Artificial, Third Edition, he asks why our thinking machines never arrived and considers their prospects.

We, on the other hand, see the world and think about it in ways that are too complex for a thinking machine to replicate.

Rather than abandon his faith in the power and possibilities of AI, Simon argues that its development has simply not kept pace with our early expectations.

The Age of Artificial Intelligence and Our Human Future

In an era when users have access to a plethora of personal assistants, from Apple’s Siri and Google Now to Alexa and Cortana, all of which are dedicated to understanding and responding to natural language queries, there is growing concern that AI technologies will increasingly infiltrate every aspect of our lives.

Such concerns appear to fly in the face of reality, because we already rely heavily on various AI technologies, such as Google search, Facebook’s news feed algorithms, Amazon’s product-recommendation engine, and Netflix’s movie recommendations, to perform complex analyses on massive amounts of data that inform our daily decisions.

Goldstein and Gigerenzer’s work may indirectly explain how artificial intelligence behaves and how that behavior shapes its applications.

Their research equips the reader to think critically about the opportunities and risks of deploying these technologies in contexts as diverse as governance, education, health care, business, journalism, religion and commerce.

According to the authors, “heuristics are simple, efficient, adaptive thinking strategies that complement — and sometimes supplant — more analytic and deliberative forms of inference.”

They refer to the recognition heuristic as one example. It is a type of recognition bias that takes advantage of our ability to recognize a common pattern. The authors point out that this heuristic is efficient mainly because it exploits an internal facility for storing and recognizing overlearned visual patterns.
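The decision rule behind the recognition heuristic can be made concrete. Below is a minimal illustrative sketch, not code from Goldstein and Gigerenzer: when comparing two options on some criterion, if exactly one of them is recognized, infer that the recognized one ranks higher; otherwise the heuristic does not apply and some other strategy must decide. The function name and the sample city data are hypothetical choices for the example.

```python
def recognition_heuristic(a, b, recognized):
    """Infer which of two options ranks higher on a criterion.

    If exactly one option is recognized, return it (the heuristic
    applies). If both or neither are recognized, return None to
    signal that a more deliberative strategy is needed.
    """
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return None  # heuristic does not apply


# Illustrative example: judging which of two cities is larger
# when you have only heard of some of them.
recognized_cities = {"Berlin", "Munich", "Hamburg"}

print(recognition_heuristic("Munich", "Herne", recognized_cities))   # Munich
print(recognition_heuristic("Berlin", "Munich", recognized_cities))  # None
```

The appeal of the rule is exactly what the authors describe: it needs no analysis of the options themselves, only the cheap, overlearned act of recognition, which is why it can sometimes supplant more deliberative inference.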

The Age of AI: Our Future in a World of Artificial Intelligence

We can also learn more about this in The Age of AI: Our Future in a World of Artificial Intelligence, by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, which offers itself as “a manifesto” for business leaders, policymakers and the public on “what to expect based on recent developments and where the opportunities lie.”

It is ambitious in scope, both in subject (artificial intelligence is only one example of a technology poised to upend society, alongside big data, energy and biotechnology) and in content.

It offers engagingly written background on why we should care about AI, a high-level view of what AI can do, and then specific chapters on why it will be important in healthcare and life sciences, transportation and car manufacturing, and many other fields.

The authors’ central thesis is that a global race is underway for AI leadership. They argue that the countries that develop novel AI first will reap profound economic and military benefits, with “potentially massive repercussions” for the countries left behind. There are as many as half a dozen “AI superpowers” determined to move to the head of this pack — China, France, Germany, Israel, Japan and the United States — but China is regarded as the most formidable competitor in pursuit of “the holy grail of AI.”

The authors have written a well-researched book that describes the current state of AI and the scientific discoveries that led to its recent explosion in performance. They lay out the technical challenges — both immediate and long term — to advanced AI forms. And they explore the policy issues raised by these breakthroughs, such as how an artificial intelligence system might be programmed to act in ways that would be illegal for humans or corporations.

The authors contend that there has been insufficient oversight and critical analysis of AI research by policymakers and academic leaders in computer science and other fields. Yet this book itself could spur a deeper examination of several important questions: In what ways might AI lead to better governance?

When Will Computers Pass the Turing Test?

Still, progress has been uneven. For all the hype, actual AI and machine learning applications lag behind those of mobile, cloud computing and big data, some say by decades. And the next wave of innovation, with technologies such as quantum computing and neuromorphic chips, may take even longer to gain traction.

But there are limitations here, too.

Many of today’s leading AI applications are narrowly focused, relatively easy to build into the core, and close to the experience layer where we all live, work, and play.

Their narrow reach is a product of their design — pattern recognition or machine learning — as much as the technology itself.

Experts are already predicting when computers will pass the Turing Test, that is, when they can fool humans into thinking they are interacting with another human. In one breath, AI will make us unimaginably wealthy and free; in the next, AI will provide us with some or all of our emotional, social, and even physical needs. However, we’re unlikely to reach that world of AI fable any time soon.

The Future of Humanity: You, Me and AI

As AI continues to evolve, some would say unapologetically, the fear of a singularity remains, with many companies and individuals positioning themselves to benefit from that change. With new perspectives emerging every day, we have seen some of the most eloquent and compelling arguments yet that AI is not something to be feared but welcomed by those who desire a better future for all.

As we move ever closer to the emergence of brilliant machines, the real tragedy would be if humans remain separate and divided, unable to pull together to achieve more meaningful goals. By encouraging collaboration among humans and AI, we stand a greater chance of not just surviving but thriving as unified entities.



Aleksandra Hadzic

Researching AI. Merging Data Science and Digital Marketing.