
Get on track with AI – a summary of four seminal books

"Get on track with AI" offers readers a concise overview of four seminal books on artificial intelligence, helping them grasp the field's complexities. This guide introduces key texts by renowned authors Max Tegmark, Nick Bostrom, Stuart Russell, and Jacob Turner, exploring AI's future, ethical concerns, human compatibility, and legal challenges. Designed for AI enthusiasts and those new to the topic, this summary equips readers with foundational knowledge to confidently engage in AI discussions. Dive into a curated selection, ensuring a comprehensive understanding of AI's vast landscape.

The number of blogs, books, academic papers, media stories, and pop-culture references on Artificial Intelligence is multiplying by the day, and perhaps you are one of those who feel a bit left behind. Worry not: this blog is for you, intended to get you on track with AI.

How? Well, as François Mauriac, laureate of the Nobel Prize in Literature, put it:

“‘Tell me what you read and I’ll tell you who you are’
is true enough, but I’d know you better if you told me what you reread.”

Sticking to the first line of the quote, we are going to look at four prominent books in the field that, if read (even if only in summary), would communicate something (akin to erudition). I, an academic at heart, would even go as far as calling them seminal. If you disagree, you probably already know something about AI. Nonetheless, for those who are struggling to keep up with the increasingly demanding dinner conversations about AI – here are the four books that will provide what you need to know:

  • Life 3.0 by Max Tegmark
  • Superintelligence by Nick Bostrom
  • Human Compatible by Stuart Russell
  • Robot Rules by Jacob Turner

What follows is a summary of the core tenets of each book. Even though it is I who make the subjective call on how best to summarise the content, this text is not intended as a review, and we have already established my bias in thinking that these books are good reads. Note that the books are intentionally not listed in order of publication date. Instead, they are listed in the order I suggest reading them, to make the most sense to the interested but uninitiated.

Life 3.0

"Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark (2017) explores the future of artificial intelligence (AI) and its potential impact on civilisation. Tegmark discusses the ethical implications, possible scenarios for AI development, and how we might shape a future where AI benefits all of humanity.

The title of the book refers to a categorisation of life into three stages, based on the degree to which life can design its own hardware and software:

Life 1.0 (Biological Evolution): At this stage, life can change neither its hardware nor its software within a lifetime; both are shaped by evolution alone. Here, "hardware" refers to physical biology and "software" refers to the mechanisms and processes that life uses to survive, i.e. intelligence. Bacteria, for instance, fit into this category.

Life 2.0 (Social Evolution): This refers to entities that can evolve their software, meaning they can learn during their lifetime, but they can't significantly change their hardware. Here, humans are a prime example. Through culture, education, and personal experiences, humans can acquire and pass on knowledge without waiting for biological evolution.

Life 3.0 (Technological Evolution): This is life that can redesign both its hardware and software in short order. Advanced artificial intelligence would fit into this category. These entities would be able to modify not only their programming (software) but also their physical structures (hardware), allowing for rapid adaptation and evolution far surpassing the capabilities of Life 1.0 and Life 2.0.

Conceptually this is a way of framing the conversation about the future of life, where entities can transcend biological boundaries and evolve at unprecedented rates. This presents both exciting opportunities and profound challenges for the future of humanity and our coexistence with advanced AI.

Tegmark defines "artificial" as non-biological, and for "intelligence" he provides a broad, task-agnostic definition:

"Intelligence is the ability to accomplish complex goals."

This definition is intentionally broad to encompass not just human intelligence but any potential form of intelligence, including artificial intelligence. By framing intelligence in terms of goal accomplishment rather than human-specific traits or tasks, Tegmark emphasises the adaptability and potential variety of intelligent entities. It underscores the idea that intelligence isn't restricted to the human way of thinking or problem-solving and that various entities (biological or artificial) can be intelligent in different ways, as long as they can achieve complex objectives.


Superintelligence

"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom (2014) delves into the potential future emergence of machine superintelligence and its implications for humanity. Bostrom examines possible paths to superintelligence, the existential risks associated with uncontrolled development, and strategies for ensuring its safe and beneficial use.

Bostrom defines "superintelligence" as:

"An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills."

Bostrom's concept of superintelligence goes beyond just an advanced machine that can outperform humans in specific tasks. Instead, he describes an entity that surpasses human capabilities in nearly every domain of interest, marking a profound leap in cognitive abilities that could radically transform or even dominate our world if not guided or controlled properly.

There is thus a differentiation between "narrow" (or "weak") AI and "general" (or "strong") AI. These distinctions are fundamental to the trajectory and implications of AI development.

Narrow (or Weak) AI: This refers to machines or algorithms that are designed and trained for a specific task. These systems are "intelligent" within their limited domain but lack the general problem-solving capabilities of humans. Examples include chess-playing programs, facial recognition systems, and even more sophisticated contemporary AI models that can perform specific tasks at or beyond human levels but do not have broad cognitive abilities.

General (or Strong) AI: This refers to machines that possess the ability to perform any intellectual task that a human being can do. They would have general problem-solving capabilities akin to human intelligence but, once achieved, could quickly lead to superintelligence due to the potential for self-improvement and optimisation at a pace and scale beyond human capacities. Note that the mechanics underpinning the transition from human-level intelligence to superintelligence are left somewhat vague.

Bostrom, however, emphasises that the switch from narrow AI (which we have now) to general AI (which we don't yet have) is significant. General AI poses different challenges and risks, especially if it advances to a superintelligent level. The crux of Bostrom's concerns in the book revolves around the potentially rapid transition from general AI to superintelligence and the immense importance of ensuring its alignment with human values before it achieves an autonomous capability to shape its objectives.

Human Compatible

"Human Compatible: Artificial Intelligence and the Problem of Control" by Stuart Russell (2019) posits that the current trajectory of AI development might lead to outcomes detrimental to humanity. Russell argues for a fundamental rethinking of AI's value alignment, emphasising the need to design systems that inherently respect and understand human values, ensuring their safe and beneficial integration into society.

Russell proposes a new framework for building safe AI systems. To make AI systems that are human-compatible, Russell emphasises the following core principles:

Uncertain Objectives: Instead of building AI systems that optimise a fixed objective (which might be misspecified or lead to unintended consequences), AI should be designed to be inherently uncertain about what the true objectives are. This uncertainty would make them more deferential to humans and more cautious about taking actions that might go against human values.

Beneficial Learning: AI systems should be designed to learn from humans what those objectives are. They should observe human behaviour, ask questions, and use other methods to refine their understanding of human values and goals over time.

Human Feedback: AI's actions and plans should be influenced by ongoing human feedback. Allowing humans to correct or adjust AI behaviour creates a safeguard mechanism that helps ensure the technology remains beneficial and under human control.

Russell's approach aims to shift the AI paradigm from systems that might inadvertently harm humanity by rigidly following potentially flawed objectives, to systems that are cautious, deferential to human values, and constantly learning and adjusting based on human feedback. This framework seeks to ensure that AI remains beneficial even as it becomes more capable and influential in various aspects of society.
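To make the first two principles a little more concrete, here is a toy sketch (my own illustration, not Russell's formalism; the objectives, feedback strings, and likelihood numbers are all made up): an agent that is uncertain about the human's true objective keeps a probability distribution over candidate objectives and updates it with Bayes' rule as human feedback arrives.

```python
# Toy illustration of "uncertain objectives" plus "beneficial learning":
# the agent never commits to a single objective. It holds a belief over
# candidate objectives and refines that belief from human feedback.

candidate_objectives = ["speed", "safety", "comfort"]  # hypothetical objectives

# Start maximally uncertain: a uniform belief over the human's true objective.
belief = {obj: 1 / len(candidate_objectives) for obj in candidate_objectives}

# Made-up likelihoods: how probable each piece of feedback would be
# if the human truly held each objective.
likelihood = {
    "slow down, please": {"speed": 0.1, "safety": 0.6, "comfort": 0.3},
    "we are late":       {"speed": 0.7, "safety": 0.1, "comfort": 0.2},
}

def update(belief, feedback):
    """Bayesian update of the belief after observing one piece of feedback."""
    posterior = {obj: p * likelihood[feedback][obj] for obj, p in belief.items()}
    total = sum(posterior.values())
    return {obj: p / total for obj, p in posterior.items()}

belief = update(belief, "slow down, please")
print(max(belief, key=belief.get))  # the agent now leans towards "safety"
```

Because the belief stays a distribution rather than a fixed goal, the agent has a built-in reason to defer: when no candidate objective clearly dominates, the cautious move is to ask the human rather than act.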

An honourable mention in relation to Human Compatible is The Alignment Problem by Brian Christian (2020). In The Alignment Problem, Christian poses an inquiry into the challenges of ensuring that artificial intelligence understands and respects human values. Through a combination of historical context, technical insights, and narratives from researchers, Christian explores the efforts and complexities of aligning machine learning systems with human intentions and ethics. In summary, while both authors share concerns about AI alignment and its implications, Russell offers a more prescriptive approach based on his expertise in AI research, whereas Christian provides a journalistic exploration of the landscape, bringing in diverse perspectives and narratives. I prefer Russell’s penmanship, hence the inclusion on my list, but given their similarities, if you do decide to read them in full, I leave it to you to choose your poison.

Robot Rules


"Robot Rules: Regulating Artificial Intelligence" by Jacob Turner (2018) explores the legal and ethical challenges posed by the rise of artificial intelligence and autonomous systems. Turner delves into the creation of a legal framework that holds AI and robots accountable for their actions, emphasising the need for new laws and regulations that can adapt to the unique characteristics and behaviours of these entities.

Turner doesn't actually provide a fixed list of "robot rules" per se; rather, he delves deeply into the challenges and considerations that should guide the creation of a regulatory framework for AI and robots. Some key concepts and recommendations from the book include:

Accountability for AI Actions: Turner discusses the need to establish clear accountability mechanisms for actions taken by AI systems, especially when they act autonomously.

Adaptable Laws: The laws should be adaptable and flexible to cater to the rapid evolution of AI technology.

Understanding AI Decision-Making: For effective regulation, there's a need to understand how AI makes decisions. This includes the challenges tied to the transparency and explainability of complex machine learning models.

Legal Personality for Robots: One of the more controversial ideas is whether advanced AI or robots should have some form of legal personality, similar to corporations. This doesn’t imply that robots have consciousness, but it would provide a legal framework for addressing issues of responsibility and rights.

Ethical Framework: Turner emphasises the importance of integrating ethics into the development and deployment of AI, ensuring that AI systems adhere to human values.

Safety and Control: Ensuring that AI systems are safe and that there are control mechanisms in place to prevent or mitigate harmful actions.

Addressing Biases: Recognising that AI systems can inherit or learn biases from the data they are trained on, and creating mechanisms to identify, address, and correct these biases.

Global Cooperation: Given the global nature of AI development and deployment, Turner advocates for international cooperation in establishing regulations and standards.

Turner's work is a deep dive into the multifaceted challenges of regulating AI, and it provides a foundation for understanding the complexities and nuances of creating an effective and responsive legal framework for this emerging technology. Maybe, when the EU AI Act is finally finalised, I will reread this book and do some comparative analysis, thus fulfilling the second line of Mauriac's quote.


In any case, by reading this far you have not only saved yourself several hours of actual book reading but, more importantly, you are now ready to engage other curious enthusiasts, perhaps even a dedicated researcher, or even an outright AI alarmist. Armed with this new knowledge, and opinionated dinner disputes notwithstanding, I think we can agree that in a time characterised by rapid technological advancements, AI stands out as a monumental triumph of human ingenuity. I hope that this curated guide, summarising some seminal works on AI, has improved your understanding of the transformative nature of this technology. Even better if you are now inspired to learn more about the intricate tapestry woven from algorithms, data, and applications that mimic cognitive functions. With that, dear reader, I hope that we can at some point engage in a discussion about a world where machines can think, learn, and perhaps one day, dream – or not…