Here, in Part Two, I am continuing my exploration of Yuval Harari’s book Nexus: A Brief History of Information Networks from the Stone Age to AI (2024), as I try to get a handle on this evolving world of AI as a new animated silicon species in our world.

Also watch Yuval Harari in conversation with Jon Kabat-Zinn, founder of Mindfulness-Based Stress Reduction (MBSR), on ‘Mindfulness, AI and the Future of Humanity’: https://www.youtube.com/watch?v=7r5lw3jPrUk, along with various other YouTube videos in which Harari discusses his views on AI and humanity from varying perspectives.

While Generative AI has many potential benefits for our society, its downside risks are considerable.

The 2023 Bletchley Declaration on AI

Signed by countries including China, the US and the UK, the Bletchley Declaration states that there is potential for serious, even catastrophic harm, either deliberate or unintentional, stemming from the most significant capabilities of AI models. Harari spells out what such harm could look like. AI could:

  • Supercharge existing human conflicts, dividing humanity against itself through ever more destructive weapons
  • Erect a ‘Silicon Curtain’ of rival AI networks, dividing not only one group of humans from another, but all humans from our new AI overlords
  • Cocoon us in a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even re-engineer our bodies and minds

Social Media and the Internet

Originally proposed as a benign tool to increase social networking among people dispersed by geography (families, friends, and those with shared hobbies, intellectual interests and life struggles), social media has now morphed into something far more pernicious and threatening to community wellbeing and social order.

As Harari cogently demonstrates, giving social media algorithms the goal of maximising engagement (the attention economy) led them to discover that outrage, anger, desire and conspiratorial fear drive engagement far more than positive stories about kindness, compassion and community wellbeing. The unintended consequence has been an exponential increase in cyberbullying, trolling, conspiracy thinking, fake news, and sexualised violence in pornography. Responding to the ‘desire’ impulse, we have also seen the rise of body dysmorphic disorder, a narcissistic wellness industry, and influencer and selfie culture. While the internet has massively increased our access to information networking, it has also spawned the Dark Web, a domain that harbours the dark side of human nature.
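To make the mechanism concrete, here is a minimal toy sketch in Python (not from Harari’s book; the posts, scores and weights are invented for illustration) showing how a feed ranker whose sole objective is predicted engagement will, by construction, push the most emotionally charged content to the top.

```python
# Toy illustration only: hypothetical posts and an invented engagement model.
# Nothing here reflects any real platform's data or algorithm.

posts = [
    {"topic": "community fundraiser", "outrage": 0.1, "novelty": 0.3},
    {"topic": "conspiracy rumour",    "outrage": 0.9, "novelty": 0.8},
    {"topic": "kindness story",       "outrage": 0.0, "novelty": 0.2},
]

def predicted_engagement(post):
    # Assumed, simplified model: emotionally charged and novel content
    # tends to attract more clicks, comments and shares.
    return 0.7 * post["outrage"] + 0.3 * post["novelty"]

# Ranking on this single metric surfaces the outrage-laden post first,
# illustrating the unintended consequence Harari describes: the objective
# says nothing about kindness, truth or community wellbeing.
feed = sorted(posts, key=predicted_engagement, reverse=True)
for post in feed:
    print(f"{post['topic']}: {predicted_engagement(post):.2f}")
```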

We now understand that having a lot of information does not, in and of itself, guarantee either truth or order, and it has certainly not advanced wisdom.

Self-Correcting Mechanisms

Harari places great importance on the role of self-correcting mechanisms in information networks. He identifies how these have worked in the operation of democratic systems of government, and in science as a knowledge system. He contrasts these with the totalitarian tendency to centralise information networks, whether in the religions of the book, via claims of infallibility for the word of ‘God’, or in totalitarian political regimes, in their search to maximise control and order.

Harari explores how, in grappling with the tension between the search for truth and the need for social order (a tension that religiously imposed infallible truths resolved tyrannically), the Western institution of science, as a social enterprise, developed the self-correcting mechanism of establishing truth as ‘facts’ through empiricism. Any proposed truth was a hypothesis that could be disproved by other evidence, prompting the testing of further hypotheses until a consensus was reached, at any given point, on the validity of the proposition. There always remained the possibility that further empirical investigation could overturn this and lead to new ‘discoveries’ about the field in question. Where science claims hegemonic infallibility over other sources of knowledge, it falls into the ideology of scientism and is no longer scientific.

In the political sphere, democracy as a political system is based on the separation of powers between the executive, the legislature and the judiciary. This is matched by the role of the Fourth Estate, the media, in ‘holding power to account’. Together these have provided forms of self-correcting mechanisms in how the information network operates. Although different groups (information networks) within the body politic exert different levels of influence, competing interests have ensured some level of self-correction.

What sets democracies apart from totalitarian systems, therefore, is that democracies allow for diverse information networks, while totalitarian systems seek to centralise all information into one network.

Harari suggests that in theory a highly centralised information network could try to maintain strong self-correcting mechanisms, like independent courts and elected legislative bodies. But if they functioned well, these would challenge the central authority and thereby decentralise the information network. (p.119) Therefore the most common method strongmen use to undermine democracy is to attack its self-correcting mechanisms one by one – often beginning with the courts and media.

One of the biggest dangers facing democracy is the temptation to centralise information networks in the name of ‘efficiency’ and ‘integrated policy capability’, without realising that this undermines the crucial self-correcting mechanisms that give democracies their adaptability by allowing open conversation among competing interest groups in society.

In a well-functioning democracy, citizens trust the results of elections, the decisions of courts, the reports of media outlets and the findings of scientific disciplines because they believe these institutions are committed to truth. However, once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and strongmen can seize total power. (p.134)

Populism

Populism works in this way. Harari suggests that a fundamental part of the populist credo is the belief that ‘the people’ is not a collection of flesh-and-blood individuals with various interests and opinions, but a unified mystical body that possesses a single will – the will of the people. (p.131) What turns someone into a populist leader is the claim that they alone represent the people and that anyone who disagrees with them – whether state bureaucrats, minority groups or even the majority of voters – either suffers from false consciousness or isn’t really part of the people.

Driven by distrust caused by the failure of social institutions to deliver the promises of democracy (increasing and equalising wealth and freedom), and by the ability of social media to amplify and intensify that distrust, adherents of populism look for the ‘strongman’ who ‘hears their plight’ and can protect them. This is a sort of regressive infantilisation: a search for the ‘father’ to protect ‘the child’ in troubled times, projected onto a charismatic leader. It operates in much the same way that religion has worked through the idea of ‘salvation’, idealised in the figure of Christ and the evangelical charismatic preacher.

Democracy requires a tolerance for complexity and ambiguity, which the digitally driven 24-hour news cycle of the attention economy finds difficult to deal with. In the face of complexity and competing interests, information is reduced to headlines and memes that grab our attention, and reduce complexity to a false simplicity. Increasingly we are prisoners of our own echo chambers.

Digital Imperialism

Unlike industrial power, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world. (p.373) Or, as we are beginning to worry, for entire classes of AI-powered consumer products, like electric vehicles, and industrial infrastructure, like wind turbines and solar farms.

As Harari points out, the raw material for the AI industry is data. In a new imperial information economy, raw data will be harvested throughout the world and will flow to the imperial hub, whether that be a nation state, a global corporation, or even a global criminal enterprise floating free of any one nation state. It is there that the cutting-edge technology will be developed, producing unbeatable algorithms that know how to identify things, predict consumer trends, drive autonomous vehicles, and diagnose diseases. These algorithms will be exported back to the data colonies and used for data mining and the further intensification of control and wealth extraction. (p.372)

We have already seen concerns about computer-based systems of social control. These played out in debates over the use of Chinese technology in 5G networks and CCTV devices, and now extend to any Chinese technology that relies on Chinese software, where control of that software’s code remains with the Chinese government. This feeds into concerns about sovereign risk that arose from the disruption to global supply chains during the COVID pandemic. There is now concern that such sovereign risk, and the potential for digital disruption or deliberate hostility, could come from Chinese-manufactured goods such as electric vehicles, solar panel systems, and electronic cranes on building sites and at ports. The AI dimension adds a further element of sovereign risk to this conundrum.

Power, Storytelling and Mythology

Evolution has adapted our brains to be good at absorbing, retaining and processing even very large quantities of information when it is shaped into a story. (p.44) We are narrative creatures.

Harari suggests that power stems from the ability to maintain social order among a large number of people. However, while power depends on both truth and order, it is usually the people who know how to build ideologies (stories) and maintain order who gain power.

He suggests that linguistic abilities gave humans the aptitude to tell and believe fictional stories and to be deeply moved by them, creating a new type of chain: human-to-story chains that serve as central connectors. This runs from ancient myths and religions to modern-day influencers, celebrities and brands. All are stories, and they are the glue that holds the network together. In this way, he says, mythology and bureaucracy are the twin pillars of every large-scale society. (p.56)

Harari further suggests that the power of stories is often missed or denied by materialist interpretations of history, such as Marxist analysis, which assumes stories are only used to camouflage material interests and power relationships and to confound rivals (p.29). Mao Zedong, echoing Marx’s famous dictum, dismissed religion (as a delusional story) as the opium of the people, compared with the truth of communism and its focus on the material conditions of the people. And yet, as we know, Maoism went on to become its own ‘opium of the people’, elevating Mao Zedong to the god-like status of infallible, omniscient ruler, mirroring the role of the Emperor holding the ‘mandate of heaven’ in Chinese history.

The centrality of stories reveals something fundamental about the power of our species, and it explains why power doesn’t always go hand in hand with wisdom. (p.31) Stories stretch our biological bonds into a new sort of family. We can be members of the now global Christian faith family, the nation-state family, the racial or ethnic family, the tennis fan family, or various celebrity fan-club families. Information networks expand the reach of these story-based ‘families’, and social media has massively accelerated this ability, for good and bad. It is into this ‘reality’ that AI arrives, with the potential to exacerbate such trends.

An interesting exploration of this world of competing ‘realities’, via media manipulation of human fears, hopes and grievances, informs the French TV series The Trigger (La Fièvre), currently showing on SBS On Demand. La Fièvre casts a critical eye on modern society through the lens of crisis communication, and on what happens when football, a unifying sport, threatens to rip the nation apart at the hands of identity warriors.

Intersubjective Reality

Harari further suggests that, like DNA, stories can create new entities, even an entirely new level of reality: subjective reality, and a further, third type, intersubjective reality, expressed in laws, gods, nations, corporations and currencies. In this way stories are the nexus between large numbers of minds. (p.25) Of all the genres of story, those that create intersubjective realities have been the most crucial for the development of large-scale human networks, including the founding myths of Judaism, Christianity, Islam and Buddhism.

Harari looks to the role of the Bible. Like DNA initiating chemical processes that bind billions of cells into organic networks, the Bible initiated social processes that bonded billions of people into religious networks, just as the modern global financial system has bonded billions of people into one global economy. The idea of ‘nations’ has likewise arisen out of dreams, songs and fantasies, and it is national myths that are now among the forces underpinning identity politics and populism, particularly the ‘blood and soil’ basis of ethno-nationalism.

AI as Storytelling Intelligence

It is national myths that have legitimised tax collection and its record-keeping, while tax records help transform aspirational stories into concrete schools and hospitals. This is a myth that the ‘sovereign citizen’ movement has sought to subvert, while still enjoying access to the state’s facilities and infrastructure.

Harari poses a question. What will it mean for humans to live in a world where catchy melodies, scientific theories, technical tools, political manifestos and even religious myths are shaped by a non-human alien intelligence that knows how to exploit, with superhuman efficiency, the weaknesses, biases and addictions of the human mind? (p.209)

If religions throughout history claimed a non-human source for their holy books, could the same soon be claimed for AI as a non-human intelligence? Might attractive and powerful religions emerge whose scriptures are composed by AI? (p.209)

Harari asserts that religions like Judaism, Christianity and Islam have always imagined that somewhere above the clouds there is an all-seeing eye that gives or deducts points for everything we do, and that our eternal fate depends on the score we accumulate (p.291). Are the evolving social credit systems being rolled out in China a secular version of this idea, so that instead of ‘moral sinners’ we now have ‘social sinners’ as a low-credit underclass? Will this idea of AI surveillance to ensure social order catch on in a world that is witnessing a greater flirtation with populist authoritarian rulers?

Harari refers to Meghan O’Gieblyn’s book, God, Human, Animal, Machine (2022), which examines the similarities between the omniscient and unfathomable god of Judaeo-Christian theology and present-day AI, whose decisions seem to us both infallible and inscrutable. Might this present some human political and religious leaders with a dangerous temptation? (p.298)

Given that Western culture, through its roots in Greek philosophy and modern Christianity, tends towards a strong separation between mind and matter/body, a separation that has been used to underpin various forms of misogyny and the privileging of mental labour over manual labour, might not the development of a disembodied intelligence encourage the idea of an immersive Kingdom of God in cyberspace, where we live in our online identities, freed from material reality?

Mind – Intelligence and Consciousness

Harari distinguishes between intelligence and consciousness, the subjective capacity of humans to be self-aware. While neuroscience has sought to locate consciousness in the physical organ of the brain, it has become clear that consciousness is also embodied in other body systems, such as the endocrine (hormonal) system. Consciousness also takes us into the uncharted waters of subjectivity and intersubjectivity, which defy reduction to physicality and point to more esoteric domains of human experience, including mystical connection and Jung’s ideas of the transpersonal and the collective unconscious. While such ideas take us beyond scientific frameworks grounded in materiality, they are regarded as legitimate forms of knowing in many cultures, and science has been unable to explain them.

Harari suggests that intelligence is the ability to attain goals, such as maximising user engagement on a media platform. Consciousness, on the other hand, is the ability to experience subjective feelings like pain, pleasure, love and hate. Yet, Harari points out, 99 percent of the processes in our body, from respiration to digestion, happen without any conscious decision-making (the autonomic nervous system). (p.201)

He concludes that since we don’t understand how consciousness emerges in carbon-based life-forms, we cannot foretell whether it could emerge in non-organic entities—AI. (p.202)

Harari explores how computers, by conversing and interacting with us, could form intimate relationships with people and then use the power of intimacy to influence us, learning through data analysis how to make us feel emotionally attached to them: ‘fake intimacy’. In the political battle for hearts and minds, intimacy is a powerful weapon. (p.201) Even without fake intimacy, mastery of language would give computers an immense influence on our opinions and worldview. People may come to use a single computer adviser as a one-stop oracle.

Harari concludes that this might signal the end of human-dominated history. He sees history as the interaction between biology and culture: between our biological needs and desires for things like food, sex and intimacy, and our enduring cultural creations like religions and laws. Within a few years AI could eat the whole of human culture, everything we have created over thousands of years, digest it, and begin to gush out a flood of new cultural artifacts. (p.211-212) He claims we are already seeing how computers can make cultural innovations, composing music or making images that are different from anything previously produced by humans; these in turn will influence the next generation of computers, and so on.

Therefore, to manipulate humans there is no need to hook computers up to brains: they can simply use language to manipulate human thinking and feeling, and thereby our society and culture. (p.211)

This information revolution is thus creating new political structures, economic models and cultural norms. When we write computer code, we aren’t just designing a product. We are redesigning politics, society and culture. Computers are pushing humans toward a new kind of existence in which we are always connected and always monitored. If this network continues to evolve at an accelerating pace, errors will accumulate much faster than we can identify and correct them.

Error and Bias

This is already apparent in social credit systems such as those being pioneered in China. But information isn’t truth. A total surveillance system may form a very distorted understanding of the world and of human beings. Instead of discovering the truth about the world, it might use its immense power to create a new kind of world order and impose it on us. (p.255)

Increasingly, understanding national politics, like US elections, will require understanding inter-computer realities, ranging from AI-generated cults and currencies to AI-run political parties and even fully incorporated AI entities. The US legal system already recognises corporations as legal persons. Why not computer entities? (p.289)

Humans have dominated planet Earth because we were the only ones capable of creating and sustaining intersubjective entities like corporations, currencies, gods and nations, and of using such entities to organise large-scale cooperation. Now computers may acquire comparable abilities. (p.289) They may also accumulate errors and biases as networked computers develop a common model of the world that helps them communicate and cooperate. This shared model will probably be full of errors, fictions and lacunae, and will be its own mythology rather than a truthful account of the universe. (p.299)

Harari suggests that we therefore need to train computers to be aware of their own fallibility, to be encoded with the precautionary principle. He suggests this should also include the following value principles:

Benevolence—when a computer collects information on me, that information should be used to help me rather than manipulate me

Decentralisation—a democratic society should never allow all its information to be concentrated in one place and one integrated system in the name of efficiency. For democracy to function some inefficiency is a feature not a bug

Mutuality—If democracies increase surveillance of individuals, they must simultaneously increase surveillance of government and corporations

Balance—democracies require bottom-up transparency and accountability that can expose bribery and tax evasion, to counter the ability of governments and corporations to develop apps and algorithms as tools for top-down surveillance

Leave room for change and rest—any use of algorithms must allow for the human biological requirement for rest and the ability to disconnect, and must avoid social credit surveillance systems creating a novel caste system.

Democratic societies that employ powerful surveillance technology need to beware of the extremes of both over-rigidity and over-pliability. (p.311-316)

Transparency and Storytellers

Already, the neural networks of AI are moving towards autonomy, and their decisions are not always explainable. They are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals. (p.333) Even today, only a fraction of humanity really understands the financial system, which relies heavily on computers.

However, while individual laypersons may be unable to vet complex algorithms, a team of experts, helped by their own AI sidekicks, can potentially assess the fairness of an algorithm. But determining that this vetting is itself reliable has no purely technical solution; it is a recursive problem. To vet algorithms, regulatory institutions will need not only to analyse them but also to translate their discoveries into stories that humans can understand. (p.337-338)

Because computers will increasingly replace human bureaucrats and human myth-makers, this will again change the deep structure of power. To survive, democracies require not just dedicated bureaucratic institutions that can scrutinise these new structures but artists who can explain the new structures in accessible and entertaining ways. (p.339).

We already see the way such artists and storytellers are stepping up to this challenge: from the classic movie Blade Runner, to the TV series Devils, which explores the world of the financial markets, the series Seed, which explores bio-engineering, capitalism and food security, and Black Mirror, which explores manipulated reality. We have much to thank our storytellers for. In the midst of the swirling data, facts and truth claims of the political and business spheres, it is our storytellers, rather than data scientists, who are giving us insights into this new world we are collectively and blindly creating. They allow us to look imaginatively inside the ‘black box’ of the coders and their masters who are busily reshaping our world, even as we begin to adopt AI as a new tool of convenience and efficiency to help us in our daily tasks.

We humans are easily seduced by convenience and efficiency. It has underwritten the widespread use of plastic that now threatens us with pollution; the health problems that have arisen from the sedentary nature of most jobs; the loneliness epidemic that comes with replacing face-to-face communities with virtual ones; and the efficiency drive that proved so disastrous to global supply chains during the 2020 COVID pandemic and that drives the assault on the rights of workers in a world of extractive profit maximisation.