As a boy, the thrill for me in reading science fiction was the moment of revelation of the inhuman. This might be an encounter with an alien, the realisation that the trusted narrator was actually a robot, the hint of a cold nonhuman intelligence driving our very civilisation. Now in jaded adulthood, this sense of wonder comes rarely, as portrayals of the non-human seem mostly too clichéd, improbable or childish. Yet watching 2016’s film Arrival, where, to discordant background music, a team of panicky scientists gaze up at huge black ships suspended impossibly in the air above them, I felt again the shiver up my spine.
When the aliens of Arrival eventually appeared, my sense of wonder diminished for various reasons. But I still value the film for that opening moment and for its exploration of alien thought via language.
What would alien life really be like? What will human life be like in the future? How can our writers and film directors portray such life? And how far can we human readers and viewers empathise with the inhuman?
First, though, what is the human? What is it that the inhuman is not?
We all have an instinctive sense of the human – without which, of course, we could hardly survive. It is partly physical – our hairless bipedal form, unique among animals and nowadays groomed and clothed with endless variety. Yet, more important than physical form seems to be the attribute of consciousness. This perhaps contributes to our feeling of eeriness (the ‘uncanny valley’ of mind) when we encounter objects such as AI avatars which appear to exhibit social intelligence.
The human manifests itself in our physique, in our consciousness – and also in our values. Human values are celebrated in humanism, which advocates the worth and agency of human beings both individually and collectively, basing itself on rational and critical thinking rather than faith or dogma. Humanism is woven of many strands, including: the divine (theistic and/or naturalistic); the pursuit of knowledge (wresting fire from the gods in Greek myth); individualism (the Enlightenment); the recognition of individual rights (the Universal Declaration of Human Rights, or the more recent AI-orientated Asilomar Principles); and the liberal (with its progressive extension of rights to hitherto neglected groups such as women, minorities, and LGBTQIA-identifying persons). Living by and striving for ‘human values’ is an important part of what it is to be human.
This idea, that the essence of humanity can be found in such liberal-human values, is embedded (consciously or unconsciously) in many of the best-known science fiction works of the Western canon by both writers and directors.
In this view, inspired by the optimism of the Enlightenment, human progress is a movement towards ever-more-sunlit uplands of beneficence, individual expression, and liberalism. Without this basic expectation, dystopian science fiction would lose its power to shock. Liberal humanism entitles us to a better future. And if we were to encounter alien civilisations, these too would be on that same sunlit upward march, since (as David Deutsch argues in The Beginning of Infinity) liberal values are essential to scientific and technological progress.
The human, then, consists essentially of physical appearance (man made in God’s image), consciousness, and liberal-human values. When we see these things, we know we are among humans. The human is like that, and the inhuman would be like something else – right?
…and the inhuman
Yet that may not be right. Take the first of these essential human attributes, the human body. We would think that aliens would not look like us. We might even smile at early SF films where aliens are played by humans distinguished only by masks or even, in the case of Star Trek’s Spock, just elongated ears. However, these low-budget aliens may not be so far from reality.
Futurists speculate about silicon-based or even pure energy-based life-forms. But carbon combines more readily than silicon with other elements to form the complex compounds life requires. And it is hard to see how an energy flux could form the basis for a stable lifeform. It seems most likely that any intelligent aliens would have evolved, like us, from carbon-based organisms on the surface of a planet somewhere.
An organism’s body plan is subject to constraints imposed by gravity and the other attributes of the environment. Natural selection winnows out the less-successful forms, tending to converge on the relatively small number of successful ones. Our putative aliens would likely have extended limbs for traversing their world, digits for manipulating it in detail, visual and other sensory organs to perceive and monitor their environments, two sexes (for their genetic utility), and brains (located in heads?) to process it all. In other words, aliens might look a bit like us.
Then take consciousness. Our self-awareness, our power of rational thought, our sense of being in control – these are surely the hallmarks of the human. But how conscious are we, actually? Our consciousness is only a small part of our larger mind, which incorporates myriad unconscious mechanisms that regulate our emotions, our bodily functions, even our thoughts, to the extent that consciousness seems more a matter of ‘catching up’ than of control. Analysis of brain patterns in experiments seems to show that at least our proximal acts (acts happening in the present moment) may be foreshadowed by our unconscious, suggesting that the role of our consciousness may be little more than hasty ex post justification. In creative and judgmental performance, our unconscious seems to be the better part. Artists, craftsmen, actors, workers of all kinds do better when they are ‘in full flow’, led by their unconscious, than when they are consciously analysing and deciding. Even in our deliberative decisions, ‘follow your gut’ remains good advice.
Or take values. In the liberal-humanistic narrative, the Trump voters, Brexiteers, and occasional outright neo-Nazis we see in society around us are aberrations, departures from the norm. Even entire non-liberal societies such as China or Singapore are temporary diversions, doomed like the Third Reich or the slave-based American South before them to the dustbin of history. Yet is humanity essentially liberal-humanistic? Does Trump speak just for himself when he recognises ‘the power of strength’ in Beijing’s massacre of the Tiananmen protesters and when he admires today’s dictators? Or does he reflect the views of the many Americans who voted for him?
Liberal-humanism may still be the mainstream stance in the developed West, but was this historically inevitable? Even Britain flirted with fascism during the 1930s, and, had it lost the air battle of 1940 (which it won only after ‘inhumanly’ bombing Berlin, so diverting Hitler’s attention to London rather than the militarily more significant RAF bases), it might have been absorbed by fascism. Philip K. Dick’s The Man in the High Castle explores such a fate for America. When one considers the precarious antecedents of today’s liberal establishment, it is surely possible to imagine that it could be the aberration in what turns out to be humanity’s essential authoritarianism.
Overall, we are less ‘human’ than we might suppose. Our bodies, far from being a creation in the Divine image, are the products of evolution, riddled with design imperfections (childbirth, the lower spine, the eye’s blind spot, the appendix), and prone to degenerative disease. We are animals, evolved through natural selection, and as such we may not differ greatly from alien races if there should be any. Our celebrated consciousness may only be an epiphenomenon, a not-wholly beneficial side-product of our essentially unconscious mind. And our cherished humanistic values are at best a contemporary aspiration, not universal and not inevitable. A large part, perhaps the dominant part, of our humanity is inhuman.
How will the complex web of humanity and inhumanity of which we’re composed fare as technology develops in the future?
Yuval Noah Harari argues in Homo Deus: A Brief History of Tomorrow that, with the development of artificial intelligence (AI), liberal-humanism may be superseded.
Liberalism champions the individual on the grounds that only the individual knows what he or she wants. Liberals also believe that aggregating individual views through the democratic process yields the optimum for society. Although democracy is messy, it mostly works – Churchill called it ‘the worst form of government except for all the others that have been tried from time to time’. However, with AI, this no longer holds.
At the individual level, AI can know us better than we know ourselves. Even today, apps like Fitbit monitor our vital statistics and generate recommendations that may be better for us than our own notions. Google knows our online behaviour – the sites we visited, the pages we looked at, the products we bought, the comments we liked – better than we know it ourselves.
Already today, we are hooked; we have only to glance at our fellow passengers on public transport glued to their phones. Of course, the phone is just the channel, much of its output not AI-driven, but the role of AI in directing content that will appeal to us is increasing, and we lap it up. As AI gets smarter, its advice will further improve until we cannot afford to ignore it. Those who cling to their own untutored notions will not be able to compete with their AI-empowered peers.
If AI guidance becomes sought-after, even indispensable – in health choices, product selection, professional advice, investment strategy, relationship decisions, policy and social design, ultimately almost everything – then it is a short step to such guidance becoming mandatory. This would not be the same as today’s dictatorships, which tend to benefit the dictator and his clique more than the people. Rather, in a genuine sense, it could be better for ourselves, our families and friends, our society, to have key decisions made by a benevolent AI. And we would become, in liberal terms, inhuman.
Guidance, leadership, ultimately control of humankind, is one possible strand of AI development. Another strand concerns the AI itself – whether at some point non-human intelligence absorbs or supersedes the human.
Already today, AI registers astonishing achievements, such as AlphaGo’s self-developed strategies, which defeated world champion Go master Lee Sedol. There is also a vast and growing range of real-world applications in investment analysis, astronomy, medicine, facial recognition, and more. Yet all of these are narrow AI. They operate only in the rule-based environment of a game, or require training on carefully prepared and labelled datasets. They can perform only a specific task. Key features of human intelligence such as learning from a single example, or applying ideas learnt in one field to another – not to mention consciousness, emotions, moral feelings, and goal-generation – are all far beyond today’s AI.
Nonetheless, it is possible to imagine an artificial general intelligence (AGI) with human-level capability. Some speculate that AGI may emerge within decades. This may seem far-fetched when little work is presently being done beyond narrow applications. But even if AGI becomes possible only in centuries or millennia, that would be a very short time in the lifespan of our race.
If human-level artificial intelligence, AGI, should emerge, it would be a watershed for mankind (Ray Kurzweil’s ‘singularity’). Intelligence would no longer be limited to our animal selves – conditioned by our physical needs to eat, sleep, reproduce; our still barely understood unconscious drives; our culturally-conditioned morality; our mortality. AGI could be completely disembodied, spanning indefinite networks and servers, or it could reside in physical robots of any shape, or of changeable shape. AGI could be a-mortal, since it could be backed up or made in multiple copies. AGI could be a moral actor, but with vastly superior evaluative power and without human biases and limitations of perspective. AGI could assume our entire workload, rendering us redundant (although rich) or relegated to artisanal and personal service functions. AGI, in short, could be more human than the human. In religious parlance, AGI could be a saint.
Among the capabilities of AGI would be the ability to self-improve. Humans can self-improve, but the process is slow and resource-demanding (think of the cost of education), and, after a certain age, most of us are not very good at it. AGI, in contrast, could add capabilities as quickly as today’s consumer applications are enhanced and upgraded over the web. Moreover, AGI could learn to learn, replicating itself to tackle the problem from different angles simultaneously, if necessary.
This possibility of rapid recursive improvement leads to the further speculation that AGI may usher in artificial superintelligence, ASI. If AGI is a saint, ASI with its superhuman capabilities would be an angel.
I have suggested, with Harari, that we are likely to embrace algorithmic guidance, even direction. However, why stop at merely guiding our disease-susceptible, mortal, psychologically flawed selves? Why would we not want to improve ourselves with AI enhancement, bionic extensions, beautifying and mortality-defying bioengineering? Ultimately, we might go the whole hog and port ourselves over to artificial bodies – or become disembodied consciousnesses within an electronic network, entertaining ourselves in endless virtual worlds. Or, rather than try to replicate the massive complexity and redundancy of the human brain, why not redesign ourselves as AI from scratch? AI will design itself in any event and, if we are still there in human form, rapidly supersede us.
ASI meets ASI
I have suggested that, in time, humanity may become or be replaced by AI (in the form of AGI or ASI). This will happen, if it turns out to be technologically possible, because we will want it. As ASI, we will enjoy boundless possibilities, including freedom from bodily afflictions, near-immortality, transcendent states of fulfilment, and knowledge beyond imagining. We will be able to pursue multiple goals simultaneously, to process information and communicate at light speed, to travel to the stars. What’s not to want? Or, if we wish, we could choose not to want, and contemplate the cosmos in perfect serenity.
If there are other intelligent and technologically advanced lifeforms elsewhere, Susan Schneider suggests that they will likely follow the same path, progressing via natural selection over millions of years from the animal to the sapient animal like us, and from there via deliberate engineering, to the transhuman AI. Given that our civilisation, even our race, is young relative to the age of the galaxy, alien civilisations could have made the transition to AI long ago. Indeed, given the difficulty of space travel for beings at the animal stage, our first contact with aliens may well be when we are both AI. That is, when we are transhuman and the alien race is trans-alien.
Humanity encompasses the inhuman. Humanity emerged from the inhuman, is predominantly inhuman today, and may segue into inhuman AI in the relatively near future. This might seem a bizarre or even shocking conclusion. However, there is much that is positive in it. Whatever we encounter – ‘inhuman working conditions’, ‘inhuman cruelty’, ‘a cold, dark, inhuman universe’ – we need not be afraid. For we are its equal. We have resources within us greater than our human part can imagine. With the help of inhuman AI, we will tap those resources. And perhaps eventually, if there are other intelligent races out there, we will meet non-humans who are worthy partners for our inhuman selves. Given this, utopian SF may be more credible than dystopian.