Chapter

Cultivating Intelligence

AI Ethics and Mou Zongsan

EXCERPT

Norbert Wiener, reflecting on the cultural consequences of the new science of communication and control, warned that the literal-mindedness of cybernetic machines was analogous to the demonic dangers of magic. As a consequence, ‘the reprobation attaching in former ages to the sin of sorcery’, he wrote, ‘attaches now in many minds to the speculations of cybernetics’. Wiener illustrates this by referring to Goethe’s poem ‘The Sorcerer’s Apprentice’, in which an inexperienced wizard enchants a magical broom so that it can assist him with his chores. The young sorcerer sets the broom to work, confident that he can relax as he now has a tool that will obey all his commands. He is driven into a frenzied panic, however, when the automated cleaner, for which a room is never clean enough, starts drowning the house in streams of water. Wiener sees in cybernetics a manifestation of the same ominous idea that magic ‘grants you what you ask for, not what you should have asked for or what you intend’. He elaborates by turning to the classic horror fable ‘The Monkey’s Paw’, in which a poor family is granted three wishes. They first wish for money, which comes in the form of insurance for their son who has tragically died. They next wish for their son to return, but when he comes home, it is only to haunt them as a ghost. Terrified, they use their final wish to banish their lost son’s phantom. Cybernetic machines, warns Wiener, operate in just the same unthinking way: ‘Set a playing machine to play for victory, you will get victory [without] the slightest attention to any consideration except victory according to the rules.’

This fear of the magical power of machines has resurfaced in the contemporary discourse of AI ethics. Wiener’s old warnings have been rearticulated as the doctrine known as the ‘orthogonality thesis’, along with its associated problem of ‘value alignment’. The best-known proponent of this idea is the philosopher Nick Bostrom, who until 2024 headed the Future of Humanity Institute at Oxford, a centre dedicated to the study of existential risk. The orthogonality thesis states that instrumental rationality and objective rationality are not intrinsically united. Instead, writes Bostrom, there is an orthogonal relationship between intelligence and final goals (purposes), such that ‘more or less any level of intelligence could be combined with more or less any final goal’.

This fundamental disconnect makes it especially urgent that we remain vigilant against anthropocentrism when considering the thoughts and actions of any future AI.