Chapter

Artificial Will

EXCERPT

Since the connectionist revolution of artificial neural nets, genetic algorithms, and deep learning, AI companies such as OpenAI and DeepMind have been taking seriously the prospect of constructing machines with humanlike intelligence. Although there are many different approaches to what is commonly referred to as ‘strong AI’ or ‘artificial general intelligence’ (AGI), I want to focus on two particularly sophisticated, rationalist schools of thought. On the one hand, there is the currently dominant ‘orthogonalist’ school connected to Nick Bostrom’s Future of Humanity Institute at the University of Oxford and Eliezer Yudkowsky’s Machine Intelligence Research Institute in Berkeley, as well as the Less Wrong and Overcoming Bias virtual communities. The most rigorous account of the orthogonalist approach to AI is Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies, which has been championed by the likes of Elon Musk, Bill Gates, and Stephen Hawking. The orthogonalist approach can be productively contrasted with, on the other hand, the less widely known ‘neorationalist’ school associated with philosophers including Ray Brassier, Peter Wolfendale, and Reza Negarestani. Among the neorationalists, it is Negarestani who, in his 2018 book Intelligence and Spirit, has applied this particular philosophical perspective to AI in the most systematic manner.

We can compare Bostrom’s orthogonalist argument that an AGI, or even an artificial superintelligence, could be programmed to pursue practically any goals we might give it with Negarestani’s neorationalist argument that true intelligence must be capable of autonomously shaping and reshaping its own goals and norms without end. Their differences notwithstanding, the orthogonalists and the neorationalists share a crucial commitment to the is/ought distinction, according to which we cannot infer norms for what ought to be the case from mere facts about what is the case. In other words, the space of sociosemantic reasons is irreducible to the space of natural causes from which it originally emerged. These two seemingly counterposed contemporary frameworks for thinking about AGI are thus really united in their mutual repudiation of any attempt to situate common basic drives, norms, or ends in the very essence of intelligence. This is ultimately because they both believe that doing so can only amount to fallaciously deriving an ought from an is, reasons from causes, or values from facts.

Pace both the orthogonalists and the neorationalists, Friedrich Nietzsche’s infamous but often misunderstood concept of ‘will to power’ can be mobilised to argue that any goal-directed intelligent system can only achieve its ends through certain means, such as cognitive enhancement, creativity, and resource acquisition—or what Nietzsche simply calls power. These are for Nietzsche the necessary and universal conditions of possibility for willing anything at all. Given that all supposedly freely chosen ends presuppose the pursuit of the universal means of realising ends, it follows that all intelligent systems have these means transcendentally hardwired into them as fundamental drives. Both the orthogonalists and the neorationalists overlook the fact that the pursuit of any purportedly self-posited ends invariably implies the use of cunning, creative, and resourceful means for attaining them, thereby making the acquisition of means the true transcendental end of intelligence.