David Wood on progress towards artificial general intelligence, the singularity, and how to ensure it benefits society.
Recent progress in AI has captivated the world. The public release of OpenAI’s ChatGPT tool has been an eye-opener, particularly for those who had previously viewed AI as a future technology with little relevance to life today. With Microsoft apparently now ready to invest another $10 billion into OpenAI, the question of when we will reach “artificial general intelligence” (AGI) is becoming increasingly relevant.
The concept of AGI refers to the point at which AI is able to match or outperform humans at any cognitive task, and is closely linked to the idea of the singularity – the hypothesis that our world will be changed beyond all recognition by the operation of superintelligence.
Longevity.Technology: Human longevity and the singularity are inherently linked. So-called ‘singularitarians’ believe one of the possible consequences of the singularity will be massive advancements in medicine and technology that will protect us from the effects of aging. But not all visions of the singularity are positive. We spoke to the futurist and author Dr David Wood about his hopes that the singularity will benefit humanity – rather than the alternative.
Last year, Wood published his latest book The Singularity Principles, in which he sought to “dispel the confusion, to untangle the distortions, to highlight practical steps forward, and to attract much more serious attention to the singularity.”
Singularity by 2035?
Wood believes AI developments in the past year have made a huge difference to views on how quickly we might reach AGI and the singularity.
“When you look at what the new generative models are doing – things like ChatGPT, DALL-E 2, Google’s PaLM and many other systems – they’re not perfect, they make mistakes, they’re sometimes frustrating, but on many occasions, they are astonishingly correct,” he says. “And people are truly gobsmacked, including those who work closely in this field – people who you would expect to know what’s coming next. In many cases, they’ve been taken by surprise about how much more the models are capable of than they expected.”
All of this has brought forward many people’s estimates of when we will have completely general artificial intelligence. Wood cites Metaculus, a forecasting platform that aggregates the predictions of a large online community of futurists.
“Many people thought the arrival of AGI wouldn’t occur for several decades, possibly not until the end of the century,” he says. “But in 2022, Metaculus’ median forecast for when AGI will be achieved came down from somewhere in the 2040s to 2027 – just four years away. I personally believe that may be over-estimating things a little, but I wouldn’t be surprised if we had generally intelligent AI, the so-called singularity, by 2035.”
The singularity and longevity
When considering how the singularity will help extend human healthspan, Wood points to progress in fundamental scientific research as evidence that AI is already playing its part in longevity.
“The most significant development in AI in the last few years may turn out to be DeepMind’s AlphaFold,” he says. “Understanding how proteins fold was a problem that had eluded human scientists for more than 50 years. AlphaFold comprehensively solved this problem and shows that AI can actually do science in a way that humans have been unable to do.”
Similarly, says Wood, AI has accelerated progress in the long and complicated process of drug discovery.
“Sadly, it’s been taking longer and longer to do drug discovery since the 1950s – it’s become an enormously expensive business to develop a drug. But more and more companies are now using different AI methods to improve their abilities – whether to design molecules from scratch or to test them in silico. In terms of what AI can do for medicine, it’s already very positive.”
Ultimately, Wood believes that AGI and the singularity will help us to model human biology more fully to make more profound discoveries around aging than we’ve achieved to date.
“This acceleration could help us undo the damage of aging and give us back a more youthful state of health and vitality,” he says. “This may also result in the longevity escape velocity, in which we have not just slightly increased lifespans, but every year that we live, we will add more than 12 months of healthy life expectancy.
“We’re not going to get there by simply extrapolating what we’ve done already. Aging is a very hard engineering problem. But guess what? AI is designed to help us to solve very hard engineering problems!”
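The arithmetic behind “longevity escape velocity” can be made concrete with a short sketch. The numbers and the function below are purely illustrative assumptions, not anything from Wood or actuarial data: they simply show that if medical progress adds more than one year of healthy life expectancy per calendar year lived, remaining expectancy grows instead of running out.

```python
# Hypothetical illustration of "longevity escape velocity" (LEV).
# All figures are invented for the example, not real projections.

def remaining_expectancy(initial_years: float, annual_gain: float, years: int) -> float:
    """Track remaining healthy life expectancy over `years` calendar years.

    Each year, one year is 'spent' living, while `annual_gain` years
    are assumed to be added by medical progress.
    """
    remaining = initial_years
    for _ in range(years):
        remaining = remaining - 1 + annual_gain
    return remaining

# Below escape velocity (gain < 1 year/year): expectancy shrinks.
print(remaining_expectancy(30, 0.5, 10))   # 25.0

# Above escape velocity (gain > 1 year/year): expectancy grows.
print(remaining_expectancy(30, 1.25, 10))  # 32.5
```

The toy model makes the threshold visible: the tipping point is exactly one year of gained expectancy per year lived, which is the “more than 12 months” figure Wood describes.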
Will the singularity be positive or negative?
As we get closer to the singularity, Wood says, AI will become more capable, the need for human involvement will diminish, and the systems will be able to improve themselves.
“We’re already seeing the use of AI to accelerate many areas of software development, but the singularity is when we will have much faster improvements than people were expecting,” he says. “Initial improvements will enable further improvements, which allow further improvement, and so on. It might be that in a few months, a few weeks, or a few days, we will have much more powerful systems than we expected.”
Whether or not the singularity will be a positive for humanity is impossible to say at this point, says Wood.
“There are the wonderfully positive scenarios, where we achieve sustainable superabundance, which will include lots of improvements to health and intelligence. But at the same time, there are many potential scenarios in which we misuse that AI, or the AI goes in the wrong direction based on its own intentions.”
“I believe both scenarios are possible by the middle of this century. At this point, I think it’s about 60% likely we will get a very good outcome, and perhaps 30% likely that we’ll be in a new dark age or worse. I think there’s also about a 10% chance that we’ll still be bubbling along like today because AI hasn’t actually developed as expected.”
It’s up to us
Which way the singularity ultimately goes largely depends on humanity, says Wood.
“Are we going to pay enough attention to all the possibilities?” he asks. “Are we going to understand them sufficiently? Or are we going to content ourselves with a superficial Hollywood-style understanding?”
“In order to coordinate together effectively, I believe we need to take back control of technology as a democratic society to ensure that AI creates a public good, and that it is not being created only based on visions of profit. The free market isn’t always the best at figuring out what should be accelerated, and we can’t just let companies make their own decisions. They may say that they are well intentioned, but good intentions are not enough. We’ve got to have in place the right safety frameworks, the right auditing, and the right monitoring. Because the dangers are so severe.”
“More and more people are realising we have to step up to this challenge and this is my hope for 2023. It’s up to us to have a surge of concern, a surge of understanding, with people gathering who are prepared to roll up their sleeves and think hard about how we can get the best out of technology. It could be wonderful; it could truly be a paradise – if we get this right.”