
AI: Recent Trends and Building Blocks for Future Progress

By Pahal Bhasin




Since deep learning came into widespread use in the mid-2010s, there has been huge progress in what can be achieved with Artificial Intelligence.


Timeline of advances in Artificial Intelligence


Here is a brief timeline of just some of the advances we’ve seen in AI since 2019:


  • AlphaStar, which can beat top professional players at StarCraft II (January 2019)

  • MuZero, a single system that learned to win games of chess, Shogi, and Go — without ever being told the rules (November 2019)

  • GPT-3, a natural language model capable of producing high-quality text (May 2020)

  • GPT-f, which can solve some Maths Olympiad problems (September 2020)

  • AlphaFold 2, a huge step forward in solving the long-standing protein-folding problem (July 2021)

  • Codex, which can produce code for programs from natural language instructions (August 2021)

  • PaLM, a language model which has shown impressive capabilities, like reasoning about cause and effect and explaining jokes (April 2022)

  • DALL-E 2 (April 2022) and Imagen (May 2022), which are both capable of generating high-quality images from written descriptions

  • SayCan, which takes natural language instructions and uses them to operate a robot (April 2022)

  • Gato, a single ML model capable of doing a huge number of different things (including playing Atari, captioning images, chatting, and stacking blocks with a real robot arm), deciding based on its context what it should output (May 2022)

  • Minerva, which can solve complex maths problems — fairly well at college level, and even better at high school maths competition level (June 2022). (Minerva was far more successful than forecasters predicted in 2021.)


Current trends show rapid progress in the capabilities of ML systems


There are three things that are crucial to building AI through machine learning:

  • Good algorithms (e.g., more efficient algorithms are better)

  • Data to train an algorithm

  • Enough computational power (known as compute) to do this training


The research scientist Danny Hernandez and his team looked at how two of these inputs (compute and algorithmic efficiency) are changing over time. They found that, since 2012, the amount of compute used to train the largest AI models has been rising exponentially — doubling every 3.4 months. In total, that means the computational power used to train our largest machine learning models has grown by over 1 billion times.
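To get a feel for what a 3.4-month doubling time implies, here is a minimal back-of-the-envelope sketch in Python (the ten-year window since 2012 is an assumption for illustration, not a figure from the study):

```python
# Rough growth factor implied by a fixed doubling time.
# Assumption for illustration: about a decade (120 months) since 2012.
months = 120
doubling_time = 3.4  # months per doubling of training compute

doublings = months / doubling_time  # about 35 doublings
growth = 2 ** doublings             # about 4e10, i.e. tens of billions

print(f"{doublings:.0f} doublings -> roughly {growth:.1e}x more compute")
```

Thirty-five doublings is a factor of tens of billions, which is where the "over 1 billion times" figure comes from.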

Hernandez and his team also looked at how much compute has been needed to train a neural network to have the same performance as AlexNet (an early image classification algorithm). They found that the amount of compute required for the same performance has been falling exponentially — halving every 16 months.

So, since 2012, the amount of compute required for the same level of performance has fallen by over 100 times. Combined with the increased compute used, that’s a lot of growth. It’s hard to say whether these trends will continue, but they speak to incredible gains over the past decade in what it’s possible to do with machine learning.
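To put a rough number on that combined growth, here is a similar sketch (again assuming a ten-year window; multiplying the two factors is a simplification, since it treats the trends as independent):

```python
# Combining both trends: more raw compute, and less compute needed
# for a given level of performance (algorithmic efficiency).
months = 120
compute_growth = 2 ** (months / 3.4)   # compute doubles every 3.4 months
efficiency_gain = 2 ** (months / 16)   # required compute halves every 16 months

effective_gain = compute_growth * efficiency_gain
print(f"compute: ~{compute_growth:.1e}x, efficiency: ~{efficiency_gain:.0f}x")
print(f"effective gain for a fixed task: ~{effective_gain:.1e}x")
```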

Indeed, it looks like increasing the size of models (and the amount of compute used to train them) introduces ever more sophisticated behaviour. This is how things like GPT-3 can perform tasks they weren’t specifically trained for.

These observations have led to the scaling hypothesis: that we can simply build bigger and bigger neural networks, and as a result we will end up with more and more powerful artificial intelligence, and that this trend of increasing capabilities may extend to human-level AI and beyond.


If this is true, we can attempt to predict how the capabilities of AI technology will increase over time simply by looking at how quickly we are increasing the amount of compute available to train models.


When can we expect transformative AI?

It’s difficult to predict exactly when we will develop AI that we expect to be hugely transformative for society (for better or for worse) — for example, by automating all human work or drastically changing the structure of society. But here we’ll go through a few approaches.

  • One option is to survey experts. Data from a 2019 survey of 300 AI experts implies that there is a 20% probability of human-level machine intelligence (which would plausibly be transformative in this sense) by 2036, a 50% probability by 2060, and 85% by 2100.

  • Ajeya Cotra (a researcher at Open Philanthropy) attempted to forecast transformative AI by comparing modern deep learning to the human brain. She estimates that there is a 35% probability of transformative AI by 2036, 50% by 2040, and 60% by 2050.

  • Tom Davidson (also a researcher at Open Philanthropy) wrote a report to complement Cotra’s work. Davidson’s report estimates that there is an 8% chance of transformative AI by 2036, 13% by 2060, and 20% by 2100.

  • Holden Karnofsky, co-CEO of Open Philanthropy, attempted to sum up the findings of all of the approaches above. He guesses there is more than a 10% chance we’ll see transformative AI by 2036(!).

So, all in all, AI seems to be advancing rapidly. More money and talent are going into the field every year, and models are getting bigger and more efficient.


SOURCE

This article’s primary source is the insightful and thoroughly researched article “Preventing an AI-related catastrophe: AI might bring huge benefits — if we avoid the risks”, published by Benjamin Hilton of 80,000 Hours (80000hours.org) in August 2022. We have highlighted a few key points for the benefit of our high school audience to raise awareness about the topic.
