• Will LLMs scale all the way to AGI?

  • Sources:

    Expert opinions vary on whether large language models (LLMs) will scale all the way to artificial general intelligence (AGI):

    • Manyika, an expert in technology and economics, notes the impressive capabilities of large language models and the possibility that they could lead to general intelligence. He points to the continued performance gains from larger models and the multimodal outputs they can generate. Manyika believes these approaches have not yet reached their limits and that more innovations are coming, although it is not clear whether scaling alone is sufficient to achieve AGI [1].

    • Altman, CEO of OpenAI, believes that large language models are part of the path to AGI but insists that other important components are still missing. While optimistic, he acknowledges that we are deep into the unknown and emphasizes the value of AI as a tool that amplifies human abilities even if AGI is never achieved [2].

    • Świechowski, who holds a PhD in Systems Research, argues that we are still far from AGI. He points to technical challenges such as the enormous data requirements of neural networks and the difficulty of learning multiple tasks simultaneously as key obstacles. Świechowski is skeptical that current models like GPT will scale and believes they are not on the path to AGI [3].

    • The host of the "Data Skeptic" podcast shares his view that large language models on their own do not constitute AGI, though he is confident that the additional components needed to realize AGI can be worked out. He also discusses the potential of AGI and the existential risks associated with it [4].

    • Knoop, co-founder of Zapier, has updated his view on the role of large language models on the route to AGI, suggesting that recent progress makes it more plausible that they will be involved. He found the improvement seen when language models are allowed to 'think out loud' particularly noteworthy [5].

    • Noam Brown, known for developing AI for games such as poker and Diplomacy, speaks to the rapid pace of advances in AI. He discusses the data inefficiency of current AI systems as a barrier to AGI and the possibility of overcoming it by letting AIs leverage general knowledge across a vast array of domains [6].

    • Miller discusses the alignment problem and the uncertainty in the field about whether neural-network-based language models can achieve AGI. While some, like Gary Marcus, are skeptical, others believe deep learning may keep surprising us. Miller is open to the possibility but also stresses the need for structured approaches [7].

    Overall, the experts span a spectrum from optimism to skepticism, but there is clear consensus that much remains to be learned and that addressing key issues such as data efficiency and structural design will be crucial steps toward achieving AGI.