This WIRED interview features Demis Hassabis, CEO of Google DeepMind, discussing AGI (Artificial General Intelligence), its potential impact on the future of work, and Google's competitive strategy in the age of AI. The conversation explores timelines for AGI development, the potential for a rapid technological shift, and the crucial need for responsible development and international cooperation to mitigate risks.
Demis Hassabis points to inconsistent performance across domains as a key indicator that current systems are not yet AGI. He notes that while systems can excel at demanding tasks such as solving complex math problems, they may still stumble on simpler ones like basic arithmetic or counting the letters in a word. This lack of consistent generalization across cognitive abilities marks the gap from true AGI.
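As a toy illustration of the inconsistency Hassabis describes, the sketch below is a hypothetical example of mine (count_letter and check_model_answer are made-up helpers, not any model's API): it checks a model's claimed letter count against a trivially computable ground truth, the kind of task that is easy to verify mechanically yet has reportedly tripped up systems that handle far harder problems.

```python
def count_letter(word: str, letter: str) -> int:
    """Ground-truth count of a letter's occurrences in a word."""
    return word.lower().count(letter.lower())

def check_model_answer(word: str, letter: str, model_answer: int) -> bool:
    """Compare a model's claimed count against the deterministic answer."""
    return model_answer == count_letter(word, letter)

# Hypothetical example: a system that solves hard math problems might
# still claim "strawberry" contains only 2 occurrences of 'r'.
print(count_letter("strawberry", "r"))           # 3
print(check_model_answer("strawberry", "r", 2))  # False: inconsistent generalization
```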
Hassabis identifies two primary risks: (1) malicious use of AGI by bad actors, whether individuals or rogue nations, and (2) the inherent technical risks of increasingly powerful and agentic AI systems, which raise concerns about whether safeguards will remain safe and effective. The first is a concern about external misuse of the technology; the second is a concern about the technology itself becoming uncontrollable.
Hassabis contrasts an incremental shift with a step-function change in technological advancement resulting from AGI. He suggests that even with a fully developed AGI system, integration into the physical world (factories, robots, and so on) would be gradual, potentially taking another decade or more. The "hard takeoff" scenario presents the opposing view: a small initial advantage in AGI development could quickly become an insurmountable gap because of the AI's capacity for self-improvement.
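To make the compounding intuition behind a hard takeoff concrete, here is a deliberately simplified toy model of my own (the simulate function and its parameters are arbitrary assumptions, not something from the interview): if capability growth feeds back on current capability, a small head start widens rather than closes.

```python
# Toy model (illustrative only): each step, capability grows by a rate
# that itself scales with current capability, i.e. recursive self-improvement.
def simulate(initial: float, feedback: float, steps: int) -> float:
    capability = initial
    for _ in range(steps):
        capability *= 1 + feedback * capability
    return capability

leader = simulate(initial=1.05, feedback=0.1, steps=15)    # 5% head start
follower = simulate(initial=1.00, feedback=0.1, steps=15)
print(f"leader: {leader:,.0f}  follower: {follower:,.0f}  ratio: {leader / follower:.1f}x")
```

Under these assumed numbers the 5% head start ends up as a gap of more than an order of magnitude; the parameters are arbitrary, but the compounding structure is the point.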
Hassabis advises graduating students to immerse themselves in and understand current AI systems. He suggests focusing on STEM fields, programming, and developing expertise in techniques like fine-tuning and prompt engineering to maximize productivity and leverage the power of these tools in their chosen fields.
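As a concrete, generic example of the prompt engineering Hassabis mentions, the sketch below assembles a few-shot prompt as plain text; the build_prompt helper is hypothetical and no specific model API is assumed, the point is only the structure (task description, worked examples, then the new query).

```python
# Minimal sketch of few-shot prompt construction (hypothetical helper,
# not tied to any specific model or SDK).
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    lines = [f"Task: {task}", ""]
    for question, answer in examples:        # few-shot demonstrations
        lines += [f"Q: {question}", f"A: {answer}", ""]
    lines += [f"Q: {query}", "A:"]           # leave the final answer to the model
    return "\n".join(lines)

prompt = build_prompt(
    task="Classify the sentiment of each review as positive or negative.",
    examples=[("Great battery life.", "positive"),
              ("Screen cracked after a week.", "negative")],
    query="Setup was painless and the camera is superb.",
)
print(prompt)  # pass this string to whatever model interface you use
```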
Other Topics Discussed:
Demis Hassabis highlights that there's a significant ongoing debate within the field regarding the definition of AGI. He notes that different definitions lead to varying predictions about when AGI will be achieved. DeepMind, he explains, has consistently defined AGI as a system capable of exhibiting all the cognitive capabilities of a human being, using the human mind as the only existing proof that general intelligence is possible. To claim a system is AGI, it must demonstrate generalization across numerous domains, essentially checking off all the boxes of human cognitive function. He emphasizes that current systems, while impressive, still have gaps in capabilities like reasoning, planning, memory, true invention, and creativity, demonstrating that they haven't yet reached the level of consistent generalization required for AGI.