This TED Talk features Eric Schmidt discussing the rapid advancements in AI and why he believes the revolution is currently underhyped. He argues that AI's capabilities far exceed current public perception, and he emphasizes the implications for scale and autonomy and the resulting geopolitical and ethical dilemmas.
The conversation begins with the 2016 AlphaGo games, specifically a move invented by AI that surprised Go experts. This event is presented as a pivotal moment marking the beginning of the AI revolution. The speaker contrasts it with the current over-focus on chatbots like ChatGPT, arguing that AI's capabilities are significantly underhyped given recent advances in reinforcement learning and planning, exemplified by systems such as OpenAI's o3 and DeepSeek's R1. These systems plan and learn simultaneously, a development requiring far more computational power. Their immense energy demands are highlighted with an estimate that America would need an additional 90 gigawatts of power to support the growth, raising questions about the limits of scale and the need for alternative energy sources.

The discussion expands to the limits of data, which necessitate data generation, and the limit of knowledge itself: the challenge of AI inventing entirely new concepts rather than simply applying existing patterns.

The speakers then delve into the ethical dilemmas surrounding AI, particularly its dual-use nature and the tension between preventing misuse and preserving human freedom. The conversation touches on the existing military doctrine of "human in the loop" or "meaningful human control" and the geopolitical competition between the US and China in AI development, focusing on the implications of open-source versus closed models and the risk of open-source models falling into the wrong hands. The analogy of mutually assured destruction is employed to illustrate the potential dangers of an AI arms race. The inherent tension in AI safety between preventing dystopian outcomes and inadvertently creating a surveillance state is examined, leading to a discussion of solutions such as zero-knowledge proofs that maintain individual freedom while mitigating risks.
The potential for radical abundance through AI is also explored, considering the future of work and human activity in a world with vastly increased productivity. The speakers discuss the need to adapt to this new economic reality, acknowledging the uncharted territory of such a significant productivity increase. Finally, the advice is given to embrace and adapt to AI's rapid advancement, recognizing its significance as a marathon, not a sprint, emphasizing the importance of daily engagement and adaptation to remain relevant. The conversation concludes by highlighting the transformative potential of AI across various sectors and the urgent need to navigate its development responsibly.
The discussion starts by referencing a specific moment in 2016 during the AlphaGo games, where an AI invented a novel move never before seen in the 2500-year history of the game. This is framed as a pivotal moment, a "quiet moment where the Earth shifted beneath us," illustrating the unexpected capabilities of AI that were not widely recognized at the time. The speaker notes that while the full implications weren't understood then, the power of these new algorithms was recognized. This is contrasted with the current public perception of AI, often fixated on tools like ChatGPT. The argument is made that this focus underestimates the rapid progress, particularly in reinforcement learning, which was partly born from AlphaGo. Systems like OpenAI o3 and DeepSeek R1 are cited as examples showcasing advanced planning capabilities – "forward and back, forward and back" – a significant leap from earlier language-based AI.
The massive scale and energy consumption required by these advanced AI systems are then highlighted. A specific figure is given – the need for an additional 90 gigawatts of power in America, equivalent to 90 nuclear power plants – to illustrate the problem. Alternative solutions are mentioned, such as Canada's hydroelectric power, but this is presented as currently unrealistic due to political factors. The growth of data centers in the Arab world and India is mentioned in comparison. The sheer power consumption is described as "cities per data center," emphasizing its magnitude. The speaker acknowledges the potential for algorithmic improvements to reduce energy needs but invokes the industry adage "Grove giveth, Gates taketh away" to suggest that software will keep consuming whatever computing power becomes available. The increasing computational needs are quantified with an estimate of a 100- to 1000-fold increase required for advanced planning, moving from deep learning to reinforcement learning and then to "test-time compute," where learning occurs during the planning process itself. This "zenith" of computational needs is presented as the first of three main problems.
The second problem is the depletion of readily available data, necessitating the generation of new data. This, however, is viewed as solvable by the capabilities of the AI itself. The third problem, less readily defined, is the limit of knowledge and the ability of AI to make completely new inventions. The systems are likened to a collective consciousness of all computers, reasoning only from previously existing knowledge. The question is posed: how do we invent something completely new? Einstein is cited as an example of a scientist who saw patterns across seemingly unrelated fields. The current inability of AI systems to do this is noted and attributed to the "non-stationarity of objectives," where the rules constantly change. The speaker expresses hope that overcoming this limitation will lead to the invention of entirely new scientific and intellectual fields.
The ethical dilemmas and the dual-use nature of AI are discussed next. The exceedingly dual-use nature of this technology, applicable to both civilian and military ends, is highlighted. Existing doctrines such as the US Department of Defense's Directive 3000.09 ("human in the loop" or "meaningful human control") are presented as illustrating a line that should not be crossed. The intense competition between the US and China is emphasized as a defining factor, noting the use of tariffs ("essentially reciprocating 145-percent tariffs") and the implications for supply chains, particularly access to advanced chips. The speaker mentions ongoing Track II dialogues between the US and China, initiated by Dr. Kissinger, and notes that access to advanced technology is the primary issue raised by China. China's open-source approach, contrasted with the largely closed models of the US, is viewed as a significant advantage for China, leading to rapid proliferation that is considered dangerous at the cyber and bio levels.
A nuclear-threat scenario is then introduced to illustrate the dangers of an AI arms race, drawing parallels to Dr. Kissinger's work on mutually assured destruction. The analogy involves a race between two actors, one six months ahead of the other in achieving superintelligence. The speaker highlights that the "slope" of improvement, not just the absolute level, is crucial in a network-effect business. The hypothetical rival, once sufficiently ahead, would possess the capability to "reinvent the world and in particular, destroy me." Hypothetical countermeasures such as stealing code, infiltration, model modification, and finally physical destruction of data centers ("bomb your data center") are discussed, illustrating the escalating tensions and the potential for preemptive action. This is framed as a very real concern in today's geopolitical climate.
The tension between AI safety and the potential for surveillance is discussed next, using the observation that efforts to prevent "1984" often produce something that resembles "1984." The need for methods such as proof of personhood and moderation systems that work at scale is discussed, highlighting the importance of adhering to societal values and preserving human freedom. The speaker emphasizes that many of these challenges are not purely technical but business decisions. While acknowledging the potential for a surveillance state, the speaker also notes the possibility of creating freeing systems. The importance of proving identity without necessarily revealing personal details is emphasized, with zero-knowledge proofs mentioned as a solution.
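The zero-knowledge idea mentioned here can be made concrete with a toy sketch. The following is a Schnorr-style identification protocol: the prover demonstrates knowledge of a secret behind a public key without ever transmitting the secret. This is an illustration under invented, insecure parameters, not anything attributed to the talk or to any production system.

```python
# Toy Schnorr-style zero-knowledge identification sketch (tiny parameters,
# NOT cryptographically secure; for illustration only). The prover convinces
# the verifier it knows the secret x behind the public key y = g^x mod p,
# without revealing x itself.
import secrets

p = 23          # small prime modulus (toy size)
q = 22          # order of g in the multiplicative group mod p
g = 5           # generator

x = 7                      # prover's secret, never sent over the wire
y = pow(g, x, p)           # public key, safe to publish

# 1. Commitment: prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random challenge c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q; x stays hidden inside s.
s = (r + c * x) % q

# 4. Verification: g^s == t * y^c (mod p) holds exactly when the prover
#    knows x, because g^s = g^(r + c*x) = g^r * (g^x)^c.
valid = pow(g, s, p) == (t * pow(y, c, p)) % p
print("identity proven without revealing x:", valid)
```

A real deployment would use an elliptic-curve group and standardized parameters, but the shape of the exchange (commit, challenge, respond, verify) is the same: the verifier learns that the prover holds the credential, and nothing else.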
The conversation then shifts to a more optimistic outlook, focusing on the potential for radical abundance and the opportunities that AI can provide. The speaker expresses excitement about potential advancements in areas like disease eradication ("eradicate all of these diseases"), exploring advancements in identifying druggable targets and reducing the cost of drug trials. Other aspirations include discovering the nature of dark energy and dark matter, and revolutionizing material science, transportation, and education. The concept of providing every human with a personalized tutor is proposed as an example of a seemingly achievable goal hindered only by economic considerations, not technological limitations. Similarly, improving healthcare access globally using AI-powered assistants is another example.
However, the speaker also notes the potential for AI to exacerbate existing problems, such as loneliness in a digitally connected world. The speaker reiterates that these challenges are fixable, not requiring revolutionary discoveries but changes in approach. The potential for a 30% yearly productivity increase is presented, acknowledging that economists currently lack models for understanding the implications of such a drastic change. The speaker warns against underestimating the speed and scale of AI's transformative power and stresses the uniqueness of this moment in human history.
The concluding remarks emphasize that navigating this AI transition is a marathon, not a sprint. An analogy is used of the speaker's experience in a 100-mile bike race, where the strategy is simply to keep going each day. The rapid pace of technological change necessitates constant adaptation, reminding the audience that what is true today may be obsolete in a few years. The advice given is to embrace and utilize AI technology across various sectors, concluding with the suggestion that those not using it will soon be irrelevant. The conversation finishes with an example of a change in the software industry, illustrating the emergence of new methods of connecting AI models directly to databases, eliminating the need for intermediary connectors.
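The closing software-industry example, where a model talks to a database directly rather than through a bespoke connector, can be sketched minimally. Everything here is hypothetical: `fake_model_reply` stubs out the model call, and the schema and names are invented for illustration.

```python
# Minimal sketch of a model driving a database directly: instead of an
# intermediary connector translating between the model and the data store,
# the model emits SQL that is executed as-is. The model call is stubbed;
# all names and the schema are hypothetical.
import sqlite3

def fake_model_reply(question: str) -> str:
    """Stand-in for a real model call that turns a question into SQL."""
    # A real system would send the question plus the schema to a model;
    # here we return a canned query for the demo question.
    return "SELECT name, revenue FROM customers ORDER BY revenue DESC LIMIT 1"

# Toy in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, revenue REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Acme", 120.0), ("Globex", 340.0), ("Initech", 75.0)])

# The "direct connection": the model's output goes straight to the database.
sql = fake_model_reply("Who is our biggest customer?")
top = conn.execute(sql).fetchone()
print(top)  # ('Globex', 340.0)
```

A production system would still need guardrails (read-only credentials, query validation), but the point of the example stands: the translation layer between model and data store shrinks or disappears.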