Eric Schmidt's Realistic AI Utopia (2024)

According to Eric Schmidt, three major developments are shaping AI: (i) infinite context windows, (ii) agents that excel in specific fields, and (iii) AI's capacity for action, such as writing code. Today, AI largely operates through humans issuing commands based on human judgment. Schmidt, however, envisions a not-too-distant future in which models become smarter through recursive self-improvement and by integrating heterogeneous information via inter-agent interaction. Eventually, agents will be able to work together and may develop their own language, one that humans no longer understand. If that happens, Schmidt suggests, these systems should be terminated.

The rapid pace of AI development is evident in the near-yearly release of new models. Schmidt predicts that advanced agents could emerge within the next five years, driven by the financial gains their efficiency promises. Frontier model providers will offer high-end agents, but many other agent providers will emerge as well.

Regarding AI safety, Schmidt emphasizes that large frontier models are not the problem, since they are subject to the most stringent self-regulation, regulatory oversight, and collaboration with AI safety institutes. The real issue lies in the proliferation and potential misuse of future models and agents, especially open-source models. To mitigate this, Schmidt suggests limiting the proliferation of the most powerful systems, especially as they approach what he calls "general intelligence." To ensure safety, he advocates that AI systems be checked or policed not by humans but by other AI systems. While universities have a role to play in analyzing the advancement and safety of AI, they are underfunded compared to the large frontier model providers such as Microsoft and Google. Increasing research funding and access to hardware (GPUs) for universities is essential to addressing this imbalance.

According to Schmidt, non-Western countries are the most likely to misuse AI. In his talk, he unsurprisingly singled out China, where generative AI development began with open-source models from the West that Chinese tech companies then built upon. Schmidt believes China is about two years behind the US, with four companies conducting large-scale model training: Baidu, Alibaba, Tencent, and Huawei. Although earlier government restrictions delayed the public release of generative AI in China, these companies could catch up quickly; they remain hindered, however, by a lack of access to the best hardware, and the export restrictions imposed under the Trump and Biden administrations are likely to tighten further. As a result, Chinese companies must work harder and invest more to keep pace. To Schmidt, there is no doubt that Chinese companies are catching up, but they face higher costs and more obstacles. He believes US trade restrictions are justified.

In recent months, the Chinese government has approved dozens of new AI models and is supporting the development of generative AI while balancing it against AI governance regulations. Schmidt argued, however, that China lags behind the US because of its restrictions on free speech: controlling the content that generative AI proliferates is difficult, and the more technical restrictions are applied, the worse those models perform. Other major concerns include the biothreats and cyberthreats that AI proliferation could enable, which have also led the Chinese government to impose restrictions on AI. To Schmidt, those government restrictions are nonetheless "intelligent" and well measured, balancing innovation and catch-up against security.

Although Schmidt remains hawkish towards China and offers a one-sided view of the potential misuse of AI by non-Western countries, he acknowledges that the West and China face common AI safety issues and need to collaborate on prevention and mitigation, especially as AI approaches general intelligence. Addressing these threats through a US-China treaty will be challenging, but Schmidt suggests that both sides should inform each other whenever they train entirely new models so as to avoid surprises; he referred to a "no-surprise rule" similar to the Open Skies Treaty. A common agreement on AI safety will also be essential. Despite recent track II diplomatic talks between the US and China, for now he does not see China as willing to collaborate with the US.

Generally, Schmidt advocates for a balancing mechanism to restrain powerful AI systems, which he believes will continue to lack human moral reasoning capabilities. Open-source AI is of special concern to him: providers already add RLHF-based guardrails to suppress harmful content, but while such guardrails make LLM outputs more palatable, they can be easily bypassed through reverse engineering, which remains an unresolved engineering problem.

This text is derived from a recent interview conducted by Noema Magazine:

Interview transcript: https://www.noemamag.com/mapping-ais-rapid-advance/
