Artificial intelligence (A.I.) is familiar territory for Eric Schmidt, the former CEO of Google, who has backed A.I. startups including Stability AI, Inflection AI, and Mistral AI. Now, however, Schmidt is taking a different approach: launching a $10 million venture dedicated to advancing research on the safety challenges associated with this revolutionary technology.
The funds will establish an A.I. safety science program at Schmidt Sciences, a nonprofit founded by Schmidt and his wife Wendy. The program, led by Michael Belinsky, aims to treat A.I. safety as a subject of rigorous scientific research rather than merely cataloging risks. “That’s the kind of work we want to do—academic research to understand why some things are inherently unsafe,” Belinsky explained.
More than two dozen researchers have already been selected to receive grants of up to $500,000 each from the program. Beyond financial support, participants will also have access to computational resources and A.I. models, and the program intends to keep pace with the industry's rapid advancements. “We want to tackle the challenges of current A.I. systems, not outdated models like GPT-2,” Belinsky emphasized.
Esteemed researchers like Yoshua Bengio and Zico Kolter are among the initial recipients of the program. Bengio will focus on developing risk mitigation technology for A.I. systems, while Kolter will explore phenomena like adversarial transfer. Another grantee, Daniel Kang, aims to investigate whether A.I. agents can carry out cybersecurity attacks, shedding light on the potential risks associated with A.I.’s capabilities.
Despite the buzz surrounding A.I. in Silicon Valley, there are worries that safety considerations are taking a back seat. Schmidt Sciences’ new program aims to address these concerns by breaking down barriers that impede A.I. safety research. By fostering collaboration between academia and industry, researchers like Kang are hopeful that major A.I. companies will incorporate safety research findings into their technology development processes.
As the A.I. landscape continues to progress, Kang stresses the importance of open communication and transparent reporting in testing A.I. models. He underscores the need for responsible practices from major labs to ensure the safe and ethical development of A.I. technology.
Eric Schmidt’s $10 million commitment to A.I. safety underscores the importance of prioritizing research and innovation to address the challenges and risks of this transformative technology.