Artificial intelligence (AI) has been a hot topic in recent years, with rapid advancements raising concerns about its potential impact on society. Eric Schmidt, the former CEO of Google, has weighed in on the debate, urging a more thoughtful approach to AI development.
In an interview with ABC, Schmidt expressed concern that AI could surpass human intelligence, creating unforeseen risks. He emphasized the need for safeguards to keep AI from becoming too autonomous, even floating the radical idea of “unplugging” AI systems if necessary.
But who gets to make these crucial decisions about AI? Schmidt believes it shouldn’t be left solely to technologists like himself. He advocates for a diverse group of stakeholders to come together and establish guidelines for AI development and usage.
One intriguing proposal from Schmidt is the idea of using AI to regulate itself. He argues that humans may not be equipped to oversee AI effectively, but that AI systems could potentially monitor and constrain their own development.
While Schmidt’s perspective may be unconventional, it raises important questions about the future of AI and the role of human oversight in its development. As the technology continues to advance at a rapid pace, it is essential to consider how we can ensure that AI works in the best interests of humanity.