
AI Leaders Tell Globalist Davos Crowd that ‘Artificial General Intelligence’ Will Be ‘Better than Humans’



Top executives from major AI organizations including OpenAI, Google DeepMind, and Cohere gathered at the World Economic Forum in Davos, Switzerland, to discuss the imminent approach of Artificial General Intelligence (AGI) and its potential impacts. One CEO explained that AGI will be “better than humans at pretty much whatever humans can do.”



CNBC reports that at the globalist Davos summit, AI leaders from prominent labs including OpenAI, Google DeepMind, and Cohere opened a wide-ranging discussion on the arrival of AGI. This form of AI, which would match or surpass human intellect, is a source of both enthusiasm and concern within the AI community.

Human relaxes between robot workers (Andrew Bret Wallis/Getty)

Sam Altman, CEO of OpenAI, suggested during the panel at the World Economic Forum that AGI could become a reality in the “reasonably close-ish future.” However, Altman downplayed fears that AGI would drastically reshape the world, stating: “It will change the world much less than we all think, and it will change jobs much less than we all think.” Previously, Altman had expressed concerns about AI being used for disinformation and cyberattacks, saying, “I think people should be happy that we are a little bit scared of this.”



Aidan Gomez, CEO and co-founder of Cohere, agreed with Altman on the near-term arrival of AGI but emphasized its vague definition. “First off, AGI is a super vaguely defined term. If we just term it as ‘better than humans at pretty much whatever humans can do,’ I agree, it’s going to be pretty soon that we can get systems that do that,” Gomez said, adding that while adoption in companies might take decades, Cohere is focused on making these systems more adaptable and efficient.

Lila Ibrahim, chief operating officer of Google’s DeepMind, highlighted the uncertainty around AGI’s definition and timeline. “The reality is no one knows,” she stated. “There’s a debate within the AI experts who’ve been doing this for a long time both within the industry and also within the organization.”

She continued: “We’re already seeing areas where AI has the ability to unlock our understanding … where humans haven’t been able to make that type of progress. So it’s AI in partnership with the human, or as a tool.”

“So I think that’s really a big open question, and I don’t know how better to answer other than, how do we actually think about that, rather than how much longer will it be?” Ibrahim added. “How do we think about what it might look like, and how do we ensure we’re being responsible stewards of the technology?”



Salesforce CEO Marc Benioff, speaking on a panel, highlighted the need to prevent a “Hiroshima moment” in the AI field. “We don’t want that in our AI industry. We want to have a good healthy partnership with these moderators and with these regulators,” Benioff remarked, stressing that effective regulation is needed to avoid the pitfalls seen in social media over the past decade.

Jack Hidary, CEO of SandboxAQ, offered a differing view, pointing out that AI, while having passed the Turing test, still lacks common sense. “One thing we’ve seen from LLMs [large language models] is very powerful, can write essays for college students like there’s no tomorrow, but it’s difficult to sometimes find common sense,” Hidary stated. He predicted a significant leap for AI in 2024, especially for humanoid robots using advanced AI communication software.

Read more at CNBC here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.





