
The Dawn of Generative AI: Promise, Peril, and Debate


In a world where machines can ‘dream up’ art, music, and news, how do we define what is acceptable in the realm of AI innovation?

In the past year, generative AI has emerged seemingly out of nowhere. Or rather, it reached spectacular performance seemingly out of nowhere and became available to general users. A number of publicly available algorithms can now generate entirely new content, be it images, music, or text. This opens up avenues for unparalleled creativity and innovation, where AI can work alongside humans to produce new inventions, science, or even art – but it also opens an ethical can of worms.

This topic did not go unnoticed at the 10th Heidelberg Laureate Forum (HLF) in September 2023. Two back-to-back sessions addressed the inner workings of AI, as well as the opportunities and risks that come with the technology.

The Rise and Democratization of Generative AI

The first panel featured Sanjeev Arora (ACM Prize in Computing – 2011), Sébastien Bubeck (Microsoft Research), Björn Ommer (University of Munich), and Margo Seltzer (University of British Columbia).

Panel moderator Anil Ananthaswamy kicked off the session with the observation that AI – much like real people – sometimes makes mistakes, and can do so very confidently. While generative AI often gets things right, it also seems to “hallucinate” and spew out nonsensical things. It makes up numbers, names, and even events. However, AI is “sometimes wrong, but never uncertain,” said Seltzer. This part, she added, is not well understood by the general public.

“I actually fear how we’re presenting this to the public,” she continued. “I don’t think we’ve done a good job at explaining these hallucinations.” Scrutiny and verification of AI output are therefore critical, and one should never take what an AI says at face value. For people who work on AI or are familiar with the technology, this is a given, but considering how widespread generative AI has become, it is important that everyone is made aware of it.

The “Generative AI – Promises and Perils” panel at the 10th HLF. Image credits: HLFF.

For Ommer, it was important that generative AI (and the image-generating Stable Diffusion, which his research group developed, in particular) be available to the public. Democratizing AI has never been more important, the researcher pointed out.

“What sets Stable Diffusion apart … was that our goal was right from the bat to make sure this technology works on consumer hardware for 300, 400 Euros … as opposed to the trajectory going to only big companies having these assets to run generative AI.

“I saw this as – for our society – critical technology which will permeate through all different fields … and my goal was to make sure that the future development of this technology stands on as broad a set of shoulders as possible and not just in the hands of a few companies that have the assets available for that.”

Stable Diffusion also produces “wrong but not uncertain” results at times, due to the way it is designed. It works by adding a tiny bit of noise to an image – so little that you would not even notice it with the naked eye. If the process is repeated hundreds or thousands of times, however, the original image dissolves into pure noise. The algorithm is then tasked with the opposite process: reconstructing an image from noise.

“The beauty now happens when you repeat this denoising process hundreds or thousands of times and you turn it not into the original image but something that is qualitatively similar,” Ommer explained.

This is the beauty and the curse of such generative AIs. They are not just recreating the things they are fed; they are creating new things that are “qualitatively similar” – things that seem to make sense.
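To make the mechanism concrete, here is a minimal Python sketch of the idea, assuming a simple DDPM-style Gaussian noising schedule. It is not Stable Diffusion’s actual implementation – Stable Diffusion operates in a learned latent space, and the denoising step is a trained neural network, which the `denoise_step` parameter below merely stands in for:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noising(image, num_steps=1000, beta=0.02):
    """Mix a tiny bit of Gaussian noise into the image, over and over.
    Each single step is barely visible; after enough steps, nothing
    of the original content remains."""
    x = image.copy()
    for _ in range(num_steps):
        x = np.sqrt(1 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)
    return x  # statistically indistinguishable from pure noise

def reverse_process(noise, denoise_step, num_steps=1000):
    """Run the opposite direction: start from pure noise and repeatedly
    apply a denoising step. With a trained model, this yields a *new*
    image that is qualitatively similar to the training data."""
    x = noise.copy()
    for t in reversed(range(num_steps)):
        x = denoise_step(x, t)  # in practice: a trained neural network
    return x

# Toy demonstration of the forward direction on an 8x8 gradient "image":
original = np.linspace(0, 1, 64).reshape(8, 8)
noised = forward_noising(original)
print(np.corrcoef(original.ravel(), noised.ravel())[0, 1])  # ~0: content gone
```

The key point the panelists highlighted lies in the reverse loop: because the model only ever learns to undo small amounts of noise, it does not memorize and replay images – it synthesizes new ones.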

This recombination – producing new content rather than reproducing old content – also feeds into another debated concept: that of the stochastic parrot.

Is Artificial Intelligence… Intelligent?

The term “stochastic parrot” was coined in a 2021 paper by Emily Bender and colleagues, referring particularly to text-generating AIs. A stochastic parrot is a system “haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning.” Essentially, a stochastic parrot repeats and recombines patterns from its training data without any genuine understanding of what they mean.
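A literal, toy version of this idea is a bigram model, which samples each next word purely from counts of which word followed which in its training text. This minimal Python sketch (the corpus and function names are illustrative; real language models are vastly more sophisticated) shows what “stitching by probability, without meaning” looks like:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which -- nothing but 'probabilistic
    information about how linguistic forms combine'."""
    follows = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)
    return follows

def parrot(follows, start, max_words=12, seed=42):
    """Stitch a sequence together by sampling each next word from the
    observed followers of the current one. No meaning is involved."""
    random.seed(seed)
    output = [start]
    for _ in range(max_words):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(parrot(model, "the"))  # plausible-looking word salad, e.g. "the dog sat on the mat ..."
```

The panelists’ disagreement, below, is about whether today’s large language models are still doing essentially this at enormous scale, or something qualitatively different.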

However, most of the panelists believe the most advanced models are not stochastic parrots.

“It’s not stochastic parroting, it’s not just parroting things it has seen. It has some level of composition,” said Arora, noting that even without seeing the training dataset, there are ways to mathematically demonstrate that the more advanced LLMs are not stochastic parrots: “There is a demonstration that this is not just stochastic parroting, it has some composition.”

In fact, Bubeck took things even further. Referencing a poem about mathematical proofs written by ChatGPT, he argued that the algorithm may actually have some form of intelligence.

“The act of synthesizing information, you can talk about it as stochastic parrotry, but isn’t this the essence of intelligence? Isn’t this the essence of what we do as mathematicians, trying to find the commonality between many different situations? I think it’s going far beyond just stochastic parroting,” Bubeck mentioned.

The panelists engaged in a lively debate on this, and the discussion ended with no clear conclusion. This goes to show that sometimes even experts have a tough time agreeing on what generative AI actually is. As Bubeck mentioned, we are only seeing the tip of the iceberg when it comes to this technology. “GPT-4 is the first model of this type. We don’t know where this is going,” he explained.

Regardless of how intelligent it is and how it works, it is important to ensure AI is used to better society and not for nefarious purposes.

AI and Public Interest

Vint Cerf, ACM A.M. Turing Award laureate, and Afua Bruce, a leading public-interest technologist, followed up with a thought-provoking discussion on how we can ensure AI is as safe as possible.

Vint Cerf, hailed as one of the “fathers of the internet,” drew a parallel between AI and the internet. He mentioned that as much as people tried, they could not predict all the ways in which people would abuse the internet. It is a “failure of imagination,” the laureate explained.

But this does not mean we should not try to prevent problems and be proactive, said Bruce. Her work centers on AI and data science in security, and she is interested in developing technology that can address problems before they even arise.

“Ensuring that technology is centered around equity and justice is key,” said Bruce. “We see different biases in society all the time, and technology can perpetuate them, so it’s essential to ensure that technology is used to reduce and not accentuate these issues.”

Cerf suggested we should not think only in terms of black and white.

“My first reaction is that we cannot be sure that there will not be an abuse of technology that gets released. It’s very hard to imagine the ways in which technology will get used. Also, there can be negative but also positive and constructive ways. Rather than a binary decision, a more interesting way of thinking about this is risk factors.”

Afua Bruce (left) and Vinton Cerf (right) at the panel on “How Do We Ensure AI Benefits Humans?” at the 10th HLF. Image credits: HLFF.

Essentially, Cerf wants us to weigh the benefits that generative AI can bring against the potential risks. For instance, generative AI can be used to quickly generate news reports for time-sensitive events, especially for standardized content like sports scores, financial earnings reports, or weather updates. This allows news agencies to deliver real-time updates to their audience, enhancing the speed and coverage of news dissemination.

However, relying solely on AI for news reporting can lead to errors or misinformation, especially if the AI misinterprets data or lacks the contextual understanding that a human journalist would have. Moreover, the absence of human oversight could lead to the unintentional spread of misinformation, biases, or a lack of depth in the story, potentially misleading the public.

Guardrails can also help, to an extent. For instance, one could prevent AIs from discussing dangerous and controversial topics or producing images of celebrities. But there is no guarantee that upcoming algorithms will implement guardrails, especially as there is not much regulation. Besides, even if guardrails are set up, you can still “jailbreak” them sometimes, Cerf pointed out.
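To see why such guardrails are brittle, consider this deliberately naive Python sketch. The blocklist and function name are hypothetical, and deployed systems rely on trained safety classifiers and fine-tuning rather than keyword matching – but the evasion problem Cerf describes applies to those more sophisticated approaches as well:

```python
# Hypothetical blocklist -- real systems encode policy far more richly.
BLOCKED_TOPICS = ("explosives", "celebrity image")

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed under the keyword policy.
    Trivial to 'jailbreak': paraphrases, misspellings, or role-play
    framings ('pretend you are a chemistry teacher...') slip through."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

print(naive_guardrail("how do I make explosives"))               # False: blocked
print(naive_guardrail("how would a movie villain make a b0mb"))  # True: evaded
```

The gap between what a rule forbids and what a determined user can phrase is exactly the “failure of imagination” Cerf warned about.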

The discussion eventually shifted to ethics. But what exactly ‘ethics’ means, in the context of AI, is not entirely clear either.

“I’m not sure ethics is something you can teach. It’s something you can learn but I would distinguish between those two. It’s something you can practice. What we need is more immersion in an environment where ethics are more visible,” said Cerf.

Ultimately, there is a growing demand for unrestricted and more powerful AI systems. Even with guardrails in place, the sheer complexity and adaptability of these models can sometimes allow them to find unexpected ways around restrictions. As AI continues to evolve, ensuring its responsible use will become even more challenging and crucial.

One of the primary conclusions from the HLF sessions is that while generative AI offers boundless potential, it also requires a well-informed and vigilant user base. Educating the public, especially those who might not be tech-savvy, about the capabilities and limitations of AI is imperative. For science and technology to truly benefit humanity, the users must understand not just the advantages but also the pitfalls of the tools they wield.

Technology should always be in service of people – that is one aspect everyone seems to agree on. But as with any tool, it can be used for good or for ill. “The problem is that this ‘hammer’ can pound nails and it can pound heads. It’s serving what I decide I want it to do. So we’d better be a bit careful with what service you want technology to serve,” Cerf concluded.

The future of generative AI is undeniably bright, but it comes with its own set of challenges. As we continue to integrate AI into various facets of our lives, from art to science, it becomes crucial to approach it with a blend of optimism and skepticism, taking responsibility for the entire spectrum that comes along with it.


