
Report: World governments must act to create generative AI safeguards

Generative AI’s fast-flowering utility in the cybersecurity field means that governments must take steps to regulate the technology as its use by malicious actors becomes increasingly common, according to a report issued this week by the Aspen Institute. The report called generative AI a “technological marvel,” but one that is reaching the broader public at a time when cyberattacks are rising sharply in both frequency and severity. It’s incumbent on regulators and industry groups, the authors said, to ensure that the benefits of generative AI don’t wind up outweighed by its potential for misuse.

“The actions that governments, companies, and organizations take today will lay the foundation that determines who benefits more from this emerging capability – attackers or defenders,” the report said.

Global response to generative AI security varies

The regulatory approaches taken by large nations like the US, UK, and Japan have differed, as have those of the United Nations and the European Union. The UN’s focus has been on security, accountability, and transparency, according to the Aspen Institute, pursued through bodies such as UNESCO, an Inter-Agency Working Group on AI, and a high-level advisory body under the Secretary-General. The European Union has been particularly aggressive in its efforts to protect privacy and address the security threats posed by generative AI, with the AI Act, agreed in December 2023, containing numerous provisions on transparency, data protection, and rules for model training data.

Legislative inaction in the US has not stopped the Biden Administration from issuing an executive order on AI, which provides “guidance and benchmarks for evaluating AI capabilities,” with a particular emphasis on AI functionality that could cause harm. The US Cybersecurity and Infrastructure Security Agency (CISA) has also issued non-binding guidance, in conjunction with UK regulators, the authors said.

Japan, by contrast, is one example of a more hands-off approach to AI regulation from a cybersecurity perspective, focusing more on disclosure channels and developer feedback loops than strict rules or risk assessments, the Aspen Institute said.

Time running out for governments to act on generative AI regulation

Time, the report also noted, is of the essence. Security breaches enabled by generative AI erode public trust, and the technology gains new capabilities that could be turned to nefarious ends practically by the day. “As that trust erodes, we will miss the opportunity to have proactive conversations about the permissible uses of genAI in threat detection and examine the ethical dilemmas surrounding autonomous cyber defenses as the market charges forward,” the report said.
