Regulating Large Language Models: A Roundtable Report, Co-Hosted by CDT and NYU Information Law Institute



This report was also authored by Paul Friedl, an affiliate at NYU’s Information Law Institute.

AI is moving fast, but for once, so are policymakers.

Governments and international organizations around the world have been quick to propose commitments, guidelines, and regulation concerning the development and deployment of AI, with more coming down the pike. In particular, policymakers are reacting to concerns about large language models (LLMs): algorithmic systems trained on huge volumes of written language that, as a result, can analyze and generate human-like text.

Designing regulation that harnesses the innovative potential of LLMs while minimizing their social harms is a challenge that will require new levels of cross-discipline collaboration. To this end, the Center for Democracy & Technology (CDT) and the NYU Information Law Institute (NYU ILI) hosted the Large Language Model, Law, and Policy Roundtable this past summer. The roundtable brought together roughly thirty scholars and advocates from law, computer science, and political science to discuss how to create effective LLM regulation. The day-long meeting focused on three issue areas: truthfulness, privacy, and market concentration.

Today, we publish the roundtable’s proceedings. The conversation touched on a broad range of concerns and potential interventions. Participants did not agree on every point, but several themes emerged:

  • Technical interventions almost inevitably raise difficult tradeoffs that regulators need to consider. For instance, removing personal information from training data to prevent memorization-based privacy harms may have unintended side effects on model behavior (see the sketch after this list).
  • Analogies to older technologies can help distinguish actual needs for regulation from “regulatory panic.” Comparisons with technologies such as search engines, social media, and databases can help identify which novel, LLM-specific issues policymakers should focus on.
  • Policy interventions that treat AI as a regulatory “blank slate” are likely to be unsuccessful. LLMs may be new, but they are already embedded in existing economic and sociotechnical dynamics. Interventions that treat LLMs as a field yet to emerge (rather than one that already exists) risk producing ill-fitting remedies.
  • Developing effective policy interventions requires consideration of the full LLM supply chain. An LLM is not a standalone product; it is an assemblage of interacting technologies, often developed by different actors. Regulatory interventions may target actors at any stage of the development and deployment process (e.g., hosting services, dataset providers, and downstream application developers and deployers).
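
To make the first of these tradeoffs concrete, here is a minimal sketch of naive, regex-based redaction of personal information from training text. The patterns and the scrub helper below are illustrative assumptions, not anything drawn from the report; production pipelines typically rely on trained named-entity recognizers and far broader pattern sets.

    import re

    # Illustrative-only patterns; real PII detection covers far more
    # (names, addresses, government IDs) and usually uses NER models.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    }

    def scrub(text: str) -> str:
        """Replace matched PII spans with placeholder tokens."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    sample = "Contact Dr. Smith at smith@example.org or 212-555-0137 about the trial."
    print(scrub(sample))  # Contact Dr. Smith at [EMAIL] or [PHONE] about the trial.

Even this toy filter illustrates the tradeoff participants raised: the placeholder tokens shift the text distributions a model learns from, over-broad patterns can delete benign content, and narrow ones (note that the name “Dr. Smith” passes through untouched) leave privacy harms in place.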

To learn more, check out the full report.


