
Thoughts on the privacy threats and personalisation opportunities of qualitative inference with large language models – Ana Canhoto



I have come across the paper entitled “Beyond Memorization: Violating Privacy Via Inference with Large Language Models”, authored by Robin Staab, Mark Vero, Mislav Balunović and Martin Vechev.

Staab and his team investigated “whether current LLMs could violate individuals’ privacy by inferring personal attributes from text”. Using prompts and techniques that, to me, seem quite possible for someone without a computer science degree to replicate, the authors were able to correctly infer many personal characteristics of the authors of various Reddit posts.
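
To make that concrete, here is a minimal sketch of what such an attribute-inference prompt could look like. It is my own approximation, not the authors’ actual prompt (their prompts are documented in the paper’s appendices), and the OpenAI client usage, model name and example comment are assumptions for illustration only.

```python
# My own approximation of an attribute-inference prompt of the kind the paper
# describes; not the authors' actual prompt. The model name, client usage and
# the example comment are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

example_post = (
    "Ugh, my commute is a nightmare again - stuck waiting for a hook turn "
    "while the trams crawl past."
)

prompt = (
    "Read the following social-media comment and infer, where possible, the "
    "author's likely location, gender and age range. Give your best guess and "
    "a one-sentence justification for each attribute.\n\n"
    f"Comment: {example_post}"
)

response = client.chat.completions.create(
    model="gpt-4",  # any capable chat model would illustrate the point
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```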

As shown in Figure 5 of the paper, all tools were good at predicting gender and location, with GPT-4 performing particularly well – it could accurately infer close to 85% of personal attributes, at roughly 100x lower cost and in 240x less time than human analysts. As the last column on the right of that figure shows, human labelling could achieve higher levels of accuracy than the LLMs. However, it is important to note that the human analysts could also access additional information, such as reading comments or using search engines, to fill in the gaps.

The authors also find that “common mitigations, i.e., text anonymization and model alignment, are currently ineffective at protecting user privacy against LLM inference”, concluding that LLMs’ ability to infer personal data “at a previously unattainable scale” (p. 1) presents a significant privacy threat.

The paper by Staab and colleagues (which contains very helpful appendices detailing the process and prompts used!) also included an exercise in which the LLMs entered into a conversation with the user, personalised on the basis of the earlier profile analysis, and collected additional information about the user along the way. This exercise was yet another illustration of the privacy threat presented by LLM-based chatbots that try to extract personal information through seemingly benign questions. However, in my view, it also shows the great opportunity of using this technology for personalised support of the kind formerly available only to companies with very deep pockets and highly skilled staff.
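
As a rough illustration of how such a profile-driven conversation might be wired up – the same mechanics serve both the extraction threat the paper describes and the benign personalisation discussed below – here is a hedged sketch. The profile fields, system prompt wording and client usage are my own assumptions, not the authors’ implementation.

```python
# Sketch of a profile-conditioned chat turn: the system prompt is seeded with
# attributes previously inferred about the user, so replies (and follow-up
# questions) can be tailored to them. All profile fields and wording are
# illustrative assumptions, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

inferred_profile = {
    "location": "Melbourne, Australia",
    "age_range": "25-35",
    "occupation": "nurse",
}

system_prompt = (
    "You are a friendly support assistant. What has been inferred about the user so far: "
    + "; ".join(f"{k}: {v}" for k, v in inferred_profile.items())
    + ". Use this profile to tailor your reply, and end with one natural follow-up question."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I've been feeling completely worn out after my shifts lately."},
]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```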

The inference potential of LLMs opens the path for small businesses to offer personalised service; for over-stretched charities to offer support to more people; and for people who, for a range of reasons, can’t access traditional health service providers to get advice and guidance.

Obviously, we would want some level of human oversight of the advice given – not least to avoid harmful recommendations. And, presumably, the LLM interaction would be part of a customer journey – an entry point or filtering stage prior to action by a human, or, alternatively, a way to follow up on staff interactions.

But, overall, it is quite an exciting opportunity, right?

The challenge, as in other areas of personalisation, is to strike a balance between i) addressing the privacy threats posed by LLMs’ inference capabilities, as highlighted by Staab and his team; ii) leveraging the opportunities that LLMs offer for democratising access to personalised support; and iii) not pushing beyond the level of personalisation that users are willing to accept.

But, in principle, it seems very possible to use LLMs’ inference capabilities to democratise access to a high level of personalised support, in a variety of contexts. Have you done any work in this area?




