
[2305.08845] Large Language Models are Zero-Shot Rankers for Recommender Systems



By Yupeng Hou and 6 other authors

Abstract

Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive general-purpose task-solving abilities, including the potential to approach recommendation tasks. Along this line of research, this work aims to investigate the capacity of LLMs to act as the ranking model for recommender systems. We first formalize the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and the items retrieved by other candidate generation models as candidates. To solve the ranking task with LLMs, we carefully design the prompting template and conduct extensive experiments on two widely used datasets. We show that LLMs have promising zero-shot ranking abilities but (1) struggle to perceive the order of historical interactions, and (2) can be biased by popularity or by item positions in the prompts. We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies. Equipped with these insights, zero-shot LLMs can even challenge conventional recommendation models when ranking candidates retrieved by multiple candidate generators. The code and processed datasets are available at this https URL.
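The abstract's setup can be sketched in code: a prompt that conditions on the sequential interaction history and asks the LLM to rank retrieved candidates, plus a bootstrapping step that shuffles candidate order across rounds to counter the position bias the authors report. This is an illustrative template only; the paper's actual prompts, item formats, and aggregation details live in its released repository, and the wording below is an assumption.

```python
import random


def build_ranking_prompt(history, candidates):
    """Build a zero-shot conditional-ranking prompt: the user's
    sequential interaction history is the condition, and items from
    candidate generation models are the options to rank.
    (Hypothetical template; not the paper's exact wording.)"""
    hist = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(history))
    cand = "\n".join(f"{chr(65 + i)}. {title}" for i, title in enumerate(candidates))
    return (
        "I have interacted with the following items, in order:\n"
        f"{hist}\n\n"
        "Rank the following candidate items by how likely I am to "
        "interact with them next:\n"
        f"{cand}\n\n"
        "Answer with the candidate letters only, most likely first."
    )


def bootstrapped_candidate_orders(candidates, n_rounds=3, seed=0):
    """Mitigate position bias by prompting several times with the
    candidate list shuffled differently each round; the per-round
    rankings would then be aggregated (e.g., by average rank)."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_rounds):
        order = candidates[:]  # copy so the original list is untouched
        rng.shuffle(order)
        orders.append(order)
    return orders
```

Each shuffled order would be formatted with `build_ranking_prompt` and sent to the LLM; averaging an item's rank across rounds removes any advantage it gains purely from appearing early in the list.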

Submission history

From: Yupeng Hou
[v1] Mon, 15 May 2023 17:57:39 UTC (308 KB)
[v2] Wed, 24 Jan 2024 04:41:01 UTC (335 KB)


