Xinye Wanyan
PhD Candidate, RMIT University
Large language models (LLMs) have shown strong semantic comprehension and extensive external knowledge, and they have been incorporated into recommendation systems in multiple roles. However, existing bias evaluation pipelines designed for conventional recommender systems are not fully applicable to recommendation systems built with LLMs (RecLLMs), and most bias mitigation methods are limited to a single intervention stage, making them inadequate for addressing the overall bias of these complex systems. Xinye will introduce a comprehensive evaluation framework designed to assess the biases within RecLLMs and their constituent sub-modules. In addition, a calibrated synthetic benchmark dataset, generated with LLMs, will be developed to facilitate bias evaluation and mitigation experiments.
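As a loose illustration of the kind of measurement such an evaluation involves (this is not Xinye's framework, and the item groups, rank discount, and metric choice are all assumptions made here for illustration), the sketch below computes a simple group-exposure disparity over top-k recommendation lists.

```python
# Illustrative sketch only: one common way to quantify exposure bias in
# recommendation lists. The grouping, 1/rank discount, and max-minus-min
# disparity are assumptions for this example, not the talk's methodology.
from collections import defaultdict

def group_exposure(recommended_items, item_group, k=10):
    """Sum position-discounted exposure per item group over the top-k slots."""
    exposure = defaultdict(float)
    for rank, item in enumerate(recommended_items[:k], start=1):
        exposure[item_group[item]] += 1.0 / rank  # simple 1/rank discount
    return exposure

def exposure_disparity(rec_lists, item_group, k=10):
    """Max minus min average group exposure; 0 means equal exposure."""
    totals = defaultdict(float)
    for recs in rec_lists:
        for group, value in group_exposure(recs, item_group, k).items():
            totals[group] += value
    averages = {g: v / len(rec_lists) for g, v in totals.items()}
    return max(averages.values()) - min(averages.values())

# Toy usage: two recommendation lists over items tagged with groups "A"/"B".
item_group = {"i1": "A", "i2": "A", "i3": "B", "i4": "B"}
rec_lists = [["i1", "i3", "i2"], ["i1", "i2", "i4"]]
print(exposure_disparity(rec_lists, item_group, k=3))
```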
Xinye is a scholarship recipient of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and is supervised by Prof. Jeffrey Chan and Dr. Danula Hettiachchi.