Personas within Parameters: Fine-Tuning Small Language Models with Low-Rank Adapters to Mimic User Behaviors
December 2024
Used dataset distillation and low-rank fine-tuning to adapt Small Language Models (SLMs) for simulating user agents in recommender systems. Our experiments provide empirical evidence that user agents built with this approach can help bridge the gap between offline metrics and the real-world performance of recommender systems.
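The low-rank fine-tuning referred to here can be illustrated with a minimal from-scratch sketch of a LoRA-style adapter in PyTorch. This is not the paper's implementation; the class name, rank, and scaling are illustrative assumptions. The frozen base weights stay fixed while only the two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch).

    Effective weight: W + (alpha / r) * B @ A, where only A (r x in)
    and B (out x r) are trainable. B starts at zero, so training begins
    from the pretrained model's behavior.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Usage: wrap a 64->64 layer with a rank-4 adapter
layer = LoRALinear(nn.Linear(64, 64), r=4)
x = torch.randn(2, 64)
out = layer(x)
```

With rank 4, only 512 parameters are trainable versus 4,160 in the base layer, which is the efficiency argument for low-rank adapters on small models.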
Enhancing Contract Negotiations with LLM-Based Legal Document Comparison
October 2024
To our knowledge, this approach is the first in the literature to produce a natural-language comparison between legal contracts and their template documents.
Systematic Evaluation of Long-Context LLMs on Financial Concepts
October 2024
Evaluated the state-of-the-art GPT-4 suite of long-context (LC) LLMs on a series of progressively challenging tasks built from a real-world financial news dataset, measuring performance as a function of factors such as context length, task difficulty, and the position of key information.
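One of the factors above, the position of key information, is typically probed by inserting a key fact at varying relative depths in a long context. A minimal sketch of such a probe builder follows; the function name and filler text are illustrative assumptions, not from the paper.

```python
def build_probe(filler_sentences, key_fact, position):
    """Insert key_fact at a relative position in the context.

    position: 0.0 places the fact at the start, 1.0 at the end.
    """
    idx = round(position * len(filler_sentences))
    parts = filler_sentences[:idx] + [key_fact] + filler_sentences[idx:]
    return " ".join(parts)

# Usage: hypothetical filler plus one key financial fact at mid-context
filler = [f"Filler sentence {i}." for i in range(10)]
probe = build_probe(filler, "KEY: revenue rose 12% in Q3.", position=0.5)
```

Sweeping `position` from 0.0 to 1.0 while growing the filler yields a grid of (context length, fact depth) conditions against which task accuracy can be reported.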
AR-NLU: A Framework for Enhancing Natural Language Understanding Model Robustness against ASR Errors
June 2024
A major challenge with pipeline spoken language understanding systems is that errors in the upstream automatic speech recognition (ASR) engine adversely impact downstream natural language understanding (NLU) models. To address this challenge, we propose an ASR-Robust NLU (AR-NLU) framework that extends a pre-existing NLU model by training it simultaneously on two input streams: human-generated (gold) transcripts and noisy ASR transcripts.
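The two-stream training idea can be sketched as follows: the same NLU classifier receives paired gold and ASR transcripts and is supervised with the same label on both, so the combined loss encourages robustness to ASR noise. This is a hedged illustration with a toy bag-of-words model and random data, not the AR-NLU architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNLU(nn.Module):
    """Toy intent classifier standing in for the pre-existing NLU model."""
    def __init__(self, vocab_size=100, dim=32, n_intents=5):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # bag-of-words encoder
        self.head = nn.Linear(dim, n_intents)

    def forward(self, token_ids):
        return self.head(self.emb(token_ids))

model = TinyNLU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Paired inputs: gold transcript ids and their noisy ASR counterparts
gold = torch.randint(0, 100, (8, 12))
asr = torch.randint(0, 100, (8, 12))
labels = torch.randint(0, 5, (8,))

logits_gold = model(gold)
logits_asr = model(asr)
# Joint objective: supervise both streams with the same intent label
loss = F.cross_entropy(logits_gold, labels) + F.cross_entropy(logits_asr, labels)
loss.backward()
opt.step()
```

In a real pipeline the ASR stream would come from decoding the audio, and the two loss terms could be weighted to trade off clean-transcript accuracy against noise robustness.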