Hi 👋, I'm a graduate student specializing in Natural Language Processing (NLP), with a particular focus on improving
the factuality and reliability of Large Language Models. My research aims to advance how AI systems generate
truthful and verifiable information, making them more trustworthy for real-world applications.
My research interests include:

- **Hallucination Detection and LLM Factuality**: Developing methods to detect and prevent language models from
generating false or unsupported information, including novel evaluation frameworks and techniques that improve the
truthfulness and verifiability of their outputs.
- **Retrieval-Augmented Language Models**: Enhancing large language models by efficiently retrieving and
incorporating relevant information from external knowledge sources.
- **Reinforcement Learning**: Drawing parallels to human learning, applying reinforcement learning techniques to
NLP tasks, particularly in scenarios where continuous adaptation and learning from feedback are crucial.
Feel free to reach out if you're interested in discussing NLP, reinforcement learning, or potential
collaborations in these exciting fields!