ChaTA: Towards an Intelligent Question-Answer Teaching Assistant using Open-Source LLMs
Authors: Yann Hicke, Anmol Agarwal, Qianou Ma, Paul Denny
Abstract: To address the challenges of scalable and intelligent question-answering (QA), we introduce an innovative solution that leverages open-source Large Language Models (LLMs) to ensure data privacy. We use models from the LLaMA-2 family together with augmentations including retrieval-augmented generation (RAG), supervised fine-tuning (SFT), and an alternative to reinforcement learning from human feedback (RLHF). We perform our experiments on a Piazza dataset from an introductory CS course containing 10k QA pairs and 1.5k preference pairs, and conduct both human evaluations and automatic LLM evaluations on a small subset. We find preliminary evidence that these modeling techniques collectively improve answer quality by 33%, and that RAG is a particularly impactful addition. This work paves the way for the development of ChaTA, an intelligent QA assistant customizable for courses with an online QA platform.
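To make the retrieval component of such a pipeline concrete, below is a minimal sketch of a RAG loop over historical course-forum posts. It is an illustration under stated assumptions, not the paper's implementation: the embedding model choice, the prompt template, and the `generate` stub are hypothetical stand-ins.

```python
"""Minimal RAG sketch for forum QA (illustrative only, not the paper's code)."""
from sentence_transformers import SentenceTransformer, util

# Historical course posts serving as the retrieval corpus (toy examples).
corpus = [
    "Q: How do I set up my Java environment? A: Install JDK 17 and set JAVA_HOME.",
    "Q: What causes a NullPointerException? A: Dereferencing a null reference.",
]

# Assumed embedding model; any sentence-embedding model would do here.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus posts most similar to the question (cosine similarity)."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, corpus_emb)[0]
    top = scores.topk(min(k, len(corpus)))
    return [corpus[i] for i in top.indices.tolist()]

def build_prompt(question: str) -> str:
    """Prepend the retrieved posts as context for the answering LLM."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nStudent question: {question}\nAnswer:"

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a LLaMA-2-style completion call."""
    raise NotImplementedError("plug in your LLM inference backend here")
```

In a full system, `generate` would call an SFT- or preference-tuned LLaMA-2 model; the retrieval step shown here is what grounds its answers in course-specific content.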