INF 385T: Special Topics in Information Science: Fine Tuning Open Source Large Language Models

Fall Term 2025
Mode: In-Person
Instructor
Syllabus: to come
Program: MSIS/PhD
Unique ID: 29995
Day: Thursday
Start: 6:30 pm
End: 9:30 pm
Building: UTA
Room: 1.212

Catalog Description

Introduction to fine-tuning open-source large language models. Students will gain hands-on experience in data preparation, model fine-tuning, and performance evaluation using popular open-source frameworks.

Instructor Description

Introduction to fine-tuning open-source Large Language Models (LLMs) through project-based applications and real-world examples. The course will begin with a foundational understanding of Natural Language Processing (NLP), focusing on text preprocessing techniques such as tokenization and vectorization. A basic overview of large language models will be provided, covering the fundamental structure and architecture of commonly used open-source frameworks. The course will then focus on three key areas of fine-tuning and evaluating LLMs:

  • LLM performance and quality metrics
  • Supervised learning
  • Reinforcement learning

Each area will be explored through both theoretical explanations and practical group-based projects that apply these concepts to real-world examples. Students will engage in hands-on projects to strengthen their understanding of how to customize and optimize LLMs for specific tasks or domains.
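
To give a flavor of the text preprocessing covered early in the course, the short sketch below shows tokenization with an open-source toolkit. The Hugging Face transformers library and the gpt2 tokenizer are illustrative assumptions only, not a statement of which frameworks or models the course will use.

    # Minimal tokenization sketch. The "transformers" library and the
    # "gpt2" tokenizer are illustrative choices; the course's actual
    # frameworks and models may differ.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    text = "Fine-tuning adapts a pretrained model to a specific task."
    tokens = tokenizer.tokenize(text)   # human-readable subword pieces
    encoded = tokenizer(text)           # integer IDs a model consumes

    print(tokens)
    print(encoded["input_ids"])

Vectorization, in this context, refers to mapping those subword tokens to the integer IDs (and ultimately the embeddings) that the model operates on.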

Prerequisites

INF 380P (Introduction to Programming) or prior experience with Python is strongly recommended.