
Fine-tuning Llama 3.2 on Your Data with torchtune

Modern open LLMs are closing the gap with their closed counterparts, but they still require substantial compute for inference (getting predictions). Luckily, smaller (0.5B–3B) LLMs are very capable and can be fine-tuned on your custom data. In this tutorial, we'll fine-tune Llama 3.2 1B on a mental health sentiment dataset using torchtune.

Tutorial Goals

In this tutorial, you will:

  • Prepare a custom dataset for training
  • Evaluate the base (untrained) model
  • Train the model on the custom dataset
  • Upload and evaluate the trained model
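The first step above, dataset preparation, might look like the minimal sketch below: converting raw (statement, label) rows into instruction-tuning records and writing them as JSONL. The column semantics and the JSONL record keys (`instruction`, `input`, `output`) are assumptions based on common instruct-dataset conventions, not the tutorial's exact format:

```python
import json

# Hypothetical raw rows: (statement, sentiment label) pairs,
# mirroring the structure of a mental health sentiment dataset.
rows = [
    ("I feel hopeless and can't sleep at night.", "Depression"),
    ("Had a great walk with friends today!", "Normal"),
]

def to_record(statement, label):
    """Convert one row into an instruction-tuning record."""
    return {
        "instruction": "Classify the mental health status of the following statement.",
        "input": statement,
        "output": label,
    }

records = [to_record(s, l) for s, l in rows]

# Write JSONL so an instruct-style dataset builder can consume it.
with open("train.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```

Each line of the resulting file is one self-contained training example, which is the usual input shape for instruct-style fine-tuning datasets.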

Will the fine-tuned model outperform the base model? Let’s find out!

What is torchtune?

torchtune is a PyTorch-native library for fine-tuning LLMs. It ships training recipes (full fine-tuning, LoRA, QLoRA) together with YAML configs for popular model families, including Llama 3.2, and integrates with the HuggingFace Hub for downloading model weights.
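With torchtune installed, the workflow in this tutorial can be driven from its `tune` CLI. A sketch under the assumption that the 1B LoRA single-device config is the one used here (run `tune ls` to see the recipes and configs available in your install):

```shell
# Download the base model weights from the HuggingFace Hub
# (requires accepting the Llama license and an HF access token)
tune download meta-llama/Llama-3.2-1B-Instruct \
  --output-dir ./Llama-3.2-1B-Instruct

# Fine-tune with LoRA on a single GPU, using a built-in config;
# individual config values can be overridden on the command line
tune run lora_finetune_single_device \
  --config llama3_2/1B_lora_single_device
```

The `tune run` step is where the custom dataset from the preparation step gets plugged in, via the dataset fields of the config.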

References

  1. torchtune documentation

  2. Sentiment Analysis for mental health

  3. Llama 3.2 Instruct on HuggingFace Hub