wikitext dataset using the GPT-2 model.
You will see that this straightforward process only requires two components: a training script and a dataset. The training script defines the model, sets up and applies hyperparameters, runs the training loop, and applies the corresponding evaluation logic, while the dataset contains the data that will be used to train the model.
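To make those four responsibilities concrete, here is a toy sketch of the same structure — define a model, set hyperparameters, run a training loop, evaluate — using a one-parameter model fit by gradient descent. This is purely illustrative; the actual training script in this experiment uses the HuggingFace Transformers Trainer instead.

```python
# Toy sketch of a training script's four responsibilities.
# The model here is a single weight w predicting y = w * x.

# 1. Define the model.
w = 0.0

# 2. Set hyperparameters and training data (ground truth: w = 2).
learning_rate = 0.01
epochs = 100
data = [(x, 2.0 * x) for x in range(1, 6)]

# 3. Run the training loop (SGD on the squared error).
for _ in range(epochs):
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= learning_rate * grad

# 4. Evaluate: mean squared error over the data.
mse = sum((w * x - y) ** 2 for x, y in data) / len(data)
print(round(w, 3), mse < 1e-6)  # → 2.0 True
```

A real LLM training script does the same things at scale: the model is a transformer, the hyperparameters arrive as command-line flags, and evaluation runs on a held-out split.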
Connect to GitHub (if needed)
If you haven’t already connected FlexAI to GitHub, you’ll need to set up a code registry connection:

This will allow FlexAI to pull repositories directly from GitHub using the -u flag in training commands.

Preparing the Dataset
In this experiment, we will use a pre-processed version of the wikitext dataset that has been set up for the GPT-2 model.

If you’d like to reproduce the pre-processing steps yourself, whether to use a different dataset or simply to learn more about the process, you can refer to the Manual Dataset Pre-processing section below.
- Download the dataset:

- Upload the dataset (located in gpt2-tokenized-wikitext/) to FlexAI:
Train the Model
Now, it’s time to train your LLM on the dataset you just pushed in the previous step, gpt2-tokenized-wikitext. This experiment uses the GPT-2 model; however, the training script we will use leverages the HuggingFace Transformers Trainer class, which makes it easy to replace GPT-2 with another model from the HuggingFace Model Hub.

To start the Training Job, run the following command:

The first line defines the 3 main components required to run a Training Job in FlexAI:
- The Training Job’s name (first-ddp-training-job).
- The URL of the repository containing the training script (https://github.com/flexaihq/blueprints).
- The name of the dataset to be used (gpt2-tokenized-wikitext).

The training script to be executed is then specified (code/causal-language-modeling/train.py). After the third line come the script’s arguments, which are passed to the script when it is executed to adjust the Training Job hyperparameters or customize its behavior. For instance, --max_train_samples and --max_eval_samples can be used to tweak the sample size.

Checking up on the Training Job
You can check the status and life cycle events of your Training Job by running:

Additionally, you can view the logs of your Training Job by running:
Fetching the Trained Model artifacts
Once the Training Job completes successfully, you will be able to list all the produced checkpoints:

They can be downloaded with:

You now have a trained model that you can use for inference or further fine-tuning! Check out the Optional Extra Steps section below for more information on how to run your fine-tuned model locally, or even better, how to run the training script directly on FlexAI using an Interactive Training Session. You can also learn how to manually pre-process the dataset if you’re interested in understanding the process better.

You can also have a look at other FlexAI experiments within this repository to explore more advanced use cases and techniques.
Optional Extra Steps
Try your fine-tuned model locally
You can run your newly fine-tuned model in a FlexAI Interactive Session or in a local environment (e.g. pipenv install --python 3.10), provided you have hardware capable of running inference.
1. Clone this repository
If you haven’t already, clone this repository on your host machine:

2. Install the dependencies
Depending on your environment, you might need to install the experiments’ dependencies (if you haven’t already) by running:

3. Extract the model artifacts
First, list the available checkpoints from your Training Job:

Then, download the checkpoint you want to use (replacing <CHECKPOINT-ID> with the actual checkpoint ID from the list):
The artifacts will be downloaded into a checkpoint directory. Make note of this location, as you will use it next.
4. Run the inference script
Run the script made for inference on this model by running the command below, replacing PATH_TO_THE_CHECKPOINT_DIRECTORY with the path to the checkpoint directory you downloaded:
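As a rough illustration of what local inference on such a checkpoint involves, the sketch below loads a causal language model from a checkpoint directory with the HuggingFace Transformers API and generates a continuation for a prompt. The function name generate_text and the generation settings are illustrative assumptions, not the repository’s actual inference script.

```python
# Illustrative sketch only -- not the repository's inference script.
# Loads a causal LM checkpoint directory (or Hub model ID) and greedily
# generates a continuation for a prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_text(checkpoint_dir: str, prompt: str, max_new_tokens: int = 50) -> str:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
    model = AutoModelForCausalLM.from_pretrained(checkpoint_dir)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,  # greedy decoding for reproducible output
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


# Example usage (replace with the checkpoint directory you downloaded):
# print(generate_text("PATH_TO_THE_CHECKPOINT_DIRECTORY", "The meaning of life is"))
```

Because the Trainer saves checkpoints in the standard HuggingFace format, from_pretrained can load them directly from the downloaded directory.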
Run the training script directly on FlexAI using an Interactive Training Session
An Interactive Training Session allows you to connect to a Training Environment runtime on FlexAI and run both your training and prediction or inference scripts directly from this environment. This is a great way to test your scripts and experiment with different hyperparameters without having to create a new Training Job for every configuration change. You will find the guide on how to run an Interactive Training Session in the FlexAI Documentation. You’ll need to use the path of the flexaihq/blueprints repository as your --repository-url and pass the gpt2-tokenized-wikitext dataset you pushed earlier as --dataset, unless you want to leverage the Interactive Training Session’s compute resources to manually pre-process the dataset.
Manual Dataset Pre-processing
To prepare and save the wikitext dataset for the GPT-2 model, run the following command:
The processed dataset will be saved in the directory specified by --tokenized_dataset_save_dir, in this case: gpt2-tokenized-wikitext.
Keep in mind that you can use other combinations of datasets and models available on HuggingFace.
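For context on what this pre-processing typically does: causal-language-modeling pipelines (such as the HuggingFace run_clm-style examples) tokenize the raw text, concatenate all the token IDs, and split them into fixed-size blocks. The sketch below shows just that chunking step in pure Python; the token IDs and block size are toy values, and the actual script’s parameters may differ.

```python
# Sketch of the chunking step used in typical causal-LM pre-processing:
# token IDs from all documents are concatenated, then split into
# contiguous, non-overlapping blocks of block_size tokens.

def group_texts(tokenized_documents, block_size):
    # Concatenate every document's token IDs into one long stream.
    concatenated = [tok for doc in tokenized_documents for tok in doc]
    # Drop the tail that doesn't fill a complete block.
    total_length = (len(concatenated) // block_size) * block_size
    # Cut the stream into fixed-size training examples.
    return [concatenated[i : i + block_size] for i in range(0, total_length, block_size)]


docs = [[1, 2, 3, 4, 5], [6, 7, 8], [9, 10]]  # toy token IDs
print(group_texts(docs, 4))  # → [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Packing tokens into fixed-size blocks like this is what lets every training example fully utilize the model’s context window, regardless of how long the original documents were.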