
Fine-Tuning Practice

The CodeFriends fine-tuning practice environment walks you through the following three steps.


1. Select Training Data

Click the + Select Data button to create your own training data, or choose sample data.

The training data must be in JSONL format and include at least 10 question-answer pairs.
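
For reference, each line of such a JSONL file holds one complete question-answer example. A minimal sketch in OpenAI's chat fine-tuning format (the questions and answers below are made-up placeholders):

    {"messages": [{"role": "user", "content": "What is fine-tuning?"}, {"role": "assistant", "content": "Additional training of a pre-trained model on your own data."}]}
    {"messages": [{"role": "user", "content": "What format does the training data use?"}, {"role": "assistant", "content": "JSONL, with one example per line."}]}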


2. Set Hyperparameters

Set the hyperparameters: Batch Size, Learning Rate, and Epoch Number. These are the same values you enter when fine-tuning a GPT model on the OpenAI platform.
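
As a rough sketch of how these three values map onto a fine-tuning job on the OpenAI platform (assuming the openai Python SDK v1.x; the model name, file ID, and values are placeholders):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The three practice hyperparameters correspond to OpenAI's job settings;
    # note that OpenAI takes the learning rate as a multiplier, not a raw value.
    job = client.fine_tuning.jobs.create(
        model="gpt-3.5-turbo",
        training_file="file-abc123",          # placeholder ID of an uploaded JSONL file
        hyperparameters={
            "batch_size": 8,                  # Batch Size
            "learning_rate_multiplier": 0.1,  # Learning Rate
            "n_epochs": 3,                    # Epoch Number
        },
    )
    print(job.id)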


3. Execute Fine-Tuning

Enter a name for the fine-tuned model and press the Execute Fine-Tuning button. Once fine-tuning is complete, you can chat with the tuned model.
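
On the OpenAI platform, the finished model is then called like any other chat model. A minimal sketch (the ft:... model name is a placeholder of the kind returned by a completed job):

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="ft:gpt-3.5-turbo:my-org::abc123",  # placeholder fine-tuned model name
        messages=[{"role": "user", "content": "What is fine-tuning?"}],
    )
    print(response.choices[0].message.content)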


The learning objective of [Understanding Fine-Tuning] is to acquire the essential AI knowledge required for fine-tuning and to develop the ability to perform fine-tuning independently on the OpenAI platform. Because of OpenAI policy and technical constraints, CodeFriends does not perform actual fine-tuning in this practice environment.


What Happens When Learning Proceeds?

During fine-tuning, the weights and biases of the AI model are updated according to the configured hyperparameters (a simple numeric sketch follows the list below).

  • Weights: Determine how important each feature of the input data is

  • Bias: A value that shifts a neuron's output so it is not skewed in one direction; it adjusts the threshold of the activation function, i.e., the level of input needed to activate the neuron
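
To make this concrete, here is a tiny plain-Python sketch (not OpenAI code) of gradient descent on a single neuron, prediction = w * x + b, fitted to data from y = 2x + 1. The learning rate scales each update to the weight and bias, and the epoch count sets how many passes are made over the data:

    w, b = 0.0, 0.0
    learning_rate = 0.1

    for epoch in range(100):                   # Epoch Number: passes over the data
        for x, y in [(1.0, 3.0), (2.0, 5.0)]:  # two training examples
            error = (w * x + b) - y
            # Gradient of the squared error with respect to w and b
            w -= learning_rate * error * x     # weight update
            b -= learning_rate * error         # bias update

    print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0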


Fine-Tuning Practice Preview

In the upcoming lessons, we will delve into the JSONL data format, basic AI theory, and the hyperparameters used in fine-tuning. Try this simple fine-tuning practice to get a sneak peek at what you'll be learning.

Mission

When fine-tuning a GPT model with OpenAI, what is the minimum number of question-answer pair data required?

1 pair

10 pairs

50 pairs

100 pairs
