Download a training file such as `alpaca_data.json` (for instruction tuning; it can be downloaded from [here](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json)) and upload it to the server.
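A minimal sketch of the upload call, assuming the service exposes an OpenAI-compatible `/v1/files` endpoint on the same port (8015) used by the fine-tuning endpoints later in this document:

```bash
# Upload the training file; "purpose" follows the OpenAI files API convention.
curl http://${your_ip}:8015/v1/files \
  -X POST \
  -H "Content-Type: multipart/form-data" \
  -F "file=@./alpaca_data.json" \
  -F purpose="fine-tune"
```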
For fine-tuning reranking and embedding models, the training file [toy_finetune_data.jsonl](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/toy_finetune_data.jsonl) is a toy example.
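For reference, each line of that file is a standalone JSON object with a `query` string and `pos`/`neg` lists of relevant and irrelevant passages; the values below are illustrative placeholders, not lines from the actual file:

```json
{"query": "an example search query", "pos": ["a passage relevant to the query"], "neg": ["a passage not relevant to the query"]}
```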
### 3.2 Create fine-tuning job

#### 3.2.1 Instruction Tuning
After a training file like `alpaca_data.json` is uploaded, launch a fine-tuning job using `meta-llama/Llama-2-7b-chat-hf` as the base model.
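A sketch of the job-creation request, assuming the same OpenAI-compatible `/v1/fine_tuning/jobs` endpoint shown in section 3.3 accepts job creation via POST, with field names following the OpenAI fine-tuning API:

```bash
# Create a fine-tuning job from the uploaded training file.
curl http://${your_ip}:8015/v1/fine_tuning/jobs \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "training_file": "alpaca_data.json",
    "model": "meta-llama/Llama-2-7b-chat-hf"
  }'
```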
To launch a job for LLM pretraining instead, use a base model such as `meta-llama/Llama-2-7b-hf`.
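A sketch of the same request adapted for pretraining; the `General.task` field and the training file name are assumptions for illustration, so check [finetune_config](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/finetune_config.py) for the authoritative parameter names:

```bash
# Launch a pretraining job; "task": "pretraining" is an assumed
# service-specific extension, and the file name is a placeholder.
curl http://${your_ip}:8015/v1/fine_tuning/jobs \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "training_file": "pretrain_data.jsonl",
    "model": "meta-llama/Llama-2-7b-hf",
    "General": {
      "task": "pretraining"
    }
  }'
```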
Below is an example of the format of the pretraining dataset:

```json
{"text": "A boy with a blue tank top sitting watching three dogs."}
```
### 3.3 Manage fine-tuning job
The commands below show how to list fine-tuning jobs, retrieve a fine-tuning job, cancel a fine-tuning job, and list the checkpoints of a fine-tuning job.
```bash
# cancel one fine-tuning job
curl http://localhost:8015/v1/fine_tuning/jobs/cancel -X POST -H "Content-Type: application/json" -d '{"fine_tuning_job_id": ${fine_tuning_job_id}}'

# list checkpoints of a fine-tuning job
curl http://${your_ip}:8015/v1/finetune/list_checkpoints -X POST -H "Content-Type: application/json" -d '{"fine_tuning_job_id": ${fine_tuning_job_id}}'
```
### 3.4 Leverage fine-tuned model
After the fine-tuning job is done, the fine-tuned model can be chosen from the listed checkpoints and then used in other microservices. For example, a fine-tuned reranking model can be used in the [reranks](../reranks/README.md) microservice by assigning its path to the environment variable `RERANK_MODEL_ID`, a fine-tuned embedding model can be used in the [embeddings](../embeddings/README.md) microservice by assigning its path to the environment variable `model`, and an LLM after instruction tuning can be used in the [llms](../llms/README.md) microservice by assigning its path to the environment variable `your_hf_llm_model`.
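A minimal sketch of wiring a checkpoint into the reranks microservice; the checkpoint path is a hypothetical placeholder for one of the paths returned by the `list_checkpoints` call above:

```bash
# Point the reranks microservice at a fine-tuned checkpoint
# (the path below is an assumed placeholder).
export RERANK_MODEL_ID="/path/to/fine_tuning_output/checkpoint-100"
```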
## 🚀4. Descriptions of fine-tuning parameters
We utilize the [OpenAI fine-tuning parameters](https://platform.openai.com/docs/api-reference/fine-tuning) and extend them with more customizable parameters; see the definitions in [finetune_config](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/finetune_config.py).
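As an illustration, a job-creation request can carry an OpenAI-style `hyperparameters` object; whether every OpenAI field is honored by this service is an assumption, so treat this as a sketch and consult `finetune_config.py` for the authoritative list:

```bash
# Create a job with OpenAI-style hyperparameters (values are illustrative).
curl http://${your_ip}:8015/v1/fine_tuning/jobs \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "training_file": "alpaca_data.json",
    "model": "meta-llama/Llama-2-7b-chat-hf",
    "hyperparameters": {
      "n_epochs": 2,
      "batch_size": 4,
      "learning_rate_multiplier": 1.0
    }
  }'
```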