Commit 1a3d66d

Added README.md for main with examples and explanations
1 parent c50b628 commit 1a3d66d

File tree

1 file changed: +149 -2 lines changed

examples/main/README.md

Lines changed: 149 additions & 2 deletions

# llama.cpp/example/main

This example program allows you to use various LLaMA language models in an easy and efficient way. It is specifically designed to work with the llama.cpp project, which provides a plain C/C++ implementation with optional 4-bit quantization support for faster, lower memory inference, and is optimized for desktop CPUs. This program can be used to perform various inference tasks with LLaMA models, including generating text based on user-provided prompts and chat-like interactions with reverse prompts.

## Table of Contents

1. [Quick Start](#quick-start)
2. [Common Options](#common-options)
3. [Input Prompts](#input-prompts)
4. [Interaction](#interaction)
5. [Context Management](#context-management)
6. [Generation Flags](#generation-flags)
7. [Performance Tuning and Memory Options](#performance-tuning-and-memory-options)
8. [Additional Options](#additional-options)

## Quick Start

To get started right away, run the following command, making sure to use the correct path for the model you have:

```bash
./main -m models/7B/ggml-model.bin --prompt "Once upon a time"
```

For an interactive experience, try this command:

```bash
./main -m models/7B/ggml-model.bin -n -1 --color -r "User:" --in-prefix " " --prompt $'User: Hi\nAI: Hello. I am an AI chatbot. Would you like to talk?\nUser: Sure!\nAI: What would you like to talk about?\nUser:'
```

## Common Options

In this section, we cover the most commonly used options for running the `main` program with the LLaMA models; a combined example follows the list:

- `-m FNAME, --model FNAME`: Specify the path to the LLaMA model file (e.g., `models/llama-7B/ggml-model.bin`).
- `-i, --interactive`: Run the program in interactive mode, allowing you to provide input directly and receive real-time responses.
- `-ins, --instruct`: Run the program in instruction mode, which is particularly useful when working with Alpaca models.
- `-t N, --threads N`: Set the number of threads to use during computation. It is recommended to set this to the number of physical cores your CPU has.
- `-n N, --n_predict N`: Set the number of tokens to predict when generating text. Adjusting this value can influence the length of the generated text.
- `-c N, --ctx_size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.

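For example, several of these options can be used together in one command; the thread count, context size, and token count shown here are illustrative, so adjust them for your hardware:

```bash
# Illustrative values: 8 threads, a 2048-token context, and up to 256 predicted tokens.
./main -m models/7B/ggml-model.bin -t 8 -c 2048 -n 256 --prompt "Once upon a time"
```
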
## Input Prompts

The `main` program provides several ways to interact with the LLaMA models using input prompts:

- `--prompt PROMPT`: Provide a prompt directly as a command-line option.
- `--file FNAME`: Provide a file containing a prompt or multiple prompts.
- `--interactive-first`: Run the program in interactive mode and wait for input right away. (More on this below.)
- `--random-prompt`: Start with a randomized prompt.

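For example, a longer prompt can be kept in a text file and passed with `--file`; the file name `prompt.txt` below is purely illustrative:

```bash
# prompt.txt is an example file containing the prompt text.
./main -m models/7B/ggml-model.bin --file prompt.txt -n 128
```
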
## Interaction

The `main` program offers a seamless way to interact with LLaMA models, allowing users to engage in real-time conversations or provide instructions for specific tasks. The interactive mode can be triggered using various options, including `--interactive`, `--interactive-first`, and `--instruct`.

In interactive mode, users can participate in text generation by injecting their input during the process. Users can press `Ctrl+C` at any time to interject and type their input, followed by pressing `Return` to submit it to the LLaMA model. To submit additional lines without finalizing input, users can end the current line with a backslash (`\`) and continue typing.

### Interaction Options

- `-i, --interactive`: Run the program in interactive mode, allowing users to engage in real-time conversations or provide specific instructions to the model.
- `--interactive-first`: Run the program in interactive mode and immediately wait for user input before starting the text generation.
- `-ins, --instruct`: Run the program in instruction mode, which is specifically designed to work with Alpaca models that excel in completing tasks based on user instructions.
- `--color`: Enable colorized output to visually distinguish between prompts, user input, and generated text.

By understanding and utilizing these interaction options, you can create engaging and dynamic experiences with the LLaMA models, tailoring the text generation process to your specific needs.

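As a minimal example, the following starts in interactive mode, waits for your input before generating anything, and colorizes the output (using the same model path as in the Quick Start section):

```bash
# --interactive-first waits for your input before any text is generated.
./main -m models/7B/ggml-model.bin --interactive-first --color
```
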
### Reverse Prompts

Reverse prompts are a powerful way to create a chat-like experience with a LLaMA model by pausing the text generation when specific text strings are encountered:

- `-r PROMPT, --reverse-prompt PROMPT`: Specify one or multiple reverse prompts to pause text generation and switch to interactive mode. For example, `-r "User:"` can be used to jump back into the conversation whenever it's the user's turn to speak. This helps create a more interactive and conversational experience. However, the reverse prompt doesn't work when it ends with a space.

To overcome this limitation, you can use the `--in-prefix` flag to add a space or any other characters after the reverse prompt.

### In-Prefix

The `--in-prefix` flag is used to add a prefix to your input; primarily, this is used to insert a space after the reverse prompt. Here's an example of how to use the `--in-prefix` flag in conjunction with the `--reverse-prompt` flag:

```sh
./main -r "User:" --in-prefix " "
```

### Instruction Mode

Instruction mode is particularly useful when working with Alpaca models, which are designed to follow user instructions for specific tasks:

- `-ins, --instruct`: Enable instruction mode to leverage the capabilities of Alpaca models in completing tasks based on user-provided instructions.

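As a sketch of typical usage, assuming you have an Alpaca-style model converted to ggml format (the model path below is illustrative):

```bash
# Illustrative Alpaca model path; -ins prompts you for instructions interactively.
./main -m models/alpaca-7B/ggml-model.bin -ins
```
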
## Context Management

During text generation, LLaMA models have a limited context size, which means they can only consider a certain number of tokens from the input and generated text. When the context fills up, the model resets internally, potentially losing some information from the beginning of the conversation or instructions. Context management options help maintain continuity and coherence in these situations.

### Context Size

The `--ctx_size` option allows you to set the size of the prompt context used by the LLaMA models during text generation. A larger context size helps the model to better comprehend and generate responses for longer input or conversations.

- `-c N, --ctx_size N`: Set the size of the prompt context (default: 512). The LLaMA models were built with a context of 2048, which will yield the best results on longer input/inference. However, increasing the context size beyond 2048 may lead to unpredictable results.

### Keep Prompt

The `--keep` option allows users to retain the original prompt when the model runs out of context, ensuring a connection to the initial instruction or conversation topic is maintained.

- `--keep N`: Specify the number of tokens from the initial prompt to retain when the model resets its internal context. By default, this value is set to 0 (meaning no tokens are kept). Use `-1` to retain all tokens from the initial prompt.

By utilizing context management options like `--ctx_size` and `--keep`, you can maintain a more coherent and consistent interaction with the LLaMA models, ensuring that the generated text remains relevant to the original prompt or conversation.

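For example, the two options can be combined so that the initial prompt is preserved when the context wraps around (the values shown are illustrative):

```bash
# Large context with open-ended generation; --keep -1 retains the entire initial prompt when the context resets.
./main -m models/7B/ggml-model.bin -c 2048 --keep -1 -n -1 --prompt "Once upon a time"
```
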
## Generation Flags

The following options are related to controlling the text generation process, influencing the diversity, creativity, and quality of the generated text. Understanding these options will help you fine-tune the output according to your needs:

### Temperature

- `--temp N`: Adjust the randomness of the generated text (default: 0.8).

### Top-K Sampling

- `--top_k N`: Limit the next token selection to the K most probable tokens (default: 40).

### Top-P Sampling

- `--top_p N`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.9).

### Repeat Penalty

- `--repeat_last_n N`: Set the number of last tokens to consider for penalization (default: 64).
- `--repeat_penalty N`: Control the repetition of token sequences in the generated text (default: 1.1).

Experiment with different combinations of these options to achieve the desired output; a combined example follows. For more details on each option, refer to the explanations above.

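For instance, a somewhat more conservative sampling setup might look like this (the specific values are illustrative starting points, not recommendations):

```bash
# Slightly lower temperature and top_p, plus a stronger repeat penalty over a longer window.
./main -m models/7B/ggml-model.bin --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --repeat_penalty 1.2 --prompt "Once upon a time"
```
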
## Performance Tuning and Memory Options

These options help improve the performance and memory usage of the LLaMA models:

- `-t N, --threads N`: Set the number of threads to use during computation (default: 4). Using the correct number of threads can greatly improve performance. It is recommended to set this value to the number of physical CPU cores.
- `--mlock`: Lock the model in memory, preventing it from being swapped out when memory-mapped. This can improve performance.
- `--no-mmap`: Do not memory-map the model. This results in a slower load time but may reduce pageouts if you're not using `--mlock`.
- `--memory_f32`: Use 32-bit floats instead of 16-bit floats for the memory key+value cache, allowing higher quality inference at the cost of increased memory usage.

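For example, on a machine with 8 physical cores and enough RAM to hold the model, you might start with something like this (the values are illustrative):

```bash
# 8 threads, with the model locked in memory to avoid it being swapped out.
./main -m models/7B/ggml-model.bin -t 8 --mlock --prompt "Once upon a time"
```
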
For information about 4-bit quantization, which can significantly improve performance and reduce memory usage, please refer to llama.cpp's primary [README](../../README.md#prepare-data--run).

By understanding and using these performance tuning settings, you can optimize the LLaMA model's behavior to achieve the best performance for your specific needs.

## Additional Options

These options provide extra functionality and customization when running the LLaMA models:

- `--verbose-prompt`: Print the prompt before generating text.
- `--mtest`: Test the model's functionality by running a series of tests to ensure it's working properly.
- `--lora FNAME`: Apply a LoRA (Low-Rank Adaptation) adapter to the model (implies `--no-mmap`). This allows you to adapt the pretrained model to specific tasks or domains.
- `--lora-base FNAME`: Optional model to use as a base for the layers modified by the LoRA adapter. This flag is used in conjunction with the `--lora` flag, and specifies the base model for the adaptation.
- `--lora-layers LAYERS`: Specify which layers of the model to modify with the LoRA adapter. LAYERS should be a comma-separated list of integers indicating the layers to be adapted (e.g., "1,2,3").

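For example, to apply a LoRA adapter on top of the base model (the adapter file name below is purely illustrative):

```bash
# lora-adapter.bin is an illustrative adapter file; --lora implies --no-mmap.
./main -m models/7B/ggml-model.bin --lora lora-adapter.bin --prompt "Once upon a time"
```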
