
Commit ba65415

Fix win PC issues (#399)

* change to LF
* add readme for windows pc
* add OLLAMA_MODEL param
* readme
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Update README.md
* Update docker_compose.yaml

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

1 parent 3505bd2 commit ba65415

2 files changed: +31 −4 lines changed

.gitattributes

Lines changed: 1 addition & 0 deletions

````diff
@@ -0,0 +1 @@
+* text=auto eol=lf
````
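
The new `.gitattributes` rule normalizes line endings to LF for all text files, which is the "change to LF" fix named in the commit title. As a minimal sketch (not part of this commit), an existing checkout can re-apply the new policy to already-tracked files with standard Git commands:

```bash
# Re-apply the text/eol attributes from .gitattributes to tracked files;
# git add --renormalize is available in Git 2.16 and later.
git add --renormalize .
git status                          # lists files whose line endings were rewritten
git ls-files --eol .gitattributes   # inspect eol handling for a single file
```
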

ChatQnA/docker/aipc/README.md

Lines changed: 30 additions & 4 deletions

````diff
@@ -105,6 +105,8 @@ export your_hf_api_token="Your_Huggingface_API_Token"
 export your_no_proxy=${your_no_proxy},"External_Public_IP"
 ```
 
+- Linux PC
+
 ```bash
 export no_proxy=${your_no_proxy}
 export http_proxy=${your_http_proxy}
````
````diff
@@ -125,8 +127,29 @@ export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
 export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
 
 export OLLAMA_ENDPOINT=http://${host_ip}:11434
-# On Windows PC, please use host.docker.internal instead of ${host_ip}
-#export OLLAMA_ENDPOINT=http://host.docker.internal:11434
+export OLLAMA_MODEL="llama3"
+```
+
+- Windows PC
+
+```bash
+set EMBEDDING_MODEL_ID=BAAI/bge-base-en-v1.5
+set RERANK_MODEL_ID=BAAI/bge-reranker-base
+set TEI_EMBEDDING_ENDPOINT=http://%host_ip%:6006
+set TEI_RERANKING_ENDPOINT=http://%host_ip%:8808
+set REDIS_URL=redis://%host_ip%:6379
+set INDEX_NAME=rag-redis
+set HUGGINGFACEHUB_API_TOKEN=%your_hf_api_token%
+set MEGA_SERVICE_HOST_IP=%host_ip%
+set EMBEDDING_SERVICE_HOST_IP=%host_ip%
+set RETRIEVER_SERVICE_HOST_IP=%host_ip%
+set RERANK_SERVICE_HOST_IP=%host_ip%
+set LLM_SERVICE_HOST_IP=%host_ip%
+set BACKEND_SERVICE_ENDPOINT=http://%host_ip%:8888/v1/chatqna
+set DATAPREP_SERVICE_ENDPOINT=http://%host_ip%:6007/v1/dataprep
+
+set OLLAMA_ENDPOINT=http://host.docker.internal:11434
+set OLLAMA_MODEL="llama3"
 ```
 
 Note: Please replace `host_ip` with your external IP address; do not use localhost.
````
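
The Windows block points `OLLAMA_ENDPOINT` at `host.docker.internal`, which is how Docker Desktop containers reach services on the host (the comment removed by this commit gave the same advice). As a sketch (not part of this commit), and assuming Ollama is already running on the host, its standard `/api/tags` endpoint can confirm the service is reachable before starting the stack:

```bash
# Ollama's /api/tags endpoint lists locally available models; a successful
# JSON response confirms the endpoint the containers will use is reachable.
curl http://localhost:11434/api/tags              # from the host itself
curl http://host.docker.internal:11434/api/tags   # from inside a container
```
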
````diff
@@ -140,7 +163,10 @@ cd GenAIExamples/ChatQnA/docker/aipc/
 docker compose -f docker_compose.yaml up -d
 
 # let the ollama service run
-ollama run llama3
+# e.g. ollama run llama3
+ollama run $OLLAMA_MODEL
+# for windows
+# ollama run %OLLAMA_MODEL%
 ```
 
 ### Validate Microservices
````
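
`ollama run` drops into an interactive session and downloads the model first if it is missing. As a small sketch (not part of this commit), the model can also be fetched ahead of time with standard ollama subcommands, so the interactive run starts immediately:

```bash
# Pull the configured model ahead of time and verify it is present,
# instead of letting the interactive `ollama run` trigger the download.
ollama pull "$OLLAMA_MODEL"   # downloads the model if not already local
ollama list                   # shows models available locally
```
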
````diff
@@ -211,7 +237,7 @@ curl http://${host_ip}:9000/v1/chat/completions\
 
 ```bash
 curl http://${host_ip}:8888/v1/chatqna -H "Content-Type: application/json" -d '{
-     "messages": "What is the revenue of Nike in 2023?"
+     "messages": "What is the revenue of Nike in 2023?", "model": "'"${OLLAMA_MODEL}"'"
 }'
 ```
 
````
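
The `"'"${OLLAMA_MODEL}"'"` sequence in the added line is shell quoting, not JSON: the single-quoted JSON body is closed, the variable is expanded inside double quotes, and the single quotes are reopened. A minimal sketch of the effect:

```bash
# Close the single-quoted string, expand the variable inside double quotes,
# then reopen the single quotes; the rest of the JSON stays literal.
OLLAMA_MODEL="llama3"
echo '{ "messages": "hi", "model": "'"${OLLAMA_MODEL}"'" }'
# prints: { "messages": "hi", "model": "llama3" }
```
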
