
Commit 0672112

Update TEI docker image to 1.6 (opea-project#1453)

Update the TEI docker image to ghcr.io/huggingface/text-embeddings-inference:cpu-1.6.

Signed-off-by: Wang, Xigui <[email protected]>

1 parent 8ce0162 · commit 0672112

File tree

20 files changed (+21, -21 lines changed)


comps/dataprep/src/README_finance.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ First, you need to start a TEI service.
 ```bash
 your_port=6006
 model="BAAI/bge-base-en-v1.5"
-docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 Then you need to test your TEI service using the following commands:
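For reference, the "following commands" these READMEs mention sit outside the changed hunks, but a minimal test of a freshly started TEI server typically uses its documented `/embed` route; the port and input string below are assumptions matching the snippet above:

```bash
# POST a single input to TEI's /embed endpoint; the response is a JSON
# array with one embedding vector per input string.
curl http://localhost:6006/embed \
  -X POST \
  -d '{"inputs":"What is Deep Learning?"}' \
  -H 'Content-Type: application/json'
```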

comps/dataprep/src/README_milvus.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ First, start the TEI embedding server.
 ```bash
 your_port=6010
 model="BAAI/bge-base-en-v1.5"
-docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 export TEI_EMBEDDING_ENDPOINT="http://localhost:$your_port"
 ```

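Downstream dataprep components read TEI_EMBEDDING_ENDPOINT to reach this server, so it is worth confirming the variable points at a live instance before continuing; a quick sketch, assuming the export above:

```bash
# Reuse the exported endpoint for a connectivity check; a JSON embedding
# vector in the response means TEI is up and the model is loaded.
curl $TEI_EMBEDDING_ENDPOINT/embed \
  -X POST \
  -d '{"inputs":"connectivity check"}' \
  -H 'Content-Type: application/json'
```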

comps/dataprep/src/README_opensearch.md

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ First, you need to start a TEI service.
 ```bash
 your_port=6006
 model="BAAI/bge-base-en-v1.5"
-docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 Then you need to test your TEI service using the following commands:

comps/dataprep/src/README_redis.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ First, you need to start a TEI service.
 ```bash
 your_port=6006
 model="BAAI/bge-base-en-v1.5"
-docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -p $your_port:80 -v ./data:/data --name tei_server -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 Then you need to test your TEI service using the following commands:

comps/embeddings/src/README_tei.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ This guide walks you through starting, deploying, and consuming the **TEI-based
 model="BAAI/bge-large-en-v1.5"
 docker run -p $your_port:80 -v ./data:/data --name tei-embedding-serving \
   -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always \
-  ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+  ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 2. **Test the TEI service**:

comps/rerankings/src/README_tei.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -31,7 +31,7 @@ This README provides set-up instructions and comprehensive details regarding the
3131
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
3232
export volume=$PWD/data
3333

34-
docker run -d -p 12005:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $RERANK_MODEL_ID --hf-api-token $HF_TOKEN --auto-truncate
34+
docker run -d -p 12005:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $RERANK_MODEL_ID --hf-api-token $HF_TOKEN --auto-truncate
3535
```
3636

3737
2. **Verify the TEI Service**:
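The verification step referenced above lies outside this hunk; for a reranker the request shape differs from embedding, using TEI's `/rerank` route with a query plus candidate texts. A minimal sketch against the port published above (the query and texts are illustrative):

```bash
# Rank candidate texts against a query; the response lists each text's
# index together with its relevance score.
curl http://localhost:12005/rerank \
  -X POST \
  -d '{"query":"What is Deep Learning?","texts":["Deep Learning is not...","Deep learning is..."]}' \
  -H 'Content-Type: application/json'
```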

comps/retrievers/src/README_elasticsearch.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ pip install -r requirements.txt
 ```bash
 model=BAAI/bge-base-en-v1.5
 volume=$PWD/data
-docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 ### 1.3 Verify the TEI Service
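Since this container was started detached (`-d`), verifying it usually means checking the logs for startup completion and then issuing a test request; a sketch under those assumptions (the ancestor filter and "Ready" log line are assumptions about this deployment, not part of the diff):

```bash
# Tail the detached container's logs; TEI logs a "Ready" line once the
# model has finished loading.
docker logs $(docker ps -q --filter ancestor=ghcr.io/huggingface/text-embeddings-inference:cpu-1.6) 2>&1 | tail -n 5

# Then exercise the embed route on the published port.
curl http://localhost:6060/embed \
  -X POST \
  -d '{"inputs":"test"}' \
  -H 'Content-Type: application/json'
```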

comps/retrievers/src/README_opensearch.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ pip install -r requirements.txt
 ```bash
 model=BAAI/bge-base-en-v1.5
 volume=$PWD/data
-docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 ### 1.3 Verify the TEI Service

comps/retrievers/src/README_pathway.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ model=BAAI/bge-base-en-v1.5
 export TEI_EMBEDDING_ENDPOINT="http://${your_ip}:6060" # if you want to use the hosted embedding service, example: "http://127.0.0.1:6060"

 # then run:
-docker run -p 6060:80 -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
+docker run -p 6060:80 -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
 ```

 Health check the embedding service with:
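The health-check command itself falls outside the hunk; TEI exposes a `/health` route that returns 200 once the server can serve requests, so a check could look like the following sketch (printing only the status code is an illustrative choice):

```bash
# Print only the HTTP status code; 200 indicates the embedding server
# is healthy and ready.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:6060/health
```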

comps/retrievers/src/README_pgvector.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -21,7 +21,7 @@ pip install -r requirements.txt
2121
```bash
2222
model=BAAI/bge-base-en-v1.5
2323
volume=$PWD/data
24-
docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
24+
docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.6 --model-id $model
2525
```
2626

2727
### 1.3 Verify the TEI Service
