Commit 6674832

fix tgi xeon tag (#641)
1 parent 67df280

22 files changed (+23, -23 lines)

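Every hunk in this commit makes the same one-line change: the floating `latest-intel-cpu` tag of the TGI image is replaced with the pinned, immutable `sha-e4201f4-intel-cpu` tag. A repo-wide substitution like this can be scripted; a minimal sketch (not part of the commit itself), assuming GNU sed and grep, run from the repository root:

```shell
# Replace the floating TGI tag with the pinned one in every file that
# references it. grep -rl lists the matching files; xargs -r skips the
# sed call entirely when no file matches.
grep -rl 'text-generation-inference:latest-intel-cpu' . \
  | xargs -r sed -i 's|text-generation-inference:latest-intel-cpu|text-generation-inference:sha-e4201f4-intel-cpu|g'
```

Using `|` as the sed delimiter avoids escaping the slashes in the image path.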

AudioQnA/docker/xeon/compose.yaml (1 addition, 1 deletion)

@@ -41,7 +41,7 @@ services:
     environment:
       TTS_ENDPOINT: ${TTS_ENDPOINT}
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "3006:80"

ChatQnA/docker/xeon/compose.yaml (1 addition, 1 deletion)

@@ -102,7 +102,7 @@ services:
       HF_HUB_ENABLE_HF_TRANSFER: 0
     restart: unless-stopped
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "9009:80"

ChatQnA/docker/xeon/compose_qdrant.yaml (1 addition, 1 deletion)

@@ -102,7 +102,7 @@ services:
       HF_HUB_ENABLE_HF_TRANSFER: 0
     restart: unless-stopped
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "6042:80"

ChatQnA/kubernetes/README.md (1 addition, 1 deletion)

@@ -20,7 +20,7 @@ The ChatQnA uses the below prebuilt images if you choose a Xeon deployment
 - retriever: opea/retriever-redis:latest
 - tei_xeon_service: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
 - reranking: opea/reranking-tei:latest
-- tgi-service: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+- tgi-service: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
 - llm: opea/llm-tgi:latest
 - chaqna-xeon-backend-server: opea/chatqna:latest

ChatQnA/kubernetes/manifests/xeon/chatqna.yaml (1 addition, 1 deletion)

@@ -1121,7 +1121,7 @@ spec:
           name: chatqna-tgi-config
       securityContext:
         {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

CodeGen/docker/xeon/compose.yaml (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 
 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "8028:80"

CodeGen/kubernetes/manifests/xeon/codegen.yaml (1 addition, 1 deletion)

@@ -239,7 +239,7 @@ spec:
           name: codegen-tgi-config
       securityContext:
         {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

CodeGen/kubernetes/manifests/xeon/ui/react-codegen.yaml (1 addition, 1 deletion)

@@ -126,7 +126,7 @@ spec:
         - name: no_proxy
           value:
       securityContext: {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

CodeGen/tests/test_codegen_on_xeon.sh (1 addition, 1 deletion)

@@ -22,7 +22,7 @@ function build_docker_images() {
     service_list="codegen codegen-ui llm-tgi"
     docker compose -f docker_build_compose.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
 
-    docker pull ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     docker images
 }

CodeTrans/docker/xeon/compose.yaml (1 addition, 1 deletion)

@@ -3,7 +3,7 @@
 
 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: codetrans-tgi-service
     ports:
       - "8008:80"

0 commit comments