Commit 4f35b1d

Authored by zehao-intel, chensuyue, WenjiaoYue, daisy-ycguo, kding1
Refactor Translation Example (opea-project#287)
* Refactor Translation Example
* support e2s test
* fix test ip_address
* update test scripts
* for test
* fix readme and dockerfile
* revert test code
* remove gaudi test update
* bug fix
* fix test xeon
* modify mega check
* fix ui
* fix ut network
* fix network
* Modify the corresponding format according to the backend new structure. (opea-project#317)
  * Add image build job in docker compose e2e gaudi test in CI (opea-project#305)
  * Add gpu support for ChatQnA (opea-project#308)
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
  * Update ChatQnA for Xeon docker_compose.yaml to fix downloads failing (opea-project#310)
  * Add build docker image option for test scripts (opea-project#312)

Signed-off-by: zehao-intel <[email protected]>
Signed-off-by: chensuyue <[email protected]>
Signed-off-by: Yingchun Guo <[email protected]>
Signed-off-by: Ding, Ke <[email protected]>
Signed-off-by: WenjiaoYue <[email protected]>

Co-authored-by: chen, suyue <[email protected]>
Co-authored-by: WenjiaoYue <[email protected]>
Co-authored-by: Ying Chun Guo <[email protected]>
Co-authored-by: Ke Ding <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Steve Fowler <[email protected]>
Co-authored-by: lvliang-intel <[email protected]>
1 parent aa48977 commit 4f35b1d

50 files changed: +921 −136 lines

.github/workflows/Translation.yml

Lines changed: 0 additions & 50 deletions
This file was deleted.

.github/workflows/scripts/build_push.sh

Lines changed: 2 additions & 2 deletions
```diff
@@ -46,14 +46,14 @@ function docker_build() {
     # $1 is like "apple orange pear"
     for MEGA_SVC in $1; do
         case $MEGA_SVC in
-            "ChatQnA"|"CodeGen"|"CodeTrans"|"DocSum")
+            "ChatQnA"|"CodeGen"|"CodeTrans"|"DocSum"|"Translation")
                 cd $MEGA_SVC/docker
                 IMAGE_NAME="$(getImagenameFromMega $MEGA_SVC)"
                 docker_build ${IMAGE_NAME}
                 cd ui
                 docker_build ${IMAGE_NAME}-ui docker/Dockerfile
                 ;;
-            "AudioQnA"|"SearchQnA"|"Translation"|"VisualQnA")
+            "AudioQnA"|"SearchQnA"|"VisualQnA")
                 echo "Not supported yet"
                 ;;
             *)
```

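The effect of this change is easiest to see by running the dispatch logic by itself. The sketch below is a toy stand-in, not the project's script: `build_mega` is an invented name and `echo` replaces the real `docker_build` calls, so the control flow can be traced anywhere.

```shell
#!/bin/sh
# Toy version of the build_push.sh dispatch after this commit:
# "Translation" has moved into the supported branch of the case
# statement, so it now triggers a service build plus a -ui build.
build_mega() {
  # $1 is a space-separated list of mega-service names
  for MEGA_SVC in $1; do
    case $MEGA_SVC in
      "ChatQnA"|"CodeGen"|"CodeTrans"|"DocSum"|"Translation")
        echo "build ${MEGA_SVC} and ${MEGA_SVC}-ui"
        ;;
      "AudioQnA"|"SearchQnA"|"VisualQnA")
        echo "${MEGA_SVC}: Not supported yet"
        ;;
      *)
        echo "${MEGA_SVC}: unknown service"
        ;;
    esac
  done
}

build_mega "Translation AudioQnA"
```

Running it prints `build Translation and Translation-ui` followed by `AudioQnA: Not supported yet`, which is exactly the branch swap this diff makes.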
Translation/README.md

Lines changed: 9 additions & 39 deletions
````diff
@@ -1,51 +1,21 @@
-# Language Translation
+# Translation Application
 
 Language Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text.
 
-The workflow falls into the following architecture:
+Translation architecture shows below:
 
 ![architecture](./assets/img/translation_architecture.png)
 
-# Start Backend Service
+This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Please visit [Habana AI products](https://habana.ai/products) for more details.
 
-1. Start the TGI Service to deploy your LLM
+# Deploy Translation Service
 
-   ```sh
-   cd serving/tgi_gaudi
-   bash build_docker.sh
-   bash launch_tgi_service.sh
-   ```
+The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
 
-   `launch_tgi_service.sh` the script uses `8080` as the TGI service's port by default. Please replace it if any port conflicts detected.
+## Deploy Translation on Gaudi
 
-2. Start the Language Translation Service
+Refer to the [Gaudi Guide](./docker/gaudi/README.md) for instructions on deploying Translation on Gaudi.
 
-   ```sh
-   cd langchain/docker
-   bash build_docker.sh
-   docker run -it --name translation_server --net=host --ipc=host -e TGI_ENDPOINT=${TGI_ENDPOINT} -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e SERVER_PORT=8000 -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} translation:latest bash
-   ```
+## Deploy Translation on Xeon
 
-   **Note**: Set the following parameters before running the above command
-
-   - `TGI_ENDPOINT`: The endpoint of your TGI service, usually equal to `<ip of your machine>:<port of your TGI service>`.
-   - `HUGGINGFACEHUB_API_TOKEN`: Your HuggingFace hub API token, usually generated [here](https://huggingface.co/settings/tokens).
-   - `SERVER_PORT`: The port of the Translation service on the host.
-
-3. Quick Test
-
-   ```sh
-   curl http://localhost:8000/v1/translation \
-     -X POST \
-     -d '{"language_from": "Chinese","language_to": "English","source_language": "我爱机器翻译。"}' \
-     -H 'Content-Type: application/json'
-   ```
-
-The shortcodes of languages are also supported:
-
-   ```sh
-   curl http://localhost:8000/v1/translation \
-     -X POST \
-     -d '{"language_from": "de","language_to": "en","source_language": "Maschinelles Lernen"}' \
-     -H 'Content-Type: application/json'
-   ```
+Refer to the [Xeon Guide](./docker/xeon/README.md) for instructions on deploying Translation on Xeon.
````

Translation/deprecated/README.md

Lines changed: 51 additions & 0 deletions
````diff
@@ -0,0 +1,51 @@
+# Language Translation
+
+Language Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text.
+
+The workflow falls into the following architecture:
+
+![architecture](../assets/img/translation_architecture.png)
+
+# Start Backend Service
+
+1. Start the TGI Service to deploy your LLM
+
+   ```sh
+   cd serving/tgi_gaudi
+   bash build_docker.sh
+   bash launch_tgi_service.sh
+   ```
+
+   `launch_tgi_service.sh` the script uses `8080` as the TGI service's port by default. Please replace it if any port conflicts detected.
+
+2. Start the Language Translation Service
+
+   ```sh
+   cd langchain/docker
+   bash build_docker.sh
+   docker run -it --name translation_server --net=host --ipc=host -e TGI_ENDPOINT=${TGI_ENDPOINT} -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e SERVER_PORT=8000 -e http_proxy=${http_proxy} -e https_proxy=${https_proxy} translation:latest bash
+   ```
+
+   **Note**: Set the following parameters before running the above command
+
+   - `TGI_ENDPOINT`: The endpoint of your TGI service, usually equal to `<ip of your machine>:<port of your TGI service>`.
+   - `HUGGINGFACEHUB_API_TOKEN`: Your HuggingFace hub API token, usually generated [here](https://huggingface.co/settings/tokens).
+   - `SERVER_PORT`: The port of the Translation service on the host.
+
+3. Quick Test
+
+   ```sh
+   curl http://localhost:8000/v1/translation \
+     -X POST \
+     -d '{"language_from": "Chinese","language_to": "English","source_language": "我爱机器翻译。"}' \
+     -H 'Content-Type: application/json'
+   ```
+
+The shortcodes of languages are also supported:
+
+   ```sh
+   curl http://localhost:8000/v1/translation \
+     -X POST \
+     -d '{"language_from": "de","language_to": "en","source_language": "Maschinelles Lernen"}' \
+     -H 'Content-Type: application/json'
+   ```
````
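The quick-test commands in the deprecated README above send a small JSON body to the service. As a sketch only, the snippet below assembles that body with a pure-shell helper; the variable values are placeholders and `translation_payload` is an invented name, not part of the repository, so no network or running service is needed.

```shell
#!/bin/sh
# Placeholder environment the deprecated instructions expect
# (values here are illustrative, not real endpoints or tokens):
TGI_ENDPOINT="192.168.1.10:8080"    # <ip of your machine>:<TGI port>
SERVER_PORT=8000                    # Translation service port on the host

# Build the request body used by the curl quick test:
# {"language_from": ..., "language_to": ..., "source_language": ...}
translation_payload() {
  # $1 = language_from, $2 = language_to, $3 = source text
  printf '{"language_from": "%s","language_to": "%s","source_language": "%s"}' \
    "$1" "$2" "$3"
}

# Prints: {"language_from": "de","language_to": "en","source_language": "Maschinelles Lernen"}
translation_payload "de" "en" "Maschinelles Lernen"
```

The helper's output could then be passed to `curl -d "$(translation_payload de en 'Maschinelles Lernen')" http://localhost:${SERVER_PORT}/v1/translation` once the service is actually running.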
