comps/llms/src/doc-summarization/README.md
Lines changed: 33 additions & 9 deletions
@@ -1,6 +1,10 @@
# Document Summary LLM Microservice

-This microservice leverages LangChain to implement summarization strategies and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors. You can set backend service either [TGI](../../../third_parties/tgi) or [vLLM](../../../third_parties/vllm).
+This microservice leverages LangChain to implement advanced text summarization strategies and facilitate Large Language Model (LLM) inference using Text Generation Inference (TGI) on Intel Xeon and Gaudi2 processors. Users can configure the backend service to use either [TGI](../../../third_parties/tgi) or [vLLM](../../../third_parties/vllm).
+
+# Quick Start Guide
+
+## Deployment options


## 🚀1. Start Microservice with Docker 🐳
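Given the new Quick Start framing, a usage sketch may help readers orient. The request below is a minimal, hypothetical example of querying a running instance: the port (9000) and the `/v1/docsum` route follow conventions seen elsewhere in this repo and are assumptions, not values confirmed by this diff.

```bash
# Hypothetical request to a running doc-summarization microservice.
# Port 9000 and the /v1/docsum route are assumptions; check the service config.
curl http://localhost:9000/v1/docsum \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"messages": "Paste the document text to summarize here.", "max_tokens": 32}'
```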
@@ -25,18 +29,18 @@ Please make sure MAX_TOTAL_TOKENS should be larger than (MAX_INPUT_TOKENS + max_

Step 1: Prepare backend LLM docker image.

-If you want to use vLLM backend, refer to [vLLM](../../../third_parties/vllm/) to build vLLM docker images first.
+If you want to use vLLM backend, refer to [vLLM](../../../third_parties/vllm/) for building the necessary Docker image.
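For readers who land here without following the link, here is a hedged sketch of what building a vLLM serving image can look like from the upstream project. The repo URL, build target, and tag are assumptions; the linked third_parties/vllm directory remains the authoritative recipe, especially for Xeon and Gaudi builds.

```bash
# Hedged sketch: build a vLLM serving image from the upstream project.
# The --target name and tag are assumptions; Xeon/Gaudi builds use different
# Dockerfiles, so follow ../../../third_parties/vllm/ for the supported steps.
git clone https://github.com/vllm-project/vllm.git
cd vllm
DOCKER_BUILDKIT=1 docker build . --target vllm-openai --tag opea/vllm:latest
```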
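The constraint quoted in the hunk header above (MAX_TOTAL_TOKENS larger than MAX_INPUT_TOKENS plus max_new_tokens) is easy to violate, so a worked example may help. The numbers below are illustrative assumptions, not documented defaults.

```bash
# Illustrative values only, not defaults. The invariant is:
#   MAX_TOTAL_TOKENS > MAX_INPUT_TOKENS + max_new_tokens
export MAX_INPUT_TOKENS=2048   # budget for the prompt / document chunk
export MAX_TOTAL_TOKENS=4096   # 4096 > 2048 + 1024 new tokens requested per call
```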