
Commit 7e40475

Refine READMEs after reorg (#666)

* update dataprep readme
* update vectorstore readme
* update retriever readme
* update retriever readme
* update embedding readme
* update guardrails readme
* update other readmes
* update reranks readme
* update llm&lvms readme
* [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)

Signed-off-by: letonghan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 7664578 commit 7e40475

File tree

31 files changed: +97 -138 lines


comps/dataprep/README.md
Lines changed: 13 additions & 5 deletions

@@ -19,20 +19,28 @@ export SUMMARIZE_IMAGE_VIA_LVM=1
 
 ## Dataprep Microservice with Redis
 
-For details, please refer to this [langchain readme](langchain/redis/README.md) or [llama index readme](llama_index/redis/README.md)
+For details, please refer to this [readme](redis/README.md)
 
 ## Dataprep Microservice with Milvus
 
-For details, please refer to this [readme](langchain/milvus/README.md)
+For details, please refer to this [readme](milvus/langchain/README.md)
 
 ## Dataprep Microservice with Qdrant
 
-For details, please refer to this [readme](langchain/qdrant/README.md)
+For details, please refer to this [readme](qdrant/langchain/README.md)
 
 ## Dataprep Microservice with Pinecone
 
-For details, please refer to this [readme](langchain/pinecone/README.md)
+For details, please refer to this [readme](pinecone/langchain/README.md)
 
 ## Dataprep Microservice with PGVector
 
-For details, please refer to this [readme](langchain/pgvector/README.md)
+For details, please refer to this [readme](pgvector/langchain/README.md)
+
+## Dataprep Microservice with VDMS
+
+For details, please refer to this [readme](vdms/README.md)
+
+## Dataprep Microservice with Multimodal
+
+For details, please refer to this [readme](multimodal/redis/langchain/README.md)

comps/dataprep/redis/README.md
Lines changed: 6 additions & 6 deletions

@@ -1,6 +1,6 @@
 # Dataprep Microservice with Redis
 
-We have provided dataprep microservice for multimodal data input (e.g., text and image) [here](../../multimodal/redis/langchain/README.md).
+We have provided dataprep microservice for multimodal data input (e.g., text and image) [here](../multimodal/redis/langchain/README.md).
 
 For dataprep microservice for text input, we provide here two frameworks: `Langchain` and `LlamaIndex`. We also provide `Langchain_ray` which uses ray to parallel the data prep for multi-file performance improvement(observed 5x - 15x speedup by processing 1000 files/links.).
 
@@ -33,7 +33,7 @@ cd langchain_ray; pip install -r requirements_ray.txt
 
 ### 1.2 Start Redis Stack Server
 
-Please refer to this [readme](../../../vectorstores/redis/README.md).
+Please refer to this [readme](../../vectorstores/redis/README.md).
 
 ### 1.3 Setup Environment Variables
 
@@ -90,7 +90,7 @@ python prepare_doc_redis_on_ray.py
 
 ### 2.1 Start Redis Stack Server
 
-Please refer to this [readme](../../../vectorstores/redis/README.md).
+Please refer to this [readme](../../vectorstores/redis/README.md).
 
 ### 2.2 Setup Environment Variables
 
@@ -109,21 +109,21 @@ export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
 - option 1: Start single-process version (for 1-10 files processing)
 
 ```bash
-cd ../../../
+cd ../../
 docker build -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/langchain/Dockerfile .
 ```
 
 - Build docker image with llama_index
 
 ```bash
-cd ../../../
+cd ../../
 docker build -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/llama_index/Dockerfile .
 ```
 
 - option 2: Start multi-process version (for >10 files processing)
 
 ```bash
-cd ../../../../
+cd ../../../
 docker build -t opea/dataprep-on-ray-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/langchain_ray/Dockerfile .
 ```

comps/dataprep/vdms/README.md
Lines changed: 2 additions & 4 deletions

@@ -27,7 +27,7 @@ cd langchain_ray; pip install -r requirements_ray.txt
 
 ## 1.2 Start VDMS Server
 
-Please refer to this [readme](../../vectorstores/langchain/vdms/README.md).
+Please refer to this [readme](../../vectorstores/vdms/README.md).
 
 ## 1.3 Setup Environment Variables
 
@@ -37,8 +37,6 @@ export https_proxy=${your_http_proxy}
 export VDMS_HOST=${host_ip}
 export VDMS_PORT=55555
 export COLLECTION_NAME=${your_collection_name}
-export LANGCHAIN_TRACING_V2=true
-export LANGCHAIN_PROJECT="opea/gen-ai-comps:dataprep"
 export PYTHONPATH=${path_to_comps}
 ```
 
@@ -62,7 +60,7 @@ python prepare_doc_redis_on_ray.py
 
 ## 2.1 Start VDMS Server
 
-Please refer to this [readme](../../vectorstores/langchain/vdms/README.md).
+Please refer to this [readme](../../vectorstores/vdms/README.md).
 
 ## 2.2 Setup Environment Variables
 

comps/embeddings/README.md
Lines changed: 4 additions & 8 deletions

@@ -18,20 +18,16 @@ Users are albe to configure and build embedding-related services according to th
 
 We support both `langchain` and `llama_index` for TEI serving.
 
-For details, please refer to [langchain readme](langchain/tei/README.md) or [llama index readme](llama_index/tei/README.md).
+For details, please refer to [langchain readme](tei/langchain/README.md) or [llama index readme](tei/llama_index/README.md).
 
 ## Embeddings Microservice with Mosec
 
-For details, please refer to this [readme](langchain/mosec/README.md).
+For details, please refer to this [readme](mosec/langchain/README.md).
 
-## Embeddings Microservice with Neural Speed
+## Embeddings Microservice with Multimodal
 
-For details, please refer to this [readme](neural-speed/README.md).
+For details, please refer to this [readme](multimodal/README.md).
 
 ## Embeddings Microservice with Multimodal Clip
 
 For details, please refer to this [readme](multimodal_clip/README.md).
-
-## Embeddings Microservice with Multimodal Langchain
-
-For details, please refer to this [readme](multimodal_embeddings/README.md).

comps/guardrails/README.md
Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ The Guardrails service enhances the security of LLM-based applications by offeri
 
 | MicroService                                         | Description                                                                                                              |
 | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
-| [Llama Guard](./llama_guard/README.md)               | Provides guardrails for inputs and outputs to ensure safe interactions                                                   |
+| [Llama Guard](./llama_guard/langchain/README.md)     | Provides guardrails for inputs and outputs to ensure safe interactions                                                   |
 | [PII Detection](./pii_detection/README.md)           | Detects Personally Identifiable Information (PII) and Business Sensitive Information (BSI)                               |
 | [Toxicity Detection](./toxicity_detection/README.md) | Detects Toxic language (rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion) |
 

comps/guardrails/llama_guard/langchain/README.md
Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
 ### 2.2 Build Docker Image
 
 ```bash
-cd ../../
+cd ../../../../
 docker build -t opea/guardrails-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/guardrails/llama_guard/langchain/Dockerfile .
 ```
 

comps/intent_detection/langchain/README.md
Lines changed: 2 additions & 3 deletions

@@ -35,7 +35,7 @@ export TGI_LLM_ENDPOINT="http://${your_ip}:8008"
 Start intent detection microservice with below command.
 
 ```bash
-cd /your_project_path/GenAIComps/
+cd ../../../
 cp comps/intent_detection/langchain/intent_detection.py .
 python intent_detection.py
 ```
@@ -55,7 +55,7 @@ export TGI_LLM_ENDPOINT="http://${your_ip}:8008"
 ### 2.3 Build Docker Image
 
 ```bash
-cd /your_project_path/GenAIComps
+cd ../../../
 docker build --no-cache -t opea/llm-tgi:latest -f comps/intent_detection/langchain/Dockerfile .
 ```
@@ -68,7 +68,6 @@ docker run -it --name="intent-tgi-server" --net=host --ipc=host -e http_proxy=$h
 ### 2.5 Run with Docker Compose (Option B)
 
 ```bash
-cd /your_project_path/GenAIComps/comps/intent_detection/langchain
 export LLM_MODEL_ID=${your_hf_llm_model}
 export http_proxy=${your_http_proxy}
 export https_proxy=${your_http_proxy}

comps/knowledgegraphs/langchain/README.md
Lines changed: 1 addition & 1 deletion

@@ -73,7 +73,7 @@ curl $LLM_ENDPOINT/generate \
 ### 1.4 Start Microservice
 
 ```bash
-cd ../..
+cd ../../../
 docker build -t opea/knowledge_graphs:latest \
   --build-arg https_proxy=$https_proxy \
   --build-arg http_proxy=$http_proxy \

comps/llms/faq-generation/tgi/langchain/README.md
Lines changed: 1 addition & 2 deletions

@@ -19,7 +19,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
 ### 1.2 Build Docker Image
 
 ```bash
-cd ../../../../
+cd ../../../../../
 docker build -t opea/llm-faqgen-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/faq-generation/tgi/langchain/Dockerfile .
 ```
@@ -43,7 +43,6 @@ docker run -d --name="llm-faqgen-server" -p 9000:9000 --ipc=host -e http_proxy=$
 ### 1.4 Run Docker with Docker Compose (Option B)
 
 ```bash
-cd faq-generation/tgi/docker
 docker compose -f docker_compose_llm.yaml up -d
 ```

comps/llms/summarization/tgi/langchain/README.md
Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ export LLM_MODEL_ID=${your_hf_llm_model}
 ### 2.2 Build Docker Image
 
 ```bash
-cd ../../
+cd ../../../../../
 docker build -t opea/llm-docsum-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/summarization/tgi/langchain/Dockerfile .
 ```