Commit a8a46bc

doc: fix heading levels in markdown content (#627)
* only one H1 (#) heading for the title is allowed, so fix the extra H1 headings (and the subheadings under those) to appropriate levels
* fix some inline code blocks containing leading/trailing spaces
* fix some indenting issues under an ordered list item

Signed-off-by: David B. Kinder <[email protected]>
1 parent b2e64d2 commit a8a46bc

17 files changed: +145 -139 lines changed

comps/agent/langchain/README.md

Lines changed: 10 additions & 10 deletions
@@ -33,34 +33,34 @@ The tools are registered with a yaml file. We support the following types of too
 
 Currently we have implemented OpenAI chat completion compatible API for agents. We are working to support OpenAI assistants APIs.
 
-# 🚀2. Start Agent Microservice
+## 🚀2. Start Agent Microservice
 
-## 2.1 Option 1: with Python
+### 2.1 Option 1: with Python
 
-### 2.1.1 Install Requirements
+#### 2.1.1 Install Requirements
 
 ```bash
 cd comps/agent/langchain/
 pip install -r requirements.txt
 ```
 
-### 2.1.2 Start Microservice with Python Script
+#### 2.1.2 Start Microservice with Python Script
 
 ```bash
 cd comps/agent/langchain/
 python agent.py
 ```
 
-## 2.2 Option 2. Start Microservice with Docker
+### 2.2 Option 2. Start Microservice with Docker
 
-### 2.2.1 Build Microservices
+#### 2.2.1 Build Microservices
 
 ```bash
 cd GenAIComps/ # back to GenAIComps/ folder
 docker build -t opea/comps-agent-langchain:latest -f comps/agent/langchain/docker/Dockerfile .
 ```
 
-### 2.2.2 Start microservices
+#### 2.2.2 Start microservices
 
 ```bash
 export ip_address=$(hostname -I | awk '{print $1}')

@@ -87,7 +87,7 @@ docker logs comps-langchain-agent-endpoint
 > docker run --rm --runtime=runc --name="comps-langchain-agent-endpoint" -v ./comps/agent/langchain/:/home/user/comps/agent/langchain/ -p 9090:9090 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e model=${model} -e ip_address=${ip_address} -e strategy=react -e llm_endpoint_url=http://${ip_address}:8080 -e llm_engine=tgi -e recursion_limit=5 -e require_human_feedback=false -e tools=/home/user/comps/agent/langchain/tools/custom_tools.yaml opea/comps-agent-langchain:latest
 > ```
 
-# 🚀 3. Validate Microservice
+## 🚀 3. Validate Microservice
 
 Once microservice starts, user can use below script to invoke.
 

@@ -104,7 +104,7 @@ data: [DONE]
 
 ```
 
-# 🚀 4. Provide your own tools
+## 🚀 4. Provide your own tools
 
 - Define tools
 

@@ -180,7 +180,7 @@ data: 'The weather information in Austin is not available from the Open Platform
 data: [DONE]
 ```
 
-# 5. Customize agent strategy
+## 5. Customize agent strategy
 
 For advanced developers who want to implement their own agent strategies, you can add a separate folder in `src\strategy`, implement your agent by inherit the `BaseAgent` class, and add your strategy into the `src\agent.py`. The architecture of this agent microservice is shown in the diagram below as a reference.
 ![Architecture Overview](agent_arch.jpg)
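The validation hunks above show the agent endpoint replying as an event stream of `data: ...` lines terminated by a `data: [DONE]` sentinel. As a hedged illustration (the helper name and parsing details are ours, not from the README), a client might accumulate such a stream like this:

```python
def collect_stream(lines):
    """Collect payloads from an OPEA-style event stream.

    Each line looks like `data: <payload>`; the stream ends with a
    `data: [DONE]` sentinel. Non-matching lines are skipped.
    """
    chunks = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # sentinel: stop consuming the stream
        chunks.append(payload)
    return chunks

# Sample stream shaped like the README's example output
example = [
    "data: 'The weather information in Austin is not available from the Open Platform'",
    "data: [DONE]",
]
print(collect_stream(example))
```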

comps/cores/telemetry/README.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ OPEA Comps currently provides telemetry functionalities for metrics and tracing
 
 OPEA microservice metrics are exported in Prometheus format and are divided into two categories: general metrics and specific metrics.
 
-General metrics, such as `http_requests_total `, `http_request_size_bytes`, are exposed by every microservice endpoint using the [prometheus-fastapi-instrumentator](https://github.com/trallnag/prometheus-fastapi-instrumentator).
+General metrics, such as `http_requests_total`, `http_request_size_bytes`, are exposed by every microservice endpoint using the [prometheus-fastapi-instrumentator](https://github.com/trallnag/prometheus-fastapi-instrumentator).
 
 Specific metrics are the built-in metrics exposed under `/metrics` by each specific microservices such as TGI, vLLM, TEI and others. Both types of the metrics adhere to the Prometheus format.
 
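For context on the metric names this hunk touches, here is a small illustrative sketch (ours, not from the repo) of the Prometheus text exposition format that counters like `http_requests_total` follow:

```python
def prom_sample(name, labels, value):
    """Render one metric sample in Prometheus text exposition format:
    name{label="value",...} value
    Labels are sorted for a deterministic rendering.
    """
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = prom_sample(
    "http_requests_total",
    {"handler": "/v1/chat/completions", "status": "2xx"},
    5,
)
print(line)
# http_requests_total{handler="/v1/chat/completions",status="2xx"} 5
```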

comps/dataprep/redis/README.md

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
 
 - Build docker image with langchain
 
-* option 1: Start single-process version (for 1-10 files processing)
+- option 1: Start single-process version (for 1-10 files processing)
 
 ```bash
 cd ../../../../

comps/dataprep/redis/multimodal_langchain/README.md

Lines changed: 24 additions & 24 deletions
@@ -2,9 +2,9 @@
 
 This `dataprep` microservice accepts videos (mp4 files) and their transcripts (optional) from the user and ingests them into Redis vectorstore.
 
-# 🚀1. Start Microservice with Python(Option 1)
+## 🚀1. Start Microservice with Python(Option 1)
 
-## 1.1 Install Requirements
+### 1.1 Install Requirements
 
 ```bash
 # Install ffmpeg static build

@@ -17,11 +17,11 @@ cp $(pwd)/ffmpeg-git-amd64-static/ffmpeg /usr/local/bin/
 pip install -r requirements.txt
 ```
 
-## 1.2 Start Redis Stack Server
+### 1.2 Start Redis Stack Server
 
 Please refer to this [readme](../../../vectorstores/langchain/redis/README.md).
 
-## 1.3 Setup Environment Variables
+### 1.3 Setup Environment Variables
 
 ```bash
 export your_ip=$(hostname -I | awk '{print $1}')

@@ -30,7 +30,7 @@ export INDEX_NAME=${your_redis_index_name}
 export PYTHONPATH=${path_to_comps}
 ```
 
-## 1.4 Start LVM Microservice (Optional)
+### 1.4 Start LVM Microservice (Optional)
 
 This is required only if you are going to consume the _generate_captions_ API of this microservice as in [Section 4.3](#43-consume-generate_captions-api).
 

@@ -42,21 +42,21 @@ export your_ip=$(hostname -I | awk '{print $1}')
 export LVM_ENDPOINT="http://${your_ip}:9399/v1/lvm"
 ```
 
-## 1.5 Start Data Preparation Microservice for Redis with Python Script
+### 1.5 Start Data Preparation Microservice for Redis with Python Script
 
 Start document preparation microservice for Redis with below command.
 
 ```bash
 python prepare_videodoc_redis.py
 ```
 
-# 🚀2. Start Microservice with Docker (Option 2)
+## 🚀2. Start Microservice with Docker (Option 2)
 
-## 2.1 Start Redis Stack Server
+### 2.1 Start Redis Stack Server
 
 Please refer to this [readme](../../../vectorstores/langchain/redis/README.md).
 
-## 2.2 Start LVM Microservice (Optional)
+### 2.2 Start LVM Microservice (Optional)
 
 This is required only if you are going to consume the _generate_captions_ API of this microservice as described [here](#43-consume-generate_captions-api).
 

@@ -68,7 +68,7 @@ export your_ip=$(hostname -I | awk '{print $1}')
 export LVM_ENDPOINT="http://${your_ip}:9399/v1/lvm"
 ```
 
-## 2.3 Setup Environment Variables
+### 2.3 Setup Environment Variables
 
 ```bash
 export your_ip=$(hostname -I | awk '{print $1}')

@@ -79,39 +79,39 @@ export INDEX_NAME=${your_redis_index_name}
 export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
 ```
 
-## 2.4 Build Docker Image
+### 2.4 Build Docker Image
 
 ```bash
 cd ../../../../
 docker build -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/multimodal_langchain/docker/Dockerfile .
 ```
 
-## 2.5 Run Docker with CLI (Option A)
+### 2.5 Run Docker with CLI (Option A)
 
 ```bash
 docker run -d --name="dataprep-redis-server" -p 6007:6007 --runtime=runc --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e REDIS_URL=$REDIS_URL -e INDEX_NAME=$INDEX_NAME -e LVM_ENDPOINT=$LVM_ENDPOINT -e HUGGINGFACEHUB_API_TOKEN=$HUGGINGFACEHUB_API_TOKEN opea/dataprep-redis:latest
 ```
 
-## 2.6 Run with Docker Compose (Option B - deprecated, will move to genAIExample in future)
+### 2.6 Run with Docker Compose (Option B - deprecated, will move to genAIExample in future)
 
 ```bash
 cd comps/dataprep/redis/multimodal_langchain/docker
 docker compose -f docker-compose-dataprep-redis.yaml up -d
 ```
 
-# 🚀3. Status Microservice
+## 🚀3. Status Microservice
 
 ```bash
 docker container logs -f dataprep-redis-server
 ```
 
-# 🚀4. Consume Microservice
+## 🚀4. Consume Microservice
 
 Once this dataprep microservice is started, user can use the below commands to invoke the microservice to convert videos and their transcripts (optional) to embeddings and save to the Redis vector store.
 
 This mircroservice has provided 3 different ways for users to ingest videos into Redis vector store corresponding to the 3 use cases.
 
-## 4.1 Consume _videos_with_transcripts_ API
+### 4.1 Consume _videos_with_transcripts_ API
 
 **Use case:** This API is used when a transcript file (under `.vtt` format) is available for each video.
 

@@ -120,7 +120,7 @@ This mircroservice has provided 3 different ways to ingest videos into
 - Make sure the file paths after `files=@` are correct.
 - Every transcript file's name must be identical with its corresponding video file's name (except their extension .vtt and .mp4). For example, `video1.mp4` and `video1.vtt`. Otherwise, if `video1.vtt` is not included correctly in this API call, this microservice will return error `No captions file video1.vtt found for video1.mp4`.
 
-### Single video-transcript pair upload
+#### Single video-transcript pair upload
 
 ```bash
 curl -X POST \

@@ -130,7 +130,7 @@ curl -X POST \
 http://localhost:6007/v1/videos_with_transcripts
 ```
 
-### Multiple video-transcript pair upload
+#### Multiple video-transcript pair upload
 
 ```bash
 curl -X POST \

@@ -142,13 +142,13 @@ curl -X POST \
 http://localhost:6007/v1/videos_with_transcripts
 ```
 
-## 4.2 Consume _generate_transcripts_ API
+### 4.2 Consume _generate_transcripts_ API
 
 **Use case:** This API should be used when a video has meaningful audio or recognizable speech but its transcript file is not available.
 
 In this use case, this microservice will use [`whisper`](https://openai.com/index/whisper/) model to generate the `.vtt` transcript for the video.
 
-### Single video upload
+#### Single video upload
 
 ```bash
 curl -X POST \

@@ -157,7 +157,7 @@ curl -X POST \
 http://localhost:6007/v1/generate_transcripts
 ```
 
-### Multiple video upload
+#### Multiple video upload
 
 ```bash
 curl -X POST \

@@ -167,7 +167,7 @@ curl -X POST \
 http://localhost:6007/v1/generate_transcripts
 ```
 
-## 4.3 Consume _generate_captions_ API
+### 4.3 Consume _generate_captions_ API
 
 **Use case:** This API should be used when a video does not have meaningful audio or does not have audio.
 

@@ -192,7 +192,7 @@ curl -X POST \
 http://localhost:6007/v1/generate_captions
 ```
 
-## 4.4 Consume get_videos API
+### 4.4 Consume get_videos API
 
 To get names of uploaded videos, use the following command.
 

@@ -202,7 +202,7 @@ curl -X POST \
 http://localhost:6007/v1/dataprep/get_videos
 ```
 
-## 4.5 Consume delete_videos API
+### 4.5 Consume delete_videos API
 
 To delete uploaded videos and clear the database, use the following command.
 
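The 4.1 hunks above state that every `.vtt` transcript must share its basename with its `.mp4` video, otherwise the service reports `No captions file video1.vtt found for video1.mp4`. A client-side pre-check could apply the same rule before uploading; this sketch (helper name and return shape are ours, not from the repo) illustrates it:

```python
import os

def match_transcripts(files):
    """Pair each .mp4 with its same-named .vtt transcript.

    Returns (pairs, missing): `pairs` holds (video, transcript) tuples,
    `missing` holds videos whose transcript is absent and would trigger
    the service's "No captions file ... found" error.
    """
    names = set(files)
    pairs, missing = [], []
    for f in files:
        base, ext = os.path.splitext(f)
        if ext != ".mp4":
            continue
        vtt = base + ".vtt"
        if vtt in names:
            pairs.append((f, vtt))
        else:
            missing.append(f)
    return pairs, missing

print(match_transcripts(["video1.mp4", "video1.vtt", "video2.mp4"]))
# ([('video1.mp4', 'video1.vtt')], ['video2.mp4'])
```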

comps/embeddings/neural-speed/README.md

Lines changed: 7 additions & 5 deletions
@@ -1,31 +1,33 @@
-# build Mosec endpoint docker image
+# Embedding Neural Speed
+
+## build Mosec endpoint docker image
 
 ```
 docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -t langchain-mosec:neuralspeed -f comps/embeddings/neural-speed/neuralspeed-docker/Dockerfile .
 ```
 
-# build embedding microservice docker image
+## build embedding microservice docker image
 
 ```
 docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -t opea/embedding-langchain-mosec:neuralspeed -f comps/embeddings/neural-speed/docker/Dockerfile .
 ```
 
 Note: Please contact us to request model files before building images.
 
-# launch Mosec endpoint docker container
+## launch Mosec endpoint docker container
 
 ```
 docker run -d --name="embedding-langchain-mosec-endpoint" -p 6001:8000 langchain-mosec:neuralspeed
 ```
 
-# launch embedding microservice docker container
+## launch embedding microservice docker container
 
 ```
 export MOSEC_EMBEDDING_ENDPOINT=http://{mosec_embedding_host_ip}:6001
 docker run -d --name="embedding-langchain-mosec-server" -e http_proxy=$http_proxy -e https_proxy=$https_proxy -p 6000:6000 --ipc=host -e MOSEC_EMBEDDING_ENDPOINT=$MOSEC_EMBEDDING_ENDPOINT opea/embedding-langchain-mosec:neuralspeed
 ```
 
-# run client test
+## run client test
 
 ```
 curl localhost:6000/v1/embeddings \
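The client test in this hunk queries the `/v1/embeddings` endpoint (the curl command is truncated in the diff view). Downstream, embedding vectors are commonly compared with cosine similarity; as a generic, self-contained sketch (ours, not from the repo):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# A 45-degree angle between unit-direction vectors gives ~0.7071
print(round(cosine([1.0, 0.0], [1.0, 1.0]), 4))
# 0.7071
```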

comps/feedback_management/mongo/README.md

Lines changed: 6 additions & 6 deletions
@@ -34,15 +34,15 @@ docker build -t opea/feedbackmanagement-mongo-server:latest --build-arg https_pr
 
 1. Run mongoDB image
 
-```bash
-docker run -d -p 27017:27017 --name=mongo mongo:latest
-```
+   ```bash
+   docker run -d -p 27017:27017 --name=mongo mongo:latest
+   ```
 
 2. Run Feedback Management service
 
-```bash
-docker run -d --name="feedbackmanagement-mongo-server" -p 6016:6016 -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e MONGO_HOST=${MONGO_HOST} -e MONGO_PORT=${MONGO_PORT} -e DB_NAME=${DB_NAME} -e COLLECTION_NAME=${COLLECTION_NAME} opea/feedbackmanagement-mongo-server:latest
-```
+   ```bash
+   docker run -d --name="feedbackmanagement-mongo-server" -p 6016:6016 -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e MONGO_HOST=${MONGO_HOST} -e MONGO_PORT=${MONGO_PORT} -e DB_NAME=${DB_NAME} -e COLLECTION_NAME=${COLLECTION_NAME} opea/feedbackmanagement-mongo-server:latest
+   ```
 
 ### Invoke Microservice
 
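The hunk above passes `MONGO_HOST` and `MONGO_PORT` into the container as environment variables. As a hedged illustration only (the helper is ours, not the service's actual code), a MongoDB client would typically combine them into a connection URL:

```python
import os

def mongo_url(host=None, port=None):
    """Build a MongoDB connection URL from explicit args or from the
    MONGO_HOST/MONGO_PORT environment variables, with local defaults.
    Illustrative sketch; not taken from the feedback management service."""
    host = host or os.environ.get("MONGO_HOST", "127.0.0.1")
    port = port or os.environ.get("MONGO_PORT", "27017")
    return f"mongodb://{host}:{port}"

print(mongo_url("10.0.0.5", "27017"))
# mongodb://10.0.0.5:27017
```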
