Commit f6ae4fa

doc: Fix headings (#706)

Only one H1 heading with the title is allowed. The rest must be H2 and deeper, so adjust them accordingly.

Signed-off-by: David B. Kinder <[email protected]>

Parent: ef90fbb
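
As a quick illustration of the rule this commit enforces, a page keeps a single H1 for its title and nests everything else beneath it:

```markdown
# Page Title       <!-- the only H1: the document title -->

## First section   <!-- top-level sections become H2 -->

### A subsection   <!-- and deeper levels follow from there -->
```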

3 files changed: 20 additions, 20 deletions


comps/embeddings/predictionguard/README.md (5 additions, 5 deletions)

````diff
@@ -6,30 +6,30 @@ This embedding microservice is designed to efficiently convert text into vectori
 
 **Note** - The BridgeTower model implemented in Prediction Guard can actually embed text, images, or text + images (jointly). For now this service only embeds text, but a follow on contribution will enable the multimodal functionality.
 
-# 🚀 Start Microservice with Docker
+## 🚀 Start Microservice with Docker
 
-## Setup Environment Variables
+### Setup Environment Variables
 
 Setup the following environment variables first
 
 ```bash
 export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
 ```
 
-## Build Docker Images
+### Build Docker Images
 
 ```bash
 cd ../../..
 docker build -t opea/embedding-predictionguard:latest -f comps/embeddings/predictionguard/Dockerfile .
 ```
 
-## Start Service
+### Start Service
 
 ```bash
 docker run -d --name="embedding-predictionguard" -p 6000:6000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/embedding-predictionguard:latest
 ```
 
-# 🚀 Consume Embeddings Service
+## 🚀 Consume Embeddings Service
 
 ```bash
 curl localhost:6000/v1/embeddings \
````
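
The hunk above cuts the consume example off at the `curl` line. A minimal sketch of the request body, assuming the `input` field follows the OpenAI-style `/v1/embeddings` convention (an assumption here, not something the diff confirms):

```shell
# Minimal sketch of an embeddings request body. The "input" field name is
# assumed to follow the OpenAI-style /v1/embeddings schema; verify against
# the service before relying on it.
PAYLOAD='{"input": "Hello, world!"}'
echo "$PAYLOAD"

# With the container from the diff running, the truncated curl above would
# continue roughly like this (hypothetical continuation):
#   curl localhost:6000/v1/embeddings -X POST \
#     -H 'Content-Type: application/json' -d "$PAYLOAD"
```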

comps/llms/text-generation/predictionguard/README.md (7 additions, 7 deletions)

````diff
@@ -1,27 +1,27 @@
-# Introduction
+# Prediction Guard Introduction
 
 [Prediction Guard](https://docs.predictionguard.com) allows you to utilize hosted open access LLMs, LVMs, and embedding functionality with seamlessly integrated safeguards. In addition to providing a scalable access to open models, Prediction Guard allows you to configure factual consistency checks, toxicity filters, PII filters, and prompt injection blocking. Join the [Prediction Guard Discord channel](https://discord.gg/TFHgnhAFKd) and request an API key to get started.
 
-# Get Started
+## Get Started
 
-## Build Docker Image
+### Build Docker Image
 
 ```bash
 cd ../../..
 docker build -t opea/llm-textgen-predictionguard:latest -f comps/llms/text-generation/predictionguard/Dockerfile .
 ```
 
-## Run the Predictionguard Microservice
+### Run the Predictionguard Microservice
 
 ```bash
 docker run -d -p 9000:9000 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY --name llm-textgen-predictionguard opea/llm-textgen-predictionguard:latest
 ```
 
-# Consume the Prediction Guard Microservice
+## Consume the Prediction Guard Microservice
 
 See the [Prediction Guard docs](https://docs.predictionguard.com/) for available model options.
 
-## Without streaming
+### Without streaming
 
 ```bash
 curl -X POST http://localhost:9000/v1/chat/completions \
@@ -37,7 +37,7 @@ curl -X POST http://localhost:9000/v1/chat/completions \
 }'
 ```
 
-## With streaming
+### With streaming
 
 ```bash
 curl -N -X POST http://localhost:9000/v1/chat/completions \
````
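
Both curl examples above are truncated by the hunks. A sketch of a complete non-streaming request body, assuming the endpoint mirrors the OpenAI chat-completions schema; the model name is a placeholder, and the Prediction Guard docs linked in the diff list the real options:

```shell
# Hypothetical chat-completions body. "example-model" is a placeholder, and
# the field layout assumes the OpenAI-compatible schema.
PAYLOAD='{"model": "example-model", "messages": [{"role": "user", "content": "Hello"}], "stream": false}'
echo "$PAYLOAD"

# Non-streaming (hypothetical continuation of the truncated curl):
#   curl -X POST http://localhost:9000/v1/chat/completions \
#     -H 'Content-Type: application/json' -d "$PAYLOAD"
# Streaming would be the same request with "stream": true plus curl's -N flag.
```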

comps/lvms/predictionguard/README.md (8 additions, 8 deletions)

````diff
@@ -4,44 +4,44 @@
 
 Visual Question and Answering is one of the multimodal tasks empowered by LVMs (Large Visual Models). This microservice supports visual Q&A by using a LLaVA model available via the Prediction Guard API. It accepts two inputs: a prompt and an image. It outputs the answer to the prompt about the image.
 
-# 🚀1. Start Microservice with Python
+## 🚀1. Start Microservice with Python
 
-## 1.1 Install Requirements
+### 1.1 Install Requirements
 
 ```bash
 pip install -r requirements.txt
 ```
 
-## 1.2 Start LVM Service
+### 1.2 Start LVM Service
 
 ```bash
 python lvm.py
 ```
 
-# 🚀2. Start Microservice with Docker (Option 2)
+## 🚀2. Start Microservice with Docker (Option 2)
 
-## 2.1 Setup Environment Variables
+### 2.1 Setup Environment Variables
 
 Setup the following environment variables first
 
 ```bash
 export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
 ```
 
-## 2.1 Build Docker Images
+### 2.1 Build Docker Images
 
 ```bash
 cd ../../..
 docker build -t opea/lvm-predictionguard:latest -f comps/lvms/predictionguard/Dockerfile .
 ```
 
-## 2.2 Start Service
+### 2.2 Start Service
 
 ```bash
 docker run -d --name="lvm-predictionguard" -p 9399:9399 -e PREDICTIONGUARD_API_KEY=$PREDICTIONGUARD_API_KEY opea/lvm-predictionguard:latest
 ```
 
-# 🚀3. Consume LVM Service
+## 🚀3. Consume LVM Service
 
 ```bash
 curl -X POST http://localhost:9399/v1/lvm \
````
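
The consume example above is also truncated. Per the README text in the diff, the service takes a prompt and an image; the field names below (`image`, `prompt`) and the base64 placeholder are guesses at the request schema and should be confirmed against `lvm.py`:

```shell
# Sketch of an LVM request body. The field names "image" and "prompt" are
# assumptions, as is the placeholder standing in for base64 image data.
IMAGE_B64="placeholder-base64-image-data"
PAYLOAD="{\"image\": \"$IMAGE_B64\", \"prompt\": \"What is in this image?\"}"
echo "$PAYLOAD"

# Hypothetical continuation of the truncated curl above:
#   curl -X POST http://localhost:9399/v1/lvm \
#     -H 'Content-Type: application/json' -d "$PAYLOAD"
```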
