Commit 032ddbc

Doc: Fix broken links (#439)
Signed-off-by: Lianhao Lu <[email protected]>
1 parent b224b65 commit 032ddbc

6 files changed: +5 −8 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -49,7 +49,7 @@ The following steps are optional. They're only required if you want to run the w
 
 Follow [GMC README](https://github.com/opea-project/GenAIInfra/blob/main/microservices-connector/README.md)
 to install GMC into your kubernetes cluster. [GenAIExamples](https://github.com/opea-project/GenAIExamples) contains several sample GenAI example use case pipelines such as ChatQnA, DocSum, etc.
-Once you have deployed GMC in your Kubernetes cluster, you can deploy any of the example pipelines by following its Readme file (e.g. [Docsum](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/kubernetes/README.md)).
+Once you have deployed GMC in your Kubernetes cluster, you can deploy any of the example pipelines by following its Readme file (e.g. [Docsum](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/kubernetes/intel/README_gmc.md)).
 
 ### Use helm charts to deploy
 
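For context, the deployment flow this hunk's text describes can be exercised roughly as follows. This is a minimal sketch; the manifest name and namespace are illustrative assumptions, not part of this commit:

```bash
# Hypothetical GMC pipeline deployment; "docsum_xeon.yaml" and the
# "docsum" namespace are illustrative assumptions, not from this diff.
kubectl create namespace docsum
kubectl apply -f docsum_xeon.yaml -n docsum   # manifest from the example's kubernetes dir
kubectl get pods -n docsum -w                 # watch the pipeline services come up
```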

helm-charts/common/tei/README.md

Lines changed: 1 addition & 1 deletion
@@ -41,4 +41,4 @@ curl http://localhost:2081/embed -X POST -d '{"inputs":"What is Deep Learning?"}
 | global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, tei will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
 | image.repository | string | `"ghcr.io/huggingface/text-embeddings-inference"` | |
 | image.tag | string | `"cpu-1.5"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
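The values in this table can be overridden at install time. A minimal sketch, assuming the chart is installed from helm-charts/common/tei; the release name and overrides are illustrative:

```bash
# Illustrative install overriding values from the table above; the
# release name "tei" and the overrides are assumptions, not from this diff.
helm install tei ./helm-charts/common/tei \
  --set global.modelUseHostPath=/mnt/opea-models \
  --set image.tag=cpu-1.5
```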

helm-charts/common/teirerank/README.md

Lines changed: 1 addition & 1 deletion
@@ -44,4 +44,4 @@ curl http://localhost:2082/rerank \
 | global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, teirerank will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
 | image.repository | string | `"ghcr.io/huggingface/text-embeddings-inference"` | |
 | image.tag | string | `"cpu-1.5"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
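Enabling the autoscaling value called out in this hunk would look roughly like this; the release name and chart path are assumptions, and the linked HPA doc should be read first:

```bash
# Illustrative upgrade flipping the HPA flag from the table above;
# the "teirerank" release name and chart path are assumptions.
helm upgrade teirerank ./helm-charts/common/teirerank \
  --set horizontalPodAutoscaler.enabled=true
```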

helm-charts/common/tgi/README.md

Lines changed: 1 addition & 1 deletion
@@ -48,4 +48,4 @@ curl http://localhost:2080/generate \
 | global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
 | image.repository | string | `"ghcr.io/huggingface/text-generation-inference"` | |
 | image.tag | string | `"1.4"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
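Pinning the image coordinates from this table follows the same pattern; the values below mirror the table and the release name is an illustrative assumption:

```bash
# Illustrative install pinning the image shown in the table above;
# the release name "tgi" is an assumption, not from this diff.
helm install tgi ./helm-charts/common/tgi \
  --set image.repository=ghcr.io/huggingface/text-generation-inference \
  --set image.tag=1.4
```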

helm-charts/common/vllm/README.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 Helm chart for deploying vLLM Inference service.
 
-Refer to [Deploy with Helm Charts](../README.md) for global guides.
+Refer to [Deploy with Helm Charts](../../README.md) for global guides.
 
 ## Installing the Chart
 
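The fix points the link at the top-level helm-charts guide, two directories up from this chart's README rather than one. Installing the chart itself would look roughly like this; the release name and value override are illustrative assumptions:

```bash
# Illustrative install of the vLLM chart; the "vllm" release name and the
# modelUseHostPath override (borrowed from the sibling charts above)
# are assumptions, not from this diff.
helm install vllm ./helm-charts/common/vllm \
  --set global.modelUseHostPath=/mnt/opea-models
```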

microservices-connector/config/samples/ChatQnA/use_cases.md

Lines changed: 0 additions & 3 deletions
@@ -28,9 +28,6 @@ For Gaudi:
 - tei-embedding-service: opea/tei-gaudi:latest
 - tgi-service: ghcr.io/huggingface/tgi-gaudi:1.2.1
 
-> [NOTE]
-> Refer to [Xeon README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/xeon/README.md) or [Gaudi README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/gaudi/README.md) to build the OPEA images. These too will be available on Docker Hub soon to simplify use.
-
 ## Deploy ChatQnA pipeline
 
 There are 3 use cases for ChatQnA example:
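Deploying one of those use cases with the Gaudi images listed above would look roughly like this; the sample manifest name and namespace are illustrative assumptions based on this samples directory:

```bash
# Illustrative ChatQnA deployment via a GMC sample; the manifest file
# name and the "chatqa" namespace are assumptions, not from this diff.
kubectl create namespace chatqa
kubectl apply -f microservices-connector/config/samples/ChatQnA/chatQnA_gaudi.yaml -n chatqa
kubectl get pods -n chatqa -w
```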
