Commit 849cac9

Update README.md of Table in markdown (#717)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent f416f84 commit 849cac9

File tree

1 file changed: +21, -126 lines


README.md

Lines changed: 21 additions & 126 deletions
@@ -42,132 +42,27 @@ This modular approach allows developers to independently develop, deploy, and sc

The initially supported `Microservices` are described in the below table. More `Microservices` are on the way.

-<table>
-<tbody>
-<tr>
-<td>MicroService</td>
-<td>Framework</td>
-<td>Model</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="./comps/embeddings">Embedding</a></td>
-<td rowspan="2"><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td rowspan="2"><a href="https://huggingface.co/BAAI/bge-base-en-v1.5">BAAI/bge-base-en-v1.5</a></td>
-<td><a href="https://github.com/huggingface/tei-gaudi">TEI-Gaudi</a></td>
-<td>Gaudi2</td>
-<td>Embedding on Gaudi2</td>
-</tr>
-<tr>
-<td><a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
-<td>Xeon</td>
-<td>Embedding on Xeon CPU</td>
-</tr>
-<tr>
-<td><a href="./comps/retrievers">Retriever</a></td>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/BAAI/bge-base-en-v1.5">BAAI/bge-base-en-v1.5</a></td>
-<td><a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
-<td>Xeon</td>
-<td>Retriever on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="./comps/reranks">Reranking</a></td>
-<td rowspan="2"><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td ><a href="https://huggingface.co/BAAI/bge-reranker-base">BAAI/bge-reranker-base</a></td>
-<td><a href="https://github.com/huggingface/tei-gaudi">TEI-Gaudi</a></td>
-<td>Gaudi2</td>
-<td>Reranking on Gaudi2</td>
-</tr>
-<tr>
-<td><a href="https://huggingface.co/BAAI/bge-reranker-base">BBAAI/bge-reranker-base</a></td>
-<td><a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
-<td>Xeon</td>
-<td>Reranking on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="./comps/asr/whisper">ASR</a></td>
-<td rowspan="2">NA</a></td>
-<td rowspan="2"><a href="https://huggingface.co/openai/whisper-small">openai/whisper-small</a></td>
-<td rowspan="2">NA</td>
-<td>Gaudi2</td>
-<td>Audio-Speech-Recognition on Gaudi2</td>
-</tr>
-<tr>
-<td>Xeon</td>
-<td>Audio-Speech-RecognitionS on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="./comps/tts/speecht5">TTS</a></td>
-<td rowspan="2">NA</a></td>
-<td rowspan="2"><a href="https://huggingface.co/microsoft/speecht5_tts">microsoft/speecht5_tts</a></td>
-<td rowspan="2">NA</td>
-<td>Gaudi2</td>
-<td>Text-To-Speech on Gaudi2</td>
-</tr>
-<tr>
-<td>Xeon</td>
-<td>Text-To-Speech on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="4"><a href="./comps/dataprep">Dataprep</a></td>
-<td rowspan="2"><a href="https://qdrant.tech/">Qdrant</td>
-<td rowspan="2"><a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2">sentence-transformers/all-MiniLM-L6-v2</a></td>
-<td rowspan="4">NA</td>
-<td>Gaudi2</td>
-<td>Dataprep on Gaudi2</td>
-</tr>
-<tr>
-<td>Xeon</td>
-<td>Dataprep on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="https://redis.io/">Redis</td>
-<td rowspan="2"><a href="https://huggingface.co/BAAI/bge-base-en-v1.5">BAAI/bge-base-en-v1.5</a></td>
-<td>Gaudi2</td>
-<td>Dataprep on Gaudi2</td>
-</tr>
-<tr>
-<td>Xeon</td>
-<td>Dataprep on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="6"><a href="./comps/llms">LLM</a></td>
-<td rowspan="6"><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td rowspan="2"><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">Intel/neural-chat-7b-v3-3</a></td>
-<td><a href="https://github.com/huggingface/tgi-gaudi">TGI Gaudi</a></td>
-<td>Gaudi2</td>
-<td>LLM on Gaudi2</td>
-</tr>
-<tr>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon</td>
-<td>LLM on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">Intel/neural-chat-7b-v3-3</a></td>
-<td rowspan="2"><a href="https://github.com/ray-project/ray">Ray Serve</a></td>
-<td>Gaudi2</td>
-<td>LLM on Gaudi2</td>
-</tr>
-<tr>
-<td>Xeon</td>
-<td>LLM on Xeon CPU</td>
-</tr>
-<tr>
-<td rowspan="2"><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">Intel/neural-chat-7b-v3-3</a></td>
-<td rowspan="2"><a href="https://github.com/vllm-project/vllm/">vLLM</a></td>
-<td>Gaudi2</td>
-<td>LLM on Gaudi2</td>
-</tr>
-<tr>
-<td>Xeon</td>
-<td>LLM on Xeon CPU</td>
-</tr>
-</tbody>
-</table>
+| MicroService | Framework | Model | Serving | HW | Description |
+| --------------------------------------------- | ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------- | ------ | ------------------------------------- |
+| [Embedding](./comps/embeddings/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | [TEI-Gaudi](https://github.com/huggingface/tei-gaudi) | Gaudi2 | Embedding on Gaudi2 |
+| [Embedding](./comps/embeddings/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon | Embedding on Xeon CPU |
+| [Retriever](./comps/retrievers/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon | Retriever on Xeon CPU |
+| [Reranking](./comps/reranks/tei/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | [TEI-Gaudi](https://github.com/huggingface/tei-gaudi) | Gaudi2 | Reranking on Gaudi2 |
+| [Reranking](./comps/reranks/tei/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon | Reranking on Xeon CPU |
+| [ASR](./comps/asr/whisper/README.md) | NA | [openai/whisper-small](https://huggingface.co/openai/whisper-small) | NA | Gaudi2 | Audio-Speech-Recognition on Gaudi2 |
+| [ASR](./comps/asr/whisper/README.md) | NA | [openai/whisper-small](https://huggingface.co/openai/whisper-small) | NA | Xeon | Audio-Speech-Recognition on Xeon CPU |
+| [TTS](./comps/tts/speecht5/README.md) | NA | [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) | NA | Gaudi2 | Text-To-Speech on Gaudi2 |
+| [TTS](./comps/tts/speecht5/README.md) | NA | [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) | NA | Xeon | Text-To-Speech on Xeon CPU |
+| [Dataprep](./comps/dataprep/README.md) | [Qdrant](https://qdrant.tech/) | [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | NA | Gaudi2 | Dataprep on Gaudi2 |
+| [Dataprep](./comps/dataprep/README.md) | [Qdrant](https://qdrant.tech/) | [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | NA | Xeon | Dataprep on Xeon CPU |
+| [Dataprep](./comps/dataprep/README.md) | [Redis](https://redis.io/) | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | NA | Gaudi2 | Dataprep on Gaudi2 |
+| [Dataprep](./comps/dataprep/README.md) | [Redis](https://redis.io/) | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | NA | Xeon | Dataprep on Xeon CPU |
+| [LLM](./comps/llms/text-generation/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [TGI Gaudi](https://github.com/huggingface/tgi-gaudi) | Gaudi2 | LLM on Gaudi2 |
+| [LLM](./comps/llms/text-generation/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon | LLM on Xeon CPU |
+| [LLM](./comps/llms/text-generation/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [Ray Serve](https://github.com/ray-project/ray) | Gaudi2 | LLM on Gaudi2 |
+| [LLM](./comps/llms/text-generation/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [Ray Serve](https://github.com/ray-project/ray) | Xeon | LLM on Xeon CPU |
+| [LLM](./comps/llms/text-generation/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [vLLM](https://github.com/vllm-project/vllm/) | Gaudi2 | LLM on Gaudi2 |
+| [LLM](./comps/llms/text-generation/README.md) | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [vLLM](https://github.com/vllm-project/vllm/) | Xeon | LLM on Xeon CPU |

A `Microservice` can be created by using the decorator `register_microservice`. Taking the `embedding microservice` as an example:
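The embedding example itself is not shown in this excerpt of the diff. As a rough illustration of the registration pattern the sentence above describes, here is a simplified, self-contained sketch: a decorator that records a handler function together with its routing metadata. The registry dictionary, the keyword arguments, and the 768-dimension placeholder vector are illustrative assumptions, not the actual `comps` implementation, which also wires the handler into a running HTTP service.

```python
# Illustrative stand-in for the register_microservice pattern.
# SERVICE_REGISTRY and the decorator internals are assumptions for
# this sketch, not the real `comps` package API.
SERVICE_REGISTRY = {}

def register_microservice(name, endpoint, host="0.0.0.0", port=8000):
    """Record a handler and its routing metadata under `name`."""
    def decorator(func):
        SERVICE_REGISTRY[name] = {
            "endpoint": endpoint,
            "host": host,
            "port": port,
            "handler": func,
        }
        return func  # the handler itself is returned unchanged
    return decorator

@register_microservice(name="embedding", endpoint="/v1/embeddings", port=6000)
def embedding(text: str) -> list:
    # Placeholder: a real embedding microservice would call a TEI or
    # TEI-Gaudi model server here and return the model's vector.
    return [0.0] * 768

print(SERVICE_REGISTRY["embedding"]["endpoint"])  # /v1/embeddings
```

The decorator leaves the function callable as-is and only attaches metadata, so the same handler can be unit-tested directly and served over HTTP by the framework.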
