Commit a533038

XinyaoWa, XuhuiRen, lvliang-intel, Spycsh, and pre-commit-ci[bot] authored and committed
Add knowledge graph components (opea-project#171)
Squashed commits:

* enable ragas (opea-project#129)
* Fix RAG performance issues (opea-project#132)
* add microservice level perf statistics (opea-project#135)
* Add More Contents to the Table of MicroService (opea-project#141)
* Use common security content for OPEA projects (opea-project#151) (includes revert of commit 69615b1, "add python coverage")
* Enable vLLM Gaudi support for LLM service based on officially habana vllm release (opea-project#137)
* add knowledge graph; knowledge graph microservice update
* Support Dataprep Microservice with Llama Index (opea-project#154)
* Support Embedding Microservice with Llama Index (opea-project#150)
* Support Ollama microservice (opea-project#142)
* Fix dataprep microservice path issue (opea-project#163)
* update CI to support dataprep_redis path level change (opea-project#155)
* Add Gateway for Translation (opea-project#169)
* Update LLM readme (opea-project#172)
* add milvus microservice (opea-project#158)
* enable python coverage (opea-project#149)
* Add Ray version for multi file process (opea-project#119)
* Add codecov (opea-project#178)
* Rename lm-eval folder to utils/lm-eval (opea-project#179)
* Support rerank and retrieval of RAG OPT (opea-project#164)
* Support vLLM XFT LLM microservice (opea-project#174)
* Update Dataprep Microservice README (opea-project#173)
* fixed milvus port conflict issues during deployment (opea-project#183)
* remove hard address; update readme; add example data and ingestion; fix typos and requirements
* Fix dataprep timeout issue (opea-project#203)
* Add a new embedding MosecEmbedding (opea-project#182)
* expand timeout for microservice test (opea-project#208)
* [pre-commit.ci] auto fixes from pre-commit.com hooks (https://pre-commit.ci)

Signed-off-by: XuhuiRen <[email protected]>
Signed-off-by: Xinyao Wang <[email protected]>
Signed-off-by: lvliang-intel <[email protected]>
Signed-off-by: zehao-intel <[email protected]>
Signed-off-by: chensuyue <[email protected]>
Signed-off-by: tianyil1 <[email protected]>
Signed-off-by: letonghan <[email protected]>
Signed-off-by: jinjunzh <[email protected]>
Signed-off-by: Sun, Xuehao <[email protected]>
Signed-off-by: Chendi Xue <[email protected]>
Signed-off-by: changwangss <[email protected]>
Signed-off-by: Xinyu Ye <[email protected]>
Signed-off-by: Jincheng Miao <[email protected]>
Signed-off-by: sharanshirodkar7 <[email protected]>
Co-authored-by: XuhuiRen <[email protected]>
Co-authored-by: lvliang-intel <[email protected]>
Co-authored-by: Sihan Chen <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: zehao-intel <[email protected]>
Co-authored-by: chen, suyue <[email protected]>
Co-authored-by: Tianyi Liu <[email protected]>
Co-authored-by: Letong Han <[email protected]>
Co-authored-by: jasperzhu <[email protected]>
Co-authored-by: Chendi.Xue <[email protected]>
Co-authored-by: Sun, Xuehao <[email protected]>
Co-authored-by: Wang, Chang <[email protected]>
Co-authored-by: XinyuYe-Intel <[email protected]>
Co-authored-by: Jincheng Miao <[email protected]>
1 parent 04e0b70 · commit a533038

19 files changed: +452 −0 lines

comps/__init__.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -16,6 +16,7 @@
     TextDoc,
     RAGASParams,
     RAGASScores,
+    GraphDoc,
     LVMDoc,
 )
```

comps/cores/mega/constants.py

Lines changed: 3 additions & 0 deletions
```diff
@@ -27,6 +27,7 @@ class ServiceType(Enum):
     UNDEFINED = 10
     RAGAS = 11
     LVM = 12
+    KNOWLEDGE_GRAPH = 13


 class MegaServiceEndpoint(Enum):
@@ -50,6 +51,8 @@ class MegaServiceEndpoint(Enum):
     RERANKING = "/v1/reranking"
     GUARDRAILS = "/v1/guardrails"
     RAGAS = "/v1/ragas"
+    GRAPHS = "/v1/graphs"
+
     # COMMON
     LIST_SERVICE = "/v1/list_service"
     LIST_PARAMETERS = "/v1/list_parameters"
```
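The two enum additions can be reproduced in isolation to show how the new service type and endpoint pair up (a minimal sketch; only the members visible in the hunks above are included, the real enums carry many more):

```python
from enum import Enum

# Minimal reproduction of the members shown in the diff above.
class ServiceType(Enum):
    RAGAS = 11
    LVM = 12
    KNOWLEDGE_GRAPH = 13  # new in this commit

class MegaServiceEndpoint(Enum):
    RAGAS = "/v1/ragas"
    GRAPHS = "/v1/graphs"  # new in this commit

print(ServiceType.KNOWLEDGE_GRAPH.value, MegaServiceEndpoint.GRAPHS.value)
```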

comps/cores/proto/docarray.py

Lines changed: 13 additions & 0 deletions
```diff
@@ -104,6 +104,19 @@ class RAGASScores(BaseDoc):
     context_precision: float


+class GraphDoc(BaseDoc):
+    text: str
+    strtype: Optional[str] = Field(
+        description="type of input query, can be 'query', 'cypher', 'rag'",
+        default="query",
+    )
+    max_new_tokens: Optional[int] = Field(default=1024)
+    rag_index_name: Optional[str] = Field(default="rag")
+    rag_node_label: Optional[str] = Field(default="Task")
+    rag_text_node_properties: Optional[list] = Field(default=["name", "description", "status"])
+    rag_embedding_node_property: Optional[str] = Field(default="embedding")
+
+
 class LVMDoc(BaseDoc):
     image: str
     prompt: str
```
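The new `GraphDoc` request schema can be illustrated with a plain-dataclass stand-in (the real class derives from docarray's `BaseDoc` and uses pydantic `Field`; this sketch only mirrors the field names and defaults shown in the diff):

```python
from dataclasses import dataclass, field

# Dataclass stand-in for GraphDoc, mirroring the defaults added in
# comps/cores/proto/docarray.py. Not the real docarray class.
@dataclass
class GraphDoc:
    text: str
    strtype: str = "query"  # one of 'query', 'cypher', 'rag'
    max_new_tokens: int = 1024
    rag_index_name: str = "rag"
    rag_node_label: str = "Task"
    rag_text_node_properties: list = field(
        default_factory=lambda: ["name", "description", "status"]
    )
    rag_embedding_node_property: str = "embedding"

# A request that queries the graph directly with Cypher:
doc = GraphDoc(text="MATCH (t:Task {status:'open'}) RETURN count(*)", strtype="cypher")
```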

comps/knowledgegraphs/README.md

Lines changed: 146 additions & 0 deletions
# Knowledge Graph Microservice

This microservice is designed to efficiently handle and retrieve information from a knowledge graph. It integrates a text retriever, knowledge graph quick search, and an LLM agent, which can be combined to enhance question answering.

The service supports three modes:

- "cypher": Query the knowledge graph directly with Cypher
- "rag": Apply similarity search on embeddings of the knowledge graph
- "query": An LLM agent automatically chooses tools (RAG or CypherChain) to enhance the question answering

Here is the overall workflow:

![Workflow](doc/workflow.png)

A prerequisite for using this microservice is that users must have a knowledge graph database already running; currently we support [Neo4J](https://neo4j.com/) for quick deployment. Users need to set the graph service's endpoint in an environment variable, which the microservice uses for data ingestion and retrieval. To use the "rag" and "query" modes, an LLM text generation service (e.g., TGI, vLLM, or Ray) must also be running.

Overall, this microservice provides efficient support for applications that work with graph datasets, especially for answering multi-part questions or any other cases involving complex relationships between entities.
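The three modes amount to a dispatch on the request's `strtype` field. A minimal sketch (handler bodies are placeholders, not the service's real retriever, Cypher chain, or agent):

```python
# Hypothetical dispatcher illustrating the three modes. The real service
# wires these handlers to LangChain components; here they are stubs.
def run_cypher(text: str) -> str:
    return f"cypher:{text}"  # would execute the Cypher query against Neo4j

def run_rag(text: str) -> str:
    return f"rag:{text}"  # would do similarity search on graph embeddings

def run_agent(text: str) -> str:
    return f"query:{text}"  # would let an LLM agent pick RAG or CypherChain

HANDLERS = {"cypher": run_cypher, "rag": run_rag, "query": run_agent}

def answer(text: str, strtype: str = "query") -> str:
    try:
        return HANDLERS[strtype](text)
    except KeyError:
        raise ValueError(f"unknown strtype: {strtype!r}")
```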
# 🚀1. Start Microservice with Docker
## 1.1 Setup Environment Variables

```bash
export NEO4J_ENDPOINT="neo4j://${your_ip}:7687"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD=${define_a_password}
export HUGGINGFACEHUB_API_TOKEN=${your_huggingface_api_token}
export LLM_ENDPOINT="http://${your_ip}:8080"
export LLM_MODEL="meta-llama/Llama-2-7b-hf"
export AGENT_LLM="HuggingFaceH4/zephyr-7b-beta"
```
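Before moving on, it can help to confirm nothing was left unset (a small bash sketch using `${!v}` indirect expansion; not part of the service itself):

```shell
# Prints a line for every required variable that is empty or unset (bash only).
check_vars() {
  for v in "$@"; do
    [ -n "${!v}" ] || echo "missing: $v"
  done
}

check_vars NEO4J_ENDPOINT NEO4J_USERNAME NEO4J_PASSWORD \
           HUGGINGFACEHUB_API_TOKEN LLM_ENDPOINT LLM_MODEL AGENT_LLM
```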
## 1.2 Start Neo4j Service

```bash
docker pull neo4j

docker run --rm \
    --publish=7474:7474 --publish=7687:7687 \
    --env NEO4J_AUTH=$NEO4J_USERNAME/$NEO4J_PASSWORD \
    --volume=$PWD/neo4j_data:"/data" \
    --env='NEO4JLABS_PLUGINS=["apoc"]' \
    neo4j
```
## 1.3 Start LLM Service for "rag"/"query" mode

You can start any LLM microservice; here we take TGI as an example.

```bash
docker run -p 8080:80 \
    -v $PWD/llm_data:/data --runtime=habana \
    -e HABANA_VISIBLE_DEVICES=all \
    -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
    -e HUGGING_FACE_HUB_TOKEN=$HUGGINGFACEHUB_API_TOKEN \
    --cap-add=sys_nice \
    --ipc=host \
    ghcr.io/huggingface/tgi-gaudi:2.0.0 \
    --model-id $LLM_MODEL \
    --max-input-tokens 1024 \
    --max-total-tokens 2048
```

Verify the LLM service:

```bash
curl $LLM_ENDPOINT/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
    -H 'Content-Type: application/json'
```
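The same verification call can be built in Python with only the standard library (a sketch; the endpoint value is an assumption, substitute your own `LLM_ENDPOINT`):

```python
import json
import urllib.request

def build_generate_request(endpoint: str, prompt: str, max_new_tokens: int = 32):
    # Builds the same JSON body the curl verification example sends to TGI.
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return urllib.request.Request(
        f"{endpoint}/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("http://localhost:8080", "What is Deep Learning?")
# urllib.request.urlopen(req) would send it once the TGI service is up.
```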
## 1.4 Start Microservice

```bash
cd ../..
docker build -t opea/knowledge_graphs:latest \
    --build-arg https_proxy=$https_proxy \
    --build-arg http_proxy=$http_proxy \
    -f comps/knowledgegraphs/langchain/docker/Dockerfile .

docker run --rm \
    --name="knowledge-graph-server" \
    -p 8060:8060 \
    --ipc=host \
    -e http_proxy=$http_proxy \
    -e https_proxy=$https_proxy \
    -e NEO4J_ENDPOINT=$NEO4J_ENDPOINT \
    -e NEO4J_USERNAME=$NEO4J_USERNAME \
    -e NEO4J_PASSWORD=$NEO4J_PASSWORD \
    -e HUGGINGFACEHUB_API_TOKEN=$HUGGINGFACEHUB_API_TOKEN \
    -e LLM_ENDPOINT=$LLM_ENDPOINT \
    opea/knowledge_graphs:latest
```
# 🚀2. Consume Knowledge Graph Service

## 2.1 Cypher mode

```bash
curl http://${your_ip}:8060/v1/graphs \
    -X POST \
    -d "{\"text\":\"MATCH (t:Task {status:'open'}) RETURN count(*)\",\"strtype\":\"cypher\"}" \
    -H 'Content-Type: application/json'
```

Example output:

![Cypher Output](doc/output_cypher.png)
## 2.2 RAG mode

```bash
curl http://${your_ip}:8060/v1/graphs \
    -X POST \
    -d "{\"text\":\"How many open tickets are there?\",\"strtype\":\"rag\", \"max_new_tokens\":128}" \
    -H 'Content-Type: application/json'
```

Example output:

![RAG Output](doc/output_rag.png)
## 2.3 Query mode

First example:

```bash
curl http://${your_ip}:8060/v1/graphs \
    -X POST \
    -d "{\"text\":\"Which tasks have optimization in their description?\",\"strtype\":\"query\"}" \
    -H 'Content-Type: application/json'
```

Example output:

![Query Output](doc/output_query1.png)

Second example:

```bash
curl http://${your_ip}:8060/v1/graphs \
    -X POST \
    -d "{\"text\":\"Which team is assigned to maintain PaymentService?\",\"strtype\":\"query\"}" \
    -H 'Content-Type: application/json'
```

Example output:

![Query Output](doc/output_query2.png)
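The curl examples above all POST the same JSON shape to `/v1/graphs`; a small stdlib-only Python client can build those requests (a sketch; the host/port are assumptions, substitute the machine running the microservice):

```python
import json
import urllib.request

SERVICE_URL = "http://localhost:8060/v1/graphs"  # assumed host; replace as needed

def graphs_request(text: str, strtype: str, **extra):
    # Builds the same JSON body the curl examples send (e.g. max_new_tokens
    # for "rag" mode is passed through **extra).
    body = {"text": text, "strtype": strtype, **extra}
    return urllib.request.Request(
        SERVICE_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = graphs_request("How many open tickets are there?", "rag", max_new_tokens=128)
# urllib.request.urlopen(req) would send it once the microservice is running.
```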

comps/knowledgegraphs/__init__.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -0,0 +1,2 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
```

comps/knowledgegraphs/build_docker.sh

Lines changed: 7 additions & 0 deletions
```diff
@@ -0,0 +1,7 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
+docker build -t opea/knowledge_graphs:latest \
+    --build-arg https_proxy=$https_proxy \
+    --build-arg http_proxy=$http_proxy \
+    -f comps/knowledgegraphs/langchain/docker/Dockerfile .
```
Four binary image files added under comps/knowledgegraphs/doc/ (407 KB, 467 KB, 418 KB, 711 KB; not rendered here).
