
Commit ac52f79

Add benchmark part into top README (#127)
* Add benchmark part in top README
  Signed-off-by: lvliang-intel <[email protected]>
* add pic
  Signed-off-by: lvliang-intel <[email protected]>

Signed-off-by: lvliang-intel <[email protected]>
1 parent 7318fb8 commit ac52f79

File tree

3 files changed: +42 -1 lines changed


README.md

Lines changed: 41 additions & 0 deletions
@@ -127,6 +127,47 @@ isolation on Kubernetes nodes. See [Platform
optimization](doc/platform-optimization/README.md).

## Benchmark

We provide an OPEA benchmarking tool designed for microservice performance testing. It allows you to define test cases for various services in YAML configurations, run load tests using `stresscli` (built on top of [locust](https://github.com/locustio/locust)), and analyze the results for performance insights.

### Features
- **Services load testing**: Simulates high concurrency levels to test services such as LLM, reranking, and ASR, as well as end-to-end (E2E) pipelines.
- **YAML-based configuration**: Easily define test cases, service endpoints, and parameters.
- **Service metrics collection**: Optionally collect service metrics to analyze performance bottlenecks.
- **Flexible testing**: Supports a variety of tests, such as chatqna, codegen, codetrans, faqgen, audioqna, and visualqna.
- **Data analysis and visualization**: Visualize test results to uncover performance trends and bottlenecks.

### How to use
**Define Test Cases**: Configure your tests in the [benchmark.yaml](./evals/benchmark/benchmark.py) file.
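To make the shape of such a configuration concrete, here is a minimal, hypothetical sketch; every field name below except `test_output_dir` (referenced later in this section) is an illustrative placeholder, and the authoritative schema lives in [evals/benchmark/README.md](./evals/benchmark/README.md).

```bash
# Hypothetical sketch only: writes a scratch config to /tmp so the real
# benchmark.yaml is untouched. Field names are placeholders, not the
# project's actual schema -- see evals/benchmark/README.md for that.
cat <<'EOF' > /tmp/benchmark.sketch.yaml
test_suite_config:
  test_output_dir: /tmp/benchmark_output  # results directory (see below)
  concurrency: 16                         # placeholder: simulated load level
EOF
```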
**Increase File Descriptor Limit (if running large-scale tests)**:

```bash
ulimit -n 100000
```

This ensures the system can handle high concurrency by allowing more open file descriptors and connections.
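Since `ulimit` only affects the current shell session, it can be worth double-checking that the new limit actually took effect before launching a run; the commands below are standard shell builtins, nothing project-specific.

```bash
# Verify the limits in the same shell that will run the benchmark;
# ulimit settings do not persist across sessions.
ulimit -n    # soft limit currently in effect
ulimit -Hn   # hard ceiling that the soft limit may not exceed
```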
**Run the benchmark script**:
```bash
python evals/benchmark/benchmark.py
```
Results will be saved in the directory specified by `test_output_dir` in the configuration.
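As a quick sanity check after a run, you can list whatever was written to the configured output directory; the path below is an example value for `test_output_dir`, not a project default.

```bash
# Example only: substitute the test_output_dir value from your own
# configuration for the placeholder path.
ls -lR /tmp/benchmark_output
```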
For more details on configuring test cases, refer to the [README](./evals/benchmark/README.md).
### Grafana Dashboards
Prometheus metrics collected during the tests can be used to create Grafana dashboards for visualizing performance trends and monitoring bottlenecks. For more information, refer to the [Grafana README](./evals/benchmark/grafana/README.md).
![tgi microservice dashboard](./assets/grafana_dashboard.png)
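Before building dashboards, a direct query against the Prometheus HTTP API is a quick way to confirm that metrics are flowing; the sketch below assumes a Prometheus server on its default port and uses the built-in `up` metric, neither of which is specific to this repository.

```bash
# Assumes Prometheus on its default port 9090; `up` is a built-in metric
# reporting which scrape targets are currently reachable (1 = up).
curl -s 'http://localhost:9090/api/v1/query?query=up'
```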
170+
130171
## Additional Content
- [Code of Conduct](https://github.com/opea-project/docs/tree/main/community/CODE_OF_CONDUCT.md)
- [Contribution](https://github.com/opea-project/docs/tree/main/community/CONTRIBUTING.md)

assets/grafana_dashboard.png

414 KB

evals/benchmark/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # OPEA Benchmark Tool

-This Tool provides a microservices benchmarking framework that uses YAML configurations to define test cases for different services. It executes these tests using `stresscli`, built on top of [locust](https://github.com/locustio/locust), a performance/load testing tool for HTTP and other protocols and logs the results for performance analysis and data visualization.
+This Tool provides a microservice benchmarking framework that uses YAML configurations to define test cases for different services. It executes these tests using `stresscli`, built on top of [locust](https://github.com/locustio/locust), a performance/load testing tool for HTTP and other protocols, and logs the results for performance analysis and data visualization.

 ## Features
