README.md: 5 additions & 0 deletions
@@ -214,6 +214,11 @@ To stop the currently running TorchServe instance, run:
 torchserve --stop
 ```
 
+### Inspect the logs
+All the logs you've seen output to stdout related to model registration, management, and inference are recorded in the `/logs` folder.
+
+High-level performance data like Throughput or Percentile Precision can be generated with [Benchmark](benchmark/README.md) and visualized in a report.
+
 ### Concurrency And Number of Workers
 TorchServe exposes configurations that allow the user to configure the number of worker threads on CPUs and GPUs. There is an important config property that can speed up the server depending on the workload.
 *Note: the following property has a bigger impact under heavy workloads.*
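As a hedged illustration of the new "Inspect the logs" section above, here is a minimal shell sketch for browsing the `/logs` folder. The specific file names (`ts_log.log`, `access_log.log`) are assumptions about TorchServe's default logging setup and may differ in your installation.

```
# List the log files TorchServe has written so far.
ls logs/

# Follow the main server log as new entries arrive
# (ts_log.log is an assumed default name; adjust to whatever `ls logs/` shows).
tail -f logs/ts_log.log

# Skim recent request activity (access_log.log is likewise an assumed default name).
tail -n 50 logs/access_log.log
```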
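For the "Concurrency And Number of Workers" section, a sketch of how worker-related settings are commonly supplied through a `config.properties` file passed with `--ts-config`, assuming a local `model_store` directory. The property names `default_workers_per_model` and `number_of_netty_threads` are assumptions to be verified against the TorchServe configuration documentation; the README does not say which property it has in mind.

```
# Sketch only: override worker-related settings in a custom config file.
# Property names below are assumptions; confirm them in the configuration docs.
cat > config.properties <<'EOF'
default_workers_per_model=4
number_of_netty_threads=8
EOF

# Start TorchServe with the custom configuration.
torchserve --start --model-store model_store --ts-config config.properties
```

Worker counts for an individual model can also be adjusted at runtime through TorchServe's management API rather than in the static config file.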