Hi, and thanks for the great library!
I’m wondering whether it’s possible to call `model.learn_one()` in parallel from multiple workers or threads to speed up processing of incoming data. Specifically, I’d like to know:
- Is it safe or supported to call `model.learn_one()` concurrently on the same model instance?
- If not, is there a recommended approach for handling data updates in parallel (e.g., merging separately trained models)?
- My target components are:
  - `preprocessing.RobustScaler`
  - Regression models such as `HoeffdingTreeRegressor`
My use case involves multiple producers generating data samples simultaneously, and I’d like each to update the model as those samples arrive.
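For context, here is a minimal sketch of the kind of setup I have in mind. The lock around `learn_one` is just my current workaround assumption (serializing updates from each producer), not something I know the library supports or recommends, and the feature names and sample streams are made up for illustration:

```python
import threading

from river import preprocessing, tree

# Shared pipeline: robust scaling followed by a Hoeffding tree regressor.
model = preprocessing.RobustScaler() | tree.HoeffdingTreeRegressor()

# Workaround assumption: only one thread mutates the model at a time.
model_lock = threading.Lock()


def producer(samples):
    """Each producer feeds its own stream of (x, y) pairs into the shared model."""
    for x, y in samples:
        with model_lock:
            model.learn_one(x, y)


# Hypothetical sample streams, one per producer.
streams = [
    [({"feature": float(i)}, 2.0 * i) for i in range(100)],
    [({"feature": float(i)}, 2.0 * i) for i in range(100, 200)],
]

threads = [threading.Thread(target=producer, args=(s,)) for s in streams]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(model.predict_one({"feature": 50.0}))
```

With a single global lock the updates are effectively serialized, so this doesn’t actually give me a speedup, which is why I’m asking whether concurrent `learn_one()` calls or some form of model merging are viable.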