Training across devices #83
Edge devices / edge cloud need more consistent definitions, but in general here is how I currently see it. Privacy was named as a motivation; IMHO for that use case it would make sense if the user could define privacy zone(s), so that compute offload to one or more edge devices/servers stays under the user's privacy control. The compute offload mechanisms should therefore allow users to select the offload target (e.g. an edge server within the privacy zone). The authors made a DNN-specific proposal for splitting the load.
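To illustrate, purely as a sketch: user-selectable offload targets constrained to a privacy zone could look roughly like this. None of these interfaces exist in the web platform; `OffloadTarget` and `discoverOffloadTargets` are hypothetical names.

```ts
// Hypothetical API sketch; none of these names exist in the web platform.
interface OffloadTarget {
  id: string;
  kind: "local" | "edge-device" | "edge-server" | "cloud";
  inPrivacyZone: boolean; // marked by the user as inside their privacy zone
}

// Stub discovery; a real mechanism would need UA mediation and user consent.
async function discoverOffloadTargets(): Promise<OffloadTarget[]> {
  return [
    { id: "nuc-livingroom", kind: "edge-server", inPrivacyZone: true },
    { id: "cdn-pop-42", kind: "cloud", inPrivacyZone: false },
  ];
}

// Never offload outside the user's privacy zone; fall back to local compute.
async function pickTarget(): Promise<OffloadTarget> {
  const targets = await discoverOffloadTargets();
  const allowed = targets.filter(t => t.inPrivacyZone);
  return allowed[0] ?? { id: "local", kind: "local", inPrivacyZone: true };
}
```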
The ONNX.js - A Javascript library to run ONNX models in browsers and Node.js talk by @EmmaNingMS explains how ONNX.js benefits from parallelization using web workers:
What are the obstacles to scaling this architecture to run JS or WebAssembly modules on the edge, to enable training across devices? Some CDN providers (e.g. Cloudflare, Fastly) seem to have products for running JS and Wasm modules on their edge networks.
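For context, the underlying pattern is plain web workers used to fan work out across cores. A minimal, self-contained sketch; the `inference-worker.js` script and its `{id, input}` / `{id, output}` message shape are made up for illustration, not taken from ONNX.js:

```ts
// main.ts: fan a batch of inputs out across web workers.
const workerCount = navigator.hardwareConcurrency || 4;
const workers = Array.from({ length: workerCount },
  () => new Worker("inference-worker.js"));

let nextId = 0;

function runOnWorker(worker: Worker, input: Float32Array): Promise<Float32Array> {
  const id = nextId++;
  return new Promise(resolve => {
    const onMsg = (e: MessageEvent) => {
      if (e.data.id !== id) return; // reply belongs to another request
      worker.removeEventListener("message", onMsg);
      resolve(e.data.output as Float32Array);
    };
    worker.addEventListener("message", onMsg);
    // Transfer the buffer instead of copying it across threads.
    worker.postMessage({ id, input }, [input.buffer]);
  });
}

// Round-robin inputs over the workers; replies are matched by id.
async function runBatch(inputs: Float32Array[]): Promise<Float32Array[]> {
  return Promise.all(inputs.map((x, i) =>
    runOnWorker(workers[i % workers.length], x)));
}

// Inside inference-worker.js (sketch):
// self.onmessage = (e) => {
//   const output = runModel(e.data.input); // model-specific, not shown
//   self.postMessage({ id: e.data.id, output }, [output.buffer]);
// };
```

Stretching this beyond one machine would need the equivalent of postMessage over the network, which is presumably where the edge runtimes would come in.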
The Machine Learning on the Web for content filtering applications talk by @shoniko brings up how federated learning could help in this context, quoting:
Training is out of scope for the initial version of the Web Neural Network API, but could be considered in a future version. In order to make a stronger case for inclusion of training capabilities, real-world usages such as those discussed in this issue help raise the priority. An additional consideration is the availability of the respective platform APIs.
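To make the training use case concrete, here is a minimal sketch of one federated-averaging round from a client's perspective. The endpoints, the payload shape, and the `trainLocally` stand-in are all illustrative:

```ts
// One federated-averaging round, client side.
type Weights = number[];

// Placeholder for real local training: a few SGD steps on data that
// never leaves the device.
function trainLocally(globalWeights: Weights): Weights {
  return globalWeights.map(w => w - 0.01 * (Math.random() - 0.5));
}

async function federatedRound(server: string): Promise<void> {
  // 1. Fetch the current global model.
  const globalWeights: Weights = await (await fetch(`${server}/model`)).json();
  // 2. Train locally; only a weight delta is shared, never the data.
  const localWeights = trainLocally(globalWeights);
  const delta = localWeights.map((w, i) => w - globalWeights[i]);
  // 3. Upload the delta; the server averages deltas from many clients.
  await fetch(`${server}/update`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ delta }),
  });
}
```

On the server, averaging the received deltas into the global model would complete the round (the per-client sample-count weighting from the original FedAvg scheme is omitted here for brevity).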
There are a couple of relevant papers, for instance,
The APIs, challenges and findings are (not surprisingly) somewhat similar, so they provide a consistent background for compute offload, be it for training or inference. These are generic compute offload mechanisms, though.
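As a rough illustration of such a generic mechanism (the endpoint is hypothetical), compute offload largely reduces to "serialize a task, try a remote target, fall back to local":

```ts
// Generic compute offload with local fallback; the endpoint is made up.
async function offload<TIn, TOut>(
  endpoint: string,
  input: TIn,
  localFallback: (x: TIn) => TOut,
  timeoutMs = 2000,
): Promise<TOut> {
  try {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(input),
      signal: AbortSignal.timeout(timeoutMs), // give up quickly on a slow node
    });
    if (!res.ok) throw new Error(`edge returned ${res.status}`);
    return await res.json() as TOut;
  } catch {
    // Target unreachable or too slow: compute locally instead.
    return localFallback(input);
  }
}
```

Usage would be e.g. `offload("https://edge.example/run", input, runLocally)`, with the same call shape whether the task is training or inference.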
I think this is a great idea. In particular, DNNs, whose accuracy generally depends on the amount of training data, stand to improve significantly. Blockchain technology could help a lot with maintaining the quality of that data.
The Collaborative Learning talk by @wmaass concludes with lessons learned, an extract:
The Enabling Distributed DNNs for the Mobile Web Over Cloud, Edge and End Devices talk by Yakun Huang (@Nov1102) makes a point:
It also raises the following questions:
Maybe browser-instantiated workers running on edge devices could help here? There has been some exploration in the Web & Networks IG around this space in its edge computing workstream. Cc @zolkis
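One possible shape for that exploration: keep the familiar worker contract but back it with a remote node. A rough sketch, where the endpoint, the one-POST-per-message relay, and all names are purely hypothetical:

```ts
// Hypothetical "edge worker": same postMessage/onmessage contract as a
// local web worker, but messages are relayed to a remote execution node.
class EdgeWorker {
  onmessage: ((e: { data: unknown }) => void) | null = null;

  constructor(private endpoint: string, private script: string) {}

  async postMessage(data: unknown): Promise<void> {
    const res = await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ script: this.script, data }),
    });
    this.onmessage?.({ data: await res.json() });
  }
}

// Application code keeps the familiar worker shape:
const w = new EdgeWorker("https://edge.example/exec", "train-shard.js");
w.onmessage = e => console.log("result from edge:", e.data);
w.postMessage({ shard: 3 });
```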