Need a more algorithmic definition of AudioWorkletNode lifetime #1079

Closed

joeberkovitz opened this issue Nov 16, 2016 · 5 comments

Comments

@joeberkovitz

The definition of lifetime used in the fix for #475 may lack specificity in some way, so WG members have asked for an issue to be filed reflecting the potential need for an algorithmic definition.

@joeberkovitz (Author)

Thought: would it be appropriate to place (or somehow invoke) this algorithmic definition in the "Rendering an audio graph" section of the spec?

@padenot (Member) commented Nov 17, 2016

This is orthogonal to rendering: the rendering takes the list of all AudioNodes for a context. This is about adding AudioNodes to, or removing them from, a context.

The rendering algorithm is already quite complicated. Maybe it would be best to define something like the "list of AudioNodes of a context", or something equivalent, and have that in the section about lifetimes.
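
As an illustration of that separation (purely a sketch, not spec text: the class and method names below are invented for this example), the context's node list could be modelled as its own structure, with the rendering algorithm only consuming whatever is currently in it:

```js
// Hypothetical model of the "list of AudioNodes of a context".
class ContextNodeList {
  constructor() {
    this.nodes = new Set(); // ordered set of live AudioNodes
  }
  add(node) { this.nodes.add(node); }       // node is created / gains a reference
  remove(node) { this.nodes.delete(node); } // node has no references left
  renderQuantum() {
    // Rendering just walks the current list; lifetime rules only decide
    // what gets added to or removed from it between render quanta.
    for (const node of this.nodes) {
      // ... process `node` for this render quantum
    }
  }
}
```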

@joeberkovitz (Author)

Oh right. This is about lifetime.

I guess I still don't understand how we should describe a lifetime algorithm for any node at all beyond what we've already said. We have these references defined in the spec, and UAs do garbage collection on any node that lacks references.
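
For a concrete case of that model (assuming the connection and playing references as the spec defined them at the time), an oscillator with no remaining JavaScript reference still plays until its scheduled stop, and only then becomes collectable:

```js
const ctx = new AudioContext();
{
  const osc = new OscillatorNode(ctx);
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 2);
}
// No JS reference to `osc` survives this block, but the node's playing
// reference keeps it alive (and audible) until the stop time; after that,
// with no references left, the UA is free to garbage-collect it.
```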

@padenot (Member) commented Nov 17, 2016

We need to define exactly what happens when you render audio for an AudioWorklet. This allows us to get access to the return value of process and do something like the following:

Rendering audio for an AudioWorklet means executing the following steps:

  1. If this AudioWorklet does not have a self-reference, make it take a self-reference.
  2. Do some prep work: take the input data and make Float32Arrays from it, do the same with the AudioParam data for this render quantum, etc. This is necessary to specify whether the arrays are reused or not, for example.
  3. Call process, and let rv be the return value of this invocation.
  4. If rv is true, return from this algorithm.
  5. Drop the self-reference.

At the last step, if the node still has input, it's kept alive. Otherwise, it can be collected.
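
On the author-facing side, the return value consumed in steps 4 and 5 is what an AudioWorkletProcessor returns from process. A minimal sketch against the AudioWorklet API as it was later specified (the "noise-processor" name is just for this example):

```js
// noise-processor.js (loaded with context.audioWorklet.addModule('noise-processor.js'))
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1; // fill each channel with white noise
      }
    }
    // Returning true keeps the self-reference from step 1, so the node stays
    // alive even with no inputs or JS references; returning false drops it
    // (step 5), leaving the node alive only while something else keeps it so.
    return true;
  }
}
registerProcessor('noise-processor', NoiseProcessor);
```

On the main thread, `new AudioWorkletNode(context, 'noise-processor')` connected to the destination would then keep producing output until the graph is torn down, precisely because process keeps returning true.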

@joeberkovitz (Author)

I will put together a PR that makes the Lifetime section of AudioNode normative, to resolve this issue.
