This is mostly an issue to discuss the most effective way to get data into and out of an AudioFrame, and whether anything needs to be implemented to improve this.
For use cases where we want to send and receive microphone recordings, one approach would be to record a few seconds of audio into an AudioFrame, break it down into smaller chunks to be sent, and then stitch them back together on the other side.
While trying to build a few examples with radio we tried a few different methods and had some struggles, so these are some notes from that.
Options for breaking down a larger AudioFrame into smaller chunks
The main approach we followed was to create a bytes or bytearray object from the AudioFrame and use slicing
This works, but it creates an unnecessary copy of the data, which takes time and memory
Lists should also work, but would be even more wasteful of resources
The most efficient way that currently works is possibly to use a memoryview
This is a lesser-known feature that we haven't really used or documented for micro:bit users in the past, so while it is perfectly valid, we would need to make it more visible in the docs
Another thing we tried, which didn't work, was slicing the AudioFrame directly
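So my suggestion would be to go from something like the sketch below (PACKET_SIZE, my_audio_frame and the radio setup are placeholders for whatever the real example uses, and radio.send_bytes() is assumed to accept any object that supports the buffer protocol):

```python
# Current workaround: wrap the AudioFrame in a memoryview so that each chunk
# is a view into the recording rather than a copy of the whole frame.
# Assumes radio is already imported, configured and switched on, and that
# my_audio_frame already holds the recorded audio.
mv = memoryview(my_audio_frame)
for i in range(0, len(mv), PACKET_SIZE):
    radio.send_bytes(mv[i:i + PACKET_SIZE])
```

To skip the memory view and be able to use slices directly (#188):

```python
# Proposed: slice the AudioFrame itself, no memoryview needed.
for i in range(0, len(my_audio_frame), PACKET_SIZE):
    radio.send_bytes(my_audio_frame[i:i + PACKET_SIZE])
```

It's a small change, but I think it can help avoid users converting to a bytes object instead:

```python
# What users are likely to write today, which copies the whole frame
# into a bytes object before slicing it.
data = bytes(my_audio_frame)
for i in range(0, len(data), PACKET_SIZE):
    radio.send_bytes(data[i:i + PACKET_SIZE])
```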
Options for combining smaller chunks into a larger AudioFrame
Modules like radio, uart and spi have the option to either return a bytes object or write into an existing buffer.
There isn't a way for the receive_into() methods to write at an offset into a buffer, so we cannot write directly into a larger buffer
The micro:bit version of MicroPython doesn't support bytearray slice assignment (e.g. my_bytearray[1:3] = (1, 2)), so I don't think there is currently a way to inject the received bytes directly into a pre-allocated larger buffer
We do have the bytearray.extend() method, so we can grow a bytearray as we receive data packets
I'm not sure what the allocation policy is, or how this is implemented internally in MicroPython, but there is the potential for wasteful allocations while growing the bytearray
Ideally we could add the received data directly into an AudioFrame.
Updating all the receive_into() methods is more intrusive than updating AudioFrame, so the latter would be my preferred option:
Option a) slice assignment: my_audioframe[i:i+PACKET_SIZE] = received_bytes
Option b) provide a new method similar to insert(i, x) but that can take a buffer
Option c) update the existing copyfrom() method: copyfrom(buffer, index=0)
My preference would be c), as it feels intuitive enough, and since insert() already exists we would have to find a different name for option b)
So my suggestion would be to go from something like:
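(The snippets below are only sketches: FRAME_SIZE stands in for the size of the full recording, and the radio is assumed to be already configured and switched on.)

```python
# Today: grow a bytearray with extend() as packets arrive, then copy the
# samples into the pre-allocated AudioFrame at the end.
buffer = bytearray()
while len(buffer) < FRAME_SIZE:
    packet = radio.receive_bytes()
    if packet:
        buffer.extend(packet)
for i in range(len(buffer)):
    my_audio_frame[i] = buffer[i]
```

To something like this:

```python
# With copyfrom(buffer, index) each packet can be written straight into
# the pre-allocated AudioFrame, with no intermediate bytearray at all.
received = 0
while received < FRAME_SIZE:
    packet = radio.receive_bytes()
    if packet:
        my_audio_frame.copyfrom(packet, received)
        received += len(packet)
```

It's only two lines, but it could save us from expensive reallocations that would also cause more memory fragmentation.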