Use a static packet buffer in DMXConnection #7
base: main
Conversation
All this would be possible using …
Then we wouldn't need two code paths, as we can slice using built-in memoryviews, and users can cast to numpy using …
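For illustration, a minimal sketch of what that single code path could look like, assuming a `bytearray`-backed packet; the buffer layout and names are illustrative only, not this library's API:

```python
# A single static packet buffer with by-reference views into it.
# Layout (5 header bytes + 512 channels + 1 trailer byte) is an assumption.
packet = bytearray(5 + 512 + 1)
dmx_frame = memoryview(packet)[5:-1]   # a view, not a copy

dmx_frame[0] = 255                     # set channel 1 to full
assert packet[5] == 255                # the write landed directly in `packet`

# Users who want numpy can wrap the same memory themselves, without a copy:
#   import numpy as np
#   frame = np.frombuffer(packet, dtype=np.uint8)[5:-1]
```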
I like this, let me write it out and see!
All right, have a look!
(Force-pushed from 4ea3853 to ae611ab.)
All right, I got rid of a cruft comment I'd added, renamed the commit to reflect its nature, and then I ran the tests and made the light turn on and off.
There are really two things being conflated in this patch: using numpy for the backing array, which allows taking slices by reference, and storing the frame padding data in the buffer directly to avoid the allocation when sending the frame.

As far as taking slices by reference: the way I usually handle this in my code is to pass the DMX frame around to anything that needs to write to it. Being able to take slices by reference can be nice for decoupling (since you can delegate tracking the starting address and fixture channel count to a patching system which only hands a sub-slice of the universe to a fixture to be rendered). However, if you are using this to store a long-lived reference to a slice of the DMX universe directly in your fixture representation, I'd strongly encourage you to avoid this pattern. The DMX buffer should belong to the port, and you should inject it into a render method when you need to render the state of the fixture out to DMX.

As far as avoiding the allocation: profile before optimizing. How much latency does the allocation and copy add, given that there is a tradeoff in code complexity for removing it? I suspect it is very, very small, especially considering that the Enttec runs at like 40 fps tops. There's also another method to avoid the allocation: separately write the packet header, DMX frame, and packet trailer using three separate `write` calls.

Overall I'd prefer to avoid the need to add numpy as a dependency if not necessary. I think a better approach to this would be to refactor the library to entirely eliminate the internal DMX frame buffer. There's not really a need for the port itself to manage this, since all it is really doing is writing a byte array out to the port with some appropriate headers/trailers. Then you can use whatever library you want for managing the buffer, and you hand it to the port to render.
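A rough sketch of the three-separate-writes idea mentioned above; the function and the header/trailer constants here are hypothetical placeholders, not this library's actual protocol framing:

```python
import serial

PACKET_BEGIN = b"\x7e\x06"   # placeholder header bytes, not verified
PACKET_END = b"\xe7"         # placeholder trailer byte

def render(port: serial.Serial, dmx_frame: bytes) -> None:
    # Write header, universe data, and trailer in three separate calls,
    # so no combined packet buffer ever has to be allocated.
    port.write(PACKET_BEGIN)
    port.write(dmx_frame)
    port.write(PACKET_END)
```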
I'm sorry, I'm not understanding this comment. I removed all hints of numpy. I don't care about the performance one whit, but I did want to point out that this pull request wasn't going to make the code slower. I just want to be able to use by-reference slices of the frame buffer. Sending out the packet opening, packet body, and packet tail separately is also possible.
Heh, my apologies, I should have read the changeset a bit more closely.

I still think I'd prefer to refactor this to entirely eliminate the need for the port object to own the DMX buffer. Since the port object isn't doing anything asynchronous like periodically rendering (the port hardware itself is already doing this), there's no reason it needs to hold onto the buffer between render actions. This is the approach I took when I ported this library to Rust: https://github.com/generalelectrix/rust-dmx/blob/main/src/enttec.rs#L165

I'll go ahead and make that refactoring tonight or this weekend, test it, and release a new major version of this library.
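In that shape the port keeps no DMX state at all and just frames whatever the caller hands it. A hypothetical sketch under those assumptions (class, method, and framing bytes are illustrative, not the released API):

```python
import serial

PACKET_BEGIN = b"\x7e\x06\x01\x02"  # placeholder header bytes, not verified
PACKET_END = b"\xe7"                # placeholder trailer byte

class StatelessEnttecPort:
    """Hypothetical port that owns only the serial handle, not a DMX buffer;
    the caller keeps its own frame and passes it to render()."""

    def __init__(self, device: str):
        self._port = serial.Serial(device, baudrate=57600)

    def render(self, dmx_frame: bytes) -> None:
        # Any bytes-like frame works here: bytes, bytearray, or a memoryview.
        self._port.write(PACKET_BEGIN)
        self._port.write(dmx_frame)
        self._port.write(PACKET_END)

# Caller-side usage: the application owns the universe buffer.
#   universe = bytearray(512)
#   StatelessEnttecPort("/dev/ttyUSB0").render(universe)
```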
Hello, and thanks for this very useful library!
This pull request should be completely backward compatible.
If you construct `DMXConnection` with the new `use_numpy` parameter set to `True`, then `numpy.array` is used to construct the buffers instead of `array.array`.

There are many advantages to `numpy.array` in general, and two specific to this project, both using the fact that taking a slice of a `numpy.array` creates a reference, not a copy, so modifying the slice modifies the original.

In the existing `array` code, a new buffer is created on each DMX frame update and then discarded. In the new `numpy` codepath, a fixed `np.array` buffer called `_packet` is constructed with `_packet_begin` and `PACKET_END` already written into it, and `dmx_frame` is just a slice out of that. So when you manipulate `dmx_frame`, you are directly writing into the final packet which is sent to the controller. No copies or allocations are performed.

The second advantage is that you can now construct "lighting instruments" by taking little slices out of `DMXConnection.dmx_frame`, which is what I'm doing in my code! Both of these take advantage of the fact that taking a slice of a `numpy.array` creates a reference, not a copy.

I tested all four cases: with and without numpy installed, and with and without `use_numpy=True`. I both ran the `nose` tests and lit a lamp on my table. :-D
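A condensed sketch of the layout this description outlines, reusing its names (`_packet`, `_packet_begin`, `PACKET_END`, `dmx_frame`, 512 channels); the header and trailer byte values here are placeholders rather than the actual patch:

```python
import numpy as np

PACKET_END = 0xE7                                                    # placeholder trailer
_packet_begin = np.array([0x7E, 6, 0x01, 0x02, 0], dtype=np.uint8)  # placeholder header

# One fixed buffer holds the whole packet; header and trailer are written once.
_packet = np.zeros(len(_packet_begin) + 512 + 1, dtype=np.uint8)
_packet[: len(_packet_begin)] = _packet_begin
_packet[-1] = PACKET_END

# dmx_frame is a numpy slice of _packet: a view, not a copy.
dmx_frame = _packet[len(_packet_begin):-1]

dmx_frame[0] = 255                        # writes land directly in _packet
assert _packet[len(_packet_begin)] == 255

# A "lighting instrument" can hold an even smaller by-reference slice:
par_can = dmx_frame[10:14]
par_can[:] = (255, 0, 128, 0)
```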