
Use a static packet buffer in DMXConnection #7


Open

rec wants to merge 1 commit into main from numpy

Conversation

@rec commented Mar 22, 2023

Hello, and thanks for this very useful library!

This pull request should be completely backward compatible.

If you construct DMXConnection with the new use_numpy parameter set to True, then numpy.array is used to construct the buffers instead of array.array.

numpy.array has many advantages in general, and two that are specific to this project.

In the existing array code, a new buffer is created on each DMX frame update and then discarded.

In the new numpy codepath, a fixed np.array buffer called _packet is constructed with _packet_begin and PACKET_END already written into it, and then dmx_frame is just a slice out of that.

So when you manipulate dmx_frame, you are directly writing into the final packet which is sent to the controller. No copies or allocations are performed.
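A minimal sketch of that layout (the concrete header and trailer bytes below are my illustrative assumptions about Enttec framing, not values taken from pyenttec):

import numpy as np

DMX_SIZE = 512
_packet_begin = bytes([0x7E, 0x06, 0x01, 0x02, 0x00])  # assumed: start byte, label, length LSB/MSB, DMX start code
PACKET_END = bytes([0xE7])                              # assumed end-of-packet byte

_packet = np.zeros(len(_packet_begin) + DMX_SIZE + len(PACKET_END), dtype=np.uint8)
_packet[:len(_packet_begin)] = list(_packet_begin)
_packet[-len(PACKET_END):] = list(PACKET_END)

# dmx_frame is a view into _packet, so writes land directly in the outgoing bytes
dmx_frame = _packet[len(_packet_begin):len(_packet_begin) + DMX_SIZE]
dmx_frame[0] = 255
assert _packet[len(_packet_begin)] == 255

Sending the frame is then just writing _packet out to the port, since numpy arrays expose the buffer protocol.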

The second advantage is that you can now construct "lighting instruments" by taking out little slices from DMXConnection.dmx_frame, which is what I'm doing in my code!

Both of these take advantage of the fact that taking a slice of a numpy.array creates a view, not a copy, so modifying the slice modifies the original.
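For instance (a hypothetical fixture of my own devising, not part of this library):

import numpy as np

universe = np.zeros(512, dtype=np.uint8)  # stands in for DMXConnection.dmx_frame
rgb_par = universe[10:13]                 # hypothetical 3-channel RGB fixture patched at channel 11

rgb_par[:] = (255, 0, 128)                # writes go straight into the shared universe
assert universe[10] == 255 and universe[12] == 128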


I tested all four cases: with and without numpy installed, and with and without use_numpy=True.

I both ran the nose tests and lit a lamp on my table. :-D

@willstott101 (Contributor)

All this would be possible using memoryview:

>>> from array import array
>>> import numpy as np
>>> a = array('B', [1, 2, 3, 4, 5])
>>> na = np.array(memoryview(a)[1:4], copy=False)
>>> na
array([2, 3, 4], dtype=uint8)
>>> na[1] = 123
>>> a
array('B', [1, 2, 123, 4, 5])

@willstott101 (Contributor)

Then we wouldn't need two code paths, as we can slice using built-in memoryviews, and users can cast to numpy with copy=False if they really want to take it on as a dependency.
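A minimal sketch of that single code path, with illustrative names and sizes (not the library's actual API):

from array import array

HEADER_LEN = 5   # assumed: start byte, label, length LSB/MSB, DMX start code
DMX_SIZE = 512

_packet = array('B', bytes(HEADER_LEN + DMX_SIZE + 1))  # header + universe + end byte
dmx_frame = memoryview(_packet)[HEADER_LEN:HEADER_LEN + DMX_SIZE]

dmx_frame[0] = 255             # writes through to _packet, no copy
assert _packet[HEADER_LEN] == 255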

@rec (Author) commented Mar 22, 2023

I like this, let me write it out and see!

@rec (Author) commented Mar 22, 2023

All right, have a look!

rec force-pushed the numpy branch 3 times, most recently from 4ea3853 to ae611ab on March 22, 2023 at 15:33
@rec (Author) commented Mar 22, 2023

All right: I got rid of a cruft comment I'd added, renamed the commit to reflect its nature, and then ran the tests and made the light turn on and off.

rec changed the title from "Teach pyenttec about numpy" to "Use a static packet buffer in DMXConnection" on Mar 22, 2023
@generalelectrix (Owner) commented Mar 22, 2023

There are really two things being conflated in this patch: using numpy for the backing array, which allows taking slices by reference, and storing the frame padding data in the buffer directly to avoid the allocation when sending the frame.

As far as taking slices by reference:

The way I usually handle this in my code is to pass the DMX frame around to anything that needs to write to it. Being able to take slices by reference can be nice for decoupling, since you can delegate tracking the starting address and fixture channel count to a patching system that only hands a sub-slice of the universe to a fixture to be rendered. However, if you are using this to store a long-lived reference to a slice of the DMX universe directly in your fixture representation, I'd strongly encourage you to avoid this pattern. The DMX buffer should belong to the port, and you should inject it into a render method when you need to render the state of the fixture out to DMX.
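A hypothetical sketch of that injection pattern (class and method names are mine, not this library's):

class Dimmer:
    """A fixture that renders its state into whatever frame it is handed."""

    def __init__(self, address):
        self.address = address  # 0-based channel offset, assigned by a patching system
        self.level = 0

    def render(self, frame):
        # frame is any mutable byte sequence owned by the port
        frame[self.address] = self.level

# At render time the caller injects the port's buffer:
#   dimmer.render(port.dmx_frame)
#   port.render()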

As far as avoiding the allocation:

Profile before optimizing. How much latency does the allocation and copy add, given that there is a tradeoff in code complexity for removing it? I suspect it is very, very small, especially considering that the enttec runs at like 40 fps tops.

There's also another method to avoid the allocation - separately write the packet header, DMX frame, and packet trailer using three separate write method calls.
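Sketched out (the framing bytes are my assumptions about the Enttec protocol, and port stands for any object with a file-like write()):

def send_frame(port, frame):
    # Assumed Enttec framing: start byte, label 6, 16-bit length, data, end byte.
    header = b"\x7e\x06" + len(frame).to_bytes(2, "little")
    port.write(header)   # packet header (tiny, fixed-size)
    port.write(frame)    # DMX universe data, written without an intermediate copy
    port.write(b"\xe7")  # packet trailer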

Overall I'd prefer to avoid the need to add numpy as a dependency if not necessary. I think a better approach to this would be to refactor the library to entirely eliminate the internal DMX frame buffer. There's not really a need for the port itself to manage this, since all it is really doing is writing a byte array out to the port with some appropriate headers/trailers. Then you can use whatever library you want for managing the buffer, and you hand it to the port to render.

@rec (Author) commented Mar 23, 2023

I'm sorry, I'm not understanding this comment.

I removed all hints of numpy from the code, so I am not introducing any dependencies at all. Even the original pull request didn't have a numpy dependency, and now the string numpy doesn't appear anywhere in it.

I don't care about the performance one whit, but I did want to point out that this pull request wasn't going to make the code slower.

I just want to be able to use numpy arrays because I have a lot of mechanisms to manipulate them.

Sending out the packet opening, packet body, and packet tail separately is also possible.

@generalelectrix (Owner) commented Mar 23, 2023

Heh, my apologies, I should have read the changeset a bit more closely.

I still think I'd prefer to refactor this to entirely eliminate the need for the port object to own the DMX buffer. Since the port object isn't doing anything asynchronous like periodically rendering (the port hardware itself is already doing this), there's no reason it needs to hold onto the buffer between render actions. This is the approach I took when I ported this library to Rust: https://github.com/generalelectrix/rust-dmx/blob/main/src/enttec.rs#L165

I'll go ahead and make that refactoring tonight or this weekend, test it, and release a new major version of this library.
