
fix: do not multithread ChatMessage processing #861


Closed

Conversation

TestingPlant
Collaborator

Chat messages must be processed before the packet bump allocator is cleared because they store a reference to the packet data. In other words, we need a happens-before relationship where chat message processing happens before the packet bump allocator is cleared. Disabling multithreaded ChatMessage processing is one way of providing this happens-before relationship, and since processing chat messages is relatively cheap, this should not cause any noticeable performance problems.

github-actions bot added the fix label Mar 9, 2025

github-actions bot commented Mar 9, 2025

Benchmark Results for general

ray_intersection/aabb_size_0.1                     [  19.3 ns ...  19.3 ns ]      -0.41%
ray_intersection/aabb_size_1                       [  19.2 ns ...  19.3 ns ]      +0.20%
ray_intersection/aabb_size_10                      [  19.3 ns ...  19.2 ns ]      -0.18%
ray_intersection/ray_distance_1                    [   3.8 ns ...   3.8 ns ]      +0.16%
ray_intersection/ray_distance_5                    [   3.9 ns ...   3.9 ns ]      -0.03%
ray_intersection/ray_distance_20                   [   3.8 ns ...   3.8 ns ]      -0.05%
overlap/no_overlap                                 [  28.7 ns ...  28.7 ns ]      -0.11%
overlap/partial_overlap                            [  28.8 ns ...  28.7 ns ]      -0.22%
overlap/full_containment                           [  21.4 ns ...  21.6 ns ]      +1.02%
point_containment/inside                           [   4.9 ns ...   4.9 ns ]      +0.40%
point_containment/outside                          [   4.5 ns ...   4.4 ns ]      -0.15%
point_containment/boundary                         [   7.5 ns ...   7.6 ns ]      +0.90%

Comparing to 7c99a55


codecov bot commented Mar 9, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 21.01%. Comparing base (7c99a55) to head (ea8806b).
Report is 16 commits behind head on main.

@@           Coverage Diff           @@
##             main     #861   +/-   ##
=======================================
  Coverage   21.00%   21.01%           
=======================================
  Files         160      160           
  Lines       16902    16901    -1     
  Branches      470      470           
=======================================
  Hits         3551     3551           
+ Misses      13285    13284    -1     
  Partials       66       66           
Files with missing lines          Coverage    Δ
events/tag/src/module/chat.rs     0.00% <ø>   (ø)

@andrewgazelka
Member

andrewgazelka commented Mar 9, 2025

I'm not sure I like the current way you're handling this. In an ideal world, you would have a sync point rather than just making this single-threaded. I think there should be a sync point between when all of the events end and when the bump allocator is cleared.

@TestingPlant
Collaborator Author

Replaced with #888
