sctp: remove unnecessary drops and use precise padding #381
Conversation
lock will be dropped automatically
contract: padding_needed <= PADDING_MULTIPLE
Codecov Report — Base: 59.86% // Head: 59.89% // Increases project coverage by +0.03%.
Additional details and impacted files:
@@ Coverage Diff @@
## master #381 +/- ##
==========================================
+ Coverage 59.86% 59.89% +0.03%
==========================================
Files 504 504
Lines 48000 47989 -11
Branches 12516 12516
==========================================
+ Hits 28733 28743 +10
+ Misses 10026 10013 -13
+ Partials 9241 9233 -8
I don't believe this change will behave correctly.
Since all the functions take &self rather than &mut self, we can potentially have multiple threads calling these functions at the same time. The `let sem_lock` is a RAII guard that ensures the entire data structure is locked for the entire insertion. This is important in a case like this for loop:
webrtc/sctp/src/queue/pending_queue.rs
Lines 120 to 135 in daaf05d
for chunk in chunks.into_iter() {
    let user_data_len = chunk.user_data.len();

    let permits = self.semaphore.acquire_many(user_data_len as u32).await;
    // unwrap ok because we never close the semaphore unless we have dropped self
    permits.unwrap().forget();

    if chunk.unordered {
        let mut unordered_queue = self.unordered_queue.write();
        unordered_queue.push_back(chunk);
    } else {
        let mut ordered_queue = self.ordered_queue.write();
        ordered_queue.push_back(chunk);
    }

    self.n_bytes.fetch_add(user_data_len, Ordering::SeqCst);
    self.queue_len.fetch_add(1, Ordering::SeqCst);
}
If two threads were competing in this loop, we would not be in "direct sequence". The code doc explains the relationship between the two locks.
The chunks of one fragmented message need to be put in direct sequence into the queue which the lock guarantees
Edit: provide correct link to loop
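The point of the review comment above can be sketched in a minimal, self-contained form. The `Queue` type, `seq_lock` field, and `push_all` method below are hypothetical stand-ins (using a plain `std::sync::Mutex` instead of the crate's semaphore), showing why a RAII guard held across the whole loop keeps one message's chunks in direct sequence even when multiple threads call through `&self`:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for PendingQueue: `seq_lock` plays the role of
// `sem_lock`, serializing the whole insertion of one message.
struct Queue {
    seq_lock: Mutex<()>,
    inner: Mutex<VecDeque<(u32, u32)>>, // (message id, fragment index)
}

impl Queue {
    fn push_all(&self, msg: u32, fragments: u32) {
        // Guard held for the entire loop; dropped automatically at scope end.
        let _guard = self.seq_lock.lock().unwrap();
        for i in 0..fragments {
            self.inner.lock().unwrap().push_back((msg, i));
        }
    }
}

fn main() {
    let q = Arc::new(Queue {
        seq_lock: Mutex::new(()),
        inner: Mutex::new(VecDeque::new()),
    });
    let handles: Vec<_> = (0..4)
        .map(|msg| {
            let q = Arc::clone(&q);
            thread::spawn(move || q.push_all(msg, 8))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Because each push_all held the guard, every aligned run of 8 entries
    // belongs to one message, with fragment indices in order.
    let items: Vec<_> = q.inner.lock().unwrap().iter().cloned().collect();
    for run in items.chunks(8) {
        let msg = run[0].0;
        for (i, &(m, frag)) in run.iter().enumerate() {
            assert_eq!(m, msg);
            assert_eq!(frag, i as u32);
        }
    }
}
```

Without `_guard`, two threads could interleave inside the loop and fragments of different messages would mix in the queue.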
I agree, but my point is that we don't need to drop them explicitly, since Rust will do it automatically when they go out of scope.
@melekes this is true. They do need to live for the entire scope though. So you can replace with
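The distinction the reply is getting at is a standard Rust rule: binding a guard to `_` drops it immediately, while binding it to a named variable (even an underscore-prefixed one like `_guard`) keeps it alive to the end of the scope. A minimal sketch, with a hypothetical `Guard` type that logs when it is dropped:

```rust
use std::sync::Mutex;

// Global log of drop events (Mutex::new is const since Rust 1.63).
static LOG: Mutex<Vec<&'static str>> = Mutex::new(Vec::new());

struct Guard(&'static str);
impl Drop for Guard {
    fn drop(&mut self) {
        LOG.lock().unwrap().push(self.0);
    }
}

fn demo() -> Vec<&'static str> {
    LOG.lock().unwrap().clear();
    {
        // `let _ = ...` drops the value at the end of this statement:
        // a lock guard bound this way protects nothing.
        let _ = Guard("dropped-immediately");
        LOG.lock().unwrap().push("mid-scope-1");

        // `let _guard = ...` keeps the value alive until the scope ends,
        // which is what a lock guard needs.
        let _guard = Guard("dropped-at-scope-end");
        LOG.lock().unwrap().push("mid-scope-2");
    }
    LOG.lock().unwrap().clone()
}

fn main() {
    assert_eq!(
        demo(),
        vec![
            "dropped-immediately",
            "mid-scope-1",
            "mid-scope-2",
            "dropped-at-scope-end"
        ]
    );
}
```

So renaming `sem_lock` to `_sem_lock` would silence an unused-variable warning without changing when the guard is released, whereas `let _ = ...` would release it too early.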
If you are fixing the padding anyway, there are a few more places where get_padding_size is used alongside a vec![] call. I think generalizing what I did in the packet marshal code would be a good idea:

static PADDING_BYTES: [u8; PADDING_MULTIPLE] = [0; PADDING_MULTIPLE];

pub(crate) fn get_padding(len: usize) -> &'static [u8] {
    &PADDING_BYTES[..get_padding_size(len)]
}

----

writer.extend(get_padding(len))
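A self-contained sketch of this suggestion, assuming PADDING_MULTIPLE is 4 (SCTP chunks are padded to a 4-byte boundary) and a `get_padding_size` with the usual next-multiple semantics; the real constant and helper live in the sctp crate:

```rust
const PADDING_MULTIPLE: usize = 4; // assumed value; SCTP aligns chunks to 4 bytes

static PADDING_BYTES: [u8; PADDING_MULTIPLE] = [0; PADDING_MULTIPLE];

// Bytes needed to round `len` up to the next multiple of PADDING_MULTIPLE.
fn get_padding_size(len: usize) -> usize {
    (PADDING_MULTIPLE - (len % PADDING_MULTIPLE)) % PADDING_MULTIPLE
}

// Borrow a slice of the static buffer instead of allocating
// `vec![0u8; get_padding_size(len)]` at every call site.
fn get_padding(len: usize) -> &'static [u8] {
    &PADDING_BYTES[..get_padding_size(len)]
}

fn main() {
    let mut writer: Vec<u8> = vec![1, 2, 3]; // len = 3 -> 1 padding byte needed
    writer.extend(get_padding(writer.len()));
    assert_eq!(writer, vec![1, 2, 3, 0]);

    // Already aligned lengths get an empty slice: no bytes, no allocation.
    assert_eq!(get_padding(8), &[] as &[u8]);
}
```

The zero-filled static never changes, so handing out `&'static` sub-slices is safe and removes a heap allocation per marshalled chunk.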
Refs #364 and #367
cc @KillingSpark