HARQ feature for 5G LDPC #944
base: main
Conversation
Hi @rabihchrabieh, Thank you for your contribution to Sionna. We will review the code shortly and get back to you.
Hi @rabihchrabieh, I started reviewing the PR, and overall, it looks good. However, my main concern at this stage is the API design and its compatibility with graph/XLA mode. I think we should clarify the API before proceeding further. Would it be an option to pass
Alternatively, the encoder could return a stacked version of all RVs (only if this mode is explicitly enabled), which could then be fed directly into the decoder using the same API. What do you think? A few additional remarks:
Thank you, @SebastianCa, for the helpful feedback. We can definitely move the "rv" to the function parameters; that makes sense and ensures compatibility with graph/XLA mode. We might also allow passing "n" optionally, in case the user wants to test combinations like (rv0, n0), (rv1, n1).

Regarding stacking all RVs inside the encoder: I think that adds unnecessary complexity. In most scenarios, we want to test RVs through time-varying channels, so the stacked output would need to be unstacked anyway before transmission, which somewhat defeats the purpose.

For the decoder, I agree it should remain stateless. A simple and flexible approach is to accumulate the received RVs outside the decoder. This keeps the decoder logic clean and allows users to apply any desired accumulation method or weighting strategy. That said, we could optionally provide a utility method for accumulation within the class (but not tied to the decoder itself), to support convenience without introducing state. Let me know if this approach works for you or if you'd prefer something more integrated.

I will extend the notebook into a tutorial with background on HARQ and RVs, and switch to Es/N₀ for plotting to better reflect varying code rates. I will also add unit tests comparing the HARQ decoder to the classical (rv=0) case under equivalent redundancy.

Good catch: the change to test_mimo_flat_fading.py wasn't intentional and must have slipped in during testing. I'll remove it from the PR.
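A utility along these lines could be sketched as follows. This is a plain-Python illustration only; `accumulate_llrs`, its signature, and the equal-weight default are hypothetical, not part of Sionna's API.

```python
def accumulate_llrs(llr_buffers, weights=None):
    """Combine LLRs received for the same code bits across HARQ rounds.

    llr_buffers: list of equal-length lists, one per received RV,
                 already mapped back to circular-buffer positions
                 (untransmitted positions filled with 0.0).
    weights:     optional per-round scaling, e.g., SNR-based;
                 defaults to equal weights.
    """
    if weights is None:
        weights = [1.0] * len(llr_buffers)
    combined = [0.0] * len(llr_buffers[0])
    for w, buf in zip(weights, llr_buffers):
        for i, llr in enumerate(buf):
            combined[i] += w * llr
    return combined
```

The decoder then consumes the combined buffer once, and stays stateless; any weighting policy remains under user control.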
As an alternative, I’d like to propose a simpler structure when HARQ mode is activated:
This keeps encode and decode clean, stateless, and graph-compatible, and avoids repeated calls to encode(u) when testing multiple RVs: the full n_cb output is stored once and sliced as needed for retransmissions. A harq_mode flag is provided as a function parameter to both encode and decode. This supports backward compatibility: a first transmission can use harq_mode=False, while retransmissions typically use harq_mode=True. Let me know if this direction would be acceptable.
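The encode-once idea can be illustrated with a wrap-around slice of the circular buffer. The helper name `select_rv`, the toy buffer, and the `rv_starts` offsets below are illustrative assumptions, not the real rate-matching parameters.

```python
def select_rv(codeword_cb, k0, num_bits):
    """Read num_bits from the circular buffer starting at offset k0,
    wrapping around at the end (5G LDPC rate-matching style)."""
    n_cb = len(codeword_cb)
    return [codeword_cb[(k0 + i) % n_cb] for i in range(num_bits)]

# Encode once, then slice different RVs from the stored buffer:
codeword_cb = [0, 1, 1, 0, 1, 0, 0, 1]   # toy circular buffer, n_cb = 8
rv_starts = {"rv0": 0, "rv2": 5}          # toy start offsets
tx_rv0 = select_rv(codeword_cb, rv_starts["rv0"], 6)
tx_rv2 = select_rv(codeword_cb, rv_starts["rv2"], 6)  # wraps around the end
```

No state is needed beyond the user holding on to `codeword_cb`, which is exactly what makes repeated `encode(u)` calls unnecessary.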
Hi @rabihchrabieh, Here are my thoughts:
As the decoder has no internal state, we must provide all RV versions anyhow. Here I would suggest simply stacking the inputs. In case a user wants to do advanced experiments, e.g., only decode "rv3", one could stack with zero LLRs. Otherwise, one could just stack the different RV versions (e.g., from transmissions over different instances of time-varying channels) and run the decoder once. For example:
Or alternatively, do it as a one-shot experiment (assuming the channel model supports this). The overhead in the decoder should be fairly small, as it boils down to gathering and adding LLRs before decoding. I would always disable pruning if
What do you think, does that make sense?
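The "gathering and adding LLRs before decoding" step could be sketched like this in plain Python (the helper `combine_rvs` is hypothetical): each received RV is scattered back to its circular-buffer positions and added, while positions never transmitted keep LLR 0, i.e., an erasure.

```python
def combine_rvs(n_cb, received, starts):
    """De-rate-match and combine stacked RV inputs.

    received: dict rv_name -> list of LLRs for that transmission.
    starts:   dict rv_name -> start offset k0 in the circular buffer.
    Returns the combined LLR buffer of length n_cb.
    """
    llr_cb = [0.0] * n_cb
    for rv, llrs in received.items():
        k0 = starts[rv]
        for i, llr in enumerate(llrs):
            # Independent noisy observations of the same bit: LLRs add.
            llr_cb[(k0 + i) % n_cb] += llr
    return llr_cb
```

Decoding "rv3 only" then amounts to passing a `received` dict with a single entry; the other positions simply stay at zero.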
Hi @SebastianCa, Thanks again for the detailed input. Before moving forward, I wanted to clarify a few points to make sure I fully understand the proposed design, especially in the context of stacked RVs. Suppose a user provides
Alternatively, is the idea to transmit all
Just trying to get a clearer picture of how you see these components fitting together.
Hi @rabihchrabieh, ah, my bad, sorry for the confusion; when saying
Comment: mapping/demapping is not shown here.
Regarding your questions: in my eyes, that's the best version to give users the flexibility to simulate realistic HARQ scenarios (i.e., re-transmission of full code blocks) without adding a lot of complexity to the API.
Sounds good: static RV lists. One suggestion: should the decoder also take the same
Let me know.
Yes, we could add
Why do we need weighting? Shouldn't the LLRs already be scaled by the reliability/SNR? I would try to keep the API streamlined. Applying weights could also be done manually before feeding the LLRs to the decoder.
OK, making "rv" optional in the decoder sounds good. We can default to the standard RV order (["rv0", "rv2", "rv3", "rv1"]), which aligns with typical HARQ scheduling.

Just to clarify: sequences like ["rv0", "rv2"] or ["rv0", "rv2", "rv3"] are quite common. Repeating an RV, such as ["rv0", "rv0"], can also occur; for example, if the initial transmission was interfered with, it is sometimes better to retransmit RV0 rather than move on to RV2. So having the option to explicitly specify the RV list is useful, especially for comparing cases like

Regarding weighting: you're right that LLR scaling is ideally handled by the user. In some practical setups, LLRs are quantized to 8 bits after each RV reception. In such cases, to combine with a new RV, the stored LLRs must be dequantized, rescaled (e.g., based on SNR), accumulated, and re-quantized. But I agree, it's best to leave this under user control. I will proceed with this implementation.
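The 8-bit LLR storage scenario described above could look like this. The sketch is illustrative only; the saturation range and the fixed-point scale of 16 are assumptions, not values mandated by any standard or by Sionna.

```python
def quantize_llr(llrs, scale=16.0):
    """Map float LLRs to int8-range integers with saturation."""
    return [max(-128, min(127, int(round(x * scale)))) for x in llrs]

def dequantize_llr(q, scale=16.0):
    """Map stored integers back to float LLRs."""
    return [v / scale for v in q]

# Combine a stored (quantized) buffer with a freshly received RV:
stored_q = quantize_llr([3.5, -2.25])          # after the previous round
new_llrs = [1.0, 0.5]                          # fresh LLRs for the same bits
combined = [a + b for a, b in zip(dequantize_llr(stored_q), new_llrs)]
requantized = quantize_llr(combined)           # back to 8-bit storage
```

Any SNR-based rescaling would be applied to `new_llrs` (or the dequantized buffer) before the addition, which is why it is cleanest to leave it outside the decoder.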
Sounds good!
Hi @SebastianCa, To display Es/No, it seems that it's not currently exposed in the
Do you have a different suggestion or a preferred way to handle this? Thanks for your help!
The straightforward way is to not call the
The issue is with displaying and printing the label "EsNo" on graphs and in results tables. I had to make small changes.
632ba8c to aae52c1 (force-pushed)
I've pushed the updated, squashed commit with the refactored HARQ implementation per feedback. Ready for re-review.
- Implemented HARQ mode for LDPC encoder/decoder with RV selection
- Added HARQ tutorial with AWGN BLER performance example
- Added unit tests

Signed-off-by: Rabih Chrabieh <[email protected]>
aae52c1 to 21f9231 (force-pushed)
Great - we'll review the code soon.
Modified docstrings, removed (unneeded) public functions, minor code changes.
raise ValueError("Last dimension must be of length k.")
- def call(self, bits):
+ def call(self, bits, rv=None):
The problem with rv as a (Python) list is that it will re-trace the graph whenever the input signature changes, i.e., TF will trace one graph for each (used) combination of rv.
This can be OK, as we (typically) have a small number of possibilities.
An alternative would be to provide a list of tf.ints and gather the rv_starts from another tensor. This could work in a dynamic graph and avoids re-tracing. However, the code becomes less readable (and the wrap-arounds may become a bit ugly).
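The gather-based alternative can be sketched in plain Python (in TF this would be a `tf.gather` on a constant tensor of start offsets, so the traced graph only sees tensor-valued RV indices and does not re-trace per RV combination). The table values below are toy numbers, not the TS 38.212 offsets.

```python
# Table of circular-buffer start offsets, indexed by RV id 0..3.
# Toy values for illustration; real offsets depend on n_cb and Z.
RV_STARTS = [0, 17, 33, 56]

def gather_starts(rv_ids):
    """rv_ids: list of integer RV indices (stand-in for a tf.int32 tensor).
    Looks up start offsets by index, with no Python-level branching on
    the RV names, so one 'graph' can serve any RV combination."""
    return [RV_STARTS[i] for i in rv_ids]
```

The readability cost mentioned above shows up once the wrap-around slicing also has to be expressed in tensor ops driven by these gathered offsets.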
This likely complicates the code and only partially solves the problem: if we have a different number of RVs, then it still needs to re-trace. If we always set it to the maximum number of RVs (how many retransmissions is that?), then there is a lot of overhead and it is complex to manage the masked RVs. Let me know if you have a particular solution in mind.
SebastianCa left a comment
I reviewed all files except the ipynb. This can be done once the code is ready.
test/unit/fec/test_ldpc_5g_harq.py (Outdated)
def encode_harq():
    bits = source([batch_size, k])
    x_ref = encoder_ref(bits)  # Shape: [batch_size, n_cb]
    x_ref = tf.roll(x_ref, shift=starts["rv0"], axis=-1)  # Undo shift of RV0
Why do we need to shift the reference here?
encoder_ref outputs the full n_cb buffer, but shifted by 2Z, as this is the original non-HARQ encoder. Here we undo this 2Z shift to obtain the correct, full, unshifted buffer of n_cb bits; then it is easier to compare against the HARQ mode using rv_list.
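The undo-shift can be illustrated with a toy roll; this plain-Python `roll` mimics `tf.roll` on the last axis, and the 2Z value below is illustrative only.

```python
def roll(seq, shift):
    """Circular shift, tf.roll semantics: positive shift moves elements
    toward higher indices, wrapping around."""
    shift %= len(seq)
    return seq[-shift:] + seq[:-shift]

cb = [0, 1, 2, 3, 4, 5]        # full, unshifted circular buffer (toy)
two_z = 2                      # illustrative 2Z offset
shifted = roll(cb, -two_z)     # what the non-HARQ encoder emits
restored = roll(shifted, two_z)  # undo the shift, as in the test above
```

Rolling back by the RV0 start offset realigns the reference output with the unshifted buffer, so positions can be compared index-by-index against the HARQ-mode slices.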
I changed the logic a bit.
Hi @rabihchrabieh, Keep in mind that the LDPC block is one of Sionna's core components, and we should be very careful with changes that are not strictly related to HARQ. It's also important that we maintain backwards compatibility (ideally no breaking changes). A few big-picture comments:
Besides that, the PR looks very promising.
btw: I'm using the GitHub comment function; thus, I haven't executed the code locally, and my changes may have introduced a few minor incompatibilities.
Hi @SebastianCa, thanks for your thorough review and good feedback. I agree with most of your comments and suggestions, but as I am currently traveling, it will take a bit of time to respond.
No worries - there is no deadline. We'll wait for your response.
Hi @rabihchrabieh, thanks for updating the PR; please expect some delays with the review due to traveling.
Signed-off-by: Rabih Chrabieh <[email protected]>
Signed-off-by: Rabih Chrabieh <[email protected]>
Signed-off-by: Rabih Chrabieh <[email protected]>
…_starts methods, added multi-batch test Signed-off-by: Rabih Chrabieh <[email protected]>
Signed-off-by: Rabih Chrabieh <[email protected]>
Signed-off-by: Rabih Chrabieh <[email protected]>
Signed-off-by: Rabih Chrabieh <[email protected]>
Signed-off-by: Rabih Chrabieh <[email protected]>
ae39bb2 to 8199c5a (force-pushed)
Just pushed new commits; sorry for the delay! I've aligned with all your suggestions except one, which I've explained in the thread.
Description
- Added HARQ support for 5G LDPC codes.
- Created a Jupyter notebook that describes and tests the functionality.
- Currently, the HARQ functionality does not support pruning (TODO for RV0, etc.).
- Graph mode is working but should be revisited.
- All (or most) existing tests related to 5G LDPC pass.
- No new unit test was created (TODO).
- The additional API does not impact the existing API.
- Fixed a few unrelated tests that were previously broken (cn_type changed to cn_update).
Checklist
[ ] Co-authored with someone? Add Co-authored-by: user@domain and ensure they signed off their commits too.