Feature/RB-143-compression-pack-network #115
Conversation
Walkthrough

The changes introduce the Zstandard (zstd) compression library to the project, modifying the network packet structure to carry compressed data. The `Zstd` class provides static functions for data compression and decompression. The build process is updated to fetch and link the zstd library, and the GitHub workflow is adjusted to include zstd in the release process.
Review Status
Actionable comments generated: 3
Files selected for processing (5)
- deps/CMakeLists.txt (1 hunks)
- deps/zstd/CMakeLists.txt (1 hunks)
- src/Nitwork/ANitwork.cpp (3 hunks)
- src/Nitwork/ANitwork.hpp (3 hunks)
- src/Nitwork/Zstd.hpp (1 hunks)
Files skipped from review due to trivial changes (1)
- deps/CMakeLists.txt
Additional comments (Suppressed): 8
deps/zstd/CMakeLists.txt (1)
- 1-27: The changes in this hunk look good. The Zstandard library is fetched and linked statically to both the client and server projects. The use of `FetchContent` is a good practice for managing dependencies in CMake. Static linking ensures that the binary can run on systems without the Zstandard library installed. The version of the library is also explicitly specified, which helps ensure consistent behavior across different build environments.

src/Nitwork/ANitwork.hpp (4)
14-17: The inclusion of the Zstd library is a significant change. Ensure that the library is correctly linked and that the build system is properly configured to include it. Also, verify that the library is compatible with the rest of the codebase and the target platforms.
62-79: The data compression using Zstd is a good addition for optimizing network data transmission. However, it's important to consider the trade-off between CPU usage and network bandwidth. Compression algorithms like Zstd can be CPU-intensive, which might not be ideal for systems with limited CPU resources. Also, ensure that the receiving end of the data transmission is capable of decompressing the data.
- 65-65: The `Zstd::compress` function is used here to compress the data before sending. Ensure that this function handles errors properly and compresses the data as expected. Also, verify that the function does not cause any memory leaks or other resource-management issues.
- 70-79: The error handling in the `async_send_to` function is well done. It checks for errors and logs them appropriately. However, it would be beneficial to add more specific error messages to help with debugging. For example, the message "NITWORK: Package not sent" could be more descriptive, indicating why the package was not sent.

src/Nitwork/ANitwork.cpp (3)
- 8-10: The order of the `#include` directives has been changed. Ensure that this does not affect the compilation or the functionality of the code. The order of `#include` directives can sometimes matter, especially when there are dependencies or macros that need to be defined in a certain order.
- 127-131: The `callReceiveHandler` function now logs an error message before starting the receive handler. This is a good practice for debugging and understanding the flow of the program. However, ensure that the logging does not introduce significant overhead, especially if this function is called frequently.
- 136-147: The code now checks the size of the packet after decompression against `MAX_PACKET_SIZE` and the size of the header. This is a good practice to prevent buffer overflow attacks and to ensure that the packet is valid. However, the error message "header not received" may not be accurate if the packet size is greater than `MAX_PACKET_SIZE`; consider using a more descriptive error message. Also, the decompression of the received buffer is done after the size check, which is a good practice as it prevents unnecessary decompression of invalid packets.

```diff
-    callReceiveHandler("header not received");
+    callReceiveHandler("Invalid packet size");
```
Review Status
Actionable comments generated: 4
Files selected for processing (2)
- src/Nitwork/ANitwork.cpp (2 hunks)
- src/Nitwork/Zstd.hpp (1 hunks)
Additional comments (Suppressed): 1
src/Nitwork/ANitwork.cpp (1)
- 127-131: The `callReceiveHandler` function now logs an error message before starting the receive handler. This is a good improvement for error handling and debugging. However, ensure that the `Logger::error` function is thread-safe, as `callReceiveHandler` might be called from multiple threads.
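The project's actual `Logger` is not shown in this PR, so the following is only a hedged sketch of one way to make an `error` call safe from multiple threads: serialize writes through a single mutex. The class name and signature are illustrative, not the project's real API.

```cpp
#include <cassert>
#include <iostream>
#include <mutex>
#include <string>

// Hypothetical Logger sketch: a single mutex serializes writes so that
// concurrent calls from callReceiveHandler cannot interleave their output.
class Logger {
  public:
    // Format and print the message under a lock; the formatted line is
    // returned so callers (and tests) can inspect it.
    static std::string error(const std::string &message)
    {
        std::string line = "ERROR: " + message;
        std::lock_guard<std::mutex> lock(getMutex());
        std::cerr << line << std::endl;
        return line;
    }

  private:
    // Function-local static: initialization is thread-safe since C++11,
    // so no separate init step is needed.
    static std::mutex &getMutex()
    {
        static std::mutex mutex;
        return mutex;
    }
};
```

With this shape, every thread that calls `Logger::error` takes the same lock, so the stream writes are atomic with respect to each other.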
…X-R-G-B/R-Bus into feature/RB-143-compression-pack-network
Review Status
Actionable comments generated: 1
Files selected for processing (1)
- src/Nitwork/Zstd.hpp (1 hunks)
Additional comments (Suppressed): 2
src/Nitwork/Zstd.hpp (2)
- 20-37: The `compress` function is well implemented. It checks if the data type is POD (Plain Old Data), calculates the maximum size of the compressed data, compresses the data, checks for errors, and resizes the vector to the actual size of the compressed data. However, it's worth noting that the compression level is set to 1, which is the fastest but least compressed. If you need more compression and can afford the extra CPU usage, consider increasing this value.
- 55-58: The `getFrameContentSize` function is straightforward and doesn't seem to have any issues. It correctly uses the ZSTD library function to get the size of the decompressed content.
…X-R-G-B/R-Bus into feature/RB-143-compression-pack-network
Review Status
Actionable comments generated: 1
Files selected for processing (1)
- src/Nitwork/Zstd.hpp (1 hunks)
Additional comments (Suppressed): 3
src/Nitwork/Zstd.hpp (3)
- 34-37: The `compress` function checks whether compression failed using `ZSTD_isError`. If there was an error, it throws a runtime error with a message that includes the error name. This is a good practice as it helps with debugging.
- 42-50: The `decompress` function decompresses the input data and likewise checks the result with `ZSTD_isError`, throwing a runtime error with the error name on failure.
- 54-60: The `getFrameContentSize` function gets the size of the decompressed content and applies the same `ZSTD_isError` check, throwing a descriptive runtime error on failure.
Saverio976
left a comment
The RFC needs to be updated.
Review Status
Actionable comments generated: 2
Files selected for processing (2)
- docs/network/rfc/RFC.md (1 hunks)
- src/Nitwork/Zstd.hpp (1 hunks)
Additional comments (Suppressed): 2
docs/network/rfc/RFC.md (1)
- 142-147: The new hunk introduces the use of the zstd library for compressing and decompressing network packets. Ensure that the implementation of this feature does not introduce any performance or compatibility issues. Also, verify that the zstd library is compatible with the platforms that the game is intended to run on.
src/Nitwork/Zstd.hpp (1)
- 54-56: The check for `decompressedSize > MAX_PACKET_SIZE` should be done before calling `ZSTD_decompress`. If the decompressed size is larger than `MAX_PACKET_SIZE`, it could lead to a buffer overflow. Consider moving this check before the call to `ZSTD_decompress`.

```diff
 std::array<char, MAX_PACKET_SIZE> decompressedArray = {0};
 size_t decompressedSize = ZSTD_decompress(decompressedArray.data(), MAX_PACKET_SIZE, data.data(), size);
-if (ZSTD_isError(decompressedSize) != 0U) {
-    throw std::runtime_error(
-        std::string("ZSTD: Error while decompressing: ") + ZSTD_getErrorName(decompressedSize));
-}
+if (decompressedSize > MAX_PACKET_SIZE) {
+    throw std::runtime_error("ZSTD: Decompressed size is too big");
+}
 if (decompressedSize > MAX_PACKET_SIZE) {
     throw std::runtime_error("ZSTD: Decompressed size is too big");
 }
 return decompressedArray;
```

Overall, the code is well-structured and follows good practices. The use of `std::is_standard_layout_v` and `std::is_trivial_v` to check if the data is POD (Plain Old Data) is a good practice to ensure that the data can be safely serialized and deserialized. The error handling is also well done, with clear error messages and checks for errors after each ZSTD function call.
Review Status
Actionable comments generated: 0
Files selected for processing (2)
- docs/network/rfc/RFC.md (1 hunks)
- src/Nitwork/Zstd.hpp (1 hunks)
Additional comments (Suppressed): 4
src/Nitwork/Zstd.hpp (3)
- 22-40: The `compress` function checks if the input data is of Plain Old Data (POD) type and if its size is within the limit that Zstandard can handle. It then compresses the data and returns it as a vector of characters. If any of these operations fail, it throws a runtime error with an appropriate message, so the function fails fast and provides useful information for debugging.
- 45-60: The `decompress` function takes a data array and its size, decompresses the data, and returns it as an array of characters. It checks if the decompression was successful and if the decompressed size is within the limit, throwing a runtime error with an appropriate message otherwise.
- 63-72: The `getFrameContentSize` function takes a data array and returns the size of its decompressed content, throwing a runtime error with an appropriate message if the operation fails.

docs/network/rfc/RFC.md (1)
- 142-147: The changes made to the "Network Packet Global Structure" section of the RFC document accurately reflect the introduction of the Zstandard (zstd) compression library to the project. The new text explains that packets are now compressed with zstd before being sent over the network and decompressed upon receipt. This is a good update to the documentation, ensuring it stays current with the codebase.
However, it might be beneficial to include a brief explanation or a link to more information about the zstd library for those unfamiliar with it. This would provide context and help developers understand why this specific compression library was chosen.
Example:
```diff
 The packets are compressed using the zstd library before being sent over the network using the UDP protocol. Upon receipt, the packets are decompressed using the zstd library
+Zstandard (zstd) is a fast lossless compression algorithm, providing high compression ratios. It also offers a very wide range of compression/speed trade-offs. More information can be found [here](https://facebook.github.io/zstd/).
```
Review Status
Actionable comments generated: 1
Files selected for processing (1)
- .github/workflows/release.yml (5 hunks)
Additional comments (Suppressed): 4
.github/workflows/release.yml (4)
- 3-5: The new branch `fix/Nitwork-tiks-and-enemydeath` has been added to the list of branches that trigger the workflow. Ensure that this branch exists and is intended to trigger the workflow.
- 11-11: A new environment variable `ZSTD_TAG` has been added. This variable is used to check out a specific tag of the `zstd` repository. Ensure that the tag `v1.5.5` exists in the `zstd` repository.
- 119-119: The `zstd-to-tar` job has been added as a dependency for the `release-windows`, `release-linux`, and `release-macos` jobs. Ensure that the `zstd-to-tar` job completes successfully before these jobs are run.
- 225-259: A new job `zstd-to-tar` has been added to the workflow. This job clones the `zstd` repository, checks out a specific tag, creates a tar file, and uploads it to the release or artifact based on the branch. Ensure that the commands in this job are correct and work as expected.
Review Status
Actionable comments generated: 0
Files selected for processing (1)
- .github/workflows/release.yml (5 hunks)
Additional comments (Suppressed): 4
.github/workflows/release.yml (4)
- 3-5: The branch `fix/Nitwork-tiks-and-enemydeath` has been added to the list of branches that trigger the workflow. Ensure that this branch is intended to trigger the workflow and that it exists in the repository.
- 8-11: The environment variable `ZSTD_TAG` has been added. This variable is used to specify the version of the Zstd library to be used in the project. Ensure that the version specified is compatible with the project.
- 119-119: The `zstd-to-tar` job has been added as a dependency for the `release-windows`, `release-linux`, and `release-macos` jobs, which means these jobs will not start until the `zstd-to-tar` job has completed successfully. Ensure that this dependency is necessary and that the `zstd-to-tar` job is guaranteed to complete successfully before these jobs are expected to start.
- 225-259: A new job `zstd-to-tar` has been added. This job clones the Zstd repository, checks out the version specified by the `ZSTD_TAG` environment variable, creates a tar file of the repository, and uploads it to the release if the branch is `main`; otherwise, it uploads the tar file as an artifact for later use. This job is necessary for the inclusion of the Zstd library in the project.
Review Status
Actionable comments generated: 0
Files selected for processing (1)
- .github/workflows/release.yml (5 hunks)
Additional comments (Suppressed): 4
.github/workflows/release.yml (4)
- 3-5: The new branch `feature/RB-143-compression-pack-network` has been added to the list of branches that trigger the workflow. Ensure that this branch exists and is the correct one for this feature.
- 11-11: A new environment variable `ZSTD_TAG` has been added. This is used to specify the version of the Zstandard library to be used. Ensure that the version `v1.5.5` is the correct and intended one.
- 119-119: The `zstd-to-tar` job has been added as a dependency for the `release-windows`, `release-linux`, and `release-macos` jobs, meaning these jobs will wait for the `zstd-to-tar` job to complete successfully before they start. Ensure that this is the intended behavior.
- 225-259: A new job `zstd-to-tar` has been added. This job clones the Zstandard repository, creates a tar file of the repository, and uploads it to the release if the branch is `main`; otherwise, it uploads the tar file as an artifact for later use. This job is necessary for the new feature of using the Zstandard library for network packet compression and decompression, and it follows the same pattern as the other similar jobs in the workflow.
…sion-pack-network
Review Status
Actionable comments generated: 0
Files selected for processing (1)
- deps/zstd/CMakeLists.txt (1 hunks)
Additional comments (Suppressed): 1
deps/zstd/CMakeLists.txt (1)
- 1-28: The CMakeLists.txt file for the zstd library is well-structured and follows best practices for including external libraries in a CMake project. The library is fetched and built statically, which is a good choice for a network library to avoid potential issues with dynamic linking. The library is then linked to both the client and server projects, which is necessary for both to use the compression and decompression functions. The use of `FetchContent_Declare` and `FetchContent_MakeAvailable` is a good practice for fetching and making available the content at configure time. The URL for fetching the zstd library points to a tar file in the releases of the R-Bus project, which is a reasonable choice if the project maintains a version of the zstd library that is known to work with the project. However, it would be good to verify that the URL is correct and that the tar file is available at that location.
Review Status
Actionable comments generated: 2
Files selected for processing (8)
- .github/workflows/compil.yml (2 hunks)
- deps/CMakeLists.txt (1 hunks)
- deps/boost/CMakeLists.txt (1 hunks)
- deps/date/CMakeLists.txt (1 hunks)
- deps/json/CMakeLists.txt (1 hunks)
- deps/raylib/CMakeLists.txt (1 hunks)
- deps/zstd/CMakeLists.txt (1 hunks)
- scripts/compil.sh (1 hunks)
Files skipped from review due to trivial changes (4)
- .github/workflows/compil.yml
- deps/CMakeLists.txt
- deps/date/CMakeLists.txt
- deps/raylib/CMakeLists.txt
Additional comments (Suppressed): 3
scripts/compil.sh (1)
- 12-12: The addition of the `--parallel` flag to the `cmake --build` command is a good practice, as it allows for parallel building, which can significantly speed up the build process by utilizing multiple cores of the processor. However, please ensure that all dependencies in your build are properly managed to avoid potential race conditions that could lead to inconsistent or failed builds.

```diff
-cmake --build build
+cmake --build build --parallel
```

deps/json/CMakeLists.txt (1)
- 7-7: The `CMAKE_ARGS` option is not a valid option for the `FetchContent_Declare` function. If you want to set CMake arguments for the json library, you should do it in the `FetchContent_MakeAvailable` call or in a `FetchContent_GetProperties` call before `FetchContent_MakeAvailable`. Please verify this change.

deps/zstd/CMakeLists.txt (1)
- 1-41: The CMakeLists.txt file for zstd looks good. It fetches the zstd library, builds it statically, and links it to both the client and server projects. The `FetchContent_Declare` function fetches the zstd library from a URL, and `FetchContent_MakeAvailable` makes the library available to the project. The `target_link_libraries` function links the zstd library to the client and server projects, `target_include_directories` adds the zstd library's include directories, `set` configures the build options for the zstd library, and `cmake_minimum_required` specifies the minimum required version of CMake.
What is the current behavior? (link an issue based on the kind of change this PR introduces)
What is the new behavior (if this is a feature change)?
Other information:
Summary by CodeRabbit
A new `Zstd` class provides static functions for data compression and decompression, improving the efficiency of data handling.