Description
I'm working on a contribution to fix this problem, so I want to engage the community to see which solutions are acceptable.
After installing cmake from pip, the cmake binary does not work, failing with a segmentation fault. Below is a test on Ubuntu 20.04, arm64:
docker run --rm -it ubuntu:focal bash
root@dd6e77ac5530:/# apt update && apt install python3-pip
...
root@dd6e77ac5530:/# pip3 install cmake
Collecting cmake
Downloading cmake-3.18.2.post1-py3-none-manylinux2014_aarch64.whl (15.2 MB)
|████████████████████████████████| 15.2 MB 27.6 MB/s
Installing collected packages: cmake
Successfully installed cmake-3.18.2.post1
root@dd6e77ac5530:/# cmake --version
root@dd6e77ac5530:/# echo $?
245
root@dd6e77ac5530:/# /usr/local/lib/python3.8/dist-packages/cmake/data/bin/cmake
Segmentation fault (core dumped)
root@dd6e77ac5530:/#
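In case it helps anyone reproduce this, a backtrace can be captured directly from the bundled binary with gdb (the path below is the dist-packages location from the log above; adjust it for your Python version):

# run the crashing binary under gdb and print a backtrace at the fault
apt install -y gdb
gdb -q -ex run -ex bt --args /usr/local/lib/python3.8/dist-packages/cmake/data/bin/cmake --version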
After some debugging I discovered that this problem could be fixed by linking against libstdc++ and libgcc dynamically instead of statically. I changed these lines:
cmake-python-distributions/CMakeLists.txt (lines 189 to 191 at 1f845d0):

file(WRITE "${CMAKE_BINARY_DIR}/initial-cache.txt"
  "set(CMAKE_C_FLAGS \"-D_POSIX_C_SOURCE=199506L -D_POSIX_SOURCE=1 -D_SVID_SOURCE=1 -D_BSD_SOURCE=1\" CACHE STRING \"Initial cache\" FORCE)
  set(CMAKE_EXE_LINKER_FLAGS \"-static-libstdc++ -static-libgcc -lrt\" CACHE STRING \"Initial cache\" FORCE)
to
file(WRITE "${CMAKE_BINARY_DIR}/initial-cache.txt"
"set(CMAKE_C_FLAGS \"-g3 -D_POSIX_C_SOURCE=199506L -D_POSIX_SOURCE=1 -D_SVID_SOURCE=1 -D_BSD_SOURCE=1\" CACHE STRING \"Initial cache\" FORCE)
set(CMAKE_EXE_LINKER_FLAGS \"-lstdc++ -lgcc -lrt\" CACHE STRING \"Initial cache\" FORCE)
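As a quick sanity check after rebuilding with the flags above (using the same installed path as in the log, which may differ on your system), ldd should now show libstdc++ and libgcc being resolved dynamically:

# with the static flags these entries are absent; after the change they should appear
ldd /usr/local/lib/python3.8/dist-packages/cmake/data/bin/cmake | grep -E 'libstdc\+\+|libgcc'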
This fixed the problem for Ubuntu 20.04, and the wheel works correctly after the change. However, when testing in the manylinux2014-aarch64 container, I discovered that the cmake binary was linked against newer versions of libcrypto and libssl than the ones available there. That could be fixed with the auditwheel tool, but it introduced two new problems:
1. auditwheel can't be run in the cross-compiler container, so it has to be run in a separate emulated container or natively.
2. After I switched to running natively on an Arm system, auditwheel refused to repair the wheel because of linkage with glibc 2.25.
[root@74a50ab14cd5 io]# /opt/python/cp38-cp38/bin/auditwheel repair dist/cmake-0.post308+gef0722c-py3-none-manylinux2014_aarch64.whl
INFO:auditwheel.main_repair:Repairing cmake-0.post308+gef0722c-py3-none-manylinux2014_aarch64.whl
usage: auditwheel [-h] [-V] [-v] command ...
auditwheel: error: cannot repair "dist/cmake-0.post308+gef0722c-py3-none-manylinux2014_aarch64.whl" to "manylinux2014_aarch64" ABI because of the presence of too-recent versioned symbols. You'll need to compile the wheel on an older toolchain.
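For context, PEP 599 ties manylinux2014 to CentOS 7, whose glibc is 2.17, so any symbol versioned above that is enough for auditwheel to refuse. A quick way to inspect what auditwheel sees (the wheel name comes from the log above; the in-wheel binary path is assumed from the installed layout shown earlier, so adjust as needed):

# summarize the wheel's external dependencies and the platform tag it actually qualifies for
/opt/python/cp38-cp38/bin/auditwheel show dist/cmake-0.post308+gef0722c-py3-none-manylinux2014_aarch64.whl

# or list the versioned glibc symbols referenced by the cmake binary itself
/opt/python/cp38-cp38/bin/python -m zipfile -e dist/cmake-0.post308+gef0722c-py3-none-manylinux2014_aarch64.whl /tmp/wheel/
objdump -T /tmp/wheel/cmake/data/bin/cmake | grep -oE 'GLIBC_2\.[0-9.]+' | sort -uV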
From there I discovered that the dockcross container used to cross-compile the arm64 wheel is using libraries that aren't compliant with the PEP 599 spec, so I reported this bug to dockcross.
Finally, I am working on a change that uses Travis-CI to do a native Linux-on-Arm build with the manylinux2014-aarch64 container. Since the build uses scikit-build, which itself depends on CMake, the job must first download, build, and install a bootstrap CMake before CMake can be built again for the wheel; a rough sketch of the build step follows.
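To make the plan concrete, this is roughly the shape of the native build step. The image and interpreter paths are the standard manylinux ones, but the build commands themselves are a sketch rather than the project's actual CI script:

# run on an Arm64 host (e.g. a Travis Graviton 2 instance); the image is the official manylinux2014 aarch64 build image
docker run --rm -v "$(pwd)":/io quay.io/pypa/manylinux2014_aarch64 bash -c '
  cd /io
  # bootstrap: a working CMake must exist on PATH before the project can build its own
  # (build one from source here, or install it some other way -- details omitted)
  /opt/python/cp38-cp38/bin/pip install scikit-build
  /opt/python/cp38-cp38/bin/python setup.py bdist_wheel
  /opt/python/cp38-cp38/bin/auditwheel repair dist/*.whl -w wheelhouse/
'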
I have the following questions for the community:
- Would you support migrating away from the dockcross container in favor of native builds on Arm?
- Using auditwheel bundles the dynamic libraries with the wheel. Does this present any licensing problems?
- Would you be open to migrating from Travis-CI.org to Travis-CI.com? The dot-com version supports Arm64 builds on AWS Graviton 2 instances, which are much faster than the previous generation available on Travis. See: https://blog.travis-ci.com/2020-09-11-arm-on-aws