lets try this on the new Travis setup #28437
Conversation
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @alexcrichton (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Given the way GitHub handles out-of-date commits, this should also make it reasonably obvious which issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information.
Looks like we timed out :(
Ah ha! So timeouts aren't working on the container jobs, interesting. Well, let me modify the timeouts for this repo and we can restart (I'll do a new commit for that).
I've bumped the time limit to 120 minutes while we see if we can get this to finish before being killed.
Yes, this is why I filed travis-ci/travis-ci#4521 and never fully went forward with it. IIRC, ccache was also broken with this mode, significantly exacerbating the timeout issue.
Looks like we managed to just squeeze in with a 2hr timeout on that build, although a number of the tests failed:
They all failed for the same reason:
Which may be on our end, but I've only seen that when we have two standard library test suites running in parallel, which I don't think is happening here. Just to be sure, is IPv6 enabled on these new machines? Also, as @gankro pointed out, although ccache may not be necessary to get us under the time limit, it's certainly useful for reducing build times, so I'm just curious whether you know if it's an issue on the new machines. If not, we can just turn it on and it'll all start working once it's smoothed out in the backend :)
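For reference, a quick check along the lines of the sketch below would show whether IPv6 is usable on a builder; the exact commands are illustrative assumptions, not something that was run in this thread.

```sh
# Illustrative sketch: check whether a Linux CI machine has working IPv6.
# None of these commands come from the original build logs.
test -r /proc/net/if_inet6 && echo "kernel exposes IPv6 interfaces"
ip -6 addr show        # list configured IPv6 addresses, if any
ping6 -c 1 ::1         # std's net tests bind to loopback addresses such as ::1
```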
Caching isn't available on the new GCE setup just yet. As for the failures, GCE VMs don't support IPv6 yet either, which might mean sticking to the Docker setup or using Docker on the GCE hosts. Happy to talk through how this might work.
Ah ok, lack of IPv6 would do it for the failing tests, so we may have to stick to Docker for now. Is it planned to have IPv6 enabled on GCE? The ccache problem isn't critical per se, just a nice-to-have. So long as the build doesn't time out, it's not so bad to take a little longer to build LLVM; the build's already quite long anyway!
Hey Alex, sadly IPv6 is out of our control when it comes to GCE, as GCE just isn't capable of that right now. What you could do, though, is use Docker inside of GCE and run your tests in there, giving you IPv6 and also letting you use the extra RAM that the host has. This would also mean you could pre-prep an image with any deps needed (like what ccache stores) to reduce test times. If this is of interest, let me know how I can help.
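For illustration, wrapping the existing build in a container on those hosts could look roughly like the following sketch; the image, mount paths, and dependency list are assumptions rather than the actual rust-lang/rust CI configuration.

```sh
# Hypothetical sketch of running the build inside Docker on the GCE-based workers.
# Image, paths, and packages are illustrative assumptions.
docker run --rm \
  -v "$(pwd)":/checkout \
  -w /checkout \
  ubuntu:14.04 \
  sh -c "apt-get update && \
         apt-get install -y build-essential curl file git python && \
         ./configure && make check"
```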
Hm, so we've long wanted automation using a stock build of LLVM instead, so this may be a good opportunity to take action on that! We can probably just use a vanilla Ubuntu Docker image and install stock LLVM at build time. I may try playing around and see how far that gets us, thanks @joshk!
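A rough sketch of the "vanilla Ubuntu image plus stock LLVM" idea might look like this; the LLVM version, the package names (which may need the apt.llvm.org repositories), and the --llvm-root flag of the configure-based build are assumptions, not a verified setup.

```sh
# Hypothetical sketch: use a distro-packaged LLVM instead of building LLVM in-tree.
apt-get update
apt-get install -y build-essential curl git python \
                   llvm-3.7-dev llvm-3.7-tools libedit-dev zlib1g-dev
# Point the (2015-era) configure script at the external LLVM installation;
# the prefix follows the Debian/Ubuntu llvm-X.Y packaging layout.
./configure --llvm-root=/usr/lib/llvm-3.7
make -j"$(nproc)" && make check
```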
My pleasure @alexcrichton! Let me know how you get on!
Continuing this in #28500, where we can try out Docker (but we get the higher time limit on this repo as well).
please do not merge me, just yet, pretty please, sugar on top