
lets try this on the new Travis setup #28437


Closed
wants to merge 4 commits into from

Conversation

@joshk commented Sep 16, 2015

please do not merge me, just yet, pretty please, sugar on top

@rust-highfive (Contributor)

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @alexcrichton (or someone else) soon.

If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Given the way GitHub handles out-of-date commits, this should also make it reasonably obvious which issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.

Please see the contribution instructions for more information.

@alexcrichton (Member)

Looks like we timed out :(

@joshk (Author) commented Sep 16, 2015

Aha! So timeouts aren't working for the container jobs, interesting. Well, let me modify the timeouts for this repo and we can restart (I'll do a new commit for that).

@joshk (Author) commented Sep 17, 2015

I've bumped the time limit to 120 minutes while we see if we can get this to finish before being killed.

@Gankra (Contributor) commented Sep 17, 2015

Yes, this is why I filed travis-ci/travis-ci#4521 and never fully went forward with sudo: 9000

IIRC, ccache was also broken with this mode, significantly exacerbating the timeout issue.

@alexcrichton (Member)

Looks like we managed to just squeeze in with a 2hr timeout on that build, although a number of the tests failed:

failures:
    net::tcp::tests::clone_accept_concurrent
    net::tcp::tests::clone_accept_smoke
    net::tcp::tests::clone_while_reading
    net::tcp::tests::close_read_wakes_up
    net::tcp::tests::close_readwrite_smoke
    net::tcp::tests::connect_ip6_loopback
    net::tcp::tests::double_bind
    net::tcp::tests::fast_rebind
    net::tcp::tests::multiple_connect_interleaved_greedy_schedule
    net::tcp::tests::multiple_connect_interleaved_lazy_schedule_ip4
    net::tcp::tests::multiple_connect_serial_ip4
    net::tcp::tests::partial_read
    net::tcp::tests::read_eof_ip4
    net::tcp::tests::shutdown_smoke
    net::tcp::tests::smoke_test_ip6
    net::tcp::tests::socket_and_peer_name_ip4
    net::tcp::tests::tcp_clone_smoke
    net::tcp::tests::tcp_clone_two_read
    net::tcp::tests::tcp_clone_two_write
    net::tcp::tests::write_close
    net::udp::tests::socket_name_ip4
    net::udp::tests::socket_smoke_test_ip4
    net::udp::tests::udp_clone_smoke
    net::udp::tests::udp_clone_two_read
    net::udp::tests::udp_clone_two_write

They all failed for the same reason:

thread '<unnamed>' panicked at 'received error for `TcpListener::bind(&addr)`: Cannot assign requested address (os error 99)', src/libstd/net/tcp.rs:867

Which may be on our end, but I've only seen that when we have two standard library test suites running in parallel, which I don't think this is doing. Just to be sure, is IPv6 enabled on these new machines?

Also, as @gankro pointed out, although ccache may not be necessary to get us under the time limit, it's certainly useful for reducing build times. So I'm just curious whether you know if it's an issue on the new machines? If not, we can just turn it on and it'll all start working once it's smoothed out in the backend :)

@joshk (Author) commented Sep 17, 2015

Caching isn't available on the new GCE setup yet.

As for the failures, GCE VMs don't support IPv6 just yet, which might mean sticking with the Docker setup OR using Docker on the GCE hosts.

Happy to talk through how this might work.

@alexcrichton (Member)

Ah ok, lack of IPv6 would do it for the failing tests, so we may have to stick to Docker for now. Is it planned to have IPv6 enabled on GCE?

The ccache problem isn't critical per se, just a nice-to-have. So long as the build doesn't time out, it's not so bad to take a little longer to build LLVM; the build's already quite long anyway!
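For reference, Travis's documented way to opt into ccache is the `cache` key in `.travis.yml`; whether the new GCE backend honors it is exactly the open question above. A minimal sketch of the documented syntax, not a guarantee it works on that infrastructure:

```yaml
language: c
# Ask Travis to persist the ccache directory between builds.
# Support on the new GCE setup is an assumption under discussion here.
cache: ccache
```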

@joshk (Author) commented Sep 18, 2015

Hey Alex,

Sadly, IPv6 is out of our control when it comes to GCE, as GCE simply isn't capable of it right now.

What you could do, though, is use Docker inside of GCE and run your tests in there, thus giving you IPv6, and also allowing you to use the extra RAM the host has. This would also mean you could pre-prep an image with any deps needed (like what is stored using ccache) to reduce test times.
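A hedged sketch of what that might look like in `.travis.yml`, assuming Docker is available on the GCE hosts; the image name `rust-ci-prebuilt` and the `run-tests.sh` entry point are hypothetical placeholders, not real artifacts from this PR:

```yaml
sudo: required
services:
  - docker
script:
  # Run the suite inside a container, which (per the discussion above)
  # is expected to restore IPv6 for the tests even though the GCE VM
  # itself lacks it. Image name and script are hypothetical.
  - docker run --rm -v "$TRAVIS_BUILD_DIR":/build rust-ci-prebuilt /build/run-tests.sh
```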

If this is of interest, let me know how I can help.

@alexcrichton (Member)

Hm, so we've long wanted automation using a stock build of LLVM instead, so this may be a good opportunity to take action on that! We can probably just use a vanilla Ubuntu Docker image and install stock LLVM at build time.
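A minimal sketch of such an image; the exact LLVM package name and the `--llvm-root` path are assumptions about the then-current Ubuntu archives and the makefile-based Rust build, for illustration only:

```dockerfile
FROM ubuntu:15.04
# Install a stock distro LLVM instead of building it in-tree;
# package names/versions here are assumptions, not verified.
RUN apt-get update && apt-get install -y \
    llvm-3.6-dev clang-3.6 \
    build-essential python git curl ccache
# The Rust configure script could then be pointed at it, e.g.:
#   ./configure --llvm-root=/usr/lib/llvm-3.6
```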

I may try playing around and see how far that gets us, thanks @joshk!

@joshk (Author) commented Sep 18, 2015

My pleasure @alexcrichton!

Let me know how you get on!

@alexcrichton (Member)

Continuing this in #28500, where we can try out Docker (but we get the higher time limit on this repo as well).


4 participants