Make sure err isn't nil when returning failure #2861
Conversation
A separate issue is whether the pseudoLoopbackForwarder should accept connections from …
Force-pushed from dc8ae6d to 9be7f54
@AkihiroSuda This fixes the crash, but doesn't actually forward to …
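For context, a minimal sketch of the bug class the PR title names (hypothetical names, not Lima's actual code): reporting failure while leaving err nil, so that the caller's err.Error() call panics with a nil pointer dereference.

package main

import (
	"fmt"
	"log"
)

// process is hypothetical; it reports failure via ok == false.
func process(fail bool) (ok bool, err error) {
	if fail {
		// Buggy variant: `return false, nil` — failure with a nil error.
		// Fixed variant: make sure err isn't nil when returning failure.
		return false, fmt.Errorf("processing failed")
	}
	return true, nil
}

func main() {
	ok, err := process(true)
	if !ok {
		// With the buggy variant err would be nil here, and calling
		// err.Error() on a nil error interface panics.
		log.Println("failure:", err.Error())
	}
}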
Actually, it only stops accepting connections from … This also reminds me of something @Nino-K told me: when the guest agent sends a duplicate port to add, the host agent removes the existing port forwarding. Will see if I can find this in his WIP PR.
Force-pushed from ed5c6da to e539fd3
Thanks, can we have a test?

diff --git a/hack/test-templates.sh b/hack/test-templates.sh
index 57ff37a4..0d689312 100755
--- a/hack/test-templates.sh
+++ b/hack/test-templates.sh
@@ -307,9 +307,11 @@ if [[ -n ${CHECKS["port-forwards"]} ]]; then
fi
limactl shell "$NAME" $sudo $CONTAINER_ENGINE info
limactl shell "$NAME" $sudo $CONTAINER_ENGINE pull --quiet ${nginx_image}
- limactl shell "$NAME" $sudo $CONTAINER_ENGINE run -d --name nginx -p 8888:80 ${nginx_image}
-
- timeout 3m bash -euxc "until curl -f --retry 30 --retry-connrefused http://${hostip}:8888; do sleep 3; done"
+ for hostport in 8888 80; do
+ limactl shell "$NAME" $sudo $CONTAINER_ENGINE run -d --name nginx -p ${hostport}:80 ${nginx_image}
+ timeout 3m bash -euxc "until curl -f --retry 30 --retry-connrefused http://${hostip}:${hostport}; do sleep 3; done"
+ limactl shell "$NAME" $sudo $CONTAINER_ENGINE rm -f nginx
+ done
fi
fi
set +x
This PR now fixes the crash and also avoids closing the forwarder the first time a non-local address tries to connect to it.

$ curl 127.0.0.1
<!DOCTYPE html>
…
$ curl localhost
curl: (56) Recv failure: Connection reset by peer
$ curl 127.0.0.1
<!DOCTYPE html>
…

I guess we could use a custom error type instead of matching on the string of the error message to make it a bit more robust. I also think we should accept connections from …
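A sketch of the suggested approach, with hypothetical names (this is not the code that was merged): define a sentinel error and match it with errors.Is instead of comparing error strings, so the accept loop can tell a rejected peer apart from a real listener failure.

package forward

import (
	"errors"
	"fmt"
	"net"
)

// errNonLoopback is a hypothetical sentinel; matching it with errors.Is
// stays correct even if the message text changes, unlike matching on
// the string of the error message.
var errNonLoopback = errors.New("connection from non-loopback address rejected")

func checkOrigin(conn net.Conn) error {
	host, _, err := net.SplitHostPort(conn.RemoteAddr().String())
	if err != nil {
		return err
	}
	if ip := net.ParseIP(host); ip == nil || !ip.IsLoopback() {
		return fmt.Errorf("%w: %s", errNonLoopback, host)
	}
	return nil
}

func acceptLoop(ln net.Listener, forward func(net.Conn)) error {
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err // a real listener failure ends the loop
		}
		if err := checkOrigin(conn); err != nil {
			conn.Close()
			if errors.Is(err, errNonLoopback) {
				continue // drop the peer, keep the forwarder alive
			}
			return err
		}
		go forward(conn)
	}
}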
Force-pushed from e539fd3 to 3cda370
I don't actually know if this PR fixes the hanging problem; I only know it fixes the crash @AkihiroSuda has shown in the same issue. And it makes the forwarding listeners a bit more robust in general against accidental removal. I don't plan to make any more changes tonight; @rfay maybe you can test this PR (or wait until it has been merged into …)
CI is failing.
Yes, because we cannot bind to …
Force-pushed from 3cda370 to bb995b9
It is still failing in CI, but works for me locally. I'm out of time now and will look into this tomorrow. Unless somebody beats me to it...
Converting back to draft since we'll switch the default back to SSH in #2864. And I just tested …
Force-pushed from 8a8eefb to a744670
I've restricted the test that binds to port 80 to macOS only, which is also the only platform where the lower port makes a difference, due to the pseudoloopback forwarders. However, since we have reverted to the SSH forwarder, this test will not currently exercise the gRPC implementation. Should we run at least one of the VZ tests (default or fedora) with …
Force-pushed from 8a08a76 to af3b7d6
Signed-off-by: Jan Dubois <[email protected]>
Force-pushed from af3b7d6 to b8b400c
Does this PR fix the hang too? Or does it just fix the panic?
Thanks
I have not been able to reproduce the hangs. Do you have any repro steps?
It fixes the panic, adds support for IPv6, and makes sure the forwarder isn't removed when a connection is attempted from a rejected (non-loopback) address.
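On the IPv6 part: Go's net.IP.IsLoopback covers both address families, so a single check accepts 127.0.0.0/8 as well as ::1. A tiny standalone demo (not Lima's code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// IsLoopback reports true for the whole 127.0.0.0/8 range and
	// for the IPv6 loopback address ::1.
	for _, s := range []string{"127.0.0.1", "127.1.2.3", "::1", "192.168.1.10"} {
		fmt.Printf("%-14s loopback=%v\n", s, net.ParseIP(s).IsLoopback())
	}
}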
Just running docker.yaml on the CI seems to be enough.
It seems almost all the PRs merged for v1.0.1 have been running … Anyway, I can't remember seeing the docker test hang in this PR, nor in any of the others I did after v1.0.0.
I think I have restarted several failing jobs, so they are marked green.
Ok, let's see if this is going to stop now.
Fixes #2859