gpg: keyserver receive failed: Address not available #380

Closed

marciomsm opened this issue Apr 12, 2017 · 4 comments

Comments

@marciomsm

I am trying to build a Docker image for Node 6.10 with Alpine, but I get this error:

gpg: keyserver receive failed: Address not available

Does anyone know how to solve it?

@chorrell
Contributor

That looks like a possible network issue in your Docker setup. We also recently updated the Dockerfiles to try more than one keyserver:

https://github.com/nodejs/docker-node/blob/master/6.10/Dockerfile#L18

We would get periodic failures with ha.pool.sks-keyservers.net.
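
The fallback referenced above chains several keyservers per key, roughly like the sketch below (assuming the release key fingerprints are in $KEYS; see the linked Dockerfile for its exact form):

for key in $KEYS; do
  gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key" \
    || gpg --keyserver pgp.mit.edu --recv-keys "$key" \
    || gpg --keyserver keyserver.pgp.com --recv-keys "$key"
done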

@F30

F30 commented Jun 6, 2017

(Came here through Google after having the same error in a different container.)

In my case, the error with [ha.]pool.sks-keyservers.net was related to IPv4 vs. IPv6: GPG's dirmngr may select a v6 server from the pool and try to connect to it, even though there is no v6 connectivity inside the container.
I'm not sure whether this is a bug in Docker or in dirmngr, but there is at least this bug related to dirmngr and IPv6.

If one wants to stay with sks-keyservers, ipv4.pool.sks-keyservers.net can be used.
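
For example (the ipv4 pool alias is the one mentioned above; the dirmngr option is an assumption about GnuPG 2.1+, so check your version's dirmngr documentation):

# Pin the IPv4-only pool alias:
gpg --keyserver ipv4.pool.sks-keyservers.net --recv-keys "$GPG_KEYS"

# Or, assuming GnuPG 2.1+, tell dirmngr not to use IPv6 at all:
echo "disable-ipv6" >> "$HOME/.gnupg/dirmngr.conf"
gpgconf --kill dirmngr   # restart dirmngr so it picks up the option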

@nicolas-albert

nicolas-albert commented Dec 1, 2017

Thanks @chorrell, I fixed my Dockerfiles with your workaround 👍

( gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEYS" \
  || gpg --keyserver pgp.mit.edu --recv-keys "$GPG_KEYS" \
  || gpg --keyserver keyserver.pgp.com --recv-keys "$GPG_KEYS" )

redshiftzero added a commit to freedomofpress/securedrop that referenced this issue May 31, 2018
In Tails, the default keyserver is sometimes flaky [0], so
we implemented a retry in securedrop-admin update [1].

In the integration tests, using the same default pool as in
Tails, there appears to still be some flakiness. Digging a bit,
my hypothesis is that this is occurring when we are assigned an
IPv6 keyserver from the pool [2]. For test purposes, I'm setting
a reliable default keyserver in gpg.conf.

[0] https://labs.riseup.net/code/issues/12689
[1] #3257
[2] nodejs/docker-node#380 (comment)
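
(For reference, pinning a keyserver in gpg.conf is a one-line change; the server below is only an example, not necessarily the one SecureDrop chose:)

# ~/.gnupg/gpg.conf -- example only; use whichever keyserver you consider reliable
keyserver hkps://keyserver.ubuntu.com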

redshiftzero added a commit to freedomofpress/securedrop that referenced this issue Jun 4, 2018
redshiftzero added a commit to freedomofpress/securedrop that referenced this issue Jun 14, 2018
redshiftzero added a commit to freedomofpress/securedrop that referenced this issue Jun 14, 2018
redshiftzero added a commit to freedomofpress/securedrop that referenced this issue Jun 15, 2018
israelshirk added a commit to nlp-secure/percona-docker that referenced this issue Jun 19, 2018
redshiftzero added a commit to freedomofpress/securedrop that referenced this issue Jun 21, 2018
(cherry picked from commit caebc2d)
avrabe added a commit to avrabe/ros2_raspbian_tools that referenced this issue Mar 17, 2019
For building Dockerfiles, it seems others use this workaround too: nodejs/docker-node#380