Switch to version 1.0.0 : node-fetch behind proxy #2161


Open
Leer0r opened this issue Jan 14, 2025 · 3 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@Leer0r

Leer0r commented Jan 14, 2025

Describe the bug
I'm currently using version 1.0.0, and I noticed in the changelog that the project switched from request to node-fetch for API calls. With request I could use the HTTP_PROXY and HTTPS_PROXY variables to access my cluster, but the new package doesn't support them. In my research I found a hint from @lselden, who uses the global-agent package to apply the proxy settings. With it my client can now reach the Kubernetes API, but I get an Unauthorized error, so I suppose some kubeconfig authorization headers are not included in the forwarded request.
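For context, request derived the proxy from environment variables roughly as in the sketch below; node-fetch performs no such lookup, so an agent has to supply the proxy instead. This helper is purely illustrative (`proxyForUrl` is not part of either library):

```javascript
// Illustrative only: roughly the proxy-selection logic `request` applied
// from environment variables, which `node-fetch` does not perform.
function proxyForUrl(targetUrl, env = process.env) {
  const url = new URL(targetUrl);
  const noProxy = (env.NO_PROXY || env.no_proxy || '')
    .split(',')
    .map((entry) => entry.trim())
    .filter(Boolean);
  // Hosts matched by NO_PROXY bypass the proxy (suffix match).
  const bypass = noProxy.some(
    (pattern) =>
      pattern === '*' ||
      url.hostname === pattern ||
      url.hostname.endsWith('.' + pattern.replace(/^\./, ''))
  );
  if (bypass) return null;
  if (url.protocol === 'https:') return env.HTTPS_PROXY || env.https_proxy || null;
  return env.HTTP_PROXY || env.http_proxy || null;
}
```

Because node-fetch skips this step entirely, the variables are simply ignored unless something like global-agent injects a proxying agent globally.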

Client Version
1.0.0

Server Version
1.31.3

To Reproduce

  1. Install the @kubernetes/client-node stable version in a Node.js project behind a proxy
  2. Install global-agent package and bootstrap it
  3. Create a kubernetes API with makeApiClient
  4. Try to use any fetch method from the client API

Expected behavior
The API returns the expected information by reaching the cluster

Example Code
Simple code

import { bootstrap } from 'global-agent'
import k8s from '@kubernetes/client-node'

// PROXY settings
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0'
process.env.GLOBAL_AGENT_HTTP_PROXY = process.env.HTTP_PROXY || 'http://my.proxy.address:my.proxy.port'
bootstrap({
  forceGlobalAgent: true
})

const config = new k8s.KubeConfig()
config.loadFromDefault()
const kube_api = config.makeApiClient(k8s.CoreV1Api)

kube_api.listNode()

Environment (please complete the following information):

  • OS: Ubuntu (WSL) and Ubuntu (bare metal)
  • Node.js version v23.3.0
  • Cloud runtime On-prem K3S install

Additional context
When I use the same kubeconfig file with kubectl, I can access the cluster.
Here I'm on my local WSL, accessing the remote cluster

@mstruebing
Member

Does it work if you set the proxy configuration in your KUBECONFIG?
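For reference, the `proxy-url` field is set per cluster in the kubeconfig; kubectl honors it when talking to the API server. All values below are placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                              # placeholder name
  cluster:
    server: https://my-cluster.example:6443     # placeholder API server
    proxy-url: http://my.proxy.address:3128     # kubectl routes API traffic through this proxy
```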

Reference:

@Leer0r
Author

Leer0r commented Jan 16, 2025

Adding the proxy-url field in the kubeconfig file does not seem to be handled by the client; I have the same issue as before. Regarding the small hack from @geisbruch, it does not work due to a missing property:
Here is the plain code if you need it

type ApiConstructor<T extends ApiType> = new (config: Configuration) => T;

function connectToKubeClusterWithProxy(proxyURL: string) {
    const kc = new k8s.KubeConfig();
    const originalMakeApiClient = kc.makeApiClient;
    kc.makeApiClient = function <T extends ApiType>(api: ApiConstructor<T>): T {
        const client = originalMakeApiClient.call(kc, api);
        // This is where it fails: `interceptors` is the missing property
        client.interceptors.push(async (config: Configuration) => {
            config.proxy = proxyURL;
        });
        return client as T;
    };
}

Also, the ApiConstructor type is not exported by the package, so I need to recreate it in my code.
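The monkey-patch pattern the hack relies on can be shown in isolation with stand-in classes (every name here is hypothetical, not the real client-node API):

```javascript
// Stand-ins that mimic the shape the hack assumes: a factory whose clients
// expose an `interceptors` array the caller can extend.
class FakeClient {
  constructor(configuration) {
    this.configuration = configuration;
    this.interceptors = []; // the property the real generated clients lack
  }
  async request() {
    const config = { ...this.configuration };
    for (const interceptor of this.interceptors) {
      await interceptor(config); // each interceptor may mutate the request config
    }
    return config;
  }
}

class FakeKubeConfig {
  makeApiClient(Api) {
    return new Api({ server: 'https://cluster:6443' });
  }
}

// Same wrapping idea as the TypeScript hack: patch makeApiClient so every
// client it produces carries a proxy-setting interceptor.
function withProxy(kc, proxyURL) {
  const originalMakeApiClient = kc.makeApiClient;
  kc.makeApiClient = function (Api) {
    const client = originalMakeApiClient.call(kc, Api);
    client.interceptors.push(async (config) => {
      config.proxy = proxyURL;
    });
    return client;
  };
  return kc;
}
```

With the stand-ins the pattern works end to end; against the real client it fails exactly where the comment reports, because the generated clients expose no `interceptors` property to push onto.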

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2025