
Conversation


@Gnoale Gnoale commented Sep 12, 2025

Hello!
I tried a few methods locally (feature_extraction and chat_completion) and they work perfectly
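For reference, such a local test might look like the following minimal sketch; the provider name and methods come from this thread, while the prompt is illustrative and a valid token is assumed (e.g. via the HF_TOKEN environment variable or the api_key argument of InferenceClient):

from huggingface_hub import InferenceClient

# Illustrative smoke test for the new provider; assumes a valid token
# (HF_TOKEN env var or api_key=...). Prompt is made up for this sketch.
client = InferenceClient(provider="scaleway")

# chat_completion is the task-level helper behind client.chat.completions.create
response = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    model="meta-llama/Llama-3.1-8B-Instruct",
)
print(response.choices[0].message.content)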

@Gnoale Gnoale force-pushed the feat/scaleway-provider branch from 86a6016 to 6ce29ed on September 12, 2025 13:44
@HuggingFaceDocBuilderDev commented

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Contributor

@Wauplin Wauplin left a comment


Looks great! I left minor comments regarding definition order but that's all.

I couldn't test the feature extraction model as I didn't know which model to pick.

I tested the conversational snippet by running this script locally and it worked as expected:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")
completion = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print(completion.choices[0].message.content)
Model meta-llama/Llama-3.1-8B-Instruct is in staging mode for provider scaleway. Meant for test purposes only.
I'm not aware of any tool call or library provided by the user. I can tell you directly that the capital of France is Paris.
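As an aside, the same snippet should also work in streaming mode; a hedged sketch, untested in this thread, using the standard stream=True option of InferenceClient rather than anything Scaleway-specific:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

# Untested variant of the snippet above: stream the answer token by token.
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
print()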

Author

@Gnoale Gnoale commented Sep 12, 2025

> Looks great! I left minor comments regarding definition order but that's all.
>
> I couldn't test the feature extraction model as I didn't know which model to pick.

Cool! OK, perfect, I updated the order.

For feature extraction you can use BAAI/bge-multilingual-gemma2.

Contributor

@Wauplin Wauplin commented Sep 12, 2025

> Cool! OK, perfect, I updated the order.
>
> For feature extraction you can use BAAI/bge-multilingual-gemma2.

Thanks! Can confirm it works for me:

from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

result = client.feature_extraction(
    "Today is a sunny day and I will get some ice cream.",
    model="BAAI/bge-multilingual-gemma2",
)

print(repr(result))
Model BAAI/bge-multilingual-gemma2 is in staging mode for provider scaleway. Meant for test purposes only.
array([[ 0.00822449,  0.00647736,  0.05108643, ..., -0.00762177,
        -0.0141983 ,  0.00327492]], shape=(1, 3584), dtype=float32)
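As a quick sanity check on those embeddings, one could compare two sentences with cosine similarity; a hedged sketch, not part of the PR, with made-up sentences:

import numpy as np
from huggingface_hub import InferenceClient

client = InferenceClient(provider="scaleway")

# Illustrative only: feature_extraction returns a (1, 3584) float32 array
# here, so [0] picks out the embedding vector itself.
a = client.feature_extraction("Today is a sunny day.", model="BAAI/bge-multilingual-gemma2")[0]
b = client.feature_extraction("I will get some ice cream.", model="BAAI/bge-multilingual-gemma2")[0]

similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {similarity:.4f}")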

Contributor

@Wauplin Wauplin left a comment


Thanks @Gnoale! Looks good on my side 🤗

Will keep it open in case @hanouticelina or @SBrandeis want to have a quick look, but otherwise we should be good to merge :)

Contributor

@hanouticelina hanouticelina left a comment


Thanks @Gnoale for the contribution! 🤗
