Hi. Please add support for this model 🙏. It supports many CIS languages.
https://huggingface.co/ai-sage/GigaChat3-10B-A1.8B-bf16/blob/main/config.json
Converting it with optimum-cli currently fails with the error below.
(openarc) c:\llm\openarc\201>optimum-cli export openvino --model "T:\models\ai-sage\GigaChat3-10B-A1.8B-bf16" --task text-generation --weight-format int4 "C:\llm\models\ov\GigaChat3-10B-A1.8B-ov-int4"
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:23<00:00, 2.14s/it]
Some weights of the model checkpoint at T:\models\ai-sage\GigaChat3-10B-A1.8B-bf16 were not used when initializing DeepseekV3ForCausalLM: ['model.layers.26.eh_proj.weight', 'model.layers.26.embed_tokens.weight', 'model.layers.26.enorm.weight', 'model.layers.26.hnorm.weight', 'model.layers.26.input_layernorm.weight', ..., 'model.layers.26.shared_head.head.weight', 'model.layers.26.shared_head.norm.weight'] (list truncated for readability; every skipped tensor lives under model.layers.26.*: the gate/up/down projections of all 64 experts, the shared experts, the MoE gate, the attention and norm weights, eh_proj/enorm/hnorm, embed_tokens, and shared_head)
- This IS expected if you are initializing DeepseekV3ForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DeepseekV3ForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
`loss_type=None` was set in the config but it is unrecognised. Using the default loss: `ForCausalLMLoss`.
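A note on the warning above: every skipped tensor sits under model.layers.26.*, and the names (eh_proj, enorm, hnorm, shared_head) match the DeepSeek-V3 multi-token-prediction (MTP) module, which DeepseekV3ForCausalLM does not instantiate, so that part is probably harmless. A minimal sketch to confirm, assuming GigaChat3 reuses DeepSeek-V3 config fields (the field name num_nextn_predict_layers is an assumption, not taken from GigaChat3's config):

```python
# Hedged sanity check: are the skipped model.layers.26.* weights just the
# DeepSeek-V3-style MTP head? Assumes DeepSeek-V3 config conventions;
# `num_nextn_predict_layers` is an assumed field name.
import json

with open(r"T:\models\ai-sage\GigaChat3-10B-A1.8B-bf16\config.json", encoding="utf-8") as f:
    cfg = json.load(f)

# If num_hidden_layers == 26, layers 0..25 are the real decoder stack and
# layer 26 would be the extra next-token-prediction layer the warning lists.
print("num_hidden_layers:", cfg.get("num_hidden_layers"))
print("num_nextn_predict_layers:", cfg.get("num_nextn_predict_layers"))
```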
C:\llm\openarc\201\.venv\Lib\site-packages\transformers\masking_utils.py:187: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if (padding_length := kv_length + kv_offset - attention_mask.shape[-1]) > 0:
C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\model_patcher.py:207: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
torch.tensor(0.0, device=mask.device, dtype=dtype),
C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\model_patcher.py:208: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
torch.tensor(torch.finfo(torch.float16).min, device=mask.device, dtype=dtype),
Traceback (most recent call last):
File "C:\llm\openarc\201\.venv\Lib\site-packages\openvino\frontend\pytorch\ts_decoder.py", line 72, in __init__
pt_module = self._get_scripted_model(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\openvino\frontend\pytorch\ts_decoder.py", line 178, in _get_scripted_model
scripted = torch.jit.trace(
^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\jit\_trace.py", line 1016, in trace
traced_func = _trace_impl(
^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\jit\_trace.py", line 701, in _trace_impl
return trace_module(
^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\jit\_trace.py", line 1210, in trace_module
module._c._create_method_from_trace(
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1766, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\convert.py", line 398, in ts_patched_forward
outputs = patched_forward(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\onnx\model_patcher.py", line 596, in patched_forward
outputs = self.orig_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\transformers\utils\generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\transformers\models\deepseek_v3\modeling_deepseek_v3.py", line 741, in forward
outputs: BaseModelOutputWithPast = self.model(
^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1766, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\transformers\utils\generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\transformers\models\deepseek_v3\modeling_deepseek_v3.py", line 629, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\transformers\modeling_layers.py", line 83, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1766, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\transformers\models\deepseek_v3\modeling_deepseek_v3.py", line 474, in forward
hidden_states, self_attn_weights = self.self_attn(
^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1766, in _slow_forward
result = self.forward(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: deepseek_v3_attn_forward() got an unexpected keyword argument 'cache_position'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "c:\llm\openarc\201\.venv\Scripts\optimum-cli.exe\__main__.py", line 10, in <module>
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\commands\optimum_cli.py", line 219, in main
service.run()
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\commands\export\openvino.py", line 469, in run
main_export(
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\__main__.py", line 524, in main_export
submodel_paths = export_from_model(
^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\convert.py", line 740, in export_from_model
export_models(
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\convert.py", line 509, in export_models
export(
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\convert.py", line 211, in export
return export_pytorch(
^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\optimum\exporters\openvino\convert.py", line 416, in export_pytorch
ts_decoder = TorchScriptPythonDecoder(model, example_input=dummy_inputs, **ts_decoder_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llm\openarc\201\.venv\Lib\site-packages\openvino\frontend\pytorch\ts_decoder.py", line 84, in __init__
raise RuntimeError(
RuntimeError: Couldn't get TorchScript module by tracing.
Exception:
deepseek_v3_attn_forward() got an unexpected keyword argument 'cache_position'
Please check correctness of provided 'example_input'. Sometimes models can be converted in scripted mode, please try running conversion without 'example_input'.
You can also provide TorchScript module that you obtained yourself, please refer to PyTorch documentation: https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html.
(openarc) c:\llm\openarc\201>
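The TypeError looks like a signature mismatch: transformers 4.53 passes a cache_position kwarg into the attention forward, while the deepseek_v3_attn_forward patch that optimum-intel installs during export does not accept it. Below is a rough workaround sketch, assuming the patch is a module-level function in optimum.exporters.openvino.model_patcher (the module in the traceback) and that discarding cache_position is safe for tracing; both are unverified assumptions, and the int4 weight compression from the CLI call is omitted for brevity:

```python
# Hedged workaround sketch: wrap the deepseek_v3_attn_forward patch so it
# tolerates the `cache_position` kwarg that transformers 4.53 passes.
# Assumptions: the function is a module-level attribute of
# optimum.exporters.openvino.model_patcher (if another module imported it
# directly, this reassignment will not take effect), and dropping the kwarg
# does not change the traced graph.
import functools

import optimum.exporters.openvino.model_patcher as mp
from optimum.exporters.openvino import main_export

_orig_attn_forward = mp.deepseek_v3_attn_forward

@functools.wraps(_orig_attn_forward)
def _tolerant_attn_forward(*args, **kwargs):
    kwargs.pop("cache_position", None)  # discard the kwarg the patch rejects
    return _orig_attn_forward(*args, **kwargs)

mp.deepseek_v3_attn_forward = _tolerant_attn_forward

# Re-run the export from Python so the wrapper is active during tracing.
main_export(
    r"T:\models\ai-sage\GigaChat3-10B-A1.8B-bf16",
    output=r"C:\llm\models\ov\GigaChat3-10B-A1.8B-ov-int4",
    task="text-generation",
)
```

Even if this traces successfully, the proper fix would be updating the patcher in optimum-intel to accept (or ignore) cache_position.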
(openarc) c:\llm\openarc\201>uv pip list
Package Version Editable project location
-------------------------- ---------------------- -------------------------
about-time 4.2.1
addict 2.4.0
aiohappyeyeballs 2.6.1
aiohttp 3.12.14
aiosignal 1.4.0
alive-progress 3.3.0
annotated-types 0.7.0
anyio 4.9.0
asttokens 3.0.0
attrs 25.4.0
audioread 3.0.1
autograd 1.8.0
babel 2.17.0
blis 1.3.0
brotli 1.1.0
catalogue 2.0.10
certifi 2026.1.4
cffi 2.0.0
charset-normalizer 3.4.4
click 8.2.1
cloudpathlib 0.22.0
cma 4.4.2
colorama 0.4.6
comm 0.2.3
confection 0.1.5
contourpy 1.3.3
cryptography 46.0.3
csvw 3.6.0
curated-tokenizers 0.0.9
curated-transformers 0.1.1
cycler 0.12.1
cymem 2.0.11
datasets 4.0.0
ddgs 9.6.1
debugpy 1.8.17
decorator 5.2.1
deprecated 1.3.1
dill 0.3.8
distro 1.9.0
dlinfo 2.0.0
docopt 0.6.2
espeakng-loader 0.2.4
executing 2.2.1
fastapi 0.116.1
filelock 3.20.3
fonttools 4.61.1
frozenlist 1.7.0
fsspec 2026.2.0
grapheme 0.6.0
graphemeu 0.7.2
griffe 1.14.0
h11 0.16.0
h2 4.3.0
hpack 4.1.0
httpcore 1.0.9
httpx 0.28.1
httpx-sse 0.4.3
huggingface-hub 0.36.2
hyperframe 6.1.0
idna 3.11
iniconfig 2.3.0
inquirerpy 0.3.4
ipykernel 7.0.1
ipython 9.6.0
ipython-pygments-lexers 1.1.1
ipywidgets 8.1.7
isodate 0.7.2
jedi 0.19.2
jinja2 3.1.6
jiter 0.11.0
joblib 1.5.3
jsonschema 4.26.0
jsonschema-specifications 2025.9.1
jupyter-client 8.6.3
jupyter-core 5.9.1
jupyterlab-widgets 3.0.15
kiwisolver 1.4.9
kokoro 0.9.4
langcodes 3.5.0
language-data 1.3.0
language-tags 1.2.0
lazy-loader 0.4
librosa 0.11.0
llvmlite 0.45.0
loguru 0.7.3
lxml 6.0.2
marisa-trie 1.3.1
markdown-it-py 4.0.0
markupsafe 2.1.5
matplotlib 3.10.8
matplotlib-inline 0.1.7
mcp 1.20.0
mdurl 0.1.2
misaki 0.9.4
ml-dtypes 0.5.4
moocore 0.2.0
mpmath 1.3.0
msgpack 1.1.1
multidict 6.6.3
multiprocess 0.70.16
murmurhash 1.0.13
natsort 8.4.0
nest-asyncio 1.6.0
networkx 3.6.1
ninja 1.13.0
nncf 2.19.0
num2words 0.5.14
numba 0.62.0
numpy 2.4.2
onnx 1.20.1
openai 2.2.0
openai-agents 0.4.2
openarc 2.0 C:\llm\openarc\201
openvino 2026.1.0.dev20260206
openvino-genai 2026.1.0.0.dev20260206
openvino-telemetry 2025.2.0
openvino-tokenizers 2026.1.0.0.dev20260206
optimum 2.1.0
optimum-intel 1.27.0
optimum-onnx 0.1.0
packaging 26.0
pandas 2.3.3
parso 0.8.5
pfzy 0.3.4
phonemizer-fork 3.3.2
pillow 12.0.0
pip 25.2
platformdirs 4.5.1
pluggy 1.6.0
pooch 1.8.2
preshed 3.0.10
primp 0.15.0
prompt-toolkit 3.0.52
propcache 0.3.2
protobuf 6.33.5
psutil 7.2.2
pure-eval 0.2.3
pyarrow 20.0.0
pycparser 3.0
pydantic 2.11.7
pydantic-core 2.33.2
pydantic-settings 2.11.0
pydot 3.0.4
pygments 2.19.2
pyjwt 2.10.1
pymoo 0.6.1.6
pynput 1.8.1
pyparsing 3.3.2
pytest 8.4.2
python-dateutil 2.9.0.post0
python-dotenv 1.2.1
python-multipart 0.0.20
pytz 2025.2
pywin32 311
pyyaml 6.0.3
pyzmq 27.1.0
rdflib 7.2.1
referencing 0.37.0
regex 2026.1.15
requests 2.32.5
rfc3986 1.5.0
rich 14.3.2
rich-click 1.8.9
rpds-py 0.30.0
safetensors 0.7.0
scikit-learn 1.8.0
scipy 1.17.0
segments 2.3.0
setuptools 80.9.0
shellingham 1.5.4
six 1.17.0
smart-open 7.3.1
smolagents 1.22.0
sniffio 1.3.1
socksio 1.0.0
sounddevice 0.5.2
soundfile 0.13.1
soxr 1.0.0
spacy 3.8.7
spacy-curated-transformers 0.3.1
spacy-legacy 3.0.12
spacy-loggers 1.0.5
srsly 2.5.1
sse-starlette 3.0.3
stack-data 0.6.3
starlette 0.47.1
sympy 1.14.0
tabulate 0.9.0
termcolor 3.1.0
thinc 8.3.6
threadpoolctl 3.6.0
tokenizers 0.21.4
torch 2.10.0+cpu
torchaudio 2.10.0+cpu
torchvision 0.25.0+cpu
tornado 6.5.2
tqdm 4.67.3
traitlets 5.14.3
transformers 4.53.3
typer 0.19.2
types-requests 2.32.4.20250913
typing-extensions 4.15.0
typing-inspection 0.4.1
tzdata 2025.3
uritemplate 4.2.0
urllib3 2.6.3
uvicorn 0.35.0
wasabi 1.1.3
wcwidth 0.2.14
weasel 0.4.1
widgetsnbextension 4.0.14
win32-setctime 1.2.0
wrapt 2.1.1
xxhash 3.5.0
yarl 1.20.1
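For reference, the failure occurs with transformers 4.53.3, optimum 2.1.0, optimum-intel 1.27.0, torch 2.10.0+cpu and the 2026.1.0 OpenVINO nightly packages (full list above), so a transformers/patcher mismatch around the cache_position argument seems the likely culprit.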