Inference Providers
Active filters: quark
amd/gpt-oss-20b-MoE-Quant-W-MXFP4-A-FP8-KV-FP8
11B • Updated • 8.91k • 1
550B • Updated • 117k • 4
amd/Qwen3.5-397B-A17B-MXFP4
Image-Text-to-Text • 222B • Updated • 16.4k • 4
Text Generation • 102B • Updated • 413 • 1
nameistoken/Qwen3.6-35B-A3B-Quark-W8A8-INT8
Image-Text-to-Text • 35B • Updated • 1.63k • 1
Text Generation • 0.1B • Updated • 177 • 1
fxmarty/llama-tiny-testing-quark-indev
1.03M • Updated • 3
fxmarty/llama-tiny-int4-per-group-sym
1.03M • Updated • 10
fxmarty/llama-tiny-w-fp8-a-fp8
1.03M • Updated • 10
fxmarty/llama-tiny-w-fp8-a-fp8-o-fp8
1.03M • Updated • 3
fxmarty/llama-tiny-w-int8-per-tensor
1.03M • Updated • 7
fxmarty/llama-small-int4-per-group-sym-awq
16.7M • Updated • 1
fxmarty/quark-legacy-int8
1.03M • Updated • 2
fxmarty/llama-tiny-w-int8-b-int8-per-tensor
1.03M • Updated • 7
fxmarty/llama-small-int4-per-group-sym-awq-old
16.7M • Updated • 2
amd-quark/llama-tiny-w-int8-per-tensor
1.03M • Updated • 33
amd-quark/llama-tiny-w-int8-b-int8-per-tensor
1.03M • Updated • 33
amd-quark/llama-tiny-w-fp8-a-fp8
1.03M • Updated • 16
amd-quark/llama-tiny-w-fp8-a-fp8-o-fp8
1.03M • Updated • 15
amd-quark/llama-tiny-int4-per-group-sym
1.03M • Updated • 23
amd-quark/llama-small-int4-per-group-sym-awq
16.7M • Updated • 17
amd-quark/quark-legacy-int8
1.03M • Updated • 4
amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test
8B • Updated • 21.1k
amd/Llama-3.1-8B-Instruct-w-int8-a-int8-sym-test
8B • Updated • 12.4k
EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym
Text Generation • 8B • Updated • 8
amd/DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead
Text Generation • 2B • Updated • 42
amd-quark/llama-tiny-fp8-quark-quant-method
17.1M • Updated • 12.1k
aigdat/Qwen2.5-Coder-7B-quantized-ppl-14
1B • Updated • 1
aigdat/Qwen2-7B-Instruct_quantized_int4_bfloat16
1B • Updated • 5
aigdat/Qwen2.5-1.5B-Instruct-awq-uint4-bfloat16
0.4B • Updated • 1
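Each entry above follows a fixed row shape: repo id, optional pipeline tag, parameter count, an "Updated" marker, download count, and likes. A minimal sketch of turning such rows into structured records (the `ListingEntry` type and `parse_entry` helper are illustrative helpers, not part of any Hub API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ListingEntry:
    """One listing row: repo id, parameter tag, download count, likes."""
    repo_id: str
    params: str                 # e.g. "11B" or "1.03M"
    downloads: str              # e.g. "8.91k"
    likes: Optional[int] = None

def parse_entry(name_line: str, stats_line: str, likes_line: str = "") -> ListingEntry:
    """Parse '<name>' plus '<params> • Updated • <downloads>' (+ optional '• <likes>')."""
    parts = [p.strip() for p in stats_line.split("•") if p.strip()]
    likes_text = likes_line.replace("•", "").strip()
    return ListingEntry(
        repo_id=name_line.strip(),
        params=parts[0],
        downloads=parts[-1],
        likes=int(likes_text) if likes_text else None,
    )

entry = parse_entry(
    "amd-quark/llama-tiny-w-int8-per-tensor",
    "1.03M • Updated • 33",
)
print(entry.repo_id, entry.params, entry.downloads)
```

The stats line is split on the page's "•" separator, so the same helper handles rows with or without a pipeline-tag prefix or a likes count.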