## How to use from Unsloth Studio

### Install Unsloth Studio (macOS, Linux, WSL)

```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lainlives/WizardCoder-Python-7B to start chatting
```
### Install Unsloth Studio (Windows)

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for lainlives/WizardCoder-Python-7B to start chatting
```
### Using Hugging Face Spaces for Unsloth

No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for lainlives/WizardCoder-Python-7B to start chatting.
# lainlives/WizardCoder-Python-7B

This repo contains GGUF format model files for vanillaOVO/WizardCoder-Python-7B-V1.0.
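
For reference, here is a minimal sketch of how GGUF quants like these are typically produced with llama.cpp's conversion and quantization tools; the local paths are assumptions for illustration, not a record of how this repo was actually built:

```shell
# Convert the source HF model (downloaded locally) to GGUF, then quantize.
# Assumes a llama.cpp checkout with its Python requirements installed.
python convert_hf_to_gguf.py ./WizardCoder-Python-7B-V1.0 \
  --outtype f16 --outfile WizardCoder-Python-7B-V1.0-f16.gguf
./llama-quantize WizardCoder-Python-7B-V1.0-f16.gguf \
  WizardCoder-Python-7B-V1.0-Q4_K_M.gguf Q4_K_M
```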

## Available Quants

The following files were generated and uploaded to this repo: Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0, f16, bf16.
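
If you prefer to download a single quant up front instead of streaming it at run time, the huggingface_hub CLI works (assuming `pip install -U huggingface_hub`); the file name follows the pattern used by the quants listed above:

```shell
# Fetch only the Q4_K_M file into ./models, not the whole repo
huggingface-cli download lainlives/WizardCoder-Python-7B \
  WizardCoder-Python-7B-V1.0-Q4_K_M.gguf --local-dir ./models
```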

## Use with llama.cpp

CLI:

```shell
llama-cli --hf-repo lainlives/WizardCoder-Python-7B \
  --hf-file WizardCoder-Python-7B-V1.0-Q4_K_M.gguf \
  -p "The meaning to life and the universe is"
```
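
For an interactive chat session rather than a one-shot completion, recent llama.cpp builds also offer a conversation mode (a sketch; check `llama-cli --help` for the flags in your build):

```shell
# Chat interactively instead of completing a single prompt
llama-cli --hf-repo lainlives/WizardCoder-Python-7B \
  --hf-file WizardCoder-Python-7B-V1.0-Q4_K_M.gguf -cnv
```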

Server:

```shell
llama-server --hf-repo lainlives/WizardCoder-Python-7B \
  --hf-file WizardCoder-Python-7B-V1.0-Q4_K_M.gguf -c 2048
```
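
Once the server is running (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint; the prompt here is just an illustration:

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]}'
```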

## Or with Ollama

CLI:

```shell
# Note: Ollama expects the repo reference without the URL scheme
ollama run hf.co/lainlives/WizardCoder-Python-7B:Q4_K_M
```
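
Any of the quants listed above can be substituted for the tag after the colon; for example, to run the higher-fidelity Q8_0 file instead:

```shell
ollama run hf.co/lainlives/WizardCoder-Python-7B:Q8_0
```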