Instructions to use Khurram123/SigmaMath-Visual-Core with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use Khurram123/SigmaMath-Visual-Core with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Khurram123/SigmaMath-Visual-Core",
    filename="qwen_math_q4_k_m.gguf",
)

llm.create_chat_completion(
    # Example prompt (illustrative; the model card defines no input example):
    messages=[
        {"role": "user", "content": "Generate Plotly code for a 3D torus."}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Khurram123/SigmaMath-Visual-Core with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Use Docker
docker model run hf.co/Khurram123/SigmaMath-Visual-Core:Q4_K_M
- LM Studio
- Jan
- Ollama
How to use Khurram123/SigmaMath-Visual-Core with Ollama:
ollama run hf.co/Khurram123/SigmaMath-Visual-Core:Q4_K_M
- Unsloth Studio
How to use Khurram123/SigmaMath-Visual-Core with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Khurram123/SigmaMath-Visual-Core to start chatting.
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Khurram123/SigmaMath-Visual-Core to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for Khurram123/SigmaMath-Visual-Core to start chatting.
- Pi
How to use Khurram123/SigmaMath-Visual-Core with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Khurram123/SigmaMath-Visual-Core:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi

```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Khurram123/SigmaMath-Visual-Core with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Run Hermes
hermes
- Docker Model Runner
How to use Khurram123/SigmaMath-Visual-Core with Docker Model Runner:
docker model run hf.co/Khurram123/SigmaMath-Visual-Core:Q4_K_M
- Lemonade
How to use Khurram123/SigmaMath-Visual-Core with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Khurram123/SigmaMath-Visual-Core:Q4_K_M
```
Run and chat with the model
lemonade run user.SigmaMath-Visual-Core-Q4_K_M
List all available models
lemonade list
ΣMath — Visual Computation Engine v2.0
Powered by Qwen2.5-Coder-7B & NuminaMath-TIR
Developed by: Khurram Pervez, Assistant Professor of Mathematics
ΣMath Core is a high-performance mathematical visualization engine that bridges the gap between deep symbolic reasoning and real-time interactive rendering. By leveraging a fine-tuned Qwen2.5-Coder-7B backbone with the NuminaMath-TIR dataset, the model excels at Chain-of-Thought (CoT) reasoning, allowing it to solve complex geometric problems before translating them into interactive code.
The engine utilizes a specialized Resilient Execution Pipeline to render 3D manifolds, animations, and parametric surfaces directly in the browser, optimized specifically for local deployment on NVIDIA hardware.
🚀 The Multi-Stage Pipeline
1. TIR (Tool-Integrated Reasoning)
By training on the NuminaMath-TIR dataset, the model follows a rigorous logical path:
- Identification: Analyzes the geometric properties of the requested manifold.
- Calculation: Determines the necessary vertices, normals, and parametric equations.
- Code Synthesis: Generates high-efficiency Python code (Plotly/Matplotlib) using its native Coder capabilities.
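The three stages above can be sketched as a single prompt template. This is a hypothetical illustration; the actual prompt format used during fine-tuning is not published in this card, and `build_tir_prompt` is an invented helper name.

```python
# Hypothetical prompt builder mirroring the three TIR stages above.
# The exact wording and structure are assumptions for illustration.

def build_tir_prompt(task: str) -> str:
    return (
        f"Task: {task}\n"
        "Reason step by step:\n"
        "1. Identification: analyze the geometric properties of the manifold.\n"
        "2. Calculation: derive the vertices, normals, and parametric equations.\n"
        "3. Code Synthesis: emit Python (Plotly) code that renders the surface.\n"
    )

prompt = build_tir_prompt("Render a 3D torus with major radius 3 and minor radius 1")
```

The staged structure nudges the model to finish its geometric reasoning before any code is emitted, which is the core idea behind training on NuminaMath-TIR.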
2. The Resilient Engine (FastAPI Layer)
To ensure stability during research, the system includes a proprietary processing layer:
- Dummy Interception: Captures and silences `plt.show()` calls to prevent GUI thread blocking on Ubuntu/Linux servers.
- Colorscale Transpilation: Automatically maps Matplotlib colormap names (e.g., `spring`, `summer`) to Plotly-valid equivalents so 3D renders do not fail.
- Sandbox Execution: Executes generated code in a safe local scope on local NVIDIA hardware (an RTX 4060 Ti in the reference setup).
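A minimal sketch of these three safeguards is below. The colormap substitutions and all helper names (`transpile_colorscale`, `run_sandboxed`, `_DummyShow`) are illustrative assumptions, not the engine's actual implementation.

```python
# Sketch of the Resilient Execution Pipeline described above (assumptions).

# Colorscale transpilation: Matplotlib colormap names that Plotly does not
# accept are swapped for valid Plotly colorscales (substitutes are illustrative).
MPL_TO_PLOTLY = {
    "spring": "sunset",
    "summer": "viridis",
    "autumn": "oranges",
    "winter": "blues",
}

def transpile_colorscale(name: str) -> str:
    # Unknown names pass through unchanged.
    return MPL_TO_PLOTLY.get(name.lower(), name)

# Dummy interception: generated code sees a stub `plt` whose show() is a
# no-op, so no GUI thread ever blocks on a headless server.
class _DummyShow:
    def show(self, *args, **kwargs):
        pass  # silently swallow plt.show()

def run_sandboxed(code: str) -> dict:
    scope = {"plt": _DummyShow()}
    # Note: a bare exec() is not a real security sandbox; the engine's
    # "safe local scope" would need much stricter isolation in practice.
    exec(code, scope)
    return scope

result = run_sandboxed("x = 2 + 2\nplt.show()")
```

In a real deployment the stub would have to survive an `import matplotlib.pyplot as plt` inside the generated code (e.g., via `sys.modules` patching or switching to the `Agg` backend), which the table below hints at.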
📸 Interactive Visual Samples
Here are examples of advanced parametric surfaces generated in real time by ΣMath Core v2.0, showcasing the full Tool-Integrated Reasoning (TIR) pipeline.
| 3D Torus Visualization | Full Research Dashboard Interface | Resilient Color Scaling Error Fix |
|---|---|---|
| *(image)* | *(image)* | *(image)* |
💻 System Configuration
| Component | Specification |
|---|---|
| Compute Engine | NVIDIA GeForce RTX 4060 Ti (16GB VRAM) |
| Model Format | GGUF (Quantized Q4_K_M) |
| Context Window | n_ctx=4096 (Optimized for detailed manifold calculation) |
| OS | Ubuntu 22.04 LTS (Optimized for Agg Backend) |
| Frameworks | FastAPI, Llama-cpp-python, Plotly, mpld3 |
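Loading the quantized model with the settings from the table above might look like the sketch below. The `n_gpu_layers=-1` full-GPU offload is an assumption, not a documented setting, and the instantiation is commented out because it requires the GGUF file and llama-cpp-python to be installed.

```python
# Parameters matching the System Configuration table (sketch).
LLM_KWARGS = dict(
    model_path="qwen_math_q4_k_m.gguf",  # Q4_K_M GGUF from this repo
    n_ctx=4096,        # context window from the table above
    n_gpu_layers=-1,   # offload all layers to the GPU (assumption)
)

# from llama_cpp import Llama
# llm = Llama(**LLM_KWARGS)
```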
🛠️ Quick Start
1. Installation
```shell
# Clone this repository
git clone https://huggingface.co/Khurram123/SigmaMath-Visual-Core
cd SigmaMath-Visual-Core

# Install dependencies
pip install fastapi uvicorn llama-cpp-python numpy matplotlib mpld3 plotly
```