Uncensored-Coder
Local LLM//Zero cloud//No filters
An offline AI code generator powered by your own machine. No API keys, no rate limits, no telemetry, no censorship layer between you and the model. Built on Ollama + deepseek-coder, wrapped in a Python CLI that stays out of the way.
/overview
Cloud LLMs come with leashes. They log your prompts, refuse half your security research, and disappear behind a 503 the moment you actually need them. Uncensored-Coder runs entirely on your machine.
Under the hood it's an Ollama wrapper around deepseek-coder, with a Python CLI that handles context windows, prompt templates, and streaming output. There's a chat mode, a one-shot mode for piping into editors, and a project mode that loads files into context so the model actually understands what you're working on.
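Streaming works the way Ollama's local HTTP API does: the CLI POSTs to http://localhost:11434/api/generate with "stream": true and reads back newline-delimited JSON chunks, each carrying a "response" text fragment and a "done" flag. A minimal sketch of the parsing side (function name is illustrative, not the actual coder.py internals):

```python
import json

def iter_tokens(ndjson_lines):
    """Yield text fragments from Ollama's streaming NDJSON chunks.

    Each line is a JSON object like {"response": "...", "done": false};
    the final chunk has "done": true.
    """
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        if chunk.get("response"):
            yield chunk["response"]
        if chunk.get("done"):
            break

# What a few chunks off the wire look like:
sample = [
    '{"response": "def ", "done": false}',
    '{"response": "main():", "done": false}',
    '{"response": "", "done": true}',
]
print("".join(iter_tokens(sample)))  # def main():
```

Printing each fragment as it arrives (rather than joining at the end) is what makes the output pipe-friendly.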
It's not trying to replace Copilot. It's trying to be the tool you reach for when you're offline, on a plane, on an air-gapped lab box, or just sick of being told a buffer-overflow tutorial violates someone's policy.
/features
- 100% offline — model + inference run on your machine
- No API keys, no accounts, no telemetry
- Streaming output to stdout, pipe-friendly
- Chat mode with persistent session history
- Project mode — load a folder, model gets context
- Custom system prompts per profile
- Configurable model: deepseek-coder, llama, qwen, etc.
- Markdown + syntax highlighting in terminal
- Token counter + context-window guard
- Single-file Python script, easy to audit
/stack
- Python 3 CLI, single file (coder.py), easy to audit
- Ollama for local model serving and inference
- deepseek-coder 6.7b as the default model (swap in llama, qwen, etc.)
/install
# 1. install ollama (linux)
$ curl -fsSL https://ollama.com/install.sh | sh

# 2. pull a code model
$ ollama pull deepseek-coder:6.7b
# clone and run
$ git clone https://github.com/BitJacker/uncensored-coder.git
$ cd uncensored-coder
$ pip install -r requirements.txt
$ python3 coder.py --model deepseek-coder

# or pipe a one-shot prompt (prompt first, then the file — a trailing
# "< src.py" would override the echo pipe and drop the prompt)
$ { echo "refactor this for async"; cat src.py; } | python3 coder.py --stdin