v1.0 · 144 tests passing · running on a 4090

Darkroom

v1.0 · Local-first · PySide6 + PyTorch + diffusers

Built for:
People who want to develop AI imagery the way film gets developed: slowly, with intention, on their own hardware, with no per-frame cost and no upload prompt.
Not built for:
Anyone who needs cloud throughput or a hosted API surface. Darkroom is single-machine by design.

A real darkroom is a room you walk into and a process you control end to end. Darkroom — the app — is a desktop image-generation and editing surface that treats local inference as the default: FLUX.1-dev, RealVisXL Lightning, your own LoRAs, no per-frame cost, no telemetry, no ceiling on how much you develop in a sitting.

§ I

The problem

Hosted image-generation services charge per call, throttle on concurrency, train on your prompts unless you read the fine print right, and quietly raise prices the quarter after you build a workflow on top of them. The pricing model is rented compute; the GPU on your desk is owned compute. Anyone shipping imagery at any volume has already paid for both.

Darkroom is the opposite shape. Every model runs locally. Every generation is free at the marginal-cost level. Every prompt stays on the machine. The trade is hardware up-front instead of metered cost forever, and at any sustained volume it pays for itself; where the break-even lands is simple arithmetic on your volume and the rate you'd otherwise pay.
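
Back-of-envelope, with loudly assumed numbers (the card price, power draw, electricity rate, and metered per-frame rate below are placeholders, not quotes; swap in your own):

# Break-even: frames until owned compute beats metered compute.
# Every constant here is an illustrative assumption.
CARD_COST_USD = 1600.0             # assumed up-front GPU price
POWER_DRAW_KW = 0.45               # assumed draw under load
ELECTRICITY_USD_PER_KWH = 0.15     # assumed utility rate
SECONDS_PER_FRAME = 12.0           # matches the 12-second generation in Figure 1
CLOUD_RATE_USD_PER_FRAME = 0.05    # assumed hosted per-image rate

local_cost = POWER_DRAW_KW * (SECONDS_PER_FRAME / 3600) * ELECTRICITY_USD_PER_KWH
break_even = CARD_COST_USD / (CLOUD_RATE_USD_PER_FRAME - local_cost)
print(f"local marginal cost: ${local_cost:.5f}/frame")
print(f"break-even at ~{break_even:,.0f} frames")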

§ II

Decisions

Three calls that shaped what Darkroom is and isn’t. Each of them cost something I deliberately gave up.

  1. cut

    Stable Diffusion XL Turbo. The pipeline was 2.1× faster at sub-40-step generation, but the output regressed below the bar I want to print at. Speed for speed’s sake isn’t a feature; the floor on quality is.

  2. kept

    FLUX.1-dev as the headline model. Slower than Turbo, larger than SDXL, fussier on prompts — but the only one that holds up at 1024×1536 portrait without a refinement pass. The base model has to be the best one I can run; everything else builds on it.

  3. refused

    A cloud-API fallback for “when local generation isn’t fast enough.” The whole premise is that the local pipeline is the pipeline. A fallback would erase the cost and privacy guarantees the local-first claim makes; worse, it would teach the workflow to depend on it.

The cheapest GPU is the one already on your desk.

— Darkroom design note

§ III

System

A native desktop window, a Python inference core wrapped in diffusers, a thin job queue, and a model registry that picks the right pipeline for the task at hand. Every pipeline runs in-process; nothing is shelled out, nothing leaves the machine.
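
The shape of the in-process call, as a minimal diffusers sketch rather than Darkroom's actual dispatch code (the prompt, step count, and guidance value are illustrative assumptions):

import torch
from diffusers import FluxPipeline

# Load FLUX.1-dev once at startup; bfloat16 keeps it inside 24 GB.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# A locked seed makes the frame reproducible and addressable in history.
generator = torch.Generator("cuda").manual_seed(412)

image = pipe(
    "silver gelatin print of a fog bank at dawn",  # illustrative prompt
    height=1536, width=1024,       # the 1024×1536 portrait size from § II
    num_inference_steps=28,        # assumed defaults, not shipped values
    guidance_scale=3.5,
    generator=generator,
).images[0]
image.save("frame-0412.png")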

Stack — current pins.
Layer        Implementation             Purpose
Shell        PySide6 (Qt 6.7)           Native window · canvas · brushes · history
Inference    PyTorch 2.6 + diffusers    FLUX.1-dev · RealVisXL Lightning · LoRA
Schedulers   Euler / DPM-Solver++       Quality vs. speed, selectable per job
Editor       Custom canvas              Inpaint · outpaint · masking · layers
Registry     Watched folder             Drop a .safetensors, restart, it’s loaded
Tests        pytest (144 cases)         Pipeline contracts · UI smoke · regressions
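
The registry row is roughly this shape: a hedged sketch of a watched-folder scan plus diffusers' standard LoRA and scheduler hooks, not the shipped registry code (the folder path and LoRA name are invented):

from pathlib import Path

from diffusers import DPMSolverMultistepScheduler

def scan_models(root: Path) -> dict[str, Path]:
    """Map every .safetensors under the watched folder to a registry name."""
    return {p.stem: p for p in sorted(root.rglob("*.safetensors"))}

registry = scan_models(Path("~/darkroom/models").expanduser())  # hypothetical path

# LoRA fine-tunes load through diffusers' standard hook
# (`pipe` is the FluxPipeline from the sketch above):
# pipe.load_lora_weights(registry["film-grain"])  # name is illustrative

# Schedulers swap per job; DPM-Solver++ trades steps for speed:
# pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
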
[Image: Darkroom desktop image-generation editor — generation history sidebar, a 1024×1024 abstract result on the canvas, prompt panel with FLUX.1-dev controls, GPU utilization at the bottom.]
FIGURE 1. A 12-second generation on the 4090 — seed locked, history retained, no API key in the path. The cost per frame is electricity.
[Image: Darkroom history pane — a four-thumbnail grid of past abstract generations, with a horizontal filter bar above showing all 412, flagged 8, this week 22, and locked seeds 3.]
FIGURE 2. The history pane. Every generation kept, addressable by mono prompt preview and timestamp. Locked seeds are the variations worth coming back to; the rest is paint.
[Image: Darkroom prompt panel close-up — multi-line FLUX.1-dev prompt input with steps, guidance, seed-locked indicator, sampler controls, token-budget meter, and four side-by-side variation thumbnails.]
FIGURE 3. The prompt panel. Steps, guidance, seed, sampler — every dial that actually matters, and nothing else. The variation row is the same seed walked four small directions; the rest is whatever the GPU has time for.
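
One way to get that variation row, sketched under assumptions (Darkroom's actual walk may nudge different dials): hold the seed, step guidance in small increments.

# Same seed, four small directions: vary guidance, keep everything else locked.
# `pipe` is the FluxPipeline from the sketch in § III; values are illustrative.
import torch

variations = []
for g in (3.0, 3.5, 4.0, 4.5):
    gen = torch.Generator("cuda").manual_seed(412)   # re-seed every pass
    variations.append(
        pipe("silver gelatin print of a fog bank at dawn",
             height=1536, width=1024,
             num_inference_steps=28, guidance_scale=g,
             generator=gen).images[0]
    )
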
[Image: Darkroom GPU monitor — VRAM bar at 22.4 of 24 GB, GPU utilization at 94 percent, thermal at 71 °C, a 60-second VRAM sparkline with a flush annotation, and a queue of three pending generations.]
FIGURE 4. The 4090 reads itself. VRAM walked up to within 1.6 GB of the ceiling before the post-gen flush; the queue holds three jobs the dispatcher trusts the card to clear before the kettle boils.
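
The numbers in Figure 4 come straight from CUDA's own counters. A minimal polling sketch; the torch calls are standard, but treating the flush as a post-generation step is an assumption about Darkroom's behavior:

import torch

free_b, total_b = torch.cuda.mem_get_info()   # device-level (free, total) in bytes
used_gb = (total_b - free_b) / 2**30
print(f"VRAM: {used_gb:.1f} / {total_b / 2**30:.1f} GB")

# Post-generation flush: release cached allocator blocks back to the driver.
torch.cuda.empty_cache()
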
§ IV

Running it locally

The setup shape is below — Python, CUDA, a clone, a model pull, and a launch.

setup.ps1 (PowerShell)

# prerequisites: Python 3.13, NVIDIA GPU with 12 GB+ VRAM, CUDA 12+
git clone https://github.com/dbhavery/darkroom
cd darkroom
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
python -m darkroom.fetch_models                   # ~14 GB pull
python -m darkroom

Targets Windows 11 + NVIDIA (CUDA); a Linux build is a small patch away but not today’s focus. Disk: ~16 GB for the model set. Generation memory: ~10 GB at 1024-square; ~14 GB at 1536-portrait.
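
At the 12 GB floor, the usual lever is diffusers' offload hooks. A hedged aside, not a documented Darkroom setting:

# Trade generation speed for VRAM headroom on 12 GB cards.
# `pipe` is the FluxPipeline from the sketch in § III.
pipe.enable_model_cpu_offload()        # weights migrate to GPU per-submodule
# or, slower still but with the smallest footprint:
# pipe.enable_sequential_cpu_offload()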

Acknowledgments

Darkroom stands on PyTorch and the Hugging Face diffusers stack, the FLUX team at Black Forest Labs, the RealVisXL community, Qt for Python, and every author whose LoRA fine-tunes I’ve run through the pipeline. The local-inference space owes most of its working code to people who released theirs.
