
Releases: TensorStack-AI/Diffuse

v0.3.5 - Environment Manager

06 Jan 04:04
e478140


Environment Manager

Adds support for multiple Python virtual environments, including vendor-, device-, and pipeline-scoped environments (a brief sketch of the idea follows the feature list below).


Features

  • Isolated Python process
  • Custom Python virtual environments
  • Environment variable support
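
As an illustration of what per-vendor or per-device environments imply, the sketch below creates and populates one virtual environment per device key using the Python standard library. The directory layout, environment names, and package lists are assumptions for illustration only, not Diffuse's actual internals.

    # Sketch: one virtual environment per device key (illustrative only).
    # Env names, paths, and package sets are assumptions, not Diffuse's layout.
    import subprocess
    import sys
    from pathlib import Path

    ENVS = {
        "cuda": ["torch", "diffusers", "transformers", "accelerate"],
        "rocm": ["torch", "diffusers", "transformers", "accelerate"],
    }

    def create_env(name: str, packages: list[str], root: Path = Path("envs")) -> Path:
        env_dir = root / name
        # Create an isolated venv with its own pip.
        subprocess.run([sys.executable, "-m", "venv", str(env_dir)], check=True)
        pip = env_dir / "Scripts" / "pip.exe"  # Windows venv layout
        subprocess.run([str(pip), "install", *packages], check=True)
        return env_dir

    if __name__ == "__main__":
        for name, packages in ENVS.items():
            print("created", create_env(name, packages))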

Installation

  1. Download and extract Diffuse_v0.3.5.zip
    A fast SSD with plenty of free space is recommended, as model downloads can be large.

  2. Run Diffuse.exe

  3. Load a model
    Diffuse will automatically:

    • Install an isolated portable Python runtime
    • Create the required virtual environment
    • Download the selected model from Hugging Face (see the sketch after these steps)
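
For reference, the model download step is roughly equivalent to fetching a repository snapshot with the public huggingface_hub API. The repository id and target directory below are placeholders; Diffuse manages its own download location.

    # Sketch: downloading a model repository from Hugging Face.
    # repo_id and local_dir are placeholders; Diffuse manages its own cache.
    from huggingface_hub import snapshot_download

    local_path = snapshot_download(
        repo_id="stabilityai/stable-diffusion-xl-base-1.0",
        local_dir="models/stable-diffusion-xl-base-1.0",
    )
    print("model files in:", local_path)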

First-run notice

On first launch or when loading a model for the first time, setup may take several minutes while Python, dependencies, and model files are downloaded and initialized. This is expected behavior.

No manual Python setup is required.

Device Support

Supports CUDA- and ROCm-based devices.
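
Both backends are exposed through PyTorch's CUDA device API (ROCm builds of PyTorch report HIP through the same interface). The check below is a generic sketch of device detection, not Diffuse's own logic.

    # Sketch: generic CUDA/ROCm detection via PyTorch (not Diffuse's own logic).
    import torch

    if torch.cuda.is_available():
        backend = "ROCm" if torch.version.hip else "CUDA"
        print(f"{backend} device: {torch.cuda.get_device_name(0)}")
    else:
        print("No CUDA/ROCm device available; falling back to CPU.")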


v0.3.2 - Diffuse - Proof Of Concept

03 Jan 21:06
6bafb49


Diffuse - Proof Of Concept

Diffuse is a Windows desktop UI for Hugging Face Diffusers. It integrates directly with Python using the Python C API via CSnakes, enabling high-performance interop between .NET and Python for running diffusion models.


Features

  • Automatic installation of isolated portable Python
  • Device-specific Python virtual environments
  • Automatic model downloads from Hugging Face repositories

Supported Pipelines

  • Z-Image: ZImagePipeline, ZImageImg2ImgPipeline
  • Qwen Image: QwenImagePipeline, QwenImageImg2ImgPipeline, QwenImageEditPlusPipeline
  • FLUX.1: FluxPipeline, FluxImg2ImgPipeline, FluxKontextPipeline, FluxControlNetPipeline
  • FLUX.2: Flux2Pipeline
  • Chroma: ChromaPipeline, ChromaImg2ImgPipeline
  • LTX-Video: LTXPipeline, LTXImageToVideoPipeline
  • Wan Video: WanPipeline, WanImageToVideoPipeline
  • CogVideoX: CogVideoXPipeline, CogVideoXImageToVideoPipeline, CogVideoXVideoToVideoPipeline
  • Kandinsky5: Kandinsky5T2IPipeline, Kandinsky5I2IPipeline, Kandinsky5T2VPipeline, Kandinsky5I2VPipeline
  • StableDiffusionXL: StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLControlNetPipeline, StableDiffusionXLControlNetImg2ImgPipeline
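
These are the standard Diffusers pipeline classes. As a minimal example of the kind of Python call Diffuse drives, the sketch below uses StableDiffusionXLPipeline from the public diffusers API; the model id and prompt are placeholders.

    # Sketch: the kind of Diffusers call Diffuse drives under the hood.
    # Model id and prompt are placeholders.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.to("cuda")

    image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
    image.save("output.png")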

Installation

  1. Download and extract Diffuse_v0.3.2_CUDA.zip
    A fast SSD with plenty of free space is recommended, as model downloads can be large.

  2. Run Diffuse.exe

  3. Load a model
    Diffuse will automatically:

    • Install an isolated portable Python runtime
    • Create the required virtual environment
    • Download the selected model from Hugging Face

First-run notice

On first launch or when loading a model for the first time, setup may take several minutes while Python, dependencies, and model files are downloaded and initialized. This is expected behavior.

No manual Python setup is required.

Device Support

Supports CUDA-based devices; support for ROCm-based devices is in active development.
