
NVIDIA Corporation

Pinned

  1. cuopt

    GPU-accelerated decision optimization

    Cuda · 645 stars · 110 forks

  2. cuopt-examples

    NVIDIA cuOpt examples for decision optimization

    Jupyter Notebook · 392 stars · 61 forks

  3. open-gpu-kernel-modules

    NVIDIA Linux open GPU kernel module source

    C · 16.6k stars · 1.6k forks

  4. aistore

    AIStore: scalable storage for AI applications

    Go · 1.7k stars · 230 forks

  5. nvidia-container-toolkit

    Build and run containers leveraging NVIDIA GPUs

    Go · 4k stars · 459 forks

  6. GenerativeAIExamples

    Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

    Jupyter Notebook · 3.7k stars · 952 forks

Repositories

Showing 10 of 647 repositories
  • Megatron-LM

    Ongoing research on training transformer models at scale

    Python · 14,855 stars · 3,479 forks · 309 issues (1 needs help) · 246 PRs · Updated Jan 10, 2026
  • spark-rapids-jni

    RAPIDS Accelerator JNI for Apache Spark

    Cuda · 52 stars · Apache-2.0 · 78 forks · 85 issues · 6 PRs · Updated Jan 10, 2026
  • cuda-quantum

    C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows

    C++ · 884 stars · 318 forks · 410 issues (16 need help) · 86 PRs · Updated Jan 10, 2026
  • cuopt

    GPU-accelerated decision optimization

    Cuda · 645 stars · Apache-2.0 · 110 forks · 85 issues · 33 PRs · Updated Jan 10, 2026
  • OSMO

    The developer-first platform for scaling complex Physical AI workloads across heterogeneous compute, unifying training GPUs, simulation clusters, and edge devices through a simple YAML interface

    Python · 71 stars · Apache-2.0 · 6 forks · 23 issues · 11 PRs · Updated Jan 10, 2026
  • TensorRT-LLM

    TensorRT LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for building Python and C++ runtimes that orchestrate inference execution with high performance. (A minimal usage sketch follows this list.)

    Python · 12,592 stars · 2,003 forks · 512 issues · 469 PRs · Updated Jan 10, 2026
  • Model-Optimizer

    A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM, TensorRT, and vLLM to optimize inference speed.

    Python · 1,785 stars · Apache-2.0 · 232 forks · 56 issues · 64 PRs · Updated Jan 10, 2026
  • aistore

    AIStore: scalable storage for AI applications

    Go · 1,722 stars · MIT · 230 forks · 3 issues · 0 PRs · Updated Jan 10, 2026
  • TensorRT-Incubator

    Experimental projects related to TensorRT

    MLIR · 117 stars · 22 forks · 37 issues (1 needs help) · 13 PRs · Updated Jan 10, 2026
  • bionemo-framework

    BioNeMo Framework: For building and adapting AI models in drug discovery at scale

    Jupyter Notebook · 619 stars · 108 forks · 61 issues (1 needs help) · 114 PRs · Updated Jan 10, 2026
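
The TensorRT-LLM entry above describes an easy-to-use Python API for defining LLMs and running optimized inference on NVIDIA GPUs. The sketch below is a rough illustration of that idea using the project's documented high-level LLM interface; the model name, prompt, and sampling values are placeholder assumptions, and the exact API surface may vary between releases.

    # Minimal sketch of TensorRT-LLM's high-level Python API (illustrative only).
    # The model identifier, prompt, and sampling settings are placeholder assumptions.
    from tensorrt_llm import LLM, SamplingParams

    def main():
        # Load or build an optimized engine for a Hugging Face checkpoint (placeholder name).
        llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

        # Sampling configuration; values chosen only for illustration.
        sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

        # Run batched generation on the GPU and print the completions.
        outputs = llm.generate(["Explain what TensorRT-LLM does in one sentence."], sampling)
        for out in outputs:
            print(out.outputs[0].text)

    if __name__ == "__main__":
        main()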