GVProf: A Value Profiler for GPU-based Clusters
KeSSie: huge-context semantic recall for Large Language Models
Physics-based computation at scale — Hamiltonian dynamics, spectral theory, and statistical mechanics powering optimization, drug discovery, genomics, molecular proof, and agentic commerce.
Reimplementations and optimizations of common deep-learning operators, with both CUDA and Triton versions, for learning and reference.
The GPU Optimizer for ML Models enhances GPU performance for machine learning. It offers advanced scheduling, real-time monitoring, and efficient resource management through a user-friendly web interface and robust API, integrating big data technologies for seamless data processing and model optimization. @NVIDIA
Executive FinOps dashboard and automated governance engine using FOCUS 1.3 standards for AWS, Azure, and Snowflake.
Text-to-video generation application that converts natural language (english) prompts into short animated videos using diffusion models and AnimateDiff, with GPU-aware optimization and an interactive Gradio UI that can be executed on Google Colab (T4 GPU).
🤖 Ollama Consumer - A Python-based interactive chat interface for Ollama models with advanced model management, comprehensive benchmarking, vision support, and automatic error recovery. Features dynamic model switching, GPU optimization, and intelligent service monitoring for seamless AI model interactions.
DGX Spark (GB10/SM121) platform support for Meta's KernelAgent — auto-detect, hardware constraints, safe Triton configs
Quantitative dataset of 119 neural architectures (2017-2025) scored on hardware compatibility and ecosystem friction. Validates the Transformer Attractor thesis.
The NVIDIA driver's fan control logic wasn't doing it for me — too conservative, too opaque — so I built my own. This is a Linux GUI application for independent NVIDIA GPU fan control without requiring Coolbits. Uses pynvml via a root helper subprocess for direct fan management.
Optimizing PyTorch Model Training by Wrapping Memory Mapped Tensors on Nvidia GPUs with TensorDict.
AI Infrastructure Senior Engineer Learning Track - Advanced ML infrastructure and technical leadership
Hybrid AI routing: LOCAL Ollama + CLOUD GitHub Copilot
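A hybrid local/cloud router like the one described above typically applies a simple policy before dispatching a prompt. The sketch below is purely illustrative (the function name, keyword list, and length threshold are assumptions, not this repo's API): keep short or privacy-sensitive prompts on the local Ollama model and send the rest to the cloud backend.

```python
# Hypothetical routing policy; names and thresholds are illustrative.
SENSITIVE = ("password", "api key", "secret")

def route(prompt: str, local_limit: int = 500) -> str:
    """Return 'local' (Ollama) or 'cloud' (hosted model) for a prompt."""
    if any(term in prompt.lower() for term in SENSITIVE):
        return "local"  # never send likely secrets off-machine
    # Short prompts stay local; long ones go to the larger cloud model.
    return "local" if len(prompt) <= local_limit else "cloud"
```

In practice the local branch would call the Ollama HTTP API and the cloud branch the hosted endpoint; the policy function stays independent of either client.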
🚀 Achieve rapid training of NanoGPT (GPT-2 124M) on a single RTX 4090, targeting a validation loss below 3.28 with FineWeb-Edu data.
Profile-first ML systems project optimizing a multi-camera end-to-end driving model for hardware efficiency using PyTorch, CUDA streams, NVTX instrumentation, and Nsight Systems.
LM Multi-Bin Dynamic Scheduler Simulator - Implementation combining Multi-Bin batching with SLA-constrained dynamic batching
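The multi-bin idea above can be sketched in a few lines. This is a toy model, not the simulator's code (the function name, bin width, and flush rules are assumptions): requests are grouped into prompt-length bins so padded batches stay uniform, and a bin is flushed either when it fills or when its oldest request would breach the SLA wait deadline.

```python
from collections import defaultdict

def multi_bin_schedule(requests, bin_width=128, max_batch=4, sla=0.5):
    """Toy multi-bin batcher with an SLA wait cap.

    requests: list of (arrival_time, prompt_len) tuples.
    Returns a list of batches, each a list of requests.
    """
    bins = defaultdict(list)   # length-bin index -> pending requests
    batches = []
    for t, length in sorted(requests):
        b = length // bin_width
        bins[b].append((t, length))
        oldest_wait = t - bins[b][0][0]
        # Flush on a full batch or when the oldest request hits the SLA.
        if len(bins[b]) >= max_batch or oldest_wait >= sla:
            batches.append(bins.pop(b))
    batches.extend(bins.values())  # flush remaining partial bins
    return batches
```

Binning by length bounds padding waste; the SLA check bounds tail latency for stragglers in sparsely populated bins.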
Drop-in small-matrix acceleration for PyTorch on edge devices
An advanced hybrid scheduling framework that leverages Reinforcement Learning and ML to dynamically optimize CPU/GPU task allocation in real-time.
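The core of RL-driven device allocation can be illustrated with a minimal bandit-style learner. This sketch is an assumption about the general technique, not the framework's actual design: an epsilon-greedy policy tracks mean observed latency per device and routes new tasks to whichever device has been faster.

```python
import random

class DeviceAllocator:
    """Toy epsilon-greedy CPU/GPU allocator (illustrative only)."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.avg = {"cpu": 0.0, "gpu": 0.0}  # running mean latency (s)
        self.n = {"cpu": 0, "gpu": 0}        # observation counts

    def choose(self):
        # Explore with probability epsilon, or until both arms are tried.
        if random.random() < self.epsilon or 0 in self.n.values():
            return random.choice(["cpu", "gpu"])
        return min(self.avg, key=self.avg.get)  # exploit: lowest mean latency

    def update(self, device, latency):
        # Incremental running-mean update after a task completes.
        self.n[device] += 1
        self.avg[device] += (latency - self.avg[device]) / self.n[device]
```

A production scheduler would condition on task features (size, kernel type, queue depth) rather than a single global estimate, but the feedback loop is the same: act, observe latency, update.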