MLOS is a project to enable autotuning for systems.
🦗 Learn Locust from scratch 🐍
⚡ JEval helps you evaluate your JMeter test plan and provides recommendations before you start your performance testing. All contributions welcome 🙏.
🦃 Black Friday Performance Testing Experiment 🙏
🛠 This utility converts your LoadRunner Rules to JMeter Correlation Recorder Rules. 💙 All Contributions Welcome 🙏
Performance experiments in GIL-free Python 3.13
Collection of examples and links that use different profiling tools to show memory usage and timings.
Load testing suite built with Locust for web applications
Standalone LLM inference benchmarking pipelines on AMD GPUs using ROCm, vLLM, MAD, and data visualization scripts.
Field-theoretic dual track python subset. Curvature, phase, and domain-driven optimization. Deterministic and interpretable.
Anthropic Performance Take-Home: 1,339 cycles (110.3x speedup, 9/9 tests) — Claude Opus 4.6 solution
A production-grade telemetry-aware suite for benchmarking LLM inference performance on NVIDIA RTX 3080.
🚀 DeltaPerf AI 🔍 Analyze. Detect. Summarize
Library to compute auto-tuning and performance metrics.
Telemetry-aware GPU kernel benchmarking with contamination detection, finalist-pair reruns, and confidence-gated promotion.
LLM-guided CUDA kernel generation framework with correctness validation and roofline analysis
Python lab for exploring memory bandwidth, cache effects, and locality in accelerator workloads
Profile-first ML systems project optimizing a multi-camera end-to-end driving model for hardware efficiency using PyTorch, CUDA streams, NVTX instrumentation, and Nsight Systems.
This repository demonstrates a hybrid Python/C++ architecture for performance-critical scientific workloads.