Compact, opinionated agent skills for AI coding assistants. Distilled and refined to stay small, hit hard, and waste nothing.
Note: This repo is a read-only mirror of skills from the compound-engineering plugin. Edits happen upstream; this repo is for distribution via `npx skills add`.
```shell
npx skills add iliaal/ai-skills

# Pick one skill
npx skills add iliaal/ai-skills -s code-review

# Target a specific agent
npx skills add iliaal/ai-skills -a claude-code
```

Works with Claude Code, OpenCode, OpenClaw and its derivatives, Codex, Cursor, Kilo Code, and 35+ other agents.

| Skill | Description |
|---|---|
| agent-native-architecture | Build AI agents using prompt-native architecture |
| frontend-design | Create production-grade frontend interfaces |
| simplifying-code | Declutter code without changing behavior |

| Skill | Description |
|---|---|
| react-frontend | React, TypeScript, Next.js patterns, Vitest/RTL testing |
| nodejs-backend | Express/Fastify layered architecture, validation, error handling, production config |
| python-services | CLI tools, async parallelism, FastAPI, project tooling (uv, ruff, ty) |
| php-laravel | PHP 8.4, Laravel, Eloquent, queues, PHPUnit testing |
| pinescript | Pine Script v6 indicators, strategies, backtesting, TradingView |

| Skill | Description |
|---|---|
| postgresql | Schema design, query tuning, indexing, JSONB, partitioning, RLS, window functions |
| terraform | Terraform/OpenTofu modules, testing, state management, multi-environment HCL |
| linux-bash-scripting | Defensive Bash, argument parsing, production patterns, ShellCheck compliance |

| Skill | Description |
|---|---|
| writing-tests | Generic test discipline: quality, anti-patterns, rationalization resistance |

| Skill | Description |
|---|---|
| code-review | Two-stage review (spec compliance, then code quality) with severity-ranked findings |
| receiving-code-review | Process review feedback critically: verify, push back, no blind agreement |
| debugging | Root-cause debugging: reproduce, investigate, hypothesize, fix, verify, postmortem |
| verification-before-completion | Fresh verification evidence before any completion claim |
| planning | File-based implementation planning with task breakdown and progress tracking |

| Skill | Description |
|---|---|
| brainstorming | Explore requirements and approaches through dialogue |
| compound-docs | Capture solved problems as categorized documentation |
| document-review | Improve documents through structured self-review |
| file-todos | File-based todo tracking system |
| finishing-branch | Workflow closer: merge, PR, keep, or discard with safety checks |
| git-worktree | Manage Git worktrees for parallel development |
| md-docs | Manage AGENTS.md, README.md, and CONTRIBUTING.md |
| resolve-pr-parallel | Resolve PR review comments in parallel |
| setup | Configure which review agents run for your project |
| writing | Prose editing, rewriting, and humanizing text |

| Skill | Description |
|---|---|
| meta-prompting | Structured reasoning via /think, /verify, /adversarial, /edge, /compare |
| refine-prompt | Turn rough prompts into precise AI instructions |
| reflect | Session retrospective and skill audit |

| Skill | Description |
|---|---|
| orchestrating-swarms | Comprehensive guide to multi-agent swarm orchestration |

Skills eat context. Every token a skill spends is one the agent can't use on your code, so these are built tight.
Each skill goes through a distillation process: multiple expert sources are analyzed, overlapping advice is merged, filler is stripped, and contradictions are resolved. What's left is one focused instruction set per topic. The approach follows Anthropic's prompting principles and patterns refined by experienced skill authors.
What that looks like in practice:
- Under 1K tokens, 2K hard cap. If it doesn't fit, it gets split into a reference file the agent loads on demand.
- Front-loaded. The most important rules come first. Model attention drops off, so the critical stuff leads.
- Actions, not explanations. Tell the agent what to do, not what things are. Skip anything the model already knows.
- Every "don't" has a "do instead." Bare prohibitions leave the agent guessing. Alternatives give it a clear path.
- One good default per decision. A single best practice beats a menu of options.
- Keyword-rich descriptions under 80 tokens. The description is the only part loaded at startup across all installed skills. It's what the agent uses to decide whether to activate a skill, so it's packed with the exact phrases developers type. The skill body only loads when triggered.
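To make the description principle concrete, here is a sketch of what a keyword-rich SKILL.md frontmatter might look like, using the code-review skill from the table above. The field names follow the common skills format, and the trigger phrases are illustrative, not the repo's actual wording:

```yaml
---
name: code-review
description: >
  Two-stage code review: spec compliance first, then code quality, with
  severity-ranked findings. Use when asked to "review this PR", "review
  my code", "check this diff", or "find issues before merge".
---
```

The phrases in quotes are the exact things developers type, which is what lets the agent match the request to the skill from the description alone.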
Claude Code sometimes skips skills even when they match your request. If that happens, add this to your CLAUDE.md or MEMORY.md:
```markdown
## Always check skills before starting work

Before starting any task, scan the full available skills list in the system prompt
and check if any skill's trigger matches the user's request. If a match exists,
invoke it via the Skill tool BEFORE generating any manual response. Do not skip
skills in favor of doing the work manually.
```

This turns skill activation from "when it feels like it" into a reliable first step.
See CHANGELOG.md for detailed version history.
MIT