SKaiNET aims to democratize "Edge AI / On-device AI" by bridging the gap between high-level application development and low-level hardware optimization. We believe AI should be portable, type-safe, and developer-friendly, enabling seamless intelligence in everything from mobile apps to IoT devices without sacrificing performance.
For architecture details see ARCHITECTURE.md.
Add the core dependencies (Gradle Kotlin DSL):

```kotlin
dependencies {
    implementation("sk.ainet.core:SKaiNET-lang-core:0.17.0")
    implementation("sk.ainet.core:SKaiNET-backend-cpu:0.17.0")
}
```

Java / Maven users — see Java Getting Started for BOM setup and JVM flags.
Define a model:

```kotlin
val model = nn {
    input(28 * 28)
    dense(out = 128)
    relu()
    dense(out = 10)
}
```

Tensor operations:

```kotlin
val a = tensor(shape(2, 2)) { float(1f, 2f, 3f, 4f) }
val b = tensor(shape(2, 2)) { float(5f, 6f, 7f, 8f) }
val c = a matMul b
val d = c.relu()
```

Read GGUF weights:

```kotlin
val source = SystemFileSystem.source(Path("model.gguf")).buffered()
val reader = GGUFReader(source)
val tensor = reader.tensors.first { it.name == "token_embd.weight" }
val weights = reader.materialize(tensor)
```

More examples: SKaiNET-examples | SKaiNET-notebook

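As a sanity check on the tensor example above, the same 2 × 2 product can be computed by hand in plain Kotlin (no SKaiNET dependency):

```kotlin
// c[i][j] = sum over k of a[i][k] * b[k][j], matching `a matMul b` above.
val a = arrayOf(floatArrayOf(1f, 2f), floatArrayOf(3f, 4f))
val b = arrayOf(floatArrayOf(5f, 6f), floatArrayOf(7f, 8f))
val c = Array(2) { i ->
    FloatArray(2) { j -> (0 until 2).map { k -> a[i][k] * b[k][j] }.sum() }
}
// ReLU is max(x, 0); every entry of c is positive, so it passes through unchanged.
val d = c.map { row -> row.map { maxOf(it, 0f) } }
println(c.map { it.toList() })  // [[19.0, 22.0], [43.0, 50.0]]
```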
SKaiNET is a modular ecosystem. While this repository contains the core engine, specialized high-level libraries are maintained in standalone repositories:
| Project | Description |
|---|---|
| SKaiNET-LLM | Llama, Gemma, and BERT inference runtimes |
| SKaiNET-transformers | Pre-built transformer architectures and layers |
| SKaiNET-examples | Sample projects and integration demos |

| Goal | Start here |
|---|---|
| Examples and sample projects | SKaiNET-examples |
| Interactive notebooks | SKaiNET-notebook |
| LLM inference (Llama, Gemma) | SKaiNET-LLM |
| Java 21+ integration | docs/java-getting-started.md |
| Data loading and transforms | docs/io-readers-guide.md |
| Graph DSL (ResNet, YOLO) | docs/graph-dsl.md |
| Edge AI / Arduino export | docs/arduino-c-codegen.md |
| MLIR / StableHLO compiler | docs/hlo-getting-started.md |
| Architecture overview | ARCHITECTURE.md |
| Contributing | CONTRIBUTING.md |
- Targets: JVM, macOS (Native), JS, WASM (Browser + WasmWasi)
- Single codebase shared across all platforms via Kotlin Multiplatform
- ComputeGraphExecutor: Optimized engine with fusion passes and trace-to-DAG bridging.
- SDPA & Gather: High-performance Scaled Dot-Product Attention and indexing operations.
- ComputeGraph: Unified framework for defining agentic workflows and tool-calling loops.
- Java facade: `JavaAgentLoop` (in `skainet-lang-java`)
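For intuition, a tool-calling loop boils down to: run the model, dispatch any requested tool, append the observation, and repeat until a final answer. The sketch below is a toy in plain Kotlin — the stub model and the `call:`/`answer:` string protocol are invented for illustration and are not the `JavaAgentLoop` API:

```kotlin
// Toy agent loop: the "model" is a stub that requests one tool call, then answers.
// The call:/answer: protocol here is invented purely for illustration.
val tools = mapOf<String, (String) -> String>(
    "add" to { args -> args.split(",").sumOf { it.trim().toInt() }.toString() }
)

fun stubModel(history: List<String>): String =
    if (history.none { it.startsWith("observation:") }) "call:add:2,3"
    else "answer:" + history.last().removePrefix("observation:")

val history = mutableListOf("user: what is 2 + 3?")
while (true) {
    val out = stubModel(history)
    if (out.startsWith("answer:")) { println(out); break }   // prints "answer:5"
    val (_, name, args) = out.split(":", limit = 3)          // e.g. "call" / "add" / "2,3"
    history += "observation:" + tools.getValue(name)(args)   // run the tool, record result
}
```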
- Sequential: `nn { input(); dense(); relu(); dense() }`
- DAG / Graph: arbitrary wiring with `dag { }` for ResNet, YOLO-style architectures
- Layers: Dense, Conv1d/2d/3d, MaxPool, AvgPool, BatchNorm, Dropout, LeakyReLU, ELU
- KAN (Kolmogorov–Arnold Networks) layer (experimental)
- Autograd engine with reverse-mode gradients, SGD and Adam/AdamW optimizers
- Built-in loaders: MNIST, Fashion-MNIST, CIFAR-10
- Formats: GGUF, ONNX, SafeTensors, JSON, Image (JPEG, PNG)
- Type-safe transform DSL: resize, crop, normalize, toTensor
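The autograd engine is listed above without detail; as a rough illustration of what reverse-mode gradients and SGD do together, here is a self-contained toy in plain Kotlin (none of these names are SKaiNET APIs):

```kotlin
// Toy reverse-mode autograd over expression trees (illustrative, not SKaiNET's API).
// Each node stores a `back` closure that pushes the incoming gradient to its inputs.
class Value(val data: Double) {
    var grad = 0.0
    var back: (Double) -> Unit = {}
    fun backward(g: Double = 1.0) { grad += g; back(g) }

    operator fun times(o: Value) = Value(data * o.data).also { r ->
        r.back = { g -> backward(g * o.data); o.backward(g * data) }  // product rule
    }
    operator fun minus(o: Value) = Value(data - o.data).also { r ->
        r.back = { g -> backward(g); o.backward(-g) }
    }
    fun sq() = Value(data * data).also { r ->
        r.back = { g -> backward(g * 2 * data) }                      // d(x^2)/dx = 2x
    }
}

// Fit w so that 2 * w ≈ 10 by minimizing the squared error with plain SGD.
var w = 3.0
val lr = 0.05
repeat(100) {
    val wv = Value(w)
    val loss = (wv * Value(2.0) - Value(10.0)).sq()
    loss.backward()        // reverse pass: wv.grad now holds dLoss/dw
    w -= lr * wv.grad      // SGD step
}
println(w)  // converges to 5.0
```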
- `SKaiNET` entry point, `TensorJavaOps`, builder-pattern model definition
- Maven BOM (`sk.ainet:skainet-bom`) for one-line version management
- Docs: Getting Started | Model Training
- Export trained models to standalone, optimized C99 with static memory allocation
- Ready-to-use Arduino library output
- See arduino-c-codegen.md
- Lower Kotlin DSL to MLIR StableHLO dialect
- Optimization passes: constant folding, operation fusion, dead code elimination
- Valid IREE-compilable output with streaming API and public `HloGenerator`
- See hlo-getting-started.md
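Of the optimization passes above, constant folding is the easiest to picture. Here is a toy version over a minimal expression IR in plain Kotlin — the `Expr` types are invented for illustration and are not SKaiNET's compiler classes:

```kotlin
// Toy IR: constants, named inputs, and binary add/mul nodes.
sealed interface Expr
data class Const(val v: Float) : Expr
data class Input(val name: String) : Expr
data class Add(val l: Expr, val r: Expr) : Expr
data class Mul(val l: Expr, val r: Expr) : Expr

// Constant folding: recursively fold children, then collapse ops whose
// operands are both constants into a single Const node.
fun fold(e: Expr): Expr = when (e) {
    is Const, is Input -> e
    is Add -> {
        val l = fold(e.l); val r = fold(e.r)
        if (l is Const && r is Const) Const(l.v + r.v) else Add(l, r)
    }
    is Mul -> {
        val l = fold(e.l); val r = fold(e.r)
        if (l is Const && r is Const) Const(l.v * r.v) else Mul(l, r)
    }
}

// (2 * 3) + x folds to 6 + x; the runtime input x blocks further folding.
val folded = fold(Add(Mul(Const(2f), Const(3f)), Input("x")))
println(folded)  // Add(l=Const(v=6.0), r=Input(name=x))
```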
- Core Engine Focus — Refactored the repository to focus on the core `ComputeGraph` framework, compiler, and backends. Extracted high-level LLM and transformer implementations to standalone repositories.
- LLM-as-DSL — New high-level DSL for defining and running LLM architectures within the core framework.
- Optimized ComputeGraphExecutor — New executor with support for fusion passes and trace-to-DAG bridging for faster inference.
- SDPA & Gather — Implemented Scaled Dot-Product Attention and `gather`/`indexSelect` ops for improved performance.
See CHANGELOG.md for the full release history.
- Q1 2026: Comprehensive documentation ✅
- Q2 2026: Reference-based validation of computation correctness
- Q3 2026: Agentic AI enhancements ✅ (tool calling shipped in 0.13.0; ongoing)
- Q4 2026: Federated learning support for multi-device training
We love contributions! Whether it's a new operator, documentation, or a bug fix:
- Read our Contribution Guide.
- Check the Good First Issues.
- Open a discussion or issue on GitHub.
Browse the full codebase documentation on DeepWiki.
- Dhia Chemingui (@dhiaspaner) — Android KMP plugin migration (#385, #386)
MIT — see LICENCE.
