Question: Is there any interest in an autonomous AI agent for Gitea? (Issue → Plan → Implementation → Eval → PR, skeleton code, version diff, auto‑fix prep) #190372
Replies: 1 comment 1 reply
This is a definite yes. Please open-source it! A lightweight, local-first AI agent that integrates directly with Gitea (especially one that can run on a Raspberry Pi or Jetson) is highly relevant right now. Most existing autonomous coding agents are heavily tied to GitHub or require massive cloud compute, so a Gitea-native alternative fills a huge gap for the self-hosted community. Releasing it under the MIT license is the perfect move. Also, don't worry at all about the code not being "professional" since you built it with LLMs. The best thing you can do is publish the repo, drop the link in this thread, and let the community help you test, refactor, and improve it. Looking forward to seeing the repository!
Hi everyone,
I would like to get feedback from the community on whether I should release a project that has become much bigger than I originally expected.
This started as a small personal tool for my Jetson‑LLM WhatsApp chatbot. Over time I noticed that LLMs tend to “drift”:
they skip tests, modify the wrong files, ignore workflows, or hallucinate new paths.
I only wanted something that would prevent this from happening again.
But that small idea turned into something much bigger.
If I had to describe it (I'll call it gitea-agent), I would put it like this:
An autonomous agent for Gitea that turns issues into tested pull requests — with real technical constraints, deterministic checks, and a full evaluation system instead of prompt tricks.
I’m not a developer (I can barely write “hello world” in Python), but through many LLM sessions this system has grown far beyond what I planned.
It runs fully locally and even works on small hardware like Raspberry Pi, Jetson, or mini‑servers.
What the agent can do today (compact but complete):
🔹 1. Full workflow automation
Issue → Plan → Approval → Implementation → Eval → PR
Fully automated, but controlled through approval gates.
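To make the approval gate concrete, here is a minimal Python sketch of the staged pipeline. The stage names match the flow above, but `advance` and the gate logic are purely illustrative, not the actual implementation:

```python
# Illustrative sketch of the Issue -> Plan -> Approval -> Implementation
# -> Eval -> PR pipeline. The approval gate blocks progress until a
# human has signed off on the plan.

STAGES = ["issue", "plan", "approval", "implementation", "eval", "pr"]

def advance(state, approved=True):
    """Return the next stage; stay at 'approval' until approved."""
    if state == "approval" and not approved:
        return state
    i = STAGES.index(state)
    return STAGES[i + 1] if i + 1 < len(STAGES) else state

# Walk an issue through the pipeline:
state = "issue"
state = advance(state)                   # -> "plan"
state = advance(state)                   # -> "approval"
state = advance(state, approved=False)   # gate holds -> "approval"
state = advance(state, approved=True)    # -> "implementation"
```

The point is only that the gate is an explicit state the agent waits in, not a prompt instruction the LLM might ignore.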
🔹 2. Intelligent context loader (token‑efficient)
The agent does not load the whole repository.
Instead, it builds skeleton code blocks by automatically extracting only the relevant parts:
backtick‑referenced files
AST import analysis
keyword grep
automatic context folders
token‑budget optimization
drift detection
→ This keeps the LLM context small, stable, and reproducible.
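Two of those steps can be sketched in a few lines of Python. The names `imported_modules` and `fit_to_budget` are my own for illustration, under the assumption that the import analysis uses Python's standard `ast` module and that the budget is approximated in characters:

```python
import ast

def imported_modules(source):
    """AST import analysis: collect module names a Python file
    imports, without executing it."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module)
    return mods

def fit_to_budget(snippets, budget_chars):
    """Greedy token-budget optimisation: keep snippets (most relevant
    first) until the budget is exhausted."""
    kept, used = [], 0
    for s in snippets:
        if used + len(s) <= budget_chars:
            kept.append(s)
            used += len(s)
    return kept
```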
🔹 3. Gitea version diff (foundation for auto‑fixing)
The agent automatically detects:
which files changed since the last commit
which differences matter
which areas could be automatically repaired or improved
This is the groundwork for future auto‑refactoring and auto‑repair features.
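A minimal sketch of that diff step, assuming the agent keeps a per-file content hash between runs (the snapshot format and the `diff_snapshots` name are my invention, not the real code):

```python
def diff_snapshots(old, new):
    """Compare two {path: content_hash} snapshots and report which
    files changed, appeared, or disappeared since the last commit."""
    changed = sorted(p for p in old.keys() & new.keys() if old[p] != new[p])
    added = sorted(new.keys() - old.keys())
    removed = sorted(old.keys() - new.keys())
    return {"changed": changed, "added": added, "removed": removed}
```

Ranking which of those differences actually matter is then a separate, LLM-assisted step.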
🔹 4. Evaluation system (CI/CD for LLMs)
Not just “run tests”, but a full deterministic evaluation pipeline:
weighted tests
multi‑step tests (same user context across messages)
latency measurement
baseline tracking (score must not regress)
tag analysis for systematic errors
automatic issues when regressions occur
PR blocking if the score drops
score history in the dashboard
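The weighted scoring and the "score must not regress" PR gate could look roughly like this (a sketch with hypothetical names, not the actual eval code):

```python
def weighted_score(results):
    """results: list of (passed, weight) pairs. Returns a score in
    [0, 1]; heavier tests move the score more."""
    total = sum(w for _, w in results)
    earned = sum(w for ok, w in results if ok)
    return earned / total if total else 0.0

def gate_pr(score, baseline, tolerance=0.0):
    """Baseline tracking: allow the PR only if the new score has not
    regressed below the recorded baseline."""
    return score + tolerance >= baseline
```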
🔹 5. Operating modes
Watch mode: periodic evaluation + auto‑issues
Patch mode: active development without auto‑issues
Idle mode
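Sketched as code, the difference between the modes might be as small as a pair of flags (hypothetical helpers, assuming only watch mode files issues automatically):

```python
def allows_auto_issues(mode):
    """Only watch mode files issues automatically on regressions."""
    return mode == "watch"

def is_active(mode):
    """Watch and patch both do work; idle mode does nothing."""
    return mode in ("watch", "patch")
```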
🔹 6. Dashboard
Live view with:
score history
system status
error analysis
tag statistics
🔹 7. Additional features
plan comments with metadata
approval gate (“ok”, “yes”, “✅”)
auto‑restart on inactivity + new commits
self‑consistency check
LLM‑assisted test and log analysis
automatic cleanup of context folders
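As an example, the approval gate mentioned above could be as simple as matching a short affirmative comment (a sketch; the real matcher may well be stricter):

```python
# Hypothetical approval matcher: a plan comment counts as approved
# if the reply is one of a few short affirmative tokens.
APPROVAL_TOKENS = {"ok", "yes", "✅"}

def is_approval(comment):
    """Treat a short affirmative reply as plan approval."""
    return comment.strip().lower() in APPROVAL_TOKENS
```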
My question to you:
Is there interest in a tool like this?
Would anyone use such an agent?
Or is this too niche and only useful for my own setup?
Should I release it as open source (MIT)?
I’d really appreciate any feedback — positive or critical.