New Release #190034
**Open Source Release**

All three platforms are real, deployable systems. They install via Docker, Helm, or Kubernetes, start successfully, and produce observable results. They are currently running on cloud infrastructure. However, they should be considered unfinished foundations rather than polished products. The ecosystem totals roughly 1.5 million lines of code.

**The Platforms**

**ASE**

It attempts to:

- Produce software artifacts from high-level tasks
- Monitor the results of what it creates
- Evaluate outcomes
- Feed corrections back into the process
- Iterate over time

ASE runs today, but the agents require tuning, some features remain incomplete, and output quality varies depending on configuration.

**VulcanAMI — Transformer / Neuro-Symbolic Hybrid AI Platform**

The intent is to address limitations of purely statistical language models by incorporating symbolic components, orchestration logic, and system-level governance. The system deploys and operates, but reliable transformer integration remains a major engineering challenge, and significant work is needed before it could be considered robust.

**FEMS — Finite Enormity Engine**

FEMS is a computational platform for large-scale scenario exploration through multiverse simulation, counterfactual analysis, and causal modeling. It is intended as a practical implementation of techniques that are often confined to research environments. The platform runs and produces results, but the models and parameters require expert mathematical tuning. It should not be treated as a validated scientific tool in its current state.

**Current Status**

Deployable. Operational. Complex. Incomplete.

Known limitations include:

- Rough user experience
- Incomplete documentation in some areas
- Limited formal testing compared to production software
- Architectural decisions driven by feasibility rather than polish
- Areas requiring specialist expertise for refinement
- Security hardening not yet comprehensive

Bugs are present.
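The produce / monitor / evaluate / correct / iterate cycle that ASE attempts, as described above, amounts to a closed feedback loop. A minimal sketch of that control structure is below; every name, threshold, and number here is an illustrative assumption for the sake of the sketch, not ASE's actual API or behavior:

```python
# Toy sketch of a produce -> monitor -> evaluate -> correct -> iterate loop.
# All names and values are hypothetical illustrations, not ASE internals.

from dataclasses import dataclass


@dataclass
class Artifact:
    task: str
    quality: float      # observed quality score in [0, 1] (assumed metric)
    revisions: int = 0


def produce(task: str) -> Artifact:
    """Stand-in for agents generating a software artifact from a high-level task."""
    return Artifact(task=task, quality=0.4)


def monitor(artifact: Artifact) -> float:
    """Stand-in for observing the results the artifact produces when run."""
    return artifact.quality


def evaluate(observed: float, threshold: float = 0.9) -> bool:
    """Decide whether the observed outcome meets an acceptance threshold."""
    return observed >= threshold


def correct(artifact: Artifact) -> Artifact:
    """Feed corrections back into the process; here, a fixed toy improvement."""
    artifact.quality = min(1.0, artifact.quality + 0.2)
    artifact.revisions += 1
    return artifact


def iterate(task: str, max_rounds: int = 10) -> Artifact:
    """Iterate over time until the outcome is acceptable or rounds run out."""
    artifact = produce(task)
    for _ in range(max_rounds):
        if evaluate(monitor(artifact)):
            break
        artifact = correct(artifact)
    return artifact


result = iterate("build a REST endpoint")
print(result.revisions, result.quality)  # prints: 3 1.0
```

In a real agentic system each stand-in would be an LLM call, a deployment probe, or a test run, but the loop shape (and its dependence on a trustworthy `evaluate` step) is the part the bullet list above is describing.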
**Why Release Now**

The release is not tied to a commercial product, funding round, or institutional program. It is simply an opening of work that exists and runs, but is unfinished.

**About Me**

My primary career has been as a fantasy author. I am self-taught: I began learning software systems later in life and built these platforms independently, working on consumer hardware without a team, corporate sponsorship, or academic affiliation. This background will understandably create skepticism. It should also explain the nature of the work: ambitious in scope, uneven in polish, and driven by persistence rather than formal process. The systems were built because I wanted them to exist, not because there was a business plan or institutional mandate behind them.

**What This Release Is — and Is Not**

This is:

- A set of deployable foundations
- A snapshot of ongoing independent work
- An invitation for exploration and critique
- A record of what has been built so far

This is not:

- A finished product suite
- A turnkey solution for any domain
- A claim of breakthrough performance
- A guarantee of support or roadmap

**For Those Who Explore the Code**

- Some components are over-engineered while others are under-developed
- Naming conventions may be inconsistent
- Internal knowledge is not fully externalized
- Improvements are possible in many directions

If you find parts that are useful, interesting, or worth improving, you are free to build on them under the terms of the license.

**In Closing**

The systems exist. They run. They are unfinished. If they are useful to someone else, that is enough.

— Brian D. Anderson

https://github.com/musicmonk42/The_Code_Factory_Working_V2.git
Replies: 3 comments 4 replies
Hi musicmonk42 (Brian D. Anderson) 👋

First off — thank you for the incredibly candid, transparent, and humble release post. In a world full of over-hyped "revolutionary AI" announcements, this kind of raw honesty about scope, limitations, unfinished state, and solo origins is refreshing and builds real trust. Releasing ~1.5 million lines of code as open source — built entirely solo, on consumer hardware, without funding or team — is a massive personal achievement, regardless of polish. The fact that ASE, VulcanAMI, and FEMS are actually deployable via Docker/Helm/K8s, start up, and produce observable results already puts this in rare territory for independent projects of this ambition.

**Quick summary of what stands out (from your description)**
The self-described state ("unfinished foundations", "rough UX", "incomplete docs", "bugs present", "security not hardened", "expert tuning needed", "output quality varies") is exactly the kind of disclaimer that serious researchers/engineers appreciate — it invites collaboration instead of blind adoption.

**A few genuine questions / thoughts from the community perspective**
No pressure to answer all — just curious what kind of collaboration or feedback would be most valuable to you.

**Encouragement & next small steps suggestion**

This is the kind of release that can quietly attract the right people over time (researchers in neuro-symbolic AI, agentic systems, causal modeling, autonomous dev tools). Consider cross-posting to:
Again — respect for the persistence and for putting it out there "as-is". The systems exist. They run. That's already more than most ambitious side projects ever achieve. Looking forward to poking around the repos when I have a quiet evening. Wishing you productive collaborations ahead!

— Ayush (from the GitHub Community)
hi:3
Thank you for sharing this—it's really inspiring to see such large systems built independently.
Thank you for the honest response—I really appreciate it. It’s motivating to hear that you started from scratch as well.
I’ll take my time going through the documentation and try to understand the systems step by step. Even if it’s complex, I think there’s a lot to learn from how everything is structured.
If I manage to understand any part well, I'd be happy to contribute in small ways, especially around documentation or clarity.