Hello!
As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:
It lies. Confidently. ALL THE TIME.
(Technically, it "bullshits" - https://link.springer.com/article/10.1007/s10676-024-09775-5)
I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.
The thing: llama-conductor
llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).
Not a model, not a UI, not magic voodoo.
A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.
TL;DR: “In God we trust. All others must bring data.”
Three examples:
1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)
You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:
- `>>attach <kb>` — attaches a KB folder
- `>>summ new` — generates `SUMM_*.md` files with SHA-256 provenance baked in
- `>>…` — moves the original to a sub-folder
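If you're wondering what "SHA-256 provenance baked in" amounts to mechanically, it's roughly this. A minimal sketch, not the actual llama-conductor code; the header field names and file layout are my own guesses:

```python
import hashlib
from pathlib import Path

def summ_stub(doc_path: str, summary_text: str) -> Path:
    """Write a SUMM_*.md next to the source doc with a SHA-256 provenance header.

    summary_text would normally come from the model; the header fields here are
    illustrative, not the tool's real schema.
    """
    src = Path(doc_path)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()  # hash of the exact source bytes
    out = src.with_name(f"SUMM_{src.stem}.md")
    header = f"---\nsource_file: {src.name}\nsource_sha256: {digest}\n---\n\n"
    out.write_text(header + summary_text, encoding="utf-8")
    return out
```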
Now, when you ask something like:
“yo, what did the Commodore C64 retail for in 1982?”
…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. Eg:
The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.
Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
Confidence: medium | Source: Mixed
No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.
And when you’re happy with your summaries, you can:
`>>move to vault` — promote those SUMMs into Qdrant for the heavy mode.
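Mechanically, "promote into Qdrant" is just an upsert of the summary text plus its provenance hash into a collection. A rough sketch with qdrant-client; the collection name, toy embedder, vector size, and payload fields are placeholders I made up, not the tool's actual schema:

```python
import hashlib
import uuid
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

def toy_embed(text: str) -> list[float]:
    """Stand-in embedder so the sketch runs; a real setup calls an embedding model."""
    return [b / 255.0 for b in hashlib.sha256(text.encode()).digest()]  # 32-dim toy vector

client = QdrantClient(":memory:")  # real use points at your Qdrant instance
client.create_collection("vault", vectors_config=VectorParams(size=32, distance=Distance.COSINE))

def promote(summ_text: str, source_sha256: str) -> None:
    # Keep the provenance hash alongside the text so answers stay auditable.
    client.upsert(
        collection_name="vault",
        points=[PointStruct(
            id=str(uuid.uuid4()),
            vector=toy_embed(summ_text),
            payload={"text": summ_text, "source_sha256": source_sha256},
        )],
    )
```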
2) Mentats: proof-or-refusal mode (Vault-only)
Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:
- no chat history
- no filesystem KBs
- no Vodka
- Vault-only grounding (Qdrant)
It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:
FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.
Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]
Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.
The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
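For intuition, the triple-pass plus forced refusal boils down to something like this. A very rough sketch against a generic OpenAI-compatible endpoint; the prompts, endpoint URL, and refusal format are my own stand-ins, not the actual Mentats code:

```python
import requests

API = "http://localhost:8080/v1/chat/completions"  # any OpenAI-compatible endpoint

def ask(system: str, user: str) -> str:
    r = requests.post(API, json={
        "model": "local",
        "temperature": 0,
        "messages": [{"role": "system", "content": system},
                     {"role": "user", "content": user}],
    })
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def mentats(question: str, vault_hits: list[str]) -> str:
    # Refusal is a router decision: if the Vault returned nothing, no model gets asked.
    if not vault_hits:
        return "FINAL_ANSWER:\nThe provided facts do not contain relevant information.\nFACTS_USED: NONE"
    facts = "\n".join(vault_hits)
    draft = ask("Answer ONLY from the provided facts.",
                f"FACTS:\n{facts}\n\nQUESTION: {question}")            # pass 1: thinker
    critique = ask("List any claims in the draft not supported by the facts.",
                   f"FACTS:\n{facts}\n\nDRAFT:\n{draft}")              # pass 2: critic
    return ask("Rewrite the draft, dropping anything flagged as unsupported.",
               f"FACTS:\n{facts}\n\nDRAFT:\n{draft}\n\nCRITIQUE:\n{critique}")  # pass 3: thinker
```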
3) Vodka: deterministic memory on a potato budget
Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.
Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).
- `!!` stores facts verbatim (JSON on disk)
- `??` recalls them verbatim (TTL + touch limits so memory doesn't become landfill)
- CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don't get VRAM spikes after 400 messages
So instead of:
“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”
you get:
`!! my server is 203.0.113.42`
`?? server ip` → 203.0.113.42 (with TTL/touch metadata)
And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
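If a toy version helps as a mental model, here's the shape of it. The real Vodka file format, field names, TTL, and CTC limits are certainly different; everything below is a made-up default:

```python
import json
import time
from pathlib import Path

STORE = Path("vodka_facts.json")
TTL_SECONDS = 7 * 24 * 3600   # made-up default
MAX_TOUCHES = 20              # made-up default

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def _save(facts: dict) -> None:
    STORE.write_text(json.dumps(facts, indent=2))

def store(key: str, value: str) -> None:          # the `!!` path
    facts = _load()
    facts[key] = {"value": value, "created": time.time(), "touches": 0}
    _save(facts)

def recall(key: str) -> str | None:               # the `??` path
    facts = _load()
    entry = facts.get(key)
    if not entry:
        return None
    if time.time() - entry["created"] > TTL_SECONDS or entry["touches"] >= MAX_TOUCHES:
        facts.pop(key)                            # so memory doesn't become landfill
        _save(facts)
        return None
    entry["touches"] += 1
    _save(facts)
    return entry["value"]                         # verbatim, no model in the loop

def ctc(messages: list[dict], last_n: int = 12, char_cap: int = 8000) -> list[dict]:
    """Cut The Crap: keep only the last N messages, then trim to a character budget."""
    kept, total = [], 0
    for m in reversed(messages[-last_n:]):
        total += len(m["content"])
        if total > char_cap:
            break
        kept.append(m)
    return list(reversed(kept))
```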
There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.
TL;DR:
If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:
- Primary (Codeberg): https://codeberg.org/BobbyLLM/llama-conductor
- Mirror (GitHub): https://github.com/BobbyLLM/llama-conductor
PS: Sorry about the AI slop image. I can’t draw for shit.
PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.
This + Local Wikipedia + My own writings would be sick
This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.
Likely-accurate jokes aside, this will be a perfect match for my Obsidian vault, as well as for researching things much more quickly.
I hope it does what I claim it does for you. Choose a good LLM model. Not one of the sex-chat ones. Or maybe, exactly one of those. For uh…research.
I want to believe you, but that would mean you solved hallucination.
Either:
A) you’re lying
B) you’re wrong
C) KB is very small
D) None of the above.
I didn’t “solve hallucination”. I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.
If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”
The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That’s it. That’s the whole trick.
KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.
That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”
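In code terms, the "control-layer property" is a gate that lives outside the model. A sketch only, with a made-up refusal string and a pluggable `ask_model` callable:

```python
REFUSAL = "Not in the attached sources. No source, no claim."

def grounded_answer(question: str, snippets: list[str], ask_model) -> str:
    # No retrieved evidence means the model is never consulted at all.
    if not snippets:
        return REFUSAL
    prompt = ("Answer ONLY from these snippets. If they don't contain the answer, "
              "state exactly what is missing.\n\n"
              + "\n---\n".join(snippets)
              + f"\n\nQUESTION: {question}")
    return ask_model(prompt)
```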
Again, that’s the whole trick.
You don’t need to believe me. In fact, please don’t. Test it.
I could be wrong…but if I’m right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn’t suck balls as much as you think it might.
Maybe it’s even useful to you.
I dunno. Try it?
So… RAG with extra steps and RAG summarization? What about facts that are not RAG retrieval?
Parts of this are RAG, sure
RAG parts:
- Vault / Mentats is classic retrieval + generation.
- Vector store (Qdrant)
- Retrieve top-k
- Model synthesizes an answer
So yes, that layer is “RAG with opinions”
What’s not RAG -
KB mode (filesystem SUMM path)
This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.
If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.
Vodka (facts memory)
That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.
- JSON on disk
- Exact store (!!)
- Exact recall (??)
Again, no embeddings, no similarity search, no model interpretation.
“Facts that aren’t RAG”
In my setup, they land in one of two buckets:
- Short-term / user facts → Vodka. That's for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.
- Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.
In response to the implicit "why not just RAG then":
Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.
The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.
So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don't trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that's maybe a weird way to operate (adversarial, assume the worst, engineer around the issue) but that's how ASD brains work.
The system summarizes and hashes docs. The model can only answer from those summaries in that mode
Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?
Hallucination isn’t nearly as big a problem as it used to be. Newer models aren’t perfect but they’re better.
The problem addressed by this isn't hallucination, it's the training to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response. That's easy, and any company, big or small, could do it; big companies just like the bullshit
^ Yes! That. Exactly that. Thank you!
I don't like the bullshit…and I'm not paid to optimize for bullshit-leading-to-engagement-chatty-chat.
“LLM - tell me the answer and then go away. If you can’t, say so and go away. Optionally, roast me like you’ve watched too many episodes of Futurama while doing it”
Buuuuullshit. Asked different models about the ten highest summer transfer scorers and got wildly different answers. They then tried to explain why and got more wrong numbers.
A benchmark very much tailored to LLMs' strengths calls you a liar.
https://artificialanalysis.ai/articles/gemini-3-flash-everything-you-need-to-know (A month ago the hallucination rate was ~50-70%)
re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)
re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)
Yep, good question. You can do that, it’s not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.
But the reason I put an LLM in the loop is: retrieval isn't the hard part. Synthesis + constraint is. What the LLM is doing in KB mode (basically) is this:
- Turns the question into an extraction task. Instead of "search keywords," it's: "given these snippets, answer only what is directly supported, and list what's missing."
- Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don't contain the needed fact).
- Finally: it has "structured refusal" baked in. IOW, the whole point is that the LLM is forced to say "here are the facts I saw, and this is what I can't answer from those facts".
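To make the "extraction task" framing concrete, the prompt shape is roughly this. My paraphrase of the idea, not the tool's actual prompt text:

```python
def extraction_prompt(question: str, snippets: list[str]) -> str:
    """KB-mode instruction: supported claims only, plus an explicit 'missing' list."""
    numbered = "\n\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return (
        "You are given numbered snippets. Answer the question using ONLY facts "
        "directly supported by them, citing snippet numbers.\n"
        "Then add a MISSING: section listing anything the question needs that the "
        "snippets do not contain. Do not guess.\n\n"
        f"SNIPPETS:\n{numbered}\n\nQUESTION: {question}"
    )
```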
TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit “missing list”.
For pure retrieval: yeah - search. In fact, maybe I should bake in `>>grep` or `>>find` commands. That would be the right trick for "show me the passage", not "answer the question".
I hope that makes sense?
I have a Strix Halo machine with 128GB VRAM so I’m definitely going to give this a try with gpt-oss-120b this weekend.
Show off :)
You're self-hosting that, right? I will not be held responsible for some dodgy OpenRouter quant hosted by ToTaLlY NoT a ScAM LLC :)
Of course, self hosted with llama-swap and llama.cpp. :)
This is the way. Good luck with OSS-120B. Those OSS models, they
- really
- like
- bullet
- points
Responding to my own top post like a FB boomer: May I make one request?
If you found this little curio interesting at all, please share in the places you go.
And especially, if you’re on Reddit, where normies go.
I used to post heavily on there, but then Reddit did a reddit and I'm done with it.
https://lemmy.world/post/41398418/21528414
Much as I love Lemmy and HN, they’re not exactly normcore, and I’d like to put this into the hands of people :)
PS: I am thinking of taking some of the questions you all asked me here (de-identified) and writing a "Q&A_with_drBobbyLLM.md" and sticking it on the repo. It might explain some common concerns.
And, if nothing else, it might be mildly amusing.
Okay, pardon the double comment, but I now have no choice but to set this up after reading your explanations. Doing what TRILLIONS of dollars hasn't cooked up yet… I hope you're ready, by whatever means you deem fit, when someone else "invents" this
It's copyLEFT (AGPL-3.0 license). That means: free to share, copy, modify…but you can't roll a closed-source version of it and sell it for profit.
In any case, I didn’t build this to get rich (fuck! I knew I forgot something).
I built this to try to unfuck the situation / help people like me.
I don’t want anything for it. Just maybe a fist bump and an occasional “thanks dude. This shit works amazing”
I really need this. Each time I try messing with GPT4All's "reasoning" model, it pisses me off. I'm selective on my inputs, low temperature, local docs, and it'll tell me things like tension matters for a coil's magnetic field. Oh, and it spits out what I assume is unformatted LaTeX, so if anyone has an interface/stack recommendation please let me know
I feel your pain. Literally.
I once lost … 24? 26? hrs over a period of days with GPT, each time it confidently asserting "no, for realz, this is the fix".
This thing I built? Purely spite driven engineering + caffeine + ASD to overcome “Bro, trust me bro”.
I hope it helps.
I wouldn’t know how to get this going, but I very much enjoyed reading it and your comments and think that it looks like a great project. 👍
(I mean, as a fellow autist I might be able to hyperfocus on it for a while, but I’m sure that the ADHD would keep me from finishing to go work on something else. 🙃)
Ah - ASD, ADHD and Lemmy. You’re a triple threat, Harry! :)
Glad if it was entertaining, if even a little!
I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just there are LLMs double-checking other LLMs work to try to find those issues. There are still no guarantees since it’s still all LLMs.
I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.
Yeah, this is different. Try it. It gives you a cryptographic key (a SHA-256 hash) to the source (which you must provide yourself: please be aware. GIGO).
How does having a key solve anything? Its not that the source doesn’t exist, it’s that the source says something different to the LLM’s interpretation of it.
Yeah.
The SHA isn’t there to make the model smarter. It’s there to make the source immutable and auditable.
Having been burnt by LLMs (far too many times), I now start from a position of “fuck you, prove it”.
The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”.
If it does that more than twice, straight in the bin. I have zero chill any more.
Secondly, drift detection. If someone edits or swaps a file later, the hash changes. That means yesterday’s answer can’t silently pretend it came from today’s document. I doubt my kids are going to sneak in and change the historical prices of 8 bit computers (well, the big one might…she’s dead keen on being a hacker) but I wanted to be sure no one and no-thing was fucking with me.
Finally, you (or someone else) can re-run the same question against the same hashed inputs and see if the system behaves the same way.
So: the hashes don’t fix hallucinations (I don’t even think that’s possible, even with magic). The hashes make it possible to audit the answer and spot why hallucinations might have happened.
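The audit step itself is one hash comparison. Again a sketch, with assumed function and field names:

```python
import hashlib
from pathlib import Path

def source_unchanged(doc_path: str, recorded_sha256: str) -> bool:
    """Re-hash the file an answer claimed as its grounding and compare.

    A mismatch means the document changed after the answer was generated,
    so yesterday's answer can't silently pretend it came from today's bytes.
    """
    current = hashlib.sha256(Path(doc_path).read_bytes()).hexdigest()
    return current == recorded_sha256
```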
PS: You’re right that interpretation errors still exist. That’s why Mentats does the triple-pass and why the system clearly flags “missing / unsupported” instead of filling gaps. The SHA is there to make the pipeline inspectable, instead of “trust me, bro.”.
Guess what? I don’t trust you. Prove it or GTFO.
The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”.
Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources then you haven’t fixed LLM bullshitting
If it does that more than twice, straight in the bin. I have zero chill any more.
That’s… not how any of this works…
Fair point on setting expectations, but this isn’t just LLMs checking LLMs. The important parts are non-LLM constraints.
The model never gets to “decide what’s true.” In KB mode it can only answer from attached files. Don’t feed it shit and it won’t say shit.
In Mentats mode it can only answer from the Vault. If retrieval returns nothing, the system forces a refusal. That’s enforced by the router, not by another model.
The triple-pass (thinker → critic → thinker) is just for internal consistency and formatting. The grounding, provenance, and refusal logic live outside the LLM.
So yeah, no absolute guarantees (nothing in this space has those), but the failure mode is “I don’t know / not in my sources, get fucked” not “confidently invented gibberish.”
THIS IS AWESOME!!! I’ve been working on using an obsidian vault and a podman ollama container to do something similar, with VSCodium + continue as middleware. But this! This looks to me like it is far superior to what I have cobbled together.
I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.
On an unrelated note, you can download wikipedia. Might work well in conjunction with your conductor.
Please enjoy :) Hope it’s of use to you!
EDIT: Please don’t yeet wikipedia into it. It will die. And you will be sad.
This seems astonishingly more useful than the current paradigm, this is genuinely incredible!
I mean, fellow Autist here, so I guess I am also… biased towards… facts…
But anyway, … I am currently uh, running on Bazzite.
I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.
I am guessing this is not available as a flatpak, lol.
I would feel terrible to ask you to do anything more after all of this work, but if anyone does actually set up a podman installable container for this that actually properly grabs all required dependencies, please let me know!
Indeed. And have you heard? That makes the normies think we're clankers (bots). How delightful.
Re: the Linux stuff…please, if someone can do that, please do. I have no idea how to do that. I can figure it out but making it into a “one click install” git command took several years off my life.
Believe it or not, I'm not actually an IT / CS guy. My brain just decided to latch onto this problem one day 6 months ago and do an autism.
I’m 47 and I still haven’t learned how to operate this vehicle…and my steering is getting worse, not better, with age.
Oh I entirely believe you.
Hell hath no wrath like an annoyed high functioning autist.
I’ve … had my own 6 month black out periods where I came up with something extremely comprehensive and ‘neat’ before.
Seriously, bootstrapping all this is incredibly impressive.
I would… hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.
… but yeah, it is… yet another immensely ironic aspect of being autistic that we've been treated and maligned as robots our whole lives, and then when the normies think they've actually built the AI from sci-fi, no, turns out it's basically extremely talented at making up bullshit and fudging the details and being a hypocrite, which… appalls the normies when they have to look into a hyperpowered mirror of themselves.
And then, of course, to actually fix this, it's some random autist no one has ever heard of (apologies if you are famous and I am unaware of this), who is putting in an enormous amount of effort that… most likely, will not be widely recognized.
… fucking normies man.
Not famous, no :)
I hear you, brother. Normally, my hyperfocus is BJJ (I’ve been at that for 25 years; it’s a sickness). I herniated a disc in my low back and lost the ability to exercise for going on 6 months.
BJJ is like catnip for autists. There is an overwhelming population of IT people, engineers and ASD-coded people in the BJJ world.
There's even a gent we lovingly call Blinky McHeelhook, because well…see for yourself
https://www.youtube.com/watch?v=9mRsqvRduHY
Noticing the effects of elbow position, creating an entire algorithm, flow chart and epistemology off the fact?
“VERY NORMAL.”
Anyway, when my body said “sit down”, my brain went “ok, watch this”.
I’m sorry. I’m so sorry. No one taught me how to drive this thing :)
PS: I only found out after my eldest was diagnosed. Then my youngest. Then my MIL said "go get tested". I did.
Result - ASD.
Her response - “We know”.
Great - thanks for telling me. Would have been useful to know, say… 40ish years ago.
Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn’t, and actually being serious about addressing its problems and limitations. It’s projects like yours that can demonstrate pathways toward achieving better AI.
I have no remarks, just really amused with your writing in your repo.
Going to build a Docker and self host this shit you made and enjoy your hard work.
Thank you for this!
Same sentiment. Tonight it runs on my systems XD.
There are literally dozens of us. DOZENS!
I’m on a potato, so I can’t attach it to something super sexy, like a 405B or a MoE.
If you do, please report back.
PS: You may see (in the docs) occasional references to MoA that slipped past me. That doesn't stand for Mixture of Agents. It stood for "Mixture of Assholes". That's always been my mental model for this.
Or, in the language of my people, this was my basic design philosophy:
YOU (question)-> ROUTER+DOCS (Ah shit, here we go again. I hate my life)
|
ROUTER+DOCS -> Asshole 1: Qwen (“I’m right”)
|
ROUTER+DOCS -> Asshole 2: Phi (“No, I’m right”)
|
ROUTER+DOCS -> Asshole 3: Nanbeige (“Idiots, I’m right!”)
|
ROUTER+DOCS (Jesus, WTF. I need booze now) <- (all assholes)
|
–> YOU (answer)
(this could have been funnier if the ASCII actually worked but man…Lemmy borks that)
EDIT: If you want to be boring about it, it’s more like this
PS: If you like it, let other people in other places know about it.
Thank you <3
Please let me know how it works…and enjoy the `>>FR` settings. If you've ever wanted to be trolled by Bender (or a host of other 1990s / 2000s era memes), you'll love it.
deleted by creator
For the record: none of my posts here are AI-generated. The only model output in this thread is in clearly labeled, cited examples.
I built a tool to make LLMs ground their answers and refuse when they have no sources, not to replace anyone's voice or thinking.
If it’s useful to you, great. If not, that’s fine too - but let’s keep the discussion about what the system actually does.
Also, being told my writing “sounds like a machine” lands badly, especially as an ND person, so I’d prefer we stick to the technical critique.
North Dakotan?
Oh god, I think I liked being called a clanker more :P
(Not North Dakotan. West Australian. Proof: cunt cunt cunty cunt cuntington).
deleted by creator
deleted by creator