Show your work.

"Hallucination" is treated as a tuning problem. We think it's a category error. A model that generates plausible text without grounding is a different class of thing from a model that retrieves and cites it. Confusing the two is how the AI industry got into this mess.

For consumer chat, ungrounded generation is fine — the user is ten feet from the answer, the cost of being wrong is small, and the same person who asks the question evaluates the answer. For work where the answer goes downstream, gets pasted into a deal document, ends up in front of a regulator, or lands on someone's review pile — ungrounded generation is malpractice. The user can't evaluate it. They're trusting the model. And the model has no obligation to deserve it.

Citation is category-defining.

What separates the two classes of AI isn't model size or training data. It's whether every answer carries a verifiable trace back to a source the user can read. With citation, the model becomes a research assistant: a fast retriever, a competent summarizer, a tireless reader. Without citation, it's a confident guesser whose output you have no way to verify before it leaves your desk.
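
To make "verifiable trace" concrete, here is a minimal sketch of the data shape we mean. It is not our schema, and every name in it (Citation, GroundedAnswer, document_id, and the rest) is hypothetical. The point is structural: the citation travels inside the answer, not alongside it.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """A verifiable trace from one claim back to a source the reader can open."""
    document_id: str   # which source document
    section: str       # where in that document the support lives
    quoted_span: str   # the exact text the claim rests on

@dataclass
class GroundedAnswer:
    """An answer and the trail behind it, carried together."""
    text: str
    citations: list[Citation] = field(default_factory=list)
    # An empty citations list is a signal, not a shrug: the text is a
    # guess, and a guess should not leave the building dressed as research.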

We think this is the most important distinction in applied AI right now, and we think most products are on the wrong side of it. They're shipping confident generation as if it were research, and the consequences land on the people who use the output downstream.

Stones AI products live on the citing side of the line. Every answer points to a source. Every claim has a path back. Every recommendation can be checked. That isn't a feature we added; it's the foundation we built on. Removing it would change the category of product we sell, because what we sell is trust, and trust without traceability isn't trust.

What we won't do.

We won't ship outputs without source attribution, even when attribution is technically inconvenient. The minute we let the model fill in a gap rather than admit one, we've crossed back into the category of products we're trying to displace.

We won't market "explainability" as a substitute for citation. Post-hoc explanations of how a model reached an answer are useful for debugging, but they don't ground the answer. A citation grounds the answer. The two are not equivalents, and the industry's habit of conflating them is a category error of its own.

We won't optimize for confidence at the expense of accuracy. A model that says "I don't see this in your documentation" is more valuable than the same model saying "your retention period is 30 days" when the documentation actually says nothing of the sort. The human reviewer needs a draft that knows its limits.
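
That behavior is simple enough to sketch. What follows is an illustration of the abstention pattern, not our pipeline: retrieve and generate are hypothetical stand-ins for whatever retrieval and generation layer sits underneath, and the dict shape is invented for the example.

```python
def answer_from_documentation(question, retrieve, generate):
    """Answer only from retrieved passages; otherwise admit the gap."""
    passages = retrieve(question)  # passages that actually address the question
    if not passages:
        # The honest failure mode: a flagged gap, not a confident guess.
        return {"text": "I don't see this in your documentation.", "citations": []}
    draft = generate(question, passages)  # generation constrained to those passages
    return {
        "text": draft,
        "citations": [(p["document_id"], p["section"]) for p in passages],
    }
```

The refusal branch is the first thing a confidence-optimized product deletes, and it is the entire point.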

How VTTD reflects this.

Every VTTD answer carries a citation back to the source document and section. When VTTD can't find an answer, it flags the gap rather than fabricating one. The audit trail isn't a compliance checkbox; it's the product. A buyer reviewing VTTD's output can read the source for any claim, which is the only way they can stand behind the answer when the questionnaire goes back. Read more about VTTD →
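
As an illustration of what "the audit trail is the product" means in practice, here is a hypothetical record shape, again not VTTD's actual schema: the fields a buyer would need to open the source and stand behind the claim.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One reviewable claim: enough to open the source and stand behind it."""
    claim: str             # what the answer asserted
    source_document: str   # the document it came from (empty when flagged as a gap)
    source_section: str    # the section within that document
    recorded_at: datetime  # when the answer, or the gap, was produced
    gap: bool = False      # True when no source was found and no answer was given

# A gap is recorded the same way an answer is: the absence of a source
# is itself part of the trail.
example = AuditRecord(
    claim="customer data retention period",
    source_document="",
    source_section="",
    recorded_at=datetime.now(timezone.utc),
    gap=True,
)
```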

The same rule applies to our own writing. Every AI-drafted post on this site is reviewed, disclosed in three places, and traceable back to the human who approved it. See how we use AI in our content.

If the work is going to earn trust, the work has to be visible. We make it visible.