Execution Receipts and the Problem of Citable AI
AI citations are failing because they lack verifiable provenance. Execution receipts offer a path toward AI outputs that can be meaningfully cited.
Technical writing from the MLNavigator Research Group. These articles explore concepts, methodologies, and observations from our work.
Why running the same CUDA code twice can produce different floating-point results, and what you can do about it.
A technical governance study examining how nondeterminism in AI systems creates audit, compliance, and operational control failures across regulated industries.
Verification proves what happened, not that the output is correct. This distinction matters for compliance and trust.
MoE architectures add a discrete routing layer that amplifies the floating-point nondeterminism already present in GPU execution.
Token usage variance is a measurable financial loss. Verifiable token accounting closes this gap.
Why we measure inference efficiency in Joules per token, and how to do it repeatably on macOS.
Most AI tools pull toward third-party services. Building truly offline systems means resisting this gravity at every layer.