As generative AI adoption accelerates across federal government and healthcare IT environments, questions around reliability, data integrity, and operational trust are moving to the forefront. Understanding how models behave, how outputs are grounded in authoritative data, and where human oversight is required is critical to deploying AI responsibly in high-stakes, mission-critical settings. On OrangeSlices AI’s The Peel Podcast, IntelliDyne’s Director of Data Science, Austin Keller, joins host Shelley McGuire to explore the technical and operational considerations that transform generative AI from an experimental capability into a trusted, mission-ready solution.
Original Post (from “OrangeSlices AI”, February 3, 2026):
On this episode of The Peel, host Shelley McGuire sits down with Austin Keller, Director of Data Science at IntelliDyne, to unpack some of the most misunderstood concepts in artificial intelligence. With a background that spans secure generative AI, Navy operational analytics, public health, and veteran suicide prevention, Austin brings both technical depth and real-world perspective to the conversation.
The episode dives into timely questions around AI reliability, including what “hallucinations” really mean in AI systems and why they occur. Shelley and Austin explore how techniques like retrieval-augmented generation (RAG) help ground AI outputs in real, up-to-date information, and why simply deploying a model isn’t enough, especially in government and healthcare environments where accuracy matters.
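The episode treats RAG at a conceptual level. For readers who want a feel for what “grounding” looks like in practice, the sketch below is a deliberately minimal, generic illustration, not a description of IntelliDyne’s systems: the in-memory document store, the keyword-overlap retriever, and the generate() placeholder are all assumptions made for the example, standing in for whatever vector store and approved model endpoint an organization actually uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions (not from the episode): a tiny in-memory document store,
# a toy keyword-overlap retriever, and a placeholder generate() call.

from collections import Counter

DOCUMENTS = {
    "policy-2024-03": "Contractors must complete annual security awareness training by March 31.",
    "handbook-telework": "Telework agreements are renewed every fiscal year and require supervisor approval.",
    "memo-ai-use": "Generative AI tools may be used for drafting, but outputs must be reviewed by a human before release.",
}

def score(query: str, text: str) -> int:
    """Count overlapping terms between the query and a document (toy retriever)."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(text.lower().split())
    return sum((q_terms & d_terms).values())

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the top-k documents most relevant to the query."""
    ranked = sorted(DOCUMENTS.items(), key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Ground the model: answer only from the cited passages, or say the answer is not found."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer the question using ONLY the passages below and cite the passage ID. "
        "If the answer is not present, say so.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

def generate(prompt: str) -> str:
    """Placeholder for a real model call; swap in your approved inference endpoint."""
    return "(model response would appear here)"

if __name__ == "__main__":
    question = "Do AI-generated drafts need human review before release?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)
    print(generate(prompt))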
Austin explains how AI can best support analysts and practitioners by summarizing, comparing, and organizing massive volumes of data, while still requiring human oversight, validation, and judgment. The conversation highlights where AI excels, where it needs guardrails, and why understanding how these systems work is critical to using them responsibly.
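Purely as an illustration of that oversight point (none of this comes from the episode), a human-in-the-loop gate can be as simple as refusing to accept a model-drafted summary until a named reviewer signs off. The DraftSummary record, the hypothetical summarize() placeholder, and the review step below are all invented for the example.

```python
# Minimal human-in-the-loop sketch: the model drafts, a human approves.
# All names here are hypothetical and illustrative only.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class DraftSummary:
    source_ids: list[str]        # documents the summary claims to cover
    text: str                    # model-generated summary text
    approved: bool = False       # flipped only by a human reviewer
    reviewer: str | None = None  # who made the call

def summarize(doc_texts: list[str]) -> str:
    """Placeholder for a model summarization call."""
    return f"(model-generated summary of {len(doc_texts)} documents)"

def human_review(draft: DraftSummary, reviewer: str, accept: bool) -> DraftSummary:
    """Record the analyst's judgment; only approved drafts move downstream."""
    draft.approved = accept
    draft.reviewer = reviewer
    return draft

if __name__ == "__main__":
    draft = DraftSummary(source_ids=["doc-1", "doc-2"], text=summarize(["...", "..."]))
    reviewed = human_review(draft, reviewer="analyst.smith", accept=True)
    print(reviewed.approved, reviewed.reviewer)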
The full episode is available on Spotify, and a video version of the conversation is also available.