AI adoption isn’t a tech-first challenge. It’s an organizational one. And for leaders in learning and data, that means the path to responsible, scalable AI starts with people and process, not just the latest models.
In a recent webinar hosted by Data Society and The Data Lodge, Mike Baylor, VP and Chief Digital & AI Officer at Lockheed Martin, shared candid insights on what it takes to move from pilot projects to scalable enterprise AI. Mike didn’t sugarcoat the complexity, but he did offer a clear-eyed view of what works.
Below are highlights from that conversation, with takeaways for CDOs and CLOs charting their AI journey.
Start with Governance Before Scale
It is impossible to scale AI responsibly without a clear governance structure in place. Every organization needs to be thoughtful about risk, ethics, and the processes guiding AI development and deployment. Mike emphasized the foundational role of governance in Lockheed Martin’s AI efforts. “We stood up an AI governance council, which is where we’re putting in things like ethics, policies, risk reviews.” Without these checks, AI becomes a liability instead of a competitive advantage.
Takeaway: Before your AI strategy goes wide, make sure it goes deep into ethical frameworks and operational guardrails.
Action Item: Formalize your AI governance structures early. Include representatives from data, legal, security, and business units. Use this group to define what “safe to scale” looks like in your organization.
Advance from Awareness to Technical Skills

Educating your workforce about AI is only the beginning. Real transformation requires role-specific fluency and ongoing support. For Lockheed Martin, this meant structured, high-quality training that moved beyond theory. “We have been pushing training around AI and generative AI,” said Mike. “We’ve been working with Data Society on that. We’ve had about 250 people go through that training.”
Takeaway: Broad awareness alone won’t move the needle. Invest in targeted upskilling aligned to roles and readiness.
Action Item: Map your workforce by skill level and job function. Pair foundational AI literacy for business roles with advanced technical training for developers and engineers. Track adoption over time, not just attendance.
Use Champions and Structured Intake to Drive Adoption
Even the best tools will fall flat without engagement and ownership across teams. That’s why Lockheed Martin relies on internal champions to promote responsible adoption from the inside out. Employees are also invited to submit their own AI project ideas through a structured intake process, creating a two-way system of innovation. “We have champions in each of the business areas,” said Mike. “And we also have an AI intake process, where people can come and propose AI projects.” This model empowers experimentation while maintaining oversight.
Takeaway: Adoption accelerates when employees have clear pathways to engage and when champions are empowered to lead from within their teams.
Action Item: Identify internal champions who can advocate for responsible AI use within their departments. Create a transparent intake process to vet and prioritize use cases. Then track outcomes so your wins become shared learning moments.
Apply AI Where It Adds Real Value
Not every business problem requires an AI solution. To scale responsibly, leaders need to be strategic about where AI is deployed—and just as strategic about where it’s not. For Mike, the focus is on applying AI where it delivers measurable value. “We’re looking at, where does it apply? And where does it make sense? And where can we actually provide some value?” That approach ensures the organization doesn’t get swept up in hype, but instead remains grounded in purpose.
Takeaway: Focus efforts on high-impact, low-friction opportunities that solve real business problems.
Action Item: Work with cross-functional teams to evaluate use cases based on risk, reward, and readiness. Start with applications that are meaningful but manageable, then expand as confidence and capability grow.
Align AI Strategy to Business Strategy
Scaling AI is not just about selecting the right models. It requires rethinking how teams operate, how decisions are made, and how value is measured. At Lockheed Martin, the AI team is focused on impact and accountability. “We’re really trying to make sure that we’re working on things that matter,” said Mike. “And that we’re doing it in a responsible way.” For other leaders, the same principle applies.
Takeaway: Responsible AI is a leadership issue. Strategy must align with culture, compliance, and long-term goals.
Action Item: Reframe your AI initiatives in terms of business value and organizational change. Communicate the “why” early and often. Ensure you are not just deploying AI, but embedding it into the way people work, decide, and deliver.
Watch the Replay: Real Talk on Scaling AI in High-Stakes Environments
If you’re leading AI adoption in a high-stakes industry, this is a conversation you don’t want to miss. Mike Baylor shares grounded insights on governing and scaling AI that you can apply immediately, whether you’re just starting your roadmap or advancing your next wave of deployment.
Watch the full replay here and see how Lockheed Martin is making responsible AI real.