Organizational Culture Is Vital for Success in AI Adoption

Sofia Khaira brings a unique perspective to the intersection of human talent and technological evolution. As an expert dedicated to diversity, equity, and inclusion, she understands that the true hurdle of digital transformation is rarely the code itself, but rather the cultural soil in which that technology is planted. With organizations racing to integrate generative tools, Sofia provides a roadmap for ensuring that this transition doesn’t leave people behind, but instead empowers every tier of the workforce to thrive in a reorganized digital landscape.

Our conversation explores the necessity of aligning leadership teams to view artificial intelligence as a comprehensive operating model rather than a siloed IT project. We discuss the transition from individual experimentation to institutionalized “control planes” for AI agents, emphasizing the importance of automated learning loops and the redesign of work processes to favor outcomes over traditional task management. By examining the current gap between individual skills and organizational readiness, we uncover how businesses can move their “emergent” workers into a state of high-performing proficiency.

Roughly twenty percent of workers currently operate with the ideal mix of personal AI skills and technical infrastructure. How can organizations pinpoint where these high-performers are located, and what specific steps should they take to move the rest of the workforce into this “sweet spot”?

To identify these high-performers, organizations must look beyond job titles and instead map the dependencies that exist between people and existing processes. We often find these “sweet spot” employees in pockets where there is a high willingness to experiment paired with a robust technical foundation, yet according to current data, only one in five workers has achieved this balance. To move the remaining 80% of the workforce, leaders must first treat AI as a new operating model rather than a simple software update, requiring a deep dive into how specific jobs will change at a fundamental level. This involves a three-step approach: first, auditing the “emergent” 50% of the workforce to see where skills are taking shape but lack infrastructure; second, building the conditions for these employees to actually apply what they have learned in a safe environment; and third, redesigning work architectures to support high-level individual capabilities. It is not enough for an employee to be savvy; the organization must be ready to catch them with supportive talent practices and clear rules that encourage growth.

Success in AI pilots is frequently tied to cultural readiness rather than just the technology itself. How can leadership teams align to make AI a company-wide priority instead of just an IT initiative, and what specific behaviors should managers model to encourage safe experimentation?

The reality is that 67% of respondents in recent research identify organizational readiness as the single most critical factor for the success of AI pilots. This means that if a project is seen purely as an IT drive, it is likely to stay siloed and fail to gain the necessary traction across the broader enterprise. Leadership teams must align by making AI a shared imperative, where the C-suite collectively takes ownership of the shift in the operating model rather than delegating it to the technology department. Managers play a pivotal role here by modeling experimentation; they should be seen using these tools themselves, openly sharing their “failures” in pilot tests, and rewarding the process of learning rather than just the final output. When 20,000 knowledge workers tell us that manager support and a supportive culture are what move the needle, it becomes clear that the emotional safety to try and fail is the most valuable currency in a digital transformation.

Many employees have skills that are still taking shape and aren’t yet fully integrated into their daily routines. How should leaders rearchitect job descriptions to account for this growth, and what specific metrics indicate that a team is moving from “emergent” to truly proficient?

Rearchitecting job descriptions requires a fundamental shift from listing static tasks to defining desired outcomes and the level of autonomy an employee—and their AI agents—can exercise. Since half of the knowledge workforce is currently in an “emergent” phase, job descriptions should explicitly include time and metrics for AI experimentation and the refinement of review processes. Proficiency is signaled when a team stops asking “how do I use this tool?” and starts demonstrating a redesigned workflow where agents handle routine heavy lifting while humans focus on high-level oversight. We look for metrics like the reduction in time spent on manual data processing and an increase in the frequency of “agentic” interactions that result in successful project milestones. A proficient team is one where the infrastructure is no longer a hurdle, and the employees are consistently applying their skills to solve complex, non-routine problems.
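The two signals mentioned above can be made concrete. This is a minimal sketch, not a prescribed methodology; the function name and inputs (`baseline_manual_hours`, `agentic_milestones`, etc.) are illustrative assumptions, not metrics defined in the interview.

```python
def proficiency_signals(baseline_manual_hours: float,
                        current_manual_hours: float,
                        agentic_milestones: int,
                        total_milestones: int) -> dict:
    """Two illustrative proficiency signals for a team:
    - the percentage reduction in time spent on manual data processing
    - the share of project milestones reached with agent involvement
    """
    manual_time_reduction = 1 - current_manual_hours / baseline_manual_hours
    agentic_share = agentic_milestones / total_milestones if total_milestones else 0.0
    return {
        "manual_time_reduction": round(manual_time_reduction, 2),
        "agentic_milestone_share": round(agentic_share, 2),
    }
```

A team that cut manual processing from 40 to 10 hours a week, with agents contributing to three of four milestones, would score 0.75 on both signals: a plausible "proficient" profile under this sketch.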

When deploying AI agents, it is becoming necessary to treat them as managed entities with their own permissions and lifecycles. What does an effective “control plane” for these agents look like in practice, and how can security teams ensure that rapid scaling doesn’t lead to a loss of oversight?

An effective control plane functions as the central nervous system for all agent operations, extending the same level of rigor to AI entities that we currently apply to human employees and traditional software applications. In practice, this means every AI agent is assigned a unique identity and a specific set of permissions that govern what data it can access and what actions it can take autonomously. Security teams must build layers of trust into these systems, ensuring that as the number of agents scales, there is still total visibility into their lifecycle, from deployment to decommissioning. This prevents a “shadow AI” scenario where agents operate in a vacuum, potentially creating security vulnerabilities or data leaks. By treating agents as managed entities, the IT department ensures that rapid growth is balanced with meticulous enforcement of corporate standards and safety protocols.
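A minimal sketch of such a control plane follows, assuming a simple in-memory registry; the class and permission names (`ControlPlane`, `read:tickets`, and so on) are hypothetical placeholders, not a reference to any specific product. The key ideas from the answer above are all present: each agent gets a unique identity, a scoped permission set, and a tracked lifecycle, and anything unregistered or decommissioned (the "shadow AI" case) is denied by default.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, Set


class LifecycleState(Enum):
    DEPLOYED = "deployed"
    SUSPENDED = "suspended"
    DECOMMISSIONED = "decommissioned"


@dataclass
class AgentRecord:
    agent_id: str            # unique identity for the agent
    owner: str               # accountable human team
    permissions: Set[str]    # e.g. {"read:tickets", "write:drafts"}
    state: LifecycleState = LifecycleState.DEPLOYED


class ControlPlane:
    """Central registry applying employee-grade rigor to AI agents:
    identity, scoped permissions, and a lifecycle from deployment
    to decommissioning."""

    def __init__(self) -> None:
        self._agents: Dict[str, AgentRecord] = {}

    def register(self, agent_id: str, owner: str, permissions: Set[str]) -> AgentRecord:
        record = AgentRecord(agent_id, owner, set(permissions))
        self._agents[agent_id] = record
        return record

    def is_allowed(self, agent_id: str, permission: str) -> bool:
        record = self._agents.get(agent_id)
        # Unregistered or retired agents get nothing: deny by default.
        return (record is not None
                and record.state is LifecycleState.DEPLOYED
                and permission in record.permissions)

    def decommission(self, agent_id: str) -> None:
        self._agents[agent_id].state = LifecycleState.DECOMMISSIONED
```

In practice this registry would sit in front of data stores and tool APIs, so that every agent action is checked against its assigned permissions and lifecycle state before it executes.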

Building an automated learning loop is essential for refining agentic systems through both positive and negative feedback. How should companies structure their data collection to capture these insights, and what strategies prevent the organization’s processes from becoming obsolete as the technology continues to shift?

To build a truly functional learning loop, organizations must capture and analyze every single interaction with an AI agent, ensuring that both successes and failures are documented as valuable data points. This feedback loop should be owned by the creators of the agentic systems themselves, as they are best positioned to understand the nuances of the processes they are trying to automate. Data collection needs to be granular, looking for patterns in how an agent responds to edge cases or where a human had to intervene to correct an AI-driven error. To prevent obsolescence, leadership must adopt the mindset that these systems are living entities that will continually change alongside the technology. This means refusing to “set and forget” any process and instead scheduling regular intervals to adapt the AI’s training based on the real-world feedback collected through the automated loop.
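The data-collection side of that loop can be sketched as a simple interaction log. This is an illustrative structure, assuming the field names shown (`outcome`, `human_intervened`); what matters is that every interaction is captured, and that failures and human corrections are queryable as the training signal for the next adaptation cycle.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Interaction:
    agent_id: str
    task: str
    outcome: str            # "success" or "failure"
    human_intervened: bool  # a person had to correct the agent's output
    notes: Optional[str] = None


class FeedbackLog:
    """Granular record of every agent interaction, owned by the team
    that built the agent and reviewed at regular retraining intervals."""

    def __init__(self) -> None:
        self._events: List[Interaction] = []

    def record(self, event: Interaction) -> None:
        self._events.append(event)

    def intervention_rate(self, agent_id: str) -> float:
        events = [e for e in self._events if e.agent_id == agent_id]
        if not events:
            return 0.0
        return sum(e.human_intervened for e in events) / len(events)

    def edge_cases(self, agent_id: str) -> List[Interaction]:
        # Failures and human corrections are the negative-feedback
        # data points fed into the next training cycle.
        return [e for e in self._events
                if e.agent_id == agent_id
                and (e.outcome == "failure" or e.human_intervened)]
```

A rising intervention rate is an early warning that a process is drifting toward obsolescence, which is the trigger for the scheduled adaptation the answer above describes.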

What is your forecast for AI adoption in the workplace?

I anticipate that the “emergent” middle—the 50% of workers currently mid-journey—will either rapidly ascend to the “sweet spot” or find themselves struggling as the infrastructure gap widens. We are moving toward a landscape where the distinction between “human work” and “AI work” disappears, replaced by a seamless collaboration where agents are treated with the same organizational rigor as any other team member. Within the next few years, the companies that thrive will be those that prioritized culture and leadership alignment today, effectively turning their IT departments into sophisticated control planes for a massive, autonomous workforce. Success will not be measured by how many AI tools a company buys, but by how deeply they have redesigned their very core to be as flexible as the technology itself.
