Is AI Damaging Your Professional Relationships at Work?

Sofia Khaira is a specialist in diversity, equity, and inclusion who has dedicated her career to helping organizations navigate the complex intersections of talent management and workplace culture. As an HR expert, she focuses on building environments where transparency and human connection remain the foundation of professional growth. In this discussion, we explore the subtle yet profound ways generative AI is reshaping interpersonal dynamics, from the erosion of trust to the potential loss of essential conflict-resolution skills.

The following conversation examines the hidden costs of “workslop,” the psychological shift in how we treat human colleagues after interacting with obedient machines, and the ethical balance between personal privacy and AI transparency.

When colleagues cannot tell if a message is human or machine-made, they often spend extra energy decoding the true intent. How does this ambiguity erode professional trust over time, and what specific steps can teams take to maintain authenticity when using generative tools for daily communication?

When we are unsure if we are interacting with a person or a bot, it triggers a form of minor emotional labor that quickly becomes exhausting. This ambiguity causes us to second-guess our relationships; in fact, research shows that when people suspect AI involvement, they often question whether their colleague is actually putting in any effort. Over time, this erodes the foundation of trust because the receiver feels the sender is being transactional rather than relational. To combat this, teams must make disclosure the norm rather than the exception. For example, a colleague recently sent me an interview plan with a note saying, “ChatGPT assisted with the ideation and layout.” This simple act of transparency removed all the guesswork for me—when a question felt slightly off-topic or didn’t sound exactly like him, I didn’t have to wonder why his “tone” had suddenly shifted. By being upfront, we reduce the cognitive load on our teammates and preserve the authenticity of the partnership.

AI-generated content can sometimes create “workslop” that requires recipients to fix errors or hunt for missing context. What are the hidden productivity drains of this cycle, and how can employees ensure their use of automation doesn’t inadvertently shift a heavy “decoding” workload onto their teammates?

The hidden drain of “workslop” is that it transforms a supposedly efficient tool into a source of useless noise and performative busy work. While one employee might save 20 minutes using AI to draft a report, their manager might then spend two hours checking sources, fixing false context, and explaining why the output failed to hit the mark. This cycle creates a cascade of complex decision-making for the receiver that actually intensifies the total workload of the team. We see a significant social cost here too: about 50% of people surveyed view colleagues who send “workslop” as less creative and reliable, and 42% even see them as less trustworthy. To avoid this, employees must treat AI as a starting point, not a finished product, and take full responsibility for the “human” polish. You should never hit “send” until you have personally verified every claim and ensured the message adds genuine value rather than just filling up an inbox.

Some professionals now use AI to script tough feedback or role-play office conflicts to avoid “messy” interactions. What happens to a team’s ability to innovate when this “creative abrasion” is removed, and how can workers rebuild the essential skills needed to handle interpersonal friction directly?

I am a firm believer that we actually need tension and messiness—what Professor Linda Hill calls “creative abrasion”—to produce truly innovative work. When we use AI to sanitize our feedback or avoid the “drama” of a disagreement, we lose the productive friction that allows us to see problems from new angles. If we outsource the management of our interpersonal dynamics to a machine, we risk losing our personal capability to diagnose and address conflict, which is a dangerous skill to lose in such polarized times. To rebuild these skills, teams should consciously distinguish between transactional tasks and relational ones. If the goal is to deepen a connection or solve a complex team problem, you must put the AI aside and engage in the awkward, vulnerable, and direct human back-and-forth. These “messy” moments are exactly how we build long-term bonds and psychological safety.

Constant interaction with transactional, obedient AI tools can change how we expect human colleagues to behave. In what ways does this digital interaction spill over into real-life levels of patience and empathy, and what rituals can teams adopt to prioritize human connection over pure efficiency?

There is a real risk that our “transactional” habits with AI—where we don’t have to be polite or worry about feelings—will bleed into our human interactions. Because AI programs generally tell us what we want to hear or can be “reprogrammed” if they don’t, we may find ourselves having less patience when a human colleague disagrees or moves slower than a processor. To prevent this, I recommend that teams establish specific rituals that emphasize the human over the digital, such as dedicated “no-tech” coffee chats or start-of-meeting check-ins focused solely on personal well-being. You might even explicitly write down your intent: “I will be transactional with my AI tools, but I will remain collaborative and empathetic with my colleagues.” By making this distinction conscious, we can protect our “brain synchrony,” which is the neurological alignment that facilitates social interaction and weakens when we spend too much time isolated with machines.

Disclosing AI use helps reduce the cognitive load for others, yet some individuals use it specifically to navigate learning differences or language barriers. How should organizations navigate the ethics of transparency versus personal privacy, and what guidelines help determine when a task is too relational for automation?

This is a delicate balance, as AI can be a powerful accommodation for someone who is dyslexic or navigating a second language, allowing them to communicate in a fraction of the time. In these cases, transparency still matters, but the context of “why” it’s being used changes the team’s perception from “lazy” to “effective.” Organizations should create a culture where people feel safe sharing their use of AI as a tool for equity without feeling they are oversharing private medical or personal information. A good rule of thumb for determining if a task is too relational is to ask: “Is this interaction about accomplishing a task or building a relationship?” If you are writing a firm email to a general contractor, AI is a great tool; if you are trying to build rapport with a new hire or resolve a misunderstanding with a partner, the task is too relational for automation. In those instances, you should use AI only as a “bridge”—perhaps for brainstormed conversation starters—but the actual delivery must be human.

What is your forecast for the future of human-AI collaboration in the workplace?

My forecast is that we will see a “premium” placed on authentic human interaction as AI becomes more ubiquitous. In the next few years, I believe the most successful organizations won’t be the ones that automate the most, but the ones that use AI to clear away the administrative “slop” so they can double down on face-to-face collaboration. We will likely see a shift where “soft skills” like empathy, conflict resolution, and the ability to navigate “creative abrasion” become the most valuable assets a worker can possess. However, if we aren’t intentional, we run the risk of creating “silent offices” where everyone is talking to their personal bot instead of each other. The future depends on our ability to use AI as a tool to get us to the “human” work faster, rather than using it as a shield to hide from the very relationships that make work meaningful.
