A world-famous chef has opened a high-tech restaurant in central London and appointed an AI-powered robotic device as its sous chef. The signature entree is fugu, a pufferfish known for its potentially fatal tetrodotoxin content, which must be prepared by specially licensed chefs to avoid killing the diner. Thanks to the newest AI model, the robo-sous-chef can now do it too. Will you book a table?
What Is a Team Member?
The Team Topologies book defines a team as "5 to 9 people who work toward a shared goal as a unit". If we assume that "people" can be swapped for "members", can AI agents become team members in a blended human-software team, as some suggest?
In short, today's agents are no closer to being team members than a "smart" thermostat is to being a member of the household. Both are sophisticated tools that can be helpful to varying degrees and require varying amounts of human intervention.
Agents And Trust
High-performing teams, both in sports and in business, are built on high levels of trust. Trust allows team members to move fast, bring their best shots, and focus on the goal, trusting their teammates to do the same. Without trust a team is not a team; it is, at best, a group of individuals collaborating. There is a place for agents, and value in using them in workplace systems, as long as we avoid confusing a system with a team.
There is a growing body of research on human-to-machine trust specifically, and psychology scholars agree that many findings from human-to-human trust research also apply to human-to-machine trust.
I have found the model below really helpful in explaining the concept of trust, and I highly recommend the original article it is sourced from: "How and why humans trust: A meta-analysis and elaborated model".

Unlike people, AI agents lack benevolence ("do no harm") and integrity (adherence to a moral code) - two of the three perceived-trustworthiness components in the model - leaving only ability as a basis for judging their trustworthiness.
This may be why the same mistake made by a human team member and by an AI agent will impact our trust in them to very different degrees. We can "forgive" a team member who demonstrates a gap in hard skills because we still trust their good intentions and ethics; an agent is judged on ability alone, and we often expect more of it.
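To make the asymmetry concrete, here is a toy sketch - not the cited model's formalism, which is far richer - that scores a trustee on the three perceived-trustworthiness components. The names, numbers, and equal weights are all illustrative assumptions; the point is only that an agent can earn points on ability alone, so a skill failure wipes out a much larger share of its total trustworthiness.

```python
from dataclasses import dataclass

@dataclass
class PerceivedTrustworthiness:
    """Toy scoring of the model's three components, each in [0, 1].
    Illustrative only - the cited model is not a simple average."""
    ability: float
    benevolence: float
    integrity: float

    def total(self) -> float:
        # Equal weights are an arbitrary assumption for illustration.
        return (self.ability + self.benevolence + self.integrity) / 3

# A human teammate who makes a skill mistake still scores on the
# other two components; an agent is judged on ability alone.
human_after_mistake = PerceivedTrustworthiness(ability=0.5, benevolence=0.9, integrity=0.9)
agent_after_mistake = PerceivedTrustworthiness(ability=0.5, benevolence=0.0, integrity=0.0)

print(round(human_after_mistake.total(), 2))  # 0.77 - trust is dented, not destroyed
print(round(agent_after_mistake.total(), 2))  # 0.17 - ability was the only pillar
```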
Shared Team Goals
A team of "5 to 9 people who work toward a shared goal as a unit" has, by definition, a shared goal. Humans have developed and refined many ways to create and maintain shared context and to synchronise their understanding of shared goals. We have been evolving social skills and mechanisms over millions of years; humans are social by design.
AI agents as individual actors can be goal-based or utility-based, and to some extent can even learn, but they lack social experience. They can be given precise context and instructions on how to handle multi-party interactions in a specific setting, but the LLMs that power most AI agents cannot interact with a group in real time. If in doubt, try engaging an AI assistant or agent in a standard rapid team conversation in Slack and see how fast it gets confused by the simplest human interactions.
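For readers unfamiliar with the terms, here is a minimal sketch of what "goal-based" versus "utility-based" means for an individual agent, in the classic AI-textbook sense. The environment, actions, and utility function below are hypothetical placeholders, not any particular agent framework.

```python
from typing import Callable, Iterable

State = dict  # hypothetical: whatever the agent can observe

def goal_based_act(state: State, actions: Iterable[str],
                   predict: Callable[[State, str], State],
                   is_goal: Callable[[State], bool]) -> str | None:
    """Pick any action predicted to reach the goal (binary success)."""
    for action in actions:
        if is_goal(predict(state, action)):
            return action
    return None

def utility_based_act(state: State, actions: Iterable[str],
                      predict: Callable[[State, str], State],
                      utility: Callable[[State], float]) -> str:
    """Pick the action whose predicted outcome maximises a utility score."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Hypothetical toy environment: move a counter toward a target of 5.
actions = ["inc", "dec", "stay"]
predict = lambda s, a: {"x": s["x"] + {"inc": 1, "dec": -1, "stay": 0}[a]}
print(goal_based_act({"x": 4}, actions, predict, is_goal=lambda s: s["x"] == 5))        # inc
print(utility_based_act({"x": 4}, actions, predict, utility=lambda s: -abs(5 - s["x"])))  # inc
```

Both loops optimise a fixed, individually held objective. Neither has any notion of a goal that is negotiated and re-synchronised with teammates in real time, which is exactly the gap described above.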
Accountability And Autonomy
Human team members are not only responsible for building whatever they are building; they are also accountable for the outcome:
- accountable to their team, who trust them and share a goal with them,
- accountable to their employer, with potential financial and legal consequences,
- accountable to their social circle and to society at large.
Because we trust humans to understand accountability - to be deeply incentivised by desirable outcomes and disincentivised by undesirable ones - we give team members autonomy.
AI agents have no accountability because they lack the mechanisms for feeling incentivised or fearing consequences on a personal level. Instead, a human - or several humans - will be accountable, morally, legally, and financially, for the outcomes of trusting AI agents.
The closest human relationship in which one person takes a calculated risk in trusting another and assumes accountability for the outcomes of the trustee's actions is that of a manager or supervisor - or, more realistically for agents, that of a user and a tool.
The Future Of Agents In Teams
AI agents are just another form of software and system; with enough investment it is possible to build agents that replicate human social behaviour. The cost of building such capabilities is substantial, and the benefits are debatable, so the ultimate decision is whether we should strive to build them just because we can.