
Augmenting Human Capabilities When All Work Becomes Humans + AI

We are crossing a threshold. Not in some distant imagined future, but right now, in the workflows and meetings and decisions that constitute the everyday reality of organizational life.

I recently had the privilege of closing Asana's Work Innovation Tour, an event built around the launch of their impressive new AI Teammates feature. What struck me most wasn't the technology itself, compelling as it is. It was the room. Hundreds of leaders, genuinely uncertain about what AI means for their teams, their roles, and the people they lead. That uncertainty is completely understandable. What I wanted to leave them with was something more useful: clarity, and optimism grounded in frameworks they could act on.

The feedback at the end told me we got there. Numerous members of the audience came up afterward and said they had never felt as optimistic about the positive potential of AI in and for work, and that the frameworks I'd shared had given them a concrete sense of the actions they need to take to create that future.

The Asana bet is the right one

First, a word on what Asana has built, because it's worth understanding why their approach is architecturally significant.

Asana CEO Dan Rogers has been direct: "Everyone is building autonomous agents, but autonomy is the wrong goal." This is a courageous position in a market intoxicated by the idea of fully autonomous AI. Yet it is exactly right. Research from Carnegie Mellon University found that autonomous agents fail at 70% of work-related tasks, not because AI lacks power, but because agents designed for individuals lack the context, checkpoints, and controls needed to be effective within teams.

Asana's AI Teammates are built around a different premise: that AI works best not when it replaces human judgment, but when it operates within the rich context of how teams actually function. Their Work Graph gives AI Teammates comprehensive organizational context, allowing them to build team-wide memory and continuously adapt to how work gets done. This is Humans + AI architecture made tangible in a product.

The four levels that matter

The framework that generated the strongest response in my keynote was the Spectrum of Cognitive Impact of AI.

The core insight: AI's impact on human thinking is not uniformly positive. It can be negative, neutral, or genuinely expansive, and which outcome you get depends almost entirely on how you engage with it.

At Level 1, Cognitive Corruption, over-delegation and uncritical acceptance of AI output replaces independent thinking. Mental models degrade. Attention, judgment, and critical reasoning quietly decline. The human in the Humans + AI system becomes the weakest link.

At Level 2, Neutral Productivity, humans review, adjust, and delegate tasks to AI for efficiency and convenience. Cognitive skill is maintained, but understanding remains largely unchanged. This is where most organizations currently sit. It represents a significant missed opportunity.

At Level 3, Cognitive Augmentation, active partnership with AI broadens perception, stimulates connections, and integrates capabilities. The reasoning and output quality far exceed what either achieves alone. This is already remarkable. Yet there is a level beyond it.

At Level 4, Expanded Intelligence, deep reflective use of AI, where the user actively questions outputs and integrates insights into their own thinking, permanently expands human capability. Cognition operates at a higher level even without AI present. The human has genuinely grown. This is the level worth aiming for, and the one that excites me most.

Metacognition is the new leadership competency

What makes this framework immediately actionable is that it pushes leaders into metacognition: thinking about thinking. Becoming aware not just of what you and your colleagues are doing with AI, but of which level you are actually operating at. And then taking deliberate action to shift upward.

This is a genuinely new leadership responsibility. The question is no longer whether your team has the right tools. It is whether they are thinking well with the tools they have. Whether AI is making your people sharper or slowly hollowing out the cognitive depth of your organization.

The answer isn't to use less AI. That would be both futile and wasteful. The answer is to use it better: with intention, with reflection, and with a commitment to expanding your people's capabilities through their use of AI.

The future that is already here

All work will be Humans + AI. This is not a prediction. It is a description of a trajectory already well underway.

The question has never really been whether AI will transform work. It will, and it is. The question is whether that transformation makes humans more capable or less. Whether the cognitive impact of AI in your organization trends toward corruption, stagnation, augmentation, or genuine expansion of human intelligence.

That is a choice. It requires frameworks, awareness, and leadership at an organizational scale. The organizations that master this will not merely be more productive. They will be more intelligent, more adaptive, and more capable of navigating whatever comes next.

The leaders who take their teams and organizations on this journey will discover that the greatest return on AI isn't productivity. It's intelligence.

Ross Dawson

About Ross Dawson

Ross works with leadership teams to design organizations, business models, and decision-making structures for a world shaped by AI. His work focuses on how Humans + AI reshape strategy, work, and leadership—helping organizations move beyond experimentation toward real, scalable impact.
