Defining human centricity

Are you for or against humans?

You’d be hard-pressed to find a leader who’d say “no” to this question publicly, especially anyone building frontier models or integrating AI into their products and services. The follow-up to that question, however, will get a variety of answers — some meaningful, others mere talking points. Which raises the question: what does human-centric AI look like?

A Wired writer recently filmed dirty socks for $0.37. It was part of his experiment with DoorDash’s new Tasks app, in which gig workers strap phones to themselves and record themselves completing everyday tasks — doing the laundry, scrambling eggs, taking a walk through the park. DoorDash isn’t alone in this quest, though it may be the most visible example of human work being replicated at scale through the infrastructure of the gig economy. Lawyers and scientists who’ve lost their jobs are now being recruited to train AI models — models that, for some, were the very reason their employers laid them off in the first place. Meanwhile, China already has dark factories whose assembly lines are staffed entirely by robots. Automating certain types of work seems reasonable enough, but who is deciding which types of work make sense to automate, why, and how? And most importantly, what is our vision for a world in which work takes on a different shape altogether, while labor and its output remain the currency by which we measure our value and worth?

All of this is getting harder to discern in the age of “toxic confidence,” as The New York Times recently declared. We’ve always used confidence as a shortcut for credibility. Doctors, executives, lawyers — you trust them partly because certainty signals they’ve earned the right to be sure. AI broke that shortcut, with LLMs that sound perfectly confident whether they’re right or wrong. But people are adopting confidence as a performance, too, speaking with a certainty that would previously have come only with experience and expertise. Signals of authority are now buried in noise, and worse yet, it’s getting harder to see how confidence as a measure of credibility can hold people (or machines) accountable to anything.

Gig workers or lawyers having to train the AI models that replace them is the physical version of the same problem. AI is learning the mechanics of laundry and eggs and document review without any of the judgment. It’s getting the footage, not the context. The decision-making behind it — when to act, why, and which approach fits which constraints — doesn’t make it into the video or the mark-ups.

This is the moment when the “how” becomes the most valuable thing you have. Not just the output, but the decision architecture behind it. I can think of so many untold stories from the rooms I sat in over the past 20+ years of my career — rooms where discernment mattered and being wrong had a cost. Where the brief either landed or it didn’t, and someone was keeping score. Where I built the credibility to sound sure by being publicly wrong first and rebuilding from there. AI doesn’t have that, and whatever we’re feeding it right now will be derivative, not formative. Gig footage doesn’t capture it. Executives who perform certainty without it are running the same problem at a different price point.

When I imagine a future with AI that doesn’t leave people behind, I think about the things that make us most human: our imperfections, our mistakes, and most importantly, our ability to own the decisions we make and the actions we take. It’s not about false modesty, but about telling the stories of stumbles as well as triumphs as we try something — giving away the playbook. It’s uncomfortable in a culture where confidence is currency. But confidence, one of the most human signals we used to rely on, has been commoditized and diluted, with or without AI. Building a people-first future with a technology-first tool requires our leaders to be just that — human: willing to share their process, invite scrutiny, and, when wrong, own up to it with enough specificity that it’s legible, accountable, and useful.

So, are you for humans? Most of us will say yes. The harder question is whether we’re building, leading, and operating like we mean it — and following through on shaping what that means for everyday people.
