The Birth of the H-Corp
What Organizations Owe Humanity in the Age of AI
The CEO of Shopify infamously declared that managers must “demonstrate why they cannot get what they want done using AI” before hiring another human. His default assumption: work should be automated, not done by a person.
This mindset—efficiency über alles—represents an existential threat to human dignity in the workplace. And it’s spreading faster than our ability to shape alternatives. We’re watching organizations rush toward “AI-first” without asking the fundamental question: What do we owe the humans who make our organizations’ continued existence possible?
It seems to many that shareholders reign supreme, while the stakeholders who co-create the excess value they demand are valued less and less. Perhaps we can strike a balance between short-term and long-term thinking, between fast and slow, and between conflicting stakeholder priorities. As Wayne Aspland wrote, “Short-term thinking exacerbates conflicts. Long-term thinking minimises them.” It doesn’t need to be this way. We have chosen this path. We can choose another.
On September 17, 2025, fifty participants spanning multiple generations and continents engaged in a World Cafe-style online dialogue about defining human-centered organizations. Using Thinkscape® technology that enabled anonymous small-group conversations, they identified leading principles, policies, and practices that move us toward our collective ambition: ensuring organizations support a future in which humanity flourishes, rather than adopting technologies of pure efficiency and human replacement.
The Language Problem We Must Address
Before sharing what emerged, we need to acknowledge a critical tension that Nilofer Merchant privately articulated in the run-up to the event: the very term “human-centered” carries limitations. As she notes, Uber exemplifies “human-centered” design that delights riders and drivers while contributing to urban congestion, precarious labor, and environmental damage. The app centers humans in the transaction while corroding the commons for humanity.
Her critique cuts deeper: “human-centered” often means “privileged-human-centered,” defaulting to those with voice and access while excluding the marginalized. Corporate realities flatten good intentions into quarterly metrics. What begins as an inquiry into lived experience narrows to “How does this boost satisfaction scores?”
We grapple with this critique even as we focus on “human-centered AI.” Perhaps we need frames that are equity-centered, justice-centered, or life-centered – as participant Nils von Heijne suggested, learning from nature’s patterns of collaboration.
In this time of uncertainty, as Michael Dennedy pointed out, we all need more specificity and a shared understanding of the language we choose and use, especially when programming AI or discussing values.
The H-Corp framework we are developing represents a pragmatic starting point, not an endpoint. It’s a stake in the ground saying: organizations must put people first while we collectively evolve better language for this movement. While the language of Ethical AI, Responsible AI, and Human-Centered AI is not perfect, these terms serve as suitable placeholders for the sorts of conversations we urgently need.
One of the objectives of our effort is to coalesce the organizations working under these banners, from around the world, into a broader movement in the months ahead. Regardless of our language and points of focus, we all seek the same positive future for humanity.
The Core Concepts That Emerged
Across the three questions covering principles, policies, and practices, diverse voices generated rich and varied insights while organically converging on several shared human concerns.
What emerged was not consensus on a single answer, but something more valuable: organic patterns showing which concerns appear universally across diverse groups, alongside rich variation in how to address them.
Core Principle: “Enable humans rather than replace them”
Participants rejected the false binary of efficiency versus humanity. As one participant wrote: “We are here to be humans, not just produce crap.” Another clarified: “Humans drive the loop, not humans in the loop”—positioning humans as orchestrators, not babysitters of AI systems. Perhaps, as my colleague Michael Moon noted recently, what we should be thinking about is Humanity in the Loop.
Essential Policies: Two leading concepts emerged:
“Reinvest AI gains to support humans”—ensuring productivity benefits flow back to those who enable them
“Adopt a human-centered AI strategy”—keeping humans in control with AI in service
One participant crystallized this: “If we can feel the gains, we can emotionally invest in the adoption.” Another warned: “Never create dependency on tech, promote self-sufficiency.”
Critical Practices: Two actions were seen as foundational:
“Resist prioritizing growth/efficiency above human values”—active protection against value erosion
“Embed empathy and ethics into every process, especially decision making”—structural integration, not an optional add-on
We invite you to explore the full dataset—the raw transcripts, the Thinkscape analysis spreadsheets, and the visualization overviews—all publicly available to support the collaborative development of the H-Corp.
Progress Amidst Tensions
The tragedy of the commons surfaced repeatedly. As one participant noted: “There’s a conflict between what broadly benefits the ecosystem versus short-term gains for a single organization.” Of course, this has always been a tension in Western societies built on (mostly) free markets and capitalism. For several capitalists in the room, this shifting sentiment towards greater market and technological harmony was unsettling. That is understandable, but it needn’t be. While many may believe that capitalism itself is at fault, I believe leaders and investors bear the responsibility for meeting this challenge. There is a need to rebalance the system so that we don’t sacrifice the market’s long-term survivability for continuous quarter-over-quarter growth in profits.
Moving from the core concepts to the less understood ones, we must now explore the gray areas we are struggling to navigate. We must establish clear delineations between the contexts in which AI can be beneficial and those in which it can be harmful, for both the organization and humanity.
Where do we draw the line between augmentation and replacement? Is it acceptable for a startup founder who has no capital to pay employees to instead use AI for code development, marketing campaigns, and website development, building and operating the company alone in roles where jobs never existed? How about a social entrepreneur working on social justice, similarly hampered by a lack of operating funds, who can leverage generative AI to create positive social and economic impact without any other humans in their organization?
This is a challenging task, and one in which I suspect we will invest considerable time, beyond gaining clarity on the more widely agreed-upon dimensions of the organization and its design.
Beyond the Popular Concepts: The Emergent Thinking
While convergence provides direction, the emergent perspectives offer essential nuance:
Cultural Sovereignty: “There’s growing unease with the idea that generative AI should be a universal tool shaped in Silicon Valley. We can make a compelling start if we strive for a diverse ecosystem of AI tools that reflects the plurality of human society.”
Pipeline Preservation: “Can’t entirely replace junior-level hiring with AI” - protecting the experience development pathways that create tomorrow’s leaders.
The Vibes Factor: Moving beyond surveys to “establish better ways of learning what people are thinking by indirect participation (vibes) over direct metrics.”
These aren’t dissenting opinions - they’re markers of sophistication, showing participants grappling with implementation realities while maintaining an aspirational vision. We encourage you to look behind the popular concepts and pull on the threads at the edges of the conversational data to tease out something more meaningful and instructive for our collective effort.
Patterns of Convergence and Essential Diversity
While the groups generated hundreds of insights, certain themes appeared independently in multiple “think tanks”, suggesting shared concerns:
Several important principles:
Respect for human experience and individuality (8/13 groups)
Humanity over profit (7/13 groups)
Empathy as core value (6/13 groups)
Transparency and open communication (6/13 groups)
Plus dozens of other valuable principles
Alongside important policy directions like:
Mandatory AI literacy training (8/12 groups)
Transparency and explainability requirements (8/12 groups)
“Do No Harm” assessment protocols (6/12 groups)
Stakeholder governance structures (6/12 groups)
Combined with critical practices such as:
Leadership modeling and demonstration (8/11 groups)
Measurement and accountability systems (7/11 groups)
Education and human development (6/11 groups)
Communication and listening infrastructure (6/11 groups)
From Principles to Practice: The Path Forward
This conversation revealed not idealistic thinking but practical wisdom. Participants consistently framed human-centeredness as a competitive advantage:
Long-term sustainability over short-term efficiency
Differentiation in an AI-saturated market
Innovation catalyst through human creativity
Talent magnet for meaning-seekers
The data—raw transcripts, analysis, visualizations—tells a richer story than any summary could capture. We’re making this complete dataset available to the public to further our efforts. Today, we encourage you to:
Explore how the topics evolved across groups
Identify patterns and insights we might have missed
Comment here to contribute your perspective to the manifesto draft
Write your own articles and social media posts on this topic, tagged with #HCorpManifesto
Apply insights within your organization and share feedback from others
The H-Corp Manifesto: Your Voice Needed
This November, we aim to publish the first draft of the H-Corp Manifesto - not as a finished doctrine, but as a living document shaped by our collective wisdom and efforts. It won’t be perfect. It won’t solve every tension between human dignity and organizational imperatives. But it will establish a north star for a brighter tomorrow: one where organizations choose humanity over pure efficiency, augmentation over replacement, flourishing over extraction.
The manifesto will belong to everyone committed to ensuring that organizations amplify, rather than diminish, our humanity. Your critique, your experience, and your vision are essential to what emerges. Please join us.
This Moment Requires Your Participation
We stand at a critical crossroads, a moment in time when all our conversations, individually and collectively, have the power to shape the future we choose to create. This is “a naming moment” for the broader movement, similar to when “social media” crystallized as a concept. But it’s more than naming; it’s choosing whether we let AI transformation happen to us or whether we shape it for us. Can we intentionally choose outcomes that serve us all, or will society be reshaped by unconscious biases and decisions whose unintended consequences may ultimately harm us all?
Let’s not forget where this conversation began. My collaborators from the Human Centric AI working group set the stage: Gary Bolles reminded participants that we have a chance to be more intentional than we were during the internet revolution. Jorge Costa shared his transformation from AI skeptic to advocate, crediting AI with creating “headroom for creativity and growth.” Tara Mandrekar challenged us to prioritize human agency, learning from social media’s unintended consequences.
What is your perspective, your approach, or your key insight that will guide us to a better tomorrow? What has been your experience so far? What kind of organization will best serve humanity’s interests?
Twenty participants from our Defining Human Centered Organizations event have already sought to become more deeply involved in our effort. That is a terrific start from our first public gathering, but we will need hundreds more voices, especially from young people, frontline workers, and communities that are often excluded from these conversations. We aim to gather all organizations, associations, community efforts, and individuals who share our passion for this cause, supporting the energy of the underlying movement regardless of whether they choose ethical AI, responsible AI, or human-centered AI as their focus.
Please write about this work, share it with colleagues, and invite those you know who care about it as much as you do to become part of a more prosperous future, one where human flourishing is at the center of all organizations, not just a select few.
Join the Movement
Immediate Actions:
If you haven’t already, please subscribe to access the complete dataset and analysis
Download the raw data and insights from our Thinkscape® powered conversation (10MB)
Share your reflections on what human-centered means to you in the comments or in your own posts tagged with #HCorpManifesto
If you are interested in doing more with us, complete this form
Next Steps:
Review the manifesto draft when it is published in November, and support it as a signatory
Pilot human-centered practices in your organization, and share the results
Contribute your expertise through writing or research, and publish everywhere
For Organizations:
Become early adopters of H-Corp principles
Host similar conversations within your teams (Research from the Team Flow Institute is a useful guide)
Share what’s working and what isn’t
The conversation on September 17th was a seed. Together, we’re cultivating something transformative—ensuring that in the age of AI, efficiency gains translate into human thriving, not human obsolescence. You are not alone, and you are not crazy. You are part of something bigger than you can imagine: a movement not only to ensure we leave no human behind, but to reshape society with humanity at its core - in its organizations and across the global economy.
Because if we don’t define human-centered organizations, who will?
Chris Heuer is the Managing Director of the Team Flow Institute, where he works to enhance team collaboration, performance, and well-being, enabling organizations to fully leverage the benefits of AI-augmented workflows. He is also a producer of global movements and a facilitator of meaningful conversations, such as this, with a deep understanding of society, technology, economics, and human potential.

