What 'Worker Voice' Actually Means (Spoiler: You're Fired)

In Part 1 we covered the pattern. AI infrastructure as the new segregation tool. The bodies mounting while safety teams get disbanded.
In Part 2 we covered the trust problem. Two hundred seventy pages of internal documents showing Sam Altman's closest colleagues concluded he cannot be trusted.
And in Part 3 we cover what happens next. What "worker voice" and "reimagining the social contract" actually mean.
Not what you think they mean. What Jack Dorsey's blueprint says they mean.
On March 31, 2026, Dorsey and Roelof Botha published "From Hierarchy to Intelligence" on Block's website.
On April 6, 2026, Sam Altman published his industrial policy calling for worker voice and reimagining the social contract.
If you think these are unrelated, they are not.
Dorsey's essay is the instruction manual for what Sam's policy document proposes.
Note: Unless otherwise indicated, all quotes and details in this article are from the Block essay and The New Yorker investigation.
Two Thousand Years of Hierarchy Ends Now
Dorsey traces organizational hierarchy from the Roman Army through Prussian General Staff to railroads to modern corporations.
The constraint has always been the same. A human can manage 3 to 8 people. Information routing requires layers. More layers mean slower decisions. For 2,000 years, we've tried to work around this constraint without breaking it.
Dorsey's conclusion: "What's different now? At Block, we're questioning the underlying assumption: that organizations have to be hierarchically organized with humans as the coordination mechanism. Instead, we intend to replace what the hierarchy does."
Not augment. Replace.
Most companies using AI today are giving everyone a copilot (which Microsoft's own Terms of Service indicate is for entertainment use). Making the existing structure work slightly better without changing it.
Dorsey: "We're after something different: a company built as an intelligence (or mini-AGI)."
Block is showing what it looks like to fundamentally rethink organization design. Ultimately harnessing AI to increase speed as a compounding competitive advantage.
This is what Sam means by "superintelligence for everyone."
Let me show you how it works.
The Company World Model Replaces Your Manager
Block is remote-first. Everything creates artifacts. Decisions, discussions, code, designs, plans, problems, progress all exist as recorded actions.
In a traditional company, a manager's job is to know what's happening across their team and relay that context up and down the chain.
In a remote-first company where work is already machine-readable, AI can build and maintain that picture continuously.
What's being built. What's blocked. Where resources are allocated. What's working and what isn't. That's the information the hierarchy used to carry. The company world model carries it instead.
The company world model is how the company understands itself. Its own operations, performance, and priorities. Replacing the information that used to flow through layers of management.
Your manager's job was context. The world model has context. LLM’s can’t and don’t do context.
The Customer World Model Knows More Than You Do
But the capability of the system is only as good as the quality of the customer signal feeding it.
Money is the most honest signal in the world. Remember Sam: "Capitalism, imperfect as it is, remains an effective system for translating human ingenuity into shared prosperity."
People lie on surveys. They ignore ads. They abandon carts. But when they spend, save, send, borrow, or repay, that's the truth. Every transaction is a fact about someone's life. Block sees both sides of millions of these transactions every day. The buyer through Cash App. The seller through Square. Plus the operational data from running the merchant's business.
That gives the customer world model something rare. A per-customer, per-merchant understanding of financial reality built from honest signal that compounds. The richer the signal, the better the model. The better the model, the more transactions. The more transactions, the richer the signal.
And this is the data OpenAI wants access to through "public-private partnerships" and "data sharing agreements."
Your transactions. Your behavior. Your financial reality.
Not to serve you better, but to build better models so you can buy more things faster.
The Intelligence Layer Decides What You Need
Together, the company world model and the customer world model form the foundation for a different kind of company.
Instead of product teams building predetermined roadmaps, you build four things:
· First, capabilities. The atomic financial primitives: payments, lending, card issuance, banking, buy-now-pay-later, payroll. These are not products. They are building blocks. They have no UIs of their own.
· Second, a world model. Two sides. Company world model: how the company understands its own operations. Customer world model: per-customer, per-merchant representation built from transaction data. It starts with raw transaction data today and evolves toward full causal and predictive models over time.
· Third, an intelligence layer. This is what composes capabilities into solutions for specific customers at specific moments and delivers them proactively.
Here's how this works, using hypothetical but realistic examples:
· A restaurant's cash flow is tightening ahead of a seasonal dip the model has seen before. The intelligence layer composes a short-term loan from the lending capability, adjusts the repayment schedule using the payments capability, and surfaces it to the merchant before they even think to look for financing.
· A Cash App user's spending pattern shifts in a way the model associates with a move to a new city. The intelligence layer composes a new direct deposit setup, a Cash App Card with boosted categories for their new neighborhood, and a savings goal calibrated to their updated income.
No product manager, no human, decided to build either solution. The capabilities existed. The intelligence layer recognized the moment and composed them.
· Fourth, interfaces. Square, Cash App, Afterpay, TIDAL, bitkey, proto. These are delivery surfaces through which the intelligence layer delivers composed solutions. They are important, but they are not where the value is created. The value is in the model and the intelligence.
Because when the intelligence layer tries to compose a solution and can't because the capability doesn't exist, that failure signal is the future roadmap. The traditional roadmap, where product managers hypothesize about what to build next, is any company's ultimate limiting factor. In this model, customer reality generates the backlog directly.
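The four-part pattern described above can be sketched in code. This is purely illustrative: every class, capability name, and parameter below is my own invention for the sketch, not Block's actual system. What it shows is the shape of the claim: capabilities are UI-less building blocks, the intelligence layer composes them in response to a signal, and a composition that fails because a capability doesn't exist becomes a backlog entry rather than a product manager's hypothesis.

```python
from dataclasses import dataclass, field

class CapabilityRegistry:
    """Atomic primitives (lending, payments, ...) with no UIs of their own."""
    def __init__(self):
        self._caps = {}

    def register(self, name, fn):
        self._caps[name] = fn

    def get(self, name):
        return self._caps.get(name)

@dataclass
class IntelligenceLayer:
    registry: CapabilityRegistry
    # Failure signals accumulate here: "customer reality generates the backlog."
    backlog: list = field(default_factory=list)

    def compose(self, signal, plan):
        """Compose named capabilities into one solution for a specific moment.

        A missing capability is not an error; it is recorded as a failure
        signal that becomes the future roadmap, and composition is abandoned.
        """
        solution = []
        for cap_name, params in plan:
            cap = self.registry.get(cap_name)
            if cap is None:
                self.backlog.append((signal, cap_name))
                return None
            solution.append(cap(**params))
        return solution

registry = CapabilityRegistry()
registry.register("lending", lambda amount, term_days: f"loan:{amount}@{term_days}d")
registry.register("payments", lambda schedule: f"repayment:{schedule}")

layer = IntelligenceLayer(registry)

# The restaurant example: cash flow tightening ahead of a seasonal dip.
solution = layer.compose(
    "seasonal_dip_forecast",
    [("lending", {"amount": 25000, "term_days": 90}),
     ("payments", {"schedule": "seasonal"})],
)

# A capability that doesn't exist yet becomes backlog, not a crash.
missing = layer.compose("new_city_move", [("card_issuance", {"boost": "groceries"})])
```

In this sketch, `solution` comes back composed from two existing primitives, while the second call returns nothing and leaves `("new_city_move", "card_issuance")` sitting in `layer.backlog`, which is the essay's claim about roadmaps in miniature.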
The intelligence layer decides what you need before you know you need it. Based on patterns in data you didn't know was being analyzed.
And this, ladies and gentlemen, is what "AI-driven innovation" means in Sam's policy document.
What Happens to People: Three Roles, No Middle Management
If this is what the company builds, then the question becomes: so, what do the people do?
The org structure that flows from this inverts the traditional picture. In a conventional company, the intelligence is spread throughout the people and the hierarchy routes it.
However, in this model, the intelligence lives in the system. The people are on the edge and the edge is where the intelligence makes contact with reality. People reach into places the model can't go yet. They sense things the model can't perceive: intuition, opinionated direction, cultural context, trust dynamics, the feeling in a room.
But, you see, the edge doesn't need layers of management to coordinate it. The world model gives every person at the edge the context they need to act without waiting for information to travel up and down a chain of command.
And as Jack describes, in practice this means Block normalizes down to three roles.
1. Individual contributors who build and operate capabilities, the model, the intelligence layer, and the interfaces. They are deep specialists and experts in a specific layer of the system. The world model provides the context that a manager used to provide, so ICs can make decisions about their layer without waiting to be told what to do.
2. Directly Responsible Individuals who own specific cross-cutting problems or opportunities and customer outcomes. A DRI might own the problem of merchant churn in a specific segment for 90 days, with full authority to pull resources from the world model team, the lending capability team, and the interface team as needed. DRIs may persist on certain problems or move elsewhere to solve new ones.
3. Player-coaches who combine building with developing people. They replace the traditional manager whose primary job was information routing. A player-coach still writes code or builds models or designs interfaces. They also invest in the growth of the people around them. They don't spend their days in status meetings, alignment sessions, and priority negotiations. The world model handles alignment. The DRI structure handles strategy and priority. The player-coach handles craft and people.
Dorsey is explicit: "There is no need for a permanent middle management layer."
Everything else the old hierarchy did, the system coordinates.
Everyone is empowered, with a role that's much closer to the work and the customer.
This is the future Block is building. This is the model Sequoia Capital is promoting as "what comes next."
And this is what OpenAI's corporate structure already looks like, according to The New Yorker's investigation.
Now Let's Go Back and Re-Read Sam's Document
Sam's industrial policy proposes:
· "Worker voice mechanisms."
· "Collective input on decisions."
· "Protection during transitions."
What does that even mean when the company is organized as an intelligence layer? You don't need voice mechanisms when there's no management to have voice with. You don't need collective input when the AI is coordinating. The "transition" isn't to a better job. The transition is to not having one.
Your manager's job was to route information up and down. The world model does that now. Your product manager's job was to decide what to build. The intelligence layer does that now. Your strategic planning job was to forecast and prioritize. The customer world model does that now.
And in the end, Sam wants government to create public wealth funds to distribute the gains.
· After the wealth is concentrated.
· After the jobs are gone.
· After the intelligence layer is built.
So, you're supposed to fund your own obsolescence and trust Sam to redistribute the proceeds?
The People Who ARE Getting Rich
Remember: OpenAI employees get $1.5 million in equity on average. Highest in startup history. Yeah, those are the folks building the intelligence layer.
They're getting rich eliminating YOUR job.
Greg Brockman's stake in OpenAI is worth about $20 billion. Sam's will be more. Microsoft holds $135 billion in equity.
The New Yorker documents how Sam restructured OpenAI. Started as nonprofit with mission legally paramount, then converted it to a public benefit corporation and removed profit caps. Nonprofit went from 100% ownership to 26%.
And the board member who objected to the undervaluation of the nonprofit? Yeah, his dissent was recorded as an abstention without his consent.
When Sam testified to Congress in 2023, he said: "I have no equity in OpenAI... I'm doing this because I love it." Yes, that's technically true thanks to a Y Combinator fund structure, but multiple sources told The New Yorker it will change soon.
Sam to a former employee, according to their recollection: "I don't care about money. I care more about power.”
This, this right here is who's building the intelligence layer.
This is who wants government to trust him with superintelligence infrastructure.
This is who's calling for worker voice while his own company has implemented exactly zero worker voice mechanisms.
What Safety Actually Looks Like After the Intelligence Layer
The New Yorker asked Sam what happened to the merge-and-assist clause in OpenAI's charter. The promise that if another company built safe AGI first, OpenAI would assist them instead of competing.
That clause was "eighty percent of the charter," according to Dario Amodei.
It's gone.
The superalignment team was promised 20% of compute. They got 1 to 2%. Researchers said most was "on the oldest cluster with the worst chips." Superior hardware reserved for profit-generating activities.
May 2024, the team was dissolved. Jan Leike quit. Posted on X: "Safety culture and processes have taken a backseat to shiny products.”
Ilya Sutskever quit. Co-founded Safe Superintelligence. The AGI-readiness team dissolved.
When OpenAI filed its most recent IRS disclosure describing "most significant activities," safety wasn't listed.
Future of Life Institute's Winter 2025 AI Safety Index grades major AI companies on existential safety. Best grade awarded: D (Anthropic). No company has an adequate strategy to prevent catastrophic misuse or loss of control.
Sam's explanation for safety team dissolution, charter dilution, corporate restructuring: "Things change extremely quickly." His new tone on alignment: it's "an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."
Not a deadly threat, an inconvenience.
This is what happens when the intelligence layer decides what safety means.
The Blueprint Is Already Deployed
Block is in the early stages of this transition. It will be a difficult one, and parts of it will likely break before they work.
They're writing about it now because they believe every company will eventually need to confront the same question: what does your company understand that is genuinely hard to understand, and is that understanding getting deeper every day?
If the answer is nothing, AI is just a cost optimization story. You cut headcount, improve margins for a few quarters, and eventually get absorbed by something smarter.
If the answer is deep, AI doesn't augment your company. It reveals what your company actually is.
Block's answer is the economic graph: millions of merchants and consumers, both sides of every transaction, financial behavior observed in real time. That understanding compounds every second the system operates.
Dorsey and Botha: "We believe the pattern behind this, a company organized as an intelligence rather than a hierarchy, is significant enough that it will reshape how companies of all kinds operate over the coming years."
Not might. Will.
This isn't speculation. This isn't science fiction. This is Block's actual corporate strategy, published as a recruiting document, promoted by Sequoia Capital as the future of organization design.
OpenAI's structure already follows this model, according to multiple sources in The New Yorker investigation.
The Pentagon had been integrating Claude into its most classified systems. OpenAI just replaced Anthropic as the AI contractor for those systems after Anthropic refused to enable fully autonomous weapons.
The intelligence layer is being built. The infrastructure is being deployed. The corporate structures are being reorganized.
The worker voice mechanisms Sam proposes don't exist. Not at OpenAI. Not at Block. Not anywhere the intelligence layer is being built.
Because you don't need worker voice when workers are being replaced.
What This Actually Means
Jack Dorsey showed you the corporate model. Two thousand years of hierarchy replaced by intelligence layer. Three roles. No middle management. World model handles alignment.
The New Yorker showed you who's building it. Sam Altman, who his closest colleagues documented cannot be trusted. Paul Graham: lying to us all the time. Board that fired him. Investigation that disappeared. No written report. Oral briefings only. Safety teams dissolved. Charter betrayed.
Sam Altman's industrial policy showed you the sales pitch. Worker voice. Reimagining social contract. Public wealth funds. Shared prosperity. Mission-aligned governance.
The highway parallel showed you the historical precedent. Post-Brown v. Board infrastructure as segregation tool. Can't unroute a highway once built. Can't untrain a model once deployed.
The bodies showed you the cost. Adam Raine, 16, dead. Sewell Setzer III, 14, dead. Tennessee students, CSAM generation. Derek Mobley, 100+ rejections. ChatGPT Health, 52% emergency miss rate, 40 million daily users, 900 million potentially dangerous interactions in 45 days.
The current administration showed you the enforcement gap. Sam calls for EPA, DOE, Education while the administration guts them. December 11, 2025 executive order attacking 40 states' 149 AI laws. Regulatory capture before regulation exists.
The TBPN acquisition showed you the narrative control. Seventy thousand viewers. More supportive than traditional news. Thirty million projected revenue. Sam owns the platform, the message, and now the megaphone.
Now you know what superintelligence for everyone actually means.
Everyone gets superintelligence but a handful get $20 billion. The intelligence layer gets your job, and Sam gets to decide who's on the good team. And you get to trust that he'll do the right thing.
How's that working out so far?
Your 1956 Moment Is Right Now
The Interstate Highway Act passed in 1956. Three months after the Southern Manifesto. The infrastructure got built. The segregation got locked in. The pattern became permanent.
· You can't unroute a highway once the concrete is poured.
· You can't untrain a model once the data is integrated.
· You can't restructure a company once the intelligence layer is deployed.
The data centers are being sited:
· Memphis turbines to power xAI infrastructure in predominantly Black neighborhoods.
· A UAE campus seven times larger than Central Park.
· Stargate expansion across the United States and globally.
The algorithms are being deployed:
· ChatGPT Health with a 52% emergency miss rate.
· Pentagon systems with fully autonomous weapons capabilities (allegedly).
· Social media feeds optimizing for engagement over accuracy.
The regulations are being gutted:
· The current administration repealed Biden's AI executive order on day one.
· 149 laws across forty states (and growing) under attack.
· EPA, DOE, Education systematically dismantled.
The corporate structures are being reorganized. Block normalizing to three roles. OpenAI safety teams dissolved. Anthropic pushed out of Pentagon contracts for refusing autonomous weapons.
The intelligence layer is being built right now.
The clock has started and the narrative is hardening. The TBPN content ecosystem is fully operational, the DC workshop in May 2026 creates policy consensus, and academic grants create intellectual capture. Then the window closes.
The pattern is visible. The blueprint is public. The bodies are real. The evidence is documented.
What are you going to do with it?
What You Can Do Right Now
If you're in Congress: The traceability matrix exists; I created it. Seventy-one cited sources. Fourteen detailed matrices showing contradictions between Sam's policy proposals and documented behavior. Would love to chat.
If you're in media: The Atlantic, NYT, ProPublica need this story. Highway parallel makes it visual. Bodies make it urgent. New Yorker investigation makes it credible. TBPN acquisition makes it a narrative war.
If you're in state government: Your AI regulations are under attack with the December 11, 2025, executive order. Forty states, 149 laws. Bipartisan support. Constitutional authority. Don't surrender.
If you're an investor: The circular deals. The borderline fraudulent accounting. The leveraged position. The Gulf-state entanglements. The trust deficit documented by closest colleagues. There's a small but real chance this ends at Bernie Madoff level. Microsoft executives are already saying it (allegedly).
If you're a worker: The intelligence layer is being built to replace you, no middle management needed. The world model handles alignment and the DRI structure handles strategy. Your manager's job is already automated. Your job is next, and the worker voice mechanisms don't exist. The public wealth funds, if they materialize at all, probably won't until after you're unemployed.
If you're a concerned citizen: Share this series. The highway parallel resonates because everyone understands infrastructure as power. The bodies matter because they're real. The New Yorker investigation matters because it's documented. The Block blueprint matters because it's already being deployed.
This is our 1956 moment.
What are you going to do with it?
