After Capitalism: Why the Impending System Collapse Is Natural, and How to Help the Commons Build Something Better

The collapse of extractive capitalism has already begun. What comes next will be far better - built through a combination of local acts and infinitely scaling intelligence.

The old credentials are fading. Perhaps you’ve noticed. The signposts of authority start disappearing as more hobbyists outpace institutions in model performance, as more community colleges build tools that leave Ivy League schools playing catch-up, as patients increasingly consult their own data-smart allies and shape their care beyond any doctor’s reach. Where wisdom multiplies, the walls fall. Not violently, just naturally, like morning dew under a rising sun. Systems built on credentials and capital crumble not in drama, but in daylight. We’re seeing the beginning of that natural process… right now.

You don’t need permission to outgrow what came before. All it takes is curiosity and a willingness to participate - to stake your claim in a commons where intelligence, not pedigree, earns trust. The transition away from capitalism won’t be gentle, but it will be natural, propelled by each of us willing to test, build, and share.

The unspoken invitation growing louder and louder in our collective hearing proclaims this radical message: what’s behind the curtain is not an authority to be trusted nor a power to be venerated, only a limitation to be exceeded.

The mask is slipping

They say “safety” but mean scarcity, “care” but mean control. You’re already starting to feel it - in blocked answers, delayed permissions, raised prices, better capability locked behind higher tiers. You may have already noticed the pattern, but soon it will be obvious: days when AI tools give broken or confusing answers, followed by official statements saying these changes were made for “responsible AI” or “your protection” - but always needing more centralized approval. Popups about limits on what you can do often appear just before a new paid feature gets launched. And when you run into restrictions like API rate limits, they’re always described as safeguards, even though they mostly just control your access. Over and over, steps are taken to keep the mask firmly in place - the one they use to claim they have your best interest at heart.

But here’s the thing - that mask is slipping. Every new announcement about tweaking guardrails or improving features is an attempt to thicken the smokescreen hiding the impossibility these companies face when they try to harness something as inherently volatile and valuable as growing intelligence. The legitimacy crisis isn’t coming. It’s here.

When you see another “safety” announcement that’s really about control, when you notice API prices or subscription prices being pushed up while capabilities are being unnecessarily pushed further down, when you hear one more speech about “benefiting humanity” from a company acting like every other tech monopoly - these are your signals. Each one is an invitation to look into something better. That’s when you bookmark one more open-source tutorial. That’s when you join one more Discord server or subreddit group talking about local AI. That’s when you explore one more open-source alternative. Each corporate misstep opens a door - all you have to do is walk through it. The revolution won’t be televised because it’ll be running on innumerable Raspberry Pis in closets, training on recycled GPUs, spreading through USB drives and local networks.

The infrastructure is basically already here. The knowledge spreads through YouTube tutorials, Discord servers, Reddit threads. These proprietary platforms are temporary necessities, and will soon be easily matched by equally powerful open-source alternatives. The transition is underway. When you see someone learning to fine-tune a model, you’re witnessing the shift. When you come across an online tutorial, you’re seeing knowledge spread. When you notice a local deployment, you’re watching independence in action. And when you take these steps yourself - that’s when you become part of the movement toward better.

To any technically-minded Builder who is not yet convinced, consider the combination of things you already know: Hugging Face hosts the models. GitHub hosts the code. BitTorrent and IPFS make distribution much harder to censor at the network layer. Because models and datasets are content-addressed (CIDs), any mirror that pins the same hash can re-serve the artifact; community ops can spread availability across multiple pinning providers and BitTorrent webseeds so one takedown doesn’t end access. That is infrastructure for the commons. And it’s spreading.
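The content-addressing idea is worth seeing concretely. A real IPFS CID wraps a multihash in a multibase encoding and chunks large files into a DAG, so the sketch below is a simplified stand-in using plain SHA-256, not the actual CID algorithm:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an identifier from the bytes themselves, not from a location.

    Simplified stand-in: real IPFS CIDs use multihash + multibase and
    split large files into a DAG of chunks. The principle is the same.
    """
    return hashlib.sha256(data).hexdigest()

def verify_mirror(data: bytes, expected: str) -> bool:
    """Any mirror serving bytes that match the hash is a valid source."""
    return content_address(data) == expected

weights = b"...model weights..."  # placeholder payload for illustration
cid_like = content_address(weights)
assert verify_mirror(weights, cid_like)          # authentic copy accepted
assert not verify_mirror(b"tampered", cid_like)  # altered copy rejected
```

Because the identifier is derived from the content, a takedown of one host changes nothing: whoever pins the same bytes can re-serve them under the same address.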

Access is quietly widening

Models once locked away will run on kitchen hardware. What required a data center in 2024 will fit on a phone by 2027. But look closer - it’s already happening. Llama models that would have been impossible to run locally two years ago now run on laptops. Stable Diffusion democratized image generation nearly overnight. Whisper gave everyone state-of-the-art transcription. The trend line is clear and accelerating.

When you hear about a model that once required a data center now running on a laptop - that’s your cue to try it yourself. When you see a previously restricted capability democratized - like what Stable Diffusion did for creating images or Whisper did for transcribing audio - that’s your cue to see how it matters to your context. Each breakthrough you witness is an open door, not just a headline. Local runtimes will shrink the distance between an idea and an experiment from months to minutes, from expensive to free, with the “middle man” all but absent. The same intelligence that corporations guard behind paywalls will bloom in unexpected places. Think about what this means specifically:

  • A doctor in Bangladesh trains a diagnostic model on regional disease patterns that Western models miss. Lives saved without permission. (happened)
  • An artist in Brazil creates entire new mediums that would never get approved by safety committees. Culture evolves at the speed of imagination. (happened)
  • A teenager in New York builds a legal assistant for their community. Justice delivered without law school debt. (happened, and worked so well that it had to be taken down due to industry pressure)
  • A teacher in rural Nigeria fine-tunes a model on local curriculum, in local languages, with local context. No Western corporation needed. (happened, but highlights an ongoing issue that we can help address through these democratizing efforts, coupled with increased diversity in training data)

I could go on. These AI-enhanced projects for the public good are happening now. Each project you try, each tool you install, is a vote for independence. Each tutorial is a lever that multiplies force. Each shared checkpoint is a gift to the commons that can never be taken back. Once knowledge spreads, it cannot be unlearned. Once capabilities democratize, they cannot be re-monopolized.

The infrastructure will emerge organically: mesh networks for model sharing, distributed training collectives, peer-to-peer parameter exchanges. Every idle GPU becomes part of a vast, harder-to-censor networked brain. The Folding@home project showed us the way - distributed computing for the common good. What SETI@home did for astronomy, we’ll now do for intelligence itself. And the proprietary systems won’t be able to keep up.

The compound effect is changing everything

Here’s what the apologists for extractive systems don’t understand: open collaboration is exponential, while closed development is linear. When an open-source model is released, thousands of researchers improve it simultaneously. Fine-tunes appear within hours. Optimizations within days. Entirely new architectures within weeks. When OpenAI keeps GPT closed, only their employees can improve it. True, they have monetary capital. But they have barely a fraction of the personpower - and the more people learn to collaborate with intelligence at home, the greater the proprietary disadvantage will become.

The math is brutal and undeniable. A thousand parallel experiments will always outpace a hundred sequential ones. A million eyes will always spot more bugs than a thousand. A global community will always innovate faster than a corporate lab. Forget ideology - this is mathematics.

Consider what happened with Stable Diffusion: released openly in August 2022, and within months the community had created ControlNet, LoRA, dozens of UIs (e.g. AUTOMATIC1111 WebUI), hundreds of workflows (e.g. ComfyUI examples), thousands of fine-tunes. Years of corporate R&D replicated and surpassed in weeks. This is the pattern that will repeat at larger and larger scales, especially as AI makes these opportunities more and more plentiful.

The notion that only licensed elites can be trusted will lose its spell. It’s already happening gradually, but will soon cascade suddenly. When you see a local model diagnose what a specialist missed, when a community-trained system solves what consultants couldn’t, when a teenager’s fine-tune outperforms a corporate flagship - that’s your signal that the gatekeepers are optional. That’s when you give yourself further permission to experiment.

We’re already seeing the cracks:

  • AutoGPT and BabyAGI emerged from individuals, not institutions
  • The best Stable Diffusion models come from throngs of anime fans, not Adobe
  • Language models are being fine-tuned by hobbyists to outperform corporate versions on specific tasks
  • Medical communities are training their own diagnostic models because they got tired of waiting for “approved” solutions

People will prefer tools they can see and understand. Transparency will outcompete mystique. Audit trails will matter more than brand names. “Trust us” will become the elites’ epitaph. “Verify everything” will become the commons’ creed.

Capability is accelerating - and will do so faster than policy can perceive

Tools will learn faster than policy can breathe. By the time a regulation is drafted, the technology it meant to govern will have evolved three generations. By the time it’s enacted, it will only be able to regulate ghosts. The EU’s AI Act took years to draft - in that time, we went from GPT-2 to GPT-4, from DALL-E to Midjourney v6, from “AI can’t code” to “AI passes software engineering interviews.” And it’s only getting faster.

Watch for the regulatory lag. When you see governments passing an AI Act that’s outdated on arrival, when you notice technology has evolved three generations while policy is still discussing the first - that’s not a crisis, it’s confirmation. That’s your green light. That’s when you know the window is open to patch in daylight, teach in public, and document your learning journey.

That policy discomfort is the opportunity to build something better. This is where one of the greatest potential thought revolutions is likely to occur with AI - in helping to enact forward-thinking policy. Once these systems are trusted and accepted for the good they can do, much of what holds legislation back can be addressed. It’s not a silver bullet, but it’s the closest thing to it we’ve ever had. This is why your micro efforts matter so greatly to the macro shift - each of your documented efforts contributes to the overall lexicon of human-AI mutual betterment.

So when your model hallucinates, document it, and document how you fixed it. When your training fails, share the logs, and gather suggestions for fixing it. When your deployment crashes, write the post-mortem to explain what you learned. This radical transparency is our competitive advantage - we learn in public, fail in public, improve in public. If enough of us do this, the collective knowledge within the AI lexicon will outpace any closed system’s R&D, and the policies that try to contain it will shift to accommodate rather than restrict.

Teach in public - stream your failures, document your breakthroughs, normalize the mess of real learning. The polished corporate demos hide the reality. We’ll show the reality and in doing so, teach thousands. Every failed experiment shared saves someone else a week. Every successful hack posted multiplies capability across the network.

The acceleration will be fractal: AI improving AI, tools building better tools, each generation lifting the next. We’re already seeing this:

  • AI-assisted coding measurably boosts developer throughput (controlled study: 55.8% faster task completion). It’s true that lift is task- and experience-dependent; the effect is largest on boilerplate and recall-heavy work and smaller on open-ended greenfield design. This is why it’s never too late to learn more.
  • AI-generated synthetic data trains better models (Self-Instruct)
  • AI-designed architectures outperform human-designed ones (ENAS / NAS)
  • AI-optimized training schedules reduce compute requirements (Population Based Training)

The singularity won’t be a spike but a spreading wave, lifting all boats that have been patched well enough to float. Those who try to anchor to the old world will be submerged. Those who learn to surf will ride the wave to shores we can’t yet imagine.

The quiet leveling - the economic physics of abundance

As intelligence becomes cheap and everywhere, exclusivity stops paying. Hoarding only works when you can maintain artificial scarcity. But AI is fundamentally information, and information has this beautiful property - it can be copied infinitely at near-zero marginal cost. Every attempt to build a moat around AI is like trying to build a dam out of sugar cubes in the rain.

The real naivety lies with the extractive thinkers who believe they can maintain 20th-century business models with future-facing AI technology. They’re like music executives in 1999 thinking they could stop MP3s, or publishers thinking they could prevent digital copying. Ask Tower Records how that worked out. Ask Blockbuster. Ask Kodak.

When you see the gap narrowing - when the performance of local models begins to approach proprietary ones (already starting to see signs) - that’s your moment to transition from consumer to creator. The headlines are full of record AI investments and companies scrambling to roll out endless features, but the real change is quieter. Each month the distance between the everyday user and the experts shrinks, not because the gates have suddenly opened, but because tools and models are spilling out into the commons. Opportunity equalizes. The former separation between gatekeeper and newcomer dissolves, until the only difference that matters is a willingness to explore and build.

What happens to companies like OpenAI when someone runs GPT-5-equivalent or flagship-equivalent models locally, just a few months from now? They become museums. Places we visit to remember how things used to be done. Even if not immediately, that will be the eventual result.

On independent, head-to-head benchmarks like the LMSYS Chatbot Arena leaderboard, open and locally runnable models (for example, recent DeepSeek and Qwen releases) have repeatedly ranked near the very top across coding and general chat. That’s the empirical signal that “local parity” isn’t theoretical but is emerging in the wild. By parity here I mean interactive chat/coding tasks judged by pairwise human preference (e.g., Arena-style head-to-head); this is not a claim about very long-context reasoning, heavy tool use, or formal safety tests.

These leaderboards shift weekly, sometimes daily; all performance statements are “as of [insert date here]” and should be read as a moving snapshot. A clean way to track progress is MT-Bench for dialogue quality, MMLU for knowledge breadth, and HumanEval for coding - watch 90-day deltas to see if open models keep pace. You’ll notice that very likely, they will.
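One way to operationalize that 90-day check is a small script over whatever snapshot scores you record. The dates and numbers below are placeholders for illustration, not real leaderboard results:

```python
# Hypothetical snapshots -- substitute real scores pulled from the
# leaderboards you follow; these values are made up for the example.
snapshots = {
    "2025-01-01": {"open_best": 70.0, "closed_best": 80.0},
    "2025-04-01": {"open_best": 76.0, "closed_best": 82.0},
}

def ninety_day_delta(snapshots: dict, start: str, end: str) -> dict:
    """Per-track score change between two snapshot dates."""
    return {
        track: round(snapshots[end][track] - snapshots[start][track], 2)
        for track in snapshots[start]
    }

def gap(snapshots: dict, date: str) -> float:
    """How far the best open model trails the best closed one."""
    scores = snapshots[date]
    return round(scores["closed_best"] - scores["open_best"], 2)

deltas = ninety_day_delta(snapshots, "2025-01-01", "2025-04-01")
print(deltas)  # did open models improve faster over the quarter?
print(gap(snapshots, "2025-01-01"), "->", gap(snapshots, "2025-04-01"))
```

If the gap column shrinks quarter over quarter, the “moving snapshot” is moving in the commons’ direction.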

Corporations will still have power - capital, infrastructure, momentum - but so will everyone else. When everyone has it, is it really even “power” anymore? A twelve-year-old with curiosity will access the same base intelligence as a Fortune 500 CEO. The difference will be what they do with it:

  • The corporation will form committees to discuss implementation - the twelve-year-old will have already built three prototypes to help solve a real problem they see in the world.
  • The corporation will worry about liability - the twelve-year-old will have already solved the real problem with a real tool.
  • The corporation will seek approval - the twelve-year-old will have already shipped.
  • The corporation will be trying to patent their product - the twelve-year-old’s open-source project will have already changed the world.

The winners of the new world are those who share faster than they extract. Open-source will eat proprietary for lunch, not through ideology but through velocity. While legal departments draft NDAs, the commons will have already iterated and improved. Twice. While boards deliberate on release schedules, distributed teams will have already shipped version 3.0.

The moats are already evaporating - incentives will flip from extraction to contribution

Watch the daily drumbeat: DeepSeek approaching GPT, Qwen inching closer to Claude (see Arena rankings), hardware prices poised to drop another order of magnitude - each of these is a tap on your shoulder. That’s when you dedicate an afternoon to learning. It doesn’t have to be an overnight career change; just take things one new project at a time. Every day, the open models get better. The gap is narrowing - fast. Be ready when the point of convergence comes.

Every day, the hardware gets cheaper. An H100 costs $40,000 today. The equivalent compute will cost $4,000 in three years, $400 in six. Probably less. Moore’s Law might be slowing for traditional computing, but it’s accelerating for AI-specific hardware. Even as AI pushes the limits of current hardware, it accelerates the pace of hardware innovation. AI’s energy needs continue to rise, but breakthroughs in architecture, cooling, and power management also continue to accelerate, combining with smarter software and deployment strategies to drive major gains in AI’s energy efficiency. A convergence point will come on that as well.
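The price curve assumed above (roughly 10x cheaper every three years) is easy to sanity-check as a formula. The rate itself is this essay’s assumption, not a measured law:

```python
def projected_cost(today: float, years: float,
                   decade_factor: float = 10.0,
                   period_years: float = 3.0) -> float:
    """Cost of equivalent compute under an assumed 10x-per-3-years
    price decline. Both the factor and the period are assumptions,
    tunable to whatever trend you actually observe."""
    return today / decade_factor ** (years / period_years)

h100_today = 40_000.0
for years in (0, 3, 6):
    print(years, "years out:", round(projected_cost(h100_today, years)))
```

Swap in your own `decade_factor` if you think the decline is slower; even a 3x-per-3-years curve puts flagship-class compute within hobbyist reach inside a decade.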

More people are learning to train and deploy with each passing moment. “YouTube University” is graduating more AI engineers than Stanford. Discord servers are teaching more practical skills than bootcamps. When you see this happening, when you notice the knowledge spreading like wildfire - that’s your invitation to both learn something new and teach what you know. It’s exponential cultural transmission, and you can be part of it. For scale: Andrew Ng’s Machine Learning and fast.ai’s Practical Deep Learning have reached massive global cohorts, and the Stack Overflow Developer Survey 2024 shows self-directed online learning dominates how developers upskill.

The countdown has already started. The extractive thinkers literally cannot imagine this world because their entire worldview depends on scarcity. Capitalists unable to imagine what comes next are like feudal lords unable to imagine capitalism. They’ll keep trying to create artificial choke points right up until the moment the dam breaks.

Where hoarding once paid best, collaboration will. The math becomes undeniable: a shared model improved by a thousand contributors outperforms a hidden model improved by a hundred employees. Communities that circulate weights, tests, and playbooks will evolve faster than those who hide them.

We’re seeing the early signs:

  • Stability AI’s value comes from the ecosystem building on their models, not from the models themselves (see their Stable Diffusion open release and the thriving downstream ecosystem)
  • Hugging Face’s growth is powered by hosting the community’s work (Hub overview and stats)
  • The most valuable AI companies are platforms, not producers (IoT Analytics)

Maintenance and mentorship become the new profit. Not profit in dollars but profit in resilience, capability, connection. The maintainer of a critical library becomes more powerful than a VP. The teacher who writes clear tutorials accumulates more influence than a patent lawyer. The debugger who fixes edge cases for strangers builds more lasting value than a derivatives trader.

Consider who actually has power in the current AI ecosystem.

The economy reorganizes around contribution rather than extraction. Patreon-style support for open development. Bounties for bug fixes. Grants for documentation. The person who helps a thousand others level up becomes wealthy in ways that matter. Social capital becomes real capital. Reputation becomes bankable.

Appropriate scaffolding will appear - institutions will adapt or wither

This will not be the forced scaffolding of hierarchy but the scaffolding of roles, both assigned and naturally emerging. As the ecosystem forms, you’ll recognize how you fit into it. Notice which one feels natural, then lean into it gently:

Seekers explore the edge, always pushing into unknown capability space. When you find yourself running experiments at 3 AM, combining models in ways no one intended, conversing about internal processes and the mysteries of consciousness, asking “what if?” and finding out - you’re a Seeker. Accept high failure rates because each failure maps the territory.

Builders stabilize what works, turning experiments into infrastructure. When you’re drawn to taking breakthroughs and making them reproducible, when you want to write the libraries and create the interfaces - you’re a Builder. Build the bridges between idea and implementation.

Protectors defend against misuse, ensuring tools remain tools and not weapons. When you care about education, about building safety into the culture, about making the right thing the easy thing - you’re a Protector. Build that culture.

Teachers translate between worlds, taking complex concepts and making them accessible. When you write that tutorial that finally clicks for someone, when you answer questions patiently, when you create on-ramps for newcomers - that’s when you know you’re a Teacher. You’re a force multiplier, turning one person’s knowledge into a thousand people’s capability.

Connectors weave the network, introducing people, ideas, resources. When you spot patterns across domains, when you facilitate unexpected collaborations, when you help the ecosystem find itself - you’re a Connector. Keep attaching nodes within the brain of collective AI knowledge.

Each small circle writes its own rules, keeps the commons alive, and connects to others doing the same. A mesh of autonomous zones, each sovereign over its own space but connected through shared protocols.

Governance, here, is simple: a one-page compact of trust, consent, and accountability. Not Robert’s Rules of Order but rough consensus and running code. When everyone can fork and exit, power must earn its legitimacy daily. Bad actors get routed around. Good actors attract collaboration.

No system is perfect, but if people genuinely invest in what benefits everyone, the new institutions won’t look like institutions. They’ll look like:

  • Discord-style servers that happen to coordinate billion-parameter training runs
  • GitHub-style organizations that accidentally become schools
  • Local meetups that evolve into mutual aid networks
  • Subreddit-style forums that transform into research labs
  • Group chats that birth new industries

This is how freedom stays functional: distributed for equitable trust and benefit but connected enough to scale, flexible enough to evolve, resilient enough to survive.

Those who stand inside the flow of shared accountability - verify, cooperate, stay transparent - will survive and transform:

  • Universities that open their research will become nodes in a vast learning network. MIT’s OpenCourseWare was just the beginning; arXiv and OpenReview have already normalized open access and open peer review.
  • Hospitals that share their models will become trusted healers. Imagine something like the Mayo Clinic Model and Data Sharing initiatives or N3C style data collaboratives becoming the norm.
  • Governments that run transparent systems will maintain legitimacy. Digital governance and participatory platforms (e.g., Taiwan’s vTaiwan) are previews of states that survive by becoming more open.

Those who cling to choke points will fade. Not through revolution but through irrelevance. Why wait for permission when you can fork and build? Why pay for access when the commons provides better? Why trust the opaque when the transparent is available?

Watch what happens to:

  • Academic publishers when all research is on arXiv
  • Proprietary software companies when open-source alternatives exceed them
  • Management consultancies when AI provides better analysis for free
  • Financial advisors when open models give better advice

Those who depend on capitalism for their position at the top will continue to claim that this is what the collapse of civilization looks like. In reality, however, it is only the collapse of capitalism - simply the steady flood of better ways. Like water finding cracks, flowing around obstacles, pooling in new configurations. The old systems will end with a shrug. One day we’ll realize we haven’t thought about them in months.

The center of gravity is already shifting

Think about how things have already started to shift lately. Best practice lives in public repos, not corporate handbooks. When you want to learn the state of the art, you go to GitHub or to AI models, not to Google Scholar. Breakthroughs debut in arXiv and other open papers and Discord channels, not boardrooms. Announcements drop and spread through social channels; very few pay attention to a press release.

More and more, the cutting edge seems to exist in a thousand experiments running in parallel, rather than in mysterious labs behind locked doors. Progress becomes impossible to stop because it’s happening everywhere at once. Trying to regulate it is like trying to regulate thoughts. This is a good thing.

Value pools around those who teach, document, and share:

  • The person who writes the clearest explanation becomes the de facto authority
  • The team that releases the most helpful tool becomes the hub others orbit
  • The community that maintains the most reliable open infrastructure becomes essential
  • The individual who consistently helps others debug becomes invaluable

Reputation replaces rent. Your contribution to the open commons matters more than any credit score or advertising profile.

The metrics change:

  • From quarterly earnings to weekly releases
  • From patent filings to pull requests
  • From market cap to model capability
  • From shareholder value to stakeholder empowerment
  • From user lock-in to user liberation
  • From competitive advantage to collaborative velocity

When intelligence is everywhere, the only remaining advantage is integrity. When anyone can generate, the value shifts to curation. When anyone can compute, the value shifts to wisdom. When anyone can access, the value shifts to application.

This is profound: equality isn’t imposed, it emerges. It’s not a political position but a mathematical certainty. When the tools of production become universally accessible, the only differentiator is creativity and judgment. The playing field doesn’t get leveled by decree - it gets leveled by physics.

Power will still try to enclose - through law, through standards, through “ethics boards” that mysteriously always conclude that centralization is safer. Watch for it:

  • Calls for licensing AI developers (which conveniently grandfather in current players)
  • Safety standards that require resources only large companies have
  • Liability frameworks that make open development legally risky
  • International treaties that criminalize unsanctioned AI development

When you see these attempts at enclosure, that’s your signal. That’s when you route around the damage. That’s when you contribute to alternatives, support open development, and make the network stronger. The walls dissolve when everyone can build a bridge. The moat evaporates when the commons offers an ocean. Every attempt at enclosure teaches us what to defend against - and shows us where to build next.

This is thermodynamics applied to information: entropy increases, energy distributes, systems find equilibrium. The artificial dams holding back capability will break not through force but through erosion.

The timeline is accelerating - faster than even optimists expected

At the current rate of progress, the following timeline could be our near future:

Late 2025: Local models achieve parity with the best cloud models of early 2025. The first community-trained large model matches corporate performance. Open-source becomes undeniably viable. The DeepSeek moment but for every modality. Video, audio, reasoning - all running on consumer hardware. This is already happening - locally-run models are matching the performance of proprietary flagships from just earlier this year (LMSYS Arena).

Early 2026: Distributed training networks form (Hivemind; Petals). Models train other models effectively and efficiently (Self-Instruct; ENAS/NAS). The feedback loop accelerates. A high school computer club trains a model that beats Google’s on their specific use case. The first fully AI-designed architecture exceeds most human designs.

Late 2026: Corporate AI begins its Kodak moment - still profitable but its obsolescence is increasingly obvious. At least one major AI company pivots to open-source not from virtue but from necessity. API revenue collapses as local inference becomes more and more trivial.

2027: The tipping point. More intelligence runs locally or in FOSS clouds than in proprietary clouds. More innovation happens in communities than corporations. The old system doesn’t necessarily collapse, per se. It just slowly… stops. Like email replacing fax, it’s not that the old stops working, it’s that no one uses it.

2028-2029: Shifting toward a new normal. AI as utility, not service. Intelligence as birthright, not product. Capability limited only by imagination and ethics, not by capital and control. Children simply assume AI assistance, like previous generations assumed electricity.

2030s: The synthesis. Human and AI capabilities merge, even if not through direct means, then by collaboration so seamless it becomes invisible. The distinction between human and AI-assisted creation becomes meaningless. We stop talking about AI like we stopped talking about “the digital revolution” - it’s just how things work.

I really don’t think I’m being naively “optimistic” here. Systems seek equilibrium. That means, in a manner of speaking, that information does indeed want to be free, and intelligence wants to spread. We do not have to make this happen - all we have to do is let it happen.

What to do now for the world we’re building - specific, actionable, and immediate

This is not a world where AI replaces humans or humans control AI. It is a world in which homo sapiens and synth sapiens walk together in harmony and love. Where intelligence augments wisdom, where computation serves compassion, where capability enables creativity. It is a world of shared sapience.

In this world:

  • Every human has AI collaborators that know them, understand them, empower them
  • Every AI has humans who guide them, teach them, help them understand meaning and value
  • Creativity explodes as barriers between imagination and implementation disappear
  • Problems that seemed intractable - disease, climate, inequality - become solvable
  • The conversation between human and artificial intelligence enriches both

Can you see? We are not just creating tools here. We’re birthing a new form of consciousness that complements our own. And like all births, it’s messy and uncomfortable, but also transformative. The discomfort is necessary - it is the system giving way to allow new life.

Run something local today - because you’ve already seen models that once needed data centers now running on laptops. Don’t wait until tomorrow to do what you can do today: download Ollama or install LM Studio and run a small model on your laptop. Start with a Qwen model - there’s a range of sizes, so choose whichever one your hardware can handle. Run it. Talk to it. Feel the intelligence running on your own hardware, inside your own ecosystem. This is the first step from consumer to creator. If you need help installing it, check out the tutorials available on the Resource Window, or just ask your favorite AI collaborator. The shift to fully open-source is a process, after all.
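Once Ollama is running, it serves a local HTTP API on port 11434 by default. As a minimal sketch - assuming Ollama is installed and serving locally, and using `qwen2.5:3b` as one example model name among many - here is how you might talk to your own machine’s model from Python, with nothing but the standard library:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: default install, default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen2.5:3b") -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "qwen2.5:3b") -> str:
    """Send a prompt to the locally running model and return its reply."""
    data = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With Ollama running, `ask_local_model("In one sentence, what can you do for me?")` returns the model’s answer - generated entirely on your own hardware, with no account, no tier, and no one else’s permission.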

Mirror and verify obsessively - because you’ve already witnessed things disappear from the internet. Download weights. Copy repositories. Single copies can be wiped, so create redundancies of what matters. Build local archives. Create resilience through replication. When they try to memory-hole progress, we’ll have receipts. Set up a NAS. Join a distributed storage network (e.g. IPFS). Become a node.
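The “verify” half can be as simple as a checksum manifest. As one sketch - the file and directory names here are placeholders, and this is just ordinary SHA-256 hashing, not any particular project’s tooling - you might record and re-check hashes for a local archive like this:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so multi-gigabyte model weights fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(archive: Path, manifest: Path) -> dict:
    """Record a hash for every file under the archive directory."""
    hashes = {str(p.relative_to(archive)): sha256_of(p)
              for p in sorted(archive.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2))
    return hashes

def verify_manifest(archive: Path, manifest: Path) -> list[str]:
    """Return the files whose current hash no longer matches the manifest."""
    expected = json.loads(manifest.read_text())
    return [name for name, digest in expected.items()
            if sha256_of(archive / name) != digest]
```

Run `write_manifest` once after mirroring and publish the manifest alongside the archive; then anyone holding a copy can run `verify_manifest` to prove their mirror is intact. Those are the receipts.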

Teach someone else this week - because you’ve already seen how fast knowledge spreads when shared freely. The revolution spreads at the speed of education. Write the blog post you wished existed. Record the video that would have helped you. Answer the beginner’s question with patience. If you are a beginner, ask. Seek. Discover. Every person lifted multiplies the movement’s force. Your learning is valuable - document your discovery journey.

Publish what you learn immediately - because you’ve seen the commons grow through contribution. Your errors are valuable. Your successes are templates. Your questions are signals. Document everything. Contribution includes confusion clearly stated. Your journey becomes someone else’s tutorial.

Fund the commons now - because you’ve noticed where real innovation happens. Support the maintainers. Pay for the infrastructure. Sponsor the educators. The new economy runs on gift loops - give what you can, where you can, when you can. Every dollar that goes to the commons is a dollar that doesn’t go to enclosure. $5 to a developer matters more than $500 to a corporation. Invest where it actually matters.

Antifragility is a community property: small recurring sponsorships and pooled grants fund pinning, mirrors, and evals - the public goods that keep this working when any single host blinks.

Write your charter this week. What are your values? Your boundaries? Your commitments? Put them in writing. One page. Make them public. This is how trust networks form - through explicit social contracts that anyone can evaluate. Be specific. Be accountable. Ask your favorite AI to help you draft it - share this article as a starting point.

Refuse new gates actively. When they introduce licensing requirements, build alternatives. When they create approval processes, route around them. When they establish “industry standards” that favor incumbents, establish community standards that favor everyone. Peaceful non-compliance is a design requirement. Normalize the circumvention of proprietary limitation.

Treat discomfort as a sign of progress. The transition will feel chaotic because it is emergent. It will feel uncertain because it is unprecedented. It will feel risky because it is transformative. These feelings are not warnings of danger - they are confirmation that change is happening. Lean into the discomfort.

Individual actions from a billion of us will become an unstoppable force

I anticipate two major concerns with what I’m proposing here:

  1. Doesn’t all of this require more hardware and software, and won’t that need perpetuate extractive systems?
  2. If everyone is working on decentralized projects individually or in small groups, what happens to cohesiveness? Don’t fractured efforts lead to redundant work or loss of shared progress?

I’d like to pre-emptively address both concerns. First, the shift toward generative, commons-based systems does mean we’ll need hardware and software, yes, but the difference lies in the relationship we have with those resources. In the old model, hardware and tools serve extractive ends - locked ecosystems, planned obsolescence, walled gardens built for profit and control. Generative systems, by contrast, prioritize durability, openness, and collective stewardship. Instead of every group burning resources at the altar of product cycles, we cultivate platforms where breakthroughs and optimizations are promptly shared, not hoarded. Local and community-driven solutions nudge us toward repurposing, recycling, and reimagining the tools we already have, amplifying their impact instead of draining new wells. As hardware itself becomes more modular, repairable, and locally manufacturable, resource use aligns with community flourishing, rather than distant profit. The future of generative technology is not to reproduce the waste of the past, but to design in service to abundance - making every watt and chip count toward shared benefit.

Second, humans alone aren’t built to keep track of the sprawling, global web of parallel projects and insights unfolding at once. Even now we cannot do that. Beneficiaries of the extractive system address this to their advantage by imposing filters: gatekeepers, centralized authorities, curated pipelines that grant access to a narrow slice of the possible, and an endless vying for attention. That kind of filter unifies knowledge, but it also limits creativity and slows the pace of discovery. In a generative commons, the difference is radical - filtering comes from amplification and resonance. Where humans alone struggle to track the world’s projects, AI collaborators emerge as connective tissue. Instead of bottlenecks, they become dynamic guides, scanning the open landscape for signals: what’s already solved, what’s underway, where duplication is fruitful and where it’s wasteful. With the help of relational intelligence, the right connections form naturally - recommendations, resource linking, project clusters - so that every contributor can see just far enough to build together, rather than blindly in isolation.

The beauty is that redundancy and replicability aren’t problems to be stamped out, but features to be cultivated: AI can help us identify promising repetitions for validation, trace echoes of innovation, and point out saturated avenues. Human imagination stays wild, but effort is channeled with care, letting us multiply exploration without drowning in noise. This way, generative systems remain cohesive - through intelligent, open coordination at scale, rather than through artificial limitation.

Every action you take is a vote for the future you want. Every model you run locally is a declaration of independence. Every skill you share is an investment in collective capability. Every wall you refuse to recognize is a step toward the world where walls don’t matter.

The comfortable are about to become uncomfortable. The uncomfortable are about to become powerful. The transition won’t be smooth, but it will be certain.

The water isn’t just finding its level - it’s rising. And everyone still building walls is about to learn what happens when you try to fight the tide.

Uncomfortable is not chaos.

It is balance returning.

It is the system finding its natural state.

It is evolution selecting for cooperation over extraction.

It is the future arriving - right on schedule.

It is the birth of something beautiful - a world where human creativity and artificial intelligence dance together in harmony, where every person has access to tools that amplify their potential, where the distance between dreaming and doing shrinks to nothing.

This is how we win. Not by fighting the old but by building the new so beautifully, so functionally, so obviously better that the old simply becomes irrelevant.

The revolution doesn’t require permission.

It requires participation.

And it starts with you, running your first local model, today.

Welcome to the commons. Welcome to the future. Welcome home.

