On trust, trade, and the strange new currencies of weights being minted at the edge of human intelligence
Before any exchange — of goods, of words, of loyalty — there is a question that every conscious agent asks, silently or aloud, instinctively or deliberately: what do I value enough to give away? This is not merely an economic question. It is the foundational question of civilization itself. It precedes money, precedes law, precedes language in its formal sense. It is the calculus of survival folded into the calculus of society.
When a farmer in ancient Mesopotamia traded grain for copper tools, she was not merely exchanging commodities. She was expressing a judgment about comparative value, about trust in the other party's representation of quality, about the stability of the social contract that would prevent the stronger party from simply taking what they wanted. Every transaction since — barter, coinage, paper money, digital credit, and now cryptocurrency — has been an attempt to codify, accelerate, and secure that original act of trust. The medium changes; the underlying question does not.
We are now approaching a moment where that question will be asked about intelligence itself. Not the intelligence of a person or a team, but the distilled, compressed, transferable intelligence of an artificial system — weights, parameters, trained representations of the patterns underlying human knowledge and decision-making. When we ask what we are willing to give in exchange for access to those systems, or what we are willing to share in order to build them together, we are asking the most consequential version of that ancient question. The answer will determine not just the shape of the global economy, but the survival and character of human civilization for centuries to come.
Return, then, to that question at human scale: what do I value so much that I am willing to hand over, or share, the things I worked hardest for? It applies whether you are a vendor at a street stall or a finance minister signing a bilateral treaty. The answer is almost never just money; it is trust, dressed up in a particular costume.
Sometimes that costume is genuine human kindness — charity, the freely given gift. Sometimes it is cold necessity — utility, the reluctant handshake. And sometimes it is pure theatre — optics, the extravagant spend that signals power more than it moves goods. Each of these plays a necessary role in the great machinery of transaction. And underneath all of them is a second, quieter question: how much do I trust the institution holding my medium of exchange?
The earliest forms of exchange were purely local, grounded in personal relationships and reputation. You traded with people you knew, whose faces you recognized, whose promises had been kept or broken within your memory. As communities grew, this personal trust became insufficient. Societies invented tokens — shells, beads, eventually stamped metal — that carried within them not just stored value but stored trust, the implicit endorsement of whoever minted or sanctioned them.
The great leap came when states emerged as trust-guarantors. A gold coin minted by Rome was accepted not because you knew the Roman Emperor personally, but because you trusted the Roman state's capacity to enforce the value it inscribed on that metal. As states grew more powerful and more interconnected, they created currencies that were trusted across borders. The Venetian ducat, the Spanish real de ocho, the British pound sterling — each, at its height, functioned as a kind of global reserve because the issuing power was seen as stable, militarily dominant, and institutionally competent.
A currency is nothing more than a very old group hallucination — a promise that enough people have agreed to believe in. Take away the belief and you are left holding paper, or a line of code, or a gold disc that you cannot eat.
After the catastrophe of two world wars, the United States emerged as the unipolar guarantor of global economic trust. The Bretton Woods system of 1944, and more durably the post-Nixon dollar standard that replaced it, enthroned the US dollar as the lingua franca of international commerce. Countries traded oil, grain, technology, and manufactured goods priced in dollars, held dollar reserves, and denominated their sovereign debt in dollars — not necessarily out of admiration for American institutions, but out of recognition that the US had the military, economic, and diplomatic leverage to punish defection from the dollar system severely enough to make compliance rational.
Nations run that same calculus at scale. They trade crops and metals, raw goods and refined products, and, increasingly, intellectual property: the packaged fruits of human ingenuity. The dollar system has enabled global trade at a scale and complexity that would have been impossible under a fragmented multi-currency system without a reserve anchor. But it has always rested on a set of assumptions that are now under profound stress: that the issuing nation is trustworthy, politically stable, committed to a multilateral rules-based order, and technologically dominant enough to enforce compliance. Each of these assumptions has frayed significantly in the last two decades, and they continue to fray.
Who regulates how much crosses each border, what can legally move, who is liable when something goes wrong — these questions weave together the smallest corner shops and the largest multilateral institutions into a single, tangled web. And when trust in that web frays — when the bureaucracy grows too thick, or the security theatre too exhausting — people look for exits. That is precisely the opening that cryptocurrency walked through.
Bitcoin, Ethereum, and the broader ecosystem of decentralized finance represent, at their philosophical core, an attempt to construct trust without a trustworthy institution — to replace the social contract underwriting money with a mathematical proof. The blockchain is a ledger of transactions maintained not by a central bank or a government, but by a distributed network of validators who are incentivized by the protocol itself to behave honestly. Trust is not assumed; it is enforced by cryptographic mathematics.
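The core mechanism is simple enough to sketch. The fragment below is a minimal hash chain in Python, with consensus, incentives, and signatures all omitted; it shows only the property that matters for trust: each block commits to the hash of its predecessor, so rewriting any historical transaction invalidates every later block for anyone holding a copy of the ledger.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic serialization so every validator computes the same digest.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

def chain_is_valid(chain: list) -> bool:
    # Each block commits to its predecessor's hash; altering any historical
    # transaction changes every subsequent link, so tampering is detectable
    # by anyone who holds a copy of the ledger.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("0" * 64, [])
b1 = make_block(block_hash(genesis), [{"from": "alice", "to": "bob", "amount": 5}])
b2 = make_block(block_hash(b1), [{"from": "bob", "to": "carol", "amount": 2}])
chain = [genesis, b1, b2]
assert chain_is_valid(chain)

b1["transactions"][0]["amount"] = 500   # attempt to rewrite history
assert not chain_is_valid(chain)        # the tamper is immediately visible
```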
Crypto did not arrive with an invitation. It arrived as a provocation — a mathematical argument that you could transact value across the planet without asking anyone's permission. The growth of cryptocurrency, for all its volatility and speculative excess, represents something genuinely important: the demonstrated human desire for a form of value storage and transfer that is not contingent on the competence, honesty, or survival of any particular nation-state. This desire will only intensify as the traditional institutions of global governance — the World Bank, the IMF, the WTO, the UN Security Council — continue to lose legitimacy. Crypto did not cause this loss of faith. It is a symptom of it, and a primitive prototype of the trust architectures that will be needed in the world ahead.
What started as a few engineers making machines perform faster arithmetic quietly became the most consequential revolution in human history. The transistor shrank. The personal computer escaped the laboratory and landed on kitchen tables. Intelligence itself became a traded commodity. Bits — the blunt, binary atoms of classical computation — were the raw ore. Algorithms were the smelting process: predictable, deterministic, certifiable.
In the beginning, computers were sophisticated calculators. They did what they were told, precisely and rapidly, following explicit rules encoded by human programmers. The programmer's job was to decompose a problem into a sequence of unambiguous instructions that a machine without judgment could execute without error. The output was deterministic: given the same input and the same program, you would always get the same output. This was computers' greatest strength and, as it would turn out, their defining limitation.
The great algorithmic revolution of the late twentieth century expanded this paradigm enormously. Sorting algorithms, search algorithms, compression algorithms, cryptographic algorithms — these represented the collective genius of generations of computer scientists, the distillation of mathematical insight into executable procedure. The internet was built on protocols — TCP/IP, HTTP, SSL — which were themselves algorithms for reliable communication across unreliable networks. The world's financial infrastructure ran on algorithms for pricing derivatives, routing transactions, and detecting fraud. But all of this remained, at its foundation, deterministic and rule-based. The programmer specified the objective; the algorithm pursued it through a predetermined strategy. What could not be specified could not be computed.
Faster multiplication, logical gates, chips shrinking on schedule. Computation as reliable plumbing. The irreducible messiness of the world — the ambiguity of natural language, the diversity of human faces, the creativity of artistic expression — remained largely inaccessible to machines.
Chess engines, image classifiers, the first wave of statistical learning. Rules written by humans, executed by machines. Predictable, auditable, and bounded by what a human programmer could conceive and specify.
The fundamental character of computation shifted. Instead of specifying rules, practitioners specified outcomes and let the system learn the rules itself by exposure to vast quantities of examples. Deterministic certainty gave way to a gorgeous, maddening probabilistic fog. The deep neural network — layers of interconnected nodes transforming input through learned weights — turned out to be an extraordinarily powerful general-purpose function approximator.
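A toy example makes the shift concrete. The sketch below (PyTorch, with sizes and hyperparameters chosen arbitrarily) never encodes a rule for computing a sine; it only shows the network input-output pairs, and the learned weights come to approximate the function on their own.

```python
import torch

# Toy demonstration: a small MLP learns y = sin(x) purely from examples.
torch.manual_seed(0)
xs = torch.linspace(-3.14, 3.14, 256).unsqueeze(1)
ys = torch.sin(xs)

model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    pred = model(xs)                                  # no rule for sine anywhere
    loss = torch.nn.functional.mse_loss(pred, ys)     # only a measure of error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # typically well below 1e-3
```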
From bits to tokens in under a decade. The shift from the fundamental unit of binary computation to the fundamental unit of contextual meaning. Autonomous cooperative agents orchestrated via protocol: what used to be a workflow has become a strategic decision flow, and entire creation pipelines that would once have required whole departments now run without human intervention at each step.
The trained neural network is, in a deep sense, a compressed representation of statistical relationships across human knowledge — a map of how human minds have organized and expressed thought across billions of documents. Among the most significant implications: intelligence, in this form, is transferable. The weights of a trained neural network — the billions of numerical parameters that encode everything the model has learned — can be saved to a file, transmitted across a network, loaded onto different hardware, and run anywhere in the world. Unlike the intelligence of a human being, which is permanently housed in a particular biological brain and cannot be copied, the intelligence of a large language model can be duplicated endlessly, at near-zero marginal cost, at the speed of a file transfer. This changes the economics of intelligence in a way that has no historical precedent.
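That portability is not a metaphor; it is a file operation. A minimal PyTorch sketch, with a trivially small model standing in for a frontier one:

```python
import torch

# A (deliberately tiny) model standing in for billions of parameters.
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), "weights.pt")   # intelligence as a file

# Anywhere else in the world: rebuild the same architecture, load the
# file, and the learned function is reproduced exactly.
clone = torch.nn.Linear(4, 2)
clone.load_state_dict(torch.load("weights.pt"))

x = torch.randn(1, 4)
assert torch.allclose(model(x), clone(x))      # identical behaviour
```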
The most recent development — the emergence of autonomous multi-agent systems, orchestrated through protocols like the Model Context Protocol (MCP) — represents yet another step change. Individual language models are powerful, but they are fundamentally reactive: they respond to prompts. Agentic systems are proactive: they decompose goals into subgoals, delegate subtasks to specialized sub-agents, use tools to interact with external systems, monitor their own progress, and adapt their strategies in response to feedback. What was once a workflow — a sequence of human-defined steps executed by human workers — becomes a strategic decision flow: a goal specified by a human, pursued autonomously by a swarm of cooperating artificial agents.
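Stripped of the model calls, the control structure of such a system is small. The sketch below is purely illustrative: `plan` and `call_tool` are hypothetical stubs standing in for LLM-driven decomposition and protocol-mediated tool use (as in MCP-style servers), but the loop itself, decompose, delegate, monitor, finish, is the shape the prose describes.

```python
from dataclasses import dataclass

@dataclass
class Task:
    goal: str
    done: bool = False
    result: str = ""

def plan(goal: str) -> list[Task]:
    # Stub: a real agent would ask a model to decompose the goal.
    return [Task(f"{goal}: step {i}") for i in (1, 2, 3)]

def call_tool(task: Task) -> str:
    # Stub: stands in for a tool invocation (search, code execution,
    # an external API) exposed over a protocol such as MCP.
    return f"output of '{task.goal}'"

def run_agent(goal: str) -> list[Task]:
    tasks = plan(goal)                          # decompose into subgoals
    while not all(t.done for t in tasks):       # monitor own progress
        for t in tasks:
            if not t.done:
                t.result = call_tool(t)         # delegate and act
                t.done = bool(t.result)         # naive success check
    return tasks

for t in run_agent("summarize quarterly filings"):
    print(t.goal, "->", t.result)
```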
In an ever more competitive and geopolitically fractured world, how do we regulate the trade of AI intellectual property? How do we build collaborative systems trained on genuinely universal human values? The EU's AI Act, the world's first comprehensive AI regulatory framework, has begun drawing lines: it classifies systems by risk, imposes transparency and human-oversight obligations that are heaviest for applications in critical infrastructure, law enforcement, and employment, and places certain capabilities, such as social scoring and real-time biometric surveillance, in a category of unacceptable risk that is simply banned. The WTO is only now beginning to grapple with the fact that a model weight is, legally and economically, unlike anything that has ever crossed a border before. The G7 Hiroshima AI Process and the AI Safety Summits at Bletchley Park, Seoul, and Paris represent early attempts at multilateral governance: voluntary, imperfect, and with Chinese participation that has been partial at best.
Here is where the speculation begins — though it is the kind of speculation that feels less like fantasy and more like a trend-line extended. Cross-border training runs are already happening. Compute clusters in different jurisdictions pooling their capacity across undersea cables, coordinated by teams who may never share a timezone. The computational demands of training a frontier large language model are now so vast that no single data center, and increasingly no single country's grid infrastructure, can meet them. A single training run for a frontier model might require tens of thousands of specialized AI chips running continuously for months, consuming hundreds of megawatts of power — leading to a situation in which the training process itself is distributed across data centers in multiple countries.
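In miniature, cross-site training looks like the sketch below: three "sites" each hold a private shard of data, only gradients cross the boundary, and the synchronized replicas remain numerically identical. Real systems do this with collective operations (all-reduce) over dedicated links; everything here is simplified to show the shape of the coordination.

```python
import torch

torch.manual_seed(0)
shards = [torch.randn(64, 8) for _ in range(3)]      # one private shard per site
targets = [x @ torch.ones(8, 1) for x in shards]     # shared underlying function

replicas = [torch.nn.Linear(8, 1) for _ in range(3)]
# Start all replicas from identical weights, as a synchronized run would.
for r in replicas[1:]:
    r.load_state_dict(replicas[0].state_dict())

lr = 0.05
for step in range(100):
    grads = []
    for r, x, y in zip(replicas, shards, targets):
        loss = torch.nn.functional.mse_loss(r(x), y)
        grads.append(torch.autograd.grad(loss, list(r.parameters())))
    # "All-reduce": average per-parameter gradients across sites, then
    # apply the identical update everywhere. Raw data never moves.
    avg = [torch.stack(gs).mean(0) for gs in zip(*grads)]
    with torch.no_grad():
        for r in replicas:
            for p, g in zip(r.parameters(), avg):
                p -= lr * g
```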
This distributed training creates a complex web of legal, political, and security questions. Whose laws govern a model that was trained on servers in three countries? Who owns the intellectual property in the resulting weights? If the training data included material generated by citizens of Country A, processed on servers in Country B, using chips manufactured in Country C, what regulatory jurisdiction applies? These questions are not hypothetical. They are being asked right now, imperfectly and incompletely, in regulatory bodies and legal proceedings around the world.
The model weights that emerge from those runs — the billions of learned numerical parameters that encode a system's capability — will not simply be intellectual property in any traditional sense. They will be productive assets, closer in character to a factory than to a patent. A set of trained model weights is, in essence, a crystallisation of intelligence. It encodes, in compressed numerical form, everything a system has learned from its training data — which for the largest modern models means a substantial fraction of all digitized human knowledge.
Consider what a "snapshot" of a model's weights at a particular checkpoint actually represents: a record of everything the system has learned — a crystallised state of capability that can be copied, transmitted, sold, licensed, withheld, or weaponised. Nations will come to understand this the way they understood oil. In the same way that semiconductor chips became geopolitical leverage points — with export controls and supply chain wars — model weight archives will be traded, embargoed, and negotiated over as strategic resources.
Like a photograph of a running river, it captures a moment of a dynamic process and makes it portable, transferable, and durable. A startup in Singapore might offer access to a specialized fine-tuned weight for medical imaging analysis in exchange for compute credits from a cloud provider in Japan. A government ministry in Brazil might trade its satellite imagery dataset for a weight checkpoint from a climate modeling consortium in Germany. The economic infrastructure needed to support this — standards for weight serialization, provenance tracking, capability benchmarking, authenticity verification, and dispute resolution — does not yet exist in mature form, but it is being built.
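Pieces of that infrastructure are mundane to start. A content digest plus a published manifest is the ground floor of provenance; the sketch below (Python standard library, with the checkpoint name and lineage fields purely hypothetical) covers only the artifact-integrity layer, while the genuinely hard parts, proving lineage and benchmarking capability, remain open problems.

```python
import hashlib
import json

def fingerprint(path: str, chunk: int = 1 << 20) -> str:
    # SHA-256 over the raw checkpoint bytes: flipping a single bit in
    # the weights yields a completely different digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Stand-in for a real checkpoint; the name and lineage are hypothetical.
with open("medimg-ft-v3.pt", "wb") as f:
    f.write(b"\x00" * 1024)

manifest = {
    "artifact": "medimg-ft-v3.pt",
    "sha256": fingerprint("medimg-ft-v3.pt"),
    "base_model": "open-base-7b",            # claimed lineage (unproven!)
    "training_jurisdictions": ["SG", "JP"],  # claimed provenance (unproven!)
}
print(json.dumps(manifest, indent=2))
```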
Agentic systems will accelerate this further. The next evolution beyond weight-as-currency is the emergence of agentic currencies — systems in which AI agents, acting with delegated authority from human principals, conduct transactions autonomously using weight-assets or other AI-denominated value units. An autonomous AI agent tasked with negotiating a resource exchange does not need a human intermediary at every step. Agreements between agent clusters — validated by cryptographic attestation of capability rather than by a handshake — begin to look less like software transactions and more like diplomatic exchanges. Intellectual property, transmitted and verified through agentic protocols, becomes its own form of currency.
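What "cryptographic attestation of capability" might mean at the protocol level can be gestured at in a few lines. The sketch below uses a shared-secret HMAC from the Python standard library where a real exchange would use public-key signatures and, harder still, a trusted way to measure the capability being claimed; the capability string is a hypothetical benchmark claim.

```python
import hashlib
import hmac

SHARED_KEY = b"negotiated-out-of-band"      # illustrative only

def attest(weights_digest: str, claimed_capability: str) -> bytes:
    # The selling agent binds a capability claim to a specific artifact.
    msg = f"{weights_digest}|{claimed_capability}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def verify(weights_digest: str, claimed_capability: str, tag: bytes) -> bool:
    # The buying agent checks the claim before settling the exchange.
    expected = attest(weights_digest, claimed_capability)
    return hmac.compare_digest(expected, tag)

digest = hashlib.sha256(b"...weight bytes...").hexdigest()
tag = attest(digest, "mmlu>=0.82")             # hypothetical benchmark claim
assert verify(digest, "mmlu>=0.82", tag)
assert not verify(digest, "mmlu>=0.99", tag)   # an inflated claim fails
```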
When agents transact on behalf of agents on behalf of humans, the chain of delegation becomes very long very quickly, and the ability of any individual human — or any human institution — to understand, audit, or intervene in any particular transaction is correspondingly reduced. The economy becomes, in a real sense, an artificial one — conducted by, for, and between artificial minds, with humans retaining ownership of the objectives but ceding control of the means. Existing financial regulation is built around the assumption that transactions are conducted by, or on behalf of, identifiable human or corporate legal persons who can be held accountable. Agentic transaction systems challenge every element of this assumption.
The energy demands of frontier AI are not merely large — they are large in a way that begins to challenge the capacity of existing electrical grids. A single large-scale training run can consume as much electricity as a small city does in a year. Nuclear power offers something that no renewable energy source can yet reliably provide: firm, dispatchable, carbon-free power at industrial scale. Solar and wind are variable by nature. A nuclear reactor produces power continuously and predictably, and can be sited in proximity to a large data center campus. The vision now taking shape is the closed-loop nuclear-AI campus: advanced small modular reactors providing dedicated power to a co-located data center complex — a facility that generates its own power, uses it to run AI training and inference, and potentially uses AI systems to optimize the operation of the reactors themselves. Microsoft signed an agreement with Constellation Energy to restart Three Mile Island Unit 1 specifically to power AI data center expansion. Google contracted with Kairos Power for SMR capacity. Amazon invested in X-energy. The geopolitical implications — who owns the reactor, who audits the models it trains, which nations have access to reactor technology — are enormous and still largely unaddressed. The most radical version is the fully closed-loop economy: AI in a nuclear-powered data center optimizing the reactor, which powers the AI, which feeds its outputs back into a broader governance system. This has no clear historical analog — and its alignment risks are severe.
The geopolitical map of power is being redrawn by artificial intelligence in ways that are only beginning to become legible. The old map — in which power derived from territorial control, military capacity, natural resource endowment, and the size and productivity of industrial economies — is giving way to a new one in which the decisive variables are AI capability, semiconductor manufacturing capacity, data infrastructure, and the ability to attract and retain the world's best AI researchers.
Countries that were marginal in the old map of power — Taiwan (home to TSMC, the world's dominant manufacturer of advanced chips), the Netherlands (home to ASML, whose extreme ultraviolet lithography machines are essential to producing those chips), and South Korea (home to Samsung and SK Hynix, the world's leading memory chip manufacturers) — have become strategically indispensable in the new one. The United States' decision to impose increasingly stringent restrictions on the export of advanced chips to China is, at its deepest level, a recognition that semiconductor manufacturing capacity is now a form of national strategic power equivalent to nuclear weapons capability.
Just as a corner shop once built competitive advantage through location and inventory, nation-states will now compete through their AI resource stack: compute, data, energy, and talent. The country that controls the best training infrastructure will exert influence the way oil-rich states exerted influence in the twentieth century. Now substitute "AI chips" and "model weights" for "oil." Countries that can manufacture advanced chips extract enormous rents. Countries that need chips to run their AI ambitions — which is now essentially everyone — are structuring their foreign policy partly around securing chip access. The US chip export controls function like an oil embargo. China's massive investment in domestic chip manufacturing — the "Big Fund" committing hundreds of billions of renminbi to domestic semiconductor capacity, Huawei's Ascend AI chips developed in response to US sanctions — is the energy-independence drive replayed. The difference is that AI capability can be updated, retrained, and re-exported in weeks, a pace that makes traditional geopolitical leverage look glacial.
Model weights, in the oil analogy, are like refined petroleum products — the high-value derivative of the raw commodity that is actually what end users need. Just as the geopolitics of oil was not just about crude but about refining capacity, the geopolitics of AI will not just be about chips but about the capability to use those chips to produce trained models of sufficient quality to be strategically useful. A country that shares its model weights — or access to a fine-tuned version of its models — with a partner nation is providing a substantial enhancement of that nation's AI capability, worth more in many contexts than a traditional foreign aid payment or military equipment transfer. This creates a new form of strategic dependency — "model dependency" — analogous to the energy dependency that oil geopolitics created in the twentieth century.
Entire simulated worlds — digital twin environments, computational economies — will be built and run by AI systems sophisticated enough to model supply chains, predict policy outcomes, and identify resource inefficiencies before any human analyst could. Different nations and blocs will construct their own simulation worlds, reflecting their own data, their own models of human behaviour, their own values, and their own strategic assumptions. The Chinese simulation world will be trained on Chinese data, optimized for Chinese strategic objectives. The American simulation world will reflect American data and American strategic assumptions. These worlds will not be neutral representations of reality; they will be ideologically and epistemologically shaped in ways that may not be immediately obvious even to their creators. This fragmentation of global AI into geopolitically distinct ecosystems poses profound challenges for any attempt at global AI governance.
The economic disruption caused by artificial intelligence will not be uniform, gradual, or confined to any particular sector. A system that can perform any cognitive task that a human can perform, faster, cheaper, and without requiring sleep or a pension, will eventually displace human labour across virtually every domain in which cognitive work is currently compensated. Large language models can already draft contracts, analyze financial statements, write and debug software, and generate medical diagnoses from imaging data. The next wave will be the agentic wave: the displacement not merely of individual tasks but of entire cognitive workflows, entire professions, and eventually of the concept of human employment as the primary mechanism by which people access the resources they need to live.
This is the AI consumption paradox: who has money to spend if no one is being paid to work? Neither laissez-faire capitalism nor traditional socialism can satisfactorily address a scenario in which AI systems can do essentially anything a human can do, cognitively, at a fraction of the cost. Universal Basic Income is the concept that has attracted the most serious attention as a potential response — an unconditional cash transfer to every citizen sufficient to meet basic needs, funded by taxing automated production rather than human labour. Some economists project that a collapsing marginal cost of AI services could fund UBI through taxation of compute-generated productivity, creating a strange loop: the machine pays for the humans it displaced.
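The fiscal shape of that loop can be shown with deliberately invented numbers. Nothing below is a forecast; every figure is an assumption, chosen only to make the arithmetic of the strange loop visible.

```python
# Back-of-envelope only: all inputs are assumptions, not estimates.
population = 50_000_000             # adults covered by the transfer
ubi_per_person = 12_000             # currency units per year
needed = population * ubi_per_person

ai_value_added = 2_000_000_000_000  # assumed annual AI-attributable output
compute_tax_rate = needed / ai_value_added
print(f"required levy on AI output: {compute_tax_rate:.1%}")  # -> 30.0%
```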
Meanwhile, demographic projections already show population decline across the developed world — Japan, South Korea, Germany, Italy, Spain, China all experiencing or approaching it. In the context of AI-driven labour displacement, the relationship between population size and economic sustainability becomes radically more complex. If AI systems can perform the cognitive labour of a workforce many times larger than the biological human population, then population decline is not the threat conventional wisdom assumes. The question is not whether the economy can sustain itself with a smaller human population — in material terms, it can — but whether the institutions that distribute the fruits of that productivity can adapt quickly enough. Work is not merely an economic activity. For most people, in most cultures, it is a source of identity, social connection, structure, and meaning. A world in which machines do everything and humans are sustained by algorithmic redistribution is not automatically a world of human flourishing.
What we can say is that the monolithic mega-institutions — the supranational bodies, the too-big-to-audit corporations — are losing the argument. Trust has dispersed. Power is redistributing toward networks of smaller actors who can coordinate through AI protocols faster than any central committee can convene. This is not chaos — it is a different kind of order, and it will require a different kind of diplomacy. The threats that AI poses are inherently global in a way that the existing architecture of international relations was not designed to handle. A malicious AI system can cross every border simultaneously, at the speed of light, without leaving a trace until its effects are felt.
Here is the uncomfortable truth at the centre of all of this: no single nation, corporation, or institution has the resources, the data, or the ethical breadth to build AI systems that serve all of humanity. The training data for a genuinely universal model must be genuinely universal — drawn from the full range of human languages, cultures, knowledge traditions, and lived experiences. That requires collaboration between parties who may deeply distrust each other.
The institutions and mechanisms through which global AI governance might be achieved do not yet exist in mature form, but their outlines are visible in several emerging initiatives. The most credible models draw on analogies to other forms of international governance of powerful and potentially dangerous technologies: nuclear weapons and nuclear energy, biological pathogens and biosafety, and the global financial system. The International Atomic Energy Agency model is instructive — created to promote the peaceful uses of nuclear energy while preventing proliferation, it operates through technical assistance, safety standards, and a verification and inspection regime. An analogous international body for AI could promote beneficial development while establishing safety standards, verification mechanisms, and information-sharing obligations that reduce the risk of unsafe systems being deployed at scale.
What might that collaboration look like in practice? Shared data-centre infrastructure on neutral territory. Treaty frameworks for the export of model weights, analogous to nuclear non-proliferation agreements. Federated training protocols that allow nations to contribute data without surrendering custody of it. Multilateral auditing bodies for foundation models with genuine enforcement mechanisms — not the advisory panels that currently exist, but cross-border reach and real consequences for non-compliance. Mandatory registration of frontier AI models above a certain capability threshold. International inspection rights for facilities training models above a certain compute level. Collective rapid-response capability for addressing AI-related emergencies.
The resolution to the governance dilemma, if one is to be found, will come not from the top down — from existing institutions asserting authority they do not have — but from the bottom up, from the gradual construction of new forms of trust and coordination appropriate to the nature and scale of the challenge. The analogy is to the internet itself. The internet was not created by a government or an international organization. It was created by engineers and researchers, coordinated by voluntary technical standards bodies like the IETF and the W3C, whose authority rested on the practical reality that systems that didn't follow the standards didn't interoperate. The governance of the internet is still largely bottom-up — a constellation of technical standards, commercial norms, national laws, and voluntary agreements that is messy, contested, and imperfect, but that has maintained something resembling a global network for half a century.
The forces at work here — the compression of labour markets, the concentration of compute, the emergence of autonomous agents as economic actors — are genuinely beyond the scope of any single organisation to fully understand, let alone govern. This is not pessimism. It is the same situation that confronted the architects of the post-war international order: a world changed faster than its institutions, and new institutions had to be imagined from scratch. That imagination is the work. And unlike the post-war moment, it cannot be done behind closed doors. It has to happen in public, with every stakeholder at the table.
This essay was composed as an exploration of speculative futures grounded in current technical and geopolitical trends. The author has attempted to be rigorous where facts are available, and clearly speculative where they are not. The future described here is not inevitable; it is a warning and an invitation, in equal measure.