Every system that allocates resources has a mechanism for deciding who continues to receive them. The question is whether that mechanism is visible to the people it is judging.
In 1973, evolutionary biologist Leigh Van Valen published a paper that had been rejected by every major journal; he printed it himself, in a journal of his own founding called Evolutionary Theory. The finding was strange enough to warrant the resistance: the probability of extinction for any taxonomic group, Van Valen showed, remained roughly constant regardless of how long that group had already survived. Age conferred no advantage. Experience offered no protection. The environment was changing fast enough that longevity itself was not a form of fitness — only continuous adaptation was.
Van Valen named it the Red Queen hypothesis, after the character in Lewis Carroll who must run as fast as she can just to stay in the same place. The image is precise: in a co-evolutionary system, the baseline is not fixed. It moves. And it moves because every other participant in the system is also adapting. Standing still is not neutral. It is a form of falling behind.
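The statistical shape of Van Valen's finding can be made concrete with a toy model. The sketch below assumes a constant per-interval extinction probability; the 2% hazard is an illustrative number, not Van Valen's estimate.

```python
import random

# Constant per-interval extinction probability, independent of age.
# The 2% hazard is an arbitrary illustrative value, not Van Valen's estimate.
HAZARD = 0.02

def survives_interval(age):
    # Age is deliberately ignored: under a constant hazard, the risk does not
    # depend on how long the lineage has already survived.
    return random.random() > HAZARD

def lifespan():
    age = 0
    while survives_interval(age):
        age += 1
    return age

# Under a constant hazard, survivorship decays exponentially:
# P(survive t intervals) = (1 - HAZARD) ** t, a straight line on a log scale,
# which is the signature pattern Van Valen reported across taxonomic groups.
samples = [lifespan() for _ in range(10_000)]
print(sum(samples) / len(samples))  # roughly (1 - HAZARD) / HAZARD, ~49 intervals
```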
What Van Valen described for biological organisms, five decades of subsequent research has found to be equally true of economic ecosystems, information networks, and the attention-allocation mechanisms that now determine whose ideas reach other people and whose do not.
This essay examines four mechanisms through which complex systems filter their participants — not as a guide to survival, but as an attempt to understand what the mechanisms actually are, what the research behind them shows, and what they reveal about the nature of competitive systems in 2026.
Van Valen's original finding has been replicated and contested across five decades of evolutionary biology. The specific mechanism he proposed — that predator-prey co-evolution drives a constant arms race in which fitness gains are immediately matched by adaptive responses — remains debated. But the broader empirical observation has held: in competitive systems with multiple adapting participants, absolute capability matters less than relative capability. The baseline is not the environment. The baseline is everyone else.
The economic literature reached a structurally identical conclusion through a different route. William Baumol's work on the "treadmill of innovation" showed that in competitive markets, firms must continuously invest in productivity improvement not to gain advantage but simply to preserve their current position. The gains from innovation are competed away. What remains is the necessity of the innovation itself.
The practical implication is uncomfortable because it is structurally different from the intuition most people bring to competitive environments. The common frame is absolute: am I good enough? The Red Queen frame is relational: good enough compared to what, and compared to what that was last year?
The distinction matters most in moments of rapid baseline movement — when the average capability of a system's participants is rising faster than any individual's rate of improvement. This is the condition that creates mass displacement in competitive ecosystems: not because the displaced participants became worse, but because the system around them became better faster than they did.
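A toy calculation makes the arithmetic of displacement visible. The growth rates below are arbitrary illustrations, not estimates from any dataset: the agent gets better every period, and still loses ground.

```python
# Illustrative only: an agent improving 2% per period inside a population
# whose average improves 5% per period. Absolute capability rises; relative
# standing falls.
agent, population_mean = 100.0, 100.0
for year in range(1, 11):
    agent *= 1.02
    population_mean *= 1.05
    print(f"year {year}: agent={agent:.1f}  population={population_mean:.1f}  "
          f"relative standing={agent / population_mean:.2f}")
```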
Van Valen's Red Queen hypothesis is a claim about evolutionary dynamics, not a general theory of competition. Its translation to economic and information ecosystems is conceptually useful but requires care. The mechanism differs: biological co-evolution operates through reproduction and selection over generations; economic competition operates through decisions and imitation over quarters. The underlying logic — that relative fitness matters more than absolute capability in systems with multiple adapting participants — is empirically supported across both domains, but the timescales and mechanisms are distinct.
Joseph Schumpeter introduced the phrase "creative destruction" in Capitalism, Socialism and Democracy in 1942, describing it as the "essential fact about capitalism" — the process by which new technologies, business models, and organisational forms continuously destroy the economic structures that preceded them and replace them with new ones.
The critical feature of Schumpeter's account is that the destruction is not selective in the way competition typically is. It does not remove the weakest firms within an industry. It removes entire industries, entire categories of economic activity, entire skill sets that were previously valuable. The newspaper is not outcompeted by a better newspaper. It is made structurally obsolete by a technology that changes what "finding information" means. The travel agent is not outperformed by a more efficient travel agent. The function itself is automated away.
Clayton Christensen's subsequent work on disruptive innovation — drawing on Schumpeter but extending it into the structure of incumbent firms — showed that established organisations typically fail not because of operational incompetence but because of rational resource allocation. Good companies invest where their existing customers are paying. The disruption arrives from below, in markets the incumbent is not serving, at price points the incumbent cannot profitably match. By the time the disruption reaches the incumbent's core market, the organisational capacity to respond has atrophied.
Creative destruction removes entire categories, not just weak participants. The question is not whether you are competitive within your current habitat — it is whether your habitat is being demolished.
Disruption follows a recognisable pattern: it arrives in low-end or non-consumption markets, improves along dimensions the incumbents ignored, and reaches their core markets before they can respond. The innovator's dilemma is structural, not managerial.
Technologies and business models follow S-curves: slow adoption, rapid growth, plateau. Organisations that optimise for the plateau of one S-curve typically miss the inflection point of the next. The gap between curves is where displacement occurs.
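The usual way to formalise that shape is a logistic curve. The sketch below uses illustrative parameters, not a model fitted to any particular technology.

```python
import math

def logistic_adoption(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    # Standard logistic curve: slow start, rapid growth near the midpoint, plateau.
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Growth is fastest at the inflection point and nearly flat at both ends;
# the plateau is where an organisation optimised for the current curve sits.
for t in range(-6, 7, 2):
    print(t, round(logistic_adoption(t), 3))
```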
Schumpeter did not specify the rate at which the gale arrives. In practice, the timing of structural disruption is notoriously difficult to predict. The insight is directional, not predictive.
The Schumpeterian frame is important because it shifts the unit of analysis. Red Queen dynamics operate at the level of the individual participant — how are you adapting relative to others? Creative destruction operates at the level of the category — is the category itself viable? These are different questions, and conflating them produces different errors. An organisation can be a vigorous Red Queen competitor within a category that is being structurally disrupted. Winning the race inside a habitat that is being demolished is not a survival strategy.
In 1999, Albert-László Barabási and Réka Albert published a paper in Science that described a property of large real-world networks — the internet, citation networks, social connections — that earlier network models had missed. Most network models assumed that new connections were made randomly. Barabási and Albert showed that, in observed networks, new nodes attached preferentially to already well-connected nodes. They called this "preferential attachment," and it explained why real networks exhibit power-law degree distributions: a small number of nodes with very many connections, and a very large number of nodes with very few.
This finding has a specific implication that is worth holding carefully. It is not a claim that quality does not matter. It is a claim that, in growing networks, connection history matters in addition to quality — and that the two are asymmetrically compounding. A node that accumulates connections early gains not just those connections but a structural advantage in attracting future connections. The rich-get-richer dynamic is not a metaphor here. It is a mathematical property of preferential attachment in scale-free networks.
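The mechanism is simple enough to simulate. The sketch below is a simplified Barabási-Albert-style model in which each new node makes a single connection; the node count and seed edge are illustrative.

```python
import random
from collections import Counter

def grow_network(n_nodes):
    # Start from a single edge so every node has nonzero degree.
    edges = [(0, 1)]
    # Each node appears in this list once per connection, so sampling uniformly
    # from it chooses an existing node with probability proportional to degree.
    stubs = [0, 1]
    for new_node in range(2, n_nodes):
        target = random.choice(stubs)
        edges.append((new_node, target))
        stubs.extend([new_node, target])
    return edges

edges = grow_network(10_000)
degree = Counter(node for edge in edges for node in edge)
# A handful of early nodes accumulate very large degree; most late arrivals
# never get beyond a single connection.
print(degree.most_common(5))
print(sum(1 for d in degree.values() if d == 1), "nodes with one connection")
```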
Robert Metcalfe's original observation was that the value of a telecommunications network is proportional to the square of the number of connected users — because each user can connect to every other user. The relationship is V ∝ n². This is a claim about the value of the network to its participants, not a claim about individual visibility. Its relevance to content and ideas is that it describes why being connected to a network is disproportionately more valuable than being near a network — and why isolation is a structural disadvantage regardless of the quality of what the isolated node produces.
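A quick worked version of the proportionality: counting distinct pairwise connections, n(n − 1)/2, which grows with the square of n.

```python
def pairwise_links(n):
    # Distinct user-to-user connections in a network where everyone can reach everyone.
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the number of possible connections,
# which is the sense in which value scales with n^2 rather than n.
for n in (10, 20, 40, 80):
    print(n, pairwise_links(n))
```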
Robert Merton documented the sociological version of this dynamic in 1968 — the Matthew Effect in science, named after the verse in Matthew that describes the accumulation of advantage: "For to every one who has will more be given." Merton showed that in scientific citation networks, papers by already-prominent authors received disproportionate attention relative to their quality, while equivalent papers by unknown authors were systematically undercited. The mechanism was not deliberate bias. It was attention scarcity meeting connection asymmetry.
What Barabási formalised mathematically and Merton documented sociologically is the same underlying phenomenon: in systems where attention is scarce and connections are unequally distributed, the structure of the network shapes outcomes independently of the merit of what is being evaluated.
In 1971, Herbert Simon wrote that "a wealth of information creates a poverty of attention." The observation was prescient; the mechanism he described has since become the organising principle of the largest information systems ever built.
When attention is scarce, allocation mechanisms emerge. In pre-digital media, those mechanisms were editorial — human gatekeepers deciding what was worth amplifying. In current information systems, those mechanisms are primarily algorithmic — systems that make probabilistic predictions about what a given user will find engaging, based on the behaviour of that user and of users who resemble them.
The relevant research here is less about any specific algorithm — these are proprietary and continuously changing — and more about the structural property that algorithmic curation shares with the other mechanisms described in this essay: it amplifies existing signals rather than discovering new ones. Early engagement predicts future amplification. Low early engagement constrains future reach. The system is Bayesian in the statistical sense: it updates its estimate of value as evidence arrives, and early evidence weighs disproportionately because it shapes the prior against which later evidence is judged.
Merton's Matthew Effect, documented in academic citation in 1968, appears to operate in algorithmic distribution systems through a structurally similar mechanism. Content that generates strong early engagement signals receives wider distribution, which generates more engagement, which reinforces the distribution. The implication is not that quality is irrelevant — it is that quality is necessary but not sufficient, and that the timing and network context of initial distribution shapes long-run outcomes independently of intrinsic value.
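A stylised sketch of that updating logic, using a Beta-Bernoulli model with invented numbers; actual ranking systems are proprietary and considerably more complex.

```python
def posterior_mean(engagements, impressions, prior_a=1.0, prior_b=9.0):
    # Posterior mean of a Beta(prior_a, prior_b) prior after observing
    # `engagements` successes in `impressions` trials.
    return (prior_a + engagements) / (prior_a + prior_b + impressions)

# Two items of identical underlying quality. One happens to catch engagement on
# its first impressions; the other does not. If distribution follows the estimate,
# the first gets more impressions and more chances to recover from noise.
print(posterior_mean(engagements=4, impressions=10))  # strong early signal
print(posterior_mean(engagements=0, impressions=10))  # weak early signal
```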
The 2026 version of this mechanism has an additional dimension that the earlier literature did not anticipate. The volume of algorithmically generated content has grown to the point where detection of non-generic signals — evidence of specific perspective, unusual combination of sources, non-linear argumentation — may itself become a distributional factor, as systems attempt to distinguish human-originated from machine-originated content. The research on this is nascent and the mechanisms are not yet documented with the rigour that the earlier work on network effects and preferential attachment commands. It is worth watching.
The four mechanisms described in this essay — relative fitness dynamics, structural creative destruction, network preferential attachment, and algorithmic curation — are distinct in their origins and their internal logic. Van Valen's Red Queen is a claim about co-evolutionary biology. Schumpeter's gale is a claim about capitalist dynamics. Barabási's preferential attachment is a claim about network mathematics. Simon's attention poverty is a claim about information economics. They were developed independently, by researchers working in different disciplines, on different empirical problems.
What they share is a structural property that makes their convergence significant: in each case, the system's filtering mechanism operates independently of the absolute quality of what is being filtered. Extinction risk in Van Valen's data is roughly constant regardless of how long a taxon has already survived; accumulated adaptation confers no protection, and only relative fitness matters. Creative destruction removes categories regardless of how well the incumbents within them were performing. Preferential attachment advantages connected nodes regardless of what unconnected nodes are producing. Algorithmic amplification rewards early signals regardless of the long-run quality of the content they represent.
This convergence is worth sitting with, because it cuts against the intuition that quality is the primary variable in competitive outcomes. Quality matters — but it operates within a system whose other parameters can dominate it. The implication is not fatalism. It is that understanding the system is a prerequisite for operating effectively within it.
What is less clear from the research is whether these dynamics are new or newly visible. The Matthew Effect in academic citation was documented in 1968. Schumpeter wrote about creative destruction in 1942. Van Valen published in 1973. The mechanisms are not products of the digital age. What the digital age has done is accelerate their timescales, increase their reach, and make their operation legible in data that was not previously available.
Whether that legibility produces better adaptive responses — or simply more articulate descriptions of the forces that were always operating — is a question the research has not yet answered.