The Coordinated Swarm
Revolutionary disaster preparedness can, if necessary, exploit the previously described authoritarian weakness of plausible deniability. A coordinated swarm, rather than a centralized organization with a dictated structure and strategy, can exploit both the bandwidth limitations and the variety limitations inherent to authoritarian systems. A swarm is a special type of threat that, at a certain scale, becomes impossible to oppose.
While critiques of mass organizing have existed for decades, it can still be hard to imagine organizing a large scale movement without also thinking about centralization. We have been trained to imagine social network structures as hierarchies.
Yet we may also be aware of the potential of “flash mobs” or of strategies like black bloc. For those unfamiliar, black bloc is a strategy of leaderless resistance in which a group of people all dress in black (covering their faces and any other identifying marks) so that they can act as an anonymous group. Because no leader can be identified, the group organizes organically. Even very large protests can be easily managed by police, provided those protests have centralized leadership. But the black bloc can, and often does, fragment and disperse. This can rapidly become impossible for police to manage. While some blocs distract police, others can destroy the property or infrastructure of an oppressive regime. Sometimes the chaos is enough to cause police defenses to collapse entirely.
Leaderless resistance is notoriously hard for state actors to infiltrate and suppress. Occupy was crushed by the largest coordinated police action in US history. On the digital front, Anonymous remains a significant and difficult-to-mitigate threat because of how unpredictable a distributed group can be. It is simply impossible to predict the actions of such a group, and impossible to hire enough security engineers to protect the large organizations it targets.
With the development of digital social networks, the data they provide, and the science of social network analysis (which is worth reading about), we're able to understand much more clearly that there are different social network shapes. Not only that, but different network shapes have different properties. We are now able to talk about the tradeoffs of different network structures, and defend any decision we make about such networks with data. But what is a network?
“Network” is a term used to describe how things, in this case people, interact. What do we mean by the word “shape” when talking about social networks? We're talking about what interactions are allowed or develop within the system.
When playing the game of “telephone,” everyone sits in a circle. Each person is allowed to listen to the person on one side of them, and allowed to speak to the person on the other side. If we were to draw this as a technical graph, we would represent each person as a circle (called a “node” or “vertex”) and each interaction as an arrow (called an “edge”). We would want to draw out a network like this with as few lines (edges) crossing as possible to avoid confusion. The natural way to do that would be to draw it as a circle. So the network shape of the game of “telephone” matches its physical shape of a circle. We would probably call the shape of this network a “ring.”
Of course, physical and network shape don't always match. Thanksgiving conversations may happen around a table (physically similar to a circle), but imagine you drew each person as a node and drew lines connecting everyone who talked to each other. Depending on the size of the table, how well people know each other, personalities, and how much alcohol there is, the network could look like a set of small disconnected clusters or like a tight web (difficult or impossible to draw without crossing lines). This would either be a “fireworks” network, if it was clustered, or just a single “firework” if everyone is connected. If people talked to their neighbors and perhaps a person across the table (but not everyone at the table), this might be called a “fishing-net” network.
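These shapes are easy to make concrete. Below is a minimal sketch in Python (the function names and the size of 8 are my own illustrative choices, not standard terminology) that builds each of the three shapes described above as an adjacency structure and compares how densely connected they are:

```python
# Three toy network shapes, each as a mapping from node -> set of neighbors.

def ring(n):
    # "telephone": each person connects only to the two people beside them
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def firework(n):
    # everyone connected to everyone else (a complete graph)
    return {i: set(range(n)) - {i} for i in range(n)}

def fishing_net(n):
    # neighbors on either side, plus one person "across the table"
    return {i: {(i - 1) % n, (i + 1) % n, (i + n // 2) % n}
            for i in range(n)}

def density(g):
    # fraction of all possible edges that actually exist
    n = len(g)
    edges = sum(len(nbrs) for nbrs in g.values()) / 2
    return edges / (n * (n - 1) / 2)

for name, g in [("ring", ring(8)), ("fishing-net", fishing_net(8)),
                ("firework", firework(8))]:
    print(name, round(density(g), 2))
```

The ring is the sparsest, the firework (a complete graph) is fully dense, and the fishing-net sits between them: mostly local clustering, with a long-range tie.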
Now, if we imagine the shape of authoritarianism as a network we can begin to visualize the bandwidth restrictions, and resulting turboparalysis, described earlier. Variety (also described earlier) is a product of the interaction of diverse nodes. Hierarchy both restricts nodal interaction and bandwidth from the larger pool of nodes. Therefore, hierarchy necessarily has a lower capacity for variety than does a more egalitarian network.
One would assume that an egalitarian network with centralized coordination would be optimal, but the truth is a bit more complex. Damon Centola describes an experiment to test “innovation” (which could be used interchangeably with “variety”) in his book Change: How to Make Big Things Happen:
We recruited 180 data scientists from university campuses and job boards, and randomly divided them into sixteen teams—eight organized into fireworks patterns and eight into fishing-net patterns. On the eight fireworks teams, the researchers (or “contestants”) were completely connected with their teammates. Information flow was maximized. The team network was a dense pattern of fireworks explosions. Everyone on a team could see all of their teammates’ best solutions as they discovered them.
Researchers were being paid to solve a data science problem. Fireworks teams were all connected to each other and able to see each other's work, while fishing-net teams were only able to see the work of a few team members. Fireworks teams got answers much more quickly, but the best answers came from the fishing-net teams. From the book again:
Devon and I discovered that the problem with the fireworks network was that good solutions were spreading too quickly. People stopped exploring radically different and potentially innovative approaches to the problem.
What we learned was that discovery, like diffusion, requires social clustering.
The reason is that clustering preserves diversity. Not demographic diversity. But informational diversity.
So a distributed network, rather than a centralized one, has a higher capacity to generate variety. Returning to cybernetics, we tend to think about organizations as being coordinated, by people, intentionally. But organization doesn't exactly need to work like that.
A religion is necessarily made up of multiple groups (churches, temples, etc.), themselves organized into groups (sects, branches, tendencies) that can have little or no centralized control. Religious sects can be so different that they have fought wars with each other, but may later act in a more unified way, say, when a group votes more-or-less as a bloc on a specific issue. Political and anti-political groups may act in similar ways. Anarchists may or may not identify with one or more anarchist tendency. They may disagree strategically or tactically in a situation, may choose not to work together on projects, but may still align on other goals or strategies. Anarchists will often collaborate harmoniously with tendencies they otherwise criticize to put together events, like book fairs (where they will again argue with and criticize other tendencies, but within a unified space).
On the most radical end of distributed collaboration, algorithmic violence and stochastic terrorism allow leaders from Osama bin Laden to Tucker Carlson to call for harassment, attacks, and even assassinations against opponents in a way that maintains plausible deniability. (This can, occasionally, backfire, such as in the case of neo-nazi ghost writer Milo Yiannopoulos, or, even more spectacularly, white nationalist stochastic terrorist Charlie Kirk.) Right wing stochastic terrorism has quite a long history in the US, being used successfully to kill US Civil Rights agitators, organizers, and politicians, including Martin Luther King and John F. Kennedy. It's not hard to argue that the Red Summer of 1919 was largely kicked off by a distributed campaign of stochastic terrorism, in a very similar style to the tactics later used to incite the Rwandan and Bosnian genocides.
Some time after the end of legal segregation in the US, the Republican party realized it could no longer make keeping or restoring segregation the core of its platform. Aligning with evangelical Christians, Republicans began to promote an anti-abortion message. Anti-abortion terrorists bombed clinics and killed providers, coordinated only by a shared religious identity and a common media.
Nazi terrorist groups and mass shooters have continued to act based on, among other things, the book “Siege.” With no central command and control, these terrorists have carried out a campaign of violence so extensive it's hard not to recognize it as a civil war. One group of Nazis who were also US Army soldiers was even found to be building a dirty bomb. Yet legacy media remains unwilling to call this loosely coordinated terrorism anything but “lone wolf attacks,” despite the obvious pattern.
But radically stochastic organization isn't simply limited to terrorism and genocide. Open source software is its own ideology that elicits its own behavior. While many projects are centrally coordinated, large enough projects can invert the capitalist control model. Rather than a central organization demanding that tasks be completed, the central organization largely exists to coordinate, optimize, and provide a conflict resolution function.
Development teams act as operational units which work within the strategic objectives of the open source ideology. These operational units complete tasks (often at the request of a classically hierarchical business). They may coordinate with maintainers or standards bodies. Then they ask for their changes to be merged. A well maintained piece of software will have a well developed system 5 (identity, authority, policy) in the form of things like a clear mission statement, coding standards, and interface documentation. It will also provide conflict resolution (system 2) during merges, and will look for optimization opportunities (system 3) during merges or in community forums such as mailing lists. This sometimes leaves adaptation and forward planning mostly in the hands of users who submit feature requests and the operational units choosing which functionality to implement. (This ends up as a very nice, if unusual, system in which the environment directly feeds information into system 4, rather than system 4 seeking new information.)
Outside of a specific project, the open source movement remains largely coordinated but even less centralized. Developers start new projects based on their own perceived need or desire. In this case identity comes from the license they choose. They coordinate with other projects (sometimes even competitors) using news streams, mailing lists, and other wider media. Conflicts are not always resolved directly, but are sometimes accepted (there isn't a problem with multiple overlapping editors because people like different things). Conflicts that do need to be addressed may be identified, again, by users as bug reports or support requests or via testing. Conflicts are then resolved by coordinating directly with the team maintaining the problem software. Optimization similarly can happen via standards bodies, protocol documentation, or other public documentation.
Open source development and maintenance can be extremely complex, chaotic, and challenging. But it has proven itself, repeatedly, to be superior to closed alternatives. Open source software has become the dominant model for the development of the vast majority of software that runs the Internet. And it does this with loose organization that sometimes is hard to believe.
It can be hard to imagine the years, decades, perhaps centuries of human hours of work that have gone into open source software, guided only by a vision of freedom and sharing. It's impossible to overstate the value that this work has provided back to humanity. And yet it is, perhaps, not even the simplest system to have produced this level of complexity (though perhaps that can only be attributed to time constraints).
No, we have others, and one may spring to mind: capitalism. From every useful product to every scam, markets drive the evolution of ideas with the fitness function of “maximization of capital.” Let's talk about these terms for a moment.
If you are unfamiliar with genetic algorithms the term “fitness function” may also be unfamiliar. Actually, if you're unfamiliar with “genetic algorithms,” the term “genetic algorithm” might be a bit hard to wrap your head around. So let's start there.
A “genetic algorithm” is where a programmer defines “constraints” (boundaries on how the system works) and the computer tries a bunch of things until it finds a solution. But it doesn't exactly just try a bunch of random things, or even try a bunch of stuff from a list. There's another term for “genetic algorithm” which is “evolutionary algorithm.” This might give some hints as to how the system works, for anyone familiar with evolution.
In the natural world, organisms that reproduce more are more common. That's almost a tautology, but the obvious truth of the statement reveals a bit about how simple it really is. This simplicity will become relevant later. Genes in an organism define how the organism is built and how it operates. Genes that create organisms more likely to reproduce are then spread on to the next generation. Depending on the reproduction method, genes may randomly mutate over time or may be (somewhat) randomly combined to make new genetic sequences. The technical term used to describe an individual that survives to reproduce, in evolutionary terms, is “fit.”
A “fitness function” in genetic programming is a thing that measures individuals from a population to determine which ones are the most “fit” to “reproduce.” A genetic algorithm will often start with a population of randomly generated values. The fitness function then measures those values and selects the ones with the highest fitness scores. These are then combined with each other in different ways based on a set of rules (depending on the problem the programmer is trying to solve) to create a new population, and the whole thing runs again. The program keeps running, generation after generation, until a stopping point is reached. This could be reaching a maximum score, hitting a maximum number of iterations (useful when a maximum score is not achievable), or seeing fitness scores stop changing for some number of iterations.
Concretely, let's say we're trying to find factors of a very large number. We can start with a population of 1000 groups of numbers randomly selected from between 2 and the square root of that number. Now, to check fitness, we multiply the numbers in each group together and find out how far the product is from our target number. We take the closest 10 and create 980 combinations from them, then we randomly generate 10 new groups to add back in. For our combinations we could take every other number from two groups and interleave them, we could take the first half of one group and combine it with the second half of another, and so on. Once we have our new population, we start again. We keep going until the difference between the product of one of our groups and the target number is 0. When that happens, we've found some factors.
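As a sketch, the scheme above can be written down in a few dozen lines of Python. Everything here (the name `find_factors`, the population sizes, the mutation rate) is an illustrative choice scaled down so it runs quickly, not a prescription:

```python
import math
import random

def find_factors(target, group_size=4, pop_size=200, keep=20,
                 newcomers=10, max_generations=5000, seed=0):
    """Toy genetic search for factors of `target`, following the text's scheme."""
    rng = random.Random(seed)
    lo, hi = 2, max(2, math.isqrt(target))

    def rand_group():
        return [rng.randint(lo, hi) for _ in range(group_size)]

    population = [rand_group() for _ in range(pop_size)]
    for _ in range(max_generations):
        # fitness: how far the group's product is from the target (0 wins)
        population.sort(key=lambda g: abs(math.prod(g) - target))
        if math.prod(population[0]) == target:
            return population[0]
        parents = population[:keep]
        children = []
        while len(children) < pop_size - keep - newcomers:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, group_size)
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < 0.5:                  # occasional small mutation
                i = rng.randrange(group_size)
                child[i] = min(hi, max(lo, child[i] + rng.choice((-1, 1))))
            children.append(child)
        # fresh random groups preserve diversity, as in the text
        population = parents + children + [rand_group() for _ in range(newcomers)]
    return None  # no exact factorization found within the budget
```

For example, `find_factors(840)` searches for four numbers between 2 and 28 whose product is 840 (one such group is 4, 5, 6, 7). Like the text's version, it mixes selection, crossover, mutation, and fresh random groups.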
Genetic algorithms are extremely useful in finding (good enough) solutions to really complicated problems that were considered unsolvable before. By capturing the power of evolution, with a very simple set of rules, humans can make computers do really complicated things. But it's not really just computers.
If we return from our detour into genetic programming, we're using the word “fitness function” to describe something happening under capitalism. Surely we can't say this because businesses don't “breed,” do they? Well… not exactly. A successful business may become a model for others, and there's a whole industry devoted to selling “tips and tricks” on how to emulate rich people. Large companies are, necessarily, successful companies. People who work at those companies often carry with them ideas from their former employers about how to organize as they join other companies or start their own businesses. So, memetically, yes, pieces of the sets of ideas that make a company successful are then injected into other companies to create new populations of companies.
Some systems are defined primarily by their fitness function. Markets, then, one could argue, are a type of genetic algorithm. They are systems that offload metasystemic functions either up to the capitalist fitness function, further up to a government's market regulation, or down to the operational units they are evolving.
Evolution is not simply something that nature does. It's something that we do, intentionally or unintentionally, all the time. We evolve natural language, art and visual themes, programs, and markets. We often don't realize that we're creating evolutionary systems.
There are often times when intentionally built systems are incapable of handling the complexity of reality. But, and this is critical to remember, absolutely nothing stops us from designing evolutionary systems. Human engineered evolutionary systems are absolutely not restricted to computers. We have clearly demonstrated that social systems can also be evolutionary.
Capitalism makes this especially easy because it uses an easily quantifiable fitness function. You know which business is the most successful because it has the most money. You can look at the spending in your business to identify opportunities for improvement. It's hard to imagine a system that could be better. Or so one could easily be led to believe, if one understood absolutely nothing about how almost anything in the modern world works.
Capitalism is absolutely an evolutionary algorithm. This is true. But there are a number of things that partially or completely negate the benefits listed in the previous paragraph. One of the more thorny of these is the problem of “costing.” There's a secret in the medical field: no one knows how much anything actually costs. Any bill you get from a hospital is almost completely, if not completely, made up.
Doctors don't really keep track of the time they spend on different tasks because they can't. They're actually doing things. The overhead of recording all the things would make actually doing them impossible. The same is true for most of the medical staff. Inventory can't be tracked per-patient. No one knows how many meters of bandage, or tongue depressors, or pairs of gloves a specific patient uses. Even medicine may be tracked poorly, depending on a lot of factors. Machines, such as MRIs, aren't charged based on how much electricity it takes to run a scan, or how many hours are spent by diagnostic specialists, or how much the radioactive kool-aid you have to chug before going into one costs. No. When they charge insurance they make things up. They basically divide the operating expenses by the number of people who visit, do some fancy shuffling to make things believable, and then they send a bill. They may send another bill later because they need more money. None of it is real in any sense.
And this “costing” problem is true in almost every industry. The problem of measuring programmer efficiency is a well known one. Developers will often make fun of managers and their attempts to quantify an unquantifiable thing. If you measure lines of code, then developers can game the system by writing useless lines. The best code tends to be small and elegant. So should you then reward people who write less code? Then a developer wins by writing nothing at all. But some of the best code changes are actually ones that remove lines of code, so the best developers may actually subtract lines of code from a code base.
The problem compounds even more with additional abstraction. How do you even measure what a security engineer does? If you measure bug count, then you're actually incentivizing individual fixes rather than systemic fixes that eliminate classes of bugs moving forward. Then should you reward a lower bug count? That's just obviously wrong. But the primary data you have is bug count. So what do you do? There are extremely complex ways to reduce this problem, but most people have no idea what they are. There will always be an unquantifiable element. So the quantifiable part of capitalism is somewhat deceptive.
But evolutionary algorithms are a bit trickier than their apparent simplicity would imply. One of the most interesting properties of evolutionary algorithms, and evolution more generally, is that they can have unexpected side effects. See, a fitness function just measures fitness. It doesn't actually know why something is “fit.” The capitalist fitness function of accumulation of capital doesn't know where that capital came from, or how. The fitness function doesn't restrict the things that a company can do to reach that goal.
Thus one of the more interesting behaviors (and sometimes bugs) that can come from genetic algorithms: side effects. Let's say you have a program that you want to demo, so you want to find the fastest input for the program to process. Your fitness function takes each member of the population, runs it through your program, and times it. You're off to a great start, except after running it you find out that the fastest input was one so garbled that it crashed your program.
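This failure mode is easy to reproduce. The sketch below (Python; the fragile parser and its '{' trigger are invented for illustration) scores inputs by how quickly a program finishes with them, without ever checking whether the run actually succeeded:

```python
import random
import string

def run(text):
    """Stand-in for the demo program: it processes input one character
    at a time, but a (hypothetical) parser bug makes it bail out the
    moment it sees a '{'."""
    steps = 0
    for ch in text:
        steps += 1
        if ch == "{":
            return steps, "crashed"
        # ... pretend real parsing work happens here ...
    return steps, "ok"

def fitness(text):
    # "find the fastest input": fewer steps is fitter -- note that the
    # fitness function never asks whether the run succeeded
    steps, _status = run(text)
    return steps

rng = random.Random(0)
population = ["".join(rng.choice(string.printable) for _ in range(200))
              for _ in range(50)]
winner = min(population, key=fitness)
```

Because a crash ends the run early, the “fittest” input the search selects is almost always one that simply crashes the program, which is exactly the side effect described above.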
We see all sorts of side effects under capitalism. Labor markets are supposed to regulate wages, but a cheaper way to drive down wages can be to hire a death squad to murder union organizers. Markets are supposed to drive down costs to consumers, but businesses can externalize costs to society by dumping chemicals in rivers rather than disposing of them properly, leading to expensive cleanup paid for by the consumer. Today there are hundreds of oil rigs rotting off the coast of Texas; oil companies have externalized the cost of cleanup by selling the rigs off to companies that simply go bankrupt rather than fulfill their legally obligated responsibility to clean them up. Sometimes it's simply cheaper to buy the politicians who make regulations, or bribe the executives who enforce them, than it is to comply with them. Other times it's cheaper to simply pay fines than to comply. These are all side effects.
But there are other side effects. Stress and depression can increase consumption, so there's an evolutionary incentive, within the larger system, to make people feel stressed and miserable. Mass media makes money by selling ads, so they have to maintain your attention. Humans evolved to pay attention to danger, so media is incentivized to report on horrible things. But humans are also known to emulate behavior they see, so reporting on horrible things enough can unintentionally manifest that behavior.
We are told that the fitness function of capitalism drives efficiency. This is partially true. When it's cheapest to increase profit by decreasing costs through efficiency improvements, then that's what it does. However, there is a point at which it stops being possible to optimize in that way. Over the past several decades, the age at which children potty train has gone up significantly. Today it's not uncommon for children to be in diapers as late as 4 or even 5. Diaper manufacturers have, over the last few decades, promoted the idea that potty training is difficult. They have led people to believe that babies are incapable of controlling their bladders and bowels. Meanwhile, traditional cultures around the world, and those using a strategy called Elimination Communication, can go without diapers and have no problem getting even infants to the toilet.
When room for improvement shrinks, it can become far more cost effective to instead manufacture desire. This is especially obvious in technology, with new devices forced onto consumers long before the old devices stop meeting their needs. Cars are perhaps the biggest example of this. I'm not going to expand on this; it's already very well covered elsewhere.
Worse than all the side effects and gaps is the fact that maximization of wealth is, by definition, a Malthusian function. This fitness function can never be “fulfilled,” so there is no point at which it's beneficial to stop accumulating. Therefore, the only strategy for this fitness function is “infinite growth.” Organisms are described as “Malthusian” when their growth is exponential but the resources they rely on are static or grow only linearly. This growth pattern leads to what is called a Malthusian catastrophe, where the population collapses as it exhausts the resources it needs to survive.
Your mind probably immediately snaps to climate change, forever chemicals, or the microplastics crisis, but there are any number of interrelated issues currently manifesting as “the polycrisis.” One that fascists love to talk about is population collapse. See, while capitalism is Malthusian, humans are not. So, as pressure increases, people stop having so many babies. Humans, unlike rabbits or reindeer, are animals that plan and think about how to optimize the likelihood of survival for their young. Fascists, unwilling to accept immigration as an acceptable solution to declining birth rates, turn to forced reproductive labor as their solution.
They must make humans Malthusian, because their power rests on the illusion that capitalism is sustainable. And, of course, immigration can't be an acceptable solution for them because their control is also rooted in racial and ethnic stratification that is threatened by demographic changes. We can see, for so many reasons, why capitalism cannot continue.
But this system does manifest a high level of complexity. Even though it's obviously not a good system for most people, even though it's logically incompatible with the physical world, even though it mostly only works on paper, global capitalism remains an overwhelming force in the world.
Capitalism has an astounding way of appropriating and neutralizing all resistance. The image of Che Guevara is printed on a t-shirt made in a sweatshop. Every Guy Fawkes mask sold makes money for the same company that put out borderline fascist propaganda like 300 and The Dark Knight. A metaphor for estrogen in The Matrix gets turned into a whole industry that reinforces patriarchy.
It's easier to imagine the end of the world than the end of capitalism.
But capitalism itself was not something that was so much intentionally created as something that evolved and was later described. Adam Smith didn't create capitalism, he just talked about how he thought it worked. He observed what he believed to be the rules, and recorded them. The system itself evolved from feudalism (and inherited much from it). It was able to overtake feudalism because it was better at managing complexity, because it could produce and consume greater variety.
It was competing in the space of social evolution over a specific niche. At the same time there were others competing for that same niche. Religious proto-communists like the Diggers were also competing for the same space. In fact, Christian communism has a long history, going back to their persecution under the Roman Empire. (We will revisit this later. I promise, it's interesting.) But the environment at the time was more amenable to the development of capitalism. It was a smaller change, one that allowed hereditary aristocracy to continue under a new excuse, than a system that would upend the entire social order. Though, in the environment of a Christian Europe, I suppose it's possible that things could have gone either way.
It is interesting to note the way the feedback loop works in evolutionary systems. Entities within the system evolve to fulfill the fitness function of the system. In nature, they adapt to their environment. But organisms within the environment are also part of the environment. So the fitness of other organisms, as well as other impacts on the environment, can change the environment to open or close other ecological niches.
The manatee family adapted to feed on sea grasses. They competed with another aquatic mammal at the time. Though the other mammal had become highly adapted, the evolution of the manatee family ultimately drove the extinction of their competitors. Even the dominant species can succumb to the adaptation of another species. The reverse, and other variations on this theme, can also be true.
As capitalism evolved, it eventually created space for political changes. It's not hard to argue that the evolution of capitalism changed the socioeconomic environment in such a way as to make room for the evolution of Liberalism as an ideology. Liberalism included capitalism as an assumption.
Evolutionary systems can evolve other systems. This is exactly what a market does. It evolves businesses by forcing them to compete within it. These businesses can be modeled using the Viable System Model, where their viability is determined by how they manage their operational units. These systems may even use metrics to drive improvement within operational units in their own quasi-evolutionary way.
There is a tangled hierarchy between the evolution of Liberalism and capitalism, one driving and influencing the other. It can almost be said, looking at it from the right perspective, that capitalism evolved Liberalism to protect it from both the monarchs it displaced and the people Liberalism came to rule.
Capitalism almost seems intelligent, but why shouldn't it? Any person who has meditated may recognize the flow of thoughts, iterating on a theme, recombining with each other and with other bits of information in our minds, until a thought passes some threshold such that it may be admitted to our consciousness, to our world model, or said aloud, with wavering confidence, to be bolstered or silenced by the responses of our peers. Why should we claim systems cannot think? There's even a term for simple rules giving rise to this sort of thing: emergent intelligence.
This system both creates an emergent intelligence and incentivizes actual human intelligence to defend it. None of us, then, can be expected to outthink such a system alone. But all is not lost. We can, together, design a system to outthink an evolved system.
If capitalism evolved Liberalism to protect it, there is no reason the relationship cannot be reversed. There is no reason we cannot design a system that evolves systems to replace these systems that currently constrain us. Actually, now that we have the context, we have all the tools we need to do it.
But first, one more detour. In computer security there's a testing method called “fuzzing” where a program is fed random (or random-ish) inputs by another program until it crashes. One of the great advancements in fuzzing was the integration of genetic algorithms. The first of these genetic fuzzers to be widely used was called “american fuzzy lop” (intentionally lowercase), or AFL. AFL could start with nothing and, using feedback gained from watching a program run, generate valid files, including files that could crash programs. Purely random input doesn't have the structure to trigger more complex crashes, and guided fuzzing (where a human manually describes the structure) can be labor intensive. Genetic fuzzing proved able to achieve “code coverage,” meaning that it was able to test a lot of different things, in a way that pure random fuzzing couldn't, and it could do so without needing large amounts of manual labor to define a “model” to guide fuzzing.
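To make the idea concrete, here is a toy coverage-guided fuzzer in the spirit of AFL (this is not AFL itself; the target program, its made-up `b"GO"` magic header, and all the knobs are invented for illustration). Random bytes alone are unlikely to produce the header, but keeping any mutant that matches more of it lets the fuzzer build up valid structure step by step:

```python
import random

MAGIC = b"GO"   # a made-up two-byte file header, not a real format

def target(data):
    """Toy program under test: it only reaches the buggy code path if
    the input starts with the magic header, matched byte by byte."""
    matched = 0
    for i, want in enumerate(MAGIC):
        if len(data) > i and data[i] == want:
            matched += 1     # each matched byte is new "coverage"
        else:
            return matched
    raise RuntimeError("crash: bug behind the header check")

def fuzz(seed=0, rounds=100_000):
    rng = random.Random(seed)
    corpus = [b""]           # like AFL, start from nothing
    best = 0
    for _ in range(rounds):
        sample = bytearray(rng.choice(corpus))
        op, pos = rng.randrange(3), rng.randrange(len(sample) + 1)
        if op == 0 and sample:                      # flip one bit
            sample[pos % len(sample)] ^= 1 << rng.randrange(8)
        elif op == 1:                               # insert a random byte
            sample.insert(pos, rng.randrange(256))
        elif op == 2 and sample:                    # delete one byte
            del sample[pos % len(sample)]
        try:
            covered = target(bytes(sample))
        except RuntimeError:
            return bytes(sample)                    # found a crashing input
        if covered > best:                          # new coverage: keep it
            best = covered
            corpus.append(bytes(sample))
    return None
```

Each kept input becomes raw material for further mutation, which is the genetic trick: coverage acts as the fitness function, and the corpus is the breeding population.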
The big plot hole in The Matrix was that it never made any sense for the machines to use humans as batteries. But the original idea was not that humans were batteries, but that they were processors. The matrix wasn't powered by humans; it was executed on them. The idea that batteries could manipulate The Matrix never really made sense, but if they're processors then suddenly the metaphor becomes crystal clear. Society is a program running on people.
Now let's take another look at this metaphor in the context of everything we've learned. We are in a cult, a system that enslaves our minds and controls our bodies to perpetuate itself. But if we are the system, then we have some control over the system. Yet we're still stuck, because we can't simply exit or change it on our own. We need something more. We need to understand how we can manipulate the rules of the system to create an exit.
But the system we're up against is an evolutionary algorithm. It has an emergent intelligence, an intelligence that leverages the collective power of multiple human minds. It “thinks.” It uses the minds of people trapped inside to protect itself and close off any exit it can find.
But the systems it generates to protect itself are large and monolithic; they have weaknesses that can be exploited. And we can exploit them. If we exploit them one at a time, if we exploit them slowly, the system will see the exploits and close them. But if we can overwhelm the system before it can adapt, it cannot close them all. In order to do this we need to build a system that's able to generate greater variety than the dominant system can consume, or that can find variety outside of the constraints of the dominant system.
The way we do both is to use a genetic algorithm to “fuzz” the dominant system. Within our matrix we build an anti-matrix: we intentionally design a genetic algorithm with a fitness function we choose. We let the side effects of this fitness function find gaps that allow us to modify or crash the dominant system.
By using a loose, rather than tight, coordination, we increase the variety available to us. Stochastic, rather than explicit, coordination is harder for the dominant system to detect and adapt to. This increases the amount of “search space” we can cover, and increases the likelihood of exceeding the adaptive capacity of the dominant system.
We make ourselves a coordinated swarm, a system within a system, constantly looking for, creating, and exploiting opportunities to escape. We prepare for the coming disaster, and we do so by evolving systems that can survive through it, systems that can escape the constraints of the one that's dying around us.
We evolve the new world in the shell of the old. What do we need to build this system? We need a fitness function and a way to combine ideas (a “breeding” function). We need to write a genetic algorithm that runs on people, and then we need to run it. Once we evolve this system, we can begin to “pivot” out of the current mess we have inherited and into a new world that we control.
Perhaps we can start by deciding to evolve a system that is not a Malthusian time bomb.