Crisis Theory: The Decline of Capitalism As The Growth of Expensive and Fragile Complexity

It’s an empirical fact that the economy experiences business cycles, that is, oscillations between booms and busts. Furthermore, many argue that the economy is in secular decline – for example, productivity growth across all industries has slowed since the 1970s. What are the mechanisms behind these instabilities and this decline? What would an accurate theory of economic crisis look like?


I believe that capitalism is both unstable, i.e. vulnerable to business cycles, and in secular decline. The source of these trends is a set of feedback mechanisms, structural to capitalism, that encourage the growth of fragile and expensive complexity (logistics, rent-seeking, finance, etc.) in the pursuit of short-term profits. Furthermore, this complexity becomes increasingly separated from the human labor (see Marx’s labor theory of value) that directly or indirectly creates wealth (the factory worker, the doctor, the teacher, etc.), which means a larger ratio of overhead to wealth creation. In the long run, the growth of expensive complexity means both declining productivity and fragility to the business cycle.

I will first review some existing theories of this secular decline and of the business cycle. Then I will present my own crisis theory, which addresses the weaknesses of these models.

The mainstream economic approach models the business cycle through the so-called Dynamic Stochastic General Equilibrium (DSGE) framework. In this model, economists assume the economy is more or less in equilibrium (markets clear, agents maximize their utility functions) until a random shock appears, for example a sudden rise in oil prices. The nature and source of the shock are irrelevant in this model; the DSGE approach only dictates that random shocks are an economic reality. The task of the economist thus reduces to studying how the structures of the economy amplify, dampen, and propagate the shock. For example, after the 2008 crash, economists began taking seriously how aspects of the financial sector may amplify these shocks (they call these financial frictions). Mainstream economists appear to have achieved a consensus only on business cycle modelling, not on the secular decline of the economy.

Hyman Minsky was an important heterodox thinker who elaborated a crisis theory and became widely cited after the 2008 crash. Minsky argued that crises emerge from endogenous activities in the financial sector. In booming times, banks and other financial institutions become “euphoric” and begin lending and borrowing quantities that in bust periods they would find too risky. Given this overconfidence, a speculative investment bubble develops. At some point the debtors cannot pay back, and the bubble bursts, creating a crisis.

The most orthodox of the Marxist approaches to crisis is famously referred to as the theory of the tendency of the rate of profit to fall (TRPF). According to Marx, capitalism experiences a secular decline in the rate of profit as work is automated away by machines: fewer workers are employed, which means less human labor to exploit. As production is optimized, machinery and raw materials absorb more of the costs of production, and rising productivity means fewer workers are needed. In Marxist analysis, profit comes from the exploitation of workers, that is, from paying workers less than the value created by the hours they work. So as machinery automates more of the labor, the rate of profit declines. In the hypothetical scenario where all labor is automated by robots, the capitalist would make no profit at all!
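The arithmetic behind the TRPF argument can be sketched in a few lines (my own illustration with hypothetical numbers, not Marx’s presentation):

```python
# Marx's rate-of-profit identity: r = s / (c + v), where
#   c = constant capital (machinery, raw materials),
#   v = variable capital (wages),
#   s = surplus value, here s = exploitation_rate * v.
# The numbers below are hypothetical, chosen only to show the tendency.

def rate_of_profit(c: float, v: float, exploitation_rate: float) -> float:
    """r = s / (c + v), with s = exploitation_rate * v."""
    s = exploitation_rate * v
    return s / (c + v)

# Hold the rate of exploitation (s/v) fixed while automation raises the
# organic composition of capital (c/v): the rate of profit falls.
for c_over_v in [1, 2, 4, 8, 16]:
    r = rate_of_profit(c=float(c_over_v), v=1.0, exploitation_rate=1.0)
    print(f"c/v = {c_over_v:2d}  ->  r = {r:.3f}")
```

With s/v held constant, r = 1 / (c/v + 1), so the profit rate falls monotonically as automation raises c/v – which is exactly the tendency the TRPF names.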

Finally, there are crisis theories where heterodox Marxist models and post-Keynesian theories converge. Thomas Palley recently compared Foster’s Social Structure of Accumulation (SSA) theory to his own Structural Keynesianism. Both Palley and Foster argue that the decline of economic growth is related to the stagnation of wages: if wages are stagnant, workers cannot purchase enough commodities, and the aggregate demand necessary for growth goes unmet. They tie this stagnation to the neoliberal growth model adopted since the 1970s. According to Palley, the only mechanisms that kept the economy from crashing were the overvaluation of assets and firms filling the hole in aggregate demand by taking on more debt. This excess of credit led to financial instabilities that eventually crashed the economy in 2008.

In my opinion, all these approaches are flawed. The mainstream approach under-theorizes both the sources of fragility and the secular decline in the rate of profit. It is true that much of the business cycle has to do with the fragility of the capitalist economy to volatility, which mainstream models capture. However, an important part of the story is why the capitalist system is fragile to these shocks in the first place. Indeed, mainstream economists showed their ignorance through their inability to forecast the effects of the 2008 recession. After the crash, they implicitly conceded to Minsky’s heterodox argument that the financial sector creates fragility: only after the crisis did they include in their DSGE models the financial instabilities Minsky had described.

The problem with the Minskyan approach is that it is severely limited: it identifies only one source of fragility, the financial sector. It also does not theorize why the financial sector is “less real” than, for example, the manufacturing sector – something Minsky implicitly assumes when he attributes fragility to the financial side alone. Because of this limited theorization, Minsky also fails to explain the secular decline of the rate of profit, content with explaining only the business cycle.

The greatest flaw of the “orthodox” Marxist approach is its dependence on pseudo-Aristotelian arguments. The TRPF model is based on a logical relation between a few specific variables: the costs of raw materials and machinery (constant capital), the costs of human labor (variable capital), and the value extracted from the exploitation of human labor (surplus value). This precision and logicality is spurious, as the capitalist system is too complex and stochastic for the behaviour of crises to be described by a couple of logical propositions. One has to take into account the existence of instabilities and shocks, as the mainstream economists do. However, Marx still had a key insight: the aggregate wealth of the world must be sourced in human labor that produces use values. Wealth comes from dentists doing dentistry and construction workers doing construction work, not from the dentist trying to make money by trading in the stock market. Furthermore, Marx identified a secular trend in the declining rate of profit, which is missing from other contemporary accounts.

Finally, Palley’s approach seems too politically motivated. To him, the stagnation of the economy is a matter of policy – of statesmen adopting the “wrong” set of regulations and deregulations. If politicians were just “objective” and followed Palley’s ideas, then crisis and decline could be averted! To Palley, the neoliberal phase was a matter of certain “top-down” policies rather than endogenous, spontaneous fragilities and instabilities inherent to the capitalist system. In my opinion, it’s impossible to disentangle what is political from what is structural in the secular decline of capitalism, since the whole world economy is more or less neoliberalized at this moment, so there is no alternative to compare it to. It seems to me a just-so story, projected from the present onto the past and impossible to prove empirically.

One of the issues I have with “left” theories of crisis, whether Keynesian or Marxist, is that they don’t take instability, uncertainty, stochasticity, and complexity seriously. Instead, proofs and discussions are reduced to Aristotelian logic-chopping over a few variables: in the Keynesian case, aggregate demand; in the Marxist case, surplus value, constant capital, and variable capital. A system that pulsates with billions of people is reduced to the logic-chopping of a few variables. Instead, we must devise a more holistic view of the capitalist world-system, taking into account its nonlinearities and fragilities.

The theories outlined above each contain part of the truth, so we can synthesize from them a model of crisis that contains: (i) economic fragility to instabilities and shocks, (ii) endogenous sources of this fragility, and (iii) a theory of the secular decline of the rate of profit. The concepts uniting these three points are fragility/nonlinearity and increasingly expensive complexity. Minsky, for example, by addressing fragility in the financial sector, implicitly points to a theory of degenerative complexity, where the financial sector acts as a complex, expensive, and fragile overhead sitting atop the “real economy”.

We can use Taleb’s definition of fragility to make the concept more precise. Taleb defines fragility mathematically as harmful, nonlinear sensitivity to volatility. For example, a coffee cup can withstand stress up to a certain threshold; any stress above that threshold will simply shatter the cup. Fragility is a nonlinear property because the cup does not wear and tear in proportion to stress. Instead, it sustains stress until the threshold is reached and then suddenly shatters. In other words, the cup responds nonlinearly to stress, with stress below the threshold inflicting negligible damage.
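The threshold behaviour of the coffee cup can be made concrete with a toy simulation (my own sketch with an arbitrary threshold, not Taleb’s formal definition). Two stress streams with the same average load do very different damage once one of them crosses the threshold – the fragile object is harmed by volatility, not by the mean:

```python
THRESHOLD = 10.0  # hypothetical breaking point of the "coffee cup"

def harm(stress: float) -> float:
    """Negligible damage below the threshold, total loss above it."""
    return 0.0 if stress < THRESHOLD else 1.0

steady   = [5.0] * 10          # constant moderate stress, mean = 5
volatile = [0.0] * 9 + [50.0]  # mostly calm plus one shock, mean = 5

print(sum(harm(s) for s in steady))    # 0.0 -- the cup survives
print(sum(harm(s) for s in volatile))  # 1.0 -- one shock shatters it
```

The same mean, radically different outcomes: that is what it means for harm to be a nonlinear function of stress.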

Similarly, the capitalist world system probably has many thresholds, most of them currently unknown, because it is complex and nonlinear. It is complex because it is made of various interlocking parts (firms, individuals, governments, etc.) that form causal chains spanning planetary scales. It is nonlinear because the behaviour of the system is not simply the “sum” of those parts: the parts depend on each other, so one cannot study the components in isolation and understand the whole by adding them up. The interdependence of the units within capitalism makes the system nonlinear. Nonlinear systems are frequently very sensitive to changes in their variables, and surpassing certain thresholds can make the system exhibit abrupt changes and discontinuities that often manifest as crises. Such threshold-crossing jumps are a common mathematical property of nonlinear systems. Fragility therefore correlates with nonlinearities, abrupt jumps and shocks, and complexity.

However, it is not enough to say that the capitalist world system is fragile because it is nonlinear. The point is that it structurally generates feedback loops that accelerate the creation of endogenous fragilities. The frenetic pursuit of short-term profits in increasingly competitive contexts leads to the creation of fragile, nonlinear complexity: a firm must invest in ever more expensive research, infrastructure, and qualified personnel to generate innovation that yields profit in the short term, as many of the “low-hanging fruits” have already been plucked. Capitalism thus drives firms and institutions into random “tinkering” to produce profit, often by adding ad-hoc complexity. This complexity may generate short-term profits, but it is expensive in the long term. Joseph Tainter tries to measure the productivity of innovation by looking at how many resources go into creating a patent. For example, the ratio of patents to GDP and to R&D expenditure has declined since the 1970s:



Another marker of increasingly expensive complexity is the number of people required to produce a patent:


A very common and well-studied example of this nonlinear complexity is the financial system, which exemplifies the growth of complexity in the service of the profit motive. Cash flows are generally too slow, and cash reserves too low, to cover the capital required to start firms or to add a layer of complexity required for more profitability, so agents must resort to credit and loans. In other words, the financial system acts as a fast, short-timescale distributive mechanism that funnels resources to banks, firms, and individuals that need quick access to capital in spite of low cash flows. Without the financial system, growth would be much lower, because access to capital could only come from cash flows. However, as Minsky noted decades ago and mainstream economics emphasizes now, the financial system is extremely unstable, complex, and nonlinear, and therefore fragile. The figure below shows how the “leverage ratio” of UK banks – roughly, the ratio of debt to equity – grew enormously from the 1880s to the 2000s; in other words, banks increasingly depend on loans and credit for fast access to capital.

[Figure: leverage ratio of UK banks, 1880s–2000s]
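A back-of-the-envelope sketch (hypothetical numbers, not the actual UK data in the figure) of why high leverage means fragility: at high leverage, a small percentage fall in asset values wipes out a bank’s equity entirely.

```python
def equity_after_shock(assets: float, debt: float, asset_loss_pct: float) -> float:
    """Equity = assets - debt, after assets fall by asset_loss_pct percent."""
    return assets * (1 - asset_loss_pct / 100) - debt

# Low leverage: 100 in assets funded by 50 debt, 50 equity (leverage 1x).
# A 5% asset loss leaves plenty of equity.
print(equity_after_shock(100, 50, 5))  # still solvent (positive equity)

# High leverage: 100 in assets funded by 97 debt, 3 equity (leverage ~32x).
# The same 5% loss makes the bank insolvent.
print(equity_after_shock(100, 97, 5))  # insolvent (negative equity)
```

The same shock, linearly small on the asset side, produces a discontinuous outcome (solvency vs. insolvency) depending on leverage – a concrete instance of the threshold nonlinearity discussed above.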


That the addition of complex overhead is inversely related to growth has been empirically observed across various parts of capitalism. Some examples: the cost diseases associated with industries like education and healthcare, the admin bloat in education and healthcare, the stagnation of productivity across virtually all industries including manufacturing, and the stagnation of scientific productivity in spite of exponential growth in the number of scientists and fields.

Furthermore, capitalism encourages rent-seeking and expensive complexity even when there is no benefit to wealth production for the economy in general. This rent-seeking scenario probably describes admin bloat at the universities: wealth is transferred from society to certain sectors of the university with no obvious economic benefit for society at large. This is in contrast to traditional, profitable industries, where profit leads to capital valorization through the reinvestment of that profit.


As noted in a previous post, the secular decline of capitalism is accompanied by a secular degeneration of science. To summarize that post: as informational complexity grows faster than empirical validation and knowledge production, an informational bloat of unverified scientific theories builds up. An obvious example is the bloat of theoretical physics models predicting all sorts of new particles, despite the fact that the Large Hadron Collider, a multibillion-dollar experiment, failed to confirm any of them. So you have a whole layer of professionals who are experts only in unverified, degenerative theories, and who collect large salaries despite contributing to neither economic nor epistemic growth. Another example of a degenerative profession is economics. Judging from the stagnating productivity across most industries, we can probably assume that this caste of degenerative professionals is rampant across all corners of capitalism. These professionals and “degenerative” experiments add expensive and fragile complexity to capitalism.


Finally, as complexity grows, there is an increasing dislocation between abstracted logistical, degenerative, and “scientific” complexity and the human labor that creates wealth. Finance is a very good example. To paraphrase and elaborate on Taleb: the wealth of the world is created by dentists doing dentistry and construction workers doing construction work, not by the dentist trying to become rich by trading their savings in the financial market. This is where Marx becomes relevant, for the wealth of society comes from human labor, not from the transfer of wealth through administrative and accounting tricks, or through the circulation of financial instruments. This bloated complexity is required for the functioning of capital because of financial, accounting, and logistical constraints. Much of it acts as overhead for the world-economy, required for the survival of capital itself, but it does not necessarily create socially necessary wealth. One symptom of the fragility of this separation between wealth creation and complex abstraction is the speculative bubble: due to the overconfidence of the financial industry, assets are overvalued until, at some point, the dislocation between the real and financial economy becomes unsustainable and their value collapses. This is the financial instability Minsky identified and that mainstream economists now incorporate into their models.

Here we can begin to sketch a theory of the secular decline of capitalism. First, there is a secular increase in fragile, nonlinear complexity, driven by the ad-hoc tinkering of firms and institutions pursuing short-term profits at the expense of fragility. Much of this expensive complexity is rent-seeking: specialists trained in degenerative methods that add no obvious knowledge or efficiency (string theorists, economists, university admins, healthcare admins, etc.) self-reproduce and multiply. In the long run, all this complexity created for short-term profits becomes increasingly expensive, leading to ever slower productivity growth (GDP growth per labor hour). Part of this lowering of productivity is the increasing dislocation between the human labor that produces wealth and an abstracted layer of researchers, administrators, managers, and so on. And there is not only secular decline but also increasing fragility and instability, as the bloated complexity is highly nonlinear, coupling agents across planetary scales – the financial industry, for instance, transcends national economies. The world economy thus becomes increasingly vulnerable to shocks, due to nonlinearities (caused by interdependencies) that lead to abrupt changes. These instabilities and fragilities give rise to the so-called business cycle.

In conclusion, a socialist theory of crisis should begin by looking at the economy as a whole, taking into account its instabilities and fragilities. In my opinion, the methodologies of the various Keynesian and Marxist schools are wrong because they pretend to have identified a couple of key variables (e.g. aggregate demand, the organic composition of capital) and then logically derive a theory of crisis from them. Because the economic system is extremely complex and nonlinear, these theories probably amount to just-so stories: the mechanisms behind the instabilities of capitalism are probably very varied (and many of them unknown), and therefore cannot be pinned to a few specific sources. A better approach is to analyze how capitalism creates endogenous feedback loops that lead to fragility, through generalized and socially unnecessary nonlinearities and complexities. This nonlinearization and complexification is imposed in pursuit of short-term profits, at the expense of long-term productivity. Another important issue is how a large part of this complexity becomes increasingly dislocated from wealth-creating labor – the dislocation between administrators and professors, for instance, or between the financial sector and the real economy.

I am confident that many of the claims presented in this article could be quantified and tested against empirical data far more rigorously than done here. But alas, there isn’t an eccentric millionaire backing this research program 😞.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

Ergodicity as the solution for the decline of science


In a previous post I explored the decline of science as related to the decline of capitalism. A large aspect of this decline is how the increase in informational complexity leads to marginal returns in knowledge. For example, the last revolution in physics happened roughly one hundred years ago, with the advent of quantum mechanics and relativity. Since then, the number of scientists and fields has increased exponentially, and the division of labor has become ever more complex and specialized. Yet the Large Hadron Collider, a billion-dollar-per-year experiment created to probe the most fundamental aspects of theoretical physics, has failed to confirm any of the new theories in particle physics. The decline of science is coupled to the decline of capitalism in general: specialist and institutional overhead is increasing exponentially across industries, while GDP growth has been sluggish since the 1970s.

Across scientific fields there is now increasing concern about the overproduction of “bad science”. Recently the medical and psychological sciences have been making headlines because of their high rates of irreproducible papers. Even in the more exact sciences there is a stagnant informational bloat, with a flurry of math bubbles, theoretical particles, and cosmological models inundating the peer-review process, despite billion-dollar experiments like the Large Hadron Collider confirming none of them, and with no scientific revolution (the last one was a century ago) on the horizon.

There is no shortage of proposed solutions. Most simply suggest making the peer-review process more rigorous and refining the statistical techniques used for analyzing data: using Bayesian statistics instead of frequentist methods, encouraging the reproduction of results, and finding ways to constrain “p-value hacking”. Occasionally a bolder writer argues for “interdisciplinarity”, or for scientists talking more to philosophers, but these calls for “thinking outside the box” are usually vague and broad.

However, most of these suggestions would simply exacerbate the problem. I would argue that the bloat of degenerative informational complexity is not due to lax standards. As an example, consider p-value hacking. A common heuristic in the social sciences is that for a result to be significant, it should have a p-value of less than 0.05 – in layman’s terms, that the result has only a 5 percent probability of being due to chance (not the exact definition, but it suffices here). This establishes a “standard” that can be gamed the same way lawyers game the law: researchers face a perverse incentive to find clever ways of “p-hacking” their data past the threshold. P-hacking ranges from conscious fraud, such as excluding the data points that raise the p-value (high p-values mean your results are due to chance), to unconscious bias, such as convincing yourself that inconvenient data points are measurement errors in order to protect your low and precious p-value.
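A small simulation (my own illustration, using a simple z-test) makes the incentive problem concrete: even when no real effect exists anywhere, a researcher who tests ten subgroups and reports whichever one clears the bar produces “significant” results far more than 5% of the time.

```python
import math
import random

random.seed(0)

def p_value(sample):
    """Two-sided z-test p-value for 'mean != 0', via the normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    z = mean / (sd / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(n_subgroups):
    """The null is true everywhere; report 'significant' if ANY subgroup passes."""
    samples = [[random.gauss(0, 1) for _ in range(30)] for _ in range(n_subgroups)]
    return any(p_value(s) < 0.05 for s in samples)

trials = 2000
honest = sum(run_study(1) for _ in range(trials)) / trials   # stays near the nominal 5%
hacked = sum(run_study(10) for _ in range(trials)) / trials  # several times higher
print(f"honest false-positive rate:  {honest:.2f}")
print(f"p-hacked (best of 10 tests): {hacked:.2f}")
```

No individual step here is fraud in an obvious sense – each subgroup test is a perfectly standard procedure. The degeneration comes purely from gaming the rule.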

The more rigid rules a system has, the more is invested in “overhead” to regulate those rules and to game them. Almost everyone grasps this intuitively – hence the standard resentment against bureaucrats who take the roundabout, sluggish way to accomplish anything. In the sciences, once an important study, experiment, or theorem generates a new rule or “methodology”, perverse incentive loops form in which researchers use the rule to create paper mills, which are in turn used to game citation counts. Instead of earnest research, you get an overproduction of “bad science” that amounts to the gaming of certain methodologies. String theory, which can be defined as a methodology, was established as the only game in town a couple of decades ago, which in turn pushed young theoretical physicists into investing their time and money in gaming that informational complexity, generating even more complexity. Something similar happens in the humanities, where a famous (usually French) figure establishes a methodology or rule, and the Anglo counterparts game the rule to produce concatenations of polysyllabic words. Furthermore, this fetish for informational complexity in the form of methods and rules creates a caste of “guild keepers” who are learned in the rules and accrue resources and money while excluding anybody who isn’t.

This article serves as a “microphysical” account of what leads to the degenerative informational complexity and diminishing returns I associated with modern science in my previous post. But what would the solution to such a problem be? The answer, in one word: ergodicity.

As said before, science has become more specialized, complex, and bloated than ever before. However, just because science has grown exponentially doesn’t mean it has become more ergodic. By ergodic I specifically mean that the system explores all of its possible states. For example, a die thrown a large number of times is ergodic, since it will eventually land on every one of its faces. Ergodicity has a long history in thermodynamics and statistical mechanics, where physicists often have to assume that a system has accessed all of its possible states; this hypothesis allows them to calculate quantities like pressure or temperature from theoretical approximations of the number of states a system (e.g. a gas) has. But we can use the concept of ergodicity to analyze social systems like “science” too.
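The die example can be checked directly (a quick sketch, standard library only): a single long trajectory of throws visits every state, and the fraction of time spent in each state approaches the ensemble probability of 1/6.

```python
import random
from collections import Counter

random.seed(42)

throws = 60_000
counts = Counter(random.randint(1, 6) for _ in range(throws))

# Every state is visited...
assert set(counts) == {1, 2, 3, 4, 5, 6}

# ...and each occupies roughly 1/6 of the trajectory.
for face in sorted(counts):
    print(face, round(counts[face] / throws, 3))  # each close to 0.167
```

This equality of time averages (one die, many throws) and ensemble averages (many dice, one throw each) is exactly what the ergodic assumption buys the physicist – and what, I argue below, science as a social system lacks.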

If science were ergodic, it would explore all possible avenues of research, and individual scientists would switch research programs frequently. Of course, social systems cannot be perfectly ergodic, since they are dynamic and their “number” of states grows (e.g. the number of scientists grows). But we can treat ergodicity as an idealized heuristic.

The modern world sells us ergodicity as a good thing, and systems often describe themselves as ergodic in defence against detractors. When politicians and economists claim that capitalism is innovative, and that it gives every worker a chance at becoming rich (and every rich person a chance of becoming poor), they are implicitly describing an ergodic system. Innovation implies that entrepreneurs experiment with and explore all possible market ideas in order to discover the best ones. Similarly, social mobility implies that a person who lives long enough has a shot at becoming rich (or, if already rich, becoming poor). In real life we know the ergodic approximation is very poor for capitalism: the rich mostly stay rich, and the poor mostly stay poor. We also know that important technological innovation is often carried out by public institutions such as the American military, not the private sector. Still, ergodicity is invoked because it is viscerally appealing. We want “new blood” in fields and niches, and we resent bureaucrats and capitalists, insulated from the chaos of the market, for not giving other deserving people a chance.
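The failure of the ergodic approximation for wealth can be illustrated with a standard toy gamble from the ergodicity-economics literature (my addition – the post itself doesn’t use this example): the ensemble average grows, yet the typical individual trajectory shrinks.

```python
import random

random.seed(1)

def play(rounds: int) -> float:
    """Bet all wealth each round: +50% on heads, -40% on tails."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

players = [play(100) for _ in range(10_000)]

# Ensemble view: the expected gain is +5% per round, and the sample mean
# is dominated by a handful of extreme winners.
print("mean wealth:  ", sum(players) / len(players))

# Time/typical view: the median player is nearly wiped out, since the
# per-round time-average growth factor is sqrt(1.5 * 0.6) ~ 0.95 < 1.
print("median wealth:", sorted(players)[len(players) // 2])
```

The “everyone gets a chance” pitch quotes the ensemble average; what an individual actually experiences is the time average, and in a non-ergodic system the two diverge drastically.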

One reason ergodicity is appealing is that there is really no recipe for innovation except experimentation and the exploration of many possible scenarios. That’s why universities often have unwritten rules against hiring their own graduate students into faculty positions – they want “new blood” from other institutions. A common (though incorrect, as described above) argument against public institutions is that they are dull and stagnant at generating new products or technologies compared to the more “grassroots” and “ergodic” market. So there is a common intuition among both laymen and professionals that the only sure way of finding out whether something “works” is to try different experimental scenarios.

Now let’s return to science. The benefit of ergodicity in science was indirectly supported by the infamous philosopher Feyerabend. Before him, philosophers of science tried to come up with recipes for what works in science. Popper, for example, argued that science must be falsifiable; Lakatos came up with heuristics for what causes research programs to degenerate. Feyerabend, by contrast, argued that the only real scientific method is that “anything goes” – an attitude he termed epistemological anarchism. He argued that scientific breakthroughs don’t usually follow any hard and fast rules, and that scientists are first and foremost opportunists.

Feyerabend got a lot of flak for these statements, with detractors accusing him of relativism and anti-scientific attitudes. He didn’t help himself by often being inflammatory on purpose, seeking to provoke a reaction (for example, putting astrology and science on the same epistemic level). But I would say that in some sense he was protecting science from dogmatic scientists. In the terminology sketched above: he was ultimately arguing for a more ergodic science, one that doesn’t fall into the dogmatic trap.

That dogmatic trap was described in the previous paragraphs: the idea that more methods, rules, divisions, thought-policing, and rigour always lead to good science. Instead they lead to a growth of degenerative research that amounts to gaming certain rules, which in turn breeds degenerative specialists who are experts only in degenerative methods. Meanwhile, all this growth is non-ergodic, because it is organized around respecting certain rules and regulations, which constrains the exploration of possible scenarios and states. It’s like loading a die so that the six always faces up, instead of letting the die land on all its faces.

How can we translate these abstract heuristics of ergodicity into real scientific practice? The problem with much of the philosophy of science – whether made by professional philosophers or by scientists unconsciously doing philosophy – is that it looks at individual practice. It produces a laundry list of rules of thumb an individual scientist must follow to make their work scientific, including specific statistical tests and reproducibility requirements. But the problems are social and institutional, not individual.

What is the social and institutional solution? Proposing solutions is harder than describing problems, but I always try to sketch one, because criticism without a proposal is somewhat cowardly – it avoids opening yourself up to criticism from readers.

The main heuristic should be to collapse the informational complexity in a planned, transparent, and accountable way. As mentioned before, this informational complexity is like a cancer that keeps growing, and its source is probably methodological dogmatism, with complex overhead bloating as researchers find ever more convoluted ways of gaming the rules. Here are some suggestions for collapsing complexity:

  1. Cut administrative bloat and instead have rotating academics fill the essential administrative posts.
  2. Get rid of the peer-review system and instead use an open system, similar to arXiv.
  3. Collapse some of the academic departments into bigger ones. For example, much of theoretical physics has more in common with mathematics and philosophy than with the more experimental aspects of physics, so departments should be reorganized so that people with more similarities interact with each other.
  4. Create an egalitarian funding scheme, based more on the division between theory and experiment than on departments. Everyone in the same category should receive the same minimum amount of funding, where funding levels are based on how many resources a specific type of work realistically requires. For example, a theoretical physicist who uses only pencil, paper, and their personal computer has a lot in common financially with a sociologist who does the same.
  5. Beyond the minimum funding outlined above, excess funding should be allocated democratically, with input from outside the profession.
  6. Abolish the distinction between tenured professor and adjunct. Instead, everyone should teach.

Hopefully the destruction of admin bloat and of the adjunct/tenure distinction would release resources that could be spent on hiring researchers, instead of relying on bad heuristics such as publication and citation counts as filters for new hires.

Many of these recommendations cannot be seen in the abstract, since the university is intimately coupled to society and the economy as a whole. For example, part of the admin bloat comes from legal liabilities and from the state offshoring some of its responsibilities to universities. Number 6 would require a radical reconfiguration of society in general. Number 5 could not be enacted today, since "democratic" institutions are composed of non-ergodic, technocratic lifers.

This takes me to the political conclusion that the problems of science should be seen as the problems of society as a whole. The only sure way to find solutions is an ergodic approach. Right now, the state is non-ergodic; that is, it is occupied and controlled by political and bureaucratic lifers. These non-ergodic bureaucracies in turn generate informational complexity, as new regulations and "rules" are imposed by the same caste of degenerative professionals, which in turn requires even more complex overhead. Instead, the State (and, in a socialist society, the means of production) should have a combination of democratic and sortition mechanisms that makes it impossible for individuals to stay too long in power. This democratic vision should be supported by broad and free education programs that equip individuals with the knowledge required to rule themselves in a republican way. Not only does this method guarantee more equality, but it also turns society into a great parallelized computer that solves problems by ergodic trial and error, through the introduction of new blood, sortition, and democratic accountability.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

The Decline of Science, The Decline of Capitalism


Can another Einstein exist in this era? A better question is whether the spirit of his research program could emerge again in our current predicament. By his research program, I mean the activity that grasped, through a few thought experiments and heuristics, fundamental principles that revolutionized not only physics but our whole ontology in general. Through a combination of imagination and mathematical prowess, such as imagining himself riding a beam of light and then translating this image into the language of geometry, he revolutionized our most fundamental intuitions of space and time.

Fast forward a hundred years, and physics has become increasingly specialized and fractal-like, with theoretical physics atomized across many sub-disciplines. Given this complex landscape, no single researcher has the bandwidth to engage the informational complexity of all relevant fields in order to grasp something both holistic and fundamental. Instead, scientific knowledge is atomized among various disciplines. Although this division of labor and increased informational complexity has a legitimate logic, since many fields truly do become more specialized and complex in a useful, authentic sense, this complexity has decreasing marginal returns. We can see this effect in some of the paper mills of theoretical physics, with theory after theory that may have only tenuous links with the facts of the world. At some point, the complexity and the literature grew exponentially, engulfing empirical confirmation.

One of the most striking examples of the diminishing returns of complexity is the lack of revolutionary shifts in theoretical physics. The last major physics revolutions, quantum mechanics and relativity, happened roughly a hundred years ago, in spite of the huge increase in the number of scientists and disciplines over the last century. There is no shortage of models and theories, yet the creation of novel predictions and empirical confirmations is slowing down, as evidenced by the inability of expensive particle physics experiments to confirm any of the new particles conjectured by the last generation of theoretical physicists. In other words, to use Lakatos' ideas, theoretical physics is degenerating, because there is an exponential increase in informational complexity without much empirical content backing it. In short, all the new and expensive scientists, computers, theories (e.g. supersymmetry, string theory), and cryptic fields are generating diminishing returns in knowledge.

However, it is not only the academic sciences that are degenerating. In this stage of capitalism, the degenerative research program is universal, encompassing all relevant fields of human inquiry and knowledge. This degeneration therefore exists not only at the apex of academia but in any institution meant for problem solving. We find a decrease in productivity across many industries and the economy as a whole, which signals diminishing returns on complexity. Since all these institutions are problem-solving and use some sort of method or episteme, we can say that their theories of the world are degenerative, in analogy to the Lakatosian concept of a degenerative research program. In spite of their bloat in specialists, the marginal returns in the "knowledge" necessary for production decrease.

Perhaps the most incredible aspect of this decline is the existence of experts in almost wholly degenerative methods. As degenerative methods, methods without much empirical backing, exponentially increase in volume, the informational complexity needs more specialists to manage it, and these experts are specialized almost entirely in these decaying methods. Economists and string theorists are the quintessential examples of degenerative professionals.

This degeneration of the universal research program, and with it the creation of a degenerative caste of professionals, has not gone unnoticed by the population. This decline has probably fueled part of the anti-intellectual and anti-technocratic wave that brought Trump to power. For example, people often complain about the increasing inaccessibility of academic literature, with its overproduction of obscure jargon. Another example is the knee-jerk hatred for administrators, managers, and other technocratic professionals who are seen as doing increasingly abstracted work that may not connect with what is happening on the ground. A common target of this criticism is the admin bloat that festers at universities.

This abstract process of the degenerative research program is linked to the health of capitalism in a two-way feedback loop, given that it is through problem solving that capitalism develops technological and economic growth. Perhaps we can understand the health of capitalism better by referring to the ideas of the anthropologist Joseph Tainter. Tainter argues that societies are fundamentally problem-solving machines, and that they add complexity in the form of institutions, specialists, bureaucrats, and information in order to increase their capacity to solve problems in the short term. For example, the early irrigation systems of Mesopotamian civilizations, crucial for agriculture and therefore survival, created their own layer of specialists to manage them.

However, complexity is expensive, as it adds more energy and resource usage per capita. Furthermore, the problem-solving ability of institutions yields diminishing returns as more expensive complexity is added. At some point, complex societies end up with a very expensive layer of managers, specialists, and bureaucrats who can no longer deliver on problem solving. Because this complexity no longer makes society more productive, the economic base, such as agricultural output, cannot grow as fast as the expensive complexity, and society collapses. This collapse resets complexity by producing simpler societies. Tainter argues that this was the fate of many ancient empires and civilizations, such as the Romans, Olmecs, and Mayans. Tainter is thus arguing for a theory of decline of the mode of production, where modes of production are "cyclical" and have an ascendant and a descendant stage. Using this picture, we can begin to identify a stage of capitalism in decline.
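Tainter's argument can be caricatured in a toy model (my sketch, not Tainter's own formalism; the square-root benefit curve and linear cost are assumptions chosen only to illustrate the shape of the dynamic):

```python
import math

def net_benefit(complexity):
    """Toy Tainter model: problem-solving benefit grows with diminishing
    returns (square root), while the cost of maintaining the complexity
    (managers, specialists, bureaucrats) grows linearly."""
    benefit = 10 * math.sqrt(complexity)
    cost = complexity
    return benefit - cost

# Net returns rise at first, peak (here at complexity = 25, where the
# marginal benefit equals the marginal cost), then turn negative: the
# point where added overhead costs more than the problems it solves.
for c in [0, 10, 25, 50, 100, 150]:
    print(c, round(net_benefit(c), 1))
```

Any concave benefit curve against a linear (or faster) cost curve produces the same qualitative story: an ascendant stage, a peak, and a descendant stage.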

This decline of capitalism has plenty of empirical evidence. "Bourgeois" think-tanks like the Brookings Institution argue that productivity has declined since the 1970s. Marxist economists like Michael Roberts assert that the empirical data show that the rate of profit in the US has fallen since the late 1940s. Not to mention the recent Great Recession of 2008. This economic and material decline is linked to the degenerative research program, as the expensive complexity of degenerative institutions expands faster than the economic base (e.g. GDP). For example, the exponential growth of administrators in healthcare and universities at the expense of physicians and professors is symptomatic of this degeneration.

The degeneration of the universal research program has two important consequences. First, a large share of the authority figures who base their expertise on credentials are illegitimate: if they are part of a degenerative caste of professionals (politicians, economists, etc.), they cannot claim authority on relevant knowledge, because their whole method is corrupted. This implies that socialists should not feel intimidated by the credentials and resumés of the technocrats closest to power. As mentioned before, right-wing populists such as Trump partially understand this phenomenon, which has unleashed his reactionary electorate against the "limousine liberals" and "deep-staters" in Washington D.C. It's time for us socialists to understand that particular truth, and not be afraid to counter the supposed realism and expertise of the neoliberal center. The second consequence is that our methods of inquiry, such as science and philosophy, have stalled. Instead, the feedback loop of complexity creates more degenerative specialists who are experts in an informational complexity that has only a tenuous connection with the facts of the world. Whole PhDs are built on degenerative methods; for example, scientists specializing in particular theoretical frameworks in physics that have never been validated empirically.

What is the socialist approach to the degeneration of the research program? Although one cannot say that socialists will not suffer from similar problems, given that informational complexity will always be required when dealing with our complex civilization, capitalism has particularly perverse incentives for degenerative research programs. For example, the degenerative research program survives through gate-keeping that safeguards the division of labor by well-paid and powerful professionals. An obvious example is current professional economics, which largely requires the absorption of sophisticated graduate-level math in order to enter the profession, even if those mathematical models are largely degenerative. In the political landscape at large, the State is composed of career politicians and technocrats, who safeguard their positions through undemocratic gate-keeping in the form of elite networking and resumé padding. The rationale for this gate-keeping is that these rent-seekers accrue power and wealth through the protection of their degenerative research programs. Furthermore, capitalism accelerates the fracturing of the division of labor as it pursues short-term productivity at all costs, even when this complexity becomes expensive and a liability in the long term.

The socialist cure for the degeneration of the research program could consist of two main ingredients. First, institutions that command vast control over society and its resources should democratize and rotate their functionaries and "researchers". In the case of the State, a socialist approach would eliminate career politicians by imposing stringent term limits and making many functionaries, such as judges, accountable to democratic will. Since there are diminishing returns in knowledge through specialization and informational complexity, a broad public education (up to the university bachelor level) could guarantee a body of citizens sufficiently educated to partake in the day-to-day affairs of the State. Instead of a caste of degenerative professionals controlling the State, an educated body of worker-citizens could run it through a combination of sortition, democracy, and stringent term limits.

The second ingredient consists of downsizing much of the complexity by focusing on the reduction of the work-day through economic planning. Since one of the main tenets of socialism is to reduce the work-day so that society is ruled by the imperatives of free time rather than the compulsion of toil, this would require the elimination of industries that do not satisfy social need (finance, real estate, some of the service sector, some aspects of academia) in order to create a leaner, more minimal state. Once the work-day is reduced to only what is necessary for the self-reproduction of society, people will have free time to partake in whichever research program they choose. Doing so may give rise to alternative research programs that do not require mastering immense informational complexity to participate in. Perhaps the next scientific revolution can arise only by making science more democratic and free. This vision contrasts with the elitist science that exists today, which is at the mercy of hyper-specialized professionals who are unable to have a holistic, bird's-eye view of the field and are therefore unable to grasp the fundamental laws of reality.


On Hegel and the Intelligibility of the Human World


I've been studying Hegel lately because I find value in his idea that history has an objective structure and is intelligible. He argued that history is rational, and therefore that its chain of causes and effects can be understood by Reason. I deeply believe in the intelligibility of history and the human world at large, as I advocate for the human world to be administered in a planned and democratic way, which requires the possibility of scientific understanding. In contrast, many contemporary thinkers are extremely skeptical about the intelligibility of the human world. For example, many economists proclaim that socialist planning is flawed because the supply and demand of goods cannot be made rationally intelligible to planners. We see similar arguments from the Left in the form of post-structuralist attacks against the "master narratives" that seek to unearth the rational structure of the human world. For example, contemporary criticisms of the Enlightenment sometimes argue that the same reason used to understand the world is used to dominate human beings, because Reason starts to see humans as stacks of labor power to be manipulated for some instrumental end.

However, in my opinion, to deny the intelligibility of the human world, or to deny that this intelligibility can ever be used for emancipation, is to deny the possibility of politics, for political actors must have a theory of where history is marching, in other words, of "in what direction the wind blows". Political agents need to ground themselves in a world-theory so they can propose a political program that would either change the direction of history to another, preferred course, or reinforce the direction it is taking right now. The IMF, Bretton Woods, the Iraq War, the current austerity onslaught, and so on have or had an army of politicians, intellectuals, and technocrats wielding scientific reason, trying to grasp where the current of history flows and developing policy in line with their world-theory. Given that our "enemies" (the capitalist state, empire) use a scientific understanding of history in order to destroy the world, I will attempt to instrumentalize my reading of Hegel in order to make a case for a socialist intelligibility of the human world, one whose purpose is to free humanity through socialist planning. I am, however, not trained in philosophy, so my reading of Hegel may not be entirely accurate; yet accuracy isn't really my goal so much as using him as an inspiration for making my case.

Hegel and many thinkers of the 19th century were optimistic about uncovering the laws of motion that drive history, and thus the evolution of the human world. Hegel thought that history was intelligible insofar as it can be rationally understood as marching in a certain rational direction, that is, towards freedom, even if the human beings who make this history are often driven by irrational desires. For example, Hegel thought the French Revolution, following the evolutionary path of history, brought about the progress of freedom in spite of its actors being driven by desires that concretely may have had nothing to do with freedom (e.g. glory, self-interest, revenge). To Hegel, the French Revolution was a logically necessary event that followed from the determined motion of history towards freedom. In parallel, Marx, who "turned Hegel on his head", thought that the human world could be understood as a function of the underlying economic structure (e.g. capitalism or feudalism) and its class composition. Furthermore, Marx argued that the working class, due to its objective socio-economic position as the producer of the world's wealth, could bring about socialism.

Not only were Hegel and Marx optimistic about the intelligibility of the human world; they also held that a liberated society would make use of this intelligibility to make humans free. Hegel thought that the end of history would be realized by a rational State that scaffolds people's freedom by making them masters of a world they can understand and manipulate in order to realize their liberties and rights. This is why Hegel thought the French Revolution revealed the structure of history: the event demanded that the laws of government become based on reason and serve human freedom. In the case of Marx and his socialist descendants, the fact that the economy is intelligible means that a socialist society could administer it for social need, as opposed to the random, anarchic, crisis-ridden chaos of capitalism. The socialist case for the intelligibility of the human world gave rise to very ambitious and totalizing political programs, with calls for the economy to be planned for the sake of social need, and with the working class as the coherent agent for enacting this program. Some Marxists describe these totalizing socialist narratives as programmatism: the phenomenon of coherent socialist parties with grandiose and ambitious political programs of restructuring the world through the universal agency of the working class.

However, from the 20th century onwards, much intellectual activity was spent arguing against this intelligibility of the human world, and therefore against the totalizing socialist program. In the economic sphere, Hayek argued that the economy was too complicated and fine-grained to be consciously understood by human actors, making conscious economic planning an impossibility. From the Left, post-structuralist theorists attacked the idea that there exist underlying, objective structures that steer and scaffold the human world. Philosophers such as Laclau and Lyotard criticized nineteenth-century thinkers such as Marx and Hegel for their totalizing narratives of how history marches and for the certainty of their scientific approaches to the world. In many ways these post-structuralist and marginalist views do reflect certain aspects of the current political landscape. The market in the West has liberalized considerably since World War II, expanding the role of price signals in directing the distribution of goods, which seems to echo Hayek's propositions. In western liberal democracies, electoral politics is often interpreted as a heterogeneous and conflicting space formed of different identities and interest groups, each pushing its own agenda without a discernible universal feature that binds them all, which echoes the post-structuralist attack against Marxist and Hegelian appeals to universalism. Furthermore, the decline of Marxism, anarchism, and other radical political movements that posited a coherent revolutionary actor, such as the working class, gives even more credence to the post-structuralist insistence that the social world cannot be made intelligible by totalizing and "scientific" theories.

However, these attacks on the intelligibility of the human world miss a crucial point, which makes the critique fatally flawed. The only evidence these attacks offer is the ideological justifications of the ruling class and the defeat of the programmatic Left. It is true that Hayekian marginalism is used as "proof" that the economic world is not intelligible to the human mind, thereby justifying increasing neoliberalization, and that the totalizing social movements of the early 20th century, with their coherent political programs and revolutionary subjects, have been almost completely supplanted by heterogeneous, big-tent movementism. Yet the ruling classes, those who control the State, still act from the perspective that the human world is intelligible. The State's actors cannot make political interventions without assuming a theory of how the human world works and a self-consciousness of their own function in "steering" this human world towards a specific set of economic and social objectives. For example, the whole military and intelligence apparatus of the United States scientifically studies the geopolitical order of the modern world in order to apply policy that guarantees the economic and political supremacy of the American Empire. Governments have economic policies that emerge from trying to understand the laws of motion of capitalism and using that understanding to administer the nation-state on a rational basis.

The skeptics of the intelligibility of the human world could protest these assertions in different ways. One protestation could be that the existence of the technocratic state still does not reveal some universally coherent ruling class. In other words, there is no bourgeoisie, no "banksters", no other identifiable subjects that control the technocratic state for some identifiable reason; the State is simply an autonomous machine with no coherent trajectory or narrative. A second protestation is inherent in some interpretations of Adorno's and Horkheimer's Dialectic of Enlightenment: to make the human world intelligible to science is a method of domination, whereby human beings can be instrumentalized into stacks of labor power to be manipulated and administered. Furthermore, according to this criticism of the Enlightenment, the particularities of the human world that cannot be scientifically captured are violently forced to fit a certain universal; for example, the violence Canada did unto First Nations when it attempted to forcibly "anglicize" them by abusing and destroying their children in Residential Schools.

Curiously, this second protestation, that rationality is used to scientifically dissect the human world in order to dominate it, shows the weakness of the whole counter-rational project. The ruling classes do make the human world intelligible for domination, through their technocrats, wonks, and economists. The key idea, however, is that they administer the world in the name of some objective that does not treat social need as its end. The behavior of the State does indeed show that the human world and history are intelligible; it's just that this intelligibility is instrumentalized in favour of some anti-human end. In reply to the first protestation, that it is impossible to recognize a universal subject or the end the technocratic state pursues, I will say that the complexity of world capitalism does not imply that it has no dominant trends that can be analyzed. The system experiences various tendencies, some in conflict with each other, which can still be understood scientifically, from a bird's-eye view. For example, one of the key trajectories of the modern capitalist state is the safeguarding of the institution of private property and the stimulation of capital accumulation (e.g. GDP growth); this is certainly an intelligible aspect of modern world history. The existence of conflicting trends within the State that counter the feedback of capital accumulation, such as inefficiencies caused by rent-seekers and corruption, only means that the State (and the human world) are complex systems with counteracting feedback loops, not that these objects cannot be made intelligible by scientific reason in order to understand them and ultimately change them.

The existence of contradicting feedback loops embedded in a complex system is not an argument against a scientific understanding of the human world. One can still try to understand the various emergent properties even if they contradict each other. For example, a very politicized complex system today is the climate. Although we cannot predict the weather, that is, the atmospheric properties of a ten-square-kilometer patch on a specific day, we can predict the climate, that is, the averaged-out atmospheric properties of the whole Earth over tens of years. For example, we have a very good idea of how the average temperature of the Earth evolves. The same heuristic applies to the human world: we cannot understand everything that happens at the granular level, but we can have ideas about the average properties integrated throughout the whole human world. Similarly, the climate system has counteracting feedbacks; for example, clouds may decrease the temperature of the Earth by reflecting solar radiation into outer space, while at the same time heating the Earth through the greenhouse effect of water vapour.
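The weather-versus-climate point can be sketched numerically (a toy model of mine, assuming Gaussian daily noise around a fixed "climate" mean; the numbers are arbitrary):

```python
import random

rng = random.Random(42)

CLIMATE_MEAN = 15.0  # the underlying "climate" signal (degrees C)

def daily_temp():
    """The 'weather': the climate signal plus large chaotic noise."""
    return CLIMATE_MEAN + rng.gauss(0, 10)

# Thirty years of simulated days.
days = [daily_temp() for _ in range(365 * 30)]

# Any single day is dominated by noise and hard to predict...
one_day = days[0]

# ...but averaging over thirty years recovers the climate signal closely,
# because the granular fluctuations cancel out.
thirty_year_avg = sum(days) / len(days)
print(round(thirty_year_avg, 1))
```

The averaging is the whole trick: unpredictability at the granular level is compatible with strong regularities at the aggregate level, which is the claim being made about the human world.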

These contradicting feedbacks do not make the climate system incoherent to science. Similarly, the existence of various subjects with conflicting interests in capitalism does not mean that there cannot be dominant trends, or some sort of universality underlying many of the subjects. At the end of the day, basic human needs, such as housing, education, and healthcare, are approximately universal.

The fact that the human world is intelligible, and that this intelligibility is instrumentalized by our enemies, that is, the capitalists, the military apparatus, and the technocratic state, in order to exploit and degrade the Earth and its inhabitants for capital accumulation, means that we should make use of this instrumental reason to counterattack, not pretend that this Reason is incoherent or that it is a tool that corrupts its user. In fact, there are many examples where instrumental reason is used for "good", for example, the concerted medical effort to cure certain diseases, which makes the human body intelligible in order to understand it. At the same time, in a Foucauldian sense, it is true that the clinic can be used for domination, but this power dynamic is just one feedback loop among other more positive ones, such as emancipating humanity from the structural obstacles of disability and disease. Thus, universal healthcare is proof of the use of instrumental reason for the purpose of human need and emancipation.

The usage of instrumental reason for social need and freedom harkens back to Hegel. The world Hegel promised us at the end-point of history, the world of absolute freedom, is one where human beings become conscious of the intelligibility of history and therefore rationally administer it to serve well-being and freedom. The only problem with Hegel's perspective is that he thought history marched deterministically towards freedom. Instead, to make history and the human world intelligible for human needs is a political decision that is not predetermined by the structure of history itself. Until now, the historical march of the last couple of centuries has been towards the increasing domination of the Earth and its inhabitants for the purpose of capital accumulation. However, in the same way that the ruling classes make history intelligible in order to serve profit and private property, there is no necessary reason or law that prevents using the intelligibility of history for social need. The socialist political program is precisely this: to make the human world transparent to science and reason in order to shape it into a free society dominated by human creative will, as opposed to the imperatives of toil and profit.


Against Economics, Against Rigour


I've been trying to grasp why mainstream economics considers itself superior to heterodox approaches like Post-Keynesianism or Marxism. After reading a couple of papers and articles, one constant argument that appears is that of rigour. Mainstream economics is mathematically axiomatic; that is, it begins from a set of primitive assumptions and then derives a whole system through self-consistent mathematical formalism. Usually this is contrasted with heterodox approaches like Post-Keynesianism, which are seen as less coherent and ad hoc, with some writers referring to Post-Keynesianism as "Babylonian babble" and as not superior to a pamphlet. Even when heterodox economists use mathematical modelling, their models do not follow from some axiomatic method but are ad hoc implementations.

What interests me about this argument is its definition of science. According to many mainstream economists, heterodox economics isn't a science. The main reason given for this unscientific status is that heterodox economics lacks internal coherence, that it is not rigorous. As mentioned above, mainstream economics claims rigour by deriving its propositions through mathematical inference that begins with a set of axioms. It is by this rigour that mainstream economics defines itself as a science.

If a field claims to be scientific, it must justify that status. For better or worse, in the West, science is epistemically privileged: an activity that claims the mantle of science can assert more legitimacy than other modes of producing knowledge. By claiming scientific status on the grounds of axiomatic coherence while denying that same mantle to heterodox economics, mainstream economics implicitly argues that heterodox economics is an epistemologically inferior, unscientific approach.

A common retort against the mathematical rigour of economics is that its coherent mathematical frameworks don’t necessarily correlate with empirical reality, which calls its scientific status into question. However, this argument has been done to death, probably by people much smarter than me. What I find interesting is the idea that inferential coherence is a necessary condition for science. In fact, the argument being made by mainstream economists is that even if heterodox economics can arguably explain some empirical phenomena that mainstream economics cannot, heterodox economics is less scientific because it lacks internal coherence. Mainstream economists therefore claim that a necessary condition for science is rigorous logical coherence.

Where does this definition of science as rigorous logical inference come from? Only one natural science, physics, approximates this sort of rigorous coherence, in which a set of primitive axioms leads to a whole system of knowledge by the application of rules of mathematical inference. Even then, the mathematical rigour in physics is often inferior to that in economics, given that physicists write far fewer mathematical proofs than economists. The rest of the natural sciences are less rigorously formulated – many of them are a “bag of tricks” that is heuristically unified. This is because anything more complex than a system of two interacting particles is mathematically intractable due to nonlinearities. A good example is psychology. Although psychologists assume that certain personality traits are manifestations of chemical processes in the brain, there is no rigorous mathematical inference connecting psychology to brain chemistry – these scales are unified heuristically and qualitatively. There are similar examples in biology, where, in theory, morphological evolution is coupled to the chemical evolution of genes, but a rigorous mathematical linkage of the two scales is close to impossible.

How did the definition of science as mathematical inference come into being? It is certainly not the normative self-consciousness of scientists, who see themselves as Popperian. Popper’s theory treats the evolution of science as a process where propositions are falsified by empirical evidence only to be replaced by better explanations – it says nothing about “logical rigour”. Nor is this definition descriptive, as shown in the previous paragraph, because most natural sciences aren’t as rigorously self-coherent as mainstream economics. Weintraub argues that the current axiomatic approach of mainstream economics can be traced back to Gérard Debreu, an important French-American economist of the 20th century. In the first half of the 20th century, David Hilbert and Bourbaki (a pseudonym used by a group of French mathematicians) attempted to axiomatize mathematics, prompted by the discovery of non-Euclidean geometry in the 19th century. Before non-Euclidean geometry, geometry was thought to derive its axioms intuitively from the world – the truth-value of the axioms was self-evident. An example of an “intuitive” axiom in Euclidean geometry is that parallel lines don’t meet. However, 19th-century mathematicians realized that they could create self-consistent, alternate geometries where parallel lines could meet. An alternative geometry starting from non-Euclidean axioms was self-consistent if rigorously inferred through mathematical rules. This led Hilbert and Bourbaki to develop a more axiomatic approach to the study of mathematics. Debreu, who learnt mathematics from the Bourbaki school, brought this axiomatic way of thinking into economics.

Today this axiomatic approach is very obvious in the average graduate economics curriculum. For example, some of the classes emphasize writing mathematical proofs! I am very close to completing a PhD in physics, and I only encountered very basic proofs at the undergraduate level in a linear algebra class. After that I never wrote a single proof again. Yet economics, which arguably has had less empirical success from its mathematics, requires more mathematical rigour than the average paper in physics. This tells me that the economist’s emphasis on rigour is not inspired by the example of the successful natural sciences, but is endogenous – it comes from within. If anything, it shows that mainstream economics is at most a bizarre synthesis of philosophy and mathematics, owing more to these abstract fields than to any of the existing natural sciences. Therefore, mainstream economics should be described as a mathematical philosophy rather than a science.

The case of the arbitrary rigour of economics has interesting implications for academia at large. An uncharitable person would say that the spurious mathematical rigour of economics is simply gate-keeping for a professional guild. The extremely technical skills required to master mainstream economics limit the supply of would-be economists, generating a manageable number of rent-seekers who can be paid handsomely. But this probably extends to much of academia as well. Academia is peppered with examples where “rigour” and “method” are elevated with no obvious epistemic justification. One has to wonder whether appeals to rigour are, more often than not, guild-building that justifies large pay-checks by limiting the supply of participants. The trope of “how many angels can dance on the head of a pin” is a famous example of this spurious rigour: medieval theologians were accused of developing beautiful, often rigorous and coherent systems that dealt with questions of no intellectual consequence. The same phenomenon probably emerges in some sectors of academia, given that rigour and opacity are a cheap way of signalling expertise to institutions in order to justify large salaries.

Finally, I think the emphasis on rigour where it is not warranted is unhealthy for democracies. Many problems that are meaningful to humanity at large, such as issues of a political and economic nature, require the mass participation of society in order to build an engaged citizenship. Spurious rigour and credentialism are ways to build a technocratic hierarchy that is not necessarily justified. In the absence of authentic knowledge, rigour becomes simply a guild-like mechanism for confining meaningful problems to a set of fake experts who decide the fate of whole nations, often in the interests of a narrow elite. A socialist, democratic society would require a more egalitarian epistemology than the one that exists today.


The World-System Versus Keynes


The most incredible modern lie is that of nation-state sovereignty. From left to right, the relative success of an administration is always interpreted as a function of endogenous variables the nation-state can supposedly control. The right wing sees the perceived failure of its society as the result of the government not closing the borders, running high deficits, or allowing companies to outsource jobs. From the left’s perspective, the nation-state is simply not running high enough deficits to fund more social programs, not supporting full-employment policies, or refusing to raise the minimum wage. Meanwhile, a totalizing world-system pulsates in all corners of the planet, with flows of information, commodities, securities, and dollars creating a complex system that subsumes the sovereignty of most nation-states. At the heart of this world-monster there is a hierarchy of nation-states, with some states having more influence and control over the world-system than others.

Recently, with the advent of the Great Recession in 2008, many people on the left, some of them self-proclaimed socialists, have been doubling down on the myth of national sovereignty. They see the economic crisis, and the continuous casualization of workers, as an opportunity to administer the nation-state in the “right way” and reverse these trends. They see themselves as holding secret truths and insights about the economy that neoliberals don’t truly fathom. If only these social democrats had the opportunity to apply the right ideas – ideas that they claim have been pushed out of the political and academic mainstream for venal reasons – they could fix the economy.

What are these right ideas? In the first half of the 20th century, John Maynard Keynes had already developed a toolkit for any eager leftist technocrat to manipulate in order to attenuate economic crisis. In contrast to the classical economists who preceded him, Keynes argued that sometimes the market does not clear, which generates a recession. By the market failing to clear, I mean that the supply of commodities is not balanced out by their demand; the classical claim that supply creates its own demand is sometimes referred to as Say’s Law. Another important aspect of the failure of Say’s Law is the existence of unemployment, given that there is more supply of labor than demand for it. While classical economists argued that economic crises would self-correct and markets would eventually clear – for example, through falling wages or cheapening commodities – Keynes argued that recessions could persist for a very long time without the aid of governmental fiscal and monetary policy. According to Keynes, some of the reasons markets fail to clear are: (i) workers will not accept wage cuts, (ii) recession makes investors risk-averse, causing them to save their money rather than invest it, and (iii) mass unemployment and risk aversion decrease the buying of commodities.

Keynes thought that the state could force the market to clear through fiscal and monetary policy. He argued that in a recession, aggregate demand is lower than it should be, and this in turn causes negative feedback loops that halt the economic engine (e.g. the underconsumption of commodities). In order to stimulate demand, the state could increase the amount of money on the consumer side by: (a) public spending on infrastructure in order to employ the previously unemployed, and (b) lowering taxes so that the consumer has more money available. Meanwhile, the state could stimulate demand through monetary policy by lowering the interest rate so that consumers and investors can buy and invest through cheap loans and credit. This monetary policy was thought to cause inflation, because it increases the money supply by allowing low-interest, cheap borrowing, but it was also thought to cure the greater diseases, which were mass unemployment and low aggregate demand. Keynes’ policies often required deficit spending, that is, the government spending more than it collects, usually by accruing debt. Furthermore, Keynesian policies tend to trigger inflation because they increase the money supply. The Keynesians, however, considered this inflation a necessary evil to cure unemployment.

In the 1970s, however, economic crisis displaced Keynesianism to the fringe. The rapid increase in the price of oil, coupled with a large money supply, created a crisis. High prices discouraged companies from investing, given that production costs were too expensive and inflated. The Keynesian approach to dealing with crises was not applicable, since unemployment was coupled with low demand and inflation (stagflation), which ran contrary to the Keynesian consensus of the time. It seemed that inflationary policies, such as increasing the money supply, would not solve the stagnation and unemployment problem. In response to the crisis, some economists, like the monetarist Milton Friedman, claimed that Keynesian monetary policy was at least partly responsible for the crisis, given its inflationary nature. Friedman argued that in order to cure the recession, governments should reduce the money supply. In accordance with Friedman’s prescription, the Fed in the United States sharply increased interest rates, which ran contrary to Keynesian policy. This tightening of the money supply by the Fed is thought to have aided in the resolution of the crisis. The empirical falsification of Keynesianism by the stagflation crisis, coupled with a protracted culture war waged by economists such as Hayek and Friedman, and the shift of power towards financial speculators, pushed Keynesianism to the heterodox fringe it occupies today.

Nowadays Keynesianism has been rebranded into all sorts of heterodox disciplines that have found a place on the Left. Keynes became a darling of the Left for three reasons: (i) nostalgia for the post-WWII welfare state and cheap credit, (ii) a consumer-side perspective (e.g. a focus on aggregate demand) that seems to value working-class consumers over capitalist suppliers, and (iii) the idea that capitalism is crisis-prone, in contrast to the neoliberal orthodoxy of economic equilibrium. Some of these rebranded Keynesian theories go under different names, such as Post-Keynesianism and Modern Monetary Theory. Although these Post-Keynesian theories are not exactly isomorphic to the original theories and prescriptions set out by Keynes, they all roughly agree with the main heuristics: the state should intervene strongly in the market, and an increase in the money supply and government spending, rather than neoliberal austerity, should be used to counter crisis. Finally, all these approaches rely on one particular assumption (which I will later show to be flawed): the strength of the sovereignty of the nation-state. I will focus on Modern Monetary Theory (MMT) as an example, given that it is one of the more contemporary iterations of Keynesianism.

Modern Monetary Theory’s basic premise is simple: a nation-state that issues its own currency cannot go bankrupt, given that it can print more of its own money to pay for all necessary goods and services. Another way of stating this theory is that governments don’t collect taxes to fund programs and services. Rather, governments literally spend money into existence, printing money in order to pay for necessary services and goods. Taxes are just the government’s mechanism for controlling inflation; in other words, taxes are the valve used to control the money supply. MMT therefore argues that since money takes the form of fiat currency, it is not constrained by scarce commodities such as gold and silver, and is therefore a flexible social construct. So governments don’t need to cut social programs in order to increase revenue – they can simply spend more money into existence to pay for them. Furthermore, the government can enforce full employment by spending jobs into existence – the state can create jobs through large-scale public works and then print the necessary money to pay the workers. In a sense, MMT is another iteration of the Keynesian monetary heuristic that increasing the money supply is a good way to solve high unemployment and crisis.

Imagine the potential of MMT for a leftist! The neoliberals arguing for austerity and balanced budgets are talking nonsense – the state can simply spend money into existence, pay for welfare and other public services, and use this newly minted money to employ the unemployed! If the increase in the money supply triggers inflation, the state can simply tax more, fine-tuning the quantity of money. If only the MMTers could convince the right technocrats, we wouldn’t have to deal with the infernal landscape of austerity.

However, the idealized picture presented by MMT is missing key variables. Ultimately, an MMT approach would be heavily constrained by national production bottlenecks. For MMT approaches to work, the increase in demand caused by the sudden injection of money must be met by the production of the desired commodities. In an ideally sovereign nation, society would be able to meet the demand for computers, medicine, or food by simply producing more of these commodities. We may refer to a country’s capacity to produce all the goods it needs as material sovereignty.

This is where the fundamental Achilles heel of MMT (and Post-Keynesianism in general) lies. Most countries are not materially sovereign at all. Instead, they depend on imports to meet their demand for fundamental goods such as technology, fuel, food, and medicine. In the real world, countries have to buy foreign exchange (e.g. dollars) in order to import necessary goods. The price of the dollar in terms of another currency is not under the control of that currency’s issuer. Instead, it is a reflection of the economic and geopolitical standing of that nation within the existing world-system. Whether the dollar is worth 20 or 30 Mexican pesos has to do with Mexico’s position in the global pecking order, and this exchange rate, if anything, can be made worse by the adoption of Keynesian policies. For example, if Mexico suddenly increased its own money supply, the Mexican peso would simply be devalued against the American dollar, diminishing its ability to buy the necessary imports. This puts a fatal constraint on a nation-state’s ability to finance itself through simple monetary policy.

The economic castigation of “pro-Keynesian” countries by the world-system is a cliché at this point. To name some examples: Allende’s Chile, Maduro’s Venezuela, and pre-2015 Greece. In the case of Allende, the sudden increase in the money supply through raising the minimum wage created a large unmet demand and eventually depleted the country’s forex reserves (there was also economic sabotage aided by the United States, but this too reinforces my argument). In the case of Maduro’s Venezuela, Chávez had run large deficits, assuming that high oil revenues would last long enough. Greece overspent through massive welfare and social programs; although Greece doesn’t have its own currency, it still engaged in a high-deficit fiscal policy that led to its default. If these countries had possessed material sovereignty – the ability to produce their own food, technology, and other necessary goods – the global order would not have been able to castigate them so harshly. Instead, what happened is that foreign investors pulled out, the national currency plummeted, and forex reserves were depleted, leaving these governments unable to meet national demand for necessary goods through imports or foreign capital injections.

The above scenario reveals a fundamental truth about capitalism – national economies are functions of global, exogenous variables, not only of endogenous factors. Keynesian policy is premised on the idea that nation-states are sufficiently sovereign to have economies that depend mostly on endogenous factors. If the nation-state’s economy depended solely on national variables, then a Keynesian government could simply manipulate those variables to get the desired outcome for its national economy. However, it turns out that nation-states are instead firms embedded in a global market, and their fate ultimately lies in the behaviour of the planetary world-system. The nation-state firm has to be competitive in the world-system in order to generate profit; this implies that inflationary policies, large debts, and state-enforced “full employment” are not necessarily healthy for the profitability of the firm. Furthermore, it means that the leftist nationalists who want, for example, to leave the eurozone in order to issue their own currency are acting from misguided principles.

Given the persistence of the totalitarianism of the world-system, no matter the utopian schemes of leftist nationalists and their fringe heterodox academics, it is infuriating to witness how the Left has lost its tradition of internationalism. Instead, since the advent of WWII, the Left has been pushing for “delinking” from the world-system, whether through national liberation during the 60s or, more recently, by leaving the eurozone, fomenting balkanization in countries like Spain or the United Kingdom, and so on.

The world-system can only be domesticated to pursue social need through the existence of a world socialist government. Regardless of how politically unfeasible the program of world government is, its necessity follows formally from the existence of a world-system. Only through world government could socialists have sufficient sovereignty to manipulate the economy for social need. In fact, the Keynesians indirectly point at this problem through their own formalism. Post-Keynesian theories such as MMT start from the idea of a state having material sovereignty. Yet the only way for a state to have material sovereignty, and therefore be able to manipulate endogenous variables for its own economic ends, is to subsume the whole planet into some sort of unitary, democratic system. A planetary government could then manipulate variables across the planet (e.g. both in China and in the United States) to enforce social-democratic measures like full employment or a welfare state, without the risk of international agents castigating the economy, or the need to import goods from “outside”. But the funny thing is that once we have global fiscal and monetary policy, Keynesianism becomes irrelevant, given that market signals can be supplanted by a planned economy.


You Aren’t A Vulcan, But a Squishy and Ideological Human


Scientific rationality is one of the foundations of western civilization. The discovery of the natural laws behind the useful work done by a diesel engine, the electron clouds propagating through conductors, and the modus operandi of a virus has given the geopolitical upper hand to North America and Western Europe. So it comes as no surprise that many have attempted to use a similar methodology to uncover the fundamental laws that regulate the human world.

Although this rationalist method of parsing the human world is intimately coupled with the spirit of anglo-saxon capitalism – with economic marginalism (e.g. Hayek, Samuelson, etc.) being its first coherent expression – there has been a recent, growing rationalist movement that attempts to bring this perspective to the culture wars. Some examples of these platforms are the popular blog Slate Star Codex and the publication Quillette. An important animus behind this upsurge is a reaction against the sociological theories of the Left, such as structural theories of gender and racial discrimination. Many of these rationalists instead postulate that the perceived empirical disparities between races and genders (e.g. the gender wage gap, racial inequality, lopsided gender and racial ratios in STEM, etc.) are connected to biological-essentialist variables such as sexual reproductive strategies or differences in IQ among races.

It’s hard for me to discuss these theories in a completely detached and charitable manner, because of my ethnicity, leftist leanings, and utter contempt for Vulcan-wannabe dudes with shitty STEM degrees. However, I will try to use peer-reviewed articles that are popular among them in order to argue that their “rationalist” methodology is fundamentally wrong. The outline of my argument is as follows: (i) the only thing these papers demonstrate is empirical correlation, not causation; (ii) the reason they cannot demonstrate causality is that the problems they deal with have many variables that are extremely hard to isolate; (iii) because of the large epistemic uncertainty in the causal links, politics becomes unavoidable; (iv) because of politics, the rationalist project collapses, and their Vulcan-like rationality becomes one political ideology among others.

A good example is the often-cited paper by Schmitt et al. Its main thesis is that personality differences between women and men seem to widen in more gender-equal countries. The paper finds a moderate correlation between sexual dimorphism in personality and gender equality. However, what is generally cited is one of its conclusions, which argues that personality dimorphism is not enforced by stringent policing in gender-equal countries. Rather, gender equality lets sexually dimorphic traits diverge towards their natural equilibrium. In other words, free societies let women and men express intrinsic, gendered personality traits that are a function of darwinian processes.

I’ve seen many “rationalist” sources refer to this paper either explicitly or implicitly. It is seen as one of the most powerful attacks on feminist arguments, such as the claim that certain gendered disparities – the lopsided ratio of women in some STEM fields, or the lack of women in certain leadership positions – are products of sociological and structural factors such as socialization and sexual harassment. The “rationalist” argues that policies aimed at making fields like STEM more sexually diverse, or at increasing the number of women in leadership positions, are misguided and potentially counterproductive. Very recently, a study also showed that the percentage of women in STEM fields seems to decrease as a function of equality: in a relatively unequal country such as Algeria, about 41 percent of STEM workers are women. This study seems to vindicate the earlier study by Schmitt et al.

I am not going to question the methodology behind these studies, but I feel it necessary to point out that quantifying things like “personality traits” and “gender equality”, and then aggregating them, is probably non-trivial and riddled with assumptions. However, even without questioning the methodology, and taking these empirical relations at face value, the papers at most demonstrate the existence of empirical correlations and nothing more. One could hypothesize a multitude of causes, including a biological-essentialist link, but ultimately these studies only demonstrate a correlation between two empirical measurements, and nothing else. This is the old adage that correlation does not imply causation. Here is a very funny site showing all sorts of spurious correlations, such as the relationship between suicides by strangulation and government spending on science. A better way to understand this problem is to imagine a situation where two variables, A and B, are correlated. There are actually four plausible causal explanations for this correlation: (1) A causes B, (2) B causes A, (3) A and B are both caused by some variable C, or (4) the correlation is a spurious coincidence. Therefore an empirical correlation, while an important result in itself, is not sufficient proof to establish causation.
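The common-cause case (3) is easy to demonstrate with a toy simulation. The sketch below is a hypothetical illustration of my own (the variable names and coefficients are invented, not taken from any of the studies discussed): A and B never influence each other, yet they come out strongly correlated because both are driven by a hidden variable C.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause C drives both A and B; A has no effect on B at all.
C = rng.normal(size=n)
A = 0.8 * C + rng.normal(scale=0.6, size=n)
B = 0.8 * C + rng.normal(scale=0.6, size=n)

# A and B are strongly correlated despite the absence of any causal link
# between them (analytically, r should be about 0.64 here).
r = np.corrcoef(A, B)[0, 1]
print(f"corr(A, B) = {r:.2f}")
```

An observer who only ever measures A and B would find a robust, replicable correlation, and would be entirely wrong to conclude that one causes the other.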

The issue of causation is very deep and has led to centuries-old discussions in the sciences and philosophy. For example, the Scottish philosopher David Hume argued that there is no logically consistent way of inferring causation from correlation. However, my argument isn’t as absolute as that, but more practical in an everyday sense. Studies like the ones I referred to above deal with problems that are too multivariate to convincingly establish a biological argument from a mere correlation. In physics and the hard sciences, causation is usually established through experimentation that isolates all the irrelevant variables, or, if lab experiments are not possible, through computational simulations where all the important variables and physical laws are plugged into computer code.

In the case of other “softer” sciences, such as bio-informatics and the social sciences, which deal with complex, multivariate problems that cannot be dissected by controlled experiments, the important variables are isolated through statistical techniques that try to take into account all the relevant parameters. For example, here is a very easy-to-understand paper that argues against the book IQ and the Wealth of Nations: using a simple multiple regression analysis that takes variables beyond IQ into account, it disputes the idea that a biologically determined lower IQ of the “non-white” races leads to underdevelopment in their respective countries. Furthermore, in many cases, especially in studies with political consequences, even sophisticated statistical techniques are not enough to establish causation beyond reasonable doubt, given that there is always the possibility of unknown variables not being accounted for. A famous example is the history of the cigarette-lung cancer link, where it took decades of different types of studies, from lab experiments to questionnaire-based correlations, to establish a causal link. This weakness was obviously abused by tobacco conglomerates, but the point is that even the scientists hired by these tobacco companies eventually began to accept the validity of the evidence, since various research trajectories triangulated on the cancer-cigarette connection.
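The logic of that kind of multiple regression can be sketched in a few lines. In this made-up example (the variable names, coefficients, and data are invented for illustration, not taken from the paper), the outcome is driven entirely by a confounder: a naive one-variable regression attributes a large effect to X, and adding the confounder as a control makes that apparent effect vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Invented data: the outcome depends only on the confounder, never on X,
# but X is itself correlated with the confounder.
confounder = rng.normal(size=n)
X = 0.9 * confounder + rng.normal(scale=0.5, size=n)
outcome = 2.0 * confounder + rng.normal(scale=1.0, size=n)

def ols(y, *predictors):
    """Ordinary least squares with an intercept; returns the coefficients."""
    design = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

naive = ols(outcome, X)                   # regress on X alone
controlled = ols(outcome, X, confounder)  # add the confounder as a control

print(f"coefficient on X, naive:      {naive[1]:+.2f}")   # large and spurious
print(f"coefficient on X, controlled: {controlled[1]:+.2f}")  # near zero
```

This is the best a correlational study can do, and it only works for confounders you thought to measure; an unmeasured variable leaves the naive estimate intact.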

Now let’s go back to the previous claim that sexual dimorphism in gender-egalitarian countries implies an inherent, biologically hard-wired tendency that makes men on average more interested in engineering than women. This causal link is almost impossible to establish beyond doubt, at least with known experimental and scientific techniques. The problem is far more complex than the cancer-tobacco link. This complexity arises from the many social variables interfacing with women’s career choices, variables that are extremely hard to take into account. For example, it is obvious that in the most gender-unequal limit there would be almost no women in engineering jobs (e.g. England in the 19th century)! It is only in today’s particular configuration that this correlation seems to hold, which already shows the existence of hidden socio-economic variables that affect these studies.

The inability to establish causal links beyond reasonable doubt in many socio-economic problems (e.g. economics) is actually well characterized mathematically. Even in mathematical physics, the Holy Grail for all these Vulcan-like rationalists, the problems that can be solved exactly are extremely limited in scope. Poincaré showed in the late 19th century that the trajectories of more than two interacting bodies are mathematically non-integrable. In the 1960s, Lorenz discovered that, despite the sophistication of computers, many multivariate problems, such as simulating the weather, become intractable after a certain point due to chaos: tiny uncertainties in the initial conditions grow exponentially. These uncertainties are not even a matter of failing to account for all the relevant variables – they are embedded in the mathematical structure itself. So it is quite arrogant to argue with confidence that a couple of mere empirical correlations are enough to disprove the lived experience of many female students and STEM workers, which points at discouragement from peers, lack of role models, unwelcoming workplaces, and so on.
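Lorenz’s point can be reproduced in a few lines. The sketch below integrates his famous three-variable convection model (with the standard parameters σ=10, ρ=28, β=8/3, using a simple forward-Euler scheme of my own choosing) and starts two trajectories that differ by one part in a billion; after a short time they bear no resemblance to each other.

```python
import numpy as np

def lorenz_trajectory(x0, y0, z0, n_steps, dt=0.005,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple forward-Euler steps."""
    state = np.array([x0, y0, z0], dtype=float)
    out = [state.copy()]
    for _ in range(n_steps):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
        out.append(state.copy())
    return np.array(out)

# Two initial conditions differing by 1e-9 in the z coordinate.
a = lorenz_trajectory(1.0, 1.0, 1.0, 6000)
b = lorenz_trajectory(1.0, 1.0, 1.0 + 1e-9, 6000)

# Distance between the two trajectories at every time step.
separation = np.linalg.norm(a - b, axis=1)
print(f"initial separation: {separation[0]:.1e}")
print(f"peak separation:    {separation.max():.1f}")
```

A measurement error of one part in a billion is wildly better than anything a survey of “gender equality” could hope for, and it is still enough to destroy long-run predictability in a three-variable deterministic system.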

Given the large amount of noise, chaos, and “hidden” variables in socio-economic systems, there cannot be a purely rationalist and “scientific” way of tackling these problems. This epistemic uncertainty therefore gives rise to politics in a much stronger sense than when dealing with simpler, “mathematical” problems. The cry of “centrists”, “classical liberals”, “rationalists”, Jordan Peterson, etc. that feminism and leftism are ideological is thus a case of the pot calling the kettle black. Given the epistemic opacity of socio-economic problems, their claim to rationality is simply a bed-time story – a shallow aesthetic consideration built from soulless logic-chopping and boring prose. They have agendas, not unlike the “irrational” leftists and feminists. In fact, if I were uncharitable, I could claim it isn’t reason that animates them, but some burning resentment towards women, minorities, and feminists invading their spaces. I am not ashamed to admit my own agendas, and that’s why this blog is explicitly partisan and not written in the spirit of some shitty analytic philosophy paper.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.


Socialism Versus Economic Growth: The Human Being Is Not Infinitely Hackable


I am for the rational planning of the world economy in order to fulfil social need (e.g. free time, housing, healthcare, transportation, education, etc.), including the minimization of the work day until its eventual abolition. This would require consolidating current scientific and technological capacity towards the goal of serving these needs. Yet I feel this usage of scientific rationality for socialist ends is often mistakenly coupled with the idea of unconstrained economic growth. In the last couple of years, this idea of growth has become a point of tension in the Left between the so-called “de-growthers” and the “prometheans“: the former wish to contract the economy in order to avoid ecological catastrophe, while the latter argue that continued growth and progress are necessary for socialism. The debate is quite muddled, and often it is not really about technical disagreements in political program relating to economic growth, but about fuzzier aesthetic and ideological differences between the ecologists and the futurists. On one hand you have quasi-Luddites who privilege the local and small over the global and cosmopolitan, and rail against GMOs and nuclear power. On the other, you have sci-fi “communist” types who want to pave the Earth and colonize Mars.

Much of this debate about growth is anchored around ecology and Malthusianism – the idea that planetary constraints demand that humanity downsize and consume less. However, as a socialist, I am not invested in the tension between mass consumption and an impersonal natural world that I have no affinity with. Rather, I am interested in the liberation of humanity from toil, alienation and material misery. I therefore believe that the idea of unconstrained growth is at best confused from the perspective of a socialist, or at worst, actually detrimental to the objective of liberating humanity from quasi-forced labor (wage labor, peasant labor, slavery, etc.). This leftist anchoring around growth leads me to argue the following in this piece: (i) growth as a metric for socialism is undefined; (ii) if we measure growth as increased productive capacity, then it is antithetical to socialism (productionism); (iii) productionism has a human limit, given that human beings can only be optimized into productive workers at the cost of incredible physical and psychological violence.

Growth, from the perspective of these left debates, is indeed undefined, given that economic growth is usually conceptualized in the context of capitalism. Since GDP growth is the telos of capitalism – the expansion of capital through the reinvestment of profit and the exploitation of labour – economic growth is a very well defined process within the market, and in that sense, it is a “positive” thing. For example, the competency of a politician, whether “left” or “right”, is at least partly judged by how much GDP grew under their tenure. In the context of social democrats operating within capitalism and the nation-state, GDP growth is important because the satisfaction of social need is the side-effect of a growing economy that can generate new jobs and more tax revenue. However, the fulfilment of social need is not the end goal of capitalism, just a potential byproduct of profit. In contrast, the telos of socialism is not capital growth, but the rational satisfaction of social need. The concept of economic growth therefore becomes undefined in the context of socialist economics. One cannot use a metric defined in relation to the expansion of capital to judge the progress of a society that is focused on satisfying needs related to housing, healthcare, education, reduction of the work day, and transportation. Socialist progress cannot be meaningfully quantified by a metric such as GDP, especially in the maximum program of socialism, which would abolish money and private property.

A more universal metric for growth, as opposed to GDP, may be a productionist metric – a function of how much of a particular industrial output is created. This was more or less the metric used for planning in the USSR under the famous Five Year Plans. Through a method called “material balances”, the planning agency of the USSR, Gosplan, would survey all the available raw materials and natural resources, turning them into inputs that were “balanced” with industrial outputs. Given the absurdly high production targets required by, for example, the first Five Year Plan, which demanded the accelerated expansion of heavy industry at the cost of famines, terror, and slave labor, one could label the USSR as productionist. This historical human cost of industrialization (both in the USSR and the West) leads to my next argument: that the intensification of productionist growth depends on the exploitation of human labour – either by extending the work day so that more industrial output is produced within a single day, or by extracting a surplus that must be reinvested in the development of machinery and techniques.
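The bookkeeping behind such balancing can be sketched with the standard Leontief input-output model – a textbook stand-in, not Gosplan's actual iterative procedure, and the two-good economy below is entirely hypothetical. The point is only that intermediate inputs force gross output above final consumption: to leave society 100 units of coal and 50 of steel, you must produce more of both.

```python
# Toy "material balance": A[i][j] is the amount of good i consumed to
# produce one unit of good j. We solve x = A x + d by fixed-point
# iteration for the gross outputs x needed to net out final demand d.
def gross_output(A, d, iterations=200):
    n = len(d)
    x = list(d)  # start from final demand and pile intermediate use on top
    for _ in range(iterations):
        x = [d[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Hypothetical two-good economy: a tonne of steel consumes 0.4 tonnes of
# coal; a tonne of coal consumes 0.1 tonnes of steel (machinery wear).
A = [[0.0, 0.4],   # coal used per unit of (coal, steel)
     [0.1, 0.0]]   # steel used per unit of (coal, steel)
d = [100.0, 50.0]  # final demand for consumption
x = gross_output(A, d)
print(x)  # -> approximately [125.0, 62.5]: gross output exceeds demand
```

The iteration converges whenever the economy is "productive" (each round of intermediate demand is smaller than the last); with serious data one would solve (I − A)x = d directly.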

The history of class society has shown that economic expansion is contingent on the extraction of surplus from human labor. The pyramids, the steam engine, and the violent transformation of peasants into more productive proletarians are a function of the coagulated blood of billions. Economic expansion requires the extraction of a surplus from human labour, whether by seizing peasants’ agricultural output or through the exploitation of proletarians.

Today in the Global North we can see the more humanistic manifestation of the tyranny of economic growth. Although the economy of the core states has expanded exponentially in the last century, the length of the work day has been frozen for almost a hundred years. Not only has the length of the work day remained frozen, but ever more intensive techniques are applied to dissect the human being in order to rebuild it as a working automaton. We see this in the expansion of the work day into our inner lives, transforming humans into semi-sentient, individual firms. Socializing becomes networking, love a machine-learning algorithm to find a mortgage partner, social media a matter of building a brand. This transformation of homo sapiens into homo economicus is hard to describe, but I feel it in the marrow of my bones as an immigrant. Economic rationality controls the way I move my hands in a professional presentation and structures my speech, demanding that I do not betray my foreign sloppiness. For the sake of career and success I must conceal my spirit, which was shaped by a culture where lines are wobbly, time is erratic, and human boundaries less exact. How could anyone who is human defend this infernal labor camp? This despair makes me sympathetic to “non-model” minorities who are unable to adapt to this padded asylum of white light and right angles, because at some level they are more human than me.

I must reiterate that the above arguments do not necessarily run counter to technological innovation and a planned, controlled growth. My point is that productionism is inevitably a function of human labour, and is therefore in tension with the reduction of the work day. If the priority of socialism is to expand the sphere of free time, then inevitably the reduction of the work day will be prioritized over mass consumption and productionist growth. That does not imply that humanity will live an austere existence with the minimum necessary for survival, but that production will be planned in accordance with use-value: instead of the built-in capitalist obsolescence of large volumes of short-lived consumer goods, we may have a lower volume of long-lived, quality goods. An exact picture of the social reality within a planned world economy is hard to portray at this moment, but the important point is that productionism and consumerism are antithetical to free time.


EDIT: I misrepresented Leigh Phillips’ argument, as he isn’t really arguing for unfettered production; rather, his argument is against localist imperatives of arbitrary “downsizing”. He instead locates the problem in the fact that capitalism isn’t planned, and that we have no control over what to produce or stop producing based on a scientific and planned evaluation of human need and environmental constraints. So the problem is not growth per se, but the arbitrariness of the market, which cannot be solved by downsizing if we don’t leave capitalism. To quote Phillips from his book “Austerity Ecology“:

“Instead of the next investment or production decision being driven blindly by profit seeking, or consumer purchase made constrained by the need to reduce expenditure, all economic actions occur as the result of rational decision-making on the basis of maximum utility to society. Because all this is a conscious, planned process and we are no longer beholden to the drive for profit, we would now have the possibility to wait, to hold off for a while until we have sufficient technological innovation to move forward in a way that does not damage the environment in a way that delimits the optimum living conditions for humans.

We can collectively say: Well, now that we have this new efficiency in the production of this commodity, what shall we do with the savings? Shall we increase production? Shall we reduce material use? Shall we increase the overall amount of leisure time available to the labour force?

Capitalism is a problem because in the face of environmental spoilage, it must proceed regardless (not because of growth per se!). Any new innovation permitting efficiency gains will be invested in the optimum way to produce still more capital, even at the expense of environmental despoilment. This is not to say that the capitalist is evil. He is not. He has no choice. Indeed, even if he is environmentally minded, he must still make that choice, or go bankrupt. As Foster writes, and here he is correct, the constant drive to accumulate capital “impos[es] the needs of capital on nature, regardless of the consequences to natural systems.”

[…] Democratic economic planning though gives us breathing room. True, we may in principle at some point in the future have to pause some production expansions here or there, for a period. But this is a very different thing from saying there is an upper limit.

Even better, because socialism would permit us to direct investment—including investments in research and development—not merely toward what is profitable, but toward what is most useful, there is every likelihood that growth may actually advance faster under socialism than under capitalism, because more research funding can be directed to technologies ensuring we do not damage the environment.”



Socialism Versus Jordan Peterson: The Case of Complexity Science


The nature versus nurture debate has taken on a new political dimension. Although that discourse was always political, a combination of media dynamics and academic fortress-building seems to have divided the nature-nurture debate into two ontological camps. In other words, each camp’s language is unintelligible to the other. On one hand, you have the sociological Left, which posits that socio-economic discrepancies between races and between the sexes are due to socio-cultural phenomena. For example, centuries of policing gender boundaries (some of which continues today) through both soft suggestion (e.g. gendered roles) and direct institutional violence (adultery laws, the banning of women from certain professions, etc.) have solidified the disadvantaged position of women in this society. On the other hand, you have the naturalistic “center” or right wing (e.g. Jordan Peterson), which argues that the lower socio-economic position of women is not a function of structural obstacles, but of more or less biologically hard-wired personality differences that make women choose lower-paying professions, or be less confrontational and assertive in corporate settings.

This piecemeal approach – the Left’s sociology on one hand, the Right’s naturalism on the other – runs counter to a scientific ontology, which would posit that humans are both social beings and evolved animals. Even if the Left is correct that biological variables are not necessarily relevant to many of the social sciences, the Right’s naturalistic framing of social problems as functions of biological darwinism makes more intuitive sense to an anglo-saxon audience. This is because anglo-saxon education is steeped in scientism (e.g. anglo public intellectuals such as Dawkins and Degrasse Tyson are scientists), and its culture is ill equipped to deal with more sociological and philosophical arguments (e.g. Weber’s argument about the West’s instrumental reason). So given the scientism of western culture, it is important for leftists to develop a synthesis that outlines when sociological feedback loops completely overwhelm biological ones, rather than simply eliminating biology from their conceptual framework. I believe this synthesis is possible using the conceptual constellation of complexity science (something I’ve written about before). To make my case, I will first outline why the Left is against naturalistic explanations. Secondly, I will explore how these sociological arguments were recuperated by liberal bureaucrats and opinion-makers (hereafter referred to as coordinators), making naturalistic arguments more popular given the backlash against this authority. Finally, I will sketch a “complexity science” synthesis of why social dynamics tend to be more important than biological ones when dealing with human society.

Humans are animals, the end product of billions of years of biological evolution that transformed a primordial bacterium into a big-brained, bipedal primate that can talk and make abstract, mathematical computations. However, the human being is also a social being, shaped and programmed by various complicated feedback loops enforced not only by the most rudimentary kinship unit, the nuclear family, but by large-scale socio-economic structures that extend across centuries and thousands of kilometres (e.g. states). One of the most ancient and predictable tricks of the elite is to justify their privileged positions through naturalistic arguments. For example, in the 19th century social darwinism and racial pseudo-science were used to biologically justify the privileged position of the white man and the capitalist. Given the reactionary nature of these pseudo-scientific arguments, revolutionaries and militants began taking a sociological approach to dismantle the naturalistic myths of power (e.g. Marx): it’s not biology that gave the capitalist or the white man the head start, but complicated historical contingencies that gave rise to feedback loops privileging some castes at the expense of others. Existing power differentials were not biological telos, but historical accident. These sociological arguments formed the theoretical backbone of working class militancy, feminist activism, and anti-racist movements.

However, recently these initially emancipatory sociological explanations have been recuperated by a professional caste in a diluted, tragic form. This form does not have the objective of liberating humanity by addressing the material structures that sustain the nightmare of class society. Instead, these ideas have been defanged into talking points used by bosses to discipline how their employees talk, or strings used by human resource cyborgs to maintain appearances in the atomic wasteland of social media. These new deplorable, liberal coordinators, concerned only with maintaining optics and flaunting cultural capital, are unable to defend these sociological ideas, because nothing is at stake for them except television ratings or curricula vitae. Jordan Peterson, the idiot’s smart man, realizing how vulnerable these clueless coordinators are, has made a killing for a living, netting about sixty grand monthly on his Patreon. For example, he recently crushed Cathy Newman in a televised debate, which was a cathartic event for his fans. By uttering the most simplistic, naturalistic anti-feminist talking points, he made Newman short-circuit, repeating stereotypical liberal mantras over and over. As Quillette recently put it, it seemed as if Newman had never heard these really basic arguments – routinely used by Wikipedia-reading misogynists and crackpot “evo-psych” amateurs – and had been caught in some liberal, human-resources bubble where the barbarian hordes of angry dads are gated away.

Given the inability of this contemptible coordinator caste to actually defend these sociological arguments from the naturalistic attack in the first place, socialists must come up with a synthesis that addresses the naturalistic attack and defends the socio-historical tradition of the Left. As I have written before, complexity science may offer a good outline for addressing these naturalistic arguments. Complexity science roughly argues that complicated systems cannot be reduced to the behaviour of their individual units. Instead, the system itself creates emergent feedback loops and laws that cannot simply be derived from the microphysics of the individual unit. For example, psychology cannot be reduced to the behaviour of the individual neuron, just as the temperature of a gas cannot be derived from the trajectory of one molecule. Instead these systems must sometimes be treated on their own terms. The Newtonian physics that describes the air flow around an airplane’s wing is not concerned with the quantum chromodynamics of quarks. The science that deals with mental illness operates without a complete picture of how neural synapses work. In other words, systems operate at their own level of abstraction, which overwhelms the particularities of the unit. This is the case with socio-economic issues – although it’s true that society is made of evolved animals subject to biological forces, these naturalistic particularities are overwhelmed by socio-economic feedback loops.
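The gas example can be sketched in a few lines (a toy calculation, not real statistical mechanics – units and physical constants are ignored): the "temperature", an average over the whole system, is stable across completely different microstates, while any individual molecule tells you essentially nothing.

```python
import random

# "Temperature" as a macroscopic average: proportional to mean squared
# molecular speed (constants dropped for the sake of the sketch).
def temperature(velocities):
    return sum(v * v for v in velocities) / len(velocities)

random.seed(0)
# Two gases that disagree about literally every molecule...
gas_a = [random.gauss(0, 1) for _ in range(100_000)]
gas_b = [random.gauss(0, 1) for _ in range(100_000)]

# ...yet agree at the macro level to within a fraction of a percent.
print(temperature(gas_a), temperature(gas_b))
print(gas_a[0], gas_b[0])  # single molecules: no agreement at all
```

The macro quantity is robust precisely because it averages away the microphysics – which is the sense in which the emergent level "overwhelms the particularities of the unit."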

Let’s use complexity science against Peterson as an example. In his debate with Cathy Newman he argued that one of the reasons the gender pay gap persists is women’s agreeableness. According to him, women tend to be more agreeable, and this negatively affects their earning potential in highly competitive workplaces. I also found out, in an interview he did with Stefan Molyneux, that Peterson associates agreeableness with maternal instinct – ergo it is somehow biologically hardwired into the female psyche. The controversial point is not so much whether women are more agreeable or not, but whether that agreeableness is a function of biology. How on Earth would you even begin to prove, scientifically, that agreeableness is biologically hardwired? At most, you can run a study showing that gender and agreeableness are empirically correlated. Although, in the context of scientism, attributing agreeableness to some darwinistic child-rearing instinct “makes sense” in a shallow, common-sense sort of way, that does not mean such a theory can be proven scientifically. In contrast, a sociologist or anthropologist may document various gender-policing mechanisms acting as social feedback loops – women castigated for being combative (e.g. being called a bitch, caricatured as an evil manager, etc.) – which reinforce female agreeableness as a social strategy, yielding a plausible narrative for the sociological explanation. The point is this: similar to how the properties of individual neurons are buried within the emergent laws that constitute psychology, it could be that the individual biological wiring of the female psyche is overshadowed by the socio-economic feedback loops of class society.
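The underdetermination can be made concrete with a toy simulation – the mechanism and every number below are entirely made up, offered only as an illustration. Two groups start with identical innate "agreeableness" distributions, but only one is socially penalized for assertive behaviour; the familiar group-level gap emerges anyway, so a correlational study alone could never distinguish this world from the "hardwired" one.

```python
import random
from statistics import mean

random.seed(1)

def socialize(innate, penalized, rounds=50, step=0.05):
    """Innate disposition plus repeated social feedback (toy model)."""
    a = innate
    for _ in range(rounds):
        acted_assertively = random.random() > a
        if acted_assertively and penalized:
            # Castigation nudges behaviour toward agreeableness.
            a = min(1.0, a + step)
    return a

# Identical innate distributions; only group_x is policed.
group_x = [socialize(random.gauss(0.5, 0.1), penalized=True) for _ in range(2000)]
group_y = [socialize(random.gauss(0.5, 0.1), penalized=False) for _ in range(2000)]

print(mean(group_x), mean(group_y))  # the policed group ends up far more "agreeable"
```

Measured after socialization, the groups differ sharply even though biology (the innate draw) was identical by construction – the correlation is real, the naturalistic inference from it is not.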

Peterson also claimed that hierarchy is biologically coded into much of the animal kingdom, including humans. Therefore, he argued, the sociological explanation of hierarchies as historically contingent is incorrect, given that our evolutionary ancestors already enforced hierarchies (e.g. lobsters). However, the sociological explanation of modern power differentials is actually vindicated by the behaviour of early hunter-gatherer societies, which some could argue are devoid of the more complicated feedback loops that appear in complex, sedentary societies. Even if these hunter-gatherer societies may operate with “soft” hierarchies (e.g. the existence of chieftains, leaders, etc.), it would be ridiculous to put these dynamics in the same order of magnitude as the extreme power differentials existing in modern class society between a worker and a president or a CEO. Therefore, even if a “soft” hierarchy may be encoded in our biological wiring, it is completely overshadowed by the extreme power differentials arising out of socio-economic structures.

I hope that my humble effort at a synthesis may generate some interesting thoughts. I am a firm believer that socialists should justify their positions with concrete arguments rooted in existing scientific consensus; therefore the argument against “naturalism” must not only be philosophical, but also based on our empirical understanding of reality.


The Minimal Socialist State Versus Bloated Capitalism


Perhaps one of the largest propagandistic triumphs of capitalism is to equate socialism with inefficient, bureaucratic bloat. In contrast, capitalism is portrayed as a lean, efficient system. Many argue that the market, instead of depending on a slow-moving, centralized bureaucracy to produce and allocate the necessary goods, automatically balances supply and demand, with the price signal as the carrier of information on how to produce and distribute commodities. However, I believe that socialism, at least the version espoused by Marx, would be a free society with a very lean and bare-bones administrative apparatus (a minimal state). For socialism to be a qualitatively different stage of history from previous class societies, social structures need to be dictated by free time as opposed to the imperatives of slavery, survival and toil. In other words, the time outside the activities necessary for the survival of the human species must dominate the course of history. This freeing of human life from drudgery and misery requires a lean apparatus that curtails all the socially wasteful extra industries and infrastructures in order to progressively reduce the length of the work day until the eventual abolition of toil. In short, the socialist state must be minimal. First, I will present the arguments that portray socialism as bloated and inefficient; then I will argue why it is instead capitalism that is wasteful and swollen; and finally I will sketch some of the key attributes of this minimal socialist state.

At first glance, the right wing case against socialism sounds sensible, both philosophically and empirically. In the philosophical realm, it seems unlikely that a central planner can possibly have all the necessary expertise to know what is happening “on the ground”. In the empirical realm, the former socialist bloc was sluggish, bureaucratically bloated, and authoritarian. In contrast, market-based societies appear much more efficient and freer.

However, I believe these arguments, although they may sound plausible, are ultimately wrong. The reason they appear correct – given that many of the socialist states tended to be more authoritarian, inefficient and “backward” than the West – is a combination of factors: (i) pre-modern forms of life that existed in the underdeveloped countries where “socialism” took root, (ii) geopolitical configurations in which the periphery (and socialist states were peripheral) is in an inferior bargaining position, and (iii) the ideology of instrumental reason in the West – a rationality that values calculating the means over thinking about the ends.

In the first case (i), as I mentioned in a previous post, many of the problems identified with the existing “socialist” countries are not formally related to the idea of a planned economy, but are linked to social forms that predated “socialism”, sometimes by centuries. For example, clientelism is a great scourge in underdeveloped countries, where the informal exchange of favours and services between powerful agents undermines the transparent functioning of institutions. These problems predate capitalism and “socialism”, and were even formalized in ancient Rome. In the USSR, clientelism was evident in the way wealth was accumulated by the elite, with high-ranking bureaucrats exchanging favours and privileges at the expense of society at large. The opacity of institutions due to these webs of corruption and hierarchy also created feedback loops in which citizens no longer trusted formal mechanisms, creating the large-scale systemic problems that led to the USSR’s collapse. This generalized state of corruption also led various factions of the bureaucracy to scam and con each other, misrepresenting and exaggerating (e.g. how many widgets were produced in a factory) in order to lever a career advantage. The sum total of these dynamics led to massive wastefulness, scarcity of usable goods, and ultimately terror. The problem of corruption, wastefulness, and terror is common across much of the periphery (including peripheral capitalist states), and is not formally related to socialism.

The second case (ii) is very closely related to the first. Due to various historical factors, many of the countries that became “socialist” were peripheral societies (e.g. Cuba, Russia, etc.) that were economically subordinated to other, more powerful countries. Their lack of capital-intensive technologies and working institutions made them dependent on the core economies for technologies (e.g. engines, computers, medicine, etc.), turning them into bodegas to be ransacked for slave-like labour and cheap natural resources. This dependency not only ensured that the periphery (and thus many socialist states) lagged behind, but also created further feedback loops leading to other dysfunctions. In the case of the USSR, the state was forced to industrialize quickly in order to have the military capability to defend itself against a more powerful and hostile West. This fast, frenzied, and unscientific hyper-industrialization, which appeared during the first Five Year Plan, corrupted institutions through unrealistic targets that forced, for example, factory directors to inflate their numbers and share dishonest information.

The third case (iii), the focus on the most efficient way to achieve the means without thinking about the ends, distorts the discussion of what “efficiency” means. Capitalism is extremely good at intensive development, where a novel technology or service is made progressively cheaper and more advanced as competition forces firms to cut costs. This creates the “lean” and “efficient” perception of the West. However, the efficiency that leads to profit – the optimization of processes related to the creation of arbitrary commodities – is not necessarily the socially preferable form of efficiency. For example, a key socialist demand is the shortening of the work day until its eventual abolition. Defenders of the market, such as Keynes, thought that capitalism by its own devices would create a shorter work day. However, the eight-hour work day has persisted in the United States and Canada for about a hundred years, even though the economy grew exponentially in that same century. Other more banal examples of capitalist “inefficiency” are the coexistence of vacant buildings with homelessness, and of long work days with unemployment. In short, capitalist efficiency ultimately concerns itself only with profit, and although this logic can lead to various socially beneficial by-products, such as cheap computers and an abundance of calories, capitalism is not concerned with the fulfilment of social need; therefore it cannot reduce the work day, create sustainable and psychologically beneficial urban spaces (as opposed to private condo towers and desolate suburban sprawl), or deal effectively with climate change. Ultimately, capitalism, with its creation of arbitrary, socially unnecessary markets and industries, becomes increasingly bloated. For example, a large percentage of GDP in core economies is related to Finance, Insurance, and Real Estate (FIRE), a sector that would be rendered irrelevant by the abolition of private property.

Ultimately, the minimal socialist state would shrink the world’s administrative apparatuses (the bureaucracies of corporations, the executive and judicial powers, the administrative bloat of universities) through (a) the abolition of private property, and (b) planning. The fact that private property is mediated through contracts, whether digital (e.g. ownership of stocks) or analog (the paperwork of a house), inevitably creates an administrative bloat of gentry-scholar-like functionaries, in both the private and public spheres, who have to deal with the regulations, lawyering, and legalese of these contracts. In addition, the increasingly fractal and abstracted labyrinth of private property creates an informational complexity in the form of FIRE, a socially unproductive sector that is nonetheless necessary for capital to “grease its wheels” – bailing out companies through loans, stimulating investment, moving capital shares across thousands of miles at a fraction of the speed of light through optical fiber cables, etc. Once socialism abolishes private property, this informational complexity will be greatly reduced, transferring the world’s labour to socially necessary tasks.

Planning will be the central engine of the minimal socialist state. Through a combination of accountable “planetary” central planners at the large scale, and machine learning algorithms and local committees at the granular scale, industries deemed socially necessary (e.g. agriculture, some IT, medicine, etc.) could be preserved and social waste eliminated. This would create a situation where only the minimum tasks required for comfortable survival are the domain of labour and the bare-bones state. Once these socially necessary tasks are recognized (through a combination of scientific planning and grassroots consensus), work will be spent only on these socially necessary activities, in contrast to capitalism’s arbitrary tasks that have enslaved humanity to toil for centuries. Once labour-time is minimized, the majority of waking hours will be spent not in the grind forced upon us by survival, but in free time. Socially necessary labour will be rotated among all citizens, and will be reduced to the social equivalent of “cleaning your room.” Thus socialism will create a different type of efficiency from capitalist optimization. Although socialism may not produce the most effective janitor or the most optimized smartphone, that does not imply it will be more bloated, miserable, and labor-intensive than capitalism. This emancipation of humanity from labor is the hidden potential of modernity, and would usher in, for the first time in history, a society shaped by free time, not the constraints of survival.
