Crisis Theory: The Decline of Capitalism as the Growth of Expensive and Fragile Complexity

It’s an empirical fact that the economy experiences business cycles, in other words, oscillations between booms and busts. Furthermore, many argue that the economy is experiencing a secular decline. For example, productivity growth across industries has slowed since the 1970s. What are the mechanisms behind these instabilities and this decline? What would an accurate theory of economic crisis look like?

[Figure: the slowdown in productivity growth since the 1970s.] Source: https://www.brookings.edu/wp-content/uploads/2016/09/wp22_baily-montalbano_final4.pdf

I believe that capitalism is both unstable and vulnerable to business cycles, and is also experiencing secular decline. The source of these trends is a set of feedback mechanisms, structural to capitalism, that encourage the growth of fragile and expensive complexity (logistics, rent-seeking, finance, etc.) in the pursuit of short-term profits. Furthermore, this complexity becomes increasingly separated from the human labor (see Marx on the labor theory of value) that creates wealth directly or indirectly (e.g. the factory worker, the doctor, the teacher), which means a larger ratio of overhead to wealth creation. In the long run, the growth of expensive complexity means both declining productivity and fragility to the business cycle.

I will first review some of the existing theories that explain this secular decline and the nature of business cycles. Then I will present my own crisis theory, which addresses the weaknesses of these existing models.

The mainstream economic approach to the business cycle is the so-called Dynamic Stochastic General Equilibrium (DSGE) model. In this model, mainstream economists assume the world economy is more or less in equilibrium (e.g. markets clear, and agents maximize their utility functions) until a random shock appears, for example, a sudden rise in oil prices. The nature and the source of the shock are irrelevant in this model – the DSGE approach only dictates that random shocks are an economic reality. Thus the task of the economist reduces to studying how the structures of the economy amplify/dampen and propagate the shock. For example, after the 2008 crash, economists began taking seriously how aspects of the financial sector may amplify these shocks (they call these financial frictions). It appears mainstream economists have achieved a consensus only on business cycle modelling, not on the secular decline of the economy.
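To make the shock-propagation idea concrete, here is a minimal sketch – a toy linear impulse response, not an actual DSGE model – where a single persistence parameter stands in for the economy’s internal structure, and the economist’s question is how long a one-time shock lingers:

```python
import numpy as np

# Toy impulse response in the spirit of the DSGE shock story (an
# illustration, not a real DSGE model): output's deviation from
# equilibrium follows x_t = rho * x_{t-1} + shock_t, where rho stands
# in for the economy's internal propagation structure.

T = 100
shocks = np.zeros(T)
shocks[10] = 1.0  # a one-time unit shock (e.g. an oil price jump) at t = 10

def impulse_response(rho, shocks):
    """Propagate shocks through a linear economy with persistence rho."""
    x = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        x[t] = rho * x[t - 1] + shocks[t]
    return x

# An economy with weak propagation dampens the shock quickly; one with
# amplifying structures (e.g. financial frictions) lets it persist.
for rho in (0.3, 0.95):
    x = impulse_response(rho, shocks)
    print(f"rho = {rho}: deviation 20 periods after the shock = {x[30]:.4f}")
```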

Hyman Minsky was an important heterodox thinker who elaborated a crisis theory, and he has recently become widely cited because of the 2008 crash. Minsky argued that crisis emerges from endogenous activities in the financial sector. He explained that in booming times, banks and other financial institutions become “euphoric” and begin lending and borrowing quantities that in bust periods they would find too risky. Given the overconfidence of these financial actors, a speculative investment bubble develops. At some point, the debtors cannot pay back, and the bubble bursts, creating a crisis.

The more orthodox of the Marxist approaches to crisis is famously referred to as the theory of the tendency of the rate of profit to fall (TRPF). According to Marx, capitalism experiences a secular decline in the rate of profit as work is automated away by machines: fewer workers are employed, which means less human labor to exploit. As production becomes more optimized, machinery and raw materials absorb more of the costs of production, and fewer workers are employed due to rising productivity. In Marxist analysis, profit comes from the exploitation of workers, that is, from paying workers less than the value created by the hours they worked. So as machinery automates more of the labor, the rate of profit declines. According to Marx, in a hypothetical scenario where all labor is automated by robots, the capitalist wouldn’t profit at all!
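In the standard Marxist notation – constant capital $c$ (machinery and raw materials), variable capital $v$ (wages), and surplus value $s$ – the TRPF argument compresses into a single formula for the rate of profit:

$$ r = \frac{s}{c+v} = \frac{s/v}{c/v + 1} $$

Holding the rate of exploitation $s/v$ fixed, $r$ falls as mechanization raises the organic composition $c/v$; and in the fully automated limit $v \to 0$, there is no labor left to exploit, so $s \to 0$ and the profit rate collapses with it.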

Finally, there are some crisis theories where more heterodox Marxist models and pseudo-Keynesian theories converge. Thomas Palley recently compared Foster’s Social Structure of Accumulation (SSA) theory to his own theory, Structural Keynesianism. Both Palley and Foster argue that the decline of economic growth is related to a stagnation of wages. If wages are stagnant, the aggregate demand necessary for growth goes unmet, because workers don’t make enough to purchase commodities. They argue that this economic stagnation is related to the neoliberal growth model adopted since the 1970s. According to Palley, the only mechanisms that kept the economy from crashing were the overvaluation of assets, and firms filling the hole in aggregate demand by taking on more debt. However, this excess of credit led to financial instabilities that eventually crashed the economy in 2008.

In my opinion, all these approaches are flawed. For one, the mainstream approach under-theorizes the sources of fragility and the secular decline in the rate of profit. It is true that much of crises/business cycles has to do with the fragility of the capitalist economy to volatility, which mainstream models describe. However, an important part of the story is why the capitalist system is fragile to these shocks. In fact, mainstream economists showed their ignorance with their inability to forecast the effects of the 2008 recession. After the crash, mainstream economists implicitly conceded to Minsky’s heterodox argument that the financial sector creates fragility. For example, only after the crisis did mainstream economists include in their DSGE models the financial instabilities Minsky described. Furthermore, it appears mainstream economics doesn’t really have a theoretical consensus on the secular decline of capitalism.

The problem with the Minskyan approach is that it is severely limited – for one, it identifies only one source of fragility, the financial sector. It also does not theorize why the financial sector is “less real” than, for example, the manufacturing sector – which Minsky implicitly assumes when he attributes fragility only to the financial part of the economy. Because of this limited theorization, Minsky also fails to explain the secular decline of the rate of profit, content with explaining only the business cycle.

The greatest flaw of the “orthodox” Marxist approach is its dependence on pseudo-Aristotelian arguments. The TRPF model is based on a logical relation between very specific variables: the costs of raw materials and machinery (constant capital), the costs of human labor (variable capital), and the value extracted from the exploitation of human labor (surplus value). This spurious precision and logicality is unwarranted, as the capitalist system is too complex and stochastic for the behaviour of crisis to be described by a couple of logical propositions. One has to take into account the existence of instabilities and shocks, as the mainstream economists do. However, Marx still had a key insight: the aggregate wealth of the world must be sourced in human labor that produces use values. The source of wealth is dentists doing dentistry and construction workers doing construction work, not the dentist trying to make money by trading in the stock market. Furthermore, Marx identified a secular trend of a declining rate of profit, which is missing in other contemporary accounts.

Finally, Palley’s approach seems too politically motivated. To him, the stagnation of the economy is a matter of policy – of statesmen adopting the “wrong” set of regulations/deregulations. If politicians were just “objective” and followed Palley’s set of ideas, then crisis and decline could be averted! To Palley, the neoliberal phase was a matter of certain “top-down” policies rather than endogenous/spontaneous fragilities and instabilities inherent to the capitalist system. In my opinion, it’s impossible to disaggregate what is political and what is inherently structural in the secular decline of capitalism, since the whole world economy is more or less neoliberalized at this moment, so there is no alternative to compare it to at present. So it seems to me a just-so story that is projected from the present onto the past and impossible to prove empirically.

One of the issues I have with the “left” theories of crisis, such as the Keynesian and Marxist ones, is that they don’t take instability, uncertainty, stochasticity, and complexity seriously. Instead, proofs and discussions are reduced to Aristotelian logic-chopping over a few variables. In the Keynesian case, it’s aggregate demand; in the Marxist case, the variables are surplus value, constant capital, and variable capital. A system that pulsates with billions of people is reduced to the logic-chopping of a few variables. Instead, we must devise a more holistic view of the capitalist world-system, taking into account its nonlinearities and fragilities.

The theories outlined above each contain part of the truth, so we can use some of their elements to synthesize a model of crisis that contains the following: (i) economic fragility to instabilities and shocks, (ii) endogenous sources of this fragility, (iii) a theory of the secular decline of the rate of profit. The concepts ultimately uniting these three points are fragility/nonlinearity and increasingly expensive complexity. For example, Minsky, by addressing the fragility in the financial sector, also implicitly points to a theory of degenerative complexity, where the financial sector acts as a complex, expensive, and fragile overhead that sits on top of the “real economy”.

We can use Taleb’s definition of fragility to make the concept more precise. Taleb mathematically defines fragility as harmful, nonlinear sensitivity to volatility. For example, a coffee cup can withstand stress up to a certain threshold; above that threshold, any additional stress will simply shatter it. Fragility is a nonlinear property because the cup does not wear and tear in proportion to stress. Instead, it sustains stress until the threshold is reached, and then suddenly shatters. In other words, the harm from stress accelerates rather than accumulating linearly, with stress below the threshold inflicting negligible damage.
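A minimal sketch of this idea (the threshold and harm values are illustrative assumptions, not measurements) shows the signature of fragility: doubling the volatility of stress much more than doubles the expected harm.

```python
import numpy as np

# Minimal sketch of fragility as nonlinear response to volatility.
# A stylized "coffee cup": stress below the threshold does negligible
# damage; stress above it shatters the cup. Threshold and harm values
# are illustrative assumptions.

rng = np.random.default_rng(42)
THRESHOLD = 3.0   # stress the cup can absorb
SHATTER = 100.0   # harm once the threshold is exceeded

def expected_harm(sigma, n=200_000):
    """Mean harm under Gaussian stress with volatility sigma."""
    stress = np.abs(rng.normal(0.0, sigma, n))
    return np.mean(np.where(stress > THRESHOLD, SHATTER, 0.0))

# The response to volatility is convex: each doubling of sigma
# multiplies the expected harm by far more than two.
for sigma in (1.0, 2.0, 4.0):
    print(f"sigma = {sigma}: expected harm = {expected_harm(sigma):6.2f}")
```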

Similarly, the capitalist world system probably has many thresholds, many of them currently unknown. This is because the capitalist world system is complex and nonlinear. It is complex because it is made of various interlocking parts (firms, individuals, governments, etc.) that form causal chains connecting across planetary scales. It is nonlinear because the behaviour of the system is not simply the “sum” of the interlocking parts, as the parts depend on each other. Therefore one cannot really study the individual components in isolation and then understand the whole system by adding these components together. In other words, the interdependence of the units within capitalism makes the system nonlinear. Furthermore, nonlinear systems are frequently very sensitive to changes in their variables: surpassing certain thresholds can make the system exhibit abrupt changes and discontinuities that often manifest as crisis. These abrupt changes caused by the crossing of a threshold are a common mathematical property of nonlinear systems. Fragility therefore correlates with nonlinearity, abrupt jumps/shocks, and complexity.
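For a concrete (textbook, non-economic) example of such a threshold, consider the standard saddle-node system dx/dt = r + x - x^3: as the parameter r drifts smoothly upward, the state tracks it gradually until a critical value near 0.385, where it jumps abruptly to a different branch.

```python
import numpy as np

# A textbook nonlinear system with a tipping point (saddle-node
# bifurcation): dx/dt = r + x - x**3. The state tracks the lower
# stable equilibrium as r drifts smoothly upward, then jumps
# abruptly to the upper branch once r crosses ~0.385.

x = -1.0   # start on the lower stable branch
dt = 0.01
for r in np.linspace(-1.0, 1.0, 2001):
    for _ in range(200):              # let the state relax at each r
        x += dt * (r + x - x**3)
    if abs(r - round(r, 1)) < 1e-9:   # report at r = -1.0, -0.9, ..., 1.0
        print(f"r = {r:+.1f}   x = {x:+.3f}")
```

The printed state barely moves between successive values of r until the threshold is crossed, at which point it flips to the other branch almost at once – an abrupt discontinuity produced by a perfectly smooth cause.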

However, it is not enough to say that the capitalist world system is fragile because it is nonlinear. The point is that the capitalist world system structurally generates feedback loops that lead to the accelerated creation of endogenous fragilities. The frenetic pursuit of short-term profits in increasingly competitive contexts leads to the creation of fragile, nonlinear complexity. This is because a firm must invest in ever more expensive research, infrastructure, and qualified personnel to generate innovation that leads to profit in the short term, as many of the “low-hanging fruits” have already been plucked. So capitalism leads to random “tinkering” by firms and institutions to produce profit, often by adding ad-hoc complexity. This complexity may generate short-term profits, but it is expensive in the long term. Joseph Tainter tries to measure the productivity of innovation by looking at how many resources go into creating a patent. For example, here is a plot showing how the ratio of patents to GDP and to R&D expenditure has declined since the 1970s:

 

[Figure: patents per GDP and per R&D expenditure, declining since the 1970s.] Source: https://voxeu.org/article/what-optimal-leverage-bank

Another marker of increasingly expensive complexity is the number of people required to create a patent:

[Figure: researchers per patent, rising over time.] Source: https://onlinelibrary.wiley.com/doi/full/10.1002/sres.1057

A very common and well-studied example of this nonlinear complexity is the financial system. The financial system is an example of the growth of complexity in aid of the profit motive. Cash flows are generally too slow, and cash reserves too low, to cover the capital required to start firms or to add a layer of complexity required for more profitability, so agents must resort to credit and loans. In other words, the financial system acts as a fast, short-timescale distributive mechanism that funnels resources to banks, firms, and individuals that require quick access to capital in spite of low cash flows. Without the financial system, growth would be much lower, because access to capital could only be facilitated through cash flows. However, as Minsky noted decades ago and mainstream economics emphasizes now, the financial system is extremely unstable, complex, and nonlinear, and therefore fragile. Here is a figure showing how the “leverage ratio” of UK banks, which is roughly the ratio of debt to equity, grew exponentially from the 1880s to the 2000s – in other words, banks depend on loans/credit in order to have fast access to capital.

[Figure: leverage ratio of UK banks, 1880s–2000s.] Source: https://voxeu.org/article/what-optimal-leverage-bank
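To see why a high leverage ratio is fragile, here is a minimal balance-sheet sketch (stylized, with illustrative numbers): equity is the thin buffer between assets and debt, so a small percentage fall in asset values is amplified into a large percentage loss of equity.

```python
# Minimal sketch of why high leverage makes a bank fragile.
# Stylized balance sheet: assets = debt + equity, and the leverage
# ratio (as roughly defined above) is debt / equity. Any fall in
# asset values is borne entirely by equity.

def equity_loss(leverage, asset_shock):
    """Fraction of equity lost when asset values fall by asset_shock.

    With debt = leverage * equity, assets = (leverage + 1) * equity,
    so a loss of asset_shock * assets equals
    asset_shock * (leverage + 1) in units of equity.
    """
    return asset_shock * (leverage + 1)

for leverage in (2, 10, 30):
    loss = equity_loss(leverage, 0.04)  # a mere 4% fall in asset values
    status = "INSOLVENT" if loss >= 1 else f"{loss:.0%} of equity wiped out"
    print(f"leverage {leverage:>2}x: {status}")
```

At 2x leverage the bank shrugs off the shock; at 30x, the same shock leaves it insolvent. The nonlinear threshold here is insolvency itself, which is why the exponential growth of leverage in the figure above is a growth of fragility.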

That the addition of complex overhead is inversely related to growth has been empirically observed in various parts of capitalism. Some examples: the cost disease associated with industries like education and healthcare, the admin bloat in education and healthcare, the stagnation of productivity across virtually all industries including manufacturing, and the stagnation of scientific productivity in spite of exponential growth in the number of scientists and fields.

Furthermore, capitalism encourages rent-seeking and expensive complexity even when there is no benefit to wealth production for the economy in general. This rent-seeking scenario is probably the case for admin bloat at the universities. With admin bloat, there is a transfer of wealth from society to certain sectors of the university, but no obvious economic benefit for society in general. This is in contrast to traditional, profitable industries, where profit leads to capital valorization through the reinvestment of that profit.


As noted in a previous post, there is also a secular degeneration of science accompanying the secular decline of capitalism. To summarize that post: as informational complexity grows at a faster rate than empirical validation and knowledge production, an informational bloat of unverified scientific theories gets created. An obvious example is the complex bloat of theoretical physics models that predict all sorts of new particles, in spite of the fact that the Large Hadron Collider, a multibillion-dollar experiment, has failed to confirm any of them. So you have a whole layer of professionals who are experts only in unverified/degenerative theories, and these professionals collect large salaries in spite of contributing to neither economic nor epistemic growth. Another example of a degenerative profession is economics. Judging from the stagnating productivity across most industries, we can probably assume that this caste of degenerative professionals is rampant across all corners of capitalism. Together with “degenerative” experiments, they add expensive and fragile complexity to capitalism.

[Figure: diminishing returns in science.] Source: https://blogs.scientificamerican.com/cross-check/is-science-hitting-a-wall-part-1/

Finally, as complexity grows, there is an increasing dislocation between abstracted logistical, degenerative, and “scientific” complexity and the human labor that creates the wealth. A very good example is finance. To paraphrase and elaborate on Taleb: the wealth of the world is created by dentists doing dentistry and construction workers doing construction work, not by the dentist trying to become rich by trading their savings in the financial market. This is where Marx becomes relevant – the wealth of society comes from human labor, not from the transfer of wealth through administrative and accounting tricks, or through the circulation of financial instruments. This bloated complexity is required for the functioning of capital because of financial, accounting, and logistical constraints. Much of this complexity acts as an overhead for the world-economy that is required for the survival of capital itself, but it does not necessarily create socially necessary wealth. An example of the fragility of this separation between wealth creation and complex abstraction is the existence of speculative bubbles. Due to the overconfidence of the financial industry, assets are often overvalued, and at some point their value collapses, as the dislocation between the real and financial economy becomes unsustainable. This financial instability was described by Minsky and is now understood by mainstream economists, who incorporate it into their models.

Here we can begin to sketch a theory of the secular decline of capitalism. First, there is a secular increase of fragile, nonlinear complexity, driven by the ad-hoc tinkering of firms/institutions pursuing short-term profits at the expense of fragility. Furthermore, much of this expensive complexity is due to rent-seeking, where specialists trained in degenerative methods that add no obvious knowledge/efficiency self-reproduce and multiply: string theorists, economists, university admins, healthcare admins, etc. In the long run, all this complexity created for short-term profits becomes increasingly expensive, leading to ever slower productivity growth (GDP growth per labor hour). Part of the lowering of productivity is the increasing dislocation between the human labor that produces wealth and an abstracted layer of researchers, administrators, managers, etc. Furthermore, not only is there a secular decline of the economy, but there are also increasing fragilities and instabilities, as the bloated complexity is very nonlinear, given that it couples agents across planetary scales, such as how the financial industry transcends national economies. So the world economy becomes increasingly vulnerable to shocks, due to nonlinearities (caused by interdependencies) that lead to abrupt changes. These instabilities and fragilities give rise to the so-called business cycle.

In conclusion, a socialist theory of crisis should begin by looking at the economy as a whole, taking into account its instabilities and fragilities. In my opinion, the methodologies of the various Keynesian and Marxist schools are wrong because they pretend to have identified a couple of important variables (e.g. aggregate demand, organic composition of capital) and then logically derive a theory of crisis from those variables. However, because the economic system is extremely complex and nonlinear, these theories probably amount to just-so stories: the mechanisms behind the instabilities of capitalism are probably very varied (and many of them unknown), and therefore cannot be pinpointed to a few specific sources. A better approach to crisis theory is to analyze how capitalism creates endogenous feedback loops that lead to fragility, through generalized and socially unnecessary nonlinearities and complexities. This nonlinearization and complexification is imposed in order to pursue short-term profits, at the expense of long-term productivity. Moreover, another important issue is how a large part of this complexity becomes increasingly dislocated from wealth-creating labor – such as the dislocation between administrators and professors, or between the financial sector and the real economy.

I am confident many of the theories presented in this article can be both quantified and verified against empirical data in a much more rigorous way than done here. But alas, there isn’t an eccentric millionaire backing this research program😞.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.


Ergodicity as the solution for the decline of science

[Image: Maxwell’s demon.]

In a previous post I explored the decline of science as related to the decline of capitalism. A large aspect of this decline is how the increase of informational complexity leads to diminishing returns in knowledge. For example, the last revolution in physics happened roughly one hundred years ago, with the advent of quantum mechanics and relativity. Since then, the number of scientists and fields has increased exponentially, and the division of labor has become increasingly complex and specialized. Yet that billion-dollar-per-year experiment, the Large Hadron Collider, which was created to probe the most fundamental aspects of theoretical physics, has failed to confirm any of the new theories in particle physics. The decline of science is coupled to the decline of capitalism in general, as specialist and institutional overhead increases exponentially across industries while GDP growth has been sluggish since the 1970s.

Right now, across scientific fields, there is increasing concern about the overproduction of “bad science”. Recently the medical and psychological sciences have been making headlines because of their high rates of irreproducible papers. Even in the more exact sciences there is a stagnant informational bloat, with a flurry of math bubbles, theoretical particles, and cosmological models inundating the peer-review process, in spite of billion-dollar experiments like the Large Hadron Collider failing to confirm any of them, and with no scientific revolution (the last one was 100 years ago) on the horizon.

There is no shortage of proposed solutions to this perceived problem. Most of them are simply suggestions to make the peer-review process more rigorous and to refine the statistical techniques used for analyzing data: for example, using Bayesian statistics instead of frequentism, encouraging the reproduction of results, and finding ways to constrain “p-value hacking”. Some bolder writers argue that there should be “interdisciplinarity”, or that scientists should talk more to philosophers, but usually these calls for “thinking outside the box” are very vague and broad.

However, most of these suggestions would simply exacerbate the problem. I would argue that the bloat of degenerative informational complexity is not due to lax standards. To give an example, let’s analyze the concept of p-value hacking. A common heuristic in the social sciences is that for a result to be significant, it should have a p-value of less than 0.05. In layman’s parlance, this implies that your result has only a 5 percent probability of being due to chance (not the exact definition, but it suffices for this example). So now you have established a “standard” that can be gamed the same way lawyers game the law. This creates a perverse incentive for researchers to find all sorts of clever ways of “p-hacking” their data so that it passes the standard. P-hacking ranges from conscious fraud, like excluding the data that raises the p-value (high p-values mean your results are due to chance), to unconscious biases, like ignoring certain data points because you convince yourself they are measurement errors, in order to protect your low and precious p-value.
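Here is a minimal simulation of how gameable that standard is (hypothetical subgroups, pure-noise data): with no real effect anywhere, scanning enough arbitrary subgroups will usually turn up at least one “significant” result.

```python
import numpy as np
from scipy import stats

# P-hacking sketch: treatment and control data are pure noise, so no
# real effect exists. But a researcher who slices the data into enough
# arbitrary subgroups (by age, gender, weekday...) and tests each one
# will usually find some comparison with p < 0.05 to report.

rng = np.random.default_rng(1)
n_subjects, n_subgroups = 40, 20

treatment = rng.normal(0.0, 1.0, (n_subgroups, n_subjects))
control = rng.normal(0.0, 1.0, (n_subgroups, n_subjects))

pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treatment, control)]

print(f"smallest p-value over {n_subgroups} tests of pure noise: {min(pvals):.4f}")
print(f"'significant' subgroup findings: {sum(p < 0.05 for p in pvals)}")
# With 20 independent tests of nothing, the chance of at least one
# p < 0.05 is 1 - 0.95**20, roughly 64%.
```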

The more rigid rules a system has, the more is invested in “overhead” to regulate those rules and game them. This is intuitively grasped by almost everyone – hence the standard resentment against bureaucrats who take the roundabout and sluggish way to accomplish anything. In the sciences, once an important study/experiment/theorem generates a new rule, or “methodology”, perverse incentive loops form in which scientists and researchers use this “rule” to create paper mills, which in turn are used to game citation counts. Instead of earnest research, you get an overproduction of “bad science” that amounts to the gaming of certain methodologies. String theory, which can be defined as a methodology, was established as the only game in town a couple of decades ago, which in turn constrained young theoretical physicists into investing their time and money in gaming that informational complexity, generating even more complexity. Something similar happens in the humanities, where a famous (usually French) guy establishes a methodology or rule, and the Anglo counterparts game the rule to produce concatenations of polysyllabic words. Furthermore, this fetish of informational complexity in the form of methods and rules creates a caste of “guild keepers” who are learned in these rules and accrue resources and money while excluding anybody who isn’t learned in these methodologies.

This article serves as a “microphysical” account of what leads to the degenerative informational complexity and diminishing returns I associated with modern science in my previous post. But what would be the solution to such a problem? The answer is one word: ergodicity.

As said before, science has become more specialized, complex, and bloated than ever before. However, just because science has grown exponentially doesn’t mean it has become more ergodic. By ergodic I specifically mean that all possible states are explored by a system. For example, a die that is thrown a large number of times is ergodic, given that the system accesses every possible face of the die. Ergodicity has a long history in thermodynamics and statistical mechanics, where physicists often have to assume that a system has accessed all its possible states. This hypothesis allows physicists to calculate quantities like pressure or temperature by making theoretical approximations of the number of states a system (e.g. a gas) has. But we can use the concept of ergodicity to analyze social systems like “science” too.
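A minimal sketch of the dice example: a fair die eventually visits every face, so its long-run time average matches the average over the state space; a die loaded to land on only two of its faces never explores the rest, no matter how long you throw it.

```python
import numpy as np

# Ergodicity sketch: a fair die visits all six states, so its time
# average converges to the state-space average (3.5). A loaded die
# confined to faces {1, 6} never explores the other states at all.

rng = np.random.default_rng(7)
throws = 100_000

fair = rng.integers(1, 7, throws)                 # all six faces possible
loaded = rng.choice([1, 6, 6, 6, 6, 6], throws)   # only faces 1 and 6

print("fair die, faces visited:  ", sorted(set(fair.tolist())))
print("loaded die, faces visited:", sorted(set(loaded.tolist())))
print(f"fair die time average: {fair.mean():.3f} (state-space average: 3.5)")
```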

If science were ergodic, it would explore all possible avenues of research, and individual scientists would switch research programs frequently. Now, social systems cannot be perfectly ergodic, as social systems are dynamic and the “number” of states therefore grows (e.g. the number of scientists grows). But we can treat ergodicity as an idealized heuristic.

The modern world sells us ergodicity as a good thing. Often, systems are described as ergodic as a defence against detractors. For example, when politicians and economists claim that capitalism is innovative, and that it gives every worker a chance at becoming rich (and every rich person a chance of becoming poor), they are implicitly describing an ergodic system. Innovation implies that entrepreneurs experiment with and explore all possible market ideas so that they can discover the best ones. Similarly, social mobility implies that a person has a shot at becoming rich (or, if already rich, becoming poor) if they live long enough. In real life, we know the ergodic approximation is really poor for capitalism, as the rich tend to stay rich and the poor tend to stay poor. We also know that important technological innovation is often carried out by public institutions such as the American military, not the private sector. Still, the reason ergodicity is invoked is that it is viscerally appealing. We want “new blood” in fields and niches, and we resent bureaucrats and capitalists insulated from the chaos of the market for not giving other deserving people a chance.
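How poor the ergodic approximation is for wealth can be shown with the well-known multiplicative coin-toss illustration (a toy process, not a model of any actual market): the average over all players grows every round, yet almost every individual player is ruined, because time averages and ensemble averages disagree – the very definition of non-ergodicity.

```python
import numpy as np

# Non-ergodic wealth sketch (the standard multiplicative coin toss):
# each round, wealth rises 50% on heads and falls 40% on tails.
# The ensemble average grows 5% per round, but the typical individual
# trajectory shrinks, since the per-round time-average growth factor
# is sqrt(1.5 * 0.6) = sqrt(0.9), roughly a 5% loss per round.

rng = np.random.default_rng(3)
people, rounds = 10_000, 100

factors = rng.choice([1.5, 0.6], (people, rounds))
wealth = factors.prod(axis=1)   # each person's final wealth, starting from 1

print(f"mean wealth across people: {wealth.mean():.2f}")   # dragged up by a lucky few
print(f"median wealth:             {np.median(wealth):.4f}")
print(f"fraction who lost money:   {(wealth < 1).mean():.1%}")
```

The mean stays above the starting wealth while the median collapses and most players lose: the good outcomes belong to a tiny lucky minority, which is exactly the non-ergodic pattern of “the rich stay rich”.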

One of the reasons ergodicity is appealing is that there is really no recipe for innovation except experimentation and the exploration of many possible scenarios. That’s why universities often have unwritten rules against hiring their own graduate students into faculty positions – they want “new blood” from other institutions. A common (though incorrect, as described above) argument against public institutions construes them as dull and stagnant at generating new products or technologies compared to the more “grassroots” and “ergodic” market. So I think there is a common intuition among both laymen and many professionals that the only sure way of finding out whether something “works” is to try different experimental scenarios.

Now let’s return to science. The benefit of ergodicity in science was indirectly supported by the infamous philosopher Feyerabend. Before him, philosophers of science tried to come up with recipes for what works in science and what doesn’t. An example is Popper, who argued that science must be falsifiable. Another is Lakatos, who came up with heuristics for what causes research programs to degenerate. Yet Feyerabend argued that the only real scientific method is “anything goes” – he termed this attitude epistemological anarchism. He argued that scientific breakthroughs don’t usually follow any hard-and-fast rules, and that scientists are first and foremost opportunists.

Feyerabend got a lot of flak for these statements, his detractors accusing him of relativism and anti-scientific attitudes. Feyerabend didn’t help himself, because he was often inflammatory on purpose, seeking to provoke a reaction (for example, putting astrology and science on the same epistemic level). However, I would say that in some sense he was protecting science from dogmatic scientists. To use the terminology sketched in the previous paragraphs: he was ultimately arguing for a more ergodic approach to science, so that it doesn’t fall into the dogmatic trap.

This dogmatic trap was already explained in previous paragraphs: the idea that more methods, rules, divisions, thought policing, and rigour will always lead to good science. Instead, it leads to a growth of degenerative research that amounts to gaming certain rules. This in turn leads to the growth of degenerative specialists who are experts only in degenerative methods. Meanwhile, all this growth is non-ergodic, because it is organized around respecting certain rules and regulations, which constrains the exploration of all possible scenarios and states. It’s like loading a die so that the six always faces up, in contrast to allowing the die to land on all possible faces.

How can we translate these abstract heuristics of ergodicity into real scientific practice? The problem with much of the philosophy of science – whether made by professional philosophers or by professional scientists unconsciously doing philosophy – is that it looks at individual practice. It comes up with a laundry list of specific rules of thumb that an individual scientist must follow to make their work scientific, including certain statistical tests and reproducibility. However, the problems are social and institutional, not individual.

What is the social and institutional solution? Proposing solutions is harder than describing the problem. However, I always try to sketch a solution, because I think criticism without proposing something is somewhat cowardly – you avoid opening yourself up to criticism from readers.

The main heuristic for solving these problems should be collapsing the informational complexity in a planned, transparent, and accountable way. As mentioned before, this informational complexity is like a cancer that keeps growing, and its source is probably methodological dogmatism, where complex overhead becomes bloated as researchers find increasingly convoluted ways of “gaming” the rules. Here are some suggestions for collapsing complexity:

  1. Cut administrative bloat and instead have rotating academics fill the essential administrative postings.
  2. Get rid of the peer-review system, and instead use an open system similar to arXiv.
  3. Collapse some of the academic departments into bigger ones. For example, much of theoretical physics has more in common with mathematics and philosophy than with the more experimental parts of physics, so departments should be reorganized so that people with more in common interact with each other.
  4. Create an egalitarian funding scheme, based more on the division between theory and experiment than on divisions between departments. Everyone in the same category should receive the same minimum amount of funding, with funding quantities based on how many resources a specific type of work realistically requires. For example, a theoretical physicist who uses only pencil, paper, and their personal computer has a lot in common, financially, with a sociologist who does the same.
  5. Beyond the minimum funding outlined above, excess funding should be allocated democratically, with input from outside the profession.
  6. Abolish the distinction between tenured professor and adjunct. Instead, everyone should teach.

Hopefully the destruction of admin bloat and of the adjunct/tenure distinction would release resources that could be spent on hiring researchers, instead of relying on bad heuristics such as publication and citation counts as filters for new hires.

Many of these recommendations cannot be taken in the abstract, since the University is intimately coupled to society and the economy as a whole. For example, part of the admin bloat comes from legal liabilities and from the state offshoring some of its responsibilities to universities. Number 6 would require a radical reconfiguration of society in general. And number 5 couldn’t be enacted today, since “democratic” institutions are composed of non-ergodic, technocratic lifers.

This takes me to the political conclusion that the problems of science should be seen as the problems of society as a whole. The only sure way to find solutions to problems is an ergodic approach. Right now, the state is non-ergodic, that is, it is occupied and controlled by political and bureaucratic lifers. These non-ergodic bureaucracies in turn generate informational complexity, as new regulations and “rules” are imposed by the same caste of degenerative professionals, which in turn requires even more complex overhead. Instead, the State (and in a socialist society, the means of production) should have a combination of democratic and sortition mechanisms that make it impossible for individuals to stay in power too long. This democratic vision should be supported by broad and free education programs that equip individuals with the knowledge required to rule themselves in a republican way. Not only does this method guarantee more equality, but it also turns society into a great parallelized computer that solves problems by ergodic trial and error, through the introduction of new blood, sortition, and democratic accountability.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.