The Rise of the Right Wing Is Not Due to the Working Class Because Workers Don’t Vote


A common and mistaken assumption among radicals is that right-wing parties win because of ideological trickery and lies: the electorate does not understand its own class interests and is bamboozled by smooth-talking politicians. For example, a popular idea about American politics is that poor whites tend to vote against their own interests, on the preconception that they make up part of the electorate of Trump and the GOP. Just recently in Ontario, Doug Ford, a millionaire, won the provincial election on a very vague platform that included lowering taxes and "anti-elitist" rhetoric, very similar to Trump's "drain the swamp" antics. Some pointed out the contradiction of the wealthy Ford running on an anti-elitist platform, seeing it as a form of ideological articulation and nothing else.

In general, there has been a rise of the right wing in elections across the developed West. A couple of high-profile examples are Trump, Brexit, and the recent German elections. Furthermore, fascistoid parties have recently taken power in some European countries, like Hungary and Poland. Superficially it may seem that these parties are the "will of the people", since they won by an arithmetic majority in democratic countries. For leftists this may seem hopeless, as it could be interpreted to mean that we lost the ideological battle, and that much of the Left's traditional demographic (e.g. workers) has fallen into reaction.

I find that these sentiments begin with the wrong (and liberal) idea: that the body of citizens is an amorphous, classless set of individuals that must be "won over" so that they do not turn right wing. Another iteration of the same argument is that many voters go against their "own interest" by voting for the right wing, as in the common archetype of the poor rural white who votes Republican.

The worst aspects of these assumptions appear in the mainstream of the Left, especially social democratic and "center left" parties. Since the electorate at first glance seems to swing conservative, many social democratic parties have swung to the right to win back some of that electorate. An interesting example is the rise of the center-left NDP (New Democratic Party) in Alberta, one of the most conservative provinces of Canada. Many militants in the federal NDP oppose the construction of new oil pipelines, for fiscal and environmental reasons; yet the Albertan NDP has taken a pro-big-oil stance in order to appease the seemingly conservative Albertan electorate. I am sure that the shift towards austerity politics of many mainstream social democratic parties is also related to this tailing of a supposedly conservative electorate.

However, once we look at the hard data with nuance, rather than taking the arithmetic majority at face value, we find that the rise of the right wing isn't really just a matter of false consciousness or ideology, but has a real class basis. In other words, today's electoral choices emerge from the class interests of much of the voting base. This is simply because many of those who fit the Marxist definition of proletarian, that is, someone who owns nothing except their own labor power, are not voting. It is well established that lower income makes it more likely that someone will not vote. In fact, there is a correlation between income inequality and low political participation.

Another interesting trend is that voter turnout in the developed world is steadily declining. This correlates with the increase of income inequality, the rise of the right wing, and yes, the decline of the Left.

Let us look at the United States as a particularly dire but interesting example. The reason voters choose politicians who want to cut social programs and enforce austerity is that the same politicians often promise more tax cuts, a restructuring that would benefit people in higher tax brackets, who happen to be the people that vote. Surveys have found that nearly half of non-voters in the US make less than $30,000 a year. If you zoom into the lifestyle of a large percentage of $60,000+ households, a life that may include mortgages, workplace insurance, fat credit lines, and segregated neighborhoods where race and income cut along zip-code lines, these voting patterns make sense. The last person to benefit from rent control, centralized school funding, and welfare is going to be an office manager who holds home equity and sends their kid to piano lessons.

These voting patterns are also interesting from a political economy perspective. Much of what passes as class analysis in the more popular iterations of Marxism looks only at workplace relations, and at whether someone collects a salary as opposed to being a capitalist. Yet one of the ways the liberal democratic state curbed working-class militancy was through the introduction of cheap credit, which suddenly turned much of the traditional working class into "property owners", because they now hold home equity. Specifically, the skilled layer of the working class and the professionals became petit-bourgeoisified, that is, akin to small landholders. In other words, this middle class, even if some of its members collect a salary, stops being proletarian in the Marxist sense (a class that owns nothing except its labor power) and turns into a class of small property owners. In the American case, this was also bound up with racial dynamics, where a white middle class entrenched itself in segregated zip codes, with housing associations that monitor the evenness of lawns in order to maintain property values. Furthermore, zoning privileges are also a way of gatekeeping resources for their children's social mobility, for example through public schools that are effectively attended only by the rich.

The existence of a petit-bourgeoisified middle and upper middle class, isomorphic to small landholders, can only manifest in the era of finance capital, as their lifestyle is sustained by debt that produces financial fragility and secular decline. According to Minsky (whose ideas have recently been incorporated into macroeconomic models), financial fragility emerges from banks and other financial institutions lending too much money in boom periods, which inevitably finances enterprises that fail to be profitable. This generates a bubble that later bursts, creating business cycles and a dislocation between the financial sector and the real economy. Furthermore, as mentioned in my previous post, the financialization of capital correlates with the decline of productivity across virtually all industries, so only finance capital, rather than the "real economy", can sustain these small proprietors. So it is no surprise that there is an almost clientelistic link between these small-proprietor, middle-class whites and the most reactionary elements of capital, as the latter buy them off by extending racialized financial leverage that is not available to poorer, racialized sectors.

No wonder left-wing tendencies and social democratic parties have declined, and the ones that survived shifted rightwards. They all aim to convince "likely voters", who tend to be petit-bourgeoisified middle classes whose class interests are aligned with tax cuts and fiscal austerity, in contrast to lower-income individuals who do not vote as much, and who would benefit from wealth redistribution programs.

Instead of aiming for likely voters, leftists should create a genuine socialist party that fights for the working class and the poor. The key to socialist hegemony is politically activating unlikely voters, e.g. racialized, working-class, and poor individuals, rather than trying to pull the heartstrings of the middle class. This strategy will not yield easy wins at the ballot box, for likely voters tend to be conservative. Instead it requires a long-run strategy in which socialist hegemony is built among unlikely, low-income voters.

A minimum program for a party of the working class and the poor could contain some of the following policies: (i) nationalization of real estate (except the infrastructure built upon it); (ii) a job retraining program for casualized, unemployed, or low-wage workers; (iii) a robust public healthcare infrastructure; (iv) abolition of temporary "work visas" and instead full citizenship for all immigrants; and (v) restructuring of educational infrastructure so that funding depends on head count rather than zip codes, including free higher education and student stipends. These positions are only tentative examples, and this minimum program should go hand in hand with the long-term maximum program of a world workers' republic and the replacement of market mechanisms with world economic planning.

Much of the platform of a workers' party will be opposed by the small-proprietor middle class, since it is diametrically opposed to their interests; real estate nationalization, for example, contradicts home ownership. However, the large underclass that does not vote, and the segment of the working class that does go to the polls, can be won over by a program that addresses their immediate class interests.

The outlook for a workers' party is moderately optimistic. As the pauperization of millennials, who are poorer than their parents, and the recent financial crisis have shown, the debt-inflated lifestyle of the middle-class small proprietors is unsustainable. Therefore, the base for a future workers' party is secularly growing.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

 


Crisis Theory: The Decline of Capitalism As The Growth of Expensive and Fragile Complexity

It's an empirical fact that the economy experiences business cycles, in other words, oscillations between booms and busts. Furthermore, many argue that the economy is experiencing a secular decline; for example, productivity across all industries has decreased since the 1970s. What are the mechanisms behind these instabilities and this decline? What would an accurate theory of economic crisis look like?

[Figure. Source: https://www.brookings.edu/wp-content/uploads/2016/09/wp22_baily-montalbano_final4.pdf]

I believe that capitalism is both unstable and vulnerable to business cycles, and that it is also experiencing secular decline. The source of these trends is a set of feedback mechanisms, structural to capitalism, that encourage the growth of fragile and expensive complexity (logistics, rent-seeking, finance, etc.) in the pursuit of short-term profits. Furthermore, this complexity becomes increasingly separated from the human labor (see Marx on the labor theory of value) that creates wealth directly or indirectly (e.g. the factory worker, the doctor, the teacher), which means a larger ratio of overhead to wealth creation. In the long run, the growth of expensive complexity means both declining productivity and fragility to the business cycle.

I will first review some of the theories that already exist to explain this secular decline and also the nature of business cycles. Then I will present my own crisis theory that addresses the weaknesses of the other existing models.

The mainstream economic approach to the business cycle is modelled through the so-called Dynamic Stochastic General Equilibrium (DSGE) model. In this model, mainstream economists assume the world economy is more or less in equilibrium (e.g. markets clear and agents maximize their utility functions) until a random shock appears, for example a sudden rise in oil prices. The nature and source of the shock are irrelevant in this model; the DSGE approach only dictates that random shocks are an economic reality. Thus the task of the economist reduces to studying how the structures of the economy amplify or dampen and propagate the shock. For example, after the 2008 crash, economists began taking seriously how aspects of the financial sector may amplify these shocks (they call these financial frictions). It appears mainstream economists have achieved a consensus only on business cycle modelling, not necessarily on the secular decline of the economy.
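
To make the shock-propagation logic concrete, here is a minimal toy sketch in Python (my own illustration, not an actual DSGE model; the persistence parameter and shock sizes are arbitrary assumptions):

```python
import numpy as np

# Toy shock propagation, not a real DSGE model: output deviations follow
# y_t = rho * y_{t-1} + e_t, where e_t are exogenous random shocks and
# rho stands in for how the economy's structure propagates/dampens them.
rng = np.random.default_rng(0)
rho = 0.9                        # assumed persistence parameter
T = 100
y = np.zeros(T)
shocks = rng.normal(0.0, 1.0, T)
shocks[10] += 5.0                # one large shock, e.g. an oil price spike

for t in range(1, T):
    y[t] = rho * y[t - 1] + shocks[t]

# With rho near 1 the big shock at t=10 decays slowly; the whole
# "economics" of this framework lives in how structure shapes that decay.
print(np.round(y[8:20], 2))
```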

Hyman Minsky was an important heterodox thinker who elaborated a crisis theory, and he has recently become widely cited because of the 2008 crash. Minsky argued that crises emerge from endogenous dynamics in the financial sector. He explained that in booming times, banks and other financial institutions become "euphoric" and begin lending and borrowing quantities that in bust periods they would find too risky. Given that these financial actors are overconfident, a speculative investment bubble develops. At some point the debtors cannot pay back, and the bubble bursts, creating a crisis.

The more orthodox of the Marxist approaches to crisis is famously referred to as the theory of the tendency of the rate of profit to fall (TRPF). According to Marx, capitalism experiences a secular decline in the rate of profit as work is automated away by machines: fewer workers are employed, which means less human labor to exploit. As production becomes more optimized, machinery and raw materials absorb more of the costs of production, and fewer workers are employed due to rising productivity. In Marxist analysis, profit comes from the exploitation of workers, that is, from paying workers less than the value created by the hours they worked. So as machinery automates more of the labor, the rate of profit declines. According to Marx, in a hypothetical scenario where all labor is automated by robots, the capitalist wouldn't profit at all!
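
To state the argument compactly in Marx's own notation, let s be surplus value, c constant capital (machinery and raw materials), and v variable capital (wages). The rate of profit is

r = \frac{s}{c + v} = \frac{s/v}{c/v + 1}

so, holding the rate of exploitation s/v fixed, a rising organic composition c/v (more machinery and materials per worker) pushes r downward; and in the limit of total automation, v and with it s go to zero, leaving no surplus value to profit from.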

Finally, there are some crisis theories where more heterodox Marxist models and pseudo-Keynesian theories converge. Thomas Palley recently compared Foster's Social Structure of Accumulation (SSA) theory to his own theory, Structural Keynesianism. Both Palley and Foster argue that the decline of economic growth is related to the stagnation of wages. If wages are stagnant, the aggregate demand necessary for growth goes unmet, because workers don't make enough to purchase commodities. They argue that this economic stagnation is related to the neoliberal growth model adopted since the 1970s. According to Palley, the only mechanisms that kept the economy from crashing were the overvaluation of assets and firms filling the hole in aggregate demand by taking on more debt. However, this excess of credit led to financial instabilities that eventually crashed the economy in 2008.

In my opinion all these approaches are flawed. For one, the mainstream approach under-theorizes the sources of fragility and the secular decline in the rate of profit. It is true that much of the story of crises and business cycles has to do with the fragility of the capitalist economy to volatility, which mainstream models do capture. However, an important part of the story is why the capitalist system is fragile to these shocks in the first place. In fact, mainstream economists showed their ignorance with their inability to forecast the effects of the 2008 recession. Only after the crash did they implicitly concede to Minsky's heterodox argument that the financial sector creates fragility; for example, only after the crisis did mainstream economists include in their DSGE models the financial instabilities Minsky described. Furthermore, mainstream economics doesn't really have a theoretical consensus on the secular decline of capitalism.

The problem with the Minskyan approach is that it is severely limited: it identifies only one source of fragility, the financial sector. It also does not theorize why the financial sector is "less real" than, for example, the manufacturing sector, which Minsky implicitly assumes when he attributes fragility only to the financial side. Because of this limited theorization, Minsky also fails to explain the secular decline of the rate of profit, content with explaining only the business cycle.

The greatest flaw of the "orthodox" Marxist approach is its dependence on pseudo-Aristotelian arguments. The TRPF model is based on a logical relation between very specific variables: the costs of raw materials and machinery (constant capital), the costs of human labor (variable capital), and the value extracted from the exploitation of human labor (surplus value). This spurious precision and logicality is unwarranted, as the capitalist system is too complex and stochastic for the behaviour of crises to be described by a couple of logical propositions. One has to take into account the existence of instabilities and shocks, as the mainstream economists do. However, Marx still had a key insight: the aggregate wealth of the world must be sourced in human labor that produces use values. The source of wealth is dentists doing dentistry and construction workers doing construction work, not the dentist trying to make money by trading in the stock market. Furthermore, Marx identified a secular trend in the declining rate of profit, which is missing in other contemporary accounts.

Finally, Palley's approach seems too politically motivated. To him, the stagnation of the economy is a matter of policy, of statesmen adopting the "wrong" set of regulations or deregulations. If politicians were just "objective" and followed Palley's ideas, then crisis and decline could be averted! To Palley, the neoliberal phase was a matter of certain "top-down" policies rather than endogenous, spontaneous fragilities and instabilities inherent to the capitalist system. In my opinion, it's impossible to disaggregate what is political and what is inherently structural in the secular decline of capitalism, since the whole world economy is more or less neoliberalized at this moment, so there is no alternative to compare it against. It seems to me a just-so story, projected from the present onto the past and impossible to prove empirically.

One of the issues I have with the "left" theories of crisis, both Keynesian and Marxist, is that they don't take instability, uncertainty, stochasticity, and complexity seriously. Instead, proofs and discussions are reduced to Aristotelian logic-chopping over a few variables: in the Keynesian case, aggregate demand; in the Marxist case, surplus value, constant capital, and variable capital. A system that pulsates with billions of people is reduced to the logic-chopping of a few variables. Instead, we must devise a more holistic view of the capitalist world-system, taking into account its nonlinearities and fragilities.

The theories outlined above each contain part of the truth, so we can use some of their elements to synthesize a model of crisis that contains the following: (i) economic fragility to instabilities and shocks, (ii) endogenous sources of this fragility, and (iii) a theory of the secular decline of the rate of profit. The concepts ultimately uniting these three points are fragility/nonlinearity and increasingly expensive complexity. For example, Minsky, by addressing the fragility of the financial sector, also implicitly points to a theory of degenerative complexity, in which the financial sector acts as a complex, expensive, and fragile overhead sitting on top of the "real economy".

We can use Taleb's definition of fragility to make the concept more precise. Taleb mathematically defines fragility as harmful, nonlinear sensitivity to volatility. For example, a coffee cup can withstand stress up to a certain threshold; above that, the cup becomes exponentially vulnerable to harm, as any stress higher than that threshold will simply shatter it. The reason fragility is a nonlinear property is that the cup does not wear and tear proportionally to stress. Instead, stress below the threshold inflicts negligible damage, and once the threshold is crossed the cup suddenly shatters.
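
The coffee-cup example can be written down directly. Here is a minimal sketch (the threshold and damage numbers are arbitrary assumptions, chosen only to show the shape of the response):

```python
import numpy as np

# Toy harm function for the coffee cup: damage is negligible below a
# stress threshold and catastrophic above it. All numbers illustrative.
def harm(stress, threshold=10.0):
    return np.where(stress < threshold, 0.01 * stress, 100.0)

stresses = np.array([1.0, 5.0, 9.0, 9.9, 10.1, 11.0])
print(harm(stresses))
# Going from a stress of 9.9 to 10.1 barely changes the input but takes
# the harm from roughly 0.1 to 100: the response is wildly
# disproportionate around the threshold, which is what makes the cup
# fragile rather than merely weak.
```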

Similarly, the capitalist world-system probably has many thresholds, most of them currently unknown. This is because the capitalist world-system is complex and nonlinear. It is complex because it is made of various interlocking parts (firms, individuals, governments, etc.) that form causal chains connecting across planetary scales. It is nonlinear because the behaviour of the system is not simply the "sum" of the interlocking parts, as the parts depend on each other. One cannot really study the individual components in isolation and then understand the whole system by adding these components up; the interdependence of the units within capitalism makes the system nonlinear. Furthermore, nonlinear systems are frequently very sensitive to changes in their variables, and surpassing certain thresholds can make the system exhibit abrupt changes and discontinuities that often manifest as crises. Such threshold-crossing jumps are a common mathematical property of nonlinear systems. Fragility therefore goes hand in hand with nonlinearity, abrupt jumps and shocks, and complexity.

However, it is not enough to say that the capitalist world-system is fragile because it is nonlinear. The point is that the capitalist world-system structurally generates feedback loops that lead to the accelerated creation of endogenous fragilities. The frenetic pursuit of short-term profits in increasingly competitive contexts leads to the creation of fragile, nonlinear complexity. This is because a firm must invest in ever more expensive research, infrastructure, and qualified personnel to generate innovation that leads to profit in the short term, as many of the "low hanging fruits" have already been plucked. So capitalism leads to random "tinkering" by firms and institutions to produce profit, often by adding ad-hoc complexity. This complexity may generate short-term profits, but it is expensive in the long term. Joseph Tainter tries to measure the productivity of innovation by looking at how many resources go into creating a patent. For example, here is a plot showing how the ratio of patents to GDP and to R&D expenses has declined since the 1970s:

 

[Figure: patents relative to GDP and R&D expenses. Source: https://voxeu.org/article/what-optimal-leverage-bank]

Another marker of increasingly expensive complexity is how many people are required to create a patent:

[Figure: number of people required to produce a patent. Source: https://onlinelibrary.wiley.com/doi/full/10.1002/sres.1057]

A very common and well-studied example of this nonlinear complexity is the financial system, an example of the growth of complexity in the service of the profit motive. Cash flows are generally too slow, and cash reserves too low, to cover the capital required to start firms or to add a layer of complexity required for more profitability, so agents must resort to credit and loans. In other words, the financial system acts as a fast, short-timescale distributive mechanism for funnelling resources to banks, firms, and individuals that require quick access to capital in spite of low cash flows. Without the financial system, growth would be much lower, because access to capital could only be facilitated through cash flows. However, as Minsky noted decades ago and mainstream economics emphasizes now, the financial system is extremely unstable, complex, and nonlinear, and therefore fragile. Here is a figure showing how the "leverage ratio" of UK banks, roughly their ratio of debt to equity, grew exponentially from the 1880s to the 2000s; in other words, banks increasingly depend on loans and credit for fast access to capital.

[Figure: leverage ratios of UK banks, 1880s to 2000s. Source: https://voxeu.org/article/what-optimal-leverage-bank]
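
A minimal caricature of this dynamic can be simulated. The sketch below is my own toy rendering of Minsky's story, with made-up parameters: leverage drifts up during good years, and a bad enough draw on a thin equity cushion forces abrupt deleveraging:

```python
import numpy as np

# Toy Minsky cycle: in boom years banks ratchet leverage (debt/equity) up;
# a large loss on a highly levered balance sheet wipes out the equity
# cushion and forces sudden deleveraging. All parameters are assumptions.
rng = np.random.default_rng(1)
equity, leverage = 1.0, 5.0
history = []

for year in range(60):
    assets = equity * (1.0 + leverage)       # balance sheet scaled by leverage
    ret = rng.normal(0.02, 0.05)             # random return on assets
    equity += assets * ret                   # gains/losses accrue to equity
    if equity <= 0.2:                        # cushion exhausted: bust
        leverage = max(1.0, leverage * 0.3)  # forced deleveraging
        equity = 0.2                         # stylized recapitalization
    else:
        leverage *= 1.05                     # "euphoria": leverage drifts up
    history.append((year, round(leverage, 1), round(equity, 2)))

# The higher leverage climbs, the more a single bad year moves equity,
# so booms endogenously manufacture the conditions for the next bust.
print(history[-5:])
```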

That added complex overhead moves inversely with growth has been empirically documented for various parts of capitalism. Some examples: the cost diseases associated with industries like education and healthcare, the admin bloat in education and healthcare, the stagnation of productivity across virtually all industries including manufacturing, and the stagnation of scientific productivity in spite of exponential growth in the number of scientists and fields.

Furthermore, capitalism encourages rent-seeking and expensive complexity even when there is no benefit in wealth production for the economy in general. This rent-seeking scenario is probably the case for admin bloat at the universities: there is a transfer of wealth from society to certain sectors of the university, but no obvious economic benefit for society in general. This is in contrast to traditional, profitable industries, where profit leads to capital valorization through the reinvestment of that profit.


As noted in a previous post, there is also a secular degeneration of science accompanying the secular decline of capitalism. To summarize that post: as informational complexity grows at a faster rate than empirical validation and knowledge production, an informational bloat of unverified scientific theories gets created. An obvious example is the bloat of theoretical physics models that predict all sorts of new particles, in spite of the fact that the Large Hadron Collider, a multibillion-dollar experiment, has failed to confirm any of them. So you have a whole layer of professionals who are experts only in unverified, degenerative theories, and who collect large salaries in spite of contributing to neither economic nor epistemic growth. Another example of a degenerative profession is economics. Judging from the stagnating productivity across most industries, we can probably assume that this caste of degenerative professionals is rampant across all corners of capitalism. These degenerative professionals and "degenerative" experiments add expensive and fragile complexity to capitalism.

[Figure. Source: https://blogs.scientificamerican.com/cross-check/is-science-hitting-a-wall-part-1/]

Finally, as complexity grows, there is an increasing dislocation between abstracted logistical, degenerative, and "scientific" complexity and the human labor that creates the wealth. A very good example is finance. To paraphrase and elaborate on Taleb: the wealth of the world is created by dentists doing dentistry and construction workers doing construction work, not by the dentist trying to become rich by trading their savings in the financial market. This is where Marx becomes relevant, for the wealth of society comes from human labor, not from the transfer of wealth through administrative and accounting tricks, or through the circulation of financial instruments. This bloated complexity is required for the functioning of capital because of financial, accounting, and logistical constraints. Much of it acts as an overhead for the world-economy that is required for the survival of capital itself but does not necessarily create socially necessary wealth. An example of the fragility of this separation between wealth creation and complex abstraction is the existence of speculative bubbles: due to the overconfidence of the financial industry, assets are often overvalued, and at some point their value collapses, as the dislocation between the real and financial economy becomes unsustainable. This is the financial instability that Minsky discovered and that mainstream economists now incorporate into their models.

Here we can begin to sketch a theory of the secular decline of capitalism. First, there is a secular increase of fragile, nonlinear complexity, driven by the ad-hoc tinkering of firms and institutions pursuing short-term profits at the expense of fragility. Much of this expensive complexity is due to rent-seeking, where specialists trained in degenerative methods that add no obvious knowledge or efficiency self-reproduce and multiply: string theorists, economists, university admins, healthcare admins, etc. In the long run, all this complexity created for short-term profits becomes increasingly expensive, leading to ever slower productivity growth (GDP growth per labor hour). Part of the decline in productivity is the increasing dislocation between the human labor that produces wealth and an abstracted layer of researchers, administrators, managers, etc. Furthermore, not only is there a secular decline of the economy, but there are also increasing fragilities and instabilities, as the bloated complexity is deeply nonlinear, coupling agents across planetary scales, as when the financial industry transcends national economies. The world economy thus becomes increasingly vulnerable to shocks, due to nonlinearities (caused by interdependencies) that lead to abrupt changes. These instabilities and fragilities give rise to the so-called business cycle.

In conclusion, a socialist theory of crisis should begin by looking at the economy as a whole, taking into account its instabilities and fragilities. In my opinion, the methodologies of the various Keynesian and Marxist schools are wrong because they pretend to have identified a couple of important variables (e.g. aggregate demand, the organic composition of capital) and then logically derive a theory of crisis from them. Because the economic system is extremely complex and nonlinear, these theories probably amount to just-so stories: the mechanisms behind the instabilities of capitalism are probably very varied (and many of them unknown), and therefore cannot be pinned to a few specific sources. A better approach to crisis theory is to analyze how capitalism creates endogenous feedback loops that lead to fragility, through generalized and socially unnecessary nonlinearities and complexities. This nonlinearization and complexification is imposed in order to pursue short-term profits, at the expense of long-term productivity. Another important issue is how a large part of this complexity becomes increasingly dislocated from wealth-creating labor, such as the dislocation between administrators and professors, or between the financial sector and the real economy.

I am confident many of the theories presented in this article can be quantified and verified against empirical data much more rigorously than done here. But alas, there isn't an eccentric millionaire backing this research program 😞.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

Ergodicity as the Solution for the Decline of Science

[Image: Maxwell's demon]

In a previous post I explored the decline of science as related to the decline of capitalism. A large aspect of this decline is how the increase of informational complexity leads to marginal returns in knowledge. For example, the last revolution in physics appeared roughly one hundred years ago, with the advent of quantum mechanics and relativity. Since then, the number of scientists and fields has increased exponentially, and the division of labor has become ever more complex and specialized. Yet the Large Hadron Collider, the billion-dollar-per-year experiment created to probe the most fundamental aspects of theoretical physics, has failed to confirm any of the new theories in particle physics. The decline of science is coupled to the decline of capitalism in general, as specialist and institutional overhead increases exponentially across industries while GDP growth has been sluggish since the 1970s.

Right now, across scientific fields, there is an increasing concern about the overproduction of "bad science". Recently the medical and psychological sciences have been making headlines because of their high rates of irreproducible papers. Even in the more exact sciences, there is a stagnant informational bloat, with a flurry of math bubbles, theoretical particles, and cosmological models inundating the peer-review process, in spite of billion-dollar experiments like the Large Hadron Collider confirming none of them, and with no scientific revolution (the last one was a hundred years ago) on the horizon.

There is no shortage of proposed solutions to this perceived problem. Most are simply suggestions for making the peer-review process more rigorous and refining the statistical techniques used for analyzing data: using Bayesian statistics instead of frequentism, encouraging the reproducibility of results, and finding ways to constrain "p-value hacking". Sometimes bolder writers argue that there should be "interdisciplinarity", or that scientists should talk more to philosophers, but usually these calls for "thinking outside the box" are very vague and broad.

However, most of these suggestions would simply exacerbate the problem. I would argue that the bloat of degenerative informational complexity is not due to lax standards. To see why, let's analyze the concept of p-value hacking. A common heuristic in the social sciences is that for a result to be significant, it should have a p-value of less than 0.05. In layman's parlance, this means your result has only a 5 percent probability of being due to chance (not the exact definition, but it suffices for this example). So now you have established a "standard" that can be gamed in the same way lawyers game the law. This creates a perverse incentive for researchers to find all sorts of clever ways of "p-hacking" their data so that it passes the standard. P-hacking ranges from conscious fraud, such as excluding the data that raises the p-value (high p-values mean your results are due to chance), to unconscious biases, like ignoring certain data points because you convince yourself they are measurement errors, all in order to protect your low and precious p-value.
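
To see how a fixed standard gets gamed, here is a minimal simulation of one classic p-hacking tactic, optional stopping (the sample sizes and peeking schedule are illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Optional stopping: test pure noise, peek at the p-value after each new
# batch of data, and stop as soon as p < 0.05. All numbers illustrative.
rng = np.random.default_rng(42)
false_positives = 0
n_experiments = 1000

for _ in range(n_experiments):
    data = list(rng.normal(0.0, 1.0, 10))     # the true effect is zero
    for _ in range(20):                       # up to 20 peeks
        _, p = stats.ttest_1samp(data, 0.0)
        if p < 0.05:                          # "significant": stop, publish
            false_positives += 1
            break
        data.extend(rng.normal(0.0, 1.0, 5))  # otherwise collect more data

# An honest single test would produce ~5% false positives; peeking after
# every batch inflates that rate severalfold, while every individual
# test still "follows the rule".
print(false_positives / n_experiments)
```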

The more rigid rules a system has, the more is invested in "overhead" to regulate those rules and to game them. Almost everyone grasps this intuitively; hence the standard resentment against bureaucrats who take the roundabout and sluggish way to accomplish anything. In the sciences, once an important study, experiment, or theorem generates a new rule or "methodology", perverse incentive loops form in which scientists and researchers use this "rule" to create paper mills, which in turn are used to game citation counts. Instead of earnest research, you get an overproduction of "bad science" that amounts to the gaming of certain methodologies. String theory, which can be defined as a methodology, was established as the only game in town a couple of decades ago, which in turn pushed young theoretical physicists into investing their time and money in gaming that informational complexity, generating even more complexity. Something similar happens in the humanities, where a famous (usually French) figure establishes a methodology or rule, and the Anglo counterparts game the rule to produce concatenations of polysyllabic words. Furthermore, this fetish of informational complexity in the form of methods and rules creates a caste of "guild keepers" who are learned in these rules and accrue resources and money while excluding anybody who isn't learned in these methodologies.

This article serves as a "microphysical" account of what leads to the degenerative informational complexity and diminishing returns I associated with modern science in my previous post. But what would be the solution to such a problem? The answer, in one word: ergodicity.

As said before, science has become more specialized, complex, and bloated than ever before. However, just because science has grown exponentially doesn't mean it has become more ergodic. By ergodic I specifically mean that all possible states are explored by a system. For example, a die thrown a large number of times would be ergodic, given that the system would access every possible face. Ergodicity has a long history in thermodynamics and statistical mechanics, where physicists often have to assume that a system has accessed all its possible states. This hypothesis allows physicists to calculate quantities like pressure or temperature by making theoretical approximations of the number of states a system (e.g. a gas) has. But we can use the concept of ergodicity to analyze social systems like "science" too.

If science were ergodic, it would explore all possible avenues of research, and individual scientists would switch research programs frequently. Now, social systems cannot be perfectly ergodic, as they are dynamic and the "number" of states therefore grows (e.g. the number of scientists grows). But we can treat ergodicity as an idealized heuristic.
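
The die metaphor (which returns below as the image of the loaded die) is easy to simulate; here is a minimal sketch with assumed weightings:

```python
import numpy as np

# A fair die explores all six faces (its time averages approach the
# uniform distribution), while a loaded die (like a field locked into a
# single methodology) spends nearly all its time in one state.
rng = np.random.default_rng(7)
n_rolls = 60_000

fair = rng.integers(1, 7, n_rolls)
loaded = rng.choice([1, 2, 3, 4, 5, 6], size=n_rolls,
                    p=[0.02, 0.02, 0.02, 0.02, 0.02, 0.90])

for name, rolls in [("fair", fair), ("loaded", loaded)]:
    freqs = np.bincount(rolls, minlength=7)[1:] / n_rolls
    print(name, np.round(freqs, 3))
# The fair die's frequencies approach 1/6 for every face; the loaded die
# barely visits five of its six states, a non-ergodic exploration of
# its own state space.
```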

The modern world sells us ergodicity as a good thing. Often, systems describe themselves as ergodic as a defence against detractors. For example, when politicians and economists claim that capitalism is innovative, and that it gives every worker a chance at becoming rich (and every rich person a chance of becoming poor), they are implicitly describing an ergodic system. Innovation implies that entrepreneurs experiment with and explore all possible market ideas so that they can discover the best ones. Similarly, social mobility implies that a person has a shot at becoming rich (or, if already rich, becoming poor) if they live long enough. In real life, we know that the ergodic approximation is really poor for capitalism, as the rich tend to stay rich and the poor tend to stay poor. We also know that important technological innovation is often carried out by public institutions such as the American military, not the private sector. Still, the reason ergodicity is invoked is that it is viscerally appealing. We want "new blood" in fields and niches, and we resent bureaucrats and capitalists insulated from the chaos of the market for not giving other deserving people a chance.

One of the reasons ergodicity is appealing is that there is really no recipe for innovation except experimentation and the exploration of many possible scenarios. That's why universities often have unwritten rules against hiring their own graduate students into faculty positions: they want "new blood" from other institutions. A common (although incorrect, as described above) argument against public institutions is that they are dull and stagnant at generating new products or technologies compared to the more "grassroots" and "ergodic" market. So I think there is a common intuition among both laymen and many professionals that the only sure way of finding out whether something "works" is to try different experimental scenarios.

Now let's return to science. The benefit of ergodicity in science was indirectly supported by the infamous philosopher Feyerabend. Before him, philosophers of science tried to come up with recipes for what works in science and what doesn't. An example is Popper, who argued that science must be falsifiable. Another is Lakatos, who came up with heuristics for what causes research programs to degenerate. Yet Feyerabend argued that the only real scientific method is "anything goes"; he termed this attitude epistemological anarchism. He argued that scientific breakthroughs don't usually follow any hard-and-fast rules, and that scientists are first and foremost opportunists.

Feyerabend got a lot of flak for these statements, with detractors accusing him of relativism and anti-scientific attitudes. Feyerabend didn't help himself, because he was often inflammatory on purpose, seeking to provoke a reaction (for example, putting astrology and science on the same epistemic level). However, I would say that in some sense he was protecting science from dogmatic scientists. To use the terminology sketched in the previous paragraphs: he was ultimately arguing for a more ergodic approach to science, so that it does not fall into the dogmatic trap.

This dogmatic trap was already described in previous paragraphs: the idea that more methods, rules, divisions, thought-policing, and rigour will always lead to good science. Instead it leads to a growth of degenerative research that amounts to gaming certain rules, which in turn leads to the growth of degenerative specialists who are experts only in degenerative methods. Meanwhile, all this growth is non-ergodic, because it is organized around respecting certain rules and regulations, which constrains the exploration of all possible scenarios and states. It's like loading a die so that the six always faces up, in contrast to allowing the die to land on all possible faces.

How can we translate these abstract heuristics of ergodicity into real scientific practice? The problem with much of the philosophy of science, whether made by professional philosophers or by scientists unconsciously doing philosophy, is that it looks at individual practice. It comes up with a laundry list of specific rules of thumb that an individual scientist must follow to make their work scientific, including certain statistical tests and reproducibility. But the problems are social and institutional, not individual.

What is the social and institutional solution? Proposing solutions is harder than describing the problem. However, I always try to sketch a solution, because I think criticism without a proposal is somewhat cowardly: you avoid opening yourself up to criticism from readers.

The main heuristic for solving these problems should be collapsing the informational complexity in a planned, transparent, and accountable way. As mentioned before, this informational complexity is like a cancer that keeps growing, and its source is probably methodological dogmatism, where complex overhead becomes bloated as researchers find increasingly convoluted ways of "gaming" the rules. Here are some suggestions for collapsing complexity:

  1. Cut administrative bloat and instead rotate academics through the essential administrative postings.
  2. Get rid of the peer-review system and instead use an open system, similar to arXiv.
  3. Collapse some academic departments into bigger ones. For example, much of theoretical physics has more in common with mathematics and philosophy than with the more experimental branches of physics, so departments should be reorganized so that people with more in common interact with each other.
  4. Create an egalitarian funding scheme, based more on the division between theory and experiment than on departments. Everyone in the same category should receive the same minimum amount of funding, with funding quantities based on how many resources a given type of work realistically requires. For example, a theoretical physicist who uses only pencil, paper, and a personal computer has a lot in common, financially, with a sociologist who does the same.
  5. Beyond the minimum funding outlined above, excess funding should be allocated democratically, with input from outside the professions.
  6. Abolish the distinction between tenured professor and adjunct. Instead, everyone should teach.

Hopefully the destruction of admin bloat and of the adjunct/tenure distinction would release resources that could be spent on hiring researchers, instead of relying on bad heuristics such as publication and citation counts as filters for new hires.

Many of these recommendations cannot be seen in the abstract, since the university is intimately coupled to society and the economy as a whole. For example, part of the admin bloat comes from legal liabilities and from the state offloading some of its responsibilities onto universities. Number 6 would require a radical reconfiguration of society in general. Number 5 could not be enacted today, since "democratic" institutions are composed of non-ergodic, technocratic lifers.

This takes me to the political conclusion that the problems of science should be seen as the problems of society as a whole. The only sure way to find solutions to problems is an ergodic approach. Right now the state is non-ergodic; that is, it is occupied and controlled by political and bureaucratic lifers. These non-ergodic bureaucracies in turn generate informational complexity, as new regulations and "rules" are imposed by the same caste of degenerative professionals, which in turn requires even more complex overhead. Instead, the State (and, in a socialist society, the means of production) should have a combination of democratic and sortition mechanisms that make it impossible for individuals to stay too long in power. This democratic vision should be supported by broad, free education programs that give individuals the knowledge required to rule themselves in a republican way. Not only does this method guarantee more equality, it also turns society into a great parallelized computer that solves problems by ergodic trial and error, through the introduction of new blood, sortition, and democratic accountability.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

The Decline of Science, The Decline of Capitalism


Can another Einstein exist in this era? A better question is whether the spirit of his research program could emerge again in our current predicament. By his research program, I mean the activity of grasping, through a few thought experiments and heuristics, fundamental principles that revolutionized not only physics but our whole ontology. Through a combination of imagination and mathematical prowess, such as imagining himself riding a beam of light and then translating that image into the language of geometry, he revolutionized our most fundamental intuitions of space and time.

Fast forward a hundred years, and physics has become increasingly specialized and fractal-like, with theoretical physics atomized across many sub-disciplines. Given this complex landscape, there is simply not enough bandwidth to engage the informational complexity of all relevant fields in order to grasp something both holistic and fundamental. Instead, scientific knowledge is atomized among various disciplines. And although this division of labor and increased informational complexity has a legitimate logic, as many fields truly do become more specialized and complex in a useful, authentic sense, this complexity has decreasing marginal returns. We can see the effect in some of the paper mills of theoretical physics, with theory after theory that may have only tenuous links to the facts of the world. At some point, the complexity and the literature grew exponentially, engulfing empirical confirmation.

One of the most striking examples of the diminishing returns of complexity is the lack of revolutionary shifts in theoretical physics. The last major physics revolutions, quantum mechanics and relativity, happened roughly a hundred years ago, in spite of the huge increase in the number of scientists and disciplines over the last century. There is no shortage of models and theories, yet the creation of novel predictions and empirical confirmations is slowing down, as evidenced by the inability of expensive particle physics experiments to confirm any of the new particles conjectured by the last generation of theoretical physicists. In other words, to use Lakatos' terms, theoretical physics is degenerating: there is an exponential increase in informational complexity without much empirical content backing it. In short, all the new and expensive scientists, computers, theories (e.g. supersymmetry, string theory), and cryptic fields are generating diminishing returns in knowledge.

However, it is not only the academic sciences that are degenerating. In this stage of capitalism, the degenerative research program is universal; it includes all relevant fields of human inquiry and knowledge. This degeneration therefore exists not only at the apex of academia but in any institution meant for problem-solving. We find a decrease in productivity across many industries and the economy as a whole, which signals diminishing returns on complexity. In all these parts of society there is an increase of expensive complexity that yields diminishing returns. Since all these institutions solve problems using some sort of method or episteme, we can say that their theories of the world are degenerative, by analogy with the Lakatosian concept of a degenerative research program. In spite of their bloat of specialists, the marginal returns in the "knowledge" necessary for production decrease.

Perhaps the most incredible aspect of this decline is the existence of experts in almost wholly degenerative methods. As degenerative methods, methods without much empirical backing, increase exponentially in volume, the informational complexity needs more specialists to manage it, and these experts become specialized almost entirely in decaying methods. Economists and string theorists are the quintessential examples of degenerative professionals.

This degeneration of the universal research program, and with it the creation of a degenerative caste of professionals, has not gone unnoticed by the population. The decline has probably fueled part of the anti-intellectual and anti-technocratic wave that brought Trump to power. For example, people often complain about the increasing inaccessibility of academic literature, with its overproduction of obscure jargon. Another example is the knee-jerk hatred for administrators, managers, and other technocratic professionals who are seen as doing increasingly abstracted work that may not connect with what is happening on the ground. A common target of this criticism is the admin bloat that festers at universities.

This abstract process of the degenerative research program is linked to the health of capitalism in a two-way feedback loop, given that it is through problem-solving that capitalism develops technological and economic growth. Perhaps we can understand the health of capitalism better through the ideas of the anthropologist Joseph Tainter. Tainter argues that societies are fundamentally problem-solving machines, and that they add complexity in the form of institutions, specialists, bureaucrats, and information in order to increase their capacity to solve problems in the short term. For example, the early irrigation systems of Mesopotamian civilizations, crucial for agriculture and therefore survival, created their own layer of specialists to manage them.

However, complexity is expensive, as it adds more energy and resource usage per capita. Furthermore, the problem-solving ability of institutions yields diminishing returns as more expensive complexity is added. At some point, complex societies end up with a very expensive layer of managers, specialists, and bureaucrats who can no longer deliver in problem-solving. Soon, because the complexity is no longer making society more productive, the economic base, such as agricultural output, cannot grow as fast as the expensive complexity, and society collapses. This collapse resets complexity by producing simpler societies. Tainter argues that this was the fate of many ancient empires and civilizations, such as the Romans, Olmecs, and Mayans. So Tainter is arguing for a theory of the decline of the mode of production, where modes of production are "cyclical" and have an ascendant and a descendant stage. Using this picture, we can begin to identify a stage of capitalism in decline.
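
Tainter's argument reduces to a simple shape, sketched below with assumed functional forms (sublinear returns to complexity, linear upkeep cost) chosen only for illustration:

```python
import numpy as np

# Toy Tainter dynamic: problem-solving returns grow sublinearly with
# complexity while upkeep grows linearly, so the net benefit of added
# complexity peaks and then turns negative. Functional forms are assumed.
complexity = np.linspace(0.0, 100.0, 11)
returns = 10.0 * np.sqrt(complexity)   # diminishing returns to complexity
upkeep = 1.5 * complexity              # linear maintenance cost
net = returns - upkeep

for c, n in zip(complexity, net):
    print(f"complexity={c:5.1f}  net benefit={n:6.1f}")
# Net benefit rises, peaks (here near complexity ~ 11), then declines:
# past the peak, every extra layer of specialists costs more than the
# problems it solves, which is Tainter's collapse condition.
```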

This decline of capitalism has plenty of empirical evidence. "Bourgeois" think tanks like the Brookings Institution argue that productivity growth has declined since the 1970s. Marxist economists like Michael Roberts assert that the empirical data show the rate of profit has fallen since the late 1940s in the US. Not to mention the recent Great Recession of 2008. This economic and material decline is linked to the degenerative research program, as the expensive complexity of degenerative institutions expands faster than the economic base (e.g. GDP). For example, the exponential growth of administrators in healthcare and universities at the expense of physicians and professors is symptomatic of this degeneration.

The degeneration of the universal research program has two important consequences. First, a large part of the authority figures who base their expertise on credentials are illegitimate. They are part of a degenerative caste of professionals (politicians, economists, etc.), so they cannot claim authority on relevant knowledge, because their whole method is corrupted. This implies that socialists should not feel intimidated by the credentials and resumes of the technocrats close to power. As mentioned before, right-wing populists such as Trump partially understand this phenomenon, which has unleashed his reactionary electorate against the "limousine liberals" and "deep-statists" in Washington D.C. It's time for us socialists to understand that particular truth, and not be afraid to counter the supposed realism and expertise of the neoliberal center. The second consequence is that our methods of inquiry, such as science and philosophy, have stalled. The feedback loop of complexity creates more degenerative specialists who are experts in an informational complexity that has only a tenuous connection with the facts of the world. Whole PhDs are built on degenerative methods, for example, scientists specializing in some particular theoretical framework in physics that has never been validated empirically.

What is the socialist approach to the degeneration of the research program? One cannot say that socialists would not suffer from similar problems, given that informational complexity will always be required when dealing with our complex civilization, but capitalism has particularly perverse incentives for degenerative research programs. For example, the degenerative research program survives through gatekeeping that safeguards the division of labor by well-paid and powerful professionals. An obvious example is contemporary professional economics, which largely requires the absorption of sophisticated graduate-level math in order to enter the profession, even if those mathematical models are largely degenerative. In the political landscape at large, the State is made up of career politicians and technocrats who safeguard their positions through undemocratic gatekeeping in the form of elite networking and resume padding. The rationale for this gatekeeping is that these rent-seekers accrue power and wealth through the protection of their degenerative research programs. Furthermore, capitalism accelerates the fracturing of the division of labor as it pursues short-term productivity at all costs, even when this complexity becomes expensive and a liability in the long term.

The socialist cure for the degeneration of the research program could consist of two main ingredients. First, institutions that command vast control over society and its resources should democratize and rotate their functionaries and "researchers". In the case of the State, a socialist approach would eliminate the existence of career politicians by imposing stringent term limits and making many functionaries, such as judges, accountable to democratic will. Since there are diminishing returns in knowledge through specialization and informational complexity, a broad public education (up to the university bachelor level) could guarantee a body of citizens sufficiently educated to partake in the day-to-day affairs of the State. Instead of a caste of degenerative professionals controlling the State, an educated body of worker-citizens could run its day-to-day affairs through a combination of sortition, democracy, and stringent term limits.

The second ingredient consists of downsizing much of the complexity by focusing on the reduction of the work-day through economic planning. Since one of the main tenets of socialism is to reduce the work-day so that society is ruled by the imperatives of free time rather than the compulsion of toil, this would require the elimination of industries that do not satisfy social need (finance, real estate, some of the service sector, some aspects of academia) in order to create a leaner, more minimal state. Once the work-day is reduced to only what is necessary for the self-reproduction of society, there will be free time for people to partake in whichever research program they choose. Doing so may give rise to alternate research programs that do not require the mastering of immense informational complexity. Perhaps the next scientific revolution can only arise by making science more democratic and free. This vision contrasts with the elitist science that exists today, which is at the mercy of hyper-specialized professionals who are unable to take a holistic, bird’s eye view of the field, and are therefore unable to grasp the fundamental laws of reality.

On Hegel and the Intelligibility of the Human World

I’ve been studying Hegel lately because I find value in his idea that history has an objective structure and is intelligible. He argued that History is rational, and therefore its chain of causes and effects can be understood by Reason. I deeply believe in the intelligibility of history and the human world at large, as I advocate for the human world to be administered in a planned and democratic way, which requires the possibility of scientific understanding. In contrast, many contemporary thinkers are extremely skeptical about the intelligibility of the human world. For example, many economists proclaimed that socialist planning is flawed because the supply and demand of goods cannot be made rationally intelligible to planners. We see similar arguments from the Left in the form of post-structuralist attacks against the “master narratives” that seek to unearth the rational structure of the human world. For example, contemporary criticisms of the Enlightenment sometimes argue that the same reason used to understand the world is used to dominate human beings, because Reason starts to see humans as stacks of labor power to be manipulated for some instrumental end.

However, in my opinion, to deny the intelligibility of the human world, or to deny that this intelligibility can ever be used for emancipation, is to deny the possibility of politics, for political actors must have a theory of where history is marching – in other words, of “in what direction the wind blows”. Political agents need to ground themselves in a world-theory so they can suggest a political program that would either change the direction of history towards another preferred course, or enhance the direction it is undertaking right now. The IMF, Bretton Woods, the Iraq War, the current austerity onslaught, etc. have or had an army of politicians, intellectuals, and technocrats wielding scientific reason, trying to grasp where the current of history flows, and developing policy in line with their world-theory. Since our “enemies” (the capitalist state, empire) use a scientific understanding of history in order to destroy the world, I will attempt to instrumentalize my reading of Hegel in order to make a case for a socialist intelligibility of the human world, one whose purpose is to free humanity through the use of socialist planning. I am, however, not trained in philosophy, so my reading of Hegel may not be entirely accurate – yet accuracy isn’t really my goal so much as using him as an inspiration for making my case.

Hegel and many thinkers of the 19th century were optimistic about uncovering the laws of motion that drive history, and thus the evolution of the human world. Hegel thought that history was intelligible insofar as it can be rationally understood as marching in a certain rational direction – towards freedom – even if the human beings who make this history are often driven by irrational desires. For example, Hegel thought the French Revolution, following the evolutionary path of history, brought about the progress of freedom in spite of its actors being driven by desires that concretely had nothing to do with freedom (e.g. glory, self-interest, revenge). To Hegel, the French Revolution was a logically necessary event that followed from the determined motion of history towards freedom. In parallel, Marx, who “turned Hegel on his head”, thought that the human world could be understood as a function of the underlying economic structure (e.g. capitalism or feudalism) and its class composition. Furthermore, Marx argued that the working class, due to its objective socio-economic position as the producer of the world’s wealth, could bring about socialism.

Not only were Hegel and Marx optimistic about the intelligibility of the human world, but they thought that a liberated society would make use of this intelligibility to make humans free. In the case of Hegel, he thought that the end of history would be realized by a rational State that scaffolds people’s freedom by making them masters of a world they can understand and manipulate in order to realize their liberties/rights. This is why Hegel thought the French Revolution revealed the structure of history, as this event demanded that the laws of the government become based on reason and serve human freedom. In the case of Marx and his socialist descendants, the fact that the economy is intelligible means that a socialist society could administer it for social need, as opposed to the random, anarchic, and crisis-ridden chaos of capitalism. The socialist case for the intelligibility of the human world gave rise to very ambitious and totalizing political programs, with calls for the economy to be planned for the sake of social need, and with the working class as the coherent agent for enacting this political program. Some marxists describe these totalizing socialist narratives as programmatism: the phenomenon of coherent socialist parties with grandiose, ambitious political programs for restructuring the world through the universal agency of the working class.

However, from the 20th century onwards, much intellectual activity was spent arguing against this intelligibility of the human world, and therefore against the totalizing socialist program. In the economic sphere, Hayek argued that the economy was too complicated and fine-grained to be consciously understood by human actors, making conscious economic planning an impossibility. From the Left, post-structuralist theorists attacked the idea that there exist underlying, objective structures that steer and scaffold the human world. Philosophers such as Laclau and Lyotard criticized nineteenth century thinkers such as Marx and Hegel for their totalizing narratives of how history marches and their certainty in scientific approaches to the world. In many ways these post-structuralist and marginalist views do reflect a certain aspect of the current political landscape. The market in the West has considerably liberalized since World War II, expanding the role of price signals in directing the distribution of goods – which seems to echo Hayek’s propositions. In western-liberal democracies, electoral politics is often interpreted as a heterogeneous and conflicting space formed of different identities and interest groups, each pushing its own agenda without a discernible universal feature that binds them all – which echoes the post-structuralist attack against Marxist and Hegelian appeals to universalism. Furthermore, the decline of Marxism, anarchism, and other radical political movements that posited a coherent revolutionary actor, such as the working class, gives even more credence to the post-structuralist insistence that the social world cannot be made intelligible by totalizing and “scientific” theories.

However, these attacks on the intelligibility of the human world miss a crucial point, which makes the critique fatally flawed. These attacks only feature as evidence the ideological justifications of the ruling class and the defeat of the programmatic Left. It is true that Hayekian marginalism is used as “proof” that the economic world is not intelligible to the human mind, thereby justifying increasing neoliberalization; and that the totalizing social movements of the early 20th century, with their coherent political programs and revolutionary subjects, have been almost completely supplanted by heterogeneous, big-tent movementism. Yet the ruling classes – those who control the State – still act from the perspective that the human world is intelligible. The State’s actors cannot make political interventions without assuming a theory of how the human world works and a self-consciousness of their own function in “steering” this human world towards a specific set of economic and social objectives. For example, the whole military and intelligence apparatus of the United States scientifically studies the geopolitical order of the modern world in order to apply policy that guarantees the economic and political supremacy of the American Empire. Governments have economic policies that emerge from trying to understand the laws of motion of capitalism and using that understanding to administer the nation-state on a rational basis.

The skeptics of the intelligibility of the human world could protest the above assertions in different ways. One protestation could be that the existence of the technocratic state still does not reveal some universal, coherent ruling class. In other words, there is no bourgeoisie, “banksters”, or other identified subjects that control the technocratic state for some identifiable reason – the State is simply an autonomous machine with no coherent trajectory or narrative. A second protestation is inherent in some interpretations of Adorno’s and Horkheimer’s Dialectic of Enlightenment: to make the human world intelligible to science is a method of domination, where human beings can be instrumentalized into stacks of labor power to be manipulated and administered. Furthermore, according to this criticism of the Enlightenment, those particularities of the human world that cannot be scientifically subsumed are violently forced to fit a certain universal – for example, the violence Canada did unto First Nations, which it attempted to “anglicize” by abusing and destroying them in Residential Schools.

Curiously, this second protestation – that rationality is used to scientifically dissect the human world in order to dominate it – shows the weakness of the whole counter-rational project. The ruling classes do make the human world intelligible for domination, through their technocrats, wonks, and economists. The key idea here is that they administer the world in the name of some objective that does not treat social need as its end. The behavior of the State does indeed show that the human world and history are intelligible – it’s just that this intelligibility is instrumentalized in favour of some anti-human end. In reply to the first protestation, that it is impossible to recognize a universal subject or the end the technocratic state pursues, I will say that the complexity of world capitalism does not imply that it has no dominant trends that can be analyzed. It just happens that systems experience various tendencies, some in conflict with each other, which can still be understood scientifically and from a bird’s eye view. For example, one of the key trajectories of the modern capitalist state is the safeguarding of the institution of private property and the stimulation of capital accumulation (e.g. GDP growth) – this is certainly an intelligible aspect of modern world history. The existence of conflicting trends within the State that counter the feedback of capital accumulation, such as inefficiencies caused by rent-seekers and corruption, only means that the State (and the human world) are complex systems with counteracting feedback loops – not that these objects cannot be made intelligible by scientific reason in order to understand them and ultimately change them.

The existence of contradicting feedback loops embedded in a complex system is not an argument against the scientific understanding of the human world. One can still try to understand the various emergent properties even if they contradict each other. For example, a very politicized complex system today is the climate. The climate system has counteracting feedbacks: clouds may decrease the temperature of the Earth by reflecting solar radiation into outer space, while at the same time heating the Earth through the greenhouse effect of water vapour. And although we cannot predict the weather – the atmospheric properties of a ten-square-kilometer patch on a specific day – we can predict the climate, that is, the averaged-out atmospheric properties of the whole Earth over tens of years. We have a very good idea, for instance, of how the average temperature of the Earth evolves. In the case of the human world, the same heuristic applies – we cannot understand everything that happens at the granular level, but we can have ideas about the average properties integrated throughout the whole human world, as in the toy illustration below.
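To make the weather/climate distinction concrete, here is a minimal sketch of the averaging heuristic; the trend and noise amplitudes are invented for illustration.

```python
# Toy model of the weather/climate distinction: daily values are
# dominated by noise, but long-run averages of the same signal
# recover the slow underlying trend. All parameters are made up.
import random

random.seed(0)
TREND_PER_DAY = 0.0002  # slow warming signal, degrees/day (made up)
NOISE = 10.0            # daily weather fluctuations, degrees (made up)

daily = [15.0 + TREND_PER_DAY * day + random.gauss(0.0, NOISE)
         for day in range(30 * 365)]  # thirty years of "weather"

# Any single day is unpredictable to within ~10 degrees...
print(daily[0], daily[1])
# ...but decade-long averages expose the trend quite cleanly.
for decade in range(3):
    chunk = daily[decade * 3650:(decade + 1) * 3650]
    print(f"decade {decade}: mean = {sum(chunk) / len(chunk):.2f}")
```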

These contradicting feedbacks do not make the climate system incoherent to science. Similarly, the existence of various subjects with conflicting interests in capitalism does not mean that there cannot be dominant trends, or some sort of universality underlying many of the subjects. At the end of the day, basic human needs, such as housing, education, and healthcare, are approximately universal.

The fact that the human world is intelligible, and that this intelligibility is instrumentalized by our enemies – the capitalists, the military apparatus, and the technocratic state – in order to exploit and degrade the Earth and its inhabitants for capital accumulation, means that we should make use of this instrumental reason to counterattack, not just pretend that this Reason is incoherent or that it is a tool that corrupts its user. In fact, there are many examples where instrumental reason is used for “good” – for example, the concerted medical effort to cure certain diseases, which makes the human body intelligible in order to heal it. It is true, in a Foucauldian sense, that the clinic can be used for domination, but this power dynamic is just one feedback loop amongst other more positive ones, such as emancipating humanity from the structural obstacles of disability and disease. Thus, universal healthcare is proof of the use of instrumental reason for the purpose of human need and emancipation.

The usage of instrumental reason for social need and freedom harkens back to Hegel. The world Hegel promised us at the end-point of history – the world of absolute freedom – is the world where human beings become conscious of the intelligibility of history, and therefore rationally administer it in order to serve well-being and freedom. The only problem with Hegel’s perspective is that he thought history marched deterministically towards freedom. Instead, to make history and the human world intelligible for human needs is a political decision that is not predetermined by the structure of history itself. Until now, the historical march of the last couple of centuries has been towards the increasing domination of the Earth and its inhabitants for the purpose of capital accumulation. However, in the same way the ruling classes make history intelligible in order to serve profit and private property, there is no necessary reason or law that prevents using the intelligibility of history for social need. The socialist political program is precisely this – to make the human world transparent to science and reason in order to shape it into a free society dominated by human creative will, as opposed to the imperatives of toil and profit.

Against Economics, Against Rigour

I’ve been trying to grasp why mainstream economics considers itself the superior approach over heterodox disciplines like Post-Keynesianism or Marxism. After reading a couple of papers and articles, a constant argument that appears is rigour. Mainstream economics is mathematically axiomatic – it begins from a set of primitive assumptions and then derives a whole system through self-consistent mathematical formalism. Usually this is contrasted with heterodox approaches like Post-Keynesianism, which are seen as less coherent and ad-hoc, with some writers referring to Post-Keynesianism as “babylonian babble” that is no better than a pamphlet. Even when heterodox economists use mathematical modelling, their models do not follow from an axiomatic method, but are ad-hoc implementations.

What interests me about this argument is its definition of science. According to many mainstream economists, heterodox economics isn’t a science. The main reason given for this unscientific status is that heterodox economics lacks internal coherence – it is not rigorous. As mentioned above, mainstream economics claims rigour by deriving its propositions through mathematical inference that begins with a set of axioms. It is by the usage of this rigour that mainstream economics defines itself as a science.

If a field claims to be scientific, it must justify its own status. For better or worse, in the west, the status of science is epistemically privileged. In other words, an activity can assert more legitimacy than other modes of producing knowledge by claiming the mantle of science. Therefore mainstream economics, by arguing for its scientific status on the grounds of axiomatic coherence while denying that same mantle to heterodox economics, is implicitly arguing that heterodox economics is an inferior, unscientific epistemological approach.

A common retort against the mathematical rigour of economics is that its coherent mathematical frameworks don’t necessarily correlate with empirical reality, which calls the scientific status into question. However, this argument has been done to death, probably by people much smarter than me. What I find interesting is the idea that inferential coherence is a necessary condition for science. In fact, the argument being made by mainstream economists is that even if heterodox economics may arguably explain some empirical phenomena mainstream economics cannot, heterodox economics is less scientific because it lacks internal coherence. Therefore, mainstream economics claims that a necessary condition for science is rigorous logical coherence.

Where does this definition of science as rigorous logical inference come from? Only one natural science, physics, approximates this sort of rigorous coherence – that is, a set of primitive axioms that lead to a whole system of knowledge by the application of rules of mathematical inference. Even then, the mathematical rigour in physics is often inferior to that in economics, given that physicists don’t write mathematical proofs as much as economists do. The rest of the natural sciences are less rigorously formulated – many of them are a “bag of tricks” that is heuristically unified. This is because anything much more complex than a system of two interacting particles is mathematically intractable due to nonlinearities. A good example is psychology. Although psychologists assume that certain personality traits are a manifestation of chemical processes in the brain, there is no rigorous mathematical inference that connects psychology to brain chemistry – these scales are unified heuristically and qualitatively. There are similar examples in biology, where in theory morphological evolution is coupled to the chemical evolution of genes, but the rigorous, mathematical linkage of both scales is close to impossible.
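To illustrate how quickly nonlinearity defeats closed-form treatment, consider the logistic map – about the simplest nonlinear system there is, and my own stock example rather than anything from the rigour debate. It has no general closed-form solution, and nearby trajectories diverge exponentially:

```python
# The logistic map at r = 4: a one-line nonlinear system with no
# general closed-form solution and sensitive dependence on initial
# conditions. Starting points and step count are illustrative.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.30000, 0.30001  # two nearly identical initial conditions
for step in range(25):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # the trajectories have diverged to order one
```

If even this toy system resists exact treatment, heuristic unification in psychology or biology is hardly a scandal.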

How did the definition of science as mathematical inference come into being? It is certainly not the normative self-consciousness of scientists, who see themselves as Popperians. Popper’s theory treats the evolution of science as a process where propositions are falsified by empirical evidence, only to be replaced by better explanations – it does not say anything about “logical rigour”. Nor is this definition descriptive, as shown in the previous paragraph, because most natural sciences aren’t as rigorously self-coherent as mainstream economics. Weintraub argues that the current axiomatic approach of mainstream economics can be traced back to Gerard Debreu, an important french-american economist of the 20th century. In the first half of the 20th century, David Hilbert and Bourbaki (a pseudonym used by a group of french mathematicians) attempted to axiomatize mathematics, following the discovery of non-euclidean geometry in the 19th century. Before non-euclidean geometry, geometry was thought to derive its axioms intuitively from the world – the truth-values of the axioms were self-evident. An example of an “intuitive” axiom in euclidean geometry is that parallel lines don’t meet. However, 19th century mathematicians realized that they could create self-consistent, alternate geometries where parallel lines could meet. An alternative geometry that starts from non-euclidean axioms is self-consistent as long as it is rigorously inferred through mathematical rules. This led Hilbert and Bourbaki to develop a more axiomatic approach to the study of mathematics. Debreu, who learnt mathematics from the Bourbaki school, brought this axiomatic way of thinking to economics.
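For concreteness, the axiom at stake can be stated in its modern (Playfair) form; the formalization below is my own gloss, not anything from Weintraub:

```latex
% Euclid's parallel postulate, in Playfair's form: through a point P
% not on a line \ell there passes exactly one line parallel to \ell.
\forall \ell \;\forall P \notin \ell :\quad
  \exists!\, \ell' \;\text{such that}\; P \in \ell' \wedge \ell' \parallel \ell
% Hyperbolic geometry replaces "exactly one" with "infinitely many";
% elliptic geometry replaces it with "none". Both alternatives are
% internally self-consistent, which is what pushed Hilbert and
% Bourbaki towards treating axioms as formal postulates rather than
% self-evident truths about the world.
```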

Today this axiomatic approach is very obvious in the average graduate economics curriculum. For example, some of the classes emphasize writing mathematical proofs! I am very close to completing a PhD in physics, and I only experienced very basic proofs at the undergraduate level, in a linear algebra class. After that I never wrote a single proof again. Yet economics, which arguably has had less empirical success from its mathematics, requires more mathematical rigour than the average paper in physics. This tells me that the economist’s emphasis on rigour is not inspired by the example of the successful natural sciences, but is endogenous – it comes from within. If anything, it shows that mainstream economics is at most a bizarre synthesis of philosophy and mathematics, owing more to these abstract fields than to any of the existing natural sciences. Therefore, mainstream economics should be described as a mathematical philosophy rather than a science.

The case of the arbitrary rigour of economics has interesting implications for academia at large. An uncharitable person would say that the spurious mathematical rigour of economics is simply gate-keeping for a professional guild. The extremely technical skills required to master mainstream economics limit the supply of would-be economists, generating a manageable number of rent-seekers who can be paid handsomely. But this probably extends to much of academia as well. Academia is peppered with examples where “rigour” and “method” are elevated with no obvious epistemic justification. One has to wonder if appeals to rigour are, more often than not, guild building that justifies large pay-checks by limiting the supply of participants. The trope of “how many angels can dance on the head of a pin” is a famous example of this spurious rigour: medieval theologians were accused of developing beautiful, often rigorous and coherent systems that dealt with questions of no intellectual consequence. The same phenomenon probably emerges in some sectors of academia, given that rigour and opacity are a cheap way of signalling expertise to institutions in order to justify large salaries.

Finally, I think the emphasis on rigour when it is not warranted is unhealthy for democracies. Many problems that are meaningful to humanity at large, such as issues of a political and economic nature, require the mass participation of society in order to build an engaged citizenship. Spurious rigour and credentialism are ways to build a technocratic hierarchy that is not necessarily justified. In the absence of authentic knowledge, rigour becomes simply a guild-like mechanism for confining meaningful problems to a set of fake experts who decide the fate of whole nations, often in the interests of a reduced elite. A socialist, democratic society would require a more egalitarian epistemology than the one that exists today.

The World-System Versus Keynes

The most incredible modern lie is that of nation-state sovereignty. From left to right, the relative success of an administration is always interpreted as a function of endogenous variables the nation-state can supposedly control. The right wing sees the perceived failure of its society as related to the government not closing the borders, running high deficits, or allowing companies to outsource jobs. From the left’s perspective, the nation-state is simply not running high enough deficits to fund more social programs, not supporting full employment policies, or refusing to raise the minimum wage. Meanwhile, a totalizing world-system pulsates in all corners of the planet, with flows of information, commodities, securities, and dollars creating a complex system that subsumes the sovereignty of most nation-states. At the heart of this world-monster there is a hierarchy of nation-states, with some states having more influence and control over the world-system than others.

Recently, with the advent of the Great Recession in 2008, many people on the left, some of them self-proclaimed socialists, have been doubling down on the myth of national sovereignty. They see the economic crisis, and the continuous casualization of workers, as an opportunity to administer the nation-state in the “right way” to reverse these trends. They see themselves as holding secret truths and insights about the economy that neoliberals don’t truly fathom. If only these social democrats had the opportunity to apply the right ideas – ideas that they claim have been pushed out of the political and academic mainstream for venal reasons – they could fix the economy.

What are these right ideas? In the first half of the 20th century, John Maynard Keynes had already developed a toolkit for any eager leftist technocrat to manipulate in order to attenuate economic crises. In contrast to the classical economists that preceded him, he argued that sometimes the market does not clear, which generates a recession. By market clearing, I mean that the supply of commodities is balanced out by their demand; the classical doctrine that supply always generates its own matching demand is sometimes referred to as Say’s Law. An important symptom of the failure of Say’s Law is unemployment, given that there is more supply of labor than demand for it. While classical economists argued that economic crises would self-correct and eventually clear – by, for example, lowering the wages of workers or cheapening commodities – Keynes argued that these recessions could persist for a very long time without the aid of governmental fiscal and monetary policy. According to Keynes, some of the reasons markets fail to clear are: (i) workers will not accept wage cuts, (ii) recession makes investors risk averse, causing them to save their money rather than invest it, and (iii) mass unemployment and risk aversion decrease the buying of commodities.

Keynes thought that the state could force the market to clear through fiscal and monetary policy. He argued that in a recession, aggregate demand is lower than it should be, and this, in turn, causes negative feedback loops that halt the economic engine (e.g. the underconsumption of commodities). In order to stimulate demand, the state can put more money on the consumer side by: (a) public spending on infrastructure in order to employ the previously unemployed, and (b) lowering taxes so that the consumer has more available money. Meanwhile, the state can stimulate demand through monetary policy by lowering the interest rate, so that consumers and investors can buy and invest through cheap loans and credit. This monetary policy was thought to cause inflation, because it increases the money supply by allowing cheap, low-interest borrowing; but at the same time it was thought to cure the greater diseases, which were mass unemployment and low aggregate demand. Keynes’ policies often required deficit spending – the government spending more than it collects, usually by accruing debt. Furthermore, Keynesian policies tend to trigger inflation because they increase the money supply. The Keynesians, however, thought that this inflation was a necessary evil to cure unemployment. A back-of-the-envelope version of the fiscal logic is sketched below.
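To put one number on the fiscal side of this toolkit, the standard textbook spending multiplier (my illustration, not something from Keynes’ own text) says that with a marginal propensity to consume c, an injection of public spending ΔG raises output by

```latex
% Textbook Keynesian spending multiplier: each round of spending is
% re-spent at rate c, so the geometric series sums to 1/(1-c).
\Delta Y \;=\; \frac{\Delta G}{1 - c}, \qquad 0 < c < 1
```

so with c = 0.8, every dollar of public works spending ultimately raises aggregate demand by five dollars – which is why deficit spending was thought to be such a potent cure for underconsumption.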

In the 1970s, however, economic crisis displaced Keynesianism to the fringe. The rapid increase of the price of oil, coupled with a large money supply, created a crisis. High prices discouraged companies from investing, given that production costs were too expensive and inflated. The Keynesian approach to dealing with crises was not applicable, since unemployment and stagnation were coupled with inflation (stagflation), which ran contrary to the Keynesian consensus of the time. It seemed that inflationary policies, such as increasing the money supply, wouldn’t solve the stagnation and unemployment problem. In response to the crisis, some economists, like the monetarist Milton Friedman, claimed that Keynesian monetary policy was at least partly responsible for the crisis, given its inflationary nature. Friedman argued that in order to cure the recession, governments should reduce the money supply. In accordance with Friedman’s prescription, the Fed in the United States sharply increased interest rates, which ran contrary to Keynesian policy. This tightening of the money supply by the Fed is thought to have aided in the resolution of the crisis. The apparent empirical falsification of Keynesianism by the stagflation crisis, coupled with a protracted cultural war waged by free-market economists such as Hayek and Friedman, and the shift of power towards financial speculators, displaced Keynesianism into the fringe of heterodox economics where it exists today.

Nowadays Keynesianism has been rebranded into all sorts of heterodox disciplines that found a place in the Left. Keynes became a darling of the Left for three reasons: (i) melancholy for the post-WWII welfare state and cheap credit, (ii) a consumer-side perspective (e.g. a focus on aggregate demand) that seems to value working class consumers over capitalist suppliers, and (iii) the idea that capitalism is crisis-prone, in contrast to the neoliberal orthodoxy of economic equilibrium. Some of these rebranded Keynesian theories go under different names, such as Post-Keynesianism and Modern Monetary Theory. Although these Post-Keynesian theories are not exactly isomorphic to the original theories and prescriptions set out by Keynes, they all roughly agree with the main heuristics: the state should strongly intervene in the market, and an increase of the money supply and government spending should be used to counter crisis rather than neoliberal austerity. Finally, all these approaches rely on one particular assumption – the strength of nation-state sovereignty – which I will show later on to be flawed. I will focus on Modern Monetary Theory (MMT) as an example, given that it is one of the more contemporary iterations of Keynesianism.

Modern Monetary Theory’s basic premise is simple: a nation-state that issues its own currency cannot go bankrupt, given that it can print more of its own money to pay for all necessary goods and services. Another way of stating this theory is that governments don’t collect taxes in order to fund programs and services. Rather, governments literally spend money into existence, printing money in order to pay for necessary services and goods. Taxes are just the government’s mechanism to control inflation – the valve used to regulate the money supply. MMT therefore argues that since money takes the form of fiat currency, it is not constrained by scarce commodities such as gold and silver, and is instead a flexible social construct. So governments don’t need to cut social programs in order to increase revenue – they can simply spend more money into existence in order to pay for social programs. Furthermore, the government can enforce full employment by spending jobs into existence – the state can create jobs through large-scale public works, and then print the necessary money to pay the workers. In a sense, MMT is another iteration of the Keynesian monetary heuristic that increasing the money supply is a good way to solve high unemployment and crisis.
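A deliberately crude toy model of this accounting story follows; the quantity-theory price rule and every number in it are my own illustrative assumptions, not MMT doctrine.

```python
# Toy sketch of the MMT story sketched above: spending creates money,
# taxes destroy it, and taxation is the valve that keeps the price
# level in check. All parameters and the crude price rule are made up.

real_output = 1000.0   # goods/services the economy can actually produce
money_supply = 1000.0  # net money previously spent into existence

def fiscal_year(spending, tax_rate):
    """Government spends money into existence, then taxes some back."""
    global money_supply
    money_supply += spending                 # spending creates money
    money_supply -= tax_rate * money_supply  # taxes destroy money
    return money_supply / real_output        # crude price level proxy

# Spending without taxing back inflates the price proxy...
print(fiscal_year(spending=300.0, tax_rate=0.0))   # ~1.3
# ...while taxation drains money and pulls it back down.
print(fiscal_year(spending=300.0, tax_rate=0.25))  # ~1.2
```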

Imagine the potential of MMT for a leftist! The neoliberals arguing for austerity and balanced budgets are talking nonsense – the state can simply spend money into existence and thereby pay for welfare and other public services, and also use this newly minted money to employ the unemployed! If the increase of the money supply triggers inflation, the state can simply tax more, fine-tuning the quantity of money. If only the MMTers could convince the right technocrats, we wouldn’t have to deal with the infernal landscape of austerity.

However, the idealized picture presented by MMT is missing key variables. Ultimately, an MMT approach would be heavily constrained by national production bottlenecks. In order for MMT approaches to work, the increase of demand caused by the sudden injection of money must be met by the production of the desired commodities. In an ideally sovereign nation, society would be able to meet the demand for computers, medicine, or food by simply producing more of these commodities. We may refer to a country’s capacity to produce all the goods it needs as material sovereignty.

However, this is where the fundamental achilles heel of MMT (and Post-Keynesianism in general) lies. Most countries are not materially sovereign at all. Instead, they depend on imports in order to meet their demand for fundamental goods such as technology, fuel, food, or medicine. In the real world, countries have to buy forex currency (e.g. dollars) in order to be able to import necessary goods. The price of the dollar in terms of another currency is not in the control of that currency’s issuer. Instead, it is a reflection of the economic and geopolitical standing of that nation within the existing world-system. Whether the dollar is worth 20 or 30 Mexican pesos has to do with Mexico’s position in the global pecking order, and this exchange rate, if anything, can be made worse by the adoption of Keynesian policies. For example, if Mexico suddenly increases its own currency supply, the Mexican peso will simply be devalued against the american dollar, diminishing its ability to buy the necessary imports. This puts a fatal constraint on a nation-state’s ability to finance itself through simple monetary policy.
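A back-of-the-envelope illustration of the forex constraint, with entirely made-up numbers:

```python
# The same peso reserves buy fewer dollar-priced imports after a
# devaluation. Reserves, prices, and exchange rates are invented.

reserves_pesos = 600_000_000.0  # hypothetical peso reserves
unit_price_usd = 1_000_000.0    # dollar price of one unit of imports

for pesos_per_dollar in (20.0, 30.0):
    reserves_usd = reserves_pesos / pesos_per_dollar
    units = reserves_usd / unit_price_usd
    print(f"At {pesos_per_dollar:.0f} pesos/dollar: "
          f"{units:.0f} units of imports affordable")
# At 20 pesos/dollar: 30 units; at 30 pesos/dollar: only 20 units.
```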

The economic castigation of “pro-Keynesian” countries by the world-system is a cliche at this point. To name some examples: Allende’s Chile, Maduro’s Venezuela, or pre-2015 Greece. In the case of Allende, the sudden increase of the money supply through raising the minimum wage created a large unmet demand and eventually depleted the country’s forex reserves (there was also economic sabotage aided by the United States, but this only reinforces my argument). In the case of Maduro, Chavez had run large deficits, assuming that high oil revenues would last long enough. Greece overspent through massive welfare and social programs; although Greece doesn’t have its own currency, it still engaged in a high-deficit fiscal policy that led to its default. If these countries had had material sovereignty – being able to produce their own food, technology, and other necessary goods – the global order would not have been able to castigate them so harshly. Instead, what ended up happening is that foreign investors pulled out, the national currency plummeted, and forex reserves were depleted, making these governments unable to meet the national demand for necessary goods through imports or foreign capital injection.

The above scenario reveals a fundamental truth about capitalism – national economies are functions of global, exogenous variables, rather than only endogenous factors. Keynesian policy is based on the idea that nation-states are sufficiently sovereign to have economies that depend mostly on endogenous factors. If the nation-state’s economy depended solely on national variables, then a Keynesian government could simply manipulate these variables in order to get the desired outcome for its national economy. However, it turns out that nation-states are instead firms embedded in a global market, and their fate ultimately lies in the behaviour of the planetary world-system. The nation-state firm has to be competitive in the world-system in order to generate profit; this implies that inflationary policies, large debts, and state-enforced “full employment” are not necessarily healthy for the profitability of the firm. Furthermore, it means that the leftist nationalists who want, for example, to leave the eurozone in order to issue their own currency are acting from misguided principles.

Given the persistence of the totalitarianism of the world-system, no matter the utopian schemes of leftist nationalists and their fringe heterodox academics, it’s infuriating to witness how the Left has lost its tradition of internationalism. Instead, the Left, since the advent of WWII, has been pushing for “delinking” from the world-system, whether through national liberation during the 60s, or more recently, by leaving the euro-zone, fomenting balkanization in countries like Spain or the United Kingdom, etc.

The world-system can only be domesticated to pursue social need through the existence of a world socialist government. Regardless of how politically unfeasible the program of world government is, its necessity follows formally from the existence of a world-system. Only through world government could socialists have sufficient sovereignty to manipulate the economy for social need. In fact, the Keynesians indirectly point at this problem through their formalism. Post-Keynesian theories such as MMT start from the idea of a state having material sovereignty. Yet the only way for a state to have material sovereignty, and therefore be able to manipulate endogenous variables for its own economic ends, is to subsume the whole planet into some sort of unitary, democratic system. A planetary government could then manipulate variables across the planet (e.g. both in China and in the United States) to enforce social-democratic measures like full employment or a welfare state, without the risk of international agents castigating the economy, or having to import goods from “outside”. But the funny thing is that once we have global fiscal and monetary policy, Keynesianism becomes irrelevant, given that market signals can be supplanted by a planned economy.
