Ergodicity as the Solution for the Decline of Science


In a previous post I explored the decline of science as related to the decline of capitalism. A large aspect of this decline is how the increase of informational complexity leads to diminishing marginal returns in knowledge. For example, the last revolution in physics happened roughly one hundred years ago, with the advent of quantum mechanics and relativity. Since then, the number of scientists and fields has increased exponentially, and the division of labor has become ever more complex and specialized. Yet the Large Hadron Collider, the billion-dollar-per-year experiment created to probe the most fundamental aspects of theoretical physics, has failed to confirm any of the new theories in particle physics. The decline of science is coupled to the decline of capitalism in general, as specialist and institutional overhead increases exponentially across industries while GDP growth has been sluggish since the 1970s.

Right now, across scientific fields, there is an increasing concern about the overproduction of “bad science”. Recently, the medical and psychological sciences have been making headlines because of their high rates of irreproducible papers. Even in the more exact sciences there is a stagnant informational bloat, with a flurry of math bubbles, theoretical particles, and cosmological models inundating the peer-review process, in spite of billion-dollar experiments like the Large Hadron Collider confirming none of them, and with no scientific revolution (the last one was a hundred years ago) on the horizon.

There is no shortage of solutions being postulated to solve the perceived problem. Most of them are simply suggestions to make the peer-review process more rigorous and to refine the statistical techniques used for analyzing data: using Bayesian statistics instead of frequentism, encouraging the reproducibility of results, and finding ways to constrain “p-value hacking”. Sometimes bolder writers argue that there should be “interdisciplinarity”, or that scientists should talk more to philosophers, but usually these calls for “thinking outside the box” are very vague and broad.

However, most of these suggestions would simply exacerbate the problem. I would argue that the bloat of degenerative informational complexity is not due to lax standards. To give an example, let’s analyze the concept of p-value hacking. A common heuristic in the social sciences is that for a result to be significant, it should have a p-value of less than 0.05. In layman’s terms, this implies that there is only a 5 percent probability that your result is due to chance (not the exact definition, but it suffices for this example). So now you have established a “standard” that can be gamed in the same way lawyers game the law. This creates a perverse incentive, with researchers finding all sorts of clever ways of “p-hacking” their data so that it passes that standard. P-hacking ranges from conscious fraud, such as omitting the data that raises the p-value (high p-values mean your results are due to chance), to unconscious biases, like ignoring certain data points because you convince yourself they are measurement errors, all in order to protect your low and precious p-value.
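As a toy illustration of why such a standard is gameable (my own sketch, not drawn from any of the studies this post alludes to), consider how often pure noise crosses the p < 0.05 threshold when you simply run enough studies:

```python
import random
import statistics

def fake_study(n=30):
    """One 'study': compare two groups drawn from the SAME distribution,
    so any 'significant' difference is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # z-statistic for the difference in means (normal approximation)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # roughly p < 0.05, two-sided

random.seed(0)
trials = 1000
false_positives = sum(fake_study() for _ in range(trials))
print(f"{false_positives} of {trials} null studies came out 'significant'")
# Expect around 5%: a lab that runs many analyses and reports only the
# "significant" ones will reliably publish noise.
```

Selective reporting does the rest: file-drawer the other 95 percent and the published record looks like discovery.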

The more rigid rules a system has, the more is invested in “overhead” to regulate those rules and game them. This is intuitively grasped by almost everyone, hence the standard resentment against bureaucrats who take the roundabout and sluggish way to accomplish anything. In the sciences, once an important study/experiment/theorem generates a new rule, or “methodology”, this creates perverse incentive loops where scientists and researchers use this “rule” to create paper mills, which in turn are used to game citation counts. Instead of earnest research, you get an overproduction of “bad science” that amounts to the gaming of certain methodologies. String theory, which can be defined as a methodology, was established as the only game in town a couple of decades ago, which in turn pushed young theoretical physicists into investing their time and money in gaming that informational complexity, generating even more complexity. Something similar happens in the humanities, where a famous (usually French) guy establishes a methodology or rule, and the Anglo counterparts game the rule to produce concatenations of polysyllabic words. Furthermore, this fetish of informational complexity in the form of methods and rules creates a caste of “guild keepers” who are learned in these rules and accrue resources and money while excluding anybody who isn’t learned in these methodologies.

This article serves as a “microphysical” account of what leads to the degenerative informational complexity and diminishing returns I associated with modern science in my previous post. But what would be the solution to such a problem? The answer is one word: ergodicity.

As said before, science has become more specialized, complex, and bloated than ever before. However, just because science has grown exponentially, it doesn’t mean it has become more ergodic. By ergodic I specifically mean that all possible states are explored by a system. For example, a die that is thrown a large number of times would be ergodic, given that the system would access every possible face of the die. Ergodicity has a long history in thermodynamics and statistical mechanics, where physicists often have to assume that a system has accessed all its possible states. This hypothesis allows physicists to calculate quantities like pressure or temperature by making theoretical approximations of the number of states a system (e.g. a gas) has. However, we can use the concept of ergodicity to analyze social systems like “science” too.
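The die makes the idea easy to simulate (a minimal sketch of the definition, not a model of science): a fair die visits all of its states, while a loaded one never does.

```python
import random
from collections import Counter

def roll_many(weights, n=10_000):
    """Roll a six-sided die n times with the given face weights."""
    return Counter(random.choices([1, 2, 3, 4, 5, 6], weights=weights, k=n))

random.seed(1)
fair = roll_many([1, 1, 1, 1, 1, 1])    # ergodic: every face gets visited
loaded = roll_many([0, 0, 0, 0, 0, 1])  # non-ergodic: stuck on one state

print("fair die:  ", dict(sorted(fair.items())))
print("loaded die:", dict(sorted(loaded.items())))
# The fair die's long-run frequencies approach 1/6 per face; the loaded die
# never explores five of its six possible states.
```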

If science were ergodic, it would explore all possible avenues of research, and individual scientists would switch research programs frequently. Now, social systems cannot be perfectly ergodic, as they are dynamic and therefore the “number” of states grows (e.g. the number of scientists grows). But we can treat ergodicity as an idealized heuristic.

The modern world sells us ergodicity as a good thing. Often, systems describe themselves as ergodic as a defence against detractors. For example, when politicians and economists claim that capitalism is innovative, and that it gives all workers a chance at becoming rich (and rich people a chance of becoming poor), they are implicitly describing an ergodic system. Innovation implies that entrepreneurs experiment with and explore all possible market ideas so that they can discover the best ones. Similarly, social mobility implies that a person has a shot at becoming rich (or, if already rich, becoming poor) if that person lives long enough. In real life, we know that the ergodic approximation is a really poor one for capitalism, as the rich tend to stay rich and the poor stay poor. We also know that important technological innovation is often carried out by public institutions such as the American military, not the private sector. Still, the reason ergodicity is invoked is that it is viscerally appealing. We often want “new blood” in fields and niches, and we resent bureaucrats and capitalists insulated from the chaos of the market for not giving other deserving people a chance.

One of the reasons ergodicity is appealing is that there is really no recipe for innovation except experimentation and the exploration of many possible scenarios. That’s why universities often have unwritten rules against hiring their own graduate students into faculty positions – they want “new blood” from other institutions. A common (although incorrect, as described above) argument against public institutions is that they are dull and stagnant at generating new products or technologies compared to the more “grassroots” and “ergodic” market. So I think there is a common intuition amongst both laymen and many professionals that the only sure way of finding out whether something “works” is to try different experimental scenarios.

Now let’s return to science. The benefit of ergodicity in science was indirectly supported by the infamous philosopher Feyerabend. Before him, philosophers of science tried to come up with recipes for what works in science and what doesn’t. An example is Popper, who argued that science must be falsifiable. Another example is Lakatos, who came up with heuristics for what causes research programs to degenerate. Yet Feyerabend argued that the only real scientific method is that “anything goes” – an attitude he termed epistemological anarchism. He argued that scientific breakthroughs don’t usually follow any hard and fast rules, and that scientists are first and foremost opportunists.

Feyerabend got a lot of flak for these statements – his detractors accused him of relativism and anti-scientific attitudes. Feyerabend didn’t help himself, because he was often inflammatory on purpose, seeking to provoke a reaction (for example, putting astrology and science on the same epistemic level). However, I would say that in some sense he was protecting science from dogmatic scientists. To use the terminology sketched in the previous paragraphs: he was ultimately arguing for a more ergodic approach to science, so that it doesn’t fall into the dogmatic trap.

This dogmatic trap was already explained in previous paragraphs: the idea that more methods, rules, divisions, thought policing, and rigour would always lead to good science. Instead it leads to a growth of degenerative research that amounts to gaming certain rules. This in turn leads to the growth of degenerative specialists who are only experts in degenerative methods. Meanwhile, all this growth is non-ergodic, because it is based around respecting certain rules and regulations, which constrains the exploration of all possible scenarios and states. It’s like loading a die so that the six always faces up, in contrast to allowing the die to land on all possible faces.

How can we translate these abstract heuristics of ergodicity into real scientific practice? The problem with much of the philosophy of science, whether made by professional philosophers or by professional scientists unconsciously doing philosophy, is that it looks at individual practice. It comes up with a laundry list of specific rules of thumb that an individual scientist must follow to make their work scientific, including certain statistical tests and reproducibility. However, the problems are social and institutional, not individual.

What is the social and institutional solution? Proposing solutions is harder than describing the problem. However, I always try to sketch a solution, because I think criticism without proposing something is somewhat cowardly – it avoids opening yourself up to criticism from readers.

The main heuristic for solving these problems should be collapsing the informational complexity in a planned, transparent, and accountable way. As mentioned before, this informational complexity is like a cancer that keeps growing, and its source is probably methodological dogmatism, where complex overhead becomes bloated as researchers find increasingly convoluted ways of “gaming” the rules. Here are some suggestions for collapsing complexity:

  1. Cut administrative bloat and instead rotate academics through the essential administrative posts.
  2. Get rid of the peer-review system and instead use an open system, similar to arXiv.
  3. Collapse some of the academic departments into bigger ones. For example, much of theoretical physics has more in common with mathematics and philosophy than with the more experimental branches of physics. Departments should be reorganized so that people with more similarities interact with each other.
  4. Create an egalitarian funding scheme, based more on divisions between theory and experiment than between departments. Everyone involved in the same category should receive the same minimum amount of funding, with funding quantities based on how many resources a specific type of work would realistically require. For example, a theoretical physicist who uses only pencil, paper, and a personal computer has, financially, a lot in common with a sociologist who does the same.
  5. Beyond the minimum funding outlined above, excess funding should be decided democratically, with input from outside the professions.
  6. Abolish the distinction between tenured professors and adjuncts. Instead, everyone should teach.

Hopefully the destruction of admin bloat and of the adjunct/tenure distinction would release resources that could be spent on hiring researchers, instead of relying on bad heuristics such as publication and citation counts as filters for new hires.

Many of these recommendations cannot be seen in the abstract, since the university is intimately coupled to society and the economy as a whole. For example, part of the admin bloat comes from legal liabilities and from the state offloading some of its responsibilities onto universities. Number 6 would require a radical reconfiguration of society in general. Number 5 couldn’t be enacted today, since “democratic” institutions are composed of non-ergodic, technocratic lifers.

This takes me to the political conclusion that the problems of science should be seen as the problems of society as a whole. The only sure way to find solutions for problems is an ergodic approach. Right now, the state is non-ergodic; that is, it is occupied and controlled by political and bureaucratic lifers. These non-ergodic bureaucracies in turn generate informational complexity, as new regulations and “rules” are imposed by the same caste of degenerative professionals, which in turn requires even more complex overhead. Instead, the State (and, in a socialist society, the means of production) should have a combination of democratic and sortition mechanisms that makes it impossible for individuals to stay too long in power. This democratic vision should be supported by broad and free education programs that train individuals with the knowledge required to rule themselves in a republican way. Not only does this method guarantee more equality, but it also turns society into a great parallelized computer that solves problems by ergodic trial and error, through the introduction of new blood, sortition, and democratic accountability.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.


The Decline of Science, The Decline of Capitalism


Can another Einstein exist in this era? A better question is whether the spirit of his research program could emerge again in our current predicament. By his research program, I mean the activity that, through a few thought experiments and heuristics, grasped fundamental principles that revolutionized not only physics but our whole ontology in general. Through a combination of imagination and mathematical prowess – such as imagining himself riding a beam of light, and then translating this image into the language of geometry – he revolutionized our most fundamental intuitions of space and time.

Fast forward a hundred years, and physics has become increasingly specialized and fractal-like, with theoretical physics atomized across many sub-disciplines. Given this complex landscape, there is simply not enough bandwidth to engage the informational complexity of all relevant fields in order to grasp at something both holistic and fundamental. Instead, scientific knowledge is atomized among various disciplines. And although this division of labor and increased informational complexity has a legitimate logic – many fields truly have become more specialized and complex in a useful, authentic sense – this complexity has decreasing marginal returns. We can see this effect in some of the paper mills of theoretical physics, with theory after theory that may have only tenuous links with the facts of the world. At some point, the complexity and the literature grew exponentially, outpacing empirical confirmation.

One of the most striking examples of the diminishing returns of complexity is the lack of revolutionary shifts in theoretical physics. The last major physics revolutions, quantum mechanics and relativity, happened roughly a hundred years ago. This is in spite of the huge increase in the number of scientists and disciplines over the last century. There is no shortage of models and theories, yet the creation of novel predictions and empirical confirmations is slowing down, as evidenced by the inability of expensive particle physics experiments to confirm any of the new particles conjectured by the last generation of theoretical physicists. In other words, to use Lakatos’ ideas, theoretical physics is degenerating, because there is an exponential increase in informational complexity without much empirical content backing it. In short, all the new and expensive scientists, computers, theories (e.g. supersymmetry, string theory) and cryptic fields are generating diminishing returns in knowledge.

However, it is not only the academic sciences that are degenerating. In this stage of capitalism, the degenerative research program is universal. This universal research program includes all relevant fields of human inquiry and knowledge. Therefore, this degeneration not only exists at the apex of academia; it dwells in any institution meant for problem solving. We find a decrease in productivity across many industries and the economy as a whole, which signals diminishing returns on complexity. In all these parts of society there is an increase of expensive complexity that yields diminishing returns. Since all these institutions are problem-solving, and use some sort of method/episteme, we can say that their theories of the world are degenerative, in analogy to the Lakatosian concept of the degenerative research program. In spite of their bloat in specialists, the marginal returns in the “knowledge” necessary for production decrease.

Perhaps the most incredible aspect of this decline is the existence of experts in almost wholly degenerative methods. As degenerative methods – methods that don’t have much empirical backing – exponentially increase in volume, the informational complexity needs more specialists to manage it, and these experts are specialized almost entirely in these decaying methods. Economists and string theorists are the quintessential examples of degenerative professionals.

This degeneration of the universal research program, and with it the creation of a degenerative caste of professionals, has not gone unnoticed by the population. This decline has probably fueled part of the anti-intellectual and anti-technocratic wave that brought Trump to power. For example, people often complain about the increasing inaccessibility of academic literature, with its overproduction of obscure jargon. Another example is the knee-jerk hatred for administrators, managers, and other technocratic professionals who are seen as doing increasingly abstracted work that may not connect with what is happening on the ground. For instance, a common target of this criticism is the admin bloat that festers in universities.

This abstract process of the degenerative research program is linked to the health of capitalism in a two-way feedback loop, given that it is through problem solving that capitalism develops technological and economic growth. Perhaps we can understand the health of capitalism better by referring to the ideas of the anthropologist Joseph Tainter. Tainter argues that societies are fundamentally problem-solving machines, and that they add complexity in the form of institutions, specialists, bureaucrats, and information in order to increase their capacity to solve problems in the short term. For example, the early irrigation systems of Mesopotamian civilizations, crucial for agriculture and therefore survival, created their own layer of specialists to manage them.

However, complexity is expensive, as it adds more energy and resource usage per capita. Furthermore, the problem-solving ability of institutions yields diminishing returns as more expensive complexity is added. At some point, complex societies end up with a very expensive layer of managers, specialists, and bureaucrats who are no longer able to deliver in problem solving. Soon, because the complexity is no longer making society more productive, the economic base, such as agricultural output, cannot grow as fast as the expensive complexity, and society collapses. This collapse resets complexity by producing simpler societies. Tainter argues that this was the fate of many ancient empires and civilizations, such as the Romans, Olmecs, and Mayans. So Tainter is arguing for a theory of decline of the mode of production, where modes of production are “cyclical” and have an ascendant and a descendant stage. Using this picture, we can begin to identify a stage of capitalism in decline.
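Tainter’s curve is easy to caricature numerically (my own toy sketch, not a model from his book): let the problem-solving benefit of complexity grow like its square root while its maintenance cost grows linearly, and the net payoff rises, peaks, and then falls.

```python
import math

COST_PER_UNIT = 0.1  # hypothetical maintenance cost per unit of complexity

for c in [1, 10, 25, 50, 100, 200]:
    benefit = math.sqrt(c)                       # concave: diminishing returns
    marginal = math.sqrt(c + 1) - math.sqrt(c)   # benefit of one more unit
    net = benefit - COST_PER_UNIT * c            # payoff after overhead costs
    print(f"complexity={c:4d}  net payoff={net:6.2f}  marginal benefit={marginal:.3f}")

# Net payoff peaks at complexity = 25 and then declines: past that point every
# added specialist or bureaucrat costs more than the problems they solve.
```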

This decline of capitalism has plenty of empirical evidence. “Bourgeois” think-tanks like the Brookings Institution argue that productivity growth has declined since the 1970s. Marxist economists like Michael Roberts assert that the empirical data show that the rate of profit in the US has fallen since the late 1940s. Not to mention the recent Great Recession of 2008. This economic and material decline is linked to the degenerative research program, as the expensive complexity of degenerative institutions expands faster than the economic base (e.g. GDP). For example, the exponential growth of administrators in healthcare and universities at the expense of physicians and professors is symptomatic of this degeneration.

The degeneration of the universal research program has two important consequences. First, a large share of the authority figures who base their expertise on credentials are illegitimate. If they are part of a degenerative caste of professionals (politicians, economists, etc.), they cannot claim authority on relevant knowledge, because their whole method is corrupted. This implies that socialists should not feel intimidated by the credentials and resumés of the technocrats closest to power. As mentioned before, right-wing populists such as Trump partially understand this phenomenon, which has unleashed his reactionary electorate against the “limousine liberals” and “deep-statists” in Washington D.C. It’s time for us socialists to understand that particular truth, and not be afraid to counter the supposed realism and expertise of the neoliberal center. The second consequence is that our methods of inquiry, such as science or philosophy, have stalled. Instead, the feedback loop of complexity creates more degenerative specialists who are experts in an informational complexity that has only a tenuous connection with the facts of the world. Whole PhDs are built on degenerative methods – for example, scientists specializing in some particular theoretical framework in physics that has not been validated empirically.

What is the socialist approach to the degeneration of the research program? Although one cannot say that socialists will not suffer from similar problems, given that informational complexity will always be required when dealing with our complex civilization, capitalism has particularly perverse incentives for degenerative research programs. For example, the way the degenerative research program survives is through gate-keeping that safeguards the division of labor for well paid and powerful professionals. An obvious example is current professional economics, which largely requires the absorption of sophisticated graduate-level math in order to enter the profession, even if those mathematical models are largely degenerative. In the political landscape at large, the State is composed of career politicians and technocrats, who safeguard their positions through undemocratic gate-keeping in the form of elite networking and resumé padding. The rationale for this gate-keeping is that these rent-seekers accrue power and wealth through the protection of their degenerative research programs. Furthermore, capitalism accelerates the fracturing of the division of labor as it pursues short-term productivity at all costs, even when this complexity becomes expensive and a liability in the long term.

The socialist cure for the degeneration of the research program could consist of two main ingredients. First, institutions that command vast control over society and its resources should democratize and rotate their functionaries and “researchers”. For example, in the case of the State, a socialist approach would eliminate the existence of career politicians by imposing stringent term limits and making many functionaries, such as judges, accountable to democratic will. Since there are diminishing returns in knowledge through specialization and informational complexity, a broad public education (up to the university bachelor level) could guarantee a sufficiently educated body of citizens to partake in the day-to-day affairs of the State. Instead of a caste of degenerative professionals controlling the State, an educated body of worker-citizens could run its day-to-day affairs through a combination of sortition, democracy, and stringent term limits.

The second ingredient consists of downsizing much of the complexity by focusing on the reduction of the work-day through economic planning. Since one of the main tenets of socialism is to reduce the work-day so that society is ruled by the imperatives of free time rather than the compulsion of toil, this would require the elimination of industries that do not satisfy social need (finance, real estate, some of the service sector, some aspects of academia) in order to create a leaner, more minimal state. Once the work-day is reduced to only what is necessary for the self-reproduction of society, there will be free time for people to partake in whichever research program they choose. Doing so may give rise to alternative research programs that don’t require the mastering of immense informational complexity to partake in. Perhaps the next scientific revolution can only arise by making science more democratic and free. This vision contrasts with the elitist science that exists today, which is at the mercy of hyper-specialized professionals who are unable to have a holistic, bird’s-eye view of the field, and are therefore unable to grasp the fundamental laws of reality.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

On Hegel and the Intelligibility of the Human World


I’ve been studying Hegel lately because I find value in his idea that history has an objective structure and is intelligible. He argued that History is rational, and that its chain of causes and effects can therefore be understood by Reason. I deeply believe in the intelligibility of history and the human world at large, as I advocate for the human world to be administered in a planned and democratic way, which requires the possibility of scientific understanding. In contrast, many contemporary thinkers are extremely skeptical about the intelligibility of the human world. For example, many economists proclaim that socialist planning is flawed because the supply and demand of goods cannot be made rationally intelligible to planners. We see similar arguments from the Left in the form of post-structuralist attacks against the “master narratives” that seek to unearth the rational structure of the human world. For example, contemporary criticisms of the Enlightenment sometimes argue that the same reason used to understand the world is used to dominate human beings, because Reason starts to see humans as stacks of labor power to be manipulated for some instrumental end.

However, in my opinion, to deny the intelligibility of the human world, or to deny that this intelligibility can ever be used for emancipation, is to deny the possibility of politics, for political actors must have a theory of where history is marching – in other words, of “in what direction the wind blows”. Political agents need to ground themselves in a world-theory so they can propose a political program that would either change the direction of history towards another, preferred course, or enhance the direction it is taking right now. The IMF, Bretton Woods, the Iraq War, the current austerity onslaught, etc. have or had an army of politicians, intellectuals, and technocrats wielding scientific reason, trying to grasp where the current of history flows, and developing policy in line with their world-theory. In light of our “enemies” (the capitalist state, empire) using a scientific understanding of history in order to destroy the world, I will attempt to instrumentalize my reading of Hegel in order to make a case for a socialist intelligibility of the human world, one whose purpose is to free humanity through the use of socialist planning. I am, however, not trained in philosophy, so my reading of Hegel may not be entirely accurate – yet accuracy isn’t really my goal so much as using him as an inspiration for making my case.

Hegel and many thinkers of the 19th century were optimistic about uncovering the laws of motion that drive history, and thus the evolution of the human world. Hegel thought that history was intellectually intelligible insofar as it can be rationally understood as marching in a certain rational direction – towards freedom – even if the human beings who make this history are often driven by irrational desires. For example, Hegel thought the French Revolution, following the evolutionary path of history, brought about the progress of freedom in spite of its actors being driven by desires that concretely had nothing to do with freedom (e.g. glory, self-interest, revenge). To Hegel, the French Revolution was a logically necessary event that followed from a determined motion of history towards freedom. In parallel, Marx, who “turned Hegel on his head”, thought that the human world could be understood as a function of the underlying economic structure (e.g. capitalism or feudalism) and its class composition. Furthermore, Marx argued that the working class, due to its objective socio-economic position as the producer of the world’s wealth, could bring about socialism.

Not only were Hegel and Marx optimistic about the intelligibility of the human world, but they believed that a liberated society would make use of this intelligibility to make humans free. In the case of Hegel, he thought that the end of history would be realized by a rational State that scaffolds people’s freedom by making them masters of a world they can understand and manipulate in order to realize their liberties and rights. This is why Hegel thought the French Revolution revealed the structure of history, as this event demanded that the laws of government become based on reason and serve human freedom. In the case of Marx and his socialist descendants, the fact that the economy is intelligible means that a socialist society could administer it for social need, as opposed to the random, anarchic, and crisis-ridden chaos of capitalism. The socialist case for the intelligibility of the human world gave rise to very ambitious and totalizing political programs, with calls for the economy to be planned for the sake of social need, and with the working class as the coherent agent for enacting this political program. These totalizing socialist narratives are sometimes described by Marxists as programmatism: the phenomenon of coherent socialist parties with grandiose and ambitious political programs for restructuring the world through the universal agency of the working class.

However, from the 20th century onwards, much intellectual activity was spent arguing against this intelligibility of the human world, and therefore against the totalizing socialist program. In the economic sphere, Hayek argued that the economy was too complicated and fine-grained to be consciously understood by human actors, making conscious economic planning an impossibility. From the Left, post-structuralist theorists attacked the idea that there exist underlying, objective structures that steer and scaffold the human world. Philosophers such as Laclau and Lyotard criticized nineteenth-century thinkers like Marx and Hegel for their totalizing narratives of how history marches and for the certainty of their scientific approaches to the world. In many ways these post-structuralist and marginalist views do reflect a certain aspect of the current political landscape. The market in the West has liberalized considerably since World War II, expanding the role of price signals in directing the distribution of goods, which seems to echo Hayek’s propositions. In western liberal democracies, electoral politics is often interpreted as a heterogeneous and conflicting space formed of different identities and interest groups, each pushing its own agenda without a discernible universal feature that binds them all – which echoes the post-structuralist attack against Marxist and Hegelian appeals to universalism. Furthermore, the decline of Marxism, anarchism, and other radical political movements that posited a coherent revolutionary actor, such as the working class, gives even more credence to the post-structuralist insistence that the social world cannot be made intelligible by totalizing and “scientific” theories.


However, these attacks on the intelligibility of the human world miss a crucial point, which makes the critique fatally flawed. These attacks only feature as evidence the ideological justifications of the ruling class and the defeat of the programmatic Left. It is true that Hayekian marginalism is used as “proof” that the economic world is not intelligible to the human mind, thereby justifying increasing neoliberalization. And the totalizing social movements of the early 20th century, with coherent political programs and revolutionary subjects, have been almost completely supplanted by heterogeneous, big-tent movementism. Yet the ruling classes – those who control the State – still act from the perspective that the human world is intelligible. The State’s actors cannot make political interventions without assuming a theory of how the human world works and a self-consciousness of their own function in “steering” this human world towards a specific set of economic and social objectives. For example, the whole military and intelligence apparatus of the United States scientifically studies the geopolitical order of the modern world in order to apply policy that guarantees the economic and political supremacy of the American Empire. Governments have economic policies that emerge from trying to understand the laws of motion of capitalism and from using that understanding to administer the nation-state on a rational basis.

The skeptics of the intelligibility of the human world could protest the above assertions in different ways. One protestation could be that the existence of the technocratic state still does not reveal some universally coherent ruling class. In other words, there is no bourgeoisie, no “banksters”, no other identifiable subjects that control the technocratic state for some identifiable reason – the State is simply an autonomous machine with no coherent trajectory or narrative. A second protestation is inherent in some interpretations of Adorno’s and Horkheimer’s Dialectic of Enlightenment: to make the human world intelligible to science is a method of domination, whereby human beings can be instrumentalized into stacks of labor power to be manipulated and administered. Furthermore, according to this criticism of Enlightenment, those particularities of the human world that cannot be scientifically captured are violently forced to fit certain universals – for example, the violence Canada did unto First Nations when it attempted to “anglicize” them by abusing and destroying them in Residential Schools.

Curiously, this second protestation – that rationality is used to scientifically dissect the human world in order to dominate it – shows the weakness of the whole counter-rational project. The ruling classes do make the human world intelligible for domination, through their technocrats, wonks, and economists. But the key idea here is that they administer the world in the name of an objective that does not treat social need as its end. The behavior of the State does indeed show that the human world and history are intelligible – it’s just that this intelligibility is instrumentalized in favour of some anti-human end. In reply to the first protestation, about how it is impossible to recognize a universal subject and the end the technocratic state pursues, I will say that the complexity of world capitalism does not imply that it has no dominant trends that can be analyzed. It just happens that systems experience various tendencies, some in conflict with each other, which can still be understood scientifically from a bird’s-eye view. For example, one of the key trajectories of the modern capitalist state is safeguarding the institution of private property and attempting to stimulate capital accumulation (e.g. GDP growth) – this is certainly an intelligible aspect of modern world history. The existence of conflicting trends within the State that counter the feedback of capital accumulation, such as inefficiencies caused by rent-seekers and corruption, only means that the State (and the human world) are complex systems with counteracting feedback loops, not that these objects cannot be made intelligible by scientific reason in order to understand them and ultimately change them.

The existence of contradicting feedback loops embedded in a complex system is not an argument against the scientific understanding of the human world. One can still try to understand the various emergent properties even if they contradict each other. For example, a very politicized complex system today is the climate. Although we cannot predict the weather – the atmospheric properties of a ten-square-kilometer patch on a specific day – we can predict the climate, the averaged-out atmospheric properties of the whole Earth over tens of years. For example, we have a very good idea of how the average temperature of the Earth evolves. In the case of the human world, the same heuristic applies: we cannot understand everything that happens at the granular level, but we can have ideas about the average properties integrated throughout the whole human world. Similarly, the climate system has counteracting feedbacks; for example, clouds may decrease the temperature of the Earth by reflecting solar radiation into outer space, while at the same time heating the Earth through the greenhouse effect of water vapour.
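A standard toy system makes the weather/climate point concrete (a minimal sketch using the chaotic logistic map – my own illustration, not a climate model): individual trajectories are unpredictable, but their long-run average is stable.

```python
def trajectory(x0, steps=100_000, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic for r = 4.
    Returns the final state ("weather") and the long-run mean ("climate")."""
    x, total = x0, 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        total += x
    return x, total / steps

for x0 in [0.2, 0.2000001, 0.7]:
    final, mean = trajectory(x0)
    print(f"x0={x0:<10} final state={final:.6f}  long-run mean={mean:.4f}")

# Final states differ wildly even for nearly identical starting points, while
# the long-run means all converge near 0.5: the fine grain is unpredictable,
# the averaged behavior is not.
```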

These contradicting feedbacks do not make the climate system incoherent to science. Similarly, the existence of various subjects with conflicting interests in capitalism does not mean that there cannot be dominant trends, or some sort of universality underlying many of the subjects. At the end of the day, basic human needs, such as housing, education, and healthcare, are approximately universal.

The fact that the human world is intelligible, and that this intelligibility is instrumentalized by our enemies – the capitalists, the military apparatus, and the technocratic state – in order to exploit and degrade the Earth and its inhabitants for capital accumulation, means that we should make use of this instrumental reason to counterattack, not just pretend that this Reason is incoherent or that it is a tool that corrupts its user. In fact, there are many examples where instrumental reason is used for “good” – for example, the concerted medical effort to cure certain diseases, which makes the human body intelligible in order to heal it. At the same time, in a Foucauldian sense, it is true that the clinic can be used for domination, but this power dynamic is just one feedback loop amongst other, more positive ones, such as emancipating humanity from the structural obstacles of disability and disease. Thus, universal healthcare is proof of the use of instrumental reason for the purpose of human need and emancipation.

The usage of instrumental reason for social need and freedom harkens back to Hegel. The world Hegel promised us at the end-point of history – the world of absolute freedom – is a world where human beings become conscious of the intelligibility of history, and therefore rationally administer history in order to serve well-being and freedom. The only problem with Hegel’s perspective is that he thought history marched deterministically towards freedom. Instead, to make history and the human world intelligible for human needs is a political decision that is not predetermined by the structure of history itself. Until now, the historical march of the last couple of centuries has been towards the increasing domination of the Earth and its inhabitants for the purpose of capital accumulation. However, in the same way the ruling classes make history intelligible in order to serve profit and private property, there is no necessary reason or law that prevents using the intelligibility of history for social need. The socialist political program is precisely this: to make the human world transparent to science and reason in order to shape it into a free society dominated by human creative will, as opposed to the imperatives of toil and profit.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

Against Economics, Against Rigour


I’ve been trying to grasp why mainstream economics considers itself superior to heterodox approaches like Post-Keynesianism or Marxism. After reading a couple of papers and articles, a constant argument that appears is that of rigour. Mainstream economics is mathematically axiomatic; that is, it begins from a set of primitive assumptions and then derives a whole system through self-consistent mathematical formalism. Usually this is contrasted with heterodox approaches like Post-Keynesianism, which are seen as less coherent and ad hoc, with some writers referring to Post-Keynesianism as “Babylonian babble”, no better than a pamphlet. Even when heterodox economists use mathematical modelling, their models do not follow from an axiomatic method, but are ad hoc implementations.

What interests me about this argument is its definition of science. According to many mainstream economists, heterodox economics isn’t a science. The main reason given for the unscientific status of heterodox economics is that it lacks internal coherence – that it is not rigorous. As mentioned above, mainstream economics claims rigour by deriving its propositions from mathematical inference that begins with a set of axioms. It is by the usage of this rigour that mainstream economics defines itself as a science.

If a field claims to be scientific, it must justify its own status. For better or worse, in the West, the status of science is epistemically privileged. In other words, an activity can assert more legitimacy than other modes of producing knowledge by claiming the mantle of science. Therefore, mainstream economics, by arguing for its scientific status on the basis of axiomatic coherence while denying that same mantle to heterodox economics, is implicitly arguing that heterodox economics is an inferior, unscientific epistemological approach.

A common retort against the mathematical rigour of economics is that its coherent mathematical frameworks don’t necessarily correlate with empirical reality, which calls its scientific status into question. However, this argument has been done to death, probably by people much smarter than me. What I find interesting is the idea that inferential coherence is a necessary condition for science. In fact, the argument being made by mainstream economists is that even if heterodox economics may arguably be able to explain some empirical phenomena mainstream economics cannot, heterodox economics is less scientific because it lacks internal coherence. In other words, mainstream economics claims that a necessary condition for science is rigorous logical coherence.

Where does this definition of science as rigorous logical inference come from? Only one natural science, physics, approximates this sort of rigorous coherence – that is, a set of primitive axioms leading to a whole system of knowledge by the application of rules of mathematical inference. Even then, the mathematical rigour in physics is often inferior to that of economics, given that physicists don’t write mathematical proofs as much as economists do. The rest of the natural sciences are less rigorously formulated – many of them are a “bag of tricks” that is heuristically unified. This is because anything more complex than a system of two interacting particles is mathematically intractable due to nonlinearities. A good example is psychology. Although psychologists assume that certain personality traits are a manifestation of chemical processes in the brain, there is no rigorous mathematical inference that connects psychology to brain chemistry – these scales are unified heuristically and qualitatively. There are similar examples in biology, where, in theory, morphological evolution is coupled to the chemical evolution of genes, but a rigorous mathematical linkage of the two scales is close to impossible.

How did this definition of science as mathematical inference come into being? It is certainly not the normative self-consciousness of scientists, who see themselves as Popperian. Popper’s theory treats the evolution of science as a process where propositions are falsified by empirical evidence only to be replaced by better explanations – it says nothing about “logical rigour”. Nor is this definition descriptive, as shown in the previous paragraph, because most natural sciences aren’t as rigorously self-coherent as mainstream economics. Weintraub argues that the current axiomatic approach of mainstream economics can be traced back to Gérard Debreu, an important French-American economist of the 20th century. In the first half of the 20th century, David Hilbert and Bourbaki (a pseudonym used by a group of French mathematicians) attempted to axiomatize mathematics, prompted by the discovery of non-Euclidean geometry in the 19th century. Before non-Euclidean geometry, geometry was thought to derive its axioms intuitively from the world – the truth-value of the axioms was self-evident. An example of an “intuitive” axiom in Euclidean geometry is the parallel postulate, which amounts to saying that through a point not on a given line there passes exactly one line that never meets it. However, 19th-century mathematicians realized that they could create self-consistent, alternate geometries in which this axiom fails. An alternative geometry starting from non-Euclidean axioms was self-consistent if rigorously inferred through mathematical rules. This led Hilbert and Bourbaki to develop a more axiomatic approach to the study of mathematics. Debreu, who learnt mathematics from the Bourbaki school, brought this axiomatic way of thinking to economics.
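For the curious, the standard (Playfair) formulations can be stated compactly – a textbook summary, not something drawn from Weintraub:

```latex
% Playfair's form of the parallel postulate and its non-Euclidean alternatives
\begin{description}
  \item[Euclidean:] through a point $P$ not on a line $\ell$ there is
    \emph{exactly one} line that does not meet $\ell$.
  \item[Hyperbolic:] through $P$ there are \emph{infinitely many} lines
    that do not meet $\ell$.
  \item[Elliptic:] there are \emph{no} such lines; any two lines meet.
\end{description}
```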

Today this axiomatic approach is very obvious in the average graduate economics curriculum. For example, some of the classes emphasize writing mathematical proofs! I am very close to completing a PhD in physics, and I only encountered very basic proofs at the undergraduate level, in a linear algebra class. After that I never wrote a single proof again. Yet economics, which arguably has had less empirical success from its mathematics, requires more mathematical rigour than the average paper in physics. This tells me that the economists’ emphasis on rigour is not inspired by the example of the successful natural sciences, but is endogenous – it comes from within. If anything, it shows that mainstream economics is at most a bizarre synthesis of philosophy and mathematics, owing more to these abstract fields than to any of the existing natural sciences. Therefore, mainstream economics should be described as a mathematical philosophy rather than a science.

The case of the arbitrary rigour of economics has interesting implications for academia at large. An uncharitable person would say that the spurious mathematical rigour of economics is simply gate-keeping for a professional guild. The extremely technical skills required to master mainstream economics limit the supply of would-be economists, generating a manageable number of rent-seekers who can be paid handsomely. But this probably extends to much of academia as well. Academia is peppered with examples where “rigour” and “method” are elevated with no obvious epistemic justification. One has to wonder whether appeals to rigour are, more often than not, guild-building meant to justify large paychecks by limiting the supply of participants. The trope of “how many angels can dance on the head of a pin” is a famous example of this spurious rigour: medieval theologians were accused of developing beautiful, often rigorous and coherent systems that dealt with questions of no intellectual consequence. The same phenomenon probably emerges in some sectors of academia, given that rigour and opacity are a cheap way of signalling expertise to institutions in order to justify large salaries.

Finally, I think the emphasis on rigour where it is not warranted is unhealthy for democracies. Many problems that are meaningful to humanity at large, such as issues of a political and economic nature, require the mass participation of society in order to build an engaged citizenship. Spurious rigour and credentialism are ways to build a technocratic hierarchy that is not necessarily justified. In the absence of authentic knowledge, rigour becomes simply a guild-like mechanism for confining meaningful problems to a set of fake experts who decide the fate of whole nations, often in the interests of a small elite. A socialist, democratic society would require a more egalitarian epistemology than the one that exists today.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

The World-System Versus Keynes


The most incredible modern lie is that of nation-state sovereignty. From left to right, the relative success of an administration is always interpreted as a function of endogenous variables the nation-state can supposedly control. In the case of the right wing, they see the perceived failure of their society as related to the government not closing the borders, running high deficits, or allowing companies to outsource jobs. From the left’s perspective, the nation-state is simply not running high enough deficits to fund more social programs, not supporting full-employment policies, or refusing to raise the minimum wage. Meanwhile, a totalizing world-system pulsates in all corners of the planet, with flows of information, commodities, securities, and dollars creating a complex system that subsumes the sovereignty of most nation-states. At the heart of this world-monster there is a hierarchy of nation-states, with some states having more influence and control over the world-system than others.

Recently, with the advent of the Great Recession in 2008, many people on the left, some of them self-proclaimed socialists, have been doubling down on the myth of national sovereignty. They see the economic crisis, and the continuous casualization of workers, as an opportunity to administer the nation-state in the “right way” to reverse these trends. They see themselves as holding secret truths and insights about the economy that neoliberals don’t truly fathom. If only these social democrats had the opportunity to apply the right ideas – ideas that they claim have been pushed out of the political and academic mainstream for venal reasons – they could fix the economy.

What are these right ideas? In the first half of the 20th century, John Maynard Keynes had already developed a toolkit for any eager leftist technocrat to manipulate in order to attenuate economic crises. In contrast to the classical economists who preceded him, he argued that sometimes the market did not clear, which generated a recession. By market clearing, I mean that the supply of commodities is balanced out by their demand; the claim that this always happens is sometimes referred to as Say’s Law. Another important aspect of the failure of Say’s Law is the existence of unemployment, given that there is more supply of labor than demand for it. While classical economists argued that economic crises would self-correct and markets eventually clear – by, for example, lowering the wages of workers or cheapening commodities – Keynes argued that recessions could persist for a very long time without the aid of governmental fiscal and monetary policy. According to Keynes, some of the reasons markets fail to clear are: (i) workers will not accept wage cuts, (ii) recession makes investors risk-averse, causing them to save their money rather than invest it, and (iii) mass unemployment and risk aversion decrease the buying of commodities.

Keynes thought that the state could force the market to clear through fiscal and monetary policy. He argued that in a recession, aggregate demand is lower than it should be, and this in turn causes negative feedback loops that halt the economic engine (e.g. the underconsumption of commodities). In order to stimulate demand, the state could increase the amount of money on the consumer side by (a) public spending on infrastructure in order to employ the previously unemployed, and (b) lowering taxes so that consumers have more money available. Meanwhile, the state could stimulate demand through monetary policy by lowering the interest rate, so that consumers and investors can buy and invest through cheap loans and credit. This monetary policy was thought to cause inflation, because it would increase the money supply by allowing low-interest, cheap borrowing; but at the same time it was thought to cure the greater diseases, which were mass unemployment and low aggregate demand. Keynes’ policies often required deficit spending – the government spending more than it collects, usually by accruing debt. Furthermore, Keynesian policies tend to trigger inflation because they increase the money supply. However, the Keynesians thought that this inflation was a necessary evil to cure unemployment.
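The fiscal side of this logic is usually compressed into the textbook Keynesian multiplier (a standard simplification, not Keynes’s full argument): if consumers spend a fraction $c$ of each extra dollar of income, an injection of government spending $\Delta G$ recirculates through the economy as

```latex
\Delta Y \;=\; \Delta G \left(1 + c + c^{2} + \cdots\right)
        \;=\; \frac{\Delta G}{1 - c},
\qquad 0 < c < 1 .
```

With a marginal propensity to consume of, say, c = 0.8, one dollar of public spending generates roughly five dollars of total demand – which is why deficit spending looked like such a powerful lever against recessions.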

In the 1970s, however, economic crisis displaced Keynesianism to the fringe. The rapid increase in the price of oil, coupled with a large money supply, created a crisis. High prices discouraged companies from investing, given that production costs were too expensive and inflated. The Keynesian approach to dealing with crises was not applicable, since unemployment was coupled with low demand and inflation (stagflation), which ran contrary to the Keynesian consensus of the time. So it seemed that inflationary policies, such as increasing the money supply, wouldn’t solve the stagnation and unemployment problem. In response to the crisis, some economists, like the monetarist Milton Friedman, claimed that Keynesian monetary policy was at least partly responsible for the crisis, given its inflationary nature. Friedman argued that in order to cure the recession, governments should reduce the money supply. In accordance with Friedman’s prescription, the Fed in the United States sharply increased interest rates, contrary to Keynesian policy. This tightening of the money supply by the Fed is thought to have aided in the resolution of the crisis. The apparent empirical falsification of Keynesianism by the stagflation crisis, coupled with a protracted cultural war waged by pro-market economists such as Hayek and Friedman, and the shift of power towards financial speculators, displaced Keynesianism into the fringe of heterodox economics where it exists today.

Nowadays Keynesianism has been rebranded into all sorts of heterodox disciplines that have found a place in the Left. Keynes became a darling of the Left for three reasons: (i) nostalgia for the post-WWII welfare state and cheap credit, (ii) a consumer-side perspective (e.g. the focus on aggregate demand) that seems to value working class consumers over capitalist suppliers, and (iii) the idea that capitalism is crisis prone, in contrast to the neoliberal orthodoxy of economic equilibrium.  Some of these rebranded Keynesian theories go under different names, such as Post-Keynesianism and Modern Monetary Theory.  Although these Post-Keynesian theories are not exactly isomorphic to the original theories and prescriptions set by Keynes, they all roughly agree with the main heuristics: the state should strongly intervene in the market, and an increase of the money supply and government spending, rather than neoliberal austerity, should be used to counter crises.  Finally, all these approaches rely on one particular assumption (which I will show later to be flawed): the strength of the sovereignty of the nation-state.  I will focus on Modern Monetary Theory (MMT) as an example, given that it is one of the more contemporary iterations of Keynesianism.

Modern Monetary Theory's basic premise is simple:  a nation-state that issues its own currency cannot go bankrupt, given that it can print more of its own money to pay for all necessary goods and services.   Another way of stating this theory is that governments don't collect taxes in order to fund programs and services. Rather, governments literally spend money into existence, printing money in order to pay for necessary services and goods. Taxes are just the government's mechanism for controlling inflation; in other words, taxes are the valve used to control the money supply. MMT therefore argues that since money takes the form of fiat currency, it is not constrained by scarce commodities such as gold and silver, and is instead a flexible social construct. So governments don't need to cut social programs in order to increase revenue – they could simply spend more money into existence in order to pay for them. Furthermore, the government can enforce full employment by spending jobs into existence – the state can create jobs through large-scale public works, and then print the necessary money to pay the workers. In a sense, MMT is another iteration of the Keynesian monetary heuristic that increasing the money supply is a good way to solve high unemployment and crisis.
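
As a toy model (my own minimal sketch of the MMT accounting story, with invented numbers, not a description of any real treasury's operations):

```python
# Toy model of the MMT picture of a currency issuer's books:
# spending creates money, taxation destroys it, and the "deficit"
# is just the net money left circulating in the private sector.
money_in_circulation = 0.0

def spend(amount):
    """The currency issuer spends money into existence."""
    global money_in_circulation
    money_in_circulation += amount

def tax(amount):
    """Taxes drain money back out: the inflation valve."""
    global money_in_circulation
    money_in_circulation -= amount

spend(120.0)  # public works, a jobs program, etc.
tax(100.0)    # drain part of it back to temper inflation
print(money_in_circulation)  # 20.0 -- the net injection, i.e. the "deficit"
```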

Imagine the potential of MMT for a leftist!  The neoliberals arguing for austerity and balanced budgets are talking nonsense – the state can simply spend money into existence, pay for welfare and other public services, and use this newly minted money to employ the unemployed! If the increase of the money supply triggers inflation, the state can simply tax more, fine-tuning the quantity of money. If only the MMTers could convince the right technocrats, we wouldn't have to deal with the infernal landscape of austerity.

However, the idealized picture presented by MMT is missing key variables.  Ultimately, an MMT approach would be heavily constrained by national production bottlenecks.  In order for MMT approaches to work, the increase in demand caused by the sudden injection of money must be met by increased production of the desired commodities.  An ideally sovereign nation would be able to meet its demand for computers, medicine, or food by simply producing more of these commodities. We may refer to a country's capacity for producing all the goods it needs as material sovereignty.

This, however, is where the fundamental Achilles' heel of MMT (and Post-Keynesianism in general) lies.  Most countries are not materially sovereign at all. Instead, they depend on imports in order to meet their demand for fundamental goods such as technology, fuel, food, or medicine.   In the real world, countries have to buy foreign exchange (e.g. dollars) in order to be able to import necessary goods. The price of the dollar in terms of another currency is not under the control of that currency's issuer. Instead, it is a reflection of the economic and geopolitical standing of that nation within the existing world-system. Whether the dollar is worth 20 or 30 Mexican pesos has to do with Mexico's position in the global pecking order, and this exchange rate, if anything, can be made worse by the adoption of Keynesian policies. For example, if Mexico suddenly increased its own currency supply, the Mexican peso would simply be devalued against the American dollar, diminishing its ability to buy the necessary imports.  This puts a fatal constraint on a nation-state's ability to finance itself through simple monetary policy.
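
A toy numerical illustration of this constraint (exchange rates and quantities are invented, and the one-for-one devaluation is a crude quantity-theory caricature):

```python
# Printing pesos does not print dollars. All figures invented.
pesos_per_dollar = 20.0   # exchange rate before the expansion
import_budget = 200.0     # pesos earmarked for imported medicine
print(import_budget / pesos_per_dollar)  # 10.0 dollars of imports

# The state doubles the money supply to fund its programs; the
# market devalues the peso roughly in proportion (the caricature).
pesos_per_dollar *= 2
import_budget *= 2        # the nominal budget doubles too...
print(import_budget / pesos_per_dollar)  # ...but buys the same 10.0 dollars
```

However much domestic currency the state issues, its command over imports is fixed by what the world-system thinks its currency is worth.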

The economic castigation of "pro-Keynesian" countries by the world-system is a cliché at this point.  To name some examples:  Allende's Chile, Maduro's Venezuela, or pre-2015 Greece. In the case of Allende, the sudden increase of the money supply via a higher minimum wage created a large unmet demand and eventually depleted the country's forex reserves (there was also economic sabotage aided by the United States, but this only reinforces my argument).  In the case of Maduro, Chavez had run large deficits, assuming that high oil revenues would last long enough. Greece overspent through massive welfare and social programs; although Greece doesn't issue its own currency, it still engaged in a high-deficit fiscal policy that led to its default.   If these countries had possessed material sovereignty – the ability to produce their own food, technology, and other necessary goods – the global order would not have been able to castigate them so harshly. Instead, what ended up happening is that foreign investors pulled out, the national currency plummeted, and forex reserves were depleted, leaving these governments unable to meet the national demand for necessary goods through imports or foreign capital injection.

The above scenario reveals a fundamental truth about capitalism – national economies are functions of global, exogenous variables, not only of endogenous factors.   Keynesian policy is based on the idea that nation-states are sufficiently sovereign to have economies that depend mostly on endogenous factors. If a nation-state's economy depended solely on national variables, then a Keynesian government could simply manipulate those variables in order to produce the desired outcome for its national economy.  It turns out, however, that nation-states are more like firms embedded in a global market, and their fate ultimately lies in the behaviour of the planetary world-system.  The nation-state firm has to be competitive in the world-system in order to generate profit; this implies that inflationary policies, large debts, and state-enforced "full employment" are not necessarily healthy for the profitability of the firm.   Furthermore, it means that the leftist nationalists who want to, for example, leave the eurozone in order to issue their own currency are acting from misguided principles.

Given the persistence of the totalitarianism of the world-system, no matter the utopian schemes of leftist nationalists and their fringe heterodox academics, it is infuriating to witness how the Left has lost its tradition of internationalism. Instead, since WWII the Left has been pushing for "delinking" from the world-system, whether through national liberation during the 60s, or more recently, by leaving the eurozone and fomenting balkanization in countries like Spain or the United Kingdom.

The world-system can only be domesticated to pursue social need through the existence of a world socialist government.  Regardless of how politically unfeasible the program of world government is, its necessity follows formally from the existence of a world-system. Only through world government could socialists have sufficient sovereignty to manipulate the economy for social need. In fact, the Keynesians indirectly point at this problem through their own formalism.  Post-Keynesian theories such as MMT start from the idea of a state having material sovereignty. Yet the only way for a state to have material sovereignty, and therefore be able to manipulate endogenous variables for its own economic ends, is to subsume the whole planet into some sort of unitary, democratic system.  A planetary government could then manipulate variables across the planet (e.g. both in China and in the United States) to enforce social-democratic measures like full employment or a welfare state, without the risk of international agents castigating the economy, or the need to import goods from "outside".  But the funny thing is that once we have global fiscal and monetary policy, Keynesianism becomes irrelevant, given that market signals can be supplanted by a planned economy.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

You Aren’t A Vulcan, But a Squishy and Ideological Human

Leonard_Nimoy_Spock_1967

Scientific rationality is one of the foundations of western civilization.  The discovery of the natural laws behind the useful work done by a diesel engine, the electron clouds propagating through conductors, and the modus operandi of a virus has given the geopolitical upper hand to North America and Western Europe.  So it comes as no surprise that many have attempted to use a similar methodology to uncover the fundamental laws that regulate the human world.

Although this rationalist method of parsing the human world is intimately coupled with the spirit of anglo-saxon capitalism – with economic marginalism (e.g. Hayek, Samuelson, etc.) being its first coherent expression – there has been a recent, growing rationalist movement that attempts to bring this perspective to the culture wars.   Some examples of these platforms are the popular blog Slate Star Codex and the publication Quillette.   An important animus behind this upsurge is a reaction against the sociological theories of the Left, such as structural theories of gender and racial discrimination. Many of these rationalists instead postulate that the perceived empirical disparities between races and genders (e.g. the gender wage gap, racial inequality, lopsided gender and racial ratios in STEM, etc.) are connected to biological-essentialist variables such as sexual reproductive strategies, or differences in IQ among races.

It's hard for me to discuss these theories in a completely detached and charitable manner, because of my ethnicity, leftist leanings, and utter contempt for Vulcan-wannabe dudes with shitty STEM degrees.  However, I will try to use peer-reviewed articles that are popular among them in order to argue that ultimately, their "rationalist" methodology is fundamentally wrong.  The outline of my argument is as follows:   (i) the only thing these papers demonstrate is an empirical correlation, not causation; (ii) the reason they cannot demonstrate causality is that the problems they are dealing with have many variables that are extremely hard to isolate; (iii) because of the large epistemic uncertainty in the causal links, politics becomes unavoidable; (iv) because of politics, this rationalist project collapses, and their vulcan-like rationality becomes one political ideology among others.

A good example is the often-cited paper by Schmitt et al.   Its main thesis is that personality differences between women and men seem to widen in more gender-equal countries.  The paper finds a moderate correlation between sexual dimorphism in personality and gender equality.  However, what is generally cited is one of its conclusions, which argues that personality dimorphism is not enforced by stringent policing in gender-equal countries. Rather, gender equality lets sexually dimorphic traits diverge into their natural equilibrium. In other words, free societies let women and men express their intrinsic, gendered personality traits, which are a function of darwinian processes.

I've seen many "rationalist" sources refer to this paper either explicitly or implicitly. It's seen as one of the most powerful attacks against feminist arguments, such as the claim that certain gendered disparities – the lopsided ratio of women in some STEM fields, or the lack of women in certain leadership positions – are products of sociological and structural factors such as socialization and sexual harassment.  The "rationalist" argues that policies aimed at making fields like STEM more sexually diverse, or at increasing the number of women in leadership positions, are misguided and potentially counterproductive.  Very recently, another study showed that the percentage of women in STEM fields actually seems to decrease as a function of gender equality; in a relatively unequal country such as Algeria, about 41 percent of STEM workers are women. This study seems to vindicate the earlier one by Schmitt et al.

I am not going to question the methodology behind these studies, but I feel it necessary to point out that quantifying things like "personality traits" and "gender equality", and then aggregating them, is probably non-trivial and riddled with assumptions. However, even without questioning the methodology, and taking these empirical relations at face value, the papers at most demonstrate the existence of empirical correlations and nothing more. One could hypothesize a multitude of causes, including a biological-essentialist link, but ultimately these studies only demonstrate a correlation between two empirical measurements, and nothing else.   This is the old adage that correlation does not imply causation.  Here is a very funny site showing all sorts of spurious correlations, such as the relationship between suicides by strangulation and government spending on science. A better way to understand this problem is to imagine a situation where two variables, A and B, are correlated.  There are actually four plausible explanations for this correlation: (1) A causes B, (2) B causes A, (3) A and B are both caused by some third variable C, or (4) the correlation is a spurious coincidence. An empirical correlation, while an important result in itself, is therefore not sufficient proof to establish causation.
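
Case (3) is easy to demonstrate with a small simulation (all numbers invented): a hidden variable C drives both A and B, producing a strong correlation between two variables that have no causal link to each other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden confounder C drives both observed variables.
C = rng.normal(size=n)
A = 2.0 * C + rng.normal(size=n)   # A is caused by C, not by B
B = -1.5 * C + rng.normal(size=n)  # B is caused by C, not by A

# A and B are strongly correlated despite neither causing the other.
print(np.corrcoef(A, B)[0, 1])     # roughly -0.74

# Conditioning on C makes the correlation vanish: within a thin
# slice of C, A and B are nearly independent.
mask = np.abs(C) < 0.05
print(np.corrcoef(A[mask], B[mask])[0, 1])  # roughly 0.0
```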

The issue of causation is very deep and has led to centuries-old discussions in the sciences and philosophy.  For example, the Scottish philosopher David Hume argued that there is no logically consistent way of inferring causation from correlation. However, my argument isn't as absolute as that; it is practical, in an everyday sense. Studies like the ones I referred to above deal with problems that are too multivariate for a mere correlation to convincingly establish a biological cause.  In physics and the hard sciences, causation is usually established through experimentation that isolates all the irrelevant variables, or, if lab experiments are not possible, through computational simulations where all the important variables and physical laws are plugged into a computer code.

In the "softer" sciences, such as bioinformatics and the social sciences, which deal with complex, multivariate problems that cannot be dissected by controlled experiments, the important variables are isolated through statistical techniques that try to take all the relevant parameters into account.  For example, here is a very easy-to-understand paper that argues against the book IQ and the Wealth of Nations, disproving the idea that a biologically determined lower IQ of the "non-white" races leads to underdevelopment in their respective countries, by using a simple multiple regression analysis that takes variables beyond IQ into account.  Furthermore, in many cases, especially in studies with political consequences, even sophisticated statistical techniques are not enough to establish causation beyond reasonable doubt, given that there is always the possibility of unknown variables not being accounted for.  A famous example is the history of the cigarette-lung cancer link, where it took decades of different types of studies, from lab experiments to questionnaire-based correlations, to establish a causal link. This weakness was obviously abused by tobacco conglomerates, but the point is that even the scientists hired by these tobacco companies at some point began to accept the validity of the evidence, since various research trajectories triangulated on the cancer-cigarette connection.
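
A minimal sketch of the kind of multiple regression doing this work (synthetic data with invented coefficients, not the actual analysis from that paper): a naive two-variable fit shows a large effect that collapses once the confounder enters the model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Synthetic data: the outcome is driven entirely by a confounder Z.
Z = rng.normal(size=n)             # e.g. institutional/historical factors
X = 0.9 * Z + rng.normal(size=n)   # the variable under dispute
Y = 2.0 * Z + rng.normal(size=n)   # the outcome (e.g. development)

def ols(design, y):
    """Least-squares coefficients for a given design matrix."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

ones = np.ones(n)

# Naive regression of Y on X alone: a large, spurious coefficient.
print(ols(np.column_stack([ones, X]), Y)[1])     # ~1.0

# Controlling for Z: the coefficient on X collapses toward zero.
print(ols(np.column_stack([ones, X, Z]), Y)[1])  # ~0.0
```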

Now let's go back to the previous claim: that sexual dimorphism in gender-egalitarian countries implies an inherent, biologically hard-wired tendency that makes men on average more interested in engineering than women.    This causal link is almost impossible to establish beyond doubt, at least with the known experimental and scientific techniques. The problem is vastly more complex than the cancer-tobacco link.  The complexity arises from the many social variables interfacing with women's career choices, variables that are extremely hard to take into account.  For example, it is obvious that in the most gender-unequal limit, there would be almost no women in engineering jobs at all (e.g. England in the 19th century)! It is only in today's particular configuration that this correlation seems to hold, which already shows the existence of hidden socio-economic variables affecting these studies.

The inability to establish causal links beyond reasonable doubt in many socio-economic problems (e.g. economics) actually has well-defined mathematical analogues.   Even in mathematical physics, the Holy Grail of these vulcan-like rationalists, the problems that can be solved exactly are extremely limited in scope.   Poincaré showed in the late 19th century that the trajectories of more than two gravitationally interacting bodies are non-integrable – there is no general closed-form solution.  In the 1960s, Lorenz discovered that despite the sophistication of computers, many multivariate problems, such as simulating the weather, become unpredictable past a certain horizon due to chaos: tiny errors in the initial conditions grow exponentially.  These uncertainties are not even a matter of failing to account for all the relevant variables; they are embedded in the mathematical structure itself.  So it is quite arrogant to argue with confidence that a couple of mere empirical correlations are enough to disprove the lived experience of many female students and STEM workers, which points to discouragement from peers, lack of role models, unwelcoming workplaces, etc.
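
Lorenz's point is easy to reproduce with his own three-variable system (a standard textbook demonstration; the crude Euler integrator below is my own sketch):

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system (crude, but fine for a demo)."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

# Two trajectories differing by one part in a billion.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])

for step in range(40_000):  # 40 time units at dt = 0.001
    if step % 10_000 == 0:
        print(step, np.linalg.norm(a - b))
    a, b = lorenz_step(a), lorenz_step(b)
# The separation grows by many orders of magnitude: an error in the
# ninth decimal place eventually swamps the prediction entirely.
```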

Given the existence of large amounts of noise, chaos, and "hidden" variables in socio-economic systems, there cannot be a purely rationalist and "scientific" way of tackling these problems.  This epistemic uncertainty gives rise to politics in a much stronger sense than in simpler, "mathematical" problems.   The cry of "centrists", "classical liberals", "rationalists", Jordan Peterson, etc., that feminism and leftism are ideological, is therefore a case of the pot calling the kettle black. Given the epistemic opacity of socio-economic problems, their claim to rationality is simply a bed-time story – a shallow aesthetic preference for soulless logic-chopping and boring prose. They have agendas, not unlike the "irrational" leftists and feminists. In fact, if I were uncharitable, I could claim it isn't reason that animates them, but some burning resentment of women, minorities, feminists, etc. invading their spaces.  I am not ashamed to admit my own agenda, and that's why this blog is explicitly partisan, and not written in the spirit of some shitty analytic philosophy paper.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.


Socialism Versus Economic Growth: the Human Being Is Not Infinitely Hackable

phoca_thumb_l_arntz_arbeit

I am for the rational planning of the world economy in order to fulfil social need (e.g. free time, housing, healthcare, transportation, education, etc.), including the minimization of the work day until its eventual abolition.  This would require consolidating current scientific and technological capacity towards the goal of serving these needs.  Yet I feel this usage of scientific rationality for socialist ends is often mistakenly coupled with the idea of unconstrained economic growth.   In the last couple of years, this idea of growth has become a point of tension in the Left between the so-called "de-growthers" and the "prometheans": the former wish to contract the economy in order to avoid ecological catastrophe, while the latter argue that continued growth and progress are necessary for socialism. The debate is quite muddled, and often it is not really about technical disagreements in political program, but about fuzzier aesthetic and ideological differences between the ecologists and the futurists.  On one hand you have quasi-luddites who privilege the local and small over the global and cosmopolitan, and rail against GMOs and nuclear power. On the other, you have sci-fi "communist" types who want to pave the Earth and colonize Mars.

Much of this debate about growth is anchored in ecology and malthusianism – the idea that planetary constraints demand that humanity downsize and consume less.  However, as a socialist, I am not invested in the tension between mass consumption and an impersonal natural world that I have no affinity with. Rather, I am interested in the liberation of humanity from toil, alienation, and material misery.   I therefore believe that the idea of unconstrained growth is at best confused from the perspective of a socialist, and at worst actually detrimental to the objective of liberating humanity from quasi-forced labor (wage labor, peasant labor, slavery, etc.).  This leads me to argue the following in this piece: (i) growth as a metric for socialism is undefined, (ii) if we measure growth as increased productive capacity, then it is antithetical to socialism (productionism), (iii) productionism has a human limit, given that human beings can only be optimized into productive workers at the cost of incredible physical and psychological violence.

Growth, from the perspective of these left debates, is ill-defined, given that economic growth is usually conceptualized in the context of capitalism. Since GDP growth is the telos of capitalism – the expansion of capital through the reinvestment of profit and the exploitation of labour – economic growth is a very well-defined process within the market, and in that sense it is a "positive" thing.    For example, the competency of a politician, whether "left" or "right", is at least partly judged by how much GDP grew under their tenure.   For social democrats operating within capitalism and the nation-state, GDP growth is important because the satisfaction of social need is a side-effect of a growing economy that generates new jobs and more tax revenue. However, the fulfilment of social need is not the end goal of capitalism, just a potential byproduct of profit.  In contrast, the telos of socialism is not capital growth, but the rational satisfaction of social need.    The concept of economic growth therefore becomes undefined in the context of socialist economics.  One cannot use a metric defined in relation to the expansion of capital to judge the progress of a society that is focused on satisfying needs related to housing, healthcare, education, the reduction of the work day, and transportation. Socialist progress cannot be meaningfully quantified by a metric such as GDP, especially under the maximum program of socialism, which would abolish money and private property.

A more universal metric for growth, as opposed to GDP, might be a productionist metric – a function of how much of a particular industrial output is created. This was more or less the metric used for planning in the USSR under the famous Five Year Plans.  Through a method called "material balances", the planning agency of the USSR, the Gosplan, would survey all the available raw materials and natural resources, turning them into inputs that were "balanced" against industrial outputs.  Given the absurdly high production outputs required by, for example, the first Five Year Plan, which demanded the accelerated expansion of heavy industry at the cost of famines, terror, and slave labor, one could label the USSR productionist.  This historical human cost of industrialization (both in the USSR and the West) leads to my next argument: the intensification of productionist growth depends on the exploitation of human labour, either by extending the work day so that more industrial output is produced within a single day, or by extracting a surplus that must be reinvested in the development of machinery and techniques.
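
The balancing arithmetic itself is simple to sketch. Here it is rendered as a toy Leontief-style input-output calculation, which is closely related to (though not identical with) Gosplan's actual method; the industries and coefficients are invented:

```python
import numpy as np

# A[i][j] = units of good i consumed to produce one unit of good j.
# Industries: 0 = steel, 1 = grain (coefficients invented).
A = np.array([
    [0.2, 0.1],  # steel used per unit of steel, grain
    [0.0, 0.3],  # grain used per unit of steel, grain
])

# Final output the plan wants to deliver to households.
d = np.array([100.0, 200.0])

# Balance condition: gross output x must cover both final demand
# and industry's own consumption, x = A @ x + d.
x = np.linalg.solve(np.eye(2) - A, d)
print(x)          # gross outputs required, ~[160.7, 285.7]
print(x - A @ x)  # net output equals the planned demand, ~[100, 200]
```

The plan's "growth" then simply means raising the target vector d, and the balance tells you how much extra gross material throughput, and hence labour, that demands.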

The history of class society has shown that economic expansion is contingent on the extraction of surplus from human labor.   The pyramids, the steam engine, and the violent transformation of peasants into more productive proletarians are functions of the coagulated blood of billions.  Economic expansion requires the extraction of a surplus from human labour, whether by seizing peasants' agricultural output or through the exploitation of proletarians.

Today in the Global North we can see the more humanistic manifestation of the tyranny of economic growth. Although the economies of the core states have expanded exponentially in the last century, the length of the work day has been frozen for almost a hundred years.   Not only has the length of the work day remained frozen, but ever more intensive techniques are applied to dissect the human being in order to rebuild it as a working automaton.  We see this in the expansion of the work day into our inner lives, transforming humans into semi-sentient, individual firms: socializing becomes networking, love a machine-learning algorithm for finding a mortgage partner, social media a matter of building a brand.  This transformation of homo sapiens into homo economicus is hard to describe, but I feel it in the marrow of my bones as an immigrant.  Economic rationality controls the way I move my hands in a professional presentation and structures my speech, demanding that I do not betray my foreign sloppiness. For the sake of career and success I must conceal my spirit, which was shaped by a culture where lines are wobbly, time is erratic, and human boundaries less exact.  How could anyone who is human defend this infernal labor camp?  This despair makes me sympathetic to "non-model" minorities who are unable to adapt to this padded asylum of white light and right angles, because at some level they are more human than me.

I must reiterate that the above arguments do not necessarily run counter to technological innovation and planned, controlled growth. My point is that productionism is inevitably a function of human labour, and is therefore in tension with the reduction of the work day.  If the priority of socialism is to expand the sphere of free time, then the reduction of the work day will inevitably be prioritized over mass consumption and productionist growth.   That does not imply that humanity will live an austere existence with the minimum necessary for survival, but that production will be planned in accordance with use-value: instead of the built-in capitalist obsolescence of large volumes of short-lived consumer goods, we may have a lower volume of long-lived, quality goods. An exact picture of the social reality within a planned world economy is hard to portray at this moment, but the important point is that productionism and consumerism are antithetical to free time.

If you liked this post so much that you want to buy me a drink, you can pitch in some bucks to my Patreon.

EDIT: I misrepresented Leigh Phillips' argument: he isn't really arguing for unfettered production; rather, his argument is against localist imperatives of arbitrary "downsizing". He instead locates the problem in the fact that capitalism isn't planned, so we have no control over what to produce or stop producing based on a scientific, planned evaluation of human need and environmental constraints. The problem, then, is not growth per se, but the arbitrariness of the market, which cannot be solved by downsizing unless we leave capitalism.  To quote Phillips from his book "Austerity Ecology":


“Instead of next investment or production decision being driven blindly by profit seeking, or consumer purchase made constrained by the need to reduce expenditure, all economic actions occur as the result of rational decision-making on the basis of maximum utility to society. Because all this is a conscious, planned process and we are no longer beholden to the drive for profit, we would now have the possibility to wait, to hold off for a while until we have sufficient technological innovation to move forward in a way that does not damage the environment in a way that delimits the optimum living conditions for humans.

We can collectively say: Well, now that we have this new efficiency in the production of this commodity, what shall we do with the savings? Shall we increase production? Shall we reduce material use? Shall we increase the overall amount of leisure time available to the labour force?

Capitalism is a problem because in the face of environmental spoilage, it must proceed regardless (not because of growth per se!). Any new innovation permitting efficiency gains will be invested in the optimum way to produce still more capital, even at the expense of environmental despoilment. This is not to say that the capitalist is evil. He is not. He has no choice. Indeed, even if he is environmentally minded, he must still make that choice, or go bankrupt. As Foster writes, and here he is correct, the constant drive to accumulate capital “impos[es] the needs of capital on nature, regardless of the consequences to natural systems.”

[…] Democratic economic planning though gives us breathing room. True, we may in principle at some point in the future have to pause some production expansions here or there, for a period. But this is a very different thing from saying there is an upper limit.

Even better, because socialism would permit us to direct investment—including investments in research and development—not merely toward what is profitable, but toward what is most useful, there is every likelihood that growth may actually advance faster under socialism than under capitalism, because more research funding can be directed to technologies ensuring we do not damage the environment.”