Musings on Taleb

Nassim Nicholas Taleb is a former options trader with a PhD in Management Science whose body of work touches on statistical reasoning and its implications for the world at large. He popularized the notion of the black swan, and his thinking has informed risk models in a range of disciplines. His books, collectively titled Incerto, include Fooled by Randomness, The Black Swan, The Bed of Procrustes, Antifragile, and Skin in the Game.

I will revisit this post periodically as I reread his works and reflect on them. I’ve organized the essay by topic area rather than book because most topics are carried through or touched upon in all of them. 

On Black Swans

Taleb popularized the concept of “the black swan”, an unpredictable, high impact, low probability event. The concept finds its origin in Hume’s critique of inductive reasoning – he argues that our attempts to make the world intelligible are based on associations that are insufficiently exhaustive. Just as no amount of observations of white swans can conclusively disprove the existence of black swans, no amount of observations that we extrapolate into specific types of causal relationships can disprove the potential for other relationships.  This is a classic epistemic critique dating in some form all the way back to Aristotle, but Taleb argues that most modern approaches to risk mitigation still fail to take this problem of induction into account. This problem of induction and its implications for what is knowable form the foundation for much of his later work. 

Throughout Incerto, Taleb rails against the tyranny of Gaussian (classically normal) distributions. Averages and standard deviations make sense for very particular types of distributions, but aren’t particularly useful for distributions with fat tails, dimensional effects, or non-linear impacts. I agree with him here – too many modern data modelling processes take the assumption of normality as given, even when practitioners don’t realize they’re doing so. Even when practitioners concede that normality is a simplifying assumption, its full import is rarely explored. This is particularly problematic when general education requirements limit most students’ exposure to statistics to an introductory course – cloaking judgements in mathematics can obscure more than it illuminates, and oftentimes those assumptions aren’t fully examined until it’s too late.

Very few domains actually operate according to classically normal distributions. Human height might be one. The range of outcomes for a gambler playing (optimal) blackjack might be another. Most things, however, suffer from extreme fat tails. Venture capital returns, book sales, war deaths, the payoff to asking someone on a date – all reflect an underlying distribution of outcomes with orders of magnitude more extreme, high-impact events than a normal distribution would deem possible.
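
To make the contrast concrete, here’s a quick simulation (a sketch in Python with parameters I picked purely for illustration – nothing here is calibrated to real data). In the thin-tailed world, the top 1% of observations barely matters; in the fat-tailed one, it can dominate the total:

```python
import numpy as np

# A minimal sketch contrasting a thin-tailed (normal) and a fat-tailed
# (Pareto) world. Parameters are illustrative, not calibrated to anything.
rng = np.random.default_rng(42)
n = 100_000

normal = rng.normal(loc=100, scale=15, size=n)      # height-like quantity
pareto = (rng.pareto(a=1.2, size=n) + 1) * 100      # VC-return-like quantity

for name, sample in [("normal", normal), ("pareto", pareto)]:
    top_1pct = np.sort(sample)[-n // 100:]
    share = top_1pct.sum() / sample.sum()
    print(f"{name:7s} top 1% holds {share:5.1%} of the total; "
          f"max/mean = {sample.max() / sample.mean():8.1f}")
```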

Taleb argues that a key part of what is missing from much experimental design and statistical reasoning is the notion of dimensionality, where phenomena’s effects also have effects. The orders of consequences can be infinite – second-order effects create third-order effects, and so on and so forth. I’d expand upon Taleb by arguing that in a world of temporality, the dimensionality problem becomes even more difficult. A second-order effect could happen, be ongoing, and then have its characteristics change based on the impacts of a fourth-order effect. Relationships between effects explode, and the permutations of these effects become unmanageable quite quickly. If the situation and relationship being viewed are on the same level of dimensionality as the issue being understood, then basing future action on previous results is quite tenable. But if one changes contexts or changes the dimension one operates on, extrapolation can get quite dangerous. Black swans can result from extra-dimensional effects of seemingly innocuous items.

Another critical insight missing from most impact models is the notion of nonlinearity. Oftentimes, the impacts of events are not additive or do not scale linearly. A pretty intuitive example he provides is that falling 100 feet once is much worse than falling one foot 100 times. Nonlinear effects can lead to disproportionate impacts from slight variations from the mean. Take the example of an overzealous 21-year-old on their birthday. Every additional drink drastically increases the likelihood of alcohol poisoning, so all it takes is a small percentage change in drink consumption to produce a much more drastic difference in outcome – say, the difference between a hangover and a hospitalization.
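
A toy calculation makes the convexity point. The quadratic harm function below is my own stand-in – Taleb doesn’t specify one – but any convex function tells the same story:

```python
# Illustrative convex harm function: damage grows with the square of the
# shock, so one big shock is far worse than many small ones of equal sum.
# The quadratic form is an assumption for illustration, not a medical model.
def harm(feet_fallen: float) -> float:
    return feet_fallen ** 2

print(harm(100))        # one 100-foot fall      -> 10,000 "harm units"
print(100 * harm(1))    # a hundred 1-foot falls ->    100 "harm units"
```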

The combination of dimensionality and nonlinearity is the intuition behind the infamous butterfly effect, which chaos theorists use to illustrate a world where a butterfly flapping its wings causes a chain of events that results in a hurricane somewhere else in the world. Small variable changes can have huge run-on effects. We trust in overly controlled, additive models at our peril.

I think these effects only become more potent as the internet and technology grow in prominence. Both serve to decrease friction in access and distribution. Marginal differences in quality or random positive feedback loops become the drivers of huge disparities in outcomes. It started in tech; we’ve seen it filter into media; and the phenomenon seems poised to take over the rest of the economy. Increased protectionism and walled internets might provide a brief respite, but I think the arc of the universe still bends towards decreased friction.

Interestingly, I find the notion of black swans to be a good, consequentialist justification for social programs. Currently, people are quite disincentivized from pursuing activities with low-probability payoffs or high volatility. Among these activities are entrepreneurship, fundamental research, and creating art or literature. These endeavors are generally scalable and non-rivalrous, meaning they can be offered to many people quite easily and one consumer’s enjoyment of them tends not to crowd out another’s. That means these are all things that create incredible amounts of value for society if done well! The problem is that any individual attempt has a very low probability of success – and the aforementioned tech/internet dynamics mean that probability is only getting lower, even as the potential payoff gets higher. My intuition is that, in the real world, decreasing distribution costs and technological shifts probably increase the net surplus available, but also make the underlying distribution even more fat-tailed. This problem gets worse if we take the diminishing marginal utility of income and wealth into account. If my utility diminishes marginally, then for a given expected value, a fatter tail makes the action less beneficial to me. Conversely, society as a whole is large enough to care much more about the expected value of the action – it’s a form of the tragedy of the commons. The problem gets exacerbated if people consider blowup to be an unacceptable outcome – for most people, leaving their family destitute isn’t worth the possibility of being a world-shaker.

Social programs can decrease the risk of blowup by ensuring some minimum standard of living for everyone. They can also raise our standard of living to the point where diminishing marginal income doesn’t matter as much – we become free to pursue artistic, entrepreneurial, or research endeavors for their intellectual or freedom benefits. The more our collective benefits accrue through low probability, high impact positive events, the more material the incentive problem these programs help solve becomes. Taleb’s arguments form the foundation for a probability-based consequentialist argument for comprehensive social safety nets.

On Incentives

The later books in Incerto become increasingly concerned with incentives. Taleb runs through some of the classic market failure incentive problems and applies them to many everyday situations. A central theme is that people are a product of the incentives they are offered, and these quite often create suboptimal outcomes for the collective. Every KPI or performance metric is a potential incentive problem. Sometimes, the accumulation of these slight incentive distortions creates situations where risk accumulates and causes a blowup scenario.

Two key incentive distortions Taleb identifies are principal-agent problems and moral hazard. A principal-agent problem occurs when a party (the agent) who is supposed to act on behalf of another party (the principal) has incentives that differ from those of that party. If the incentives of the principal and the agent are in conflict, the agent will often act in their own best interests to the detriment of the principal. A canonical example is real estate agents, who receive some portion of the total house sale price as compensation. Because they are less exposed to the upside than the seller is (getting only a small percentage of any incremental gain in price), they are heavily incentivized to turn over houses quickly rather than hold out for the absolute best price. A moral hazard problem is one where an entity is incentivized to take on undue risk because they capture the benefits of the risk while passing some portion of the potential harms on to someone else. A classic example is the ’08 financial crisis, where banks invested in financial markets with FDIC-insured deposit account money. The banks would keep the profits, but the consequences of failure were socialized.
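
A quick back-of-the-envelope sketch of the real estate case (the prices and the 3% commission are numbers I’ve assumed for illustration):

```python
# Hypothetical listing: does the agent share the seller's incentive to
# hold out a few more weeks for a better offer? (All numbers assumed.)
commission_rate = 0.03
price_now = 500_000        # offer on the table today
price_later = 520_000      # plausible price after more weeks of work

seller_gain = (price_later - price_now) * (1 - commission_rate)
agent_gain = (price_later - price_now) * commission_rate

print(f"Seller gains ${seller_gain:,.0f} by waiting")   # $19,400
print(f"Agent gains  ${agent_gain:,.0f} by waiting")    # $600
# Weeks of extra work for $600 rarely beats closing now and moving on to
# the next listing, so the agent's incentives diverge from the seller's.
```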

Taleb argues that information bearers often have an incentive problem because they are rewarded for complexity and the adulation of their peers rather than actual outcomes. Journalists and academics are incentivized to make predictions, but they have no liability for the outcomes of those predictions. If they’re not held accountable for their entire body of work, they can selectively highlight their successful predictions and quietly bury the failures. Taleb tells us to beware of pundits because of these distorted incentives. In reality, I think reputational pressure provides some accountability for predictions, but it’s often much less than we think. Feigned precision is a real danger, and we should make sure to evaluate whether the people giving us advice bear any form of liability if they’re wrong.

Taleb goes further, arguing that it’s often nearly impossible to predict every incentive problem that could occur. As such, the best way to ensure that people have similar incentives is to give them “skin in the game”, such that they are directly invested in the same outcome as you are. This is easier in theory than in practice – arrangements like the above, where real estate agents are given a commission based on sale price, are deliberately engineered to try to create some accountability. Nevertheless, his insights generally stand. Understanding the incentives underlying someone’s actions is one of the most powerful tools for evaluating the validity of their arguments and actions.

On Work

I found Taleb’s discourse on employees quite interesting. He argues that being an employee is just a form of risk transfer – an employee is assured a steady income while the firm is exposed to the upside of the value the employee creates. On some level, this is almost quaint – it’s a common argument levelled against critiques of businesses extracting a worker’s excess labor value. To me, this seems pretty reasonable – individual utility functions, whose marginal utility diminishes more sharply and at lower levels than those of firms or shareholders, create a situation that’s ideal for risk transfer – firms act as insurance policies. I do think Taleb’s focus on risk transfer is a bit reductive – firms also create a lot of benefits through economies of scale, offering employees training, access to capital markets, and specialization.

In Skin in the Game, Taleb has an interesting aside about the Coase Theorem and transaction costs. The Coase Theorem holds that in a world without transaction costs, it doesn’t matter who originally holds property rights – the rights will eventually end up in the hands of the stakeholder who values them most. From a firm’s perspective, this means that without transaction costs it would stick to its very core competencies and contract out all other functions. The problem is that in the real world, transaction costs are huge – training, contract creation, search costs, and the cost of production downtime during the search phase are all very real and salient. Contracting out becomes tempting but dangerous if the underlying distribution has fat-tailed downside risk. In most situations contracting might save you some money, but if it carries the risk of blowup (a key technical failure, not being able to find someone in time to appease a key customer), the risk becomes unacceptable. Having employees becomes a risk-mitigation strategy from the firm’s perspective, because you are guaranteed to have them when necessary.

Transaction costs are the reason why the work that is most easily contracted out is work that isn’t particularly differentiated – take electrical wiring or janitorial services, for example. Search and training costs decrease the less differentiated an industry is. Again, technology serves as a magnifier here. Most gig economy platform businesses operate according to these principles. Companies like Uber or TaskRabbit allow people to outsource relatively undifferentiated work and pay for it on an as-needed basis. My sense is that these companies are just the beginning of a transition to a more mobile, less firm-based workforce. Transaction costs rise the more difficult it is to measure a contractor’s abilities and potential impact, but the advent of big data and an emphasis on the quantified self mean that more and more jobs will be measurable. When this happens, it’s just a matter of time before tech-based marketplaces pop up to efficiently allocate this newly quantified labor. We’re already seeing this in industries that seem more highly skilled – take, for example, a company that provides a platform-based mechanism for matching physicians with hospital needs. More and more businesses will fall prey to the temptation to outsource work – given Taleb’s analysis, I think this has some interesting implications for firm fragility. Ostensibly, a more contracted workforce should both decrease operating leverage and increase profits. However, it also leaves employers vulnerable to supply squeezes. Concepts like surge pricing seem innocuous when it’s just a ride downtown, but what happens when bidding causes prices for ER physicians to go through the roof in the midst of a pandemic? When prices for energy technicians skyrocket in the midst of nationwide blackouts? Do richer hospitals and local governments suck up critical labor at the expense of those less well-endowed? Do we start thinking about and managing the liquidity of labor pools as well? Is any of this necessarily more or less unethical than any other market-based system?

Efficiency vs. Fragility

Writing in the midst of a pandemic that seems to be on the verge of toppling fragile system after system, I find Taleb’s discussions of risk mitigation and fragility particularly timely.

Leverage

First, on leverage. I think the biggest source of business fragility is leverage. Generally, leverage refers to situations where the first derivative of the relationship between one variable and another (say, revenue and profits) is positive, but the second derivative is negative. The observation about the first derivative is always true, while the second is generally, but not necessarily, true. Companies generally have two types of leverage – financial and operational.

Financial leverage is generally thought of as debt, but I think it makes more sense to think of it as the degree to which a capital structure contains claims senior to your own. The reason we generally think of leverage as “debt” is that we tend to focus on the considerations of shareholders. In reality, the clear delineation between debt and equity is a fiction buttressed by the tax code. A capital structure is a series of claims on the cash flows of assets that vary based on seniority, consistency of payment schedule, legal protections, and contractual ability to affect the actions of a company.

From a firm perspective, parceling out the capital structure creates efficiency. The most common argument for debt in a capital structure is that it creates tax advantages – interest on debt is tax deductible, while distributions to shareholders are not. The problem is that from the perspective of society at large, this doesn’t make much sense. It’s just a tax transfer from the government – the tax advantages of debt don’t actually create any value for the world. It’s my opinion that this is just vestigial – the inclination to privilege certain terms of financing over others through the tax code is just a holdover from a world with less developed financial markets. Other advantages to parceling out the capital structure are more legitimate. Offering investors different risk-return profiles allows you to appeal to a broader array of capital providers and thereby drive down your blended cost of capital. Additionally, different investor bases will have differing specializations and abilities to forecast the expected value of assets. This should decrease their risk profile for an investment, creating some value that can be split between the firm and the capital providers. Financial leverage, then, isn’t simply a question of preying on the tax code – it provides real, tangible benefits. 

Operational leverage refers to the ratio of fixed to variable costs. Some businesses have huge up-front starting costs, but their incremental cost to produce each unit they sell is low. Think of a video game company that must spend tens of millions on development but spends zero additional dollars every time someone pays to download its game online. In times of high demand, operational leverage can be a boon – each incremental dollar of revenue has a highly positive profit impact. Conversely, in times of squeezed demand, each incremental dollar lost has a highly negative profit impact, because the business has little ability to decrease costs. Industries with high operational leverage tend to have higher returns on invested capital (a good proxy for the long-term expected return of a business) to compensate for their riskiness. It’s not an accident that venture capital (pre-Vision Fund) tends to target businesses with precisely these characteristics (tech, media, biotech) – VCs can afford to subsidize the fixed costs in the hope that one of the companies takes off and generates a huge return through the combination of high demand and an operationally-levered business structure.
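
A stylized comparison makes the sensitivity obvious. The two cost structures below are invented for illustration; both firms earn the same baseline profit, but the same ±10% revenue swing hits them very differently:

```python
# Two stylized firms with equal baseline profit but different cost structures.
def profit(revenue, fixed_costs, variable_cost_ratio):
    return revenue - fixed_costs - revenue * variable_cost_ratio

base_revenue = 100.0
firms = {
    "high operating leverage": dict(fixed_costs=60, variable_cost_ratio=0.2),
    "low operating leverage":  dict(fixed_costs=10, variable_cost_ratio=0.7),
}

for name, costs in firms.items():
    base = profit(base_revenue, **costs)
    up   = profit(base_revenue * 1.10, **costs)   # demand up 10%
    down = profit(base_revenue * 0.90, **costs)   # demand down 10%
    print(f"{name}: base {base:5.1f}, +10% rev -> {up:5.1f}, "
          f"-10% rev -> {down:5.1f}")
# The high-leverage firm sees profit swing +/-40%; the low-leverage
# firm sees only +/-15% on the same revenue shock.
```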

Now, back to Taleb. Throughout his books, he constantly criticizes systems that cannot cope with disruption. He deems these systems “fragile”. From an investor’s perspective, fragility is a consequence of both the underlying business and the nature of your claims on the value the business creates. Leverage increases fragility, both for investors with subordinate claims and for the business as a whole. For financial leverage, bankruptcy costs are huge, both in terms of actual administrative costs (lawyers, investment bankers) and lost business/damaged vendor relationships. If you’ve ever taken a Finance 100 class and seen the graph of the weighted average cost of capital against the amount of leverage, you’ve seen the U-shaped relationship between the two. This happens because the tax benefits of leverage scale linearly while the probability of default increases exponentially with increases in leverage. For operational leverage, slight changes in the ability to conduct operations portend huge drops in profits and risks of blowup. Positive correlation between capital market strength and underlying real-economy strength exacerbates the problem – the times when you need financing are often the times when it’s hardest to get. If you buy Taleb’s arguments, the combination of nonlinear effects, dimensionality, and unpredictable tail-risk events makes the world a much more volatile place than most models presume. As such, things that increase fragility – i.e., leverage – are to be avoided.
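
For intuition, here’s a toy version of that trade-off (the functional forms and numbers are entirely assumed – this is a sketch of the textbook logic, not a calibrated model):

```python
import numpy as np

# Stylized trade-off sketch: the tax shield grows linearly with debt, while
# the expected cost of distress grows convexly as default becomes likelier.
tax_rate = 0.25
distress_cost = 40.0                      # assumed cost if default occurs
debt = np.linspace(0, 100, 11)

tax_shield = tax_rate * debt
p_default = (debt / 100) ** 3             # assumed convex default curve
net_benefit = tax_shield - p_default * distress_cost

for d, b in zip(debt, net_benefit):
    print(f"debt {d:5.0f}: net benefit of leverage {b:6.1f}")
# The net benefit rises, peaks, then turns sharply negative: the same
# intuition as the U-shaped WACC curve, seen from the firm-value side.
```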

Taleb has an interesting way of developing this argument. He identifies something he calls the “Bob Rubin trade”, where agents engage in negative expected value bets consisting of many small, short-term gains and a few large, long-term losses. If agents are evaluated on short time frames or able to withdraw some portion of the value they create without being exposed to the downside, they’ll be heavily incentivized to engage in these types of value-destructive activities. Taking Taleb’s thoughts to their conclusion, many incidences of leverage are Bob Rubin trades. In most instances, leverage will allow junior claimholders to generate more value – but when things go wrong, they go drastically wrong. If the ones making the decisions are able to receive and sell stock options, get out before the company blows up, or otherwise withdraw value, they’ll be heavily incentivized to keep engaging in these activities while socializing the risks. Other examples abound. Take the junk bond crisis, and all the pension funds that were able to mark paper gains based on payment-in-kind (PIK) interest. Even if managers knew that the companies they had invested in were likely to blow up when the debt came due, they were incentivized to keep the charade going so long as they were rewarded for paper gains.
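
The Bob Rubin trade is easy to simulate. With payoffs I’ve assumed for illustration – a 99% chance of a small annual gain against a 1% chance of a blowup – the bet has negative expected value, yet most five-year track records look spotless:

```python
import numpy as np

# Stylized "Bob Rubin" trade with assumed payoffs: a 99% chance of a small
# gain each year, a 1% chance of a large blowup.
rng = np.random.default_rng(0)
p_blowup, gain, loss = 0.01, 1.0, -150.0

expected_value = (1 - p_blowup) * gain + p_blowup * loss
print(f"Expected value per year: {expected_value:.2f}")   # -0.51, negative

# Yet on a short evaluation window the trade usually looks great:
careers = rng.random((10_000, 5)) > p_blowup   # 10,000 five-year careers
clean = careers.all(axis=1).mean()
print(f"Share of 5-year runs with no blowup: {clean:.1%}")   # ~95%
```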

This has drastic implications for public policy. Much of the current discourse around shareholder buybacks and bailouts is actually a discussion of the incentive-distorting effects of leverage. Returning cash to shareholders is effectively the same as increasing leverage, and many companies that are now asking for bailouts have consistently done this. We must recognize that different stakeholders have different abilities to exit a firm, and face different consequences when they do. Investors are often able to exit more easily than suppliers or employees, and as such are better positioned to gain from a leverage-induced Bob Rubin trade.

Government policy and financial market regulation should be made in recognition of these perverse incentives. Ethical arguments for freedom of capital allocation can be balanced against the potential negative externalities generated by stakeholders with a higher ability to exit than other parties. Simple discourse identifying that corporations have duties to all stakeholders, not just shareholders, is insufficient. Regulations demanding that no shareholder buybacks take place until bailout money has been paid back are a good start, but not enough. Extending the holding period required for capital gains to qualify as long-term might be an interesting solution. Limitations on short sales, staggered boards, and taxes on transactions can all potentially increase shareholder and manager commitment. Research by Martijn Cremers, finance professor and dean of Notre Dame’s Mendoza College of Business, indicates that commitment devices like staggered boards actually increase shareholder value over the long term. Although this may seem to fly in the face of efficiency-based views of markets, it makes sense in the world Taleb outlines, with distorted incentives and more high-impact tail-risk events than we might presume.

Other Forms of Fragility 

Some forms of fragility are more hidden, but no less impactful. I view just-in-time manufacturing and centralized foreign manufacturing bases (especially geographically concentrated ones) as examples. Although on their face these practices seem to promise the resiliency benefits of variabilized costs and more deployable capital, they also expose businesses to input-variability risks. These generally manifest as supply chain disruption and geopolitical risk. A disruption to the ability to transport (war, pandemic) or changing geopolitical considerations (bans, tariffs) can quickly make supply inaccessible. Even if the situation can eventually be rectified, the impact of downtime is significant. Lost time means lost profits, increased market share for competitors, eroding customer relationships, and a leeching of internal talent.

Antifragility

The concept of “antifragility” is a key theme of Taleb’s later works. Taleb draws a distinction between antifragility and robustness – he argues that robustness simply means the capability to withstand disorder, while antifragility is the ability to actually gain from it. Some examples include financial options, natural selection, and localized governments. My interpretation of “antifragility” is that it’s really just adaptability. Certain things or systems are more easily able to adapt to changed contexts. I think that in a vacuum, there’s always a tradeoff between antifragility and efficiency. The degree of this tradeoff differs, and I agree with Taleb that optimizing it should be a key component of strategy. I also agree with him when he implies that most incentives are structured to encourage people to ignore tail risk or the impacts of volatility and changed contexts. Where I part ways is with the notion that antifragility entails no tradeoff. Generally, antifragility comes at the cost of optimization. Something that is adaptable to many environments is not optimized for any single environment.

I think antifragility can often be achieved most efficiently by building generalized capabilities at a system level. Many tail-risk events are unpredictable. We can’t predict what the next black swan might require; perhaps we will need protective suits or certain types of vehicles rather than ventilators and hospital masks. Flexibility of system inputs and production is key here. We should maintain a reserve of flexible manufacturing – perhaps a federally owned stock of 3D printers and latent factories that are loaned out to businesses during normal times. As much flak as distressed or special situations hedge funds get, I think their use case is particularly salient here. Their mandates tend to be flexible enough to provide any type of capital needed to keep businesses alive. Redundancy has a cost, but the cost can be minimized by focusing on generalized capabilities (flexible capital, manufacturing, resource extraction) rather than the specific needs of any one type of crisis.

Another idea Taleb speaks about extensively is hormesis. Hormesis is the idea that certain things have beneficial effects at small doses, but become very harmful at high doses. Common examples include muscle tears, vaccines, and natural selection pressures. Taleb uses the concept to argue that in many domains, things actually strengthen when exposed to harms. The problem is, this isn’t evidence of an absolute advantage over things that don’t strengthen similarly – just a particular type of advantage. Take the example Taleb gives, of weightlifting. Small muscle tears encourage muscle growth, and ultimately result in a stronger person. But a bigger muscle base isn’t a generally “better” solution; it’s one that’s useful in cases where one has to move large amounts of weight. The tradeoff is increased energy consumption and weight, and decreased mobility. If strength were always optimal, it would be more efficient for us to be born with large muscles and save the time it takes to build them.

His argument generally seems to be that things change more often, and equilibria are less static, than we think. Prioritizing systems that allow for adaptability ensures that we’re collectively less vulnerable to shocks. There seems to be some inherent tension between antifragility and some of the ideas in The Black Swan – if hormesis is the ability to grow stronger or more context-adapted from small pressures, then high-impact, low-probability events would still be catastrophic. My reading is that these are fundamentally different situations. Things can have varying vulnerabilities to black swan events, and Taleb advocates for generalized strategies to provide flexibility in the case of these events. He further holds that generalized volatility is more common than our default orientations would have us believe, so antifragile systems are more important than we currently give them credit for.

Taleb’s overarching prescription, running through all his works, is to consciously reflect on the underlying distribution of any situation you face and adjust accordingly. If outcomes are normally distributed, they are fairly constrained; one can operate normally, on a simple expected value basis. If the system gains from slight amounts of disorder, expose it to slight shocks – tinker as much as possible, incorporating or discarding each time. If the situation has low probability, high-impact events, react based on the nature of those events. If the tails are negative, engage with the situation as little as possible – over the long term, you face blowup risk. If the tails are positive, play the game as many times as you can – any slight costs you bear in the interim will be outweighed by collecting positive outcomes.
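
To see why repetition matters so much, here’s a minimal simulation of the last two cases, with payoffs I’ve assumed for illustration (mirror-image bets: rare huge upside versus rare huge downside):

```python
import numpy as np

# Sketch of the repeated-game logic with assumed payoffs: take small-cost
# bets with rare huge upside many times, and avoid the mirror image.
rng = np.random.default_rng(1)
trials = 1_000

# Positive-tail game: lose 1 almost always, win 500 one time in a hundred.
pos = np.where(rng.random(trials) < 0.01, 500.0, -1.0)

# Negative-tail game: win 1 almost always, lose 500 one time in a hundred.
neg = np.where(rng.random(trials) < 0.01, -500.0, 1.0)

print(f"Positive-tail game, total after {trials} plays: {pos.sum():8.1f}")
print(f"Negative-tail game, total after {trials} plays: {neg.sum():8.1f}")
# The small interim losses are swamped by the rare wins in the first game;
# the steady interim wins are wiped out by the rare blowups in the second.
```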

As far as practical philosophy goes, Taleb’s books have been some of the most personally impactful of any I’ve read. Finding domains where downside is small and upside is huge (even if low likelihood) is the best way to navigate the world. In school or business, experiment with things until something really works. Look in as many crevices as you can, because all it takes is one opportunity or insight for it to be all worth it. On a personal level, if some action could be impactful to the rest of your life and the worst that can happen is a loss of some ego or a little bit of money or time, just do it! Over and over again. Read the book! Call up the old friend! Ask that person on a date! Try the new drink! Go to that new activity! These prescriptions might have the overworn feel of motivational quotes and aphorisms, but I think keeping them up-front and conscious is incredibly important. I try to make it a philosophy in life to say no to as little as possible, especially things that are novel. Life works out in mysterious ways and orienting myself towards the world this way has been the impetus for most of the things (personal, professional, intellectual) that have made me the happiest. 

On Systemic Levels

Taleb advocates for viewing risks as multidimensional – we should view them from both individual and systemic levels. These systemic levels are often themselves multidimensional, building on top of each other. Risks that have the potential to damage larger systems are more unethical. In explaining this, he invokes Elinor Ostrom quite often. Ostrom was a political economist who studied mechanisms used around the world to solve collective action problems. Her work was seminal to our understanding of game theory, microeconomics, and political economy. I think his reading of her is a bit odd. He casts her as an advocate for “skin in the game” as a way to alleviate incentive problems. Some of the mechanisms she identifies have to do with this, but I read Ostrom as being primarily concerned with system design more broadly. She advocates for systems with dynamic monitoring and enforcement mechanisms to solve collective action problems. These systems highlight the importance of antifragility, but I think Taleb’s attempt to narrow in specifically on individual liability misses the point. Still, an emphasis on systems as well as individuals makes good sense from a few different moral frameworks.

First, from a consequentialist standpoint. Human-made systems tend to be collective governance mechanisms intended to correct suboptimal societal outcomes and maximize potential good. Something that hurts a system is likely to have run-on effects that hurt multiple people. These effects aren’t linear – crossing a certain threshold can easily trap us in vicious feedback loops that cause system collapse. Any one person going bankrupt is sad, but a bankruptcy that brings the potential for the financial system to collapse is a disaster. Similarly, any one person getting a disease is bad – but enough people getting diseases that our medical infrastructure is overwhelmed is a wholly different type of disaster.

Second, as positive goods in themselves. It’s tempting to justify these arguments on purely consequentialist grounds, but Taleb seems to allude to the idea that larger systems have value in and of themselves as well. I think this is true, especially of systems that aren’t purely arbitrary, human-generated symbolic constructions. A philosophy that discourages us from viewing the natural world as a simple standing reserve is one that argues for some intrinsic value to the preservation of the biosphere, ecological systems, and Earth herself.

Implicit in this concern for systems seems to be a deprivileging of the subject. The good that should be maximized transcends the individual, or even the collection of individuals. A true reckoning demands that our orientation towards generally positive events with potentially catastrophic tail risks be reevaluated as well. 

Take, for example, GMOs. Taleb has been a vocal critic of GMOs as unacceptable risks despite their added nutritional value. He invokes the precautionary principle in his calls for us to desist from genetic engineering. Current technology allows for genomic change much more drastic than what natural mutation rates allow, meaning that even antifragile systems could find some change too drastic to cope with. In his mind, the combination of nonlinear effects, drastic departures from current equilibria of organism interaction, and potential second- and third-order dimensional effects makes GMOs an unacceptable risk. Quite frankly, his is the first legitimate-sounding critique of GMOs I’ve read. A robust framework for action must attempt to grapple with the unknown and mitigate risks accordingly. A true concession to what is unknowable might mean treading incredibly, even painfully, carefully when faced with system blowup. If we are to be legitimate in our concern for systems, this must hold true even in the face of real human suffering.

Even if we invoke the precautionary principle and prioritize system safety, most problems remain messy. Take the GMO argument above. Poverty, food instability, and vulnerability to natural disasters are all conflict magnifiers. They make flashpoints much more likely to erupt into violence and exacerbate conflicts when they do. Violence has its own consequences, and escalation can be catastrophic, especially in a world of weapons (nuclear, biological, chemical, space) with the potential to destroy any number of systems. Or, for a more timely example, consider that economic insecurity might encourage people to take the risk of engaging with comparatively unsafe wet markets, spurring on a pandemic that brings the world to its knees. Inaction can conceal long-term tail risks just as easily. My take is that a more nuanced calculus errs on the side of small changes whenever possible as a risk mitigation strategy against system blowup. When real, material benefits require drastic changes, we should weigh the relative reversibility of their risk-impacts, and our certainty about that reversibility, against each other. Pandemics cause deaths, but their impact is largely reversible on a system level – human bodies are notoriously adaptable. Species extinction is as of yet irreversible, but the progress of synthetic biology marches relentlessly onward, and I hope that soon enough extinct species can be revived. Ecosphere or habitat collapse, on the other hand, is nearly impossible to reverse. Enframing system risks through a lens like this will allow us to more easily weigh potential consequences against each other and provide a means for using the precautionary principle even in light of solvable human suffering.

On What Is Knowable

Taleb forwards the notion that exploitation goes beyond differing access to facts, positing that it’s unethical to engage in transactions where the parties have differing levels of uncertainty. I see a few problems with this analysis. First, the notion of a “fact” divorced from uncertainty. Nearly every assessment of truth is probabilistic – our assessment of something as “fact” is a function of our trust in the source, the context in which it is given to us, our assumptions about our own soundness of mind and judgement, our analysis of second-order consequences, and so on. Second, knowledge is fundamentally interpretive – it’s difficult to have any interaction without differing mediations based on our own lived experiences. The filter through which we pass knowledge, and the resulting level of uncertainty we ascribe to it, are highly individuated. Our very existence as subjects precludes an equality of certainty in any interaction. Equivalent levels of certainty strike me as rare to the point of impossible – structures that discourage coercion and exploitation would address the problems Taleb is talking about without buying into a pipe dream of equivalent certainty.

Oftentimes, Taleb reads like generalized epistemic critique. His books become a laundry list of common holes in the foundations of our knowledge. This is one of the areas where I find him most compelling, even if it leaves me confused in the way most arguments of this sort tend to do. Especially in his early work, he is much more philosophical in bent. He constantly references thinkers like Montaigne and Descartes, critiquing Descartes’ (and Kant’s) cold formal reasoning and the conceits that come with it. Although he would hate to be classified as such, he reminds me of many writers who might be categorized as postmodern. That he is mathematical in bent does not undercut the connection – in fact, Taleb himself rails against the overreliance on mathematics in understanding probability and risk mitigation. He articulates his view of probability as “principally a branch of applied skepticism”.

Keep in mind, he is writing at a specific historical moment, and his skepticism is applied as such. The era of big data and precision forecasting creates a relentless collective urge to classify and taxonomize. This classification, and the reduction that results, is what he calls “Platonification”. Taleb argues in a fairly Nietzschean way that classifications are quite arbitrary and do more to indicate the predispositions of their creator than they do to elucidate the thing in itself.[1]

For a practical example, let’s go back to the junk bond crisis. During the mid-80s, Michael Milken, a jailed-then-pardoned alumnus of my very own Wharton School, argued that bonds classified as “junk” based on certain credit metrics had historically delivered outsized risk-adjusted returns as an asset class. He went on to suggest that investors could capture excess profits if they would just invest in junk bonds and diversify across them. Milken was right – if one only looked backwards. Many historical junk bonds belonged to a category known as “fallen angels” – bonds of companies that had previously been investment grade but had fallen from grace. However, the rush of new investors to the space spurred by Milken’s insights drove the issuance of many new junk bonds, mostly from companies with credit characteristics similar to fallen angels but that had never been investment grade. Something about the fallen angels made them more likely to perform, and the pivot away from them meant a financial crisis soon followed. Put simply, the “junk” bonds before and after the crisis were not the same thing, even if they had the same name. Financial markets and human behavior are particularly difficult to predict from past datasets because they are an example of what’s known as a second-order chaotic system – a system that responds to prediction. The acts of knowledge production and system engagement fundamentally change the underlying distribution an action is predicated on.

Taleb’s critiques of Platonism lead him to emphasize localism and individuation. He advocates this at the level of government and the individual, and as a guideline for our interactions with the world. His emphasis on localism and the particularities of the specific instance, and his backlash against “Platonification”, are remarkably similar to other writers’ critiques of grand, universalizing, essentializing narratives. His statistics-based critiques of pure rationalism and of the difficulty of making the world knowable and controllable have a similar effect. The premise shared between his arguments and those of critics of essentialization and enlightenment rationalism is that more is lost in the process of classification than we tend to realize, and we should err on the side of multiplicity whenever possible. Violence and inefficiency become hidden and reified by the attempt to classify everything.

The epistemic critiques that run closest to my heart have to do with systematic biases. He argues, in classic Kahneman and Tversky fashion, that our brains are hardwired to create narratives to fit sets of facts. We create cause and effect relationships to make sense of situations that result from pure statistical noise. Counterintuitively, many random patterns actually lend themselves quite easily to interpretation of some sort. A few potential justifications are posited for our tendency to overpattern. Evolutionary biologists argue that this makes sense from a risk mitigation standpoint – it’s much more advantageous to avoid dangerous situations due to misguided pattern recognition than to do the opposite. Others, often neuroscientists or computer scientists, argue that these are mechanisms to create pattern recognition that enables us to quickly solve problems that would otherwise require insurmountable amounts of processing power. I don’t think these are mutually exclusive – both risk mitigation and decreasing cognitive load strike me as good reasons for erring on the side of over-patterning and creating decision heuristics. 

I would take these critiques a step further. The narratives we use to parse our lives are not passively handed down or random effects of the world at large. We are actively encouraged to make the world intelligible in certain ways. History is the battleground for vicious interpretive warfare. Pattern recognition is a type of grouping, which is a function of both language structures and prevailing social power dynamics. Links between events become the foundation for policy, action, resource distribution, and violence. Institutions have bolstered and perpetuated themselves through narrative creation since time immemorial. The advent of mass-market media further threatens to make everything a simulacrum – a referential entity with no “real” basis. The more our brains are hardwired to create patterns, and the less those patterns are likely to actually exist (patterns of noise rather than something “real”), the more the way we make sense of the world becomes socioculturally mediated. Even seemingly objective facts take on new dimensions based on their presented context, assumptions, and extrapolations. Take, for example, market returns. They are an easily verifiable number, but they happened as a consequence of a certain set of historical value systems, collective behaviors, and demographic patterns. That all of these factors are stable and don’t respond to prediction is a breathtaking, and likely misguided, conceit. Here, Taleb’s insights take on new significance. If there is more noise, context dependence, dimensional effects, and nonlinear impact in the world than we presume, there is less validity to experimental design or most things that reference the real and socially unmediated. The resultant void is filled by narrative creation, which is subject to prevailing power dynamics. The situation becomes borderline Orwellian. I don’t know that Taleb would agree with these arguments – he often bemoans the excesses of humanities departments and opaque writers (e.g., Derrida). Nevertheless, they seem to me to be the logical conclusion of his critiques.

Taleb would have us abandon the firm footing of certain knowledge and precise models for the murky marshes of general uncertainty and the impossibility of exhaustive proof. The natural question becomes: how, then, does one act? Taleb’s solutions often seem insufficient, or fall prey to their own critiques. Take, for example, his prescription of a “barbell” strategy for portfolio construction. He advocates keeping the majority of your assets in perfectly safe instruments like cash or treasuries and allocating a small percentage of your portfolio to very risky assets with high payoffs. To me, the notion of safety falls prey to the same assumptions Taleb critiques. Is not the US federal government the type of overarching, rigid, tightly controlled entity Taleb rails against? This is especially true in an age of freewheeling monetary and fiscal intervention. A treasury bond is a bet on the stability and consistency of the US government and a certain pattern of inflation, all of which are quite vulnerable to the type of black swan events he writes about.
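
For concreteness, here’s a minimal sketch of the barbell’s payoff profile, with return distributions I’ve assumed for illustration (and, per the critique above, with “safe” itself taken on faith):

```python
import numpy as np

# Barbell sketch with assumed returns: 90% in a "safe" asset, 10% in a
# lottery-like risky asset that usually goes to zero but occasionally 20x's.
rng = np.random.default_rng(7)
n = 100_000

safe = np.full(n, 1.02)                              # assumed 2% per period
risky = np.where(rng.random(n) < 0.05, 20.0, 0.0)    # assumed 5% chance of 20x

barbell = 0.90 * safe + 0.10 * risky
print(f"worst outcome: {barbell.min():.3f}x")   # floored near 0.918x
print(f"mean outcome : {barbell.mean():.3f}x")
print(f"best outcome : {barbell.max():.3f}x")   # open upside, ~2.918x
# The downside is bounded by construction; the upside is left open.
```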

I think that a deep reading of Taleb suggests a few takeaways. First, the world is one of uncertainty. The datasets and sources of knowledge we use to understand the world suffer from the impossibility of complete certainty. A rational orientation towards the world is one that accedes to uncertainty and builds resiliency. Whenever possible, one should create systems that are adaptable and expose oneself to situations with disproportionate upside. Second, Platonism carries inherent, often underappreciated risks. In our rush to classify and taxonomize, we ignore many important characteristics of the things-in-themselves that we work with. Still, categorization serves a purpose – an ordered world is an efficient one. Pattern recognition makes decisions quick and easy. Standardized processes allow for economies of scale. In light of Taleb’s critiques, I think the most reasonable approach is one of Platonification and adjustment. Categorize to find general principles that might apply to any given situation, and then adjust according to what you observe about the thing in itself. Always approach your own knowledge and classifications of the world with a healthy degree of skepticism, and do your utmost to be a person who can gain from disorder.


[1] “We call love what binds us to certain creatures only by reference to a collective way of seeing for which books and legends are responsible. But of love I know only that mixture of desire, affection, and intelligence that binds me to this or that creature. That compound is not the same for another person. I do not have the right to cover all these experiences with the same name. This exempts one from conducting them with the same gestures. The absurd man multiplies here again what he cannot unify. Thus he discovers a new way of being which liberates him at least as much as it liberates those who approach him.”

— Albert Camus, The Myth of Sisyphus.
