A Discussion about Whether Austrian Economists and Value Investors Agree on How Intrinsic Value is Determined.
CSInvesting: Understand that Intrinsic Value is SUBJECTIVELY determined while prices are set by the marginal buyer and seller. All an investor does is compare price to value.
Essentially, value investing focuses on the comparison of a good’s intrinsic value and its market price and recommends investing in it as long as the asset’s value exceeds its price given a margin of safety.
The first article says in summary: value investing and Austrian economics are nevertheless incompatible, particularly given that value investing’s definition of value contradicts the Austrian value concept.
An Austrian economist who is also a value investor, Chris Leithner rebuts the above statement: “Value investors’ conception and assessment of value are congruent with the Austrian School’s.”
“A value investor” measures value by one of two methods:
First, he/she values a company according to the external prices of its assets. He/she observes, for example, that X Ltd owns quantity Y of land, and that such land has a market price of $Z per hectare.
Second, the value investor makes plausible (based, perhaps, upon past experience and/or domain specific expertise) assumptions about a company’s future cash flows and, using some rate, discounts them to the present. He might do these calculations in his head or on a spreadsheet.
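The second method can be sketched in a few lines. This is a minimal illustration of discounting assumed cash flows to the present; the cash flows and the 10% discount rate are invented for the example, not figures from any company.

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of future annual cash flows back to today."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Five years of assumed cash flows ($m), discounted at an assumed 10%:
flows = [100, 105, 110, 116, 122]
value = present_value(flows, 0.10)
print(round(value, 1))  # ≈ 415.3 with these invented figures
```

The "some rate" in the text is doing all the work here: as John Burr Williams notes below, reasonable people choose different discount rates, and the resulting value is correspondingly subjective.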
The Hinge between the Theory of Value and the Practice of Value Investing
John Burr Williams, in his The Theory of Investment Value (1938), wrote: "With bonds, as with stocks, prices are determined by marginal opinion…Concerning the right and proper interest rate (discount rate), however, opinions can easily differ, and differ widely…Hence those who believe in a low rate will consent to pay high prices for bonds…while those who believe in a high rate will insist on low prices…Thus investors will be bullish or bearish on bonds according to whether they believe low or high interest rates to be suitable under prevailing economic conditions. As a result, the actual price of bonds…will thus be only an expression of opinion, not a statement of fact. Today's opinion will make today's rate; tomorrow's opinion, tomorrow's rate; and seldom if ever will any rate be exactly right as proved by the event."
How then does Warren Buffett define and measure value? In his 1994 Letter to Shareholders he writes:
We (Charlie Munger and I) define intrinsic value as the discounted value of the cash that can be taken out of a business during its remaining life. Anyone calculating intrinsic value necessarily comes up with a highly subjective figure that will change both as estimates of future cash flows are revised and as interest rates move. Despite its fuzziness, however, intrinsic value is all-important and is the only logical way to evaluate the relative attractiveness of investments and businesses.
Graham, by the way, would agree with the definition of intrinsic value but he would doubt whether investors could usefully apply it. (Ben Graham, 1939) “The rub,” writes James Grant in the 6th Edition of Security Analysis (2009), page 18, “was that, in order to apply Williams’s method, one needed to make some very large assumptions about the future course of interest rates, the growth of profit, and the terminal value of the shares when growth stops.”
The video below–though choppy in the first few minutes–is worth watching for its discussion of the psychology of market bubbles. The interviewer of Bob Moriarty is ignorant of basic economics (Can prices EVER go below the cost of producing a useful/needed product? Yes or No), but you can follow the discussion. Note the pushback from the interviewer, who is also an owner of bitcoins, against Moriarty's questions. The psychology is fascinating–the will to believe and suspend judgment.
Investing might be considered decision-making under uncertainty; hence, the following exam.
You are now in the final pool of candidates to work for a big hedge fund, and you must answer BOTH questions correctly to be hired. Now comes the test:
Imagine playing the following game. At a casino table is a brass urn containing 100 balls, 50 red and 50 black. You're asked to choose a color. Your choice is recorded but not revealed to anyone, after which the casino attendant draws a ball randomly out of the urn. If the color you chose is the same as the color of the ball, you win $10,000. If it isn't, you win nothing ($0.00).
You are only allowed to play once–which color would you prefer, and what is the maximum bid you would pay to play? Why?
Now imagine playing the same game, but with a second urn containing 100 balls in UNKNOWN proportions. There might be 100 black balls and no red balls, or 100 red balls and no black balls, or ANY proportion in between those two extremes. Suppose we play the exact same game as Game 1, but using this urn with its unknown proportions.
What is your bid to play this game IF you decide to play? How does the “risk” in this game (#2) compare to game (#1)?
Take no more than a minute. So are you hired?!
Answer posted this weekend.
A Reader provides a clearer distinction on Question 2:
Your second problem is ill-specified for your desired effect. You write that all combinations of red/black balls within the 100-ball population ARE possible; you don't say they are equally probable. You need to assume them to be equally probable in order for the reader to infer that the expectations are identical between problem 1 and problem 2.
The reason is that without defined probabilities on the possible ratios, the long-run frequency of draws from the second urn isn't calculable. Hence the expected value cannot be computed and therefore cannot be compared to the EV of problem 1 (you need probabilities in a probability-weighted average, after all).
You could suggest that the offeree has a 50/50 chance of choosing the correct colour (even if the long-run frequencies are not known). But this is not an argument born from expected value. It is an argument of chance, and it assumes the offeree has no additional information from which to make their decision (which is hardly ever the case).
There are 101 possible proportions of red/black: 100 red balls/0 black, 99 red/1 black, 98/2, 97/3 (with 100%, 99%, 98%, 97% probability of drawing a red ball), all the way down to 2 red/98 black, 1/99, and 0/100. Put equal weight on each, since we have no information favoring any of them. Averaging the expected payoffs across all these alternative urns gives an expected value of $5,000, the same as Game 1.
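The reader's averaging argument can be checked in a few lines. This is a sketch of the computation, not part of the original quiz: suppose you pick red, weight every possible split equally, and average the winning probabilities.

```python
PAYOFF = 10_000

def game2_expected_value(n_balls=100, payoff=PAYOFF):
    """Average the expected payoff over all equally weighted splits.

    Under the split with k red balls, P(win with red) = k / n_balls.
    There are n_balls + 1 possible splits (k = 0 .. n_balls).
    """
    total = sum(k / n_balls for k in range(n_balls + 1))
    return payoff * total / (n_balls + 1)

print(game2_expected_value())  # 5000.0 — same as Game 1's 0.5 * $10,000
```

By symmetry the same number comes out if you pick black, which is why the uniform-weighting assumption makes the two games identical in expected value.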
The two games describe the Ellsberg Paradox, after the example in Ellsberg's seminal paper. Thinking isn't the same as feeling: you can think the two games have equal odds, yet you just don't feel the same about them. When there is any ambiguity about the risks, people immediately become more cautious and conservative. Fear of the unknown is one of the most potent kinds of fear there is, and the natural reaction is to get as far away from it as possible.
So, if you said the two games have identical expected values, then A+. The price you would bid depends upon your margin of safety/comfort. You would be rational to bid up to $4,999.99, since that is less than the expected payoff of $5,000, but the risk of losing $4,999.99 might not be worth it to you despite the positive expected payoff. A bid of $3,000 or $1,000 might be rational for you. The main point is to understand that the two games are equivalent even though they don't appear so on the surface.
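Why a bid far below the $5,000 expected payoff can still be rational follows from risk aversion: with a concave utility function, the certainty equivalent of the gamble is less than its expected value. The square-root utility below is an illustrative assumption, not anyone's actual preferences.

```python
import math

def certainty_equivalent(payoff, p, utility=math.sqrt):
    """Largest sure amount with the same expected utility as the gamble."""
    eu = p * utility(payoff) + (1 - p) * utility(0.0)
    return eu ** 2  # inverse of the sqrt utility (specific to this choice)

# A sqrt-utility player values the 50/50 shot at $10,000 at only $2,500:
print(round(certainty_equivalent(10_000, 0.5)))  # 2500
```

So a bid of $3,000 or $1,000 is not a thinking error; it simply reveals how steeply the bidder's utility curves away from the expected-value line.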
The Ellsberg paradox is a paradox in decision theory in which people’s choices violate the postulates of subjective expected utility. It is generally taken to be evidence for ambiguity aversion. The paradox was popularized by Daniel Ellsberg, although a version of it was noted considerably earlier by John Maynard Keynes. READ his paper: Ellsberg, “Risk, Ambiguity, and the Savage Axioms” (1961).
Who was fooled?
Anyone not answering correctly or NOT answering has to go on a date with my ex:
The Stock Market: Risk vs. Uncertainty
Life is risky. The future is uncertain. We’ve all heard these statements, but how well do we understand the concepts behind them? More specifically, what do risk and uncertainty imply for stock market investments? Is there any difference in these two terms?
Risk and uncertainty both relate to the same underlying concept—randomness. Risk is randomness in which events have measurable probabilities, wrote economist Frank Knight in 1921 in Meaning of Risk and Uncertainty.1 Probabilities may be attained either by deduction (using theoretical models) or induction (using the observed frequency of events). For example, we can easily deduce the probabilities of the possible outcomes of a game of dice. Similarly, economists can deduce probability distributions for stock market returns based on theoretical models of investor behavior.
On the other hand, induction allows us to calculate probabilities from past observations where theoretical models are unavailable, possibly because of a lack of knowledge about the underlying relation between cause and effect. For instance, we can induce the probability of suffering a head injury when riding a bicycle by observing how frequently it has happened in the past. In a like manner, economists estimate probability distributions for stock market returns from the history of past returns.
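The deduction/induction distinction above can be illustrated with the article's own dice example. This is a toy sketch; the simulation stands in for "observed history."

```python
from fractions import Fraction
import random

# Deduction: from the model of a fair six-sided die,
# P(roll is 5 or 6) = 2/6, no observation needed.
deduced = Fraction(2, 6)

# Induction: estimate the same probability from observed frequency.
# (A seeded simulation stands in for a record of past rolls.)
random.seed(42)
rolls = [random.randint(1, 6) for _ in range(100_000)]
induced = sum(r >= 5 for r in rolls) / len(rolls)

print(float(deduced), round(induced, 2))
```

The two routes converge here because the die's behavior is stable; the article's point is that stock market "dice" offer no such guarantee.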
Whereas risk is quantifiable randomness, uncertainty isn’t. It applies to situations in which the world is not well-charted. First, our world view might be insufficient from the start. Second, the way the world operates might change so that past observations offer little guidance for the future. Once bicyclists were encouraged to wear helmets, the relation between riding the bicycle—the cause—and the probability of suffering a head injury—the effect—changed. You might simply think that the introduction of helmets would have reduced the number of head injuries. Rather, the opposite happened: the number of head injuries actually increased, possibly because helmet-wearing bikers started riding in a riskier manner due to a false perception of safety.2
Typically, in situations of choice, risk and uncertainty both apply. Many situations of choice are unprecedented, and uncertainty about the underlying relation between cause and effect is often present. Given that risk is quantifiable, it is not surprising that academic literature on stock market randomness deals exclusively with stock market risk. On the other hand, ignorance of uncertainty may be hazardous to the investor’s financial health.
Stock market uncertainty relates to imperfect information about how the world behaves. First, how well do we understand the process that generated historical stock market returns? Second, even if we had perfect information about past processes, can we assume that the same relation between cause and effect will apply in the future?
The Highs and Lows of the Market
Warren Buffett, the world’s second-richest man, distinguishes between periods of comparatively high and low stock market valuation. In the early 1920s, stock market valuation was comparatively low, as measured by the inflation-adjusted present value of future dividends. The attractive valuation of stocks relative to bonds became a widely held belief after Edgar Lawrence Smith published a book in 1924 on stock market valuation, Common Stocks as Long Term Investments. Smith argued that stocks not only offer dividends, but also capital appreciation through retained earnings. The book, which was reviewed by John Maynard Keynes in 1925, gave cause to an unprecedented stock market appreciation. The inflation-adjusted annual average growth rate of a buy-and-hold investment in large-company stocks established at the end of 1925 amounted to a staggering 32.13 percent at the end of 1928.
On the other hand, over the next four years, this portfolio depreciated at an average annual rate of 17.28 percent, inflation-adjusted. Taken together, over the entire seven-year period, the inflation-adjusted average annual growth rate of this portfolio came to a meager 1.11 percent. Buy-and-hold portfolios in allegedly unattractive long-term corporate and government bonds, on the other hand, grew at inflation-adjusted average annual rates of 10.18 and 9.83 percent, respectively. This proves Buffett’s point: “What the few bought for the right reason in 1925, the many bought for the wrong reason in 1929.” One conclusion from this episode is that learning about the stock market may feed back into the market and, by changing the behavior of the market, render our “learning” useless or—if we don’t recognize the feedback effect—hazardous.
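The compounding arithmetic in the two paragraphs above can be verified directly. This is my own check using the article's stated figures (+32.13% per year for three years, then −17.28% per year for four years):

```python
def annualized(rates_and_years):
    """Annualized growth rate from a sequence of (annual rate, years) legs."""
    growth, years = 1.0, 0
    for rate, n in rates_and_years:
        growth *= (1 + rate) ** n
        years += n
    return growth ** (1 / years) - 1

# End-1925 to end-1928, then end-1928 to end-1932:
seven_year = annualized([(0.3213, 3), (-0.1728, 4)])
print(f"{seven_year:.2%}")  # ~1.11% per year over the full seven years
```

Three spectacular years followed by four bad ones really do net out to roughly nothing, which is the article's point about the hazard of extrapolating from the boom.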
Is Tomorrow Another Day?
Risk and uncertainty are two concepts that stem from randomness. Neither is fully understood. Although risk is quantifiable, uncertainty is not. Rather, uncertainty arises from imperfect knowledge about the way the world behaves. Most importantly, uncertainty relates to the questions of how to deal with the unprecedented, and whether the world will behave tomorrow in the way it behaved in the past.
This article was adapted from “The Stock Market: Beyond Risk Lies Uncertainty,” which was written by Frank A. Schmid and appeared in the July 2002 issue of The Regional Economist, a St. Louis Fed publication.
For those who are interested and are in NYC: Blockchain Technology Versus Fiat Currency
The next CMRE event will be held on October 3 at the University Club in New York City: Blockchain Technology Versus Fiat Currency. Speakers will include noted author George Gilder, Ethereum co-founder Joseph Lubin, thought-leader Saifedean Ammous, and more.
Topics will range from an introduction of blockchain technology, economic implications, the politics surrounding private currencies, and the role of gold. Full program to come.
Check back on www.cmre.org for more information and to purchase tickets.
Even if the mention of a “Gold Standard” makes your eyes glaze over, the video above and the article below show you how a monetary system SHOULD WORK. More importantly, you learn how the US can extract itself from ever-compounding debt. Currently, the FED is destroying savers in the name of “helping” the economy. Learn how credit can expand and contract WITHOUT booms and busts.
Shale gas is not a revolution. It’s just another play with a somewhat higher cost structure but larger resource base than conventional gas.
The marginal cost of shale gas production is $4/mmBtu despite popular but incorrect narratives that it is lower. The average spot price of gas has been $3.77 since shale gas became the sustaining factor in U.S. supply (2009-2017). Medium-term prices should logically average about $4/mmBtu.
A crucial consideration going forward, however, will be the availability of capital. Credit markets have been willing to support unprofitable shale gas drilling since the 2008 Financial Collapse. If that support continues, medium-term prices for gas may be lower, perhaps in the $3.25/mmBtu range. The average spot price for the last 7 months has been $3.13.
Gas supply models over the last 50 years have been consistently wrong. Over that period, experts all agreed that existing conditions of abundance or scarcity would define the foreseeable future. That led to billions of dollars of wasted investment in LNG import facilities.
Today, most experts assume that gas abundance and low price will define the next several decades because of shale gas. This has led to massive investment in LNG export facilities.
(CSInvesting: You should read Mr. Berman’s full report at the link below. He uses history to debunk long-term prediction models and shows the common sense of looking at markets through the long lens of history. The assumption of abundant natural gas could be wrong–many “experts” are not even thinking of vastly different outcomes to their models.)
Has the meteoric rise of passive investing generated the “greatest bubble ever”?
The better we understand the baked-in biases of algorithmic investing, the closer we can come to answers.
The following article was originally published in “What I Learned This Week” on June 15, 2017. To learn more about 13D’s investment research, visit https://latest.13d.com/tagged/wiltw
In an article for Bloomberg View last week titled “Why It’s Smart to Worry About ETFs”, Noah Smith wrote the following prescient truth: “No one knows the basic laws that govern asset markets, so there’s a tendency to use new technologies until they fail, then start over.” As we explored in WILTW June 1, 2017, algorithmic accountability has become a rising concern among technologists as we stand at the precipice of the machine-learning age. For more than a decade, blind faith in the impartiality of math has suppressed proper accounting for the inevitable biases and vulnerabilities baked into the algorithms that dominate the Digital Age. In no sector could this faith prove more costly than finance.
The rise of passive investing has been well-reported, yet the statistics remain staggering. According to Bloomberg, Vanguard saw net inflows of $2 billion per day during the first quarter of this year. According to The Wall Street Journal, quantitative hedge funds are now responsible for 27% of all U.S. stock trades by investors, up from 14% in 2013. Based on a recent Bernstein Research prediction, 50% of all assets under management in the U.S. will be passively managed by early 2018.
In these pages, we have time and again expressed concern about the potential distortions passive investing is creating. Today, evidence is everywhere in the U.S. economy — record low volatility despite a news cycle defined by turbulence; a stock market controlled by extreme top-heaviness; and many no-growth companies seeing ever-increasing valuation divergences. As always, the key questions are when will passive strategies backfire, what will prove the trigger, and how can we mitigate the damage to our portfolios? The better we understand the baked-in biases of algorithmic investing, the closer we can come to answers.
Over the last year, few have sounded the passive alarm as loudly as Steven Bregman, co-founder of investment advisor Horizon Kinetics. He believes record ETF inflows have generated “the greatest bubble ever” — “a massive systemic risk to which everyone who believes they are well-diversified in the conventional sense are now exposed.”
Bregman explained his rationale in a speech at a Grant’s conference in October:
“In the past two years, the most outstanding mutual fund and holding-company managers of the past couple of decades, each with different styles, with limited overlap in their portfolios, collectively and simultaneously underperformed the S&P 500…There is no precedent for this. It’s never happened before. It is important to understand why. Is it really because they invested poorly? In other words, were they the anomaly for underperforming — and is it reasonable to believe that they all lost their touch at the same time, they all got stupid together? Or was it the S&P 500 that was the anomaly for outperforming? One part of the answer we know… If active managers behave in a dysfunctional manner, it will eventually be reflected in underperformance relative to their benchmark, and they can be dismissed. If the passive investors behave dysfunctionally, by definition this cannot be reflected in underperformance, since the indices are the benchmark.”
At the heart of passive “dysfunction” are two key algorithmic biases: the marginalization of price discovery and the herd effect. Because shares are not bought individually, ETFs neglect company-by-company due diligence. This is not a problem when active managers can serve as a counterbalance. However, the more capital that floods into ETFs, the less power active managers possess to force algorithmic realignments. In fact, active managers are incentivized to join the herd—they underperform if they challenge ETF movements based on price discovery. This allows the herd to crowd assets and escalate their power without accountability to fundamentals.
With Exxon as his example, Bregman puts the crisis of price discovery in a real-world context:
“Aside from being 25% of the iShares U.S. Energy ETF, 22% of the Vanguard Energy ETF, and so forth, Exxon is simultaneously a Dividend Growth stock and a Deep Value stock. It is in the USA Quality Factor ETF and in the Weak Dollar U.S. Equity ETF. Get this: It’s both a Momentum Tilt stock and a Low Volatility stock. It sounds like a vaudeville act…Say in 2013, on a bench in a train station, you came upon a page torn from an ExxonMobil financial statement that a time traveler from 2016 had inadvertently left behind. There it is before you: detailed, factual knowledge of Exxon’s results three years into the future. You’d know everything except, like a morality fable, the stock price: oil prices down 50%, revenue down 46%, earnings down 75%, the dividend-payout ratio almost 3x earnings. If you shorted, you would have lost money…There is no factor in the algorithm for valuation. No analyst at the ETF organizer—or at the Pension Fund that might be investing—is concerned about it; it’s not in the job description. There is, really, no price discovery. And if there’s no price discovery, is there really a market?”
We see a similar dynamic at play with quants. Competitive advantage comes from finding data points and correlations that give an edge. However, incomplete or esoteric data can mislead algorithms, so the pool of valuable insights is self-limiting. This means the more money quants manage, the more the same inputs and formulas are utilized, crowding certain assets. This dynamic is what caused the “quant meltdown” of 2007. Since then, quants have become more sophisticated as they integrate machine learning, yet the risk of overusing algorithmic strategies remains.
Writing about the bubble-threat quants pose, Wolf Street’s Wolf Richter pinpoints the herd problem:
“It seems algos are programmed with a bias to buy. Individual stocks have risen to ludicrous levels that leave rational humans scratching their heads. But since everything always goes up, and even small dips are big buying opportunities for these algos, machine learning teaches algos precisely that, and it becomes a self-propagating machine, until something trips a limit somewhere.”
As Richter suggests, there’s a flip side to the self-propagating coin. If algorithms have a bias to buy, they can also have a bias to sell. As we explored in WILTW February 11, 2016, we are concerned about how passive strategies will react to a severe market shock. If a key sector failure, a geopolitical crisis, or even an unknown, “black box” bias pulls an algorithmic risk trigger, will the herd run all at once? With such a concentrated market, an increasing amount of assets in weak hands have the power to create a devastating “sell” cascade—a risk tech giant stocks demonstrated over the past week.
With leverage on the rise, the potential for a “sell” cascade appears particularly threatening. Quant algorithms are designed to read market tranquility as a buy-sign for risky assets—another bias of concern. Currently, this is pushing leverage higher. As reported by The Financial Times, Morgan Stanley calculates that equity exposure of risk parity funds is now at its highest level since its records began in 1999.
This risk is compounded by the ETF transparency-problem. Because assets are bundled, it may take dangerously long to identify a toxic asset. And once toxicity is identified, the average investor may not be able to differentiate between healthy and infected ETFs. (A similar problem exacerbated market volatility during the subprime mortgage crisis a decade ago.) As Noah Smith writes, this could create a liquidity crisis: “Liquidity in the ETF market might suddenly dry up, as everyone tries to figure out which ETFs have lots of junk and which ones don’t.”
J.P. Morgan estimated this week that passive and quantitative investors now account for 60% of equity assets, which compares to less than 30% a decade ago. Moreover, they estimate that only 10% of trading volumes now originate from fundamental discretionary traders. This unprecedented rate of change no doubt opens the door to unaccountability, miscalculation and in turn, unforeseen consequence. We will continue to track developments closely as we try and pinpoint tipping points and safe havens. As we’ve discussed time and again with algorithms, advancement and transparency are most-often opposing forces. If we don’t pry open the passive black box, we will miss the biases hidden within. And given the power passive strategies have rapidly accrued, perpetuating blind faith could prove devastating.
A Reader’s question that I post below so the many intelligent folks that read this can chip in their thoughts….
The part that confuses me the most is this:
From what I gather, Greenblatt typically calculates his measurement of normal EBITDA – MCX. He then puts a conservative multiple on this, typically 8 or 10 times EBITDA – MCX. He says higher-quality companies may deserve 12x or more. He often says something like “this is a 10% cash return that is growing at 6% a year. A growing income is worth much more than a flat income.” He seems to do this on pages 309-310 of the notes you sent me (complete-notes-on-special-sit-class-joel-greenblatt_2).
My question is: Greenblatt’s calculation of earnings (EBITDA – MCX) only includes the maintenance portion of capital expenditure. The actual cash flow may be lower because of growth capex. Yet he is assuming a 6% growing income. It seems strange to me that he calculates the steady-state income (no growth capex, only maintenance capex), yet assumes that the income will grow. It seems like he is assuming the income will grow 6% but doesn’t include the growth capex in his earnings calculation. Is it logical to assume that the steady-state earnings will grow without deducting the cost of the growth capex from the earnings?
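To make the reader's concern concrete, here is a minimal sketch. Every figure is invented for illustration, and the function simply mechanizes the multiple-on-steady-state-earnings approach the reader describes; it is not Greenblatt's actual model.

```python
def greenblatt_value(ebitda, maintenance_capex, multiple):
    """Value steady-state earnings (EBITDA - maintenance capex) at a multiple."""
    return (ebitda - maintenance_capex) * multiple

# Invented figures ($m):
ebitda, mcx, growth_capex = 100.0, 20.0, 30.0

steady_state = ebitda - mcx               # 80: the income the multiple is applied to
actual_fcf = steady_state - growth_capex  # 50: cash actually left after funding growth

print(greenblatt_value(ebitda, mcx, 10))  # 800.0 — values the 80 as if distributable
print(actual_fcf)                         # 50.0 — what the owner could take out today
```

The gap between 80 and 50 is exactly the reader's question: the multiple is applied to an income figure that could only be fully distributed if the assumed 6% growth (which the growth capex is buying) were given up.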
The boom is not a malfunctioning entrepreneurial impulse, but an artificial lengthening of production and overcapacity in fixed assets induced by the fractional-reserve banking system. Everyone who keeps funds in the market or in a bank is vulnerable, since it is cash deposits that banks use to fund the reckless expansion. When the banking system blows up—as it must—conservative savers lose their savings just as surely as ardent speculators: that is the real horror, and also why the existence of a dynamic sector in the economy does not change the credit-bubble analysis.
I wonder how Mr. Pabrai thinks the market misprices a security by 90%. It has been my experience that when you think you have a company priced at $10 per share but worth $100, you had better check your valuation. For a stock to go up 10 times, you are betting on profitable growth or a change in the environment.
The value of the video is given in the reminder to go through your value lines or stock guides to give you context and ideas! In the course I am designing, we will have access to Value-Line to constantly search.
Go where they ain’t (but patience is needed in huge dollops):
HEDGE FUND ANALYST QUIZ
Your boss calls you into his office and asks whether the Fed should keep raising rates. Then he asks whether the Fed should lower rates. What do you tell him? There is ONLY one correct answer. To KEEP your job you must answer correctly.
June 19, 2017
Hyman Minsky was an economist who popularised the idea that “stability leads to instability”. According to Minsky and his followers, credit expands rapidly during the good times to the point where a lot of borrowing is being done by financially fragile/vulnerable entities, thus sowing the seeds of a financial crisis. That’s why the start of a financial crisis is now often referred to as a “Minsky moment”. Unfortunately, Minsky’s analysis was far too superficial.
Minsky described a process during which financing becomes increasingly speculative. At the start, most of the debt that is taken on can be serviced and repaid using the cash flows generated by the debt-financed investment. At this stage the economy is robust. However, financial success and rising asset prices prompt both borrowers and lenders to take on greater risk, until eventually the economy reaches the point where the servicing of most new debt depends on further increases in asset prices. At this stage the economy is fragile, because anything that interrupts the upward trend in asset prices will potentially set in motion a large-scale liquidation of investments and an economic bust.
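Minsky's process can be caricatured with his own three financing regimes. The labels (hedge, speculative, Ponzi) are Minsky's; the threshold logic below is my simplification for illustration, not a quantitative model.

```python
def classify(cash_flow, interest_due, principal_due):
    """Classify a borrower by whether its income services its debt (Minsky's taxonomy)."""
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # income services and repays the debt
    if cash_flow >= interest_due:
        return "speculative"  # income covers interest; principal must be rolled over
    return "ponzi"            # servicing depends on rising asset prices or new borrowing

# The same debt looks progressively worse as income assumptions are relaxed:
print(classify(120, 30, 50))  # hedge
print(classify(60, 30, 50))   # speculative
print(classify(20, 30, 50))   # ponzi
```

The fragility Minsky describes is the economy-wide drift of borrowers from the first category toward the third, so that an interruption in asset-price gains forces liquidation.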
This description of the process is largely correct, but rather than drilling down in an effort to find the underlying causes Minsky takes the route of most Keynesians and assumes that the process occurs naturally. That is, underpinning Minsky’s analysis is the assumption that an irresistible tendency to careen from boom to bust and back again is inherent in the capitalist/market economy.
In the view of the world put forward by Keynesians in general and Minsky in particular, people throughout the economy gradually become increasingly optimistic for no real reason and eventually this increasing optimism causes them to take far too many risks. The proverbial chickens then come home to roost (the “Minsky moment” happens). It never occurs to these economists that while any individual could misread the situation and make an investing error for his own idiosyncratic reasons, the only way that there could be an economy-wide cluster of similar errors at the same time is if the one price that affects all investments is providing a misleading signal. The one price that affects all investments is, of course, the price of credit.
Prior to the advent of central banks the price of credit was routinely distorted by fractional reserve banking, which is not a natural part of a market economy. These days, however, the price of credit is distorted primarily by central banks, and the central bank is most definitely not a natural part of a market economy. Therefore, what is now often called a “Minsky moment” could more aptly be called a “central-bank moment”.
I expect the next “central-bank moment” to arrive within the coming 12 months. I also expect that when it does arrive it will generally be called a “Minsky moment” or some other name that deftly misdirects the finger of blame, and that central banks will generally be seen as part of the solution rather than what they are: the biggest part of the problem.