*Investing might be considered decision-making under uncertainty; hence the following exam.*

You are now in the final pool of candidates to work for a big hedgie fund. You must answer **BOTH questions correctly to be hired**. Now comes

**Question 1:**

Imagine playing the following game. At a casino table sits a brass urn containing 100 balls, 50 red and 50 black. You’re asked to choose a color. Your choice is recorded but not revealed to anyone; the casino attendant then draws a ball randomly from the urn. **If the color you chose is the same as the color of the ball, you win $10,000. If it isn’t, you win nothing ($0.00).**

You are only allowed to play once. Which color would you prefer, and **what is the maximum you would bid to play?** Why?

—

**Question 2:**

Now imagine playing the same game, but with a second urn containing 100 balls in UNKNOWN proportions. There might be 100 black balls and no red balls, 100 red balls and no black balls, or ANY proportion in between those two extremes. Suppose we play the exact same game as game 1, but using this urn containing balls of unknown colors.

What would you bid to play this game, IF you decide to play at all? How does the “risk” in game #2 compare to game #1?

Take no more than a minute. So are you hired?!

Answer posted this weekend.

**ANSWER (9/10/2017)**

A reader provides a clearer distinction in Question 2:

Your second problem is ill-specified for your desired effect. You write that all combinations of red/black balls within the 100-ball population ARE possible; you don’t say they are equally probable. You need to assume they are equally probable in order for the reader to infer that the expectations are identical between problem 1 and problem 2.

The reason is that without defined probabilities on the possible ratios, the long-run frequency of draws from the second urn isn’t calculable. Hence the expected value cannot be computed, and therefore cannot be compared to the EV of problem 1 (you need probabilities in a probability-weighted average, after all).

You could suggest that the offeree has a 50/50 chance of choosing the correct colour (even if the long-run frequencies are not known). But this is not an argument born from expected value. It is an argument of chance, and it assumes the offeree has no additional information from which to make their decision (which is hardly ever the case).

—

There are 101 possible compositions of red/black: 100 red balls/0 black balls, 99 red/1 black, 98/2, 97/3 (giving a 100%, 99%, 98%, 97% chance of drawing a red ball), and so on all the way down to 2 red/98 black, 1/99, 0/100. Put equal weight on each composition, since it is random. **Averaging the expected payoffs across all these alternative realities** gives an **expected value of $5,000**, the same as Game 1.
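That average is easy to verify directly. A minimal sketch (assuming, as the reader’s correction requires, that all compositions are equally likely):

```python
# Expected value of Game 2 under a uniform prior over urn compositions.
# There are 101 equally likely urns (k red balls, k = 0..100); if the
# player picks red, P(win | k red balls) = k/100.
PAYOFF = 10_000

p_win = sum(range(101)) / 100 / 101   # average win probability over all urns
expected_value = p_win * PAYOFF

print(p_win)           # 0.5
print(expected_value)  # 5000.0
```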

The two games describe the *Ellsberg Paradox*, named after the example in Daniel Ellsberg’s seminal paper. Thinking isn’t the same as feeling. You can think the two games have equal odds, but you just don’t *feel* the same about them. When there is any ambiguity about the risks, people immediately become more cautious and conservative. Fear of the unknown is one of the most potent kinds of fear there is, and the natural reaction is to get as far away from it as possible.

So, if you said the two games offer exactly the same probabilities, then A+. The price you would bid depends upon your margin of safety/comfort. You would be rational to bid up to $4,999.99, since that is less than the expected payoff of $5,000. But risking $4,999.99 on a single draw might not be worth it to you despite the positive expected payoff; a bid of $3,000 or $1,000 might be rational for you. The main point is to understand that the two games are equivalent even though they don’t appear to be on the surface.

**The Ellsberg paradox** is a paradox in decision theory in which people’s choices violate the postulates of subjective expected utility. It is generally taken to be evidence for ambiguity aversion. The paradox was popularized by Daniel Ellsberg, although a version of it was noted considerably earlier by John Maynard Keynes. READ his paper: ellsberg

Who was fooled?

—

Anyone not answering correctly or NOT answering has to go on a date with my ex:

—

## The Stock Market: Risk vs. Uncertainty

Life is risky. The future is uncertain. We’ve all heard these statements, **but how well do we understand the concepts behind them?** More specifically, what do risk and uncertainty imply for stock market investments? Is there any difference between these two terms?

Risk and uncertainty both relate to the same underlying concept—randomness. **Risk is randomness in which events have measurable probabilities**, wrote economist Frank Knight in 1921 in *Meaning of Risk and Uncertainty*. Probabilities may be attained either by deduction (using theoretical models) or induction (using the observed frequency of events). For example, we can easily deduce the probabilities of the possible outcomes of a game of dice. Similarly, economists can deduce probability distributions for stock market returns based on theoretical models of investor behavior.
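To make the dice example concrete (my illustration, not the article’s), deduction here is pure counting: all 36 ordered outcomes of two fair dice are equally likely, so the exact distribution of their sum follows with no observations at all.

```python
from collections import Counter
from itertools import product

# Deduction from a theoretical model: the sum of two fair dice.
# Each of the 36 ordered outcomes is equally likely, so counting
# them yields exact probabilities -- no data needed.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {total: n / 36 for total, n in counts.items()}

print(probs[7])  # 6/36, the most likely sum
print(probs[2])  # 1/36, snake eyes
```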

On the other hand, induction allows us to calculate probabilities from past observations where theoretical models are unavailable, possibly because of a lack of knowledge about the underlying relation between cause and effect. For instance, we can induce the probability of suffering a head injury when riding a bicycle by observing how frequently it has happened in the past. **In a like manner, economists estimate probability distributions for stock market returns from the history of past returns.**

**Whereas risk is quantifiable randomness, uncertainty isn’t.** It applies to situations in which the world is not well-charted. First, our world view might be insufficient from the start. Second, the way the world operates might change so that past observations offer little guidance for the future. Once bicyclists were encouraged to wear helmets, the relation between riding the bicycle—the cause—and the probability of suffering a head injury—the effect—changed. You might simply think that the introduction of helmets would have reduced the number of head injuries. Rather, the opposite happened. The number of head injuries actually increased, possibly because helmet-wearing bikers started riding in a riskier manner due to a false perception of safety.

Typically, in situations of choice, risk and uncertainty both apply. Many situations of choice are unprecedented, and uncertainty about the underlying relation between cause and effect is often present. Given that risk is quantifiable, it is not surprising that academic literature on stock market randomness deals exclusively with stock market risk. On the other hand, ignorance of uncertainty may be hazardous to the investor’s financial health.

Stock market uncertainty relates to imperfect information about how the world behaves. First, how well do we understand the process that generated historical stock market returns? Second, even if we had perfect information about past processes, can we assume that the same relation between cause and effect will apply in the future?

**The Highs and Lows of the Market**

Warren Buffett, the world’s second-richest man, distinguishes between periods of comparatively high and low stock market valuation. In the early 1920s, stock market valuation was comparatively low, as measured by the inflation-adjusted present value of future dividends. The attractive valuation of stocks relative to bonds became a widely held belief after Edgar Lawrence Smith published a book in 1924 on stock market valuation, *Common Stocks as Long Term Investments*. Smith argued that stocks not only offer dividends, but also capital appreciation through retained earnings. The book, which was reviewed by John Maynard Keynes in 1925, helped fuel an unprecedented stock market appreciation. The inflation-adjusted annual average growth rate of a buy-and-hold investment in large-company stocks established at the end of 1925 amounted to a staggering 32.13 percent at the end of 1928.

On the other hand, over the next four years, this portfolio depreciated at an average annual rate of 17.28 percent, inflation-adjusted. Taken together, over the entire seven-year period, the inflation-adjusted average annual growth rate of this portfolio came to a meager 1.11 percent. Buy-and-hold portfolios in allegedly unattractive long-term corporate and government bonds, on the other hand, grew at inflation-adjusted average annual rates of 10.18 and 9.83 percent, respectively. **This proves Buffett’s point: “What the few bought for the right reason in 1925, the many bought for the wrong reason in 1929.”** One conclusion from this episode is that learning about the stock market may feed back into the market and, by changing the behavior of the market, render our “learning” useless or—if we don’t recognize the feedback effect—hazardous.
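The 1.11 percent figure can be recovered by linking the two sub-periods geometrically; a minimal sketch of the arithmetic:

```python
# Geometric linking of the two sub-periods (end-1925 to end-1932):
# three years at +32.13%/yr followed by four years at -17.28%/yr.
growth = (1 + 0.3213) ** 3 * (1 - 0.1728) ** 4
annualized = growth ** (1 / 7) - 1

print(f"{annualized:.2%}")  # about 1.11% per year, matching the article
```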

**Is Tomorrow Another Day?**

Risk and uncertainty are two concepts that stem from randomness. Neither is fully understood. **Although risk is quantifiable, uncertainty is not.** Rather, uncertainty arises from imperfect knowledge about the way the world behaves. Most importantly, uncertainty relates to the questions of how to deal with the unprecedented, and whether the world will behave tomorrow in the way it behaved in the past.

This article was adapted from “The Stock Market: Beyond Risk Lies Uncertainty,” which was written by Frank A. Schmid and appeared in the July 2002 issue of The Regional Economist, a St. Louis Fed publication.

(Source: St Louis Federal Reserve)

I would play both games. I’d pick red for no reason other than I like the color red, since my expected return will be the same regardless of which color I choose. I’d be willing to place a max bet of $1000, because if I’m only allowed to play ONCE, I want the risk/reward to be greatly in my favor. Both games have the same level of risk.

Daniel,

you may be too smart for this blog. You aced it. Call me about the hedge fund job.

Thanks. I have applied similar thinking to video poker in Vegas, and the returns have been satisfactory.

Under the first scenario, if you decide to play (I think it’s important to remember that you don’t always have to play; that’s always an option), you have a 50% chance at $10,000, so the max bid, from a mathematical/expected-return standpoint, is $5,000. However, that doesn’t price in a margin of safety, so to speak. If I can pay less than the true odds imply, I can greatly increase my return. The idea would be to pay an amount that skews the risk/reward payout heavily in your favor. If I could play for free, that would be my first option, increasing my bid by the minimum amount ($0.01) until I hit my predetermined limit (which depends on your return requirements but shouldn’t exceed the $5,000 fair value). You should be indifferent between red and black, although Wesley Snipes told me to always bet on black.

Not knowing the ratio of red to black in game 2 makes ascertaining fair value more difficult. It’s incomplete information, and that is generally the environment where we operate. Why take a chance playing game 2 when the risk/reward ratio could be skewed heavily against you? Wait for a better opportunity, or that fat pitch Buffett’s always talking about.

The expected return for both games is $5,000, so I don’t understand why you wouldn’t play the 2nd game.

I’m not sure that I would play either game (it depends on the exact terms), and I get what you’re saying about the expected value being the same, but if the casino is withholding information (the ratio of red to black), they are gaining some kind of edge. Maybe they’ve been running the 50-50 game and know that, given a choice between red and black, people are predisposed to selecting black, and the urn is full of red. Or they’ve been subconsciously placing black or red into your mind. It’s similar to the roulette tables at the casino with the electronic scoreboards displaying the last number and its color. They realized that when people saw red numbers come up several times in a row, they bet on black thinking black was due, and vice versa (reversion to the mean). That scoreboard increased revenue from the roulette tables by double digits. People, in general, don’t realize that reversion to the mean is observed over numerous trials. It’s why we run thousands of simulations. I do get that I’m supposed to be indifferent to the color and the two games, but the withholding of the red-to-black ratio by the casino forces me to believe they have some sort of edge. Maybe I’m overthinking the scenario, or maybe it’s how the question is framed, but you say casino and I know I’m not likely to get a 50-50 chance.

OK, like John mentions below, I was assuming that the ratio of black to red balls in game #2 was completely random.

You’re right, if game 2 is random then there isn’t a difference. I hear casino and automatically think there is a built in disadvantage to the player. Probably why you’re not supposed to think more than a minute.

Kinda like those questions they always preface with a “fair” coin is tossed.

Correct. Completely RANDOM. No cheating or sly tricks. There could be 43 red balls and 57 black balls, or vice versa: 100 balls in total, in any random combination of the two colors.

Actually, game 2 doesn’t even have to be completely random. Even if the casino took all information available about what color people tend to choose and determined the ratio of red-to-black that way, all you would have to do is flip a coin to determine which color to choose, and your expected return would still be $5000.
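This coin-flip argument is easy to check by simulation. A minimal sketch (the `play` helper and trial counts are my own, for illustration):

```python
import random

# Even against an urn whose composition the casino chose adversarially,
# picking your color by a fair coin flip wins half the time, so the
# expected payoff stays $5,000.
random.seed(0)

def play(n_red, trials=100_000):
    """Return the observed win rate against an urn with n_red red balls."""
    wins = 0
    for _ in range(trials):
        my_color = random.choice(["red", "black"])          # fair coin flip
        ball = "red" if random.randrange(100) < n_red else "black"
        wins += my_color == ball
    return wins / trials

for n_red in (0, 25, 100):        # three "rigged" urns
    print(n_red, play(n_red))     # each win rate lands near 0.5
```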

Where are you drawing the conclusion that the expected return for both games is $5000?

50% chance of $0 and 50% chance of $10,000. Multiply those out and it comes to $5,000.

Roy, you are a smart guy but you fell for the Ellsberg Paradox. See “answer” in the post above. Don’t feel bad, you are human. Daniel crunches the numbers or he is “Thinking Slow” and not fast.

red, less than $2500, second game i pay 1/2 as much bc extra variable

I wouldn’t play game 1. There is not enough information. Your choice is recorded but not revealed, so it requires a great amount of trust in the casino.

If I could be assured the game was set up as described, it would depend on the size of my bankroll. E.g., if I only had $1,000 to my name, I would not wager all of it and risk a good chance of being back at zero. If I had $1 billion to my name, it likely wouldn’t be worth my time. Under no circumstances would I bet more than $4,999.99. Under most circumstances I would bet at least $1, and the rest is somewhere in between.

Game 2, I wouldn’t bother with, too difficult, likely rigged and not enough information.
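The bankroll point above can be made concrete with log utility (my framing, not the commenter’s): the break-even ticket price x for a one-shot 50/50 draw at $10,000 solves 0.5·ln(W − x + 10000) + 0.5·ln(W − x) = ln(W), and it approaches $5,000 only as the bankroll W grows large. A minimal sketch:

```python
import math

# Break-even price x for a 50/50 shot at PAYOFF under log utility.
# Substituting y = W - x into the break-even condition gives the
# quadratic y**2 + PAYOFF*y - W**2 = 0, solved in closed form below.
PAYOFF = 10_000

def break_even_price(W):
    y = (-PAYOFF + math.sqrt(PAYOFF**2 + 4 * W**2)) / 2
    return W - y

for W in (1_000, 10_000, 1_000_000):
    # the break-even price rises toward $5,000 as the bankroll W grows
    print(W, round(break_even_price(W)))
```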

Ok, assume game 2 is perfectly random. Of course, ANY rigged game is one not to play.

I think this is what threw off my understanding: “Suppose we play the exact same game as game 1, but using this urn containing balls of unknown colors.”

I think you are still saying that they are red/black, and that the proportion is random from a uniform distribution.

Seems like about a 50/50 chance of picking the correct color. In that case, same bet as game 1.

Would use some variation of the Kelly formula (if this were real money, I would invest the time in calculating roughly how much to bet).

Michal, you are hired for the compliance department because you don’t TRUST anybody or anything.

I think for both games your chance of winning is 50%. In the first case it is more obvious why. In the second you have no idea of the proportions, but you still have a 50% chance of guessing the right colour.

I might take the bet for $3,000. I like those odds.

1) Choose red, pay up to $4949 because you have a 49/99 probability of success.

2) Choose red, pay up to $5,000 because the outcome is totally random so your monte carlo guess is a 50% probability of success.

This assumes you have much more money than is in your pocket. Otherwise, if you have the chance of being wiped out, do not play low probability games even if they have positive expected returns! Shooting fish in barrels is always the best strategy!

OK, you got it.

(1) I’d bet $2500 per Kelly Criterion.

(2) Would not play. The risk is the same, you lose your bet. However, the probability is unknown.

to clarify, the bet per #1 would be on either color (it wouldn’t matter).

Nick, your idea to use the Kelly Criterion is interesting. But probability is the same–see answer in the post above.

Ah, indeed, I overthought it. The only sensible bet would be $5k, and your expected earnings, if you kept betting at 50%, would be $10k (your starting point). So I wouldn’t play either game.
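As an aside, here is one hedged way the Kelly arithmetic might be set up (the commenter’s exact calculation isn’t shown): treat the ticket price x as the stake, so the net odds are b = (10000 − x)/x with p = q = 0.5.

```python
# Kelly fraction f* = (b*p - q) / b for a 50/50 draw paying $10,000,
# where the stake is the ticket price and b is the net odds received.
def kelly_fraction(price, payoff=10_000, p=0.5):
    b = (payoff - price) / price  # net amount won per dollar staked
    return (b * p - (1 - p)) / b

print(kelly_fraction(2_500))  # 1/3 of bankroll at a $2,500 ticket
print(kelly_fraction(5_000))  # 0.0 -- no edge at the fair price
```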

I would not play either game, as the risk is too high, especially in game 2.

Osk, you are hired for the risk department since you would prevent anyone from taking any risk. However, most people would play a game with a $5,000 payoff (expected) for some amount of money less than $5,000.

Game 1: Colour is irrelevant. Expected payout is $5000 ($10000 x 50% + $0 x 50%), max I would bet is $4999.

Game 2: Wouldn’t play as don’t know the probability.

I’m in agreement with Daniel.

I will play both games, since I will lose nothing in either game. I will go all-in.

If you bid $3,500 to play black and the ball is red, then you lose your bid price or $3,500.


To me the question reads more like: “If you buy a stock and event X happens, you gain 500%; if event X doesn’t happen, your gain is 0%,” which means my loss would be 0% as well. So I am all-in for both games.

Hi, John Chew!!

Hi, I am Ankur Agrawal from India. I just downloaded *What Works on Wall Street* from your website. Thanks for the book. It would have cost me more than 1,000 Rs.

I am having a problem accessing your Value Vault. Whenever I click the link you provided, it takes me to the website ‘HighTail’, and the page says “Sorry, this invitation is for a different user.” Please help.

So you’d probably say that the probability of winning the first game is 49/99, not 50/100 like some people on here are saying. I’d choose red for the first game bc I think someone choosing a red ball would be slightly more likely to choose a red ball over a black ball (though you state it’s random).

But none of that matters. You don’t say anything about having to wager any money or that a wager would increase your payout, just “If the color you chose is the same as the color of the ball, you win $10,000. If it isn’t, you win nothing-$0.00.” As a result, I wouldn’t be wagering anything on either game. I’d just choose a ball in both games and hope for the best.

My “risk” is the same in both games. That is, there is no risk aside from the loss of time playing the game.

It pays to buy a cheap paperback like *How to Lie with Statistics* and brush up on probability. Our minds didn’t evolve to automatically use probability.

The expected value is $5,000, but I can only play once, so I am willing to pay $3,000 to play the game.

The risk is the same for both games. For game 1, there is a 50% chance I pick red and a 50% chance the casino draws red, and likewise for black, so the chance of a match is 25% + 25% = 50%. For game 2, assuming the chance I pick red or black stays the same, let r be the chance of drawing red and (1 - r) the chance of drawing black.

0.5*r + 0.5*(1-r) = 50%.
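That identity holds for every possible composition, which a trivial check confirms:

```python
# The fair-coin-flip win probability is 0.5 for every urn composition r.
for k in range(101):
    r = k / 100
    assert abs(0.5 * r + 0.5 * (1 - r) - 0.5) < 1e-12
print("0.5*r + 0.5*(1 - r) == 0.5 for all r")
```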

Your second problem is ill-specified for your desired effect. You write that all combinations of red/black balls within the 100-ball population ARE possible; you don’t say they are equally probable. You need to assume they are equally probable in order for the reader to infer that the expectations are identical between problem 1 and problem 2.

The reason is that without defined probabilities on the possible ratios, the long-run frequency of draws from the second urn isn’t calculable. Hence the expected value cannot be computed, and therefore cannot be compared to the EV of problem 1 (you need probabilities in a probability-weighted average, after all).

You could suggest that the offeree has a 50/50 chance of choosing the correct colour (even if the long-run frequencies are not known). But this is not an argument born from expected value. It is an argument of chance, and it assumes the offeree has no additional information from which to make their decision (which is hardly ever the case).

I will post your correction into the post. You are right, the question could have been worded more precisely.

As an FYI, uncertainty is certainly quantifiable in a very real sense. It’s quite naive to suggest otherwise. See the entire literature on Bayesian Statistics.

Thanks Allan for pointing this out. I need a refresher on Statistics.

As always, thank you for teaching.