2016 Silver Bullet Awards Part Two

Each week I try to give special attention to those who do important work, even when that work is unpopular. These contributors are so important, and their work so helpful, that we recommend taking another look at the end of the year. (Part One is here).



In a WTWA first, CNBC anchor Sara Eisen earned a Silver Bullet Award for her excellent interview with Fed Vice-Chairman Stanley Fischer (Transcript and video via CNBC). As we wrote at the time:

One by one she asked all of the key questions in the current debate over Fed policy – potential for negative rates, Brexit impact, whether the Fed makes decisions based on the economic impact abroad, the state of the economy, recession potential, employment, George Soros, and the strong bond market. Whether or not you agree with Vice-Chairman Fischer, it is important to know what he thinks.

Sara Eisen displayed first-rate journalism, as expected from a Medill School graduate. Unlike so many other financial interviewers, she neither argued with her subject nor pushed her own agenda. She did raise all of the current Fed misperceptions common in the trading community. Her preparation and poise helped us all learn important information. It was well worth turning off my mute button and dialing back the TiVo.


We gave the Silver Bullet to Justin Fox for his writing on one of the most persistent myths – the manipulation of government statistics. His whole post is available here, but we particularly liked this bit:

First, because I know a little bit about the people who put together our nation’s economic statistics. The Bureau of Labor Statistics, Bureau of Economic Analysis and Census Bureau are run on a day-to-day basis by career employees, not political appointees. Even the appointees are often career staffers who get promoted, and many have served under multiple administrations. When top statistics-agency officials do leave government, it’s often for jobs in academia. Credibility with peers is generally of far more value (economic and otherwise) to these people than anything a politician could do for them.

To those with even basic experience in civil service, the political manipulation theory makes little sense.


Ben Carlson won a Silver Bullet for investigating the apparent link between Fed meetings and stock performance. While many (including at least one WSJ writer) took the rumor at face value, Ben asked a clever question: What happens if you change the starting date of the analysis?

As it turns out, any relationship between the two is likely a result of 2008.
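Carlson's robustness check is easy to reproduce in spirit. The sketch below is purely illustrative: it uses synthetic weekly returns (every number is invented) with large losses planted on 2008 "Fed weeks." Shift the start date past 2008 and the apparent Fed-week effect disappears.

```python
import random

# Synthetic weekly returns, 2000-2016: mild positive drift everywhere,
# except 2008, where large losses are planted on "Fed weeks".
def weekly_returns(start_year):
    random.seed(start_year)                    # deterministic per start date
    rets = []
    for year in range(start_year, 2017):
        for week in range(52):
            fed_week = week % 6 == 0           # stand-in for FOMC weeks
            if year == 2008 and fed_week:
                r = random.gauss(-0.10, 0.01)  # planted crisis sell-offs
            else:
                r = random.gauss(0.0015, 0.01)
            rets.append((fed_week, r))
    return rets

def fed_week_edge(start_year):
    """Average Fed-week return minus average return of all other weeks."""
    rets = weekly_returns(start_year)
    fed = [r for f, r in rets if f]
    other = [r for f, r in rets if not f]
    return sum(fed) / len(fed) - sum(other) / len(other)

# Including 2008, Fed weeks look ominous; starting in 2009, the "effect" fades.
print(f"edge starting 2000: {fed_week_edge(2000):+.4f}")
print(f"edge starting 2009: {fed_week_edge(2009):+.4f}")
```

The point is not the particular numbers but the habit: any claimed calendar effect should survive a change in the sample window.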


Menzie Chinn was a big winner this year. Professor Chinn, a Wisconsin economist, debunked many annoying data conspiracies in one fell swoop. In so doing, he also illustrated how an inappropriate use of log scales can mislead readers.

We called his piece the most profitable thing for investors to read that week – if you missed it, be sure and catch up!
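Chinn's broader point generalizes: the scale you choose changes what the eye sees. A few lines of Python (a generic illustration, not Chinn's data) show why. Under constant percentage growth, steps on a linear scale keep getting bigger while steps on a log scale are identical, so a series can look "parabolic" or perfectly steady depending on the axis.

```python
import math

# Constant 2% growth: linear differences grow, log differences stay constant.
series = [100 * 1.02 ** t for t in range(50)]
linear_steps = [b - a for a, b in zip(series, series[1:])]
log_steps = [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

print(f"first/last linear step: {linear_steps[0]:.2f} / {linear_steps[-1]:.2f}")
print(f"first/last log step:    {log_steps[0]:.4f} / {log_steps[-1]:.4f}")
```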


By late in the year, it was increasingly apparent that individual investors were misreading the VIX as a “fear indicator” rather than a measure of expected volatility. Chris Ciovacco did an excellent job in making that distinction. His image here is particularly persuasive.

Runner up awards to Jeff Macke and Adam H. Grimes for their similar conclusions on the same subject.


Shiller’s CAPE method has often caused some eyebrow-raising on A Dash, most notably since he doesn’t use it himself. Justin Lahart of the Wall Street Journal thought to analyze just how this method (and others like it) would work in practice:

For New York University finance professor Aswath Damodaran, this is the real sticking point. He set up a spreadsheet to see if there was a way that using the CAPE could boost returns. When the CAPE was high, it put more money into Treasuries and cash, and when it was low it put more into stocks.

He fiddled with it, allowing for different overvaluation and undervaluation thresholds, changing target allocations. And over the past 50-odd years, he couldn’t find a single way he could make CAPE beat a simple buy-and-hold strategy. In the end, he doesn’t think it represents an improvement over using conventional PEs to value stocks.

“This is one of the most oversold, overhyped metrics I’ve ever seen,” says Mr. Damodaran.

Mr. Shiller agrees that the CAPE can’t be used as a market-timing tool, per se. Rather, he thinks that investors should tilt their portfolios away from individual stocks that have high CAPEs. But he says he isn’t ready to modify his CAPE for judging the overall market.
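The exercise Lahart describes is easy to mimic in miniature. The sketch below is not Damodaran's actual spreadsheet — the data are synthetic and the thresholds invented — but it shows the shape of the test: run a grid of CAPE-style valuation rules against buy-and-hold. When valuation carries no real signal and cash yields less than stocks, rules that sit in cash tend to lag, whatever the thresholds.

```python
import random

random.seed(0)

# Toy 50-year experiment: shift between stocks and cash on a CAPE-like
# signal, then compare against buy-and-hold (all numbers invented).
YEARS = 50
stock_rets = [random.gauss(0.07, 0.16) for _ in range(YEARS)]
cash_ret = 0.03
cape = [random.uniform(10, 30) for _ in range(YEARS)]  # stand-in valuations

def final_wealth(stock_weights):
    wealth = 1.0
    for w, r in zip(stock_weights, stock_rets):
        wealth *= 1 + w * r + (1 - w) * cash_ret
    return wealth

buy_and_hold = final_wealth([1.0] * YEARS)

# "Fiddling": try every combination of under/overvaluation thresholds.
timed = [
    final_wealth([0.0 if c > hi else 1.0 if c < lo else 0.6 for c in cape])
    for lo in range(12, 20) for hi in range(20, 28)
]
winners = sum(w > buy_and_hold for w in timed)
print(f"buy-and-hold final wealth: {buy_and_hold:.2f}")
print(f"threshold rules that beat it: {winners} of {len(timed)}")
```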


With the blogosphere in full election season fever, some started to worry that the 2016 stock market gains were a precursor to something much worse. We gave the Silver Bullet to Ryan Detrick of LPL Research for discrediting this argument with two easy charts:


We make a special effort to recognize writers trying to debunk the endless onslaught of recession predictions. Bill McBride of Calculated Risk did this very effectively, with a few key points:

Note: I’ve made one recession call since starting this blog.  One of my predictions for 2007 was a recession would start as a result of the housing bust (made it by one month – the recession started in December 2007).  That prediction was out of the consensus for 2007 and, at the time, ECRI was saying a “recession is no longer a serious concern”.  Ouch.

For the last 6+ years [now 7+ years], there have been an endless parade of incorrect recession calls. The most reported was probably the multiple recession calls from ECRI in 2011 and 2012.

In May of [2015], ECRI finally acknowledged their incorrect call, and here is their admission: The Greater Moderation

In line with the adage, “never say never,” [ECRI’s] September 2011 U.S. recession forecast did turn out to be a false alarm.

I disagreed with that call in 2011; I wasn’t even on recession watch!

And here is another call [last December] via CNBC: US economy recession odds ’65 percent’: Investor

Raoul Pal, the publisher of The Global Macro Investor, reiterated his bearishness … “The economic situation is deteriorating fast.” … [The ISM report] “is showing that the U.S. economy is almost at stall speed now,” Pal said. “It gives us a 65 percent chance of a recession in the U.S.

The manufacturing sector has been weak, and contracted in the US in November due to a combination of weakness in the oil sector, the strong dollar and some global weakness.  But this doesn’t mean the US will enter a recession.

The last time the index contracted was in 2012 (no recession), and it has shown contraction several times outside of a recession.

We strongly recommend reading the original post in its entirety.


Jon Krinsky of MKM and Downtown Josh Brown both earned the Silver Bullet award in late 2016, for taking on myths about currency strength and stock performance. In sum: there is zero evidence of a long-term correlation between stocks and the dollar.


Our final Silver Bullet award of the year, given on New Year’s Eve, went to Robert Huebscher of Advisor Perspectives. His full article is definitely worth a read, but choice excerpts follow below. Good financial products are bought, not sold!

But I caution anyone against buying precious metals from Lear Capital. It is not an SEC-registered investment advisor and its web site states that there is no fiduciary relationship between it and its customers.

And also…

For example, Lear will sell you a $10 circulated Liberty gold coin (1/2 ounce) for $753.00 (plus $24 shipping). I did a quick search on eBay and found a circulated Liberty coin selling for as low as $666 (with free shipping).

Buying silver is no different. Lear will sell you a pre-1921 circulated Morgan silver dollar for $30 (plus $10 shipping). On eBay, I quickly found one of these for $22.00 (plus $2.62 shipping).
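Putting Huebscher's quoted prices side by side makes the markup concrete. The all-in, shipping-included comparison below is our arithmetic on his numbers, not his:

```python
# All-in cost comparison using the prices quoted above (dealer vs. eBay).
def premium(dealer, dealer_ship, ebay, ebay_ship):
    """Percentage premium of the dealer's all-in price over the eBay all-in price."""
    return ((dealer + dealer_ship) / (ebay + ebay_ship) - 1) * 100

gold = premium(753.00, 24.00, 666.00, 0.00)   # $10 circulated Liberty gold coin
silver = premium(30.00, 10.00, 22.00, 2.62)   # pre-1921 circulated Morgan dollar
print(f"gold premium:   {gold:.0f}%")
print(f"silver premium: {silver:.0f}%")
```

On these quotes, the all-in premiums come to roughly 17% on the gold coin and 62% on the silver dollar.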


As always, you can feel free to contact us with recommendations for future Silver Bullet prize winners at any time. Whenever someone takes interest in defending a thankless but essential cause, we hope you’ll find them here.  Have a Happy New Year and a profitable 2017.

How to create a perfect “forecast”

[The following is a work of fiction.  It is intended as educational, illustrating why some research methods look great but have poor results.  Those who grasp the problems illustrated can figure out where to apply the conclusions.  It is also intended to be fun!]

The setting:  The research lab of a well-known fund company.

The participants:  Dr. B (the boss), Dr. Z (the research director), Mr. S (a staff member), and the Rookie (well-educated, but new to the team).

B:  I need some fresh material.  How about a new syndrome?

S:  But we have so many already…..

Z:  People love to read about new syndromes.  Our regular articles top the lists in popularity.

Rookie:  What’s a syndrome?

Z:  That is where we show why the current market conditions are strongly tied to a market crash, ten years of pestilence, an imminent recession, or something equally bad.

Rookie:  If we have created these before, why do we need a new one?

Z:  Some of the former predictions did not work out.

Rookie:  Why not?

Z:  The standard reasons.  The Fed and other central banks flooded the market with liquidity.

Rookie:  I read that most of the Fed expansion stayed on bank balance sheets.  Hasn’t the economy gotten better?

Z:  Let’s focus on syndromes.  We explain past performance in terms that everyone will accept.  They all hate the Fed.  That is our playbook.  And kid — it is OK to ask questions, but keep an open mind.  Focus on learning our system.

Rookie:  OK, how do we discover a syndrome?

S:  We have an established method.  We look for a bad former period and ask what that time had in common with current conditions.

Z:  Any two time periods share many characteristics.  If the fit is not as good as we want, we can do some tweaking.

Rookie:  What do you mean by tweaking?

Z:  We might need to specify that a variable has a specific value before the effect takes place.  Or that two elements occur at the same time.

Rookie:  There are not very many recessions and market crashes.  If you do too much of this tweaking, don’t you risk over-fitting the —er — syndrome?  One of my classes included something about “degrees of freedom” and not using too many variables.

S:  That is the beauty of our method.  Since we use all of the data on every test, no one can prove that we are wrong.  There is no evidence to provide refutation.

Rookie:  Don’t we keep some out-of-sample data as verification?  That was recommended in one of my classes.

Z:  Wasting data that way would not give us enough cases to prove the point.  There are too few relevant business cycles already.

B:  Enough of the basic education.  The kid can learn more as we go along.  I want to call the new syndrome Grandma Gertrude.  It will show that the current market rally is at extremes of valuation, stretched in time, and indicating the most dangerous conditions except for the last two market crashes.

S:  Why do we always name the syndrome after a female relative?  Shouldn’t we be like the hurricane center?  Mix in a few guys’ names.

B:  You need to learn about symbolism.  Everyone loves female relatives and feels protective.  We sympathize with their frailties and worry about them.  Who would care about a market syndrome called “Uncle Harold?”

Z:  OK, we’ll get started.  I assume that we are starting with “old reliable?”

B:  Absolutely!  The Shiller CAPE ratio always confirms bad times and has earned tremendous credibility.  It is the foundation of every syndrome.

Rookie:  I read that Dr. Shiller does not use it for market timing — just for choosing sectors.

B:  No one knows that, so who cares?

Z:  We can mix in some other variables that show recent weakness, but none of them indicate a recession by themselves.

B:  No problem.  That is why we have a syndrome.  We can explain that the effects occur only when several things happen at the same time.  Then we can use the magic words….

Z:  You mean “ever and always?”

B:  Yes!  We want to say that whenever the syndrome has occurred disaster has come as well.  It is a powerful statement.

Rookie:  In one of my classes we learned that you were supposed to begin with a hypothesis and then see whether the data supported it.

Z:  We already know what is going to happen.  We are just looking for evidence for our readers and investors.

Rookie:  I am curious.  Suppose we were to reverse the process.  What if we took the very best times to invest — lowest risk or something — and looked for variables correlated to current times?  Couldn’t we prove the exact opposite of the new syndrome?

B:  Kid, you ask too many questions.  If you want to work here, you need to get with the program.
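The Rookie's "degrees of freedom" worry is the real moral of the skit, and it is easy to demonstrate. In the sketch below, every indicator is random noise by construction, yet a search over pairs still turns up combinations that "called" every past recession — and a single out-of-sample quarter typically thins the herd.

```python
import random

random.seed(1)

QUARTERS = 120                        # 30 years of in-sample history
recessions = set(random.sample(range(QUARTERS), 5))
N = 200                               # candidate indicators, pure noise
indicators = [[random.random() < 0.6 for _ in range(QUARTERS)]
              for _ in range(N)]

# "Tweaking": hunt for indicator pairs that jointly flagged every recession.
perfect_pairs = [
    (i, j)
    for i in range(N) for j in range(i + 1, N)
    if all(indicators[i][q] and indicators[j][q] for q in recessions)
]

# Out-of-sample check: would those same pairs flag a fresh quarter of noise?
fresh = [random.random() < 0.6 for _ in range(N)]
survivors = [(i, j) for i, j in perfect_pairs if fresh[i] and fresh[j]]

print(f"pairs that 'called' all 5 past recessions: {len(perfect_pairs)}")
print(f"of those, pairs flagging the new quarter: {len(survivors)}")
```

With few recessions and many free parameters, a perfect in-sample "syndrome" is nearly guaranteed — which is exactly why it proves nothing.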

Are you really a chart expert?

When it comes to charts, everyone is an expert — or so they think!

It can be expensive to be overconfident.  In this post and another on my agenda I will illustrate the problem.  If you get both problems right, you can have confidence in your chart-reading skill.

Tracing a Dangerous Path

One of the most common charts we see compares current circumstances to something that happened in the past.  It often comes as a warning.

My email today provided this example, suggested for consideration by one of my most astute friends.  He is also a very successful investor and market observer.



So we are tracing the same steps that we did last year, with economic disaster to follow.

But wait — there's more!


We are following the same path as in the Summer of 2011 and the debt ceiling debate.  Oh my!

My friend did not reveal the source of the charts, but I did not need a hint.  I get similar emails every week.  The charts are all from the same source.  The perpetrators have a wonderful and profitable business model.  They have identified a market of people who want to be Scared Witless (TM euphemism, OldProf).  They love to hear about conspiracies.  They want to have their worst fears confirmed.

My own market is much smaller, but I am proud of the readership.  It consists of people who employ critical thinking when evaluating evidence.  They want to profit from their investments.

Problems with the Chart

Regular readers will have already pounced on the main problem with this chart — the twisting of scales.  Modern computers and software have created enough charting power to overwhelm the (lack of) skill of the user.  You can now look back in history, adjust the scales from two different time periods, find a brief period that seems similar, and then show a prediction.  Bravo!

You could just as easily find a time period that showed the opposite.
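The "twisting of scales" trick is mechanical. With a second y-axis you are free to apply any linear map to one series, and a simple min-max map (sketched below with made-up numbers) will make any series span exactly the same visual range as any other:

```python
def rescale(series, target):
    """Affinely map `series` so it spans exactly the range of `target`."""
    lo, hi = min(series), max(series)
    t_lo, t_hi = min(target), max(target)
    return [t_lo + (x - lo) * (t_hi - t_lo) / (hi - lo) for x in series]

# Two completely unrelated series (both invented for illustration)...
spx_2013 = [1426, 1462, 1498, 1515, 1480, 1541]   # made-up index levels
doom_2011 = [12.1, 9.4, 7.7, 8.9, 11.6, 14.2]     # made-up "analog" series

# ...yet after rescaling, the overlay spans an identical vertical range.
overlay = rescale(doom_2011, spx_2013)
print(f"target range:  {min(spx_2013)}..{max(spx_2013)}")
print(f"overlay range: {min(overlay):.0f}..{max(overlay):.0f}")
```

Pick the window and the scale after looking at the answer, and the two lines will appear to track each other regardless of any real relationship.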

When you read something like this, it is time to turn the page.  I recently awarded The Silver Bullet to Tom Brakke for exposing this kind of deception.

Problems with the Logic

Moving beyond the chart itself, let us suppose that you paused to consider the actual reasoning.  Try to forget the chart and use words instead.  Here is my best effort:

The Citi Economic Surprise Index has moved higher, just as it did last year.  It is tracking last year's move almost tick-for-tick.  We can see the collapse last year.  Ergo, we should expect a series of economic disappointments to start 2013.

It seems pretty foolish when stated that way, but it is the story they are selling.  Please note what the chart does not say:

  • This is something that happens every year (not just one time);
  • This is something that happens whenever there is a similar negotiation (not just one time).

There are many differences between last year and this year.  Just for starters:

  • Election versus non-election year
  • Debt ceiling for a temporary change versus long-term policy
  • Public preamble and preparation.

But those are only starters.  The basic problem is the following: 

You cannot make valid inferences from one prior case.

Anyone cherry-picking a prior history to do this is selling snakewater.  Beware!

An Alternative Viewpoint

Let us pretend that we did not start with the original chart.

In fact, I did not.  I have often looked at the Citi Surprise Index in the past, but I have not found much predictive value.  When the economy turns positive, expectations increase.  I have tried, but I cannot tie it to any meaningful prediction.

Here is one take, from Easynomics:

Keep in mind what is basically happening, as it is usually a cycle.  Expectations rise as a result of improving data, and then it becomes more likely that data will disappoint.  It doesn’t actually mean the data get worse, only that they disappoint versus expectations.  The reverse then happens in order to complete the cycle.  We hope that the downturns don’t go as deep as the upturns go high.
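That cycle falls out of almost any model in which forecasters chase recent data. In the toy below (pure illustration), the "economy" follows a smooth sine wave while forecasts simply extrapolate the latest trend; surprises then flip sign around every turning point even though nothing about the economy itself changed character.

```python
import math

# Adaptive-expectations toy: forecasters extrapolate the recent trend, so
# surprises oscillate even when the underlying data cycle smoothly.
data = [math.sin(t / 8) for t in range(100)]            # smooth macro cycle
forecasts = [data[t - 1] + (data[t - 1] - data[t - 2])  # trend extrapolation
             for t in range(2, 100)]
surprises = [d - f for d, f in zip(data[2:], forecasts)]

print(f"positive surprises: {sum(s > 0 for s in surprises)}")
print(f"negative surprises: {sum(s < 0 for s in surprises)}")
```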

Dr. Ed also follows this index.

[Chart: Citi Economic Surprise Index]


Good luck in tracking stock prices versus the surprise index.  I do not see a fit, but suggestions are welcome.

There is a lot of irony in the original presentation.  Step back from the charts and think about it.

If the Citi index showed that there were disappointments, that would be the end of the story.  The perma-bear site would simply report that the macro conditions were bad.

Since results have been beating expectations, and you are selling snakewater, there is a problem.  Solution?  If you do enough data mining you can find a time period where beating expectations was actually bearish…  Black is white.  Bad is good.

Let's see... last year... wonderful.  One case is enough.

If it does not fit last year, let's go back in time…