Trickle-Down Economics Fails a Sophisticated Statistical Test

Last week two British scholars released a study (PDF) concluding that trickle-down economics doesn’t work. Trickle-down theory says cutting taxes on rich people will encourage them to work and invest more, ultimately creating jobs and benefiting everyone. In reality, it increases inequality while not having “any significant effect on economic growth and unemployment,” wrote David Hope, a visiting fellow at the London School of Economics’ International Inequalities Institute, and Julian Limberg, a lecturer in political economy at King’s College London.

The study was widely covered, including in this Bloomberg story. But articles haven’t explored how these two scholars managed to undermine a theory that, while questioned, has been used to justify every major tax cut on the rich in recent decades, including the Tax Cuts and Jobs Act of 2017, which remains President Trump’s most notable achievement.

Their fresh contribution? Statistical wizardry. Hope and Limberg didn’t compile fresh data or apply new economic theory. Rather, they used sophisticated (though established) statistical methods to look for patterns in the data that others had missed. That’s the way a lot of economics is done these days. If you browse through the 50 most recently posted working papers on the National Bureau of Economic Research website, you’ll see that virtually all of them include a substantial amount of statistical analysis.

Hope and Limberg happen to have doctorates in political science, but their study would be at home in an economics journal. Their first statistical step was to use what’s called Bayesian latent variable analysis to build a comprehensive measure of major tax cuts in 18 wealthy nations from 1965 to 2015. The concept behind the approach is to boil down many kinds of tax cuts—income tax cuts, property tax cuts, corporate tax cuts, etc.—to a single variable for easy comparison. A “latent” variable is one that can’t be observed but underlies others that can be observed. 
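A crude way to see the idea of collapsing many tax indicators into one score is to standardize each observed series and average them. This is not the authors’ Bayesian model, and the numbers below are made up; it is only a minimal sketch of what a composite “tax cut” variable is doing.

```python
from statistics import mean, pstdev

# Hypothetical year-over-year tax-change indicators for one country
# (made-up numbers; negative means a cut).
indicators = {
    "income_tax_change":    [-2.0,  0.0, -1.0],
    "corporate_tax_change": [-1.5, -0.5,  0.0],
    "property_tax_change":  [ 0.0, -1.0, -0.5],
}

def zscores(xs):
    """Standardize a series to mean 0, standard deviation 1."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

standardized = {name: zscores(vals) for name, vals in indicators.items()}

# One composite score per year: the average of that year's z-scores.
latent = [mean(year) for year in zip(*standardized.values())]
```

A real latent variable model estimates how strongly each indicator loads on the hidden factor instead of weighting them equally, but the averaging step captures the basic move: many observed series in, one comparable score out.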

To find the latent variable in the tax cut data, Hope and Limberg combined two techniques, the Monte Carlo method and the Markov Chain method. You can think of the Monte Carlo method as rolling a pair of dice over and over to see the likelihood of getting a six rather than calculating the probability of a six from combinatorics theory. The Markov Chain method is useful for situations in which where you land depends on where you just came from, as in the game of Chutes and Ladders. The physicists who did the calculations that produced the first atomic bomb in the 1940s fused those two methods into what’s called Markov Chain Monte Carlo.
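Both ingredients are easy to sketch. The snippet below is not from the paper: the first half estimates a dice probability by simulation rather than combinatorics (the Monte Carlo idea), and the second half walks a made-up ten-square board with one ladder, where each move depends only on the current square (the Markov chain idea).

```python
import random

rng = random.Random(0)  # fixed seed so the run is reproducible

# Monte Carlo: estimate P(two dice sum to 6) by rolling repeatedly.
# The exact combinatorial answer is 5/36, about 0.139.
trials = 200_000
hits = sum(rng.randint(1, 6) + rng.randint(1, 6) == 6
           for _ in range(trials))
mc_estimate = hits / trials

# Markov chain: a toy Chutes-and-Ladders board with 10 squares and
# one ladder. The next position depends only on the current one.
ladder = {3: 7}                      # landing on square 3 jumps to 7
pos, turns = 0, 0
while pos < 10:
    pos = min(pos + rng.randint(1, 6), 10)
    pos = ladder.get(pos, pos)       # take the ladder if you hit it
    turns += 1
```

Markov Chain Monte Carlo joins the two: instead of rolling dice, it takes a random walk whose long-run visits trace out the probability distribution being studied.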

The authors’ next step was to see how tax cuts affected income inequality. They realized that a simple statistical regression wouldn’t work for three reasons: The effect could change over time; a simple regression wouldn’t account for the trajectory of previous tax changes; and it wouldn’t account for confounding political and economic factors that might affect both tax cuts and “subsequent income inequality dynamics.” 

“To deal with these challenges to causal identification,” they write, “we use a new econometric approach.” That approach, they say, “implements a nonparametric generalisation of the difference-in-differences indicator for panel data analysis.” That’s a mouthful, but the concept is fairly straightforward. A nonparametric test is one that can be carried out on data that doesn’t follow a neat, well-defined distribution the way the outcomes of coin flips or dice rolls do. A difference-in-differences analysis is a common way of eliminating causes of confusion. It looks at the difference over time in the income inequality of Country A, which had a big tax cut, and the difference over time in the inequality of Country B, which did not, and then looks at the difference between those two differences.
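The difference-in-differences arithmetic fits in a few lines. The income shares below are invented for illustration and have nothing to do with the paper’s data; the point is only how the trend shared by both countries is subtracted out.

```python
# Hypothetical top-1% income shares (percent), before and after
# Country A's tax cut. Country B had no cut. All numbers made up.
a_before, a_after = 10.0, 11.4   # Country A: big tax cut
b_before, b_after = 10.2, 10.7   # Country B: no tax cut

diff_a = a_after - a_before      # 1.4 points: cut plus shared trend
diff_b = b_after - b_before      # 0.5 points: shared trend alone
did = diff_a - diff_b            # 0.9 points attributed to the cut
```

Subtracting Country B’s change strips out whatever was pushing inequality up everywhere, leaving an estimate of the tax cut’s own effect, provided the two countries would otherwise have moved in parallel.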

There are lots more statistical methods in the paper, but one deserves special mention because its history is so bizarre. The Mahalanobis distance method, which the authors use to compare tax-cut countries and non-tax-cut countries, was developed in the 1930s by Prasanta Chandra Mahalanobis from Bikrampur in Bengal, now part of Bangladesh. He used it to compare people’s skulls, which he studied with a device of his own invention called the profiloscope.
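In modern use the method measures how far apart two observations are once the correlations among their features are taken into account. A minimal sketch, using NumPy and made-up covariance numbers rather than anything from the study:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Distance between points x and y, scaled by the covariance cov."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# With an identity covariance it reduces to plain Euclidean distance:
# mahalanobis([0, 0], [3, 4], np.eye(2)) is 5.0.

# With correlated features, a gap ALONG the correlation counts as
# small, while a gap AGAINST it counts as large.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
along   = mahalanobis([0, 0], [1,  1], cov)  # roughly 1.05
against = mahalanobis([0, 0], [1, -1], cov)  # roughly 3.16
```

That correlation-awareness is what makes it useful for matching tax-cut countries to otherwise similar non-tax-cut countries across many economic variables at once.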

It’s common to warn in statistics that correlation doesn’t imply causation. But by combining the Mahalanobis distance technique, difference-in-differences analysis, and so on, Hope and Limberg were able to get at causation. Their bottom line: 

“We find that major tax cuts for the rich push up income inequality, as measured by the top 1% share of pre-tax national income. The size of the effect is substantial: on average, each major tax cut results in a rise of 0.8 percentage points in top 1% share of pre-tax national income. The effect holds in both the short and medium term. Turning our attention to economic performance, we find no significant effects of major tax cuts for the rich. More specifically, the trajectories of real GDP per capita and the unemployment rate are unaffected by significant reductions in taxes on the rich in both the short and medium term.”
