They Don’t Just Hide Their Money. Economist Says Most Of Billionaire Wealth Is Unearned.

On April 8, 2016, Didier Jacobs writes on Evonomics:

The 62 richest people in the world own as much wealth as half of humanity. Such extreme wealth conjures images of both fat cats and deserving entrepreneurs. So where did so much money come from?

It turns out, three-fourths of extreme wealth in the US falls on the fat cat side.

A key empirical question in the inequality debate is to what extent rich people derive their wealth from “rents”, which is windfall income they did not produce, as opposed to activities creating true economic benefit.

Economists define “rent” as the difference between what people are paid and what they would have to be paid to do the work anyway. The classical example is the farmer who owns particularly fertile land. With the same effort, she can produce more than other farmers working on land of average productivity. The extra income she gets is a rent. Monopolists also get rent by overcharging customers as compared to what they could charge in competitive markets. More generally, economists have identified a series of “market failures”, which are situations where full competition does not prevail and where someone can therefore overcharge – they would be ready to do the work for less, but lack of competition allows them to make a quick extra buck. Government can alleviate market failures through proper economic regulation; or it can make them worse. Political scientists define “rent-seeking” as influencing government to get special privileges, such as subsidies or exclusive production licenses, to capture income and wealth produced by others.

So how much of extreme wealth derives from rents? It’s a pretty divisive debate, but one that can be resolved with data.

On one hand, Lawrence Mishel and Josh Bivens argue that the income of the top one percent richest Americans comes mainly from executive pay and the financial industry, two sources of income notorious for the market failure of imperfect information between buyers and sellers. CEOs have more information about their companies than shareholders do, and portfolio managers more than investors, which allows them to dramatically overcharge.

On the other hand, Steven Kaplan and Joshua Rauh claim that the fast growth of income at the top is broad-based and is better explained by rising returns to talent induced by technological progress and globalization.

Both camps use largely the same data to support rather divergent narratives, and in truth extreme inequality is driven by more than one phenomenon.

Data limitations do not allow us to compute rents anywhere close to accurately. But if I had to give a single number to settle the debate, it is this: when it comes to the very richest Americans (Forbes’ billionaires), 74% of their wealth is derived from rents.

I recently explored this issue in my paper Extreme Wealth Is Not Merited, and found that American industries that produce more billionaire wealth than average relative to their size share one of three characteristics:

  • They depend heavily on the state whether through government procurement, licenses, or subsidies, and are therefore prone to rent-seeking. This category includes for instance oil, gas and mining, gambling, or forestry.
  • They are plagued by market failures such as imperfect information, like finance, or by the combination of intellectual property and so-called “network externalities”, which create monopolies like those that pervade the IT industry and industries prone to fads like fashion and music.
  • The billionaire wealth they have generated is largely inherited.

Building on that finding, I calculate that the billionaire wealth generated by these industries in excess of what other industries (considered here as competitive industries) generate represents 74% of America’s billionaire wealth. The table below shows that the industries that are neither dependent on the state nor prone to market failures have a self-made “billionaire wealth intensity” (that is, non-inherited billionaire wealth divided by industry value added – a measure of industry size) of 3%. (So the self-made billionaire wealth we observe in the competitive industries equals 3% of annual production in those industries; in other words, it has taken 33 years’ worth of production to generate today’s billionaire wealth in the competitive industries.) If the whole economy had produced billionaire wealth at that rate, total American billionaire wealth would have been $427 billion in 2012 instead of $1,626 billion, or just 26% as high.

[Table: billionaire wealth intensity by industry category]

Sources: Author’s calculations based on data from Forbes and US Bureau of Economic Analysis. See Extreme Wealth Is Not Merited for methodological details and data for each industry.

1 Industries of the cronyism index, namely: casinos; forestry; defense; real estate and construction; ports, airports, infrastructure and pipelines; oil, gas, coal and mining; steel and other metals; utilities and telecoms. Although also in the cronyism index, banking is in the market failure-prone industries.
2 Industries prone to asymmetries of information, namely finance, health care services, and law, and industries prone to network externalities, namely IT, apparel retail, art dealing, broadcasting, motion picture and music, and sports.
3 All other industries.
4 Assuming that all industries had a billionaire wealth intensity equal to the self-made billionaire wealth intensity of competitive industries, namely 3%.
5 Assuming that diversified wealth is invested in each industry according to its weight in GDP.
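The headline figure can be reproduced from the numbers quoted above. A minimal sketch of the arithmetic, using only the two wealth totals and the 3% intensity reported in the table:

```python
# Reproduce the article's headline arithmetic from the figures quoted above.
total_billionaire_wealth = 1626   # total US billionaire wealth in 2012, $ billions
counterfactual_wealth = 427       # wealth if every industry matched the 3% competitive rate

# Share of billionaire wealth in excess of the competitive benchmark ("rents")
rent_share = 1 - counterfactual_wealth / total_billionaire_wealth
print(f"rent share: {rent_share:.0%}")  # → 74%

# A 3% intensity implies roughly 1/0.03 years of production
# per unit of self-made billionaire wealth in competitive industries
intensity = 0.03
years_of_production = 1 / intensity
print(f"years of production: {years_of_production:.0f}")  # → 33
```

The 26% figure in the text is simply the complement of the 74% rent share (427/1,626 ≈ 26%).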

There are, of course, all sorts of reasons why billionaire wealth intensity varies across industries, not all of which involve rents. However, Joseph Stiglitz counters that the very existence of extreme wealth is an indicator of rents. Competition drives profit down, such that it might be impossible to become extremely rich without market failures. Every good business strategy seeks to exploit one market failure or another in order to generate excess profit. I discuss in my paper how some of these strategies are more harmful than others. While not all the excess billionaire wealth generated by state-dependent or market failure-prone industries may be due to rents, it is also possible that my figure underestimates the proportion of rent in billionaire wealth. After all, the perfect competition of economics textbooks rarely exists in reality, and there must be many pockets of rents in what I call the “competitive industries” as well. Given the current state of research in the field, 74% is the best estimate of the proportion of US billionaire wealth derived from rents.

The bottom line is that extreme wealth is not broad-based: it is disproportionately generated by a small portion of the economy. Economic theory predicts that activities that are prone to rent-seeking or market failures will concentrate wealth, and that is what we observe.

This finding has important moral, economic, and policy implications. To the extent that it is driven by rents as opposed to productive activities, the extreme concentration of wealth we observe is not fair according to a meritocratic conception of social justice. Moreover, because rents do not compensate productive activities, redistributing them through taxes or regulation does not harm the economy, and could even boost economic growth. As wealth inequality has become so extreme, even modest redistribution could have significant positive impact for the poor and the middle class.

This piece was originally published on the Center for Popular Economics Blog.

The author views the world strictly from a one-factor (people’s labor) point of view, instead of viewing the economy as it actually operates, divided into two (binary) factors of productive input: human and non-human. Binary economics recognizes that there are two independent factors of production: people (labor workers who contribute manual, intellectual, creative and entrepreneurial work) and capital (land; structures; infrastructure; tools; machines; robotics; computer processing; certain intangibles that have the characteristics of property, such as patents and trade or firm names; and the like, which are owned by people individually or in association with others). Fundamentally, economic value is created through both human and non-human contributions.
The problem is the monetary system that is structured so that those with past savings can dominate and monopolize the formation of wealth-creating, income-producing capital assets.
The solution is to free economic growth from the slavery of past savings and empower EVERY child, woman and man to acquire personal OWNERSHIP stakes in FUTURE viable capital asset projects using INSURED, INTEREST-FREE capital credit, repayable out of the FUTURE earnings of the investments in the growth of the economy. This would require neither past savings (denial of consumption) nor any reduction in wage earnings and benefits for those employed and contributing their labor. The concept is relatively simple to understand, and solutions have already been thought through, with proposals drafted to reform the system and create a generally affluent society of individual citizen OWNERS.

The Growing Case For Massive Taxes On The Rich

Society’s takers, hoarders, and cheaters just ignore the injustice, and go on avoiding taxes while they blame the less fortunate for their own misfortunes.

On June 19, 2016, Paul Buchheit writes on Nation Of Change:

While candidates bicker and Congress stagnates and the rest of us dwell on the latest shooting tragedy, the super-rich enjoy the absence of attention paid to one of our nation’s most destructive issues.

The richest Americans are takers of social benefits. Yet they complain about paying 12% to 20% in taxes, even as respected researchers estimate an optimal revenue-producing rate of 80% to 90%, and even with the near-certainty that higher marginal tax rates will have no adverse effects on GDP growth.

The super-rich pay little in taxes because, as Senator Lindsey Graham said, “It’s really American to avoid paying taxes, legally…It’s a game we play…I see nothing wrong with playing the game because we set it up to be a game.” In reality, it’s a game of theft from the essential needs of education, infrastructure, and jobs.

The Richest Individuals Cheat the Most

According to a recent IRS report, an incredible $406 billion annual gap exists between owed and paid taxes, with individuals accounting for over three-quarters of the total, and with the most egregious misreporting coming from the highest income-takers.

That’s about $3,000 per U.S. household in annual lost revenue. Yet even though the IRS retrieves well over $100 for every dollar in salaries paid to its agents, the agency has been rapidly losing staff, making the tax avoidance game a lot easier for the biggest cheaters.
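The per-household figure follows from simple division. A rough check of the arithmetic (the 125 million US household count is my own assumption for the period, not a number from the article):

```python
# Rough check of the per-household revenue loss quoted above.
tax_gap = 406e9        # annual gap between owed and paid taxes, in dollars (IRS figure)
us_households = 125e6  # assumed number of US households, mid-2010s (approximate)

loss_per_household = tax_gap / us_households
print(f"${loss_per_household:,.0f} per household")  # roughly $3,000, as the article states
```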

Corporations Cheat Most Creatively

Relative to a dollar of payroll tax, corporations used to pay $3 in income tax. Now they pay 30 cents.

Exxon uses a theoretical tax to ‘pay’ its bill, and grandfatherly old Warren Buffett’s company Berkshire Hathaway uses hypothetical amounts to avoid paying taxes.

Despite having billions in profits and nearly half of its sales in the U.S., Pfizer claimed enormous losses in the United States.

Each year the Chicago Mercantile Exchange (CME) sells contracts worth about a quadrillion dollars, four times more than all the wealth in the world. Yet ZERO sales tax is paid on the purchases.

Indebted Young Americans Have Lost the Freedom to Innovate

The richest Americans believe they drive the economy. They babble about the “freedom” they create. But experience has shown that productive new ideas, and the job creation that comes with them, are generated by young middle class people, who recently have been devastated by debt and underemployment. As a result of their loss of freedom to take chances, the number of new startups in the U.S. has dropped dramatically.

Revenue lost to tax avoiders is desperately needed to educate and enable our young would-be entrepreneurs.

Decades of Theft from Taxpayers

To the uninformed, Steve Jobs started with boxes of silicon and wires in a garage and fashioned the first Apple computer. The reality is explained by Mariana Mazzucato: “Everything you can do with an iPhone was government-funded. From the Internet that allows you to surf the Web, to GPS that lets you use Google Maps, to touchscreen display and even the SIRI voice activated system — all of these things were funded by Uncle Sam through the Defense Advanced Research Projects Agency (DARPA), NASA, the Navy, and even the CIA.”

It’s the same story with our medicines. Pharmaceutical companies wouldn’t exist without money from the taxpayers, who have provided support for decades through the National Institutes of Health, and who still pay over 80 percent of the cost of basic research for new drugs and vaccines. Yet the drug companies claim patents on medications that were developed with our tax dollars.

Other businesses rely on our roads and seaports and airports to ship their products, the FAA and TSA and Coast Guard and Department of Transportation to safeguard them, and a nationwide energy grid to power their factories, while they pollute our air and water at almost no cost.

Two More Victims of Tax Cheating: K-12 Education and Mental Health Care

Most of the 50 states have cut funding for K-12 education, and they continue to cut it. Teachers haven’t received a raise in 15 years. School infrastructure is crumbling, so severely in Detroit that the kids in some of the schools have nowhere to go to the bathroom.

For the increasing number of Americans (one out of five!) with mental health problems, there is no place to go. The Department of Health and Human Services reports that most U.S. counties “have no practicing psychiatrists, psychologists, or social workers.” In 44 of the 50 states, the majority of mentally ill people reside in jails rather than in psychiatric hospitals. There’s no tax money to support the needs of society, and so people in need are thrown into prison.

Society’s takers, hoarders, and cheaters just ignore the injustice, and go on avoiding taxes while they blame the less fortunate for their own misfortunes.

The place to start reforming our tax code is the tax rate on corporate earnings. That rate should be raised to 90 percent, with the caveat that the tax would be zero on the condition that the earnings of the corporation were fully paid out to the OWNERS as dividend earnings subject to personal tax rates. Why? This would abate retained-earnings financing of future corporate capital asset formation and incentivize corporations to issue and sell new stock to raise money to invest in future viable corporate capital asset growth, while simultaneously creating new capital owners (workers and other citizens). The stock purchases would be financed using insured, interest-free capital credit, repayable out of the future earnings of the investment in growing the economy, without the requirement of past savings (which only the wealthy possess). This would empower EVERY child, woman, and man to acquire significant OWNERSHIP shares in the future formation of viable wealth-creating, income-producing capital assets, and enable far more efficient production using advanced technologies, robotics, computerization, etc., with increasingly less reliance on human input, thus significantly strengthening the productive capability of our economy over time and producing better quality at lower cost. At the same time, individual citizens would replace wage earnings with dividend earnings, steadily advancing to an affluent lifestyle with secure retirement income from a portfolio of productive assets.

Such a policy approach uses the logic of corporate finance. Capital acquisition takes place on the logic of self-financing and asset-backed credit for productive uses. People invest in capital ownership on the basis that the investment will pay for itself. The basis for the commitment of loan guarantees is the fact that nobody who knows what he or she is doing buys a physical capital asset or an interest in one unless he or she is first assured, on the basis of the best advice one can get, that the asset in operation will pay for itself within a reasonable period of time––5 to 7 or, in a worst case scenario, 10 years (given the current depressive state of the economy). And after it pays for itself within a reasonable capital cost recovery period, it is expected to go on producing income indefinitely with proper maintenance and with restoration in the technical sense through research and development.
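The self-liquidating logic described above can be sketched as a simple payback schedule. The asset cost and annual earnings below are hypothetical illustration values, not figures from the text; the loan is interest-free capital credit retired entirely out of the asset’s own earnings:

```python
def payback_years(asset_cost: float, annual_earnings: float) -> int:
    """Years for a capital asset to pay for itself out of its own earnings,
    assuming interest-free capital credit repaid entirely from those earnings."""
    years, outstanding = 0, asset_cost
    while outstanding > 0:
        outstanding -= annual_earnings  # each year's earnings retire principal
        years += 1
    return years

# Hypothetical example: a $10,000 asset earning $1,500 per year
years = payback_years(10_000, 1_500)
print(years)  # → 7, within the 5-to-10-year window described above
```

After the capital cost recovery period, the asset is expected to keep producing income indefinitely (with proper maintenance), which becomes dividend income for its new owner.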

The proposed Capital Homestead Act (aka Economic Democracy Act) is the necessary legislation to achieve this aim. See,, and

Also see Monetary Justice at

Bernie And Common Good Capitalism Movement

On June 16, 2016, Terry Mollner writes on The Huffington Post:

Senator Bernie Sanders now needs to make a decision. Should his movement be a fundamental movement, a political movement, or both?

A political movement is people joining forces to get political power, such as electing candidates, moving a political agenda forward, or both.

A fundamental movement is people joining forces to mature something in the way we all think. Not just some of us but all of us. Its leaders have identified a current social tradition they accurately declare is now immoral. It is not something we may choose to continue or not; as a society, it is something we can’t continue to do, because we have matured to where we can see it is immoral: not fair. Therefore, they also know that no matter how long it takes or how difficult the task, the movement will eventually be successful. As individuals and as societies we are consistently maturing, admittedly in steps forward and backward. However, this change is a maturation we are now able to understand and embrace.

The Civil Rights Movement was a fundamental movement. We have matured to where we know it is immoral not to extend equal rights to African-Americans. The Women’s Movement is another example of a fundamental movement. We have matured to where we know it is immoral not to extend equal rights to women. We also know that, just as today all children are born into societies that know the Earth is round, someday all children will be born into societies that know women, blacks, whites, people of all colors, and all human beings deserve equal rights relative to one another.

What is the fundamental movement at the bottom of Bernie’s call for “a revolution”? I believe it is accurately declaring that the currently acceptable social tradition of giving priority in economic activity to one’s self-interest is immoral. It is not something we can do or not do as members of a society. It is something we can no longer allow ourselves to do because we have matured to where we know it is immoral.

Like any fundamental movement, this will not at first be easy to explain, because it challenges such a deeply held social tradition. However, at bottom, giving priority to one’s self-interest in anything at any time is immoral. And freely choosing to give priority to the common good not only does not threaten individual freedom but enhances the safety necessary to exercise it. Let me explain.

Anytime two human beings come together they have two choices: to compete or cooperate. If they choose to compete they will be in a constant relationship of conflict. If they choose to cooperate, they have used their human skill of self-consciousness to make an agreement. The agreement is they will give priority to the common good of the two of them as if they are two parts of a whole. If many people agree to do this, we call it “a society.”

Therefore people who no longer give priority to the common good of the society in all they do have left it and are in competition with it.

Whether the society is a group of friends, a village, a company, a democratic nation, a communist state, or any other kind of nation, self-interest can be secondary; but it cannot be primary. Also, because of our modern technology and media, like it or not we now all live in an Earth society. Therefore to not give priority to the common good of us all in anything we do at any time is immoral. We are all in this together now.

Throughout history “moral behavior” has been defined as “freely choosing to give priority to the common good.” It is the fundamental agreement of being in “a human society.”

Our marketplace used to be based on an agreement that we would cooperate by competing based on self-interest without collusion among competitors. That agreement was the cooperative context that gave priority to the common good. Therefore the classic competitive marketplace was fundamentally moral.

However we no longer live in a classic competitive marketplace. Our Information Age has made it possible, without talking to each other, to legally cooperate with competitors to give priority to mutual self-interest. Let me explain this as well.

It used to be that companies existed in silos, not knowing what was going on inside one another’s businesses. Today everyone has access to nearly all information and the marketplace is global. Also, a monopoly is illegal but a duopoly monopoly is legal. A duopoly monopoly is when two companies dominate a product market – think of CVS-Walgreens, Home Depot-Lowe’s, Visa-MasterCard, and Ben & Jerry’s-Haagen-Dazs.

Today the rush is on to become number one or two in each product market or to get out of it. If you are part of a duopoly monopoly you have a legal backdoor to monopoly behavior: without ever talking to each other you simply match each other’s price increases. In fact, because of this Information Age, more than two companies can also do it. For instance, in the USA we are down to only four major airlines that can easily do it: United, Delta, American, and Southwest. Therefore the government’s traditional remedy of breaking up companies cannot end this either.

I am on the board of Ben & Jerry’s. In the USA, Ben & Jerry’s and Haagen-Dazs control 82% of the super-premium ice cream market. We never talk to Haagen-Dazs, but we quickly match each other’s price increases. We are a duopoly monopoly, with Ben & Jerry’s based in Bernie’s home state of Vermont. Our two companies are expanding into about the same 35 nations around the world. We are now each owned by one of the most powerful consumer products companies on Earth: Unilever and Nestlé. If we had not become the duopoly monopoly in our market, two other brands would have. So in the 2016 global marketplace we have no choice but to seek to become one of the two dominant companies. We will probably eventually become the global super-premium duopoly monopoly. Because of the power of our parent companies, it will not be easy for any other company to stop this from happening. The one brand that began to eat into our market, Talenti, was immediately bought by Unilever and copied by Haagen-Dazs. They did not want to sell but quickly realized they had no choice if they wanted their product to survive, get its value in dollars, or both. Welcome to the duopoly monopoly 2016 marketplace.

Bernie has the potential to create both a fundamental movement and a political movement while fully doing both. The most fundamental platform of his “revolution” is economic reform. He can have it be a fundamental movement by declaring that we have matured to now know it is immoral to give priority to self-interest in economic activity. In our now global economy with global companies that are now capable of primarily cooperating for self-interest without ever talking to each other it is essential that we all publicly declare that in all economic activity our highest priority is the common good, not self-interest.

If companies can now primarily cooperate for mutual self-interest they can also primarily cooperate for the common good.

There will be a continued proliferation of global duopoly monopolies. It is now easily possible. Thus, practically, common good capitalism will be when the duopoly monopolies in each product area voluntarily meet, along with all the smaller competitors, and reach agreements that give priority to the common good. A couple of public officials without a vote could be present to provide evidence to the public that the agreements were based on giving priority to the common good, not collusion for self-interest. The former is legal; the latter is not. Then the companies could not only annually provide the public a financial audit but also a common good audit. It would report on their agreements and the progress of each in fulfilling them. This would also allow the public to be in an ongoing conversation with the companies about the new agreements we have matured into wanting, agreements that give priority to the common good. We are constantly maturing, as the Civil Rights, Women’s, Environmental, and Gay Right to Marry Movements have revealed.

The agreements they could forge are quite obvious. One could be to make the minimum wage a livable wage in each employment location, along with other good labor agreements. Another could be certain environmental and safety standards. They could even agree to donate the same percentage of annual net profits to reduce poverty around the world.

As a secondary activity they could compete as ferociously as before in all other areas, such as packaging, marketing, distribution, and new product development. Of course this process could eventually lead to legislation nearly all support.

What is significant about common good capitalism is that it fully honors individual freedom and free markets. There is nothing to prevent another company being so successful that it replaces one of the two duopolists. Secondly, relative to one another, it will not cost the companies a penny. If they need to raise their prices to afford some of their agreements, such as a livable minimum wage, they will both do so at the same time; and they will still be able to compete on price, but now it will be based on efficiencies, a committed workforce, and better service to the customers. Thirdly, the conflict between workers and employers can end and be replaced with joint cooperation with the priority being the common good. Lastly, the continued proliferation of duopoly monopolies will be supported by the public rather than attacked. They will overtly, publicly, and freely choose to give priority to the common good. Thus, our multinational corporations, some of the most powerful organizations on Earth, will be primarily cooperating with all for the common good. This, by the way, will be essential for the Environmental Movement to succeed.

Common good capitalism is inevitable. In the Information Age it is the only solution to the now easy global duopolization of our product markets. We want these companies to continue producing all the things we need. At the same time monopoly behavior through a backdoor is not acceptable. Common good capitalism is the only solution that both honors individual freedom and free markets that is something we definitely want to build on and not end. It also enhances the safety for all so we are able to more easily, broadly, and safely exercise individual freedom.

In addition to becoming the Mahatma Gandhi, Martin Luther King Jr., or Nelson Mandela of the common good capitalism fundamental movement, Bernie can, as those three did, also seek political power by getting officials elected and legislation passed that supports it.

Bernie now has a golden opportunity to create both a common good capitalism fundamental movement as well as a political movement. I hope he does both.

See my article published by The Huffington Post entitled “Bernie Sanders Can Win By Empowering Every Child, Woman, And Man To Become A Productive Capital Asset Owner” at

10 Ways America Has Come To Resemble A Banana Republic


On September 5, 2013, Alex Henderson writes on AlterNet:

In the post-New Deal America of the 1950s and ’60s, the idea of the United States becoming a banana republic would have seemed absurd to most Americans. Problems and all, the U.S. had a lot going for it: a robust middle-class, an abundance of jobs that paid a living wage, a strong manufacturing base, a heavily unionized work force, and upward mobility for both white-collar workers with college degrees and blue-collar workers who attended trade school. To a large degree, the nation worked well for cardiologists, accountants, attorneys and computer programmers as well as electricians, machinists, plumbers and construction workers.

In contrast, developing countries that were considered banana republics—the Dominican Republic under the brutal Rafael Trujillo regime, Nicaragua under the Somoza dynasty—lacked upward mobility for most of the population and were plagued by blatant income inequality, a corrupt alliance of government and corporate interests, rampant human rights abuses, police corruption and extensive use of torture on political dissidents.

Saying that the U.S. had a robust middle class in the 1950s and ’60s is not to say it was devoid of poverty, which was one of the things Dr. Martin Luther King, Jr. was vehemently outspoken about. King realized that the economic gains of the post-World War II era needed to be expanded to those who were still on the outside of the American Dream looking in. But 50 years after King’s “I Have a Dream” speech of 1963, poverty has become much more widespread in the U.S.—and the country has seriously declined not only economically, but also in terms of civil liberties and constitutional rights.

Here are 10 ways in which the United States has gone from bad to worse, and is looking more and more like a banana republic in 2013.

1. Rising Income Inequality and Shrinking Middle Class

In a stereotypical banana republic, income inequality is dramatic: one finds an ultra-rich minority, a poor majority, a small or nonexistent middle class, and a lack of upward mobility for most of the population. And according to a recent study on income inequality conducted by four researchers (Emmanuel Saez, Facundo Alvaredo, Thomas Piketty and Anthony B. Atkinson), the U.S. is clearly moving in that direction in 2013.

Their report asserted that the U.S. now has the highest income inequality and lowest upward mobility of any country in the developed world. They found that while the picture grows increasingly bleak for America’s embattled middle class, “the share of total annual income received by the top 1% has more than doubled from 9% in 1976 to 20% in 2011.” And earlier this year, a report by the Organization for Economic Co-operation and Development (OECD) also found that the U.S. now leads the developed industrialized world in income inequality.

2. Unchecked Police Corruption and an Ever-Expanding Police State 

Journalist Chris Hedges made an excellent point when he said that brutality committed on the outer reaches of empire eventually migrates back to the heart of empire. Hedges asserted that with the increased militarization of American police, drug raids in the U.S. are now looking like military actions taken by American soldiers in Fallujah, Iraq. And, to be sure, there have been numerous examples of militarized narcotics officers killing innocent people in botched drug raids or sting operations gone wrong.

To make matters worse, narcotics officers who kill innocent people rarely face either civil or criminal prosecution; they essentially operate with impunity. And in addition to the abuses of the war on drugs, the U.S. government has far-reaching powers it did not have prior to 9/11. Between the drug war, the Patriot Act, the National Defense Authorization Act, and warrantless wiretapping, the United States is employing the sorts of tactics that are common in dictatorships.

3. Torture

During the Cold War, the U.S. supported many fascist regimes and banana republics that engaged in torture. But it didn’t openly flaunt such tactics itself. That changed after 9/11. Post-9/11, the U.S. crossed a dangerous line when the CIA used waterboarding on political detainees with the blessing of the George W. Bush administration. Waterboarding and other forms of torture are not only bad interrogation methods that do nothing to decrease or prevent terrorism, they are a blatant violation of the rules of the Geneva Convention. As Amnesty International observed, “In the years since 9/11, the U.S. government has repeatedly violated both international and domestic prohibitions on torture and other cruel, inhuman or degrading treatment in the name of fighting terrorism.”

4. Highest Incarceration Rate in the World

According to the London-based International Center for Prison Studies, the U.S. has 716 prisoners per 100,000 residents, compared to 114 per 100,000 in Canada, 79 per 100,000 in Germany, 106 per 100,000 in Italy, 82 per 100,000 in the Netherlands and 67 per 100,000 in Sweden. Even Saudi Arabia, which has an incarceration rate of 162 per 100,000, doesn’t imprison nearly as many of its residents as the United States. One of the main reasons the U.S. has such a high incarceration rate is its failed war on drugs, which has emphasized draconian sentences for nonviolent offenses.

The prison industrial complex has become quite a racket. From prison labor to construction companies to companies specializing in surveillance technology, imprisoning people is big business in the United States—and the sizable prison lobby has a major stake in keeping draconian drug laws on the books. Further, the drug war has included harsh asset forfeiture laws that, in essence, place the burden of proof not on the courts, but on people whose assets have been seized.

5. Corrupt Alliance of Big Business and Big Government

Trends forecaster Gerald Celente has asserted that the U.S. has become a “fascist banana republic” and now lives up to Italian dictator Benito Mussolini’s definition of fascism: the merger of state and corporate power. Celente, a frequent guest on the cable news network RT, has repeatedly said that systemic corruption in the banking sector has not decreased since the financial crash of September 2008 and the bailouts that came after it; it has gotten worse, and too-big-to-fail banks now operate with impunity.

That union of corporate and state power fits Mussolini’s definition of fascism, which was followed by a long list of dictators in banana republics. In a democratic republic, banks and corporations are not above the law; in a banana republic, they are—and with the legislation and reforms of Roosevelt’s New Deal (which did a lot to prevent banks and corporations from enjoying unchecked power) having been undermined considerably (most notably, by the 1999 repeal of the Glass-Steagall Act of 1933), the U.S. is looking more and more like a banana republic.

6. High Unemployment

According to the Bureau of Labor Statistics, the unemployment rate in the U.S. decreased to 7.4% in July 2013. But that figure is misleading because it fails to take into account the millions of Americans who have given up looking for work (that is, they have been unemployed for so long the BLS no longer counts them as part of the work force) or workers who have only been able to find temp work.

And according to economist/researcher John Williams, the unemployment crisis in the U.S. is much more dire than the BLS’ 7.4% figure suggests. Williams’ research counts the millions of Americans the BLS excludes, and his newsletter, Shadow Statistics, reported that in June 2013, the U.S.’ actual unemployment rate was a disturbing 23.3% (only slightly less than the unemployment rate in 1932). Also, BLS figures don’t take into account the fact that most of the new jobs created in 2013 have been low-paying service jobs. Clearly, much of the American population is growing poorer while the 1% are doing better than ever.

7. Inadequate Access to Healthcare

The United States continues to be the only developed country that lacks universal healthcare. And since the economic meltdown of September 2008, the number of Americans who lack health insurance has increased. According to a study the Commonwealth Fund conducted in 2012, 55 million Americans lacked health insurance at some point last year—and that 55 million doesn’t even count all of the Americans who are underinsured, meaning that they have gaps in their coverage that could easily result in bankruptcy in the event of a major illness. Americans have some of the highest healthcare expenses in the world but are plagued with much worse outcomes than residents of Canada, Australia, New Zealand or any country in Western Europe. From medical bankruptcies and sky-high premiums to a lack of preventative care, the American healthcare system is a disaster on many levels.

The U.S. took a small step in the direction of universal healthcare with the passage of the Affordable Care Act of 2010, but many proponents of health insurance reform have been quick to point out that it doesn’t go far enough. According to Robert Reich, “Obamacare is an important step, but it still leaves 20 million Americans without coverage.”

8. Dramatic Gaps in Life Expectancy

In many banana republics, it is common knowledge that the poor die much younger than the wealthy minority. The disparity in life expectancy rates dramatically illustrates the severity of the growing rich/poor divide in the United States. Life expectancy for males is 63.9 years in McDowell County, West Virginia compared to 81.6 years in affluent Fairfax County, Virginia or 81.4 in upscale Marin County, Calif. That is especially alarming when one considers that life expectancy for males was 68.2 in Bangladesh in 2012 and 64.3 for males in Bolivia, one of the poorest countries in Latin America, in 2011.

The news for many American women isn’t very good either. According to the United Nations, American women on the whole fell from #14 worldwide in life expectancy in 1985 to #41 in 2010. And in September 2012, the New York Times reported that nationally, life expectancy was down to 67.5 years for the least educated white males compared to 80.4 for more educated white males. The Times also reported that life expectancy was 73.5 years for less educated white females compared to 83.9 for more educated white females.

9. Hunger and Malnutrition

In the 1950s and ’60s, hunger was a word one associated with developing countries rather than the United States. But with millions of Americans having slipped into poverty during the current economic downturn, the number of people who are now poor enough to qualify for food stamps has increased from 17 million in 2000 to 47 million in 2013. Only one in 50 Americans received food stamps in the 1970s; now, the number is one in seven.

According to Share Our Strength, 48.8 million Americans now suffer from food insecurity. In 2010, Arianna Huffington came out with a book titled Third World America: How Our Politicians Are Abandoning the Middle Class and Betraying the American Dream. That title was no exaggeration; the U.S. is, as Huffington said, “on a trajectory to become a Third World country,” and the fact that food stamp use has more than doubled since 2000 bears that out.

10. High Infant Mortality

Earlier this year, the organization Save the Children released the results of its 14th annual State of the World’s Mothers Report. The report found that “the United States has the highest first-day death rate in the industrialized world” (babies dying the day they are born) and that the European Union has “only about half as many first-day deaths as the United States: 11,300 in the U.S. vs. 5,800 in EU member countries.”

“Poverty, racism and stress are likely to be important contributing factors to first-day deaths in the United States,” said the report. Save the Children also reported that the U.S. had a rate of three first-day deaths per 1,000 births, the same rate the organization reported for developing countries like Egypt, Tunisia, Sri Lanka, Peru and Libya. Meanwhile, Mexico, Argentina, Chile, El Salvador and Costa Rica were among the Latin American countries that had only two first-day deaths per 1,000 births. So, a baby born in El Salvador or Mexico has a better chance of living to its second day than a baby born in the United States.

What will it take for the United States to reverse its dramatic decline? Robert Reich, in a video released on Labor Day 2013, called for six things: 1) a living wage for more American workers; 2) an earned income tax credit; 3) better childcare for working parents; 4) easier access to good schools and a quality education; 5) universal health insurance; and 6) union rights.

Those are all excellent ideas. The U.S. also should replace the war on drugs with a sane drug policy (something Attorney General Eric Holder recently addressed), abolish the prison industrial complex, rebuild the U.S.’ decaying infrastructure, abolish the Patriot Act and the NDAA, restore the Glass-Steagall Act and break up too-big-to-fail banks. Obviously, accomplishing even a third of these would be an uphill climb. But unless most or all of those steps are taken, the U.S. can look forward to a grim future as a banana republic.

Face it, this is due to a lack of sufficient earning power to provide for one’s own and one’s family’s general welfare. The vast majority of Americans suffer from a serious lack of income to cover basic day-to-day, week-to-week and month-to-month living expenses, and have essentially no savings to cover emergency expenses or to provide retirement security.

We need to reform the system to empower EVERY child, woman, and man to secure new income sources, namely wealth-creating, income-producing capital assets embodied in viable growth corporations. This can be accomplished by empowering EVERY child, woman and man to acquire personal OWNERSHIP stakes in the FUTURE formation of wealth-creating, income-producing capital assets using INSURED, INTEREST-FREE capital credit, repayable out of the FUTURE earnings of the investments, without the requirement of PAST SAVINGS, a job or any income from any source. Instead, their solution is viewed solely in terms of raising the minimum wage, which continues the serfdom status of the vast majority of Americans now employed in minimum-wage sectors of the economy. A boost in the minimum wage is not the solution to economic inequality. Widespread, universal personal wealth-creating, income-producing capital asset property OWNERSHIP is, and is the true path to inclusive prosperity, inclusive opportunity, and inclusive economic justice.

Support the Agenda of The Just Third Way Movement at and

Support Monetary Justice at

Support the Capital Homestead Act (aka Economic Democracy Act) at and

See my article entitled “The Solution To America’s Economic Decline” at



The Milton Friedman Doctrine Is Wrong. Here’s How To Rethink The Corporation.

On June 9, 2016, Susan Holmberg and Mark Schmitt wrote on Evonomics:

The compensation of American executives—CEOs and their “C-suite” colleagues—has long been a matter of controversy, especially recently, as the wages of average workers have stagnated and economic inequality has moved to the center of the national debate. Just about every spring, the season of corporate proxy votes, we see the rankings of the highest-paid CEOs, topped by men (they’re all men until number 21) like David Cote of Honeywell, who in 2013 took home $16 million in salary and bonus, and another $9 million in stock options.

Rarely, however, does the press coverage go beyond the moral symbolism of a new Gilded Age. Coverage of CEO pay usually fails to show that the scale of CEO pay packages—and the way CEOs are paid—comes at a cost. At the most basic level, the company is choosing to pay executives instead of doing other things—distributing revenues to shareholders, raising wages for workers, or reinvesting in the business. But the greater cost may be the risky behavior that very high pay encourages CEOs to engage in, especially when pay is tied to short-term corporate performance. CEO pay also plays a major role in the broader trend toward radical inequality—a trend that, evidence has shown, precipitates financial instability in turn.

CEO pay has been controversial in the United States for more than a century—for as long as corporate management has been a profession separate from ownership. In economic booms, CEO pay skyrockets and, after the inevitable bust, it attracts attention—as the million-dollar paychecks of executives such as W.R. Grace of Bethlehem Steel and Charles Mitchell of National City Bank drew notice in the 1930s. But the most recent debate focuses on the staggering, uninterrupted rise in CEO pay over the past three decades, following a long period of moderation in both executive pay and in overall economic inequality. Between 1940 and 1970, average CEO pay remained below $1 million (in 2000 dollars). According to the Economic Policy Institute (EPI), from 1978 to 2013, CEO pay at American firms rose a stunning 937 percent, compared with a mere 10.2 percent growth in worker compensation over the same period, all adjusted for inflation. In 2013, the average CEO pay at the top 350 U.S. companies was $15.2 million.

Given the polarization and stalemate of current politics, one might expect CEO pay to be one of those issues, like tax loopholes, that the public occasionally gets upset about but the political system, which demonstrably tilts toward the interests of the wealthy, ignores or can’t resolve. But in fact, the cause of restraining CEO pay has had remarkable political success—measured by legislation passed and regulations enacted—since the 1930s, when CEO pay first became a contentious public issue.

The problem isn’t that the political system doesn’t want to deal with excessive CEO pay. There have been any number of formal efforts to rein in executive pay, involving a host of direct regulation and tax changes. But most of the specific efforts to reduce executive pay—through major policies such as a limit on the tax deductibility of high salaries, as well as more modest accounting and disclosure legislation—have fallen short. That’s because the story of skyrocketing executive pay is a story about our conception of the corporation and its responsibilities. And until we rethink our deepest assumptions about the corporation, we won’t be able to master the challenge of excessive CEO pay, or the inequality it generates. Is the CEO simply the agent of the company’s shareholders? Is the corporation’s only obligation to return short-term gains to shareholders? Or can we begin to think of the corporation in terms of the interests of all those who have a stake in its success—its customers, its community, and all of its employees? If we take the latter view, the challenge of CEO pay will become clearer and more manageable.

Decades of Modest Pay

It’s strange to imagine, but the position of corporate CEO is a relatively new one in the history of American business, and CEO pay has been controversial for most of that time. According to Harwell Wells of Temple University’s law school, who has written one of the only historical accounts of the CEO pay debate, before the “great merger movement” of the early twentieth century, all but a few companies were small and were run by managers who owned a sizeable portion of the business. At the beginning of the twentieth century, the face of industry was morphing from thousands of small manufacturing firms into fewer large corporations. As owners of these companies opted out of day-to-day management, employee-executives gradually took over their roles, and “management” became a profession. It didn’t take long for CEO pay to begin to climb—and for the American people to object.

There is very little information available about CEO pay prior to 1935, when the 1934 Securities Exchange Act implemented Form 10-K, the annual report companies are required to file with the Securities and Exchange Commission (SEC). One of the only surveys available tells us that, prior to World War I, the average salary of an executive at a large corporation was $9,958, or $220,000 in 2010 dollars, which would be paltry for most of today’s mid-management, let alone today’s high-level executives.

Convinced that an executive salary would never inspire managers to feel the same stake in their company that owners inherently have, American Tobacco and U.S. Steel were among the first companies, in the 1910s, to institute “performance pay” in the form of bonuses for senior executives, who received a percentage of annual profits in addition to their base salary. By 1928, a survey of 100 industrial companies showed that 64 percent of executives received a bonus, typically in the form of cash linked to the firm’s annual profits. The same survey found that for those executives, bonuses constituted 42 percent of average total compensation. Incidentally, while it’s impossible to do any real comparison with the available data, there does seem to be a noticeable jump in pay after bonuses were introduced. The 1928 survey of industrial firms reports that the median annual compensation for executives was $69,728, or $892,000 in 2010 dollars—four times the pre-World War I numbers.

How the Explosion in CEO Pay Happened

The most comprehensive historical analysis of CEO pay numbers, by Carola Frydman and Raven Saks Molloy, indicates that average pay remained below $1 million (in 2000 dollars) from 1936 to the mid-1970s—despite the fact that there was a lot of company growth during that time span. It even fell in the 1940s: sharply during World War II, and more gradually in the later part of the decade, which, according to Frydman and Saks Molloy, was “the last notable decrease in the past 70 years.” From the early 1950s to the mid-1970s, the inflation-adjusted value of executive pay increased very gradually, averaging less than 1 percent growth a year. Growth in pay picked up speed starting in the mid-1970s and continued until the recent financial crisis, with the most significant increase happening in the 1990s, when annual growth rates topped 10 percent. According to EPI, between 1978 and 2012, CEO pay rose about 875 percent.

Starting in 1930, a handful of shareholder lawsuits put the issue of executive pay on the front pages, culminating in Congress’s “Pecora hearings” on the securities industry. The hearings revealed that Charles E. Mitchell of National City Bank (now Citibank), who was blamed for fueling the speculation that led to the Crash of 1929, took home more than $1 million a year leading up to the crash, a revelation that inflamed shareholders and the American public and prompted the federal government to begin to institute reforms, starting in the early 1930s with the Securities Act and the Securities Exchange Act.

The New Deal response to the Pecora revelations centered on disclosure, which was already a major component of the nascent structure of corporate reform and Wall Street regulation. As previously noted, the 10-K form on which we find chief executive salaries to this day was created in the Securities Exchange Act of 1934. Soon after, in 1938, the SEC required shareholder proxies to report compensation of the corporation’s top three executives. Since the New Deal, the SEC has, among other regulations, instituted a variety of disclosure rules, including a 2009 rule requiring some companies to disclose what they pay for compensation consulting. And it has recently proposed a strong disclosure rule—mandated by the Wall Street Reform and Consumer Protection Act of 2010, better known as the Dodd-Frank bill—on the CEO-worker pay gap.

Another avenue to target executive pay has been through the tax code. Tax provisions specifically addressing executive pay date back to 1950, when restricted stock grants were given preferential treatment. And overall changes to tax rates have likely had a significant effect on executive pay. Thomas Piketty has suggested that a major cause of the sharp rise in inequality beginning in the late 1980s was the tax reform of 1986, which reduced individual rates and closed corporate loopholes, making it more lucrative for executives to take money as salary than to leave it in the company.

But it wasn’t until the beginning of the 1990s that the current effort to use the tax code to target executive compensation directly took hold. Compensation expert Graef Crystal’s 1991 book, In Search of Excess: The Overcompensation of American Executives, became a best-seller, but more important was a single reader: then-presidential candidate Bill Clinton. CEO pay became a core issue of Clinton’s 1992 campaign, during which he pledged to eliminate corporate tax deductions for executive pay in excess of $1 million a year. In the 1993 budget legislation, this policy became part of the U.S. tax code, known as Section 162(m). But it came with a few qualifiers. The most significant was the exception for executive pay based on specific corporate performance goals, called “performance pay.”

The IRS offered a technical definition for performance pay but, to corporations’ collective glee, allowed a lot of room for interpretation, so companies quickly began moving executive pay from salaries to mainly stock options and restricted stock grants. If you look at a standard proxy statement, you’ll notice that most companies say outright what sections of their executive compensation packages are designed to avoid being taxed.

After In Search of Excess and Section 162(m), CEO pay continued to skyrocket, now at an even faster pace. During the longest sustained run-up in stock prices since the 1920s, companies exploited the performance-pay loophole, and the spike was driven by short-term measures of earnings or stock performance.

The Value of a CEO

Aside from the occasional anomaly, where pay clearly doesn’t align with performance (as in, for example, the case of JPMorgan Chase’s Jamie Dimon, who recently announced 10,000 potential layoffs by the end of 2014 despite his $20 million in pay last year), one might ask what is so wrong with high CEO pay. Especially when it’s linked to profits or stock performance, haven’t executives earned this compensation?

Indeed, that is what the economic theory of marginal productivity—which holds that any worker is paid based on what he or she adds to the firm’s income—would suggest. Harvard economist N. Gregory Mankiw has argued that “the most natural explanation of high CEO pay is that the value of a good CEO is extraordinarily high.”

But this is the most tautological of economic ideas. The theory requires very strict assumptions that are found nowhere in the real world, and it cannot be put to the test, because it is impossible to measure the performance of a CEO in terms of his or her marginal contribution to a firm, particularly when success is the function of an entire team. And when “pay for performance” is based on the company’s stock price, it is really “pay for luck,” because more of the share price performance that CEOs are paid for is driven by broader macroeconomic factors, particularly economic upswings, than anything the executives did. But when the economy declines, and the share price goes down with it, executives are usually not penalized. Marginal productivity theory seems to move in only one direction.
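The “pay for luck” critique can be made concrete with a toy decomposition. The one-factor model and all figures below are hypothetical illustrations, not data from the article: they simply show how a stock gain that tracks a rising market can reward a CEO for macroeconomic tailwinds rather than firm-specific performance.

```python
# Illustrative sketch (hypothetical numbers): split a stock's annual return
# into the part explained by the broad market and a firm-specific residual.

def decompose_return(total_return: float, beta: float, market_return: float):
    """Return (market_component, firm_component) for a simple one-factor view:
    the market component is beta * market_return; the rest is firm-specific."""
    market_component = beta * market_return
    firm_component = total_return - market_component
    return market_component, firm_component

# A stock that rose 25% in a year when the market rose 20%:
market_part, firm_part = decompose_return(0.25, beta=1.0, market_return=0.20)
# Twenty of the twenty-five points track the market -- "luck" -- yet
# price-linked "performance pay" rewards the CEO for the full 25%.
```

On these assumed numbers, only a fifth of the gain is attributable to anything the firm did; a pay package keyed to the raw share price does not make that distinction.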

The foundation of “pay for performance” is “agency theory” or “shareholder primacy.” The intellectual godfather of shareholder primacy is Milton Friedman, who wrote in 1970 that “a corporate executive is an employee of the owners of the business [i.e., the shareholders]. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make as much money as possible,” without breaking the law or cheating people. At a time when CEO pay was less than 40 times what the typical worker earned (the multiple is now more than 350), Michael C. Jensen and William H. Meckling codified Friedman’s argument with their seminal 1976 article, “Theory of the Firm.” The purpose of corporate governance, they argued, is about finding ways to align the incentives of shareholders (whom they referred to as “principals”) and executives (“agents” of the shareholder-owners). This theory has enraptured economics departments and business and law schools for decades and profoundly shaped how corporate officers, shareholders, taxpayers, policy-makers, and even most Americans think about the roles and responsibilities of corporations.

Though shareholder primacy has never been challenged in a serious way, a bit of heresy did happen at the 2013 annual meeting of the Allied Social Science Associations, where mostly neoclassical economists converge to present their research, graduate students scramble for tenure-track jobs, and what should be debatable ideas like marginal productivity theory are taken as pillars of research. That year, a French financial economist named Jean-Charles Rochet gave the keynote address, in which he skewered the very foundation of pay for performance. Cornell Law School professor Lynn Stout calls it the “shareholder value myth”—the idea that corporations exist for shareholders and no one else. Rochet told the conference: “Everyone knows that corporations are not just cash machines for their shareholders, but that they also provide goods and services for their consumers, as well as jobs and incomes for their employees. Everyone, that is, except most economists.”

Rochet was, if anything, being too kind. While economists are certainly to blame for presenting an ideology as a natural law, shareholder primacy has also infiltrated the consciousness of most politicians and journalists and has been transmitted in our classrooms, to the extent that today, most of the American public has come to take this myth for granted.

The idea that there are other corporate stakeholders besides shareholders—the stakeholder framework—is not a new one. But it’s gained traction recently as a result of Rochet’s speech, Stout’s 2012 book, The Shareholder Value Myth, and the emergence of new corporate forms like benefit corporations, which promise to be accountable and transparent about their impact on the environment and surrounding communities, and often aspire to a “double bottom line” of private and public value. A high-profile union-recognition battle at a Volkswagen plant in Chattanooga earlier this year, in which the company not only didn’t challenge the union but actually invited the vote, brought new attention to the German corporate model, based on “works councils” that include labor in decision-making, and a much broader vision of the corporation’s obligations.

At the heart of almost every effort to curb CEO pay have been the assumptions of marginal productivity and shareholder primacy. There is no silver bullet to slow the growth of CEO pay. It requires all the tools in our toolbox—the tax code, disclosure and accounting rules, and so forth. But none of those will be fully effective without rethinking the very purpose of the corporation, a question that is too often outside the scope of debate.

The True Costs of High CEO Pay

Before we go any further, we should consider how CEO pay is determined. In theory—and this is what corporations would like us to believe—compensation packages for CEOs are determined by independent boards of directors, by compensation committees made up of members of the board, and sometimes by compensation consultants, who make pay recommendations based on their analysis of the market.

But Lucian Bebchuk and Jesse Fried, in their 2004 book Pay Without Performance, argued that this procedure is a comforting fiction. They wrote that skyrocketing executive pay is the blatant result of CEOs’ power over decisions within U.S. firms, including compensation. Being on a corporate board is a great gig. It offers personal and professional connections, prestige, company perks, and, of course, money. In 2013, the average compensation for a board member at an S&P 500 company—usually a part-time position—was $251,000. It only stands to reason that board members don’t want to rock the CEO’s boat. While directors are elected by shareholders, the key is to be nominated to a directorship, because nominees to directorships are almost never voted down. Bebchuk and Fried showed that CEOs typically have considerable influence over the nominating process and can exert their power to block or put forward nominations, so directors have a sense that they were brought in by the CEO. Beyond elections, CEOs can use their control over the company’s resources to legally (and sometimes illegally) bribe board members with company perks, such as air travel, as well as monetary payment.

Usually the CEO pay debate pivots on the public’s distaste for extreme inequality. While Thomas Piketty has recently provided us an impressive historical account of how capital accumulation increases inequality, Joseph Stiglitz, in his 2012 book The Price of Inequality, and former Labor Secretary Robert Reich’s recent documentary Inequality for All have moved the conversation by broadening our grasp of how economic inequality, including between CEOs and the typical worker, harms our society. What we haven’t talked about enough is how the assumptions and incentives driving CEO pay, which primarily encourage executives to raise the price of the company’s stock, can damage the economy by encouraging companies to take on excessive risk, rewarding fraudulent behavior and curtailing real investment and innovation.

A successful business leader or entrepreneur needs to be willing to evaluate and take risks. Starting a business, moving into new markets, and developing new products all come with great risks—of losing profits, shutting down departments, even closing a company’s doors. One of the main arguments for high CEO pay is that it compensates executives for being exceptionally calculating risk-takers. Yet there is plenty of evidence that shows us that when CEOs are paid with stock—either options or grants—it can enable executives to become very wealthy very quickly without bearing much risk at all. This creates the financial motivation for CEOs to make shortsighted and very high-risk decisions in order to boost their company’s stock prices, which will ultimately line their own pockets. The effects of this behavior, particularly with CEOs in the financial industry, can be measured in higher share-price volatility (meaning large swings in share prices) and in bank failures, such as those of 2008 and 2009, which had profound consequences for the broader financial and economic system.

More troubling about the ways in which CEOs are paid is that incentives can easily move from risky behavior toward outright fraud, including misrepresenting the company’s finances and illegal stock-options backdating. The backdating of stock options became a scandal in the late 2000s. By retroactively changing the date when a stock option was granted, typically to an earlier date when the share price was lower, companies can change the baseline by which performance was measured, making it look better than it was, in order to pump up executive pay. At its peak, this was not a rare practice: A study led by Bebchuk showed that between the mid-1990s and mid-2000s, 12 percent of the firms in the sample backdated options for their CEO, boosting total compensation by around 20 percent. Many studies demonstrate that firms found committing fraud have greater stock option-based compensation, suggesting that the greater the incentive for CEOs to maximize the company’s stock price, the greater the incentive the CEO has to engage in fraudulent activities to accomplish this objective.
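The mechanics of backdating can be shown with a small worked example. The share prices and option count below are hypothetical, chosen only to illustrate how moving the grant date to a cheaper day lowers the strike price and inflates the eventual payout:

```python
# Hypothetical numbers: how backdating an "at the money" stock-option grant
# inflates its value. The strike equals the share price on the grant date,
# so dating the grant back to a lower-priced day lowers the strike.

def option_gain(price_at_exercise: float, strike: float, n_options: int) -> float:
    """Intrinsic value of n options when exercised."""
    return max(price_at_exercise - strike, 0.0) * n_options

true_grant_price = 50.0   # share price on the actual grant date
backdated_price = 40.0    # share price on the earlier, cherry-picked date
exercise_price = 80.0     # share price when the executive exercises

honest = option_gain(exercise_price, true_grant_price, 100_000)    # $3,000,000
backdated = option_gain(exercise_price, backdated_price, 100_000)  # $4,000,000
```

On these assumed figures, backdating adds a third to the payout with no change in the company's actual performance, which is why the practice amounts to misrepresenting compensation rather than rewarding results.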

CEO pay that is ultimately based on the stock price invites another easy trick to show performance: stock buybacks. The problem, according to economist William Lazonick, co-director of the UMass Center for Industrial Competitiveness, is that funds for stock buybacks come at the expense of other priorities. By choosing to buy back publicly held shares, executives can push up the price of the stock without actually investing in the company’s capital, research and development, or workers.

Lazonick’s research provides many examples. For several decades after World War II, IBM had a lifetime employment policy, which was the norm for that era. In the mid-1990s, IBM shifted gears from manufacturing to software and services, and global employment dropped from 374,000 to 220,000. A leader in the U.S. offshoring movement, IBM announced in 2011 a strategic plan for the years through 2015, the main objective of which was to raise its earnings per share from $13.44 to $20 by increasing “operating leverage” (i.e., layoffs) and buybacks. IBM bought back $107 billion of its stock between 2003 and 2012, $13.9 billion in 2013 alone, and $8.2 billion in the first quarter of 2014. All these financial moves have had the effect of boosting “performance pay” for executives without the slightest improvement in the company’s revenues, market share, or profits.

Performance pay, on the model encouraged by the 1993 reform, has been tested. What we’ve learned is that it rewards not performance, but shortsightedness, excessive risk, and even fraud, and that the consequences go well beyond radical inequality to include the kind of crisis that nearly took down the economy in 2008, abrupt layoffs and plant closings to meet shareholder expectations, corners cut on products that risk consumer safety (as seen at General Motors), and desperate attempts to evade the costs of environmental and workplace safety regulation.

From Shareholders to Stakeholders

There is an alternative. If Rochet, Stout, and others are right that a corporation has obligations beyond delivering short-term gains to the shareholders of the moment, then surely that alternative view of the corporation can provide a sounder foundation for thinking about CEO pay. CEOs should be rewarded for productivity and performance, yes, but success should be measured in terms that reflect the interests of all the stakeholders in a corporation, and the corporation’s own health.

The stakeholder corporation is not a new idea. The term stakeholder has been in circulation since the 1960s to characterize the key groups of people that support an organization. R. Edward Freeman brought it into the management world in 1984, when he published Strategic Management: A Stakeholder Approach. The book proposed that effective management consists of balancing the interests of all the corporation’s stakeholders, including employees, customers, and communities:

Simply put, a stakeholder is any group or individual who can affect, or is affected by, the achievement of a corporation’s purpose. Stakeholders include employees, customers, suppliers, stockholders, banks, environmentalists, government, and other groups who can help or hurt the corporation. The stakeholder concept provides a new way of thinking about strategic management—that is, how a corporation can and should set and implement direction. By paying attention to strategic management, executives can begin to put their corporations back on the road to success.

The concept of the stakeholder corporation has percolated since Freeman’s book, and interest in this model has slowly begun to take root, particularly since the failed United Auto Workers vote at the Volkswagen plant in Chattanooga.

German corporations like VW are far friendlier than their U.S. counterparts to worker rights and “co-determination.” [See “The Church of Labor,” Issue #22.] Works councils, or Betriebsräte, are essentially “shop floor” organizations that represent workers and implement labor law at the local level. A works council is what the Germans proposed in Chattanooga, but after a drawn-out public battle, workers at the plant rejected the idea by voting against unionization, which was opposed by the state’s Republican politicians, though not by the company.

The stakeholder corporation is not only a brilliant model, as the German economic success, especially in manufacturing, shows—it is also the key to the unresolved problem of CEO pay. Shareholder primacy is now so self-evidently flawed that we should be emboldened to think of a range of options—through policy, corporate norms, and culture—for changing CEO pay practices. The irony, as Cornell’s Stout points out, is that broadening the scope of corporate stakeholders would benefit many shareholders as well, because “long-term shareholders fear corporate myopia.”

Imagine what becomes possible when we start to understand that executives and managers are not strictly beholden to shareholders—who hold their shares for an average of four months—and share prices. When executives and directors are free to consider a range of stakeholders—workers, suppliers, creditors, customers, shareholders, and the community in which they’re based—in managing a company, it inherently changes their time horizon from the next quarter to the next decade or quarter-century and beyond, because most of these stakeholders have deeper investments in the company.

The next steps in controlling CEO pay fall into two categories. The first should involve reconsidering and reversing the failed practices that were the result of shareholder primacy. The second would begin to advance the vision of the stakeholder corporation.

The most obvious priority is to close the performance-pay loophole and stop subsidizing pay practices that encourage CEOs to behave like financial speculators. Last year, Democratic Senators Richard Blumenthal and Jack Reed, with Congressman Lloyd Doggett of Texas, introduced the Stop Subsidizing Multimillion Dollar Corporate Bonuses Act, which would cap the deductibility of compensation at $1 million, as Clinton had originally proposed, regardless of the form that compensation takes. The legislation also broadens the reach of Section 162(m) by applying it not just to public companies but to all companies that file quarterly reports with the SEC. It would also no longer be limited to CEOs and the three highest-paid executives in a company; it would apply to any employee earning more than $1 million.

UMass’s Lazonick proposes stronger regulation of stock buybacks. The current SEC rule, he argues, “has given top executives license to use buybacks to manipulate the market.” He also suggests that the SEC rescind its current rule and “conduct a Special Study, on the scale of its 1963 study of securities markets that resulted in the creation of NASDAQ, of the possible damage that open-market repurchases have done to the U.S. economy over the past three decades.”

One small and familiar step, endorsed even by shareholder-primacy advocates, would be to move toward more independent boards of directors by reducing the power of the CEO in the nominating committee. Another option, still based on traditional assumptions, would be for companies to pay their executives for performance only after the fact, with performance measured by what Edward D. Hess of the University of Virginia’s Darden School of Business, author of the 2010 book Smart Growth, calls “authentic earnings.” Hess defines “non-authentic earnings” as “numbers manufactured creatively by accountants and investment bankers.” Authentic earnings, based solely on real transactions with real customers, provide a broader and more accurate picture of a company’s productive capacity, engagement with new markets, and technological innovation than share price does. Hess also challenges the assumption that a successful company’s earnings growth must be continuous and linear; successful companies might not always be growing. It would be complex, but not impossible, to structure tax incentives for CEO pay based on the measures Hess identifies.

But it’s necessary to go well beyond these steps, which don’t challenge the assumptions that led to the 1993 reform. Yes, we need to reform corporate boards, but let’s do it by following the successful German model and creating a place for workers at the board table. Employee board-level representation is a core part of Germany’s corporate “dual structure”: a management board for day-to-day functions and a supervisory board for more high-level decisions, akin to U.S. boards. Depending on a company’s number of employees, up to half of the supervisory board members are employee representatives rather than shareholders.

And yes, we need to redefine performance pay, but let’s reward companies and CEOs that not only keep executive pay down but increase the well-being of all those connected to the corporation. One smart, still-theoretical proposal would adjust the corporate tax rate based on the ratio of CEO pay to the average pay for workers in the company. At the moment, this is difficult to implement, or even to study, because the data on average pay is invisible or unreliable. In some cases, it should include employees of firms, often offshore, that contract solely with the parent company, and it might have to be adjusted based on industry sector—for example, a firm like Apple, where the average employee might be an engineer, will look much better than a firm like Costco, even though Costco pays very well for its sector. But the reporting provision of Dodd-Frank, if implemented effectively, could provide the data needed to develop a policy that would push against inequality in both directions.

Beyond policy efforts, we need to change our cultural understanding of what corporations are for. It’s highly ironic that one of the most articulate critiques of shareholder primacy was delivered by one of its most grandiose beneficiaries: Jack Welch of General Electric. After years as one of the best-paid celebrity CEOs, and after taking a retirement package worth $417 million, including tax-free perks such as club memberships and the use of private aircraft, Welch told the Financial Times in 2009 that the doctrine of shareholder primacy was “the dumbest idea in the world,” and added: “Shareholder value is a result, not a strategy…your main constituencies are your employees, your customers, and your products. Managers and investors should not set share price increases as their overarching goal…. Short-term profits should be allied with an increase in the long-term value of a company.” A few days later, Welch backtracked, but his words make a biting case against the doctrine on which he built his career and reputation.

When even Jack Welch can see that Milton Friedman’s doctrine was no eternal rule, but one economist’s theory with no basis in law, then business schools, economics departments, and financial journalists should be able to do the same. If they can train students, including future CEOs, how to think creatively about the challenges corporations face in building viable businesses that meet their obligations to all their stakeholders, then even if CEOs continue to be well-paid professionals—although not at today’s stratospheric levels—at least they will be paid for helping their companies and communities become better off.

The Milton Friedman Doctrine Is Wrong. Here’s How to Rethink the Corporation.


The Fed Has Whiffed Again—–Massive Monetary Stimulus Has Not Helped Labor, Part 2

On June 12, 2016, David Stockman writes on Contra Corner:

In Part 1 we established the rather obvious point that in today’s world of flexible just-in-time production, hours-based labor scheduling and gig-based employment patterns, there is really no such standardized labor unit as a “job”.

Accordingly, the headcount-centered metrics of the BLS, such as the U-3 unemployment rate and the nonfarm payroll numbers, are relics of the world of a half-century ago: mines, factories, warehouses and retail shops where a 40-plus-hour workweek on a year-round basis was the standard practice.

In that context, a simple paint-by-the-numbers exercise demonstrates the foolishness of the Fed’s obsession with hitting a quantitative “full employment” target. Since the latter entails gunning the financial markets with monetary “stimulus” until every last iota of “slack” has been drained from the labor market, the question answers itself when viewed in an hours-based framework.

To wit, the US working age population between 16 and 65 totals 205 million, meaning that on a standard work year basis of 2000 hours, the potential labor force amounts to 410 billion hours. However, according to the BLS’ own data, only 230 billion labor hours are currently being utilized by the US economy from that potential hours pool.

So, all things being equal, the hours-based unemployment rate is actually 44%!
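The arithmetic behind that figure can be checked in a few lines of Python; every input is a number quoted in the text, and nothing else is assumed:

```python
# Sanity check of the hours-based "unemployment rate" from the text:
# 205 million people aged 16-65, a 2,000-hour standard work year, and
# 230 billion labor hours actually employed per the BLS.
working_age_pop = 205e6
standard_work_year = 2000                 # hours per year
potential_hours = working_age_pop * standard_work_year
employed_hours = 230e9

gap = potential_hours - employed_hours    # unutilized potential hours
rate = gap / potential_hours              # hours-based "unemployment rate"

print(f"potential hours: {potential_hours / 1e9:.0f} billion")
print(f"hours gap:       {gap / 1e9:.0f} billion")
print(f"hours-based rate: {rate:.0%}")    # 180/410 rounds to 44%
```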

The point, of course, is that virtually everything which impacts the 180 billion hours gap between potential and actual hours employed is beyond the reach of monetary policy. For instance, about 18 billion hours are removed from productive employment by social security disability recipients and 40 billion potential labor hours are unavailable owing to young adults enrolled in higher education.

Yet neither of these represents an unchanging “natural” rate of unavailable labor supply. In fact, both are heavily impacted by public policies originating outside of the central bank, and both can change significantly over modest periods of time.

For instance, the ratio of disabled workers to the population aged 16-65 rose from 2.82% in 2000 to 4.34% at present. That gain is primarily due to the relaxation of eligibility standards for qualification in such areas as “back pain” and bureaucratic drift toward higher rates of favorable case determinations.

Thus, at the 2000 disability ratio of 2.82% there would currently be 5.8 million workers on the rolls, or 11.5 billion unavailable labor hours. That compares to the actual level of 9 million workers on disability and 18 billion unavailable hours.

Needless to say, in the scheme of things the 6.5 billion hours lost to higher disability rates is not a trivial difference. It represents the equivalent of 3.7 million nonfarm payroll jobs. That’s more new jobs than have been celebrated on Jobs Friday for the last 18 months running.
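The disability counterfactual can be replayed the same way. One flagged assumption: the text does not say how it converts lost hours into payroll-job equivalents, so an average nonfarm work year of roughly 1,750 hours (about 34 hours per week) is assumed here, since it reproduces the 3.7 million figure:

```python
# Replaying the disability counterfactual. The 2.82% and 4.34% ratios,
# the 205 million working-age population, and the 2,000-hour standard
# year come from the text; the ~1,750-hour average nonfarm work year
# used to convert hours into "payroll-job equivalents" is an assumption,
# since the text does not state its conversion factor.
working_age_pop = 205e6
work_year = 2000                          # standard hours, per the text

workers_then = working_age_pop * 0.0282   # rolls at the 2000 ratio
hours_then = workers_then * work_year     # ~11.5 billion unavailable hours
hours_now = 9e6 * work_year               # 18 billion at today's 9 million

extra_hours = hours_now - hours_then      # roughly 6.5 billion hours lost
avg_work_year = 1750                      # assumed average nonfarm work year
job_equivalents = extra_hours / avg_work_year

print(f"workers at 2000 ratio:   {workers_then / 1e6:.1f} million")
print(f"extra hours lost:        {extra_hours / 1e9:.1f} billion")
print(f"payroll-job equivalents: {job_equivalents / 1e6:.1f} million")
```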

The story is similar with the 40 billion labor hours not available owing to the 20 million students enrolled in higher education. In this case, the enrollment rate for the prime student age population (18 to 24 years) has risen from 35.5% in 2000 to about 40.5% at present.

Yet it is surely the case that the liberalization of the Pell Grant program and the eruption of student debt outstanding from about $150 billion to $1.3 trillion during the last 15 years has had a powerful impact on that gain. Accordingly, a reasonable estimate is that the massive ratcheting-up of state aid for higher education has caused a minimum of 4 billion labor hours to exit the jobs market.

Moreover, that raises the question of the appropriateness and efficacy of the underlying public policy bias in favor of massive state support for four-year higher education. Arguably, one-third or more of college students would be better served by on-the-job vocational training.

Accordingly, at the very time the central bank is operating its primitive “stimulus” tools in overdrive in order to push labor utilization higher, a countervailing set of state policies has the effect of pulling upwards of 15 billion labor hours out of the labor market.

Indeed, the impossibility of defining the “potential” labor supply at any given moment in time, let alone causing it to be fully employed through the primitive instruments of interest rate pegging and yield curve repression (i.e. quantitative easing), is illustrated in this context by a simple counterfactual.

Currently, student loan disbursements amount to $100 billion per year, Pell and related grants total about $40 billion and college work-study programs amount to about $4 billion (including state and college matching).

Just assume half of these grants and loans—or $70 billion per year—-were accompanied by a work requirement similar to the traditional work-study program (@20 hours per week). Presto, the potential labor supply would enlarge by upwards of 5 billion hours (500 hours per school year per student).
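The counterfactual is simple enough to sketch. The 20-hour week comes from the text; the roughly 25-week school year (which yields the text’s 500 hours per student) and the roughly 10 million students covered (half of the 20 million enrolled) are inferences, flagged as such:

```python
# The work-requirement counterfactual. The 20-hour week comes from the
# text; the ~25-week school year (which yields the text's 500 hours per
# student per year) and the ~10 million students covered (half of the
# roughly 20 million enrolled) are inferred assumptions.
hours_per_week = 20
school_weeks = 25                         # assumed, to match 500 hours/year
hours_per_student = hours_per_week * school_weeks
students_covered = 10e6                   # assumed: half of ~20 million

added_supply = students_covered * hours_per_student
print(f"hours per student:  {hours_per_student}")
print(f"added labor supply: {added_supply / 1e9:.0f} billion hours")
```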

Beyond the impossibility of defining the potential labor supply and the complex multitude of factors which drive utilization rates, there is also the fact that in many instances it is none of the state’s business in the first place.

For example, there are 36.5 million women with children under 18 in the US today. Approximately 24.2 million or 66% of them are employed in the monetary economy and are counted in the labor force by the BLS.

At the same time, 12.3 million have chosen to stay at home and work in the unmonetized household economy, raising their children and keeping house. Self-evidently, cultural values and personal considerations far more than economics account for the removal of these 25 billion potential labor hours from the labor market.

More importantly, the impact of these largely private choices on the size of the potential labor force has varied substantially over time. A few decades ago, stay-at-home moms accounted for 40 billion labor hours which were unavailable to the monetary economy—and which were therefore invisible to the headcount estimators at the BLS and the full-employment gunslingers at the Fed.

On the other end of the scale, there are also numerous examples of improper state interventions which powerfully impact the demand for labor hours offered by potential workers.

The current trend toward sharp increases in state and city minimum wages, for example, is reducing the demand for labor hours, and accelerating the substitution of automation and robots for low wage labor. When fully implemented, these misguided intrusions in the wage setting process will easily reduce demand for low-skill labor by billions of hours per year.

At the end of the day, therefore, the labor hours utilization rate is an outcome, not a proper or viable target of monetary policy or any other intervention of the state.

Instead, it is the happenstance result of the unfathomable interactions of taxes, welfare, trade, economic regulation, cultural preferences, demographics and the underlying efficiency and entrepreneurial dynamics (or lack thereof) of the market economy.

While the Fed claims that the Humphrey-Hawkins Act makes them pursue the impossibility of full employment labor utilization, that is specious nonsense. The statute is purely aspirational and content free on the quantitative measurement of “maximum employment”.

In fact, draining the labor slack from the bathtub of full employment GDP is just a pretext for what really motivates present day monetary policy. Namely, the Keynesian presumption that the business cycle is inherently unstable without the ministrations of the state, and that the tools of monetary policy need be deployed on a continuous and aggressive basis to prevent capitalism from lapsing into underperformance, slumps, recessions and worse.

That predicate, of course, is dead wrong. Every one of the 10 business cycle contractions since WWII has been caused by state action.

Two of these were owing to the sudden cooling of the economy after a war spending mobilization, as in the case of the recession after the Korean War in 1953 and the 1970 recession after the drawdown in Vietnam. The others were caused by the bursting of a central bank fueled credit bubble.

The deep recession of 1974-1975, for instance, was caused by the prior runaway growth of bank credit enabled by the money printing policies of Arthur Burns. As I detailed in The Great Deformation, during the 1972-1973 peak of monetary ease designed to put Nixon back in the White House, US bank credit erupted at a 30% annual rate.

At length the Fed was forced to throw on the brakes to prevent an inflationary blow-off—even after the US economy was throttled by Nixon’s wage and price control apparatus. By early 1975 the rate of bank credit growth had slowed to less than 3%, thereby generating a sharp curtailment of credit fueled household and business spending.

But that wasn’t evidence of capitalism’s inherent cyclical instability; it was proof of the folly of activist central banking under the post-Camp David regime of unanchored fiat money.

In any event, even the furtive efforts of the Fed to manage the credit cycle prior to the Greenspan era are no longer plausible. That’s because the crude instrument of manipulating the Federal funds rate to induce households and businesses to borrow and spend more than they would otherwise doesn’t work under the post-2007 condition of Peak Debt.

As we keep insisting, the major portion of Fed “stimulus” prior to the great financial crisis was actually nothing more profound than the ratcheting up of household leverage ratios to more than triple their stable pre-1980 levels. Over the period from 1970 to 2008, in fact, the Fed induced the nation’s households to undergo a cheap money driven LBO.

Household Leverage Ratio

As is evident in the above graph, we are now in the payback phase of the great credit supercycle. Accordingly, the credit channel of monetary stimulus is blocked, over and done.

The Fed’s massive injections of liquidity into the financial system, therefore, never leave the canyons of Wall Street. They neither stimulate main street spending and output nor push consumer prices higher toward the Fed’s arbitrary 2% target.

Instead, in the name of full employment labor utilization and filling the Keynesian bathtub of potential GDP full to the brim, 90 months of ZIRP and $3.5 trillion of balance sheet expansion via QE have simply inflated financial asset prices to the nosebleed section of history relative to income and growth. The Fed has thereby generated still another giant financial bubble that will inexorably collapse on its own weight.

That day is coming soon and will come as another shocking surprise to both the denizens of the Eccles Building and the gamblers still left in the Wall Street casino. Owing to their mutual presumption that the Fed’s lunatic policies have actually worked and that something akin to the nirvana of Keynesian full employment is close at hand, they are utterly blind to the facts of approaching recession.

Yet when it becomes no longer deniable, the market will panic because the god of central bank stimulus will have self-evidently failed once again. In the meanwhile, the level of incredulity among the Kool-Aid drinkers becomes ever more remarkable.

Even the BLS establishment survey—a lagging indicator that is deeply flawed—indicates that the US economy is cooling at a rapid rate. Indeed, the rate of gain during the last three months has slowed to 115,000.

Monthly Change In Payrolls

But there is even more. The Fed itself has developed a composite labor market conditions index which combines data from approximately 19 series. During the last cycle, this index began turning down in January 2007. That was 12 months before the recession officially began, 14 months before the level of jobs in the nonfarm payroll report began to fall, and 20 months before the Wall Street meltdown in September 2008.

Labor Market Conditions Change Led 2007-09 Recession

The fact is, the same pattern has materialized once again. The labor market conditions index began to roll over more than two years ago. And after stabilizing during mid-2015, it has resumed its downward course—just as the rate of monthly gain in the establishment survey has begun to decisively weaken.

Labor Market Momentum Weakening Again

As we have frequently noted, there is one indicator of business conditions and the labor market situation that suffers no distortion owing to statistical artifacts like the BLS’ faulty seasonal maladjustments and its obsolete birth-death model. To wit, the daily payroll withholding collections of the US Treasury Department.

As shown in the graph below, the smoothed three-month rolling average rate of gain has slowed sharply. After rising by nearly 10% over prior year in early 2014, the year-over-year rate of gain declined to the 4-5% range last year, and is now increasing at only 2.8%.

Given that hourly wage gains are running in the 2.0-2.5% range, the implied rate of growth in actual labor hours employed has slumped close to zero. That is, employers are sending in withholding payments from an economy that is operating at stall speed, at best.

Withholding Tax Collections Show Plunging Growth

Needless to say, the Fed has pumped the third financial bubble of this century with even more reckless abandon than it did during the dotcom boom and the housing boom.

For the time being, that has led to a massive $40 trillion gain in the value of financial assets held by US households—85% of which is attributable to the top 10% of households.

The same cannot be said for labor—-notwithstanding the Fed’s preoccupation with full-employment. In fact, there were nearly 2 million fewer full-time, full-pay “breadwinner jobs” in the US during May than there were in January 2001.

Breadwinner Economy

As we said in the beginning of Part 1, there is a huge irony here. Wealthy households—and especially Wall Street speculators and the 1%—have never had it so good. And it’s all owing to the opening that two left/labor politicians gave to power-aggrandizing central bankers nearly four decades ago.

Hubert Humphrey and Augustus Hawkins are surely rolling in their graves.


The Fed Has Whiffed Again — Massive Monetary Stimulus Has Not Helped Labor, Part 1

On June 10, 2016, David Stockman writes on Contra Corner:

There is a deep irony embedded in the Fed’s savage assault on savers and its delusional doctrine of interest rate repression. While this actually results in monumental windfalls to speculators and the one percent, it’s all justified in the name of boosting the labor market and the wage bill.

So the chart in Jeff Snider’s nearby post is especially salient. It shows that all this money printing has been for naught. Notwithstanding the 9X eruption of the Fed’s balance sheet from $500 billion at the turn of the century to $4.5 trillion today, growth in the most basic measure of labor input—total hours worked—has come to a grinding halt.

Compared to labor hours growth of nearly 3% annualized during the Reagan expansion of the 1980s and nearly 2% during the start-and-stop stagflationary economy of the 1970s, labor hours growth over the two boom-and-bust cycles since the Fed went full frontal on money printing in December 2000 has averaged just 0.15% per annum.

Stated in aggregate terms, during the 10-year expansion between 1980 and 1990, labor hours employed in the US economy grew by 23.5%. By contrast, during the last 15 years combined, labor hours employed have risen by only 2.3%.

Total Hours Worked by Cycle

Needless to say, this dismal outcome is not for want of potential labor supply. At the turn of the century, the civilian population aged 16 to 65 years was 177 million. That number has since grown to 205 million, meaning that the potential labor pool grew by 16% or nearly 7X faster than hours employed.

These figures also stick a fork in the Fed’s blind fixation on the U-3 unemployment rate and the nonfarm payroll numbers. Both are relics of the world of a half-century ago: mines, factories, warehouses and retail shops based on a 40-plus-hour workweek on a year-round basis.

By contrast, in today’s world of flexible just-in-time production, hours-based labor scheduling and gig-based employment patterns, there is really no such standardized labor unit as a “job”.

Likewise, the BLS conventions for counting as “employed” anyone on a payroll for even a few hours per week, and omitting from the labor force denominator tens of millions of potential workers not actively looking for jobs at the moment of the surveys, mean that its headline series are essentially noise.

Most certainly they do not validly measure economic “slack” in the labor force and therefore the degree to which the Keynesian bathtub of “potential GDP” is less than filled to the brim.

The silliness of the Fed’s targets is underscored by the graph below, which shows that the nonfarm economy is now employing only 15 billion more labor hours than it did in the year 2000. By contrast, assuming a standard work year of 2,000 hours, the 28 million increase in the population aged 16-65 was theoretically capable of producing 56 billion more labor hours.

So only 27% of those potential hours were actually absorbed by the nonfarm economy.
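The absorption figure follows directly from the numbers in these two paragraphs, with no additional assumptions:

```python
# The absorption arithmetic: 28 million more people aged 16-65 since
# 2000 at a 2,000-hour standard work year, against 15 billion added
# nonfarm labor hours. All figures are from the text.
pop_growth = 28e6
standard_work_year = 2000
potential_new_hours = pop_growth * standard_work_year   # 56 billion
actual_new_hours = 15e9

absorption = actual_new_hours / potential_new_hours
print(f"potential new hours: {potential_new_hours / 1e9:.0f} billion")
print(f"share absorbed: {absorption:.0%}")               # 15/56 rounds to 27%
```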

Likewise, as the jobs mix in the payroll report has shifted increasingly to what we have called the Part-Time economy of jobs in bars, restaurants, hotels, recreational venues, retail stores, temp agencies and household services, it has become self evident that the job count of 40 million workers in these categories has nothing to do with economic “slack” in the work force.

That’s because “jobs” in these categories average only about 27 hours per week of paid employment. On an annualized basis that’s a 1,400-hour work year, meaning the Part-Time Economy depicted below generates about 56 billion labor hours annually. Yet a standard work year for these workers would amount to 80 billion labor hours.

The point is that without any change in the BLS headline numbers with respect to the U-3 unemployment rate or any increase in the nonfarm payroll total, there are 24 billion unutilized labor hours in this segment alone that could be supplied to the US economy.

Now that’s “slack” and then some. It is virtually inconceivable that the tens of millions of hand-to-mouth workers in these jobs would not take an additional 5 or even 10 hours per week if offered at current wage rates. And that represents 10 to 20 billion additional labor hours.
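The Part-Time Economy slack arithmetic checks out using only the figures in the text, plus one flagged assumption: a 50-week year for annualizing the extra weekly hours, which reproduces the 10-20 billion range:

```python
# Part-Time Economy slack: 40 million workers at the text's rounded
# 1,400-hour work year (about 27 paid hours/week) versus a 2,000-hour
# standard year. The 50-week year used to annualize the extra 5-10
# weekly hours is an assumption.
workers = 40e6
actual_work_year = 1400                   # per the text (27 h/week, rounded)
standard_work_year = 2000

actual_hours = workers * actual_work_year        # ~56 billion
standard_hours = workers * standard_work_year    # 80 billion
slack = standard_hours - actual_hours            # 24 billion unutilized hours

weeks = 50                                # assumed working weeks per year
extra_low = workers * 5 * weeks           # +5 h/week -> 10 billion hours
extra_high = workers * 10 * weeks         # +10 h/week -> 20 billion hours

print(f"slack: {slack / 1e9:.0f} billion hours")
print(f"extra supply: {extra_low / 1e9:.0f}-{extra_high / 1e9:.0f} billion")
```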

Part Time Economy

So the headcount-based metrics of the BLS are useless for establishing monetary policy targets; and the occupants of the Eccles Building couldn’t do much about achieving a better, more modern hours-based target, anyway.

That’s because the true labor utilization rate and the actual amount of “slack” in the US economy are driven by dozens of structural factors that the Fed has no ability to impact. Many of these are supply-side factors, such as the doubling of the disability rolls during that period and the explosion of student loans and grants, which took billions of potential labor hours out of the economy.

Needless to say, these forces had nothing to do with that imaginary ether called “aggregate demand” by the Keynesians, nor could they be reversed by ultra-low interest rates.

At the same time, the off-shoring of manufacturing labor due to the China Price and back office service labor due to the India Price did reduce the demand for domestic labor by tens of billions of hours. But that was due to high US costs and labor rates, not high interest rates or anything else the Fed could impact.

Likewise, the decision of some spouses to work in the unmonetized economy rearing children and maintaining households is a function of culture and demographics, not interest rates. And it varies over time in ways that the central bank cannot anticipate, measure or impact.

In all, less than 60% of the 410 billion potentially available labor hours among the 16-65 year old population is employed in the monetary economy. And when you adjust for the more than 14 billion hours supplied by the over-65 population, which is included in the graph above, only 56% of available labor hours among the working age population are employed.

The point is that whether the hours-based “employment rate” of the non-retired adult population should be 50% or 70% rather than the current 56% level depends on a plenitude of factors which totally overwhelm monetary policy.

For instance, shifting the current $1 trillion of annual payroll tax levies to a consumption tax would bring billions of additional labor hours into the US economy by changing the incentives of employers and employees alike.

By sharply reducing employment costs in sectors which compete with off-shored goods and services, it would increase the demand for domestic labor hours; and on the supply side it would improve the trade-off between work and welfare for low-wage workers.

By the same token, on the margin, the current trend toward sharp increases in state and city minimum wages is reducing the demand for labor hours, and accelerating the substitution of automation and robots for low wage labor.

At the end of the day, there is no accurate way to measure full employment in an hours-based economy, nor is it an especially appropriate target for public policy.

In fact, the labor hours utilization rate is an outcome. It is the happenstance result of the unfathomable interactions of taxes, welfare, trade, economic regulation, cultural preferences, demographics and the underlying efficiency and entrepreneurial dynamics (or lack thereof) of the market economy.

Even then, the Fed’s claim that the Humphrey-Hawkins Act makes it do so is nonsense. The statute is purely aspirational and content-free on the matter of “maximum employment”.

The fact that the Fed even bothers with a U-3 target in the range of 5% is purely ritualistic and vestigial. It is self-evidently meaningless as a measure of economic “slack” in today’s globalized and sliced and diced labor markets. And the matter of whether the instruments of the state should be used to encourage citizens to study or travel rather than work, or to take disability payments rather than supply productive labor, is a matter for Congress to decide, not our unelected monetary politburo.

Forget JM Keynes, Paul Samuelson and James Tobin. The fact that the U-3 unemployment rate was in fashion among the Keynesian economists of the day when Humphrey-Hawkins was enacted in 1978 is little more than a historical curiosity.

That their primitive and outmoded economic model supplied the narrative and authority for the Act’s so-called “dual mandate” simply proves a variation of Keynes’ famous observation. Namely, that the purportedly practical men and women who run the Fed “are merely slaves of some defunct economist”.

In fact, today’s Fed could easily jettison its mechanical, misleading and obsolete U-3 based unemployment target and most of the other jobs series of the BLS. It could even be truthful and admit that 95% of what drives the real labor metric, the labor hours utilization rate, is beyond the reach of monetary policy; and that much of what determines how much “slack” we have, like early retirement, late retirement, homemaking and child care, and college enrollment, is not even the business of government.

Needless to say, that would essentially put it out of the macroeconomic management business and dramatically reduce its rationale for massive intrusion in the money and capital markets. Indeed, it would imply that its real job is to provide back-up liquidity to the banking system at a penalty spread above market-determined interest rates and yield curves.

It just so happens that this was the original mission assigned to it by its author, Congressman Carter Glass, in the 1913 Act which created the Fed.

The Fed that Carter Glass envisioned only needed to recognize good collateral when posted by member banks seeking liquidity loans. It did not need to know what “full employment” is in the context of 410 billion potential labor hours or whether there was too much or too little “slack” or whether the bathtub of potential GDP was filled to the brim or not.


Our Neoliberal Nightmare: Hillary Clinton, Donald Trump And Why The Wealthy Win Every Time


Hillary Clinton speaks in Athens, Ohio on May 3, 2016. (Photo: Sarah Hina; Edited: LW / TO)

On June 10, 2016, Anis Shivani writes on Salon:




A table from the groundbreaking NELP report.

On May 5, 2016, Paul Constant writes on Civic Ventures:

You’ve heard opponents argue again and again that raising the minimum wage will kill jobs. (And you’ve seen us refute their claims at every turn.) If you increase labor costs for businesses, they claim, employers will have to either raise prices or lay off employees. They discuss minimum-wage increases in such simple language that it’s hard to argue with them: wages go up, employment goes down. A child could understand that, right?

Unfortunately, simplicity doesn’t always equal truth. For most of the history of humanity, we believed through our uninformed observations that the sun circled the earth, and that misconception colored our understanding of the way the universe works. These charges that raising the minimum wage will kill jobs are just as untrue as the belief that the earth is the center of the universe.

Luckily, we now have proof that raising the minimum wage doesn’t kill jobs, in the form of a new report from the National Employment Law Project. Titled “Raise Wages, Kill Jobs? Seven Decades of Historical Data Find No Correlation Between Minimum Wage Increases and Employment Levels,” authors Paul K. Sonn and Yannet M. Lathrop, using data collected with help from T. William Lester, PhD, lay out every federal minimum-wage increase in America since 1938, making “simple before-and-after comparisons of change 12 months after each minimum-wage increase.” This seems basic enough, and it’s fairly remarkable that nobody has ever thought to do it before.

So what did they find using these methods? Out of 22 changes in the federal minimum wage since 1938, “in the substantial majority of instances (68 percent) overall employment increased after a federal minimum-wage increase.” This means that one year after the national minimum wage went up, employment was up 15 out of 22 times. Of the remaining decreases, the vast majority occurred just after or during recessions, when employment always decreases, no matter what the minimum wage is.
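The headline percentage is straightforward to verify from the report’s tally:

```python
# Verifying the NELP tally: employment rose in 15 of the 22 federal
# minimum-wage increases since 1938.
increases, total_hikes = 15, 22
share = increases / total_hikes
print(f"Employment rose after {share:.0%} of federal minimum-wage increases")
# → Employment rose after 68% of federal minimum-wage increases
```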

And in those industries most affected by minimum wage hikes—restaurants, hotels and other leisure and hospitality fields—employment went up more than 80 percent of the time, while retail employment increased nearly three-quarters of the time. This is significant because it is precisely these service-sector jobs that minimum-wage opponents generally argue they are trying to save by keeping wages low. The fact that these industries, on average, do better than normal when the minimum wage goes up is a game-changing discovery. Ultimately, Sonn and Lathrop write, the report provides “simple confirmation that opponents’ perennial predictions of job losses when minimum-wage increases are proposed are rooted in ideology, not evidence.”

The case for raising the minimum wage has always been that it transforms low-wage workers into consumers, and the money those workers earn is immediately spent in the local economy. When people who work in restaurants can actually afford to eat in restaurants, everybody does better. Until now, both sides of the minimum wage “debate” have had their theories. This report finally proves that the more inclusive theory—the one that favors raising the minimum wage in order to empower more Americans as consumers—is the correct one. We’ll always be plagued by equivocators and people who just straight up do not want to pay their workers a living wage. But this new paper is a breakthrough, because it puts a stake through the heart of the most basic claims of minimum-wage opponents.

The next time someone tries to tell you that raising the minimum wage will cost jobs, you can do more than just spout theories at each other. You can show this study to minimum-wage doubters and demonstrate exactly why raising the minimum wage is good for everyone, from business owners to workers. Finally, after years of argument and the endless shouting of bumper-sticker slogans, we have proof: as surely as the earth orbits the sun, raising the minimum wage strengthens, not weakens, the local economy.

Using common sense, if raising the minimum wage will not kill jobs, then why not raise the minimum wage to $25.00 or $50.00 or $100.00 per hour? Of course there are consequences, which are reflected either in job elimination or in increased prices. Virtually never are the OWNERS of corporations willing to reduce profits. If wage levels were not a factor, there would be no reason for ANY company to exit production in the United States and move production to foreign lands with significantly lower labor costs. There is also the impact on pricing levels, as any increase in the cost of production or service always results in price increases.

If this were not the case, then no companies would be compelled to seek other, more cost-efficient means of production or to move production to foreign countries whose workers are paid far less than Americans. Increasingly, companies are seeking the greater efficiency and lower long-term costs that non-human technology can deliver to reduce their operating costs, provide higher build quality, automate service, and maximize profits for their OWNERS. As is virtually always the case, the OWNERS of companies do not want to reduce profits.

What the proponents of raising the minimum wage are fundamentally addressing is that low-paid American workers need to earn more income.


Two Thirds Of All Americans Would Struggle To Cover A $1,000 Crisis

On May 25, 2016, Thom Hartmann writes on The Thom Hartmann Program:

A new poll shows that two-thirds of all Americans would struggle to cover a $1,000 crisis.

According to this poll by the NORC Center for Public Affairs Research, this type of crisis spans all incomes.

Three-quarters of people in households making less than $50,000 a year and two-thirds of those making between $50,000 and $100,000 would have trouble coming up with $1,000 to cover an unexpected bill.

Even for the country’s wealthiest 20 percent — households making more than $100,000 a year — 38 percent say they would have at least some difficulty coming up with $1,000.

William R. Emmons, a senior economic adviser at the Center for Household Financial Stability at the Federal Reserve Bank of St. Louis, said, “Many families are still struggling with debt from the housing bubble and borrowing boom. And the recent economic stresses make it much more likely families are going to be fighting basic financial issues.”

Face it, the vast majority of Americans suffer from a serious lack of income to cover basic day-to-day, week-to-week and month-to-month living expenses, and have little or no savings to cover emergency expenses and provide retirement security.

We need to reform the system to empower EVERY child, woman, and man to secure new income sources, namely wealth-creating, income-producing capital assets embodied in viable growth corporations. This can be accomplished by empowering EVERY child, woman and man to acquire personal OWNERSHIP stakes in the FUTURE formation of wealth-creating, income-producing capital assets using INSURED, INTEREST-FREE capital credit, repayable out of the FUTURE earnings of the investments, without the requirement of PAST SAVINGS, a job or any income from any source. Instead, the solution is viewed solely in terms of raising the minimum wage, which continues the serfdom status of the vast majority of Americans now employed in minimum-wage sectors of the economy. A boost in the minimum wage is not the solution to economic inequality. Widespread, universal personal wealth-creating, income-producing capital asset property OWNERSHIP is, and is the true path to inclusive prosperity, inclusive opportunity, and inclusive economic justice.

Support the Agenda of The Just Third Way Movement at,, and

Support Monetary Justice at

Support the Capital Homestead Act (aka Economic Democracy Act) at,, and

See my article entitled “The Solution To America’s Economic Decline” at