The Solution To Our Education Crisis Might Be AI

On December 11, 2017, Kristin Houser writes on Futurism:

Tomorrow’s Teachers

Robots will replace teachers by 2027.

That’s the bold claim that Anthony Seldon, a British education expert, made at the British Science Festival in September.

Seldon may be the first to set such a specific deadline for the automation of education, but he’s not the first to note technology’s potential to replace human workers. Whether the “robots” take the form of artificially intelligent (AI) software programs or humanoid machines, research suggests that technology is poised to automate a huge proportion of jobs worldwide, disrupting the global economy and leaving millions unemployed.

But just which jobs are on the chopping block is still a subject of debate.

Some experts have suggested that autonomous systems will replace us in jobs for which humans are unsuited anyway — those that are dull, dirty, and dangerous. That’s already happening. Robots clean nuclear disaster sites and work construction jobs. Desk jobs aren’t immune to the robot takeover, however — machines are replacing finance experts, outperforming doctors, and competing with advertising masterminds.

The unique demands placed on primary and secondary school teachers make this position different from many other jobs at risk of automation. Students all learn differently, and a good teacher must attempt to deliver lessons in a way that resonates with every child in the classroom. Some students may have behavioral or psychological problems that inhibit or complicate that process. Others may have parents who are too involved, or not involved enough, in their education. Effective teachers must be able to navigate these many hurdles while satisfying often-changing curriculum requirements.

In short, the job demands that teachers have nearly superhuman levels of empathy, grit, and organization. Creating robotic teachers that can meet all these demands might be challenging, but in the end, could these AI-enhanced entities solve our most pervasive and systemic issues in education?

Room For Improvement

In 2015, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) adopted the 2030 Agenda for Sustainable Development, a plan for eliminating poverty through sustainable development. One goal listed on the agenda is to ensure everyone in the world has equal access to a quality education. Specific targets include completely free primary and secondary education, access to updated education facilities, and instruction from qualified teachers.

Some nations will have a tougher time meeting these goals than others. As of 2014, roughly nine percent of primary school-aged children (ages 5 to 11) weren’t in school, according to a UNESCO report. For lower secondary school-aged children (ages 12 to 14), that percentage jumps to 16 percent. More than 70 percent of out-of-school children live in Southern Asia and Sub-Saharan Africa. In the latter region, a majority of the schools aren’t equipped with electricity or potable water, and depending on the grade level, between 26 and 56 percent of teachers aren’t properly trained.

To meet UNESCO’s target of equal access to quality education, the world needs a lot more qualified teachers. The organization reports that we must add 20.1 million primary and secondary school teachers to the workforce, while also finding replacements for the 48.6 million expected to leave in the next 13 years due to retirement, the end of a temporary contract, or the desire to pursue a different profession with better pay or better working conditions.

That’s…a lot of teachers. So it’s easy to see the appeal of using a robotic teacher to fill these gaps. Sure, it takes a lot of time and money to automate an entire profession. But after the initial development costs, administrators wouldn’t need to worry about paying digital teachers. This saved money could then be used to pay for the needed updates to education facilities or other costs associated with providing all youth with a free education.

Digital teachers wouldn’t need days off and would never be late for work. Administrators could upload any changes to curricula across an entire fleet of AI instructors, and the systems would never make mistakes. If programmed correctly, they also wouldn’t show any biases toward students based on gender, race, socio-economic status, personality, or other characteristics.

But we still have some ways to go before such instructors enter our classrooms.

Education systems are “only as good as the teachers who provide the hands-on schooling,” UNESCO claims, and today’s robots simply can’t match human teachers in the quality of education they provide to students. In fact, they won’t be able to for at least the next decade, Rose Luckin, a professor at the University College London Knowledge Lab, a research center focused on how digital media can transform education, told Futurism. Teachers rely heavily on social interaction to support their students and figure out what they need, Luckin continued, and so far no digital system can compete with a human in this realm.

However, it is possible that no robot will ever be good enough to replace teachers completely. “I do not believe that any robot can fulfill the wide range of tasks that a human teacher completes on a daily basis, nor do I believe that any robot will develop the vast repertoire of skills and abilities that a human teacher possesses,” Luckin said.

There is some weight to Luckin’s assertions. While machines can handle a variety of specific tasks, we haven’t yet come close to creating artificial general intelligence (AGI) — the kind of machine that could answer the tough questions outside the purview of the immediate lesson that good teachers should be prepared to tackle. Today’s robots also lack the empathy and ability to inspire that teachers bring to the classroom.

That doesn’t mean robots won’t replace teachers, though. Very few studies directly compare human and robot teachers, so it’s not clear how much better the human performs than the robot.

In any case, Luckin suggests a compromise: AI and automated systems could play collaborative roles in the education system. Teachers and students could take advantage of the technology in ways that benefit them both, and human oversight would remain in place for when AI systems do encounter problems.

AI in Every Classroom

For teachers, the classroom is anything but serene. Kids giggle during lessons, call out, rustle papers, and fidget — teachers must compete with the chaos to simply get students to learn. And teachers take their jobs home with them, too, spending their evenings and weekends planning lessons and grading student work.

What if AI could act as an extra pair of hands in the classroom? That was the idea Luckin and her co-author put forth in a recent paper. This AI assistant could manage tasks such as taking attendance or routine grading. It could also help teachers generate new lessons by autonomously navigating online teaching resources, such as Teachers Pay Teachers, to find the lesson plans most likely to resonate with a classroom based on the details of the students and the school’s specific curriculum.
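To make the lesson-finding idea concrete, here is a minimal sketch of how such an assistant might rank candidate lesson plans against a classroom profile. The plan names, tags, and the simple overlap score are all invented for illustration; a real system would crawl actual resources and use far richer matching.

```python
# Hypothetical sketch: rank lesson plans by how well their tags
# overlap with a classroom's profile. All names and tags are invented.

def jaccard(a, b):
    """Overlap between two tag sets: size of intersection over union."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

lesson_plans = {
    "fractions-with-visuals": {"math", "grade4", "visual", "group-work"},
    "fractions-drill-sheet": {"math", "grade4", "worksheet"},
    "poetry-warmup": {"english", "grade4", "creative"},
}

# Profile describing what this particular classroom needs.
classroom = {"math", "grade4", "visual"}

ranked = sorted(
    lesson_plans,
    key=lambda name: jaccard(lesson_plans[name], classroom),
    reverse=True,
)
print(ranked[0])  # the best-matching plan for this classroom
```

The scoring rule here is deliberately crude; the point is only that "find the plans most likely to resonate with a classroom" reduces to matching plan metadata against a student profile.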

Decreasing the workload dumped on teachers would hopefully make them less stressed. This could limit the burnout that has exacerbated the teacher shortage and make the position more appealing to others considering becoming teachers. In her paper, Luckin predicted that every teacher could have a dedicated AI assistant within the next decade.

But AI could do more than the drudgery of teaching — it could actually make teachers better by giving them greater insight into their students’ needs.

Classrooms could be equipped with language processors, speech and gesture recognition technology, eye-tracking, and other physiological sensors to collect and analyze information about each student, Luckin writes. Instead of waiting for a test or a raised hand for a student to display her understanding of the material, teachers could access real-time information that could show them why the student might not be learning at full capacity. They’d know which students weren’t getting enough sleep, if they had inadequate diets, if they were suffering from emotional stress — information that can affect a student’s performance but that can be difficult to tease out in the classroom.

Teachers could use this information to tailor their strategies to meet the needs of each individual student. They could simply look at a list generated by the AI to see what each student should work on that day. If a student needed extra one-on-one attention, the teacher could have that student work with an AI-powered Intelligent Tutoring System (ITS) that adjusts its approach to match the student’s learning style. Meanwhile, the teacher might give other students group work so they could hone their interpersonal skills.

The system could also help students modify their behavior to improve their own performance. For example, a student might learn that she scores lower on exams when she stays up late the night before, drinks coffee that morning, or takes public transport to school instead of walking. By changing these habits, she could improve her test scores.
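A minimal sketch of how such an analytics system might surface one of these habit-performance links: compute the correlation between a tracked habit and exam scores and flag strong associations. The data points and the 0.5 threshold below are invented for illustration; a real system would use far richer signals and proper statistics.

```python
# Hypothetical sketch: flag a link between a habit (hours of sleep
# the night before an exam) and the exam score. Data is invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# One row per exam: hours slept the night before, and the score.
sleep_hours = [8.0, 7.5, 5.0, 6.0, 8.5, 4.5, 7.0, 5.5]
exam_scores = [88, 84, 65, 72, 91, 60, 80, 68]

r = pearson_r(sleep_hours, exam_scores)
if r > 0.5:
    print(f"Strong positive link between sleep and scores (r = {r:.2f})")
```

Correlation is not causation, of course; the article's scenario assumes the system can tease apart which habits actually matter, which is a much harder problem than this sketch suggests.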

As Luckin told Futurism, the increased use of AI in education could have downsides. Schools will need to guard against the misuse of student data, and cybersecurity will be of the utmost importance. Still, these types of precautions won’t be limited to education. Data protection will be a universal concern as the Internet of Things (IoT) grows and our world gets increasingly “smart.”

A few decades in the future, every student and teacher could be the master of their own personal educational analytics, Luckin predicts. That information could be useful beyond the classroom — students may choose to share certain analytics along with their college admission packages, while teachers may include theirs in applications for future employment.

Not My Classroom

As previously noted, it will be a little while before every teacher has an AI assistant and every student has an AI tutor. Developing the necessary technology will be the simple part, Luckin said; some researchers are already working on such systems.

Convincing parents, teachers, and students to embrace AI in education will be the real challenge. Some may be biased against the technology for fear it will leave them unemployed, while others may have a hard time shaking thoughts of the doomsday scenarios posited by tech luminaries such as Elon Musk and Stephen Hawking.

Teachers, in particular, are likely to be more resistant to automation because teaching is an “inherently human process,” Terry Heick, founder and director of TeachThought, a publisher of teaching materials and resources that are focused on innovation in education, told Futurism.

He said teachers might take the suggestion that a “symbolically mindless robot” could do their work as an indication that others see their skillset as easy to duplicate. In fact, said Heick, the opposite is true — teaching is such an “impossible” task that anything that could make the job easier on teachers should be valued.

To ease this resistance, researchers and tech companies could involve educators, parents, and students in the development of AI systems designed for the classroom, as Luckin suggests in her paper. Stakeholders could see that their input and experience is valuable; they could even identify weaknesses in systems under development. Researchers can tailor systems to suit the needs of educators, students, and parents, improving the final product. Skeptics could see how AI could improve the learning experience.

For this process to go well, it needs to be slow and iterative. It will likely take decades at best.

“Considering that education still hasn’t embraced mobile technology, the idea of Johnny 5 circling around a classroom teaching students in just a decade seems far-fetched,” Heick said. But maybe someday.

Traditional Economics Failed. Here’s A New Blueprint.

On March 20, 2016, Eric Liu and Nick Hanauer write on Evonomics:

Politics in democracy can be understood many ways, but on one level it is the expression of where people believe their self-interest lies— that is to say, “what is good for me?” Even when voters vote according to primal affinities or fears rather than economic advantage (as Thomas Frank, in What’s the Matter With Kansas?, lamented of poor whites who vote Republican), it is because they’ve come to define self-interest more in terms of those primal identities than in terms of dollars and cents.

This is not proof of the stupidity of such voters. It is proof of the malleability and multidimensionality of self-interest. While the degree to which human beings pursue what they think is good for them has not changed and probably never will, what they believe is good for them can change, and from time to time has, radically.

We assert a simple proposition: that fundamental shifts in popular understanding of how the world works necessarily produce fundamental shifts in our conception of self-interest, which in turn necessarily produce fundamental shifts in how we think to order our societies.

Consider for a moment this simple example:

For the overwhelming majority of human history, people looked up into the sky and saw the sun, moon, stars, and planets revolve around the earth. This bedrock assumption based on everyday observation framed our self-conception as a species and our interpretation of everything around us.

Alas, it was completely wrong.

Advances in both observation technology and scientific understanding allowed people to first see, and much later accept, that in fact the earth was not the center of the universe, but rather, a speck in an ever-enlarging and increasingly humbling and complex cosmos. We are not the center of the universe.

It’s worth reflecting for a moment on the fact that the evidence for this scientific truth was there the whole time. But people didn’t perceive it until concepts like gravity allowed us to imagine the possibility of orbits. New understanding turns simple observation into meaningful perception. Without it, what one observes can be radically misinterpreted. New understanding can completely change the way we see a situation and how we see our self-interest with respect to it. Concepts determine, and often distort, percepts.

Today, most of the public is unaware that we are in the midst of a moment of new understanding. In recent decades, a revolution has taken place in our scientific and mathematical understanding of the systemic nature of the world we inhabit.

–We used to understand the world as stable and predictable, and now we see that it is unstable and inherently impossible to predict.

–We used to assume that what you do in one place has little or no effect on what happens in another place, but now we understand that small differences in initial choices can cascade into huge variations in ultimate consequences.

–We used to assume that people are primarily rational, and now we see that they are primarily emotional.

Now, consider: how might these new shifts in understanding affect our sense of who we are and what is good for us?

A Second Enlightenment and the Radical Redefinition of Self-Interest

In traditional economic theory, as in politics, we Americans are taught to believe that selfishness is next to godliness. We are taught that the market is at its most efficient when individuals act rationally to maximize their own self-interest without regard to the effects on anyone else. We are taught that democracy is at its most functional when individuals and factions pursue their own self-interest aggressively. In both instances, we are taught that an invisible hand converts this relentless clash and competition of self-seekers into a greater good.

These teachings are half right: most people indeed are looking out for themselves. We have no illusions about that. But the teachings are half wrong in that they enshrine a particular, and particularly narrow, notion of what it means to look out for oneself.

Conventional wisdom conflates self-interest and selfishness. It makes sense to be self-interested in the long run. It does not make sense to be reflexively selfish in every transaction. And that, unfortunately, is what market fundamentalism and libertarian politics promote: a brand of selfishness that is profoundly against our actual interest.

Let’s back up a step.

When Thomas Jefferson wrote in the Declaration of Independence that certain truths were held to be “self-evident,” he was not recording a timeless fact; he was asserting one into being. Today we read his words through the filter of modernity. We assume that those truths had always been self-evident. But they weren’t. They most certainly were not a generation before Jefferson wrote. In the quarter century between 1750 and 1775, in a confluence of dramatic changes in science, politics, religion, and economics, a group of enlightened British colonists in America grew gradually more open to the idea that all men are created equal and are endowed by their Creator with certain unalienable rights.

It took Jefferson’s assertion, and the Revolution that followed, to make those truths self-evident.

We point this out as a simple reminder. Every so often in history, new truths about human nature and the nature of human societies crystallize. Such paradigmatic shifts build gradually but cascade suddenly.

This has certainly been the case with prevailing ideas about what constitutes self-interest. Self-interest, it turns out, is not a fixed entity that can be objectively defined and held constant. It is a malleable, culturally embodied notion.

Think about it. Before the Enlightenment, the average serf believed that his destiny was foreordained. He fatalistically understood the scope of life’s possibility to be circumscribed by his status at birth. His concept of self-interest extended only as far as that of his nobleman. His station was fixed, and reinforced by tradition and social ritual. His hopes for betterment were pinned on the afterlife. Post-Enlightenment, that all changed. The average European now believed he was master of his own destiny. Instead of worrying about his odds of a good afterlife, he worried about improving his lot here and now. He was motivated to advance beyond what had seemed fated. He was inclined to be skeptical about received notions of what was possible in life.

The multiple revolutions of the Enlightenment— scientific, philosophical, spiritual, material, political— substituted reason for doctrine, agency for fatalism, independence for obedience, scientific method for superstition, human ambition for divine predestination. Driving this change was a new physics and mathematics that made the world seem rational and linear and subject to human mastery.

The science of that age had enormous explanatory and predictive power, and it yielded an entirely new way of conceptualizing self-interest. Now the individual, relying on his own wits, was to be celebrated for looking out for himself— and was expected to do so. As physics developed into a story of zero-sum collisions, as man mastered steam and made machines, as Darwin’s theories of natural selection and evolution took hold, the binding and life-defining power of old traditions and institutions waned. A new belief seeped osmotically across disciplines and domains: Every man can make himself anew. And before long, this mutated into another ethic: Every man for himself.

Compared to the backward-looking, authority-worshipping, passive notion of self-interest that had previously prevailed, this, to be sure, was astounding progress. It was liberation. Nowhere more than in America— a land of wide-open spaces, small populations, and easily erased histories— did this atomized ideal of self-interest take hold. As Steven Watts describes in his groundbreaking history The Republic Reborn, “the cult of the self-made man” emerged in the first three decades after Independence. The civic ethos of the founding evaporated amidst the giddy free-agent opportunity to stake a claim and enrich oneself. Two centuries later, our greed-celebrating, ambition-soaked culture still echoes this original song of self-interest and individualism.

Over time, the rational self-seeking of the American has been elevated into an ideology now as strong and totalizing as the divine right of kings once was in medieval Europe. Homo economicus, the rationalist self-seeker of orthodox economics, along with his cousin Homo politicus, gradually came to define what is considered normal in the market and politics. We’ve convinced ourselves that a million individual acts of selfishness magically add up to a common good. And we’ve paid a great price for such arrogance. We have today a dominant legal and economic doctrine that treats people as disconnected automatons and treats the mess we leave behind as someone else’s problem. We also have, in the Great Recession, painful evidence of the limits of this doctrine’s usefulness.

But now a new story is unfolding.

Our century is yielding a second Enlightenment, and the narrative it offers about what makes us tick, individually and collectively, is infinitely more sophisticated than what we got the last time around. Since the mid-1960s, there have been profound advances in how we understand the systemic nature of botany, biology, physics, computer science, neuroscience, oceanography, atmospheric science, cognitive science, zoology, psychology, epidemiology, and even, yes, economics. Across these fields, a set of conceptual shifts is underway:

Simple → Complex
Atomistic → Networked
Equilibrium → Disequilibrium
Linear → Non-linear
Mechanistic → Behavioral
Efficient → Effective
Predictive → Adaptive
Independent → Interdependent
Individual ability → Group diversity
Rational calculator → Irrational approximators
Selfish → Strongly reciprocal
Win-lose → Win-win or lose-lose
Competition → Cooperation

Simple → Complex

The reductionist spirit of the first Enlightenment yielded a passion for classification— of species, of races, of types of all kinds of things— and this had the virtue of clarifying and simplifying what had once seemed fuzzy. But Enlightenment mathematics was limited in its ability to depict complicated systems like ecosystems and economies. The second Enlightenment is giving us the tools to understand complexity, as Scott Page and John Miller explain in Complex Adaptive Systems. Such systems— whether they are stock markets or immune systems, biospheres or political movements— are made of interacting agents, operating interdependently and unpredictably, learning from experience at individual and collective levels. The patterns we see are not mere aggregations of isolated acts but are the dynamic, emergent properties of all these interactions. The way these patterns behave may not be predictable, but they can be understood. We understand now how whirlpools arise from turbulence, or how bubbles emerge from economic activity.

Atomistic → Networked

The first Enlightenment was excellent for breaking phenomena into component parts, ever smaller and more discrete. It was an atomic worldview that conceptualized us as separate and independent. The second Enlightenment proves that while we are made of atoms we are not atoms— that is, we behave not in atomistic ways but as permeable, changeable parts of great networks and ecosystems. In particular, human societies are made up of vast, many-to-many networks that have far greater impact on us as individuals and on the shape and nature of our communities than we ever realized. The “six degrees” phenomenon is not a party game; it is a way of seeing more clearly what Albert-Laszlo Barabasi, author of Linked, describes as “scale-free networks”: networks with an uneven distribution of connectedness, whose unevenness shapes how people behave. Recognizing ourselves as part of networks— rather than as isolated agents or even niches in a hierarchy— enables us to see behavior as contagious, even many degrees away. We are all on the network, part of the same web, for better or worse. Thus does consumption of Middle East oil produce climate change, which creates drought in North Africa, which raises food prices there, which leads a vendor in Tunis to set himself afire, which sparks a revolution that upends the Middle East.
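The "scale-free" structure Barabasi describes can be grown in a few lines of code: each newcomer links to an existing node with probability proportional to that node's current degree, and a handful of hubs emerge. This is a toy sketch of preferential attachment, not Barabasi's actual model code, and the node counts are arbitrary.

```python
import random

# Toy "preferential attachment" sketch: new nodes link to existing
# nodes with probability proportional to current degree, so early,
# well-connected nodes snowball into hubs.

def preferential_attachment(n_nodes, seed=42):
    random.seed(seed)
    degrees = {0: 1, 1: 1}  # start with two nodes joined by one edge
    endpoints = [0, 1]      # each edge lists both endpoints, so sampling
                            # this list is automatically degree-weighted
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        degrees[new] = 1
        degrees[target] += 1
        endpoints += [new, target]
    return degrees

degrees = preferential_attachment(1000)
hub = max(degrees.values())
median = sorted(degrees.values())[len(degrees) // 2]
print(f"hub degree: {hub}, median degree: {median}")
```

The resulting degree distribution is highly uneven: most nodes keep one or two links while a few hubs accumulate many, which is exactly the unevenness that shapes how influence spreads through such networks.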

Equilibrium → Disequilibrium

Classical economics, with us still today, relied upon 19th-century ideas from physics about systems in equilibrium. On this account, shocks or inputs to the system eventually result in the system going back to equilibrium, like water in a bucket or a ball bearing in a bowl (or the body returning to “stasis” after “sickness”). Such systems are closed, stable, and predictable. By contrast, complex systems like ecosystems and economies (or hurricanes or Facebook) are open and never stay in equilibrium. In non-equilibrium systems, a tiny input can create a catastrophic change— the so-called butterfly effect. The natural, emergent state of such systems— open rather than closed— is not stability but rather booms and busts, bubbles and crashes. It is from this tumult, says Eric Beinhocker, author of the magisterial The Origin of Wealth, that evolutionary opportunities for innovation and wealth creation arise.

Linear → Non-linear

The first Enlightenment emphasized linear, predictable models for change, whether at the atomic or the global level. The second Enlightenment emphasizes the butterfly effect, path dependence, high sensitivity to initial conditions and high volatility thereafter: in short, it gives us chaos, complexity, and non-linearity. What once seemed predictable is now understood to be quite unpredictable.
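A standard toy illustration of this sensitivity (not drawn from the article itself) is the logistic map in its chaotic regime: two trajectories whose starting points differ by one part in a million soon bear no resemblance to each other.

```python
# Butterfly effect in miniature: iterate the logistic map
# x -> r * x * (1 - x) with r = 4 (its chaotic regime) from two
# starting points that differ by one part in a million.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # tiny difference in initial conditions

gap_start = abs(a[1] - b[1])
gap_end = abs(a[-1] - b[-1])
print(f"gap after 1 step: {gap_start:.2e}, after 50 steps: {gap_end:.2e}")
```

The rule itself is perfectly deterministic; unpredictability comes from the fact that any error in measuring the starting point, however small, is amplified until it swamps the signal.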

Mechanistic → Behavioral

The first Enlightenment made the stable, order-seeking machine the generative metaphor for economic activity (assembly lines), social organization (political machines), and government’s role (that of a mechanic or clockmaker). The second Enlightenment studies not how people process things independently but rather how they behave interdependently. As David Brooks describes in The Social Animal, behavior is contagious, often unconsciously and unpredictably so, and individual choices can cascade suddenly into great waves of social change.

Efficient → Effective

The metaphors of the Enlightenment, taken to scale during the Industrial Age, led us to conceptualize markets as running with “machine-like efficiency” and frictionless alignment of supply and demand. But in fact, complex systems are tuned not for efficiency but for effectiveness— not for perfect solutions but for adaptive, resilient, good-enough solutions. This, as Rafe Sagarin depicts in the interdisciplinary survey Natural Security, is how nature works. It is how social and economic systems work too. Evolution relentlessly churns out effective, good-enough-for-now solutions in an ever-changing landscape of challenges. Effectiveness is often inefficient, usually messy, and always short-lived, such that a system that works for one era may not work for another.

Predictive → Adaptive

In the old Enlightenment and the machine age that followed, inputs were assumed to predict outputs. In the second Enlightenment, once we recognize that the laws that govern the world are laws of complex systems, we must trade the story of inputs and predictability for a story of influence and ever-shifting adaptation. In complex human societies, individuals act and adapt to changing circumstances; their adaptations in turn influence the next round of action, and so on. This picture of how neither risks nor outcomes can be fully anticipated makes flexibility and resilience more valuable at every scale of decision-making.

Independent → Interdependent

The Enlightenment allowed us to see ourselves as individuals and agents. Free from supernatural authority, people were first allowed and then expected to act independently and selfishly for themselves. This extraordinary cultural shift sparked invention, innovation, and the autonomy we expect in our daily lives. But this mode of thinking, particularly applied to the American frontier, persuaded us that we were independent rather than interdependent. A new understanding of systems and human behavior and physiology shows this to be untrue. From the quantum level up, we are far more interdependent than our politics and culture generally let us think. We are at all times both cause and effect. Our mirror neurons and evolved social rites mean that how we behave influences how others behave, and how they behave influences us. The permutating patterns formed by those interactions become the shape our societies take. And obviously, the denser and more connected the network— compare, say, America today with America 300 years ago— the greater these effects.

Rational calculator → Irrational approximators

The Enlightenment encouraged scientists to apply mathematics and physics to human nature and social dynamics, but these were of course blunt instruments for such complex work, requiring many simplifying assumptions. Over time, the caveat that these assumptions were simplifying fell away and what was left was a mechanical view that people are rational calculators of their own interest. Economists even today assume that an ordinary consumer can make complex instantaneous calculations about net present value and risk when making decisions in grocery stores between tomatoes and carrots. This homo economicus stands at the center of traditional economics, and his predilection for perfect rationality and selfishness permeates our politics and culture. By contrast, the behavioral science of our times is pulling us back to common sense and reminding us that people are often irrational or at least a-rational and emotional, and that we are at best approximators of interest who often don’t know what’s best for us and even when we do, often don’t do it. This accounts for the “animal spirits” of fear, longing, and greed that seem to drive markets in unpredictable and irrational ways.

Selfish → Strongly reciprocal

For centuries, a bedrock economic, legal, and social assumption was that people were inherently so selfish that they could not be expected to support or aid others not in their own genetic line. Now the study of human behavior reinforces the neglected fact that we are hardwired equally to be cooperative. As social psychologist Dacher Keltner writes in Born to Be Good, humans could not have survived and evolved without the social organization that only cooperation, mutuality, and reciprocity make possible. In fact, we are so tilted toward cooperation that we punish non-cooperators in our communities, even at cost to ourselves. This “strong reciprocation” strategy reflects a deep recognition, made instinctual through millennia of group activity, that all behavior is contagious, and that rewarding good with good and bad with punishment is the best way to protect our societies and therefore ourselves. Reciprocity makes compassion not a form of weakness but a model of strength; it makes pro-social morality not just moral but natural and smart.
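The "punish non-cooperators even at cost to ourselves" dynamic can be made concrete with a toy public-goods round. Every number here (the 1.6 pot multiplier, the punishment cost, the fine) is invented for illustration; behavioral economists run versions of this game in the lab with real stakes.

```python
# Toy public-goods round with costly punishment. Contributions go into
# a pot, the pot is multiplied and split evenly, and then contributors
# each pay a small cost to fine every free-rider.

def public_goods_round(contributions, multiplier=1.6,
                       punish_cost=1.0, punish_fine=5.0):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    payoffs = [share - c for c in contributions]
    cheaters = [i for i, c in enumerate(contributions) if c == 0]
    punishers = [i for i, c in enumerate(contributions) if c > 0]
    for p in punishers:
        payoffs[p] -= punish_cost * len(cheaters)   # punishing is costly
    for ch in cheaters:
        payoffs[ch] -= punish_fine * len(punishers)  # fines add up
    return payoffs

# Three players contribute 10 each; one free-rides.
print(public_goods_round([10, 10, 10, 0]))
```

Without punishment the free-rider pockets the largest payoff; with it, free-riding becomes the worst strategy in the group, which is the sense in which strong reciprocity protects cooperation.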

Win-lose → Win-win or lose-lose

The story that grew out of Enlightenment rationalism and then Social Darwinism had a strong streak of “your gain is my loss.” The more that people and groups were seen as competing isles of ambition, all struggling for survival, the more life was analogized at every turn into a win-lose scenario. But the stories and science of the second Enlightenment prove what has long been a parallel intuition: that in fact, the evolution of humanity from cave dweller to Facebooker is the story of increasing adoption of nonzero, or positive-sum, attitudes; and that societies capable of setting up win-win (or lose-lose) scenarios always win. Robert Wright’s Nonzero describes this dynamic across civilizations. Unhealthy societies think zero-sum and fight over a pie of fixed size. Healthy societies think 1 + 1 = 3 and operate from a norm that the pie can grow. Open, non-equilibrium systems have synergies that generate increasing returns and make the whole greater than the sum of the parts. The proper goal of politics and economics is to maximize those increasing returns and win-win scenarios.

Competition → Cooperation

A fundamental assumption of traditional economics is that competitiveness creates prosperity. This view, descended from a misreading of Adam Smith and Charles Darwin, weds the invisible hand of the market to the natural selection of nature. It justifies atomistic self-seeking. A clearer understanding of how evolutionary forces work in complex adaptive human society shows that cooperation is the true foundation of prosperity (as does a full reading of Adam Smith’s lesser-known masterpiece The Theory of Moral Sentiments). Competition properly understood—in nature or in business—is between groups of cooperators. Groups that know how to cooperate—whose members attend to social and emotional skills like empathy—defeat those that do not. That’s because only cooperation can create symbiotic, nonzero outcomes. And those nonzero outcomes, borne and propelled by ever-increasing trust and cooperation, create a feedback loop of ever-increasing economic growth and social health.

Now: what does all this have to do with self-interest?

Everything. Our previous understanding of the world animated and enabled a primitive and narrow perspective on self-interest, giving us such notions as:

- I should be able to do whatever I please, so long as it doesn’t directly harm someone else.
- Your loss is my gain.
- It’s survival of the fittest—only the strong survive.
- Rugged individualism wins.
- We are a nation of self-made people.
- Every man for himself.

Until recently, these beliefs—we aptly call them “rationalizations”—could be backed, even if speciously, by references to science and laws of nature. But now, to anyone really paying attention, they can’t. Today, emerging from our knowledge of emergence, complexity, and innate human behavior, a different story about self-interest is taking shape, and it sounds more like this:

- What goes around comes around.
- The better you do, the better I do.
- It’s survival of the smartest—only the cooperative survive.
- Teamwork wins.
- There’s no such thing as a self-made person.
- All for one, one for all.

Let’s be clear here: we are not talking about a sudden embrace of saintly self-denial. We are talking about humans correcting their vision—as they did when they recognized that the sun didn’t orbit the earth; as they did when they acknowledged that germs, not humours, caused sickness. We are talking about humans seeing, with long-overdue clarity, and with all our millennia of self-preservation instincts intact, a simple truth: True self-interest is mutual interest. The best way to improve your likelihood of surviving and thriving is to make sure those around you survive and thrive. Notwithstanding American mythology about selfishness making the world go round, humans have in fact evolved—have been selected—to look out for others in their group and, in so doing, to look out for self. We exist today because this is how our ancestors behaved. We evolve today by ensuring that our definition of “our group” is wide enough to take advantage of diversity and narrow enough to be actionable.

This is a story, in short, about self-interest that is smart, or “self-interest properly understood,” as Tocqueville put it. It is a true story. It tells of neither altruism nor raw simple selfishness. Altruism is admirable, but not common enough to support a durable moral or political system. Raw selfishness may seem like the savvy stance, but is in fact self-defeating: tragedies of the commons are so called because they kill first the commons and then the people. True self-interest is mutual interest. This is even more urgently true in the age of global climate change, terror, drugs, pop culture, marketing, and so forth than it was in the age of hunter-gatherers.

We are aware that many have used the “newest science” to justify outlandish views and schemes, or to lend a patina of certainty to things ineffable. It would be easy to characterize our reliance on new science as similarly naive. We are also aware, acutely, that the Machinebrain thinking we criticize is itself the direct product of science, and that our remedy may appear strangely to be a fresh dose of the illness. But while skepticism is warranted, there is an important difference: today’s science is most useful in how it demonstrates the limits of science. Complexity and evolutionary theory do not give us mastery over the systems we inhabit; they simply inform us about their inherent unpredictability and instability. These new perspectives should not make us more certain of our approaches, but rather, more keenly aware of how our approaches can go wrong or become outmoded, and how necessary it is in civic life to be able to adjust to changes in fact and experience.

Where the rationalist schemes of central planners on the left and market fundamentalists on the right have led to costly hubris, public policy informed by the new science should now lead to constant humility.

In a sense, the latest wave of scientific understanding merely confirms what we, in our bones, know to be true: that no one is an island; and that someone who thinks he can take for himself, everyone else be damned, causes a society to become too sick to sustain anyone. Indeed, he or she who defers immediate gain, for either the longer term or the greater good, causes a society to prosper so much as to pay back that investment of deferred gratification. True self-interest is mutual interest.

The contrast between the new and old stories of self-interest—like any paradigmatic shift in the public imagination—is not just a philosophical curiosity. It plays out in how we interpret and understand—and therefore, prepare for or prevent—calamities like global financial meltdowns or catastrophic climate change or political gridlock. And it will transform the way we think about three basic elements of a democratic society: citizenship, economy, government.

Traditional Economics Failed. Here’s a New Blueprint.


Economists Are Obsessed With “Job Creation.” How About Less Work?

On October 9, 2017, Peter Gray writes on Evonomics:

In 1930, the British economist John Maynard Keynes predicted that, by the end of the century, the average workweek would be about 15 hours.  Automation had already begun to replace many jobs by the early 20th century, and Keynes predicted that the trend would accelerate to the point where all that people need for a satisfying life could be produced with a minimum of human labor, whether physical or mental.  Keynes turned out to be right about increased automation.  We now have machines, computers and robots that can do quickly what human beings formerly did laboriously, and the increase in automation shows no sign of slowing down.  But he was wrong about the decline of work.

As old jobs have been replaced by machines, new jobs have cropped up.  Some of these new jobs are direct results of the new technologies and can fairly be said to benefit society in ways beyond just keeping people employed (Autor, 2015).  Information technology jobs are obvious examples, as are jobs catering to newfound realms of amusement, such as computer game design and production.  But we also have an ever-growing number of jobs that seem completely useless or even harmful.  As examples, we have administrators and assistant administrators in ever larger numbers shuffling papers that don’t need to be shuffled, corporate lawyers and their staffs helping big companies pay less than their fair share of taxes, countless people in the financial industries doing who knows what mischief, lobbyists using every means possible to further corrupt our politicians, and advertising executives and sales personnel pushing stuff that nobody needs or really wants.

A sad fact is that many people are now spending huge portions of their lives at work that, they know, is not benefitting society (see Graeber, 2013).  It leads to such cynicism that people begin to stop even thinking that jobs are supposed to benefit society.  We have the spectacle of politicians on both sides of the aisle fighting to keep munitions plants open in their states, to preserve the jobs, even when the military itself says the weapons the plant is building are no longer useful.  And we have politicians and pundits arguing that fossil fuel mining and carbon spewing factories should be maintained for the sake of the jobs, let the environment be damned.

The real problem, of course, is an economic one.  We’ve figured out how to reduce the amount of work required to produce everything we need and realistically want, but we haven’t figured out how to distribute those resources except through wages earned from the 40-hour (or more) workweek.  In fact, technology has had the effect of concentrating more and more of the wealth in the hands of an ever-smaller percentage of the population, which compounds the distribution problem.  Moreover, as a legacy of the industrial revolution, we have a cultural ethos that says people must work for what they get, and so we shun any serious plans for sharing wealth through means other than exchanges for work.

So, I say, down with the work ethic, up with the play ethic!  We are designed to play, not to work.  We are at our shining best when playing. Let’s get our economists thinking about how to create a world that maximizes play and minimizes work.  It seems like a solvable problem.  We’d all be better off if people doing useless or harmful jobs were playing, instead, and we all shared equally the necessary work and the benefits that accrue from it.

What is work?

The word work, of course, has a number of different, overlapping meanings.  As used by Keynes, and as I used it in the preceding paragraphs, it refers to activity that we do only or primarily because we feel we must do it in order to support ourselves and our families economically.  Work can also refer to any activity that we experience as unpleasant, but which we feel we must do, whether or not it benefits us financially.  A synonym for work by that definition is toil, and by that definition work is the opposite of play. Still another definition is that work is any activity that has some positive effect on the world, whether or not the activity is experienced as pleasant. By that definition, work and play are not necessarily distinct.  Some lucky people consider their job, at which they earn their living, to be play.  They would do it even if they didn’t need to in order to make a living.  That’s not the meaning of work as I use it in this essay, but it’s a meaning worth keeping in mind because it reminds us that much of what we now call work, because we earn a living at it, might be called play in a world where our living was guaranteed in other ways.

Is work an essential part of human nature?  No.

It surprises many people to learn that, on the time scale of human biological history, work is a new invention.  It came about with agriculture, when people had to spend long hours plowing, planting, weeding, and harvesting; and then it expanded further with industry, when people spent countless tedious or odious hours assembling things or working in mines.  But agriculture has been with us for a mere ten thousand years and industry for far less time.  Before that, for hundreds of thousands of years, we were all hunter-gatherers.  Researchers who have observed and lived with groups who survived as hunter-gatherers into modern times, in various remote parts of the world, have regularly reported that they spent little time doing what we, in our culture, would categorize as work (Gowdy, 1999; Gray, 2009; Ingold, 1999).

In fact, quantitative studies revealed that the average adult hunter-gatherer spent about 20 hours a week at hunting and gathering, and a few hours more at other subsistence-related tasks such as making tools and preparing meals (for references, see Gray, 2009).  Some of the rest of their waking time was spent resting, but most of it was spent at playful, enjoyable activities, such as making music, creating art, dancing, playing games, telling stories, chatting and joking with friends, and visiting friends and relatives in neighboring bands. Even hunting and gathering were not regarded as work; they were done enthusiastically, not begrudgingly.  Because these activities were fun and were carried out with groups of friends, there were always plenty of people who wanted to hunt and gather, and because food was shared among the whole band, anyone who didn’t feel like hunting or gathering on any given day (or week or more) was not pressured to do so.

Some anthropologists have reported that the people they studied didn’t even have a word for work; or, if they had one, it referred to what farmers, or miners, or other non-hunter-gatherers with whom they had contact did.  The anthropologist Marshall Sahlins (1972) famously referred to hunter-gatherers as comprising the original affluent society—affluent not because they had so much, but because their needs were small and they could satisfy those needs with relatively little effort, so they had lots of time to play.

Ten thousand years is an almost insignificant period of time, evolutionarily.  We evolved our basic human nature long before agriculture or industry came about.  We are, by nature, all hunter-gatherers, meant to enjoy our subsistence activities and to have lots of free time to create our own joyful activities that go beyond subsistence. Now that we can do all our farming and manufacturing with so little work, we can regain the freedom we enjoyed through most of our evolutionary history, if we can solve the distribution problem.

Do we need to work to be active and happy?  No.

Some people worry that life with little work would be a life of sloth and psychological depression.  They think that human beings need work to have a sense of purpose in life or just to get out of bed in the morning. They look at how depressed people often become when they become unemployed, or at the numbers of people who just veg out when they come home after work, or at how some people, after retirement, don’t know what to do and begin to feel useless.  But those observations are all occurring in a world in which unemployment signifies failure in the minds of many; in which workers come home physically or mentally exhausted each day; in which work is glorified and play is denigrated; and in which a life of work, from elementary school on to retirement, leads many to forget how to play.

Look at little children, who haven’t yet started school and therefore haven’t yet had their curiosity and playfulness suppressed for the sake of work.  Are they lazy?  No.  They are almost continuously active when not sleeping.  They are always getting into things, motivated by curiosity, and in their play they make up stories, build things, create art, and philosophize (yes, philosophize) about the world around them.  There is no reason to think the drives for such activities naturally decline with age.  They decline because our schools, which value work and devalue play, drill them out of people; and then tedious jobs and careers continue to drill them out.  These drives don’t decline in hunter-gatherers with age, and they wouldn’t decline in us either if it weren’t for all the work imposed on us.

Schools were invented largely to teach us to obey authority figures (bosses) unquestioningly and perform tedious tasks in a timely manner.  In other words, they were invented to suppress our natural tendencies to explore and play and prepare us to accept a life of work.  In a world that valued play rather than work, we would have no need for such schools.  Instead, we would allow each person’s playfulness, creativity, and natural strivings to find meaning in life to blossom.

Work, pretty much by definition, is something we don’t want to do.  It interferes with our freedom.  To the degree that we must work we are not free to choose our own activities and find our own life meanings.  The view that people need work in order to be happy is closely tied to the patronizing view that people can’t handle freedom (see Danaher, 2016). That dismal view of human nature has been promoted for centuries, and reinforced in schools, in order to maintain a placid workforce.

Do culturally valuable discoveries, creations, and inventions depend upon work?  No.

People love to discover and create.  We are naturally curious and playful, and discovery and creation are, respectively, the products of curiosity and playfulness.  There is no reason to believe that less work and more time to do what we want to do would cause fewer achievements in sciences, arts, and other creative endeavors.

The specific forms our inventiveness takes depend in part on cultural conditions.  Among nomadic hunter-gatherers, where material goods beyond what one could easily carry were a burden, discoveries were generally about the immediate physical and biological environment, on which they depended, and creative products were typically ephemeral in nature—songs, dances, jokes, stories, bodily decorations, and the like.  Today, and ever since agriculture, creative products can take all these forms plus material inventions that transform our basic ways of living.

Nearly all great scientists, inventors, artists, poets, and writers talk about their achievements as play.  Einstein, for example, spoke of his achievements in mathematics and theoretical physics as “combinatorial play.”  He did it for fun, not money, while he supported himself as a clerk in a patent office.  The Dutch cultural historian Johan Huizinga, in his classic book Homo Ludens, argued, convincingly, that most of the cultural achievements that have enriched human lives—in art, music, literature, poetry, mathematics, philosophy, and even jurisprudence—are derivatives of the drive to play.  He pointed out that the greatest outpourings of such achievements have occurred at those times and places where a significant number of adults were freed from work and could therefore play, in an environment in which play was valued.  A prime example was ancient Athens.

Would we degenerate morally without work?  No.

The 18th century poet and philosopher Friedrich Schiller wrote, “Man is only fully human when he plays.”  I agree; and it seems as clear to me as it did to Schiller that part of our humanity, which rises in play, is concern for our fellow human beings.

In our work-filled world we too often fall into a pit where the duty of the job overrides our concern for others.  Work detracts from the time and energy—and sometimes even from the motivation—for helping neighbors in need, or striving to clean up our environment, or promoting causes aimed at improving the world for all.  The fact that so many people engage in such humanitarian activities already, despite the pressures of work, is evidence that people want to help others and make the world a better place.  Most of us would do more for our fellow humans if it weren’t for the sink of time and energy and the tendencies toward greed and submission to power that work creates.

Band hunter-gatherers, who, as I said, lived a life of play, are famous among anthropologists for their eagerness to share and help one another.  Another term for such societies is egalitarian societies—they are the only societies without social hierarchies that have ever been found.  Their ethos, founded in play, is one that prohibits any one person from having more status or goods than any other.  In a world without work, or without so much of it, we would all be less concerned with moving up some ladder, ultimately to nowhere, and more concerned with the happiness of others, who are, after all, our playmates.

So, instead of trying so hard to preserve work, why don’t we solve the distribution problem, cut way back on work, and allow ourselves to play?

Good question.


Ronald Reagan Tried Deep Corporate-Tax Cuts Before. They Didn’t Work

Deja vu all over again. (Reuters/Gary Cameron)


On December 3, 2017, Gwynn Guilford writes on Quartz:

Will sweeping corporate-tax cuts succeed in juicing the US economy and buoying middle-class wage growth? Most assuredly, says the Trump administration, with Congress poised to pass the Republican tax bill.

History, however, suggests the opposite.

The Trump team’s argument goes something like this: Cutting taxes on businesses will free up profits they will invest in new factories, research and development, and new equipment. The resulting investment boom will spur growth, as firms hire and as workers harness new ideas and equipment to produce more than they used to.

Let’s look at what happened the last time the US tested this logic. In 1986, Ronald Reagan signed cuts that brought the top corporate tax rate to 34%, down from 46%.

“That’s as large a cut as we’re talking about today, and investment fell—that was the weakest period of investment in the postwar period,” Dean Baker, economist and co-director of the Center for Economic and Policy Research (CEPR), an independent, nonpartisan think tank, tells Quartz. “I’m not going to say that was because we cut the taxes, but it’s a little hard to believe that it will boost investment this time.”

As a share of GDP, gross business investment peaked in 1982, at more than 15%. That share dropped sharply starting in 1987.

In the five years after the tax cut, investment growth averaged 1.6%, compared with more than 8% between 1977 and 1981.

What this colossal fiscal experiment suggests is that tax rates and after-tax share of profits weren’t driving business-investment decisions.

“Firms weren’t cash-constrained—they weren’t saying, ‘If only we could have more money, we’d do more investment,’” says Baker. “That’s even more true today. They really don’t know what to do with their money. It’s not as though Apple is sitting there going, ‘We have all these great plans—if only we had the money.’”

One big difference, though, is that current business-investment levels aren’t nearly as high as they were before Reagan’s tax cut.

So what will companies do with a windfall of after-tax profits? The odds that it will flow back into the real economy aren’t looking good. Many major companies are planning to hand that money to their investors through dividends and share buybacks. In fact, when Gary Cohn, Trump’s economic guru, asked a gathering of corporate leaders who was planning to reinvest their tax cuts, few raised their hands, Bloomberg recently reported. “Why aren’t the other hands up?” Cohn asked.



I’m A Depression Historian. The GOP Tax Bill Is Straight Out Of 1929.

People gather on the subtreasury building steps across from the New York Stock Exchange in New York on “Black Thursday” on Oct. 24, 1929. The Great Depression followed thereafter. (AP)

On November 30, 2017, Robert S. McElvaine writes in The Washington Post:

Republicans are again sprinting toward an economic cliff.

“There are two ideas of government,” William Jennings Bryan declared in his 1896 “Cross of Gold” speech. “There are those who believe that if you will only legislate to make the well-to-do prosperous their prosperity will leak through on those below. The Democratic idea, however, has been that if you legislate to make the masses prosperous their prosperity will find its way up through every class which rests upon them.”

That was more than three decades before the collapse of the economy in 1929. The crash followed a decade of Republican control of the federal government during which trickle-down policies, including massive tax cuts for the rich, produced the greatest concentration of income in the accounts of the richest 0.01 percent at any time between World War I and 2007 (when trickle-down economics, tax cuts for the hyper-rich, and deregulation again resulted in another economic collapse).

Yet the plain fact that the trickle-down approach has never worked leaves Republicans unfazed. The GOP has been singing from the Market-is-God hymnal for well over a century, telling us that deregulation, tax cuts for the rich, and the concentration of ever more wealth in the bloated accounts of the richest people will result in prosperity for the rest of us. The party is now trying to pass a scam that throws a few crumbs to the middle class (temporarily — millions of middle-class Americans will soon see a tax hike if the bill is enacted) while heaping benefits on the super-rich, multiplying the national debt and endangering the American economy.

As a historian of the Great Depression, I can say: I’ve seen this show before.

In 1926, Calvin Coolidge’s treasury secretary, Andrew Mellon, one of the world’s richest men, pushed through a massive tax cut that would substantially contribute to the causes of the Great Depression. Republican Sen. George Norris of Nebraska said that Mellon himself would reap from the tax bill “a larger personal reduction [in taxes] than the aggregate of practically all the taxpayers in the state of Nebraska.” The same is true now of Donald Trump, the Koch Brothers, Sheldon Adelson and other fabulously rich people.

During the 1920s, Republicans almost literally worshiped business. “The business of America,” Coolidge proclaimed, “is business.” Coolidge also remarked that, “The man who builds a factory builds a temple,” and “the man who works there worships there.” That faith in the Market as God has been the Republican religion ever since. A few months after he became president in 1981, Ronald Reagan praised Coolidge for cutting “taxes four times” and said “we had probably the greatest growth in prosperity that we’ve ever known.” Reagan said nothing about what happened to “Coolidge Prosperity” a few months after he left office.

In 1932, in the depths of the Great Depression, Franklin D. Roosevelt called for “bold, persistent experimentation” and said: “It is common sense to take a method and try it; if it fails, admit it frankly and try another. But above all, try something.” The contrasting position of Republicans then and now is: Take the method and try it. If it fails, deny its failure and try it again. And again. And again.

When Bill Clinton proposed a modest increase in the top marginal tax rate in his 1993 budget, every Republican voted against it. Trickle-down economists proclaimed that it would lead to economic disaster. But the tax increase on the wealthy was followed by one of the greatest periods of prosperity in American history and resulted in a budget surplus. When the Republicans came back into power in 2001, the administration of George W. Bush pushed the opposite policies, which had invariably produced calamity in the past. Predictably, that happened again in 2008.

Just how disastrous would the proposed reincarnation of the failed Republican trickle-down policies of the past be for the American people and the future of the nation? A few ways:

  • Repealing the estate tax, or, as Republicans have dubbed it, the “death tax.” But the estate tax is not a tax on the dead; it is a tax on their heirs. Repeal would reverse an important aspect of the American Revolution and establish an American hereditary aristocracy. If your estate is not above $11 million, your benefits from this portion of the GOP’s tax cut will be a nice round number: zero.
  • Eliminating deductions for state and local taxes. The GOP has called these deductions favoritism for people who live in high-tax states. In fact, ending deductibility of state and local taxes would tax income that has already been taxed away from a taxpayer. It is, quite simply, double taxation.
  • Repealing the Alternative Minimum Tax, which assures that wealthy people who hire accountants to find all the obscure ways to avoid taxes cannot escape taxation altogether. Repealing it would save Trump millions.
  • Extending the “pass-through” provision to noncorporate businesses, including some 500 entities Trump owns. It would allow the owners of these businesses to pay taxes at 25 percent, instead of 39.6 percent. This provision would allow Wall Street fund managers, among other very wealthy people, to pay a lower tax rate than many middle-class Americans pay.
  • Ending the deductibility of large medical expenses.
  • Taxing waived tuition for college students, ending deductibility for student loan payments, and even disallowing teachers from deducting what they spend on school supplies for their students.
  • Ending the Affordable Care Act’s individual mandate, which would cause 13 million Americans to lose health insurance and result in much higher premiums for those who do get insurance through the exchanges. The Congressional Budget Office has indicated that, if enacted, the Republican tax bill may force deep cuts in Medicare through a generally unknown budget rule that its deficits would trigger.
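The arithmetic behind the pass-through provision is easy to check. The sketch below is illustrative only: the income figure is hypothetical, and it assumes a flat 39.6 percent top individual rate against the proposed 25 percent pass-through rate, ignoring brackets, deductions, and filing status.

```python
# Illustrative only: rough tax difference on pass-through income at
# the two flat rates cited, ignoring brackets and deductions.
def tax_due(income: float, rate: float) -> float:
    """Tax owed on `income` at a flat rate."""
    return income * rate

income = 1_000_000  # hypothetical pass-through business income
ordinary = tax_due(income, 0.396)      # top individual rate
pass_through = tax_due(income, 0.25)   # proposed pass-through rate
savings = ordinary - pass_through

print(f"Tax at 39.6%: ${ordinary:,.0f}")
print(f"Tax at 25%:   ${pass_through:,.0f}")
print(f"Savings:      ${savings:,.0f}")
```

On these simplified assumptions, every $1 million of income routed through a pass-through entity would save its owner roughly $146,000 in taxes, which is why the provision matters most to the very wealthy.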

The analysis of the nonpartisan Congressional Budget Office found that people making less than $100,000 a year (approximately 80 percent of American households) will have their taxes increased while the millionaires and billionaires will make off like bandits.

In the 1920s, Republicans were in full control of the federal government and used that power to pursue their objective to “make the well-to-do prosperous.” It didn’t “leak through on those below.” In that decade, the mass-production American economy became dependent on mass consumption. For it to work, the masses need a sufficient share of the national income to be able to consume what is being produced.

Republican policies in the ’20s instead pushed to concentrate more of the income at the top. Nine decades later, Republicans are rushing to do it again — and they are sprinting toward an economic cliff. Another round of Government of the People, by the Republicans, for the super-rich will be catastrophic. The American people must call a halt before it’s too late.