Wednesday, January 04, 2023

Was the Civil War Inevitable? - The New York Times

https://www.nytimes.com/2022/12/21/magazine/civil-war-jan-6.html

In the late morning of March 6, 1857, two days after the inauguration of James Buchanan as the 15th president of the United States, the Supreme Court’s chief justice, Roger B. Taney, stood among a crowd of reporters and spectators on the ground floor of the United States Capitol and formally read the 55-page majority opinion in Dred Scott v. John F.A. Sandford. Born during the American Revolution and now just shy of 80, Taney could still take over a room with his sense of conviction, and as he began to address the crowd, the old Supreme Court chamber brimmed with anticipation.

Dred Scott’s name was by that point well known to many Americans. The four days of debate on the case, conducted in December of the previous year, had been covered extensively by newspapers. Scott, an enslaved man, and his wife, Harriet, had sued for their freedom based on Dred’s claim that their late owner had taken them for several years into Illinois, a free state, and to Fort Snelling, in a Northern territory where slavery was banned by the Missouri Compromise of 1820. That federal legislation effectively outlawed slavery in the territories above 36 degrees 30 minutes north latitude. It was considered a “sacred pledge” by many antislavery Northerners determined to protect the West as “free soil” for “free labor,” but pro-slavery Southerners became equally determined to incorporate new territories as slaveholding states. In deciding whether enslaved people could gain their freedom by residing on free soil, the Supreme Court might answer a question critical to the growing nation: What would the status of slavery be in the Western territories?

Now Taney was ready to deliver the decision. A Marylander and former slaveholder, he was six feet tall and had a drooping, worn facial expression and tobacco-stained teeth. His voice was a bit weak and his body enfeebled, but he remained possessed of what a critic called an “infernal apostolic manner.” Black people, he said, could never be “citizens,” nor considered “as a part of the people.” The room stirred as listeners recognized that Taney was reaching for a much bigger impact than simply the fate of Dred and Harriet Scott and their daughters, or even the question of whether slavery would be permitted in the territories. “Every citizen has a right to take with him into the Territory any article of property,” the chief justice declared. “The Constitution of the United States recognizes slaves as property and pledges the Federal Government to protect it.”

The great crisis over the existence and expansion of slavery had just made a decisive turn. In the aftermath of Taney’s reading, the decision was greeted with a torrent of editorial commentary. Newspapers that sided with the Democrats, like The Daily Picayune of New Orleans, celebrated the court for “so adjudge[ing] the vexed question of the times as to rebuke faction … and consolidate the Union … for all time.” Republican papers, like the New-York Tribune, called the decision “atrocious,” “wicked” and “abominable.” The Chicago Daily Tribune declared that Illinois could no longer prevent someone from “opening a slave pen and an auction block for the sale of black men, women and children, right here in Chicago.” The New-York Daily Times saw the ruling as a revolution against the federal government. “Slavery,” it maintained, “is no longer local; it is national.” 

Taney’s decision sought to resolve a powerfully divisive issue that, it turned out, he could not control. Over the next three years, the country descended into disunion, followed by civil war. Recently, it has become disturbingly common to hear Americans wonder aloud whether we are headed for another breakup of some kind. Especially on the far right, talk of overthrowing the government has been increasing, reaching a peak when a mob stormed the Capitol, inspired by President Donald Trump’s persistent claims that the 2020 election had been “stolen” from him. According to the Chicago Project on Security & Threats, use of the term “civil war” surged by 3,000 percent among Twitter users in the hours after the F.B.I. search of Trump’s residence at Mar-a-Lago in August. A similar surge occurred in September when President Biden gave a prime-time speech in front of Independence Hall in Philadelphia, denouncing “MAGA Republicans” as anti-democratic threats to America. In the recent trial of Stewart Rhodes, the leader of the Oath Keepers militia who was charged with seditious conspiracy, a jury heard many hours of testimony and saw a mountain of evidence implicating the defendant in the Jan. 6 insurrection. Rhodes warned that if Trump did not invoke the Insurrection Act to stop the electoral count in Congress, he and his people would take violent action and trigger a “bloody civil war.” (Rhodes was convicted in late November.)

We might dismiss all this as paranoid ravings, except that a recent University of Virginia Center for Politics poll found that 52 percent of Trump voters and 41 percent of Biden voters at least somewhat agreed that America is so fractured that they would favor some kind of “secession” of blue from red states. Some of this sentiment is no doubt a result of irresponsible rhetoric practiced by people who seek to sow chaos or increase media ratings (and reflects a rather romanticized conception of our Civil War in the 1860s). But the anxiety animating these concerns is real.

Our divisions are deep and seemingly intractable. Thomas B. Edsall, a contributor to The New York Times’s Opinion section, conducted an extensive survey of social-science data and concluded that “there appear to be no major or effective movements to counter polarization.” It would seem that every well-meaning attempt at bipartisanship, political reconciliation or even decency in public discourse has to fight the powerful headwinds of disinformation flowing from ardent Trumpists and their media allies. A strained insistence on conformity and correctness of thought, language and behavior by the left and the right also seems to have rendered respect, grace and honest communication across political lines a thing of the past. And our elections, rather than reliably resolving our differences, are now unsteady rituals of intolerance. One enduring lesson of the 1850s and 1860s is that democracies survive only when those who lose elections accept the result.

Today, given the scale of hyperpartisanship, it might be said that our country is already in the midst of a slow, low-intensity civil conflict. Will it bring about something more violent and destructive?

For historians, this question leads back to the 1850s and the debate over what brought the nation to civil war. The 2020s are vastly different from the 1850s in terms of technology, demographics, race relations, media and America’s standing in global affairs. We do not even have the same Constitution. Americans of the 1850s were governed by the 1789 Constitution; today we live under the Constitution forged during Reconstruction by the 13th, 14th and 15th Amendments. What these two decades do share, however, are cultures of what the historian John Higham called a “boundlessness” colliding into “consolidation.” Possibilities could seem infinite to an inventor in 1855 seeking to patent a new grain reaper or a thousand other devices needed in an expanding early industrial economy, as they do today for creators of software in the biomedical or aerospace industries. Each era inspired great hope for a limitless future, but also dread of internal conflict and violence. They share a culture of fear that the American experiment is in peril and in need of regeneration — through politics or violent conflict or both. 

A recent book offers sobering prospects for where our current situation might lead. Barbara F. Walter’s “How Civil Wars Start” examines more than 200 civil wars in modern history and suggests two major variables that help gauge the potential for civil conflict. The first is whether a government is a “partial democracy” — either a backsliding democracy or an autocracy trying to democratize — and the second is whether its population is voting based on people’s ethnic, religious or racial identity rather than on something else. Walter argues that we will never again see a sectional or regional conflict between armies in the United States, but she believes that decentralized insurgencies, which she calls the “21st-century civil war,” are possible.

The debate over whether America is on an irreversible path to division and breakup like the one that led the country to war in 1861 raises an obvious question: Was the Civil War preventable? Although it is no longer at the center of academic history, this question once dominated American historians’ minds like no other. Historians from the 1930s to the 1960s engaged in a prolonged debate over whether the war was inevitable, and if it was, when it became so. For historians, the stakes of the question are very high. Most of us reject the concept of “inevitability” as a force in history; we much prefer “contingency,” the constancy of change in cause and effect. We tend to avoid single-cause explanations and prefer to situate the big events of the past within complex swirls of social history and political culture.

Although it is antithetical to the historian’s craft, inevitability is utterly beguiling. In an engrossing work from the 1950s titled “Historical Inevitability,” the philosopher Isaiah Berlin held that “for historians determinism is not a serious issue.” Then, turning sharply, he said, “Yet, unthinkable as it may be as a theory of human action, specific forms of the deterministic hypothesis have played an arresting, if limited, role in altering our views of human responsibility.” Indeed, when are we in control of events, and when are forces, ideas and upheavals our masters? When living in the midst of a historical process, no one can know where the train is taking them, whether in the Weimar Republic of the 1920s, around the rim of the Empire of Japan in 1937 or on the Kansas plains in 1857. But when we look at each of these periods in retrospect, our human tendency is to perceive an inevitable course of action leading to the known future.

The problem is exacerbated by the fact that we cannot write narrative history without turning points. And for historians of the period leading to the Civil War, there are many to choose from. In his indispensable “The Impending Crisis, 1848-1861,” David M. Potter notes “the ubiquity of the slavery question” as early as 1848, unleashed into American politics by the war with Mexico and the conquest of the Southwest. “No other issue in American history has so monopolized the political scene,” he wrote, a notable claim considering that he published the book in the immediate aftermath of the civil rights movement and the Vietnam War. Potter, though he didn’t make an explicit argument about inevitability, found the essential conflict of 1860 fully “articulated by December of 1847.”

But was that the pivotal turning point? Or did it come in 1854, when the Kansas-Nebraska Act proposed to settle the question of whether those territories would permit slavery on the basis of “popular sovereignty,” meaning the voters would decide by referendum? The law proved so unworkable that it led to widespread vigilante violence, death and destruction across Kansas, as one side saw a slaveholding oligarchy out to destroy free labor and small independent farmers and the other saw a fanatical tide of radical abolitionists, seeking to deny slaveholders their property rights. Still other historians have made the case for 1857, when an economic crisis threw hundreds of thousands of Americans out of work and led Northerners and Southerners to blame each other for the suffering. That year, wrote the great Civil War-era historian Kenneth M. Stampp in his too-little-read book “America in 1857: A Nation on the Brink,” “encompassed a political crisis which proved to be decisive in the coming of the Civil War.”

As a historian of the period, I find Stampp’s case for 1857 as the great pivot on the road to disunion to be persuasive largely because of the Dred Scott case, which stoked the fear, distrust and conspiratorial hatred already common in both the North and the South to new levels of intensity. As we worry over where the country is headed today, it’s instructive to turn our attention to this milestone on the road to war. Though the first bullets would not fly for another four years, Dred Scott was the point of no return. It revealed and shaped the political condition of a society as dangerously divided as it had ever been — polarized, to employ our modern term. It confirmed for antislavery Northerners that the pro-slavery South would stop at nothing, constitutional or otherwise, to preserve and spread slavery. In the wake of the decision, those seeking a middle ground saw few paths to compromise. 

The Dred Scott decision brought to a head tensions that had been growing throughout the 1850s. For opponents of slavery in the North, the decade was simultaneously one of heightened activism and deepening despair. Fugitive slave rescues, some violent and successful, followed the passage of the Fugitive Slave Act in 1850. In May 1854, just as the Kansas-Nebraska Act exploded in American politics, a man named Anthony Burns, who had escaped slavery in Virginia, was arrested and detained in Boston. Two days later, a multiracial crowd, angry and armed, stormed the courthouse but failed to free the young Burns, even as a guard was killed. Burns was tried under the Fugitive Slave Act and ordered to be remanded to his owner and returned to Virginia. Amid tremendous controversy and rumors of violence, President Franklin Pierce sent some 1,500 federal troops to Boston to keep order. On June 2, the city streets were decked in black banners and filled with crowds of abolitionists as Burns was marched in shackles to the wharf.

The Burns case provided another sensational story in a society already reading Harriet Beecher Stowe’s best-selling novel, “Uncle Tom’s Cabin,” which was published in 1852. Above all, the book galvanized Northerners, as well as Southern critics, like no other work of literature ever had, around the issue of the fugitive slave and the sensibility that slavery might die in America only by violence. Fear for the future of the country percolates up from the sea of sentimentalism in “Uncle Tom’s Cabin.” The book is replete with reversals of responsibility for slavery, with evil Yankees like Simon Legree and conflicted slave owners like Arthur Shelby and Augustine St. Clare, but in the end, as the literary historian Andrew Delbanco writes, the novel’s central theme is that “conscience is no match for the coercive force of the market.” The novel swept up the American imagination and played its part in widening the sectional divide.

African American leaders followed these currents intently. A rhetoric of righteous revolutionary violence flowed from some Black writers in the 1850s. The political crisis over slavery forced a choice, Frederick Douglass wrote. Either America’s creeds were a “warrant for the abolition of slavery in every State of the Union,” or the only alternative might become revolution. He began to advocate and justify violence in self-defense against slave catchers. The formerly enslaved minister Samuel Ringgold Ward wrote in his 1855 autobiography that the great question before the country was “not whether the black man’s slavery shall be perpetuated, but whether the freedom of any Americans can be permanent.” Most whites might deny or avoid the reality, but Black and white freedom were mingled in a single political fate.

The country’s growing divides were on clear display in the 1856 presidential contest, in which slavery was the central issue. The new Republican Party — an entirely sectional, Northern coalition — ran on a clear antislavery platform that opposed any expansion of slavery into the West, calling for the immediate admission of Kansas as a free state and labeling slavery one of the “relics of barbarism.” For their candidate, they chose the explorer John C. Frémont of California. The Democrats ran James Buchanan of Pennsylvania, a man of pro-slavery sentiments. A third vaguely moderate, nativist party, calling itself the American Party, ran the former president Millard Fillmore, another Northerner sympathetic to the South.

The rhetoric of the 1856 campaign was dire. Republicans of all backgrounds made the idea of a “Slave Power conspiracy” their primary slogan. The notion was as old as the 1820s but now had new persuasive currency. The Slave Power, so the argument went, was a small cadre of slaveholders in the South who had managed to manipulate all the levers of government, and now law, to render slavery forever safe in the Union. Evidence of this was abundant: Every American president, except for the two Adamses, had been either a slaveholder or a pro-slavery sympathizer, and two-thirds of Supreme Court justices had been slaveholders. The Slave Power had even succeeded in making Northerners legally complicit in returning fugitive slaves to their owners. According to the Slave Power’s most incisive historian, Leonard L. Richards, “Usually, conspiracy arguments have limited appeal, inspiring a handful of true believers but not a wide audience. The Slave Power thesis, in contrast, attained the status of conventional wisdom in Republican circles and had wide appeal across the North” in the 1850s. The Slave Power idea had a strong basis in fact, but abolitionists and Republican politicians inflated and weaponized it as propaganda in a politics of fear.

For their part, pro-slavery Southerners had long rehearsed their own conspiracy against what they viewed as the religious zealots in the vanguard of abolitionism, whom they called the “Black Republicans.” Among many white Southerners, as well as their Northern allies, the label “Black Republican” became a ubiquitous description for anyone who opposed the extension of slavery, and certainly for anyone favoring its elimination. In 1856, if you screamed “Black Republicans” enough and accused them of trying to achieve the “amalgamation” of the races through forced marriages, millions of voters would be frightened. Racial purity has often shown its power to unite white Americans. “The Black Republican party favor the full citizenship of the negro,” declared The Indiana Daily State Sentinel in 1857. The Detroit Free Press hailed Dred Scott because it destroyed “the underpinnings of negro-worship” and threw “that detestable ism in the dirt.” Frederick Douglass humorously exploited the irony of the terminology. At a state convention for Republican delegates in 1856, Douglass declared: “You are called Black Republicans. What right have you to that name? Among all the candidates you have selected, or talked of, I have not seen or heard of a single black one.” 

Both sides of the American political divide now accused each other repeatedly of being the true “disunionists.” Political opponents were no longer election foes; they were enemies with values that threatened the republic. An opponent so evil and dangerous must be destroyed, not merely defeated. One clear lesson of the 1850s is the danger of conspiracy theories, how they grow in the cracks of a fractured society.

Are we a society driven now, as in the 1850s, by conspiratorial visions of each other? On the left, many see Trumpism, and the Republican Party that buttresses its extremes, as tantamount to the Slave Power, a devious force seeking to lock in minority rule through extreme gerrymandering, state and local culture wars and the partisan takeover of the Supreme Court. In the minds of some Fox News devotees, the specter of corporate neoliberalism, “coastal elites,” L.G.B.T.Q. rights and globalization is the new version of “Black Republicans,” taking over our material, moral and intellectual lives with “wokeism” and fraudulent voting.

Fear works in politics. Shout “voter fraud” loud and often enough, and it gains legitimacy. Label your opponent a “socialist,” and it will appear on placards at rallies. Declare mainstream print or television reporting “fake news,” and those on your side possess a ready-made slogan for any disagreeable information.

The conspiracies swirl around our digital information space in ways that 19th-century Americans could hardly imagine. The man who is accused of breaking into Speaker Nancy Pelosi’s home in San Francisco and assaulting her husband, Paul, was most likely inspired by a long-standing climate of Pelosi hatred, exemplified by Stewart Rhodes’s line from Jan. 10, 2021: “We should have brought rifles [to the Capitol]. We could have fixed it right then and there. I’d hang [expletive] Pelosi from the lamppost.”

The election of 1856 did not result in widespread violence, but it was very sectional, and conspiracy politics ran rampant. Buchanan won, carrying every slave state except Maryland, as well as New Jersey, Pennsylvania, Indiana, Illinois and California, while Frémont won the rest of the free states. Some pundits at the time, and historians in retrospect, called the election a “victory within defeat” for the Republicans. The old Whig Party was now effectively dead, its Northern remnants folded into the Republican Party, and the nativists more and more pitched their tents in the Republican camp as opponents of the expansion of slavery, the supposed denigration of free labor and the slaveholding oligarchy’s threat to individualism. The Republican Party was an unsteady coalition of factions and strange bedfellows, but it represented a potent new political force. The splintering of the American party system, a phenomenon in process ever since the war with Mexico, endangered the cohesion of the Union itself. What happened next would launch the country onto an irreversible course to war.

By the time Buchanan was ready to be sworn in, in March of 1857, the Supreme Court had finished its deliberations in the Dred Scott case. A winding legal road had brought the case to this point. The Scotts were legally married at Fort Snelling in what is now Minnesota, which was free territory in 1836, when Dred was around 40 and Harriet was 17. The impetus for their lawsuit might have come as much from Harriet as from Dred. They had two daughters, Eliza and Lizzie, and Harriet was determined to protect them from enslavement. In 1842, their owner, Dr. John Emerson, returned to Missouri. He died a year later, and Dred and his family were transferred to the ownership of his widow, Eliza Irene Sanford. Her brother, John Sanford, would eventually claim to be the true owner of the Scott family and declare his intention to possess his property.

But Scott had supporters in Missouri who saw the potential in his case for a freedom suit. Such suits were not uncommon during the 19th century, especially in the territories. They demonstrate the ambiguous and conflicted nature of enslaved people’s legal status in these volatile years of westward expansion. Scott’s case was first brought in the Circuit Court of St. Louis in 1846, where he prevailed, only to have the decision overturned by the Missouri Supreme Court by a 2-to-1 vote. The Missouri state justice William Scott, a pro-slavery Democrat, feared that to free the Scott family would risk disunion and, echoing John C. Calhoun, cause “the overthrow and destruction of our government.” From there the case made its way through a series of appeals. A Federal District Court concurred with the state’s decision. Scott’s supporters enlisted, pro bono, the famous free-soil politician and lawyer Montgomery Blair to take his case, and under Blair’s legal leadership, Scott appealed to the U.S. Supreme Court, where it reached the docket in late 1854.

The court faced three major questions. One, jurisdiction: As a Black man, was Dred Scott a citizen, with the right to sue in a federal court? Two, the validity of the “free soil” concept: Were Scott or his wife and their children entitled to freedom based on their residence for several years in a free state and a free territory? And three, the Missouri Compromise line: Would the court rule once and for all on whether Congress, and therefore the federal government, had the power to restrict the presence of slavery anywhere in the jurisdiction of the United States?

In one way or another, all of these issues were at the center of the 1856 election, and its bitter partisanship hung over the justices and their deliberations like a poisoned cloak. As is the case today with our conservative-majority court, the Taney court was decidedly partisan. Of the nine justices, seven were appointed to the bench by Southern pro-slavery presidents. Five of those seven were from slave states and slaveholding families. Of the four Northerners on the court, Robert C. Grier of Pennsylvania, an old Jacksonian Democrat, was appointed by President James K. Polk, perhaps the most pro-slavery chief executive of the entire antebellum era, and Samuel Nelson of New York was appointed by President John Tyler, a Virginian. They each joined the Southern majority in the Dred Scott decision. The two other Northerners, John McLean of Ohio and Benjamin R. Curtis of Massachusetts, would be the dissenters in the case.

Ahead of the inauguration, Justice John Catron and Justice Grier corresponded directly with the president-elect about the case. Buchanan wrote back urging them to push for a decisive declaration on Congress’s power to control slavery in the territories, knowing full well that the justices were aware he sought a pro-slavery outcome. Grier consulted about Buchanan’s letter with Justice James Wayne of Georgia and the chief justice himself, in what one historian has called a highly irregular “game of judicial politics” and a “breathtaking example of judicial activism.” Buchanan was given advance notice of the decision so that he could, if he so chose, refer to it in his Inaugural Address in the first week of March. In no uncertain terms, by the time Taney sat down to write the majority opinion, the fix was in for a broad decision that would try to settle, in thoroughly pro-slavery terms, the constitutional question forever.

It was the finality of the decision that made it so pivotal in leading the country to open conflict. To radical abolitionists, and certainly to many Republicans, the most offensive part of the decision was that it closed off the possibility of liberty or citizenship for free Black people. To the political antislavery coalition, growing in the North, the case’s inflammatory result was that it explicitly opened all of the Western territories, and potentially the Northern states as well, to the legality of slave ownership. Dred Scott v. Sandford declared an eternal pro-slavery future in America.

Resistance began with the two dissents. In his dissent, Justice Curtis reminded the chief justice and posterity that when the Constitution was adopted in 1787, free Black men had been able to vote for delegates to the ratification conventions in five states. He also pointed out that there was no racial qualification for citizenship anywhere in the Constitution, thus declaring Taney’s originalism bad history and false law. Curtis further contended, with a long history to back it up, that slavery could exist only where “positive law” expressly sanctioned it. Otherwise, how could so many Northern states have abolished it? And as to the claim that the Constitution had been written “exclusively by and for the white race,” Curtis labeled this a mere “assumption,” contradicted by the Preamble, which calls for a “more perfect union,” and the Declaration of Independence’s promise of natural rights. This opinion was printed and published almost immediately in pamphlet form, a highly unusual act. Curtis had taken Taney’s uninformed originalism and thrown it in his face.

Opposition to the decision quickly became a marker by which Republicans would define their careers. At the annual convention of the American Antislavery Society in May 1857, Frederick Douglass, now very much a political abolitionist devoted to fighting slavery through law and political action, said the Slave Power was “poisoning, corrupting and perverting the institutions of the country.” Douglass warned that the conspiracy threatened everyone. “The white man’s liberty has been marked out for the same grave with the black man’s,” he said. The “ballot box is desecrated, God’s law set at naught.” He believed that the only way to stop the Slave Power was direct confrontation, the “overthrow” of slavery, “sooner or later, by fair means or foul means … in peace or in blood.” As the historian Elizabeth Varon has written, the Dred Scott case and the extended reactions to it gave the Slave Power concept a new stark reality and a “terrifying boundlessness.” 

From 1857 on, the Dred Scott case shaped the way many Americans voted. It made moderation nearly impossible, and many Republicans took more radical stances. The next year, Abraham Lincoln accepted the Republican nomination for the U.S. Senate in a speech at the Statehouse in Springfield, Ill. Standing on the high dais, Lincoln gave his poetic oration about a “house divided” that “cannot stand.” The one-term congressman and successful lawyer had always hated slavery, but he came of age a devotee of Henry Clay and the Whigs’ moderate approach. He was a gradualist about abolishing the institution and believed that the removal of some part of the African American population from the country remained an effective solution. But as we have seen, the election of 1856 was the end of the road for the Whigs. By 1858, Lincoln had begun to adopt a more aggressive tone.

The slavery controversy, Lincoln announced, “will not cease, until a crisis shall have been reached, and passed.” Just what Lincoln intended to predict in this speech has never been perfectly clear, but his fears and his analysis of the current divide were distinct and resounding. He believed that the national “government cannot endure, permanently half slave and half free.” Lincoln kept a moderate’s hopeful pose: “I do not expect the Union to be dissolved — I do not expect the house to fall — but I do expect it will cease to be divided. It will become all one thing, or all the other.” For antislavery Northern Republicans, this was the existential fear unleashed by Dred Scott: that slavery would no longer be confined to the South, where it might gradually die out. Because of Dred Scott, they believed that slavery now stalked their own neighborhoods.

As a June breeze wafted off the prairie and through the windows of the Old State Capitol in Springfield, Lincoln gave a blurry prediction. The “opponents” of slavery may “arrest the further spread of it and place it where the public mind shall rest in the belief that it is in course of ultimate extinction; or its advocates will push it forward, till it shall become alike lawful in all the States, old as well as new — North as well as South.” As Lincoln’s career moved ever closer to the national center in the next two years, those words — especially “ultimate extinction” — would be reread and repeated by both sides in the slavery crisis. Lincoln had laid down a marker: What the republic risked in the slavery crisis was everything.

For the remainder of the “House Divided” speech, Lincoln shifted to a vigorous attack on the Dred Scott decision. We often stop reading after the poetry of his opening, but the crisis posed to the Union emerges clearly in the rest of the address. Lincoln did not explicitly employ the term “Slave Power,” but he gave it many other names. He said that the Nebraska doctrine and the Dred Scott decision together had become a “piece of machinery” in the hands of Southerners and their Northern Democratic allies. Lincoln insisted that his audience see what he saw: clear “evidences of design, and concert of action, among its chief bosses” to give slavery an eternal future in America. These designers — the Slave Power — had one primary goal since 1854: to open “all the national territory to slavery.” The idea of popular “self-government” had been rendered “perverted” by this organized cartel, such that under law, “if any one man choose to enslave another, no third man shall be allowed to object.”

In stark terms, Lincoln had become the moderate as alarmist, a conspiracy theorist in his own right, alerting his tribe of the struggle ahead. Such blunt warnings had become mainstream rhetoric for Republicans by this point. Before ending the “House Divided” speech, Lincoln stated the deepest Republican and free-soil fear, especially in the wake of Dred Scott: that a new case would emanate from the border states, or even a free state, that would challenge whether any state could lawfully “exclude slavery from its limits.” Here Lincoln darkly predicted that “such a decision is all that slavery now lacks of being alike lawful in all the states.” Lincoln believed that such a decision was “probably coming” and that the only way to stop it was by organizing and voting, such that the “power of the present political dynasty shall be met and overthrown.”

Four months after Lincoln gave his “House Divided” speech, Senator William Seward of New York delivered a speech in Rochester in which he said the country had become a “theater,” staging a drama between “two radically different political systems.” The two systems were “incompatible,” Seward announced, and on a course of “collision” in an expanding single nation. Echoing, even extending, Lincoln’s metaphor, Seward concluded portentously: “It is an irrepressible conflict between opposing and enduring forces.”

Today many Americans would agree that our current politics pit two such forces against each other. As a result, our country faces crises of institutional legitimacy, of utterly polarized media sources, of transparent voter suppression, of irreconcilable public-policy debates over guns, abortion, climate change, public schools and attempts to control the conduct of elections. We have reason to wonder if the persistence of racism is a transhistorical ingredient of American politics. Justifiably, we fear vigilante, militia violence against the institutions and political leaders we depend on. We rightly worry about whether American democracy can withstand the current pressures placed upon it by the authoritarian tendencies that Trumpism has unleashed. 

A striking dissimilarity between the 19th century and today is that even during the secession crisis of 1860-61, those who lost elections acknowledged their defeat. They acted politically to organize against their opponents in the next election or they took the revolutionary act of domestic insurrection and withdrawal from the Union, but they did not dispute the election results.

What is common to the 1850s and our own time is fundamental and disruptive change and a powerful minority that seeks to turn back the clock to prevent it. We can see this in some of the similarities between the decisions in Dred Scott and Dobbs v. Jackson Women’s Health Organization. Each draws on history as a means of arresting certain developments in society. Taney, in Dred Scott, argued that Black people had always been perceived as inferior and had been mostly enslaved and therefore possessed no rights as citizens. But that had not stopped many thousands of Black people and their allies from demanding and fighting, legally or otherwise, for their freedom and their rights. Similarly, in Dobbs, Justice Samuel A. Alito Jr. argued that abortion was nowhere in the Constitution and had never been legal until Roe v. Wade. But women had sought abortions for generations, however clandestinely, as a medical practice that they considered their right.

Each decision says, in effect, that because certain freedoms were not enshrined in law historically, the evolution of society to embrace those freedoms is irrelevant. Would Justice Alito overturn Loving v. Virginia (1967) because marriage between two people of different races is nowhere in the Constitution, or because decades of state laws prohibited it? Should clean-air legislation be on the chopping block because it is nowhere in the 1787 or the 1868 Constitutions? What about Native American citizenship? Women’s suffrage? Federal regulation of the industrial economy? Disability rights? Same-sex marriage? And what of the precious right to vote, so long denied or suppressed by law or by violence in this country? Voting rights are not in the original Constitution, either.

In both cases, the stakes are the nature and extent of freedom in this republic. What will the next year bring? Is a second Dobbs v. Jackson decision on the horizon, as Republicans in the late 1850s feared a second Dred Scott? There is good reason to fear the Moore v. Harper case from North Carolina, which will test the “independent state legislature” theory, the contention that only state legislatures — not the state courts — have authority over federal election procedures and voting rights. Progressives understandably fear that the states’ rights doctrine has become a Trojan horse of the right wing, returning power to the states so they remain safe from the post-Civil War and post-New Deal regulatory powers of the federal government.

Are today’s myriad crises somehow equivalent to the great question of slavery in late-antebellum America? Can our current rabble of loud difference still be governed? The recent midterm elections provide measures of reassurance: Most election deniers lost, although some key candidates did prevail in Colorado, Florida, Ohio, Wisconsin and elsewhere. Overall, it appears that at least small majorities in many regions prefer facts over bizarre conspiracy, democracy over authoritarianism and American pluralism over racism and xenophobia. In other Western democracies, far-right extremists win seats in national assemblies, where coalitions can constrain their ideas. But in a two-party system, the capture of one party by extremists is enough to cause great political havoc and violence — a lesson we should have learned from the destruction of our Union in 1861.

Authoritarianism is an American historical tradition, newly energized and threatening our republican existence. In coming elections, we shall see whether our 21st-century democracy will live or die honestly, whether we, too, are heading for collapse or renewal through politics, law or civil conflict. How we answer such challenges will determine whether it is 1857 again in America. Even if it is, we need to remember that antislavery advocates did not merely lie down in front of the juggernaut of Dred Scott; they mobilized and fought back — over race, rights and their future.

David W. Blight is the Sterling Professor of History at Yale University as well as the director of its Gilder Lehrman Center for the Study of Slavery, Resistance, and Abolition. His book on Frederick Douglass won the 2019 Pulitzer Prize for history. Máximo Tuja, who goes by Max-o-matic, is an illustrator in Barcelona known for his collage work. He is the founder of The Weird Show art platform.

A version of this article appears in print on Dec. 25, 2022, Page 30 of the Sunday Magazine with the headline: The Irrepressible Conflict.

Tuesday, January 03, 2023

A Message to NYC Council

After consultation with our legal team, we offer you this information. On December 15, 2022, Martin Scheinman issued a 31-page document that has no force of law.  As the signature page at the end explains, it is just a “Recommendation.”  Scheinman has no authority to order the City and the MLC to force retirees into Medicare Advantage, which is far worse than the traditional Medicare benefits that retirees have long received.

As he admits, Scheinman’s limited authority comes from a 2018 Agreement between the City and the MLC.  Under Section 5 of that Agreement, he and two other members of the “Tripartite Health Insurance Policy Committee” are authorized to “make recommendations to be considered by the MLC and the City.”  The Agreement does not allow the Committee, let alone Scheinman alone, to order anyone to do anything.  Moreover, the Agreement requires the Committee to make “recommend[ations] for implementation as soon as practicable during the term of this Agreement but no later than June 30, 2020.”  Thus, not only are recommendations non-binding, they are now two-and-a-half years too late.

Some have attempted to make Scheinman’s document seem more consequential than it really is by calling it a “decision” or “order” or “award.”  However, it is none of those things.  It is just a non-binding (and untimely) recommendation, as the document itself makes clear.  Although the 2018 Agreement allows Scheinman to arbitrate certain disputes between the City and the MLC, there was no dispute between the City and the MLC here – both are aligned with respect to forcing Medicare Advantage on retirees.  Thus, Scheinman was not acting as an arbitrator and was not issuing a ruling, decision, or award on anything.

Scheinman’s document is a transparent and futile attempt to make it seem like the City is being ordered to take away traditional Medicare from Retirees.  The document does not—and cannot—require the City, or anyone else, to do anything.  If the Mayor wants to take away the healthcare rights of elderly and disabled retirees, he should not pretend that anyone is making him do it.  And the City Council should not assist him in this charade by amending Section 12-126.

The City Council should not participate in the illegal effort to force Medicare Advantage on Retirees, who are entitled to the traditional Medicare benefits they were promised and which they desperately need.  Let the Mayor be the one to strip retirees of these hard-earned benefits.  The retirees will challenge him in court, and they will win.  Again.  But if the City Council amends Section 12-126, the path to victory in court becomes much harder.  Give retirees the chance to fight and win in court with the current version of Section 12-126, which has existed for over half a century.  If they lose, the City Council can always amend the statute later. 

Overworked, Underpaid And Understaffed: EMS In Crisis As NYC Faces Tridemic

 


By Bob Hennelly

New York City’s 911 EMS daily call volume has reached 4,500 on multiple days this month, and FDNY EMS unions warn that, three years into the Covid pandemic, staffing remains so inadequate that it puts their members at greater risk while degrading the essential service they provide the public.

Jan 2 2023 - Inside the Supreme Court Case That Could Chill A U.S. Strike Wave - Work Bites

https://www.work-bites.com/view-all/inside-the-supreme-court-case-that-could-chill-the-us-strike-wave?ss_source=sscampaigns&ss_campaign_id=63b3bc592778441937a11076&ss_email_id=63b40ad896e9211651d0eccc&ss_campaign_name=The+First+Work-Bites.com+Wake-Up+Call+For+2023+Is+Here%21%21&ss_campaign_sent_date=2023-01-03T11%3A01%3A13Z


The Supreme Court will hear oral arguments in Glacier Northwest v. International Brotherhood of Teamsters Local 174 on Jan. 10.

By Steve Wishnia

The Supreme Court is about to consider whether employers can sue unions for perishable goods lost during a strike by claiming they’re intentional property damage.

On Jan. 10, the Court will hear oral arguments in Glacier Northwest v. International Brotherhood of Teamsters Local 174, in which a Seattle concrete company is seeking to overturn a Washington Supreme Court decision dismissing its suit against Local 174 for the costs of several truckloads of concrete it had to throw out after drivers walked out in 2017. The state court held that Glacier had to wait until the National Labor Relations Board [NLRB] ruled on whether the damage was “incidental” to strike conduct protected under the federal National Labor Relations Act.

“This could give a lot of indication on where the Court is going on labor laws,” says West Virginia University Law School professor Anne Lofaso, a former NLRB attorney. At worst, she fears, the Court’s far-right majority could narrow what is considered “protected conduct.”

THE BACKGROUND

The strike by about 85 drivers began at three Glacier Northwest facilities in the Seattle area on the morning of Aug. 11, 2017 — 11 days after Local 174’s contract had expired. Sixteen drivers on the morning shift returned to the yard with their trucks still loaded. Union agents, the company says, told the drivers to “leave the fuckers running” and walk out without emptying them. The company had to rush to have other workers empty the trucks before the concrete dried inside them, build forms to dump it into, and then pay to have it removed.

This, Glacier Northwest repeatedly insists in its brief, was “intentional destruction of property,” which was “deliberately planned and timed” to damage its business.

Local 174 responds that the drivers had taken “reasonable precautions” to avoid damage to the trucks, by leaving them running so the concrete wouldn’t dry inside them, and that the union had moved the strike up one day so it wouldn’t occur on the day of a “mat pour,” the pouring of a large concrete slab for the foundation of a commercial building. Therefore, its brief argues, the loss of concrete was “incidental damage” that is common in strikes. A few weeks later, after the company sent disciplinary letters to several drivers, the union filed an unfair-labor-practice charge with the NLRB.

Glacier Northwest sued Local 174 for damages, and after partially contradictory lower-court rulings, the Washington Supreme Court in December 2021 unanimously dismissed the suit. It ruled that because the drivers leaving the trucks was “arguably protected” conduct under federal labor law, it was a matter for the NLRB, not state courts.

ARGUABLY PROTECTED

The case involves complex procedural issues. The Washington court relied on the Supreme Court’s 1959 decision in San Diego Building Trades Council v. Garmon, which set the precedent that if strikers’ conduct is “arguably” protected, the NLRB’s jurisdiction pre-empts that of state courts.

In the Glacier Northwest strike, “anyone who has any sense of labor law would say it’s ‘arguably’ protected,” says Lofaso. “Whether it is protected is a more complicated question.”

The general principles are that some economic losses are inevitable in a strike, but intentional property damage isn’t protected. Do the timing of the strike and the drivers’ failure to empty the trucks qualify as “intentional property destruction”? Glacier Northwest’s brief, filed by former federal solicitor general Noel Francisco of the management-side law firm Jones Day, insists it does, repeating the phrase more than 60 times.

The unions — the Teamsters, along with the AFL-CIO, the Carpenters and SEIU, and UNITE HERE and the Sheet Metal, Air, Rail, and Transportation Workers in amicus briefs — disagree. The cases Jones Day is using to support its “intentional property destruction” claim, they say, involved clearly egregious conduct: sabotage of a crane, iron-foundry workers walking out just as a cupola full of molten metal was about to be poured, and drivers parking their trucks off the premises so the concrete inside dried. Meanwhile, the unions note, the NLRB has ruled several times over the past 65 years that strikers were not liable for damages when cheese, chicken, and milk spoiled and newspapers went undelivered.

The Carpenters estimate that the value of the concrete Glacier Northwest lost in the strike was less than $12,000, a small sum for one of the largest building-supply companies on the West Coast.

“The union activity for which Glacier seeks recovery in state court was a peaceful strike in support of collective bargaining demands,” the AFL-CIO states. “Such activity is at the heart of the concerted activity protected by the National Labor Relations Act.”

AN OPENING FOR THE BOSSES

Glacier Northwest, however, is also raising procedural issues based on the Court’s 1978 decision in Sears, Roebuck v. San Diego Carpenters. That held that a party can sue in state court if it lacks a “reasonable opportunity” to raise its claims before the NLRB.

Here, says Lofaso, the NLRB’s slow pace gives the company an opening. Local 174 filed its unfair-labor-practice complaint in September 2017, but the NLRB did not issue a formal complaint, analogous to an indictment, until January 2022, after the Washington Supreme Court decision. A hearing on the complaint is scheduled for Jan. 24.

In her experience litigating, Lofaso adds, the Supreme Court hates administrative delay.

She sees three possible ways the Court might rule. One would be to uphold the Washington court’s decision and let the NLRB decide whether the drivers were legally protected. Given the Court’s current makeup, she says, that is unlikely.

Another would hold that the NLRB took too long, but give some guidelines about what is an unreasonable delay. This scenario would mostly preserve the Garmon precedent of pre-emption, but would clarify what Justice Harry Blackmun in Sears called a “jurisdictional no man’s land.”

The outcome Lofaso considers most likely is a ruling that the drivers’ conduct was “almost certainly not protected” and that, since the NLRB took too long, Glacier Northwest can sue for damages in state court.

SCARY SUPREME COURT

She says it’s worrisome that the Court agreed to hear the case, given the long-established precedent. There are legitimate procedural issues, but on the question of whether the strikers intentionally damaged property, “the Supreme Court can’t give a reasonable answer. They haven’t done a factual inquiry.”

Justices Samuel Alito and Clarence Thomas might want to use this case to narrow what is considered “protected activity,” Lofaso speculates. That would follow the pattern of the 2018 Janus v. AFSCME decision, which was preceded by another case where Alito disputed the 1977 precedent that public-sector unions may charge nonmembers fees for the costs of representing them.

Glacier Northwest argued in its petition that if the Washington court’s decision is allowed to stand, “it will not only put private property at the mercy of deliberate sabotage, but will also cast the NLRA into serious constitutional doubt by inviting the destruction of employers’ property rights while leaving them with no means of just compensation.”

This worries the unions. The Supreme Court has never heard a case where “a party argued it could proceed in state court when there was a pending NLRB charge,” the Teamsters note.

Would this case open the door to expanding employers’ ability to sue unions for alleged property damage, undermining workers’ right to strike? Risks to property from strikes “are omnipresent in every industry,” UNITE HERE and SMART argue. For example, UNITE HERE represents about 100,000 workers in the hotel-casino business, where “many perishables are in stock to serve the food and beverage needs of customers. When could hotel-casino employees strike without the risk that some product will be spoiled?”

“What’s at stake here is ‘what’s protected activity?’” Lofaso says. “That’s what’s scary about this case.”

Meanwhile, labor disputes continue in the Seattle-area concrete industry. In December 2021, Local 174 went on strike against six companies, including Glacier Northwest parent company CalPortland, demanding better health-care and retirement benefits and pay comparable to what other building-trades workers get. The companies tried to win a court order suppressing picketing, and the union won one prohibiting strikebreaker drivers from ramming their trucks through picket lines. The drivers returned to work in April 2022 and ratified a contract with four of the companies in September.



Sunday, January 04, 2015

School Reform Fails the Test

How can our schools get better when we’ve made our teachers the problem and not the solution?

By Mike Rose

December 10, 2014

During the first wave of what would become the 30-year school reform movement that shapes education policy to this day, I visited good public school classrooms across the United States, wanting to compare the rhetoric of reform, which tended to be abstract and focused on crisis, with the daily efforts of teachers and students who were making public education work.

I identified teachers, principals, and superintendents who knew about local schools; college professors who taught teachers; parents and community activists who were involved in education. What’s going on in your area that seems promising? I asked. What are teachers talking about? Who do parents hold in esteem? In all, I interviewed and often observed in action more than 60 teachers and 25 administrators in 30-some schools. I also met many students and parents from the communities I visited. What soon became evident—and is still true today—was an intellectual and social richness that was rarely discussed in the public sphere or in the media. I tried to capture this travelogue of educational achievement in a book published in 1995 called Possible Lives: The Promise of Education in America. Twenty years later, I want to consider school reform in light of the lessons learned during that journey, and relearned in later conversations with some of these same teachers.

For all of the features that schools share, life inside a classroom is profoundly affected by the immediate life outside it, by the particular communities in which a school is embedded. Visiting a one-room schoolhouse in rural Montana or a crowded high school in Chicago, you will find much in the routines and the curriculum that holds steady—the grammar of schooling, as historians David Tyack and Larry Cuban called it in Tinkering Toward Utopia: A Century of Public School Reform (1995). Yet within that grammar lie differences: in topics of discussion, in the illustrations that teachers use, in how the language sounds, and in the worries of the day pressing in from the neighborhood. These differences, the differences of place, make each school distinct from every other.

During my travels, I watched as third-graders in Calexico, a California-Mexico border town, gave reports on current events in Spanish and in English. They followed the journalist’s central questions—who, what, why, when, where, and how—exploring the significance of the depleted ozone layer, of smog in nearby industrial Mexicali, of changes in the local school board.

In Chicago, 12th-graders discussed Faulkner’s As I Lay Dying, trying to make sense of the characters’ different perspectives, offering provisional explanations of important occurrences in the novel. They were gaining a sense of the power of speculation, of moving an inquiry forward by wading into uncertain waters.

On Baltimore’s West Side, first-graders combined literature and science by reading a fanciful story about hermit crabs and then conducting an experiment—resulting from a student’s question—to understand the environment in which the crabs thrive.

In small towns in the Mississippi Delta, middle school children played games with physical representations of algebraic operations, part of civil rights activist Bob Moses’s Algebra Project, a curriculum as well as a social movement that still helps prepare children, regardless of academic background, for algebra, which Moses believes is an important pathway to opportunity.

And in a one-room schoolhouse in Polaris, Montana, students kept a naturalist’s journal on the willows in the creek behind the school. At one point the teacher bent over an older student who was working on sketches and measurements. The teacher pointed to one detailed drawing and asked his student why he thought the willows grew in such dense clusters, rather than long and snaky up a tree. The boy had fished these creeks for years, the teacher later explained, and “I just wanted him to take a different look at what he already knows.”

The teachers in these varied classrooms shared a belief in their students’ ability to become engaged by ideas and to develop as thoughtful, intellectually adventurous people. They saw the subjects they taught—whether science, literature, or math—as bountiful resources that would foster their students’ development.

To update Possible Lives, I spoke to each of these teachers again about 10 years after my visit and found that all of them shared a deep concern about the potential effect of the federal No Child Left Behind Act of 2001 on the classrooms they had worked so hard to create. No Child Left Behind and the Obama administration’s 2009 Race to the Top initiative are built on the assumption that our public schools are in crisis, and that the best way to improve them is by using standardized tests (up to now only in reading and math) to rate student achievement and teacher effectiveness. Learning is defined as a rise in a standardized test score and teaching as the set of activities that lead to that score, with the curriculum tightly linked to the tests. This system demonstrates a technocratic neatness, but it doesn’t measure what goes on in the classrooms I visited. A teacher can prep students for a standardized test, get a bump in scores, and yet not be providing a very good education.

Organizing schools and creating curricula based on an assumption of wholesale failure make going to school a regimented and punitive experience. If we determine success primarily by a test score, we miss those considerable intellectual achievements that aren’t easily quantifiable. If we think about education largely in relation to economic competitiveness, then we ignore the social, moral, and aesthetic dimensions of teaching and learning. You will be hard pressed to find in federal education policy discussions of achievement that include curiosity, reflection, creativity, aesthetics, pleasure, or a willingness to take a chance, to blunder. Our understanding of teaching and learning, and of the intellectual and social development of children, becomes terribly narrow in the process.

School reform is hardly a new phenomenon, and the harshest criticism of schools tends to coincide with periods of social change or economic transformation. The early decades of the 20th century—a time of rapid industrialization and mass immigration from central and southern Europe—saw a blistering attack on the schools, reminiscent of our own time. The Soviet launch of Sputnik in 1957 triggered another assault, with particular concern over math and science education. And during the 1980s, as postwar American global economic preeminence was being challenged, we saw a flurry of reports on the sorry state of education, the most notable of which, A Nation at Risk (1983), warned of “a rising tide of mediocrity that threatens our very future as a Nation and a people.”
Public education, a vast, ambitious, loosely coupled system of schools, is one of our country’s defining institutions. It is also flawed, in some respects deeply so. Unequal funding, fractious school politics, bureaucratic inertia, uneven curricula, uninspired pedagogy, and the social ills that seep into the classroom all limit the potential of our schools. The critics are right to be worried. The problem is that the criticism, fueled as it is by broader cultural anxieties, is often sweeping and indiscriminate. Critics blame the schools for problems that have many causes. And some remedies themselves create difficulties. Policymakers and educators face a challenge: how to target the problems without diminishing the achievements in our schools or undermining their purpose. The current school reform movement fails this challenge.
Back when I was visiting schools for Possible Lives, critics were presenting charts of declining scores on SATs but overlooking the demographic and economic factors that contributed to these numbers—for example, more low-income and immigrant students were taking the tests (arguably an egalitarian development). Comparing our test scores with those of other countries, the critics also failed to consider the social, economic, and cultural differences. (Students in our nation’s affluent districts fare much better in international comparisons.) The proposed remedies included not only new curricula and tests to measure the mastery of these courses of study, but also more time in school, more rigorous teacher education and credentialing, and market-based options like school choice and vouchers. And the primary goal of reform was always presented as an economic one: to prepare our young people for the world of work and to protect our nation’s position in the global economy.
Since then, the reform effort has spread and grown more intense, and it continues to focus on public school failure. No Child Left Behind and Race to the Top have dramatically increased the influence of the federal government on public schools. Both programs require states to establish standardized testing programs, and federal funding often depends on the test results. If schools don’t meet certain performance criteria, they are subject to sanction and even closure. Race to the Top added a competitive grant program to the federal effort, requiring states to lift limits on charter schools and tie teacher evaluations to students’ test scores in order to be eligible for a significant one-time award of federal funds. Some philanthropies have also supported the reform agenda, and private advocacy groups have championed causes ranging from charter schools to alternative approaches to teacher credentialing to, most recently, overturning teacher tenure and union protections.
Not all those who identify themselves as reformers would subscribe to the redefinition of teaching and learning that concerns me, and some of those reformers are raising among their peers the same issues I am. But a dominant account does emerge from many influential reform reports and organizations, and it is supported by the U.S. Department of Education.

A core assumption underlying No Child Left Behind is that substandard academic achievement is the result of educators’ low expectations and lack of effort. The standardized tests mandated by the act, its framers contended, hold administrators and teachers accountable—there can be no excuses for a student’s poor performance. It’s true that some teachers don’t expect much of the young people in their charge, particularly students from low-income backgrounds and underrepresented racial and ethnic groups. But because we know that so many factors contribute to student achievement, the strongest of which is parental income, the low expectations of some teachers cannot possibly account for all the disparities in academic performance. The act’s assumptions also reveal a pretty simplified notion of what motivates a teacher: raise your expectations or you’ll be punished—what a friend of mine calls the caveman theory of motivation. An even more simplistic theory of cognitive and behavioral change suggests that threats will lead to a change in beliefs about students, whether these beliefs come from prejudice or from pity. Still, No Child Left Behind’s focus on vulnerable students was important, and the law did jolt some low-performing schools into improving their students’ mastery of the basic math and reading skills measured by the tests.
But the use of such tests and the high stakes attached to them also led to other results that any student of organizational behavior could have predicted. A number of education officials manipulated the system: they lowered the cutoff scores for proficiency, withheld from testing students likely to perform poorly, or occasionally fudged the results. A dramatic example is the recent case of cheating in Atlanta, where school personnel all the way up to the superintendent were indicted.
Studies of what went on in classrooms are equally troubling and predictable. The high-stakes tests led many administrators and teachers to increase math and reading test preparation and reduce time spent on science, history, and geography. The arts were, in some cases, drastically reduced or eliminated. Aspects of math and reading that didn’t directly relate to the tests were also eliminated, even though they could have led to broader understanding and appreciation of these subjects.
Not long ago, a teacher I’ll call Priscilla contacted me with a typical story. She has been teaching for 30 years in an elementary school in a low-income community north of Los Angeles. The school’s test scores were not adequate last year, so the principal, under immense pressure from the school district, mandated for all teachers a regimented curriculum focused on basic math and literacy skills. The principal directed the teachers not to change or augment this curriculum. So now Priscilla cannot draw on her cabinets full of materials collected over the years to enliven or individualize instruction. The time spent on the new curriculum has meant trims in science and social studies. Art and music have been cut entirely. “There is no joy here,” she told me, “only admonishment.”
It makes sense to concentrate on the basics of math and reading, for they are central to success in school, and an unacceptable number of students don’t master them. And a score on a standardized test seems like a straightforward measure of mastery. But in addition to the kinds of manipulation I discussed earlier, there is a host of procedural and technical problems in developing, scoring, and interpreting such tests. Test outcomes depend on the statistical models used, and scores can fluctuate and be marred by error—thus there is a debate among testing experts about what, finally, can be deduced from the scores about a student’s or a school’s achievement. Similar debates surround the currently popular use of “value-added” methods to determine a teacher’s effectiveness.
A further issue is that a test that includes, say, the writing of an essay, a music recital, or the performance of an experiment embodies different notions of learning and achievement than do the typical tasks on standardized tests: multiple-choice items, matching, fill-ins. I have given both kinds of tests. Both have value, but they represent knowledge in different ways and require different kinds of teaching.
The nature of a school’s response to high-stakes pressure is especially pertinent for those less affluent students at the center of reform. When teachers in schools like Priscilla’s concentrate on standardized tests, students might improve their scores but receive an inadequate education. A troubling pattern in American schooling thereby continues: poor kids get a lower-tier education focused on skills and routine while students in more affluent districts get a robust and engaging school experience.
It’s important to consider how far removed standardized tests are from the cognitive give and take of the classroom. That’s one reason for the debate about whether a test score—which is, finally, a statistical abstraction—accurately measures learning. Some reform leaders, including Arne Duncan, the U.S. secretary of education, are now trying to dial down the emphasis on testing. But because tests are easy to use and have an aura of objectivity, they are likely to remain central in the reform agenda.

Priscilla’s story is emblematic not only of the mechanical and restrictive pedagogy that is frequently forced on teachers in a test-driven environment, but also of the attitude toward teachers. They live in a bipolar world, praised as central to students’ achievement and yet routinely condemned as the cause of low performance.
When the standardized test score is the measure of a teacher’s effectiveness, other indicators of competence are discounted. One such indicator is seniority—which reformers believe, not without reason, overly constrains an administrator’s hiring decisions. Others are post-baccalaureate degrees and certifications in education, a field many reformers hold in contempt. Several studies do report a low correlation between experience (defined as years in the profession) and students’ test scores. Other studies find a similarly low correlation between students’ scores and teachers’ post-baccalaureate degrees and certifications. From these studies comes an absolute claim: neither experience nor schooling beyond the bachelor’s degree makes any difference.
What a remarkable assertion. Can you think of any other kind of work—from hair styling to neurosurgery—where we don’t value experience and training? If reformers had a better understanding of teaching, they might wonder whether something was amiss with the studies, which tend to deal in simple averages and define experience or training in crude ways. Experience, for example, is typically defined as years on the job, yet years in service, considered alone, don’t mean that much. A dictionary definition of experience—“activity that includes training, observation of practice, and personal participation and knowledge gained from this”—indicates the connection to competence. The teachers in Possible Lives had attended workshops and conferences, participated in professional networks, or taken classes. They experimented with their curricula and searched out ideas and materials to incorporate into their work. What people do with their time on the job becomes the foundation of expertise.
More generally, the qualities of good work—study and experimentation, the accumulation of knowledge, and refinement of skill—are thinly represented in descriptions of teacher quality, overshadowed by the simplified language of testing. In a similar vein, the long history of Western thought on education—from Plato to Septima Clark—is rarely if ever mentioned in the reform literature. History, like experience and inquiry, is replaced with a metric.
These attitudes toward experience are rooted in the technocratic-managerial ideology that drives many kinds of policy, from health care to urban planning to agriculture: the devaluing of local, craft, and experiential knowledge and the elevating of systems thinking, of finding the large economic, social, or organizational levers to pull in order to initiate change. A professor of management tells a University of California class of aspiring principals that the more they know about the particulars of instruction, the less effective they’ll be, for that nitty-gritty knowledge will blur their perception of the problem and the application of universal principles of management—as fitting for a hospital or a manufacturing plant as a school.
This dismissal of classroom knowledge fits with the trendy discourse of innovation and creative disruption, a discourse that runs throughout education reform, asserting that it will take entrepreneurial outsiders to change the system. I understand the impulse here, because getting something fresh through large school bureaucracies can be maddening. But creative disruption is predicated on the belief that anything new must be better, and it relies on a reductive model of organizational and technological change. One of the celebrated technologies in the disrupters’ armory is the computer, which clearly allows wonderful things to happen in education. But online charter schools have a troubled record, and higher education’s much-ballyhooed massive open online courses, or MOOCs, are proving to be much more limited in their usefulness or success than predicted. The computer’s potential is realized only when people who are wise about teaching and learning program it, and when it is integrated into a strong curriculum taught by someone who is savvy about its use.

If you pare down your concept of teaching far enough, you are left with sequences of behaviors and routines—with techniques. Technique becomes central to the reformers’ redefinition of teaching, and the focus on technique is at the heart of many of the alternative teacher credentialing programs that have emerged over the past decade. Effective techniques are an important part of the complex activity that is teaching, and good mentorship includes analyzing a teacher’s work and providing corrective feedback. Teachers of teachers have been doing this for a long time. What is new is the nearly exclusive focus on techniques, the increased role of digital technology to study them, and the attempt to define “effective” by seeking positive correlations between specific techniques and, you guessed it, students’ standardized test scores. What is also new is the magnitude of the effort, punched up considerably by a $45 million project funded by the Bill & Melinda Gates Foundation to measure effective teaching.
Because teaching involves a good deal of craft, I’m all for implementing useful techniques, from guidance on giving directions to ways to pose a math problem. But given the technocratic orientation of contemporary school reform, I worry that other aspects of teaching less easily observed and circumscribed—bearing, beliefs about learning, a sensibility about students’ lives—will get short shrift.
Techniques don’t work in isolation. The sequencing of questions, for example, is a crucial skill, but it depends on the teacher’s knowledge of the material being taught, children’s typical responses to this material, the kinds of misconceptions and errors they make, and the alternative explanations and illustrations that might help them. A teacher can’t ask meaningful questions for long without this kind of knowledge. In equal measure, the effectiveness of techniques, particularly for classroom management, is influenced by students’ sense of a teacher’s concern for them and understanding of them.
When I was visiting schools in Chicago, I spent time in Michelle Smith’s high school math classroom. One morning, she was calling her class to order and saw that a boy who plays the class clown was sitting way in the back. She called him by name, then said, “My young gentleman, I’d like you to sit up here where I can see you.” The student groaned, uncurled himself from his desk, and walked to the front, sauntering for the benefit of his peers. “C’mon darlin’,” Smith added, head tilted, hand on hip, “humor me.” She watched; he sat down. “Thank you, sir. I feel better.” With a mix of humor and direction, she had deftly changed the seating to ensure order in the room—an effective technique for classroom management.
Imagine, however, the unpleasant ways this situation could have played out: the student refusing to move, insulting or threatening her, or stirring up his comrades sitting nearby. But Smith’s action occurred in the context of a relationship with the class and with that boy, a legacy of her care and of the learning that goes on in her classroom. (“Miss Smith,” the boy later told me, “she’s teaching us how to do things we couldn’t do before.”) Smith knows local culture, understands the rituals of masculinity and the huge importance of allowing that student a little space to save face. She has developed a classroom persona that blends sass and seriousness, and she uses it strategically. Technique works in context and within the flow of other events.
If you conceive of teaching as a repertoire of instructional and behavior management techniques, then you won’t appreciate the kind of social knowledge Michelle Smith possesses. This pinched notion of teaching, combined with a “no excuses” stance toward low achievement, yields a troubling response to economic inequality: the belief that the right kind of education can overcome poverty. We have a long tradition in the United States of seeing education as, in Horace Mann’s words, the “great equalizer” of social class differences. As our social safety net has been increasingly compromised, we have put the school at the center of our dwindling welfare state. Even though half a century’s research has demonstrated that parental income level is the primary determinant of educational achievement, the reformers hold fast to the claim that schools can overcome the assaults of poverty. Charter school leader Doug Lemov, whose Teach Like a Champion has become a user’s manual among reformers, offers a good illustration. In his introduction, Lemov reflects on the charter school teachers he has observed:
These outstanding teachers routinely do what a thousand hand-wringing social programs have found impossible: close the achievement gap between rich and poor, transform students at risk of failure into achievers and believers, and rewrite the equation of opportunity.
Schooling becomes the one solution to poverty, the intervention that will work where others have failed.
About 15 pages later, however, Lemov offers a reminder of the ugly staying power of inequality. A former student of his, “the bright and passionate son of a single mother with limited English,” made the remarkable journey to Williams College. At college, though, the student’s problems with writing dogged him and were reflected in a professor’s unfavorable response to a paper he wrote on Zora Neale Hurston. Lemov tells this story to stress the importance of teaching students standard written English. But having worked in university programs that serve students like this one—and having been such a student myself—I find that this story represents the intractability of inequality: even after the best teaching Lemov and his colleagues could provide, this young man still needs assistance at further points along the way. The student will also need people who understand what he must be feeling: the crushing disappointment, the possible anger, and the deep blow to his confidence. Schools like Lemov’s might be able to narrow an achievement gap, improving scores on district or state standardized tests, but not to erase it; erasing the gap requires sustained help of many kinds, including programs that Lemov dismisses as “hand-wringing.”
The teachers in Possible Lives worked with significant numbers of low-income children, and every one of those teachers tried in some way to address their hardship. They might have drawn in social service agencies, or participated in church-based or civic organizations or political campaigns aimed at helping the poor. Sometimes they tried to find resources for parents, or tutored and counseled their students individually, or spent their own money and donated food, clothing, and other goods. They taught diligently, sometimes brilliantly, fought back despair, didn’t let up. “The problems are big ones,” a young Calexico teacher told me, “but they’re not going to stop me from teaching.” You cannot be in teaching—or medicine or counseling or the ministry—without slamming up against failure. These teachers did not rush to find excuses for their failures, but they knew the trauma poverty brings and did their work with that awareness. To deny the effects of poverty blinds you to the reality of your students’ lives, lives you need to understand as fully as you can to intervene and enlist others inside and beyond the school. I deeply believe in the power of teaching, but to make teaching the magic bullet against inequality or to pit it against other social and economic interventions leads to insular and self-defeating education policy.

As is the case with public school teachers today, many of the teachers I wrote about grew up in families with modest incomes. Some came from the same region or background as their students. A small number went to major universities, but most graduated from smaller state universities or local colleges with teacher education programs. Some of the teachers I visited were new, and some had taught for decades. Some organized their classrooms with desks in rows, and others turned their rooms into hives of activity. Some were real performers, and some were serious and proper. For all the variation, however, the classrooms shared certain qualities. These qualities emerged before our era’s heavy reform agenda, yet most parents, and most reformers, would want them for their children.
The classrooms were safe. They provided physical safety, which in some neighborhoods is a real consideration. But there was also safety from insult and diminishment: “They don’t make fun of you if you mess up,” said a middle school student in Chicago. And there was safety to take intellectual risks. The teacher was “coaxing our thinking along,” as one of the students reading Faulkner put it.
Intimately related to safety is respect, a word I heard frequently during my travels. It meant many things: politeness, fair treatment, and beyond individual civility, a respect for the language and culture of the local population. Surveying images of Mexican-American history on the walls of a Los Angeles classroom, a student exclaimed, “This room is something positive. As you walk around, you say ‘Hey, we’re somebody!’ ” Respect also has a cognitive dimension. As a New York principal put it, “It’s not just about being polite—even the curriculum has to be challenging enough that it’s respectful.”
Talking about safety and respect leads to a consideration of authority. I witnessed a range of classroom management styles, and though some teachers involved students in determining the rules of conduct and gave them significant responsibility to provide the class its direction, others came with curriculum and codes of conduct fairly well in place. But two things were always evident. A teacher’s authority came not just with age or with the role, but from multiple sources—knowing the subject, appreciating students’ backgrounds, and providing a safe and respectful space. And even in traditionally run classrooms, authority was distributed. Students contributed to the flow of events, shaped the direction of discussion, became authorities on the work they were doing.
These classrooms, then, were places of expectation and responsibility. As a Los Angeles middle school teacher observed, “Children can tell right off those people who believe in them and those who patronize them.” Young people had to work hard, think things through, come to terms with each other—and there were times when such effort took them to their limits. To be sure, some students weren’t engaged, and everyone, students and teachers, had bad days. But overall the students I talked to, from primary-grade children to graduating seniors, had the sense that their teachers had their best interests at heart and their classrooms were good places to be. The huge, burning question is how to create more classrooms like these.

What if reform had begun with the assumption that at least some of the answers for improvement were in the public schools themselves, that significant unrealized capacity exists in the teaching force, that even poorly performing schools employ teachers who work to the point of exhaustion to benefit their students? Imagine, then, what could happen if the astronomical amount of money and human resources that went into the past decade’s vast machinery of high-stakes testing—from test development to the logistics of testing at each school site—had instead gone into a high-quality, widely distributed program of professional development. I don’t mean the quick-hit, half-day events that teachers endure, but serious, extended engagement of the kind offered by the National Science Foundation and the National Writing Project, by university summer programs in literature or science or history, by teams of expert teachers themselves.
In such programs, teachers read, write, and think together. They learn new material, hear from others who have successfully integrated it into their classrooms, and try it out themselves. Some participating teachers become local experts, providing further training for their schools and districts. Electronic media would facilitate participation, connecting people from remote areas and helping everyone to check in regularly when trying new things. These programs already exist but could be expanded significantly if policymakers had a different orientation to reform, one that honored teaching and the teaching profession. Distributed professional development would substitute a human development model of school reform for the current test-based technocratic one. And because such professional development would enhance what teachers teach and how they teach and assess it, there would be a more direct effect on the classroom.
Imagine as well that school reform acknowledged poverty as a formidable barrier to academic success. All low-income schools would be staffed with a nurse and a social worker and have direct links to local health and social service agencies. If poor kids simply had eye exams and glasses, we’d see a rise in early reading proficiency. Extra tutoring would be provided, some of which could be done by volunteers and interns from nearby colleges. Schools would be funded to stay open late, providing academic and recreational activities for their students. They could become focal institutions in low-income communities, involving parents and working with existing community groups and agencies focused on educational and economic improvement. Such schools already exist, and an Obama administration initiative called Promise Neighborhoods awards grants to local programs and agencies that provide health and social services. But the provision of services is conceived as an add-on rather than an organic part of school reform itself, and the services are awarded by competition to only a fraction of the neighborhoods and schools that need them.
My proposals do not address all that ails our schools, and what they cost might be better spent on other ideas that are in the air. But they do move us away from the current model of reform and closer to the immediate needs of teachers and students. The proposals assume that our schools have talent to be tapped, and that the physical and social burdens of the poor are a drag on achievement.
As with the current reform programs, these proposals would draw on government and philanthropic funding and on large, sometimes distant, organizations such as the National Science Foundation. But the interventions would be adapted to the needs of particular schools and communities by local teachers and social service providers. The writing of narratives or a study of water-borne organisms would play out differently in New York City versus the Mississippi Delta.
Surveying the many unsuccessful and hugely expensive attempts at school reform in our past, historians Tyack and Cuban observed the same mistakes being repeated over and over again: top-down remedies, grandiose claims about the latest technology, disdain for teachers. To improve our schools, we need to be informed by knowledge gained from many days in the neighborhoods surrounding them and from many, many days inside the schoolhouse itself, learning from children’s experience and the full sweep of a teacher’s work. This is what contemporary school reform has failed to do.