Globalization: A Lasting Change

Globalization has been a buzzword for much of the last thirty years. It’s been used to describe the present state of economics and the future state of everything. It’s been, if not said explicitly, then generally understood that the world will only become more globalized. In reality, globalization is as much a variable condition as it is multifaceted and poorly defined. Globalization is both a condition of economic interdependence and the effect that interdependence has on the human condition. Taken together, these aspects define globalization and provide the incentive to maintain international cooperation. Globalization, therefore, is not only here but will never go away.

Foreign Policy writer Pankaj Ghemawat (2009) argues that the locally sourced nature of everything from finance to how we interact with the internet undermines the notion of globalization; however, this view is a bit narrow. Consider that one could travel to nearly any country in the world and see the familiar sight of power poles and power lines. One could likely buy a cigarette in most of these places as well. Even if locally provided, are not the global availability of electricity and tobacco indicative of a globalized world? Noam Chomsky (2015) seems to think so when he argues that one of the great benefits of globalization is that he could phone a friend virtually anywhere in the world. This interconnectivity, he says, is the very definition of globalization. Whether people choose to connect with others is irrelevant. The important fact is that the integration to do so exists.

Even with the nearly ubiquitous availability of electricity, tobacco, and services like Starlink bringing internet to the world, globalists contend that more integration is required. As economics writer Martin Wolf (2004) puts it, “The failure of the world is not that there is too much globalization, but that there is too little” (p. 4). Harvard economics professor Jeffrey Frankel quantifies this sentiment by pointing out that, “globalization would have to increase another sixfold…before it would literally be true that Americans did business as easily across the globe as across the country” (Frankel, 2000, Judging by the Globalization 2000 Standard of Perfect International Integration section, para. 2). These sentiments should not be taken to mean that globalization doesn’t exist. As mentioned earlier, globalization is a macro-economic condition that changes over time. Mr. Frankel (2000) points out that the first half of the twentieth century saw both a boom in global growth and a return to protectionist policies. A similar give and take has existed with regional trade. The World Trade Organization (2021) cited three instances of regionalization originating in the 1950s, 1980s, and the present day (p. 26). It’s notable that globalization, as it’s been thought about, has existed between the second and third periods of regionalization, suggesting a cyclical relationship. Even if analysts like former Foreign Policy writer Moises Naim (2009) contend that today’s globalization is fundamentally different from prior eras, the progression toward globalization is undeniable. The relationship between regional and global policies may be cyclical, but it’s not terminal for either.

The narrative around global finance tells a similar tale. In his arguments suggesting the world is less globalized than people think, Mr. Ghemawat (2009) points out that:

[T]he total amount of the world’s capital formation that is generated from foreign direct investment (FDI) has been less than 10 percent for the last three years for which data are available (2003–05). In other words, more than 90 percent of the fixed investment around the world is still domestic (The 10% presumption section, para. 2).

However, Noam Chomsky (2015) argues that it’s specifically the growth of foreign capital that proved detrimental to developing economies. To sort this out, a more recent look at the data is required. According to McKinsey & Company (Lund et al., 2017), “From January 2007 to December 2016, banks divested at least $2 trillion of [foreign] assets…more than half of the total by European banks” (p. 4). This could be correctly read as a pullback from international markets, but McKinsey analysts suggest that banks were cleaning up unprofitable assets, which has resulted in healthier balance sheets (pp. 3, 4, 13). Over the same period, FDI and equity as a percent of total cross-border capital flows increased from 36% to 69% (p. 9). McKinsey doesn’t break down the percentage of FDI versus domestic funding; however, the analysts do mention that 27% of equities and 31% of bonds are owned by foreign investors (p. 6). Clearly, we see a significant international presence in financial markets and a substantial increase since Mr. Ghemawat’s article ran in Foreign Policy. These findings also lend support to Mr. Chomsky’s base assumption that foreign investment has grown dramatically with globalization. Regardless of the conclusions, the growth in global finance would be difficult to unwind completely and is therefore likely to persist at some level even if regional trade continues to grow.

The human cost of globalization is certainly open for debate, and these conclusions may sound like value judgements that have little to do with the binary assessment of whether globalization will continue; however, positive human outcomes form a basis for popular support. It is essential, as Harvard University professor of international political economy Dani Rodrik puts it, that governments and market economies work together to ensure a broadly beneficial outcome for their citizens (Pearlstein, 2011). Mr. Wolf (2004) echoes this sentiment, writing, “We need more global markets, not fewer, if we want to raise the living standards of the poor of the world” (p. 4, para. 2). Perhaps most conclusively, in research done by Gapminder, Hans Rosling (2007) demonstrates a close relationship between child mortality and the overall wealth of a nation. He notes that as nations invest in the health of their people, wealth increases in tandem. In other words, a healthy population is a more productive population.

It's important to point out that the human impact of globalization is not all upside. Mr. Naim (2009) cites the rising levels of international crime and terrorism as one drawback of increased interconnectivity, while Mr. Chomsky (2015) alleges that “for most of the [Mexican] population [NAFTA] was awful” (6:42). Furthermore, the Center for Global Development (Clemens, 2015) concluded that the wage disparity between Mexican and American workers has grown since NAFTA’s implementation, not diminished. The rise of global crime is hard to dispute, but the economics of trade offer a more optimistic outlook. According to the Council on Foreign Relations (Chatzky et al., 2020), since NAFTA’s implementation, Mexico’s farm exports to the United States have tripled while hundreds of thousands of auto-manufacturing jobs have been created. Finally, Rosling (2007) argues that world health and wealth are converging across regions. That is, globally we are becoming wealthier and living longer in every region. Certainly, increased jobs and life expectancy are strong incentives for the nations of the world to share in economic prosperity, while the proliferation of international crime remains an ongoing challenge for all states.

In summary, globalization is a macro-economic condition that, while subject to periods of decline and expansion, is unlikely ever to end. Moreover, while global finance has continued to grow, the increasing health and wealth of the world’s population provides strong incentives for countries to maintain global ties. It is therefore unlikely that countries could, or would want to, unwind from the global system. In short, the world is too interconnected to ever deglobalize.

References

Chatzky, A., McBride, J., & Sergie, M. A. (2020). NAFTA and the USMCA: Weighing the impact of North American trade. Council on Foreign Relations. https://www.cfr.org/backgrounder/naftas-economic-impact

Chomsky, N. [Chomsky's Philosophy]. (2015, July 20). Noam Chomsky – Globalization [Video]. YouTube. https://www.youtube.com/watch?v=4RxHzQTHhKk&t=12s

Clemens, M. (2015, March 17). The US-Mexico wage gap has grown, not shrunk, under NAFTA. Awkward. Center for Global Development. https://www.cgdev.org/blog/us-mexico-wage-gap-has-grown-not-shrunk-under-nafta-awkward

Frankel, J. A. (2000, August 1). Globalization of the economy. National Bureau of Economic Research. https://www.nber.org/system/files/working_papers/w7858/w7858.pdf

Ghemawat, P. (2009). Why the world isn’t flat. Foreign Policy. https://foreignpolicy.com/2009/10/14/why-the-world-isnt-flat/

Lund, S., Windhagen, E., Manyika, J., Harle, P., Woetzel, J., & Goldshtein, D. (2017). The new dynamics of financial globalization. McKinsey & Company. https://www.mckinsey.com/~/media/mckinsey/industries/financial%20services/our%20insights/the%20new%20dynamics%20of%20financial%20globalization/mgi-financial-globalization-executive-summary-aug-2017.pdf

Naim, M. (2009, September 30). Think again: Globalization. Moises Naim. https://www.moisesnaim.com/my-columns/2020/6/10/think-again-globalization

Pearlstein, S. (2011). Dani Rodrik's "The globalization paradox". The Washington Post. https://www.washingtonpost.com/wp-dyn/content/article/2011/03/11/AR2011031106730.html

Rosling, H. [TED]. (2007, January 16). The best stats you've ever seen | Hans Rosling [Video]. YouTube. https://www.youtube.com/watch?v=hVimVzgtD6w

Wolf, M. (2004). Why globalization works. Yale University Press.

World Trade Organization. (2021). Global value chain development report 2021: Beyond production. https://www.wto.org/english/res_e/booksp_e/04_gvc_ch1_dev_report_2021_e.pdf

America: A British Revolution

As British troops set fire to the U.S. Capitol building on August 24, 1814, it might have seemed as though the American experiment had met its end. In fact, the War of 1812 was simply the last word in a debate over democracy that had vexed Britain and her parliament for decades. What began in earnest during the French and Indian War continued through American independence and, later, the French Revolution. It is with some irony, however, that in sowing the seeds of revolt in America, Britain initiated a democratic revolution of its own. This essay examines the attitudes and perspectives of the British legislature, public, and crown toward American independence, and argues that, while correctly attributed to taxes and representation, the war ought to be more broadly viewed as the middle act of a larger British revolution.

If England’s national debt was the third front of the French and Indian War, Lord Grenville was the general to fight it. While an honest and forthright man, Grenville was not well liked. His abrasive style afforded him few friends and was as much responsible for his rise as for his downfall. Nonetheless, Grenville’s penchant for law and finance, at a time when England’s national debt had nearly doubled, proved too valuable to dispatch (Clark, 1950). It was in this postwar fiscal crisis that the British government turned to the American colonies for assistance.

The story of taxes and the American Revolution is deeply rooted in a personal feud between Lord Grenville and King George. In fact, the Stamp and Sugar Acts were a direct manifestation of the power struggle between these two men, and indeed, of the slow-rolling retreat from monarchical rule that had been proceeding throughout Britain for centuries. Grenville’s ambitions were very much focused on expanding the powers of Parliament at the expense of the King. Raising taxes on the colonies was not only a means to generate revenue; it also diminished the Throne by demonstrating Parliament’s legal authority over the colonies (Clark, 1950, p. 393). Though repealed little more than a year after being enacted, Grenville’s taxes would work to advance democratic progress in both Great Britain and the American colonies. By 1765, King George had had enough of the problematic minister and dismissed him from office. Yet ironically, Lord Grenville’s departure would itself spawn a dramatic reduction in the King’s ability to remove public officials. It was Grenville’s last rebuke to the powers of the Throne that his dismissal should solidify parliamentary authority once and for all (p. 391). At the same time, the issues of taxes and representation set in motion by Grenville’s policies would fuel revolution a continent away.

By the time the Americans declared their independence, there was little room for surprise on either side. There was, however, plenty of room for debate and rebuttal, much of which came from two lawyers appointed by the King to respond to the colonists’ demands. Jeremy Bentham (1776) issued a blistering response to the Declaration that was both emotionally colorful and intellectually provocative. “The opinions of the modern Americans on Government,” he writes, “like those of their good ancestors on witchcraft, would be too ridiculous to deserve any notice” (para. 1). And on the notion of self-evident truths he ripostes, “This rarity is a new discovery; now, for the first time, we learn, that a child…has the same quantity of natural power as the parent” (para. 3). Bentham spares little for the notion of unalienable rights as well, suggesting that life, liberty, and the pursuit of happiness are tantamount to a complete disregard for law and order (para. 7). Past the fiery retorts, however, Bentham argues that the American declaration is undermined by the precedent of historic submission to British rule, both in law and in duties paid. He contends that none of the taxes, and indeed none of the colonists’ grievances, exist outside the bounds of preexisting norms (Bentham, 1776). American independence, therefore, had no legal or philosophical grounds on which to stand.

While Bentham was writing his rebuttal of the American declaration, his friend and fellow attorney John Lind articulated a more complete response. As Oxford professor of law Herbert Hart writes, in Lind’s view taxation and representation were inseparable. Hart summarizes Lind’s views as follows:

The idea that [taxes and representation were separate] arose, according to Lind, from a misconception of the nature of property as something that belonged to individuals independently of the law. On the basis of this misconception there had developed the further erroneous idea that when the subject pays taxes he is making a gift of what is his and which, since it is a gift, requires his consent (Hart, 1976, p. 550).

Hart goes on to say that Lind’s position was premised on the idea that the notion of ownership can only exist if supported by law. He quotes Lind himself, quite aptly: “Take away the fence which the law has set around this thing…and where would your right or property be then” (p. 550). The point is well stated but also perfectly embodies the American disagreement. While the colonists held certain rights to be unalienable, they recognized that the law needed to be structured in a way that protected these ideas. In a sense, both the British and the Americans were correct in their assessment that without the law, rights are void of meaning. In fact, a century before, English philosopher John Locke argued that such rights as life, liberty, and property were God-given. And prior still, the Magna Carta contained similar provisions (Krutz, 2021, p. 32). In any event, while Lind’s final views on democracy are less well known, Bentham would become one of democracy’s biggest champions, pressing for Parliamentary reforms and praising the progress of the Americans (Hart, 1976, pp. 557, 560). These views, and indeed the arc of Bentham’s trajectory, were matched by the British public and, to a lesser degree, King George himself.

However, while British reforms would follow the American and French revolutions, popular opinion did not start off favoring the colonists. Historian Benjamin Labaree (1970) writes, “As one reads the newspaper commentary [regarding the prospect of war]…he is struck by the extent to which the subject of America evoked an emotional response” (p. 7). By Labaree’s analysis, some 70% of political commentary took a decidedly anti-American stance (p. 7). While it’s not surprising that the common public would have sided with the domestic viewpoint, the range of opinions within that spectrum swung from fear that the Americans would succeed to conspiracies framing the Americans as both aggressors and victims (pp. 9, 10, 16). The most common view, however, was that the Americans were ungrateful, specifically for the protection provided by British troops who bled on colonial ground during the French and Indian War (pp. 17, 18). Indeed, American revolt in that context would be a bitter pill to swallow, but ingratitude, while potent, would have proven transitory next to the economic concerns of trade.

The British public was not alone in harboring these concerns. King George considered the loss of the colonies to be a mortal blow not only to Britain’s finances but also to its status on the world stage (Bullion, 1994). However, despite these concerns, the King was remarkably sympathetic to the notion that citizens might grow disillusioned with opportunities at home. He writes,

It was thoroughly known that from every Country there always exists an active emigration of unsettled, discontented, or unfortunate People, who failing in their endeavours to live at home, hope to succeed better where there is more employment suitable to their poverty (p. 306).

This conciliatory view is particularly surprising from a monarch who would have viewed American revolution as betrayal, but it was not out of line with revolutionary observers like Michel-Guillaume Jean de Crèvecoeur, who wrote, “Alas, two thirds of [Americans] have no country. Can a wretch who wanders about, who works and starves…call England or any other kingdom his country?” (Crèvecoeur, 1782, p. 4). On matters of trade and diplomacy, the King was no less gracious, writing,

This comparative view of our former territories in America is not stated with any idea of lessening the consequence of a future friendship and connection with them; on the contrary it is to be hoped we shall reap more advantages from their trade as friends than ever we could derive from them as Colonies (Bullion, 1994, p. 307).

While it is tempting to declare King George an American apologist, historian John Bullion cautions against this conclusion, writing that the King was susceptible to whoever had his ear. After American independence had been finalized, the King was “noticeably lukewarm toward efforts to improve commercial relations with the United States” (p. 310). Regardless, King George’s views were not decidedly anti-American. Even if one were to consider the King’s most favorable sentiments to stem from economic interest rather than democratic endorsement, that an eighteenth-century monarch could hold such a favorable view is no less remarkable.

By 1809, Jeremy Bentham had completed a full about-face, advocating for Parliamentary reforms and universal voting rights. It was a remarkable turn of events for a man who only two decades before had declared the French revolution’s declaration of rights to be “nonsense on stilts” (Armitage, 2004, p. 63). Furthermore, the eventual adoption of full representation by the British Parliament speaks to broad public support for American ideals. King George, as Bullion pointed out, was at worst lukewarm and at best quite optimistic. Certainly, these sentiments cast the subsequent War of 1812 in a more interesting, if not puzzling, light. One might read the American grievance of impressment as a young upstart nation wanting to challenge the aging empire for supremacy. And this could be quite right. In fact, this global challenge would take centuries to play out, as American scholar Robert Kagan writes,

When it came to dealing with the European giants, [the United States] claimed to abjure power and assailed as atavistic the power politics of the eighteenth and nineteenth-century European empires (Kagan, 2002, p. 6).

It would not be until the close of the Second World War that America emerged as economically and militarily superior to its European fathers. Indeed, the bi-directional tension between Britain, Europe, and North America that gave rise to revolution has never fully abated. As Kagan points out, the freedom enjoyed by much of the European continent is paid for by American hegemony. In this respect, he writes, the wall cannot pass through the gate (p. 25). In other words, the luxuries afforded Europe by American power can never be fully enjoyed by Americans. This centuries-old tension is likely to persist as America’s foreign policy continues to shift and global trade becomes more regionally focused. As throughout history, however, America and Britain will maintain their long-standing, if at times strained, relationship as societies of similar stripes.

In summary, the British view of American independence was almost universally one of contempt for perceived ingratitude. Yet few Britons likely saw the broader strokes of their own budding independence. In a very real sense, America showed the British that rights and representative government were possible and provided Englishmen with the inspiration to finish their own revolution.

References

Armitage, D. (2004, April). The declaration of independence in world context. OAH Magazine of History, 18(3), 61-66. https://www.jstor.org/stable/25163686

Bentham, J. (1776). A short review of the declaration. University of Wisconsin Pressbooks. https://wisc.pb.unizin.org/ps601/chapter/jeremy-bentham-a-short-review-of-the-declaration/

Bullion, J. L. (1994, April). George III on empire, 1783. The William and Mary Quarterly, 51(2), 305-310. https://www.jstor.org/stable/2946866

Clark, D. M. (1950). George Grenville as first lord of the treasury and chancellor of the exchequer, 1763-1765. Huntington Library Quarterly, 13(4), 383-397. https://www.jstor.org/stable/3816164

Crèvecoeur, M. G. J. (1782). “What is an American?” Letter III of Letters from an American farmer. https://americainclass.org/sources/makingrevolution/independence/text6/crevecoeuramerican.pdf

Hart, H. L. A. (1976, October). Bentham and the United States of America. The Journal of Law & Economics, 19(3), 547-567. https://www.jstor.org/stable/725081

Kagan, R. (2002, June & July). Power and weakness. Policy Review.

Krutz, G. (2021). American government (3rd ed.). Rice University. https://openstax.org/details/books/american-government-3e

Labaree, B. W. (1970). The idea of American independence. Proceedings of the Massachusetts Historical Society, Third Series, 82, 3-20. https://www.jstor.org/stable/25080688

Politics of Difference in Liberal Dimensions

In the fall of 1994, two Tlingit Indian teens were banished to a remote set of Alaskan islands as punishment for assaulting a pizza delivery man in Everett, Washington. The sentence, which involved traditional native customs, ran concurrently with state guidelines for felony assault, and the teens served both tribal and state sentences. The decision to invoke the Tlingit tribal court was not without controversy; however, it illustrated an important fact: federal, state, and Indian traditions are not mutually exclusive. In fact, illiberal customs are tolerated throughout American society, ranging from the non-democratic arrangement of some Indian Nations to the religious practices of cults and mainstream denominations alike. This essay argues that, while federal protection for specific cultural traditions isn’t realistic, it also isn’t necessary. The flexibility provided by liberal governance allows for self-directed preservation of historical norms while existing under traditional western values.

To begin with, cultural exchange is unavoidable and arguably desirable. A tradition that no longer evolves is, in a very real sense, already dead. Consider, for example, the tradition of Christmas. The fact that both the exchange of gifts and the date of celebration are likely derived from pagan traditions is irrelevant to observers. Yet such cross-cultural exchange is a vital part of human existence. Without traditions such as Saturnalia and the winter solstice, our modern notion of Christmas might look very different. Food, language, ceremonies, and ideas flow freely across borders. As historian Eric Foner writes, although the French offered American Indians citizenship if they adopted Catholicism, it was more often the case that Frenchmen chose the free life of the Indians (Foner, 2019, p. 44). Likewise, the modern genres of jazz, blues, and R&B are fusions of African and American musical traditions. The blues were exported by Jimi Hendrix, B.B. King, and Muddy Waters, and reimported by way of Eric Clapton. Surely it is a dark world where such vibrant expressions and interpretations of culture are lost or segregated from one another. Yet it is also true that porous intellectual borders necessitate change and, ultimately, the loss of cultural artifacts to time.

Determining the obligation, if any, that liberal countries have to protect cultural traditions is crucial. English philosopher John Locke recognized that freedom was neither absolute nor cost-free. Ceding a little personal autonomy, he said, was necessary to ensure a well-functioning society (Krotoszynski, 1990, pp. 1398-1399). Indeed, Locke’s observation is not contingent upon western liberal governance. Any society, whether an illiberal Indian Nation or French-Canadian Quebec, requires that individuals conform to prevailing social norms. This necessarily means shedding some of one’s cultural identity. For example, to the extent that western liberal values impose on relative minorities, like Muslim immigrants, such assimilation is necessary, and indeed would be equally necessary if western women were to emigrate to Saudi Arabia. Social science professor Charles Taylor resists the notion of conformity, however (Gutmann, 1994, pp. 48, 53), even while acknowledging that intersectionalist group identities themselves necessitate that one conform (pp. 55, 58). While Taylor pays minimal attention to this conflict, it underpins the impossibility of legislating specific cultural identities. Liberal societies, therefore, can only provide a legal framework for self-determination and organization, agnostic of specific religious or cultural values.

In fact, this framework tolerates relatively illiberal traditions. For example, Amish communities reject most modern technology and secularism, and they generally do not participate in the political process. Yet like Native Americans, they are considered American citizens and protected by the same civil liberties. Religious liberty allows for numerous organized belief systems, many of which voluntarily restrict personal agency. The Amish place restrictions on dress, observant Jews enforce dietary and labor laws, and even the existence of cults is permitted. Each of these groups, regardless of how one feels about them, possesses a system of ideas and, in the case of the Jews and the Amish, long-standing cultural traditions that the group perpetuates. It is impossible to imagine a federal government legislating the specific values of the Jewish and Amish communities or walling them off on their respective reservations; however, these groups are free to, and in fact do, self-organize around shared values.

Taylor and others do not believe that passive tolerance provides sufficient cultural protection. He argues for a politics of difference, where cultural and individual differences are emphasized over similarities (Gutmann, 1994, p. 38). He contends that cultural preservation must be protected by law, writing, “the goal of [such laws] is not to bring us back to an eventual ‘difference-blind’ social space but…to maintain and cherish distinctness, not just now but forever” (p. 40). This is a remarkably naïve statement given that its implications are without precedent. No human tradition has remained unchanged forever. Even the most well-preserved religious ceremonies are two-dimensional reenactments without the depth of true belief. Furthermore, linguistic, culinary, and secular traditions all carry cultures across borders, making preservation of the type proposed by Taylor unrealistic. Indeed, a world where French remains French and white remains white is repulsive. The beauty of culture is that it can, and should, be shared. That said, Taylor’s argument raises the valuable question of to what degree liberal societies should accommodate illiberal traditions, and what, if anything, should be done when those traditions violate majority norms.

The short answer is that western majorities have little obligation to intervene when social norms are violated. As the example of the Amish shows, little social harm is done by the existence of those communities. Furthermore, even if certain cultural norms violate personal autonomy, adherence is voluntary. Jews who no longer wish to be Jewish are free to leave the faith, even if they face social and familial backlash. Moreover, the numerous Indian Nations, with Indian law and a variety of democratic and undemocratic governments, demonstrate that western liberalism and illiberal practices are not mutually exclusive and, in fact, are often protected by law. Intervention is only required in cases where an individual’s civil liberties are violated against their will. Such a case is currently playing out in a lawsuit filed against the Church of Scientology alleging crimes of child trafficking, forced labor, and imprisonment (Cohen Milstein, 2022). However, by and large, group practices, whether religious, secular, or a fusion of the two, ought to be left alone. It is impossible for a legislature to be all things for all cultures, as the politics of difference suggests. Such a prospect requires, if not encourages, cultural consolidation, not diversity. Therefore, it is essential that communities take it upon themselves to preserve their cultures. As Anthony Appiah put it, “If we create a culture that our descendants will want to hold on to, our culture will survive them” (Gutmann, 1994, p. 158). In other words, cultural survival rests in the hands of a culture’s members and the value they place on perpetuating it.

In summary, while critics of western democracy like Taylor advocate for unprecedented social collectivism, liberalism provides a proven, if imperfect, track record of enabling diversity through self-determination. It is through the organization of individuals around shared identities, not federal management, that cultural traditions will survive.

References

Cohen Milstein. (2022). Church of Scientology accused of human trafficking, forced labor. https://www.cohenmilstein.com/update/church-scientology-accused-human-trafficking-forced-labor

Foner, E. (2011). Give me liberty: An American history. W.W. Norton & Company.

Gutmann, A. (1994). Multiculturalism. Princeton University Press.

Krotoszynski Jr., R. J. (1990). Autonomy, community, and traditions of liberty: The contrast of British and American privacy law. Duke Law Journal, 1398-1454. https://scholarship.law.duke.edu/dlj/vol39/iss6/6

Suicide: Liberty and Intervention

In early June 1963, a crowd gathered in the South Vietnamese capital of Saigon as Thích Quang Duc quietly set himself on fire. This act of suicide, and the horrific images that followed, was also an act of protest against religious oppression. Thích’s last words were not the letter he left behind, but this moment of self-immolation. Indeed, his act of protest is rarely, if ever, referred to as suicide, yet events such as these draw into question the extent to which liberty allows us to do with our lives as we please. Thinkers such as John Stuart Mill would have likely viewed suicide as a violation of social responsibility, personal liberty, and the principle of doing no harm. However, such black-and-white conclusions are not supported by the realities of personal suffering, politics, and social norms. This essay contends that while suicide is an act of personal autonomy, liberty does not, and should not, preclude intervention when appropriate.

According to the Centers for Disease Control and Prevention (2023), nearly fifty-thousand Americans took their own lives in 2022. Of these, the overwhelming majority were white, male, and 25 to 64 years old, with over half of those under the age of 45. In other words, men in the prime of life with plenty to live and work for. The devastation suicide brings to families and loved ones dominates the national conversation, and with good reason. However, our relationship to the act of suicide is entirely dependent on the circumstances and prevailing social norms. Consider, for example, the difference in sentiment between suicide as an outcome of PTSD and suicide as relief from a terminal illness. In 2013, Pew Research found that 57% of adults would choose to end treatment if their illness were terminal. The same research found that 47% of adults approved of physician-assisted suicide, while 49% disapproved. It’s difficult to imagine such an even split of opinion on PTSD-related suicides, yet such a conversation exists in other circumstances where a person chooses to die. Cultural and political perspectives play a critical role in our acceptance of suicide as well. Japanese fighter pilots, for example, were revered for flying their planes into Allied warships, and Muslim suicide bombers are believed by their supporters to ascend to heaven for their deeds. In fact, such a complicated relationship is evident throughout history. Historian Philip Freeman (2011) writes in his book, Alexander the Great, that when the philosopher Calanus fell ill, rather than continue suffering, he chose to be burned alive in ritual suicide. Even at that time, writes Freeman, observers were divided on whether his act was one of bravery or pompous self-conceit (p. 306). Clearly, our reactions to any of these events depend more on the circumstances and political perspective from which they are viewed than on the decision itself.

It is important not to conflate the emotional impact of an immediate death with the more widely accepted decision to die slowly. However, while Mill might have condemned suicide, he would have done so for different reasons. As an ardent advocate of liberty with strong utilitarian sentiments, Mill recognized there were limits to the notion of self-determination. He writes,

Whenever, in short, there is a definite damage, or a definite risk of damage, either to an individual or to the public, the case is taken out of the province of liberty, and placed in that of morality or law (Collini, 2007, p. 82).

On the one hand, suicide would seem to satisfy both of Mill’s requirements. Tremendous harm can be done to the families and communities associated with the subject. On the other, Mill muddies the water when he says, “[No one] is warranted in saying to another human creature of ripe years, that he shall not do with his life for his own benefit what he chooses to do with it” (p. 76). Indeed, it is quite difficult to declare what is in another person’s benefit. As the example of Calanus showed, a person who is suffering will have a very different view of their best interests than people in their periphery. Furthermore, by Mill’s own logic, society ought to exercise a great deal of caution in matters that override personal agency.

Mill attempts to work through this dilemma first by acknowledging the inherent difficulty of predicting consequences (Collini, 2007, pp. 80-81), and later by invoking the notion of liberty as an unalienable right. “The principle of freedom cannot require that [a person] should be free not to be free. It is not freedom, to be allowed to alienate [one’s] freedom” (p. 103). In other words, liberty cannot be used to sabotage liberty, even if it is our own. Indeed, this is a powerful argument for the immorality of suicide, but it is limited. Mill cites the example of slavery and the contradiction in voluntarily becoming a slave (p. 103). Yet in the case of suicide, there is no other person to whom the subject cedes their agency, even if the act of suicide is assisted by someone else. This philosophical logjam represents a critical gap in Mill’s definition of harm. At no point does he suggest that personal harm is anything other than physical, whether to the person’s body or their property. Yet we know that real harm occurs when an act of suicide is committed. Had Mill recognized emotional harm as legitimate, his arguments would more clearly support intervention and treatment.

Nonetheless, Mill’s arguments allow for broader definitions of harm and interventionist mindsets. While he remains vague on the definition of personal harm, he invokes the individual’s Platonic obligation to society, writing, “[E]very one who receives the protection of society owes a return for the benefit…[by] each person’s bearing his share of the labour and sacrifices incurred for defending [society]” (Collini, 2007, p. 75). In this respect, someone who commits an irrational act of suicide could be said to have failed in their obligation to society itself. This is particularly true, as the CDC data pointed out, in the case of young men, who account for the majority of suicides. Mill goes further to suggest that individuals who are incapable of “self-government” should be protected from themselves (p. 80), and that individuals are obligated to dissuade each other from harmful decisions (p. 99). Certainly, a person suffering from severe depression could be said to be incapable of self-government. Intervention, in such a case, would be justified by preventing an irrational act of harm. In short, Mill’s opinions on suicide could be projected either way; however, there is a clear case to be made that dissuasion and obligation to society lay the groundwork for intervention and treatment.

In summary, John Stuart Mill’s views of harm and obligation to society support the cause of dissuasion and even intervention when suicide is deemed likely. On the other hand, his views on liberty support the premise that human beings possess the agency to determine what’s in their best interests and to act accordingly, even if such action results in their own death. His views, therefore, while far from conclusive, allow for a variety of humane approaches to suicide, both interventionist and in service to the relief of great suffering.

References

Centers for Disease Control and Prevention. (2023, August 10). Suicide data and statistics. CDC.gov. https://www.cdc.gov/suicide/suicide-data-statistics.html

Collini, S. (Ed.). (2007). J.S. Mill: On liberty and other writings. Cambridge University Press.

Freeman, P. (2011). Alexander the Great. Simon & Schuster.

Pew Research Center. (2013, November 21). Views on end-of-life medical treatments. Pewresearch.org. https://www.pewresearch.org/religion/2013/11/21/views-on-end-of-life-medical-treatments/

Economic Global Order

In 2001, noted political scientist and professor John Mearsheimer published his third book, The Tragedy of Great Power Politics. Mearsheimer pays particular attention to states and their motivations in chapter 3, in which he defines the world as inherently anarchic, competitive, and dominated by power, fear, and a quest for survival. This trinity of motivations manifests principally as military strength, aggression, and underhanded behavior, which can only be offset by geographical factors or opposing strength (Mearsheimer, 2001). This essay examines a few of Mearsheimer’s positions and argues that, contrary to a perpetual state of fear and threat of war, the world today is driven much more by economic motivations, interconnectivity, and mutual prosperity.

The decade following the end of World War II laid the groundwork for two markedly divergent paths of global order. In one was the rise of the Soviet Union, nuclear proliferation, and an omnipresent threat of atomic catastrophe. In the other, the late forties and fifties marked the beginning of globalization, openness, and increased economic freedom between countries. As Princeton professor of politics and international affairs John Ikenberry (1996, pp. 79-80) points out, it was institutions like the United Nations and global trade agreements like GATT that, while widely used to enforce Soviet containment, provided the basic stability for Western states and democracies to engage in freer and more open trade. It is, as Ikenberry says (p. 79), telling that while the Soviet Union collapsed more than thirty years ago, these Western agreements and trade relationships are still flourishing today.

Mearsheimer (2001, pp. 63-64, 66) would argue that institutions like the UN, NATO, and trade agreements like GATT or the USMCA (formerly NAFTA) are at their root driven by competition and motivated by fear and a need for ever greater security. But why? Mearsheimer (p. 54) clearly states that his first assumption is that world order is anarchic. This is to say, as he puts it, there is no government of governments, no global authority presiding over world affairs. It is this anarchic environment that inspires competition and leads states to exist in perpetual fear.

On its face, at the highest level, the world is anarchic. There is no government of governments, and save for the goodwill of a Great Power, smaller states have little recourse. However, there are Great Powers, and those Great Powers do come to the aid of weaker states. Currently we see this playing out in great force in the Ukraine war. More distantly, in the first Gulf War, the United States came to the aid of Kuwait. America also infamously backed the Afghan mujahideen against the Soviets during their Afghan war. Great Power involvement in the affairs of weaker states is discretionary, to be sure, but it is not trivial. Furthermore, trade deals like the USMCA or the USJTA (United States-Japan Trade Agreement) provide the structure and rules, agreed to by both parties, that govern commerce between countries. In short, whether through a Great Power, an international trade deal, or intergovernmental organizations like the IMF or the IAEA, there are resources available to weaker states. The world does not exist in a vacuum of authority, as Mearsheimer perhaps suggests, nor is it a free-for-all. Trade agreements and cooperative bodies like the UN provide a semblance of international law and order, not anarchy.

Mearsheimer (2001) concludes that survival is the core drive of every state, and indeed it is hard to argue that any state would choose not to exist. However, Mearsheimer over-indexes on the prominence of survival in foreign affairs. For example, he writes, “[…] the only assumption in dealing with a specific motive that is common to all states says their principal objective is to survive” (p. 55). He furthers this point by proposing that American military might is the only reason Canada and Mexico do not attack their North American neighbor and that the Atlantic Ocean is the only thing keeping the Americans out of Europe (pp. 56, 60). These arguments ignore the mutually beneficial trade agreements between the U.S., Mexico, and Canada as strong motivators for peace; and they ignore the fact that the United States has twice crossed the Atlantic to fight world wars and, after occupying large swaths of the European continent, decided to give it back. To be clear, survival can absolutely play a dominant role in foreign policy. Fear of the spread of Communism was a key motivator in America’s decision to get involved in Vietnam, for example (Office of the Historian, 1964). The Cold War and the resulting arms race were also direct manifestations of all the things Mearsheimer identifies in his arguments. The critique is that the motivators of fear and survival apply selectively to specific situations between actors. The foreign policy of Ukraine, for example, is no doubt heavily indexed on fear and a struggle for survival, but this does not mean that a similar relationship exists between Canada, Mexico, and the United States.

Finally, if fear, survival, and the threat of war govern foreign policy, we should see an increase in inter-state conflict, but in fact we see a long-term decline in wars between countries. According to the Peace Research Institute Oslo (Gates et al., 2016), rates of war between countries have dramatically declined since the 1940s. In fact, not only have rates of inter-state conflict decreased; such conflicts are no longer a major contributor to overall world conflict:

[Figure: armed conflict trends, 1946-2014 (Gates et al., 2016, p. 2)]

Over this same period, the world has become increasingly globalized and economically interdependent. As American University professor Joshua Goldstein (2011) points out:

Not only is China a very long way from being able to go toe-to-toe with the United States [militarily]; it’s not clear why it would want to. A military conflict (particularly with its biggest customer and debtor) would impede China’s global trading posture and endanger its prosperity. (Wars will get worse in the future section, para. 4)

The same logic applies to the relationships codified in NAFTA and later the USMCA. In today’s world, economic prosperity motivates Great Powers, while existential survival is the exception, not a constant. Consider, for example, that from the end of World War II until the early 1970s, world GDP and trade grew by 4.9% and 7% per year, respectively. The fortunes of growth were universally felt from Europe to Asia and, of course, North America (WTO, 2014, p. 48). It’s hard to imagine such a change in global prosperity not impacting priorities between states; and indeed, the decline in inter-state conflicts suggests this is the case.
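To give a rough sense of what sustained growth at those rates implies, assume roughly twenty-five years of compounding (an approximation of the span between 1945 and the early 1970s; the WTO figures cited above report only the annual rates):

\[
\underbrace{(1 + 0.07)^{25} \approx 5.4}_{\text{trade}} \qquad\qquad \underbrace{(1 + 0.049)^{25} \approx 3.3}_{\text{GDP}}
\]

On those assumptions, world trade would have multiplied more than fivefold and world output more than threefold over the period, which helps explain why states acquired so much economic skin in the game.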

In summary, the world does not exist in a state of total anarchy. Great Powers serve their best interests, but today those interests are principally governed by maintaining good economic ties with the rest of the world. An increase in trade, a decrease in inter-state conflicts, and the proliferation of international cooperatives all reflect a shift toward global interdependence. The more skin states have in the economic game, the more they have to lose through war. Survival is therefore less driven by an anarchic model of power and fear and more incentivized through economic prosperity and playing within the rules.

References

Gates, S., Nygard, H. M., Strand, H., & Urdal, H. (2016). Trends in armed conflict, 1946-2014. Peace Research Institute Oslo. https://files.prio.org/publication_files/prio/Gates,%20Nyg%C3%A5rd,%20Strand,%20Urdal%20-%20Trends%20in%20Armed%20Conflict,%20Conflict%20Trends%201-2016.pdf

Goldstein, J. (2011, August 15). Think again: War. Foreign Policy. https://foreignpolicy.com/2011/08/15/think-again-war/

Ikenberry, J. (1996). The myth of post-Cold War chaos. Foreign Affairs, 75(3), 79-91.

Mearsheimer, J. (2001). Anarchy and the struggle for power. In The tragedy of great power politics (pp. 54-72). W.W. Norton & Company.

Office of the Historian. (1964, March 16). 84. Memorandum from the secretary of defense (McNamara) to the president. https://history.state.gov/historicaldocuments/frus1964-68v01/d84

World Trade Organization. (2014). World trade report 2014. https://www.wto.org/english/res_e/booksp_e/world_trade_report14_e.pdf

Confronting the Challenge of Suicide Terrorism

In August 2021, as the last American flight left Kabul, video of victorious Taliban fighters spread around the world. For the better part of two decades, the Taliban had employed suicide attacks and IEDs to harass a better-equipped, more advanced force; and now that force was leaving. The story of the Taliban’s success has many threads, and while there is plenty of room to debate what went wrong, there is little doubt the Taliban won. Though their tactics were not the only contributing factor to their victory, it is impossible to separate the outcome from the methods. Furthermore, America’s withdrawal from Afghanistan is, in part, an endorsement of the tactics. More importantly, the Taliban’s success makes it difficult to dissuade terrorist groups from deploying suicide attacks in the future. In fact, the globalized nature of terrorism, the permanent record immortalized on the internet, and the success of suicide attacks make total dissuasion impossible. Instead, the powers of the world should focus on mitigation through clear objectives, negotiation, diplomacy, and military force to achieve the best possible outcome.

To begin with, it’s important to understand the full scope of the problem suicide terrorism represents. It is not a one-sided issue, as University of Chicago professor Robert Pape (2003) points out, nor one led by mindless, irrational actors. Terrorism, and specifically the suicide attack, is used because it is effective and often carries popular domestic support (p. 349). Furthermore, target countries contribute to that popular support by engaging in socially disruptive policies like regime change, democratization, and religious confrontation. While America and its allies have an unofficial policy of refusing to negotiate with terrorists, the United States has shown throughout its history that it is willing to work with imperfect regimes and international groups that do not share its values (Augusto Pinochet in Chile, the Afghan mujahideen, and modern relations with Saudi Arabia, to name a few). Therefore, the absence of western principles should not preclude engagement with any group, terrorist or otherwise. Instead, target countries should understand who they’re working with and the objectives of the organization. This requires that western countries acknowledge certain aspects of terrorist groups that cut across preferred narratives. For example, terrorist groups often have quantifiable goals and deploy suicide attacks in pursuit of those goals.

[S]uicide terrorism is strategic. The vast majority of suicide terrorist attacks are not isolated or random acts by individual fanatics but, rather, occur in clusters as part of a larger campaign by an organized group to achieve a specific political goal (Pape, 2003, p. 344).

In other words, terrorist organizations are rational actors with objectives that have quantifiable definitions of success (p. 344). By comparison, an irrational actor commits violence without any broader objective in mind. School shootings are an example of an irrational act where violence serves its own end and the shooter is not part of a larger organization with political aims. Recognizing terror groups as rational actors is critical to understanding their objectives and forming a response. That said, while the hardline approach is not reliable, as Afghanistan showed, accepting that terrorists have rational goals does not preclude an aggressive posture.

For policy makers who believe terrorists can be convinced that suicide attacks don’t work, consider that America’s defeat and many, if not all, of the suicide attacks that led to that defeat are immortalized on the internet. The idea that the Taliban victory will fade with time is foolish. Terrorism has globalized both in presence and in tactics. American University professor Audrey Kurth Cronin speaks to this point in an issue of International Security:

The al-Qaida movement has successfully used the tools of globalization to enable it to communicate with multiple audiences, including potential new members, new recruits, active supporters, passive sympathizers, neutral observers, enemy governments, and potential victims (Cronin, 2006, p. 38).

Cronin adds that the internet has allowed terrorism to decentralize and spread globally in a way that wasn’t possible in its earliest iterations (p. 12). For example, in 2020, fifty-four percent of global suicide attacks occurred in Africa, while the second-largest share occurred a continent away in Asia (Mendelboim et al., 2022, para. 3). That said, the number of suicide attacks worldwide has decreased dramatically, from 470 in 2016 to 74 in 2021 (para. 3). Yet the impact of globalization on terrorism remains uncertain. On the one hand, the withdrawal of U.S. forces from the Middle East contributed strongly to the decline in suicide attacks (para. 1). On the other hand, sociology professor Donald Black points out that globalization will only continue to force divergent people and cultures together. While this can increase violence in the near term, he says, the very act of globalization can reduce differences between people and therefore terrorism itself (Black, 2004, p. 24). Clearly, globalization has helped terror networks decentralize and spread throughout the world, but it may also be a mechanism to prevent radicalization through long-term integration.
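A quick calculation puts that decline in perspective (the percentage below is derived from the two figures cited above and is not reported in the source):

\[
\frac{470 - 74}{470} \approx 0.84
\]

That is, reported suicide attacks fell by roughly 84 percent between 2016 and 2021.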

Terror groups perceive a lack of commitment from the west as a weakness; however, democracy is not necessarily an impediment to success in combating terrorism. Issues arise for any government when objectives are not properly defined or are distorted by mission creep. It’s easy to forget that before the United States became bogged down in Afghanistan, the Soviet Union experienced a similar fate. Difficulties are not strictly ideological. Instead, underestimating the opponent and conflating mission objectives are far more lethal to mission success than democracy itself. As Cronin puts it, counterterrorism and political change operate on completely different timelines. Consider that “[ninety percent] of terrorist organizations have a lifespan of less than one year; and of those that make it to a year, more than half disappear in a decade” (Cronin, 2006, p. 13). Now consider that it took over a century for the United States to pacify the North American continent, and much longer if we were to consider the entirety of the colonial period. The goals of democratizing Afghanistan, defeating al-Qaida, and toppling the Taliban operated on different timelines, involved different tactics, and had different success criteria. In short, western countries can contain terrorist activities, but it is essential that they do not conflate those efforts with the much more difficult task of nation building.

Successfully confronting terrorism and suicide attacks begins with understanding the opponent and their motivations. It also requires recognizing that military force and negotiation are not mutually exclusive. Pape recognized this twenty years ago when he said, “The current policy debate is misguided. Offensive military action or concessions alone rarely work for long” (Pape, 2003, p. 356). This played out in the Afghan war, where the mere threat of a U.S. invasion following 9/11 had the Taliban ready to negotiate. Yet even after the Americans had toppled them from power, the United States made no effort to integrate the Taliban into the political process (Whitlock, 2021). Had America utilized the opportunity provided by military force, it might have prevented twenty years of war. Finally, negotiation itself can be used as an offensive tactic. As Cronin says, splintered terrorist groups like al-Qaida present opportunities to negotiate with individual cells and break them away from the global collective (Cronin, 2006). Furthermore, while Pape (2003) says a decreased U.S. presence in the Middle East can help reduce suicide attacks (p. 357), total avoidance isn’t a viable solution. Western countries should continue to foster economic development in developing countries as a long-term counterterrorism strategy.

In summary, suicide attacks will never disappear entirely, but they can be mitigated through a combination of military, diplomatic, and economic means. Success begins with a change of mindset and strictly limited ambitions. While these tactics operate on extended timelines, this does not preclude western democracies from using them, so long as scope and objectives are clearly defined.

References

Black, D. (2004). The geometry of terrorism. Theories of Terrorism: A Symposium, 22(1), 14-25. https://www.jstor.org/stable/3648956

Cronin, A. K. (2006). How al-Qaida ends: The decline and demise of terrorist groups. International Security, 31(1), 7-48. https://www.jstor.org/stable/4137538

Mendelboim, A., Schweitzer, Y., & Raz, I. G. (2022). Suicide attacks worldwide in 2021: The downward trend continues. The Institute for National Security Studies. https://www.inss.org.il/publication/suicide-attacks-2021/

Pape, R. (2003). The strategic logic of suicide terrorism. American Political Science Review, 97(3), 343-361. https://www.jstor.org/stable/3117613

Whitlock, C. (2021). The Afghanistan papers: A secret history of the war. Simon & Schuster.

Do No Harm: Mitigating the Consequences of Humanitarian Aid

Two years before the United States accused Zimbabwean president Robert Mugabe of weaponizing food shipments for votes, severe drought devastated the country’s agricultural sector. The resulting humanitarian crisis, coupled with hyperinflation and high unemployment, prompted the international community to respond. Yet Zimbabwe, as professors Michael Barnett and Jack Snyder point out, is a textbook example of humanitarian aid gone wrong (Barnett & Snyder, 2012, p. 148). By June 2008, stories began circulating that Mr. Mugabe’s government was withholding food relief from voters registered to the opposing party (Pleming, 2008, para. 3). At the same time, the government froze the relief efforts of all international aid groups after accusing some of them of supporting the opposition (para. 8). Zimbabwe is, in many ways, an example of how well-intended foreign aid can have unintended negative consequences. The relief effort also illustrates the peril of conflating aid with a political agenda. Humanitarian aid provides a critical lifeline to people in need, but it is essential that providers be cognizant of unintended consequences. This essay argues that an apolitical, do no harm approach with strong local support is essential to delivering effective humanitarian relief.

Twenty-five years ago, Kofi Annan (1998) passionately argued the merits of nation building and intervention as a means of mitigating humanitarian disasters and avoiding another Rwanda. This was, in some ways, a response to broad recognition that simply providing aid wasn’t good enough and often resulted in more harm than good. Getting involved militarily and spreading democracy around the world seemed the logical, if formidable, task ahead for developed nations. No one could imagine that only three years later, the events of September 11th would bog the United States down in a failed attempt at nation building; and, while the Afghan war wasn’t strictly based on relief, the humanitarian crisis under the Taliban, the rights of women, and general violence were strong interventionist incentives. As Afghanistan has shown, however, nation building is difficult.

That said, even if political transition had a functional track record, expecting an NGO to take on that responsibility is naïve. As Barnett and Snyder point out, “peace builders lack the knowledge to transform crisis-prone countries into stable, liberal, free-market societies” (Barnett & Snyder, 2012, p. 152). Further still, even if NGOs possessed the expertise to engage in nation building, they most certainly lack the resources. Consider that of the nearly $13.5 billion of federal aid to Afghanistan in 2011, only $180 million was classified as humanitarian, while over $12 billion went to governance, and most of that to conflict, peace, and security (U.S. Department of State, 2023). These numbers say nothing of the private funds from individuals and NGOs, nor of the costs of military operations in that country. Furthermore, the presence of war hardly makes Afghanistan an exceptional case, as violence almost always accompanies humanitarian disasters, whether in Afghanistan, Iraq, Sudan, or Somalia. In any event, for an NGO to cover the costs of just the U.S. federal government’s aid to Afghanistan, the humanitarian budget would need to increase by a factor of seventy-five. This provides some mathematical context to the task being suggested by peace building advocates.

While expecting NGOs and state-sponsored humanitarian efforts to take on the task of nation building is foolish, remaining apolitical, as do no harm suggests, is not so simple. Johns Hopkins University professor Soren Jessen-Petersen (2011) speaks to this in a paper published by the U.S. Institute of Peace.

This kind of response, throwing humanitarian personnel right into the middle of conflicts, has tested the security of staff and the agencies’ understanding of politically sensitive involvement, courage, and stamina. It has also severely exposed the difficulties in combining such action with the fundamental principles of impartiality and neutrality. How do you maintain your neutrality and impartiality when a forceful Sri Lanka military embarks on a final onslaught…[and] indiscriminately targets the civilian population whom you are trying to protect? (p. 6)

While Jessen-Petersen argues against the aforementioned peace building efforts, he highlights the difficulty of maintaining neutrality. Yet maintaining neutrality is precisely what must be done. As Zimbabwe showed, partiality puts humanitarian aid at risk. To reduce the possibility of misunderstanding, Jessen-Petersen proposes engaging locals in the relief effort as a way to mitigate bad perceptions. For example, more direct Afghan involvement would have helped dispel the notion that aid organizations were dominated by western interests (p. 10).

At first, sourcing aid from the region where the humanitarian crisis is occurring seems far-fetched, but allowing the communities most impacted by aid to drive how it’s used is, in fact, the best application of do no harm humanitarianism. George Washington University professor of political science Alexander Downes speaks to a ground-up approach when he says, “regime change is highly destabilizing and outcomes depend less on the good intentions or strategy of the intervener and more on the conditions in the target country that are out of the intervener’s control” (Art et al., 2023, pp. 515-521). Columbia professor of political science Séverine Autesserre (2017) also wrote extensively on this topic in an issue of International Studies Review. In it, she says the actions of locals within their communities often play a larger role in peace than those taken by international organizations. She goes on to say that foreigners often lack the knowledge or skills to interact with the population, much less build peace. For example, “Out of 140 diplomats working in the U.K. embassy in Kabul…only three [spoke the language]” (p. 125). Furthermore, international intervention can be poorly received by local communities, as was the case in Congo. Autesserre writes, “Congolese youth activists emphasize that they would prefer outsiders to leave, because international peace builders get in the way of local people trying to hold their government accountable” (p. 124). Even regime change, as Downes writes, is best left to domestic forces, as was the case in Serbia. International involvement, he says, “should be limited to diplomatic pressure in support of legitimate domestic demands” (Art et al., 2023, pp. 515-521). In short, engaging locals in the humanitarian process is the surest way to direct international resources in the most needed direction while limiting unintended consequences.

While do no harm is the most reasonable approach to delivering relief, it is not a perfect approach to humanitarian aid. As humanitarian writer Fiona Terry said, all aid has negative consequences; the best organizations can hope for is to mitigate unintended consequences as much as possible (Barnett & Snyder, 2012, p. 149). That said, there is a balance to be found between doing no harm and comprehensive peace building that will best position providers to succeed against tomorrow’s challenges. This approach begins with defining the limits of humanitarianism. Providing relief cannot be conflated with nation building, nor can aid be tied to foreign policy. As Afghanistan showed, states’ interests abroad can change rapidly with little concern for the humanitarian consequences. More importantly, aid organizations should engage apolitically with local communities and allow the citizenry to define the aid they need. While not perfect, this approach provides the best possible outcome through the intelligent application of aid. Most of all, it scales with the unknown challenges of the future.

There is no foolproof solution to humanitarian relief. In fact, there is no decision one could make that does not carry some consequence. Yet attempting to control for every variable, or expanding the scope of relief to include nation building, only reproduces the conditions that cause humanitarian disasters. Therefore, it is essential that states and NGOs involved in humanitarian initiatives accept the limits of what aid can do and attempt to limit unintended consequences as much as possible.

References

Annan, K. (1998). Secretary-General reflects on ‘intervention’ in thirty-fifth annual Ditchley Foundation lecture. United Nations. https://press.un.org/en/1998/19980626.sgsm6613.html

Art, R. J., Crawford, T. W., & Jervis, R. (Eds.). (2023). International politics. Rowman & Littlefield.

Autesserre, S. (2017). International peacebuilding and local success: Assumptions and effectiveness. International Studies Review, 19(1), 114-132. https://www.jstor.org/stable/26407939

Barnett, M., & Snyder, J. (2012). Humanitarianism in question. Cornell University Press.

Jessen-Petersen, S. (2011). Humanitarianism in crisis. U.S. Institute of Peace. https://www.jstor.org/stable/resrep12279

Pleming, S. (2008). U.S. says Zimbabwe uses food aid as weapon. Reuters. https://www.reuters.com/article/us-zimbabwe-usa/u-s-says-zimbabwe-uses-food-aid-as-weapon-idUSWAT00961420080606

U.S. Department of State. (2023). U.S. foreign assistance by country. https://www.foreignassistance.gov/

America: A Multi-Century Revolution

Popular history would argue that the American Revolution was based on taxes and parliamentary representation. However, this is not the complete story. Revolution wasn’t a singular event at a fixed point in time, nor was it limited to a handful of decades. Revolution was a long-running movement spanning multiple centuries with millions of participants, each committing a personal act of freedom. With every decision to leave their homelands, Europeans were, in a very real sense, declaring independence. This essay examines the roots of Americans’ desire for freedom and argues that what is commonly thought of as the start of the revolution was really its final act. Independence had already been declared centuries earlier; the war against the British Empire was merely its culmination.

Centuries before the Sugar Act, the Stamp Act, and the Townshend Acts, and long before the Quebec Act, European settlers waited to board ships bound for the New World. This act of leaving their homeland likely bore special significance to them. It was, after all, no small risk to cross the Atlantic Ocean for a continent populated by a people who neither spoke their language nor shared their customs. There was the basic matter of leaving a familiar, if broken, system under European rule and heading for a new land replete with danger. Likely very few Europeans imagined that by walking away, they were declaring independence from their homelands. But this was in fact what they were doing.

For many settlers, the New World represented opportunity. There they could develop a trade, own land, practice religion, and accumulate wealth (Foner, 2011, pp. 6-7). It was, in many respects, the basis of what we take for granted today: life, liberty, and the pursuit of happiness. In other words, the things many of them felt they had no access to in Europe. Fleeing their homelands meant an escape from the European class system, Papal authority, and an unfair and often brutal legal system. It represented a future that simply did not exist outside of the New World.

To put a value on these motivations, consider the risks the average European was taking. Arriving safely in the New World was no guarantee. Hurricanes, pirates, violence, and shipborne disease were just a few of the ways to die before ever catching sight of America. Once arrived upon North American shores, death in the form of disease, Native Americans, and other settlers was possible, if not likely. It was with no small amount of risk that these early settlers departed their homelands. The arduous journey they faced was not unlike that of so many other immigrants who seek American opportunities today. The fact that these settlers braved the dangers ahead speaks to how little future they saw for themselves at home. French settler Michel-Guillaume Jean de Crèvecoeur (1782) sums up this sentiment well:

To what purpose should [American settlers] ask one another what countrymen they are? Alas, two thirds of them had no country. Can a wretch who wanders about, who works and starves, whose life is a continual scene of sore affliction or pinching penury, can that man call England or any other kingdom his country? (p. 3)

Here Crèvecoeur declares that the average settler had no bond to Europe. So little future was offered back home that these settlers readily adopted an American identity. Leaving, for these settlers, was a quiet act of revolution.

By the time the Sugar Act passed into law, the American colonies were full of European immigrants and native-born Anglo-Americans, many of whom came from nothing and bore little allegiance to their parent countries. These same settlers, who had fought for English interests only a year before in the French and Indian War, were now being asked to carry its financial burden as well. The Quebec Act of 1774, which extended the province of Quebec into western lands the colonists claimed, likely carried additional bite as those colonists who had fought for England watched the monarchy give away large swaths of land. These events carry much more significance when viewed against the backdrop of a population who had fundamentally declared their independence centuries prior. They’d been asked to die in wars and pay the costs of an empire many of them felt no allegiance to. Taxes were not the reason for the American Revolution, but they may well have been its immediate cause.

In summary, immigration to the New World represents a breakup of sorts between the settler and his country. This divorcing of the old life was itself an act of independence and self-determination. It was an indictment of the European system and an embrace of the potential for something better. It was a quiet revolution that involved no armies or wars. It was conceived simply by the act of walking away. What followed nearly three centuries later was nothing more than a formality. Revolution had been building for hundreds of years.

References

Crèvecoeur, M. G. J. (1782). “What is an American?” Letter III of letters from an American farmer. https://americainclass.org/sources/makingrevolution/independence/text6/crevecoeuramerican.pdf

Foner, E. (2011). Give me liberty: An American history. W.W. Norton & Company.

Plato: A Merited but Flawed Philosophy

The great Greek philosopher Plato is said to be the father of western philosophy. He is certainly a household name and a person of interest to historians, philosophers, and political scientists alike. But his ideas have flaws, specifically regarding the obligations of citizen and state. In his dialogue Crito, Plato (Crawford, 2007) argues that citizens are bound by social contract to the state. He asserts it is one’s moral obligation to obey the law unequivocally even to the point of death, and he concludes there is no higher authority to whom citizens owe their lives than the law (p. 33). In other words, the state is the primary lender to whom we owe repayment. This essay confronts Plato’s assertions and contends that his arguments are contradictory and incomplete. Furthermore, this analysis concludes that our social contract is not governed by the law but by civic duty to improve the law.

Plato’s core argument claims the state has shaped the citizenry and therefore the citizenry owes the state a debt of gratitude in the form of service and obedience to the law (Crawford, 2007). This argument is premised on equating the influence of the state to that of a parent, teacher, or trainer and dismissing our more distant relationship to the general citizenry (pp. 26-27). It is from these intimate relationships that the state derives authority and from this authority, or influence, arises the debt of servitude (p. 30). Plato’s logic, however, is contradictory. If we consider that the state and the laws that govern it are abstractions of a distant majority (whether of lawmakers or citizens), then the state is necessarily distant. Plato ignores this conclusion and sees the state as being much closer to the citizen. Yet the state cannot be both distant and intimate simultaneously. This point of contention causes friction throughout Plato’s arguments and undermines his conclusions.

To begin with, Plato’s underlying assertions rest on two basic assumptions. First, the law raised us and shaped us into who we are. As he points out in Crito, “[S]ince you were brought into the world and nurtured and educated by [the state], can you deny…that you are our child and slave” (Crawford, 2007, p. 30). From Plato’s perspective, not only are citizens children of the state, but they are bound to it as slaves to their masters. Plato never addresses outcomes in Crito, but the contract of servitude implies that a debt is owed in exchange for something of value. This is the essence of Plato’s second assumption: laws are omnipresent, they produce equal outcomes for all people, and those outcomes are always good. This assumption implies a close relationship between the citizen and the state, but it too is flawed.

Consider, for example, that within any state a variety of divergent social conditions exist. Income inequality, crime rates, school quality, and access to social services all vary widely, yet all exist under the same set of laws. At no point does Plato’s logic account for divergent circumstances. In fact, his logic presupposes that an individual could only exist under state rule and be grateful for the outcomes. He says nothing of the fact that, for example, the institution of marriage might have brought together two horribly unqualified parents.

It is in fact impossible to achieve the outcomes Plato’s logic requires. Even if we were to assume that the law is universally applied, it is impossible to ensure equality of outcome across the board. Two citizens might attend different schools with different teachers and have very different educations. One citizen might grow up in a broken home and be disillusioned about marriage or fall into criminal activity. Plato makes no allowance for the possibility that a citizen might strictly adhere to his philosophy and draw negative conclusions about the law, or that the state might create citizens who are detrimental to society. Even if the law could be evenly applied throughout the state, outcomes are far too fluid to ensure a debt is owed. Here again, the friction between Plato’s claims of intimacy and distance reappears. It is impossible to stipulate that a debt is always owed when value isn’t always delivered.

It’s clear in Crito that Plato holds the law in high esteem, though he questions whether it is ever permissible to break the law (Crawford, 2007, p. 29). He suggests rhetorically that perhaps the concepts of justice, right, wrong, good, and evil exist outside the law. He illustrates this by posing the following hypothetical: “If I am clearly right in escaping [without the consent of the Athenians], then I will make the attempt. If not, I will abstain” (p. 29). Here Plato expresses contention between Athenian law (under which he’s being held) and a personal interpretation of justice. Plato does not indicate by what means he will conclude whether he is right or wrong, but interpretation is a fundamental aspect of the law. It is, in fact, what allows the law to evolve with society as much as it allows for variability in its application. Plato’s analysis never directly acknowledges the fluidity of law. He reconciles the question by concluding that a hierarchy of obligations exists, ordered by who ought to be wronged least. At the top of the hierarchy is the rule of law itself (pp. 29, 33).

Had Plato delved more deeply into the concept that right and wrong exist outside of the boundaries of the law, he might have realized that laws are not always just. More than two thousand years later, Henry David Thoreau (1849) would arrive at exactly this conclusion:

Unjust laws exist: shall we be content to obey them, or shall we endeavor to amend them, and obey them until we have succeeded, or shall we transgress them at once? Men generally, under such a government as this, think that they ought to wait until they have persuaded the majority to alter them. They think that, if they should resist, the remedy would be worse than the evil. But it is the fault of the government itself that the remedy is worse than the evil (para. 17).

Thoreau famously argued the merits of breaking the law, as the passage above shows. The state itself is responsible for authoring unjust laws, and it is the role of the citizen to change those laws. In fact, Plato seems to circle this conclusion, extolling citizens who commit no injustice even when an injustice has been committed against them (Crawford, 2007, p. 29). This suggests that Plato knew laws could be misapplied or poorly written, but he concluded that it is the duty of the citizen to follow these laws regardless of the circumstances under which they’ve been applied. Plato’s reasoning fails to consider the need to constantly improve the law and the role individuals have in that process. Our social contract is not to blindly follow the state but to constantly seek to improve it.

In summary, social contracts do exist, but laws and outcomes under laws are far too fluid to ensure an outcome that is worth repaying. Instead, the obligation of the citizen is to continuously improve upon the institutions that govern us, including, as Thoreau pointed out, through civil disobedience. Said differently, our contract is not to state institutions but to the act of continually improving those institutions for the benefit of future generations.


References

Crawford, T. (Ed.). (2007). Plato. Tudor Publishing Company.

Thoreau, H. D. (1849). Civil disobedience. https://xroads.virginia.edu/~Hyper2/thoreau/civil.html

Federalism: Echoes of Plato’s Paternal Guardian

When the delegates of the Continental Congress adopted the Articles of Confederation on a chilly November afternoon in 1777, likely very few expected the document to be ratified quickly. In fact, it would be more than three years before this early attempt at a governing framework was codified by all thirteen states. This wasn’t exactly a problem with the articles themselves, but it was certainly a manifestation of another problem: early attempts at federal governance required total agreement from all states. A paralyzed and powerless central government was a problem the framers and federalists both sought to resolve. Indeed, very little of the United States government requires unanimity to operate today, and it’d be easy to package the federalist movement as a rebuttal to the Articles of Confederation; but reality speaks to a much deeper philosophical debate. The Federalist Papers weren’t just about a strong central government; they advocated for the rule of law, a guardian state that ran afoul of America’s identity and confronted the values of life, liberty, and the pursuit of happiness.

More than two thousand years before American colonists declared their independence, Plato wrestled with the ideas of moral autonomy and the paternal state. He had only recently watched his mentor Socrates put to death at the hands of democratic lynchmen, and he now questioned the merits of mob rule. Undoubtedly the events surrounding Socrates’ execution greatly influenced his mistrust of the general citizenry and his belief in absolute obedience to the law. In Crito, Plato goes so far as to declare that the citizen is indebted to the state: “[S]ince you were brought into the world and nurtured and educated by [the state] can you deny in the first place that you are our child and slave” (Crawford, 2007, p. 30), he says. So sacred was the law, and by extension the state, that Plato declared them our paternal guardians. In his mind, the public was not to be trusted. It was, after all, a panel of citizens who had condemned Socrates to die. The law, by contrast, was emotionally agnostic, and in theory, state guardians could be taught to protect it. Plato had no way of knowing that his principles of guardianship would persist for millennia to come, and one day work their way into the founding documents of the world’s greatest empire.

Millennia later, the Federalist Papers sought to address certain deficiencies in the Articles of Confederation, among them entangled federal and state powers and a weak central government. This implied that a strong central government was needed but that boundaries needed to be set. At first pass these objectives may seem at odds with one another. However, as historian Garry Wills summarizes it,

Neither [Hamilton nor Madison] wanted to see the states absorbed into a national government, but neither thought it was likely. It seemed [unlikely] that a central authority could or would want to descend to enforcement of local laws. (Wills, 1982, p. xv)

Separation of powers was essential to the federalist system, and both Hamilton and Madison recognized that the federal government couldn’t do everything. Nevertheless, Americans were wary of the formation of a strong central government. Not only would this have seemed like a return to British rule, but the very presence of a central government implied authority and the rule of law. More importantly, it required that the citizen cede a little of their personal autonomy to the state. These demands were not entirely unwarranted, however. North America was, after all, far from settled. There were still active Indian, French, Spanish, and English interests in the region, and domestic instability in the form of the recently failed Shays’ Rebellion (Krutz, 2021). Certainly, a strong central government could play a useful role in international and domestic affairs; yet empowering a central government carried a scent of Plato’s paternal guardian, and such sentiments were decidedly not American. It’s not clear how many common Americans would have actively connected resistance to central governance with Plato’s guardian state, but these concerns certainly would have weighed on America’s founders. After all, “Madison did not believe in the equality of the branches of government, but in legislative supremacy” (Wills, 1982, p. xxii). In other words, the rule of law. Despite reservations that a central government could metastasize into tyranny, the debate pressed on. A central government was necessary, the federalists argued, but to what degree?

Ultimately, a strong separation of federal and state powers, such as that protected by the Tenth Amendment, carried the day. The Federalist Papers served their purpose, and the Constitution was ratified. However, for those who were wary of Madison’s view, their fears would have been codified with the formation of the Supreme Court. It was, in many respects, a final nod to Plato’s guardian state. The court was assigned to protect the Constitution and presided over by a panel of judges who, short of impeachment, could not be removed. States retained much of their autonomy, but guardianship, it seemed, would forever have a role in American government.

It’s easy to view the Federalist Papers as a logical rebuttal to the Articles of Confederation; and, in many respects, this is a fair assessment. The notion of a strong central government, after all, is academic in hindsight. Yet for the time, the ideas of central governance would have seemed overbearing and paternalistic. Indeed, the debate over how much federal government is necessary continues to this day. The persistence of this dialogue speaks to more than the structure of government; it speaks to a debate between personal autonomy and the guardian state, a debate that reaches back to the time of Plato.

References

Crawford, T. (Ed.). (2007). Six great dialogues. Dover Publications, Inc.

Krutz, G. (2021). American government (3rd ed.). Rice University. https://openstax.org/details/books/american-government-3e

Wills, G. (1982). The federalist papers. Bantam Classic.

The Inalienable Rugged American

As Charles Ingalls drove his family west across the frozen Mississippi River, he had no idea that his daughter’s stories would be immortalized as classic American history. Though they were only one family out of tens of thousands to push into America’s heartland, they existed at a time of pivotal transition from homesteading to mass industrialization. Yet despite the dramatic changes in where and how people lived, the fundamental aspects of America remained unchanged. This essay explores who and what Americans are, the threads that stitch our society together, and how the earliest aspects of our culture continue to shape who we are today.

As a furniture builder, I have some appreciation for the industrious capability of the early settlers. That is to say, I can only sense the great chasm that exists between twenty-first century woodworking and raising a family, fending off Indians, and building a homestead. It’s remarkable to consider that for Charles Ingalls, felling trees and building a cabin with no recourse except what he could provide for himself was simply part of the task of being a man. Yet tens of thousands of settlers chose the dangers and unpredictability of the frontier over the relative stability of settled lands.

It is this choice, in fact, that makes America what it is. The United States represents a type of natural selection that began with the earliest settlers and continues with modern immigration today. Consider, for example, the type of person who decides to push westward across the Appalachian Mountains with only a wagon, an axe, and his wits to see him through. Today those same people contend with deserts, rivers, and human trafficking while in search of something better. The fact is, most people, whether then or now, will avoid the frontier. For most, even an uncomfortable life is preferable to the possibility of dying. It is no wonder then that America is a collection of inwardly directed, self-reliant, anti-intellectual individuals who lack pretense and are suspicious of those who harbor it. From necessity more than intelligence, the frontier selected for the bootstrap American while rejecting all aspects of class and European hauteur. “Here there are no aristocratical families,” writes French American Michel-Guillaume Jean de Crèvecoeur. “[N]o courts, no kings, no bishops…The rich and the poor are not so far removed from each other as they are in Europe.” He continues, “The rich stay in Europe; it is only the middling and the poor that emigrate” (Crèvecoeur, 1782).

In this sense, America divested its people of class by virtue of geography, while simultaneously instilling a great value for self-reliance and inner-direction. This inner-direction, as sociologist David Riesman (1963) wrote, is defined by an increase in personal mobility, accumulation of wealth, expansion, exploration, and colonization (p. 14). Riesman goes on to say, “the source of direction for the individual is ‘inner’ in the sense that it is implanted early in life by the elders and directed toward generalized but nonetheless inescapably destined goals” (p. 15). In other words, the modern traits of American exceptionalism are ingrained in us by prior generations and ultimately those forged by the frontier. In this way, the traits of the most adventurous settlers echo through modern American society.

The frontier, therefore, dispelled all notions of intellectualism by rejecting whatever debate might have been associated with the luxury of the upper class. As Crèvecoeur pointed out, these were not the people to make the journey in the first place; but had they crossed the Atlantic Ocean en masse, they would have found little utility for European intellectualism on the frontier. Indeed, America’s anti-intellectualism, cultivated by the harsh reality of the wilderness, persisted well into the twentieth century. Historian Richard Hofstadter (1966) cites President Eisenhower, who defined an intellectual as “a man who takes more words than are necessary to tell more than he knows” (p. 10). This sentiment illustrates the collision between academia and American independence. Ironically, Eisenhower, a West Point graduate and hard-nosed Texan, embodied this dichotomy himself. Yet, like Riesman’s inner-direction, America’s anti-intellectualism is born of a frontier where a man’s wits, not his social status, were often the difference between life and death. This environment cultivated a distaste for the superfluous as much as it fueled America’s expansion and industrial might. As the frontier of America’s wilderness diminished, we turned our attention, with no less rugged individualism, to the frontiers of industry and the world.

In summary, Americans are, in a very literal sense, a product of our environment, formed both by our geography and by the generations who raised us. America’s culture is not homogeneous in a sense that binds us together in groups; instead we are bound to one another as individuals. Though modern technology has radically changed how Americans live, there is still no escaping the individualism and self-reliance of our ancestors.

 

References

Crèvecoeur, M. G. J. (1782). “What is an American?” Letter III of letters from an American farmer. https://americainclass.org/sources/makingrevolution/independence/text6/crevecoeuramerican.pdf

Hofstadter, R. (1966). Anti-intellectualism in American life. Vintage Books.

Riesman, D., Glazer, N., & Denney, R. (1963). The lonely crowd: A study of the changing American character. Yale University Press.

Western Democracy: Of Plato and Dahl

Sometime after the year 400 B.C., as Plato finished the last of his great dialogues, he had no idea that some twenty-three centuries later a western liberal democrat named Robert Dahl would challenge his ideas. It may have struck him as ironic that scholars would declare his philosophy the basis on which modern democratic principles were formed, but if Plato were alive to listen to these arguments, he might have heard an echo of his own voice. This essay contends that while Robert Dahl’s worldview is fundamental to the existence of western democracy, Plato’s principles of guardianship are not without merit. Dahl may have been more correct in valuing individual liberty, but as Plato understood, unchecked liberty is dangerous, even anarchic, and in need of a guardian.

In his book Democracy and Its Critics, political science professor Robert Dahl confronts several common challenges to democratic principles. It is, in many respects, a defense of democracy and a qualified counterargument to Plato’s views. But it’s not all disagreement. In fact, to declare a winner in this debate would be somewhat foolish. The ideas of liberty and guardianship are not mutually exclusive; they are in many ways interdependent. The Supreme Court, for example, is precisely a manifestation of Plato’s guardian state. Dahl (1989) brushes over this fact, but consider that America’s founders devoted one third of governing power to a judiciary that is neither elected nor easily removed. Furthermore, the essence of a representative democracy distances the people from the political process. Representatives, while elected, do not run every decision by the voting public. They are entrusted, to a degree, to make decisions on behalf of the people who elected them. In short, Plato didn’t have it all wrong, and to separate guardianship from democracy is to remove the rule of law and fundamentally alter the western political process.

The shared ground between Plato and Dahl goes further than the arrangement of American democracy. One could argue that their philosophies arise from a common understanding of morality and diverge when each decides what to do about it. Regardless, this common origin directly influenced the ideas of both men, the ideas of western democracy, and indeed the structure of our democratic systems. It is, as Dahl writes, the very justification of democracy to “live under laws of one’s own choosing, and thus to participate in choosing those laws [that] facilitate the personal development of citizens as moral and social beings” (Dahl, 1989, p. 91). He continues more poignantly,

I believe the reasons for respecting moral autonomy sift down to one’s belief that it is a quality without which human beings cease to be fully human and in the total absence of which they would not be human at all (p. 91).

Dahl contends that it is democracy itself that teaches self-reliance, self-worth, and independence (p. 92). At first pass, these positions might seem at odds with Plato’s view of guardianship, but in fact they’re highly complementary. Like Dahl, Plato recognized that moral autonomy exists. He questions whether life would be worth living if the aspects of our intellect that benefit from justice are corrupted, going so far as to declare our sense of justice to be “[f]ar more honored [than the body]” (Crawford, 2007, p. 27). Furthermore, the very essence of Crito is our inner debate over whether to obey the law. It is the ambiguity of this debate, and the potential for unchecked liberty, that necessitates a legal guardrail.

Admittedly, it’s difficult as citizens of a western democracy to be wholly impartial when evaluating the virtues of the democratic system. Certainly self-reliance, liberty, and the freedom to pursue self-interests are fundamental to the American way of life, and it’s difficult to argue that Dahl didn’t have it right in his description of western liberalism. However, here again, he and Plato converge at the limits of liberty and self-direction. To better understand this convergence, consider that many of the core ideals Dahl identifies as uniquely democratic (self-reliance, self-determination, and independence) are also fundamental aspects of anarchic autonomy. The anarchist, as Dahl points out, is compelled by moral obligation to evaluate the laws he follows and obey those he chooses but never the ones he rejects. Personal responsibility, he writes, cannot be forfeited by the anarchist (Dahl, 1989, p. 43).

The common ground between anarchy and democracy may have been what worried Plato. After all, the lines between direct democracy, mob rule, and anarchy are quite blurred. It’s not hard to imagine the chaos that would ensue if every citizen were free to exercise their moral autonomy to whatever degree they saw fit. Certainly, the perpetrators of the worst atrocities in human history all felt morally justified. So, while the importance Dahl places on liberty and self-determination is correct and fundamental to democracy, its furthest extremes lie in chaos. Plato surely understood this as the basis for the law but also the reason to protect the law from despotic forces.

Such circumstances may seem theoretical, but they’ve played out in recent American history. In 1957, three years after the Supreme Court’s ruling in Brown v. Board of Education, which desegregated public schools, mobs of southern whites backed by the Arkansas National Guard took to the streets of Little Rock in protest. In response, President Eisenhower sent in the Army and later addressed the nation:

The very basis of our individual rights and freedoms rests upon the certainty that the President and the Executive Branch of Government will support and insure the carrying out of the decisions of the Federal Courts, even, when necessary with all the means at the President’s command…The interest of the nation in the proper fulfillment of the law’s requirements cannot yield to opposition and demonstrations by some few persons. Mob rule cannot be allowed to override the decisions of our courts (Eisenhower, 1957).

By his own admission, Dahl declares moral judgements to be necessarily ambiguous. To assert that such truths exist in the same sense as mathematical proofs or the laws of physics, he writes, is patently false (Dahl, 1989, pp. 66-67). This admission, which Dahl meant as a blow to guardianship, is actually an endorsement of Plato’s reliance on the law. Surely southern whites felt morally justified in their actions and, in an anarchic sense, evaluated which laws were worth disregarding before mobilizing. There were certainly many more Americans who exercised the same moral autonomy and arrived at a different conclusion. However, it is specifically because moral judgements are subjective that a set of common laws is required. As this example shows, the sterile rulings of the Supreme Court must be insulated from the fervor and passion of mob rule. In a very real sense, some form of legal guardianship is necessary to protect civil society from the very people who inhabit it.

For all their similarities, Plato and Dahl diverge radically in their conclusions on the nature of man and the role of government, and clearly the debate between them cannot be easily settled. However, western democracies like the United States could not exist without the precepts of liberty and individualism. In these aspects, Dahl had it right and, given the undeniable success of democracy in producing the world’s greatest empire, it’s hard to argue that a superior system exists. That said, Plato understood that unchecked liberty leads to chaos and destruction, and therefore an impartial legal system, a guardian, is critical to protecting against mob rule.


References

Crawford, T. (Ed.). (2007). Six great dialogues. Dover Publications, Inc.

Dahl, R. A. (1989). Democracy and its critics. Yale University Press.

Eisenhower, D. D. (1957). Radio and television address on the situation in Little Rock. Dwight D. Eisenhower Presidential Library. https://www.eisenhowerlibrary.gov/media/3883