Philosophy

The Ethics of Performance Enhancing Drug Use in Baseball

In late September 1998, Mark McGwire hit his 70th home run of the season. It was four more than rival Sammy Sosa and nine more than the prior record of 61, which had stood for nearly forty years. A little more than a decade later, however, McGwire would admit to using steroids en route to his record-setting season, offering apologies to fans, baseball commissioner Bud Selig, and the family of prior record holder Roger Maris (Weisbaum, 2010). Though McGwire and a handful of other players would become the face of baseball’s steroid era, responsibility for one of the league’s most prominent scandals falls equally to the Players Association, the league office, and the owners. Morally, however, baseball’s decision not to regulate performance-enhancing drug use amounts to a breach of contract between the fan and the game. Though the league has never admitted to tolerating steroids to rejuvenate baseball following the 1994 strike, it was only after steroid use became public and attendance numbers had recovered that the league instituted a formal testing policy. This essay, therefore, explores the ethical theories and implications of baseball’s involvement in the steroid scandal, its outcomes, and how future circumstances could be handled differently.

Background and Ethical Dilemma

The decline in attendance following the players’ strike in 1994 is widely viewed as a pivotal moment in the league’s handling of steroids; the fact is, however, that performance-enhancing drugs (PEDs) were not a new concern for baseball. The steroid era is widely viewed as originating in the 1980s, and baseball’s attempts to limit illegal drug use, though weak, date back to the 1970s. The strike nonetheless brought doping into focus, as owners faced the twin crises of declining attendance and accelerating PED use. Moreover, the anabolic steroids that typified PEDs during that time were controlled substances subject to federal regulation (Mitchell, 2007). In other words, unauthorized distribution and possession of these drugs was illegal, regardless of baseball’s position on the matter. The league therefore had an incentive to moderate performance-enhancing drug use, if for no other reason than player safety and the rule of law. Furthermore, the strike presented an opportunity for both sides to negotiate a solution to PEDs before they became a problem.

At the same time, baseball appeared to be benefiting from enhanced player performance. Attendance and revenue had climbed steadily throughout the steroid era, and by 1993 over 70 million people had attended a game, compared to fewer than 30 million twenty years prior (BBR, n.d.). Team payrolls had topped $900 million for the first time in league history (BBR, n.d.), and baseball was quickly becoming the domain of rich players and wealthier owners. It’s somewhat unsurprising, therefore, that as the players and owners began negotiating a new labor agreement, the two sides prioritized money over a comprehensive drug policy (Mitchell, 2007). Nonetheless, the negotiations failed to reach an agreement, and in August of 1994 the players went on strike.

Major League Baseball now had two competing problems. When play resumed in 1995, attendance, which had climbed steadily for decades, fell by over 30% (Pantuosco, 2011); consequently, the league faced a dramatic reduction in revenue. The second problem was the proliferation of illegal performance-enhancing drugs. On the one hand, the league could maintain the hands-off approach to PEDs that had accompanied baseball’s rise over the prior two decades and trust that enhanced performance would renew interest; on the other, it could abide by federal law and implement new drug policies that would expose steroid use and further risk the fans’ relationship with the game. Such a dilemma pits aspects of utilitarianism, relativism, and Kantian ethics against one another. However, as will be discussed, the players’ financial priorities and baseball’s recovered attendance figures suggest that no clear-cut lines of moral superiority exist in baseball’s steroid scandal.

Solutions Implemented

In the early months of 1994, then-commissioner Bud Selig proposed adding steroids to the league’s list of banned substances and instituting a formal testing program for players suspected of doping. This was intended to augment an earlier, informal agreement reached in 1984, under which players could be tested if suspected of illegal drug use; however, neither proposal defined what constituted reasonable cause, making each subjective and difficult to enforce. The league had also made numerous attempts to institute random drug testing over the preceding decades, but these policies had focused primarily on recreational drugs, not performance-enhancing substances. Regardless, all of them, including the 1994 proposal, were rejected by the players (Mitchell, 2007).

That said, though the owners appeared to support drug policies, their commitment to enforcing them was weak and easily displaced. For example, future commissioner Rob Manfred, baseball’s chief labor negotiator at the time, admitted that after players rejected the 1994 proposal, determining how revenue would be divided between the players and owners took precedence over setting drug policy. These priorities would remain in place until the next labor contract was negotiated in the early 2000s (Mitchell, 2007, pp. SR-11, 25, 43–44). Baseball’s initial solution to the attendance and steroid issues, therefore, was to maintain the status quo and prioritize money over doping policy.

It wasn’t until 2001, after rumors of steroid use had been picked up by the media, that baseball began a pilot testing program in the minor leagues. That program was ratified by major league players in 2002 and formally enforced beginning in 2004. By 2005, baseball’s drug policy had been updated a final time: a first positive test was punishable with a 50-game suspension; a second failed test carried a 100-game ban; and a third infraction resulted in a permanent suspension. In all cases, the player’s name would be released publicly (Mitchell, 2007). On April 3, 2005, twenty years after the start of the steroid era, Alex Sanchez became the first player to receive a suspension under baseball’s new PED policy (BA, n.d.).

Results

It wasn’t until 1998, a year after the new labor agreement was signed, that rumors first began circulating that St. Louis Cardinals slugger Mark McGwire was taking steroids. At the time, McGwire and fellow slugger Sammy Sosa were pursuing baseball’s single-season home run record, one of the game’s most prestigious. Indeed, the drama of the home run chase sparked renewed interest in the game. Pantuosco (2011) writes that the competition between McGwire and Sosa provided the injection of interest that baseball’s owners desired. League-wide attendance numbers further support that assessment: by 1998, over 70 million fans attended games, a number not seen since 1993 (BBR, n.d.). Furthermore, the rise in attendance was accompanied by a “dramatic increase” in offensive output throughout the league in 1996 (Mitchell, 2007, p. SR-14). Renewed interest and offensive output, therefore, positively reinforced baseball’s decision not to control doping and instead prioritize money and performance.

There were negative consequences as well, however. Numerous incidents in which steroids, packaging, and syringes were found by team personnel, law enforcement, and others culminated in a federal investigation into an illegal PED distribution center with alleged connections to high-profile stars, including Barry Bonds and Jason Giambi (Mitchell, 2007, p. SR-17). The BALCO investigation, as it was known, ultimately spawned congressional hearings and an independent investigation led by Senator George Mitchell (2007), marking the culmination of baseball’s worst scandal.

Certainly, baseball’s approach to steroids had far-reaching implications. On the one hand, it resulted in an era of historic offensive output and rejuvenated fan interest. Additionally, some of baseball’s biggest stars made millions in endorsements as a result of their performances (Pantuosco, 2011, p. 60). On the other hand, baseball traded a PR problem for a moral dilemma. Mitchell (2007) writes, “[t]he problem of performance enhancing substance use in baseball has shaken the faith of many baseball fans in the integrity and fairness of…the records that have been achieved during what has come to be known as baseball’s ‘steroids era’” (p. 12). It has also resulted in players of that era being kept out of the Hall of Fame: nearly 20 years after baseball instituted its formal drug-testing policies, no player linked to performance-enhancing drugs has been inducted (Uberti, 2016). It must be stated, however, that since baseball instituted its new policy, incidents of steroid use have declined dramatically, though this may simply be due to players opting for newer, non-detectable substances (Mitchell, 2007).

Analysis

It’s easy to criticize baseball’s handling of steroids following the 1994 strike, but as discussed above, the short-term results were exactly what the owners wanted. Mitchell (2007) writes that the league’s initial response to steroids was slow and ineffective, but this is only the case if the outcome is judged by a reduction in steroid use. Both the players and the owners made clear through their negotiations that their chief concern was money, not drug policy. More importantly, both sides chose to prioritize money over player health and the integrity of the game. These realities provide important insight into baseball’s decision-making process and whether its decisions were appropriate.

Under the best interpretation of the owners’ intentions, a utilitarian argument could apply. Baber (n.d.) summarizes utilitarianism as preferring those actions that produce the best overall consequences and the least pain. Baber continues, writing that modern, rule-based utilitarianism instructs societies to adopt only those rules that produce the greatest good for all. Hindsight certainly allows one to judge baseball’s decisions over an ever-lengthening period of time. However, leading up to the strike of 1994, it was clear that the players, by voting down various drug policies, preferred the benefits of enhanced performance to the health risks of PEDs. The boost in offensive output and endorsements are aspects of such benefits. Furthermore, enhanced performance provided a more compelling version of the game to millions of fans; indeed, baseball’s attendance toward the end of the 1990s supports this conclusion. It’s also true that baseball’s attendance figures did not dramatically decline following the steroid scandal and have held steady for much of the last 20 years. Importantly, over that same period, league payroll has climbed to over $4.6 billion (BBR, n.d.). It’s difficult to argue, therefore, that from a utilitarian perspective the greater good wasn’t served. Player health would constitute a minor stake in the overall good brought to fans, players, owners, and advertisers. And to the extent that baseball’s laissez-faire approach to steroids damaged the integrity of the game, it hasn’t resulted in decreased attendance. On these grounds, therefore, baseball’s decision to defer testing was appropriate.

In many respects, baseball’s steroid policy is a collaboration between players and owners. Their initial decision to act (or not to act) was mutually agreed upon and largely driven by financial concerns. By the time the steroid scandal broke and the BALCO federal investigation had begun, the two sides had mutual incentive to formulate a comprehensive drug policy, for both legal and PR reasons. The cooperation between players and owners was therefore equitable, presenting a strong Kantian argument that their decisions were moral. In the early years of steroid use, the players and owners knew the risks of PEDs, and both chose money over doping controls. As the steroid scandal gained momentum, a comprehensive doping policy became both legally and financially prudent. Therefore, neither side used the other as mere means. On the other hand, while the relationship between the players and owners might have followed Kantian and utilitarian principles, certain moral and ethical deficiencies arise when considering the league’s obligation to the fans.

To begin with, there’s a legitimate Kantian argument to be made that steroid use constitutes a lie. Gendreau (2015) writes that an athlete’s physical performance is central to their public persona. Moreover, that public persona is shared, interpreted, and written in part by the fans; in this way, the fans share in who the player is (p. 516). By using PEDs, the player has deceived the fans about the nature of their physical performance and violated their Kantian obligation to always tell the truth. Moreover, as Rachels and Rachels (2022) write, Kant placed particular emphasis on the intentions behind one’s actions. By failing to disclose illegal drug use, the owners intentionally misled fans in pursuit of renewed profits. The fans’ participation in the player’s narrative, and the fact that financial transactions are central to that participation, imply an obligation on ownership to conduct business in an honest, transparent manner. By failing to disclose PED use, ownership breached this contract while using the fans as mere means. In this way, the relationship between the league and the public was inequitable.

While Kantian principles provide a strong basis for both criticizing and defending baseball’s decisions, cultural relativism provides an alternative defense. Consider, for example, that players have estimated that anywhere from 20–50% of their peers used PEDs (Mitchell, 2007). Though this number is unverifiable, it suggests a culture of drug use amongst MLB players that goes beyond official estimates. Strulik (2012) argues that not only does such a culture compel athletes to dope, but it produces two fundamentally different moral frameworks: one for the fans and another for the players (pp. 541–542). Few parallels exist in public life for the competitive pressure players face. To fans, baseball is a reprieve from daily life; to players, it’s their livelihood. Not only can doping represent a cultural norm within sports, but it may be necessary to remain employed. Such cultural relativism stands in stark contrast to the moral framework of fans, who view sports as a diversion and players as elite athletic performers. Indeed, relativism would be a compelling argument if not for the fact that the league (and players) profit from a dishonest representation of themselves. Therefore, while the players and owners acted ethically with respect to each other, both failed in their moral obligation to the fans, upon whom they depend.

Finally, to the extent that fans may want to claim the moral high ground, such a position is difficult to support. If voting with their feet is any indication of their morality, the 1994 strike proved more salient than performance-enhancing drug use. As mentioned, baseball’s attendance numbers recovered and barely dipped when the steroid scandal was at its height. If anything, fans’ continued patronage brings the Kantian inequity of the steroid era into balance. Regardless, though baseball may claim this argument vindicates its handling of PEDs, that handling nonetheless constitutes a breach of contract: the league’s decisions were dishonest whether the fans were broadly outraged by them or not. That said, baseball took an enormous risk in not curtailing steroid use before it became a scandal. There was no reason to think fans wouldn’t react badly to player drug use, given how poorly they’d reacted to the 1994 strike. It’s also not clear to what extent broken trust lingers or might surface should baseball encounter a new controversy. For these reasons, the preferable policy would have been one of prompt and aggressive anti-doping regulation.

Furthermore, if baseball viewed offense as a means of reinvigorating the sport, it might have considered changing the game itself. For example, baseball’s recent implementation of bigger bases, pitch clocks, bans on certain defensive alignments, and other rule changes have all proven beneficial to the game. It’s worth noting that today’s changes do not entirely fit the historical context of baseball thirty years ago. Nonetheless, there were aspects of the game itself that could have been changed, with the players’ agreement, without violating their obligation to the fans. In this way, the league would have avoided the ethical dilemma it created by neither disclosing nor curtailing steroid use.

In summary, while baseball’s decision to defer a comprehensive drug policy did not violate codes of ethics between players and owners, it did violate their Kantian obligation to the fans. By not disclosing steroid use, the league misrepresented the game to the public upon whom it depends. Furthermore, though hindsight suggests the gamble paid off, baseball couldn’t have known at the time that it would. Therefore, while the outcomes of the league’s decisions can be justified on utilitarian principles, the correct course of action would have been to proactively establish a comprehensive drug policy and avoid the risks of a PED scandal.

References

BA. (n.d.). Steroid suspensions in Major League Baseball. Baseball Almanac. https://www.baseball-almanac.com/legendary/steroids_baseball.shtml

Baber, H. (n.d.). The nature of morality and moral theories. University of San Diego. https://home.sandiego.edu/~baber/gender/MoralTheories.html

BBR. (n.d.). Major league miscellaneous year-by-year averages and totals. Baseball Reference. https://www.baseball-reference.com/leagues/majors/misc.shtml

Gendreau, M. S. (2015). Who? Moral condemnation, PEDs, and violating the constraints of public narrative. Ethical Theory and Moral Practice, 18(3), 515–528. http://www.jstor.org/stable/24478637

Mitchell, G. J. (2007). Report to the commissioner of baseball of an independent investigation into the illegal use of steroids and other performance enhancing substances by players in Major League Baseball. Major League Baseball. https://files.mlb.com/mitchrpt.pdf

Pantuosco, L. J. (2011). Does it pay to be unethical? The case of performance enhancing drugs in MLB. The American Economist, 56(2), 58–68. http://www.jstor.org/stable/23240392

Rachels, J., & Rachels, S. (2022). The elements of moral philosophy (10th ed.). McGraw-Hill Higher Education (US). https://bookshelf.vitalsource.com/books/9781264998692

Strulik, H. (2012). Riding high: Success in sports and the rise of doping cultures. The Scandinavian Journal of Economics, 114(2), 539–574. http://www.jstor.org/stable/41679520

Uberti, D. (2016). Baseball writers face moral dilemma in hall of fame vote. Columbia Journalism Review. https://www.cjr.org/analysis/baseball_writers_annual_ritual_of.php

Weisbaum, W. (2010). McGwire apologizes to La Russa, Selig. ESPN. https://www.espn.com/mlb/news/story?id=4816607

The Moral Case for Edward Snowden

Acts of espionage present curious ethical dilemmas. In the case of Edward Snowden, he acted because he believed his actions served the greater good, both for civil liberties and for the existence of the country. On the one hand, utilitarian morals certainly apply: civil liberties form the foundation of this country and broadly benefit the lives of most Americans. On the other hand, Snowden makes subtle appeals to idealistic American values that appear deeply personal and subjective. His desire to help Iraqis escape tyranny and his disillusionment with American foreign policy (Winig, 2014, p. 3) capture this sentiment. However, it’s not clear to what extent moral frameworks are equipped to judge actions taken in defense of ideas. Morality is often discussed as it applies to our interactions with one another, but seldom do we consider how the values of ideas and of individuals stack up against each other. For example, do the ideas of civil liberties and unalienable rights hold value apart from their impact on people’s lives? At first, it seems they don’t. After all, Kant, Mill, and others attempted to project definitions of morality that imply universal truths, agnostic of cultural context, and all fail on these same grounds. And yet, people are willing to die for their countries and the ideas that define them.

Such a perspective casts Snowden’s actions and the national conversation that followed in a new light. At root was a conversation about the type of society we want to have and how willing we are to defend those values. However, it’s not obvious that this is a moral conversation. The fact that the NSA failed to disclose the true nature of the Patriot Act to Congress (and, by extension, the American public) suggests an ethically dubious footing (Winig, 2014, p. 1). And Obama’s subtle conflation of transparent policy with disclosing specific anti-terrorist activities is, at best, intellectually dishonest (p. 9). Certainly, members of the state establishment would defend their secrecy (or lie) on utilitarian grounds; defending the country is, after all, defending the greater good. It is not clear, however, at what point defending that greater good transgresses into a personal quest for power. For this reason, bringing the public into the conversation is a way to check the power of an overly intrusive government.

Yet for this to be a conversation about morals, we must agree that civil liberties, and the qualities that make America what it is, hold value. Baber (n.d.) writes that morality is a system for determining right from wrong and that ethics is the study of that system. I would propose, however, that ethics describes specific actions while morals govern the context of those actions. In that sense, the government’s failure to disclose its domestic surveillance program was unethical, even if the program’s existence was morally justifiable. The NSA’s defense, therefore, could invoke utilitarian principles even while acknowledging a Kantian obligation to tell the truth. Snowden, on the other hand, would likely argue that not only was the failure to disclose unethical, but the program itself was immoral. Without civil liberties, there is no America; therefore, actions that violate those norms cannot be done in service to the greater good.

Clearly, these conclusions rest in large part on the value one places on the premise of America. Snowden, for example, makes numerous references to liberties, turnkey tyranny, and the general good of the American people in his communications both before and after his disclosure (Winig, 2014), suggesting a strong belief in America. This moral subjectivism, or right and wrong as determined by the individual, rejects formal premises of morality (Baber, n.d.). Alternatively, Snowden could mount a utilitarian defense; however, his inability to know the outcome of his decisions prevents him from accurately honoring the greater good. His decisions, as General Dempsey argued, might have caused far more harm than benefit (Winig, 2014, p. 11), in which case he could only base his actions on subjectivism, even if he believed he was invoking utilitarian principles. Nonetheless, Snowden’s decisions have proven beneficial in subtle and important ways.

The moral worth of civil liberties may not be obvious, but that does not mean they lack value. It may be that, like all moral frameworks, they have value simply because we say they do. The why behind our sense of moral value, therefore, isn’t so important; what matters is whether we broadly agree that these ideas hold value. In that sense, Snowden’s decision put the idea of America up for public vote. It provided Americans an opportunity to reaffirm that the legal and philosophical bases of western governance still hold moral worth. Moreover, the fact that, as of 2016, over 56% of Americans aged 18–34 viewed Snowden favorably (Winig, 2016) suggests many young people hold similar values. More importantly, this shared acceptance of value allows us to draw firmer moral evaluations. Because Americans value our civil liberties, they must be considered in our evaluation of the greater good. The government’s failure to disclose the NSA’s spying program, therefore, jeopardized the idea of America and the value Americans place in it. The NSA’s actions fail not only on Kantian grounds but on utilitarian principles as well. Though Snowden couldn’t have known the utilitarian implications of his actions with certainty, we can still say he acted morally, even if from a subjective point of view.

In summary, the national conversation that followed the Snowden leaks was not simply about the legality of his actions; it put the idea of America up for debate. It also helped to establish the moral worth of the laws and civil liberties that govern society. From this, a measure of moral clarity can be established: while Snowden may have invoked subjective notions of right and wrong, he ultimately acted morally.

References

Baber, H. E. (n.d.). The nature of morality and moral theories. University of San Diego. http://home.sandiego.edu/~baber/gender/MoralTheories.html

Winig, L. (2014). Hero or traitor? Edward Snowden and the NSA spying program. Harvard University. https://case.hks.harvard.edu/hero-or-traitor-edward-snowden-and-the-nsa-spying-program/

Winig, L. (2016). Hero or traitor? Edward Snowden and the NSA spying program (sequel). Harvard University. https://case.hks.harvard.edu/hero-or-traitor-edward-snowden-and-the-nsa-spying-program-sequel/

The Sun Still Orbits the Earth in Utopia

Evening sun passed through the Parthenon, casting a two-dimensional lattice upon which Plato walked. The philosopher muttered quietly to himself, gesticulating in moments of thought that coalesced around the notion of an idyllic city-state. His mentor and friend Socrates was long dead, executed at the hands of a democratic mob, and it was now against his memory that Plato formed his ideas. The state was not simply a governing body; it was the cultivator of citizens, a parent, a mentor, and a guardian. A state did not a great people make; a great state with great institutions made the citizenry. It was therefore with some irony that Plato extolled the higher virtues of human intellect and deplored anything that might pollute the mind. The people, it seemed, were agents of great intellectual potential but could not be trusted with self-governance (Crawford, 2007). It’s unlikely, however, that Plato included himself amongst those requiring a parental guardian. He, after all, was the author of the idea. If only everyone possessed his capacity for intellectual reflection, society might well enter the utopian bliss he sought to create. This inherent selfishness would be mirrored thousands of years later by Marxist revolutionaries like Stalin and Castro, who used such utopian ideas to propel themselves into positions of elite power. Indeed, neither Stalin nor Castro had any intention of condemning himself to a life of dreary factory work. The revolution, and utopia itself, were about them; the legions of proletarians were simply useful idiots in a personal quest for power. This essay examines utopian concepts in politics, society, and social media, and explores the thesis that utopian thinking is inherently, perhaps unavoidably, self-serving and narcissistic.

Much like their Marxist incarnations, utopias are dramatic asymmetries of power masked by curtains of egalitarianism. The Marxist manages to hide this fact in plain sight, notably excluding the elite from their blistering ripostes of capitalism and instead focusing their ire on the bourgeoisie. Plato was similarly partial to authoritarian governance. As Yves Charbit notes, “[T]he statement that Plato was concerned with ‘equality’ is debatable, and at the very least, should not be understood in the sense of democratic equality. For him, Sparta, not Athenian democracy, represented the most accomplished political model” (Charbit & Virmani, 2002, p. 211). Plato’s guardians, an elite cadre of intellectuals skilled in logic and unpersuaded by passion, formed the center of an exclusive upper tier of society. Guardians were imagined to possess the virtues sought in an idyllic citizenry, most notably wisdom and reason. Only warriors were similarly esteemed, and both were privileged in the community (p. 223). While Plato never explicitly identifies himself as a guardian or a warrior, these ideas are products of his inner dialogue and suggest that his ideal state reflects his self-image. Plato’s utopia, therefore, is his utopia. It not only reflects his values but demonstrates the requisite arrogance to impose his values on others. Plato presumes to know what is in everyone’s best interests and denies citizens the ability to make such decisions for themselves. In fact, such presumptions are demonstrated in today’s politics, social movements, and popular culture. They manifest in populism, communism, and identity politics. At the center of these dynamics, however, whether group-based or individual, is an inherently self-centered motivation.

Utopia, exclusivity, power, and revolution are inseparable. Utopian concepts also employ universally simplistic thinking. Sadeq et al. (2011) write that utopian thinkers portray the idyllic state as a living reality, enjoyed by all citizens. The end state is so obvious, they write, that it is taken for granted, with little thought given to how it would be implemented or maintained (pp. 131–132). In other words, utopia simply works, and almost magically so. More’s vision of a feudal labor system in which everyone works the fields (p. 138) captures this quite well. In his idyllic state, however, More fails to question the basic assumption that trade work holds equal value – or any value – to all people. Such a society might be quite dystopian for large numbers of people, yet this possibility doesn’t enter into More’s thinking. Therefore, much like Plato’s managed society, one must ask whose utopia More is describing. Has he based his imaginings on empirical research and collective study, or is he simply articulating what would make him happy?

Similarly flawed thinking manifests in utopian concepts of power and hierarchy. For example, in a study of narcissism in right-wing populist movements, researchers Golec de Zavala and Keenan (2021) write that members of populist movements see themselves as exceptional and entitled to privileged treatment, self-righteously viewing themselves as the only true representatives of national interests. Narcissistic populism, they write, is not about social justice and equality but rather entitlement and privilege (pp. 2–3). This collective narcissism blends with individual perceptions of self, and personal status is often the underlying motivation for broader nationalist concerns (pp. 2–4). In this way, the left and right wings share common ground. For example, Betty Glad (2002) writes that Stalin and Hitler both displayed grandiose yet insecure perceptions of self, describing both as malignant narcissists possessing superegos with insatiable appetites for personal glory. Stalin portrayed himself as the creator of a communist world order, while Hitler envisioned himself the founder of a pure Germanic utopia, often comparing himself to Jesus (pp. 2, 5, 20). While Stalin, Hitler, Plato, and More each sought to advance their vision of an ideal state, Hitler and Stalin wrapped their personal motivations in national political movements. It’s important, therefore, to consider whether individual narcissistic behavior manifests at the group level and how each might influence the other.

To begin with, utopias are collectives of sameness. This uniformity was captured quite well in the 2014 film The Giver (Noyce, 2014), which depicts a utopian society of enforced equality, surveillance, and tightly controlled social norms. Society is governed by a single matriarchal authoritarian (played by Meryl Streep), whose knowledge of life before utopia is debatable but whose singular authority is not. In their paper on narcissism and political affiliation, Hatemi & Fazekas (2018) summarize such political narcissism, writing that political ideologies and attitudes are not just about how one should live, but about demanding that all others live the same way. This arises from a definition of narcissism not simply as concern with oneself, but as an agglomeration of self, views of others, modes of thinking, and motivations that guide behavior and become part of individual identity (pp. 873-875). Golec de Zavala & Keenan (2021) arrived at a similar bidirectional conclusion, noting that collective narcissism merges group and personal identity, often compensating for undermined self-importance with group affiliation (p. 4). In other words, narcissistic behavior not only manifests at the group and individual levels, but each level interacts with the other.

Such interactions are evident in today’s social media landscape. Consider the microblogging site Tumblr, for instance. Andre Cavalcante (2019) likens the site to a “queer utopia,” and users describe it as a “queer bubble” where one can lose oneself and fall into a black hole (pp. 1715-1716). Tumblr, like all utopian concepts, represents a type of safe space, one where conformity and a lack of dissenting opinion are expected. For utopians, safe spaces are synonymous with ‘people like me,’ which mirrors the uniformity of sameness expected in utopia. For example, Cavalcante writes that Tumblr is a vortex of similar thought and is nearly devoid of dissenting opinions (pp. 1727-1729). This sameness extends to the use of language as well. For example, users of Tumblr note that there is no need to defend concepts or words and that the site’s queer voice drowns out non-conforming views (p. 1727). This conformity and loss of self suggest a merging of group and personal identity through common language, experience, and thought, as summarized by Hatemi & Fazekas (2018).

Like spatial utopias, digital utopias possess revolutionary ideation. Cavalcante (2019), for instance, reviews prior research indicating that users of digital utopias are not merely content with behavioral and political change but seek to bring about social revision as well (p. 1723). They carry an expectation of group accommodation. As one user put it, “people [on Tumblr] get it. And if they can, everyone else should” (p. 1725). This collective self-consciousness closely parallels the collective narcissism observed in right-wing populist movements by Golec de Zavala & Keenan (2021), who note that collective narcissists are driven by a belief that their group is unique, exceptional, entitled to privileged treatment, and insufficiently recognized by others (pp. 2-4). Furthermore, Hatemi & Fazekas (2018) write that, with respect to identity politics, the demands for attention, benefits, and implied superiority arguably reflect the exhibitionist dimension of narcissism, that is, the expectation that greater attention be paid to one’s wishes, needs, opinions, and values (pp. 874-875). Finally, the proclivity of some members to speak on behalf of the group, prefacing statements with ‘As a…’ followed by their identifiers, further suggests narcissistic self-importance (p. 876). More critically, such statements position the speaker at the center of the group and at the head of an impromptu hierarchy, drawing attention to themselves and away from the issue. The utopian thinking embodied in identity politics, therefore, reflects the narcissistic traits observed in both Marxist and populist visions of the ideal.

The self-centered nature of utopian thinking can lead to xenophobic tendencies. For example, although Plato’s utopia allowed for contact with the outside world, he expressly sought to limit external interactions (Charbit & Virmani, 2002, p. 209). Indeed, the sequestration of utopia is common to spatial, cultural, and religious conceptions of society. Travis DeCook (2022) writes that the nature of utopia necessitates not only a separatist purity but an expulsion of all that threatens its order (p. 213). This intolerance is captured in today’s safe spaces, echo chambers, and xenophobic policies on both the right and the left, from modern border policy in the United States to the legislation of cultural difference in Quebec advocated by Charles Taylor (Gutmann, 1994). It’s represented in popular culture through films like Elysium (Blomkamp, 2013), where utopia is visible from Earth but separated by the vacuum of space. And it’s captured in religious concepts where Heaven is separated from humanity both physically and temporally.

More interesting is what these attempts at utopia say about us. Charles Taylor’s essay on the politics of difference offers one perspective. Writing on the notion of legislating difference through law, he says, “the goal of [such laws] is not to bring us back to an eventual ‘difference-blind’ social space but…to maintain and cherish distinctness, not just now but forever” (Gutmann, 1994, p. 40). Just as the literature cited by Cavalcante (2019) suggests, utopians are not satisfied with equal recognition and tolerance. There seems to be a clear desire for advanced social status and an equally clear disinterest in equal treatment. Taylor’s appeals to distinctness are actually a summons to group identity and sameness. In this respect, he seeks to elevate the in-group (French Quebeckers) over the out-group of non-French-speaking Canadians.

It’s not surprising that an inherently self-centered paradigm would produce broadly corrosive pressure on society. Stalin and Hitler illustrate the far ends of such extremes; however, utopian thinking fails such tests on theoretical grounds as well. As DeCook (2022) writes, More’s utopia utterly fails in its mission to form a better citizenry through state institutions, creating not only worse utopians but also worse non-utopians. It is a fundamental contradiction, he says, that utopians claim to abhor money, violence, and greed, yet depend on those vices in others for their very survival (pp. 210-215). In this vein, it is debatable whether the utopias of Marxism, populist nationalism, or identity politics have created better citizens. Certainly, intolerance, cancel culture, and doxing are authoritarian and unbecoming of a morally higher ground. Charles Taylor’s (Gutmann, 1994) desire to preserve French-Canadian culture through law bears similar reflection. It is undoubtedly the version of Quebec that he remembers, not that of someone else or of another time, that defines his ideal state. In summary, it’s clear from the examples reviewed here that utopians are not simply satisfied with equal recognition under law and social norms; they are interested in the self-centered acquisition of status and power, both for themselves and for their group.

Any closing discussion must acknowledge certain limitations. To begin with, utopian thinking is not necessarily a group attribute. For example, Golec de Zavala & Keenan (2021) note that narcissism predicts nationalism, but nationalism does not predict narcissism (p. 4). A populist might simply agree with a political alignment, but not for narcissistic reasons. Similarly, a Tumblr user is not a narcissistic utopian simply for engaging in an online community. In fact, many users cited by Cavalcante (2019) acknowledge the lack of diverse views on Tumblr as a problem. Cavalcante himself calls the limited range of opinion and the potential to inhabit an echo chamber the biggest risks to the Tumblr community (pp. 1727-1729). Finally, it bears special notice that observing narcissistic characteristics is not tantamount to making a clinical diagnosis of narcissism. The thesis of this paper is that utopian thinking is inherently narcissistic and self-serving, not that all utopians are narcissists. In that, the evidence seems strongly aligned with the hypothesis. The deeper question, however, is whether utopian thinking is unavoidably narcissistic, and whether that is entirely bad. South African scholar Sabelo J. Ndlovu-Gatsheni (2018) argues that the Cartesian subject (Descartes’ “I think, therefore I am”) replaced God with self and positioned man at the center of the universe (p. 85). While Ndlovu-Gatsheni framed this birth of narcissistic thinking in the context of racism and identity politics, it can be extrapolated further as a universal point of reference for technocratic and atheistic utopian forms, like Marxism. In the absence of a higher power, the self becomes paramount. While this doesn’t wholly describe the paths to narcissistic utopias (after all, many western religions position humanity as the object of God’s attention), it offers some intriguing food for thought.
Finally, it must be asked whether utopian thinking has any value or whether it represents a grave threat to society. Certainly, the losers in Stalin’s revolution would argue the latter. Indeed, collective narcissism can pose real dangers to democracy. On the other hand, majority rule can be slow, if not impossible, to change. Bringing about social revolutions like gay marriage, women’s rights, and civil rights arguably requires the hard leadership that narcissists provide. Therefore, the presence of narcissistic traits in utopian thinking is likely both a feature and a bug. Not all utopian thinkers are narcissists, yet to suppose that one’s ideal state is the perfect state for all requires narcissistic qualities.

In summary, narcissism appears to play a strong role in utopian thinking. This manifests as a need for recognition at both the group and individual levels and persists across political affiliation. Narcissistic qualities are observable in the utopian concepts of Plato and Charles Taylor, and in digital and physical spaces alike. Finally, while affiliation with a utopian community or movement does not make one a narcissist, it is clear that the sun still orbits the earth in utopia.

References

Blomkamp, N. (Director). (2013). Elysium [Film]. QED International, Kinberg Genre, Alphacore.

Cavalcante, A. (2019). Tumbling into queer utopias and vortexes: Experiences of LGBTQ social media users on Tumblr. Journal of Homosexuality, 66(12), 1715-1735. https://doi.org/10.1080/00918369.2018.1511131

Charbit, Y., & Virmani, A. (2002). The Platonic city: History and utopia. Population (English Edition), 57(2), 207-235. https://doi.org/10.2307/3246608

Crawford, T. (Ed.). (2007). Plato. Tudor Publishing Company.

DeCook, T. (2022). The charming circle: Identity in utopia, unethical practices, and Augustine’s two cities. Moreana, 59(2), 208-219. https://doi.org/10.3366/more.2022.0126

Glad, B. (2002). Why tyrants go too far: Malignant narcissism and absolute power. Political Psychology, 23(1), 1-37. http://www.jstor.org/stable/3792241

Golec de Zavala, A., & Keenan, O. (2021). Collective narcissism as a framework for understanding populism. Journal of Theoretical Social Psychology, 5(2), 1-11. https://onlinelibrary.wiley.com/doi/full/10.1002/jts5.69

Gutmann, A. (Ed.). (1994). Multiculturalism. Princeton University Press.

Hatemi, P.K., & Fazekas, Z. (2018). Narcissism and political orientation. American Journal of Political Science, 62(4), 873-888. https://www.jstor.org/stable/26598789

Ndlovu-Gatsheni, S.J. (2018). A world without others? Specter of difference and toxic identitarian politics. International Journal of Critical Diversity Studies, 1(1), 80-96. https://www.jstor.org/stable/10.13169/intecritdivestud.1.1.0080

Noyce, P. (Director). (2014). The Giver [Film]. Tonik, As Is Productions.

Sadeq, A.E., Shalabi, I., & Alkurdi, S.H. (2011). Major themes in Renaissance utopias. Asian Social Science, 7(9), 131-141. https://doi.org/10.5539/ass.v7n9p131

Social Networking: A new species of media or a public health concern?

In 1927, Philo Farnsworth demonstrated the first working version of the television. This device would revolutionize information in ways few could imagine. Media was no longer simply a newspaper, radio, or photograph. It was a talking box that magically combined all three domains into a visual and auditory experience. Certainly, television was transformative, and it would reign supreme for nearly 80 years, until the advent of Facebook and social networking. Indeed, the rise of social media has dominated the information landscape in ways that dwarf the impact of television. The ubiquity of the internet and its constant presence in our lives represents something fundamentally different from all previous forms of media. It travels with us in ways that television and radio can’t approach. Its utility is embedded into everything from healthcare to navigation. Yet growing concerns exist about the consequences of social media, its role in our lives, and its potential for physical and psychological harm. This essay explores the case for social media addiction and asks whether today’s media represent not just the next evolution of communication, but a public health and safety concern.

The idea of media addiction is nothing new. In fact, scientists have been studying the effects of television on behavior since 1964 (Kubey & Csikszentmihalyi, 2004). And much like the conclusions of researchers studying TV, there is general agreement that social media addiction exists, but its definition, prevalence, and even its relationship to our behavior are less clear. For example, researchers at the University of Bergen note that the recency of social media and the lack of uniform data make determining pervasiveness difficult, if not impossible (Andreassen, 2015, pp. 175-176). Similar conclusions were drawn by Kuss & Griffiths (2017) and Reed & Reay (2015). In fact, in the literature reviewed here, there was broad agreement on both the presence of addiction and the need for further study. Concerns over data quality, self-selection, and bias were also raised by Reed & Reay (2015), Lee (2015), and Tang et al. (2017). The data aren’t wholly bad, however. Andreassen (2015) notes that while statistical support for pervasiveness is hard to come by, data suggesting that certain people are predisposed to social networking (social media) addiction do exist. Furthermore, data showing parallels between social media and chemical addiction were cited by Andreassen (p. 176) and Kuss & Griffiths (2017). The latter note the growing body of evidence that social networking addiction causes symptoms typically associated with substance abuse, such as mood modification, tolerance, withdrawal, relapse, and conflict (pp. 2, 6). Scientific American draws similar parallels between social media addiction and compulsive gambling, noting that both groups try and fail to stop, and become defensive about the behavior when questioned or prohibited from engaging in it (Kardaras, 2016, p. 67).
Addiction, therefore, is broadly understood as prioritizing social networking to the detriment of other social activities, relationships, and one’s psychological health and well-being (Andreassen, 2015, p. 175). In other words, addicts display a continued preference for social media, even when that preference is detrimental.

Nonetheless, defining specific boundaries remains difficult. How much social media does one need to consume before being considered an addict? Many social media users engage in normal overuse while maintaining healthy relationships in the real world (Andreassen, 2015, p. 176). Researchers Kuss & Griffiths (2017) generalize this observation more broadly, suggesting that heavy social network engagement might constitute a new normal (p. 5) and may not be problematic. One might imagine a similar observation being made of America’s crowded taverns during the industrial revolution. Were the bars and speakeasies of the 1920s and 30s breeding grounds for alcoholics, or a new normal in industrialized America? Such anecdotal observations suggest that addiction, in general, may be difficult to pin down. Researchers may agree that problem behavior exists, but precisely when behavior becomes a problem is less clear.

The previous section demonstrated that social media addiction has captured researchers’ attention. At the least, there seems to be broad suspicion that social media addiction is real, even if its cause and extent are not understood. It is important that the issue be taken seriously, as addiction represents a conversion from the digital into the literal, the virtual into the physical. Though it may be unclear when normal use becomes addiction, research indicates a strong connection between social media overuse and depression, anxiety, guilt, and other psychological issues. For example, Andreassen (2015) writes that social networking sites are used to replace feelings of guilt, anxiety, restlessness, and helplessness, and to forget about personal problems (p. 176). Such coping behaviors can affect conduct in the real world. For example, neuroticism, defined as a tendency to experience anxiety, fear, and depression, was positively related to private social networking use during working hours (p. 178). Furthermore, social network addicts were unable to separate from social media despite realizing its negative effects; the usage resulted in social withdrawal, insomnia, and other health problems (pp. 178-179). Marshall University researcher Keith Beard (2011) notes the common causes of internet addiction as depression, anxiety, social awkwardness, and a means of escape from domestic issues (p. 103). In fact, studies suggest that the relationship between our psychological state of mind and social media is bidirectional; that is, social media can both create and worsen pre-existing conditions (p. 102). The negative looping of social media is known to researchers as the Facebook effect: the more friends a person has on the platform, the higher the likelihood they’ll be depressed, and the more time a person spends on social networking, the more likely they are to become addicted (Kardaras, 2016, p. 69).
These findings illustrate the complex relationship between social networking and mental health. A relationship exists, but in which direction and to what degree remains unclear.

The consequences of social media addiction can manifest in other ways as well. For example, Scientific American cited research showing a link between compulsive texting and poor performance in school (Kardaras, 2016, pp. 67-68). Research conducted by Reed & Reay (2015) suggested that higher levels of internet use were negatively related to self-motivation, study habits, goal orientation, and control over learning. The authors are quick to note that their findings extend the conclusions of prior research suggesting that excessive internet and social media use negatively impacts grades (pp. 719-720). A separate, non-representative study of African American students found the opposite, however. While 57% strongly agreed that social media and texting were a distraction during lecture, their GPAs were not affected (Lee, 2015, pp. 54-55). Amongst the literature reviewed here, though, that research was an outlier.

Beyond school, addiction to social networking impacts work and career. Several studies cited by Andreassen (2015) concluded that the use of social networks during working hours negatively impacted performance and, in some cases, resulted in termination (p. 180). More generally, researchers find that multitasking is strongly correlated with excessive smartphone and Facebook use (Lee, 2015, p. 54), suggesting that the urge to engage online regularly diverts attention away from the real world. Nonetheless, while addiction to social media can result in negative outcomes, the long-term consequences may take years to play out. Poor academic performance, for example, might lead to declining job or graduate school opportunities. Poor performance at work, particularly if it leads to termination, can result in increased stress, anxiety, and depression, as well as stunted professional growth. Therefore, it’s important to understand the contributing factors of addiction, who is at risk, and what trends, if any, can be determined.

While the limitations of existing research have been noted, many academics recognize the importance of standardizing scales and definitions and of conducting cross-cultural studies. One such study, conducted by Tang et al. (2017), looked at social media addiction amongst young adults in China, Singapore, and the United States. The numbers were eye-catching. For example, 43% of students reported at least one internet-related addiction. Females were significantly more likely to cite social networking addiction, and males more likely to cite gaming addiction, as problems (p. 676). It’s worth noting, however, that gender biases were far less conclusive in other, U.S.-based studies. Tang et al. add that cultural factors are a significant contributor to addiction. For example, U.S. rates of self-identified social networking addiction amongst males and females were identical, at 26.2%, and online gaming addiction was almost wholly an American phenomenon (pp. 676-677). The increased likelihood of female addiction to social media is likely skewed by Chinese data, where, researchers theorize, China’s one-child policy and academic pressure lead Chinese children to seek connections through social media that are unavailable at home (pp. 679-680). Like other studies, Tang et al. found that depressive symptoms are a leading indicator of addiction; however, they add that being an only child increases the risk of being depressed. It is not surprising, therefore, that Chinese students spent roughly 10 hours per week on social networking, compared to about 6.5 for Americans (pp. 678-679). These data further suggest a connection between anxiety, depression, and addiction but point to cultural factors as a potential driver of behavior.

Further studies suggest similar trends amongst adolescents and college-aged adults. For example, Reed & Reay (2015) found that problematic internet usage was much higher in college students than in the general population (p. 720). Nonetheless, the researchers acknowledged these data as exploratory. Equally in question are the causes of addiction. As discussed, psychological factors such as anxiety and depression may both lead to and result from addiction. However, whether addictive tendencies are the product of our environment or our biology is contested. Dingel et al. (2015) researched the coverage of addiction in academic literature. While the study found consistent interest in environmental factors (such as cultural norms, parenting, and domestic violence), biological drivers dominated the conversation (p. 475). They note that the notion of an addiction gene crowded out alternative theories and treatments (p. 476). Still, researchers, including Dingel et al., recognize that environmental and biological factors are not mutually exclusive and may in fact be complementary (p. 475). Therefore, based on the literature reviewed here, it is likely that some combination of environmental, pre-existing, and biological factors is at play. For example, researchers acknowledge that social media addicts may have other addictions (Andreassen, 2015, p. 178). Scientific American cited studies indicating that the 20% of teens engaged in hyper-texting were twice as likely to have tried alcohol and 41% more likely to have tried illegal drugs. While not synonymous with addiction, researchers note that such tendencies could indicate compulsive behavior (Kardaras, 2016, p. 68).

Finally, the role of technology and the object of addiction are equally contested. For example, is a social media addict addicted to the phone, the applications, or the psychological reward of social approval? In his famous Playboy interview, Marshall McLuhan (1994) argued that the delivery method mattered more than the content of the message, commenting that “most people…are blissfully ignorant of what the media do to them; unaware that because of their pervasive effects on man, it is the medium itself that is the message, not the content” (para. 10). McLuhan’s comments strike a different tone when we consider that 90% of Americans have a smartphone (Bortin, 2023) and over 3 billion people worldwide have a Facebook account (Dixon, 2024). In fact, so attached are we to our mobile phones that researchers developed the term nomophobia to describe the fear of being without one’s device (Kuss & Griffiths, 2017, pp. 8-9). The notion of smartphone addiction nonetheless remains contested. For example, Kuss & Griffiths argue that the object of addiction isn’t the technology itself but confirmation from other people (pp. 6-9). In this way, social media hijacks our need for novelty, or neophilia, while producing interactions that are less satisfying than real-world encounters (Kardaras, 2016, pp. 67-68). In reality, our relationship with social media is not exclusively defined by the device or by how we use it. Instead, it is both the omnipresence of our mobile phones and the continuous access to social media that differentiate the internet age from all others.

As demonstrated, the existence of social media addiction is broadly suspected, but much remains undefined. For example, anxiety and depression are surely connected to overuse, but to what degree and in which direction is under discussion. Furthermore, are we sure that what we’re observing is addictive behavior and not a new social norm? The research shows, after all, that hyper use is predominant among today’s college-aged students, not their parents’ generation. Yet, at the same time, research also suggests that social media invokes biological desires for human connection and confirmation, even while failing to deliver those basic needs. Furthermore, the near-ubiquitous presence of mobile phones, coupled with the expanding footprint of social networks like Facebook and, increasingly, TikTok, makes social media available in ways early television producers could never imagine. Indeed, the era of being able to leave the news at home has long since passed. In this regard, social media, mobile phones, social networks, and the internet are fundamentally different from all previous forms of media. In an essay for The Harvard International Journal of Press/Politics, Neil Postman (2004) wrote that in solving the problem of information scarcity, we’ve created a new problem of information saturation (p. 4). In fact, we may have created a public health crisis. According to the World Health Organization, more people are reporting mental health issues today than in the 1980s, and depression is now the leading global cause of disability (Kardaras, 2016, p. 66). China has recently declared social media addiction a public health risk (Tang et al., 2017, p. 678), while similar concerns are an active point of research at the National Institutes of Health. Such actions lend support to the notion that social media are not simply the next evolution of media. They are fundamentally different, having fused biological and psychological factors with unprecedented accessibility to create a new form of media.
Yet the lack of empirical data, uniform scales of measure, and, most importantly, established pervasiveness falls short of what is required to support a public health crisis. Therefore, while it is true that social media are fundamentally different from all prior forms of media, declaring their overuse a public health issue is not supported by the research. Better data are required.

In summary, there is general consensus that social media addiction exists, but its pervasiveness and drivers are not fully understood. However, the ongoing evolution of addiction research should not dissuade policymakers from recognizing the risks of social media overuse, and it does not change the assessment that today’s media represent something distinctly different from all prior media. Nonetheless, until better data are made available, it is too early to declare social media addiction a public health concern.

References

Andreassen, C.S. (2015). Online social network site addiction: A comprehensive review. Current Addiction Reports, 2, 175-184. https://doi.org/10.1007/s40429-015-0056-9

Beard, K.W. (2011). Internet addiction in children and adolescents. In H.O. Price (Ed.), Internet addiction (pp. 95-111). Nova Science Publishers Inc.

Bortin, J. (2023). Cell phone statistics 2024. Consumer Affairs. https://www.consumeraffairs.com/cell_phones/cell-phone-statistics.html

Dingel, M.J., Ostergren, J., McCormick, J.B., Hammer, R., & Koenig, B.A. (2015). The media and behavioral genetics: Alternatives coexisting with addiction genetics. Science, Technology, & Human Values, 40(4), 459-486. http://www.jstor.org/stable/43671270

Dixon, S.J. (2024). Number of monthly active Facebook users worldwide as of 4th quarter 2023. Statista. https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/

Kardaras, N. (2016). Generation Z: Online and at risk? Scientific American Mind, 27(5), 64-69. https://www.jstor.org/stable/24945499

Kuss, D.J., & Griffiths, M.D. (2017). Social networking sites and addiction: Ten lessons learned. International Journal of Environmental Research and Public Health, 14(3), 311-328. https://pubmed.ncbi.nlm.nih.gov/28304359/

Lee, E.B. (2015). Too much information: Heavy smartphone and Facebook utilization by African American young adults. Journal of Black Studies, 46(1), 44-61. http://www.jstor.org/stable/24572928

McLuhan, M. (1994). The Playboy interview: Marshall McLuhan. Playboy. https://web.cs.ucdavis.edu/~rogaway/classes/188/spring07/mcluhan.pdf

Postman, N. (2004). The information age: A blessing or a curse? The Harvard International Journal of Press/Politics, 9(3), 3-10. https://journals.sagepub.com/toc/hija/9/2

Reed, P., & Reay, E. (2015). Relationship between levels of problematic internet usage and motivation to study in university students. Higher Education, 70(4), 711-723. http://www.jstor.org/stable/43648900

Tang, C.S.-K., Koh, Y.W., & Gan, Y. (2017). Addiction to internet use, online gaming, and online social networking among young adults in China, Singapore, and the United States. Asia Pacific Journal of Public Health, 29(8), 673-682. https://journals.sagepub.com/doi/10.1177/1010539517739558

Avoiding a Return to History

In 2015, a tiny Seattle startup created waves when it announced that all employees would make at least $70,000 per year. Gravity Payments’ then-CEO Dan Price declared that he was cutting his own salary to the new self-imposed minimum. The move garnered national attention from the Seattle Times to the Wall Street Journal, with supporters calling Price a visionary and critics labeling him a socialist. In fact, both rather missed the point. Consciously or not, wage inequality has become salient enough in the American consciousness that it has begun manifesting in voluntary corporate policy. Gravity Payments’ action and the attention it drew are both leading indicators of a deeper problem: America’s social gradient between rich and poor is out of balance, resulting in unhealthy wage disparities and limited social mobility. More importantly, such leading indicators presage more serious declines in our social fabric and trust in government, and they open the door to dangerous ideologies.

Differences in outcome are a feature of the capitalist system. This is, at least partly, predicated on privileging individual freedom over collective stability. Individual failure is allowed to ensure the possibility of individual success. In a society where sufficient numbers of citizens succeed, the overall community (the collective whole) benefits. Plato recognized this relationship in the Republic, though he emphasized the opposite hierarchy, stating that a wise state begets wise citizens (Charbit & Virmani, 2002, p. 222). In fact, the relationship is not so binary. A society cannot rise above its citizenry, and a citizenry, at large, will have a difficult time rising above existing social norms. Polarization, income inequality, and affordable housing are all examples of broad social trends that individuals might overcome, but where achieving radical social change is difficult. Gravity Payments’ decision to dramatically increase its minimum wage, for example, did not spawn a wave of similar movements across the industry, nor does one person choosing to overcome partisan polarization result in a dramatic sea change. Yet societies do change, and dramatic transformations do occur. The rise of fascism in Germany and Italy is one such example, as is the multi-century evolution of the American colonies into the United States. China, Japan, and Vietnam have also undergone radical changes in government and social norms since the turn of the 20th century.

It bears contemplation, then, that all nations have an inevitable end, including the United States. There will, one day, no longer be a thing called America. This need not necessitate a dystopian depression, merely an acknowledgement that history deals from a deck of very limited cards. Each hand has a shelf life, and there is constant pressure to wipe the board clean and deal another round. There is also a propensity, perhaps an inevitability, that history will deal a hand previously favored. Utopian concepts, for example, have existed throughout human history and continue to surface in the modern day. Marxism is their most recent conception, but the principles of Marxism are not themselves new. Plato was highly suspicious of democracy, private property, and decadence. In establishing his ideal city, he saw the individual as entirely subservient to the state (Futre Pinheiro, 2006, p. 158). The welfare of the collective, in Plato's mind, was paramount. The great philosopher John Locke also advocated for strict social controls in the American colonies, going so far as to express a need for tithingmen and draconian norms such as neighbors reporting each other's transgressions (Hsueh, 2008). Locke, Plato, and, for that matter, Karl Marx wrapped these totalitarian ideas in a vision of utopia: a perfect society of uniformity, contentedness, and intellectual exploration. That vision is what makes such ideas so dangerous and precisely what makes them so appealing, particularly in times of great division.

In fact, utopian and totalitarian tendencies manifest throughout the American free world. Homeowners associations are one such example. These islands of perfection are devoid of individualism, ruled by a vaguely democratic but wholly totalitarian body, where neighbors report neighbors and the collective can strip owners of their properties. Consider further the East Wind Community, a child of the hippie commune established in 1974 (Mariani, 2020), which continues to operate in a wholly Marxist manner today. There, food and labor are equally distributed amongst its 72 residents, and a minimalist approach is taken to technology (para. 3). More insidiously, our online communities exhibit utopian tendencies through closed (private) groups, closely guarded algorithmically, administratively, and through strictly enforced social norms. All of these examples suggest that utopian, collectivist, and tribal tendencies are tightly intertwined and very present in western society. It is paramount, therefore, that these tendencies be acknowledged and understood by liberal governments. The attention garnered by Gravity Payments is notable not because of what it is, but because it is at all. Totalitarian tendencies are corralled only (and then not always) by prosperous societies. When inequalities exist, utopian ideas gain broad appeal, and they provide a vehicle for totalitarian ideology to spread. In that light, Gravity Payments is not an example of socialism run amok; it is an early warning sign that social inequality is dangerously out of balance.

In summary, history is bound to repeat itself. The utopian concepts of antiquity are just as present today as they were in the time of Plato. Modern inclinations toward totalitarianism are no more benign because they exist in the 21st century, or because they're wrapped in visions of bliss. They are dangerous because they carry broad appeal when capitalism fails to maintain adequate social balance. It is necessary, therefore, that western governments ensure social mobility to avoid a return to history.

References

Charbit, Y., & Virmani, A. (2002). The Platonic City: History and Utopia. Population, 57(2), 207–235. https://doi.org/10.2307/3246608

Mariani, M. (2020). The new generation of self-created utopias. The New York Times. https://www.nytimes.com/2020/01/16/t-magazine/intentional-communities.html

Pinheiro, M. P. F. (2006). Utopia and Utopias: a Study on a Literary Genre in Antiquity. In S. N. Byrne, E. P. Cueva, & J. Alvares (Eds.), Authors, Authority, and Interpreters in the Ancient Novel: Essays in Honor of Gareth L. Schmeling (Vol. 5, pp. 147–171). Barkhuis. http://www.jstor.org/stable/j.ctt13wwxhm.12

 

Welfare Through Empowerment

The ideas of the internet, fiber optics, Amazon, and the billionaires they would create were inconceivable to America's founders. One wonders how the United States Constitution might look, or whether it would exist at all, had the members of the Constitutional Convention better understood the future. Certainly, western liberalism has produced the greatest economic, political, and military power the world has known. Yet within this system of unbelievable wealth lie vast disparities in circumstances. Indeed, the disparity between rich and poor gave rise to Marxism, Vladimir Lenin, and the Soviet Union. Contemporary western liberals have argued the merits of capitalism from its most radical libertarian extremes to various degrees of welfare state. This essay examines the arguments of the late Northeastern University professor Stephen Nathanson (1998), whose approach, while flawed and at times misleading, arrives at a meritorious conclusion: Social safety nets are the moral obligation of any society that can afford to provide them. Furthermore, this essay argues that understanding the flaws of Dr. Nathanson's arguments helps advance a more successful version of his ideas.

The welfare state and nation building have an improbable commonality: each is widely despised by various factions of the American demos. In a subjective sense, they share another common thread: nation building might better describe a government's obligation to the citizenry and the country it oversees than the best interpretations of the welfare state do. Governments ought to look at social welfare as categorically similar to national defense or infrastructure spending. Investing in arguably the country's greatest asset, its people, is not only a morally just cause but also an operationally just expense. In his book Economic Justice, Dr. Nathanson (1998) argues the advantages of such a comprehensive welfare state over the extremes of socialism and libertarian capitalism. However, while he draws some worthwhile conclusions, his reasoning is flawed.

To begin with, Dr. Nathanson (1998) fails to acknowledge the middle class. He builds his argument on a binary view of extreme wealth and extreme poverty; and, while poverty, in part, drives the need for social safety nets, he leads the reader to the false conclusion that only the conditions of wealth and poverty exist, while overstating privilege and unjust reward. This view obfuscates the much broader problem of shrinking class mobility, and it ignores a critical measure of success in addressing social stability. Instead, Dr. Nathanson uses inheritance both as a means to illustrate the advantages of the rich and as an example of unjust or undeserved wealth (pp. 57, 64–65). However, even a high-level review of the data shows this reasoning to be flawed. For example, according to the Federal Reserve, the average inheritance received by American beneficiaries in 2019 was just $49,200 (Bricker et al., 2020). Even ignoring that this number is inflated by the top 1%, it is still far short of the windfall required to attend college or to retire. Furthermore, according to data published by the Bureau of Labor Statistics, the average age at which inheritance is received is 50–60 years old (Wolff & Gittleman, 2011, p. 3) – certainly far too late to advantage someone at the start of life. Finally, according to the U.S. Census Bureau, over 87% of households earning less than $25,000 a year used economic stimulus to meet household expenses (Perez-Lopez & Bee, 2020). Even if these households received an average inheritance, that money would likely be spent on basic needs, not college tuition or buying a home. The issue is that Dr. Nathanson conflates inheritance with purchasing power. Even if inheritance were made illegal, the families of rich children would still be able to afford the tuitions of elite schools, housing costs, and medical care. By committing this logical indiscretion, Dr. Nathanson overvalues the importance of wealth while simultaneously missing the point of wealth stagnation.

As the data show, simply providing money to the poor won't result in life-changing circumstances. It is the sustained ability to earn more and change social status that matters. Yet Americans' ability to change classes has dramatically declined over the last fifty years. According to the World Economic Forum (Lu, 2020), middle-class wages are stagnating and the percentage of people who earn more than their parents is plummeting. While 62% of aggregate income went to the middle class in 1970, by 2018 that number was just 43%. Over the same period, the upper class saw their share of income increase from 28% to 48%, while the lower class's share fell by just 1%. In other words, the incomes of the poorest Americans have remained relatively unchanged over the last fifty years while the ability to change income status, for most Americans, has diminished. Class mobility and wage stagnation are somewhat different problems than the issues raised by Dr. Nathanson, but their potential solutions are similar.

In fact, one of Dr. Nathanson's (1998) more intriguing ideas is the notion of a social inheritance, which would be held in trust until some future date (p. 125). He entertains other ideas as well, hinting at concepts of a minimum wage or universal basic income (UBI). While these ideas are attractive, they face significant hurdles in practice and in theory. The inflationary impacts of a minimum wage are muted by the fact that only a subset of workers benefit from a wage increase, but a trust or UBI would theoretically be available to everyone. Former presidential candidate Andrew Yang (2020) advocated for a universal basic income that would be available to all Americans regardless of earnings. The issue with UBI, and to a lesser extent with the minimum wage, is that when everyone has the same basic income, the value of that money decreases. The cost of goods, whether bread or apartments, increases as a result. In essence, inflation will define UBI as the new poverty level. This doesn't mean that the idea of social inheritance should be written off, but the inflationary consequences need to be thoroughly understood.

Alternatively, western countries ought to focus on enabling outcomes. As Dr. Nathanson (1998) says, “A comprehensive welfare state could operate in different ways. It could provide specific resources such as food, housing, health care, and education” (p. 106). Indeed, such solutions are not without precedent. For example, according to the European Commission, in 2020 Germany spent over 430 billion euros, or 12.8% of gross domestic product, on healthcare. Over the same period, France reported total national healthcare costs of 12.2% of GDP (Eurostat, 2022). According to the World Bank (2022), Germany and France spent 4.7% and 5.5% of GDP, respectively, on education in 2020. Such programs could manifest in the United States, for instance, as an extension of public education to include college and graduate school. Doing so would enable career development while freeing students of crippling debt. Public transportation is another program with near-universal benefits. The United States should commit to building the infrastructure to connect communities to one another and ensure access to services like education. Finally, dissociating health insurance from employment is critical. A person who wishes to advance their status by going back to school should not have to worry about health coverage while pursuing a career that will likely increase their productivity. In short, these programs should not focus on income status specifically, but on universal access to the tools required to build a more productive citizenry.

Dr. Nathanson correctly concludes that nations that can afford social safety nets have a moral obligation to provide them. However, he incorrectly defines the problem by ignoring the middle class and the issue of class mobility. In fact, social safety nets are vital to the people who need them, but they should be focused on promoting wage and opportunity growth for all people, not dependence on a discretionary system.

References

Bricker, J., Goodman, S., Moore, K.B., & Volz, A.H. (2020). Wealth and income concentration in the SCF: 1989–2019. The Federal Reserve. https://www.federalreserve.gov/econres/notes/feds-notes/wealth-and-income-concentration-in-the-scf-20200928.html

Eurostat. (2022). Healthcare expenditure statistics. Eurostat: Statistics explained. https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Healthcare_expenditure_statistics#Healthcare_expenditure

Lu, M. (2020). Is the American dream over? Here’s what the data says. World Economic Forum. https://www.weforum.org/agenda/2020/09/social-mobility-upwards-decline-usa-us-america-economics/

Nathanson, S. (1998). Economic Justice. Prentice-Hall, Inc.

Perez-Lopez, D., & Bee, C.A. (2020). Majority who received stimulus payments spending most of it on household expenses. The United States Census Bureau. https://www.census.gov/library/stories/2020/06/how-are-americans-using-their-stimulus-payments.html

The World Bank. (2022). Government expenditures on education, total (% of GDP). https://data.worldbank.org/indicator/SE.XPD.TOTL.GD.ZS

Wolff, E.N., & Gittleman, M. (2011). Inheritances and the distribution of wealth or whatever happened to the great inheritance boom? Bureau of Labor Statistics. https://www.bls.gov/osmr/research-papers/2011/pdf/ec110030.pdf

Yang, A. (2020). The freedom dividend, defined. Yang 2020. https://2020.yang2020.com/what-is-freedom-dividend-faq/

 

Politics of Difference in Liberal Dimensions

In the fall of 1994, two Tlingit Indian teens were banished to a remote set of Alaskan islands as punishment for assaulting a pizza delivery man in Everett, Washington. The sentence, which involved traditional native customs, ran concurrently with state guidelines for felony assault, and the teens served both tribal and traditional state sentences. The decision to invoke the Tlingit tribal court was not without controversy; however, it illustrated an important fact: federal, state, and Indian traditions are not mutually exclusive. In fact, illiberal customs are tolerated throughout American society, ranging from the non-democratic arrangement of some Indian Nations to the religious practices of cults and mainstream denominations alike. This essay argues that, while federal protection for specific cultural traditions isn't realistic, it also isn't necessary. The flexibility provided by liberal governance allows for self-directed preservation of historical norms while existing under traditional western values.

To begin with, cultural trade is unavoidable and arguably desirable. A tradition that no longer evolves is, in a very real sense, already dead. Consider, for example, the tradition of Christmas. The fact that both the exchange of gifts and the date of celebration are likely derived from pagan traditions is irrelevant to observers. Yet such cross-cultural exchange is a vital part of human existence. Without traditions such as Saturnalia and the winter solstice, our modern notion of Christmas might look very different. Food, language, ceremonies, and ideas flow freely across borders. As historian Eric Foner writes, although the French offered American Indians citizenship if they adopted Catholicism, Frenchmen more often chose the free life of the Indians (Foner, 2011, p. 44). Likewise, the modern genres of jazz, blues, and R&B are fusions of African and American musical traditions. The blues were exported by Jimi Hendrix, B.B. King, and Muddy Waters, and reimported by way of Eric Clapton. Surely it is a dark world where such vibrant expressions and interpretations of culture are lost or segregated from one another. Yet it is also true that porous intellectual borders necessitate change and, ultimately, the loss of cultural artifacts to time.

Determining the obligation, if any, that liberal countries have to protect cultural traditions is crucial. The English philosopher John Locke recognized that freedom was neither absolute nor cost-free. Ceding a little personal autonomy, he said, was necessary to ensure a well-functioning society (Krotoszynski, 1990, pp. 1398–1399). Indeed, Locke's comments are not contingent upon western liberal governance. Any society, whether illiberal Indian Nations or French-Canadian Quebecers, requires that individuals conform to prevailing social norms. This necessarily means that one sheds some of one's cultural identity. For example, to the extent that western liberal values impose on relative minorities, like Muslim immigrants, such assimilation is necessary, and indeed, it would be equally necessary if western women were to emigrate to Saudi Arabia. Social science professor Charles Taylor, however, resists the notion of conformity (Gutmann, 1994, pp. 48, 53), even while acknowledging that intersectionalist group identities themselves necessitate that one conform (pp. 55, 58). While Taylor pays minimal attention to this conflict, it underpins the impossibility of legislating specific cultural identities. Liberal societies, therefore, can only provide a legal framework for self-determination and organization, agnostic of specific religious or cultural values.

In fact, this framework tolerates relatively illiberal traditions. For example, Amish communities reject most notions of modern technology and secularism, and they generally do not participate in the political process. Yet like Native Americans, they are considered American citizens and protected by the same civil liberties. Religious liberty allows for numerous organized belief systems, many of which voluntarily restrict personal agency. The Amish place restrictions on dress, Jews enforce dietary and labor laws, and even the existence of cults is permitted. Each of these organizations, regardless of how one feels about them, possesses a system of ideas and, in the case of the Jews and the Amish, long-standing cultural traditions that the group perpetuates. It is impossible to imagine a federal government legislating the specific values of the Jewish and Amish communities or walling them off on their respective reservations; however, these groups are free to, and in fact do, self-organize along shared values.

Taylor and others do not believe that passive tolerance provides sufficient cultural protection. He argues for a politics of difference, where cultural and individual differences are emphasized over similarities (Gutmann, 1994, p. 38). He argues that cultural preservation must be protected by law, writing, “the goal of [such laws] is not to bring us back to an eventual ‘difference-blind’ social space but…to maintain and cherish distinctness, not just now but forever” (p. 40). This is a remarkably naïve statement, given that its implications are without precedent. No human tradition has remained unchanged forever. Even the most well-preserved religious ceremonies are two-dimensional reenactments without the depth of true belief. Furthermore, linguistic, culinary, and secular traditions all carry cultures across borders, making preservation of the type proposed by Taylor unrealistic. Indeed, a world where French remains French and white remains white is repulsive. The beauty of culture is that it can, and should, be shared. That said, Taylor's argument raises the valuable question of to what degree liberal societies should accommodate illiberal traditions, and what, if anything, should be done when those traditions violate majority norms.

The short answer is that western majorities have little obligation to intervene when social norms are violated. As the example of the Amish shows, little social harm is done by the existence of those communities. Furthermore, even if certain cultural norms violate personal autonomy, adherence is voluntary. Jews who no longer wish to be Jewish are free to leave, even if they face social and familial backlash. Furthermore, the numerous Indian Nations, with Indian law and a variety of democratic and undemocratic governments, demonstrate that western liberalism and illiberal practices are not mutually exclusive and, in fact, are often protected by law. Intervention is only required in the case where an individual's civil liberties are violated against their will. Such a case is currently playing out in a lawsuit filed against the Church of Scientology alleging crimes of child trafficking, forced labor, and imprisonment (Cohen Milstein, 2022). However, by and large, group practices, whether religious, secular, or a fusion of the two, ought to be left alone. It is impossible for a legislature to be all things for all cultures, as a politics of difference suggests. Such a prospect requires, if not encourages, cultural consolidation, not diversity. Therefore, it is essential that communities take it upon themselves to preserve their cultures. As Anthony Appiah put it, “If we create a culture that our descendants will want to hold on to, our culture will survive them” (Gutmann, 1994, p. 158). In other words, cultural survival rests in the hands of a culture's members and the value they place on perpetuating it.

In summary, while critics of western democracy like Taylor advocate for unprecedented social collectivism, liberalism provides a proven, if imperfect, track record of enabling diversity through self-determination. It is through organizations of individuals around shared identities, not Federal management, that cultural traditions will survive.

References

Cohen Milstein. (2022). Church of Scientology accused of human trafficking, forced labor. CohenMilstein. https://www.cohenmilstein.com/update/church-scientology-accused-human-trafficking-forced-labor

Foner, E. (2011). Give me liberty: An American history. W.W. Norton & Company

Gutmann, A. (1994). Multiculturalism. Princeton University Press

Krotoszynski Jr., R.J. (1990). Autonomy, community, and traditions of liberty: The contrast of British and American privacy law. Duke Law Journal, 1398–1454. https://scholarship.law.duke.edu/dlj/vol39/iss6/6

Suicide: Liberty and Intervention

In early June 1963, a crowd gathered in the South Vietnamese city of Saigon as Thích Quang Duc quietly set himself on fire. This act of suicide, and the horrific images that followed, was also an act of protest against religious oppression. Thích's last words were not the letter he left behind, but this moment of self-immolation. Indeed, his act of protest is rarely, if ever, referred to as suicide, yet events such as these draw into question the extent to which liberty allows us to do with our lives as we please. Thinkers such as John Stuart Mill would likely have viewed suicide as a violation of social responsibility, personal liberty, and the principle of doing no harm. However, such black-and-white conclusions are not supported by the realities of personal suffering, politics, and social norms. This essay contends that while suicide is an act of personal autonomy, liberty does not, and should not, preclude intervention when appropriate.

According to the Centers for Disease Control and Prevention (2023), nearly fifty thousand Americans took their own lives in 2022. The majority were overwhelmingly white, male, and 25–64 years old, with over half of those under the age of 45. In other words, men in the prime of life with plenty to live and work for. The devastation suicide brings to families and loved ones dominates the national conversation, and with good reason. However, our relationship to the act of suicide is entirely dependent on the circumstances and prevailing social norms. Consider, for example, the difference in sentiment between suicide as an outcome of PTSD and suicide as relief from a terminal illness. In 2013, the Pew Research Center found that 57% of adults would choose to end treatment if their illness were terminal. The same research found that 47% of adults approved of physician-assisted suicide, while 49% disapproved. It's difficult to imagine such a division of opinion on PTSD-related suicides, yet such a conversation exists in other circumstances where a person chooses to die. Cultural and political perspectives play a critical role in our acceptance of suicide as well. Japanese fighter pilots, for example, were revered for flying their planes into Allied warships, and Muslim suicide bombers are believed to ascend to heaven for their deeds. In fact, such a complicated relationship is evident throughout history. Historian Philip Freeman (2011) writes in his book Alexander the Great that when the philosopher Calanus fell ill, rather than continue suffering, he chose to be burned alive in ritual suicide. Even at that time, writes Freeman, observers were divided on whether his act was one of bravery or pompous self-conceit (p. 306). Clearly, our reactions to any of these events are more dependent on the circumstances and political perspective from which they are viewed than on the decision itself.

It is important not to conflate the emotional impact of an immediate death with the more acceptable decision to die slowly. However, while Mill might have condemned suicide, he would have done so for different reasons. As an avid libertarian with strong utilitarian sentiments, Mill recognized there were limits to the notion of self-determination. He writes,

Whenever, in short, there is a definite damage, or a definite risk of damage, either to an individual or to the public, the case is taken out of the province of liberty, and placed in that of morality or law (Collini, 2007, p. 82).

On the one hand, suicide would seem to satisfy both of Mill's requirements: tremendous harm can be done to the families and communities associated with the subject. On the other, Mill muddies the water when he says, “[No one] is warranted in saying to another human creature of ripe years, that he shall not do with his life for his own benefit what he chooses to do with it” (p. 76). Indeed, it is quite difficult to declare what is in another person's benefit. As the example of Calanus showed, a person who is suffering will have a very different view of their best interests than people in their periphery. Furthermore, by Mill's own logic, society ought to exercise a great deal of caution in matters that override personal agency.

Mill attempts to work through this dilemma by first acknowledging the inherent difficulty of predicting consequences (Collini, 2007, pp. 80–81), and later by invoking the notion of liberty as an inalienable right. “The principle of freedom cannot require that [a person] should be free not to be free. It is not freedom, to be allowed to alienate [one's] freedom” (p. 103). In other words, liberty cannot be used to sabotage liberty, even if it is our own. Indeed, this is a powerful argument for the immorality of suicide, but it is limited. Mill cites the example of slavery and the contradiction in voluntarily becoming a slave (p. 103). Yet in the case of suicide, there is no other person to whom the subject cedes their agency, even if the act is assisted by someone else. This philosophical logjam represents a critical gap in Mill's definition of harm. At no point does he suggest that personal harm is anything other than physical, whether to a person's body or their property. Yet we know that real harm occurs when an act of suicide is committed. Had Mill recognized emotional harm as legitimate, his arguments would more clearly support intervention and treatment.

Nonetheless, Mill's arguments allow for broader definitions of harm and interventionist mindsets. While he remains vague on the definition of personal harm, he invokes the individual's Platonic obligation to society, writing, “[E]very one who receives the protection of society owes a return for the benefit…[by] each person's bearing his share of the labour and sacrifices incurred for defending [society]” (Collini, 2007, p. 75). In this respect, someone who commits an irrational act of suicide could be said to have failed in their obligation to society itself. This is particularly true, as the CDC data pointed out, in the case of young men, who account for the majority of suicides. Mill goes further to suggest that individuals who are incapable of “self-government” should be protected from themselves (p. 80), and that individuals are obligated to dissuade each other from harmful decisions (p. 99). Certainly, a person suffering from severe depression could be said to be incapable of self-government. Intervention, in such a case, would be justified by preventing an irrational act of harm. In short, Mill's opinions on suicide could be projected either way; however, there is a clear case to be made that dissuasion and obligation to society lay the groundwork for intervention and treatment.

In summary, John Stuart Mill's views of harm and obligation to society support the cause of dissuasion and even intervention when suicide is deemed likely. On the other hand, his views on liberty support the premise that human beings possess the agency to determine what's in their best interests and to act accordingly, even if such action results in their own death. His views, therefore, while far from conclusive, allow for a variety of humane approaches to suicide, both interventionist and in service to the relief of great suffering.

References

Centers for Disease Control and Prevention. (2023, August 10). Suicide data and statistics. CDC.gov. https://www.cdc.gov/suicide/suicide-data-statistics.html

Collini, S. (Ed.). (2007). J.S. Mill: On liberty and other writings. Cambridge University Press

Freeman, P. (2011). Alexander the Great. Simon & Schuster

Pew Research Center. (2013, November 21). Views on end-of-life medical treatments. Pewresearch.org. https://www.pewresearch.org/religion/2013/11/21/views-on-end-of-life-medical-treatments/

Plato: A Merited but Flawed Philosophy

The great Greek philosopher Plato is said to be the father of western philosophy. He is certainly a household name and a person of interest to historians, philosophers, and political scientists alike. But his ideas have flaws, specifically regarding the obligations of citizen and state. In his dialogue Crito, Plato (Crawford, 2007) argues that citizens are bound by social contract to the state. He asserts that it is one's moral obligation to obey the law unequivocally, even to the point of death, and he concludes there is no higher authority to whom citizens owe their lives than the law (p. 33). In other words, the state is the primary lender to whom we owe repayment. This essay confronts Plato's assertions and contends that his arguments are contradictory and incomplete. Furthermore, this analysis concludes that our social contract is governed not by the law but by a civic duty to improve the law.

Plato's core argument claims that the state has shaped the citizenry and that the citizenry therefore owes the state a debt of gratitude in the form of service and obedience to the law (Crawford, 2007). This argument is premised on equating the influence of the state to that of a parent, teacher, or trainer, and on dismissing our more distant relationship to the general citizenry (pp. 26–27). It is from these intimate relationships that the state derives authority, and from this authority, or influence, arises the debt of servitude (p. 30). Plato's logic, however, is contradictory. If we consider that the state and the laws that govern it are abstractions of a distant majority (whether of lawmakers or citizens), then the state is necessarily distant. Plato ignores this conclusion and sees the state as being much closer to the citizen. Yet the state cannot be both distant and intimate simultaneously. This point of contention causes friction throughout Plato's arguments and undermines his conclusions.

To begin with, Plato’s underlying assertions rest on two basic assumptions. First, the law raised us and shaped us into who we are. As he points out in Crito, “[S]ince you were brought into the world and nurtured and educated by [the state], can you deny…that you are our child and slave” (Crawford, 2007, p. 30). From Plato’s perspective, citizens are not only children of the state but bound to it as slaves to their masters. Plato never addresses outcomes in Crito, but the contract of servitude implies that a debt is owed in exchange for something of value. This is the essence of Plato’s second assumption: laws are omnipresent, they produce equal outcomes for all people, and those outcomes are always good. This assumption implies a close relationship between the citizen and the state, but it too is flawed.

Consider, for example, that within any state a variety of divergent social conditions exist. Income inequality, crime rates, school quality, and access to social services all vary widely, yet all exist under the same set of laws. At no point does Plato’s logic account for these divergent circumstances. In fact, his logic presupposes that an individual could only exist under state rule and be grateful for the outcomes. He says nothing of the fact that, for example, the institution of marriage might have brought together two horribly unqualified parents.

It is in fact impossible to achieve the outcomes Plato’s logic requires. Even if we were to assume that the law is universally applied, it is impossible to ensure equality of outcome across the board. Two citizens might attend different schools with different teachers and have very different educations. One citizen might grow up in a broken home and be disillusioned about marriage or fall into criminal activity. Plato makes no allowance for the possibility that a citizen might strictly adhere to his philosophy and draw negative conclusions about the law, or that the state might create citizens who are detrimental to society. Even if the law could be evenly applied throughout the state, outcomes are far too fluid to ensure a debt is owed. Here again friction between Plato’s arguments of intimacy and distance manifests. It is impossible to stipulate that a debt is always owed when value isn’t always delivered.

It’s clear in Crito that Plato holds the law in high esteem, though he questions whether it is ever permissible to break the law (Crawford, 2007, p. 29). He suggests rhetorically that perhaps the concepts of justice, right, wrong, good, and evil exist outside the law. He illustrates this by posing the following hypothetical: “If I am clearly right in escaping [without the consent of the Athenians], then I will make the attempt. If not, I will abstain” (p. 29). Here Plato expresses tension between Athenian law (under which he’s being held) and a personal interpretation of justice. Plato does not indicate by what manner he will conclude whether he is right or wrong, but interpretation is a fundamental aspect of the law. It is, in fact, what allows the law to evolve with society as much as it allows for variability in its application. Plato’s analysis never directly acknowledges this fluidity of law. He reconciles the question by concluding that a hierarchy of obligations exists, ranked by who ought to be wronged least. At the top of that hierarchy is the rule of law itself (pp. 29, 33).

Had Plato dived more deeply into the concept that right and wrong exist outside the boundaries of the law, he might have realized laws are not always just. More than two thousand years later, Henry David Thoreau (1849) would arrive at exactly this conclusion:

Unjust laws exist: shall we be content to obey them, or shall we endeavor to amend them, and obey them until we have succeeded, or shall we transgress them at once? Men generally, under such a government as this, think that they ought to wait until they have persuaded the majority to alter them. They think that, if they should resist, the remedy would be worse than the evil. But it is the fault of the government itself that the remedy is worse than the evil (para. 17).

As the passage above makes clear, Thoreau famously argued the merits of breaking the law. The state itself is responsible for authoring unjust laws, and it is the role of the citizen to change those laws. In fact, Plato seems to circle this conclusion, extolling citizens who commit no injustice even when an injustice has been committed against them (Crawford, 2007, p. 29). This suggests that Plato knew laws could be misapplied or poorly written, yet he concluded that it is the duty of the citizen to follow these laws regardless of the circumstances under which they’ve been applied. Plato’s reasoning fails to consider the need to continually improve the law and the role individuals play in that process. Our social contract is not to blindly follow the state but to constantly seek to improve it.

In summary, social contracts do exist, but laws and outcomes under laws are far too fluid to ensure an outcome that is worth repaying. Instead, the obligation of the citizen is to continuously improve upon the institutions that govern us, including as Thoreau pointed out, through civil disobedience. Said differently, our contract is not to state institutions, but to the act of continually improving those institutions for the benefit of future generations.


References

Crawford, T. (Ed.). (2007). Plato. Tudor Publishing Company.

Thoreau, H.D. (1849). Civil disobedience. https://xroads.virginia.edu/~Hyper2/thoreau/civil.html

Western Democracy: Of Plato and Dahl

Sometime after the year 400 B.C., as Plato finished the last of his great dialogues, he had no idea that some twenty-three centuries later a western liberal democrat named Robert Dahl would challenge his ideas. It may have struck him as ironic that scholars would declare his philosophy the basis on which modern democratic principles were formed, but if Plato were alive to listen to these arguments, he might have heard an echo of his own voice. This essay contends that while Robert Dahl’s worldview is fundamental to the existence of western democracy, Plato’s principles of guardianship are not without merit. Dahl may have been more correct in valuing individual liberty, but as Plato understood, unchecked liberty is dangerous, anarchical, and should be closely guarded.

In his book Democracy and Its Critics, political science professor Robert Dahl confronts several common challenges to democratic principles. It is, in many respects, a defense of democracy and a qualified counterargument to Plato’s views. But it’s not all disagreement. In fact, to declare a winner in this debate would be somewhat foolish. The ideas of liberty and guardianship are not mutually exclusive; they are in many ways interdependent. The Supreme Court, for example, is precisely a manifestation of Plato’s guardian state. Dahl (1989) brushes past this fact, but consider that America’s founders vested one of the three branches of governing power in a judiciary that is neither elected nor, short of impeachment, removable. Furthermore, the essence of a representative democracy distances the people from the political process. Representatives, while elected, do not run every decision by the voting public. They are entrusted, to a degree, to make decisions on behalf of the people who elected them. In short, Plato didn’t have it all wrong, and to separate guardianship from democracy is to remove the rule of law and fundamentally alter the western political process.

The shared ground between Plato and Dahl goes further than the arrangement of American democracy. One could argue that their philosophies arise from a common understanding of morality and diverge only when each decides what to do about it. Regardless, this common origin directly influenced the ideas of both men, the ideas of western democracy, and indeed the structure of our democratic systems. It is, as Dahl writes, the very justification of democracy to “live under laws of one’s own choosing, and thus to participate in choosing those laws [that] facilitate the personal development of citizens as moral and social beings” (Dahl, 1989, p. 91). He continues more poignantly,

I believe the reasons for respecting moral autonomy sift down to one’s belief that it is a quality without which human beings cease to be fully human and in the total absence of which they would not be human at all (p. 91).

Dahl contends that it is democracy itself that teaches self-reliance, self-worth, and independence (p. 92). At first glance, these positions might seem at odds with Plato’s view of guardianship, but in fact they’re highly complementary. Like Dahl, Plato recognized that moral autonomy exists. He questions whether life would be worth living if the aspects of our intellect that benefit from justice are corrupted, going so far as to declare our sense of justice to be “[f]ar more honored [than the body]” (Crawford, 2007, p. 27). Furthermore, the very essence of Crito is our inner debate over whether to obey the law. It is the ambiguity of this debate, and the potential for unchecked liberty, that necessitates a legal guardrail.

Admittedly, it’s difficult as citizens of a western democracy to be wholly impartial when evaluating the virtues of the democratic system. Certainly self-reliance, liberty, and the freedom to pursue self-interest are fundamental to the American way of life, and it’s difficult to argue that Dahl didn’t have it right in his description of western liberalism. However, here again, he and Plato converge at the limits of liberty and self-direction. To better understand this convergence, consider that many of the core ideals Dahl identifies as uniquely democratic (self-reliance, self-determination, and independence) are all fundamental aspects of anarchical autonomy. The anarchist, as Dahl points out, is compelled by moral obligation to evaluate the laws he follows and to obey those he chooses but never the ones he rejects. Personal responsibility, to the anarchist, he writes, cannot be forfeited (Dahl, 1989, p. 43).

The common ground between anarchy and democracy may have been what worried Plato. After all, the lines between direct democracy, mob rule, and anarchy are quite blurred. It’s not hard to imagine the chaos that would ensue if every citizen were free to exercise moral autonomy to whatever degree they saw fit. Certainly, the perpetrators of the worst atrocities in human history all felt morally justified. So, while the importance Dahl places on liberty and self-determination is correct and fundamental to democracy, its furthest extremes lie in chaos. Plato surely understood this as the basis for the law, but also as the reason to protect the law from despotic forces.

Such circumstances may seem theoretical, but they’ve played out in recent American history. In 1957, following the Supreme Court’s 1954 ruling in Brown v. Board of Education, which desegregated public schools, mobs of southern whites backed by the Arkansas National Guard took to the streets in protest. In response, President Eisenhower sent in the Army and later addressed the nation:

The very basis of our individual rights and freedoms rests upon the certainty that the President and the Executive Branch of Government will support and insure the carrying out of the decisions of the Federal Courts, even, when necessary with all the means at the President’s command…The interest of the nation in the proper fulfillment of the law’s requirements cannot yield to opposition and demonstrations by some few persons. Mob rule cannot be allowed to override the decisions of our courts (Eisenhower, 1957).

By his own admission, Dahl declares moral judgments to be necessarily ambiguous. To assert that such truths exist in the same sense as mathematical proofs or the laws of physics, he writes, is patently false (Dahl, 1989, pp. 66-67). This admission, which Dahl meant as a blow to guardianship, is actually an endorsement of Plato’s reliance on the law. Surely southern whites felt morally justified in their actions and, in an anarchical sense, evaluated which laws were worth disregarding before mobilizing. Certainly many more Americans exercised the same moral autonomy and arrived at a different conclusion. However, it is precisely because moral judgments are subjective that a common set of laws is required. As this example shows, the sterile rulings of the Supreme Court must be insulated from the fervor and passion of mob rule. In a very real sense, some form of legal guardianship is necessary to protect civil society from the very people who inhabit it.

For all their similarities, Plato and Dahl diverge radically in their conclusions on the nature of man and the role of government, and clearly the debate between them cannot be easily settled. However, western democracies like the United States could not exist without the precepts of liberty and individualism. In these respects, Dahl had it right, and given the undeniable success of democracy in producing the world’s greatest empire, it’s hard to argue that a superior system exists. That said, Plato understood that unchecked liberty leads to chaos and destruction, and therefore an impartial legal system, a guardian, was critical to protect against mob rule.


References

Crawford, T. (Ed). (2007). Six great dialogues. Dover Publications, Inc.

Dahl, R. A. (1989). Democracy and its critics. Yale University Press.

Eisenhower, D. D. (1957). Radio and television address on the situation in Little Rock. Dwight D. Eisenhower Presidential Library. https://www.eisenhowerlibrary.gov/media/3883