Waking Up from the Internet: A Digital Nightmare Dressed Like a Daydream
This review of related literature was originally written for a subject in my Master’s in Digital Communication Leadership programme in February 2018 and is now being published online with a few minor revisions suggested by my professor. I chose this topic because personal values vs. work was a huge subject of debate in a UX Philippines Facebook thread. I decided to make it public after hearing Mike Monteiro’s How to Build an Atomic Bomb talk at UX Copenhagen. This piece is over 4,000 words long; if you don’t feel like reading everything, I suggest reading Mule Design’s Code of Ethics.
And yes, I took the title from a Taylor Swift song (please don’t sue me).
Edited to correct some typos.
Everyone dreams of a better world. Big tech companies believe we can solve the world’s problems using their technology. This subject is highly relevant since, in recent news, former Facebook employees have criticized the platform’s addictive qualities, its epidemic of hate speech, and its damage to democracy (Wong 2017). This paper will present case studies of how the choices designers make in designing websites and mobile applications have affected society as a whole. Designers should be more aware of the contradictory nature of the Internet when they try to find the most ethical solutions in their work, especially given the accelerating technological progress of recent times. Some proposed solutions involve creating a code of ethics for designers or doing thought experiments.
The Digital Dream of the Good Society
The world has become digital, according to Nicholas Negroponte, MIT Media Lab co-founder. In his 1995 book, he predicted the ubiquitous use of online multimedia that we see today (Negroponte 1995). Manuel Castells, network sociologist, also noted that the last two decades have seen unprecedented technological breakthroughs that have led to a digital transformation of society (Castells 2010). He cites three distinct stages of telecommunications: automation, experimentation, and reconfiguration. While users only learn by using in the first two stages, they find even more applications of technologies in the last stage (Castells 2010). The spread of the personal computer has carried computer science beyond the military and big business into the everyday lives of society’s creative individuals (Negroponte 1995). Currently, computers are manipulated through point-and-click mouse interfaces and keyboards; Negroponte predicted that people might not even need to manipulate the machine directly anymore (1995).
Left-wing academic Lawrence Lessig was positive about the effect that digital technologies would have on the market (2004). The new market would be more competitive and would have a more diverse range of creators who could actually earn more than they did on average before (Lessig 2004). Paul Simon, the singer, described Web 2.0 as a “fire… for vigorous new growth” (Keen 2015, p. 141). Consistent with this, Robin Mansell, professor of new media, stated that “each new generation of technology is presumed, on balance, to be consistent with human well-being, democracy, and freedom” (2012, p. 16). The vision of these technologists is said to be in line with the notion of the “good society,” and in time everybody will benefit from technological progress (Mansell 2012).
The goal of automation that Negroponte predicted “was consistent with the prevailing social imaginary of a world in which ‘man’ could ‘better review his shady past and analyze more completely and objectively his present problems’, in the interests of building the good society” (Mansell 2012, p. 96). Technology proponents all believed that “the Internet was the answer… because it ‘democratized’ media, giving a voice to everyone, thereby making it more diverse” (Keen 2015, p. 140). Kevin Kelly, founding editor of Wired magazine, said that Web 1.0 would give everything away for free. Dale Dougherty, O’Reilly Media co-founder, said that in Web 2.0 everybody could become a writer or musician (Keen 2015). In Web 2.0, people could produce content because they didn’t need the gatekeepers anymore (Keen 2015).
Cultural anthropologist Adam Fish said that the rights protecting free speech, “an essential right to information exchange” (Halleck as cited by Fish 2017), in person and in traditional media would also apply to new technologies (2017). If old media was “parochial, self-interested and sexist” (Keen 2015, p. 149), then Web 2.0 social networks such as Reddit and Twitter would give a voice to the voiceless; even people who are not usually eloquent have a human right to participate (Fish 2017).
Collective project communities, where participants are all assumed to have a worthy contribution to make, are made possible through the Internet, according to Axel Bruns, creative industries professor (2008). Since there is no pre-filtering, it is easier to get in, and hierarchy is determined through cooperation (Bruns 2008). On Slashdot, a citizen journalism platform that doesn’t require accreditation, users feel that the broad diversity of critics has created more trust in the site (Bruns 2008). Produsage communities are organized through “ad hoc forms of governance,” which futurist Alvin Toffler had predicted in the 1970s: “We are witnessing not the triumph, but the breakdown of bureaucracy… the arrival of a new organizational system that will increasingly challenge, and ultimately supplant bureaucracy. This is the organization of the future” (Bruns 2008, p. 26).
In order to reduce human distortions such as “desires, prejudice, distrust of outsiders” that could affect decisions, traditional industries handed their work over to the supposedly unbiased machine, as mathematician Cathy O’Neil narrates in Weapons of Math Destruction (2016). However, many of the world’s automated systems now feed on garbage data. Only humans can identify the mistakes these machines are making, but since correcting them is not the market’s top priority, as it would cause inefficiency, humans are discouraged from interfering (O’Neil 2016).
The full benefits of the digitization of society cannot be achieved because of its contradictory nature, as the example above shows. Mansell mentioned two paradoxes of the information society, information abundance vs. scarcity and complexity vs. control (2012), but there may be more. This section elaborates on other competing values and unintended tradeoffs of the new digital communication technologies.
More Transparency and Loss of Individual Freedom vs. More Anonymity and Bullying
David Kirkpatrick, a technology journalist, claimed that the social network Facebook was founded on all-encompassing transparency (Keen 2012). The top management of Facebook, Mark Zuckerberg and Sheryl Sandberg, are “today’s utilitarian social reformers” (Keen 2012, p. 61). Keen compares Facebook to Jeremy Bentham’s Inspection-House, where individual transparency through the Open Graph and Timeline functions is supposed to create a healthier society (2012). “More truth leads to more togetherness, they say; and more togetherness, their logic spirals, leads to a better society” (Keen 2012, p. 61).
However, like Bentham with his greatest happiness principle, Zuckerberg also oversimplifies human beings into a quantifiable code of happiness or pain, like a “cost-benefit expert on a grand-scale” (Keen 2012, p. 61). Keen believes that Mark Zuckerberg is “wrong that this shared future makes us more human”; rather, it creates a “vicious cycle of less and less individual freedom, weaker and weaker communal ties, and more and more unhappiness” (Keen 2012, p. 66).
On the flip side, in defense of transparency, when people can hide behind a screen, they reveal the worst of humanity. Amanda Todd, a 15-year-old girl, committed suicide after three years of cyberbullying (Ess 2010). The Internet was supposed to empower people like her; instead, it has compounded hatred toward the very defenseless people it was supposed to empower (Keen 2015, pp. 149–150). Despite this, Silicon Valley continues to pour funding into anonymous networks and apps like Secret, Whisper, etc. (Keen 2015).
Discipline and Order vs. Mass Surveillance and Loss of Privacy
Aside from Facebook, Mark Deuze reported that other corporations and the government are also taking cues from Jeremy Bentham’s model of a disciplined society through constant surveillance (Deuze 2012). Media critic Dan Schiller said that they justify expanding their databases and surveillance technologies such as GPS (global positioning system) and RFID (radio frequency identification) by pointing to the vulnerabilities posed by viruses and hackers, all for the good of society (2007).
Deuze cited Foucault, Deleuze, and Mattelart to argue that this discipline is becoming something that is not just in the hands of a powerful few but part of everyday life (2012). “Discipline, therefore, is enforced as well as (potentially) subverted by all individuals in everything they do” (Deuze 2012, p. 107). Since media and power are everywhere yet nowhere, the surveillance of new media is becoming mundane and almost desirable as well (Deuze 2012).
Convenience and Efficiency vs. Loss of Autonomy/Privacy
One advantage of the Internet that people enjoy at work and in everyday life is its convenience. However, a lot of behind-the-screen work occurs to reduce the effort required from users (Mansell 2012). Sociologists and policy makers alike are concerned about its effect on personal autonomy. “As the new new gadget I hold in my hand becomes increasingly personalized, easy to use, ‘transparent’ in its functioning, the more the entire set-up has to rely on work being done elsewhere, on the vast circuit of machines which coordinate the user’s experience,” notes Slovenian cultural critic Slavoj Zizek about the growth of personalized technology and corporate power (Keen 2012, p. 166). Jan Philipp Albrecht, German politician and member of the European Parliament, feels that the human being is more and more deprived of his right to make decisions, more and more degraded into a mathematically calculable system that can optimise itself (2015).
Freedom of Speech vs. Misinformation and Propaganda
With freedom of speech, where everyone is given a voice, it gets harder to find the signal in the noise amid the abundance of information. Some may unintentionally share false information, but others intentionally use misleading strategies. Wikipedia, where anyone can contribute and edit articles, is “no more immune to human nature than any other Utopian project. Pettiness, idiocy, and vulgarity are regular features of the site. Nothing about high-minded collaboration guarantees accuracy, and open editing invites abuse” (Bruns 2008, p. 124). Without gatekeepers for information, a lot of content could turn out to be either “propaganda or plain lies” (Keen 2015, p. 153). At the mildest, people are paid to post fake glowing reviews on Yelp and Amazon. At the worst, terrorist groups like ISIS have exploited this feature of the Internet to recruit successfully (Keen 2015).
Problem Solving vs. Technosolutionism
The powerful tech founders assume that their interests and their solutions to society’s problems align with those of the general public (Keen 2015). “They appointed themselves as the emancipators of the people without bothering to check with them first” (Keen 2015, p. 141). Morozov wrote on a similar note that although their intentions may seem good, they rely too much on technology as the solution (2011). He also stated that “clinging to Internet-centrism — that pernicious tendency to place Internet technologies before the environment in which they operate” affects policy makers as well, giving them a false sense of security (Morozov 2011, p. 111). And yet the sort of problems these technologists are trying to solve are not very important (Morozov 2013). Instead of “democracy and diversity, all we’ve got from the digital revolution so far is fewer jobs, an overabundance of content, an infestation of piracy, a coterie of Internet monopolists, and a radical narrowing of our economic and cultural elite” (Keen 2015, p. 157).
Their Electronic Daydream, Our Digital Nightmare
These contradictory features of the Internet have contributed to a dark side that people may not be aware of or may willingly embrace as the norm.
The Effect on our Brains
Tech journalist Nicholas Carr writes about how the Internet physically changes human brains: the way we read on screens has affected our attention span and our understanding of the content (2010). People distracted by hyperlinks and ads are rewired to keep switching contexts in the name of efficiency. Internet users are scatterbrained, have weak memories, and are rewired to actually crave the distraction (Carr 2010).
Workers are not only expected to be disciplined and efficient but also to rack up achievements, philosopher Byung-Chul Han observes (2015). In the twenty-first century, everyone is an “entrepreneur of themselves” (Han 2015, p. 8). While the early cultural achievements of humanity were attained through deep contemplation, David Brooks, a New York Times writer, stated that achievement has now been redefined as the ability to attract attention (Keen 2012), which can be done much faster.
Immersive reflection is replaced by hyperattention: “a rash change of focus between different tasks, sources of information, and processes characterizes this scattered mode of awareness,” which “has a low tolerance for boredom” and leaves no room for the “profound idleness that benefits the creative process” (Han 2015, p. 13). Han calls this overachieving, constantly tired and exhausted society the burnout society (2015).
Power Inequality and Erosion of Trust
The open Internet where anyone can say anything has benefited “mostly young white western males with a slight personality defect” (Perkins cited by Keen 2015, p. 155), and the “wisdom of the crowd” is prioritized over “accountable experts.” The transparency that Web 2.0 promised took away freedom and “ironically spawned opaque bureaucracies controlled by anonymous elites” (Keen 2015, p. 155). People who don’t belong to this demographic are silenced. These power inequalities are something big tech companies prefer to sweep under the rug when they make their grand proclamations about making the world a better place.
The question is: who are they making it better for? Mansell notes that the knowledge and skills needed to develop and understand powerful algorithms are a privilege, and that those who lack them are excluded from the conversation on making a good society, leading to power imbalances (2012). Although the World Bank and the UN usually frame the digital divide in terms of Internet availability (Mansell 2012), and ICT providers have no desire to work in poor areas with a low return on investment (McChesney 2013), on a deeper level the digital divide is about the power gap and the distribution of information resources (Schiller 2007). Unless this is addressed, the information asymmetry is a threat to democracy (Schiller 2007).
Even peer-to-peer exchange, which was designed to democratize the Internet, suffers from power imbalances, according to NYU business professor Arun Sundararajan (2016). For example, on Uber a passenger doesn’t know the intentions of the driver, and on Airbnb the home owner knows more about the accommodation than the traveller. The producer is the one with the power, agrees communications professor Robert McChesney (2013). “They may give the people what they want, but only within the range that is most profitable for them” (McChesney 2013, p. 74). Another example is the massive open online course (MOOC), which was intended to democratize education, but as William Deresiewicz notes, “That is just their cover story… They’re reinforcing existing hierarchies and monetizing institutional prestige” (Keen 2015, p. 145).
As power is increasingly expressed algorithmically, the digital divide grows even wider, law researcher Frank Pasquale notes (2015). Automated decisions, such as optimizing search engines and restaurant recommendations, that apply thousands of rules in a fraction of a second are treated as technical problems; nobody asks about their fairness, and the values involved stay hidden in the algorithmic black box (Pasquale 2015).
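Pasquale’s point about values hidden in the black box can be made concrete with a toy ranking function. This is a hypothetical sketch: the weights and the `partner_boost` field are invented for illustration and do not describe any real service’s formula.

```python
def rank_restaurants(restaurants):
    """Toy recommendation ranking with a hidden value judgment."""
    def score(r):
        return (
            2.0 * r["rating"]           # visible quality signal
            - 0.1 * r["distance_km"]    # convenience
            + 3.0 * r["partner_boost"]  # quiet preference for paying partners
        )
    return sorted(restaurants, key=score, reverse=True)

places = [
    {"name": "Indie Diner", "rating": 4.8, "distance_km": 1.0, "partner_boost": 0},
    {"name": "Chain Grill", "rating": 4.0, "distance_km": 1.0, "partner_boost": 1},
]
# The lower-rated partner outranks the better restaurant,
# and the user only ever sees the final order.
print([p["name"] for p in rank_restaurants(places)])
```

The user experiences this as a neutral, technical answer to “what’s good nearby?”, while the fairness question of why a paid boost outweighs almost a full rating point never surfaces.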
For all the talk about making the world a better place, the ability to answer these questions about social justice and technological capability lies in the few powerful hands that have access to the world’s data (Morozov 2018). No wonder Keen concludes that trust in authority has become the greatest casualty of this society, as big tech’s black-box algorithms challenge it (2015). Alongside that, “trust is coming to be regarded as relating to the trustworthiness of the software ‘system’, not the human beings who design and manage it” (Mansell 2012, p. 112).
Lucidity in Design
“Every technology is an expression of human will”
(Carr 2010, p. 44)
Relying purely on data-driven decisions made by machines, which are not neutral (O’Neil 2016, p. 171), allows tech designers to elude responsibility.
Every Design Action has a Reaction
Showing a Number or Adding a Word can Manipulate and Lead to Conformity
Another criticism Keen makes of the Internet is that it achieves the opposite of its intended goal of “networked intelligence”; rather, Facebook, LinkedIn, Instagram, etc. are “creating more social conformity and herd behavior” (Keen 2012, p. 50). “Men aren’t sheep,” he quotes John Stuart Mill, Bentham’s greatest critic, yet on the social network we act like sheep. This leads to what cultural critic Neil Strauss describes as “the need to belong” becoming the rule instead of genuine nonconformity (Keen 2012, p. 50).
Behavioral economist Richard Thaler and Harvard law professor Cass Sunstein write about the nudge effect in the artificial music market study by Matthew Salganik et al. (2008). They discovered that individuals were “far more likely to download songs that had been previously downloaded in significant numbers, and far less likely to download songs that had not been as popular” (Thaler and Sunstein 2008, p. 62). The success of a song depended on whether the number of previous downloads could be seen, which leads us to believe that the music industry could manipulate us into conforming and listening to the same tracks (Thaler and Sunstein 2008).
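The snowballing effect Thaler and Sunstein describe can be sketched in a toy simulation. This is a hedged illustration in the spirit of Salganik’s experiment, not its actual protocol; all numbers are invented.

```python
import random

def simulate_market(n_songs=50, n_users=2000, show_counts=True, seed=42):
    """Simulate downloads for songs of identical intrinsic appeal."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # each song starts with one seed download
    for _ in range(n_users):
        if show_counts:
            # Social influence: choose proportionally to visible counts,
            # so early random hits snowball (rich-get-richer).
            r = rng.uniform(0, sum(downloads))
            acc = 0
            for i, d in enumerate(downloads):
                acc += d
                if r <= acc:
                    downloads[i] += 1
                    break
        else:
            # Independent condition: every song is equally likely.
            downloads[rng.randrange(n_songs)] += 1
    return downloads

def top_share(downloads):
    """Fraction of all downloads captured by the most popular song."""
    return max(downloads) / sum(downloads)

social = simulate_market(show_counts=True)
independent = simulate_market(show_counts=False)
# With visible counts, popularity concentrates far more heavily.
print(top_share(social), top_share(independent))
```

Running this with different seeds also echoes the study’s other finding: which song ends up on top in the social condition is largely arbitrary, decided by early random luck rather than quality.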
User experience consultant Chris Nodder adds another choice architecture example: the design of Microsoft’s automatic updates in Windows XP (2013). “Adding a word such as ‘recommended’ or ‘preferred’ can either rely on social proof (most people do this) or authority (we say you should do this)” (Nodder 2013, p. 52).
Default Settings can Betray
Nodder thinks that companies like Facebook take advantage of users’ laziness in reading the fine print and their assumption that companies will do no harm (2013). For example, Facebook keeps revising its terms and conditions (Nodder 2013). In 2004, the concept of Facebook was that only friends could access a person’s information, but in 2010 Mark Zuckerberg said in an interview that “the new social norm is openness, not privacy” (Nodder 2013, p. 54). The new default settings allowed third-party advertising to proliferate while Facebook profited (Nodder 2013).
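The mechanics Nodder describes are simple enough to sketch in code. The settings model below is entirely hypothetical, not Facebook’s actual data model; it only shows how a privacy-hostile default, combined with user inertia, manufactures “consent.”

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # The default does the harm: sharing is ON unless the user opts out.
    share_with_third_parties: bool = True
    ad_personalization: bool = True

users = [PrivacySettings() for _ in range(100)]

# Hypothetically, only a small minority ever finds the toggle.
for u in users[:7]:
    u.share_with_third_parties = False

sharing = sum(u.share_with_third_parties for u in users)
print(sharing)  # 93 of 100 users "agreed" by doing nothing
```

Flipping the default to `False` would invert the outcome without changing a single user’s intent, which is exactly why defaults are a design decision, not a neutral technicality.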
Everything is Political
Bolaño claims that in the structure and organization of mass media, the choice of technology is not neutral and “the development of a specific possibility removes others, sometimes irreparably” (2015, p. 65). When designers make these choices, they affect not just the economic value of information but also its social value, ultimately affecting how people live (Mansell 2012). Designers have to be careful with language, where power resides, especially those working on news media platforms, since it affects the political process (Bruns 2008). “Individually none of these little lies are ruinous… but they add up and they take both an economic and cultural toll” (Wu cited by Keen 2015, p. 154). Under this “informationalized capitalism” and the “historical tension between capitalism and democracy,” bad choices could lead toward “full blown authoritarianism” (Schiller 2007, p. 55).
Wake Up, Designers!
Being aware of the competing social imaginaries of the Internet is the first step toward designing technologies better than before. Mansell believes that these differences can still be resolved (2012). Although Morozov is known as a technology critic, he actually defends technology, ending his polemic with “Technology is not the enemy; our enemy is the romantic and revolutionary problem solver who resides within” (Morozov 2013, p. 358).
Design, of course, is not limited to what users can see; Negroponte predicted that interfaces would be less about the look and feel and more about the intelligence behind them (1995). Nor is it limited to algorithmic design: business and community models also have to be carefully thought out, like the produsage model (Bruns 2008). “Those who choose to compose and disseminate alternative value systems may be working against the current and increasingly concretised mythologies of market, church and state, but they ultimately hold the keys to the rebirth of all three institutions in an entirely new context” (Bruns 2008, p. 89).
Introducing a Code of Ethics in Design
“As designers, investors, commentators, we need to seriously ask ourselves whether some of these systems are legitimate and worthy… not from an investment return point of view, but from an ethical and moral point of view,” Marc Andreessen tweeted in March 2014 (Keen 2015, p. 152). More and more designers and writers are realizing that the capabilities of today’s technologies make their jobs more complicated.
Dutch philosopher Henk Oosterling calls this new movement relational design, which “is the overture to a creative lifestyle whose cornerstones will be ecopolitical sustainability and geopolitical responsibility… for a revaluation of some of its inherent values, such as responsibility, honour and respect, so as to limit the excesses of hyperindividualism and hyperconsumerism” (2009, p. 19). Likewise, interaction design professor Yvonne Rogers believes that Human Computer Interaction is becoming more transdisciplinary (2012).
Because of the complexity of the Internet, which involves multiple agents including designers and users as well as machines and networks, media studies professor Charles Ess recommends that ethics be thought of not as an individual duty but within a framework of shared and distributed responsibility (2010). In his book Digital Media Ethics, he compares moral absolutism and relativism and concludes that pluralism and dialogical approaches are a good framework for recognizing diverse ethical views.
Business writer Nir Eyal suggests that more designers do thought experiments like the “Regret Test” (2017). There are more examples of evil interaction design patterns in Nodder’s Evil by Design. He suggests a design activity where designers think of a product, flip a coin (heads is “good,” tails is “evil”), randomly pick a pattern, and try to imagine the product designed in that manner (Nodder 2013).
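Nodder’s coin-flip activity can even be scripted as a prompt generator. The pattern list below is a short, paraphrased sample in the spirit of Evil by Design, not Nodder’s full catalogue.

```python
import random

# Paraphrased sample of persuasion patterns, for illustration only.
PATTERNS = [
    "social proof (show how many others already did it)",
    "scarcity (claim limited availability)",
    "defaults (preselect the profitable option)",
    "framing (word the choice to favour one outcome)",
]

def design_exercise(product, rng=random):
    """Flip a coin for intent, pick a random pattern, and frame the prompt."""
    intent = "good" if rng.random() < 0.5 else "evil"
    pattern = rng.choice(PATTERNS)
    return f"Imagine {product} designed with {intent} intent, using {pattern}."

print(design_exercise("a fitness app"))
```

The point of the exercise is the “evil” branch: imagining the manipulative version of your own product makes the patterns easier to recognize, and to refuse.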
Although in the past decade technologists believed they could “make the world a better place” by giving Internet access and democratized creative tools to everyone, evidence shows that the Internet is not that simple. Values are hidden behind black-box algorithms that can only be understood by a privileged few. The Internet promised to equalize everyone, but it has only served to reinforce hierarchies. Although there is a lot of talk about innovation and democracy at tech events and on media sites, big tech has shown that it is more about lip service than actual effort. The time to wake up and introduce a code of ethics for design was yesterday.
1. Bolaño, C. (2015). The culture industry, information and capitalism. Basingstoke, UK: Palgrave Macmillan.
2. Bruns, A. (2008). Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang.
3. Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton & Company.
4. Castells, M. (2010). The rise of the network society (Second Edition). Wiley-Blackwell.
5. Deuze, M. (2012). Media Life. Cambridge: Polity.
6. Ess, C. (2010). Digital media ethics (Reprint.). Cambridge: Polity.
7. Eyal, N. (2017). “Designers Need the Regret Test,” Words That Matter. Retrieved from https://medium.com/wordsthatmatter/designers-need-the-regret-test-86ef957e0d34
8. Fish, A. (2017). Technoliberalism and the End of Participatory Culture in the United States. Springer.
9. Han, B. (2015). The Burnout Society. Redwood: Stanford Briefs.
10. Keen, A. (2012). Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us. New York: St. Martin’s Press.
— (2015). The Internet Is Not the Answer. New York: Grove Press.
11. Lessig, L. (2004). Free culture: How big media uses technology and the law to lock down culture and control creativity. Penguin.
12. Mansell, R. (2012). Imagining the Internet: Communication, Innovation, and Governance. Oxford, UK: Oxford University Press.
13. McChesney, R. W. (2013). Digital Disconnect: How Capitalism is Turning the Internet Against Democracy. New York: The New Press.
14. Morozov, E. (2011). The net delusion: the dark side of internet freedom (1. ed.). New York: Public Affairs.
— (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
— (2018). “Die Menschen müssen die Daten der Internet-Giganten zurückerobern” [“People must reclaim the data of the internet giants”]. Süddeutsche Zeitung. Retrieved from http://www.sueddeutsche.de/digital/digitale-abhaengigkeit-die-menschen-muessen-die-daten-der-internet-giganten-zurueckerobern-1.3828542
15. Negroponte, N. (1995). Being Digital. New York: Knopf.
16. Nodder, C. (2013). Evil by Design: Interaction Design to Lead Us into Temptation. New Jersey: Wiley.
17. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books.
18. Oosterling, H. (2009). DASEIN AS DESIGN Or: Must Design Save the World? Melintas, 25(1), 1–22. Retrieved from http://journal.unpar.ac.id/
19. Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press.
20. Schiller, D. (2007). How to Think about Information. University of Illinois Press.
21. Sundararajan, A. (2016). The sharing economy: The end of employment and the rise of crowd-based capitalism. Cambridge, Massachusetts & London, England: MIT Press.
22. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press.
23. Wong, J. (2017). “How big tech finally awakened to the horror of its own inventions”. The Guardian. Retrieved from https://www.theguardian.com/media/2017/dec/20/facebook-twitter-mental-health-sean-parker
24. Wu, T. (2016). The Attention Merchants: The Epic Scramble to Get Inside Our Heads. New York: Knopf.