User:Rita Graca/first chapter

From XPUB & Lens-Based wiki

IS IT POSSIBLE TO GAIN AGENCY? SHOULD WE RUN, HIDE OR FIGHT?

Users are persuaded to believe centralised platforms are the only real option. Who are the alternatives for?

My phone came with social apps already installed. My data plan facilitates the free use of these apps. At some point, having a Gmail account forced me to sign up for the social network Google+ as well. As seen in the first chapter, platforms engage us in certain behaviours. Not only that, but even outside the platforms we are compelled by the ease of joining. We are persuaded to believe centralised platforms are the only real option to engage in social media.

But what are centralised platforms? In the context of networks, it means there is one central server. Large corporations like Twitter have several servers, but they all connect to one identity. When we use Twitter we are complying with the rules of one corporation, with one CEO, one CFO, and all the other positions you obtain by mixing random capitalised letters. A centralised system concentrates its power at one point. This means that when the authority decides to follow a certain path, the nodes of the network don't have the power to change things from the top. And that's where decentralisation appears as an alternative. If the platforms behave in ways that users don't agree with, it's appropriate (and proactive!) to create other platforms.
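The single point of control described above is also a single point of failure. A toy sketch (all node and server names are made up for illustration) of what survives when the central server of a hub topology disappears, compared with losing one server of a two-server federation:

```python
from collections import deque

def largest_component(nodes, edges, failed):
    """Size of the biggest group that can still talk after `failed` goes down."""
    alive = set(nodes) - {failed}
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:  # breadth-first search over the surviving graph
            n = queue.popleft()
            size += 1
            for m in adj[n] - seen:
                seen.add(m)
                queue.append(m)
        best = max(best, size)
    return best

users = ["u1", "u2", "u3", "u4"]
# Centralised: every user talks through one hub
central = [("hub", u) for u in users]
# Federated: two peered servers, each hosting two users
federated = [("s1", "u1"), ("s1", "u2"), ("s2", "u3"), ("s2", "u4"), ("s1", "s2")]

print(largest_component(users + ["hub"], central, failed="hub"))        # 1: everyone is cut off
print(largest_component(users + ["s1", "s2"], federated, failed="s1"))  # 3: s2 and its users carry on
```

When the hub fails (or decides to ban everyone), no pair of users can communicate; when one federated server fails, the rest of the network keeps functioning.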

Mastodon is a decentralised social media platform. It initially emerged as a fork of GNU Social and was driven by dissatisfaction with social media like Twitter. (Lorusso, 2019) Decentralisation means the distribution of authority: each server can implement its own vision while sharing a common platform. Mastodon is also federated. Users from different groups can socialise with each other, but each community can tailor the experience to its liking. In practice, one Mastodon community may allow porn and nudity while another blocks it.
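This split between a shared platform and local rules can be sketched in a few lines. The model below is a simplification of my own (the instance names, tags and `visible` check are hypothetical, not Mastodon's actual code): posts federate to every server, but each server applies its own moderation policy before showing them to its members.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    home_instance: str
    text: str
    tags: set = field(default_factory=set)

@dataclass
class Instance:
    """A federated server with its own moderation policy."""
    name: str
    blocked_tags: set = field(default_factory=set)

    def visible(self, post: Post) -> bool:
        # Federation: posts from any instance arrive here,
        # but each server decides what its members see.
        return not (post.tags & self.blocked_tags)

# Two instances sharing the same platform, with different policies
open_server = Instance("open.example")
strict_server = Instance("strict.example", blocked_tags={"nsfw"})

post = Post("alice", "open.example", "an art photo", tags={"nsfw"})
print(open_server.visible(post))    # True: this community allows it
print(strict_server.visible(post))  # False: this community blocks it
```

The same post, the same protocol, two different experiences — which is the practical meaning of "each server can implement its own vision".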

Decentralisation is not something new. IRC, the Internet Relay Chat, is a long-established chat protocol created in 1988 that is still used by feminists, hacktivists, open-source communities and others. Mastodon and IRC allow a particular kind of resistance as a result of their decentralisation.
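Part of IRC's staying power is its simplicity: a client joins the network by sending a handful of plain-text commands over a TCP connection to any server it chooses, because the protocol is open. A minimal sketch (the nickname and real name are invented) of the registration lines a client sends right after connecting:

```python
def irc_registration(nick: str, realname: str) -> list:
    """Compose the plain-text lines an IRC client sends on connect."""
    return [
        f"NICK {nick}",
        # USER <user> <mode> <unused> :<real name>  (RFC 2812 form)
        f"USER {nick} 0 * :{realname}",
    ]

# A real client would write these lines, terminated by CRLF,
# to a TCP socket connected to whichever server it trusts.
print("\r\n".join(irc_registration("xpub_reader", "Anonymous Reader")))
```

Because the whole exchange is human-readable text, anyone can write a client or run a server — there is no single company gatekeeping access.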

Migrating data to decentralised social media can be challenging, but this hardship can also be a relief: it can be a good way of escaping surveillance on social media. The Facebook Terms of Service declare that not using the same name you use in everyday life is a violation of the contract. This backs up Mark Zuckerberg's idea that "Having two identities for yourself is an example of a lack of integrity." (Kirkpatrick, 2010) However, having unconnected accounts serves different purposes, from avoiding an unhealthy algorithm to ensuring safety.

Decentralised platforms align with moral values that are hard to ignore. Their structure rejects a single concentration of power, making them more democratic. They demand smaller infrastructures, and thus foster more meaningful communities. (hbsc et al, 2017) These qualities attract a lot of people looking for safe spaces to socialise, particularly marginalised communities. However, decentralised social networks are still failing to reach the mainstream.

Decentralisation can be elitist. Few users are programmers who can contribute to the platform, so the bottom-up approach is enjoyed only by the same small group of people. (Peeters, 2013) Mastodon tries to resist this problem with a great initiative: the issues page of its GitHub repository is open for bugs and wanted features. This space is very active; every time I visited the page it had recent posts. In issue 12376 you can read a proposal for a redesign of the hashtag column. The motivation is that "the current hashtag column doesn't stand out and looks bland". The user then shares code to change the style. Unfortunately, implementing this and the other 1,395 open issues is impossible. As someone has to go through them and make decisions, a responsible agent will always be necessary. New, divergent or conflicting opinions have to be dismissed for the sake of moving forward. (Berriot, 2018)

It's true that not every user is interested in the same degree of control. We can argue that users who want to exercise more agency can self-host and become the administrators of their own network. However, setting up a server is difficult, and jargon works as a form of exclusion.

Last year, I participated in the development of a small network. My colleagues and I were self-hosting websites to reflect on our understandings of networks, autonomy, online publishing and social infrastructures. (XPUB, 2019) This experience gave me an independence that I didn't know was possible. However, I acknowledge my privilege: I was surrounded by experienced peers, and I'm sure I couldn't have done this without help.

Although the future of decentralised social media looks promising, there are still concerns worth discussing. Some authors have engaged with them in inspiring ways {referencing Lídia Pereira, 2019, Sarah Friend, 2018, etc}.


Combative actions on social media demonstrate the need to make discussions public by publishing them.

What else can be done against harmful features and things we don't agree with? A fair answer is to insist on accountability, whether from governments, tech companies or international organisations: starting legal actions as David Carroll did, or asking for laws such as the NetzDG law in Germany. David Carroll started a legal process against Cambridge Analytica to retrieve the personal information collected by the company. The NetzDG law (or even better, the Netzwerkdurchsetzungsgesetz) is controversial, but it aimed to give legal weight to flagging, complaining and reporting inside platforms. Not every country can rely on a democratic government; still, such laws can set an example for the many social media companies that are US-based but operate European data centres.

These legal discussions deserve to be accelerated. Alongside them, it's stimulating to see how users turn platform components to their own interests.

The most honest way for me to look into these problems is from where I stand. I'm already subscribed to the biggest platforms. What happens if I resist the biased discourse, but from the inside? I'm looking into mainstream platforms to find actions that challenge authority.

I will take cancel culture as a case study to discuss how digital vigilantism becomes a way for users to assert their agency. Most importantly, I want to demonstrate how interface design shapes these behaviours.

Cancel culture appeared as a way for users with little media power to get together and raise awareness of problematic people or products. Cancel culture happens when a mediatic figure does something unacceptable in the eyes of the public. As a consequence, they become cancelled. This usually means a viral reaction on social media, loss of followers, sponsors, or other forms of online punishment. In the attention economy, when you find someone not worthy of your attention, you're denying them their sustenance. (Nakamura, 2019) Cancel culture also puts pressure on social platforms to act politically towards users, something that social media businesses have been avoiding.

In the US, publishers such as traditional newspapers curate content, so they have responsibility for what is published. US law declares that an interactive computer service is not a publisher. (Communications Decency Act, 1996) This means computer services can't be held accountable for what their users publish. Facebook is a computer service, but when it starts banning content and deciding what content is appropriate, it's making editorial decisions. There's still some confusion about whether social media businesses should comply with particular legislation.

Cancel culture is both boycott and callout culture, and it touches on neighbouring concepts such as harassment and shaming. However, cancel culture exists with a very specific aim: to pursue social justice. (Trottier, 2019)

In August 2019, a far-right group gathered in a conference room inside a hotel in Lisbon. After the meeting was made public, people were upset that the hotel had allowed this kind of event. Soon, a group of users stormed social media to show their discontent. Using the feedback score that Facebook offers to official pages, users dropped the hotel's score from 4.6 to 1.9. The review section was also flooded with very negative comments. This attracted a lot of media attention, and the hotel published an apology afterwards. The bad reviews had little to do with the hotel's amenities; they became a tool for protest.

Also in August, but in 2014, an online post accused a game developer of having a romantic relationship with a journalist, allegedly showing a lack of ethics in the gaming world. This started a controversy around a gaming industry culture rife with sexism and misogyny. People speaking out against that culture suffered coordinated attacks, mainly targeted at women. The movement spread and escalated with the use of the hashtag #Gamergate on Twitter. The repercussions of such movements should be taken seriously: the Gamergate harassment included doxing, intimidation, swatting, death threats, bomb alerts and shooting warnings.

The two episodes described above are linked by their rhetoric, which aims to change public opinion, but they differ in power positions. The difference between harassment and protest seems to lie in context, power differentials, and who is saying what. (Sinders, 2018)

Social media platforms are vehicles for persuasion. When users start using them on their own terms, the persuasion doesn't end; instead, it's multiplied by all the users imposing their views. In reality, the virality of shaming benefits social media business models. (Trottier, 2019) For these businesses, it may not make sense to eradicate such profitable engagement. Powerless users of social media understood this and started using the same tools as hateful movements, such as trolling.

Users started appropriating techniques from internet trolls to fight the increasing misogyny, sexism and extremist politics online. There are important differences, though. Trolls shame fragile targets, usually marginalised communities. Cancel culture points at people with big audiences, to cut across established power and question it. Capitalist platforms are currently the mainstream, so they are the most massive stages for public opinion. (Partido Interdimensional Pirata, 2019) This carries a lot of weight: the influence of public opinion is so powerful that it has been used to change the verdicts of court cases.

When it started, the cancel movement was about the ability of users to assert agency and to build safer online communities for marginalised groups. Instead of creating online cults where nothing is questioned, cancel culture was, and in part still is, questioning centralised power. It started as a movement of compassion for the voiceless, an activist attitude. For the first time, if the online outrage against a company as big as Pepsi was loud enough, it would reach them. {reference to 2017 Pepsi campaign controversy}

The act of shaming has always existed, but it gained a lot of momentum with social media. Some authors believe it's a characteristic of the technologically empowered yet politically precarious digital citizen. (Anker, 2014) Ineffective politics pushes users to react, transforming shaming culture into meaningful political participation. (Ingraham and Reeves, 2016) Online political engagement uses the features of social media platforms to spread quickly and gain visibility, finding its way to the trending topics through hashtags and popular location tags. These are integrated features of the platforms' infrastructure. Design works as a moderator or promoter of user behaviours.

The design of social media accommodated users' participation in demanding accountability and change in ever-evolving societal norms through cancel culture. But is it possible to do this without following the same techniques as their opponents, where innocent people can become targets of a mob? Is it possible to fight for better online platforms when the design is doomed to promote viral actions?


References

Ananny, M. and Crawford, K. (2018) Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20 (3): 973–989.

Decentralization and its Discontents - Radical Networks (2019) Film.
Available at: https://www.youtube.com/watch?v=Km6EYsBYAlY (Accessed: 22 November 2019).

DNL# 13: HATE NEWS. Keynote with Andrea Noel and Renata Avila (2018) Film.
Available at: https://www.youtube.com/watch?v=l2z6jP0Ynwg&list=PLmm_HP_Sb_cTFwQrgkRvP8yqJqerkttpm&index=3 (Accessed: 12 November 2019).

Dubrofsky, R.E. and Magnet, S. (2015) Feminist surveillance studies. Durham: Duke University Press, 221–228.

Hollanek, T. (2019) Non-user-friendly. Staging resistance with interpassive user experience design. APRJA, 8, 184–193.

Holmes, K. and Maeda, J. (2018) Mismatch: how inclusion shapes design. Simplicity : design, technology, business, life. Cambridge, Massachusetts ; London, England: The MIT Press.

Ingraham, C. and Reeves, J. (2016) New media, new panics. Critical Studies in Media Communication, 33 (5): 455–467.

Kitchin, R. and Dodge, M. (2011) Code/space: software and everyday life. Software studies. Cambridge, Mass.: MIT Press, 3-21.

Pereira, L. (2019) Pervasive Labour Union - 13 - Fed Up!
Available at: http://ilu.servus.at/category/13-fed-up.html (Accessed: 5 December 2019).

Shaw, T. (2017) Invisible Manipulators of Your Mind., 20 April.
Available at: https://www.nybooks.com/articles/2017/04/20/kahneman-tversky-invisible-mind-manipulators/ (Accessed: 11 November 2019).

Trottier, D. (2019) Denunciation and doxing: towards a conceptual model of digital vigilantism. Global Crime, 1–17.

Williams, J. (2018) Stand out of our light: freedom and resistance in the attention economy. Cambridge, United Kingdom ; New York, NY: Cambridge University Press.