User:Peach/peach-thesis-drafts


"2.3 Consequences of ad revenue based business models on social networks"

The main reason I started with this chapter is that it is the one I have the most information about. The responsibility of social media platforms for the spread of misinformation and hate speech has been a subject of discussion since the start of the 2016 election campaign. In 2020, 86% of Twitter's revenue came from advertising services, and YouTube made 15 billion dollars from ads in 2019. It is clear that the customers of these companies are not their viewers or users, but their advertisers. The platforms are incentivized to provide their users with content that will keep them engaged, but that content does not necessarily need to be of good quality, useful, or healthy. In recent years, Facebook, Twitter and YouTube have started to make more active efforts to moderate content.

YouTube

The way YouTube and similar platforms recommend content to viewers is concisely explained by YouTuber Dan Olson in his video "In Search of a Flat Earth": "[...] content algorithms trying to maximize retention and engagement by serving users suggestions for things that are, effectively, incrementally more concentrated versions of the thing they were looking at." This mechanism led people who are more likely to believe conspiracy theories and fake news to be introduced to even more conspiracy theories and fake news. Arguably, the same system that exposed people susceptible to conspiratorial thinking to more and more extreme conspiracy theories also exposed people who might enjoy, or even just tolerate, "casually bigoted" content such as jokes that rely on racist stereotypes, to white supremacists. Is there a parallel between this "concentration" of ideologies and the propaganda tactic Douglas Rushkoff describes as "distraction and over-simplification"?

One effort to control the spread of conspiracy theories and hateful content is the update YouTube made to its recommendation algorithm in early 2019. YouTube announced that it was working on tweaking its recommendation algorithms to identify fake news and conspiracy theories and no longer recommend them to viewers, while keeping them accessible through search. While fewer people being exposed to fake news is a good outcome, the way YouTube handled this issue, and the kind of content it seems to prioritize, suggest that its intention is to make the site more advertiser-friendly and therefore appealing to investors, rather than to genuinely respond to widespread criticism.

Many independent content creators on YouTube feel that the platform favors content by major media corporations. Corporate content is recommended to more users, while independently made content is recommended to fewer. This affects creators' ad revenue and, in turn, their ability to make more content.
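To make the "incrementally more concentrated" dynamic in Olson's quote more concrete, here is a deliberately simplified toy sketch of that feedback loop. It is not YouTube's actual recommender; the extremity scores, step size, and the point at which the viewer disengages are all invented for illustration.

import random

# Toy model of the "concentration" feedback loop Olson describes.
# Each video is reduced to a single made-up "extremity" score in [0, 1];
# the recommender offers something slightly more extreme than the last
# watched video, and the simulated viewer keeps clicking until the content
# becomes off-putting. None of these numbers come from YouTube; they are
# placeholders chosen only to make the drift visible.

STEP = 0.05          # assumed: how much "more concentrated" each suggestion is
TOLERANCE = 0.9      # assumed: extremity at which this viewer stops clicking

def recommend(last_watched: float) -> float:
    """Suggest a video slightly more extreme than the previous one."""
    return min(last_watched + random.uniform(0, STEP), 1.0)

def simulate_session(start: float, max_videos: int = 50) -> list[float]:
    """Follow recommendations until the viewer disengages."""
    history = [start]
    while len(history) < max_videos:
        nxt = recommend(history[-1])
        if nxt > TOLERANCE:      # the viewer finally finds the content off-putting
            break
        history.append(nxt)
    return history

if __name__ == "__main__":
    session = simulate_session(start=0.2)   # a mildly "casual" starting video
    print(f"watched {len(session)} videos, "
          f"drifted from {session[0]:.2f} to {session[-1]:.2f} extremity")

Even starting from fairly tame content, the simulated session only drifts in one direction; that one-way drift is the "concentration" the quote points to.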