User:ThomasW/Notes SocityoftheQueryReader
From Media Design: Networked & Lens-Based wiki

Latest revision as of 13:36, 15 April 2015

Society of the Query Reader

Search engines are not neutral: Googlization is real, and it is a problem: ‘For the last decade we have systematically outsourced our sense of judgment to this one company, we’ve let this company decide for us what’s important and what’s true for a large number of questions in our lives.’9 page 13

Teachers refer to Google as an educational tool without having control over the information their students find and use. page 13


Kylie Jarrett - A Database of Intention?

If Google really is a database of our intentions for action or engagement in life, then access to that information breaches long-held assumptions about the private possession of our own thoughts and desires. But it is not only privacy that is at stake in the possession of intention by commercial search sites such as Google. page 18

There is a disconnect between our actions, their effects, and our desires. What animates behavior is not necessarily co-extensive with its manifestation, shaped as the latter is by intervening social, cultural, and technological factors. The underlying drives, the rich tapestry of cognitions, experiences, embodied desires that shape any person’s goals, cannot be read directly off externalized activity. Google, therefore, can read and capture the extensive output of my search terms, but this is merely the mapping of my behavior rather than real insight into its motivational logic. My purpose may have been to search for ‘abortion’, but what animated that search was not contained in the term itself. Google was committing the intentional fallacy in assuming it could read my intentional logic from my output or in ascribing motivation to the range of behavioral traces I leave across my search activity, Gmail, Google+ profile, or YouTube viewing. page 21

When I enter ‘i’ it is assumed to mean ‘Irish Times’ and not the ‘Irish Independent’, ‘izakaya’, ‘iPhone’, or ‘igloo’, based on the intentions Google has associated with my profile and with similar users in my geographic and demographic areas. In these mechanisms, the intentions ascribed to me are fed back to me, working to inform my ongoing search articulations. A feedback loop emerges in which presumptions about activity, based on Google’s assumptions about users’ intentions, go on to inform a user’s experience of, although not necessarily engagement with, the web. page 23

Dialectic of Google, Andrea Miconi

Google has been achieving dominance over its competitors through a two-step process. At a first level, quality arguably played a part: Google is considered to provide more complete and spam-free results than its competitors, and people prefer it over other search engines because of its accuracy and overall quality.21 What is more, the fact that Google is set as a default home page in some popular browsers has probably contributed to its universal adoption, or at least to the consolidation of its leadership. For these reasons, it is actually difficult to distinguish between technical and social factors when it comes to analyzing their consequences on daily life practices. A social pattern, wrote Pierre Bourdieu, is basically a form of ‘habitus’ page 34-35


as Rachael – Blade Runner’s most controversial replicant – famously said: I am not in the business; I am the business. page 37

Frictionless Sharing: The Rise of Automatic Criticism - Vito Campanelli

Vilém Flusser. The Bohemian thinker (whose horizon of observation is the diffusion of the first personal computers in the 80s) observes with great concern the consolidation of a tendency to escape the responsibility of critical consciousness and instead delegate every decision-making process to machines. This passage threatens to deprive humans of the critical role of essences who make decisions. page 44

For Flusser, to avoid being programmed by the apparatuses, it is necessary to devote oneself to their reconfiguration and programming. In other words it is necessary to say, ‘I want to have my program so that I won’t be subject to anyone else’s’. page 45

Is Small Really Beautiful? - Astrid Mager

It is thus not surprising that Google is a flourishing company, and its algorithm incorporates and strengthens the capitalist ideology. Rather than blaming Google for doing evil, however, I suggest thinking of Google as being shaped by society. Google shows us the face of capitalism because it was born and raised in a capitalist society. ‘Technology is society made durable’, as Bruno Latour put it.2 Accordingly, Google is not the only actor to blame. Quite on the contrary, actors such as policy makers, jurists, journalists, search engine optimizers, website providers, and, last but not least, users are part of the game too. If users would turn away from Google, the whole business model, including its sophisticated algorithm and database of personal data, would fall apart. But where can people turn to? Are there true alternatives to Google and their algorithmic ideology? page 60

Their default settings protect privacy rather than collecting and offering personal data to third parties (which big search engines usually do). They incorporate privacy in their technical Gestalt and may hence be interpreted as following the principle of ‘Privacy by Design’. Privacy by Design builds on the idea of integrating privacy-relevant features into the design process of IT technologies to enable ‘value-sensitive innovation’.13 But can privacy be seen as their ideological framework? page 63


The Dark Side of Google: Pre-Afterword - Social Media Times - Ippolita

The cybernetic IT systems continuously reshape their very foundations by being transformed into ideology, which is actually an output of these technological beginnings. page 75

It is Jeff Jarvis who puts his finger on the problem: the public. Similar to how ‘opening code’ does not equate to ‘making it free’, ‘publishing content’ does not equate to ‘making it public’. On the contrary, with Facebook (although G+ or other social platforms work in the same way) it becomes clear that things actually work in the opposite way. Everything that is posted becomes the non-exclusive property of the company and can be resold to third parties, as can be (re)read in the TOS (Terms of Service). In the clouds of social networks, then, publishing does not mean public. For almost all web 2.0 applications, publishing means ‘private’ – a corporation or a private company owns the content. Every time we access our online profiles (our digital alter egos), we work for these corporations for free. By serving us with increasingly invasive and targeted advertisements, the sites’ algorithms try to make money on our backs – on our digital bodies. page 80

Politics are already a technocracy, and proposals for technological democracy, web 2.0, or what have you, are increasingly enmeshed within this technocracy. Who creates and manages these tools of democracy? Are they the nerd supremacists on the payroll of anarcho-capitalists? How can repentant soldiers, technocratic geeks, or whistleblowers constantly on the run inspire the struggle for freedom? page 81


We like to imagine escape routes, then try to sell them; we imagine and build appropriate tools to achieve our desires. We should make them available to an audience that is made up of people, instead of publishing them through the depraved megaphone of an intrusive corporation’s wall. It is as McLuhan already claimed: ‘the medium is the message’. page 82


A ‘History’ of Search Engines: Mapping Technologies of Memory, Learning, and Discovery - Richard Graham <<<

Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it. page 109

As psychologist Stephen Kosslyn puts it: ‘Once I look up something on the internet, I don’t need to retain all the details for future use – I know where to find that information again and can quickly and easily do so. More generally, the internet functions as if it were my memory.’ page 109

Nicholas Carr in his 2008 article for The Atlantic, ‘Is Google Making Us Stupid?’, uses a metaphor that can usefully stand in for the frequently voiced opinion that the internet has fundamentally changed the way we think: What the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski. page 110

The media archaeologist Jussi Parikka, paraphrasing the German media theorist Friedrich Kittler, describes the situation by arguing that: Media determine our situation and are already inside our heads, inside our capacities of understanding and writing, our theoretical concepts, memories and such, yet these perspectives of a media-archaeological kind elaborate the wider intermedial fields in which the human body is trained as part of the modernization process. page 114

The Mundaneum, which was built and functioned in an unfinished state for a time (hence the ambiguity between actual and imaginary), was planned to house all the world’s knowledge. Otlet described it in 1914: These collections are conceived as parts of one universal body of documentation, as an encyclopaedic survey of human knowledge, as an enormous intellectual warehouse of books, documents, catalogues and scientific objects. Established according to standardized methods, they are formed by assembling cooperatively everything that the participating associations may gather or classify. page 118

Unlike other ideas for drawing together information in a dynamic and futuristic manner, for instance H. G. Wells’ idea for a World Brain or Vannevar Bush’s Memex, Otlet’s negotiation between the imaginary end goal of his project and the early stages of organization was key to his project. The Mundaneum had to be built in order to matter at all. Otlet’s designs for a universal collection of knowledge went a step beyond familiarity with an existing system or a new structure that could be imposed on existing materials.


Search Control in China - Min Jiang and Vicențiu Dîngă

Broadly, Google defines a person in three aspects: your geographical location, search history, and your relationships or social networks. page 143

‘I Am not a Web Search Result! I Am a Free Word’: The Categorization and Commodification of ‘Switzerland’ by Google - Anna Jobin and Olivier Glassey

In The Order of Things, Michel Foucault defines the concept of episteme as ‘the strategic apparatus which permits of separating out from among all the statements which are possible those that will be acceptable within, I won’t say a scientific theory, but a field of scientificity, and which it is possible to say are true or false’. page 161

Historicizing Google Search: A Discussion of the Challenges Related to Archiving Search Results - Jacob Ørmen

Who would not want to know which results you would have gotten if you had entered the keyword ‘terrorism’ into Google’s search bar just before September 11? And what if you could compare it to a search conducted on exactly 11 September 2001 or two weeks after? Or what about tracing the search rankings of websites associated with the query ‘USA’ through the last ten years? page 189

Just think of all oral communication that is not being recorded. It is, as the android Roy Batty so poetically utters in Blade Runner, as if ‘all those moments will be lost in time, like tears in rain’. This also entails that we of course cannot archive everything (not even all the material on the web), and therefore we must choose carefully what type of information we want to archive and how we want to store it. I believe that search results can serve as important primary sources in the future, and we therefore should worry about which search results merit archiving and how to archive them. page 190

Never forget that Google collects data for a commercial purpose. It is not a public archive. Besides this, the Google search engine is getting more and more ‘polluted’, coming up with useless and predictable search outcomes. – Geert Lovink