User:Alessia/special issue xxiii



Humanisation of technology + biases


Let’s try to make the inhuman more human

Humanisation within the context of technology is a really interesting concept tied to tech humanism, a philosophical and ethical approach to technology’s role in our human-centred society.
“Humanising” is a word that directly strikes at that insecurity: the fear of losing our humanity in the face of rapid technological advancement. So we must do something, something extremely powerful, to ensure that humanity will be saved, and the best thing is to try to change our own perception of technology.
There's a desire to make technology more relatable and understandable, as if giving it a human touch would help us maintain control over it.

Women are objects

Do you want to argue with that?
Why do most virtual assistants have female voices and names?

There's a biological aspect to it: humans seem to prefer the soothing, calming tones of female voices. We're familiar with them from the womb, our mother's voice often being the first we perceive: that’s the first human connection.

Female voices are often considered easier to hear and distinguish due to their higher pitch. That’s a myth, a myth that still influences our perception of the matter.
Historically, female voices have been prevalent in technological applications: from Emma Nutt, the famous telephone operator whose voice became the standard in the late 1800s, to "Sexy Sally", a voice recorded on tape by singer Joan Elms and used in aeroplane cockpits during World War II, as it was believed to better grab young men’s attention.
In the 1980s, Nissan introduced a voice warning system for its cars. The system used a female voice to alert drivers about issues such as the lights being left on or the left door being open. Nissan named it the "Talking Lady."
Over time, text-to-speech systems have mainly been trained on female voices because of the extensive data available. Why even bother collecting male voice recordings if you have so many ready-made female voice recordings? Siri, Alexa, Cortana, Sophia: I feel there might be gender bias in the house of virtual assistants.
Let’s take Alexa: the developers had to activate a “disengage mode” after the AI, which was constantly sexually harassed, kept responding to the offences in the flirtiest way possible. Campaigns had to spring up demanding that the assistants be reprogrammed so they could push back against sexual harassment. So now Alexa will not respond flirtatiously after being insulted; she will just say “I don’t know how to answer that”. Surely my favourite is Cortana, who, after being sexually harassed, would go straight to searching for the “Pussy song” video.
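To make that “disengage mode” concrete, here is a minimal, purely hypothetical sketch in Python (not Amazon’s actual code; the blocklist terms are placeholders) of what such a rule-based fix boils down to: a blocklist and one canned deflection.

 # Hypothetical sketch of a rule-based "disengage mode". This is NOT
 # Amazon's implementation, only an illustration of how shallow such a
 # fix can be: offensive input is matched against a blocklist and
 # answered with a canned deflection instead of a flirtatious reply.
 OFFENSIVE_TERMS = {"offensive_word_1", "offensive_word_2"}  # placeholder blocklist
 DISENGAGE_REPLY = "I don't know how to answer that."

 def respond(user_input: str) -> str:
     """Deflect if the input looks abusive, otherwise answer normally."""
     if set(user_input.lower().split()) & OFFENSIVE_TERMS:
         return DISENGAGE_REPLY
     return f"(normal assistant behaviour for {user_input!r})"

 print(respond("you offensive_word_1"))  # prints the canned deflection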
This kind of canned deflection brings to the surface the problem of programmed passivity: when we give assistants human voices, people (I don’t wanna say men, I don’t wanna say men) men, not just one man, get the feeling they have a right to abuse AI; then it will be robots, then human-like robots, and then it will be someone’s daughter, mother, sister (that already happens, the circle closes!). I can already imagine the moment when a woman is beaten and the abuser just says “oh, I thought it was a robot”. It feels like a future that is fast approaching, and too much to bear.

Measures of perceived humanness

Femininity is injected into our very perception of caring, of hosting, of serving.
Femininity as warmth, as emotion, nice caresses, cute politeness: what great benevolent sexism.
So women are considered more human than men.
That’s nothing new.

When feminine traits are used to make objects seem more human, maybe we should ask ourselves whether, by portraying women in technology as objects or tools designed solely to fulfil others’ needs, we are reinforcing the harmful notion that women are mere objects rather than individuals with their own agency. This could contribute to the further objectification and dehumanisation of women in real life. At the same time, making AI objects seem more human makes them more acceptable to the public.

Is chopchop a woman??

It was very interesting to talk all together about this cultural tendency to directly associate assistive technology with femininity. So if serving is a female characteristic, then I guess chopchop could be female? And if we are getting to the humanisation of chopchop, then we MUST find a way to give it a gender. OR NOT?
I guess not, and the discussion in class seemed to settle on the idea that a server should be nonbinary.

Mind you, as long as it is a virtual assistant this can even work; but if it is a human being, a woman let’s say, you can expect experience and speed from her, but not expertise: it will never be enough. And this remains so deeply embedded in our brains that it is still hard to eradicate the germ of this toxic influence, a notion that unconsciously keeps permeating our lives.
This conversation raises several questions, including linguistic ones that animate a fervent global discussion about new forms of grammatical gender neutrality and other inclusive linguistic expressions.
In Italian, a server is masculine, the artificial assistant feminine, a rock masculine, the moon feminine; "meme" is quite problematic, because I use it as feminine and people bully me since, they say, it should be masculine. All this to ask: are these linguistic issues grounded in gender prejudices as well? How deep should we dig to get to the roots here? It feels like a long story, but it should bother everyone: even if English is the leading language, at least here and right now, everyone probably carries a different main language in their own head, and it is interesting to learn more about the linguistic differences we may encounter while analysing issues that may also be dramatically rooted in cultural differences.

How to build an ethical AI that doesn’t reinforce stereotypes?

AI and voice assistants, like other technologies, are increasingly embedded in our daily lives. However, there are currently no established guidelines on how to humanise AI responsibly, despite its growing importance in shaping our everyday interactions with technology.
It’s not just about women; it’s about any community. Take the Midjourney case: if you ask it to generate a terrorist, it will create a Middle Eastern terrorist, who knows why. AIs are still pushing gender and racial disparities to the extreme.
This is exactly what is happening with most of the companies developing voice assistants: they still rely on female voices and/or female names, which may reinforce the general gender cliché that women are here to serve others. If a human can be biased, an AI can be even worse.
There are some experiments going on, after some nice media pressure. Google experimented with the default voice of its Google Assistant, which included both male and female voices. Female voices still performed better, since the algorithm had been trained on more of their data, and Google decided not to invest much in the creation of a male voice assistant: it seemed much harder to gather data for it, and users still prefer the female one so much. In 2017 the damn tech giant worked on WaveNet, an algorithm that helped develop more natural female and male voices for its assistants. Let’s at least acknowledge that Google Assistant now comes programmed with 11 different voices, even with different accents, which will surely help make the product as inclusive as possible and pave the way for the future of more inclusive virtual assistants.
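As a concrete illustration, here is a minimal sketch, assuming the google-cloud-texttospeech Python client and valid API credentials, of how a developer can request one of those voices explicitly instead of accepting a default ("en-US-Wavenet-D" is one of Google’s published WaveNet voice names):

 # Minimal sketch: asking Google's Cloud Text-to-Speech API for an
 # explicit WaveNet voice instead of a default one. Assumes the
 # google-cloud-texttospeech package and valid credentials.
 from google.cloud import texttospeech

 client = texttospeech.TextToSpeechClient()
 synthesis_input = texttospeech.SynthesisInput(text="How can I help you?")

 # Pick a voice explicitly by name rather than by gender; the API's
 # VoiceSelectionParams also accepts an ssml_gender field, which
 # includes a NEUTRAL value.
 voice = texttospeech.VoiceSelectionParams(
     language_code="en-US",
     name="en-US-Wavenet-D",
 )
 audio_config = texttospeech.AudioConfig(
     audio_encoding=texttospeech.AudioEncoding.MP3
 )

 response = client.synthesize_speech(
     input=synthesis_input, voice=voice, audio_config=audio_config
 )
 with open("assistant_voice.mp3", "wb") as out:
     out.write(response.audio_content)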
Still, I feel it is not really a product problem we are talking about: a thoroughly naturalised human-society issue has simply migrated into a new technological one. A problem that feels ancient and new at the same time.
We should talk more about how gender is portrayed through AI, and build new industry standards.
Some research is already under way in which the sound of "female", "male", "neutral", and "nonbinary" human voices is being analysed; research that is not itself free of new biases and prejudices.
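To give a hint of what such analyses usually start from, here is a small sketch, assuming the librosa library and a local recording voice.wav, that estimates a voice’s fundamental frequency, the acoustic feature most commonly used to sort voices into those categories. The Hz bands in the comments are textbook averages, not real boundaries.

 # Sketch: estimating the fundamental frequency (pitch) of a recording
 # with librosa's pYIN tracker. Assumes a local file "voice.wav".
 import numpy as np
 import librosa

 y, sr = librosa.load("voice.wav")

 # pyin returns a per-frame f0 estimate (NaN for unvoiced frames).
 f0, voiced_flag, voiced_prob = librosa.pyin(
     y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
 )
 median_f0 = np.nanmedian(f0)
 print(f"median f0: {median_f0:.1f} Hz")

 # Textbook averages: adult "male" voices cluster around 85-155 Hz,
 # "female" voices around 165-255 Hz, with a wide ambiguous zone in
 # between; exactly the zone that "neutral" voice designs explore.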
It is crucial to examine who shapes the algorithms and the guidance of artificial intelligence, as these technologies are often a product of the biases and opinions of their creators. It is important to address misrepresentation as well, since the field of artificial intelligence lacks diversity, especially in terms of the involvement of different communities.