How To Be More Or Less Human
| How To Be More Or Less Human | |
| --- | --- |
| Creator | Max Dovey |
| Year | 2015 |
| Bio | Max Dovey [UK] is 28.3% man, 14.1% artist and 8.4% successful. His performances confront how computers, software and data affect the human condition. Specifically, he is interested in how the meritocracy of neo-liberal ideology is embedded in technology and digital culture. His research is in liveness and real-time computation in performance and theatre. |
| Website | http://howtobemoreorless.com |
How To Be More Or Less Human investigates how human activity is classified by image recognition software. Computer vision and the gaze of the webcam become the basis for a performance that explores how online databases form an identity of the human subject.
The installation will be open daily and the performance will take place at 19:30 on Friday 3rd July 2015 and 19:30 on Thursday 10th July 2015.
Abstract (longer)
How To Be More Or Less Human is a performance investigating how humans are identified by computer vision software. By looking specifically at how the human subject is identified and classified by image recognition software, a representation of the human body is formed. The living presence of a human being cannot be sensed by computer vision, so the human subject becomes a quantifiable data object with a set of attributes and characteristics. Seeing ourselves in this digital mirror allows us to reflect on other models of perception and develop an understanding of how the human subject is ‘seen’ by the machinic ‘other’. Looking at ourselves through the automated perception of image recognition can highlight how gender, race and ethnicity have been processed into a mathematical model. The algorithm is trained to ‘see’ certain things, forcing the human subject to identify themselves within the frame of computer vision.
Images
Video
Printed Catalogue
Colour Pages
B&W Pages
Hello, I currently only have a Hacker account but am looking to upgrade to one of your other services, and I had a few more questions regarding the auto-tagging feature of Imagga.
I have been mainly using it to auto-tag pictures of humans, and although I am quite satisfied by the wide range of results, I wanted to have a better understanding of the human terms available within the Imagga dictionary. I've seen 'happiness, happy, smile, love, sexy and passion', but I was wondering if you could inform me of the list of human-associated terms that are based around emotions.
I am looking to use Imagga for a project I am doing but would like to have a better understanding of the vocabulary available to describe human emotion.
Look forward to hearing from you
Max Dovey
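A minimal sketch of the kind of auto-tagging request described above, assuming Imagga's v2 /tags REST endpoint with HTTP Basic authentication; the credentials, image URL and emotion-term filter are placeholders, and the endpoint and response shape may differ from the API version in use at the time.

```python
# Sketch of an auto-tagging call against Imagga, assuming the v2 /tags
# endpoint and Basic auth; credentials and image URL are placeholders.
import requests

API_KEY = "your_api_key"        # placeholder credential
API_SECRET = "your_api_secret"  # placeholder credential

def tag_image(image_url):
    """Return (tag, confidence) pairs for a single image URL."""
    response = requests.get(
        "https://api.imagga.com/v2/tags",
        params={"image_url": image_url},
        auth=(API_KEY, API_SECRET),
    )
    response.raise_for_status()
    return [(t["tag"]["en"], t["confidence"])
            for t in response.json()["result"]["tags"]]

# Keep only the emotion-like terms Max mentions above.
EMOTION_TERMS = {"happiness", "happy", "smile", "love", "sexy", "passion"}
for tag, confidence in tag_image("https://example.com/portrait.jpg"):
    if tag in EMOTION_TERMS:
        print(f"{tag}: {confidence:.1f}")
```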
Hey Max,
I'm really sorry for the inconvenience caused by not answering your mail sooner!
We do not have a specific vocabulary for human expressions and emotions; you have already listed most of the available terms in your mail. However, we can offer custom training based on user-provided data. For example, you can collect images with different human expressions and emotions and organise them into the categories/tags you need. Then we can train a custom API algorithm based on your data. Usually we have a pricing policy for custom training, but your case sounds interesting and we can think about some collaboration.
We are happy that you find our new Developer plan useful! If you have any other questions, please let me know.
Best
Pavel from Imagga
Hi Pavel,
I've signed up for the Developer plan and am really enjoying your auto-tagging software. I was wondering if someone could tell me a little bit more about how the software is trained to recognise certain things.
Any information on the software training process, or on what image library you are using, would help me out a lot.
Many thanks
Max Dovey
Hey Max,
Sorry for the late response! I'm glad to see that you are satisfied with our service!
On your first question, you can look at our technology page for more info: https://imagga.com/technology/auto-tagging.html. If you have any more specific questions on this, please let me know.
On the second one, we need 1000+ sample images per category/tag, and then we can run a training process based on them. Do you have a specific use case that needs custom training?
Regards
Pavel from Imagga
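A small sketch for preparing the kind of custom-training set Pavel describes: one folder per category/tag, each holding 1000+ sample images. The folder layout and file extensions are illustrative assumptions; only the 1000-image minimum comes from the mail above.

```python
# Count images per category folder and flag any folder that falls short
# of the 1000-sample minimum mentioned in the correspondence.
from pathlib import Path

MIN_SAMPLES = 1000
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def check_training_set(root):
    """Report the image count for each category folder under root."""
    for category_dir in sorted(Path(root).iterdir()):
        if not category_dir.is_dir():
            continue
        count = sum(1 for f in category_dir.iterdir()
                    if f.suffix.lower() in IMAGE_EXTENSIONS)
        status = "ok" if count >= MIN_SAMPLES else f"needs {MIN_SAMPLES - count} more"
        print(f"{category_dir.name}: {count} images ({status})")

# e.g. training_data/happy/, training_data/smile/ (hypothetical folder names)
check_training_set("training_data")
```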
Hey Pavel,
I would like the auto-tagging to be 100% confident with gender. For custom training, would I have to submit 1000+ pictures of men and women to achieve 100% confidence?
Thanks
Max Dovey
Hey Max,
Tell him to send them and we'll run a test. Even if it's not 100%, when we're not sure the returned confidence will be lower, and he can decide whether to show the result or send it to a moderator.
Hi Max,
Sorry for the late response!
We can do the test and see what the confidence will be. You can send us the sample images grouped by gender. If the results and the confidence rate are satisfactory, you'll be charged $1199, which is our standard rate for custom training. If the results don't meet your expectations, you won't be charged anything. Let me know if you want to proceed with this.
Regards
Pavel from Imagga
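A sketch of the confidence handling hinted at above: when the classifier is unsure, the returned confidence is lower, and the caller decides whether to show the tag or hand the image to a human moderator. The function name and the 80.0 threshold are illustrative assumptions, not values quoted by Imagga.

```python
# Route an automatic gender tag based on its confidence score: trust it
# above an assumed threshold, otherwise defer to a human moderator.
def route_gender_tag(tag, confidence, threshold=80.0):
    """Return a decision string for one (tag, confidence) result."""
    if confidence >= threshold:
        return f"show: {tag} ({confidence:.1f}% confidence)"
    return f"moderate: {tag} at only {confidence:.1f}% confidence"

print(route_gender_tag("woman", 96.4))
print(route_gender_tag("man", 57.2))
```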