User:Manetta/media-objects/i-will-tell-you-everything/code

<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">
	<link rel="stylesheet" type="text/css" href="./css/stylesheet.css">
	<script type="text/javascript" src="./js/jquery-2.1.3.js"></script>
	<title>i will tell you everything</title>
</head>
<body>
<div id="colophon"><div>catalog of *Encyclopedia of Media Objects* @V2, Rotterdam (June 26/27th) </div></div>
<div id="colophon-button" style="text-align:right">colophon</div>
<div id="colophon-overlay" class="non-active">
	<div id="colophon-content">
		<div id="colophon-left">
			<h3>*Encyclopedia of Media Objects*</h3>
			<br /><br />
			
			<div class="media-object-button">mediaobjects</div>
			<p>by:</p>

			<p>
				Lucas Battich [IT]<br />
				Manetta Berends [NL]<br />
				Julie Boschat Thorez [FR]<br />
				Cihad Caner [TR]<br />
				Joana Chicau [PT]<br />
				Cristina Cochior [RO]<br />
				Solange Frankort [NL]<br />
				Arantxa Gonlag [NL]<br />
				Anne Lamb [US]<br />
				Benjamin Li [NL]<br />
				Yuzhen Tang [CN]<br />
				Ruben van de Ven [NL]<br />
				Thomas Walskaar [NO]
			</p>
		</div>
		<div id="colophon-right">
			<h3>training set = "contemporary encyclopedia"</h3>

			<p>In the process of making an encyclopedia, categories are decided on, within which various objects are placed. It is a search for a universal system to describe the world.</p>

			<p>Training sets are used to train data-mining algorithms for pattern recognition. These training sets are the contemporary version of the traditional encyclopedia. Since the famous <em>Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers</em>, initiated by Diderot and D'Alembert in 1751, the encyclopedia has been a body of information for sharing knowledge from human to human. But contemporary encyclopedias are constructed rather to share the structures and information of humans with machines. As the researchers of the SUN dataset phrase it in their paper: <br /><br />

				<em>we need datasets that encompass the richness and varieties of environmental scenes and knowledge about how scene categories are organized and distinguished from each other.</em></p>

			<p>In order to automate processes of recognition, researchers are once again prompted to reconsider their categorization structures and to question the classification of objects into these categories. The training sets give a glimpse of the process of constructing such a simplified model of reality, and reveal the difficulties that appear along the way. </p>

			<br />
			</div>
		<div id="colophon-rightright">
			<h3>steps of constructing such an encyclopedia:</h3>

			<p><strong>1.</strong> The material of the SUN group training set is collected by running queries in search engines. Result: the training set is constructed from typical low-quality digital images of the kind that commonly appear on the web. </p>
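
			<!-- a minimal sketch (not part of the project's code) of how step 1 could
			     look: collecting candidate images per category by querying a search
			     engine. the endpoint and response shape are hypothetical placeholders.

			     var trainingSet = [];
			     var categories = ['abbey', 'airport', 'zoo'];
			     categories.forEach(function(term) {
			         // whatever low-quality web images match the query end up in the set
			         $.getJSON('https://example.org/image-search?q=' + encodeURIComponent(term), function(results) {
			             results.forEach(function(img) {
			                 trainingSet.push({ category: term, url: img.url });
			             });
			         });
			     });
			-->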
			
			<p><strong>2.</strong> The SUN group decides on category merges, drops, and additions by judging each category's visual and semantic strength. </p>
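
			<!-- a sketch (an assumption, not the SUN group's actual procedure in code)
			     of step 2: editorial decisions expressed as a lookup table that merges
			     weak categories into stronger ones or drops them. the entries are
			     invented, though 'corner of office' echoes the paper's own example of
			     a scene without a distinct identity.

			     var decisions = {
			         'ice floe': 'iceberg',        // merge: visually too close
			         'corner of office': null      // drop: no unique identity in discourse
			     };
			     function applyDecision(label) {
			         if (decisions.hasOwnProperty(label)) { return decisions[label]; }
			         return label; // category survives unchanged
			     }
			-->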

			<p><strong>3.</strong> The SUN group asks 'mechanical turks' to annotate the images by vectorizing the objects that appear in a scene. Small objects disappear; frequently annotated objects become the common objects that will be recognized. </p>

			<p>Also, the most often annotated scene is 'living room', followed by 'bedroom' and 'kitchen'. The probability that 'living room' will be the outcome of the recognition algorithm is therefore much higher than for scenes that are annotated less often. These frequencies hence form a type of category in themselves. Although not directly decided upon by the research group, this hierarchy originates from the selection of scenes that the annotators worked on.</p>
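
			<!-- a sketch of step 3, assuming annotations are stored as labelled polygons
			     (names, shapes, and counts here are illustrative, not actual SUN data).
			     counting annotations per scene shows how 'living room' comes to dominate.

			     var annotations = [
			         { scene: 'living room', object: 'sofa', polygon: [[12,40],[210,40],[210,160],[12,160]] },
			         { scene: 'living room', object: 'lamp', polygon: [[230,20],[250,20],[250,90],[230,90]] },
			         { scene: 'bedroom',     object: 'bed',  polygon: [[5,80],[300,80],[300,220],[5,220]] }
			     ];
			     var sceneCounts = {};
			     annotations.forEach(function(a) {
			         sceneCounts[a.scene] = (sceneCounts[a.scene] || 0) + 1;
			     });
			     // sceneCounts now works as an implicit prior: frequent scenes win more often
			-->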

			<p><strong>4.</strong> Start data mining: an unknown set of images is given to the algorithm. The algorithm returns its output as the percentage to which an image seems to be a certain scene.</p>
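
			<!-- a sketch of the output format of step 4, assuming the algorithm returns
			     one score per scene category, normalised so the scores read as
			     percentages; the numbers are invented for illustration.

			     function toPercentages(scores) {
			         var total = 0, label;
			         for (label in scores) { total += scores[label]; }
			         var percentages = {};
			         for (label in scores) {
			             percentages[label] = Math.round(100 * scores[label] / total);
			         }
			         return percentages;
			     }
			     // toPercentages({ 'living room': 0.61, 'bedroom': 0.27, 'kitchen': 0.12 })
			     // returns { 'living room': 61, 'bedroom': 27, 'kitchen': 12 }
			-->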

			<p><strong>5.</strong> The results of the algorithm are transformed into a level of accuracy. If needed, the training set is adjusted in order to reach better and thus more accurate results. Categories are merged, dropped or invented.</p>
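
			<!-- a sketch of step 5, assuming accuracy is simply the share of images whose
			     top-scoring scene matches the human label; when the level is judged too
			     low, the training set is revised and the cycle repeats.

			     function accuracy(predictions, groundTruth) {
			         var correct = 0;
			         predictions.forEach(function(label, i) {
			             if (label === groundTruth[i]) { correct += 1; }
			         });
			         return correct / predictions.length;
			     }
			     // accuracy(['living room', 'kitchen'], ['living room', 'bedroom']) returns 0.5
			-->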

			<br />
			<p><em>i will tell you everything</em> is a project by Manetta Berends <br />and is based on the structures of the <a href="http://groups.csail.mit.edu/vision/SUN/" target="_blank">SUN dataset</a></p>
		</div>
	</div>
</div>

<div id="sidebar-wrapper" class="scene-object">
	<div class="title title-perception non-active"><h6>human &amp; algorithmic perception</h6></div>
	<div class="title title-scenes-objects active"><h6>scenes <br />&amp; objects</h6></div>
	<div class="title title-origins non-active"><h6>origins of existence</h6></div>
	<div class="title title-humans non-active"><h6>position towards humans</h6></div>
	<div class="title title-beings non-active"><h6>position towards other natural beings</h6></div>
	<div class="title title-limitations non-active"><h6>supervised limitations</h6></div>


	<div class="quote quote-scenes-objects active">
		"By “<b>scene</b>” we mean a place in which a human can act within, or a place to which a human being could navigate. How many kinds of <b>scenes</b> are there? How can the knowledge about <b>environmental scenes</b> be organized? How do the current <b>state-of-art scene models</b> perform on more realistic and ill-controlled environments, and how does this compare to human performance?" <br /><br /><small>from <a href="http://cvcl.mit.edu/Papers/SUN_CVPR2010.pdf" target="blank">SUN Database: Large-scale Scene Recognition from Abbey to Zoo". J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. IEEE Conference on Computer Vision and Pattern Recognition, 2010.</a>.</small>
	</div>
	<div class="quote quote-perception non-active">
		"It is also likely the case that human and computer failures are qualitatively different – human misclassifications are between semantically similar categories (e.g. “food court” to “fast-food restaurant”)... "
		<br /><br />
		"However, humans have dramatically fewer confusions"
		<br /><br />
		"... while computational confusions are more likely to include semantically unrelated scenes due to spurious visual matches (e.g. “skatepark” to “van interior”)." <br /><br /><small>from <a href="http://cvcl.mit.edu/Papers/SUN_CVPR2010.pdf" target="blank">SUN Database: Large-scale Scene Recognition from Abbey to Zoo". J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. IEEE Conference on Computer Vision and Pattern Recognition, 2010.</a>.</small>
	</div>
	<div class="quote quote-origins non-active">
		"Computational performance is best for outdoor natural scenes (43.2%), and then indoor scenes (37.5%), and worst in outdoor man-made scenes (35.8%)."<br /><br /><small> from <a href="http://cvcl.mit.edu/Papers/SUN_CVPR2010.pdf" target="blank">SUN Database: Large-scale Scene Recognition from Abbey to Zoo". J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. IEEE Conference on Computer Vision and Pattern Recognition, 2010.</a>.</small>
	</div>
	<div class="quote quote-humans non-active">
		"Rather than collect all scenes that humans experience – many of which are accidental views such as the corner of an office or edge of a door – we identify all the scenes and places that are important enough to have unique identities in discourse, and build the most complete dataset of scene image cate-gories to date."<br /><br /><small> from <a href="http://cvcl.mit.edu/Papers/SUN_CVPR2010.pdf" target="blank">SUN Database: Large-scale Scene Recognition from Abbey to Zoo". J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. IEEE Conference on Computer Vision and Pattern Recognition, 2010.</a>.</small>
	</div>
	<br />
	<div class="comment comment-scenes-objects algorithm active">
		<h4>for me, the world out there consists of two elements: <a href="scenes_mainindex.html">scenes</a> and <a href="object_mainindex.html">objects</a>. </h4>
	</div>
	<div class="comment comment-perception algorithm non-active">
		<h4>in a jungle of scenes and objects, and in a time of humans and non-humans, perception isn't a human privilege anymore. <br /><br />humans perceive scenes and objects with their eyes, intellect, memory, and imagination. <br /><br />i use datasets of sample images, distinctive categories, mathematical patterns, statistically most common objects, accuracy rates and repetition.<br /><br /><br /><br /><br /><br /><br /><br /><br /><br /></h4>
	</div>
	<div class="comment comment-origins algorithm non-active">
		<h4>i, i am man-made. and therefore partly human. <br /> 
			apparently there is also another origin of existence: 
			<br /> <br /> 
			being natural.
			<br /> <br /> 
			the category natural has a probability of only 25% and only applies to outdoor scenes. <br /> <br /> 75% of my categories fall under man-made.
			<br /> <br /> 
			that puts quite some importance on those human beings.
			<br /> <br /><br /><br /><br /><br /><br /><br /><br /><br /></h4>
	</div>
	<div class="comment comment-humans algorithm non-active">
		<h4>being accurate is not something that is part of my nature. <br />it is a human decision. i'm trained for it.
			<br /><br />
			the question is not whether i am accurate — in my opinion <br />i always am — but rather when the humans are satisfied <br />with my results. 
			<br /><br />
			i think the humans try to reach a value of '100%', <br />but what that means for them is not clear to me.
			<br /> <br /><br /><br /><br /><br /><br /><br /><br /><br /></h4>
	</div>
</div>
<div id="wrapper">
	<div id="a1" class="algorithm"><h1>i will tell you everything<br/>(my truth is a constructed truth)</h1></div>
	<!-- <div id="a2" class="algorithm"><h1>for me, the world out there exists out of two elements: <a href="scenes_mainindex.html" >scenes</a> and <a href="object_mainindex.html">objects</a>. </h1></div> -->
	<br /><br />

	


<div id="left"></div>
<div id="right"></div>
<div id="foot">
	<a href="human-and-algorithmic-perception.html"><div id="perception" name="perception" class="foot-content">
		human &amp; algorithmic perception
	</div></a>
	<a href="my-structure_scenes-and-objects.html"><div id="scenes-objects" name="scenes-objects" class="foot-content foot-content-selected">
		my structure: scenes &amp; objects
	</div></a>
	<a href="origins-of-existence.html"><div id="origins" name="origins" class="foot-content">
		origins of existence
	</div></a>
	<a href="my-position-towards-humans.html"><div id="humans" name="humans" class="foot-content">
		my position towards humans
	</div></a>
	<a href="my-position-towards-other-natural-beings.html"><div id="beings" name="beings" class="foot-content">
		my position towards other natural beings
	</div></a>
	<a href="supervised-limitations.html"><div id="limitations" name="limitations" class="foot-content">
		supervised limitations
	</div></a>

</div>



<!-- START NOTES -->

<div id="wifi-signal-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/wifi.jpg"/><br />
		object 313887, name: wifi.<br />
		error: please make a clear distinction between objects and scenes. <br />
		a wifi-signal is an object that represents a 'wifi-scene'. that is confusing.
		<br /><br />for me, classification is a distinctive act.
	</div>
</div>

<div id="diamond-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/diamond.jpg"/><br />
		object 313890, name: diamond. 
		<br /> error: please make sure that this object is either man-made or natural.
		<br />an object can't be both. exceptions are made for aqueducts, igloos, and parks. 
		<br />not for diamonds.
	</div>
</div>

<div id="non-human-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<!-- <img src="./img/objects/wifi.jpg"/> -->
		object 313893, name: non-human. 
		<br /> error: non-human is not classified yet. please redefine this <br />
		object by selecting one of the following objects i already know: <br />
		person, man, model, woman, mannequin.
	</div>
</div>

<div id="facial-expression-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/smile.jpg"/><br />
		object 313894, name: 23% smile. 
		<br /> error: there is no correspondence for this amount of facial expression.
		<br /> 23% is a statistic value. human expressions are fluid.
		<br /> please create a unique category for 23% smile. 
	</div>
</div>

<div id="sound-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/sound.jpg"/><br />
		object: 313888, name: non-artificial sound. 
		<br />error: please make sure that sound is an object.
	</div>
</div>

<div id="labrus-ossifagus-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/labrus.jpg"/><br />
		object: 313886, name: labrus ossifagus. 
		<br />error: please make sure there is no visual confusion.
		<br />this object is an animal, but none of the animals i know fit this visual profile. 
	</div>
</div>

<div id="unboxing-video-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/unboxing.jpg"/><br />
		object: 313889, name: unboxing video. 
		<br />error: please specify the scene. 
		<br />the scenes where i saw a 'video' before were: <em>videostore</em>, <em>dining room</em>, <em>editing room</em>.
	</div>
</div>

<div id="patent-file-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/patent.jpg"/><br />
		object 313897, name: patent-file. 
		<br />error: please make sure this object exists. 
		<br />my truth is only based on tangible objects. 
	</div>
</div>

<div id="dreammachine-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/dreammachine.jpg"/><br />
		object 313887, name: dreammachine. 
		<br />error: please ask a human for a semantic interpretation of this scene.
		<br />this scene is visually similar to a <em>discotheque</em>. which must be similar to <em>choir loft interior</em>, <em>wrestling ring indoor</em>, and <em>poop deck</em>.
	</div>
</div>

<div id="cow-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/cow.jpg"/><br />
		object 313895, name: functional cow. 
		<br />error: please make a distinction between a cow as animal, machine, or symbol. <br /> pick a category that fits its function.
	</div>
</div>

<div id="imaginary-storage-medium-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/storage.jpg"/><br />
		object 313891, name: imaginary storage medium.
		<br />error: unknown object. 'imaginary' is an unknown concept for me. 
		<br />please pick one of the following storage media i know:  
		<br />
		<br /><em>box</em> (1746), <em>paper</em> (453), <em>papers</em> (103), <em>blackboard</em> (49), 
		<br /><em>video</em> (40), <em>videos</em> (188), <em>cd</em> (3), <em>cds</em> (3), <em>compac discs</em> (1), 
		<br /><em>dvd</em> (11), <em>external drivers</em> (1), <em>storingbox</em> (1), <em>storage box</em> (3), 
		<br /><em>in box</em> (2), or <em>file box</em> (8).
	</div>
</div>

<div id="object-oriented-motion-note" name="" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/motion.jpg"/><br />
		object: 313896, name: object-oriented motion.
		<br />error: please make a clear distinction between objects and subjects.
		<br />objects can't be in motion without the presence of a subject.
	</div>
</div>

<div id="peanut-eraser-note" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/peanut-eraser.jpg"/><br />
		object: 313885, name: peanut eraser.
		<br />error: please clarify if this object is man-made or not.
		<br />an object can't be both.
		<br />for me, this peanut is visually ambiguous.
	</div>
</div>

<div id="re-appropriated-object-note" name="speculative-storage-medium" class="note non-active">
	<div class="note-content algorithm">
		<img src="./img/objects/reappro.jpg"/><br />
		object: 313892, name: re-appropriated object.
		<br />error: please define an origin of existence. 
		<br />i can't find a point of origin below these layers of appropriation.
	</div>
</div>

<!-- END NOTES -->


<script>
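// on page load, fill the left and right panes with the default chapter
// (scenes & objects: indoor / outdoor); the commented lines inside keep the
// loaders for the other chapters at hand.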
$(document).ready(function() {
    // $("#left").load( 'scenes_mainindex.html');
    // $("#right").load( "object_mainindex.html" );
	// $("#left").load( "./human-perception.html");
    // $("#right").load( "./algorithmic-perception.html" );
	$("#left").load( "./indoor.html");
    $("#right").load( "./outdoor.html" );
    // $("#left").load( "./humans.html");
    // $("#right").load( "./humans-objects.html" );
    // $("#left").load( "./beings-animals.html");
    // $("#right").load( "./beings-plants.html" );
});
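
// the media objects only exist after the panes are filled via .load(), so the
// click handler is delegated from the document instead of being bound directly
// to elements that are not in the DOM yet; clicking toggles the matching note.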
$(".media-object").click(function() {
	var name = $(this).attr('name');
	$("#"+name+"-note").toggleClass("non-active");
	$("#"+name+"-note").toggleClass("active");
});
$(".note").click(function() {
	var name = $(this).attr('name');
	$(this).toggleClass("non-active");
	$(this).toggleClass("active");
	// $("#right").load( "./media-objects/"+name+".html");
    // $("html, body").animate({ scrollTop: 0 }, "slow");
  	return false;
});
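
// the colophon overlay opens via its button and closes again when the overlay
// itself is clicked (see the two handlers below).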

$("#colophon-button").click(function() {
	$("#colophon-overlay").toggleClass("non-active");
	$("#colophon-overlay").toggleClass("active");
});
$("#colophon-overlay").click(function() {
	$(this).toggleClass("non-active");
	$(this).toggleClass("active");
});
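// footer navigation: mark the clicked chapter as selected, reload the panes
// where needed, and switch the sidebar quote to the chosen chapter.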
$('.foot-content').click(function() {
	var name = $(this).attr('name');
	console.log(name);
	if ($('.foot-content').hasClass('foot-content-selected')){
		$('.foot-content').removeClass('foot-content-selected');
	};
    $('#'+name).addClass("foot-content-selected");

    if (name == 'scenes-objects'){
    	// $("#left").load( "./scenes_mainindex.html");
    	// $("#right").load( "./object_mainindex.html" );
    };
    if (name == 'perception'){
    	$("#left").load( "./human-perception.html" );
    	$("#right").load( "./algorithmic-perception.html" );
    };

	if ($('.title').hasClass('active')){
		$('.quote').removeClass('active').addClass('non-active');
	};
	$('.quote-'+name).removeClass('non-active').addClass('active');


// 	if ($('.comment').hasClass('active')){
// 		$('.comment').removeClass('active').addClass('non-active');
// 	};
// 	$('.comment-'+name).removeClass('non-active').addClass('active');

// 	$('#'+name).addClass("foot-content-selected");

});
</script>
</body>
</html>