User:Emily/Prototyping/Trimester 02/03
Edge Detection
Needs ImageMagick to resize the frames and then run Canny edge detection on them:
mogrify -resize 480x *.jpg
mogrify -canny 0x1+10%+30% *.jpg
Then FFmpeg is needed to turn the images back into a video:
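For example (the frame names and framerate here are assumptions; adjust the pattern to match how the frames were extracted, and see the slideshow guide linked below for variations):
ffmpeg -framerate 25 -i img%03d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4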
TEST RESULT: http://youtu.be/DxuVLJEd6iI
More information: https://trac.ffmpeg.org/wiki/Create%20a%20video%20slideshow%20from%20images
Video to images - Sorting images - Back to video (refined)
Source: http://pzwart2.wdka.hro.nl/mediadesign/Prototyping/the-networked-image/python-editing-brightness
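A rough sketch of the sorting step, assuming Pillow is installed and the extracted frames are the JPEGs in the current directory (the tutorial at the link above may do it differently):

import glob
import os
import shutil
from PIL import Image, ImageStat

# measure the average brightness of every extracted frame
frames = []
for path in glob.glob('*.jpg'):
    gray = Image.open(path).convert('L')        # grayscale copy
    brightness = ImageStat.Stat(gray).mean[0]   # mean pixel value, 0-255
    frames.append((brightness, path))

# copy the frames out in brightness order, renumbered so that
# ffmpeg can read them back as a sequence (sorted/out0001.jpg, ...)
frames.sort()
os.makedirs('sorted', exist_ok=True)
for i, (brightness, path) in enumerate(frames, start=1):
    shutil.copy(path, 'sorted/out%04d.jpg' % i)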
Image compare and graph (the networked image)
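One simple reading of "compare and graph" (an assumption, not necessarily what the course material does; needs NumPy, Pillow and matplotlib) is to measure how much each frame differs from the previous one and plot that curve:

import glob
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# load the frames as grayscale arrays
paths = sorted(glob.glob('*.jpg'))
frames = [np.asarray(Image.open(p).convert('L'), dtype=float) for p in paths]

# compare each frame with the previous one: mean absolute pixel difference
diffs = [np.abs(frames[i] - frames[i - 1]).mean() for i in range(1, len(frames))]

# graph the differences over the course of the video
plt.plot(diffs)
plt.xlabel('frame')
plt.ylabel('mean difference from previous frame')
plt.savefig('compare.png')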
The idea of video to image
Extract frames from the video (one image per second?), then overlay them, each with a certain transparency, so that the opacities in the final image add up to one.
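Giving each of the N frames an opacity of 1/N and stacking them is the same as averaging them pixel by pixel, so a minimal sketch (assuming the one-per-second frames have already been extracted as JPEGs) could look like this:

import glob
import numpy as np
from PIL import Image

# frames extracted from the video, e.g. with:
# ffmpeg -i input.mp4 -vf fps=1 frame%04d.jpg   (one image per second)
paths = sorted(glob.glob('frame*.jpg'))

# overlaying every frame with transparency 1/N and summing the layers
# is simply the per-pixel average of all frames
total = np.zeros_like(np.asarray(Image.open(paths[0]), dtype=float))
for p in paths:
    total += np.asarray(Image.open(p), dtype=float) / len(paths)

Image.fromarray(total.astype('uint8')).save('overlay.jpg')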
Then I want to make this work in real time: when the video is clicked it plays, and when it is clicked again it generates an image (and stops on that image) produced in the same way as above.
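A sketch of how that interaction could be prototyped with OpenCV (an assumption about the approach, not the original code: a click in the window toggles between playing and showing the running composite of the frames played so far):

import cv2
import numpy as np

cap = cv2.VideoCapture('input.mp4')   # hypothetical input file
playing = True
average = None                        # running per-pixel average of the frames shown so far
count = 0

def on_click(event, x, y, flags, param):
    # toggle between playing and showing the composite on every left click
    global playing
    if event == cv2.EVENT_LBUTTONDOWN:
        playing = not playing

cv2.namedWindow('video')
cv2.setMouseCallback('video', on_click)

while True:
    if playing:
        ok, frame = cap.read()
        if not ok:
            break
        count += 1
        f = frame.astype(float)
        # incremental per-pixel average of every frame shown so far
        average = f if average is None else average + (f - average) / count
        cv2.imshow('video', frame)
    elif average is not None:
        # paused: show the composite built from the frames played so far
        cv2.imshow('video', average.astype('uint8'))
    if cv2.waitKey(30) == 27:         # Esc quits
        break

cap.release()
cv2.destroyAllWindows()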
Code Source: