User:Tisa/special issue 11


prototyping

8-1-20

with Michael -> https://pzwiki.wdka.nl/mediadesign/Digital_zines_I:_PDF What is the way of publishing that avoids censorship?

  • What about real-time censorship situations?

https://en.wikipedia.org/wiki/Internet_censorship_in_China

What about destruction in the long run? How does humanity archive itself? USB? Physical media? What about the ecological consequences of infinite "cloud storage"? Low-tech Magazine, a website that runs on solar power: https://solar.lowtechmagazine.com/2018/09/how-to-build-a-lowtech-website.html Post-truth.

Non-realtime publishing strategies: http://blog.bjrn.se/2020/01/non-realtime-publishing-for-censorship.html Digital signatures, identity, ownership - "signing": any change to a signed document becomes noticeable, so the originality of a document can be verified. Contrasting: to edit files? To re-publish?
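
For example, a minimal signing sketch with GnuPG, assuming a keypair already exists (the filenames are placeholders):

   gpg --detach-sign zine.pdf          # writes a signature file, zine.pdf.sig
   gpg --verify zine.pdf.sig zine.pdf  # verification fails if the file was altered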

A "living archive", organic form, subject to change? Collective editing.

Zines as resistance. History of zines? ref: Amy Spencer, DIY: The Rise of Lo-Fi Culture.

  • production and distribution

PDF hate. It only became an open format recently (2008), breaking Adobe's monopoly. PDFs with actual text (not only a scanned image) enable the search function & copying.

What's up with "post-digital"?

The internet as the working memory of a culture, not really an archive... hence projects like web.archive.org or the GeoCities archive (Olia Lialina): https://blog.geocities.institute/

The internet as an unsustainable medium. Cave paintings have lasted for thousands of years; a solar storm that ruins all the chips would delete most of human knowledge in an instant. What would be sustainable and ecological ways of preserving large amounts of data for thousands of years?

Scene in the film V for Vendetta: researchers are searching for data about a certain facility. All of it has been censored, deleted, lost. The only way they can prove the facility's existence is through tax records that were deleted from the digital archive but then found, in printed form, in a cold-room archive.

For whom do we archive? What is the worth of an archive if nobody reads it?


some bash stuff ------------------

wget (+link)

for downloading files from the web.

display

to display (images)

gedit (+ download.sh)

my text editor; make a file named download.sh and, in that file, write the script that executes wget.

bash download.sh

it runs the script that is in download.sh

-O whateveryouwantthenametobe

Add this on the same line to rename the downloaded file.

-x

used as bash -x download.sh: adds a trace of the commands (prints each line as it is executed).

| grep searching-this-term --color

you pipe the wget output of a webpage into this to highlight the matching lines.

| grep searching-this-term --color > wiki.txt

saves the lines you were searching for to wiki.txt (> redirects stdout). Use 2> to redirect stderr to a file instead.

/dev/null

byebye (redirect output here to discard it)

ls | wc -l

wc -l counts lines; piped after ls, it shows how many elements are in the directory.
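
Putting the pieces above together, a minimal download.sh sketch (the URL and filenames are placeholders):

   #!/bin/bash
   # save a page under a chosen name (-O)
   wget https://pzwiki.wdka.nl/mediadesign/Main_Page -O wiki.html
   # print a page to stdout (-O -), keep only the matching lines;
   # stdout goes to wiki.txt, stderr to errors.txt
   wget -qO - https://pzwiki.wdka.nl/mediadesign/Main_Page | grep "zine" --color > wiki.txt 2> errors.txt
   # count the files now in the directory
   ls | wc -l

Run it with bash download.sh, or with bash -x download.sh to watch each command as it executes.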

Bulletin board systems. ref: documentary by Jason Scott: BBS: The Documentary. Community Memory.

evince: to open up a PDF

display: to open up images

gedit: how to call my text editor

imagemagick:

identify name.jpg

https://en.wikipedia.org/wiki/Thumbnail

convert name.jpg -resize 120x90 newname.jpg
mogrify -resize 200x200 *.jpg 
convert *.jpg one.PDF
pdfunite first.pdf second.pdf outcome.pdf
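
Combined, a sketch that thumbnails every .jpg in a folder and bundles the results into a single PDF (the filenames are placeholders):

   mkdir -p thumbs
   mogrify -path thumbs -resize 120x90 *.jpg   # write resized copies into thumbs/
   convert thumbs/*.jpg thumbnails.pdf         # one page per image
   pdfunite cover.pdf thumbnails.pdf zine.pdf  # prepend a (hypothetical) cover page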

OCR (optical character recognition)

tesseract
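
A minimal usage sketch (filenames are placeholders); tesseract writes the recognized text to <outputbase>.txt:

   tesseract scan.jpg scan          # reads scan.jpg, writes scan.txt
   tesseract scan.jpg scan -l eng   # set the recognition language explicitly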

13-1-20

w/Andre

wiki pg

pad

"Curating" information of Wikipedia. Who, by what (political) agenda?

There is no way for different opinions to coexist on Wikipedia. Edit war, opinion war.

Backing up information with sources, making it more stable, more valid.

The sources have to be public. This is where we come in.

TOR - The onion router

ref (find a chapter about TOR):

Tor Browser is built on Firefox (a modified browser, not just a plugin).

normal browsing:

   human > ISP (internet service provider) > wwwebsite

tor browsing:

   human > ISP > Tor > a number of random relays within the Tor network (the last one is the "exit node"), each time a different route > website
  1. Q: Do you become an "exit node" (a server in between) as soon as you enable tor? No.

The ISP knows only the first machine (the entry node) that the human connects to.

  1. Q: Who can see more?

DARKNET / DARK WEB

You cannot find it through a regular search engine. A Tor hidden service is a simple website, made of static HTML, with a .onion address. The location of the server/website is not revealed. Public but hidden.

Static vs. dynamic websites: a dynamic site constantly communicates with its server.

Such as: http://warpweftmemory.net/#/notes

  1. Q: How changeable are static websites?

Inspect element (Q) > Network tab (on a dynamic website there are lots of recurring requests // on a static website there is a finite number of network requests)

End goal of the project: three copies of a static website that host (on tor hidden service) the archive.
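
For the hosting side, a minimal torrc sketch (the directory and ports are assumptions); Tor writes the resulting .onion address into the hostname file inside the HiddenServiceDir:

   # /etc/tor/torrc -- expose a local static site as a hidden service
   HiddenServiceDir /var/lib/tor/archive_hs/
   HiddenServicePort 80 127.0.0.1:8080   # .onion port 80 -> local web server on port 8080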

Creating an editing/archiving organisation of the archival material, as images and as text (.txt), using OCR (optical character recognition).

  1. Q: Is there a structure of the archive already? Categorisation?
  1. Q: Simulation perspective on this project. The pzwiki already contains a bunch of info that could be risky.

Level: Internal (archive)

  • on a closed (account needed) MediaWiki because of:
  • markup
  • FLOSS
  • local server (raspberry pi, 3x)
  • revisions
  • collaboration

Factory: Mediation between levels.

Level: Public (archive)

  • static website
  • print

What we want: Structure of the system

Internal archive

  • wiki: unreadable & unwritable to non-collaborators
  • visible from HRO net? YES
  • visible from outside HRO net? MAYBE (YES for now)

Public archive

  • the documents are publicly accessible via a Tor hidden service (HS)
  1. Q: Semantic web? https://en.wikipedia.org/wiki/Semantic_Web Semantic tagging? Wiki=Categories. 3-fold structure: thing>property>value

Semantic MediaWiki. The Cargo extension. Wikidata. Using semantic structures to categorise/manage the archive.


Publishing forms of wiki content. Examples:

Wiki > HTML > CSS-to-print; or Wiki/HTML > ".icml" format (InDesign): https://fileinfo.com/extension/icml

ssh username@145.24.139.127

hub.xpub.nl/ ... different Raspberry Pis > Tinc (VPN) connection > XPUB server (HRO: Hogeschool Rotterdam) > outside world

It is more direct to SSH straight to the IP address of the Raspberry Pi that you want to access.

Using SSH keys. They are stored in ~/.ssh: id_rsa (the private key) and id_rsa.pub (the public key). The public key is the authorized key; it enables logging into the account. The private and public keys form a matching pair.
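
A minimal sketch of setting this up from the client side:

   ssh-keygen                            # generates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
   ssh-copy-id tisaneza@145.24.139.127   # appends the public key to the server's authorized keys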

ssh tisaneza@145.24.139.127

=

ssh sandbox.local
exit OR Ctrl+D to log out.

Log into pi from home/anywhere:

ssh -J xpub.nl:2501 10.0.0.11

=

ssh sandbox
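
The shorthand works through an alias in ~/.ssh/config; a minimal sketch (the User value is an assumption):

   # ~/.ssh/config
   Host sandbox
       HostName 10.0.0.11
       User tisaneza
       ProxyJump xpub.nl:2501   # hop through the XPUB server first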

Installing MediaWiki ("itchwiki") on a LAMP stack: L = Linux (operating system), A = Apache (web server), M = MySQL (database), P = PHP (programming language). MediaWiki itself is written in PHP.
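
A sketch of the LAMP ingredients on a Debian-based system such as Raspbian (package names are assumptions and vary per distribution; on Raspbian the database is MariaDB):

   sudo apt install apache2 mariadb-server php php-mysql libapache2-mod-php
   # then unpack the MediaWiki tarball into /var/www/html/itchwiki
   # and finish the setup through the web installer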

145.24.139.127/itchwiki/


14-1-20

w/Clara and Sami

  • Public + private life that gets merged.
  • "Helping people" is not the way to go.
  • Research, data being acquired - one has to publish it.
  • Christian Hanson, who founded a publishing house together with Clara: a "cottage industry publishing house." Publications with a birthmark; no longer looking to the West.
  • "deep hanging out" as a strategy for working together.
  • Kulambo Bulletin (kulambo = a mosquito net).
  • Mosquito presses (the mosquito press period) under the Marcos dictatorship, a kleptocratic regime: martial law sanctioned information control and censorship. No reporting on what was going on in the country. You can never silence all of the mosquitoes.
  • Recreating the conditions of urgency in the publication Kulambo Bulletin. To recreate the urgency: doing a lot in a short span of time. The academic Philippine diaspora in the USA is the one producing knowledge about the actual situation.
  • Duterte is repeating the history of the Marcoses, in terms of political actions.

After the 1986 reforms, democracy ...

  • Other countries with similar situations - Turkey for example.
  • Crisis of historical memory.

Different archives:

  • Bantayo ... (pink)
  • The living room archive from Philadelphia (green)
  • Publications from Clara's publishing house. (on the slides)

Carlos Nazareno (guest tutor):

  • wiki society Philippines - editing wars.
  • "Wikipedian in residency" at Bantayo, editing wiki pages. Importance of content creation, it gets propagated.
  • ref: Architects of Networked Disinformation
  • trolling
  • open identity
  • Asking people for what they need.
    1. Q: ... ? >> "There is agency in doing what you're told."
  • How to manufacture a sense of urgency? To engineer it, to build it.
  • Mayday room: Rosemary Grennan and Jan Gerber - Political archive in London. Making it active again.
  • OSP: Stephanie Vilayphiou and Pierre Huyghebaert - teaching us to make static websites.
  • Carlos Nazareno - press and cyber freedom of speech.
  • ref: Alan Robles - Stories of martial law
  • May Rodriguez on the subject of B. institution.
  1. Q: How is the content of this project relevant for my background, knowledge, interests, history, region?
  • Collective workflow constitution.
  • Interests, assets/knowledge, time that we want to give into the collective.
  • Create a code of conduct. How do we want to treat each other? Communication?

Archiving: Research dynamics: Idea

The idea is to make a tool (a script) that archives one's own research process, shaping it into a form that can be shared with others. Making "invisible work" visible.

Thinking in terms of a "mood board": a collection of references and thoughts, combining written text (.txt) and screenshots (.jpg), but also GIFs and excerpts of video and sound, in one place. They would be stored in separate daily folders and automatically assembled into a layout: a live/organic/editable/comment-able website that can also be exported as a printable A4 PDF (to be assembled into a weekly publication/zine that physically represents a research process). The topics of research would be categorized using hashtags, so new connections could form over time. (Then the question of their visual representation and their non-linear nature appears. As a 3D configuration by topic?)

Why am I thinking of a tool like this? What drives my necessity to construct it?

My research process is quite dynamic, intuitive and unstructured. I jump from one topic to another rapidly, and I haven't found an efficient way to represent this process yet (except for writing). I am driven by interest and a thirst for knowledge, and no longer preoccupied by the urgency of choosing and narrowing down. The world is not disciplinary; thought is non-linear.

Subjective perspective: it is a matter of tracking one's progress and mind-flow, a way to make associations more explicit and thereby open to new interpretations, assembling connections between the different fields/disciplines/sciences/content that pop up in the process.

So, technically, it is not only a tool for archiving but also a tool that offers its user the possibility to evaluate, rethink, visualize and imagine the mycelial structure of their thought. This is, of course, speculation about the tool's effects on one's cognition. Only by using the tool could one confirm its capacity to affect the thought flow/process.

Outwards perspective: by using this tool, I believe our internal processes could become debatable. By sharing insights and thoughts, it would be easier to step into conversations and to get critique or feedback. It is a way of publicizing one's internal, subjective mind-flow, with the intention of a collective cross-pollination of ideas, ping-ponging thoughts and producing conversations that come from a space of intuitive understanding of one's process and intention.

How to start?

  • Screenshots - automate saving them into a specific directory (see the sketch after this list). A simple way to edit them (crop) and to edit the metadata (or?) to include a footnote with the source and a hashtag category. (Also used as a way to produce instant quotations, sources.)
  • PDF - annotations made while reading exported into a separate editable file (one .txt file per text read); also include the same data as above.
  • Browsing history - the pages connected to the research collected in one place.
  • Connection to the phone. KDE Connect - but more direct, less manual work.

This tool should do it all by itself; only basic editing should be needed to produce the daily process tracking. (What is more relevant?)
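
As a first sketch for the screenshot part, assuming screenshots land in ~/Pictures and that inotify-tools is installed (all paths are assumptions):

   #!/bin/bash
   # move new screenshots into a daily research folder as they appear
   DAILY=~/research/$(date +%Y-%m-%d)
   mkdir -p "$DAILY"
   inotifywait -m -e create --format '%f' ~/Pictures | while read -r FILE; do
       case "$FILE" in
           *.png|*.jpg) mv ~/Pictures/"$FILE" "$DAILY/" ;;
       esac
   done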

.jpg to .txt

How to extract text from a single .jpg directly into a single .txt file?

Useful for screenshots of books/PDFs, and especially for screenshots of etymologies, dictionary entries and definitions.
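
With tesseract (see above), a sketch that turns every .jpg in the current folder into a matching .txt file:

   for IMG in *.jpg; do
       tesseract "$IMG" "${IMG%.jpg}"   # writes ${IMG%.jpg}.txt next to the image
   done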

pad saving

Another, probably simpler, thing to do would be to automate saving the content of pads. As pads are not the most stable things, and I manually (almost daily) save the ones I work on, it would be super useful to have a script that (from a list of pads) automatically saves their content (.txt) to my computer.
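
A sketch of such a script, assuming the pads run Etherpad, which exposes a plain-text export at <pad URL>/export/txt (the list file and backup folder are assumptions):

   #!/bin/bash
   # save-pads.sh -- download the current text of every pad listed in pads.txt
   mkdir -p ~/pad-backups
   while read -r PAD; do
       NAME=$(basename "$PAD")
       wget -q "$PAD/export/txt" -O ~/pad-backups/"$NAME"-$(date +%F).txt
   done < pads.txt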