User:Bohye Woo/Prototyping

From XPUB & Lens-Based wiki

Latest revision as of 13:38, 23 October 2018

=Motivational messages - work groups=

Bo, Bi, Pedro, Rita group (BBPR)

Pad: https://pad.xpub.nl/p/LINKEDIN

Outcome: http://145.24.139.232/~pedrosaclout/linkedinproject/

Sample generated profiles:

In my role as Head of recruiting for technology product development in India, I have had the exciting opportunity to was director of Open State Foundation, a non-profit organization. I am Isla Garcia and I . I take responsibility and pride myself in being strategic yet adaptable. I have an entrepreneurial spirit in that I enjoy taking on new challenges, creating new opportunities and designing new programs. My passions lie in reinforcement learning. When I'm not focused on my professional endeavors, you can find me go 14,000 feet above sea level hiking a mountain. My goal is to be a good social responsibility person in society.

In my role as Co-Founder, I have had the exciting opportunity to was director of Open State Foundation, a non-profit organization. I am Isla Garcia and I . I take responsibility and pride myself in being strategic yet adaptable. I have an entrepreneurial spirit in that I enjoy taking on new challenges, creating new opportunities and designing new programs. My passions lie in GANs. When I'm not focused on my professional endeavors, you can find me go 14,000 feet above sea level hiking a mountain. My goal is to become a good software engineer in software field.

Script: /home/pedrosaclout/public_html/linkedinproject/generator.sh

<source lang=bash>
#!/bin/sh
# Pick a random line from each word-list file
dir=/home/pedrosaclout/public_html/linkedinproject
profession=`cat $dir/professions.txt | sort -R | head -n 1`
subject=`cat $dir/subject.txt | sort -R | head -n 1`
goal=`cat $dir/goal.txt | sort -R | head -n 1`
education=`cat $dir/education.txt | sort -R | head -n 1`
quotes=`cat $dir/quotes.txt | sort -R | head -n 1`
adjectives=`cat $dir/adjectives.txt | sort -R | head -n 1`
name=`cat $dir/names.txt | sort -R | head -n 1`
hobby=`cat $dir/hobby.txt | sort -R | head -n 1`
experience=`cat $dir/experience.txt | sort -R | head -n 1`

# Pick a random template
template=`cat $dir/template.txt | sort -R | head -n 1`

# Substitute each placeholder in the template and write the result as the page
echo $template | sed "s/PROFESSION/$profession/g" | sed "s/EXPERIENCE/$experience/g" | sed "s/SUBJECT/$subject/g" | sed "s/GOAL/$goal/g" | sed "s/EDUCATION/$education/g" | sed "s/QUOTE/$quotes/g" | sed "s/ADJECTIVES/$adjectives/g" | sed "s/NAME/$name/g" | sed "s/HOBBY/$hobby/g" > $dir/index.html
</source>
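The same cut-up technique can be sketched in Python. This is a minimal sketch, not the script the group used; the inline word lists and the shortened template are hypothetical stand-ins for the <code>*.txt</code> files above:

<source lang=python>
import random

# Hypothetical inline word lists standing in for the *.txt files
LISTS = {
    "PROFESSION": ["Co-Founder", "Head of recruiting"],
    "SUBJECT": ["GANs", "reinforcement learning"],
    "GOAL": ["become a good software engineer"],
}
TEMPLATE = "In my role as PROFESSION, my passions lie in SUBJECT. My goal is to GOAL."

def generate(template=TEMPLATE, lists=LISTS):
    # Replace each placeholder with a random entry from its list,
    # mirroring the chained sed substitutions in generator.sh
    for placeholder, words in lists.items():
        template = template.replace(placeholder, random.choice(words))
    return template

print(generate())
</source>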


<source lang=python>
# Look up WordNet synonyms for a word with NLTK
from nltk.corpus import wordnet

synonyms = []
for syn in wordnet.synsets('Computer'):
    for lemma in syn.lemmas():
        synonyms.append(lemma.name())
print(synonyms)
</source>

=Motivational messages=


=Py.rate.chinic workshop #1=

==Workshop #1==

Using Selenium to scrape the YouTube comments, and a text processor to rank the most frequent words.

<source lang=python>
import re

# Count how often each word of 4-15 letters appears in the scraped comments
frequency = {}
with open('4.txt', 'r') as document_text:
    text_string = document_text.read().lower()
match_pattern = re.findall(r'\b[a-z]{4,15}\b', text_string)

for word in match_pattern:
    count = frequency.get(word, 0)
    frequency[word] = count + 1

# Print the words from least to most frequent
for word in sorted(frequency, key=frequency.get):
    print(word, frequency[word])
</source>
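The same ranking can be done with the standard library's <code>collections.Counter</code>. A minimal sketch, assuming the same kind of scraped comment text (the sample string here is made up):

<source lang=python>
import re
from collections import Counter

def rank_words(text, top=10):
    # Tokenize into lowercase words of 4-15 letters and count them
    words = re.findall(r'\b[a-z]{4,15}\b', text.lower())
    return Counter(words).most_common(top)

print(rank_words("Great video, great comments, great great channel"))
</source>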

Link: [[.py.rate.chnic sessions]]

=Pandoc converter=

A universal document converter - converts from one markup language to another: https://pandoc.org/MANUAL.html

Use: convert downloaded wiki pages into HTML files.

Extensive documentation is available in Pandoc's Manual or via <code>man pandoc</code>.

Calibre e-book program: https://calibre-ebook.com/

==Pandoc (convert a Docx file to MediaWiki)==

Pandoc common arguments:

* <code>-f</code> - "from", followed by the input format;
* <code>-t</code> - "to", followed by the output format;
* <code>-s</code> - "standalone", produces output with an appropriate header and footer;
* <code>-o</code> - option for file output;
* <code>mediawiki</code> - the MediaWiki input filename; replace it with your file's actual name.


<source lang=bash>
# echo writes its argument to standard output; pipe it into pandoc as input
echo texts.docx
echo '<h1>Hello</h1>'

# Convert HTML to Markdown
echo '<h1>Hello</h1>' | pandoc -f html -t markdown

# Convert HTML to MediaWiki
echo '<h1>Hello</h1>' | pandoc -f html -t mediawiki

# Convert HTML to LaTeX
echo '<h1>Hello</h1>' | pandoc -f html -t latex
</source>
<source lang=bash>
# Content is stored in a file now

# Run Pandoc; without -o the result is printed to the terminal
pandoc texts.docx -f docx -t mediawiki

# Convert to a wiki file, writing the output to texts.wiki
pandoc texts.docx -f docx -t mediawiki -o texts.wiki

# Check the result
less texts.wiki
</source>

==From Wiki to HTML to PDF==

Example stylesheet: https://gitlab.com/Mondotheque/RadiatedBook/blob/master/resources/style.pdf.css

<code>weasyprint filename.html -s style.css filename.pdf</code>

<source lang=bash>
# STEP 1: download the page content from the wiki
./wiki-download.py --pages 'Entreprecariat reader synopses and abstracts'

# STEP 2: convert the mediawiki file to an HTML file
pandoc 'Entreprecariat reader synopses and abstracts' -f mediawiki -t html -s -o 'Entreprecariat_reader.html'

# STEP 3: convert the HTML to PDF
weasyprint Entreprecariat_reader.html -s style.css output.pdf
</source>
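The three steps above can also be driven from Python. A sketch, assuming <code>wiki-download.py</code>, <code>pandoc</code> and <code>weasyprint</code> are all available on the machine (the function and argument names here are made up for illustration):

<source lang=python>
import subprocess

def build_commands(page, html_out, pdf_out, css="style.css"):
    """Return the three shell steps as argument lists (nothing is run here)."""
    return [
        ["./wiki-download.py", "--pages", page],
        ["pandoc", page, "-f", "mediawiki", "-t", "html", "-s", "-o", html_out],
        ["weasyprint", html_out, "-s", css, pdf_out],
    ]

def run_pipeline(page, html_out, pdf_out, css="style.css"):
    # Run each step in order, stopping if any step fails
    for cmd in build_commands(page, html_out, pdf_out, css):
        subprocess.run(cmd, check=True)
</source>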

[[Program Languages]]