User:Bohye Woo/Prototyping


Motivational messages - work groups

Bo, Bi, Pedro, Rita group (BBPR)

Pad: https://pad.xpub.nl/p/LINKEDIN

Outcome (sample generated profiles)

http://145.24.139.232/~pedrosaclout/linkedinproject/

In my role as Head of recruiting for technology product development in India, I have had the exciting opportunity to was director of Open State Foundation, a non-profit organization. I am Isla Garcia and I . I take responsibility and pride myself in being strategic yet adaptable. I have an entrepreneurial spirit in that I enjoy taking on new challenges, creating new opportunities and designing new programs. My passions lie in reinforcement learning. When I'm not focused on my professional endeavors, you can find me go 14,000 feet above sea level hiking a mountain. My goal is to be a good social responsibility person in society.

In my role as Co-Founder, I have had the exciting opportunity to was director of Open State Foundation, a non-profit organization. I am Isla Garcia and I . I take responsibility and pride myself in being strategic yet adaptable. I have an entrepreneurial spirit in that I enjoy taking on new challenges, creating new opportunities and designing new programs. My passions lie in GANs. When I'm not focused on my professional endeavors, you can find me go 14,000 feet above sea level hiking a mountain. My goal is to become a good software engineer in software field.

Script

/home/pedrosaclout/public_html/linkedinproject/generator.sh

#!/bin/sh
dir=/home/pedrosaclout/public_html/linkedinproject

# Pick one random line from each word-list file.
profession=$(sort -R "$dir/professions.txt" | head -n 1)
subject=$(sort -R "$dir/subject.txt" | head -n 1)
goal=$(sort -R "$dir/goal.txt" | head -n 1)
education=$(sort -R "$dir/education.txt" | head -n 1)
quotes=$(sort -R "$dir/quotes.txt" | head -n 1)
adjectives=$(sort -R "$dir/adjectives.txt" | head -n 1)
name=$(sort -R "$dir/names.txt" | head -n 1)
hobby=$(sort -R "$dir/hobby.txt" | head -n 1)
experience=$(sort -R "$dir/experience.txt" | head -n 1)

# Pick one random template line, then fill in each placeholder with sed.
template=$(sort -R "$dir/template.txt" | head -n 1)

echo "$template" \
  | sed "s/PROFESSION/$profession/g" \
  | sed "s/EXPERIENCE/$experience/g" \
  | sed "s/SUBJECT/$subject/g" \
  | sed "s/GOAL/$goal/g" \
  | sed "s/EDUCATION/$education/g" \
  | sed "s/QUOTE/$quotes/g" \
  | sed "s/ADJECTIVES/$adjectives/g" \
  | sed "s/NAME/$name/g" \
  | sed "s/HOBBY/$hobby/g" > "$dir/index.html"
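Working backwards from the sample outputs above, a line in template.txt presumably looks something like the following. This is a reconstruction, not the actual file (the EDUCATION placeholder, for instance, doesn't surface in the two samples):

In my role as PROFESSION, I have had the exciting opportunity to EXPERIENCE. I am NAME and I QUOTE. I take responsibility and pride myself in being ADJECTIVES. My passions lie in SUBJECT. When I'm not focused on my professional endeavors, you can find me HOBBY. My goal is GOAL.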


# Use NLTK's WordNet to collect synonyms of "Computer".
from nltk.corpus import wordnet

synonyms = []
for syn in wordnet.synsets('Computer'):
    for lemma in syn.lemmas():
        synonyms.append(lemma.name())
print(synonyms)
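Running this prints every lemma name from each synset of 'Computer', duplicates included, since the same lemma (for instance 'computer') can belong to more than one synset.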



Py.rate.chnic workshop #1

Using Selenium to scrape YouTube comments, then using a text processor to rank the most frequent words.
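The Selenium step isn't reproduced on this page. A minimal sketch of it, assuming chromedriver is installed and that YouTube renders comment text in '#content-text' nodes (a selector that changes over time, so inspect the page first), could look like this:

# Hypothetical scraping sketch; VIDEO_ID and the CSS selector are assumptions.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.youtube.com/watch?v=VIDEO_ID')  # VIDEO_ID is a placeholder

# Scroll down a few times so YouTube lazy-loads the comment section.
for _ in range(5):
    driver.execute_script('window.scrollBy(0, 2000);')
    time.sleep(2)

# Dump each comment's text into the file the frequency script reads below.
with open('4.txt', 'w') as f:
    for comment in driver.find_elements(By.CSS_SELECTOR, '#content-text'):
        f.write(comment.text + '\n')

driver.quit()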

import re

# Count how often each word of 4-15 lowercase letters appears in the scraped comments.
frequency = {}
with open('4.txt', 'r') as document_text:
    text_string = document_text.read().lower()

match_pattern = re.findall(r'\b[a-z]{4,15}\b', text_string)

for word in match_pattern:
    frequency[word] = frequency.get(word, 0) + 1

# Print the words sorted from least to most frequent.
for word in sorted(frequency, key=frequency.get):
    print(word, frequency[word])
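Note that sorted() runs in ascending order, so the most frequent words are printed last; passing reverse=True to sorted() would list them first.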

E-book / Pandoc converter

calibre e-book program

https://calibre-ebook.com/
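calibre also ships a command-line converter, ebook-convert, which infers the formats from the file extensions. A minimal sketch, assuming a texts.docx source file:

# Convert a Docx file to EPUB with calibre's command-line tool
ebook-convert texts.docx texts.epub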

Pandoc (convert a Docx file to MediaWiki)

Pandoc common arguments

-f - option standing for “from”; followed by the input format;

-t - option standing for “to”; followed by the output format;

-s - option standing for “standalone”; produces output with an appropriate header and footer;

-o - option for file output;

mediawiki - the name of the output format passed to -t; the input filename in the examples below (texts.docx) needs to be replaced by the actual name of your file.


# echo prints the HTML snippet that gets piped into Pandoc
echo '<h1>Hello</h1>'

# Convert HTML to Markdown
echo '<h1>Hello</h1>' | pandoc -f html -t markdown

# Convert HTML to MediaWiki
echo '<h1>Hello</h1>' | pandoc -f html -t mediawiki

# Convert HTML to LaTeX
echo '<h1>Hello</h1>' | pandoc -f html -t latex

When the content is stored in a file (texts.docx) instead:

# Run Pandoc on the file: convert Docx to MediaWiki markup (printed to stdout)
pandoc texts.docx -f docx -t mediawiki

# Convert and write the result to a .wiki file
pandoc texts.docx -f docx -t mediawiki -o texts.wiki

# Check the converted text
less texts.wiki
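The -s (standalone) option from the list above combines the same way; a minimal sketch, with texts.html as a placeholder output name:

# -s wraps the fragment in a complete document with header and footer
pandoc texts.docx -f docx -t html -s -o texts.html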



.py.rate.chnic sessions