User:Bohye Woo/Prototyping
Motivational messages - work groups
Bo, Bi, Pedro, Rita group (BBPR)
Pad: https://pad.xpub.nl/p/LINKEDIN
outcome
http://145.24.139.232/~pedrosaclout/linkedinproject/
In my role as Head of recruiting for technology product development in India, I have had the exciting opportunity to was director of Open State Foundation, a non-profit organization. I am Isla Garcia and I . I take responsibility and pride myself in being strategic yet adaptable. I have an entrepreneurial spirit in that I enjoy taking on new challenges, creating new opportunities and designing new programs. My passions lie in reinforcement learning. When I'm not focused on my professional endeavors, you can find me go 14,000 feet above sea level hiking a mountain. My goal is to be a good social responsibility person in society.
In my role as Co-Founder, I have had the exciting opportunity to was director of Open State Foundation, a non-profit organization. I am Isla Garcia and I . I take responsibility and pride myself in being strategic yet adaptable. I have an entrepreneurial spirit in that I enjoy taking on new challenges, creating new opportunities and designing new programs. My passions lie in GANs. When I'm not focused on my professional endeavors, you can find me go 14,000 feet above sea level hiking a mountain. My goal is to become a good software engineer in software field.
script
/home/pedrosaclout/public_html/linkedinproject/generator.sh
#!/bin/sh
dir=/home/pedrosaclout/public_html/linkedinproject

# Pick one random line from each word list.
profession=`cat $dir/professions.txt | sort -R | head -n 1`
subject=`cat $dir/subject.txt | sort -R | head -n 1`
goal=`cat $dir/goal.txt | sort -R | head -n 1`
education=`cat $dir/education.txt | sort -R | head -n 1`
quotes=`cat $dir/quotes.txt | sort -R | head -n 1`
adjectives=`cat $dir/adjectives.txt | sort -R | head -n 1`
name=`cat $dir/names.txt | sort -R | head -n 1`
hobby=`cat $dir/hobby.txt | sort -R | head -n 1`
experience=`cat $dir/experience.txt | sort -R | head -n 1`

# Pick a random template, fill in its placeholders, and write the result to index.html.
template=`cat $dir/template.txt | sort -R | head -n 1`
echo $template | sed "s/PROFESSION/$profession/g" | sed "s/EXPERIENCE/$experience/g" | sed "s/SUBJECT/$subject/g" | sed "s/GOAL/$goal/g" | sed "s/EDUCATION/$education/g" | sed "s/QUOTE/$quotes/g" | sed "s/ADJECTIVES/$adjectives/g" | sed "s/NAME/$name/g" | sed "s/HOBBY/$hobby/g" > $dir/index.html
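The page linked under outcome is the index.html this script writes; running the script again (for example with sh generator.sh) produces a new random bio. For reference, a line of template.txt could look roughly like the following (a reconstruction from the placeholders in the script and the sample bios above, not the actual project file):

In my role as PROFESSION, I have had the exciting opportunity to EXPERIENCE. I am NAME and I EDUCATION. I take responsibility and pride myself in being ADJECTIVES. QUOTE My passions lie in SUBJECT. When I'm not focused on my professional endeavors, you can find me HOBBY. My goal is to GOAL.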
from nltk.corpus import wordnet

# Separate experiment: collect all WordNet synonyms (lemma names) for a word, here 'Computer'.
synonyms = []
for syn in wordnet.synsets('Computer'):
    for lemma in syn.lemmas():
        synonyms.append(lemma.name())
print(synonyms)
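A possible follow-up, sketched below, would be to feed these WordNet synonyms into one of the generator's word lists so that sort -R | head -n 1 can pick them up; writing to subject.txt is an assumption, not something documented on the pad.

# Hypothetical: append the synonyms to a word list used by generator.sh, one per line.
with open('subject.txt', 'a') as f:
    for s in sorted(set(synonyms)):
        f.write(s.replace('_', ' ') + '\n')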
Py.rate.chinic workshop #1
Using Selenium to scrape the YouTube comments, and using a text processor to rank the most frequent words.
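The Selenium part isn't shown on this page; below is a minimal sketch of how the scraping step could work. The video URL is a placeholder, and the #content-text selector for comment bodies is an assumption that may break when YouTube changes its markup; the comments are written to 4.txt, the file the word-frequency script below reads.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

# Open the video page; VIDEO_ID is a placeholder.
driver = webdriver.Chrome()
driver.get('https://www.youtube.com/watch?v=VIDEO_ID')
time.sleep(5)

# Scroll down a few times so YouTube loads batches of comments.
for _ in range(10):
    driver.execute_script('window.scrollBy(0, 2000);')
    time.sleep(2)

# '#content-text' holds the comment text in YouTube's markup (assumption).
comments = driver.find_elements(By.CSS_SELECTOR, '#content-text')
with open('4.txt', 'w') as f:
    for comment in comments:
        f.write(comment.text + '\n')

driver.quit()

With 4.txt in place, the word counting below can run.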
import re

# Rank the most frequent words (4 to 15 letters) in the scraped comments.
frequency = {}
document_text = open('4.txt', 'r')
text_string = document_text.read().lower()
match_pattern = re.findall(r'\b[a-z]{4,15}\b', text_string)

for word in match_pattern:
    count = frequency.get(word, 0)
    frequency[word] = count + 1

# Print the words from least to most frequent.
for word in sorted(frequency, key=frequency.get):
    print(word, frequency[word])