Splitting text into sentences

from nltk.tokenize import sent_tokenize
# Requires the Punkt sentence models: nltk.download('punkt')
print(sent_tokenize("I read J.D. Salinger in High School. He wrote 'Catcher in the Rye'."))
['I read J.D.', 'Salinger in High School.', "He wrote 'Catcher in the Rye'."]

As you can see, it's not perfect: the tokenizer treats the period in the abbreviation "J.D." as the end of a sentence.
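
One possible workaround (a sketch, not part of the original page) is to tell NLTK's Punkt tokenizer about known abbreviations, so that "J.D." no longer counts as a sentence boundary. PunktSentenceTokenizer and PunktParameters are part of NLTK; the abbreviation list here is just an illustrative example.

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

# Abbreviations are given lowercase, without the trailing period
params = PunktParameters()
params.abbrev_types = set(['j.d'])
tokenizer = PunktSentenceTokenizer(params)
print(tokenizer.tokenize("I read J.D. Salinger in High School. He wrote 'Catcher in the Rye'."))

With the abbreviation registered, "I read J.D. Salinger in High School." should come back as a single sentence.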