Splitting text into sentences

From XPUB & Lens-Based wiki
from nltk.tokenize import sent_tokenize  # requires the "punkt" data: nltk.download('punkt')
print(sent_tokenize("I read J.D. Salinger in High School. He wrote 'Catcher in the Rye'."))
['I read J.D.', 'Salinger in High School.', "He wrote 'Catcher in the Rye'."]

So you can see it's not perfect: the period in "J.D." is mistaken for a sentence boundary, splitting the first sentence in two.
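The failure mode is easy to reproduce without NLTK: any splitter that treats a period followed by whitespace as a sentence boundary trips over abbreviations like "J.D." in exactly the same way. A minimal stdlib sketch (the `naive_sent_split` name is just for illustration):

```python
import re

def naive_sent_split(text):
    # Split after ., ! or ? when followed by whitespace --
    # the period in an abbreviation like "J.D." also matches,
    # producing the same wrong split as above.
    return re.split(r'(?<=[.!?])\s+', text.strip())

print(naive_sent_split("I read J.D. Salinger in High School. He wrote 'Catcher in the Rye'."))
```

Trained tokenizers like NLTK's punkt do better than this on most abbreviations ("Dr.", "etc."), but unusual ones can still slip through, as the output above shows.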