Splitting text into sentences

from nltk.tokenize import sent_tokenize

# sent_tokenize uses NLTK's pre-trained Punkt model (download it once with nltk.download('punkt'))
print(sent_tokenize("I read J.D. Salinger in High School. He wrote 'Catcher in the Rye'."))

which prints:

['I read J.D.', 'Salinger in High School.', "He wrote 'Catcher in the Rye'."]

As you can see, it's not perfect: the tokenizer treats the period in "J.D." as a sentence boundary and splits the first sentence in two.
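One possible workaround, sketched below, is to build a Punkt tokenizer yourself and register "J.D." as a known abbreviation so its period is no longer read as a sentence end. This uses NLTK's PunktSentenceTokenizer and PunktParameters; note that the tokenizer built this way starts from scratch rather than from the pre-trained English model, and the abbreviation entry is written lowercased without the trailing period, following Punkt's convention.

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

# Tell Punkt that "J.D." is an abbreviation, not the end of a sentence.
params = PunktParameters()
params.abbrev_types = set(['j.d'])

tokenizer = PunktSentenceTokenizer(params)
print(tokenizer.tokenize("I read J.D. Salinger in High School. He wrote 'Catcher in the Rye'."))

With the abbreviation registered, the first two fragments should come back as a single sentence. For larger texts you would likely add more abbreviations (e.g. 'dr', 'mr', 'prof') or train the tokenizer on your own corpus.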