Unicode


== Python ==

I have always found the Unicode methods confusing. The confusion, for me, lay in the sense of encoding / decoding: initially I thought of "encoding" as meaning "making" Unicode, and "decoding" as going back out of Unicode. In fact, it is exactly the opposite. A "Unicode" string in Python is better thought of as "un-coded", or at least "coding-neutral".
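
A minimal sketch of the two directions (Python 2, like the rest of the examples on this page); the byte string below is simply "chère" as utf-8 encoded bytes:

<source lang="python">
raw = "ch\xc3\xa8re"         # raw bytes: "chere" with a grave accent, encoded as utf-8
text = raw.decode("utf-8")   # decode: bytes -> Unicode object, u'ch\xe8re'
back = text.encode("utf-8")  # encode: Unicode object -> bytes again
assert back == raw
</source>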

Encoded text is text that has already been translated into actual bytes of data with a particular encoding scheme, like "latin-1" or "utf-8". Note here that "utf-8" is a particular encoding defined as part of the Unicode standard, and thus not a "Unicode" string. In Python, encoded text is "dumb" in the sense that the raw bytes of data carry no inherent record of how they were encoded, and working with them is "dangerous" in that you need to keep track of the encoding that has been employed: mixing different schemes, or using functions that assume a format other than the one you have in mind, can produce wrong results. For this reason Python dutifully, though infuriatingly and seemingly always at inconvenient times, complains in the form of Unicode exceptions when something ambiguous has been attempted.
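
To illustrate both dangers with the same utf-8 bytes as above (a sketch, not from the original talk): guessing the wrong encoding silently produces garbage, while mixing undecoded bytes into a Unicode operation makes Python fall back on ASCII and complain:

<source lang="python">
raw = "ch\xc3\xa8re"               # utf-8 bytes; nothing in them records that fact
print repr(raw.decode("latin-1"))  # wrong guess, silent garbage: u'ch\xc3\xa8re'
try:
    u"Madame " + raw               # Python assumes ascii for the raw bytes here
except UnicodeDecodeError:
    print "mixing Unicode with undecoded bytes raises UnicodeDecodeError"
</source>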

In contrast, one decodes to turn "raw" bytes of data into a proper Unicode object in Python. The resulting Unicode object is "smart" in the sense that, in addition to the actual text data, the format is known. In this way, functions that work with Unicode objects are able to negotiate differences between formats, translating as necessary to, say, splice together parts of texts.
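
For example (a hypothetical sketch, with the two byte strings standing in for data arriving from differently encoded sources), once both pieces have been decoded they can be spliced together without further thought about their original formats:

<source lang="python">
from_database = "Ch\xe8re".decode("latin-1")       # latin-1 bytes -> Unicode
from_webform = "Fran\xc3\xa7oise".decode("utf-8")  # utf-8 bytes  -> Unicode
letter = from_database + u" " + from_webform       # u'Ch\xe8re Fran\xe7oise'
print letter.encode("utf-8")                       # readable on a utf-8 terminal
</source>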

The "Unicode Lifecycle" has been usefully summarized by Kumar McMillan in a talk at PyCon 2008 [1] as follows:

=== The golden rules ===

==== Decode Early ====

Turn raw bytes into Unicode as soon as you get them. Use the string's decode method, along with the encoding you know the bytes were encoded with, based on the source.

<source lang="python">
# decode immediately, naming the encoding the source is known to use
raw = get_from_latin1_encoded_database("name")
ustr = raw.decode("latin-1")
</source>

==== Unicode everywhere ====

As long as everything you are using is Unicode, Python should be able to handle it all without a single dreaded Unicode exception.

letter = u"Chère Madame %s ..." % ustr

==== Encode late ====

Turn Unicode back into raw bytes for output. Use the encode method of the Unicode object, and give the encoding desired/required by the output (a terminal, a database, a webpage).

<source lang="python">
# output as part of a utf-8 encoded webpage...
print "Content-type: text/html; charset=utf-8"
print
print letter.encode("utf-8")
</source>

=== Reading from a file ===

<source lang="python">
import codecs

# codecs.open decodes the file's bytes to Unicode as you read
f = codecs.open("myfile.txt", encoding='utf-8')
for line in f:
    print repr(line)
</source>

Note that in the above example, calling the repr function means that the Unicode gets displayed with escaped special characters (and thus will display without problems on any kind of terminal, since the output is plain ASCII).
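
For example, with a (hypothetical) utf-8 encoded myfile.txt containing the single line "Chère Madame ...", the loop above prints the escaped form rather than the accented characters:

<source lang="python">
# -*- coding: utf-8 -*-
import codecs

# write a small utf-8 test file, then read it back as in the example above
f = codecs.open("myfile.txt", "w", encoding='utf-8')
f.write(u"Chère Madame ...\n")
f.close()

for line in codecs.open("myfile.txt", encoding='utf-8'):
    print repr(line)   # -> u'Ch\xe8re Madame ...\n'
</source>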

To show the actual contents of the file, you would instead encode the text to match the encoding of your terminal. So, in the (likely) case that your terminal is set to utf-8:

<source lang="python">
import codecs

f = codecs.open("myfile.txt", encoding='utf-8')
for line in f:
    print line.encode("utf-8")
</source>

Or, if your terminal is set to "latin-1":

<source lang="python">
import codecs

f = codecs.open("myfile.txt", encoding='utf-8')
for line in f:
    print line.encode("latin-1")
</source>
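
Rather than hard-coding the terminal's encoding, you can also ask Python what it thinks the terminal uses (a sketch; sys.stdout.encoding is only set to something useful when output actually goes to a terminal, so a fallback is included):

<source lang="python">
import sys
import codecs

# use the terminal's own encoding when printing, falling back to utf-8
terminal_encoding = getattr(sys.stdout, "encoding", None) or "utf-8"

f = codecs.open("myfile.txt", encoding='utf-8')
for line in f:
    print line.encode(terminal_encoding)
</source>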