When blur is invoked, words in the text are stochastically replaced by a homonym -- a word which sounds the same but has a different meaning. In this way, the "hard" meaning of the original text is softened (the new text becomes a pun for the original), while the original sound of the text is largely retained -- something like a low-pass filter in the "meaning" domain which preserves information in the sound domain. This is accomplished using a CGI interface to a program I wrote in C called homonym.cgi, which sits on top of a list of some two thousand homonyms I stole from Alan Cooper's Homonym list. Note: this operation can be very slow.
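For the curious, here is a minimal sketch of the idea in C. It is not homonym.cgi itself: the five-entry table stands in for the full two-thousand-entry list, and the replacement probability of one half is an assumption.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Toy table standing in for the ~2000-entry homonym list. */
    static const char *table[][2] = {
        { "sea",    "see"   },
        { "whole",  "hole"  },
        { "write",  "right" },
        { "hear",   "here"  },
        { "knight", "night" },
    };
    #define NPAIRS (sizeof table / sizeof table[0])
    #define BLUR_P 0.5  /* assumed replacement probability */

    /* Return the listed homonym for a word, or NULL if none. */
    static const char *lookup(const char *word)
    {
        for (size_t i = 0; i < NPAIRS; i++)
            if (strcmp(word, table[i][0]) == 0)
                return table[i][1];
        return NULL;
    }

    /* blur: stochastically swap each word for its homonym, if it has one. */
    static void blur(char *text)
    {
        for (char *w = strtok(text, " "); w; w = strtok(NULL, " ")) {
            const char *h = lookup(w);
            printf("%s ", (h && (double)rand() / RAND_MAX < BLUR_P) ? h : w);
        }
        putchar('\n');
    }

    int main(void)
    {
        char text[] = "the knight will write about the whole sea";
        srand((unsigned)time(NULL));
        blur(text);
        return 0;
    }

The sound of the sentence survives each run; only the spelling, and with it the meaning, drifts.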
Enhance is a much simpler operation. When enhance is invoked, words are stochastically deleted if they are considered trivial; otherwise they are replicated at the expense of successive words. This operation is a metaphor for edge detection in an image, where important features (in this case, non-trivial words) are strengthened at the expense of other details.
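A sketch of enhance under the same caveats: the triviality test (a small stopword list) and the probability are stand-ins, since the real criteria aren't given here.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define ENHANCE_P 0.5  /* assumed probability of acting on a word */

    /* Stand-in triviality test: a handful of stopwords. */
    static int trivial(const char *w)
    {
        static const char *stop[] = { "a", "an", "the", "of", "is", "in", "to" };
        for (size_t i = 0; i < sizeof stop / sizeof *stop; i++)
            if (strcmp(w, stop[i]) == 0)
                return 1;
        return 0;
    }

    /* enhance: stochastically delete trivial words; replicate non-trivial
     * words at the expense of the word that follows. */
    static void enhance(char *text)
    {
        char *words[64];
        int n = 0;
        for (char *w = strtok(text, " "); w && n < 64; w = strtok(NULL, " "))
            words[n++] = w;

        for (int i = 0; i < n; i++) {
            if ((double)rand() / RAND_MAX >= ENHANCE_P) {
                printf("%s ", words[i]);              /* left untouched */
            } else if (trivial(words[i])) {
                continue;                             /* trivial word deleted */
            } else {
                printf("%s %s ", words[i], words[i]); /* replicated... */
                i++;                                  /* ...eating the next word */
            }
        }
        putchar('\n');
    }

    int main(void)
    {
        char text[] = "the important features of the text are strengthened";
        srand((unsigned)time(NULL));
        enhance(text);
        return 0;
    }

As in edge detection, the total amount of material stays roughly constant: what the trivia lose, the "edges" gain.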
The noise operation is an attempt to introduce noise at the level of semantic meaning. To accomplish this, words are stochastically blurred, enhanced, capitalized, cast to lower case, or replaced by a random selection from a list of words and short phrases. Successively applying this operation results in the progressive dissolution of the original text into a soup of semantic noise, or Dadaist poetry.
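A sketch of noise reduced to its per-word effects, so it stands alone; the replacement pool and the uniform choice among five cases are assumptions, and blur here degenerates to leaving the word untouched rather than calling the homonym table sketched above.

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Stand-in pool of replacement words and short phrases. */
    static const char *pool[] = { "banana", "ennui", "the moon", "perhaps" };
    #define NPOOL (sizeof pool / sizeof *pool)

    static void print_case(const char *w, int upper)
    {
        for (; *w; w++)
            putchar(upper ? toupper((unsigned char)*w)
                          : tolower((unsigned char)*w));
        putchar(' ');
    }

    /* noise: apply one stochastic transform to each word in turn. */
    static void noise(char *text)
    {
        for (char *w = strtok(text, " "); w; w = strtok(NULL, " ")) {
            switch (rand() % 5) {
            case 0:  printf("%s %s ", w, w);              break; /* enhance   */
            case 1:  print_case(w, 1);                    break; /* uppercase */
            case 2:  print_case(w, 0);                    break; /* lowercase */
            case 3:  printf("%s ", pool[rand() % NPOOL]); break; /* replace   */
            default: printf("%s ", w);                    break; /* blur stub */
            }
        }
        putchar('\n');
    }

    int main(void)
    {
        char text[] = "successive application dissolves the text into noise";
        srand((unsigned)time(NULL));
        noise(text);
        return 0;
    }

Feed the output back in as input a few times and the soup-of-noise effect appears quickly, since each pass compounds the damage done by the last.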