Typewriters (2017)

From my book, Computational Photography, which featured this project.

Imagine someone typing on a keyboard; imagine the motion of their fingers and their position in space. Imagine that, to you, the keyboard is invisible. Could you still, with some analysis, decode what the person was typing? Of course you could. Look carefully and you’ll see that certain finger motions, in certain positions, are more common than others. The most common of all, for English writers, is the one that produces the letter “e”. From there you could deduce the rest. This sort of thing, called “frequency analysis,” is the foundation of traditional codebreaking.
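
For the curious, here is a minimal sketch of what that frequency analysis looks like in code. It is my own illustration, not part of the project: it counts how often each symbol appears in an enciphered sample and pairs the most common symbols with the most common letters of English, “e” first. The sample text and function name are invented for the example.

```python
from collections import Counter

# English letters, ordered roughly from most to least frequent.
ENGLISH_BY_FREQUENCY = "etaoinshrdlcumwfgypbvkjxqz"

def guess_substitution_key(ciphertext: str) -> dict:
    """Guess a substitution key by pairing the most common symbols in
    the ciphertext with the most common letters of English."""
    counts = Counter(ch for ch in ciphertext.lower() if ch.isalpha())
    ranked = [symbol for symbol, _ in counts.most_common()]
    return dict(zip(ranked, ENGLISH_BY_FREQUENCY))

# A toy example: on a long enough sample, the top symbol maps to "e".
sample = "wklv lv d orqjhu vdpsoh ri hqflskhuhg hqjolvk whaw"
print(guess_substitution_key(sample))
```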

When you type on a computer, the data in your head goes through various mechanical and electronic conversions--intermediate forms that are very difficult to decipher--until the data finally emerges in a form understandable by humans. Same data, different forms.

Consider the old-fashioned typewriter. You have the same (or almost the same) finger motions as on the computer keyboard, but here the transformations are mechanical and more visible, more comprehensible. The fingers move in just such a way, depressing a key that is attached to a rod, which in turn flips a bar forward in an arc; at the top of that bar is a raised glyph that strikes an inked ribbon, behind which is a sheet of paper.

At any step in this process, you could look at just that slice in isolation and be able to decipher what is being typed, as clearly and as accurately as reading the typed text. The motion and location of the fingers; the motion of the keys; the movement of the metal bars as they arc through the air (or, more likely, the flash of darkness in the screen of bars that indicates a bar has moved, since it moves too fast to see). Or read the text that is typed on the paper.

These are the linkages between one mind and another mind. These are the transformations of the data needed to transport that data from me to you. There is no direct connection that compares in efficiency or accuracy (not to mention in geographical and temporal reach).

A computer, a thinking computer, observing this same scene, wouldn’t need any of this. The data contained in the sight of the keys being pressed, or simply in which bars are moving and which are not--it wouldn’t need all of those transformations. Any one of them would do. And computer-to-computer communication wouldn’t need these transformations at all, because it is biology itself that is the obstacle; it is biology that we are working around.

That’s how the project started, at least: with these thoughts.

---

Note: This video is made up of short, one-minute excerpts from each of the ten full-length videos.
