Medium as Interface

Davis Foulger
Oswego State University of New York

Competitively selected for presentation at the Fall, 2005 meeting of the National Communication Association.

Abstract

User interface components are, in general, the most visible elements of human communications media. This paper documents twelve commonly used user interface components, including microphones, speakers (personal and remote), cameras, volume controls, electronic interface managers, interruptive summons signals, consumer presence indicators, on/off switches, channel selectors, keypads, displays, and variants of stylus. The purpose of these descriptions is, for the present, limited to establishing that these components do occur commonly across a broad range of media, to establishing the approximate frequency of their occurrence, and to documenting the purposes these user interface components serve, including message capture, message reproduction, and control of the medium. While these are hardly the only user interface components used in today's communications systems, they are, with the possible exception of paper, the most commonly used user interface components.

The Medium's Interface to the Organism

Among the primitive terms that we use to describe the structure and process of human communication, "medium" is among the least well understood (Foulger, 2005b). When we are creating and consuming messages, our focus is almost always the messages that we create and consume. We don't usually think much about the details of our message constructions, the languages that we use to build those messages, or the media which enable those languages or in which we create and consume messages. At some risk of oversimplification, we "simply" imagine what we want to say and "say" it; process what others have said and "understand" it. Once we get used to a language or a medium of communication, it becomes, for all practical purposes, invisible. If the medium is the message, it isn't a message that we typically spend much time thinking about.

This is most obviously true in face-to-face communication, where the light and air that enable visual and aural communication are quite literally invisible. We don't see air molecules jostling against each other as a message moves from our mouth to someone else's ear. We don't see, or at least don't think about, the photons that, by reflecting off us, make our expressions and gestures visible to others. Indeed, we most often don't think of face-to-face communication as a medium at all. While face-to-face interaction is probably the paradigm case of an invisible medium, it has frequently been treated as a system that enables communication (McLuhan, 1964; Bretz, 1971; Ciampa, 1989; Foulger, 1990, 1992, 2002, 2005; Hoffman and Novak, 1996).

The workings of other media are just as invisible, however. When we read a book we can see the print and feel the paper and have probably encountered a bookshelf, library, or bookstore, but the author is invisible beyond a name and the words they put to paper, the editor is rarely visible beyond an acknowledgment in the author's preface, and there is rarely any readily observable evidence of the end to end process of property development, publication, or distribution, of the hundreds of people who participated in those activities, or of the technical infrastructure that aided them in the process of bringing the book to market. For all intents and purposes, the medium which enables the creation and consumption of books is a black box which most readers do not and need not understand. The message they find in the book is all that really matters.

The same is true for almost all media. Indeed, most of the workings of such broadcast and electronic media as television, cellular phones, e-mail, web sites, electronic library catalogs, instant messaging, Internet streaming, webcams, and broadcast radio are even less visible. We don't need to understand the workings of transmitters, receivers, switches, servers, bridges or databases to use the media that critically depend on them, albeit in varied combinations. We simply need to understand the workings of the medium's user interface components, in particular the small number of relatively ubiquitous user interface elements explored in this paper: microphones, cameras, keyboards, speakers, displays, volume controls, channel selectors, and pointing devices like the stylus, trackball, and mouse.

These user interface elements are notable on several grounds. First, while they operate at the margins of the medium (what Simon, 1968, refers to as the interface between the inner artifact and the external environment) and can be regarded as the veritable tip of a medium's iceberg, they far more strongly shape the language possibilities associated with the medium than any of its other components. Second, they are relatively ubiquitous. As will be seen below, most of the user interface elements documented here are associated with over 20% of all media. Third, they are highly visible. Most media require us to directly interact with one or more of these components as we create and/or consume messages.

Indeed, these elements are sometimes so visible and recognizable that they effectively define a medium of communication for people. We often treat these user interface elements as iconic representations of the media with which they are associated. We readily identify a rectangle with a large circle (the speaker) and two small circles (the tuner and the volume control) as a radio. We readily identify a rectangle enclosing a large rounded rectangle and two small circles as a television. An icon that suggests a device with a microphone, speaker, and dial or keypad is generally recognized as a telephone (and there are several widely used variants). Adding an antenna to some of these variations produces an icon that is generally recognized as a cellular phone. Examples of such iconic representations can be seen in Figure One. Other icons that incorporate user interface components are uniquely identified with books, records, walkie talkies, newspapers, and letters.

Figure One: Simple Iconic representations of a television, two radios, two telephones, and a cellular phone, all featuring key user interface components of the medium.


Other combinations of user interface elements are much less distinctive, especially at a time where a wide range of networked computer media encourage speculations about media convergence. The physical user interface for e-mail, instant messaging, web browsing, computer conferencing, and other text and/or image-based Internet media is the same: a keyboard, mouse, and computer display. Hence an iconic representation of those components doesn't suggest any specific medium of communication, even as (and perhaps because) communication has become the primary thing most people use computers for. Extension of a computer for interactive audio (Internet telephone, audio messaging system, and audio streaming, among others) requires only the addition of speakers and microphones. Video conferencing systems, webcams, and streaming video systems require only the addition of a camera.

Interface Element as Attribute

Even when, as is the case for networked computer media, these user interface elements cannot usefully represent media iconically, it remains that they are the most visible components of most technologically mediated communication systems. They may be, in the language of system theory, only some of the parts that form the whole of a medium of communication, but their presence at the interface between the "inner artifact" (Simon, 1968) of the medium and the "outer environment" in which message creators and message consumers make use of the medium makes them a veritable signature by which we are generally able to differentiate one medium from another at a glance. Even networked computer media can usually be immediately distinguished from one another based on the specifics of what is displayed on the screen as people use the medium.

User interface components are, then, returning once again to the vocabulary of Simon (1968), both "constituent" and "attribute": parts that contribute to the capabilities of the medium and characteristics that describe the qualities that make a medium useful for a particular purpose. This duality is an important one for Simon, who clearly distinguishes the constituents of a system or artifact from its attributes. Constituents are the construction materials with which a system is built. Attributes are the possibilities that those constituents, in combination, enable. For Simon, and later for Foulger (1990), an artifact is more strongly defined by its attributes than its constituents. One may be able to replace at least some of the constituents of a system without changing what the artifact can be used for. A change in a system's attributes, by contrast, is likely to make it unusable for some purposes (and perhaps usable for others).

Foulger (1990, 2002) extends Simon's ideas and applies them directly to communication systems, adopting a somewhat variant vocabulary and clarifying the operation of a medium's outer environment. Simon's constituents become mediators. His attributes become characteristics. Characteristics enable a medium's uses in the outer environment. Uses trigger effects in the wider world. And effects encourage people to adopt practices which both control and optimize the medium's uses. Foulger (2002) proposes that there is a wide range of media characteristics that can be used, when they are properly understood, to make predictions about a medium's prospects for success and the things that it is likely to be useful for. Indeed, it may be possible, following Foulger (1992), to make projective statements about what kinds of new media are likely to emerge and succeed in the future. These possibilities transform the central proposition of this paper:

Proposition: Media can be usefully characterized by their user interface elements.


into a fairly strong statement. It is implicit to the proposition that an understanding of a medium's user interface elements should enable predictions about the things a new medium may be useful for, its prospects for success, and perhaps more. This study will explore a relatively small number of user interface elements which, because they recur across a range of different media, should enable us to usefully compare those media. There is clearly a much wider range of user interface elements that might have been described here, and it may be that some of them recur across enough media to make productive comparisons possible. The current study focuses on just twelve, including variants of stylus, keypads, microphones, cameras, speakers, displays, volume controls, on/off switches, channel selectors, interruptive summons signals, presence indicators, and electronic interface managers.

These user interface elements will only be treated descriptively in this paper. The intent is to characterize these user interface attributes, provide frequency statistics that establish their relative importance as attributes of media, and, in so doing, set up future studies that make comparisons of media using these user interface elements. The twelve user interface characteristics of media described in this paper have been coded for 167 media. These are the same media documented in Foulger (2005). In all cases the variables take the form of a presence/absence variable in which the presence of the user interface element within the medium was coded as one and the absence of the component was coded as zero.
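
For readers who prefer a concrete illustration of the coding scheme, the sketch below (written in Python, with hypothetical media and values that are not drawn from the actual data set) shows how such a presence/absence matrix might be represented and how the frequency statistics reported in Tables 5.1 and 5.2 could be computed from it.

    # A minimal sketch of the presence/absence coding, assuming hypothetical media
    # and values. Each medium is coded 1 if it includes an element, 0 if it does not.
    coding = {
        "telephone":  {"microphone": 1, "personal speaker": 1, "camera": 0},
        "television": {"microphone": 0, "personal speaker": 1, "camera": 0},
        "webcam":     {"microphone": 1, "personal speaker": 0, "camera": 1},
    }

    TOTAL_MEDIA = 167  # the number of media coded in the full study

    def frequency(element, codes, total=TOTAL_MEDIA):
        """Count media coded 1 for an element and express the count as a percentage."""
        n = sum(medium[element] for medium in codes.values())
        return n, round(100.0 * n / total, 1)

    # With the full data set, frequency("microphone", coding) would return (49, 29.3),
    # matching the first row of Table 5.1.
    print(frequency("camera", coding))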

The remainder of the paper will describe these user interface characteristics. We will initially describe specialized user interface elements that are typically associated with creators of messages or consumers of messages, but not both. We will proceed to describe generalized user interface elements that are generally used by both consumers and creators of messages within the medium. We will close the paper by considering various ways in which these user interface elements occur in combination.

User Interface Elements Used by Creators OR Consumers

User interface elements are often fairly specialized. Some are particularly useful for creating or capturing messages, but are not terribly useful for the purposes of message consumption. Others are distinctly focused on enabling consumers to see, hear, or otherwise receive messages, but offer very little to message creators. The differences between these specialized devices can be very small. In the first practical system to make use of microphones and speakers, Bell's telephone (Schoenherr, 2001; Microphone, 2004), a single integrated mechanism acted as both microphone and speaker. As a microphone it translated sound waves into electrical waves that could be carried to the other end of the line. As a speaker it translated electrical waves into sounds. Within a year of Bell's invention, however, microphones and speakers rapidly differentiated into specialized user interface components. Specialized patents were filed for both the microphone (by Emile Berliner) and the speaker (by Ernst W. Siemens) within a year of Bell's patent for the telephone. While it remains possible for a single component to act as both microphone and speaker (piezoelectric components are often used this way), high performance microphones and speakers remain highly specialized user interface components that are recognizably different from one another. Indeed, we commonly use recognizable icons for speakers and microphones that reflect the differences in their form and function.

Speakers and microphones are not the most commonly occurring user interface elements in media, but they are all but ubiquitous in media that involve storing or transmitting audio beyond the immediate limitations of the here and now. There are, of course, other ways to capture and create sound. The most obvious of these are the human voice and ear. These human modalities are not, however, a part of the medium. They are, rather (again following Simon, 1968), the organism's interface with the medium. This is, perhaps, a fine line, but it is consistent with long accepted models of the communication process, including Shannon's (1949) Information Systems model, which distinguishes the source from the transmitter and the receiver from the destination. We can and should view these elements somewhat flexibly, but it is clear that the source and destination represent creators and consumers of messages, including their mouths and ears, and that the transmitter and the receiver include the interfaces through which messages enter and are retrieved from the medium.

Alternatives to speakers and microphones that operate as interfaces to various media include musical instruments and, in the age of computers, artificial sound generators, but it remains that microphones allow extensions of sound across space and time that enable a range of media that are simply not possible without them. The same can be said for speakers, which enable the reproduction of sound, and other recurring user interface elements, including cameras and displays, variants of stylus, keypads, volume controls, on/off switches, interruptive summons signals, consumer presence indicators, and electronic interface managers. One can create moving pictures by flipping the pages of a book rapidly or creating digital animations in a computer, but cameras remain the most general solution to the problem of capturing visual content, and displays remain the most general solution to reproducing those images at other places and times.

Table 5.1 lists a small set of interface characteristics which are typically associated with either creators of messages or consumers of messages, but which are not generally associated with both. Common user interface elements that are routinely associated with creators of messages include microphones and cameras, both of which have already been described to at least some extent, and consumer presence indicators, which will be described below. Microphones, associated with 49 of the 167 media discussed here, capture audio content such that it can be transmitted through, stored in, or otherwise processed within the context of, a particular medium. Cameras, which are associated with 32 of the media considered here, do the same for visual content.

The history of cameras is richer and more complicated than the history of microphones. Since the invention of still cameras in the 19th century, the capture of still images has been extended to enable the capture of (what we perceive to be) moving images; cameras that capture images to film have been joined by cameras that capture those images as electrical signals that can be stored, transmitted, and manipulated in the same ways that microphone-captured audio signals can; electrical image capture has been transformed into digital image capture. One might productively distinguish these developments. Still cameras support different semiotic opportunities than moving picture cameras do. But the differences between all of these variants of camera continue to narrow. Digital still cameras increasingly support shooting short videos. Video cameras increasingly support the capture of still images. In the end, the ability of a camera to capture a message into the medium is what matters. Everything else is processing within the medium.

This agnosticism about what a medium does with captured content is shared by both cameras and microphones. Both simply capture content in one form (sound and sight) and transform it into another (which, at least today, is generally electrical). Their only function within the medium is to convert the observed outer environment of the medium into a form which the medium can process. Microphones and cameras are agnostic about what a medium does with that content, which may be stored for long periods of time, transmitted over long distances, or transformed into other forms. The medium may convert sound into a light show, sight into sound, and its own representations of each into graphics and/or files. The microphone and the camera simply make it possible to do these things. They are simply parts of the whole in every medium with which they are associated, parts that capture the content that message creators provide. Neither cameras nor microphones will generally be associated with consumers of messages unless they take on the role of creator.


Table 5.1: Generic User Interface Elements that are associated with creators or consumers of messages.

    Interface Element or Characteristic     N    Percentage    Commonly Used By
    microphone or analog                   49          29.3    Creators
    personal speaker or analog             34          20.4    Consumers
    remote speaker or analog               38          22.8    Consumers
    camera or analog                       32          19.2    Creators
    volume control                         54          32.3    Consumers
    electronic interface manager           62          37.1    Consumers
    consumer presence indicator            67          40.1    Creators
    interruptive summons signal            20          12.0    Consumers


Speakers, volume controls, interruptive summons signals, and electronic interface managers, by contrast, tend to be consumer-focused user interface elements. Volume controls, in general, control how loud the audio playback is. Interruptive summons signals notify a message consumer of a waiting message or opportunity to interact with others. Electronic interface managers give consumers additional control of content within a medium.

Speakers reproduce transmitted or stored audio for consumption at remote locations in time and space. The speakers may be low volume such that content can only be heard by one or a few people, as is the case for 34 of the media considered here, or high volume (38 media) such that many people can hear at once. The speakers in a television set are generally operated at a much lower volume than the speakers used in a public address system or at a rock concert.

Volume controls, more so than other generally consumer-oriented user interface elements, may be scattered across several locations within a medium. Mass media, in particular, may have volume controls that are managed by production staff (sound engineers, broadcast engineers, and others), and there are some media (public address systems) in which consumers may have little or no control of volume. Radio and television broadcasting, by contrast, include volume controls at both the transmitter (controlled by the radio or television station) and the receiver (controlled by the listener). It remains, however, that the ultimate arbiter of volume in almost all media is the consumer. Among the 54 media described here that involve some sort of volume control, almost all give message consumers final control over how loud the audio content of their medium is.

It may be reasonable to treat volume controls somewhat metaphorically, as volume can have multiple meanings, including loudness and amount. Practically speaking, volume is an informal measure of how far a message can travel and, by extension, how many people it can reach. Hence, insofar as a printing press can be adjusted to print more or fewer copies of a publication, that adjustment might be regarded as a volume control. Where such controls are formalized, as they are on a copy machine, one can reasonably speak of a publication medium having a volume control as a part of its user interface. Where such controls are informal, as in a publisher's decision to print more copies of a newspaper because of a compelling story that will sell more copies than normal, referring to the publisher's decision as a volume control may be somewhat tenuous. This paper has not extended volume control to cover these extended cases. Other researchers may wish to do so.

Interruptive summons signals are much more strongly oriented to consumers of messages. These signals, which are associated with 20 of the media discussed here, can take a variety of forms, including sound (a ringing telephone), sight (a screen pop up or blinking light), or touch (a vibrating cellular telephone). All act as a notification that there is a message or interactive opportunity within a medium. Classic examples of interruptive summons signals include a buzzer indicating the beginning or end of a class, the opening door sound that indicates the virtual arrival of a buddy in instant messenger "space", a car horn, a ringing telephone, and AOL's classic e-mail arrival greeting "You've got mail". The first interruptive summons signals were supplied by hammers banged against the microphones of early telephones (Hopper, 1992), but telephone ringers were an early innovation, with the first patent for a telephone ringer filed just two years after Bell's initial patent filing on the telephone itself (Privateline.com, 2005).

Electronic interface managers allow consumers to control a variety of features within a medium. Depending on the medium this may include such features as select, playback, delete, fast forward, index, pause, and other content control features. These controls may enable the addressing of messages, as with telephone dialers and keyboard entry of an e-mail address, or the initiation of activities, as with the on switch on a television or radio, the start button on copiers, the record button on tape recorders, and the send button on fax machines. Electronic interface managers of one sort or another are associated with 62 of the media considered here.
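
As a rough illustration, an electronic interface manager can be thought of as a dispatcher of content-control commands. The sketch below is a hypothetical, simplified model (the class and method names are illustrative assumptions, not taken from any actual device) of a consumer selecting and playing back stored messages.

    # Hypothetical sketch: an electronic interface manager as a small set of
    # content-control commands (select, play, delete) over stored messages.
    class InterfaceManager:
        def __init__(self, messages):
            self.messages = list(messages)  # content stored within the medium
            self.current = 0                # index of the currently selected message

        def select(self, index):
            self.current = index

        def play(self):
            return self.messages[self.current]

        def delete(self):
            self.messages.pop(self.current)
            self.current = 0

    # A consumer selects and plays back the second of three stored messages.
    manager = InterfaceManager(["message 1", "message 2", "message 3"])
    manager.select(1)
    print(manager.play())  # -> "message 2"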

While speakers, volume controls, interruptive summons signals, and electronic interface managers are frequently associated with the production and distribution of content within a medium, it is unusual for creators of messages to use them unless they are acting as consumers (of, at minimum, their own message). Even cameras and microphones are often taken out of the hands of content creators in heavily produced mass media like movies, television, and broadcast radio. Where high quality sound and video is expected, the task of operating recording technology and editing the result is generally left to specialists.

The most recent innovation in user interface components, consumer presence detection, provides message creators with a capability that is commonplace in face-to-face interactive media, but which has often been absent in media which enable transmission across time and space. The value of consumer presence detection is in the power it gives a creator of content to observe, acknowledge, initiate contact with, adapt messages to, and even avoid message consumers. The most obvious implementation of consumer presence detection for most people is in instant messaging's "buddy lists". Regular users of instant messaging systems routinely initiate interaction with friends when they appear online.

Consumer presence indication can be accomplished via an actively monitored open microphone or camera. Actively monitored security systems often work this way. So do webcams. The more interesting and widely used variants of consumer presence detection are made possible by the message processing capabilities of computer-mediated systems. The simplest of these variants treats the act of logging in or out of the system as an event trigger for notifying others in a virtual communication environment of the user's arrival or departure.
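
A minimal sketch of that login/logout trigger, assuming a hypothetical presence server and buddy relationships (none of the names below describe any actual instant messaging system), might look like this:

    # Hypothetical sketch: logging in or out fires a notification to every user
    # who has listed the arriving (or departing) user as a "buddy".
    class PresenceServer:
        def __init__(self):
            self.watchers = {}   # user -> set of users watching that user
            self.online = set()

        def watch(self, watcher, target):
            self.watchers.setdefault(target, set()).add(watcher)

        def _notify(self, user, event):
            for watcher in self.watchers.get(user, set()):
                print(f"notify {watcher}: {user} has {event}")

        def login(self, user):
            self.online.add(user)
            self._notify(user, "arrived")

        def logout(self, user):
            self.online.discard(user)
            self._notify(user, "departed")

    server = PresenceServer()
    server.watch("alice", "bob")  # alice lists bob as a buddy
    server.login("bob")           # -> notify alice: bob has arrived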

This is an important and useful feature of multiplayer interactive computer games, closed computer conferencing systems, chat rooms, and other environments where awareness of specific others may trigger changes in communication behavior. It can also be an insidious feature, as when presence detection is used by commercial web sites and other interactive computer software to track the search and buying preferences of users. Cookies are just one of a growing range of presence detection mechanisms that are routinely used to detect the arrival of consumers, track their message consumption, and adapt messages to their apparent preferences.

It can be assumed that the search engines a student uses to research a paper not only find relevant references, but also display what may be relevant advertising. Indeed, the references displayed may be ordered, at least in part, to reflect the preferences of commercial advertisers. It should be noted in this instance that consumer presence detection is allowing the medium to make decisions about what messages will be presented in response to consumer requests. This is clearly a far less benign presence indication than the arrival of a boyfriend on a buddy list or the arrival of a new player in a multiplayer interactive computer game.

User Interface Elements Used by Both Creators and Consumers

Other generic user interface elements are routinely used by both consumers and creators of messages. These features, as shown in Table 5.2, include on/off switches, channel selectors, displays, and variations of stylus and keypad. While these features are used by both consumers and creators of content in at least some media, the distribution of function is anything but symmetrical. Variations of stylus and keypad are much more likely to be used by creators of content (96 for stylus variants; 61 for keypad variants) than by consumers (27 for stylus variants; 42 for keypad variants). Displays and channel selectors, by contrast, are much more likely to be used by consumers (59 media involve displays; 14 involve channel selectors) than creators of content (41 involve displays and only 5 involve channel selectors). Only on/off switches are a more generally symmetrical feature, associated with creators in 61 media and consumers in 51 media.



Table 5.2: Generic User Interface Elements which are used by both creators and consumers of messages

                            Creator Stats          Consumer Stats
    Interface Element        N    Percentage        N    Percentage
    on/off switch           61          36.5       51          30.5
    keypad or analog        61          36.5       42          25.1
    channel selector         5           3.0       14           8.4
    display system          41          24.6       59          35.3
    stylus or analog        96          57.5       27          16.2

Variants of stylus (stylus or analog in Table 5.2) are not only the most common user interface feature in the media considered here, they are by far the oldest, with roots that can be traced back to writing on clay tablets in Mesopotamia, and certainly much further than that. A finger makes an excellent stylus when drawing pictures in sand, dirt, or (with finger paints) on paper. A stick or bone shard can provide finer lines. A brush made by tying hair to the end of a stick can provide a range of expressive lines. Some variation of stylus must have been used in creating the prehistoric cave paintings and Venus figurines.

Today's ball point pens, mice, mechanical pencils, trackballs, felt tip pens, and many other stylus analogs are a considerable technological advance on the shaped stylus that was used by Mesopotamian scribes to write on clay, but the most available stylus, the finger, is still widely used. Indeed, thanks to such computer interface devices as drawing pads and IBM's TrackPoint, its use as a stylus is growing. Our primary use of stylus analogs is to create messages using language and/or images on a substrate (clay, papyrus, parchment, paper, canvas, drawing tablet, handheld computer, laptop, desktop computer, etc). It is only with the arrival of the computer that we have started to use stylus analogs, within the scope of electronic interface managers, to control the workings of the medium.

This same combination of composition and control is associated with another commonly used user interface element, the keyboard/keypad. While it is tempting to trace the history of keypads back to early musical instruments, including the harpsichord and (even older) various forms of trumpet and woodwind instrument, the first true keypad (capable of encoding signals that are subject to manipulation within the medium) is the telegraph switch. While keypads no longer have much in common with the design of the telegraph switch, whose invention marks one of the first practical uses of electricity, the design of the telegraph switch remains an elegant one.

The telegraph switch uses electricity to transmit information through a telegraph wire to, at least in the original design which was retained for over sixty years, another telegraph switch. When used to transmit, the switch controls the flow of electricity down the telegraph wire. Current flows when the switch is pressed. The current is interrupted when it is not. When used to receive, the presence of a current drives an electromagnet that pulls the telegraph switch down. This simple arrangement allows telegraph operators to both compose a message as Morse code and to render that Morse code as a message. This complementary arrangement was critical to the telegraph's early success. Speakers wouldn't be developed for another sixty years. Display of anything more complicated than a spark or a motion would take much longer.
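
The complementary compose/render arrangement can be illustrated with a short sketch. The code below is an illustration only, not a model of any historical apparatus: the same Morse table serves both to key letters into dot and dash closures of the switch and to read those closures back as letters at the receiving end.

    # Illustrative sketch of the telegraph's complementary arrangement: one code
    # table supports both composing (keying) and rendering (reading) a message.
    MORSE = {"A": ".-", "E": ".", "O": "---", "S": "...", "T": "-"}
    REVERSE = {code: letter for letter, code in MORSE.items()}

    def key(text):
        """Compose: translate letters into dot/dash closures of the switch."""
        return " ".join(MORSE[ch] for ch in text.upper())

    def read(signal):
        """Render: translate received dot/dash patterns back into letters."""
        return "".join(REVERSE[code] for code in signal.split())

    signal = key("SOS")  # "... --- ..."
    print(read(signal))  # -> "SOS"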

Speakers, displays, and other message display components allow today's keypads to operate as dedicated input devices. They generally range in complexity from the twelve to fifteen button arrangement of a touch tone telephone to the sixty to one hundred button layouts of computer keyboards. Keyboards and keypads provide a reliable and relatively high speed data entry capability in a variety of communication media and devices. Indeed, their use in at least some media provides an object lesson in the inherent conservatism of users.

The QWERTY layout substantially resolved the tendency of early typewriter keys to collide and get stuck. The solution made high speed touch typing possible, but at a price in unrealized productivity that wasn't properly understood for decades. By the time tests demonstrated that the DVORAK keyboard layout dramatically increased typing speeds, the damage was already irreversible. The investment associated with learning the QWERTY layout and buying QWERTY equipment was simply too great. Decades after its relative inefficiency was clearly demonstrated, QWERTY persists as the only widely used keyboard layout.

While video displays entered widespread use with the arrival of television, variants of video display were fairly quickly adapted for use in other user interface applications. Character LEDs in calculators were followed by character LCDs in a variety of consumer devices, including pocket computers and handheld video games. Dot matrix LCDs enabled handheld computers and graphical calculators. Today the classic CRT television is low resolution compared with most computer displays, including both CRTs and the range of flat panel displays that are available. High resolution color flat panel displays are now the primary user interface element of palm computers, tablet computers, and a variety of other systems, and work on flexible displays that can be worn as a part of clothing continues to advance steadily, making it likely that our interpersonal communication may soon be enhanced with messages that appear on our t-shirts or jackets on demand.

Displays are no longer constrained to the kind of passive consumption we associate with television viewing. Displays are integral to the process of creation in computer-based media. We compose and edit our e-mails, instant messages, web pages, and other messages on a display, only sending them when we are satisfied that we are finished. This dual role of displays in both the creation and consumption of messages enables a "near real time" quality that wasn't possible in prior electric, paper, and face-to-face media. It also enables some symmetries in the consumption and creation processes that, while hardly new (they are integral to the workings of a telephone, walkie talkie, or C.B. radio), enable new kinds of communication. The notion of media convergence is largely rooted in these symmetries, which make devices with displays, keypads, microphones, and speakers useful in a broad variety of communication systems.


User Interface Elements in Combination

More generally, the combination of a display, an on/off switch, a volume control (sometimes integrated with the on/off switch), and a channel selector is becoming commonplace in a range of media user interfaces, including televisions, video games, C.B. radios, walkie talkies, computer-based document composition systems, and Internet-based computer-mediated communication systems. Even audio media like radios and CD and mp3-based audio recordings frequently support text display of artist, song, album, and play list data.

Not much need be said about on/off switches, even in an age where the off switch for a medium may be closing a window on a computer screen. Many media allow both creators and consumers of content to turn the medium on and off at will. Channel selectors provide a more sophisticated kind of on and off switch that allows us to turn content on and off even though we continue to use the medium. The mere existence of a channel selector in a medium's user interface implies that consumers have the ability to select content from a selection of parallel competing (or at least discrete) "channels".

A channel, as the term is used here, is a frequency, stream, or connection that consumers of content can access, connect to, or establish. A television channel is a collection of content. Tuning to a particular television channel will allow a television viewer to consume content from that collection. Television sets are usually set up with a set of predefined channels which correspond to specific broadcast frequencies. The same is true of radio, and while radios are often built with a continuous frequency tuner, radio stations locate themselves at specific predefined locations in the broadcast spectrum. Manipulating a channel selector allows a user to "change channels" from one station or collection of content to another.

While the term channel, in the sense used here, originates in radio, its use is not restricted to radio waves. Channels of Internet streaming radio and multimedia may be referred to as streams, but their effect is the same as a radio frequency. Internet users can pick the stream they want to hear and change channels whenever they want. The channel changer, in such systems, is a list of streams.
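
The "list of streams" form of channel changer is simple enough to sketch. The example below is hypothetical (the channel names and URLs are illustrative assumptions): a channel selector is simply a way of choosing which one of several named streams the medium will currently render.

    # Hypothetical sketch of a channel selector over a list of named streams.
    class ChannelSelector:
        def __init__(self, channels):
            self.channels = channels              # channel name -> stream location
            self.current = next(iter(channels))   # start on the first listed channel

        def change_channel(self, name):
            if name not in self.channels:
                raise ValueError(f"no such channel: {name}")
            self.current = name

        def now_playing(self):
            return self.channels[self.current]

    selector = ChannelSelector({
        "news":  "http://example.com/streams/news",   # illustrative URLs
        "music": "http://example.com/streams/music",
    })
    selector.change_channel("music")
    print(selector.now_playing())  # -> "http://example.com/streams/music"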

When the telephone company establishes a dedicated end to end link from one telephone to another, it creates a channel. The telephone does not, in general, provide a channel changer. We enter an address (the phone number). The telephone network then creates a custom end to end connection. Channel selection functionality has been growing in telephones, however. A saved list of phone numbers in the telephone allows us to connect with people as if they were streaming servers, and call waiting allows us to use the flash button as a channel selector that switches us back and forth between two calls.

When an Instant Messaging server establishes a dedicated end to end link from one computer to another, it establishes a channel. Many users of instant messaging routinely establish multiple sessions in which they talk with different people "at the same time". Channel changing is routine in Instant Messaging. The buddy list allows us to see who we can connect to. The ability to select between different Instant Messaging sessions on the desktop allows us to change between channels at will and even cut and paste between them.

There is a presumption, in channel selection, of complementary end points. A television, broadcast radio, family radio, or C.B. user must tune to the same frequency as the station they want to listen to or the other person they want to talk to. An Internet streaming station that streams in a particular format can only be heard or seen using end user software that supports that format. Users of different instant messaging software may not be able to interact with each other. Once creators and consumers are using complementary end points, however, channel selection becomes a reality in an increasing number of media. The key is a channel control that allows people to switch from one channel to another. Media that use multiple channels or transparently shift between channels without any specific act by the user do not entail a channel selector in the sense that is being used here.

Symmetries in User Interfaces

Twelve user interface components have been documented here, including microphones, speakers (personal and remote), cameras, volume controls, electronic interface managers, interruptive summons signals, consumer presence indicators, on/off switches, channel selectors, keypads, displays, and variants of stylus. They are hardly the only user interface components used in today's communications systems. Indeed, we might easily have considered another common user interface element, paper, here. Unlike the user interface components documented here, however, paper has other important uses, including storage and low speed transmission, that make it more appropriate to describe in other papers. For present purposes, it is enough to note that the most common components of media, including both paper and the other components documented here, are routinely combined in fairly predictable ways in constructing media.

Microphones collect audio. Speakers reproduce audio. Media that entail either tend to entail both. Cameras collect images. Displays reproduce images. For the most part, and increasingly, media that feature cameras also entail some form of display. Stylus variants can be used to create images and record written language. Keypad variants are also used to record written language. Paper and/or (increasingly) displays are usually associated with media that entail a stylus or keypad.



This symmetry of components, which predominates in today's media, mirrors the symmetry of our production and reception modalities. To the extent that cameras capture gesture, microphones capture sound, styluses and keypads capture abstract language, and volume controls, on/off switches, channel selectors, and electronic interface managers respond to our intent in controlling a medium, they complement our production modalities. To the extent that displays and paper can be seen and speakers can be heard, they complement our reception modalities.

None of these user interface elements operates in complete isolation from at least one of the others. A production user interface element, whether microphone, keypad, camera, or stylus, is useless for the purposes of communication if the thing it records cannot be reproduced by a reception-oriented user interface element, whether paper, display, speaker, or something else. Volume controls don't matter if there is no volume, usually sourced at a microphone and reproduced at a speaker. On switches and channel selectors don't matter if there is no content to be selected.

Interdependencies and symmetries between user interface elements may be commonplace, but are not necessary to the construction of useful new media. Voice recognition allows microphone input to be transformed, in near real time, into a text display. Voice synthesis allows writing to be reproduced as "spoken" language (via a speaker). Keyboard and stylus-sourced computer generated graphics can be and are added to live camera shots as they are being shot. Visual displays can be and are automatically generated based on, and as a complement to, music. The possibilities for media in which asymmetries between user interface components are resolved by software continue to grow steadily.

New media will emerge as a product of these asymmetries. The most successful of those media will entail new characteristics that allow people to use those media in new ways to solve communication problems that we cannot currently imagine. Some of those attributes may already be evident in the kind of near-real-time interaction that is typical of chatrooms and instant messaging and sometimes occurs in e-mail and computer conferences. When voice-to-data and other real-time processing becomes sufficiently practical, we may see media that increase the speeds of near-real-time interaction while allowing us to converse with more people at the same time.

Reference list