Paradigmatic Considerations for an AI Interface for a Wearable Computer
By Carol Stein | March 15, 2003
AI interfaces can be annoying. How should the information be presented to humans?
Abstract
Two concerns that must be adequately addressed by any designer of a wearable computer’s PC-based AI interface are distraction and the interface’s effect on human memory.
Preliminaries
Ubiquitous (Pervasive) vs. Personal Portable Computing
Two distinct trends in computing are easily distinguished: ubiquitous (or pervasive) computing, and portable (or personal and persistent) computing. Ideally, the wearable computer will actually comprise two computing systems, which we will dub the PC and the LC. The PC is a truly personal computer; the LC is a library computer, or limited computer — with regard to security, it should be considered a public computer.

The LC OS ought to be very limited and standardized, comprising primarily networking functions, and should be distributed on ROM to minimize "cracking." LC applications, also, will be relatively standardized (browsers, anti-virus and compression utilities, and the like), and very limited. The LC may read and write only to its own storage devices and its own memory.
The PC should have no direct networking connection. Its OS and applications will normally be nonstandard (although some may be widely distributed, depending on their merits), since they will depend on the specific interests of its user. The PC OS should be able to read LC storage devices, and write to LC RAM, as well as reading and writing to its own storage devices and memory. The PC is the private computer.

Certain functions now contained under the rubric of a single application would be distributed under this scheme. E-mail is one obvious example. Consider a general outline for e-mail management (a minimal code sketch follows the list):
- The LC checks user’s e-mail accounts on user demand, on a schedule, or at request of an AI agent.
- E-mail messages and attachments are downloaded to the LC.
- An AI agent within the LC unzips messages, checks that they are well-formed, and writes them to a read-once directory within LC storage. The LC connection to e-mail accounts is terminated.
- An AI agent within the PC, automatically or on demand of the user, checks status of the read-once LC directory.
- User reads an e-mail message (automatically deleting it from LC storage), and may optionally write it to PC storage. User may respond (in this context, forwarding a message is also a response), and this is written to LC memory for delivery, optionally copying it to PC storage.
- User may create a message, using PC’s address book function (part of a larger Personal Information Manager [PIM] application on the PC) and a general text-processing function. This message is also written to LC memory (or stored on PC device and later written to LC memory).
- An AI agent within the LC identifies a well-formed e-mail message with header in its memory, zips the message and any attachments, and transmits it.
- Note that attachments, like other information intended to be shared with a small audience, could be optionally encrypted by the PC in the process of writing them to the LC storage device, if they comprise sensitive information. (Encryption software and keys would be contained within the PC’s storage, obviously.)
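To make the outline concrete, here is a minimal, hypothetical sketch of the PC-side agent’s half of this exchange. Every name in it (the mount points, the two functions) is invented for illustration, the read-once behavior is simply modeled as delete-after-read, and the directories are assumed to already exist; nothing here is a real wearable API.

```python
import os
import shutil

# Hypothetical mount points: whatever the PC OS exposes for the LC's
# storage (read-once inbox, outbox) and for the PC's own archive.
LC_READ_ONCE_DIR = "/mnt/lc/mail/read-once"
LC_OUTBOX_DIR = "/mnt/lc/mail/outbox"
PC_MAIL_ARCHIVE = "/mnt/pc/mail/archive"

def fetch_new_messages(keep_copies=False):
    """PC-side agent: read each message once, deleting it from LC
    storage as the outline requires; optionally archive it on the PC."""
    messages = []
    for name in sorted(os.listdir(LC_READ_ONCE_DIR)):
        path = os.path.join(LC_READ_ONCE_DIR, name)
        with open(path, "rb") as f:
            messages.append(f.read())
        if keep_copies:
            shutil.copy(path, os.path.join(PC_MAIL_ARCHIVE, name))
        os.remove(path)  # enforce "read-once": the message leaves the LC
    return messages

def queue_outgoing(name, message_bytes, keep_copy=False):
    """Write a well-formed outgoing message into the LC's outbox
    (standing in here for LC RAM) for the LC agent to zip and transmit."""
    out_path = os.path.join(LC_OUTBOX_DIR, name)
    with open(out_path, "wb") as f:
        f.write(message_bytes)
    if keep_copy:
        shutil.copy(out_path, os.path.join(PC_MAIL_ARCHIVE, name))
```

The asymmetry is the point of the sketch: the PC agent only reads from and deletes within the read-once area, only writes into the outbox, and never opens a network connection itself.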
Intelligence (Human)
For the purposes of this paper, human intelligence is defined as the ability to move up and down the "ladder of abstraction," that is, to shift readily between concrete particulars and progressively more general concepts. Human intelligence is thus directly related to creativity. The reader will recognize that current digital devices do not display much human intelligence, thus defined, but this is incidental to this paper.
Creative or intentional thought requires time, uninterrupted time. In a world in which we schedule ever more "events," with information overload increasing, how can we continue to think? An AI which could filter information for us would be enormously helpful, as would an AI which helps us think. Is either of these possible?
Data and Information
A useful distinction can be drawn between data and information, as discussed on a wearable-computing e-mail list: information is data which has been found useful — analyzed and incorporated, or manipulated, by a user. Another way to say this is that information is data which has been selected and organized; the organization itself is of intrinsic value. Not incidentally, human memory stores organized information rather than data.
Data may be mined for information. Over the life of the user, information can evolve and new information can be generated (as users see higher-level or more extensive patterns). Data can be updated, or generated by instrumentation or analysis of lower-level data.
Information thus defined is intrinsically personal; however, published information becomes data (although legalities such as copyright and patent prevent certain types of manipulation by other users, for a time, without compensation to the originating user). Although the astute reader will realize that any arbitrary line could be drawn between data and information (which is the same as saying data at one level of abstraction could become information at a higher level), the subjective definition is very useful in the context of this paper.
The LC deals with data; for some purposes it may be considered as a local subset of the Internet (especially the Web) and other networks. The PC deals with information. Although there are archiving concerns for both the LC and the PC, the concerns are different: The LC archival concern is to assure up-to-date local copies of data, whereas the PC archival concern is that information remain accessible and manipulable over the life of the user.
Parenthetically, I would therefore argue that all information (including embedded formatting commands, for text documents) should be stored in a standard format (e.g., ASCII for English text). It seems highly probable that whatever becomes standard on the Web (currently XHTML for text; SVG for vector graphics; GIF, JPG, and PNG for non-vector graphics; MP3 for music; etc.) will become the format for both data and information files. Applications which embed non-standard code in either data or information must eventually die (regardless of their manufacturers’ hype) as the archiving implications, and the costs they entail, become more obvious to naive users.
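As a hypothetical sketch of what this archival policy might look like in practice: a whitelist of the standard formats named above, and a check that flags anything else in the PC’s information store. The directory name is illustrative only, and the extension list simply mirrors the formats listed in the paragraph.

```python
import os

# Formats the text treats as current Web standards (plus plain text).
STANDARD_EXTENSIONS = {
    ".txt", ".html", ".xhtml",   # text
    ".svg",                      # vector graphics
    ".gif", ".jpg", ".png",      # non-vector graphics
    ".mp3",                      # music
}

def audit_archive(root="/mnt/pc/information"):
    """Return files whose format is not on the standard list and so
    may become unreadable over the life of the user."""
    at_risk = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            if ext not in STANDARD_EXTENSIONS:
                at_risk.append(os.path.join(dirpath, name))
    return at_risk
```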
Immersive and Augmented Reality Interfaces
One final distinction: I customarily speak of a computer interface, especially for an AI agent, as either AR (augmented reality) or immersive. (I no longer remember whether I originated these terms years ago, but I have seen "immersive interface" used online recently by other persons.) These terms, like data versus information, are points along a continuum, but the distinction is similarly useful. In case these terms are not self-explanatory, an "immersive" interface largely replaces stimuli from the actual world, constructing a virtual world for the senses — currently, primarily vision and hearing, although some work has been done with pressure (for gaming). The virtual world constructed by the designer of the interface may or may not match the physical world. An AR interface, by comparison, does not block sensory input from the world (except in the almost trivial sense that a monitor, for example, visually blocks what is physically behind it, or a text label projected on eyeglasses overlays a small area of the visual field).
Certain professions use an immersive interface; examples include a doctor performing surgery, especially micro-surgery. In this case the virtual world is largely an enlarged depiction of the actual physical world, though some enhancements may be added based on previous instrumental analysis or texts (a tumor revealed by an MRI may be colored, for example). Other remote-control operations (e.g., scientific or military use of pilotless craft) and guided operations (such as maintenance of complex machinery) also benefit from an immersive interface.
For a wearable computer, however, the AR paradigm should be normative: Anyone using an immersive visual interface must be relatively stationary simply to minimize environmental dangers, whereas wearable computing normally implies some degree of mobility.
While we’re on the subject of further defining wearable computing: many current users, even of today’s relatively primitive wearables, include various continuous and automatic input devices in their rigs, such as microphones and video cameras (sometimes as "stealth" devices) or GPS locators. In general, it has been argued that a wearable should be highly flexible, optionally interfacing with diverse I/O devices. These include HUDs (heads-up displays, in stealth mode or visibly) as well as monitors and televisions for output, possibly using digital cameras (still or video) as scanners, and accepting speech, keyboard, and Twiddler™ input from humans as well as continuous automatic input. This makes the interface possibilities extremely rich for users of wearable computers.
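To make the "highly flexible" requirement concrete, here is a hypothetical sketch of a device-agnostic input layer. Every class and method name is invented for illustration; no real wearable platform exposes such an API. The idea is simply that continuous devices (GPS, microphone) and any others register against one common interface, so the rest of the rig’s software need not care which devices happen to be attached at the moment.

```python
from abc import ABC, abstractmethod
from typing import Dict

class InputDevice(ABC):
    """Common interface for any input device attached to the rig."""

    @abstractmethod
    def poll(self) -> bytes:
        """Return the latest reading (empty bytes if nothing new)."""

class GPSLocator(InputDevice):
    def poll(self) -> bytes:
        # Placeholder: a real driver would read from the GPS hardware.
        return b"lat=0.0,lon=0.0"

class Microphone(InputDevice):
    def poll(self) -> bytes:
        # Placeholder: a real driver would return an audio buffer.
        return b""

class Rig:
    """Registry of whatever devices are currently attached."""

    def __init__(self) -> None:
        self.devices: Dict[str, InputDevice] = {}

    def attach(self, name: str, device: InputDevice) -> None:
        self.devices[name] = device

    def detach(self, name: str) -> None:
        self.devices.pop(name, None)

    def sample_all(self) -> Dict[str, bytes]:
        # Continuous, automatic input: one sweep over every device.
        return {name: dev.poll() for name, dev in self.devices.items()}

rig = Rig()
rig.attach("gps", GPSLocator())
rig.attach("mic", Microphone())
print(rig.sample_all())  # devices can be attached or detached at runtime
```

The registry is the design point: adding a new camera or HUD driver is a one-line change, which is roughly what "optionally interfacing with diverse I/O devices" amounts to in practice.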