Paradigmatic Considerations for an AI Interface for a Wearable Computer
By ai-depot | March 15, 2003
Summary of Paradigmatic Considerations
We have now discussed enough terms and concepts. Let us lay out some dimensions of the issues that must be addressed by designers of an AI interface for a personal computer housed in a wearable device. The following grid attempts to summarize how the user may be notified of the results of AI-agent operations.
What is not indicated within this grid is the effect of various types of notification or presentation on human memory. Unfortunately, the relationship between the distraction quotient of a notification stimulus and the memorability of the information presented is probably inverse. Interface designers may be able to offset this somewhat by emphasizing context for the presented information. In fact, embedding information, or providing context, should both reduce distractibility and improve memorability.
Context
Context is intrinsically reduced when a small-screen device, such as a PDA, is used visually. This is one reason visual presentation of data on cell phones should die out as better options (heads-up displays embedded in eyeglasses, etc.) become available. Less obviously, data presented as sound inhibits the presentation of context: we can scan visual data, or jump to the relevant piece (usually by using color, or a smaller text size, to identify contextual information), but sound allows only sequential access to data. (This is one reason everyone is annoyed by so-called voicemail systems.)
Another kind of context can be supplied, even when the data lacks structure (hierarchy), simply by providing information such as when the request was entered or what triggered it (time), how big the result is (size), and the source of the information being presented. Time and file size should be familiar aspects of data from simple Find routines; they are often surprisingly helpful in identifying the information we are searching for, and they also help us remember what the information represents.
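To make this concrete, here is a minimal sketch, in Python, of how an agent's result might carry such contextual metadata; all the names here are my own illustration, not part of any existing system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AgentResult:
    body: str               # the information itself
    requested_at: datetime  # when the request was entered (time)
    trigger: str            # what triggered the request
    size_bytes: int         # how big the result is (size)
    source: str             # where the information came from

    def context_line(self) -> str:
        # Present the context before the content, to aid memorability.
        return (f"[{self.requested_at:%Y-%m-%d %H:%M}] "
                f"{self.source}, {self.size_bytes} bytes, via {self.trigger}")
```

Presenting context_line() alongside (or before) the result body gives the user the time, size, and source cues discussed above.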
Should Young Computer Professionals Design an AI Interface?
I have no idea whether I can effectively communicate to young software designers my concerns about how wearable computing could affect our species’ ability to attain wisdom, or even to solve specific critical problems; however, I am fairly sure they would never anticipate those concerns on their own. My orientation and awareness are certainly not shared by most of them.
The age of the personal computer began about 20 years ago, but relatively few of us were prepared at that time to profit from it. As it happens, I began learning to program (on old mainframe computers) about 10 years before the advent of the personal computer. Consequently, my vantage point is very unusual: I am an older woman with some formal training in computer science; I know, and have experience with, several programming languages; yet my orientation to computers remains that of a long-term "power" user.
Initially I used (mainframe) computers to run statistics and modeling applications related to my extensive schooling in the social sciences, philosophy of science, methodology, etc. (My formal schooling was unusually broad: my actual interest was human communication, and, prior to the discovery of mirror neurons, I necessarily searched widely for a scientific basis for my field.) As it happened, I remained broadly aware of computer capabilities because I stumbled into a professional career, to use the term loosely, primarily as a consulting documentation specialist. Not only did I often end up documenting new computer systems or applications, but I had to use sophisticated applications myself, and remain broadly aware of their developing capabilities, in order to publish the manuals (or online systems) I produced.
Unlike many software designers, I also have both a bent for self-reflexive examination (frequently about cognition) and an abiding interest in social change. In particular, I am obsessed with how technology affects human society, or can be developed and disseminated to bring about social change. Yet I remain independent of any academic community, crossing disciplinary lines with barely an acknowledgement that they (presumably) exist.
Finally, having become one of the invisibly disabled, I can no longer find enough paying work to occupy myself most days, and thus have substantial time to follow research and mull over my concerns. Time has become a great luxury which even academics rarely enjoy these days. I am very fortunate to have had the time to anticipate these issues, gather information, and write this paper, and I worry that few others share my good (in non-monetary terms, anyway!) fortune.
How many young computer specialists are capable of either recognizing or caring about the problems a distracting AI interface can create? Gifted software designers tend to be very young, overwhelmingly male, and very narrowly focused; appallingly strong prejudices dominating computer-related employment practices tend to reinforce these self-selections. Many are also notoriously inept socially.
In addition, many software designers believe that, for commercial (marketing) reasons, an interface, or hardware and software both, should be flashy, should draw attention to itself, and should approach omnipotence within its scope. One could well argue that the best interface is barely noticeable (similarly, that it should be minimal and "intuitive"), but this would be a hard sell.
Few persons under the age of 35 or so have had time enough to reflect thoughtfully on life-long observations, if they have even managed to recognize related experiences in their over-scheduled, time-compressed, and fragmented world. In addition, most young computer users have been exposed to television and video games for a massive proportion of their waking lives. Television, unlike reading, is not linear; it is distracting, and it does not promote large-scale thinking. Most video games emphasize quick (and violent) reaction to stimuli over thought. Even the long-term leisure pursuits of young persons often tend to be loud, frantic, and otherwise distracting, or are habitually accompanied by the ingestion of alcohol or other drugs.
Conclusions
Can the AI interface of a wearable PC facilitate thought? Or will it merely become part of the buzzing, booming confusion of competing sensory stimuli which inhibit or prevent coherent, creative thought?
A notable feature of normal young humans is omnipotentiality. All children (or practically all) will sing, dance, make up a story, draw a picture, or engage in most other kinds of activity spontaneously or on request. As we mature, we learn that we have to give up real potential in order to develop specific skills or talents; this is a balancing act required by the fact that we are mortal and live within a particular timescale. Thus, in late adolescence or early adulthood, a healthy human achieves enough wisdom to recognize that omnipotentiality should be sacrificed. Ubiquitous computing (the Web) increases our sense of omnipotentiality. Will it hinder or seriously delay our realization that commitment is necessary to achieve valuable goals? Will we prove able to voluntarily give up access to broadly appealing information in order to focus more narrowly?
More positively, could an AI help us achieve such a focus, perhaps by helping us become self-reflective? I am skeptical, although developing this paper gives me some hope.
This paper has raised many issues, but addressed few problems. Right now I am not at all optimistic that the user of a wearable device could derive benefits from general AI agents.
Therefore, at a minimum, let us observe that there are potential problems with an AI interface which should be addressed at the highest level of interface design. Specifically, I suggest that all hardware and software interfaces should have the following characteristics (a rough code sketch of these modes follows the list):
- The user can easily and quickly disable all forms of notification. (This is the no-distraction mode, which should always be available.)
- The user can obtain results, summaries, or listings on demand. This is another no-interruption mode.
- The user may choose periodic or pre-scheduled notifications: a mode in which the user chooses to attend to distracting stimuli at controlled intervals.
- Similarly, the user may modify notification stimuli, in either modality or intensity, in order to choose whether to attend to or ignore them. A programmable delayed-repeat option is also useful (as with the snooze button of an alarm clock).
- The user may choose to be interrupted when the AI agent finds preselected results.
- Finally, we have the full-interruption mode, in which the AI agent is free to notify the user whenever any result is produced, or whenever it judges a result useful or relevant.
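As a minimal sketch, and assuming nothing beyond the list above, these modes might be encoded as a simple enumeration; the names are my own illustration, not drawn from any existing system.

```python
from enum import Enum, auto

class NotificationMode(Enum):
    SILENT = auto()     # all notification disabled: the no-distraction mode
    ON_DEMAND = auto()  # results, summaries, or listings only when asked
    SCHEDULED = auto()  # periodic or pre-scheduled notification
    FILTERED = auto()   # interrupt only when preselected results are found
    FULL = auto()       # agent may interrupt for any result it judges relevant
```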
I suggest further that in all modes except the first, no-interrupt mode, the interface should offer a large range of modifiers, helping the user specify the sensory channels and modes of notification that suit individual distraction susceptibilities. Notification stimuli could vary in intensity, frequency, and duration. Sets of these notification modifiers could be saved for quick reestablishment in different environments (e.g., stronger stimuli for noisier environments). Alternatively, a set of variables could be linked to characteristics of particular, relatively common (foreseeable) types of results.
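Continuing the sketch above, such modifier sets might be stored as named profiles; the channels, values, and environment labels here are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StimulusProfile:
    channel: str       # sensory channel: "visual", "audio", or "haptic"
    intensity: float   # 0.0 (barely perceptible) to 1.0 (insistent)
    repeats: int       # how many times the stimulus recurs
    duration_s: float  # seconds each stimulus lasts
    snooze_s: float    # delayed-repeat interval, as with an alarm-clock snooze

# Named sets saved for quick reestablishment in different environments.
ENVIRONMENT_PRESETS = {
    "quiet_room":   StimulusProfile("visual", 0.2, 1, 0.5, 300.0),
    "noisy_street": StimulusProfile("haptic", 0.8, 3, 1.0, 60.0),
}
```

Keying the same structure by result type, rather than by environment, would cover the alternative suggested above.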
Meanwhile, perhaps some psychology lab somewhere could do us all a favor and begin compiling data about distraction levels for various stimuli, varying channel, mode, and intensity of the stimulus. Wearable computing will become pervasive sooner than we apparently can imagine, despite that laptop in Jean-Luc Picard’s ready room.
Updated: February 3, 2003; copyright © 2003. All rights reserved to the author, who, however, grants to AI-Depot the exclusive right to online publication for as long as the AI-Depot site exists. Comments addressed to [email protected] will be received by the author.
Written by Carol Stein.