|Australian Journal of Educational Technology
1995, 11(2), 75-90.
A review of intelligent software agents and their relevance to networked information, touching on some of their emerging potential and on interface considerations.
Agents: tireless software helpers with great promise in a variety of fields including the retrieval and filtering of information for individual needs.
Could these two work together to make a significant difference to future patterns of information gathering in research and education?
The Internet is part of the motivation for agents - it's going to be impossible, if it isn't already, for people to deal with the complexity of the online world. I'm convinced that the only solution is to have agents that help us to manage the complexity of information. I don't think designing better interfaces is going to do it. There will be so many different things going on, so much new information and software becoming available, we will need agents that are our alter egos; they will know what we are interested in, and monitor databases and parts of networks. (Pattie Maes of MIT's Media Laboratory, interviewed in Berkun, 1995.)

What are some of the difficulties with network (Internet, Web) information? What are agents and what are some of the issues they raise? What matters need consideration with regard to agent interfaces?
Software tools known as 'filters' and 'agents' are beginning to manage the informational onslaught for us. As long as we are conscious of their hazards and limitations, they'll serve us well until even more powerful navigating tools do the job. The computers that cause the problem will also solve the problem. (Danny Goodman in Goodman (1995), countering one of a list of 'common myths' about the Information Superhighway... that "I will be crushed under tons of information arriving via the Superhighway".)
Although they are still in their infancy, the promise of intelligent agents is an appealing one. The intelligent agents of tomorrow will relieve users of the time-consuming and tedious searches through a massive, intricate and globally-dispersed web of electronic information. Agents will find, assemble and analyse information that users need to solve problems, become better informed and make intelligent decisions. (Roesler & Hawkins, 1994, pp. 20-24.)
Should there be one or more agents? Should agents use facial expressions and what other means of personification? What is the best metaphor for interface agents? (Maes, 1994, p.40.)
The resource may also lack a coherent centre and a complete index, but its potential is clear to many, as is indicated by the recent dramatic growth of interest in its uses, both recreational and academic.
In discussing the variety of publishing and information dissemination mechanisms accessible via the Internet, December (1994) lists electronic mail, telnet, FTP, Archie, Gopher, Veronica, hyperlinked Web pages, listservers and USENET discussion groups as samples of the possibilities encountered. Apart from the basic hardware and software infrastructure requirements, December concludes, "a primary barrier to this access involves user interface".
Consequently, "creating a graphical interface to unify other communication services" with browser interfaces such as Mosaic is seen as a first challenge (December, 1994, p. 35). This, combined with his next thought that information should be formatted in such a way as to facilitate retrieval and display by a variety of means, points to an opening-up of information for perusal by remote means. These means could include agent software in place of the individual browsers which are his main interest.
This lack of comprehensive indexing is a substantial problem to the 'directed', as distinct from the 'browsing' user. Directed users know that there is information relevant to their purposes - hidden among the volumes of the ejournals, databases and papers currently multiplying rapidly 'out there' - they just have difficulties in finding it or in sifting the valuable from among the multiple possibilities presented.
"Networked computers have become the 'fishing poles' in that vast, seemingly unlimited ocean of virtual information sites", says Kawamoto (1994, pp. 44-45), before repeating the point that "there is still no real mega-indexing facility that streamlines the exploration, search and retrieval process". The Internet, it is observed, even "precludes the kind of systematic centralisation that might make navigation less cumbersome".
While the use of retrieval mechanisms like Gopher and WAIS may gradually become known to the novice, it is possible that agents could serve as a much more effective means to make accessible information resident on both the Web and the entire network of which it is just a part.
Indeed, intelligent software agents are already responsible for the creation of a number of indexes which are accessible to cyberspace explorers... though more confusion is bound to occur when the novice discovers there are some dozens of such indexes, the coverage and management of which is uncertain, and the redundancy between which is significant. Still greater problems will become apparent in dealing with the volume of response search engines can return, as is illustrated in Dawe & Baird (1995), where a 'Multi Threaded Query' returned literally hundreds of references. Clearly data of such a volume needs parsing and organising in some way, which is another activity in which agents may have a role.
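The parsing and organising just mentioned can be sketched in miniature. The following is an illustrative sketch only - the article names no algorithm, so the crude URL normalisation and the grouping of results by host site are assumptions made here for demonstration:

```python
from urllib.parse import urlparse
from collections import defaultdict

def organise_hits(hits):
    """Remove duplicate URLs from a list of (title, url) search
    results and group the survivors by the host site serving them."""
    seen = set()
    by_host = defaultdict(list)
    for title, url in hits:
        key = url.rstrip('/').lower()   # crude normalisation of equivalent URLs
        if key in seen:
            continue                    # drop the redundant listing
        seen.add(key)
        by_host[urlparse(url).netloc].append(title)
    return dict(by_host)

# A multi-threaded query might return the same page several times:
hits = [
    ("AusWeb'95 papers", "http://www.scu.edu.au/sponsored/ausweb/"),
    ("AusWeb'95 papers", "http://www.scu.edu.au/sponsored/ausweb"),  # duplicate
    ("Agents article", "http://www.ascilite.org.au/ajet/ajet11/meek.html"),
]
grouped = organise_hits(hits)
```

Even this toy reduction - hundreds of raw references collapsed into a de-duplicated, site-organised digest - is the kind of clerical work an agent could perform before its user ever sees the results.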
The entire Web is a construct of hyperlinks. It therefore runs the risk of losing its users in hyperspace when the cognitive load associated with keeping track of the links followed and the places visited gets too high. The kinds of mechanisms which Oren (1990) reiterates in reference to designing less vast hypermedia than the Web, like limiting links and giving clear visual cues about 'position' in the linked materials, are simply impossible to guarantee. Though some browsers provide pull-down lists of pages visited recently and change colours on links exercised in the last n days, these mechanisms are simply inadequate.
Using agents to discover relevant information, to remove duplications and then to make initial assessments about the level of relevance of any resource, be it Web page or WAIS document, may well be the saviour of directed researchers. Agents could also function to insulate the researcher from the technicalities of a particular interface by retrieving required information from them and presenting it in a familiar form. They could also generate significant efficiencies, especially given their potential to schedule activities, including follow-up scanning for new and changed information, independent of their user.
Intelligent agents are autonomous and adaptive computer programs operating within software environments such as operating systems, databases or computer networks. Intelligent agents help their users with routine computer tasks, while still accommodating individual habits.

Two contributions from Harmon (1995, p. 2) could be added to such a definition: the suggestion that the generic agent stands between the user and an application, but does not necessarily prevent the user from using that application or process directly; and the observation that agents are appropriate for repetitive tasks which are performed differently by different people.
This technology combines artificial intelligence (reasoning, planning, natural language processing, etc.) and system development techniques (object-oriented programming, scripting languages, human-machine interface, distributed processing, etc.) to produce a new generation of software that can, based on user preferences, perform tasks for users.
Although currently in their infancy, the advent of agents is seen by many to be a significant step in the evolution of computing. Alan Kay considers that they are part of a 'third revolution' in computing, following the moves first to time-sharing computing and then to desktop computing employing the graphical user interface. Kay (1990) suggested that the next major advance in computing will be the widespread adoption of networked or distributed computing, and that this will be driven by agent-based interfaces. More recent activity surrounding the Internet and the development effort going into agent-based systems may well reinforce the veracity of this view.
Earlier, Kay (1984) pointed to the origin of agency in computing thus:
The idea of an agent originated with John McCarthy in the mid-1950s, and the term was coined by Oliver G. Selfridge a few years later, when they were both at the Massachusetts Institute of Technology. They had in view a system that, when given a goal, could carry out the details of the appropriate computer operations and could ask for and receive advice, offered in human terms, when it was stuck. An agent would be a 'soft robot' living and doing its business within the computer world. (Alan Kay quoted in Laurel (1990, p.359).)

This kind of idea was carried forward by Nicholas Negroponte, who is sometimes also credited with its origin, with elements emerging in his publication of The Architecture Machine: Toward a more human environment in 1970. Rather later, when Negroponte contributed his 'Hospital Corners' article to Laurel's (1990) discussions about future interfaces, including software agents and guides, he reworked the agent idea as a collective of software entities providing an alternative to the current direct manipulation model of computer interaction represented by the desktop metaphor:
But wouldn't you really prefer to run your home and office life with a gaggle of well-trained butlers (to answer the telephone), maids (to make the hospital corners), secretaries (to filter the world), accountants or brokers (to manage your money), and on some occasions, cooks, gardeners and chauffers when there were too many guests, weeds, or cars on the road? (Negroponte (1990, p. 352).)

Publication of Laurel's The Art of Human-Computer Interface Design, following such developments as HyperCard and the distribution of Apple's agent vision in the 'Knowledge Navigator' video, also coincided with advances in artificial intelligence techniques and desktop computing power. All these together added impetus and further inspiration to the development of agents in computing, as, most recently, has the dramatic advance in networking technology and its adoption in the guise of popular access to the World Wide Web.
Today, in Being Digital, an exploration of the realm of the 'Information Superhighway', Negroponte again reworks the agent concept, talking of 'digital butlers', 'personal filters' and even 'digital sisters-in-law' to help in choosing which movie to see. He envisions a range of agents, working together to create an 'intelligent interface' with which the user can converse, or which can anticipate the user's needs from knowledge it holds (and builds) about him or her. This is an interface which "will be rooted in delegation, not in the vernacular of direct manipulation". (Negroponte, 1995, p. 101.)
Agents are a concept in software which various sources and orientations might define differently. For purposes of this paper, they are regarded as differing from other related interface mechanisms like guides and wizards in several ways. Figure 1 sketches just one set of possibilities based on a very broad definition which does not necessarily differentiate a separate guide category. 'Agents', in terms of this discussion, tend to be found in the 'information' and 'work' categories.
|Figure 1: Functions for agents (e.g. sorting and organising). Borrowed from Laurel (1990, p.360).|
Guides assist the user of a particular piece of software in its operation or in constructing an understanding of its content by presenting differing viewpoints; wizards tend to function as experts in a particular domain, guiding the novice; whereas an agent is set a task or function and then left to perform it alone, sometimes with the agent even deriving its own tasks by observation of the user.
Another list of applications, found in Maes (1994, p.31) and based on prototypes being developed at MIT, gives similar emphasis to Indermaur's (reported below) on the information gathering and filtering potential of agents. It also adds items such as mail management, meeting scheduling and the selection of books, music and movies to a list of areas where agents can help which is 'virtually limitless'.
This is an area of vigorous activity which, like the development of Net resources, is holding the attention of people from a variety of disciplines, all aiming at designing "applications to be better surrogates while requiring less control over the environment in which these applications perform". Thus, what agents could be used for is an idea that varies in the eye of the beholder.
In listing intelligent interfaces, adaptive interfaces, knowbots, knobots, softbots, userbots, taskbots, personal agents and network agents as just a few among the class 'agents', Reicken (1994a) betrays an interest rooted in the study of artificial intelligence and inter-machine communications. Meanwhile Laurel's (1990) discussions of agents and guides generally confined the 'guide' to a single application, as a means of communicating differing viewpoints or giving hints, showing in so doing a human-computer interface orientation.
From another point of view, one more closely related to how they do their work than what they do, Harmon (1995, p.4) identifies three types of agents:
Indermaur (1995, p.97), while acknowledging that agents are being developed today in a number of application areas, lists three major types - advisory agents, assistant agents and Internet agents - as well as identifying a subclass of communicating agents. He then goes on to stress that part of the broad range which is "designed to filter and gather information from commercial data services and public domains like the Internet and to automate work flow"... a group of distinct interest to this discussion.
Under this scheme, advisory agents "offer instruction and advice to help you do your work". These 'learn' about you, your expertise and interests and adapt accordingly, at best anticipating your goals and presenting suggestions based on past actions. They do this by maintaining two models: one of the user and user behaviour, and another of subject matter or domain details.
Assistant agents "can be more ambitious than advisory agents because they often act without direct feedback from users". Examples of this kind of agent, like smart mailboxes and search engines, raise a number of issues in actually doing work for you. Indermaur reports Pattie Maes' suggestion that the two most important factors in the design of such agents are their competence and the level of trust extended to them. Competence concerns how an agent acquires knowledge and its sensitivity to its user's needs, while trust concerns whether users will feel comfortable in delegating tasks to an agent. These issues are explored in more detail in Maes (1994, p.31-32), where they are used to explain a preference for approaches to agent creation employing machine learning over others based on end-user programming and knowledge bases.
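The machine-learning approach Maes prefers - an agent that watches its user, and graduates from silence to suggestion to autonomous action as its confidence grows - can be sketched very simply. This is a minimal illustration, not Maes' implementation: the frequency-based confidence measure and the two threshold values are assumptions made here:

```python
from collections import Counter

class AssistantAgent:
    """Learns by observing its user; competence grows with evidence,
    and trust is expressed as thresholds set by the user."""
    def __init__(self, suggest_at=0.6, act_at=0.9):
        self.seen = Counter()      # counts of (situation, action) pairs observed
        self.totals = Counter()    # counts of each situation observed
        self.suggest_at = suggest_at
        self.act_at = act_at       # the higher bar for acting unprompted

    def observe(self, situation, action):
        """Watch the user handle a situation, e.g. filing listserv mail."""
        self.seen[(situation, action)] += 1
        self.totals[situation] += 1

    def react(self, situation):
        """Stay quiet, suggest, or act, depending on confidence."""
        best, conf = None, 0.0
        for (sit, action), n in self.seen.items():
            if sit == situation and n / self.totals[sit] > conf:
                best, conf = action, n / self.totals[sit]
        if conf >= self.act_at:
            return ('do', best)       # competent enough to be trusted
        if conf >= self.suggest_at:
            return ('suggest', best)  # offer, but leave the user in control
        return ('wait', None)

agent = AssistantAgent()
for _ in range(9):
    agent.observe('mail from listserv', 'file')
agent.observe('mail from listserv', 'read')
```

The thresholds make the competence/trust trade-off concrete: a cautious user raises `act_at` so the agent only ever suggests, while a trusting one lowers it and delegates outright.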
Indermaur's third grouping is the Internet agents, most of which are information gatherers, some of which attempt to make sense of the information they find on the Web. Examples of these are WebCrawlers, Spiders and various other software 'robots'.
Indermaur's commentary on the 'assistant agent' category points out that it is important to have a mechanism whereby the balance between agent independence and intrusiveness can be manipulated. Another issue of import raised by such agents is that of responsibility for agent actions: if an agent can act more autonomously, who will take responsibility for its activities?
Also arising in association with agents is the issue of privacy. If an agent 'knows' a lot about its employer, could that not pose problems when agents find they have to communicate with one another about their purposes and their owners? Among other salient matters, like the tension between people wanting agents to do things they are not good at, but not to get too good at doing those things, this idea of inter-agent communication and a 'society of agents' are covered in an interesting interview with Marvin Minsky found in Reicken (1994b).
Indermaur's 'Internet agents' raise other concerns, regarding their behaviour in the networks they roam, which are taken up in Eichmann (1994) and Markoff (1994). Markoff dramatises the concern thus:
Protoartificially intelligent creatures are already loose in the net, and in the future they will pose vexing ethical dilemmas that will challenge the very survival of cyberspace. (Markoff, 1994, p.45.)

What he is concerned about is the load that uncontrolled 'robots' wandering the net, reviewing and harvesting its riches, place on the processors dispensing information through the Internet. David Eichmann has similar concerns about the impact of 'Web spiders', which he takes so far as to use as a motive for proposing a set of ethics for spider behaviour.
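One courtesy such an 'ethical spider' can extend is to honour a host's stated access policy before fetching anything from it. The sketch below, an illustration rather than Eichmann's proposal, parses a robots-exclusion policy supplied as text (the rules shown are hypothetical); a live spider would first retrieve the host's own policy file before crawling:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical host policy: everything open except a private area.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

def may_fetch(url, agent_name="example-spider"):
    """Consult the host's stated policy before retrieving a page;
    an ethical spider simply skips anything the host disallows."""
    return rp.can_fetch(agent_name, url)
```

A well-behaved spider would also pause between requests to the same host, so that its harvesting never monopolises the processors Markoff worries about.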
In Eichmann (1994, p.10) it is proposed that agents acting in the network for an individual user should adhere to the following guidelines, which are quoted verbatim:
Boden (1994) takes the reverse position, finding potential assistance to creativity in agents being able to help by "suggesting, identifying, and even evaluating differences between familiar ideas and novel ones". Agents will be able to collaborate and compare 'ideas', and in any case, there will always be the potential for them to be set up to occasionally make random comments or suggestions to prompt human thinking.
It seems very unlikely that human users will ever surrender their intellect to the agent which is designed as a helper, not a replacement... but the connection made between creative process and agents is, nonetheless, a thought-provoking one.
Some are even more deeply troubled by the emergence of the software agent. Lanier (1995), putting an extreme position, considers intelligent agents "both wrong and evil". He suggests that in employing such mechanisms humans might be surrendering their humanity - "redefining themselves into lesser beings" - and altering their own psychology. Such ideas are certainly worth considering, though it is difficult to imagine the sort of person who would abdicate responsibility so totally to what is, after all, merely a contrivance of machine and software: trust is one thing, surrender another.
Telescript, the scripting language associated with the Magic Cap product and discussed in Davis (1994), is one product which deals with the potential threats of programs like itself. Programs such as these, which are actually transported across networks to operate on different hosts, are being designed to incorporate 'cyberspace passports' which carry their origin and authority. Telescript is also intentionally constrained by a vocabulary which disallows potentially dangerous functions, such as the direct examination or modification of host system memory or file systems.
The interface of an agent is in many ways no different to that which mediates communication between any computer-based artefact and its user, and is thus subject to the same sorts of constraints that are applied in many human-computer interaction design guidelines. Before looking at a selection of issues which some see as particularly pertinent to agents, it may be useful to review a set of general principles for machine design that seem to have application here.
Donald Norman (1988), in The Psychology of Everyday Things, provided a refreshingly practical and attractive set of ideas on design which can be applied to any interface, whether computer or otherwise. In the context of building an agent interface, and together with the adoption of an appropriate metaphor (discussed below), his ideas can be significant in deciding whether an agent is effective and accepted in its role. Some of his central concepts which seem relevant to creating agents include: being aware of object affordances, or what the appearance of something implies about its utility; the importance of giving visibility to its functions; the power of making constraints clear and using the in-built expectations of users to support them; the need for direct feedback and for evidence that the user has control of an object; and the great capital which can be made of the human tendency to build conceptual models of objects with which people interact.
Using a coherent set of cues, visual or aural, can be a key factor in effective human-computer communication, whether because the users understand 'intuitively' how better to work with an object, or because it helps them better to apprehend its value. Commonly, the agent is given expression in a human-like form, such as is seen in the Apple Knowledge Navigator video or in products like those being developed by Pattie Maes' group at MIT.
Human metaphors, like the 'assistant' casting often found in agents, however, are not the only possibility. So long as what is used can be judged both appropriate and a coherent metaphor well implemented, it can assist communication.
As might be anticipated from his previously mentioned guiding principles, Norman stresses the need for an agent's interface to reassure its user that the agent is technically reliable, and to give feedback that it is working according to plan. Furthermore, the outward face of the agent application should control expectations about its abilities.
Norman is worried about the tendency to use anthropomorphic devices in the agent interface as he feels that these could be interpreted as promises of performance which cannot be met by relatively primitive programs. This betrays his basic wish that 'system image' should accurately depict capabilities and actions, but probably underestimates the sophistication of the computer user.
Privacy issues are considered, in addition, with concerns being expressed about the potential for agents to exchange sensitive information about their users. Perhaps inter-agent interfaces will need further consideration with regard to this less-technical matter.
Finally, Norman raises concerns about the means by which agents are to be instructed or controlled. He expresses reservations about the practicality both of agents instructing themselves by 'watching' their users and of direct user programming of agents, suggesting that neither approach to communication is likely to be wholly satisfactory. Norman highlights a further issue in interface which might sometimes be overlooked: the fact that there are a number of communications (and modes for them) possible in the user-agent interface. Some of these are explicit and some are implicit, but all are mediated through some form of interface.
Instructions are given to the agent and responses are received from it. Instructions could be given directly by spoken word, through text input or through demonstration. They might, alternatively, be given implicitly, through the agent drawing conclusions based on user action, though in this case it could be said that the agent 'instructs' itself by 'observing' user behaviour. A 'conversation' might be necessary to refine unclear intentions and to ensure that goals are appropriate. Eventually an agent must report its findings to the user, and this could potentially be in one of several forms.
Future interfaces are likely to be more complex than the current mostly text-based processes. They will also likely be rather more complex than a command and response model. These possibilities are worth keeping in mind.
Laurel (1990, p.358) suggests that anthropomorphic tendencies in an interface are acceptable providing there is no pretence that the agent figure actually is human. She feels that two distinctly anthropomorphic qualities are required of (and enjoyed by) computer users - responsiveness and a capacity to perform actions - and contends that these serve as the basis of the metaphor of agency.
Similarly, Tognazzini (1992) suggests that designers should make no pretence that the computer is human, but instead should consider the creation of a character separate from, but within, the computer context which 'acts' as an agent. User expectations of agent abilities, says Tognazzini, in an echo of Norman's idea, should be constrained. Further, the tasks which an agent is to perform should be limited to such tasks as it is conceivably capable of, which thus makes the form in which it is portrayed in software very important.
Believability, a term not to be confused with realism, is the topic of an interesting paper in Bates (1994). It discusses agents in terms of the coherence of their expression and a need for them to be able to express their 'emotions' in order for them to be understood. When read alongside Maes' comments about the feedback which can be gained from visual representations of an agent's 'state of mind' (in Maes 1994, p. 36), this paper provides an interesting perspective. Both give some support to those seeking mechanisms and motivation to create agents which will be trusted by their users, and each uses agents based on cartoon forms.
Language and agents is another field in which there are many interface possibilities which could be explored. In future, agents may be required to talk or to 'understand' spoken language in different applications, though most computer interfaces continue to be text-based for now.
Just how agents present the information they gather is another issue deserving attention. Several existing Internet search mechanisms, including Veronica, are able to numerically rate the 'relevance' of articles being scanned to the set of criteria which was supplied to prompt the search. In future, information could be presented by agents which have first 'sub-contracted' its tailoring to individual needs by means of the personal presentation engines or filters as described by Bergeron (1994). Such engines might abbreviate or expand upon raw text according to the needs of the target user.
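A toy version of such numeric relevance rating can make the idea concrete. The scoring rule below - the fraction of the supplied criteria terms a document contains - is an assumption for illustration only; systems like Veronica applied their own, more elaborate measures:

```python
def relevance(document, criteria):
    """Rate a document against search criteria: the fraction of
    criteria terms that appear among the document's words."""
    words = set(document.lower().split())
    matched = sum(1 for term in criteria if term.lower() in words)
    return matched / len(criteria)

# Hypothetical gathered documents and a user's search criteria:
docs = {
    "A": "software agents filter network information for users",
    "B": "recipes for hospital kitchens",
}
criteria = ["agents", "information", "network"]
ranked = sorted(docs, key=lambda d: relevance(docs[d], criteria), reverse=True)
```

An agent armed with even so crude a rating could present its hundreds of findings ranked rather than raw, before any personal presentation engine takes over the tailoring.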
Bergeron, Bryan (1994). Personalised data representation: Supporting the individual needs of knowledge workers. Journal of educational multimedia and hypermedia, 3(1), 93-109.
Berkun, Scott (1995). Agent of change. Wired, 3(4), 116-117. (April 1995 - An interview with Pattie Maes.)
Cates, Ward M (1994). Designing hypermedia is hell: Metaphor's role in instructional design. 16th Annual Proceedings of AECT, 95-108. Ames, Iowa: Iowa State University.
Davis, Arnold (1994). The digital valet, or Jeeves goes online. Educom Review, 29(3), 44-46. (May/June, 1994.)
Dawe, Russell T. and Baird, Jeanette H. (1995). WWW, researchers and research services. Proceedings of AusWeb'95. Lismore, NSW: Southern Cross University. http://www.scu.edu.au/sponsored/ausweb/ausweb95/papers/sociology/dawe/
December, John. (1994). Electronic publishing on the Internet: New traditions, new choices. Educational Technology, 34 (6), 32-36. (September, 1994.)
Eichmann, David. (1994). Ethical web agents. Second international world-wide web conference: Mosaic and the web. pp. 3-13. (Held in Chicago, Ill, October 18-20, 1994.)
Falk, Jim. (1995). The meaning of the web. Proceedings of AusWeb'95. Lismore, NSW: Southern Cross University. http://www.scu.edu.au/sponsored/ausweb/ausweb95/papers/sociology/falk/
Goodman, Danny. (1995). Living at light speed? Random House Electronic Publishing. (Extract found 'somewhere' on the Web, under title 'Myths').
Harmon, Paul. (Ed) (1995). Software agents. Intelligent software strategies, 11(1), 1-13. (January, 1995.)
Indermaur, Kurt. (1995). Baby steps. Byte, 20(3), 97-104. (March, 1995.)
Kawamoto, Kevin. (1994). Wired students: Computer-assisted research and education. Educational Technology, 34(6), 43-48. (September, 1994.)
Kay, Alan. (1990). On the next revolution. Byte, 15(9), 241. (September, 1990.)
Lanier, Jaron. (1995). Agents of alienation. Interactions, 11(3), 66-72. (July, 1995.)
Laurel, Brenda. (1990). Interface agents: Metaphors with character. Laurel, B. (ed.), The art of human computer interface design. pp. 355-365. Reading, MA: Addison Wesley. (In general, as well as for the particular reference.)
Maddux, Cleborne D. (1994). The Internet: Educational prospects - and problems. Educational Technology, 34 (6), 43-48. (September, 1994.)
Maes, Pattie. (1994). Agents that reduce work and information overload. Communications of the ACM, 37(7), 31-40. (July, 1994.)
Markoff, John. (1994). The fourth law of robotics. Educom Review, 29 (2), 45-46. (March/April 1994.)
Murie, Michael. (1993). Macintosh multimedia workshop. Carmel, Indiana: Hayden Books.
Negroponte, Nicholas. (1970). The architecture machine: Toward a more human environment. Cambridge, MA: The MIT Press.
Negroponte, Nicholas. (1990). Hospital corners. In Laurel, B. (ed.), The art of human-computer interface design. pp. 347-353. Reading, MA: Addison Wesley.
Negroponte, Nicholas. (1995). Being digital. Rydalmere, NSW: Hodder and Stoughton.
Norman, Donald A. (1988). The Psychology of everyday things. New York: Basic Books.
Norman, Donald A. (1994). How might people interact with agents. Communications of the ACM, 37(7), 68-71. (July, 1994.)
Oren, Tim. (1990). Cognitive load in hypermedia: Designing for the exploratory learner. Ambron, Sueann & Hooper, Kristina (eds.), Learning with interactive multimedia (pp.125-136.) Redmond, Washington: Microsoft Press.
Reicken, Doug. (1994a). Introduction to intelligent agents special issue. Communications of the ACM, 37(7), 20-21. (July, 1994.)
Reicken, Doug. (1994b). A conversation with Marvin Minsky about agents. Communications of the ACM, 37(7), 23-29. (July, 1994.)
Roesler, Marina and Hawkins, Donald T. (1994). Intelligent agents: Software servants for an electronic information world (and more!). Online, 18(4), 18-32. (July, 1994.)
Tognazzini, Bruce. (1992). Tog on interface. (Especially chapters 21, 22.) Reading, MA: Addison Wesley.
|Please cite as: Meek, J. (1995). Intelligent agents, Internet information and interface. Australian Journal of Educational Technology, 11(2), 75-90. http://www.ascilite.org.au/ajet/ajet11/meek.html|