USING SHARED DESIGN MODELS
Robert Neches, Jim Foley, Pedro Szekely, Piyawadee Sukaviriya, Ping Luo, Srdjan
Kovacevic, Scott Hudson
USC / Information Sciences Institute and Georgia Institute of Technology
ABSTRACT
We describe MASTERMIND, a step toward our vision of a knowledge-based design-time and run-time environment where human-computer interface development is centered around an all-encompassing design model. The MASTERMIND approach is intended to provide integration and continuity across the entire life cycle of the user interface. In addition, it facilitates higher quality work within each phase of the life cycle. MASTERMIND is an open framework, in which the design knowledge base allows multiple tools to come into play and makes knowledge created by each tool accessible to the others.
KEYWORDS: models, collaboration, design, development
INTRODUCTION
The challenge facing the research community is to provide the basis for an effective, integrated suite of tools to support the entire lifecycle of an interface. This means that the tools must be given a great deal more knowledge than they currently have about the product they are intended to construct. It means that this knowledge must be preserved and shared between tools across the software lifecycle.
In an effort to move our research in this direction, we at Information Sciences Institute and Georgia Tech have been collaborating on the design of a shared system called MASTERMIND, which is comprised of a knowledge base, a design-time environment, and a run-time environment. In MASTERMIND (which stands for Models Allowing Shared Tools and Explicit Representations to Make Interfaces Natural to Develop), the knowledge base serves as an integrating framework that allows separate tools to plug into the design- and run-time environments.
Part of our underlying thesis in MASTERMIND is that models of interface concepts need to be a shared community resource that drives the creation of an architecture and tool suite for design, development, and maintenance. If knowledge of these concepts can be built into the tools, then greater assistance can be provided earlier in the design process, individual tools will become much more interoperable, and it will become possible to build knowledge bases about particular designs which can greatly facilitate their maintenance and extension.
These benefits come at a cost -- modeling entails a certain degree of additional effort. However, our argument is that this cost can, and should, be paid primarily when creating tools and environments rather than when building applications. Creating knowledgeable development environments is the way to provide the benefits of a model-based approach to application developers without making modeling too burdensome to be practical.
We will develop our view of a community-resource knowledge base according to the following exposition. First, we will describe the issues that arise over the course of the software lifecycle for a user interface design. We wish to make two major points from that analysis: (1) each phase is facilitated if we can carry over knowledge from previous phases; and (2) it is possible to identify the nature of the knowledge that needs to be carried over.
Having argued generally that this carryover is beneficial, we will next point out specific benefits that arise from using shared models to combine tools developed under two complementary model-based approaches: the HUMANOID effort ongoing at Information Sciences Institute and the UIDE work at George Washington University and Georgia Tech.
After reviewing the leverage that these tools provide each other, our next topic will be the mechanisms that will allow them to be combined. In particular, we will describe our progress toward a unified model that supports prototyping from partial specifications, design critiquing, context-sensitive control of presentations, and context-sensitive animated help and tutorials.
Once that unified model has been explained, we will turn to a consideration of the practical issues that must be addressed in moving toward an open, extensible environment in which such a model can serve to bring together our tools. We will close by speculating about the possibilities that this approach opens up for integrating and disseminating the results of research in the HCI community.
AN ANALYSIS OF THE UI LIFECYCLE: WHY A MODEL-BASED APPROACH IS NEEDED
Development of a user interface starts with an existing system (computerized or manual) that must be analyzed in order to understand what users need to accomplish and where the bottlenecks lie in attempting to do so. This problem identification process, which relies on techniques for task analysis and user monitoring, leads to the definition of a specific design problem. Elements of that design problem, at this point in the process, involve a description of the task and identification of requirements for improvements in quality, speed, and/or accuracy of particular task components. Today, that task description is rarely made explicit (although techniques exist to do so [10]). Little help beyond force of will is available to ensure that the design evolves in line with that description. Yet the task analysis deals in goals, operators, methods, and selectors -- elements that, as we will see, are part of the interface design representation. Properly modeled, task analyses could feed directly into the design.
In the next phase, conceptualization, design policies need to be set in order to provide for an interface which addresses the task analysis and requirements resulting from problem identification. Conceptualization, and the prototyping phase which follows it, can be viewed as a search through a space of alternative designs. This notion of search for a design that satisfices (rather than necessarily optimizes) multiple criteria is central to current research trends. Conceptualization formulates design policies that define regions in the space. Prototyping works within those abstractions to create a specific design specified at an executable level.
In particular, elements of a conceptualization describe design commitments. These include decisions about the choice and nature of application and interaction objects presented to users through the interface. Other commitments involve policy decisions about choices of interaction paradigms and dialogue techniques, as well as the general look-and-feel offered via input and output media. If we wish to express these commitments explicitly, then we benefit from having a model of tasks, since the design policy commitments made during conceptualization build on our assumptions about the activities that the interface will support.
Many design commitments are made during these phases. It is only in the next phase, prototyping, that the design representation grows to include actual executable software. Unfortunately, the current generation of tools ignores the earlier phases. Interface builders and other interface programming aids really only help in creating code after the designer has a sense of what is wanted. As we have argued elsewhere [22], although some experimentation is possible, the cost of backing away from a commitment is quite high once much software is built.
A great deal is to be gained by maintaining an explicit declarative representation that covers both the design model and the code implementing it. Such a representation enables semi-automated design critics to evaluate the design with respect to issues such as usability and learnability. By providing higher levels of abstraction at which to specify the interface, it also empowers more rapid exploration of design alternatives and therefore faster arrival at a satisfactory design. A representation of the design goals allows us to provide help in managing the activities required to implement design policies.
As the software lifecycle proceeds into usage and maintenance phases, knowledge accumulated in the previous phases can be put to good use -- but only, of course, if there is a model that preserves it for use by tools in the run-time environment. In particular, knowledge of how the design mapped its model of the application onto its model of presentation methods is important, as is knowledge about tasks and goals. Carrying this knowledge over from design-time to run-time allows us to program systems that can make context-sensitive decisions about the best presentation technique to use for particular data. It allows us to define help and guidance systems that can help with how-to questions, that know enough about the presentation to be able to generate effective animations, and that maintain the accuracy of their help without extra programming effort because their help is generated from the design itself.
In summary, a declarative model-oriented approach allows separate tools, operating at very different times throughout the lifecycle, to take advantage of knowledge collected by other tools and thereby build better interfaces with less effort. To accomplish this, we need a model capturing (a schematic sketch follows the list):
• task structure, and the goals, subgoals, operators, methods and selectors which comprise the means for accomplishing tasks
• conceptual design abstractions and policy decisions about structural and functional properties of the interface which constrain a particular design
• mappings of conceptual structure to uses of i/o media in system displays
• mappings of low-level, empirically-recordable user gestures onto higher-level semantics recorded in the design model
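To make the list concrete, the following sketch groups those four kinds of knowledge into a single design record. The class and field names are hypothetical illustrations of the idea, not MASTERMIND's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TaskModel:
    # goals, subgoals, operators, methods and selectors for accomplishing tasks
    goals: List[str] = field(default_factory=list)
    methods: Dict[str, List[str]] = field(default_factory=dict)  # goal -> sequence of operators


@dataclass
class DesignModel:
    tasks: TaskModel                         # task structure
    design_policies: Dict[str, str]          # conceptual abstractions and policy decisions
    presentation_mappings: Dict[str, str]    # conceptual structure -> uses of i/o media
    gesture_mappings: Dict[str, str]         # low-level gestures -> design-model semantics
```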
There are several advantages to this approach. The declarative model is a common representation that tools can reason about, and allows the tools that operate on it to cooperate. Because all components of the system share the knowledge in the model, the model promotes interface consistency within and across systems, and reusability in the construction of new interfaces. Also, the declarative nature of the model allows system builders to more easily understand and extend the model.
CARRY-OVER OF KNOWLEDGE BETWEEN DESIGN-TIME AND RUN-TIME TOOLS AND ENVIRONMENTS
We have built a number of tools which operate at design time and at run time by making use of the kind of knowledge just listed.
ISI's model-based user interface development environment is HUMANOID [20, 21, 22]. Its contribution to interface design is that it lets designers express abstract conceptualizations in an executable form, allowing designers to experiment with scenarios and dialogues even before the system model is completely concretized. The consequence is that designers can get an executable version of their design quickly, experiment with it in action, and then repeat the process after adding only whatever details are necessary to extend it along the particular dimension currently of interest to them.
HUMANOID models the functional capabilities of the system as a set of objects and operations, and partitions the model of the style and requirements of the interface into four dimensions that can be varied independently (a schematic sketch follows the list):
1. Presentation. The presentation defines the visual appearance of the interface.
2. Manipulation. The manipulation specification defines the gestures that can be applied to the objects presented, and the effects of those gestures on the state of the system and the interface.
3. Sequencing. The sequencing defines the order in which manipulations are enabled. Many sequencing constraints follow from the data flow constraints specified in the system functionality model (e.g., a command cannot be invoked unless all its inputs are correct). Additional constraints can be imposed during dialogue design.
4. Action side-effects. Action side-effects refer to actions that an interface performs automatically as side effects of the action of a manipulation (e.g., a newly created object can become automatically selected).
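A minimal sketch of how these four independently variable dimensions might sit side by side in one declarative record (illustrative names only, not HUMANOID's actual representation):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class InterfaceStyle:
    # 1. Presentation: visual appearance of the interface
    presentation: Dict[str, str] = field(default_factory=dict)                # object type -> display template name
    # 2. Manipulation: gestures and their effects on system/interface state
    manipulation: Dict[str, Callable[[], None]] = field(default_factory=dict)
    # 3. Sequencing: conditions under which manipulations are enabled
    sequencing: Dict[str, Callable[[], bool]] = field(default_factory=dict)
    # 4. Action side-effects: automatic follow-on actions (e.g., select a newly created object)
    side_effects: Dict[str, Callable[[], None]] = field(default_factory=dict)

# Because the four dimensions are independent fields, a designer can refine one
# (say, substitute a presentation template) without touching the other three.
```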
HUMANOID provides facilities to incrementally refine the system functionality model and to refine any of the dimensions of interface style to allow the exploration of a large set of interface designs, while allowing the design to be executed at any time.
In addition to supporting design exploration, HUMANOID's model allows it to construct displays whose characteristics depend on the runtime values of system data structures. HUMANOID reasons about the values of the data structures and the presentation policies defined in the presentation dimension of interface style to determine the resulting presentation. HUMANOID's model also allows it to record the dependencies between displays and system data structures, enabling it to automatically update the displays when the data structures change.
Georgia Tech's model-based user interface development environment is UIDE, the User Interface Design Environment [3, 4, 6, 7]. UIDE's models support rich descriptions of the application. The basic elements of the model are: the class hierarchy of objects which exist in the system, properties of the objects, actions which can be performed on the objects, units of information (parameters) required by the actions, and pre- and postconditions for the actions.
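The basic elements just listed could be written down roughly as follows; the names are hypothetical and the sketch omits most of what UIDE actually records.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set


@dataclass
class ObjectClass:
    name: str
    parent: Optional[str] = None                                 # position in the class hierarchy
    properties: Dict[str, type] = field(default_factory=dict)    # properties of the object


@dataclass
class Action:
    name: str
    target_class: str                                            # class of object the action applies to
    parameters: Dict[str, type] = field(default_factory=dict)    # units of information the action needs
    preconditions: Set[str] = field(default_factory=set)         # predicates that must hold to enable it
    postconditions: Set[str] = field(default_factory=set)        # predicates the action makes true
```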
A variety of run-time and design-time uses have been made of the representation. For design time, tests have been developed for certain aspects of completeness, consistency and command reachability [4, 1]. UIDE can automatically organize menus and dialogue boxes [11], including use of style-guide knowledge encapsulated in a rule base [2]. It can automatically create an interface to the application, using menus, dialogue boxes, and direct manipulation [6]. It has been extended to evaluate the interface design with respect to speed of use, using a keystroke-model type of analysis which accounts for different interaction techniques and action sequences [16].
At run-time, UIDE can explain why a command is disabled (based on false predicates in its preconditions), and partially explain what a command does (based on the semantics implied by its preconditions, postconditions, and action class [4]). It can provide procedural help, via animation of a mouse and keyboard on the screen, taking into account the current application context [18, 19]. Specifically, the sequence of commands which must be executed to carry out a (potentially disabled) command is animated, based on back-chaining from the target command. Finally, it can control actual execution of the application, including enabling and disabling of menu items, as well as display of menus, dialogue boxes, and windows [6, 8].
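To illustrate the back-chaining idea in the simplest possible terms: starting from the target command, recursively satisfy each unsatisfied precondition with some command whose postconditions establish it. The helper below is a hypothetical sketch that ignores parameter binding, goal ordering, and failure handling.

```python
from typing import Dict, List, Set


def plan_help_sequence(target: str,
                       pre: Dict[str, Set[str]],    # command -> its preconditions
                       post: Dict[str, Set[str]],   # command -> its postconditions
                       state: Set[str]) -> List[str]:
    """Return a command sequence that enables, and ends with, the target command."""
    plan: List[str] = []

    def satisfy(command: str) -> None:
        for goal in pre.get(command, set()):
            if goal not in state:
                # pick some command whose postconditions establish the goal
                provider = next(c for c, effects in post.items() if goal in effects)
                satisfy(provider)
        plan.append(command)
        state.update(post.get(command, set()))       # simulate having executed the command

    satisfy(target)
    return plan


# Toy example: "paste" requires a full clipboard, which "copy" provides.
# plan_help_sequence("paste",
#                    pre={"paste": {"clipboard-full"}},
#                    post={"copy": {"clipboard-full"}, "paste": set()},
#                    state=set())   # -> ["copy", "paste"]
```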
PROGRESS TOWARD A UNIFIED MODEL
Both our groups start from a base of implemented software, which is written in terms of their own current generic model, and which processes declarative user interface design specifications written in the terminology defined by their generic model. Our work therefore begins with aligning the models, producing an initial knowledge base that merges the best representational approaches of each. For example, the ISI model has a richer and more flexible approach to specifying interactive dialogues, while Georgia Tech's is stronger when describing the effects of commands.
Our call for explicit user interface design models is an interesting application of the DARPA Knowledge Sharing Effort's development methodology for large knowledge-based systems [12]. In the Knowledge Sharing Effort's methodology, sharing and reuse of software is greatly facilitated by adopting a common ontology: i.e., a set of agreements about how to model the topic area. Their work is developing tools to facilitate the evolution of such ontologies, so there are compelling opportunities for that line of work to leverage user interface research and vice versa.
The problems in defining an ontology of user interface designs are to structure the design space into relatively orthogonal dimensions, and to provide a characterization of implications and interdependencies between design commitments. Structuring the design space organizes design tools so that any aspect of a design can be revised with minimal necessity to recode other aspects. Modeling implications and interdependencies lets design spaces be pruned more quickly, by using knowledge to restrict the search to alternatives consistent with current design commitments.
The MASTERMIND Generic Model
As it stands now, our models for interface development contain the following kinds of information.
Application Semantics. The application semantics is a description of the functional capabilities of the system as a set of objects and commands. In building a model of the application semantics for an interface design, the designer is making explicit what we earlier called the conceptual design of the system. That is, without making commitments about the appearance or behavior of the interface, the designer's model of application semantics captures abstract commitments about the capabilities that the interface will offer and the type of information it will allow users to see and manipulate. The MASTERMIND generic application semantics model defines the vocabulary in which these commitments can be expressed.
Figure 1 shows the part of that model representing objects, a fusion of the models in [21, 22] and [4, 6, 7]. The model contains a superset of the information contained in the definition of a class in typical object-oriented programming languages. Object class definitions typically state only the slots of an object and the types of values that each slot can contain. The additional knowledge represented in our model, in attributes such as formatter, slot-class and validator, is used by various components of the design and run-time tools.
For example, the formatter attribute contains knowledge that the interface software needs to translate between the internal representation of an object and textual forms (e.g., to construct the labels of menus that allow the user to choose from a set of objects). Parsers contain knowledge to convert from a textual representation of an object to its internal form, which is used by interfaces that allow the user to type in the identifier of an object. Validators attached to the object model tell how to check the consistency of values supplied when a user attempts to input an instance of that class. Organizing knowledge in this fashion facilitates prototyping of partial designs, because it allows the system to use class inheritance to fill in parsers and formatters from the generic model for use during execution of designs for which more application-specific methods have not yet been provided.
Two unusual pieces of knowledge in the model of object slots are the slot-class and the validator. The slot-class contains knowledge about the semantics of the slot that the presentation component can use to aid in the design of displays. For example, one kind of slot-class in our model is called Part-Of; it indicates that the values of the slot are in a part-of relationship with respect to the object. Such knowledge can be used to pick out certain presentation methods and rule out others.
The unique aspect of validators is that they contain, in addition to a procedure to test a condition (predicate), a specification of the error messages to show the user for the different error conditions that the validator can detect (error-conditions). Storing the error messages with the validator separates the representation of the error messages from the presentation techniques used to communicate them to the user. This gives the presentation component the flexibility to choose a presentation technique appropriate to the current situation.
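A sketch of how a slot description carrying this extra knowledge might look. The field names follow the attributes named in the text (slot-class, formatter, parser, validator, error-conditions), but the code itself is an illustration, not MASTERMIND's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class Validator:
    predicate: Callable[[object], Optional[str]]   # returns an error-condition name, or None if the value is valid
    error_conditions: Dict[str, str]               # error-condition name -> message text
    # Keeping the message text here, rather than in any particular dialog or widget,
    # leaves the presentation component free to choose how to show the error.


@dataclass
class SlotModel:
    name: str
    value_type: type
    slot_class: Optional[str] = None               # e.g. "Part-Of": slot semantics the presentation designer can exploit
    formatter: Callable[[object], str] = str       # internal value -> text; default inherited from the generic model
    parser: Callable[[str], object] = lambda s: s  # text -> internal value; likewise inherited if unspecified
    validator: Optional[Validator] = None          # consistency check for user-supplied values
```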
The object model, which comes mostly from UIDE, together with the presentation model, which comes mostly from HUMANOID, enables MASTERMIND to provide capabilities unavailable in UIDE or HUMANOID. For example, the object model provides design-time information that DON, the automatic dialogue-box generation component of UIDE, uses to group and select the interaction techniques in a dialogue box. Similar uses of the object model could be incorporated into HUMANOID, to increase HUMANOID's ability to automatically design displays, while conserving the context-sensitive presentation capabilities of HUMANOID.
Figure 2 shows the MASTERMIND command model, derived from HUMANOID's and UIDE's. Commands model the operations that can be performed on objects.
The command model contains knowledge about the inputs of a command, the conditions under which the command can be executed (preconditions, exceptions, validator), and the effects of the command (postconditions and side-effects). The run-time environment uses some of this knowledge to acquire values for inputs from the user: the legal values of the inputs (type, validator, alternatives, min, max), default values, parsers and formatters. Knowledge from the command model is also used to control the sequencing for acquiring the input values from the user.
The preconditions, postconditions, exceptions and side-effects provide knowledge about the semantics of an operation that can be used by many tools. For example, the animated help generation system uses preconditions and postconditions to figure out the sequence of actions that a user needs to perform to carry out a task. The presentation component enables and disables menu items when the preconditions of commands change. The help system can explain why a command is disabled based on unsatisfied preconditions and whether the values of inputs are incorrect or missing.
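An illustrative sketch of a command description and the "why is this command disabled?" query it supports; the names are hypothetical, and the real model also carries exceptions, min/max bounds, formatters, and so on.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class CommandInput:
    name: str
    value_type: type
    default: Optional[object] = None
    alternatives: Optional[List[object]] = None     # legal values, when they can be enumerated


@dataclass
class Command:
    name: str
    inputs: List[CommandInput] = field(default_factory=list)
    preconditions: List[Callable[[], bool]] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)
    side_effects: List[Callable[[], None]] = field(default_factory=list)


def explain_disabled(cmd: Command) -> List[str]:
    """Collect the unsatisfied preconditions so a help system can explain
    why the corresponding menu item is currently greyed out."""
    return [p.__name__ for p in cmd.preconditions if not p()]
```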
Presentation and Behavior. The presentation model describes the visual appearance of the interface, and the behavior model defines the gestures that can be applied to the objects presented, and the effects of those gestures on the state of the system and the interface.
Figure 3 shows MASTERMIND's merger of the presentation and manipulation models in HUMANOID and UIDE. A presentation is modeled as a composition of simpler presentations called parts. In addition to the parts, the model contains knowledge about the layout of the parts, the kind of data that the presentation can display, the contexts in which the presentation is appropriate (applicability-condition), the input behaviors associated with the presentation, and other presentations that might be more appropriate in certain contexts (refinements).
Each part of a presentation contains knowledge about the conditions under which the part should be included in the complete presentation (inclusion-condition), knowledge that allows a part to be replicated when the data to be presented is a list (replication-data), and knowledge about different choices of presentation methods for displaying that part.
The model of behaviors is based on the Garnet Interactors Model. Briefly, a behavior describes the area of a presentation where it is active, the events that invoke it and stop it, and the action to be executed (see [Garnet-Interactors] for more details).
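The two models might be combined along the following lines: a presentation composed of parts, each carrying inclusion and replication knowledge, with interactor-style behaviors attached. This is an illustrative reduction; the Garnet interactors model and MASTERMIND's presentation model are both considerably richer.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Part:
    name: str
    inclusion_condition: Callable[[object], bool] = lambda data: True   # should this part appear at all?
    replication_data: Optional[str] = None        # slot whose list value the part is replicated over
    presentation_choices: List[str] = field(default_factory=list)       # candidate methods for displaying the part


@dataclass
class Behavior:
    active_area: str                              # region of the presentation where the behavior is active
    start_event: str                              # e.g. "left-button-down"
    stop_event: str                               # e.g. "left-button-up"
    action: Callable[[object], None]              # effect on the state of the system and the interface


@dataclass
class Presentation:
    applicable_types: List[type]                  # kinds of data the presentation can display
    applicability_condition: Callable[[object], bool] = lambda data: True
    parts: List[Part] = field(default_factory=list)
    behaviors: List[Behavior] = field(default_factory=list)
    refinements: List["Presentation"] = field(default_factory=list)     # more specific presentations for special contexts
```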
The model of presentation and behavior is used by the run-time system to generate context-sensitive presentations by matching the types in the slots of objects with the types and predicates in the data attributes of presentations.
Together, the presentation and command models let MASTERMIND-based interfaces provide animated help for free. The animation generation works from the command model to figure out the sequence of steps to animate, and from the presentation model to construct the contents of the animation. Animation generation is a compelling example of the benefit of the MASTERMIND approach because it piggybacks on knowledge that is in the model for other purposes.
Sequencing and Action Side-Effects. Sequencing defines the order in which input behaviors are enabled. Action side-effects refer to actions that an interface performs automatically as side effects of the action of a manipulation (e.g., a newly created object can become automatically selected).
Our model of sequencing and side-effects is described in detail in [HUMANOID] and [UIDE]. The main feature of the model, which distinguishes it from the models used in other UIMSs, is that sequencing is not represented explicitly, either as a finite state machine or an event system. Instead, sequencing constraints are derived from the data flow constraints specified in the system functionality model (e.g., a command cannot be invoked unless all its inputs are correct) and from the preconditions, postconditions and exceptions of a command. Additional sequencing constraints (e.g., that certain inputs should be prompted for in sequence) are defined by annotating groups (Figure B) with declarative descriptions of the sequencing desired.
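A minimal sketch of what "derived rather than stored" means for sequencing: whether a command's manipulations are enabled is computed on demand from the data-flow state of its inputs and from its precondition predicates. The helper below is hypothetical, not the actual implementation.

```python
from typing import Callable, Dict, List


def command_enabled(command: str,
                    input_valid: Dict[str, List[bool]],                  # command -> validity of each of its inputs
                    preconditions: Dict[str, List[Callable[[], bool]]],  # command -> precondition predicates
                    ) -> bool:
    """A command is enabled exactly when its data-flow constraints are met
    (all inputs supplied and correct) and all of its preconditions hold;
    no finite state machine or event table is consulted."""
    return all(input_valid.get(command, [])) and all(p() for p in preconditions.get(command, []))
```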
Planned Extensions to the Model. A number of design issues are not currently covered in our model. Our future plans include coverage of: policies that define global style characteristics of the interface, characteristics of the delivery platforms, end-user characteristics and preferences, and user tasks.
Design and Execution in MASTERMIND
In our model-based approach, interface developers specify interfaces by modeling the desired features declaratively in terms defined in the generic knowledge base. Unlike the traditional approach to interface construction, where programmers spend most of the effort writing and debugging procedural code, our goal is for developers using MASTERMIND to spend the bulk of their effort writing declarative specifications that extend and specialize the generic model. As these specifications evolve, the tools that we described in the previous section can interpret those specifications to provide assistance in critiquing the design, executing and evaluating partially specified designs, and managing the activities necessary for extending the specifications.
The run-time environment of an application developed with MASTERMIND consists of a standard software module that is a component of every application program. The run-time component module uses the model of the application and its interface, along with knowledge about the state of the application program's run-time data structures, in order to generate and control the interface of the application, interpret inputs, and provide help to the end user.
To interpret inputs, the run-time system uses the presentation model to map the input event into the application data referenced by it, and triggers the appropriate commands according to the application's model of behavior and sequencing.
To produce or update the display of an application data structure, the run-time system queries the model for a presentation component capable of displaying the data structure. The model returns the most specific presentation component suitable for displaying the data structure in the given context (e.g., taking into account data type congruence and size restrictions), and the run-time system uses it to produce or update the display. Note that the presentation component obtained from the model might be either a default inherited from MASTERMIND's generic knowledge base or a more specific presentation component specified by an interface's designer. This mechanism is the key to two valuable properties of MASTERMIND: (1) built-in support for context-sensitive presentation; and (2) the ability to generate default behavior that fills in for deferred design commitments, thereby making even incomplete specifications executable and testable.
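The lookup just described can be sketched as a search from a generic presentation component down through its refinements, returning the most specific component whose type and applicability tests accept the data; if no designer-supplied refinement matches, the inherited default is what remains. The code assumes the Presentation class sketched earlier and is illustrative only.

```python
from typing import Optional


def select_presentation(data: object, candidate) -> Optional[object]:
    """Depth-first search through a presentation's refinements; the deepest
    applicable refinement wins, so designer-supplied components override
    the generic default that they refine."""
    applicable = (any(isinstance(data, t) for t in candidate.applicable_types)
                  and candidate.applicability_condition(data))
    if not applicable:
        return None
    best = candidate
    for refinement in candidate.refinements:
        more_specific = select_presentation(data, refinement)
        if more_specific is not None:
            best = more_specific
    return best
```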
RELATED WORK
Other user interface management systems which derive the user interface from a high-level specification of the semantics of a program are MIKE [15] and UofA* [17], which are able to generate a default interface from a minimal application description, and provide a few parameters that a designer can set to control the resulting interface.
Our model of commands allows designers to exert much finer control over dialogue sequencing. In addition, we provide a library of command groups that allows designers to very easily specify the dialogue structures that MIKE and UofA* support. We also provide finer control over presentation design, and offer richer descriptions of application semantics that can be used to support more sophisticated design tools.
Interface builders such as the NeXT Interface Builder [14] and Open Interface [13] are a different class of tools to aid in the design of interfaces. These tools make it very easy to construct the particular interfaces that they support, but are very poor for design exploration. Designers have to commit to particular presentation, layout and interaction techniques early in the design. Making global policy changes, such as changing the way choices are presented, is difficult because it requires manually editing a large number of displays. A model-based approach handles both these problems.
CONCLUSIONS
An overall architecture centered around an all-encompassing design model would provide integration and continuity across the entire lifecycle of a user interface, in addition to enabling more powerful results within each phase. Today's interface development environments are primitive with respect to what is needed.
The state of the art today is an architecture consisting of a library of low-level objects like menus and buttons, and specification and prototyping tools consisting of aids for drawing what individual displays should look like. Prototyping in today's environment really means that -- if enough code is also written -- you can test the interface before the application is done. However, it does not mean that it is easy to experiment with different interface designs or easily see how a partially conceptualized design might look.
The opportunity exists to go far beyond this, not by throwing away that architecture, but by building on it. MASTERMIND is a first step in that direction.
MASTERMIND is best thought of as a framework that others can build on, with some pieces instantiated. The framework supports design, execution, help, and maintenance for well-designed user interfaces to advanced applications. We have identified certain design tools that fill missing needs: visual aids for developing design models, tools for managing and automating multi-step design refinements, and critics based on design policies. Because the framework is open, other tools can be added later. For example, if the psychological research on analyzing the usability of proposed designs matures to the point where it enables creation of automated design usability critics, the extensible nature of the MASTERMIND design makes it feasible for other researchers to add those tools.
Similar observations apply to the MASTERMIND run-time environment. For example, as psychological research progresses on identifying bottlenecks in the use of implemented designs, it should be easily possible to augment the run-time environment in order to collect and analyze performance data from users interacting with the interface system. In fact, the user task models that we plan to build in order to support more sophisticated help and interactive guidance may well contribute to such an extension.
For these reasons, MASTERMIND offers a valuable path toward a comprehensive, interoperable suite of tools, what the recent ISAT study on intelligent interfaces referred to as a knowledgeable development environment [9].
Although MASTERMIND only instantiates a portion of that comprehensive framework, it has significant merit in its own right. Among other innovations, it will represent a major step toward explicit representation and support for the early, conceptual phases of design. Designers can partially describe their designs, by providing descriptions of application functionality and data structures or by using abstractions about presentation, manipulation, or sequencing. Because this approach allows execution and testing of partially-specified designs, because it also facilitates exploration of design alternatives, and because it allows stating and enforcing high-level design policies, MASTERMIND will facilitate rapid production of much more thoughtfully designed user interfaces.
MASTERMIND uses the knowledge created in the design process to provide useful run-time services, such as context-sensitive presentation and help, which would not be possible without a design model.
An integrated set of easy-to-use tools with the above properties would provide a much faster and cheaper path to the creation of usable, maintainable, and better-adapted interfaces. Knowledgeable development environments would dramatically change the nature of interface system development. They would ease the task of initial design. They would let design and evolution extend throughout the lifecycle, and they would soften unhealthy boundaries between designers and end users.
The result would be improvements in the quality, cost, and production time of advanced user interfaces. Quality would increase because first-pass interface designs would be better, because there would be more opportunity to iteratively refine the designs, and because end users would have greater participation and influence in ensuring that their needs and limitations were addressed. Cost would decrease because interfaces could be developed and tested much more quickly, because better adaptivity to task requirements would simplify training, and because better design would enhance user productivity. Production time would be reduced because prototyping would be faster and more complete, because the distinction between prototypes and deployed systems could be blurred or eliminated, and because generation of informational materials would not entail extra effort. Thus, we believe that much more powerful systems can be built much more quickly in the future -- if two conditions are met:
• we organize our development and maintenance tools around explicit models
• we begin, as a community, to work towards sharing common models
Doing so will allow the research community to compose our tools together to create development and maintenance environments far superior to what any of us could build alone.
ACKNOWLEDGEMENTS
The research reported in this paper was supported by DARPA through Contract Numbers NCC 2-719 and N00174-91-0015 at ISI, and by grants from SUN, Siemens, and the State of Georgia at Georgia Tech.
REFERENCES
[1] Braudes, R.E., and J.L. Sibert, "ConMod: A System for Conceptual Consistency Verification and Communication," SIGCHI Bulletin, 23(1), January 1991, pp. 92-94.
[2] DeBaar, D., K. Mullet, and J. Foley, Coupling Application Design and User Interface Design, Proceedings CHI'92 - SIGCHI 1992 Computer Human Interaction Conference, ACM, New York, NY, 1992, in press.
[3] Foley, J., C. Gibbs, W. Kim, and S. Kovacevic, A Knowledge Base for a User Interface Management System, Proceedings CHI'88 - 1988 SIGCHI Computer-Human Interaction Conference, ACM, New York, 1988, pp. 67-72.
[4] Foley, J., W. Kim, S. Kovacevic, and K. Murray, Designing Interfaces at a High Level of Abstraction, IEEE Software, 6(1), January 1989, pp. 25-32.
[5] Foley, J., A. van Dam, S. Feiner, and J. Hughes, Computer Graphics - Principles and Practice, Addison-Wesley, Reading, MA, 1990.
[6] Foley, J., W. Kim, S. Kovacevic, and K. Murray, UIDE - An Intelligent User Interface Design Environment, in J. Sullivan and S. Tyler (eds.), Architectures for Intelligent User Interfaces: Elements and Prototypes, Addison-Wesley, Reading, MA, 1991, pp. 339-384.
[7] Foley, J., D. Gieskens, W. Kim, S. Kovacevic, L. Moran, and P. Sukaviriya, A Second-Generation Knowledge Base for the User Interface Design Environment, GWU-IIST-91-13, Dept. of Electrical Engineering and Computer Science, The George Washington University, Washington DC 20052, 1991.
[8] Gieskens, D. and J. Foley, Controlling User Interface Objects Through Pre- and Postconditions, Proceedings CHI'92 - SIGCHI 1992 Computer Human Interaction Conference, ACM, New York, NY, 1992, in press.
[9] Intelligent User Interfaces, ISI/RR-91-288, USC/ISI, 4676 Admiralty Way, Marina del Rey, CA 90292, September 1991.
[10] John, B.E., Extensions of GOMS Analyses to Expert Performance Requiring Perception of Dynamic Visual and Auditory Information, Proceedings of ACM CHI'90 Conference on Human Factors in Computing Systems, pp. 107-115.
[11] Kim, W. and J. Foley, DON: User Interface Presentation Design Assistant, Proceedings SIGGRAPH Symposium on User Interface Software and Technology, ACM, New York, 1990, pp. 10-20.
[12] Neches, R., R. Fikes, T. Finin, T. Gruber, R. Patil, T. Senator, and W.R. Swartout, Enabling Technology for Knowledge Sharing, AI Magazine, 12(3), Fall 1991, pp. 36-56.
[13] Neuron Data, Inc., Open Interface Toolkit, 156 University Ave., Palo Alto, CA 94301, 1991.
[14] NeXT, Inc., Interface Builder, Palo Alto, CA, 1990.
[15] Olsen, D., MIKE: The Menu Interaction Kontrol Environment, ACM Transactions on Graphics, vol. 17, no. 3, pp. 43-50, 1986.
[16] Senay, H., P. Sukaviriya, and L. Moran, Planning for Automatic Help Generation, Proceedings of Working Conference on Engineering for Human-Computer Interaction, IFIP, August 1989.
[17] Singh, G. and M. Green, A High-level User Interface Management System, Proceedings SIGCHI'89, April 1989, pp. 133-138.
[18] Sukaviriya, P., Dynamic Construction of Animated Help from Application Context, Proceedings of ACM SIGGRAPH 1988 Symposium on User Interface Software and Technology (UIST '88), ACM, New York, NY, 1988, pp. 190-202.
[19] Sukaviriya, P. and J. Foley, Coupling a UI Framework with Automatic Generation of Context-Sensitive Animated Help, Proceedings of ACM SIGGRAPH 1990 Symposium on User Interface Software and Technology (UIST '90), ACM, New York, 1990, pp. 152-156.
[20] Szekely, P., Standardizing the Interface Between Applications and UIMS's, Proceedings UIST'89, November 1989, pp. 34-42.
[21] Szekely, P., Template-based Mapping of Application Data to Interactive Displays, Proceedings UIST'90, October 1990, pp. 1-9.
[22] Szekely, P., P. Luo, and R. Neches, Facilitating the Exploration of Interface Design Alternatives: The HUMANOID Model of Interface Design, Proceedings of CHI'92, The National Conference on Computer-Human Interaction, May 1992, pp. 507-515.