s and less usable so long as the interface and its associated interaction techniques continue to be conflated with the physical characteristics of the device itself. In attempting to resolve this dilemma, we have been exploring ways of decoupling the interaction techniques from the physical characteristics of the devices.
E. O'Neill (&) · M. Kaenampornpan · V. Kostakos · A. Warr · D. Woodgate
Human-Computer Interaction Group, Department of Computer Science, University of Bath, Bath BA2 7AY, UK
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
Pers Ubiquit Comput (2006) 10: 269–283
DOI 10.1007/s00779-005-0048-1

In our current research, we assume that there will be an increasing convergence between mobile/wearable computing and ubiquitous computing. For many
applications, the user may want to use, say, the wall display in the hospital waiting room or café with the high bandwidth connection, rather than the tiny display on her PDA with its relatively poor connectivity. For other applications, the user may prefer to take advantage
of the characteristics of her mobile device. Indeed, some applications may be most usable through simultaneous use of a combination of ubiquitous and mobile computing power. In the context of converging mobile and ubiquitous technologies, this implies developing input and output techniques that will work with devices ranging from the smallest wearable computer or smart ring with no visual display to a wall-sized display driven by a powerful fixed-location computer in a shop or street or hospital. Again, this motivates us to decouple the
interaction technique from the particular devices. Ideally, we should have a range of common, usable interaction techniques that operate across the gamut of desktop, mobile, wearable and ubiquitous devices.

In our recent work [3], we have developed a gesture-based input technique that attempts to achieve this goal. Clearly, however, we also need to consider output and in
the work reported here, we have gone on to combine this gestural input technique with speech output. We propose that this combination of gestural input and speech output will satisfy our goal of decoupling interaction technique from device, providing a common, usable interface. To test this proposal, we implemented these interaction techniques in a prototype system developed from our field studies in a hospital Accident and Emergency (A&E) Department [4]. This paper reports
an experimental evaluation of this prototype, investigating the effect of the presence or absence of a graphical user interface (GUI).

Multimodal interaction is likely to become increasingly important as a wide range of different people use a wide range of mobile, wearable and ubiquitous devices
in a wide range of different situations, in many of which a visual display may not be effective or available at all. In addition to the difficulties noted above of producing a usable visual display for mobile and wearable devices, ubiquitous systems have their own problems with visual interaction. The most fundamental of these is that wireless technologies of various kinds, from Bluetooth to 802.11 to UMTS, enable the delivery of information and services in many, many more locations than one can expect to find visual displays through which to interact with these services.