David Ahlström
Publications
Vo, T., Jannat, M.-E., Ahlström, D., and Hasan, K. • 2022
FaceUI: Leveraging Front-Facing Camera Input to Access Mid-Air Spatial Interfaces on Smartphones
Graphics Interface 2022 thumbnail
In Proceedings of Graphics Interface 2022 (Montréal, Québec, Canada, 17–19 May, 2022), Canadian Information Processing Society, pp. 1–12.
Keywords: smartphones • face-based input • spatial user interfaces
Download: open-access
Abstract
Video
We present FaceUI, a novel strategy to access mid-air face-centered spatial interfaces with off-the-shelf smartphones. FaceUI uses the smartphone's front-facing camera to track the phone's mid-air position relative to the user's face. This self-contained tracking mechanism opens up new opportunities to enable mid-air interactions on off-the-shelf smartphones. We demonstrate one possibility that leverages the empty mid-air space in front of the user to accommodate virtual windows which the user can browse by moving the phone in the space in front of their face. We inform our implementation of FaceUI by first studying essential design factors, such as the comfortable face-to-phone distance range and appropriate viewing angles for browsing mid-air windows and visually accessing their content. After that, we compare users' performance with FaceUI to their performance when using a touch-based interface in an analytic task that requires browsing multiple windows. We find that FaceUI offers better performance than the traditional touch-based interface. We conclude with recommendations for the design and use of face-centered mid-air interfaces on smartphones.
Hasan, K., Mondal, D., Khatra, K., Ahlström, D., and Neustaedter, C. • 2021
CoAware: Designing Solutions for Being Aware of a Co-Located Partner's Smartphone Usage Activities
Graphics Interface 2021 thumbnail
In Proceedings of Graphics Interface 2021 (virtual event, 28–29 May, 2021), Canadian Information Processing Society, pp. 46–55.
Keywords: smartphone usage • co-located usage • strategies & tools
Abstract
Video
There is a growing concern that smartphone usage in front of family or friends can be bothersome and can even harm relationships. We report on a survey examining smartphone usage behavior and the problems that arise from overuse when partners (married couples, common-law relationships) are co-located. Results show that people have various expectations of their partner, and often feel frustrated when their partner uses a smartphone in front of them. Study participants also reported a lack of smartphone activity awareness that could help them decide when or how to communicate expectations to their partner. This motivated us to develop an app, CoAware, for sharing smartphone activity-related information between partners. In a lab study with couples, we found that CoAware has the potential to improve smartphone activity awareness among co-located partners. In light of the study results, we suggest design strategies for sharing smartphone activity information among co-located partners.
Hasan, K., Mondal, D., Ahlström, D., and Neustaedter, C. • 2020
An Exploration of Rules and Tools for Family Members to Limit Co-Located Smartphone Usage
Augmented Human 2020 thumbnail
In Proceedings of the 11th Augmented Human International Conference (Winnipeg, Canada, 27–28 May, 2020), ACM Press, Article No. 7, pp. 1–8.
Keywords: smartphone usage • co-located usage • strategies & tools
Abstract
Smartphones play an increasingly large role in our lives, shaping our interactions with friends and family members. Though smartphones facilitate seamless communication, there is a growing concern that people overuse smartphones in front of family members, which can sometimes harm family relationships. We report on a survey examining smartphone usage among three types of family members: children, partners, and other adults living in the household. We examine the rules and tools that they use to reduce their smartphone usage. Results show that people have many rules to limit their co-located smartphone usage. However, the rules vary widely between the three types of family members. Furthermore, participants reported a lack of smartphone-based tools to help them reduce smartphone usage. Considering these results, we suggest recommendations for designing smartphone-based tools intended to help reduce co-located smartphone usage within families.
Ahlström, D., Hasan, K., Lank, E., and Liang, R. • 2018
TiltCrown: Extending Input on a Smartwatch with a Tiltable Digital Crown
MUM 2018 thumbnail
In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia (Cairo, Egypt, 25–28 November, 2018), ACM Press, pp. 359–366.
Keywords: smartwatches • digital crowns • radial interfaces • item selection • isometric joystick
Abstract
Many smartwatches have a digital crown as a complementary input modality to the touchscreen. While the crown eliminates issues of touchscreen occlusion and imprecision, i.e., the 'fat-finger' problem, crown input is limited as it only supports bi-directional rotation along a single dimension. We present TiltCrown, a crown prototype that increases a smartwatch's input bandwidth through an isometric joystick. TiltCrown supports angular tilt, rotation, and button-press events for touchless smartwatch interaction. In a user study we explore how accurately and how quickly users can perform item-selection tasks using TiltCrown.
Peshkova, E., Hitz, M., Ahlström, D., Alexandrowicz, A., and Kopper, A. • 2017
Exploring Intuitiveness of Metaphor-Based Gestures for UAV Navigation
RO-MAN 2017 thumbnail
In Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (Lisbon, Portugal, 28 August – 1 September, 2017), IEEE, pp. 175–182.
Keywords: unmanned aerial vehicle • user-centered design of robots • cognitive skills • mental models
Abstract
We investigate how the use of metaphors supports the intuitiveness of gesture input vocabularies for Unmanned Aerial Vehicle (UAV) navigation. We compare gesture sets built around a single metaphor to gesture sets based on mixed metaphors in terms of their respective intuitiveness. To this end, we implemented a 3D simulator to check how well novice users steer a UAV without knowing the valid gestures, using only a hint about the underlying metaphor. We compared their task completion times (an indirect assessment of intuitiveness) with those achieved after studying a gesture set consisting of gestures from several metaphors. We also analyzed users' feedback from questionnaires (a direct assessment of intuitiveness) to further compare single-metaphor gesture sets with mixed-metaphor gesture sets. The results of the study support our hypothesis that a metaphor-based approach is an expedient means for gesture-based UAV navigation.
Hasan, K., Ahlström, D., Kim, J., and Irani, P. • 2017
AirPanes: Two-Handed Around-Device Interaction for Pane Switching on Smartphones
CHI 2017 thumbnail
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA, 6–11 May, 2017), ACM Press, pp. 679–691.
Keywords: around-device interaction • in-air input • two-handed mobile interaction • analytic interfaces
Acceptance Rate: 25% (600/2400)
Abstract
Video
In recent years, around-device input has emerged as a complement to standard touch input, albeit in limited tasks and contexts, such as item selection or map navigation. We push the boundaries of around-device interactions to facilitate an entire smartphone application: browsing through large information lists to make a decision. To this end, we present AirPanes, a novel technique that allows two-handed in-air interactions, conjointly with touch input, to perform analytic tasks, such as making a purchase decision. AirPanes resolves the inefficiency of having to switch between multiple views or panes in common smartphone applications. We explore the design factors that make AirPanes efficient. In a controlled study, we find that AirPanes is on average 50% more efficient than standard touch input for an analytic task. We offer recommendations for implementing AirPanes in a broad range of applications.
Han, T., Ahlström, D., Yang, X.-D., Byagowi, A., and Irani, P. • 2016
Exploring Design Factors for Transforming Passive Vibration Signals into Smartwear Interactions
NordiCHI 2016 thumbnail
In Proceedings of NordiCHI '16, the 9th Nordic Conference on Human-Computer Interaction (Gothenburg, Sweden, 23–27 October, 2016), ACM Press, Article No. 35, 10 pages.
Keywords: always-available input • passive vibrational signals • design factors • wearable devices
Acceptance Rate: 25% (58/231)
Abstract
Video
Vibrational signals that are generated when a finger is swept over an uneven surface can be reliably detected via low-cost sensors in proximity to the interaction surface. Such interactions provide an alternative to touchscreens by enabling always-available input. In this paper we demonstrate that Inertial Measurement Units (IMUs) embedded in many off-the-shelf smartwear are well suited for capturing the vibrational signals generated by a user's finger swipes, even when the IMU sits in a smartring or smartwatch. In comparison to acoustic-based approaches for capturing vibrational signals, IMUs are sensitive to a vast number of factors, in terms of both the surface and the swipe properties, when the interaction is carried out. We contribute by examining, through three user experiments, the impact of these surface and swipe properties, including surface bump height and density, surface stability, sensor location, and swipe style and direction. Based on our results, we present a number of usage scenarios to demonstrate how this approach can be used as always-available input for digital interactions.
Ens, B., Ahlström, D., and Irani, P. • 2016
Moving Ahead with Peephole Pointing: Effects of Head-Worn Display Field of View Limitation on Object Selection
SUI 2016 thumbnail
In Proceedings of the 4th ACM Symposium on Spatial User Interaction (Tokyo, Japan, 15–16 October, 2016), ACM Press, pp. 107–110.
Keywords: head-worn display • field of view • peephole pointing • Fitts' law • performance modelling
Acceptance Rate: 26% (20/77)
Abstract
Head-worn displays (HWDs) are now becoming widely available, which will allow researchers to explore sophisticated interface designs that support rich user productivity features. In a large virtual workspace, the limited available field of view (FoV) may cause objects to be located outside of the available viewing area, requiring users to first locate an item using head motion before making a selection. However, FoV varies widely across different devices, with an unknown impact on interface usability. We present a user study to test two-step selection models previously proposed for "peephole pointing" in large virtual workspaces on mobile devices. Using a CAVE environment to simulate the FoV restriction of stereoscopic HWDs, we compare two different input methods, direct pointing and raycasting, in a selection task with varying FoV width. We find that the models fit our data very well in this context, with prediction accuracy comparable to the original studies and much better than the traditional Fitts' law model. We detect an advantage of direct pointing over raycasting, particularly with small targets. Moreover, we find that this advantage of direct pointing diminishes with decreasing FoV.
Hasan, K., Kim, J., Ahlström, D., and Irani, P. • 2016
Thumbs-Up: 3D Spatial Thumb-Reachable Space for One-Handed Thumb Interaction on Smartphones
SUI 2016 thumbnail
In Proceedings of the 4th ACM Symposium on Spatial User Interaction (Tokyo, Japan, 15–16 October, 2016), ACM Press, pp. 103–106.
Keywords: one-handed spatial interaction • around-device interaction • thumb input • reachability • limited multi-touch interaction • occlusion
Acceptance Rate: 26% (20/77)
Abstract
People very often use mobile devices with one hand to free the second hand for other tasks. In such cases, the thumb of the hand holding the device is the only available input finger, making multi-touch interactions impossible. Complicating interaction further, the screen area that can be reached with the thumb while holding the device is limited, which makes distant on-screen areas inaccessible. Motivated by emerging portable object-tracking technologies, we explore how spatial mid-air thumb gestures could potentially be used in combination with on-screen touch input to facilitate one-handed interaction. From a user study we identify the 3D thumb-reachable space when holding a smartphone. We call this space "Thumbs-Up". This space extends up to 7 mm above the screen, making it possible to create interactions for the thumb of the hand holding the smartphone. We furthermore demonstrate how such Thumbs-Up techniques, when combined with on-screen interaction, can extend the input vocabulary in one-handed situations.
Peshkova, E., Hitz, M., and Ahlström, D. • 2016
Exploring User-Defined Gestures and Voice Commands to Control an Unmanned Aerial Vehicle
INTETAIN 2016 thumbnail
In Proceedings of the 8th International Conference on Intelligent Technologies for Interactive Entertainment (Utrecht, the Netherlands, 28–30 June, 2016), Springer, pp. 47–62.
Keywords: unmanned aerial vehicle • voice and gesture commands • mental models
Abstract
In this paper we follow a participatory design approach to explore what novice users find to be intuitive ways to control an Unmanned Aerial Vehicle (UAV). We gather users' suggestions for suitable voice and gesture commands through an online survey and a video interview, and we also record the voice commands and gestures used by participants in a Wizard of Oz experiment in which participants thought they were manoeuvring a UAV. We identify commonalities in the data collected from the three elicitation methods and assemble a collection of voice and gesture command sets for navigating a UAV. Furthermore, to obtain a deeper understanding of why our participants chose the gestures and voice commands they did, we analyse and discuss the collected data in terms of mental models and identify three prevailing classes of mental models that likely guided many of our participants in their choice of voice and gesture commands.
Hasan, K., Ahlström, D., and Irani, P. • 2015
Comparing Direct Off-Screen Pointing, Peephole, and Flick&Pinch Interaction for Map Navigation
★ BEST SHORT PAPER AWARD ★
SUI 2015 thumbnail
In Proceedings of the 3rd ACM Symposium on Spatial User Interaction (Los Angeles, CA, USA, 8–9 August, 2015), ACM Press, pp. 99–102.
Keywords: spatial interaction • direct off-screen pointing • peephole displays
Acceptance Rate: 35% (17/48)
Abstract
Video
Navigating large workspaces with mobile devices often requires users to access information that spatially lies beyond the device's viewport. To browse information on such workspaces, two prominent spatially-aware navigation techniques, peephole and direct off-screen pointing, have been proposed as alternatives to the standard on-screen flick and pinch gestures. Previous studies have shown that both techniques can outperform on-screen gestures in various user tasks, but no prior study has compared the three techniques in a map-based analytic task. In this paper, we examine these two spatially-aware techniques and compare their efficiency to on-screen gestures in a map navigation and exploration scenario. Our study demonstrates that peephole and direct off-screen pointing allow for 30% faster navigation times between workspace locations and that on-screen flick and pinch is superior for accurate retrieval of workspace content.
Hudelist, M.A., Schoeffmann, K., Ahlström, D., and Lux, M. • 2015
How Many, What and Why? Visual Media Statistics on Smartphones and Tablets
ICME 2015 thumbnail
In 2015 IEEE International Conference on Multimedia and Expo Workshops (Turin, Italy, 29 June – 3 July, 2015), IEEE, 6 pages.
Keywords: surveys • smartphones and tablets • photos and videos
Abstract
The focus of our research is on improving mobile image and video browsing interfaces. To get a better idea about real-world mobile photo and video scenarios, and to base our research on real-world numbers, we performed a survey of photo and video usage on smartphones and tablets. In an online survey we asked 215 participants from the German-speaking region about their mobile image collections, their usage patterns, and their motives and intentions when capturing photos. Our results show, among other things, that users store considerably more photos on smartphones than on tablets, that the majority of our participants use their smartphone as their primary camera, and that users are unlikely to organize the photos on their mobile devices in any way. Moreover, the most popular motifs are people, holiday photos, events, and landscapes. Furthermore, it is more popular to capture photos for private purposes than for sharing. We also report on various correlation hypotheses that we tested on the gathered data.
Hasan, K., Ahlström, D., and Irani, P. • 2015
SAMMI: A Spatially-Aware Multi-Mobile Interface for Analytic Map Navigation Tasks
MobileHCI 2015 thumbnail
In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (Copenhagen, Denmark, 24–27 August, 2015), ACM Press, pp. 36–45.
Keywords: spatial interaction • around-device interaction • peephole interaction • analytic interfaces
Abstract
Video
Motivated by the rise in the variety and number of mobile devices that users carry, we investigate scenarios in which operating these devices in a spatially interlinked manner can lead to interfaces with new advantages. Our exploration focuses on the design of SAMMI, a spatially-aware multi-device interface to assist with analytic map navigation tasks, where, in addition to browsing the workspace, the user has to make a decision based on the content embedded in the map. We focus primarily on the design space for spatially interlinking a smartphone with a smartwatch. As both devices are spatially tracked, the user can browse information by moving either device in the workspace. We identify several design factors for SAMMI and, through a first study, explore how best to combine them for efficient map navigation. In a second study we compare SAMMI to the common Flick-&-Pinch gestures in an analytic map navigation task. Our results reveal that SAMMI is an efficient spatial navigation interface and, by means of an additional spatially tracked display, can facilitate quick information retrieval and comparisons. We finally demonstrate other potential use cases for SAMMI that extend beyond map navigation to facilitate interaction with spatial workspaces.
Schoeffmann, K., Ahlström, D., and Hudelist, M.A. • 2014
3D Interfaces to Improve the Performance of Visual Known-Item Search
IEEE Transactions 2014 thumbnail
IEEE Transactions on Multimedia, 16(7), IEEE, pp. 1942–1951.
Keywords: image and video browsing • image and video search and retrieval • known-item search
Abstract
Most interfaces in the field of image and video search use a two-dimensional grid interface, which presents image thumbnails in a left-to-right arrangement that can be browsed from top to bottom. This grid interface, however, has several drawbacks that become particularly apparent when performing interactive search tasks for target items in large collections of images or videos. Therefore, we propose 3D interfaces as an alternative to the grid interface for interactive known-item search in visual data, as they can partially overcome these drawbacks. In this paper, we first summarize our ideas and discuss design aspects of a 3D ring and a 3D globe interface. Next, we present results from four different user studies in which we evaluated the performance of these interfaces for known-item search tasks in image collections. Our results from these studies show that the proposed 3D interfaces allow for significantly faster visual target search on desktop computers with mouse interaction as well as on tablet devices. The interfaces also achieve better subjective ratings. However, our evaluation also shows that on smartphones with 3.5-inch screens an improvement over the grid interface in terms of visual search time is only possible in collections with more than 200 images.
Ahlström, D., Hasan, K., and Irani, P. • 2014
Are You Comfortable Doing That?: Acceptance Studies of Around-Device Gestures in and for Public Settings
MobileHCI 2014 thumbnail
In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services (Toronto, Canada, 23–26 September, 2014), ACM Press, pp. 193–202.
Keywords: around-device input • user acceptance • gesture design
Abstract
Several research groups have demonstrated advantages of extending a mobile device's input vocabulary with in-air gestures. Such gestures show promise but are not yet being integrated into commercial devices. One reason for this might be uncertainty about users' perceptions regarding the social acceptance of such around-device gestures. In three studies, performed in public settings, we explore users' and spectators' attitudes about using around-device gestures in public. The results show that people are concerned about others' reactions. They are also sensitive and selective regarding where and in front of whom they would feel comfortable using around-device gestures. However, acceptance and comfort are strongly linked to gesture characteristics, such as gesture size, duration, and in-air position. Based on our findings we present recommendations for around-device input designers and suggest new approaches for evaluating the social acceptability of novel input methods.
Hudelist, M.A., Schoeffmann, K., and Ahlström, D. • 2014
Evaluating Alternatives to the 2D Grid Interface for Mobile Image Browsing
Int. Journal of Semantic Computing 2014 thumbnail
International Journal of Semantic Computing, 8(2), World Scientific Publishing, pp. 185–208.
Keywords: image browsing • mobile devices • touchscreens
Abstract
In recent years point-and-shoot cameras have largely been replaced by smartphones and tablets. For example, the first four devices in the device chart of the Flickr photo-sharing site are smartphones. Eight-megapixel sensors are already standard in high-end smartphones, and in 2012 Nokia released a smartphone with a 41-megapixel sensor, aiming at product categories above point-and-shoot cameras. Since smartphones and tablets are easy to use and highly portable, they are an easy choice for shooting images on the go. As a result, image collections on these devices grow fast, but the default grid-based image browsing interfaces are increasingly overwhelmed by the number of images and make it very difficult to find images quickly and effortlessly. We therefore investigate the performance of new image browsing interfaces for smartphones and tablets, which utilize simple content-based sorting and 3D visualization to improve search time. In two user studies we compare a color-sorted 3D globe, a 3D ring, a zoomable image-pane interface, and a traditional grid-based image browsing interface. Our evaluations on tablets and smartphones show that 3D visualization in particular can reduce search time when there is enough screen space. On the other hand, it loses its advantage on smaller screens.
Schoeffmann, K., Ahlström, D., Bailer, W., Cobârzan, C., Hopfgartner, F., McGuinness, K., Gurrin, C., Frisson, C., Le, D.-D., Del Fabro, M., Bai, H., and Weiss, W. • 2014
The Video Browser Showdown: A Live Evaluation of Interactive Video Search Tools
Int. Journal of Multimedia Information Retrieval 2014 thumbnail
International Journal of Multimedia Information Retrieval, 3(2), Springer, pp. 113–127.
Keywords: video browsing • video search • video retrieval • exploratory search
Abstract
The Video Browser Showdown evaluates the performance of exploratory video search tools on a common data set, in a common environment, and in the presence of an audience. The main goal of this competition is to enable researchers in the field of interactive video search to directly compare their tools at work. In this paper, we present results from the second Video Browser Showdown (VBS2013) and describe and evaluate the tools of all participating teams in detail. The evaluation results give insights into how exploratory video search tools are used and how they perform in direct comparison. Moreover, we compare the achieved performance to results from another user study in which 16 participants employed a standard video player to complete the same tasks as performed in VBS2013. This comparison shows that the sophisticated tools enable better performance in general, but for some tasks common video players provide similar performance and could even outperform the expert tools. Our results highlight the need for further improvement of professional tools for interactive search in videos.
Hudelist, M.A., Schoeffmann, K., and Ahlström, D. • 2013
Evaluation of Image Browsing Interfaces for Smartphones and Tablets
Int. Symposium on Multimedia 2013 thumbnail
In Proceedings of the IEEE International Symposium on Multimedia (Anaheim, USA, 9–11 December, 2013), IEEE, 8 pages.
Keywords: image browsing • mobile devices • touchscreens
Abstract
Smartphones and tablets are popular devices. As lightweight, compact devices with built-in high-quality cameras, they are ideal to carry around and to use for snapshot photography. As the number of photos on the device accumulates, quickly finding a particular photo can become tedious with the default grid-based photo browser installed on the device. In this paper we investigate user performance in a photo browsing task on an iPad and an iPod Touch. We present results from two user experiments comparing the standard grid interface to a pan-and-zoomable grid, a 3D globe, and a 3D ring. In particular, we are interested in how the interfaces perform with large photo collections (100 to 400 photos). The results show most promise for the pan-and-zoom grid, and that performance with the standard grid interface quickly deteriorates with large collections.
Ahlström, D. and Hitz, M. • 2013
Revisiting PointAssist and Studying Effects of Control-Display Gain on Pointing Performance by Four-Year-Olds
Interaction Design and Children 2013 thumbnail
In Proceedings of the 12th International Conference on Interaction Design and Children (New York, NY, USA, 24–27 June, 2013), ACM Press, pp. 257–260.
Keywords: preschool children • mouse pointing • control-display gain
Acceptance Rate: 33% (28/86)
Abstract
Previous in-depth analyses of the cursor paths taken by young children when they point at screen targets have shown that the fine-tuning movements necessary to accurately position the cursor over a small target can be very troublesome. We present two mouse pointing experiments with four-year-olds; the first re-evaluates the effect of PointAssist, a technique designed to help children perform fine-tuning movements by tracking the cursor and manipulating the cursor's control-display gain in problem situations. The results partially confirm previously reported findings that PointAssist can improve children's pointing accuracy when pointing at small targets. The second experiment investigates the effect of various (constant) cursor control-display gains on children's pointing performance. The results suggest a preference for higher gains.
Hasan, K., Ahlström, D., and Irani, P. • 2013
AD-Binning: Leveraging Around-Device Space for Storing, Browsing and Retrieving Mobile Device Content
CHI 2013 thumbnail
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France, 27 April – 2 May, 2013), ACM Press, pp. 899–908.
Keywords: around-device interaction • off-screen discretization • data analytics • visual analytics
Acceptance Rate: 23% (392/1963)
Abstract
Video
Exploring information content on mobile devices can be tedious and time consuming. We present Around-Device Binning, or AD-Binning, a novel mobile user interface that allows users to off-load mobile content into the space around the device. We informed our implementation of AD-Binning by exploring various design factors, such as the minimum around-device target size, suitable item selection methods, and techniques for placing content in off-screen space. In a task requiring exploration, we find that AD-Binning improves browsing efficiency by avoiding the minute selection and flicking mechanisms needed for on-screen interaction. We conclude with design guidelines for off-screen content storage and browsing.
Kaufmann, B. and Ahlström, D. • 2013
Studying Spatial Memory and Map Navigation Performance on Projector Phones with Peephole Interaction
CHI 2013 thumbnail
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France, 27 April – 2 May, 2013), ACM Press, pp. 3173–3176.
Keywords: map navigation • spatial memory • peephole / touch interaction • handheld projector
Acceptance Rate: 23% (392/1963)
Abstract
Video
Smartphones are useful personal assistants and omnipresent communication devices. However, collaboration is not among their strengths. With the advent of embedded projectors this might change. We conducted a study with 56 participants to find out if map navigation and spatial memory performance among users and observers can be improved by using a projector phone with a peephole interface instead of a smartphone with its touchscreen interface. Our results show that users performed map navigation equally well with both interfaces. Spatial memory performance, however, was 41% better for projector phone users. Moreover, observers of the map navigation on the projector phone were 25% more accurate when asked to recall locations of points of interest after watching a user perform map navigation.
Schoeffmann, K., Ahlström, D., and Böszömenyi, L. • 2013
A User Study of Visual Search Performance of Interactive 2D and 3D Storyboards
AMR 2013 thumbnail
In Proceedings of the In­ter­na­tio­nal Work­shop on Ad­ap­tive Mul­ti­me­dia Re­trie­val (Bar­ce­lona, Spain, 18–19 July, 2011), Sprin­ger, pp. 18-32.
Keywords: image browsing • video re­trie­val • image story­board • 3D vi­su­a­li­za­tion
Abstract
A story­board is a grid-like arran­ge­ment of images, or key-frames of vi­deos, that is com­monly used to browse image or vi­deo col­lec­tions or to pre­sent re­sults of a query in an image or vi­deo re­trie­val tool. We in­vest­i­gate alter­na­tives to the com­mon­ly used scroll-based 2D story­board for the task of brows­ing a large set of images. Through a user study with 28 par­ti­ci­pants we eva­lu­ate three diffe­rent kinds of story­boards in terms of visual search per­for­mance and user sa­tis­fac­tion. Our re­sults show that a 3D cy­lind­ri­cal vis­ua­li­za­tion of a story­board is a pro­mis­ing al­ter­na­tive to the con­ven­tio­nal scroll-based story­board.
Bailer, W., Schoeffmann, K., Ahlström, D., Weiss, W., and del Fabro, M. • 2013
Interactive Evaluation of Video Browsing Tools
MMM 2013 thumbnail
In Proceedings of the 19th Inter­national Con­fe­rence on Ad­van­ces in Mul­ti­media Mo­del­ing (Huang­shan, China, 7–9 Janu­ary, 2013), Sprin­ger, pp. 81-91.
Keywords: video browsing • Video Browser Show­down • known-item search
Abstract
The Video Browser Showdown (VBS) is a live competition for evaluating video browsing tools regarding their efficiency at known-item search (KIS) tasks. The first VBS was held at MMM 2012 with eight teams working on 14 tasks, of which eight were completed by expert users and six by novices. We describe the details of the competition and analyze the results regarding the performance of the tools, the differences between the tasks, and the nature of the false submissions.
Ahlström, D., Hudelist, M.A., Schoeffmann, K., and Schaefer, G. • 2012
A User Study on Image Browsing on Touch­screens
MM 2012 thumbnail
In Proceedings of the 20th ACM Inter­na­tional Con­fe­rence on Mul­ti­media (Nara, Japan, 29 Oct­ober – 2 No­vem­ber, 2012), ACM Press, pp. 925-928.
Keywords: image browsing and search • touch­screens • mobile devices
Acceptance Rate: 20% (67/331)
Abstract
Video
Default image browsing inter­faces on touch-based mobile devices pro­vide limited support for image search tasks. To facili­tate fast and con­venient searches we pro­pose an al­ter­native inter­face that takes ad­van­tage of 3D gra­phics and arran­ges images on a ro­tat­able globe acc­ord­ing to color sim­i­lar­ity. In a user study we com­pare the new de­sign to the iPad's image brow­ser. Re­sults col­lected from 24 par­ti­ci­pants show that for color-sorted image col­lec­tions the globe can re­duce search time by 23% with­out caus­ing more err­ors and that it is per­ceiv­ed as being fun to use and pre­ferred over the stan­dard brows­ing in­ter­face by 70% of the par­ti­ci­pants.
Kaufmann, B. and Ahlström, D. • 2012
Revisiting Peep­hole Pointing: A Study of Tar­get Acqui­si­tion with a Hand­held Pro­jec­tor
MobileHCI 2012 thumbnail
In Proceedings of the 14th In­ter­na­tio­nal Con­fe­rence on Human-Computer In­ter­ac­tion with Mo­bile De­vices and Ser­vi­ces (San Fran­cis­co, CA, USA, 21–24 Sep­tember, 2012), ACM Press, pp. 211-220.
Keywords: peephole pointing • pico pro­jec­tor • Fitts' law • per­for­mance mo­deling
Acceptance Rate: 25% (54/212)
Abstract
Video
Peephole pointing is a promising interaction technique for large workspaces that contain more information than can be appropriately displayed on a single screen. In peephole pointing, a window onto the virtual workspace is moved in space to reveal additional content. In 2008, two different models for peephole pointing were discussed: Cao, Li, and Balakrishnan proposed a two-component model, whereas Rohs and Oulasvirta investigated a similar model but concluded that Fitts' law is sufficient for predicting peephole pointing performance. We present a user study performed with a handheld projector showing that Cao et al.'s model only outperforms Fitts' law in prediction accuracy when different peephole sizes are used and users have no prior knowledge of the target location. Nevertheless, Fitts' law succeeds under the conditions most likely to occur. Additionally, we show that target overshooting is a key characteristic of peephole pointing and present the implementation of an orientation-aware handheld projector that enables peephole interaction without instrumenting the environment.
Schoeffmann, K. and Ahlström, D. • 2012
Using a 3D Cylindrical Interface for Image Browsing to Improve Visual Search Performance
WIAMIS 2012 thumbnail
In Proceedings of the 13th Inter­na­tio­nal Work­shop on Image Ana­ly­sis for Mul­ti­me­dia In­ter­ac­tive Ser­vices (Du­blin, Ire­land, 23–25 May, 2012), IEEE, 4 pages.
Keywords: visualization • image color ana­ly­sis • in­spec­tion • lay­out
Abstract
In this paper we evaluate a 3D cylind­rical in­ter­face that arran­ges image thumb­nails by vi­sual si­mi­lar­ity for the pur­pose of image brows­ing. Through a user study we com­pare the per­for­mance of this in­ter­face to the per­for­mance of a com­mon scroll­able 2D list of thumb­nails in a grid arr­ange­ment. Our eva­lua­tion shows that the 3D Cy­lin­der in­ter­face en­ables sig­ni­fi­cant­ly fas­ter vi­sual search and is the pre­ferred search in­ter­face for the ma­jori­ty of tested users.
Lassen, Ch., Ahlström, D., Gula, B., and Hayden, M. • 2012
Auswirkungen von Icon-An­ord­nung­en auf vi­su­elle Such­ge­sch­win­dig­keit und Er­inne­rung von Icon-Positionen
[In German, English title: The In­fluence of Icon Arrange­ment on Visual Search Time and Position Re­call Performance]
Österreichischen Gesellschaft für Psychologie 2012 thumbnail
In Proceedings of the 10th Con­fer­ence of the Austrian Psy­cho­logy Asso­cia­tion (Graz, Austria, 12–14 April, 2012), OEG, pp. 183-184.
Download: pdf poster
Abstract
According to Chlebek (2006), the only thing users of a piece of software can perceive is its user interface. Application software relies almost exclusively on graphical user interfaces (GUIs), of which icons have become an essential component. To ensure the effectiveness, efficiency, and satisfaction in software use demanded by DIN EN ISO 9241, careful consideration is required as to how icons should be used in systems.
The study examined whether and how the arrangement of icons affects visual search speed and the recall of their spatial positions.
Five different icon arrangements were tested with N = 100 participants in a computer-based experiment. The arrangements comprised four predefined geometric shapes (square, cross, horizontal line, diamond) and one individually created variant. Each arrangement consisted of 16 unfamiliar icons with distinct meanings, such as "start movie".
The experiment had two phases. In one phase, participants repeatedly searched for a specific icon within the given layout; randomly selected icons were presented more frequently, namely eight, four, three, and two times. In a later phase, participants had to reconstruct the respective arrangement without prior announcement.
For visual search speed, the geometric shapes performed best, above all the square and the cross (1,071 ms and 1,127 ms, respectively). Participants took longest with the individual variant (1,517 ms). This difference, however, was not significant (p = 0.052).
When reconstructing the arrangement, the cross (M = 3.35) and the square (M = 3.6) showed the lowest error counts. The horizontal bar performed worst, with M = 9.75 correctly recalled icon positions. Moreover, more frequent icon presentation had a strong effect on recall performance: the positions of icons presented eight times were recalled correctly in 88% of cases, whereas icons searched for only once were correctly reconstructed in only 17% of cases.
It would be particularly fruitful for further studies to determine whether the individual arrangements contain salient spatial positions (landmarks), such as the corners of the square, that are recalled better. In addition, the influence of icon meaning and of a semantic arrangement could be examined. Both factors, in addition to layout, could play an essential role in the optimal positioning of specific icons in graphical user interfaces.
Ahlström, D. and Schoeffmann, K. • 2012
A Visual Search User Study on the In­flu­en­ces of Aspect Ra­tio Di­stor­tion of Pre­view Thumb­nails
ICME 2012 thumbnail
In 2012 IEEE Inter­na­tional Con­fe­rence on Mul­ti­me­dia and Expo Work­shops (Mel­bourne, Aus­tra­lia, 9–13 July, 2012), IEEE, pp. 546-551.
Keywords: graphical user inter­faces • image di­stor­tion • visual search • image and vi­deo re­trie­val tools
Abstract
Most image and video retrieval tools used for large-scale media collections present query results as thumbnails arranged in a grid-like display, with each thumbnail preserving the aspect ratio of its corresponding source image or video. Often, the outcome of a query is a set of thumbnails with different aspect ratios, and thus a varying amount of padding space is used between the thumbnails in the display. This results in a visually erratic display that conflicts with interface design rules and aesthetic principles stipulating alignment and the use of straight visual lines to guide the human eye while scanning the display. A solution is to create equally sized thumbnails using cropping algorithms. However, this may remove useful search information. We investigated a simple alternative: distorting thumbnails to the same aspect ratio in order to provide a calm and structured display with straight lines between thumbnails. In a user experiment we evaluated whether and how much such horizontal distortion can be applied without hampering visual search performance. The results show that distortion does not notably influence error rate or visual search time.
Schoeffmann, K., Ahlström, D., and Böszörmenyi, L. • 2012
3D Storyboards for In­ter­ac­tive Vis­ual Search
ICME 2012 thumbnail
In 2012 IEEE Inter­national Con­fe­rence on Mul­ti­media and Expo (Mel­bourne, Au­stra­lia, 9–13 July, 2012), IEEE, pp. 848-853.
Keywords: visual similarity • visual search • 3D vis­uali­zation • image search • brows­ing • inter­action
Abstract
Interactive image and video search tools typically use a grid-like arrangement of thumbnails for preview purposes. Such a display, commonly known as a storyboard, provides limited flexibility for interactive search and does not optimally exploit the available screen real estate. In this paper we design and evaluate alternatives to the common two-dimensional storyboard. We take advantage of 3D graphics in order to present image thumbnails in cylindrical arrangements. Through a user study we evaluate the performance of these interfaces in terms of visual search time and subjective performance.
Schoeffmann, K. and Ahlström, D. • 2012
An Evaluation of Color Sort­ing for Image Brows­ing
JMDEM 2012 thumbnail
International Journal of Mul­ti­media Data En­gi­nee­ring and Ma­na­ge­ment, 3 (1), IGI Glo­bal, pp. 49-62.
Keywords: color sorting al­go­rithm • image brows­ing • image sort­ing • vis­ual search • vis­ual simi­larity
Abstract
Many image browsing tools employ a scrollable grid-like arrangement of image thumbnails to enable users to browse through image collections. The thumbnails in these arrangements are typically sorted by some kind of metadata, e.g., by filename or creation date. However, users looking for a specific image they have in mind prefer search by visual similarity over search based on simple metadata. In contrast to previous work, which rarely presents results from user studies, the authors provide empirical evidence that color sorting is an effective approach for this purpose. With a user survey, the authors identify which of six alternative color sorting algorithms produces the most intuitive result. The authors use the best algorithm, a simple HSV-based sorting method, and compare users' visual search performance in a color-sorted storyboard against performance in an unsorted storyboard. The results show that color sorting can improve user interaction, both in terms of subjective impressions and visual search times.
Schoeffmann, K., Ahlström, D., and Böszörmenyi, L. • 2012
Video Browsing with a 3D Thumb­nail Ring Arr­anged by Color Si­mi­larity
MMM 2012 thumbnail
In Proceedings of the 18th In­ter­na­tional Con­fe­rence on Ad­van­ces in Mul­ti­media Mo­del­ing (Klagen­furt, Austria, 4–6 Janu­ary, 2012), Springer, pp. 660-661.
Keywords: visual search • 3D visuali­zation • image search • brows­ing • interaction
Abstract
We propose a 3D arrangement of thumbnail images for the purpose of browsing a single video file. The thumbnail images are linearly extracted from the video and used as textures for bent screens in a 3D-ring arrangement, which act as links for the playback of the corresponding video segments. Furthermore, the thumbnail images in this 3D-ring are intuitively organized by their dominant colors according to the HSV color space. This color-based organization should help users to estimate the position of a known item in the 3D-ring.
Cockburn, A., Ahlström, D., and Gutwin, C. • 2012
Understanding Per­for­mance in Touch Selec­tions: Tap, Drag and Ra­dial Point­ing Drag with Fin­ger, Sty­lus and Mouse
IJHCS 2012 thumbnail
International Journal of Human-Computer Stud­ies 70 (3), El­se­vier, pp. 218-233.
Keywords: touch inter­action • dragg­ing • radial menus
Abstract
Touch-based interaction with com­puting de­vices is be­com­ing more and more com­mon. In order to de­sign for this sett­ing, it is cri­ti­cal to un­der­stand the ba­sic hu­man fac­tors of touch in­ter­ac­ti­ons such as tapp­ing and dragg­ing; how­ever, there is re­la­ti­ve­ly little em­pi­ri­cal re­search in this area, par­ti­cu­lar­ly for touch-based dragg­ing.
To provide foun­da­ti­o­nal know­ledge in this area, and to help de­signers un­der­stand the hu­man fac­tors of touch-based in­ter­ac­tions, we con­duct­ed an ex­per­i­ment using three in­put de­vices (the finger, a sty­lus, and a mouse as a per­for­mance ba­se­line) and three diff­e­rent point­ing ac­ti­vi­ties. The point­ing ac­ti­vi­ties were bi­di­rec­ti­o­nal tapp­ing, one-dimen­sional dragg­ing, and ra­dial dragg­ing (point­ing to items arr­anged in a cir­cle around the cur­sor). Tapping ac­ti­vi­ties re­pre­sent the ele­men­tal target se­lec­tion method and are ana­lysed as a per­for­mance base­line. Dragg­ing is also a ba­sic in­ter­ac­tion meth­od and un­der­stand­ing its per­for­mance is im­por­tant for touch-based in­ter­faces be­cause it in­vol­ves re­la­ti­vely high con­tact fric­tion. Ra­dial dragg­ing is also im­por­tant for touch-based sys­tems as this tech­ni­que is cla­i­med to be well suited to di­rect in­put yet ra­dial se­lec­tions nor­mal­ly in­volve the re­la­ti­vely un­studied dragg­ing ac­tion, and there have been few stu­dies of the in­ter­ac­tion mech­an­ics of ra­dial dragg­ing. Per­for­mance mo­dels of tap, drag, and ra­di­al dragg­ing are ana­lysed.
For tapping tasks, we con­firm prior re­sults show­ing fin­ger point­ing to be fa­ster than the stylus/mouse but in­accu­rate, par­ti­cu­lar­ly with small tar­gets. In dragg­ing tasks, we also con­firm that finger input is slower than the mouse and sty­lus, pro­ba­bly due to the re­la­ti­ve­ly high sur­face fric­tion. Dragg­ing errors were low in all con­di­tions. As ex­pected, per­for­mance con­for­med to Fitts' Law.
Our results for radial dragging are new, showing that errors, task time and movement distance are all linearly correlated with the number of items available. We demonstrate that this performance is modelled by the Steering Law (where the tunnel width increases with movement distance) rather than Fitts' Law. Other radial dragging results showed that the stylus is fastest, followed by the mouse and finger, but that the stylus has the highest error rate of the three devices. Finger selections in the North-West direction were particularly slow and error prone, possibly due to a tendency for the finger to stick–slip when dragging in that direction.
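For readers unfamiliar with the two models contrasted in this abstract, their standard textbook forms are as follows (notation ours, not taken from the paper; a and b are empirically fitted constants):

```latex
% Fitts' law: time MT to acquire a target of width W at distance D
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

% Steering law: time MT to traverse a tunnel of length D whose
% width W(s) may vary along the movement path
MT = a + b \int_0^{D} \frac{\mathrm{d}s}{W(s)}
```

For a tunnel of constant width the integral reduces to D/W, which is why the Steering Law predicts times growing linearly, rather than logarithmically, with the distance-to-width ratio.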
Schoeffmann, K. and Ahlström, D. • 2011
Similarity-Based Visuali­zation for Image Brows­ing Re­visited
ISM 2011 thumbnail
In Proceedings of the IEEE Inter­national Sym­po­sium on Mul­ti­media (Dana Point, Ca­li­fornia, USA, 5–7 Decem­ber, 2011), IEEE, pp. 422-427.
Keywords: user study • visual search • image brows­ing • image sort­ing • visual si­mi­larity
Abstract
We investigate whether users' visual search performance in a commonly used grid-like arrangement of images (i.e., a storyboard) can be improved by using a similarity-based sorting of images. We propose a simple but efficient algorithm for sorting images based on their color similarity. The algorithm generates an intuitive arrangement of images and allows for general application with several different layouts (e.g., storyboard, simple row/column, 3D globe/cylinder). In contrast to previous work, which rarely presents results from user studies, we perform a fair user study and compare an interface with color-sorted images to an interface with images positioned in random order. Both interfaces use exactly the same screen real estate and interaction means. Results show that users are 20% faster with the sorted interface.
Schoeffmann, K., Ahlström, D., and Beecks, C. • 2011
3D Image Browsing on Mobile Devices
ISM 2011 thumbnail
In Proceedings of the IEEE Inter­national Sym­posium on Mul­ti­media (Dana Point, Ca­li­for­nia, USA, 5–7 Decem­ber, 2011), IEEE, pp. 335-336.
Keywords: visual search • image search and brows­ing • 3D visuali­zation • interaction
Abstract
Video
We present an intuitive user inter­face for the explo­ration of images on mobile multi-touch devices. Our inter­face uses a novel cylindri­cal 3D visuali­zation of visually sorted images as well as touch gestures and tilt­ing oper­ations to support mobile users in inter­active brows­ing of images by pro­vid­ing con­venient navi­gation/inter­action and intui­tive visuali­zation capabilities.
Ens, B., Ahlström, D., Cockburn, A., and Irani, P. • 2011
Characterizing User Per­for­mance with Assist­ed Di­rect Off-Screen Point­ing
MobileHCI 2011 thumbnail
In Proceedings of the 13th In­ter­na­tio­nal Con­fe­rence on Human-Computer In­ter­ac­tion with Mo­bile De­vices and Ser­vi­ces (Stock­holm, Swe­den, 30 Au­gust – 2 Sep­tem­ber, 2011), ACM Press, pp. 485-494.
Keywords: direct off-screen pointing • off-screen tar­get vi­su­a­li­za­tions • per­for­mance mo­dels • Fitts' Law • steering law
Acceptance Rate: 23% (63/276)
Abstract
The limited viewport size of mobile devices requires that users continuously acquire information that lies beyond the edge of the screen. Recent hardware solutions are capable of continually tracking a user's finger around the device. This has created new opportunities for interactive solutions, such as direct off-screen pointing: the ability to directly point at objects that are outside the viewport. We empirically characterize user performance with direct off-screen pointing when assisted by target cues. We predict time and accuracy outcomes for direct off-screen pointing with existing and derived models. We validate the models with good results (R² ≥ 0.9) and reveal that direct off-screen pointing takes up to four times longer than pointing at visible targets, depending on the desired accuracy tradeoff. Pointing accuracy degrades logarithmically with target distance. We discuss design implications in the context of several real-world applications.
Ahlström, D., Cockburn, A., Gutwin, C., and Irani, P. • 2010
Why it’s Quick to be Square: Modell­ing New and Ex­ist­ing Hier­archi­cal Menu Designs
★ HONORABLE MENTION AWARD ★
CHI 2010 thumbnail
In Proceedings of the SIG­CHI Con­fe­rence on Human Fac­tors in Com­put­ing (At­lanta, GA, USA, 10–15 April, 2010), ACM Press, pp. 1371-1380.
Keywords: menus • hier­archi­cal menus • per­for­mance models
Acceptance Rate: 22% (302/1346)
Abstract
We consider different hierarchical menu and toolbar-like interface designs from a theoretical perspective and show how a model based on visual search time, pointing time, decision time and expertise development can assist in understanding and predicting interaction performance. Three hierarchical menu designs are modelled – a traditional pull-down menu, a pie menu and a novel Square Menu with its items arranged in a grid – and the predictions are validated in an empirical study. The model correctly predicts the relative performance of the designs – both the eventual dominance of Square Menus compared to traditional and pie designs and a performance crossover as users gain experience. Our work shows the value of modelling in HCI design, provides new insights about performance with different hierarchical menu designs, and demonstrates a new high-performance menu type.
Ahlström, D., Großmann, J., Tak, S., and Hitz, M. • 2009
Exploring New Win­dow Mani­pu­la­tion Tech­ni­ques
OzCHI 2009 thumbnail
In Proceedings of the 21st Con­ference of the Austra­lian Computer-Human In­ter­ac­tion Spe­cial Inte­rest Group (CHISIG) of the Hu­man Fac­tors Ergo­no­mics So­ciety of Au­stra­lia, OZCHI 2009 (Mel­bourne, Au­stra­lia, 23–27 No­vem­ber, 2009), ACM Press, pp. 177-183.
Keywords: window management • win­dow mov­ing and re­sizing • no­vel in­ter­ac­tion tech­niques
Acceptance Rate: 53% (32/60)
Abstract
Moving and resizing desktop windows are frequently performed but largely unexplored interaction tasks. The standard title bar and border dragging techniques used for window manipulation have not changed much over the years. We studied three new methods to move and resize windows. The new methods are based on proxy and goal-crossing techniques to eliminate the need for long cursor movements and for acquiring narrow window borders. Instead, moving and resizing actions are performed by manipulating proxy objects close to the cursor and by sweeping cursor motions across window borders. We compared these techniques with the standard techniques. The results indicate that further investigations and redesigns of window manipulation techniques are worthwhile: all new techniques were faster than the standard techniques, with task completion times improving by more than 50% in some cases. Also, the new resizing techniques were found to be less error-prone than the traditional click-and-drag method.
Tak, S., Cockburn, A., Humm, K., Ahlström, D., Gutwin, C., and Scarr, J. • 2009
Improving Window Switching Inter­faces
Interact 2009 thumbnail
In Proceedings of the 12th IFIP TC13 Con­fe­rence on Human-Computer In­ter­ac­tion, INTERACT 2009 (Upp­sala, Swe­den, 24–28 Au­gust, 2009), LNCS 5727, Springer, pp. 187-200.
Keywords: window switching • re­visi­tation pat­terns • spa­tial constancy
Abstract
Switching between win­dows on a com­pu­ter is a fre­qu­ent act­i­vi­ty, but cur­rent switch­ing mech­a­ni­sms make it dif­fi­cult to find items. We carried out a long­i­tu­di­nal study that re­cor­ded actual win­dow switch­ing be­hav­i­our. We found that win­dow re­vi­si­ta­tion is very com­mon, and that people spend most time work­ing with a small set of win­dows and app­li­ca­tions. We iden­ti­fy two design prin­cip­les from these ob­ser­va­tions. First, spa­tial con­st­ancy in the lay­out of items in a swit­ching in­ter­face can aid me­mo­ra­bi­lity and sup­port re­vi­si­ta­tion. Second, gra­dual­ly ad­just­ing the size of app­li­ca­tion and win­dow zones in a swit­cher can im­prove vi­si­bi­li­ty and tar­get­ing for fre­quent­ly-used items. We carried out two stu­dies to con­firm the value of these design ideas. The first showed that spa­ti­ally sta­ble lay­outs are sig­ni­fi­cant­ly fas­ter than the com­monly used re­cency lay­out. The second showed that gra­dual ad­just­ments to acc­om­mo­date new app­li­cat­ions and win­dows do not re­duce per­for­mance.
Melcher, R., Hitz, M., Leitner, G., and Ahlström, D. • 2008
Der Einfluss von Ubi­qui­tous Com­pu­ting auf Be­nutz­ungs­sch­nitt­stellen­para­digmen
[In German, English title: The In­flu­ence of Ubi­qui­tous Com­put­ing on User In­ter­face Para­digms]
Info Gesellschaft 2008 thumbnail
In Infor­mation und Ge­sell­schaft. Tech­no­lo­gien einer so­zi­alen Be­zieh­ung, Greif, H., Werner, M. und Mitrea, O. (eds.), VS Re­search, pp. 161-184.
Abstract
The year is 2007, and the development of computer technology has exceeded all previous serious estimates regarding performance, miniaturization, networking, and the resulting heterogeneity: on the one hand, the PC era is now being followed by the era of the omnipresent but invisible computer; on the other hand, through specialization in the form of "smart devices", the metaphor of the computer as a universal tool is losing persuasive power, though not its fundamental significance.
Leitner, G., Ahlström, D., and Hitz, M. • 2007
Usability of Mobile Com­puting in Emer­gency Re­sponse Sy­stems – Less­ons Learn­ed and Fu­ture Di­rec­tions
USAB 2007 thumbnail
In Proceedings of the 3rd Sym­posium of the work­group Human-Com­pu­ter In­ter­action and Usa­bi­lity of the Aust­rian Com­puter So­ciety, USAB 2007 - HCI and Usa­bi­lity for Me­di­cine and Health Care (Graz, Aus­tria, 22 Nov­ember, 2007), LNCS 4799, Springer, pp. 241-254.
Keywords: usability engineering • medical in­for­matics • emer­gency re­sponse • mo­bile devices
Abstract
Mobile information systems show high potential in supporting emergency physicians in their work at an emergency scene. In particular, information received by the hospital's emergency room well before the patients' arrival allows the emergency room staff to optimally prepare for adequate treatment and may thus help save lives. However, utmost care must be taken with respect to the usability of mobile data recording and transmission systems, since the context of use of such devices is extremely delicate: physicians must by no means be impeded by data processing tasks in their primary mission to care for the victims. Otherwise, the deployment of such high-tech systems may turn out to be counterproductive and even risk the patients' lives. Thus, we present the usability engineering measures taken within an Austrian project aiming to replace paper-based Emergency Patient Care Report Forms with mobile electronic devices. We try to identify some lessons learned, with respect to both the engineering process and the product itself.
Leitner, G., Ahlström, D., and Hitz, M. • 2007
Usability — Key Factor of Future Smart Home Systems
Hoit 2007 thumbnail
In Home Infor­matics and Tele­matics: ICT for The Next Billion. HOIT 2007. IFIP – The Inter­na­tio­nal Fede­ra­tion for Infor­mation Pro­cess­ing, vol. 241. Sprin­ger, pp. 269-278.
Keywords: usability • smart home
Abstract
A framework of usability factors is presented which serves as a basis for thorough research into usability issues in the context of smart home systems. Based on well-accepted approaches taken from the literature, various aspects related to usability are identified as significant for the implementation and future development of smart home systems. Finally, the partially implemented prototype installation of a smart home system is discussed and scenarios for future investigations are presented.
Hitz, M., Ahlström, D., Leitner, G., and Melcher, R. • 2007
Intelligent Design vs. Sur­vival of the Fitt­est? – A Case Study on Succ­ess­ful User In­ter­faces
Daten-Information-Wissen 2007 thumbnail
In Informations­systeme: Daten-Infor­mation-Wissen, Haring, G. and Kara­giannis, D. (eds.), OCG Ver­lag, pp. 69-81.
Download: pdf
Abstract
User interfaces play an essen­tial role for the succ­ess of computer app­licat­ions. From time to time, par­ticul­arly success­ful inter­face de­signs appear on the scene, often re­present­ing a shift of para­digm in one way or the other. The question arises whether success­ful user inter­faces are the pro­ducts of con­tinuous tech­no­logical evo­lution or rather result from a sing­ular brilliant act of “intelli­gent design”. Having in­vesti­gated four (arbi­trarily selected) well-known examples of success­ful inter­faces, we feel that in general, even seem­ingly re­vo­lut­ionary tech­no­logical sing­ularities are the result of evo­lu­tionary modi­fication and re­combi­nation of success­ful HCI “memes”.
Leitner, G., Hitz, M., and Ahlström, D. • 2007
Applicability and Usa­bility of Off-the-Shelf Smart App­li­ances in Tele-Care
AINAW 2007 thumbnail
In Proceedings of AINA 2007, the 21st IEEE Con­fe­rence on Ad­vanced Net­work­ing and App­li­ca­tions (Nia­gara Falls, On­ta­rio, Ca­na­da, 21–23 May, 2007), IEEE, pp. 881-886.
Keywords: tele-health • off-the-shelf smart app­li­ances • aging • smart homes
Abstract
Investments in tele-care infrastructure are an increasing financial burden for private households because financial support from public authorities has decreased for several reasons. On the other hand, the presence of satisfactory IT infrastructure and the availability of commercial off-the-shelf (COTS) smart devices could be a possible alternative. Being quite affordable, the adaptation of such devices for use in tele-health and tele-care seems to solve the financial problem. However, users of tele-health systems are usually people with a low level of technical know-how and computer literacy, e.g., elderly people. Consequently, besides other requirements, the usability of the systems used in tele-care has to be high. Based on the outcome of a field study, scenarios for future research regarding the applicability and usability of off-the-shelf smart appliances are discussed.
Ahlström, D., Hitz, M., and Leitner, G. • 2006
Reaching for the Usa­bility AND Utili­ty of M-Learn­ing – Ex­per­iences From an M-Learn­ing Pro­ject
MobiLearn 2006 thumbnail
In M3 – Inter­di­sci­pli­nary Asp­ects on Di­gital Media and Edu­cation (213), OCG Verlag, pp. 61-73.
Download: pdf
Abstract
The widespread use of portable Web clients and wireless networking infrastructure has opened up new possibilities for the e-learning community – going mobile. However, m-learning systems seem not to be quite that successful, partly due to missing or suboptimal evaluation methods and guidelines. In this paper, a project is discussed which focused on the design and development of an m-learning platform for university students. Evaluation methods carefully selected and applied within the project fulfilled the requirements of a usability engineering approach and revealed satisfactory results. However, representatives of the user groups asked within focus group discussions and during usability tests were mostly skeptical regarding the benefits of m-learning systems. A critical review of the project reveals that usability engineering in general, and especially in the context of m-learning, lacks a central aspect influencing the acceptability of a system – system utility. Based on two models of system acceptance, we discuss shortcomings of the project and possibilities to overcome such problems in the future.
Ahlström, D., Hitz, M., and Leitner, G. • 2006
An Evaluation of Sticky and Force En­han­ced Tar­gets in Multi-Target Situ­ations
Nordichi 2006 thumbnail
In Proceedings of Nordi­CHI’06, the fourth Nor­dic Con­fe­rence on Human-Computer In­ter­ac­tion (Oslo, Nor­way, 14–18 Oct­ober, 2006), ACM Press, pp. 58-67.
Keywords: interaction improvement • cursor point­ing • control-display ratio • “sticky tar­gets” • “force fields” • Fitts' law
Acceptance Rate: 28% (37/134)
Abstract
In this paper we explore the use of “force fields” to assist the computer user during pointing tasks. The first study shows that pointing time can be reduced by enhancing a pointing target with an invisible force field that warps the screen cursor toward the target center. The application of force fields is further supported by showing that the performance of force-enhanced pointing can be predicted using Fitts' law with a force-adjusted index of difficulty. In the second study, the force field technique is compared with the “sticky target” technique in two realistic pointing situations involving several closely placed targets. The results show that in these situations force fields improve pointing performance whereas sticky targets do not.
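The abstract does not specify how the force-adjusted index of difficulty is computed; as a minimal sketch, one plausible reading is that the field acts like extra target width, lowering Fitts' index of difficulty. All coefficients and the widening rule below are illustrative assumptions, not the paper's fitted model:

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Standard Fitts' law (Shannon form): MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def force_adjusted_time(distance, width, field_radius, a=0.2, b=0.1):
    """Hypothetical force-adjusted index of difficulty: the invisible
    field is assumed to act like extra target width on both sides."""
    return a + b * math.log2(distance / (width + 2 * field_radius) + 1)

# A force-enhanced target is predicted to be faster to acquire:
plain = fitts_time(300, 20)                           # ID = log2(16) = 4 bits
forced = force_adjusted_time(300, 20, field_radius=15)
```

Under this reading, any field radius greater than zero strictly lowers the predicted movement time, which matches the direction of the reported result.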
Leitner, G., Hitz, M., and Ahlström, D. • 2006
Eye-Tracking-unter­stütztes In­for­mation Re­trieval
[In German, English title: Eye-Tracking Sup­ported In­for­mation Re­trieval]
uDay IV 2006 thumbnail
In Information nutzbar machen, uDay IV, Kempter, G. und Hellberg, P.V. (eds.), Pabst Science Publisher, pp. 61-64.
Download: pdf
Abstract
Content-Based Information Retrieval (CBIR) rests on identifying patterns in multimedia data such as images or videos (e.g., based on color distribution or corner and edge detection), which can be used for classification, categorization, or similarity comparisons. However, systems built on these theoretical principles are often unsatisfactory, because the machine-extracted parameters and the classification based on them do not seem to match human classification patterns well. The project presented here pursues the goal of improving CBIR systems by using eye-tracking to identify the image regions relevant for classification and comparison.
Leitner, G., Ahlström, D., and Hitz, M. • 2006
Kognitive Psychologie in der In­for­matik (Human Com­pu­ter In­ter­action)
[In German, English title: Cognitive Psy­cho­logy in Info­rmatics (Human Com­puter Inter­action)]
Österreichischen Gesellschaft 
                        für Psychologie 2006 thumbnail
In Proceedings of the 7th Con­fe­rence of the Aus­trian Psy­cho­logy Asso­cia­tion (Kla­gen­furt, Aus­tria, 28–30 April, 2006), Pabst Science Pub­lisher, pp. 78-84.
Keywords: eye-tracking • point­ing devices • cog­ni­tive psychology
Download: pdf
Abstract
Two example activities at an informatics department illustrate the relevance of findings from psychology to the field of human-computer interaction. The first example concerns the use of eye-tracking, which, particularly in non-academic use, has led to some misunderstandings; the possibilities and limitations of eye-tracking are critically discussed in relation to aspects of cognitive psychology. The second example concerns the handling of pointing devices, specifically the possibility of supporting mouse interaction through the use of magnetism. Relevant aspects of cognitive psychology, e.g. concerning eye-hand coordination, are discussed.
Ahlström, D., Alexandrowicz, R., and Hitz, M. • 2006
Improving Menu In­ter­action: A Com­pari­son of Stan­dard, Force En­han­ced and Jump­ing Menus
CHI 2006 thumbnail
In Proceedings of the SIG­CHI Con­fe­rence on Human Fac­tors in Com­pu­ting (Mon­tréal, Québec, Can­ada, 22–27 April, 2006), ACM Press, pp. 1067-1076.
Keywords: cascading pull-down menus • in­ter­ac­tion models • pre­dic­tion models • menu en­hance­ment • “force fields”
Acceptance Rate: 24% (151/626)
Abstract
In this paper we show how a model-centered analysis of the mouse click action in graphical user interfaces can be used to create a new menu system. The analysis identifies a possible new use of the click action in cascading pull-down menus that can ease menu navigation and selection for the user. A new, easy-to-implement menu system, the “Jumping Menu”, is introduced. It warps the screen cursor to the right, into the open sub-menu level, when a mouse click is detected inside a parent item. The Jumping Menu was compared with standard pull-down menus and force-enhanced menus in a user experiment. The results show that the Jumping Menu and a force-enhanced menu can facilitate menu interaction and that they are promising alternatives to conventional menu systems. Based on the results, a prediction model for selection times in Jumping Menus is developed.
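The jump behaviour described above can be sketched as a small geometry computation; the layout fields and the 10-pixel offset below are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class MenuItem:
    x: int              # left edge of the item, in screen pixels
    y: int              # top edge
    width: int
    height: int
    has_submenu: bool   # parent items open a sub-menu to the right

def jump_target(item: MenuItem):
    """On a mouse click inside a parent item, return the position to
    warp the cursor to: just inside the sub-menu opening to the right.
    Ordinary items are selected by the click, so no jump occurs."""
    if not item.has_submenu:
        return None
    return (item.x + item.width + 10, item.y + item.height // 2)
```

For example, clicking a parent item at `x=100, width=150` would warp the cursor to a point a few pixels inside the sub-menu, sparing the user the horizontal steering movement.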
Ahlström, D. • 2005
Optimizing Selection Tasks in Gra­phi­cal User In­ter­faces
Dissertation 2006 thumbnail
Dissertation, Department of Informatics-Systems, Alpen-Adria-Uni­ver­si­tät Kla­gen­furt, Aus­tria, May 2005.
Abstract
This thesis describes the design, implementation, and assessment of a software mechanism that supports the computer user during point-and-click tasks and during selection tasks in cascading pull-down menus. The focus is on improving user performance (in terms of task completion time and error rate) during these basic human-computer interaction tasks, and thereby making interaction easier and more enjoyable for the user.
After a com­pre­hen­sive over­view and discuss­ion of the state of the art of inter­action im­prov­ing tech­ni­ques, high­light­ing their stren­gths and weak­nesses, a new app­roach based on vir­tual “force fields” is designed, imple­mented and evalu­ated.
Basically, a cursor warping algorithm helps the user steer the cursor by inserting small extra cursor displacements as the cursor moves toward a selection target. In contrast to many other interaction-improving techniques, the cursor warping approach taken in this thesis is suitable for both point-and-click and menu selection tasks, is applicable in standard GUIs, and changes neither the visual appearance of the GUI nor the basic means of interaction. Furthermore, the new technique can be employed with standard indirect pointing devices.
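The warping idea can be sketched as a per-movement adjustment pulling the cursor slightly toward the target; the attraction gain below is an illustrative assumption, not the thesis' actual algorithm or parameters:

```python
def warp_step(cursor, target, gain=0.15):
    """Insert a small extra displacement toward the target on each
    cursor movement event (gain is an illustrative attraction strength).
    When the cursor is essentially on the target, snap to it."""
    cx, cy = cursor
    tx, ty = target
    dx, dy = tx - cx, ty - cy
    if (dx * dx + dy * dy) ** 0.5 < 1:
        return (tx, ty)
    return (cx + gain * dx, cy + gain * dy)
```

Because the displacement is added on top of the user's own motion, the visual behaviour of the GUI is unchanged; only the effective distance the user must cover shrinks.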
The pro­posed tech­nique was eva­luated in five con­trolled user experi­ments with a total of 100 par­tici­pants and the results show that the tech­nique is easy to use and can sig­ni­fi­cantly reduce task com­pletion times. The last experi­ment also showed that selection times in cas­cading pull-down menus can be mod­eled with high accuracy using a com­bination of Fitts' law and the steer­ing law.
Ahlström, D. • 2005
Modeling and Improving Se­lect­ion in Cas­cading Pull-Down Menus Using Fitts’ Law, the Stee­ring Law and Force Fields
★ HONORABLE MENTION AWARD ★
CHI 2005 thumbnail
In Proceedings of the SIGCHI Con­fe­rence on Human Fac­tors in Com­pu­ting (Port­land, OR, USA, 2–7 April, 2005), ACM Press, pp. 61-67.
Keywords: cascading pull-down menus • menu na­vi­ga­tion • se­lec­tion • Fitts' law • steer­ing law • input de­vices • “force fields”
Acceptance Rate: 25% (93/372)
Abstract
Selecting a menu item in a cascading pull-down menu is a frequent but time-consuming and complex GUI task. This paper describes an approach aimed at supporting the user during selection in cascading pull-down menus with an indirect pointing device. By enhancing such a menu with “force fields”, the cursor is attracted in a certain direction, e.g. toward the right-hand side of a menu item that opens a sub-menu, making the cursor steering task easier and faster. The experiment described here shows that the force fields can decrease selection times, on average by 18%, when a mouse, a track point, or a touch pad is used as input device. The results also suggest that selection times in cascading pull-down menus can be modeled using a combination of Fitts' law and the steering law. The proposed model proved to hold for all three devices, in both standard and enhanced cascading pull-down menus, with correlations better than R²=0.90.
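A combined model of this kind has the general two-term form T = a + b·ID_Fitts + c·ID_steering. The sketch below uses placeholder coefficients and the textbook definitions of both indices, not the fitted values or exact task decomposition from the paper:

```python
import math

def selection_time(d_point, w_item, d_steer, a=0.3, b=0.15, c=0.08):
    """Illustrative combination of Fitts' law and the steering law:
      ID_Fitts    = log2(D/W + 1)   -- discrete pointing component
      ID_steering = D_tunnel / W    -- steering down the menu "tunnel"
    a, b, c are placeholder regression coefficients."""
    id_fitts = math.log2(d_point / w_item + 1)
    id_steering = d_steer / w_item
    return a + b * id_fitts + c * id_steering
```

The model predicts what the experiment reports: longer steering paths through narrow menu tunnels dominate selection time, so anything that eases steering (such as the force fields) pays off directly.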
Ahlström, D., Hitz, M., and Hodnigg, K. • 2002
Adaptive Testing in E-Learning – the Veval Approach
ICAL 2002 thumbnail
In Proceedings of the 5th In­ter­na­tional Work­shop In­ter­ac­tive Com­pu­ter Aided Learn­ing (Villach, Aus­tria, 25–27 Sep­tember, 2002), Kassel Uni­ver­sity Press.
Keywords: web based ex­ami­nation • self ass­ess­ment • test ge­ne­ra­tion • test eva­lu­ation
Download: pdf
Abstract
This paper presents Veval, an Internet-based assessment tool under development, designed for integration into Velo, a virtual electronic laboratory system. Veval helps utilize the manifold possibilities of self-assessment and examination via the Internet. It provides a test-generation module for the tutor, an examination and self-assessment environment for students, and a component responsible for statistical evaluations. Based on an analysis of several existing assessment tools, the requirements for Veval are discussed. An overview of the flexible and portable system architecture and a short presentation of the graphical user interface are provided.
Ahlström, D., Hitz, M., and Leitner, G. • 2002
Improving Mouse Navi­gation – A Walk Through the “Hilly Screen Landscape”
DSV-IS 2002 thumbnail
In Proceedings of the 9th In­ter­na­tio­nal Work­shop on In­ter­ac­tive Sys­tems – Design, Spe­ci­fi­ca­tion, and Veri­fi­ca­tion (Ro­stock, Ger­many, 12–14 June, 2002), LNCS 2545, Springer, pp. 185-195.
Abstract
During computer interaction, much time is spent navigating the graphical user interface to find and invoke functions through interface controls. If this navigation process could be optimised, users would spend less time searching for and navigating to interface controls. This paper presents a walk through an ongoing research project aimed at developing and assessing a navigation support module for mouse-based interaction, which enhances standard screen pointer behaviour with position-context-sensitive functionality, creating a “hilly screen landscape”. The main hypothesis of this work is that a context-sensitive screen pointer prevents navigation to and selection of erroneous and inappropriate interface controls, decreases pointing and selection times, and contributes to increased overall usability of the application. A description of the navigation support module and hypothetical situations where such a module could prove useful are provided, together with major implementation and evaluation issues of the project.
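One way to picture the “hilly screen landscape” is as a position-dependent control-display gain: the pointer slows over controls worth acquiring and slides past inappropriate ones. The gains and rules below are illustrative assumptions, not the module's actual design:

```python
def adjusted_motion(dx, dy, over_control, control_enabled, base_gain=1.0):
    """Scale raw mouse motion by a gain that depends on what lies under
    the cursor. Gains are illustrative placeholders."""
    if over_control and control_enabled:
        gain = 0.5 * base_gain   # "uphill": slow down to ease acquisition
    elif over_control and not control_enabled:
        gain = 1.5 * base_gain   # "downhill": slide past disabled controls
    else:
        gain = base_gain         # flat ground: normal pointer behaviour
    return (dx * gain, dy * gain)
```

Because only the control-display mapping changes, no visual modification of the interface is needed, which matches the project's goal of leaving standard GUI behaviour intact.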