Videos - Eric Lecolinet


Eric Lecolinet - DIVA Group - LTCI - Télécom ParisTech





2018
video   MobiLimb: Augmenting Mobile Devices with a Robotic Limb
M. Teyssier, G. Bailly, C. Pelachaud, E. Lecolinet. In UIST'18: Proceedings of the ACM Symposium on User Interface Software and Technology, ACM (2018). 53-63. doi pdf bibcite
@inproceedings{MT:UIST-18,
 author = {M. {Teyssier} and G. {Bailly} and C. {Pelachaud} and E. {Lecolinet}},
 booktitle = {UIST'18: Proceedings of the ACM Symposium on User Interface Software and Technology},
 month = oct,
 pages = {53--63},
 publisher = {ACM},
 title = {MobiLimb: Augmenting Mobile Devices with a Robotic Limb},
 year = 2018,
 image = {mobilimb-UIST18.png},
 project = {https://www.marcteyssier.com/projects/mobilimb/},
 video = {https://www.youtube.com/watch?v=wi3INyIDNdk},
}
keywords
Mobile device, Actuated devices, Robotics, Mobile Augmentation
video   Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures
B. Fruchard, E. Lecolinet, O. Chapuis. In AVI'18: International Conference on Advanced Visual Interfaces, Article No. 14 (9 pages), ACM (2018). doi pdf bibcite
@inproceedings{BL:AVI-18,
 address = {Grosseto, Italy},
 author = {B. {Fruchard} and E. {Lecolinet} and O. {Chapuis}},
 booktitle = {AVI'18: International Conference on Advanced Visual Interfaces},
 month = jun,
 number = {Article No. 14 (9 pages)},
 publisher = {ACM},
 title = {Impact of Semantic Aids on Command Memorization for On-Body Interaction and Directional Gestures},
 year = 2018,
 image = {bodyloci-AVI18.png},
 video = {https://vimeo.com/251815791},
}
keywords
Semantic aids; Memorization; Command selection; On-body interaction; Marking menus; Virtual reality

2017
video   MarkPad: Augmenting Touchpads for Command Selection
B. Fruchard, E. Lecolinet, O. Chapuis. In CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 5630-5642. doi hal pdf bibcite
@inproceedings{MP:CHI-17,
 address = {Denver, Colorado, USA},
 author = {B. {Fruchard} and E. {Lecolinet} and O. {Chapuis}},
 booktitle = {CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {5630--5642},
 publisher = {ACM},
 title = {MarkPad: Augmenting Touchpads for Command Selection},
 year = 2017,
 hal = {hal-01437093/en},
 image = {MP-CHI-17.png},
 video = {https://www.youtube.com/watch?v=rUGGTrYPuSM},
 software = {http://brunofruchard.com/markpad.html},
}
keywords
Gestural interaction; bezel gestures; tactile feedback; spatial memory; touchpad; user-defined gestures; Marking menus
abstract
We present MarkPad, a novel interaction technique taking advantage of the touchpad. MarkPad allows creating a large number of size-dependent gestural shortcuts that can be spatially organized as desired by the user. It relies on the idea of using visual or tactile marks on the touchpad or a combination of them. Gestures start from a mark on the border and end on another mark anywhere. MarkPad does not conflict with standard interactions and provides a novice mode that acts as a rehearsal of the expert mode. A first study showed that an accuracy of 95% could be achieved for a dense configuration of tactile and/or visual marks allowing many gestures. Performance was 5% lower in a second study where the marks were only on the borders. A final study showed that borders are rarely used, even when the users are unaware of the technique. Finally, we present a working prototype and briefly report on how it was used by two users for a few months.
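As a rough illustration of the border-to-mark idea described above, the sketch below maps the start and end zones of a touchpad gesture to commands. It is only a guess at one possible implementation: the normalized coordinates, the 4x4 grid, the border threshold and the command names are all assumptions, not taken from the paper.

# Sketch: mapping MarkPad-style border-to-mark gestures to commands.
# Touchpad coordinates are assumed normalized to [0, 1] x [0, 1]; the zone
# layout, thresholds and command names are illustrative, not from the paper.

BORDER = 0.08  # how close to an edge a point must be to count as a border mark

def zone(x, y, grid=4):
    """Return the (column, row) cell of a point on a grid x grid layout."""
    return (min(int(x * grid), grid - 1), min(int(y * grid), grid - 1))

def on_border(x, y):
    return x < BORDER or x > 1 - BORDER or y < BORDER or y > 1 - BORDER

# Hypothetical user-defined mapping: (start cell, end cell) -> command.
GESTURES = {
    ((0, 0), (2, 1)): "open browser",
    ((3, 0), (1, 2)): "mute audio",
    ((0, 3), (3, 3)): "next track",
}

def recognize(start, end):
    """start and end are (x, y) touch positions; gestures must begin on the border."""
    if not on_border(*start):
        return None  # ordinary touchpad use: no conflict with pointing
    return GESTURES.get((zone(*start), zone(*end)))

print(recognize((0.02, 0.05), (0.6, 0.4)))  # -> open browser
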
video   CoReach: Cooperative Gestures for Data Manipulation on Wall-sized Displays
C. Liu, O. Chapuis, M. Beaudouin-Lafon, E. Lecolinet. In CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 6730-6741. doi hal pdf bibcite
@inproceedings{LCBL:CHI-17,
 author = {C. {Liu} and O. {Chapuis} and M. {Beaudouin-Lafon} and E. {Lecolinet}},
 booktitle = {CHI'17: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {6730--6741},
 publisher = {ACM},
 title = {CoReach: Cooperative Gestures for Data Manipulation on Wall-sized Displays},
 year = 2017,
 hal = {hal-01437091/en},
 image = {LCBL-CHI-17.jpg},
 video = {https://www.lri.fr/~chapuis/publications/CHI17-coreach.mp4},
}
keywords
Shared interaction, wall display, co-located collaboration
abstract
Multi-touch wall-sized displays afford collaborative exploration of large datasets and re-organization of digital content. However, standard touch interactions, such as dragging to move content, do not scale well to large surfaces and were not designed to support collaboration, such as passing an object around. This paper introduces CoReach, a set of collaborative gestures that combine input from multiple users in order to manipulate content, facilitate data exchange and support communication. We conducted an observational study to inform the design of CoReach, and a controlled study showing that it reduced physical fatigue and facilitated collaboration when compared with traditional multi-touch gestures. A final study assessed the value of also allowing input through a handheld tablet to manipulate content from a distance.

video   VersaPen: An Adaptable, Modular and Multimodal I/O Pen
M. Teyssier, G. Bailly, E. Lecolinet. In CHI'17 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 2155-2163. doi hal pdf bibcite
@inproceedings{Teyssier:VersaPen-2017,
 address = {Denver, USA},
 author = {M. {Teyssier} and G. {Bailly} and E. {Lecolinet}},
 booktitle = {CHI'17 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {2155--2163},
 publisher = {ACM},
 title = {VersaPen: An Adaptable, Modular and Multimodal I/O Pen},
 year = 2017,
 hal = {hal-01521565},
 image = {VersaPen-wp-CHI17.png},
 video = {https://www.youtube.com/watch?v=WhhZc67geAQ},
}
keywords
Pen input; Multimodal interaction; Modular input
abstract
While software often allows user customization, most physical devices remain mainly static. We introduce VersaPen, an adaptable, multimodal, hot-pluggable pen for expanding input capabilities. Users can create their own pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen offers multiple advantages. Allowing in-place interaction, it reduces hand movements and avoids cluttering the interface with menus and palettes. It also enriches interaction by providing multimodal capabilities, as well as a means to encapsulate virtual data into physical modules which can be shared by users to foster collaboration. We present various applications to demonstrate how VersaPen enables new interaction techniques.

video   VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen
M. Teyssier, G. Bailly, E. Lecolinet. In CHI'17 Extended Abstracts (demonstration): ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2017). 377-380. doi hal pdf bibcite
@inproceedings{teyssier:hal-01521566,
 address = {Denver, USA},
 author = {M. {Teyssier} and G. {Bailly} and E. {Lecolinet}},
 booktitle = {CHI'17 Extended Abstracts (demonstration): ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {377--380},
 publisher = {ACM},
 title = {VersaPen: Exploring Multimodal Interactions with a Programmable Modular Pen},
 year = 2017,
 hal = {hal-01521566},
 image = {VersaPen-demo-CHI17.png},
 video = {https://www.youtube.com/watch?v=LYIjfUDTdbU},
}
keywords
Pen input ; Multimodal interaction
abstract
We introduce and demonstrate VersaPen, a modular pen for expanding input capabilities. Users can create their own digital pens by stacking different input/output modules that define both the look and feel of the customized device. VersaPen investigates the benefits of adaptable devices and enriches interaction by providing multimodal capabilities; by allowing in-place interaction, it reduces hand movements and avoids cluttering the interface with menus and palettes. The device integrates seamlessly thanks to a visual programming interface that allows end users to connect its input and output sources to other existing software. We present various applications to demonstrate the power of VersaPen and how it enables new interaction techniques.

2016
video   Shared Interaction on a Wall-Sized Display in a Data Manipulation Task
C. Liu, O. Chapuis, M. Beaudouin-Lafon, E. Lecolinet. In CHI'16: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM Press (2016). 2075-2086. doi hal pdf bibcite
@inproceedings{LIU:CHI16,
 author = {C. {Liu} and O. {Chapuis} and M. {Beaudouin-Lafon} and E. {Lecolinet}},
 booktitle = {CHI'16: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = may,
 pages = {2075--2086},
 publisher = {ACM Press},
 title = {Shared Interaction on a Wall-Sized Display in a Data Manipulation Task},
 year = 2016,
 hal = {hal-01275535},
 image = {LIU-CHI16.jpg},
 video = {https://www.youtube.com/watch?v=X7IA9XFGL38},
}
keywords
Co-located collaboration; shared interaction; collaboration styles; wall-sized display; classification task; pick-and-drop
abstract
Wall-sized displays support small groups of users working together on large amounts of data. Observational studies of such settings have shown that users adopt a range of collaboration styles, from loosely to closely coupled. Shared interaction techniques, in which multiple users perform a command collaboratively, have also been introduced to support co-located collaborative work. In this paper, we operationalize five collaborative situations with increasing levels of coupling, and test the effects of providing shared interaction support for a data manipulation task in each situation. The results show the benefits of shared interaction for close collaboration: it encourages collaborative manipulation, it is more efficient and preferred by users, and it reduces physical navigation and fatigue. We also identify the time costs caused by disruption and communication in loose collaboration and analyze the trade-offs between parallelization and close collaboration. These findings inform the design of shared interaction techniques to support collaboration on wall-sized displays.

video   SchemeLens: A Content-Aware Vector-Based Fisheye Technique for Navigating Large Systems Diagrams
A. Cohé, B. Liutkus, G. Bailly, J. Eagan, E. Lecolinet. Transactions on Visualization & Computer Graphics (TVCG), In InfoVis '15, 22, 1, IEEE (2016). 330-338. doi hal pdf bibcite
@article{cohe:infovis15,
 author = {A. {Coh{\'e}} and B. {Liutkus} and G. {Bailly} and J. {Eagan} and E. {Lecolinet}},
 booktitle = {InfoVis '15},
 journal = {Transactions on Visualization \& Computer Graphics (TVCG)},
 month = jan,
 number = 1,
 pages = {330--338},
 publisher = {IEEE},
 title = {SchemeLens: A Content-Aware Vector-Based Fisheye Technique for Navigating Large Systems Diagrams},
 volume = 22,
 year = 2016,
 hal = {hal-01442946},
 image = {cohe-infovis15.png},
 video = {https://vimeo.com/152548517},
}
keywords
Fisheye; vector-scaling; content-aware; network schematics; interactive zoom; navigation; information visualization
abstract
System schematics, such as those used for electrical or hydraulic systems, can be large and complex. Fisheye techniques can help navigate such large documents by maintaining the context around a focus region, but the distortion introduced by traditional fisheye techniques can impair the readability of the diagram. We present SchemeLens, a vector-based, topology-aware fisheye technique which aims to maintain the readability of the diagram. Vector-based scaling reduces distortion to components, but distorts layout. We present several strategies to reduce this distortion by using the structure of the topology, including orthogonality and alignment, and a model of user intention to foster smooth and predictable navigation. We evaluate this approach through two user studies: Results show that (1) SchemeLens is 16-27% faster than both round and rectangular flat-top fisheye lenses at finding and identifying a target along one or several paths in a network diagram; (2) augmenting SchemeLens with a model of user intentions aids in learning the network topology.
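To give a feel for the difference between moving component centers apart and distorting the drawing itself, here is a small sketch built on the classical Sarkar-Brown fisheye profile. It is not the paper's content-aware algorithm: the lens radius, the magnification factor and the idea of leaving components outside the lens untouched are assumptions made for illustration.

# Sketch: the flavor of a fisheye lens that moves component centers apart near
# the focus while each component itself is drawn at readable (undistorted) size.
# Uses the classical Sarkar-Brown magnification profile as a stand-in; the
# paper's content-aware, topology-preserving strategies are not reproduced here.
import math

def fisheye(point, focus, radius=200.0, m=3.0):
    """Displace a component center away from the focus inside the lens radius."""
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    d = math.hypot(dx, dy)
    if d == 0 or d >= radius:
        return point
    g = (m + 1) * (d / radius) / (m * (d / radius) + 1)  # monotone, g(1) = 1
    k = (radius * g) / d
    return (focus[0] + dx * k, focus[1] + dy * k)

# Components near the focus are pushed outward, freeing room to draw them at
# full size; components outside the lens stay where they are.
print(fisheye((110, 100), focus=(100, 100)))   # moved noticeably
print(fisheye((400, 100), focus=(100, 100)))   # unchanged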

2015
  video   SuperVision: Spatial Control of Connected Objects in a Smart Home
S. Ghosh, G. Bailly, R. Despouys, E. Lecolinet, R. Sharrock. In CHI Extended Abstracts: ACM Conference on Human Factors in Computing Systems, ACM (2015). 2079-2084. doi hal pdf bibcite
@inproceedings{GOSH-ELC:CHI-EA15,
 address = {Seoul, Korea},
 author = {S. {Ghosh} and G. {Bailly} and R. {Despouys} and E. {Lecolinet} and R. {Sharrock}},
 booktitle = {CHI Extended Abstracts: ACM Conference on Human Factors in Computing Systems},
 month = apr,
 pages = {2079--2084},
 publisher = {ACM},
 title = {SuperVision: Spatial Control of Connected Objects in a Smart Home},
 year = 2015,
 hal = {hal-01147717},
 video = {http://sarthakg.in/portfolio/content-page-supervision.html},
}
keywords
Smart Home; Pico-projector; Spatial memory; Visualization; SuperVision
abstract
In this paper, we propose SuperVision, a new interaction technique for distant control of objects in a smart home. This technique aims at enabling users to point towards an object, visualize its current state and select a desired functionality as well. To achieve this: 1) we present a new remote control that contains a pico-projector and a slider; 2) we introduce a visualization technique that allows users to locate and control objects kept in adjacent rooms, by using their spatial memories. We further present a few example applications that convey the possibilities of this technique.

2014
video   Effects of Display Size and Navigation Type on a Classification Task
C. Liu, O. Chapuis, M. Beaudouin-Lafon, E. Lecolinet, W. Mackay. In CHI'14: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2014). 4147-4156. CHI 2014 Best Paper Award. doi hal pdf bibcite
@inproceedings{LIU-ELC:CHI14,
 author = {C. {Liu} and O. {Chapuis} and M. {Beaudouin-Lafon} and E. {Lecolinet} and W. {Mackay}},
 booktitle = {CHI'14: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = apr,
 pages = {4147--4156},
 publisher = {ACM},
 title = {Effects of Display Size and Navigation Type on a Classification Task},
 year = 2014,
 award = {CHI 2014 Best Paper Award},
 hal = {hal-00957269},
 image = {EffectDisplaySize-CHI14.jpg},
 video = {http://www.youtube.com/watch?feature=player_embedded&v=SBXwW5lz-4o},
 bdsk-url-1 = {http://dl.acm.org/citation.cfm?doid=2556288.2557020},
}
keywords
Wall-size display; Classification task; Physical navigation; Pan-and-zoom; Lenses; Overview+detail
abstract
The advent of ultra-high resolution wall-size displays and their use for complex tasks require a more systematic analysis and deeper understanding of their advantages and drawbacks compared with desktop monitors. While previous work has mostly addressed search, visualization and sense-making tasks, we have designed an abstract classification task that involves explicit data manipulation. Based on our observations of real uses of a wall display, this task represents a large category of applications. We report on a controlled experiment that uses this task to compare physical navigation in front of a wall-size display with virtual navigation using pan-and-zoom on the desktop. Our main finding is a robust interaction effect between display type and task difficulty: while the desktop can be faster than the wall for simple tasks, the wall gains a sizable advantage as the task becomes more difficult. A follow-up study shows that other desktop techniques (overview+detail, lens) do not perform better than pan-and-zoom and are therefore slower than the wall for difficult tasks.

video   Multi-finger Chords for Hand-held Tablets: Recognizable and Memorable
J. Wagner, E. Lecolinet, T. Selker. In CHI'14: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2014). 2883-2892. CHI 2014 Honorable Mention Award. doi hal pdf bibcite
@inproceedings{WAGNER-ELC:CHI14,
 address = {Toronto, Canada},
 author = {J. {Wagner} and E. {Lecolinet} and T. {Selker}},
 booktitle = {CHI'14: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = apr,
 pages = {2883--2892},
 publisher = {ACM},
 title = {Multi-finger Chords for Hand-held Tablets: Recognizable and Memorable},
 year = 2014,
 award = {CHI 2014 Honorable Mention Award},
 hal = {hal-01447407},
 image = {MultiFingerChords-CHI14.jpg},
 video = {https://www.youtube.com/watch?feature=player_embedded&v=W6aC9cqgrH0},
 bdsk-url-1 = {http://dl.acm.org/citation.cfm?doid=2556288.2556958},
}
keywords
multi-finger chord; chord-command mapping; finger identification; hand-held tablet
abstract
Despite the demonstrated benefits of multi-finger input, today's gesture vocabularies offer a limited number of postures and gestures. Previous research designed several posture sets, but does not address the limited human capacity of retaining them. We present a multi-finger chord vocabulary, introduce a novel hand-centric approach to detect the identity of fingers on off-the-shelf hand-held tablets, and report on the detection accuracy. A between-subjects experiment comparing a 'random' to a 'categorized' chord-command mapping found that users retained categorized mappings more accurately over one week than random ones. In response to the logical posture-language structure, people adopted logical memorization strategies, such as 'exclusion', 'order', and 'category', to minimize the amount of information to retain. We conclude that structured chord-command mappings support learning, short-, and long-term retention of chord-command mappings.
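The sketch below illustrates what a 'categorized' chord-to-command mapping could look like once fingers have been identified, with chords represented as sets of finger names. The finger names, the categories and the commands are invented for the example; the paper's detection pipeline is not shown.

# Sketch: a 'categorized' chord-to-command mapping, where all commands in one
# semantic category share the same primary finger, as opposed to a random
# assignment. Finger names and commands are illustrative only.

# Chords are sets of identified fingers of one hand.
CATEGORIZED = {
    frozenset({"index"}):              "copy",     # edit category: index finger
    frozenset({"index", "middle"}):    "cut",
    frozenset({"index", "ring"}):      "paste",
    frozenset({"thumb"}):              "zoom in",  # view category: thumb
    frozenset({"thumb", "middle"}):    "zoom out",
}

def command_for(chord):
    """Look up the command for the set of fingers detected on the tablet."""
    return CATEGORIZED.get(frozenset(chord), None)

print(command_for({"middle", "index"}))  # -> cut (finger order is irrelevant)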

2013
video   WatchIt: Simple Gestures and Eyes-free Interaction for Wristwatches and Bracelets
S. T. Perrault, E. Lecolinet, J. Eagan, Y. Guiard. In CHI'13: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2013). 1451-1460. doi hal pdf bibcite
@inproceedings{SP:2013,
 address = {Paris, France},
 author = {S. T. {Perrault} and E. {Lecolinet} and J. {Eagan} and Y. {Guiard}},
 booktitle = {CHI'13: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = apr,
 pages = {1451--1460},
 publisher = {ACM},
 title = {WatchIt: Simple Gestures and Eyes-free Interaction for Wristwatches and Bracelets},
 year = 2013,
 hal = {hal-01115851},
 image = {watchit.png},
 video = {http://www.youtube.com/watch?feature=player_embedded&v=fDxmYJgD6Qw},
 bdsk-url-1 = {http://dl.acm.org/citation.cfm?id=2466192&dl},
}
keywords
HCI; Digital jewelry; wearable computing; watch; watchstrap; watchband; watch bracelet; input; eyes-free interaction; continuous input; scrolling
abstract
We present WatchIt, a prototype device that extends interaction beyond the watch surface to the wristband, and two interaction techniques for command selection and execution. Because the small screen of wristwatch computers suffers from visual occlusion and the fat finger problem, we investigate the use of the wristband as an available interaction resource. Not only does WatchIt use a cheap, energy-efficient and invisible technology, but it also involves simple, basic gestures that allow good performance after little training, as suggested by the results of a pilot study. We propose a novel gesture technique and an adaptation of an existing menu technique suitable for wristband interaction. In a user study, we investigate their usage in eyes-free contexts, finding that they perform well. Finally, we present a technique where the bracelet is used in addition to the screen to provide precise continuous control on lists. We also report on a preliminary survey of traditional and digital jewelry that points to the high frequency of watches and bracelets in both genders and gives a sense of the tasks people would like to perform on such devices.

video   Augmented Letters: Mnemonic Gesture-Based Shortcuts
Q. Roy, S. Malacria, Y. Guiard, E. Lecolinet, J. Eagan. In CHI'13: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2013). 2325-2328. doi hal pdf bibcite
@inproceedings{QR:CHI-2013,
 address = {Paris, France},
 author = {Q. {Roy} and S. {Malacria} and Y. {Guiard} and E. {Lecolinet} and J. {Eagan}},
 booktitle = {CHI'13: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = apr,
 pages = {2325--2328},
 publisher = {ACM},
 title = {Augmented Letters: Mnemonic Gesture-Based Shortcuts},
 year = 2013,
 hal = {hal-01164207},
 image = {AugmentedLetters-CHI13.png},
 video = {http://www.dailymotion.com/video/xxobz5_augmented-letters-mnemonic-gesture-based-shortcuts_tech},
 bdsk-url-1 = {http://dl.acm.org/citation.cfm?id=2481321},
}
keywords
Interaction Design, Input and Interaction Technologies, Tactile Input, Language
abstract
We propose Augmented Letters, a new technique aimed at augmenting gesture-based techniques such as Marking Menus [9] by giving them natural, mnemonic associations. Augmented Letters gestures consist of the initial letter of the command name, sketched by hand in the Unistroke style and affixed with a straight tail. We designed a tentative touch device interaction technique that supports fast interactions with large sets of commands, is easily discoverable, improves users' recall at no speed cost, and supports fluid transition from novice to expert mode. An experiment suggests that Augmented Letters outperform Marking Menus in terms of user recall.
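A minimal sketch of how an Augmented Letter might be resolved into a command, assuming the recognized initial picks a family of commands and the direction of the straight tail disambiguates within it. The sector quantization, the command table and the assumption about the tail's role are illustrative only.

# Sketch: resolving an Augmented Letter into a command. The recognized initial
# selects a family of commands and the tail direction picks one of them (an
# assumption of this sketch); commands and sector count are illustrative.
import math

def tail_direction(dx, dy, sectors=4):
    """Quantize the tail vector into one of `sectors` directions (0 = east)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int((angle + math.pi / sectors) // (2 * math.pi / sectors)) % sectors

COMMANDS = {
    ("c", 0): "copy", ("c", 1): "close", ("c", 2): "cut", ("c", 3): "center",
    ("p", 0): "paste", ("p", 1): "print",
}

def resolve(letter, tail_dx, tail_dy):
    return COMMANDS.get((letter, tail_direction(tail_dx, tail_dy)))

print(resolve("c", 0.0, 1.0))   # tail at 90 degrees from east -> close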

video   Bezel-Tap Gestures: Quick Activation of Commands from Sleep Mode on Tablets
M. Serrano, E. Lecolinet, Y. Guiard. In CHI'13: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2013). 3027-3036. doi hal pdf bibcite
@inproceedings{SERRANO:CHI-2013,
 address = {Paris, France},
 author = {M. {Serrano} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {CHI'13: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = apr,
 pages = {3027--3036},
 publisher = {ACM},
 title = {Bezel-Tap Gestures: Quick Activation of Commands from Sleep Mode on Tablets},
 year = 2013,
 hal = {hal-01115852},
 image = {BezelTap-CHI13.png},
 video = {http://www.telecom-paristech.fr/~via/media/videos/bezel-tap-chi13.m4v},
 bdsk-url-1 = {http://dl.acm.org/citation.cfm?id=2481421},
}
keywords
Interaction techniques; Mobile devices; Bezel Gestures; Accelerometers; Micro-Interaction; Marking Menus.
abstract
We present Bezel-Tap Gestures, a novel family of interaction techniques for immediate interaction on handheld tablets regardless of whether the device is awake or in sleep mode. The technique rests on the close succession of two input events: first a bezel tap, whose detection by accelerometers wakes an idle tablet almost instantly, then a screen contact. Field studies confirmed that the probability of this input sequence occurring by chance is very low, ruling out the concern of accidental activation. One experiment examined the optimal size of the vocabulary of commands for all four regions of the bezel (top, bottom, left, right). Another experiment evaluated two variants of the technique which both allow two-level selection in a hierarchy of commands, the initial bezel tap being followed by either two screen taps or a screen slide. The data suggests that Bezel-Tap Gestures may serve to design large vocabularies of micro-interactions with a sleeping tablet.
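The two-event structure described above lends itself to a very small recognizer: an accelerometer-detected bezel tap must be followed by a screen contact within a short delay, and the bezel side plus the screen position select the command. In the sketch below the 0.5 s window, the screen zones and the command table are assumed values, not those of the paper.

# Sketch: the two-event pattern behind Bezel-Tap Gestures -- an accelerometer
# spike on one of the four bezel sides followed, within a short delay, by a
# screen tap. The delay threshold and the command table are illustrative.

MAX_DELAY = 0.5  # seconds between bezel tap and screen contact (assumed value)

COMMANDS = {
    ("top", 0): "camera", ("top", 1): "flashlight",
    ("left", 0): "mute",  ("right", 0): "next track",
}

def recognize(bezel_side, bezel_time, screen_zone, screen_time):
    """Return a command if the screen tap closely follows the bezel tap."""
    if bezel_side is None:
        return None                      # no accelerometer spike: ignore
    if not (0 <= screen_time - bezel_time <= MAX_DELAY):
        return None                      # too slow: treat as accidental input
    return COMMANDS.get((bezel_side, screen_zone))

print(recognize("top", 10.00, 1, 10.21))   # -> flashlight
print(recognize("top", 10.00, 1, 11.30))   # -> None (outside the time window)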

2012
video   S-Notebook: Augmenting Mobile Devices with Interactive Paper for Data Management
S. Malacria, Th. Pietrzak, E. Lecolinet. In AVI'12: International Conference on Advanced Visual Interfaces, ACM (2012). 733-736. doi hal pdf bibcite
@inproceedings{ELC:AVI-12,
 address = {Capri, Italy},
 author = {S. {Malacria} and Th. {Pietrzak} and E. {Lecolinet}},
 booktitle = {AVI'12: International Conference on Advanced Visual Interfaces},
 month = may,
 pages = {733--736},
 publisher = {ACM},
 title = {S-Notebook: Augmenting Mobile Devices with Interactive Paper for Data Management},
 year = 2012,
 hal = {hal-00757125},
 image = {SNotebook-AVI12.jpg},
 video = {http://www.telecom-paristech.fr/~via/media/videos/s-notebook.m4v},
}
abstract
This paper presents S-Notebook, a tool that makes it possible to "extend" mobile devices with augmented paper. Paper is used to overcome the physical limitations of mobile devices by offering additional space to annotate digital files and to easily create relationships between them. S-Notebook allows users to link paper annotations or drawings to anchors in digital files without having to learn pre-defined pen gestures. The system stores metadata such as the spatial or temporal location of anchors in the document, as well as the zoom level of the view. Tapping on notes with the digital pen brings up the corresponding documents as they were displayed when the notes were taken. A given piece of augmented paper can contain notes associated with several documents, possibly at several locations. The annotation space can thus serve as a simple way to relate various pieces of one or several digital documents. When the user shares his notes, the piece of paper becomes a tangible token that virtually contains digital information.
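The description above amounts to storing, for each pen note, enough view metadata to restore the linked document later. The sketch below shows one plausible anchor record; the field names and the single position value are assumptions made for illustration, not the system's actual data model.

# Sketch: the kind of anchor record S-Notebook could store when a pen note is
# linked to a spot in a digital document -- enough to restore the view later.
from dataclasses import dataclass

@dataclass
class Anchor:
    document: str          # which digital file the note points to
    position: float        # spatial offset (e.g. page fraction) or timestamp (s)
    zoom: float            # zoom level of the view when the note was taken
    note_id: str           # identifier of the handwritten note on paper

anchors = {
    "note-017": Anchor("slides.pdf", position=0.42, zoom=1.5, note_id="note-017"),
}

def open_from_note(note_id):
    """Tapping a note with the pen restores the document view it was linked to."""
    a = anchors.get(note_id)
    if a:
        print(f"open {a.document} at {a.position} with zoom x{a.zoom}")

open_from_note("note-017")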

2011
video   Gesture-aware remote controls: guidelines and interaction techniques
G. Bailly, D.-B. Vo, E. Lecolinet, Y. Guiard. In ICMI'11: ACM International Conference on Multimodal Interaction, ACM (2011). 263-270. doi hal pdf bibcite
@inproceedings{GB:ICMI-11,
 address = {Alicante, Spain},
 author = {G. {Bailly} and D.-B. {Vo} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {ICMI'11: ACM International Conference on Multimodal Interaction},
 month = nov,
 pages = {263--270},
 publisher = {ACM},
 title = {Gesture-aware remote controls: guidelines and interaction techniques},
 year = 2011,
 hal = {hal-00705413},
 image = {GestureAwareRemotes-ICMI11.png},
 video = {https://www.youtube.com/watch?v=PfYwcCZapm4},
}
keywords
Mid-air gestures, remote control, 10-foot interaction, menu, interactive television
abstract
Interaction with TV sets, set-top boxes or media centers strongly differs from interaction with personal computers: not only does a typical remote control suffer strong form factor limitations but the user may well be slouching in a sofa. In the face of more and more data, features, and services made available on interactive televisions, we propose to exploit the new capabilities provided by gesture-aware remote controls. We report the data of three user studies that suggest some guidelines for the design of a gestural vocabulary and we propose five novel interaction techniques. Study 1 reports that users spontaneously perform pitch and yaw gestures as the first modality when interacting with a remote control. Study 2 indicates that users can accurately select up to 5 items with eyes-free roll gestures. Capitalizing on our findings, we designed five interaction techniques that use either device motion, or button-based interaction, or both. They all favor the transition from novice to expert usage for selecting favorites. Study 3 experimentally compares these techniques. It reveals that motion of the device in 3D space, associated with finger presses at the surface of the device, is achievable, fast and accurate. Finally, we discuss the integration of these techniques into a coherent multimedia system menu.
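Study 2's finding that up to 5 items can be selected eyes-free with roll gestures suggests a simple quantization of the roll angle, sketched below. The usable roll range, the clamping and the list of favorites are assumptions; the paper's five interaction techniques are not reproduced here.

# Sketch: eyes-free selection of one of a handful of favorites by rolling the
# remote, as Study 2 suggests is feasible for up to 5 items. The angle range
# and the favorites list are assumptions made for illustration.

FAVORITES = ["news", "sports", "movies", "music", "kids"]
ROLL_RANGE = (-60.0, 60.0)   # usable wrist-roll range, in degrees (assumed)

def favorite_for_roll(roll_deg):
    """Map a roll angle to one of the favorites by even quantization."""
    lo, hi = ROLL_RANGE
    roll = min(max(roll_deg, lo), hi - 1e-9)
    index = int((roll - lo) / (hi - lo) * len(FAVORITES))
    return FAVORITES[index]

print(favorite_for_roll(-55))  # -> news
print(favorite_for_roll(10))   # -> movies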

video   JerkTilts: Using Accelerometers for Eight-Choice Selection on Mobile Devices
M. Baglioni, E. Lecolinet, Y. Guiard. In ICMI'11: ACM International Conference on Multimodal Interaction, ACM (2011). 121-128. doi hal pdf bibcite
@inproceedings{MB:ICMI-11,
 address = {Alicante, Spain},
 author = {M. {Baglioni} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {ICMI'11: ACM International Conference on Multimodal Interaction},
 month = nov,
 pages = {121--128},
 publisher = {ACM},
 title = {JerkTilts: Using Accelerometers for Eight-Choice Selection on Mobile Devices},
 year = 2011,
 hal = {hal-00705420},
 image = {JerkTilts-ICMI11.png},
 video = {http://www.telecom-paristech.fr/~via/media/videos/jerktilts.m4v},
}
keywords
Interaction techniques, handheld devices, input, accelerometers, gestures, Marking menu, self-delimited
abstract
This paper introduces JerkTilts, quick back-and-forth gestures that combine device pitch and roll. JerkTilts may serve as gestural self-delimited shortcuts for activating commands. Because they only depend on device acceleration and rely on a parallel and independent input channel, these gestures do not interfere with finger activity on the touch screen. Our experimental data suggest that recognition rates in an eight-choice selection task are as high with JerkTilts as with thumb slides on the touch screen. We also report data confirming that JerkTilts can be combined successfully with simple touch-screen operation. Data from a field study suggest that inadvertent JerkTilts are unlikely to occur in real-life contexts. We describe three illustrative implementations of JerkTilts, which show how the technique helps to simplify frequently used commands.
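A JerkTilt is self-delimited: the device leaves the neutral attitude and snaps back, and the direction of the excursion picks one of eight commands. The sketch below classifies a short window of pitch/roll samples along these lines; the thresholds, the sampling convention and the eight labels are assumptions made for illustration, not the paper's recognizer.

# Sketch: recognizing a JerkTilt from a short window of (pitch, roll) samples in
# degrees. The gesture is self-delimited: the device must leave the neutral
# attitude and come back. Thresholds and the eight labels are illustrative.
import math

LABELS = ["right", "up-right", "up", "up-left", "left", "down-left", "down", "down-right"]
MIN_PEAK = 25.0   # degrees of tilt needed to count as a deliberate jerk (assumed)
NEUTRAL = 8.0     # the window must end close to the starting attitude (assumed)

def recognize(samples):
    """samples: list of (pitch, roll) relative to the attitude at gesture start."""
    peak = max(samples, key=lambda s: math.hypot(*s))
    if math.hypot(*peak) < MIN_PEAK or math.hypot(*samples[-1]) > NEUTRAL:
        return None                           # too small, or not back to neutral
    angle = math.atan2(peak[0], peak[1]) % (2 * math.pi)   # pitch = up, roll = right
    return LABELS[int((angle + math.pi / 8) // (math.pi / 4)) % 8]

print(recognize([(0, 0), (5, 28), (2, 30), (1, 12), (0, 2)]))  # -> right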

video   Flick-and-Brake: Finger Control over Inertial/Sustained Scroll Motion
M. Baglioni, S. Malacria, E. Lecolinet, Y. Guiard. In CHI Extended Abstracts: ACM Conference on Human Factors in Computing Systems, ACM (2011). 2281-2286. doi pdf bibcite
@inproceedings{baglioni11-flickandbrake,
 address = {Vancouver, Canada},
 author = {M. {Baglioni} and S. {Malacria} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {CHI Extended Abstracts: ACM Conference on Human Factors in Computing Systems},
 month = may,
 pages = {2281--2286},
 publisher = {ACM},
 title = {Flick-and-Brake: Finger Control over Inertial/Sustained Scroll Motion},
 year = 2011,
 image = {FlickAndBrake-CHI-EA11.png},
 video = {http://www.telecom-paristech.fr/~via/media/videos/flick-brake.m4v},
}

2010
video   Finger-Count and Radial-Stroke Shortcuts: Two Techniques for Augmenting Linear Menus.
G. Bailly, E. Lecolinet, Y. Guiard. In ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'10), ACM Press (2010). 591-594. pdf bibcite
@inproceedings{GB:CHI-10,
 address = {Atlanta, USA},
 author = {G. {Bailly} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'10)},
 month = apr,
 pages = {591--594},
 publisher = {ACM Press},
 title = {Finger-Count and Radial-Stroke Shortcuts: Two Techniques for Augmenting Linear Menus},
 year = 2010,
 image = {FingerCount-CHI10.png},
 video = {http://www.youtube.com/watch?feature=player_embedded&v=P69spTHzYUM},
}
keywords
menu techniques, multi-touch, multi-finger, two-handed interaction

video   Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach
S. Malacria, E. Lecolinet, Y. Guiard. In ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'10), ACM Press (2010). 2615-2624. pdf bibcite
@inproceedings{SM:CHI-10,
 address = {Atlanta, GA, USA},
 author = {S. {Malacria} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {ACM SIGCHI Conference on Human Factors in Computing Systems (CHI'10)},
 month = apr,
 pages = {2615--2624},
 publisher = {ACM Press},
 title = {Clutch-free panning and integrated pan-zoom control on touch-sensitive surfaces: the cyclostar approach},
 year = 2010,
 image = {CycloStar-CHI10.png},
 video = {https://www.youtube.com/watch?v=tcYX56TegbE#t=23},
}
keywords
Input techniques, touch screens, touchpads, oscillatory motion, elliptic gestures, panning, zooming, multi-scale navigation.

video   Wavelet menus on handheld devices: stacking metaphor for novice mode and eyes-free selection for expert mode
J. Francone, G. Bailly, E. Lecolinet, N. Mandran, L. Nigay. In AVI'10: International Conference on Advanced Visual Interfaces, ACM Press (2010). 173-180. bibcite
@inproceedings{GB:AVI-10,
 author = {J. {Francone} and G. {Bailly} and E. {Lecolinet} and N. {Mandran} and L. {Nigay}},
 booktitle = {AVI'10: International Conference on Advanced Visual Interfaces},
 month = may,
 pages = {173--180},
 publisher = {ACM Press},
 title = {Wavelet menus on handheld devices: stacking metaphor for novice mode and eyes-free selection for expert mode},
 year = 2010,
 image = {WaveletMenu-AVI10.png},
 video = {http://www.telecom-paristech.fr/~via/media/videos/wavelet-menu.m4v},
}

2009
video   MicroRolls: Expanding Touch-Screen Input Vocabulary by Distinguishing Rolls vs. Slides of the Thumb
A. Roudaut, E. Lecolinet, Y. Guiard. In ACM CHI (Conference on Human Factors in Computing Systems) (2009). 927-936. doi url pdf bibcite
@inproceedings{RA:CHI-09,
 address = {Boston, USA},
 author = {A. {Roudaut} and E. {Lecolinet} and Y. {Guiard}},
 booktitle = {ACM CHI (Conference on Human Factors in Computing Systems)},
 month = apr,
 pages = {927--936},
 title = {MicroRolls: Expanding Touch-Screen Input Vocabulary by Distinguishing Rolls vs. Slides of the Thumb},
 url = {http://dl.acm.org/citation.cfm?doid=1518701.1518843},
 year = 2009,
 image = {MicroRolls-CHI09.png},
 video = {http://www.youtube.com/watch?feature=player_embedded&v=bfH0-OqgbLw},
 bdsk-url-1 = {http://dl.acm.org/citation.cfm?doid=1518701.1518843},
}
keywords
Mobile devices, touch-screen, interaction, selection techniques, gestures, one-handed, thumb interaction, rolling/sliding gestures, MicroRoll, RollTap, RollMark.

video   TimeTilt: Using Sensor-Based Gestures to Travel Through Multiple Applications on a Mobile Device
A. Roudaut, M. Baglioni, E. Lecolinet. In Interact (IFIP Conference on Human-Computer Interaction), Springer (2009). 830-834. doi url pdf bibcite
@inproceedings{RA:INTERACT-09,
 address = {Uppsala, Sweden},
 author = {A. {Roudaut} and M. {Baglioni} and E. {Lecolinet}},
 booktitle = {Interact (IFIP Conference on Human-Computer Interaction)},
 month = aug,
 pages = {830--834},
 publisher = {Springer},
 title = {TimeTilt: Using Sensor-Based Gestures to Travel Through Multiple Applications on a Mobile Device},
 url = {http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_90},
 year = 2009,
 image = {TimeTilt-INTERACT09.png},
 video = {http://www.youtube.com/watch?feature=player_embedded&v=7JBSpojUBm8},
 bdsk-url-1 = {http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_90},
}
keywords
Mobile devices, sensors, interaction techniques, multiple windows

video   Leaf Menus: Linear Menus with Stroke Shortcuts for Small Handheld Devices
A. Roudaut, G. Bailly, E. Lecolinet, L. Nigay. In Interact (IFIP Conference on Human-Computer Interaction), Springer (2009). 616-619. doi url pdf bibcite
@inproceedings{BG:INTERACT-09,
 address = {Uppsala, Sweden},
 author = {A. {Roudaut} and G. {Bailly} and E. {Lecolinet} and L. {Nigay}},
 booktitle = {Interact (IFIP Conference on Human-Computer Interaction)},
 month = aug,
 pages = {616--619},
 publisher = {Springer},
 title = {Leaf Menus: Linear Menus with Stroke Shortcuts for Small Handheld Devices},
 url = {http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_69},
 year = 2009,
 image = {LeafMenus-INTERACT09.jpg},
 video = {http://www.youtube.com/watch?feature=player_embedded&v=bsswQ06pZrU},
 bdsk-url-1 = {http://link.springer.com/chapter/10.1007%2F978-3-642-03655-2_69},
}
keywords
Mobile devices, gestures, menu, interaction techniques

2001
video   Bibliothèques : comparaisons entre le réel et le virtuel en 3D, 2D zoomable et 2D arborescent (Libraries: comparing the real and the virtual in 3D, zoomable 2D and tree-based 2D)
P. Plenacoste, E. Lecolinet, S. Pook, C. Dumas, J.-D. Fekete. In IHM-HCI: Franco-British Conference on Human-Computer Interaction (Interaction Homme-Machine / Human Computer Interaction), IOS Press (2001). pdf bibcite
@inproceedings{reference124,
 address = {Lille},
 author = {P. {Plenacoste} and E. {Lecolinet} and S. {Pook} and C. {Dumas} and J.-D. {Fekete}},
 booktitle = {IHM-HCI: Franco-British Conference on Human-Computer Interaction (Interaction Homme-Machine / Human Computer Interaction)},
 month = sep,
 publisher = {IOS Press},
 title = {Biblioth{\`e}ques : comparaisons entre le r{\'e}el et le virtuel en 3D, 2D zoomable et 2D arborescent},
 year = 2001,
 video = {http://www.telecom-paristech.fr/~elc/videos/biblinum.mov},
}

2000
video   Control Menus: Execution and Control in a Single Interactor
S. Pook, E. Lecolinet, G. Vaysseix, E. Barillot. In CHI'2000: ACM SIGCHI Conference on Human Factors in Computing Systems, ACM (2000). 263-264. pdf bibcite
@inproceedings{reference196,
 address = {The Hague, The Netherlands},
 author = {S. {Pook} and E. {Lecolinet} and G. {Vaysseix} and E. {Barillot}},
 booktitle = {CHI'2000: ACM SIGCHI Conference on Human Factors in Computing Systems},
 month = apr,
 organization = {ACM Press},
 pages = {263--264},
 publisher = {ACM},
 title = {Control Menus: Execution and Control in a Single Interactor},
 year = 2000,
 image = {Zomit-CHI00.jpg},
 video = {http://www.telecom-paristech.fr/~elc/videos/zomit.mov},
}

video   Context and Interaction in Zoomable User Interfaces
S. Pook, E. Lecolinet, G. Vaysseix, E. Barillot. In ACM AVI'2000, ACM (2000). 227-231. pdf bibcite
@inproceedings{reference194,
 address = {Palermo, Italy},
 author = {S. {Pook} and E. {Lecolinet} and G. {Vaysseix} and E. {Barillot}},
 booktitle = {ACM AVI'2000},
 month = may,
 pages = {227--231},
 publisher = {ACM},
 title = {Context and Interaction in Zoomable User Interfaces},
 year = 2000,
 image = {Zomit-AVI00.jpg},
 video = {http://www.telecom-paristech.fr/~elc/videos/zomit.mov},
}
