APPERT Caroline
Faculty habilitation
Group: Human-Centered Computing
Moving the Expressive Power from the Displays to the Fingers
Defended on 26/06/2017, committee:
- Michel Beaudouin-Lafon, Professor, Université Paris-Sud, France
- Stephen Brewster, Professor, University of Glasgow, Scotland
- Géry Casiez, Professor, Université Lille 1, Lille, France
- Andy Cockburn, Professor, University of Canterbury, New Zealand
- Jean-Claude Martin, Professor, Université Paris-Sud, France
- Laurence Nigay, Professor, Université Grenoble Alpes, Grenoble, France
- Shumin Zhai, Senior Staff Research Scientist, Google, Mountain View, CA, USA
Abstract:
Optimizing the bandwidth of the communication channel between users and the system is fundamental for designing efficient interactive systems. Apart from the case of speech-based interfaces that rely on users' natural language, this entails designing an efficient language that users can adopt and that the system can understand. My research has focused on studying and optimizing the following two types of languages: interfaces that allow users to trigger actions through the direct manipulation of on-screen objects, and interactive systems that allow users to invoke commands by performing specific movements. Direct manipulation encodes most information in the graphical representation, relying mostly on users' ability to recognize visual elements, whereas gesture-based interaction interprets the shape and dynamics of users' movements, relying mostly on users' ability to recall specific movements. I will present my main research projects on these two types of language and discuss how we can increase the efficiency of interactive systems that make use of them. With direct manipulation, achieving a high expressive power and a good level of usability depends on the interface's ability to accommodate large graphical scenes while enabling the easy selection and manipulation of objects in the scene. With gestures, it depends on the number of different gestures in the system's vocabulary, as well as on the simplicity of those gestures, which should remain easy to learn and execute. I will conclude with directions for future work on interaction with tangible objects.