Gaze-based prediction of pen-based virtual interaction tasks
References
Alamargot, 2006. Eye and pen. Behav. Res. Methods 38, 287. doi:10.3758/BF03192780
Bader, T., Vogelgesang, M., Klaus, E., 2009. Multimodal integration of natural gaze behavior for intention recognition during object manipulation. In: Proceedings of the Eleventh International Conference on Multimodal Interfaces, ACM, New York, NY, USA, pp. 199–206.
Ballard, 1992. Hand-eye coordination during sequential tasks [and discussion]. Philos. Trans. Biol. Sci. 337, 331. doi:10.1098/rstb.1992.0111
Bednarik, R., Vrzakova, H., Hradis, M., 2012. What do you want to do next: a novel approach for intent prediction in gaze-based interaction. In: Proceedings of the Symposium on Eye Tracking Research and Applications, ACM, New York, NY, USA, pp. 83–90.
Bulling, A., Ward, J.A., Gellersen, H., Tröster, G., 2009. Eye movement analysis for activity recognition. In: Proceedings of the Eleventh International Conference on Ubiquitous Computing, ACM, New York, NY, USA, pp. 41–50.
Bulling, 2011. What's in the eyes for context-awareness? IEEE Pervasive Comput. 10, 48. doi:10.1109/MPRV.2010.49
Campbell, C.S., Maglio, P.P., 2001. A robust algorithm for reading detection. In: Proceedings of the 2001 Workshop on Perceptive User Interfaces, ACM, New York, NY, USA, pp. 1–7.
Chang, 2011. LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 1. doi:10.1145/1961189.1961199
Courtemanche, 2011. Activity recognition using eye-gaze movements and traditional interactions. Interact. Comput. 23, 202. doi:10.1016/j.intcom.2011.02.008
de Xivry, 2007. Saccades and pursuit. J. Physiol. 584, 11. doi:10.1113/jphysiol.2007.139881
Duchowski, 2004. Gaze-contingent displays. Cyberpsychol. Behav. 7, 621.
Fathi, A., Li, Y., Rehg, J.M., 2012. Learning to recognize daily actions using gaze. In: Proceedings of the Twelfth European Conference on Computer Vision – Volume Part I, Springer-Verlag, Berlin, Heidelberg, 2012, pp. 314–327.
Felty, T., 2004. Dynamic time warping. MATLAB Central File Exchange. 〈http://www.mathworks.com/matlabcentral/fileexchange/6516-dynamic-time-warping/〉.
Forlines, C., Balakrishnan, R., 2008. Evaluating tactile feedback and direct vs. indirect stylus input in pointing and crossing selection tasks. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, pp. 1563–1572.
Gesierich, 2008. Human gaze behaviour during action execution and observation. Acta Psychol. 128, 324. doi:10.1016/j.actpsy.2008.03.006
Harrison, B.L., Ishii, H., Vicente, K.J., Buxton, W.A.S., 1995. Transparent layered user interfaces: an evaluation of a display design to enhance focused and divided attention. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, pp. 317–324.
Hayhoe, 2005. Eye movements in natural behavior. Trends Cogn. Sci. 9, 188. doi:10.1016/j.tics.2005.02.009
Iqbal, S.T., Bailey, B.P., 2004. Using eye gaze patterns to identify user tasks. In: The Grace Hopper Celebration of Women in Computing, pp. 5–10.
James, 2007. Curve alignment by moments. Ann. Appl. Stat. 1, 480. doi:10.1214/07-AOAS127
Johansson, 2001. Eye-hand coordination in object manipulation. J. Neurosci. 21, 6917. doi:10.1523/JNEUROSCI.21-17-06917.2001
Khotanzad, 1990. Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 12, 489. doi:10.1109/34.55109
Kneip, 1992. Statistical tools to analyze data representing a sample of curves. Ann. Stat. 20, 1266. doi:10.1214/aos/1176348769
Kumar, M., Paepcke, A., Winograd, T., 2007. EyePoint: practical pointing and selection using gaze and keyboard. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, pp. 421–430.
Land, 2001. In what ways do eye movements contribute to everyday activities? Vision Res. 41, 3559. doi:10.1016/S0042-6989(01)00102-X
Li, Y., Hinckley, K., Guan, Z., Landay, J.A., 2005. Experimental analysis of mode switching techniques in pen-based user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, pp. 461–470.
Negulescu, M., Ruiz, J., Lank, E., 2010. Exploring usability and learnability of mode inferencing in pen/tablet interfaces. In: Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium. Eurographics Association, Aire-la-Ville, Switzerland, pp. 87–94.
Nielsen, 1993. Noncommand user interfaces. Commun. ACM 36, 83. doi:10.1145/255950.153582
Ogaki, K., Kitani, K.M., Sugano, Y., Sato, Y., 2012. Coupling eye-motion and ego-motion features for first-person activity recognition. In: Computer Vision and Pattern Recognition Workshops, IEEE, pp. 1–7.
Ouyang, T.Y., Davis, R., 2009. A visual approach to sketched symbol recognition. In: Proceedings of the Twenty-first International Joint Conference on Artificial Intelligence, pp. 1463–1468.
Peng, 2005. Feature selection based on mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 27, 1226. doi:10.1109/TPAMI.2005.159
Plimmer, 2008. Experiences with digital pen, keyboard and mouse usability. J. Multimodal User Interfaces 2, 13. doi:10.1007/s12193-008-0002-4
Ramsay, J.O., Silverman, B.W., 2005. Functional Data Analysis. Springer, New York.
Rayner, 2009. Eye movements and attention in reading, scene perception, and visual search. Q. J. Exp. Psychol. 62, 1457. doi:10.1080/17470210902816461
Rubine, 1991. Specifying gestures by example. SIGGRAPH Comput. Graph. 25, 329. doi:10.1145/127719.122753
Sakoe, 1978. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 26, 43. doi:10.1109/TASSP.1978.1163055
Steichen, B., Carenini, G., Conati, C., 2013. User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities. In: Proceedings of the Eighteenth International Conference on Intelligent User Interfaces, ACM, New York, NY, USA, pp. 317–328.
Tümen, R.S., Acer, M.E., Sezgin, T.M., 2010. Feature extraction and classifier combination for image-based sketch recognition. In: Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium, Eurographics Association, Aire-la-Ville, Switzerland, pp. 63–70.
Yi, 2009. Recognizing behavior in hand-eye coordination patterns. Int. J. Hum. Robot. 6, 337. doi:10.1142/S0219843609001863
Yu, C., Ballard, D., 2002. Learning to recognize human action sequences. In: Proceedings of the Second International Conference on Development and Learning, pp. 28–33.
Yu, C., Ballard, D.H., 2002. Understanding human behaviors based on eye-head-hand coordination. In: Proceedings of the Second International Workshop on Biologically Motivated Computer Vision, Springer-Verlag, London, UK, pp. 611–619.
Zhai, S., Morimoto, C., Ihde, S., 1999. Manual and gaze input cascaded (MAGIC) pointing. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, pp. 246–253.