Multi-modality gaze-contingent displays for image fusion

S.G. Nikolov1, D.R. Bull1, C.N. Canagarajah1, M.G. Jones2, I.D. Gilchrist3
1Centre for Communications Research, University of Bristol, Bristol, UK
2Division of Oncology, University of Bristol, Bristol, UK
3Department of Experimental Psychology, University of Bristol, Bristol, UK

Abstract

Gaze-contingent displays are used in this paper for integrated visualisation of 2-D multi-modality images. In gaze-contingent displays, a window centred on the observer's fixation point is modified as the observer moves their eyes around the display. In the proposed technique, this window, in the central part of vision, is taken from one of the input modalities, while the rest of the display, in peripheral vision, comes from the other. The human visual system fuses these two images into a single percept. An SMI EyeLink I eye-tracker is used to obtain real-time data about the observer's fixation point while they examine the displayed images. The test data used in this study comprise registered medical images (CT and MR), remote sensing images, partially-focused images, and multi-layered geographical maps. In all experiments the observer is presented with a dynamic gaze-contingent display. As the eyes scan the display, information is processed not just from the point of fixation but from a larger area, called the 'useful field of view' or 'functional visual field'. Various display parameters, e.g. the size, shape, border, and colour of the window, affect the perception and combination of the two image types. Images generated using this new approach are presented and qualitatively compared to other commonly used multi-modality image display methods, such as adjacent display, 'chessboard' display and transparency-weighted display.
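The core composition step described above, showing one modality inside a window centred on the fixation point and the other modality everywhere else, can be sketched as follows. This is an illustrative minimal example, not the authors' implementation; the function name, the circular window shape, and the synthetic CT/MR placeholders are assumptions for demonstration.

```python
import numpy as np

def gaze_contingent_composite(central_img, peripheral_img, fixation, radius):
    """Compose a gaze-contingent display: `central_img` is shown inside a
    circular window centred on the fixation point; `peripheral_img` fills
    the rest. Both images must be registered and have the same shape."""
    h, w = central_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    fx, fy = fixation
    # Boolean mask of pixels inside the circular gaze-contingent window.
    mask = (xs - fx) ** 2 + (ys - fy) ** 2 <= radius ** 2
    out = peripheral_img.copy()
    out[mask] = central_img[mask]
    return out

# Synthetic stand-ins for two registered modalities (e.g. a CT and an MR slice).
ct = np.zeros((100, 100), dtype=np.uint8)
mr = np.full((100, 100), 255, dtype=np.uint8)
display = gaze_contingent_composite(ct, mr, fixation=(50, 50), radius=20)
```

In a live system the fixation coordinates would be updated on every eye-tracker sample and the composite regenerated, so the window follows the observer's gaze; window shape, size, and border blending are the display parameters the paper varies.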

Keywords

Displays, Image fusion, Eyes, Visualization, Humans, Visual system, Fuses, Medical tests, Biomedical imaging, Computed tomography
