Human Factors
Notable Scientific Publications
Objective: We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI).
Background: To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice.
Method: Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes.
Results: The overall correlational effect size for trust was r̄ = +0.26, with an experimental effect size of d̄ = +0.71. The effects of human, robot, and environmental characteristics were examined, with particular attention to the robot's performance-based and attribute-based factors. Robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role.
Conclusion: Factors related to the robot itself, specifically its performance, currently have the strongest association with trust, and environmental factors are only moderately associated. There is little evidence for effects of human-related factors.
Application: The findings provide quantitative estimates of the human, robot, and environmental factors that influence trust in HRI. In particular, the current summary provides effect size estimates useful in establishing design and training guidelines for the robot-related factors of HRI trust. Furthermore, the results indicate that improper trust calibration may be mitigated by adjusting robot design. However, many future research needs were identified.
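As a rough illustration of the aggregation behind the reported r̄ = +0.26, the sketch below pools correlational effect sizes via Fisher's z-transform with sample-size weights, a standard meta-analytic approach. The individual (r, n) values and the weighting scheme are assumptions for illustration; the paper's actual coding and weighting are not given in the abstract.

```python
# Minimal sketch of pooling correlational effect sizes via Fisher's
# z-transform. The (r, n) pairs below are illustrative placeholders,
# not the actual data behind r-bar = +0.26.
import math

def pool_correlations(effects):
    """effects: list of (r, n) pairs -> weighted mean correlation."""
    num, den = 0.0, 0.0
    for r, n in effects:
        z = math.atanh(r)          # Fisher r-to-z transform
        w = n - 3                  # inverse-variance weight for z
        num += w * z
        den += w
    return math.tanh(num / den)    # back-transform pooled z to r

# Hypothetical per-study effect sizes and sample sizes:
print(round(pool_correlations([(0.31, 40), (0.22, 60), (0.27, 35)]), 2))
```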
Objective: This paper introduces a robust, real-time system for detecting driver lane changes. Background: As intelligent transportation systems evolve to assist drivers in their intended behaviors, the systems have demonstrated a need for methods of inferring driver intentions and detecting intended maneuvers. Method: Using a “model tracing” methodology, our system simulates a set of possible driver intentions and their resulting behaviors using a simplification of a previously validated computational model of driver behavior. The system compares the model's simulated behavior with a driver's actual observed behavior and thus continually infers the driver's unobservable intentions from her or his observable actions. Results: For data collected in a driving simulator, the system detects 82% of lane changes within 0.5 s of maneuver onset (assuming a 5% false alarm rate), 93% within 1 s, and 95% before the vehicle moves one-fourth of the lane width laterally. For data collected from an instrumented vehicle, the system detects 61% within 0.5 s, 77% within 1 s, and 84% before the vehicle moves one-fourth of the lane width laterally. Conclusion: The model-tracing system is the first system to demonstrate high sample-by-sample accuracy at low false alarm rates as well as high accuracy over the course of a lane change with respect to time and lateral movement. Application: By providing robust real-time detection of driver lane changes, the system shows good promise for incorporation into the next generation of intelligent transportation systems.
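The model-tracing idea can be sketched as follows: simulate each candidate intention with a driver model, score its predicted behavior against the observed trajectory, and report the best match. The toy predict() dynamics and drift parameters here are hypothetical stand-ins for the validated computational driver model the paper actually uses.

```python
# Hedged sketch of model tracing: score each candidate intention by
# how well its simulated behavior matches observed behavior. The
# lateral-drift "model" below is a deliberate oversimplification.
INTENTIONS = ("keep_lane", "change_left", "change_right")

def predict(intention, t):
    """Toy lateral-position prediction (in lane widths) at time t."""
    drift = {"keep_lane": 0.0, "change_left": 0.3, "change_right": -0.3}
    return drift[intention] * t

def infer_intention(observed):
    """observed: list of (t, lateral_pos) samples -> best-matching intention."""
    def error(intention):
        return sum((predict(intention, t) - y) ** 2 for t, y in observed)
    return min(INTENTIONS, key=error)

# A trajectory drifting left is classified as an intended left change:
samples = [(0.0, 0.00), (0.5, 0.14), (1.0, 0.31)]
print(infer_intention(samples))  # -> change_left
```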
This study investigated the control strategies and decision making of drivers who were executing overtaking maneuvers in a fixed-base driving simulator. It was found that drivers were frequently inaccurate in deciding whether it was safe to overtake in front of an oncoming vehicle. One source of error in this situation was the control strategy adopted by the driver; in several instances our drivers initiated an overtaking maneuver when the oncoming car's distance was above a critical value, even though there was not sufficient time to complete a safe maneuver. Adaptation to closing speed (produced by driving on a straight open road) also had large effects on overtaking behavior. For all participants, closing speed adaptation resulted in decisions that were delayed, of higher risk, and more variable. Actual or potential applications of this research include improved training for younger drivers and the development of in-car interfaces that reduce closing speed adaptation.
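A worked example of the distance-versus-time error described above, with hypothetical numbers: a gap that exceeds a driver's critical distance can still leave less time than the maneuver requires.

```python
# Illustrative arithmetic only; none of these numbers come from the study.
def time_available(gap_m, own_speed_ms, oncoming_speed_ms):
    """Seconds until the oncoming vehicle closes the gap."""
    return gap_m / (own_speed_ms + oncoming_speed_ms)

gap = 250.0     # m, above a driver's hypothetical "critical distance"
t_needed = 8.0  # s, hypothetical time to complete a safe pass
t_avail = time_available(gap, own_speed_ms=25.0, oncoming_speed_ms=25.0)
print(f"available {t_avail:.1f} s vs needed {t_needed:.1f} s")
# available 5.0 s vs needed 8.0 s -> the distance looked safe, the time was not
```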
The study focused on the relationship between field dependence and the ability to perceive traffic signs in embedded and disembedded contexts as measured by verbal reaction times. Intercorrelations among the reaction times, personality measures, and driving record items were also tested. Twenty-eight females were blocked into four quartiles according to their score on the Group Embedded Figures Test. Subjects completed the traffic-sign task, the Eysenck Personality Inventory, and a driving experience questionnaire. Field-dependent subjects had longer reaction times to embedded traffic signs and more traffic accidents than did field-independent subjects. Also, extraverts had longer reaction times to the embedded traffic signs, more accidents, and more traffic convictions than introverts. No relationships were found for neuroticism.
The effects of a warning's validity and display characteristics on the responses to binary warnings were studied in a categorization task that resembled the control of a simulated production environment. Students performed a visual signal detection task and were aided by a binary warning indicator. Experimental conditions differed in the validity of the warning and its proximity to the judged stimulus. Participants' performance improved over the course of the experiment, and they partly adjusted their responses to the validity of the warnings but continued to respond to nonvalid warnings throughout the experiment. It was particularly difficult to ignore the nonvalid information when it was integrated with the continuous information. There was evidence for nonoptimal use of the information from the warning system, whether it was valid or not valid. The results indicate a possible distinction between two dimensions of users' trust in warning systems: compliance and reliance. Actual or potential implications of this research include improved warning design based on analysis of system and operator characteristics.
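The compliance/reliance distinction suggested by these results can be made concrete with a small sketch: compliance is the tendency to respond when the warning activates, reliance the tendency to withhold response when it stays silent. The trial log below is an illustrative assumption, not the experiment's data.

```python
# Sketch of compliance and reliance rates from a log of
# (warning_on, operator_responded) trials. Sample data is invented.
def compliance_reliance(trials):
    on = [resp for warn, resp in trials if warn]
    off = [resp for warn, resp in trials if not warn]
    compliance = sum(on) / len(on)                   # P(respond | warning on)
    reliance = sum(not r for r in off) / len(off)    # P(no response | warning off)
    return compliance, reliance

log = [(True, True), (True, True), (True, False),
       (False, False), (False, True), (False, False)]
print(compliance_reliance(log))  # -> (0.67, 0.67), roughly
```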
We performed an experiment in which a human, working in conjunction with an automated signal detection device, monitored a system for abnormal states. The human and the automated alarm monitored sets of information that were either dependent, partially independent, or independent of each other. The response criteria of the alarm and human were manipulated. The participants concurrently performed either an easy or difficult tracking task as a load task. The results indicate that system sensitivity decreases as the sets of information become increasingly dependent on each other. The benefits of the alarm did not always outweigh its costs, especially in the more dependent conditions, which are probably more characteristic of real-life situations.
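Sensitivity in such signal detection settings is conventionally summarized as d′ = z(hit rate) − z(false alarm rate). The sketch below computes d′ for two hypothetical operating points, loosely mirroring the independent-versus-dependent contrast in the study; the rates are invented for illustration.

```python
# Standard signal-detection sensitivity measure; sample rates are invented.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity: z(hit rate) minus z(false alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(0.90, 0.10), 2))  # more independent channels -> ~2.56
print(round(d_prime(0.75, 0.25), 2))  # more dependent channels   -> ~1.35
```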
This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.
The study assessed the use of binary warnings in a detection task with high attentional demands. Participants in the experiment had to decide whether to continue or halt production based on a briefly displayed number that indicated a temperature level. The short time that the number was displayed required that participants focus on the display area. Participants were rewarded for production when the system was intact and were heavily penalized for decisions to produce under dangerous temperature levels. Color-coded warning cues (green for safe, red for danger) were displayed to the participants prior to number presentation. The experimental conditions differed in the validity of the cue and in the probability of red cues. Results showed significant learning for all conditions. Participants tended to ignore the nonvalid and low-validity cues and rely only on highly valid cues. However, the mere existence of cues affected participants' general tendency to take risks. Actual or potential applications of this research include improving systems that require operators to devote attention to complex tasks while receiving and responding to warnings.
This paper presents a conceptual analysis of dynamic hazard warning systems. The normative aspects of responses to warnings are analyzed, and a distinction is made between two forms of responses to a warning system, referred to as compliance and reliance. Determinants of the responses to warnings are identified, and they are broadly classified into normative, task, and operator factors. Existing research on warnings and automation is assessed in view of this conceptual framework, and directions for future research are discussed. Some implications of this analysis for practitioners, designers, and researchers are indicated. Actual or potential applications of this research include recommendations for the analysis, design, and study of dynamic warning systems.
Suitably adapted computers hold considerable potential for integrating people who are blind or visually impaired into the mainstream. The principal problems that preclude the achievement of this potential are human factors issues. These issues are discussed, and the problems presented by icon-based interfaces are reviewed. An argument is offered that these issues, which ostensibly pertain to the blind or visually impaired user, are fundamental issues confronting all users. There is reason to hope that the benefits of research into the human factors issues of people with vision impairments will also extend to the sighted user.