British Journal of Educational Technology
Print ISSN: 0007-1013
Online ISSN: 1467-8535
United Kingdom
Publisher: Wiley-Blackwell Publishing Ltd, WILEY
Featured articles
This study examined the impact of structuredness of asynchronous online discussion protocols and evaluation rubrics on meaningful discourse. Transcripts of twelve online discussions involving 87 participants from four sections of a graduate course entitled …
This study investigates the differences in children’s comprehension and enjoyment of storybooks according to the medium of presentation. Two different storybooks were used and 132 children participated. Of these, 51 children read an extract from …
Educational institutions are increasingly turning to learning analytics to identify and intervene with students at risk of underperformance or discontinuation. However, the extent to which the current evidence base supports this investment is unclear, particularly in relation to the effectiveness of interventions based on predictive models. The aim of the present paper was to conduct a systematic review and quality assessment of studies on the use of learning analytics in higher education, focusing specifically on intervention studies. Search terms identified 689 papers, but only 11 studies evaluated the effectiveness of interventions based on learning analytics. These studies highlighted the potential of such interventions, but the general quality of the research was moderate and left several important questions unanswered. The key recommendation based on this review is that more research into the implementation and evaluation of scientifically driven learning analytics is needed to build a solid evidence base for the feasibility, effectiveness and generalizability of such interventions. This is particularly relevant given the increasing tendency of educational institutions around the world to implement learning analytics interventions with little evidence of their effectiveness.
What is already known about this topic
Drop-out rates and underachievement are significant issues at most Western universities. Learning analytics have been shown to predict student performance and risk of dropping out. Interventions based on learning analytics have emerged in recent years, some reportedly successful.
What this paper adds
The paper reviews and synthesizes the evidence on the effectiveness of learning analytics interventions targeting student underperformance, experience and discontinuation. The paper compares and contrasts past and current learning analytics methods and foci, and makes recommendations for future research and practice. It critically synthesizes the current evidence base on learning analytics interventions, a field in constant flux and development.
Implications for practice and/or policy
The paper focuses on a growing area of higher education with the goal of validating learning analytics methods and usefulness. The paper makes evidence-based recommendations for institutions wishing to implement learning analytics programs and/or interventions. The paper makes evidence-based recommendations for instructors as well as researchers in the field.
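The interventions reviewed above rest on predictive models of student risk. To make that pipeline concrete, here is a minimal, purely hypothetical sketch (not the method of any reviewed study): a logistic regression trained on invented engagement features, with an assumed 0.7 probability threshold for flagging students for follow-up.

```python
# Purely illustrative: not the pipeline of any reviewed study. The feature
# names, synthetic data and 0.7 risk threshold are assumptions of this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic per-student engagement features: weekly logins, assignments
# submitted, forum posts.
X = rng.poisson(lam=[5.0, 3.0, 2.0], size=(200, 3)).astype(float)
# Synthetic dropout labels, loosely anti-correlated with total engagement.
y = (X.sum(axis=1) + rng.normal(0.0, 2.0, 200) < 8).astype(int)

model = LogisticRegression().fit(X, y)

# The "intervention" step the review examines: flag students whose predicted
# dropout probability exceeds the threshold so advisers can follow up.
risk = model.predict_proba(X)[:, 1]
at_risk = np.flatnonzero(risk > 0.7)
print(f"{at_risk.size} of {len(X)} students flagged for follow-up")
```

In a deployed system the flagged list would feed an outreach step (emails, adviser meetings); it is the effectiveness of that step which the review finds under-evaluated.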
The aim of this theoretical review was to identify the important factors shown to affect attitudes towards use of educational technologies by students or educators in higher education institutions and organise them into broad, intermediate and narrow groupings. This was done to assist the construction of more objective measurement instruments used in the evaluation of educational technologies. A qualitative review of the influential factors that affect user attitudes, intentions and motivations to use educational technologies was conducted, first by interrogating the fundamental behavioural theories underpinning technology acceptance models, and then by exploring the findings of later and contemporary empirical research conducted in the educational context. Identified factors were grouped to produce an ordered taxonomy of measurement constructs. This taxonomy provides each construct’s lineage back through tertiary, secondary and primary taxonomic groups and provides a greater scope of measurement than commonly used models. Seven primary and twenty-two secondary and tertiary taxonomic groups were defined, which collectively comprise sixty-one measurement constructs. The taxonomy is designed to reduce measurement bias within studies and also acts as a basis for consistent and objective benchmarking within and across institutions.
What is already known about this topic
Technology acceptance models are derived from a number of foundational behavioural and motivational theories. The TAM and UTAUT are validated models that appraise attitude and/or behavioural intention to use an educational technology, but they do not cover the entire scope of what has been shown to be important in various studies. There is little consistency from study to study in the measurement constructs used in technology acceptance models.
What this paper adds
Collection and organisation of the salient measurement constructs into a flexible taxonomy. Establishment of a consistent measurement scope that is specifically suited to educational technology research. Establishment of construct lineage that clearly shows similarities and differences between the various constructs.
Implications for practice and/or policy
The taxonomy supports robust instrument construction to improve both convergent and discriminant validity of measurement models. The taxonomy provides a recommended scope for higher education institutions measuring factors that affect the use of various educational technologies. Consistent use of the taxonomy will provide an objective standard that can be used to compare across institutions or within institutions over time, which assists with benchmarking and management decisions. The taxonomy can be used as a framework for meta-analyses or to collate ‘prior’ data for Bayesian-type technology evaluation.
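To illustrate the lineage idea in the abstract above, the sketch below shows one possible representation of a construct's path through primary, secondary and tertiary taxonomic groups. All group and construct names here are invented placeholders, not entries from the paper's actual taxonomy.

```python
# Hypothetical placeholders only: these are not the paper's actual groups or
# constructs, just an illustration of lineage through taxonomic levels.
from dataclasses import dataclass

@dataclass(frozen=True)
class Construct:
    name: str
    primary: str                  # broadest taxonomic group
    secondary: str
    tertiary: str | None = None   # narrowest grouping, where one exists

    def lineage(self) -> list[str]:
        """Trace the construct back through its taxonomic groups."""
        groups = [self.primary, self.secondary]
        if self.tertiary:
            groups.append(self.tertiary)
        return groups + [self.name]

# Two invented entries; instruments built from the same groups can then be
# compared consistently across studies or institutions.
ease = Construct("perceived ease of use", "technology factors", "usability")
norms = Construct("peer influence", "context factors", "social factors", "social norms")
print(" > ".join(ease.lineage()))   # technology factors > usability > perceived ease of use
print(" > ".join(norms.lineage()))
```

Because instruments built from the same groups share a lineage prefix, a structure like this supports the cross-institution benchmarking the paper recommends.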
This study is the first to systematically investigate the extent to which apps for children aged 0–5 foster play and creativity. There is growing evidence of children's use of tablets, but limited knowledge of the use of apps by children of this age. This ESRC-funded study identified how UK children aged from 0 to 5 use apps, and how far the use of apps promotes play and creativity, given the importance of these for learning and development. A survey was conducted with 2000 parents of under 5s in the UK, using a random, stratified sample, and ethnographic case studies of children in six families were undertaken. Over 17 hours of video films of children using apps were analysed. Findings indicate that children of this age are using a variety of apps, some of which are not aimed at their age range. The design features of such apps can lead to the support or inhibition of play and creativity. The study makes an original contribution to the field in that it offers an account of how apps contribute to the play and creativity of children aged five and under.
The purpose of this paper is to compare and contrast characteristics of use and adoption of mobile learning in higher education in developed and developing countries. A comparative case study based on a survey questionnaire was conducted with 189 students (undergraduate and postgraduate) from Makerere University in Uganda and the University of Adelaide in Australia. The Unified Theory of Acceptance and Use of Technology (UTAUT) was employed as the theoretical framework. The results indicated that higher education students in developed and developing countries use a range of technologies for learning, with major differences between Uganda and Australia. The study concludes that mobile learning in higher education in developed and developing country contexts is still at an experimental stage with students using mobile devices in pedagogically limited ways.
Analyses presented here are secondary data analyses of the …
Most research on learning technology uses clickstreams and questionnaires as its primary sources of quantitative data. This study presents the outcomes of a systematic literature review of empirical evidence on the capabilities of multimodal data (MMD) for human learning. This paper provides an overview of what and how MMD have been used to inform learning and in what contexts. A search resulted in 42 papers that were included in the analysis. The results of the review depict the capabilities of MMD for learning and the ongoing advances and implications that emerge from the employment of MMD to capture and improve learning. In particular, we identified the six main objectives (ie, behavioral trajectories, learning outcome, learning-task performance, teacher support, engagement and student feedback) that multimodal learning analytics (MMLA) research has focused on. We also summarize the implications derived from the reviewed articles and frame them within six thematic areas. Finally, this review stresses that future research should consider developing a framework that would enable MMD capacities to be aligned with the research and learning design (LD). These MMD capacities could also be utilized to further theory and practice. Our findings set a baseline to support the adoption and democratization of MMD within future learning technology research and development.
What is already known about this topic
Capturing and measuring learners’ engagement and behavior using MMD has been explored in recent years and exhibits great potential. There are documented challenges and opportunities associated with capturing, processing, analyzing and interpreting MMD to support human learning. MMD can provide insights into predicting learning engagement and performance, as well as into supporting the learning process.
What this paper adds
Provides a systematic literature review (SLR) of empirical evidence on MMD for human learning. Summarizes the insights MMD can give us about learning outcomes and processes. Identifies challenges and opportunities of MMD to support human learning.
Implications for practice and/or policy
Learning analytics researchers will be able to use the SLR as a guide for future research. Learning analytics practitioners will be able to use the SLR as a summary of the current state of the field.
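A recurring practical step behind MMD studies is fusing streams captured at different rates onto a shared timeline before analysis. The sketch below is an assumption-laden illustration of such alignment: the stream names, timestamps and 10-second window are invented, not a method from any reviewed paper.

```python
# Assumption-laden illustration: stream names, timestamps and the 10-second
# window are invented; this is not a method from any reviewed study.
from collections import defaultdict

# (timestamp_in_seconds, value) samples from two hypothetical modalities.
clicks = [(1.2, "open_task"), (4.7, "submit"), (13.5, "open_hint")]
gaze = [(0.5, 0.81), (5.0, 0.64), (12.0, 0.90)]  # fixation-ratio samples

WINDOW = 10.0  # seconds per analysis window

def windowed(stream):
    """Bucket (time, value) samples into fixed-size time windows."""
    buckets = defaultdict(list)
    for t, v in stream:
        buckets[int(t // WINDOW)].append(v)
    return buckets

click_w, gaze_w = windowed(clicks), windowed(gaze)
# One fused record per window, ready for joint engagement analysis.
aligned = {
    w: {"clicks": click_w.get(w, []), "gaze": gaze_w.get(w, [])}
    for w in sorted(set(click_w) | set(gaze_w))
}
print(aligned)
```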