DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG

IEEE Transactions on Neural Systems and Rehabilitation Engineering - Volume 25, Issue 11, Pages 1998-2008 - 2017
Akara Supratak1, Hao Dong1, Chao Wu1, Yike Guo1
1Department of Computing, Imperial College London, London, SW7 2AZ, UK

Abstract

Keywords


References

Karpathy, 2015, Visualizing and understanding recurrent networks

Abadi, 2016, TensorFlow: Large-scale machine learning on heterogeneous distributed systems

10.1016/j.ipm.2009.03.002

10.1177/001316446002000104

10.1155/2012/107046

Tsinalis, 2016, Automatic sleep stage scoring with single-channel EEG using convolutional neural networks

10.1016/j.neucom.2012.11.003

Dong, 2016, Mixed neural network approach for temporal sleep stage classification

Cohen, 2014, Analyzing neural time series data: Theory and practice, 10.7551/mitpress/9609.001.0001

Ioffe, 2015, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

10.1109/CVPR.2016.90

10.1109/78.650093

10.1162/neco.1997.9.8.1735

10.1109/IJCNN.2000.861302

10.1109/10.867928

10.1016/j.jneumeth.2015.01.022

Goldberger, 2000, PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals, Circulation, 101, e215, 10.1161/01.CIR.101.23.e215

Iber, 2007, The AASM Manual for the Scoring of Sleep and Associated Events

10.1016/j.eswa.2010.04.043

Lagerlund, 2000, Manipulating the magic of digital EEG: Montage reformatting and filtering, Amer J Electroneurodiagnostic Technol, 40, 121, 10.1080/1086508X.2000.11079295

10.3389/fnins.2014.00263

10.1007/s00521-017-2919-6

10.1007/s10439-015-1444-y

10.1016/0013-4694(69)90021-2

10.1016/j.knosys.2017.05.005

10.1038/nrn2868

Sak, 2014, Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition

Kingma, 2014, Adam: A method for stochastic optimization

Szegedy, 2015, Rethinking the inception architecture for computer vision

Srivastava, 2014, Dropout: A simple way to prevent neural networks from overfitting, J Mach Learn Res, 15, 1929

Pascanu, 2012, On the Difficulty of Training Recurrent Neural Networks

10.1111/jsr.12169

Zaremba, 2014, Recurrent Neural Network Regularization