Computational Intelligence and Neuroscience
Notable scientific publications
* Data are for reference only
Biorobotic fishes have a huge impact on the development of underwater devices due to both their fast swimming speed and great maneuverability. In this paper, an enhanced CPG model is investigated for locomotion control of an elongated undulating fin robot inspired by the black knifefish. The proposed CPG network comprises sixteen coupled Hopf oscillators for gait generation to mimic fishlike swimming. Furthermore, an enhanced particle swarm optimization (PSO), called differential particle swarm optimization (D-PSO), is introduced to find a set of optimal parameters of the modified CPG network. The proposed D-PSO-based CPG network not only increases the thrust force, yielding a faster swimming speed, but also avoids local maxima, enhancing the propulsive performance of the undulating fin robot. Additionally, a comparison of D-PSO with the traditional PSO and a genetic algorithm (GA) was performed in tuning the parametric values of the CPG model to demonstrate the superiority of the introduced method. The D-PSO-based optimization technique was tested on an actual undulating fin robot with sixteen fin-rays. The obtained results show that the average propulsive force is increased by 5.92% compared with the conventional CPG model.
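The core building block named in this abstract, the Hopf oscillator, has a standard form that is easy to sketch. Below is a minimal, hypothetical chain of coupled Hopf oscillators generating a travelling wave across fin-rays; all parameter values and the nearest-neighbour coupling scheme are illustrative assumptions, not the authors' tuned network.

```python
# Sketch of a chain of coupled Hopf oscillators for fin-ray gait
# generation. Parameters and coupling are illustrative assumptions.
import numpy as np

N = 16                   # oscillators, one per fin-ray
gamma, mu = 10.0, 1.0    # convergence rate, squared amplitude
omega = 2 * np.pi        # intrinsic frequency (rad/s)
k = 0.5                  # coupling strength to the previous oscillator
lag = 2 * np.pi / N      # phase lag between neighbouring fin-rays
dt, steps = 1e-3, 5000

x = np.random.uniform(-0.1, 0.1, N)
y = np.random.uniform(-0.1, 0.1, N)
trajectory = []
for _ in range(steps):
    r2 = x**2 + y**2
    dx = gamma * (mu - r2) * x - omega * y
    dy = gamma * (mu - r2) * y + omega * x
    # pull each oscillator toward its predecessor rotated by the lag,
    # so the phases settle into a travelling wave along the fin
    cx = np.cos(lag) * x[:-1] - np.sin(lag) * y[:-1]
    cy = np.sin(lag) * x[:-1] + np.cos(lag) * y[:-1]
    dx[1:] += k * (cx - x[1:])
    dy[1:] += k * (cy - y[1:])
    x, y = x + dt * dx, y + dt * dy
    trajectory.append(x.copy())   # x[i] drives fin-ray i's angle
```

The D-PSO stage described in the abstract would then search over parameters such as omega, k, and lag to maximize the measured thrust.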
The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm that has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are then briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented and applied to a practical wind generator optimal design problem.
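For readers unfamiliar with HS, the loop is compact enough to sketch in full. The following is a minimal implementation of the original algorithm minimizing a sphere function; the parameter values (HMS, HMCR, PAR, bandwidth) are common textbook defaults, not the settings of the modified variant discussed in the paper.

```python
# Minimal Harmony Search: improvise a new harmony from memory,
# pitch-adjust it, and replace the worst stored harmony if better.
import random

def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000):
    memory = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(hms)]
    cost = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    v += random.uniform(-bw, bw) * (hi - lo)
            else:                                 # random selection
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=cost.__getitem__)
        c = f(new)
        if c < cost[worst]:
            memory[worst], cost[worst] = new, c
    best = min(range(hms), key=cost.__getitem__)
    return memory[best], cost[best]

sol, val = harmony_search(lambda x: sum(v * v for v in x),
                          dim=5, lo=-10.0, hi=10.0)
```

A Pareto-dominance-based variant like the one presented in the paper would replace the scalar cost comparison with a dominance check over multiple objectives.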
This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarity, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in CARTOOL, a freely available academic software package. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
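Two of the global measures mentioned here, global field power (GFP) and topographic dissimilarity, have standard definitions that can be stated in a few lines. The sketch below follows those textbook definitions; it is an illustration, not CARTOOL's code.

```python
# Global field power (GFP) and global map dissimilarity (GMD) for
# average-referenced multichannel maps (textbook definitions).
import numpy as np

def gfp(v):
    """Spatial standard deviation of one map across electrodes."""
    v = v - v.mean()                 # re-reference to the average
    return np.sqrt((v ** 2).mean())

def gmd(u, v):
    """GFP of the difference of GFP-normalized maps: 0 means identical
    topographies, 2 means polarity-inverted topographies."""
    u = (u - u.mean()) / gfp(u)
    v = (v - v.mean()) / gfp(v)
    return gfp(u - v)
```

Because both maps are strength-normalized before the comparison, a nonzero GMD indicates a change in the configuration of the underlying sources rather than a mere change in response strength.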
We present a program (Ragu; Randomization Graphical User interface) for statistical analyses of multichannel event-related EEG and MEG experiments. Based on measures of scalp field differences that include all sensors, and using powerful, assumption-free randomization statistics, the program yields robust, physiologically meaningful conclusions based on the entire, untransformed, and unbiased set of measurements. Ragu accommodates up to two within-subject factors and one between-subject factor, each with multiple levels. Significance is computed as a function of time and can be controlled for multiple testing with overall analyses. Results are displayed in an intuitive visual interface that allows further exploration of the findings. A sample analysis of an ERP experiment illustrates the different possibilities offered by Ragu. The aim of Ragu is to maximize statistical power while minimizing the need for a priori choices of models and parameters (such as inverse models or sensors of interest) that interact with and bias statistics.
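The randomization logic can be illustrated at a single time point. The sketch below implements a TANOVA-style permutation test for a difference between two groups of maps; it shows the idea only (a between-subject shuffle, a simple effect-size measure) and is not Ragu's actual code, which also handles within-subject designs and multi-factor models.

```python
# TANOVA-style randomization test at one time point: shuffle group
# labels and ask how often a random split produces a group-mean map
# difference at least as large as the observed one.
import numpy as np

def tanova_p(maps_a, maps_b, n_perm=5000, seed=0):
    """maps_a, maps_b: (subjects, electrodes) arrays for two groups."""
    rng = np.random.default_rng(seed)
    effect = lambda a, b: np.linalg.norm(a.mean(0) - b.mean(0))
    observed = effect(maps_a, maps_b)
    pooled = np.vstack([maps_a, maps_b])
    n = len(maps_a)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if effect(pooled[idx[:n]], pooled[idx[n:]]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Running this at every time point then motivates the overall analyses mentioned above, which assess whether the count or duration of significant time points exceeds what chance alone would produce.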
Brain-computer interface (BCI) systems based on the steady-state visual evoked potential (SSVEP) provide higher information throughput and require shorter training than BCI systems using other brain signals. To elicit an SSVEP, a repetitive visual stimulus (RVS) has to be presented to the user. The RVS can be rendered on a computer screen by alternating graphical patterns, or with external light sources able to emit modulated light. The properties of an RVS (e.g., frequency, color) depend on the rendering device and influence the SSVEP characteristics, which affects the BCI information throughput and the levels of user safety and comfort. The literature on SSVEP-based BCIs generally does not give reasons for the selection of the rendering devices or RVS properties used. In this paper, we review the literature on SSVEP-based BCIs and comprehensively report on the different RVS choices in terms of rendering devices and properties, and on their potential influence on BCI performance, user safety, and comfort.
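One concrete consequence of the rendering device is worth spelling out: on an ordinary monitor a pattern can only change on frame boundaries, so a screen-rendered RVS is restricted to frequencies that divide the refresh rate, whereas an external LED can be modulated freely. A small helper makes the constraint explicit (illustrative code, not from any reviewed paper).

```python
# Flicker frequencies achievable on a monitor: one cycle must span an
# integer number of frames, so f = refresh_rate / k for integer k >= 2.
def achievable_frequencies(refresh_hz=60.0, max_frames_per_cycle=12):
    return [refresh_hz / k for k in range(2, max_frames_per_cycle + 1)]

print(achievable_frequencies())
# [30.0, 20.0, 15.0, 12.0, 10.0, ~8.57, 7.5, ~6.67, 6.0, ~5.45, 5.0]
```

This is one reason the choice of rendering device interacts with BCI performance: the usable stimulus frequencies, and hence the separability of the SSVEP responses, differ between screens and light sources.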
Intelligent medical diagnosis has become common in the era of big data, although this technique has been applied to asthma only in limited contexts. Using routine blood biomarkers to identify asthma patients would make clinical diagnosis easier to implement and would enhance research of key asthma variables through data mining techniques. We used routine blood data from healthy individuals to construct a Mahalanobis space (MS). Then, we calculated Mahalanobis distances of the training routine blood data from 355 asthma patients and 1,480 healthy individuals to verify the efficiency of the MS. Orthogonal arrays and signal-to-noise ratios were used to optimize the blood biomarker variables. A receiver operating characteristic (ROC) curve was used to determine the threshold value. Ultimately, we validated the system on 182 individuals based on the threshold value. Of the 35 patients with asthma, the Mahalanobis-Taguchi system (MTS) correctly classified 94.15%; in addition, 97.20% of the 147 healthy individuals were correctly classified. The system isolated 7 routine blood biomarkers. Among these biomarkers, platelet distribution width, mean platelet volume, white blood cell count, eosinophil count, and lymphocyte ratio performed well in asthma diagnosis. In brief, MTS shows promise as an accurate method to identify asthma patients based on 7 vital blood biomarker variables and a threshold determined by the ROC curve, offering the potential to simplify diagnostic complexity and optimize clinical efficiency.
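The distance computation at the heart of this pipeline is standard and can be sketched briefly. The version below standardizes each biomarker against the healthy reference group and scales the squared distance by the number of variables, as is conventional in the Mahalanobis-Taguchi system, so healthy cases average near 1; the variable names and the use of a pseudo-inverse are illustrative assumptions.

```python
# Mahalanobis space from healthy reference data, then scaled
# Mahalanobis distances for new samples (MTS convention).
import numpy as np

def fit_mahalanobis_space(healthy):
    """healthy: (n_samples, n_biomarkers) routine blood data."""
    mu = healthy.mean(axis=0)
    sd = healthy.std(axis=0, ddof=1)
    z = (healthy - mu) / sd
    corr_inv = np.linalg.pinv(np.corrcoef(z, rowvar=False))
    return mu, sd, corr_inv

def mahalanobis_distance(x, mu, sd, corr_inv):
    z = (x - mu) / sd
    return float(z @ corr_inv @ z) / len(z)   # healthy average ~1

# Samples whose distance exceeds the ROC-derived threshold would be
# flagged as suspected asthma cases.
```

The orthogonal-array/signal-to-noise step described above then prunes biomarkers, keeping only those whose removal degrades the separation between the two groups.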
In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture that combines Elman recurrent neural networks with a stochastic time-effective function. We analyze the proposed model with linear regression, complexity-invariant distance (CID), and multiscale CID (MCID) methods and compare it with other models such as the backpropagation neural network (BPNN), the stochastic time-effective neural network (STNN), and the Elman recurrent neural network (ERNN); the empirical results show that the proposed neural network displays the best performance among these networks in financial time series forecasting. Furthermore, the predictive performance of the established model is tested on the SSE, TWSE, KOSPI, and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are also presented. The experimental results show that this approach performs well in predicting values of the stock market indices.
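The distinguishing feature of an Elman network, the context units that feed the previous hidden state back into the hidden layer, is simple to write down. The sketch below shows only the forward pass with placeholder weights; the stochastic time-effective function of the paper, which reweights training samples so that recent data influence the fit more strongly, is indicated only as a comment.

```python
# Elman-network forward pass: the hidden state is copied into context
# units and fed back at the next time step (illustrative shapes).
import numpy as np

def elman_forward(x_seq, W_in, W_ctx, W_out, b_h, b_o):
    """x_seq: (T, n_in) input series; returns (T, n_out) predictions."""
    h = np.zeros(W_ctx.shape[0])        # context units start at zero
    outputs = []
    for x in x_seq:
        h = np.tanh(W_in @ x + W_ctx @ h + b_h)   # hidden + context
        outputs.append(W_out @ h + b_o)           # e.g., next index value
    return np.array(outputs)

# Training with a stochastic time-effective function would weight the
# squared error of each sample by a factor phi(t) that grows with
# recency, e.g. loss = sum(phi(t) * (target - pred)**2) / 2.
```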
Semiautomated digital creation is increasingly important in the production of electronic music, and learning locally effective features of audio data remains a difficult problem in this field. Based on recurrent neural network theory, this paper designs a semiautomated digital creation system for electronic music that performs digital manipulation and genre classification. The recurrent neural network improves the flow of musical information between the input and output of the network by adopting dense connections in the style of DenseNet, and it adopts an inception-like structure to select effective recurrent kernels for electronic music categories autonomously. The simulation also uses a prediction method based on semiautomatically segmented digital audio clips, which emphasizes learning locally effective features of the audio data; this gives the model the ability to create audio samples of different lengths and improves its support for creative tasks in different scenarios. Building the system involves determining the number of neurons, selecting the neuron activation functions, choosing the connection scheme, and specifying the learning rules, after which the training samples are formed. The experimental results show that the recurrent neural network exhibits powerful feature extraction and classification of musical information: 10-fold cross-validation reaches 88.7% accuracy on the GTZAN dataset and 87.68% on the ISMIR2004 dataset, surpassing comparable models and reaching a leading level. After further pretraining on the Million Song Dataset (MSD), the results improve markedly, with accuracies rising to 91.0% and 89.91%, respectively, advancing the system's semiautomated creation capability.
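The two architectural ideas named here, DenseNet-style connections and an inception-like structure, can be made concrete with a small hedged sketch. Everything below (layer sizes, kernel widths, the use of GRUs and a log-mel front end) is an assumption for illustration, not the paper's exact network.

```python
# Sketch: inception-like parallel convolutions over time, followed by
# recurrent layers with a dense (concatenating) connection, ending in
# genre logits. Sizes are illustrative guesses.
import torch
import torch.nn as nn

class DenseRecurrentClassifier(nn.Module):
    def __init__(self, n_mels=64, hidden=128, n_classes=10):
        super().__init__()
        # inception-like branches: different temporal receptive fields
        self.branches = nn.ModuleList(
            nn.Conv1d(n_mels, 32, k, padding=k // 2) for k in (3, 5, 7))
        self.rnn1 = nn.GRU(96, hidden, batch_first=True)
        # dense connection: the second RNN sees its predecessor's input
        # concatenated with its output, as in DenseNet blocks
        self.rnn2 = nn.GRU(96 + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, n_mels, time)
        z = torch.cat([b(x) for b in self.branches], dim=1)
        z = z.transpose(1, 2)             # (batch, time, 96)
        h1, _ = self.rnn1(z)
        h2, _ = self.rnn2(torch.cat([z, h1], dim=-1))
        return self.head(h2[:, -1])       # logits from the last step
```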
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, namely Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
China's huge regional differences are taken into consideration in studying the factors influencing CO2 emissions of the power industry in different regions, with the aim of improving the efficiency of CO2 emission-reduction policies. From both the production and consumption perspectives, this study analyzes the influencing factors of CO2 emissions and uses the Logarithmic Mean Divisia Index (LMDI) to decompose CO2 emissions while accounting for cross-regional power dispatching in the power industry. The results indicate that the trends of CO2 emissions in eastern, central, and western China were similar from 2005 to 2017 under either perspective. From the production perspective, power consumption is the main driver of CO2 emission growth, and the extent of its effect varies across regions over time. Energy efficiency inhibits CO2 emission growth in all regions. The power structure and cross-regional power distribution affect CO2 emissions in significantly different amounts and directions from region to region. From the consumption perspective, economic activity plays a major role in CO2 emission growth and shapes the trend of CO2 emissions similarly in the three regions, but the extent of its effect varies by region. Targeted policy recommendations are provided to reduce CO2 emissions from China's power industry more effectively.
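The additive form of the LMDI used in such decompositions has a compact closed form: for an identity that writes emissions as a product of factors, each factor's contribution is the logarithmic mean of the start and end totals times the log-change of that factor, and the contributions sum exactly to the total change. Below is a minimal sketch with an invented three-factor identity; the paper's own identity includes more terms, such as cross-regional dispatch.

```python
# Additive LMDI for emissions written as a product of factors:
# delta_C = sum_k L(C1, C0) * ln(x1k / x0k), where L is the log mean.
from math import log

def logmean(a, b):
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_additive(factors0, factors1):
    """factors0/1: dicts of factor values whose product is total CO2."""
    c0 = c1 = 1.0
    for k in factors0:
        c0 *= factors0[k]
        c1 *= factors1[k]
    w = logmean(c1, c0)
    return {k: w * log(factors1[k] / factors0[k]) for k in factors0}

base = {"activity": 100.0, "intensity": 0.80, "emission_factor": 0.90}
now  = {"activity": 130.0, "intensity": 0.70, "emission_factor": 0.85}
effects = lmdi_additive(base, now)
print(effects, sum(effects.values()))   # contributions sum to c1 - c0
```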