SN Computer Science

Featured scientific publications

* Data is provided for reference only

Development of Smart Healthcare Monitoring System in IoT Environment
SN Computer Science - Volume 1 - Pages 1-11 - 2020
Md. Milon Islam, Ashikur Rahaman, Md. Rashedul Islam
Healthcare monitoring systems in hospitals and many other health centers have experienced significant growth, and portable healthcare monitoring systems built on emerging technologies are becoming of great concern to many countries worldwide. The advent of Internet of Things (IoT) technologies facilitates the progress of healthcare from face-to-face consulting to telemedicine. This paper proposes a smart healthcare system in an IoT environment that can monitor a patient’s basic health signs, as well as the condition of the room the patient is in, in real time. In this system, five sensors are used to capture data from the hospital environment: a heartbeat sensor, a body temperature sensor, a room temperature sensor, a CO sensor, and a CO2 sensor. The error percentage of the developed scheme is within a certain limit (< 5%) for each case. The condition of the patients is conveyed via a portal to medical staff, who can process and analyze the current situation of the patients. The effectiveness of the system shows that the developed prototype is well suited for healthcare monitoring.
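The <5% error bound cited above can be checked with a simple relative-error calculation; the readings below are hypothetical illustrations, not measurements from the paper:

```python
def error_percentage(measured, reference):
    """Relative error (%) of a prototype sensor reading against a reference instrument."""
    return abs(measured - reference) / reference * 100.0

# Hypothetical (prototype, reference) body-temperature readings in degrees Celsius
readings = [(36.9, 37.0), (38.2, 38.0), (36.4, 36.5)]
errors = [error_percentage(m, r) for m, r in readings]
print(all(e < 5.0 for e in errors))  # True if every reading falls within the 5% limit
```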
Finding Informative Comments for Video Viewing
SN Computer Science - Volume 1 - Pages 1-14 - 2019
Seungwoo Choi, Aviv Segev
Of all the information-sharing methods on the Web, video is a factor of increasing importance and will continue to influence the future Web environment. Services such as YouTube, Vimeo, and Liveleak are information-sharing platforms that support uploading user-generated content (UGC) to the Web. Users tend to seek related information while or after watching an informative video on these Web services. In this situation, the best way to satisfy information needs of this kind is to find and read the comments on these services. However, existing services only support sorting by recency (newest first) or rating (highest LIKES score). Consequently, the search for related information is limited unless users read all the comments. Therefore, we suggest a novel method to find informative comments by considering the original content and its relevance. We developed a set of methods composed of measuring informativeness priority, which we define as the level of information provided by online users; classifying the intention of the information posted online; and clustering to eliminate duplicate themes. The first method, measuring informativeness priority, calculates the extent to which the comments cover all the topics in the original content. After the informativeness priority calculation, the second method classifies the intention of the information posted in comments. The next method then picks the most informative comments by applying clustering methods with rules to eliminate duplicate themes. Experiments based on 20 sampled videos with 1000 comments, and analysis of 1861 TED talk videos with 380,619 comments, show that the suggested methods find more informative comments than existing methods such as sorting by highest LIKES score.
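The topic-coverage idea behind the informativeness priority can be illustrated with a toy token-overlap score; this is a simplified stand-in for the paper's measure, and the topic and comment tokens are invented:

```python
def informativeness_priority(comment_tokens, topic_tokens):
    """Fraction of the original content's topic terms covered by a comment
    (a simplified stand-in for the paper's coverage-based score)."""
    topics = set(topic_tokens)
    return len(set(comment_tokens) & topics) / len(topics) if topics else 0.0

# Invented topic terms extracted from a hypothetical video, plus two comments
topics = ["climate", "energy", "policy", "solar"]
comments = {
    "c1": ["great", "video"],
    "c2": ["solar", "energy", "costs", "policy"],
}
scores = {cid: informativeness_priority(toks, topics) for cid, toks in comments.items()}
# c2 covers three of the four topic terms; c1 covers none
```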
Vaccination, Booster Doses and Social Constraints: A Steady State and an Optimal Transient Approaches to Epidemics Containment
SN Computer Science - Volume 5 - Pages 1-17 - 2023
Paolo Di Giamberardino, Daniela Iacoviello
The problem of defining control actions to contain epidemic diseases is crucial in cases of high infectivity, dangerous or fatal consequences, or large inhabited areas involved. Unfortunately, during the last three years, the COVID-19 pandemic has represented a critical situation all over the world. On the basis of the experience with known diseases and the literature on epidemic modeling, various strategies have been proposed and applied in different countries, some using all possible efforts, others just keeping the overall health impact within acceptable levels. The effectiveness of these approaches has always been measured on the basis of the reproduction number $${\mathcal {R}}_t$$, which is intrinsically a steady-state evaluation, since it does not take control variations into account. In the present paper, with reference to a mathematical model that takes into account the different levels of vaccination in the population, an optimal-condition-based approach is adopted to define the intervention actions, leading to a switching optimal control scheme based on the time-by-time evolution of the disease. The two approaches are developed and compared, showing that the use of partial information can lead to counter-intuitive situations and supporting the necessity of a feedback action to better adapt the containment measures to the situation. Numerical simulations are performed to illustrate the claimed results.
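The steady-state character of the reproduction number can be seen from its standard SIR-model snapshot form, $${\mathcal {R}}_t = (\beta/\gamma)\,S(t)/N$$; the transmission and recovery rates below are assumed for illustration:

```python
def effective_reproduction_number(beta, gamma, s_fraction):
    """Snapshot R_t = (beta/gamma) * S(t)/N for a basic SIR model.
    beta (transmission rate) and gamma (recovery rate) are assumed values.
    The quantity ignores how the control (and hence beta) will vary
    afterwards, which is the steady-state limitation noted above."""
    return beta / gamma * s_fraction

# With half the population still susceptible and R0 = beta/gamma = 3,
# R_t is 1.5 regardless of any imminent change in containment policy
r = effective_reproduction_number(0.3, 0.1, 0.5)
```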
A Systematic Review: Classification of Lung Diseases from Chest X-Ray Images Using Deep Learning Algorithms
SN Computer Science - 2024
Aya Hage Chehade, Nassib Abdallah, Jean-Marie Marion, Mathieu Hatt, Mohamad Oueidat, Pierre Chauvet
Background: As the most common examination tool, chest X-ray (CXR) is crucial in the medical field for disease diagnosis. Thus, the classification of chest diseases based on chest X-rays has gained significant attention from researchers. In recent years, deep learning methods have emerged as powerful techniques in medical imaging. Purpose: This survey provides a comprehensive review of the most recent publications on lung disease classification from chest X-ray images using deep learning algorithms. Methods: This research presents several common chest radiography datasets and briefly introduces the general image preprocessing procedures applied to chest X-ray images. Then, the classification of specific and multiple lung diseases is described, focusing on the methods and datasets used in the selected studies, the evaluation measures, and the results. In addition, the problems and future directions of lung disease classification are discussed to provide an important research base for future researchers. Results: One hundred ten articles published from 2016 to 2023 were reviewed and summarized, confirming that this particular research area is very important and has great potential for future research.
An Efficient Human Computer Interaction through Hand Gesture Using Deep Convolutional Neural Network
SN Computer Science - Volume 1 - Pages 1-9 - 2020
Md. Milon Islam, Md. Repon Islam, Md. Saiful Islam
This paper focuses on achieving effective human–computer interaction using only a webcam, by continuously locating or tracking and recognizing the hand region. We detect the region of interest (ROI) in the captured image and classify hand gestures for specific tasks. First, background subtraction is performed based on the main frame captured by the webcam, some preprocessing is done, and then YCrCb skin segmentation is applied to the RGB-subtracted image. The ROI is detected using a Haar cascade classifier for hand palm detection. Next, the kernelized correlation filters tracking algorithm is used to track the ROI while avoiding noise or background influences, and the median-flow tracking algorithm is used for depth tracking. The ROI is converted to a binary channel (black and white) and resized to 54 × 54. Gesture recognition is then done using a 2D convolutional neural network (CNN) by feeding the preprocessed ROI into the architecture. Two predictions are made, one based on the skin-segmented frame and one on the image-dilated frame, and the gesture is recognized from the maximum of those two predictions. The tracking and recognition process continues as long as the ROI is present in the frames. Finally, after validation, the proposed system obtained a recognition rate of 98.44%, which makes it usable for practical, real-time applications.
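The ROI preprocessing described above (binarization plus resizing to 54 × 54) can be sketched as follows; the fixed threshold and nearest-neighbour resizing are simplifications, not the paper's exact pipeline:

```python
import numpy as np

def preprocess_roi(roi_gray, size=54, threshold=127):
    """Binarize a grayscale hand ROI and resize it to size x size by
    nearest-neighbour sampling (a minimal stand-in for the paper's steps)."""
    binary = (roi_gray > threshold).astype(np.float32)  # black/white channel
    rows = np.arange(size) * roi_gray.shape[0] // size  # source row per output row
    cols = np.arange(size) * roi_gray.shape[1] // size  # source col per output col
    return binary[np.ix_(rows, cols)]

# Hypothetical 120 x 160 grayscale ROI crop
roi = np.random.default_rng(0).integers(0, 256, (120, 160)).astype(np.uint8)
x = preprocess_roi(roi)  # 54 x 54 binary input ready for a CNN
```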
Application of Optimal Control of Infectious Diseases in a Model-Free Scenario
SN Computer Science - Volume 2 - Pages 1-9 - 2021
Erivelton G. Nepomuceno, Márcia L. C. Peixoto, Márcio J. Lacerda, Andriana S. L. O. Campanharo, Ricardo H. C. Takahashi, Luis A. Aguirre
Optimal control for infectious diseases has received increasing attention over the past few decades. In general, a combination of cost state variables and control effort has been applied as the cost index, and many important results have been reported. Nevertheless, the interpretation of the optimal control law for an epidemic system seems to have received less attention. In this paper, we apply Pontryagin’s maximum principle to develop an optimal control law that minimizes the number of infected individuals and the vaccination rate. We adopt the compartmental SIR model to test our technique. We show that the proposed control law can give some insight into developing a control strategy in a model-free scenario. Numerical examples show a reduction of 50% in the number of infected individuals compared with constant vaccination. Prior knowledge of the number of susceptible, infected, and recovered individuals, required to formulate and solve the optimal control problem, is not always available. In a model-free scenario, a strategy based on an analytic function is proposed, where prior knowledge of the scenario is not necessary. This insight can also be useful after the development of a vaccine for COVID-19, since it shows that fast and broad vaccine coverage worldwide can minimize the number of infected individuals, and consequently the number of deaths. The considered approach is capable of eradicating the disease faster than a constant-vaccination control method.
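The comparison against constant vaccination can be illustrated with a minimal forward-Euler SIR simulation; the rates below are assumed for illustration and are not taken from the paper:

```python
def simulate_sir(beta=0.3, gamma=0.1, v=0.0, days=365, s0=0.99, i0=0.01):
    """Forward-Euler SIR simulation (one-day steps, normalized population)
    with a constant vaccination rate v moving susceptibles directly to the
    recovered class. Returns the peak infected fraction."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i   # new infections
        new_rec = gamma * i      # recoveries
        new_vac = v * s          # vaccinations
        s += -new_inf - new_vac
        i += new_inf - new_rec
        r += new_rec + new_vac
        peak = max(peak, i)
    return peak

# Constant vaccination lowers the epidemic peak relative to no control
print(simulate_sir(v=0.02) < simulate_sir(v=0.0))
```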
Segmentation and Feature Extraction in Lung CT Images with Deep Learning Model Architecture
SN Computer Science - Volume 4 - Pages 1-14 - 2023
R. Indumathi, R. Vasuki
Lung cancer has recently been observed to be among the deadliest diseases worldwide, with a high mortality rate. The survival rate for lung cancer is minimal due to the difficulty of detecting the cancer in its early stages. Various screening techniques are available, such as X-ray, CT, and sputum cytology; here, CT images are considered for identification of the lung tumor. Computed tomography has been widely exploited for various clinical applications. Early detection and treatment of lung tumors can help improve the survival rate, and CT is the best modality for imaging lung tumors. In many cases, by the time nodules are identified, the cancer may be either too advanced or too large to be effectively cured. Physical characteristics of the nodules, such as size, tumor type, and type of borders, are very significant in their examination. Early detection of lung cancer is therefore of significant value for diagnosis and treatment. Machine learning classification can benefit greatly from the wealth of research on the use of image processing for detecting lung cancer. In this paper, an effective classification model of significant value for early diagnosis is developed. Segmentation in CT images is performed with marker-controlled segmentation with likelihood estimation between the features. The proposed model, Markov likelihood grasshopper classification (MLGC), is utilized for the classification of nodules in CT images. The MLGC model estimates features and computes the likelihood distance between them. With the estimated features, the grasshopper optimization algorithm (GOA) is employed to optimize the features. The optimized features are applied to a Boltzmann machine to derive the classification results. The MLGC model estimates the hyperparameters for feature-set selection to derive the classification results. The simulation results show that the proposed MLGC model achieves a higher accuracy of 99.5% compared with existing models: AlexNet at 96.35%, GoogleNet at 93.45%, and VGG-16 at 92.56%.
Optimizing Deep Learning Networks for Edge Devices with Skin Disease and Corn Leaf Disease Dataset Use Cases [Translated by AI]
SN Computer Science - Volume 4 - Pages 1-13 - 2023
B. S. Sharmila, H. S. Santhosh, S. Parameshwara, M. S. Swamy, Wahid Hussain Baig, S. V. Nanditha
Edge computing offers promising solutions to challenges related to latency, connectivity, scalability, cost, and privacy. However, the resource requirements of deep learning networks continue to pose difficulties for edge devices. Artificial intelligence (AI)-based applications in agriculture and healthcare demand models with large network sizes involving many floating-point operations. This study aims to address the constrained-resource issues of edge devices through neural network optimization. The strategies used to perform neural network optimization include pruning, weight clustering, and quantization. For deep learning models, these collaborative optimization techniques help reduce memory size and usage. To illustrate our work, we used the Corn Leaf Disease and Skin Disease datasets and performed the necessary image preprocessing steps. A convolutional neural network (CNN) algorithm was used to train the model on the preprocessed datasets. For Corn Leaf Disease, the optimized CNN model used 5.1 MB of memory with a training accuracy of 81.92%, compared with the unoptimized trained model's 66 MB and training accuracy of 83.38%. This research also optimized transfer learning models such as ResNet and MobileNet, since accuracy is important, and found that MobileNet not only gave good accuracy results but also performed efficiently in terms of memory compared with CNN and ResNet. To verify the reliability of the trained models in real time, the GradCAM algorithm was used in addition to accuracy. Using a memory profiling tool, we evaluated the performance of the optimized model, and inference was carried out on a Raspberry Pi edge device with an ARM processor.
#edge computing #neural network optimization #deep learning #artificial intelligence #corn leaf disease #skin disease #convolutional neural network #ResNet #MobileNet
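Of the three optimization strategies mentioned (pruning, weight clustering, quantization), quantization is the simplest to sketch; the following is a generic 8-bit affine post-training quantizer, not the authors' exact procedure:

```python
import numpy as np

def quantize_weights(w, bits=8):
    """Affine (asymmetric) post-training quantization of a float weight
    tensor to bits-bit unsigned integers: one of the size-reduction
    steps that can be combined with pruning and weight clustering."""
    qmax = 2 ** bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)  # 4x smaller than float32
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the quantized tensor."""
    return q.astype(np.float32) * scale + lo

# Hypothetical float32 weight matrix
w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
q, scale, lo = quantize_weights(w)
err = float(np.abs(dequantize(q, scale, lo) - w).max())  # at most one quantization step
```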
JPPF: Multi-task Fusion for Consistent Panoptic-Part Segmentation
SN Computer Science - Volume 5 - Pages 1-16 - 2024
Shishir Muralidhara, Sravan Kumar Jagadeesh, René Schuster, Didier Stricker
Part-aware panoptic segmentation is a computer vision problem that aims to provide a semantic understanding of the scene at multiple levels of granularity. More precisely, semantic areas, object instances, and semantic parts are predicted simultaneously. In this paper, we present our joint panoptic part fusion (JPPF), which effectively combines the three individual segmentations to obtain a panoptic-part segmentation. Two aspects are of utmost importance for this: first, a unified model for the three problems is desired that allows for mutually improved and consistent representation learning; second, the combination should be balanced so that it gives equal importance to all individual results during fusion. Our proposed JPPF is parameter-free and dynamically balances its inputs. The method is evaluated and compared on the Cityscapes Panoptic Parts (CPP) and Pascal Panoptic Parts (PPP) datasets in terms of PartPQ and Part-Whole Quality (PWQ). In extensive experiments, we verify the importance of our fair fusion, highlight its most significant impact for areas that can be further segmented into parts, and demonstrate the generalization capabilities of our design, without fine-tuning, on five additional datasets.
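A parameter-free fusion that balances its inputs can be illustrated by averaging the per-branch probability maps with equal weight; this is an illustrative reading of such a fusion, not the authors' exact JPPF formulation:

```python
import numpy as np

def fuse_predictions(semantic, instance, part):
    """Convert each head's per-pixel logits to probabilities and average
    them with equal weight, so no single branch dominates the combined
    prediction. An illustrative sketch of balanced, parameter-free fusion."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    fused = (softmax(semantic) + softmax(instance) + softmax(part)) / 3.0
    return fused.argmax(axis=-1)  # per-pixel class decision

# Hypothetical 2x2 image with 3 classes and three prediction heads
h, w, c = 2, 2, 3
rng = np.random.default_rng(0)
labels = fuse_predictions(rng.normal(size=(h, w, c)),
                          rng.normal(size=(h, w, c)),
                          rng.normal(size=(h, w, c)))
```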
Total: 1,760