ABSTRACT
Human Activity Recognition (HAR) has gained significant attention due to its broad range of applications, such as healthcare, industrial work safety, activity assistance, and driver monitoring. Most prior HAR systems recognize human activities from recorded sensor data, i.e., past information; HAR works that use forecasted sensor data to predict future human activities remain rare. Human Activity Prediction (HAP) can benefit multiple applications, such as fall detection or exercise routines, by helping to prevent injuries. This work presents a novel HAP system based on forecasted activity data from Inertial Measurement Units (IMUs). Our HAP system consists of a deep learning forecaster of IMU activity signals and a deep learning classifier that recognizes future activities. The forecaster is based on a Sequence-to-Sequence structure with attention and positional encoding layers. A pre-trained deep learning Bi-LSTM classifier then classifies future activities from the forecasted IMU data. We tested our HAP system on five daily activities with two tri-axial IMU sensors. The forecasted signals show an average correlation of 91.6% with the actual measured signals of the five activities, and the proposed HAP system achieves an average accuracy of 97.96% in predicting future activities.
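A minimal sketch of the described forecast-then-classify pipeline: an attention-based Sequence-to-Sequence forecaster of IMU windows feeding a Bi-LSTM activity classifier. The channel count (6, for two tri-axial sensors), window lengths, hidden sizes, and the simplified one-shot decoding are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=512):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(10000.0)) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                      # x: (batch, time, d_model)
        return x + self.pe[: x.size(1)]

class Seq2SeqForecaster(nn.Module):
    """Encode a past IMU window and decode a future window in one shot (simplified)."""
    def __init__(self, channels=6, d_model=64, horizon=50):
        super().__init__()
        self.inp = nn.Linear(channels, d_model)
        self.pos = PositionalEncoding(d_model)
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.decoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, channels)
        self.horizon = horizon

    def forward(self, past):                   # past: (batch, T_past, channels)
        h = self.pos(self.inp(past))
        enc, _ = self.encoder(h)
        # Attend the last encoder state over the full encoding and repeat it
        # over the forecast horizon as the decoder input (simplified decoding).
        query = enc[:, -1:, :]
        ctx, _ = self.attn(query, enc, enc)
        dec, _ = self.decoder(ctx.repeat(1, self.horizon, 1))
        return self.out(dec)                   # (batch, horizon, channels)

class BiLSTMClassifier(nn.Module):
    def __init__(self, channels=6, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])             # class logits for the forecast window

past = torch.randn(8, 100, 6)                  # 8 windows of 100 past IMU samples
future = Seq2SeqForecaster()(past)             # forecasted IMU signals
logits = BiLSTMClassifier()(future)            # predicted future activity
```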
Subjects
Human Activities, Neural Networks (Computer), Humans, Exercise, Accidental Falls
ABSTRACT
One of the effective ways to minimize the spread of COVID-19 infection is to diagnose it as early as possible, before the onset of symptoms. In addition, if the infection can be diagnosed simply using a smartwatch, the effectiveness of preventing its spread will be greatly increased. In this study, we aimed to develop a deep learning model that diagnoses COVID-19 before the onset of symptoms using heart rate (HR) data obtained from a smartwatch. For the diagnosis, we proposed a transformer model that learns presymptomatic HR variability patterns by tracking relationships in sequential HR data. In the cross-validation (CV) results from COVID-19 unvaccinated patients, our proposed deep learning model exhibited high accuracy metrics: sensitivity of 84.38%, specificity of 85.25%, accuracy of 84.85%, balanced accuracy of 84.81%, and area under the receiver operating characteristic curve (AUROC) of 0.8778. Furthermore, we validated our model using external datasets including healthy subjects, COVID-19 patients, and vaccinated patients. In the external healthy subject group, our model achieved a high specificity of 77.80%. In the external COVID-19 unvaccinated patient group, our model provided accuracy metrics similar to those from the CV: balanced accuracy of 87.23% and AUROC of 0.8897. In the COVID-19 vaccinated patients, the balanced accuracy and AUROC dropped to 66.67% and 0.8072, respectively. The first finding of this study is that our proposed deep learning model can simply and accurately diagnose COVID-19 patients before the onset of symptoms using HR data obtained from a smartwatch. The second finding is that a model trained on unvaccinated patients may provide less accurate diagnoses for vaccinated patients. The last finding is that a model trained on data from a certain period of time may show degraded diagnostic performance as the virus continues to mutate.
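A minimal sketch of a Transformer encoder classifying a smartwatch heart-rate sequence as presymptomatic-COVID versus healthy. The sequence length, model width, sinusoidal positional encoding, and mean pooling are illustrative assumptions rather than the authors' exact architecture.

```python
import math
import torch
import torch.nn as nn

class HRTransformer(nn.Module):
    def __init__(self, seq_len=128, d_model=32, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                 # scalar HR sample -> d_model
        pos = torch.arange(seq_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(seq_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=64,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, hr):                                 # hr: (batch, seq_len)
        x = self.embed(hr.unsqueeze(-1)) + self.pe         # add positional encoding
        x = self.encoder(x)                                # self-attention over time
        return self.head(x.mean(dim=1))                    # mean-pool, then classify

hr_window = torch.randn(4, 128)        # 4 normalized HR sequences of 128 samples
logits = HRTransformer()(hr_window)    # presymptomatic-COVID vs. healthy logits
```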
Subjects
COVID-19, Deep Learning, Humans, Heart Rate, ROC Curve, X-Ray Computed Tomography/methods
ABSTRACT
Wearable exoskeleton robots have become a promising technology for supporting human motion in multiple tasks. Real-time activity recognition provides useful information for enhancing the robot's control assistance in daily tasks. This work implements a real-time activity recognition system based on the activity signals of an inertial measurement unit (IMU) and a pair of rotary encoders integrated into the exoskeleton robot. Five deep learning models were trained and evaluated for activity recognition, and a subset of optimized models was transferred to an edge device for real-time evaluation in a continuous-action environment covering eight common human tasks: stand, bend, crouch, walk, sit-down, sit-up, and ascend and descend stairs. These eight activities of the robot wearer are recognized with an average accuracy of 97.35% in real-time tests, with an inference time under 10 ms and an overall latency of 0.506 s per recognition on the selected edge device.
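A hedged sketch of the real-time recognition loop on an edge device: a sliding window over the IMU and rotary-encoder channels is classified at a fixed hop and the per-window inference time is measured. The sensor reader, window/hop sizes, channel count, and the tiny stand-in model are illustrative placeholders, not the deployed system.

```python
import collections
import time
import torch

ACTIVITIES = ["stand", "bend", "crouch", "walk", "sit-down", "sit-up",
              "ascend-stairs", "descend-stairs"]
WINDOW, HOP, N_CHANNELS = 100, 25, 8          # e.g., 6 IMU axes + 2 encoder angles

# Stand-in for the optimized model exported to the edge device.
model = torch.jit.script(torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(WINDOW * N_CHANNELS, len(ACTIVITIES))))
model.eval()

def read_imu_and_encoders():
    """Hypothetical driver call returning one sample of all channels."""
    return [0.0] * N_CHANNELS

buffer = collections.deque(maxlen=WINDOW)
new_samples = 0
for _ in range(1000):                          # acquisition loop (bounded for the sketch)
    buffer.append(read_imu_and_encoders())
    new_samples += 1
    if len(buffer) == WINDOW and new_samples >= HOP:
        new_samples = 0
        x = torch.tensor([list(buffer)], dtype=torch.float32)   # (1, WINDOW, CHANNELS)
        t0 = time.perf_counter()
        with torch.no_grad():
            pred = int(model(x).argmax(dim=1))
        latency_ms = (time.perf_counter() - t0) * 1000.0        # per-window inference time
        print(ACTIVITIES[pred], f"{latency_ms:.1f} ms")
```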
Assuntos
Aprendizado Profundo , Exoesqueleto Energizado , Robótica , Dispositivos Eletrônicos Vestíveis , Humanos , Atividades HumanasRESUMO
Blood cells carry important information that can be used to represent a person's current state of health. Identifying different types of blood cells in a timely and precise manner is essential to reducing the infection risks that people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework, built on transfer learning with a convolutional neural network, proposed to rapidly and automatically identify blood cells in an eight-class identification scenario: Basophil, Eosinophil, Erythroblast, Immature Granulocytes, Lymphocyte, Monocyte, Neutrophil, and Platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of the proposed BCNet was directly compared on the same dataset with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Among the different optimizers, the BCNet framework demonstrated better classification performance with the ADAM and RMSP optimizers. The best evaluation performance was achieved using the RMSP optimizer, with 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved the prediction accuracy by 1.94%, 3.33%, and 1.65% using the ADAM, RMSP, and SGD optimizers, respectively. The proposed BCNet model outperformed the AI models DenseNet, ResNet, Inception, and MobileNet in terms of the testing time of a single blood cell image by 10.98, 4.26, 2.03, and 0.21 msec, respectively. In comparison with the most recent deep learning models, the BCNet model generates encouraging outcomes. Such a recognition rate, which improves blood cell detection performance, is valuable for the advancement of healthcare facilities.
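A hedged sketch of the transfer-learning setup described above: a pretrained backbone (a generic stand-in, since BCNet's exact backbone is not specified here) has its head replaced for the eight blood-cell classes, and one of the three optimizers is selected. The backbone choice and learning rates are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["Basophil", "Eosinophil", "Erythroblast", "Immature Granulocytes",
           "Lymphocyte", "Monocyte", "Neutrophil", "Platelet"]

def build_model(n_classes=len(CLASSES)):
    # weights=None here for a self-contained sketch; in practice ImageNet-pretrained
    # weights would be loaded to enable transfer learning.
    net = models.mobilenet_v2(weights=None)
    net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, n_classes)
    return net

def build_optimizer(name, params, lr=1e-4):
    if name == "adam":
        return torch.optim.Adam(params, lr=lr)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=lr)
    return torch.optim.SGD(params, lr=lr, momentum=0.9)

model = build_model()
optimizer = build_optimizer("rmsprop", model.parameters())   # best-reported optimizer
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)          # dummy blood-cell image batch
labels = torch.tensor([0, 6])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```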
ABSTRACT
A wearable silent speech interface (SSI) is a promising platform that enables verbal communication without vocalization. The most widely studied methodology for SSI focuses on surface electromyography (sEMG). However, sEMG suffers from low scalability because of signal quality-related issues, including the signal-to-noise ratio and interelectrode interference. Hence, we present a novel SSI that utilizes crystalline-silicon-based strain sensors combined with a 3D convolutional deep learning algorithm. Two perpendicularly placed strain gauges with minimized cell dimensions (<0.1 mm2) could effectively capture biaxial strain information with high reliability. We attached four strain sensors near the subject's mouth and collected strain data for an unprecedentedly large wordset (100 words), which our SSI can classify at a high accuracy rate (87.53%). Several analysis methods were used to verify the system's reliability, along with a performance comparison against another SSI using sEMG electrodes of the same dimensions, which exhibited a relatively low accuracy rate (42.60%).
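A hedged sketch of a 3D convolutional word classifier in the spirit of the algorithm mentioned above. How the four biaxial strain channels are arranged into a 3D volume is not specified here, so the (time, 2x4 sensor-grid) layout below is purely illustrative.

```python
import torch
import torch.nn as nn

class Strain3DCNN(nn.Module):
    def __init__(self, n_words=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(5, 2, 2), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),                 # downsample the time axis only
            nn.Conv3d(16, 32, kernel_size=(5, 2, 2), padding=(2, 0, 0)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_words)

    def forward(self, x):                            # x: (batch, 1, time, 2, 4)
        h = self.features(x).flatten(1)
        return self.classifier(h)

strain = torch.randn(8, 1, 64, 2, 4)                 # 64 time frames, 2x4 sensor grid
word_logits = Strain3DCNN()(strain)                  # scores over 100 candidate words
```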
Subjects
Deep Learning, Speech, Algorithms, Electromyography/methods, Reproducibility of Results, Silicon
ABSTRACT
The layered Ni-rich NiCoMn (NCM)-based cathode active material Li[NixCo(1-x)/2Mn(1-x)/2]O2 (x ≥ 0.6) has the advantages of high energy density and price competitiveness over LiCoO2-based materials. Additionally, NCM benefits from a reversible discharge capacity that increases with Ni content; however, stable electrochemical performance has not been readily achieved because of the cation mixing that occurs during its synthesis. In this study, various layer-structured Li1.0[Ni0.8Co0.1Mn0.1]O2 materials were synthesized, and their electrochemical performances were investigated. A NiCoMnCO3 precursor prepared by carbonate co-precipitation with Li2CO3 as the lithium source, a sintering temperature of 850 °C, a sintering time of 25 h, and a metal-to-Li molar ratio of 1.00-1.05 were found to be the optimal conditions for the preparation of Li1.0[Ni0.8Co0.1Mn0.1]O2. The material exhibited a discharge capacity of 160 mAh g-1 and a capacity recovery rate of 95.56% (from a 5.0 to 0.1 C-rate).
ABSTRACT
Deep learning-based emotion recognition using EEG has received increasing attention in recent years. Existing studies on emotion recognition show great variability in their methods, including the choice of deep learning approach and the type of input features. Although deep learning models for EEG-based emotion recognition can deliver superior accuracy, this comes at the cost of high computational complexity. Here, we propose a novel 3D convolutional neural network with a channel bottleneck module (CNN-BN) for EEG-based emotion recognition, with the aim of accelerating the CNN computation without a significant loss in classification accuracy. To this end, we constructed a 3D spatiotemporal representation of EEG signals as the input of our proposed model. Our CNN-BN model extracts spatiotemporal EEG features that effectively exploit the spatial and temporal information in EEG. We evaluated the performance of the CNN-BN model on the valence and arousal classification tasks. Our proposed CNN-BN model achieved an average accuracy of 99.1% and 99.5% for valence and arousal, respectively, on the DEAP dataset, while significantly reducing the number of parameters by 93.08% and FLOPs by 94.94%. The CNN-BN model, with fewer parameters based on the 3D EEG spatiotemporal representation, outperforms state-of-the-art models. With its better parameter efficiency, the proposed CNN-BN model has excellent potential for accelerating CNN-based emotion recognition without losing classification performance.
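A minimal sketch of the channel-bottleneck idea: 1x1x1 convolutions squeeze and re-expand the channel dimension around the spatiotemporal 3x3x3 convolution, cutting parameters relative to a plain 3x3x3 block of the same input/output shape. The channel sizes are illustrative; the paper's exact CNN-BN configuration is not reproduced here.

```python
import torch
import torch.nn as nn

def bottleneck_block(in_ch, out_ch, squeeze):
    return nn.Sequential(
        nn.Conv3d(in_ch, squeeze, kernel_size=1),              # channel squeeze
        nn.ReLU(),
        nn.Conv3d(squeeze, squeeze, kernel_size=3, padding=1), # cheap 3x3x3 conv
        nn.ReLU(),
        nn.Conv3d(squeeze, out_ch, kernel_size=1),             # channel expand
        nn.ReLU(),
    )

def plain_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU())

def n_params(module):
    return sum(p.numel() for p in module.parameters())

bn = bottleneck_block(64, 64, squeeze=16)
plain = plain_block(64, 64)
print(n_params(bn), n_params(plain))   # the bottleneck block has far fewer weights

x = torch.randn(2, 64, 8, 9, 9)        # (batch, channels, time, height, width) EEG volume
print(bn(x).shape, plain(x).shape)     # identical output shapes: (2, 64, 8, 9, 9)
```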
Subjects
Electroencephalography, Emotions, Arousal, Electroencephalography/methods, Neural Networks (Computer)
ABSTRACT
The Korea Atomic Energy Research Institute has recently proposed and developed a novel cesium-free negative hydrogen/deuterium ion source system based on two pulsed plasma sources for fusion and particle accelerator applications. The main feature of this ion source system is the use of both magnetic filters and plasma pulsing (also called a temporal filter). The system operates with two alternating pulsing sequences applied to the respective plasma sources, thereby switching the plasmas into the after-glow state in an alternating manner. This study investigates the temporal behavior of deuterium negative ions in the system in a qualitative way by conducting time-resolved measurements of the laser photodetachment current, which is commensurate with the negative ion density. In preliminary experiments, the current in the initial after-glow state remains higher than in the active-glow state, which is identical to a steady-state continuous-wave plasma, and the ratio reaches a maximum of about three. This indicates that pulsing yields highly efficient negative ion volume formation. Furthermore, the time during which the current remains high can be prolonged (or modulated) with the alternate dual pulsing, which is not possible with conventional single pulsing. These results suggest that the multi-pulsed ion source system may offer a continuous supply of negative ions at high densities and consequently become an alternative to cesium-seeded ion sources.
ABSTRACT
INTRODUCTION: Intramedullary nailing (IMN), a common method for treating subtrochanteric fractures, is performed as cephalomedullary (CMN) or reconstruction (RCN) nailing. Numerous studies have reported the effectiveness of CMN, which requires a shorter surgery time and provides stronger fixation strength with blade-type devices. However, the radiographic and clinical outcomes of CMN and RCN in elderly patients aged ≥65 years have not yet been compared. This study aimed to investigate whether CMN offers superior outcomes over RCN in the treatment of subtrochanteric fractures in elderly patients. MATERIALS AND METHODS: This retrospective study included 60 elderly patients (17 men and 43 women; mean age, 74.9 years) diagnosed with subtrochanteric fractures and treated with IMN using a helical-blade CMN (CMN group: 30 patients) or RCN (RCN group: 30 patients) between January 2013 and December 2018, with at least 1 year of follow-up. Radiologic outcomes were evaluated based on the postoperative alignment and the achievement and timing of bony union at the final follow-up. Clinical outcomes were evaluated using the Merle d'Aubigné-Postel score. Radiologic and clinical outcomes in the two groups were compared and analyzed, and the occurrence of complications was examined. RESULTS: The difference in malalignment between the two groups was not significant; however, the RCN group achieved more effective reduction. At the final follow-up, bony union was achieved within an average of 18.9 weeks in 28 patients in the CMN group and within an average of 21.6 weeks in 27 patients in the RCN group. Twenty patients in the CMN group and 26 in the RCN group showed good or better results according to the Merle d'Aubigné-Postel score. No significant differences were found for any of the parameters. CONCLUSIONS: In the treatment of difficult subtrochanteric fractures in elderly patients, RCN can provide excellent reduction and strong fixation similar to CMN and can result in outstanding clinical and radiologic outcomes.
Subjects
Intramedullary Fracture Fixation, Hip Fractures, Aged, Bone Nails, Female, Intramedullary Fracture Fixation/methods, Fracture Healing, Hands, Hip Fractures/diagnostic imaging, Hip Fractures/etiology, Hip Fractures/surgery, Humans, Male, Retrospective Studies, Treatment Outcome
ABSTRACT
Anthropomorphic robotic hands are designed to attain dexterous movement and flexibility much like human hands. Achieving human-like object manipulation remains a challenge, especially because of the control complexity of an anthropomorphic robotic hand with a high number of degrees of freedom. In this work, we propose a deep reinforcement learning (DRL) approach that trains a policy in a synergy space to generate natural grasping and relocation of variously shaped objects with an anthropomorphic robotic hand. The synergy space is created using a continuous normalizing flow network with point clouds of haptic areas, representing natural hand poses obtained from human grasping demonstrations. The DRL policy accesses this synergistic representation and derives natural hand poses through a deep regressor for object grasping and relocation tasks. Our proposed synergy-based DRL achieves an average success rate of 88.38% on the object manipulation tasks, whereas standard DRL without the synergy space achieves only 50.66%. Qualitative results show that the proposed synergy-based DRL policy produces human-like finger placements over the surface of each object, including an apple, banana, flashlight, camera, lightbulb, and hammer.
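A hedged sketch of acting through a synergy space: the RL policy outputs a low-dimensional synergy vector, and a pre-trained decoder (a simple stand-in here for the normalizing-flow model and deep regressor described above) maps it to full hand joint targets. The dimensions (24 joints, 5 synergies, 60-D observation) are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_JOINTS, N_SYNERGIES, OBS_DIM = 24, 5, 60

class SynergyDecoder(nn.Module):
    """Maps a compact synergy code to a full, natural-looking hand pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_SYNERGIES, 64), nn.Tanh(),
                                 nn.Linear(64, N_JOINTS))
    def forward(self, z):
        return self.net(z)

class SynergyPolicy(nn.Module):
    """Policy head: observation -> synergy code (the low-dimensional action space)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, N_SYNERGIES), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

decoder = SynergyDecoder()           # would be pre-trained on human grasp demonstrations
policy = SynergyPolicy()             # trained with DRL over the 5-D synergy actions

obs = torch.randn(1, OBS_DIM)        # object + hand state observation
joint_targets = decoder(policy(obs)) # 24 joint commands sent to the robotic hand
```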
Subjects
Robotic Surgical Procedures, Robotics, Fingers, Hands, Hand Strength, Humans
ABSTRACT
BACKGROUND: Severely displaced calcaneal fractures can result in considerable derangement of the calcaneal morphology and may be accompanied by soft tissue compromise. Delayed operative restoration of the calcaneal morphology may result in acute retensioning of the damaged soft tissue with associated wound-related complications. In this study, we describe a staged treatment of displaced intra-articular calcaneal fractures that uses temporary transarticular Kirschner wire (K-wire) fixation and staged conversion to definitive fixation. METHODS: We identified all patients who were treated at our institution for calcaneal fractures between 2015 and 2019. A total of 17 patients with 20 calcaneal fractures were selectively treated with 2-stage management. Temporary transarticular K-wire fixation was performed 24 hours after the injury to restore the calcaneal morphology and the surrounding soft tissue. After the soft tissue was considered safe, delayed open reduction and internal fixation was performed. The time to definitive surgery, radiographic alignment, wound complications, time to radiographic union, and hindfoot American Orthopaedic Foot & Ankle Society (AOFAS) scores were recorded. RESULTS: The average follow-up period was 17 months (range, 12-43). The average Böhler angle increased from a mean of -22 degrees (range, -109 to 25) to 25 degrees (range, 0 to 47) after temporary transarticular K-wire fixation. The mean time from temporary pinning to conversion to definitive internal fixation was 20 (range, 10-32) days. There were no immediate postoperative complications. The average time to radiographic union was 13.7 (range, 10-16) weeks. The mean AOFAS score was 87 (range, 55-100). No infections or wound complications were reported during the follow-up period. CONCLUSION: Temporary transarticular pinning for staged calcaneal fracture treatment is safe and effective in restoring the calcaneal morphology. This novel and relatively simple method may facilitate delayed operation and decrease wound-related complications. LEVEL OF EVIDENCE: Level IV, retrospective case series.
Subjects
Calcaneus, Foot Injuries, Bone Fractures, Intra-Articular Fractures, Calcaneus/surgery, Internal Fracture Fixation, Bone Fractures/diagnostic imaging, Bone Fractures/surgery, Humans, Intra-Articular Fractures/diagnostic imaging, Intra-Articular Fractures/surgery, Retrospective Studies, Treatment Outcome
ABSTRACT
Recording human gestures with a wearable sensor produces valuable information for implementing gesture-based control or healthcare services. Such a sensor must be small and easy to wear. Advances in miniaturized sensors and materials research have produced patchable inertial measurement units (IMUs). In this paper, a hand gesture recognition system based on recurrent neural networks (RNNs) and a single patchable six-axis IMU attached at the wrist is presented. The IMU comprises IC-based electronic components on a stretchable, adhesive substrate with serpentine-structured interconnections. The proposed patchable IMU with a soft form factor can be worn in close contact with the human body, comfortably adapting to skin deformations. Thus, signal distortion (i.e., motion artifacts) produced by vibration during motion is minimized. In addition, our patchable IMU has a wireless communication (i.e., Bluetooth) module to continuously send the sensed signals to any processing device. Our hand gesture recognition system was evaluated by attaching the proposed patchable six-axis IMU to the right wrist of five people and recognizing three hand gestures using two RNN-based models. The RNN-based models were trained and validated using a public database. The preliminary results show that our proposed patchable IMU has the potential to continuously monitor people's motions in remote settings for applications in mobile health, human-computer interaction, and gesture-based control.
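A minimal sketch of one possible RNN-based gesture classifier: a GRU over windows of the six IMU channels (3-axis accelerometer plus 3-axis gyroscope) followed by a softmax over three gestures. The window length, hidden size, and choice of GRU are illustrative; the two exact RNN variants evaluated above are not specified here.

```python
import torch
import torch.nn as nn

class GestureGRU(nn.Module):
    def __init__(self, channels=6, hidden=32, n_gestures=3):
        super().__init__()
        self.gru = nn.GRU(channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_gestures)

    def forward(self, x):              # x: (batch, time, 6)
        out, _ = self.gru(x)
        return self.fc(out[:, -1])     # logits for the three gestures

window = torch.randn(4, 128, 6)        # four 128-sample windows from the wrist IMU
gesture_probs = torch.softmax(GestureGRU()(window), dim=1)
print(gesture_probs.shape)             # (4, 3)
```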
Assuntos
Gestos , Redes Neurais de Computação , Dispositivos Eletrônicos Vestíveis , Mãos , Humanos , Movimento (Física) , Tecnologia sem Fio , Punho , Articulação do PunhoRESUMO
Quantitative tissue characteristics, which provide valuable diagnostic information, can be represented by magnetic resonance (MR) parameter maps using magnetic resonance imaging (MRI); however, a long scan time is necessary to acquire them, which prevents the application of quantitative MR parameter mapping to real clinical protocols. For fast MR parameter mapping, we propose a deep model-based MR parameter mapping network called DOPAMINE that combines a deep learning network with a model-based method to reconstruct MR parameter maps from undersampled multi-channel k-space data. DOPAMINE consists of two networks: 1) an MR parameter mapping network that uses a deep convolutional neural network (CNN) that estimates initial parameter maps from undersampled k-space data (CNN-based mapping), and 2) a reconstruction network that removes aliasing artifacts in the parameter maps with a deep CNN (CNN-based reconstruction) and an interleaved data consistency layer by an embedded MR model-based optimization procedure. We demonstrated the performance of DOPAMINE in brain T1 map reconstruction with a variable flip angle (VFA) model. To evaluate the performance of DOPAMINE, we compared it with conventional parallel imaging, low-rank based reconstruction, model-based reconstruction, and state-of-the-art deep-learning-based mapping methods for three different reduction factors (R = 3, 5, and 7) and two different sampling patterns (1D Cartesian and 2D Poisson-disk). Quantitative metrics indicated that DOPAMINE outperformed other methods in reconstructing T1 maps for all sampling patterns and reduction factors. DOPAMINE exhibited quantitatively and qualitatively superior performance to that of conventional methods in reconstructing MR parameter maps from undersampled multi-channel k-space data. The proposed method can thus reduce the scan time of quantitative MR parameter mapping that uses a VFA model.
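For reference, a small numerical sketch of the variable flip angle (VFA) signal model that the model-based component relies on: the standard SPGR relation S(a) = M0·sin(a)·(1 − E1)/(1 − E1·cos(a)) with E1 = exp(−TR/T1), together with the classic linearized (DESPOT1-style) T1 fit. TR, flip angles, and T1 are illustrative values, not the paper's acquisition settings, and this is not the DOPAMINE network itself.

```python
import numpy as np

TR = 0.005                                  # repetition time [s]
true_T1, M0 = 1.2, 1.0                      # tissue T1 [s] and proton density
alphas = np.deg2rad([2.0, 5.0, 10.0, 15.0]) # flip angles

E1 = np.exp(-TR / true_T1)
signal = M0 * np.sin(alphas) * (1 - E1) / (1 - E1 * np.cos(alphas))

# Linearized fit: S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1)
y = signal / np.sin(alphas)
x = signal / np.tan(alphas)
slope, intercept = np.polyfit(x, y, 1)
est_T1 = -TR / np.log(slope)
est_M0 = intercept / (1 - slope)

print(f"estimated T1 = {est_T1:.3f} s, M0 = {est_M0:.3f}")  # recovers 1.200 s, 1.000
```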
Subjects
Dopamine, Computer-Assisted Image Processing, Algorithms, Brain/diagnostic imaging, Humans, Magnetic Resonance Imaging, Magnetic Resonance Spectroscopy, Neural Networks (Computer)
ABSTRACT
BACKGROUND AND OBJECTIVE: Deep learning detection and classification from medical imagery are key components of computer-aided diagnosis (CAD) systems that efficiently support physicians in reaching an accurate diagnosis of breast lesions. METHODS: In this study, an integrated CAD system combining deep learning detection and classification is proposed to improve the diagnostic performance for breast lesions. First, a deep learning YOLO detector is adopted and evaluated for breast lesion detection from entire mammograms. Then, three deep learning classifiers, namely a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are modified and evaluated for breast lesion classification. The proposed deep learning system is evaluated over 5-fold cross-validation tests using two widely used databases of digital X-ray mammograms: DDSM and INbreast. RESULTS: The evaluation results of breast lesion detection show the capability of the YOLO detector to achieve overall detection accuracies of 99.17% and 97.27% and F1-scores of 99.28% and 98.02% for the DDSM and INbreast datasets, respectively. Meanwhile, the YOLO detector could process 71 frames per second (FPS) at test time for both the DDSM and INbreast datasets. Using the detected breast lesions, the classification models of CNN, ResNet-50, and InceptionResNet-V2 achieve promising average overall accuracies of 94.50%, 95.83%, and 97.50%, respectively, for the DDSM dataset and 88.74%, 92.55%, and 95.32%, respectively, for the INbreast dataset. CONCLUSION: The capability of the YOLO detector boosted the classification models to achieve a promising breast lesion diagnostic performance. Such prediction results should help to develop a feasible CAD system for practical breast cancer diagnosis.
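A hedged sketch of the detect-then-classify flow: boxes from a lesion detector are cropped out of the mammogram, resized, and passed to a CNN classifier. The `detect_lesions` function is a hypothetical stand-in for the trained YOLO detector, and an untrained ResNet-50 stands in for one of the three evaluated classifiers; box coordinates and image sizes are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

def detect_lesions(mammogram):
    """Hypothetical detector stand-in returning (x1, y1, x2, y2) boxes in pixels."""
    return [(100, 120, 228, 248)]

classifier = models.resnet50(weights=None)               # trained weights would be loaded here
classifier.fc = nn.Linear(classifier.fc.in_features, 2)  # benign vs. malignant
classifier.eval()

mammogram = torch.rand(1, 3, 1024, 1024)                 # dummy preprocessed mammogram
for (x1, y1, x2, y2) in detect_lesions(mammogram):
    crop = mammogram[:, :, y1:y2, x1:x2]                 # cut out the detected lesion
    patch = nn.functional.interpolate(crop, size=(224, 224), mode="bilinear",
                                      align_corners=False)
    with torch.no_grad():
        benign_vs_malignant = classifier(patch).softmax(dim=1)
    print(benign_vs_malignant)
```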
Subjects
Breast Neoplasms, Deep Learning, Breast Neoplasms/diagnostic imaging, Computers, Humans, Machine Learning, Mammography, Neural Networks (Computer), X-Rays
ABSTRACT
A test was performed to determine the efficacy of a novel multi-channel thermocouple temperature sensor employing an "N+1" array architecture for the in-situ detection of icing in cold climates. T-type thermoelements were used to fabricate a sensor with six independent temperature sensing points, capable of two-dimensional temperature mapping. The sensor was designed to detect the high latent heat of fusion of water (334 J/g), which is released to the environment during ice formation. The sensor was embedded onto a plywood board and an aluminium plate, respectively, using an epoxy resin. Three different ice accretion cases were considered, and ice accretion in all cases occurred on the surface of the resin layer. To analyse the temperature variation, the response during the first 20 s of each case was averaged across the three cases. A temperature increase of (1.0 ± 0.1) °C and (0.9 ± 0.1) °C was detected by the sensors 20 s after the onset of icing, attributed to the latent heat of fusion of water. The results indicate that the sensor design is well suited to cold-temperature applications and that detection of the latent heat of fusion could provide a rapid and robust means of icing detection.
ABSTRACT
This study developed a domain-transform framework comprising domain-transform manifold learning with an initial analytic transform to accelerate Cartesian magnetic resonance imaging (DOTA-MRI). The proposed method directly transforms undersampled Cartesian k-space data into a reconstructed image. In Cartesian undersampling, each k-space line is either fully sampled or entirely zeroed along the data-acquisition direction (i.e., the frequency-encoding direction or the x-direction), so a one-dimensional (1D) inverse Fourier transform (IFT) along the x-direction of the undersampled k-space does not induce any aliasing. To exploit this, the algorithm first applies an analytic 1D IFT along the x-direction to the undersampled Cartesian k-space input and subsequently transforms it into a reconstructed image using deep neural networks. The initial analytic transform (i.e., the 1D IFT) allows the fully connected layers of the neural network to learn a 1D global transform only in the phase-encoding direction (i.e., the y-direction) instead of a 2D transform. This drastically reduces the number of parameters to be learned from O(N²) to O(N) compared with the existing manifold learning algorithm (automated transform by manifold approximation, AUTOMAP). This enables DOTA-MRI to be applied to high-resolution MR datasets, which had previously proved difficult with AUTOMAP because of the enormous memory requirements involved. After the initial analytic transform, the manifold learning phase uses a symmetric network architecture comprising three types of layers: front-end convolutional layers, fully connected layers for the 1D global transform, and back-end convolutional layers. The front-end convolutional layers take the 1D IFT of the undersampled k-space (i.e., the undersampled data in the intermediate, or ky-x, domain) as input and perform data-domain restoration. The following fully connected layers learn the 1D global transform between the ky-x domain and the image domain (i.e., the y-x domain). Finally, the back-end convolutional layers reconstruct the final image by denoising in the image domain. DOTA-MRI exhibited superior performance over nine other existing algorithms, including state-of-the-art deep learning-based algorithms. The generality of the algorithm was demonstrated by experiments conducted under various sampling ratios, datasets, and noise levels.
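A small NumPy illustration of the observation exploited above: with Cartesian undersampling (whole ky lines kept or skipped), the 1D inverse FFT along the fully sampled x-direction can be applied analytically first, leaving only a 1D problem along y for the network. The toy phantom, matrix size, and sampling pattern are illustrative.

```python
import numpy as np

N = 128
image = np.zeros((N, N))
image[40:90, 30:100] = 1.0                                    # toy phantom, axes (y, x)
kspace = np.fft.fftshift(np.fft.fft2(image))

mask = np.zeros((N, 1))
mask[::3] = 1.0                                               # keep every 3rd ky line
undersampled = kspace * mask                                  # zero the skipped lines

# Analytic step: 1D IFT along the fully sampled x-direction (axis=1) only.
ky_x = np.fft.ifft(np.fft.ifftshift(undersampled, axes=1), axis=1)

# The remaining 1D transform along y (here a naive zero-filled IFT) is where the
# aliasing lives; this is the part the fully connected layers are trained to resolve.
aliased = np.fft.ifft(np.fft.ifftshift(ky_x, axes=0), axis=0)
print(ky_x.shape, aliased.shape)   # (128, 128) intermediate ky-x data and aliased image
```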
Subjects
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Algorithms, Fourier Analysis, Humans, Neural Networks (Computer)
ABSTRACT
BACKGROUND AND OBJECTIVE: Computer-automated diagnosis of various skin lesions from medical dermoscopy images remains a challenging task. METHODS: In this work, we propose an integrated diagnostic framework that combines a skin lesion boundary segmentation stage and a multiple skin lesion classification stage. First, we segment the skin lesion boundaries from entire dermoscopy images using a deep learning full resolution convolutional network (FrCN). Then, convolutional neural network classifiers (i.e., Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201) are applied to the segmented skin lesions for classification. The former stage is a critical prerequisite for skin lesion diagnosis since it extracts prominent features of various types of skin lesions. A promising classifier is selected by testing well-established classification convolutional neural networks. The proposed integrated deep learning model has been evaluated using three independent datasets (i.e., International Skin Imaging Collaboration (ISIC) 2016, 2017, and 2018, containing two, three, and seven types of skin lesions, respectively) with proper balancing, segmentation, and augmentation. RESULTS: In the integrated diagnostic system, the segmented lesions improve the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in terms of the F1-score for the benign and malignant cases of the ISIC 2016 test dataset, respectively. The Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 classifiers exhibit overall weighted prediction accuracies of 77.04%, 79.95%, 81.79%, and 81.27% for the two classes of ISIC 2016; 81.29%, 81.57%, 81.34%, and 73.44% for the three classes of ISIC 2017; and 88.05%, 89.28%, 87.74%, and 88.70% for the seven classes of ISIC 2018, respectively, demonstrating the superior performance of ResNet-50. CONCLUSIONS: The proposed integrated diagnostic networks could be used to support dermatologists and further improve skin cancer diagnosis.
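A hedged sketch of feeding segmented lesions to a classifier: a predicted lesion mask (here a hypothetical `segment_lesion` stand-in for an FrCN-style network) zeroes out non-lesion pixels before the CNN classifier sees the image. The 0.5 mask threshold, DenseNet-201 head replacement, and image size are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def segment_lesion(dermoscopy_img):
    """Hypothetical stand-in returning a per-pixel lesion probability map."""
    return torch.rand(dermoscopy_img.shape[0], 1, *dermoscopy_img.shape[2:])

classifier = models.densenet201(weights=None)                       # trained weights in practice
classifier.classifier = nn.Linear(classifier.classifier.in_features, 7)  # e.g., 7 ISIC 2018 classes
classifier.eval()

img = torch.rand(1, 3, 224, 224)                 # dummy dermoscopy image
mask = (segment_lesion(img) > 0.5).float()       # binarize the segmentation
lesion_only = img * mask                         # suppress the background skin
with torch.no_grad():
    class_probs = classifier(lesion_only).softmax(dim=1)
print(class_probs.shape)                         # (1, 7)
```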
Subjects
Computer-Assisted Diagnosis/methods, Computer-Assisted Image Interpretation/methods, Skin Neoplasms/diagnosis, Dermoscopy, Humans, Machine Learning, Neural Networks (Computer), Skin Neoplasms/classification
ABSTRACT
For computer-aided diagnosis (CAD), detection, segmentation, and classification from medical imagery are three key components for efficiently assisting physicians toward an accurate diagnosis. In this chapter, a fully integrated deep learning-based CAD system is presented to diagnose breast lesions from digital X-ray mammograms, involving detection, segmentation, and classification. To automatically detect breast lesions in mammograms, a regional deep learning approach called You-Only-Look-Once (YOLO) is used. To segment breast lesions, a full resolution convolutional network (FrCN), a novel deep network segmentation model, is implemented and used. Finally, three conventional deep learning models, including a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are separately adopted to classify the detected and segmented breast lesions as either benign or malignant. To evaluate the integrated CAD system for detection, segmentation, and classification, the publicly available and annotated INbreast database is used over fivefold cross-validation tests. The YOLO-based detection achieved a detection accuracy of 97.27%, a Matthews correlation coefficient (MCC) of 93.93%, and an F1-score of 98.02%. Moreover, the breast lesion segmentation via FrCN achieved an overall accuracy of 92.97%, an MCC of 85.93%, a Dice (F1) score of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented breast lesions were classified via CNN, ResNet-50, and InceptionResNet-V2, achieving average overall accuracies of 88.74%, 92.56%, and 95.32%, respectively. The performance evaluation results through all stages of detection, segmentation, and classification show that the integrated CAD system outperforms the latest conventional deep learning methodologies. We conclude that our CAD system could be used to assist radiologists over all stages of detection, segmentation, and classification for the diagnosis of breast lesions.
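For reference, a minimal sketch of the segmentation metrics reported above (overall accuracy, MCC, Dice/F1, and Jaccard), computed from pixel-wise binary masks. The toy masks are illustrative; only the standard metric formulas are the point here.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    dice = 2 * tp / (2 * tp + fp + fn)          # identical to the F1-score
    jaccard = tp / (tp + fp + fn)
    return accuracy, mcc, dice, jaccard

truth = np.zeros((64, 64), dtype=int)
truth[20:40, 20:40] = 1                          # ground-truth lesion mask
pred = np.zeros((64, 64), dtype=int)
pred[22:42, 18:38] = 1                           # predicted lesion mask
print(segmentation_metrics(pred, truth))
```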
Subjects
Breast Neoplasms/diagnostic imaging, Deep Learning, Computer-Assisted Diagnosis, Computer-Assisted Image Interpretation, Mammography/methods, Humans
ABSTRACT
To control foot-and-mouth disease (FMD) outbreaks that originated in Jincheon County in South Korea between 2014 and 2015, several commercial vaccines were studied for their efficacy and serological performance in the field. In this study, the efficacy of the O SKR 7/10 vaccine was evaluated by challenge with the FMD virus (FMDV) O/Jincheon/SKR/2014 (O Jincheon), which has the same O/SEA/Mya-98 lineage as the O/SKR/7/10 strain that was isolated in 2010 in South Korea, in FMD-seronegative pigs. Full protection against the O Jincheon virus was demonstrated as early as 14 days postvaccination, which was explained by the strong serological relationship (r1 value: ≥ 0.92) between the O Jincheon and O SKR 2010 viruses. However, in the field trial, no satisfactory serological elevations against FMDV were observed, even in the double-vaccinated groups. Therefore, it can be concluded that the O SKR 7/10 vaccine may need to be improved to overcome the interference effects from the high levels of maternally-derived antibodies generated due to the mandatory nationwide vaccination of sows in South Korea.