ABSTRACT
(Aim) To detect COVID-19 patients more accurately and more precisely, we proposed a novel artificial intelligence model. (Methods) We used a previously proposed chest CT dataset containing four categories: COVID-19, community-acquired pneumonia, secondary pulmonary tuberculosis, and healthy subjects. First, we proposed a novel VGG-style base network (VSBN) as the backbone network. Second, the convolutional block attention module (CBAM) was introduced into the VSBN as its attention module. Third, an improved multiple-way data augmentation method was used to resist overfitting. In all, the model is a 12-layer attention-based VGG-style network for COVID-19, dubbed AVNC. (Results) The proposed AVNC achieved per-class sensitivity, precision, and F1 scores all above 95%. In particular, AVNC yielded a micro-averaged F1 score of 96.87%, higher than that of 11 state-of-the-art approaches. (Conclusion) The proposed AVNC is effective in recognizing COVID-19.
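The abstract names CBAM as the attention module but gives no implementation details. Below is a minimal PyTorch sketch of a CBAM block attached to one VGG-style convolutional stage; the layer widths, the stage layout, and the placement of the attention block are illustrative assumptions, not the authors' exact 12-layer AVNC configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP applied to average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over the channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                      # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # spatial attention

class VGGStageWithCBAM(nn.Module):
    """One VGG-style stage (two 3x3 convs) followed by CBAM, as a stand-in for an AVNC stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.attn = CBAM(out_ch)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.attn(self.block(x)))

# Example: a single-channel CT slice passed through one attention-equipped stage.
stage = VGGStageWithCBAM(1, 64)
out = stage(torch.randn(2, 1, 224, 224))   # -> (2, 64, 112, 112)
```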
ABSTRACT
Brain tumors are among the deadliest diseases today. A tumor consists of a cluster of abnormal cells grouped in the inner portion of the human brain. It affects the brain by squeezing and damaging healthy tissue, and it also raises intracranial pressure; as a result, tumor cell growth accelerates rapidly, which may lead to death. It is therefore desirable to detect brain tumors at an early stage, which may increase the patient's survival rate. The major objective of this research work is to present a new technique for tumor detection. The proposed architecture accurately segments and classifies benign and malignant tumor cases. Different spatial-domain methods are applied to enhance and accurately segment the input images. Moreover, AlexNet and GoogLeNet are utilized for classification, with two score vectors obtained after the softmax layer. Both score vectors are then fused and supplied to multiple classifiers alongside the softmax layer. The proposed model is evaluated on the leading Medical Image Computing and Computer-Assisted Intervention (MICCAI) challenge datasets: Multimodal Brain Tumor Segmentation (BraTS) 2013, 2014, 2015, and 2016, and Ischemic Stroke Lesion Segmentation (ISLES) 2018.
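The fusion step described above (softmax score vectors from AlexNet and GoogLeNet fed to further classifiers) can be illustrated with the following sketch. It uses torchvision's pre-trained AlexNet and GoogLeNet and a scikit-learn SVM as one of the "multiple classifiers"; the concatenation-based fusion, the two-class labels, and the SVM choice are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from sklearn.svm import SVC

# Pre-trained backbones (weights are downloaded on first use); in the described
# pipeline each network would first be fine-tuned on the tumor images.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
googlenet = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fused_scores(batch):
    """Return the concatenated softmax score vectors from both networks (score-level fusion)."""
    with torch.no_grad():
        s1 = F.softmax(alexnet(batch), dim=1)     # score vector 1
        s2 = F.softmax(googlenet(batch), dim=1)   # score vector 2
    return torch.cat([s1, s2], dim=1).numpy()

# Hypothetical usage: fused score vectors with benign/malignant labels feed an SVM.
# X_train = fused_scores(train_batch); y_train = np.array([...])
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# predictions = clf.predict(fused_scores(test_batch))
```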
Subjects
Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Humans; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods
ABSTRACT
The aim of this work is to develop a Computer-Aided-Brain-Diagnosis (CABD) system that can determine whether a brain scan shows signs of Alzheimer's disease. The method utilizes Magnetic Resonance Imaging (MRI) for classification with several feature extraction techniques. MRI is a non-invasive procedure, widely adopted in hospitals to examine cognitive abnormalities. Images are acquired using the T2 imaging sequence. The paradigm consists of a series of quantitative techniques: filtering, feature extraction, Student's t-test based feature selection, and k-Nearest Neighbor (KNN) based classification. Additionally, a comparative analysis is performed by implementing other feature extraction procedures described in the literature. Our findings suggest that the Shearlet Transform (ST) feature extraction technique offers improved results for Alzheimer's diagnosis compared with alternative methods. The proposed CABD tool with the ST + KNN technique provided an accuracy of 94.54%, precision of 88.33%, sensitivity of 96.30%, and specificity of 93.64%. Furthermore, the tool offered an accuracy, precision, sensitivity, and specificity of 98.48%, 100%, 96.97%, and 100%, respectively, on the benchmark MRI database.
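The classification pipeline named in the abstract (feature extraction, Student's t-test based feature selection, KNN classification) can be sketched as below. The Shearlet feature extraction itself is not shown and is replaced by a placeholder feature matrix; the number of retained features and the value of k are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def select_by_ttest(X, y, n_keep=50):
    """Keep the n_keep features with the smallest two-sample t-test p-values
    between the Alzheimer's (y == 1) and control (y == 0) groups."""
    _, p = ttest_ind(X[y == 1], X[y == 0], axis=0, equal_var=False)
    return np.argsort(p)[:n_keep]

# X: rows are subjects, columns are Shearlet-derived features (assumed pre-computed).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))          # placeholder feature matrix
y = rng.integers(0, 2, size=120)         # placeholder labels (1 = AD, 0 = control)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
idx = select_by_ttest(X_tr, y_tr)        # select features on the training split only
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, idx], y_tr)
print("hold-out accuracy:", knn.score(X_te[:, idx], y_te))
```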
Subjects
Alzheimer Disease/diagnostic imaging; Brain/pathology; Diagnosis, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Alzheimer Disease/classification; Humans; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
ABSTRACT
The visual inspection of histopathological samples is the benchmark for detecting breast cancer, but it is a strenuous and complicated process that consumes a great deal of the pathologist's time. Deep learning models have shown excellent outcomes in clinical diagnosis and image processing, and have advanced various fields, including drug development, frequency simulation, and optimization techniques. However, the resemblance between histopathologic images of breast cancer and the presence of healthy and diseased tissue in different areas make detecting and classifying tumors on whole-slide images more difficult. In breast cancer, a correct diagnosis is needed for complete care within a limited amount of time, and effective detection can relieve the pathologist's workload and mitigate diagnostic subjectivity. Therefore, this research work investigates an improved semantic segmentation model based on pre-trained Xception and DeepLabV3+ designs. The model is trained on input images with ground-truth masks using tuned parameters, which significantly improves the segmentation of breast ultrasound images into the respective classes, i.e., benign and malignant. The segmentation model delivered an accuracy greater than 99%, demonstrating its effectiveness. The segmented images and histopathological breast images are then passed to a 4-qubit quantum circuit with a six-layer architecture to detect breast malignancy. The proposed framework achieved remarkable performance compared with currently published methodologies. HIGHLIGHTS: This research proposes a hybrid semantic model using pre-trained Xception and DeepLabV3+ for classifying breast microscopic cancer into benign and malignant classes, with 95% classification accuracy and 99% accuracy for the detection of breast malignancy.
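The "4-qubit quantum circuit with a six-layer architecture" is not specified further in the abstract. The sketch below shows one common way such a variational classifier can be built with PennyLane; the angle embedding, the entangling-layer template, and the single-qubit readout are assumptions for illustration, not the authors' circuit.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 6
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier_circuit(features, weights):
    """Encode a 4-dimensional feature vector (e.g. a pooled descriptor of a segmented image),
    apply six trainable entangling layers, and read out one expectation value."""
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))   # sign of the expectation -> benign vs. malignant

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.uniform(0, np.pi, size=shape)     # trainable parameters

features = np.array([0.1, 0.7, 0.3, 0.9])             # placeholder image descriptor
print(classifier_circuit(features, weights))
```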
Subjects
Breast Neoplasms; Breast; Breast Neoplasms/diagnostic imaging; Female; Humans; Image Processing, Computer-Assisted/methods
ABSTRACT
Biomechanics-based human identification is a major area of research. Biomechanics-based approaches depend on accurately recognizing humans from their body movements, and their accuracy is enhanced by incorporating knee-hip angle-to-angle relationships. Current biomechanics-based models are developed by considering the biomechanics of human walking and running, in which joint-angle characteristics, also known as gait features, play a vital role in identifying humans. In general, human identification can be broadly classified into two approaches: the biomechanics-based approach, known as gait recognition, and biometric-based composite sketch matching. Gait recognition uses gait traits for person authentication and discriminates people by the way they walk; it identifies an individual from shape and motion information, generally acquired from an image sequence. The efficiency of gait recognition is mainly affected by covariates such as observation view, walking speed, clothing, and belongings. The biometric-based approach to human identification is usually performed by composite sketch matching. Composite sketches are generated using a computer, which obviates the need for a skilled sketch artist; an eyewitness can easily draw these sketches with face-design software in a very short time, without any prior specialized training. However, identifying humans using only composite sketches remains challenging because human faces are not always clearly visible from a distance, so drawing a composite sketch is not always feasible. The key contribution of this paper is a fusion system that combines biomechanics-based gait recognition and biometric-based composite sketch matching for identifying humans in crowded scenes. First, various existing biomechanics-based approaches for gait recognition are implemented. Then a novel biomechanics-based gait recognition technique is developed using sparse representation to generate what we term "score 1." Another novel technique for composite sketch matching is developed using dictionary matching to generate "score 2." Finally, score-level fusion is performed using Dempster-Shafer theory and the Proportional Conflict Redistribution Rule Number 5 (PCR5). The proposed fusion approach is validated using a database containing biomechanics-based gait sequences and biometric-based composite sketches. Our analysis shows that fusing gait recognition and composite sketch matching provides excellent results for real-time human identification.
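The final fusion step combines the two match scores as belief masses. Below is a minimal sketch of Dempster's rule of combination for two sources over the hypotheses {match, non-match}; turning "score 1" and "score 2" into mass functions this way is an illustrative assumption, and the PCR5 variant used to handle highly conflicting sources is not shown.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions defined over the same frame of discernment.
    Masses are dicts mapping frozenset hypotheses to belief mass; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Sources are in total conflict; Dempster's rule is undefined.")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Gait-recognition score and sketch-matching score, rescaled to [0, 1] and treated as
# mass on {match}, with the remainder assigned to ignorance {match, non-match}.
MATCH, NONMATCH = frozenset({"match"}), frozenset({"non-match"})
THETA = MATCH | NONMATCH
score1, score2 = 0.82, 0.64                     # placeholder normalized scores
m_gait = {MATCH: score1, THETA: 1.0 - score1}
m_sketch = {MATCH: score2, THETA: 1.0 - score2}
print(dempster_combine(m_gait, m_sketch))       # fused belief in "match"
```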