Results 1 - 20 of 127
1.
Skin Res Technol ; 30(4): e13698, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38634154

ABSTRACT

BACKGROUND: Dermoscopy is a common method for diagnosing scalp psoriasis, and several artificial intelligence techniques, most commonly convolutional neural network algorithms, have been used to assist dermoscopic diagnosis of nail fungus disease. Convolutional neural networks, however, are only the most basic algorithm, and the use of object detection algorithms to assist dermoscopic diagnosis of scalp psoriasis has not been reported. OBJECTIVES: To establish a dermoscopic diagnostic framework for scalp psoriasis based on object detection and image enhancement, in order to improve diagnostic efficiency and accuracy. METHODS: We analyzed the dermoscopic patterns of scalp psoriasis diagnosed at the 72nd Group Army Hospital of the PLA from January 1, 2020 to December 31, 2021, and selected scalp seborrheic dermatitis as a control group. Based on dermoscopic images and the major dermoscopic patterns of scalp psoriasis and scalp seborrheic dermatitis, we investigated a multi-network fusion object detection framework based on the object detection technique Faster R-CNN and the image enhancement technique contrast limited adaptive histogram equalization (CLAHE) to assist in diagnosing the two diseases and to differentiate their major dermoscopic patterns. The diagnostic performance of the multi-network fusion object detection framework was compared with that of dermatologists. RESULTS: A total of 1876 dermoscopic images were collected: 1218 of scalp psoriasis and 658 of scalp seborrheic dermatitis. Training and testing were performed on these images using the multi-network fusion object detection framework.
The test accuracy, specificity, sensitivity, and Youden index were 91.0%, 89.5%, 91.0%, and 0.805 for the diagnosis of scalp psoriasis, and 89.9%, 97.7%, 89.9%, and 0.876 for the main dermoscopic patterns of the two diseases. Compared with the diagnoses of five dermatologists, the fusion framework performed better. CONCLUSIONS: The study showed some differences in dermoscopic patterns between scalp psoriasis and scalp seborrheic dermatitis. The proposed multi-network fusion object detection framework achieved higher diagnostic performance for scalp psoriasis than the dermatologists.
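The CLAHE enhancement step named above limits how much near-flat regions get amplified during histogram equalization. A minimal NumPy sketch of the per-tile core follows (full CLAHE additionally interpolates the mappings of neighbouring tiles; the clip limit of 2.0 is an illustrative value, not the paper's setting):

```python
import numpy as np

def clipped_equalize(tile: np.ndarray, clip_limit: float = 2.0) -> np.ndarray:
    """Contrast-limited histogram equalization of one 8-bit tile.

    This is the per-tile core of CLAHE; a full implementation also
    bilinearly interpolates the mappings of neighbouring tiles.
    """
    hist = np.bincount(tile.ravel(), minlength=256).astype(np.float64)
    # Clip the histogram and redistribute the excess uniformly, which
    # limits contrast amplification in nearly uniform regions.
    limit = clip_limit * hist.mean()
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / 256
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    mapping = np.round(255 * cdf).astype(np.uint8)
    return mapping[tile]

# A synthetic low-contrast tile (intensities roughly 90..192):
tile = np.clip(np.arange(64 * 64).reshape(64, 64) // 40 + 90, 0, 255).astype(np.uint8)
out = clipped_equalize(tile)
print(int(tile.max()) - int(tile.min()), int(out.max()) - int(out.min()))
```

After the mapping, the dynamic range of the tile is noticeably stretched, which is what makes the dermoscopic structures easier for a detector to pick up.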


Subject(s)
Seborrheic Dermatitis, Psoriasis, Skin Neoplasms, Humans, Scalp, Artificial Intelligence, Neural Networks (Computer), Dermoscopy/methods, Skin Neoplasms/diagnosis
2.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676049

ABSTRACT

Long-term, automated fish detection provides invaluable data for deep-sea aquaculture, which is crucial for safe and efficient seawater aquafarming. In this paper, we used an infrared camera installed on a deep-sea truss-structure net cage to collect fish images, which were subsequently labeled to establish a fish dataset. Comparison experiments on our dataset, with Faster R-CNN as the basic object detection framework, were conducted to explore how different backbone networks and network improvement modules influenced fish detection performance. Furthermore, we also experimented with the effects of different learning rates, feature extraction layers, and data augmentation strategies. Our results showed that Faster R-CNN with the EfficientNetB0 backbone and FPN module was the most competitive fish detection network for our dataset: it took a significantly shorter detection time while maintaining a high AP50 value of 0.85, compared to the best AP50 value of 0.86, achieved by combining VGG16 with all improvement modules plus data augmentation. Overall, this work has verified the effectiveness of deep learning-based object detection methods and provided insights into subsequent network improvements.


Subject(s)
Aquaculture, Deep Learning, Fishes, Animals, Aquaculture/methods, Infrared Rays, Computer-Assisted Image Processing/methods, Neural Networks (Computer)
3.
Sensors (Basel) ; 24(8)2024 Apr 21.
Article in English | MEDLINE | ID: mdl-38676267

ABSTRACT

The rapid increase in the number of vehicles has led to increasing traffic congestion, traffic accidents, and motor vehicle crime rates. The management of various parking lots has also become increasingly challenging. Vehicle-type recognition technology can reduce the human workload in vehicle management operations. Therefore, the application of image technology to vehicle-type recognition is of great significance for integrated traffic management. In this paper, an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) model was proposed for vehicle-type recognition. First, the output features of different convolution layers were combined to improve recognition accuracy. Then, the average precision (AP) of the recognition model was improved through the contextual features of the original image and an object bounding box optimization strategy. Finally, a comparison experiment used a vehicle image dataset of three vehicle types: cars, sports utility vehicles (SUVs), and vans. The experimental results show that the improved model can effectively identify vehicle types in images. The AP for the three vehicle types is 83.2%, 79.2%, and 78.4%, respectively, and the mean average precision (mAP) is 1.7% higher than that of the traditional Faster R-CNN model.
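Combining the output features of different convolution layers, as described above, typically means upsampling the deeper (coarser, semantically richer) map to the spatial size of a shallower one and concatenating along the channel axis. A minimal NumPy sketch, with illustrative shapes and layer names that are assumptions rather than the paper's exact configuration:

```python
import numpy as np

def upsample_nearest(feat: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def combine_features(shallow: np.ndarray, deep: np.ndarray) -> np.ndarray:
    """Concatenate a shallow map with an upsampled deep map along channels."""
    factor = shallow.shape[1] // deep.shape[1]
    deep_up = upsample_nearest(deep, factor)
    return np.concatenate([shallow, deep_up], axis=0)

# Illustrative shapes: a conv3-like output (256, 28, 28) and a
# conv5-like output (512, 7, 7) from the same backbone.
conv3 = np.random.rand(256, 28, 28).astype(np.float32)
conv5 = np.random.rand(512, 7, 7).astype(np.float32)
fused = combine_features(conv3, conv5)
print(fused.shape)  # (768, 28, 28)
```

The fused map keeps the fine spatial resolution of the shallow layer while carrying the deeper layer's channels, which is what lets the detector use both localization detail and higher-level semantics.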

4.
Sensors (Basel) ; 23(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836963

ABSTRACT

For centuries, libraries worldwide have preserved ancient manuscripts due to their immense historical and cultural value. However, over time, both natural and human-made factors have led to the degradation of many ancient Arabic manuscripts, causing the loss of significant information, such as authorship, titles, or subjects, rendering them as unknown manuscripts. Although catalog cards attached to these manuscripts might contain some of the missing details, these cards have degraded significantly in quality over the decades within libraries. This paper presents a framework for identifying these unknown ancient Arabic manuscripts by processing the catalog cards associated with them. Given the challenges posed by the degradation of these cards, simple optical character recognition (OCR) is often insufficient. The proposed framework uses deep learning architecture to identify unknown manuscripts within a collection of ancient Arabic documents. This involves locating, extracting, and classifying the text from these catalog cards, along with implementing processes for region-of-interest identification, rotation correction, feature extraction, and classification. The results demonstrate the effectiveness of the proposed method, achieving an accuracy rate of 92.5%, compared to 83.5% with classical image classification and 81.5% with OCR alone.

5.
Sensors (Basel) ; 23(19)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37837174

ABSTRACT

An increasing number of special-use and high-rise buildings have presented challenges for efficient evacuations, particularly in fire emergencies. At the same time, however, the use of autonomous vehicles within indoor environments has received only limited attention for emergency scenarios. To address these issues, we developed a method that classifies emergency symbols and determines their location on emergency floor plans. The method incorporates color filtering, clustering and object detection techniques to extract walls, which were used in combination to generate clean, digitized plans. By integrating the geometric and semantic data digitized with our method, existing building information modeling (BIM) based evacuation tools can be enhanced, improving their capabilities for path planning and decision making. We collected a dataset of 403 German emergency floor plans and created a synthetic dataset comprising 5000 plans. Both datasets were used to train two distinct faster region-based convolutional neural networks (Faster R-CNNs). The models were evaluated and compared using 83 floor plan images. The results show that the synthetic model outperformed the standard model for rare symbols, correctly identifying symbol classes that were not detected by the standard model. The presented framework offers a valuable tool for digitizing emergency floor plans and enhancing digital evacuation applications.
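The color-filtering step for isolating emergency symbols might look like the following pure-NumPy sketch. Escape-route symbols on German emergency floor plans are conventionally green, so the filter keeps pixels whose green channel clearly dominates; the dominance rule and the margin value are illustrative assumptions, not the authors' exact filter:

```python
import numpy as np

def green_symbol_mask(rgb: np.ndarray, margin: int = 40) -> np.ndarray:
    """Boolean mask of pixels whose green channel clearly dominates.

    rgb: (H, W, 3) uint8 image. The margin threshold is an assumption;
    a real pipeline would tune it (or filter in HSV space) per dataset.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (g - np.maximum(r, b)) > margin

# A tiny synthetic plan: one green symbol patch and one grey wall pixel.
plan = np.zeros((4, 4, 3), dtype=np.uint8)
plan[1:3, 1:3] = (30, 200, 40)   # green symbol pixels
plan[0, 0] = (200, 200, 200)     # grey wall pixel
mask = green_symbol_mask(plan)
print(mask.sum())  # 4
```

The resulting mask can then feed the clustering step, grouping connected green pixels into candidate symbol regions before detection.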

6.
Sensors (Basel) ; 23(23)2023 Dec 02.
Article in English | MEDLINE | ID: mdl-38067941

ABSTRACT

Vehicle type and brand information constitute a crucial element in intelligent transportation systems (ITSs). While numerous appearance-based classification methods have studied frontal-view images of vehicles, the challenge of multi-pose and multi-angle vehicle distribution has largely been overlooked. This paper proposes an appearance-based classification approach for multi-angle vehicle information recognition, addressing the aforementioned issues. By utilizing Faster Region-based Convolutional Neural Networks (Faster R-CNN), this method automatically captures the crucial features for vehicle type and brand identification, departing from traditional handcrafted feature extraction techniques. To extract rich and discriminative vehicle information, ZFNet and VGG16 are employed. Vehicle feature maps are then imported into the region proposal network and the classification and location refinement network: the former generates candidate regions on the feature map that may contain vehicle targets, while the latter refines vehicle locations and classifies vehicle types. Additionally, a comprehensive vehicle dataset, Car5_48, is constructed to evaluate the performance of the proposed method, encompassing multi-angle images across five vehicle types and 48 vehicle brands. The experimental results on this public dataset demonstrate the effectiveness of the proposed approach in accurately classifying vehicle types and brands.

7.
Sensors (Basel) ; 23(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850579

ABSTRACT

With the rapid advancement of deep learning theory and hardware computing capacity, computer vision tasks such as object detection and instance segmentation have entered a revolutionary phase in recent years. As a result, extremely challenging integrated tasks, such as person search, have been able to develop quickly. The majority of efficient network frameworks, such as Seq-Net, are based on Faster R-CNN. However, because of the parallel structure of Faster R-CNN, re-ID performance can be significantly impacted by the single-layer, low-resolution, and occasionally overlooked feature maps retrieved during pedestrian detection. To address these issues, this paper proposed a person search methodology based on an inception convolution and feature fusion module (IC-FFM), using Seq-Net (Sequential End-to-end Network) as the benchmark. First, we replaced the general convolution in ResNet-50 with the new inception convolution module (ICM), allowing the convolution operation to effectively and dynamically distribute various channels. Then, to improve the accuracy of information extraction, the feature fusion module (FFM) was created to combine multi-level information using various levels of convolution. Finally, bounding box regression was created using convolution and the double-head module (DHM), which considerably enhanced the accuracy of pedestrian retrieval by combining global and fine-grained information. Experiments on the CUHK-SYSU and PRW datasets showed that our method has higher accuracy than Seq-Net. In addition, our method is simpler and can be easily integrated into existing two-stage frameworks.

8.
Sensors (Basel) ; 23(5)2023 Feb 25.
Article in English | MEDLINE | ID: mdl-36904768

ABSTRACT

Recent years have witnessed an increasing risk of subsea gas leaks with the development of offshore gas exploration, which poses a potential threat to human life, corporate assets, and the environment. Optical imaging-based monitoring has become widespread for underwater gas leakage, but it suffers from heavy labor costs and frequent false alarms because it relies on operators' handling and judgment. This study aimed to develop an advanced computer vision-based approach for automatic, real-time monitoring of underwater gas leaks. A comparative analysis of the Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once version 4 (YOLOv4) was conducted. The results demonstrated that the Faster R-CNN model, developed with an image size of 1280 × 720 and no noise, was optimal for automatic, real-time monitoring of underwater gas leakage. This optimal model could accurately classify small and large leakage gas plumes in real-world datasets and locate the areas of these underwater gas plumes.

9.
Sichuan Da Xue Xue Bao Yi Xue Ban ; 54(5): 915-922, 2023 Sep.
Article in Chinese | MEDLINE | ID: mdl-37866946

ABSTRACT

Objective: To propose an improved object detection algorithm for thyroid nodules based on Faster R-CNN, so as to improve the detection precision of thyroid nodules in ultrasound images. Methods: The algorithm used ResNeSt50 combined with deformable convolution (DC) as the backbone network to improve the detection of irregularly shaped nodules. Feature pyramid networks (FPN) and Region of Interest (RoI) Align were introduced behind the backbone network: the former to reduce missed or mistaken detection of thyroid nodules, and the latter to improve the detection precision of small nodules. To improve the generalization ability of the model, parameters were updated during backpropagation with an optimizer improved by Sharpness-Aware Minimization (SAM). Results: In this experiment, 6261 thyroid ultrasound images from the Affiliated Hospital of Xuzhou Medical University and the First Hospital of Nanjing were used to compare and evaluate the improved algorithm. The algorithm showed a clear optimization effect, with an AP50 of 97.4% on the final test set and a 10.0% improvement in AP@50:5:95 over the original model. Compared with both the original model and existing models, the improved algorithm achieved higher detection precision and detected thyroid nodules more accurately. In particular, it achieved a higher recall rate under lower detection-frame precision requirements. Conclusion: The improved method proposed in this study is an effective object detection algorithm for thyroid nodules and can be used to detect them accurately and precisely.
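The Sharpness-Aware Minimization (SAM) update mentioned above first ascends to the worst-case point in a small neighbourhood of the current weights and then applies the gradient computed there, which biases training toward flat minima. A minimal NumPy sketch on a toy quadratic loss (the rho and learning-rate values are illustrative, not the paper's settings):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update: ascend to the worst-case neighbour, then descend.

    grad_fn(w) returns the loss gradient at w. SAM probes the point
    w + rho * g / ||g|| and applies the gradient found there back at
    the original weights.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # sharpness probe direction
    g_adv = grad_fn(w + eps)                     # gradient at perturbed point
    return w - lr * g_adv

# Toy loss L(w) = 0.5 * ||w||^2, whose gradient is simply w.
grad_fn = lambda w: w
w = np.array([2.0, -1.0])
for _ in range(50):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # converges toward the minimum at the origin
```

In a real training loop, `grad_fn` would be two backward passes through the detector; the extra pass is the cost SAM pays for its generalization benefit.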


Subject(s)
Thyroid Nodule, Humans, Thyroid Nodule/diagnostic imaging, Neural Networks (Computer), Algorithms, Ultrasonography/methods
10.
Biotechnol Bioeng ; 119(2): 626-635, 2022 02.
Article in English | MEDLINE | ID: mdl-34750809

ABSTRACT

Macrophages play an important role in the adaptive immune system. Their ability to neutralize cellular targets through Fc receptor-mediated phagocytosis is relied upon by immunotherapies, which have become of particular interest for the treatment of cancer and autoimmune diseases. A detailed investigation of phagocytosis is the key to improving the therapeutic efficiency of existing medications and to creating new ones. A promising method for studying the process is imaging flow cytometry (IFC), which acquires thousands of cell images per second in up to 12 optical channels and allows multiparametric fluorescent and morphological analysis of samples in the flow. However, conventional IFC data analysis approaches are based on a highly subjective manual choice of masks and other processing parameters, which can lead to the loss of valuable information embedded in the original image. Here, we show the application of a Faster region-based convolutional neural network (CNN) for accurate quantitative analysis of phagocytosis using imaging flow cytometry data. Phagocytosis of erythrocytes by peritoneal macrophages was chosen as a model system. The CNN performed automatic high-throughput processing of the datasets and demonstrated impressive results in identifying and classifying macrophages and erythrocytes, despite the variety of shapes, sizes, intensities, and textures of cells in the images. The developed procedure allows determining the number of phagocytosed cells, disregarding cases with a low probability of correct classification. We believe that CNN-based approaches will enable powerful in-depth investigation of a wide range of biological processes and will reveal the intricate nature of heterogeneous objects in images, leading to completely new capabilities in diagnostics and therapy.


Subject(s)
Flow Cytometry/methods, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Phagocytosis/physiology, Algorithms, Animals, Erythrocytes/cytology, Erythrocytes/physiology, Peritoneal Macrophages/cytology, Peritoneal Macrophages/physiology, Mice
11.
Mycoses ; 65(4): 466-472, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35119144

ABSTRACT

BACKGROUND: Onychomycosis is a common disease. Emerging noninvasive, real-time techniques such as dermoscopy and deep convolutional neural networks have been proposed for the diagnosis of onychomycosis. However, the application of deep learning to dermoscopic images has not been reported. OBJECTIVES: To explore the establishment of deep learning-based diagnostic models for onychomycosis in dermoscopy, in order to improve diagnostic efficiency and accuracy. METHODS: We evaluated the dermoscopic patterns of onychomycosis diagnosed at Sun Yat-sen Memorial Hospital, Guangzhou, China, from May 2019 to February 2021, and included nail psoriasis and traumatic onychodystrophy as control groups. Based on the dermoscopic images and the characteristic dermoscopic patterns of onychomycosis, we trained faster region-based convolutional neural networks to distinguish nail disorders from normal nails, and onychomycosis from non-mycological nail disorders (nail psoriasis and traumatic onychodystrophy). The diagnostic performance of the deep learning-based models was compared with that of dermatologists. RESULTS: A total of 1,155 dermoscopic images were collected, comprising onychomycosis (603 images), nail psoriasis (221 images), traumatic onychodystrophy (104 images), and normal cases (227 images). Statistical analyses revealed that subungual keratosis, distal irregular termination, longitudinal striae, jagged edge, marble-like turbid area, and cone-shaped keratosis were highly specific (>82%) for the diagnosis of onychomycosis. The deep learning-based diagnostic models (ensemble model) showed test accuracy/specificity/sensitivity/Youden index of 95.7%/98.8%/82.1%/0.809 for nail disorders and 87.5%/93.0%/78.5%/0.715 for onychomycosis. The diagnostic performance of the ensemble model for onychomycosis was superior to that of 54 dermatologists. CONCLUSIONS: Our study demonstrated that onychomycosis has distinctive dermoscopic patterns compared with nail psoriasis and traumatic onychodystrophy.
The deep learning-based diagnostic models showed diagnostic accuracy for onychomycosis superior to that of dermatologists.


Subject(s)
Deep Learning, Onychomycosis, Dermoscopy, Humans, Neural Networks (Computer), Onychomycosis/diagnostic imaging, Sensitivity and Specificity
12.
BMC Med Inform Decis Mak ; 22(1): 297, 2022 11 17.
Article in English | MEDLINE | ID: mdl-36397034

ABSTRACT

BACKGROUND: The electroencephalography (EEG) signal carries important information about the electrical activity of the brain, which may reveal many pathologies. This information is carried in certain waveforms and events, one of which is the K-complex. It is used by neurologists to diagnose neurophysiologic and cognitive disorders as well as in sleep studies. Existing detection methods largely depend on tedious, time-consuming, and error-prone manual inspection of the EEG waveform. METHODS: In this paper, a highly accurate K-complex detection system is developed. Based on multiple convolutional neural network (CNN) feature extraction backbones and EEG waveform images, a Faster Region-based Convolutional Neural Network (Faster R-CNN) detector was designed, trained, and tested. Extensive performance evaluation was performed using four deep transfer learning feature extraction models (AlexNet, ResNet-101, VGG19, and Inceptionv3). The dataset comprised 10,948 images of EEG waveforms, with the locations of the K-complexes included as separate text files containing the bounding box information. RESULTS: The Inceptionv3- and VGG19-based detectors performed consistently well (up to 99.8% precision and a 0.2% miss rate) over different testing scenarios, in which the proportion of training images was varied from 60% to 80% and the positive overlap threshold was increased from 60% to 90%. CONCLUSIONS: Our automated method appears to be a highly accurate, real-time K-complex detector that can aid practitioners in speedy EEG inspection.
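Evaluating a detector at a "positive overlap threshold", as described above, means counting a prediction as a true positive only when its intersection-over-union with an unmatched ground-truth box meets the threshold. A small sketch of that bookkeeping (greedy matching and the 0.6 threshold are illustrative assumptions, not the paper's exact protocol):

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_and_miss_rate(preds, gts, thresh=0.6):
    """Greedily match predictions to ground truths at an IoU threshold."""
    matched = set()
    tp = 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thresh:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    miss_rate = 1 - tp / len(gts) if gts else 0.0
    return precision, miss_rate

gts = [[10, 10, 50, 50], [60, 10, 90, 40]]
preds = [[12, 12, 52, 52], [100, 100, 120, 120]]  # one hit, one false alarm
p, m = precision_and_miss_rate(preds, gts)
print(p, m)  # 0.5 0.5
```

Raising the threshold from 60% to 90%, as in the study, only tightens the `iou(...) >= thresh` test, so a detector whose precision survives that tightening localizes K-complexes very precisely.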


Subject(s)
Deep Learning, Humans, Neural Networks (Computer), Electroencephalography, Polysomnography, Brain
13.
Sensors (Basel) ; 22(10)2022 May 18.
Article in English | MEDLINE | ID: mdl-35632233

ABSTRACT

The purpose of this paper is to study the recognition of ships and their structures to improve the safety of drone operations engaged in shore-to-ship drone delivery services. This study developed a system that can distinguish between ships and their structures using a convolutional neural network (CNN). First, the Marine Traffic Management Net dataset is described, and the CNN's object detection based on the Detectron2 platform is discussed. The experiments and their performance are then described. In addition, this study was conducted based on actual drone delivery operations: the first air delivery service by drones in Korea.


Subject(s)
Neural Networks (Computer), Ships, Republic of Korea
14.
Sensors (Basel) ; 22(5)2022 Mar 07.
Article in English | MEDLINE | ID: mdl-35271214

ABSTRACT

In orchard automation, a current challenge is recognizing natural landmarks and tree trunks to localize intelligent robots. To overcome low-light conditions and global navigation satellite system (GNSS) signal interruptions under a dense canopy, a thermal camera may be used to recognize tree trunks with a deep learning system. The objective of this study was therefore to use a thermal camera to detect tree trunks at different times of day under low-light conditions using deep learning, to allow robots to navigate. Thermal images were collected from the dense canopies of two types of orchards (conventional and joint training systems) under high-light (12-2 PM), low-light (5-6 PM), and no-light (7-8 PM) conditions in August and September 2021 (summertime) in Japan. The tree trunk detection accuracy of the thermal camera showed average errors of 0.16 m at 5 m, 0.24 m at 15 m, and 0.3 m at 20 m distances under high-, low-, and no-light conditions, respectively, across different camera orientations. The thermal imagery dataset was augmented for training, validation, and testing of a Faster R-CNN deep learning model for tree trunk detection. A total of 12,876 images were used to train the model, 2318 to validate the training process, and 1288 to test the model. The mAP of the model was 0.8529 for validation and 0.8378 for testing. The average object detection time was 83 ms for images and 90 ms for videos, with the thermal camera set at 11 FPS. The model was compared with YOLO v3 using the same datasets and training conditions. In these comparisons, Faster R-CNN achieved higher accuracy than YOLO v3 in tree trunk detection with the thermal camera. The results therefore show that Faster R-CNN can recognize objects in thermal images to enable robot navigation in orchards under different lighting conditions.
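The augmentation step that expanded the thermal dataset can be sketched with simple single-channel transforms; horizontal flips and additive intensity shifts are plausible for thermal trunk images, but the specific transforms and the shift range here are assumptions, not the paper's recipe:

```python
import numpy as np

def augment_thermal(img: np.ndarray, rng: np.random.Generator):
    """Yield simple augmentations of a single-channel thermal image.

    Produces the original, a horizontal flip, and an intensity-shifted
    copy. The +/-20 shift range is an illustrative assumption.
    """
    yield img
    yield img[:, ::-1]  # horizontal flip
    shift = int(rng.integers(-20, 21))
    yield np.clip(img.astype(int) + shift, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = (np.arange(16).reshape(4, 4) * 10).astype(np.uint8)
augmented = list(augment_thermal(img, rng))
print(len(augmented))  # 3 variants per source image
```

Applying such a generator over every labeled frame is how a few thousand captures can become the 12,876-image training set the study reports.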


Subject(s)
Neural Networks (Computer), Trees, Japan
15.
Sensors (Basel) ; 22(18)2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36146375

ABSTRACT

Pearl detection with a counter (PDC) in a noncontact and high-precision manner is a challenging task in commercial production. Sea pearls are quite valuable, and traditional manual counting methods are unsatisfactory, as touching may damage the pearls. In this paper, we conduct a comprehensive study of nine object-detection models and evaluate their key metrics. The results indicate that with Faster R-CNN using ResNet152, pretrained on the pearl dataset, mAP@0.5IoU = 100% and mAP@0.75IoU = 98.83% are achieved for pearl recognition, requiring only 15.8 ms of inference time with a counter after the model is first loaded. Finally, the superiority of the proposed Faster R-CNN ResNet152 algorithm with a counter is verified through comparison with eight other sophisticated object detectors with counters. The experimental results on the self-made pearl image dataset show that the total loss decreased to 0.00044, while the classification loss and localization loss gradually decreased below 0.00019 and 0.00031, respectively. The robust performance across the pearl dataset indicates that Faster R-CNN ResNet152 with a counter is promising for pearl detection and accurate counting under natural or artificial light.


Subject(s)
Deep Learning, Neural Networks (Computer), Algorithms, Research Design, Touch
16.
Sensors (Basel) ; 22(20)2022 Oct 19.
Article in English | MEDLINE | ID: mdl-36298312

ABSTRACT

Rust on transmission line fittings is a major hidden risk to transmission safety. Since fittings located at high altitude are inconvenient to inspect and maintain, machine vision techniques have been introduced to realize intelligent rust detection with the help of unmanned aerial vehicles (UAVs). Due to the small size of the fittings and the disturbance of complex environmental backgrounds, however, there are often cases of missed and false detections. To improve detection reliability and robustness, this paper proposes a new robust Faster R-CNN model with a feature enhancement mechanism for rust detection on transmission line fittings. Different from current methods that improve feature representation at the front end, this paper adopts back-end feature enhancement. First, the residual network ResNet-101 is introduced as the backbone to extract rich discriminative information from UAV images. Second, a new feature enhancement mechanism is added after the region of interest (ROI) pooling layer. By calculating the similarity between each region proposal and the others, the feature weights of region proposals containing the target object can be enhanced by overlaying the object's representation, while the weight of disturbance terms is relatively reduced. Empirical evaluation is conducted on real-world UAV monitoring images. The comparative results demonstrate the effectiveness of the proposed model in terms of detection precision and recall rate, with an average precision of 97.07% for rust detection, indicating that the proposed method can provide a reliable and robust solution for rust detection.
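The back-end enhancement idea above, weighting each proposal's pooled features by its similarity to the other proposals so that mutually consistent (likely target) proposals reinforce one another, might be sketched as follows. Cosine similarity and the averaging overlay rule are my reading of the abstract, not the authors' exact formulation:

```python
import numpy as np

def enhance_proposal_features(feats: np.ndarray) -> np.ndarray:
    """Overlay each proposal's features with similarity-weighted peers.

    feats: (N, D) ROI-pooled feature vectors. Proposals that resemble
    many others gain a strong overlay; isolated background proposals
    gain little, so their relative weight drops.
    """
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = normed @ normed.T        # (N, N) cosine similarity matrix
    np.fill_diagonal(sim, 0.0)     # exclude self-similarity
    overlay = sim @ feats / max(feats.shape[0] - 1, 1)
    return feats + overlay

rng = np.random.default_rng(1)
target = rng.normal(size=4)
feats = np.stack([target + 0.05 * rng.normal(size=4) for _ in range(3)]
                 + [rng.normal(size=4)])  # 3 similar proposals + 1 outlier
out = enhance_proposal_features(feats)
print(out.shape)  # (4, 4)
```

The enhanced features then flow into the classification and regression heads in place of the raw ROI features.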


Subject(s)
Reproducibility of Results
17.
Sensors (Basel) ; 22(3)2022 Feb 05.
Article in English | MEDLINE | ID: mdl-35161961

ABSTRACT

Intelligent crack detection is an important guarantee for realizing intelligent operation and maintenance, and it is of great significance to traffic safety. In recent years, the recognition of road pavement cracks based on computer vision has attracted increasing attention. With the technological breakthroughs of general deep learning algorithms, detection algorithms based on deep learning and convolutional neural networks have achieved better results in the field of crack recognition. In this paper, deep learning is investigated for intelligent road crack detection, and Faster R-CNN and Mask R-CNN are compared and analyzed. The results show that the joint training strategy is very effective: both Faster R-CNN and Mask R-CNN complete the crack detection task when trained with only 130+ images and can outperform YOLOv3. However, the joint training strategy degrades the quality of the bounding boxes detected by Mask R-CNN.


Subject(s)
Algorithms, Neural Networks (Computer)
18.
Sensors (Basel) ; 22(22)2022 Nov 10.
Article in English | MEDLINE | ID: mdl-36433294

ABSTRACT

Deep learning has been successfully applied in various domains and has recently received considerable research attention, making it possible to efficiently and intelligently detect crop pests. Nevertheless, the detection of pest objects is still challenging due to the lack of discriminative features and pests' aggregation behavior. Intersection over union (IoU)-based object detection has attracted much attention and has become the most widely used metric. However, IoU is sensitive to localization bias for small objects; furthermore, IoU-based loss only works when the ground-truth and predicted bounding boxes intersect, and it lacks awareness of different geometrical structures. Therefore, we propose a simple and effective metric, truncated structurally aware distance (TSD), and a loss function based on it. First, the distance between two bounding boxes is defined as the standardized Chebyshev distance. We also propose a new regression loss function, truncated structurally aware distance loss, which considers the different geometrical structure relationships between two bounding boxes and whose truncation function is designed to impose different penalties. To further test the effectiveness of our method, we apply it to the Pest24 small-object pest dataset; the results show that the mAP is 5.0% higher than that of other detection methods.
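A standardized Chebyshev distance between boxes, the building block named above, can be sketched as the L-infinity distance between box centres normalized by the enclosing box's dimensions. The normalization choice here is an illustrative assumption rather than the paper's exact definition; the key property is that, unlike IoU, it stays finite and informative even when the boxes do not intersect:

```python
def standardized_chebyshev(box_a, box_b):
    """Chebyshev (L-infinity) distance between box centres, standardized
    by the width and height of the smallest box enclosing both.

    Boxes are [x1, y1, x2, y2]. A truncated loss such as TSD can be
    built on top of a distance like this so that a useful gradient
    exists even for non-overlapping prediction/ground-truth pairs.
    """
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    enc_w = max(box_a[2], box_b[2]) - min(box_a[0], box_b[0])
    enc_h = max(box_a[3], box_b[3]) - min(box_a[1], box_b[1])
    return max(abs(ax - bx) / enc_w, abs(ay - by) / enc_h)

# Two non-intersecting boxes still yield a finite, informative distance:
d = standardized_chebyshev([0, 0, 10, 10], [30, 0, 40, 10])
print(d)  # 0.75
```

An IoU-based loss would be flat (zero overlap) for this pair, whereas this distance decreases smoothly as the predicted box slides toward the ground truth, which is exactly the regression signal small-object detection needs.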

19.
Sensors (Basel) ; 22(2)2022 Jan 09.
Article in English | MEDLINE | ID: mdl-35062437

ABSTRACT

Internet of Things (IoT) technology has recently been applied in healthcare systems as the Internet of Medical Things (IoMT) to collect sensor information for the diagnosis and prognosis of heart disease. The main objective of the proposed research is to classify data and predict heart disease using medical data and medical images. The proposed model is a medical data classification and prediction model that operates in two stages. If the result from the first stage is sufficient for predicting heart disease, stage two is not needed. In the first stage, data gathered from medical sensors affixed to the patient's body were classified; then, in stage two, echocardiogram image classification was performed for heart disease prediction. A hybrid linear discriminant analysis with modified ant lion optimization (HLDA-MALO) technique was used for sensor data classification, while a hybrid Faster R-CNN with SE-ResNet-101 model was used for echocardiogram image classification. Both classification methods were carried out, and the classification findings were consolidated and validated to predict heart disease. The HLDA-MALO method obtained 96.85% accuracy in detecting normal sensor data and 98.31% accuracy in detecting abnormal sensor data. The proposed hybrid Faster R-CNN with SE-ResNeXt-101 transfer learning model performed better in classifying echocardiogram images, with 98.06% precision, 98.95% recall, 96.32% specificity, a 99.02% F-score, and a maximum accuracy of 99.15%.


Subject(s)
Heart Diseases, Internet of Things, Artificial Intelligence, Health Care Delivery, Heart Diseases/diagnostic imaging, Humans, Prognosis
20.
Radiol Med ; 127(4): 398-406, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35262842

ABSTRACT

PURPOSE: We developed a tool for locating and grading knee osteoarthritis (OA) in digital X-ray images and illustrate the potential of deep learning techniques to predict knee OA according to the Kellgren-Lawrence (KL) grading system. The purpose of the project is to see how effectively an artificial intelligence (AI)-based deep learning approach can locate and diagnose the severity of knee OA in digital X-ray images. METHODS: Selection criteria: patients above 50 years old with OA symptoms (knee joint pain, stiffness, crepitus, and functional limitations) were included in the study. Medical experts excluded patients with post-surgical evaluation, trauma, or infection. We used 3172 anterior-posterior view digital X-ray images of the knee joint. We trained the Faster R-CNN architecture to locate the knee joint space width (JSW) region in the X-ray images and incorporated ResNet-50 with transfer learning to extract features. We used another pre-trained network (AlexNet with transfer learning) to classify knee OA severity. We trained the region proposal network (RPN) using manually extracted knee areas as ground truth, and medical experts graded the knee joint X-ray images according to the Kellgren-Lawrence score. An X-ray image is the input to the final model, and the output is a Kellgren-Lawrence grade. RESULTS: The proposed model identified the minimal knee JSW area with a maximum accuracy of 98.516%, and the overall knee OA severity classification accuracy was 98.90%. CONCLUSIONS: Numerous diagnostic methods are available today, but the tools are not transparent, and automated analysis of OA remains a problem. The performance of the proposed model increases with fine-tuning of the network and is higher than that of existing works. We will extend this work to grade OA in MRI data in the future.


Subject(s)
Deep Learning, Knee Osteoarthritis, Artificial Intelligence, Humans, Knee Joint, Middle Aged, Knee Osteoarthritis/diagnostic imaging, Pain