Results 1 - 20 of 131,896
1.
Comput Intell Neurosci ; 2021: 2091053, 2021.
Article in English | MEDLINE | ID: mdl-34531907

ABSTRACT

To realize the safe transmission of images, a chaotic image encryption algorithm based on Latin squares and random shifts is proposed. The algorithm consists of four parts: key generation, pixel scrambling, pixel replacement, and bit scrambling. First, the key is generated from the plain image to improve the sensitivity of the encryption method. Second, each row of the image matrix is cyclically shifted to the right, changing the positions of the pixels and realizing pixel position scrambling. Then, a 256-order Latin square matrix built from a chaotic sequence is used as a lookup table: replacement coordinates are calculated from the image pixel value and the chaotic sequence value, and the corresponding elements of the image matrix are replaced. Finally, the bit planes of the image matrix are decomposed and combined into two bit matrices, each bit matrix is scrambled with a Latin square matrix, and the scrambled bit matrices are recombined and converted back to decimal to obtain the ciphertext image. All Latin square matrices used in the proposed method are generated from chaotic sequences, which further increases their complexity and improves the algorithm's security. Experimental results and security analysis show that the proposed algorithm has good security performance and is suitable for image encryption.
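A minimal sketch of the row-wise cyclic scrambling step, assuming a logistic map as the chaotic source; the map parameters, key value, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Generate n chaotic values in (0, 1) with the logistic map."""
    seq, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def scramble_rows(img, key=0.3741):
    """Cyclically shift each row right by an offset drawn from the chaotic sequence."""
    h, w = img.shape
    offsets = (logistic_sequence(key, h) * w).astype(int)
    out = np.empty_like(img)
    for r in range(h):
        out[r] = np.roll(img[r], offsets[r])
    return out, offsets

def unscramble_rows(img, offsets):
    """Invert the scrambling by shifting each row back to the left."""
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        out[r] = np.roll(img[r], -offsets[r])
    return out

if __name__ == "__main__":
    plain = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    cipher, offs = scramble_rows(plain)
    assert np.array_equal(unscramble_rows(cipher, offs), plain)
```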


Subjects
Computer Security; Image Processing, Computer-Assisted; Algorithms; Rotation
2.
Comput Intell Neurosci ; 2021: 3766877, 2021.
Article in English | MEDLINE | ID: mdl-34531908

ABSTRACT

In image reconstruction for electrical capacitance tomography (ECT), applying total least squares theory transforms the ill-posed problem into a nonlinear unconstrained minimization problem, which avoids computing a matrix inverse. However, the iterative update of the coefficient matrix also introduces ill-posedness. To limit the effect of this problem on the final reconstruction accuracy, and in accordance with the working principle of the ECT system, the coefficient matrix is updated in a targeted manner during the total least squares iteration. A new coefficient matrix is calculated, and the regularization matrix is then corrected according to an adaptively targeted singular value, which reduces the ill-posed effect. In this study, the total least squares iterative method is improved by introducing an errors-in-variables (EIV) model to handle errors in both the measured capacitance data and the coefficient matrix. The effect of noise on the measured capacitance data is reduced, and high-quality reconstructed images are obtained iteratively.
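For context, a toy example of why regularization is needed when the sensitivity (coefficient) matrix is ill-posed; this is plain Tikhonov least squares, not the authors' targeted total least squares / EIV update, and all sizes and parameters are invented.

```python
import numpy as np

def tikhonov_reconstruct(S, c, lam=1e-2):
    """Solve min ||S g - c||^2 + lam ||g||^2 for the permittivity image g."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.normal(size=(66, 812))                 # toy sensitivity matrix (measurements x pixels)
    g_true = rng.random(812)
    c = S @ g_true + 0.01 * rng.normal(size=66)    # noisy capacitance data
    g_hat = tikhonov_reconstruct(S, c)
    print(g_hat.shape)
```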


Subjects
Algorithms; Image Processing, Computer-Assisted; Electric Capacitance; Least-Squares Analysis; Tomography
3.
BMC Bioinformatics ; 22(1): 427, 2021 Sep 08.
Article in English | MEDLINE | ID: mdl-34496765

ABSTRACT

BACKGROUND: In mammalian cells the endoplasmic reticulum (ER) comprises a highly complex reticular morphology that is spread throughout the cytoplasm. This organelle is of particular interest to biologists, as its dysfunction is associated with numerous diseases, which often manifest themselves as changes to the structure and organisation of the reticular network. Because of its complex morphology, image analysis methods that quantitatively describe this organelle, and importantly any changes to it, are lacking. RESULTS: In this work we detail a methodological approach that utilises automated high-content screening microscopy to capture images of cells fluorescently labelled for various ER markers, followed by their quantitative analysis. We propose that two key metrics, namely the area of dense ER and the area of polygonal regions in between the reticular elements, together provide a basis for measuring the quantities of rough and smooth ER, respectively. We demonstrate that a number of different pharmacological perturbations to the ER can be quantitatively measured and compared in our automated image analysis pipeline. Furthermore, we show that this method can be implemented in both commercial and open-access image analysis software with comparable results. CONCLUSIONS: We propose that this method has the potential to be applied in the context of large-scale genetic and chemical perturbations to assess the organisation of the ER in adherent cell cultures.
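A rough sketch of the two proposed metrics on a single fluorescence image, assuming simple Otsu thresholds as stand-ins for the actual pipeline settings.

```python
import numpy as np
from scipy import ndimage
from skimage import filters

def er_metrics(image):
    """Return (dense_er_area, polygon_area) in pixels for one cell image."""
    fg = image > filters.threshold_otsu(image)                 # reticular network mask
    dense = fg & (image > filters.threshold_otsu(image[fg]))   # brighter, dense ER sheets
    filled = ndimage.binary_fill_holes(fg)
    polygons = filled & ~fg                                     # gaps enclosed by tubules
    return int(dense.sum()), int(polygons.sum())

if __name__ == "__main__":
    img = np.random.default_rng(1).random((256, 256))
    print(er_metrics(img))
```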


Subjects
Endoplasmic Reticulum; Image Processing, Computer-Assisted; Animals; Cell Line; Humans; Software
4.
BMC Bioinformatics ; 22(1): 421, 2021 Sep 07.
Article in English | MEDLINE | ID: mdl-34493208

ABSTRACT

BACKGROUND: Brain tumor segmentation is a challenging problem in medical image processing and analysis; performed manually, it is time-consuming and error-prone. To reduce the burden on physicians and improve segmentation accuracy, computer-aided detection (CAD) systems need to be developed. Owing to the powerful feature-learning ability of deep learning, many deep learning-based methods have been applied to brain tumor segmentation CAD systems and have achieved satisfactory accuracy. However, deep neural networks have high computational complexity, and the segmentation process consumes significant time. Therefore, to obtain highly accurate segmentation results efficiently, it is highly desirable to speed up the brain tumor segmentation process. RESULTS: Compared with traditional computing platforms, the proposed FPGA accelerator greatly improves speed and reduces power consumption. On the BraTS19 and BraTS20 datasets, our FPGA-based brain tumor segmentation accelerator is 5.21 and 44.47 times faster than a TITAN V GPU and a Xeon CPU, respectively. In terms of energy efficiency, our design achieves 11.22 and 82.33 times the efficiency of the GPU and CPU, respectively. CONCLUSION: We quantize and retrain the neural network for brain tumor segmentation and merge its batch normalization layers to reduce the parameter size and computational complexity. The FPGA-based accelerator is designed to map the quantized network model. It increases segmentation speed and reduces power consumption while maintaining high accuracy, which provides a new direction for the automatic segmentation and remote diagnosis of brain tumors.
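A small sketch of the batch-normalization folding mentioned in the conclusion, followed by naive int8 quantization; shapes, names, and the quantization scheme are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BN(y) = gamma*(y-mean)/sqrt(var+eps)+beta into conv weights W and bias b.

    W has shape (out_channels, in_channels, kh, kw); all BN vectors have length out_channels.
    """
    scale = gamma / np.sqrt(var + eps)
    W_folded = W * scale[:, None, None, None]
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded

def quantize_int8(x):
    """Symmetric linear quantization of an array to int8 plus its scale factor."""
    s = np.abs(x).max() / 127.0
    return np.round(x / s).astype(np.int8), s

if __name__ == "__main__":
    W, b = np.random.randn(16, 8, 3, 3), np.zeros(16)
    gamma, beta = np.ones(16), np.zeros(16)
    mean, var = np.random.randn(16), np.abs(np.random.randn(16)) + 1.0
    Wf, bf = fold_batchnorm(W, b, gamma, beta, mean, var)
    Wq, scale = quantize_int8(Wf)
    print(Wq.dtype, scale)
```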


Subjects
Algorithms; Brain Neoplasms; Brain Neoplasms/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer
5.
IEEE Trans Image Process ; 30: 7636-7648, 2021.
Article in English | MEDLINE | ID: mdl-34469297

ABSTRACT

Convolutional neural networks can extract powerful representations for face recognition. However, they tend to generalize poorly under imbalanced data distributions in which a small number of classes are over-represented (e.g., frontal or non-occluded faces) and some of the remaining classes rarely appear (e.g., profile or heavily occluded faces). This is why performance is dramatically degraded on minority classes; the issue is particularly serious for recognizing masked faces during the ongoing COVID-19 pandemic. In this work, we propose an Attention Augmented Network, called AAN-Face, to handle this issue. First, an attention erasing (AE) scheme is proposed to randomly erase units in attention maps, which prepares the model to handle occlusions and pose variations. Second, an attention center loss (ACL) is proposed to learn a center for each attention map so that the same attention map focuses on the same facial part. Consequently, discriminative facial regions are emphasized while useless or noisy ones are suppressed. Third, the AE and the ACL are combined to form AAN-Face. Because the discriminative parts are randomly removed by the AE, the ACL is encouraged to learn different attention centers, leading to the localization of diverse and complementary facial parts. Comprehensive experiments on various test datasets, especially on masked faces, demonstrate that our AAN-Face models outperform state-of-the-art methods, showing the importance and effectiveness of the proposed components.
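An illustrative NumPy sketch of the attention erasing idea and a center-style penalty on attention peaks; the real AE/ACL operate on network tensors during training, so the shapes, erase probability, and the peak-based center term below are simplified assumptions.

```python
import numpy as np

def attention_erase(att_maps, erase_prob=0.3, rng=None):
    """Randomly zero out units of attention maps of shape (batch, num_maps, h, w)."""
    rng = rng or np.random.default_rng()
    keep = rng.random(att_maps.shape) > erase_prob
    return att_maps * keep

def attention_center_loss(att_maps, centers):
    """Penalize the distance between each map's spatial peak and its learned center.

    centers: (num_maps, 2) array of (row, col) centers, fixed here for the sketch.
    """
    b, m, h, w = att_maps.shape
    flat = att_maps.reshape(b, m, -1).argmax(axis=-1)
    peaks = np.stack([flat // w, flat % w], axis=-1)    # (batch, num_maps, 2)
    return np.mean((peaks - centers[None]) ** 2)

if __name__ == "__main__":
    maps = np.random.rand(4, 6, 14, 14)
    erased = attention_erase(maps)
    loss = attention_center_loss(erased, centers=np.full((6, 2), 7.0))
    print(erased.shape, loss)
```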


Subjects
Automated Facial Recognition/methods; Face/anatomy & histology; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; COVID-19; Humans; Masks
6.
BMC Bioinformatics ; 22(1): 433, 2021 Sep 10.
Article in English | MEDLINE | ID: mdl-34507520

ABSTRACT

BACKGROUND: Imaging data contains a substantial amount of information which can be difficult to evaluate by eye. With the expansion of high throughput microscopy methodologies producing increasingly large datasets, automated and objective analysis of the resulting images is essential to effectively extract biological information from this data. CellProfiler is a free, open source image analysis program which enables researchers to generate modular pipelines with which to process microscopy images into interpretable measurements. RESULTS: Herein we describe CellProfiler 4, a new version of this software with expanded functionality. Based on user feedback, we have made several user interface refinements to improve the usability of the software. We introduced new modules to expand the capabilities of the software. We also evaluated performance and made targeted optimizations to reduce the time and cost associated with running common large-scale analysis pipelines. CONCLUSIONS: CellProfiler 4 provides significantly improved performance in complex workflows compared to previous versions. This release will ensure that researchers will have continued access to CellProfiler's powerful computational tools in the coming years.


Subjects
Image Processing, Computer-Assisted; Software; Microscopy; Workflow
7.
Comput Intell Neurosci ; 2021: 7270908, 2021.
Article in English | MEDLINE | ID: mdl-34512745

ABSTRACT

Tool condition is an important part of machining and machine tool safety, and image-based detection of the machine tool path can effectively capture the in-machine condition of a tool. To obtain an accurate image edge and improve image processing accuracy, a novel subpixel edge detection method is proposed in this study. A preliminary contour is segmented by binarization, the second derivative is calculated in the neighborhood of each candidate point, and the obtained values are sampled according to specified rules for curve fitting; the point at which the fitted curve crosses zero gives the subpixel edge position. Experiments show that an improved subpixel edge can be obtained. The results show that the proposed method extracts a satisfactory subpixel contour that is more accurate and reliable than the edges obtained by several current pixel-level operators, such as the Canny operator, and can be used in edge detection tasks with high-accuracy requirements, such as the online contour detection of tools.
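A sketch of the subpixel refinement idea along a one-dimensional scan line: fit a curve to the second derivative near the coarse edge and take its zero crossing. The parabola fit and the synthetic sigmoid edge are illustrative choices, not the paper's exact sampling rules.

```python
import numpy as np

def subpixel_edge(profile):
    """profile: 1-D intensity samples across an edge; returns the subpixel edge position."""
    d2 = np.gradient(np.gradient(profile.astype(float)))     # second derivative
    i = np.argmax(np.abs(np.diff(np.sign(d2))))               # coarse zero crossing index
    # Fit a parabola to three d2 samples around the crossing and solve for d2 = 0.
    y0, y1, y2 = d2[i - 1], d2[i], d2[i + 1]
    a = (y0 - 2 * y1 + y2) / 2.0
    b = (y2 - y0) / 2.0
    roots = np.roots([a, b, y1])                               # offsets relative to index i
    offset = roots[np.argmin(np.abs(roots))].real
    return i + offset

if __name__ == "__main__":
    x = np.arange(40)
    edge = 1.0 / (1.0 + np.exp(-(x - 19.6)))                   # smooth step centered at 19.6
    print(subpixel_edge(edge))
```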


Subjects
Algorithms; Image Processing, Computer-Assisted
8.
Sci Rep ; 11(1): 17489, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34471180

ABSTRACT

Rapid and sensitive screening tools for SARS-CoV-2 infection are essential to limit the spread of COVID-19 and to properly allocate national resources. Here, we developed a new point-of-care, non-contact thermal imaging tool to detect COVID-19, based on advanced image processing algorithms. We captured thermal images of the backs of individuals with and without COVID-19 using a portable thermal camera that connects directly to smartphones. Our novel image processing algorithms automatically extracted multiple texture and shape features of the thermal images and achieved an area under the curve (AUC) of 0.85 in COVID-19 detection with up to 92% sensitivity. Thermal imaging scores were inversely correlated with clinical variables associated with COVID-19 disease progression. In summary, we show, for the first time, that a hand-held thermal imaging device can be used to detect COVID-19. Non-invasive thermal imaging could be used to screen for COVID-19 in out-of-hospital settings, especially in low-income regions with limited imaging resources.
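A generic sketch of the analysis pattern described above, with made-up features and synthetic data: extract a few texture and shape statistics per thermal image and evaluate detection by the area under the ROC curve. The actual features and model of the study are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def thermal_features(img):
    """Simple stand-in texture/shape descriptors for one thermal image."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(),
                     np.hypot(gx, gy).mean(),                   # roughness proxy
                     (img > img.mean() + img.std()).mean()])    # hot-spot area fraction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.random((120, 64, 64)) + rng.random((120, 1, 1)) * 0.2
    labels = rng.integers(0, 2, 120)                             # synthetic COVID-19 status
    X = np.array([thermal_features(im) for im in images])
    scores = cross_val_predict(LogisticRegression(max_iter=1000), X, labels,
                               cv=5, method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(labels, scores))
```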


Subjects
COVID-19/diagnostic imaging; Image Processing, Computer-Assisted/instrumentation; Adult; Aged; Algorithms; Area Under Curve; Disease Progression; Female; Humans; Male; Middle Aged; Point-of-Care Systems; Sensitivity and Specificity; Smartphone
9.
Sensors (Basel) ; 21(17)2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34502731

ABSTRACT

As a sub-direction of image retrieval, person re-identification (Re-ID) is usually used to solve the security problem of cross-camera tracking and monitoring, and a growing number of shopping centers have recently attempted to apply Re-ID technology. One development trend of related algorithms is using an attention mechanism to capture global and local features, but these algorithms have an apparent limitation: they focus only on the most salient features without considering detailed ones. People's clothes, bags, and even shoes are of great help in distinguishing pedestrians, yet global features usually overshadow these important local features. Therefore, we propose a dual branch network based on a multi-scale attention mechanism that can capture both apparent global features and inconspicuous local features of pedestrian images. Specifically, we design a dual branch attention network (DBA-Net) whose two branches simultaneously optimize the features extracted at different depths. We also design an effective block, called channel, position and spatial-wise attention (CPSA), which can capture key fine-grained information such as bags and shoes. Furthermore, in addition to ID loss, we use a complementary triplet loss and an adaptive weighted rank list loss (WRLL) on each branch during training. DBA-Net not only learns semantic context information along the channel, position, and spatial dimensions but also integrates detailed semantic information by learning the dependency relationships between features. Extensive experiments on three widely used open-source datasets show that DBA-Net yields overall state-of-the-art performance; on the CUHK03 dataset in particular, its mean average precision (mAP) reaches 83.2%.
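The exact CPSA block is not spelled out in the abstract, so the following is only a generic sketch of stacking channel and spatial attention on a feature map; all shapes, weights, and pooling choices are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W); w1: (C//r, C); w2: (C, C//r) squeeze-excitation style weights."""
    squeeze = feat.mean(axis=(1, 2))                    # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))
    return feat * excite[:, None, None]

def spatial_attention(feat):
    """Weight each location by a sigmoid of pooled channel statistics."""
    pooled = np.stack([feat.mean(axis=0), feat.max(axis=0)])   # (2, H, W)
    att = sigmoid(pooled.mean(axis=0))                  # stand-in for a small conv
    return feat * att[None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat = rng.normal(size=(64, 24, 12))
    w1 = rng.normal(size=(8, 64)) * 0.1
    w2 = rng.normal(size=(64, 8)) * 0.1
    out = spatial_attention(channel_attention(feat, w1, w2))
    print(out.shape)
```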


Subjects
Image Processing, Computer-Assisted; Pedestrians; Algorithms; Humans; Research; Semantics
10.
Sensors (Basel) ; 21(17)2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34502738

ABSTRACT

In computer vision, object detection consists of automatically finding objects in images and reporting their positions. The most common fields of application are safety systems (pedestrian detection, behavior identification) and control systems. Another important application is head/person detection, which is fundamental to road safety, rescue, surveillance, and similar tasks. In this study, we developed a new approach based on two parallel DeepLabv3+ networks to improve the performance of a person detection system. To implement our semantic segmentation model, we established a working methodology in which two types of ground truth are extracted from the bounding boxes provided by the original annotations. The approach was evaluated on our two private datasets as well as on a public dataset. To demonstrate the performance of the proposed system, a comparative analysis was carried out against two state-of-the-art deep learning semantic segmentation models: SegNet and U-Net. By achieving 99.14% global accuracy, the results demonstrate that the developed strategy is an efficient way to build a deep neural network model for semantic segmentation. This strategy can be used not only for the detection of the human head but also in several other semantic segmentation applications.
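A minimal sketch of one plausible way to derive segmentation ground truth from bounding-box annotations, as the abstract describes; the authors' actual extraction rules are not given here, so the simple rasterization below is an assumption.

```python
import numpy as np

def boxes_to_mask(shape, boxes):
    """Rasterize person/head boxes given as (x0, y0, x1, y1) into a binary mask."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

if __name__ == "__main__":
    gt = boxes_to_mask((240, 320), [(30, 40, 90, 200), (150, 60, 200, 220)])
    print(gt.sum(), "foreground pixels")
```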


Subjects
Pedestrians; Semantics; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer
11.
Sensors (Basel) ; 21(17)2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34502782

ABSTRACT

Fatigue failure is a significant problem for the structural safety of engineering structures. Human inspection, the most widely used approach for fatigue failure detection, is time-consuming and subjective, and traditional vision-based methods struggle to distinguish cracks from noise and to detect crack tips. In this paper, a new framework based on convolutional neural networks (CNN) and digital image processing is proposed to monitor crack propagation length. Convolutional neural networks are first applied to robustly detect the location of cracks despite interference from scratches and edges. A crack tip-detection algorithm is then established to accurately locate the crack tip and is used to calculate the crack length. The effectiveness and precision of the proposed approach were validated through fatigue experiments. The results demonstrate that the proposed approach can robustly identify a fatigue crack surrounded by crack-like noise, locate the crack tip accurately, and measure the crack length with submillimeter accuracy.
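A short sketch of the length-measurement step once a crack mask is available: skeletonize the mask, take an extreme skeleton pixel as the tip, and convert pixel count to millimeters with an assumed calibration constant. This is not the paper's tip-detection algorithm, only the general idea.

```python
import numpy as np
from skimage.morphology import skeletonize

def crack_length_mm(crack_mask, mm_per_pixel=0.05):
    """Measure crack length along the skeleton; mm_per_pixel is an assumed calibration."""
    skel = skeletonize(crack_mask.astype(bool))
    ys, xs = np.nonzero(skel)
    if len(xs) == 0:
        return 0.0, None
    tip = (ys[np.argmax(xs)], xs.max())     # rightmost skeleton pixel taken as the tip
    return len(xs) * mm_per_pixel, tip      # skeleton pixel count as path length

if __name__ == "__main__":
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[32, 5:50] = 1                       # synthetic horizontal crack
    length, tip = crack_length_mm(mask)
    print(length, tip)
```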


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Humans
12.
Sensors (Basel) ; 21(17)2021 Sep 02.
Article in English | MEDLINE | ID: mdl-34502809

ABSTRACT

Face and person detection are important tasks in computer vision, as they are the first component of many recognition systems, such as face recognition, facial expression analysis, body pose estimation, face attribute detection, and human action recognition. Their detection rate and runtime are therefore crucial for the performance of the overall system. In this paper, we combine face and person detection in one framework with the goal of reaching a detection performance that is competitive with state-of-the-art lightweight object-specific networks while maintaining real-time processing speed for both detection tasks together. To combine face and person detection in one network, we apply multi-task learning. The difficulty lies in the fact that no datasets are available that contain both face and person annotations. Since manual annotation is very time-consuming and automatic generation of ground truths yields annotations of poor quality, we solve this issue algorithmically with a special training procedure and network architecture that do not require creating new labels. Our newly developed method, called Simultaneous Face and Person Detection (SFPD), detects persons and faces at 40 frames per second. Because of this good trade-off between detection performance and inference time, SFPD is a useful and valuable real-time framework for a multitude of real-world applications, such as human-robot interaction.
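A sketch of the multi-task training trick implied above: each sample carries only face labels or only person labels, so the loss of the missing task is masked out per sample. The shapes and the combination rule are placeholders, not the SFPD implementation.

```python
import numpy as np

def masked_multitask_loss(face_loss, person_loss, has_face, has_person):
    """Average each task's per-sample loss only over samples that have its labels."""
    f = (face_loss * has_face).sum() / max(has_face.sum(), 1)
    p = (person_loss * has_person).sum() / max(has_person.sum(), 1)
    return f + p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face_loss, person_loss = rng.random(8), rng.random(8)
    has_face = np.array([1, 1, 1, 0, 0, 0, 0, 1])   # samples from a face-annotated dataset
    has_person = 1 - has_face                        # samples from a person-annotated dataset
    print(masked_multitask_loss(face_loss, person_loss, has_face, has_person))
```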


Subjects
Facial Recognition; Robotics; Facial Expression; Humans; Image Processing, Computer-Assisted
13.
Sensors (Basel) ; 21(17)2021 Sep 06.
Article in English | MEDLINE | ID: mdl-34502863

ABSTRACT

Biometrics has been shown to be an effective solution to the identity recognition problem, and iris recognition and face recognition, among others, are accurate biometric modalities. Higher resolution inside the crucial region reveals details of the physiological characteristics, providing discriminative information that enables extremely high recognition rates. Owing to the growing demand for IoT devices in various applications, image sensors are increasingly integrated into IoT devices to decrease cost, and low-cost image sensors may be preferable to high-cost ones. However, low-cost image sensors may not satisfy the minimum resolution requirement, which inevitably decreases recognition accuracy. Therefore, how to maintain high accuracy for biometric systems without using expensive high-cost image sensors in mobile sensing networks becomes an interesting and important issue. In this paper, we propose MA-SRGAN, a single image super-resolution (SISR) algorithm based on a mask-attention mechanism within a Generative Adversarial Network (GAN). We modify a recent state-of-the-art GAN-based SR model (nESRGAN+) by adding an extra discriminator branch with an additional loss term that forces the GAN to pay more attention to the region of interest (ROI). Experiments were performed on the CASIA-Thousand-v4 dataset and the Celeb Attribute dataset. The results show that the proposed method successfully learns the details of features inside the crucial region, improving recognition accuracy after image super-resolution (SR).
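A sketch of an ROI-weighted reconstruction penalty of the kind the extra discriminator term is meant to encourage, with errors inside the mask weighted more heavily; it is purely illustrative and not the actual nESRGAN+ modification.

```python
import numpy as np

def roi_weighted_l1(sr, hr, roi_mask, roi_weight=4.0):
    """Mean absolute error with pixels inside the ROI mask weighted by roi_weight."""
    weights = 1.0 + (roi_weight - 1.0) * roi_mask
    return np.mean(weights * np.abs(sr - hr))

if __name__ == "__main__":
    hr = np.random.rand(64, 64)
    sr = hr + 0.05 * np.random.randn(64, 64)
    mask = np.zeros((64, 64)); mask[20:44, 20:44] = 1.0   # assumed iris/face ROI
    print(roi_weighted_l1(sr, hr, mask))
```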


Subjects
Algorithms; Image Processing, Computer-Assisted; Iris; Research Design
14.
Comput Intell Neurosci ; 2021: 7552185, 2021.
Article in English | MEDLINE | ID: mdl-34504522

ABSTRACT

For the segmentation of stroke lesions, the attention U-Net model based on the self-attention mechanism can suppress irrelevant regions in an input image while highlighting salient features useful for the task. However, when the lesion is small and its contour is blurred, attention U-Net may generate incorrect attention coefficient maps, leading to incorrect segmentation results. To cope with this issue, we propose a dual-path attention compensation U-Net (DPAC-UNet), which consists of a primary path network and an auxiliary path network; both are attention U-Net models and identical in structure. The primary path network is the core network that performs accurate lesion segmentation and outputs the final segmentation result. The auxiliary path network generates auxiliary attention compensation coefficients and sends them to the primary path network to compensate for and correct possible attention coefficient errors. To realize this compensation mechanism, we propose a weighted binary cross-entropy Tversky (WBCE-Tversky) loss to train the primary path network for accurate segmentation, and another compound loss function, called tolerance loss, to train the auxiliary path network to generate auxiliary compensation attention coefficient maps with an expanded coverage area. We conducted segmentation experiments on the 239 MRI scans of the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset to evaluate the performance and effectiveness of our method. The experimental results show that the DSC score of the proposed DPAC-UNet is 6% higher than that of the single-path attention U-Net, and also higher than those of existing segmentation methods in the related literature. Our method therefore demonstrates strong capability for stroke lesion segmentation.
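A sketch of a weighted binary cross-entropy plus Tversky loss of the kind named above (WBCE-Tversky); the alpha/beta values and the positive-class weight are illustrative choices, not the paper's settings.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss: penalizes false positives by alpha and false negatives by beta."""
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def wbce_tversky(pred, target, pos_weight=5.0, eps=1e-7):
    """Weighted BCE (up-weighting lesion pixels) plus the Tversky term."""
    bce = -(pos_weight * target * np.log(pred + eps)
            + (1 - target) * np.log(1 - pred + eps)).mean()
    return bce + tversky_loss(pred, target)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = (rng.random((128, 128)) > 0.95).astype(float)   # small lesion-like mask
    pred = np.clip(target * 0.8 + 0.05 * rng.random((128, 128)), 0, 1)
    print(wbce_tversky(pred, target))
```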


Subjects
Image Processing, Computer-Assisted; Stroke; Humans; Magnetic Resonance Imaging; Stroke/diagnostic imaging
15.
Comput Intell Neurosci ; 2021: 5970957, 2021.
Article in English | MEDLINE | ID: mdl-34527041

ABSTRACT

Many methods and algorithms can be used for the classification of aerobics images. However, existing methods cannot effectively remove the noise in aerobics images, their classification time is long, and they suffer from poor denoising and low classification efficiency. Therefore, an aerobics image classification algorithm based on a modal symmetry algorithm is proposed. Nonlocal mean filtering based on structural features is used to denoise the aerobics image, and an image pyramid is introduced to decompose it. Based on the denoising and decomposition results, the aerobics image is enhanced with the logarithmic image processing (LIP) model and gradient sharpening. Finally, the enhanced aerobics image is classified by the modal symmetry algorithm. Experimental results show that the proposed method has a good denoising effect and high classification efficiency, demonstrating the effectiveness and strong practical performance of the algorithm.
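A short sketch of the preprocessing chain described above (nonlocal-means denoising followed by a logarithmic, LIP-style enhancement with gradient sharpening); parameter values are assumptions, and the modal symmetry classifier itself is not shown.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def preprocess(img):
    """Denoise, boost dynamic range logarithmically, then sharpen with the gradient magnitude."""
    den = denoise_nl_means(img, patch_size=5, patch_distance=6, h=0.08)
    log_enh = np.log1p(den) / np.log(2.0)                 # logarithmic (LIP-style) enhancement
    gy, gx = np.gradient(log_enh)
    return np.clip(log_enh + 0.5 * np.hypot(gx, gy), 0, 1)  # simple gradient sharpening

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    print(preprocess(img).shape)
```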


Subjects
Algorithms; Image Processing, Computer-Assisted; Image Enhancement; Signal-To-Noise Ratio
16.
Anal Chim Acta ; 1180: 338852, 2021 Oct 02.
Article in English | MEDLINE | ID: mdl-34538329

ABSTRACT

Controlling blending processes of solid material using advanced real-time sensing technologies is crucial to guarantee the quality attributes of manufactured products in diverse industries. Process analytical technology (PAT) tools based on chemical imaging systems are useful for assessing heterogeneity during mixing processes. Recently, a powerful procedure for heterogeneity assessment, based on combining off-line acquired chemical images with variographic analysis, has been proposed to provide specific indices of global and distributional heterogeneity. This work proposes a novel PAT tool combining in situ chemical imaging and variogram-derived quantitative heterogeneity indices for the real-time monitoring of blending processes. The proposed method, called sliding window variographic image analysis (SWiVIA), derives heterogeneity indices in real time from a sliding image window that moves continuously until the full blending time interval is covered. SWiVIA is thoroughly assessed with attention to the effect of factors relevant to continuous blending monitoring and heterogeneity description, such as the scale of scrutiny needed to define heterogeneity or the blending period used to set the sliding image window. SWiVIA is tested on blending runs of pharmaceutical and food products monitored with an in situ near-infrared chemical imaging system. The results help to detect abnormal mixing phenomena and can form the basis for future blending process control indicators. SWiVIA is suited to studying the blending behavior of the bulk product as well as compound-specific blending evolution.
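A minimal sketch of the variographic idea behind SWiVIA: compute an empirical variogram of the concentration map inside one image window and summarize it with simple heterogeneity indices. The lag range and the two indices below are illustrative stand-ins for the method's actual global and distributional indices.

```python
import numpy as np

def empirical_variogram(values, max_lag=20):
    """1-D empirical variogram gamma(h) of a sequence of pixel concentrations."""
    gammas = []
    for h in range(1, max_lag + 1):
        d = values[h:] - values[:-h]
        gammas.append(0.5 * np.mean(d ** 2))
    return np.array(gammas)

def heterogeneity_indices(conc_image):
    """Collapse one chemical-image frame to a profile and derive two simple indices."""
    profile = conc_image.mean(axis=0)
    g = empirical_variogram(profile)
    return {"sill_like": g[-5:].mean(),     # long-range (distributional) heterogeneity proxy
            "nugget_like": g[0]}            # short-range heterogeneity proxy

if __name__ == "__main__":
    frame = np.random.rand(50, 200)          # toy concentration map for one time window
    print(heterogeneity_indices(frame))
```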


Subjects
Hyperspectral Imaging; Technology, Pharmaceutical; Image Processing, Computer-Assisted
17.
Comput Intell Neurosci ; 2021: 6168562, 2021.
Article in English | MEDLINE | ID: mdl-34539771

ABSTRACT

With the gradual improvement of living standards, the production and consumption of all kinds of food are increasing. Disease rates have risen compared with the past, which leads to a growing volume of medical image processing, and traditional technology cannot meet most of the needs of medicine. At present, a convolutional neural network (CNN) algorithm using a chaotic recursive diagonal model has great advantages in medical image processing and has become an indispensable tool in most hospitals. This paper briefly reviews the use of medical science and technology in recent years, studies a hybrid algorithm that combines a CNN with the chaotic recursive diagonal model, and analyses the application of this technology in medical image processing. The CNN algorithm is optimized by using the chaotic recursive diagonal model. The results show that the chaotic recursive diagonal model can improve the structure of a traditional neural network and increase the efficiency and accuracy of the original CNN algorithm. Medical image processing applications are then studied and compared using the original CNN algorithm and the optimized CNN algorithm. The experimental results show that the CNN algorithm optimized by the chaotic recursive diagonal model can support automatic medical image processing and patient condition analysis.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Humans
18.
Nat Commun ; 12(1): 5181, 2021 08 30.
Article in English | MEDLINE | ID: mdl-34462435

ABSTRACT

Functional magnetic resonance imaging (fMRI) has become an indispensable tool for investigating the human brain. However, the inherently poor signal-to-noise-ratio (SNR) of the fMRI measurement represents a major barrier to expanding its spatiotemporal scale as well as its utility and ultimate impact. Here we introduce a denoising technique that selectively suppresses the thermal noise contribution to the fMRI experiment. Using 7-Tesla, high-resolution human brain data, we demonstrate improvements in key metrics of functional mapping (temporal-SNR, the detection and reproducibility of stimulus-induced signal changes, and accuracy of functional maps) while leaving the amplitude of the stimulus-induced signal changes, spatial precision, and functional point-spread-function unaltered. We demonstrate that the method enables the acquisition of ultrahigh resolution (0.5 mm isotropic) functional maps but is also equally beneficial for a large variety of fMRI applications, including supra-millimeter resolution 3- and 7-Tesla data obtained over different cortical regions with different stimulation/task paradigms and acquisition strategies.
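The abstract does not describe the denoising algorithm itself, so the following is only a generic illustration of suppressing thermal noise by truncating low-variance principal components of a voxel-by-time matrix; it is not the paper's specific method.

```python
import numpy as np

def pca_denoise(timeseries, n_keep=20):
    """timeseries: (n_voxels, n_timepoints). Keep only the n_keep strongest components."""
    mean = timeseries.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(timeseries - mean, full_matrices=False)
    s[n_keep:] = 0.0                      # discard components dominated by thermal noise
    return U @ np.diag(s) @ Vt + mean

if __name__ == "__main__":
    data = np.random.randn(500, 300)      # toy voxels x timepoints matrix
    print(pca_denoise(data).shape)
```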


Subjects
Brain Mapping/methods; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Adult; Female; Humans; Image Processing, Computer-Assisted; Male
19.
Stomatologiia (Mosk) ; 100(4): 63-67, 2021.
Article in Russian | MEDLINE | ID: mdl-34357730

ABSTRACT

AIM OF THE STUDY: To investigate the efficiency of interpreting teleradiological studies using algorithms based on convolutional neural networks: a simple convolutional architecture and an extended U-Net architecture. MATERIALS AND METHODS: For the experiment, a dataset was prepared by three orthodontists with over 10 years of clinical experience. Each orthodontist processed 100 lateral head X-ray images according to 27 parameters, yielding 2700 measurements. The coordinates of the control points found by the orthodontists in the images were compared with each other, and a conclusion was drawn about the consistency of the experts' data. RESULTS: The results of the simple convolutional neural network (CNN) were unsatisfactory for 17 (62.96%) landmarks and satisfactory for 10 (37.04%). The assessment of the orthodontists resulted in unsatisfactory evaluations for 6 (22.22%) coordinates, satisfactory for 8 (29.63%), good for 8 (29.63%), and excellent for 5 (18.52%). The network with the U-Net architecture showed satisfactory results in 9 (33.3%) cases, good in 16 (59.3%), and excellent in 2 (7.4%), with no unsatisfactory results. CONCLUSION: The neural network with the U-Net architecture is more effective than a simple fully convolutional neural network, and its results in determining anatomical reference points on two-dimensional head images are comparable with the data obtained by medical specialists.


Subjects
Neural Networks, Computer; Skull; Cephalometry; Humans; Image Processing, Computer-Assisted; Radiography; Skull/diagnostic imaging; X-Rays
20.
Comput Intell Neurosci ; 2021: 6370526, 2021.
Article in English | MEDLINE | ID: mdl-34367271

ABSTRACT

Accurate segmentation of the tongue body is an important prerequisite for computer-aided tongue diagnosis. In general, tongue size and shape vary greatly, the color of the tongue is similar to that of the surrounding tissue, the tongue edge is fuzzy, and parts of the tongue are obscured by pathological details, so existing segmentation methods are often not ideal for tongue image processing. To solve these problems, this paper proposes a symmetry- and edge-constrained level set model that exploits the geometric features of the tongue for tongue segmentation. Based on the symmetric geometry of the tongue, a novel level set initialization method is proposed to improve the accuracy of the subsequent model evolution, and symmetry detection constraints are added to the evolution model to increase the driving force of the energy function. Combined with a recent convolutional neural network, an edge probability map of the tongue image is obtained to guide the evolution of the edge stop function, achieving accurate and automatic tongue segmentation. The experimental results show that the method does not depend on the capture device or environment of the input tongue image and is suitable for tongue segmentation under most realistic conditions. Qualitative and quantitative comparisons show that the proposed method is superior to other methods in terms of robustness and accuracy.
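A sketch of the edge-stop idea used in edge-constrained level sets: a stopping function that becomes small where the CNN edge probability or the image gradient is large, so the evolving contour slows at tongue boundaries. The weighting and functional form below are illustrative assumptions.

```python
import numpy as np

def edge_stop_function(image, edge_prob, w=0.5):
    """Combine gradient-based and CNN-probability-based stopping terms into one map."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    g_gradient = 1.0 / (1.0 + grad_mag ** 2)   # small where the image gradient is strong
    g_edge = 1.0 - edge_prob                    # small where the CNN predicts an edge
    return w * g_gradient + (1.0 - w) * g_edge

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    prob = np.random.rand(64, 64)               # stand-in for a CNN edge probability map
    print(edge_stop_function(img, prob).min())
```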


Subjects
Algorithms; Image Processing, Computer-Assisted; Diagnosis, Computer-Assisted; Humans; Neural Networks, Computer; Tongue