Results 1 - 20 of 36,077
1.
Am J Orthod Dentofacial Orthop ; 156(3): 420-428, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31474272

ABSTRACT

INTRODUCTION: This study aimed to test the accuracy of the 3-dimensional (3D) digital dental models generated by the Dental Monitoring (DM) smartphone application in both photograph and video modes over successive DM examinations in comparison with 3D digital dental models generated by the iTero Element intraoral scanner. METHODS: Ten typodonts with setups of class I malocclusion and comparable severity of anterior crowding were used in the study. iTero Element scans along with DM examination in photograph and video modes were performed before tooth movement and after each set of 10 Invisalign aligners for each typodont. Stereolithography (STL) files generated from the DM examinations in photograph and video modes were superimposed with the STL files from the iTero scans using GOM Inspect software to determine the accuracy of both photograph and video modes of DM technology. RESULTS: No clinically significant differences, according to the American Board of Orthodontics-determined standards, were found. Mean global deviations for the maxillary arch ranged from 0.00149 to 0.02756 mm in photograph mode and from 0.0148 to 0.0256 mm in video mode. Mean global deviations for the mandibular arch ranged from 0.0164 to 0.0275 mm in photograph mode and from 0.0150 to 0.0264 mm in video mode. Statistically significant differences were found between the 3D models generated by the iTero and the DM application in photograph and video modes over successive DM examinations. CONCLUSIONS: 3D digital dental models generated by the DM smartphone application in photograph and video modes are accurate enough to be used for clinical applications.


Subjects
Data Accuracy , Dental Impression Technique , Dental Models , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Computer-Aided Design , Dental Arch , Humans , Malocclusion/diagnostic imaging , Orthodontic Appliances/standards , Removable Orthodontic Appliances , Orthodontics/standards , Dental Photography , Smartphone , Software , Stereolithography , Dental Technology/methods , Tooth Movement Techniques , Video Recording
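The superimposition step above reports accuracy as a mean global deviation between 3D models. As a toy illustration of such a deviation metric (not the GOM Inspect algorithm; the point clouds and the 0.02 mm offset below are invented), one can average nearest-neighbour distances from one point cloud to another:

```python
import numpy as np

def mean_global_deviation(test_pts, ref_pts):
    """Mean nearest-neighbour distance from each test point to the
    reference cloud -- a simplified stand-in for a surface-deviation
    measurement between two superimposed STL models."""
    # Pairwise distance matrix of shape (n_test, n_ref).
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy example: a flat reference grid and a copy shifted 0.02 mm in z,
# on the order of the deviations reported in the study.
ref = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
test = ref + np.array([0.0, 0.0, 0.02])
print(round(mean_global_deviation(test, ref), 4))  # 0.02
```

Real mesh-inspection software works on triangulated surfaces rather than raw points, but the reported millimetre figures are averages of exactly this kind of per-point deviation.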
2.
BMC Ophthalmol ; 19(1): 184, 2019 Aug 14.
Article in English | MEDLINE | ID: mdl-31412800

ABSTRACT

BACKGROUND: With the prevalence of diabetes mellitus (DM) increasing annually, the human grading of retinal images to evaluate diabetic retinopathy (DR) has posed a substantial burden worldwide. SmartEye is a recently developed fundus image processing and analysis system with a lesion quantification function for DR screening. It is sensitive to the lesion area and can automatically identify the lesion position and size. We report the DR grading results of SmartEye versus ophthalmologists in analyzing images captured with non-mydriatic fundus cameras in community healthcare centers, as well as DR lesion quantitative analysis results at different disease stages. METHODS: This is a cross-sectional study. All fundus images were collected from the Shanghai Diabetic Eye Study in Diabetics (SDES) program from Apr 2016 to Aug 2017. In total, 19,904 fundus images were acquired from 6013 diabetic patients. The grading results of the ophthalmologists and SmartEye were compared. Lesion quantification of several images at different DR stages is also presented. RESULTS: The sensitivities for diagnosing no DR, mild NPDR (non-proliferative diabetic retinopathy), moderate NPDR, severe NPDR, and PDR (proliferative diabetic retinopathy) were 86.19%, 83.18%, 88.64%, 89.59%, and 85.02%, respectively. The specificities were 63.07%, 70.96%, 64.16%, 70.38%, and 74.79%, respectively. The AUCs were: PDR, 0.80 (0.79, 0.81); severe NPDR, 0.80 (0.79, 0.80); moderate NPDR, 0.77 (0.76, 0.77); and mild NPDR, 0.78 (0.77, 0.79). Lesion quantification showed that the total hemorrhage area, maximum hemorrhage area, total exudation area, and maximum exudation area increase with DR severity. CONCLUSIONS: SmartEye has high diagnostic accuracy in a DR screening program using non-mydriatic fundus cameras. SmartEye quantitative analysis may be an innovative and promising method for DR diagnosis and grading.


Subjects
Diabetic Retinopathy/diagnosis , Fluorescein Angiography/methods , Computer-Assisted Image Processing/methods , Retina/diagnostic imaging , Vision Screening/methods , Adult , Aged , Aged, 80 and over , Cross-Sectional Studies , Female , Ocular Fundus , Humans , Male , Middle Aged , Reproducibility of Results , Severity of Illness Index , Young Adult
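The per-grade sensitivity and specificity figures above are derived by comparing automated grades against the reference standard. A minimal sketch of that computation from a multi-class confusion matrix (the 2-class counts below are invented, not SDES data):

```python
import numpy as np

def sens_spec(conf):
    """Per-class sensitivity and specificity from a confusion matrix
    (rows = true grade, columns = predicted grade), the kind of
    comparison made between automated and human DR grades."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                     # correctly predicted per class
    fn = conf.sum(axis=1) - tp             # missed cases of each class
    fp = conf.sum(axis=0) - tp             # false alarms for each class
    tn = conf.sum() - tp - fn - fp         # everything else
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2-grade example (no DR vs any DR); counts are invented.
m = [[80, 20],
     [10, 90]]
sens, spec = sens_spec(m)
print(sens)  # [0.8 0.9]
print(spec)  # [0.9 0.8]
```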
3.
Medicine (Baltimore) ; 98(33): e16606, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31415352

ABSTRACT

OBJECTIVE: The aim of this study was to determine optimal window settings for conventional polyenergetic and virtual monoenergetic images derived from computed tomography pulmonary angiography (CTPA) examinations on a novel dual-layer spectral detector computed tomography system (DLCT). METHODS: Monoenergetic (40 keV) and polyenergetic images of 50 CTPA examinations were calculated, and the best individual window width and level (W/L) values were manually assessed. Optimized values were then obtained based on regression analysis. Diameters of standardized pulmonary artery segments and subjective image quality parameters were evaluated and compared. RESULTS: Attenuation and contrast-to-noise values were higher in monoenergetic than in polyenergetic images (P ≤ .001). The averaged best individual W/L values for polyenergetic and monoenergetic images were 1020/170 and 2070/480 HU, respectively. All adjusted W/L settings varied significantly compared with standard settings (700/100 HU) and obtained higher subjective image quality scores. A systematic overestimation of artery diameters for standard window settings in monoenergetic images was observed. CONCLUSIONS: Appropriate W/L settings are required to assess polyenergetic and monoenergetic CTPA images of a novel DLCT. W/L settings of 1020/170 HU and 2070/480 HU were found to be the best averaged values for polyenergetic and monoenergetic CTPA images, respectively.


Subjects
Computed Tomography Angiography/methods , Computer-Assisted Image Processing/methods , Lung/diagnostic imaging , Computer-Assisted Radiographic Image Interpretation/methods , Dual-Photon Emission Radiographic Imaging/methods , Humans , Pulmonary Medicine/methods , Signal-to-Noise Ratio
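A window width/level setting maps CT attenuation values onto display grey levels. A small sketch of that mapping, shown with the study's optimized polyenergetic setting of 1020/170 HU (the input HU values below are arbitrary):

```python
import numpy as np

def apply_window(hu, width, level):
    """Map CT attenuation values (HU) to 8-bit display grey levels for
    a given window width/level (W/L): values below the window floor
    render black, values above the ceiling render white."""
    lo, hi = level - width / 2, level + width / 2
    return np.clip((hu - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

# Below, at the centre of, and at the top of the 1020/170 HU window.
hu = np.array([-500, 170, 680])
print(apply_window(hu, 1020, 170))  # [  0 127 255]
```

Widening the window (larger W) compresses contrast; raising the level (larger L) darkens the image, which is why the monoenergetic images needed the much wider 2070/480 HU setting.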
4.
Am J Orthod Dentofacial Orthop ; 156(2): 275-282, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31375238

ABSTRACT

This case report describes the interdisciplinary treatment of an ectopic, horizontally positioned maxillary right central incisor with severe root dilaceration. The root was distally angulated and entrapped by the root of the maxillary right lateral incisor. The initial force system was aimed at occlusal displacement and was applied to the crown. During the second phase, a button was cemented onto the apex of the impacted tooth. A force from the apex to a temporary anchorage device in the palate moved the root toward the midline. Finally, root canal treatment and an apicectomy were performed, and the central incisor could be moved to its ideal position. The treatment generated a normal height of the alveolar bone and an ideal occlusion with a healthy periodontium.


Subjects
Incisor/surgery , Tooth Movement Techniques/methods , Tooth Root/surgery , Impacted Tooth/surgery , Impacted Tooth/therapy , Biomechanical Phenomena , Child , Cone-Beam Computed Tomography/methods , Dental Pulp Cavity , Female , Humans , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging , Angle Class I Malocclusion/therapy , Maxilla/anatomy & histology , Maxilla/diagnostic imaging , Maxilla/surgery , Fixed Orthodontic Appliances , Orthodontic Extrusion/methods , Patient Care Planning , Root Canal Therapy , Tooth Crown , Impacted Tooth/diagnostic imaging , Treatment Outcome
5.
Niger J Clin Pract ; 22(8): 1091-1098, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31417052

ABSTRACT

Aims: Our aim was to compare three different voxel sizes of CBCT images for determining residual filling material volume in root canals, with micro CT as the reference. Material and Methods: Forty-two root canals of 14 extracted human maxillary molar teeth were retreated using ProFile® instruments. Images were obtained after retreatment using ProMax 3D Max CBCT at 3 different voxel sizes: (1) high resolution (0.1 mm); (2) high definition (0.15 mm); and (3) normal resolution (0.2 mm). Two observers measured the volumes of residual filling materials in the exported CBCT images by means of 3D Doctor software. Micro CT measurements served as the gold standard. The Mann-Whitney U test and Wilcoxon test were used for the comparison of CBCT and micro CT measurements. Statistical significance was set at P < 0.05. Results: No statistically significant differences were found between the two observers for all measurements (P > 0.05). There were no significant differences among the CBCT voxel sizes used (0.1 mm, 0.15 mm, and 0.2 mm) (P > 0.05). The Spearman correlation coefficients showed that CBCT measurements at each voxel size were significantly and highly correlated with micro CT measurements for each observer (P < 0.05). Furthermore, no significant differences were found between the measurements obtained by the two observers with respect to root canal location (P > 0.05). Conclusion: CBCT images may provide useful information in the volumetric assessment of the amount of residual filling material in root canals for retreatment procedures.


Subjects
Dental Pulp Cavity/diagnostic imaging , Molar/diagnostic imaging , Molar/surgery , Retreatment , Root Canal Filling Materials/chemistry , Root Canal Obturation/methods , Root Canal Therapy/methods , Spiral Cone-Beam Computed Tomography/methods , Dental Materials , Humans , Computer-Assisted Image Processing/methods , Root Canal Filling Materials/therapeutic use , Root Canal Preparation/methods , X-Ray Microtomography/methods
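The volumetric measurements being compared here ultimately reduce to voxel counting: the number of voxels segmented as filling material times the volume of one voxel. A trivial sketch for the study's isotropic voxel sizes (the voxel counts below are invented):

```python
def residual_volume_mm3(n_voxels, voxel_mm):
    """Volume of segmented residual filling material: number of
    labelled voxels times the volume of one isotropic voxel."""
    return n_voxels * voxel_mm ** 3

# The same ~8 mm^3 of material spans ~8x more voxels at 0.1 mm
# resolution than at 0.2 mm, which is why voxel size could plausibly
# affect the measurement (here it did not, per the study).
print(round(residual_volume_mm3(8000, 0.1), 6))  # 8.0
print(round(residual_volume_mm3(1000, 0.2), 6))  # 8.0
```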
6.
Am J Orthod Dentofacial Orthop ; 156(1): 44-52, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31256835

ABSTRACT

INTRODUCTION: The objective of this study was to investigate the location, orientation, and root development of maxillary lateral incisors in patients with palatally impacted central incisors. Comparison was made between the lateral incisor on the affected side and that on the normally erupted side. METHODS: Cone-beam computed tomographic images from 20 patients (10 boys, 10 girls; mean age 9.01 ± 1.52 years) with unilateral palatally impacted maxillary central incisors were imported into Dolphin Imaging software 11.8 for 3-dimensional reconstruction and reorientation. Software measurement tools were used to measure the root length, crown distance, angle to palatal plane, distance to midline, and angle to midsagittal plane of the maxillary lateral incisors on both the impacted and unaffected sides. RESULTS: The Wilcoxon signed rank test indicated that lateral incisors on the impacted side were more proclined, at a mean angle difference of 29.47° in the sagittal plane (P < 0.001). The mean root length of the lateral incisors was 1.21 mm shorter (P < 0.05) on the affected side than on the normal side, and the lateral incisor crowns on the impacted side were located on average 4.57 mm closer to the palatal plane than on the normally erupted side (P < 0.001). The long axis of the lateral incisors on the affected side had a greater angulation to the midsagittal plane than on the unaffected side, with a mean difference of 30.27° (P < 0.001). CONCLUSIONS: Maxillary lateral incisors adjacent to palatally impacted maxillary central incisors had abnormal root development and demonstrated angulation and position changes compared with those adjacent to normally erupted central incisors.


Subjects
Cone-Beam Computed Tomography/methods , Incisor/abnormalities , Incisor/anatomy & histology , Maxilla/anatomy & histology , Palate/anatomy & histology , Impacted Tooth/diagnostic imaging , Child , Female , Humans , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging , Incisor/diagnostic imaging , Male , Maxilla/diagnostic imaging , Palate/diagnostic imaging , Retrospective Studies , Tooth Crown/anatomy & histology , Tooth Eruption , Tooth Root/abnormalities , Tooth Root/anatomy & histology , Tooth Root/diagnostic imaging
7.
Am J Orthod Dentofacial Orthop ; 156(1): 53-60, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31256838

ABSTRACT

INTRODUCTION: Pharyngeal airway space (PAS) assessment has been used in the past for a better understanding of orthodontic and surgical outcomes; however, this analysis can be unreliable. Our objective was to evaluate possible changes in the PAS reading in the same patient across consecutive cone-beam computed tomography (CBCT) scans. METHODS: We evaluated a total of 27 patients' CBCT scans obtained at 2 time points with the use of a standardized acquisition protocol. The mean age at T0 was 31 years (range 17-62 years), and the follow-up records (T1) were taken after 4-6 months. Dolphin Imaging software was used to measure the volumes of the nasopharynx, oropharynx, and hypopharynx. We also evaluated the craniocervical position with the use of a lateral cephalogram. RESULTS: The variables exhibited high intraclass correlation coefficients (ICCs) when the same CBCT scan was measured twice (T0 and T0). However, the ICCs between the measurements performed on the first and second CBCT scans (T0 and T1) showed that the only variable with high reproducibility between the 2 scans was cranial base, with an ICC >0.97. Average differences of 682.1 mm3, 2255.3 mm3, and 517.4 mm3 were found for the nasopharynx, oropharynx, and hypopharynx, respectively. Regarding the cephalometric angles, average differences between the T0 and T1 scans were 0.6°, 2.7°, and 0.4° for OPT.CVT, OPT.SN, and cranial base, respectively. CONCLUSIONS: Different CBCT examinations with identical scanning and patient positioning protocols can result in different 3D PAS readings. A more careful interpretation of CBCT volumetric data is necessary to reach adequate conclusions about clinical outcomes.


Subjects
Cone-Beam Computed Tomography/methods , Cone-Beam Computed Tomography/standards , Patient Positioning/methods , Patient Positioning/standards , Pharynx/anatomy & histology , Pharynx/diagnostic imaging , Adolescent , Adult , Cephalometry/methods , Female , Follow-Up Studies , Humans , Hypopharynx/anatomy & histology , Hypopharynx/diagnostic imaging , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Male , Middle Aged , Nasopharynx/anatomy & histology , Nasopharynx/diagnostic imaging , Observer Variation , Oropharynx/anatomy & histology , Oropharynx/diagnostic imaging , Orthognathic Surgical Procedures , Reference Values , Reproducibility of Results , Software , Young Adult
8.
Environ Monit Assess ; 191(8): 481, 2019 Jul 04.
Article in English | MEDLINE | ID: mdl-31273539

ABSTRACT

This study presents a new fusion method, namely the supervised cross-fusion method, to improve the capability of fused thermal, radar, and optical images for classification. The proposed cross-fusion method is a combination of pixel-based and supervised feature-based fusion of thermal, radar, and optical data. The pixel-based fusion was applied to fuse optical data of Sentinel-2 and Landsat 8. According to the correlation coefficient (CR) and signal-to-noise ratio (SNR), among the pixel-based fusion methods used, wavelet fusion obtained the best results: considering spectral and spatial information preservation, the CR of the wavelet method is 0.97 and 0.96, respectively. The supervised feature-based fusion method fuses the best output of the pixel-based fusion level, land surface temperature (LST) data, and a Sentinel-1 radar image using a supervised approach: supervised feature selection and learning of the inputs based on the linear discriminant analysis and sparse regularization (LDASR) algorithm. In the present study, non-negative matrix factorization (NMF) was utilized for feature extraction. A comparison of the obtained results with a state-of-the-art fusion method indicated the higher classification accuracy of our proposed method: the rotation forest (RoF) classification results improved by 25% and the support vector machine (SVM) results by 31%. The proposed method cleanly classified and separated the four main classes of settlements, barren land, river, and river bank, and even the bridges over the river. Also, the number of pixels left unclassified by SVM is very low compared with other classification methods and can be neglected. The results showed that LST calculated from the thermal data had positive effects on improving the classification results. Compared with supervised cross-fusion without LST data, the proposed method improved SVM and RoF classification by 38% and 7%, respectively.


Subjects
Environmental Monitoring/methods , Computer-Assisted Image Processing/methods , Algorithms , Iran , Radar , Rivers , Support Vector Machine , Ambient Temperature
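The study scores its pixel-based fusion results with a correlation coefficient (CR) against the source bands. A toy sketch of that metric (the "band" values below are invented; a perfectly linear relation between source and fused product yields CR = 1):

```python
import numpy as np

def correlation_coefficient(a, b):
    """CR metric: Pearson correlation between a source band and a
    fused product, used as a score of spectral preservation."""
    return np.corrcoef(np.ravel(a), np.ravel(b))[0, 1]

# Toy stand-ins for a source optical band and a fused result.
band = np.array([[10.0, 20.0], [30.0, 40.0]])
fused = band * 1.1 + 0.5  # linearly related to the source
print(round(correlation_coefficient(band, fused), 6))  # 1.0
```

In practice the CR is computed per band over full scenes, and values like the reported 0.97/0.96 indicate the wavelet fusion distorted the source spectra very little.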
9.
Food Chem ; 298: 125096, 2019 Nov 15.
Article in English | MEDLINE | ID: mdl-31272051

ABSTRACT

The aim of this paper is to test different models for predicting furan content in a dough system, based on partial least squares regression using colour images. Starch dough systems were fried at five temperatures between 150 and 190 °C and for 5, 7, 9, 11, and 13 min. The furan content was quantified using gas chromatography/mass spectrometry, while the corresponding images were simultaneously obtained and processed in order to extract 2914 features. Good furan content predictions were obtained using the chromatic features of the computer vision images, with a prediction correlation coefficient Rp = 0.86. However, the best prediction correlation was obtained using the image textural features (Rp = 0.93), after the number of features was reduced to 10 by feature-selection algorithms. These results suggest that furan content in fried dough systems can be predicted from features of computer vision images.


Subjects
Bread , Food Analysis/methods , Food-Processing Industry/methods , Furans/analysis , Computer-Assisted Image Processing/methods , Algorithms , Color , Cooking , Food Quality , Gas Chromatography-Mass Spectrometry , Least-Squares Analysis , Starch , Triticum
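The prediction models here are built with partial least squares regression on image features. A minimal, hypothetical one-component PLS1 (NIPALS-style) sketch, not the paper's 2914-feature model; all data below are invented, with the response depending on a single feature direction so one component suffices:

```python
import numpy as np

def pls1_predict(X, y, X_new):
    """One-component PLS1 sketch: project the centred features onto
    the direction of maximal covariance with y, then regress y on
    that score and predict for new samples."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    w = Xc.T @ yc                      # covariance direction
    w /= np.linalg.norm(w)
    t = Xc @ w                         # scores of the training samples
    b = (t @ yc) / (t @ t)             # regression of y on the score
    return y.mean() + ((X_new - X.mean(axis=0)) @ w) * b

# Invented toy data: two collinear "image features" and a response.
X = np.array([[1.0, 3.0], [2.0, 6.0], [3.0, 9.0], [4.0, 12.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
X_new = np.array([[5.0, 15.0]])
print(round(float(pls1_predict(X, y, X_new)[0]), 6))  # 5.0
```

Note the collinear columns: ordinary least squares would be ill-conditioned here, while PLS handles them naturally, which is exactly why it suits thousands of correlated image features.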
10.
Environ Monit Assess ; 191(8): 491, 2019 Jul 12.
Article in English | MEDLINE | ID: mdl-31297617

ABSTRACT

Leaf segmentation is important in assisting ecologists to automatically detect symptoms of disease and other stressors affecting trees. This paper employs state-of-the-art techniques in image processing to introduce an accurate framework for segmenting leaves and diseased leaf spots from images. The proposed framework integrates an appearance model that visually represents the current input image with the color prior information generated from RGB color images previously saved in our database. Our framework consists of four main steps: (1) enhancing the accuracy of the segmentation in minimal time by using contrast changes to automatically identify the region of interest (ROI) of the entire leaf, where the pixel-wise intensity relations are described by an electric field energy model; (2) modeling the visual appearance of the input image using a linear combination of discrete Gaussians (LCDG) to predict the marginal probability distributions of the three main classes of the grayscale ROI; (3) calculating the pixel-wise probabilities of these three classes for the color ROI based on the color prior information of the manually segmented database images, where the current and prior pixel-wise probabilities are used to find the initial labels; and (4) refining the labels with the generalized Gauss-Markov random field model (GGMRF), which maintains continuity. The proposed segmentation approach was applied to the leaves of mangrove trees in Abu Dhabi in the United Arab Emirates. Experimental validation showed high accuracy, with a Dice similarity coefficient of 90% for distinguishing leaf spots from healthy leaf area.


Subjects
Environmental Monitoring/methods , Computer-Assisted Image Processing/methods , Plant Diseases , Plant Leaves/chemistry , Trees/chemistry , Algorithms , Color , Humans , Normal Distribution , Probability , Sensitivity and Specificity , United Arab Emirates
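Segmentation accuracy is validated above with the Dice similarity coefficient. Its computation on two binary masks is short (the masks below are invented):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between a segmentation mask and a
    reference mask: 2*|A ∩ B| / (|A| + |B|), from 0 (disjoint)
    to 1 (identical)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

# Invented 2x3 masks: 2 overlapping pixels, 3 positive pixels in each.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3)
```

A Dice value of 0.90, as reported, means the automated leaf-spot mask and the manual reference overlap in the vast majority of labelled pixels.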
11.
Nat Commun ; 10(1): 2736, 2019 06 21.
Article in English | MEDLINE | ID: mdl-31227718

ABSTRACT

Reconstruction and annotation of volume electron microscopy data sets of brain tissue is challenging but can reveal invaluable information about neuronal circuits. Significant progress has recently been made in automated neuron reconstruction as well as automated detection of synapses. However, methods for automating the morphological analysis of nanometer-resolution reconstructions are less established, despite the diversity of possible applications. Here, we introduce cellular morphology neural networks (CMNs), based on multi-view projections sampled from automatically reconstructed cellular fragments of arbitrary size and shape. Using unsupervised training, we infer morphology embeddings (Neuron2vec) of neuron reconstructions and train CMNs to identify glia cells in a supervised classification paradigm, which are then used to resolve neuron reconstruction errors. Finally, we demonstrate that CMNs can be used to identify subcellular compartments and the cell types of neuron reconstructions.


Subjects
Brain/diagnostic imaging , Computer-Assisted Image Processing/methods , Neural Networks (Computer) , Neurons/cytology , Synapses , Algorithms , Animals , Brain/cytology , Datasets as Topic , Feasibility Studies , Male , Electron Microscopy , Passeriformes
12.
J Comput Assist Tomogr ; 43(4): 553-558, 2019.
Article in English | MEDLINE | ID: mdl-31162229

ABSTRACT

OBJECTIVE: This study aimed to analyze the possibility of artifact reduction using a new iterative metal artifact reduction algorithm (iMAR) in the diagnosis of perfusion deficits due to vasospasms and to evaluate its clinical relevance. METHODS: Sixty-one volume perfusion computed tomographies of 24 patients after coiling or aneurysm clipping were reconstructed using standard-filtered back-projection and iMAR retrospectively. The degree of artifacts was evaluated as well as the size of the nonevaluable area. Diagnostic performance was evaluated compared with digital subtraction angiography. RESULTS: Artifacts were present in 39 of 61 volume perfusion computed tomography examinations. Image quality (score, 1.0 vs 1.6; P < 0.01) was higher and the size of the signal loss was reduced significantly by iMAR (intracranial metal artifacts, 887 mm vs 359 mm [P < 0.01]; cranial bolt, 3008 mm vs 837 mm [P < 0.01]). Digital subtraction angiography confirmed vasospasms in 11 (92%) of 12 patients. CONCLUSION: The iMAR yields higher image quality by reducing artifacts compared with filtered back-projection.


Subjects
Artifacts , Computer-Assisted Image Processing/methods , Metals/chemistry , X-Ray Computed Tomography/methods , Intracranial Vasospasm/diagnostic imaging , Adult , Aged , Aged, 80 and over , Algorithms , Humans , Middle Aged
13.
Medicine (Baltimore) ; 98(23): e15871, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31169691

ABSTRACT

To evaluate the ability of a radiomics signature based on 3T dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) to distinguish between low and non-low Oncotype DX (OD) risk groups in estrogen receptor (ER)-positive invasive breast cancers. Between May 2011 and March 2016, 67 women with ER-positive invasive breast cancer who underwent preoperative 3T MRI and OD assay were included. We divided the patients into low-risk (OD recurrence score [RS] <18) and non-low-risk (RS ≥18) groups. Extracted radiomics features included 8 morphological, 76 histogram-based, and 72 higher-order texture features. A radiomics signature (Rad-score) was generated using the least absolute shrinkage and selection operator (LASSO). Univariate and multivariate logistic regression analyses were performed to investigate the association of clinicopathologic factors, MRI findings, and the Rad-score with the OD risk groups, and the areas under the receiver operating characteristic curves (AUC) were used to assess the classification performance of the Rad-score. The Rad-score was constructed for each tumor by selecting 10 (6.3%) of the 158 radiomics features. A higher Rad-score (odds ratio [OR], 65.209; P < .001), Ki-67 expression (OR, 17.462; P = .007), and high p53 (OR, 8.449; P = .077) were associated with non-low OD risk. The Rad-score classified low and non-low OD risk with an AUC of 0.759. The Rad-score showed potential for discriminating between low and non-low OD risk groups in patients with ER-positive invasive breast cancers.


Subjects
Breast Neoplasms/diagnostic imaging , Breast Neoplasms/genetics , Genomics/methods , Computer-Assisted Image Processing/methods , Magnetic Resonance Imaging/methods , Estrogen Receptors/biosynthesis , Adult , Breast Neoplasms/pathology , Contrast Media/administration & dosage , Female , Gene Expression Profiling/methods , Gene Expression Profiling/standards , Humans , Middle Aged , Local Neoplasm Recurrence/genetics , Local Neoplasm Recurrence/pathology , Organometallic Compounds/administration & dosage , ROC Curve , Reproducibility of Results , Risk Assessment , Sensitivity and Specificity
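LASSO arrives at a sparse signature (here, 10 of 158 features) by shrinking weak coefficients exactly to zero. The core of that behaviour is the soft-thresholding operator, sketched below on invented coefficients (a standalone illustration, not the study's fitted model):

```python
import numpy as np

def soft_threshold(z, alpha):
    """LASSO soft-thresholding: coefficients with magnitude below
    alpha are set exactly to zero; the rest are shrunk toward zero
    by alpha. This is why only a small feature subset survives."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

coefs = np.array([0.9, -0.05, 0.3, 0.02, -0.6])
# The two weakest coefficients are zeroed; the others shrink by 0.1.
print(soft_threshold(coefs, 0.1))
```

The surviving nonzero coefficients, applied to their features, give each tumor a single scalar score, which is the sense in which a "Rad-score" is a linear signature.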
14.
Environ Monit Assess ; 191(6): 406, 2019 May 31.
Article in English | MEDLINE | ID: mdl-31152251

ABSTRACT

Camera traps are becoming ubiquitous tools for ecologists. While easily deployed, they require human time to organize, review, and classify images including sequences of images of the same individual, and non-target images triggered by environmental conditions. For such cases, we developed an automated computer program, named EventFinder, to reduce operator time by pre-processing and classifying images using background subtraction techniques and color histogram comparisons. We tested the accuracy of the program against images previously classified by a human operator. The automated classification, on average, reduced the data requiring human input by 90.8% with an accuracy of 96.1%, and produced a false positive rate of only 3.4%. Thus, EventFinder provides an efficient method for reducing the time for human operators to review and classify images making camera trap projects, which compile a large number of images, less costly to process. Our testing process used medium to large animals, but will also work with smaller animals, provided their images occupy a sufficient area of the frame. While our discussion focuses on camera trap image reduction, we also discuss how EventFinder might be used in conjunction with other software developments for managing camera trap data.


Subjects
Environmental Monitoring/methods , Computer-Assisted Image Processing/methods , Remote Sensing Technology/methods , Alberta , Animals , Wild Animals , Computer Peripherals , Environmental Monitoring/instrumentation , Humans , Computer-Assisted Image Processing/instrumentation , Remote Sensing Technology/instrumentation , Software
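The pipeline described above rests on background subtraction: a frame counts as a candidate event when enough pixels differ from an empty-scene reference. A toy sketch of that test (the threshold, fraction, and synthetic frames are invented, not EventFinder's actual parameters):

```python
import numpy as np

def is_event(frame, background, thresh=25, min_frac=0.01):
    """Flag a camera-trap frame as a candidate animal event when the
    fraction of pixels differing from the background by more than
    `thresh` grey levels exceeds `min_frac`."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).mean() > min_frac

bg = np.full((100, 100), 120, dtype=np.uint8)  # empty-scene reference
animal = bg.copy()
animal[40:60, 40:60] = 30                      # dark 20x20 "animal" patch
print(is_event(animal, bg))  # True  (4% of pixels changed)
print(is_event(bg, bg))      # False (no change)
```

The `min_frac` cutoff is what encodes the paper's caveat that small animals must still occupy a sufficient area of the frame to be detected.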
15.
BMC Bioinformatics ; 20(1): 326, 2019 Jun 13.
Article in English | MEDLINE | ID: mdl-31195977

ABSTRACT

BACKGROUND: An important task of macromolecular structure determination by cryo-electron microscopy (cryo-EM) is the identification of single particles in micrographs (particle picking). Due to the necessity of human involvement in the process, current particle picking techniques are time consuming and often result in many false positives and negatives. Adjusting the parameters to eliminate false positives often excludes true particles in certain orientations. The supervised machine learning (e.g. deep learning) methods for particle picking often need a large training dataset, which requires extensive manual annotation. Other reference-dependent methods rely on low-resolution templates for particle detection, matching and picking, and therefore, are not fully automated. These issues motivate us to develop a fully automated, unbiased framework for particle picking. RESULTS: We design a fully automated, unsupervised approach for single particle picking in cryo-EM micrographs. Our approach consists of three stages: image preprocessing, particle clustering, and particle picking. The image preprocessing is based on multiple techniques including: image averaging, normalization, cryo-EM image contrast enhancement correction (CEC), histogram equalization, restoration, adaptive histogram equalization, guided image filtering, and morphological operations. Image preprocessing significantly improves the quality of original cryo-EM images. Our particle clustering method is based on an intensity distribution model which is much faster and more accurate than traditional K-means and Fuzzy C-Means (FCM) algorithms for single particle clustering. Our particle picking method, based on image cleaning and shape detection with a modified Circular Hough Transform algorithm, effectively detects the shape and the center of each particle and creates a bounding box encapsulating the particles. 
CONCLUSIONS: AutoCryoPicker can automatically and effectively recognize particle-like objects from noisy cryo-EM micrographs without the need of labeled training data or human intervention making it a useful tool for cryo-EM protein structure determination.


Subjects
Algorithms , Cryoelectron Microscopy/methods , Computer-Assisted Image Processing/methods , Unsupervised Machine Learning , Automation , Cluster Analysis , Software
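Of the preprocessing steps listed, histogram equalization is the easiest to illustrate: grey levels are remapped through the image's cumulative histogram so that intensities spread over the full range. A minimal sketch on a tiny invented image (real pipelines typically call library routines such as those in scikit-image or OpenCV):

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit image: build the
    cumulative distribution of grey levels and use it as a lookup
    table mapping each level onto the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

# A tiny low-contrast image: values 50-52 are stretched apart.
img = np.array([[50, 50], [51, 52]], dtype=np.uint8)
print(equalize_hist(img))
```

For the low-contrast micrographs described here, this kind of stretch is what makes particle boundaries visible enough for the later clustering and shape-detection stages.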
16.
Microsc Res Tech ; 82(9): 1471-1488, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31168871

ABSTRACT

Among precision medicine techniques, medical image processing is rapidly growing as a successful tool for cancer detection. Skin cancer is one of the most critical cancer types. It is identified through computer vision (CV) techniques using dermoscopic images. Early diagnosis of skin cancer from dermoscopic images can decrease the mortality rate. We propose an automated system for skin lesion detection and classification based on a statistical normal distribution and optimal feature selection. Local contrast is controlled using a brighter-channel enhancement technique, and segmentation is performed through a statistical normal distribution approach. The multiplication law of probability is implemented for the fusion of segmented images. In the feature extraction phase, optimized histogram, optimized color, and gray-level co-occurrence matrix features are extracted, and covariance-based fusion is performed. Subsequently, optimal features are selected through a binary grasshopper optimization algorithm. The selected optimal features are finally fed to a classifier and evaluated on the ISBI 2016 and ISBI 2017 data sets. Classification accuracy is computed using different Support Vector Machine (SVM) kernel functions, and the best accuracy is obtained for the cubic function. The average accuracies of the proposed segmentation on the PH2 and ISBI 2016 data sets are 93.79% and 96.04%, respectively, for an image size of 512 × 512. The accuracies of the proposed classification on the ISBI 2016 and ISBI 2017 data sets are 93.80% and 93.70%, respectively. The proposed system outperforms existing methods on the selected data sets.


Subjects
Image Processing, Computer-Assisted/methods , Lacerations/diagnosis , Lacerations/pathology , Optical Imaging/methods , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Skin/pathology , Automation, Laboratory/methods , Biostatistics , Humans , Normal Distribution
17.
Cancer Imaging ; 19(1): 41, 2019 Jun 22.
Article in English | MEDLINE | ID: mdl-31228956

ABSTRACT

BACKGROUND: To determine whether mammographic features from deep learning networks can be applied in breast cancer to identify groups at risk of interval invasive cancer due to masking, beyond traditional breast density measures. METHODS: Full-field digital screening mammograms acquired in our clinics between 2006 and 2015 were reviewed. Transfer learning of a deep learning network with weights initialized from ImageNet was performed to classify mammograms that were followed by an invasive interval or screen-detected cancer within 12 months of the mammogram. Hyperparameter optimization was performed, and the network was visualized through saliency maps. Prediction loss and accuracy were calculated using this deep learning network. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were generated for the outcome of interval cancer using the deep learning network and compared with predictions from conditional logistic regression, with errors quantified through contingency tables. RESULTS: Pre-cancer mammograms of 182 interval and 173 screen-detected cancers were split into training/test cases at an 80/20 ratio. Using Breast Imaging-Reporting and Data System (BI-RADS) density alone, the ability to correctly classify interval cancers was moderate (AUC = 0.65). The optimized deep learning model achieved an AUC of 0.82. Contingency table analysis showed that the network correctly classified 75.2% of the mammograms and that incorrect classifications were slightly more common for the interval cancer mammograms. Saliency maps of each cancer case showed that local information drove classification more strongly than global image information. CONCLUSIONS: Pre-cancerous mammograms contain imaging information beyond breast density that can be identified with deep learning networks to predict the probability of breast cancer detection.
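The AUC metric used to compare the density-only and deep learning models above has a simple probabilistic reading: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pure-Python sketch with toy scores (the `roc_auc` helper and all numbers are illustrative, not the study's data):

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the probability
    that a random positive outscores a random negative, ties counting half."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores standing in for predicted interval-cancer probabilities
# (labels: 1 = interval cancer, 0 = screen-detected cancer).
labels = [1, 1, 1, 0, 0, 0]
density_scores = [0.7, 0.4, 0.5, 0.6, 0.3, 0.5]   # density-only model
network_scores = [0.9, 0.8, 0.6, 0.5, 0.2, 0.3]   # deep learning model
# density-only AUC ≈ 0.61, network AUC = 1.0 on these toy scores
print(roc_auc(labels, density_scores), roc_auc(labels, network_scores))
```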


Subjects
Breast Neoplasms/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Mammography/methods , Early Detection of Cancer , Female , Humans , Image Processing, Computer-Assisted/standards , Limit of Detection , Mammography/standards
18.
Microsc Res Tech ; 82(9): 1542-1556, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31209970

ABSTRACT

Plant diseases are responsible for economic losses in agricultural countries. Manual diagnosis of plant diseases has been a key challenge over the last decade; therefore, researchers in this area have introduced automated systems. In this research work, an automated system is proposed for citrus fruit disease recognition using computer vision techniques. The proposed method incorporates five fundamental steps: preprocessing, disease segmentation, feature extraction and reduction, fusion, and classification. Noise is removed, followed by a contrast stretching procedure, in the first phase. Later, the watershed method is applied to extract the infected regions. Shape, texture, and color features are subsequently computed from these infected regions. In the fourth step, reduced features are fused using a serial-based approach, followed by a final classification step using a multiclass support vector machine. For dimensionality reduction, principal component analysis is utilized, a statistical procedure that applies an orthogonal transformation to a set of observations. Three different image data sets (Citrus Image Gallery, Plant Village, and self-collected) are combined in this research, achieving a classification accuracy of 95.5%. The results show that the proposed method outperforms several existing methods in precision and accuracy.
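The PCA dimensionality-reduction step described above, an orthogonal transformation onto the directions of maximum variance, can be sketched in a few lines of NumPy. The `pca_reduce` helper and the random feature matrix are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components: an
    orthogonal transformation aligned with directions of maximum variance."""
    X_centered = X - X.mean(axis=0)
    # Principal directions come from the right singular vectors of the
    # centered data (equivalently, eigenvectors of the covariance matrix).
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:k]            # (k, n_features), orthonormal rows
    return X_centered @ components.T

rng = np.random.default_rng(1)
features = rng.normal(size=(50, 12))   # stand-in fused shape/texture/color features
reduced = pca_reduce(features, k=4)
print(reduced.shape)  # (50, 4)
```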


Subjects
Citrus/anatomy & histology , Image Processing, Computer-Assisted/methods , Microscopy/methods , Plant Diseases , Automation, Laboratory/methods
19.
Microsc Res Tech ; 82(9): 1601-1609, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31243869

ABSTRACT

Lung cancer is the most common cause of cancer-related death globally. Currently, lung nodule detection and classification are performed by radiologist-assisted computer-aided diagnosis systems. However, emerging artificial intelligence techniques such as neural networks, support vector machines, and hidden Markov models (HMMs) have improved the detection and classification of cancer in any part of the human body. Such automated methods, and their possible combinations, could assist radiologists in the early detection of lung nodules, which could reduce treatment costs and death rates. The literature reveals that classification based on voting among classifiers exhibits better performance in the detection and classification process. Accordingly, this article presents an automated approach for lung nodule detection and classification that consists of multiple steps, including lesion enhancement, segmentation, and feature extraction from each candidate lesion. Moreover, multiple classifiers (logistic regression, multilayer perceptron, and voted perceptron) are tested for lung nodule classification using a k-fold cross-validation process. The proposed approach is evaluated on the publicly available Lung Image Database Consortium benchmark data set. Based on the performance evaluation, the proposed method performed better than the state of the art, achieving an overall accuracy rate of 100%.
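Two pieces of the evaluation protocol named above, k-fold cross-validation and voting among classifiers, can be sketched in pure Python. The helpers `k_fold_indices` and `majority_vote` are illustrative, not the authors' code:

```python
from collections import Counter

def k_fold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation:
    each sample appears in exactly one test fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

def majority_vote(predictions):
    """Combine per-classifier prediction lists by majority vote per sample."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

folds = list(k_fold_indices(10, 5))
print(len(folds), folds[0][1])                            # 5 folds; first test fold [0, 1]
print(majority_vote([[1, 0, 1], [1, 1, 0], [0, 0, 1]]))   # [1, 0, 1]
```

In practice each classifier would be trained on the `train` indices of every fold and the voted predictions scored on the corresponding `test` indices.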


Subjects
Automation, Laboratory/methods , Image Processing, Computer-Assisted/methods , Lung Diseases/diagnostic imaging , Lung Diseases/pathology , Lung/diagnostic imaging , Lung/pathology , Radiography, Thoracic/methods , Humans
20.
Top Magn Reson Imaging ; 28(3): 159-171, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31188274

ABSTRACT

Magnetic resonance imaging (MRI) has been driven toward ultrahigh magnetic fields (UHF) in order to benefit from correspondingly higher signal-to-noise ratio and spectral resolution. Technological challenges associated with UHF, such as increased radiofrequency (RF) energy deposition and RF excitation inhomogeneity, limit realization of the full potential of these benefits. Parallel RF transmission (pTx) decreases the inhomogeneity of RF excitations and RF energy deposition by using multiple transmit RF coils driven independently and operating simultaneously. pTx plays a fundamental role in UHF MRI by bringing the potential applications of UHF into reality. In this review article, we survey recent developments in pTx pulse design and RF safety. Simultaneous multislice imaging and inner-volume imaging using pTx are reviewed with a focus on UHF applications. Emerging pTx design approaches using improved design frameworks and calibrations are reviewed, together with calibration-free approaches that remove the time-consuming calibrations otherwise necessary for successful pTx. Lastly, we focus on pTx safety, which is improved through intersubject variability analysis, proactive pTx management, and temperature-based pTx approaches.
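A toy NumPy sketch of the core pTx idea, static RF shimming: the combined transmit field is a complex superposition of per-coil B1+ maps weighted by independently chosen channel drives (amplitude and phase), and the weights can be fit to approximate a uniform target excitation. The coil maps and dimensions below are synthetic and purely illustrative:

```python
import numpy as np

# Synthetic complex B1+ sensitivity maps for 8 independent transmit channels
# over 64 voxels (real maps would come from B1+ calibration scans).
rng = np.random.default_rng(2)
n_coils, n_voxels = 8, 64
b1_maps = (rng.normal(size=(n_coils, n_voxels))
           + 1j * rng.normal(size=(n_coils, n_voxels)))

target = np.ones(n_voxels)  # ideal: uniform excitation everywhere

# Least-squares shim weights: choose per-channel complex drives w
# minimizing ||B^T w - target||_2, i.e. the closest field to uniform
# achievable by superposing the coils.
weights, *_ = np.linalg.lstsq(b1_maps.T, target, rcond=None)
combined = b1_maps.T @ weights

# Coefficient of variation of |B1+| as a simple inhomogeneity measure.
cv_single = np.abs(b1_maps[0]).std() / np.abs(b1_maps[0]).mean()
cv_shimmed = np.abs(combined).std() / np.abs(combined).mean()
print(f"single-coil CV: {cv_single:.2f}, shimmed CV: {cv_shimmed:.2f}")
```

Real pTx pulse design adds constraints this sketch omits entirely, notably specific absorption rate (SAR) limits on the channel drives.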


Subjects
Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/instrumentation , Radio Waves