ABSTRACT
OBJECTIVES: To develop an automated deep-learning algorithm for detection and 3D segmentation of incidental bone lesions in maxillofacial CBCT scans. METHODS: The dataset included 82 cone beam CT (CBCT) scans, 41 with histologically confirmed benign bone lesions (BL) and 41 control scans (without lesions), obtained using three CBCT devices with diverse imaging protocols. Lesions were marked in all axial slices by experienced maxillofacial radiologists. All cases were divided into sub-datasets: training (20,214 axial images), validation (4530 axial images), and testing (6795 axial images). A Mask R-CNN algorithm segmented the bone lesions in each axial slice. Analysis of sequential slices was used to improve the Mask R-CNN performance and to classify each CBCT scan as containing bone lesions or not. Finally, the algorithm generated 3D segmentations of the lesions and calculated their volumes. RESULTS: The algorithm correctly classified all CBCT cases as containing bone lesions or not, with an accuracy of 100%. The algorithm detected the bone lesions in axial images with high sensitivity (95.9%) and high precision (98.9%), with an average Dice coefficient of 83.5%. CONCLUSIONS: The developed algorithm detected and segmented bone lesions in CBCT scans with high accuracy and may serve as a computerized tool for detecting incidental bone lesions in CBCT imaging. CLINICAL RELEVANCE: Our novel deep-learning algorithm detects incidental hypodense bone lesions in cone beam CT scans across various imaging devices and protocols. This algorithm may reduce patients' morbidity and mortality, particularly since cone beam CT interpretation is currently not always performed. KEY POINTS: • A deep learning algorithm was developed for automatic detection and 3D segmentation of various maxillofacial bone lesions in CBCT scans, irrespective of the CBCT device or the scanning protocol.
• The developed algorithm can detect incidental jaw lesions with high accuracy, generates a 3D segmentation of the lesion, and calculates the lesion volume.
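The sequential-slice analysis described above can be illustrated with a minimal sketch (not the authors' code): per-slice detections are kept only when they persist across several consecutive axial slices, and the lesion volume is then estimated from the retained voxel counts. The run-length threshold and voxel size below are assumed values for illustration.

```python
# Hypothetical sketch of sequential-slice filtering and volume estimation;
# min_run and voxel_mm3 are assumed parameters, not from the study.

def filter_sequential(slice_detections, min_run=3):
    """Keep only detections persisting over >= min_run consecutive slices.

    slice_detections: dict mapping slice index -> True/False (lesion found).
    Returns the set of slice indices belonging to sufficiently long runs.
    """
    kept, run = set(), []
    for z in sorted(slice_detections):
        if slice_detections[z]:
            run.append(z)
        else:
            if len(run) >= min_run:
                kept.update(run)
            run = []
    if len(run) >= min_run:
        kept.update(run)
    return kept

def lesion_volume_mm3(voxels_per_slice, kept_slices, voxel_mm3=0.3 ** 3):
    """Sum segmented voxels over retained slices, scaled by voxel volume."""
    return sum(voxels_per_slice[z] for z in kept_slices) * voxel_mm3
```

Requiring persistence across slices discards isolated single-slice false positives, which is one plausible way such an analysis can raise per-scan classification accuracy.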
Subjects
Deep Learning, Humans, Algorithms, Cone-Beam Computed Tomography/methods, Computer-Assisted Image Processing
ABSTRACT
The field of printed electronics continually seeks to reduce the dimensions of electrical components. Here, a method of printing metallic lines with widths as small as 15 nm and up to a few micrometers using fountain pen nanolithography (FPN) is shown. The FPN technique is based on a bent nanopipette with atomic force feedback that acts like a nanopen. The geometry of the nanopen allows rapid, accurate placement of the printing tip at any desired location, with sub-micrometer optical resolution. Using this nanopen, various inks are investigated, together with instrumental and script-tool development that allows accurate printing of multiple layers. This has led to the printing of conductive lines using inks composed of silver nanoparticles and salt solutions of silver and copper. In addition, it is shown that the method can be applied to substrates of various materials with minimal effect on the line dimensions. The line widths are varied by using nanopens with different orifices or by tailoring the wetting properties of the ink on the substrate. Metallic interconnections of conducting lines are reported.
ABSTRACT
Force-controlled optical imaging of membranes of living cells is demonstrated. Such imaging has been extended to image membrane potential changes to demonstrate that live cell imaging has been achieved. To accomplish this advance, limitations inherent in atomic force microscopy (AFM) since its inception in 1986 [G. Binnig, C. F. Quate, and C. Gerber, "Atomic Force Microscope," Phys. Rev. Lett. 56, 930-933 (1986).] had to be overcome. The advances allow for live cell imaging of a whole genre of functional biological imaging with stiff (1-10 N/m) scanned probe imaging cantilevers. Even topographic imaging of fine cell protrusions, such as microvilli, has been accomplished with such cantilevers. Similar topographic imaging has only recently been demonstrated with the standard soft (0.05 N/m) cantilevers that are generally required for live cell imaging. The progress reported here demonstrates both ultrasensitive AFM (~100 pN), capable of topographic imaging of even microvilli protruding from cell membranes, and new functional applications that should have a significant impact on optical and other approaches in biological imaging of living systems and ultrasoft materials.
ABSTRACT
Single-walled carbon nanotubes (SWCNTs) are considered pivotal components for molecular electronics. Techniques for SWCNT lithography today lack simplicity, flexibility, and speed of direct, oriented deposition at specific target locations. In this paper SWCNTs are directly drawn and placed with chemical identification and demonstrated orientation using fountain pen nanolithography (FPN) under ambient conditions. Placement across specific electrical contacts with such alignment is demonstrated and characterized. The fundamental basis of the drawing process with alignment has potential applications for other related systems such as inorganic nanotubes, polymers, and biological molecules.
ABSTRACT
PURPOSE: This study addressed the challenge of detecting and classifying the severity of ductopenia in parotid glands, a structural abnormality characterized by a reduced number of salivary ducts, previously shown to be associated with salivary gland impairment. The aim of the study was to develop an automatic algorithm designed to improve diagnostic accuracy and efficiency in analyzing ductopenic parotid glands using sialo cone-beam CT (sialo-CBCT) images. METHODS: We developed an end-to-end automatic pipeline consisting of three main steps: (1) region of interest (ROI) computation, (2) parotid gland segmentation using the Frangi filter, and (3) ductopenia case classification with a residual neural network (RNN) augmented by multidirectional maximum intensity projection (MIP) images. To explore the impact of the first two steps, the RNN was trained on three datasets: (1) original MIP images, (2) MIP images with predefined ROIs, and (3) MIP images after segmentation. RESULTS: Evaluation was conducted on 126 parotid sialo-CBCT scans of normal, moderate, and severe ductopenic cases, yielding a high performance of 100% for the ROI computation and 89% for the gland segmentation. Improvements in accuracy and F1 score were noted among the original MIP images (accuracy: 0.73, F1 score: 0.53), ROI-predefined images (accuracy: 0.78, F1 score: 0.56), and segmented images (accuracy: 0.95, F1 score: 0.90). Notably, ductopenic detection sensitivity was 0.99 in the segmented dataset, highlighting the capabilities of the algorithm in detecting ductopenic cases. CONCLUSIONS: Our method, which combines classical image processing and deep learning techniques, offers a promising solution for automatic detection of parotid gland ductopenia in sialo-CBCT scans. This may be used for further research aimed at understanding the role of the presence and severity of ductopenia in salivary gland dysfunction.
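The maximum intensity projection step used above can be sketched in a few lines (an illustrative example, not the study's code): each MIP pixel is the per-ray maximum of the volume intensities along one of the three principal axes. The nested-list volume layout `volume[z][y][x]` is an assumed convention.

```python
# Hypothetical sketch of multidirectional MIP computation over a small
# nested-list volume; real pipelines would use array libraries instead.

def mip(volume, axis):
    """Project volume[z][y][x] by taking the per-pixel maximum along the
    chosen axis (0 = axial, 1 = coronal, 2 = sagittal)."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    if axis == 0:    # axial MIP: maximum over slices z
        return [[max(volume[z][y][x] for z in range(nz))
                 for x in range(nx)] for y in range(ny)]
    if axis == 1:    # coronal MIP: maximum over rows y
        return [[max(volume[z][y][x] for y in range(ny))
                 for x in range(nx)] for z in range(nz)]
    return [[max(volume[z][y][x] for x in range(nx))   # sagittal MIP
             for y in range(ny)] for z in range(nz)]
```

Projecting along several directions, as the study does, gives the classifier complementary 2D views of the 3D ductal tree without feeding it the full volume.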
ABSTRACT
Purpose: Quantitative evaluation of renal obstruction is crucial for preventing renal atrophy. This study presents a novel method for diagnosing renal obstruction by automatically extracting objective indicators from routine multi-phase CT Urography (CTU). Material and methods: The study included multi-phase CTU examinations of 6 hydronephrotic kidneys and 24 non-hydronephrotic kidneys (23,164 slices). The developed algorithm segmented the renal parenchyma and the renal pelvis of each kidney in each CTU slice. Following a 3D reconstruction of the parenchyma and renal pelvis, the algorithm evaluated the amount of contrast media in both components in each phase. Finally, the algorithm evaluated two indicators for assessing renal obstruction: the change in the total amount of contrast media in both components during the CTU phases, and the drainage time, "T1/2", from the renal parenchyma. Results: The algorithm segmented the parenchyma and renal pelvis with average Dice coefficients of 0.97 and 0.92, respectively. In all the hydronephrotic kidneys, the total amount of contrast media did not decrease during the CTU examination, and the T1/2 value was longer than 20 min. Both indicators yielded a statistically significant difference (p < 0.001) between hydronephrotic and normal kidneys, and combining both indicators yielded 100% accuracy. Conclusions: The novel algorithm enables accurate 3D segmentation of the renal parenchyma and pelvis and estimates the amount of contrast media in multi-phase CTU examinations. This serves as a proof-of-concept for the ability to extract, from routine CTU, indicators that alert to the presence of renal obstruction and estimate its severity.
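A drainage half-time of the kind described can be sketched as follows (an illustrative example, not the authors' implementation): given contrast-amount measurements at the phase times, find the time at which the amount first falls to half of its peak, interpolating linearly between phases. The sample values are invented for illustration.

```python
# Hypothetical T1/2 estimation from a sparse contrast-washout curve.

def half_time(times_min, amounts):
    """Return the time (minutes) at which the contrast amount first falls to
    half its peak, interpolating linearly between samples; return None if it
    never drains below half within the observed window (as in the obstructed
    kidneys described, where the total amount did not decrease)."""
    peak = max(amounts)
    target = peak / 2.0
    for (t0, a0), (t1, a1) in zip(zip(times_min, amounts),
                                  zip(times_min[1:], amounts[1:])):
        if a0 >= target >= a1:   # half-level crossed in this interval
            if a0 == a1:
                return t0
            return t0 + (a0 - target) * (t1 - t0) / (a0 - a1)
    return None
```

On such a curve, a missing crossing (None) or a crossing later than a cutoff such as 20 min would flag the kidney for further evaluation.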
ABSTRACT
PURPOSE: Partial obstruction of the upper urinary tract is a common urological pathology that leads to progressive atrophy and dysfunction of the kidney. Most methods for evaluating the urine drainage rate, to assess the severity of partial obstruction, involve injection of markers into the blood stream, and therefore the filtration rate from the blood affects the drainage rate. This study presents a novel method for assessing the drainage rate from the upper urinary tract by analyzing sequential fluoroscopic images from a routine nephrostogram, in which contrast material is introduced directly into the renal collecting system. METHODS: Fluoroscopic images from 36 nephrostograms, following percutaneous nephrolithotomy, were retrospectively evaluated, 19 with a dilated renal pelvis. A radiological model for calculating the radiopacity of the renal pelvis, which reflects the amount of contrast material in each sequential image, was developed. Using this model, an algorithm was designed for generating a drainage curve and calculating the "drainage time" t1/2, in which half of the contrast material has drained from the renal pelvis. RESULTS: Analysis of images of a step-wedge phantom made of an increasing number of contrast material layers showed that the calculated radiopacity of each step was proportional to the amount of contrast material, independent of the background attenuation. Analysis of the nephrostograms showed that the drainage curves closely fit an exponential function (R = 0.961), with a significantly higher t1/2 for dilated cases. CONCLUSION: The developed method may be used for a quantitative and accurate estimation of the urine drainage rate.
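Since the drainage curves closely fit an exponential, the half-time can be derived from a log-linear least-squares fit, as in this minimal sketch (not the paper's algorithm; the decay model A(t) = A0·exp(-k·t) and t1/2 = ln 2 / k follow from standard first-order kinetics, and the data below are synthetic).

```python
# Hypothetical sketch: fit radiopacity samples to A0*exp(-k*t) by least
# squares on log(A), then read off t1/2 = ln(2)/k.

import math

def fit_half_time(times, amounts):
    """Log-linear least-squares fit of amounts ~ A0*exp(-k*t); returns t1/2."""
    n = len(times)
    ys = [math.log(a) for a in amounts]           # linearize the exponential
    tm = sum(times) / n
    ym = sum(ys) / n
    slope = (sum((t - tm) * (y - ym) for t, y in zip(times, ys))
             / sum((t - tm) ** 2 for t in times))
    k = -slope                                    # decay constant
    return math.log(2) / k
```

Fitting the whole curve rather than locating a single half-level crossing uses every frame of the sequence, which makes the estimate less sensitive to noise in any one image.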
Subjects
Drainage, Kidney Pelvis, Contrast Media, Humans, Kidney Pelvis/diagnostic imaging, Retrospective Studies, Urography
ABSTRACT
OBJECTIVES: The aim of this study was to develop a computer vision algorithm based on artificial intelligence, designed to automatically detect and classify various dental restorations on panoramic radiographs. STUDY DESIGN: A total of 738 dental restorations in 83 anonymized panoramic images were analyzed. Images were automatically cropped to obtain the region of interest containing maxillary and mandibular alveolar ridges. Subsequently, the restorations were segmented by using a local adaptive threshold. The segmented restorations were classified into 11 categories, and the algorithm was trained to classify them. Numerical features based on the shape and distribution of gray level values extracted by the algorithm were used for classifying the restorations into different categories. Finally, a Cubic Support Vector Machine algorithm with Error-Correcting Output Codes was used with a cross-validation approach for the multiclass classification of the restorations according to these features. RESULTS: The algorithm detected 94.6% of the restorations. Classification eliminated all erroneous marks, and ultimately, 90.5% of the restorations were marked on the image. The overall accuracy of the classification stage in discriminating between the true restoration categories was 93.6%. CONCLUSIONS: This machine-learning algorithm demonstrated excellent performance in detecting and classifying dental restorations on panoramic images.
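The local adaptive threshold used in the segmentation step can be illustrated with a minimal sketch (not the study's code): each pixel is compared with the mean gray level of a surrounding window plus an offset, so that bright restorations stand out against locally varying background. Window size and offset below are assumed values.

```python
# Hypothetical local adaptive thresholding of a grayscale image given as a
# list of rows; win and offset are illustrative parameters.

def adaptive_threshold(img, win=15, offset=5):
    """Binarize img: pixel -> 1 where its value exceeds the mean of its
    win x win neighborhood by more than `offset`, else 0."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Clamp the window to the image borders.
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            window = [img[yy][xx] for yy in range(y0, y1)
                      for xx in range(x0, x1)]
            local_mean = sum(window) / len(window)
            out[y][x] = 1 if img[y][x] > local_mean + offset else 0
    return out
```

A local rather than global threshold is the natural choice on panoramic radiographs, where overall brightness varies across the jaws while restorations remain locally much brighter than their surroundings.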