Results 1 - 6 of 6
1.
IEEE J Biomed Health Inform ; 27(1): 176-187, 2023 01.
Article in English | MEDLINE | ID: mdl-35877797

ABSTRACT

Fluorescence imaging-based diagnostic systems are widely used to diagnose skin diseases because, compared with conventional RGB imaging, they provide detailed information on the molecular composition of the skin. In addition, recent advances in smartphones have made them suitable for biomedical imaging, and various smartphone-based optical imaging systems have been developed for mobile healthcare. However, advanced analysis algorithms are required to improve the diagnosis of skin diseases. Various deep learning-based algorithms have recently been developed for this purpose, but algorithms using only white-light reflectance RGB images have exhibited limited diagnostic performance. In this study, we developed an auxiliary deep learning network, the fluorescence-aided amplifying network (FAA-Net), to diagnose skin diseases using a multi-modal smartphone imaging system that offers both RGB and fluorescence images. FAA-Net is equipped with a meta-learning-based algorithm to mitigate problems caused by the limited number of images acquired by the developed system. In addition, we devised a new attention-based module that learns the locations of skin diseases on its own and emphasizes potential disease regions, and incorporated it into FAA-Net. We conducted a clinical trial in a hospital to evaluate the performance of FAA-Net and to compare our model with other state-of-the-art models on various evaluation metrics for diagnosing skin diseases with our multi-modal system. Experimental results demonstrated that our model improved mean accuracy by 8.61% and area under the curve by 9.83% in classifying skin diseases compared with other advanced models.
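The abstract does not detail the attention-based module; as a loose illustration of the general idea (re-weighting feature-map pixels so that salient regions are emphasized), a minimal NumPy sketch might look like the following, with all names, shapes, and the sigmoid gating being assumptions rather than the paper's actual design:

```python
import numpy as np

def spatial_attention(features):
    """Toy spatial-attention step: weight each pixel of a feature map by a
    sigmoid of its channel-wise mean, emphasizing high-activation regions."""
    # features: (channels, height, width)
    saliency = features.mean(axis=0)              # (H, W) channel-wise mean
    attention = 1.0 / (1.0 + np.exp(-saliency))   # sigmoid -> weights in (0, 1)
    return features * attention[None, :, :], attention

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16, 16))
weighted, att = spatial_attention(feats)
print(weighted.shape)  # -> (8, 16, 16)
```

In a real network the saliency map would be produced by learned layers rather than a fixed channel mean; the point is only that the attention map multiplies the features to suppress irrelevant regions.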


Subject(s)
Deep Learning , Skin Diseases , Humans , Algorithms , Diagnostic Imaging , Neural Networks, Computer
2.
Article in English | MEDLINE | ID: mdl-36331635

ABSTRACT

Acoustic holography has been gaining attention for applications such as noncontact particle manipulation, noninvasive neuromodulation, and medical imaging. However, few studies have addressed how to generate acoustic holograms, and even conventional acoustic hologram algorithms show limited performance in generating holograms quickly and accurately, hindering the development of novel applications. Here, we propose a deep learning-based framework for fast and accurate acoustic hologram generation. The framework has an autoencoder-like architecture, so training is unsupervised and requires no ground truth. Within the framework, we demonstrate a newly developed hologram generator network, the holographic ultrasound generation network (HU-Net), which is suitable for unsupervised learning of hologram generation, and a novel loss function devised for energy-efficient holograms. Furthermore, to accommodate different hologram devices (i.e., ultrasound transducers), we propose a physical constraint (PC) layer. Simulation and experimental studies were carried out for two different hologram devices: a 3-D printed lens attached to a single-element transducer and a 2-D ultrasound array. The proposed framework was compared with the iterative angular spectrum approach (IASA) and the state-of-the-art (SOTA) iterative optimization method, Diff-PAT. In the simulation study, our framework generated holograms a few hundred times faster than IASA and Diff-PAT, with comparable or even better reconstruction quality. In the experimental study, the framework was validated with 3-D printed lenses fabricated by different methods, and the physical effect of the lenses on reconstruction quality was discussed.
The outcomes of the proposed framework across these cases (i.e., hologram generator networks, loss functions, and hologram devices) suggest that it may become a very useful alternative for existing acoustic hologram applications and may enable novel medical applications.
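As context for the IASA baseline mentioned above: its core building block is angular spectrum propagation of a complex pressure field between parallel planes. A minimal NumPy sketch of that step (square grid, evanescent components discarded; the pitch and wavelength values are hypothetical, loosely corresponding to low-MHz ultrasound in water):

```python
import numpy as np

def angular_spectrum_propagate(field, pitch, wavelength, distance):
    """Propagate a complex field across `distance` with the angular spectrum
    method: FFT to spatial frequencies, apply the transfer function, inverse FFT."""
    n = field.shape[0]                               # square n x n grid assumed
    fx = np.fft.fftfreq(n, d=pitch)                  # spatial frequencies (1/m)
    fx2, fy2 = np.meshgrid(fx**2, fx**2, indexing="ij")
    arg = 1.0 / wavelength**2 - fx2 - fy2            # (1/lambda)^2 - fx^2 - fy^2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0.0, np.exp(1j * kz * distance), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Hypothetical numbers: 1 mm pixel pitch, 0.5 mm wavelength
src = np.zeros((32, 32), dtype=complex)
src[16, 16] = 1.0                                    # point source
out = angular_spectrum_propagate(src, 1e-3, 0.5e-3, 0.05)
```

IASA iterates this forward/backward propagation while enforcing constraints in each plane; the deep learning framework in the abstract replaces that iteration with a single network pass.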


Subject(s)
Deep Learning , Holography , Holography/methods , Algorithms , Computer Simulation , Acoustics
3.
Article in English | MEDLINE | ID: mdl-35877808

ABSTRACT

The performance of computer-aided diagnosis (CAD) systems based on ultrasound imaging has been enhanced by advances in deep learning. However, the speckle noise inherent in ultrasound images blurs lesion boundaries and makes them difficult to distinguish, degrading CAD performance. Although many despeckling methods have been proposed over the decades, the task remains a challenge that must be addressed to enhance CAD performance. In this article, we propose a deep content-aware image prior (DCAIP) with a content-aware attention module (CAAM) for superior despeckling of ultrasound images without requiring clean reference images. For the image prior, we developed the CAAM to handle the content information in an input image. In this module, super-pixel pooling (SPP) directs attention to salient regions in an ultrasound image, so it provides more content information about the input image than other attention modules. The DCAIP consists of deep learning networks based on this attention module. We validated the DCAIP by applying it as a preprocessing step for breast tumor segmentation in ultrasound images, one of the tasks in CAD. Our method improved segmentation performance by 15.89% in terms of the area under the precision-recall (PR) curve (AUPRC). The results demonstrate that our method enhances the quality of ultrasound images by effectively reducing speckle noise while preserving important image information, which is promising for the design of superior CAD systems.
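The DCAIP itself is a trained network, but the degradation it targets is commonly modeled as multiplicative speckle. A toy NumPy sketch of that noise model (unit-mean gamma multiplier; the `looks` parameter and all values are illustrative, not taken from the paper):

```python
import numpy as np

def add_speckle(image, looks=4, seed=0):
    """Simulate multiplicative speckle: scale each pixel by unit-mean gamma
    noise. Larger `looks` means weaker speckle (multiplier variance is 1/looks)."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

clean = np.ones((128, 128))
noisy = add_speckle(clean)   # speckled version of a uniform image
```

A despeckling method is then judged on how well it recovers `clean` from `noisy` while preserving edges; the abstract's AUPRC gain measures this indirectly through downstream segmentation.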


Subject(s)
Algorithms , Breast Neoplasms , Female , Humans , Image Processing, Computer-Assisted , Ultrasonography
4.
Ultrasonics ; 115: 106457, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33991980

ABSTRACT

Mechanical circulatory support systems (MCSSs) are crucial devices for patients with heart failure awaiting transplantation. Blood flowing through an MCSS can recirculate or even stagnate in the event of critical blood flow issues. To avoid emergencies due to abnormal changes in flow, continuous changes in flowrate should be measured with high accuracy and robustness. Better flowrate measurement requires a more advanced ultrasonic blood flowmeter (UFM), a noninvasive measurement tool. In this paper, we propose a novel UFM sensor module using a novel algorithm (Xero) that exploits the advantages of both the conventional cross-correlation (Xcorr) and zero-crossing (Zero) algorithms while relying only on zero-crossing computations. To validate our developed and optimized ultrasonic sensor module for MCSSs, the accuracy, robustness, and continuous monitoring performance of the proposed algorithm were compared with those of conventional algorithms applied to the same sensor module. The results show that Xero outperforms the other algorithms for flowrate measurement under different environments, offering an error rate as low as 0.92%, higher robustness to changing fluid temperatures than conventional algorithms, and sensitive responses to sudden changes in flowrate. Thus, the proposed UFM system with Xero has great potential for flowrate measurement in MCSSs.
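As background on the Xcorr baseline named above: transit-time flowmeters estimate the delay between upstream and downstream ultrasound signals, classically from the cross-correlation peak. A minimal NumPy sketch (the 1 MHz sampling rate, 50 kHz burst, and 7-sample shift are made-up numbers, not from the paper):

```python
import numpy as np

def xcorr_delay(upstream, downstream, fs):
    """Estimate the time delay of `downstream` relative to `upstream`
    from the peak of their full cross-correlation."""
    corr = np.correlate(downstream, upstream, mode="full")
    lag = int(np.argmax(corr)) - (len(upstream) - 1)  # lag in samples
    return lag / fs                                   # delay in seconds

fs = 1e6                                    # hypothetical 1 MHz sampling
t = np.arange(200) / fs
burst = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 60e-6) / 20e-6) ** 2)
delayed = np.roll(burst, 7)                 # inject a known 7-sample delay
estimate = xcorr_delay(burst, delayed, fs)  # recovers 7e-06 s
```

Zero-crossing methods instead track the times at which the received waveform crosses zero, which is cheaper than a full correlation; the abstract's Xero combines the two ideas while relying only on zero-crossing computations.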


Subject(s)
Algorithms , Flowmeters , Hemorheology , Ultrasonics/instrumentation , Equipment Design , Humans
5.
IEEE Trans Med Imaging ; 40(2): 594-606, 2021 02.
Article in English | MEDLINE | ID: mdl-33079654

ABSTRACT

We developed a forward-looking (FL) multimodal endoscopic system that offers color, spectral-classified, high-frequency ultrasound (HFUS) B-mode, and integrated backscattering coefficient (IBC) images for tumor detection in situ. Examining tumor distributions from the surface of the colon to deeper tissue is essential for determining a cancer treatment plan. For example, the submucosal invasion depth of a tumor, in addition to its distribution on the colon surface, is used as an indicator of whether endoscopic dissection should be performed. We therefore devised the FL multimodal endoscopic system to provide accurate information on tumor distribution from the surface to deep tissue. The system was evaluated with bilayer gelatin phantoms whose layers have different properties along the lateral direction. After the phantom evaluation, the system was employed to characterize forty human colon tissue samples excised from cancer patients. The proposed system allowed us to obtain highly resolved chemical, anatomical, and macro-molecular information on the excised colon tissues, including tumors, thus enhancing the detection of tumor distributions from the surface to deep tissue. These results suggest that the FL multimodal endoscopic system could be an innovative screening instrument for quantitative tumor characterization.
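The abstract does not spell out how the IBC image is formed; a common definition is the band-averaged power of a tissue RF segment relative to a reference, expressed in dB. A schematic NumPy version (the 100 MHz sampling rate and 20-40 MHz analysis band are hypothetical HFUS values, and the signals below are synthetic stand-ins):

```python
import numpy as np

def integrated_backscatter(rf, ref, fs, band=(20e6, 40e6)):
    """Integrated backscatter in dB: power spectrum of a tissue RF segment
    relative to a reference spectrum, averaged over the analysis band."""
    freqs = np.fft.rfftfreq(len(rf), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    p_rf = np.abs(np.fft.rfft(rf)) ** 2
    p_ref = np.abs(np.fft.rfft(ref)) ** 2
    return 10.0 * np.log10(p_rf[in_band].mean() / p_ref[in_band].mean())

rng = np.random.default_rng(0)
ref_echo = rng.normal(size=512)     # stand-in reference RF segment
tissue_echo = 2.0 * ref_echo        # doubled amplitude -> about +6.02 dB
ibc = integrated_backscatter(tissue_echo, ref_echo, fs=100e6)
```

Computing this value per scan position yields the macro-molecular contrast that the IBC image adds on top of the B-mode anatomy.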


Subject(s)
Endoscopy , Radiopharmaceuticals , Colon/diagnostic imaging , Humans , Phantoms, Imaging , Ultrasonography
6.
Biomed Opt Express ; 12(12): 7765-7779, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-35003865

ABSTRACT

Otitis media (OM) is one of the most common ear diseases in children and a common reason for outpatient visits in primary care practices. Adhesive OM (AdOM) is recognized as a sequela of OM with effusion (OME) and often requires surgical intervention. OME and AdOM exhibit similar symptoms, and it is difficult to distinguish between them using a conventional otoscope in a primary care unit; diagnostic accuracy depends heavily on the experience of the examiner. Developing an advanced otoscope whose diagnostic accuracy varies less with the examiner is crucial for more accurate diagnosis. We therefore developed an intelligent smartphone-based multimode imaging otoscope for better diagnosis of OM, even in mobile environments. The system offers spectral and autofluorescence imaging of the tympanic membrane using a smartphone attached to the developed multimode imaging module. Moreover, it performs intelligent analysis, distinguishing between normal, OME, and AdOM ears using a machine learning algorithm. Using the developed system, we examined the ears of 69 patients to assess its performance in distinguishing between normal, OME, and AdOM ears. In classifying ear diseases, the multimode system with machine learning analysis achieved better accuracy and F1 scores than single RGB image analysis, RGB/fluorescence image analysis, and analysis of spectral image cubes alone. These results demonstrate that the intelligent multimode diagnostic capability of an otoscope would be beneficial for better diagnosis and management of OM.
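The abstract does not specify the feature representation or the machine learning model; purely as a schematic of multimode fusion, the sketch below concatenates hypothetical per-modality feature vectors and uses a nearest-centroid rule as a stand-in classifier (the features, centroids, and dimensions are all invented):

```python
import numpy as np

def fuse_features(rgb_feat, fluor_feat, spectral_feat):
    """Concatenate per-modality feature vectors into one multimode descriptor."""
    return np.concatenate([rgb_feat, fluor_feat, spectral_feat])

def nearest_centroid_predict(descriptor, centroids, labels):
    """Toy stand-in classifier: pick the label of the nearest class centroid."""
    dists = np.linalg.norm(centroids - descriptor, axis=1)
    return labels[int(np.argmin(dists))]

labels = ["normal", "OME", "AdOM"]
centroids = np.array([[0.0, 0.0, 0.0],   # made-up 3-D class centroids
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 1.0]])
desc = fuse_features(np.array([0.9]), np.array([0.1]), np.array([0.0]))
print(nearest_centroid_predict(desc, centroids, labels))  # -> OME
```

The abstract's comparison of single-modality versus multimode analysis amounts to restricting versus extending the fused descriptor before classification.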
