1.
Sci Rep ; 14(1): 16987, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39043724

ABSTRACT

This manuscript introduces a multi-stage image fusion framework that integrates infrared (IR) and visible (VIS) spectrum images to overcome the difficulties posed by low-light settings. The approach begins with a preprocessing stage that applies an Efficient Guided Image Filter to the IR images to sharpen edge boundaries and an enhancement function to the VIS images to boost local contrast and brightness. A two-scale decomposition with Lipschitz-constraint-based smoothing then separates each image into distinct base and detail layers, preserving essential structural information. Fusion proceeds in two stages: first, a Bayesian method combines the base layers, explicitly handling their inherent uncertainty; second, a Surface from Shade (SfS) step enforces integrability on the detail layers to preserve the scene's geometry. Finally, a Choose-Max rule selects the most prominent textural features, and the fused base and detail layers are recombined into an image with substantially improved clarity and detail. Rigorous testing substantiates the approach, showing notable gains in edge preservation, detail enhancement, and noise reduction, which makes the method well suited to real-world image-analysis applications.
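A minimal sketch of the two-scale base/detail pipeline described above, with deliberate simplifications: a Gaussian blur stands in for the Lipschitz-constrained smoothing, plain averaging replaces the Bayesian base-layer rule, and only the Choose-Max step on the detail layers matches the abstract directly. Function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_decompose(img, sigma=5.0):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = gaussian_filter(img, sigma=sigma)  # stand-in for Lipschitz-constrained smoothing
    return base, img - base

def fuse_ir_vis(ir, vis, sigma=5.0):
    """Fuse preprocessed IR and VIS images of identical shape (grayscale, 0-255)."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    base_ir, det_ir = two_scale_decompose(ir, sigma)
    base_vis, det_vis = two_scale_decompose(vis, sigma)

    # Simplified base-layer fusion: averaging as a placeholder for the Bayesian rule.
    base_fused = 0.5 * (base_ir + base_vis)

    # Choose-Max rule: keep the detail coefficient with larger magnitude per pixel.
    det_fused = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    return np.clip(base_fused + det_fused, 0, 255).astype(np.uint8)
```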

2.
Curr Med Imaging ; 20: 1-13, 2024.
Article in English | MEDLINE | ID: mdl-38389343

ABSTRACT

BACKGROUND: Modern medical imaging modalities used by clinicians have many applications in the diagnosis of complicated diseases, revealing the internal anatomy and physiology of the body. The fundamental idea behind medical image fusion is to increase an image's global and local contrast, enhance its visual impact, and convert it into a format better suited for computer processing or human viewing, while preventing noise magnification and achieving good real-time performance. OBJECTIVE: The primary goal is to combine data from images of different modalities (CT/MRI and MR-T1/MR-T2) into a single image that retains, to the greatest degree possible, the prominent features of the source images. METHODS: Clinical accuracy is compromised because many classical fusion methods struggle to preserve all the prominent features of the original images; in addition, complex implementation, high computation time, and large memory requirements are key problems of transform-domain methods. To address these problems, this research proposes a fusion framework for multimodal medical images that uses a multi-scale edge-preserving filter and visual saliency detection. The source images are decomposed by a two-scale edge-preserving filter into base and detail layers. Base layers are combined with an addition fusion rule, while detail layers are fused using weight maps constructed with the maximum symmetric surround saliency detection algorithm. RESULTS: The image produced by the proposed method achieves better objective evaluation metrics than other classical methods, with intact edge contours, greater global contrast, and no ringing effects or artifacts. CONCLUSION: The methodology offers clinically useful diagnostic, therapeutic, and biomedical research capabilities that have the potential to considerably strengthen medical practice and biological understanding.
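An illustrative sketch of the fusion rules named in the abstract (addition for base layers, saliency-derived weight maps for detail layers), under stated simplifications: a box filter stands in for the edge-preserving filter, and a crude frequency-tuned saliency estimate replaces the maximum symmetric surround detector. All names and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def saliency_map(img, sigma=3.0):
    """Rough saliency: distance of each blurred pixel from the global mean intensity."""
    return np.abs(gaussian_filter(img, sigma=sigma) - img.mean())

def fuse_multimodal(img_a, img_b, base_size=31):
    """Fuse two co-registered grayscale medical images (e.g. CT and MRI)."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)

    # Two-scale split; a box filter stands in for the edge-preserving filter.
    base_a, base_b = uniform_filter(a, base_size), uniform_filter(b, base_size)
    det_a, det_b = a - base_a, b - base_b

    # Base layers: addition rule (averaged here to keep the result in range).
    base_fused = 0.5 * (base_a + base_b)

    # Detail layers: per-pixel weight maps derived from saliency.
    s_a, s_b = saliency_map(a), saliency_map(b)
    w = s_a / (s_a + s_b + 1e-9)
    det_fused = w * det_a + (1.0 - w) * det_b

    return np.clip(base_fused + det_fused, 0, 255).astype(np.uint8)
```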


Subject(s)
Algorithms , Magnetic Resonance Imaging , Humans , Prospective Studies
3.
Curr Med Imaging ; 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38284702

ABSTRACT

BACKGROUND: A clinical medical image provides vital information about a person's health and bodily condition. Typically, doctors monitor and examine several types of medical images individually to gather supplementary information for diagnosis and treatment. Because it is difficult to analyze and diagnose from a single image, multi-modality images have been shown to improve the precision of diagnosis and the evaluation of medical conditions. OBJECTIVE: Several conventional image fusion techniques strengthen the consistency of the information by combining varied image observations; nevertheless, their failure to retain all crucial elements of the original images can negatively affect the accuracy of clinical diagnoses. This research develops an improved image fusion technique based on fine-grained saliency and an anisotropic diffusion filter that preserves the structural and detailed information of each individual image. METHOD: In contrast to prior efforts, the saliency method is not executed over a pyramidal decomposition; instead, an integral image at the original scale is used to obtain higher-quality features. An anisotropic diffusion filter is then used to decompose the source images into a base layer and a detail layer. The proposed algorithm's performance is compared with that of state-of-the-art image fusion algorithms. RESULTS: According to the results obtained, the proposed approach can not only handle the fusion of medical images well, both subjectively and objectively, but also has high computational efficiency. CONCLUSION: The work also provides a roadmap for future research.
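A minimal sketch of the base/detail decomposition step only, using a classic Perona-Malik anisotropic diffusion in place of whichever diffusion variant the paper uses; iteration count, conduction constant, and wrap-around boundary handling are assumptions made for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion: smooths flat regions while preserving strong edges."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (wrap-around boundaries).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Exponential conduction coefficients suppress diffusion across edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def decompose(img):
    """Return (base, detail) layers: diffused image plus its residual."""
    base = anisotropic_diffusion(img)
    return base, img.astype(np.float64) - base
```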

4.
Biomed Tech (Berl) ; 63(2): 191-196, 2018 Mar 28.
Article in English | MEDLINE | ID: mdl-28306516

ABSTRACT

This paper describes the utility of principal component analysis (PCA) in classifying upper-limb signals. PCA is a powerful tool for analyzing high-dimensional data. Two different input strategies were explored: the first uses dual-position myoelectric signal acquisition on the upper arm, and the second relies solely on PCA for classifying surface electromyogram (SEMG) signals. SEMG data from the biceps and triceps brachii muscles and four independent muscle activities of the upper arm were measured in seven subjects (total dataset = 56). The datasets used for the analysis are rotated by class-specific principal component matrices to decorrelate the measured data prior to feature extraction.
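A hypothetical sketch of PCA-based decorrelation of SEMG feature vectors followed by a simple classifier. The synthetic feature matrix, the choice of linear discriminant analysis, and the use of a single global PCA (the abstract describes class-specific principal component matrices) are all assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: 56 trials x 8 time-domain SEMG features (e.g. RMS, zero
# crossings per channel), labelled with 4 upper-arm activity classes.
X = rng.normal(size=(56, 8))
y = np.repeat(np.arange(4), 14)

# Rotate the measured features into a decorrelated space, then classify.
X_rot = PCA(n_components=4).fit_transform(X)
scores = cross_val_score(LinearDiscriminantAnalysis(), X_rot, y, cv=4)
print("Cross-validated accuracy on synthetic data:", scores.mean())
```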


Subject(s)
Electromyography/methods , Muscle, Skeletal/physiology , Upper Extremity/physiology , Arm , Humans , Motion , Principal Component Analysis