Results 1 - 20 of 42

1.
Sensors (Basel) ; 21(18)2021 Sep 09.
Article in English | MEDLINE | ID: mdl-34577246

ABSTRACT

Water, one of the most valuable resources, is underutilized in irrigated rice production. The yield of rice, a staple food across the world, is highly dependent on having proper irrigation systems. Alternate wetting and drying (AWD) is an effective irrigation method mainly used for irrigated rice production. However, unattended, manual, small-scale, and discrete implementations cannot achieve the maximum benefit of AWD. Automation of large-scale (over 1000 acres) implementations of AWD can be carried out using a wide-area wireless sensor network (WSN). An automated AWD system requires three different WSNs: one for water level and environmental monitoring, one for monitoring the irrigation system, and another for controlling the irrigation system. Integration of these three different WSNs requires proper dimensioning of the AWD edge elements (sensor and actuator nodes) to reduce the deployment cost and make the system scalable. Besides field-level monitoring, the integration of external control parameters, such as real-time weather forecasts, plant physiological data, and input from farmers, can further enhance the performance of the automated AWD system. The Internet of Things (IoT) can be used to interface the WSNs with external data sources. This research focuses on the dimensioning of the AWD system for multilayer WSN integration and the algorithms required for closed-loop control of the irrigation system using IoT. Implementation of AWD over 25,000 acres is shown as a possible use case. Plastic pipes are proposed as the means to transport and control proper distribution of water in the field, which significantly helps to reduce conveyance loss. This system utilizes 250 pumps, grouped into 10 clusters, to ensure equal water distribution among the users (field owners) in the wide area. The proposed automation algorithm handles the complexity of maintaining proper water pressure throughout the pipe network, scheduling the pumps, and controlling the water outlets. Mathematical models are presented for proper dimensioning of the AWD system. A low-power, long-range sensor node is developed due to the lack of cellular data coverage in rural areas, and its functionality is tested using an IoT platform in small-scale field trials.
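As a rough illustration of the kind of closed-loop scheduling logic such a system needs, the sketch below re-floods the driest fields in a cluster once their water level drops below an AWD threshold. The thresholds, field names and actuator interface are hypothetical assumptions for illustration, not the dimensioning models or algorithm from the paper.

```python
# Minimal sketch of a threshold-based AWD control loop (illustrative only).
# Field names, thresholds, and the scheduling rule are hypothetical assumptions,
# not the paper's implementation.

from dataclasses import dataclass

@dataclass
class FieldNode:
    node_id: int
    water_level_cm: float   # water depth relative to soil surface (negative = below)

REFLOOD_THRESHOLD_CM = -15.0   # AWD commonly re-irrigates when water drops ~15 cm below the surface
TARGET_DEPTH_CM = 5.0          # stop pumping once ~5 cm of standing water is restored

def schedule_cluster(nodes, pumps_available):
    """Return the node ids that should receive water this cycle,
    capped by the number of pumps free in the cluster."""
    dry = [n for n in nodes if n.water_level_cm <= REFLOOD_THRESHOLD_CM]
    dry.sort(key=lambda n: n.water_level_cm)          # driest fields first
    return [n.node_id for n in dry[:pumps_available]]

if __name__ == "__main__":
    readings = [FieldNode(1, -16.2), FieldNode(2, -3.0), FieldNode(3, -18.5)]
    print(schedule_cluster(readings, pumps_available=1))   # -> [3]
```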


Subjects
Internet of Things, Oryza, Automation, Desiccation, Water
2.
Sensors (Basel) ; 20(5)2020 Mar 06.
Article in English | MEDLINE | ID: mdl-32155829

ABSTRACT

Non-invasive determination of leaf nitrogen (N) and water contents is essential for ensuring the healthy growth of plants. However, most of the existing methods to measure them are expensive. In this paper, a low-cost, portable multispectral sensor system is proposed to determine N and water contents in leaves non-invasively. Four different plant species (canola, corn, soybean, and wheat) are used as test plants to investigate the utility of the proposed device. The sensor system comprises two multispectral sensors, visible (VIS) and near-infrared (NIR), detecting reflectance at 12 wavelengths (six from each sensor). Two separate experiments, one for N and one for water, were performed in a controlled greenhouse environment. Spectral data were collected from 307 leaves (121 for the N and 186 for the water experiment), and the rational quadratic Gaussian process regression (GPR) algorithm was applied to correlate the reflectance data with actual N and water content. With five-fold cross-validation, the N estimation showed a coefficient of determination (R²) of 63.91% for canola, 80.05% for corn, 82.29% for soybean, and 63.21% for wheat. For water content estimation, canola showed an R² of 18.02%, corn 68.41%, soybean 46.38%, and wheat 64.58%. The results reveal that the proposed low-cost sensor with an appropriate regression model can be used to determine N content. However, further investigation is needed to improve the water estimation results using the proposed device.
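The regression step described here maps directly onto off-the-shelf tooling. The sketch below shows a rational quadratic Gaussian process regression evaluated with five-fold cross-validation in scikit-learn; the reflectance matrix and nitrogen values are synthetic placeholders, not the study's data.

```python
# Illustrative sketch of the regression step: rational quadratic Gaussian process
# regression with five-fold cross-validation. Data below are synthetic placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RationalQuadratic
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(121, 12))                      # 121 leaves x 12 spectral bands
y = 2.5 * X[:, 3] - 1.2 * X[:, 8] + rng.normal(0, 0.1, 121)    # surrogate N content

model = GaussianProcessRegressor(kernel=RationalQuadratic(), normalize_y=True)
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean R^2 over 5 folds: {r2_scores.mean():.2%}")
```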


Subjects
Biosensing Techniques/economics, Biosensing Techniques/instrumentation, Cost-Benefit Analysis, Agricultural Crops/metabolism, Nitrogen/analysis, Optical Devices/economics, Plant Leaves/metabolism, Water/analysis, Light, Soil/chemistry
3.
Sensors (Basel) ; 20(3)2020 Jan 31.
Article in English | MEDLINE | ID: mdl-32023975

ABSTRACT

A minirhizotron is an in situ root imaging system that captures components of root system architecture dynamics over time. Commercial minirhizotrons are expensive, limited to white-light imaging, and often need human intervention. To be effective and widely adopted, a minirhizotron implementation needs to be low cost, automated, and customizable. We present a newly designed root imaging system called SoilCam that addresses the above-mentioned limitations. The imaging system is multi-modal, i.e., it supports both conventional white-light and multispectral imaging, with fully automated operation for long-term in situ monitoring using wireless control and access. The system is capable of taking 360° images covering the entire area surrounding the tube. The image sensor can be customized depending on the spectral imaging requirements. The maximum achievable image quality of the system is 8 MP (megapixels) per picture, which is equivalent to a 2500 DPI (dots per inch) image resolution. The deployment time in the field can be extended with a rechargeable battery and solar panel connectivity. Offline image-processing software, with several image enhancement algorithms to eliminate motion blur and geometric distortion and to reconstruct the 360° panoramic view, is also presented. The system is tested in the field by imaging canola roots to show its performance advantages over commercial systems.
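The offline panoramic reconstruction is not detailed in the abstract; as one hedged illustration of how overlapping tube images could be merged into a panorama, the sketch below uses OpenCV's scan-mode stitcher. File names are placeholders and this is not the SoilCam software.

```python
# Hedged sketch: reconstructing a panoramic strip from overlapping scan images
# with OpenCV (requires opencv-python). Not the paper's reconstruction algorithm.

import cv2

def build_panorama(image_paths):
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)   # affine model suited to flat scans
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

# usage (hypothetical file names): pano = build_panorama(["seg_000.png", "seg_001.png", "seg_002.png"])
```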


Subjects
Computer-Assisted Image Processing/methods, Plant Roots/ultrastructure, Software, Algorithms, Humans
4.
J Food Sci Technol ; 56(6): 2814-2824, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31205337

ABSTRACT

Onion is perishable and therefore subject to drying during unrefrigerated storage. Its moisture content is important for ensuring optimum quality in storage. To track and analyze the dynamics of natural dehydration in onion, and also to assess its moisture content, non-invasive and non-destructive methods are preferred. One such method is electrical impedance spectroscopy (EIS). In the first phase of our experiment, we used EIS, in which an alternating current at multiple frequencies is applied to the object (onion in this case) to generate an impedance spectrum that characterizes it. We then developed an equivalent electrical circuit representing onion characteristics using a computer-assisted optimization technique, which allows us to monitor the response of onions undergoing natural drying over a duration of 3 weeks. The developed electrical model shows better congruence with the experimentally measured impedance data than other conventional models for plant tissue, with a mean absolute error of 0.42% and a root mean squared error of 0.55%. In the second phase of our experiment, we sought a correlation between the impedance data and the actual moisture content of the onions under test (measured by weighing) and developed a mathematical model. This model provides an alternative tool for assessing the moisture content of onion non-destructively. Our model shows excellent correlation with the ground truth data, with a coefficient of determination of 0.9767, a root mean square error of 0.02976, and a sum of squared errors of 0.01329. Therefore, our two models offer plant scientists the ability to study the physiological status of onion both qualitatively and quantitatively.
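As an illustration of the circuit-fitting step, the sketch below fits a generic Cole-type impedance model to synthetic EIS data with a least-squares optimizer. The model form, parameter values and data are assumptions for demonstration; the paper derives its own equivalent circuit for onion tissue.

```python
# Hedged illustration: fitting a generic Cole-type impedance model to EIS data
# with scipy's least-squares optimizer. Model form and data are assumptions,
# not the paper's equivalent circuit.

import numpy as np
from scipy.optimize import least_squares

def cole_impedance(params, omega):
    r_inf, r0, tau, alpha = params
    return r_inf + (r0 - r_inf) / (1.0 + (1j * omega * tau) ** alpha)

def residuals(params, omega, z_measured):
    diff = cole_impedance(params, omega) - z_measured
    # stack real and imaginary parts so the optimizer sees real-valued residuals
    return np.concatenate([diff.real, diff.imag])

freqs = np.logspace(2, 5, 40)                 # 100 Hz to 100 kHz
omega = 2 * np.pi * freqs
true_params = (150.0, 900.0, 2e-4, 0.85)
z_meas = cole_impedance(true_params, omega) + np.random.default_rng(1).normal(0, 1.0, omega.size)

fit = least_squares(residuals, x0=[100.0, 500.0, 1e-4, 0.9], args=(omega, z_meas))
print("fitted parameters:", np.round(fit.x, 4))
```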

5.
Sensors (Basel) ; 18(6)2018 Jun 08.
Article in English | MEDLINE | ID: mdl-29890700

ABSTRACT

To meet the high demand for supporting and accelerating progress in the breeding of novel traits, plant scientists and breeders have to measure a large number of plants and their characteristics accurately. Imaging methodologies are being deployed to acquire data for quantitative studies of complex traits. Images are not always of good quality, in particular when they are obtained in the field. Image fusion techniques can help plant breeders access plant characteristics more conveniently by improving the definition and resolution of color images. In this work, multi-focus images were loaded, and then the similarities of visual saliency, gradient, and color distortion were measured to obtain weight maps. The maps were refined by a modified guided filter before the images were reconstructed. Canola images were obtained by a custom-built mobile platform for field phenotyping and were used for testing along with public databases. The proposed method was also tested against five common image fusion methods in terms of quality and speed. Experimental results show that the proposed technique produces well-reconstructed images both subjectively and objectively. The findings contribute a new multi-focus image fusion method, based on visual saliency maps and a gradient-domain fast guided filter, that exhibits competitive performance and outperforms some state-of-the-art methods. The proposed fusion technique can be extended to other fields, such as remote sensing and medical image fusion applications.
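To make the weight-map idea concrete, the sketch below fuses two multi-focus images using a simple Laplacian focus measure and a Gaussian-smoothed decision map. The paper's saliency, gradient and color-distortion measures and its modified guided filter are replaced by these simpler stand-ins purely for illustration.

```python
# Sketch of weight-map based multi-focus fusion (requires opencv-python).
# A Laplacian focus measure picks the sharper source per pixel; the decision map
# is smoothed before blending (the paper refines it with a modified guided filter).

import cv2
import numpy as np

def focus_measure(gray):
    # local energy of the Laplacian: larger where the image is in focus
    lap = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F, ksize=3)
    return cv2.GaussianBlur(lap * lap, (7, 7), 0)

def fuse_multifocus(img_a, img_b):
    fm_a = focus_measure(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY))
    fm_b = focus_measure(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY))
    weight = (fm_a > fm_b).astype(np.float32)           # hard decision map
    weight = cv2.GaussianBlur(weight, (21, 21), 0)       # smooth the map
    weight = weight[..., None]                            # broadcast over color channels
    fused = weight * img_a.astype(np.float32) + (1 - weight) * img_b.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# usage (hypothetical files): fused = fuse_multifocus(cv2.imread("near.png"), cv2.imread("far.png"))
```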

6.
J Med Syst ; 41(6): 102, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28526945

ABSTRACT

Modern endoscopes play a significant role in diagnosing various gastrointestinal (GI) tract related diseases, where the visual quality of endoscopic images helps improve the diagnosis. This article presents an image enhancement method for color endoscopic images that consists of three stages and is hence termed "Tri-scan" enhancement: (1) tissue and surface enhancement: a modified linear unsharp masking is used to sharpen the surface and edges of tissue and vascular characteristics; (2) mucosa layer enhancement: an adaptive sigmoid function is employed on the R plane of the image to highlight micro-vessels of the superficial layers of the mucosa and submucosa; and (3) color tone enhancement: the pixels are uniformly distributed to create an enhanced color effect that highlights the subtle micro-vessels, mucosa and tissue characteristics. The proposed method is applied to a large data set of low-contrast color white light images (WLI). The results are compared with three existing enhancement techniques: Narrow Band Imaging (NBI), Fuji Intelligent Color Enhancement (FICE) and i-scan Technology. The focus value and color enhancement factor show that the enhancement level achieved in the processed images is higher than in NBI, FICE and i-scan images.
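A hedged sketch of the three stages on a BGR frame is given below: linear unsharp masking, a sigmoid stretch of the red plane, and per-channel histogram equalization as a stand-in for the color tone stage. The gains and sigmoid constants are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch of the three "Tri-scan" stages on a BGR endoscopic frame
# (requires opencv-python). Constants are illustrative assumptions.

import cv2
import numpy as np

def tri_scan(bgr):
    img = bgr.astype(np.float32)

    # 1) tissue/surface enhancement: linear unsharp masking
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
    sharp = np.clip(img + 0.8 * (img - blurred), 0, 255)

    # 2) mucosa layer enhancement: sigmoid contrast stretch on the R plane
    r = sharp[:, :, 2] / 255.0
    sharp[:, :, 2] = 255.0 / (1.0 + np.exp(-10.0 * (r - 0.5)))

    # 3) color tone enhancement: equalize each channel's histogram
    return cv2.merge([cv2.equalizeHist(c.astype(np.uint8)) for c in cv2.split(sharp)])

# usage (hypothetical file): enhanced = tri_scan(cv2.imread("frame.png"))
```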


Subjects
Endoscopy, Color, Humans, Image Enhancement, Light
7.
Can Vet J ; 58(12): 1321-1325, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29203945

ABSTRACT

This pilot study assessed wireless capsule endoscopy in horses. Image transmission was achieved with good image quality. Time to exit the stomach was variable and identified as one limitation, together with gaps in image transmission, capsule tumbling, and the inability to accurately locate the capsule. The findings demonstrate both the usefulness and the current limitations of the technique.




Subjects
Capsule Endoscopy/veterinary, Gastrointestinal Diseases/veterinary, Horse Diseases/diagnosis, Animals, Capsule Endoscopy/instrumentation, Gastrointestinal Diseases/diagnosis, Horses, Wireless Technology
8.
Sensors (Basel) ; 14(11): 20779-99, 2014 Nov 04.
Article in English | MEDLINE | ID: mdl-25375753

ABSTRACT

In this paper, a new low-complexity, lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder that combines Golomb-Rice and unary encoding. All these components have been heavily optimized for low power and low cost, and the scheme is lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors that send pixel data in raster-scan fashion, which eliminates the need for a large buffer memory. The compression algorithm is capable of working with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field-programmable gate array (FPGA) chip. The prototype is developed using circular PCBs with a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted with the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a lossless solution for wireless capsule endoscopy with an acceptable level of compression.
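To illustrate the style of variable-length coding involved, the sketch below Golomb-Rice encodes the residuals of a simple left-neighbour predictor. The predictor, the Rice parameter k and the bit-string representation are assumptions for demonstration; the YEF conversion and hardware pipeline are not shown.

```python
# Illustrative Golomb-Rice coder for prediction residuals. The left-neighbour
# predictor and fixed k are assumptions, not the paper's hardware design.

def zigzag(value):
    # map signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4
    return (value << 1) if value >= 0 else (-value << 1) - 1

def golomb_rice_encode(residual, k=2):
    n = zigzag(residual)
    quotient, remainder = n >> k, n & ((1 << k) - 1)
    unary = "1" * quotient + "0"                 # unary part
    binary = format(remainder, f"0{k}b")          # k-bit remainder
    return unary + binary

def encode_row(pixels, k=2):
    """Predict each pixel from its left neighbour and Rice-code the residual."""
    bits, previous = [], 0
    for p in pixels:
        bits.append(golomb_rice_encode(p - previous, k))
        previous = p
    return "".join(bits)

print(encode_row([52, 55, 54, 60]))   # bitstream for a tiny example row
```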


Subjects
Algorithms, Capsule Endoscopes, Data Compression/methods, Electric Power Supplies, Image Enhancement/instrumentation, Computer-Assisted Signal Processing/instrumentation, Animals, Equipment Design, Equipment Failure Analysis, Reproducibility of Results, Sensitivity and Specificity, Swine
9.
J Med Syst ; 38(6): 57, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24859846

ABSTRACT

The state-of-the-art capsule endoscopy (CE) technology offers painless examination for patients and the ability for gastroenterologists to examine the interior of the gastrointestinal tract by a non-invasive procedure. In this work, a modular and flexible CE development system platform is designed and developed, consisting of a miniature field programmable gate array (FPGA) based electronic capsule, a microcontroller-based portable data recorder unit, and computer software. Owing to the flexible and reprogrammable nature of the system, various image processing and compression algorithms can be tested in the design without requiring any hardware change. The designed capsule prototype supports various imaging modes, including white light imaging (WLI) and narrow band imaging (NBI), and communicates with the data recorder in full-duplex fashion, which enables configuring the image size and imaging mode in real time during an examination. A low-complexity image compressor based on a novel color space is implemented inside the capsule to reduce the amount of RF transmission data. The data recorder contains a graphical LCD for real-time image viewing and SD cards for storing image data. Data can be uploaded to a computer or smartphone by SD card, USB interface, or wireless Bluetooth link. Computer software is developed that decompresses and reconstructs the images. The fabricated capsule PCBs have a diameter of 16 mm. Ex-vivo animal testing has also been conducted to validate the results.


Subjects
Capsule Endoscopy/methods, Computer-Assisted Image Processing/methods, Algorithms, Animals, Data Compression/methods, Humans
10.
J Med Syst ; 38(4): 25, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24696394

ABSTRACT

Wireless Capsule Endoscopy (WCE) is a technology in the field of endoscopic imaging that facilitates direct visualization of the entire small intestine. Many algorithms are being developed to automatically identify clinically important frames in WCE videos. This paper presents a supervised method for automated detection of bleeding regions present in WCE frames or images. The proposed method characterizes image regions using statistical features derived from the first-order histogram probabilities of the three planes of the RGB color space. Despite being inconsistent and tiresome, manual selection of regions has been a popular technique for creating training data in studies of capsule endoscopic images. We propose a semi-automatic region-annotation algorithm for creating training data efficiently. All possible combinations of different features are exhaustively analyzed to find the optimum feature set with the best performance. During operation, regions are obtained from images by applying a segmentation method. Finally, a trained neural network recognizes the patterns of the data arising from bleeding and non-bleeding regions.
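The first-order histogram features can be sketched as follows: for each RGB plane of a candidate region, the mean, standard deviation, skewness, energy and entropy of the intensity histogram are computed and concatenated. The exact feature set and the downstream neural network may differ from the paper; this is an illustrative reading of the description.

```python
# Sketch of first-order histogram statistics per RGB plane of a candidate region.
# Feature choice follows the abstract loosely; the classifier is not shown.

import numpy as np

def first_order_features(region_plane, bins=256):
    hist, _ = np.histogram(region_plane, bins=bins, range=(0, 255))
    p = hist / hist.sum()                               # first-order probability
    levels = np.arange(bins)
    mean = float((levels * p).sum())
    std = float(np.sqrt(((levels - mean) ** 2 * p).sum()))
    skew = float((((levels - mean) ** 3) * p).sum() / (std ** 3 + 1e-12))
    energy = float((p ** 2).sum())
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return [mean, std, skew, energy, entropy]

def region_feature_vector(region_rgb):
    # concatenate the statistics of the R, G and B planes -> 15-dimensional vector
    return np.hstack([first_order_features(region_rgb[:, :, c]) for c in range(3)])

# usage: x = region_feature_vector(np.random.randint(0, 256, (64, 64, 3)))
```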


Subjects
Capsule Endoscopy/methods, Hemorrhage/diagnosis, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Algorithms, Humans
11.
J Imaging ; 10(1)2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38276320

ABSTRACT

Endoscopies are helpful for examining internal organs, including the gastrointestinal tract. The endoscope device consists of a flexible tube to which a camera and light source are attached. The diagnostic process heavily depends on the quality of the endoscopic images, which is why their visual quality has a significant effect on patient care, medical decision-making, and the efficiency of endoscopic treatments. In this study, we propose an endoscopic image enhancement technique based on image fusion. Our method aims to improve the visual quality of endoscopic images by first generating, from the single input image, multiple sub-images that are complementary to one another in terms of local and global contrast. Then, each sub-image is subjected to a novel wavelet transform and guided filter-based decomposition technique. To generate the final improved image, appropriate fusion rules are applied at the end. A set of upper gastrointestinal tract endoscopic images was used in experiments to confirm the efficacy of our strategy. Both qualitative and quantitative analyses show that the proposed framework performs better than some of the state-of-the-art algorithms.

12.
R Soc Open Sci ; 11(4)2024 Apr.
Article in English | MEDLINE | ID: mdl-38601031

ABSTRACT

With the rapid development of medical imaging methods, multimodal medical image fusion techniques have caught the interest of researchers. The aim is to preserve information from diverse sensors using various models to generate a single informative image. The main challenge is to derive a trade-off between the spatial and spectral qualities of the resulting fused image and the computing efficiency. This article proposes a fast and reliable method for medical image fusion based on a multilevel guided edge-preserving filtering (MLGEPF) decomposition rule. First, each multimodal medical image is divided into three sublayer categories using the MLGEPF decomposition scheme: a small-scale component, a large-scale component and a background component. Secondly, two fusion strategies, a pulse-coupled neural network based on the structure tensor and a maximum-based rule, are applied to combine the three types of layers, based on the layers' various properties. The three types of fused sublayers are combined to create the fused image at the end. A total of 40 pairs of brain images from four separate categories of medical conditions were tested in experiments. The image pairs cover various case studies including magnetic resonance imaging (MRI), TITc, single-photon emission computed tomography (SPECT) and positron emission tomography (PET). We include a qualitative analysis to demonstrate that the visual contrast between the structure and the surrounding tissue is increased in our proposed method. To further enhance the visual comparison, we asked a group of observers to compare our method's outputs with those of other methods and score them. Overall, our proposed fusion scheme increased the visual contrast and received positive subjective reviews. Moreover, objective assessment indicators for each category of medical conditions are also included. Our method achieves high evaluation outcomes on the feature mutual information (FMI), sum of correlation of differences (SCD), Qabf and Qy indices. This implies that our fusion algorithm performs better in information preservation and in efficient structural and visual transfer.

13.
Multimed Tools Appl ; : 1-22, 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-37362715

ABSTRACT

Conventional Endoscopy (CE) and Wireless Capsule Endoscopy (WCE) are well-known tools for diagnosing gastrointestinal (GI) tract related disorders. Defining the anatomical location within the GI tract helps clinicians determine appropriate treatment options, which can reduce the need for repetitive endoscopy. Limited research addresses the localization of the anatomical location of WCE and CE images using classification, mainly due to the difficulty of collecting annotated data. In this study, we present a few-shot learning method based on distance metric learning which combines transfer learning and manifold mixup schemes to localize and classify endoscopic images and video frames. The proposed method allows us to develop a pipeline for endoscopy video sequence localization that can be trained with only a few samples. The use of manifold mixup improves learning by increasing the number of training epochs while reducing overfitting and providing more accurate decision boundaries. A dataset was collected from 10 different anatomical positions of the human GI tract. Two models were trained using only 78 CE and 27 WCE annotated frames to predict the location of 25,700 and 1825 video frames from CE and WCE, respectively. We performed a subjective evaluation with nine gastroenterologists to validate the need for such an automated system to localize endoscopic images and video frames. Our method achieved higher accuracy and a higher F1-score when compared with the scores from the subjective evaluation. In addition, the results show improved performance with less cross-entropy loss when compared with several existing methods trained on the same datasets. This indicates that the proposed method has the potential to be used in endoscopy image classification. Supplementary Information: The online version contains supplementary material available at 10.1007/s11042-023-14982-1.
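A minimal sketch of the manifold mixup ingredient is shown below: hidden representations of two samples are linearly interpolated and the loss mixes the corresponding labels. The tiny network, data and mixing layer are placeholders, not the paper's few-shot pipeline.

```python
# Minimal manifold mixup sketch (requires torch). Network, data and the layer at
# which mixing happens are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64, classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, classes)

    def forward(self, x, mixup_lambda=None, index=None):
        h = self.block1(x)
        if mixup_lambda is not None:                     # manifold mixup at a hidden layer
            h = mixup_lambda * h + (1 - mixup_lambda) * h[index]
        h = self.block2(h)
        return self.head(h)

def manifold_mixup_step(model, x, y, alpha=2.0):
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    index = torch.randperm(x.size(0))
    logits = model(x, mixup_lambda=lam, index=index)
    # mix the losses of the original and permuted labels
    return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[index])

model = SmallNet()
loss = manifold_mixup_step(model, torch.randn(16, 128), torch.randint(0, 10, (16,)))
print(loss.item())
```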

14.
PLoS One ; 18(12): e0294988, 2023.
Article in English | MEDLINE | ID: mdl-38128020

ABSTRACT

The most common cause of breast cancer-related death is tumor recurrence. To develop more effective treatments, the identification of cancer cell specific malignancy indicators is therefore critical. Lipid droplets are an emerging hallmark of aggressive breast tumors. A common technique for observing molecules in the cancer microenvironment is fluorescence microscopy. We describe the design, development and applicability of a smart fluorometer that detects lipid droplet accumulation based on the fluorescence signals emitted from highly malignant (MDA-MB-231) and mildly malignant (MCF7) breast cancer cell lines stained with BODIPY dye. This device uses a visible-range light source for excitation and a spectral sensor as the detector. A commercial imaging system was used to examine the fluorescent cancer cell lines before validating the developed prototype in a preclinical setting. The outcomes indicate that this low-cost fluorometer can effectively detect altered levels of lipid droplets and hence distinguish between mildly malignant and highly malignant cancer cells. In comparison to prior research that used fluorescence spectroscopy techniques to detect cancer biomarkers, this study revealed enhanced capability in classifying mildly and highly malignant cancer cell lines.


Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/pathology, Lipid Droplets/metabolism, Local Neoplasm Recurrence/pathology, Breast/pathology, Fluorescence Microscopy, Tumor Microenvironment
15.
Plants (Basel) ; 11(13)2022 Jun 28.
Article in English | MEDLINE | ID: mdl-35807666

ABSTRACT

Root biomass is one of the most relevant root parameters for studies of plant response to environmental change. In this work, a dynamic and adjustable electrode array sensor system is designed for developing a cost-effective, high-speed data acquisition system based on electrical impedance tomography (EIT). The developed EIT system is found to be suitable for in situ measurements and capable of monitoring changes in root growth and development with three-dimensional imaging by measuring impedances at multiple frequencies with the help of an EIT sensor. The designed EIT sensor system is assessed and calibrated using inhomogeneities in both water and soil media. The impedances are measured for multiple tap roots using an electrical impedance spectroscopy (EIS) tool connected to the sensor at frequencies ranging from 1 kHz to 100 kHz. The changes in conductivity are calculated by obtaining the boundary voltages from the measured impedances for a given stimulation current. A non-invasive imaging method is utilized, and the spectral changes are observed accordingly to evaluate the growth of the roots. A further root analysis helps us estimate the root biomass non-destructively in real time. Root size measures (such as weight and length) are correlated with the measured impedances. A regression analysis is performed using the least-squares method, and more than 97% correlation is found for the biomass estimation of carrot roots, with an RMSE of 4.516. The obtained models are later validated using a new, separate set of carrot root samples, and the accuracy of the predicted models is found to be 93% or above. A complete electrode model is utilized, and the reconstruction analysis is performed and optimized using difference imaging. The tomography of the root is reconstructed with finite element method (FEM) modeling using a one-step Gauss-Newton (GN) algorithm, carried out with the open-source electrical impedance and diffuse optical tomography reconstruction software (EIDORS).
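The biomass regression step alone can be sketched as an ordinary least-squares fit of root weight against an impedance reading, as below. The numbers are synthetic placeholders, and the EIT reconstruction itself (EIDORS, one-step Gauss-Newton) is not reproduced.

```python
# Sketch of the regression step only: least-squares fit relating impedance
# magnitude to root fresh weight. Values are hypothetical placeholders.

import numpy as np

impedance_ohm = np.array([820.0, 760.0, 705.0, 640.0, 598.0, 540.0])   # hypothetical readings
biomass_g = np.array([12.1, 15.3, 18.0, 22.4, 25.2, 29.0])              # hypothetical weights

slope, intercept = np.polyfit(impedance_ohm, biomass_g, deg=1)
predicted = slope * impedance_ohm + intercept
ss_res = np.sum((biomass_g - predicted) ** 2)
ss_tot = np.sum((biomass_g - biomass_g.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}, RMSE = {np.sqrt(ss_res / biomass_g.size):.3f} g")
```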

16.
IEEE J Biomed Health Inform ; 26(2): 515-526, 2022 02.
Article in English | MEDLINE | ID: mdl-34516382

ABSTRACT

A non-invasive fetal electrocardiogram (FECG) is used to monitor the electrical pulse of the fetal heart. Extracting the FECG signal from the maternal ECG (MECG) is a blind source separation problem, which is hard due to the low amplitude of the FECG, the overlap of R waves, and the potential exposure to noise from different sources. Traditional decomposition techniques, such as adaptive filters, require tuning, alignment, or pre-configuration, such as modeling the noise or desired signal to map the MECG to the FECG. The high correlation between maternal and fetal ECG fragments decreases the performance of convolution layers. Therefore, region-of-interest masking based on an attention mechanism was applied to improve the signal generators' precision. The sine activation function was also used to retain more detail when converting between the two signal domains. Three available datasets from Physionet, including the A&D FECG, NI-FECG, and NI-FECG challenge datasets, and one synthetic dataset generated with the FECGSYN toolbox, were used to evaluate the performance. The proposed method could map an abdominal MECG to a scalp FECG with an average of 98% R-square [95% CI: 97%, 99%] as the goodness of fit on the A&D FECG dataset. Moreover, it achieved a 99.7% F1-score [95% CI: 97.8%, 99.9%], a 99.6% F1-score [95% CI: 98.2%, 99.9%] and a 99.3% F1-score [95% CI: 95.3%, 99.9%] for fetal QRS detection on the A&D FECG, NI-FECG and NI-FECG challenge datasets, respectively. Also, the distortion was in the "very good" and "good" ranges. These results are comparable to the state-of-the-art; thus, the proposed algorithm has the potential to be used for high-performance signal-to-signal conversion.


Subjects
Fetal Monitoring, Computer-Assisted Signal Processing, Algorithms, Electrocardiography/methods, Female, Fetal Monitoring/methods, Fetus/physiology, Humans, Pregnancy
17.
Sci Rep ; 11(1): 11204, 2021 05 27.
Article in English | MEDLINE | ID: mdl-34045554

ABSTRACT

Localizing the endoscopy capsule inside the gastrointestinal (GI) system provides key information that enables GI abnormality tracking and precision medical delivery. In this paper, we propose a new method to localize the capsule inside the human GI tract. We propose to equip the capsule with four side-wall cameras and an Inertial Measurement Unit (IMU) with 9 degrees of freedom (DOF), comprising a gyroscope, an accelerometer and a magnetometer, to monitor the capsule's orientation and direction of travel. The low-resolution monochromatic cameras, installed along the side wall, are responsible for measuring the actual capsule movement, not the involuntary motion of the small intestine. Finally, a fusion algorithm is used to combine all data to derive the traveled path and plot the trajectory. Compared to other methods, the presented system is robust to surrounding conditions, such as the non-homogeneous structure of the GI tract and involuntary small bowel movements. In addition, it does not require external antennas or antenna arrays. Therefore, GI tracking can be achieved without disturbing patients' daily activities.


Subjects
Capsule Endoscopes, Capsule Endoscopy/methods, Gastrointestinal Tract, Algorithms, Equipment Design, Humans
18.
Magn Reson Imaging ; 75: 107-115, 2021 01.
Article in English | MEDLINE | ID: mdl-33148512

ABSTRACT

Motion artifacts are a common occurrence in Magnetic Resonance Imaging (MRI) exams. Motion during acquisition has a profound impact on workflow efficiency, often requiring sequences to be repeated. Furthermore, motion artifacts may escape notice by technologists, only to be revealed at the time of reading by radiologists, affecting diagnostic quality. There is a paucity of clinical tools to identify and quantitatively assess the severity of motion artifacts in MRI. An image with subtle motion may still have diagnostic value, while severe motion may be uninterpretable by radiologists and require the exam to be repeated. Therefore, a tool for the automatic identification of motion artifacts would aid in maintaining diagnostic quality while potentially driving workflow efficiencies. Here we aim to quantify the severity of motion artifacts in MRI images using deep learning. The impact of subject movement parameters, such as displacement and rotation, on image quality is also studied. A state-of-the-art stacked ensemble model was developed to classify motion artifacts into five levels (no motion, slight, mild, moderate and severe) in brain scans. The stacked ensemble model is able to robustly predict rigid-body motion severity across different acquisition parameters, including T1-weighted and T2-weighted slices acquired in different anatomical planes. The ensemble model with an XGBoost meta-learner achieves 91.6% accuracy, 94.8% area under the curve and a Cohen's kappa of 90%, and is observed to be more accurate and robust than the individual base learners.
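The stacking arrangement can be sketched with scikit-learn and xgboost as below: several base classifiers feed an XGBoost meta-learner that predicts one of five severity levels. The base learners, features and synthetic data are placeholders rather than the paper's image-derived features.

```python
# Hedged sketch of a stacked ensemble with an XGBoost meta-learner for a
# five-level severity label (requires scikit-learn and xgboost). Data are synthetic.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=XGBClassifier(eval_metric="mlogloss"),
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")
```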


Subjects
Artifacts, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging, Movement, Humans, Neuroimaging, Rotation
19.
IEEE J Transl Eng Health Med ; 8: 3300111, 2020.
Article in English | MEDLINE | ID: mdl-32190429

ABSTRACT

BACKGROUND: Computer-aided disease detection schemes for wireless capsule endoscopy (WCE) videos have received great attention from researchers as a way to reduce physicians' burden from the time-consuming and risky manual review process. While single-disease classification schemes have been extensively studied in the past, developing a unified scheme capable of detecting multiple gastrointestinal (GI) diseases is very challenging due to the highly irregular color patterns of diseased images. METHOD: In this paper, a computer-aided method is developed to detect multiple GI diseases in WCE videos using a linear discriminant analysis (LDA) based region of interest (ROI) separation scheme followed by a probabilistic model fitting approach. Commonly, because pixel-labeled images are available only in small numbers, only image-level annotations are used in the training phase for detecting diseases in WCE images, whereas pixel-level knowledge, although a major source for learning disease characteristics, is left unused. To learn the characteristic disease patterns from pixel-labeled images, a set of LDA models is trained and later used to extract the salient ROI from WCE images in both the training and testing stages. The intensity patterns of the ROI are then modeled by a suitable probability distribution, and the fitted parameters of the distribution are utilized as features in a supervised cascaded classification scheme. RESULTS: To validate the proposed multi-disease detection scheme, a set of pixel-labeled images of bleeding, ulcer and tumor is used to build the LDA models, and then a large WCE dataset is used for training and testing. A high level of accuracy is achieved even with a small number of pixel-labeled images. CONCLUSION: The proposed scheme is therefore expected to help physicians in reviewing a large number of WCE images to diagnose different GI diseases.
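The ROI-separation idea can be illustrated as follows: an LDA model trained on a handful of pixel-labelled colour samples scores every pixel of a new frame, and high-scoring pixels form the candidate region of interest. The colour features and synthetic labels below are placeholders, not the paper's trained models.

```python
# Illustration of LDA-based pixel scoring for ROI separation (requires scikit-learn).
# Training pixels and the stand-in frame are synthetic placeholders.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# pretend pixel-labelled training data: RGB triples for lesion vs. normal mucosa
lesion_px = rng.normal([170, 60, 60], 15, size=(500, 3))
normal_px = rng.normal([140, 110, 100], 15, size=(500, 3))
X = np.vstack([lesion_px, normal_px])
y = np.array([1] * 500 + [0] * 500)

lda = LinearDiscriminantAnalysis().fit(X, y)

frame = rng.integers(0, 256, size=(240, 320, 3)).astype(float)   # stand-in WCE frame
scores = lda.predict_proba(frame.reshape(-1, 3))[:, 1].reshape(240, 320)
roi_mask = scores > 0.5                                           # candidate ROI pixels
print("ROI pixel fraction:", roi_mask.mean())
```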

20.
Cancers (Basel) ; 12(4)2020 Apr 06.
Article in English | MEDLINE | ID: mdl-32268557

ABSTRACT

Wireless capsule endoscopy (WCE) has been widely used in gastrointestinal (GI) diagnosis, allowing physicians to examine the interior wall of the human GI tract through a pain-free procedure. However, several limitations of the technology restrict its functionality and, ultimately, its wide acceptance. Its counterpart, the wired endoscopic system, involves a painful procedure that discourages patients from undergoing it and adversely affects early diagnosis. Furthermore, the current generation of capsules is unable to automate the detection of abnormalities. As a result, physicians are required to spend long hours examining each image from the endoscopic capsule for abnormalities, which makes this technology tiresome and error-prone. Early detection of cancer is important to improve the survival rate in patients with colorectal cancer. Hence, a fluorescence-imaging-based endoscopic capsule that automates the detection of colorectal cancer was designed and developed in our lab. The proof of concept of this endoscopic capsule was tested on porcine intestine and a liquid phantom. The proposed WCE system offers great possibilities for future applicability in the selective and specific detection of other fluorescently labelled cancers.
