1.
J Clin Med ; 13(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39200968

ABSTRACT

Objectives: To develop a machine learning logistic regression algorithm that can classify patients with an idiopathic macular hole (IMH) into those with good or poor vision at 6 months after a vitrectomy, and to determine its accuracy and the contribution of the preoperative OCT characteristics to the algorithm. Methods: This was a single-center cohort study. The classifier was developed using the preoperative clinical information and optical coherence tomographic (OCT) findings of 43 eyes of 43 patients who had undergone a vitrectomy. The explanatory variables were selected using a filtering method based on statistical significance and variance inflation factor (VIF) values, and the objective variable was the best-corrected visual acuity (BCVA) at 6 months after surgery. The discrimination threshold for the BCVA was 0.15 logarithm of the minimum angle of resolution (logMAR) units. Results: The performance of the classifier was 0.92 for accuracy, 0.73 for recall, 0.60 for precision, 0.74 for F-score, and 0.84 for the area under the curve (AUC). In the logistic regression, the standardized regression coefficients were 0.28 for the preoperative BCVA, 0.13 for the outer nuclear layer defect length (ONL_DL), -0.21 for the difference (OPL_DL) - (ONL_DL), where OPL_DL is the outer plexiform layer defect length, and -0.17 for the ratio (OPL_DL)/(ONL_DL). Regarding MH morphology, a stenotic pattern in which the hole narrows from the OPL to the ONL had a significant effect on the postoperative BCVA at 6 months. Conclusions: Our results indicate that (OPL_DL) - (ONL_DL) contributed to the prediction of the postoperative visual acuity to a degree similar to the preoperative visual acuity. The model performed strongly, suggesting that the preoperative visual acuity and the MH characteristics in the OCT images are crucial for forecasting the postoperative visual acuity in IMH patients. Thus, it can be used to classify MH patients into groups with good or poor postoperative visual acuity, with a classification performance comparable to that of previous studies using deep learning.
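The pipeline this abstract describes can be sketched as follows. The data here are synthetic stand-ins (not the study's 43 eyes), and the VIF cutoff of 5 is a common rule of thumb, not the authors' stated value:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))   # stand-ins for preop BCVA, ONL_DL, etc.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

def vif(features):
    """Variance inflation factor of each column: 1 / (1 - R^2)."""
    out = []
    for i in range(features.shape[1]):
        others = np.delete(features, i, axis=1)
        r2 = LinearRegression().fit(others, features[:, i]).score(others, features[:, i])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

keep = vif(X) < 5.0                    # drop collinear features
clf = LogisticRegression().fit(X[:, keep], y)
prob = clf.predict_proba(X[:, keep])[:, 1]
pred = (prob >= 0.5).astype(int)       # good vs. poor vision at 6 months
auc = roc_auc_score(y, prob)
acc = accuracy_score(y, pred)
```

Features whose VIF exceeds the cutoff would be dropped before fitting, mirroring the abstract's filtering step.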

2.
PLoS One ; 19(7): e0304281, 2024.
Article in English | MEDLINE | ID: mdl-39038056

ABSTRACT

BACKGROUND: The purpose of this study was to develop a model that can predict the postoperative visual acuity in eyes that had undergone vitrectomy for an epiretinal membrane (ERM). The Light Gradient Boosting Machine (LightGBM) was used to evaluate the accuracy of the predictions and the contribution of the explanatory variables. Two models were designed to predict the postoperative visual acuity in 67 ERM patients. Model 1 used the age, sex, affected eye, axial length, preoperative visual acuity, Govetto's classification stage, and OCT-derived vector information as features to predict the visual acuity at 1, 3, and 6 months postoperatively. Model 2 incorporated the early postoperative visual acuity as an additional variable to predict the visual acuity at 3 and 6 months postoperatively. LightGBM with 100 iterations of 5-fold cross-validation was used to tune the hyperparameters and train the model; this involved addressing multicollinearity and selecting the explanatory variables. The generalization performance of the models was evaluated using the root mean squared error (RMSE) in 5-fold cross-validation, and the contributions of the explanatory variables were visualized using the average Shapley Additive exPlanations (SHAP) values. RESULTS: The RMSEs for the predicted visual acuity of Model 1 were 0.14 ± 0.02 logMAR units at 1 month, 0.12 ± 0.03 logMAR units at 3 months, and 0.13 ± 0.04 logMAR units at 6 months. High SHAP values were observed for the preoperative visual acuity and the ectopic inner foveal layer (EIFL) area, with significant positive correlations across all models. Model 2, which incorporated the early postoperative visual acuity, was used to predict the visual acuity at 3 and 6 months, and it had superior accuracy, with RMSEs of 0.10 ± 0.02 logMAR units at 3 months and 0.10 ± 0.04 logMAR units at 6 months. High SHAP values were observed for the postoperative visual acuity in Model 2.
CONCLUSION: Predicting the postoperative visual acuity in ERM patients is possible using the preoperative clinical data and OCT images with LightGBM. The contributions of the explanatory variables can be visualized using SHAP values, and the accuracy of the prediction models improves when the early postoperative visual acuity is included as an explanatory variable. Our data-driven machine learning models show that the preoperative visual acuity and the size of the EIFL significantly influence the postoperative visual acuity. Early intervention may be crucial for achieving favorable visual outcomes in eyes with an ERM.
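A minimal sketch of the Model 1 setup, with sklearn's GradientBoostingRegressor standing in for LightGBM (which may not be installed) and synthetic features in place of the clinical and OCT variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 150
X = rng.normal(size=(n, 5))            # stand-ins for age, preop BCVA, EIFL area, ...
y = 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=n)

model = GradientBoostingRegressor(random_state=0)
neg_mse = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
rmse = np.sqrt(-neg_mse)               # per-fold RMSE, analogous to logMAR units
```

Reporting `rmse.mean()` with `rmse.std()` mirrors the "RMSE ± SD" figures quoted in the abstract.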


Subjects
Epiretinal Membrane, Machine Learning, Visual Acuity, Vitrectomy, Humans, Epiretinal Membrane/surgery, Epiretinal Membrane/diagnostic imaging, Epiretinal Membrane/physiopathology, Visual Acuity/physiology, Male, Female, Aged, Middle Aged, Postoperative Period, Tomography, Optical Coherence/methods
3.
Comput Biol Med ; 179: 108902, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39038392

ABSTRACT

In the field of histopathology, many studies on the classification of whole slide images (WSIs) using artificial intelligence (AI) technology have been reported. We have studied the disease progression assessment of glioma. Adult-type diffuse gliomas, a type of brain tumor, are classified into astrocytoma, oligodendroglioma, and glioblastoma. Astrocytoma and oligodendroglioma are also called low-grade glioma (LGG), and glioblastoma is also called glioblastoma multiforme (GBM). LGG patients frequently have isocitrate dehydrogenase (IDH) mutations, and patients with IDH mutations have been reported to have a better prognosis than patients without them. IDH mutations are therefore an essential indicator for the classification of glioma, which is why we focused on the IDH1 mutation. In this paper, we aimed to classify the presence or absence of the IDH1 mutation using WSIs and clinical data of glioma patients. Ensemble learning between the WSI model and the clinical data model was used to classify the presence or absence of the IDH1 mutation. Using slide-level labels, we combined patch-based imaging information from hematoxylin and eosin (H&E) stained WSIs with clinical data, applying deep image feature extraction and a machine learning classifier to predict IDH1 mutation versus wild-type across a cohort of 546 patients. We experimented with different deep learning (DL) models, including attention-based multiple instance learning (ABMIL) models, on the imaging data, along with a gradient boosting machine (LightGBM) for the clinical variables. Further, we used hyperparameter optimization to find the best overall model in terms of classification accuracy. We obtained an area under the curve (AUC) of 0.823 for the WSIs, 0.782 for the clinical data, and a best overall AUC of 0.852 for the ensemble, using a MaxViT and LightGBM combination. Our experimental results indicate that the overall accuracy of AI models can be improved by using both clinical data and images.
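The late-fusion ensemble idea can be illustrated with two toy models whose predicted probabilities are averaged; logistic regressions on synthetic features stand in for the MaxViT/ABMIL imaging model and the LightGBM clinical model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 300
y = rng.integers(0, 2, size=n)                            # IDH1 mutant vs. wild-type
X_img = y[:, None] + rng.normal(scale=1.5, size=(n, 8))   # "WSI features"
X_cli = y[:, None] + rng.normal(scale=1.5, size=(n, 4))   # "clinical features"

p_img = LogisticRegression().fit(X_img, y).predict_proba(X_img)[:, 1]
p_cli = LogisticRegression().fit(X_cli, y).predict_proba(X_cli)[:, 1]
p_ens = (p_img + p_cli) / 2                               # simple late fusion

aucs = {name: roc_auc_score(y, p)
        for name, p in [("WSI", p_img), ("clinical", p_cli), ("ensemble", p_ens)]}
```

Averaging probabilities is only one fusion strategy; the paper tunes the combination via hyperparameter optimization.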


Subjects
Brain Neoplasms, Deep Learning, Glioma, Isocitrate Dehydrogenase, Mutation, Humans, Isocitrate Dehydrogenase/genetics, Brain Neoplasms/genetics, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Glioma/genetics, Glioma/diagnostic imaging, Glioma/pathology, Male, Female, Adult, Middle Aged
4.
Sensors (Basel) ; 23(18)2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37766058

ABSTRACT

Today, hyperspectral imaging plays an integral part in the remote sensing and precision agriculture fields. Identifying the matching key points between hyperspectral images is an important step in tasks such as image registration, localization, object recognition, and object tracking. Low-pixel-resolution hyperspectral imaging is a recent introduction to the field, bringing benefits such as lower cost and form factor compared to traditional systems. However, the limited pixel resolution challenges even state-of-the-art feature detection and matching methods, leading to difficulties in generating robust feature matches for images with repeated textures, low textures, low sharpness, and low contrast. Moreover, the narrower optics in these cameras add to the challenges during the feature-matching stage, particularly for images captured during low-altitude flight missions. To enhance the robustness of feature detection and matching in low-pixel-resolution images, in this study we propose a novel approach utilizing 3D Convolution-based Siamese networks. Compared to state-of-the-art methods, this approach takes advantage of all the spectral information available in hyperspectral imaging in order to filter out incorrect matches and produce a robust set of matches. The proposed method initially generates feature matches through a combination of Phase Stretch Transformation-based edge detection and SIFT features. Subsequently, a 3D Convolution-based Siamese network is utilized to filter out inaccurate matches, producing a highly accurate set of feature matches. Evaluation of the proposed method demonstrates its superiority over state-of-the-art approaches in cases where they fail to produce feature matches. Additionally, it competes effectively with the other evaluated methods when generating feature matches in low-pixel-resolution hyperspectral images. This research contributes to the advancement of low-pixel-resolution hyperspectral imaging techniques, and we believe it can specifically aid in mosaic generation from low-pixel-resolution hyperspectral images.
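The role of the Siamese filter, reduced to its core intuition in a numpy sketch: a candidate keypoint match is kept only if the full spectral vectors at the two locations agree. Cosine similarity and the 0.9 threshold are illustrative stand-ins for the learned 3D-convolutional embedding:

```python
import numpy as np

rng = np.random.default_rng(3)
H, W, B = 32, 32, 20                       # height, width, spectral bands
cube_a = rng.normal(size=(H, W, B))
cube_b = cube_a + rng.normal(scale=0.01, size=(H, W, B))  # near-identical scene

def spectral_sim(cube1, pt1, cube2, pt2):
    """Cosine similarity between the spectra at two pixel locations."""
    s1 = cube1[pt1[0], pt1[1]]
    s2 = cube2[pt2[0], pt2[1]]
    return float(s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2)))

# two candidate matches: one correct, one wrong
good = spectral_sim(cube_a, (5, 5), cube_b, (5, 5))
bad = spectral_sim(cube_a, (5, 5), cube_b, (20, 20))

candidates = [((5, 5), (5, 5)), ((5, 5), (20, 20))]
matches = [m for m in candidates
           if spectral_sim(cube_a, m[0], cube_b, m[1]) > 0.9]
```

The correct match survives the spectral check while the wrong one is filtered out, which is the effect the Siamese network achieves in a learned, noise-robust way.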

5.
J Clin Med ; 11(13)2022 Jul 04.
Article in English | MEDLINE | ID: mdl-35807188

ABSTRACT

The goal of this study was to determine the accuracy of a linear classifier that predicts the prognosis of patients with macular edema (ME) due to a branch retinal vein occlusion during the maintenance phase of anti-vascular endothelial growth factor (anti-VEGF) therapy. The classifier was created using the clinical information and optical coherence tomographic (OCT) findings obtained up to the time of the first resolution of the ME. In total, 66 eyes of 66 patients received an initial intravitreal injection of anti-VEGF followed by repeated injections under the pro re nata (PRN) regimen for 12 months. The patients were divided into two groups: those with and those without good vision during the PRN phase. The mean AUC of the classifier was 0.93, and the coefficients of the explanatory variables were 0.66 for the best-corrected visual acuity (BCVA) at baseline, 0.51 for the BCVA at the first resolution of the ME, 0.21 for age, -0.12 for the average brightness of the ellipsoid zone (EZ), -0.14 for the intactness of the external limiting membrane (ELM), -0.17 for the average brightness of the ELM, -0.17 for the brightness value of the EZ, -0.20 for the area of the outer segments of the photoreceptors, and -0.24 for the intactness of the EZ. This algorithm predicted the prognosis over time for individual patients during the PRN phase.
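Comparable coefficients like those listed above can be obtained by z-scoring the features before fitting a linear classifier, which puts all variables on the same scale. A synthetic sketch (the feature names in the comments are placeholders, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 200
X = rng.normal(size=(n, 3))    # stand-ins for baseline BCVA, age, EZ brightness
y = (0.8 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

Xz = StandardScaler().fit_transform(X)       # z-score each feature
clf = LogisticRegression().fit(Xz, y)
coefs = clf.coef_[0]                         # comparable across features
```

After standardization, the sign and magnitude of each coefficient indicate the direction and relative strength of that variable's contribution, as in the abstract's list.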

7.
Graefes Arch Clin Exp Ophthalmol ; 260(5): 1501-1508, 2022 May.
Article in English | MEDLINE | ID: mdl-34773490

ABSTRACT

PURPOSE: To identify the eyes with macular edema (ME) due to a branch retinal vein occlusion (BRVO) that have good visual acuity during continuous anti-vascular endothelial growth factor (anti-VEGF) treatment, based on the patients' clinical information and optical coherence tomography (OCT) images, by using machine learning. METHODS: Sixty-six eyes of 66 patients received one anti-VEGF injection followed by repeated injections under the pro re nata (PRN) regimen for 12 months. The patients were divided into two groups: those with and those without good vision during the 1-year study period. Handcrafted features were defined from the OCT images at the time of the first resolution of the ME. Variables with a significant difference between the groups were used as explanatory variables. A classifier was created from the handcrafted features based on a support vector machine (SVM), with the parameters adjusted to maximize precision. RESULTS: The age, best-corrected visual acuity (BCVA) at baseline, BCVA at the first resolution of the ME, integrity and reflectivity of the external limiting membrane (ELM) and the ellipsoid zone (EZ), and area of the outer segments of the photoreceptors were selected as explanatory variables. The classification performance was 0.806 for accuracy, 0.768 for precision, 0.772 for recall, and 0.752 for the F-measure. CONCLUSION: An SVM using the patients' clinical information and OCT images can be helpful for determining the prognosis of the BCVA during continued pro re nata anti-VEGF treatment in eyes with ME associated with BRVO.
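The "SVM with parameters adjusted to maximize precision" maps naturally onto a precision-scored grid search; here with synthetic features in place of the clinical and OCT measurements:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 160
X = rng.normal(size=(n, 6))    # stand-ins for age, baseline BCVA, ELM/EZ metrics
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.7, size=n) > 0).astype(int)

# tune C and gamma to maximize cross-validated precision
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                    scoring="precision", cv=5)
grid.fit(X, y)
pred = grid.predict(X)
scores = {"accuracy": accuracy_score(y, pred),
          "precision": precision_score(y, pred),
          "recall": recall_score(y, pred),
          "F-measure": f1_score(y, pred)}
```

The four entries of `scores` correspond to the metrics the abstract reports; with only 66 eyes, the study would compute them under cross-validation rather than in-sample.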


Subjects
Macular Edema, Retinal Vein Occlusion, Angiogenesis Inhibitors, Humans, Intravitreal Injections, Macular Edema/diagnosis, Macular Edema/drug therapy, Macular Edema/etiology, Retinal Vein Occlusion/complications, Retinal Vein Occlusion/diagnosis, Retinal Vein Occlusion/drug therapy, Retrospective Studies, Support Vector Machine, Tomography, Optical Coherence/methods, Treatment Outcome, Visual Acuity
8.
Micromachines (Basel) ; 12(4)2021 Apr 01.
Article in English | MEDLINE | ID: mdl-33915731

ABSTRACT

Several robot-related studies have been conducted in recent years; however, studies on the autonomous travel of small mobile robots in small spaces are lacking. In this study, we investigate the development of autonomous travel for small robots that need to traverse and cover an entire smooth surface, such as those employed for cleaning tables or solar panels. We consider a surface containing obstacles and propose a spiral-motion method for covering it. To achieve the spiral motion, we focus on developing autonomous obstacle avoidance, return to the original path, and fall prevention while the robot traverses the surface. The development of regular travel by a robot without an encoder is an important feature of this study; the traveled distance was estimated from the traveling time. We achieved the spiral motion by analyzing the data from multiple small sensors installed on the robot and by introducing a new attitude-control method, and we ensured that the robot returned to the original spiral path autonomously after avoiding obstacles, without falling over the edge of the surface.
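The spiral-coverage behavior can be abstracted as a grid walk that follows an inward rectangular spiral and skips obstacle cells. This toy sketch ignores the sensor-based attitude control and time-based odometry the paper actually uses:

```python
def spiral_coverage(rows, cols, obstacles=frozenset()):
    """Visit every free cell of a rows x cols grid in inward spiral order."""
    top, bottom, left, right = 0, rows - 1, 0, cols - 1
    path = []
    while top <= bottom and left <= right:
        for c in range(left, right + 1):          # top edge, left to right
            if (top, c) not in obstacles:
                path.append((top, c))
        for r in range(top + 1, bottom + 1):      # right edge, downward
            if (r, right) not in obstacles:
                path.append((r, right))
        if top < bottom:
            for c in range(right - 1, left - 1, -1):  # bottom edge, right to left
                if (bottom, c) not in obstacles:
                    path.append((bottom, c))
        if left < right:
            for r in range(bottom - 1, top, -1):      # left edge, upward
                if (r, left) not in obstacles:
                    path.append((r, left))
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return path

path = spiral_coverage(4, 4, obstacles={(1, 1)})
```

Every free cell is visited exactly once, with obstacle cells detoured around, which mirrors the paper's goal of complete surface coverage under obstacle avoidance.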

9.
Biomed Eng Lett ; 8(3): 321-327, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30603216

ABSTRACT

In the field of computational histopathology, computer-assisted diagnosis systems are important for obtaining patient-specific diagnoses of various diseases and for supporting precision medicine, and many studies on automatic analysis methods for digital pathology images have been reported. In this work, we discuss an automatic feature extraction and disease stage classification method for glioblastoma multiforme (GBM) histopathological images. We use deep convolutional neural networks (deep CNNs) to acquire feature descriptors and a classification scheme simultaneously, and we undertake objective and quantitative comparisons with other popular CNNs on this challenging classification problem. Experiments using glioma images from The Cancer Genome Atlas show that our network obtains an average classification accuracy of 96.5%, and that at higher cross-validation folds other networks perform similarly, with accuracies up to 98.0%. Deep CNNs could extract significant features from the GBM histopathology images with high accuracy. Overall, the disease stage classification of GBM from histopathological images with deep CNNs is very promising, and with the availability of large-scale histopathological image data, deep CNNs are well suited to tackling this challenging problem.
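The idea of learning feature descriptors and a classifier together can be illustrated at miniature scale: a single hand-set correlation filter with ReLU and global pooling produces one descriptor per image, and a linear classifier sits on top. A real deep CNN stacks many such stages and learns the filters; the synthetic "images" below merely contain or lack a vertical edge:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

def conv_feature(img, kernel):
    """Valid 2-D cross-correlation, then ReLU and global average pooling."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0).mean()

edge = np.array([[1.0, -1.0], [1.0, -1.0]])  # responds to vertical edges
imgs, labels = [], []
for k in range(60):
    img = rng.normal(size=(16, 16))
    if k % 2:                                # class 1: add a strong vertical edge
        img[:, :8] += 3.0
    imgs.append(img)
    labels.append(k % 2)

feats = np.array([[conv_feature(im, edge)] for im in imgs])
y = np.array(labels)
clf = LogisticRegression().fit(feats, y)
acc = clf.score(feats, y)
```

In a trained CNN the filters themselves are optimized by backpropagation instead of being fixed by hand, which is what allows the networks in the abstract to discover disease-relevant texture features.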

10.
J Digit Imaging ; 26(5): 958-70, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23546774

ABSTRACT

It is often difficult for clinicians to decide correctly between biopsy and follow-up for breast lesions that appear as masses on ultrasonographic images. The purpose of this study was to develop a computerized determination scheme for the histological classification of breast masses by using objective features corresponding to clinicians' subjective impressions of image features on ultrasonographic images. Our database consisted of 363 breast ultrasonographic images obtained from 363 patients. It included 150 malignant masses (103 invasive and 47 noninvasive carcinomas) and 213 benign masses (87 cysts and 126 fibroadenomas). We divided the database into 65 images (28 malignant and 37 benign masses) for the training set and 298 images (122 malignant and 176 benign masses) for the test set. An observer study was first conducted to obtain clinicians' subjective impressions of nine image features of each mass. In the proposed method, the location and area of the mass were determined by an experienced clinician. We defined several feature extraction methods for each of the nine image features, and for each image feature we selected the extraction method with the highest correlation coefficient between the objective features and the average of the clinicians' subjective impressions. We employed multiple discriminant analysis with the nine objective features to determine the histological classification of each mass. The classification accuracies of the proposed method were 88.4% (76/86) for invasive carcinomas, 80.6% (29/36) for noninvasive carcinomas, 86.0% (92/107) for fibroadenomas, and 84.1% (58/69) for cysts. The proposed method would be useful as a diagnostic aid in the differential diagnosis of breast masses on ultrasonographic images.
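Multiple discriminant analysis with nine features over four histological classes can be sketched with sklearn's LinearDiscriminantAnalysis; the class clusters here are synthetic, not the ultrasonographic data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
classes = ["invasive", "noninvasive", "fibroadenoma", "cyst"]
X_parts, y = [], []
for name in classes:
    centre = rng.normal(scale=3.0, size=9)       # nine objective features
    X_parts.append(centre + rng.normal(size=(40, 9)))
    y += [name] * 40

X = np.vstack(X_parts)
ynp = np.array(y)
lda = LinearDiscriminantAnalysis().fit(X, ynp)
per_class_acc = {c: lda.score(X[ynp == c], ynp[ynp == c]) for c in classes}
```

The per-class accuracies computed here parallel the four class-wise figures quoted in the abstract.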


Subjects
Breast Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Ultrasonography, Mammary/methods, Diagnosis, Differential, Discriminant Analysis, Female, Humans, Image Processing, Computer-Assisted/methods
11.
J Digit Imaging ; 25(3): 377-86, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21989574

ABSTRACT

In a computer-aided diagnosis (CADx) scheme for evaluating the likelihood of malignancy of clustered microcalcifications on mammograms, it is necessary to segment the individual calcifications correctly. The purpose of this study was to develop a computerized segmentation method for individual calcifications of various sizes that maintains their shapes in the CADx scheme. Our database consisted of 96 magnification mammograms with 96 clustered microcalcifications. In our proposed method, a mammogram image was decomposed into horizontal, vertical, and diagonal subimages for the second difference at scales 1 to 4 by using a filter bank. Enhanced subimages for nodular components (NCs) and enhanced subimages for both nodular and linear components (NLCs) were obtained from an analysis of a Hessian matrix composed of the pixel values in the second-difference subimages at each scale. At each pixel, eight objective features were given by the pixel values in the NC subimages at scales 1 to 4 and the NLC subimages at scales 1 to 4. An artificial neural network with the eight objective features was employed to enhance the calcifications on the magnification mammograms. The calcifications were finally segmented by applying a gray-level thresholding technique to the enhanced image. With the proposed method, the sensitivity for calcifications within clustered microcalcifications was 96.5% (603/625), with 1.69 false positives per image, and the average shape accuracy for the segmented calcifications was 91.4%. The proposed method, with its high sensitivity for calcifications while maintaining their shapes, would be useful in CADx schemes.
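The nodular-component enhancement step rests on Hessian analysis: at a bright blob, both second derivatives are strongly negative. A single-scale numpy sketch (the paper's filter bank, multiscale subimages, and ANN are omitted; the Gaussian blob and the threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
yy, xx = np.mgrid[0:32, 0:32]
img = rng.normal(scale=0.02, size=(32, 32))
img += np.exp(-((xx - 11) ** 2 + (yy - 11) ** 2) / 4.0)  # a bright "calcification"

# second differences approximate the diagonal Hessian entries
dxx = np.zeros_like(img)
dyy = np.zeros_like(img)
dxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
dyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]

# bright blob: both second derivatives negative, so both terms positive
blobness = np.maximum(-dxx, 0) * np.maximum(-dyy, 0)
mask = blobness > 0.5 * blobness.max()    # gray-level thresholding step
```

The `mask` isolates the blob while the flat background stays below threshold; the study's ANN combines eight such multiscale responses per pixel before the final thresholding.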


Subjects
Breast Neoplasms/diagnostic imaging, Calcinosis/diagnostic imaging, Diagnosis, Computer-Assisted/methods, Mammography/methods, Algorithms, Artificial Intelligence, Female, Humans, Neural Networks, Computer, Pattern Recognition, Automated, Radiographic Image Interpretation, Computer-Assisted, Sensitivity and Specificity