Results 1 - 17 of 17
1.
Skeletal Radiol ; 51(11): 2121-2128, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35624310

ABSTRACT

OBJECTIVE: Deep learning has the potential to automatically triage orthopedic emergencies, such as joint dislocations. However, due to the rarity of these injuries, collecting large numbers of images to train algorithms may be infeasible for many centers. We evaluated whether the Internet could be used as a source of images to train convolutional neural networks (CNNs) for joint dislocations that would generalize well to real-world clinical cases. METHODS: We collected datasets from online radiology repositories of 100 radiographs each (50 dislocated, 50 located) for four joints: native shoulder, elbow, hip, and total hip arthroplasty (THA). We trained a variety of CNN binary classifiers using both on-the-fly and static data augmentation to identify the various joint dislocations. The best-performing classifier for each joint was evaluated on an external test set of 100 corresponding radiographs (50 dislocations) from three hospitals. CNN performance was evaluated using area under the ROC curve (AUROC). To determine areas emphasized by the CNN for decision-making, class activation map (CAM) heatmaps were generated for test images. RESULTS: The best-performing CNNs for elbow, hip, shoulder, and THA dislocation achieved high AUROCs on both internal and external test sets (internal/external AUC): elbow (1.0/0.998), hip (0.993/0.880), shoulder (1.0/0.993), THA (1.0/0.950). Heatmaps demonstrated appropriate emphasis of joints for both located and dislocated joints. CONCLUSION: With modest numbers of images, radiographs from the Internet can be used to train clinically generalizable CNNs for joint dislocations. Given the rarity of joint dislocations at many centers, online repositories may be a viable source for CNN training data.
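The abstract above contrasts on-the-fly and static data augmentation. A minimal NumPy sketch of the distinction, not the authors' pipeline; the transforms and function names are illustrative assumptions:

```python
import numpy as np

def augment(image, rng):
    """Apply one random transform: horizontal flip, small random shift, or identity."""
    choice = rng.integers(3)
    if choice == 0:
        return image[:, ::-1]  # horizontal flip
    if choice == 1:
        pad = np.pad(image, 2, mode="edge")
        y, x = rng.integers(0, 5, size=2)
        return pad[y:y + image.shape[0], x:x + image.shape[1]]  # random shift
    return image

def static_augment(images, copies, seed=0):
    """Static augmentation: expand the dataset once, up front."""
    rng = np.random.default_rng(seed)
    return [augment(img, rng) for img in images for _ in range(copies)]

def batches_on_the_fly(images, epochs, seed=0):
    """On-the-fly augmentation: transform the images freshly each epoch."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        yield [augment(img, rng) for img in images]
```

Static augmentation trades disk space for simplicity; on-the-fly augmentation shows the network different variants every epoch at the cost of compute per batch.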


Subject(s)
Crowdsourcing; Deep Learning; Joint Dislocations; Algorithms; Humans; Internet
2.
Emerg Radiol ; 29(5): 801-808, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35608786

ABSTRACT

OBJECTIVE: Periprosthetic dislocations of total hip arthroplasty (THA) are time-sensitive injuries, as the longer diagnosis and treatment are delayed, the more difficult they are to reduce. Automated triage of radiographs with dislocations could help reduce these delays. We trained convolutional neural networks (CNNs) for the detection of THA dislocations and evaluated their generalizability on external datasets. METHODS: We used 357 THA radiographs from a single hospital (185 with dislocation [51.8%]) to develop and internally test a variety of CNNs to identify THA dislocation. We performed external testing of these CNNs on two datasets to evaluate generalizability. CNN performance was evaluated using area under the receiver operating characteristic curve (AUROC). Class activation mapping (CAM) was used to create heatmaps of test images for visualization of regions emphasized by the CNNs. RESULTS: Multiple CNNs achieved AUCs of 1 for both internal and external test sets, indicating good generalizability. Heatmaps showed that CNNs consistently emphasized the THA for both dislocated and located THAs. CONCLUSION: CNNs can be trained to recognize THA dislocation with high diagnostic performance, which supports their potential use for triage in the emergency department. Importantly, our CNNs generalized well to external data from two sources, further supporting their potential clinical utility.


Subject(s)
Arthroplasty, Replacement, Hip; Deep Learning; Joint Dislocations; Humans; Internet; Neural Networks, Computer; Retrospective Studies
3.
J Neuroophthalmol ; 41(3): 368-374, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34415271

ABSTRACT

BACKGROUND: To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS: Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS: During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION: In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies.
As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which, if successful, could be beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists and ophthalmologists in general in the near future.
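The sensitivity and specificity reported in this and the following abstracts reduce to simple ratios over confusion-matrix counts. A minimal sketch with hypothetical helper names (not the study's evaluation code):

```python
def confusion_counts(y_true, y_pred):
    """Count true positives, true negatives, false positives, false negatives
    for binary labels (1 = abnormal/positive, 0 = normal/negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp)
```

The external-test pattern seen here (high sensitivity, lower specificity) is typical when the operating threshold chosen on internal data does not transfer to a new population.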


Subject(s)
Algorithms; Deep Learning; Diagnostic Techniques, Ophthalmological; Optic Disk/abnormalities; Optic Nerve Diseases/diagnosis; Humans; Optic Disk/diagnostic imaging; ROC Curve
4.
Emerg Radiol ; 28(5): 949-954, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34089126

ABSTRACT

PURPOSE: To develop and test the performance of deep convolutional neural networks (DCNNs) for automated classification of age and sex on chest radiographs (CXR). METHODS: We obtained 112,120 frontal CXRs from the NIH ChestX-ray14 database performed in 48,780 females (44%) and 63,340 males (56%) ranging from 1 to 95 years old. The dataset was split into training (70%), validation (10%), and test (20%) datasets, and used to fine-tune ResNet-18 DCNNs pretrained on ImageNet for (1) determination of sex (using entire dataset and only pediatric CXRs); (2) determination of age < 18 years old or ≥ 18 years old (using entire dataset); and (3) determination of age < 11 years old or 11-18 years old (using only pediatric CXRs). External testing was performed on 662 CXRs from China. Area under the receiver operating characteristic curve (AUC) was used to evaluate DCNN test performance. RESULTS: DCNNs trained to determine sex on the entire dataset and pediatric CXRs only had AUCs of 1.0 and 0.91, respectively (p < 0.0001). DCNNs trained to determine age < or ≥ 18 years old and < 11 vs. 11-18 years old had AUCs of 0.99 and 0.96 (p < 0.0001), respectively. External testing showed AUC of 0.98 for sex (p = 0.01) and 0.91 for determining age < or ≥ 18 years old (p < 0.001). CONCLUSION: DCNNs can accurately predict sex from CXRs and distinguish between adult and pediatric patients in both American and Chinese populations. The ability to glean demographic information from CXRs may aid forensic investigations, as well as help identify novel anatomic landmarks for sex and age.
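The AUCs reported in these studies can be computed without plotting an ROC curve at all, via the Mann-Whitney formulation: AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A small NumPy sketch (illustrative, not the study's evaluation code):

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    receives the higher score, counting ties as 0.5."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form is O(P·N) but exactly matches trapezoidal integration of the ROC curve, which is what standard libraries compute.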


Asunto(s)
Aprendizaje Profundo , Radiología , Adolescente , Adulto , Anciano , Anciano de 80 o más Años , Niño , Preescolar , Femenino , Humanos , Lactante , Masculino , Persona de Mediana Edad , Redes Neurales de la Computación , Radiografía , Radiografía Torácica , Adulto Joven
5.
J Neuroophthalmol ; 40(2): 178-184, 2020 06.
Article in English | MEDLINE | ID: mdl-31453913

ABSTRACT

BACKGROUND: Deep learning (DL) has demonstrated human expert levels of performance for medical image classification in a wide array of medical fields, including ophthalmology. In this article, we present the results of our DL system designed to determine optic disc laterality, right eye vs left eye, in the presence of both normal and abnormal optic discs. METHODS: Using transfer learning, we modified the ResNet-152 deep convolutional neural network (DCNN), pretrained on ImageNet, to determine the optic disc laterality. After a 5-fold cross-validation, we generated receiver operating characteristic curves and corresponding area under the curve (AUC) values to evaluate performance. The data set consisted of 576 color fundus photographs (51% right and 49% left). Both 30° photographs centered on the optic disc (63%) and photographs with varying degree of optic disc centration and/or wider field of view (37%) were included. Both normal (27%) and abnormal (73%) optic discs were included. Various neuro-ophthalmological diseases were represented, such as, but not limited to, atrophy, anterior ischemic optic neuropathy, hypoplasia, and papilledema. RESULTS: Using 5-fold cross-validation (70% training; 10% validation; 20% testing), our DCNN for classifying right vs left optic disc achieved an average AUC of 0.999 (±0.002) with optimal threshold values, yielding an average accuracy of 98.78% (±1.52%), sensitivity of 98.60% (±1.72%), and specificity of 98.97% (±1.38%). When tested against a separate data set for external validation, our 5-fold cross-validation model achieved the following average performance: AUC 0.996 (±0.005), accuracy 97.2% (±2.0%), sensitivity 96.4% (±4.3%), and specificity 98.0% (±2.2%). CONCLUSIONS: Small data sets can be used to develop high-performing DL systems for semantic labeling of neuro-ophthalmology images, specifically in distinguishing between right and left optic discs, even in the presence of neuro-ophthalmological pathologies. 
Although this may seem like an elementary task, this study demonstrates the power of transfer learning and provides an example of a DCNN that can help curate large medical image databases for machine-learning purposes and facilitate ophthalmologist workflow by automatically labeling images according to laterality.
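Several of the studies in this list evaluate with k-fold cross-validation on class-balanced data. A minimal sketch of stratified fold assignment, so that each fold preserves the overall class balance (illustrative, assuming hashable class labels; not the authors' code):

```python
import random

def stratified_kfold(labels, k=5, seed=0):
    """Return k disjoint folds of indices, each preserving the class
    proportions of `labels` as closely as integer counts allow."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    for indices in by_class.values():
        rng.shuffle(indices)
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)  # deal indices round-robin per class
    return folds
```

Each fold then serves once as the test set while the remainder is split into training and validation, matching the 70/10/20-style protocols described above.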


Subject(s)
Algorithms; Deep Learning; Diagnostic Techniques, Ophthalmological; Machine Learning; Neurology; Ophthalmology; Optic Disk/diagnostic imaging; Optic Nerve Diseases/diagnosis; Humans; ROC Curve
6.
Skeletal Radiol ; 49(10): 1623-1632, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32415371

ABSTRACT

OBJECTIVE: To develop and evaluate the performance of deep convolutional neural networks (DCNN) to detect and identify specific total shoulder arthroplasty (TSA) models. MATERIALS AND METHODS: We included 482 radiography studies obtained from publicly available image repositories with native shoulders, reverse TSA (RTSA) implants, and five different TSA models. We trained separate ResNet DCNN-based binary classifiers to (1) detect the presence of shoulder arthroplasty implants, (2) differentiate between TSA and RTSA, and (3) differentiate between the five TSA models, using an individual classifier for each model. Datasets were divided into training, validation, and test datasets. Training and validation datasets were 20-fold augmented. Test performances were assessed with area under the receiver-operating characteristic curve (AUC-ROC) analyses. Class activation mapping was used to identify distinguishing imaging features used for DCNN classification decisions. RESULTS: The DCNN for the detection of the presence of shoulder arthroplasty implants achieved an AUC-ROC of 1.0, whereas the AUC-ROC for differentiation between TSA and RTSA was 0.97. Class activation map analysis demonstrated the emphasis on the characteristic arthroplasty components in decision-making. DCNNs trained to distinguish between the five TSA models achieved AUC-ROCs ranging from 0.86 for Stryker Solar to 1.0 for Zimmer Bigliani-Flatow, with class activation map analysis demonstrating an emphasis on unique implant design features. CONCLUSION: DCNNs can accurately identify the presence of and distinguish between TSA and RTSA, and classify five specific TSA models with high accuracy. The proof of concept of these DCNNs may set the foundation for an automated arthroplasty atlas for rapid and comprehensive model identification.
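Class activation mapping, used in this and several of the other studies, is a small computation once the network's final convolutional feature maps and the target class's fully connected weights are in hand: the heatmap is the channel-weighted sum of the feature maps (Zhou et al., 2016). A NumPy sketch; the array shapes are assumptions:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM heatmap.

    feature_maps: (K, H, W) activations from the last convolutional layer
    class_weights: (K,) fully connected weights linking each channel
                   to the target class logit
    Returns an (H, W) map normalized to [0, 1] for overlay on the image.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice the (H, W) map is upsampled to the input resolution and rendered as a color overlay, which is how the "emphasis on arthroplasty components" above is visualized.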


Subject(s)
Arthroplasty, Replacement, Shoulder; Deep Learning; Humans; Neural Networks, Computer; ROC Curve; Radiography
7.
Pediatr Radiol ; 49(8): 1066-1070, 2019 07.
Article in English | MEDLINE | ID: mdl-31041454

ABSTRACT

BACKGROUND: An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose. OBJECTIVE: To develop and test the performance of deep convolutional neural networks (DCNN) for the automated classification of pediatric musculoskeletal radiographs by anatomical area. MATERIALS AND METHODS: We utilized a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train 5 DCNNs, one to detect each anatomical region amongst the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64%), validation (12%) and test (24%) data sets. The training and validation data sets were augmented 30 times using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to evaluate DCNN performance. RESULTS: All five DCNNs trained for classification of the radiographs by anatomical region achieved an ROC AUC of 1 for both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second. CONCLUSION: DCNNs trained on a small set of images with 30 times augmentation through standard processing techniques are able to automatically classify pediatric musculoskeletal radiographs into anatomical region with near-perfect to perfect accuracy at superhuman speeds. This concept may apply to other body parts and radiographic views with the potential to create an all-encompassing semantic-labeling DCNN.


Subject(s)
Deep Learning; Musculoskeletal Diseases/diagnostic imaging; Neural Networks, Computer; Radiography/methods; Adolescent; Area Under Curve; Automation; Child; Child, Preschool; Clinical Competence; Databases, Factual; Female; Humans; Machine Learning; Male; Musculoskeletal Diseases/classification; ROC Curve; Radiologists/statistics & numerical data; Retrospective Studies; Semantics; Workflow
8.
J Digit Imaging ; 32(6): 925-930, 2019 12.
Article in English | MEDLINE | ID: mdl-30972585

ABSTRACT

Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available database of CXRs performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients, consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying the AP/PA orientation of frontal CXRs, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.


Subject(s)
Deep Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Thoracic/methods; Adult; Child; Databases, Factual; Humans; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity
9.
J Digit Imaging ; 32(4): 565-570, 2019 08.
Article in English | MEDLINE | ID: mdl-31197559

ABSTRACT

Machine learning has several potential uses in medical imaging for semantic labeling of images to improve radiologist workflow and to triage studies for review. The purpose of this study was to (1) develop deep convolutional neural networks (DCNNs) for automated classification of 2D mammography views, determination of breast laterality, and assessment of breast tissue density; and (2) compare the performance of DCNNs on these tasks of varying complexity to each other. We obtained 3034 2D-mammographic images from the Digital Database for Screening Mammography, annotated with mammographic view, image laterality, and breast tissue density. These images were used to train a DCNN to classify images for these three tasks. The DCNN trained to classify mammographic view achieved receiver-operating-characteristic (ROC) area under the curve (AUC) of 1. The DCNN trained to classify breast image laterality initially misclassified right and left breasts (AUC 0.75); however, after discontinuing horizontal flips during data augmentation, AUC improved to 0.93 (p < 0.0001). Breast density classification proved more difficult, with the DCNN achieving 68% accuracy. Automated semantic labeling of 2D mammography is feasible using DCNNs and can be performed with small datasets. However, automated classification of differences in breast density is more difficult, likely requiring larger datasets.
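The laterality result above, AUC improving from 0.75 to 0.93 once horizontal flips were removed from augmentation, has a simple mechanical explanation: a horizontal flip turns a left-sided image into a right-sided one while its label is unchanged, so the network trains on contradictory pairs. A toy NumPy demonstration on a synthetic image (not study data):

```python
import numpy as np

# Toy "mammogram": tissue occupies the left half of the frame (label: left).
left_image = np.zeros((4, 6))
left_image[:, :3] = 1.0

# The true right-laterality counterpart is its mirror image.
right_image = left_image[:, ::-1]

# A horizontal flip during augmentation produces a pixel-wise copy of the
# right image, but the training label would still say "left".
flipped_left = left_image[:, ::-1]
assert np.array_equal(flipped_left, right_image)
```

For any label that is itself defined by left-right orientation, horizontal flipping is label-destroying rather than label-preserving augmentation.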


Subject(s)
Breast Neoplasms/diagnostic imaging; Deep Learning; Mammography/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Semantics; Breast/diagnostic imaging; Female; Humans; Machine Learning
11.
Clin Imaging ; 92: 38-43, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36183620

ABSTRACT

OBJECTIVE: Joint dislocations are orthopedic emergencies that require prompt intervention. Automatic identification of these injuries could help improve timely patient care because diagnostic delays increase the difficulty of reduction. In this study, we developed convolutional neural networks (CNNs) to detect elbow and shoulder dislocations, and tested their generalizability on external datasets. METHODS: We collected 106 elbow radiographs (53 with dislocation [50 %]) and 140 shoulder radiographs (70 with dislocation [50 %]) from a level-1 trauma center. After performing 24× data augmentation on training/validation data, we trained multiple CNNs to detect elbow and shoulder dislocations, and also evaluated the best-performing models using external datasets from an external hospital and online radiology repositories. To examine CNN decision-making, we generated class activation maps (CAMs) to visualize areas of images that contributed the most to model decisions. RESULTS: On all internal test sets, CNNs achieved AUCs >0.99, and on all external test sets, CNNs achieved AUCs >0.97. CAMs demonstrated that the CNNs were focused on relevant joints in decision-making regardless of whether or not dislocations were present. CONCLUSION: Joint dislocations in both shoulders and elbows were readily identified with high accuracy by CNNs with excellent generalizability to external test sets. These findings suggest that CNNs could expedite access to intervention by assisting in diagnosing dislocations.


Subject(s)
Deep Learning; Joint Dislocations; Shoulder Dislocation; Humans; Shoulder Dislocation/diagnostic imaging; Neural Networks, Computer; Joint Dislocations/diagnostic imaging; Upper Extremity
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3353-3357, 2021 11.
Article in English | MEDLINE | ID: mdl-34891958

ABSTRACT

Small rodent cardiac magnetic resonance imaging (MRI) plays an important role in preclinical models of cardiac disease. Accurate delineation of myocardial boundaries is crucial to most morphological and functional analysis in rodent cardiac MRIs. However, rodent cardiac MRIs, due to the animal's small cardiac volume and high heart rate, are usually acquired with sub-optimal resolution and low signal-to-noise ratio (SNR). These rodent cardiac MRIs can also suffer from signal loss due to intra-voxel dephasing. These factors make automatic myocardial segmentation challenging. Manual contouring could be applied to label myocardial boundaries, but it is usually laborious, time-consuming, and not systematically objective. In this study, we present a deep learning approach based on a 3D attention M-net to perform automatic segmentation of the left ventricular myocardium. In the deep learning architecture, we use dual spatial-channel attention gates between encoder and decoder, along with a multi-scale feature fusion path after the decoder. Attention gates enable networks to focus on relevant spatial information and channel features to improve segmentation performance. A distance-derived loss term, besides the general dice loss and binary cross-entropy loss, was also introduced to our hybrid loss function to refine segmentation contours. The proposed model outperforms other generic models, like U-Net and FCN, in major segmentation metrics, including the dice score (0.9072), Jaccard index (0.8307), and Hausdorff distance (3.1754 pixels), which are comparable to the results achieved by state-of-the-art models on the human cardiac ACDC17 dataset. Clinical relevance: Small rodent cardiac MRI is routinely used to probe the effect of individual genes or groups of genes on the etiology of a large number of cardiovascular diseases. An automatic myocardium segmentation algorithm specifically designed for these data can enhance the accuracy and reproducibility of cardiac structure and function analysis.
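The dice score and Jaccard index reported above are set-overlap ratios computable directly from binary masks; dice is also the quantity the dice-loss term optimizes (loss = 1 − dice). A NumPy sketch with hypothetical function names:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Intersection over union: |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

The small `eps` keeps the ratios defined when both masks are empty; the reported dice 0.9072 and Jaccard 0.8307 are consistent with J = D / (2 − D).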


Subject(s)
Heart Ventricles; Magnetic Resonance Imaging; Animals; Attention; Heart Ventricles/diagnostic imaging; Mice; Myocardium; Reproducibility of Results
13.
Knee ; 27(2): 535-542, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31883760

ABSTRACT

BACKGROUND: Preoperative identification of knee arthroplasty is important for planning revision surgery. However, up to 10% of implants are not identified prior to surgery. The purposes of this study were to develop and test the performance of a deep learning system (DLS) for the automated radiographic 1) identification of the presence or absence of a total knee arthroplasty (TKA); 2) classification of TKA vs. unicompartmental knee arthroplasty (UKA); and 3) differentiation between two different primary TKA models. METHOD: We collected 237 anteroposterior (AP) knee radiographs with equal proportions of native knees, TKA, and UKA and 274 AP knee radiographs with equal proportions of two TKA models. Data augmentation was used to increase the number of images for deep convolutional neural network (DCNN) training. A DLS based on DCNNs was trained on these images. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were generated. Heatmaps were created using class activation mapping (CAM) to identify image features most important for DCNN decision-making. RESULTS: DCNNs trained to detect TKA and distinguish between TKA and UKA both achieved AUC of 1. Heatmaps demonstrated appropriate emphasis of arthroplasty components in decision-making. The DCNN trained to distinguish between the two TKA models achieved AUC of 1. Heatmaps showed emphasis of specific unique features of the TKA model designs, such as the femoral component anterior flange shape. CONCLUSIONS: DCNNs can accurately identify presence of TKA and distinguish between specific arthroplasty designs. This proof-of-concept could be applied towards identifying other prosthesis models and prosthesis-related complications.


Subject(s)
Arthroplasty, Replacement, Knee/classification; Decision Support Techniques; Deep Learning; Knee Joint/surgery; Osteoarthritis, Knee/surgery; Aged; Arthroplasty, Replacement, Knee/methods; Female; Humans; Knee Joint/diagnostic imaging; Male; Middle Aged; Osteoarthritis, Knee/classification; Osteoarthritis, Knee/diagnosis; Radiography; Reoperation; Treatment Outcome
14.
Genes (Basel) ; 11(1)2019 12 24.
Article in English | MEDLINE | ID: mdl-31878175

ABSTRACT

Glutathione S-transferases (GSTs)-an especially plant-specific tau class of GSTs-are key enzymes involved in biotic and abiotic stress responses. To improve the stress resistance of crops via the genetic modification of GSTs, we predicted the amino acids present in the GSH binding site (G-site) and hydrophobic substrate-binding site (H-site) of OsGSTU17, a tau class GST in rice. We then examined the enzyme activity, substrate specificity, enzyme kinetics and thermodynamic stability of the mutant enzymes. Our results showed that the hydrogen bonds between Lys42, Val56, Glu68, and Ser69 of the G-site and glutathione were essential for enzyme activity and thermal stability. The hydrophobic side chains of amino acids of the H-site contributed to enzyme activity toward 4-nitrobenzyl chloride but had an inhibitory effect on enzyme activity toward 1-chloro-2,4-dinitrobenzene and cumene hydroperoxide. Different amino acids of the H-site had different effects on enzyme activity toward a different substrate, 7-chloro-4-nitrobenzo-2-oxa-1,3-diazole. Moreover, Leu112 and Phe162 were found to inhibit the catalytic efficiency of OsGSTU17 to 7-chloro-4-nitrobenzo-2-oxa-1,3-diazole, while Pro16, Leu112, and Trp165 contributed to structural stability. The results of this research enhance the understanding of the relationship between the structure and function of tau class GSTs to improve the abiotic stress resistance of crops.


Subject(s)
Glutathione Transferase/chemistry; Glutathione Transferase/metabolism; Nitrobenzenes/metabolism; Oryza/enzymology; Benzene Derivatives/pharmacology; Binding Sites; Dinitrochlorobenzene/pharmacology; Enzyme Stability; Glutathione Transferase/drug effects; Hydrogen Bonding; Oryza/chemistry; Plant Proteins/chemistry; Plant Proteins/metabolism; Protein Binding; Substrate Specificity; Thermodynamics
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4491-4495, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946863

ABSTRACT

In this paper, we propose a new technique for interpolating shapes in order to upsample a sparsely acquired serial-section image stack. The method is based on a maximum a posteriori estimation strategy which models neighboring sections as observations of random deformations of an image to be estimated. We show the computation of diffeomorphic trajectories between observed sections and define estimated upsampled image sections as a Jacobian-weighted sum of flowing images at corresponding distances along those trajectories. We apply this methodology to upsample stacks of sparse 2D magnetic resonance cross-sections through live mouse hearts. We show that the proposed method results in smoother and more accurate reconstructions over linear interpolation, and report a Dice coefficient of 0.8727 against ground truth segmentations in our dataset and statistically significant improvements in both left ventricular segmentation accuracy and image intensity estimates.
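The linear-interpolation baseline this method is compared against simply blends the intensities of the two neighboring sections; it is the ghosting of moving boundaries in that blend that the deformation-based approach avoids. A minimal sketch of the baseline only (illustrative, not the paper's method):

```python
import numpy as np

def interpolate_slice(lower, upper, alpha):
    """Linear intensity interpolation of a new slice between two adjacent
    sections, with `alpha` the fractional position between `lower`
    (alpha = 0) and `upper` (alpha = 1).

    This blends pixel values in place rather than transporting shapes
    along a deformation, so structures that shift between sections
    appear doubled ("ghosted") in the interpolated slice.
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return (1.0 - alpha) * lower + alpha * upper
```

The proposed method instead estimates diffeomorphic trajectories between sections and evaluates a Jacobian-weighted sum of the flowing images, which is what yields the reported Dice and segmentation-accuracy gains over this baseline.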


Subject(s)
Algorithms; Heart; Magnetic Resonance Imaging; Animals; Heart/diagnostic imaging; Image Processing, Computer-Assisted; Mice; Radiography
16.
Med Clin North Am ; 91(5): 845-62, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17826105

ABSTRACT

Nanomedicine is the use of nanotechnology to achieve innovative medical breakthroughs. Nanomedicine, with its broad range of ideas, hypotheses, concepts, and undeveloped clinical devices, is still in its early stage. This article outlines present developments and future prospects for the use of nanotechnology techniques in experimental in vivo and in vitro studies and in engineering nanodevices and biosensors for clinical and investigative use in diagnosis and therapy in the fields of genetics, oncology, cardiology, and dermatology. Toxicologic considerations are also discussed.


Subject(s)
Biomedical Research/trends; Genetic Therapy/methods; Nanomedicine/methods; Nanostructures/statistics & numerical data; Animals; Biomedical Research/methods; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/therapy; Humans; Nanomedicine/trends; Neoplasms/diagnosis; Neoplasms/therapy; Nervous System Diseases/diagnosis; Nervous System Diseases/therapy; Skin Diseases/diagnosis; Skin Diseases/therapy