Results 1 - 15 of 15
2.
Retina ; 43(12): 2075-2079, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-35174805

ABSTRACT

PURPOSE: We present a new technique that allows an intraocular lens to be explanted through the small incisions used in modern cataract surgery. METHODS AND RESULTS: The intraocular lens optic is cut with scissors into three connected pieces at the 1-mm-wide end. Then, with stabilizing counterforce provided by a pair of vitreoretinal forceps through a paracentesis, the middle piece is removed first, followed by the two side pieces, which remain connected to the haptics and are flipped over at the connected part. These two parts overlap each other when passing through the incision, eventually allowing the intraocular lens to be explanted as an intact piece. CONCLUSION: We believe this method provides a simple and effective way to remove an intraocular lens through very small incisions, which could also reduce complications and hasten patients' recovery.


Subject(s)
Cataract Extraction, Lenses, Intraocular, Humans, Reoperation, Device Removal/methods, Eye
3.
IEEE Trans Pattern Anal Mach Intell ; 45(5): 5904-5917, 2023 May.
Article in English | MEDLINE | ID: mdl-36251909

ABSTRACT

Blind face restoration is a challenging task due to unknown, unsynthesizable, and complex degradation, yet it is valuable in many practical applications. To improve the performance of blind face restoration, recent works mainly treat its two aspects, i.e., generic and specific restoration, separately. In particular, generic restoration attempts to restore results through a general facial structure prior, but on the one hand it cannot generalize to real-world degraded observations, owing to the limited capability of direct CNN mappings in learning blind restoration, and on the other hand it fails to exploit identity-specific details. In contrast, specific restoration aims to incorporate identity features from a reference of the same identity, where the requirement of a proper reference severely limits the application scenarios. In general, it is a challenging and intractable task to improve the photo-realistic performance of blind restoration and to adaptively handle the generic and specific restoration scenarios with a single unified model. Instead of implicitly learning the mapping from a low-quality image to its high-quality counterpart, this paper proposes DMDNet, which explicitly memorizes generic and specific features through dual dictionaries. First, the generic dictionary learns general facial priors from high-quality images of any identity, while the specific dictionary stores the identity-belonging features of each person individually. Second, to handle a degraded input with or without a specific reference, a dictionary transform module is proposed to read the relevant details from the dual dictionaries, which are subsequently fused into the input features. Finally, multi-scale dictionaries are leveraged to benefit coarse-to-fine restoration. The whole framework, including the generic and specific dictionaries, is optimized in an end-to-end manner and can be flexibly plugged into different application scenarios.
Moreover, a new high-quality dataset, termed CelebRef-HQ, is constructed to promote the exploration of specific face restoration in the high-resolution space. Experimental results demonstrate that the proposed DMDNet performs favorably against state-of-the-art methods in both quantitative and qualitative evaluation, and generates more photo-realistic results on real-world low-quality images. The code, models, and the CelebRef-HQ dataset will be publicly available at https://github.com/csxmli2016/DMDNet.
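The dictionary-read idea behind DMDNet can be illustrated with a minimal NumPy sketch: given an input feature vector, relevant entries are read from a generic and a specific dictionary by attention-like weighting and fused back into the input. The shapes, temperature parameter, and additive fusion below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def dictionary_read(feat, dictionary, tau=1.0):
    """Read the most relevant entries from a feature dictionary.

    feat:       (d,) input feature vector
    dictionary: (K, d) matrix of stored feature entries
    Returns a (d,) vector: an attention-weighted sum of the entries.
    """
    scores = dictionary @ feat / tau          # similarity of input to each entry
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over the K entries
    return weights @ dictionary               # weighted combination of entries

rng = np.random.default_rng(0)
generic = rng.normal(size=(64, 32))   # generic dictionary: priors from any identity
specific = rng.normal(size=(8, 32))   # specific dictionary: one person's features
x = rng.normal(size=32)               # degraded input features

# Fuse details read from both dictionaries into the input features.
fused = x + dictionary_read(x, generic) + dictionary_read(x, specific)
print(fused.shape)  # → (32,)
```

In the full model this read/fuse step would be repeated at multiple feature scales to support the coarse-to-fine restoration described above.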

4.
Front Med (Lausanne) ; 9: 856800, 2022.
Article in English | MEDLINE | ID: mdl-35721099

ABSTRACT

Purpose: This study aimed to present the 1-year follow-up of a modified technique for scleral fixation of three-piece intraocular lenses (IOLs) without a conjunctival incision. Materials and Methods: A retrospective chart review was performed of a consecutive series of 10 eyes of nine patients who underwent scleral IOL fixation using the modified technique. Data were collected 1 year after surgery for all patients. Results: Follow-up time ranged from 1 year to 31 months. At the last follow-up, the IOL was well positioned and visual acuity was good (as limited by primary diseases). Short-term complications included pupillary IOL capture (n = 1) and decreased intraocular pressure (n = 1); no long-term complications were observed. Conclusion: Outcome data support this technique as a viable option for secondary IOL fixation with flexible use of a wider range of IOL designs.

5.
Comput Methods Programs Biomed ; 217: 106697, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35180678

ABSTRACT

OBJECTIVE: The purpose of this study was to model the process of liver tissue carbonization during laser ablation (LA). METHODS: A dynamic heat source model was proposed and combined with a light distribution model and a bioheat transfer model to predict the development of tissue carbonization during LA using an ex vivo porcine liver tissue model. An ex vivo laser ablation experiment on porcine liver tissues using a custom-made 1064 nm bare fiber was then used to verify the simulation results for 3, 5, and 7 W laser administrations over 5 min. The spatiotemporal temperature distribution was monitored by measuring the temperature changes at three points close to the fiber during LA. The experimental and simulated temperature, tissue carbonization zone, and ablation zone were then compared. RESULTS: Four stages were recognized in the development of liver tissue carbonization during LA. The growth of the carbonization zone along the fiber axial and radial directions differed across the four stages. The carbonization zone along the fiber axial direction (L2) grew throughout the four stages, with a sharp increase in the initial period and a minor increase in Stage 4. In contrast, the carbonization zone along the fiber radial direction (D2) increased dramatically (Stage 1), reached a long plateau (Stages 2 and 3), and then grew slowly in Stage 4. An acceptable agreement between the computer simulation and the ex vivo experiment in the temperature changes at the three points was found for all three tested laser administrations. A similar result was obtained for the dimensions of the carbonization zone and ablation zone (carbonization zone: 2.99 ± 0.10 vs. 2.78 mm², 67.39 ± 0.09 vs. 63.53 mm², and 90.53 ± 0.11 vs. 85.15 mm²; ablation zone: 68.95 ± 0.28 vs. 65.29 mm², 182.11 ± 0.24 vs. 213.81 mm², and 244.80 ± 0.06 vs. 251.79 mm² at 3, 5, and 7 W, respectively).
CONCLUSION: This study demonstrates that the proposed dynamic heat source model, combined with the light distribution model and the bioheat transfer model, can predict the development of liver tissue carbonization with acceptable accuracy. This study contributes to an improved understanding of the LA process in the treatment of liver tumors.
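The bioheat transfer component of such a simulation can be sketched as a 1-D explicit finite-difference solution of the Pennes bioheat equation. All tissue parameters and the laser power density below are illustrative placeholders, not the paper's values, and perfusion is set to zero to reflect the ex vivo setting.

```python
import numpy as np

# 1-D explicit finite-difference sketch of the Pennes bioheat equation:
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b*(T_a - T) + Q_laser
rho, c = 1060.0, 3600.0          # tissue density (kg/m^3), specific heat (J/kg/K)
k = 0.52                         # thermal conductivity (W/m/K)
wb, cb, Ta = 0.0, 3600.0, 37.0   # perfusion ~0 in ex vivo tissue
n, dx, dt = 101, 1e-4, 0.005     # grid points, spacing (m), time step (s)

T = np.full(n, 37.0)             # initial tissue temperature (deg C)
Q = np.zeros(n)
Q[1] = 5e6                       # absorbed laser power density near fiber tip (W/m^3), placeholder

for _ in range(int(10 / dt)):    # simulate 10 s of heating
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt / (rho * c) * (k * lap + wb * cb * (Ta - T) + Q)
    T[0] = T[1]                  # insulated boundary at the fiber
    T[-1] = 37.0                 # far boundary held at body temperature

print(round(T[1], 1))            # temperature near the fiber tip after 10 s
```

The explicit scheme is stable here because dt < dx²·ρc/(2k); a real carbonization model would additionally switch the optical and thermal properties as the tissue passes through the four stages described above.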


Subject(s)
Laser Therapy, Animals, Computer Simulation, Hot Temperature, Laser Therapy/methods, Lasers, Liver/surgery, Swine
6.
Entropy (Basel) ; 22(6)2020 Jun 06.
Article in English | MEDLINE | ID: mdl-33286401

ABSTRACT

Due to the complexity of wind speed, it has been reported that mixed-noise models, constituted by multiple noise distributions, perform better than single-noise models. However, most existing regression models suppose that the noise distribution is single. Therefore, we study least-squares SVR under Gaussian-Laplacian mixed homoscedastic noise (GLM-LSSVR) and heteroscedastic noise (GLMH-LSSVR) for complicated or unknown noise distributions. The augmented Lagrangian multiplier (ALM) technique is used to solve the GLM-LSSVR model. GLM-LSSVR is then used to predict short-term wind speed from historical data. The prediction results indicate that the presented model is superior to the single-noise model and performs well.
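Ordinary LS-SVR, the starting point of the mixed-noise variants above, reduces to solving one linear KKT system; per-sample weights on the diagonal are one simple way to approximate heteroscedastic or mixed noise. The NumPy sketch below is a generic LS-SVR under stated assumptions (RBF kernel, synthetic sine data standing in for a wind-speed series), not the paper's GLM-LSSVR formulation.

```python
import numpy as np

def lssvr_fit_predict(X, y, Xq, gamma=10.0, sigma=1.0, weights=None):
    """Least Squares SVR: solve the KKT linear system
       [0   1^T         ] [b    ]   [0]
       [1   K + D/gamma ] [alpha] = [y]
    where D = diag(1/w_i); per-sample weights w_i allow noise-dependent
    reweighting (uniform weights give the standard homoscedastic model)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    n = len(y)
    w = np.ones(n) if weights is None else weights
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.diag(1.0 / (gamma * w))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return rbf(Xq, X) @ alpha + b

rng = np.random.default_rng(1)
X = rng.uniform(0, 6, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)   # synthetic noisy series
yq = lssvr_fit_predict(X, y, np.array([[np.pi / 2]]))
print(round(yq[0], 2))  # should be close to sin(pi/2) = 1
```

The mixed-noise models would replace the uniform diagonal with weights derived from the Gaussian-Laplacian noise density and solve the resulting problem with ALM rather than a single dense solve.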

7.
Entropy (Basel) ; 22(10)2020 Sep 29.
Article in English | MEDLINE | ID: mdl-33286871

ABSTRACT

In this article, it was observed that the noise in some real-world applications, such as wind power forecasting and the direction-of-arrival estimation problem, does not follow a single noise distribution, such as the Gaussian or Laplace distribution, but rather a mixed distribution. Therefore, by combining twin hyperplanes with the fast speed of Least Squares Support Vector Regression (LS-SVR) and introducing a Gauss-Laplace mixed noise feature, a new regressor for complex noise, called Gauss-Laplace Twin Least Squares Support Vector Regression (GL-TLSSVR), is proposed. Subsequently, we apply the augmented Lagrangian multiplier method to solve the proposed model. Finally, we apply the proposed model to a short-term wind-speed data set. The results of this experiment confirm the effectiveness of the proposed model.

8.
Sci Rep ; 9(1): 8978, 2019 06 20.
Article in English | MEDLINE | ID: mdl-31222027

ABSTRACT

For DNA microarray datasets, tumor classification based on gene expression profiles has drawn great attention, and gene selection plays a significant role in improving the classification performance of microarray data. In this study, an effective hybrid gene selection method for tumor classification, based on ReliefF and the ant colony optimization (ACO) algorithm, is proposed. First, for the ReliefF algorithm, the average distance among the k nearest or k non-nearest neighbor samples is introduced to estimate the difference among samples, based on which the distances between samples in the same class or in different classes are defined; the algorithm can then evaluate the weight values of genes for samples more effectively. To obtain stable results, a distance coefficient is developed to construct a new formula for updating the weight coefficients of genes, further reducing instability during calculations. By decreasing the distance between samples of the same class and increasing the distance between samples of different classes, the weight division becomes more pronounced. Thus, the improved ReliefF algorithm can reduce the initial dimensionality of gene expression datasets and obtain a candidate gene subset. Second, a new pruning rule is designed to reduce dimensionality and obtain a new candidate subset with a smaller number of genes. A probability formula for the next point in the path selected by the ants is presented to highlight the closeness of the correlation between the reaction variables. To increase the pheromone concentration of important genes, a new pheromone updating formula for the ACO algorithm is adopted to prevent the pheromone left by the ants from being overwhelmed with time, and the weight coefficients of the genes are applied here to eliminate the interference of divergent data as much as possible.
It follows that the improved ACO algorithm has strong positive feedback, which quickly converges to an optimal solution through the accumulation and updating of pheromone. Finally, by combining the improved ReliefF algorithm and the improved ACO method, a hybrid filter-wrapper gene selection algorithm called RFACO-GS is proposed. The experimental results on several public gene expression datasets demonstrate that the proposed method is very effective: it can significantly reduce the dimensionality of gene expression datasets and select the most relevant genes with high classification accuracy.
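The classical ReliefF weight update that the paper modifies can be sketched as follows: a feature's weight rises when it differs on nearest misses (other classes) and falls when it differs on nearest hits (same class). The toy data is synthetic, and this is the textbook algorithm, without the paper's distance coefficient or ACO stage.

```python
import numpy as np

def relieff(X, y, k=3, n_iter=50, seed=0):
    """Classical ReliefF: raise a feature's weight when it separates
    classes (differs on nearest misses) and lower it when it varies
    within a class (differs on nearest hits)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(0) - X.min(0) + 1e-12      # per-feature normalization
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)                 # random reference sample
        diff = np.abs(X - X[i]) / span      # normalized per-feature diffs
        dist = diff.sum(1)
        dist[i] = np.inf                    # exclude the sample itself
        same = y == y[i]
        hits = np.argsort(np.where(same, dist, np.inf))[:k]
        miss = np.argsort(np.where(~same, dist, np.inf))[:k]
        w += diff[miss].mean(0) - diff[hits].mean(0)
    return w / n_iter

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 100)
X = rng.normal(size=(100, 5))
X[:, 0] += 3 * y            # feature 0 is informative; the rest are noise
w = relieff(X, y)
print(int(np.argmax(w)))    # the informative "gene" should get the top weight
```

In the hybrid RFACO-GS pipeline, the weights from this filter stage would prune the gene pool before the ACO wrapper searches the reduced subset.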


Subject(s)
Algorithms, Biomarkers, Tumor, Computational Biology/methods, Neoplasms/diagnosis, Neoplasms/genetics, Oligonucleotide Array Sequence Analysis/methods, Oncogene Proteins, Fusion/genetics, Computational Biology/standards, Humans, Oligonucleotide Array Sequence Analysis/standards, Reproducibility of Results
9.
Entropy (Basel) ; 21(2)2019 Feb 01.
Article in English | MEDLINE | ID: mdl-33266854

ABSTRACT

For continuous numerical data sets, attribute reduction based on neighborhood rough sets is an important step for improving classification performance. However, most traditional reduction algorithms can only handle finite sets and yield low accuracy and high cardinality. In this paper, a novel attribute reduction method using Lebesgue and entropy measures in neighborhood rough sets is proposed, which can deal with continuous numerical data whilst maintaining the original classification information. First, the Fisher score method is employed to eliminate irrelevant attributes and thereby significantly reduce computational complexity for high-dimensional data sets. Then, the Lebesgue measure is introduced into neighborhood rough sets to investigate uncertainty measures. To analyze the uncertainty and noisiness of neighborhood decision systems, some neighborhood entropy-based uncertainty measures are presented based on Lebesgue and entropy measures, and, by combining the algebraic view with the information view in neighborhood rough sets, a neighborhood roughness joint entropy is developed for neighborhood decision systems. Moreover, some of their properties are derived and the relationships among them are established, which helps in understanding the essence of knowledge and the uncertainty of neighborhood decision systems. Finally, a heuristic attribute reduction algorithm is designed to improve the classification performance of large-scale complex data. The experimental results on an illustrative instance and several public data sets show that the proposed method is very effective for selecting the most relevant attributes with high classification accuracy.
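The basic objects underlying these measures are δ-neighborhoods and a neighborhood entropy over them. The sketch below shows one common form of neighborhood entropy on synthetic data; it is a generic illustration under stated assumptions (Euclidean distance, uniform random points), not the paper's Lebesgue-based construction.

```python
import numpy as np

def neighborhoods(X, delta):
    """Boolean (n, n) matrix: entry [i, j] is True when sample j lies
    in the delta-neighborhood of sample i (Euclidean distance)."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return d <= delta

def neighborhood_entropy(X, delta):
    """A common neighborhood entropy: -(1/n) * sum_i log(|n_i| / n),
    where |n_i| is the size of sample i's delta-neighborhood."""
    nb = neighborhoods(X, delta)
    n = len(X)
    return -np.mean(np.log(nb.sum(1) / n))

rng = np.random.default_rng(3)
X = rng.uniform(size=(50, 4))   # 50 samples, 4 continuous attributes

# Entropy falls as delta grows: larger neighborhoods carry less information.
e_small = neighborhood_entropy(X, 0.3)
e_big = neighborhood_entropy(X, 1.0)
print(e_small > e_big)  # → True
```

A reduction algorithm of the kind described above would evaluate such an entropy on candidate attribute subsets and greedily keep the attributes that preserve the decision information.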

10.
Entropy (Basel) ; 21(2)2019 Feb 07.
Article in English | MEDLINE | ID: mdl-33266871

ABSTRACT

Attribute reduction is an important preprocessing step for data mining and has become a hot research topic in rough set theory. Neighborhood rough set theory can overcome the shortcoming that classical rough set theory may lose useful information when discretizing continuous-valued data sets. In this paper, to improve the classification performance of complex data, a novel attribute reduction method is proposed that uses neighborhood entropy measures, combining the algebraic view with the information view in neighborhood rough sets; it can deal with continuous data whilst maintaining the classification information of the original attributes. First, to efficiently analyze the uncertainty of knowledge in neighborhood rough sets, a new average neighborhood entropy is presented by combining neighborhood approximation precision with neighborhood entropy, based on the strong complementarity between the algebraic definition of attribute significance and its information-view definition. Then, a concept of decision neighborhood entropy is investigated for handling the uncertainty and noisiness of neighborhood decision systems, which integrates the credibility degree with the coverage degree of neighborhood decision systems to fully reflect the decision ability of attributes. Moreover, some of their properties are derived and the relationships among these measures are established, which helps in understanding the essence of knowledge content and the uncertainty of neighborhood decision systems. Finally, a heuristic attribute reduction algorithm is proposed to improve the classification performance of complex data sets. The experimental results on an illustrative instance and several public data sets demonstrate that the proposed method is very effective for selecting the most relevant attributes with great classification performance.

11.
BMC Bioinformatics ; 18(1): 300, 2017 Jun 12.
Article in English | MEDLINE | ID: mdl-28606086

ABSTRACT

BACKGROUND: DNA-binding proteins perform important functions in a great number of biological activities. They can interact with ssDNA (single-stranded DNA) or dsDNA (double-stranded DNA) and can accordingly be categorized as single-stranded DNA-binding proteins (SSBs) or double-stranded DNA-binding proteins (DSBs). Identifying DNA-binding proteins from amino acid sequences can help annotate protein functions and understand binding specificity. In this study, we systematically consider a variety of schemes to represent protein sequences: OAAC (overall amino acid composition) features, dipeptide compositions, PSSM (position-specific scoring matrix) profiles, and split amino acid composition (SAA). We then adopt SVM (support vector machine) and RF (random forest) classification models to distinguish SSBs from DSBs. RESULTS: Our results suggest that some sequence features can significantly differentiate DSBs and SSBs. Evaluated by 10-fold cross-validation on the benchmark datasets, our prediction method achieves an accuracy of 88.7% and an AUC (area under the curve) of 0.919. Moreover, our method performs well in independent testing. CONCLUSIONS: Using various sequence-derived features, a novel method is proposed to distinguish DSBs and SSBs accurately. The method also explores novel features, which could be helpful for discovering the binding specificity of DNA-binding proteins.
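Two of the feature schemes named above, OAAC and dipeptide composition, are simple frequency vectors over residues and residue pairs. A minimal pure-Python sketch on a toy sequence (the SVM/RF classifier itself is omitted):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def oaac(seq):
    """Overall amino acid composition: the frequency of each of the 20
    standard residues, giving a 20-dimensional feature vector."""
    seq = seq.upper()
    n = sum(seq.count(a) for a in AMINO_ACIDS) or 1
    return [seq.count(a) / n for a in AMINO_ACIDS]

def dipeptide(seq):
    """Dipeptide composition: frequencies of all 400 ordered residue
    pairs over the sequence's overlapping 2-mers."""
    seq = seq.upper()
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    n = len(pairs) or 1
    return [sum(p == a + b for p in pairs) / n
            for a in AMINO_ACIDS for b in AMINO_ACIDS]

feat = oaac("MKTAYIAKQR") + dipeptide("MKTAYIAKQR")   # toy sequence
print(len(feat))  # → 420 features (20 OAAC + 400 dipeptide)
```

Such fixed-length vectors are what make variable-length protein sequences usable as inputs to SVM or random forest classifiers.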


Subject(s)
Computational Biology/methods, DNA, Single-Stranded/metabolism, DNA-Binding Proteins, DNA/metabolism, Sequence Analysis, Protein/methods, Amino Acid Sequence, DNA-Binding Proteins/chemistry, DNA-Binding Proteins/genetics, DNA-Binding Proteins/metabolism, Protein Binding, Support Vector Machine
12.
Neural Netw ; 57: 1-11, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24874183

ABSTRACT

Support vector regression (SVR) techniques aim to discover a linear or nonlinear structure hidden in sample data. Most existing regression techniques assume that the error distribution is Gaussian. However, it has been observed that the noise in some real-world applications, such as wind power forecasting and the direction-of-arrival estimation problem, does not follow a Gaussian distribution but rather a beta distribution, Laplacian distribution, or other model. In these cases the current regression techniques are not optimal. Following the Bayesian approach, we derive a general loss function and develop a unified ν-support vector regression technique for the general noise model (N-SVR). The augmented Lagrange multiplier method is introduced to solve N-SVR. Numerical experiments on artificial data sets, UCI data, and short-term wind speed prediction are conducted. The results show the effectiveness of the proposed technique.
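The Bayesian link between the noise density and the loss function can be stated in one line: under MAP estimation, the loss matched to a noise density $p(e)$ is its negative log-likelihood. This is the standard argument behind such general-noise constructions, shown here for two familiar cases:

```latex
L(e) = -\log p(e)
\qquad\Rightarrow\qquad
\begin{cases}
p(e) \propto e^{-e^{2}/2\sigma^{2}} & \Rightarrow\ L(e) \propto e^{2} \ \text{(squared loss, Gaussian noise)}\\[4pt]
p(e) \propto e^{-|e|/\sigma} & \Rightarrow\ L(e) \propto |e| \ \text{(absolute loss, Laplacian noise)}
\end{cases}
```

A beta or other noise model therefore induces its own loss, which N-SVR plugs into the ν-SVR framework in place of the usual ε-insensitive loss.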


Subject(s)
Support Vector Machine, Wind, Bayes Theorem, Meteorology/methods, Models, Theoretical, Normal Distribution, Regression Analysis
13.
World J Gastroenterol ; 16(6): 755-63, 2010 Feb 14.
Article in English | MEDLINE | ID: mdl-20135726

ABSTRACT

AIM: To investigate the prevalence and timing of Dickkopf (DKK) family methylation and its clinical significance in hepatocarcinogenesis. METHODS: Methylation of DKK family genes was quantitatively analyzed in 115 liver tissue samples, including 50 pairs of primary hepatocellular carcinoma (HCC) and matched noncancerous cirrhotic tissue samples, as well as 15 liver cirrhosis biopsy samples. RESULTS: The methylation level of DKK3 was significantly higher in HCC tissue samples than in matched noncancerous cirrhotic tissue samples (P < 0.0001) or in liver cirrhosis biopsy samples (P = 0.0139). Receiver operating characteristic curve analysis confirmed that the percent of methylated reference (PMR) values of DKK3 could effectively discriminate HCC tissue samples from noncancerous tissue samples (AUC = 0.8146) or liver cirrhosis biopsy samples (AUC = 0.7093). Kaplan-Meier survival curves revealed that the progression-free survival time of patients with a higher DKK3 methylation level (PMR > 1%) was significantly shorter than that of patients with a lower DKK3 methylation level (PMR ≤ 1%) (P = 0.0255). Multivariate Cox analysis indicated that methylated DKK3 was significantly and independently associated with a shorter survival time (relative risk = 2.527, 95% CI: 1.063-6.008, P = 0.036) in HCC patients. CONCLUSION: Methylation of DKK3 is an important event in early malignant transformation and HCC progression, and might therefore be a prognostic indicator for risk assessment of HCC.
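The ROC AUC used above to discriminate tumor from noncancerous samples is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal pure-Python sketch on synthetic PMR values (illustrative only, not the study's data):

```python
def roc_auc(pos, neg):
    """ROC AUC as the Mann-Whitney probability that a positive case
    scores above a negative one, counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic PMR (percent of methylated reference) values -- illustrative only.
hcc_pmr = [5.2, 8.1, 0.4, 12.3, 3.3, 9.8]        # tumor samples
cirrhotic_pmr = [0.2, 1.1, 0.6, 0.1, 2.4, 0.3]   # matched noncancerous

print(round(roc_auc(hcc_pmr, cirrhotic_pmr), 3))  # → 0.917
```

An AUC near 1 means the marker separates the two groups almost perfectly; the study's reported 0.8146 indicates good but imperfect discrimination.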


Subject(s)
Biomarkers, Tumor/metabolism, Carcinoma, Hepatocellular/diagnosis, Carcinoma, Hepatocellular/metabolism, Intercellular Signaling Peptides and Proteins/metabolism, Liver Cirrhosis/complications, Liver Neoplasms/diagnosis, Liver Neoplasms/metabolism, Adaptor Proteins, Signal Transducing, Biopsy, Carcinoma, Hepatocellular/etiology, Chemokines, Female, Humans, Kaplan-Meier Estimate, Liver/metabolism, Liver/pathology, Liver Neoplasms/etiology, Male, Methylation, Middle Aged, Multivariate Analysis, Prognosis, ROC Curve, Retrospective Studies, Risk Assessment
14.
Mol Biol Rep ; 37(1): 381-7, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19757161

ABSTRACT

Genome copy number variation (CNV) is one of the mechanisms that regulate the expression level of genes, which contributes to the development and progression of cancer. To investigate regions of high-level amplification and potential target genes within these amplicons in hepatocellular carcinoma (HCC), we analyzed the HCC cell line TJ3ZX-01 for CNV regions at the whole-genome level using the GeneChip Human Mapping 500K array, and also examined the relative copy number and expression levels of related genes at candidate amplicons in 41 HCC tissues via real-time fluorescence quantitative PCR. Through analysis of sequence tag site (STS) markers by quantitative PCR, the two candidate amplicons at 1q found by the SNP array were shown to occur in 56.1% (23/41) of HCC samples at 1q21 and 80.5% (33/41) at 1q22-23.1. The Wilcoxon signed-rank test showed that expression of CD1d, which is located in the amplicon at 1q22-23.1, increased significantly in tumor tissues compared with paired nontumor tissues. Our study provides evidence that a novel, high-level amplicon at 1q22-23.1 occurs in both the HCC cell line and tissues. CD1d is a potential target gene for this amplicon in HCC, and its up-regulation may serve as a novel molecular signature for the diagnosis and prognosis of HCC.


Subject(s)
Antigens, CD1d/genetics, Carcinoma, Hepatocellular/genetics, Liver Neoplasms/genetics, Carcinoma, Hepatocellular/pathology, Chromosomes, Human, Pair 1/genetics, DNA Copy Number Variations/genetics, Female, Gene Expression Profiling, Gene Expression Regulation, Neoplastic, Humans, Liver Neoplasms/pathology, Male, Microsatellite Repeats/genetics, Middle Aged, Oligonucleotide Array Sequence Analysis, Polymorphism, Single Nucleotide/genetics
15.
Zhonghua Zhong Liu Za Zhi ; 31(8): 566-70, 2009 Aug.
Article in Chinese | MEDLINE | ID: mdl-20021941

ABSTRACT

OBJECTIVE: To screen and determine regions of copy number variation (CNV) associated with hepatocellular carcinoma (HCC) using an SNP array and fluorescence quantitative PCR. METHODS: CNVs in the HCC cell line TJ3ZX-01 were analyzed using the GeneChip Human Mapping 500K SNP array. Based on the SNP array data, four candidate amplification regions were verified in 41 primary HCC samples by fluorescence quantitative PCR. RESULTS: Four regions of copy number amplification, at 1q21.2, 1q22~23.1, 7p22.1, and 22q13.1, were detected by SNP array analysis. Analyzed with sequence tagged site (STS) markers by quantitative PCR, the four candidate amplicons occurred in 56.1% (23/41) of HCC samples at 1q21.2, 80.5% (33/41) at 1q22~23.1, 75.6% (31/41) at 7p22.1, and 31.7% (13/41) at 22q13.1. CONCLUSION: Of the four candidate amplification regions selected by SNP array analysis and verified by fluorescence quantitative PCR, three showed an increased copy number in more than 50.0% of HCC tissues, indicating that these amplification regions are associated with the pathogenesis of hepatocellular carcinoma.


Subject(s)
Carcinoma, Hepatocellular/genetics, DNA Copy Number Variations/genetics, Liver Neoplasms/genetics, Carcinoma, Hepatocellular/pathology, Cell Line, Tumor, Chromosomes, Human, Pair 1/genetics, Chromosomes, Human, Pair 22/genetics, Chromosomes, Human, Pair 7/genetics, Female, Humans, Liver Neoplasms/pathology, Male, Middle Aged, Oligonucleotide Array Sequence Analysis, Polymerase Chain Reaction/methods, Polymorphism, Single Nucleotide, Sequence Tagged Sites