Results 1 - 20 of 24,330
1.
Zootaxa ; 4926(3): zootaxa.4926.3.7, 2021 Feb 09.
Article in English | MEDLINE | ID: mdl-33756743

ABSTRACT

Mallophora Macquart, 1834 is a bee-mimicking genus of Asilidae with more than 50 described species in the Neotropical Region. Examination of specimens of this genus from Colombia indicates that there are two undescribed species, based on the structure of the hind leg of males. Here we describe Mallophora gauteovan sp. nov. and Mallophora kalos sp. nov. from Tayrona National Park (Magdalena) and Arauca, respectively. For each new species, a diagnosis and a description covering the structure of the face, thorax, male hind leg, abdomen, and hypandrium are provided. All morphological structures are documented with digital photographs.


Subject(s)
Diptera, Animal Distribution, Animals, Bees, Colombia, Male, Photography
2.
Int J Mol Sci ; 22(4)2021 Feb 17.
Article in English | MEDLINE | ID: mdl-33671198

ABSTRACT

Near-infrared (NIR) fluorescence-guided surgery is an innovative technique for the real-time visualization of resection margins. The aim of this study was to develop a head and neck multicellular tumor spheroid model and to explore its potential for evaluating cameras used in NIR fluorescence-guided surgery protocols. FaDu spheroids were incubated with indocyanine green (ICG) and then included in a tissue-like phantom. To assess the capability of the Fluobeam® NIR camera to detect ICG in tissues, FaDu spheroids exposed to ICG were embedded in 2, 5 or 8 mm of tissue-like phantom. The fluorescence signal differed significantly between depths of 2, 5 and 8 mm for spheroids treated with more than 5 µg/mL ICG (p < 0.05). The fluorescence intensity correlated positively with the size of the spheroids (p < 0.01), while the correlation with depth in the tissue-like phantom was strongly negative (p < 0.001). This multicellular spheroid model embedded in a tissue-like phantom appears to be a simple and reproducible in vitro tumor model that allows a comparison of NIR cameras. The ideal configuration appears to be 450 µm FaDu spheroids incubated for 24 hours with 0.05 mg/mL of ICG, offering the best balance of stability, toxicity, incorporation and signal intensity.
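
A minimal Python sketch of the kind of correlation analysis reported above, on simulated readings (the variable names, sample size, and the exponential depth attenuation used to generate the data are illustrative assumptions, not the study's model):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated measurements: spheroid size (um), phantom depth (mm), and NIR
# fluorescence intensity (a.u.) with assumed exponential attenuation + noise.
size_um = rng.uniform(300, 600, 90)
depth_mm = rng.choice([2, 5, 8], 90)
intensity = size_um * np.exp(-0.3 * depth_mm) + rng.normal(0, 10, 90)

# Correlations analogous to those reported in the abstract.
rho_size, p_size = spearmanr(size_um, intensity)    # expected positive
rho_depth, p_depth = spearmanr(depth_mm, intensity) # expected negative
print(f"size  vs intensity: rho={rho_size:+.2f} (p={p_size:.1e})")
print(f"depth vs intensity: rho={rho_depth:+.2f} (p={p_depth:.1e})")
```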


Subject(s)
Head/diagnostic imaging, Three-Dimensional Imaging, Biological Models, Neck/diagnostic imaging, Neoplasms/surgery, Photography/instrumentation, Near-Infrared Spectroscopy, Cellular Spheroids/cytology, Cell Death/drug effects, Tumor Cell Line, Cell Proliferation, Fluorescence, Humans, Indocyanine Green/toxicity, Kinetics, Imaging Phantoms
3.
Lancet Digit Health ; 3(1): e10-e19, 2021 01.
Article in English | MEDLINE | ID: mdl-33735063

ABSTRACT

BACKGROUND: Diabetic retinopathy screening is instrumental to preventing blindness, but scaling up screening is challenging because of the increasing number of patients with all forms of diabetes. We aimed to create a deep-learning system to predict the risk of patients with diabetes developing diabetic retinopathy within 2 years. METHODS: We created and validated two versions of a deep-learning system to predict the development of diabetic retinopathy in patients with diabetes who had had teleretinal diabetic retinopathy screening in a primary care setting. The input for the two versions was either a set of three-field or one-field colour fundus photographs. Of the 575 431 eyes in the development set, 28 899 had known outcomes, with the remaining 546 532 eyes used to augment the training process via multitask learning. Validation was done on one eye (selected at random) per patient from two datasets: an internal validation set of 3678 eyes with known outcomes (from EyePACS, a teleretinal screening service in the USA) and an external validation set of 2345 eyes with known outcomes (from Thailand). FINDINGS: The three-field deep-learning system had an area under the receiver operating characteristic curve (AUC) of 0·79 (95% CI 0·77-0·81) in the internal validation set. Assessment of the external validation set, which contained only one-field colour fundus photographs, with the one-field deep-learning system gave an AUC of 0·70 (0·67-0·74). In the internal validation set, the AUC of available risk factors was 0·72 (0·68-0·76), which improved to 0·81 (0·77-0·84) after combining the deep-learning system with these risk factors (p<0·0001). In the external validation set, the corresponding AUC improved from 0·62 (0·58-0·66) to 0·71 (0·68-0·75; p<0·0001) following the addition of the deep-learning system to available risk factors. INTERPRETATION: The deep-learning systems predicted diabetic retinopathy development using colour fundus photographs, and the systems were independent of and more informative than available risk factors. Such a risk stratification tool might help to optimise screening intervals to reduce costs while improving vision-related outcomes. FUNDING: Google.
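
A minimal Python sketch of the evaluation step in the FINDINGS, comparing the AUC of risk factors alone against risk factors plus a deep-learning score; all data are simulated, and the feature names and logistic combination are assumptions, not the study's method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 4000

# One latent disease process drives a deep-learning score, two clinical
# risk factors, and the 2-year outcome label (all simulated).
latent = rng.normal(size=n)
dl_score = latent + rng.normal(scale=0.8, size=n)
hba1c = latent + rng.normal(scale=1.2, size=n)
duration = latent + rng.normal(scale=1.5, size=n)
y = (latent + rng.normal(scale=1.0, size=n) > 1.0).astype(int)

X_rf = np.column_stack([hba1c, duration])             # risk factors alone
X_all = np.column_stack([dl_score, hba1c, duration])  # plus DL system

tr = np.arange(n) < n // 2   # fit on one half ...
te = ~tr                     # ... report AUC on the other
for name, X in (("risk factors alone", X_rf), ("+ deep-learning score", X_all)):
    model = LogisticRegression().fit(X[tr], y[tr])
    auc = roc_auc_score(y[te], model.predict_proba(X[te])[:, 1])
    print(f"{name:>22}: AUC = {auc:.3f}")
```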


Subject(s)
Deep Learning, Diabetic Retinopathy/diagnosis, Aged, Area Under Curve, Ophthalmological Diagnostic Techniques, Female, Humans, Kaplan-Meier Estimate, Male, Middle Aged, Photography, Prognosis, ROC Curve, Reproducibility of Results, Risk Assessment/methods
4.
Lancet Digit Health ; 3(1): e29-e40, 2021 01.
Article in English | MEDLINE | ID: mdl-33735066

ABSTRACT

BACKGROUND: In current approaches to vision screening in the community, a simple and efficient process is needed to identify individuals who should be referred to tertiary eye care centres for vision loss related to eye diseases. The emergence of deep learning technology offers new opportunities to revolutionise this clinical referral pathway. We aimed to assess the performance of a newly developed deep learning algorithm for detection of disease-related visual impairment. METHODS: In this proof-of-concept study, using retinal fundus images from 15 175 eyes with complete data related to best-corrected visual acuity or pinhole visual acuity from the Singapore Epidemiology of Eye Diseases Study, we first developed a single-modality deep learning algorithm based on retinal photographs alone for detection of any disease-related visual impairment (defined as eyes from patients with major eye diseases and best-corrected visual acuity of <20/40) and moderate or worse disease-related visual impairment (eyes with disease and best-corrected visual acuity of <20/60). After development of the algorithm, we tested it internally on a new set of 3803 eyes from the Singapore Epidemiology of Eye Diseases Study. We then tested it externally on three population-based studies (the Beijing Eye Study [6239 eyes], the Central India Eye and Medical Study [6526 eyes], and the Blue Mountains Eye Study [2002 eyes]) and two clinical studies (the Chinese University of Hong Kong's Sight Threatening Diabetic Retinopathy study [971 eyes] and the Outram Polyclinic Study [1225 eyes]). The algorithm's performance in each dataset was assessed on the basis of the area under the receiver operating characteristic curve (AUC). FINDINGS: In the internal test dataset, the AUC for detection of any disease-related visual impairment was 94·2% (95% CI 93·0-95·3; sensitivity 90·7% [87·0-93·6]; specificity 86·8% [85·6-87·9]). The AUC for moderate or worse disease-related visual impairment was 93·9% (95% CI 92·2-95·6; sensitivity 94·6% [89·6-97·6]; specificity 81·3% [80·0-82·5]). Across the five external test datasets (16 993 eyes), the algorithm achieved AUCs ranging between 86·6% (83·4-89·7; sensitivity 87·5% [80·7-92·5]; specificity 70·0% [66·7-73·1]) and 93·6% (92·4-94·8; sensitivity 87·8% [84·1-90·9]; specificity 87·1% [86·2-88·0]) for any disease-related visual impairment, and AUCs for moderate or worse disease-related visual impairment ranging between 85·9% (81·8-90·1; sensitivity 84·7% [73·0-92·8]; specificity 74·4% [71·4-77·2]) and 93·5% (91·7-95·3; sensitivity 90·3% [84·2-94·6]; specificity 84·2% [83·2-85·1]). INTERPRETATION: This proof-of-concept study shows the potential of a single-modality, function-focused tool for identifying visual impairment related to major eye diseases, enabling more timely and targeted referral of patients with disease-related visual impairment from the community to tertiary eye hospitals. FUNDING: National Medical Research Council, Singapore.
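
For context, a small Python sketch of how an AUC with an operating point (sensitivity and specificity) is read off a ROC curve; the scores are simulated, and the Youden-J threshold rule is an assumption, since the abstract does not state how operating points were chosen:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)

# Simulated algorithm scores: impaired eyes (y = 1) score higher on average.
y = rng.integers(0, 2, 5000)
score = rng.normal(loc=1.5 * y, scale=1.0)

fpr, tpr, thr = roc_curve(y, score)
j = np.argmax(tpr - fpr)  # Youden's J operating point (an assumed rule)
print(f"AUC         : {roc_auc_score(y, score):.3f}")
print(f"sensitivity : {tpr[j]:.3f}")
print(f"specificity : {1 - fpr[j]:.3f} (at score threshold {thr[j]:.2f})")
```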


Subject(s)
Algorithms, Deep Learning, Eye Diseases/complications, Vision Disorders/diagnosis, Vision Disorders/etiology, Aged, Area Under Curve, Asian Continental Ancestry Group, Female, Humans, Male, Middle Aged, Photography/methods, Proof of Concept Study, ROC Curve, Sensitivity and Specificity, Singapore/epidemiology
5.
Soins Pediatr Pueric ; 42(318): 28-32, 2021.
Article in French | MEDLINE | ID: mdl-33602423

ABSTRACT

Although screen assistance does not prevent all image traumas, especially those likely to disturb the youngest children, it does help to anticipate them and reduce their impact. This is why protecting minors from the dangers of images requires three sets of measures: a reform of public broadcasting, in particular the composition and role of classification panels; more comprehensive information for parents; and training for teachers, school psychologists, socio-cultural workers and educators. As part of their initial and ongoing training, all of them should be trained in the issue of images and their reception by children.


Subject(s)
Internet, Photography, Psychological Trauma, Child, Humans, Parents, Psychological Trauma/prevention & control
6.
PLoS One ; 16(2): e0247440, 2021.
Article in English | MEDLINE | ID: mdl-33630951

ABSTRACT

The purpose of this work is to provide an effective social distance monitoring solution in low-light environments in a pandemic situation. The raging coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, has brought a global crisis with its deadly spread all over the world. In the absence of an effective treatment and vaccine, efforts to control this pandemic rely strictly on personal preventive actions such as handwashing, face mask usage, environmental cleaning, and, most importantly, social distancing, which is the only expedient approach to cope with this situation. Low-light environments can become a problem in the spread of disease because of night gatherings, especially in summer when temperatures peak: in cities where housing is congested and proper cross-ventilation is unavailable, people go out with their families at night for fresh air. In such a situation, it is necessary to take effective measures to monitor the safety-distance criteria to avoid more positive cases and to control the death toll. In this paper, a deep learning-based solution is proposed for the above-stated problem. The proposed framework utilizes the you only look once v4 (YOLO v4) model for real-time object detection, and the social distance measuring approach is introduced with a single motionless time-of-flight (ToF) camera. The risk factor is indicated based on the calculated distance, and safety-distance violations are highlighted. Experimental results show that the proposed model exhibits good performance, with a 97.84% mean average precision (mAP) score, and the observed mean absolute error (MAE) between actual and measured social distance values is 1.01 cm.
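
A minimal sketch of the distance-violation step, assuming person detections (e.g., from YOLO v4) have already been paired with per-person ToF depths; the camera intrinsics, the 2 m threshold, and the detections below are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

# Assumed pinhole intrinsics for the ToF camera (fx, fy, cx, cy, in pixels);
# the paper's calibration values are not given in the abstract.
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0
SAFE_M = 2.0  # assumed minimum safe distance in metres

def to_3d(u, v, z):
    """Back-project pixel (u, v) with ToF depth z (metres) to camera coords."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

# Hypothetical person detections: bounding-box centre (u, v) plus the ToF
# depth sampled at that pixel (the detection step itself is omitted).
detections = [(310, 260, 3.2), (420, 250, 3.4), (120, 270, 5.1)]
points = [to_3d(*d) for d in detections]

for (i, p), (j, q) in combinations(enumerate(points), 2):
    d = float(np.linalg.norm(p - q))
    status = "VIOLATION" if d < SAFE_M else "ok"
    print(f"person {i} <-> person {j}: {d:.2f} m  [{status}]")
```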


Subject(s)
COVID-19/prevention & control, Deep Learning, Humans, Light, Pandemics, Photography/instrumentation
7.
Nutrients ; 13(1)2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33430147

ABSTRACT

The use of image-based dietary assessments (IBDAs) has rapidly increased; however, there is no formalized training program to enhance the digital viewing skills of dieticians. An IBDA was integrated into a nutritional practicum course in the School of Nutrition and Health Sciences, Taipei Medical University, Taiwan. An online IBDA platform was created as an off-campus remedial teaching tool to reinforce the conceptualization of food portion sizes. Dietetic students' receptiveness and response to the IBDA, and their performance in food identification and quantification, were compared between the IBDA and real-food visual estimations (RFVEs). No differences were found between the IBDA and RFVE in terms of food identification (67% vs. 71%) or quantification (±10% of estimated calories: 23% vs. 24%). A Spearman correlation analysis showed a moderate to high correlation for calorie estimates between the IBDA and RFVE (r = 0.33-0.75, all p < 0.0001). Repeated IBDA training significantly improved students' image-viewing skills [food identification: first semester, 67%; pretest, 77%; second semester, 84%] and quantification [±10%: first semester, 23%; pretest, 28%; second semester, 32%; ±20%: first semester, 38%; pretest, 48%; second semester, 59%], and reduced absolute estimation errors from 27% (first semester) to 16% (second semester). Training also greatly improved the identification of omitted foods (e.g., condiments, sugar, cooking oil, and batter coatings) and the accuracy of food portion size estimates. The integration of an IBDA into dietetic courses has the potential to help students develop knowledge and skills related to "e-dietetics".
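
A small Python sketch of the reported comparison metrics: the Spearman correlation between paired calorie estimates and the share of estimates within ±10% of actual calories (all numbers below are simulated, not study data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
true_kcal = rng.uniform(150, 800, 120)  # hypothetical actual meal calories

# Simulated student estimates from images (IBDA) and from real food (RFVE).
ibda = true_kcal * rng.normal(1.0, 0.25, 120)
rfve = true_kcal * rng.normal(1.0, 0.25, 120)

rho, p = spearmanr(ibda, rfve)
within10 = np.mean(np.abs(ibda - true_kcal) / true_kcal <= 0.10)
print(f"IBDA vs RFVE calorie estimates: rho = {rho:.2f} (p = {p:.1e})")
print(f"IBDA estimates within +/-10% of actual: {within10:.0%}")
```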


Subject(s)
Dietetics/education, Nutrition Assessment, Nutritionists/education, Photography, Portion Size, Curriculum, Humans, Internet
8.
JMIR Mhealth Uhealth ; 9(1): e19346, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33496670

ABSTRACT

BACKGROUND: For the classification of facial paresis, various systems of description and evaluation, in the form of clinician-graded or software-based scoring systems, are available. They serve the purpose of scientific and clinical assessment of the spontaneous course of the disease or of monitoring therapeutic interventions. Nevertheless, none has achieved universal acceptance in everyday clinical practice. Hence, a quick and precise tool for assessing the functional status of the facial nerve would be desirable. In this context, the possibilities that the TrueDepth camera of recent iPhone models offers have sparked our interest. OBJECTIVE: This paper describes the utilization of the iPhone's TrueDepth camera, via a specially developed app prototype, for quick, objective, and reproducible quantification of facial asymmetries. METHODS: After conceptual and user interface design, a native app prototype for iOS was programmed that accesses and processes the data of the TrueDepth camera. Using a special algorithm, a new index for grading unilateral facial paresis, ranging from 0% to 100%, was developed. The algorithm was adapted to the well-established Stennert index by weighting the individual facial regions based on functional and cosmetic aspects. Test measurements with healthy subjects were performed using the app to prove the reliability of the system. RESULTS: After the development process, the app prototype had no run-time or build-time errors and also worked under suboptimal conditions such as different measurement angles, so it met our criteria for a safe and reliable app. The newly defined index expresses the result of the measurements as a generally understandable percentage value for each half of the face. The measurements, which correctly rated the facial expressions of healthy individuals as symmetrical in all cases, were reproducible and showed no statistically significant inter-test variability. CONCLUSIONS: Based on the experience with the app prototype in assessing healthy subjects, the TrueDepth camera should have considerable potential for app-based grading of facial movement disorders. The app and its algorithm, which is based on theoretical considerations, should be evaluated in a prospective clinical study and correlated with common facial scores.
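
A hedged sketch of one way such a 0-100% asymmetry index could be computed from TrueDepth landmarks, mirroring one side of the face across the midline and averaging weighted regional mismatches; the regions, weights, and scale below are assumptions, since the app's actual algorithm is unpublished:

```python
import numpy as np

# Assumed regional weights loosely inspired by the Stennert index's emphasis
# on functionally and cosmetically important regions.
WEIGHTS = {"brow": 0.2, "eye": 0.3, "mouth": 0.5}

def asymmetry_index(left, right, scale_mm=65.0, weights=WEIGHTS):
    """0-100% index: mirror the right-side landmarks across the facial
    midline (x -> -x) and express the weighted mean mismatch with the
    left side relative to an overall face scale."""
    total = 0.0
    for region, w in weights.items():
        l = np.asarray(left[region], dtype=float)
        r = np.asarray(right[region], dtype=float) * np.array([-1.0, 1.0, 1.0])
        total += w * np.mean(np.linalg.norm(l - r, axis=1))
    return min(100.0, 100.0 * total / scale_mm)

# Toy landmarks in mm (one point per region per side).
L = {"brow": [[-30, 40, 5]], "eye": [[-20, 20, 8]], "mouth": [[-15, -30, 6]]}
R = {"brow": [[30, 40, 5]], "eye": [[20, 20, 8]], "mouth": [[15, -30, 6]]}
R_droop = {**R, "mouth": [[15, -36, 6]]}  # simulated 6 mm mouth-corner droop

print(f"symmetric face: {asymmetry_index(L, R):.1f}%")
print(f"mouth droop   : {asymmetry_index(L, R_droop):.1f}%")
```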


Subject(s)
Facial Nerve/physiopathology, Facial Paralysis/physiopathology, Mobile Applications, Photography/methods, Smartphone/statistics & numerical data, Depth Perception, Feasibility Studies, Humans, Medical Informatics, Movement Disorders, Prospective Studies, Reproducibility of Results, Telemedicine
9.
Phys Rev Lett ; 126(1): 018101, 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33480762

ABSTRACT

Many organisms use visual signals to estimate motion, and these estimates are typically biased. Here, we ask whether these biases may reflect physical rather than biological limitations. Using a camera-gyroscope system, we sample the joint distribution of images and rotational motions in a natural environment, and from this distribution we construct the optimal estimator of velocity based on local image intensities. Over most of the natural dynamic range, this estimator exhibits the biases observed in neural and behavioral responses. Thus, imputed errors in sensory processing may represent an optimal response to the physical signals sampled from the environment.
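
A minimal Python sketch of the central construction: the least-squares-optimal estimator E[v | measurement] built directly from samples of a joint distribution. Here the joint is simulated with a Gaussian prior and noise (an assumption; the study sampled the real joint distribution with a camera-gyroscope rig), which makes the slow-speed bias visible:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated joint distribution of true rotational velocity v (deg/s) and a
# noisy local, intensity-based measurement m.
v = rng.normal(0, 30, 200_000)
m = v + rng.normal(0, 20, v.size)

# Least-squares-optimal estimator E[v | m], built by binning joint samples.
bins = np.linspace(-100, 100, 81)
idx = np.digitize(m, bins)
est = np.array([v[idx == k].mean() if np.any(idx == k) else np.nan
                for k in range(len(bins) + 1)])

# The optimal estimate is biased toward slow speeds, mirroring perceptual bias.
for probe in (10, 40, 80):
    k = np.digitize(probe, bins)
    print(f"measurement {probe:>2} deg/s -> optimal estimate {est[k]:5.1f} deg/s")
```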


Subject(s)
Biological Models, Motion Perception/physiology, Animals, Environment, Photography
10.
Zhonghua Wei Chang Wai Ke Za Zhi ; 24(1): 62-67, 2021 Jan 25.
Article in Chinese | MEDLINE | ID: mdl-33461254

ABSTRACT

Objective: At present, surgeons do not know enough about the mesenteric morphology of the colonic splenic flexure, which causes many problems in complete mesenteric excision for cancers around the splenic flexure. In this study, the morphology of the mesentery during mobilization of the colonic splenic flexure was continuously observed in vivo and, from an embryological point of view, the unique mesenteric morphology of the colonic splenic flexure was reconstructed in three dimensions to help surgeons further understand the mesenteric structure of this region. Methods: A total of 9 patients with left colon cancer who underwent laparoscopic radical resection with splenic flexure mobilization by the same group of surgeons at Union Hospital of Fujian Medical University from January 2018 to June 2019 were enrolled. The splenic flexure was mobilized using a "three-way approach" strategy based on a middle-lateral approach. During splenic flexure mobilization, the morphology of the transverse mesocolon and descending mesocolon was observed and reconstructed from an embryological point of view. With the lower margin of the pancreas as the axis, 4 pictures per patient (sections 1-4) were taken during middle-lateral mobilization. Results: The median operation time for splenic flexure mobilization was 31 (12-55) minutes, and the median blood loss was 5 (2-30) mL. One patient suffered an injury to the lower splenic vessels during the operation; hemostasis was achieved successfully with an ultrasonic scalpel. The transverse mesocolon root was observed in all 9 (100%) patients, located under the pancreas; its inner portion was more distinct and tough, and the structure gradually disappeared toward the tail of the pancreatic body, replaced by a smooth transitional mesocolon and the dorsal lobes of the descending colon. The mesenteric morphology of the splenic flexure was reconstructed from these intraoperative observations. The transverse mesocolon was continuous with a fan-shaped descending mesocolon. During the embryonic stage, the medial part (sections 1-2) of the transverse mesocolon and the descending mesocolon are pulled and folded by the superior mesenteric artery (SMA); the transverse mesocolon root is then formed by compression of the pancreas on the folding area of the transverse and descending mesocolons. The lateral side of the transverse mesocolon root (sections 3-4) is distant from the mechanical traction of the SMA, and the corresponding folding area is not compressed by the tail of the pancreas; there, the posterior lobe of the transverse mesocolon and the descending mesocolon are continuous with each other, forming a smooth lobe. This smooth lobe lies flat on the corresponding membrane bed composed of the tail of the pancreas, Gerota's fascia, and the inferior pole of the spleen. Conclusions: From an embryological point of view, this study reconstructs the mesenteric morphology of the splenic flexure and proposes a transverse mesocolon root structure that can be observed consistently intraoperatively. Cutting the transverse mesocolon root at the level of Gerota's fascia can ensure complete resection of the mesentery of the transverse colon.


Subject(s)
Colectomy/methods, Transverse Colon, Colonic Neoplasms, Laparoscopy, Mesocolon, Transverse Colon/anatomy & histology, Transverse Colon/surgery, Colonic Neoplasms/surgery, Dissection, Fascia/anatomy & histology, Humans, Mesentery/anatomy & histology, Mesentery/blood supply, Mesentery/embryology, Mesentery/surgery, Mesocolon/anatomy & histology, Mesocolon/blood supply, Mesocolon/embryology, Mesocolon/surgery, Pancreas/anatomy & histology, Pancreas/surgery, Photography, Spleen/anatomy & histology, Spleen/surgery
11.
Lancet Digit Health ; 3(2): e88-e97, 2021 02.
Article in English | MEDLINE | ID: mdl-33509389

ABSTRACT

BACKGROUND: Ocular changes are traditionally associated with only a few hepatobiliary diseases. These changes are non-specific and have a low detection rate, limiting their potential use as clinically independent diagnostic features. Therefore, we aimed to engineer deep learning models to establish associations between ocular features and major hepatobiliary diseases and to advance automated screening and identification of hepatobiliary diseases from ocular images. METHODS: We did a multicentre, prospective study to develop models using slit-lamp or retinal fundus images from participants in three hepatobiliary departments and two medical examination centres. Included participants were older than 18 years and had complete clinical information; participants diagnosed with acute hepatobiliary diseases were excluded. We trained seven slit-lamp models and seven fundus models (with or without hepatobiliary disease [screening model] or one specific disease type within six categories [identifying model]) using a development dataset, and we tested the models with an external test dataset. Additionally, we did a visual explanation and occlusion test. Model performances were evaluated using the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and F1* score. FINDINGS: Between Dec 16, 2018, and July 31, 2019, we collected data from 1252 participants (from the Department of Hepatobiliary Surgery of the Third Affiliated Hospital of Sun Yat-sen University, the Department of Infectious Diseases of the Affiliated Huadu Hospital of Southern Medical University, and the Nantian Medical Centre of Aikang Health Care [Guangzhou, China]) for the development dataset; between Aug 14, 2019, and Jan 31, 2020, we collected data from 537 participants (from the Department of Infectious Diseases of the Third Affiliated Hospital of Sun Yat-sen University and the Huanshidong Medical Centre of Aikang Health Care [Guangzhou, China]) for the test dataset. The AUROC for screening for hepatobiliary diseases of the slit-lamp model was 0·74 (95% CI 0·71-0·76), whereas that of the fundus model was 0·68 (0·65-0·71). For the identification of hepatobiliary diseases, the AUROCs were 0·93 (0·91-0·94; slit-lamp) and 0·84 (0·81-0·86; fundus) for liver cancer, 0·90 (0·88-0·91; slit-lamp) and 0·83 (0·81-0·86; fundus) for liver cirrhosis, and ranged 0·58-0·69 (0·55-0·71; slit-lamp) and 0·62-0·70 (0·58-0·73; fundus) for other hepatobiliary diseases, including chronic viral hepatitis, non-alcoholic fatty liver disease, cholelithiasis, and hepatic cyst. In addition to the conjunctiva and sclera, our deep learning model revealed that the structures of the iris and fundus also contributed to the classification. INTERPRETATION: Our study established qualitative associations between ocular features and major hepatobiliary diseases, providing a non-invasive, convenient, and complementary method for hepatobiliary disease screening and identification, which could be applied as an opportunistic screening tool. FUNDING: Science and Technology Planning Projects of Guangdong Province; National Key R&D Program of China; Guangzhou Key Laboratory Project; National Natural Science Foundation of China.
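
The METHODS mention a visual explanation and occlusion test; a minimal occlusion-sensitivity sketch in Python (the patch size, stride, fill value, and the toy stand-in model are illustrative assumptions, not the study's implementation):

```python
import numpy as np

def occlusion_map(image, model, patch=16, stride=16, fill=0.5):
    """Occlusion test: slide a grey patch over the image and record how much
    the model's disease score drops; large drops mark influential regions."""
    h, w = image.shape[:2]
    base = model(image)
    heat = np.zeros((h // stride, w // stride))
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // stride, j // stride] = base - model(occluded)
    return heat

# Toy stand-in "model": scores an image by the mean intensity of its centre.
demo_model = lambda img: float(img[24:40, 24:40].mean())
img = np.random.default_rng(5).random((64, 64))
print(occlusion_map(img, demo_model).round(3))
```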


Subject(s)
Algorithms, Computer Simulation, Deep Learning, Digestive System Diseases/diagnosis, Eye, Mass Screening/methods, Biological Models, Adult, Area Under Curve, China, Conjunctiva/diagnostic imaging, Digestive System Diseases/complications, Eye/diagnostic imaging, Fundus Oculi, Humans, Iris/diagnostic imaging, Liver, Middle Aged, Photography/methods, Prospective Studies, ROC Curve, Sclera/diagnostic imaging, Slit Lamp Microscopy/methods
12.
Curr Opin Ophthalmol ; 32(2): 105-117, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33395111

ABSTRACT

PURPOSE OF REVIEW: The field of artificial intelligence has grown exponentially in recent years with new technology, methods, and applications emerging at a rapid rate. Many of these advancements have been used to improve the diagnosis and management of glaucoma. We aim to provide an overview of recent publications regarding the use of artificial intelligence to enhance the detection and treatment of glaucoma. RECENT FINDINGS: Machine learning classifiers and deep learning algorithms have been developed to autonomously detect early structural and functional changes of glaucoma using different imaging and testing modalities such as fundus photography, optical coherence tomography, and standard automated perimetry. Artificial intelligence has also been used to further delineate structure-function correlation in glaucoma. Additional 'structure-structure' predictions have been successfully estimated. Other machine learning techniques utilizing complex statistical modeling have been used to detect glaucoma progression, as well as to predict future progression. Although not yet approved for clinical use, these artificial intelligence techniques have the potential to significantly improve glaucoma diagnosis and management. SUMMARY: Rapidly emerging artificial intelligence algorithms have been used for the detection and management of glaucoma. These algorithms may aid the clinician in caring for patients with this complex disease. Further validation is required prior to employing these techniques widely in clinical practice.


Subject(s)
Artificial Intelligence, Ophthalmological Diagnostic Techniques, Glaucoma/diagnosis, Statistical Models, Algorithms, Humans, Machine Learning, Photography, Optical Coherence Tomography/methods
13.
J Fr Ophtalmol ; 44(2): 145-150, 2021 Feb.
Article in French | MEDLINE | ID: mdl-33413987

ABSTRACT

INTRODUCTION: During the COVID-19 pandemic, we have witnessed a worldwide lockdown of the population. This government action, combined with the application of social distancing, should in principle reduce the frequency of ocular injuries. The goal of our work is to understand the circumstances in which ocular injuries occurred at the IOTA Teaching Hospital during the lockdown period of the COVID-19 health crisis. METHODOLOGY: This was a cross-sectional, descriptive study. The data were collected prospectively. Our study covered the period from March to May 2020. All consenting patients seen at the IOTA Teaching Hospital for ocular trauma, regardless of gender, age, circumstances in which the trauma occurred or the nature of the injuries, were included by non-probability sampling. Patients who did not consent or who consulted for a non-traumatic ophthalmologic condition were excluded. RESULTS: There were a total of 138 cases, 84 male and 54 female, for a male-to-female ratio of 1.5. Children aged 0 to 5 years represented more than three quarters (79.14%) of our sample. Trauma occurred during leisure activities in 45.83% of cases, and 3.60% of cases involved domestic violence. DISCUSSION: According to the authors, measures aimed at limiting public movement, particularly the curfews introduced by the Malian government to contain the spread of the COVID-19 pandemic, may actually result in trauma. CONCLUSION: Raising public awareness of the social and psychological consequences of lockdown through audiovisual means might significantly reduce the frequency of these ocular traumas.


Subject(s)
COVID-19/epidemiology, Eye Injuries/epidemiology, Pandemics, Adolescent, Adult, Age Distribution, Child, Preschool Child, Cross-Sectional Studies, Eye Injuries/etiology, Eye Injuries/pathology, Female, Humans, Infant, Male, Middle Aged, Photography, Prospective Studies, Quarantine, Sex Distribution, Young Adult
15.
J Formos Med Assoc ; 120(1 Pt 1): 165-171, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32307321

ABSTRACT

PURPOSE: To develop a deep learning image assessment software, VeriSee™, and to validate its accuracy in grading the severity of diabetic retinopathy (DR). METHODS: Diabetic patients who underwent single-field, nonmydriatic, 45-degree color retinal fundus photography at National Taiwan University Hospital between July 2007 and June 2017 were retrospectively recruited. A total of 7524 gradable color fundus images were collected and graded for the severity of DR by ophthalmologists. Of these images, 5649, along with another 31,612 color fundus images from the EyePACS dataset, were used for model training of VeriSee™. The other 1875 images were used for validation and were graded for the severity of DR by VeriSee™, ophthalmologists, and internal physicians. The area under the receiver operating characteristic curve (AUC) for VeriSee™, and the sensitivities and specificities for VeriSee™, ophthalmologists, and internal physicians in diagnosing DR, were calculated. RESULTS: The AUCs for VeriSee™ in diagnosing any DR, referable DR, and proliferative diabetic retinopathy (PDR) were 0.955, 0.955, and 0.984, respectively. VeriSee™ had better sensitivities in diagnosing any DR and PDR (92.2% and 90.9%, respectively) than internal physicians (64.3% and 20.6%, respectively) (P < 0.001 for both). VeriSee™ also had better sensitivities in diagnosing any DR and referable DR (92.2% and 89.2%, respectively) than ophthalmologists (86.9% and 71.1%, respectively) (P < 0.001 for both), while ophthalmologists had better specificities. CONCLUSION: VeriSee™ had good sensitivity and specificity in grading the severity of DR from color fundus images. It may offer clinical assistance to non-ophthalmologists in DR screening with nonmydriatic retinal fundus photography.
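
Since VeriSee™ and the human graders rated the same 1875 images, paired sensitivities are naturally compared with McNemar's test; a small Python sketch on simulated paired calls (the abstract does not state which test was used, and all counts below are simulated):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(6)
n_dr = 500  # hypothetical number of eyes with any DR

# Simulated paired correct/incorrect calls on the same eyes; the rates echo
# the abstract (~92% for the software vs ~64% for internal physicians).
verisee_ok = rng.random(n_dr) < 0.92
physician_ok = rng.random(n_dr) < 0.64

# McNemar's exact test looks only at the discordant pairs.
b = int(np.sum(verisee_ok & ~physician_ok))
c = int(np.sum(~verisee_ok & physician_ok))
print(f"discordant pairs: {b} vs {c}")
print(f"McNemar exact p = {binomtest(b, b + c, 0.5).pvalue:.2e}")
```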


Subject(s)
Deep Learning, Diabetic Retinopathy, Diabetic Retinopathy/diagnostic imaging, Humans, Mass Screening, Photography, Retrospective Studies, Software, Taiwan
16.
Oral Maxillofac Surg Clin North Am ; 33(1): 1-5, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33246543

ABSTRACT

Three-dimensional (3D) surface imaging has found its place in aesthetic surgery globally. The first attempt to use a 3D surface imaging technique in the clinic was made in 1944 by Thalmaan, who used stereophotogrammetry to examine an adult with facial asymmetry and a baby with Pierre Robin syndrome. Three-dimensional photography is becoming more common, allowing for a more dynamic facial evaluation, although it is associated with increased cost.


Subject(s)
Rhinoplasty, Adult, Face, Humans, Three-Dimensional Imaging, Photogrammetry, Photography
17.
Lancet Digit Health ; 2(5): e240-e249, 2020 05.
Article in English | MEDLINE | ID: mdl-33328056

ABSTRACT

BACKGROUND: Deep learning is a novel machine learning technique that has been shown to be as effective as human graders in detecting diabetic retinopathy from fundus photographs. We used a cost-minimisation analysis to evaluate the potential savings of two deep learning approaches as compared with the current human assessment: a semi-automated deep learning model as a triage filter before secondary human assessment; and a fully automated deep learning model without human assessment. METHODS: In this economic analysis modelling study of 39 006 consecutive patients with diabetes in a national diabetic retinopathy screening programme in Singapore in 2015, we used a decision tree model and TreeAge Pro to compare the actual cost of screening this cohort with human graders against the simulated cost of the semi-automated and fully automated screening models. Model parameters included diabetic retinopathy prevalence rates, diabetic retinopathy screening costs under each screening model, cost of medical consultation, and diagnostic performance (ie, sensitivity and specificity). The primary outcome was the total cost of each screening model. Deterministic sensitivity analyses were done to gauge the sensitivity of the results to key model assumptions. FINDINGS: From the health system perspective, the semi-automated screening model was the least expensive of the three models, at US$62 per patient per year. The fully automated model was $66 per patient per year, and the human assessment model was $77 per patient per year. The savings to the Singapore health system associated with switching to the semi-automated model are estimated to be $489 000, roughly 20% of the current annual screening cost. By 2050, Singapore is projected to have 1 million people with diabetes; at that point, the estimated annual savings would be $15 million. INTERPRETATION: This study provides a strong economic rationale for using deep learning systems as an assistive tool to screen for diabetic retinopathy. FUNDING: Ministry of Health, Singapore.
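
A toy Python sketch of the cost-minimisation logic for the three screening arms; every parameter below is an illustrative assumption (the abstract reports only the resulting $77 / $62 / $66 per-patient figures), but the triage arithmetic mirrors the decision-tree idea:

```python
# Illustrative parameters only; the study's actual cost inputs are not given.
PREV = 0.20              # assumed prevalence of referable DR
COST_HUMAN = 10.0        # assumed human grading cost per patient
COST_DL = 2.0            # assumed automated grading cost per patient
COST_CONSULT = 60.0      # assumed specialist consultation cost
SENS, SPEC = 0.90, 0.91  # assumed DL diagnostic performance

def expected_cost(model: str) -> float:
    dl_positive = PREV * SENS + (1 - PREV) * (1 - SPEC)  # DL referral fraction
    if model == "human":            # humans grade everyone
        return COST_HUMAN + PREV * COST_CONSULT
    if model == "fully automated":  # every DL positive goes to consultation
        return COST_DL + dl_positive * COST_CONSULT
    if model == "semi-automated":   # humans regrade DL positives (triage)
        return COST_DL + dl_positive * COST_HUMAN + PREV * SENS * COST_CONSULT
    raise ValueError(model)

for m in ("human", "semi-automated", "fully automated"):
    print(f"{m:>15}: ${expected_cost(m):.2f} per patient per year")
```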


Subject(s)
Artificial Intelligence, Cost-Benefit Analysis, Diabetic Retinopathy/diagnosis, Ophthalmological Diagnostic Techniques/economics, Computer-Assisted Image Processing/economics, Biological Models, Telemedicine/economics, Adult, Aged, Decision Trees, Diabetes Mellitus, Diabetic Retinopathy/economics, Health Care Costs, Humans, Machine Learning, Mass Screening/economics, Middle Aged, Ophthalmology/economics, Photography, Physical Examination, Retina/pathology, Sensitivity and Specificity, Singapore, Telemedicine/methods
18.
Sensors (Basel) ; 20(24)2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33322359

ABSTRACT

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions, and recent progress in deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to show that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), act as an obstacle when explaining the images to patients or discussing them with non-professionals. We claim that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical images. To demonstrate the difference in emotional reaction, we produce several illustrated surgical images from photographs and measure the emotional reactions they engender using EEG biosignals; a deep learning-based emotion recognition model is applied to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of the identical images. We further execute a self-assessed user survey to verify that the emotions recognized from the EEG signals effectively represent user-annotated emotions.
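
A minimal sketch of the paired comparison implied by the experiment: negative-emotion scores decoded for the same participants under both image types. The scores below are simulated, and the Wilcoxon signed-rank test is an assumed choice, as the abstract does not name the statistic:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
n = 40  # hypothetical number of participants

# Negative-emotion scores (0-1) decoded from EEG by a recognition model, for
# the same participants viewing photographic vs. illustrated versions.
photo = np.clip(rng.normal(0.70, 0.12, n), 0, 1)
illustrated = np.clip(photo - rng.normal(0.25, 0.08, n), 0, 1)

stat, p = wilcoxon(photo, illustrated)  # paired, non-parametric comparison
print(f"median negative emotion: photo={np.median(photo):.2f}, "
      f"illustrated={np.median(illustrated):.2f} (p={p:.1e})")
```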


Subject(s)
Electroencephalography, Emotions, Adult, Female, General Surgery, Human Body, Humans, Male, Photography, Young Adult
19.
Lancet Digit Health ; 2(10): e526-e536, 2020 10.
Article in English | MEDLINE | ID: mdl-33328047

ABSTRACT

BACKGROUND: The application of deep learning to retinal photographs has yielded promising results in predicting age, sex, blood pressure, and haematological parameters. However, the broader applicability of retinal photograph-based deep learning for predicting other systemic biomarkers and the generalisability of this approach to various populations remain unexplored. METHODS: With use of 236 257 retinal photographs from seven diverse Asian and European cohorts (two health screening centres in South Korea, the Beijing Eye Study, three cohorts in the Singapore Epidemiology of Eye Diseases study, and the UK Biobank), we evaluated the capacities of 47 deep-learning algorithms to predict 47 systemic biomarkers as outcome variables, including demographic factors (age and sex); body composition measurements; blood pressure; haematological parameters; lipid profiles; biochemical measures; biomarkers related to liver function, thyroid function, kidney function, and inflammation; and diabetes. The standard neural network architecture of VGG16 was adopted for model development. FINDINGS: In addition to previously reported systemic biomarkers, we showed quantification of body composition indices (muscle mass, height, and bodyweight) and creatinine from retinal photographs. Body muscle mass could be predicted with an R2 of 0·52 (95% CI 0·51-0·53) in the internal test set, and of 0·33 (0·30-0·35) in one external test set with muscle mass measurement available. The R2 value for the prediction of height was 0·42 (0·40-0·43), of bodyweight was 0·36 (0·34-0·37), and of creatinine was 0·38 (0·37-0·40) in the internal test set. However, the performances were poorer in external test sets (with the lowest performance in the European cohort), with R2 values ranging between 0·08 and 0·28 for height, 0·04 and 0·19 for bodyweight, and 0·01 and 0·26 for creatinine. Of the 47 systemic biomarkers, 37 could not be predicted well from retinal photographs via deep learning (R2≤0·14 across all external test sets). INTERPRETATION: Our work provides new insights into the potential use of retinal photographs to predict systemic biomarkers, including body composition indices and serum creatinine, using deep learning in populations with a similar ethnic background. Further evaluations are warranted to validate these findings and evaluate the clinical utility of these algorithms. FUNDING: Agency for Science, Technology, and Research and National Medical Research Council, Singapore; Korea Institute for Advancement of Technology.
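
A small Python sketch of the R² evaluation used throughout the FINDINGS; the noise levels are chosen so the simulated values land near the reported internal (0·52) and external (0·33) muscle-mass R², purely for illustration:

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(8)

def simulated_r2(n: int, noise_sd: float) -> float:
    truth = rng.normal(30.0, 5.0, n)              # e.g., muscle mass (kg)
    pred = truth + rng.normal(0.0, noise_sd, n)   # retinal-photo model output
    return r2_score(truth, pred)

# The extra noise in the second call stands in for the domain shift seen in
# external cohorts; both values are simulated, not study results.
print(f"internal-style R^2: {simulated_r2(20000, 3.5):.2f}")
print(f"external-style R^2: {simulated_r2(20000, 4.1):.2f}")
```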


Subject(s)
Algorithms, Body Composition, Creatinine/blood, Deep Learning, Computer-Assisted Image Processing/methods, Biological Models, Retina, Area Under Curve, Asia, Beijing, Biomarkers, Ethnic Groups, Europe, Female, Humans, Male, Middle Aged, Muscles, Neural Networks (Computer), Photography, ROC Curve, Republic of Korea, Singapore, United Kingdom
20.
PLoS One ; 15(12): e0244494, 2020.
Article in English | MEDLINE | ID: mdl-33362230

ABSTRACT

The tri-spine horseshoe crab, Tachypleus tridentatus, is a threatened species that inhabits coastal areas from South to East Asia. A conservation management system is urgently required for managing its nursery habitats, i.e., intertidal flats, especially in Japan. Habitat suitability maps are useful in drafting conservation plans; however, they have rarely been prepared for juvenile T. tridentatus. In this study, we examined the possibility of constructing robust habitat suitability models (HSMs) for juveniles based on topographical data acquired using unmanned aerial vehicles and the Structure from Motion (UAV-SfM) technique. Distribution data for the juveniles in the Tsuyazaki and Imazu intertidal flats were collected from 2017 to 2019. The data were divided into a training dataset for HSM construction and three test datasets for model evaluation. High-accuracy digital surface models were built for each region using the UAV-SfM technique. Normalized elevation was obtained by converting the topographical models to account for the tidal range in each region, and slope was calculated from these models. Using the training data, HSMs for the juveniles were constructed with normalized elevation and slope as the predictor variables, and the HSMs were then evaluated using the test data. The results showed that the HSMs exhibited acceptable discrimination performance for each region. Habitat suitability maps were built for the juveniles in each region, and the suitable areas were estimated to be approximately 6.1 ha of the total 19.5 ha in Tsuyazaki and 3.7 ha of the total 7.9 ha in Imazu. In conclusion, our findings support the usefulness of the UAV-SfM technique in constructing HSMs for juvenile T. tridentatus. Monitoring suitable habitat areas for the juveniles with the UAV-SfM technique is expected to reduce survey costs, as it can be conducted with fewer investigators over vast intertidal zones within a short period of time.
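
A minimal Python sketch of an HSM of this form: a presence/absence classifier on normalized elevation and slope, evaluated by discrimination performance on held-out data. The simulated response (juveniles favouring low, flat ground) and the logistic model are assumptions, as the paper's fitted HSM is not specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
n = 2000

# Simulated UAV-SfM-derived predictors: tide-normalized elevation (0-1) and
# slope (degrees); presence is made likelier on low, flat ground.
elev = rng.uniform(0, 1, n)
slope = rng.uniform(0, 15, n)
logit = 2.0 - 4.0 * elev - 0.3 * slope
presence = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([elev, slope])
train = np.arange(n) < 1500   # training dataset
test = ~train                 # held-out evaluation, as in the paper's design
hsm = LogisticRegression().fit(X[train], presence[train])

auc = roc_auc_score(presence[test], hsm.predict_proba(X[test])[:, 1])
print(f"HSM discrimination (test AUC): {auc:.2f}")
# A habitat suitability map would apply hsm.predict_proba over the DSM grid.
suitable = hsm.predict_proba(X)[:, 1] > 0.5
print(f"fraction of cells deemed suitable: {suitable.mean():.2f}")
```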


Subject(s)
Ecological Parameter Monitoring/methods, Ecosystem, Endangered Species, Horseshoe Crabs/physiology, Animals, Ecological Parameter Monitoring/instrumentation, Geographic Mapping, Japan, Photography/instrumentation, Photography/methods, Remote Sensing Technology/instrumentation, Remote Sensing Technology/methods, Tidal Waves