Results 1 - 20 of 290
1.
Front Oncol ; 14: 1255109, 2024.
Article in English | MEDLINE | ID: mdl-38505584

ABSTRACT

Background: Mammography is the modality of choice for breast cancer screening. However, some cases of breast cancer have been diagnosed through ultrasonography alone, with no or only benign findings on mammography (hereafter referred to as non-visibles). Therefore, this study aimed to identify factors that indicate the possibility of non-visibles based on the mammary gland content ratio estimated using artificial intelligence (AI), stratified by patient age and compressed breast thickness (CBT). Methods: We used an AI model we previously developed to estimate the mammary gland content ratio and quantitatively analyzed 26,232 controls and 150 non-visibles. First, we evaluated divergence trends between controls and non-visibles based on the average estimated mammary gland content ratio to confirm the importance of analysis by age and CBT. Next, we evaluated whether groups with a mammary gland content ratio ≥50% contribute to the divergence between controls and non-visibles, to specifically identify factors that indicate the possibility of non-visibles. The images were classified into two groups by estimated mammary gland content ratio using a threshold of 50%, and logistic regression analysis was performed between controls and non-visibles. Results: The average estimated mammary gland content ratio was significantly higher in non-visibles than in controls for the overall sample, for patients aged ≥40 years, and for CBTs ≥40 mm (p < 0.05). The difference in the average estimated mammary gland content ratio between controls and non-visibles was 7.54% for the overall sample; 6.20%, 7.48%, and 4.78% for patients aged 40-49, 50-59, and ≥60 years, respectively; and 6.67%, 9.71%, and 16.13% for CBTs of 40-49, 50-59, and ≥60 mm, respectively. In evaluating the groups with a mammary gland content ratio ≥50%, we also found positive associations for non-visibles, with controls as the baseline, for the overall sample, for patients aged 40-59 years, and for CBTs ≥40 mm (p < 0.05). The corresponding odds ratios were ≥2.20, with a maximum of 4.36. Conclusion: The study findings highlight that an estimated mammary gland content ratio ≥50% in patients aged 40-59 years or with a CBT ≥40 mm could be an indicative factor for non-visibles.
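As a rough illustration of the dichotomized analysis described above (images split at a 50% estimated mammary gland content ratio, then logistic regression of non-visible status against that split), here is a minimal Python sketch using statsmodels on hypothetical, simulated data; the column names and prevalences are assumptions, not the authors' data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: one row per mammogram.
# ratio_ge_50: 1 if the estimated mammary gland content ratio is >= 50%, else 0.
# is_non_visible: 1 for non-visible cases, 0 for controls.
n = 5000
ratio_ge_50 = rng.binomial(1, 0.4, n)
p = np.where(ratio_ge_50 == 1, 0.04, 0.01)           # higher risk in the >=50% group
df = pd.DataFrame({
    "ratio_ge_50": ratio_ge_50,
    "is_non_visible": rng.binomial(1, p),
})

X = sm.add_constant(df[["ratio_ge_50"]])              # intercept + binary predictor
model = sm.Logit(df["is_non_visible"], X).fit(disp=0)

odds_ratio = np.exp(model.params["ratio_ge_50"])
ci_low, ci_high = np.exp(model.conf_int().loc["ratio_ge_50"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
      f"p = {model.pvalues['ratio_ge_50']:.3g}")
```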

2.
Med Image Anal ; 94: 103155, 2024 May.
Article in English | MEDLINE | ID: mdl-38537415

ABSTRACT

Recognition of mitotic figures in histologic tumor specimens is highly relevant to patient outcome assessment. This task is challenging for algorithms and human experts alike, and algorithmic performance deteriorates under shifts in image representation. Considerable covariate shifts occur when assessment is performed on different tumor types, when images are acquired using different digitization devices, or when specimens are produced in different laboratories. This observation motivated the inception of the 2022 challenge on MItosis Domain Generalization (MIDOG 2022). The challenge provided annotated histologic tumor images from six different domains and evaluated the algorithmic approaches for mitotic figure detection submitted by nine challenge participants on ten independent domains. Ground truth for mitotic figure detection was established in two ways: a three-expert majority vote and an independent, immunohistochemistry-assisted set of labels. This work presents an overview of the challenge tasks, the algorithmic strategies employed by the participants, and potential factors contributing to their success. With an F1 score of 0.764 for the top-performing team, we conclude that domain generalization across various tumor domains is possible with today's deep learning-based recognition pipelines. However, we also found that domain characteristics not present in the training set (feline as a new species, spindle cell shape as a new morphology, and a new scanner) led to small but significant decreases in performance. When assessed against the immunohistochemistry-assisted reference standard, all methods showed reduced recall scores, with only minor changes in the ranking of participants.
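The majority-vote reference standard mentioned above can be illustrated with a minimal sketch; the per-candidate label arrays below are hypothetical and not the challenge's annotation tooling.

```python
import numpy as np

# Hypothetical per-candidate labels from three experts (1 = mitotic figure, 0 = not).
expert_labels = np.array([
    [1, 1, 0],   # candidate A: 2 of 3 experts agree -> positive
    [0, 1, 0],   # candidate B: 1 of 3 -> negative
    [1, 1, 1],   # candidate C: unanimous -> positive
])

majority = (expert_labels.sum(axis=1) >= 2).astype(int)
print(majority)  # [1 0 1]
```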


Subjects
Laboratories, Mitosis, Humans, Animals, Cats, Algorithms, Image Processing, Computer-Assisted/methods, Reference Standards
3.
Front Med (Lausanne) ; 11: 1335958, 2024.
Article in English | MEDLINE | ID: mdl-38510449

ABSTRACT

Introduction: Physical measurements of expiratory flow volume and speed can be obtained using spirometry. These measurements have been used for the diagnosis and risk assessment of chronic obstructive pulmonary disease and play a crucial role in delivering early care. However, spirometry is not performed frequently in routine clinical practice, hindering the early detection of pulmonary function impairment. Chest radiographs (CXRs), though acquired frequently, are not used to measure pulmonary function. This study aimed to evaluate whether spirometry parameters can be estimated accurately from a single frontal CXR without image findings using deep learning. Methods: Forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and FEV1/FVC as spirometry measurements, together with the corresponding chest radiographs of 11,837 participants, were used in this study. The data were randomly allocated to training, validation, and evaluation datasets at an 8:1:1 ratio. A deep learning network pretrained on ImageNet was used, with the CXRs as input and the spirometry test values as output. Training and evaluation of the deep learning network were performed separately for each parameter. The mean absolute percentage error (MAPE) and Pearson's correlation coefficient (r) were used as evaluation indices. Results: The MAPEs between the spirometry measurements and AI estimates for FVC, FEV1, and FEV1/FVC were 7.59% (r = 0.910), 9.06% (r = 0.879), and 5.21% (r = 0.522), respectively. A strong positive correlation was observed between the measured and predicted values of FVC and FEV1. An average accuracy of >90% was obtained for each estimated spirometry index. Bland-Altman analysis revealed good agreement between the estimated and measured values for FVC and FEV1. Discussion: Frontal CXRs contain information related to pulmonary function, and AI estimation from frontal CXRs without image findings could accurately estimate spirometry values. The network proposed in this study for estimating pulmonary function could serve as a basis for recommending spirometry or as an alternative method, suggesting its utility.
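The two evaluation indices used above, MAPE and Pearson's r, can be computed as in the following minimal sketch; the measured and estimated values are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical measured vs. estimated FVC values (litres).
measured  = np.array([3.2, 4.1, 2.8, 3.9, 4.5])
estimated = np.array([3.0, 4.3, 2.9, 3.7, 4.4])

r, _ = pearsonr(measured, estimated)
print(f"MAPE = {mape(measured, estimated):.2f}%, r = {r:.3f}")
```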

4.
Phys Chem Chem Phys ; 26(9): 7658-7663, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38369923

ABSTRACT

The chiral recognition of a self-assembled structure of enantiopure (M)-type 2,13-diphenyl[7]thiaheterohelicene ((M)-Ph-[7]TH) was investigated on a Ag(111) substrate by scanning tunnelling microscopy (STM) and tip-enhanced Raman spectroscopy (TERS). In contrast to previous research on thiaheterohelicene and its derivatives, which showed zigzag row formation on the Ag(111) substrate, a hexagonally ordered structure was observed by STM. The obtained TERS spectra of (M)-Ph-[7]TH were consistent with Raman spectra calculated on the basis of density functional theory (DFT), which suggests that (M)-Ph-[7]TH was adsorbed on the substrate without decomposition. The sample bias voltage dependence of the STM images, combined with the calculated molecular orbitals of (M)-Ph-[7]TH, indicates that a phenyl ring was observed as a protrusion at +3.0 V, whereas the helicene backbone was observed at +0.5 V. From these results, a possible model of the hexagonal structure was proposed. Owing to the phenyl ring, the van der Waals interaction between (M)-Ph-[7]TH and the substrate becomes stronger, leading to the formation of a hexagonal structure with the same symmetry as the substrate.

5.
IEEE Trans Med Imaging ; 43(1): 542-557, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37713220

ABSTRACT

The early detection of glaucoma is essential for preventing visual impairment. Artificial intelligence (AI) can be used to analyze color fundus photographs (CFPs) in a cost-effective manner, making glaucoma screening more accessible. While AI models for glaucoma screening from CFPs have shown promising results in laboratory settings, their performance decreases significantly in real-world scenarios due to the presence of out-of-distribution and low-quality images. To address this issue, we propose the Artificial Intelligence for Robust Glaucoma Screening (AIROGS) challenge. The challenge includes a large dataset of around 113,000 images from about 60,000 patients and 500 different screening centers, and encourages the development of algorithms that are robust to ungradable and unexpected input data. We evaluated solutions from 14 teams in this paper and found that the best teams performed similarly to a set of 20 expert ophthalmologists and optometrists. The highest-scoring team achieved an area under the receiver operating characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting ungradable images on the fly. Additionally, many of the algorithms showed robust performance when tested on three other publicly available datasets. These results demonstrate the feasibility of robust AI-enabled glaucoma screening.
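The headline metric above is an AUC with a 95% confidence interval; the minimal sketch below shows how such a figure could be computed with a simple bootstrap, using hypothetical labels and scores rather than the AIROGS data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = ungradable image) and model scores.
y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6, 0.85, 0.15])

auc = roc_auc_score(y_true, y_score)

# Simple bootstrap for a 95% confidence interval.
rng = np.random.default_rng(0)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if y_true[idx].min() != y_true[idx].max():        # need both classes present
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```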


Subjects
Artificial Intelligence, Glaucoma, Humans, Glaucoma/diagnostic imaging, Fundus Oculi, Diagnostic Techniques, Ophthalmological, Algorithms
6.
Pharmaceutics ; 15(12)2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38140073

ABSTRACT

Many well-known tools for evaluating and predicting human absorption use cultured cell lines such as Caco-2 and MDCK. Because combinatorial chemistry and high-throughput screening systems, pharmacological assays, and pharmaceutical profiling assays are mainstays of drug development, PAMPA has also been used to evaluate human drug absorption. In addition, cell lines derived from iPS cells have been attracting attention because they morphologically resemble human intestinal tissues. In this review, we describe the use of human intestinal tissues to estimate human intestinal absorption and metabolism. The Ussing chamber uses human intestinal tissues to directly assay a drug candidate's permeability and to determine electrophysiological parameters such as the potential difference (PD), short-circuit current (Isc), and resistance (R), making it an attractive tool for elucidating human intestinal permeability and metabolism. We have presented a novel method for predicting intestinal absorption and metabolism that uses a mini-Ussing chamber with human and animal intestinal tissues, based on the transport index (TI). The TI value was calculated from the change in drug concentration on the apical side due to precipitation and the total amounts accumulated in the tissue (Tcorr) and transported to the basal side (Xcorr). The rank order of drug absorbability, as well as the fraction of dose absorbed (Fa) in humans, was predicted, and the intestinal metabolism of dogs and rats was also predicted, although not quantitatively. However, the metabolite formation index (MFI) values, which are included in the TI values, enable the evaluation of intestinal metabolism and absorption using ketoconazole. Therefore, the mini-Ussing chamber, equipped with human and animal intestinal tissues, would be an ultimate method for predicting intestinal absorption and metabolism simultaneously.

7.
BMC Med Imaging ; 23(1): 114, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37644398

ABSTRACT

BACKGROUND: In recent years, contrast-enhanced ultrasonography (CEUS) has been used for various applications in breast diagnosis. The superiority of CEUS over conventional B-mode imaging in the ultrasound diagnosis of breast lesions has been widely confirmed in clinical practice. On the other hand, there have been many proposals for computer-aided diagnosis of breast lesions on B-mode ultrasound images, but few for CEUS. We propose a semi-automatic, machine learning-based classification method for breast lesions in CEUS. METHODS: The proposed method extracts spatial and temporal features from CEUS videos and classifies breast tumors as benign or malignant using a linear support vector machine (SVM) with a combination of selected optimal features. Tumor regions are extracted using guidance information specified by the examiners; morphological and texture features of the tumor regions are then obtained from B-mode and CEUS images, and time-intensity curve (TIC) features are obtained from the CEUS video. Our method then uses SVM classifiers to classify breast tumors as benign or malignant. During SVM training, many features are prepared and useful features are selected. We name our proposed method "Ceucia-Breast" (Contrast Enhanced UltraSound Image Analysis for BREAST lesions). RESULTS: The experimental results on 119 subjects show that the area under the receiver operating characteristic curve, accuracy, precision, and recall are 0.893, 0.816, 0.841, and 0.920, respectively. Our method improves classification performance over conventional methods that use only B-mode images. In addition, we confirm that the selected features are consistent with the CEUS guidelines for breast tumor diagnosis. Furthermore, we conduct an experiment on the operator dependency of the guidance input and find that the intra-operator and inter-operator kappa coefficients are 1.0 and 0.798, respectively. CONCLUSION: The experimental results show a significant improvement in classification performance compared to conventional classification methods that use only B-mode images. We also confirm that the selected features are related to findings that are considered important in clinical practice. Furthermore, we verify the intra- and inter-examiner agreement in the guidance input for region extraction and confirm that both are in strong agreement.
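A minimal sketch of the general pattern described above (feature selection followed by a linear SVM, evaluated by AUC) is shown below; the feature matrix, feature count, and pipeline settings are assumptions, not the Ceucia-Breast implementation.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: rows = lesions, columns = morphological,
# texture, and TIC features; y = 0 (benign) / 1 (malignant).
rng = np.random.default_rng(0)
X = rng.normal(size=(119, 40))
y = rng.integers(0, 2, size=119)

clf = Pipeline([
    ("scale",  StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),   # keep the most useful features
    ("svm",    SVC(kernel="linear", probability=True)),
])

auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.3f}")
```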


Subjects
Breast Neoplasms, Diagnosis, Computer-Assisted, Humans, Female, Ultrasonography, Image Processing, Computer-Assisted, Breast Neoplasms/diagnostic imaging, Computers
8.
BMC Med Educ ; 23(1): 528, 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37488587

ABSTRACT

BACKGROUND: Social trust in medical students is trust in medical students as a group rather than in individual medical students. Social trust in medical students seems critical in clinical practice, since citizens often face medical students they have never met before. However, most previous research has focused on interpersonal trust in particular medical professions, and social trust in medical students has not been addressed sufficiently. In social science, the Salient Value Similarity model has demonstrated that value similarity between professionals and citizens is associated with social trust. This research aimed to explore the relationship between social trust in medical students and the perception of value similarity. This study also aimed to determine whether information about medical students strengthens social trust in them. METHODS: We conducted a cross-sectional study to investigate how the perception of value similarity affects social trust. The participants, recruited via the internet in Japan, answered the social trust questionnaires before and after reading a brief summary of the medical education curriculum and certification. The model structure of social trust in medical students, including the perception of value similarity, was investigated using structural equation modeling (SEM). A paired t-test was used to examine whether reading the brief summary, which informed citizens about the knowledge, skills, and professionalism required of students attending medical school, affected social trust. RESULTS: The study included 658 participants, all of whom answered a web questionnaire. Social trust in medical students was associated with the perception of ability and value similarity. Social trust in medical students, the perception of ability, and value similarity were improved by the information about medical students. CONCLUSIONS: The perception of ability and value similarity seem to affect social trust in medical students. Information on medical education regarding the knowledge, skills, and professionalism of medical students may improve social trust in these students. Further research is required to refine the model of social trust in medical students by exploring social trust in the medical students' supervisors in clinical settings.


Subjects
Education, Medical, Students, Medical, Humans, Cross-Sectional Studies, Trust, Surveys and Questionnaires
9.
Med Image Anal ; 89: 102888, 2023 10.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instrument used, the action performed, and the target anatomy is becoming a gold-standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise intraoperative, context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool's activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods; an in-depth analysis of the obtained results across multiple metrics, visual and procedural challenges, and their significance; and useful insights for future research directions and applications in surgery.


Subjects
Artificial Intelligence, Surgery, Computer-Assisted, Humans, Endoscopy, Algorithms, Surgery, Computer-Assisted/methods, Surgical Instruments
10.
BMC Med Educ ; 23(1): 408, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37277728

ABSTRACT

BACKGROUND: Formative feedback plays a critical role in guiding learners to gain competence, serving as an opportunity for reflection and feedback on their learning progress and needs. Medical education in Japan has historically been dominated by a summative paradigm within assessment, as opposed to countries such as the UK, where there are greater opportunities for formative feedback. How this difference affects students' interaction with feedback has not been studied. We aimed to explore the difference in students' perceptions of feedback in Japan and the UK. METHODS: The study was designed and analysed through a constructivist grounded theory lens. Medical students in Japan and the UK were interviewed about the formative assessment and feedback they received during clinical placements. We undertook purposeful sampling and concurrent data collection. Data analysis through open and axial coding, with iterative discussion among research group members, was conducted to develop a theoretical framework. RESULTS: Japanese students perceived feedback as a model answer provided by tutors that they should not critically question, which contrasted with the views of UK students. Japanese students viewed formative assessment as an opportunity to gauge whether they were achieving the pass mark, while UK students used the experience for reflective learning. CONCLUSIONS: The Japanese students' experience of formative assessment and feedback supports the view that medical education and examination systems in Japan are focused on summative assessment, operating alongside culturally derived social pressures such as the expectation to correct mistakes. These findings provide new insights into supporting students to learn from formative feedback in both Japanese and UK contexts.


Subjects
Education, Medical, Undergraduate, Students, Medical, Humans, Formative Feedback, Japan, Clinical Competence, Feedback, United Kingdom
11.
Cancers (Basel) ; 15(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37345132

ABSTRACT

Breasts have recently been categorized into four types based on the Breast Imaging Reporting and Data System (BI-RADS) atlas, and evaluating these types is vital in clinical practice. A Japanese guideline, called breast composition, was developed for these breast types based on BI-RADS. The guideline is characterized by a continuous value, the mammary gland content ratio, calculated to determine the breast composition, thereby allowing a more objective and visual evaluation. Although discriminative deep convolutional neural networks (DCNNs) have conventionally been developed to classify breast composition, they can produce errors of two steps or more. Hence, we propose an alternative regression DCNN based on the mammary gland content ratio. We used 1,476 images evaluated by an expert physician. Our regression DCNN contained four convolution layers and three fully connected layers. Consequently, we obtained a high correlation of 0.93 (p < 0.01). Furthermore, to scrutinize the effectiveness of the regression DCNN, we categorized breast composition using the ratio estimated by the regression DCNN. The agreement rate was high at 84.8%, suggesting that breast composition can be determined with high accuracy using the regression DCNN. Moreover, errors of two steps or more are unlikely, and the estimated results can be understood intuitively.
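The abstract specifies four convolution layers and three fully connected layers for the regression DCNN; the PyTorch sketch below illustrates one plausible shape of such a network, with layer widths, kernel sizes, and input resolution chosen as assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class MammaryRatioRegressor(nn.Module):
    """Four conv layers + three fully connected layers, single ratio output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),   # ratio constrained to [0, 1]
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = MammaryRatioRegressor()
dummy = torch.randn(2, 1, 256, 256)           # two grayscale mammograms
print(model(dummy).shape)                     # torch.Size([2, 1])
loss = nn.MSELoss()(model(dummy).squeeze(1), torch.tensor([0.42, 0.63]))
```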

12.
BMC Med Educ ; 23(1): 385, 2023 May 25.
Article in English | MEDLINE | ID: mdl-37231480

ABSTRACT

BACKGROUND: Vaccine administration skills are very important for physicians, especially in an era of global pandemics. However, medical students have reported that practical sessions to develop these skills are insufficient. Therefore, the aim of our study was to develop a vaccination training course for medical students and to examine its educational effectiveness. METHODS: 5th- and 6th-year medical students at the University of Tokyo were recruited to attend the vaccine administration training course in 2021. These students were our study participants. Our course consisted of an orientation part, which included a lecture on the indications, adverse events, and administration techniques of influenza vaccines and practice on a simulator, and a main part in which staff of the University of Tokyo Hospital were actually vaccinated. Before and after the main part of the course, study participants completed an online questionnaire that assessed their confidence in vaccine administration technique on a five-point Likert scale. We also surveyed their feedback about the course content and process. At the beginning and end of the main part, their technical competence in vaccination was assessed by two independent doctors, who used a validated checklist scale (ranging from 16 to 80) and a global rating scale (ranging from 0 to 10). We used their mean scores for analysis. The quantitative data were analyzed with the Wilcoxon signed-rank test, and thematic analysis was conducted for the qualitative questionnaire data. RESULTS: All 48 course participants took part in our study. Participants' confidence in their vaccination technique (Z = -5.244, p < 0.05) and their vaccination skill (checklist rating: Z = -5.852, p < 0.05; global rating: Z = -5.868, p < 0.05) improved significantly. All participants rated the course as "overall educational." Our thematic analysis identified four emerging themes: interest in medical procedures, efficacy of supervision and feedback, efficacy of "near-peer" learning, and a highly instructive course. CONCLUSIONS: In this study, we developed a vaccine administration course for medical students, assessed their vaccination techniques and confidence in those techniques, and investigated their perceptions of the course. Students' vaccination skills and confidence improved significantly after the course, and they evaluated the course positively on a variety of factors. Our course will be effective in educating medical students about vaccination techniques.
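The pre/post comparison above relies on the Wilcoxon signed-rank test; a minimal sketch with hypothetical paired Likert ratings (not the study data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical pre/post confidence ratings (1-5 Likert) for the same students.
pre  = np.array([2, 3, 2, 1, 3, 2, 2, 3, 1, 2])
post = np.array([4, 4, 3, 3, 5, 4, 3, 4, 3, 4])

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```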


Subjects
Education, Medical, Undergraduate, Students, Medical, Humans, Clinical Competence, Curriculum, Education, Medical, Undergraduate/methods, Vaccination
13.
Med Image Anal ; 86: 102803, 2023 05.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out the fine-grained interaction details of the surgical activity; yet these are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented for recognizing surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and it also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
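Mean average precision over triplet classes is the metric reported above; a minimal sketch of per-class AP averaged into mAP, using hypothetical per-frame labels and scores rather than the CholecT50 evaluation code:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical data: rows = frames, columns = triplet classes.
y_true  = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.1], [0.3, 0.7, 0.2], [0.8, 0.1, 0.6], [0.2, 0.3, 0.7]])

ap_per_class = [
    average_precision_score(y_true[:, c], y_score[:, c])
    for c in range(y_true.shape[1])
]
print(f"mAP = {np.mean(ap_per_class):.3f}")
```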


Subjects
Benchmarking, Laparoscopy, Humans, Algorithms, Operating Rooms, Workflow, Deep Learning
14.
Comput Methods Programs Biomed ; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: To be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. With the democratization of robot-assisted surgery, however, new modalities such as kinematics are now accessible. Some previous methods use these new modalities as input to their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS: The PETRAW challenge included a dataset of 150 peg transfer sequences performed on a virtual simulator. This dataset included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric to take class imbalance into account, which is more clinically relevant than a frame-by-frame score. RESULTS: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION: The improvement of surgical workflow recognition using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared with kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2,000 to 20,000% only to increase accuracy by 3%. The PETRAW dataset is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
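The exact definition of the application-dependent balanced accuracy (AD-Accuracy) is challenge-specific; as an assumption, the sketch below shows the standard balanced accuracy (mean per-class recall) on hypothetical frame-wise labels, which captures the class-imbalance idea mentioned above.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

# Hypothetical frame-wise phase labels for one sequence.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 1, 2, 2, 2, 0, 2])

# Balanced accuracy = mean per-class recall, so minority phases count equally.
print(f"balanced accuracy = {balanced_accuracy_score(y_true, y_pred):.3f}")
```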


Subjects
Algorithms, Robotic Surgical Procedures, Humans, Workflow, Robotic Surgical Procedures/methods
15.
Med Image Anal ; 86: 102770, 2023 05.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill recognition. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers, with a total operation time of 22 h, was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5,514 occurrences of four surgical actions, 6,980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis, in which 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument, and/or skill assessment. RESULTS: F1 scores between 23.9% and 67.7% were achieved for phase recognition (n = 9 teams) and between 38.5% and 63.8% for instrument presence detection (n = 8 teams), but only between 21.8% and 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies for supporting the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets to allow the development of artificial intelligence and cognitive robotics in surgery.


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
16.
Food Chem Toxicol ; 175: 113755, 2023 May.
Article in English | MEDLINE | ID: mdl-36997052

ABSTRACT

Zinc (Zn) is a trace element, and Zn deficiency causes many adverse effects. Zn complexes are used for Zn supplementation, but there are few toxicity reports. Zn maltol (ZM) was orally administered to male rats for 4 weeks at a dose of 0, 200, 600, or 1000 mg/kg to assess its toxicity. As a ligand group, maltol was administered at a dose of 800 mg/kg/day. General condition, ophthalmology, hematology, blood biochemistry, urinalysis, organ weights, necropsy, histopathology, and plasma Zn concentration were investigated. Plasma Zn concentration increased with the ZM dose. The following toxicities were observed at 1000 mg/kg: pancreatitis, observed as histopathological lesions with increases in white blood cell parameters and creatine kinase; anemia, observed as changes in red blood cell parameters with extramedullary hematopoiesis in the spleen; and decreases in the trabeculae and growth plate of the femur. On the other hand, no toxicities were observed in the ligand group. In conclusion, the toxicities induced by ZM have previously been reported as Zn-related toxicities. These results should be helpful for the creation and development of new Zn complexes as well as supplements.


Subjects
Anemia, Zinc, Rats, Male, Animals, Zinc/toxicity, Ligands, Anemia/chemically induced, Dietary Supplements
17.
PRiMER ; 7: 765336, 2023.
Article in English | MEDLINE | ID: mdl-36845843

ABSTRACT

Background and Objective: In the Japanese primary care setting, a set of questions for screening patients' social circumstances has never been developed in a scientific manner. This project aimed to reach a consensus among diverse experts to develop such a set of questions and thus meet the need for assessing patients' health-related social circumstances. Methods: We used a Delphi technique to generate expert consensus. The expert panel was composed of various clinical professionals, medical trainees, researchers, support members for marginalized people, and patients. We conducted multiple rounds of communication online. In round 1, the participants provided their opinions on what health care professionals should ask to assess patients' social circumstances in primary care settings. These data were analyzed and grouped into several themes. In round 2, all themes were confirmed by consensus. Results: Sixty-one people participated in the panel, and all completed the rounds. Six themes were generated and confirmed: economic condition and employment, access to health care and other services, living in everyday life and leisure time, total physiological needs, tools and technology, and history of the patient's life. In addition, the panelists emphasized the importance of respecting the patient's preferences and values. Conclusion: A questionnaire, abbreviated by the acronym HEALTH+P, was developed. Further research on its clinical feasibility and impact on patient outcomes is warranted.

18.
Med Image Anal ; 83: 102628, 2023 01.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large, multi-class benchmark for unsupervised cross-modality domain adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS itself and the cochleas. Currently, diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired, non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the test set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
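The Dice score used for the results above can be computed as in this minimal sketch on hypothetical binary masks (not the crossMoDA evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

# Hypothetical 2D slices of a VS segmentation (1 = structure, 0 = background).
gt   = np.zeros((64, 64), dtype=np.uint8); gt[20:40, 20:40] = 1
pred = np.zeros((64, 64), dtype=np.uint8); pred[22:42, 22:42] = 1

print(f"Dice = {dice(pred, gt):.3f}")
```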


Subjects
Neuroma, Acoustic, Humans, Neuroma, Acoustic/diagnostic imaging
19.
Med Image Anal ; 84: 102699, 2023 02.
Article in English | MEDLINE | ID: mdl-36463832

ABSTRACT

The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and is thus an important marker in tumor grading. Recognition of MF by pathologists is subject to strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to deteriorate strongly when applied in a different clinical environment. The variability caused by using different whole-slide scanners has been identified as one decisive component of the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases split across four scanning systems. As the test set, an additional 100 cases, split across four scanning systems including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task.
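The F1 score above is a detection metric, so predictions must first be matched to ground-truth mitotic figures; the sketch below uses a simple greedy center-distance matching with an assumed radius and hypothetical coordinates, not the MIDOG evaluation code.

```python
import numpy as np

def detection_f1(pred_xy, gt_xy, radius=30.0):
    """Greedy one-to-one matching of predicted and ground-truth centers."""
    pred_xy, gt_xy = np.asarray(pred_xy, float), np.asarray(gt_xy, float)
    matched_gt = set()
    tp = 0
    for p in pred_xy:
        if len(gt_xy) == 0:
            break
        d = np.linalg.norm(gt_xy - p, axis=1)
        d[list(matched_gt)] = np.inf              # each ground truth used once
        j = int(np.argmin(d))
        if d[j] <= radius:
            matched_gt.add(j)
            tp += 1
    fp = len(pred_xy) - tp
    fn = len(gt_xy) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

preds = [(100, 100), (210, 205), (400, 50)]       # hypothetical detections
gts   = [(105, 98), (212, 200)]                   # hypothetical annotations
print(f"F1 = {detection_f1(preds, gts):.3f}")
```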


Subjects
Algorithms, Mitosis, Humans, Neoplasm Grading, Prognosis
20.
Med Educ ; 57(1): 57-65, 2023 01.
Article in English | MEDLINE | ID: mdl-35953461

ABSTRACT

INTRODUCTION: An understanding of social determinants of health (SDH) and patients' social circumstances is recommended for delivering contextualised care. However, the processes of patient care related to SDH in clinical settings have not been described in detail. Observable practice activities (OPAs) are a collection of learning objectives and activities that must be observed in daily practice and can be used to describe the precise processes for professionals to follow in specific situations (process OPAs). METHODS: We used a modified Delphi technique to generate expert consensus on the process OPA for patient care related to SDH in primary care settings. To reflect the opinions of various stakeholders, the expert panel comprised clinical professionals (physicians, nurses, public health nurses, social workers, pharmacists and medical clerks), residents, medical students, researchers (medical education, health care, sociology of marginalised people), support members for marginalised people, and patients. The Delphi rounds were conducted online. In Round 1, a list of potentially important steps in the processes of care was distributed to panellists; the list was modified, and one new step was added. In Round 2, all steps were acknowledged with few modifications. RESULTS: Of 63 experts recruited, 61 participated, and all participants completed the Delphi rounds. A total of 14 observable steps were identified, divided into four components: communication, practice, maintenance and advocacy. The importance of ongoing patient-physician relationships and of collaboration with professionals and stakeholders was emphasised for the whole process of care. DISCUSSION: This study presents the consensus of a variety of experts on the process OPA for patient care related to SDH. Further research is warranted to investigate how this Communication-Practice-Maintenance-Advocacy framework could affect medical education, the quality of patient care, and patient outcomes.


Subjects
Education, Medical, Teaching Rounds, Humans, Social Determinants of Health, Patient Care