1.
Med Image Anal ; 97: 103276, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39068830

ABSTRACT

Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast. Still, it lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
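The structural similarity indices reported above come from the SSIM formula. The sketch below is a minimal single-window illustration of that formula; the challenge's actual evaluation uses windowed SSIM over masked anatomy, so the function name and the standard stabilizing constants here are illustrative only:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two images, using global statistics only."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

img = np.random.default_rng(0).random((64, 64))
print(round(global_ssim(img, img), 4))  # identical images score 1.0
```

Identical inputs give the maximum score of 1.0; any intensity or structural mismatch between a synthetic and a ground-truth CT pulls the score below that.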


Subjects
Cone-Beam Computed Tomography , Magnetic Resonance Imaging , Radiotherapy Planning, Computer-Assisted , Humans , Cone-Beam Computed Tomography/methods , Radiotherapy Planning, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Radiotherapy Dosage , Neoplasms/radiotherapy , Neoplasms/diagnostic imaging , Radiotherapy, Image-Guided/methods
2.
Med Image Anal ; 97: 103257, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38981282

ABSTRACT

The alignment of tissue between histopathological whole-slide images (WSI) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis; even so, the current state of the art in WSI registration is unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, including 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can achieve highly accurate registration performance, and we identify covariates that impact performance across methods. These results provide a comparison of the performance of current WSI registration methods and guide researchers in selecting and developing methods.


Subjects
Algorithms , Breast Neoplasms , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Female , Image Interpretation, Computer-Assisted/methods , Immunohistochemistry
3.
Diagnostics (Basel) ; 14(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38893657

ABSTRACT

A comparative interpretation of mammograms has become increasingly important, and it is crucial to develop subtraction processing and registration methods for mammograms. However, nonrigid image registration has seldom been applied to subjects constructed with soft tissue only, such as mammograms. We examined whether subtraction processing for the comparative interpretation of mammograms can be performed using nonrigid image registration. As a preliminary study, we evaluated the results of subtraction processing by applying nonrigid image registration to normal mammograms, assuming a comparative interpretation between the left and right breasts. Mediolateral-oblique-view mammograms were taken from noncancer patients and divided into 1000 cases for training, 100 cases for validation, and 500 cases for testing. Nonrigid image registration was applied to align the horizontally flipped left-breast mammogram with the right one. We compared the sum of absolute differences (SAD) of the difference of bilateral images (Difference Image) with and without the application of nonrigid image registration. Statistically, the average SAD was significantly lower with the application of nonrigid image registration than without it (without: 0.0692; with: 0.0549 (p < 0.001)). In four subgroups using the breast area, breast density, compressed breast thickness, and Difference Image without nonrigid image registration, the average SAD of the Difference Image was also significantly lower with nonrigid image registration than without it (p < 0.001). Nonrigid image registration was found to be sufficiently useful in aligning bilateral mammograms, and it is expected to be an important tool in the development of a support system for the comparative interpretation of mammograms.
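The bilateral comparison above reduces to flipping the left-breast image horizontally and accumulating absolute per-pixel differences against the right-breast image. The sketch below illustrates only that subtraction step, assuming already-registered, intensity-normalized arrays; the function name and the per-pixel averaging are assumptions, not the paper's code:

```python
import numpy as np

def bilateral_sad(right, left):
    """Mean absolute difference between the right-breast image and the
    horizontally flipped left-breast image (a stand-in for the Difference Image)."""
    mirrored_left = np.fliplr(left)
    return np.abs(right - mirrored_left).mean()

rng = np.random.default_rng(1)
right = rng.random((8, 8))   # toy intensity image in [0, 1]
left = np.fliplr(right)      # perfectly mirror-symmetric case
print(bilateral_sad(right, left))  # → 0.0
```

A perfectly symmetric pair yields SAD 0.0; nonrigid registration lowers the SAD by removing residual anatomical misalignment before subtraction.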

4.
Front Oncol ; 14: 1255109, 2024.
Article in English | MEDLINE | ID: mdl-38505584

ABSTRACT

Background: Mammography is the modality of choice for breast cancer screening. However, some cases of breast cancer have been diagnosed through ultrasonography alone, with no or only benign findings on mammography (hereafter referred to as non-visibles). Therefore, this study aimed to identify factors that indicate the possibility of non-visibles based on the mammary gland content ratio estimated using artificial intelligence (AI), by patient age and compressed breast thickness (CBT). Methods: We used AI previously developed by us to estimate the mammary gland content ratio and quantitatively analyzed 26,232 controls and 150 non-visibles. First, we evaluated divergence trends between controls and non-visibles based on the average estimated mammary gland content ratio to establish the importance of analysis by age and CBT. Next, we evaluated whether groups with a mammary gland content ratio ≥50% contribute to the divergence between controls and non-visibles, to specifically identify factors that indicate the possibility of non-visibles. The images were classified into two groups by estimated mammary gland content ratio with a threshold of 50%, and logistic regression analysis was performed between controls and non-visibles. Results: The average estimated mammary gland content ratio was significantly higher in non-visibles than in controls for the overall sample, for patients aged ≥40 years, and for a CBT of ≥40 mm (p < 0.05). The difference in the average estimated mammary gland content ratio between controls and non-visibles was 7.54% for the overall sample; 6.20%, 7.48%, and 4.78% for patients aged 40-49, 50-59, and ≥60 years, respectively; and 6.67%, 9.71%, and 16.13% for a CBT of 40-49, 50-59, and ≥60 mm, respectively.
In evaluating the groups with a mammary gland content ratio ≥50%, we also found positive correlations for non-visibles, with controls as the baseline, for the overall sample, for patients aged 40-59 years, and for those with a CBT ≥40 mm (p < 0.05). The corresponding odds ratios were ≥2.20, with a maximum value of 4.36. Conclusion: The study findings highlight that an estimated mammary gland content ratio of ≥50% in patients aged 40-59 years, or in those with a CBT of ≥40 mm, could be an indicative factor for non-visibles.
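An odds ratio of the kind reported here can be read off a 2×2 table (exposure = estimated ratio ≥50%, outcome = non-visible). The counts below are purely hypothetical, chosen only to show the arithmetic, not taken from the study:

```python
def odds_ratio(dense_nonvisible, dense_control, nondense_nonvisible, nondense_control):
    """Odds of being a non-visible in the >=50% ratio group divided by the
    odds in the <50% group."""
    odds_dense = dense_nonvisible / dense_control
    odds_nondense = nondense_nonvisible / nondense_control
    return odds_dense / odds_nondense

# purely hypothetical 2x2 counts (non-visible vs. control, split at a 50% ratio)
print(odds_ratio(60, 5000, 90, 21000))  # (60/5000) / (90/21000) = 2.8
```

With a single binary predictor, the logistic regression coefficient exponentiates to exactly this table-based odds ratio, which is why the two are interchangeable in this setting.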

5.
Med Image Anal ; 94: 103155, 2024 May.
Article in English | MEDLINE | ID: mdl-38537415

ABSTRACT

Recognition of mitotic figures in histologic tumor specimens is highly relevant to patient outcome assessment. This task is challenging for algorithms and human experts alike, with deterioration of algorithmic performance under shifts in image representations. Considerable covariate shifts occur when assessment is performed on different tumor types, images are acquired using different digitization devices, or specimens are produced in different laboratories. This observation motivated the inception of the 2022 challenge on MItosis Domain Generalization (MIDOG 2022). The challenge provided annotated histologic tumor images from six different domains and evaluated the algorithmic approaches for mitotic figure detection provided by nine challenge participants on ten independent domains. Ground truth for mitotic figure detection was established in two ways: a three-expert majority vote and an independent, immunohistochemistry-assisted set of labels. This work represents an overview of the challenge tasks, the algorithmic strategies employed by the participants, and potential factors contributing to their success. With an F1 score of 0.764 for the top-performing team, we summarize that domain generalization across various tumor domains is possible with today's deep learning-based recognition pipelines. However, we also found that domain characteristics not present in the training set (feline as new species, spindle cell shape as new morphology and a new scanner) led to small but significant decreases in performance. When assessed against the immunohistochemistry-assisted reference standard, all methods resulted in reduced recall scores, with only minor changes in the order of participants in the ranking.
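The F1 score used to rank the detection methods is the harmonic mean of precision and recall over matched detections; a minimal sketch from true-positive, false-positive, and false-negative counts (the counts below are illustrative, not challenge results):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of detection precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# illustrative counts: 8 mitotic figures found, 2 false alarms, 4 missed
print(round(f1_score(8, 2, 4), 3))  # 2*8 / (2*8 + 2 + 4) ≈ 0.727
```

Because F1 penalizes false positives and misses symmetrically, a method cannot reach the winning 0.764 by over- or under-calling mitoses on out-of-domain images.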


Subjects
Laboratories , Mitosis , Humans , Animals , Cats , Algorithms , Image Processing, Computer-Assisted/methods , Reference Standards
6.
BMC Med Imaging ; 23(1): 114, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37644398

ABSTRACT

BACKGROUND: In recent years, contrast-enhanced ultrasonography (CEUS) has been used for various applications in breast diagnosis. The superiority of CEUS over conventional B-mode imaging in the ultrasound diagnosis of breast lesions in clinical practice has been widely confirmed. On the other hand, there have been many proposals for computer-aided diagnosis of breast lesions on B-mode ultrasound images, but few for CEUS. We propose a semi-automatic classification method based on machine learning for CEUS of breast lesions. METHODS: The proposed method extracts spatial and temporal features from CEUS videos, and breast tumors are classified as benign or malignant using linear support vector machines (SVM) with a combination of selected optimal features. Tumor regions are extracted using guidance information specified by the examiners; then, morphological and texture features of the tumor regions obtained from B-mode and CEUS images, together with time-intensity curve (TIC) features obtained from the CEUS video, are extracted. An SVM classifier then classifies the breast tumors as benign or malignant. During SVM training, many features are prepared, and useful features are selected. We name our proposed method "Ceucia-Breast" (Contrast Enhanced UltraSound Image Analysis for BREAST lesions). RESULTS: The experimental results on 119 subjects show that the area under the receiver operating characteristic curve, accuracy, precision, and recall are 0.893, 0.816, 0.841, and 0.920, respectively. The classification performance is improved by our method over conventional methods using only B-mode images. In addition, we confirm that the selected features are consistent with the CEUS guidelines for breast tumor diagnosis. Furthermore, we conduct an experiment on the operator dependency of specifying the guidance information and find that the intra-operator and inter-operator kappa coefficients are 1.0 and 0.798, respectively.
CONCLUSION: The experimental results show a significant improvement in classification performance compared to conventional classification methods using only B-mode images. We also confirm that the selected features are related to the findings that are considered important in clinical practice. Furthermore, we verify the intra- and inter-examiner correlation in the guidance input for region extraction and confirm that both correlations are in strong agreement.
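The area under the ROC curve reported above has a convenient rank-based reading: it is the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign one. A generic sketch of that Mann-Whitney formulation (toy scores, not the paper's evaluation code):

```python
def roc_auc(scores_malignant, scores_benign):
    """AUC as the probability that a random malignant case scores above a
    random benign one; ties count half (Mann-Whitney formulation)."""
    wins = 0.0
    for m in scores_malignant:
        for b in scores_benign:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(scores_malignant) * len(scores_benign))

# hypothetical classifier scores for three malignant and three benign tumors
print(roc_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8 of 9 pairs ranked correctly
```

An AUC of 0.893, as reported, thus means roughly nine of ten random malignant/benign pairs are ranked correctly by the classifier's score.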


Subjects
Breast Neoplasms , Diagnosis, Computer-Assisted , Humans , Female , Ultrasonography , Image Processing, Computer-Assisted , Breast Neoplasms/diagnostic imaging , Computers
7.
Med Image Anal ; 89: 102888, 2023 10.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold-standard approach for surgical activity modeling. This formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts, including the CholecTriplet challenge introduced in 2021, put together techniques aimed at recognizing these triplets from surgical footage. Estimating the spatial locations of the triplets as well would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides a thorough methodological comparison of the methods, an in-depth analysis of the obtained results across multiple metrics and across visual and procedural challenges, their significance, and useful insights for future research directions and applications in surgery.


Subjects
Artificial Intelligence , Surgery, Computer-Assisted , Humans , Endoscopy , Algorithms , Surgery, Computer-Assisted/methods , Surgical Instruments
8.
Cancers (Basel) ; 15(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37345132

ABSTRACT

Recently, breast types were categorized into four types based on the Breast Imaging Reporting and Data System (BI-RADS) atlas, and evaluating them is vital in clinical practice. A Japanese guideline, called breast composition, was developed for the breast types based on BI-RADS. The guideline is characterized by a continuous value, the mammary gland content ratio, calculated to determine the breast composition, allowing a more objective and visual evaluation. Although a discriminative deep convolutional neural network (DCNN) has conventionally been developed to classify the breast composition, it can make errors of two steps or more. Hence, we propose an alternative regression DCNN based on the mammary gland content ratio. We used 1476 images evaluated by an expert physician. Our regression DCNN contains four convolution layers and three fully connected layers. Consequently, we obtained a high correlation of 0.93 (p < 0.01). Furthermore, to scrutinize the effectiveness of the regression DCNN, we categorized breast composition using the estimated ratio obtained by the regression DCNN. The agreement rate is high at 84.8%, suggesting that the breast composition can be calculated using the regression DCNN with high accuracy. Moreover, errors of two steps or more are unlikely to occur, and the estimated results can be understood intuitively.
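The correlation of 0.93 between the regression DCNN's estimate and the expert's evaluation is a Pearson correlation; a minimal sketch with hypothetical ratio values (the arrays below are invented for illustration, not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two paired sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical expert vs. DCNN-estimated mammary gland content ratios (percent)
expert = [10, 25, 40, 55, 70]
estimate = [12, 22, 43, 52, 74]
print(round(pearson_r(expert, estimate), 3))
```

A value near 1.0 indicates the regression output tracks the expert's ratio almost linearly, which is what makes thresholding the estimate into composition categories reliable.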

9.
Comput Methods Programs Biomed ; 236: 107561, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37119774

ABSTRACT

BACKGROUND AND OBJECTIVE: In order to be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS: The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations, which described the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to the recognition at all granularities simultaneously using a single modality, and two addressed the recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric to take class imbalance into account; it is more clinically relevant than a frame-by-frame score. RESULTS: Seven teams participated in at least one task, with four teams participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION: The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required by video/kinematic-based methods (compared with kinematic-only methods) must be considered.
Indeed, one must ask if it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
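The balanced-accuracy idea behind AD-Accuracy is averaging per-class recall, so a short, rare workflow phase weighs as much as a long, common one. The sketch below shows plain balanced accuracy only; the application-dependent weighting of the actual challenge metric is not reproduced here:

```python
def balanced_accuracy(y_true, y_pred):
    """Average of per-class recall over the classes present in y_true."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# toy frame labels: a long phase1 and a short phase2
y_true = ["phase1"] * 8 + ["phase2"] * 2
y_pred = ["phase1"] * 8 + ["phase1", "phase2"]
print(balanced_accuracy(y_true, y_pred))  # (1.0 + 0.5) / 2 = 0.75
```

Plain frame accuracy on this example would be 0.9; the balanced form drops to 0.75 because half of the rare phase is missed, which is exactly the behavior the metric is meant to punish.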


Subjects
Algorithms , Robotic Surgical Procedures , Humans , Workflow , Robotic Surgical Procedures/methods
10.
Med Image Anal ; 86: 102803, 2023 05.
Article in English | MEDLINE | ID: mdl-37004378

ABSTRACT

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps, or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as ⟨instrument, verb, target⟩ triplets delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
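The mAP metric used for ranking averages, over triplet classes, the average precision of each class's ranked predictions. A minimal sketch with hypothetical rankings (1 = correct prediction at that rank, 0 = incorrect); real evaluations interpolate the precision-recall curve, which this simplification omits:

```python
def average_precision(ranked_correct):
    """AP over one class's ranked predictions: mean of precision@k at each hit."""
    hits, precisions = 0, []
    for k, correct in enumerate(ranked_correct, start=1):
        if correct:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(per_class_rankings):
    """mAP: unweighted mean of the per-class average precisions."""
    aps = [average_precision(r) for r in per_class_rankings]
    return sum(aps) / len(aps)

# two hypothetical triplet classes with ranked prediction outcomes
print(mean_average_precision([[1, 0, 1], [0, 1]]))  # (5/6 + 1/2) / 2 = 2/3
```

Because every class contributes equally, rare triplet classes with poor rankings drag mAP down sharply, which helps explain the wide 4.2%-38.1% spread among submissions.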


Subjects
Benchmarking , Laparoscopy , Humans , Algorithms , Operating Rooms , Workflow , Deep Learning
11.
Med Image Anal ; 86: 102770, 2023 05.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument, and/or skill assessment. RESULTS: F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work.
In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.


Subjects
Artificial Intelligence , Benchmarking , Humans , Workflow , Algorithms , Machine Learning
12.
Med Image Anal ; 83: 102628, 2023 01.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). 
All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.
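The Dice scores quoted above measure volume overlap between a predicted mask and the reference annotation; a minimal 2D sketch of the coefficient (toy masks, not the challenge's evaluation pipeline, which also handles empty-mask edge cases):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True  # 16 pixels, overlap 4
print(dice(a, b))  # 2*4 / (16 + 16) = 0.25
```

Dice is especially sensitive for small structures like the cochlea: a fixed-size boundary error costs proportionally more overlap than it would on a large organ, which is part of why the cochlea scores trail the VS scores.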


Subjects
Neuroma, Acoustic , Humans , Neuroma, Acoustic/diagnostic imaging
13.
Med Image Anal ; 84: 102699, 2023 02.
Article in English | MEDLINE | ID: mdl-36463832

ABSTRACT

The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and thus is an important marker in tumor grading. Recognition of MF by pathologists is subject to a strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to strongly deteriorate when applied in a different clinical environment. The variability caused by using different whole slide scanners has been identified as one decisive component in the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As test set, an additional 100 cases split across four scanning systems, including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task.


Subjects
Algorithms , Mitosis , Humans , Neoplasm Grading , Prognosis
14.
Comput Methods Programs Biomed ; 212: 106452, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34688174

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operational field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly being used to record videos in open surgeries. This makes videos a common choice in surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS: The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels. This data set was composed of videos, kinematics, and workflow annotations. The latter described the sequences at three different granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three of them were related to the recognition of surgical workflow at three different granularity levels, while the last one addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric; it takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS: Six teams participated in at least one task. All teams employed deep learning models, such as convolutional neural networks (CNN), recurrent neural networks (RNN), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75%, respectively, for recognition of phases, steps, activities, and multi-granularity.
The RNN-based models outperformed the CNN-based ones, and the models dedicated to a single granularity outperformed the multi-granularity model, except for activity recognition. CONCLUSION: For high levels of granularity, the best models had a recognition rate that may be sufficient for applications such as prediction of the remaining surgical time. However, for activities, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.


Subjects
Laparoscopy , Robotic Surgical Procedures , Anastomosis, Surgical , Humans , Neural Networks, Computer , Workflow
15.
Sci Rep ; 11(1): 19067, 2021 09 24.
Article in English | MEDLINE | ID: mdl-34561541

ABSTRACT

Green tea, a widely consumed beverage in Asia, contains green tea catechins effective against obesity, especially epigallocatechin-3-O-gallate (EGCG), but must be consumed in an impractically huge amount daily to elicit its biological effect. Meanwhile, citrus polyphenols have various physiological effects that could enhance EGCG functionality. Here we investigated the antiobesity effect of a combination of EGCG and α-glucosyl hesperidin, a citrus polyphenol, at doses that have not been previously reported to exert antiobesity effects by themselves in any clinical trial. In a randomized, placebo-controlled, double-blinded, and parallel-group-designed clinical trial, 60 healthy Japanese males and females aged 30-75 years consumed green tea combined with α-glucosyl hesperidin (GT-gH), which contained 178 mg α-glucosyl hesperidin and 146 mg EGCG, for 12 weeks. Physical, hematological, blood biochemical, and urine examinations showed that GT-gH is safe to use. At week 12, GT-gH prevented weight gain and reduced body mass index (BMI) compared with the placebo. Especially in those aged < 50 years, triglyceride and body fat percentage decreased at week 6, visceral fat level and body fat percentage decreased at week 12; body weight, BMI, and blood LDL/HDL ratio also decreased. In conclusion, taking GT-gH prevents weight gain, and the antiobesity effect of GT-gH was more pronounced in people aged < 50 years.


Subjects
Catechin/analogs & derivatives , Glucosides/therapeutic use , Hesperidin/analogs & derivatives , Obesity/prevention & control , Tea , Adult , Body Mass Index , Catechin/administration & dosage , Catechin/therapeutic use , Female , Glucosides/administration & dosage , Hesperidin/administration & dosage , Hesperidin/therapeutic use , Humans , Male , Middle Aged , Placebos , Tea/chemistry
17.
J Clin Oncol ; 38(22): 2488-2498, 2020 08 01.
Article in English | MEDLINE | ID: mdl-32421442

ABSTRACT

PURPOSE: We report here the outcomes and late effects of the Japanese Study Group for Pediatric Liver Tumors (JPLT)-2 protocol, based on cisplatin-tetrahydropyranyl-adriamycin (CITA) with risk stratification according to the pretreatment extent of disease (PRETEXT) classification for hepatoblastoma (HB). PATIENTS AND METHODS: From 1999 to 2012, 361 patients with untreated HB were enrolled. PRETEXT I/II patients were treated with up-front resection, followed by low-dose CITA (stratum 1), or received low-dose CITA, followed by surgery and postoperative chemotherapy (stratum 2). In the remaining patients, after 2 cycles of CITA, responders received the CITA regimen before resection (stratum 3), and nonresponders were switched to ifosfamide, pirarubicin, etoposide, and carboplatin (ITEC; stratum 4). Intensified chemotherapeutic regimens with autologous hematopoietic stem-cell transplantation (SCT) after resection were an optional treatment for patients with refractory/metastatic disease. RESULTS: The 5-year event-free and overall survival rates of HB patients were 74.2% and 89.9%, respectively, for stratum 1; 84.8% and 90.8%, respectively, for stratum 2; 71.6% and 85.9%, respectively, for stratum 3; and 59.1% and 67.3%, respectively, for stratum 4. The outcomes for CITA responders were significantly better than those for nonresponders, whose outcomes remained poor despite salvage therapy with a second-line ITEC regimen or SCT. The late effects of ototoxicity, cardiotoxicity, and delayed growth occurred in 61, 18, and 47 patients, respectively. Thirteen secondary malignant neoplasms (SMNs), including 10 leukemias, occurred, correlating with higher exposure to pirarubicin and younger age at diagnosis. CONCLUSION: The JPLT-2 protocol achieved up-front resectability in PRETEXT I/II patients with no annotation factors, and satisfactory survival in CITA responders among the remaining patients.
However, outcomes for CITA nonresponders were unsatisfactory, despite therapy intensification with ITEC regimens and SCT. JPLT-2 had a relatively low incidence of cardiotoxicity but high rates of SMNs.


Subjects
Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Hematopoietic Stem Cell Transplantation/mortality , Hepatectomy/mortality , Hepatoblastoma/mortality , Liver Neoplasms/mortality , Postoperative Complications/mortality , Child, Preschool , Combined Modality Therapy , Female , Follow-Up Studies , Hepatoblastoma/pathology , Hepatoblastoma/therapy , Humans , Infant , Liver Neoplasms/pathology , Liver Neoplasms/therapy , Male , Non-Randomized Controlled Trials as Topic , Postoperative Complications/etiology , Postoperative Complications/pathology , Prognosis , Prospective Studies , Survival Rate
18.
Pediatr Surg Int ; 36(3): 305-316, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32006092

ABSTRACT

PURPOSE: Recently, several investigators reported that costal cartilage does not overgrow in pectus excavatum (PE). We aimed to clarify whether costochondral length is greater in PE than in the normal thorax, and to characterize how the shape of the precordial concavity changes with growth in PE. METHODS: We evaluated 243 CT axial images of patients with PE and 246 CT axial images of patients without thoracic deformity. We divided the fifth costal cartilage into several segments, treated each segment as a straight line, and summed the segment lengths to approximate costochondral length. We compared the approximate costochondral length between PE and the normal thorax. We also analyzed the distance between the anterior tips of the fifth ribs, and the ratios of the width and depth of the concavity to the thoracic diameter in PE. CONCLUSIONS: The costochondral length in patients with PE is highly likely to be longer than that of the normal thorax. The length of costal cartilage may be longer in asymmetric PE than in symmetric PE. In PE, the change in thoracic shape from symmetric to asymmetric may begin in infancy. The precordial concavity of PE may be shaped by overgrowth of both the costal cartilages and the ribs.


Subjects
Algorithms , Costal Cartilage/diagnostic imaging , Funnel Chest/diagnosis , Ribs/diagnostic imaging , Adolescent , Child , Child, Preschool , Female , Humans , Male , Tomography, X-Ray Computed , Young Adult
19.
Med Image Anal ; 52: 24-41, 2019 02.
Article in English | MEDLINE | ID: mdl-30468970

ABSTRACT

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training, and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of video from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learned from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.


Subjects
Cataract Extraction/instrumentation , Deep Learning , Surgical Instruments , Algorithms , Humans , Video Recording
20.
Med Phys ; 45(11): 4986-5003, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30168159

ABSTRACT

PURPOSE: Compensation for respiratory motion is important during abdominal cancer treatments. In this work, we report the results of the 2015 MICCAI Challenge on Liver Ultrasound Tracking and extend the 2D results to relate them to clinical relevance in the form of reduced treatment margins, and hence sparing of healthy tissue, while maintaining a full duty cycle. METHODS: We describe methodologies for estimating and temporally predicting respiratory liver motion from continuous ultrasound imaging, as used during ultrasound-guided radiation therapy. Furthermore, we investigated the trade-off between tracking accuracy and runtime in combination with temporal prediction strategies and their impact on treatment margins. RESULTS: Based on 2D ultrasound sequences from 39 volunteers, a mean tracking accuracy of 0.9 mm was achieved when combining the results from the 4 challenge submissions (individually 1.2 to 3.3 mm). The two submissions for the 3D sequences from 14 volunteers provided mean accuracies of 1.7 and 1.8 mm. In combination with temporal prediction, using the faster (41 vs 228 ms) but less accurate (1.4 vs 0.9 mm) tracking method resulted in substantially greater margin reduction (70% vs 39%) relative to mid-ventilation margins, as it avoided non-linear temporal prediction by keeping the treatment system latency low (150 vs 400 ms). Accelerating the best tracking method would improve the margin reduction to 75%. CONCLUSIONS: Liver motion estimation and prediction during free breathing from 2D ultrasound images can substantially reduce the in-plane motion uncertainty and hence treatment margins. Employing an accurate tracking method while avoiding non-linear temporal prediction would be favorable. This approach has the potential to shorten treatment time compared to breath-hold and gated approaches, and to increase treatment efficiency and safety.


Subjects
Algorithms , Imaging, Three-Dimensional/methods , Liver/diagnostic imaging , Liver/radiation effects , Radiotherapy, Image-Guided/methods , Adult , Healthy Volunteers , Humans , Ultrasonography , Young Adult