Results 1 - 14 of 14
1.
Lancet Reg Health Southeast Asia; 24: 100279, 2024 May.
Article in English | MEDLINE | ID: mdl-38756152

ABSTRACT

Background: Gallbladder cancer (GBC) is highly aggressive. Diagnosis of GBC is challenging because benign gallbladder lesions can have similar imaging features. We aimed to develop and validate a deep learning (DL) model for the automatic detection of GBC at abdominal ultrasound (US) and to compare its diagnostic performance with that of radiologists. Methods: In this prospective study, a multiscale, second-order pooling-based DL classifier model was trained (training and validation cohorts) using the US data of patients with gallbladder lesions acquired between August 2019 and June 2021 at the Postgraduate Institute of Medical Education and Research, a tertiary care hospital in North India. The performance of the DL model in detecting GBC was evaluated in a temporally independent test cohort (July 2021-September 2022) and compared with that of two radiologists. Findings: The study included 233 patients in the training set (mean age, 48 ± 23 years (2 SD); 142 women), 59 patients in the validation set (mean age, 51.4 ± 19.2 years; 38 women), and 273 patients in the test set (mean age, 50.4 ± 22.1 years; 177 women). In the test set, the DL model had a sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of 92.3% (95% CI, 88.1-95.6), 74.4% (95% CI, 65.3-79.9), and 0.887 (95% CI, 0.844-0.930), respectively, for detecting GBC, which was comparable to both radiologists. The DL-based approach showed high sensitivity (89.8-93%) and AUC (0.810-0.890) for detecting GBC in the presence of stones, contracted gallbladders, lesion size <10 mm, and neck lesions, comparable to both radiologists (p = 0.052-0.738 for sensitivity and p = 0.061-0.745 for AUC). The sensitivity of DL-based detection of the mural thickening type of GBC was significantly greater than that of one of the radiologists (87.8% vs. 72.8%, p = 0.012), albeit with a reduced specificity. Interpretation: The DL-based approach demonstrated diagnostic performance comparable to experienced radiologists in detecting GBC with US. However, multicentre studies are warranted to fully explore the potential of DL-based diagnosis of GBC. Funding: None.
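The "multiscale, second-order pooling" mentioned in the methods refers to summarizing convolutional feature maps by their channel covariances rather than channel means, computed at more than one feature scale. A minimal sketch of that idea is shown below; the toy backbone, channel sizes, and two-scale fusion are illustrative assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a minimal second-order (covariance) pooling head
# applied at two feature scales, loosely mirroring a "multiscale, second-order
# pooling-based" classifier. Not the authors' model.
import torch
import torch.nn as nn


def covariance_pool(feat: torch.Tensor) -> torch.Tensor:
    """Second-order pooling: channel-wise covariance of a (B, C, H, W) map."""
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)
    x = x - x.mean(dim=2, keepdim=True)                   # centre each channel
    cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)   # (B, C, C)
    return cov.flatten(1)                                  # (B, C*C) descriptor


class MultiScaleSOPClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(                      # toy backbone (assumption)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 32 + 64 * 64, num_classes)

    def forward(self, x):
        mid = self.backbone[0:2](x)                          # coarse scale, 32 channels
        fine = self.backbone[2:4](mid)                       # deeper scale, 64 channels
        desc = torch.cat([covariance_pool(mid), covariance_pool(fine)], dim=1)
        return self.head(desc)


logits = MultiScaleSOPClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```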

2.
Article in English | MEDLINE | ID: mdl-38427281

ABSTRACT

Biliary tract cancers are malignant neoplasms arising from bile duct epithelial cells. They include cholangiocarcinomas and gallbladder cancer. Gallbladder cancer has a marked geographical preference and is one of the most common cancers in women in northern India. Biliary tract cancers are usually diagnosed at an advanced, unresectable stage; hence, the prognosis is dismal, with a five-year survival rate of less than 5% in advanced gallbladder cancer. Early detection and radical surgery are therefore critical to improving the prognosis of biliary tract cancers. Radiological imaging plays an essential role in diagnosing and managing these cancers. However, diagnosis is challenging because the biliary tract is affected by many diseases whose radiological appearances can mimic cancer. Artificial intelligence (AI) can improve radiologists' performance in various tasks, and deep learning (DL)-based approaches are increasingly being incorporated into medical imaging to improve diagnostic performance. This paper reviews AI-based strategies for improving the diagnosis and prognosis of biliary tract cancers.

3.
Article in English | MEDLINE | ID: mdl-38110782

ABSTRACT

BACKGROUND: The radiological differentiation of xanthogranulomatous cholecystitis (XGC) and gallbladder cancer (GBC) is challenging yet critical. We aimed to use a deep learning (DL)-based approach for differentiating XGC from GBC on ultrasound (US). METHODS: This single-center study comprised consecutive patients with XGC and GBC from a prospectively acquired database who underwent pre-operative US evaluation of gallbladder lesions. The performance of state-of-the-art (SOTA) DL models (GBCNet, a convolutional neural network [CNN], and RadFormer, a transformer) for XGC vs. GBC classification in US images was tested and compared with that of popular DL models and a radiologist. RESULTS: Twenty-five patients with XGC (mean age, 57 ± 12.3 years; 17 women) and 55 patients with GBC (mean age, 54.6 ± 11.9 years; 38 women) were included. The performance of GBCNet and RadFormer was comparable (sensitivity 89.1% vs. 87.3%, p = 0.738; specificity 72% vs. 84%, p = 0.563; AUC 0.744 vs. 0.751, p = 0.514). The AUCs of DenseNet-121, the vision transformer (ViT), and the data-efficient image transformer (DeiT) were significantly smaller than those of GBCNet (p = 0.015, 0.046, 0.013, respectively) and RadFormer (p = 0.012, 0.027, 0.007, respectively). The radiologist labeled the US images of 24 (30%) patients as non-diagnostic. In the remaining patients, the radiologist's sensitivity, specificity, and AUC for GBC detection were 92.7%, 35.7%, and 0.642, respectively. The specificity of the radiologist was significantly lower than that of GBCNet and RadFormer (p = 0.001). CONCLUSION: SOTA DL models perform better than radiologists in differentiating XGC from GBC on US.
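For readers less familiar with the metrics reported above, the snippet below shows a conventional way to compute sensitivity, specificity, and AUC for a binary GBC-vs-XGC classifier from predicted scores; the arrays are dummy values, not study data.

```python
# Minimal example of computing the reported metrics (sensitivity, specificity,
# AUC) for a binary classifier. Labels and scores below are dummy data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 0])    # 1 = GBC, 0 = XGC
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.7, 0.6, 0.95, 0.1, 0.35])
y_pred = (y_score >= 0.5).astype(int)                 # operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} auc={auc:.3f}")
```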

5.
Eur Radiol; 33(11): 8112-8121, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37209125

ABSTRACT

OBJECTIVES: To analyze the performance of deep learning on isodense/obscure masses in dense breasts; to build and validate a deep learning (DL) model using core radiology principles and analyze its performance on isodense/obscure masses; and to show performance on a screening mammography as well as a diagnostic mammography distribution. METHODS: This was a retrospective, single-institution, multi-centre study with external validation. For model building, we took a three-pronged approach. First, we explicitly taught the network to learn features other than density differences, such as spiculations and architectural distortion. Second, we used the opposite breast to enable the detection of asymmetries. Third, we systematically enhanced each image by a piece-wise-linear transformation. We tested the network on a diagnostic mammography dataset (2569 images with 243 cancers, January to June 2018) and a screening mammography dataset (2146 images with 59 cancers, patient recruitment from January to April 2021) from a different centre (external validation). RESULTS: When trained with our proposed technique (compared with the baseline network), sensitivity for malignancy at 0.2 false positives per image (FPI) increased from 82.7 to 84.7% on the diagnostic mammography dataset, from 67.9 to 73.8% in the subset of patients with dense breasts, from 74.6 to 85.3% in the subset of patients with isodense/obscure cancers, and from 84.9 to 88.7% in an external validation test set with a screening mammography distribution. Our sensitivity (0.90 at 0.2 FPI) on the public INbreast benchmark dataset exceeded currently reported values. CONCLUSION: Modelling traditional mammographic teaching into a DL framework can help improve cancer detection accuracy in dense breasts. CLINICAL RELEVANCE STATEMENT: Incorporating medical knowledge into neural network design can help overcome some limitations associated with specific modalities. In this paper, we show how one such deep neural network can improve performance on mammographically dense breasts. KEY POINTS: • Although state-of-the-art deep learning networks achieve good results in cancer detection in mammography in general, isodense and obscure masses and mammographically dense breasts pose a challenge to deep learning networks. • Collaborative network design and incorporation of traditional radiology teaching into the deep learning approach helped mitigate the problem. • The accuracy of deep learning networks may be translatable to different patient distributions: we showed the results of our network on screening as well as diagnostic mammography datasets.
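As context for the third prong of the method (systematic enhancement by a piece-wise-linear transformation), the sketch below shows what such an intensity remapping can look like; the control points are assumed for illustration, since the paper's exact breakpoints are not given here.

```python
# Sketch of a piece-wise linear intensity transformation of a mammogram.
# The breakpoints below are illustrative assumptions, not the paper's values.
import numpy as np

def piecewise_linear_enhance(img: np.ndarray) -> np.ndarray:
    """Map normalized intensities in [0, 1] through assumed control points."""
    x_points = [0.0, 0.2, 0.6, 1.0]       # input breakpoints (assumed)
    y_points = [0.0, 0.1, 0.8, 1.0]       # output breakpoints (assumed)
    img = img.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # normalize
    return np.interp(img, x_points, y_points)

enhanced = piecewise_linear_enhance(np.random.rand(256, 256) * 4095.0)
print(enhanced.min(), enhanced.max())
```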


Subjects
Breast Neoplasms, Deep Learning, Humans, Female, Mammography/methods, Breast Density, Retrospective Studies, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer
6.
Med Image Anal; 83: 102676, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36455424

ABSTRACT

We propose a novel deep neural network architecture for learning interpretable representations for medical image analysis. Our architecture generates global attention for the region of interest and then learns bag-of-words-style deep feature embeddings with local attention. The global and local feature maps are combined using a contemporary transformer architecture for highly accurate gallbladder cancer (GBC) detection from ultrasound (USG) images. Our experiments indicate that the detection accuracy of our model exceeds even that of human radiologists, supporting its use as a second reader for GBC diagnosis. The bag-of-words embeddings allow our model to be probed for interpretable explanations of GBC detection that are consistent with those reported in the medical literature. We show that the proposed model not only helps in understanding the decisions of neural network models but also aids in the discovery of new visual features relevant to the diagnosis of GBC. Source code is available at https://github.com/sbasu276/RadFormer.
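To make the global/local fusion idea concrete, here is a hedged sketch of combining one globally pooled token with local patch ("bag of words") embeddings through a transformer encoder. This is not the RadFormer implementation (see the linked repository); all layer choices and sizes are assumptions.

```python
# Illustrative sketch of fusing a global feature with local patch embeddings
# via a transformer encoder. NOT the RadFormer implementation; sizes assumed.
import torch
import torch.nn as nn


class GlobalLocalFusion(nn.Module):
    def __init__(self, dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.global_branch = nn.Sequential(            # global context (assumed)
            nn.Conv2d(1, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, dim)
        )
        self.local_embed = nn.Conv2d(1, dim, 16, stride=16)   # 16x16 patch tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                               # x: (B, 1, H, W) US image
        g = self.global_branch(x).unsqueeze(1)          # (B, 1, dim) global token
        l = self.local_embed(x).flatten(2).transpose(1, 2)    # (B, N, dim) patches
        tokens = self.fusion(torch.cat([g, l], dim=1))  # joint attention over all tokens
        return self.head(tokens[:, 0])                  # classify from the global token


logits = GlobalLocalFusion()(torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```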


Subjects
Gallbladder Neoplasms, Humans, Gallbladder Neoplasms/diagnostic imaging, Learning, Neural Networks (Computer), Software
7.
Curr Probl Diagn Radiol; 52(1): 47-55, 2023.
Article in English | MEDLINE | ID: mdl-35618554

ABSTRACT

With the rapid integration of artificial intelligence into medical practice, there has been an exponential increase in the number of scientific papers and industry players offering models designed for various tasks. Understanding these, however, is difficult for a practising radiologist, given the underlying mathematical principles and complicated terminology involved. This review aims to elucidate the core mathematical concepts behind both machine learning and deep learning models, explaining the various steps and common terminology in plain language. By the end of this article, the reader should be able to understand the basics of how prediction models are built and trained, including the challenges faced and how to avoid them. The reader should also be equipped to critically evaluate various models and to decide whether a model is likely to perform adequately in a real-world setting.
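As a concrete companion to that goal, a minimal, generic example of how a prediction model is built, trained, and checked against held-out data (synthetic features only, no clinical meaning) might look like this:

```python
# Minimal illustration of building and validating a prediction model:
# split the data, fit on the training portion, and evaluate on unseen data
# to detect overfitting. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A large gap between the two AUCs below would suggest overfitting.
print("train AUC:", roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]))
print("test  AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```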


Subjects
Algorithms, Artificial Intelligence, Humans, Machine Learning, Radiologists, Health Personnel
8.
IEEE Trans Pattern Anal Mach Intell; 45(6): 6832-6845, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34613911

ABSTRACT

The popularity of egocentric cameras and their always-on nature has led to an abundance of day-long first-person videos. The highly redundant nature of these videos and extreme camera shake make them difficult to watch from beginning to end, so they require efficient summarization tools for consumption. However, traditional summarization techniques developed for static surveillance videos or highly curated sports videos and movies are either not suitable or simply do not scale to such hours-long videos in the wild. On the other hand, specialized summarization techniques developed for egocentric videos limit their focus to important objects and people. This paper presents a novel unsupervised reinforcement learning framework to summarize egocentric videos in terms of both length and content. The proposed framework facilitates incorporating various prior preferences, such as faces, places, or scene diversity, and interactive user choices about including or excluding particular types of content. The approach can also be adapted to generate summaries of various lengths, making it possible to view even a one-minute summary of one's entire day. When using the facial saliency-based reward, we show that our approach generates summaries focusing on social interactions, similar to the current state of the art (SOTA). Quantitative comparisons on the benchmark Disney dataset show that our method achieves significant improvements in Relaxed F-Score (RFS) (29.60 compared to 19.21 for the SOTA), BLEU score (0.68 compared to 0.67 for the SOTA), Average Human Ranking (AHR), and unique events covered. Finally, we show that our technique can also be applied to summarize traditional, short, hand-held videos, where we improve the SOTA F-score on the benchmark SumMe and TVSum datasets from 41.4 to 46.40 and from 57.6 to 58.3, respectively. We also provide a PyTorch implementation and a web demo at https://pravin74.github.io/Int-sum/index.html.
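The framework itself is reinforcement-learning based; as a much simpler stand-in that still illustrates how a prior preference (for example, facial saliency) and a diversity term can be combined into a single selection reward, one could greedily pick frames as in the sketch below. All scores and features are made up.

```python
# Greatly simplified, greedy stand-in for reward-driven frame selection:
# combine a per-frame preference score (e.g. face saliency) with a diversity
# bonus and pick frames one at a time. Not the paper's RL framework.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))                # per-frame features (dummy)
face_score = rng.random(200)                         # preference term (dummy)

def diversity_bonus(idx, selected):
    if not selected:
        return 1.0
    dists = np.linalg.norm(features[selected] - features[idx], axis=1)
    return dists.min() / (dists.min() + 1.0)         # in [0, 1); larger = more novel

selected = []
for _ in range(10):                                  # summary length budget
    rewards = [face_score[i] + diversity_bonus(i, selected)
               if i not in selected else -np.inf
               for i in range(len(features))]
    selected.append(int(np.argmax(rewards)))

print(sorted(selected))
```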


Subjects
Algorithms, Video Recording, Humans
9.
World J Methodol; 12(4): 274-284, 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-36159101

ABSTRACT

BACKGROUND: Performing ultrasound during the current pandemic is quite challenging. To reduce the chances of cross-infection and keep healthcare workers safe, a robotic ultrasound system was developed that can be controlled remotely. It may also pave the way for broadening the reach of ultrasound to remote rural areas. AIM: To assess the feasibility of a robotic system in performing abdominal ultrasound and compare it with the conventional ultrasound approach. METHODS: A total of 21 healthy volunteers were recruited. Ultrasound was performed in two settings: using the robotic arm and using the conventional hand-held procedure. The acquired images were analyzed by separate radiologists. RESULTS: Our study showed that the robotic arm model was feasible, and the results varied depending on the organ imaged. The liver images showed no significant difference. For other organs, the need for repeat imaging was higher with the robotic arm, which could be attributed to the radiologist's learning curve and ability to control the haptic device. The doctor and volunteer surveys also showed significant comfort with, and acceptance of, the technology, and participants expressed a desire to use it in the future. CONCLUSION: This study shows that robotic ultrasound is feasible and addresses a pressing need during the pandemic.

10.
Sci Rep; 12(1): 11622, 2022 Jul 08.
Article in English | MEDLINE | ID: mdl-35803985

ABSTRACT

While the detection of malignancies on mammography has received a boost from the use of convolutional neural networks (CNNs), the detection of very small cancers remains challenging. This is, however, clinically significant, as the purpose of mammography is early detection of cancer, making it imperative to pick up cancers while they are still very small. Mammography has the highest spatial resolution of all imaging modalities (image sizes as high as 3328 × 4096 pixels), a requirement that stems from the need to detect the fine features of the smallest cancers on screening. However, due to computational constraints, most state-of-the-art CNNs work on reduced-resolution images; those that work at higher resolutions compromise on global context and work at a single scale. In this work, we show that resolution, scale, and image context are all important, independent factors in the detection of small masses. We therefore use a fully convolutional network that can take inputs of any size, and we incorporate a systematic multi-scale, multi-resolution approach and encode image context, which we show are critical factors in the detection of small masses. We show that this approach improves the detection of cancer, particularly of small masses, in comparison with the baseline model. We perform a single-institution, multicentre study and report the performance of the model on a diagnostic mammography dataset, a screening mammography dataset, and a curated dataset of small cancers < 1 cm in size. On this small-cancer dataset, our approach improves the sensitivity from 61.53 to 87.18% at 0.3 false positives per image (FPI). The model and code are available at https://github.com/amangupt01/Small_Cancer_Detection.
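A hedged sketch of the multi-scale, multi-resolution idea follows: score the image with a fully convolutional network at several resolutions and merge the maps at the original size, so responses to small masses that are visible only at full resolution are retained. The tiny detector and the scale set are placeholders, not the published model.

```python
# Sketch of multi-scale, fully convolutional inference: score the image at
# several resolutions and merge the maps. The "detector" is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

detector = nn.Sequential(                     # fully convolutional, any input size
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),        # per-pixel malignancy score
)

def multiscale_scores(img: torch.Tensor, scales=(1.0, 0.5, 0.25)) -> torch.Tensor:
    h, w = img.shape[-2:]
    maps = []
    for s in scales:
        scaled = F.interpolate(img, scale_factor=s, mode="bilinear",
                               align_corners=False)
        scores = detector(scaled)
        maps.append(F.interpolate(scores, size=(h, w), mode="bilinear",
                                  align_corners=False))
    return torch.stack(maps).amax(dim=0)      # keep the strongest response per pixel

out = multiscale_scores(torch.randn(1, 1, 1024, 832))
print(out.shape)  # torch.Size([1, 1, 1024, 832])
```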


Subjects
Breast Neoplasms, Mammography, Breast Neoplasms/diagnostic imaging, Early Detection of Cancer, Female, Humans, Mammography/methods, Mass Screening, Neural Networks (Computer)
11.
Sensors (Basel); 21(23), 2021 Dec 04.
Article in English | MEDLINE | ID: mdl-34884122

ABSTRACT

Recent scientific and technological advancements driven by the Internet of Things (IoT), machine learning (ML) and artificial intelligence (AI), distributed computing, and data communication technologies have opened up a vast range of opportunities in many scientific fields, spanning fast, reliable, and efficient data communication, large-scale cloud/edge computing, and intelligent big data analytics. Technological innovations and developments in these areas have also enabled many opportunities in the space industry. The successful Mars landing of NASA's Perseverance rover on 18 February 2021 represents another giant leap for humankind in space exploration. Emerging research and development of connectivity and computing technologies in IoT for space/non-terrestrial environments are expected to yield significant benefits in the near future. This survey paper presents a broad overview of the area and provides a look ahead at the opportunities made possible by IoT and space-based technologies. We first survey current developments in the IoT and space industries and identify key challenges and opportunities in these areas. We then review the state of the art and discuss future opportunities for IoT development, deployment, and integration to support future endeavors in space exploration.


Subjects
Internet of Things, Artificial Intelligence, Cloud Computing, Machine Learning, Technology
12.
Eur Radiol; 31(8): 6039-6048, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33471219

ABSTRACT

OBJECTIVES: To study whether a trained convolutional neural network (CNN) can assist radiologists in differentiating coronavirus disease (COVID)-positive from COVID-negative patients on chest X-ray (CXR) through an ambispective clinical study; to identify subgroups of patients in whom artificial intelligence (AI) can be of particular value; and to analyse which imaging features may have contributed to the performance of the AI by means of visualisation techniques. METHODS: The CXRs of 487 patients were classified into four categories (normal, classical COVID, indeterminate, and non-COVID) by the consensus opinion of two radiologists. CXRs classified as "normal" or "indeterminate" were then analysed by the AI, and the final categorisation was guided by the network's prediction. The precision and recall of the radiologists alone and of the radiologists assisted by the AI were calculated against reverse transcriptase-polymerase chain reaction (RT-PCR) as the gold standard. Attention maps of the CNN were analysed to understand which regions of the CXR were important to the AI algorithm in making a prediction. RESULTS: The precision of the radiologists improved from 65.9 to 81.9% and recall improved from 17.5 to 71.75% when AI assistance was provided. The AI showed 92% accuracy in classifying "normal" CXRs into COVID or non-COVID. Analysis of the attention maps revealed attention on the cardiac shadow in these "normal" radiographs. CONCLUSION: This study shows how deployment of an AI algorithm can complement a human expert in the determination of COVID status. Analysis of the detected features suggests possible subtle cardiac changes, laying the ground for further investigative studies. KEY POINTS: • Through an ambispective clinical study, we show how assistance from an AI algorithm can improve the recall (sensitivity) and precision (positive predictive value) of radiologists in assessing CXRs for possible COVID, with RT-PCR as the reference. • The AI achieved the best results on images classified as "normal" by radiologists. We conjecture that possible subtle cardiac changes on the CXR, imperceptible to the human eye, may have contributed to this prediction. • The reported results may pave the way for a human-computer collaboration in which the expert, with some help from the AI algorithm, achieves higher accuracy in predicting COVID status on CXR than previously thought possible with either alone.
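For clarity on the reported figures, precision (positive predictive value) and recall (sensitivity) against the RT-PCR reference can be computed as in the snippet below; the label arrays are illustrative, not study data.

```python
# How precision (PPV) and recall (sensitivity) against an RT-PCR reference
# standard are computed. The label arrays are illustrative, not study data.
from sklearn.metrics import precision_score, recall_score

rt_pcr = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]        # 1 = COVID-positive by RT-PCR
reader = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]        # radiologist (or AI-assisted) call

print("precision:", precision_score(rt_pcr, reader))
print("recall   :", recall_score(rt_pcr, reader))
```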


Subjects
Artificial Intelligence, COVID-19, Humans, Thoracic Radiography, SARS-CoV-2, X-Ray Computed Tomography, X-Rays
13.
World Neurosurg; 137: 398-407, 2020 May.
Article in English | MEDLINE | ID: mdl-32014545

ABSTRACT

BACKGROUND: Minimally invasive neurosurgical approaches reduce patient morbidity by providing the surgeon with better visualization of, and access to, complex lesions with minimal disruption of normal anatomy. The use of rigid or flexible neuroendoscopes, supplemented by a conventional stereoscopic operating microscope, has been integral to the adoption of these techniques; neurosurgeons commonly use neuroendoscopes for ventricular and endonasal approaches. It is challenging to learn neuroendoscopy skills through the existing apprenticeship model of surgical education, and training methods that use simulation-based systems have achieved wide acceptance. Physical simulators provide anatomic orientation and repeatable hands-on experience. Our aim is to review existing physical simulators for skills training in neuroendoscopic procedures. METHODS: We searched Scopus, Google Scholar, PubMed, IEEE Xplore, and dblp using the keywords "neuroendoscopy," "training," "simulators," "physical," and "skills evaluation." A total of 351 articles were screened on the basis of development methods, evaluation criteria, and validation studies of physical simulators for skills training in neuroendoscopy. RESULTS: The screening resulted in classifying the physical training methods developed for neuroendoscopic surgical skills into synthetic simulators and box trainers. The existing simulators were compared on the basis of their design, fidelity, trainee evaluation methods, and validation studies. CONCLUSIONS: The current state of simulation systems demands collaborative initiatives among translational research institutes. These systems need improved fidelity and validation studies before inclusion in the surgical education curriculum, and learning should be imparted in stages, with standardized performance metrics for skills evaluation.


Subjects
Anatomic Models, Natural Orifice Endoscopic Surgery/education, Neuroendoscopy/education, Simulation Training/methods, Ventriculostomy/education, Humans, Nasal Cavity
14.
IEEE Trans Pattern Anal Mach Intell; 37(7): 1323-1335, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26352442

ABSTRACT

The use of higher-order clique potentials in MRF-MAP problems has been limited primarily because of the inefficiency of existing algorithmic schemes. We propose a new combinatorial algorithm for computing optimal solutions to 2-label MRF-MAP problems with higher-order clique potentials. The algorithm runs in O(2^k n^3) time in the worst case (where k is the clique size and n is the number of pixels). A special gadget is introduced to model flows in a higher-order clique, and a technique for building a flow graph is specified. Based on the primal-dual structure of the optimization problem, the notions of the capacity of an edge and of a cut are generalized to define a flow problem. We show that in this flow graph, when the clique potentials are submodular, the max flow is equal to the min cut, which is also the optimal solution to the problem. We show experimentally that our algorithm provides significantly better solutions in practice and is hundreds of times faster than solution schemes such as Dual Decomposition [1], TRWS [2], and Reduction [3], [4], [5]. The framework represents a significant advance in handling higher-order problems, making optimal inference practical for medium-sized cliques.
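For reference, the 2-label MRF-MAP objective and the submodularity condition under which the stated max-flow/min-cut equality holds can be written as follows (standard definitions, not specific to this paper's gadget construction):

```latex
% Energy of a 2-label MRF over a set of cliques C, with x \in \{0,1\}^n:
E(\mathbf{x}) \;=\; \sum_{c \,\in\, \mathcal{C}} \theta_c(\mathbf{x}_c)

% A clique potential \theta_c is submodular if, for all labelings y, z of the clique,
\theta_c(\mathbf{y} \wedge \mathbf{z}) \;+\; \theta_c(\mathbf{y} \vee \mathbf{z})
  \;\le\; \theta_c(\mathbf{y}) \;+\; \theta_c(\mathbf{z})

% where \wedge and \vee are taken element-wise (minimum and maximum of the labels).
% When every \theta_c satisfies this condition, MAP inference reduces to a min-cut problem.
```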
