Results 1 - 2 of 2
1.
Radiol Case Rep; 19(10): 4650-4653, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39220785

ABSTRACT

Trabectedin is an antineoplastic drug used to treat soft tissue sarcomas. It is usually infused through a central venous port (CVP) because trabectedin leakage causes serious skin and soft tissue complications. A characteristic sterile inflammation has recently been reported after infusion of trabectedin through the CVP. Here, we report a case of sterile inflammation along the tunneled catheter pathway after trabectedin infusion through the CVP, with residual postinflammatory changes persisting even after CVP removal. A 57-year-old man with myxoid liposarcoma developed skin erythema, swelling, and induration along the tunneled catheter pathway of the CVP after 16 cycles of trabectedin infusion through the CVP. The patient was diagnosed with sterile inflammation because various tests were negative for infection. The CVP was removed because increasing injection resistance made trabectedin infusion difficult; during removal, the catheter was found to be firmly adherent to the surrounding tissue. The induration and pigmentation along the catheter persisted for 4 months after CVP removal.

2.
Artif Intell Med; 153: 102889, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38728811

ABSTRACT

BACKGROUND: Pretraining large-scale neural language models on raw text has contributed significantly to improving transfer learning in natural language processing. With the introduction of transformer-based language models such as bidirectional encoder representations from transformers (BERT), the performance of information extraction from free text has improved significantly in both the general and medical domains. However, it is difficult to train domain-specific BERT models that perform well in domains for which few large, high-quality corpora are publicly available.

OBJECTIVE: We hypothesized that this problem can be addressed by oversampling a domain-specific corpus and using it for pretraining together with a larger corpus in a balanced manner. In the present study, we verified our hypothesis by developing pretraining models using our method and evaluating their performance.

METHODS: Our proposed method is based on the simultaneous pretraining of models with knowledge from distinct domains after oversampling. We conducted three experiments in which we generated (1) an English biomedical BERT from a small biomedical corpus, (2) a Japanese medical BERT from a small medical corpus, and (3) an enhanced biomedical BERT pretrained with complete PubMed abstracts in a balanced manner. We then compared their performance with that of conventional models.

RESULTS: Our English BERT, pretrained using both a general-domain corpus and a small medical-domain corpus, performed well enough for practical use on the biomedical language understanding evaluation (BLUE) benchmark. Moreover, our proposed method was more effective than the conventional methods for each biomedical corpus of the same size in the general domain. Our Japanese medical BERT outperformed the other BERT models built using a conventional method on almost all the medical tasks, showing the same trend as the first experiment in English. Further, our enhanced biomedical BERT model, which was not pretrained on clinical notes, achieved superior clinical and biomedical scores on the BLUE benchmark, with increases of 0.3 points in the clinical score and 0.5 points in the biomedical score over models trained without our proposed method.

CONCLUSIONS: Well-balanced pretraining using oversampled instances derived from a corpus appropriate for the target task allowed us to construct a high-performance BERT model.
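The balanced-oversampling idea described in this abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration, not the authors' actual pipeline: the function name, variables, and the simple repeat-plus-random-top-up sampling scheme are assumptions. It upsamples a small domain-specific corpus until it contributes roughly as many sentences as a large general-domain corpus, then mixes the two so that a BERT-style model can be pretrained on both domains in a balanced manner.

```python
import random


def build_balanced_corpus(general_sentences, domain_sentences, seed=0):
    """Oversample the small domain corpus until it roughly matches the size
    of the large general corpus, then shuffle the two together.

    This is a sketch of balanced oversampling, not the authors' exact method.
    """
    rng = random.Random(seed)
    if not domain_sentences:
        return list(general_sentences)

    # Whole repetitions of the domain corpus that fit into the general corpus size.
    reps = max(1, len(general_sentences) // len(domain_sentences))
    oversampled = list(domain_sentences) * reps

    # Top up with randomly drawn domain sentences so both sides are balanced.
    shortfall = max(0, len(general_sentences) - len(oversampled))
    oversampled += rng.choices(domain_sentences, k=shortfall)

    # Interleave general and domain sentences for pretraining.
    mixed = list(general_sentences) + oversampled
    rng.shuffle(mixed)
    return mixed


if __name__ == "__main__":
    # Toy stand-ins for a large general corpus and a small medical corpus.
    general = [f"general sentence {i}" for i in range(100_000)]
    medical = [f"medical sentence {i}" for i in range(2_000)]
    corpus = build_balanced_corpus(general, medical)
    print(len(corpus), sum(s.startswith("medical") for s in corpus))
```

In practice the mixed sentence list would then be tokenized and fed to a standard masked-language-model pretraining loop; the key point the abstract makes is that the small domain corpus is repeated until the two domains contribute comparable amounts of text, rather than being drowned out by the general corpus.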


Subject(s)
Natural Language Processing; Humans; Neural Networks, Computer