ABSTRACT
Sinonasal squamous cell carcinomas (SNSCCs) are uncommon and carry an adverse prognosis. HPV-associated SNSCCs and fusion-driven SNSCCs are particularly rare. We therefore present a case of an HPV-associated SNSCC harboring an FGFR3::TACC3 fusion, together with a brief review of the pertinent literature.
Subjects
Carcinoma, Squamous Cell , Receptor, Fibroblast Growth Factor, Type 3 , Humans , Receptor, Fibroblast Growth Factor, Type 3/genetics , Carcinoma, Squamous Cell/virology , Carcinoma, Squamous Cell/genetics , Carcinoma, Squamous Cell/pathology , Microtubule-Associated Proteins/genetics , Male , Papillomavirus Infections/complications , Papillomavirus Infections/virology , Paranasal Sinus Neoplasms/virology , Paranasal Sinus Neoplasms/pathology , Paranasal Sinus Neoplasms/genetics , Middle Aged , Female
ABSTRACT
Artificial intelligence (AI) has a multitude of applications in cancer research and oncology. However, the training of AI systems is impeded by the limited availability of large datasets due to data protection requirements and other regulatory obstacles. Federated and swarm learning represent possible solutions to this problem by collaboratively training AI models while avoiding data transfer. However, in these decentralized methods, weight updates are still transferred to the aggregation server for merging the models. This leaves open the possibility of a data privacy breach, for example through model inversion or membership inference attacks by untrusted servers. Somewhat-homomorphically-encrypted federated learning (SHEFL) addresses this problem because only encrypted weights are transferred and model updates are performed in the encrypted space. Here, we demonstrate the first successful implementation of SHEFL in a range of clinically relevant tasks in cancer image analysis on multicentric datasets in radiology and histopathology. We show that SHEFL enables the training of AI models that outperform locally trained models and perform on a par with centrally trained models. In the future, SHEFL can enable multiple institutions to co-train AI models without forsaking data governance and without ever transmitting any decryptable data to untrusted servers.
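The core idea of aggregating encrypted weight updates can be sketched with a toy additively homomorphic scheme. The sketch below uses a textbook Paillier cryptosystem with deliberately tiny, insecure demo parameters, purely to illustrate how an untrusted server can sum client updates without ever seeing plaintext weights; it is not the scheme or key size a real SHEFL deployment would use.

```python
import math
import random

# --- Toy Paillier keypair (demo primes only -- NOT secure) ---
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
# mu = L(g^lam mod n^2)^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    """Encrypt integer m < n; ciphertexts of the same m differ via r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Model weights are floats, so use fixed-point scaling before encryption.
SCALE = 100

def enc_weight(w):
    return encrypt(int(round(w * SCALE)))

# Three hypothetical clients each encrypt one local weight update.
updates = [0.12, 0.30, 0.06]
ciphertexts = [enc_weight(w) for w in updates]

# The untrusted aggregation server works purely in the encrypted space:
# multiplying Paillier ciphertexts adds the underlying plaintexts.
agg = 1
for c in ciphertexts:
    agg = (agg * c) % n2

# Only the key-holding clients can decrypt the aggregate and average it.
avg = decrypt(agg) / SCALE / len(updates)
print(round(avg, 2))  # 0.16
```

The server here never holds a decryption key, so individual client updates stay opaque to it; only the decrypted aggregate is revealed to the participants, which is the privacy property the abstract describes.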