ABSTRACT
OBJECTIVE: To apply deep neural networks (DNNs) to longitudinal electronic health record (EHR) data to predict suicide attempt risk among veterans. Local explainability techniques were used to explain each prediction, with the ultimate goal of improving outreach and intervention efforts. MATERIALS AND METHODS: The DNNs fused demographic information with diagnostic, prescription, and procedure codes. Models were trained and tested on EHR data of approximately 500 000 US veterans: all veterans with recorded suicide attempts from April 1, 2005, through January 1, 2016, each paired with 5 veterans of the same age who did not attempt suicide. SHapley Additive exPlanations (SHAP) values were calculated to explain the DNN predictions. RESULTS: The DNNs outperformed logistic and linear regression models in predicting suicide attempts. After adjusting for the sampling technique, the convolutional neural network (CNN) model achieved a positive predictive value (PPV) of 0.54 for suicide attempts within 12 months by veterans in the top 0.1% risk tier. Explainability methods identified meaningful subgroups of high-risk veterans as well as key determinants of suicide attempt risk at both the group and individual levels. DISCUSSION AND CONCLUSION: The deep learning methods employed in this study have the potential to substantially enhance existing suicide risk models for veterans. They can also provide important clues for exploring the relative value of long-term and short-term intervention strategies. Furthermore, the explainability methods used here could help communicate to clinicians the key features that increase an individual veteran's risk of attempting suicide.
Subject(s)
Suicide, Attempted; Veterans; Humans; Neural Networks, Computer; Motivation
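The abstract above describes computing SHAP values to explain individual risk predictions. Below is a minimal sketch of that kind of local explanation, assuming a small feed-forward classifier as a stand-in for the study's DNN; the feature names are hypothetical placeholders, the labels are random toy data, and the model-agnostic KernelExplainer is used because the abstract does not specify which SHAP estimator the authors applied.

```python
# Minimal sketch: per-patient SHAP explanations for a risk classifier.
# Feature names and data are illustrative stand-ins, not the study's.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "dx_depression", "dx_ptsd", "rx_opioid", "proc_er_visit"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)  # toy labels: 1 = recorded suicide attempt

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                      random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it perturbs features around a
# background sample to estimate each feature's Shapley value.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1],
                                 background)
shap_values = explainer.shap_values(X[:5])  # one local explanation per patient

# Signed contribution of each feature to the first patient's predicted risk.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Ranking each patient's features by their signed SHAP values is one way to surface, as the abstract suggests, the key features driving an individual veteran's predicted risk.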
ABSTRACT
Image-based simulation, the use of 3D images to calculate physical quantities, relies on image segmentation for geometry creation. However, this process introduces image segmentation uncertainty, because different segmentation tools (both manual and machine-learning-based) each produce a unique and valid segmentation. First, we demonstrate that these variations propagate into the physics simulations, compromising the resulting physics quantities. Second, we propose a general framework for rapidly quantifying segmentation uncertainty. By creating and sampling segmentation uncertainty probability maps, we systematically and objectively construct uncertainty distributions of the physics quantities. We show that physics quantity uncertainty distributions can follow a normal distribution but that, in more complicated physics simulations, the resulting uncertainty distribution can be surprisingly nontrivial. We establish that bounding segmentation uncertainty can fail in these nontrivial situations. While our work does not eliminate segmentation uncertainty, it improves simulation credibility by making visible the previously unrecognized segmentation uncertainty that plagues image-based simulation.
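The framework is described only at a high level above. A minimal sketch of the sampling idea follows, under strong simplifying assumptions: a toy 2D probability map, independent per-pixel Bernoulli sampling, and segmented area as a stand-in for the physics quantity (a real pipeline would sample spatially correlated segmentations and run an actual simulation per sample).

```python
# Minimal sketch: Monte Carlo propagation of segmentation uncertainty.
# The probability map and "physics quantity" are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D probability map: a soft-edged disk, each pixel's value being the
# probability that the pixel belongs to the segmented geometry.
yy, xx = np.mgrid[0:64, 0:64]
dist = np.hypot(xx - 32, yy - 32)
prob_map = np.clip(1.0 - (dist - 12) / 8.0, 0.0, 1.0)

def physics_quantity(segmentation, pixel_area=1.0):
    """Toy surrogate for a simulation output: area of the segmented region."""
    return segmentation.sum() * pixel_area

# Draw many plausible segmentations from the probability map and push each
# through the (surrogate) simulation to build an uncertainty distribution.
n_samples = 1000
samples = np.array([
    physics_quantity(rng.random(prob_map.shape) < prob_map)
    for _ in range(n_samples)
])

print(f"mean = {samples.mean():.1f}, std = {samples.std():.1f}")
print(f"95% interval = [{np.percentile(samples, 2.5):.1f}, "
      f"{np.percentile(samples, 97.5):.1f}]")
```

For an additive quantity like area this distribution tends toward normal, consistent with the abstract; the nontrivial distributions it describes arise when the simulation responds nonlinearly to segmentation changes, which independent per-pixel sampling like the above cannot by itself capture.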