ABSTRACT
Microstructures of additively manufactured metal parts are crucial since they determine the mechanical properties. The evolution of these microstructures during layer-wise printing is complex due to continuous re-melting and reheating effects. The current approach to studying this phenomenon relies on time-consuming numerical models such as finite element analysis, owing to the lack of effective sub-surface temperature measurement techniques. Thanks to its miniature footprint, the chirped fiber Bragg grating, a unique type of fiber optic sensor, has great potential to achieve this goal. However, with traditional demodulation methods its spatial resolution is limited to the millimeter level. In addition, embedding it during laser additive manufacturing is challenging since the sensor is fragile. This paper implements a machine learning-assisted approach to demodulate the optical signal into a thermal distribution, significantly improving the spatial resolution from the original millimeter level to 28.8 µm. A sensor embedding technique is also developed to minimize damage to the sensor and the part while ensuring close contact. The case study demonstrates the excellent performance of the proposed sensor in measuring sharp thermal gradients and fast cooling rates during laser powder bed fusion. The developed sensor has promising potential for studying the fundamental physics of metal additive manufacturing processes.
ABSTRACT
Differential equations are fundamental in modeling numerous physical systems, including thermal, manufacturing, and meteorological systems. Traditionally, numerical methods are used to approximate the solutions of complex systems modeled by differential equations. With the advent of modern deep learning, Physics-informed Neural Networks (PINNs) are emerging as a new paradigm for solving differential equations with a pseudo-closed-form solution. Unlike numerical methods, PINNs can solve differential equations mesh-free, integrate experimental data, and tackle challenging inverse problems. However, one limitation of PINNs is poor training caused by activation functions designed primarily for purely data-driven problems. This work proposes a scalable tanh-based activation function for PINNs to improve the learning of solutions of differential equations. The proposed Self-scalable tanh (Stan) function is smooth, non-saturating, and has a trainable parameter. It allows gradients to flow easily and enables systematic scaling of the input-output mapping during training. Various forward problems of solving differential equations and inverse problems of finding the parameters of differential equations demonstrate that the Stan activation function achieves better training and more accurate predictions than the existing activation functions for PINNs in the literature.
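As a minimal sketch of the activation described above, the Stan function is commonly given as Stan(x) = tanh(x) + β·x·tanh(x), where β is a trainable (per-neuron) parameter; the NumPy version below fixes β as a plain scalar purely for illustration, so the trainable aspect is not modeled here:

```python
import numpy as np

def stan(x, beta=1.0):
    """Self-scalable tanh: tanh(x) + beta * x * tanh(x).

    In a PINN, beta would be a trainable parameter (often one per
    neuron); here it is a fixed scalar for illustration.
    """
    t = np.tanh(x)
    return t + beta * x * t
```

Note the non-saturating behavior: for large positive x, tanh(x) approaches 1, so Stan(x) grows roughly like 1 + β·x instead of flattening out, which keeps gradients from vanishing; with β = 0 the function reduces to the ordinary tanh.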