Temporal focusing multiphoton microscopy with cross-modality multi-stage 3D U-Net for fast and clear bioimaging.
Hu, Yvonne Yuling; Hsu, Chia-Wei; Tseng, Yu-Hao; Lin, Chun-Yu; Chiang, Hsueh-Cheng; Chiang, Ann-Shyn; Chang, Shin-Tsu; Chen, Shean-Jen.
Affiliation
  • Hu YY; Department of Photonics, National Cheng Kung University, Tainan 701, Taiwan.
  • Hsu CW; College of Photonics, National Yang Ming Chiao Tung University, Tainan 711, Taiwan.
  • Tseng YH; College of Photonics, National Yang Ming Chiao Tung University, Tainan 711, Taiwan.
  • Lin CY; College of Photonics, National Yang Ming Chiao Tung University, Tainan 711, Taiwan.
  • Chiang HC; Department of Pharmacology, National Cheng Kung University, Tainan 701, Taiwan.
  • Chiang AS; Brain Research Center, National Tsing Hua University, Hsinchu 300, Taiwan.
  • Chang ST; Department of Physical Medicine and Rehabilitation, Kaohsiung Veterans General Hospital, Kaohsiung 813, Taiwan.
  • Chen SJ; Department of Physical Medicine and Rehabilitation, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan.
Biomed Opt Express ; 14(6): 2478-2491, 2023 Jun 01.
Article in En | MEDLINE | ID: mdl-37342698
Temporal focusing multiphoton excitation microscopy (TFMPEM) enables fast widefield biotissue imaging with optical sectioning. However, under widefield illumination, the imaging performance is severely degraded by scattering effects, which induce signal crosstalk and a low signal-to-noise ratio in the detection process, particularly when imaging deep layers. Accordingly, the present study proposes a cross-modality learning-based neural network method for performing image registration and restoration. In the proposed method, the point-scanning multiphoton excitation microscopy images are registered to the TFMPEM images by an unsupervised U-Net model based on a global linear affine transformation process and a local VoxelMorph registration network. A multi-stage 3D U-Net model with a cross-stage feature fusion mechanism and self-supervised attention module is then used to restore volumetric TFMPEM images of in-vitro fixed samples. The experimental results obtained for in-vitro Drosophila mushroom body (MB) images show that the proposed method improves the structural similarity index measures (SSIMs) of the TFMPEM images acquired with a 10-ms exposure time from 0.38 to 0.93 and 0.80 for shallow- and deep-layer images, respectively. A 3D U-Net model, pretrained on in-vitro images, is further trained using a small in-vivo MB image dataset. The transfer learning network improves the SSIMs of in-vivo Drosophila MB images captured with a 1-ms exposure time to 0.97 and 0.94 for shallow and deep layers, respectively.
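
The restoration stage described above is built from 3D U-Net stages. As a rough illustration only, and not the authors' implementation, the sketch below shows a single-stage 3D U-Net in PyTorch that maps a noisy TFMPEM volume to a restored volume; the multi-stage cascade, cross-stage feature fusion, and self-supervised attention module described in the paper are omitted, and the volume size in the usage example is hypothetical.

```python
# Minimal sketch (assumption: not the paper's code): one 3D U-Net stage in PyTorch.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Usage: restore a (batch, channel, depth, height, width) volume.
if __name__ == "__main__":
    net = UNet3D()
    noisy = torch.randn(1, 1, 32, 64, 64)   # hypothetical volume size
    restored = net(noisy)
    print(restored.shape)                   # torch.Size([1, 1, 32, 64, 64])
```

In the paper's pipeline, several such stages are cascaded and supervised against registered point-scanning multiphoton images, with restoration quality reported via SSIM.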

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Biomed Opt Express Year: 2023 Document type: Article Affiliation country: Taiwan Country of publication: United States