Spatial redundancy transformer for self-supervised fluorescence image denoising.

Xinyang Li, Xiaowan Hu, Xingye Chen, Jiaqi Fan, Zhifeng Zhao, Jiamin Wu, Haoqian Wang, Qionghai Dai
Author Information
  1. Xinyang Li: Department of Automation, Tsinghua University, Beijing, China.
  2. Xiaowan Hu: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China.
  3. Xingye Chen: Department of Automation, Tsinghua University, Beijing, China.
  4. Jiaqi Fan: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China.
  5. Zhifeng Zhao: Department of Automation, Tsinghua University, Beijing, China.
  6. Jiamin Wu: Department of Automation, Tsinghua University, Beijing, China. wujiamin@tsinghua.edu.cn
  7. Haoqian Wang: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China. wanghaoqian@tsinghua.edu.cn
  8. Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China. qhdai@tsinghua.edu.cn

Abstract

Fluorescence imaging with high signal-to-noise ratios has become the foundation of accurate visualization and analysis of biological phenomena. However, the inevitable noise poses a formidable challenge to imaging sensitivity. Here we present the spatial redundancy denoising transformer (SRDTrans) to remove noise from fluorescence images in a self-supervised manner. First, we propose a sampling strategy based on spatial redundancy to extract adjacent orthogonal training pairs, which eliminates the dependence on high imaging speed. Second, we design a lightweight spatiotemporal transformer architecture that captures long-range dependencies and high-resolution features at low computational cost. SRDTrans can restore high-frequency information without producing oversmoothed structures or distorted fluorescence traces. Finally, we demonstrate the state-of-the-art denoising performance of SRDTrans on single-molecule localization microscopy and two-photon volumetric calcium imaging. SRDTrans makes no assumptions about the imaging process or the sample and can therefore be easily extended to various imaging modalities and biological applications.
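
The abstract summarizes two components: a spatial-redundancy sampling scheme that builds self-supervised training pairs from a single noisy recording, and a lightweight spatiotemporal transformer trained on those pairs. The exact sampling scheme is not spelled out here; the sketch below is a minimal, illustrative Python example, assuming an interleaved-pixel scheme in the spirit of Neighbor2Neighbor (ref. 22), in which two sub-images drawn from adjacent pixel columns of the same noisy frames share signal but carry independent noise and can therefore serve as input and target. All function and parameter names are hypothetical; the authors' actual implementation is the code release in ref. 61.

```python
import numpy as np
import torch

def spatial_redundancy_pair(stack, patch=128):
    """Draw one illustrative self-supervised training pair from a noisy stack.

    Two sub-images are taken from interleaved pixel columns of the same
    random crop: neighbouring pixels share the underlying signal (spatial
    redundancy) but carry independent noise realizations, so one view can
    act as the input and the other as the target, without relying on
    temporally adjacent frames.
    """
    t, h, w = stack.shape
    top = np.random.randint(0, h - patch + 1)
    left = np.random.randint(0, w - 2 * patch + 1)
    crop = stack[:, top:top + patch, left:left + 2 * patch]
    view_a = crop[:, :, 0::2]   # even columns -> network input
    view_b = crop[:, :, 1::2]   # odd columns  -> training target
    return torch.from_numpy(view_a.copy()), torch.from_numpy(view_b.copy())

# Toy usage: a synthetic Poisson-noise stack of 64 frames, 256 x 256 pixels.
noisy_stack = np.random.poisson(5.0, size=(64, 256, 256)).astype(np.float32)
inp, tgt = spatial_redundancy_pair(noisy_stack)
print(inp.shape, tgt.shape)  # torch.Size([64, 128, 128]) for both views
```

In this reading, such pairs would be fed to the spatiotemporal transformer with a standard pixel-wise loss between the denoised input view and the target view; because both views come from the same frames, the procedure would not depend on the high imaging speed that purely temporal self-supervised schemes require.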

References

  1. Royer, L. A. et al. Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nat. Biotechnol. 34, 1267–1278 (2016). [DOI: 10.1038/nbt.3708]
  2. Fan, J. et al. Video-rate imaging of biological dynamics at centimetre scale and micrometre resolution. Nat. Photon. 13, 809–816 (2019). [DOI: 10.1038/s41566-019-0474-7]
  3. Balzarotti, F. et al. Nanometer resolution imaging and tracking of fluorescent molecules with minimal photon fluxes. Science 355, 606–612 (2017). [DOI: 10.1126/science.aak9913]
  4. Wu, J. et al. Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale. Cell 184, 3318–3332 (2021). [DOI: 10.1016/j.cell.2021.04.029]
  5. Verweij, F. J. et al. The power of imaging to understand extracellular vesicle biology in vivo. Nat. Methods 18, 1013–1026 (2021). [DOI: 10.1038/s41592-021-01206-3]
  6. Li, X. et al. Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat. Biotechnol. 41, 282–292 (2023).
  7. Meiniel, W., Olivo-Marin, J. C. & Angelini, E. D. Denoising of microscopy images: a review of the state-of-the-art, and a new sparsity-based method. IEEE Trans. Image Process. 27, 3842–3856 (2018). [DOI: 10.1109/TIP.2018.2819821]
  8. Dabov, K., Foi, A., Katkovnik, V. & Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16, 2080–2095 (2007). [DOI: 10.1109/TIP.2007.901238]
  9. Zhang, K. et al. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 26, 3142–3155 (2017). [DOI: 10.1109/TIP.2017.2662206]
  10. Tai, Y., Yang, J., Liu, X. & Xu, C. MemNet: a persistent memory network for image restoration. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 4539–4547 (IEEE, 2017).
  11. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018). [DOI: 10.1038/s41592-018-0216-7]
  12. Belthangady, C. & Royer, L. A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat. Methods 16, 1215–1225 (2019). [DOI: 10.1038/s41592-019-0458-z]
  13. Chen, J. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021). [DOI: 10.1038/s41592-021-01155-x]
  14. Chaudhary, S., Moon, S. & Lu, H. Fast, efficient, and accurate neuro-imaging denoising via supervised deep learning. Nat. Commun. 13, 5165 (2022). [DOI: 10.1038/s41467-022-32886-w]
  15. Wang, Z., Xie, Y. & Ji, S. Global voxel transformer networks for augmented microscopy. Nat. Mach. Intell. 3, 161–171 (2021). [DOI: 10.1038/s42256-020-00283-x]
  16. Lehtinen, J. et al. Noise2Noise: learning image restoration without clean data. In Proc. 35th International Conference on Machine Learning (eds Dy, J. & Krause, A.) 2965–2974 (PMLR, 2018).
  17. Lecoq, J. et al. Removing independent noise in systems neuroscience data using DeepInterpolation. Nat. Methods 18, 1401–1408 (2021). [DOI: 10.1038/s41592-021-01285-2]
  18. Li, X. et al. Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising. Nat. Methods 18, 1395–1400 (2021). [DOI: 10.1038/s41592-021-01225-0]
  19. Krull, A., Buchholz, T.-O. & Jug, F. Noise2Void—learning denoising from single noisy images. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 2129–2137 (IEEE, 2019).
  20. Batson, J. & Royer, L. Noise2Self: blind denoising by self-supervision. In Proc. 36th International Conference on Machine Learning 524–533 (PMLR, 2019).
  21. Krull, A., Vičar, T., Prakash, M., Lalit, M. & Jug, F. Probabilistic noise2void: unsupervised content-aware denoising. Front. Comput. Sci. https://doi.org/10.3389/fcomp.2020.00005 (2020).
  22. Huang, T. et al. Neighbor2Neighbor: self-supervised denoising from single noisy images. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 14781–14790 (IEEE, 2021).
  23. Lequyer, J. et al. A fast blind zero-shot denoiser. Nat. Mach. Intell. 4, 953–963 (2022). [DOI: 10.1038/s42256-022-00547-8]
  24. Luo, W. et al. Understanding the effective receptive field in deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 29, 4905–4913 (2016).
  25. Rahaman, N. et al. On the spectral bias of neural networks. In Proc. 36th International Conference on Machine Learning 5301–5310 (PMLR, 2019).
  26. Lelek, M. et al. Single-molecule localization microscopy. Nat. Rev. Methods Prim. 1, 39 (2021). [DOI: 10.1038/s43586-021-00038-x]
  27. Liu, Z. et al. Swin transformer: hierarchical vision transformer using shifted windows. In Proc. IEEE/CVF International Conference on Computer Vision 10012–10022 (IEEE, 2021).
  28. Zhou, H. et al. nnFormer: interleaved transformer for volumetric segmentation. Preprint at https://arxiv.org/abs/2109.03201 (2021).
  29. Hatamizadeh, A. et al. UNETR: transformers for 3D medical image segmentation. In Proc. IEEE/CVF Winter Conference on Applications of Computer Vision 574–584 (IEEE, 2022).
  30. Hatamizadeh, A. et al. Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images. In International MICCAI Brainlesion Workshop (eds Crimi, A. et al.) 272–284 (Springer, 2021).
  31. Çiçek, Ö. et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016 (eds Ourselin, S. et al.) 424–432 (Springer, 2016).
  32. Taylor, M. A. & Bowen, W. P. Quantum metrology and its application in biology. Phys. Rep. 615, 1–59 (2016). [DOI: 10.1016/j.physrep.2015.12.002]
  33. Nagata, T. et al. Beating the standard quantum limit with four-entangled photons. Science 316, 726–729 (2007). [DOI: 10.1126/science.1138007]
  34. Rust, M., Bates, M. & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–796 (2006). [DOI: 10.1038/nmeth929]
  35. Nehme, E., Weiss, L. E., Michaeli, T. & Shechtman, Y. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458–464 (2018). [DOI: 10.1364/OPTICA.5.000458]
  36. Sinkó, J. et al. TestSTORM: simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy. Biomed. Opt. Express 5, 778–787 (2014). [DOI: 10.1364/BOE.5.000778]
  37. Ovesný, M. et al. ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging. Bioinformatics 30, 2389–2390 (2014). [DOI: 10.1093/bioinformatics/btu202]
  38. Sage, D. et al. Quantitative evaluation of software packages for single-molecule localization microscopy. Nat. Methods 12, 717–724 (2015). [DOI: 10.1038/nmeth.3442]
  39. Sage, D. et al. Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software. Nat. Methods 16, 387–395 (2019). [DOI: 10.1038/s41592-019-0364-4]
  40. Nieuwenhuizen, R. et al. Measuring image resolution in optical nanoscopy. Nat. Methods 10, 557–562 (2013). [DOI: 10.1038/nmeth.2448]
  41. Descloux, A., Grußmayer, K. S. & Radenovic, A. Parameter-free image resolution estimation based on decorrelation analysis. Nat. Methods 16, 918–924 (2019). [DOI: 10.1038/s41592-019-0515-7]
  42. Ouyang, W. et al. ShareLoc—an open platform for sharing localization microscopy data. Nat. Methods 19, 1331–1333 (2022). [DOI: 10.1038/s41592-022-01659-0]
  43. Jones, S. et al. Fast, three-dimensional super-resolution imaging of live cells. Nat. Methods 8, 499–505 (2011). [DOI: 10.1038/nmeth.1605]
  44. Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. J. Neurosci. Methods 358, 109173 (2021). [DOI: 10.1016/j.jneumeth.2021.109173]
  45. Chen, T. W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013). [DOI: 10.1038/nature12354]
  46. Zhao, Z. et al. Two-photon synthetic aperture microscopy for minimally invasive fast 3D imaging of native subcellular behaviors in deep tissue. Cell 186, 2475–2491 (2023). [DOI: 10.1016/j.cell.2023.04.016]
  47. Platisa, J. et al. High-speed low-light in vivo two-photon voltage imaging of large neuronal populations. Nat. Methods 20, 1095–1103 (2023). [DOI: 10.1038/s41592-023-01820-3]
  48. Zhao, W. et al. Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy. Nat. Biotechnol. 40, 606–617 (2022). [DOI: 10.1038/s41587-021-01092-2]
  49. Dahmardeh, M. et al. Self-supervised machine learning pushes the sensitivity limit in label-free detection of single proteins below 10 kDa. Nat. Methods 20, 442–447 (2023). [DOI: 10.1038/s41592-023-01778-2]
  50. Li, X. et al. Unsupervised content-preserving transformation for optical microscopy. Light. Sci. Appl. 10, 44 (2021). [DOI: 10.1038/s41377-021-00484-y]
  51. Qiao, C. et al. Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes. Nat. Biotechnol. 41, 367–377 (2023). [DOI: 10.1038/s41587-022-01471-3]
  52. Zhang, Y. et al. Fast and sensitive GCaMP calcium indicators for imaging neural populations. Nature 615, 884–891 (2023). [DOI: 10.1038/s41586-023-05828-9]
  53. Liu, Z. et al. Sustained deep-tissue voltage recording using a fast indicator evolved for two-photon microscopy. Cell 185, 3408–3425 (2022). [DOI: 10.1016/j.cell.2022.07.013]
  54. Jimenez, A., Friedl, K. & Leterrier, C. About samples, giving examples: optimized single molecule localization microscopy. Methods 174, 100–114 (2020). [DOI: 10.1016/j.ymeth.2019.05.008]
  55. Smith, M. B. et al. Segmentation and tracking of cytoskeletal filaments using open active contours. Cytoskeleton 67, 693–705 (2010). [DOI: 10.1002/cm.20481]
  56. LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998). [DOI: 10.1109/5.726791]
  57. Li, X. et al. SRDTrans dataset: simulated calcium imaging data sampled at 30 Hz under different SNRs. Zenodo https://doi.org/10.5281/zenodo.8332083 (2023).
  58. Li, X. et al. SRDTrans dataset: simulated calcium imaging data at different imaging speeds. Zenodo https://doi.org/10.5281/zenodo.7812544 (2023).
  59. Li, X. et al. SRDTrans dataset: simulated SMLM data under different SNRs. Zenodo https://doi.org/10.5281/zenodo.7812589 (2023).
  60. Li, X. et al. SRDTrans dataset: experimentally obtained SMLM data. Zenodo https://doi.org/10.5281/zenodo.7813184 (2023).
  61. Li, X. et al. Code for SRDTrans. Zenodo https://doi.org/10.5281/zenodo.10023889 (2023).

Grants

  1. 62222508 / National Natural Science Foundation of China

MeSH Terms

Humans
Calcium, Dietary
Electric Power Supplies
Optical Imaging
Photons
Self-Management

Chemicals

Calcium, Dietary
