Challenges of intracellular visualization using virtual and augmented reality.
Cesar Augusto Valades-Cruz, Ludovic Leconte, Gwendal Fouche, Thomas Blanc, Nathan Van Hille, Kevin Fournier, Tao Laurent, Benjamin Gallean, Francois Deslandes, Bassam Hajj, Emmanuel Faure, Ferran Argelaguet, Alain Trubuil, Tobias Isenberg, Jean-Baptiste Masson, Jean Salamero, Charles Kervrann
Author Information
Cesar Augusto Valades-Cruz: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France.
Ludovic Leconte: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France.
Gwendal Fouche: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France.
Thomas Blanc: Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France.
Nathan Van Hille: CNRS, Inria, LISN, Université Paris-Saclay, Orsay, France.
Kevin Fournier: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France.
Tao Laurent: LIRMM, Université Montpellier, CNRS, Montpellier, France.
Benjamin Gallean: LIRMM, Université Montpellier, CNRS, Montpellier, France.
Francois Deslandes: MaIAGE, INRAE, Université Paris-Saclay, Jouy-en-Josas, France.
Bassam Hajj: Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France.
Emmanuel Faure: LIRMM, Université Montpellier, CNRS, Montpellier, France.
Ferran Argelaguet: Inria, CNRS, IRISA, University of Rennes, Rennes, France.
Alain Trubuil: MaIAGE, INRAE, Université Paris-Saclay, Jouy-en-Josas, France.
Tobias Isenberg: CNRS, Inria, LISN, Université Paris-Saclay, Orsay, France.
Jean-Baptiste Masson: Decision and Bayesian Computation, Neuroscience and Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Université Paris Cité, Paris, France.
Jean Salamero: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France.
Charles Kervrann: SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France.
Microscopy image observation is commonly performed on 2D screens, which limits the human capacity to grasp volumetric, complex, and discrete biological dynamics. With the massive production of multidimensional images (3D + time, multi-channel) and derived images (e.g., restored images, segmentation maps, and object tracks), scientists need appropriate visualization and navigation methods to fully apprehend the information they contain. New modes of visualization have emerged, including virtual reality (VR) and augmented reality (AR) approaches, which should enable more accurate analysis and exploration of large time series of volumetric images, such as those produced by the latest 3D + time fluorescence microscopy. These approaches include integrated algorithms that allow researchers to interactively explore complex spatiotemporal objects at the scale of single cells or multicellular systems, in near real time. In practice, however, immersing the user within 3D + time microscopy data represents both a paradigm shift in human-image interaction and an acculturation challenge for the community concerned. To promote broader adoption of these approaches by biologists, further dialogue is needed between the bioimaging community and VR/AR developers.