Chris Kang, Jasmine A Moore, Samuel Robertson, Matthias Wilms, Emma K Towlson, Nils D Forkert
Artificial neural networks (ANNs) were originally modeled after their biological counterparts, but have since conceptually diverged in many ways. The resulting network architectures are not well understood, and we lack the quantitative tools to characterize their structures. Network science provides an ideal mathematical framework with which to characterize systems of interacting components, and has transformed our understanding across many domains, including the mammalian brain. Yet, little has been done to bring network science to ANNs. In this work, we propose tools that leverage and adapt network science methods to measure both global- and local-level characteristics of ANNs. Specifically, we focus on the structures of efficient multilayer perceptrons as a case study; these are sparse and systematically pruned such that they share many characteristics with real-world networks. We use adapted network science metrics to show that the pruning process leads to the emergence of a spanning subnetwork (lottery ticket multilayer perceptrons) with complex architecture. This complex network exhibits global and local characteristics, including heavy-tailed nodal degree distributions and dominant weighted pathways, that mirror patterns observed in human neuronal connectivity. Furthermore, alterations in network metrics precede catastrophic decay in performance as the network is heavily pruned. This network science-driven approach serves as a valuable tool to establish and improve the biological fidelity of artificial neural networks, increase their interpretability, and assess their performance.

Significance Statement

Artificial neural network architectures have become increasingly complex, often diverging from their biological counterparts in many ways.
To design plausible "brain-like" architectures, whether to advance neuroscience research or to improve explainability, it is essential that these networks optimally resemble their biological counterparts. Network science tools offer valuable information about interconnected systems, including the brain, but have not attracted much attention for analyzing artificial neural networks. Here, we present the significance of our work:

- We adapt network science tools to analyze the structural characteristics of artificial neural networks.
- We demonstrate that organizational patterns similar to those observed in the mammalian brain emerge through the pruning process alone. The convergence on these complex network features in both artificial neural networks and biological brain networks is compelling evidence for their optimality in information processing capabilities.
- Our approach is a significant first step towards a network science-based understanding of artificial neural networks, and has the potential to shed light on the biological fidelity of artificial neural networks.
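To make the kind of adapted metric discussed above concrete, the nodal degree distribution of a pruned multilayer perceptron can be obtained by treating zeroed weights as absent edges in a directed graph. The sketch below is an illustrative assumption, not the authors' implementation; the `layer_degrees` helper, the magnitude threshold, and the toy network sizes are all hypothetical:

```python
import numpy as np

def layer_degrees(weights, threshold=0.0):
    """Per-unit degrees of an MLP viewed as a directed graph.

    `weights` is a list of (n_in, n_out) weight matrices; weights with
    magnitude <= threshold are treated as pruned (absent) edges.
    Returns one integer degree array per layer of units.
    """
    mask = [np.abs(W) > threshold for W in weights]
    n_layers = len(weights)
    degrees = []
    # Units in layer l draw out-degree from W[l] and in-degree from W[l-1].
    for l in range(n_layers + 1):
        out_deg = (mask[l].sum(axis=1) if l < n_layers
                   else np.zeros(mask[-1].shape[1], dtype=int))
        in_deg = (mask[l - 1].sum(axis=0) if l > 0
                  else np.zeros(mask[0].shape[0], dtype=int))
        degrees.append(in_deg + out_deg)
    return degrees

# Toy pruned 3-4-2 MLP: zeros stand in for pruned connections.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4)) * (rng.random((3, 4)) > 0.5)
W2 = rng.normal(size=(4, 2)) * (rng.random((4, 2)) > 0.5)
degs = layer_degrees([W1, W2])
```

On a real pruned network, pooling these degree arrays and inspecting their empirical distribution (e.g., on a log-log histogram) is one simple way to check for the heavy-tailed pattern the abstract describes.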