Attention-based stackable graph convolutional network for multi-view learning.

Zhiyong Xu, Weibin Chen, Ying Zou, Zihan Fang, Shiping Wang
Author Information
  1. Zhiyong Xu: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China; Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou 350108, China. Electronic address: teddy_xu2@163.com.
  2. Weibin Chen: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China; Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou 350108, China. Electronic address: chenwb@fzu.edu.cn.
  3. Ying Zou: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China; Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou 350108, China. Electronic address: zouying5419@163.com.
  4. Zihan Fang: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China; Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou 350108, China. Electronic address: fzihan11@163.com.
  5. Shiping Wang: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China; Key Laboratory of Intelligent Metro, Fujian Province University, Fuzhou 350108, China. Electronic address: shipingwangphd@163.com.

Abstract

In multi-view learning, graph-based methods such as the Graph Convolutional Network (GCN) are extensively researched due to their effective graph processing capabilities. However, most GCN-based methods require complex preliminary operations such as sparsification, which may bring additional computation costs and training difficulties. Additionally, as the number of stacked layers increases in most GCNs, the over-smoothing problem arises, resulting in ineffective utilization of GCN capabilities. In this paper, we propose an attention-based stackable graph convolutional network that captures consistency across views and combines an attention mechanism with the powerful aggregation capability of GCN to effectively mitigate over-smoothing. Specifically, we introduce node self-attention to establish dynamic connections between nodes and generate view-specific representations. To maintain cross-view consistency, a data-driven approach is devised to assign attention weights to views, forming a common representation. Finally, based on residual connections, we apply an attention mechanism to the original projection features to generate layer-specific complementarity, which compensates for the information loss during graph convolution. Comprehensive experimental results demonstrate that the proposed method outperforms other state-of-the-art methods on multi-view semi-supervised tasks.
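The pipeline described in the abstract — view-specific node self-attention, data-driven view weighting into a common representation, and a residual connection back to the projected features — can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation; the scoring function for view weights and the residual coefficient are hypothetical choices made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def node_self_attention(H):
    """Build dynamic node-node connections via scaled dot-product self-attention,
    then aggregate -- a stand-in for the paper's node self-attention module."""
    d = H.shape[1]
    A = softmax(H @ H.T / np.sqrt(d), axis=1)  # row-stochastic attention "graph"
    return A @ H                               # aggregate -> view-specific representation

def fuse_views(reprs):
    """Data-driven view weighting into a common representation.
    The score (mean feature norm per view) is an illustrative proxy,
    not the scoring rule used in the paper."""
    scores = np.array([np.linalg.norm(Z, axis=1).mean() for Z in reprs])
    w = softmax(scores)                        # attention weights over views
    return sum(wi * Z for wi, Z in zip(w, reprs)), w

rng = np.random.default_rng(0)
views = [rng.normal(size=(6, 4)) for _ in range(3)]   # 3 views, 6 nodes, 4 features
reprs = [node_self_attention(X) for X in views]       # view-specific representations
Z, w = fuse_views(reprs)                              # common representation
Z = Z + 0.5 * views[0]  # residual back to original (projected) features; 0.5 is arbitrary
```

Stacking such layers while re-injecting the original projected features at each layer is what, per the abstract, compensates for information loss during graph convolution and mitigates over-smoothing.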

MeSH Terms

Neural Networks, Computer
Algorithms
Attention
Humans
Deep Learning

