A Novel Paradigm for Neural Computation: X-Net with Learnable Neurons and Adaptable Structure
CoRR (2024)
Abstract
Artificial neural networks (ANNs) have permeated various disciplinary
domains, ranging from bioinformatics to financial analytics, where their
application has become an indispensable facet of contemporary scientific
research endeavors. However, traditional neural networks suffer from two
inherent limitations rooted in their relatively fixed structures and
activation functions: (1) the activation functions are of a single, fixed
type, which limits the representational power of individual units, so very
complex networks are often needed to solve even simple problems; (2) the
network structure is not adaptive, which easily leads to a redundant or
insufficient network structure. To address these issues, this study proposes a novel
neural network called X-Net. By utilizing our designed Alternating
Backpropagation mechanism, X-Net dynamically selects appropriate activation
functions based on derivative information during training to enhance the
network's representation capability for specific tasks. Simultaneously, it
accurately adjusts the network structure at the neuron level to accommodate
tasks of varying complexities and reduce computational costs. Through a series
of experiments, we demonstrate the dual advantages of X-Net in terms of
reducing model size and improving representation power. In terms of parameter
count, X-Net uses on average only 3% as many parameters as the baselines, and
as few as 1.4% on some tasks. In terms of representation ability, X-Net
achieves an average R^2 of 0.985 on fitting tasks by optimizing the
activation functions alone, without introducing any additional parameters.
Finally, we also evaluated X-Net's ability to aid scientific discovery on
data from disciplines including social science, energy, environment, and
aerospace, obtaining concise and accurate results.
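The paper's details of Alternating Backpropagation are not given in this abstract, but the core idea — alternating between optimizing a neuron's weights and re-selecting its activation function from a candidate library based on which choice lowers the loss — can be sketched as follows. All names, the candidate set, and the use of numerical gradients are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the alternating idea: (a) a gradient step on the
# weights, then (b) greedily re-selecting the neuron's activation from a
# small candidate library by which choice yields the lowest loss.
import numpy as np

# Candidate activation library (an illustrative assumption).
CANDIDATES = {
    "identity": lambda z: z,
    "tanh": np.tanh,
    "sin": np.sin,
    "square": lambda z: z * z,
}

# Toy fitting task: approximate y = sin(3x) with one neuron w * f(a*x + b).
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x)

a, b, w = 1.0, 0.0, 1.0   # neuron parameters
act = "identity"          # currently selected activation

def loss(f, a, b, w):
    """Mean squared error of the single-neuron model under activation f."""
    return float(np.mean((w * f(a * x + b) - y) ** 2))

lr, eps = 0.05, 1e-5
for step in range(2000):
    # (a) Weight step: central-difference gradients keep the sketch
    # dependency-free (a real implementation would use autodiff).
    f = CANDIDATES[act]
    grads = [
        (loss(f, a + eps, b, w) - loss(f, a - eps, b, w)) / (2 * eps),
        (loss(f, a, b + eps, w) - loss(f, a, b - eps, w)) / (2 * eps),
        (loss(f, a, b, w + eps) - loss(f, a, b, w - eps)) / (2 * eps),
    ]
    a, b, w = a - lr * grads[0], b - lr * grads[1], w - lr * grads[2]
    # (b) Activation step: pick the candidate with the lowest current loss.
    act = min(CANDIDATES, key=lambda name: loss(CANDIDATES[name], a, b, w))

print(act, loss(CANDIDATES[act], a, b, w))
```

Because the activation step adds no parameters, this mirrors the abstract's claim that representation power can be improved by optimizing the activation functions alone; the actual X-Net additionally adjusts the network structure at the neuron level.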