A Biologically Plausible Neural Network for Slow Feature Analysis

NeurIPS 2020


Abstract

Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting…

Introduction
  • Unsupervised learning of meaningful latent features from noisy, high-dimensional data is a fundamental problem for both machine learning and brain function.
  • The relevant features in an environment vary on relatively slow timescales when compared to noisy sensory data.
  • Temporal slowness has been proposed as a computational principle for extracting relevant latent features [8, 19, 31].
  • When trained on natural image sequences, SFA extracts features that resemble response properties of complex cells in early visual processing [2] (a minimal sketch of the underlying computation follows this list).
  • Hierarchical networks of SFA trained on simulated rat visual streams learn representations of position and orientation similar to representations encoded in the hippocampus [9].
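Below is a minimal sketch, in NumPy, of what linear SFA computes in the offline (batch) setting: the projections of a multivariate time series that vary most slowly while remaining zero-mean, unit-variance, and mutually uncorrelated. The function name and toy data are illustrative assumptions; this is not the authors' code (their online Bio-SFA algorithm reaches a comparable solution with local synaptic updates).

```python
import numpy as np
from scipy.linalg import eigh

def batch_linear_sfa(X, k):
    """Offline linear SFA on a signal X of shape (channels, time).

    Returns the k slowest projections: directions v that minimize the variance
    of the temporal differences of v^T x, subject to the outputs being
    zero-mean, unit-variance, and mutually uncorrelated.
    """
    X = X - X.mean(axis=1, keepdims=True)       # center the signal
    dX = np.diff(X, axis=1)                     # temporal differences
    A = dX @ dX.T / dX.shape[1]                 # covariance of the differences
    B = X @ X.T / X.shape[1]                    # covariance of the signal
    # Generalized eigenvectors of (A, B) with the smallest eigenvalues are the
    # slowest unit-variance, mutually uncorrelated directions.
    eigvals, V = eigh(A, B)
    V = V[:, :k]
    return V.T @ X, V, eigvals[:k]              # slow features, filters, slowness values

# Toy example: a slow sinusoid buried among fast noise channels is recovered
# as the slowest feature.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 4000)
slow_signal = np.sin(2 * np.pi * 0.1 * t)
X = np.vstack([slow_signal + 0.1 * rng.standard_normal(t.size),
               rng.standard_normal(t.size),
               rng.standard_normal(t.size)])
Y, V, slowness = batch_linear_sfa(X, k=1)
```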
Highlights
  • Unsupervised learning of meaningful latent features from noisy, high-dimensional data is a fundamental problem for both machine learning and brain function
  • Hierarchical networks of Slow Feature Analysis (SFA) trained on simulated rat visual streams learn representations of position and orientation similar to representations encoded in the hippocampus [9]
  • To derive an SFA network, we identify an objective function whose optimization leads to an online algorithm that can be implemented in a biologically plausible network
  • Our network includes direct lateral inhibitory synapses between excitatory neurons, whereas inhibition is typically mediated by interneurons in biological networks (a schematic sketch of how lateral inhibition can compute the network output follows this list)
  • The synaptic update rules require both the pre- and post-synaptic neurons to store slow variables; signal frequencies in dendrites are slower than in axons, suggesting that it is more likely for slow variables to be stored in the post-synaptic neuron, not the pre-synaptic neuron
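The Methods section below notes that the network output takes the form Y = M⁻¹WX, with feedforward weights W and lateral weights M. The sketch here is a generic illustration of how direct lateral inhibition can produce such an output without ever forming a matrix inverse; it uses assumed leaky fixed-point dynamics and hypothetical variable names, and is not the paper's Algorithm 1.

```python
import numpy as np

def recurrent_output(W, M, x, eta=0.1, n_steps=500):
    """Run leaky recurrent dynamics y <- y + eta * (W x - M y), whose fixed
    point is y = M^{-1} W x when M is positive definite. Feedforward drive
    (W x) is balanced against lateral inhibition (M y); no explicit matrix
    inverse is ever computed."""
    y = np.zeros(M.shape[0])
    for _ in range(n_steps):
        y = y + eta * (W @ x - M @ y)
    return y

# Sanity check against the explicit solution on random weights.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 10))
L = rng.standard_normal((4, 4))
M = 0.1 * (L @ L.T) + np.eye(4)     # positive-definite lateral weight matrix
x = rng.standard_normal(10)
assert np.allclose(recurrent_output(W, M, x), np.linalg.solve(M, W @ x), atol=1e-6)
```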
Methods
  • The authors test Bio-SFA (Alg. 1) on synthetic and naturalistic datasets.
  • The evaluation code is available at github.com/flatiron/bio-sfa.
  • To measure the performance of the algorithm, the authors compare the “slowness” of the projection Y = M⁻¹WX with that of the slowest possible projection.
  • This can be quantified using the objective (6); an illustrative sketch of such a comparison follows this list.
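The following sketch illustrates such a comparison under the assumption that slowness is measured as the total variance of the temporal differences of the (whitened) output. The exact form of objective (6) is given in the paper, so treat this as an illustration rather than the authors' evaluation code; the function names are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def slowness(Y):
    """Total variance of the temporal differences of an output Y with shape
    (features, time); smaller means slower."""
    dY = np.diff(Y, axis=1)
    return np.trace(dY @ dY.T) / dY.shape[1]

def optimal_slowness(X, k):
    """Slowness achieved by the slowest possible k-dimensional whitened linear
    projection of X, obtained from the generalized eigenvalue problem."""
    X = X - X.mean(axis=1, keepdims=True)
    dX = np.diff(X, axis=1)
    A = dX @ dX.T / dX.shape[1]
    B = X @ X.T / X.shape[1]
    eigvals = eigh(A, B, eigvals_only=True)     # ascending order
    return eigvals[:k].sum()

# A learned output Y (e.g. Y = inv(M) @ W @ X from the online network) can be
# scored by the ratio slowness(Y) / optimal_slowness(X, k), which should
# approach 1 if the learned projection is close to optimal.
```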
Conclusion
  • The authors derived an online algorithm for SFA with a biologically plausible neural network implementation, which is an important step towards understanding how the brain could use temporal slowness as a computational principle.
  • The synaptic update rules require both the pre- and post-synaptic neurons to store slow variables; signal frequencies in dendrites are slower than in axons, suggesting that it is more likely for slow variables to be stored in the post-synaptic neuron, not the pre-synaptic neuron.
    [Figure: (a) layered architecture; (b) SFA firing maps; (c) ICA firing maps; (d) slowness of the SFA output]
  • An interesting future direction is to understand the effect of enforcing a nonnegativity constraint on y_t in the objective function (9).
References
  • Pietro Berkes and Laurenz Wiskott. Applying slow feature analysis to image sequences yields a rich repertoire of complex cell properties. In International Conference on Artificial Neural Networks, pages 81–86.
  • Pietro Berkes and Laurenz Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6):9–9, 2005.
  • T. Blaschke, P. Berkes, and L. Wiskott. What is the relationship between slow feature analysis and independent component analysis? Neural Computation, 18(10):2495–2508, 2006.
  • T. Blaschke and L. Wiskott. CuBICA: Independent component analysis by simultaneous third- and fourth-order cumulant diagonalization. IEEE Transactions on Signal Processing, 52(5):1250–1256, 2004.
  • David Clark, Jesse Livezey, and Kristofer Bouchard. Unsupervised discovery of temporal structure in noisy data with dynamical components analysis. In Advances in Neural Information Processing Systems, pages 14267–14278, 2019.
  • Trevor F Cox and Michael AA Cox. Multidimensional Scaling. Chapman and Hall/CRC, 2000.
  • Felix Creutzig and Henning Sprekeler. Predictive coding and the slowness principle: An information-theoretic approach. Neural Computation, 20(4):1026–1041, 2008.
  • Peter Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, June 1991.
  • Mathias Franzius, Henning Sprekeler, and Laurenz Wiskott. Slowness and sparseness lead to place, head-direction, and spatial-view cells. PLoS Computational Biology, 3(8):e166, 2007.
  • A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411–430, June 2000.
  • Aapo Hyvarinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
  • Aapo Hyvärinen and Erkki Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411–430, 2000.
  • Christof Koch and Tomaso Poggio. Multiplying with synapses and neurons. In Single neuron computation, pages 315–345.
  • Varun Raj Kompella, Matthew Luciw, and Jürgen Schmidhuber. Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams. Neural Computation, 24(11):2994–3024, 2012.
  • David Lipshutz, Yanis Bahroun, Siavash Golkar, Anirvan M. Sengupta, and Dmitri B. Chklovskii. A biologically plausible neural network for multi-channel canonical correlation analysis. arXiv preprint arXiv:2010.00525, 2020.
  • Stephan Liwicki, Stefanos Zafeiriou, and Maja Pantic. Incremental slow feature analysis with indefinite kernel for online temporal video segmentation. In Computer Vision – ACCV 2012, volume 7725, pages 162–176. Springer Berlin Heidelberg, 2013.
  • Zeeshan Khawar Malik, Amir Hussain, and Jonathan Wu. Novel biologically inspired approaches to extracting online information from temporal data. Cognitive Computation, 6(3):595–607, April 2014.
  • Bartlett W Mel and Christof Koch. Sigma-pi learning: On radial basis functions and cortical associative learning. In Advances in Neural Information Processing Systems, pages 474–481, 1990.
  • Graeme Mitchison. Removing time variation with the anti-Hebbian differential synapse. Neural Computation, 3(3):312–320, 1991.
  • Frank Noé and Cecilia Clementi. Kinetic distance and kinetic maps from molecular dynamics simulation. Journal of Chemical Theory and Computation, 11(10):5002–5011, September 2015.
  • Cengiz Pehlevan and Dmitri Chklovskii. A normative theory of adaptive dimensionality reduction in neural networks. In Advances in Neural Information Processing Systems, pages 2269–2277, 2015.
  • Guillermo Pérez-Hernández, Fabian Paul, Toni Giorgino, Gianni De Fabritiis, and Frank Noé. Identification of slow molecular order parameters for Markov model construction. The Journal of Chemical Physics, 139(1):015102, July 2013.
  • David E Rumelhart, Geoffrey E Hinton, James L McClelland, et al. A general framework for parallel distributed processing. Parallel distributed processing: Explorations in the microstructure of cognition, 1(45-76):26, 1986.
  • Fabian Schönfeld and Laurenz Wiskott. RatLab: an easy to use tool for place code simulations. Frontiers in Computational Neuroscience, 7, 2013.
  • Fabian Schönfeld and Laurenz Wiskott. Modeling place field activity with hierarchical slow feature analysis. Frontiers in Computational Neuroscience, 9, 2015.
  • Christian R. Schwantes and Vijay S. Pande. Modeling molecular kinetics with tICA and the kernel trick. Journal of Chemical Theory and Computation, 11(2):600–608, January 2015.
  • Mohammad M. Sultan and Vijay S. Pande. tICA-metadynamics: Accelerating metadynamics by using kinetically selected collective variables. Journal of Chemical Theory and Computation, 13(6):2440–2447, May 2017.
  • J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London. Series B: Biological Sciences, 265(1394):359–366, 1998.
  • Björn Weghenkel and Laurenz Wiskott. Slowness as a proxy for temporal predictability: An empirical comparison. Neural Computation, 30(5):1151–1179, 2018.
  • Laurenz Wiskott. Estimating driving forces of nonstationary time series with slow feature analysis. arXiv preprint cond-mat/0312317, 2003.
  • Laurenz Wiskott and Terrence J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
  • Bardia Yousefi and Chu Kiong Loo. Development of fast incremental slow feature analysis (f-IncSFA). In The 2012 International Joint Conference on Neural Networks (IJCNN). IEEE, June 2012.
  • Qingfu Zhang and Yiu Wing Leung. A class of learning algorithms for principal component analysis and minor component analysis. IEEE Transactions on Neural Networks, 11(1):200–204, 2000.
  • Zhang Zhang and Dacheng Tao. Slow feature analysis for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):436–450, 2012.
  • Tiziano Zito. Modular toolkit for data processing (MDP): a python data processing framework. Frontiers in Neuroinformatics, 2, 2008.
Hierarchical network details
  2. The second layer consists of a two-dimensional array of 8 × 2 Bio-SFA modules. Each module receives input from a 14 × 6 grid of first-layer modules, again overlapping each other by half their length in each dimension. Since the output of each first-layer module is 32-dimensional, the vectorized input to each second-layer module has dimension 2688 = 32 × 14 × 6. The output of each second-layer module is a sequence of 32-dimensional vectors.
  3. The third layer consists of a single Bio-SFA module that receives input from all 8 × 2 modules in the second layer, so its input has dimension 512 = 32 × 8 × 2. Its output is a sequence of 32-dimensional vectors.
  4. The fourth layer is an offline ICA algorithm, described below. It receives the 32-dimensional output of the third layer and produces a 32-dimensional output.

Per-module processing
  2. The projected sequence is quadratically expanded to generate the 560-dimensional expanded sequence, which is centered in the online setting using the running mean (see the sketch after this list).
  3. Bio-SFA is applied to the expanded sequence to generate a 32-dimensional output.
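The 560-dimensional figure in step 2 is consistent with a standard quadratic expansion of a 32-dimensional vector: 32 linear terms plus 32·33/2 = 528 distinct products y_i·y_j give 560 features. A minimal sketch follows; the function name is illustrative, not taken from the paper's code.

```python
import numpy as np

def quadratic_expansion(y):
    """Expand a d-dimensional vector into its d linear terms plus the
    d*(d+1)/2 distinct quadratic monomials y_i * y_j with i <= j."""
    y = np.asarray(y)
    i, j = np.triu_indices(y.size)      # upper triangle, including the diagonal
    return np.concatenate([y, y[i] * y[j]])

# For d = 32 this yields the 560-dimensional expanded vector described above.
assert quadratic_expansion(np.zeros(32)).shape == (560,)
```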
Author
David Lipshutz
Charles Windolf