Nov. 28, 2011
Symbolic Systems Forum
"Hyperdimensional Computing for Modeling How Brains Compute"
Pentti Kanerva, Redwood Center for Theoretical Neuroscience, UC Berkeley, and CSLI
Monday, November 28, 2011
Building 460, Room 126
map link: http://campus-map.stanford.edu/index.cfm?ID=01-460
We think of the brain as a kind of computer, but what
brains accomplish is very different from what we can do with
computers. It is very likely that our present computer
architecture is incapable, in principle, of producing
brainlike behavior. We can look at neuroanatomy for ideas
about computer architecture with more brainlike powers.
What is striking about the brain's circuits is their sheer size,
which in computer terms means that we should compute with,
say, 10,000-bit words rather than with 16-to-64-bit words.
What would computing with such wide words be like?
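As an illustration (a sketch, not from the talk itself), here is what computing with 10,000-bit words can look like in the binary style of hyperdimensional computing: XOR for binding, bitwise majority for superposition, and Hamming distance for similarity. All function names are invented for illustration.

```python
import numpy as np

D = 10_000  # dimensionality of one "word"
rng = np.random.default_rng(0)

def random_hv():
    """A random hypervector: each bit 0 or 1 with equal probability."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Bind two vectors with elementwise XOR; binding is its own inverse."""
    return a ^ b

def bundle(a, b, c):
    """Superpose three vectors by bitwise majority vote."""
    return ((a.astype(int) + b + c) > 1).astype(np.uint8)

def hamming(a, b):
    """Normalized Hamming distance: ~0.5 for unrelated random vectors."""
    return np.mean(a != b)

a, b, c = random_hv(), random_hv(), random_hv()
print(hamming(a, b))                     # close to 0.5: quasi-orthogonal
print(hamming(bind(bind(a, b), b), a))   # exactly 0.0: unbinding recovers a
print(hamming(bundle(a, b, c), a))       # well below 0.5: the bundle resembles each part
```

Quasi-orthogonality is the key property: two independently drawn 10,000-bit vectors disagree on almost exactly half their bits, so a vector that disagrees on markedly fewer than half is recognizably related.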
Neural-net associative memories (e.g., Willshaw, Hopfield)
are early examples of that kind of computing. They are
content-addressable and work with incomplete or noisy data
vectors. High-dimensional vectors' tolerance for noise is well known.
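That noise tolerance is easy to demonstrate (a hedged sketch under the same binary-vector assumptions as above): corrupt 30% of the bits of a stored 10,000-bit item and it is still far closer to the original than to any other stored item.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
memory = rng.integers(0, 2, size=(100, D), dtype=np.uint8)  # 100 stored items

probe = memory[42].copy()
flips = rng.choice(D, size=3000, replace=False)  # flip 30% of the bits
probe[flips] ^= 1

# Hamming distance from the noisy probe to every stored item.
distances = np.mean(memory != probe, axis=1)
print(distances.argmin())  # → 42: retrieval still succeeds
```

The corrupted probe sits at distance 0.30 from its original but ~0.50 from everything else, and at this dimensionality the distances to unrelated items concentrate so tightly around 0.50 that the two cases never overlap in practice.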
Less well known is their suitability for symbolic computing.
The required operations form the core of a new kind of
computing that seems particularly suited for modeling
cognition, as well as for implementation in nanotechnology.
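The symbolic side can be sketched with role–filler bindings: a record is a superposition of bound pairs, and unbinding with a role yields a noisy copy of its filler, cleaned up by nearest-neighbor search over a codebook. This is a simplified binary variant in the spirit of such models, not the talk's own formulation; all symbol names are invented.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)
hv = lambda: rng.integers(0, 2, size=D, dtype=np.uint8)

# Codebook of atomic symbols: roles (upper case) and fillers (lower case).
symbols = {name: hv() for name in
           ["NAME", "CAPITAL", "CURRENCY", "usa", "washington", "dollar"]}

def bundle(a, b, c):
    """Bitwise majority vote of three vectors."""
    return ((a.astype(int) + b + c) > 1).astype(np.uint8)

# A record is a bundle of role-filler bindings (XOR binds role to filler).
record = bundle(symbols["NAME"] ^ symbols["usa"],
                symbols["CAPITAL"] ^ symbols["washington"],
                symbols["CURRENCY"] ^ symbols["dollar"])

# Query "what is the capital?": unbind with the role, then clean up
# by nearest neighbor over the codebook.
noisy = record ^ symbols["CAPITAL"]
best = min(symbols, key=lambda k: np.mean(symbols[k] != noisy))
print(best)  # → washington
```

The whole record fits in a single 10,000-bit word, yet each field remains individually queryable, which is what makes these operations a plausible substrate for symbolic computation.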
The best-known model of the kind, and perhaps the first,
is Plate's Holographic Reduced Representation from the early
1990s, but the field remains largely unexplored. Language
provides fertile grounds for the exploration.
Kanerva, P. (2009). Hyperdimensional computing: An introduction
to computing in distributed representation with
high-dimensional random vectors. Cognitive Computation 1(2): 139-159.