---
title: VSA, Analogy, and Dynamic Similarity
author: Ross W. Gayler^[0000-0003-4679-585X]^
date: 2020-03-16 Workshop on Developments in HD Computing and VSA, Heidelberg, Germany
institute: \href{https://www.rossgayler.com}{www.rossgayler.com} \hspace{20pt} \href{mailto:[email protected]}{[email protected]}
output: binb::metropolis
classoption: "aspectratio=169"
fontsize: 12pt
bibliography: bibliography.bibtex
csl: apa-single-spaced.csl
---
```{r LICENSE, include=FALSE}
# "VSA, Analogy, and Dynamic Similarity" (c) by Ross W. Gayler
# A presentation given at the Workshop on Developments in Hyperdimensional Computing and Vector Symbolic Architectures,
# 2020-03-16 in Heidelberg, Germany.
#
# This document is licensed under a Creative Commons Attribution 4.0 International License.
#
# You should have received a copy of the license along with this work.
# If not, see http://creativecommons.org/licenses/by/4.0/.
```
```{r setup, include=FALSE}
knitr::opts_chunk$set(cache=FALSE)
```
## Analogy as structure mapping
- Arguable that analogy is the core of cognition [@gust_analogical_2008; @blokpoel_deep_2018]
- Analogy commonly construed as *structure mapping* between *source* and *target* [@gentner_structure-mapping_1983]
- Find maximal subgraph isomorphisms between *source* and *target* graphs
- Find a mapping between *source* and *target* that makes maximal subgraphs *identical*
![](figs/source_target.png){width=55%} \Huge $^{\longrightarrow}$ \normalsize ![](figs/source_target_merge.png){width=25%}
## Similarity and generalisation
- Cognitive objective to generalise as widely and rapidly as possible
- Generalisation usually invokes/induces a concept of similarity
- Typical statistical/ML models use *literal* similarity
- Relational similarity typically encodes relations as literals
- Analogical similarity is based on relational structure (relations between relations)
- Relational structures tend to reflect real causes in the environment (?)
- Relational structures reduce reliance on "good" encoding of literals (?)
- Analogy includes literal similarity as a special case
## Static similarity in VSA
- Similarity (angle between vectors) is central to VSA [@Kanerva2009]
- Typically a \underline{fixed} mapping from "things" to vectors (representations)
- Emphasis on encodings that yield useful similarity structure [@Sahlgren2004; @purdy_encoding_2016]
- Representations may be learned ...
- E.g. Random Indexing [@Sahlgren2004], vector embedding [@pennington_glove:_2014]
- but are effectively fixed at time of use
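The fixed similarity structure VSA relies on can be sketched in a few lines. This is a minimal illustration (Python with NumPy; the dimensionality, noise level, and seed are illustrative choices, not taken from any cited system): random bipolar hypervectors are quasi-orthogonal, while a degraded copy of a vector remains strongly similar to the original.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # dimensionality (illustrative choice)

def rand_vec():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice(np.array([-1, 1]), size=d)

def cos(a, b):
    """Cosine similarity (the angle between vectors)."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

a, b = rand_vec(), rand_vec()
noisy_a = np.where(rng.random(d) < 0.1, -a, a)  # flip ~10% of elements

print(cos(a, b))        # unrelated random vectors: quasi-orthogonal, near 0
print(cos(a, noisy_a))  # degraded copy: still strongly similar, near 0.8
```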
## Human dynamic similarity
- Human similarity judgments known to be context-dependent [@cheng_context-dependent_1990]
- Arguable that similarity and analogy are based on the same processes [@gentner_structure_1997]
- Arguable that representations are created on-the-fly in response to task demands [@chalmers_high-level_1992]
- Doesn’t necessarily imply that base representations are context-dependent
- Could have dynamic *working* representations
derived from the static base representations by context-dependent transforms
## Substitution as a dynamic transformation
- An obvious candidate for a dynamic transformation function in VSA is substitution by binding
- The substitution mapping can be specified as a vector
- The substitution mapping vector can be dynamically generated [@Kanerva2009]
- This implies an internal degree of freedom (a register to hold the substitution vector while it evolves) ...
- and a recurrent VSA circuit to provide the dynamics to evolve the substitution vector
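Substitution by binding can be sketched concretely, assuming a MAP-style VSA in which binding is elementwise multiplication of bipolar vectors (so every vector is its own inverse); the names and parameters below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10_000  # dimensionality (illustrative choice)
rand_vec = lambda: rng.choice(np.array([-1, 1]), size=d)

def bind(x, y):
    # Elementwise multiplication: a self-inverse binding (MAP-style),
    # so bind(x, x) is the identity vector of all ones.
    return x * y

a, b, c = rand_vec(), rand_vec(), rand_vec()

# The substitution mapping "a <-> b" is itself just a vector:
m = bind(a, b)
# Applying m to a recovers b exactly, because bind(a, a) is the identity.
assert np.array_equal(bind(m, a), b)

# Applied to a superposition containing a, m substitutes b for a;
# the other term becomes quasi-orthogonal noise.
s = a + c
s_sub = bind(m, s)  # = b + bind(m, c)
cos = lambda x, y: float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos(s_sub, b))  # near 1/sqrt(2): b is present in the result
print(cos(s_sub, a))  # near 0: a has been substituted away
```

Because the mapping is an ordinary vector, it can sit in a register and be evolved by the same VSA operations as everything else, which is what the recurrent circuit exploits.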
## Maximal subgraph isomorphism circuit
- Maximal subgraph isomorphism circuit [@Gayler2009]
- Finds the maximal subgraph isomorphism between two graphs represented as vectors
- Implemented as a recurrent VSA circuit with a register containing a vertex substitution vector that
evolves and settles over the course of the computation
- Final state of substitution vector represents the set of vertex substitutions that best transforms each static
graph into the other graph
![](figs/circuit.png){width=35%} \hspace{20pt} ![](figs/converge2.png){width=31%}
\vspace{10pt}
You can't see this because it's off the bottom of the slide
## Connections
- The subgraph isomorphism circuit can be interpreted as related
to the recently developed Resonator Circuits for factorisation of VSA representations [@kent_resonator_2019]
- They have internal degrees of freedom for each of the factors to be calculated
- Recurrent VSA dynamics that settles on the factorisation
- Subgraph isomorphism circuit can be interpreted as finding the square root of the product of the two graphs
- Alternatively, find a factor (the substitution vector)
such that the product of that factor with each of the graphs
is a good approximation to the other graph
- This links the VSA representation factorisation back to statistical modelling
- Long history of approximating matrices/tensors as the product of simpler factors [@kolda_tensor_2009].
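A factorisation of this kind can be sketched with a resonator-style iteration in the spirit of @kent_resonator_2019. This is a toy illustration, not the published circuit: binding is assumed to be self-inverse elementwise multiplication, and the dimensionality, codebook sizes, indices, and seed are all illustrative choices. Each factor estimate is an internal degree of freedom that settles under the recurrent dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 10_000, 15  # dimensionality and codebook size (illustrative)
X = rng.choice(np.array([-1, 1]), size=(k, d))  # codebook for factor 1
Y = rng.choice(np.array([-1, 1]), size=(k, d))  # codebook for factor 2

i, j = 3, 7
s = X[i] * Y[j]  # composite to be factorised (elementwise binding)

# Resonator-style iteration: unbind each factor estimate from s using the
# current estimate of the other factor, then clean it up by projecting
# onto its codebook and taking the sign.
x_hat = np.sign(X.sum(axis=0))  # start from a superposition of candidates
y_hat = np.sign(Y.sum(axis=0))
for _ in range(20):
    x_hat = np.sign(X.T @ (X @ (s * y_hat)))
    y_hat = np.sign(Y.T @ (Y @ (s * x_hat)))

# The dynamics settle on the true factors.
assert np.array_equal(x_hat, X[i])
assert np.array_equal(y_hat, Y[j])
```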
## Future work
Lots to do:
- Subgraph isomorphism circuit takes two graphs,
analogical memory takes a query graph and a memory stocked with *many* graphs
- Hypergraphs rather than graphs
(suggestion that cognitive relations have rank up to 4 or 5)
- Make *thoroughly* distributed
(avoid localist components)
## Resources
\tiny
The video of this presentation given on 2020-05-18 is archived on Zenodo at
https://doi.org/10.5281/zenodo.3835154
The slides of this presentation are archived on Zenodo at
https://doi.org/10.5281/zenodo.3700835
The source code of this presentation is publicly accessible on GitHub at
https://github.com/rgayler/VSA_2020_presentation
The extended abstract of this presentation is publicly accessible on GitHub at
https://github.com/rgayler/VSA_2020_presentation/raw/master/VSA2020_Gayler_abstract.pdf
\vspace{100pt}
![](figs/by.png){width=10%}
This presentation is licensed under a
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
\normalsize
## References
\tiny
\setlength{\parskip}{0.5em}