Notation

From A conversation about the brain

Generating a motor output depends on a critical step in which an input vector, $\vec{r}$

\begin{equation} \vec{r} = \begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{n} \end{bmatrix} \end{equation}

is compared with a long list of vectors of the same length, stored as the rows of a weight matrix $\mathbf{W}$

\begin{equation} \mathbf{W} = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,n} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{m,1} & w_{m,2} & \cdots & w_{m,n} \end{bmatrix} \end{equation}

and the stored vector most like $\vec{r}$ is chosen. This stored context is associated with a motor output (or equivalent). Some notation to describe this is given below. $n$ is the dimensionality of the space of sensory+motivational contexts ($\vec{r}\in \mathbb{R}^n$). In the case of the amoeba example, $n = 3$. For the brain, $n$ is very much larger (e.g. $n = 10^{10}$). $m$ is the number of stored contexts. In theory, this can be very large, e.g. of the order $2^n$.

By assumption, all the vectors stored in $\mathbf{W}$ have the same magnitude as each other and the same magnitude as $\vec{r}$:

\begin{equation} \lVert\vec{r}\rVert = \lVert w_{i,*}\rVert, \quad \forall i \in \{1,\ldots,m\} \end{equation}

For example, if each neuron contributing to $\vec{r}$ is either firing or not (1 or 0) and each synaptic weight is either ‘on’ or ‘off’ (1 or 0), this is equivalent to assuming that the proportion of neurons firing at any one time is a constant, $p$, equal to the proportion of synapses that are ‘on’ in each stored context, $w_{i,*}$.
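This equal-magnitude assumption can be checked with a small NumPy sketch; the sizes $n$, $m$ and the proportion $p$ below are illustrative, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 100, 8, 0.2  # illustrative: n inputs, m stored contexts, proportion p

# Binary input vector with exactly p*n neurons firing.
r = np.zeros(n)
r[rng.choice(n, size=int(p * n), replace=False)] = 1

# Weight matrix: each stored context also has exactly p*n synapses 'on'.
W = np.zeros((m, n))
for i in range(m):
    W[i, rng.choice(n, size=int(p * n), replace=False)] = 1

# Every row of W then has the same magnitude as r, namely sqrt(p*n).
assert np.allclose(np.linalg.norm(W, axis=1), np.linalg.norm(r))
```

With binary vectors, the magnitude is just the square root of the number of ones, so holding the proportion of ones fixed at $p$ makes all magnitudes equal to $\sqrt{pn}$.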

The stored vector, $w_{k,*}$, that is most similar to $\vec{r}$ can be found by:

\begin{equation} k = \underset{i}{\operatorname{argmax}}\, ({\mathbf{W} \vec{r}})_i \end{equation}

where $\underset{i}{\operatorname{argmax}}\, (\vec{x})_i$ returns the index of the maximum element of $\vec{x}$; i.e. $k$ is the row index of $\mathbf{W}$ that gives the maximum dot product between $\vec{r}$ and $w_{i,*}$ over $i \in \{1,\ldots,m\}$ (since all stored vectors have equal magnitude, this is also the maximum correlation).
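This best-match step is a single matrix–vector product followed by an argmax. A sketch in NumPy, with a toy $\mathbf{W}$ and $\vec{r}$ invented for illustration:

```python
import numpy as np

# Toy weight matrix: m = 3 stored binary contexts, n = 6, two 'on' synapses each.
W = np.array([[1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1]])

# Input vector with two neurons firing; it overlaps fully with the second row.
r = np.array([0, 0, 1, 1, 0, 0])

k = int(np.argmax(W @ r))  # index of the stored vector most like r
c = W[k]                   # the recognised context, c = w_{k,*}
```

Here `W @ r` computes all $m$ dot products at once, and `np.argmax` picks the winning index (the second row, $k = 1$ with zero-based indexing).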

For brevity, let

\begin{equation} \vec{c}=w_{k,*} \end{equation}

$\vec{c}$ is the 'recognised sensory+motivational context' and is associated with an output, $\vec{o}$, which in the simple examples here is a motor output (e.g. the amoeba example). The output is not necessarily motor, though: someone thinking for ten minutes before making a move in chess may produce a great deal of 'virtual' movement along paths through sensory+motivational space but no actual motor output.

If $\vec{o}$ leads to movement in the world, there is usually a new sensory input, a new motivational input, and hence a new input vector $\vec{r}$.

It will be useful to describe the separate contributions to the input vector $\vec{r}$. It is the concatenation of a vector of sensory inputs, $\vec{s}$, where $\vec{s} \in \mathbb{R}^{ns}$,

\begin{equation} \vec{s} = \begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{ns} \\ \end{bmatrix} \end{equation}


and a vector of motivational inputs, $\vec{t}$, where $\vec{t} \in \mathbb{R}^{nt}$,

\begin{equation} \vec{t} = \begin{bmatrix} r_{ns+1} \\ r_{ns+2} \\ \vdots \\ r_{ns+nt} \\ \end{bmatrix} \end{equation}

each of which adds independent dimensions to $\vec{r}$, so that $n = ns + nt$:

\begin{equation} \vec{r} = \begin{bmatrix} r_{1} \\ r_{2} \\ \vdots \\ r_{ns} \\ r_{ns+1} \\ \vdots \\ r_{ns+nt} \\ \end{bmatrix} \end{equation}

In the case of the amoeba example, $\vec{s}\in \mathbb{R}^2$, $\vec{t} \in \mathbb{R}$, and $\vec{r} \in \mathbb{R}^3$.
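The concatenation can be sketched directly for the amoeba-sized case with $ns = 2$ sensory and $nt = 1$ motivational inputs; the particular values below are invented for illustration:

```python
import numpy as np

s = np.array([1.0, 0.0])    # sensory inputs, ns = 2
t = np.array([1.0])         # motivational input, nt = 1

r = np.concatenate([s, t])  # r lives in R^(ns+nt) = R^3
```

The same pattern extends to any further subdivision: concatenating sub-vectors simply stacks their dimensions, so the total dimensionality is always the sum of the parts.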

Similarly, the sensory input vector can be described as the concatenation of non-overlapping contributory vectors: for example, a vector of dorsal visual inputs $\vec{s^{\prime}}$ with $nd$ elements, a vector of ventral visual inputs $\vec{s^{\prime\prime}}$ with $nv$ elements, and a vector of other sensory inputs $\vec{s^{\prime\prime\prime}}$ with $no$ elements, so that $ns = nd + nv + no$:

\begin{equation} \vec{s} = \begin{bmatrix} s^{\prime}_{1} \\ s^{\prime}_{2} \\ \vdots \\ s^{\prime}_{nd} \\ s^{\prime\prime}_{nd+1} \\ s^{\prime\prime}_{nd+2} \\ \vdots \\ s^{\prime\prime}_{nd+nv} \\ s^{\prime\prime\prime}_{nd+nv+1} \\ s^{\prime\prime\prime}_{nd+nv+2} \\ \vdots \\ s^{\prime\prime\prime}_{nd+nv+no}\\ \end{bmatrix} \end{equation}