EMComposition¶
Contents¶
Overview¶
The EMComposition implements a configurable, content-addressable form of episodic, or external, memory that emulates an EpisodicMemoryMechanism – reproducing all of the functionality of its ContentAddressableMemory Function – in the form of an AutodiffComposition that is capable of learning how to differentially weight the cues used for retrieval, and that adds the capability for memory_decay. Its memory is configured using the memory_template argument of its constructor, which defines how each entry in memory is structured (the number of fields in each entry and the length of each field), and its field_weights argument, which defines which fields are used as cues for retrieval – “keys” – and whether and how they are differentially weighted in the match process used for retrieval, and which fields are treated as “values” that are retrieved but not used for the match process. The inputs corresponding to each key and each value are represented as INPUT Nodes of the EMComposition (listed in its key_input_nodes and value_input_nodes attributes, respectively), and the retrieved values are represented as OUTPUT Nodes of the EMComposition. The memory can be accessed using its memory attribute.
Organization
Entries and Fields. Each entry in memory can have an arbitrary number of fields, and each field can have an arbitrary
length. However, all entries must have the same number of fields, and the corresponding fields must all have the same
length across entries. Fields can be weighted to determine the influence they have on retrieval, using the field_weights parameter (see retrieval below).
The number and shape of the fields in each entry are specified in the memory_template argument of the EMComposition’s constructor (see memory_template). Which fields are treated as keys (i.e., used as cues for retrieval) and which are treated as values (i.e., retrieved but not used for matching) is specified in the field_weights argument of the EMComposition’s constructor (see field_weights).
Operation
Retrieval. The values retrieved from memory (one for each field) are based on the relative similarity of the keys to the entries in memory, computed as the dot product of each key and the values in the corresponding field of each entry in memory. These dot products are then softmaxed, the softmax distributions are weighted by the corresponding field_weights for each field, and the results are combined to produce a single softmax distribution over the entries in memory. That distribution is used to compute a weighted average of the memories in each field, which is returned as the result of the EMComposition’s execution.
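The retrieval computation just described can be sketched in plain numpy. This is an illustrative sketch of the math only, not the PsyNeuLink implementation; the `softmax` and `retrieve` names are hypothetical, and normalization of keys and memories is omitted for brevity:

```python
import numpy as np

def softmax(x, gain=1.0):
    """Softmax with a gain (inverse temperature) parameter."""
    e = np.exp(gain * (x - np.max(x)))
    return e / e.sum()

def retrieve(keys, memory, field_weights, gain=10.0):
    """Sketch of EMComposition-style retrieval, assuming uniform field lengths.

    keys:          list of 1d arrays, one per field (entries for value fields are unused)
    memory:        array of shape (n_entries, n_fields, field_len)
    field_weights: one weight per field; zeros mark value fields
    """
    n_entries = memory.shape[0]
    combined = np.zeros(n_entries)
    for f, (key, w) in enumerate(zip(keys, field_weights)):
        if w == 0:
            continue  # value field: not used for matching
        dots = memory[:, f, :] @ key          # dot product of key with each entry
        combined += w * softmax(dots, gain)   # weight and sum the distributions
    combined /= combined.sum()                # single distribution over entries
    # weighted average over entries, one retrieved vector per field
    return [combined @ memory[:, f, :] for f in range(memory.shape[1])]
```

For example, a key matching the first entry retrieves (approximately) that entry’s value field.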
Storage. The inputs to the EMComposition’s fields are stored in memory after each execution, with a probability determined by storage_prob. If memory_decay is specified, then the memory is decayed by that amount after each execution. If memory_capacity has been reached, then each new memory replaces the weakest entry (i.e., the one with the smallest norm across all of its fields) in memory.
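The storage rule above can be sketched as follows (a minimal numpy sketch; the `store` function and its signature are illustrative, not the PsyNeuLink API):

```python
import numpy as np

rng = np.random.default_rng(0)

def store(entry, memory, storage_prob=1.0, decay_rate=0.0):
    """Sketch of EMComposition-style storage; modifies memory in place.

    entry:  array of shape (n_fields, field_len) to store
    memory: array of shape (n_entries, n_fields, field_len), assumed at capacity
    """
    if decay_rate:
        memory *= (1 - decay_rate)            # decay all existing entries
    if rng.random() < storage_prob:
        # replace the weakest entry: smallest norm across all of its fields
        norms = np.linalg.norm(memory.reshape(len(memory), -1), axis=1)
        memory[np.argmin(norms)] = entry
    return memory
```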
Creation¶
An EMComposition is created by calling its constructor, which takes the following arguments:
Field Specification
memory_template: This specifies the shape of the entries to be stored in the EMComposition’s memory, and can be used to initialize memory with pre-specified entries. The memory_template argument can be specified in one of three ways (see Examples for representative use cases):
tuple: interpreted as an np.array shape specification, which must be of length 2 or 3. If it is a 3-item tuple, the first item specifies the number of entries in memory, the 2nd the number of fields in each entry, and the 3rd the length of each field. If it is a 2-item tuple, it specifies the shape of an entry, and the number of entries is specified by memory_capacity. All entries are filled with zeros or the value specified by memory_fill.
Warning
If memory_template is specified with a 3-item tuple and memory_capacity is also specified with a value that does not match the first item of memory_template, an error is generated indicating the conflict in the number of entries specified.
Hint
To specify a single field, a list or array must be used (see below), as a 2-item tuple is interpreted as specifying the shape of an entry, and so it cannot be used to specify the number of entries, each of which has a single field.
2d list or array: interpreted as a template for memory entries. This can be used to specify fields of different lengths (i.e., entries that are ragged arrays), with each item in the list (axis 0 of the array) used to specify the length of the corresponding field. The template is then used to initialize all entries in memory. If the template includes any non-zero elements, then the array is replicated for all entries in memory; otherwise, entries are filled with either zeros or the value specified in memory_fill.
Hint
To specify a single entry, with all other entries filled with zeros or the value specified in memory_fill, use a 3d array as described below.
3d list or array: used to initialize memory directly with the entries specified in the outer dimension (axis 0) of the list or array. If memory_capacity is not specified, then it is set to the number of entries in the list or array. If memory_capacity is specified, then the number of entries specified in memory_template must be less than or equal to memory_capacity. If it is less than memory_capacity, then the remaining entries in memory are filled with zeros or the value specified in memory_fill (see below): if all of the entries specified contain only zeros and memory_fill is specified, then the matrix is filled with the value specified in memory_fill; otherwise, zeros are used to fill the remaining entries.
Memory Capacity
memory_capacity: specifies the number of items that can be stored in the EMComposition’s memory; when memory_capacity is reached, each new entry overwrites the weakest entry (i.e., the one with the smallest norm across all of its fields) in memory. If memory_template is specified as a 3-item tuple or a 3d list or array (see above), then that is used to determine memory_capacity (if memory_capacity is also specified and conflicts with either of those, an error is generated). Otherwise, it can be specified using a numerical value, with a default of 1000. The memory_capacity cannot be modified once the EMComposition has been constructed.
memory_fill: specifies the value used to fill the memory, based on the shape specified in the memory_template (see above). The value can be a scalar, or a tuple specifying an interval over which to draw random values to fill memory – both elements must be scalars, with the first specifying the lower bound and the second the upper bound. If memory_fill is not specified, and no entries are specified in memory_template, then memory is filled with zeros.
Hint
If memory is initialized with all zeros and normalize_memories is set to True (see below), then a numpy.linalg warning is issued about divide by zero. This can be ignored, as it does not affect the results of execution, but it can be averted by specifying memory_fill to use small random values (e.g., memory_fill=(0,.001)).
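The divide-by-zero warning mentioned in the hint comes from normalizing an all-zero vector, which can be seen in a minimal numpy illustration:

```python
import numpy as np

z = np.zeros(3)
with np.errstate(invalid="ignore", divide="ignore"):
    out = z / np.linalg.norm(z)   # norm of the zero vector is 0, so 0/0 -> nan
# out is all nan, which is why a small non-zero memory_fill is recommended
```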
field_weights: specifies which fields are used as keys, and how they are weighted during retrieval. The number of values specified must match the number of fields specified in memory_template (i.e., the size of its first dimension (axis 0)). All non-zero entries must be positive, and designate keys – fields that are used to match items in memory for retrieval (see Match memories by field). Entries of 0 designate values – fields that are ignored during the matching process, but the values of which are retrieved and assigned as the value of the corresponding retrieval_node. This distinction between keys and values implements a standard “dictionary”; however, if all entries are non-zero, then all fields are treated as keys, implementing a full form of content-addressable memory. If learn_weights is True, the field_weights can be modified during training; otherwise they remain fixed. The following options can be used to specify field_weights:
None (the default): all fields except the last are treated as keys, and are weighted equally for retrieval, while the last field is treated as a value field;
single entry: its value is ignored, and all fields are treated as keys (i.e., used for retrieval) and weighted equally for retrieval;
multiple non-zero entries: if all entries are identical, the value is ignored and the corresponding keys are weighted equally for retrieval; if the non-zero entries are not identical, they are used to weight the corresponding fields during retrieval (see Weight fields). In either case, the remaining fields (with zero weights) are treated as value fields.
field_names: specifies names that can be assigned to the fields. The number of names specified must match the number of fields specified in the memory_template. If specified, the names are used to label the nodes of the EMComposition. If not specified, the fields are labeled generically as “Key 0”, “Key 1”, etc.
concatenate_keys: specifies whether keys are concatenated before a match is made to items in memory. This is False by default. It is also ignored if the field_weights for all keys are not equal (i.e., the non-zero weights are not all the same – see field_weights) and/or normalize_memories is set to False; setting concatenate_keys to True in either of those cases issues a warning, and the setting is ignored. If the key field_weights (i.e., all non-zero values) are all equal and normalize_memories is set to True, then setting concatenate_keys to True creates a concatenate_keys_node that receives input from all of the key_input_nodes and passes them as a single vector to the match_node.
Note
While this is computationally more efficient, it can affect the outcome of the matching process, since computing the normalized dot product of a single vector comprised of the concatenated inputs is not identical to computing the normalized dot product of each field independently and then combining the results.
Note
All key_input_nodes and retrieval_nodes are always preserved, even when concatenate_keys is True, so that separate inputs can be provided for each key, and the value of each key can be retrieved separately.
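The difference between per-field and concatenated normalization noted above can be checked with a small numpy example (illustrative values only):

```python
import numpy as np

def norm_dot(a, b):
    """Dot product of unit-normalized vectors (cosine similarity)."""
    return (a / np.linalg.norm(a)) @ (b / np.linalg.norm(b))

k1, k2 = np.array([1., 0.]), np.array([0., 3.])   # two key fields
m1, m2 = np.array([1., 1.]), np.array([0., 1.])   # stored entry, same fields

per_field = (norm_dot(k1, m1) + norm_dot(k2, m2)) / 2   # fields normalized separately
concat    = norm_dot(np.concatenate([k1, k2]),          # fields normalized jointly
                     np.concatenate([m1, m2]))
# The two scores generally differ, so concatenation can change match outcomes.
```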
memory_decay_rate: specifies the rate at which items in the EMComposition’s memory decay; the default rate is AUTO, which sets it to 1 / memory_capacity, such that the oldest memories are the most likely to be replaced when memory_capacity is reached. If memory_decay_rate is set to 0, None, or False, then memories do not decay and, when memory_capacity is reached, the weakest memories are replaced, irrespective of order of entry.
Retrieval and Storage
storage_prob: specifies the probability that the inputs to the EMComposition will be stored as an item in memory on each execution.
normalize_memories: specifies whether keys and memories are normalized before computing their dot products.
softmax_gain: specifies the gain (inverse temperature) used for softmax normalizing the dot products of keys and memories (see Execution below). If a numerical value is specified, that is used. If the keyword CONTROL is specified (or the value is None), then the softmax_gain function is used to adaptively set the gain based on the entropy of the dot products, preserving the distribution over the non- (or near-) zero entries irrespective of how many (near-) zero entries there are.
learn_weights: specifies whether field_weights are modifiable during training.
learning_rate: specifies the rate at which field_weights are learned if learn_weights is True.
Structure¶
Input¶
The inputs corresponding to each key and value field are represented as INPUT Nodes of the EMComposition, listed in its key_input_nodes and value_input_nodes attributes, respectively.
Memory¶
The memory attribute contains a record of the entries in the EMComposition’s memory. This is in the form of a 2d array, in which rows (axis 0) are entries and columns (axis 1) are fields. The number of fields is determined by the memory_template argument of the EMComposition’s constructor, and the number of entries is determined by the memory_capacity argument.
The memories are actually stored in the matrix parameters of the MappingProjections from the retrieval_weighting_node to each of the retrieval_nodes. Memories associated with each key are also stored in the matrix parameters of the MappingProjections from the key_input_nodes to each of the corresponding match_nodes. This is done so that the match of each key to the memories for the corresponding field can be computed simply by passing the input for each key through the Projection (which computes the dot product of the input with the Projection’s matrix parameter) to the corresponding match_node; similarly, retrievals can be computed by passing the softmax distributions and weighting for each field computed in the retrieval_weighting_node through its Projection to each retrieval_node (which computes the dot product of the weighted softmax over entries with the corresponding field of each entry) to get the retrieved value for each field.
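The trick of storing a field’s memories as a projection matrix, so that matching reduces to a single matrix-vector product, can be sketched in numpy (variable names here are illustrative):

```python
import numpy as np

# One key field's memories, one row per stored entry
key_memory = np.array([[1., 0., 0.],     # entry 0's key
                       [0., 1., 0.]])    # entry 1's key

# Stored as a projection matrix of shape (field_len, n_entries),
# matching is just input @ matrix: one dot product per stored entry
projection_matrix = key_memory.T

key_input = np.array([1., 0., 0.])
match = key_input @ projection_matrix    # dot product with each stored key
```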
Output¶
The outputs corresponding to the value retrieved for each field are represented as OUTPUT Nodes of the EMComposition, listed in its retrieval_nodes attribute.
Execution¶
The arguments of the run, learn, and Composition.execute methods are the same as those of a Composition, and they can be passed any of the arguments valid for an AutodiffComposition. The details of how the EMComposition executes are described below.
Processing¶
When the EMComposition is executed, the following sequence of operations occurs (also see figure):
Concatenation. By default, if the field_weights are the same for all keys and normalize_memories is True, then, for efficiency of computation, the inputs provided to the key_input_nodes are concatenated into a single vector in the concatenate_keys_node, which is provided to a single match_node. However, if either of these conditions is not met, or if concatenate_keys is False, then the input to each key_input_node is provided to its own match_node (see concatenate keys for additional information).
Match memories by field. The values of each key_input_node (or the concatenate_keys_node, if the concatenate_keys attribute is True) are passed through the corresponding match_node, which computes the dot product of the input with each memory for the corresponding field, resulting in a vector of dot products, one for each memory in the corresponding field.
Softmax normalize matches over fields. The dot products of memories for each field are passed to the corresponding softmax_node, which applies a softmax function to normalize them. If softmax_gain is specified numerically, it is used as the gain (inverse temperature) for the softmax function; if it is specified as CONTROL or None, then the softmax_gain function is used to adaptively set the gain (see softmax_gain for details).
Weight fields. The softmax-normalized dot products of keys and memories for each field are passed to the retrieval_weighting_node, which applies the corresponding field_weight to the softmaxed dot products for each field, and then Hadamard sums those weighted dot products to produce a single weighting for each memory.
Retrieve values by field. The vector of weights for each memory generated by the retrieval_weighting_node is passed through the Projections to each of the retrieval_nodes to compute the retrieved value for each field.
Decay memories. If memory_decay is True, then each of the memories is decayed by the amount specified in memory_decay. This is done by multiplying the matrix parameter of the MappingProjection from the retrieval_weighting_node to each of the retrieval_nodes, as well as the matrix parameter of the MappingProjection from each key_input_node to the corresponding match_node, by 1 - memory_decay.
Store memories. After the values have been retrieved, the inputs for each field (i.e., the values in the key_input_nodes and value_input_nodes) are added by the storage_node as a new entry in memory, replacing the weakest one if memory_capacity has been reached. This is done by adding the input vectors to the corresponding rows of the matrix of the MappingProjection from the retrieval_weighting_node to each of the retrieval_nodes, as well as the matrix parameter of the MappingProjection from each key_input_node to the corresponding match_node (see note above for additional details). If memory_capacity has been reached, then the weakest memory (i.e., the one with the lowest norm across all fields) is replaced by the new memory.
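The decay-then-store sequence in the last two steps can be sketched for a single field’s matrix in numpy (a toy illustration of the arithmetic, not the PsyNeuLink code path; the decay factor and values are made up):

```python
import numpy as np

decay_rate = 0.25
memory = np.array([[2., 0.],    # rows are stored entries for one field
                   [0., 4.]])

# Decay memories: scale every stored entry by 1 - decay_rate
memory *= (1 - decay_rate)

# Store memories: replace the weakest (smallest-norm) entry with the new input
new_entry = np.array([5., 5.])
weakest = np.argmin(np.linalg.norm(memory, axis=1))
memory[weakest] = new_entry
```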
Learning¶
If learn is called and the learn_weights attribute is True, then the field_weights are modified to minimize the error passed to the EMComposition’s retrieval nodes, using the learning rate specified in the learning_rate attribute. If learn_weights is False (or run is called), then the field_weights are not modified; the EMComposition is simply executed without any modification, and the error signal is passed to the nodes that project to its INPUT Nodes.
Note
Although memory storage is implemented as a form of learning (through modification of MappingProjection matrix parameters; see memory storage), this occurs irrespective of how the EMComposition is run (i.e., whether learn or run is called), and is not affected by the learn_weights or learning_rate attributes, which pertain only to whether the field_weights are modified during learning.
Examples
The following are examples of how to configure and initialize the EMComposition’s memory:
Visualizing the EMComposition¶
The EMComposition can be visualized graphically, like any Composition, using its show_graph method. For example, the figure below shows an EMComposition that implements a simple dictionary, with one key field and one value field, each of length 5:
>>> from psyneulink import EMComposition
>>> em = EMComposition(memory_template=(2,5))
>>> em.show_graph()
Memory Template¶
The memory_template argument of an EMComposition’s constructor is used to configure its memory, which can be specified using either a tuple or a list or array.
Tuple specification
The simplest form of specification is a tuple, which uses the numpy shape format. If it has two elements (as in the example above), the first specifies the number of fields, and the second the length of each field. In this case, a default number of entries (1000) is created:
>>> em.memory_capacity
1000
The number of entries can be specified explicitly in the EMComposition’s constructor, using either the memory_capacity argument, or by using a 3-item tuple to specify the memory_template argument, in which case the first element specifies the number of entries, while the second and third specify the number of fields and the length of each field, respectively. The following are equivalent:
>>> em = EMComposition(memory_template=(2,5), memory_capacity=4)
and
>>> em = EMComposition(memory_template=(4,2,5))
both of which create a memory with 4 entries, each with 2 fields of length 5. The contents of memory can be inspected using the memory attribute:
>>> em.memory
[[array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])],
[array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])],
[array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])],
[array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])]]
The default for memory_capacity is 1000, which is used if it is not otherwise specified.
List or array specification
Note that in the example above the two fields have the same length (5). This is always the case when a tuple is used, as it generates a regular array. A list or numpy array can also be used to specify the memory_template argument.
For example, the following is equivalent to the examples above:
>>> em = EMComposition(memory_template=[[0,0,0],[0,0,0]], memory_capacity=4)
However, a list or array can be used to specify fields of different length (i.e., as a ragged array). For example, the following specifies one field of length 3 and another of length 1:
>>> em = EMComposition(memory_template=[[0,0,0],[0]], memory_capacity=4)
>>> em.memory
[[[array([0., 0., 0.]), array([0.])]],
[[array([0., 0., 0.]), array([0.])]],
[[array([0., 0., 0.]), array([0.])]],
[[array([0., 0., 0.]), array([0.])]]]
Memory fill
Note that the examples above generate a warning about the use of zeros to initialize the memory. This is because the default value for memory_fill is 0, and the default value for normalize_memories is True, which will cause a divide-by-zero warning when memories are normalized. While this doesn’t crash, it will result in nan’s that are likely to cause problems elsewhere. This can be avoided by specifying a non-zero value for memory_fill, such as a small number:
>>> em = EMComposition(memory_template=[[0,0,0],[0]], memory_capacity=4, memory_fill=.001)
>>> em.memory
[[[array([0.001, 0.001, 0.001]), array([0.001])]],
[[array([0.001, 0.001, 0.001]), array([0.001])]],
[[array([0.001, 0.001, 0.001]), array([0.001])]],
[[array([0.001, 0.001, 0.001]), array([0.001])]]]
Here, a single value was specified for memory_fill (which can be a float or int), which is used to fill all values. Random values can be assigned using a tuple to specify an interval between the first and second elements. For example, the following uses random values between 0 and 0.01 to fill all entries:
>>> em = EMComposition(memory_template=[[0,0,0],[0]], memory_capacity=4, memory_fill=(0,0.01))
>>> em.memory
[[[array([0.00298981, 0.00563404, 0.00444073]), array([0.00245373])]],
[[array([0.00148447, 0.00666486, 0.00228882]), array([0.00237541])]],
[[array([0.00432786, 0.00035378, 0.00265932]), array([0.00980598])]],
[[array([0.00151163, 0.00889032, 0.00899815]), array([0.00854529])]]]
Multiple entries
In the examples above, a single entry was specified, and that was used as a template for initializing the remaining entries in memory. However, a list or array can be used to directly initialize any or all entries. For example, the following initializes memory with two specific entries:
>>> em = EMComposition(memory_template=[[[1,2,3],[4]],[[100,101,102],[103]]], memory_capacity=4)
>>> em.memory
[[[array([1., 2., 3.]), array([4.])]],
[[array([100., 101., 102.]), array([103.])]],
[[array([0., 0., 0.]), array([0.])]],
[[array([0., 0., 0.]), array([0.])]]]
Note that the two entries must have exactly the same shapes. If they do not, an error is generated.
Also note that the remaining entries are filled with zeros (the default value for memory_fill).
Here again, memory_fill can be used to specify a different value:
>>> em = EMComposition(memory_template=[[[7],[24,5]],[[100],[3,106]]], memory_capacity=4, memory_fill=(0,.01))
>>> em.memory
[[[array([7.]), array([24., 5.])]],
[[array([100.]), array([ 3., 106.])]],
[[array([0.00803646]), array([0.00341276, 0.00286969])]],
[[array([0.00143196]), array([0.00079033, 0.00710556])]]]
Field Weights¶
By default, all of the fields specified are treated as keys except the last, which is treated as a “value” field –
that is, one that is not included in the matching process, but for which a value is retrieved along with the key fields.
For example, in the figure above, the first field specified was used as a key field,
and the last as a value field. However, the field_weights argument can be used to modify this, specifying which fields should be used as key fields – including the relative contribution that each makes to the matching process – and which should be used as value fields. Non-zero elements in the field_weights argument designate key fields, and zeros specify value fields. For example, the following specifies that the first two fields should be used as keys while the last two should be used as values:
>>> em = EMComposition(memory_template=[[0,0,0],[0],[0,0],[0,0,0,0]], memory_capacity=3, field_weights=[1,1,0,0])
>>> em.show_graph()
Use of field_weights to specify keys and values.¶
Note that the figure now shows RETRIEVAL WEIGHTING nodes, which are used to implement the relative contribution that each key field makes to the matching process, as specified in the field_weights argument. By default, these are equal (all assigned a value of 1), but different values can be used to weight the relative contribution of each key field. The values are normalized so that they sum to 1, and the relative contribution of each is determined by the ratio of its value to the sum of all non-zero values. For example, the following specifies that the first two fields should be used as keys, with the first contributing 75% to the matching process and the second field contributing 25%:
>>> em = EMComposition(memory_template=[[0,0,0],[0],[0,0]], memory_capacity=3, field_weights=[3,1,0])
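The weighting arithmetic for that example can be checked directly (plain numpy, illustrative only):

```python
import numpy as np

fw = np.array([3., 1., 0.])   # field_weights from the example above
norm = fw / fw.sum()          # normalize so the weights sum to 1
# norm is [0.75, 0.25, 0.]: the first key contributes 75%, the second 25%,
# and the zero-weighted third field is a value field (no contribution to matching)
```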
Class Reference¶
- class psyneulink.library.compositions.emcomposition.EMComposition(memory_template=[[0], [0]], memory_capacity=None, memory_fill=0, field_names=None, field_weights=None, concatenate_keys=False, learn_weights=False, learning_rate=None, memory_decay_rate='auto', normalize_memories=True, softmax_gain='control', storage_prob=1.0, random_state=None, seed=None, name='EM_Composition')¶
Subclass of AutodiffComposition that implements the functions of an EpisodicMemoryMechanism in a differentiable form and in which its field_weights parameter can be learned.
Takes only the following arguments, all of which are optional.
- Parameters
memory_template (tuple, list, 2d or 3d array : default [[0],[0]]) – specifies the shape of the items to be stored in the EMComposition’s memory; see memory_template for details.
memory_fill (scalar or tuple : default 0) – specifies the value used to fill the memory when it is initialized; see memory_fill for details.
field_weights (tuple : default (1,0)) – specifies the relative weight assigned to each key when matching an item in memory; see field weights for details.
field_names (list : default None) – specifies the optional names assigned to each field in the memory_template; see field names for details.
concatenate_keys (bool : default False) – specifies whether to concatenate the keys into a single field before matching them to items in the corresponding fields in memory; see concatenate keys for details.
normalize_memories (bool : default True) – specifies whether keys and memories are normalized before computing their dot product (similarity); see Match memories by field for additional details.
softmax_gain (float : default CONTROL) – specifies the temperature used for softmax normalizing the dot products of keys and memories; see Softmax normalize matches over fields for additional details.
storage_prob (float : default 1.0) – specifies the probability that an item will be stored in memory when the EMComposition is executed (see Retrieval and Storage for additional details).
learn_weights (bool : default False) – specifies whether field_weights are learnable during training; see Learning for additional details.
learning_rate (float : default .01) – specifies the rate at which field_weights are learned if learn_weights is True.
memory_capacity (int : default None) – specifies the number of items that can be stored in the EMComposition’s memory; see memory_capacity for details.
memory_decay (bool : default True) – specifies whether memories decay with each execution of the EMComposition; see memory_decay for details.
memory_decay_rate (float : default AUTO) – specifies the rate at which items in the EMComposition’s memory decay; see memory_decay_rate for details.
- memory¶
list of entries in memory, in which each row (outer dimension) is an entry and each item in the row is the value for the corresponding field; see Memory for additional details.
Note
This is a read-only attribute; memories can be added to the EMComposition’s memory only by executing its run or learn methods, with the entry as the inputs argument.
- Type
list[list[list[float]]]
- field_weights¶
determines which fields of the input are treated as “keys” (non-zero values), used to match entries in memory for retrieval, and which are treated as “values” (zero values), which are stored and retrieved from memory but not used in the match process (see Match memories by field; see field_weights for additional details of specification).
- Type
list[float]
- field_names¶
determines the names used to label fields in memory; see field_names for additional details.
- Type
list[str]
- learn_weights¶
determines whether field_weights are learnable during training; see Learning for additional details.
- Type
bool
- learning_rate¶
determines the rate at which field_weights are learned if learn_weights is True; see Learning for additional details.
- Type
float
- concatenate_keys¶
determines whether keys are concatenated into a single field before matching them to items in memory; see concatenate keys for additional details.
- Type
bool
- normalize_memories¶
determines whether keys and memories are normalized before computing their dot product (similarity); see Match memories by field for additional details.
- Type
bool
- softmax_gain¶
determines the gain (inverse temperature) used for softmax normalizing the dot products of keys and memories by the softmax function of the softmax_nodes; see Softmax normalize matches over fields for additional details.
- Type
CONTROL
- storage_prob¶
determines the probability that an item will be stored in memory when the EMComposition is executed (see Retrieval and Storage for additional details).
- Type
float
- memory_capacity¶
determines the number of items that can be stored in memory; see memory_capacity for additional details.
- Type
int
- memory_decay_rate¶
determines the rate at which items in the EMComposition’s memory decay (see memory_decay_rate for details).
- Type
float
- key_input_nodes¶
INPUT Nodes that receive the keys used to determine the item to be retrieved from memory, and that are then themselves stored in memory (see Match memories by field for additional details). By default these are assigned the name KEY_n_INPUT, where n is the field number (starting from 0); however, if field_names is specified, then the name of each key_input_node is assigned the corresponding field name.
- Type
list[TransferMechanism]
- value_input_nodes¶
INPUT Nodes that receive the values to be stored in memory; these are not used in the matching process used for retrieval. By default these are assigned the name VALUE_n_INPUT, where n is the field number (starting from 0); however, if field_names is specified, then the name of each value_input_node is assigned the corresponding field name.
- Type
list[TransferMechanism]
- concatenate_keys_node¶
TransferMechanism that concatenates the inputs to key_input_nodes into a single vector used for the matching process if concatenate_keys is True. This is not created if the concatenate_keys argument to the EMComposition’s constructor is False or is overridden (see concatenate_keys), or if there is only one key_input_node.
- Type
TransferMechanism
- match_nodes¶
TransferMechanisms that receive the dot product of each key and those stored in the corresponding field of memory (see Match memories by field for additional details). These are assigned names that prepend MATCH_n to the name of the corresponding key_input_nodes.
- Type
list[TransferMechanism]
- softmax_control_nodes¶
ControlMechanisms that adaptively control the softmax_gain for the corresponding softmax_nodes. These are implemented only if softmax_gain is specified as CONTROL (see softmax_gain for details).
- Type
list[ControlMechanism]
- softmax_nodes¶
TransferMechanisms that compute the softmax over the vectors received from the corresponding match_nodes (see Softmax normalize matches over fields for additional details).
- Type
list[TransferMechanism]
- retrieval_gating_nodes¶
GatingMechanisms that use the field weight for each field to modulate the output of the corresponding retrieval_node before it is passed to the retrieval_weighting_node. These are implemented only if more than one key field is specified (see Fields for additional details).
- Type
list[GatingMechanism]
- retrieval_weighting_node¶
TransferMechanism that receives the softmax-normalized dot products of the keys and memories from the softmax_nodes, weights these using field_weights, and Hadamard sums those weighted dot products to produce a single weighting for each memory.
- Type
TransferMechanism
- retrieval_nodes¶
TransferMechanisms that receive the vector retrieved for each field in memory (see Retrieve values by field for additional details); these are assigned the same names as the key_input_nodes and value_input_nodes to which they correspond, appended with the suffix _RETRIEVAL.
- Type
list[TransferMechanism]
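The retrieval pass performed by the Nodes above (dot-product match, softmax normalization with a gain, field-weighted summation, and weighted retrieval) can be sketched in plain Python. This is a simplified, hypothetical re-implementation of the operations described here, not the PsyNeuLink API; memory is represented as a nested list rather than Projection weight matrices:

```python
import math

def retrieve(key_inputs, memory, field_weights, softmax_gain=1.0):
    """Sketch of the EMComposition retrieval pass.

    key_inputs:    list of key vectors, one per key field
    memory:        list of entries; each entry is a list of field
                   vectors (key fields first, then value fields)
    field_weights: one weight per key field
    """
    n_entries = len(memory)

    # Match memories by field: dot product of each key with the
    # corresponding field of every entry (match_nodes).
    matches = []
    for f, key in enumerate(key_inputs):
        matches.append([sum(k * m for k, m in zip(key, entry[f]))
                        for entry in memory])

    # Softmax-normalize each field's matches (softmax_nodes);
    # softmax_gain acts as an inverse temperature.
    softmaxes = []
    for row in matches:
        exps = [math.exp(softmax_gain * x) for x in row]
        total = sum(exps)
        softmaxes.append([e / total for e in exps])

    # Weight each field's softmax by its field_weight and sum across
    # fields (retrieval_weighting_node): one weight per memory entry.
    entry_weights = [sum(w * sm[i] for w, sm in zip(field_weights, softmaxes))
                     for i in range(n_entries)]

    # Retrieve each field as the entry-weighted sum over memories
    # (retrieval_nodes).
    n_fields = len(memory[0])
    retrieved = []
    for f in range(n_fields):
        length = len(memory[0][f])
        retrieved.append([sum(entry_weights[i] * memory[i][f][j]
                              for i in range(n_entries))
                          for j in range(length)])
    return retrieved
```

With a high gain, retrieval approaches a hard lookup of the best-matching entry; with a low gain, it blends entries in proportion to their similarity to the keys.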
- storage_node¶
EMStorageMechanism that receives inputs from the key_input_nodes and value_input_nodes, and stores these in the corresponding fields of memory with probability storage_prob after a retrieval has been made (see Retrieval and Storage for additional details). The storage_node is assigned a Condition to execute after the retrieval_nodes have executed, to ensure that storage occurs after retrieval, but before any subsequent processing is done (i.e., in a Composition in which the EMComposition may be embedded).
- Type
EMStorageMechanism
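The storage step described above (decay of existing entries, then probabilistic storage of the current inputs) can be sketched as follows. This is a hypothetical simplification: the actual EMStorageMechanism writes entries into the rows of Projection weight matrices and replaces rows up to memory_capacity, whereas this sketch simply appends to a list:

```python
import random

def store(memory, entry, storage_prob=1.0, decay_rate=0.0, rng=random):
    """Sketch of EMStorageMechanism storage.

    memory: list of entries (each a list of field vectors)
    entry:  list of field vectors (current key and value inputs)
    """
    # Apply memory decay to every existing entry.
    if decay_rate:
        memory = [[[(1.0 - decay_rate) * x for x in field] for field in e]
                  for e in memory]
    # Store the new entry with probability storage_prob.
    if rng.random() < storage_prob:
        memory.append(entry)
    return memory
```

Capacity handling is omitted here; per the constructor methods below, the row to overwrite is selected randomly without replacement from (0->memory_capacity).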
- _validate_memory_specs(memory_template, memory_capacity, memory_fill, field_weights, field_names, name)¶
Validate the memory_template, field_weights, and field_names arguments
- _parse_memory_template(memory_template, memory_capacity, memory_fill, field_weights)¶
Construct memory from memory_template and memory_fill. Assign self.memory_template and self.entry_template attributes.
- Return type
(<class ‘numpy.ndarray’>, <class ‘int’>)
- _parse_memory_shape(memory_template)¶
Parse shape of memory_template to determine number of entries and fields
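A minimal sketch of this shape parsing, assuming memory_template is a nested list that is either a single entry (fields x field-length) or a full memory (entries x fields x field-length); this is an illustrative helper, not the actual implementation:

```python
def parse_memory_shape(memory_template):
    """Infer (num_entries, num_fields) from a memory_template.

    A single-entry template has scalars at nesting depth 2;
    a full-memory template has scalars at depth 3.
    """
    is_single_entry = not isinstance(memory_template[0][0], (list, tuple))
    if is_single_entry:
        # One entry: its top-level elements are the fields.
        return 1, len(memory_template)
    # Full memory: top level is entries, second level is fields.
    return len(memory_template), len(memory_template[0])
```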
- _construct_pathway(memory_template, memory_capacity, memory_decay_rate, field_weights, concatenate_keys, normalize_memories, softmax_gain, storage_prob)¶
Construct pathway for EMComposition
- Return type
set
- _construct_key_input_nodes(field_weights)¶
Create one node for each key to be used as a cue for retrieval (and then stored) in memory. Used to assign a new set of weights for the Projection from key_input_node[i] to match_node[i], where i is selected randomly without replacement from (0->memory_capacity).
- Return type
list
- _construct_value_input_nodes(field_weights)¶
Create one input node for each value to be stored in memory. Used to assign a new set of weights for the Projection from retrieval_weighting_node to retrieval_node[i], where i is selected randomly without replacement from (0->memory_capacity).
- Return type
list
- _construct_concatenate_keys_node(concatenate_keys)¶
Create node that concatenates the inputs for all keys into a single vector. Used to create the matrix of match/memory weights for the Projection from the concatenate_keys_node to the match_node.
- Return type
- _construct_match_nodes(memory_template, memory_capacity, concatenate_keys, normalize_memories)¶
Create nodes that, for each key field, compute the similarity between the input and each item in memory.
- If self.concatenate_keys is True, then all key inputs from the concatenate_keys_node are assigned a single match_node, and weights from memory_template are assigned to a Projection from the concatenate_keys_node to that match_node.
- Otherwise, each key has its own match_node, and weights from memory_template are assigned to a Projection from each key_input_node[i] to each match_node[i].
Each element of the output represents the similarity between the key_input and one item in memory.
- Return type
list
- _construct_softmax_nodes(memory_capacity, field_weights, softmax_gain)¶
Create nodes that, for each key field, compute the softmax over the similarities between the input and the memories in the corresponding match_node.
- Return type
list
- _construct_softmax_control_nodes(softmax_gain)¶
Create nodes that set the softmax gain (inverse temperature) for each softmax_node.
- Return type
list
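The role of the gain as an inverse temperature can be illustrated with a small sketch (a hypothetical helper, not the PsyNeuLink softmax Function): higher gain concentrates the distribution on the best match, lower gain spreads it across memories.

```python
import math

def softmax(x, gain=1.0):
    """Softmax with a gain (inverse temperature) parameter."""
    exps = [math.exp(gain * v) for v in x]
    total = sum(exps)
    return [e / total for e in exps]
```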
- _construct_retrieval_gating_nodes(field_weights, concatenate_keys)¶
Create GatingMechanisms that weight each key’s contribution to the retrieved values.
- Return type
list
- _construct_retrieval_weighting_node(memory_capacity)¶
Create node that computes the weighting of each item in memory.
- Return type
- _construct_retrieval_nodes(memory_template)¶
Create nodes that report the value field(s) for the item(s) matched in memory.
- Return type
list
- _construct_storage_node(memory_template, field_weights, concatenate_keys, memory_decay_rate, storage_prob)¶
Create EMStorageMechanism that stores the key and value inputs in memory. Memories are stored by adding the current input for each field to the corresponding row of the matrix for the Projection from the key_input_node to the match_node and retrieval_node for keys, and from the value_input_node to the retrieval_node for values. The function of the EMStorageMechanism takes the following arguments:
- fields – the input_nodes for the corresponding fields of an entry in memory;
- field_types – a list of the same length as fields, containing 1's for key fields and 0's for value fields;
- memory_matrix – memory_template;
- learning_signals – list of MappingProjections (or their ParameterPorts) that store each field of memory;
- decay_rate – rate at which entries in the memory_matrix decay;
- storage_prob – probability of storing an entry in memory.
- Return type
list
- _set_learning_attributes()¶
Set learning-related attributes for Nodes and Projections.
- _store_memory(inputs, context)¶
Store inputs in memory as weights of Projections to softmax_nodes (keys) and retrieval_nodes (values).
- _encode_memory(context=None)¶
Encode inputs as memories. For each node in key_input_nodes and value_input_nodes, assign its value to the afferent weights of the corresponding retrieval_node.
- memory = key_input or value_input
- memories = weights of Projections for each field
- learn()¶
Runs the composition in learning mode; that is, any Components for which disable_learning is False will be executed in learning mode. See Learning in a Composition for details.
- Parameters
inputs ({Node:list }) –
a dictionary containing a key-value pair for each Node (Mechanism or Composition) in the composition that receives inputs from the user. There are two equally valid ways this dict can be structured:
For each pair, the key is the Node and the value is an input, the shape of which must match the Node's default variable. This is identical to the input dict in the run method (see Input Dictionary for additional details).
A dict with keys 'inputs', 'targets', and 'epochs'. The inputs key stores a dict with the same structure as input specification (1) of learn. The targets and epochs keys should contain values of the same form as the targets and epochs arguments, respectively.
targets ({Node:list }) – a dictionary containing a key-value pair for each Node in the Composition that receives target values as input to the Composition for training learning pathways. The key of each entry can be either the TARGET_MECHANISM for a learning pathway or the final Node in that Pathway, and the value is the target value used for that Node on each trial (see target inputs for additional details concerning the formatting of targets).
num_trials (int (default=None)) – typically, the Composition infers the number of trials to execute from the length of its input specification. However, num_trials can be used to enforce an exact number of trials to execute; if it is greater than the number of inputs, the inputs will be repeated (see Composition Inputs for additional information).
epochs (int (default=1)) – specifies the number of training epochs (that is, repetitions of the batched input set) to run.
learning_rate (float : default None) – specifies the learning_rate used by all learning pathways when the Composition's learn method is called. This overrides the learning_rate specified for any individual Pathways at construction, but only applies for the current execution of the learn method.
minibatch_size (int (default=1)) – specifies the size of the minibatches to use. The input trials will be batched and run, after which learning mechanisms with learning mode TRIAL will update their weights.
randomize_minibatch (bool (default=False)) – specifies whether the order of the input trials should be randomized on each epoch.
patience (int or None (default=None)) – used for early stopping of training; if the model has more than patience bad consecutive epochs, then learn will prematurely return. A bad epoch is determined by the min_delta value.
min_delta (float (default=0)) – the minimum reduction in average loss that an epoch must provide in order to qualify as a 'good' epoch; any reduction less than this value is considered a bad epoch. Used for early stopping of training, in combination with patience.
scheduler (Scheduler) – the scheduler object that owns the conditions that will instruct the execution of the Composition. If not specified, the Composition will use its automatically generated scheduler.
context – context will be set to self.default_execution_id if unspecified
call_before_minibatch (callable) – called before each minibatch is executed
call_after_minibatch (callable) – called after each minibatch is executed
report_output (ReportOutput : default ReportOutput.OFF) – specifies whether to show output of the Composition and its Nodes trial-by-trial as it is generated; see Output Reporting for additional details and ReportOutput for options.
report_params (ReportParams : default ReportParams.OFF) – specifies whether to show the values of the Parameters of the Composition and its Nodes as part of the output report; see Output Reporting for additional details and ReportParams for options.
report_progress (ReportProgress : default ReportProgress.OFF) – specifies whether to report progress of execution in real time; see Progress Reporting for additional details.
report_simulations (ReportSimulations : default ReportSimulations.OFF) – specifies whether to show output and/or progress for simulations executed by the Composition's controller; see Simulations for additional details.
report_to_devices (list(ReportDevices) : default ReportDevices.CONSOLE) – specifies where output and progress should be reported; see Report_To_Device for additional details and ReportDevices for options.
- Returns
the results of the last trial of training (list)
Note: the results of the final epoch of training are stored in the Composition's learning_results attribute.
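The patience / min_delta early-stopping rule described by the parameters above can be sketched as follows (a hypothetical helper for illustration, not part of the learn API):

```python
def should_stop_early(epoch_losses, patience, min_delta=0.0):
    """Return True if training should stop early.

    An epoch is 'bad' when it fails to reduce the average loss by
    more than min_delta; training stops once there have been more
    than `patience` consecutive bad epochs.
    """
    bad_streak = 0
    for prev, curr in zip(epoch_losses, epoch_losses[1:]):
        if prev - curr <= min_delta:   # insufficient improvement
            bad_streak += 1
            if bad_streak > patience:
                return True
        else:                          # good epoch resets the streak
            bad_streak = 0
    return False
```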
- get_output_values(context=None)¶
Override to provide an ordering of retrieval_nodes that matches the order of the inputs. This is needed since the Nodes were constructed as sets.