
EMComposition¶

Related

  • AutodiffComposition

  • Learning in a Composition

  • EpisodicMemoryMechanism

  • ContentAddressableMemory

Contents¶

  • Overview
    • Organization

    • Operation

  • Creation
    • Memory

    • Capacity

    • Fields

    • Storage and Retrieval

    • Learning

  • Structure
    • Input

    • Memory

    • Output

  • Execution
    • Processing

    • Learning

  • Examples
    • Memory Template and Fill

    • Field Weights

  • Class Reference

Overview¶

The EMComposition implements a configurable, content-addressable form of episodic (or external) memory. It emulates an EpisodicMemoryMechanism – reproducing all of the functionality of its ContentAddressableMemory Function – in the form of an AutodiffComposition. This allows it to backpropagate error signals based on retrieved values to its inputs, and to learn how to differentially weight the cues (queries) used for retrieval. It also adds the capability for memory_decay. In these respects, it implements a variant of a Modern Hopfield Network, as well as some of the features of a Transformer.

The memory of an EMComposition is configured using two arguments of its constructor: the memory_template argument, that defines the overall structure of its memory (the number of fields in each entry, the length of each field, and the number of entries); and the fields argument, that defines which fields are used as cues for retrieval (i.e., as “keys”), including whether and how they are weighted in the match process used for retrieval, which fields are treated as “values” that are stored and retrieved but not used by the match process, and which are involved in learning. The inputs to an EMComposition, corresponding to its keys and values, are assigned to each of its INPUT Nodes: inputs to be matched to keys (i.e., used as “queries”) are assigned to its query_input_nodes; and the remaining inputs are assigned to its value_input_nodes. When the EMComposition is executed, the retrieved values for all fields are returned as the result, and recorded in its results attribute. The value for each field is assigned as the value of its OUTPUT Nodes. The input is then stored in its memory, with a probability determined by its storage_prob Parameter, and all previous memories decayed by its memory_decay_rate. The memory can be accessed using its memory Parameter.

The memories of an EMComposition are actually stored in the matrix Parameter of a set of MappingProjections (see note below). The memory Parameter compiles and formats these as a single 3d array, the rows of which (axis 0) are each entry, the columns of which (axis 1) are the fields of each entry, and the items of which (axis 2) are the values of each field (see EMComposition_Memory_Configuration for additional details).

Organization

Entries and Fields. Each entry in memory can have an arbitrary number of fields, and each field can have an arbitrary length. However, all entries must have the same number of fields, and the corresponding fields must all have the same length across entries. Each field is treated as a separate “channel” for storage and retrieval, and is associated with its own corresponding input (key or value) and output (retrieved value) Node, some or all of which can be used to compute the similarity of the input (key) to entries in memory that is used for retrieval. Fields can be differentially weighted to determine the influence they have on retrieval, using the field_weights parameter (see retrieval below). The number and shape of the fields in each entry are specified in the memory_template argument of the EMComposition’s constructor (see memory_template). Which fields are treated as keys (i.e., matched against queries during retrieval) and which are treated as values (i.e., retrieved but not used for matching) is specified in the field_weights argument of the EMComposition’s constructor (see field_weights).

Operation

Retrieval. The values retrieved from memory (one for each field) are based on the relative similarity of the queries to the entries in memory, computed as the distance between each query and the values in the corresponding field of each entry in memory. By default, for queries and keys that are vectors, normalized dot products (comparable to cosine similarity) are used to compute the similarity of each query to each key in memory; if they are scalars, the L0 norm is used. These distances are then weighted by the corresponding field_weights for each field (if specified) and summed, and the sum is softmaxed to produce a softmax distribution over the entries in memory. That distribution is then used to generate a softmax-weighted average of the retrieved values across all fields, which is returned as the result of the EMComposition’s execution. (An EMComposition can also be configured to return the exact entry with the lowest distance (weighted by field); however, that configuration is not compatible with learning; see softmax_choice.)
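
The following is a minimal numpy sketch of this default retrieval computation (normalized dot products per key field, a weighted sum of the distances, a softmax over entries, and a softmax-weighted average per field). The array names and shapes here are illustrative assumptions, not part of the EMComposition API:

import numpy as np

rng = np.random.default_rng(0)
memory = rng.random((10, 3, 5))           # 10 entries x 3 fields, each of length 5
queries = [rng.random(5), rng.random(5)]  # one query per key field (fields 0 and 1)
field_weights = np.array([0.75, 0.25])    # weights for the two key fields
gain = 1.0                                # softmax_gain

def norm(v):
    return v / np.linalg.norm(v)

# normalized dot product of each query with the corresponding key field of each entry
matches = np.array([[norm(q) @ norm(memory[i, f]) for i in range(10)]
                    for f, q in enumerate(queries)])
combined = field_weights @ matches                   # weight and sum across key fields
softmax = np.exp(gain * combined)
softmax /= softmax.sum()                             # softmax distribution over entries
retrieved = np.einsum('i,ifj->fj', softmax, memory)  # weighted average per field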

Storage. The inputs to the EMComposition’s fields are stored in memory after each execution, with a probability determined by storage_prob. If memory_decay_rate is specified, then the memory is decayed by that amount after each execution. If memory_capacity has been reached, then each new memory replaces the weakest entry (i.e., the one with the smallest norm across all of its fields) in memory.

Creation¶

An EMComposition is created by calling its constructor. There are four major elements that can be configured: the structure of its memory; the fields for the entries in memory; how storage and retrieval operate; and whether and how learning is carried out.

Memory Specification¶

These arguments are used to specify the shape and number of memory entries.

  • memory_template: This specifies the shape of the entries to be stored in the EMComposition’s memory, and can be used to initialize memory with pre-specified entries. The memory_template argument can be specified in one of three ways (see Examples for representative use cases):

    • tuple: interpreted as an np.array shape specification, that must be of length 2 or 3. If it is a 3-item tuple, then the first item specifies the number of entries in memory, the 2nd the number of fields in each entry, and the 3rd the length of each field. If it is a 2-item tuple, this specifies the shape of an entry, and the number of entries is specified by memory_capacity. All entries are filled with zeros or the value specified by memory_fill.

      Warning

      If memory_template is specified with a 3-item tuple and memory_capacity is also specified with a value that does not match the first item of memory_template, an error is generated indicating the conflict in the number of entries specified.

      Hint

      To specify a single field, a list or array must be used (see below), since a 2-item tuple is interpreted as specifying the shape of an entry, and so cannot be used to specify the number of entries in a memory in which each entry has a single field.

    • 2d list or array: interpreted as a template for memory entries. This can be used to specify fields of different lengths (i.e., entries that are ragged arrays), with each item in the list (axis 0 of the array) used to specify the length of the corresponding field. The template is then used to initialize all entries in memory. If the template includes any non-zero elements, then the array is replicated for all entries in memory; otherwise, the entries are filled with either zeros or the value specified in memory_fill.

      Hint

      To specify a single entry, with all other entries filled with zeros or the value specified in memory_fill, use a 3d array as described below.

    • 3d list or array: used to initialize memory directly with the entries specified in the outer dimension (axis 0) of the list or array. If memory_capacity is not specified, then it is set to the number of entries in the list or array. If memory_capacity is specified, then the number of entries specified in memory_template must be less than or equal to memory_capacity. If it is less than memory_capacity, then the remaining entries in memory are filled with zeros or the value specified in memory_fill (see below): if all of the entries specified contain only zeros and memory_fill is specified, then the matrix is filled with the value specified in memory_fill; otherwise, zeros are used to fill all entries.

  • memory_fill: specifies the value used to fill the memory, based on the shape specified in the memory_template (see above). The value can be a scalar, or a tuple that specifies an interval over which to draw random values to fill memory; both elements must be scalars, with the first specifying the lower bound and the second the upper bound. If memory_fill is not specified, and no entries are specified in memory_template, then memory is filled with zeros.

    Hint

    If memory is initialized with all zeros and normalize_memories is set to True (see below), then a numpy.linalg warning is issued about division by zero. This can be ignored, as it does not affect the results of execution, but it can be averted by specifying memory_fill to use small random values (e.g., memory_fill=(0,.001)).

  • memory_capacity: specifies the number of items that can be stored in the EMComposition’s memory; when memory_capacity is reached, each new entry overwrites the weakest entry (i.e., the one with the smallest norm across all of its fields) in memory. If memory_template is specified as a 3-item tuple or 3d list or array (see above), then that is used to determine memory_capacity (if it is specified and conflicts with either of those an error is generated). Otherwise, it can be specified using a numerical value, with a default of 1000. The memory_capacity cannot be modified once the EMComposition has been constructed.

Fields¶

These arguments are used to specify the names of the fields in a memory entry, which fields are used as keys and values, how keys are weighted for retrieval, whether those weights are learned, and which fields are used in computing the error that is propagated through the EMComposition.

  • fields: a dict that specifies the names of the fields and their attributes. There must be an entry for each field specified in the memory_template, and each entry must have the following format:

    • key: a string that specifies the name of the field.

    • value: a dict or tuple with three entries; if a dict, the key to each entry must be the keyword specified below, and if a tuple, the entries must appear in the following order:

      • FIELD_WEIGHT specification - value must be a scalar or None. If it is a scalar, the field is treated as a retrieval key in memory that is weighted by that value during retrieval; if None, it is treated as a value in memory and the field cannot be reconfigured later.

      • LEARN_FIELD_WEIGHT specification - value must be a boolean or a float; if False, the field_weight for that field is not learned; if True, the field weight is learned using the EMComposition’s learning_rate; if a float, that is used as its learning_rate.

      • TARGET_FIELD specification - value must be a boolean; if True, the value of the retrieved_node for that field contributes to the error computed during learning and backpropagated through the EMComposition (see Backpropagation of error); if False, the retrieved value for that field does not contribute to the error; however, its field_weight can still be learned if that is specified in learn_field_weight.

    The specifications provided in the fields argument are assigned to the corresponding Parameters of the EMComposition which, alternatively, can be specified individually using the field_names, field_weights, learn_field_weights and target_fields arguments of the EMComposition’s constructor, as described below. However, the fields argument cannot be used together with those arguments; doing so raises an error. A sketch of the fields format is shown below.
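
    For example, the following is a minimal sketch of the fields format for a three-field memory, using the tuple form of specification (the field names and values here are illustrative):

    >>> from psyneulink import EMComposition
    >>> em = EMComposition(
    ...     memory_template=(4, 3, 5),
    ...     fields={'color':  (1, True, True),       # key; learnable weight; contributes to error
    ...             'shape':  (1, False, True),      # key; fixed weight; contributes to error
    ...             'reward': (None, False, False)})  # value; not matched; no error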

  • field_names: a list that specifies names that can be assigned to the fields. The number of names specified must match the number of fields specified in the memory_template. If specified, the names are used to label the nodes of the EMComposition; otherwise, the fields are labeled generically as “Key 0”, “Key 1”, and “Value 1”, “Value 2”, etc.

  • field_weights: specifies which fields are used as keys, and how they are weighted during retrieval. Fields designated as keys are used to match inputs (queries) against entries in memory for retrieval (see Match memories by field); entries designated as values are ignored during the matching process, but their values in memory are retrieved and assigned as the value of the corresponding retrieved_node. This distinction between keys and values corresponds to the format of a standard “dictionary,” though in that case only a single key and value are allowed, whereas in an EMComposition there can be one or more keys and any number of values; if all fields are keys, this implements a full form of content-addressable memory. The following options can be used to specify field_weights:

    • None (the default): all fields except the last are treated as keys, and are assigned a weight of 1, while the last field is treated as a value field (the same as assigning it None in a list or tuple; see below).

    • scalar: all fields are treated as keys (i.e., used for retrieval) and weighted equally for retrieval. If normalize_field_weights is True, the value is divided by the number of keys, whereas if normalize_field_weights is False, then the value specified is used to weight the retrieval of all keys with that value.

      Note

      At present these have the same result, since the SoftMax function is used to normalize the match between queries and keys. However, other retrieval functions could be used in the future that would be affected by the value of the field_weights. Therefore, it is recommended to leave normalize_field_weights set to True (the default) to ensure that the field_weights are normalized to sum to 1.0.

    • list or tuple: the number of entries must match the number of fields specified in memory_template, and all entries must be either 0, a positive scalar value, or None. If all entries are identical, they are treated as if a single value was specified (see above); if the entries are non-identical, any entries that are not None are used to weight the corresponding fields during retrieval (see Weight fields), including those that are 0 (though these will not be used in the retrieval process unless/until they are changed to a positive value). If normalize_field_weights is True, all non-None entries are normalized so that they sum to 1.0; if False, the raw values are used to weight the retrieval of the corresponding fields. All entries of None are treated as value fields, are not assigned a field_weight_node, and are ignored during retrieval. These cannot be modified after the EMComposition has been constructed (see note below).

    Note

    The field_weights can be modified after the EMComposition has been constructed, by assigning a new set of weights to its field_weights Parameter. However, only field_weights associated with key fields (i.e., that were initially assigned non-zero field_weights) can be modified; the weights for value fields (i.e., ones that were initially assigned a field_weight of None) cannot be modified, and doing so raises an error. If a field that will be used initially as a value may later need to be used as a key, it should be assigned a field_weight of 0 at construction (rather than None), which can then later be changed as needed.

    The reason that field_weights can be modified only for keys is that field_weight_nodes are constructed only for keys, since ones for values would have no effect on the retrieval process and therefore are unnecessary (and can be misleading). The example below sketches this pattern.
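
    For example, a minimal sketch of this pattern (the exact attribute-assignment syntax shown is an assumption; the weights can also be set through the EMComposition’s Parameters interface):

    >>> from psyneulink import EMComposition
    >>> # assign 0 (rather than None) to a field that may later be needed as a key
    >>> em = EMComposition(memory_template=(4, 3, 5), field_weights=[1, 0, None])
    >>> # later, promote the second field to an active key by giving it a non-zero weight
    >>> em.field_weights = [0.75, 0.25, None]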

  • learn_field_weights: if enable_learning is True, this specifies which field_weights are subject to learning, and optionally the learning_rate for each (see learn_field_weights below for details of specification).

  • normalize_field_weights: specifies whether the field_weights are normalized or their raw values are used. If True, the values of all non-None field_weights are normalized so that they sum to 1.0, and the normalized values are used to weight (i.e., multiply) the corresponding fields during retrieval (see Weight fields). If False, the raw values of the field_weights are used to weight the retrieved value of each field. This setting is ignored if field_weights is None or concatenate_queries is True.

  • concatenate_queries: specifies whether keys are concatenated before a match is made to items in memory. This is False by default. It is also ignored if the field_weights for all keys are not equal (i.e., all non-zero weights are not equal; see field_weights) and/or normalize_memories is set to False. Setting concatenate_queries to True in either of those cases issues a warning, and the setting is ignored. If the key field_weights (i.e., all non-zero values) are all equal and normalize_memories is set to True, then setting concatenate_queries causes a concatenate_queries_node to be created that receives input from all of the query_input_nodes and passes them as a single vector to the match_node.

    Note

    While this is computationally more efficient, it can affect the outcome of the matching process, since computing the distance of a single vector composed of the concatenated inputs is not identical to computing the distance of each field independently and then combining the results (see the sketch below).

    Note

    All query_input_nodes and retrieved_nodes are always preserved, even when concatenate_queries is True, so that separate inputs can be provided for each key, and the value of each key can be retrieved separately.
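
    A small numpy sketch of why the results can differ (the values are illustrative): the sum of per-field normalized dot products is not, in general, equal to the single normalized dot product over the concatenated vectors:

    import numpy as np

    def norm(v):
        return v / np.linalg.norm(v)

    q1, q2 = np.array([1., 0.]), np.array([0., 1.])  # queries for two key fields
    k1, k2 = np.array([1., 1.]), np.array([1., 0.])  # keys of one entry in memory

    per_field = norm(q1) @ norm(k1) + norm(q2) @ norm(k2)                           # ~0.707
    concatenated = norm(np.concatenate([q1, q2])) @ norm(np.concatenate([k1, k2]))  # ~0.408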

Retrieval and Storage¶

  • storage_prob: specifies the probability that the inputs to the EMComposition will be stored as an item in memory on each execution.

  • normalize_memories: specifies whether queries and keys in memory are normalized before computing their dot products.

  • softmax_gain: specifies the gain (inverse temperature) used for softmax normalizing the combined distances used for retrieval (see Execution below). The following options can be used:

    • numeric value: the value is used as the gain of the SoftMax Function for the EMComposition’s softmax_node.

    • ADAPTIVE: the adapt_gain method of the SoftMax Function is used to adaptively set the softmax_gain based on the entropy of the distances, in order to preserve the distribution over non-zero (or near-zero) entries irrespective of how many near-zero entries there are (see Thresholding and Adaptive Gain for additional details).

    • CONTROL: a ControlMechanism is created, and its ControlSignal is used to modulate the softmax_gain parameter of the SoftMax function of the EMComposition’s softmax_node.

    If None is specified, the default value for the SoftMax function is used.

  • softmax_threshold: if this is specified, and softmax_gain is specified with a numeric value, then any values below the specified threshold are set to 0 before the distances are softmaxed (see mask_threshold under Thresholding and Adaptive Gain for additional details).

  • softmax_choice: specifies how the SoftMax Function of the EMComposition’s softmax_node is used, with the combined distances, to generate a retrieved item; the following are the options that can be used and the retrieved value they produce:

    • WEIGHTED_AVG (default): softmax-weighted average based on combined distances of queries and keys in memory.

    • ARG_MAX: the entry with the smallest distance (or, if there are identical ones, the one with the lowest index in memory).

    • PROBABILISTIC: an entry chosen probabilistically, based on the softmax-transformed distribution of the combined distances.

    Warning

    Use of the ARG_MAX and PROBABILISTIC options is not compatible with learning, as these implement a discrete choice and thus are not differentiable. Constructing an EMComposition with softmax_choice set to either of these options and learn_field_weights set to True (or a list with any True entries) will generate a warning, and calling the EMComposition’s learn method will generate an error; it must be changed to WEIGHTED_AVG to execute learning.

    The WEIGHTED_AVG option is passed as ALL to the output argument of the SoftMax Function, ARG_MAX is passed as ARG_MAX_INDICATOR, and PROBABILISTIC is passed as PROB_INDICATOR; the other SoftMax options are not currently supported.

  • memory_decay_rate: specifies the rate at which items in the EMComposition’s memory decay; the default rate is AUTO, which sets it to 1 / memory_capacity, such that the oldest memories are the most likely to be replaced when memory_capacity is reached. If memory_decay_rate is set to 0, None or False, then memories do not decay and, when memory_capacity is reached, the weakest memories are replaced, irrespective of order of entry.

  • purge_by_field_weight: specifies whether field_weights are used in determining which memory entry is replaced when a new memory is stored. If True, the norm of each entry is multiplied by its field_weight to determine which entry is the weakest and will be replaced.
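
As a sketch of how several of these options fit together in the constructor (the argument values are illustrative, and assume the ADAPTIVE keyword is importable from the top-level namespace, as other keywords are):

>>> from psyneulink import EMComposition, ADAPTIVE
>>> em = EMComposition(
...     memory_template=(50, 2, 5),
...     softmax_gain=ADAPTIVE,     # adapt gain to the entropy of the distances
...     softmax_threshold=0.001,   # mask values below threshold before the softmax
...     storage_prob=0.8,          # store the inputs on 80% of executions
...     memory_decay_rate=0)       # memories do not decay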

Learning¶

EMComposition supports two forms of learning: error backpropagation through the entire Composition, and the learning of field_weights within it. Learning is enabled by setting the enable_learning argument of the EMComposition’s constructor to True, and optionally specifying the learn_field_weights argument (as detailed below). If enable_learning is False, no learning of any kind occurs; if it is True, then both forms of learning are enabled.

Backpropagation of error. If enable_learning is True, then the values retrieved from memory when the EMComposition is executed during learning can be used for error computation and backpropagation through the EMComposition to its inputs. By default, the values of all of its retrieved_nodes are included. For those that do not project to an outer Composition (i.e., one in which the EMComposition is nested), a TARGET node is constructed for each, and used to compute errors that are backpropagated through the network to its query_input_nodes and value_input_nodes, and on to any nodes that project to those from a Composition within which the EMComposition is nested. Retrieved_nodes that do project to an outer Composition receive their errors from those nodes, which are also backpropagated through the EMComposition. Fields can be selectively specified for learning in the fields argument or the target_fields argument of the EMComposition’s constructor, as detailed below.

Field Weight Learning. If enable_learning is True, then the field_weights can be learned, by specifying these either in the fields argument or the learn_field_weights argument of the EMComposition’s constructor, as detailed below. Learning field_weights implements a function comparable to the learning in an attention head of the Transformer architecture, although at present the field_weights can only be scalar values rather than vectors or matrices, and they cannot receive input. These capabilities will be added in the future.

The following arguments of the EMComposition’s constructor can be used to configure learning:

  • enable_learning: specifies whether any learning is enabled for the EMComposition. If False, no learning occurs; if True, then both error backpropagation and learning of field_weights can occur. If enable_learning is True, use_gating_for_weighting must be False (see note).

  • target_fields: specifies which retrieved_nodes are used to compute errors, and propagate these back through the EMComposition to its query_input_nodes and value_input_nodes. If this is None (the default), all retrieved_nodes are used; if it is a list or tuple, then it must have the same number of entries as there are fields, and each entry must be a boolean specifying whether the corresponding retrieved_node participates in learning; errors are computed only for those nodes. This can also be specified in a dict for the fields argument (see fields).

  • learn_field_weights: specifies which field_weights are subject to learning, and optionally the learning_rate for each; this can also be specified in a dict for the fields argument (see fields). The following specifications can be used:

    • None: all field_weights are subject to learning, and the learning_rate for the EMComposition is used as the learning_rate for all field_weights.

    • bool: If True, all field_weights are subject to learning, and the learning_rate for the EMComposition is used as the learning rate for all field_weights; if False, no field_weights are subject to learning, regardless of enable_learning.

    • list or tuple: must be the same length as the number of fields specified in the memory_template, and each entry must be either True, False or a positive scalar value. If True, the corresponding field_weight is subject to learning and the learning_rate for the EMComposition is used to specify the learning rate for that field; if False, the corresponding field_weight is not subject to learning; if a scalar value is specified, it is used as the learning_rate for that field.

  • learning_rate: specifies the learning_rate for any field_weights for which a learning_rate is not individually specified in the learn_field_weights argument (see above).
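
Putting these together, the following is a sketch of a learnable configuration (the argument values are illustrative):

>>> from psyneulink import EMComposition
>>> em = EMComposition(
...     memory_template=(10, 3, 5),
...     field_weights=[1, 1, None],               # two key fields and one value field
...     enable_learning=True,                     # backprop and field_weight learning
...     learn_field_weights=[True, 0.05, False],  # default rate; explicit rate; not learned
...     learning_rate=0.01,                       # default rate for field_weights
...     target_fields=[True, True, False])        # last field does not contribute to error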

Structure¶

Input¶

The inputs corresponding to each key and value field are represented as INPUT Nodes of the EMComposition, listed in its query_input_nodes and value_input_nodes attributes, respectively.

Memory¶

The memory attribute contains a record of the entries in the EMComposition’s memory. This is in the form of a 3d array, in which rows (axis 0) are entries, columns (axis 1) are fields, and items (axis 2) are the values of an entry in a given field. The number of fields is determined by the memory_template argument of the EMComposition’s constructor, and the number of entries is determined by the memory_capacity argument. Information about the fields is stored in the fields attribute, which is a list of Field objects containing information about the nodes and values associated with each field.

The memories are actually stored in the matrix parameters of the MappingProjections from the combined_matches_node to each of the retrieved_nodes. Memories associated with each key are also stored (in inverted form) in the matrix parameters of the MappingProjection from the query_input_nodes to each of the corresponding match_nodes. This is done so that the match of each query to the keys in memory for the corresponding field can be computed simply by passing the input for each query through the Projection (which computes the distance of the input with the Projection’s matrix parameter) to the corresponding match_node. Similarly, retrievals can be computed by passing the softmax distributions for each field, computed in the combined_matches_node, through its Projection to each retrieved_node (the matrices of which are inverted versions of those of the MappingProjections from the query_input_nodes to the corresponding match_nodes), to compute the distance of the weighted softmax over entries with the corresponding field of each entry, which yields the retrieved value for each field.

Output¶

The outputs corresponding to retrieved value for each field are represented as OUTPUT Nodes of the EMComposition, listed in its retrieved_nodes attribute.

Execution¶

The arguments of the run, learn and execute methods are the same as those of a Composition, and they can be passed any of the arguments valid for an AutodiffComposition. The details of how the EMComposition executes are described below.

Processing¶

When the EMComposition is executed, the following sequence of operations occurs (also see figure):

  • Input. The inputs to the EMComposition are provided to the query_input_nodes and value_input_nodes. The former are used for matching to the corresponding fields of the memory, while the latter are retrieved but not used for matching.

  • Concatenation. By default, the input to every query_input_node is passed to its own match_node through a MappingProjection that computes its distance from the corresponding field of each entry in memory. In this way, each match is normalized so that, absent field_weighting, all keys contribute equally to retrieval irrespective of relative differences in the norms of the queries or the keys in memory. However, if the field_weights are the same for all keys and normalize_memories is True, then the inputs provided to the query_input_nodes are concatenated into a single vector (in the concatenate_queries_node), which is passed to a single match_node. This may be more computationally efficient than passing each query through its own match_node; however, it will not necessarily produce the same results as doing so (see concatenate keys for additional information).

  • Match memories by field. The values of each query_input_node (or the concatenate_queries_node, if the concatenate_queries attribute is True) are passed through a MappingProjection that computes the distance between the corresponding input (query) and each memory (key) for the corresponding field, the result of which is passed to the corresponding match_node. By default, the distance is computed as the normalized dot product (i.e., between the normalized query vector and the normalized key for the corresponding field, which is comparable to using cosine similarity). However, if normalize_memories is set to False, just the raw dot product is computed. The distance can also be customized by specifying a different function for the MappingProjection to the match_node. The result is assigned as the value of the corresponding match_node.

  • Weight distances. If field weights are specified, then the distance computed by the MappingProjection to each match_node is multiplied by the corresponding field_weight using the field_weight_node. By default (if use_gating_for_weighting is False), this is done using the weighted_match_nodes, each of which receives a Projection from a match_node and the corresponding field_weight_node and multiplies them to produce the weighted distance for that field as its output. However, if use_gating_for_weighting is True, the field_weight_nodes are implemented as GatingMechanisms, each of which uses its field weight as a GatingSignal to output gate (i.e., multiplicatively modulate the output of) the corresponding match_node. In this case, the weighted_match_nodes are not implemented, and the output of the match_node is passed directly to the combined_matches_node.

    Note

    Setting use_gating_for_weighting to True reduces the size and complexity of the EMComposition, by eliminating the weighted_match_nodes. However, doing so precludes the ability to learn the field_weights, since GatingSignals are ModulatorySignals that cannot be learned. If learning is required, then use_gating_for_weighting should be set to False.

  • Combine distances. If field weights are used to specify more than one key field, then the (weighted) distances computed for each field (see above) are summed across fields by the combined_matches_node, before being passed to the softmax_node. If only one key field is specified, then the output of the match_node is passed directly to the softmax_node.

  • Softmax normalize distances. The distances, passed either from the combined_matches_node, or directly from the match_node if there is only one key field, are passed to the softmax_node, which applies the SoftMax Function to generate the softmax distribution used to retrieve entries from memory. If a numerical value is specified for softmax_gain, that is used as the gain (inverse temperature) for the SoftMax Function; if ADAPTIVE is specified, then the SoftMax.adapt_gain function is used to adaptively set the gain based on the summed distance (i.e., the output of the combined_matches_node); if CONTROL is specified, then the summed distance is monitored by a ControlMechanism that uses the adapt_gain method of the SoftMax Function to modulate its gain parameter; if None is specified, the default value of the SoftMax Function is used as the gain parameter (see Softmax_Gain for additional details).

  • Retrieve values by field. The vector of softmax weights for each memory generated by the softmax_node is passed through the Projections to each of the retrieved_nodes to compute the retrieved value for each field, which is assigned as the value of the corresponding retrieved_node.

  • Decay memories. If memory_decay is True, then each of the memories is decayed by the amount specified in memory_decay_rate.

    This is done by multiplying the matrix parameter of the MappingProjection from the combined_matches_node to each of the retrieved_nodes, as well as the matrix parameter of the MappingProjection from each query_input_node to the corresponding match_node, by 1 - memory_decay_rate.

  • Store memories. After the values have been retrieved, the storage_node adds the inputs to each field (i.e., the values in the query_input_nodes and value_input_nodes) as a new entry in memory, replacing the weakest one. The weakest memory is the one with the lowest norm, multiplied by its field_weight if purge_by_field_weight is True.

    This is done by assigning the input vectors to the corresponding rows of the matrix of the MappingProjection from the combined_matches_node to each of the retrieved_nodes, as well as the matrix parameter of the MappingProjection from each query_input_node to the corresponding match_node (see note above for additional details). A minimal sketch of this step is shown below.
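
    The following is a minimal numpy sketch of the decay-and-store step, treating memory as a single 3d array (the function name and the use of 1 - memory_decay_rate as the decay factor are assumptions for illustration):

    import numpy as np

    def decay_and_store(memory, new_entry, memory_decay_rate):
        """Decay all existing entries, then overwrite the weakest one with new_entry."""
        memory = memory * (1 - memory_decay_rate)  # decay existing memories
        # weakest entry = the one with the smallest norm across all of its fields
        norms = np.linalg.norm(memory.reshape(len(memory), -1), axis=1)
        memory[np.argmin(norms)] = new_entry       # replace the weakest entry
        return memory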

Training¶

If learn is called and enable_learning is True, then errors will be computed for each of the retrieved_nodes that is specified for learning (see Learning for details about specification). These errors are derived either from any errors backpropagated to the EMComposition from an outer Composition in which it is nested, or locally by the difference between the retrieved_nodes and the target_nodes that are created for each of the retrieved_nodes that do not project to an outer Composition. These errors are then backpropagated through the EMComposition to the query_input_nodes and value_input_nodes, and on to any nodes that project to them from a Composition in which the EMComposition is nested.

If learn_field_weights is also specified, then the corresponding field_weights are modified to minimize the error passed to the EMComposition’s retrieved_nodes that have been specified for learning, using the learning_rate specified for them in learn_field_weights or the default learning_rate for the EMComposition. If enable_learning is False (or run is called rather than learn), then the field_weights are not modified, and no error signals are passed to the nodes that project to its query_input_nodes and value_input_nodes.
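
A sketch of running versus training an EMComposition (the input format follows the standard Composition interface of dicts keyed by INPUT Nodes; depending on the configuration, learn may also require inputs for the TARGET nodes constructed for the retrieved_nodes):

>>> from psyneulink import EMComposition
>>> em = EMComposition(memory_template=(4, 2, 3), field_weights=[1, None],
...                    memory_fill=(0, .001), enable_learning=True)
>>> inputs = {em.query_input_nodes[0]: [[1, 0, 0]],
...           em.value_input_nodes[0]: [[0, 1, 0]]}
>>> result = em.run(inputs=inputs)    # retrieval + storage; no parameters modified
>>> result = em.learn(inputs=inputs)  # retrieval + storage + backpropagation of error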

Note

The only parameters modifiable by learning in the EMComposition are its field_weights; all other parameters (including all other Projection matrices) are fixed, and are used only to compute gradients and backpropagate errors.

Although memory storage is implemented as a form of learning (through modification of MappingProjection matrix parameters; see memory storage), this occurs irrespective of how EMComposition is run (i.e., whether learn or run is called), and is not affected by the enable_learning or learning_rate attributes, which pertain only to whether the field_weights are modified during learning. Furthermore, when run in PyTorch mode, storage is executed after the forward() and backward() passes are complete, and is not considered part of the gradient calculations.

Examples¶

The following are examples of how to configure and initialize the EMComposition’s memory:

Visualizing the EMComposition¶

The EMComposition can be visualized graphically, like any Composition, using its show_graph method. For example, the figure below shows an EMComposition that implements a simple dictionary, with one key field and one value field, each of length 5:

>>> from psyneulink import EMComposition
>>> em = EMComposition(memory_template=(2,5))
>>> em.show_graph()
Example of an EMComposition

Memory Template¶

The memory_template argument of an EMComposition’s constructor is used to configure its memory, which can be specified using either a tuple, or a list or array.

Tuple specification

The simplest form of specification is a tuple that uses the numpy shape format. If it has two elements (as in the example above), the first specifies the number of fields, and the second the length of each field. In this case, a default number of entries (1000) is created:

>>> em.memory_capacity
1000

The number of entries can be specified explicitly in the EMComposition’s constructor, using either the memory_capacity argument, or by using a 3-item tuple to specify the memory_template argument, in which case the first element specifies the number of entries, while the second and third specify the number of fields and the length of each field, respectively. The following are equivalent:

>>> em = EMComposition(memory_template=(2,5), memory_capacity=4)

and

>>> em = EMComposition(memory_template=(4,2,5))

both of which create a memory with 4 entries, each with 2 fields of length 5. The contents of memory can be inspected using the memory attribute:

>>> em.memory
[[array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])],
 [array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])],
 [array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])],
 [array([0., 0., 0., 0., 0.]), array([0., 0., 0., 0., 0.])]]

The default for memory_capacity is 1000, which is used if it is not otherwise specified.

List or array specification

Note that in the example above the two fields have the same length (5). This is always the case when a tuple is used, as it generates a regular array. A list or numpy array can also be used to specify the memory_template argument. For example, the following is equivalent to the examples above:

>>> em = EMComposition(memory_template=[[0,0,0],[0,0,0]], memory_capacity=4)

However, a list or array can be used to specify fields of different length (i.e., as a ragged array). For example, the following specifies one field of length 3 and another of length 1:

>>> em = EMComposition(memory_template=[[0,0,0],[0]], memory_capacity=4)
>>> em.memory
[[[array([0., 0., 0.]), array([0.])]],
 [[array([0., 0., 0.]), array([0.])]],
 [[array([0., 0., 0.]), array([0.])]],
 [[array([0., 0., 0.]), array([0.])]]]

Memory fill

Note that the examples above generate a warning about the use of zeros to initialize the memory. This is because the default value for memory_fill is 0, and the default value for normalize_memories is True, which will cause a divide by zero warning when memories are normalized. While this doesn’t crash, it will result in NaNs that are likely to cause problems elsewhere. This can be avoided by specifying a non-zero value for memory_fill, such as a small number:

>>> em = EMComposition(memory_template=[[0,0,0],[0]], memory_capacity=4, memory_fill=.001)
>>> em.memory
[[[array([0.001, 0.001, 0.001]), array([0.001])]],
 [[array([0.001, 0.001, 0.001]), array([0.001])]],
 [[array([0.001, 0.001, 0.001]), array([0.001])]],
 [[array([0.001, 0.001, 0.001]), array([0.001])]]]

Here, a single value was specified for memory_fill (which can be a float or int), that is used to fill all values. Random values can be assigned using a tuple to specify an interval between the first and second elements. For example, the following uses random values between 0 and 0.01 to fill all entries:

>>> em = EMComposition(memory_template=[[0,0,0],[0]], memory_capacity=4, memory_fill=(0,0.01))
>>> em.memory
[[[array([0.00298981, 0.00563404, 0.00444073]), array([0.00245373])]],
 [[array([0.00148447, 0.00666486, 0.00228882]), array([0.00237541])]],
 [[array([0.00432786, 0.00035378, 0.00265932]), array([0.00980598])]],
 [[array([0.00151163, 0.00889032, 0.00899815]), array([0.00854529])]]]

Multiple entries

In the examples above, a single entry was specified, and that was used as a template for initializing the remaining entries in memory. However, a list or array can be used to directly initialize any or all entries. For example, the following initializes memory with two specific entries:

>>> em = EMComposition(memory_template=[[[1,2,3],[4]],[[100,101,102],[103]]], memory_capacity=4)
>>> em.memory
[[[array([1., 2., 3.]), array([4.])]],
 [[array([100., 101., 102.]), array([103.])]],
 [[array([0., 0., 0.]), array([0.])]],
 [[array([0., 0., 0.]), array([0.])]]]

Note that the two entries must have exactly the same shapes. If they do not, an error is generated. Also note that the remaining entries are filled with zeros (the default value for memory_fill). Here again, memory_fill can be used to specify a different value:

>>> em = EMComposition(memory_template=[[[7],[24,5]],[[100],[3,106]]], memory_capacity=4, memory_fill=(0,.01))
>>> em.memory
[[[array([7.]), array([24.,  5.])]],
 [[array([100.]), array([  3., 106.])]],
 [[array([0.00803646]), array([0.00341276, 0.00286969])]],
 [[array([0.00143196]), array([0.00079033, 0.00710556])]]]

Field Weights¶

By default, all of the fields specified are treated as keys except the last, which is treated as a “value” field – that is, one that is not included in the matching process, but for which a value is retrieved along with the key fields. For example, in the figure above, the first field specified was used as a key field, and the last as a value field. However, the field_weights argument can be used to modify this, specifying which fields should be used as key fields – including the relative contribution that each makes to the matching process – and which should be used as value fields. Non-zero elements in the field_weights argument designate key fields, and zeros specify value fields. For example, the following specifies that the first two fields should be used as keys while the last two should be used as values:

>>> em = EMComposition(memory_template=[[0,0,0],[0],[0,0],[0,0,0,0]], memory_capacity=3, field_weights=[1,1,0,0])
>>> em.show_graph()
_images/EMComposition_field_weights_equal_fig.svg

Use of field_weights to specify keys and values.¶

Note that the figure now shows <QUERY> [WEIGHT] nodes, that are used to implement the relative contribution that each key field makes to the matching process specified in the field_weights argument. By default, these are equal (all assigned a value of 1), but different values can be used to weight the relative contribution of each key field. The values are normalized so that they sum to 1, and the relative contribution of each is determined by the ratio of its value to the sum of all non-zero values. For example, the following specifies that the first two fields should be used as keys, with the first contributing 75% to the matching process and the second field contributing 25%:

>>> em = EMComposition(memory_template=[[0,0,0],[0],[0,0]], memory_capacity=3, field_weights=[3,1,0])

Class Reference¶

class psyneulink.library.compositions.emcomposition.EMComposition(memory_template=[[0], [0]], memory_capacity=None, memory_fill=0, fields=None, field_names=None, field_weights=None, learn_field_weights=None, learning_rate=None, normalize_field_weights=True, concatenate_queries=False, normalize_memories=True, softmax_gain=1.0, softmax_threshold=0.001, softmax_choice='all', storage_prob=1.0, memory_decay_rate='auto', purge_by_field_weights=False, enable_learning=True, target_fields=None, use_storage_node=True, use_gating_for_weighting=False, random_state=None, seed=None, name='EM_Composition', **kwargs)¶

Subclass of AutodiffComposition that implements the functions of an EpisodicMemoryMechanism in a differentiable form and in which its field_weights parameter can be learned.

Takes only the following arguments, all of which are optional.

Parameters
  • memory_template (tuple, list, 2d or 3d array : default [[0],[0]]) – specifies the shape of an item to be stored in the EMComposition’s memory (see memory_template for details).

  • memory_fill (scalar or tuple : default 0) – specifies the value used to fill the memory when it is initialized (see memory_fill for details).

  • memory_capacity (int : default None) – specifies the number of items that can be stored in the EMComposition’s memory (see memory_capacity for details).

  • fields (dict[tuple[field weight, learning specification]] : default None) – each key must be a string that is the name of a field, and its value a dict or tuple that specifies that field’s field_weight, learn_field_weights, and target_fields specifications (see fields for details of specification format). The fields arg replaces the field_names, field_weights, learn_field_weights, and target_fields arguments, and specifying any of these with it raises an error.

  • field_names (list or tuple : default None) – specifies the names assigned to each field in the memory_template (see field names for details). If the fields argument is specified, this is not necessary, and specifying it raises an error.

  • field_weights (list or tuple : default (1,0)) – specifies the relative weight assigned to each key when matching an item in memory (see field weights for additional details). If the fields argument is specified, this is not necessary, and specifying it raises an error.

  • learn_field_weights (bool or list[bool, int, float] : default False) – specifies whether the field_weights are learnable and, if so, optionally what the learning_rate is for each field (see learn_field_weights for specifications). If the fields argument is specified, this is not necessary, and specifying it raises an error.

  • learning_rate (float : default .01) – specifies the default learning_rate for field_weights not specified in learn_field_weights (see learning_rate for additional details).

  • normalize_field_weights (bool : default True) – specifies whether the field_weights are normalized over the number of keys, or used as absolute weighting values when retrieving an item from memory (see normalize_field weights for additional details).

  • concatenate_queries (bool : default False) – specifies whether to concatenate the keys into a single field before matching them to items in the corresponding fields in memory (see concatenate keys for details).

  • normalize_memories (bool : default True) – specifies whether keys and memories are normalized before computing their dot product (similarity) (see Match memories by field for additional details).

  • softmax_gain (float, ADAPTIVE or CONTROL : default 1.0) – specifies the gain (inverse temperature) used for softmax normalizing the distance of queries and keys in memory (see Softmax normalize matches over fields for additional details).

  • softmax_threshold (float : default .0001) – specifies the threshold used to mask out small values in the softmax calculation (see mask_threshold under Thresholding and Adaptive Gain for details).

  • softmax_choice (WEIGHTED_AVG, ARG_MAX, PROBABILISTIC : default WEIGHTED_AVG) – specifies how the softmax over distances of queries and keys in memory is used for retrieval (see softmax_choice for a description of each option).

  • storage_prob (float : default 1.0) – specifies the probability that an item will be stored in memory when the EMComposition is executed (see Retrieval and Storage for additional details).

  • memory_decay_rate (float : AUTO) – specifies the rate at which items in the EMComposition’s memory decay (see memory_decay_rate for details).

  • purge_by_field_weights (bool : default False) – specifies whether field_weights are used to determine which memory to replace when a new one is stored (see purge_by_field_weight for details).

  • enable_learning (bool : default True) – specifies whether learning is enabled for the EMComposition (see Learning for additional details); use_gating_for_weighting must be False.

  • target_fields (list[bool] : default None) – specifies whether a learning pathway is constructed for each field of the EMComposition. If it is a list, each item must be True or False, and the number of items must be equal to the number of fields specified (see Target Fields for additional details). If the fields argument is specified, this is not necessary, and specifying it raises an error.

  • use_storage_node (bool : default True) – specifies whether to use a LearningMechanism to store entries in memory. If False, a method on EMComposition is used rather than a LearningMechanism. This is meant for debugging, and precludes use of import_composition to integrate the EMComposition into another Composition; to do so, use_storage_node must be True (default).

  • use_gating_for_weighting (bool : default False) – specifies whether to use output gating to weight the match_nodes instead of a standard input (see Weight distances for additional details).

memory¶

3d array of entries in memory, in which each row (axis 0) is an entry, each column (axis 1) is a field, and each item (axis 2) is the value for the corresponding field (see Memory Specification for additional details).

Note

This is a read-only attribute; memories can be added to the EMComposition’s memory either by executing its run or learn methods with the entry as the inputs argument.

Type

ndarray

fields¶

list of Field objects, each of which contains information about the nodes and values of a field in the EMComposition’s memory (see Field).

Type

ContentAddressableList[Field]

memory_capacity¶

determines the number of items that can be stored in memory (see memory_capacity for additional details).

Type

int

field_names¶

determines which names that can be used to label fields in memory (see field_names for additional details).

Type

list[str]

field_weights¶

determines which fields of the input are treated as “keys” (non-zero values) that are used to match entries in memory for retrieval, and which are used as “values” (zero values) that are stored and retrieved from memory but not used in the match process (see Match memories by field). Also determines the relative contribution of each key field to the match process (see field_weights for additional details). The field_weights can be changed by assigning a new list of weights to the field_weights attribute; however, only the weights for fields used as keys can be changed (see EMComposition_Field_Weights_Change_Note for additional details).

Type

tuple[float]

learn_field_weights¶

determines whether the field_weight for each field is subject to learning and, if so, optionally the learning_rate for it (see learn_field_weights for additional details).

Type

bool or list[bool, int, float]

learning_rate¶

determines the default learning_rate for field_weights not specified in learn_field_weights (see learning_rate for additional details).

Type

float

normalize_field_weights¶

determines whether field_weights are normalized over the number of keys, or used as absolute weighting values when retrieving an item from memory (see normalize_field weights for additional details).

Type

bool

concatenate_queries¶

determines whether keys are concatenated into a single field before matching them to items in memory (see concatenate queries for additional details).

Type

bool

normalize_memories¶

determines whether keys and memories are normalized before computing their dot product (similarity) (see Match memories by field for additional details).

Type

bool

softmax_gain¶

determines gain (inverse temperature) used for softmax normalizing the summed distances of queries and keys in memory by the SoftMax Function of the softmax_node (see Softmax normalize distances for additional details).

Type

float, ADAPTIVE or CONTROL

softmax_threshold¶

determines the threshold used to mask out small values in the softmax calculation (see mask_threshold under Thresholding and Adaptive Gain for details).

Type

float

softmax_choice¶

determines how the softmax over distances of queries and keys in memory is used for retrieval (see softmax_choice for a description of each option).

Type

WEIGHTED_AVG, ARG_MAX or PROBABILISTIC

storage_prob¶

determines the probability that an item will be stored in memory when the EMComposition is executed (see Retrieval and Storage for additional details).

Type

float

memory_decay_rate¶

determines the rate at which items in the EMComposition’s memory decay (see memory_decay_rate for details).

Type

float

purge_by_field_weights¶

determines whether field_weights are used to determine which memory to replace when a new one is stored (see purge_by_field_weight for details).

Type

bool

enable_learning¶

determines whether learning is enabled for the EMComposition (see Learning for additional details).

Type

bool

target_fields¶

determines which fields convey error signals during learning (see Target Fields for additional details).

Type

list[bool]

query_input_nodes¶

INPUT Nodes that receive keys used to determine the item to be retrieved from memory, and then themselves stored in memory (see Match memories by field for additional details). By default these are assigned the name KEY_n_INPUT where n is the field number (starting from 0); however, if field_names is specified, then the name of each query_input_node is assigned the corresponding field name appended with [QUERY].

Type

list[ProcessingMechanism]

value_input_nodes¶

INPUT Nodes that receive values to be stored in memory; these are not used in the matching process used for retrieval. By default these are assigned the name VALUE_n_INPUT where n is the field number (starting from 0); however, if field_names is specified, then the name of each value_input_node is assigned the corresponding field name appended with [VALUE].

Type

list[ProcessingMechanism]

concatenate_queries_node¶

ProcessingMechanism that concatenates the inputs to the query_input_nodes into a single vector used for the matching process if concatenate keys is True. This is not created if the concatenate_queries argument to the EMComposition’s constructor is False or is overridden (see concatenate_queries), or if there is only one query_input_node. This node is named CONCATENATE_QUERIES.

Type

ProcessingMechanism

match_nodes¶

ProcessingMechanisms that compute the dot product of each query and the key stored in the corresponding field of memory (see Match memories by field for additional details). These are named the same as the corresponding query_input_nodes appended with the suffix [MATCH to KEYS].

Type

list[ProcessingMechanism]

field_weight_nodes¶

Nodes used to weight the distances computed by the match_nodes with the field weight for the corresponding key field (see Weight distances for implementation). These are named the same as the corresponding query_input_nodes.

Type

list[ProcessingMechanism or GatingMechanism]

weighted_match_nodes¶

ProcessingMechanisms that combine the field weight for each key field with the dot product computed by the corresponding match_node. These are only implemented if use_gating_for_weighting is False (see Weight distances for details), and are named the same as the corresponding query_input_nodes appended with the suffix [WEIGHTED MATCH].

Type

list[ProcessingMechanism]

combined_matches_node¶

ProcessingMechanism that receives the weighted distances from the weighted_match_nodes if more than one key field is specified (or directly from match_nodes if use_gating_for_weighting is True), and combines them into a single vector that is passed to the softmax_node for retrieval. This node is named COMBINE MATCHES.

Type

ProcessingMechanism

softmax_node¶

ProcessingMechanism that computes the softmax over the summed distances of keys and memories (the output of the combined_matches_node) from the corresponding match_nodes (see Softmax over summed distances for additional details). This is named RETRIEVE (as it yields the softmax-weighted average over the keys in memory).

Type

ProcessingMechanism

softmax_gain_control_node¶

ControlMechanism that adaptively controls the softmax_gain of the softmax_node. This is implemented only if softmax_gain is specified as CONTROL (see softmax_gain for details).

Type

ControlMechanism

retrieved_nodes¶

ProcessingMechanisms that receive the vector retrieved for each field in memory (see Retrieve values by field for additional details). These are assigned the same names as the query_input_nodes and value_input_nodes to which they correspond, appended with the suffix [RETRIEVED], and are in the same order as the input_nodes to which they correspond.

Type

list[ProcessingMechanism]
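A minimal sketch of executing an EMComposition and reading the retrieved fields (node names and ordering are per the description above; the input shapes are assumptions for illustration):

    import psyneulink as pnl

    em = pnl.EMComposition(memory_template=(2, 3),
                           memory_capacity=5,
                           field_names=['KEY', 'VALUE'],
                           field_weights=[1, None])

    # One trial: the result contains the retrieved value for every field
    result = em.run(inputs={em.query_input_nodes[0]: [[1, 0, 0]],
                            em.value_input_nodes[0]: [[0, 1, 0]]})
    print(result)
    print([n.name for n in em.retrieved_nodes])   # e.g. ['KEY [RETRIEVED]', 'VALUE [RETRIEVED]']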

storage_node¶

EMStorageMechanism that receives inputs from the query_input_nodes and value_input_nodes, and stores these in the corresponding fields of memory with probability storage_prob after a retrieval has been made (see Retrieval and Storage for additional details). This node is named STORE.

The storage_node is assigned a Condition to execute after the retrieved_nodes have executed, to ensure that storage occurs after retrieval, but before any subsequent processing is done (i.e., in a Composition in which the EMComposition may be embedded).

Type

EMStorageMechanism
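A minimal sketch of the storage parameters at work (hedged: default values may differ by version):

    import numpy as np
    import psyneulink as pnl

    em = pnl.EMComposition(memory_template=(2, 3),
                           memory_capacity=4,
                           field_weights=[1, None],
                           storage_prob=1.0,          # always store after retrieval
                           memory_decay_rate=0.1)     # decay prior entries at each storage

    em.run(inputs={em.query_input_nodes[0]: [[1, 0, 0]],
                   em.value_input_nodes[0]: [[0, 1, 0]]})
    print(np.array(em.memory))   # 3d: entries (axis 0) x fields (axis 1) x values (axis 2)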

input_nodes¶

Full list of INPUT Nodes, in the same order as specified in the field_names argument of the constructor and in self.field_names.

Type

list[ProcessingMechanism]

query_and_value_input_nodes¶

Full list of INPUT Nodes ordered with query_input_nodes first followed by value_input_nodes; used primarily for internal computations.

Type

list[ProcessingMechanism]

class PytorchEMCompositionWrapper(*args, **kwargs)¶

Wrapper for EMComposition as a PyTorch Module

execute_node(node, variable, optimization_num, context)¶

Override to handle storage of entry to memory_matrix by EMStorage Function

property memory¶

Return a list of memories in which the rows (outer dimension) are the memories for each field. These are derived from the matrix Parameters of the afferent Projections to the retrieved_nodes.

Return type

Optional[Tensor]

store_memory(memory_to_store, context)¶

Store variable in memory_matrix (parallel EMStorageMechanism._execute)

For each node in query_input_nodes and value_input_nodes, assign its value to the weights of the corresponding afferents to the corresponding match_node and/or retrieved_node:

  • memory = matrix of entries, made up of vectors for each field in each entry (row)

  • entry_to_store = query_input or value_input to store

  • field_projections = Projections whose matrices comprise memory

DIVISION OF LABOR between this method and the function called by it (a schematic sketch follows below):

store_memory (corresponds to EMStorageMechanism._execute):

  • compute norms to find weakest entry in memory

  • compute storage_prob to determine whether to store current entry in memory

  • call function with memory matrix for each field, to decay existing memory and assign input to weakest entry

storage_node.function (corresponds to EMStorage._function):
  • decay existing memories

  • assign input to weakest entry (given the index passed from EMStorageMechanism)

Returns

List[2d tensor] updated memories
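A schematic NumPy sketch of this division of labor (not the actual implementation, which writes into the Projection matrices field by field); it assumes memory is a 3d array of shape (entries, fields, field length):

    import numpy as np

    def store_entry(memory, entry, decay_rate=0.0, storage_prob=1.0,
                    rng=np.random.default_rng()):
        """Decay existing memories and write entry over the weakest one."""
        if rng.uniform() >= storage_prob:                   # probabilistic storage
            return memory
        # Norm of each entry across all of its fields -> index of weakest entry
        norms = np.linalg.norm(memory.reshape(len(memory), -1), axis=1)
        weakest = int(np.argmin(norms))
        memory = memory * (1.0 - decay_rate)                # decay existing memories
        memory[weakest] = entry                             # assign input to weakest entry
        return memory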

pytorch_composition_wrapper_type¶

alias of psyneulink.library.compositions.pytorchEMcompositionwrapper.PytorchEMCompositionWrapper

_validate_memory_specs(memory_template, memory_capacity, memory_fill, field_weights, field_names, name)¶

Validate the memory_template, field_weights, and field_names arguments

_parse_memory_template(memory_template, memory_capacity, memory_fill)¶

Construct memory from memory_template and memory_fill; assign the self.memory_template and self.entry_template attributes.

Return type

(numpy.ndarray, int)

_parse_memory_shape(memory_template)¶

Parse shape of memory_template to determine number of entries and fields

_construct_pathways(memory_template, memory_capacity, field_weights, concatenate_queries, normalize_memories, softmax_gain, softmax_threshold, softmax_choice, storage_prob, memory_decay_rate, use_storage_node, learn_field_weights, enable_learning, use_gating_for_weighting)¶

Construct Nodes and Pathways for EMComposition

_construct_input_nodes()¶

Create one node for each input to the EMComposition and identify it as a key or value.

_construct_concatenate_queries_node(concatenate_queries)¶

Create node that concatenates the inputs for all keys into a single vector, used to create the matrix for the Projection from the concatenate_queries_node to the match_node.

_construct_match_nodes(memory_template, memory_capacity, concatenate_queries, normalize_memories)¶

Create nodes that, for each key field, compute the similarity between the input and each item in memory (see the sketch below):

  • If self.concatenate_queries is True, then all key inputs from the concatenate_queries_node are assigned a single match_node, and weights from memory_template are assigned to a Projection from the concatenate_queries_node to that match_node.

  • Otherwise, each key has its own match_node, and weights from memory_template are assigned to a Projection from each query_input_node[i] to the corresponding match_node[i].

  • Each element of the output represents the similarity between the query_input and one key in memory.
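A schematic sketch of the similarity computation performed by each match_node (a NumPy rendering for illustration; normalize corresponds to normalize_memories):

    import numpy as np

    def match_field(query, keys, normalize=True):
        """Similarity of a 1d query to each stored key (the rows of keys)."""
        if normalize:                       # cosine similarity when normalized
            query = query / max(np.linalg.norm(query), 1e-12)
            keys = keys / np.maximum(np.linalg.norm(keys, axis=1, keepdims=True), 1e-12)
        return keys @ query                 # one similarity value per entry in memory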

_construct_field_weight_nodes(concatenate_queries, use_gating_for_weighting)¶

Create ProcessingMechanisms that weight each key’s softmax contribution to the retrieved values.

_construct_weighted_match_nodes(concatenate_queries)¶

Create nodes that weight the output of the match node for each key.

_construct_softmax_gain_control_node(softmax_gain)¶

Create nodes that set the softmax gain (inverse temperature) for each softmax_node.

_construct_combined_matches_node(concatenate_queries, memory_capacity, use_gating_for_weighting)¶

Create node that combines weighted matches for all keys into one match vector.

_construct_softmax_node(memory_capacity, softmax_gain, softmax_threshold, softmax_choice)¶

Create node that applies softmax to output of combined_matches_node.
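A schematic sketch of the softmax this node applies (a hypothetical rendering; gain corresponds to softmax_gain, and threshold to softmax_threshold, which masks sub-threshold matches before the softmax):

    import numpy as np

    def retrieval_softmax(combined_matches, gain=1.0, threshold=None):
        x = np.asarray(combined_matches, dtype=float)
        if threshold is not None:
            x = np.where(np.abs(x) > threshold, x, 0.0)   # mask weak matches
        z = np.exp(gain * (x - x.max()))                  # numerically stable softmax
        return z / z.sum()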

_construct_retrieved_nodes(memory_template)¶

Create nodes that report the value field(s) for the item(s) matched in memory.

Return type

list

_construct_storage_node(use_storage_node, memory_template, memory_decay_rate, storage_prob)¶

Create the EMStorageMechanism that stores the key and value inputs in memory. Memories are stored by adding the current input for each field to the corresponding row of the matrix for the Projection from the query_input_node (or concatenate_queries_node) to the match_node and retrieved_node for keys, and from the value_input_node to the retrieved_node for values. The function of the EMStorageMechanism takes the following arguments:

  • variable – template for an entry in memory;

  • fields – the input_nodes for the corresponding fields of an entry in memory;

  • field_types – a list of the same length as fields, containing 1’s for key fields and 0’s for value fields;

  • concatenate_queries_node – node used to concatenate keys (if concatenate_queries is True) or None;

  • memory_matrix – the matrix of memories (constructed from memory_template);

  • learning_signals – list of MappingProjections (or their ParameterPorts) that store each field of memory;

  • decay_rate – rate at which entries in the memory_matrix decay;

  • storage_prob – probability for storing an entry in memory.

_set_learning_attributes()¶

Set learning-related attributes for the Nodes and Projections

_store_memory(inputs, context)¶

Store inputs to the query and value nodes in memory, as the weights of the Projections to the match_nodes (queries) and retrieved_nodes (values).

Note: the inputs argument is ignored (it is included for compatibility with the function of the MemoryFunctions class); storage is handled by a call to EMComposition._encode_memory.

_encode_memory(context=None)¶

Encode inputs as memories. For each node in query_input_nodes and value_input_nodes, assign its value to the afferent weights of the corresponding retrieved_node:

  • memory = matrix of entries, made up of vectors for each field in each entry (row)

  • memory_full_vectors = matrix of entries, made up of vectors concatenated across all fields (used for norm)

  • entry_to_store = query_input or value_input to store

  • field_memories = weights of the Projections for each field

learn(*args, **kwargs)¶

Override to check for inappropriate use of ARG_MAX or PROBABILISTIC options for retrieval with learning

Return type

list

_get_execution_mode(execution_mode)¶

Parse execution_mode argument and return a valid execution mode for the learn() method

_identify_target_nodes(context)¶

Identify the retrieved_nodes specified by target_field_weights as TARGET nodes

Return type

list

infer_backpropagation_learning_pathways(execution_mode, context=None)¶

Create backpropagation learning pathways for every INPUT Node –> OUTPUT Node pathway. Flattens nested Compositions:

  • only includes the Projections in outer Composition to/from the CIMs of the nested Composition (i.e., to input_CIMs and from output_CIMs) – the ones that should be learned;

  • excludes Projections from/to CIMs in the nested Composition (from input_CIMs and to output_CIMs), as those should remain identity Projections;

see PytorchCompositionWrapper for table of how Projections are handled and further details.

Returns list of target nodes for each pathway

do_gradient_optimization(retain_in_pnl_options, context, optimization_num=None)¶

Compute the loss and use it in a call to autodiff_backward() to compute gradients and update the PyTorch parameters. Updates parameters (weights) based on the trial(s) executed since the last optimization, and reinitializes minibatch_loss and minibatch_loss_count.

exception psyneulink.library.compositions.emcomposition.EMCompositionError(error_value)¶
class psyneulink.library.compositions.emcomposition.FieldType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)¶