LeabraMechanism¶
Overview¶
A LeabraMechanism is a subclass of ProcessingMechanism that wraps a leabra network. Leabra is an artificial neural network algorithm (O’Reilly, 1996). For more information about Leabra, see O’Reilly and Munakata, 2016.
Note
The LeabraMechanism uses the leabra Python package, which can be found on GitHub. While the LeabraMechanism should always match the output of an equivalent network in the leabra package, the leabra package itself is still in development, so its output is not guaranteed to be correct yet.
Creating a LeabraMechanism¶
A LeabraMechanism can be created in two ways. First, users can specify the size of the input layer (input_size), the size of the output layer (output_size), the number of hidden layers (hidden_layers), and the sizes of the hidden layers (hidden_sizes); in this case, the LeabraMechanism initializes the connection weights as uniform random values between 0.55 and 0.95. Alternatively, users can provide a leabra Network object from the leabra package as the network argument, in which case that network is wrapped by the LeabraMechanism. This option requires familiarity with the leabra package, but allows more flexibility in specifying parameters. With the first method, the training_flag argument specifies whether the network should be learning (updating its weights) or not.
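The random weight initialization described above (uniform values between 0.55 and 0.95) can be sketched as a small standalone function. This is purely illustrative: the function name is hypothetical and it is not the library's actual internal code.

```python
import numpy as np

def init_uniform_weights(n_in, n_out, low=0.55, high=0.95, seed=None):
    # Hypothetical sketch of the uniform random initialization described
    # above; NOT the actual internals of LeabraMechanism.
    rng = np.random.default_rng(seed)
    return rng.uniform(low, high, size=(n_in, n_out))

w = init_uniform_weights(4, 2, seed=0)
print(w.shape)  # (4, 2)
```

Every entry of w falls in the half-open interval [0.55, 0.95), matching the range the documentation describes for the default initialization.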
Structure¶
The LeabraMechanism has a training_flag attribute, which can be set to True or False to determine whether the network is currently learning. The training_flag can also be changed after creation of the LeabraMechanism, causing it to start or stop learning.
Note
If the training_flag is True, the network will learn using the Leabra learning algorithm. Other algorithms may be added later.
The LeabraMechanism has two InputPorts: the MAIN_INPUT InputPort and the LEARNING_TARGET InputPort. The MAIN_INPUT InputPort is the input to the leabra network, while the LEARNING_TARGET InputPort is the learning target for the LeabraMechanism. The input to the MAIN_INPUT InputPort should have length equal to input_size, and the input to the LEARNING_TARGET InputPort should have length equal to output_size.
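The length requirements above can be illustrated with a minimal helper that packages a main input and a learning target into the two-item variable a LeabraMechanism expects. This helper is hypothetical and is not part of the PsyNeuLink API; it only encodes the length rules just described.

```python
def make_leabra_input(main_input, learning_target, input_size, output_size):
    # Hypothetical helper illustrating the length rules above;
    # not part of the PsyNeuLink API.
    if len(main_input) != input_size:
        raise ValueError("MAIN_INPUT must have length input_size")
    if len(learning_target) != output_size:
        raise ValueError("LEARNING_TARGET must have length output_size")
    # One entry per InputPort: [MAIN_INPUT, LEARNING_TARGET]
    return [list(main_input), list(learning_target)]

v = make_leabra_input([0.1, 0.2, 0.3], [1.0], input_size=3, output_size=1)
print(v)  # [[0.1, 0.2, 0.3], [1.0]]
```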
Note
Currently, there is a bug where LeabraMechanism (and other Mechanisms with multiple input ports) cannot be used as ORIGIN Mechanisms for a System. To use a LeabraMechanism as an ORIGIN Mechanism, you can work around this bug by creating two TransferMechanisms as ORIGIN Mechanisms instead, and having these two TransferMechanisms pass their output to the InputPorts of the LeabraMechanism. Here is an example of how to do this. In the example, T2 passes the training_data to the LEARNING_TARGET InputPort of L (L.input_ports[1]):
L = LeabraMechanism(input_size=input_size, output_size=output_size)
T1 = TransferMechanism(name='T1', input_shapes=input_size, function=Linear)
T2 = TransferMechanism(name='T2', input_shapes=output_size, function=Linear)
p1 = Process(pathway=[T1, L])
proj = MappingProjection(sender=T2, receiver=L.input_ports[1])
p2 = Process(pathway=[T2, proj, L])
s = System(processes=[p1, p2])
s.run(inputs={T1: input_data, T2: training_data})
Execution¶
The LeabraMechanism passes its input and training data to the leabra Network it wraps, then passes that network’s output (after one “trial”, by default 200 cycles in PsyNeuLink) to its primary OutputPort. For details on Leabra, see O’Reilly and Munakata, 2016 and the leabra Python package on GitHub.
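The 200-cycle default can be reproduced arithmetically, assuming (as the quarter_size default of 50 in the Class Reference suggests) that one trial consists of four quarters of quarter_size cycles each. The function below is an illustrative sketch under that assumption, not PsyNeuLink code.

```python
def cycles_per_trial(quarter_size=50, quarters=4):
    # Assumes one trial = 4 quarters of quarter_size cycles each,
    # consistent with the documented default of 200 cycles per trial.
    return quarters * quarter_size

print(cycles_per_trial())                 # 200
print(cycles_per_trial(quarter_size=25))  # 100
```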
Class Reference¶
- exception psyneulink.library.components.mechanisms.processing.leabramechanism.LeabraError(message, component=None)¶
- class psyneulink.library.components.mechanisms.processing.leabramechanism.LeabraFunction(default_variable=None, network=None, params=None, owner=None, prefs=None)¶
LeabraFunction is a custom function that lives inside the LeabraMechanism. As a function, it transforms the variable by providing it as input to the leabra network inside the LeabraFunction.
- Parameters
default_variable (number or np.array : default np.zeros() (array of zeros)) – specifies a template for the input to the leabra network.
network (leabra.Network) – specifies the leabra network to be used.
params (Dict[param keyword: param value] : default None) – a parameter dictionary that specifies the parameters for the function. Values specified for parameters in the dictionary override any assigned to those parameters in arguments of the constructor.
owner (Component) – component to which to assign the Function.
- variable¶
contains the value to be transformed.
- Type
number or np.array
- network¶
the leabra network that is being used.
- Type
leabra.Network
- prefs¶
the PreferenceSet for the LeabraMechanism; if it is not specified in the prefs argument of the constructor, a default is assigned using classPreferences defined in __init__.py (see Preferences for details).
- Type
PreferenceSet or specification dict
- _validate_variable(variable, context=None)¶
Validate variable and return validated variable
Convert self.class_defaults.variable specification and variable (if specified) to list of 1D np.ndarrays:
VARIABLE SPECIFICATION -> ENCODING:
Simple value variable: 0 -> [array([0])]
Single state array (vector) variable: [0, 1] -> [array([0, 1])]
Multiple port variables, each with a single value variable: [[0], [0]] -> [array([0]), array([0])]
- Perform top-level type validation of variable against the self.class_defaults.variable;
if the type is OK, the value is returned (which should be used by the function)
This can be overridden by a subclass to perform more detailed checking (e.g., range, recursive, etc.). It is called only if the parameter_validation attribute is True (which it is by default).
- IMPLEMENTATION NOTES:
future versions should add hierarchical/recursive content (e.g., range) checking
add request/target pattern?? (as per _validate_params) and return validated variable?
- Parameters
variable – (anything other than a dictionary) - variable to be validated:
context – (str)
- Return variable
validated variable
- _validate_params(request_set, target_set=None, context=None)¶
Validate params and assign validated values to targets.
This performs top-level type validation of params
This can be overridden by a subclass to perform more detailed checking (e.g., range, recursive, etc.). It is called only if the parameter_validation attribute is True (which it is by default).
- IMPLEMENTATION NOTES:
future versions should add recursive and content (e.g., range) checking
should method return validated param set?
- Parameters
target_set (dict) – repository of params that have been validated
- Returns
None
- class psyneulink.library.components.mechanisms.processing.leabramechanism.LeabraMechanism(network=None, input_size=1, output_size=1, hidden_layers=0, hidden_sizes=None, training_flag=False, params=None, name=None, prefs=None)¶
Subclass of ProcessingMechanism that is a wrapper for a Leabra network in PsyNeuLink. See Mechanism for additional arguments and attributes.
- Parameters
network (Optional[leabra.Network]) – a network object from the leabra package. If specified, the LeabraMechanism’s network becomes network, and the other arguments that specify the network are ignored (input_size, output_size, hidden_layers, hidden_sizes).
input_size (int : default 1) – an integer specifying how many units are in (the size of) the first layer (input) of the leabra network.
output_size (int : default 1) – an integer specifying how many units are in (the size of) the final layer (output) of the leabra network.
hidden_layers (int : default 0) – an integer specifying how many hidden layers are in the leabra network.
hidden_sizes (int or List[int] : default input_size) – if specified, this should be a list of integers, specifying the size of each hidden layer. If hidden_sizes is a list, the number of integers in hidden_sizes should be equal to the number of hidden layers. If not specified, hidden layers will default to the same size as the input layer. If hidden_sizes is a single integer, then all hidden layers are of that size.
training_flag (boolean : default None) – a boolean specifying whether the leabra network should be learning. If True, the leabra network will adjust its weights using the “leabra” algorithm, based on the training pattern (which is read from its second InputPort). The training_flag attribute can be changed after initialization, causing the leabra network to start or stop learning. If None, training_flag will default to False if the network argument is not provided; if the network argument is provided and training_flag is None, then the existing learning rules of the network will be preserved.
quarter_size (int : default 50) – an integer specifying how many times the Leabra network cycles each time it is run. Lower values of quarter_size result in shorter execution times, though very low values may cause slight fluctuations in output. Lower values of quarter_size also effectively reduce the magnitude of learning weight changes during a given trial.
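The hidden_sizes rules described above (default to input_size, broadcast a single int, or require one entry per hidden layer) can be sketched as a small resolver. This is a hypothetical illustration of the documented behavior, not PsyNeuLink's actual code.

```python
def resolve_hidden_sizes(hidden_layers, hidden_sizes, input_size):
    # Hypothetical sketch of the hidden_sizes rules described above;
    # not PsyNeuLink's actual implementation.
    if hidden_sizes is None:
        return [input_size] * hidden_layers    # default: same size as input layer
    if isinstance(hidden_sizes, int):
        return [hidden_sizes] * hidden_layers  # one size for all hidden layers
    if len(hidden_sizes) != hidden_layers:
        raise ValueError("hidden_sizes must have one entry per hidden layer")
    return list(hidden_sizes)

print(resolve_hidden_sizes(2, None, 5))    # [5, 5]
print(resolve_hidden_sizes(3, 10, 5))      # [10, 10, 10]
print(resolve_hidden_sizes(2, [8, 4], 5))  # [8, 4]
```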
- function¶
the function that wraps and executes the leabra network
- Type
LeabraFunction
- input_size¶
an integer specifying how many units are in (the size of) the first layer (input) of the leabra network.
- Type
int : default 1
- output_size¶
an integer specifying how many units are in (the size of) the final layer (output) of the leabra network.
- Type
int : default 1
- hidden_layers¶
an integer specifying how many hidden layers are in the leabra network.
- Type
int : default 0
- hidden_sizes¶
an integer or list of integers, specifying the size of each hidden layer.
- Type
int or List[int] : default input_size
- training_flag¶
a boolean specifying whether the leabra network should be learning. If True, the leabra network will adjust its weights using the “leabra” algorithm, based on the training pattern (which is read from its second InputPort). The training_flag attribute can be changed after initialization, causing the leabra network to start or stop learning.
- Type
boolean
- quarter_size¶
an integer specifying how many times the Leabra network cycles each time it is run. Lower values of quarter_size result in shorter execution times, though very low values may cause slight fluctuations in output. Lower values of quarter_size also effectively reduce the magnitude of learning weight changes during a given trial.
- Type
int : default 50
- network¶
the leabra.Network object which is executed by the LeabraMechanism. For more info about leabra Networks, please see the leabra package on GitHub.
- Type
leabra.Network
- Returns
instance of LeabraMechanism
- Return type
LeabraMechanism