
AutodiffComposition¶

Related

  • Learning in a Composition

  • BIAS Nodes

Contents¶

  • Overview

  • Creating an AutodiffComposition
    • Only one OutputPort per Node
    • No Modulatory Components
    • No Bias Parameters
    • Nesting
    • Learning Rates and Optimizer Params
    • Exchanging Parameters with Pytorch Modules
    • No Post-construction Modification

  • Execution
    • PyTorch mode
    • LLVM mode
    • Python mode
    • Nested Execution and Modulation
    • Logging

  • Examples

  • Class Reference

Overview¶

AutodiffComposition is a subclass of Composition for constructing and training feedforward neural networks, using either direct compilation (to LLVM) or automatic conversion to PyTorch, both of which considerably accelerate training (by as much as three orders of magnitude) compared to the standard implementation of learning in a Composition. Although an AutodiffComposition is constructed and executed in much the same way as a standard Composition, it is largely restricted to feedforward neural networks using supervised learning, and in particular the backpropagation learning algorithm, although it can also be used for some forms of unsupervised learning that are supported in PyTorch (e.g., self-organized maps).

Creating an AutodiffComposition¶

An AutodiffComposition can be created by calling its constructor, and then adding Components using the standard Composition methods for doing so (e.g., add_node, add_projection, add_linear_processing_pathway, etc.). The constructor also includes a number of parameters that are specific to the AutodiffComposition (see Class Reference for a list of these parameters, and examples below). While an AutodiffComposition can generally be created using the same methods as a standard Composition, there are a few restrictions that apply to its construction, summarized below.

Only one OutputPort per Node¶

The Nodes of an AutodiffComposition currently can have only one OutputPort, though that can have more than one efferent MappingProjection. Nodes can also have more than one InputPort, each of which can receive more than one afferent Projection.

No Modulatory Components¶

All of the Components in an AutodiffComposition must be amenable to learning, which means that no ModulatoryMechanisms can be included in an AutodiffComposition. Specifically, this precludes any learning components, ControlMechanisms, or a controller.

Learning Components. An AutodiffComposition cannot itself include any learning components (i.e., LearningMechanisms, LearningSignals, or LearningProjections, nor the ComparatorMechanism or ObjectiveMechanism used to compute the loss for learning). These are constructed automatically when learning is executed in Python mode or LLVM mode, and PyTorch-compatible Components are constructed when it is executed in PyTorch mode.

Control Components. An AutodiffComposition also cannot include any ControlMechanisms or a controller. However, it can include Mechanisms that are subject to modulatory control (see Figure, and modulation) by ControlMechanisms outside the Composition, including the controller of a Composition within which the AutodiffComposition is nested. That is, an AutodiffComposition can be nested in a Composition that has other such Components (see Nested Execution and Modulation below).

No Bias Parameters¶

AutodiffComposition does not (currently) support the automatic construction of separate bias parameters. Thus, when constructing the PyTorch version of an AutodiffComposition, the bias parameter of any PyTorch modules is set to False. However, biases can be implemented using BIAS Nodes.
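For example, the following is a minimal sketch of implementing a bias for a Node using a BIAS Node, assuming a version that supports NodeRole.BIAS (the names used here are hypothetical):

>>> import psyneulink as pnl
>>> hidden = pnl.ProcessingMechanism(name='hidden', input_shapes=2)
>>> bias = pnl.ProcessingMechanism(name='bias', default_variable=[[1]])
>>> autodiff = pnl.AutodiffComposition()
>>> autodiff.add_node(hidden)
>>> autodiff.add_node(bias, required_roles=pnl.NodeRole.BIAS)
>>> # the learnable matrix of this Projection serves as the bias vector
>>> autodiff.add_projection(sender=bias, receiver=hidden)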

Nesting¶

An AutodiffComposition can be nested inside another Composition for learning, and there can be any level of such nesting. However, all of the nested Compositions must be AutodiffCompositions. Furthermore, all nested Compositions use the learning_rate specified for the outermost Composition, whether that is specified in the call to its learn method or its constructor, or its default value is used (see learning_rate below for additional details).

Projections from Nodes in an immediately enclosing outer Composition to the input_CIM of a nested Composition, and from its output_CIM to Nodes in the outer Composition, are subject to learning; however, those within the nested Composition itself (i.e., from its input_CIM to its INPUT Nodes, and from its OUTPUT Nodes to its output_CIM) are not, as they serve simply as conduits of information between the outer Composition and the nested one.

Warning

Nested Compositions are supported for learning only in PyTorch mode, and will cause an error if the learn method of an AutodiffComposition is executed in Python mode or LLVM mode.

Learning Rates and Optimizer Params¶

The optimizer_params argument of the constructor can be used to specify parameters for the optimizer used for learning by the AutodiffComposition. At present, this is restricted to overriding the learning_rate Parameter of the Composition (used as the default by the optimizer) to assign individual learning rates to specific Projections. This is done by specifying optimizer_params as a dict, in which each key is a reference to a learnable MappingProjection in the AutodiffComposition, and the corresponding value specifies its learning_rate. Subclasses of AutodiffComposition may involve different forms of specification and/or support other parameters for the optimizer. Projections that are not specified in optimizer_params use, in order of precedence: the learning_rate specified in the call to the AutodiffComposition’s learn method, the learning_rate argument of its constructor, or the default value for the AutodiffComposition.
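For example, the following is a minimal sketch (with hypothetical names) in which one Projection is given its own learning_rate while any others fall back to the Composition-wide default:

>>> import psyneulink as pnl
>>> in_mech = pnl.ProcessingMechanism(input_shapes=3)
>>> out_mech = pnl.ProcessingMechanism(input_shapes=2)
>>> proj = pnl.MappingProjection(sender=in_mech, receiver=out_mech)
>>> autodiff = pnl.AutodiffComposition(
...     pathways=[[in_mech, proj, out_mech]],
...     learning_rate=0.01,             # default for all learnable Projections
...     optimizer_params={proj: 0.1})   # individual rate for proj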

Exchanging Parameters with Pytorch Modules¶

The AutodiffComposition’s copy_torch_param_to_projection_matrix and copy_projection_matrix_to_torch_param methods can be used to exchange weight matrices between the parameters of a PyTorch module and the matrix Parameter of a MappingProjection in the AutodiffComposition. Pytorch Parameters can be referenced flexibly, either by the Parameter object itself, or by the module and either the name or index of the Parameter in the module’s state_dict or parameter list, respectively. Slices of PyTorch Parameters can also be used, for cases in which the matrix of a Projection corresponds to only a subpart of the PyTorch Parameter (e.g., for GRUComposition). Both methods return the item assigned.
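For example, the following is a minimal sketch, reusing my_autodiff and my_projection from the Examples below, in which a torch Parameter with the same shape as the Projection’s matrix is copied in each direction:

>>> import torch
>>> w = torch.nn.Parameter(torch.randn(3, 2))  # same shape as my_projection's matrix
>>> # copy the torch Parameter into the Projection's matrix (returns an ndarray)
>>> matrix = my_autodiff.copy_torch_param_to_projection_matrix(
...     projection=my_projection, torch_param=w)
>>> # copy the Projection's matrix back into the torch Parameter (returns a Tensor)
>>> tensor = my_autodiff.copy_projection_matrix_to_torch_param(
...     projection=my_projection, torch_param=w)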

No Post-construction Modification¶

Mechanisms or Projections should not be added to or deleted from an AutodiffComposition after it has been executed. Unlike an ordinary Composition, AutodiffComposition does not support this functionality.

Execution¶

An AutodiffComposition’s run, execute, and learn methods are the same as for a Composition. However, the execution_mode argument of the learn method has different effects than for a standard Composition, determining whether it uses LLVM compilation or translation to PyTorch to execute learning. These are each described in greater detail below, and summarized in this table, which provides a comparison of the different modes of execution for an AutodiffComposition and a standard Composition.

PyTorch mode¶

This is the default for an AutodiffComposition, but can be specified explicitly by setting execution_mode = ExecutionMode.PyTorch in the learn method (see example in Basics and Primer). In this mode, the AutodiffComposition is automatically translated to a PyTorch model for learning. This is comparable in speed to LLVM compilation, but provides greater flexibility, including the ability to include nested AutodiffCompositions in learning. Although it is best suited for use with supervised learning, it can also be used for some forms of unsupervised learning that are supported in PyTorch (e.g., self-organized maps).
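For example, a minimal sketch, reusing my_autodiff and input_dict from the Examples below:

>>> my_autodiff.learn(inputs=input_dict, execution_mode=pnl.ExecutionMode.PyTorch)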

Note

While specifying ExecutionMode.PyTorch in the learn method of an AutodiffComposition causes it to use PyTorch for training, specifying this in the run method causes it to be executed using the Python interpreter (and not PyTorch); this is so that any modulation can take effect during execution (see Nested Execution and Modulation below), which is not supported by PyTorch.

Warning

  • Specifying ExecutionMode.LLVMRun or ExecutionMode.PyTorch in the learn() method of a standard Composition causes an error.

LLVM mode¶

This is specified by setting execution_mode = ExecutionMode.LLVMRun in the learn method of an AutodiffComposition. This provides the fastest performance, but is limited to supervised learning using the BackPropagation algorithm. This can be run using standard forms of loss, including mean squared error (MSE) and cross entropy, by specifying these in the loss_spec argument of the constructor (see AutodiffComposition for additional details, and Compilation Modes for more information about executing a Composition in compiled mode).
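For example, a minimal sketch, again reusing my_autodiff and input_dict from the Examples below (a different loss can be specified in the constructor, e.g., loss_spec=Loss.CROSS_ENTROPY):

>>> my_autodiff.learn(inputs=input_dict, execution_mode=pnl.ExecutionMode.LLVMRun)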

Note

Specifying ExecutionMode.LLVMRun in either the learn or run method of an AutodiffComposition causes it to (attempt to) use compiled execution in both cases; this is because LLVM compilation supports the use of modulation in PsyNeuLink models (in contrast to PyTorch mode; see the note under PyTorch mode above).

Python mode¶

An AutodiffComposition can also be run using the standard PsyNeuLink learning components. However, this cannot be used if the AutodiffComposition has any nested Compositions, irrespective of whether they are ordinary Compositions or AutodiffCompositions.

Nested Execution and Modulation¶

Like any other Composition, an AutodiffComposition may be nested inside another (see example below). However, during learning, none of the internal Components of the AutodiffComposition (e.g., intermediate layers of a neural network model) are accessible to the other Components of the outer Composition (e.g., as sources of information, or for modulation). When it is executed using its run method, however, the AutodiffComposition functions like any other, and all of its internal Components are accessible to other Components of the outer Composition. Thus, as long as access to its internal Components is not needed during learning, an AutodiffComposition can be trained, and then used to execute the trained Composition like any other.

Logging¶

Logging in AutodiffCompositions follows the same procedure as logging in a Composition. However, since an AutodiffComposition internally converts all of its Mechanisms either to LLVM or to an equivalent PyTorch model, its inner components are not actually executed. This means that there is limited support for logging parameters of components inside an AutodiffComposition; currently, the only supported parameters are (see the sketch following this list):

  1. the matrix parameter of Projections

  2. the value parameter of its inner components
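The following is a minimal sketch, reusing my_projection, my_autodiff, and input_dict from the Examples below, and assuming the standard Log interface (set_log_conditions and log; see Log for details):

>>> my_projection.set_log_conditions('matrix')  # log the matrix during learning
>>> my_autodiff.learn(inputs=input_dict)
>>> my_projection.log.print_entries()           # display the logged values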

Examples¶

The following is an example showing how to create a simple AutodiffComposition, specify its inputs and targets, and run it with learning enabled and disabled:

>>> import psyneulink as pnl
>>> import numpy as np
>>> # Set up PsyNeuLink Components
>>> my_mech_1 = pnl.TransferMechanism(function=pnl.Linear, input_shapes = 3)
>>> my_mech_2 = pnl.TransferMechanism(function=pnl.Linear, input_shapes = 2)
>>> my_projection = pnl.MappingProjection(matrix=np.random.randn(3,2),
...                     sender=my_mech_1,
...                     receiver=my_mech_2)
>>> # Create AutodiffComposition
>>> my_autodiff = pnl.AutodiffComposition()
>>> my_autodiff.add_node(my_mech_1)
>>> my_autodiff.add_node(my_mech_2)
>>> my_autodiff.add_projection(sender=my_mech_1, projection=my_projection, receiver=my_mech_2)
>>> # Specify inputs and targets
>>> my_inputs = {my_mech_1: [[1, 2, 3]]}
>>> my_targets = {my_mech_2: [[4, 5]]}
>>> input_dict = {"inputs": my_inputs, "targets": my_targets, "epochs": 2}
>>> # Run Composition in learning mode
>>> my_autodiff.learn(inputs = input_dict)
>>> # Run Composition in test mode
>>> my_autodiff.run(inputs = input_dict['inputs'])

The following shows how the AutodiffComposition created in the previous example can be nested and run inside another Composition:

>>> # Create outer composition
>>> my_outer_composition = pnl.Composition()
>>> my_outer_composition.add_node(my_autodiff)
>>> # Specify dict containing inputs and targets for nested Composition
>>> training_input = {my_autodiff: input_dict}
>>> # Run in learning mode
>>> result1 = my_outer_composition.learn(inputs=training_input)

Class Reference¶

class psyneulink.library.compositions.autodiffcomposition.AutodiffComposition(pathways=None, optimizer_type='sgd', loss_spec=Loss.MSE, weight_decay=0, learning_rate=None, optimizer_params=None, disable_learning=False, force_no_retain_graph=False, refresh_losses=False, synch_projection_matrices_with_torch='run', synch_node_variables_with_torch=None, synch_node_values_with_torch='run', synch_results_with_torch='run', retain_torch_trained_outputs='minibatch', retain_torch_targets='minibatch', retain_torch_losses='minibatch', device=None, disable_cuda=True, cuda_index=None, name='autodiff_composition', **kwargs)¶

Subclass of Composition that trains models using either LLVM compilation or PyTorch; see Composition for additional arguments and attributes of the constructor.

Parameters:
  • optimizer_type (str : default 'sgd') – the kind of optimizer used in training. The current options are ‘sgd’ or ‘adam’.

  • loss_spec (Loss or PyTorch loss function : default Loss.MSE) – specifies the loss function for training; see Loss for arguments.

  • weight_decay (float : default 0) – specifies the L2 penalty (which discourages large weights) used by the optimizer.

  • learning_rate (float : default 0.001) – specifies the learning rate passed to the optimizer if none is specified in the learn method of the AutodiffComposition; see learning_rate for additional details.

  • optimizer_params (Dict[str: value]) – specifies parameters for the optimizer used for learning by the AutodiffComposition (see Learning Rates and Optimizer Params for details of specification).

  • disable_learning (bool: default False) – specifies whether the AutodiffComposition should disable learning when run in learning mode.

  • synch_projection_matrices_with_torch (LearningScale : default RUN) – specifies the default for the AutodiffComposition for when to copy Pytorch parameters to PsyNeuLink Projection matrices (connection weights), which can be overridden by specifying the synch_projection_matrices_with_torch argument in the learn method; see synch_projection_matrices_with_torch for additional details.

  • synch_node_variables_with_torch (LearningScale : default None) – specifies the default for the AutodiffComposition for when to copy the current input to Pytorch nodes to the PsyNeuLink variable attribute of the corresponding PsyNeuLink nodes, which can be overridden by specifying the synch_node_variables_with_torch argument in the learn method; see synch_node_variables_with_torch for additional details.

  • synch_node_values_with_torch (LearningScale : default RUN) – specifies the default for the AutodiffComposition for when to copy the current output of Pytorch nodes to the PsyNeuLink value attribute of the corresponding PsyNeuLink nodes, which can be overridden by specifying the synch_node_values_with_torch argument in the learn method; see synch_node_values_with_torch for additional details.

  • synch_results_with_torch (LearningScale : default RUN) – specifies the default for the AutodiffComposition for when to copy the outputs of the Pytorch model to the AutodiffComposition’s results attribute, which can be overridden by specifying the synch_results_with_torch argument in the learn method. Note that this differs from retain_torch_trained_outputs, which specifies the frequency at which the outputs of the PyTorch model are tracked, all of which are stored in the AutodiffComposition’s torch_trained_outputs attribute at the end of the run; see synch_results_with_torch for additional details.

  • retain_torch_trained_outputs (LearningScale : default MINIBATCH) – specifies the default for the AutodiffComposition for the scale at which the outputs of the Pytorch model are tracked, all of which are stored in the AutodiffComposition’s torch_trained_outputs attribute at the end of the run; this can be overridden by specifying the retain_torch_trained_outputs argument in the learn method. Note that this differs from synch_results_with_torch, which specifies the frequency with which values are copied to the AutodiffComposition’s results attribute; see retain_torch_trained_outputs for additional details.

  • retain_torch_targets (LearningScale : default MINIBATCH) – specifies the default for the AutodiffComposition for when to copy the targets used for training the Pytorch model to the AutodiffComposition’s torch_targets attribute, which can be overridden by specifying the retain_torch_targets argument in the learn method; see retain_torch_targets for additional details.

  • retain_torch_losses (LearningScale : default MINIBATCH) – specifies the default for the AutodiffComposition for the scale at which the losses of the Pytorch model are tracked, all of which are stored in the AutodiffComposition’s torch_losses attribute at the end of the run; see retain_torch_losses for additional details.

  • device (torch.device : default device-dependent) – specifies the device on which the model is run. If None, the device is set to ‘cuda’ if available, then ‘mps’, otherwise ‘cpu’.
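The following is a minimal sketch of a constructor call using several of these arguments:

>>> import psyneulink as pnl
>>> ac = pnl.AutodiffComposition(optimizer_type='adam',
...                              loss_spec=pnl.Loss.CROSS_ENTROPY,
...                              learning_rate=0.01,
...                              weight_decay=0.0001,
...                              name='my_autodiff_composition')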

pytorch_representation¶

represents the PyTorch model of the AutodiffComposition, which is created when the AutodiffComposition is run in PyTorch mode.

Type:

PytorchCompositionWrapper : default None

optimizer¶

the optimizer used for training. Depends on the optimizer_type, learning_rate, and weight_decay arguments from initialization.

Type:

PyTorch optimizer function

loss¶

the loss function used for training. Depends on the loss_spec argument from initialization.

Type:

PyTorch loss function

learning_rate¶

determines the default learning_rate passed to the optimizer, which is applied to all Projections in the AutodiffComposition that are learnable and for which individual rates have not been specified (for how to do the latter, see Learning Rates and Optimizer Params).

Note

At present, an outermost Composition’s learning rate is applied to any nested Compositions, whether it is specified in the call to its learn method or its constructor, or its default value is used.

Hint

To disable updating of a particular MappingProjection in an AutodiffComposition, set either the learnable parameter of its constructor, or its learning_rate specification in the optimizer_params argument of the AutodiffComposition’s constructor, to False (see Learning Rates and Optimizer Params); this applies to MappingProjections at any level of nesting. A minimal sketch is shown below.

Type:

float or bool
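The following is a minimal sketch of the Hint above, using hypothetical Mechanisms a and b:

>>> import psyneulink as pnl
>>> a = pnl.ProcessingMechanism(input_shapes=3)
>>> b = pnl.ProcessingMechanism(input_shapes=2)
>>> frozen = pnl.MappingProjection(sender=a, receiver=b, learnable=False)
>>> # or, equivalently, freeze it via optimizer_params in the constructor:
>>> ac = pnl.AutodiffComposition(pathways=[[a, frozen, b]],
...                              optimizer_params={frozen: False})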

synch_projection_matrices_with_torch¶

determines when to copy PyTorch parameters to PsyNeuLink Projection matrices (connection weights) if this is not specified in the call to learn. Copying more frequently keeps the PsyNeuLink representation more closely synchronized with parameter updates in Pytorch, but slows performance (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, MINIBATCH, EPOCH or RUN

synch_node_variables_with_torch¶

determines when to copy the current input to Pytorch functions to the PsyNeuLink variable attribute of the corresponding PsyNeuLink nodes, if this is not specified in the call to learn. Copying more frequently keeps the PsyNeuLink representation more closely synchronized with parameter updates in Pytorch, but can slow performance (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, TRIAL, MINIBATCH, EPOCH, RUN or None

synch_node_values_with_torch¶

determines when to copy the current output of Pytorch functions to the PsyNeuLink value attribute of the corresponding PsyNeuLink nodes, if this is not specified in the call to learn. Copying more frequently keeps the PsyNeuLink representation more closely synchronized with parameter updates in Pytorch, but can also slow performance (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, MINIBATCH, EPOCH or RUN

synch_results_with_torch¶

determines when to copy the current outputs of Pytorch nodes to the PsyNeuLink results attribute of the AutodiffComposition if this is not specified in the call to learn. Copying more frequently keeps the PsyNeuLink representation more closely synchronized with parameter updates in Pytorch, but slows performance (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, TRIAL, MINIBATCH, EPOCH or RUN

retain_torch_trained_outputs¶

determines the scale at which the outputs of the Pytorch model are tracked, all of which are stored in the AutodiffComposition’s results attribute at the end of the run if this is not specified in the call to learn (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, MINIBATCH, EPOCH, RUN or None

retain_torch_targets¶

determines the scale at which the targets used for training the Pytorch model are tracked, all of which are stored in the AutodiffComposition’s targets attribute at the end of the run if this is not specified in the call to learn (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, TRIAL, MINIBATCH, EPOCH, RUN or None

retain_torch_losses¶

determines the scale at which the losses of the Pytorch model are tracked, all of which are stored in the AutodiffComposition’s torch_losses attribute at the end of the run if this is not specified in the call to learn (see AutodiffComposition_PyTorch_LearningScale for information about settings).

Type:

OPTIMIZATION_STEP, MINIBATCH, EPOCH, RUN or None
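The following is a minimal sketch of overriding these defaults in a call to learn, reusing my_autodiff and input_dict from the Examples above, and assuming the LearningScale keywords are exposed at the package level:

>>> my_autodiff.learn(inputs=input_dict,
...                   synch_projection_matrices_with_torch=pnl.LearningScale.EPOCH,
...                   retain_torch_losses=pnl.LearningScale.RUN)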

torch_trained_outputs¶

stores the outputs (converted to np arrays) of the Pytorch model trained during learning, at the frequency specified by retain_torch_trained_outputs if it is set to MINIBATCH, EPOCH, or RUN; see retain_torch_trained_outputs for additional details.

Type:

List[ndarray]

torch_targets¶

stores the targets used for training the Pytorch model during learning at the frequency specified by retain_torch_targets if it is set to MINIBATCH, EPOCH, or RUN; see retain_torch_targets for additional details.

Type:

List[ndarray]

torch_losses¶

stores the average loss after each weight update (i.e., each minibatch) during learning, at the frequency specified by retain_torch_losses if it is set to MINIBATCH, EPOCH, or RUN; see retain_torch_losses for additional details.

Type:

list of floats

last_saved_weights¶

path for file to which weights were last saved.

Type:

path

last_loaded_weights¶

path for file from which weights were last loaded.

Type:

path

device¶

the device on which the model is run.

Type:

torch.device

class PytorchMechanismWrapper(mechanism, composition, component_idx, use, dtype, device, subclass_specifies_function=False, context=None)¶

Wrapper for a Mechanism in a PytorchCompositionWrapper. These comprise the nodes of the PytorchCompositionWrapper, and generally correspond to functions in a Pytorch model.

mechanism¶

the PsyNeuLink Mechanism being wrapped.

Type:

Mechanism

composition¶

the AutodiffComposition to which the Mechanism being wrapped belongs (and for which the PytorchCompositionWrapper – to which the PytorchMechanismWrapper belongs – is the pytorch_representation).

Type:

AutodiffComposition

afferents¶

list of PytorchProjectionWrapper objects that project to the PytorchMechanismWrapper.

Type:

List[PytorchProjectionWrapper]

input¶

most recent input to the PytorchMechanismWrapper.

Type:

torch.Tensor

function¶

Pytorch version of the Mechanism’s function assigned in its __init__.

Type:

_gen_pytorch_fct

integrator_function¶

Pytorch version of the Mechanism’s integrator_function assigned in its __init__ if Mechanism has an integrator_function; this assumes the Mechanism also has an integrator_mode attribute that is used to determine whether to execute the integrator_function first, and use its result as the input to its function.

Type:

_gen_pytorch_fct

output¶

most recent output of the PytorchMechanismWrapper.

Type:

torch.Tensor

efferents¶

list of PytorchProjectionWrapper objects that project from the PytorchMechanismWrapper.

Type:

List[PytorchProjectionWrapper]

exclude_from_gradient_calc¶

used to prevent a node from being included in the Pytorch gradient calculation by excluding it from calls to forward() and backward(). If AFTER is specified, the node is executed at the end of the update_learning_parameters method; BEFORE is not currently supported.

Type:

bool or str[BEFORE | AFTER]: False

_use¶

designates the uses of the Mechanism, specified by the following keywords (see PytorchCompositionWrapper docstring for additional details):

  • LEARNING: inputs and function Parameters are used for actual execution of the corresponding Pytorch Module;

  • SYNCH: used to store results of executing a Pytorch module that are then transferred to the value Parameter of the PytorchMechanismWrapper’s mechanism;

  • SHOW_PYTORCH: Mechanism is included when the AutodiffComposition’s show_graph method is used with the show_pytorch option to display its pytorch_representation; if it is not specified, the Mechanism is not displayed when the AutodiffComposition’s show_graph method is called, even if the show_pytorch option is specified.

Type:

list[LEARNING, SYNCH]

add_afferent(afferent)¶

Add ProjectionWrapper for afferent to MechanismWrapper, for use in call to collect_afferents.

add_efferent(efferent)¶

Add ProjectionWrapper for efferent from MechanismWrapper. Implemented for completeness; not currently used.

collect_afferents(batch_size, port=None, inputs=None)¶

Return afferent projections for input_port(s) of the Mechanism. If there is only one input_port, return the sum of its afferents (for those in the Composition). If there are multiple input_ports, return a tensor (or list of tensors if input ports are ragged) of shape:

(batch, input_port, projection, …)

where the ellipsis represents 1 or more dimensions for the values of the projected afferent.


execute(variable, optimization_num, synch_with_pnl_options, context=None)¶

Execute Mechanism’s _gen_pytorch version of function on variable. Enforces the result to be 2d, and assigns it to self.output.

Return type:

Tensor

set_pnl_variable_and_values(set_variable=False, set_value=True, context=None)¶

Set the state of the PytorchMechanismWrapper’s Mechanism. Note: execute_mech=True requires that variable=True.

pytorch_composition_wrapper_type¶

alias of PytorchCompositionWrapper

pytorch_mechanism_wrapper_type¶

alias of PytorchMechanismWrapper

infer_backpropagation_learning_pathways(execution_mode, context=None)¶

Create backpropagation learning pathways for every Input Node -> Output Node pathway. Flattens nested compositions:

  • only includes the Projections in the outer Composition to/from the CIMs of the nested Composition (i.e., to input_CIMs and from output_CIMs) – the ones that should be learned;

  • excludes Projections from/to CIMs in the nested Composition (from input_CIMs and to output_CIMs), as those should remain identity Projections;

see PytorchCompositionWrapper for table of how Projections are handled and further details.

Returns list of target nodes for each pathway

Return type:

list

get_target_nodes(execution_mode=<ExecutionMode.PyTorch: 1>)¶

Return TARGET Nodes of the AutodiffComposition.

set_weights(pnl_proj, weights, context=None)¶

Set weights for specified Projection.

learn(*args, synch_projection_matrices_with_torch=NotImplemented, synch_node_variables_with_torch=NotImplemented, synch_node_values_with_torch=NotImplemented, synch_results_with_torch=NotImplemented, retain_torch_trained_outputs=NotImplemented, retain_torch_targets=NotImplemented, retain_torch_losses=NotImplemented, context=None, base_context=<psyneulink.core.globals.context.Context object>, skip_initialization=False, **kwargs)¶

Override to handle synch and retain args. Note: defaults for synch and retain args are set to NotImplemented, so that the user can specify None if they want to locally override the default values for the AutodiffComposition (see docstrings for run() and _parse_synch_and_retain_args() for additional details).

Return type:

list

execute(inputs=None, num_trials=None, minibatch_size=1, optimizations_per_minibatch=1, do_logging=False, scheduler=None, termination_processing=None, call_before_minibatch=None, call_after_minibatch=None, call_before_time_step=None, call_before_pass=None, call_after_time_step=None, call_after_pass=None, reset_stateful_functions_to=None, context=None, base_context=<psyneulink.core.globals.context.Context object>, clamp_input='soft_clamp', targets=None, runtime_params=None, execution_mode=<ExecutionMode.PyTorch: 1>, skip_initialization=False, synch_with_pnl_options=None, retain_in_pnl_options=None, report_output=ReportOutput.OFF, report_params=ReportParams.OFF, report_progress=ReportProgress.OFF, report_simulations=ReportSimulations.OFF, report_to_devices=None, report=None, report_num=None)¶

Override to execute autodiff_forward() in learning mode if execution_mode is not Python.

Return type:

ndarray

run(*args, synch_projection_matrices_with_torch=NotImplemented, synch_node_variables_with_torch=NotImplemented, synch_node_values_with_torch=NotImplemented, synch_results_with_torch=NotImplemented, retain_torch_trained_outputs=NotImplemented, retain_torch_targets=NotImplemented, retain_torch_losses=NotImplemented, batched_results=False, context=None, **kwargs)¶

Override to handle synch and retain args if run is called directly, rather than from learn(). Note: defaults for synch and retain args are NotImplemented, so that the user can specify None if they want to locally override the default values for the AutodiffComposition (see _parse_synch_and_retain_args() for details). This is distinct from the user assigning the Parameter default_value(s), which is done in the AutodiffComposition constructor and handled by the Parameter._specify_none attribute.

save(path=None, directory=None, filename=None, context=None)¶

Saves all weight matrices for all MappingProjections in the AutodiffComposition.

Parameters:
  • path (Path, PosixPath or str : default None) – path specification; must be a legal path specification in the filesystem.

  • directory (str : default current working directory) – directory where matrices for all MappingProjections in the AutodiffComposition are saved.

  • filename (str : default <name of AutodiffComposition>_matrix_wts.pnl) – filename in which matrices for all MappingProjections in the AutodiffComposition are saved.

Note: Matrices are saved in PyTorch state_dict format.

Return type:

Path

load(path=None, directory=None, filename=None, context=None, weights_only=False)¶

Loads all weight matrices for all MappingProjections in the AutodiffComposition from file.

Parameters:
  • path (PosixPath : default None) – path of the file in which MappingProjection matrices are stored; must be a legal PosixPath object; if it is specified, directory and filename are ignored.

  • directory (str : default current working directory) – directory where MappingProjection matrices are stored.

  • filename (str : default <name of AutodiffComposition>_matrix_wts.pnl) – name of file in which MappingProjection matrices are stored.

Note: Matrices must be stored in PyTorch state_dict format.
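The following is a minimal sketch of saving and then reloading the weights of a trained AutodiffComposition, reusing my_autodiff from the Examples above:

>>> wts_path = my_autodiff.save(directory='trained', filename='my_autodiff_wts.pnl')
>>> my_autodiff.load(path=wts_path)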

copy_torch_param_to_projection_matrix(projection, torch_param, torch_module=None, torch_slice=None, validate=True, context=None)¶

Assign a torch Parameter to the matrix Parameter of the specified MappingProjection. Returns torch_param as the np.ndarray assigned to the matrix Parameter of projection.

Parameters:
  • projection (str or MappingProjection) – specifies MappingProjection to which the torch_param is assigned as its matrix Parameter; if specified as a str, it must be the name of a MappingProjection in the AutodiffComposition.

  • torch_param (torch.nn.Parameter, str or int) – specifies the torch Parameter to assign to the matrix Parameter of projection; if it is a torch.nn.Parameter or torch.Tensor, then the torch_module argument does not need to be specified; if specified as a str or int, it must be the name of a torch Parameter (used to access it in the state_dict) or its index (used to access it in the parameter list) of the torch_module argument, which must also be specified.

  • torch_module (torch.nn.Module : default None) – specifies a torch.nn.Module containing the torch_param assigned to the matrix Parameter of projection; this does not need to be specified if torch_param is a torch.nn.Parameter or torch.Tensor, but must be specified if torch_param is a str or int.

  • torch_slice (slice : default None) – specifies a slice of torch_param to assign to the matrix Parameter of projection; if it is not specified, the entire tensor of torch_param is used.

    Warning

    torch_slice should not be specified if the specification of torch_param already takes this into account.

  • validate (bool : default True) – specifies whether to validate the projection and torch_param arguments; setting it to False results in more efficient processing if this method is called frequently; however, invalid arguments will raise standard Python exceptions rather than more informative AutodiffComposition errors, and unexpected results may go unnoticed.

    Warning

    if validate is False, for efficiency: projection must be a MappingProjection, torch_param must be a torch.Tensor, and both torch_module and torch_slice are ignored.

  • context (Context or None : default most recent Context) – specifies the context to use for the value of Projection.matrix; if it is not provided, then a default Context is constructed using the name of the AutodiffComposition as the execution_id, commensurate with the one used by default for its execution.

Return type:

ndarray

copy_projection_matrix_to_torch_param(projection, torch_param, torch_module=None, torch_slice=None, validate=True, context=None)¶

Assign the matrix Parameter of a MappingProjection to a Pytorch Parameter. Returns the torch.Tensor assigned to torch_param.

Parameters:
  • projection (str or MappingProjection) – specifies the MappingProjection whose matrix is assigned to torch_param; if specified as a str, it must be the name of a MappingProjection in the AutodiffComposition.

  • torch_param (torch.nn.Parameter, str or int) – specifies the torch Parameter to which the matrix of the Projection is assigned; if it is a torch.nn.Parameter or torch.Tensor, then the torch_module argument does not need to be specified; if specified as a str or int, it must be the name of a torch Parameter (used to access it in the state_dict) or its index (used to access it in the parameter list) of the torch_module argument, which must also be specified.

  • torch_module (torch.nn.Module : default None) – specifies a torch.nn.Module containing torch_param to which the projection’s matrix Parameter is assigned; this does not need to be specified if torch_param is a torch.nn.Parameter or torch.Tensor, but must be specified if torch_param is a str or int.

  • torch_slice (slice : default None) – specifies a slice of torch_param to which the matrix Parameter of projection is assigned; if it is not specified, the entire tensor of torch_param is used.

    Warning

    torch_slice should not be specified if the specification of torch_param already takes this into account.

  • validate (bool : default True) – specifies whether to validate the projection and torch_param arguments; setting it to False results in more efficient processing if this method is called frequently; however, invalid arguments will raise standard Python exceptions rather than more informative AutodiffComposition errors, and unexpected results may go unnoticed.

    Warning

    if validate is False, for efficiency: projection must be a MappingProjection, torch_param must be a torch.Tensor, and both torch_module and torch_slice are ignored.

  • context (Context or None : default most recent Context) – specifies the context to use for the value of Projection.matrix; if it is not provided, then a default Context is constructed using the name of the AutodiffComposition as the execution_id, commensurate with the one used by default for its execution.

Return type:

Tensor

_validate_torch_param_and_projection(torch_param, torch_module, torch_slice, projection_spec)¶

Validate torch and projection arguments for copying between PyTorch and AutodiffComposition. Return tuple of torch.Tensor and MappingProjection.

Return type:

tuple

show_graph(*args, **kwargs)¶

Override to use PytorchShowGraph if show_pytorch is True.

property _dependent_components¶

Returns: Components that must have values in a given Context for this Component to execute in that Context

Return type:

Iterable[Component]
