_BaseOptimizer patches

_BaseOptimizer.__getstate__[source]

_BaseOptimizer.__getstate__()

Pickling opt state should include param_groups and defaults

_BaseOptimizer.__setstate__[source]

_BaseOptimizer.__setstate__(data)

Pickling opt state should include param_groups and defaults

Patch the fastai.optimizer._BaseOptimizer __getstate__ and __setstate__ methods, which are used when pickling fastai optimizers. A sketch of such a patch is shown below.
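
A minimal sketch of what the patch can look like, assuming fastcore's @patch decorator; the exact attribute handling in the library may differ:

```python
# Sketch of the patch, assuming fastcore's @patch decorator.
# The library's actual implementation may differ in detail.
from fastcore.basics import patch
from fastai.optimizer import _BaseOptimizer

@patch
def __getstate__(self: _BaseOptimizer):
    "Pickling opt state should include param_groups and defaults"
    state = self.__dict__.copy()
    # `param_groups` is a property on fastai optimizers, so it is absent
    # from `__dict__`; expose it (and `defaults`) explicitly for torch_xla.
    state['param_groups'] = self.param_groups
    state['defaults'] = getattr(self, 'defaults', {})  # assumption: fall back to {}
    return state

@patch
def __setstate__(self: _BaseOptimizer, data):
    "Pickling opt state should include param_groups and defaults"
    param_groups = data.pop('param_groups', None)
    data.pop('defaults', None)  # hypers in __dict__ already carry these values
    self.__dict__.update(data)
    if param_groups is not None:
        # Restore through the property setter so hypers stay in sync.
        self.param_groups = param_groups
```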

This should fix the bug where running the learner on multiple TPU cores under XLA triggers an error: the _fetch_gradients(optimizer) function in the torch_xla.core.xla_model module fails on the statement for param_group in optimizer.__getstate__()['param_groups']:. The failure arises because fastai's _BaseOptimizer does not define __getstate__ and exposes param_groups only as a property, so the state Python pickles by default lacks a param_groups entry.
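
For reference, _fetch_gradients iterates over the optimizer's pickled state roughly as follows (a paraphrase of torch_xla.core.xla_model; the exact code may vary between torch_xla versions):

```python
import torch

def _fetch_gradients(optimizer):
    # Paraphrase of torch_xla.core.xla_model._fetch_gradients: it reads the
    # pickled state, so fastai optimizers must expose 'param_groups' there.
    gradients = []
    for param_group in optimizer.__getstate__()['param_groups']:
        for group, params in param_group.items():
            if group == 'params':
                for p in params:
                    if isinstance(p, torch.Tensor) and p.grad is not None:
                        gradients.append(p.grad.data)
    return gradients
```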

The patch modifies the state used for pickling (and copying) the optimizer so that it includes the param_groups and defaults.
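
With the patch applied (for example, after importing this module), the pickled state of a fastai optimizer carries both keys. A quick check, assuming the sketch patch above is in effect:

```python
import torch
from fastai.optimizer import SGD

# Build a tiny fastai optimizer and inspect its pickled state.
params = [torch.randn(4, requires_grad=True)]
opt = SGD(params, lr=0.1)

state = opt.__getstate__()
assert 'param_groups' in state and 'defaults' in state
# Hyperparameters exposed per group, e.g. lr/mom/wd:
print([k for k in state['param_groups'][0] if k != 'params'])
```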