Patch the `fastai.optimizer._BaseOptimizer` `__getstate__` and `__setstate__` methods, which are used when pickling fastai optimizers.
This should fix the bug where running the learner on multiple TPU cores on XLA triggers an error: the `_fetch_gradients(optimizer)` method in the `torch_xla.core.xla_model` module fails at the statement `for param_group in optimizer.__getstate__()['param_groups']:`.
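For context, the failing loop in `torch_xla.core.xla_model._fetch_gradients` looks roughly like this (paraphrased; the exact code may differ by torch_xla version):

```python
# Paraphrased from torch_xla.core.xla_model (version-dependent).
import torch

def _fetch_gradients(optimizer):
    gradients = []
    # Raises KeyError when __getstate__() has no 'param_groups' entry, which is
    # the case for fastai optimizers that rely on the default pickling behavior
    # (plain self.__dict__), since param_groups is a property, not an attribute.
    for param_group in optimizer.__getstate__()['param_groups']:
        for group, params in param_group.items():
            if group == 'params':
                for p in params:
                    if isinstance(p, torch.Tensor) and p.grad is not None:
                        gradients.append(p.grad.data)
    return gradients
```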
The patch modifies the state used for copying and pickling so that it includes `param_groups` and `defaults`.
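A minimal sketch of such a patch, assuming `_BaseOptimizer` exposes a `param_groups` property and that its defaults already live in `self.__dict__`; this is an illustration of the approach, not the exact patch:

```python
from fastai.optimizer import _BaseOptimizer

def _getstate(self):
    # Start from the instance attributes (the default pickling state), then add
    # the 'param_groups' key that torch_xla's _fetch_gradients looks up.
    state = self.__dict__.copy()
    state['param_groups'] = self.param_groups  # property -> concrete list of dicts
    return state

def _setstate(self, state):
    # Drop the derived 'param_groups' entry so it does not shadow the class
    # property, then restore the remaining attributes.
    state = dict(state)
    state.pop('param_groups', None)
    self.__dict__.update(state)

# Monkey-patch the pickling/copying hooks onto the fastai base optimizer class.
_BaseOptimizer.__getstate__ = _getstate
_BaseOptimizer.__setstate__ = _setstate
```

With this in place, `optimizer.__getstate__()['param_groups']` resolves to the same list of parameter-group dicts that a native PyTorch optimizer would expose, so `_fetch_gradients` can iterate it on each TPU core.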