AdagradOptimizer
class ampligraph.latent_features.AdagradOptimizer(optimizer_params, batches_count, verbose=False)

Wrapper around the Adagrad optimizer.
Methods

__init__(optimizer_params, batches_count[, …])
    Initialize the Optimizer.
minimize(loss)
    Create an optimizer to minimize the model loss.
update_feed_dict(feed_dict, batch_num, epoch_num)
    Fill the values of placeholders created by the optimizer.
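In practice this wrapper is usually not instantiated by hand: AmpliGraph models typically select it through their optimizer and optimizer_params arguments and build the wrapper internally. The sketch below illustrates that path; the model choice (ComplEx) and all hyperparameter values are illustrative only.

    # Illustrative sketch: selecting Adagrad through a model's constructor.
    # The model builds the AdagradOptimizer wrapper internally from these arguments.
    import numpy as np
    from ampligraph.latent_features import ComplEx

    X = np.array([['a', 'likes', 'b'],
                  ['b', 'likes', 'c'],
                  ['c', 'likes', 'a']])

    model = ComplEx(k=10, epochs=10, batches_count=1,
                    optimizer='adagrad',
                    optimizer_params={'lr': 0.001})
    model.fit(X)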
__init__(optimizer_params, batches_count, verbose=False)

Initialize the Optimizer.
Parameters

    optimizer_params (dict) – Consists of key-value pairs. The optimizer will check the keys to get the corresponding params:
        'lr' (float): Learning rate (default: 0.0005)
        Example: optimizer_params={'lr': 0.001}
    batches_count (int) – Number of batches in an epoch.
    verbose (bool) – Enable/disable verbose mode.
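The constructor can also be called directly, mirroring the signature above. A minimal sketch (the learning rate and batch count are illustrative; AmpliGraph models normally create this object for you):

    from ampligraph.latent_features import AdagradOptimizer

    # Build the wrapper with an explicit learning rate and batch count.
    optimizer = AdagradOptimizer(optimizer_params={'lr': 0.001},
                                 batches_count=100,
                                 verbose=False)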
minimize(loss)

Create an optimizer to minimize the model loss.
Parameters

    loss (tf.Tensor) – Node which needs to be evaluated for computing the model loss.

Returns

    train – Node that needs to be evaluated for minimizing the loss during training.

Return type

    tf.Operation
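A minimal sketch of the intended usage, assuming TensorFlow 1.x graph mode (which this wrapper targets): a toy loss tensor stands in for a model's loss, and the returned tf.Operation is evaluated inside a session.

    import tensorflow as tf
    from ampligraph.latent_features import AdagradOptimizer

    emb = tf.Variable(tf.random_normal([10, 5]))   # toy embedding table
    loss = tf.reduce_sum(tf.square(emb))           # stand-in for the model loss

    optimizer = AdagradOptimizer(optimizer_params={'lr': 0.001}, batches_count=1)
    train = optimizer.minimize(loss)               # tf.Operation to run during training

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(train)                            # one optimization step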
update_feed_dict(feed_dict, batch_num, epoch_num)

Fill the values of placeholders created by the optimizer.
Parameters

    feed_dict (dict) – Dictionary that is passed to sess.run while optimizing the model loss.
    batch_num (int) – Current batch number.
    epoch_num (int) – Current epoch number.
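A minimal sketch of where this call fits in a training loop, assuming the optimizer, train, loss, and sess objects from the sketches above and illustrative batch/epoch counts; the method is called once per batch so the optimizer can fill any placeholders it created (e.g. for learning-rate schedules):

    epochs, batches_count = 5, 1
    for epoch in range(1, epochs + 1):
        for batch in range(1, batches_count + 1):
            feed_dict = {}                                    # batch inputs would go here
            optimizer.update_feed_dict(feed_dict, batch, epoch)
            _, batch_loss = sess.run([train, loss], feed_dict=feed_dict)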