
paddle_optimizers.py
#! /usr/bin/python
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function

__all__ = ['Adadelta', 'Adagrad', 'Adam', 'Admax', 'Ftrl', 'Nadam', 'RMSprop', 'SGD', 'Momentum', 'Lamb', 'LARS']

# Module aliases: each name is a placeholder to be bound to the Paddle
# backend's optimizer. The comment above each alias lists the expected
# keyword arguments and their defaults.

# learning_rate=0.001, rho=0.95, epsilon=1e-07, name='Adadelta'
Adadelta = None

# learning_rate=0.001, initial_accumulator_value=0.1, epsilon=1e-07, name='Adagrad'
Adagrad = None

# learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False, name='Adam'
Adam = None

# learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Adamax'
Admax = None

# learning_rate=0.001, learning_rate_power=-0.5, initial_accumulator_value=0.1,
# l1_regularization_strength=0.0, l2_regularization_strength=0.0, name='Ftrl',
# l2_shrinkage_regularization_strength=0.0
Ftrl = None

# learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Nadam'
Nadam = None

# learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False, name='RMSprop'
RMSprop = None

# learning_rate=0.01, momentum=0.0, nesterov=False, name='SGD'
SGD = None

# learning_rate, momentum, use_locking=False, name='Momentum', use_nesterov=False
Momentum = None


def Lamb(**kwargs):
    raise NotImplementedError('The Lamb optimizer is not implemented for the Paddle backend')


def LARS(**kwargs):
    raise NotImplementedError('The LARS optimizer is not implemented for the Paddle backend')
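The commented hyperparameter lists above mirror the usual Keras-style optimizer signatures. As a reference for what the Adam defaults (beta_1=0.9, beta_2=0.999, epsilon=1e-07) actually control, here is a minimal pure-Python sketch of a single Adam update step for a scalar parameter. This is illustrative only, not part of TensorLayer; the function name and return convention are assumptions.

```python
import math

def adam_step(param, grad, m, v, t, learning_rate=0.001,
              beta_1=0.9, beta_2=0.999, epsilon=1e-07):
    """One Adam update for a scalar parameter (reference sketch).

    m, v are the running first/second moment estimates; t is the
    1-based step count used for bias correction.
    """
    # Update biased moment estimates.
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad * grad
    # Bias-correct the estimates (important for small t).
    m_hat = m / (1 - beta_1 ** t)
    v_hat = v / (1 - beta_2 ** t)
    # Apply the update.
    param = param - learning_rate * m_hat / (math.sqrt(v_hat) + epsilon)
    return param, m, v
```

With a constant positive gradient, the first step moves the parameter down by approximately `learning_rate`, since bias correction makes `m_hat / sqrt(v_hat)` close to 1.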

TensorLayer 3.0 is a deep learning library that supports multiple deep learning frameworks as computational backends. Planned backends include TensorFlow, PyTorch, MindSpore, and Paddle.