Gradient based hyperparameter optimization in Echo State Networks

Abstract

Like most machine learning algorithms, Echo State Networks possess several hyperparameters that have to be carefully tuned to achieve their best performance. To minimize the error on a specific task, we present a gradient-based optimization algorithm for the input scaling, the spectral radius, the leaking rate, and the regularization parameter.
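
The sketch below illustrates the general idea of tuning these four hyperparameters by gradient descent on a validation error. It is not the authors' algorithm: it uses finite-difference gradients instead of the analytic gradients derived in the paper, and the toy task, reservoir size, learning rate, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch: gradient-based tuning of ESN hyperparameters
# (input scaling, spectral radius, leaking rate, regularization).
# Finite differences stand in for the paper's analytic gradients.
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead prediction task on a noisy sine wave (assumption).
T = 600
u = np.sin(0.2 * np.arange(T)) + 0.05 * rng.standard_normal(T)
y = np.roll(u, -1)  # target: next value of the input signal

N = 100                                        # reservoir size (assumption)
W_in0 = rng.uniform(-1, 1, size=N)             # unscaled input weights
W0 = rng.standard_normal((N, N))
W0 /= np.max(np.abs(np.linalg.eigvals(W0)))    # normalize to spectral radius 1


def run_esn(theta, u):
    """Collect reservoir states for hyperparameters theta = (s_in, rho, a)."""
    s_in, rho, a = theta
    x = np.zeros(N)
    X = np.zeros((N, len(u)))
    for t in range(len(u)):
        pre = s_in * W_in0 * u[t] + rho * (W0 @ x)
        x = (1 - a) * x + a * np.tanh(pre)     # leaky-integrator update
        X[:, t] = x
    return X


def val_error(theta_full):
    """Ridge-regression readout on a training split, MSE on a validation split."""
    s_in, rho, a, log_lam = theta_full
    lam = np.exp(log_lam)                      # optimize regularization in log space
    X = run_esn((s_in, rho, a), u)
    washout, split = 100, 400
    Xtr, ytr = X[:, washout:split], y[washout:split]
    Xva, yva = X[:, split:-1], y[split:-1]
    W_out = ytr @ Xtr.T @ np.linalg.inv(Xtr @ Xtr.T + lam * np.eye(N))
    return np.mean((W_out @ Xva - yva) ** 2)


def fd_grad(f, theta, eps=1e-4):
    """Central finite-difference gradient (stand-in for analytic gradients)."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f(tp) - f(tm)) / (2 * eps)
    return g


# Plain gradient descent on (input scaling, spectral radius, leaking rate, log-regularization).
theta = np.array([1.0, 0.9, 0.5, np.log(1e-4)])
lr = 0.1
for step in range(50):
    theta -= lr * fd_grad(val_error, theta)
    theta[2] = np.clip(theta[2], 0.05, 1.0)    # keep the leaking rate in (0, 1]
    if step % 10 == 0:
        print(f"step {step:3d}  error {val_error(theta):.5f}  theta {theta.round(3)}")
```

In this toy setup the regularization parameter is optimized in log space so a single step size works across hyperparameters of very different magnitudes; this is a convenience choice for the sketch, not a claim about the paper's parameterization.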

Publication
Neural Networks 115: 23–29