First we describe, analyze, and present the theoretical derivations and source codes for several (modified and well-known) nonlinear neural network algorithms based on unconstrained optimization theory and applied to supervised network training. In addition to indicating the relative efficiency of these algorithms in an application, we analyze their main characteristics and present the MATLAB source codes. The algorithms in this part depend on modified variable metric updates; for comparison, we specify the default values for each algorithm and present a simple nonlinear test problem. Furthermore, in this thesis we also emphasize conjugate gradient (CG) algorithms, which are commonly used for solving nonlinear test functions and are combined here with a modified back propagation (BP) algorithm, yielding a few new fast multilayer neural network training algorithms. This study deals with the determination of new search directions by exploiting the information computed by gradient descent as well as the previous search directions.
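To make the idea of reusing previous search directions concrete, the following is a minimal MATLAB sketch of one such scheme: a Fletcher-Reeves conjugate gradient loop applied to the Rosenbrock function. The Fletcher-Reeves coefficient, the Armijo backtracking line search, the restart rule, and the Rosenbrock test problem are illustrative assumptions standing in for, not reproducing, the modified algorithms and test problems developed in the thesis.

    % Illustrative sketch only, NOT the thesis's code: Fletcher-Reeves CG
    % with an Armijo backtracking line search on the Rosenbrock function.
    f    = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;          % test function
    grad = @(x) [ -400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1)); ...% its gradient
                   200*(x(2) - x(1)^2) ];

    x = [-1.2; 1];                       % classical starting point
    g = grad(x);
    d = -g;                              % first direction: steepest descent
    for k = 1:5000
        alpha = 1;                       % Armijo backtracking line search
        while f(x + alpha*d) > f(x) + 1e-4*alpha*(g'*d)
            alpha = alpha/2;
        end
        x    = x + alpha*d;
        gNew = grad(x);
        if norm(gNew) < 1e-6, break; end
        beta = (gNew'*gNew)/(g'*g);      % Fletcher-Reeves coefficient
        d    = -gNew + beta*d;           % new direction: current gradient plus
                                         % a multiple of the previous direction
        if gNew'*d >= 0, d = -gNew; end  % restart if descent is lost
        g = gNew;
    end
    fprintf('x = (%.4f, %.4f), f(x) = %.2e, %d iterations\n', x(1), x(2), f(x), k);

The line d = -gNew + beta*d is the point of contact with back propagation: in the network setting, gNew plays the role of the error gradient that standard BP computes, so CG-based training reuses BP's gradient information while replacing the steepest-descent step with a direction that also remembers the previous search direction.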
Gulnar Wasim Sadiq was born in 1974 in the Kurdistan Region and completed a PhD degree at the University of Sulaimani, College of Science, Department of Mathematics, in the field of Operations Research and Optimization.