Neural Networks and Learning Machines


Publisher: Pearson
Author: Simon O. Haykin
Pages: 936
Publication date: 2008-11-28
Price: USD 252.40
Binding: Hardcover
ISBN: 9780131471399
Tags:
  • Neural Networks
  • Machine Learning
  • Artificial Intelligence
  • Deep Learning
  • Computer Science
  • Computers
  • AI
  • Algorithms
  • Mathematics
  • Pattern Recognition
  • Operations Research
  • Engines

Description

For graduate-level neural network courses offered in the departments of Computer Engineering, Electrical Engineering, and Computer Science. Renowned for its thoroughness and readability, this well-organized and completely up-to-date text remains the most comprehensive treatment of neural networks from an engineering perspective. Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together. Ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either independently.

About the Author

Simon Haykin received his Ph.D. from the University of Birmingham, UK, in 1953. He is currently a professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and director of its Communications Research Laboratory. A renowned figure in the international electrical and electronics engineering community, he is a recipient of the IEEE McNaughton Gold Medal. He is a Fellow of the Royal Society of Canada and a Fellow of the IEEE, has produced extensive results in neural networks, communications, and adaptive filtering, and has authored several standard textbooks.

Table of Contents

Preface
Acknowledgements
Abbreviations and Symbols
Glossary
Introduction
1. What is a Neural Network?
2. The Human Brain
3. Models of a Neuron
4. Neural Networks Viewed As Directed Graphs
5. Feedback
6. Network Architectures
7. Knowledge Representation
8. Learning Processes
9. Learning Tasks
10. Concluding Remarks
Notes and References
Chapter 1 Rosenblatt's Perceptron
1.1 Introduction
1.2 Perceptron
1.3 The Perceptron Convergence Theorem
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
1.5 Computer Experiment: Pattern Classification
1.6 The Batch Perceptron Algorithm
1.7 Summary and Discussion
Notes and References
Problems
Chapter 2 Model Building through Regression
2.1 Introduction
2.2 Linear Regression Model: Preliminary Considerations
2.3 Maximum a Posteriori Estimation of the Parameter Vector
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
2.5 Computer Experiment: Pattern Classification
2.6 The Minimum-Description-Length Principle
2.7 Finite Sample-Size Considerations
2.8 The Instrumental-Variables Method
2.9 Summary and Discussion
Notes and References
Problems
Chapter 3 The Least-Mean-Square Algorithm
3.1 Introduction
3.2 Filtering Structure of the LMS Algorithm
3.3 Unconstrained Optimization: a Review
3.4 The Wiener Filter
3.5 The Least-Mean-Square Algorithm
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
3.7 The Langevin Equation: Characterization of Brownian Motion
3.8 Kushner's Direct-Averaging Method
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
3.10 Computer Experiment I: Linear Prediction
3.11 Computer Experiment II: Pattern Classification
3.12 Virtues and Limitations of the LMS Algorithm
3.13 Learning-Rate Annealing Schedules
3.14 Summary and Discussion
Notes and References
Problems
Chapter 4 Multilayer Perceptrons
4.1 Introduction
4.2 Some Preliminaries
4.3 Batch Learning and On-Line Learning
4.4 The Back-Propagation Algorithm
4.5 XOR Problem
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
4.7 Computer Experiment: Pattern Classification
4.8 Back Propagation and Differentiation
4.9 The Hessian and Its Role in On-Line Learning
4.10 Optimal Annealing and Adaptive Control of the Learning Rate
4.11 Generalization
4.12 Approximations of Functions
4.13 Cross-Validation
4.14 Complexity Regularization and Network Pruning
4.15 Virtues and Limitations of Back-Propagation Learning
4.16 Supervised Learning Viewed as an Optimization Problem
4.17 Convolutional Networks
4.18 Nonlinear Filtering
4.19 Small-Scale Versus Large-Scale Learning Problems
4.20 Summary and Discussion
Notes and References
Problems
Chapter 5 Kernel Methods and Radial-Basis Function Networks
5.1 Introduction
5.2 Cover's Theorem on the Separability of Patterns
5.3 The Interpolation Problem
5.4 Radial-Basis-Function Networks
5.5 K-Means Clustering
5.6 Recursive Least-Squares Estimation of the Weight Vector
5.7 Hybrid Learning Procedure for RBF Networks
5.8 Computer Experiment: Pattern Classification
5.9 Interpretations of the Gaussian Hidden Units
5.10 Kernel Regression and Its Relation to RBF Networks
5.11 Summary and Discussion
Notes and References
Problems
Chapter 6 Support Vector Machines
6.1 Introduction
6.2 Optimal Hyperplane for Linearly Separable Patterns
6.3 Optimal Hyperplane for Nonseparable Patterns
6.4 The Support Vector Machine Viewed as a Kernel Machine
6.5 Design of Support Vector Machines
6.6 XOR Problem
6.7 Computer Experiment: Pattern Classification
6.8 Regression: Robustness Considerations
6.9 Optimal Solution of the Linear Regression Problem
6.10 The Representer Theorem and Related Issues
6.11 Summary and Discussion
Notes and References
Problems
Chapter 7 Regularization Theory
7.1 Introduction
7.2 Hadamard's Conditions for Well-Posedness
7.3 Tikhonov's Regularization Theory
7.4 Regularization Networks
7.5 Generalized Radial-Basis-Function Networks
7.6 The Regularized Least-Squares Estimator: Revisited
7.7 Additional Notes of Interest on Regularization
7.8 Estimation of the Regularization Parameter
7.9 Semisupervised Learning
7.10 Manifold Regularization: Preliminary Considerations
7.11 Differentiable Manifolds
7.12 Generalized Regularization Theory
7.13 Spectral Graph Theory
7.14 Generalized Representer Theorem
7.15 Laplacian Regularized Least-Squares Algorithm
7.16 Experiments on Pattern Classification Using Semisupervised Learning
7.17 Summary and Discussion
Notes and References
Problems
Chapter 8 Principal-Components Analysis
8.1 Introduction
8.2 Principles of Self-Organization
8.3 Self-Organized Feature Analysis
8.4 Principal-Components Analysis: Perturbation Theory
8.5 Hebbian-Based Maximum Eigenfilter
8.6 Hebbian-Based Principal-Components Analysis
8.7 Case Study: Image Coding
8.8 Kernel Principal-Components Analysis
8.9 Basic Issues Involved in the Coding of Natural Images
8.10 Kernel Hebbian Algorithm
8.11 Summary and Discussion
Notes and References
Problems
Chapter 9 Self-Organizing Maps
9.1 Introduction
9.2 Two Basic Feature-Mapping Models
9.3 Self-Organizing Map
9.4 Properties of the Feature Map
9.5 Computer Experiments I: Disentangling Lattice Dynamics Using SOM
9.6 Contextual Maps
9.7 Hierarchical Vector Quantization
9.8 Kernel Self-Organizing Map
9.9 Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM
9.10 Relationship Between Kernel SOM and Kullback-Leibler Divergence
9.11 Summary and Discussion
Notes and References
Problems
Chapter 10 Information-Theoretic Learning Models
10.1 Introduction
10.2 Entropy
10.3 Maximum-Entropy Principle
10.4 Mutual Information
10.5 Kullback-Leibler Divergence
10.6 Copulas
10.7 Mutual Information as an Objective Function to be Optimized
10.8 Maximum Mutual Information Principle
10.9 Infomax and Redundancy Reduction
10.10 Spatially Coherent Features
10.11 Spatially Incoherent Features
10.12 Independent-Components Analysis
10.13 Sparse Coding of Natural Images and Comparison with ICA Coding
10.14 Natural-Gradient Learning for Independent-Components Analysis
10.15 Maximum-Likelihood Estimation for Independent-Components Analysis
10.16 Maximum-Entropy Learning for Blind Source Separation
10.17 Maximization of Negentropy for Independent-Components Analysis
10.18 Coherent Independent-Components Analysis
10.19 Rate Distortion Theory and Information Bottleneck
10.20 Optimal Manifold Representation of Data
10.21 Computer Experiment: Pattern Classification
10.22 Summary and Discussion
Notes and References
Problems
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
11.1 Introduction
11.2 Statistical Mechanics
11.3 Markov Chains
11.4 Metropolis Algorithm
11.5 Simulated Annealing
11.6 Gibbs Sampling
11.7 Boltzmann Machine
11.8 Logistic Belief Nets
11.9 Deep Belief Nets
11.10 Deterministic Annealing
11.11 Analogy of Deterministic Annealing with Expectation-Maximization Algorithm
11.12 Summary and Discussion
Notes and References
Problems
Chapter 12 Dynamic Programming
12.1 Introduction
12.2 Markov Decision Process
12.3 Bellman's Optimality Criterion
12.4 Policy Iteration
12.5 Value Iteration
12.6 Approximate Dynamic Programming: Direct Methods
12.7 Temporal-Difference Learning
12.8 Q-Learning
12.9 Approximate Dynamic Programming: Indirect Methods
12.10 Least-Squares Policy Evaluation
12.11 Approximate Policy Iteration
12.12 Summary and Discussion
Notes and References
Problems
Chapter 13 Neurodynamics
13.1 Introduction
13.2 Dynamic Systems
13.3 Stability of Equilibrium States
13.4 Attractors
13.5 Neurodynamic Models
13.6 Manipulation of Attractors as a Recurrent Network Paradigm
13.7 Hopfield Model
13.8 The Cohen-Grossberg Theorem
13.9 Brain-State-In-A-Box Model
13.10 Strange Attractors and Chaos
13.11 Dynamic Reconstruction of a Chaotic Process
13.12 Summary and Discussion
Notes and References
Problems
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
14.1 Introduction
14.2 State-Space Models
14.3 Kalman Filters
14.4 The Divergence Phenomenon and Square-Root Filtering
14.5 The Extended Kalman Filter
14.6 The Bayesian Filter
14.7 Cubature Kalman Filter: Building on the Kalman Filter
14.8 Particle Filters
14.9 Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters
14.10 Kalman Filtering in Modeling of Brain Functions
14.11 Summary and Discussion
Notes and References
Problems
Chapter 15 Dynamically Driven Recurrent Networks
15.1 Introduction
15.2 Recurrent Network Architectures
15.3 Universal Approximation Theorem
15.4 Controllability and Observability
15.5 Computational Power of Recurrent Networks
15.6 Learning Algorithms
15.7 Back Propagation Through Time
15.8 Real-Time Recurrent Learning
15.9 Vanishing Gradients in Recurrent Networks
15.10 Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators
15.11 Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor
15.12 Adaptivity Considerations
15.13 Case Study: Model Reference Applied to Neurocontrol
15.14 Summary and Discussion
Notes and References
Problems
Bibliography
Index

Reviews


My graduate Neural Networks course used the second edition of this book, because the professor said he did not like the newer third edition. The book covers most of the fundamentals and important aspects of neural networks, such as Back Propagation, Radial-Basis Function networks, Self-Organizing Maps, and, for single neurons, Hebbian Learning, Competitive L...


Neural networks are not only today's mode of thinking and the future computing model of computers, but also the computational mode of simple cells. They do not truly think; they compute. Computation is something machines can also do, so regardless of whether a person understands or a machine knows, it can be handled with ease. And since there are so many things we do not know, there is no need to worry that they will automatically walk into a trap. They are not only able to...


Reading along, I was reminded of the old saying: a translation by one person is good, by two it is mediocre, by three it is rubbish, and with more than three it is worse than rubbish. I cannot tell whether the translators of this book are unfamiliar with the field or simply accustomed to machine translation. The book itself is largely a dense pile of mathematical theory that is hard to follow without a solid background, and with the translators muddling through, the pervasive machine-translated feel...


This was my first time reading through the whole book. I did not fully follow many of the mathematical derivations and some of the underlying principles, and I suspect most first-time readers are in the same situation. As an introductory book for learning neural networks it is probably not a good fit, because it is too broad and too detailed, and beginners can easily get lost in the details and feel discouraged. That is not to say the book is bad; on the contrary, it is definitely a classic...


User Comments


This book was reassuring: it devotes many pages to the continuous Hopfield Network and the Boltzmann Machine, and it also teaches the basic backpropagation through time for RNNs... but books in this field never keep up with current practice (nothing on LSTM or gated recurrent units, which are now in common use).


Still better to read the original English edition.



Sophisticated yet understandable

