1 Introduction and Examples
1.1 How do neural methods differ?
1.2 The pattern recognition task
1.3 Overview of the remaining chapters
1.4 Examples
1.5 Literature
2 Statistical Decision Theory
2.1 Bayes rules for known distributions
2.2 Parametric models
2.3 Logistic discrimination
2.4 Predictive classification
2.5 Alternative estimation procedures
2.6 How complex a model do we need?
2.7 Performance assessment
2.8 Computational learning approaches
3 Linear Discriminant Analysis
3.1 Classical linear discrimination
3.2 Linear discriminants via regression
3.3 Robustness
3.4 Shrinkage methods
3.5 Logistic discrimination
3.6 Linear separation and perceptrons
4 Flexible Discriminants
4.1 Fitting smooth parametric functions
4.2 Radial basis functions
4.3 Regularization
5 Feed-forward Neural Networks
5.1 Biological motivation
5.2 Theory
5.3 Learning algorithms
5.4 Examples
5.5 Bayesian perspectives
5.6 Network complexity
5.7 Approximation results
6 Non-parametric Methods
6.1 Non-parametric estimation of class densities
6.2 Nearest neighbour methods
6.3 Learning vector quantization
6.4 Mixture representations
7 Tree-structured Classifiers
7.1 Splitting rules
7.2 Pruning rules
7.3 Missing values
7.4 Earlier approaches
7.5 Refinements
7.6 Relationships to neural networks
7.7 Bayesian trees
8 Belief Networks
8.1 Graphical models and networks
8.2 Causal networks
8.3 Learning the network structure
8.4 Boltzmann machines
8.5 Hierarchical mixtures of experts
9 Unsupervised Methods
9.1 Projection methods
9.2 Multidimensional scaling
9.3 Clustering algorithms
9.4 Self-organizing maps
10 Finding Good Pattern Features
10.1 Bounds for the Bayes error
10.2 Normal class distributions
10.3 Branch-and-bound techniques
10.4 Feature extraction
A Statistical Sidelines
A.1 Maximum likelihood and MAP estimation
A.2 The EM algorithm
A.3 Markov chain Monte Carlo
A.4 Axioms for conditional independence
A.5 Optimization
Glossary
References
Author Index
Subject Index