Introduction to Machine Learning
Fourth Edition
Adaptive Computation and Machine Learning
Ethem Alpaydın
Brief Contents
1 Introduction
2 Supervised Learning
3 Bayesian Decision Theory
4 Parametric Methods
5 Multivariate Methods
6 Dimensionality Reduction
7 Clustering
8 Nonparametric Methods
9 Decision Trees
10 Linear Discrimination
11 Multilayer Perceptrons
12 Deep Learning
13 Local Models
14 Kernel Machines
15 Graphical Models
16 Hidden Markov Models
17 Bayesian Estimation
18 Combining Multiple Learners
19 Reinforcement Learning
20 Design and Analysis of Machine Learning Experiments
A Probability
B Linear Algebra
C Optimization
Contents
Copyright
Preface
Notations
1 Introduction
1.1 What Is Machine Learning?
1.2 Examples of Machine Learning Applications
1.2.1 Association Rules
1.2.2 Classification
1.2.3 Regression
1.2.4 Unsupervised Learning
1.2.5 Reinforcement Learning
1.3 History
1.4 Related Topics
1.4.1 High-Performance Computing
1.4.2 Data Privacy and Security
1.4.3 Model Interpretability and Trust
1.4.4 Data Science
1.5 Exercises
1.6 References
2 Supervised Learning
2.1 Learning a Class from Examples
2.2 Vapnik-Chervonenkis Dimension
2.3 Probably Approximately Correct Learning
2.4 Noise
2.5 Learning Multiple Classes
2.6 Regression
2.7 Model Selection and Generalization
2.8 Dimensions of a Supervised Machine Learning Algorithm
2.9 Notes
2.10 Exercises
2.11 References
4 Parametric Methods
4.1 Introduction
4.2 Maximum Likelihood Estimation
4.2.1 Bernoulli Density
4.2.2 Multinomial Density
4.2.3 Gaussian (Normal) Density
4.3 Evaluating an Estimator: Bias and Variance
4.4 The Bayes’ Estimator
4.5 Parametric Classification
4.6 Regression
4.7 Tuning Model Complexity: Bias/Variance Dilemma
4.8 Model Selection Procedures
4.9 Notes
4.10 Exercises
4.11 References
5 Multivariate Methods
5.1 Multivariate Data
5.2 Parameter Estimation
5.3 Estimation of Missing Values
5.4 Multivariate Normal Distribution
5.5 Multivariate Classification
5.6 Tuning Complexity
5.7 Discrete Features
5.8 Multivariate Regression
5.9 Notes
5.10 Exercises
5.11 References
6 Dimensionality Reduction
6.1 Introduction
6.2 Subset Selection
6.3 Principal Component Analysis
6.4 Feature Embedding
6.5 Factor Analysis
6.6 Singular Value Decomposition and Matrix Factorization
6.7 Multidimensional Scaling
6.8 Linear Discriminant Analysis
6.9 Canonical Correlation Analysis
6.10 Isomap
6.11 Locally Linear Embedding
6.12 Laplacian Eigenmaps
6.13 t-Distributed Stochastic Neighbor Embedding
6.14 Notes
6.15 Exercises
6.16 References
7 Clustering
7.1 Introduction
7.2 Mixture Densities
7.3 k-Means Clustering
7.4 Expectation-Maximization Algorithm
7.5 Mixtures of Latent Variable Models
7.6 Supervised Learning after Clustering
7.7 Spectral Clustering
7.8 Hierarchical Clustering
7.9 Choosing the Number of Clusters
7.10 Notes
7.11 Exercises
7.12 References
8 Nonparametric Methods
8.1 Introduction
8.2 Nonparametric Density Estimation
8.2.1 Histogram Estimator
8.2.2 Kernel Estimator
8.2.3 k-Nearest Neighbor Estimator
8.3 Generalization to Multivariate Data
8.4 Nonparametric Classification
8.5 Condensed Nearest Neighbor
8.6 Distance-Based Classification
8.7 Outlier Detection
8.8 Nonparametric Regression: Smoothing Models
8.8.1 Running Mean Smoother
8.8.2 Kernel Smoother
8.8.3 Running Line Smoother
8.9 How to Choose the Smoothing Parameter
8.10 Notes
8.11 Exercises
8.12 References
9 Decision Trees
9.1 Introduction
9.2 Univariate Trees
9.2.1 Classification Trees
9.2.2 Regression Trees
9.3 Pruning
9.4 Rule Extraction from Trees
9.5 Learning Rules from Data
9.6 Multivariate Trees
9.7 Notes
9.8 Exercises
9.9 References
10 Linear Discrimination
10.1 Introduction
10.2 Generalizing the Linear Model
10.3 Geometry of the Linear Discriminant
10.3.1 Two Classes
10.3.2 Multiple Classes
10.4 Pairwise Separation
10.5 Parametric Discrimination Revisited
10.6 Gradient Descent
10.7 Logistic Discrimination
10.7.1 Two Classes
10.7.2 Multiple Classes
10.7.3 Multiple Labels
10.8 Learning to Rank
10.9 Notes
10.10 Exercises
10.11 References
11 Multilayer Perceptrons
11.1 Introduction
11.1.1 Understanding the Brain
11.1.2 Neural Networks as a Paradigm for Parallel Processing
11.2 The Perceptron
11.3 Training a Perceptron
11.4 Learning Boolean Functions
11.5 Multilayer Perceptrons
11.6 MLP as a Universal Approximator
11.7 Backpropagation Algorithm
11.7.1 Nonlinear Regression
11.7.2 Two-Class Discrimination
11.7.3 Multiclass Discrimination
11.7.4 Multilabel Discrimination
11.8 Overtraining
11.9 Learning Hidden Representations
11.10 Autoencoders
11.11 Word2vec Architecture
11.12 Notes
11.13 Exercises
11.14 References
12 Deep Learning
12.1 Introduction
12.2 How to Train Multiple Hidden Layers
12.2.1 Rectified Linear Unit
12.2.2 Initialization
12.2.3 Generalizing Backpropagation to Multiple Hidden Layers
12.3 Improving Training Convergence
12.3.1 Momentum
12.3.2 Adaptive Learning Factor
12.3.3 Batch Normalization
12.4 Regularization
12.4.1 Hints
12.4.2 Weight Decay
12.4.3 Dropout
12.5 Convolutional Layers
12.5.1 The Idea
12.5.2 Formalization
12.5.3 Examples: LeNet-5 and AlexNet
12.5.4 Extensions
12.5.5 Multimodal Deep Networks
12.6 Tuning the Network Structure
12.6.1 Structure and Hyperparameter Search
12.6.2 Skip Connections
12.6.3 Gating Units
12.7 Learning Sequences
12.7.1 Example Tasks
12.7.2 Time-Delay Neural Networks
12.7.3 Recurrent Networks
12.7.4 Long Short-Term Memory Unit
12.7.5 Gated Recurrent Unit
12.8 Generative Adversarial Network
12.9 Notes
12.10 Exercises
12.11 References
13 Local Models
13.1 Introduction
13.2 Competitive Learning
13.2.1 Online k-Means
13.2.2 Adaptive Resonance Theory
13.2.3 Self-Organizing Maps
13.3 Radial Basis Functions
13.4 Incorporating Rule-Based Knowledge
13.5 Normalized Basis Functions
13.6 Competitive Basis Functions
13.7 Learning Vector Quantization
13.8 The Mixture of Experts
13.8.1 Cooperative Experts
13.8.2 Competitive Experts
13.9 Hierarchical Mixture of Experts and Soft Decision Trees
13.10 Notes
13.11 Exercises
13.12 References
14 Kernel Machines
14.1 Introduction
14.2 Optimal Separating Hyperplane
14.3 The Nonseparable Case: Soft Margin Hyperplane
14.4 ν-SVM
14.5 Kernel Trick
14.6 Vectorial Kernels
14.7 Defining Kernels
14.8 Multiple Kernel Learning
14.9 Multiclass Kernel Machines
14.10 Kernel Machines for Regression
14.11 Kernel Machines for Ranking
14.12 One-Class Kernel Machines
14.13 Large Margin Nearest Neighbor Classifier
14.14 Kernel Dimensionality Reduction
14.15 Notes
14.16 Exercises
14.17 References
15 Graphical Models
15.1 Introduction
15.2 Canonical Cases for Conditional Independence
15.3 Generative Models
15.4 d-Separation
15.5 Belief Propagation
15.5.1 Chains
15.5.2 Trees
15.5.3 Polytrees
15.5.4 Junction Trees
15.6 Undirected Graphs: Markov Random Fields
15.7 Learning the Structure of a Graphical Model
15.8 Influence Diagrams
15.9 Notes
15.10 Exercises
15.11 References
17 Bayesian Estimation
17.1 Introduction
17.2 Bayesian Estimation of the Parameters of a Discrete Distribution
17.2.1 K > 2 States: Dirichlet Distribution
17.2.2 K = 2 States: Beta Distribution
17.3 Bayesian Estimation of the Parameters of a Gaussian Distribution
17.3.1 Univariate Case: Unknown Mean, Known Variance
17.3.2 Univariate Case: Unknown Mean, Unknown Variance
17.3.3 Multivariate Case: Unknown Mean, Unknown Covariance
17.4 Bayesian Estimation of the Parameters of a Function
17.4.1 Regression
17.4.2 Regression with Prior on Noise Precision
17.4.3 The Use of Basis/Kernel Functions
17.4.4 Bayesian Classification
17.5 Choosing a Prior
17.6 Bayesian Model Comparison
17.7 Bayesian Estimation of a Mixture Model
17.8 Nonparametric Bayesian Modeling
17.9 Gaussian Processes
17.10 Dirichlet Processes and Chinese Restaurants
17.11 Latent Dirichlet Allocation
17.12 Beta Processes and Indian Buffets
17.13 Notes
17.14 Exercises
17.15 References
18 Combining Multiple Learners
18.1 Rationale
18.2 Generating Diverse Learners
18.3 Model Combination Schemes
18.4 Voting
18.5 Error-Correcting Output Codes
18.6 Bagging
18.7 Boosting
18.8 The Mixture of Experts Revisited
18.9 Stacked Generalization
18.10 Fine-Tuning an Ensemble
18.10.1 Choosing a Subset of the Ensemble
18.10.2 Constructing Metalearners
18.11 Cascading
18.12 Notes
18.13 Exercises
18.14 References
19 Reinforcement Learning
19.1 Introduction
19.2 Single State Case: K-Armed Bandit
19.3 Elements of Reinforcement Learning
19.4 Model-Based Learning
19.4.1 Value Iteration
19.4.2 Policy Iteration
19.5 Temporal Difference Learning
19.5.1 Exploration Strategies
19.5.2 Deterministic Rewards and Actions
19.5.3 Nondeterministic Rewards and Actions
19.5.4 Eligibility Traces
19.6 Generalization
19.7 Partially Observable States
19.7.1 The Setting
19.7.2 Example: The Tiger Problem
19.8 Deep Q Learning
19.9 Policy Gradients
19.10 Learning to Play Backgammon and Go
19.11 Notes
19.12 Exercises
19.13 References
A Probability
A.1 Elements of Probability
A.1.1 Axioms of Probability
A.1.2 Conditional Probability
A.2 Random Variables
A.2.1 Probability Distribution and Density Functions
A.2.2 Joint Distribution and Density Functions
A.2.3 Conditional Distributions
A.2.4 Bayes’ Rule
A.2.5 Expectation
A.2.6 Variance
A.2.7 Weak Law of Large Numbers
A.3 Special Random Variables
A.3.1 Bernoulli Distribution
A.3.2 Binomial Distribution
A.3.3 Multinomial Distribution
A.3.4 Uniform Distribution
A.3.5 Normal (Gaussian) Distribution
A.3.6 Chi-Square Distribution
A.3.7 t Distribution
A.3.8 F Distribution
A.4 References
B Linear Algebra
B.1 Vectors
B.2 Matrices
B.3 Similarity of Vectors
B.4 Square Matrices
B.5 Linear Dependence and Ranks
B.6 Inverses
B.7 Positive Definite Matrices
B.8 Trace and Determinant
B.9 Eigenvalues and Eigenvectors
B.10 Spectral Decomposition
B.11 Singular Value Decomposition
B.12 References
C Optimization
C.1 Introduction
C.2 Linear Optimization
C.3 Convex Optimization
C.4 Duality
C.5 Local Optimization
C.6 References
Index
in command of the cannon, who both opened fire on the assailants.
The machine-gun entered into action as well, whilst the soldiers of
the 13th Line Regiment fired direct on the German troops who,
nevertheless, managed to get a footing on the bridge.
The officer of the Engineers who had mined it had two discharges.
Seeing that the assailants who were killed were instantly replaced by
others, and that the enemy was threatening the left bank, this brave
man established the electric contact. To our stupefaction, no
detonation followed. The Germans had now reached the end of the
bridge. Without any excitement, the officer seized the second
discharge. A formidable explosion took place, flinging into the
distance the ruins of the bridge, fragments of human beings, and
various objects of their equipment. All fell pêle-mêle into the river
and on to the banks, covering the soldiers who were hidden there
with blood and with human shreds. In face of this disaster, the
assaulting column stopped short, horrified, and then rushed back in
disorder towards the town, whilst huge flames rose from the piles of
the bridge which had been soaked in petroleum.
The surprise attack had failed, and two more weak attempts were
cut short by our shelling. The usual vengeance was then resorted to.
The enemy Artillery concentrated its fire on the vicinity of the bridge.
Our brave troops lived through one of those critical moments when
the destructive power of the human machine is only comparable to
the grandeur of souls ready for any sacrifice. For one long hour, our
soldiers were submitted to a storm of steel which, with a hellish
clatter, warned them of a fresh attack. It was necessary to conquer
the intense nervous strain, to watch without ceasing, and to
examine all the impenetrable and threatening fortification works on
the other bank of the river. It was whilst examining all this, from
above the shield of his cannon, that Sub-Lieutenant Hiernaux fell,
just at the critical moment, struck between the eyes by a ball. His
fine death proved to us once more all that there is of energy, sang-
froid, and courage among our subaltern ranks.
Quartermaster Francotte ordered the officer's body to be carried to a
neighbouring shelter and covered it over with a wrap. He then took
Hiernaux's place at the cannon and kept his aides there all night,
whilst the neighbouring trenches had to be abandoned for a time, as
they were untenable on account of the gas from the explosion of
the shells.
Two days later, Sub-Lieutenant Mayat was on service at the bridge.
In the afternoon, the Commander of the group and his aide came to
examine the adversary's organisation.
officers, Sub-Lieutenant Mayat between the other two, were just for
an instant above the shield formed by the cannon. This formed an
excellent target for those on the other side. A ball whizzed by and
one of the heads disappeared. Mayat, without uttering a cry, fell
against his chief, and a stream of red blood spurted from his pierced
temple and inundated his face, which had turned suddenly livid.
At present, the two friends are sleeping their glorious sleep side by
side, in the little cemetery of Grembergen, where we buried them
reverently. The day will come when those who know of their noble
death and who, more fortunate than they, have been spared, will be
able to go and place flowers on their tombs, in order to show their
gratitude and admiration.
But no homage can be equal to the tears of sincere grief of the
officer who was sent to take Sub-Lieutenant Mayat's place, when he
saw his comrade lying at his post, in all the rigidity of the last sleep.
CHAPTER XIX
The No. 7 Armoured Car
FOOTNOTES:
[6] A charge comprises thirty cartridges placed on a metallic
band.
CHAPTER XX
The Wavre-St. Catherine Combat
FOOTNOTES:
[7] In spite of several operations the Captain is still crippled.
CHAPTER XXI
The Death-Struggle of Lierre Fort
By an Officer of the Garrison
No harvest of impressions will be found in this account, for, although
it might seem that the garrison of a Fort must be crowded together
within the narrow surface occupied by the building, it is in reality
dispersed everywhere: three men here, ten there, in the cupolas, in
the munition stores, at the observation posts. Each man is in his
special department and the contact is much less close than among
the troops in campaign.
When, on account of the destruction of certain parts of the Fort, the
garrison comes gradually nearer together, the moral tension, the lack
of sleep, the irregularity of the alimentation transform the garrison
into a passive troop under an avalanche of blows. The men are still
capable of reaction and of desperate efforts, but the efforts are
silent and, as it were, mechanical. Those who have never lived
through such hours can never know the intensity of the suffering
endured by the defenders of the Fort.