W7 Large Margin Classification
Notes
SVM - Support Vector Machine, sometimes called a “large margin classifier”
- The SVM has computational advantages compared with logistic regression; its optimization problem is easier to solve.
- Form of the SVM cost function: logistic regression minimizes J = A + λ·B (A: fit term, B: regularization term); the SVM convention rescales this to J = C·A + B, where C plays the role of 1/λ.
- For a linearly separable data set, a larger C yields a larger margin but makes the classifier more sensitive to outliers (see the sketch below).
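A minimal NumPy sketch of the J = C·A + B form, using the hinge loss as the fit term A and ½‖θ‖² as the regularization term B. The feature matrix, the labels in {−1, +1}, and the particular θ are illustrative assumptions, not something from the course.

```python
import numpy as np

def svm_cost(theta, X, y, C):
    """SVM-style cost: J = C * (hinge-loss fit term A) + (1/2)*||theta||^2 (term B).

    X: (m, n) feature matrix with a bias column, y: labels in {-1, +1}.
    """
    margins = y * (X @ theta)                     # functional margins y * theta^T x
    A = np.sum(np.maximum(0.0, 1.0 - margins))    # hinge loss: penalize margins < 1
    B = 0.5 * np.dot(theta, theta)                # regularization (bias kept in for brevity)
    return C * A + B

# Tiny linearly separable example: a larger C weights the fit term more heavily,
# pushing toward a larger margin but making outliers more influential.
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, -2.0], [1.0, -3.0]])  # first column = bias
y = np.array([1, 1, -1, -1])
theta = np.array([0.0, 0.3])
print(svm_cost(theta, X, y, C=1.0), svm_cost(theta, X, y, C=100.0))
```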
-
W6 Evaluation
Notes
Evaluation
- Split the data into a training set (60%), a cross-validation set (20%), and a test set (20%) (see the split sketch after this list).
- If a learning algorithm is suffering from high variance, getting more training data is likely to help.
- A small neural network is prone to underfitting.
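A minimal sketch of the 60/20/20 split, assuming the data arrives as NumPy arrays X and y; the array shapes and the shuffling seed are illustrative.

```python
import numpy as np

def split_60_20_20(X, y, seed=0):
    """Shuffle the data, then split into 60% train, 20% cross-validation, 20% test."""
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.permutation(m)
    i60, i80 = int(0.6 * m), int(0.8 * m)
    train, cv, test = idx[:i60], idx[i60:i80], idx[i80:]
    return (X[train], y[train]), (X[cv], y[cv]), (X[test], y[test])

X = np.random.rand(100, 3)
y = np.random.randint(0, 2, size=100)
(train_X, train_y), (cv_X, cv_y), (test_X, test_y) = split_60_20_20(X, y)
print(train_X.shape, cv_X.shape, test_X.shape)   # (60, 3) (20, 3) (20, 3)
```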
System Design
- Start from simple model
- Plot learning curves to decide if more data, features, etc. are likely to help
- Error analysis: (1) manually examine errors (2) what potential features could help to classify them.
- Skewed classes: # positive examples ≪ # negative examples, so accuracy alone is misleading
- Precision:
# true positives / # predicted positives
- Recall:
# true positives / # actual positives
- F1 score = 2PR / (P + R); see the sketch after this list
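A minimal sketch of precision, recall, and F1 computed directly from the definitions above; the example labels (with the rare positive class marked 1) are made up for illustration.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))    # true positives
    predicted_pos = np.sum(y_pred == 1)           # all predicted positives
    actual_pos = np.sum(y_true == 1)              # all actual positives
    precision = tp / predicted_pos if predicted_pos else 0.0
    recall = tp / actual_pos if actual_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])   # skewed: positives are rare
y_pred = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 0])
print(precision_recall_f1(y_true, y_pred))           # (0.667, 0.667, 0.667)
```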
-
W5 Backward Propagation
Notes
Cost Function and Backpropagation
- Notation: total number of layers L, number of units in layer l is s_l, K units in the output layer
- Cost function: regularized cross-entropy summed over all K output units
- Backpropagation: compute the output-layer error, then propagate it backward through the network to get the gradients
- Random initialization: initialize weights to small random values to break symmetry (all-zero weights make every hidden unit compute the same function)
- Architecture: a reasonable default is one hidden layer; with multiple hidden layers, use the same number of units in each; more units/layers usually perform better at a higher computational cost (see the sketch after this list)
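A minimal sketch tying these bullets together for a single-hidden-layer network: symmetry-breaking random initialization, the regularized cross-entropy cost, and one backpropagation pass. The layer sizes, the epsilon = 0.12 initialization range, and the XOR-style data are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_weights(s_in, s_out, epsilon=0.12, seed=0):
    """Random init in [-epsilon, epsilon]; all-zero weights would leave hidden units identical."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-epsilon, epsilon, size=(s_out, s_in + 1))   # +1 for the bias unit

def cost_and_gradients(Theta1, Theta2, X, Y, lam):
    """Forward pass, regularized cross-entropy cost J, and gradients via backpropagation."""
    m = X.shape[0]
    A1 = np.hstack([np.ones((m, 1)), X])              # input layer with bias unit
    Z2 = A1 @ Theta1.T
    A2 = np.hstack([np.ones((m, 1)), sigmoid(Z2)])    # hidden layer activations with bias
    H = sigmoid(A2 @ Theta2.T)                        # output layer hypothesis

    # Cross-entropy cost plus L2 regularization (bias columns excluded).
    J = -np.sum(Y * np.log(H) + (1 - Y) * np.log(1 - H)) / m
    J += lam / (2 * m) * (np.sum(Theta1[:, 1:] ** 2) + np.sum(Theta2[:, 1:] ** 2))

    # Backpropagation: output-layer error, then error pushed back to the hidden layer.
    D3 = H - Y
    D2 = (D3 @ Theta2[:, 1:]) * sigmoid(Z2) * (1 - sigmoid(Z2))
    Theta2_grad = D3.T @ A2 / m
    Theta1_grad = D2.T @ A1 / m
    Theta2_grad[:, 1:] += lam / m * Theta2[:, 1:]     # regularize non-bias weights
    Theta1_grad[:, 1:] += lam / m * Theta1[:, 1:]
    return J, Theta1_grad, Theta2_grad

# Tiny XOR-style example: 2 inputs, 4 hidden units, 1 output unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
Theta1, Theta2 = init_weights(2, 4), init_weights(4, 1, seed=1)
J, g1, g2 = cost_and_gradients(Theta1, Theta2, X, Y, lam=1.0)
print(J, g1.shape, g2.shape)
```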
-
Vocabulary
- backward propagation
- cyan /ˈsaɪ.æn/: 青色
- magenta: 品红