👀 Lecture 2-3

โ— Image Classification

Image classification assigns a label or category to an image based on its visual content. It is a fundamental problem in computer vision and has numerous applications such as object recognition, face detection, and image retrieval.

○ Semantic gap, challenges (17:04)

The semantic gap refers to the difference between low-level visual features extracted from an image and high-level semantic concepts that humans associate with them. The challenges in image classification include dealing with variations in lighting, scale, and orientation, recognizing objects under partial occlusion, and distinguishing between objects with similar visual appearances.

Semantic Gap:

(ppt) Challenges: viewpoint variation, intra-class variation, deformation, illumination

○ Machine learning: a data-driven approach

The data-driven approach to machine learning involves training a model using a large dataset of labeled examples. The model learns to generalize patterns from the training data and can then be used to predict labels for new, unseen examples.

โ— Nearest neighbor classifier

The nearest neighbor classifier is a simple but effective algorithm for image classification. It works by finding the nearest training image(s) to a test image based on some distance metric, and then assigning the label of the nearest training image(s) to the test image.

No learning at training time; the classifier just stores all the training data.

Distance Metric

ๆœ‰ๅฏ่ƒฝๆœ‰่ฏฏ๏ผŒๅ› ไธบfocus on pixel level๏ผˆcolor๏ผ‰

Decision boundaries

  • Pixel-level color differences are not a useful signal for comparing images

โ— Hyperparameters

Hyperparameters are parameters in a machine learning model that are set before training and are not learned from the data. They control the complexity of the model and can have a significant impact on its performance. Examples of hyperparameters in a linear classifier include the regularization strength and the learning rate.

Expensive.

In fact, deep learning does not work well when there is not much data.
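Hyperparameters are chosen on a held-out validation split, never on the test set. A minimal sketch of such a split (the function name and fractions are illustrative assumptions):

```python
import numpy as np

def split_train_val(X, y, val_fraction=0.2, seed=0):
    """Hold out a validation set for hyperparameter tuning.

    Hyperparameters are picked by validation accuracy; the test set
    is touched only once, at the very end.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle before splitting
    n_val = int(len(X) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], y[train_idx], X[val_idx], y[val_idx]
```

Cross-validation repeats this with several different held-out folds and averages the validation accuracy, which is more reliable but proportionally more expensive.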

โ— Linear classifier

A linear classifier is a type of machine learning model that learns to separate data points into different classes using a linear decision boundary. This decision boundary can be represented algebraically, visually, or geometrically.

○ Algebraic, Visual, Geometric viewpoints

(matrix)

(like an image: each class's weights can be visualized as a template image)
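The algebraic viewpoint is just f(x; W, b) = Wx + b. A minimal sketch (function name and shapes are illustrative; the CIFAR-10 dimensions are the usual example):

```python
import numpy as np

def linear_scores(W, b, x):
    """Algebraic viewpoint of a linear classifier: f(x; W, b) = W x + b.

    W: (C, D) weight matrix; each row is one class's template
       (for CIFAR-10: C = 10 classes, D = 32*32*3 = 3072 pixels).
    b: (C,) bias vector.
    x: (D,) flattened image.
    Returns a (C,) vector of class scores.
    """
    return W @ x + b
```

Reshaping each row of W back to image dimensions gives the "visual viewpoint": one learned template per class.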

○ Loss functions: SVM, Softmax

In a linear classifier, the loss function is used to measure how well the model is able to predict the correct class labels. Two commonly used loss functions for linear classifiers are the support vector machine (SVM) loss and the softmax loss. The SVM loss encourages the model to have a large margin between different classes, while the softmax loss is used for multi-class classification problems and produces a probability distribution over all possible classes.

* SVM: the correct class's score should be high

A1: No change, because 4.9 − 0.5 is still larger than (1.3 + 1) and (2.0 + 1).

A2: min: 0; max: +∞

A3: Loop over the incorrect classes; there are C − 1 of them, and each contributes max(0, 0 − 0 + 1) = 1; so the answer is C − 1.

A4: C

Q: Why should we skip the ground-truth class?

A: If it were included, its term would contribute 1 to the loss; a minimum loss of 0 signals a correct prediction more clearly than 1.

A5:ๅชๆ˜ฏไธ็”จmeanไบ†๏ผŒๆฒกไป€ไนˆไธๅŒ

A6: bigger loss for bigger error

Linear → quadratic (squared hinge)

(Many small errors are better than one large error.)
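The multiclass SVM loss above, L_i = Σ_{j≠y_i} max(0, s_j − s_{y_i} + 1), can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def svm_loss_single(scores, correct_class, margin=1.0):
    """Multiclass SVM (hinge) loss for one example:
    L_i = sum_{j != y_i} max(0, s_j - s_{y_i} + margin).
    The ground-truth class is skipped, so a perfect prediction gives loss 0.
    """
    s_correct = scores[correct_class]
    margins = np.maximum(0.0, scores - s_correct + margin)
    margins[correct_class] = 0.0  # skip the ground-truth class
    return margins.sum()
```

With the scores from A1 above (1.3, 4.9, 2.0, correct class score 4.9), both hinge terms are clipped to zero, so the loss is 0; and with all-equal scores, each of the C − 1 incorrect classes contributes exactly 1, matching A3.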

๏ผ้‡่ฆ

HOW to choose a unique W: ⬇️

Prevent the model from doing too well on the training data.

SOFTMAX (this loss function is used for multinomial logistic regression / the softmax classifier)

(The SVM loss only operates on raw scalar scores and only asks that the correct class's score be high, so the scores themselves are not that interpretable.)

: we want to interpret raw classifier scores as probabilities!
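The softmax loss, L_i = −log(e^{s_{y_i}} / Σ_j e^{s_j}), can be sketched as follows (the function name is illustrative; the max-shift is the standard numerical-stability trick and does not change the result):

```python
import numpy as np

def softmax_loss_single(scores, correct_class):
    """Softmax (cross-entropy) loss for one example:
    L_i = -log( e^{s_{y_i}} / sum_j e^{s_j} ).
    Subtracting max(scores) avoids overflow in exp() without
    changing the probabilities.
    """
    shifted = scores - scores.max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[correct_class])
```

If the correct class gets probability 1, the loss is −log(1) = 0; as that probability approaches 0, the loss grows toward +∞, which is exactly the min/max in A1 below.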

A1: min: 0 // −log(1) = 0

max: +∞ // −log(0) = +∞

A2:

Summary:

○ Regularization

Regularization is a technique used to prevent overfitting in machine learning models. It involves adding a penalty term to the loss function that encourages the model to have smaller weights. Examples of regularization techniques include L1 and L2 regularization.

"Spread out" the weights: meaning every weight should contribute.
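The classic illustration of this preference (a sketch; the penalty function name and λ value are illustrative): two weight vectors that produce the same score, where L2 regularization prefers the spread-out one.

```python
import numpy as np

def l2_penalty(W, lam=0.1):
    """L2 regularization term: lam * sum of squared weights."""
    return lam * np.sum(W ** 2)

# Both weight vectors give the same score on x (dot product = 1),
# but L2 prefers w2, where every weight contributes a little.
x  = np.array([1.0, 1.0, 1.0, 1.0])
w1 = np.array([1.0, 0.0, 0.0, 0.0])
w2 = np.array([0.25, 0.25, 0.25, 0.25])
```

Here w1·x = w2·x = 1, yet l2_penalty(w1) = 0.1 while l2_penalty(w2) = 0.025, so the regularized loss favors w2: the diffuse solution that uses all input dimensions.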
