Machine Learning Specialization
- Supervised Machine Learning: Regression and Classification
- Advanced Learning Algorithms
- Unsupervised Learning, Recommenders, Reinforcement Learning
Supervised Machine Learning: Regression and Classification [Course]
Week 1 Summary!!
Machine Learning
- Supervised Learning (learns from labels)
    - Regression (e.g. more experience abroad ⇒ better language skills)
    - Classification (e.g. malignant vs. benign tumors)
- Unsupervised Learning (no labels; the algorithm has to find structure in the data on its own)
    - Clustering (e.g. Google News; a quick sketch follows this list)
        - groups similar data points together
    - Dimensionality reduction
        - compresses data using fewer numbers
    - Anomaly detection
        - finds unusual data points
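As a quick illustration of clustering (not from the course labs; just a sketch using scikit-learn's `KMeans` on made-up toy points):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up 2-D points: two loose groups, with no labels given
points = np.array([[1.0, 1.1], [0.9, 1.3], [1.2, 0.8],
                   [5.0, 5.2], [5.1, 4.9], [4.8, 5.3]])

# The algorithm has to find the structure (the two groups) on its own
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1]: similar points grouped together
print(kmeans.cluster_centers_)  # the center of each discovered group
```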
Regression
- Goal: find the parameters ($w, b$) that minimize the cost function, $\min_{w,b} J(w,b)$ (formula just below)
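For linear regression, the model is a line and the cost is the average squared error over the $m$ training examples:

$$f_{w,b}(x) = wx + b, \qquad J(w,b) = \frac{1}{2m}\sum_{i=1}^{m}\left(f_{w,b}(x^{(i)}) - y^{(i)}\right)^2$$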
- Gradient Descent: an algorithm that automatically finds the best-fit line by minimizing the cost function
    - Goal: find $\min_{w,b} J(w,b)$ as efficiently as possible
    - start with some $w, b$ (e.g. set $w=0$, $b=0$)
    - from that point, look around 360° and take a baby step in the direction of steepest descent
    - repeat the previous step until you can't go down any further (you've found a local minimum, but is it the global minimum?)
    - if you start at a different point, you might end up at a different local minimum!
- The gradient descent update rule (shown just below):
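Repeat the following simultaneous updates until convergence:

$$w := w - \alpha\,\frac{\partial}{\partial w}J(w,b), \qquad b := b - \alpha\,\frac{\partial}{\partial b}J(w,b)$$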
    - the derivative term is the key part that makes the point move downhill
    - the learning rate $\alpha$ decides how big each step is
    - if the learning rate is too large, the derivative term can keep growing with each overshoot, so the updates keep getting farther apart (fails to converge, may even diverge)!!
    - as you approach a local minimum, the derivative term shrinks, so the update steps automatically get smaller and smaller!
    - that's why a fixed learning rate can still work!
- Using all of the training data points to compute the total cost at each gradient descent step is called "Batch" gradient descent! (a minimal sketch follows)
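A minimal NumPy sketch of batch gradient descent for one-variable linear regression (the function names and toy data below are my own, made up for illustration; only the math follows the course):

```python
import numpy as np

def compute_cost(x, y, w, b):
    """Squared-error cost J(w, b), averaged over all m training examples."""
    m = len(x)
    return np.sum((w * x + b - y) ** 2) / (2 * m)

def batch_gradient_descent(x, y, alpha=0.01, num_iters=5000):
    """Batch gradient descent: every step uses ALL training points."""
    w, b = 0.0, 0.0                  # start with some w, b (here zeros, as in the notes)
    m = len(x)
    for _ in range(num_iters):
        err = w * x + b - y          # prediction error on every training point
        dj_dw = np.sum(err * x) / m  # derivative of J with respect to w
        dj_db = np.sum(err) / m      # derivative of J with respect to b
        w -= alpha * dj_dw           # step size is controlled by the learning rate alpha
        b -= alpha * dj_db
    return w, b

# Made-up toy data: y is roughly 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])
w, b = batch_gradient_descent(x, y)
print(f"w = {w:.2f}, b = {b:.2f}, J = {compute_cost(x, y, w, b):.4f}")
```

Note how the steps shrink automatically near the minimum because the derivatives shrink, which is exactly why the fixed `alpha` above is enough.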