## Continuing Adventures in Machine Learning

In the last post, I wrote about computing the cost function for linear regression models and using gradient descent to minimize that cost.
A quick review of the key equations:

- Hypothesis: \(h_\theta(x) = \theta_0 + \theta_{1}x\)
- Parameters: \(\theta_0, \theta_1\)
- Cost function: \(J(\theta_0,\theta_1) = \frac{1}{2m} \sum_{i=1}^m \left(h_\theta(x^{(i)}) - y^{(i)}\right)^2\)
- Goal: \(\underset{\theta_0,\theta_1}{\text{minimize}}\ J(\theta_0, \theta_1)\)
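To make the cost function concrete, here is a minimal sketch of \(J(\theta_0, \theta_1)\) in Python with NumPy. The function name `compute_cost` and the use of NumPy arrays for the training data are my own assumptions for illustration, not something prescribed in the post.

```python
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """Mean squared error cost J(theta0, theta1) for univariate linear regression."""
    m = len(y)                          # number of training examples
    predictions = theta0 + theta1 * x   # hypothesis h_theta(x) for every example
    squared_errors = (predictions - y) ** 2
    return squared_errors.sum() / (2 * m)

# Example usage with made-up data points
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(compute_cost(x, y, theta0=0.0, theta1=2.0))  # 0.0, since y = 2x exactly
```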
With these tools, we can run gradient descent, an optimization algorithm that searches for the values of \(\theta_0\) and \(\theta_1\) that minimize \(J(\theta_0, \theta_1)\).
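As a rough sketch of how that search proceeds, the loop below applies the batch gradient descent update to both parameters simultaneously. The learning rate `alpha` and iteration count are illustrative hyperparameters of my choosing, and the helper builds on the same NumPy-array assumption as the cost sketch above.

```python
def gradient_descent(x, y, theta0=0.0, theta1=0.0, alpha=0.01, iterations=1500):
    """Batch gradient descent on J(theta0, theta1) for univariate linear regression."""
    m = len(y)
    for _ in range(iterations):
        predictions = theta0 + theta1 * x
        errors = predictions - y
        # Partial derivatives of J with respect to theta0 and theta1
        grad0 = errors.sum() / m
        grad1 = (errors * x).sum() / m
        # Simultaneous update of both parameters
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1
```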