What is Machine Learning? 🤷🏽‍♀️

Truthfully, it depends on who you ask. To some, it's a mysterious black box capable of solving any problem. For academics, it's a complex interplay of advanced mathematics and statistics. For skeptics, it signals the end of the world, with Terminator robots soon coming to end humanity! Regardless of your perspective, at its core, machine learning is about creating algorithms that identify patterns in data and make predictions based on those patterns.[1] (Not too different from how humans learn from experience, right?)

Types of Machine Learning 🔍

Different datasets and problems call for different types of learning, such as supervised, unsupervised, and reinforcement learning.[2]

In this introduction, we will focus on supervised learning. Supervised learning involves training models using labeled data to predict or classify outcomes. For instance, a spam email classifier learns patterns from a dataset of emails labeled as "spam" or "not spam" to classify new, unseen emails. Essentially, supervised learning builds a model that understands the relationship between input features and known outcomes, which it then uses to make predictions on new data.
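To make this concrete, here is a minimal sketch of the spam example using scikit-learn. The tiny email list, its labels, and the choice of a naive Bayes classifier are illustrative assumptions, not part of the original example:

```python
# A toy supervised-learning sketch: a spam classifier trained on labeled emails.
# The four example emails and their labels are invented for illustration;
# a real classifier would need a much larger labeled dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",        # spam
    "meeting agenda for monday",   # not spam
    "claim your free reward",      # spam
    "lunch tomorrow at noon?",     # not spam
]
labels = ["spam", "not spam", "spam", "not spam"]

# Turn each email into word-count features (the inputs),
# then learn the input-to-label relationship from the labeled examples.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)

# Classify a new, unseen email.
new_email = vectorizer.transform(["claim your free prize"])
print(model.predict(new_email))   # expected: ['spam']
```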

A Beginner's Example 👽

Imagine you've gathered information about the height and weight of an alien species. You have 50 examples of their heights and weights. Now, you want to create a machine learning model that can predict an alien's weight just by knowing their height.

| Observation | Height (cm) | Weight (kg) |
|-------------|-------------|-------------|
| 1           | 4           | 10          |
| 2           | 6           | 13          |
| 3           | 9           | 22          |
| ...         | ...         | ...         |
| 48          | 11          | 27          |
| 49          | 12          | 28          |
| 50          | 13          | 33          |

Visualizing the Dataset 📈

When we visualize this data, a clear pattern emerges: as height increases, weight also increases in a roughly linear manner. We can train a machine to learn this pattern and then use it to make accurate predictions for heights it hasn't seen.
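If you'd like to reproduce the plot, here is a sketch with matplotlib. It uses only the handful of rows shown in the table above, since the full 50-row dataset isn't listed:

```python
# A sketch of visualizing the alien dataset as a scatter plot.
# Only the rows shown in the table above are used; the full example
# dataset has 50 observations.
import matplotlib.pyplot as plt

heights = [4, 6, 9, 11, 12, 13]     # cm
weights = [10, 13, 22, 27, 28, 33]  # kg

plt.scatter(heights, weights)
plt.xlabel("Height (cm)")
plt.ylabel("Weight (kg)")
plt.title("Alien height vs. weight")
plt.show()
```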

Finding the Optimal Parameters 🧭

Our data exhibits a linear relationship where \( \text{weight} \approx w_0 + w_1 \times \text{height} \). But how do we determine the optimal values of \( w_0 \) and \( w_1 \)? With an infinite number of possibilities, it would take forever to guess and check until we found the best fit.

Fortunately, we don't need to do this manually. We can use supervised learning to automatically learn the optimal values of \( w_0 \) and \( w_1 \).
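As a quick sketch of what "automatically learn" can look like in practice, an ordinary least-squares fit with NumPy finds \( w_0 \) and \( w_1 \) in a single call (again using only the rows shown above, so the numbers it prints are illustrative):

```python
# A sketch of fitting weight ≈ w0 + w1 * height by ordinary least squares.
# np.polyfit returns coefficients from the highest degree down, so the
# slope (w1) comes first and the intercept (w0) second.
import numpy as np

heights = np.array([4, 6, 9, 11, 12, 13])
weights = np.array([10, 13, 22, 27, 28, 33])

w1, w0 = np.polyfit(heights, weights, deg=1)
print(f"weight ≈ {w0:.2f} + {w1:.2f} * height")
```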

Training the Model 🤖

How does the machine learn the optimal parameters \( w_0 \) and \( w_1 \)?

First, we need a way to measure how wrong the model is: an error (or loss) function that compares the model's predictions with the actual data. The model starts with random values for \( w_0 \) and \( w_1 \), then makes small changes to these values to gradually reduce that error.[3]

By continually reducing this error, the model learns the true relationship between height and weight. This process creates a linear regression model that can predict weight based on height.
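One common way to make those "small changes" is gradient descent on the mean squared error. Here is a from-scratch sketch; the learning rate, the number of steps, and the use of only the table's visible rows are all illustrative choices:

```python
# A from-scratch sketch of training the linear model with gradient descent.
# Learning rate and step count are arbitrary illustrative choices.
import numpy as np

heights = np.array([4.0, 6.0, 9.0, 11.0, 12.0, 13.0])
weights = np.array([10.0, 13.0, 22.0, 27.0, 28.0, 33.0])

rng = np.random.default_rng(0)
w0, w1 = rng.normal(size=2)   # start from random parameter values
lr = 0.005                    # learning rate: how big each small change is

for step in range(20_000):
    preds = w0 + w1 * heights          # current predictions
    error = preds - weights            # how wrong the model is
    # Gradients of the mean squared error with respect to w0 and w1.
    grad_w0 = 2 * error.mean()
    grad_w1 = 2 * (error * heights).mean()
    w0 -= lr * grad_w0                 # nudge the parameters downhill
    w1 -= lr * grad_w1

print(f"learned: weight ≈ {w0:.2f} + {w1:.2f} * height")
```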

Making Predictions 🔮

After training our model, we obtain concrete values for the optimal parameters \( w_0 \) and \( w_1 \).

Using these, we can easily predict an alien's weight: just plug any height into the learned equation \( \text{weight} \approx w_0 + w_1 \times \text{height} \).

So, given an alien's height, our model returns a predicted weight.
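Here is a short sketch of that prediction step. The fit is repeated on the visible rows only so the snippet runs on its own, and the 10 cm query height is a made-up example value:

```python
# A sketch of predicting weight from height with the learned parameters.
# The fit is repeated here only so the snippet is self-contained;
# the 10 cm query height is a made-up example value.
import numpy as np

heights = np.array([4, 6, 9, 11, 12, 13])
weights = np.array([10, 13, 22, 27, 28, 33])
w1, w0 = np.polyfit(heights, weights, deg=1)

def predict_weight(height_cm):
    """Predicted weight (kg) for a given height (cm)."""
    return w0 + w1 * height_cm

print(f"Predicted weight for a 10 cm alien: {predict_weight(10):.1f} kg")
```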

A Mathematician's Perspective 🧮

From a mathematical standpoint, supervised learning (typically) follows a structured process. To illustrate, let's consider a regression problem similar to our example, where the objective is to predict a continuous output \( y \) based on an input \( x \).

The Supervised Learning Process [4]

  1. Data Collection: Collect a dataset \( D = \{(x_1, y_1), \ldots, (x_N, y_N)\} \) consisting of pairs \( (x_i, y_i) \) where \( x_i \) are inputs and \( y_i \) are known outputs.
  2. Function Definition: Define a function \( y = f_w(x) \) that represents how \( x \) relates to \( y \), using parameters \( w \).
  3. Loss Function Formulation: Formulate a loss function \( L(w) \) to measure how well \( f_w \) fits the data.
  4. Parameter Optimization: Optimize the parameters \( w \) by minimizing the loss function \( L(w) \) to make \( f_w \) fit the data as accurately as possible.

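To ground these four steps in our alien example, here is one common instantiation, assuming a linear model and the mean squared error as the loss:

\[
f_w(x) = w_0 + w_1 x, \qquad
L(w) = \frac{1}{N} \sum_{i=1}^{N} \bigl( y_i - f_w(x_i) \bigr)^2, \qquad
w^{*} = \arg\min_{w} L(w),
\]

where \( x \) is height, \( y \) is weight, and \( w^{*} \) denotes the parameter values that minimize the loss.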
This framework is versatile and widely applicable in supervised learning. It encompasses not only simple linear regression but also extends to advanced models like neural networks. Mastering this workflow equips you to effectively address nearly any supervised learning problem. If it doesn't make sense just yet, don't worry! As you encounter it more, you'll gradually grasp the concept.

Summary 🤓

Machine learning, at its core, involves creating algorithms that can identify patterns in data and make predictions based on those patterns. In our exploration, we've focused on supervised learning, where models are trained using labeled data to predict outcomes.

We began by visualizing a dataset in which alien height and weight exhibited a clear linear relationship. Through supervised learning, specifically linear regression, we demonstrated how a machine learns to fit the linear equation \( \text{weight} \approx w_0 + w_1 \times \text{height} \) and use it to predict weight from height.

The training process involves defining a metric to measure model error, initializing parameters \(w_0\) and \(w_1\), and iteratively adjusting them to minimize prediction errors. This iterative refinement allows the model to learn the underlying patterns effectively.

By grasping these foundational concepts and processes, you can dive deeper into supervised learning applications, ranging from simple linear regression to more intricate algorithms like neural networks. These concepts form a common thread throughout various supervised learning methodologies.