A friend of mine began his exploration of machine learning in Python a few months back. Excited to help him out and curious about his familiarity with statistics and algorithms (**see mental model: the Forest**), we hopped on a call and discussed a few things. As we got into certain topics, I realized that some of the mathematical ideas were difficult to grasp.

This blog post is designed as an ELI5 (Explain Like I’m Five), intended for people ranging from zero knowledge to beginner-level familiarity with machine learning. It tackles the following fundamental concepts in machine learning and artificial intelligence:

- Intuition: neural networks, modeled after the human brain
- Data (dimensions, quality, characteristics, quantity)
- Mathematical functions: simple to complex (linear, quadratic, etc.)
- Data collection

## Intuition: A human’s brain

In designing how we can teach a machine to learn, we modeled (imitated) it on how human brains work. At a surface level, our brains have neurons connected by synapses, which send messages to each other depending on results (of our behavior, our environment, our perceptions, etc.).

At a human-to-human level, neurons and synapses work something like this:

#### Narrative example

Alice, Bob, and Carlos are classmates. Whenever an exam is coming up, Alice approaches both boys to ask for help studying. The day before the exam, Alice studied with Bob and studied with Carlos.

After receiving the results of the test, she found out that the majority of what Bob taught her was correct, while a lot of what Carlos covered was incorrect. She tells herself, “Alright, Bob is pretty reliable. I can come to him again later to ask for help. Maybe Carlos only misunderstood certain things, so I’ll study with him again next time just to check.”

**On the next exam, however, the results were the same. Bob was reliably the better study partner, while sessions with Carlos only confused her.**

She tells herself, “Okay, I’m quite confident now that I should rely on Bob when it comes to my studies, and that I shouldn’t study with Carlos because I only get confused during the test.”

## Example numbers

Let’s work through a simple mathematical example below, with made-up figures, so we can see how Alice learns whom to study with.

The given values are:

- Alice has 0.50 confidence level for both study partners at first.
- Alice ‘reacts’ or adjusts her confidence levels by a factor of 0.20.
- Alice’s criterion/threshold for whether she should study with someone is 0.25, which translates to: “if I feel at least 25% confident about a study partner, I’ll study with them again” (an activation function with a threshold)

| Alice’s study partner | Confidence level before first exam | Confidence level after first exam | Confidence level after second exam |
| --- | --- | --- | --- |
| Bob | 0.50 | 0.70 | 0.90 |
| Carlos | 0.50 | 0.30 | 0.10 |

*PSA to study diligently boys; girls like that.*

As seen above, Alice doesn’t discriminate at first, giving both boys the same confidence level prior to the exam. After the results of the first exam, however, Alice readjusts her confidence levels (weights/synapses) accordingly. Both boys still pass the activation function (at least 0.25), so she studies with both of them again. After the second exam, her confidence level in Carlos no longer meets the activation function’s threshold (0.10 is less than 0.25), so she stops studying with Carlos.
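Alice’s learning process above can be sketched in a few lines of Python. This is my own illustration of the example’s numbers, not any standard ML library API; the function names and structure are made up for clarity.

```python
# A minimal sketch of Alice's "learning rule" from the example above.

LEARNING_RATE = 0.20   # how much Alice adjusts her confidence after each exam
THRESHOLD = 0.25       # minimum confidence needed to study with someone again

def update_confidence(confidence, partner_was_helpful):
    """Raise or lower confidence by the learning rate, clamped to [0, 1]."""
    if partner_was_helpful:
        confidence += LEARNING_RATE
    else:
        confidence -= LEARNING_RATE
    return max(0.0, min(1.0, confidence))

def will_study_with(confidence):
    """Alice's 'activation function': study again only at or above the threshold."""
    return confidence >= THRESHOLD

# Bob is helpful on both exams; Carlos is not.
bob, carlos = 0.50, 0.50
for exam in (1, 2):
    bob = update_confidence(bob, partner_was_helpful=True)
    carlos = update_confidence(carlos, partner_was_helpful=False)
    print(f"After exam {exam}: Bob={bob:.2f} ({will_study_with(bob)}), "
          f"Carlos={carlos:.2f} ({will_study_with(carlos)})")
```

Running this reproduces the table: Bob ends at 0.90 and still passes the threshold, while Carlos drops to 0.10 and no longer does.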

## Terminologies, analogies

I like the above analogy because both tasks (studying for an exam and machine learning) are about learning.

So let’s break down the other analogies:

| Analogy | How the brain works | How machine learning works |
| --- | --- | --- |
| Exam for human learning | The task of the brain to learn | The task for the machine to learn |
| Having a perfect score on the exam | The expected result of the brain | The expected result of the machine |
| Having an actual score that varies from 0 to 100 | The actual result from the world/experience | A mathematical score based on statistics |
| Alice, Bob, and Carlos as human students | The neurons in an organic neural network (the brain) | Digitally coded neurons, part of a neural network |
| Alice’s confidence level when choosing to have study sessions with a student | The activation level of a synapse | An activation level of a synapse, represented as a threshold value (e.g. at least 0.25) |
| Alice having a study session with another student | Activating or firing (organic) neurons | Activating or firing (digital) neurons |
| Alice pondering the performance of both Bob and Carlos as factors for future study sessions | Backpropagation: based on results, specific synapses are strengthened or weakened | Backpropagation: the weights or multipliers (you can think of them as values between 0 and 1) that represent the weakness (closer to 0) or strength (closer to 1) of a synapse are adjusted based on the results |
| Alice becoming more inclined to rely on Bob because of his past performance | Strengthening a synapse between one neuron and another, making it easier to activate a specific neuron | Increasing a weight closer to 1, making the connected neuron more likely to activate |
| Alice giving Carlos another shot, but no more after the second exam | The synapse was weakened after the first exam, but the neuron still reached its activation level, so Alice pushed forward with the study session. After the synapse was further weakened on the second exam, the activation level was no longer reached. | Reducing a weight closer to 0 while still passing the threshold; reducing it again a second time so the threshold is no longer passed |

Hopefully, the above example shows the translation from a human-to-human scale (left column) to its single-human’s brain scale (center column), and its counterpart as to how we modeled machine learning based on how human brains work (right column).

Mathematically speaking, we represent these as a graph (or network) of nodes (neurons) and weighted edges (synapses). This is where the machine learning term **neural network** comes from: it’s based on the network of neurons in the human brain, which we model digitally.
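To make the node-and-weighted-edge picture concrete, here is a tiny sketch of a single digital neuron, reusing the numbers from the example above. Again, this is my own illustration with made-up names, not any particular library’s API.

```python
# A single artificial neuron: a weighted sum of inputs passed through a
# step "activation function".

def step_activation(weighted_sum, threshold=0.25):
    """Fire (1) only if the incoming signal reaches the threshold."""
    return 1 if weighted_sum >= threshold else 0

def neuron(inputs, weights):
    """Multiply each input by its synapse weight, sum, then apply activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return step_activation(weighted_sum)

# Weights after the second exam: Bob = 0.90, Carlos = 0.10.
print(neuron([1, 0], [0.90, 0.10]))  # only Bob's signal: 0.90 >= 0.25, fires (1)
print(neuron([0, 1], [0.90, 0.10]))  # only Carlos's signal: 0.10 < 0.25, stays off (0)
```

The weighted edges are the `weights` list, and the threshold plays the role of Alice’s decision rule about whom to study with.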

### But what is a neural network?

If you’re more of a visual and auditory learner, I recommend watching 3Blue1Brown’s video on what a neural network is.

## Closing

A lot of the math may get overwhelming and confusing, especially because of the very technical terminology, but it really boils down to imitating what already survives and thrives in real life. Making these biological and technological discoveries accessible to more people is important to me, and that’s what sparked this blog series on machine learning.

I’ll be writing the next sections in another blog post, so stay tuned. You can follow me on Twitter, where I regularly post my updates on my blog.

How did you like the article? Thank you very much for taking the time to read this, and I hope you enjoyed it.

If you had any questions or you want to send me a message, feel free to tweet me.
