Introduction to Machine Learning in Python
Machine learning is the practice of giving computers the ability to learn without being explicitly programmed. This is done by feeding data to computers, which transform that data into decision models that are then used for future predictions.
In this tutorial, we will talk about machine learning and some of the fundamental concepts that are required in order to get started with machine learning. We will also devise a few Python examples to predict certain elements or events.
Introduction to Machine Learning
Machine learning is a type of technology that aims to learn from experience. For example, as a human, you can learn how to play chess simply by observing other people playing chess. In the same way, computers are programmed by providing them with data from which they learn and are then able to predict future elements or conditions.
Let’s say, for instance, that you want to write a program that can tell whether a certain type of fruit is an orange or a lemon. You might find it easy to write such a program and it will give the required results, but you might also find that the program doesn’t work effectively for large datasets. This is where machine learning comes into play.
There are various steps involved in machine learning:
- collection of data
- filtering of data
- analysis of data
- algorithm training
- testing of the algorithm
- using the algorithm for future predictions
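The steps above can be sketched in code. The following is a minimal, hypothetical example using Sklearn's bundled iris dataset and a decision tree; the dataset and parameter choices here are illustrative, not part of this tutorial's main example.

```python
# A minimal sketch of the machine learning workflow with Sklearn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 1. Collect data (here, a sample dataset that ships with Sklearn).
iris = load_iris()

# 2-3. Filter and analyze: split the data into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0)

# 4. Train the algorithm on the training set.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# 5. Test the algorithm: fraction of correct predictions on unseen data.
accuracy = model.score(X_test, y_test)
print('accuracy:', accuracy)

# 6. Use the trained algorithm for future predictions.
print(model.predict(X_test[:1]))
```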
Machine learning uses different kinds of algorithms to find patterns, and these algorithms are classified into two groups:
- supervised learning
- unsupervised learning
Supervised Learning
Supervised learning is the science of training a computer to recognize elements by giving it sample data. The computer then learns from it and is able to predict future datasets based on the learned data.
For example, you can train a computer to filter out spam messages based on past information.
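The spam example can be sketched as follows. This is a toy illustration, assuming a naive Bayes classifier on word counts; the messages and labels below are made up.

```python
# A toy sketch of spam filtering as supervised learning.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Made-up past messages with known labels: 1 = spam, 0 = not spam.
messages = [
    'win a free prize now',
    'limited offer, claim your free money',
    'are we still meeting for lunch today',
    'please review the attached report',
]
labels = [1, 1, 0, 0]

# Turn each message into word counts the algorithm can learn from.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# Train a naive Bayes classifier on the past messages.
classifier = MultinomialNB()
classifier.fit(X, labels)

# Predict whether a new, unseen message is spam.
new = vectorizer.transform(['claim your free prize'])
print(classifier.predict(new))  # [1] means spam
```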
Supervised learning has been used in many applications, e.g. searching images on Facebook based on a description. You can now search images on Facebook with words that describe the contents of the photo. Since the social networking site already has a database of captioned images, it is able to match the description to features from photos with some degree of accuracy.
There are two main steps involved in supervised learning: training, where the algorithm learns from labeled sample data, and testing, where its predictions are checked against data it has not seen before.
Some of the supervised learning algorithms include:
- decision trees
- support vector machines
- naive Bayes
- k-nearest neighbor
- linear regression
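In Sklearn, the classification algorithms listed above all share the same fit/predict interface, which makes them easy to swap and compare. Here is a hedged sketch on a tiny made-up dataset; the numbers are only for illustration.

```python
# The supervised classifiers listed above share the same fit/predict
# interface in Sklearn, so they are interchangeable in code.
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# A tiny made-up dataset: two features, two well-separated classes.
X = [[0, 0], [1, 1], [8, 9], [9, 8]]
y = [0, 0, 1, 1]

models = [
    DecisionTreeClassifier(),
    SVC(),
    GaussianNB(),
    KNeighborsClassifier(n_neighbors=1),
]

# Every model is trained and queried the same way.
for model in models:
    model.fit(X, y)
    print(type(model).__name__, model.predict([[8, 8]]))
```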
We are going to write a simple program to demonstrate how supervised learning works using the Sklearn library and the Python language. Sklearn is a machine learning library for the Python programming language with a range of features, such as classification, regression, and clustering algorithms.
Sklearn also interoperates well with the NumPy and SciPy libraries.
The Sklearn installation guide offers a very simple way of installing it for multiple platforms. It requires several dependencies:
- Python (>= 2.7 or >= 3.3)
- NumPy (>= 1.8.2)
- SciPy (>= 0.13.3)
If you already have these dependencies, you can install Sklearn as simply as:
pip install -U scikit-learn
An easier way is to simply install Anaconda. This takes care of all the dependencies so you don’t have to worry about installing them one by one.
To test whether Sklearn is running properly, simply import it from a Python interpreter as follows:
import sklearn
If no error occurs, then you are good to go.
Now that we are done with the installation, let's get back to our problem. We want to be able to differentiate between different animals. So we will design an algorithm that can tell specifically whether a given animal is a horse or a chicken.
We first need to collect some sample data from each type of animal. Some sample data is shown in the table below.
| Height (inches) | Weight (kg) | Temperature (Celsius) | Label |
|-----------------|-------------|-----------------------|---------|
| 7               | 0.6         | 40                    | Chicken |
| 7               | 0.6         | 41                    | Chicken |
| 37              | 600         | 37                    | Horse   |
| 37              | 600         | 38                    | Horse   |
The sample data we have obtained lists some common features of the two animals, with two samples from each. The larger the sample data, the more accurate and less biased the results will be.
With this type of data, we can write an algorithm and train it to recognize an animal based on these values, classifying it as either a horse or a chicken. Now we will go ahead and write the code that gets the job done.
First, import the tree module from Sklearn.
from sklearn import tree
Define the features you want to use to classify the animals.
features = [[7, 0.6, 40], [7, 0.6, 41], [37, 600, 37], [37, 600, 38]]
Define the label each set of features corresponds to. A chicken will be represented by 0, while a horse will be represented by 1.
# labels = [chicken, chicken, horse, horse]
# we use 0 to represent a chicken and 1 to represent a horse
labels = [0, 0, 1, 1]
We then define the classifier which will be based on a decision tree.
classifier = tree.DecisionTreeClassifier()
Feed, or fit, your data to the classifier.
classifier.fit(features, labels)
The complete code for the algorithm is shown below.
from sklearn import tree

features = [[7, 0.6, 40], [7, 0.6, 41], [37, 600, 37], [37, 600, 38]]

# labels = [chicken, chicken, horse, horse]
labels = [0, 0, 1, 1]

classifier = tree.DecisionTreeClassifier()
classifier.fit(features, labels)
We can now predict a given set of data. Here's how to predict an animal with a height of 7 inches, a weight of 0.6 kg, and a temperature of 41 degrees Celsius:
from sklearn import tree

features = [[7, 0.6, 40], [7, 0.6, 41], [37, 600, 37], [37, 600, 38]]

# labels = [chicken, chicken, horse, horse]
labels = [0, 0, 1, 1]

classifier = tree.DecisionTreeClassifier()
classifier.fit(features, labels)

print(classifier.predict([[7, 0.6, 41]]))
# output
# [0], i.e. a chicken
Here's how to predict an animal with a height of 38 inches, a weight of 600 kg, and a temperature of 37.5 degrees Celsius:
from sklearn import tree

features = [[7, 0.6, 40], [7, 0.6, 41], [37, 600, 37], [37, 600, 38]]

# labels = [chicken, chicken, horse, horse]
labels = [0, 0, 1, 1]

classifier = tree.DecisionTreeClassifier()
classifier.fit(features, labels)

print(classifier.predict([[38, 600, 37.5]]))
# output
# [1], i.e. a horse
As you can see above, the algorithm was trained on the features and labels of the animals, and that knowledge is then used to classify new animals.
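The tutorial fits the classifier on all four samples; in practice you would also test it on data it has not seen. Here is a hedged sketch of that step for the animal example; the extra measurements below are made up.

```python
# A sketch of testing the animal classifier on held-out data.
from sklearn import tree

features = [[7, 0.6, 40], [7, 0.6, 41], [37, 600, 37], [37, 600, 38]]
labels = [0, 0, 1, 1]  # 0 = chicken, 1 = horse

classifier = tree.DecisionTreeClassifier()
classifier.fit(features, labels)

# Hypothetical unseen animals: one small and light, one tall and heavy.
test_features = [[8, 0.7, 40], [36, 590, 37]]
test_labels = [0, 1]

# score() returns the fraction of correct predictions.
print(classifier.score(test_features, test_labels))
```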
Unsupervised Learning
Unsupervised learning is when you train your machine with only a set of inputs, without labels. The machine must then find patterns and relationships in the data on its own. Unlike supervised learning, where you present the machine with labeled examples to learn from, unsupervised learning is meant to make the computer discover structure in unlabeled data.
Unsupervised learning can be further subdivided into:
Clustering: Clustering means grouping data by inherent similarity. For example, you can group consumers by their purchases and shopping habits, and use those groups for targeted advertising.
Association: Association is where you identify rules that describe large sets of your data. This type of learning can be applied to grouping books by author or category, whether motivational, fictional, or educational.
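Sklearn does not ship an association-rule algorithm, but the core idea can be sketched in plain Python by counting how often items appear together. The shopping baskets below are made up for illustration.

```python
# A plain-Python sketch of association: count how often pairs of
# items appear together in made-up shopping baskets.
from itertools import combinations
from collections import Counter

baskets = [
    {'bread', 'butter', 'milk'},
    {'bread', 'butter'},
    {'bread', 'milk'},
    {'beer', 'chips'},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# A frequent pair suggests a rule such as "bread -> butter".
print(pair_counts.most_common(2))
```

Real association-rule mining (e.g. the Apriori algorithm) extends this idea with support and confidence thresholds so it scales to large datasets.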
Some of the popular unsupervised learning algorithms include:
- k-means clustering
- hierarchical clustering
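To give a feel for the algorithms listed above, here is a hedged sketch of k-means clustering applied to the animal measurements from this tutorial, with no labels given at all.

```python
# A sketch of unsupervised learning: k-means groups the animal
# measurements from this tutorial without being given any labels.
from sklearn.cluster import KMeans

features = [[7, 0.6, 40], [7, 0.6, 41], [37, 600, 37], [37, 600, 38]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(features)

# Each sample is assigned to one of two clusters; the first two
# samples (the chickens) land together, as do the last two (the horses).
print(kmeans.labels_)
```

Note that the cluster numbers themselves are arbitrary; the algorithm discovers the two groups, but it is up to us to name them.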
Unsupervised learning will be an important technology in the near future. This is because a vast amount of data is unlabeled and has not yet been analyzed.
Conclusion
Your decision to use either a supervised or an unsupervised machine learning algorithm will depend on various factors, such as the structure and size of your data.
Machine learning can be applied in almost all areas of our lives, e.g. in fraud prevention, personalizing news feeds on social media sites to fit users' preferences, email and malware filtering, weather prediction, and even in the e-commerce sector to predict consumer shopping habits.
I hope this tutorial has helped you get started with machine learning. This is just an introduction; machine learning has a lot to cover, and this is only a fraction of what it can do.
Additionally, don't hesitate to see what we have available for sale and for study in the Envato Market, and don't hesitate to ask any questions and provide your valuable feedback using the feed below.