Your first dive into Deep Learning

As Deep Learning is part of my daily work and also the main trend in AI, I decided to share some of the insights I picked up at a workshop I participated in, and to help you start your own first deep learning project.

Neural networks are a subset of machine learning techniques. They are modeled to mimic neurons and their connections and interactions in the brain. As a signal spreads through the brain, it travels from one neuron to another. Similarly, in a neural network, information spreads layer by layer, allowing a computer to learn by itself from the observed data. Because neural networks tend to have many layers, we often refer to these techniques as Deep Learning.

The simplest type of neural network is the feedforward neural network. Its architecture allows information to move in only one direction: forward, from the input layer through the “hidden” layers to the output layer, without any loops along the way. Training a neural network consists of providing inputs and telling the network what the outputs should be, while the network itself tries to figure out the best-performing parameters. You should therefore “feed” it with as much data as you can, because it learns with every iteration (i.e. readjusts its parameters): the more training data you provide, the better the results.

The basic building block of a neural network is the artificial neuron. It processes all of its inputs and produces an output: the inputs are weighted and summed up together with a bias into a single value, which is then turned into an output by an activation (transfer) function. With every iteration, the neural network gradually shifts the weights and biases of all the neurons so that the next iteration’s outputs are a bit closer to the known/true outputs. The measure of the difference between the network’s outputs and the true outputs is called the cost function (or loss function), and learning should minimise it. I’ll return to these terms in one of my next articles, where I’ll address the problem of overfitting (a model is overfitting when it performs perfectly on the training data but fails on the test data) and teach you how to avoid it.
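
To make this concrete, here is a minimal sketch of a single artificial neuron in plain Python/NumPy; the input values, weights, bias and the choice of a sigmoid activation are purely illustrative:

```python
import numpy as np

# A single artificial neuron: weight the inputs, add a bias,
# squash the sum with an activation function (here a sigmoid).
def neuron(inputs, weights, bias):
    weighted_sum = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))  # sigmoid activation

# Illustrative values only: three inputs with made-up weights and bias
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])
print(neuron(x, w, bias=0.1))  # a single output value in (0, 1)
```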

Let’s start by constructing a simple neural network in Keras. We’ll create a neural network that mimics the linear function f(x) = 2*x + 1. The input for the neural network will be 1000 decimal numbers from the interval [1, 2], and the outputs will be the values of the linear function at those points. We’ll use a sequential model and just stack layers on top of each other with the marvelous Keras function add(), which allows super fast and super easy neural net construction. You’ll see, it’s pretty simple!

The output dimension is 1, the same as the input. The final construction step is model compilation, where you define the loss function and the optimiser (i.e. the learning method). We’ll use the standard regression loss function, mean squared error, and the RMSprop optimiser. Note: always use the function summary() to get info about the created neural network!

Example of model construction:
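
The snippet below is a minimal sketch of that construction, following the description above (one dense layer, mean squared error, RMSprop):

```python
from keras.models import Sequential
from keras.layers import Dense

# One dense layer: a single input feature mapped to a single output
model = Sequential()
model.add(Dense(1, input_shape=(1,)))

# Compile with mean squared error loss and the RMSprop optimiser
model.compile(loss='mean_squared_error', optimizer='rmsprop')

# Always check what you have built
model.summary()
```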

Output:

If everything is OK, you will see the loss function decrease with every iteration.

Example of model training:
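
A sketch of the training step; the data follow the description above, while the number of epochs and the batch size are my assumptions:

```python
import numpy as np

# 1000 points from the interval [1, 2] and the corresponding
# values of f(x) = 2*x + 1
x_train = np.linspace(1, 2, 1000)
y_train = 2 * x_train + 1

# Train the model built above; the epoch count and batch size
# are assumptions, not the article's original settings
model.fit(x_train, y_train, epochs=50, batch_size=32)
```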

Output:

So, the parameters are great! As you can see, the first one is 2.0006754 (the true value is 2) and the second one is 1.0005257 (the true value is 1). You could make them even better with more data or a more complex neural network (this architecture is about as basic as it gets), but then again, the problem itself was too trivial to really need a neural network.
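
If you want to inspect the learned parameters yourself, Keras exposes them via get_weights():

```python
# The single dense layer has one weight and one bias; after training
# they should be close to the true values 2 and 1
weight, bias = model.get_weights()
print(weight, bias)
```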

When working with images, you’ll almost always use a convolutional neural network (a form of feedforward neural net). Every convolutional network contains three main types of layers: the convolutional layer, the pooling layer, and the fully-connected layer (exactly as in a regular neural network). It arranges the neurons in three dimensions: width, height, and depth. The convolutional layer is specific because its input is an image presented as a tensor holding the raw pixel values, with dimensions width and height and with three color channels R, G, B. In our dataset, all the pictures are black and white, so one input volume has the size 48x48x1 (the same image in color would be 48x48x3).

Next, the pooling layer reduces the spatial size of the image representation, which simultaneously reduces the number of parameters and the amount of computation. Here we also add a Dropout layer, which randomly drops/forgets a few connections between neurons in every iteration.

As mentioned before, you can just stack all those layers with the function add(), like this:
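
Here is a sketch of such a stack for our 48x48x1 images; the filter count, kernel size, dropout rate and the two output classes (happy/sad) are illustrative assumptions rather than the exact original architecture:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# A small convolutional network for 48x48 grayscale inputs
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                 input_shape=(48, 48, 1)))   # convolutional layer
model.add(MaxPooling2D(pool_size=(2, 2)))    # pooling layer
model.add(Dropout(0.25))                     # randomly forget some connections
model.add(Flatten())
model.add(Dense(2, activation='softmax'))    # fully-connected output, 2 classes
model.summary()
```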

Output:
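
Training works just as in the regression example. A sketch, assuming one-hot encoded labels with a categorical cross-entropy loss, and with x_train/y_train standing in for the (hypothetical) images and labels:

```python
# Compile and train the classifier; the loss assumes one-hot encoded
# labels, and x_train / y_train stand for the hypothetical 48x48x1
# face images and their labels
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=64, validation_split=0.1)
```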

Output:

The final prediction is returned as a vector whose values represent the certainty (as a percentage) that the image belongs to each class. We’ll take the next picture to test our classifier.
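
A sketch of the prediction call, where test_image is a hypothetical 48x48 grayscale array scaled to [0, 1]:

```python
# test_image is a hypothetical 48x48 grayscale array scaled to [0, 1];
# predict() returns one probability per class (e.g. happy, sad)
probabilities = model.predict(test_image.reshape(1, 48, 48, 1))
print(probabilities)
```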

Output:

In our testing example, you can see that the neural network classifies the person above as happy with a certainty of 91%. It also finds some features specific to a sad person (maybe the eyes look a bit sad), but the overall result is great. The accuracy could be better with more data or a slightly different network, but you have to take into account that all the pictures have a really small resolution.

I hope these examples were useful for everyone who wants to take a first dive into deep learning. Neural networks are currently the number one machine learning algorithm for image classification, speech recognition, self-driving vehicles, machine translation, question answering and dialogue systems. Their training demands huge amounts of data and large computational power, but their advantages are remarkable! Their main disadvantage is “black box” learning: it is very hard to interpret the connection between the input and the output, and no one knows the exact structure of the functions approximated by neural networks. But they work! And they are the future of technology!

In my next article, I will share some tips and tricks for working with neural networks and more complex examples of using Deep Learning in practice.
