
Getting ready

A cursory understanding of the building blocks of a simple neural network is helpful in understanding this section and the rest of the chapter. Each neural network has inputs and outputs. In our case, the inputs are the height and weight of the individuals and the output is the gender. In order to get to the output, the inputs are multiplied by values known as weights (w1 and w2), and then a bias (b) is added. This equation is known as the summation function, z, and is given by the following equation:

z = (input1 × w1) + (input2 × w2) + b
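
As a quick illustration, the summation function can be computed directly with numpy. This is a minimal sketch with made-up values; the variable names and the normalized height/weight numbers are our own, not from the recipe:

```python
import numpy as np

# hypothetical inputs: normalized height and weight of one individual
inputs = np.array([0.8, 0.6])   # [height, weight]

# w1, w2, and b start out as randomly generated values
np.random.seed(12345)           # seed chosen arbitrarily, for reproducibility
weights = np.random.randn(2)    # [w1, w2]
bias = np.random.randn()        # b

# summation function: z = (input1 × w1) + (input2 × w2) + b
z = np.dot(inputs, weights) + bias
print(z)
```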

The weights and the bias are initially just randomly generated values, which can be produced with numpy. The weights literally add weight to the inputs by increasing or decreasing their impact on the output. The bias serves a slightly different role: it shifts the baseline of the summation (z) upwards or downwards, depending on what is needed to meet the prediction. Each value of z is then converted into a predicted value between 0 and 1 through an activation function. The activation function is a converter that gives us a value we can turn into a binary output (male/female). The predicted output is then compared with the actual output. Initially, the difference between the predicted and actual outputs will be large, as the weights are random when first starting out. However, a process known as backpropagation is used to minimize the difference between the actual and predicted outputs using a technique known as gradient descent. Once we settle on a negligible difference between the actual and predicted outputs, we store the values of w1, w2, and b for the neural network.
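
To make the whole loop concrete, here is a minimal sketch of a single neuron trained with gradient descent. It assumes a sigmoid activation function; the tiny height/weight dataset, the learning rate, and the epoch count are all illustrative choices, not values from the recipe:

```python
import numpy as np

def sigmoid(z):
    # activation function: converts z into a prediction between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

# made-up training data: normalized [height, weight] and gender labels
X = np.array([[0.9, 0.8],    # male
              [0.8, 0.9],    # male
              [0.3, 0.2],    # female
              [0.2, 0.3]])   # female
y = np.array([1, 1, 0, 0])   # 1 = male, 0 = female

np.random.seed(12345)
w = np.random.randn(2)       # w1, w2 start as random values
b = np.random.randn()        # bias
lr = 0.1                     # learning rate (illustrative)

for epoch in range(5000):
    z = X.dot(w) + b                  # summation function
    pred = sigmoid(z)                 # activation -> predicted output
    error = pred - y                  # predicted minus actual output
    # backpropagation: gradient of the squared error w.r.t. w and b
    grad = error * pred * (1 - pred)
    w -= lr * X.T.dot(grad)           # gradient descent update for weights
    b -= lr * grad.sum()              # gradient descent update for bias

# after training, the predictions converge toward the actual labels,
# and the learned w1, w2, and b are the values we would store
print(np.round(sigmoid(X.dot(w) + b)))
```

Each pass through the loop nudges w1, w2, and b in the direction that shrinks the difference between the predicted and actual outputs, which is exactly the backpropagation-plus-gradient-descent process described above.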