
Logic gates

A logic gate (in electronics) is a device that performs logical operations using binary logic. Logic gates are the fundamental building blocks of modern processors, enabling chips to store data using Latches1, perform binary calculations using Arithmetic Logic Units2, and much more.

In this first Mini-project, a single-neuron 'network' will be built to replicate the behaviour of an OR logic gate3. The OR gate activates whenever either of its inputs reads HIGH (or 1 in this case).

OR gate logic symbol

The logic table for an OR gate looks like this:

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        1
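
For reference, the expected outputs can also be generated with Python's built-in or operator. The short sketch below is only a sanity check of the table and not part of the neuron itself.

# Sanity check: generate the OR truth table with Python's built-in `or` operator
for input1 in (0, 1):
    for input2 in (0, 1):
        print(input1, input2, input1 or input2)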

To make the equations easier to write, the weighted sum and the activation function are combined into one formula.

\[ \text{output}=f(\sum\text{inputs}*\text{weights}) \]
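
Expressed in code, this combined formula could look like the sketch below. It assumes the step activation with a 0.5 threshold from the previous section; the function name neuron_output is only illustrative.

# Minimal sketch of the combined formula: weighted sum followed by a step
# activation with a 0.5 threshold (assumed from the previous section).
def neuron_output(inputs, weights):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum >= 0.5 else 0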

Now, test whether the neuron from the previous section predicts the correct output for the OR logic gate.

\[ \displaylines{ \text{input 1}=0\\ \text{input 2}=0\\ \text{weight 1}=0.3\\ \text{weight 2}=0.9\\ } \]
\[ \text{output}=f(0*0.3+0*0.9)=0 \]

That looks promising, so the other inputs are tried as well.

\[ \displaylines{ \text{output}=f(0*0.3+1*0.9)=1 \\ \text{output}=f(1*0.3+0*0.9)=0 \\ \text{output}=f(1*0.3+1*0.9)=1 \\ } \]
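
The same calculations can be reproduced with a few lines of Python (again assuming the 0.5 step threshold):

# Reproduce the calculations above with the initial weights 0.3 and 0.9
for input1, input2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    weighted_sum = input1 * 0.3 + input2 * 0.9
    print(input1, input2, 1 if weighted_sum >= 0.5 else 0)  # 1, 0 wrongly gives 0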

Almost correct; however, for the inputs \(1,0\), the neuron wrongly outputs \(0\), where it should output \(1\) (according to the logic table above).

How can this behaviour be changed?
By changing the weights.

Take a closer look at the calculation of the incorrect output.

\[ \text{sum}=\text{input 1}*\text{weight 1}+\text{input 2}*\text{weight 2}\\ =1*0.3+0*0.9=0.3 \]

Indeed, if the sum is run through the activation function, the result is incorrect.

\[ f(0.3)=0 \]

The result of the sum, \(0.3\), is too low to trigger the activation function. Looking at the calculations above intuitively, a few things can be concluded:

  • the inputs cannot be changed
  • the activation function could be changed
  • the weights could be changed

In neural networks, the inputs are given and cannot be changed, so it does not make sense to focus on that part of the calculation. The activation function could be changed, but since it produced correct outputs for the other inputs, it is probably not the issue (in practice, activation functions are only rarely changed to reach a desired output). That leaves the weights: adjusting them is the usual way of tuning a neural network to achieve correct outputs.

To trigger the activation function \(f(x)\), the sum for inputs \(1,0\) needs to be greater than or equal to \(0.5\). If \(\text{weight 1}\) is increased to \(0.5\), for example, all outputs can be recalculated to check whether the neuron 'network' is now correct for all inputs.

\[ \displaylines{ \text{output}=f(0*0.5+0*0.9)=0 \\ \text{output}=f(0*0.5+1*0.9)=1 \\ \text{output}=f(1*0.5+0*0.9)=1 \\ \text{output}=f(1*0.5+1*0.9)=1 \\ } \]

Alternatively, to see the effect \(\text{weight 1}\) has on the outputs of the model, use the interactive visualisation below.

Now that the weights are tuned correctly, this neuron 'network' can be programmed as a simple Python script. To make sure that all possible inputs of the OR gate are covered, they are stored in an array. The script loops over this array and produces an output for every input combination.


single_neuron_OR_gate.py
inputs = [
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
]
weight1 = 0.5
weight2 = 0.9

for pair in inputs:
    # weighted sum of both inputs
    weighted_sum = (pair[0] * weight1) + (pair[1] * weight2)
    # step activation: output 1 when the sum reaches the 0.5 threshold
    if weighted_sum >= 0.5:
        activation = 1
    else:
        activation = 0

    print(pair, activation)
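
Running the script should print one line per input combination, matching the logic table of the OR gate:

[0, 0] 0
[0, 1] 1
[1, 0] 1
[1, 1] 1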

In the next section, this network will be deployed to the TinySpark development board, in order to experience how the neuron 'network' can be expanded beyond the screen.