Differential 2R Cell Implementation of Artificial Neural Networks

An artificial neural network (ANN) is a statistical learning model, used in machine learning, that takes its inspiration from biological neural networks. These networks are represented as systems of interconnected "neurons" that send messages to one another. The connections that make up a network can be deliberately adjusted based on inputs and outputs, making them well suited to specific types of learning. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity," they described the concept of a neuron as a single component in a network that receives an input, processes it, and generates an output. A neural network is essentially a group of "neurons" connected by "synapses." The collection is structured into three main parts: the input layer, the hidden layers, and the output layer.


There can be many hidden layers, with terms such as deep learning indicating multiple such layers within a system. Hidden layers are an important part of a neural network because they allow it to make sense of things that are complicated, contextual, or non-obvious, such as image recognition. Because these layers are not visible as a network output, they are known as hidden layers. The synapses are the components that take an input and multiply it by a given weight, the strength that input has in determining the output. Each neuron then adds the outputs from all of its synapses and applies an activation function. To train a neural network, we need to calibrate all of the weights.
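The weighted sum and activation described above can be sketched in a few lines of Python. This is an illustrative toy, not anything from the text: the layer sizes, the weight values, and the choice of a sigmoid activation are all assumptions made for the example.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """One neuron: multiply each input by its synaptic weight, sum the
    results, then apply an activation function (sigmoid chosen here)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A tiny network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
# The weight values are arbitrary; training would calibrate them.
hidden_weights = [[0.5, -0.6], [0.1, 0.8]]  # one weight list per hidden neuron
output_weights = [1.0, -1.0]

def forward(x):
    hidden = [neuron_output(x, w) for w in hidden_weights]
    return neuron_output(hidden, output_weights)

print(forward([1.0, 0.0]))  # some value in (0, 1)
```

With a sigmoid activation every neuron's output lies strictly between 0 and 1, which is why the final output is bounded regardless of the weights.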



When considering the general structure of an artificial neural network, two components remain the same: by analogy with biological systems, they are referred to as neurons and synapses, and they correspond to the vertices and edges of a graph, respectively. A hardware implementation of an artificial neural network faces several challenges:



Wiring: the number of synapses scales quadratically with the number of neurons.

Synaptic weights: weights have to be defined with high precision in order to ensure proper convergence of the learning process, and they have to be updateable.

Neuron state: the summation of the weighted inputs must be performed.

Activation function: a highly nonlinear function must be calculated.


In the recall stage, new data sets are introduced and the network must evaluate them. While the recall stage is always performed on the physical network itself, the learning stage can be performed in advance. There are three training approaches:

Off-chip learning performs the learning stage entirely in software, on a simulated network. This allows faster and more accurately calculated algorithms than could be achieved using the hardware network.

Chip-in-the-loop learning uses both the hardware network and external software computation. The software performs the learning algorithm, while the hardware network performs the computations.

On-chip learning uses the hardware alone to perform the learning. While this method is slower and less precise with regard to the weight calculation, it requires no data manipulation outside the network during the learning stage. It also makes the design more complicated and less flexible, since the algorithm performing the learning has to be implemented in hardware.


One of the most promising candidates is RRAM, which stands for resistive random-access memory. There are many different ways to build resistive storage devices. Usually the cell has a compact structure, with two electrodes on top and bottom and a metal oxide in the middle. For example, a cell can be made of two layers of titanium dioxide (TiO2): the bottom layer is electrically insulating TiO2, while the upper layer is made conductive by its positively charged oxygen vacancies. At the device level, the resistance of an RRAM cell can be viewed as a function of both the magnitude and polarity of the voltage applied to it and the time for which that voltage is applied. Applying a positive voltage to the cell pushes the positively charged vacancies toward the lower layer; the conductive layer becomes thicker, so the resistance decreases. This is called a SET operation. Conversely, in a RESET operation, a negative voltage attracts the positively charged vacancies back, the insulating layer becomes thicker, and the resistance increases.
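The dependence of resistance on voltage polarity, magnitude, and pulse duration can be illustrated with a deliberately simplified drift model. Everything in this sketch is an illustrative assumption rather than a physical device model: the resistance bounds, the linear drift rate, and the pulse parameters are all arbitrary.

```python
R_ON, R_OFF = 1e3, 1e5  # low/high resistance bounds in ohms (illustrative values)

def apply_pulse(resistance, voltage, duration, rate=1e11):
    """Toy linear-drift model: a positive voltage (SET) lowers the resistance,
    a negative voltage (RESET) raises it; the change grows with both pulse
    amplitude and duration, and is clipped to the [R_ON, R_OFF] range.
    `rate` (ohms per volt-second) is an arbitrary illustrative constant."""
    resistance -= rate * voltage * duration
    return min(max(resistance, R_ON), R_OFF)

r = R_OFF                        # start in the high-resistance (RESET) state
r = apply_pulse(r, +1.5, 1e-6)   # SET pulse drives the cell to low resistance
r_after_set = r
r = apply_pulse(r, -1.5, 1e-6)   # RESET pulse drives it back to high resistance
print(r_after_set, r)            # prints 1000.0 100000.0
```

The clipping to [R_ON, R_OFF] reflects the fact that a real cell saturates at its low- and high-resistance states rather than drifting without bound.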


Conventionally, an RRAM cell is composed of one transistor and one resistor.



Driving enough current through the cell can be an issue: for this to work, the transistor must be large, and it then takes up a large majority of the cell area. An alternative is to move to a cross-point structure, in which the device is placed between a word line (WL) and a bit line (BL). Without a dedicated access transistor, a number of issues must be addressed. The first is leakage current: there is no way to isolate the unselected cells, which leads to low write energy efficiency and a low read margin. The second is write disturbance: we must ensure that the voltage drop on unselected cells is not large enough to switch them, and therefore the unselected bit lines need to be biased.

There are four biasing schemes, where H means applying half of the write voltage and F means floating. The equivalent circuit for calculating the leakage current is shown in the figure, together with the corresponding equation. Taking n = m = 16, we find that the FWHB scheme has the lowest leakage current while maintaining stable operation.


To solve the leakage issue when reading cells in a cross-point array, researchers proposed the differential 2R cross-point structure. In this arrangement, two resistive devices with opposite resistance states together represent one bit of data. To store a 1, the first resistance (Ra) is written to the low-resistance state and the second resistance (Rb) is written to the high-resistance state; to store a 0, the opposite is done. Rather than sensing the current flowing through the cell, the state of a differential 2R cell is determined simply by the voltage divider formed by the two resistances. In a read operation, applying Vread across Ra and Rb brings the BL voltage to Vread*Rb/(Ra+Rb). The BL is then connected to a simple StrongARM sense amplifier with a reference voltage of Vread/2. The read operation is therefore immune to the leakage current flowing in from neighboring BLs, which greatly increases the read margin without limiting the block size. Moreover, the differential 2R cell contains both RH and RL, which solves the data-pattern issue and suppresses leakage power consumption during read operations. Thanks to the stackability of RRAM, the differential 2R cell can be constructed between different metal layers without much area penalty. Since Ra and Rb have opposite electrodes connected to WLa and WLb, we can SET one device and RESET the other at the same time by applying the same voltage to WLa and WLb.
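The read operation above reduces to a simple resistive divider, which is easy to check numerically. The divider formula Vread*Rb/(Ra+Rb) and the Vread/2 reference come from the text; the specific resistance values and Vread below are illustrative assumptions.

```python
R_LOW, R_HIGH = 1e3, 1e5  # illustrative low/high resistance states (ohms)
V_READ = 1.0              # illustrative read voltage (volts)

def read_bit(ra, rb, v_read=V_READ):
    """Differential 2R read: the bit line settles at the divider voltage
    v_read * rb / (ra + rb); a sense amplifier compares it to v_read / 2."""
    v_bl = v_read * rb / (ra + rb)
    return 1 if v_bl > v_read / 2 else 0

# Storing a 1: Ra written low, Rb written high -> BL voltage well above Vread/2.
print(read_bit(R_LOW, R_HIGH))   # prints 1
# Storing a 0: the opposite write -> BL voltage well below Vread/2.
print(read_bit(R_HIGH, R_LOW))   # prints 0
```

Because the decision depends only on the ratio of the two resistances in the selected cell, current leaking in from neighboring bit lines does not flip the comparison, which is the leakage immunity described above.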


Compared to PRAM, the 2R cell configuration operates on a faster timescale (the switching time can be less than 10 ns); compared to MRAM, it has a simpler, smaller cell structure; and compared to flash memory, a lower voltage is sufficient. It also offers relatively small access latency and high density. Implementing ANNs using CMOS processes, in both the digital and the analog case, suffers from significant drawbacks, especially when one attempts to scale the networks to large numbers of neurons and synapses. As was already explained, the number of synapses required soon makes the circuit wiring impractical, especially for complete graphs (of which the Hopfield network is an example). The full power of hardware ANNs has not been seen yet, but with the coming release of commercial chips implementing arbitrary neural networks, more efficient algorithms will no doubt be realized in those domains where neural networks are known to dramatically improve performance. The 2R cell configuration should be considered a promising candidate for designing neural networks. Weight updating happens automatically in the network, with only one transistor and one memristor required at every synapse; this is one of the most compact possible implementations for ANNs.


