1 INTRODUCTION
RNNs are known to be a generalization of feedforward neural networks [13, 15]. Unlike a feedforward network, an RNN uses its internal memory to process arbitrary sequences of inputs. The output of an ordinary feedforward network is a single class or predicted value, whereas the output of an RNN can itself be a sequence of values, depending on the application (e.g. classification, regression or forecasting). RNNs are therefore used for mapping sequences in many kinds of applications, such as speech recognition, named entity recognition and machine translation.
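To make the idea of internal memory concrete, here is a minimal sketch of a vanilla RNN forward pass in Python/NumPy. The weight names, layer sizes and the tanh nonlinearity are illustrative assumptions, not a specific published model; the point is simply that a hidden state carries history forward and an output is emitted at every step.

```python
# Minimal sketch of a vanilla RNN cell (sizes and names are illustrative).
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, W_hy, b_h, b_y):
    """Run a vanilla RNN over a sequence and return one output per time step."""
    h = np.zeros(W_hh.shape[0])                      # internal memory (hidden state)
    outputs = []
    for x in inputs:                                 # arbitrary-length input sequence
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)       # update memory from input + history
        outputs.append(W_hy @ h + b_y)               # emit an output at every step
    return outputs

# Toy usage: a 5-step sequence of 3-dimensional inputs, 4 hidden units, 2 outputs per step.
rng = np.random.default_rng(0)
inputs = [rng.normal(size=3) for _ in range(5)]
W_xh, W_hh, W_hy = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))
outputs = rnn_forward(inputs, W_xh, W_hh, W_hy, np.zeros(4), np.zeros(2))
print(len(outputs))  # 5: one output vector per input step
```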
The Restricted Boltzmann Machine (RBM) is a widely used density model, but it is not well suited to sequence data. The Temporal Restricted Boltzmann Machine (TRBM) was introduced to extend the RBM to sequences, and it was able to model highly complex sequences; the problem was that its parameter updates required the use of crude approximations, which was unsatisfying. This issue was solved by modifying the TRBM into the RNN-RBM, in which the parameter updates can be computed nearly exactly.
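For context, the sketch below shows a plain (non-temporal) RBM trained with one step of contrastive divergence; it illustrates why the basic RBM is a density model over a single fixed-size vector rather than over a sequence. The variable names and the CD-1 update are a common textbook formulation, not the exact procedure of the TRBM or RNN-RBM papers.

```python
# Minimal sketch of an RBM with one contrastive-divergence (CD-1) update.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.01):
    """One approximate gradient step for an RBM with visible v and hidden h."""
    ph0 = sigmoid(c + v0 @ W)                            # p(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)     # sample hidden units
    pv1 = sigmoid(b + h0 @ W.T)                          # reconstruct visibles
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(c + v1 @ W)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))    # approximate log-likelihood gradient
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)
    return W, b, c

# Toy usage: 6 binary visible units, 3 hidden units.
v = rng.integers(0, 2, size=6).astype(float)
W, b, c = rng.normal(size=(6, 3)) * 0.1, np.zeros(6), np.zeros(3)
W, b, c = cd1_update(v, W, b, c)
```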
As Hessian-Free (HF) optimization solved the seemingly impossible problem of training deep autoencoders, it was assumed that it could also solve the difficult problem of training RNNs. After successfully training the recurrent neural networks, we applied this approach to character-level language modelling, that is, predicting the next character in natural text. RNNs perform very well against almost every other homogeneous language model and are the only approach that can exploit long character contexts; for example, they were able to balance parentheses and quotes over tens of characters.
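As a hedged illustration of this next-character prediction setup, the snippet below turns an RNN's output scores into a probability distribution over a character vocabulary. The vocabulary and the random logits are stand-ins for a trained model, not values from any experiment described here.

```python
# Sketch: converting per-character scores into p(next char | history).
import numpy as np

vocab = list("abcdefghijklmnopqrstuvwxyz ()\"'")   # illustrative character set
char_to_ix = {ch: i for i, ch in enumerate(vocab)}

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def next_char_distribution(logits):
    """Map raw RNN output logits (one score per character) to a distribution over vocab."""
    return dict(zip(vocab, softmax(logits)))

# Toy usage with random logits standing in for a trained RNN's output at one time step.
rng = np.random.default_rng(2)
probs = next_char_distribution(rng.normal(size=len(vocab)))
print(max(probs, key=probs.get))   # most likely next character under this toy model
```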
When it comes to training RNNs, GPUs are an obvious choice over ordinary CPUs. This was validated by the research team at Indigo, which uses these nets on text-processing tasks such as sentiment analysis; with GPUs the nets can be trained up to 250 times faster.
Finally, the common belief that RNNs are very difficult to train is incorrect.
1.1 Markov Chains vs Recurrent Neural Networks
An RNN generates each character conditioned on the entire history of characters generated so far, whereas a Markov chain can only condition on a fixed window. Perhaps a particular RNN will learn to truncate its conditioning context and behave like a Markov chain, or perhaps not, but RNNs in general can certainly generate formal languages that Markov chains cannot. So RNNs are more powerful than Markov chains, or one could even say that the two are not directly comparable.
For example, an RNN was capable of generating well-formed XML, producing matching opening and closing tags with an unbounded amount of text between them; a Markov chain cannot do this.
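To make the fixed-window limitation concrete, here is a minimal sketch of a fixed-order character Markov chain: generation can only ever look at the last few characters, so it cannot track a tag opened arbitrarily far back. The corpus, the order and the helper names are illustrative assumptions.

```python
# Sketch of a fixed-order character Markov chain, trained by counting.
from collections import defaultdict, Counter
import random

def build_markov(text, order=3):
    counts = defaultdict(Counter)
    for i in range(len(text) - order):
        counts[text[i:i + order]][text[i + order]] += 1   # context -> next-char counts
    return counts

def generate(counts, seed, length=50, order=3):
    out = seed
    for _ in range(length):
        context = out[-order:]                 # the only history the model can ever see
        nxt = counts.get(context)
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]
    return out

# Toy usage on a tiny repetitive corpus.
text = "the quick brown fox jumps over the lazy dog " * 20
model = build_markov(text, order=3)
print(generate(model, "the", order=3))
```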
The advantages of using an RNN over Markov chains and hidden Markov models are the higher representational power of neural networks and their ability to behave intelligently by taking syntactic and semantic features into account. By comparison, n-grams have a number of parameters that explodes with the vocabulary size and with n, and they rely on simple smoothing techniques such as Kneser-Ney (KN) or Good-Turing.
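A rough back-of-the-envelope comparison of this parameter growth, using purely illustrative numbers: an unsmoothed n-gram table has on the order of |V|^n entries, while an RNN's parameter count does not depend on the context length at all.

```python
# Illustrative scaling comparison (numbers are assumptions, not measured models).
V = 10_000                     # vocabulary size
for n in (2, 3, 4, 5):
    print(n, V ** n)           # an unsmoothed n-gram table has on the order of |V|^n entries

H = 200                        # hidden size of a simple RNN
print("rnn", H * H + 2 * V * H)  # roughly H*H + 2*V*H parameters, independent of n
```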
