Neural Networks


Neural Network, highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain.
Neural networks are a form of multiprocessor computer system, with
· Simple processing elements
· A high degree of interconnection
· Simple scalar messages
· Adaptive interaction between elements
Where can neural network systems help?
· Where we can't formulate an algorithmic solution.
· Where we can get lots of examples of the behavior we require.
· Where we need to pick out the structure from existing data.

Neural networks address problems that are often difficult for traditional computers to solve, such as speech and pattern recognition. They also provide some insight into the way the human brain works. One of the most significant strengths of neural networks is their ability to learn from a limited set of examples. Neural networks have been applied to many problems since they were first introduced, including pattern recognition, handwritten character recognition, speech recognition, financial and economic modeling, and next-generation computing models.

HOW A NEURAL NETWORK WORKS

Neural networks fall into two categories: artificial neural networks and biological neural networks. Artificial neural networks are modeled on the structure and functioning of biological neural networks. The most familiar biological neural network is the human brain. The human brain is composed of approximately 100 billion nerve cells called neurons that are massively interconnected. Typical neurons in the human brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections. The extensive number of neurons and their high degree of interconnectedness are part of the reason that the brains of living creatures are capable of making a vast number of calculations in a short amount of time.

Artificial Neural Network Architecture

The architecture of a neural network is the specific arrangement and connections of the neurons that make up the network. One of the most common neural network architectures has three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, the final layer of the network. Sophisticated neural networks may have several hidden layers, feedback loops, and time-delay elements, which are designed to make the network as efficient as possible in discriminating relevant features or patterns from the input layer.
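As a rough sketch (not part of the original essay), the three-layer arrangement described above can be written out in a few lines of Python. The layer sizes, the random weights, and the sigmoid squashing function are illustrative assumptions, not details prescribed by the text.

    # Minimal sketch of a three-layer network: input -> hidden -> output.
    # Layer sizes and the sigmoid nonlinearity are arbitrary example choices.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_input, n_hidden, n_output = 4, 5, 3          # illustrative sizes

    W_ih = np.random.randn(n_hidden, n_input)      # input  -> hidden weights
    W_ho = np.random.randn(n_output, n_hidden)     # hidden -> output weights

    def forward(x):
        """Pass one input vector through the input, hidden, and output layers."""
        hidden = sigmoid(W_ih @ x)                 # hidden layer extracts features
        return sigmoid(W_ho @ hidden)              # output layer gives the result

    print(forward(np.array([0.2, 0.9, 0.1, 0.5])))

Each call to forward mirrors the flow described above: the input layer passes its signals to the hidden layer, and the features extracted there are passed on to the output layer.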

NEURAL NETWORK LEARNING

Neuroscientists studying the structure and function of the brain believe that important information that needs to be remembered may cause the brain to constantly reinforce the pathways between the neurons that form the memory, while relatively unimportant information will not receive the same degree of reinforcement.


A. Connection Weights

To mimic the way in which biological neurons reinforce certain axon-dendrite pathways, the connections between artificial neurons in a neural network are given adjustable connection weights, or measures of importance. When signals are received and processed by a node, they are multiplied by a weight, added up, and then transformed by a nonlinear function. The effect of the nonlinear function is to cause the sum of the input signals to approach some value, usually +1 or 0. If the signals entering the node add up to a positive number, the node sends an output signal that approaches +1 out along all of its connections, while if the signals add up to a negative value, the node sends a signal that approaches 0. This is similar to a simplified model of how a biological neuron functions: the larger the input signal, the larger the output signal.
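The behaviour of a single node just described can be sketched as follows; the particular inputs, weights, and the sigmoid squashing function are made-up values used only for illustration.

    # Sketch of one artificial node: inputs are multiplied by their connection
    # weights, summed, and squashed by a nonlinear function into the range (0, 1).
    import math

    def node_output(inputs, weights):
        weighted_sum = sum(i * w for i, w in zip(inputs, weights))
        # Large positive sums give outputs approaching +1;
        # large negative sums give outputs approaching 0.
        return 1.0 / (1.0 + math.exp(-weighted_sum))

    print(node_output([0.5, 0.9, -0.3], [0.8, 0.2, 0.4]))    # positive sum -> above 0.5
    print(node_output([0.5, 0.9, -0.3], [-2.0, -1.5, 0.1]))  # negative sum -> near 0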

B. Training Sets

Computer scientists teach neural networks by presenting them with desired input-output training sets. The input-output training sets are related patterns of data. For instance, a sample training set might consist of ten different photographs for each of ten different faces. The photographs would then be digitally entered into the input layer of the network. The desired output would be for the network to signal a particular neuron in the output layer for each face. Beginning with equal or random connection weights between the neurons, the photographs are digitally entered into the input layer of the neural network and an output signal is computed and compared to the target output. Small adjustments are then made to the connection weights to reduce the difference between the actual output and the target output. Because the network will usually choose the incorrect output neuron the first few times an input is entered, the input-output set is presented again and further adjustments are made to the connection weights. After repeating the weight-adjustment process many times for all input-output patterns in the training set, the network learns to respond in the desired manner.
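One way the ten-face training set described above might be encoded is sketched below; the photograph size, the random placeholder pixel data, and the one-output-neuron-per-face target vectors are assumptions made for the example.

    # Hypothetical encoding of the face-recognition training set: each photograph
    # becomes an input vector, and each target is a vector with +1 at the output
    # neuron assigned to that face and 0 elsewhere. The pixel data is a placeholder.
    import numpy as np

    n_faces, photos_per_face, n_pixels = 10, 10, 64 * 64

    inputs, targets = [], []
    for face_id in range(n_faces):
        for _ in range(photos_per_face):
            photo = np.random.rand(n_pixels)       # stand-in for a digitized photograph
            target = np.zeros(n_faces)
            target[face_id] = 1.0                  # desired output neuron for this face
            inputs.append(photo)
            targets.append(target)

    inputs, targets = np.array(inputs), np.array(targets)
    print(inputs.shape, targets.shape)             # (100, 4096) (100, 10)

Training then consists of presenting each input, comparing the computed output with its target vector, and nudging the connection weights to shrink the difference.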

A neural network is said to have learned when it can correctly perform the tasks for which it has been trained. Neural networks are able to extract the important features and patterns of a class of training examples and generalize from these to correctly process new input data that they have not encountered before. For a neural network trained to recognize a series of photographs, generalization would be demonstrated if a new photograph presented to the network resulted in the correct output neuron being signaled.

A number of different neural network learning rules, or algorithms, exist and use various techniques to process information. Common arrangements use some sort of system to adjust the connection weights between the neurons automatically. The most widely used scheme for adjusting the connection weights is called error back-propagation, developed independently by American computer scientists Paul Werbos (in 1974), David Parker (in 1984/1985), and David Rumelhart, Ronald Williams, and others (in 1985). The back-propagation learning scheme compares a neural network's calculated output to a target output and calculates an error adjustment for each of the nodes in the network. The neural network adjusts the connection weights according to the error values assigned to each node, beginning with the connections between the last hidden layer and the output layer. After the network has made adjustments to this set of connections, it calculates error values for the preceding layer and makes adjustments. The back-propagation algorithm continues in this way, adjusting all of the connection weights between the hidden layers until it reaches the input layer. At this point it is ready to calculate another output.
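A compact sketch of the back-propagation scheme just described, for a network with a single hidden layer, is given below. The toy XOR-style data, the bias terms, the learning rate, and the layer sizes are all assumptions added for the example rather than details from the essay.

    # Illustrative back-propagation: the output error is computed first, error values
    # are then propagated back to the hidden layer, and the connection weights are
    # adjusted layer by layer, starting with the hidden-to-output connections.
    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
    Y = np.array([[0], [1], [1], [0]], dtype=float)              # toy targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input  -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output
    lr = 0.5                                             # learning rate (assumed)

    for epoch in range(5000):
        hidden = sigmoid(X @ W1 + b1)                    # forward pass
        output = sigmoid(hidden @ W2 + b2)

        out_delta = (output - Y) * output * (1 - output)         # output-layer error
        hid_delta = (out_delta @ W2.T) * hidden * (1 - hidden)   # propagated back

        W2 -= lr * hidden.T @ out_delta                  # adjust the last layer first
        b2 -= lr * out_delta.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ hid_delta                       # then the earlier layer
        b1 -= lr * hid_delta.sum(axis=0, keepdims=True)

    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # tends toward [0, 1, 1, 0]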

Where are Neural Networks applicable?
..... Or are they just a solution in search of a problem?
Neural networks cannot do anything that cannot be done using traditional computing techniques, BUT they can do some things that would otherwise be very difficult.
In particular, they can form a model from their training data (or possibly input data) alone.
This is particularly useful with sensory data, or with data from a complex (e.g. chemical, manufacturing, or commercial) process. There may be an algorithm, but it is not known, or has too many variables. It is easier to let the network learn from examples.
Neural networks are being used:
In investment analysis:
To attempt to predict the movement of stocks, currencies, etc., from previous data. There they are replacing earlier, simpler linear models.
In signature analysis:
As a mechanism for comparing signatures made (e.g. in a bank) with those stored. This is one of the first large-scale applications of neural networks in the USA, and is also one of the first to use a neural network chip.
In process control:
There are clearly applications to be made here: most processes cannot be specified as an exact computable algorithm. Newcastle University Chemical Engineering Department is working with industrial partners (such as Zeneca and BP) in this area.
In monitoring:
Networks have been used to monitor
· The state of aircraft engines. By monitoring vibration levels and sound, early warning of engine problems can be given.
· British Rail has also been testing a similar application monitoring diesel engines.
In marketing:
Networks have been used to improve marketing mailshots. One technique is to run a test mailshot and look at the pattern of returns from this. The idea is to find a predictive mapping from the data known about the clients to how they have responded. This mapping is then used to direct further mailshots.


A great deal of research is going on in neural networks worldwide.
This ranges from basic research into new and more efficient learning algorithms, to networks which can respond to temporally varying patterns (both ongoing at Stirling), to techniques for implementing neural networks directly in silicon. One chip is already commercially available, but it does not include adaptation. Edinburgh University has implemented a neural network chip and is working on the learning problem.
Production of a learning chip would allow the application of this technology to a whole range of problems where the price of a PC and software cannot be justified.
There is particular interest in sensory and sensing applications: nets that learn to interpret real-world sensors and learn about their environment.
New Application areas:
Pen PCs

PCs where one can write on a tablet, and the writing will be recognised and translated into (ASCII) text.

Speech and Vision recognition systems

Not new, but Neural Networks are increasingly becoming part of such systems. They are used as a system component, in conjunction with traditional computers.

White goods and toys

As Neural Network chips become available, the possibility of simple, cheap systems which have learned to recognise simple entities (e.g. walls looming, or simple commands like Go or Stop) may lead to their incorporation in toys and washing machines etc. Already the Japanese are using a related technology, fuzzy logic, in this way. There is considerable interest in the combination of fuzzy and neural technologies.
But the world has moved on. Neural Networks should be seen as part of a larger field sometimes called Soft Computing or Natural Computing. In the last few years, there has been a real movement of the discipline in three different directions:
Neural networks, statistics, generative models, Bayesian inference