Soft Computing

 

Introduction to Soft Computing

Neural-Networks

 

Neural networks have the ability to learn by example, which makes them very flexible and powerful.

For neural networks, there is no need to devise an algorithm to perform a specific task; that is, there is no need to understand the internal mechanisms of that task. These networks are also well suited for real-time systems because of their fast response and computation times, which result from their parallel architecture.

 

Artificial Neural Network: Definition

An artificial neural network (ANN) may be defined as an information processing model that is inspired by the way biological nervous systems, such as the brain, process information. This model tries to replicate only the most basic functions of the brain.

An ANN is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.

 

Advantages of Neural Networks

 

1. Adaptive learning: An ANN is endowed with the ability to learn how to do tasks based on the data given for training or on initial experience.

 

2. Self-organization: An ANN can create its own organization or representation of the information it receives during learning time.

 

3. Real-time operation: ANN computations may be carried out in parallel. Special hardware devices are being designed and manufactured to take advantage of this capability of ANNs.

 

4. Fault tolerance via redundant information coding: Partial destruction of a neural network leads to the corresponding degradation of performance. However, some capabilities may be retained even after major network damage.

 

Application Scope of Neural Networks

 

1. Air traffic control could be automated, with the location, altitude, direction and speed of each radar blip taken as input to the network. The output would be the air traffic controller's instruction in response to each blip.

2. Animal behavior, predator/prey relationships and population cycles may be suitable for analysis by neural networks.

3. Appraisal and valuation of property, buildings, automobiles, machinery, etc. should be an easy task for a neural network.

4. Betting on horse races, stock markets, sporting events, etc. could be based on neural network predictions.

5. Criminal sentencing could be predicted using a large sample of crime details as input and the resulting sentences as output.

6. Complex physical and chemical processes that may involve the interaction of numerous (possibly unknown) mathematical formulas could be modeled heuristically using a neural network.

7. Data mining, cleaning and validation could be achieved by determining which records suspiciously diverge from the pattern of their peers.

8. Direct mail advertisers could use neural network analysis of their databases to decide which customers should be targeted, and avoid wasting money on unlikely targets.

9. Echo patterns from sonar, radar, seismic and magnetic instruments could be used to predict their targets.


Fuzzy Logic

Fuzzy logic is a problem-solving control system methodology that lends itself to implementation in systems ranging from simple, small, embedded microcontrollers to large, networked, multichannel PC or workstation based data acquisition and control systems. It can be implemented in hardware, software or a combination of both.

FL provides a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. FL's approach to control problems mimics how a person would make decisions, only much faster.
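As a minimal, illustrative sketch of this idea (the temperature ranges, the two rules and the fan-speed values are assumptions, not part of any particular fuzzy logic toolkit), the Python code below fuzzifies a temperature reading with triangular membership functions, applies two simple rules and defuzzifies the result into a crisp fan-speed decision:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    # Fuzzify: degrees to which the temperature is "cool" or "hot" (assumed ranges).
    cool = tri(temp, 10, 18, 26)
    hot = tri(temp, 22, 32, 42)
    # Rules (assumed): IF cool THEN speed = 20; IF hot THEN speed = 90.
    # Defuzzify with a weighted average of the rule consequents.
    if cool + hot == 0:
        return 50.0  # neutral speed when no rule fires
    return (cool * 20 + hot * 90) / (cool + hot)

print(fan_speed(24))  # a reading that is partly cool and partly hot
```

Even with the vague, overlapping notions of "cool" and "hot", the rules combine into a single definite output, which is the point made above.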

 

Genetic Algorithm

Genetic algorithms are adaptive computational procedures modeled on the mechanics of natural genetic systems. They express their ability by efficiently exploiting historical information to speculate on new offspring with expected improved performance.
GAs are executed iteratively on a set of coded solutions, called a population, with three basic operators: selection/reproduction, crossover and mutation.
They use only the payoff (objective function) information and probabilistic transition rules for moving to the next iteration. They differ from most conventional optimization and search procedures in the following four ways (a minimal sketch follows the list below):

1. GAs work with a coding of the parameter set, not with the parameters themselves;

2. GAs work simultaneously with multiple points, not a single point;

3. GAs search via sampling (a blind search) using only the payoff information;

4. GAs search using stochastic operators, not deterministic rules.
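A minimal sketch of these four points (the 5-bit coding, the payoff function f(x) = x(31 - x) and the GA parameters are assumptions chosen purely for illustration): the population of coded strings is evolved using tournament selection, one-point crossover and bit-flip mutation, and only payoff values plus probabilistic rules drive the search.

```python
import random

def fitness(bits):
    # Payoff only: decode the coded string to an integer x and score f(x) = x * (31 - x).
    x = int("".join(map(str, bits)), 2)
    return x * (31 - x)

def evolve(pop_size=20, length=5, generations=30):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection/reproduction: tournament of two, keep the fitter string.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randint(1, length - 1)   # crossover point
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if random.random() < 0.05:            # bit-flip mutation
                i = random.randrange(length)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Note that the loop never inspects the structure of f(x); it only compares payoff values, which is exactly the blind, sampling-based search described above.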

 

Hybrid Systems

 

Hybrid systems can be classified into three different systems:

Ø Neuro fuzzy hybrid system

Ø Neuro genetic hybrid system

Ø Fuzzy genetic hybrid system

 

Neuro Fuzzy Hybrid Systems

A neuro fuzzy hybrid system is a fuzzy system that uses a learning algorithm derived from or inspired by neural network theory to determine its parameters (fuzzy sets and fuzzy rules) by processing data samples.

 

1.    It can handle any kind of information (numeric, linguistic, logical, etc.).

2.   It can manage imprecise, partial, vague or imperfect information.

3.   It can resolve conflicts by collaboration and aggregation.

4.   It has self-learning, self-organizing and self-tuning capabilities.

5.   It doesn't need prior knowledge of relationships of data.

6.   It can mimic the human decision-making process.

7.   It makes computation fast by using fuzzy number operations.

 

Neuro Genetic Hybrid Systems

Genetic algorithms (GAs) have been increasingly applied in ANN design in several ways: topology optimization, genetic training algorithms and control parameter optimization.

In topology optimization, GA is used to select a topology for the ANN which in turn is trained using some training scheme, most commonly back propagation.

In genetic training algorithms, the learning of an ANN is formulated as a weight optimization problem, usually using the inverse mean squared error as a fitness measure.

Many of the control parameters such as learning rate, momentum rate, tolerance level, etc., can also be optimized using GAs.
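As a rough sketch of the genetic training idea above (the single sigmoid neuron, the toy OR-function data set and the chromosome layout [w1, w2, bias] are assumptions), each chromosome encodes a candidate weight vector and its fitness is the inverse of the mean squared error on the training data; this fitness would then drive selection, crossover and mutation exactly as in the GA sketch shown earlier.

```python
import math

# Assumed toy training set (the OR function) for a 2-input, 1-output network.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def forward(weights, x):
    # Assumed chromosome layout: [w1, w2, bias] for a single sigmoid neuron.
    w1, w2, b = weights
    net = w1 * x[0] + w2 * x[1] + b
    return 1.0 / (1.0 + math.exp(-net))

def fitness(weights):
    # Inverse mean squared error: lower error means higher fitness.
    mse = sum((forward(weights, x) - t) ** 2 for x, t in DATA) / len(DATA)
    return 1.0 / (mse + 1e-9)

print(fitness([4.0, 4.0, -2.0]))  # a well-chosen chromosome scores high
print(fitness([0.0, 0.0, 0.0]))   # an untrained chromosome scores low
```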

 

 

 

Fuzzy Genetic Hybrid Systems

The optimization abilities of GAs are used to develop the best set of rules to be used by a fuzzy inference engine, and to optimize the choice of membership functions. A particular use of GAs is in fuzzy classification systems, where an object is classified on the basis of the linguistic values of the object attributes.

 

Soft Computing

 

The two major problem-solving technologies are:

1.   Hard computing

2.  Soft computing.

Hard computing deals with precise models where accurate solutions are achieved quickly.

Soft computing deals with approximate models and gives solutions to complex problems. The two problem-solving technologies are shown in the figure below:

Soft computing uses a combination of GAs, neural networks and FL. An important point about the constituents of soft computing is that they are complementary, not competitive; each offers its own advantages and techniques to the partnership, allowing solutions to otherwise unsolvable problems.

 

Artificial Neural Network

 

Neural networks are information processing systems that are constructed and implemented to model the human brain.


 

Objective

The main objective of the neural network is to develop a computational device for modeling the brain to perform various computational tasks at a faster rate than the traditional systems.

 

Tasks

Artificial neural networks perform various tasks such as

Ø pattern matching and classification

Ø optimization function

Ø approximation

Ø vector quantization

Ø data clustering.

These tasks are very difficult for traditional computers. Therefore, high-speed digital computers are used for the implementation of artificial neural networks.

 

Artificial Neural Network

 

An artificial neural network (ANN) is an efficient information processing system which resembles a biological neural network in its characteristics.

ANNs possess a large number of highly interconnected processing elements called nodes, units or neurons.

Each neuron is connected to the others by connection links.

Each connection link is associated with weights which contain information about the input signal.

This information is used by the neuron net to solve a particular problem.

ANNs' collective behavior is characterized by their ability to learn. They have the capability to model networks of biological neurons as found in the brain. Thus, the ANN processing elements are called neurons or artificial neurons.

Basic operation of a neural net

Each neuron has an internal state of its own. This internal state is called the activation or activity level of the neuron, which is a function of the inputs the neuron receives. The activation signal of a neuron is transmitted to other neurons.

A neuron can send only one signal at a time, which can be transmitted to several other neurons.

To depict the basic operation of a neural net, consider a set of neurons, say X1 and X2, transmitting signals to another neuron, Y.

Here X1 and X2 are input neurons, which transmit signals, and Y is the output neuron, which receives signals.

Input neurons X1 and X2 are connected to the output neuron Y over weighted interconnection links (W1 and W2), as shown in the figure.



 

 For the above simple neuron net architecture, the net input has to be calculated in the following way:

yin = x1w1 + x2w2

x1 and x2 → activations of the input neurons X1 and X2, i.e., the output of the input signals.

The output y of the output neuron Y can be obtained by applying activations over the net input, i.e., the function of the net input:

y= f(yin)

Output= Function (net input calculated)

The function to be applied over the net input is called the activation function.
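A minimal sketch of this calculation (the input activations, the weight values and the choice of a binary sigmoid as the activation function are assumptions made for illustration):

```python
import math

def net_input(inputs, weights):
    # yin = x1*w1 + x2*w2 (+ ... for further inputs)
    return sum(x * w for x, w in zip(inputs, weights))

def activation(y_in):
    # A binary sigmoid is used here as the activation function f.
    return 1.0 / (1.0 + math.exp(-y_in))

x = [0.7, 0.4]        # activations of the input neurons X1 and X2 (assumed values)
w = [0.5, -0.3]       # weights W1 and W2 on the interconnection links (assumed values)
y_in = net_input(x, w)
y = activation(y_in)  # y = f(yin)
print(y_in, y)
```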

 

 

Biological Neural Network

 

 

A schematic diagram of a biological neuron is shown in Figure below:

The biological neuron depicted in the figure consists of three main parts:

1.   Soma or cell body- where the cell nucleus is located.

2.   Dendrites- where the nerve is connected to the cell body.

3.   Axon- which carries the impulses of the neuron.


Dendrites are tree-like networks made of nerve fibers connected to the cell body.

An axon is a single, long connection extending from the cell body and carrying signals from the neuron. The end of the axon splits into fine strands. It is found that each strand terminates in a small bulb-like organ called a synapse. It is through the synapse that the neuron introduces its signals to other nearby neurons. The receiving ends of these synapses on the nearby neurons can be found both on the dendrites and on the cell body. There are approximately 10^4 synapses per neuron in the human brain.

Electric impulses are passed between the synapse and the dendrites. This type of signal transmission involves a chemical process in which specific transmitter substances are released from the sending side of the junction. This results in an increase or decrease in the electric potential inside the body of the receiving cell.

If the electric potential reaches a threshold, the receiving cell fires, and a pulse or action potential of fixed strength and duration is sent out through the axon to the synaptic junctions of other cells. After firing, a cell has to wait for a period of time called the refractory period before it can fire again.

The synapses are said to be inhibitory if the impulses they pass hinder the firing of the receiving cell, or excitatory if the impulses they pass cause the firing of the receiving cell.

The Figure below shows a mathematical representation of the chemical processing taking place in an artificial neuron.

 

Characteristics of an ANN:

1. It is a neurally implemented mathematical model.

2. There exists a large number of highly interconnected processing elements called neurons in an ANN.

3. The interconnections with their weighted linkages hold the informative knowledge.

4. The input signals arrive at the processing elements through connections and connecting weights.

5. The processing elements of the ANN have the ability to learn, recall and generalize from the given data by suitable assignment or adjustment of weights.

6. The computational power can be demonstrated only by the collective behavior of neurons, and it should be noted that no single neuron carries specific information.



Types of neuron connection architectures

They are:

 

1.   Single-layer feed-forward network

2.   Multilayer feed-forward network

 

1.   Single-layer feed-forward network

A layer implies a stage, going stage by stage, i.e., the input stage and the output stage are linked with each other. These linked interconnections lead to the formation of various network architectures. When a layer of the processing nodes is formed, the inputs can be connected to these nodes with various weights, resulting in a series of outputs, one per node. Thus, a single-layer feed-forward network is formed.

2.   Multilayer feed-forward network

 

A multilayer feed-forward network is formed by the interconnection of several layers.

The input layer is the one that receives the input, and this layer has no function except buffering the input signal.

The output layer generates the output of the network.

Any layer that is formed between the input and output layers is called a hidden layer. This hidden layer is internal to the network and has no direct contact with the external environment. There may be zero to several hidden layers in an ANN.

The greater the number of hidden layers, the greater the complexity of the network.
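The sketch below illustrates one forward pass through such a multilayer feed-forward network (the 3-4-2 layer sizes, the random weights and the sigmoid activation are assumptions): the input layer merely buffers the signal, and each subsequent layer applies its weights, biases and activation function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """layers is a list of (weight_matrix, bias_vector) pairs, one per non-input layer."""
    a = x  # the input layer only buffers the input signal
    for W, b in layers:
        a = sigmoid(W @ a + b)  # hidden/output layers transform the signal
    return a

rng = np.random.default_rng(0)
# An assumed 3-4-2 network: 3 inputs, one hidden layer of 4 neurons, 2 outputs.
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
print(forward(np.array([0.2, 0.5, 0.9]), layers))
```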



Learning and Memory

 

The main property of an ANN is its capability to learn. Learning, or training, is a process by means of which a neural network adapts itself to a stimulus by making proper parameter adjustments, resulting in the production of the desired response.

There are two kinds of learning in ANNs:

 

1.    Parameter learning: It updates the connecting weights in a neural net.

2.   Structure learning: It focuses on changes in the network structure.

The above two types of learning can be performed simultaneously or separately.

 

Apart from these two categories of learning, the learning in an ANN can be generally classified into three categories as:

Ø Supervised learning

Ø Unsupervised learning

Ø Reinforcement learning

1)  Supervised Learning

In ANNs following supervised learning, each input vector requires a corresponding target vector, which represents the desired output. The input vector along with the target vector is called a training pair. The network here is informed precisely about what should be emitted as output.


2)  Unsupervised Learning

In ANNs following unsupervised learning, input vectors of similar type are grouped without the use of training data that specifies how a member of each group looks or to which group a member belongs. In the training process, the network receives the input patterns and organizes these patterns to form clusters.

When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs. If for an input, a pattern class cannot be found then a new class is generated.
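A rough sketch of this behavior (the Euclidean distance measure and the threshold value are assumptions standing in for whatever similarity measure a particular network uses): each input pattern is assigned to the nearest existing cluster, and when no cluster is close enough a new class is generated.

```python
import math

def assign(pattern, clusters, threshold=1.0):
    """Return the class index for `pattern`, creating a new class if none is close enough."""
    best, best_d = None, float("inf")
    for i, centre in enumerate(clusters):
        d = math.dist(pattern, centre)
        if d < best_d:
            best, best_d = i, d
    if best is None or best_d > threshold:
        clusters.append(list(pattern))  # no matching class found: generate a new class
        return len(clusters) - 1
    return best

clusters = []
for p in [(0.1, 0.2), (0.2, 0.1), (5.0, 5.1), (0.15, 0.18)]:
    print(p, "-> class", assign(p, clusters))
```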


3)  Reinforcement Learning

 

This learning process is similar to supervised learning. In the case of supervised learning, the correct target output values are known for each input pattern. But, in some cases, less information might be available.

For example, the network might be told that its actual output is only "50% correct" or so. Thus, here only critic information is available, not the exact information. Learning based on this critic information is called reinforcement learning, and the feedback sent is called the reinforcement signal.

Reinforcement learning is a form of supervised learning because the network receives some feedback from its environment. It is also called learning with a critic, as opposed to learning with a teacher, which indicates supervised learning.



Activation Functions

The activation function is applied over the net input to calculate the output of an ANN.

The information processing of a processing element can be viewed as consisting of two major parts: input and output.

An integration function is associated with the input of a processing element. This function serves to combine activation, information or evidence from an external source or other processing elements into a net input to the processing element.

 

There are several activation functions. They are

 

1.   Identity function: It is a linear function and can be defined as

f(x) = x for all x

The output here remains the same as the input. The input layer uses the identity activation function.


2. Sigmoidal functions: The sigmoidal functions are widely used in back-propagation nets because of the relationship between the value of the function at a point and the value of its derivative at that point, which reduces the computational burden during training.

 

Sigmoidal functions are of two types: -

 

(i) Binary sigmoid function: It is also termed the logistic sigmoid function or unipolar sigmoid function. It can be defined as
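f(x) = 1 / (1 + e^(-λx)), where λ is the steepness parameter; the output lies in the range (0, 1).

As a small illustration (the sample inputs and the default steepness λ = 1 are assumptions for this sketch), the code below evaluates the identity function and the binary sigmoid defined above:

```python
import math

def identity(x):
    # Identity (linear) activation: the output equals the input.
    return x

def binary_sigmoid(x, lam=1.0):
    # Binary (logistic/unipolar) sigmoid with steepness parameter lambda; range (0, 1).
    return 1.0 / (1.0 + math.exp(-lam * x))

for x in (-2.0, 0.0, 2.0):
    print(x, identity(x), round(binary_sigmoid(x), 4))
```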





