Christian Hafner, Electromagnetics Group, ETH Zurich, Switzerland

E-mail: hafner@ifh.ee.ethz.ch

WWW home page: http://alphard.ethz.ch

The Demo version of GGP is FREEWARE, based on a reduced, pre-release version of GGP that has been compiled for solving relatively simple problems only. If you publish results obtained with the GGP Demo, please acknowledge this and mention the WWW home page, where everyone can get the Demo.

This is part of the pre-release manual of the GGP Demo Version 1.0. It does not explain the more sophisticated features and system parameters of GGP, even though they are not turned off. Features and parameters that are not explained should not be used or modified if you want to avoid unexpected results. In any case, use the GGP Demo Version at your own risk!

What is GGP? GGP is a numeric code for the representation, approximation, extrapolation, and 'recognition' of a given function. The actual implementation assumes that the given function is real and has only one real argument. GGP uses a combination of classical series expansions and an extension of the Genetic Programming approach.

What are the requirements for running GGP? The actual GGP version has been compiled with DEC Digital Visual Fortran and Microsoft Visual C++ for Intel platforms under Windows NT and should also run under Windows 95. Since GGP can be very time-consuming, a fast processor is helpful. GGP should run with a VGA adapter, but a higher-resolution SVGA with 256 colors or more is recommended. GGP resets the palette. If your graphics adapter does not allow the palette to be modified, the GGP graphic windows look less nice and the background color will be black rather than white. In this case, the default drawing colors might be inappropriate.

How does GGP approximate f(x)?
Assume f(x) is given at a finite number of points
x1<x2<...<xi<...<xI. GGP uses a series
approximation of the form

f(x)=Sum(Ak*fk(pk,x))+Error(x), k=1,2,...,K, where fk is the k-th
basis function with a non-linear parameter pk. The amplitudes Ak
of the basis functions are linear parameters.
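
Once the basis functions fk and their non-linear parameters pk are fixed, the amplitudes Ak follow from an ordinary linear least-squares fit over the sample points. The Python sketch below illustrates only this linear step; the function names and the use of NumPy are our own illustration, not GGP's Fortran/C++ implementation:

```python
import numpy as np

def fit_amplitudes(basis, params, x, f):
    """Solve min ||f - Sum(Ak * fk(pk, x))||^2 for the linear amplitudes Ak."""
    M = np.column_stack([fk(pk, x) for fk, pk in zip(basis, params)])
    A, *_ = np.linalg.lstsq(M, f, rcond=None)
    return A, f - M @ A  # amplitudes Ak and the residual Error(x)

# Example with a single basis function (K = 1): f1(p, x) = sin(p * x)
x = np.linspace(0.0, 1.0, 50)
f = 2.0 * np.sin(3.0 * x)
A, err = fit_amplitudes([lambda p, x: np.sin(p * x)], [3.0], x, f)
```

With the correct basis function and parameter, the fit recovers the amplitude exactly and the residual Error(x) vanishes.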

Note that formally identical series expansions are used in
Fourier series, power series, etc. There is a big difference
between these well-known series expansions and GGP's series
approximation: GGP's basis functions and non-linear parameters
are (almost) completely free. GGP tries to optimize not only the
linear parameters Ak, but also the non-linear parameters pk and
the basis functions fk. For solving these three optimization
tasks, GGP applies 1) an efficient and robust Generalized
Point-Matching Technique, 2) a formula-based nonlinear
optimization with a user-definable formula, and 3) a generalized
Genetic Programming algorithm with several new features for
improving efficiency and robustness.

What are the benefits of GGP compared with Fourier and similar series expansions? Usual series expansions allow any given function to be approximated accurately within a finite interval xmin...xmax, provided that the number of known values I and the number of linear parameters K are sufficiently high and provided that the given function and its derivatives do not have too many discontinuities within the interval xmin...xmax. Although this allows f(x) to be interpolated for x anywhere between xmin and xmax, it does not allow f(x) to be extrapolated for x<xmin or x>xmax. GGP often yields excellent extrapolations, even for x<<xmin and x>>xmax. This means that GGP can often find out what f(x) really is. Moreover, the number K of linear parameters required for reasonable approximations with usual series expansions can be very high. GGP typically works with very small K and even with K=1.
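
As a hypothetical illustration of this extrapolation benefit (not GGP's actual algorithm), the sketch below fits a single parametric basis function A*sin(p*x)/(p*x) to samples taken on x=0...1 and then has a model that remains valid beyond the sampled interval, whereas a Fourier series fitted on the same interval would simply repeat periodically outside it:

```python
import numpy as np

def sinc_basis(p, x):
    """A parametric basis function: sin(p*x) / (p*x)."""
    return np.sin(p * x) / (p * x)

x = np.linspace(0.01, 1.0, 25)          # known samples, xmin...xmax
f = sinc_basis(np.pi, x)                # the 'unknown' function to recover

best = None
for p in np.linspace(1.0, 6.0, 501):    # crude scan of the non-linear parameter
    g = sinc_basis(p, x)
    A = (g @ f) / (g @ g)               # least-squares amplitude for this p
    err = float(np.sum((f - A * g) ** 2))
    if best is None or err < best[0]:
        best = (err, p, A)

err, p, A = best
# The fitted model A * sinc_basis(p, x) now extrapolates beyond x = 1.
```

Because the non-linear parameter p is recovered close to its true value, the fitted model tracks the given function far outside the sampled interval.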

What are the drawbacks of GGP compared with Fourier and similar series expansions? Usual series expansions have a strong theoretical background and can be implemented in simple and efficient routines that are easy to use. GGP is a rather complex and flexible program based partly on numerical experiments, and the non-linear parameter optimizations and the construction of an appropriate basis can be very time-consuming.

How does GGP construct an appropriate basis? GGP picks the basis functions fk out of a pool. If the user has some knowledge of the solution, he can put any function into the pool, provided that the function can be represented by operators and functions known to GGP. If the user has no idea what to put into the pool, GGP starts with a random generation of pool functions. GGP performs many experiments with different series expansions. Within these experiments, it computes the linear parameters Ak and optimizes the non-linear parameters pk. From the results of these experiments, GGP computes the fitness of all functions in the pool. It discards pool functions with a low fitness and selects fit pool functions to generate new ones with genetic operations (mutations, crossover). I.e., GGP constructs the basis functions (more or less) like Koza's well-known Genetic Programming (GP).

The entire basis corresponds to a society that tries to solve a problem. Each society is characterized by its fitness. It consists of one or several members, the basis functions. The members of each society are individuals who can be members of one or several societies. Each individual is characterized by its fitness. The fitness of an individual depends not only on the fitness values of its societies, but also on its usefulness within each society. The GGP pool contains all individuals available within one generation. A society elects individuals, estimates their usefulness, and replaces those with a low individual fitness by other individuals. In the actual GGP implementation, there is only one society with a fixed (user-definable) number of members. The society fitness and the individual fitness are defined by symbolic formulas that can be modified by the user.
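
The pool cycle described above (evaluate fitness, discard the unfit half, refill by genetic operations) can be sketched as follows; all function names and the toy 'individuals' are illustrative, not GGP's internals:

```python
import random

def evolve_pool(pool, fitness, mutate, crossover, rng):
    """One generation: keep the fit half, refill with crossover + mutation."""
    survivors = sorted(pool, key=fitness, reverse=True)[: len(pool) // 2]
    children = []
    while len(survivors) + len(children) < len(pool):
        a, b = rng.sample(survivors, 2)
        children.append(mutate(crossover(a, b), rng))
    return survivors + children

# Toy usage: 'individuals' are plain numbers; fitness rewards closeness to 1.0.
rng = random.Random(1)
pool = [rng.uniform(-10.0, 10.0) for _ in range(20)]
initial = list(pool)
for _ in range(30):
    pool = evolve_pool(pool,
                       fitness=lambda v: -abs(v - 1.0),
                       mutate=lambda v, r: v + r.gauss(0.0, 0.1),
                       crossover=lambda a, b: 0.5 * (a + b),
                       rng=rng)
best = max(pool, key=lambda v: -abs(v - 1.0))
```

Since the fit half survives each generation unchanged, the best individual never gets worse; the genetic operations only add candidates that may improve on it.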

What are the differences between GP and GGP? GP uses a different, formally simpler approximation of the given function: f(x)=f0(x)+Error(x). I.e., GP constructs only one function f0(x). f0(x) has the form f0(x)=F1(F2(T1,T2,F3(T3)),F3(T4)), where F1 is a function with two arguments, F2 a function with three arguments, and F3 a function with one argument. For example, F1 might be one of the well-known binary arithmetic operators +,-,*,/, and F3 might be sin or cos. The elementary functions F1, F2, ... that are available depend on the implementation and on the settings of the user. T1, T2 denote so-called terminals. For standard GP implementations, terminals are either the variable x or a randomly generated constant. GGP constructs the basis functions fk(x) exactly like GP's f0(x), but GGP 'knows' four types of terminals: 1) explicit constants (2.38, 1.76, etc.), 2) implicit constants (C0, C2, C3, C4) that can be defined by the user, 3) the non-linear parameters pk, 4) the variable x. Theoretically, GP could obtain f0(x)=Sum(Ak*fk(pk,x)), but this would require an extremely large formula depth for f0(x) and is very unlikely. If the GGP user sets K=1, the difference between GP and GGP obviously becomes smaller. GGP then works with f(x)=A1*f1(p1,x)+Error(x). It has been found that the presence of the linear and non-linear parameters is extremely helpful and speeds up the performance drastically. Koza's standard example f(x)=x*x/2 is almost trivial for GGP. It is solved almost immediately. GGP's standard example f(x)=sin(3.1416*x)/(3.1416*x), defined at 50 points within the interval x=0...2, seems to be not much more difficult and is typically solved by GGP with 500-5000 fitness computations with a pool of 50-100 basis functions and K=1...4. For the non-linear parameter optimization of each basis function (which is an 'individual' in GP), GGP usually requires 3-5 fitness computations. 
I.e., GGP needs to compute 100-1000 individuals to find an appropriate solution of f(x)=sin(3.1416*x)/(3.1416*x). My colleagues were not able to find a useful solution of this problem with a C++ version of Koza's GP, even with 500 individuals per generation and more than 100 generations. In addition to the linear and non-linear parameters, GGP has many useful features that can only be mentioned briefly here. 1) Plausibility checking of individuals avoids unnecessary fitness computations of completely useless individuals. 2) Mini mutations, such as the exchange of terminals, freezing of parameter values, etc., improve the terminal optimization. 3) Fixed-shape, 'full' trees with binary 'argument transfer' operators simulate introns. 4) Storage of pools and basis functions in files allows the results of previous computations to be reused. 5) Sophisticated fitness definitions and system parameter handling improve the flexibility and performance of GGP. 6) GGP uses only a part of the values x1,x2,...,xI for computing the parameters and the approximation error. The remaining values are used to check the quality of the extrapolation. 7) The graphic features improve the user friendliness.
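
A minimal sketch of such a GP-style expression tree with GGP's four terminal kinds might look as follows; the nested-tuple encoding and function names are our own assumptions, not GGP's representation:

```python
import math

def evaluate(node, x, p, consts):
    """Recursively evaluate a nested-tuple expression tree."""
    op, *args = node if isinstance(node, tuple) else (node,)
    if op == "x":
        return x                      # terminal 4: the variable x
    if op == "p":
        return p                      # terminal 3: the non-linear parameter pk
    if op == "C":
        return consts[args[0]]        # terminal 2: implicit constant Ci
    if isinstance(op, float):
        return op                     # terminal 1: explicit constant
    a = [evaluate(arg, x, p, consts) for arg in args]
    ops = {"+": lambda u, v: u + v, "*": lambda u, v: u * v, "sin": math.sin}
    return ops[op](*a)

# f1(p, x) = sin(p * x), written as a tree:
tree = ("sin", ("*", "p", "x"))
```

Genetic operations then act on such trees: mutation replaces a subtree or a terminal, crossover swaps subtrees between two individuals.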

What are GGP's elementary operators and functions? GGP 'knows' elementary operators and functions with one and two arguments. Most of them can be formally differentiated; some cannot, or at least not everywhere. (GGP can formally differentiate a given formula!) Note that all operators and functions are 'protected', i.e., they return a finite result in all cases. For example, 1/0 is infinite and usually causes a floating-point exception. The protected version of 1/0 returns 1.0E+300, which may cause unexpected results, but no program termination. For more information, consult the list of elementary operators and functions.
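
Protected operators of this kind might be sketched as below; the 1.0E+300 fallback for division by zero matches the value quoted above, while the fallback conventions for log and sqrt are purely illustrative:

```python
import math

HUGE = 1.0e300  # the finite fallback value quoted in the text

def p_div(a, b):
    """Protected division: 1/0 returns a huge but finite number."""
    return a / b if b != 0.0 else HUGE

def p_log(a):
    """Protected logarithm (fallback convention is illustrative)."""
    return math.log(a) if a > 0.0 else -HUGE

def p_sqrt(a):
    """Protected square root: operate on |a| (convention is illustrative)."""
    return math.sqrt(abs(a))
```

Protection matters because randomly generated formulas routinely produce divisions by zero or roots of negative numbers; a crash on any such formula would abort the whole evolutionary run.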

Now, you probably know enough about GGP for a first run! You can
start GGP like any executable, using any feature of Windows
NT/95. GGP will show a desk with the five menus **File**, ...

To watch how GGP solves the standard problem (with standard
system settings), click the button with the two triangles in the
GGP Actions box. GGP will work with only one basis function f1.
It will pick the first one out of the pool and try to optimize
its amplitude A1 and its nonlinear parameter by minimizing the
least-squares error (integral of Error^2(x)) over the interval
x=0...1. For optimizing the nonlinear parameter, it will perform
only three attempts. This is certainly not enough for a good
optimization, but if a promising result is obtained, GGP will
retry this function later. GGP immediately displays all results.
After the parameter optimization, GGP computes the error of the
approximation in the interval x=0...1, the error of the
extrapolation in the interval x=1...2, and several fitness
values, and it displays some of the actual data in the **Data of
the approximation** box. In this box, you can also read what GGP
is doing. Then, GGP picks the second function out of the pool,
optimizes its parameters, and so on. Since there are 100
functions in the pool and 3 tries are required for optimizing
the parameters, GGP will try (almost) 300 times to compute all
fitness values of all functions. Sometimes, GGP recognizes
already after the first or second try that a basis function has
no chance and will punish it with a bad fitness without
performing three tries. As soon as GGP has computed the fitness
of all pool functions, it will pick up the fittest ones again to
try to improve their parameters. With the standard settings, GGP
will repeat this 10 times. Afterwards, GGP discards half of the
pool functions and generates new pool functions by several
genetic operations (random generation, mutation, mini mutations,
crossover).
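
The per-function schedule just described (up to three optimization attempts, with early abandonment of hopeless functions) could be sketched like this; the threshold and function names are hypothetical, not GGP's actual code:

```python
def score_function(try_once, max_tries=3, hopeless=1.0e6):
    """Best error over up to max_tries parameter attempts; a hopeless first
    or second attempt is punished with a bad score without further tries."""
    best = float("inf")
    for _ in range(max_tries):
        err = try_once()
        if err > hopeless:
            return hopeless           # punish with a bad fitness, stop early
        best = min(best, err)
    return best

# Toy usage: three attempts yield errors 0.5, 0.2, 0.3 -> keep the best one.
errs = iter([0.5, 0.2, 0.3])
best = score_function(lambda: next(errs))
```

With 100 pool functions and three tries each, this schedule accounts for the (almost) 300 fitness computations per pass mentioned above; early abandonment is what makes it "almost".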