Definition: The Field of Artificial Intelligence
By ai-depot | June 30, 2002
Branches
The branches of Artificial Intelligence split off in many directions, and some overlap quite extensively, so it is difficult to classify them. Note also that some fields started out as distinct parts of AI, but have grown to become only remotely related to it.
Search and Optimisation
There are many kinds of searches, the simplest of which involve trying out all the solutions in a particular order. The entire set of possible solutions is called the search space.
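The simplest case can be sketched in a few lines; the 3-digit code problem and its target below are made up purely for illustration:

```python
from itertools import product

def exhaustive_search(space, is_solution):
    """Try every candidate in the search space, one by one, in order."""
    for candidate in space:
        if is_solution(candidate):
            return candidate
    return None  # the whole space was exhausted without success

# Toy search space: every 3-digit code; the target (4, 2, 7) is arbitrary.
space = product(range(10), repeat=3)
found = exhaustive_search(space, lambda c: c == (4, 2, 7))
```

Exhaustive search is guaranteed to find a solution if one exists, but the search space typically grows far too fast for this to be practical, which motivates the smarter strategies below.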
Constraint Satisfaction
Here, the problem is modelled as a set of variables, each of which can be assigned particular values. Different types of constraints (equality, numerical constraints) are set up on these variables to specify the requirements of the problem. A search is then performed over the variables to find potential solutions. There are many nifty tricks for partly resolving constraints in order to guide the search more efficiently (this is called a heuristic search). The problems solved can also be combinatorial optimisations, where one solution has a better value than another and the best needs to be found. The class of problems usually tackled is NP-complete, where the complexity increases exponentially as the problem size increases linearly.
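As a minimal sketch of the idea, here is a backtracking search applied to the stock map-colouring problem (three mutually adjacent regions must get different colours); the problem itself is a standard textbook example, not one from this article:

```python
def backtrack(assignment, variables, domains, constraints):
    """Assign one variable at a time, depth-first, abandoning any
    branch as soon as it violates a constraint (pruning)."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result:
                return result
        del assignment[var]  # undo and try the next value
    return None

# Three regions, three colours, all pairs must differ.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}

def different(x, y):
    # Constraint is satisfied (vacuously) until both variables are assigned.
    return lambda a: x not in a or y not in a or a[x] != a[y]

constraints = [different("A", "B"), different("B", "C"), different("A", "C")]
solution = backtrack({}, variables, domains, constraints)
```

Checking constraints after every partial assignment is what distinguishes this from blind search: entire subtrees of the search space are pruned the moment a conflict appears.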
Function Optimisation
This is the task of finding the optimal set of parameters of a function. There are many simple ways of doing this, including hill-climbing. Metaphorically, hill-climbing looks around the current position for a higher position, and moves to it. If there is no higher position, then the top is reached! This approach is fairly naïve, and can lead to finding sub-optimal solutions (called local maxima).
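Hill-climbing in one dimension can be sketched as follows; the quadratic function and the step size are arbitrary choices for illustration:

```python
def hill_climb(f, x, step=0.1):
    """Repeatedly move to a neighbouring point with a higher value of f;
    stop when neither neighbour improves (a peak -- possibly only local)."""
    while True:
        neighbours = [x - step, x + step]
        best = max(neighbours, key=f)
        if f(best) <= f(x):
            return x  # no higher neighbour: the top is reached
        x = best

# Maximise f(x) = -(x - 3)^2, whose single peak is at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, 0.0)
```

On this function the single peak is always found; on a function with several humps, the climber stops at whichever hump it started beneath, which is exactly the local-maximum problem described above.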
Genetic Algorithms also provide optimisation capabilities, by mimicking the process of evolution (according to Darwin’s theory) and the survival of the fittest. The best solutions are mated together to create better offspring solutions. This approach has fewer problems with local maxima, but there is still no guarantee of finding the ideal solution.
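A bare-bones genetic algorithm might look like the sketch below; the fitness function (count the 1-bits in a string) and all the parameter values are arbitrary choices to keep the example tiny:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def evolve(pop_size=20, genes=10, generations=40):
    """Evolve bit strings whose fitness is the number of 1s.
    Each generation keeps the fittest half and mates pairs of them."""
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)        # fittest first
        parents = pop[: pop_size // 2]          # survival of the fittest
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = random.randrange(genes)
            child[i] ^= random.random() < 0.1   # occasional mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

best = evolve()
```

Keeping the parents alongside their children (elitism) means the best fitness never decreases, while mutation keeps injecting variety so the population does not get stuck as easily as a lone hill-climber would.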
Planning
Planning involves finding a sequence of actions that can lead from the current state to the goal state. This is usually done in a hierarchical manner: overall plans are elaborated first, and the details are worked out later. This is a more efficient approach.
The major problem planning has to contend with is an imperfect world. In a perfect environment, a simple search can be performed, and if a plan is found, it will work in practice. Sometimes, however, actions do not have the expected results, causing the plans not to work out.
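Assuming that perfect environment, planning reduces to a search over states. The sketch below finds a plan by breadth-first search; the key-and-door domain and its action names are invented for illustration:

```python
from collections import deque

# Each action: (name, preconditions, facts added, facts removed).
ACTIONS = [
    ("pick-up-key", {"at-key"}, {"has-key"}, set()),
    ("walk-to-key", set(), {"at-key"}, {"at-door"}),
    ("walk-to-door", set(), {"at-door"}, {"at-key"}),
    ("open-door", {"has-key", "at-door"}, {"door-open"}, set()),
]

def plan(start, goal):
    """Breadth-first search from the current state to any state
    satisfying the goal; returns the sequence of action names."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                     # every goal fact holds
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                  # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

steps = plan({"at-door"}, {"door-open"})
```

In an imperfect world an executed action may not produce its listed effects, so a real agent must monitor execution and re-plan when the state diverges from what the plan assumed.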
Machine Learning
Machine learning is becoming increasingly popular, and equally important. People realise that it is theoretically much easier to get a machine to learn something from facts than to spend time teaching it explicitly. The quality of the learning algorithm is of course a major factor!
Neural Networks
Artificial Neural Networks, often just called Neural Networks (NN), are modelled on the human brain. The internal structure of the network, composed of a small number of artificial neurons, means that the information learnt is not stored perfectly. There is, however, the advantage of being able to generalise, i.e. work with data that the network didn’t come across during its training. How well it performs depends on how well it can generalise, which in turn depends on how well the network was designed and trained. As such, a lot of research is done on ways to assure good generalisation.
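A single artificial neuron (a perceptron) is the simplest possible network: a weighted sum followed by a threshold. This sketch trains one to reproduce logical AND from its truth table; the learning rate and epoch count are arbitrary:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """One artificial neuron trained with the classic perceptron rule:
    nudge each weight in proportion to the error it contributed to."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND from its four truth-table rows.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Real networks stack many such neurons into layers, and it is the interaction of those layers that gives the ability to generalise beyond the training data, as well as the design difficulties mentioned above.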
Inductive Programming
Given only a limited number of results of a function, inductive programming tries to write the definition of the program that produced those results. Its success depends on how many example results were given, and on how complex the function is. Currently, some inductive programming algorithms can learn simple logic programs, even recursively defined ones. More complex programs remain challenging to learn, as does applying this process to real-life data rather than computer-generated functions.
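One naive way to convey the idea (much simpler than the logic-program learners mentioned above) is brute-force enumeration over a tiny space of candidate programs; the set of primitives here is an arbitrary invention:

```python
# Building blocks for tiny one-argument programs (chosen arbitrarily).
PRIMITIVES = {
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x * x": lambda x: x * x,
    "x - 1": lambda x: x - 1,
}

def induce(examples):
    """Enumerate single primitives and two-step compositions, returning
    the name of the first program consistent with every example."""
    candidates = [(name, fn) for name, fn in PRIMITIVES.items()]
    candidates += [
        (f"{g_name} then {f_name}", lambda x, f=f, g=g: f(g(x)))
        for f_name, f in PRIMITIVES.items()
        for g_name, g in PRIMITIVES.items()
    ]
    for name, fn in candidates:
        if all(fn(x) == y for x, y in examples):
            return name
    return None

# Results produced by the hidden function x -> 2x + 1.
program = induce([(0, 1), (1, 3), (2, 5)])
```

With only a few examples, several different programs may fit equally well, which is why the number and variety of example results matters so much in practice.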
Decision Tree Learning
A decision tree is a structure that allows learning of opinions (e.g. good or bad) about objects based on their attributes (length, colour…). Given a series of examples, the learning algorithm can build a decision tree that will be capable of classifying new examples. If the new examples are handled correctly, nothing is done. Otherwise, the structure of the tree is modified until the correct results are produced. The challenge is getting the algorithm to perform well on very large sets of data, handling errors in values (noise), and determining the optimal fit of the tree to the training and test data.
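A stripped-down sketch of tree building is shown below. It follows the ID3 idea of splitting on the most informative attribute, but uses a crude majority-vote error count instead of information gain to stay short; the fruit data is invented for illustration:

```python
from collections import Counter

def build_tree(rows, attributes):
    """Split on the attribute producing the purest children, then recurse
    until every leaf holds a single label."""
    labels = [label for _, label in rows]
    if len(set(labels)) == 1:
        return labels[0]                       # pure leaf
    if not attributes:
        return Counter(labels).most_common(1)[0][0]

    def impurity(attr):
        # Majority-vote errors summed over the attribute's values --
        # a crude stand-in for information gain.
        errors = 0
        for value in {r[attr] for r, _ in rows}:
            subset = [l for r, l in rows if r[attr] == value]
            errors += len(subset) - Counter(subset).most_common(1)[0][1]
        return errors

    best = min(attributes, key=impurity)
    rest = [a for a in attributes if a != best]
    return {
        "attr": best,
        "branches": {
            value: build_tree([(r, l) for r, l in rows if r[best] == value], rest)
            for value in {r[best] for r, _ in rows}
        },
    }

def classify(tree, row):
    while isinstance(tree, dict):
        tree = tree["branches"][row[tree["attr"]]]
    return tree

# Invented examples: judge fruit as good or bad by colour and length.
data = [
    ({"colour": "green", "length": "long"}, "good"),
    ({"colour": "green", "length": "short"}, "bad"),
    ({"colour": "brown", "length": "long"}, "bad"),
    ({"colour": "brown", "length": "short"}, "bad"),
]
tree = build_tree(data, ["colour", "length"])
```

Growing the tree until every leaf is pure, as done here, is exactly what causes overfitting on noisy data; real learners prune the tree back to balance fit against generalisation.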
Data Mining
This is the process of extracting useful rules from very large sets of data. When trends are observed, their causes need to be identified, and a rule expressing their relationship needs to be established. In this field, the challenge is being able to process a lot of information very efficiently, and ignore the potential errors.
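A classic instance of rule extraction is finding association rules in shopping baskets; the baskets and threshold values below are invented for illustration:

```python
from itertools import combinations

# Hypothetical shopping baskets; we look for rules "X -> Y" that occur
# often enough (support) and hold reliably enough (confidence).
baskets = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def rules(baskets, min_support=0.4, min_confidence=0.7):
    items = set().union(*baskets)
    found = []
    for x, y in combinations(sorted(items), 2):
        for a, b in ((x, y), (y, x)):          # try both directions
            with_a = [bk for bk in baskets if a in bk]
            both = [bk for bk in with_a if b in bk]
            support = len(both) / len(baskets)
            confidence = len(both) / len(with_a)
            if support >= min_support and confidence >= min_confidence:
                found.append((a, b, round(confidence, 2)))
    return found

discovered = rules(baskets)
```

The support threshold is what lets the approach scale and tolerate errors: rare co-occurrences, which are the most likely to be noise, are discarded before any rule is considered.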
Bayesian Networks
Bayesian Networks model the relationships between variables. This is called conditional dependence: the state of a variable may depend on many others. The relationships can be represented as a graph, and there are clever algorithms to estimate the probability of unknown events given existing knowledge. Admittedly, one common complaint against this approach relates to the design; it can be very tedious to model such networks by hand. As such, learning the structure and the dependencies between variables seems like an appealing option.
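The sketch below runs inference by enumeration on the textbook rain/sprinkler/wet-grass network (the structure and probability values are the standard example, not from this article): summing the joint probability over every combination of the unknowns answers a query like "given the grass is wet, did it rain?".

```python
# Network: Rain -> Sprinkler, and (Sprinkler, Rain) -> WetGrass.
P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}            # P(sprinkler | rain)
P_WET = {(True, True): 0.99, (True, False): 0.9,  # P(wet | sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def p_joint(rain, sprinkler, wet):
    """Joint probability factorised along the network's edges."""
    pr = P_RAIN if rain else 1 - P_RAIN
    ps = P_SPRINKLER[rain] if sprinkler else 1 - P_SPRINKLER[rain]
    pw = P_WET[(sprinkler, rain)] if wet else 1 - P_WET[(sprinkler, rain)]
    return pr * ps * pw

def p_rain_given_wet():
    """Inference by enumeration: P(rain | grass is wet)."""
    wet = sum(p_joint(r, s, True) for r in (True, False) for s in (True, False))
    rain_and_wet = sum(p_joint(True, s, True) for s in (True, False))
    return rain_and_wet / wet

posterior = p_rain_given_wet()
```

Enumeration is exponential in the number of variables, which is fine for three of them; the clever algorithms alluded to above exploit the graph structure to avoid summing over every combination.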