Artificial Intelligence (AI) deals with structuring large amounts of data so that inferences can be drawn from them. As a very first example of an expert system, take the oldest known scientific treatise surviving from the ancient world, the surgical papyrus (the Edwin Smith papyrus) of about 3000 BC. It discusses the cases of injured men whom a surgeon had no hope of saving, and it lay unnoticed for many years until it was rediscovered and published for the New York Historical Society. The papyrus summarizes surgical observations of head wounds and discloses an inductive method of inference: each case is stated with title, examination, diagnosis, treatment, prognosis, and glosses, much in the sense that if a patient shows these symptoms, then he has this injury, with this prognosis if this treatment is applied (a toy rendering of this case scheme appears at the end of this section).

About half a century ago, pioneering computer scientists announced the emergence of machine intelligence: machines that think, that learn, and that create. The prospects were driven by early successes in state-space exploration. Samuel wrote a checkers-playing program that was able to beat its author, and Newell and Simon successfully ran the General Problem Solver (GPS), which reduced the difference between the current and the desired state on a variety of state-space problems. GPS represents a problem as the task of transforming one symbolic expression into another, a decomposition that fitted well with the structure of several other problem-solving programs (a minimal sketch of this difference-reduction strategy also appears at the end of this section).

Owing to small available memories and slow CPUs, these and other promising early AI programs were limited in their problem-solving abilities and failed to scale in later years. Two basic problems had to be overcome. The frame problem, characterized as "the smoking pistol behind a lot of the attacks on AI", refers to everything that is going on around the central actors, while the qualification problem refers to the host of qualifications that can prevent an expected rule from applying exactly. While Dreyfus identified several arguments for why intelligence realized in a computer is not true intelligence in an ontological sense, the most important reason for many of AI's setbacks is the limit on computational resources, especially memory, which is often too small to keep all the information needed for a suitable inference accessible.

Bounded resources lead to a performance-oriented interpretation of the term intelligence: in contrast to the Turing test, programs have to show human-adequate or human-superior abilities in a competitive, resource-constrained environment on a selected class of benchmarks. As a consequence, even the same program can be judged to be more intelligent when run on better hardware or when given more time to execute. This competitive view has become established: international competitions in data mining (e.g., the KDD Cup), game playing (e.g., the Computer Olympiad), robotics (e.g., RoboCup), theorem proving (e.g., the CADE ATP System Competition), and action planning (e.g., the International Planning Competition, IPC) call for highly performant systems on current machines under space and time limitations.
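To make the papyrus's case scheme concrete, the following toy sketch casts one case as an if-then production rule over a set of observed findings. The case content is paraphrased for illustration only, and the field and function names are assumptions of this sketch, not quotations from the papyrus or the interface of any particular expert-system shell.

    # A paraphrased papyrus-style case as an if-then production rule;
    # field names and case content are illustrative assumptions.
    CASES = [
        {
            "title": "a gaping wound in the head, penetrating to the bone",
            "examination": frozenset({"gaping head wound", "bone exposed"}),
            "diagnosis": "an ailment which I will treat",
            "treatment": "bind the wound with fresh meat, then with linen",
            "prognosis": "favorable",
        },
    ]

    def consult(findings):
        """Fire the first case whose examination findings are all observed."""
        for case in CASES:
            if case["examination"] <= findings:
                return case["diagnosis"], case["treatment"], case["prognosis"]
        return None  # no case matches the observed findings

    print(consult(frozenset({"gaping head wound", "bone exposed"})))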
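The difference-reduction strategy behind GPS can likewise be sketched in a few lines. The sketch below assumes that states are frozensets of facts and that operators are (name, preconditions, additions, deletions) tuples; it is a minimal illustration of means-ends analysis under these assumptions, not Newell and Simon's original implementation, and it ignores well-known complications such as interacting subgoals.

    # Minimal means-ends analysis: repeatedly pick an operator relevant to
    # the difference between the current state and the goal, establish its
    # preconditions as a subgoal, and apply it. All names are illustrative.

    def solve(state, goal, operators, depth=8):
        """Return a list of operators transforming state into goal, or None."""
        diff = goal - state                # facts still missing from the state
        if not diff:
            return []                      # nothing left to achieve
        if depth == 0:
            return None                    # give up on deep recursions
        for op in operators:
            name, pre, adds, dels = op
            if not adds & diff:
                continue                   # irrelevant to the difference
            prefix = solve(state, pre, operators, depth - 1)
            if prefix is None:
                continue                   # preconditions cannot be established
            mid = state
            for _, _, p_adds, p_dels in prefix:
                mid = (mid - p_dels) | p_adds
            if not pre <= mid:
                continue                   # subplan clobbered a precondition
            rest = solve((mid - dels) | adds, goal, operators, depth - 1)
            if rest is not None:
                return prefix + [op] + rest
        return None                        # no operator closes the difference

    # Toy domain: with a key in hand, unlock the door, then open it.
    ops = [
        ("unlock", frozenset({"have key"}), frozenset({"unlocked"}), frozenset()),
        ("open",   frozenset({"unlocked"}), frozenset({"open"}),     frozenset()),
    ]
    plan = solve(frozenset({"have key"}), frozenset({"open"}), ops)
    print([name for name, *_ in plan])     # prints ['unlock', 'open']

The recursion mirrors the decomposition described above: transforming one symbolic expression into another is reduced to establishing an operator's preconditions and then closing the remaining difference.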