The problem of dealing with large data sets is at the heart of many real-world domains. In many organizations, the data size has already surpassed petabytes. It is clear that the physical limitations of RAM are a major hurdle to processing such huge amounts of data. On the other hand, the cheapest media that can hold huge data sets, i.e., hard disks, are about 10,000 to 1,000,000 times slower to access than RAM. Moreover, according to recent estimates, technological progress yields annual increases of about 40%-60% in processor speeds, while disk transfer rates improve by only 7% to 10%. At the same time, the cost of disk space has decreased considerably; at the time of writing, 500 gigabytes could be obtained for about 150 US dollars. This growing disparity has drawn increasing attention to the design of external memory algorithms in recent years. On a hard disk, random accesses are slow due to the latency of moving the head to the position of the data, but once the head is in place, data can be read very rapidly. External memory algorithms are therefore designed to access the data block-wise. Because they are more informed about future accesses to the data, they can organize their execution to minimize the number of block accesses.
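As a minimal illustration of block-wise access, the following C++ sketch streams a file in fixed-size blocks rather than element by element. The file name data.bin and the block size of 1 MiB are placeholders chosen for the example; a real external memory implementation would tune the block size to the disk and typically overlap I/O with computation.

```cpp
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Assumed parameters for this sketch: a 1 MiB block and a sample file name.
    const std::size_t block_size = 1 << 20;
    std::ifstream in("data.bin", std::ios::binary);
    if (!in) {
        std::cerr << "cannot open data.bin\n";
        return 1;
    }

    std::vector<char> block(block_size);
    std::size_t blocks_read = 0;

    // Read one block per request: each request costs at most one seek,
    // after which the data streams from disk at full transfer speed.
    while (in) {
        in.read(block.data(), static_cast<std::streamsize>(block.size()));
        std::streamsize bytes = in.gcount();
        if (bytes == 0) {
            break;  // end of file
        }
        // ... process the first `bytes` bytes of `block` here ...
        ++blocks_read;
    }

    std::cout << "processed " << blocks_read << " block(s)\n";
    return 0;
}
```

The point of the sketch is only the access pattern: the cost model of external memory algorithms counts such block transfers, not individual element accesses.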