www.cs.mcgill.ca/~cs251/OldCourses/1997/topic3/
Hypothesis : exponential growth is larger than polynomial growth. To make this precise, compute the limit of c^n / n^k as n tends to infinity, for a constant c > 1 and a fixed k. Since c^n eventually exceeds n^(2k) (taking logarithms, n log c eventually exceeds 2k log n), the ratio c^n / n^k is eventually at least n^(2k) / n^k = n^k, which tends to infinity.

In 1962, based upon physical reasoning, Bremermann stated that no computer, biological or artificial, could ever process more than 2 * 10^47 bits of information per second per gram of its mass. Now assume that we protect a computer system with a code consisting of 40 decimal digits. Breaking this code by brute force would, in the worst case, require trying 10^40 combinations. A computer working at Bremermann's speed could do that in less than a second, even if it weighed only one gram.
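Both estimates above can be checked with a one-line calculation each. The following is only a sketch; in particular, counting one code combination as one unit of Bremermann's processing budget is a simplifying assumption made here, not a claim of the original notes.

    \[
      \frac{c^{n}}{n^{k}} \;\ge\; \frac{n^{2k}}{n^{k}} \;=\; n^{k} \;\longrightarrow\; \infty
      \qquad \text{once } c^{n} \ge n^{2k}, \text{ i.e. once } n \ln c \ge 2k \ln n .
    \]
    \[
      \frac{10^{40} \text{ combinations}}{2 \times 10^{47} \text{ combinations per second}}
      \;=\; 5 \times 10^{-8} \text{ seconds} \;<\; 1 \text{ second (for a one-gram computer)}.
    \]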
Factorial time TSP

TSP is the Travelling Salesman Problem. Let's say you are a salesperson and you have to visit each of n cities to sell your product. You want to visit all the cities, return to the city you came from, and find the shortest possible tour. Obviously, it would be naive to try all the permutations: checking each of the (n-1)! possible orderings of the remaining cities takes factorial time. Many have tried their hand at this problem, but few have found a good solution for it.

FIGURE 1 : values of the iterated logarithm log* n.
    n                 log* n
    1                 0
    2^1 = 2           1
    2^2 = 4           2
    2^4 = 16          3
    2^16 = 65536      4
    2^65536           5

Summations

Summations are used extensively in computer science as a mathematical tool to help analyze the running time of algorithms. Most algorithms do their work through the repetition of simple actions, or loops. Loops are created in two different ways: first, by constructs such as repeat, while, and for, and second, by recursion. In order to analyze the time taken by looping programs, we have to deal with summations, as one must sum the times taken in the various iterations. In order to analyze the time taken by recursive programs, we have to deal with recurrences, which are the subject of the next lecture. There are five basic methods for dealing with summations.

Rough Bounding

Given a summation, we would like to bound each term by something simple, and thus make the summation more manageable. While this method will not determine an exact measure of the running time of the algorithm that the summation represents, it is useful for obtaining bounds in Big Oh notation. A typical example is the summation that gives the running time of many different algorithms, among them selection sort (see the worked instances after the Telescoping Series paragraph below).

Exact Computations

As the name of the method suggests, we would like to compute an exact numerical value for the summation. To sum the first n integers, pair off the n numbers such that the first element is grouped with the last (n-th) element, the second with the (n-1)-th element, and so on. For example, 1 is grouped with n, 2 is grouped with (n-1), 10 is grouped with (n-9), etc. Clearly, each pair sums to (n+1), and there are n/2 such pairs, so the total is n(n+1)/2.

Induction

Induction can be used to prove a given statement. We say it is "induction with respect to a parameter n" if we first show the statement for n = 0, then assume it holds for all values smaller than n, and finally verify it for n itself. A statement of this kind is plausible even before we prove it, because an exponentially growing series has a huge dominating last term: the last term of an exponentially increasing sum is of the same order of magnitude as the entire sum.

Telescoping Series

A telescoping series is a series in which each term can be expressed as a difference, in such a way that most of the terms cancel each other out.
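The example summations for these methods are not reproduced above; the instances worked out below are standard illustrative choices, not necessarily the ones used in the lecture.

    Rough bounding (the comparison count of selection sort):
    \[
      \sum_{i=1}^{n-1} i \;\le\; \sum_{i=1}^{n-1} n \;=\; n(n-1) \;=\; O(n^{2}).
    \]
    Exact computation (the pairing argument):
    \[
      \sum_{i=1}^{n} i \;=\; \frac{n}{2}\,(n+1) \;=\; \frac{n(n+1)}{2}.
    \]
    An exponentially increasing sum, provable by induction on n (here c > 1):
    \[
      \sum_{i=0}^{n} c^{i} \;=\; \frac{c^{n+1}-1}{c-1} \;\le\; \frac{c}{c-1}\, c^{n} \;=\; O(c^{n}).
    \]
    Telescoping:
    \[
      \sum_{i=1}^{n} \frac{1}{i(i+1)} \;=\; \sum_{i=1}^{n} \left( \frac{1}{i} - \frac{1}{i+1} \right) \;=\; 1 - \frac{1}{n+1}.
    \]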
The next problem is motivated by the VLSI industry, where millions of VLSI chips must be tested for defects. (VLSI stands for very large scale integration, the integrated-circuit technology used to fabricate most microprocessors today.) The idea is to let the chips test each other by built-in circuitry. A good chip always reports correctly whether the other chip in a test is good or bad; a bad chip may say anything about the other chip, and thus cannot be trusted. Throughout we assume that more than half of the chips are good. Our computational model states that each pairwise chip test counts for one time unit. It suffices to find a single good chip, because one good chip can then identify all the good chips in O(n) time.

A naive method

We test each chip individually by comparing it against all remaining chips: we take votes from the remaining chips and let the majority decide. If a chip is declared bad, it is junked and we continue. In the worst case this uses on the order of n^2 tests.

Divide and Conquer Paradigm

Definitions :
+ G : set of all good chips
+ B : set of all bad chips
+ B-B : pair of chips that has been declared "bad, bad"
+ B-G : pair of chips that has been declared "bad, good"
+ G-G : pair of chips that has been declared "good, good"

Here is one iteration of the divide and conquer paradigm. If n = 1 or n = 2, then return one chip (it is good). If n is odd, first test the odd chip against the others by majority vote: if the odd chip is good, return it; if the odd chip is bad, junk it. Then, with the remaining chips, do :
+ pair all chips
+ junk all B-G, B-B pairs
+ junk one chip of every G-G pair
+ find (recursively) one good chip among the remaining chips

Remarks :
1. If before the iteration |G| > |B|, then after the iteration |G| > |B|. Each G-G pair has either two good chips or two bad ones; every junked pair contains at least as many bad chips as good ones, and each surviving chip stands for a pair of like chips, so the good chips keep their strict majority.
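To make the two strategies above concrete, here is a small Python sketch (one possible rendering, not taken from the course notes); the Chip class, the random behaviour chosen for bad chips, and the tie-breaking rule in the majority vote are assumptions made for this example.

    import random

    class Chip:
        # A chip that can test another chip. A good chip reports the truth;
        # a bad chip may say anything (modelled here as a random answer).
        def __init__(self, is_good):
            self.is_good = is_good

        def report(self, other):
            # This chip's verdict ("good" or "bad") on `other`.
            if self.is_good:
                return "good" if other.is_good else "bad"
            return random.choice(["good", "bad"])

    def majority_says_good(chip, others):
        # Naive test of a single chip: every other chip votes, majority decides.
        # Ties count as "good"; under a strict good majority this still always
        # rejects bad chips and always accepts good ones.
        votes = sum(1 if c.report(chip) == "good" else -1 for c in others)
        return votes >= 0

    def naive_find_good(chips):
        # Test each chip against all remaining chips: O(n^2) pairwise tests.
        for i, chip in enumerate(chips):
            if majority_says_good(chip, chips[:i] + chips[i+1:]):
                return chip
        return None

    def dc_find_good(chips):
        # One iteration of the divide-and-conquer paradigm, applied recursively.
        # Assumes |G| > |B| among `chips`.
        if len(chips) <= 2:
            return chips[0]                    # good majority => it is good
        if len(chips) % 2 == 1:                # deal with the odd chip first
            odd, rest = chips[-1], chips[:-1]
            if majority_says_good(odd, rest):
                return odd
            chips = rest                       # odd chip is bad: junk it
        survivors = []
        for a, b in zip(chips[0::2], chips[1::2]):     # pair all chips
            if a.report(b) == "good" and b.report(a) == "good":
                survivors.append(a)            # keep one chip of every G-G pair
            # B-G and B-B pairs are junked entirely
        return dc_find_good(survivors)         # recurse on at most n/2 chips

    if __name__ == "__main__":
        random.seed(1)
        chips = [Chip(True) for _ in range(7)] + [Chip(False) for _ in range(5)]
        random.shuffle(chips)
        print(naive_find_good(chips).is_good)  # True
        print(dc_find_good(chips).is_good)     # True

Counting each call to report as one test matches the model above: the naive method uses on the order of n^2 tests, while the divide-and-conquer version satisfies T(n) <= T(n/2) + O(n), i.e. O(n) tests in total.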