In an asymptotic analysis, we care more about the order of magnitude of a function than about its actual value. In terms of the abstract time of an algorithm, this should make intuitive sense. After all, we call it "abstract time" because we use "abstract" operations such as "number of math operations" in f(n), or "number of comparisons" in f(n). In addition, we don't know exactly how long each operation might actually take on a particular computer. Intuitively, the order of magnitude appeals to our sense that n² is a faster-growing function than a linear function like n.
To describe the order of magnitude of a function, we use big-O notation. If we had an algorithm that did 7n⁴ + 35n³ - 19n² + 3 operations, its big-O notation would be O(n⁴). If we had an algorithm that did 2n + 5 operations, the big-O notation would be O(n). Pretty simple, right?
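To see why only the leading term survives, here is a quick numerical sketch in Python, using the hypothetical operation count 7n⁴ + 35n³ - 19n² + 3 from the example above:

```python
def ops(n):
    # Hypothetical operation count from the example above.
    return 7 * n**4 + 35 * n**3 - 19 * n**2 + 3

# As n grows, the ratio of the full count to its leading term, 7n^4,
# approaches 1: the lower-order terms become negligible. This is the
# intuition behind dropping them (and the constant 7) to get O(n^4).
for n in [10, 100, 1000]:
    print(n, ops(n) / (7 * n**4))
```

For n = 1000 the ratio is already within about half a percent of 1.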
We can formalize what it means for a function to be the big-O of another: g(n) ∈ O(f(n)) if and only if there exist constants c > 0 and n₀ ≥ 1 such that g(n) ≤ c·f(n) for all n > n₀.
Now in English: a function g(n) is in the class of functions of the order f(n) if, and only if, we can multiply f(n) by some constant c, ignore all n below some constant n₀, and have c·f(n) be at least as large as g(n) for every n > n₀.
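As a concrete sketch, we can check the earlier 2n + 5 example against this definition. The witnesses c = 3 and n₀ = 5 are one valid choice among many (any larger c or n₀ works too):

```python
# Claim: g(n) = 2n + 5 is in O(n), i.e. g(n) <= c * f(n) for all n > n0.
# With c = 3 and n0 = 5: 2n + 5 <= 3n exactly when n >= 5.
c, n0 = 3, 5

def g(n):
    return 2 * n + 5

def f(n):
    return n

# Spot-check the inequality over a large range of n past n0.
assert all(g(n) <= c * f(n) for n in range(n0, 10_000))
```

Note that c = 2 alone would not work for any n₀, since 2n + 5 > 2n everywhere; the constant has to absorb the +5 term.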
That might sound confusing, but it is actually pretty straightforward, and you'll get the hang of it soon enough. In practice, we run into a few basic big-Os (there are of course infinitely many others, but you will see these most frequently):
1. O(1) - constant time
2. O(log n) - logarithmic time
3. O(n) - linear time
4. O(n log n) - linearithmic time
5. O(nᶜ) - polynomial time
6. O(cⁿ) - exponential time
7. O(n!) - factorial time
When comparing functions using big-O notation, think about very large n. For example, O(n²) grows faster than O(n), and O(cⁿ) grows faster than O(nᶜ) for any constant c > 1.
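A small Python check illustrates the exponential-versus-polynomial comparison: 2ⁿ eventually overtakes n¹⁰, even though n¹⁰ is much larger for small n (the exponent 10 here is just an arbitrary illustration):

```python
# Polynomial n**10 vs. exponential 2**n: "think about very large n".
n_small, n_big = 10, 100

# For small n, the polynomial dominates...
assert n_small**10 > 2**n_small   # 10^10 vs 2^10 = 1024

# ...but for large enough n, the exponential always wins.
assert n_big**10 < 2**n_big       # 10^20 vs roughly 1.27 * 10^30
```

This is why big-O comparisons ignore what happens for small inputs: only the eventual growth rate matters.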