Computer Science GRE Study Guide, Algorithmic design techniques

Table of Contents

III. Theory and Mathematical Background – 40%

A.    Algorithms and complexity

2.      Algorithmic design techniques (e.g., greedy, dynamic programming, divide and conquer)

a.       Definition: There is no mechanical procedure for inventing an algorithm, so we rely on general design strategies (paradigms) that guide us when constructing new algorithms.

b.      Divide-and-conquer is a recursive technique: divide the problem into smaller sub-problems, recurse until each sub-problem reaches a base case that can be solved directly, then combine the sub-problem solutions to form the answer. Merge sort is the classic example (see the sketch below).
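
A minimal divide-and-conquer sketch in Python, using merge sort; the function names merge_sort and merge are illustrative choices, not from the guide:

```python
def merge_sort(items):
    """Sort a list by divide-and-conquer: O(n log n) time."""
    if len(items) <= 1:               # base case: trivially sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide: solve the left half
    right = merge_sort(items[mid:])   # divide: solve the right half
    return merge(left, right)         # combine the sub-solutions

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])           # append whatever remains
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 9, 1, 7]))    # [1, 2, 5, 7, 9]
```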

c.       Dynamic programming solves a problem by breaking it into overlapping sub-problems, solving each sub-problem only once, and storing (memoizing) the results for reuse. It applies when the problem has optimal substructure: an optimal solution is built from optimal solutions to its sub-problems, so once a sub-problem is solved, you never need to revisit how you got there. See the memoized sketch below.
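
A minimal dynamic-programming sketch in Python: a memoized Fibonacci function (my example, not the guide's; functools.lru_cache does the storing):

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # store each result so it is computed only once
def fib(n):
    """nth Fibonacci number via memoized recursion.

    Naive recursion recomputes sub-problems and takes O(2^n) time;
    storing each sub-result makes this O(n) time and O(n) space.
    """
    if n < 2:                           # base cases: fib(0)=0, fib(1)=1
        return n
    return fib(n - 1) + fib(n - 2)      # reuse stored sub-results

print(fib(40))  # 102334155
```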

d.      A greedy algorithm always makes the choice that looks best at the moment (the locally optimal choice) and never reconsiders it. Because it ignores future consequences, a greedy algorithm is fast but only produces a globally optimal answer for some problems, as the sketch below shows.
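
A minimal greedy sketch in Python, using coin changing (a hypothetical example, not from the guide); the second call shows how ignoring future consequences can cost optimality:

```python
def greedy_change(amount, coins):
    """Make change greedily: always take the largest coin that fits.

    Optimal for "canonical" coin systems such as US coins (25, 10, 5, 1),
    but not for every coin system, because greedy never reconsiders.
    """
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]: optimal
print(greedy_change(6, [4, 3, 1]))        # [4, 1, 1]: but [3, 3] is better
```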

3.      Upper and lower bounds on the complexity of specific problems

a.       Definition: Complexity refers to how the time or memory (space) required to solve a problem grows with the size of the input.

b.      The time complexity of an algorithm describes how its running time grows with the input size. The space complexity of an algorithm describes how its memory use grows with the input size. The two often trade off against each other, as in the sketch below.
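
A small Python illustration of the time/space trade-off; the duplicate-checking functions are hypothetical examples. The first spends memory to save time; the second does the opposite:

```python
def has_duplicate_fast(items):
    """O(n) time, O(n) space: a set trades memory for speed."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

def has_duplicate_small(items):
    """O(n^2) time, O(1) space: no extra memory, but slower."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate_fast([3, 1, 4, 1]))   # True
print(has_duplicate_small([3, 1, 4]))     # False
```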

c.       We typically measure the upper bound of time complexity with Big-O notation.

d.      The formal definition of Big-O notation: given two functions f and g (with domain the natural numbers and range the real numbers), f(n) = O(g(n)) if there exist a positive constant c and a threshold n0 such that f(n) ≤ c·g(n) for all n ≥ n0. In other words, some multiple of g(n) eventually dominates f(n). Keep in mind that the constant c does not matter when we express complexity (i.e., drop all constant factors).
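
For example, f(n) = 3n^2 + 10n is O(n^2): take c = 4 and n0 = 10. Whenever n ≥ 10 we have 10n ≤ n^2, so 3n^2 + 10n ≤ 4n^2 = c·g(n).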

e.       We express a lower bound on time complexity with the notation f(n) = Ω(g(n)). This means that, for all sufficiently large n, the function f takes at least as long as some positive multiple of g(n).

f.       We express a tight bound on time complexity with the notation f(n) = Θ(g(n)). This means that f(n) is bounded above and below by multiples of g(n) for all sufficiently large n; equivalently, f(n) = O(g(n)) and f(n) = Ω(g(n)) at the same time. Note that Θ is a tight bound on growth, not an average case.

g.      For space complexity, we use the same notation, but we usually state explicitly that we mean space rather than time. For example, writing “f(n) = O(log n) space” means that the algorithm uses logarithmic memory.

h.      Let us order common complexities, from shortest running time to longest:

O(1): constant time
O(log n): logarithmic time
O(n): linear time
O(n log n): "n log n" (linearithmic) time
O(n^2): quadratic time
O(n^x): polynomial time (for a constant exponent x)
O(x^n) (x > 1): exponential time
O(n!): factorial time
i.        Imagine you have a sorted array of 1000 numbers. How would you find a specific element in that array? A sequential search is the easy choice: on average it examines half the array, which is still Θ(n) time, and in the worst case it examines every element (to find the last element, you must look at all of them), which is O(n) time. A binary search instead compares the target with the middle element. If the middle element is larger than the target, repeat the search on the left half; if it is smaller, repeat on the right half; stop when you find the target or the remaining range is empty. Each step halves the search range, so the result is Θ(log n), and in particular O(log n), time. A sketch appears below.
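
A minimal iterative binary search in Python (the name binary_search and the test values are my own):

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each comparison halves the search range: O(log n) time, O(1) space.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # target can only be in the right half
        else:
            high = mid - 1   # target can only be in the left half
    return -1                # range is empty: target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```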
