Read Sipser, CLRS, and SICP.
>>3 is an idiot who can't into theoretical CS
To prove the time efficiency of an algorithm, first break it down into pseudocode, or better yet, a very high-level description.
For worst-case time complexity, give an argument about the worst-case scenario, and try to phrase the argument so it generalizes over all inputs to the algorithm. If you want to be really formal, talk about loop invariants (google it or read CLRS). Ultimately you'll be reasoning about loops and loop depth, and the complexity of the innermost loop. E.g. it is easy to show that bubble sort is O(n^2) because it has a double for loop and the operation inside is constant time. If your algorithm calls other algorithms, it suffices to treat each one as a black box and account for its cost via its own time complexity.
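The bubble sort argument can be sketched in Python (my choice of language; the original post has no code). The nested loops are the whole complexity story:

```python
def bubble_sort(a):
    """Sort a list in place; the nested loops give the O(n^2) bound."""
    n = len(a)
    for i in range(n):                  # outer loop: n passes
        for j in range(n - 1 - i):      # inner loop: at most n-1 comparisons
            if a[j] > a[j + 1]:         # constant-time compare-and-swap
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```

The outer loop runs n times, the inner loop at most n - 1 times per pass, and each inner iteration does constant work, so the total is O(n^2).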
For example:
We define BINARY SEARCH to take a sorted array A and an element X, and return the position of X in A, or -1 if X is not found.
Main loop:
- If A.length = 0, return -1
- Compute the floor of A.length / 2 and call the element at that index Y
- If X = Y, return the position of Y
- If X > Y, set A = A[floor(A.length/2) + 1 -> A.length]
- Else if X < Y, set A = A[0 -> floor(A.length / 2)]
- Return to the main loop.
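The pseudocode above translates to a short Python sketch. I track index bounds (lo, hi) instead of slicing the array, which keeps the same halving behavior without copying subarrays:

```python
def binary_search(a, x):
    """Return an index of x in sorted sequence a, or -1 if absent."""
    lo, hi = 0, len(a)            # search the half-open range a[lo:hi]
    while lo < hi:                # empty range means x is not present
        mid = (lo + hi) // 2      # floor of the midpoint
        y = a[mid]
        if x == y:
            return mid
        elif x > y:
            lo = mid + 1          # keep the upper subarray, excluding y
        else:
            hi = mid              # keep the lower subarray
    return -1                     # range emptied without finding x
```

Example: `binary_search([1, 3, 5, 7], 5)` returns 2, and `binary_search([1, 3, 5, 7], 4)` returns -1.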
Correctness:
Here you show that if X is not in A, BINARY SEARCH terminates and returns -1, and if X is in A, it terminates and returns the right position. You have to include this step in a formal proof; otherwise your algorithm could be something along the lines of "spin for O(log(n)) steps, sleep, spin for O(log(n)) more steps, and return 0", and you'd be able to prove the time complexity, but it wouldn't mean anything.
First, the program must terminate. The operations inside the loop each have a fixed cost, and the length of A must eventually reach 0: floor(N/2) is no larger than N/2, so after t iterations the length is at most N/2^t, which is eventually less than 1, at which point the next floor will be 0 and the loop stops.
Now, since the program checks for equality directly before returning a position, it is obvious that if X isn't in A, it cannot return a false positive. Showing that it finds every member that is present isn't as easy, but it's not that bad: start from the assumption that X is in fact in A (which is given), and assert that each subarray the loop picks must still contain X (this is the loop invariant). Couple that with the termination fact above, and you get that the element will eventually be found.
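You can also sanity-check both correctness claims mechanically, by brute force over every sorted array drawn from a small universe. A self-contained sketch (it redefines the search so it runs on its own):

```python
import itertools

def binary_search(a, x):
    """Standard binary search over index bounds; -1 if x is absent."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if x == a[mid]:
            return mid
        elif x > a[mid]:
            lo = mid + 1
        else:
            hi = mid
    return -1

# Exhaustive check: no false positives for absent x,
# and every present x is actually found.
for n in range(6):
    for a in itertools.combinations(range(8), n):  # sorted tuples
        for x in range(8):
            i = binary_search(a, x)
            if x in a:
                assert a[i] == x   # present -> correct element returned
            else:
                assert i == -1     # absent -> -1, never a false positive
print("all cases pass")
```

This isn't a proof, but it's a cheap way to catch an off-by-one in the subarray bounds before you try to prove anything.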
Time complexity:
So we've talked about termination but not termination speed. When you divide a number N by 2 and take the floor, the resulting array has size no larger than N/2. Formally, you'd use the master theorem: assert that T(n) = T(n/2) + O(1), which gives the O(log(n)) answer directly. Alternatively, you can just argue that halving at every step makes the size of the array <= A.length * (1/2)^t after t steps, and then do some algebra to show that the total time is logarithmic in A.length.
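The halving argument is easy to see numerically: count how many times you can halve N before it hits 0, and compare with floor(log2(N)) + 1 (a small sketch, not part of any proof):

```python
import math

def worst_case_steps(n):
    """Iterations of the search loop when x is absent (worst case)."""
    size, steps = n, 0
    while size > 0:
        size //= 2          # each pass keeps at most floor(size/2) elements
        steps += 1
    return steps

for n in [1, 10, 1000, 10**6]:
    # step count tracks floor(log2(n)) + 1 exactly
    print(n, worst_case_steps(n), math.floor(math.log2(n)) + 1)
```

For n = 1000 this gives 10 steps, i.e. floor(log2(1000)) + 1, which is exactly the O(log(n)) bound the algebra predicts.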
I hope this helps; correct me if I'm wrong. It's what I learned at uni last year, and I wrote this from memory.