If you're going to beg for homework help on the Internet you could at least be less of a demanding little shit about it.
Name:
eye 2012-04-04 16:59
>>11
Just demonstrating how full of shit you are, yet again. If this were homework, this would be the last place I'd go unless I wanted to compile something that would destroy my computer.
import random

def findthree(array):
    minimum = None
    solution = []
    for i, n in enumerate(array):
        # Keep looking for an array element larger than the last one taken.
        if minimum is None or n > minimum:
            minimum = n
            solution.append(i)
            if len(solution) >= 3:
                return solution
    return None

for i in range(10):
    array = [random.randrange(1, 100) for i in range(10)]
    print array, '==>', findthree(array)
Indeed, but in the modern reality of rapid deployments developer time is paramount. Picking the naïvest solution possible is A-OK if it saves precious time that would have otherwise been wasted thinking.
I think I have a good O(n) algorithm. Can anyone break it?
Basically I compensate for faux lists by these assumptions, given our x,y,z tuple as the state:
* Given a number smaller than the last value taken, it's ALWAYS okay to reset x to that value. Before resetting, we save the old value of x as x_earlier.
* We're then guaranteed to get another number greater than the reset x, if indeed there is one remaining in the list; if not, the list was invalid anyway.
* If at the end of the list we only have x and y populated, no big deal: we rearrange our tuple to (x_earlier, x, y), giving our x, y, z values.
function indices(array) {
    var x_earlier = null;
    var x = null;
    var y = null;
    var z = null;
    for (var i = 0; i < array.length; i++) {
        if (x != null && y != null && z != null) break;
        if (x == null) {
            x = array[i];
        } else if (y == null) {
            if (array[i] > x) {
                y = array[i];
            } else if (array[i] < x) {
                x_earlier = x;
                x = array[i];
            }
        } else if (z == null) {
            if (array[i] > y) {
                z = array[i];
            } else if (array[i] < y) {
                x_earlier = x;
                x = array[i];
                y = null;
            }
        }
    }
    var result = [];
    if (x != null) result.push(x);
    if (y != null) result.push(y);
    if (z != null) result.push(z);
    if (result.length == 2) result.unshift(x_earlier);
    if (result.length != 3) return "NO";
    return result;
}
[5, 6, 1, 2, 3] → 1,2,3 because we reset x to 1.
[1, 5, 2, 3] → 1,2,3, because we finish at [2,3] and then unshift 1 back to the front of our tuple.
[5, 6, 1, 2, 7] → 1,2,7, because we reset x to 1. Both 5,6 and 1,2 are valid preceding pairs for 7.
[5, 6, 1, 2, 7, 8] → 1,2,7, because we reset at 1 and finish at 7.
[1, 2, 999, 3] → 1,2,999
[999, 1, 2, 3] → [1,2,3] because we reset at 1
[11,12,8,9,5,6,3,4,1,2,3] → [1,2,3] because we reset at 8, 5, 3, 1.
I just realised that I should throw an error at the unshift line to ensure I don't unshift larger numbers. It'd be an invalid list anyway, but for the sake of helpfulness to the programmer.
Not sure I can come up with a proof for this, I suck at proofs. Anyone want to make a proof for me?
>>35
I didn't say anything about proof. Learn to read, son. However, I do claim my algorithm is excellent, that I win, and that I'm better than you, because you cannot provide a counterexample.
Name:
Anonymous 2012-04-05 16:15
I've actually found a counterexample. But I'm not going to tell you.
Name:
Stolas 2012-04-05 19:15
Python version
i, j, k = 0, 0, 0
for l in range(0, len(array)):
    if array[k] < array[l]:
        i = j
        j = k
        k = l
We only go over n elements doing constant work, so it takes O(n) time, and we only update the previous elements when we find a new value for k as the index increases, thus fulfilling the requirement that i<j<k and array[i]<array[j]<array[k].
Name:
Stolas 2012-04-05 19:20
Just looked over the thread and yeah that algorithm doesn't take into account having the largest value up front.
I don't feel like actually coding it, but I think I've got a solution.
Split the list into a list of increasing lists: [1,2,3,2,3,4] => [[1,2,3], [2,3,4]]
[1,3,2,4,3,5] => [[1,3], [2,4], [3,5]]
[5,4,3,2,1] => [[5], [4], [3], [2], [1]]
If a list's last value is greater than the previous list's last value, remove all elements from the beginning of the list that are less than or equal to the last element in the previous list. [[1,2,3], [2,3,4]] => [[1,2,3], [4]]
[[4], [1,2,3]] => [[4], [1,2,3]] (don't do anything because 3 < 4)
Repeat the first step.
If any sublist's length is at least 3, there's your answer.
After writing all that out, I just realized that you want the indices and not the values. Store each element as (i,v) where i is the index and v is the value, and make all comparisons between v.
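Putting the steps together as a rough Python sketch (the find_triple name and loop structure are mine, so treat it as an illustration of the idea rather than a tested implementation):

```python
def find_triple(array):
    # Sketch of the split-and-trim idea using (index, value) pairs,
    # so comparisons use values but we can report indices at the end.
    if len(array) < 3:
        return None
    seq = list(enumerate(array))
    while True:
        # Step 1: split into maximal strictly increasing runs.
        runs = [[seq[0]]]
        for p in seq[1:]:
            if p[1] > runs[-1][-1][1]:
                runs[-1].append(p)
            else:
                runs.append([p])
        # If any run has length >= 3, its first three indices are an answer.
        for r in runs:
            if len(r) >= 3:
                return [i for i, v in r[:3]]
        # Step 2: where a run ends above the previous run's last value,
        # drop its leading elements that are <= that last value.
        new_seq = list(runs[0])
        changed = False
        for prev, r in zip(runs, runs[1:]):
            if r[-1][1] > prev[-1][1]:
                kept = [p for p in r if p[1] > prev[-1][1]]
                changed = changed or len(kept) < len(r)
                new_seq.extend(kept)
            else:
                new_seq.extend(r)
        if not changed:
            return None  # nothing left to trim; report failure
        seq = new_seq  # repeat the first step on the trimmed sequence
```

For example, [1,3,2,4,3,5] splits into [[1,3],[2,4],[3,5]], trims to 1,3,4,5, and the re-split finds the run 1 < 3 < 4 at indices [0, 1, 3].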
Although the merge step would need to be O(1), assuming every block is merged as normal. And if the merging can be done in O(1) regardless of the sizes of the two blocks, then a straight iteration through the array would work fine.
Like, the maximum function can be written:
int max(int* arr, int len) {
    if (len == 1)  /* base case: a single element is its own maximum */
        return arr[0];
    int mid = len / 2;
    int max_left = max(arr, mid);
    int max_right = max(arr + mid, len - mid);
    if (max_left < max_right)
        return max_right;
    else
        return max_left;
}
Each recursive call produces a single value, the maximum int observed in that sub array. The procedure for merging the result of two sub arrays is to compare the two maximum values and return the larger, an O(1) operation. But the cost of the merge does not depend on the lengths of the two sub arrays, so there isn't a good reason to keep the lengths balanced, and the normal method of collecting the max by iterating through the array will be more efficient and have the same asymptotic running time.
>>43
The first and last steps definitely are. I'm not 100% sure about the second, but I think it's O(n). It looks at each element of the sublists at most one time.
> If a list's last value is greater than the previous list's last value, remove all elements from the beginning of the list that are less than or equal to the last element in the previous list.
Construct a B-tree out of the array that follows the following three rules:
1) Leaves added to the left of the parent denote "less than the value of the parent;" leaves to the right of the parent denote "greater than the value of the parent."
2) Duplicate values, when they are encountered, are ignored and are not added.
3) Maintain relative depth under the condition that whenever a leaf is added to the left, its relative depth is (re)set to 1 and whenever it is added to the right of the parent it is set to the parent's depth plus one.
This final rule means that once you have inserted a node that gets depth 3, you have found a chain (parent -> right child -> right grandchild) of the array's elements that ascend in value and have indices of ascending value. Maintain each parent's array index on its node. If you never reach a depth of 3, the array has no such organization of elements that would satisfy the conditions.
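As a sketch, the insertion procedure with the depth rule might look like this in Python (Node and find_chain are my names; no balancing, exactly as described):

```python
class Node:
    def __init__(self, value, index, depth, parent=None):
        self.value, self.index, self.depth = value, index, depth
        self.parent = parent
        self.left = self.right = None

def find_chain(array):
    # Insert each element into an unbalanced tree, tracking depth
    # per the three rules; report indices on a depth-3 right-chain.
    if not array:
        return None
    root = Node(array[0], 0, 1)
    for i in range(1, len(array)):
        v = array[i]
        node = root
        while True:
            if v == node.value:            # rule 2: duplicates are ignored
                break
            if v < node.value:
                if node.left is None:      # rule 3: left insert resets depth to 1
                    node.left = Node(v, i, 1, node)
                    break
                node = node.left
            else:
                if node.right is None:     # rule 3: right insert is parent depth + 1
                    child = Node(v, i, node.depth + 1, node)
                    node.right = child
                    if child.depth >= 3:
                        # parent -> right child -> right grandchild
                        return [child.parent.parent.index,
                                child.parent.index, child.index]
                    break
                node = node.right
    return None
```

Note that as stated it answers no for [5, 3, 4, 6] even though 3, 4, 6 qualifies: the 6 branches right at the root and never meets the 3 -> 4 chain on the left side.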
>>49
That's interesting, but it's O(n^2). Consider running it on:
[13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 0, 1, 2]
But that's still a good improvement over the O(n^3) solution. There might be some error in it, though. Because of the insertion order and no balancing, the parent elements will always have an array index less than their children's. But the array-index relationship among siblings is lost. It could be that the solution consists of three cousins, rather than a node, its right child, and its right grandchild.
Name:
Anonymous 2012-04-06 23:26
i don't think this is possible to do in O(n) unless the list is sorted
Name:
Anonymous 2012-04-06 23:39
>>52
Ah, I think I see the problem you imply. Input like [5, 3, 4, 6] would fail with the B-tree as stated: the solution is thwarted by the last value's branch decision at the root node. A partial solution is no good; I need to rethink this.
Is the cost of the B-tree insertion the extra level of complexity I didn't account for in the big-O time?
Name:
Anonymous 2012-04-06 23:55
>>54
yeah, if you want to insert n nodes into a binary search tree without any balancing, the worst case performance will be O(n^2). It happens whenever the original input is nearly already sorted, or nearly perfectly in reverse order. Every insertion will almost always go all the way to the bottom of the tree, and it degenerates to a singly linked list.
I think you were talking about a binary search tree. There is another tree called a B-tree that is an ordered tree specialized for storage on a hard disk. I don't think this is what you meant though.
There is the O(n log n) algorithm mentioned earlier in the thread for finding the longest increasing subsequence. If the sequence generated has at least three elements, then you are set. But there might be an even faster algorithm, since you only need to know whether there exists an increasing subsequence of length at least three.
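Something in that direction: the patience-sorting LIS trick collapses nicely here, because the tails array never needs more than three entries before we can answer yes. A Python sketch (the function name is mine):

```python
import bisect

def has_increasing_triple(array):
    # tails[k] is the smallest value ending some increasing
    # subsequence of length k+1 seen so far (patience sorting).
    tails = []
    for v in array:
        pos = bisect.bisect_left(tails, v)
        if pos == len(tails):
            tails.append(v)
        else:
            tails[pos] = v
        # We only care about length 3, so stop as soon as we get there.
        if len(tails) >= 3:
            return True
    return False
```

Since tails is capped at three elements, each bisect is constant-time and the whole thing is O(n), answering the existence question without computing the full LIS.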
(define (for-range-fn low high parameterized-body)
  (if (< low high)
      (begin (parameterized-body low)
             (for-range-fn (+ low 1) high parameterized-body))))

(define (blah array)
  (define min 999999999999)
  (define max 0)
  (define min+1 999999999999)
  (define i #f)
  (define j #f)
  (define k #f)
  (for-range-fn 0 (vector-length array) (lambda (x)
    (define current (vector-ref array x))
    (cond
      ((< current min)
       (display `(min goes from ,min to ,current)) (newline)
       (set! min current)
       (set! i x))
      ((and (> current min) (< current min+1))
       (display `(min+1 goes from ,min+1 to ,current)) (newline)
       (set! min+1 current)
       (set! j x))
      ((> current max)
       (display `(max goes from ,max to ,current)) (newline)
       (set! max current)
       (set! k x)))))
  (values i j k))
(min goes from 999999999999 to 999)
(min goes from 999 to 1)
(min+1 goes from 999999999999 to 998)
(max goes from 0 to 1)
(max goes from 1 to 1000)
(min+1 goes from 998 to 7)
(min+1 goes from 7 to 2)
1
6
4
good try though. I'm sure something similar to this could work.