I don't feel like actually coding it, but I think I've got a solution.
Split the list into a list of increasing lists: [1,2,3,2,3,4] => [[1,2,3], [2,3,4]]
[1,3,2,4,3,5] => [[1,3], [2,4], [3,5]]
[5,4,3,2,1] => [[5], [4], [3], [2], [1]]
If a list's last value is greater than the previous list's last value, remove all elements from the beginning of the list that are less than or equal to the last element in the previous list. [[1,2,3], [2,3,4]] => [[1,2,3], [4]]
[[4], [1,2,3]] => [[4], [1,2,3]] (don't do anything because 3 < 4)
Repeat the first step.
If any sublist's length is at least 3, there's your answer.
After writing all that out, I just realized that you want the indices and not the values. Store each element as (i,v) where i is the index and v is the value, and make all comparisons between v.
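The first and last steps above can be sketched quickly. This is a minimal illustration only (the merge/prune step in the middle is omitted, and all names are my own), so by itself it only catches contiguous answers:

```c
/* Sketch of the first and last steps only: split the array into
   maximal strictly increasing runs and report whether any run
   already has length >= 3. The merge step described above is
   omitted, so this alone only finds contiguous triples. */
int initial_pass(const int *a, int n) {
    int run = 1; /* length of the current increasing run */
    for (int x = 1; x < n; x++) {
        run = (a[x] > a[x - 1]) ? run + 1 : 1;
        if (run >= 3)
            return 1; /* e.g. [1,2,3,2,3,4] -> run [1,2,3] */
    }
    return 0;
}
```

On [1,3,2,4,3,5] this first pass alone finds nothing (every run has length 2), which is exactly where the merge step would come in.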
although the merge step would need to be O(1), assuming every block is merged as normal. And if the merging can be done in O(1) regardless of the sizes of the two blocks, then a straight iteration through the array would work fine.
Like, the maximum function can be written:
int max(int* arr, int len) {
    if (len == 1)  /* base case; without it the recursion never terminates */
        return arr[0];
    int mid = len / 2;
    int max_left = max(arr, mid);
    int max_right = max(arr + mid, len - mid);
    if (max_left < max_right)
        return max_right;
    else
        return max_left;
}
Each recursive call produces a single value, the maximum int observed in that sub array. The procedure for merging the result of two sub arrays is to compare the two maximum values and return the larger, an O(1) operation. But the cost of the merge does not depend on the lengths of the two sub arrays, so there isn't a good reason to keep the lengths balanced, and the normal method of collecting the max by iterating through the array will be more efficient and have the same asymptotic running time.
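For comparison, the straight iteration just described — each element is "merged" into the running maximum with a single O(1) comparison, no recursion overhead:

```c
/* The plain linear scan: one comparison per element, O(n) total.
   Assumes len >= 1. */
int max_iterative(const int *arr, int len) {
    int best = arr[0];
    for (int x = 1; x < len; x++)
        if (arr[x] > best)
            best = arr[x];
    return best;
}
```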
>>43
The first and last steps definitely are O(n). I'm not 100% sure about the second, but I think it's O(n) too, since it looks at each element of the sublists at most once.
> If a list's last value is greater than the previous list's last value, remove all elements from the beginning of the list that are less than or equal to the last element in the previous list.
Construct a B-tree out of the array that follows the following three rules:
1) Leaves added to the left of the parent denote "less than the value of the parent;" leaves to the right of the parent denote "greater than the value of the parent."
2) Duplicate values, when they are encountered, are ignored and are not added.
3) Maintain relative depth under the condition that whenever a leaf is added to the left, its relative depth is (re)set to 1 and whenever it is added to the right of the parent it is set to the parent's depth plus one.
This final rule means that once you have inserted a node that gets depth 3, you have found a numeric organization (parent -> right child -> right child) of the array's elements that are in ascending value and have indices in ascending order. Maintain each parent's index on its node. If you never reach a depth of 3, the array has no such organization of elements that would satisfy the conditions.
>>49
That's interesting, but it's O(n^2). Consider running it on:
[13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 0, 1, 2]
But that's still a good improvement over the O(n^3) solution. There might be an error in it, though. Because of the insertion order and the lack of balancing, a parent will always have an array index less than its children's. But the index relationship among siblings is lost: the solution could consist of three cousins, rather than a node, its right child, and its right grandchild.
Name:
Anonymous 2012-04-06 23:26
I don't think this is possible to do in O(n) unless the list is sorted.
Name:
Anonymous 2012-04-06 23:39
>>52
Ah, I think I see the problem you imply. Input like [5, 3, 4, 6] would fail with the B-tree as stated: the last value's branch decision is made against the root node (6 goes to the right of 5 with depth 2), so the increasing triple 3, 4, 6 is never detected. A partial solution is no good; I need to rethink this.
Is the B-tree insertion cost the extra level of complexity I didn't account for in the big-O time?
Name:
Anonymous 2012-04-06 23:55
>>54
Yeah, if you want to insert n nodes into a binary search tree without any balancing, the worst-case performance is O(n^2). That happens whenever the original input is already nearly sorted, or nearly in perfect reverse order: almost every insertion goes all the way to the bottom of the tree, and the tree degenerates into a singly linked list.
I think you were talking about a binary search tree. There is another tree called a B-tree that is an ordered tree specialized for storage on a hard disk. I don't think this is what you meant though.
There is the O(n log n) algorithm mentioned earlier in the thread for finding the longest increasing subsequence. If the sequence it produces has at least three elements, you are set. But there might be an even faster algorithm, since you only need to know whether there exists an increasing subsequence of length at least three.
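A sketch of that O(n log n) approach, using the standard "tails" formulation (the names here are mine, not from earlier in the thread): tails[k] holds the smallest value that can end a strictly increasing subsequence of length k+1 seen so far.

```c
/* Returns the length of the longest strictly increasing subsequence.
   Each element either extends the longest subsequence found so far or
   improves one of the tails, located by binary search: O(n log n). */
int lis_length(const int *a, int n) {
    int tails[n > 0 ? n : 1]; /* C99 VLA, fine for a sketch */
    int len = 0;              /* number of tails currently in use */
    for (int x = 0; x < n; x++) {
        int lo = 0, hi = len;
        while (lo < hi) { /* find the first tail >= a[x] */
            int mid = lo + (hi - lo) / 2;
            if (tails[mid] < a[x]) lo = mid + 1;
            else hi = mid;
        }
        tails[lo] = a[x];
        if (lo == len) len++; /* extended the longest subsequence */
    }
    return len;
}
```

Then checking lis_length(...) >= 3 answers the question, though as noted, a specialized length-3 check could be faster still.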
(define (for-range-fn low high parameterized-body)
  (if (< low high)
      (begin (parameterized-body low)
             (for-range-fn (+ low 1) high parameterized-body))))

(define (blah array)
  (define min 999999999999)
  (define max 0)
  (define min+1 999999999999)
  (define i #f)
  (define j #f)
  (define k #f)
  (for-range-fn 0 (vector-length array)
    (lambda (x)
      (define current (vector-ref array x))
      (cond
        ((< current min)
         (display `(min goes from ,min to ,current)) (newline)
         (set! min current)
         (set! i x))
        ((and (> current min) (< current min+1))
         (display `(min+1 goes from ,min+1 to ,current)) (newline)
         (set! min+1 current)
         (set! j x))
        ((> current max)
         (display `(max goes from ,max to ,current)) (newline)
         (set! max current)
         (set! k x)))))
  (values i j k))
(min goes from 999999999999 to 999)
(min goes from 999 to 1)
(min+1 goes from 999999999999 to 998)
(max goes from 0 to 1)
(max goes from 1 to 1000)
(min+1 goes from 998 to 7)
(min+1 goes from 7 to 2)
1
6
4
good try though. I'm sure something similar to this could work.
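I think the approach above can be repaired: instead of a global max, track the smallest value that has some smaller value before it, and remember the index of that smaller partner at the moment it was recorded. A hedged sketch of that idea in C (all names are my own):

```c
/* Single pass, O(n) time, O(1) space. lo tracks the running minimum;
   mid tracks the smallest value seen so far that has a smaller value
   before it; lo_before_mid remembers that smaller partner's index.
   The first value bigger than a[mid] completes an increasing triple. */
int find_triple(const int *a, int n, int *i, int *j, int *k) {
    int lo = -1, mid = -1, lo_before_mid = -1;
    for (int x = 0; x < n; x++) {
        if (mid >= 0 && a[x] > a[mid]) {
            *i = lo_before_mid; *j = mid; *k = x;
            return 1; /* a[*i] < a[*j] < a[*k] with *i < *j < *k */
        }
        if (lo < 0 || a[x] < a[lo]) {
            lo = x; /* new running minimum */
        } else if (a[x] > a[lo] && (mid < 0 || a[x] < a[mid])) {
            mid = x;
            lo_before_mid = lo; /* the minimum that precedes mid */
        }
    }
    return 0; /* no strictly increasing triple exists */
}
```

On the counterexample from earlier in the thread, [5, 3, 4, 6], this returns indices 1, 2, 3 (values 3, 4, 6).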