Seems pretty interesting, I wish I had more time to get into it, seems like a worthy project to contribute to.
The code doesn't look half-bad for a mostly C++ project, but I think it suffers from too much object-orientation, and they're having problems trying to re-implement parts of it to run on OpenCL because of that. They should have focused more on making it data-oriented, and less on fine-grained OO.
And it's a HUGE code-base, so it's not trivial to just rewrite it with more data-oriented techniques.
I think in the end though, it's going to suffer from many of the same problems as OpenCV--bloat and slowness.
I tried using OpenCV for some stuff, and was able to put together a prototype for a commercial government project, but in the end I had to build my own image analysis and object-recognition kernel from scratch in C++, assembly and OpenCL to meet our performance requirements.
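The data-oriented vs. fine-grained-OO point above can be sketched in a toy way (all names here are made up for illustration; the actual win comes from cache behavior and batch kernels in C++/OpenCL, which Python can't show — only the shape of the two styles):

```python
# Fine-grained OO: one object per entity, logic scattered in methods.
# Hard to hand to an OpenCL kernel, which wants flat buffers.
class Blob:
    def __init__(self, x, y, brightness):
        self.x, self.y, self.brightness = x, y, brightness

    def brighten(self, k):
        self.brightness += k

def brighten_oo(blobs, k):
    for b in blobs:
        b.brighten(k)

# Data-oriented: one flat array per field; the "kernel" is a plain
# loop over contiguous data, trivially mapped to OpenCL work-items.
def brighten_dod(brightness, k):
    return [v + k for v in brightness]

blobs = [Blob(0, 0, 10), Blob(1, 1, 20)]
brighten_oo(blobs, 5)
print([b.brightness for b in blobs])   # [15, 25]
print(brighten_dod([10, 20], 5))       # [15, 25]
```

Same result either way, but only the second form ports to GPU code without first tearing the objects apart.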
Name:
Anonymous 2011-05-04 14:29
They would already have created strong AI had they spent more time on experiments than on writing papers, making stupid videos and building fancy sites.
>>3
It's obviously underfunded, and they do indeed write more prose than code.
I'm not even sure their approach is the right one, but I think it's worth pursuing. I'm more hopeful about those working on large-scale neuromorphic hardware; even though it's more of a hack, it has a higher chance of achieving something closer to us.
Name:
Anonymous 2011-05-04 14:43
>>4
People like them created the AI winter. Investors want to see working code, not sci-fi prose.
Name:
Anonymous 2011-05-04 14:45
>>4
>I'm more hopeful about those working on large-scale neuromorphic hardware; even though it's more of a hack, it has a higher chance of achieving something closer to us.
Wikipedia says they are using this "hypergraph" as their main data structure. This means they must manually hardcode most of the knowledge, instead of extracting it from the environment automatically.
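To make that concrete, here's a toy hypergraph store (hypothetical names, not any real project's API). A hyperedge links any number of nodes under a relation label — and notice that every fact below has to be typed in by hand, which is the complaint:

```python
# Toy hypergraph: nodes are concept names, hyperedges are
# (label, set-of-member-nodes) pairs of arbitrary arity.
class Hypergraph:
    def __init__(self):
        self.nodes = set()
        self.edges = []  # list of (label, frozenset of nodes)

    def add_edge(self, label, *members):
        # Adding an edge implicitly registers its member nodes.
        self.nodes.update(members)
        self.edges.append((label, frozenset(members)))

    def edges_with(self, node):
        # All hyperedges that include the given node.
        return [(l, m) for (l, m) in self.edges if node in m]

g = Hypergraph()
# Hand-encoded knowledge -- nothing here is learned from sensors.
g.add_edge("is-a", "cat", "mammal")
g.add_edge("eats", "cat", "mouse")
print(g.edges_with("cat"))  # both edges involve "cat"
```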
>>6
Hence my higher confidence that hardware based on the human/mammalian cortex will achieve general intelligence first.
It's not that I think mish-mash AGIs like theirs can't do it, it's just that learning is far harder without a real environment and a way to process, correlate and predict environmental data. Without a way to take data from any random environment and interact with it, the system is only a bit better than an expert system plus a theorem prover.
Name:
Anonymous 2011-05-04 15:02
>>7
AFAIK, primitive brute-force methods, like those used in video compression, work best on IRL problems. For example, instead of using complicated geometric methods for matching an object's angle and orientation, just store raw maps of previous interactions with the object, then compare against them in parallel.
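A minimal sketch of the "store raw maps, compare in parallel" idea: template matching by sum of absolute differences (SAD, the same metric block-matching video codecs use) against a bank of stored views. Toy 1-D "images" here; a real system would do this over 2-D patches, in parallel (e.g. one OpenCL work-item per stored map):

```python
def sad(a, b):
    # Sum of absolute differences between two equal-length raw maps.
    return sum(abs(x - y) for x, y in zip(a, b))

def best_match(observation, stored_maps):
    # Each stored map is (label, raw_pixels); return the label of the
    # closest one. Embarrassingly parallel over the map bank.
    return min(stored_maps, key=lambda m: sad(observation, m[1]))[0]

stored = [
    ("front", [10, 200, 200, 10]),
    ("side",  [200, 10, 10, 200]),
]
print(best_match([12, 190, 205, 8], stored))  # front
```

No geometry, no feature extraction — just raw memory and a distance metric, which is the brute-force point.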
Name:
Anonymous 2011-05-04 15:03
>>8
But this means you must already have "raw maps from previous interactions with the object". It's a chicken-and-egg problem.
>>8
At the higher levels of "abstraction", the brain works by storing and retrieving cached patterns of patterns of ... patterns (a bit less than 2 dozen layers in depth, with each layer being smaller). For example, you retain very tiny areas of visual patterns in the lowest layer (such as V1) -- think of them like 4x4 or 16x16 patches, although our brain isn't anything as simple, it's a lot more stochastic -- and then the detected patterns (both spatial and temporal) are passed to the next layer, which itself finds patterns and passes them on to the next, and so on, up until the top, your prefrontal cortex.

This isn't just for visual information; it happens for all senses, including touch, hearing and many others (including internal ones such as those coming from the reward and emotional systems). The patterns also unfold back down from upper layers to lower ones (thinking, imagining, inner voice, talking and other processes work like this).

This allows the system to cope with a lot of randomness and ever-changing (always unique) environments. However, all these environments will of course always have high-level patterns which can be understood and made use of (if no such patterns existed, it's unlikely a nervous system as advanced as the mammalian one would have evolved).
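The layered idea above can be sketched as follows (my own toy construction, not a model of real cortex): each layer slices its input into small patches, replaces each patch with the id of its nearest stored "pattern", and passes the id sequence up to the next layer, so higher layers see patterns of patterns:

```python
def nearest_pattern(patch, prototypes):
    # Index of the stored prototype closest to this patch.
    def dist(p):
        return sum(abs(a - b) for a, b in zip(patch, p))
    return min(range(len(prototypes)), key=lambda i: dist(prototypes[i]))

def layer(signal, prototypes, patch_size):
    # Slice the signal into patches and emit one pattern id per patch.
    ids = []
    for i in range(0, len(signal) - patch_size + 1, patch_size):
        ids.append(nearest_pattern(signal[i:i + patch_size], prototypes))
    return ids

# Layer 1: raw "pixels" -> low-level pattern ids (edges, say).
l1_protos = [[0, 0], [9, 9], [0, 9]]
# Layer 2: pairs of layer-1 ids -> higher-level pattern ids.
l2_protos = [[0, 0], [1, 2]]

raw = [0, 1, 8, 9, 0, 8, 9, 9]
ids1 = layer(raw, l1_protos, 2)    # [0, 1, 2, 1]
ids2 = layer(ids1, l2_protos, 2)   # [0, 1]
print(ids1, ids2)
```

Each layer's output is shorter and more abstract than its input, which is the "smaller layers toward the top" point; a real system would also learn the prototypes rather than hardcode them, and run the top-down unfolding as well.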
Name:
Anonymous 2011-05-04 15:31
>>10
Yep. Using a rigid mathematical graph structure to model this isn't the best idea. They should try using something like JPEG instead.