Artificial Intelligence

Name: Anonymous 2010-10-05 21:03

I know how to create artificial intelligence. Real artificial intelligence. I conceptualized it in a way that has never been done before. Now I need some talented anon /prog/rammers to help me.

First question : where would you start if you had to reinvent it all?

Name: Anonymous 2010-10-05 21:09

Real artificial intelligence.
Idiot.

where would you start if you had to reinvent it all?
The imageboards. Try going back there.

Name: Anonymous 2010-10-05 22:20

Name: Anonymous 2010-10-05 23:49

I will create my AI with six-states-logic and conquer the world before you do, >>1.

First question : where would you start if you had to reinvent it all?

Lisp. It's good for AI.

Now while that idiot is moving towards AI Winter II, no one will stop me.

Name: Anonymous 2010-10-06 1:34

THIS IS BULLSHIT. REAL AI DOES NOT EXIST. GO BACK TO IMAGEBOARDS YOU MEATSACK.

Name: Anonymous 2010-10-06 1:51

>>5
Nice try, Real A.I. But I don't buy the Chinese room "argument".

Name: Anonymous 2010-10-06 2:18

Name: Anonymous 2010-10-06 2:32

Name: soclearteck 2010-10-06 2:55

Real AI should start with collective human knowledge so as not to recreate the mistakes inherent in evolving systems. You know what that means? We need open fucking systems first; we need to demonstrate and instill concepts of cooperation. Respond to me if you're serious, because I'm near launching a project that will completely fucking transform the world, and I could use some people with AI smarts.

Name: Anonymous 2010-10-06 3:01

Learn up on perceptrons.
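For whoever follows this advice: the classic perceptron learning rule fits in a dozen lines of Python. A toy sketch (integer weights, learning rate 1, boolean AND as the target; the names here are made up for illustration):

```python
# Minimal perceptron: learns any linearly separable function, here boolean AND.
def train(samples, epochs=20):
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # -1, 0 or +1
            w[0] += err * x1             # classic perceptron update rule
            w[1] += err * x2
            b += err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([1 if w[0] * x + w[1] * y + b > 0 else 0 for (x, y), _ in AND])  # → [0, 0, 0, 1]
```

Minsky and Papert's point, of course, was that this same loop can never learn XOR, which is part of what kicked off the first AI Winter.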

Name: Anonymous 2010-10-06 7:15

>>4
Lisp contributed a lot to programming languages, and is a truly excellent language to write your code in, but it was quite unfortunate that the perceived "failure" of early AI also dragged the language's public image down with it. Early AI wasn't a failure - it helped CompSci progress a lot, among other fields, and most of the things it created branched off under other names to avoid the label "AI", which had gotten bad publicity (they promised a lot and, due to various limitations at the time, couldn't truly deliver what the general public was expecting - however they did advance many fields with their research!). AI Winter was just a huge PR failure of people making promises they couldn't keep at the time.

>>1
Human-like general artificial intelligence is achievable with today's technology (and even in realtime in less than 5-10 years).

There are a few problems: one is computational cost and the other is training. If you make a truly general AI, the computational cost (for both sequential and parallel architectures) would be too high for us to be able to make use of them within our lifetimes (it would be too slow, and this slowness would also make realtime training problematic). The other issue is that of training: unattended training would be nice, while training the system manually would be incredibly costly and time-consuming, not to mention defeat the whole purpose of general AI - a mix of those would be most ideal.

So due to physical limitations, we can't yet create "hard AI" which can never be "wrong", however we can achieve something of the level around human intelligence (and likely much more). Human-like intelligence centers around the prediction of short-term (and sometimes long-term) future events, while using a very efficient memory for storing "memories"/concepts (and "fuzzy" sequences of them).

Possible models include integrative ones like >>7's, or models much closer to how our brain works, like Hawkins' model ( http://numenta.com/ - read his book ( http://en.wikipedia.org/wiki/On_Intelligence ) to understand the general concept, and Dileep's thesis ( http://www.numenta.com/for-developers/education/DileepThesis.pdf ) for one possible technical implementation ). I believe a working practical solution will be to use models that we know to work, such as one based on our brain/mind - see Hawkins' book for actual implementation details. The basic idea is that our brain is a hierarchical architecture of "blocks" (they're actually continuous in our brain) with clear inputs and outputs ( see http://www.pnas.org/content/107/30/13485.full , especially pages 12 and 21 of the "Supporting Information" pdf ), and that these "blocks" perform roughly the same functionality (I'm too lazy to find citations for all of these, but you should be able to find them yourself after reading the recommended book(s)/paper(s)). However, unlike Hawkins, I believe general human-like intelligence would be easier to achieve by placing the "AI" in an environment with consistent rules - after all, that's the basis of how our brain is supposed to operate: it finds "causes" in the world and "predicts" using them. Such placement would be possible either by having it explore the real world in a robotic body with a wide variety of sensors (such that it would be able to form the required correlations, just like we humans do), or in a virtual environment (this is harder to achieve, but still possible; however, I believe anything realistic would be too computationally expensive. Here's a paper on these possibilities: http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf ).
Besides the consistent rules of the environment, such an AI would have to learn human language (as imperfect as it may be), so that it could be much more easily trained and could learn from our knowledge. Without that, even with human-like intelligence, it would not be much better than a monkey or a feral child - smart enough to predict the immediate environment, identify objects, causes and behaviours and act accordingly, but not the most useful thing. Once spoken human language is learned, it could use it to learn written and then symbolic languages, and of course symbolic reasoning.

Actually implementing such a model could be done with a reasonably sized cluster today, or a network of FPGAs; it would still be somewhat expensive, although within the resources of your average company or university. Much cheaper implementations using ASICs are possible, but I'm not too fond of the idea as it won't favor experimentation (such as changing the size of individual "blocks") and the initial cost is rather high. If all goes well, we might see much denser memristor-based devices replace FPGAs within 5+ years - this would reduce the cost by a lot, further increase the speed and simplify the overall hardware design, while allowing a lot of freedom to experiment with the overall parameters of the model. It should also be noted that the cluster implementation, even though the most flexible with today's technology, is also the slowest and most expensive one (it won't be realtime), while the FPGA (or hypothetical memristor) implementation would be faster than realtime by many orders of magnitude. Slower than realtime would make training in a non-virtual environment too slow and tedious, while faster than realtime could accelerate it a lot: the AI could interface with a computer (either the same way a human does or, for example, by connecting its visual sensors to the VNC of a locally networked computer) and could read books and absorb information much faster than a human would, simply because its underlying platform is much faster. (Our neuronal network is actually very slow if you look at the numbers; we appear to be fast because the actual path the information takes is quite short, so most of what we do is retrieve information we already have. The parallelism in our brain is not too different from the parallelism found in your average electronic device, thus such platforms are much better suited as an alternate implementation than sequential CPUs.)

tl;dr: Human-like general AI is possible today. It's just a matter of choosing a suitable model (there are a few good ones; I'd propose we use one that we know already works, such as a mathematical structural model of the brain - not neuron-level of course, that would be too slow), implementing it (faster than realtime if possible), connecting such an AI to a variety of sensors, allowing it to interact with the chosen world (virtual or real - in the case of virtual worlds, implementing a suitable one is a hard challenge as well), and, one of the most important parts, training the AI (unattended and guided): have it learn spoken and written human language first so it can make use of our vast knowledge. (For faster-than-realtime AIs, one might want to slow the system down while it is learning spoken language or interacting with the real world, if one goes that path. It can go at the real speed of the hardware for parts which it can learn unattended, such as reading books. Whether one allows the AI to change these internal states depends on the designer, but it would probably be a good idea.)
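To make the "prediction using stored sequences of memories" point concrete, here's a deliberately tiny Python sketch of memory-based next-symbol prediction. This is in no way an implementation of Hawkins' model (no hierarchy, no invariance, first-order transitions only); it's just the bare flavor of predicting from remembered sequences:

```python
# Toy illustration of memory-based prediction (NOT an HTM implementation):
# remember which symbol tends to follow which, then predict the next one.
from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequence):
        # Store every observed (previous symbol -> next symbol) transition.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, symbol):
        # Most frequently observed successor, or None if never seen.
        seen = self.transitions.get(symbol)
        return seen.most_common(1)[0][0] if seen else None

m = SequenceMemory()
m.train("abcabcabd")
print(m.predict("a"))  # → b  (always followed 'a')
print(m.predict("b"))  # → c  ('c' followed 'b' twice, 'd' once)
```

A real cortical model would do this over fuzzy, hierarchical patterns rather than exact symbols, but the "store sequences, predict the continuation" loop is the same idea.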

>>6
The Chinese room argument kind of fails on a very wide variety of counts. "Consciousness" is a property of the entire system, not of individual components. "Consciousness" is also a loaded word: there are the functional aspects of it, and then there's the "hard problem of consciousness", which concerns qualia - they're very different things. For AI these questions don't really matter, as they won't alter the system's functionality, although from a philosophical point of view they do. The Chinese room argument also implies two highly unlikely and unnatural/illogical facts about the nature of qualia, but discussing those here would be far off-topic.

>>9
Training general AIs is a difficult problem, one on which we must sometimes cheat (imitate nature where it makes sense, and avoid nature's mistakes where we can) if we want to be able to solve it within our limited lifetime. Manual training would defeat the purpose (kind of why expert systems are not really "AI").

So >>1, what exactly is your idea about?

Name: Anonymous 2010-10-06 8:46

>>11
Cool story, bro.

Name: Anonymous 2010-10-06 10:42

>>11
Reasonable post in my /anus/. WTF?

Name: Anonymous 2010-10-06 14:53

>>1
I know how to create artificial intelligence. Real artificial intelligence.
No, you don't.

a way that has never been done before
Yes, it has.

I need some talented anon /prog/rammers to help me
You can't program, but somehow you've invented "real artificial intelligence."  Odd.  Also, everyone on /prog/ is already working on some sort of "real artificial intelligence," so no one wants to implement your bad idea.

Name: Anonymous 2010-10-06 15:05

>>11
Holy SHIT.  You have some scattered statements that border on valid...  But you also have some incredibly poor assumptions and assertions.  The most obvious is this bizarre notion that "we could do it if we wanted, we just don't want to yet."

No one knows how to implement "strong AI" (that's the common terminology, not "hard AI").  There are endless theories on how we might do it, of which that "numenta HTM" shit is one of the weaker.  Just simulating a human brain has been proposed over and over, and just as often, it has been pointed out that there is no evidence that this approach would work, even if you simulated every neuron.

Name: Anonymous 2010-10-06 16:12

>>15 Re: machine simulation of neural connections.
 
I worked at a lab where they were trying to study human consciousness and which brain structures it uses (fMRI, etc.). At the end of the day, the best they have been able to say is "loldunno".

Very obviously we don't yet know enough about how natural intelligence works to even begin to simulate it in a machine context.

That said:
I, for one, welcome our new machine overlords...

Name: 11 2010-10-06 16:25

>>15
The most obvious is this bizarre notion that "we could do it if we wanted, we just don't want to yet."
The problem is that few people have even tried it. There are early general AI models which are useful for some small inference problems, but training them is about as hard as training expert systems, if not harder, and some of those models can be too computationally expensive to be useful for much besides theorem proving.
Then you have some newer models, some of which are promising, but few of which have actually been tried for the task of achieving human-level intelligence. I'd say the real problem is that not enough effort has been put into verifying whether those models would work in the real world (whether due to lack of time, resources, funding, etc.).
There are endless theories on how we might do it, of which that "numenta HTM" shit is one of the weaker.
See the previous point: people have yet to verify some of these theories. Some of them actually seem quite promising, but if people are not committed to implementing them in full, we will never know whether they work (and if not, what their shortcomings are). Since you claim that Hawkins' idea is not a good one, can you give a reason why not, and say which theories are better?
Just simulating a human brain has been proposed over and over, and just as often, it has been pointed out that there is no evidence that this approach would work, even if you simulated every neuron.
I didn't propose that. The problem with simulating a human brain is the sheer size and the time it would take to simulate it, not to mention you would need to provide sensory input and have it match the motor commands (it would have to be fully consistent with the world, which is what we evolved to live in); if they didn't match, the brain would not be able to learn how the "world" works, or such learning would be greatly impeded. So full human brain simulation is not a true solution to general AI, though it may be useful for better understanding human intelligence or verifying some current theories. It may be of more interest to simulate neocortical columns to verify (or confirm, as there are some highly plausible theories about how they work) or discover (in the event that current theories are incorrect) their functionality. As far as I know, there is at least one neocortical column simulation project ( http://bluebrain.epfl.ch/ ). What I proposed instead is to implement the current high-level models of the neocortical column - this is within our abilities, and if those models are good enough, you could create an AI which would exhibit functionality similar to that of the mammalian brain and, with enough tweaking/"scaling", of a human-like one. Still, even if you implement such a model, it would have to be fed rich (realtime) sensory data (as that's what our brain evolved to process), and it would have to have a way of expressing change in the world (motor commands) which can then be validated by the sensory data it gets. I could elaborate some more on why I think the latter is important, but I suggest you consult the literature on these models to better understand my reasoning.

So my question to you >>15 is:
Why do you think these models are no good? We can't know how suitable they are until we try, and it's a bit annoying that few people are willing to invest the time and the money into implementing and testing them out.
I'm a bit biased to think that biologically inspired models have a higher chance of success since we already know that "we" are intelligent, as that's how we define intelligence in the first place.

Name: Anonymous 2010-10-06 16:25

Name: Anonymous 2010-10-06 16:36

Here's a summary of my experience with all these programming languages:

Python:
I never learnt it, but I've used programs coded in Python to do everything from simple GET requests to RFIDiot. Mega high level to mega low level shit. This programming language is unreal.
Not to mention it runs on Win/Mac/nix.
I've also edited some Python to add some functionality without knowing shit about Python. It's quite easy to read.
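To illustrate that high-to-low-level range with a made-up toy (none of this is from any particular tool): one stdlib-only snippet can do high-level text munging and low-level byte packing side by side.

```python
# Python spans "mega high level" to "mega low level" within one stdlib.
import struct

# High level: most frequent word, one expression.
words = "the quick brown fox the lazy dog the end".split()
top = max(set(words), key=words.count)

# Low level: pack a made-up binary frame, the kind of byte twiddling
# RFID tools do. ">BHB" = big-endian: byte, unsigned short, byte.
frame = struct.pack(">BHB", 0x02, 1337, 0x03)

print(top)          # → the
print(frame.hex())  # → 02053903
```

Readable at the top, bit-accurate at the bottom, which is roughly the appeal.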

I do 'know' PHP. Never had a formal lesson on it, but with just #PHP and php.net I made a secure world-leading forum on computer security. Yeah, security. PHP doesn't suck, people just give it a bad name because it's simple and idiots code with it too, and make shit websites. Don't want an insecure pile of shit? Don't code an insecure pile of shit. The language has nothing to do with it.

C++/+/# gets used a lot in exploits and other niche programs. I hate having to compile it because it rarely fucking works.
On the other hand, I run Gentoo....soo.... couldn't really live without GCC.

Java is what you learn in school after you play that frog game. Perl looks like it should be like Python, but instead it's just not.

There's also one major one (in terms of usefulness of code) which isn't up there. Cobalt. If all the Cobalt in the world was deleted there would be a serious fucking shitstorm.

Name: Anonymous 2010-10-06 17:42

First question : where would you start if you had to reinvent it all?
Well, I'd probably want to interact with this new AI, so probably
#include <stdio.h>

Name: Anonymous 2010-10-06 18:16

>>19
Don't want an insecure pile of shit? Don't code an insecure pile of shit. The language has nothing to do with it.
Except register_globals, magic_quotes and safe_mode. PHP makes it very, very easy to make insecure programs and create security holes, and encourages shitty programming techniques. On top of that, the standard library is very powerful, but its modules range from somewhat messy to completely messy.

C++/+/#
C compilers suck, and so do C++ compilers, as far as the options needed to make programs compile go, and the inability to specify compilation options within the source. But what really, really sucks about C and C++ is that they don't have a module system, but a shitty series of includes and the idea of compiling a pool of shit, then linking in an even bigger pool of shit. This makes compilation slow and tedious. And what's even worse is that, in order to speed up compiling this piece of shit, there are makefiles, which grow to insanity in any medium-sized project; thus you have configure, a several-hundred-kilobyte shell script that checks for the same stupid shit all the time and builds an unintelligible makefile, and in turn m4, a macro processor nobody called for, and tools to generate the configure file because you ain't writing that shit, from even more configuration files written in yet another language. The GNU build system is so queer I puke to think about it.

I don't write C programs because it feels like diarrhea soup, and never bothered to learn C++ because it feels like diarrhea pizza. But what I will definitely never do is choke on all that pile of bullshit the configure thing is; I'll leave it to people who are less mentally stable and have more free time than I do.

Oh, IHBT.

Name: Anonymous 2010-10-06 19:41

>>17
The problem is that few people have even tried it.
I have never seen someone so incredibly verbose and simultaneously so incredibly uninformed on a topic.  What are you, some kind of lab assistant who half overhears shit that actual researchers are doing, then spouts bits and pieces of it onto 4chan?

Name: Anonymous 2010-10-06 20:05

>>22
Probably some kind of paranoid schizophrenic bipolar etc etc...a goddamn nut!

Name: Anonymous 2010-10-06 20:09

>>22
Are you claiming people have tried out all these models? There are a lot of them that barely got anything more than a simple software model (while others never got further than some quick Matlab project). The field itself is rather small, with not that many people working in it and not enough resources being invested in it. Just the other month, I remember reading a paper showing how feasible implementing certain new promising models would be, yet few people have actually done so.

And no, I'm not a lab assistant, nor am I an active researcher in this particular domain, although I've been keeping an eye on published books and papers. I don't really have the time, nor the money to be involved in it currently, but I hope to try out some of my ideas in a few years.

Name: Anonymous 2010-10-06 20:53

>>19
There's also one major one (in terms of usefulness of code) which isn't up there. Cobalt. If all the Cobalt in the world was deleted there would be a serious fucking shitstorm.
IHBT, and yet… I cannot resist.

Name: Anonymous 2010-10-06 21:43

If all the brainfuck in the world was deleted, that would spell the end of civilization.

Name: Anonymous 2010-10-06 23:07

>>26
Don't get rid of Brainfuck.
It's a simple language.  It's a good language.  It's never done anything to harm you.

Name: Anonymous 2010-10-06 23:40

IHBT, but people are working on understanding natural languages:

http://www.nytimes.com/2010/10/05/science/05compute.html?_r=2&pagewanted=1&ref=technology

Name: Anonymous 2010-10-07 0:06

>>16
Hey. Can I get a job where I can spend money on fMRIs, say "loldunno" at the end of work day and go drink beer afterwards?  That sounds awesome.

Name: Anonymous 2010-10-07 0:46

>>24
I'm claiming that strong AI is a very active field of research.  Frankly, all the "models" you've listed aren't very interesting, relative to what's really going on in AI research, so, no, I am not making any claims as to whether people have tried those particular approaches.

If you're really interested in the field, you might start with the obvious: http://en.wikipedia.org/wiki/Strong_AI

It's just strange that you're so long-winded about this one obscure niche within the AI field, while true AI research has somehow passed you by entirely unnoticed.  If you'll take the time to even go through the Wikipedia article, you'll see that there are entire "institutes" (more than one) that are well funded and making real progress toward strong AI.  There's also a very brief mention of this "Numenta" that you're talking about.

What's holding AI back isn't that we just haven't gotten around to it yet.  If someone in the AI community had an approach to strong AI that was worth a shit, we'd have strong AI right now.

Name: Anonymous 2010-10-07 0:50

>>29
You can't be like me.

It's even better than you realize; today we were drinking home brewed beer during work!

Although I no longer work at the place I mentioned in >>16, it's all good though as the private sector stays on top of things IT-wise a lot more than the Uni-world does. Ooooooh shiny toys with pretty blue lights, racks and racks of them...

Name: Anonymous 2010-10-07 0:53

>>26
BF is the usual tarpit of choice to prove TC-ness. Getting rid of it probably wouldn't make anything worse though, since there are easier tarpits to implement than BF.
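And it's the tarpit of choice partly because a complete interpreter fits in a screenful. A rough Python sketch (the input op ',' omitted, cells wrap at 256; everything here is a toy for illustration):

```python
# Minimal Brainfuck interpreter: 7 of the 8 ops (no ',' input).
def bf(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jumps, stack = {}, []
    for i, c in enumerate(code):          # precompute matching brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]   # skip loop body
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]   # repeat loop body
        pc += 1
    return ''.join(out)

print(bf("++++++++[>++++++++<-]>+."))  # → A  (8*8+1 = 65 = 'A')
```

Proving a language Turing-complete then reduces to writing one of these in it, which is why BF keeps showing up.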

Name: Anonymous 2010-10-07 1:04

>>30
I don't think I claimed that Numenta's model is the only way, or that it's the most viable solution around; however, their model is nothing more than a couple of implementations of a given theoretical model of the neocortical column. The actual model may not be perfect, but it does seem to be very close to what other people doing research on the neocortical column have been proposing. It would be validate (or disprove, so we can move forward) these hypotheses about the function of the neocortical column, for example by means of simulation.

What I'm claiming is that a good model of the neocortical column and a reasonable hierarchical network of them could lead to mammalian-like, and quite possibly human-like, general artificial intelligence which would be practical to implement and train at a lower cost and in real time (of course, it's more complex than that; there's the problem of sensors and feedback). In no way am I claiming that this approach is the only one - it's one of many - but I do strongly believe it is worth pursuing. There are of course many other approaches to general AI, and this particular one has its own shortcomings.

Name: >>33 2010-10-07 1:06

s/It would be validate/It would be nice to validate/

Name: Anonymous 2010-10-07 1:10

>>28
References to papers in peer reviewed journals or GTFO.

Name: Anonymous 2010-10-08 7:24

>>33
I wonder if your basic fallacy isn't thinking that if we slap together something complicated enough, it will work.
At this stage, where we can hardly outsmart insects at some specific tasks, just implementing known techniques at large scale and duct-taping them together wouldn't even give useful data.

Name: >>33 2010-10-08 8:05

>>26
No, I actually believe the problem is simpler than we think.
I tried to work out in my mind how a lot of the usual mental tasks we do can arise out of this one specific model, and to some degree it seemed to work.
In practice, this model has been proven good enough to identify specific objects in a (moving/animated) scene, identify sounds and other recurring spatial/temporal patterns, and form "correlations" between inputs. It has been shown to be able to classify objects and reconstruct images, not unlike what we do in our brain through the V1<->V2<->V4<->IT circuit. It does show reasonable results at the small scale; however, the model itself is just a simplification of a theoretical model of how our brain might work. I'd really like to see how it behaves at the large scale (increase sensory inputs, implement a way for feedback (for example, motor control)) in a real-world environment. Of course, for it to work in a real-world environment, it has to be faster than realtime, which can't be achieved without a hardware implementation. I should also mention that I'm not proposing to implement a huge neural net or anything of that sort - that might be useless, and you wouldn't be able to glean useful data from it. These models are actually debuggable to some extent, although since they're mostly made to learn unattended (though that's not the only way you can train them), actually making sense of some of the data can take effort (however, much less than analyzing even tiny neural nets - think of the difference between analyzing a random analog circuit and analyzing a human-designed digital circuit. The analog one is highly unpredictable and may require fairly advanced math to make sense of, while the human-designed digital one can be understood if you know the basic principles involved.)

Of course, I may be wrong and I may be overestimating the possibilities of some models, and they might not perform nearly as well at the large scale as they do at the medium scale (for example, processing moving picture/video data) or the small scale (tiny image recognition, pattern detection, ...), however it is the next step I would try if I had the time and resources.

Name: >>37 2010-10-08 8:06

Erm, I meant >>36.

Name: Anonymous 2010-10-08 11:07

>>38
Don't worry, nobody will read this blanket anyway

Name: Anonymous 2010-10-08 11:34

>>38
Well it's too late now, you just crashed half of the participants in this discussion.

Name: Anonymous 2010-10-08 13:06

har har

Name: Anonymous 2010-10-08 17:55

>>33,37
Oh good. Someone's been drinking Hawkins' Kool-aid.

Pay attention to >>36:
I wonder if your basic fallacy isn't thinking that if we slap together something complicated enough, it will work.
That's very close to Hawkins' fallacy, which is roughly: anything he doesn't understand is not important (which is why he hasn't bothered to understand it). He seems to blow off criticism on this point rather than answer it, so I don't know what he really thinks.

The fact is, the computational memory model doesn't do anything on its own. All this talk of what it can do is in terms of applications under external direction. For a self-directed entity with real higher-order function, something more complex is needed, and Hawkins' mistake is an easy one to make: assuming that complexity is in the form of the same stuff as the memory model. The difference is, unlike most cortical region matter, it is specialized.

The perception that cortical regions are not specialized (except by their location and thus what information they process) is important, but it ignores the fact that in actual intelligent brains, declarative memory itself is managed by highly specialized non-cortical regions, and procedural memory is informed greatly by them as well. Behavior is part of that information, and in many cases the cortical processing is short-circuited by non-cortical regions.

Everything the cortex does somehow depends on more specialized regions. In the happy case* that dependency is mostly in the past: the cortex does the bulk of the work by having learned how to do it. Since cortical regions are so general it's unlikely they contain the learning material (and if they did, that would cause rough times for the Numenta model.) Self-directed entities need to learn and all of the extant, recognizably intelligent ones come with learning material which we call instinct.

*: The "unhappy case" is when the cortex has been short-circuited or, more interestingly, cannot converge on a response with cortical processing. The information propagates out of the generalized matter and then what happens? Instinct plays a role here (but what if it doesn't apply?), but it seems that something more interesting happens, too. Whether that is really necessary for intelligence is up for debate.

Name: 33 2010-10-08 20:30

>>42
The difference is, unlike most cortical region matter, it is specialized.
Okay, let me get this right: are you claiming there are specialized parts of the cortical region performing some specific complex functionality and not being made of the "usual" cortical column? (I know that cortical columns themselves vary to some degree, either in size or for physical/distance optimization reasons, but I don't see why this would change the basic principles of their operation...) Or are you claiming this about the other, non-cortical brain regions, which have different functionality?

but ignoring the fact that in actual intelligent brains declarative memory itself is managed by highly specialized non-cortical regions, and procedural memory is informed greatly by them as well.
I'm guessing here that you're talking about the Hippocampus (as far as non-procedural memory is concerned). Due to HM's case ( http://en.wikipedia.org/wiki/HM_(patient) ) it is known that hippocampal damage can cause one to be unable to form new "general"(?) memories (such as what you did today, the past week, etc), however procedural memories can still be formed. It appears that it's basically the edge of the cortex, and its circuit is not one of the most complex ( http://upload.wikimedia.org/wikipedia/commons/2/25/CajalHippocampus_(modified).png ). It may be that its functionality is not nearly as important in a generalized model (or more precisely, what I'm claiming is that the functionality performed by it may be needed biologically, however it might not be necessary in a high-level digital model which does not involve all the low-level details that the human brain must have to function).

Procedural memory can be handled by the cortex, however some of it is handled by non-cortex regions. I'll probably go more into this a bit later.

Behavior is part of that information, and in many cases the cortical processing is short-circuited by non-cortical regions.
Do you mean reflexes, such as those that can be performed as early as in the spinal cord/brainstem/cerebellum/... ?

Everything the cortex does somehow depends on more specialized regions.
Various sensory information may be filtered/prepared for it? Motor commands can be "unfolded" into more specific ones?

Self-directed entities need to learn and all of the extant, recognizably intelligent ones come with learning material which we call instinct.

Before the cortex evolved (in mammals), you had the reptilian brain, which likely accounts for this "learning material". However, it is known that the cortex can take over most of this behaviour once learned. I do believe that when implementing an AI based on Hawkins' model, a way to provide default behaviour for training would be quite useful; however, there are a few more problems with it: in his current implementation (HTM), he hasn't properly implemented any form of feedback (such as motor function in a real organism) or attention.

While one might be able to "halfass" their way through and avoid implementing every little detail of a real brain, there are a few important things I believe are still missing from his implementation: a feedback mechanism (such as motor control), default behaviours for this feedback system (what you called instincts) and attention (selecting specific paths while inhibiting others, regardless of the default behaviour of projecting back fully). These mechanisms may not be needed for the visual or auditory processing that HTM is commonly applied to, but they would be needed if you wanted to use his model to implement a general AI agent (embodied or virtual).

In the real brain, attention is handled by the Thalamus (relays things to/from the cortex and can select the active circuits in the cortex (attention)). The Thalamus controls sleep, and I believe this is probably done by just not passing in most sensory input to the cortex for further processing (same as the other attention-related behaviour).
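The gating behaviour described above can be sketched in a few lines. This is just an illustrative toy (all names and numbers are made up; none of this comes from an actual HTM or neuroscience codebase): attention is modeled as a per-channel gate applied before sensory input reaches cortical processing, and "sleep" is simply gating almost everything out.

```python
# Toy sketch of thalamic gating as described above. Hypothetical names:
# thalamic_relay, attention_mask. Channels are gated multiplicatively
# before reaching "cortical" processing; sleep = gating everything out.

def thalamic_relay(sensory_inputs, attention_mask):
    """Pass through only the attended sensory channels.

    sensory_inputs: dict of channel name -> signal strength
    attention_mask: dict of channel name -> gate in [0.0, 1.0]
                    (missing channels are fully inhibited)
    """
    return {channel: signal * attention_mask.get(channel, 0.0)
            for channel, signal in sensory_inputs.items()}

senses = {"visual": 0.9, "auditory": 0.4, "somatic": 0.2}

# Waking attention on vision: other channels are attenuated, not cut.
awake = thalamic_relay(senses, {"visual": 1.0, "auditory": 0.3, "somatic": 0.1})

# "Sleep" as near-total gating: nothing reaches the cortex for processing.
asleep = thalamic_relay(senses, {})
```

The point of the sketch is only that attention and sleep can fall out of the same mechanism (selective relay), which is roughly the claim being made about the Thalamus above.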

As for the feedback (motor control), the neocortex projects it to the Brainstem/spinal cord/cerebellum/basal ganglia.
The cerebellum (low-level motor control) can project to the brainstem/spinal cord (and back and forth), as well as to the Thalamus (attention). The basal ganglia project to the Thalamus as well. I'm very curious how he plans to implement feedback in his model, as he seems to skimp on this subject even in his book, even though it is important.

For a self-directed entity with real higher-order function, something more complex is needed and Hawkins' mistake is easy to make: that complexity is in the form of the same stuff as the memory model.

I don't think he ever claimed this. What I do believe I've seen him claim is that the more specialized parts are a lot simpler, lower-level and more specialized to the organism. In a real-world use scenario, you'd have to implement that sort of filtering/processing on your own to match the hardware (or virtual) model. Some lessons could be learned there from nature, but the most important lesson is the neocortical column.
He currently claims that HTM's aim is to model the Neocortex, the Thalamus (although I've never seen this implemented in HTM yet (attention gating)) and a bit of the Hippocampus (might not really be needed, as it could itself be viewed as the edge of the neocortex). He does not include the filtering (and default behaviour) provided by the sensors, nuclei, spinal cord, cerebellum or basal ganglia in his model, as he probably views them as organism-specific; such behaviour would have to be programmed in through other external mechanisms in a real implementation.

tl;dr: Hawkins' model isn't perfect, but it's likely a good one to build upon, and it should eventually be adjusted to handle some functionality which is not yet implemented/fully understood (feedback and attention). Hawkins never claimed his model was perfect or even good enough, although he's been focusing on the most important part (high-level sensory processing and cognitive functions), while avoiding the low-level sensory processing and motor parts.

Name: Anonymous 2010-10-08 21:08

>>43
Okay, let me get this right.
I was talking about non-cortical regions. That's the whole point.

I'm guessing here that you're talking about the Hippocampus
Learn more about brain function so you can stop guessing. While writing that post I had 3 other regions in mind only one of which was the hippocampus. You can view it as a section of the cortex if you like but it's not part of the usual memory-computation process. (See: HM.)

Do you mean reflexes, such as those that can be performed as early as in the spinal cord/brainstem/cerebellum/... ?
Not really. Certainly not exclusively.

What I do believe I've seen him claim is that the more specialized parts are a lot simpler, lower-level and more specialized to the organism.
I didn't word that well. I was trying to say that his claim is that anything not done by the generalized regions is unimportant. An exception to that "anything" seems to be the thalamus, which he was never clear about.

Another point: nothing is simple until you understand it. The hippocampus looks simple, but the more complicated cortex at large is easier to simulate to some degree of satisfaction. His treatment of the thalamus seems to be another example (like treating the philosopher's stone as an implementation detail) and I don't get where he's going with it.

Name: Anonymous 2010-10-08 21:46

>>44
I was trying to say that his claim is anything not done by the generalized regions is unimportant. An exception to that "anything" seems to be the thalamus which he was never clear about.
I think he might be claiming that a working model of the cortex/hippocampus and the thalamus is the most important thing, and that if we have that, we could build "thinking" machines based upon this model; the filtering/default behaviour/processing performed by the lower non-cortical areas (cerebellum, brainstem, spinal cord, basal ganglia, motor cells, various sensor cells, etc) is specific to biological organisms and could be implemented in other ways in a "thinking" machine.

I do believe his model is incomplete in the way it treats attention (thalamus) and motor function (how do the lower-level patterns in M1 form/are learned? do they form entirely by feedback?), however I do hope to see him revise it to include those in the future (He did treat them in his "On Intelligence" book to a certain degree, but they're not present in his memory-prediction framework implementation which he calls HTM).

Name: Anonymous 2010-10-08 22:37

>>45
He has been fairly ignorant in the way he blows off these other functions. Of course there's room for cutting corners, especially when the equivalent of biological maintenance can be abstracted into independent systems, but he's thrown out a few babies with this bathwater. Any intelligent system is an informed system, and he's written off the regions that inform the system as unimportant, even though he intends (out of recognized necessity) to replace those "unimportant" functions with his own. They're not all caught up in biological maintenance, and some of them function in very impressive ways which we understand fairly well. To ignore them because they were developed before the neocortex is just plain ignorance.

PS. I'm being vague about specifics because I want you to think for yourself. To do that you'll need more information than Hawkins will (or can) give you. The same goes for the information I give you: come to your own conclusions, just make sure it's not "whatever Jeff Hawkins says is right." When it comes to the brain nobody's right about everything. It's hard if you're not a neuroscientist, but you'll have to source your own information too... and wikipedia isn't enough.

Name: Anonymous 2010-10-09 0:13

>>46
Do you have any recommended reading? I'm not a neuroscientist, however the domain raises my interest, it is however a pretty large domain, with many unanswered questions, and having a few good starting points would be nice.

Name: Anonymous 2010-10-09 3:15

Since there's no market for a middle ground... you're probably after material in the pop-sci realm that also happens to be actually useful. That would be a rare find. At some point you'll have to study the curriculum more or less, so "where to start" is a textbook on biology. It's hard to approach specialized knowledge laterally (a shame, because it's a big barrier to effective interdisciplinary sharing).

Name: Anonymous 2010-10-12 10:50

>>47,48
You might try Joseph LeDoux's The Synaptic Self. Not much on the {neo,}cortex, but it has some interesting things to say about neuronal cells, spends some time on the hippocampus, and gives a lot of attention to the amygdala. It comes from the horse's mouth (LeDoux has done heinous things to science on rats) and it's the best "pop sci" I've ever read in any field.

Do you have a website/project page/blag/etc? I'm interested in a bystander kinda way.

Name: Anonymous 2010-10-12 11:36

Did anybody actually read >>11?
If yes, could you please sum it up in a couple paragraphs?

Name: Anonymous 2010-10-12 11:42

>>50
I did, and perhaps you should too.

Name: Anonymous 2010-10-12 12:00

>>50
Human-like general artificial intelligence is achievable with today's technology (and even in realtime in less than 5-10 years).

There are a few problems: one is computational cost and the other is training. If you make a truly general AI, the computational cost (on both sequential and parallel architectures) would be too high for us to make use of it within our lifetime (it would be too slow, and this slowness would also make realtime training problematic). The other issue is training: unattended training would be nice, while training the system manually would be incredibly costly and time-consuming, not to mention defeat the whole purpose of general AI - a mix of the two would be ideal.

So due to physical limitations, we can't yet create "hard AI" which can never be "wrong"; however, we can achieve something around the level of human intelligence (and likely much more). Human-like intelligence centers on the prediction of short-term (and sometimes long-term) future events, while using a very efficient memory for storing "memories"/concepts (and "fuzzy" sequences of them).

Possible models include integrative ones like >>7's, or models much closer to how our brain works, like Hawkins' model ( http://numenta.com/ - read his book ( http://en.wikipedia.org/wiki/On_Intelligence ) to understand the general concept, and Dileep's thesis ( http://www.numenta.com/for-developers/education/DileepThesis.pdf ) for one possible technical implementation). I believe a practical working solution will be to use models that we know work, such as one based on our brain/mind. See Hawkins' book for actual implementation details, but the basic idea is that our brain is a hierarchical architecture of "blocks" (they're actually continuous in our brain) with clear inputs and outputs ( see http://www.pnas.org/content/107/30/13485.full , especially pages 12 and 21 of the "Supporting Information" pdf ), and that these "blocks" all perform roughly the same functionality (I'm too lazy to find citations for all of this, but you should be able to find them yourself after reading the recommended book(s)/paper(s)). However, unlike Hawkins, I believe general human-like intelligence would be easier to achieve by placing the "AI" in an environment with consistent rules - after all, that's the basis of how our brain is supposed to operate: finding "causes" in the world and "predicting" with them. Such placement would be possible either by having it explore the real world in a robotic body with a wide variety of sensors (so that it can form the required correlations, just like we humans do), or in a virtual environment (harder to achieve but still possible, though I believe anything realistic would be too computationally expensive; here's a paper on these possibilities: http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf ).
Besides the consistent rules of the environment, such an AI would have to learn human language (as imperfect as it may be), so that it could be trained much more easily and could learn from our knowledge. Without that, even with human-like intelligence, it wouldn't be much better than a monkey or a feral child: smart enough to predict its immediate environment, identify objects, causes and behaviours, and behave accordingly, but not the most useful thing. Once human language is learned, it could use it to learn written and then symbolic languages, and of course symbolic reasoning.

Actually implementing such a model could be done today with a reasonably sized cluster or a network of FPGAs, though it would still be somewhat expensive - within the resources of your average company or university. Much cheaper implementations using ASICs are possible, but I'm not too fond of the idea, as it won't favor experimentation (such as changing the size of individual "blocks") and the initial cost is rather high. If all goes well, we might see memristor-based devices, which are much denser, replace FPGAs within 5+ years - this would reduce the cost a lot, further increase the speed and simplify the overall hardware design, while allowing a lot of freedom to experiment with the overall parameters of the model. It should also be noted that the cluster implementation, though the most flexible with today's technology, is also the slowest and most expensive (it won't be realtime), while an FPGA (or hypothetical memristor) implementation would be faster than realtime by many orders of magnitude. Slower than realtime would make training in a non-virtual environment too slow and tedious, while faster than realtime could accelerate it a lot: the AI could interface with a computer (either the same way a human does, or, for example, by connecting its visual sensors to the VNC of a locally networked computer) and read books, absorbing information much faster than a human would, simply because its underlying platform is much faster. (Our neuronal network is actually very slow if you look at the numbers; we appear fast because the actual path the information takes is quite short, so most of what we do is retrieve information we already have. The parallelism in our brain is not too different from the parallelism found in your average electronic device, so such platforms are much better suited as an alternate implementation than sequential CPUs.)

tl;dr: Human-like general AI is possible today. It's just a matter of choosing a suitable model (there are a few good ones; I'd propose one that we know already works, such as a mathematical structural model of the brain - not at the neuron level, of course, as that would be too slow), implementing it so it runs faster than realtime if possible, connecting the AI to a variety of sensors and letting it interact with the chosen world (virtual or real; in the virtual case, implementing a suitable world is a hard challenge as well), and, one of the most important parts, training the AI (unattended and guided): have it learn spoken and written human language first so it can make use of our vast knowledge. (For faster-than-realtime AIs, one might want to slow the AI down while it learns spoken language or interacts with the real world, if one goes that path; it can run at the full speed of the hardware for the parts it can learn unattended, such as reading books. Whether one allows the AI to change these internal states depends on the designer, but it would probably be a good idea.)
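To make the "memory-prediction" idea in this post concrete, here's a toy single-node sketch. It's a drastic simplification (a first-order transition table, nothing like a real HTM hierarchy or Hawkins' actual algorithm), and all names are made up for illustration:

```python
# Toy "memory-prediction" node: memorize observed transitions, predict
# the most likely next input. Hypothetical class/method names; a real
# HTM node handles hierarchies, variable-order sequences and feedback.

from collections import defaultdict, Counter

class MemoryPredictionNode:
    def __init__(self):
        # prev symbol -> Counter of symbols that followed it
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, symbol):
        """Learn the transition from the previous input to this one."""
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self, symbol):
        """Most frequently seen successor of `symbol`, or None if unseen."""
        nexts = self.transitions.get(symbol)
        if not nexts:
            return None
        return nexts.most_common(1)[0][0]

node = MemoryPredictionNode()
for s in "abcabcabc":   # train on a repeating "world" with consistent rules
    node.observe(s)

print(node.predict("a"))  # -> 'b'
```

The node "recognizes" the familiar sequence and anticipates what comes next; stacking such nodes so that higher levels see slower, more abstract sequences is (very roughly) the hierarchical part of the model being discussed.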

Name: Anonymous 2010-10-12 12:02

Fresh from slashdot.
Technology: Meet NELL, the Computer That Learns From the Net
Is that you op? You should have called your computer NEET

Name: Anonymous 2010-10-12 12:32

Read : Design for a Brain and Introduction to Cybernetics, from Ross Ashby.

Stafford Beer's VSM (Viable System Model) is a good framework for representing a "living being". Heavily underrated IMO.

Name: VIPPER 2010-10-12 13:40

JEWSus fucking christ, i would never think that /prog/ was capable of such discussion.

>>55
Back to fibs, please.

Name: Anonymous 2010-10-12 14:03

>>55
You're sending yourself back to fibs?

Name: VIPPER 2010-10-12 14:41

>>56
You're sending yourself back to fibs?
Yes, i barely qualify as codeballmer.

Name: Anonymous 2010-10-13 10:35

>>54
Ashby's stuff is available at: http://www.biodiversitylibrary.org/

At a glance Beer's "VSM" seems sensible enough, but applying it to a biological system is a bit silly. It's a specialization on an approximated general biological system. Working back from that loses overmuch in the transmission. So you've got this perfectly backwards with respect to "living being" and VSM:
Stafford Beer's VSM (Viable System Model) is a good framework for representing a "living being".

OTOH, it might be worth understanding for the sake of familiarizing oneself with the concepts in practice.

Name: Anonymous 2010-10-13 19:01

>>58
Did you just top-quote on my/prog/?

HAHAHAHA
YOU THINK YOURE THOUGH UH ?
I HAVE ONE WORD FOR YOU
  THE FORCED INDENTATION OF THE CODE
GET IT ?
I DONT THINK SO
YOU DONT KNOW ABOUT MY OTHER CAR I GUESS ?
ITS A CDR
AND IS PRONOUNCED ``CUDDER''

OK YOU FUQIN ANGERED AN EXPERT PROGRAMMER
THIS IS/prog/
YOU ARE ALLOWED TO POST HERE ONLY IF YOU HAVE ACHIEVED SATORI
PROGRAMMING IS ALL ABOUT ``ABSTRACT BULLSHITE'' THAT YOU WILL NEVER COMPREHEND
I HAVE READ SICP
IF ITS NOT DONE YOU HAVE TO
TOO BAD RUBY ON RAILS IS SLOW AS FUCK
BBCODE AND ((SCHEME)) ARE THE ULTIMATE LANGUAGES
ALSO
WELCOME TO
/prog/
EVERY THREAD WILL BE REPLIED TO
NO EXCEPTION

Name: Anonymous 2010-10-13 19:17

>>59
You fucked up your spoilersNone. Clearly you are not SATORI enough for /prog/ yet. Read your SICP again.

Name: Anonymous 2010-10-13 19:42

>>59
You're right, this thread didn't have quite enough shit in it. Thank you for keeping it up to /prog/ standards.

Name: Anonymous 2010-10-13 20:04

>>61
You forgot to spoiler you're /prague/.

Name: Anonymous 2010-10-13 20:32

spoilersNone
Damn you, Python.

Name: Anonymous 2010-10-13 21:37

>>62
You forgot your sage, ``faggot'''

Name: Anonymous 2010-10-13 21:38

wait. Why my message is >>62 when it supposed to be >>63? m00t changed the clock?

Name: Anonymous 2010-10-14 1:20

>>4
Hello there Mr. UCB professor.

Name: Anonymous 2010-10-14 2:32

>>66
Too bad >>4 was lost on everyone who cares. Someone would have brought up Prolog at least.

Name: Anonymous 2010-10-14 13:48

>>62
Oh no he fucking isn't.

Name: >>61 2010-10-14 15:16

>>68
Oh yes I am. wait no that's a bad thing nvm
