
Artificial Intelligence

Name: Anonymous 2010-10-05 21:03

I know how to create artificial intelligence. Real artificial intelligence. I conceptualized it in a way that has never been done before. Now I need some talented anon /prog/rammers to help me.

First question: where would you start if you had to reinvent it all?

Name: Anonymous 2010-10-08 13:06

har har

Name: Anonymous 2010-10-08 17:55

>>33,37
Oh good. Someone's been drinking Hawkins' Kool-aid.

Pay attention to >>36:
I wonder if your basic fallacy isn't thinking that if we slap together something complicated enough, it will work.
That's very close to Hawkins' fallacy, which is roughly: anything he doesn't understand is not important (which is why he hasn't bothered to understand it). He seems to blow off criticism on this point more than answer it, so I don't know what he really thinks.

The fact is the computational memory model doesn't do anything on its own. All this talk of what it can do is all in terms of application with external direction. For a self-directed entity with real higher-order function, something more complex is needed and Hawkins' mistake is easy to make: that complexity is in the form of the same stuff as the memory model. The difference is, unlike most cortical region matter, it is specialized.

The perception that cortical regions are not specialized (except by their location and thus what information they process) is important, but it ignores the fact that in actual intelligent brains declarative memory itself is managed by highly specialized non-cortical regions, and procedural memory is informed greatly by them as well. Behavior is part of that information, and in many cases the cortical processing is short-circuited by non-cortical regions.

Everything the cortex does somehow depends on more specialized regions. In the happy case* that dependency is mostly in the past: the cortex does the bulk of the work by having learned how to do it. Since cortical regions are so general it's unlikely they contain the learning material (and if they did, that would cause rough times for the Numenta model.) Self-directed entities need to learn and all of the extant, recognizably intelligent ones come with learning material which we call instinct.

*: The "unhappy case" is when the cortex has been short-circuited or, more interestingly, cannot converge on a response with cortical processing. The information propagates out of the generalized matter and then what happens? Instinct plays a role here (but what if it doesn't apply?), but it seems that something more interesting happens, too. Whether that is really necessary for intelligence is up for debate.

Name: 33 2010-10-08 20:30

>>42
The difference is, unlike most cortical region matter, it is specialized.
Okay, let me get this right. Are you claiming there are specialized parts of the cortical region performing some specific complex functionality and not being made of the "usual" cortical column? (I know that cortical columns themselves vary to some degree, either in size or for physical/distance optimization reasons; however, I don't see why this would change the basic principles of their operation.) Or are you claiming this about the other non-cortical brain regions, which have different functionality?

but ignoring the fact that in actual intelligent brains declarative memory itself is managed by highly specialized non-cortical regions, and procedural memory is informed greatly by them as well.
I'm guessing here that you're talking about the hippocampus (as far as non-procedural memory is concerned). From HM's case ( http://en.wikipedia.org/wiki/HM_(patient) ) it is known that hippocampal damage can leave one unable to form new "general"(?) memories (such as what you did today, the past week, etc.), while procedural memories can still be formed. It appears to be basically the edge of the cortex, and its circuit is not one of the most complex ( http://upload.wikimedia.org/wikipedia/commons/2/25/CajalHippocampus_(modified).png ). It may be that its functionality is not nearly as important in a generalized model. More precisely, what I'm claiming is that the functionality it performs may be needed biologically, but it might not be necessary in a high-level digital model which does not involve all the low-level details that the human brain must have to function.

Procedural memory can be handled by the cortex, however some of it is handled by non-cortex regions. I'll probably go more into this a bit later.

Behavior is part of that information, and in many cases the cortical processing is short-circuited by non-cortical regions.
Do you mean reflexes, such as those that can be performed as early as in the spinal cord/brainstem/cerebellum/... ?

Everything the cortex does somehow depends on more specialized regions.
Various sensory information may be filtered/prepared for it? Motor commands can be "unfolded" into more specific ones?

Self-directed entities need to learn and all of the extant, recognizably intelligent ones come with learning material which we call instinct.

Before the cortex evolved (in mammals), you had the reptilian brain, which likely accounts for this "learning material". However, it is known that the cortex can take over most of this behaviour once learned. I do believe that when implementing an AI based on Hawkins' model, a way of providing default behaviour for training would be quite useful; however, there are a few more problems with it: in his current implementation (HTM), he hasn't quite properly implemented any form of feedback (such as motor function in a real organism) or attention.

While one might be able to "halfass" their way through and avoid implementing every little detail of a real brain, there are a few important things I believe are still missing from his implementation: a feedback mechanism (such as motor control), default behaviours for this feedback system (what you called instincts), and attention (selecting specific paths while inhibiting others, regardless of the default behaviour of projecting back fully). These mechanisms may not be needed for visual or auditory processing, which is the common application he's been using HTM for, but they would be needed if you wanted to use his model to implement a general AI agent (embodied or virtual).

In the real brain, attention is handled by the Thalamus (relays things to/from the cortex and can select the active circuits in the cortex (attention)). The Thalamus controls sleep, and I believe this is probably done by just not passing in most sensory input to the cortex for further processing (same as the other attention-related behaviour).
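A toy Python sketch of what that gating idea might look like: attention as a per-channel multiplier applied to sensory input before it reaches "cortical" processing, with sleep as simply relaying nothing. All the names and numbers here are made up for illustration; this is not how any actual model (HTM or otherwise) implements it.

```python
class Thalamus:
    def __init__(self, channels):
        # start fully open: all sensory channels relayed onward
        self.gate = {ch: 1.0 for ch in channels}

    def attend(self, channel, strength=1.0):
        # boost one channel, strongly inhibit the rest (selective attention)
        for ch in self.gate:
            self.gate[ch] = strength if ch == channel else 0.1

    def sleep(self):
        # "sleep" as not passing sensory input onward at all
        for ch in self.gate:
            self.gate[ch] = 0.0

    def relay(self, inputs):
        # scale each incoming signal by its gate before it reaches the "cortex"
        return {ch: v * self.gate[ch] for ch, v in inputs.items()}

thalamus = Thalamus(["vision", "hearing"])
thalamus.attend("vision")
out = thalamus.relay({"vision": 1.0, "hearing": 1.0})
# vision passes at full strength, hearing is strongly attenuated
```

Obviously the real thalamus does relay selection with recurrent loops, not a dict of floats, but the point is that gating is a cheap mechanism to bolt onto a hierarchy.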

As for the feedback (motor control), the neocortex projects it to the Brainstem/spinal cord/cerebellum/basal ganglia.
The cerebellum (low-level motor control) can project to the brainstem/spinal cord (and back and forth), as well as to the thalamus (attention). The basal ganglia project to the thalamus as well. I'm very curious how he plans to implement feedback in his model, as he seems to skimp on this subject even in his book, even though it is important.

For a self-directed entity with real higher-order function, something more complex is needed and Hawkins' mistake is easy to make: that complexity is in the form of the same stuff as the memory model.

I don't think he ever claimed this. What I do believe I've seen him claim is that the more specialized parts are a lot simpler, lower-level, and specialized to the organism. In a real-world use scenario, you'd have to implement that sort of filtering/processing on your own to match the hardware (or virtual) model. Some lessons could be learned there from nature, but the most important lesson is the neocortical column.
He currently claims that HTM's aim is to model the neocortex, the thalamus (although I've never seen this implemented in HTM yet (attention gating)), and a bit of the hippocampus (which might not really be needed, as it could itself be viewed as the edge of the neocortex). He does not include the filtering (and default behaviour) provided by the sensors, nuclei, spinal cord, cerebellum, and basal ganglia in his model, as he probably views them as organism-specific; such behaviour would have to be programmed in through some other external mechanisms in a real implementation.

tl;dr: Hawkins' model isn't perfect, but it's likely a good one to build upon, and it should eventually be adjusted to handle some functionality which is not yet implemented/fully understood (feedback and attention). Hawkins never claimed his model was perfect or even good enough, although he's been focusing on the most important part (high-level sensory processing and cognitive functions), while avoiding the low-level sensory processing and motor parts.

Name: Anonymous 2010-10-08 21:08

>>43
Okay, let me get this right?
I was talking about non-cortical regions. That's the whole point.

I'm guessing here that you're talking about the Hippocampus
Learn more about brain function so you can stop guessing. While writing that post I had 3 other regions in mind only one of which was the hippocampus. You can view it as a section of the cortex if you like but it's not part of the usual memory-computation process. (See: HM.)

Do you mean reflexes, such as those that can be performed as early as in the spinal cord/brainstem/cerebellum/... ?
Not really. Certainly not exclusively.

What I do believe I've seen him claim is that the more specialized parts are a lot more simpler, low-level and specialized to the organism.
I didn't word that well. I was trying to say that his claim is anything not done by the generalized regions is unimportant. An exception to that "anything" seems to be the thalamus which he was never clear about.

Another point: nothing is simple until you understand it. The hippocampus looks simple, but the more complicated cortex at large is easier to simulate to some degree of satisfaction. His treatment of the thalamus seems to be another example (like treating the philosopher's stone as an implementation detail), and I don't get where he's going with it.

Name: Anonymous 2010-10-08 21:46

>>44
I was trying to say that his claim is anything not done by the generalized regions is unimportant. An exception to that "anything" seems to be the thalamus which he was never clear about.
I think he might be claiming that a working model of the cortex/hippocampus and the thalamus is the most important part, and that if we have that, we could build "thinking" machines based upon this model; and that the filtering/default behaviour/processing performed by lower non-cortical areas (cerebellum, brainstem, spinal cord, basal ganglia, motor cells, various sensor cells, etc.) is specific to biological organisms and could be implemented in other ways in a "thinking" machine.

I do believe his model is incomplete in the way it treats attention (thalamus) and motor function (how do the lower-level patterns in M1 form and get learned? do they form entirely by feedback?). However, I do hope to see him revise it to include those in the future. (He did treat them in his "On Intelligence" book to a certain degree, but they're not present in his memory-prediction framework implementation, which he calls HTM.)

Name: Anonymous 2010-10-08 22:37

>>45
He has been fairly ignorant in the way he blows off these other functions. Of course there's room for cutting corners, especially when the equivalent of biological maintenance can be abstracted into independent systems, but he's thrown out a few babies with this bathwater. Any intelligent system is an informed system, and he's written off the regions that inform the system as unimportant, even though he intends to (out of recognized necessity) replace those unimportant functions with his own. They're not all caught up in biological maintenance, and some of them function in very impressive ways which we understand fairly well. To ignore them because they developed before the neocortex is just plain ignorance.

PS. I'm being vague about specifics because I want you to think for yourself. To do that you'll need more information than Hawkins will (or can) give you. The same goes for the information I give you: come to your own conclusions, just make sure it's not "whatever Jeff Hawkins says is right." When it comes to the brain nobody's right about everything. It's hard if you're not a neuroscientist, but you'll have to source your own information too... and wikipedia isn't enough.

Name: Anonymous 2010-10-09 0:13

>>46
Do you have any recommended reading? I'm not a neuroscientist, however the domain raises my interest, it is however a pretty large domain, with many unanswered questions, and having a few good starting points would be nice.

Name: Anonymous 2010-10-09 3:15

Since there's no market for a middle ground... you're probably after material in the pop-sci realm that also happens to be actually useful. That would be a rare find. At some point you'll have to study the curriculum more or less, so "where to start" is a textbook on biology. It's hard to approach specialized knowledge laterally (a shame, because it's a big barrier to effective interdisciplinary sharing).

Name: Anonymous 2010-10-12 10:50

>>47,48
You might try Joseph LeDoux's The Synaptic Self. Not much on the {neo,}cortex, but it has some interesting things to say about neuronal cells, spends some time on the hippocampus, and gives a lot of attention to the amygdala. It comes from the horse's mouth (LeDoux has done heinous things to science on rats) and it's the best "pop sci" I've ever read of any field.

Do you have a website/project page/blag/etc? I'm interested in a bystander kinda way.

Name: Anonymous 2010-10-12 11:36

Did anybody actually read >>11?
If yes, could you please sum it up in a couple paragraphs?

Name: Anonymous 2010-10-12 11:42

>>50
I did, and perhaps you should too.

Name: Anonymous 2010-10-12 12:00

>>50
Human-like general artificial intelligence is achievable with today's technology (and even in realtime within 5-10 years).

There are a few problems: one is computational cost and the other is training. If you make a truly general AI, the computational cost (for both sequential and parallel architectures) would be too high for us to make use of it within our lifetimes: it would be too slow, and this slowness would also make realtime training problematic. The other issue is training: unattended training would be nice, while training the system manually would be incredibly costly and time-consuming, not to mention that it would defeat the whole purpose of general AI. A mix of the two would be ideal.

So, due to physical limitations, we can't yet create "hard AI" which can never be "wrong"; however, we can achieve something around the level of human intelligence (and likely much more). Human-like intelligence centers on the prediction of short-term (and sometimes long-term) future events, while using a very efficient memory for storing "memories"/concepts (and "fuzzy" sequences of them).
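To make the "prediction from stored sequences" idea concrete, here's a deliberately crude Python sketch: store which symbol tends to follow which, then predict the most frequent successor. This is a stand-in I made up for illustration, not HTM or any real memory-prediction implementation.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self):
        # for each symbol, count how often each other symbol followed it
        self.successors = defaultdict(Counter)

    def observe(self, sequence):
        # record every adjacent transition in the observed sequence
        for prev, nxt in zip(sequence, sequence[1:]):
            self.successors[prev][nxt] += 1

    def predict(self, symbol):
        # predict the most frequently seen successor, or None if unseen
        if symbol not in self.successors:
            return None
        return self.successors[symbol].most_common(1)[0][0]

mem = SequenceMemory()
mem.observe(["wake", "coffee", "work"])
mem.observe(["wake", "coffee", "slack"])
mem.observe(["wake", "coffee", "work"])
# after "coffee", "work" was seen twice vs "slack" once,
# so the memory predicts "work"
```

A real system would of course predict over hierarchies of fuzzy, time-warped sequences rather than exact symbols, but the retrieve-and-predict loop is the same shape.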

Possible models include integrative ones like >>7's, or models much closer to how our brain works, like Hawkins' model ( http://numenta.com/ ; read his book ( http://en.wikipedia.org/wiki/On_Intelligence ) to understand the general concept, and Dileep's thesis ( http://www.numenta.com/for-developers/education/DileepThesis.pdf ) for one possible technical implementation). I believe a working practical solution will be to use models that we know to work, such as one based on our brain/mind. See Hawkins' book for actual implementation details, but the basic idea is that our brain is a hierarchical architecture of "blocks" (they're actually continuous in our brain) with clear inputs and outputs ( see http://www.pnas.org/content/107/30/13485.full , especially pages 12 and 21 of the "Supporting Information" PDF ), and that these "blocks" perform roughly the same functionality. (I'm too lazy to find citations for all of this, but you should be able to find them yourself after reading the recommended book(s)/paper(s).)

However, unlike Hawkins, I believe general human-like intelligence would be easier to achieve by placing the "AI" in an environment with consistent rules; after all, that's the basis of how our brain is supposed to operate - it finds "causes" in the world and "predicts" using them. Such placement would be possible either by having it explore the real world in a robotic body with a wide variety of sensors (so that it could form the required correlations, just like we humans do), or in a virtual environment (harder to achieve, but still possible; I believe anything realistic would be too computationally expensive, though. Here's a paper on these possibilities: http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf ).

Besides the consistent rules of the environment, such an AI would have to learn human language (as imperfect as it may be), so that it could be much more easily trained and could learn from our knowledge. Without that, even with human-like intelligence, it would be not much better than a monkey or a feral child: smart enough to predict its immediate environment, identify objects, causes, and behaviours, and behave accordingly, but not the most useful thing. Once spoken language is learned, it could use it to learn written and then symbolic languages, and of course symbolic reasoning.
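The "hierarchy of identical blocks" claim can be caricatured in a few lines of Python: every block assigns a stable ID to each distinct input pattern it sees and passes that ID upward, so higher levels see increasingly abstract names for combinations of lower-level patterns. Every name here is invented for illustration; real cortical-model nodes do vastly more (pooling over time, prediction, feedback).

```python
class Block:
    """One generic node: same code at every level of the hierarchy."""

    def __init__(self):
        self.patterns = {}  # input pattern -> learned ID

    def process(self, pattern):
        # assign a new ID the first time a pattern appears, reuse it after
        if pattern not in self.patterns:
            self.patterns[pattern] = len(self.patterns)
        return self.patterns[pattern]

# two "sensory" blocks feed one higher block; every level runs
# the same code, which is the "roughly the same functionality" idea
left, right, top = Block(), Block(), Block()

def perceive(left_input, right_input):
    # the top block sees only the IDs produced below it
    return top.process((left.process(left_input),
                        right.process(right_input)))

a = perceive("edge", "tone")
b = perceive("edge", "tone")     # same scene -> same top-level ID
c = perceive("corner", "tone")   # new scene -> new top-level ID
```

The interesting property is that nothing in `Block` knows whether it sits at the bottom or the top; the hierarchy's wiring, not the node's code, determines what it ends up representing.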

Actually implementing such a model could be done today with a reasonably sized cluster, or a network of FPGAs. It would still be somewhat expensive, although within the resources of your average company or university. Much cheaper implementations using ASICs are possible, but I'm not too fond of the idea, as it won't favor experimentation (such as changing the size of individual "blocks") and the initial cost is rather high. If all goes well, we might see much denser memristor-based devices replace FPGAs within 5+ years; this would reduce the cost by a lot, further increase the speed, and simplify the overall hardware design, while allowing a lot of freedom to experiment with the overall parameters of the model.

It should also be noted that the cluster implementation, though the most flexible with today's technology, is also the slowest and most expensive one (it won't be realtime), while an FPGA (or hypothetical memristor) implementation would be faster than realtime by many orders of magnitude. Slower than realtime would make training in a non-virtual environment too slow and tedious, while faster than realtime could accelerate it a lot: the AI could interface with a computer (either the same way a human does, or, for example, by connecting its visual sensors to the VNC of a locally networked computer) and read books and absorb information much faster than a human would, simply because its underlying platform is much faster. (Our neuronal network is actually very slow if you look at the numbers; we appear fast because the actual path the information takes is quite short, so most of what we do is retrieve information we already have. The parallelism in our brain is not too different from the parallelism found in your average electronic device, so such platforms are much better suited as an alternate implementation than sequential CPUs.)

tl;dr: Human-like general AI is possible today. It's just a matter of choosing a suitable model (there are a few good ones; I'd propose one that we already know to work, such as a mathematical structural model of the brain - not at the neuron level, of course, as that would be too slow), implementing it (faster than realtime if possible), connecting it to a variety of sensors, allowing it to interact with the chosen world (virtual or real; in the virtual case, implementing a suitable world is a hard challenge in itself), and, most importantly, training it (unattended and guided): have it learn spoken and written human language first so it can make use of our vast knowledge. (For faster-than-realtime AIs, one might want to slow the AI down while it learns spoken language or interacts with the real world, if one goes that path; it can run at the full speed of the hardware for things it can learn unattended, such as reading books. Whether one allows the AI to change these internal states depends on the designer, but it would probably be a good idea.)

Name: Anonymous 2010-10-12 12:02

Fresh from slashdot.
Technology: Meet NELL, the Computer That Learns From the Net
Is that you, OP? You should have called your computer NEET.

Name: Anonymous 2010-10-12 12:32

Read: Design for a Brain and An Introduction to Cybernetics, by Ross Ashby.

Stafford Beer's VSM (Viable System Model) is a good framework for representing a "living being". Heavily underrated IMO.

Name: VIPPER 2010-10-12 13:40

JEWSus fucking christ, i would never think that /prog/ was capable of such discussion.

>>55
Back to fibs, please.

Name: Anonymous 2010-10-12 14:03

>>55
You're sending yourself back to fibs?

Name: VIPPER 2010-10-12 14:41

>>56
You're sending yourself back to fibs?
Yes, i barely qualify as codeballmer.

Name: Anonymous 2010-10-13 10:35

>>54
Ashby's stuff is available at: http://www.biodiversitylibrary.org/

At a glance Beer's "VSM" seems sensible enough, but applying it to a biological system is a bit silly. It's a specialization on an approximated general biological system. Working back from that loses overmuch in the transmission. So you've got this perfectly backwards with respect to "living being" and VSM:
Stafford Beer's VSM (Viable System Model) is a good framework for representing a "living being".

OTOH, it might be worth understanding for the sake of familiarizing oneself with the concepts in practice.

Name: Anonymous 2010-10-13 19:01

>>58
Did you just top-quote on my /prog/?

HAHAHAHA
YOU THINK YOURE THOUGH UH ?
I HAVE ONE WORD FOR YOU
  THE FORCED INDENTATION OF THE CODE
GET IT ?
I DONT THINK SO
YOU DONT KNOW ABOUT MY OTHER CAR I GUESS ?
ITS A CDR
AND IS PRONOUNCED ``CUDDER''

OK YOU FUQIN ANGERED AN EXPERT PROGRAMMER
THIS IS /prog/
YOU ARE ALLOWED TO POST HERE ONLY IF YOU HAVE ACHIEVED SATORI
PROGRAMMING IS ALL ABOUT ``ABSTRACT BULLSHITE'' THAT YOU WILL NEVER COMPREHEND
I HAVE READ SICP
IF ITS NOT DONE YOU HAVE TO
TOO BAD RUBY ON RAILS IS SLOW AS FUCK
BBCODE AND ((SCHEME)) ARE THE ULTIMATE LANGUAGES
ALSO
WELCOME TO
/prog/
EVERY THREAD WILL BE REPLIED TO
NO EXCEPTION

Name: Anonymous 2010-10-13 19:17

>>59
You fucked up your spoilersNone. Clearly you are not SATORI enough for /prog/ yet. Read your SICP again.

Name: Anonymous 2010-10-13 19:42

>>59
You're right, this thread didn't have quite enough shit in it. Thank you for keeping it up to /prog/ standards.

Name: Anonymous 2010-10-13 20:04

>>61
You forgot to spoiler you're /prague/.

Name: Anonymous 2010-10-13 20:32

spoilersNone
Damn you, Python.

Name: Anonymous 2010-10-13 21:37

>>62
You forgot your sage, ``faggot'''

Name: Anonymous 2010-10-13 21:38

wait. Why is my message >>62 when it's supposed to be >>63? m00t changed the clock?

Name: Anonymous 2010-10-14 1:20

>>4
Hello there Mr. UCB professor.

Name: Anonymous 2010-10-14 2:32

>>66
Too bad >>4 was lost on everyone who cares. Someone would have brought up Prolog at least.

Name: Anonymous 2010-10-14 13:48

>>62
Oh no he fucking isn't.

Name: >>61 2010-10-14 15:16

>>68
Oh yes I am. wait no that's a bad thing nvm

Name: Anonymous 2011-02-04 14:29

