
Artificial Intelligence

Name: Anonymous 2010-10-05 21:03

I know how to create artificial intelligence. Real artificial intelligence. I conceptualized it in a way that has never been done before. Now I need some talented anon /prog/rammers to help me.

First question: where would you start if you had to reinvent it all?

Name: Anonymous 2010-10-05 21:09

>Real artificial intelligence.
Idiot.

>where would you start if you had to reinvent it all?
The imageboards. Try going back there.

Name: Anonymous 2010-10-05 22:20

Name: Anonymous 2010-10-05 23:49

I will create my AI with six-state logic and conquer the world before you do, >>1.

>First question: where would you start if you had to reinvent it all?

Lisp. It's good for AI.

Now while that idiot is moving towards AI Winter II, no one will stop me.

Name: Anonymous 2010-10-06 1:34

THIS IS BULLSHIT. REAL AI DOES NOT EXIST. GO BACK TO IMAGEBOARDS YOU MEATSACK.

Name: Anonymous 2010-10-06 1:51

>>5
Nice try, Real A.I., but I don't buy the Chinese room "argument".

Name: Anonymous 2010-10-06 2:18

Name: Anonymous 2010-10-06 2:32

Name: soclearteck 2010-10-06 2:55

Real AI should start with collective human knowledge, so as not to recreate the mistakes inherent in evolving systems. You know what that means? We need open fucking systems first; we need to demonstrate and instill concepts of cooperation. Respond to me if you're serious, because I'm close to launching a project that will completely fucking transform the world, and I could use some people with AI smarts.

Name: Anonymous 2010-10-06 3:01

Read up on perceptrons.
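
For reference, a single perceptron with the classic learning rule fits in a few lines of Python. This is a toy sketch of my own (all names made up), trained on AND:

```python
# Classic perceptron with the standard update rule, learning AND.
# Toy sketch: all names are made up for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron rule: w += lr * err * x
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]
```

It converges because AND is linearly separable; it can never learn XOR, which is exactly the limitation that killed the first wave of perceptron hype.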

Name: Anonymous 2010-10-06 7:15

>>4
Lisp contributed a lot to programming languages, and it's a truly excellent language to write your code in, but unfortunately the perceived "failure" of early AI dragged the language's public image down with it. Early AI wasn't a failure - it advanced computer science considerably, and much of what it produced branched off into other fields specifically to avoid the name "AI", which had gotten bad publicity (researchers promised a lot and, due to the limitations of the time, couldn't deliver what the general public expected, but their work still advanced many fields). The AI Winter was largely a PR failure: people making promises they couldn't keep at the time.

>>1
Human-like general artificial intelligence is achievable with today's technology (and even in realtime within 5-10 years).

There are a few problems: one is computational cost, the other is training. If you make a truly general AI, the computational cost (for both sequential and parallel architectures) would be too high to make use of it within our lifetimes (it would be too slow, and this slowness would also make realtime training problematic). The other issue is training: unattended training would be nice, while training the system manually would be incredibly costly and time-consuming, not to mention that it would defeat the whole purpose of general AI - a mix of the two would be ideal.

So due to physical limitations, we can't yet create "hard AI" which can never be "wrong"; however, we can achieve something around the level of human intelligence (and likely much more). Human-like intelligence centers on the prediction of short-term (and sometimes long-term) future events, while using a very efficient memory for storing "memories"/concepts (and "fuzzy" sequences of them).
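
To caricature this prediction-centric view with a toy of my own (obviously not a serious memory-prediction model): memorize transition counts in a symbol stream and predict the most frequent successor.

```python
# Caricature of "intelligence as short-term prediction": memorize
# transition counts in a stream and predict the most frequent successor.
# A toy of my own, not any serious memory-prediction model.
from collections import Counter, defaultdict

class SequencePredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, stream):
        # Store every observed (symbol -> next symbol) transition.
        for prev, nxt in zip(stream, stream[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, symbol):
        # Most frequently seen successor, or None for unseen symbols.
        followers = self.transitions.get(symbol)
        return followers.most_common(1)[0][0] if followers else None

p = SequencePredictor()
p.observe("abcabcabd")
print(p.predict("a"))  # 'b' (follows 'a' every time)
print(p.predict("b"))  # 'c' (seen twice, vs 'd' once)
```

The real point of the better models is exactly what this toy lacks: hierarchy, fuzziness, and feedback, not just flat lookup.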

Possible models include integrative ones like >>7's, or models much closer to how our brain works, like Hawkins' model ( http://numenta.com/ ). Read his book ( http://en.wikipedia.org/wiki/On_Intelligence ) to understand the general concept, and Dileep's thesis ( http://www.numenta.com/for-developers/education/DileepThesis.pdf ) for one possible technical implementation. I believe a practical working solution will be to use models that we know to work, such as one based on our brain/mind - see Hawkins' book for actual implementation details. The basic idea is that our brain is a hierarchical architecture of "blocks" (they're actually continuous in the brain) with clear inputs and outputs ( see http://www.pnas.org/content/107/30/13485.full , especially pages 12 and 21 of the "Supporting Information" pdf ), and that these "blocks" perform roughly the same functionality (I'm too lazy to find citations for all of this, but you should be able to find them yourself after reading the recommended books/papers). However, unlike Hawkins, I believe general human-like intelligence would be easier to achieve by placing the "AI" in an environment with consistent rules - after all, that's the basis of how our brain is supposed to operate: it finds "causes" in the world and "predicts" using them. Such placement would be possible either by having it explore the real world in a robotic body with a wide variety of sensors (so that it can form the required correlations, just like we humans do), or in a virtual environment (harder to achieve, but still possible; I believe anything realistic would be too computationally expensive. Here's a paper on these possibilities: http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf ).
Besides the consistent rules of the environment, such an AI would have to learn human language (as imperfect as it may be), so that it could be trained much more easily and learn from our knowledge. Without that, even with human-like intelligence, it would not be much better than a monkey or a feral child: smart enough to predict its immediate environment, identify objects, causes and behaviours, and behave accordingly, but not the most useful thing. Once spoken human language is learned, it could use it to learn written and then symbolic languages, and of course symbolic reasoning.

Actually implementing such a model could be done today with a reasonably sized cluster or a network of FPGAs; it would still be somewhat expensive, although within the resources of your average company or university. Much cheaper implementations using ASICs are possible, but I'm not too fond of the idea, as it won't favor experimentation (such as changing the size of individual "blocks") and the initial cost is rather high. If all goes well, within 5+ years we might see much denser memristor-based devices replace FPGAs - this would greatly reduce the cost, further increase the speed, and simplify the overall hardware design, while allowing a lot of freedom to experiment with the parameters of the model. It should also be noted that the cluster implementation, even though it is the most flexible with today's technology, is also the slowest and most expensive one (it won't be realtime), while the FPGA (or hypothetical memristor) implementation would be faster than realtime by many orders of magnitude. Slower than realtime would make training in a non-virtual environment too slow and tedious, while faster than realtime could accelerate it a lot: the AI could interface with a computer (either the same way a human does, or, for example, by connecting its visual sensors to the VNC of a locally networked computer) and read books and absorb information much faster than a human would, simply because its underlying platform is much faster. (Our neuronal network is actually very slow if you look at the numbers; we only appear fast because the actual path the information takes is quite short, so most of what we do is retrieve information we already have. The parallelism in our brain is not too different from the parallelism found in your average electronic device, so such platforms are much better suited as an alternate implementation than sequential CPUs.)

tl;dr: Human-like general AI is possible today. It's just a matter of choosing a suitable model (there are a few good ones; I'd propose one that we know to already work, such as a mathematical structural model of the brain - not at the neuron level, of course, that would be too slow), implementing it (faster than realtime if possible), connecting the AI to a variety of sensors, allowing it to interact with the chosen world (virtual or real; for virtual worlds, implementing a suitable one is a hard challenge as well), and - one of the most important parts - training it (unattended and guided): have it learn spoken and written human language first so it can make use of our vast knowledge. (For faster-than-realtime AIs, one might want to slow it down while it learns spoken language or interacts with the real world, if one goes that path. It can run at the full speed of the hardware for the parts it can learn unattended, such as reading books. Whether one allows the AI to change these internal states depends on the designer, but it would probably be a good idea.)

>>6
The Chinese room argument kind of fails on a very wide variety of counts. "Consciousness" is a property of the entire system, not of individual components. "Consciousness" is also a loaded word: there are its functional aspects, and there is the "hard problem of consciousness" which concerns qualia - very different things. For AI these questions don't really matter, as they won't alter the system's functionality, although from a philosophical point of view they do. The Chinese room argument also implies two highly unlikely and unnatural/illogical assumptions about the nature of qualia, but discussing those here would be far off-topic.

>>9
Training general AIs is a difficult problem, one we must sometimes cheat on (imitate nature where it makes sense, and avoid nature's mistakes where we can) if we want to solve it within our limited lifetimes. Manual training would defeat the purpose (which is roughly why expert systems are not really "AI").

So >>1, what exactly is your idea about?

Name: Anonymous 2010-10-06 8:46

>>11
Cool story, bro.

Name: Anonymous 2010-10-06 10:42

>>11
Reasonable post in my /anus/. WTF?

Name: Anonymous 2010-10-06 14:53

>>1
>I know how to create artificial intelligence. Real artificial intelligence.
No, you don't.

>a way that has never been done before
Yes, it has.

>I need some talented anon /prog/rammers to help me
You can't program, but somehow you've invented "real artificial intelligence." Odd. Also, everyone on /prog/ is already working on some sort of "real artificial intelligence," so no one wants to implement your bad idea.

Name: Anonymous 2010-10-06 15:05

>>11
Holy SHIT.  You have some scattered statements that border on valid...  But you also have some incredibly poor assumptions and assertions.  The most obvious is this bizarre notion that "we could do it if we wanted, we just don't want to yet."

No one knows how to implement "strong AI" (that's the common terminology, not "hard AI"). There are endless theories on how we might do it, of which that "Numenta HTM" shit is one of the weakest. Just simulating a human brain has been proposed over and over, and just as often it has been pointed out that there is no evidence the approach would work, even if you simulated every neuron.

Name: Anonymous 2010-10-06 16:12

>>15 Re: machine simulation of neural connections.
 
I worked at a lab where they were trying to study human consciousness and what brain structures it uses (fMRI and the like). At the end of the day, the best they have been able to say is "loldunno".

Very obviously, we don't yet know enough about how natural intelligence works to even begin to simulate it in a machine context.

That said:
I, for one, welcome our new machine overlords...

Name: 11 2010-10-06 16:25

>>15
>The most obvious is this bizarre notion that "we could do it if we wanted, we just don't want to yet."
The problem is that few people have even tried. There are early general AI models which are useful for some small inference problems, but training them is about as hard as training expert systems, if not harder, and some of those models are too computationally expensive to be useful for much besides theorem proving.
Then you have some newer models, some of which are promising, but few have actually been tried for the task of achieving human-level intelligence. I'd say the real problem is that not enough effort has been put into verifying whether those models would work in the real world (due to lack of time, resources, funding, etc.).
>There are endless theories on how we might do it, of which that "numenta HTM" shit is one of the weaker.
See the previous point: people have yet to verify some of these theories. Some of them seem quite promising, actually, but if people are not committed to implementing them in full, we will never know whether they work (and if not, what their shortcomings are). Since you claim that Hawkins' idea is not a good one, can you give a reason why not, and say which theories are better?
>Just simulating a human brain has been proposed over and over, and just as often, it has been pointed out that there is no evidence that this approach would work, even if you simulated every neuron.
I didn't propose that. The problem with simulating a human brain is the sheer size and the time it would take to simulate, not to mention that you would need to provide sensory input and have it match the motor commands (it would have to be fully consistent with the world, which is what we evolved to live in); if they didn't match, the brain would not be able to learn how the "world" works, or such learning would be greatly impeded. So full human brain simulation is not a true solution to general AI, although it may be useful for better understanding human intelligence or verifying some current theories. It may be of more interest to simulate neocortical columns to confirm (as there are some highly plausible theories about how they work) or discover (in the event that current theories are incorrect) their functionality. As far as I know, there is at least one neocortical column simulation project ( http://bluebrain.epfl.ch/ ). What I proposed instead is to implement the current high-level models of the neocortical column - this is within our abilities, and if those models are good enough, you could create an AI which would exhibit functionality similar to that of the mammalian brain, and with enough tweaking/"scaling", of the human one. Still, even if you implement such a model, it would have to be fed rich (realtime) sensory data (as that's what our brain evolved to process), and it would have to have a way of expressing change in the world (motor commands) which can then be validated against the sensory data it gets. I could elaborate on why I think the latter is important, but I suggest you consult the literature on these models to better understand my reasoning.

So my question to you >>15 is:
Why do you think these models are no good? We can't know how suitable they are until we try, and it's a bit annoying that few people are willing to invest the time and money into implementing and testing them.
I'm a bit biased toward thinking that biologically inspired models have a higher chance of success, since we already know that "we" are intelligent - that's how we define intelligence in the first place.

Name: Anonymous 2010-10-06 16:25

Name: Anonymous 2010-10-06 16:36

Here's a summary of my experience with all these programming languages:

Python:
I never learnt it, but I've used programs coded in Python to do everything from simple GET requests to RFIDiot. Mega high level to mega low level shit. This programming language is unreal.
Not to mention it runs on Win/Mac/*nix.
I've also edited some Python to add functionality without knowing shit about Python. It's quite easy to read.

I do 'know' PHP. Never had a formal lesson on it, but with just #PHP and php.net I made a secure world-leading forum on computer security. Yeah, security. PHP doesn't suck; people just give it a bad name because it's simple and idiots code with it too, and make shit websites. Don't want an insecure pile of shit? Don't code an insecure pile of shit. The language has nothing to do with it.

C++/+/# gets used a lot in exploits and other niche programs. I hate having to compile it because it rarely fucking works.
On the other hand, I run Gentoo... so... couldn't really live without GCC.

Java is what you learn in school after you play that frog game. Perl looks like it should be like Python, but instead it's just not.

There's also one major one (in terms of usefulness of code) which isn't up there: Cobalt. If all the Cobalt in the world was deleted there would be a serious fucking shitstorm.

Name: Anonymous 2010-10-06 17:42

>First question: where would you start if you had to reinvent it all?
Well, I'd probably want to interact with this new AI, so probably
#include <stdio.h>

Name: Anonymous 2010-10-06 18:16

>>19
>Don't want an insecure pile of shit? Don't code an insecure pile of shit. The language has nothing to do with it.
Except register_globals, magic_quotes and safe_mode. PHP makes it very, very easy to create insecure programs and security holes, and it encourages shitty programming techniques. On top of that, the standard library is very powerful, but its modules range from somewhat messy to completely messy.

>C++/+/#
C compilers suck, and so do C++ compilers, as far as the options needed to make programs compile go, and the inability to specify compilation options within the source. But what really, really sucks about C and C++ is that they don't have a module system, just a shitty series of includes and the idea of compiling a pool of shit, then linking in an even bigger pool of shit. This makes compilation slow and tedious. And what's even worse is that in order to speed up compiling this piece of shit there are makefiles, which grow to insanity in any medium-sized project; thus you have configure, a several-hundred-kilobyte shell script that checks for the same stupid shit all the time and builds an unintelligible makefile; and in turn m4, a macro processor nobody asked for, and tools to generate the configure file (because you ain't writing that shit) from even more configuration files written in yet another language. The GNU build system is so queer I puke to think about it.

I don't write C programs because it feels like diarrhea soup, and never bothered to learn C++ because it feels like diarrhea pizza. But what I will definitely never do is choke on all that pile of bullshit the configure thing is; I'll leave it to people who are less mentally stable and have more free time than I do.

Oh, IHBT.

Name: Anonymous 2010-10-06 19:41

>>17
>The problem is that few people have even tried it.
I have never seen someone so incredibly verbose and simultaneously so incredibly uninformed on a topic.  What are you, some kind of lab assistant who half overhears shit that actual researchers are doing, then spouts bits and pieces of it onto 4chan?

Name: Anonymous 2010-10-06 20:05

>>22
Probably some kind of paranoid schizophrenic bipolar etc etc...a goddamn nut!

Name: Anonymous 2010-10-06 20:09

>>22
Are you claiming people have tried out all these models? A lot of them barely got anything more than a simple software model (while others never got further than a quick MATLAB project). The field itself is rather small, with not that many people working in it and not enough resources being invested. Just the other month I remember reading a paper showing how feasible implementing certain new promising models would be, yet few people have actually done so.

And no, I'm not a lab assistant, nor am I an active researcher in this particular domain, although I've been keeping an eye on published books and papers. I don't really have the time, nor the money to be involved in it currently, but I hope to try out some of my ideas in a few years.

Name: Anonymous 2010-10-06 20:53

>>19
>There's also one major one (in terms of usefulness of code) which isn't up there. Cobalt. If all the Cobalt in the world was deleted there would be a serious fucking shitstorm.
IHBT, and yet… I cannot resist.

Name: Anonymous 2010-10-06 21:43

If all the brainfuck in the world were deleted, that would spell the end of civilization.

Name: Anonymous 2010-10-06 23:07

>>26
Don't get rid of Brainfuck.
It's a simple language.  It's a good language.  It's never done anything to harm you.

Name: Anonymous 2010-10-06 23:40

IHBT, but people are working on understanding natural languages:

http://www.nytimes.com/2010/10/05/science/05compute.html?_r=2&pagewanted=1&ref=technology

Name: Anonymous 2010-10-07 0:06

>>16
Hey. Can I get a job where I can spend money on fMRIs, say "loldunno" at the end of the work day and go drink beer afterwards? That sounds awesome.

Name: Anonymous 2010-10-07 0:46

>>24
I'm claiming that strong AI is a very active field of research.  Frankly, all the "models" you've listed aren't very interesting, relative to what's really going on in AI research, so, no, I am not making any claims as to whether people have tried those particular approaches.

If you're really interested in the field, you might start with the obvious: http://en.wikipedia.org/wiki/Strong_AI

It's just strange that you're so long-winded about this one obscure niche within the AI field, while true AI research has somehow passed you by entirely unnoticed. If you take the time to even go through the Wikipedia article, you'll see that there are entire "institutes" (more than one) that are well funded and making real progress toward strong AI. There's also a very brief mention of this "Numenta" that you're talking about.

What's holding AI back isn't that we just haven't gotten around to it yet.  If someone in the AI community had an approach to strong AI that was worth a shit, we'd have strong AI right now.

Name: Anonymous 2010-10-07 0:50

>>29
You can't be like me.

It's even better than you realize; today we were drinking home-brewed beer at work!

Although I no longer work at the place I mentioned in >>16, it's all good, as the private sector stays on top of things IT-wise a lot more than the uni world does. Ooooooh, shiny toys with pretty blue lights, racks and racks of them...

Name: Anonymous 2010-10-07 0:53

>>26
BF is the usual tarpit of choice for proving Turing-completeness. Getting rid of it probably wouldn't make anything worse, though, since there are tarpits that are easier to implement than BF.
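
Part of why it's the tarpit of choice: a usable interpreter is a handful of lines. A quick sketch (mine; omits the ',' input command and assumes the program's brackets are balanced):

```python
# Minimal Brainfuck interpreter: enough for the usual Turing-completeness
# reductions. Sketch only: no ',' input command, assumes balanced brackets.
def bf(program, tape_len=30000):
    tape = [0] * tape_len
    out = []
    jumps, stack = {}, []          # precompute matching bracket pairs
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = ptr = 0
    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]         # cell is zero: skip the loop body
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]         # cell nonzero: jump back to loop start
        pc += 1
    return ''.join(out)

print(bf("++++++++[>++++++++<-]>+."))  # 8*8 + 1 = 65 -> 'A'
```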

Name: Anonymous 2010-10-07 1:04

>>30
I don't think I claimed that Numenta's model is the only way, or that it's the most viable solution around; their model is nothing more than a couple of implementations of a given theoretical model of the neocortical column. The actual model may not be perfect, but it does seem very close to what other people doing research on the neocortical column have been proposing. It would be validate (or disprove, so we can move forward) these hypotheses about the function of the neocortical column, for example by means of simulation.

What I'm claiming is that a good model of the neocortical column, and a reasonable hierarchical network of such models, could lead to mammalian-like and quite possibly human-like general artificial intelligence that would be practical to implement and train at a lower cost and in realtime (of course, it's more complex than that - there's the problem of sensors and feedback). In no way am I claiming that this approach is the only one - it's one of many - but I do strongly believe it is worth pursuing. There are of course many other approaches to general AI, and this particular one has its own shortcomings.

Name: >>33 2010-10-07 1:06

s/It would be validate/It would be nice to validate/

Name: Anonymous 2010-10-07 1:10

>>28
References to papers in peer-reviewed journals or GTFO.

Name: Anonymous 2010-10-08 7:24

>>33
I wonder if your basic fallacy isn't thinking that if we slap together something complicated enough, it will work.
At this stage, where we can hardly outsmart insects at some specific tasks, just implementing known techniques at large scale and duct-taping them together wouldn't even give useful data.

Name: >>33 2010-10-08 8:05

>>26
No, I actually believe the problem is simpler than we think.
I tried to work out in my mind how a lot of the usual mental tasks we do could arise out of this one specific model, and to some degree it seemed to work.
In practice, this model has been proven good enough to identify specific objects in a (moving/animated) scene, identify sounds and other recurring spatial/temporal patterns, and form "correlations" between inputs. It has been shown to classify objects and reconstruct images, not unlike what we do in our brain through the V1<->V2<->V4<->IT circuit. It shows reasonable results at the small scale; however, the model itself is just a simplification of a theoretical model of how our brain might work. I'd really like to see how it behaves at the large scale (increase the sensory inputs, implement a way for feedback - for example, motor control) in a real-world environment. Of course, for it to work in a real-world environment it has to be faster than realtime, which can't be achieved without a hardware implementation. I should also mention that I'm not proposing to implement a huge neural net or anything of that sort - that might be useless, and you wouldn't be able to glean useful data from it. These models are actually debuggable to some extent, although since they're mostly made to learn unattended (though that's not the only way to train them), actually making sense of some of the data can take effort (however, much less than analyzing even tiny neural nets. Think of the difference between analyzing a random analog circuit and analyzing a human-designed digital circuit: the analog one is highly unpredictable and may require fairly advanced math to make sense of, while the human-designed digital one can be understood if you know the basic principles involved).

Of course, I may be wrong and overestimating the possibilities of some models; they might not perform nearly as well at the large scale as they do at the medium scale (for example, processing moving picture/video data) or the small scale (tiny image recognition, pattern detection, ...). However, it is the next step I would try if I had the time and resources.

Name: >>37 2010-10-08 8:06

Erm, I meant >>36.

Name: Anonymous 2010-10-08 11:07

>>38
Don't worry, nobody will read this blanket anyway.

Name: Anonymous 2010-10-08 11:34

>>38
Well, it's too late now; you just crashed half of the participants in this discussion.
