>>4
Lisp contributed a lot to programming languages, and it's a truly excellent language to write your code in, but it was unfortunate that the perceived "failure" of early AI dragged the language's public image down with it. Early AI wasn't a failure - it advanced computer science considerably, and much of what it produced branched off into other fields to avoid the name "AI", which had acquired bad publicity (researchers promised a lot; due to the limitations of the time they couldn't deliver what the general public was expecting, but they did advance many fields with their research!). The AI Winter was largely a huge PR failure: people making promises they couldn't keep at the time.
>>1
Human-like general artificial intelligence is achievable with today's technology (and could even run in realtime within 5-10 years).
There are a few problems: one is computational cost, the other is training. If you make a truly general AI, the computational cost (for both sequential and parallel architectures) would be too high for us to make practical use of it within our lifetimes - it would be too slow, and that slowness would also make realtime training problematic. The other issue is training: unattended training would be nice, while training the system manually would be incredibly costly and time-consuming, not to mention that it would defeat the whole purpose of general AI - a mix of the two would be most ideal.
So due to physical limitations, we can't yet create "hard AI" that can never be "wrong", but we can achieve something around the level of human intelligence (and likely well beyond it). Human-like intelligence centers on predicting short-term (and sometimes long-term) future events, while using a very efficient memory for storing "memories"/concepts (and "fuzzy" sequences of them).
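To make the "predicting from stored fuzzy sequences" idea concrete, here's a toy sketch (entirely my own hypothetical names and structure, not any particular model from the literature): it memorizes which events tend to follow which, and predicts the most common successor.

```python
from collections import Counter, defaultdict

class SequencePredictor:
    """Toy memory-based predictor: stores observed transitions
    between events and predicts the most likely next event."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        # Record each adjacent pair (event -> next event).
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, event):
        # Return the most frequently seen successor, or None if unknown.
        counts = self.transitions.get(event)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

p = SequencePredictor()
p.observe(["wake", "coffee", "work", "lunch", "work", "home"])
p.observe(["wake", "coffee", "news", "work", "lunch", "walk"])
print(p.predict("wake"))  # -> "coffee" (seen twice, unambiguous)
```

Obviously a real model would predict over noisy, hierarchical, high-dimensional input rather than clean symbols, but the core operation - recall stored sequences, emit the expected continuation - is the same.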
Possible models include integrative ones like
>>7's, or models much closer to how our brain works, like Hawkins' model (
http://numenta.com/ read his book (
http://en.wikipedia.org/wiki/On_Intelligence ) to understand the general concept and Dileep's Thesis (
http://www.numenta.com/for-developers/education/DileepThesis.pdf ) for one possible technical implementation). I believe a working practical solution will be to use models that we already know work (such as one based on our brain/mind - see Hawkins' book for actual implementation details). The basic idea is that our brain is a hierarchical architecture of these "blocks" (they're actually continuous in the brain) with clear inputs and outputs ( see
http://www.pnas.org/content/107/30/13485.full , especially pages 12 and 21 of the "Supporting Information" PDF), and that these "blocks" all perform roughly the same function (I'm too lazy to find citations for all of this, but you should be able to find them yourself after reading the recommended book(s)/paper(s)). However, unlike Hawkins, I believe general human-like intelligence would be easier to achieve by placing the "AI" in an environment with consistent rules - after all, that's the basis of how our brain is supposed to operate: finding "causes" in the world and "predicting" with them. Such placement would be possible either by having it explore the real world in a robotic body with a wide variety of sensors (so that it can form the required correlations, just like we humans do), or in a virtual environment (harder to achieve, but still possible, though I believe anything realistic would be too computationally expensive). Here's a paper on these possibilities:
http://goertzel.org/dynapsyc/2009/BlocksNBeadsWorld.pdf ). Besides the consistent rules of the environment, such an AI would have to learn human language (as imperfect as it may be), so that it could be trained much more easily and learn from our accumulated knowledge - without that, even with human-like intelligence, it would not be much better than a monkey or a feral child: smart enough to predict its immediate environment, identify objects, causes and behaviours and act accordingly, but not the most useful thing. Once spoken human language is learned, it could use it to learn written and then symbolic languages, and of course symbolic reasoning.
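As a very loose illustration of the "hierarchy of identical blocks" idea (a toy sketch of my own, not Numenta's actual algorithms): every level runs the same operation - assign a compact label to each input pattern it has seen before, then group labels for the level above - so higher levels effectively see larger spatial chunks and longer timescales.

```python
class Block:
    """One generic 'block': maps each distinct input pattern
    it encounters to a stable compact label (an integer id)."""

    def __init__(self, name):
        self.name = name
        self.seen = {}       # pattern -> id assigned when first seen
        self.next_id = 0

    def process(self, pattern):
        if pattern not in self.seen:
            self.seen[pattern] = self.next_id
            self.next_id += 1
        return self.seen[pattern]

class Hierarchy:
    """A stack of identical Blocks; each level groups `window`
    consecutive outputs of the level below into one new pattern."""

    def __init__(self, depth, window=2):
        self.levels = [Block(f"L{i}") for i in range(depth)]
        self.window = window

    def feed(self, sequence):
        current = list(sequence)
        for level in self.levels:
            labeled = [level.process(p) for p in current]
            current = [tuple(labeled[i:i + self.window])
                       for i in range(0, len(labeled), self.window)]
        return current  # the top level's compressed view

h = Hierarchy(depth=3)
top = h.feed(["a", "b", "a", "b", "c", "d", "c", "d"])
```

Feeding that repetitive 8-symbol stream, the bottom level finds 4 distinct symbols, the next level finds 2 distinct pairs, and the top sees a single pattern - the kind of invariance-by-abstraction the real models aim for (with learning, prediction, and feedback on top, which this sketch omits entirely).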
Actually implementing such a model could be done today with a reasonably sized cluster or a network of FPGAs; it would still be somewhat expensive, although within the resources of your average company or university. Much cheaper implementations using ASICs are possible, but I'm not too fond of the idea, as it doesn't favor experimentation (such as changing the size of individual "blocks") and the initial cost is rather high. If all goes well, we might see much denser memristor-based devices replace FPGAs within 5+ years - this would reduce the cost a lot, further increase the speed, and simplify the overall hardware design, while allowing a lot of freedom to experiment with the model's overall parameters. It should also be noted that the cluster implementation, though the most flexible with today's technology, is also the slowest and most expensive one (it won't be realtime), while an FPGA (or hypothetical memristor) implementation would be faster than realtime by many orders of magnitude. Slower than realtime would make training in a non-virtual environment too slow and tedious, while faster than realtime could accelerate it a lot: the AI could interface with a computer (either the way a human does, or, for example, by connecting its visual sensors to the VNC of a locally networked computer) and read books and absorb information much faster than a human would, simply because its underlying platform is much faster. Our neuronal network is actually very slow if you look at the numbers; we appear fast because the path the information takes is quite short, so most of what we do is retrieve information we already have. The parallelism in our brain is not too different from the parallelism found in your average electronic device, so such platforms are much better suited as an alternate implementation than sequential CPUs.
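On the "our neurons are actually slow" point, here's a rough back-of-envelope comparison (illustrative, order-of-magnitude figures only, not measurements of any real system):

```python
# Approximate signaling rates (order of magnitude only):
neuron_rate_hz = 200      # a neuron firing near its typical maximum rate
cpu_clock_hz = 3e9        # a ~3 GHz processor clock

speed_ratio = cpu_clock_hz / neuron_rate_hz
print(f"A 3 GHz clock ticks roughly {speed_ratio:.0e}x "
      f"faster than a fast-firing neuron")
```

The brain compensates with massive parallelism and very short information paths, but per-element it's glacial - which is why a sufficiently parallel electronic substrate running the same kind of model could plausibly be faster than realtime.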
tl;dr: Human-like general AI is possible today. It's a matter of choosing a suitable model (there are a few good ones; I'd propose one we already know works, such as a mathematical structural model of the brain - not at the neuron level, of course, that would be too slow), implementing it (faster than realtime if possible), connecting the AI to a variety of sensors and letting it interact with the chosen world (virtual or real - in the virtual case, implementing a suitable world is a hard challenge in itself), and, one of the most important parts, training the AI (unattended and guided): have it learn spoken and written human language first so it can make use of our vast knowledge. For faster-than-realtime AIs, one might want to slow it down while it learns spoken language or interacts with the real world, if one goes that path, and let it run at the full speed of the hardware for parts it can learn unattended, such as reading books. Whether one allows the AI to change these internal states itself is up to the designer, but it would probably be a good idea.
>>6
The Chinese room argument fails on a wide variety of counts. "Consciousness" is a property of the entire system, not of its individual components. "Consciousness" is also a loaded word: there are its functional aspects, and there's the "hard problem of consciousness", which concerns qualia - they're very different things. For AI these questions don't really matter, as they won't alter the system's functionality, although from a philosophical point of view they do. The Chinese room argument also implies 2 highly unlikely and unnatural/illogical things about the nature of qualia, but discussing those here would be far off-topic.
>>9
Training general AIs is a difficult problem, one we must sometimes cheat on (imitate nature where it makes sense, and avoid nature's mistakes where we can) if we want to solve it within our limited lifetimes. Manual training would defeat the purpose (kind of like why expert systems are not really "AI").
So
>>1, what exactly is your idea about?