What would happen if someone REALLY developed an AI?
It would probably mindfuck religious people who believe in "souls" if the way they think and feel were spelled out as an algorithm.
Some jobs would become redundant.
If it was smart enough to use for science, there could be some cool new technology.
Giving AIs political rights would be a huge mess. But it might be easy if they were given no rights.
If there was an accurate model for human behavior, all sorts of psychological and institutional problems might be solved.
Why would someone develop an AI?
- Because it would be the most important invention of the whole century.
Name:
Anonymous2010-07-12 7:55
AI with human-level intelligence isn't necessarily any smarter than a couple of Indian programmers. What people want is a super-human AI that would self-improve. The problems:
1. We don't even have human-level AI.
2. No self-improvement methods have been discovered yet that would make an AI much smarter than it already is.
3. Neural networks and probabilistic/statistical methods require a huge investment of time/memory with really low payoff and limited learning potential.
That aside, we don't need human-level intelligence for it to have an impact - a limited mental capacity is fine - but if it can think for itself and solve simple problems without assistance, that alone will be a huge leap forward.
Name:
Anonymous2010-07-12 8:12
>>6
I think they should start off with a blind-dog-level AI. That seems a lot closer to a human than machines that do much reasoning and planning. It seems like AI research is focused on modeling how scientists think, while normal people think in a simpler way and grow up by being wrong a lot.
Name:
Anonymous2010-07-12 10:30
>>5 the most important invention of the whole century
That's sort of an understatement
People forget that we're going to have to program the damn thing to be able to program itself to improve itself.
We haven't even been able to figure out a good theory of how to build the hardware, much less build the software, much less build clever software to make it self improving.
The human race will probably reach resource limits before we're able to figure it out. And we'll probably build the damn thing to run on water or something and fuck ourselves.
A.I. research can't really advance until we have a thorough understanding of natural intelligence first. You could say that the A.I. research thread is waiting for the results from the neurology research thread to finish. The first A.I. will probably be just a dumb software emulation of neurological and biological properties and behaviors, just running on whatever super-fast processors we have at the time. Maybe quantum computers (which may be necessary to emulate certain biology).
You just want a computer to masturbate you.
That is correct, I'm looking for a CAM (Computer Assisted Masturbation) system, preferably one with a speech synthesis module and a large database of dirty phrases. It would be great if it had an extension language. Is there an Emacs package for this? I've searched CPAN but haven't had any luck.
I don't think anybody takes seriously the idea that the brain uses quantum processing.
Name:
Anonymous2010-07-12 15:16
>>20,12
Yeah, no. Neurons are many orders of magnitude outside the range of quantum effects in size, speed and temperature. And even if there were quantum effects, we wouldn't need a quantum computer to simulate them.
About the only serious scientist who thinks quantum effects give us consciousness is Roger Penrose, and he's not a neuroscientist.
>>20 >>21
I didn't say that quantum computers would be needed to simulate quantum effects in neurons, just that they might be needed to efficiently implement biological processes. Just like you can theoretically break an RSA cipher with a classical computer, but it would only become practical with quantum algorithms. Obviously RSA has nothing to do with quantum mechanics, but that doesn't stop quantum algorithms from breaking it. (I don't understand quantum computing much, but am trying to understand more, so forgive the vagueness in my statements.)
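To make that analogy concrete, here's a toy sketch (my own illustration, nothing rigorous) of classical trial-division factoring. It works fine for small numbers but takes time exponential in the bit-length of the modulus, which is exactly why breaking real RSA classically is "theoretically possible" but hopeless in practice, and why Shor's quantum algorithm is such a big deal:

```python
def trial_division(n):
    """Factor n by trying every candidate divisor up to sqrt(n).

    Runs in O(sqrt(n)) steps, i.e. exponential in the number of bits
    of n -- fine here, hopeless for a 2048-bit RSA modulus. Shor's
    algorithm does the same job in polynomial time on a quantum computer.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever is left over is prime
    return factors

# A toy "RSA modulus": the product of two small primes, 53 * 61.
print(trial_division(3233))  # → [53, 61]
```

Same relationship as in the analogy: the problem itself (factoring) has nothing quantum about it, yet a quantum algorithm changes what's practical.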
Name:
Anonymous2010-07-12 16:15
>>5
That's not a soul. Descartes was wrong. I am aware, therefore I am. A machine can never be truly aware the way a human can.
>>25 A machine can never be truly aware the way a human can.
Do you have anything resembling an argument to back up that bullshit assertion, or are you just talking out your ass?
I'm pretty sure that my laptop is aware. Also, I think my dog is the Sussman.
Name:
Anonymous2010-07-12 17:10
>>29
OK, I am talking out my ass a little bit. However, we really don't understand the true nature of consciousness, or why humans couldn't operate in precisely the same manner without a sense of self.
Awareness doesn't need a soul. Which is nice, since there isn't any evidence of a soul.
And, really, how aware are human beings? There's a ton of shit we aren't set up to perceive, and lots that doesn't get beyond low-level processes. After that there's the vast amount of data we've evolved to filter out. Then there's what we process just plain wrong.
>>32
We don't understand it → it must be supernatural? Or what reason do you have for believing it's not purely mechanical, and therefore replicable in machines?
>>36
Thanks, Rand(all), for your redundancy in this thread.
Name:
Anonymous2010-07-12 18:00
>>36
An A.I. would have qualia realized logically as bits (which are themselves realized physically in numerous ways), just as human qualia are realized physically. Qualia supervene on the physical.
Most kids go through a phase where they wonder if they are the only conscious person, and everyone else is a robot or a mindless actor.
You eventually just sort of take it on faith and ignore it.
You can't know if other people are REALLY FORREALS THERE, and you won't know if a sophisticated-enough AI is.