
So when will the day come,

Name: !L33tUKZj5I 2010-11-10 5:55

When programmers aren't needed anymore and we just tell robots to do the programming? If I were you guys I'd be scared for your jobs. If you don't have a job, then these robots would make your existence pointless anyway.

Name: Anonymous 2010-11-12 22:59

>>23
I think you're a bit confused.

Considering that we haven't got a clue how consciousness functions, how can we build an AI?

There are two problems concerning consciousness.

The "simple" one is merely how cognitive functions work, how "thoughts" are generated, and so on. The "simple" problem is about understanding the physical processes at work at the low level (the physical processes that make neurons work, and the interactions between them) and at the high level (interactions between functional blocks of the brain, and so on - even if in reality there is a continuity between regions, with no clear delimiter). There are a lot of books covering the low-level processes, and some which attempt to construct high-level theories (some of which are quite logical and possibly how we actually work).
The "hard" problem of consciousness is the existential one: what it is "to be like something" - it concerns our perception and the nature of qualia. There is nothing physical that indicates we have qualia. Either this raw/basic form of consciousness doesn't exist - it's an illusion produced by our brain, and we are simply hardwired to believe in it (see "Consciousness Explained" by Daniel Dennett for this view) - or this type of consciousness is a property of the world we live in, whether non-physical (see David Chalmers' "The Conscious Mind") or physical (see Penrose's books, since that appears to be your viewpoint).
The view Dennett gives is rather logical and might be how it actually works, however I find it terribly hard to believe in, as that's probably how I'm hardwired to be.
The view Chalmers gives is a bit more interesting: it supposes that consciousness simply forms in any system organized such that it can form, and that system would experience qualia and so on - it's not something you need to engineer at all, it just appears naturally and is a basic property of everything in the world.
The view that Penrose/Searle gives is actually the hardest for me to stomach, and it's one of the most unpopular views (the one you seem to think is true: consciousness is due to quantum entanglement). If his view were true, it would lead to some really unlikely conclusions.
>Current research points to consciousness requiring quantum entanglement, that means we are a long long long time away from any processing power of that sort.
I'd really like a citation on this. The only one who seems to believe this is Penrose, and this view is highly unpopular.

You should read the literature and draw your own conclusions on this problem.

However, regardless of what point of view you take on this "hard" problem of consciousness, you will notice that it's not required at all for building (strong) AIs. The only thing that changes is whether you think such AIs can achieve a consciousness similar to yours or not. It doesn't mean that such AIs can't achieve the same level of intelligence as you, or much better.

I'd also like to understand why you think completely understanding consciousness (the hard problem, especially) is of any importance to building general AIs. It may be something we humans are puzzled about, but I really doubt it has any bearing on whether we can build them or not.

I've been loosely following various neuroscience and AI research, and I do think we'll be able to achieve human-level AI within 50 years or less (probably much less). I'd also be interested in doing research in some of these fields (and some related ones) someday.
