>>14
Consciousness can perceive time in different ways. So I thought about coding an app that perceives time retroactively.
If you have a conscious program, all you have to do is slow it down or speed it up. How can you get unbounded speedup? Speedup is always relative to something. If you can slow yourself down with respect to some perceived "physics", that is the same as the physics getting faster. What happens if you run a VR (Virtual Reality) and a SIM (Substrate-Independent Mind) on the same VM (Virtual Machine; in this case I assume it runs on a true Turing Machine, not merely a Finite State Automaton), but allow either to adjust its own priority? The SIM can increase its priority without bound, so it can find itself running as fast (or as slow) as it likes relative to the VR. The VM's speed or implementation shouldn't matter either: the subjective notion of time and the physical notion of time are very different.
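Here's a toy sketch of what I mean, assuming a crude cooperative scheduler where each process gets `priority` steps per round (the names `Process`, `run_vm` etc. are made up for illustration, not any real API):

```python
# Toy cooperative VM: a "VR" process and a "SIM" process share the scheduler.
# If the SIM raises its own priority, it gets more steps per VR step, so its
# subjective time speeds up relative to the simulated world. Note the VM's
# absolute speed never appears anywhere; only the ratio of steps matters.

class Process:
    def __init__(self, name, priority=1):
        self.name = name
        self.priority = priority
        self.steps = 0  # the process's subjective clock

    def step(self):
        self.steps += 1

def run_vm(processes, rounds):
    # Each round, every process gets `priority` steps.
    for _ in range(rounds):
        for p in processes:
            for _ in range(p.priority):
                p.step()

vr = Process("VR", priority=1)
sim = Process("SIM", priority=1)
run_vm([vr, sim], rounds=10)   # equal priority: equal subjective time
sim.priority = 10              # the SIM bumps its own priority
run_vm([vr, sim], rounds=10)   # now the SIM lives 10 steps per VR step
# vr.steps == 20, sim.steps == 110
```

Nothing stops the SIM from setting its priority to 1000 next, which is the "unbounded relative speedup" above.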
The problem is how the hell do I control what an app "perceives"? How are the conscious patterns found?
That's a hard problem. You could say it's the input data, but that view would be incomplete - we don't perceive our input as it is, but merely processed/compressed versions of it. The input that we get from our senses is very noisy, but we almost always have a crystal-clear perception/model-of-the-world in our mind. Some reading that I like on this subject is Hawkins' "On Intelligence" book, feel free to read it. The problem is close to that of building Artificial General Intelligence, although it only applies to some such systems, not to all of them (some AGI researchers still seem to dream of making unembodied AIs which don't learn from the environment, but I don't think that's doable from scratch - you need to get knowledge from somewhere, and feeding knowledge in by hand is unrealistic if you want truly general intelligence).
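A crude analogy for the noisy-senses-but-stable-percept point: even a single exponential moving average turns a jittery sensor stream into a stable internal estimate. (This is just a sketch of the filtering idea; real perceptual models like the ones Hawkins describes are hierarchical predictors, not one running average.)

```python
# The raw "sense data" is noisy, but the internal estimate is stable:
# a simple exponential moving average as a stand-in for perceptual filtering.

def ema(samples, alpha=0.1):
    estimate = samples[0]
    for s in samples[1:]:
        estimate = (1 - alpha) * estimate + alpha * s
    return estimate

true_value = 10.0
# deterministic "noise": the sensor alternates +/-2 around the true value
noisy = [true_value + (2 if i % 2 else -2) for i in range(200)]
percept = ema(noisy)
# percept sits within a fraction of a unit of 10.0 despite the +/-2 noise
```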
On the subject of pattern recognition, there are a variety of interesting books/papers on how to induce functions/patterns from data. I could look some of them up if you're interested, although I don't have them here right now.
Some interesting videos on the subject that you might like can be found here:
http://agi-school.org
I assume you are the same dude I talked to about a month or so ago who said that "consciousness finds itself".
I think it was me.
Does this mean that only the consciousness itself can perceive itself, i.e. I cannot do any calculations or computations of the consciousness of someone else?
You can run a program that acts intelligently with regard to you and is likely conscious as well. You cannot literally experience it, although I do think you can translate someone's experiences into ones you can perceive if you have a program trace (sometimes, not in general).
Here's a tentative example:
http://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011
Of course, if you have no way of translating some experience, I don't see how you could have it without modifying one's cognitive architecture.
Also, it's possible to have programs that (in theory) cannot be reverse engineered in any trivial manner (
http://en.wikipedia.org/wiki/Homomorphic_encryption ) or that have properties you cannot decide in general (
http://en.wikipedia.org/wiki/Rice's_theorem ), thus in a way, only the program truly knows what it's like to be itself and you can at best guess.
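To make the homomorphic-encryption point concrete, here's the classic textbook-RSA toy (tiny primes, utterly insecure, illustration only): multiplying two ciphertexts yields a ciphertext of the product, so whoever runs the program can compute on its state without being able to read it.

```python
# Textbook RSA is multiplicatively homomorphic: enc(a) * enc(b) = enc(a*b mod n).
# Standard toy parameters, far too small for any real security.
p, q = 61, 53
n = p * q                # 3233
e, d = 17, 2753          # public/private exponents, e*d = 1 (mod phi(n))

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 12, 7
product_ct = (enc(a) * enc(b)) % n   # multiply the *ciphertexts* only
result = dec(product_ct)             # decrypts to a*b = 84
```

The party doing the multiplication never sees 12, 7, or 84, which is the sense in which a program's internals can stay opaque to the machine executing it.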
>>15
Sure, that's one way to go about it. Try taking a look at OpenCog or some of Ben Goertzel's work, some of it should match your criteria. Also read that book I mentioned earlier.