(define (understand? concept)
  ;; understanding is true for cached concepts
  ;; understanding is true for non-simple concepts which
  ;; can be broken down and understood in pieces
  ;; understanding is attained by producing an explanation
  (or (cached? concept)
      (and (not (simple? concept))
           (all understand? (deconstruct concept)))
      (let ((explanation (explain concept)))
        (and explanation
             (begin (cache concept explanation) #t)))))
; some procedures omitted
(define (explain concept)
  ;; to explain a concept requires programming it to the previous AI
  ;; programming is a subset of theorem proving
  (theorem-prover (make-programming-problem (previous-ai) concept)
                  (lambda (explanation . _) ;; success; ignore the proof details
                    explanation)
                  (lambda () ;; failure
                    #f)))
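For what it's worth, here is a smoke test of the idea. All the omitted procedures (cached?, simple?, deconstruct, all, theorem-prover, etc.) are hypothetical stand-ins invented just to make it run, and the definitions are restated so the snippet is self-contained:

```scheme
;; Hypothetical stand-ins for the omitted procedures.
(define *cache* '())
(define (cached? concept) (assq concept *cache*))
(define (cache concept explanation)
  (set! *cache* (cons (cons concept explanation) *cache*)))
(define (simple? concept) (not (pair? concept)))   ; lists model compound concepts
(define (deconstruct concept) concept)             ; a compound concept is its parts
(define (all pred lst)
  (or (null? lst) (and (pred (car lst)) (all pred (cdr lst)))))
(define (previous-ai) 'ai-n-minus-1)
(define (make-programming-problem ai concept) (list ai concept))
;; Toy prover: can "explain" anything except qualia.
(define (theorem-prover problem success failure)
  (if (memq 'qualia problem)
      (failure)
      (success problem 'proof 'axioms 'lemmas 'depth)))

(define (explain concept)
  (theorem-prover (make-programming-problem (previous-ai) concept)
                  (lambda (explanation . _) explanation)
                  (lambda () #f)))

(define (understand? concept)
  (or (cached? concept)
      (and (not (simple? concept))
           (all understand? (deconstruct concept)))
      (let ((explanation (explain concept)))
        (and explanation
             (begin (cache concept explanation) #t)))))

(display (understand? 'gravity)) (newline)               ; #t
(display (understand? 'qualia)) (newline)                ; #f
(display (understand? '(mass gravity orbits))) (newline) ; #t
```

Note the success continuation uses a rest argument rather than repeating `_`; standard Scheme requires lambda formals to be distinct identifiers.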
That may be a bit too high-level, but that may also be what you intended.
General intelligences such as ourselves just work with continuous input data streams coming from the environment (the senses). These are compressed (hashed) based on spatial and temporal context into smaller chunks ("thoughts"), which are themselves subject to the same function/procedure, and so on up the hierarchy of our neocortex, until all sensory input has been integrated into coherent thought patterns. All of this happens in real time, while data is also being fed back to the environment (motor control), which of course alters what is being sensed. The cortex ends up building associations and patterns, patterns of patterns (... of patterns). It is a continuous process, with the neurons expressing what is predicted (expected, thought, imagined, ...) to happen next, i.e. the next expected pattern, based on the current context (the currently firing neurons).
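That recursive chunking can be sketched very loosely: group the stream into windows, summarise each window into one chunk, and repeat until a single top-level pattern remains. The window size and the summariser below are arbitrary placeholders, not a claim about how cortex actually parameterises this:

```scheme
;; Split lst into consecutive windows of at most n elements.
(define (chunk lst n)
  (if (null? lst)
      '()
      (let loop ((k n) (rest lst) (window '()))
        (if (or (zero? k) (null? rest))
            (cons (reverse window) (chunk rest n))
            (loop (- k 1) (cdr rest) (cons (car rest) window))))))

;; Stand-in for spatial/temporal compression of one window.
(define (compress window)
  (cons 'pattern window))

;; Apply the same procedure up the hierarchy until one pattern remains.
(define (integrate stream)
  (if (null? (cdr stream))
      (car stream)
      (integrate (map compress (chunk stream 2)))))

(display (integrate '(a b c d))) (newline)
;; => (pattern (pattern a b) (pattern c d))
```

The point of the sketch is only the shape: the same `compress` is applied at every level, so "patterns of patterns" falls out of the recursion for free.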