>>44
My argument was pretty much just a loose philosophy version of John Searle's Chinese Room argument: without symbol grounding, a program is purely syntactic, just following a set list of instructions without understanding what any of it means.
Merely acting human, not thinking human. Weak AI is weak.