gurrier said:
That's actually a category error
The program is the important thing - the algorithms, data structures, and the inputs and outputs. You could build it with valves or mirrors if you wanted.
Semantics are a diversion. As I explained above, you can bind whatever symbols you like to whatever semantics you want - the hard part is knowing how to map the semantics onto the real world.
Most thinking mathematicians, and virtually all thinking cognitive scientists, reject Penrose's conclusions. Gödel may have shown that there are true statements no formal system can prove, and Turing that the halting problem is not computably decidable, but neither result demonstrates that there is anything non-computable going on in human brains.
Well, I'm not sure you're right about that. In any case, it's not much of an argument that most people reject Penrose's conclusions - it's no better than the support Penrose offers for those conclusions in the first place.
And I think he's onto something. There's a demonstration of Pythagoras' theorem - not a proof, a demonstration - that works by tiling the plane with squares of two different sizes, then drawing lines from corner to corner of the squares to form triangles and larger squares. Then you "see" that it just has to be the case that the area of the square on the hypotenuse is equal to the sum of the areas of the two smaller squares. It's just obvious. But it's not a proof. You just see it, and yet it's easier to follow than a formal proof. And a computer definitely wouldn't get it.
I was quite surprised the other day to find that it's really very difficult to define a reciprocal relationship in Prolog.
If you have a fact like friend(tom,bill). - meaning tom is a friend of bill - that obviously doesn't tell the computer that bill is a friend of tom. But if you try to give the computer that information with the rule friend(X,Y) :- friend(Y,X). (meaning: if Y is a friend of X, then X is a friend of Y), the computer will go into an endless loop, because the rule keeps calling itself.
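For what it's worth, the usual workaround is to keep the stated facts in one predicate and define the symmetric relation as a separate wrapper over them, so no clause ever calls itself. A minimal sketch (the predicate names friend_fact/2 and friends/2 are my own, not standard):

```prolog
% Store each friendship once, as a plain fact.
friend_fact(tom, bill).

% Symmetric relation, defined over the facts rather than over itself,
% so queries terminate: it holds in either direction.
friends(X, Y) :- friend_fact(X, Y).
friends(X, Y) :- friend_fact(Y, X).
```

Now ?- friends(bill, tom). succeeds and ?- friends(bill, X). enumerates its answers and stops, because neither clause of friends/2 recurses. Some systems also offer tabling (e.g. SWI-Prolog's :- table friends/2. directive), which memoises calls and lets even the directly self-referential version terminate - but the two-clause wrapper works in any Prolog.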
I don't think semantics are a diversion. All human understanding takes place in a temporal context. Just try for a moment to imagine how you could give a computer a functional understanding of the meanings of past, present and future, and their relationships with all the different tenses, and the different significances of past, present and future - and try to see how you could do it in a non-circular way, grounding it so that, at the end, the computer doesn't just connect a bunch of equally meaningless concepts - and you may get some idea of the scale of the problem.