
The end of cash?

I think that's a non-starter. We can't even definitively prove that humans are truly self-aware, rather than being entities that merely act like it.
This is where the disembodiment of cognitive psychology has got us. A complete dead-end whereby creatures that understand their own existence well enough to question its meaning and reflect on the response are told that they might not be self-aware.
 
Solipsism thought experiments are there for philosophers, not psychologists.
 
But they rest on the reasonability of the premise in the first place, which is the domain of psychology. As that wiki article says:

Galen Strawson argues that it is not possible to establish the conceivability of zombies, so the argument, lacking its first premise, can never get going.[27]
The whole thing is essentially the same as the ontological argument for the existence of God — “the fact that I can imagine it means that it is possible, and that in turn means that the thing I imagine is true”. It doesn’t work because it mistakes metaphysics for physics. The very premise of the zombie begs the question by assuming that you can have human responses without human embodiment: that you can have human responses to the physical world without the human brain that evolved to create those responses. Sure, I can imagine such a thing. But I can imagine lots of things that can’t actually exist. If I assume that 0 = 1 then I can establish any mathematical “proof” you like.
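A quick worked sketch of that last point: assume 0 = 1. Then for any numbers a and b, a = a + 0·(b − a) = a + 1·(b − a) = b, so every number equals every other, and any equation you like can be “proved”.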
 
Well yes, I know I'm self-aware, and while I may not have direct evidence that other people are, to believe they aren't I would need to explain why I am special. Since I have no reason to believe I'm any different, I can only conclude other people must be self-aware as well.

Although there are some people I wonder about.
 
To even question self-awareness both begs the question of what a “self” is and presupposes that “awareness” is distinct from the construction of self, existing in some kind of essentialised plane separate from experience. In truth, both the “self” and “awareness” are themselves complex metaphors, made up of simpler metaphors, which are, in the end, constructed of primary metaphors that derive from our embodied existence (e.g., the embodied understanding of what “forward” means). It makes no sense to ask if humans are “self-aware” because to ask the question in the first place is fundamentally human.
 
But you’re still extrapolating from a very limited sample when you talk both of selves and of awareness, at least autophenomenologically, and so any comparisons with the private experience of real or hypothetical AIs are still going to suffer from possible overreach in assuming the commonality of human experience as a comparator. That’s where Noxion’s zombies come in.
 
The only being that can wonder if other beings are self-aware is a being that has been shaped by being with other self-aware beings.
 
OBJECTION. There's no such thing as private experience.
 
We have multiple metaphors that we use when we talk about the “self” and those metaphors are, in places, incompatible with each other. They all derive from making sense of existence using different embodied frames. So no, I’m not extrapolating anything. I’m saying that concepts that we use as if they are self-evident (ho ho) are anything but — they are intrinsically linked to our experience as beings in the world.
 
I’m not advocating crude solipsism as a position, I’m just borrowing enough of it to say that excessive generalisations about consciousness (Wittgensteinian PLA concerns notwithstanding) are not sufficiently grounded to justify arguments for fundamental and irreconcilable differences between human and machine autophenomenology.
 
This position, by the way, will become absolutely necessary when we move from considering what it is like to be an LLM to describing what human-machine hybrids are up to.
 
Or hybrids of machine intelligences with multiple human brains, which a sufficiently desperate and autocratic state might well resort to, given how cheap humans are to source and scale.
 
What machine autophenomenology? Are you suggesting that the machine is experiencing?
 
Any experience would have no meaning within the processes that we use to understand experience. There’s no translation.

It’s fun to play philosophical games and all, but concepts have to come from somewhere, and that somewhere is material.
 
Concepts have to be sufficiently elastic to accommodate plausible near-term scenarios.
 
If you don’t realise that the onus is on you to do that, rather than me, then we’re talking at cross-purposes.
I have defined it. I’ve said that it’s a combination of mutually contradictory metaphors that derive from our embodied experience of the world. This means what we conceptually understand to be “awareness” is uniquely tied up with being human. The answer is tied to the question — if there is no human, there is no creature to ask about awareness and make sense of the answer.

You’re disagreeing with that and assigning some kind of essential objectivist notion to the concept, claiming that it needs to be “flexible”, as if language exists as something other than a symbolic system of cultural meaning-making. So I’m asking, in that case, how you’re defining it.
 
I’m not defining it, I’m postulating it. For things that aren’t me, that is, which range from you and dogs (fairly certain) to beetles and mesh-networked supercomputers (pretty sure) to trees and washing machines (implausible, but who knows?).

I’m also leaving it open whether it’s an emergent property attendant on sufficiently complex systems (a Daniel Dennett or Gilbert Ryle sort of view), or an inherent property of certain types of matter (and who doesn’t like a bit of Leibniz?).

Because I’m willing to accept all kinds of hypothetical awarenesses, and none of them are provable anyway, I think it is a foolish threshold for AGI. Hence agreeing with NoXion, to get back on topic.
 
FWIW, I would say my personal position is closest to that of Dennett. But concepts like p-zombies wouldn't even be seriously discussed if the precise nature and limits of conscious intelligence were as cut and dried as the nature and limits of atomic matter. Maybe one of the current philosophies of mind is on the right track, but like Democritus and his atomic philosophy back in the day, we don't currently have the knowledge and tools to be that certain. Hence the debates.
 
We can't solve the hard problem of getting people who use cards in ATMs and then use the cash to buy stuff to be okay with skipping the middle part, so we're unlikely to solve any other hard problems.
 
You realise that the cash people buy things with does not necessarily go straight back into the banking system, though, right? I.e. someone might use cash from a sale to buy something for themselves, or their business. And then that person does the same, and the next, etc. etc.

In the worst case you could see a nightmare scenario where all manner of economic activity was happening with no charges on transactions, no centralised tracking of who was buying what, and no generation of capital for data brokers.

Won’t someone think of the wealth creators?

Even people with no bank accounts and no fixed abode might start participating in the economy. :eek:

It needs a military solution, really. We’re beyond stuff like social shaming.
 
I won't repeat myself because it's been played out to death. There are times when cash is key to day-to-day use. Having no knowledge or awareness of how much you've received, how much you're spending, and what is going where is no way to manage your personal spending.
 