
Self-driving cars: Motorists will not be liable for crashes and can watch TV behind the wheel, government says

Are you in favour of self-drive cars?


speculating...

Something for which there is something it is like to be it. Something that models all the things that are relevant to its existence and functions by making predictions, which it then checks against the incoming stream of data from its sensors. Something that is predictive, but also has feedback loops of self-correction and appropriate error bars, and always allows for the possibility that it might be wrong. Something with some form of an embodied intelligence, that has some sense of 'me/not me'.
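For what it's worth, that predict-check-correct loop with error bars has a standard engineering miniature: a one-dimensional Kalman-style filter. A minimal Python sketch, with every number invented for illustration (a real driving stack is nothing like this simple):

```python
import random

# Minimal predict/check/correct loop with an explicit error bar, in the
# spirit of the paragraph above. A 1-D Kalman-style position filter;
# all the numbers here are invented for illustration.

estimate, variance = 0.0, 1.0          # belief about position, plus uncertainty
process_noise, sensor_noise = 0.1, 0.5
true_position, speed = 0.0, 1.0

for step in range(10):
    true_position += speed             # the world moves on

    # 1. predict: roll the internal model forward; uncertainty grows
    estimate += speed
    variance += process_noise

    # 2. check: compare the prediction with a noisy sensor reading
    reading = true_position + random.gauss(0.0, sensor_noise ** 0.5)
    surprise = reading - estimate

    # 3. correct: weight the update by relative trust in model vs. sensor,
    #    i.e. always allow for the possibility that the model is wrong
    gain = variance / (variance + sensor_noise)
    estimate += gain * surprise
    variance *= 1.0 - gain

    print(f"step {step}: estimate {estimate:.2f} +/- {variance ** 0.5:.2f}")
```

The error bar (the variance) is the interesting part: a system that tracks how wrong it might be has a principled reason to slow down when its sensors and its model disagree.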

Problem is that I'm not so sure AI really deserves the 'I' bit yet. In many important ways, bacteria are far cleverer than any AI system yet invented. When you're asking computers to take over life functions, they are likely to need some life-like qualities.
Yes, I think I see what you mean: it would have a sense of self, and therefore a sense of self-preservation, and a level of risk it wouldn't exceed, which would hopefully be similar to that of a mature and relatively safe human driver.
 
I am sure that we will get there one way or another; I just think that, given the complexities, the current state of AI isn't there yet. It could require something like you mention, a different angle.

In Iain M Banks' sci-fi, AIs control most of the human environments (planets etc) and various smaller AIs assist humans in aspects of their lives, but the AIs do the things for which they are fundamentally better suited than humans and leave humans to do the rest.
 
Yes, I think I see what you mean: it would have a sense of self, and therefore a sense of self-preservation, and a level of risk it wouldn't exceed, which would hopefully be similar to that of a mature and relatively safe human driver.

This part seems to me just about programming goals. We have made computers that can beat humans at chess - I don’t think goals are at the root of the problem.
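To illustrate how little of the problem the goal itself is: for a rigidly rule-bound game, the entire goal fits in a couple of lines and perfect play falls out of brute-force search. A toy Python sketch, using Nim rather than chess purely to keep it short (the game choice and numbers are mine, for illustration):

```python
from functools import lru_cache

# The 'goal' of a rule-bound game is just a terminal test plus a utility;
# everything else is search. Toy game: Nim -- take 1-3 sticks per turn,
# whoever takes the last stick wins. Chess differs in scale, not in kind.

@lru_cache(maxsize=None)
def value(sticks):
    """+1 if the player about to move can force a win, -1 otherwise."""
    if sticks == 0:
        return -1                      # opponent took the last stick: we lost
    return max(-value(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    """Pick the move whose resulting position is worst for the opponent."""
    return min((t for t in (1, 2, 3) if t <= sticks),
               key=lambda t: value(sticks - t))

print(value(10), best_move(10))        # -> 1 2: take two, leave a multiple of 4
```

Which rather supports the point: the hard part was never telling the machine what winning means.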

There are elements where I think LBJ has a point though, in the area of context, doubt and what we might call self-reflection.
One thing that strikes me from a lot of accident reports is that these vehicles, when they crash, crash with apparent absolute confidence that they are doing the right thing.

They are also prone to confusion and dithering in situations of no apparent threat.
 
I think I recall a Tesla driving under a semi-trailer in the States; its cameras couldn't see it because of the light conditions.

When I can't see properly, driving into a setting sun, I slow right down and look harder. It didn't; it just assumed the truck was not there because it couldn't see it immediately. It thought it was in the right, junction-wise.
 
This part seems to me just about programming goals. We have made computers that can beat humans at chess - I don’t think goals are at the root of the problem.
While that is true, wasn't that "Deep Blue" from IBM? It was a humongous machine, way too large to fit in a car :)
There are elements where I think LBJ has a point though, in the area of context, doubt and what we might call self-reflection.
One thing that strikes me from a lot of accident reports is that these vehicles, when they crash, crash with apparent absolute confidence that they are doing the right thing.
I think different varieties of driverless car have different primary sensors. I think Tesla relies on cameras alone, while others also use radar and lidar.
They are also prone to confusion and dithering in situations of no apparent threat.
Pulling out into busy rush-hour traffic, requiring someone to give way, or rather pretty much forcing them to, might not be a skill driverless cars would have :)
 
Pulling out into busy rush-hour traffic, requiring someone to give way, or rather pretty much forcing them to, might not be a skill driverless cars would have :)

Yeah, this is exactly the sort of thing that gives them problems.
As well as roundabouts, apparently.
 
As well as roundabouts, apparently.
I can remember arriving at a roundabout a few summers ago to find everyone stationary at the various entry points, no one making a move to enter the roundabout, which is when I noticed a small French car driving rather hesitantly the wrong way around. Everyone just waited patiently until they had managed to exit before normal service resumed.

I wonder how a driverless car would have handled that?
 
The whole AI industry has seriously misunderstood the embodied reality of real intelligence. The intelligence didn’t come first, it evolved in tiny steps as an extra tool for helping the body survive in its environment. Survival-based intention preceded cognition by a long way. Even a jellyfish has a very basic intention that its dozens of brain cells are fixated on — to eat. AI has no intention, it has no survival drive. Good thing too, I don’t think we want to be imbuing robots with a will to survive. But without putting the survival drive first (and then also its more sophisticated embodied evolutions, like boredom and loneliness and play) and drawing decisions from that, you don’t have the capacity to respond to novelty.

Animals “think” with their whole body, basically. They exist in the world, they’re not apart from it.
 
Automatic driverless car systems are fraught with complexity, danger and legal liability.

By way of comparison: could something as simple as an electric window be deadly?

With early electric windows there were a couple of cases of children standing on the close button and trapping themselves by the neck, resulting in their deaths.

As a result, for a while there were competing anti-trap systems under development.

A successful system had to detect that the window had trapped a bit of a human, then stop and reverse the window a little so they could escape.

This might seem simple enough, compared with the complexity of an automatic driverless car, but it was not without complication. The window still had to close properly if it was encountering just some ice or snow or the like on a cold morning, and so on.
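For a flavour of the complication, here is a guessed-at Python sketch of just the core anti-trap decision. It is not any manufacturer's actual algorithm, and the thresholds are invented; real systems work from motor-current and position-sensor profiles:

```python
# Hypothetical anti-trap logic for a one-touch-close window -- invented
# for illustration, not a real manufacturer's algorithm. The thresholds
# are made up; real systems profile motor current over the full travel.

SEAL_ZONE_MM = 4       # final few mm: seal/ice resistance is expected here
TRAP_FORCE_N = 100     # pinch-force threshold in newtons (illustrative)

def next_action(gap_mm, measured_force_n):
    """One control-loop tick while auto-closing: 'close', 'reverse' or 'done'."""
    if gap_mm <= 0:
        return "done"
    if measured_force_n > TRAP_FORCE_N:
        if gap_mm > SEAL_ZONE_MM:
            # Significant resistance with a real gap still open: assume a
            # trapped limb, so stop and back the glass off to free it.
            return "reverse"
        # The same resistance in the seal zone is treated as ice or seal
        # friction and driven through, or the window never shuts in winter.
    return "close"

print(next_action(gap_mm=300, measured_force_n=140))  # mid-travel pinch -> 'reverse'
print(next_action(gap_mm=3, measured_force_n=140))    # icy seal zone -> 'close'
```

Even the toy version shows the nasty part: an identical force reading means "trapped child" at one position and "frosty seal" at another, and the production code has to get that distinction right every single time.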

If your present car has a one-touch close option on its window buttons, it should have such a system; test it by trying to shut your arm in the window (only try this when your vehicle is stationary, please!). I believe all German cars and GM cars have a system, and I would expect all cars with electric windows, certainly those with one-touch up, to have it by now.

Anyhow, such a system, compared to a fully autonomous driverless car, must be simplicity itself? Yet the code to control the windows ran to many, many pages, and because it was safety-critical a number of independently written versions were run simultaneously to guard against a coding error in any one of them.

Why mention such a simple thing? Because something apparently simple like an electric window had many significant safety issues, and killed, before the development of anti-trap systems, which themselves became very complex.

An auto-driving car is vastly more complex, as are the sensors required for perception of road limits and junction conditions, quite apart from the behaviour and intentions of other road users; and then there is the full range of road and weather conditions in which it will have to operate. Aircraft autopilot systems are simplistic by comparison. There are simply so many more ways that a driverless car can get into trouble and cause an accident.

When I read people predicting the full implementation of driverless cars, for all roads in all conditions, in just a few years' time, I just don't think they have grasped the full complexity of the technology and software required for this to come to pass.
I read a horrendous report of a dog being dragged behind a car because it had jumped out of the boot while the boot was automatically closing and the owner/driver didn't notice! And that's just an auto-close boot.
 
The whole AI industry has seriously misunderstood the embodied reality of real intelligence. The intelligence didn’t come first, it evolved in tiny steps as an extra tool for helping the body survive in its environment. Survival-based intention preceded cognition by a long way. Even a jellyfish has a very basic intention that its dozens of brain cells are fixated on — to eat. AI has no intention, it has no survival drive. Good thing too, I don’t think we want to be imbuing robots with a will to survive. But without putting the survival drive first (and then also its more sophisticated embodied evolutions, like boredom and loneliness and play) and drawing decisions from that, you don’t have the capacity to respond to novelty.

Animals “think” with their whole body, basically. They exist in the world, they’re not apart from it.
Yep. And you don't even need a nervous system. Bacteria will act for their survival, and have a basic understanding of me/not me needed for maintaining homeostasis.

I'd be impressed by a 'thick' AI that could demonstrate something like that - a basic drive for self-preservation. You could maybe then build up from that.

And yeah maybe it's a bad idea to try to do that.
 
.. There are elements where I think LBJ has a point though, in the area of context, doubt and what we might call self-reflection.
One thing that strikes me from a lot of accident reports is that these vehicles, when they crash, crash with apparent absolute confidence that they are doing the right thing.
This reminds me of when I was a teenage motorcyclist. I used to argue with my dad about road manners, often about situations in which I was in the right but still found myself in an accident.

The point (which often escaped me back then) being that, as a vulnerable road user, it didn't really matter whether I was in the right or in the wrong: it would be me ending up in intensive care, or worse, if I was involved in a road traffic accident.

Are current driverless cars like my teenage self?
 
This part seems to me just about programming goals. We have made computers that can beat humans at chess - I don’t think goals are at the root of the problem.
Chess computers might be great at playing chess, but can they also play Monopoly?

Playing the word game Squareword, I was struck by the similarities between it and finding the right move in chess. Similar thought processes. That kind of intelligence, with the ability to recognise where similar skills are needed in different contexts - the ability to make analogies - is lacking in current AI. Chess and Go computers that can beat any human hands down at the particular game they have been trained to play, but can do nothing else, are good examples of that.

And while the calculations needed to play well may be incredibly complicated, the goals of the games chess and go are incredibly simple and 100% rigidly rule-bound - I can only do certain things, and crucially, my opponent can only do certain things as well.

Computers that are good at poker are now being developed. That's impressive, dealing with a game in which you have incomplete information. But the method of getting there - playing yourself billions of times - still doesn't involve developing a real understanding of what you're doing and why.
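The "playing yourself billions of times" method can be shown in miniature. Below is a sketch of regret matching via self-play on rock-paper-scissors, a toy cousin of the counterfactual-regret techniques used for poker bots (the game choice and round count are mine, for illustration):

```python
import random

# Self-play learning by regret matching on rock-paper-scissors -- a toy
# relative of the counterfactual-regret methods behind poker bots. The
# average strategy drifts to the 1/3,1/3,1/3 equilibrium with zero
# understanding of what the game is.

ACTIONS = (0, 1, 2)                      # rock, paper, scissors

def payoff(a, b):
    """+1 if action a beats b, 0 for a draw, -1 for a loss."""
    return (a - b + 4) % 3 - 1

def strategy(regrets):
    """Play in proportion to positive regret; uniform if there is none."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total else [1 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]         # cumulative regrets for both 'selves'
strategy_sum = [0.0] * 3
ROUNDS = 100_000

for _ in range(ROUNDS):
    strats = [strategy(regrets[0]), strategy(regrets[1])]
    moves = [random.choices(ACTIONS, weights=s)[0] for s in strats]
    for me in (0, 1):
        them = moves[1 - me]
        earned = payoff(moves[me], them)
        for alt in ACTIONS:              # how much better would each alternative have done?
            regrets[me][alt] += payoff(alt, them) - earned
    for a in ACTIONS:
        strategy_sum[a] += strats[0][a]

print([round(s / ROUNDS, 3) for s in strategy_sum])   # roughly [0.333, 0.333, 0.333]
```

A hundred thousand rounds of playing itself gets it near the equilibrium strategy, and at no point does anything in there know it is playing a game, which is rather the point being made above.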
 
Chess computers might be great at playing chess, but can they also play Monopoly?

Playing the word game Squareword, I was struck by the similarities between it and finding the right move in chess. Similar thought processes. That kind of intelligence, with the ability to recognise where similar skills are needed in different contexts - the ability to make analogies - is lacking in current AI. Chess and Go computers that can beat any human hands down at the particular game they have been trained to play, but can do nothing else, are good examples of that.

And while the calculations needed to play well may be incredibly complicated, the goals of the games chess and go are incredibly simple and 100% rigidly rule-bound - I can only do certain things, and crucially, my opponent can only do certain things as well.

Computers that are good at poker are now being developed. That's impressive, dealing with a game in which you have incomplete information. But the method of getting there - playing yourself billions of times - still doesn't involve developing a real understanding of what you're doing and why.
All good points and I’ll take it a step further. I’m betting that when you play chess, your true intention is not so much to win the game as to have fun, or feel satisfaction or some other higher-level emotional motivation. No chess computer has ever had fun playing the game. No chess computer has ever made its own decision whether or not it wants to play the game.
 
There has only been one serious attempt to build a computer that could win at Mornington Crescent. The machine ended up breaking two laws of robotics, three articles of the Geneva Convention and most of a business park in Milton Keynes before the plug could be pulled. Whether or not it enjoyed its twelve seconds of existence will never be known.
 
But there are driverless cars running right now in Arizona and San Francisco.

Places chosen for their consistent and ideal weather, which makes sense for a first test (though it might mean they work there but not here), but still places that have all the general issues of urban environments and unpredictable humans.

I know there have been pedestrians killed, but then there have also been pedestrians killed by human drivers in those places. I don't know what the data says, or if there's enough of it yet to know whether AI drivers are safer than humans. I know Tesla claims Autopilot is, but this is disputed; I don't know about Waymo or the other full-auto companies operating at the moment in those areas.

For me, that's what matters. I don't think people generally are great at driving; understanding purpose may be less important than being constantly attentive, with fewer (no?*) blind spots and more types of sensors.

*I'm not sure it is possible to have no blind spots; it might be with some kinds of sensors and not with others.
 
But there are driverless cars running right now in Arizona and San Francisco.

Places chosen for their consistent and ideal weather, which makes sense for a first test (though it might mean they work there but not here), but still places that have all the general issues of urban environments and unpredictable humans.

I know there have been pedestrians killed, but then there have also been pedestrians killed by human drivers in those places. I don't know what the data says, or if there's enough of it yet to know whether AI drivers are safer than humans. I know Tesla claims Autopilot is, but this is disputed; I don't know about Waymo or the other full-auto companies operating at the moment in those areas.

For me, that's what matters. I don't think people generally are great at driving; understanding purpose may be less important than being constantly attentive, with fewer (no?*) blind spots and more types of sensors.

*I'm not sure it is possible to have no blind spots; it might be with some kinds of sensors and not with others.
It’s an interesting point. I just had a look for first-hand accounts of what this is really like. This is a pretty useful piece of reportage by a tech journalist:


It sounds as if these Waymo cabs in Phoenix are very impressive but there are also signs of the kind of problems you get with not having a theory of mind to tell you why things are as they are. They repeatedly make a mistake that comes from doing something that ‘works’ for the vehicle itself but actually causes a hazard by blocking other road users. And it has to do an emergency stop in a place that’s tricky for an AI — a parking lot. People are phenomenally good at reading the intention of other humans — it’s arguably our prime survival trait. We cope with parking lots by understanding what pedestrians want to do and anticipating that. The car just looks for movement so reacts very late.

I approached the van and was again surprised. It was illegally parked in a fire lane, which was apparent by the brightly painted red curb. It was also partially blocking a lane used by cars entering and exiting the shopping center. One car had to go around the Waymo to get into the parking lot.

Just as the car neared Trader Joe's, it came to an abrupt stop, slamming the brake for an apparent pedestrian. It nearly gave me whiplash and made me particularly grateful for the working seatbelt. The jolt was surprising, as the car was going no more than seven miles an hour in a parking lot.

After gasping — and letting out an audible "Jesus!" (see video below) — I settled back in until the car let me off in front of the Trader Joe's. The drop-off spot was in yet another fire lane, next to a red-painted curb.

McGoldrick didn't provide a comment on why the car kept parking in clearly marked fire zones, and said the team is looking into it.

And the conclusion:
Waymo is now 13 years old. It's taken this long to get self-driving cars operating fluidly on city streets in part of one U.S. market. While even getting that far is a mighty impressive technological feat, ubiquity — if it ever comes — feels like it's still a long way off.

The cars have learnt, more or less, to cope with one small area. That’s very far from what they would need to do if they became the standard transport.
 
It’s an interesting point. I just had a look for first-hand accounts of what this is really like. This is a pretty useful piece of reportage by a tech journalist:

It sounds as if these Waymo cabs in Phoenix are very impressive but there are also signs of the kind of problems you get with not having a theory of mind to tell you why things are as they are. They repeatedly make a mistake that comes from doing something that ‘works’ for the vehicle itself but actually causes a hazard by blocking other road users. And it has to do an emergency stop in a place that’s tricky for an AI — a parking lot. People are phenomenally good at reading the intention of other humans — it’s arguably our prime survival trait. We cope with parking lots by understanding what pedestrians want to do and anticipating that. The car just looks for movement so reacts very late.

The cars have learnt, more or less, to cope with one small area. That’s very far from what they would need to do if they became the standard transport.

Cheers, I haven't read much about where these things are actually at, so it's nice to see something recent. They are certainly still a long way off, and I'm not overly keen on technological-gadget solutions to some of these problems, like putting an active beacon on the fire lanes to tell AI cars not to stop there, similar to the suggestion that pedestrians/cyclists could carry something around with them. I'm not happy with AI cars that require that kind of thing to drive more safely around pedestrians/cyclists than humans currently do.

13 years is some time, but the technology has also come a long way in that time. If they need to deep-learn a specific geographical area then I agree, they'll never become standard/general transportation, and we'll only see them in specific places/applications, like motorway driving between transport hubs or taxi services in limited areas like Waymo is doing. It'll be interesting to see how things develop anyway.
 
The whole AI industry has seriously misunderstood the embodied reality of real intelligence. The intelligence didn’t come first, it evolved in tiny steps as an extra tool for helping the body survive in its environment.

I don't think you'll find many AI developers/researchers who would stand behind the claim that intelligence "came first". They tend not to be creationists.
 
I don't think you'll find many AI developers/researchers who would stand behind the claim that intelligence "came first". They tend not to be creationists.
They might know that intellectually, but their ontology of intelligence betrays a different view.
 
They might know that intellectually, but their ontology of intelligence betrays a different view.

In your opinion. We didn't have to work out the exact ins and outs of how bird or insect wings function in order to achieve powered flight.
 
In your opinion. We didn't have to work out the exact ins and outs of how bird or insect wings function in order to achieve powered flight.
We didn’t get to flight by having the wrong model of how it works, though.

Yes, this is my view of what intelligence is. Not simply “opinion”; it is based on actual study. But there are no clear-cut answers, so it remains just a view. I wouldn’t want to suggest otherwise. And in my view, an ontological perspective of intelligence that equates it with learning enough data points is fundamentally flawed.
 
I don't think you'll find many AI developers/researchers who would stand behind the claim that intelligence "came first". They tend not to be creationists.

If they think of themselves as AI developers they've still been suckered by a silly metaphor. Making a machine work better doesn't in any real sense make it intelligent.
 
An auto-driving car is vastly more complex, as are the sensors required for perception of road limits and junction conditions, quite apart from the behaviour and intentions of other road users; and then there is the full range of road and weather conditions in which it will have to operate. Aircraft autopilot systems are simplistic by comparison. There are simply so many more ways that a driverless car can get into trouble and cause an accident.

When I read people predicting the full implementation of driverless cars, for all roads in all conditions, in just a few years' time, I just don't think they have grasped the full complexity of the technology and software required for this to come to pass.

Well, autopilot doesn’t mean pilotless planes. It’s simply there to save the pilot(s) the fatigue of grappling with the yoke for the entire flight (see also trim). It won’t fly the plane for them, though; it needs human input.
 
Chess computers might be great at playing chess, but can they also play Monopoly?

Bad analogy as Monopoly is available for most platforms. :D

You do have a point, though, as human players generally have an advantage over AI in most games more complex than chess. It’s simply impossible to code for every eventuality.
 
Bad analogy as Monopoly is available for most platforms. :D
His point is that a chess programme can’t abstract from its knowledge of chess to play Monopoly, even though Monopoly is a more straightforward game. It can’t apply intelligence because it doesn’t actually have intelligence. The chess programme doesn’t even know it’s playing chess.
 
His point is that a chess programme can’t abstract from its knowledge of chess to play Monopoly, even though Monopoly is a more straightforward game. It can’t apply intelligence because it doesn’t actually have intelligence. The chess programme doesn’t even know it’s playing chess.

Fair point.
 
Well, autopilot doesn’t mean pilotless planes. It’s simply there to save the pilot(s) the fatigue of grappling with the yoke for the entire flight (see also trim). It won’t fly the plane for them, though; it needs human input.
With a modern autopilot you can program the route before departure and it will fly the aircraft most of the way to the destination; at suitably equipped airports it can even land itself (autoland), with the pilots monitoring. Automatic takeoff and taxiing to the gate, though, are still not standard.
 