
autonomous cars - the future of motoring is driverless

I also saw there was a "driver" in the Uber car that killed that poor woman, but why, what were they for? Given that the legislation and insurance for driverless cars is still in its infancy, I wonder who will get the blame: the non-driver, the programmer or Uber?
The 'driver' is there to provide oversight and intervene if necessary. It's slightly different here because they are there in a paid capacity, but the same exists in other autonomous systems like Tesla Autopilot. I've said this many times: it's total bullshit. Give someone a task where they're mostly redundant and you will inherently get disengagement. You can either have full autonomy or you make the driver perform some continuous activity, no middle ground.
 
The 'driver' is there to provide oversight and intervene if necessary. It's slightly different here because they are there in a paid capacity, but the same exists in other autonomous systems like Tesla Autopilot. I've said this many times: it's total bullshit. Give someone a task where they're mostly redundant and you will inherently get disengagement. You can either have full autonomy or you make the driver perform some continuous activity, no middle ground.
This is basic stuff too if you understand how attention works psychologically. If a task doesn’t fully engage our attention, the excess capacity latches onto something else. If that engages us, our limited capacity system diverts to focus on that instead. Overcoming this process is very hard and the subject of its own field of endeavour (ISTR it’s called something like psycho-ergonomics but a search doesn’t reveal anything so I must have that wrong). Putting people in a car with nothing to do except in an emergency breaks every rule in that book.
 
I saw the video earlier and it appears to be a failure of the car that should have been avoidable. It's a linear, perpendicular intrusion into the path of the vehicle, the classic cross-path scenario it should be designed to handle. The dashcam video presents a bit of a skewed view in that its dynamic range is very poor - a human would be able to see more - but the car systems shouldn't be confined to visible light anyway (instead: infrared, RADAR/LIDAR, etc).

It leaves me wondering various things, not least how much of this accident will be lab-reproducible and how much it's an edge case. It's hard to explain this, but in certain systems designs, written under safety-critical development doctrine, someone like the NTSB would be able to identify the sort of Swiss cheese condition (a chain of failed safeguards lining up) that failed to handle a scenario, and go from there. I have a strong suspicion that they can't do this here, that it's not meaningfully safety-critical and that overall it shouldn't be being tested on the public.
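To make the cross-path point concrete, here's a minimal sketch of why a perpendicular intrusion is geometrically easy to flag from range data alone. Every number and name below is my own invention for illustration, not anything from Uber's actual stack:

```python
# Hypothetical cross-path check: will a laterally-moving pedestrian be inside
# our lane by the time we reach their position? All values are invented.

CAR_SPEED = 17.0         # m/s, roughly 38 mph (the reported speed)
PED_LATERAL_SPEED = 1.4  # m/s, an assumed walking pace

def crossing_threat(ped_range_m: float, ped_lateral_offset_m: float,
                    lane_half_width_m: float = 1.8) -> bool:
    """True if the pedestrian will be inside our lane when we arrive."""
    time_to_reach = ped_range_m / CAR_SPEED                  # seconds until we get there
    lateral_then = ped_lateral_offset_m - PED_LATERAL_SPEED * time_to_reach
    return abs(lateral_then) < lane_half_width_m             # inside the lane = threat

# A target picked up 50 m out, 4 m to the side, walking toward our lane:
print(crossing_threat(50.0, 4.0))  # True -> plenty of distance left to brake in
```

With RADAR/LIDAR giving range and bearing regardless of light, this kind of check needs nothing clever, which is part of what makes the failure hard to excuse.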
 
I also saw there was a "driver" in the Uber car that killed that poor woman, but why, what were they for? Given that the legislation and insurance for driverless cars is still in its infancy, I wonder who will get the blame: the non-driver, the programmer or Uber?
A trans woman, apparently. The Daily Mail were trying to make a big issue out of that but they had nothing. She's an ex-con, but thoroughly reformed.
 
A few things spring to mind after watching.

A human wouldn't have been able to react.

The car was driving really fast, I assume legally.

Why couldn't the car detect the person in the road earlier? If they can't drive better than people, what's the point?

As mentioned earlier, the "pilot" isn't there for safety. She had no eyes on the road and wasn't concentrating. Even if she had, I doubt it'd have made a difference. She can't be blamed, but I bet she feels awful.
 
The car also doesn't swerve. Could it have missed the pedestrian but then been involved in another accident? Was there a calculation made by the algorithms to not swerve in case the pilot or more people were hurt? I assume there is something like this built in? What are those calculations?
 
A few things spring to mind after watching.

A human wouldn't have been able to react.
Wrong.

[embedded video]

Also, as mentioned (post 245), dashcam videos represent much worse visibility than the human eye perceives in real life.
The car was driving really fast, I assume legally.
38 in a 35, apparently. The video above is a 50 limit, FWIW, although the actual speed is unknown.
As mentioned earlier, the "pilot" isn't there for safety. She had no eyes on the road and wasn't concentrating. Even if she had, I doubt it'd have made a difference. She can't be blamed, but I bet she feels awful.
The driver is meant to be there for safety, but as discussed, it's a flawed premise.
Why couldn't the car detect the person in the road earlier? If they can't drive better than people, what's the point?

...

The car also doesn't swerve. Could it have missed the pedestrian but then been involved in another accident? Was there a calculation made by the algorithms to not swerve in case the pilot or more people were hurt? I assume there is something like this built in? What are those calculations?
Nothing complicated - it should have been able to detect the person. It didn't. No avoidance, no braking.
 
The car also doesn't swerve. Could it have missed the pedestrian but then been involved in another accident? Was there a calculation made by the algorithms to not swerve in case the pilot or more people were hurt? I assume there is something like this built in? What are those calculations?
This is a massively overblown fear/discussion point, I think, that, if I’m right, derives from a possible misunderstanding of how machine learning works (unless it’s me that misunderstands how machine learning works). It’s an issue raised more by philosophers than programmers. The assumption is that the machine makes decisions in the same way that people make decisions, which involves moral imperatives. But I don’t think the machine is capable of working through the consequences of an act more than a very short time period ahead. I don’t think it forms a possibility space of downstream consequences and chooses between them. I think it just acts to avoid a developing risk in the way that the data shows is most effective.
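For what it's worth, here's a toy sketch of what "acts to avoid a developing risk" means in practice. The thresholds and names are invented for illustration; the point is only that there is no branching over outcomes or victims anywhere in the loop:

```python
# A purely reactive policy: map estimated time-to-collision to brake effort.
# There is no possibility space of downstream consequences here, just a
# tuned (or learned) function of how imminent the hazard is.

def brake_command(time_to_collision_s: float) -> float:
    """Brake fraction from 0.0 (none) to 1.0 (full emergency stop)."""
    if time_to_collision_s > 4.0:    # hazard far away: do nothing
        return 0.0
    if time_to_collision_s < 1.0:    # hazard imminent: maximum braking
        return 1.0
    return (4.0 - time_to_collision_s) / 3.0  # ramp up in between

print(brake_command(2.5))  # 0.5 -> moderate braking, no trolley problem in sight
```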
 
Looking at that video, it seems when she became visible from the shadows, she was only a couple of car lengths in front of the Uber car. As the 'thinking' distance at 40 mph is 3 car lengths/12 m, with braking distance another 6 car lengths, I am not convinced that a driver could have changed the outcome, and can understand the police chief saying the car is likely not at fault.
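For anyone who wants to check those figures, the arithmetic is simple enough to sketch. The reaction time and friction values below are assumptions chosen to reproduce the standard Highway Code numbers:

```python
# Stopping distance = thinking distance + braking distance.
# thinking = v * reaction time; braking = v^2 / (2 * mu * g).

G = 9.81    # m/s^2, gravity
MU = 0.68   # assumed tyre/road friction (dry asphalt)

def stopping_distance(speed_mph, reaction_s=0.68):
    v = speed_mph * 0.44704            # mph -> m/s
    thinking = v * reaction_s          # distance covered before braking starts
    braking = v ** 2 / (2 * MU * G)    # from v^2 = 2 * a * d
    return thinking, braking

t, b = stopping_distance(40)
print(f"thinking {t:.0f} m, braking {b:.0f} m, total {t + b:.0f} m")
# -> thinking 12 m, braking 24 m: the 3 + 6 car-length figures above
```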
 
Looking at that video, it seems when she became visible from the shadows, she was only a couple of car lengths in front of the Uber car. As the 'thinking' distance at 40 mph is 3 car lengths/12 m, with braking distance another 6 car lengths, I am not convinced that a driver could have changed the outcome, and can understand the police chief saying the car is likely not at fault.
No, for all the reasons already given.
 
This is a massively overblown fear/discussion point, I think, that, if I’m right, derives from a possible misunderstanding of how machine learning works (unless it’s me that misunderstands how machine learning works). It’s an issue raised more by philosophers than programmers. The assumption is that the machine makes decisions in the same way that people make decisions, which involves moral imperatives. But I don’t think the machine is capable of working through the consequences of an act more than a very short time period ahead. I don’t think it forms a possibility space of downstream consequences and chooses between them. I think it just acts to avoid a developing risk in the way that the data shows is most effective.
Yeah, but also: so does a human driver, if they even do that (target fixation etc). The trolley problem is something confined to philosophers and disaster movies.
 
No, for all the reasons already given.

Your conclusion is just based on the video, from which it's hard to conclude one way or the other, but 2 seconds is spot on the typical human reaction time, and then you also have the braking time on top of that. The police on the ground clearly have far more to go on in judging the situation, so I am more inclined to accept what they have said at this point in time.
 
Your conclusion is just based on the video, from which it's hard to conclude one way or the other, but 2 seconds is spot on the typical human reaction time, and then you also have the braking time on top of that. The police on the ground clearly have far more to go on in judging the situation, so I am more inclined to accept what they have said at this point in time.
Some police spokesperson (the chief?) came out early on and said it was the victim's fault, and then the video came out, and the police department backtracked, claimed they were taken out of context.

Regardless, have you looked at many dashcam videos? They're all much worse than a human at capturing the available information you take in when driving at night. The sensor (the feed into purely visible-light processing) is probably worse than the human eye, and the rendered output is worse again than the raw sensor data.

Then, the idea of the 'car not being at fault' misses the myriad ways in which it's supposed to be able to detect things better than the human eye.
 
Some police spokesperson (the chief?) came out early on and said it was the victim's fault, and then the video came out, and the police department backtracked, claimed they were taken out of context.

The police statement was after seeing the video...
Chief of Police Sylvia Moir told the San Francisco Chronicle on Monday that video footage taken from cameras equipped to the autonomous Volvo SUV potentially shift the blame to the victim herself, 49-year-old Elaine Herzberg, rather than the vehicle.

“It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the paper, adding that the incident occurred roughly 100 yards from a crosswalk. “It is dangerous to cross roadways in the evening hour when well-illuminated managed crosswalks are available,” she said.

I haven't seen anything about the police backtracking, do you have a link?
 
Some police spokesperson (the chief?) came out early on and said it was the victim's fault, and then the video came out, and the police department backtracked, claimed they were taken out of context.

Regardless, have you looked at many dashcam videos? They're all much worse than a human at capturing the available information you take in when driving at night. The sensor (the feed into purely visible-light processing) is probably worse than the human eye, and the rendered output is worse again than the raw sensor data.

Then, the idea of the 'car not being at fault' misses the myriad ways in which it's supposed to be able to detect things better than the human eye.

Camera sensors are definitely worse in low-light / mixed-light conditions: they auto-adjust exposure to control the brightest light (from the streetlights in this case, presumably), making darker areas very dark. The human eye is much better at dealing with scenes that mix bright and dark areas, so to the human eye the person would have been more visible, and visible earlier. Although we can't really know, I would think a human would have been able to see the pedestrian earlier and react accordingly. HDR video/photo was developed to help deal with this problem, but there's basically no chance of having it in a dashcam, I shouldn't have thought.

Last line is paramount though - autonomous cars should not be reliant on light sensing when there are so many other sensor types available.
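A toy illustration of that exposure point, with every number invented: meter for the streetlights, and anything in shadow gets crushed to near-black long before the human eye would give up.

```python
# Simulated 8-bit capture: scale scene luminance by the exposure gain, then clip.

def captured_value(scene_luminance, exposure_gain):
    return min(255.0, max(0.0, scene_luminance * exposure_gain))

streetlight = 4000.0  # bright source the auto-exposure meters for
pedestrian = 40.0     # figure in shadow, 100x dimmer (invented ratio)

gain = 255.0 / streetlight                # protect the highlights
print(captured_value(streetlight, gain))  # 255.0 -> streetlight rendered fine
print(captured_value(pedestrian, gain))   # ~2.6  -> pedestrian near-black on video
```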
 
This is a massively overblown fear/discussion point, I think, that, if I’m right, derives from a possible misunderstanding of how machine learning works (unless it’s me that misunderstands how machine learning works). It’s an issue raised more by philosophers than programmers. The assumption is that the machine makes decisions in the same way that people make decisions, which involves moral imperatives. But I don’t think the machine is capable of working through the consequences of an act more than a very short time period ahead. I don’t think it forms a possibility space of downstream consequences and chooses between them. I think it just acts to avoid a developing risk in the way that the data shows is most effective.
I don't see what machine learning has to do with it. There will be some basic rules about car handling. The car could have swerved but might have flipped over, so perhaps that's a rule it couldn't break. Maybe it should have?
 
I don't see what machine learning has to do with it. There will be some basic rules about car handling. The car could have swerved but might have flipped over, so perhaps that's a rule it couldn't break. Maybe it should have?
The concept of machines making a calculus of life is dependent on the idea that the machine is modelling future scenarios and choosing between them. This just isn’t the case — it’s not how machine learning works.
 
The concept of machines making a calculus of life is dependent on the idea that the machine is modelling future scenarios and choosing between them. This just isn’t the case — it’s not how machine learning works.
Maybe not explicitly a calculus of life, but a calculation was made that meant the car wasn't braking or swerving. Should it have been? I imagine there's a bunch of engineers looking at that now...
 
Also interesting from that article.
Uber's cars have lidar and radar sensors in addition to cameras, and those sensors don't require ambient light to function. So the vehicle should have spotted Herzberg even if the road was pitch black.
That's some serious fuck-up, then. So who is responsible?
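The redundancy argument in that quote can be put very simply. A sketch with hypothetical inputs, not any real sensor API:

```python
# For emergency braking you want the union of detections, not consensus:
# a missed pedestrian is far worse than an occasional phantom brake.

def obstacle_in_path(camera_hit: bool, lidar_hit: bool, radar_hit: bool) -> bool:
    return camera_hit or lidar_hit or radar_hit

# Pitch-black road: the camera sees nothing, the active sensors still should.
print(obstacle_in_path(camera_hit=False, lidar_hit=True, radar_hit=True))  # True
```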
 
The police statement was after seeing the video...


I haven't seen anything about the police backtracking, do you have a link?
In the States it's almost impossible for a driver to be at fault in a pedestrian fatality unless you run a red light (red light cameras are unconstitutional). The victim would be to blame even on a sunny day, in this instance. Even if it's where everyone crosses the road because that's where the shops are. Zebra crossings are an amazing invention.

It's the lack of pedestrians that keeps the body count down.
 