
Tesla autopilot death

Tesla definitely have something to answer for in using an optical system that can be fooled by bright surfaces, rather than a radar-based system that works off the texture and solidity of a surface, but to criticise them for not releasing a "perfect" product is ridiculous.
That, and beta-testing a safety-critical system on the public.

You simply can't demote a human to a bit player in a system and expect them to carry the same ultimate responsibility as before. It's fortunate for Tesla that their car only killed their customer, not some random.

As before - it's not the death knell for autonomous cars. They are currently inadequate on occasion, but that will diminish over time. It ought to be the death knell for mixed autonomy, which is already significantly reducing driver involvement even in the mainstream. Either you develop a car that drives itself to an acceptable standard, which will still involve deaths, or you don't and a human drives it. One or the other - no attempting to get there by osmosis.
 
It can be a graduated process, and might well be:

first they introduce ABS on all vehicles.
then they introduce auto braking
then they introduce lane warning then lane changing
then they introduce automatic parking
and so on
 
All of which is deskilling, but at least each step is a set piece with clear delineation, especially the early ones.

So ABS comes along and now you don't have to be an expert at cadence braking in an emergency, which I doubt many people ever were anyway.
Auto parking comes along and now, whilst parking, you're relegated to looking for pedestrians.
Auto braking handles the set piece where you forgot to pay attention and are about to rear-end something; it's not in habitual use.

Lane departure warning and things like active cruise control start to blur the boundaries of this approach, but Tesla goes much further and says the car will mostly drive itself, a general behaviour.

This is extremely difficult, even if it works well, because you have no idea if the system is going to react. I sometimes ride as a passenger with people who brake much later and harder than I do, sometimes reacting later than is ideal or even safe. But I have other cues that tell me whether we're going to crash, like whether they're even looking in the right direction. Take away those and either you produce an overly cautious human supervisor, or you produce cases where the combination of ambiguity, the will-it-won't-it-do-anything delay, the diluted efficiency of the human and all the other factors makes it physically impossible for the human to intervene in time to avoid an accident. It's inherently dangerous and it shouldn't have been allowed.
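To put rough numbers on that intervention problem (every figure below is an assumption for the sake of the example, not measured data), here's a back-of-envelope sketch:

```python
# Back-of-envelope check: can a supervising "driver" intervene in time?
# Every number here is an assumption for the sake of the example.

def can_intervene(speed_ms, hazard_distance_m,
                  notice_delay_s=1.5,    # time to notice the car isn't reacting
                  decision_delay_s=1.0,  # the will-it-won't-it hesitation
                  brake_decel_ms2=8.0):  # hard braking on dry tarmac
    """True if braking after all the delays still stops short of the hazard."""
    dead_time = notice_delay_s + decision_delay_s
    distance_while_hesitating = speed_ms * dead_time
    braking_distance = speed_ms ** 2 / (2 * brake_decel_ms2)
    return distance_while_hesitating + braking_distance < hazard_distance_m

# 70 mph is about 31 m/s; say the hazard becomes visible 100 m ahead.
print(can_intervene(31.0, 100.0))  # False: the delays alone cover ~78 m
```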
 
I don't fully follow the line of your argument, mauvais. Surely going straight to robotic driving would be even more of an issue than getting there one step at a time?
 
Yes it would. But someone - the manufacturer - would be wholly responsible, not this halfway house bollocks.

At some point - right now, I would say - the demoted, deskilled driver cannot be expected to serve any valuable purpose, because with nothing meaningful to do they will be mostly inattentive. Having them around to provide oversight and "intervene" is therefore useless, and cannot be counted on to mitigate the risk of the autonomous system. So a responsible manufacturer needs to bite the bullet and offer a car that either works acceptably on its own, or doesn't and is meaningfully driven by a human.
 
And that includes "enable autopilot at your own risk!" disclaimers, which are more bollocks once in the hands of the public.
 
OK, I see what you mean then. I prefer the incremental approach, but there are times, and features, that put particular pressure on this liability question.

I can remember when ABS first became popular (on more powerful cars initially) there was a debate in Germany about whether ABS cars should carry a large badge on the rear, to stop people in non-ABS cars from following them too closely, since they couldn't stop as quickly as the ABS cars could.

And I remember worries about the advent of cruise control in the USA: even though you could override it with the brake or accelerator, there was a lingering worry that it might just keep charging along.

As for Tesla, I haven't heard of a multi-year, multi-million-dollar Tesla autopilot development programme, as with companies like Google. I wonder if their autopilot is just pretty basic; ultrasonics and a camera don't sound as sophisticated as Google's lidar-based system.
 
I'd be surprised if it doesn't use both radar and vision processing; auto braking (AEB) and active cruise control should already be using radar. I'd also be surprised if their R&D spend doesn't amount to a very large sum.

I know they crowdsource a lot of data, like where the lanes on a highway are exactly, as do Google.

But then all of this doesn't necessarily get you anywhere. Defence manufacturers abandoned some of their UAV projects because it turned out to be hard.
 
ABS is another thing which isn't perfect. It is fine on dry, clean roads but is unreliable on loose surfaces or wet roads, unless someone has come up with a system which can detect what the road surface is like and adjust the braking accordingly. Last time I played around in a new car with ABS, it didn't work well on a loose surface.
 
Discussion on this crash and the possible causes.

Fatal autopilot crash, NHTSA investigating...
I read a few pages on that site. It seems the people posting there believe the Tesla Autopilot does have radar, but that it has been tweaked not to trigger on roadside signs, and that this may have caused it to ignore the high side of the truck, treating it instead as a roadside sign.

Also some interesting comments that Tesla should not have called their system Autopilot; they should have called it something like DriverAssist instead.
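Purely as a sketch of the kind of filtering those posters are describing (I have no idea how Tesla's actual software works, and every field name and threshold here is invented):

```python
# A guess at the kind of filtering being described: discard stationary,
# high-mounted radar returns as probable signs. Field names and thresholds
# are invented for illustration; this is not Tesla's actual logic.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float            # distance to the reflection
    closing_speed_ms: float   # how fast we are approaching it
    elevation_m: float        # estimated height of the reflection

def is_braking_target(r: RadarReturn, own_speed_ms: float) -> bool:
    # A stationary object closes on us at roughly our own speed.
    stationary = abs(own_speed_ms - r.closing_speed_ms) < 1.0
    sign_like = r.elevation_m > 1.2   # high reflection, nothing at bumper level
    if stationary and sign_like:
        return False   # treated as a sign, so ignored
    return True

# A high trailer side looks much like a sign to this filter:
trailer = RadarReturn(range_m=60.0, closing_speed_ms=31.0, elevation_m=1.4)
print(is_braking_target(trailer, own_speed_ms=31.0))  # False: filtered out
```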
 
ABS was designed to work in the wet, so I'd be pretty surprised if it wasn't good on wet roads.
But yeah, it can't handle gravel or packing snow very well. But then neither can most people.

Of course, you have to remember that the intent is not to stop the car faster. The intent is to maintain control while stopping the car as quickly as possible.
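For what it's worth, the textbook description of ABS is a slip-ratio control loop, something like this simplified sketch (the numbers are made up):

```python
# Textbook ABS idea: keep wheel slip near the peak-grip point by repeatedly
# releasing and reapplying brake pressure. Gains and thresholds are made up.
# It also shows why gravel is hard: there, a locked wheel ploughing up a
# wedge of loose material can out-brake the "keep it rolling" strategy.

TARGET_SLIP = 0.15  # slip ratio where grip roughly peaks on tarmac

def slip_ratio(vehicle_speed, wheel_speed):
    """0.0 = free rolling, 1.0 = fully locked wheel."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def modulate_pressure(pressure, vehicle_speed, wheel_speed, step=0.1):
    """Release pressure as the wheel starts to lock, reapply as it recovers."""
    if slip_ratio(vehicle_speed, wheel_speed) > TARGET_SLIP:
        return max(0.0, pressure - step)  # wheel locking: back off
    return min(1.0, pressure + step)      # grip available: brake harder

# At 20 m/s with the wheel slowed to 12 m/s (40% slip), pressure is reduced:
print(round(modulate_pressure(0.8, 20.0, 12.0), 2))  # 0.7
```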
 
Why did you think the ABS wasn't working well on that loose surface?
 
High levels of automation can cause problems; planes have been highly automated for some time (apart from the start and the end of the flight).

In 2007 a plane on approach to Bournemouth airport stalled, which is very, very bad. The auto-throttle stopped working and failed to warn the pilots of the failure. So day in, day out it works fine, the humans get into a routine, and then one day the auto-throttle fails and they are surprised by the plane's behaviour.

Aircraft Accident Report 3/2009 - Boeing 737-3Q8, G-THOF, 23 September 2007 Air Accidents Investigation Branch report - GOV.UK

I think this is the problem with almost fully automated cars, which will fail now and then: they expect the human to step in and quickly deal with a problem after long periods of automated driving.

The big advantage with cars is that this problem can be solved once cars become fully automated and can fail safe, i.e. if the car thinks there's a problem, it parks itself at the side of the road. Don't let humans make the decisions; leave it to the machines. :)

(Of course, on rare occasions I'm sure automated cars will cause people to die, but it's still better than leaving the decisions to the human in the car.)
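In sketch form, that fail-safe behaviour might look something like this (the states and transitions are invented, obviously):

```python
# Sketch of the fail-safe idea: on a fault the car degrades to a safe stop
# on its own, instead of handing control to a possibly inattentive human.
# States and transitions are invented for illustration.

from enum import Enum, auto

class Mode(Enum):
    DRIVING = auto()
    DEGRADED = auto()     # fault detected: slow down, indicate, find the verge
    PARKED_SAFE = auto()  # stopped at the roadside with hazards on

def next_mode(mode: Mode, fault: bool, at_verge: bool) -> Mode:
    if mode is Mode.DRIVING and fault:
        return Mode.DEGRADED  # not "beep and hope the human is paying attention"
    if mode is Mode.DEGRADED and at_verge:
        return Mode.PARKED_SAFE
    return mode

mode = Mode.DRIVING
mode = next_mode(mode, fault=True, at_verge=False)  # -> DEGRADED
mode = next_mode(mode, fault=True, at_verge=True)   # -> PARKED_SAFE
print(mode)  # Mode.PARKED_SAFE
```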
 
Anyhow, these contraptions are not foolproof: if the truck appeared at the last moment, when the car was only metres away, it would not have been able to avoid a collision.
 
Interesting case (from the Netherlands) of Tesla Autopilot's forward collision warning system identifying a potential collision before it occurred, then braking the Tesla ahead of the driver doing so manually when it came to pass.
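Systems like that are usually described in terms of a time-to-collision threshold; roughly like this (the thresholds here are invented for illustration):

```python
# Forward collision warning in its simplest textbook form: act when the
# time-to-collision (range divided by closing speed) drops below a threshold.
# The thresholds are invented for illustration.

def time_to_collision(range_m: float, closing_speed_ms: float) -> float:
    if closing_speed_ms <= 0:
        return float("inf")  # not closing on the target at all
    return range_m / closing_speed_ms

def response(range_m: float, closing_speed_ms: float) -> str:
    ttc = time_to_collision(range_m, closing_speed_ms)
    if ttc < 1.0:
        return "brake"  # too late to rely on the driver
    if ttc < 2.5:
        return "warn"   # beep while the human still has time to react
    return "none"

print(response(40.0, 25.0))  # 'warn': 1.6 s to impact
```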
 
Interesting... Footage on youtube is a bit clearer:

 
That's not nice.
I'm sure I heard a story recently about police trying to pull over a self-driving car and not being able to get it to stop?
 