
autonomous cars - the future of motoring is driverless

I'm not sure that I agree. Don't you find that when you are driving, it's not just a case of identifying what sort of drivers are around you, but also of reacting accordingly, incredibly swiftly, based - and I hate to say it - pretty much on instinct? You can figure out exactly who's going to cut you up on a roundabout because they're distracted, just by seeing that their reactions are slightly slow or undisciplined - not enough to comment on, and imperceptible to any sort of sensor. I'm surprised that you always find it easy to spot a middle lane hogger, too. I find it less simple, because they are quite often going very slightly faster than the vehicles to their left, but not necessarily fast enough to actually overtake, because that's not really their aim.

Of course, if it was all automated cars rather than a mix, this would all be much easier.

Broadly speaking, what kabbes said.

Specifically, a driver who is reacting slightly slowly or driving in an undisciplined manner would be absolutely perceptible to a sensor. How do you know this is happening? Because you use your eyes, along with memories of past situations, to assess that this particular car is moving in a way that reminds you of previous times at which you have been cut up by a driver.
Your eyes are sensors, the car has visual sensors and more sensors besides. Your memory is a flawed and limited database, the car has a less flawed and bigger database.
Why wouldn't the car be able to see what you see (and some things you can't) and compare that to past known behaviour to come to the same conclusions you do?

There is a huge amount of work to be done in terms of visual object recognition to allow those sensors and database to work together properly - this is where humans beat machines currently - but this area has improved so much over the past 20 years that there's no reason to doubt that it will continue to improve, and at some point in time become reliable enough (in all weathers) to perform as well as we do.

With middle lane hogging - the situation you describe where someone is slowly overtaking other drivers - they aren't hogging the middle lane, not until they come to the end of the queue and don't move left. How do you behave differently in this situation if you think someone is going to turn out to be a middle lane hogger? Is there any reason to behave differently? Either you want to travel faster than them, in which case you overtake in lane 3, or you don't, in which case you don't need to overtake. Whether they are middle lane hoggers makes no difference to this decision making. It can cause queues/delays, and when they are actually hogging the middle lane the decision to overtake is more difficult and may need to be done in two stages, but really this is just a car to be overtaken (or not), especially whilst they are overtaking themselves.
An auto car will easily be able to detect the relative speed differences between vehicles in different lanes, and if you identify this behaviour as typical of someone who is going to be a middle lane hogger, why would the auto cars not also learn it after millions of hours of driving experience, seeing whether a slow speed-differential overtake results in middle lane hogging more often than not? I doubt you keep a tally of the times this has or hasn't resulted in middle lane hogging, but they will. Perhaps your perception will turn out to be wrong in more cases than it is right (although cautionary factors may come into play, i.e. in some situations it is better to be wrong 99% of the time, because the 1% of the time you are right is going to be catastrophic if you don't behave that way - I don't see it with middle lane hoggers, but in the "drivers edging out at junctions" situation this might come into play).
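As a toy illustration of that last point (not any real system - the numbers are made up): even when an event is rare, the cautious policy can minimise expected cost if being wrong in the rare case is catastrophic.

```python
# Expected-cost comparison for the "drivers edging out at junctions" case:
# caution costs a little every time; recklessness costs a lot, but rarely.

def expected_cost(p_event: float, cost_if_cautious: float,
                  cost_if_not_and_event: float) -> tuple[float, float]:
    """Return (expected cost of acting cautiously, of not doing so)."""
    cautious = cost_if_cautious                      # paid every time
    reckless = p_event * cost_if_not_and_event       # paid only when the event occurs
    return cautious, reckless

# Only 1% of edging drivers actually pull out, but a collision is hugely
# costly compared with easing off for a moment (illustrative figures).
cautious, reckless = expected_cost(p_event=0.01,
                                   cost_if_cautious=1.0,
                                   cost_if_not_and_event=1000.0)
print(cautious < reckless)  # True: caution wins despite being "wrong" 99% of the time
```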
 
My belief is that autonomous cars are coming, but we should advance towards them incrementally, step by step.
I would have thought we should be looking at a step change - a change by a whole factor - if the effort, expense and nannying of autonomous cars are to be deemed worthwhile and a success.
You can't do this, because incremental steps towards autonomy are dangerous. We've had this discussion before but you can't leave humans in partial, occasional control of a car. So at some point, approximately now, you have to either go all in or give up.
 
You can't do this, because incremental steps towards autonomy are dangerous. We've had this discussion before but you can't leave humans in partial, occasional control of a car. So at some point, approximately now, you have to either go all in or give up.

Whilst I think there are some incremental steps that have been made (such as auto-parking), I do agree with this. Other than the learning/development stage they are going through (full auto with a paid human staff driver available to intervene, not publicly available), I don't really know what other stages there could be. I think you could get full auto-trucks limited to motorways, moving goods between distribution centres without entering towns/cities, but I'm not sure you could have auto-motorway driving which stops when you leave the motorway - you'd need parking-type areas at every entry/exit to allow for/enforce the changeover.
I'm not sure if the auto-truck thing could work in the UK, as I don't know where the distribution centres are, but I know there is a move to having them outside cities and using smaller vehicles to bring goods in. I imagine there are at least some places you could do this, if not in the UK then in Europe or North America.

Or there is stuff like they are doing in Warwick, iirc, where they will have full auto vehicles moving along very limited paths. Waymo/Uber in Arizona: full auto, but without weather concerns. Get it working in perfect conditions first, then develop on to imperfect conditions. These are full auto but limited in scope - so is that incremental development? Not of the technology itself, perhaps, but of its practical application.

We can see the issues of partial automation in Tesla's Autopilot: VIDEO: Tesla Driver Appears To Be Asleep At The Wheel On LA Area Freeway - and if you search for "tesla driver sleeping" you find more than one story like that.
 
You can't do this, because incremental steps towards autonomy are dangerous. We've had this discussion before but you can't leave humans in partial, occasional control of a car. So at some point, approximately now, you have to either go all in or give up.
Cruise control, Auto braking, Lane assist, ABS, Auto parking, Reversing sensors .. are all increments towards autonomy which are not yet present in the vast majority of cars on the road.

eta: and could you not say the same thing wrt aircraft auto-pilot systems, at the moment they fly the aircraft for most of the flight but a human pilot takes over for takeoff and landing.
 
Cruise control, Auto braking, Lane assist, ABS, Auto parking, Reversing sensors .. are all increments towards autonomy which are not yet present in the vast majority of cars on the road.
ABS has been mandatory fitment on new cars for a long time, auto braking is or will be soon. All of these technologies are well established, it's only economics that prevent them being omnipresent. Missing the point though: you can't delegate control any further without diminishing human roles to the point of being dangerous. Arguably we're already there.
 
ABS has been mandatory fitment on new cars for a long time, auto braking is or will be soon. All of these technologies are well established, it's only economics that prevent them being omnipresent. Missing the point though: you can't delegate control any further without diminishing human roles to the point of being dangerous. Arguably we're already there.
And what about the aircraft example above?

And why cars, whose environment is complex, and not rail whose environment is much more controlled?
 
eta: and could you not say the same thing wrt aircraft auto-pilot systems, at the moment they fly the aircraft for most of the flight but a human pilot takes over for takeoff and landing.
Yes: confusion over who is flying the plane, and how, has killed people. But there are lots of differences. Trained professionals, a distinct mode of control, usually more time to react, the benefit to safety of its operation, and so on.

I'm not sure what point you're making about rail.
 
..
I'm not sure what point you're making about rail.

My point is: why are we tackling the automation of cars with their very complex environments, before we tackle trains, which are arguably in a much more controlled environment, no steering for example?
 
My point is: why are we tackling the automation of cars with their very complex environments, before we tackle trains, which are arguably in a much more controlled environment, no steering for example?
I don't see why they should be mutually exclusive but driverless trains exist and have been in use for a long time.
 
Who is “we” in this statement? Me and you? Or commercial organisations whose R&D is directed towards projects that will bring in the highest rate of return?

Technology budgets aren’t government mandated for maximum public utility, more’s the pity.
 
I don't see why they should be mutually exclusive but driverless trains exist and have been in use for a long time.
Well, I suppose there's the transit between the Stansted gates and the main airport, and recent developments in mining trains in Australia, but the majority of the UK mainlines and tubes have drivers, no?
 
The link train between Birmingham international station/NEC and the airport is driverless.
Weren't they going to make the Jubilee tube line driverless, but objections from the public and trade unions prevented it? More a political than a technological concern at this point?
 
Humanity was what was intended in the "we"

As in humanity has decided to push ahead with autonomous cars ..
Really? I don’t remember getting a vote on it.

Why are “we” even attempting to build cars at all when “we” still can’t cure psoriasis?
 
Until we can be certain that they're absolutely safe in all situations and have all the computational and sensory ability needed to drive without killing people, I don't think we should be letting humans drive at all.
 
To be clear, anything you do on “instinct” whilst driving is actually a conditioned response developed from repeated cases of experiencing a particular pattern and the outcome of that pattern. It isn’t innate. Replicating “instinct” is exactly what machine learning is designed to do, and it does it better than any human, because for every 1000 such patterns you experience and learn from, the machine can experience and learn from 1000 million. How robust is it? Incredibly robust. It’s being used in your everyday life in everything from search engines and sat navs to traffic lights and medical diagnostics. Next up: the nuclear industry is developing machine learning for predicting failures in its systems. It outperforms humans on any task that involves learning from experience to develop “instinct”, and that’s where the use cases are going.
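The "conditioned response" idea can be sketched as a toy frequency learner (deliberately simplistic - real systems use far richer models, and the pattern name and figures here are invented): it just counts how often a bad outcome followed each observed pattern, which is essentially what "instinct" amounts to, except a fleet can count millions of cases where a human sees hundreds.

```python
from collections import defaultdict

class PatternLearner:
    """Learn a 'conditioned response' by tallying outcomes per pattern."""

    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])  # pattern -> [bad outcomes, total]

    def observe(self, pattern: str, bad_outcome: bool) -> None:
        bad, total = self.counts[pattern]
        self.counts[pattern] = [bad + int(bad_outcome), total + 1]

    def risk(self, pattern: str) -> float:
        """Estimated probability of a bad outcome, given the pattern."""
        bad, total = self.counts[pattern]
        return bad / total if total else 0.0

learner = PatternLearner()
# A driver might meet this pattern a few hundred times; a fleet logs millions.
for _ in range(900):
    learner.observe("slow_hesitant_on_roundabout", bad_outcome=False)
for _ in range(100):
    learner.observe("slow_hesitant_on_roundabout", bad_outcome=True)
print(learner.risk("slow_hesitant_on_roundabout"))  # 0.1
```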

Sorry kabbes, I seem to have missed this post.

Search engines and sat navs are not as complex as driving, where there are many more variables, and many argue Google, the currently dominant so-called search engine, is now in fact an advertising engine for the benefit of Google before anyone else. Their engine now just displays results people have bid for and are paying for; it isn't rocket science.

Traffic lights? A wholly modellable closed system with fixed inputs. Road work crews put up new traffic light systems in minutes. 40 years ago, traffic lights in Köln would predict and display the speed you should drive at to arrive at the next lights when they were green; this reduced the emissions from slowing down and speeding up. Where are these systems in the UK? Nowhere is where.

You display great, almost messianic, confidence in machine learning, kabbes - I have seen you state this before - yet there are others on this same thread saying that autonomous vehicles will initially be restricted to safe, predictable roads and will not operate outside those zones. Why do you think this view is there?

Also I wonder how you can have such confidence, because as far as I am aware you are neither a software programmer nor an engineer. As it happens, neither am I, but I have worked alongside engineers developing products for the car industry. There is a distinction between teams working on safety critical devices like ABS and those working on less safety critical items like ICE (In Car Entertainment) or body control.

Autonomous driving hardware - sensor packs and actuators, controlling software and the like - is going to be 100% safety critical. I don't wholly disagree with you on the potential for machine learning, I just don't share your massive confidence in the driving example and the way it is being approached at the moment.

I think if you want to look for the problems with autonomous cars, it’s precisely the opposite of the instincts you’re proud to have developed. It’s the times you override that instinct, or have to figure out something you don’t have instinct for. Novelty is a problem for machines because they don’t have intentionality, so they can’t take a top-down view of a system to spot where it is failing them. That infamous Uber death occurred when the light source confused the car’s sensors, but whereas a person with the sun in their eyes understands that a problem has developed with their driving model itself, the machine can’t do that kind of meta-analysis. Either the failsafe has been preprogrammed or learnt, or it doesn’t exist.

I thought the Uber fatal accident occurred because their primary sensor had a blind spot - no matter; the Tesla Autopilot fatal crash into the truck was also because the sensors did not detect the truck.

I often have to drive 40 miles down the M4 into the setting sun. It is a very demanding drive; I use the sun visor and drive with much greater care than I would in good visibility.

If autonomous vehicles are to know when or how to take extra care, they will have to comprehend and be aware of the limitations in their sensor systems. All sensor systems have limitations, weaknesses and vulnerabilities - hence there is more than one system, but weaknesses and blind spots will still exist.
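The "knowing when to take extra care" point can be sketched very crudely (a hypothetical scheme, not any vendor's API - the sensor names, confidence figures and threshold are all invented): give each sensor a self-reported confidence, let the weakest sensor dominate, and switch to a cautious mode when it drops below a threshold, much as a human slows down with the sun in their eyes.

```python
def fused_confidence(readings: dict[str, float]) -> float:
    """Overall confidence is limited by the weakest sensor (its blind spot)."""
    return min(readings.values())

def driving_mode(readings: dict[str, float], threshold: float = 0.6) -> str:
    """Pick a behaviour mode based on how much the sensors can be trusted."""
    return "normal" if fused_confidence(readings) >= threshold else "cautious"

# Low sun blinds the camera; radar and lidar are unaffected.
glare = {"camera": 0.2, "radar": 0.9, "lidar": 0.8}
clear = {"camera": 0.9, "radar": 0.9, "lidar": 0.8}
print(driving_mode(glare))  # cautious
print(driving_mode(clear))  # normal
```

Taking the minimum rather than the average is a deliberate choice here: averaging would let two healthy sensors mask one blinded sensor, which is exactly the blind-spot failure being discussed.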
 
That’s an odd response that shows you *still* don’t understand the point. In particular, how can someone be showing “messianic confidence” in something they are saying has problems that may not be solvable? I’m not saying machine learning cars will work; I’m saying you’re looking for the problem with it in exactly the wrong place.

And this nonsense of what “we” are choosing to solve is ridiculous. There is no “we” making these decisions. Besides, if machine learning trains are so crucial, what are you personally doing about it? If nothing, why should anybody else be?
 
That’s an odd response that shows you *still* don’t understand the point. In particular, how can someone be showing “messianic confidence” in something one is saying has problems that may not be solvable. I’m not saying machine learning cars will work, I’m saying you’re looking for the problem with it in exactly the wrong place.
I haven't seen you saying machine learning as applied to autonomous vehicles may have problems that may not be solvable. If that is what you believe then I won't argue.
 
You display great almost messianic confidence in machine learning kabbes, I have seen you state this before, yet there are others on this same thread saying that autonomous vehicles will initially be restricted to safe predictable roads and not operate outside those zones. Why do you think this view is there?

For me, that is an initial step - one we are seeing in Arizona/New Mexico. It makes sense: you can remove a variable, get the things working in good weather, and develop onwards from there to cope with more weather conditions. The step is part of the machine learning that needs to be done. It's not a lack of confidence in machine learning that makes me say there will be initial stages on easy roads (whether easy because of the weather, or motorways because of their simplicity, or small fixed routes/areas in towns/cities which have been heavily mapped and are clear of any known situations auto cars have trouble with), but because this provides a space for machines to learn the basics before moving on to more complicated situations. Kind of like how we do it.
 
My point is: why are we tackling the automation of cars with their very complex environments, before we tackle trains, which are arguably in a much more controlled environment, no steering for example?
Because the vast majority of people traveling in trains don't drive them, and are quite happy to split the cost of paying someone else to do it.
 
Will autonomous cars be able to spot things like personal plates or Audi badges and know to give those cars a bit more space/caution in anticipation of bad driving/entitlement issues?
 