
Self-driving cars: Motorists will not be liable for crashes and can watch TV behind the wheel, government says

Are you in favour of self-drive cars?


His point is that a chess programme can’t abstract from its knowledge of chess to play Monopoly, even though Monopoly is a more straightforward game.
Not sure a human could, they would need to be taught the rules for monopoly to start with. :hmm:
 
Plane auto-pilots, chess, Monopoly, AI - there seems to be a lot of confusion over the challenges that autonomous vehicles face and their applicability to other problems. An AV doesn’t need to learn how to be a completely different vehicle, it doesn’t need to enjoy driving and it doesn’t need to dream about electric sheep when it is being charged.

I think possibly the biggest issue has been mentioned, though - the “theory of mind” element that is required in order for an AV to mix it on the road with other drivers. As has the ability to recognise and deal with ambiguous situations (where humans often apply “theory of mind” elements to muddle through, so that relates to the first issue).

It’s much easier to envisage AVs working very well on AV-only roads where they can communicate with each other electronically, but that’s a whole messy infrastructure problem.
 
With modern autopilots you can set the destination airport and engage the system, and it will take off, fly all the way to the other airport, and even land and taxi to the correct gate with only an OK from the pilot.

Whilst you’re quite right, the point I’m trying to make is that auto pilot exists to aid pilots, not replace them.
 
Also, aircraft pilots are monitoring everything very intensively while the plane is in flight, regardless of whether auto-pilot is engaged.
This is completely different to the intentions for self-driving vehicles.

Precisely and it’s auto pilot that enables them to do that. They wouldn’t have much time to monitor the instruments if they were busy pissing around with the yoke and pedals for fourteen hours.
 
Precisely and it’s auto pilot that enables them to do that. They wouldn’t have much time to monitor the instruments if they were busy pissing around with the yoke and pedals for fourteen hours.

There’s quite a big emphasis on monitoring and optimising fuel burn. Which, as a totally uninformed opinion, strikes me as the kind of thing that you might expect computer algorithms to be a good fit for.

I’m sure an aviation enthusiast on here would easily be able to explain why that’s not correct, though.
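
For illustration only, here's a toy sketch of the kind of optimisation being gestured at: a made-up fuel-flow model and a brute-force search for the cruise speed that minimises fuel burned per kilometre. Real flight-management systems work from detailed performance tables and a cost index rather than anything this crude, and every number below is invented.

```python
# Toy cruise-speed optimisation. Not real aviation data: the
# coefficients are invented purely for illustration.

def fuel_flow_kg_per_hr(v_kmh):
    """Crude stand-in model: one term grows with speed cubed
    (parasite drag), one falls with speed (induced drag)."""
    return 2.0e-6 * v_kmh ** 3 + 4.0e5 / v_kmh

def fuel_per_km(v_kmh):
    """Fuel burned per kilometre travelled at a given speed."""
    return fuel_flow_kg_per_hr(v_kmh) / v_kmh

# Brute-force search over plausible jet cruise speeds.
best_v = min(range(500, 1000), key=fuel_per_km)
print(f"most economical speed: {best_v} km/h, "
      f"burning {fuel_per_km(best_v):.2f} kg/km")
```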
 
Plane auto-pilots, chess, Monopoly, AI - there seems to be a lot of confusion over the challenges that autonomous vehicles face and their applicability to other problems. An AV doesn’t need to learn how to be a completely different vehicle, it doesn’t need to enjoy driving and it doesn’t need to dream about electric sheep when it is being charged.

I think possibly the biggest issue has been mentioned, though - the “theory of mind” element that is required in order for an AV to mix it on the road with other drivers. As has the ability to recognise and deal with ambiguous situations (where humans often apply “theory of mind” elements to muddle through, so that relates to the first issue).

It’s much easier to envisage AVs working very well on AV-only roads where they can communicate with each other electronically, but that’s a whole messy infrastructure problem.
Driving a car involves an unbounded set of circumstances - the challenge is to create an AI that can adapt to something it hasn't been trained to deal with. But if it hasn't been trained to deal with something, it has no general principles to fall back on. And it appears that some (many) AI developers refuse to recognise that this is even a problem.
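
A minimal sketch of that failure mode, using a stand-in curve-fitting model rather than anything driving-related: the model can fit everything it was trained on and still be wildly wrong the moment it leaves that territory, because nothing in the fit encodes a general principle about what lies outside it.

```python
# Minimal out-of-distribution sketch: a polynomial fitted on one
# input range, then queried outside it. A stand-in for "trained
# behaviour" generally, not for any real driving system.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, 200)   # all the situations it has "seen"
y_train = np.sin(x_train)              # the behaviour it learns to copy

coeffs = np.polyfit(x_train, y_train, deg=7)

print(np.polyval(coeffs, 1.5))   # inside the training range: ~sin(1.5)
print(np.polyval(coeffs, 10.0))  # outside it: astronomically wrong
```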

Gary Marcus, who I referenced above, advocates a hybrid approach in which human-inputted principles are combined with the results of computer self-teaching. Even then, I'm not sure it will solve the problem.

Going back to chess to illustrate the point: at one stage in the development of chess computers, when the advantages of training them on past games were being discovered, a computer that had been shown a bunch of grandmaster games inexplicably started to throw its queen away at the first opportunity. Why? Because it had picked up on the fact that when a player sacrificed their queen in the games it had studied, that player usually won - grandmasters won't sacrifice a queen unless they're very confident that doing so will win them the game in very short order. The computer eventually overcame this compulsion, but the fact that it had so blindly acquired it at one stage in its learning was revealing. This relates to the point you made earlier about computers confidently making catastrophic mistakes. It didn't ask itself the question 'why?' It didn't think about why the queen was being sacrificed. That's because it doesn't really think at all.
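
The shape of that mistake can be reproduced in a few lines. A made-up set of game records in which queen sacrifices correlate with winning (because strong players only sacrifice when the win is already in hand) is enough to make a purely statistical learner 'prefer' the sacrifice:

```python
# Toy sketch of the queen-sacrifice trap, using invented game records.

games = (
    [{"queen_sac": True,  "won": True }] * 9 +   # sacs from winning positions
    [{"queen_sac": True,  "won": False}] * 1 +
    [{"queen_sac": False, "won": True }] * 45 +
    [{"queen_sac": False, "won": False}] * 45
)

def win_rate(sacrificed):
    subset = [g for g in games if g["queen_sac"] == sacrificed]
    return sum(g["won"] for g in subset) / len(subset)

print(win_rate(True))    # 0.9 - sacrificing "looks" much better
print(win_rate(False))   # 0.5
# A learner chasing the statistic alone "concludes" that giving up the
# queen causes winning. It never asks why the sacrifices in the data
# worked, which is exactly the failure described above.
```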

There is a certain 'naive AI' approach, which appears to be common, that holds that self-teaching 'neural network'-type systems will arrive at things like 'understanding' as an emergent phenomenon that results from complexity. But it hasn't happened yet. No hints of it happening yet have appeared. I suspect that this is because understanding doesn't emerge as a result of complexity. Understanding emerges as a result of life processes. So Marcus's idea of welding general principles to the computers' self-learning may well help, but if the computer didn't come up with those principles itself, it won't actually understand them. They will be a sticking plaster at best.
 
There’s quite a big emphasis on monitoring and optimising fuel burn. Which, as a totally uninformed opinion, strikes me as the kind of thing that you might expect computer algorithms to be a good fit for.

I’m sure an aviation enthusiast on here would easily be able to explain why that’s not correct, though.

I know pilots input fuel loads into the systems of commercial jets, so there must be computational requirements for it. My knowledge ends there, though. I never got into the jets when simming.
 
These neural-net-type systems need to be many layers deep before they can even emulate the processes of a single biological neuron.
I agree that many of the ideas around the applicability of AI to problems like driving on a road that has human drivers on it are misguided.

If the problem is solved, though, I don’t think it will involve “hard AI”. I think it will be a case of dedicated machines that do nothing but drive and have very limited adaptation capabilities once they leave the assembly line.
 
I think another way of looking at it is that you can teach a car how to drive but you can’t teach it how to be a human. That means you can’t get it to anticipate a human doing things it hasn’t seen before, which is the very essence of being human. And that means the only way to make sure the car’s ability to drive is not compromised by a human doing something novel is to remove the humans. The result would be dystopian indeed - cities even more impenetrable to pedestrians than they already are, and people shut out from the system unless they can afford access to AI transportation.
 
I think another way of looking at it is that you can teach a car how to drive but you can’t teach it how to be a human. That means you can’t get it to anticipate a human doing things it hasn’t seen before, which is the very essence of being human. And that means the only way to make sure the car’s ability to drive is not compromised by a human doing something novel is to remove the humans. The result would be dystopian indeed - cities even more impenetrable to pedestrians than they already are, and people shut out from the system unless they can afford access to AI transportation.

Yep, pretty much my conclusion a few posts back. Though how dystopian it would be would come down to political and economic decisions.
 
Yeah, I agree that AI-only roads would be more feasible. Can't see that happening any time soon, although maybe somewhere like China might try it.

China already uses AI face recognition to prosecute people for road offences. Marcus in that podcast references a case where a famous Chinese actress received a court summons through the post. Turned out that her face had been on a billboard opposite where a road crime had taken place.

These things are nowhere near fit for purpose as yet. And those purposes themselves are really quite unsettling.
 
Now I think about it, this whole AI driving problem is neatly encapsulated by this image

[image attachment: a CAPTCHA asking the user to select all squares containing crosswalks]
Easy for a person, who understands the purpose of things like crosswalks. Apparently so difficult for a machine that it’s still the de facto gatekeeper against machines.
 
Yeah, I agree that AI-only roads would be more feasible. Can't see that happening any time soon, although maybe somewhere like China might try it.

China already uses AI face recognition to prosecute people for road offences. Marcus in that podcast references a case where a famous Chinese actress received a court summons through the post. Turned out that her face had been on a billboard opposite where a road crime had taken place.

These things are nowhere near fit for purpose as yet. And those purposes themselves are really quite unsettling.

Some pretty poor software testing going on there.
 
Now I think about it, this whole AI driving problem is neatly encapsulated by this image

[image attachment: a CAPTCHA asking the user to select all squares containing crosswalks]
Easy for a person, who understands the purpose of things like crosswalks. Apparently so difficult for a machine that it’s still the de facto gatekeeper against machines.

Would be interesting to know whether researchers are chucking advanced systems at this. It’s largely a security measure to stop entry from systems with no AI element at all.

Even reading the instructions for different kinds of test is quite a difficult computing problem.
 
Some pretty poor software testing going on there.
Maybe, or maybe it is just inherent to these things that they'll do something really dumb eventually.

In this case, the computer failed the 'Father Dougal Test': 'This is small; that is far away.'
 
Maybe, or maybe it is just inherent to these things that they'll do something really dumb eventually.

In this case, the computer failed the 'Father Dougal Test': 'This is small; that is far away.'

I think a lot of things get called AI that really aren’t, even taking into account the somewhat ropey definitions that get used.

Just being able to find a face and pass it to the recognition algorithm seems well below par, and something that you’d catch with reasonable testing.
 
I think a lot of things get called AI that really aren’t, even taking into account the somewhat ropey definitions that get used.

Just being able to find a face and pass it to the recognition algorithm seems well below par, and something that you’d catch with reasonable testing.
If the woman's face just happened to be the size appropriate to someone in a car, perhaps as seen through the window on the other side of the road, the only way to pass the Father Dougal test is to have an understanding of the context in which you are seeing the face. That's not simple at all.
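
The geometry makes that concrete. Under a simple pinhole-camera model the apparent size of a face is (focal length × real size) ÷ distance, so a big, distant face and a small, nearby one can be pixel-for-pixel identical - made-up numbers below:

```python
# Pinhole-projection toy: why "this is small; that is far away" is
# genuinely hard from a single image. All numbers are invented.

FOCAL_PX = 1000.0  # hypothetical camera focal length, in pixels

def apparent_height_px(real_height_m, distance_m):
    """Apparent size = focal length * real size / distance."""
    return FOCAL_PX * real_height_m / distance_m

# A 2.4 m billboard face 30 m away across the road...
print(apparent_height_px(2.4, 30.0))    # 80.0 px
# ...is indistinguishable by size from a 0.24 m face 3 m away in a car.
print(apparent_height_px(0.24, 3.0))    # 80.0 px
```

Size priors learned from training data can't break that tie on their own; you need depth information or some grasp of what a billboard is and where faces plausibly sit in a street scene.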
 
I wonder, instead of autonomous cars which take up the same amount of space and use the same amount of energy as normal cars, why not get a single large vehicle that can carry dozens of passengers at once? Instead of expensive and potentially fallible self-driving tech, these vehicles could be operated by a single professional driver. Then people who don't want to drive could still get where they need to be and traffic and pollution would be greatly reduced.

For longer distances and on routes where large numbers of people need to travel, multiple large vehicles could be connected together and controlled by a single human operator. They could even be given special metal roads to run on, to increase speed and energy efficiency and to keep them separate from the rest of the road network.

As these innovations would benefit the general public rather than just private individuals, they could be funded from general taxation. This would create beneficial economies of scale and ensure that transport services were available to all, regardless of economic or social status.

Ah, forgive an old fool his impossible pipe dreams. Of course it's much more important that the business cunts in their audis can finally watch porn on their way to work.


Do you mean like a train?

😄
 
If the woman's face just happened to be the size appropriate to someone in a car, perhaps as seen through the window on the other side of the road, the only way to pass the Father Dougal test is to have an understanding of the context in which you are seeing the face. That's not simple at all.

Yeah, but I’d bet money that this wasn’t the case at all.
 
Yeah, but I’d bet money that this wasn’t the case at all.
One of us would be losing money, as I strongly suspect that something like that did happen. I'd be surprised if the computer didn't have information about how big a face is from its training, but that on its own doesn't solve the Father Dougal problem.
 
The plan isn't to use lithium in the longer term, but your point is very pertinent. Unless we find an alternative there is no way private vehicles can continue to operate in the long term.
It's not as though everything except private vehicles could operate in the absence of fossil fuels, either.


Back to the horse and cart.
Eco friendly.
Plenty manure for fertilizer.
Grand about town.
 

UK's first self-driving bus takes to the road for tests in Scotland


Passengers are expected to start using the self-driving bus this summer as Scotland begins testing the "hugely exciting project".

Scotland will today start testing a new self-driving bus that is set to become the UK's first full-sized autonomous vehicle of its kind to take to the roads.

Stagecoach will be carrying out on-road testing of its self-driving bus from today in preparation for passengers stepping aboard later this summer.

The buses will be full of sensors that enable them to run on pre-selected roads without the safety driver having to intervene or take control.

UK's first self-driving bus takes to the road for tests in Scotland
 
It should be told.

It thinks it's playing noughts and crosses.

Facial recognition is something AI is still terrible at, from not recognising dark-skinned people to confusing toys and anything vaguely face-shaped with humans. It is improving all the time, but I really don't think it'll ever be able to cope with the differences humans can cope with. I always pose the example of a small child dropping their teddy bear or doll, which lands with its face visible to the car, while the child turns back to pick up the toy, face away from the car. The car would swerve away from the toy because it sees a "human" face and, if there's a car or another human on the other side, towards the child, because it doesn't see one there.

It's not a situation in which a human driver would be guaranteed never to kill the child, but an AI would actually be aiming for them.
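
To pin down what that scenario assumes, here is the deliberately naive decision logic it implies - a hypothetical sketch with invented names, and no real AV stack keys avoidance purely off face detections, but it shows how a face-only cue would invert the right behaviour:

```python
# Deliberately naive avoidance logic matching the scenario above.
# Hypothetical: real perception stacks fuse body detection, depth
# and motion rather than keying off faces alone.

def plan_evasion(detections):
    """Swerve away from anything showing a face; ignore the rest."""
    faces = [d for d in detections if d["face_visible"]]
    if faces:
        return ("steer_away_from", faces[0]["position"])
    return ("hold_course", None)

scene = [
    {"kind": "teddy_bear", "face_visible": True,  "position": "left"},
    {"kind": "child",      "face_visible": False, "position": "right"},
]

print(plan_evasion(scene))
# ('steer_away_from', 'left'): away from the toy's face - and towards
# the child, whose face the camera can't see.
```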
 
A bit of context on "edge cases" and what the state of play is at the moment can be found here. The driver is a former worker on Tesla AI systems who got fired for making videos about how the system was (and wasn't) coping with driving around San Francisco (xref that one with Elon "free speech" Musk, eh). A couple of notes on this as well - San Francisco is, like most US cities, a relatively simple grid layout with nice wide roads, a couple of oddities (tram lines) thrown in and a bit of construction work going on, and it's being driven at a relatively quiet time of day. It's safe to say the system is nowhere near ready for primetime in European cities yet, let alone in a position where it should be exempted from crash liability.



And this is it absolutely failing - very dangerously - to deal with more winding mountain roads in a video uploaded yesterday.

 