I saw this on Gizmodo and thought it was interesting: getting a machine learning system to explain what it has learnt.
Explainable Artificial Intelligence
As an example, many years ago a TV programme showed researchers demonstrating a system for recognising tanks. It appeared to work correctly, but when they showed it a slightly different picture, the system failed to spot the tank. After some investigation it turned out the training pictures had been taken at different times of day, and that was what the system had latched on to rather than the tank.
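The failure in that anecdote is a spurious correlation: a feature that separates the training classes perfectly without having anything to do with the target. A minimal sketch of the effect, using made-up synthetic data (everything here is hypothetical, not the actual tank system) where every "tank" training photo happens to be dark and every "no tank" photo bright:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a "photo" is just 64 pixel values around a
# given brightness. Tanks were shot at dusk (dark), non-tanks at noon.
def make_photo(brightness, n_pixels=64):
    return np.clip(rng.normal(brightness, 0.05, n_pixels), 0, 1)

train_X = np.array([make_photo(0.2) for _ in range(50)] +   # tanks, dark
                   [make_photo(0.8) for _ in range(50)])    # no tanks, bright
train_y = np.array([1] * 50 + [0] * 50)

# Nearest-centroid classifier on raw pixels. It scores perfectly on the
# training data, but the two centroids differ only in brightness, so
# brightness is the only thing it has actually learnt.
centroid_tank = train_X[train_y == 1].mean(axis=0)
centroid_none = train_X[train_y == 0].mean(axis=0)

def predict(photo):
    d_tank = np.linalg.norm(photo - centroid_tank)
    d_none = np.linalg.norm(photo - centroid_none)
    return 1 if d_tank < d_none else 0

# A tank photographed in daylight: bright, so the model says "no tank".
bright_tank = make_photo(0.8)
print(predict(bright_tank))   # 0 -- it learned time of day, not tanks
```

Explainability tooling is partly about catching exactly this: if the model could report *which* pixels drove the decision, the brightness shortcut would be obvious before deployment.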
I assume self-driving cars are heavily reliant on machine learning, so it's going to be critical for safety that they learn the right lesson. A car may work fine on a road during testing, but what will it do if the road is flooded? (Do what a human would do: drive quickly into the center of the flood and wreck the car.)