
Explainable Artificial Intelligence

HAL9000

Well-Known Member
I saw this on Gizmodo and thought it was interesting: getting a machine learning system to explain what it's learnt.

[Image: XAI_Explanation_619x316.jpg]


Explainable Artificial Intelligence

As an example, many years ago a TV program had researchers demonstrating a system for recognising tanks. It appeared to be working correctly, until they showed a slightly different picture and the system failed to spot the tank. After some investigation it turned out the training pictures had been taken at different times of day, and that is what the system had latched on to rather than the tank.
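
(For anyone curious what an "explanation" buys you here: one of the crudest but most popular tricks is occlusion sensitivity. This is just a sketch, assuming a hypothetical image `model` whose predict() returns class probabilities, but it shows the idea: grey out one patch at a time and watch the tank score. If the hot spots turn out to be the sky rather than the tank, you've caught the time-of-day bug.)

```python
import numpy as np

def occlusion_map(model, image, tank_idx, patch=16, stride=8):
    """Grey out one patch at a time; record how much the tank score drops."""
    h, w, _ = image.shape
    base = model.predict(image[None])[0, tank_idx]  # score on the intact image
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # blank this patch
            heat[i, j] = base - model.predict(occluded[None])[0, tank_idx]
    return heat  # large values mark the regions the classifier actually relies on
```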

I assume self-driving cars are heavily reliant on machine learning, so it's going to be critical for safety that they learn the right lesson. It may work fine on a road during testing, but what will the car do if the road is flooded? (Do what a human would do: drive quickly to the center of the flood and wreck the car :) )

 
Funnily enough, I've been working on something to do with this just this week. I'm a distributed systems nerd, not an AI nerd, but I do work with a lot of them. See Explainable AI – what does your voice say about you?

We made an accent classifier demo if you want to try it: MyAccent
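
(No claim here about how MyAccent actually works internally; this is just a sketch of the textbook recipe for this sort of thing: boil each clip down to MFCC statistics, then fit a bog-standard classifier over labelled clips. `paths` and `labels` are hypothetical placeholders.)

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path):
    """Summarise one clip as the mean/std of its MFCCs (a 26-dim vector)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_accent_classifier(paths, labels):
    """Hypothetical inputs: audio file paths plus labels like "british"/"american"."""
    X = np.stack([clip_features(p) for p in paths])
    return LogisticRegression(max_iter=1000).fit(X, labels)

# clf.predict_proba(...) is then the source of "78% British"-style scores.
```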
That's cool. I can hit 99% British if I speak posh, but in my normal (London) accent I'm only 78% British.

ETA: I am now messing with your survey results by repeatedly trying to see if I can fool it that I'm American. I managed 77%.
 
:) everyone does that
 
One point is that humans aren’t explainable - or at least, if you ask them why they did something, what they tell you may well be rubbish. They back-formulate reasons in many cases, and while they may believe those were honestly the reasons, they can (and often do) turn out to be inconsistent nonsense. Cases where humans consistently behave according to a clear set of rules are the _easy_ ones for AI.
 
Humans: “we know how to classify this content but we don’t have enough people, let’s get a learning system to follow us and then do it automatically”

AI: *is massively racist*

Humans: “OH GOD WHERE DID IT GET THIS FROM I LITERALLY CANNOT EVEN”
 
It’s an interesting problem. Not sure how to solve it; training your classifier is a bit of an art rather than something you can fully automate, but then how do you account for your own bias?
 
And it’s particularly hard to account for your own bias when you don’t admit to it or even see it - but the system that’s trained on thousands of you and your peers’ decisions just extrapolates it outwards because it doesn’t know that it’s meant to pretend otherwise.

Perhaps we should all have AI systems following us to point out when we are acting in contrast to how we like to think we do. Or, more likely, someone will develop “fake explaining AI” that justifies its biases just as well as a human. That’s worth money.
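
(If nothing else, you can measure the inherited bias even when nobody will admit to it. A minimal sketch, assuming hypothetical arrays `preds` (the model's yes/no decisions) and `group` (a group label for each item): compare per-group flag rates and see whether they diverge for comparable content.)

```python
import numpy as np

def flag_rates(preds, group):
    """Per-group rate of positive ("flag this") decisions."""
    return {g: float(preds[group == g].mean()) for g in np.unique(group)}

# Wildly different rates across groups for comparable content is the trained-in
# bias showing through; the classifier won't volunteer this on its own.
```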
 
A lot of money :eek:

“Never before has artificial intelligence (AI) had the ability to interpret what a human is drawing and then complete the piece for them. Beyond simple machine-generated art, Vincent is an engaging, interactive system in which the output is guided and influenced by the user.”

Pretty impressive. Especially when applied to helping disabled people.
 