Cloo
Following me bringing this up on the Elon Musk thread, I had a thought that I reckoned might need its own discussion.
My starting point was AI and bias, and my realisation that inevitably, some people [white men] were going to pipe up that attempting to remove bias inherent in AI (presumably by some tinkering with algorithms) was 'woke' and misleading. But obviously, to me at least, there are very good reasons to try to remove bias in AI because, even at a relatively small scale of use, we've seen damaging effects on the most disadvantaged.
It's also occurred to me that herein lies a problem with AI - sometimes one needs the literal facts, like if asking an AI 'What is the average income of black households in the US?' or 'What is the average highest educational qualification of black individuals in the US?', both things that are lower than for the white population for structural reasons of historical and ongoing racism. But at other times, presumably AI might be asked to make judgements where such information could prejudice a comfortably off or well-educated black American, because it has absorbed the facts about averages without the context? Can AI be trained to recognise these differences and not apply things wholesale to members of disadvantaged groups?