Jeff Robinson
Relevant exchange
Thought there'd be a thread on this already, but I couldn't find one. I've been a bit ignoring all this stuff, but it does seem to have had a leap forward recently.
BT job losses (partly related to AI) have been in the news recently, but what are the other, more far-reaching political & social ramifications of all this?
We shouldn't trust them. Heck I'd sooner ask ChatGPT what the solutions are.

the extinction of the human race and all biological life on earth has been suggested
AI risks leading humanity to 'extinction,' experts warn
It's the most recent in a series of alarms raised by experts in artificial intelligence — but also one that stoked growing pushback against a focus on what seem to be its overhyped hypothetical harms.
www.nbcnews.com
> Can we not have 2 jokes

I thought the joke was that he got ChatGPT to write that. I'm usually wrong though.
> We shouldn't trust them. Heck I'd sooner ask ChatGPT what the solutions are.
I don't see the harm in asking to be honest.

Only a twat would do that.
Oh, hang on a minute...
> Just think how all those 419 scammers and phishing cunts will benefit from well-written bait text. Poor English used to be one of the clues that helped stop them emptying your elderly parents' bank accounts.

Those mistakes are intentional, to filter out the most vulnerable and avoid wasting time and resources on people who are more likely to become suspicious.
> AI is still in its embryonic infancy, still a mere baby. Yet already we are starting to feel its cause and effect. So who knows what the true consequences of it all will be in the coming years and decades. Hopefully the positives will outweigh the negatives but if I'm honest my gut feeling is more pessimistic than optimistic. It feels like we're on the verge of something pretty monumental that is both exhilarating and terrifying at the same time.

One thing is for sure: it'll end up with the rich reaping the financial benefits and the poor getting less on their plate.
> One thing is for sure: it'll end up with the rich reaping the financial benefits and the poor getting less on their plate.

And us lot at the bottom having less and less power, and less and less opportunity to challenge those who do wield the power. Revolutionary General Strike anyone? When the work is being done by robots?
> And us lot at the bottom having less and less power, and less and less opportunity to challenge those who do wield the power. Revolutionary General Strike anyone? When the work is being done by robots?

Robots don't buy houses or cars. That could be problematic for Hayekist dreams.
> From a creative pov I think it can open up possibilities that are usually fenced off to all but the wealthy.

Like what, exactly?
> Robots don't buy houses or cars. That could be problematic for Hayekist dreams.

They haven't thought it all through yet.
> the progress of GAI into things like programming, surveillance, the legal profession - that's all pushing against traditionally secure professions and it'll be interesting to see how far that goes towards removing those people (not now, but within decades) from their privileged status.

It's already happening.
> It's already happening.
In the short term, it's a pause on new job creation. I think the skilled workers already in place will have to leverage AI to become more productive. The demands will increase if your boss knows you can now do the work of 5 (or more) people using these technologies. If you can't, or refuse, you'll fall behind and others will take your place who can/will.
At some point, the AI will be so good that they will replace, rather than augment, workers - but we're not there yet.
There's not been a huge increase in capabilities of the individual models in the last few months*, but what we have seen is people chaining them together so the sum is greater than the parts. People are experimenting with groups of AIs ("agents") taking on specific roles in a virtual organisation, and then communicating between each other to do the work of a traditional business.
Microsoft’s AutoGen framework allows multiple AI agents to talk to each other and complete your tasks
This collaborative approach can lead to significant efficiency gains. According to Microsoft, AutoGen can speed up coding by up to four times.
venturebeat.com
ChatGPT is already pretty good at most tasks, but it makes mistakes that require a human to oversee and correct. It's also good at doing this itself, though, so having two ChatGPTs, one as worker, one as QA / manager / fact-checker, can give better performance.
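That worker-plus-checker loop is easy to sketch. Toy Python below, with canned stand-ins for the model calls (nothing here is a real API or anything from AutoGen):

```python
# Toy sketch of the "worker + QA" pattern: one model drafts, a second checks.
# Both llm functions are canned stand-ins, not real chat-completion calls.

def worker_llm(task: str) -> str:
    # Stand-in worker model: produces a draft answer with a deliberate flaw.
    return f"Draft answer to: {task} (contains a mistke)"

def qa_llm(draft: str) -> str:
    # Stand-in QA/manager model: returns a corrected draft, or "OK" if happy.
    if "mistke" in draft:
        return draft.replace("mistke", "mistake") + " [corrected by QA]"
    return "OK"

def run_with_qa(task: str, max_rounds: int = 3) -> str:
    draft = worker_llm(task)
    for _ in range(max_rounds):
        verdict = qa_llm(draft)
        if verdict == "OK":
            break  # QA signed off
        draft = verdict  # take the corrected draft and re-check it
    return draft

print(run_with_qa("summarise the meeting"))
```

The real versions just swap worker_llm/qa_llm for chat-completion calls with a prompt telling the QA model what to look for; the loop structure is the same.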
You can, right now, describe a piece of software in plain English and have an entire virtual software development company created that will plan, build, market, etc. your product, all within a few minutes. This used to take years, cost hundreds of thousands of pounds, and give employment to, at the very minimum, 5 people.
I think healthcare is ripe for automation, but there are so many issues, it will probably be among the last to get there - at least in the UK. Some AI systems perform comparably to doctors and people like X-ray technicians already. Some are even licensed to do it: Autonomous X-ray-analyzing AI is cleared in the EU
I am torn on all of this. Reducing workloads should be amazing, but if, like other technology improvements, it just means we have to do more, then that is obviously not ideal. With healthcare, it's not great to replace workers, but if this means more people get better, easier, faster, cheaper(?) access to medical help, then it's a price worth paying.
*You know things are heating up when progress is measured in months rather than years!
I was referring less to the loss of jobs and more to the reclassification of individuals out of a productive labour pool, which comes with a potential for dehumanisation; that's what is still not here yet (imo). Tech workers are facing challenges but they still retain a privileged status in relation to AI. As you say, it's augmenting them, not replacing them (for now).
As to healthcare, I'm hugely cynical about the applications there. Whatever the capabilities of the technology itself, it's being mobilised within the context of Capitalism and increasingly authoritarian/dehumanising models of population management. On the purely technical end - screening, checking for anomalies etc - that may not be spectacularly harmful. But when AI is utilised in things like mental health or care work there is no way in which it can be 'better', because an intrinsic part of those is empathy and the capacity to understand human experience. The MH bots deployed as what are, to my mind, reckless experiments on refugees, for example (Karim the AI delivers psychological support to Syrian refugees), are far less about providing care than they are about satisfying the requirements of the audit within the humanitarian sector and generating proofs of concept for Silicon Valley.
Also, I highly doubt it'll reduce workloads; the people deploying these things do so with the same mentality as those using automated call centres. It's not done to free up human labour for the more sensitive work of direct interaction and support, it's done to provide at least the illusion of a functional service provision while cutting funding as much as possible. That's a systemically ingrained reaction from Capital; for it to be changed there has to be a change in broader political and economic cultures.
With MH, there are many places I think it could help without being on the frontline of delivering care. I have some knowledge of this area as my current (and ex) girlfriends work in MH, and they had very similar complaints: mostly around waiting lists / too many referrals.
The service you get on the NHS once you actually get seen is excellent, but the steps and time it takes to get to that point are not acceptable.
I think AI could be used here to augment this, in a similar way tech workers are doing. I think a triage system or screening tool could do a first pass on the referrals and add them to the waiting list or point them elsewhere. This way, the psychologists are seeing more patients, so reducing waiting lists, and the ones who have had bad referrals are not having their time wasted.
When I say bad referrals, as an example: Girlfriend works as a psychologist in an autism service. She spends 1 day a week reviewing referrals from GPs/private assessments. These referrals are about 75% successful, from memory. Of the others, there is very often a mental health issue there, but it's not autism. The GPs lack the knowledge to distinguish between ADHD, autism, etc (not their fault).
MH professionals can spot some of these from the referral alone. They don't even speak to the patient. I'm sure an AI could be trained to a similar standard.
The service might reject the referral, and ask them to go back to the GP for an ADHD referral instead, or maybe they pass them to the ADHD service directly (can't remember exactly what happens here - the point is it's inefficient). If this could be completely automated, then that saves a day for her. And a day for everyone else who does this in her service on the other days when she is not doing it. And a day for every other person doing this in every other service that also deals with referral reviews. That adds up to a lot more face-to-face time with people in need.
Another way I can think of is the report after an assessment: this takes longer than any other task she works on. You get referred by GP -> wait 6 months or more -> get assessed by a psychologist/psychiatrist -> then they send you a report with your diagnosis and reasonable adjustments you can ask for from work, school, etc.
Most of these assessments take place online now, and there are transcripts available from Microsoft Teams. It would be trivial to send these transcripts to an AI and ask it to summarise them in a report for the patient. The MH professional could still check this to make sure the main points were covered and that it reflects their views. It could do this in seconds, vs the hours she spends on reports.
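The plumbing for that is thin; the model does all the work. A toy sketch, with summarise() as a placeholder for the LLM call (none of these names are a real Teams or NHS API), and with the clinician-review step baked in:

```python
# Sketch of the transcript -> draft report flow. summarise() is a placeholder
# for an LLM call; drafts are always flagged for clinician review, never sent
# to the patient automatically.

REQUIRED_SECTIONS = ["Diagnosis", "Reasonable adjustments"]

def summarise(transcript: str) -> str:
    # Placeholder for the LLM call; a real prompt would ask for a
    # patient-facing report containing the sections listed above.
    turns = len(transcript.strip().splitlines())
    body = "\n".join(f"{s}: ..." for s in REQUIRED_SECTIONS)
    return f"DRAFT (from {turns} transcript turns, clinician review required)\n{body}"

def ready_for_review(draft: str) -> bool:
    # Cheap structural check before the draft even reaches the clinician.
    return all(s in draft for s in REQUIRED_SECTIONS)

draft = summarise("Clinician: Thanks for joining.\nPatient: No problem.")
print(ready_for_review(draft))  # True
```

The point of the sketch is the shape, not the code: machine writes the draft, a checklist gates it, a human signs it off.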
I think these two changes alone would have huge time savings for her service, and threaten neither jobs nor quality of care. And I don't work there, so this is just stuff I've picked up by talking to her about work. I'm sure a proper systems analyst, embedded in the NHS, could come up with lots of these relatively simple changes that have huge knock-on effects, none of which requires a chat bot to do counselling or therapy.
> I know the current situation means the kinds of things I mentioned aren't likely, or ideal, but they're possible. And possible without the biases and commercial interests you mentioned becoming a factor.

Can't really talk to specific use cases within the NHS but do know a bit about wider issues that might arise. With the screening of referrals you're putting a lot of faith in AI being able to make value judgements there, which is something it's largely incapable of doing. You talk about GPs not being properly capable of diagnosing these kinds of issues, but AI is drastically less able to do so because it has no conception of what the naturalised input from either a patient or a referring GP actually means. An issue which may have a profound human effect can be a negligible statistical anomaly to AI. Beyond that there are also inherent biases in most AI datasets, especially when it comes to groups which aren't extensively represented in training data - symptoms or experiences which may be talked about in person with a certain cultural or social framing can be a complete mystery to an AI that doesn't have a conception of those cultural or social cues. I know there's a trend towards more egalitarian models of AI which account for minority representation but that's really not present in the sort of outsourced commercial models a body as big as the NHS is likely to use, not yet anyway.
On that, there's also the issue of the right to privacy: if AI is used within the NHS it'll be created by private contractors, not in house. Any data offered by a patient will likely be used for training purposes by those companies, basically commodifying their clinical experience and exploiting it for profit, not just within the same use case but also elsewhere. You can contractually and legally place limits on that but ime bodies like the NHS are really badly equipped to do that. For the most part those dealing with procurement in a lot of government-led fields are just being awed by AI and the miraculous promises of tech companies, companies who are deeply familiar with ways of exploiting that. Google for example got successfully sued for data harvesting in schools, and emergent models of AI integration in higher education are really, really shady imo, often reliant on the good will and self-policing of organisations that couldn't give a damn about either.
That's not to say there aren't valuable use cases like the ones you suggest, but in the context we currently face with AI it's worth remembering that most actors involved aren't in it to further the good of humanity; they're in it for the money, and to enclose as much data as possible in order to get even more money. I know there are proponents of open AI models, community-led AI, 'AI for Good' and all the rest out there, but as things stand they're side notes to the main event of corporately led big data. There's also the question of whether the solutions you suggest are the best solutions or just solutions within the narrow context of massive underfunding, staff shortages and a lack of training within the NHS. Not judging your ideas either way, but a lot of AI implementations exist largely because of the latter; human-led interactions would still be better, but flashy promises and the logic of the audit (we can guarantee MH assessments for 10 times as many people with AI, it'll just be shit) hold far more appeal to both governments and overworked frontline staff. That's no solution at all though imo.
I don't want to drag this too far along my very specific examples, but on training data: that's easily solved by using actual screening referrals from the NHS. One thing AI needs is the work of previous human experts in order to spot the patterns to generate the algos that make them work.
We have so much data in the NHS, including referral data, and decisions made. We could give the models all of the previous referrals with a decision for each: accepted/rejected/moved on, etc. And then the AI will determine what makes a referral successful from those previous decisions. There will be biases in the data itself, as humans are biased, but I don't see how that's any worse than what we have now, so shouldn't be a factor.
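In machine-learning terms that's a bog-standard supervised text-classification problem: past referrals in, past decisions as labels. A toy illustration with invented snippets (a real system would use a proper statistical model and thousands of records, not this word-count score):

```python
# Toy supervised classifier over past referral decisions, as described above.
# The referral texts and labels are invented examples; a real system would be
# trained on the NHS's actual historical referrals and outcomes.
from collections import Counter

past_referrals = [
    ("struggles with routine change and social cues", "accept"),
    ("sensory sensitivity, avoids eye contact",       "accept"),
    ("inattentive, impulsive, fidgety at school",     "redirect"),  # reads more like ADHD
    ("impulsive, cannot sit still, loses things",     "redirect"),
]

def train(examples):
    # Count how often each word appears under each decision label.
    counts = {"accept": Counter(), "redirect": Counter()}
    for text, label in examples:
        counts[label].update(text.replace(",", "").split())
    return counts

def predict(counts, text):
    # Score each label by how many of the referral's words it has seen before.
    words = text.replace(",", "").split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(past_referrals)
print(predict(model, "very impulsive and fidgety"))  # leans towards "redirect"
```

The biases-in-the-data point applies exactly here: whatever skew is in the historical decisions gets baked into the counts.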
The NHS not having the technical knowledge to do this is a bigger problem, and one I don't see a solution for. You're right, they will almost certainly contract someone (Palantir, etc) to do it for them, which is a massive data privacy risk.
Ideally, it would be done in house. NHS Digital is the one part of the failed National Programme for IT that was successful, and it was kept on and now runs the 'Spine' - the centralised system that manages the electronic prescription service, centralised patient care records, and e-referrals, among other things. They would be a good place to start building something like this, but I'm not sure they have the skills, manpower or money to do it.
Of course I would love funding for the NHS to be increased to reduce wait times, and give better patient care, but I still think this kind of automation would have a place in a well-run, well-funded, adequately-staffed system. AI has the potential to be better than humans, not just a way to do things on the cheap. And when it comes to health, I'm sure we all want everyone to have the best there is.
Mind Matters Surrey NHS deployed the Limbic Access chatbot that supports e-triage and assessments at the front end of the care pathway, acting as the first point of contact for the patient.
It aims to help reduce the heavy administrative workload, and provide an alternative to traditional longer referral forms that are considered off-putting by patients.
Using conversational AI, the platform aligns with the conversational nature of talk therapy and engages with patients as soon as they visit the service’s website, guiding patients through the referral process. The chatbot can be used 24/7 and patients have the choice of seeing it appear as an embedded chatbot window or on a full screen chat window.
Interesting. I just googled "AI referrals for health" and found this: Using an AI chatbot to streamline mental health referrals
Seems like the NHS is already trialling something similar, though not on the same scale I was suggesting.
edit: and it integrates with Spine, which I mentioned earlier. Maybe they do have the resources to look into this.