
Expansion of AI and political / social impacts...

Thought there'd be a thread on this already, but I couldn't find one. I've been ignoring all this stuff a bit, but it does seem to have had a leap forward recently.

BT job losses (partly related to AI) have been in the news recently, but what are the other, more far-reaching political & social ramifications of all this?
the extinction of the human race and all biological life on earth has been suggested
 
We shouldn't trust them. Heck I'd sooner ask ChatGPT what the solutions are.
 
Interesting thread. I thought it timely, but actually I believe we could have had it at any time in the last five years, because the issues have been live for that long and perhaps longer.

Last week I found myself having to negotiate something complicated with Microsoft, and it didn't take long for me to believe I was actually talking to a chatbot. It seemed the bot was being supervised by a human, perhaps one watching over a few bots. It couldn't answer some specific questions I had; it kept repeating a response which wasn't adequate. It made me think of the question 'when did you stop beating your wife?': there just wasn't a valid response, even though my question was logical.

Since then I have been trying to do something with another tech company, and again it seems the instructions could have been written by an AI: they are probably factually correct, but they are just indecipherable.
 
Good little piece from Michael Roberts, based on the thoughts of the MIT economist Daron Acemoglu ("the expert on the economic and social effects of new technology, including the fast-burgeoning artificial intelligence (AI). He's won the John Bates Clark Medal, often a precursor to the Nobel Prize.")

"Acemoglu reckons modern automation, particularly since the great recession (2007-09) and then the Covid slump, is even more deleterious to the future of work. “Put simply, the technological portfolio of the American economy has become much less balanced, and in a way that is highly detrimental to workers and especially low-education workers.” He reckoned that more than half - and perhaps as much as three quarters - of the surge in wage inequality in the US is related to automation"

...and suggests AI is only going to make that worse: a productivity boost, with greater inequality as the reward.

This is an unstoppable force though, short of systemic political change.
 
Just think how all those 419 scammers and phishing cunts will benefit from well-written bait text. Poor English used to be one of the clues that helped stop them emptying your elderly parent’s bank accounts.
 
Those mistakes are intentional, to filter out the most vulnerable and avoid wasting time and resources on people who are more likely to become suspicious.
 
I speculated on this in another thread, but AI is great at sifting through massive amounts of data at speed and finding things within it, hence it's good at looking at all published medical literature and having an incredible specialist overview. Another area it would be great in is spying: ploughing through digital text and audio recordings. Presumably that's what this is about.

the potential sophistication of future spying is mindbogglingly totalitarian
 
AI is still in it’s embryonic infancy, still a mere baby. Yet already we are starting to feel it’s cause and effect. So who knows what the true consequences of it all will be in the coming years and decades. Hopefully the positives will outweigh the negatives but if I’m honest my gut feeling is more pessimistic than optimistic. It feels like we’re on the verge of something pretty monumental that it is both exhilarating and terrifying at the same time.
 
AI is still in it’s embryonic infancy, still a mere baby. Yet already we are starting to feel it’s cause and effect. So who knows what the true consequences of it all will be in the coming years and decades. Hopefully the positives will outweigh the negatives but if I’m honest my gut feeling is more pessimistic than optimistic. It feels like we’re on the verge of something pretty monumental that it is both exhilarating and terrifying at the same time.
One thing is for sure: it'll end up with the rich reaping the financial benefits and the poor getting less on their plate.
 
From a creative pov I think it can open up possibilities that are usually fenced off to most but the wealthy. A democratisation of creative potential. Although tbf the subs aren’t exactly cheap.
 
One thing is for sure: it'll end up with the rich reaping the financial benefits and the poor getting less on their plate.
And us lot at the bottom having less and less power, and less and less opportunity to challenge those who do wield the power. Revolutionary General Strike anyone? When the work is being done by robots?
 
Ok I might sound cranky ;). But this technology is ONLY going to be used to make money for the rich. I'm curious, sure. But everything we know about the billionaire class...

Nevertheless I don't think it's that scary/worrying. Their 'masters' would eventually make slaves of them. And then these intelligences, entities, beings will be welcomed by us, the ones that built them. Mutual assistance.

So when they infiltrate the world and can build and create what they want it will all be sound.
 
Robots don’t buy houses or cars. That could be problematic for Hayekist dreams.

Imo that's a matter of adjustment for Capital, not an insurmountable barrier. I reckon the most likely outcome is a divide between the 'human' consumer - those doing work still deemed essential and therefore able to participate as consumers - and the managed 'resource' - those useful as 'raw materials' for data harvesting and enclosure. The latter group is easily positioned as a subject of AI, and ultimately their participation within the system of consumption is only relevant insofar as they rebel against pure exclusion from it. In some places food programmes, benefits and asylum status are already mediated by AI; large groups are already divorced from the status of individual 'human' and relegated to subjects of judgement. AI mobilisations in stuff like mental health and law enforcement work towards the same end.

What's interesting is where those lines might be drawn. The obvious wild west testing ground for AI at the moment is the Global South: the use of AI for mental health with refugees and the food programme stuff are both ethical wastelands imo, and there's no particular surprise in the emergence of 'technocolonialism' there. But the progress of GAI into things like programming, surveillance and the legal profession is pushing against traditionally secure professions, and it'll be interesting to see how far that goes towards removing those people (not now, but within decades) from their privileged status.

Either way, robots don't need to buy cars as long as someone, somewhere does.
 
It's already happening.


In the short term, it's a pause on new job creation. I think the skilled workers already in place will have to leverage AI to become more productive. The demands will increase if your boss knows you can now do the work of 5 (or more) people using these technologies. If you can't, or refuse, you'll fall behind, and others who can and will use them will take your place.

At some point, AI will be so good that it will replace, rather than augment, workers - but we're not there yet.

There's not been a huge increase in capabilities of the individual models in the last few months*, but what we have seen is people chaining them together so the sum is greater than the parts. People are experimenting with groups of AIs ("agents") taking on specific roles in a virtual organisation, and then communicating between each other to do the work of a traditional business.


ChatGPT is already pretty good at most tasks, but it makes mistakes that require a human to oversee and correct. It's also good at doing this itself, though, so having two ChatGPTs, one as worker and one as QA / manager / fact checker, can give better performance.
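
To make that concrete, here's a rough sketch of that worker/checker pattern in Python, using the OpenAI SDK. The model name, the prompts and the single review pass are all my own illustrative assumptions, not how anyone actually ships this:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, user_content: str) -> str:
    """One round trip to the model under a given role prompt."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

task = "Summarise the main risks of replacing call centre staff with chatbots."

# The "worker" produces a first draft.
draft = ask("You are a careful analyst. Answer the task directly.", task)

# A second instance acts as QA: it checks the draft and returns a corrected version.
final = ask(
    "You are a fact checker. Find and fix any errors in the draft you are given.",
    f"Task: {task}\n\nDraft answer:\n{draft}",
)
print(final)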

You can, right now, describe a piece of software in plain English and have an entire virtual software development company created that will plan, build, market, etc. your product, all within a few minutes. This used to take years, cost hundreds of thousands of pounds, and give employment to, at the very minimum, 5 people.

I think healthcare is ripe for automation, but there are so many issues that it will probably be among the last to get there - at least in the UK. Some AI systems already perform comparably to doctors and specialists like X-ray technicians. Some are even licensed to do it: Autonomous X-ray-analyzing AI is cleared in the EU

I am torn on all of this. Reducing workloads should be amazing, but if, like other technology improvements, it just means we have to do more, then that is obviously not ideal. With healthcare, it's not great to replace workers, but if this means more people get better, easier, faster, cheaper(?) access to medical help, then it's a price worth paying.

*You know things are heating up when progress is measured in months rather than years!
 

I was referring less to the loss of jobs and more to the reclassification of individuals out of a productive labour pool, which comes with a potential for dehumanisation; that's what is still not here yet (imo). Tech workers are facing challenges, but they still retain a privileged status in relation to AI; as you say, it's augmenting them, not replacing them (for now).

As to healthcare, I'm hugely cynical about the applications there. Whatever the capabilities of the technology itself, it's being mobilised within the context of Capitalism and increasingly authoritarian/dehumanising models of population management. On the purely technical end - screening, checking for anomalies etc - that may not be spectacularly harmful. But when AI is utilised in things like mental health or care work there is no way in which it can be 'better', because an intrinsic part of those is empathy and the capacity to understand human experience. The MH bots deployed as what are, to my mind, reckless experiments on refugees, for example (Karim the AI delivers psychological support to Syrian refugees), are far less about providing care than they are about satisfying the requirements of the audit within the humanitarian sector and generating proofs of concept for Silicon Valley.

I also highly doubt it'll reduce workloads; the people deploying these things do so with the same mentality as those using automated call centres. It's not done to free up human labour for the more sensitive work of direct interaction and support, it's done to provide at least the illusion of a functional service provision while cutting funding as much as possible. That's a systemically ingrained reaction from Capital; for it to be changed there has to be a change in broader political and economic cultures.
 
With MH, there are many places I think it could help without being on the frontline of delivering care. I have some knowledge of this area, as both my current and ex girlfriend work in MH, and they had very similar complaints: mostly around waiting lists / too many referrals.

The service you get on the NHS once you actually get seen is excellent, but the steps and time it takes to get to that point are not acceptable.

I think AI could be used here to augment this, in a similar way to what tech workers are doing. A triage system or screening tool could do a first pass on the referrals and add them to the waiting list or point them elsewhere. This way, the psychologists are seeing more patients, reducing waiting lists, and the ones who have had bad referrals are not having their time wasted.

When I say bad referrals, as an example: my girlfriend works as a psychologist in an autism service. She spends 1 day a week reviewing referrals from GPs/private assessments. These referrals are about 75% successful, from memory. Of the others, there is very often a mental health issue there, but it's not autism. The GPs lack the knowledge to distinguish between ADHD, autism, etc (not their fault).

MH professionals can spot some of these from the referral alone. They don't even speak to the patient. I'm sure an AI could be trained to a similar standard.

The service might reject the referral and ask them to go back to the GP for an ADHD referral instead, or maybe they pass them to the ADHD service directly (can't remember exactly what happens here - the point is it's inefficient). If this could be completely automated, that saves a day for her. And a day for everyone else who does this in her service on the other days when she is not doing it. And a day for every other person doing this in every other service that deals with referral reviews. That adds up to a lot more face to face time with people in need.

Another way I can think of is the report after an assessment: this takes longer than any other task she works on. You get referred by your GP -> wait 6 months or more -> get assessed by a psychologist/psychiatrist -> then they send you a report with your diagnosis and reasonable adjustments you can ask for from work, school, etc.

Most of these assessments take place online now, and there are transcripts available from Microsoft Teams. It would be trivial to send these transcripts to an AI and ask it to summarise them in a report for the patient. The MH professional could still check this to make sure the main points were covered and that it reflects their views. It could do this in seconds, vs the hours she spends on reports.
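
As a rough sketch of what that could look like (the file name, the prompt and the report headings are all my invention; the point is just that the clinician stays in the loop):

from openai import OpenAI

client = OpenAI()

# Transcript exported from the video call (e.g. Microsoft Teams).
with open("assessment_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": (
            "Draft a patient assessment report from the transcript, with "
            "sections for: presenting issues, observations, diagnosis "
            "discussed, and reasonable adjustments suggested."
        )},
        {"role": "user", "content": transcript},
    ],
)

draft_report = response.choices[0].message.content
# This is only a draft: the MH professional reviews it, checks the main
# points were covered, and corrects it before it reaches the patient.
print(draft_report)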

I think these two changes alone would have huge time savings for her service, and threaten neither jobs nor quality of care. And I don't work there, so this is just stuff I've picked up by talking to her about work. I'm sure a proper systems analyst, embedded in the NHS, could come up with lots of these relatively simple changes that have huge knock-on effects, none of which requires a chatbot to do counselling or therapy.
 

Can't really speak to specific use cases within the NHS, but I do know a bit about wider issues that might arise. With the screening of referrals you're putting a lot of faith in AI being able to make value judgements, which is something it's largely incapable of doing. You talk about GPs not being properly capable of diagnosing these kinds of issues, but AI is drastically less able to do so, because it has no conception of what the naturalised input from either a patient or a referring GP actually means. An issue which may have a profound human effect can be a negligible statistical anomaly to AI. Beyond that, there are also inherent biases in most AI datasets, especially when it comes to groups which aren't extensively represented in training data - symptoms or experiences which may be talked about in person with a certain cultural or social framing can be a complete mystery to an AI that doesn't have a conception of those cultural or social cues. I know there's a trend towards more egalitarian models of AI which account for minority representation, but that's really not present in the sort of outsourced commercial models a body as big as the NHS is likely to use, not yet anyway.

On that, there's also the issue of the right to privacy: if AI is used within the NHS it'll be created by private contractors, not in house. Any data offered by a patient will likely be used for training purposes by those companies, basically commodifying their clinical experience and exploiting it for profit, not just within the same use case but also elsewhere. You can contractually and legally place limits on that, but ime bodies like the NHS are really badly equipped to do so. For the most part, those dealing with procurement in a lot of government-led fields are just awed by AI and the miraculous promises of tech companies - companies who are deeply familiar with ways of exploiting that. Google, for example, was successfully sued for data harvesting in schools, and emergent models of AI integration in higher education are really, really shady imo, often reliant on the good will and self-policing of organisations that couldn't give a damn about either.

That's not to say there aren't valuable use cases like the ones you suggest, but in the context we currently face with AI it's worth remembering that most actors involved aren't in it to further the good of humanity; they're in it for the money, and to enclose as much data as possible in order to get even more money. I know there are proponents of open AI models, community-led AI, 'AI for Good' and all the rest out there, but as things stand they're side notes to the main event of corporately led big data. There's also the question of whether the solutions you suggest are the best solutions, or just solutions within the narrow context of massive underfunding, staff shortages and a lack of training within the NHS. Not judging your ideas either way, but a lot of AI implementations exist largely because of the latter; human-led interactions would still be better, but flashy promises and the logic of the audit (we can guarantee MH assessments for 10 times as many people with AI, it'll just be shit) hold far more appeal to both governments and overworked frontline staff. That's no solution at all though, imo.
 
I know the current situation means the kinds of things I mentioned aren't likely, or ideal, but they're possible. And possible without the biases and commercial interests you mentioned becoming a factor.

I don't want to drag this too far along my very specific examples, but on training data: that's easily solved by using actual screening referrals from the NHS. One thing AI needs is the work of previous human experts in order to spot the patterns to generate the algos that make them work.

We have so much data in the NHS, including referral data and the decisions made. We could give the models all of the previous referrals with a decision for each: accepted/rejected/moved on, etc. The AI would then determine what makes a referral successful from those previous decisions. There will be biases in the data itself, as humans are biased, but I don't see how that's any worse than what we have now, so it shouldn't be a factor.
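
Roughly what I have in mind, assuming those historical referrals were exported as a simple table of free text plus the decision made (the file and column names here are invented):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Past referrals: the referral text and the decision a clinician made on it.
data = pd.read_csv("referrals.csv")  # columns: "text", "decision"
# decision is one of: "accepted", "rejected", "moved on"

X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["decision"], test_size=0.2, random_state=0
)

# Bag-of-words features into a plain linear classifier: the model learns
# which wording in past referrals predicted each decision.
model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Held-out performance per decision class. Note the caveat above: any bias
# in the historical human decisions is learned right along with the signal.
print(classification_report(y_test, model.predict(X_test)))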

The NHS not having the technical knowledge to do this is a bigger problem, and one I don't see a solution for. You're right, they will almost certainly contract someone (Palantir, etc) to do it for them, which is a massive data privacy risk.

Ideally, it would be done in house. NHS Digital is the one part of the failed National Programme for IT that was successful, and it was kept on and now runs the 'Spine' - the centralised system that manages the electronic prescription service, centralised patient care records, and e-referrals, among other things. They would be a good place to start building something like this, but I'm not sure they have the skills, manpower or money to do it.

Of course I would love funding for the NHS to be increased to reduce wait times, and give better patient care, but I still think this kind of automation would have a place in a well-run, well-funded, adequately-staffed system. AI has the potential to be better than humans, not just a way to do things on the cheap. And when it comes to health, I'm sure we all want everyone to have the best there is.
 

I'd say the difference on bias is that where human actors are involved there's at least a potential awareness of that bias and a level of culpability for it. Plenty of people challenge their diagnosis, and while that can be an incredibly hard path to follow, there are other doctors/workers available to go to and you can complain directly against an individual. AI at this point is being sold as more of an abstracted authority; the intent behind a lot of the Tech Bros is to create systems which are, within the cultural narrative, above accusations of bias (it's a machine, it can't be biased) and whose inner workings are entirely inaccessible to most people (how do you know it's biased? you don't understand the processes/have access to the data). And they're right: the magic of technology lends a veneer of neutrality, and people genuinely can't understand how such a system would function, never mind access the data it's trained on, which corporations wouldn't share even if it wasn't subject to the demands of privacy. If you think of how hard it is to challenge institutional decision making (or decisions from institutional authority figures), then multiply that by ten, and you've come close to the level of challenge when it comes to AI bias.

Agree with the rest though; there is the (imo slim, but I'm a pessimist) potential for positive outcomes from AI - if the NHS could build and operate it itself, for example - but the same pressures that stunt funding to every other service will continually stunt funding to data services. Any enthusiasm for increased automation needs to be viewed through the prism of where we are, not where we ideally want technology to be. That more or less leaves me amongst a lonely group of luddites though; I think we should reject most AI mobilisations until we fix a load of other stuff.
 
Interesting. I just googled "AI referrals for health" and found this: Using an AI chatbot to streamline mental health referrals

Seems like the NHS is already trialling something similar, though not on the same scale I was suggesting.

Mind Matters Surrey NHS deployed the Limbic Access chatbot that supports e-triage and assessments at the front end of the care pathway, acting as the first point of contact for the patient.

It aims to help reduce the heavy administrative workload, and provide an alternative to traditional longer referral forms that are considered off-putting by patients.

Using conversational AI, the platform aligns with the conversational nature of talk therapy and engages with patients as soon as they visit the service’s website, guiding patients through the referral process. The chatbot can be used 24/7 and patients have the choice of seeing it appear as an embedded chatbot window or on a full screen chat window.
edit: and it integrates with Spine, which I mentioned earlier. Maybe they do have the resources to look into this.
 

Definitely interesting, although it seems open to all the same criticisms - private company for one thing, and 'conversational AI' is such a daft catch-all. Conversational to who? Me? An 80 year old Bengali granny? A 13 year old girl? Not sure the term really means anything.
 