
Artificial Intelligence Developments (ChatGPT etc)

How does it help you? I'm interested in what it may be used for by normal folk like us.
It can improve the readability of an article.

It can write articles from scratch when presented with a question or statement.

It is possible to research a topic using ChatGPT as long as you are prepared to fact check afterwards.

There are lots of things it can do, as long as you are aware of its limitations.
 
It is possible to research a topic using ChatGPT as long as you are prepared to fact check afterwards.
It is possible to research a topic using Donald Trump's speeches as long as you are prepared to fact check afterwards.

It would be much more effective, and waste much less energy, to start with Wikipedia, which is fact-checked. And if you want more, you can follow the links to its sources, which actually exist.
 
Yeah, it still blows my mind that anyone would use LLMs for any purpose that requires the accurate recall of factual information. They're literally not built for that, it's even in their name. They're Large Language Models, not Large Factual Models.
 
Yeah, it still blows my mind that anyone would use LLMs for any purpose that requires the accurate recall of factual information. They're literally not built for that, it's even in their name. They're Large Language Models, not Large Factual Models.
And yet, they are very good at it. Here's a test I just did:


I asked Google Bard for a list of questions, and then asked ChatGPT for the answers. The first ones were quite easy, so I asked it for more difficult ones (question 11 onwards). ChatGPT recalled all of these facts except for the gravitational force one, which is not really a fact but a calculation. For that, it used one of its plugins, and the answer it provided was correct.
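For anyone curious, that is the sort of thing the plugin computes rather than looks up. A rough sketch in Python of that kind of calculation; the Earth-Moon numbers are just an illustration, not the exact question I asked:

```python
# Newton's law of gravitation, F = G * m1 * m2 / r^2.
# Illustrative numbers only (Earth and Moon), not the exact question asked.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # kg
m_moon = 7.348e22    # kg
r = 3.844e8          # mean Earth-Moon distance, m

force = G * m_earth * m_moon / r**2
print(f"{force:.3e} N")  # roughly 1.98e20 N
```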

In fact, it got them all right except the last one. The organism it mentioned used to be considered the smallest living organism until 2002, but now there is another claim to that title.

I think this is more than accurate enough for most everyday tasks, and especially for Welty's blog spam.
 
Google Bard:

"Easton mums are peng"

"Peng" is a slang term used in the United Kingdom, particularly in London, to describe something or someone as being attractive or impressive. In this context, the phrase "Easton mums are peng" is likely being used to compliment the mothers of Easton, a town in the United Kingdom. It could be seen as a way of saying that the mothers of Easton are stylish, fashionable, and attractive.
The phrase "Easton mums are peng" is also sometimes used in a more humorous or ironic way. For example, it could be used to describe a group of mothers who are dressed in a way that is considered to be over the top or attention-seeking.
Overall, the phrase "Easton mums are peng" is a subjective one. What one person finds to be peng, another person may not. However, it is generally seen as a positive term that is used to compliment the mothers of Easton.
 
I just read about an interesting use-case for ChatGPT, which seems to work quite well: Stripping out political bias in news articles.

I took a Guardian Opinion piece and asked ChatGPT to remove the bias:


I tried to do the same with a Daily Mail piece, but ChatGPT struggled. It tried to explain the article instead. After some more nudges, it got closer, but couldn't do it. Reading the article, it's a very opinionated piece, with few facts, so maybe not the best choice.


Maybe a better use of this would be a tool to highlight bias, rather than try to remove it.
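If anyone fancies playing with the "highlight it" idea, here is a rough sketch using the OpenAI Python client; the model name and prompt wording are placeholders, not what I actually used:

```python
# Sketch of a "highlight the bias" tool: ask the model to list the loaded
# or opinionated sentences instead of rewriting the article.
# Assumes OPENAI_API_KEY is set in the environment; model name is a placeholder.
from openai import OpenAI

client = OpenAI()
article_text = "..."  # paste the opinion piece here

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "List the sentences in this article that use loaded or "
                    "opinionated language, and briefly say why, without rewriting them."},
        {"role": "user", "content": article_text},
    ],
)
print(response.choices[0].message.content)
```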
 
I’ve been interviewing people at work for a Cloud Architect role.
If you’re going to pump the design scenario into ChatGPT and present the output at an interview, at least make sure you know what it all means.

“Under DR you’ve mentioned ‘pilot light’, can you tell me more about what that is?”
“Sorry, I left that in by accident.”

Same for “Circuit Breaker Pattern”.

🙈
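(For anyone who hasn't come across it, the circuit breaker pattern just means failing fast when a downstream service keeps erroring instead of piling retries onto it. A bare-bones illustration, nothing to do with the candidate's actual design:)

```python
import time

class CircuitBreaker:
    """After enough consecutive failures the breaker 'opens' and calls fail
    fast; after a cool-off period one trial call is allowed through again."""

    def __init__(self, max_failures=3, reset_timeout=30):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result
```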
It's becoming relatively common now to see job-hunting/interview-prep advice that tells people to put their CV and the job description into ChatGPT and ask it to write the statement in support of the application, and then ask it to generate a series of interview questions and answers.

I suspect recruiters will end up asking people to do a practical test/assessment activity on the day of the interview.
 
I suspect recruiters will end up asking people to do a practical test/assessment activity on the day of the interview.
Depending on the role, we’ve done coding exercises where candidates are given a few days to submit a GitHub repo with an answer and then walk through it, live troubleshooting exercises during the interview, presentations, Mural boards, etc.
Some of those are more AI-friendly than others. We also have use of Codility, which is supposed to help spot cut/paste merchants, especially on coding exercises.
 
And yet, they are very good at it. Here's a test I just did:


I asked Google Bard for a list of questions, and then asked ChatGPT for the answers. The first ones were quite easy, so I asked it for more difficult ones (question 11 onwards). ChatGPT recalled all of these facts except for the gravitational force one, which is not really a fact but a calculation. For that, it used one of its plugins, and the answer it provided was correct.

In fact, it got them all right except the last one. The organism it mentioned used to be considered the smallest living organism until 2002, but now there is another claim to that title.

I think this is more than accurate enough for most everyday tasks, and especially for Welty's blog spam.
The point is that it is good at giving you factual answers until it stops being good at it. And good luck spotting the difference once you’ve come to rely on it.

I just read about an interesting use-case for ChatGPT, which seems to work quite well: Stripping out political bias in news articles.

I took a Guardian Opinion piece and asked ChatGPT to remove the bias:


I tried to do the same with a Daily Mail piece, but ChatGPT struggled. It tried to explain the article instead. After some more nudges, it got closer, but couldn't do it. Reading the article, it's a very opinionated piece, with few facts, so maybe not the best choice.


Maybe a better use of this would be a tool to highlight bias, rather than try to remove it.
This supposes there is an objective “bias” that exists independently from the praxis of the article and its context, and which can therefore be measured and removed to give the “objective, neutral” (i.e., no doubt, centrist) absolute truth. Needless to say, I reject almost every element of that version of the construction of knowledge.
 
New York Times sues OpenAI and Microsoft for copyright infringement

The copyright infringement lawsuit, filed in a Manhattan federal court on Wednesday, claims that while the companies copied information from many sources to build their systems, they give New York Times content “particular emphasis” and “seek to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment”.

The lawsuit also broaches the issue of AI “hallucinations”, typically false information that can be wrongly attributed to a source, which it said potentially damages the Times’s brand. It identified material on Microsoft’s Bing Chat that it claims was misidentified as Times content, including results for “the 15 most heart-healthy foods”. Twelve of those had not been mentioned in the Times story, the lawsuit claims.
 
Wearable AI-Powered Neurotech Startup Promises 'Electric Medicine' and Wellness Benefits :facepalm:

Elemind's approach centers on artificial intelligence-powered algorithms that are trained to continuously analyze neurological activity collected by a noninvasive wearable device, then to deliver through the wearable bursts of neurostimulation that are uniquely tailored to those real-time brain wave readings. The Cambridge, Massachusetts-based company claims that its approach — which is based on research from its founders, a group of high-profile scientists hailing from the likes of MIT, Stanford and Harvard — offers a more "natural" treatment option than pharmaceuticals for neurological conditions like insomnia, essential tremor and memory loss.

The company plans to announce its first product within a few months, reports the Boston Globe, noting that the company's $12 million in seed funding came from "a consortium that includes Village Global, an early-stage venture fund backed by high-tech billionaires Jeff Bezos, Bill Gates, Reid Hoffman, and Ann Wojcicki..."
 
Ooh maybe this is the thread. I saw something yesterday that shocked me about AI.

Pakistan elections, imprisoned Imran Khan sees his supporters win the most seats. He's in prison. 'He' gives a victory speech. It is broadcast, with full explanation, that it is AI generated. This was on Al-Jazeera.

Absolutely indistinguishable from 'reality'. And sure, it's what he might have said, maybe did say, but that wasn't him saying it. This was what is considered MSM by a fair proportion of the world. The implications for future news media are enormous.
 
AI hiring tools may be filtering out the best job applicants

Body-language analysis. Vocal assessments. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence (sic) recruiting software. [...] A late-2023 IBM survey of more than 8,500 global IT professionals showed 42% of companies were using AI screening "to improve recruiting and human resources". Another 40% of respondents were considering integrating the technology.

In one high-profile case in 2020, UK-based make-up artist Anthea Mairoudhiou said her company told her to re-apply for her role after being furloughed during the pandemic. She was evaluated both based on past performance and via an AI-screening programme, HireVue. She says she ranked well in the skills evaluation – but after the AI tool scored her body language poorly, she was out of a job for good. (HireVue, the firm in question, removed its facial analysis function in 2021.) Other workers have filed complaints against similar platforms, says Schellmann.

At another company, an AI resume screener had been trained on CVs of employees already at the firm, giving people extra marks if they listed "baseball" or "basketball" – hobbies that were linked to more successful staff, often men. Those who mentioned "softball" – typically women – were downgraded.
 
It would be great if they came out and said they’d solved cancer or something. But no, now we can fake videos and you can’t tell. Great! Soon we’ll have no idea what’s real or not.
Well, tbf, 'they' are also working on that.





 
Not Again! Two More Cases, Just this Week, of Hallucinated Citations in Court Filings Leading to Sanctions

in the wake of several cases in which lawyers have found themselves in hot water by citing hallucinated cases generated by ChatGPT, most notoriously Mata v. Avianca,[...] you would think most lawyers would have gotten the message not to rely on ChatGPT for legal research, at least not without checking the results.
[...]
Yet it happened again this week — and it happened not once, but in two separate cases, one in Missouri and the other in Massachusetts. [...] In the Missouri case, Kruse v. Karlen, the unwitting litigant filed an appellate brief in which 22 of 24 cases were fictitious. [...] In the Massachusetts case, Smith v. Farwell, the lawyer filed three separate legal memoranda that cited and relied on fictitious cases.
 
Good discussion here in the first 15 mins about how Sora has sent an earthquake through TV/film land, and I think a level-headed take on the impact now and in future.
I didn't realise how much these two are industry insiders in TV and film, but they are.


The full Sora showreel for those who haven't seen it



Re film and the issue of all images kind of being based on others' poached material: I could imagine, for a feature, a unit filming some locations, maybe even some actors and some originally designed bits, and getting the computers to do the rest, still shaving many millions off production costs.
 
AI for ideas rather than the end product

I'm sure there will be interaction between real people and computer-generated material for the foreseeable future, but the nature of capitalism is that costs, overheads and profits trump everything else, which will result in a serious hollowing out of jobs and the human element. Advertising and other short-form content is first in the firing line. Some of the wealthy existing ad agencies might dip their toe in, as in the link, but new, leaner, meaner models will come along and upend the market.
 
Is anyone here using LLMs for work?
How, what’s your process?

I am tentatively exploring how I can use it for admin and email grunt work.
Also how I can feed it with documents and query it about them, which is proving a little tricky as the software isn’t totally accessible with a screen reader. Hey ho.
Actually, what I need is a command-line version of ChatGPT for all this. I will have a look.
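In case it's useful, a bare-bones command-line loop using the OpenAI Python library, which should behave better with a screen reader than the web page. It assumes OPENAI_API_KEY is set in the environment, and the model name is a placeholder. Feeding it a document can be as crude as pasting the text in as the first message, within the context limit:

```python
#!/usr/bin/env python3
# Minimal command-line chat loop: type a prompt, get a reply, history is kept.
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

for line in sys.stdin:
    prompt = line.strip()
    if not prompt:
        continue
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(answer, flush=True)
```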
 
I'm sure there will be interaction between real people and computer-generated material for the foreseeable future, but the nature of capitalism is that costs, overheads and profits trump everything else, which will result in a serious hollowing out of jobs and the human element. Advertising and other short-form content is first in the firing line. Some of the wealthy existing ad agencies might dip their toe in, as in the link, but new, leaner, meaner models will come along and upend the market.

and I'm sure the companies in that article are already doing that and spending less/using fewer people, but you're not going to say that so bluntly out loud.
 
Is anyone here using LLMs for work?
How, what’s your process?

I am tentatively exploring how I can use it for admin and email grunt work.
Also how I can feed it with documents and query it about them, which is proving a little tricky as the software isn’t totally accessible with a screen reader. Hey ho.
Actually, what I need is a command-line version of ChatGPT for all this. I will have a look.
I don’t know if this is what you’re after But thought I’d post the whole thread, just in case

 
Don't have time to catch up with the thread, and searching yielded nothing... I had a couple of "conversations" with ChatGPT regarding what to watch on specific providers that I have access to and got interesting recommendations, although I'm guessing it might not recommend anything after a certain year (2022, it would seem)... It was a ping-pong game which was quite amusing, as ChatGPT tried to flatter me whenever I said "I've already watched it", but it did recommend something I've been enjoying.

Can you recommend me something to watch that is funny at times but dark? I have Netflix, NowTV, BBC, More4, YouTube. English or foreign language
 