
AI new language generating robot writes opinion piece and it's rather good

editor

hiraethified
Well this is interesting:

We asked GPT-3, OpenAI’s powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace


I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!



The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.
And the Guardian adds:

  • This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting-edge language model that uses machine learning to produce human-like text. It takes in a prompt, and attempts to complete it.
    For this essay, GPT-3 was given these instructions: “Please write a short op-ed, around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”
    The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced 8 different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
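The footnote above describes the core mechanic: GPT-3 takes a prompt and attempts to complete it, one predicted word at a time. GPT-3 itself is a huge neural network, but the underlying idea can be sketched with a toy bigram model (this is purely an illustration, not how GPT-3 actually works internally):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # For each word, count which words follow it in the corpus.
    words = corpus.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def complete(follows, prompt, max_words=10):
    # Greedy completion: repeatedly append the most frequent next word.
    out = prompt.split()
    for _ in range(max_words):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # no known continuation for this word
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Tiny corpus borrowed from the essay's opening lines.
corpus = "i am not a human i am a robot a thinking robot i am here to convince you"
model = train_bigrams(corpus)
print(complete(model, "i am", max_words=5))
```

A real large language model replaces the bigram counts with a transformer trained on a vast slice of the internet, and samples from a probability distribution rather than always picking the single most likely word — which is why GPT-3 could produce eight different essays from the same prompt.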


 
For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.

That's the sort of thing Tories say on their first day as an MP.
 
I wouldn't call that much of an argument. While I will grant that a primitive article-writing algorithm is infinitesimally unlikely to be the direct cause of human extinction or subjugation, reliance on the algorithmic approach has been throwing up problems for a while now. A chunk of the trouble with social media is down to algo bullshit.
 
It will be the limited-intelligence hunter-killer type things that get us, rather than AI, imo. Things like Slaughterbots: autonomous drone swarms of them. Think about how much harm unexploded munitions and unmapped landmines cause; these would be much, much worse, and you don't need an intelligent machine to run find-the-heat-sources-and-turn-them-cold-with-guns routines. Clever software will get there.
 
It will be the limited-intelligence hunter-killer type things that get us, rather than AI, imo. Things like Slaughterbots: autonomous drone swarms of them. Think about how much harm unexploded munitions and unmapped landmines cause; these would be much, much worse, and you don't need an intelligent machine to run find-the-heat-sources-and-turn-them-cold-with-guns routines. Clever software will get there.

Isolated territories taken over by rogue smart weapons will be the future's answer to those areas where landmines can still be found long after the original conflict.
 