OpenAI released a 98-page technical report on Tuesday that did not disclose what it used to train the model or how it trained the model, including the energy costs and hardware used for it, making GPT-4 the company’s most secretive release thus far. As Motherboard has noted before, this is a complete 180 from OpenAI's founding principles as a nonprofit, open-source entity.
[...]
“They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity,”
[...]
Competitors can't copy it, but ethical AI researchers and users also can't scrutinize it to point out obvious problems. Keeping it closed source doesn't mean those problems don't exist; it just means they'll remain hidden until people stumble on them or something goes amiss.
Keeping its training set secret also makes it more difficult for people to know whether their intellectual property and copyrighted work have been scraped.
[...]
Big tech companies like Google, Microsoft, and Meta are racing to create new AI technologies as fast as possible, often sidestepping or shrugging off ethical concerns along the way. [...] Microsoft cut an entire ethics and society team within its AI department as part of its recent layoffs, leaving the company without a dedicated responsible AI team even as it continues to adopt OpenAI products as part of its business.