Showing posts with label Open AI. Show all posts

Saturday, July 29

Technology IS NOT Going to be Good for Workers


Generative artificial intelligence technology such as ChatGPT could boost productivity for many workers in the years ahead. But some people are likely to lose their jobs in the process.

That's according to Sam Altman, the CEO of OpenAI, the company behind ChatGPT. Altman said in June that AI's development could provide the "most tremendous leap forward" for people's quality of life. But he also said in March it'd be "crazy not to be a little afraid of AI" and its potential to create "disinformation problems or economic shocks."

In a new interview with The Atlantic, Altman pushed back on the idea that the AI boom would have only a positive impact on workers.

"A lot of people working on AI pretend that it's only going to be good; it's only going to be a supplement; no one is ever going to be replaced," he said. "Jobs are definitely going to go away, full stop."

Since ChatGPT rolled out last November, economists have spoken about the ways AI could serve as a valuable assistant to workers — helping them become more productive and spend less time on boring tasks.

Some experts expressed optimism that AI wouldn't result in the widespread job displacement many Americans fear, and said that workers should instead be more worried about co-workers using these technologies to supplant them.

"You will not be replaced by AI but by someone who knows what to do with AI," Oded Netzer, a Columbia Business School professor, told Insider in early July.

But Altman's comments speak to a harsh reality: Even if most jobs aren't displaced, some are likely to go by the wayside. In March, Goldman Sachs said that 300 million full-time jobs across the globe could be disrupted by AI.

"History tells us that simplification is often merely a step towards automation," Carl Benedikt Frey, an Oxford economist, previously told Insider. "AI assistants that analyze telemarketers' calls and provide recommendations are being trained with the ultimate goal of replacing them."

Friday, May 12

Artificial Intelligence Needs Oversight


EVERY TIME YOU post a photo, respond on social media, make a website, or possibly even send an email, your data is scraped, stored, and used to train generative AI technology that can create text, audio, video, and images with just a few words. 

This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80 percent of the US workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19 percent of workers may see at least half of their tasks impacted. 

We’re seeing an immediate labor market shift with image generation, too. In other words, the data you created may be putting you out of a job.

When a company builds its technology on a public resource—the internet—it’s sensible to say that that technology should be available and open to all. But critics have noted that GPT-4 lacked any clear information or specifications that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. 

Some of these companies have received vast sums of funding from other major corporations to create commercial products. For some in the AI community, this is a dangerous sign that these companies are going to seek profits above public benefit. Code transparency alone is unlikely to ensure that these generative AI models serve the public good.

There is little conceivable immediate benefit to a journalist, policy analyst, or accountant (all “high exposure” professions according to the OpenAI study) if the data underpinning an LLM is available. We increasingly have laws, like the Digital Services Act, that would require some of these companies to open their code and data for expert auditor review. 

And open source code can sometimes enable malicious actors, allowing hackers to subvert safety precautions that companies are building in. Transparency is a laudable objective, but that alone won't ensure that generative AI is used to better society.