
Thursday, January 4

AI Development Will Explode in 2024


Artificial intelligence made a big splash with consumers and regulators alike in 2023, with experts believing the continued development of the technology will reach even greater heights in 2024.

"I think that in 2024, AI will move a little closer to what is in the public imagination, but we remain years from AI being autonomous in the way people are imagining it," Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital.

Alexander's comments come after 2023 saw a noticeable leap in the development and availability of AI tools, with large language model (LLM) platforms such as OpenAI's ChatGPT exploding in popularity and energizing other tech giants to come along for the ride.  READ MORE...

Wednesday, December 27

AI Can Reproduce AI On Its Own


A scientific collaboration has achieved a breakthrough in creating larger AI models that can autonomously develop smaller AI models.

These smaller models have practical applications such as identifying human voices, monitoring pipelines, and tracking wildlife in confined spaces.

The self-replicating AI concept has sparked negative reactions on social media, with references to sci-fi scenarios like Terminator and The Matrix.

Yubei Chen, one of the project’s researchers, said, “This month, we just demonstrated the first proof of concept such that one type of model can be automatically designed all the way from data generation to the model deployment and testing without human intervention.”

“If we think about ChatGPT and tiny machine learning, they are on the two extremes of the spectrum of intelligence,” he continued.     READ MORE...

Saturday, November 4

AI Apocalypse Team Formed


Artificial intelligence (AI) is advancing rapidly, bringing unprecedented benefits to us, yet it also poses serious risks, such as chemical, biological, radiological and nuclear (CBRN) threats, that could have catastrophic consequences for the world.

How can we ensure that AI is used for good and not evil? How can we prepare for the worst-case scenarios that might arise from AI?

How OpenAI is preparing for the worst
These are some of the questions that OpenAI, a leading AI research lab and the company behind ChatGPT, is trying to answer with its new Preparedness team. Its mission is to track, evaluate, forecast, and protect against the frontier risks of AI models.  READ MORE...

Sunday, October 29

Wiping Out Humanity

Google is placing a huge bet on an artificial intelligence start-up whose owners have admitted it could wipe out humanity.

The tech giant's parent company, Alphabet, has reportedly committed $2 billion in funding to Anthropic, a startup that develops AI systems.

Anthropic is seen as one of the biggest rivals to OpenAI, the company behind the hugely popular ChatGPT that took the world by storm this past year, leaving Google's Bard in the dust.

Anthropic's CEO and co-founder, Dario Amodei, said earlier this week that AI has a '10 to 25 percent' chance of destroying humanity.

The report claimed that an upfront $500 million has already been invested in the startup, with the rest to be allocated over time.

The whopping investment comes just one month after Amazon invested $4 billion in Anthropic, The Wall Street Journal reports.  READ MORE...

Friday, October 27

AI Becoming More Secretive


A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

The big picture: Self-regulation hasn't moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more secretive, citing competitive and safety concerns. "Transparency should be a top priority for AI legislation," according to a paper the researchers published alongside their new index.

Driving the news: A Capitol Hill AI forum led by Senate Majority Leader Chuck Schumer Tuesday afternoon will put some of AI's biggest boosters and skeptics in the same room, as Congress works to develop AI legislation.

Details: The index measures models based on 100 transparency indicators, covering both the technical and social aspects of AI development, with only 2 of 10 models scoring more than 50% overall.

All 10 models had major transparency holes, and the mean score for the models is 37 out of 100. "None release information about the real-world impact of their systems," one of the co-authors, Kevin Klyman, told Axios.

Because 82 of the 100 criteria are met by at least one developer, the index authors say there are dozens of options for developers to copy or build on the work of their competitors to improve their own transparency.
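
As a rough illustration of the scoring arithmetic (a toy Python sketch; the indicator names and values below are invented and are not the index's real 100-indicator data), each model's score is simply the share of binary indicators it satisfies, and the "met by at least one developer" figure is the union of satisfied criteria:

    # Toy transparency-index scoring: binary indicators per model.
    indicators = {
        "ModelA": {"training_data_disclosed": True,
                   "compute_disclosed": False,
                   "impact_reported": False},
        "ModelB": {"training_data_disclosed": True,
                   "compute_disclosed": True,
                   "impact_reported": False},
    }

    def score(model):
        # A model's score is the percentage of indicators it meets.
        checks = indicators[model]
        return 100 * sum(checks.values()) / len(checks)

    mean_score = sum(score(m) for m in indicators) / len(indicators)

    # Criteria met by at least one developer -- the basis of the
    # authors' point that developers could copy competitors'
    # disclosures to improve their own transparency.
    met_somewhere = {criterion
                     for checks in indicators.values()
                     for criterion, ok in checks.items() if ok}

    print(f"Mean score: {mean_score:.0f}/100")
    print("Met by at least one developer:", met_somewhere)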

The researchers urge policymakers to develop precise definitions of transparency requirements. They advise large customers of AI companies to push for more transparency during contract negotiations, or to partner with their peers "to increase their collective bargaining power."  READ MORE...

Wednesday, September 27

ChatGPT Art Generator


OpenAI has announced Dall-E 3, its latest AI art tool. It uses OpenAI’s smash-hit chatbot, ChatGPT, to help create more complex and carefully composed works of art by automatically expanding on a prompt in a way that gives the generator more detailed and coherent instruction.

What’s new with Dall-E 3 is how it removes some of the complexity required with refining the text that is fed to the program—what’s known as “prompt engineering”—and how it allows users to make refinements through ChatGPT’s conversational interface. 

The new tool could help lower the bar for generating sophisticated AI artwork, and it could help OpenAI stay ahead of the competition thanks to the superior abilities of its chatbot.

Take this image of the potato king, for example.

This kind of quirky AI-generated art has become commonplace on social media thanks to a number of tools that turn a text prompt into a visual composition. 

But this one was created with a significant amount of artistic assistance from ChatGPT, which took a short prompt and turned it into a more detailed one, including instructions about how to compose it correctly.

That’s a big step forward not just for Dall-E, but for generative AI art as a whole. Dall-E, a portmanteau of the Pixar character Wall-E and the artist Salvador Dalí that was announced in 2021 and launched in 2022, consists of an algorithm that’s fed huge quantities of labeled images scraped from the web and other sources. 

It uses what’s known as a diffusion model to predict how to render an image for a given prompt. With sufficiently huge quantities of data this can produce complex, coherent, and aesthetically pleasing imagery. What’s different with Dall-E 3 is in the way humans and machines interact.  READ MORE...
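
Concretely, that interaction amounts to a two-step pipeline: a chat model first expands a terse prompt into a detailed scene description, which is then handed to the image model. Here is a minimal sketch, assuming the openai Python client (v1-style); the model names, system prompt, and parameters are illustrative, not OpenAI's actual internal setup:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    short_prompt = "a potato king"

    # Step 1: expand the terse prompt into a detailed, well-composed
    # scene description, including compositional instructions.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Expand the user's idea into a detailed, "
                        "well-composed image generation prompt."},
            {"role": "user", "content": short_prompt},
        ],
    )
    expanded_prompt = chat.choices[0].message.content

    # Step 2: feed the expanded prompt to the image generator.
    image = client.images.generate(
        model="dall-e-3",
        prompt=expanded_prompt,
        size="1024x1024",
    )
    print(image.data[0].url)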

Saturday, July 29

Technology IS NOT Going to be Good for Workers


Generative artificial intelligence technology such as ChatGPT could boost productivity for many workers in the years ahead. But some people are likely to lose their jobs in the process.

That's according to Sam Altman, the CEO of OpenAI, the company behind ChatGPT. Altman said in June that AI's development could provide the "most tremendous leap forward" for people's quality of life. But he also said in March it'd be "crazy not to be a little afraid of AI" and its potential to create "disinformation problems or economic shocks."

In a new interview with The Atlantic, Altman pushed back on the idea that the AI boom would have only a positive impact on workers.

"A lot of people working on AI pretend that it's only going to be good; it's only going to be a supplement; no one is ever going to be replaced," he said. "Jobs are definitely going to go away, full stop."

Since ChatGPT rolled out last November, economists and other experts have spoken about the ways AI could serve as a valuable assistant to workers — helping them become more productive and spend less time on boring tasks.

Some experts expressed optimism that AI wouldn't result in the widespread job displacement many Americans fear, saying workers should be more worried about co-workers who use these technologies supplanting them.

"You will not be replaced by AI but by someone who knows what to do with AI," Oded Netzer, a Columbia Business School professor, told Insider in early July.

But Altman's comments speak to a harsh reality: Even if most jobs aren't displaced, some are likely to go by the wayside. In March, Goldman Sachs said that 300 million full-time jobs across the globe could be disrupted by AI.

"History tells us that simplification is often merely a step towards automation," Carl Benedikt Frey, an Oxford economist, previously told Insider. "AI assistants that analyze telemarketers' calls and provide recommendations are being trained with the ultimate goal of replacing them."  READ MORE...

Monday, June 5

ChatGPT Taking Jobs


Tech jobs (Coders, computer programmers, software engineers, data analysts)

Coding and computer programming are in-demand skills, but it's possible that ChatGPT and similar AI tools may fill in some of the gaps in the near future.

Tech jobs such as software developers, web developers, computer programmers, coders, and data scientists are "pretty amenable" to AI technologies "displacing more of their work," Madgavkar said.

That's because AI like ChatGPT is good at crunching numbers with relative accuracy.

In fact, advanced technologies like ChatGPT could produce code faster than humans, which means that work can be completed with fewer employees, Mark Muro, a senior fellow at the Brookings Institution who has researched AI's impact on the American workforce, told Insider.

"What took a team of software developers might only take some of them," he added.

Tech companies like ChatGPT maker OpenAI are already considering replacing software engineers with AI.

Still, Oded Netzer, a Columbia Business School professor, thinks that AI will help coders rather than replace them.

"In terms of jobs, I think it's primarily an enhancer than full replacement of jobs," Netzer told CBS MoneyWatch. "Coding and programming is a good example of that. It actually can write code quite well."  READ MORE...

Saturday, May 27

Selling Superintelligent Sunshine


The OpenAI CEO is on a world tour to talk up the benefits of AI and the need for regulation — but not too much. Some, though, think Altman’s vision is dangerous.

The queue to see OpenAI CEO Sam Altman speak at University College London on Wednesday stretched hundreds deep into the street. 

Those waiting gossiped in the sunshine about the company and their experience using ChatGPT, while a handful of protesters delivered a stark warning in front of the entrance doors: OpenAI and companies like it need to stop developing advanced AI systems before they have the chance to harm humanity.

“Look, maybe he’s selling a grift. I sure as hell hope he is,” one of the protesters, Gideon Futerman, a student at Oxford University studying solar geoengineering and existential risk, said of Altman.

“But in that case, he’s hyping up systems with enough known harms. We probably should be putting a stop to them anyway. And if he’s right and he’s building systems which are generally intelligent, then the dangers are far, far, far bigger.”  READ MORE...

Monday, May 15

Global Control of AI Needed


FILE: Computer scientist Geoffrey Hinton, who studies neural networks used in artificial intelligence applications and is known as the "godfather of AI," poses at Google's Mountain View, Calif., headquarters on March 25, 2015. (AP Photo/Noah Berger, File / AP Newsroom)

Geoffrey Hinton, who recently resigned from his position as a vice president and engineering fellow at Google to sound the alarm about the dangers of artificial intelligence, cautioned in an interview published Friday that the world needs to find a way to control the tech as it develops.

The "godfather of AI" told EL PAÍS via videoconference that he believed a letter calling for a sixth-month-long moratorium on training AI systems more powerful than OpenAI's GPT-4 is "completely naive" and that the best he can recommend is that many very intelligence minds work to figure out "how to contain the dangers of these things."

"AI is a fantastic technology – it’s causing great advances in medicine, in the development of new materials, in forecasting earthquakes or floods… [but we] need a lot of work to understand how to contain AI," Hinton urged. "There’s no use waiting for the AI to outsmart us; we must control it as it develops. We also have to understand how to contain it, how to avoid its negative consequences."

For instance, Hinton believes all governments should insist that fake images be flagged. The scientist said that the best thing to do now is to "put as much effort into developing this technology as we do into making sure it’s safe" – which he says is not happening right now.

"How [can that be] accomplished in a capitalist system? I don’t know," Hinton noted.  When asked about sharing concerns with colleagues, Hinton said that many of the smartest people he knows are "seriously concerned."

"We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but we’re still in control. But what if we develop machines that are smarter than us?" he asked. "We have no experience dealing with these things."

Hinton says there are many different dangers to AI, citing job losses and the creation of fake news. Hinton noted that he now believes AI may be doing things more efficiently than the human brain, with models like ChatGPT able to ingest thousands of times more data than any human could.

"That’s what scares me," he said.     READ MORE...

Sunday, May 7

AI and ChatGPT Threaten Humanity


As tech experts warn that the rapid evolution of artificial intelligence could threaten humanity, OpenAI's ChatGPT weighed in with its own predictions on how humanity could be wiped off the face of the Earth.

Fox News Digital asked the chatbot to weigh in on the apocalypse, and it shared four possible scenarios for how humanity could ultimately be wiped out.

"It's important to note that predicting the end of the world is a difficult and highly speculative task, and any predictions in this regard should be viewed with skepticism," the bot responded. "However, there are several trends and potential developments that could significantly impact the trajectory of humanity and potentially contribute to its downfall."

Fears that AI could spell the end of humanity have for years been fodder for fiction but have become a legitimate talking point among experts as the tech rapidly evolves, with British theoretical physicist Stephen Hawking issuing a dire warning back in 2014.

"The development of full artificial intelligence could spell the end of the human race," he said then. Hawking died in 2018.

The sentiment has only intensified among some experts nearly a decade later, with tech giant Elon Musk saying this year that the tech "has the potential of civilizational destruction."   READ MORE...

Wednesday, April 12

Article Referenced in Newspaper Never Published


OpenAI's ChatGPT is flooding the internet with a tsunami of made-up facts and disinformation — and that's rapidly becoming a very real problem for the journalism industry.

Reporters at The Guardian noticed that the AI chatbot had made up entire articles and bylines that the paper never actually published, a worrying side effect of democratizing tech that can't reliably distinguish truth from fiction.

Worse yet, letting these chatbots "hallucinate" sources ("hallucinate" is itself now a disputed euphemism) could serve to undermine legitimate news organizations.

"Huge amounts have been written about generative AI’s tendency to manufacture facts and events," The Guardian's head of editorial innovation Chris Moran wrote. "But this specific wrinkle — the invention of sources — is particularly troubling for trusted news organizations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy."

"And for readers and the wider information ecosystem, it opens up whole new questions about whether citations can be trusted in any way," he added, "and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place."

It's not just journalists at The Guardian. Many other writers have found that their names were attached to sources that ChatGPT had drawn out of thin air.

Kate Crawford, an AI researcher and author of "Atlas of AI," was contacted by an Insider journalist who had been told by ChatGPT that Crawford was one of the top critics of podcaster Lex Fridman. The AI tool offered up a number of links and citations linking Crawford to Fridman — which were entirely fabricated, according to the author.  READ MORE...

Tuesday, April 4

Pausing ChatGPT


An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks they may pose can be properly studied.

It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype cofounder Jaan Tallinn, and Twitter CEO Elon Musk.

The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.

Microsoft and Google did not respond to requests for comment on the letter. The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.  READ MORE...

Saturday, April 1

Nervous About AI


OpenAI entered the Silicon Valley stratosphere last year with the release of two AI products, the image generator DALL-E 2 and the chatbot ChatGPT. (The company recently unveiled GPT-4, which can ace most standardized tests, among other improvements on its predecessor.) Sam Altman, OpenAI’s co-founder, has become a public face of the AI revolution, alternately evangelical and circumspect about the potent force he has helped unleash on the world.

In the latest episode of On With Kara Swisher, Swisher speaks with Altman about the many possibilities and pitfalls of his nascent field, focusing on some of the key questions around it. Among them: How do we best regulate a technology even its founders don’t fully understand? And who gets the enormous sums of money at stake? Altman has lofty ideas for how generative AI could transform society. But as Swisher observes, he sounds like the starry-eyed tech founders she encountered a quarter-century ago — only some of whom stayed true to their ideals.

Kara Swisher: You started Loopt. That’s where I met you.

Sam Altman: Yeah.

Swisher: Explain what it was. I don’t even remember, Sam. I’m sorry.

Altman: That’s no problem. Well, it didn’t work out. There’s no reason to remember. It was a location-based social app for mobile phones.

Swisher: Right. What happened?

Altman: The market wasn’t there, I’d say, is the No. 1 thing.

Swisher: Yeah. Because?

TO READ MORE, CLICK HERE...

Thursday, March 16

Robots Taking Over Jobs by 2025


There are two sides to this coin: Robots and AI will take some jobs away from humans — but they will also create new ones. Since 2000, robots and automation systems have slowly phased out many manufacturing jobs — 1.7 million of them. On the flip side, it’s predicted that AI will create 97 million new jobs by 2025.

WILL ARTIFICIAL INTELLIGENCE (AI) REPLACE JOBS?
AI is and will continue to replace some jobs. Workers in industries ranging from healthcare to agriculture and industrial sectors can all expect to see disruptions in hiring due to AI. But demand for workers, especially in robotics and software engineering, is expected to rise thanks to AI.

Some people don’t see it both ways. For example, Sean Chou, former CEO of AI startup Catalytic, thinks robots are stupid, and he’s not alone in his frank assessment.

“All you have to do is type in ‘YouTube robot fail,’” Chou said.

Don’t misunderstand, though; it isn’t that the machines aren’t rising. It’s that they’re rising much more slowly than some of the more breathless media coverage might have you believe — which is great news for most of those who think robots and other AI-powered technology will soon steal their jobs. “Most of” being the operative words.

Types of Jobs AI Will Impact
The consensus among many experts is that a number of professions will be totally automated in the next five to 10 years. A group of senior-level tech executives who comprise the Forbes Technology Council named 15: insurance underwriting, warehouse and manufacturing jobs, customer service, research and data entry, long haul trucking and a somewhat disconcertingly broad category titled “Any Tasks That Can Be Learned.”

HOW MANY JOBS WILL AI REPLACE?
According to the World Economic Forum's "The Future of Jobs Report 2020," AI is expected to replace 85 million jobs worldwide by 2025. Though that sounds scary, the report goes on to say that it will also create 97 million new jobs in that same timeframe.

Kai-Fu Lee, AI expert and CEO of Sinovation Ventures, wrote in a 2018 essay that 50 percent of all jobs will be automated by AI within 15 years.

“Accountants, factory workers, truckers, paralegals, and radiologists — just to name a few — will be confronted by a disruption akin to that faced by farmers during the Industrial Revolution,” Lee wrote.

When considering those developments and predictions, and based on multiple studies — by the McKinsey Global Institute, Oxford University and the U.S. Bureau of Labor Statistics, among others — there is massive and unavoidable change afoot. Research suggests that both specially trained workers and blue-collar workers will be impacted by the continued implementation of AI.

Developments in generative AI tools like ChatGPT and Bard have raised questions about whether AI will replace jobs that involve writing. While it’s unlikely that AI will ever match the authentic creativity of humans, it is already being used as a catalyst for writing ideas and assisting with repetitive content creation.  READ MORE...

Monday, February 6

ChatGPT


In a complicated story from TIME, details emerged about work between OpenAI, the company behind ChatGPT, and Sama, a San Francisco-based firm that employs workers in Kenya. Sama works with some of the biggest companies in the tech industry, including Google, Meta, and Microsoft, labeling images and text for explicit content.

Microsoft already has invested $1 billion into OpenAI, with a possible $10 billion more on the way. Microsoft plans to put AI into everything and reportedly leverage ChatGPT with Bing.

Sama is based in San Francisco, but the work is performed by workers in Kenya earning between $1.32 and $2 per hour. Unfortunately, to keep ChatGPT “safe” for users, OpenAI needs to feed it a lot of data from the internet, all of it unfiltered. So instead of relying on humans to filter out all the bad stuff directly, OpenAI (and companies like Meta with Facebook) use AI tools, trained on human-labeled examples, to remove that content from the data pool automatically.
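
As a rough sketch of that approach (not OpenAI's actual system; the tiny dataset and scikit-learn model below are invented for illustration), human-labeled examples can train a simple text classifier that then screens a larger corpus automatically:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Human-labeled examples (1 = harmful, 0 = benign). In the real
    # pipeline, these labels came from contracted human annotators.
    texts = [
        "violent graphic threat",
        "how to bake bread",
        "hateful slur-filled rant",
        "the weather is nice today",
    ]
    labels = [1, 0, 1, 0]

    # Train a simple text classifier on the human labels.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(texts, labels)

    # Use the trained model to filter a larger unlabeled corpus,
    # keeping only documents it predicts as benign.
    corpus = ["bread and butter recipe", "another violent graphic threat"]
    kept = [doc for doc, bad in zip(corpus, classifier.predict(corpus))
            if not bad]
    print(kept)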

As in The Verge's 2019 story on Facebook, which highlighted the psychological impact of such content on workers, Sama employees suffered a similar fate:

“One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.”

The overall contract with Sama was $200,000, and that contract stipulated it would pay “an hourly rate of $12.50 to Sama for the work, which was between six and nine times the amount Sama employees on the project were taking home per hour.”

Later, Sama began to pilot a new project for OpenAI unrelated to ChatGPT. However, instead of text this time, it was imagery, including some illegal under US law, such as child sexual abuse, bestiality, rape, sexual slavery, and death and violence. Again, workers were to view and label the content so that OpenAI’s systems could filter out such things.  READ MORE...

Saturday, January 28

Sweatshops Powering ChatGPT


On January 18, Time magazine published revelations that alarmed if not necessarily surprised many who work in Artificial Intelligence. The news concerned ChatGPT, an advanced AI chatbot that is both hailed as one of the most intelligent AI systems built to date and feared as a new frontier in potential plagiarism and the erosion of craft in writing.

Many had wondered how ChatGPT, which stands for Chat Generative Pre-trained Transformer, had improved upon earlier versions of this technology that would quickly descend into hate speech. The answer came in the Time magazine piece: dozens of Kenyan workers were paid less than $2 per hour to process an endless amount of violent and hateful content in order to make a system primarily marketed to Western users safer.

It should be clear to anyone paying attention that our current paradigm of digitalisation has a labour problem. We have and are pivoting away from the ideal of an open internet built around communities of shared interests to one that is dominated by the commercial prerogatives of a handful of companies located in specific geographies.

In this model, large companies maximise extraction and accumulation for their owners at the expense not just of their workers but also of the users. Users are sold the lie that they are participating in a community, but the more dominant these corporations become, the more egregious the power imbalance between owners and users becomes.

“Community” increasingly means that ordinary people absorb the moral and the social costs of the unchecked growth of these companies, while their owners absorb the profit and the acclaim. And a critical mass of underpaid labour is contracted under the most tenuous conditions that are legally possible to sustain the illusion of a better internet.  READ MORE...