Showing posts with label OpenAI.

Saturday, August 17

Humanoid Robots F.02



California-based robotics outfit Figure has today announced its second-generation humanoid robot, which is initially being aimed at production lines in commercial settings, but the company is promising a bipedal butler in our homes in the near future.

Figure was founded in 2022 by entrepreneur Brett Adcock – of Vettery and Archer Aviation – with the aim of bringing a "commercially viable general purpose humanoid robot" to market. 

We caught up with Adcock last year, publishing a series of three interview pieces, and have followed the progress of its first robot from first steps to learning and performing tasks to joining BMW's workforce, and then gaining OpenAI chattiness.      READ MORE...

Thursday, January 4

AI Development Will Explode in 2024


Artificial intelligence made a big splash with consumers and regulators alike in 2023, with experts believing the continued development of the technology will reach even greater heights in 2024.

"I think that in 2024, AI will move a little closer to what is in the public imagination, but we remain years from AI being autonomous in the way people are imagining it," Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital.

Alexander's comments come after 2023 saw a noticeable leap in the development and availability of AI tools, with large language model (LLM) platforms such as OpenAI's ChatGPT gaining huge popularity and energizing other tech giants to come along for the ride.  READ MORE...

Tuesday, December 5

In The News


Researchers detect gravitational waves from the aftermath of the most powerful black hole merger observed to date; "ringing" effect comes from the new black hole settling into a spherical shape (More) | General relativity 101 (More, w/video)

Google delays launch of Gemini, a large language model expected to compete with OpenAI's GPT-4, until January; reports say the model has trouble with some non-English prompts (More)

Ancient redwood trees can recover from severe fire damage by tapping long-buried buds, which have lain dormant under their bark for centuries (More)

Thursday, November 30

Breakthrough Known as Q*


In today’s column, I am going to walk you through a prominent AI mystery that has caused quite a stir, generating incessant buzz across much of social media and garnering outsized headlines in the mass media. This will be quite a Sherlock Holmes adventure, a sleuthing, detective-style journey that I will take you on.

Please put on your thinking cap and get yourself a soothing glass of wine.

The roots of the circumstance involve the recent organizational gyrations and notable business-crisis drama associated with the AI maker OpenAI, including the off-again, on-again firing and then rehiring of CEO Sam Altman, along with a plethora of related carryings-on. My focus will not particularly be the comings and goings of the parties involved. I instead seek to leverage those reported facts primarily as telltale clues associated with the AI mystery that some believe sits at the core of the organizational earthquake.  READ MORE...

Saturday, November 4

AI Apocalypse Team Formed


Artificial intelligence (AI) is advancing rapidly, bringing unprecedented benefits to us, yet it also poses serious risks, such as chemical, biological, radiological and nuclear (CBRN) threats, that could have catastrophic consequences for the world.

How can we ensure that AI is used for good and not evil? How can we prepare for the worst-case scenarios that might arise from AI?

How OpenAI is preparing for the worst
These are some of the questions that OpenAI, a leading AI research lab and the company behind ChatGPT, is trying to answer with its new Preparedness team. Its mission is to track, evaluate, forecast and protect against the frontier risks of AI models.                          READ MORE...

Wednesday, September 27

ChatGPT Art Generator


OpenAI has announced Dall-E 3, its latest AI art tool. It uses OpenAI’s smash-hit chatbot, ChatGPT, to help create more complex and carefully composed works of art by automatically expanding on a prompt in a way that gives the generator more detailed and coherent instructions.

What’s new with Dall-E 3 is how it removes some of the complexity involved in refining the text that is fed to the program—what’s known as “prompt engineering”—and how it allows users to make refinements through ChatGPT’s conversational interface.
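
To make that flow concrete, here is a minimal sketch of the expand-then-generate idea using OpenAI’s public Python package (v1.x). The model names, system prompt, and wording here are our illustrative assumptions; inside ChatGPT, the Dall-E 3 integration performs this expansion automatically rather than through user code like this.

    # A minimal sketch, assuming the openai Python package (v1.x) and an
    # OPENAI_API_KEY in the environment. Models and prompts are illustrative;
    # this is not OpenAI's internal pipeline.
    from openai import OpenAI

    client = OpenAI()

    short_prompt = "the potato king"

    # Step 1: ask the chat model to expand a terse idea into a detailed,
    # well-composed image-generation prompt.
    expansion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's idea as a detailed, carefully "
                        "composed prompt for an image generator."},
            {"role": "user", "content": short_prompt},
        ],
    )
    detailed_prompt = expansion.choices[0].message.content

    # Step 2: feed the expanded prompt to the image generator.
    image = client.images.generate(
        model="dall-e-3",
        prompt=detailed_prompt,
        size="1024x1024",
    )
    print(image.data[0].url)  # URL of the rendered image

The point of the design is that the language model, not the user, carries the burden of prompt engineering.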

The new tool could help lower the barrier to generating sophisticated AI artwork, and it could help OpenAI stay ahead of the competition thanks to the superior abilities of its chatbot.

Take this image of the potato king, for example.

This kind of quirky AI-generated art has become commonplace on social media thanks to a number of tools that turn a text prompt into a visual composition. 

But this one was created with a significant amount of artistic assistance from ChatGPT, which took a short prompt and turned it into a more detailed one, including instructions about how to compose it correctly.

That’s a big step forward not just for Dall-E, but for generative AI art as a whole. Dall-E, a portmanteau of the Pixar character Wall-E and the artist Salvador Dalí that was announced in 2021 and launched in 2022, consists of an algorithm that’s fed huge quantities of labeled images scraped from the web and other sources. 

It uses what’s known as a diffusion model to predict how to render an image for a given prompt. With sufficiently huge quantities of data this can produce complex, coherent, and aesthetically pleasing imagery. What’s different with Dall-E 3 is in the way humans and machines interact.  READ MORE...
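
For readers curious what "diffusion model" means in practice, below is a toy sketch of the reverse-diffusion sampling loop (in the style of DDPM) written in Python. It is our illustration, not OpenAI's code: the noise-prediction network is a dummy stand-in, and a real system like Dall-E would condition that network on a text-prompt embedding.

    # Toy reverse-diffusion (DDPM-style) sampling loop. The "network" below
    # is a placeholder; without a trained model the output is meaningless.
    import numpy as np

    rng = np.random.default_rng(0)

    def predicted_noise(x, t):
        # Stand-in for a trained network that, given the noisy image x and
        # timestep t (and, in text-to-image systems, a prompt embedding),
        # predicts the noise that was mixed into x.
        return 0.1 * x

    steps = 50
    betas = np.linspace(1e-4, 0.02, steps)   # noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((64, 64, 3))     # start from pure noise
    for t in reversed(range(steps)):
        eps = predicted_noise(x, t)
        # Remove the estimated noise and rescale (the DDPM mean update)...
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            # ...then re-inject a little noise on every step but the last.
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    # x is now the generated sample

With a trained network in place of the stand-in, this same loop is what gradually turns random noise into a coherent image matching the prompt.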

Thursday, June 29

Driven to Extinction Will Be Our Fault


Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, and Dario Amodei of Anthropic have all supported the Centre for AI Safety's statement on the risk of extinction from AI.

The Centre for AI Safety website suggests a number of possible disaster scenarios:

AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons

AI-generated misinformation could destabilise society and "undermine collective decision-making"

The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"

Enfeeblement, where humans become dependent on AI "similar to the scenario portrayed in the film Wall-E"

Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety's call.

Yoshua Bengio, professor of computer science at the University of Montreal, also signed.

Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the "godfathers of AI" for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.

But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that "the most common reaction by AI researchers to these prophecies of doom is face palming".     READ MORE...

Monday, June 5

ChatGPT Taking Jobs


Tech jobs (Coders, computer programmers, software engineers, data analysts)

Coding and computer programming are in-demand skills, but it's possible that ChatGPT and similar AI tools may fill in some of the gaps in the near future.

Tech jobs such as software developers, web developers, computer programmers, coders, and data scientists are "pretty amenable" to AI technologies "displacing more of their work," Madgavkar said.

That's because AI like ChatGPT is good at crunching numbers with relative accuracy.

In fact, advanced technologies like ChatGPT could produce code faster than humans, which means that work can be completed with fewer employees, Mark Muro, a senior fellow at the Brookings Institution who has researched AI's impact on the American workforce, told Insider.

"What took a team of software developers might only take some of them," he added.

Tech companies like ChatGPT maker OpenAI are already considering replacing software engineers with AI.

Still, Oded Netzer, a Columbia Business School professor, thinks that AI will help coders rather than replace them.

"In terms of jobs, I think it's primarily an enhancer than full replacement of jobs," Netzer told CBS MoneyWatch. "Coding and programming is a good example of that. It actually can write code quite well."  READ MORE...

Saturday, May 27

Selling Superintelligent Sunshine


The OpenAI CEO is on a world tour to talk up the benefits of AI and the need for regulation — but not too much. Some, though, think Altman’s vision is dangerous.

The queue to see OpenAI CEO Sam Altman speak at University College London on Wednesday stretched hundreds deep into the street. 

Those waiting gossiped in the sunshine about the company and their experience using ChatGPT, while a handful of protesters delivered a stark warning in front of the entrance doors: OpenAI and companies like it need to stop developing advanced AI systems before they have the chance to harm humanity.

“Look, maybe he’s selling a grift. I sure as hell hope he is,” one of the protesters, Gideon Futerman, a student at Oxford University studying solar geoengineering and existential risk, said of Altman.

“But in that case, he’s hyping up systems with enough known harms. We probably should be putting a stop to them anyway. And if he’s right and he’s building systems which are generally intelligent, then the dangers are far, far, far bigger.”  READ MORE...

Monday, May 15

Global Control of AI Needed


FILE - Computer scientist Geoffrey Hinton, who studies neural networks used in artificial intelligence applications, poses at Google's Mountain View, Calif., headquarters on March 25, 2015. (AP Photo/Noah Berger, File / AP Newsroom)

Geoffrey Hinton, who recently resigned from his position as Google's vice president of engineering to sound the alarm about the dangers of artificial intelligence, cautioned in an interview published Friday that the world needs to find a way to control the tech as it develops.

The "godfather of AI" told EL PAÍS via videoconference that he believed a letter calling for a six-month moratorium on training AI systems more powerful than OpenAI's GPT-4 is "completely naive" and that the best he can recommend is that many very intelligent minds work to figure out "how to contain the dangers of these things."

"AI is a fantastic technology – it’s causing great advances in medicine, in the development of new materials, in forecasting earthquakes or floods… [but we] need a lot of work to understand how to contain AI," Hinton urged. "There’s no use waiting for the AI to outsmart us; we must control it as it develops. We also have to understand how to contain it, how to avoid its negative consequences."

For instance, Hinton believes all governments should insist that fake images be flagged. The scientist said that the best thing to do now is to "put as much effort into developing this technology as we do into making sure it’s safe" – which he says is not happening right now.

"How [can that be] accomplished in a capitalist system? I don’t know," Hinton noted.  When asked about sharing concerns with colleagues, Hinton said that many of the smartest people he knows are "seriously concerned."

"We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but we’re still in control. But what if we develop machines that are smarter than us?" he asked. "We have no experience dealing with these things."

Hinton says there are many different dangers to AI, citing job reduction and the creation of fake news. Hinton noted that he now believes AI may be doing things more efficiently than the human brain, with models like ChatGPT having the ability to see thousands of times more data than anyone else.

"That’s what scares me," he said.     READ MORE...

Sunday, May 7

AI and ChatGPT Threaten Humanity


As tech experts warn that the rapid evolution of artificial intelligence could threaten humanity, OpenAI's ChatGPT weighed in with its own predictions on how humanity could be wiped off the face of the Earth.

Fox News Digital asked the chatbot to weigh in on the apocalypse, and it shared four possible scenarios for how humanity could ultimately be wiped out.

"It's important to note that predicting the end of the world is a difficult and highly speculative task, and any predictions in this regard should be viewed with skepticism," the bot responded. "However, there are several trends and potential developments that could significantly impact the trajectory of humanity and potentially contribute to its downfall."

Fears that AI could spell the end of humanity have for years been fodder for fiction but have become a legitimate talking point among experts as tech rapidly evolves – with British theoretical physicist Stephen Hawking issuing a dire warning back in 2014.

"The development of full artificial intelligence could spell the end of the human race," he said then. Hawking died in 2018.

The sentiment has only intensified among some experts nearly a decade later, with tech giant Elon Musk saying this year that the tech "has the potential of civilizational destruction."   READ MORE...

Saturday, April 15

Artificial Intelligence

Sam Altman of OpenAI


It was a blockbuster 2022 for artificial intelligence. The technology made waves, from Google’s DeepMind predicting the structure of almost every known protein in the human body to the successful launches of OpenAI’s generative A.I. assistant tools DALL-E and ChatGPT.

The sector now looks to be on a fast track toward revolutionizing our economy and everyday lives, but many experts remain concerned that changes are happening too fast, with potentially disastrous implications for the world.

Many experts in A.I. and computer science say the technology is likely a watershed moment for human society. But not all of them mean that as a positive: 36% of researchers surveyed warn that decisions made by A.I. could lead to “nuclear-level catastrophe,” according to an annual report on the technology by Stanford University’s Institute for Human-Centered A.I., published earlier this month.

Almost three quarters of researchers in natural language processing—the branch of A.I. concerned with understanding and generating human language—say the technology might soon spark “revolutionary societal change,” according to the report. 

And while an overwhelming majority of researchers say the future net impact of A.I. and natural language processing will be positive, concerns remain that the technology could soon develop potentially dangerous capabilities, while A.I.’s traditional gatekeepers are no longer as powerful as they once were.

“As the technical barrier to entry for creating and deploying generative A.I. systems has lowered dramatically, the ethical issues around A.I. have become more apparent to the general public. Startups and large companies find themselves in a race to deploy and release generative models, and the technology is no longer controlled by a small group of actors,” the report said.  READ MORE...

Tuesday, April 4

Pausing ChatGPT


An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks they may pose can be properly studied.

It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype cofounder Jaan Tallinn, and Twitter CEO Elon Musk.

The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.

Microsoft and Google did not respond to requests for comment on the letter. The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.  READ MORE...

Monday, April 3

GPT-5 Could Change the World


GPT-4 may have only just launched, but people are already excited about the next version of the artificial intelligence (AI) chatbot technology. Now, a new claim has been made that GPT-5 will complete its training this year, and could bring a major AI revolution with it.

The assertion comes from developer Siqi Chen on Twitter, who stated: “I have been told that GPT-5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI.”

AGI is the concept of “artificial general intelligence,” which refers to an AI’s ability to comprehend and learn any task or idea that humans can wrap their heads around. In other words, an AI that has achieved AGI could be indistinguishable from a human in its capabilities.

That makes Chen’s claim pretty explosive, considering all the possibilities AGI might enable. At the positive end of the spectrum, it could massively increase the productivity of various AI-enabled processes, speeding things up for humans and eliminating monotonous drudgery and tedious work.

At the same time, bestowing an AI with that much power could have unintended consequences — ones that we simply haven’t thought of yet. It doesn’t mean the robot apocalypse is imminent, but it certainly raises a lot of questions about what the negative effects of AGI could be.

It should be noted that other forecasters predict that AGI will not be achieved until 2032.  READ MORE...

Saturday, April 1

Nervous About AI


OpenAI entered the Silicon Valley stratosphere last year with the release of two AI products, the image generator DALL-E 2 and the chatbot ChatGPT. (The company recently unveiled GPT-4, which can ace most standardized tests, among other improvements on its predecessor.) Sam Altman (above), OpenAI’s co-founder, has become a public face of the AI revolution, alternately evangelical and circumspect about the potent force he has helped unleash on the world.

In the latest episode of On With Kara Swisher, Swisher speaks with Altman about the many possibilities and pitfalls of his nascent field, focusing on some of the key questions around it. Among them: How do we best regulate a technology even its founders don’t fully understand? And who gets the enormous sums of money at stake? Altman has lofty ideas for how generative AI could transform society. But as Swisher observes, he sounds like the starry-eyed tech founders she encountered a quarter-century ago — only some of whom stayed true to their ideals.

Kara Swisher: You started Loopt. That’s where I met you.

Sam Altman: Yeah.

Swisher: Explain what it was. I don’t even remember, Sam. I’m sorry.

Altman: That’s no problem. Well, it didn’t work out. There’s no reason to remember. It was a location-based social app for mobile phones.

Swisher: Right. What happened?

Altman: The market wasn’t there, I’d say, is the No. 1 thing.

Swisher: Yeah. Because?

TO READ MORE, CLICK HERE...