Showing posts with label AI Systems.

Monday, April 15

Liquid Circuits and Brain Computers


Liquid circuits that mimic synapses in the brain can, for the first time, perform the kind of logical operations that underlie modern computers, a new study finds.

Near-term applications for these devices may include tasks such as image recognition, as well as the kinds of calculations underlying most artificial intelligence systems.

Just as biological neurons both compute and store data, brain-imitating neuromorphic technology often combines both operations. Such devices may greatly reduce the energy and time conventional microchips lose shuttling data back and forth between processors and memory.

They may also prove ideal for implementing neural networks—AI systems increasingly finding use in applications such as analyzing medical scans and controlling autonomous vehicles.   READ MORE...

Saturday, May 27

Selling Superintelligent Sunshine


The OpenAI CEO is on a world tour to talk up the benefits of AI and the need for regulation — but not too much. Some, though, think Altman’s vision is dangerous.

The queue to see OpenAI CEO Sam Altman speak at University College London on Wednesday stretched hundreds deep into the street. 

Those waiting gossiped in the sunshine about the company and their experience using ChatGPT, while a handful of protesters delivered a stark warning in front of the entrance doors: OpenAI and companies like it need to stop developing advanced AI systems before they have the chance to harm humanity.

“Look, maybe he’s selling a grift. I sure as hell hope he is,” one of the protesters, Gideon Futerman, a student at Oxford University studying solar geoengineering and existential risk, said of Altman.

“But in that case, he’s hyping up systems with enough known harms that we probably should be putting a stop to them anyway. And if he’s right and he’s building systems which are generally intelligent, then the dangers are far, far, far bigger.”  READ MORE...

Saturday, April 15

Artificial Intelligence

Sam Altman of OpenAI


It was a blockbuster 2022 for artificial intelligence. The technology made waves, from Google’s DeepMind predicting the structure of almost every known protein in the human body to the successful launches of OpenAI’s generative A.I. tools DALL-E and ChatGPT.

The sector now looks to be on a fast track toward revolutionizing our economy and everyday lives, but many experts remain concerned that changes are happening too fast, with potentially disastrous implications for the world.

Many experts in A.I. and computer science say the technology is likely a watershed moment for human society. But not all mean that as a positive: 36% of those surveyed warn that decisions made by A.I. could lead to “nuclear-level catastrophe,” according to an annual report on the technology by Stanford University’s Institute for Human-Centered A.I., published earlier this month.

Almost three-quarters of researchers in natural language processing, the branch of computer science concerned with developing language-based A.I., say the technology might soon spark “revolutionary societal change,” according to the report.

And while an overwhelming majority of researchers say the future net impact of A.I. and natural language processing will be positive, concerns remain that the technology could soon develop potentially dangerous capabilities at a time when A.I.’s traditional gatekeepers are no longer as powerful as they once were.

“As the technical barrier to entry for creating and deploying generative A.I. systems has lowered dramatically, the ethical issues around A.I. have become more apparent to the general public. Startups and large companies find themselves in a race to deploy and release generative models, and the technology is no longer controlled by a small group of actors,” the report said.  READ MORE...

Saturday, May 28

Alien Invasion



Image Credit: Rick_Jo/Getty



An alien species is headed for planet Earth and we have no reason to believe it will be friendly. Some experts predict it will get here within 30 years, while others insist it will arrive far sooner. Nobody knows what it will look like, but it will share two key traits with us humans – it will be intelligent and self-aware.

No, this alien will not come from a distant planet – it will be born right here on Earth, hatched in a research lab at a major university or large corporation. I am referring to the first artificial general intelligence (AGI) that reaches (or exceeds) human-level cognition.

As I write these words, billions are being spent to bring this alien to life, as it would be viewed as one of the greatest technological achievements in human history. But unlike our other inventions, this one will have a mind of its own, literally. And if it behaves like every other intelligent species we know, it will put its own self-interests first, working to maximize its prospects for survival.

AI in our own image
Should we fear a superior intelligence driven by its own goals, values and self-interests? Many people reject this question, believing we will build AI systems in our own image, ensuring they think, feel and behave just like we do. This is extremely unlikely to be the case.

Artificial minds will not be created by writing software with carefully crafted rules that make them think like us. Instead, engineers feed massive datasets into simple algorithms that automatically adjust their own parameters, making millions upon millions of tiny changes to their structure until an intelligence emerges – an intelligence with inner workings that are far too complex for us to comprehend.
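A minimal sketch of that kind of automatic parameter adjustment, assuming a toy one-parameter model, made-up data, and plain gradient descent in Python – purely illustrative, not the code behind any real AI system:

import random

# Toy "model": one parameter shaped by data rather than hand-written rules.
# Real systems adjust billions of parameters, but the loop is the same in spirit.
weight = random.random()
learning_rate = 0.001

# Made-up dataset: inputs x with targets following the hidden rule y = 3 * x.
data = [(x, 3 * x) for x in range(1, 11)]

for epoch in range(200):                    # many passes over the data...
    for x, target in data:                  # ...and a tiny adjustment per example
        prediction = weight * x
        error = prediction - target
        gradient = 2 * error * x            # slope of squared error w.r.t. the weight
        weight -= learning_rate * gradient  # one of thousands of tiny changes

print(f"learned weight: {weight:.4f}")      # drifts toward 3 with no rule ever written

Scaled up to billions of parameters and vast datasets, the same kind of loop yields behavior whose inner workings no one explicitly designed.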

And no – feeding it data about humans will not make it think like humans do. This is a common misconception – the false belief that by training an AI on data that describes human behaviors, we will ensure it ends up thinking, feeling and acting like we do. It will not.  READ MORE...