
Monday, April 15

Liquid Circuits and Brain Computers


Liquid circuits that mimic synapses in the brain can, for the first time, perform the kinds of logical operations that underlie modern computers, a new study finds.

Near-term applications for these devices may include tasks such as image recognition, as well as the kinds of calculations underlying most artificial intelligence systems.

Just as biological neurons both compute and store data, brain-imitating neuromorphic technology often combines the two operations. Such devices may greatly reduce the energy and time that conventional microchips waste shuttling data back and forth between processors and memory.
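For a rough sense of why that shuttling matters, here is a toy back-of-the-envelope calculation in Python. The picojoule figures are assumed ballpark values of the kind often cited for conventional chips, not measurements from the study:

```python
# Toy estimate of compute vs. data-movement energy on a conventional chip.
# The picojoule figures below are assumed, illustrative values only.
FLOP_PJ = 1.0      # energy per floating-point op, in picojoules (assumed)
DRAM_PJ = 640.0    # energy per off-chip DRAM access (assumed)

ops = 1e9          # one billion multiply-accumulates
fetches = 1e9      # worst case: one operand fetched from DRAM per op

compute_mj = ops * FLOP_PJ / 1e9        # picojoules -> millijoules
movement_mj = fetches * DRAM_PJ / 1e9
print(f"compute: {compute_mj:.0f} mJ, data movement: {movement_mj:.0f} mJ")
print(f"data movement costs {movement_mj / compute_mj:.0f}x the compute itself")
```

Under these assumptions, moving the data costs hundreds of times more energy than the arithmetic itself, which is the overhead an in-memory, neuromorphic design aims to eliminate.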

Such devices may also prove ideal for implementing neural networks, the AI systems increasingly used in applications such as analyzing medical scans and controlling autonomous vehicles.  READ MORE...

Sunday, March 20

Limits of Artificial Intelligence


Humans are usually pretty good at recognizing when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.

Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don't know when they're making mistakes. Sometimes it's even more difficult for an AI system to realize when it's making a mistake than to produce a correct result.
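One way to make that overconfidence concrete is expected calibration error (ECE), a standard metric that compares a model's stated confidence with how often it is actually right. The sketch below uses made-up predictions purely for illustration, not data from the study:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and actual accuracy.
    A calibrated model that says it is 90% sure should be right ~90% of the time."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap   # weight each bin by its share of samples
    return ece

# Hypothetical model: claims ~95% confidence but is right only ~60% of the time
rng = np.random.default_rng(1)
conf = rng.uniform(0.9, 1.0, size=1000)
right = rng.random(1000) < 0.6
print(f"ECE = {expected_calibration_error(conf, right):.2f}")  # large gap = overconfident
```

A well-calibrated model would score near zero here; the large gap is exactly the "doesn't know when it's making mistakes" failure described above.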

Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles' heel of modern AI and that a mathematical paradox shows AI's limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.
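To see what instability means in practice, consider a toy linear classifier sign(w · x). The smallest input change that flips its decision has length |w · x| / ||w||, which can be a tiny fraction of the input when the weights are large. This is a minimal illustration of the general phenomenon, not the paper's construction:

```python
import numpy as np

# Toy instability demo: for a linear classifier sign(w @ x), a minimal-norm
# step along w crosses the decision boundary. With large weights, that step
# is tiny compared with the input itself. All values are illustrative only.
rng = np.random.default_rng(0)
w = rng.normal(size=1000) * 10.0   # large-norm weights (assumed for illustration)
x = rng.normal(size=1000)

score = w @ x
delta = -(score / (w @ w)) * w * 1.01   # step 1% past the decision boundary
x_adv = x + delta

print(f"perturbation size: {np.linalg.norm(delta):.3f}")
print(f"input size       : {np.linalg.norm(x):.1f}")
print(f"label flipped    : {np.sign(score) != np.sign(w @ x_adv)}")
```

The perturbation is only a few percent of the input's size yet always flips the label; deep networks can exhibit the same sensitivity in far less obvious ways.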

The researchers propose a classification theory describing when neural networks can be trained to provide a trustworthy AI system under certain conditions. Their results are reported in the Proceedings of the National Academy of Sciences.

Deep learning, the leading AI technology for pattern recognition, has been the subject of numerous breathless headlines, with claims that it can diagnose disease more accurately than physicians or prevent road accidents through autonomous driving. However, many deep learning systems are untrustworthy and easy to fool.

"Many AI systems are unstable, and it's becoming a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis or autonomous vehicles," said co-author Professor Anders Hansen from Cambridge's Department of Applied Mathematics and Theoretical Physics. "If AI systems are used in areas where they can do real harm if they go wrong, trust in those systems has got to be the top priority."  READ MORE...

Saturday, October 23

Quantum Artificial Intelligence

A novel proof that certain quantum convolutional networks are guaranteed to be trainable clears the way for quantum artificial intelligence to aid in materials discovery and many other applications. Credit: Los Alamos National Laboratory


Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental trainability problem known as "barren plateaus" has limited the application of these neural networks to large data sets, new research overcomes that Achilles' heel with a rigorous proof that guarantees scalability.
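A barren plateau means the cost-function gradients concentrate exponentially fast as qubits are added, so training stalls. The quick NumPy experiment below illustrates the underlying concentration effect: a single-qubit observable measured on Haar-random states has variance shrinking like 1/2^n. It is a toy analogy to build intuition, not the paper's proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim):
    # Haar-random pure state: a normalized complex Gaussian vector
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

for n in range(2, 11):
    dim = 2 ** n
    # <Z on qubit 1>: +|amp|^2 when the first qubit is 0, -|amp|^2 when it is 1
    signs = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)
    samples = [signs @ np.abs(haar_state(dim)) ** 2 for _ in range(2000)]
    print(f"n={n:2d}  Var[<Z_1>] = {np.var(samples):.2e}   (1/2^n = {1 / dim:.2e})")
```

The measured variance tracks 1/2^n closely, so for deep, unstructured circuits a randomly initialized parameter sees an essentially flat landscape; the new result shows the QCNN architecture escapes this scaling.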


"The way you construct a quantum neural network can lead to a barren plateau—or not," said Marco Cerezo, co-author of the paper titled "Absence of Barren Plateaus in Quantum Convolutional Neural Networks," published today by a Los Alamos National Laboratory team in Physical Review X. Cerezo is a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos. "We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters."

As an artificial intelligence (AI) methodology, quantum convolutional neural networks are inspired by the visual cortex. As such, they involve a series of convolutional layers, or filters, interleaved with pooling layers that reduce the dimension of the data while preserving its important features.
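The pattern is easiest to see in its classical form: a filter slides across the data to extract local features, then pooling halves the dimension while keeping the strongest responses. The NumPy sketch below shows that classical analogue; the quantum version applies analogous unitary layers to qubits:

```python
import numpy as np

def conv1d(x, kernel):
    # Valid convolution: slide the filter across the signal
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def max_pool(x, size=2):
    # Keep the strongest response in each window, halving the length
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

signal = np.array([0., 0., 1., 3., 1., 0., 0., 2., 5., 2., 0., 0.])
peak_filter = np.array([-1., 2., -1.])   # responds to local peaks

features = conv1d(signal, peak_filter)
pooled = max_pool(features)
print(features.shape, "->", pooled.shape)   # (10,) -> (5,): smaller, peaks kept
print(pooled)
```

Stacking such blocks shrinks the data quickly; in the quantum setting, pooling layers discard qubits, keeping the circuit shallow.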

Quantum convolutional neural networks can be used to solve a range of problems, from image recognition to materials discovery. Overcoming barren plateaus is key to unlocking the full potential of quantum computers in AI applications and demonstrating their superiority over classical computers.  TO READ MORE, CLICK HERE...