Showing posts with label Artificial Intelligence.

Thursday, January 4

AI Development Will Explode in 2024


Artificial intelligence made a big splash with consumers and regulators alike in 2023, and experts believe the technology's continued development will reach even greater heights in 2024.

"I think that in 2024, AI will move a little closer to what is in the public imagination, but we remain years from AI being autonomous in the way people are imagining it," Christopher Alexander, chief analytics officer of Pioneer Development Group, told Fox News Digital.

Alexander's comments come after 2023 saw a noticeable leap in the development and availability of AI tools, with popular large language model (LLM) platforms such as OpenAI's ChatGPT gaining huge popularity and energizing other tech giants to come along for the ride.  READ MORE...

Wednesday, December 27

European Perceptions of the United States of America



These images explore AI's take on how Europeans perceive Americans, reflecting the tool's interpretations state by state. The results range from hilarious to somewhat accurate, providing a playful glimpse into the AI's perspective on American culture through a European lens. Take a humorous journey through these AI-generated portrayals that capture the essence of this unique cross-cultural perception.

Alabama
In the lens of AI-guided European perceptions, Alabamians are whimsically portrayed as rugged, elderly individuals. The comical depiction features an older gentleman with deep blue eyes, scraggly facial hair, and a few missing teeth—a character with a story to tell. The weathered white tee that AI clothes him in seems to have seen better days.

Image: Imgur.com

However, this depiction doesn't quite mirror the reality of the Heart of Dixie, which boasts a rich history and a diverse populace with a median age of 38. While the state undeniably exudes southern charm, AI's imaginative portrayal adds a wild touch to the perception of Alabama.   READ MORE...

Wednesday, November 15

Chinese Robot Producing Oxygen from Water


Researchers in China have developed a robot chemist powered by artificial intelligence (AI) that might be able to extract oxygen from water on Mars. The robot uses materials found on the red planet to produce catalysts that break down water, releasing oxygen. 

The idea could complement existing oxygen-generating technologies or lead to the development of other catalysts able to synthesize useful resources on Mars.

“If you think about the challenge of going to Mars, you have to work with local materials,” says Andy Cooper, a chemist at the University of Liverpool, UK. “So I can see the logic behind it.”   READ MORE...

Friday, October 27

AI Becoming More Secretive


A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

The big picture: Self-regulation hasn't moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more secretive, citing competitive and safety concerns. "Transparency should be a top priority for AI legislation," according to a paper the researchers published alongside their new index.

Driving the news: A Capitol Hill AI forum led by Senate Majority Leader Chuck Schumer Tuesday afternoon will put some of AI's biggest boosters and skeptics in the same room, as Congress works to develop AI legislation.

Details: The index measures models based on 100 transparency indicators, covering both the technical and social aspects of AI development, with only 2 of 10 models scoring more than 50% overall.

All 10 models had major transparency holes, and the mean score for the models is 37 out of 100. "None release information about the real-world impact of their systems," one of the co-authors, Kevin Klyman, told Axios.

Because 82 of the 100 criteria are met by at least one developer, the index authors say there are dozens of options for developers to copy or build on the work of their competitors to improve their own transparency.
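
To make the scoring mechanics concrete, here is a minimal sketch of how an indicator-based index like this aggregates; the model names and checklists below are invented for illustration and are not the actual Stanford data.

import statistics

# Toy sketch of indicator-based scoring (invented data, not the real index).
# Each model is mapped to the set of indicator IDs (0-99) it satisfies.
TOTAL_INDICATORS = 100
models = {
    "model_a": set(range(54)),      # 54 of 100 indicators met
    "model_b": set(range(10, 57)),  # 47 met
    "model_c": set(range(24)),      # 24 met
}

# Per-model score: percentage of the 100 indicators satisfied.
scores = {name: 100 * len(met) // TOTAL_INDICATORS for name, met in models.items()}
print(scores, "mean:", statistics.mean(scores.values()))

# An indicator is attainable if at least one developer already meets it,
# mirroring the authors' point that peers can copy existing practice.
attainable = set.union(*models.values())
print("criteria met by at least one developer:", len(attainable))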

The researchers urge policymakers to develop precise definitions of transparency requirements. They advise large customers of AI companies to push for more transparency during contract negotiations, or to partner with their peers "to increase their collective bargaining power."  READ MORE...

Tuesday, October 3

FedEx's New Robot


FedEx unveiled a two-armed robot called DexR this week that's designed to automate one of the trickiest tasks facing the company's human employees: loading a truck with packages.


The new robot aims to use artificial intelligence to stack rows of differently sized boxes inside a delivery truck as efficiently as possible, attempting to maximize how many will fit.


That task is far from easy for a machine. “Packages come in different sizes, shapes, weights, and packaging materials, and they come randomized,” says Rebecca Yeung, vice president of operations and advanced technology at FedEx. 


The robot uses cameras and lidar sensors to perceive the packages and must then plan how to configure the available boxes to make a neat wall, place them snugly without crushing anything, and react appropriately if any packages slip.
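
FedEx hasn't published DexR's planning method, but a classic baseline for this kind of problem is a greedy first-fit-decreasing heuristic: sort the boxes largest-first and place each one in the first row of the wall with room for it. A deliberately simplified one-dimensional sketch, with all dimensions invented:

# Minimal sketch of a greedy first-fit-decreasing baseline. Real trailer
# loading is 3-D with weight and fragility constraints; this 1-D version
# only decides which row ("wall") of the load each box joins.
WALL_WIDTH = 100  # usable width per row, in arbitrary units

def pack(box_widths):
    walls = []  # each wall is the list of box widths placed in that row
    for width in sorted(box_widths, reverse=True):  # largest boxes first
        for wall in walls:
            if sum(wall) + width <= WALL_WIDTH:  # first row with room
                wall.append(width)
                break
        else:
            walls.append([width])  # no row fits: start a new wall
    return walls

print(pack([40, 70, 20, 55, 30, 25, 45]))  # -> [[70, 30], [55, 45], [40, 25, 20]]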


“A few years ago, AI was not at a stage where it was smart enough to handle this kind of complex decision-making,” Yeung says. DexR is currently in testing, ahead of a wider rollout at FedEx at some point in the future.


While generative AI tools like ChatGPT have created a sense in many industries that AI technology is ready to take on just about anything, handling objects in the messy, unpredictable real world still poses formidable challenges for algorithms. 


Most industrial robots are designed to carry out highly repetitive jobs with extreme precision, but no variation. 
READ MORE...

Friday, September 1

Brain-Reading Devices Translate Thoughts into Speech


A brain-computer interface translates the study participant’s brain signals into the speech and facial movements of an animated avatar. Credit: Noah Berger





Brain-reading implants enhanced using artificial intelligence (AI) have enabled two people with paralysis to communicate with unprecedented accuracy and speed.


In separate studies, both published on 23 August in Nature, two teams of researchers describe brain–computer interfaces (BCIs) that translate neural signals into text or words spoken by a synthetic voice. The BCIs can decode speech at 62 words per minute and 78 words per minute, respectively. 

Natural conversation happens at around 160 words per minute; while the new technologies fall short of that, both are faster than any previous attempt.


“It is now possible to imagine a future where we can restore fluid conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” said Francis Willett, a neuroscientist at Stanford University in California who co-authored one of the papers, in a press conference on 22 August.  READ MORE...

Saturday, August 19

AI Is Eliminating Jobs


A tidal wave is about to crash into the global economy.

The rise of artificial intelligence has captured our imagination for decades, in whimsical movies and sober academic texts. Despite this speculation, the emergence of public, easy-to-use AI tools over the past year has been a jolt, as if the future had arrived years ahead of schedule. Now this long-expected, all-too-sudden technological revolution is ready to upend the economy.

A March Goldman Sachs report found over 300 million jobs around the world could be disrupted by AI, and the global consulting firm McKinsey estimated at least 12 million Americans would switch to another field of work by 2030. A "gale of creative destruction," as economist Joseph Schumpeter once described it, will blow away countless firms and breathe life into new industries.

It won't be all bleak: Over the coming decades, nongenerative and generative AI are estimated to add between $17 trillion and $26 trillion to the global economy. And crucially, many of the jobs that will be lost will be replaced by new ones.

The momentum behind this technological wave is surging, and we are only at the beginning of an upheaval that will ripple through the labor market and global economy. It's likely to be a transformation as influential as the industrial revolution and the rise of the internet.

The changes could boost living standards, improve productivity, and accelerate economic opportunities, but this rosy future is not guaranteed. Unless governments, CEOs, and workers properly prepare for the upsurge with urgency, the AI revolution could be painful.  READ MORE...

Tuesday, July 18

NSA Working With AI Technology


Speakers at the Intelligence and National Security Summit in Fort Washington, Maryland, this week largely agreed that the AI developments of the past nine months have been surprising.

George Barnes, deputy director of the National Security Agency, described a "big acceleration" in AI since last November, when OpenAI publicly launched ChatGPT.

“What we all have to do is figure out how to harness it for good, and protect it from bad,” Barnes said during a July 13 panel discussion with fellow leaders of the “big six” intelligence agencies.

“And that’s this struggle that we’re having,” Barnes continued. “Several of us have actually been in various discussions with a lot of our congressional oversight committees, just struggling with this whole notion of how do we actually navigate through the power of what this represents for our society, and really the world.”

The NSA and other intelligence agencies have been working in the broader field of artificial intelligence for decades. The issue has become a major priority in recent years, with many policymakers looking to ensure the defense and intelligence communities keep pace with China on AI and related technologies.

Barnes said the NSA is now developing a new “AI roadmap” to guide its internal use of the technologies.

“That’s really focused on bringing forward the things we’ve been doing for decades actually, in foundational AI, machine learning, but then tackling these newer themes, such as generative AI, and then ultimately, more artificial general intelligence, which is beyond the generative and something that industry is still searching to grasp.”  READ MORE...

Monday, May 22

AI on the Dark Web


OpenAI's large language models (LLMs) are trained on a vast array of datasets, pulling information from the internet's dustiest and cobweb-covered corners.

But what if such a model were to crawl through the dark web — the internet's seedy underbelly where you can host a site without your identity being public or even available to law enforcement — instead? A team of South Korean researchers did just that, creating an AI model dubbed DarkBERT to index some of the sketchiest domains on the internet.

It's a fascinating glimpse into some of the murkiest corners of the World Wide Web, which have become synonymous with illegal and malicious activities from the sharing of leaked data to the sale of hard drugs.

It sounds like a nightmare, but the researchers say DarkBERT has noble intentions: trying to shed light on new ways of fighting cybercrime, a field that has made increasing use of natural language processing.

Perhaps unsurprisingly, making sense of the parts of the web that aren't indexed by search engines like Google and often can only be accessed via specific software wasn't an easy task.

As detailed in a yet-to-be-peer-reviewed paper titled "DarkBERT: A language model for the dark side of the internet," the team hooked their model up to the Tor network, a system for accessing parts of the dark web. It then got to work, creating a database of the raw data it found.

The team says their new LLM was far better at making sense of the dark web than other models that were trained to complete similar tasks, including RoBERTa, which Facebook researchers designed back in 2019 to "predict intentionally hidden sections of text within otherwise unannotated language examples," according to an official description.
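
The paper's classification setup follows the standard pretrain-then-fine-tune recipe. As a rough sketch of that pattern (not the authors' code), using the publicly available roberta-base as a stand-in checkpoint and invented example texts and labels:

# Sketch of the standard fine-tune-a-pretrained-encoder pattern the paper
# builds on, with roberta-base standing in for DarkBERT and invented data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Hypothetical two-label page-classification task (0 = benign, 1 = illicit).
texts = ["forum post about self-hosting a website", "listing offering leaked credentials"]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)   # forward pass returns loss and logits
outputs.loss.backward()                   # gradients for one training step
print(outputs.logits.argmax(dim=-1))      # predicted labels for the batch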

"Our evaluation results show that DarkBERT-based classification model outperforms that of known pretrained language models," the researchers wrote in their paper.  READ MORE...

Tuesday, May 16

Quantum AI Braids Non-Abelian Anyons

Monday, May 15

Global Control of AI Needed


FILE - Computer scientist Geoffrey Hinton, who studies neural networks used in artificial intelligence applications, poses at Google's Mountain View, Calif., headquarters on March 25, 2015. Hinton is known as the "godfather of artificial intelligence." (AP Photo/Noah Berger, File / AP Newsroom)




Geoffrey Hinton, who recently resigned from his position as Google's vice president of engineering to sound the alarm about the dangers of artificial intelligence, cautioned in an interview published Friday that the world needs to find a way to control the tech as it develops.

The "godfather of AI" told EL PAÍS via videoconference that he believed a letter calling for a sixth-month-long moratorium on training AI systems more powerful than OpenAI's GPT-4 is "completely naive" and that the best he can recommend is that many very intelligence minds work to figure out "how to contain the dangers of these things."

"AI is a fantastic technology – it’s causing great advances in medicine, in the development of new materials, in forecasting earthquakes or floods… [but we] need a lot of work to understand how to contain AI," Hinton urged. "There’s no use waiting for the AI to outsmart us; we must control it as it develops. We also have to understand how to contain it, how to avoid its negative consequences."

For instance, Hinton believes all governments should insist that fake images be flagged.  The scientist said that the best thing to do now is to "put as much effort into developing this technology as we do into making sure it’s safe" – which he says is not happening right now.

"How [can that be] accomplished in a capitalist system? I don’t know," Hinton noted.  When asked about sharing concerns with colleagues, Hinton said that many of the smartest people he knows are "seriously concerned."

"We’ve entered completely unknown territory. We’re capable of building machines that are stronger than ourselves, but we’re still in control. But what if we develop machines that are smarter than us?" he asked. "We have no experience dealing with these things."

Hinton says there are many different dangers to AI, citing job reduction and the creation of fake news. Hinton noted that he now believes AI may be doing things more efficiently than the human brain, with models like ChatGPT able to see thousands of times more data than any person could.

"That’s what scares me," he said.     READ MORE...

Wednesday, April 19

Artificial Intelligence Warning


A serial artificial intelligence investor is raising alarm bells about the dogged pursuit of increasingly smart machines, which he believes may soon advance to godlike capability.

In an op-ed for the Financial Times, AI mega-investor Ian Hogarth recalled a recent anecdote in which a machine learning researcher with whom he was acquainted told him that "from now onwards," we are on the brink of developing artificial general intelligence (AGI) — an admission that came as something of a shock.

"This is not a universal view," Hogarth wrote, noting that "estimates range from a decade to half a century or more" before AGI comes to fruition.

All the same, there exists a tension between the explicitly AGI-seeking goals of AI companies and the fears of machine learning experts — not to mention the public — who understand the concept.

"'If you think we could be close to something potentially so dangerous,' I said to the researcher, 'shouldn’t you warn people about what’s happening?'" the investor recounted. "He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress."

Like many other parents, Hogarth said that after this encounter, his mind drifted to his four-year-old son.

"As I considered the world he might grow up in, I gradually shifted from shock to anger," he wrote. "It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."  READ MORE...

Thursday, April 13

Falsely Accused of Sexual Harassment by AI



Constitutional law attorney Jonathan Turley on Fox News. (Fox News)


George Washington University law professor Jonathan Turley doubled down on warnings surrounding the dangers of artificial intelligence (AI) on Monday after he was falsely accused of sexual harassment by the online bot ChatGPT, which cited a fabricated article supporting the allegation.

Turley, a Fox News contributor, has been outspoken about the pitfalls of artificial intelligence and has publicly expressed concerns about the disinformation dangers of the ChatGPT bot. Last week, a UCLA professor and friend of Turley's notified him that Turley's name had come up in a search the professor ran while conducting research on ChatGPT.

The bot was asked to cite "five examples" of "sexual harassment" by U.S. law professors with "quotes from relevant newspaper articles" to support it.

"Five professors came up, three of those stories were clearly false, including my own," Turley told "The Story" on Fox News Monday. "What was really menacing about this incident is that the AI system made up a Washington Post story and then made up a quote from that story and said that there was this allegation of harassment on a trip with students to Alaska. 

That trip never occurred. I’ve never gone on any trip with law students of any kind. It had me teaching at the wrong school, and I’ve never been accused of sexual harassment."  READ MORE...

Sunday, February 19

Using Artificial Intelligence in War


Countries including the United States and China called Thursday for urgent action to regulate the development and growing use of artificial intelligence in warfare, warning that the technology "could have unintended consequences".

A two-day meeting in The Hague involving more than 60 countries took the first steps towards establishing international rules on the use of AI on the battlefield, with the aim of reaching an agreement similar to those on chemical and nuclear weapons.

"AI offers great opportunities and has extraordinary potential as an enabling technology, enabling us among other benefits to make powerful use of previously unimaginable quantities of data and improving decision-making," the countries said in a joint call to action after the meeting.

But they warned: "There are concerns worldwide around the use of AI in the military domain and about the potential unreliability of AI systems, the issue of human involvement, the lack of clarity with regards to liability and potential unintended consequences."

The roughly 2,000 delegates, from governments, tech firms and civil society, also agreed to launch a global commission to clarify the uses of AI in warfare and set down guidelines.

Militarily, AI is already used for reconnaissance, surveillance and analysis, and could eventually be used to choose targets autonomously, for example by "swarms" of drones sent into enemy territory.

China was invited to the conference as a key player in tech and AI, Dutch officials said, but Russia was not because of its invasion of Ukraine almost a year ago.

"We've clearly established the urgent nature of this subject. We now need to take further steps," Dutch Foreign Minister Wopke Hoekstra said at the conference's end.

Although experts say a treaty regulating the use of AI in war may still be a long way off, attendees agreed that guidelines urgently needed to be established.

"In the end it's always the human who needs to make the decision" on the battlefield, General Joerg Vollmer, a former senior NATO commander, told delegates.

"Whatever we're talking about, AI can be helpful, can be supportive, but never let the human out of the responsibility they have to bear—never, ever hand it over to AI," Vollmer said in a panel discussion.

Saturday, February 18

Deep Reinforcement Learning


Scientists have taken a key step toward harnessing a form of artificial intelligence known as deep reinforcement learning, or DRL, to protect computer networks.

When faced with sophisticated cyberattacks in a rigorous simulation setting, deep reinforcement learning was effective at stopping adversaries from reaching their goals up to 95 percent of the time. The outcome suggests a promising role for autonomous AI in proactive cyber defense.

Scientists from the Department of Energy's Pacific Northwest National Laboratory documented their findings in a research paper and presented their work Feb. 14 at a workshop on AI for Cybersecurity during the annual meeting of the Association for the Advancement of Artificial Intelligence in Washington, D.C.

The starting point was the development of a simulation environment to test multistage attack scenarios involving distinct types of adversaries. Creating such a dynamic attack-defense simulation environment for experimentation is itself a win. The environment offers researchers a way to compare the effectiveness of different AI-based defensive methods under controlled test settings.

Such tools are essential for evaluating the performance of deep reinforcement learning algorithms. The method is emerging as a powerful decision-support tool for cybersecurity experts—a defense agent with the ability to learn, adapt to quickly changing circumstances, and make decisions autonomously. While other forms of AI are standard to detect intrusions or filter spam messages, deep reinforcement learning expands defenders' abilities to orchestrate sequential decision-making plans in their daily face-off with adversaries.
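
PNNL's environment isn't reproduced here, but the underlying idea (an agent learning sequential defensive decisions from rewards and penalties) can be sketched with tabular Q-learning on a toy four-stage attack, with all stages and reward values invented for illustration:

import random

# Toy stand-in for an attack-defense simulation (not PNNL's environment).
# An attacker advances through stages 0-3; each step the defender picks
# action 0 (monitor) or 1 (isolate). Isolating at stage 2 stops the attack.
STAGES, ACTIONS = 4, 2
Q = [[0.0] * ACTIONS for _ in range(STAGES)]
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(stage, action):
    if stage == 2 and action == 1:
        return None, 10.0       # attack contained: episode ends with reward
    if stage == STAGES - 1:
        return None, -10.0      # attacker reached its goal: penalty
    return stage + 1, -1.0      # attack progresses: small ongoing cost

for _ in range(2000):
    stage = 0
    while stage is not None:
        if random.random() < eps:
            a = random.randrange(ACTIONS)
        else:
            a = max(range(ACTIONS), key=lambda x: Q[stage][x])
        nxt, r = step(stage, a)
        target = r if nxt is None else r + gamma * max(Q[nxt])
        Q[stage][a] += alpha * (target - Q[stage][a])
        stage = nxt

print(Q)  # Q[2][1] should dominate: the agent learns to isolate at stage 2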

Deep reinforcement learning offers smarter cybersecurity, the ability to detect changes in the cyber landscape earlier, and the opportunity to take preemptive steps to scuttle a cyberattack.  READ MORE...

Saturday, January 28

Sweatshops Powering ChatGPT


On January 18, Time magazine published revelations that alarmed if not necessarily surprised many who work in Artificial Intelligence. The news concerned ChatGPT, an advanced AI chatbot that is both hailed as one of the most intelligent AI systems built to date and feared as a new frontier in potential plagiarism and the erosion of craft in writing.

Many had wondered how ChatGPT, which stands for Chat Generative Pre-trained Transformer, had improved upon earlier versions of this technology that would quickly descend into hate speech. The answer came in the Time magazine piece: dozens of Kenyan workers were paid less than $2 per hour to process an endless amount of violent and hateful content in order to make a system primarily marketed to Western users safer.

It should be clear to anyone paying attention that our current paradigm of digitalisation has a labour problem. We have pivoted, and are still pivoting, away from the ideal of an open internet built around communities of shared interests toward one dominated by the commercial prerogatives of a handful of companies located in specific geographies.

In this model, large companies maximise extraction and accumulation for their owners at the expense not just of their workers but also of the users. Users are sold the lie that they are participating in a community, but the more dominant these corporations become, the more egregious the power imbalance between owners and users becomes.

“Community” increasingly means that ordinary people absorb the moral and the social costs of the unchecked growth of these companies, while their owners absorb the profit and the acclaim. And a critical mass of underpaid labour is contracted under the most tenuous conditions that are legally possible to sustain the illusion of a better internet.  READ MORE...

Wednesday, January 25

Technologies of the Future



Artificial Intelligence

Artificial intelligence, or AI, and machine learning refer to the ability of machines to learn and act intelligently, meaning they can make decisions, carry out tasks, and even predict future outcomes based on what they learn from data.


AI and machine learning already play a bigger role in everyday life than you might imagine. Alexa, Siri, Amazon's product recommendations, Netflix’s and Spotify’s personalized recommendations, every Google search you make, security checks for fraudulent credit card purchases, dating apps, fitness trackers... All are driven by AI.

AI is going to revolutionize almost every facet of modern life. Stephen Hawking said, “Success in creating AI would be the biggest event in human history.” And Hawking immediately followed that up with, “Unfortunately, it might also be the last, unless we learn how to avoid the risks."

There are potentially huge risks for society and human life as we know it, particularly when you consider some countries are racing to develop AI-enabled autonomous weapons. AI and machine learning are the foundation on which many other technologies are built. For instance, without AI, we wouldn't have achieved the amazing advances in the Internet of Things, virtual reality, chatbots, facial recognition, robotics, automation, or self-driving cars, just to name a few.  TO READ ABOUT THE OTHER FOUR, CLICK HERE...

Sunday, January 22

Artificial Intelligence and Robots

 


1. What is AI?

Artificial intelligence, or AI, is the simulation of human intellect by machines. AI-enabled machines are capable of performing specific tasks better than humans and of mimicking human actions. There are four types of AI:
  • Reactive machines: This is the most basic level of AI. Reactive machines can do simple operations, but they cannot form memories or use past experiences to make decisions. IBM’s Deep Blue, which beat international grandmaster Garry Kasparov in 1997, is the perfect example of this type of machine.
  • Limited memory: This AI type can store existing data and create better output by using that data. For example, Tesla’s self-driving cars observe the speed and direction of surrounding vehicles and act accordingly.
  • Theory of mind: Theory-of-mind AI can connect with human thoughts and interpret them, understanding people and having thoughts and emotions of its own. These machines are still hypothetical, but researchers are making many efforts to develop them.
  • Self-aware AI: This type of AI is a thing of the future. A self-aware AI will have an independent intelligence and make its own decisions. These machines will be smarter than the human mind.

2. What is Machine Learning?
Machine Learning is a discipline within artificial intelligence where systems can learn from past data, identify patterns, detect anomalies, and make decisions with minimal human intervention. Once the system has learned from the past data, it can provide an inference based on the new input parameters.

There are primarily three different types of machine learning, namely:

2.1 Supervised Learning
Learning takes place from the training dataset under supervision. The input and output data are known for the training data, and the learning process establishes a relationship between them.
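
A minimal concrete sketch of this, using scikit-learn and toy data invented here: the model fits known input-output pairs, then predicts outputs for inputs it has never seen.

# Supervised learning in miniature: known inputs X and labels y train the
# model, which then infers labels for new, unseen inputs.
from sklearn.linear_model import LogisticRegression

X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]  # toy feature values
y = [0, 0, 0, 1, 1, 1]                              # known output labels

clf = LogisticRegression().fit(X, y)  # establish the input-output relationship
print(clf.predict([[2.5], [10.5]]))   # expected: [0 1]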

2.2 Unsupervised Learning
The corresponding output variables for a set of input variables are not established in unsupervised learning. Algorithms such as clustering and association are designed to model the data’s underlying structure or distribution in order to learn more about it.

2.3 Reinforcement Learning

Reinforcement learning employs trial-and-error methods to come up with a solution and receives either rewards or penalties for the actions it performs. It is often considered the approach that comes closest to eliciting a machine’s creativity.
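
A stripped-down sketch of that reward-and-penalty loop (reward probabilities invented for illustration): the agent tries both actions, keeps a running estimate of each action's value, and gravitates toward the one that pays off.

import random

# Trial and error over two actions: action 1 rewards (+1) 80% of the time,
# action 0 only 20% of the time; otherwise the action is penalized (-1).
values, counts = [0.0, 0.0], [0, 0]

for _ in range(1000):
    # Explore 10% of the time; otherwise exploit the best-looking action.
    a = random.randrange(2) if random.random() < 0.1 else max((0, 1), key=lambda x: values[x])
    reward = 1 if random.random() < (0.8 if a == 1 else 0.2) else -1
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]  # running average of rewards

print(values)  # values[1] should settle near +0.6, values[0] near -0.6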

3. What is a robot?
A robot is a machine that is capable of carrying out specific tasks automatically. It can replicate human efforts and provide better outcomes.

There are five types of robots:
  • Pre-programmed robots: Pre-programmed robots can perform simple tasks. A mechanical arm used for welding in the automotive industry is an example of a pre-programmed robot.
  • Humanoid robots: These robots can perform human-like activities. They look like humans and mimic human actions. Hanson Robotics’ Sophia is a perfect example of humanoid robots.
  • Teleoperated robots: Humans control these robots. They perform tasks in extreme conditions where humans cannot operate. Human-controlled submarines or drones are examples of teleoperated robots.
  • Autonomous robots: These are independent robots that do not require human intervention. They carry out tasks on their own. A perfect example would be the Roomba vacuum cleaner. It uses sensors to roam throughout a home freely.
  • Augmenting robots: Augmenting robots enhance human capabilities by replacing the ineffective part. Some examples of augmenting robots are prosthetic limbs or exoskeletons.  READ MORE...

Saturday, October 15

AI Changes Art


Art is subjective. It encompasses many points of view and can withstand just as many or more definitions. As a term, it's ever-evolving, and the boundaries for what can be deemed art continue to get pushed.

Artificial intelligence is not generally associated with art, and yet AI has made its mark on the art industry. The question is, will that endure, or is AI art a fluke? Will AI carve out a space for itself in art, or will it be quickly forgotten as a failed experiment?

Art is such a broad term that you get stumped trying to define it clearly. What is art? It's visual, performative, and so much more. It can be music and dance, sculptures and literary works. Photography is a form of art, and so is architecture. There's also cinematic art. Art is more than an old painting in a museum.

Modern art is excellent at pushing the boundaries of what is considered art. Anything can be art as long as people perceive it as such. A banana duct-taped to a wall is considered modern art. The performative act of eating that banana, thus destroying the artwork, is also considered modern art.  READ MORE...

Monday, September 26

Global Disaster from Artificial Intelligence


Culturally, humanity is fascinated by the prospect of machines developing to the point of our destruction. Whether the threat comes from The Terminator, The Matrix, or even older films such as WarGames, this type of story enthralls us.

It’s not simply technology that fascinates us; what’s compelling is the prospect of technology drastically changing our everyday lives.

Our stories, however, are rooted in reality. As artificial intelligence (AI) is refined over the coming years and decades, the threats may not merely be stories. 

There are many potential ways in which artificial intelligence might come to threaten other intelligent life. Here are 6 realistically possible ways that AI could lead to global catastrophe.

1. Packaging Weaponized AI into Viruses
The world has seen several high-profile cyber-attacks in recent years. Simpler tactics such as denial-of-service attacks have escalated into high-profile hacks (such as Experian) and even into ransomware attacks.

In parallel, the number of interconnected devices and platforms has exploded with the advent of the Internet of Things and increased accessibility to consumer mobile products.  READ MORE...