
Wednesday, May 8

Google Lays Off Hundreds of Employees


Sundar Pichai, chief executive officer of Alphabet Inc., during Stanford’s 2024 Business, Government, and Society forum in Stanford, California, US, on Wednesday, April 3, 2024. Justin Sullivan | Getty Images




Just ahead of its blowout first-quarter earnings report on April 25, Google laid off at least 200 employees from its “Core” teams, in a reorganization that will include moving some roles to India and Mexico, CNBC has learned.

The Core unit is responsible for building the technical foundation behind the company’s flagship products and for protecting users’ online safety, according to Google’s website. Core teams include key technical units from information technology, its Python developer team, technical infrastructure, security foundation, app platforms, core developers, and various engineering roles.


At least 50 of the positions eliminated were in engineering at the company’s offices in Sunnyvale, California, filings show. Many Core teams will hire corresponding roles in Mexico and India, according to internal documents viewed by CNBC.    READ MORE...

Friday, March 15

Gemini's Historically Inaccurate AI Images


Following controversy over historically inaccurate images, Google’s generative AI tool is under fire again, this time from the company’s cofounder.

Sergey Brin, Google’s cofounder and former president of Google parent Alphabet, said Google “definitely messed up on the image generation,” and that he thinks “it was mostly due to not thorough testing.”

“[I]t definitely, for good reasons, upset a lot of people,” Brin said at San Francisco’s AGI House. He added that Google doesn’t know why Gemini “leans left in many cases,” but that it isn’t intentional, and other large language models could make similar errors.

“If you deeply test any text model out there, whether it’s ours, ChatGPT, Grok, what have you, it’ll say some pretty weird things that are out there that you know definitely feel far left, for example,” Brin said. He also said he “kind of came out of retirement just because the trajectory of AI is so exciting.”   READ MORE...

Friday, December 8

Gemini Unveiled


Google this morning announced the rollout of Gemini, its largest and most capable large language model to date. Starting today, the company’s Bard chatbot will be powered by a version of Gemini, and will be available in English in more than 170 countries and territories. Developers and enterprise customers will get access to Gemini via API next week, with a more advanced version set to become available next year.

How good is Gemini? Google says the performance of its most capable model “exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in LLM research and development.” Gemini also scored 90.0% on a test known as “Massive Multitask Language Understanding,” or MMLU, which assesses capabilities across 57 subjects including math, physics, history and medicine. It is the first LLM to perform better than human experts on the test, Google said.    READ MORE...

Tuesday, December 5

In The News


Gravitational waves from the aftereffects of the most powerful merger of two black holes observed to date have been detected by researchers; the "ringing" effect comes from the new black hole assuming a spherical shape (More) | General relativity 101 (More, w/video)





Google delays launch of Gemini, a large language model expected to compete with OpenAI's GPT-4, until January; reports say the model has trouble with some non-English prompts (More)





Ancient redwood trees can recover from severe fire damage by tapping long-buried buds, which have lain dormant under their bark for centuries (More)

 

Friday, November 17

Google's DeepMind AI


Artificial-intelligence (AI) firm Google DeepMind has turned its hand to the intensive science of weather forecasting — and developed a machine-learning model that outperforms the best conventional tools as well as other AI approaches at the task.

The model, called GraphCast, can run on a desktop computer and makes more accurate predictions than conventional models in minutes rather than hours.

“GraphCast currently is leading the race amongst the AI models,” says computer scientist Aditya Grover at the University of California, Los Angeles. The model is described in Science on 14 November.  READ MORE...

Sunday, October 29

Wiping Out Humanity

Google is placing a huge bet on an artificial intelligence start-up whose owners have admitted it could wipe out humanity.

The tech giant's parent company, Alphabet, has reportedly committed $2 billion in funding to Anthropic, a startup that develops AI systems.

Anthropic is seen as one of the biggest rivals to OpenAI, the company behind the hugely popular ChatGPT that took the world by storm this past year - leaving Google's Bard in the dust.

Anthropic's CEO and co-founder, Dario Amodei, said earlier this week that AI has a '10 to 25 percent' chance of destroying humanity.

The report claims that an upfront $500 million has already been invested in the startup, with the rest to be allocated over time.

The whopping investment comes just one month after Amazon invested $4 billion in Anthropic, The Wall Street Journal reports.   READ MORE...

Monday, August 28

Vibrations Prevent Quantum Computing Losses


Michigan State University researchers have discovered how to utilize vibrations, usually an obstacle in quantum computing, as a tool to stabilize quantum states. Their research provides insights into controlling environmental factors in quantum systems and has implications for the advancement of quantum technology.




When quantum systems, such as those used in quantum computers, operate in the real world, they can lose information to mechanical vibrations.

New research led by Michigan State University, however, shows that a better understanding of the coupling between the quantum system and these vibrations can be used to mitigate loss.

The research, published in the journal Nature Communications, could help improve the design of quantum computers that companies such as IBM and Google are currently developing.

The Challenge of Isolation in Quantum Computing

Nothing exists in a vacuum, but physicists often wish this weren’t the case. Because if the systems that scientists study could be completely isolated from the outside world, things would be a lot easier.

Take quantum computing. It’s a field that’s already drawing billions of dollars in support from tech investors and industry heavyweights including IBM, Google, and Microsoft. But if the tiniest vibrations creep in from the outside world, they can cause a quantum system to lose information.

For instance, even light can cause information leaks if it has enough energy to jiggle the atoms within a quantum processor chip.

The Problem of Vibrations
“Everyone is really excited about building quantum computers to answer really hard and important questions,” said Joe Kitzman, a doctoral student at Michigan State University. “But vibrational excitations can really mess up a quantum processor.”

However, with new research published in the journal Nature Communications, Kitzman and his colleagues are showing that these vibrations need not be a hindrance. In fact, they could benefit quantum technology.     READ MORE...

Wednesday, May 31

The Pixel Watch


When we reviewed the Pixel Watch back in October, we genuinely liked Google’s first Android smartwatch and found it to be a solid choice for those not already deep into Samsung’s ecosystem.

One thing we weren’t so keen on was the Pixel Watch’s disappointing battery life, but if leaked details turn out to be correct, that’s a problem Google plans to fix with the Pixel Watch 2.

As with the original Pixel Watch, Google is expected to announce its second Wear OS-based smartwatch sometime later this year closer to the holiday shopping season. 

Details have been sparse, but according to unnamed sources who recently spoke to 9to5Google, we now potentially know a little bit more about how Google plans to address the shortcomings of the original with the Pixel Watch 2.

Goodbye Samsung Exynos, Hello Qualcomm Snapdragon
The original Pixel Watch arrived with a Samsung Exynos 9110 chipset under the hood, which was already at least three years old at the time, having shown up in Samsung’s own smart wearables a few years prior. 

The Pixel Watch’s interface never felt laggy or like its processor was being over-taxed, but for the Pixel Watch 2, Google is apparently ready to trade up to the latest W5-series Qualcomm Snapdragon chipset, which doubles the number of A53 cores (from two to four) and is built on a 4nm process that should deliver other benefits besides raw processing power...   READ MORE...

Tuesday, May 16

Quantum AI Braids Non-Abelian Anyons

Thursday, May 11

AI Needs to be REGULATED


For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other critical information. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud.

Those concerns led to the passage of measures in the United States and Europe guaranteeing internet users some level of control over their personal data and images—most notably, the European Union’s 2018 General Data Protection Regulation (GDPR). 

Of course, those measures didn’t end the debate around companies’ use of personal data. Some argue that curbing it will hamper the economic performance of Europe and the United States relative to less restrictive countries, notably China, whose digital giants have thrived with the help of ready, lightly regulated access to personal information of all sorts. (Recently, however, the Chinese government has started to limit the digital firms’ freedom—as demonstrated by the large fines imposed on Alibaba.) 

Others point out that there’s plenty of evidence that tighter regulation has put smaller European companies at a considerable disadvantage to deeper-pocketed U.S. rivals such as Google and Amazon.

But the debate is entering a new phase. As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how data is used by the software—particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan. 

The EU, which is again leading the way (in its 2020 white paper “On Artificial Intelligence—A European Approach to Excellence and Trust” and its 2021 proposal for an AI legal framework), considers regulation to be essential to the development of AI tools that consumers can trust.  READ MORE...

Tuesday, April 4

Pausing ChatGPT


An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks they may pose can be properly studied.

It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype cofounder Jaan Tallinn, and Twitter CEO Elon Musk.

The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.

Microsoft and Google did not respond to requests for comment on the letter. The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.  READ MORE...

Friday, March 17

Google Starlink Partnership


Rumors that Elon Musk has bought Google persist, and are no more true now than they were a month ago, but it is certainly worth keeping up to date with SpaceX and Starlink’s partnership with the company’s cloud computing services.

In 2015, Google and Fidelity together invested $1 billion in SpaceX, meaning the partnership Elon Musk enjoys with the search engine and cloud computing giant has history.

The investment meant that Google and Fidelity together owned just under 10% of SpaceX.

“It’s no surprise that Alphabet is interested in space,” The Motley Fool wrote shortly afterwards. But what is at the root of Google’s partnership with Elon Musk’s SpaceX and Starlink companies, and what could it mean for the future of the Internet?

WHAT DOES THE PARTNERSHIP LINKING ELON MUSK, SPACEX, STARLINK AND GOOGLE ACTUALLY MEAN?

In a nutshell, one of the key ideas behind the partnership is to affordably bring a fast and secure Internet connection to that part of the world’s population that can’t currently get online. That’s roughly 2.9 billion. When running, data will travel from Google cloud services to Starlink satellites and then to end users, bypassing the need for expensive cell towers and dramatically increasing coverage.

2.9 billion is a big number and represents a lot of people. The UN’s ICT arm published a report in late 2021 claiming that more than a third of the world’s population have “still never used the Internet.”

In May 2021, Google said it had signed a partnership deal with Elon Musk’s SpaceX that would enable it to use the space company’s growing network of satellites, known as Starlink.

The deal will allow Starlink customers to use Google’s cloud computing capabilities while enabling Google to use Starlink’s fast Internet speeds for its cloud customers.  READ MORE..

Wednesday, December 7

Tech Companies Must Pay for News in Australia


CANBERRA, Australia (AP) — Australia’s law forcing Google and Facebook to pay for news is ready to take effect, though the law’s architect said it will take time for the digital giants to strike media deals.

The Parliament on Thursday passed the final amendments to the so-called News Media Bargaining Code agreed between Treasurer Josh Frydenberg and Facebook chief executive Mark Zuckerberg on Tuesday.

In return for the changes, Facebook agreed to lift a ban on Australians accessing and sharing news.

Rod Sims, the competition regulator who drafted the code, said he was happy that the amended legislation would address the market imbalance between Australian news publishers and the two gateways to the internet.

“All signs are good,” Sims said.

“The purpose of the code is to address the market power that clearly Google and Facebook have. Google and Facebook need media, but they don’t need any particular media company, and that meant media companies couldn’t do commercial deals,” the Australian Competition and Consumer Commission chair added.

The rest of the law had passed in Parliament earlier, so it can now be implemented.

Google has already struck deals with major Australian news businesses in recent weeks including News Corp. and Seven West Media.

Frydenberg said he was pleased to see progress by Google and more recently Facebook in reaching commercial deals with Australian news businesses.  READ MORE...

Saturday, March 5

Google to Pay French Publishers


Google has signed a new agreement to pay French publishers for the right to display their news content online.  The deal means that media organisations in France will be fairly remunerated when their news articles appear on the search engine's results pages.

A new agreement was unveiled on Thursday by Google and the Alliance for the General Information Press (Apig), which brings together nearly 300 national, regional and local news groups.  The deal replaces a previous agreement that was announced last January.

Facebook reached a similar agreement with Apig in October to pay French publishers over the copyright of their content.

The dispute over so-called "neighbouring rights" has soured relations between French news organisations and the US tech giant for more than two years.  Google and Facebook have long argued against the principle, stating that French publishers are already exposed on their platforms and promoted to customers.

But in 2019, a European Union Directive entrenched "neighbouring rights" into law, a move that France swiftly adopted.  Although Google and Apig reached an agreement last year, the US company was fined €500 million for not having negotiated with French publishers "in good faith".

A French watchdog had asked Google to resume negotiations and propose a new compensation offer.

Google and Apig said in a joint statement that the new agreement on "neighbouring rights" was a "historic step". The exact amount of compensation offered to French news organisations has not been made public.

Google also hopes to sign a similar deal with another French media group, SEPM, in the future.

Friday, February 18

Artificial Intelligence as a Service


Artificial intelligence as a service refers to off-the-shelf AI tools that enable companies to implement and scale AI techniques at a fraction of the cost of a full, in-house AI.

The concept of everything as a service refers to any software that can be called upon across a network because it relies on cloud computing. In most cases, the software is available off the shelf. You buy it from a third-party vendor, make a few tweaks, and begin using it nearly immediately, even if it hasn’t been totally customized to your system.

For a long time, artificial intelligence was cost-prohibitive to most companies:
  • The machines were massive and expensive.
  • The programmers who worked on such machines were in short supply (which meant they commanded high salaries).
  • Many companies didn’t have sufficient data to study.

As cloud services have become far more accessible, AI has become more accessible too: companies can now gather and store vast amounts of data. This is where AI-as-a-service comes in.
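To make the "off the shelf" idea concrete, here is a minimal sketch of what consuming an AI-as-a-service offering typically looks like: the model runs on the vendor's cloud, and your application simply sends a request over the network. The endpoint URL, API key, and classify_sentiment helper below are hypothetical placeholders for illustration, not any specific vendor's API.

# Minimal sketch of calling a hypothetical AI-as-a-service endpoint.
# The URL, key, and response format are assumptions, not a real vendor API.
import os
import requests

API_URL = "https://api.example-ai-vendor.com/v1/sentiment"   # hypothetical endpoint
API_KEY = os.environ.get("AIAAS_API_KEY", "demo-key")        # vendor-issued credential

def classify_sentiment(text: str) -> dict:
    """Send text to the vendor's hosted model and return its JSON verdict."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(classify_sentiment("The new release exceeded our expectations."))

The point of the pattern is that the vendor trains, hosts, and updates the model; the customer's integration work is reduced to authentication, a request, and handling the response.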

Now, let’s detour into AI so that we have the right expectations when engaging with AIaaS.
Understanding AI

We hear it repeated over and over: artificial intelligence is a way to get machines to do the same kind of work that human brains can accomplish. This definition is the subject of significant debate, with technology experts arguing that comparing machines to human brains is the wrong paradigm to use. It may promote fear that humans can be taken over by machines.

The term AI can also be used as a marketing tactic for companies to show how innovative they are—something known as artificial AI or fake AI.

Before we start worrying about the technological singularity, we need to understand what AI actually is.

“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for… Intelligence is not skill itself, it’s not what you can do, it’s how well and how efficiently you can learn new things,” says François Chollet, an AI researcher at Google and the creator of Keras.

Wednesday, January 26

Nuclear Quantum Computing


A trio of separate research teams from three different continents published individual papers indicating similar quantum computing breakthroughs yesterday. All three were funded in part by the US Army and each paper appears to be a slam dunk for the future of quantum computing.

But only one of them heralds the onset of the age of nuclear quantum computers.

Maybe it’s the whole concept of entanglement, but for a long time it’s felt like we were suspended in a state where functional quantum machines were both “right around the corner” and “decades or more away.”

But the past few years have seen a more rapid advancement toward functional quantum systems than most technologists could have imagined in their wildest dreams.

The likes of IBM, Microsoft, D-Wave, and Google putting hybrid quantum systems on the cloud, coupled with Google’s amazing time crystal breakthrough, have made 2018-2021 the opening years of what promises to be a golden age for quantum computing.

Despite this amazing progress, there are still holdouts who believe we’ll never have a truly useful, fully-functional, qubit-based quantum computing system.

The main reason given by these cynics is usually because quantum systems are incredibly error-prone.  READ MORE...

Friday, December 17

Facial Recognition




A US firm which claims to have a database of more than 10 billion facial images is facing a potential £17m fine over its handling of personal data in the UK.

The Information Commissioner's Office said it had significant concerns about Clearview AI, whose facial recognition software is used by police forces.  It has told the firm to stop processing UK personal data and delete any it has.  Clearview said the regulator's claims were "factually and legally incorrect".

The company - which has been invited to make representations - said it was considering an appeal and "further action".  It has already been found to have broken Australian privacy law but is seeking a review of that ruling.

'Google search for faces'

Clearview AI's system allows a user - for example, a police officer seeking to identify a suspect - to upload a photo of a face and find matches in a database of billions of images it has collected from the internet and social media.

The system then provides links to where matching images appeared online. The firm has promoted its service to police as resembling a "Google search for faces".

But in a statement, the UK's Information Commissioner said that Clearview's database was likely to include "a substantial number of people from the UK" whose data may have been gathered without their knowledge.

The firm's services are understood to have been trialled by a number of UK law enforcement agencies, but that was discontinued and Clearview AI does not have any UK customers.  The ICO said its "preliminary view" was that the firm appeared to have failed to comply with UK data protection laws by:

  • Failing to process the information of UK citizens fairly
  • Failing to have a process in place to stop the data being retained indefinitely
  • Failing to have a lawful reason for collecting the information
  • And failing to inform people in the UK about what is happening to their data.
The UK Information Commissioner, Elizabeth Denham, said: "I have significant concerns that personal data was processed in a way that nobody in the UK will have expected.

"UK data protection legislation does not stop the effective use of technology to fight crime. But to enjoy public trust and confidence in their products, technology providers must ensure people's legal protections are respected and complied with."

The decision is provisional and the ICO said any representations by Clearview AI will be carefully considered before a final ruling is made in the middle of next year.  READ MORE...

Saturday, September 11

Facebook Apologizes




Facebook users who watched a newspaper video featuring black men were asked if they wanted to "keep seeing videos about primates" by an artificial-intelligence recommendation system.

Facebook told BBC News it "was clearly an unacceptable error", disabled the system and launched an investigation.  "We apologise to anyone who may have seen these offensive recommendations."  It is the latest in a long-running series of errors that have raised concerns over racial bias in AI.

'Genuinely sorry'
In 2015, Google's Photos app labelled pictures of black people as "gorillas".  The company said it was "appalled and genuinely sorry", though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla".

In May, Twitter admitted racial biases in the way its "saliency algorithm" cropped previews of images.  Studies have also shown biases in the algorithms powering some facial-recognition systems.

Algorithmic error
In 2020, Facebook announced a new "inclusive product council" - and a new equity team in Instagram - that would examine, among other things, whether its algorithms exhibited racial bias.

The "primates" recommendation "was an algorithmic error on Facebook" and did not reflect the content of the video, a representative told BBC News.   READ MORE

Friday, August 6

Remote Working Employees

All across the United States, the leaders at large tech companies like Apple, Google, and Facebook are engaged in a delicate dance with thousands of employees who have recently become convinced that physically commuting to an office every day is an empty and unacceptable demand from their employers.

The COVID-19 pandemic forced these companies to operate with mostly remote workforces for months straight. 

And since many of them are based in areas with relatively high vaccination rates, the calls to return to the physical office began to sound over the summer.

But thousands of high-paid workers at these companies aren't having it. Many of them don't want to go back to the office full time, even if they're willing to do so a few days a week. 

Workers are even pointing to how effective they were when fully remote and using that to question why they have to keep living in the expensive cities where these offices are located.  READ MORE

Monday, August 2

Time Crystal

Google’s quantum computer has been used to build a “time crystal”, a new phase of matter that upends the traditional laws of thermodynamics, according to freshly published research.

Despite what the name might suggest, however, the new breakthrough won’t let Google build a time machine.

Time crystals were first proposed in 2012 as systems that continuously operate out of equilibrium. Unlike other phases of matter, which are in thermal equilibrium, time crystals are stable, yet the atoms that make them up are constantly evolving.

At least, that’s been the theory: scientists have disagreed on whether such a thing was actually possible in reality. Researchers have argued over which kinds of time crystals could or could not be created, with demonstrations of some that partly, but not completely, meet all the relevant criteria.

In a new research preprint by researchers at Google, along with physicists at Princeton, Stanford, and other universities, it’s claimed that Google’s quantum computer project has delivered what many believed impossible. 

Preprints are versions of academic papers that are published prior to going through peer-review and full publishing; as such, their findings can be challenged or even overturned completely during that review process. READ MORE