For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other sensitive information along the way. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud.
Those concerns led to the passage of measures in the United States and Europe guaranteeing internet users some level of control over their personal data and images—most notably, the European Union’s 2018 General Data Protection Regulation (GDPR).
Of course, those measures didn’t end the debate around companies’ use of personal data. Some argue that curbing it will hamper the economic performance of Europe and the United States relative to less restrictive countries, notably China, whose digital giants have thrived with the help of ready, lightly regulated access to personal information of all sorts. (Recently, however, the Chinese government has started to limit the digital firms’ freedom—as demonstrated by the large fines imposed on Alibaba.)
Others point out that there’s plenty of evidence that tighter regulation has put smaller European companies at a considerable disadvantage to deeper-pocketed U.S. rivals such as Google and Amazon.
But the debate is entering a new phase. As companies increasingly embed artificial intelligence in their products, services, processes, and decision-making, attention is shifting to how data is used by the software—particularly by complex, evolving algorithms that might diagnose a cancer, drive a car, or approve a loan.
The EU, which is again leading the way (in its 2020 white paper “On Artificial Intelligence—A European Approach to Excellence and Trust” and its 2021 proposal for an AI legal framework), considers regulation to be essential to the development of AI tools that consumers can trust.