Showing posts with label Algorithms. Show all posts

Thursday, March 16

Machine Learning Matters


Because of new computing technologies, machine learning today is not like machine learning of the past. It was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks; researchers interested in artificial intelligence wanted to see if computers could learn from data. 

The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It’s a science that’s not new – but one that has gained fresh momentum.

While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data – over and over, faster and faster – is a recent development. 
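The iterative, repeated computation described above can be sketched as online learning: a model that updates itself one observation at a time as new data streams in. The model, data stream, and learning rate below are invented purely for illustration.

```python
# A minimal sketch of iterative learning: a one-weight linear model updated
# by stochastic gradient descent as each new observation arrives.

def sgd_update(w, x, y, lr=0.1):
    """One gradient step on squared error for the model y_hat = w * x."""
    y_hat = w * x
    grad = 2 * (y_hat - y) * x   # d/dw of (w*x - y)^2
    return w - lr * grad

# Stream of (x, y) pairs drawn from the true relationship y = 3x.
stream = [(1, 3), (2, 6), (3, 9), (1, 3), (2, 6), (3, 9)]

w = 0.0
for x, y in stream:
    w = sgd_update(w, x, y)   # the model adapts with every new data point

print(round(w, 2))  # prints 2.95 -- moving toward the true slope of 3
```

Each pass over fresh data nudges the model closer to the underlying pattern, which is the "learn from previous computations" behavior the paragraph above describes.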

Here are a few widely publicized examples of machine learning applications you may be familiar with:
  • The heavily hyped, self-driving Google car? The essence of machine learning.
  • Online recommendation offers such as those from Amazon and Netflix? Machine learning applications for everyday life.
  • Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
  • Fraud detection? One of the more obvious, important uses in our world today.

Machine Learning and Artificial Intelligence
While artificial intelligence (AI) is the broad science of mimicking human abilities, machine learning is a specific subset of AI that trains a machine how to learn. Watch this video to better understand the relationship between AI and machine learning. You'll see how these two technologies work, with useful examples and a few funny asides.

Why is machine learning important?
Resurging interest in machine learning is due to the same factors that have made data mining and Bayesian analysis more popular than ever: growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage.
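As a toy illustration of the Bayesian analysis mentioned above, here is a conjugate beta-binomial update, in which each new batch of data refines the estimate of an unknown rate. The conversion-rate framing, batch sizes, and counts are all invented for the example.

```python
# Bayesian updating: a Beta(alpha, beta) prior over an unknown success rate,
# refined batch by batch as more data becomes available.

def update_beta(alpha, beta, successes, failures):
    """Conjugate update: Beta prior + binomial data -> Beta posterior."""
    return alpha + successes, beta + failures

alpha, beta = 1, 1                        # uniform prior over the rate
batches = [(12, 88), (9, 91), (11, 89)]   # (successes, failures) per batch

for s, f in batches:
    alpha, beta = update_beta(alpha, beta, s, f)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # prints 0.109
```

The growing volumes of data the paragraph mentions are exactly what makes this kind of repeated updating useful: every batch sharpens the posterior without refitting from scratch.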

All of these things mean it's possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities – or avoiding unknown risks.  READ MORE...

Monday, February 14

Cognitive Computing


The goal of cognitive computing is to simulate human thought processes in a computerized model. Through self-learning algorithms that draw on data mining, pattern recognition and natural language processing, the computer can mimic the way the human brain works.

While computers have been faster at calculations and processing than humans for decades, they haven’t been able to accomplish tasks that humans take for granted as simple, like understanding natural language, or recognizing unique objects in an image.


Some people say that cognitive computing represents the third era of computing: we went from computers that could tabulate sums (1900s) to programmable systems (1950s), and now to cognitive systems.

These cognitive systems, most notably IBM’s Watson, rely on deep learning algorithms and neural networks to process information by comparing it to a training set of data. The more data the system is exposed to, the more it learns and the more accurate it becomes over time. The neural network itself is a complex “tree” of decisions the computer can make to arrive at an answer.
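As a deliberately tiny stand-in for those deep learning systems, here is a single artificial neuron (a perceptron) trained on the logical OR function. It illustrates the "more data, more accurate" point on a miniature scale; real systems like Watson are vastly larger, and everything here is invented for illustration.

```python
# One artificial neuron trained with the classic perceptron learning rule.
# Exposing it to more of the training data makes it more accurate.

def train(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), label in data:
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def accuracy(w, b, data):
    hits = sum((1 if w[0] * x0 + w[1] * x1 + b > 0 else 0) == label
               for (x0, x1), label in data)
    return hits / len(data)

full = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table

partial_model = train(full[:2])   # has never seen an input with x0 = 1
full_model = train(full)

print(accuracy(*partial_model, full))  # 0.75: misclassifies (1, 0)
print(accuracy(*full_model, full))     # 1.0 after seeing all four cases
```

The model trained on a slice of the data generalizes poorly to cases it never saw, while the one trained on the whole set classifies everything correctly, which is the behavior the paragraph above describes.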

What can cognitive computing do?

According to this TED Talk video from IBM, Watson could eventually be applied in a healthcare setting to help collate the span of knowledge around a condition (patient history, journal articles, best practices, diagnostic tools, and so on), analyze that vast quantity of information, and provide a recommendation.  READ MORE.

Wednesday, December 29

The Future of Work


The future of work is uncertain. Some say robots will dominate the workforce, perhaps eliminating human jobs altogether, and the guesswork only grows when imagining an even more technology-driven economy. Amid such speculation, it’s easy for business owners to feel unsure about how to plan for the next decade.


In this article, we’ll look at the underlying trend expected to dominate the future workplace: The rise of artificial intelligence (AI). Recently, Gartner made six predictions about how businesses will work by 2028 (full content available to Gartner clients). These got us thinking about two critical impacts AI will have on the future workplace, what these mean for small and midsize businesses, and how business owners and HR leaders can start preparing for these trends in advance.

Prediction #1: AI will replace a number of middle management jobs

Ever imagined taking orders from a robot? This could soon be a reality.

Machine bosses will replace human bosses by the end of the decade. Algorithms that boss employees around, also known as robobosses, will be responsible for assigning work based on skill sets. Robobosses will also decide whether employees will get a promotion and what their salary increases will be.

Here are the top reasons why businesses will be interested in implementing robobosses:
  • Data-driven decision-making: It’s true that robots can't show emotions or empathy, but there's one area where they can outperform humans: data-driven decision-making. AI can scan large datasets and apply predictive algorithms to provide actionable insights to business owners. For instance, a roboboss can use factors such as efficiency, skill, knowledge, and motivation level to select team members for projects. This practice will ensure that members with the right skill set and work attitude are chosen, which will increase the chances of timely project completion.
  • Cost-effectiveness: Robobosses will take over most middle management tasks, eliminating the need for multiple middle management positions. This will not only lower the salary costs associated with middle managers but also make team management more efficient.
  • Availability: Unlike human bosses, robobosses will be available 24/7, making it easier for businesses to manage a global workforce operating in different time zones.
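A hypothetical sketch of how the data-driven selection described above might look in code; every name, factor value, and weight here is invented, and a real system would derive these from predictive models rather than hand-set constants.

```python
# Score each employee on efficiency, skill, knowledge, and motivation, then
# pick the top-scoring people for a project -- the core of a "roboboss"
# assignment decision, reduced to a weighted sum.

employees = [
    {"name": "Ana",   "efficiency": 0.9, "skill": 0.8, "knowledge": 0.7, "motivation": 0.9},
    {"name": "Ben",   "efficiency": 0.6, "skill": 0.9, "knowledge": 0.8, "motivation": 0.5},
    {"name": "Chloe", "efficiency": 0.8, "skill": 0.6, "knowledge": 0.9, "motivation": 0.8},
    {"name": "Dev",   "efficiency": 0.5, "skill": 0.5, "knowledge": 0.6, "motivation": 0.7},
]

WEIGHTS = {"efficiency": 0.3, "skill": 0.3, "knowledge": 0.2, "motivation": 0.2}

def score(employee):
    """Weighted sum of the factors the text lists."""
    return sum(employee[factor] * weight for factor, weight in WEIGHTS.items())

def assemble_team(employees, size):
    """Pick the `size` highest-scoring employees for the project."""
    return [e["name"] for e in sorted(employees, key=score, reverse=True)[:size]]

print(assemble_team(employees, 2))  # prints ['Ana', 'Chloe']
```

Whether such a scoring rule is fair or desirable is exactly the open question the article raises; the sketch only shows why the decision is cheap and repeatable once the data exists.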

Impact of this prediction – 2020 vs. 2030

Team composition at the beginning of the decade
Today’s teams comprise employees with expertise in particular skill sets brought together by organizational hierarchy. For instance, a marketing team consists of members who have expertise in search engine optimization (SEO), email marketing, social media marketing, and analytics. Each team has a manager who supervises projects, manages conflicts and people-centric issues, assigns tasks to members, and ensures smooth project execution. The team manager is also responsible for monitoring employee performance and scaling the team size (up or down) as per business requirements.

Team composition at the end of the decade
By the end of the decade, a large number of teams will be autonomous with robobosses responsible for functions currently performed by team managers. Robobosses will manage project allocation, deadlines, delivery, and communication. Smart machines will be responsible for ensuring coordination among different teams, such as sales, marketing, and finance. They will also monitor employee performance and assess the need for upscaling or downsizing based on predicted project workloads.  READ MORE...

Saturday, September 11

Facebook Apologizes


Facebook users who watched a newspaper video featuring black men were asked if they wanted to "keep seeing videos about primates" by an artificial-intelligence recommendation system.

Facebook told BBC News it "was clearly an unacceptable error", disabled the system and launched an investigation.  "We apologise to anyone who may have seen these offensive recommendations."  It is the latest in a long-running series of errors that have raised concerns over racial bias in AI.

'Genuinely sorry'
In 2015, Google's Photos app labelled pictures of black people as "gorillas".  The company said it was "appalled and genuinely sorry", though its fix, Wired reported in 2018, was simply to censor photo searches and tags for the word "gorilla".

In May, Twitter admitted racial biases in the way its "saliency algorithm" cropped previews of images.  Studies have also shown biases in the algorithms powering some facial-recognition systems.

Algorithmic error
In 2020, Facebook announced a new "inclusive product council" - and a new equity team in Instagram - that would examine, among other things, whether its algorithms exhibited racial bias.

The "primates" recommendation "was an algorithmic error on Facebook" and did not reflect the content of the video, a representative told BBC News.   READ MORE