Friday, February 18

Wet Apple


 

Artificial Intelligence as a Service


Artificial intelligence as a service refers to off-the-shelf AI tools that enable companies to implement and scale AI techniques at a fraction of the cost of building a full, in-house AI capability.

The concept of everything as a service refers to any software that can be called upon across a network because it relies on cloud computing. In most cases, the software is available off the shelf. You buy it from a third-party vendor, make a few tweaks, and begin using it nearly immediately, even if it hasn’t been totally customized to your system.
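To make the “call it across a network” idea concrete, here is a minimal Python sketch of what using such a service typically looks like. The endpoint URL, API key, and response fields are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch of calling an off-the-shelf AI service over the network.
# The endpoint URL, API key, and response fields are hypothetical
# placeholders, not a specific vendor's API.
import requests

API_KEY = "your-api-key"          # issued by the (hypothetical) vendor
ENDPOINT = "https://api.example-ai-vendor.com/v1/sentiment"

def classify_sentiment(text: str) -> dict:
    """Send text to the hosted model and return its prediction."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()        # e.g. {"label": "positive", "score": 0.97}

if __name__ == "__main__":
    print(classify_sentiment("The new dashboard is fantastic."))
```

The point is that the model itself lives with the vendor; your code only sends data over the network and reads back predictions.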

For a long time, artificial intelligence was cost-prohibitive to most companies:
  • The machines were massive and expensive.
  • The programmers who worked on such machines were in short supply (which meant they demanded high payments).
  • Many companies didn’t have sufficient data to study.

As cloud services have become widely accessible, so has AI: companies can now gather and store vast amounts of data affordably. This is where AI as a service (AIaaS) comes in.

Now, let’s detour into AI so that we have the right expectations when engaging with AIaaS.
Understanding AI

We hear it repeated over and over: artificial intelligence is a way to get machines to do the same kind of work that human brains can accomplish. This definition is the subject of significant debate, with technology experts arguing that comparing machines to human brains is the wrong paradigm to use, one that may promote the fear that machines will take over humans.

The term AI can also be used as a marketing tactic for companies to show how innovative they are—something known as artificial AI or fake AI.

Before we start worrying about the technological singularity, we need to understand what AI actually is.

“Intelligence is the efficiency with which you acquire new skills at tasks you didn’t previously prepare for… Intelligence is not skill itself, it’s not what you can do, it’s how well and how efficiently you can learn new things.”
François Chollet, AI researcher at Google and creator of Keras

Protective Mom


 

5G


5G is the 5th generation mobile network. It is a new global wireless standard after 1G, 2G, 3G, and 4G networks. 5G enables a new kind of network that is designed to connect virtually everyone and everything together including machines, objects, and devices.

5G wireless technology is meant to deliver higher multi-Gbps peak data speeds, ultra-low latency, greater reliability, massive network capacity, increased availability, and a more uniform user experience to more users. Higher performance and improved efficiency empower new user experiences and connect new industries.
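As a rough illustration of what “multi-Gbps peak data speeds” can mean in practice, the toy calculation below compares transfer times for a large file. The peak rates are assumed round numbers chosen for the example, not measured figures, and real-world throughput is far lower than peak.

```python
# Back-of-the-envelope comparison of transfer times at assumed peak rates:
# ~1 Gbps for 4G LTE and ~10 Gbps for 5G, moving a 5 GB file.
FILE_SIZE_GBITS = 5 * 8           # 5 gigabytes expressed in gigabits

for label, peak_gbps in [("4G LTE (assumed 1 Gbps peak)", 1.0),
                         ("5G (assumed 10 Gbps peak)", 10.0)]:
    seconds = FILE_SIZE_GBITS / peak_gbps
    print(f"{label}: ~{seconds:.0f} s to move a 5 GB file at peak rate")
```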

The previous generations of mobile networks are 1G, 2G, 3G, and 4G.

First generation - 1G
1980s: 1G delivered analog voice.

Second generation - 2G
Early 1990s: 2G introduced digital voice (e.g., CDMA, Code Division Multiple Access).

Third generation - 3G
Early 2000s: 3G brought mobile data (e.g. CDMA2000).

Fourth generation - 4G LTE
2010s: 4G LTE ushered in the era of mobile broadband.

1G, 2G, 3G, and 4G all led to 5G, which is designed to provide more connectivity than was ever available before.

5G is a unified, more capable air interface. It has been designed with an extended capacity to enable next-generation user experiences, empower new deployment models and deliver new services.

With high speeds, superior reliability and negligible latency, 5G will expand the mobile ecosystem into new realms. 5G will impact every industry, making safer transportation, remote healthcare, precision agriculture, digitized logistics — and more — a reality.     READ MORE...

Cruising Cat


 

Augmented Reality


Augmented reality (AR) is an experience where designers enhance parts of users’ physical world with computer-generated input. These inputs—ranging from sound and video to graphics, GPS overlays, and more—are digital content that responds in real time to changes in the user’s environment, typically movement.
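To make “responds in real time to changes in the user’s environment” concrete, here is a toy Python sketch of one core AR calculation: deciding whether a GPS point of interest falls inside the camera’s field of view as the user turns, and roughly where on screen to draw its overlay. The coordinates and field-of-view value are invented for illustration.

```python
# Toy sketch of the core AR placement problem: given the device's GPS
# position and compass heading, decide whether a point of interest is
# inside the camera's field of view and where to draw its label.
# The coordinates and field-of-view value are made-up examples.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the device to the point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def overlay_position(device, heading_deg, poi, fov_deg=60):
    """Return a horizontal screen fraction (0..1) if the POI is visible."""
    offset = (bearing_deg(*device, *poi) - heading_deg + 540) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None                       # outside the camera's view
    return 0.5 + offset / fov_deg         # 0 = left edge, 1 = right edge

device = (51.1789, -1.8262)               # example device position
poi = (51.1740, -1.8224)                  # example point of interest
for heading in (0, 90, 160):              # simulate the user turning
    print(heading, overlay_position(device, heading, poi))
```

A real AR toolkit does far more (camera tracking, depth, rendering), but the same “recompute the overlay as the device moves” loop sits underneath.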

Augmented reality has science-fiction roots dating to 1901. However, Thomas Caudell only coined the term in 1990, while developing technology to help Boeing workers visualize intricate aircraft systems. A major advance came in 1992 with Louis Rosenberg’s complex Virtual Fixtures AR system for the US Air Force.

AR releases followed in the consumer world, most notably the ARQuake game (2000) and the design tool ARToolkit (2009). The 2010s witnessed a technological explosion—for example, with Microsoft’s HoloLens in 2015—that stretched beyond AR in the classical sense, while AR software itself became increasingly sophisticated, popular and affordable.

Under the umbrella term extended reality (XR), AR differs from virtual reality (VR) and mixed reality (MR). Some confusion exists, notably between AR and MR. Especially amid the 2020s’ technology boom, considerable debate continues about what each term covers. 

In user experience (UX) design, you have:
  • AR—You design for digital elements to appear over real-world views, sometimes with limited interactivity between them, often via smartphones. Examples include Apple’s ARKit and Android’s ARCore (developer kits) and the Pokémon Go game.
  • VR—You design immersive experiences that isolate users from the real world, typically via headset devices. Examples include PSVR for gaming, Oculus and Google Cardboard, where users can explore, e.g., Stonehenge using headset-mounted smartphones.
  • MR—You design to combine AR and VR elements so digital objects can interact with the real world; therefore, you design elements that are anchored to a real environment. Examples include Magic Leap and HoloLens, which users can use, for example, to learn hands-on how to fix items.      READ MORE...

Outside






 

Thursday, February 17

Muscles









 

Self Determination


Fifty independent countries existed in 1920. Today, there are nearly two hundred. One of the motivating forces behind this wave of country-creation was self-determination—the concept that nations (groups of people united by ethnicity, language, geography, history, or other common characteristics) should be able to determine their political future.

In the early twentieth century, a handful of European empires ruled the majority of the world. However, colonized nations across Africa, Asia, the Caribbean, and elsewhere argued that they deserved the right to determine their political future. Their calls for self-determination became rallying cries for independence.

Ultimately, the breakup of these empires throughout the twentieth century—a process known as decolonization—resulted in an explosion of new countries, creating the world map largely as we recognize it today.

But now that the age of empires is over, is that map set in stone? Not quite. Self-determination continues to play a role in deciding borders, but the landscape is more complicated.

Many people around the world argue that their governments—many of which emerged during decolonization—do not in reality represent the entire country’s population. The borders of colonies seldom had anything to do with any national (or economic or internal political) criteria. So when decolonization occurred, many of the newly created countries were artificial and thus rife with internal division.

However, for a group inside a country to achieve self-determination today, that country’s sovereignty—the principle that countries get to control what happens within their borders and that prohibits them from meddling in one another’s domestic affairs—must be violated. In other words, creating a country through self-determination inherently means taking territory and people away from a country that already exists.

Whereas many world leaders openly called for the breakup of empires, few are willing to endorse the breakup of modern countries. Indeed, the United Nations’ founding charter explicitly discourages it. And the fact that so many modern countries face internal divisions means few governments are eager to embrace the creation of new countries abroad, fearing that doing so could set a precedent that leads to the unraveling of their own borders.

A road to self-determination still remains, but it is far trickier in a world in which empires no longer control colonies oceans away.

Hungry Cats


 

DevOps


DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.


Under a DevOps model, development and operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function.

In some DevOps models, quality assurance and security teams may also become more tightly integrated with development and operations and throughout the application lifecycle. When security is the focus of everyone on a DevOps team, this is sometimes referred to as DevSecOps.

These teams use practices to automate processes that historically have been manual and slow. They use a technology stack and tooling which help them operate and evolve applications quickly and reliably. These tools also help engineers independently accomplish tasks (for example, deploying code or provisioning infrastructure) that normally would have required help from other teams, and this further increases a team’s velocity.
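As a minimal sketch of what that automation can look like, the Python script below chains the historically manual steps (test, build, deploy) and stops at the first failure. The specific commands and the deploy.sh script are placeholders; a real team would typically express this pipeline in its CI/CD tooling rather than a standalone script.

```python
# Minimal sketch of the kind of manual steps a DevOps team automates:
# run the tests, build an artifact, then deploy it. The shell commands
# below are placeholders, not a prescribed toolchain.
import subprocess
import sys

PIPELINE = [
    ("test",   ["pytest", "-q"]),                 # run the test suite
    ("build",  ["docker", "build", "-t", "myapp:latest", "."]),
    ("deploy", ["./deploy.sh", "staging"]),       # hypothetical deploy script
]

def run_pipeline():
    for stage, command in PIPELINE:
        print(f"--- {stage}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"{stage} failed; stopping the pipeline.")
            sys.exit(result.returncode)
    print("Pipeline finished: application deployed.")

if __name__ == "__main__":
    run_pipeline()
```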

Why DevOps Matters
Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer merely supports a business; rather it becomes an integral component of every part of a business. Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain, such as logistics, communications, and operations. In a similar way that physical goods companies transformed how they design, build, and deliver products using industrial automation throughout the 20th century, companies in today’s world must transform how they build and deliver software.  READ MORE...

Free Ride


 

Sovereignty


Sovereignty is the bedrock of international relations. The concept lays out basic rules for how countries are allowed to interact with one another. In principle, it means countries get to control what happens inside their borders and can’t interfere in what happens elsewhere. This protects countries from being invaded over internal matters.

But the concept of sovereignty doesn’t play out perfectly in reality. There are limits to the control a country can exercise over what happens inside its borders. In the case of grievous human rights abuses like genocide, many countries argue breaches of sovereignty should be allowed on humanitarian grounds. Meanwhile, dozens of countries around the globe choose to give up a degree of sovereignty to join organizations like the European Union and the World Trade Organization.

Today, as the world grows increasingly interconnected, what constitutes a violation of sovereignty is up for interpretation—and world leaders have to decide how to tackle problems like climate change and terrorism that know no borders.

Nationalism can unite people—but it can also divide them, to destructive ends. Countries that respect one another’s independence are the building blocks of our modern international system, and sovereignty is the key to understanding what it takes for a group of people to form a new country.

Questions:
  • Why would countries as different as Poland and Germany give up some sovereignty to join the European Union?
  • Is violating a country’s sovereignty justifiable to protect human rights, as in the NATO-led intervention in Libya in 2011?
  • Why do some countries breach each other’s sovereignty for non-humanitarian reasons, leaving other countries and governments to decide how to respond?

Online, we see sovereignty defined this way: sovereignty is the supreme authority within a territory. It entails hierarchy within the state, as well as external autonomy for states. In any state, sovereignty is assigned to the person, body, or institution that has the ultimate authority over other people to establish a law or change an existing law. In political theory, sovereignty is a substantive term designating supreme legitimate authority over some polity. In international law, sovereignty is the exercise of power by a state: de jure sovereignty refers to the legal right to do so, while de facto sovereignty refers to the factual ability to do so. This becomes a special concern when, contrary to the usual expectation, de jure and de facto sovereignty do not coincide at the place and time in question or do not reside within the same organization.

A Few Funnies





 

Wednesday, February 16

Ballet







 

Internet of Things

The Internet of Things (IoT) describes the network of physical objects—“things”—that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the internet. These devices range from ordinary household objects to sophisticated industrial tools. With more than 7 billion connected IoT devices today, experts expect this number to grow to 22 billion by 2025.

Over the past few years, IoT has become one of the most important technologies of the 21st century. Now that we can connect everyday objects—kitchen appliances, cars, thermostats, baby monitors—to the internet via embedded devices, seamless communication is possible between people, processes, and things.

By means of low-cost computing, the cloud, big data, analytics, and mobile technologies, physical things can share and collect data with minimal human intervention. In this hyperconnected world, digital systems can record, monitor, and adjust each interaction between connected things. The physical world meets the digital world—and they cooperate.
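Here is a toy Python sketch of that record-monitor-adjust loop, using a simulated temperature sensor and heater. The readings, drift, and target value are invented purely for illustration; a real device would read hardware and report over a network protocol.

```python
# Toy sketch of the record-monitor-adjust loop described above: a simulated
# temperature sensor reports readings, and a controller decides whether to
# switch a (simulated) heater on or off without human intervention.
import random
import time

TARGET_C = 21.0

def read_sensor(current: float, heater_on: bool) -> float:
    """Simulate a temperature reading drifting up or down."""
    drift = 0.4 if heater_on else -0.3
    return current + drift + random.uniform(-0.1, 0.1)

def control_loop(steps: int = 10):
    temperature, heater_on = 18.0, False
    for _ in range(steps):
        temperature = read_sensor(temperature, heater_on)    # record
        heater_on = temperature < TARGET_C                   # monitor + adjust
        print(f"{temperature:5.1f} C  heater={'on' if heater_on else 'off'}")
        time.sleep(0.1)   # in a real device, the sensor's reporting interval

control_loop()
```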

While the idea of IoT has been in existence for a long time, a collection of recent advances in a number of different technologies has made it practical.
  • Access to low-cost, low-power sensor technology. Affordable and reliable sensors are making IoT technology possible for more manufacturers.
  • Connectivity. A host of network protocols for the internet has made it easy to connect sensors to the cloud and to other “things” for efficient data transfer.
  • Cloud computing platforms. The increase in the availability of cloud platforms enables both businesses and consumers to access the infrastructure they need to scale up without actually having to manage it all.
  • Machine learning and analytics. With advances in machine learning and analytics, along with access to varied and vast amounts of data stored in the cloud, businesses can gather insights faster and more easily. The emergence of these allied technologies continues to push the boundaries of IoT and the data produced by IoT also feeds these technologies.
  • Conversational artificial intelligence (AI). Advances in neural networks have brought natural-language processing (NLP) to IoT devices (such as digital personal assistants Alexa, Cortana, and Siri) and made them appealing, affordable, and viable for home use.
TO READ MORE ABOUT IoT,  CLICK HERE...

Mountain Range


 

Predictive Analytics


Predictive analytics is the use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal is to go beyond knowing what has happened to providing a best assessment of what will happen in the future.
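A minimal sketch of that idea, assuming scikit-learn and a synthetic “historical” dataset invented for illustration: fit a model on past outcomes, then score the likelihood of the outcome for a new case.

```python
# Minimal sketch: fit a model on historical outcomes, then score the
# likelihood of the outcome for a new case. The data below is synthetic,
# generated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
tenure = rng.uniform(0, 10, n)            # years as a customer (made up)
late_payments = rng.poisson(1.5, n)       # past late payments (made up)
# Historical outcome: did the customer default? (synthetic rule plus noise)
defaulted = (late_payments * 0.8 - tenure * 0.3 + rng.normal(0, 1, n)) > 0

X = np.column_stack([tenure, late_payments])
X_train, X_test, y_train, y_test = train_test_split(X, defaulted, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# Likelihood of default for a new applicant: 2 years tenure, 4 late payments
print("predicted default probability:", model.predict_proba([[2.0, 4.0]])[0, 1])
```

This is the same pattern behind the credit-score example discussed below: the model turns historical data into a single number expressing the likelihood of a future outcome.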

Though predictive analytics has been around for decades, it's a technology whose time has come. More and more organizations are turning to predictive analytics to increase their bottom line and competitive advantage. 

Why now?

  • Growing volumes and types of data, and more interest in using data to produce valuable insights.
  • Faster, cheaper computers.
  • Easier-to-use software.
  • Tougher economic conditions and a need for competitive differentiation.

With interactive and easy-to-use software becoming more prevalent, predictive analytics is no longer just the domain of mathematicians and statisticians. Business analysts and line-of-business experts are using these technologies as well.

Organizations are turning to predictive analytics to help solve difficult problems and uncover new opportunities. Common uses include:

Detecting fraud. Combining multiple analytics methods can improve pattern detection and prevent criminal behavior. As cybersecurity becomes a growing concern, high-performance behavioral analytics examines all actions on a network in real time to spot abnormalities that may indicate fraud, zero-day vulnerabilities and advanced persistent threats.
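One common way to spot such abnormalities is unsupervised anomaly detection. The sketch below, assuming scikit-learn and synthetic network-activity features invented for illustration, flags sessions whose behavior looks unlike the rest.

```python
# Flag unusual network sessions with an Isolation Forest.
# The "request rate" and "transfer size" features are synthetic examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Normal activity: modest request rates and transfer sizes
normal = rng.normal(loc=[50, 5], scale=[10, 2], size=(500, 2))
# A few unusual sessions: very high request rate and transfer size
unusual = rng.normal(loc=[300, 40], scale=[20, 5], size=(5, 2))
sessions = np.vstack([normal, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = detector.predict(sessions)        # -1 = flagged as anomalous
print("flagged sessions:", np.where(flags == -1)[0])
```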

Optimizing marketing campaigns. Predictive analytics are used to determine customer responses or purchases, as well as promote cross-sell opportunities. Predictive models help businesses attract, retain and grow their most profitable customers.

Improving operations. Many companies use predictive models to forecast inventory and manage resources. Airlines use predictive analytics to set ticket prices. Hotels try to predict the number of guests for any given night to maximize occupancy and increase revenue. Predictive analytics enables organizations to function more efficiently.

Reducing risk. Credit scores are used to assess a buyer’s likelihood of default for purchases and are a well-known example of predictive analytics. A credit score is a number generated by a predictive model that incorporates all data relevant to a person’s creditworthiness. Other risk-related uses include insurance claims and collections.
  READ MORE...

Pals