The Take

An Eye for an AI
14/08/18

Michael O’Shea is a double degree Mathematics and Business student, and a member of QUTEFS. He brings together his skills from both arenas to explain the mystifying area of artificial intelligence in this week’s edition of The Take:

You’ve probably been to networking events and heard people raving about AI and how it is transforming businesses. The fourth industrial revolution will be driven by Big Data and analytics, and an ability to understand these concepts will let us all have better discussions and, ultimately, reach better solutions.

Every company has data. Most companies actively collect data. Few companies know what to do with their data. As companies are forced to modernise archaic and outdated processes, they sometimes do so rapidly. Suddenly they have tonnes (nay, gigabytes) of confusing information they would never have dreamed of previously. They can create cool bar charts, hell, even useless 3D pie charts, to describe and visualise this amazingly powerful data they now own. Pretty graphs and visualisations are certainly useful, but when we have incredibly large and complex datasets, like those Google and Facebook generate, formal methods become more useful.

This is where statistics comes into play! That blessed bell curve which we all know and love. Traditional statistics plays a foundational role in understanding data. However, real-world data is inherently noisy and uncertain, so diverse opinions, techniques and methods emerge. And when datasets or processes are large or complex, we turn to alternative methods, sometimes heavily assisted by computers! No, I’m not quite talking about the beautiful financial Monte Carlo with 10,000 simulations; I’m talking about Artificial Intelligence (AI).
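
Speaking of Monte Carlo: here’s a rough Python sketch of what that 10,000-simulation approach might look like, pricing a simple call option. Every number in it is an assumption for illustration, nothing from this article:

```python
import numpy as np

# Illustrative Monte Carlo: price a European call option by simulating
# 10,000 terminal stock prices under geometric Brownian motion.
# All parameter values below are assumptions, chosen for demonstration.
rng = np.random.default_rng(seed=42)

S0, K = 100.0, 105.0           # spot price and strike (assumed)
r, sigma, T = 0.05, 0.2, 1.0   # risk-free rate, volatility, years (assumed)
n_sims = 10_000

# Simulate terminal prices: S_T = S0 * exp((r - sigma^2/2)T + sigma*sqrt(T)*Z)
Z = rng.standard_normal(n_sims)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# The discounted average payoff approximates the option's fair price
payoffs = np.maximum(S_T - K, 0.0)
price = np.exp(-r * T) * payoffs.mean()
print(f"Monte Carlo call price estimate: {price:.2f}")
```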

AI broadly describes any system that can sense, reason, act and adapt. We may consider it the starting point, or overarching term, for efficient automation. The next steps down the AI hierarchy are Machine Learning (ML) and Deep Learning (DL). One such representation, created by Intel, is shown below:

[Image: Intel’s diagram of the AI > ML > DL hierarchy]

A quick Google search will reveal many diagrams also relating Big Data, Data Science, Representation Learning, Business Intelligence; the list goes on. However, these are the three fundamental levels for understanding this “brained” new world.


Ok, so you’ve told me some boring theory about how these ideas interact. Now what on earth do they look like in practice?


Let’s start with the broadest category, AI. An everyday example is playing an offline video game, battling against an artificial opponent. Take a sporting game like FIFA. In this context, the opponent cannot simply run a quick simulation and complete the game; it depends on, and must therefore react to, your movements as the human player. In fulfilling these criteria, your opponent can (see the sketch after this list):

  1. Sense…movements translated by your controller
  2. Reason…that it’s not the best idea to score an own goal
  3. Act…by trying to defeat you
  4. Adapt…to different contexts in the game, be it a corner, cross, pass or shot
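
To make those four steps concrete, here’s a toy Python sketch of a rule-based opponent. The game state, actions and rules are all invented for illustration; a real FIFA engine is far more sophisticated:

```python
# A toy rule-based opponent illustrating the sense/reason/act/adapt loop.
# The state keys, contexts and actions below are hypothetical.

def opponent_turn(game_state: dict) -> str:
    # Sense: read the player's last input and the current context
    player_action = game_state["player_action"]
    context = game_state["context"]  # e.g. "corner", "open_play"

    # Reason: fixed rules map situations to sensible responses
    # (e.g. never deliberately score an own goal)
    if context == "corner":
        return "mark_attackers"
    if player_action == "shot":
        return "dive_to_save"

    # Act: default behaviour when no special rule fires
    return "press_the_ball"

# Adapt: the same rules produce different actions in different contexts,
# but note the rules themselves never change; this opponent never learns.
print(opponent_turn({"player_action": "pass", "context": "corner"}))     # mark_attackers
print(opponent_turn({"player_action": "shot", "context": "open_play"}))  # dive_to_save
```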

Now, when you play this AI opponent, it doesn’t improve over time. Sure, you may increase the difficulty level, but that is your conscious decision to manually improve the AI. But what if you wanted to square off against an opponent which improved with every day, every game, every moment?

This is where ML comes into play. This next level down in the AI hierarchy consumes data to become bigger, stronger and more intelligent. A foundational concept in probability (the law of large numbers) is that we obtain more accurate estimates with more data or trials. In the context of your FIFA game, this means your opponent will collect more data each game and subsequently improve.
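
To see that probability idea in action, here’s a tiny simulation estimating a hypothetical striker’s true 30% conversion rate from more and more observed shots. Watch the error shrink as the data grows:

```python
import numpy as np

# More data -> more accurate estimates (the law of large numbers).
# The 30% "true" conversion rate is an invented figure for illustration.
rng = np.random.default_rng(seed=0)
true_rate = 0.30

for n_shots in [10, 100, 1_000, 10_000]:
    goals = rng.random(n_shots) < true_rate   # simulate n_shots attempts
    estimate = goals.mean()                   # observed conversion rate
    print(f"{n_shots:>6} shots -> estimated rate {estimate:.3f} "
          f"(error {abs(estimate - true_rate):.3f})")
```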

Another example of this is customer segmentation in marketing. You might encourage customers to enter their details into a sign-up form or email list. As you collect more customers’ details, you may notice certain segments which display similar behaviour. Using some form of ML algorithm, like regression trees, you will be able to make predictions about customer behaviour and segmentation. As you collect more data, these categories and predictions will shift and improve: your machine is ‘learning’ and (hopefully) your profits are increasing.

The fundamental aspect of ML is that we hand-pick the input variables, or features, and the model learns parameters representing each one’s effect and influence. For our customers, these features might include: time spent on the webpage, age, gender, web browser and many other factors.
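
Here’s a hedged sketch of the regression-tree idea using made-up customer data with exactly those features. The data, the relationship hidden in it, and the library defaults are all illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic customer data; the features match the examples in the text:
# time on webpage (minutes), age, gender (0/1), browser (encoded 0-2).
rng = np.random.default_rng(seed=1)
n = 500
X = np.column_stack([
    rng.uniform(0, 30, n),     # time spent on webpage
    rng.integers(18, 70, n),   # age
    rng.integers(0, 2, n),     # gender, crudely encoded
    rng.integers(0, 3, n),     # web browser, crudely encoded
])
# Invented "ground truth": spend rises with time on page, plus noise
y = 5 + 2.0 * X[:, 0] + rng.normal(0, 5, n)

# Fit a shallow regression tree; each leaf becomes a customer "segment"
tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Predict spend for a new customer: 12 min on page, age 25, gender 1, browser 0
print(tree.predict([[12, 25, 1, 0]]))
```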

This is where Deep Learning (DL) is different. In this instance, we don’t hand-pick the features ourselves. Rather, the model aims to replicate the original or ideal system using hundreds or thousands of hidden, learned parameters organised into layers. DL uses artificial neural networks, loosely inspired by the brain, to drill down into big data at its finest granularity. An incredibly useful and evolving area is image recognition. As self-driving cars emerge, it becomes essential to have flawless image recognition and classification. Another example is natural language translation. Sure, Google Translate is ok. But soon, with enough data processed through artificial neural networks, we may have near-instantaneous automatic translation which evolves with languages. This is cool stuff.
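
As a taste, here’s a minimal neural network for recognising handwritten digits (the ‘hello world’ of image recognition), sketched with the Keras API. The architecture and settings are illustrative toys, nothing like the networks behind self-driving cars or Google Translate:

```python
import tensorflow as tf

# Load MNIST: 28x28 grayscale images of handwritten digits 0-9
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A tiny network: the hidden layer learns its own internal features
# from raw pixels, rather than us hand-picking them as in classical ML
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 raw pixels in
    tf.keras.layers.Dense(128, activation="relu"),    # hidden learned representation
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))  # [loss, accuracy] on unseen digits
```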

There is always more to learn in this space. While the names of these methods seem ever-changing and confusing, a solid foundation in probability and statistics underpins almost all of them. Where they take us, well, only time (and faster computers) will tell…

If you are interested in writing an article for The Take, contact publications@qutefs.org