This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D.

Chapter Two of Seventeen

Chapter Two: Machine Learning vs. Deep Learning vs. Transformers

Artificial Intelligence (AI) is a rapidly growing field with the potential to revolutionize the way we live and work. To appreciate the unique features each offers, it is important to understand the differences between types of AI such as Machine Learning (ML) and Deep Learning (DL).

Machine Learning is a type of AI that involves the use of algorithms to analyze data and identify patterns. It relies heavily on statistical models and mathematical optimization techniques to improve its accuracy and efficiency. ML models are trained using large datasets that contain various inputs and corresponding outputs, and the algorithms learn from these datasets to make predictions or decisions about new data. This learning process is iterative, meaning the algorithm gets better over time as it processes more data.

To understand the concept of Machine Learning, think about how a child learns to identify objects. For example, if a child is shown a picture of a dog and is told it is a dog, they begin to recognize the common features of dogs, such as their fur, wagging tails, and barking sounds. Similarly, an ML algorithm is trained to identify patterns in data and can make predictions based on those patterns.
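For readers who want to peek under the hood, here is a minimal sketch of that idea in Python, assuming the scikit-learn library is installed. The tiny "spam email" dataset and its two features are invented purely for illustration; real training sets are far larger.

```python
# A minimal Machine Learning sketch using scikit-learn (illustrative only).
# Each row describes one email with two made-up features:
# [number_of_links, number_of_spammy_words]; the label is 1 for spam, 0 for not spam.
from sklearn.tree import DecisionTreeClassifier

X = [[1, 0], [0, 1], [8, 12], [10, 9], [0, 0], [7, 15]]  # inputs (features)
y = [0, 0, 1, 1, 0, 1]                                   # corresponding outputs (labels)

model = DecisionTreeClassifier()
model.fit(X, y)                  # the algorithm learns patterns from the dataset

print(model.predict([[9, 11]]))  # [1] -- the new email looks like spam
```

The more labeled examples the model sees, the better its predictions tend to become, which is the iterative improvement described above.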

Deep Learning, on the other hand, is a subset of Machine Learning that uses artificial neural networks to process data. The term "deep" refers to the layers of artificial neurons that are used to analyze the data. These networks are modeled after the human brain and are designed to process complex patterns in data. DL algorithms are more complex than traditional ML algorithms and require more computational power to train.

To continue with our analogy of a child, think about how a child learns to recognize faces. A child can easily recognize their parents' faces, but it may take some time for them to learn to recognize other faces. Similarly, DL algorithms are designed to process complex data, such as images and speech, and can learn to recognize patterns that may be difficult for traditional ML algorithms.
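As a rough illustration of what those stacked layers of artificial neurons look like in practice, here is a small sketch using the PyTorch library. The layer sizes and the random stand-in "image" are arbitrary choices made for the example, not a recommendation.

```python
# A minimal Deep Learning sketch using PyTorch (illustrative only).
# "Deep" refers to stacking several layers of artificial neurons.
import torch
import torch.nn as nn

model = nn.Sequential(       # layers are applied one after another
    nn.Linear(784, 128),     # layer 1: 784 inputs (e.g., a 28x28 image) -> 128 neurons
    nn.ReLU(),               # non-linear activation between layers
    nn.Linear(128, 64),      # layer 2: 128 -> 64 neurons
    nn.ReLU(),
    nn.Linear(64, 10),       # output layer: one score for each of 10 possible classes
)

fake_image = torch.rand(1, 784)  # a random stand-in for one flattened image
scores = model(fake_image)       # a forward pass through all the layers
print(scores.shape)              # torch.Size([1, 10])
```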

Both Machine Learning and Deep Learning have their advantages and disadvantages. ML is best suited for tasks that involve relatively simple data processing, such as identifying spam emails or predicting sales trends. DL, on the other hand, is more effective for tasks that involve complex data processing, such as image or speech recognition. However, DL requires a significant amount of computational power and data to train the neural networks effectively.

To put it simply, Deep Learning is a Machine Learning technique that teaches computers to learn by example. This subset of Machine Learning took off as I entered the field in 2015-16. Deep Learning has one key difference from Machine Learning: the depth of the learning. In this method, computer models crunch massive amounts of data using the specialized computing processors we mentioned before, called Graphics Processing Units (GPUs).

We will touch on how they accomplish this task in the next chapter. For now, it is only important to know that GPUs were the breakthrough in the industry, allowing much larger datasets to be computed and, in turn, creating advances across vast areas of science and other technical fields thanks to access to previously unattainable computing power.

The power of GPUs has allowed the accuracy of A.I. to reach unprecedented milestones. Deep Learning is influenced by neurology, as these A.I. models imitate the human brain, structuring data in a way that creates artificial neural connections, similar to the connections formed when a child learns something new. Within the niche of Deep Learning, there are three narrower subsets that are important to know. Let's explore this trio of methods in greater detail.

The first subset of Deep Learning is Supervised Learning. Imagine that you are an instructor standing over your student's shoulder as you explain a new concept. After being shown thousands of images of a '67 Shelby Ford Mustang, the student is eventually able to pick out the vehicle from multiple choices of 1960s-era Ford Mustangs nine times out of ten, proving a high level of accuracy. In short, because you are watching over the A.I., telling it what the correct data looks like (also called "labelling" the data), and correcting the A.I. when it is incorrect, the approach is referred to as Supervised Learning. A short code sketch of the idea follows below; after that, we will discuss another subset of Deep Learning, where data is not labelled, but random and unexplored.
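As a rough sketch of what that supervision looks like in code, the following example, assuming PyTorch and using made-up stand-in data, treats the labels as the instructor's answer key and the loss as the correction applied after every wrong guess.

```python
# A rough Supervised Learning sketch in PyTorch (illustrative only).
# The labels play the role of the instructor: the loss measures how wrong the
# model's guess was, and the optimizer nudges the model toward the correct answer.
import torch
import torch.nn as nn

images = torch.rand(100, 784)            # 100 fake "pictures" (stand-in data)
labels = torch.randint(0, 2, (100,))     # label each picture 0 or 1

model = nn.Linear(784, 2)                # a deliberately tiny model
loss_fn = nn.CrossEntropyLoss()          # how far each guess is from its label
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):                   # show the same labeled examples repeatedly
    guesses = model(images)              # the model's current guesses
    loss = loss_fn(guesses, labels)      # compare guesses to the "correct answers"
    optimizer.zero_grad()
    loss.backward()                      # work out how to correct the mistakes
    optimizer.step()                     # apply the correction
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```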

The second subset of Deep Learning is Unsupervised Learning. During college, I was a paralegal intern for a D.C.-based international law firm that did not make it through the 2008 financial crash. One of my tasks was to take 16-20 boxes containing hundreds or thousands of emails and put them all into chronological order. I had to take randomized data, or information, and create a recognizable pattern from it.

Another example would be IBM's Watson Analytics, which I have used to train students, as well as to analyze training trends and outcomes. I would feed the A.I. random data that was required for regulatory compliance, and we would find all kinds of interesting patterns that otherwise would have gone unnoticed. In the industry, we call these "Actionable Insights". One group this subset has yet to touch in a meaningful way, but that will be impacted greatly, is the sea of non-profits with massive amounts of data who lack efficient methods to analyze it in support of better service-delivery models. A short clustering sketch follows below; then, before we get too far ahead into the future, let's come back to the third and most cutting-edge subset of Deep Learning.
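To make the unsupervised idea concrete, here is a minimal sketch using scikit-learn's k-means clustering. The little dataset and its two features are invented for illustration; the key point is that no labels are provided, and the algorithm must group the records on its own.

```python
# A minimal Unsupervised Learning sketch using scikit-learn (illustrative only).
# No labels are given; the algorithm finds structure on its own, much like
# sorting boxes of unorganized emails into recognizable groups.
from sklearn.cluster import KMeans

# Each row is a made-up record: [emails_sent_per_day, average_reply_time_hours]
data = [[50, 1], [55, 2], [48, 1], [5, 24], [3, 30], [4, 26]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(data)  # assign each record to one of two clusters

print(groups)  # e.g., [1 1 1 0 0 0]: heavy emailers vs. occasional ones
```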

The third subset of Deep Learning is Reinforcement Learning. To give you an example, I will return to my son. When he was two and three years old and potty-training, he sat on the toilet over and over again until he understood that this was where he needed to go whenever he felt the need to relieve himself. This method is extremely useful in the field of robotics, where instead of hand-coding an algorithm for a robot to pick up a flower, you can let the robot practice picking up the flower 1,000 times until it does so correctly. And when the robot does it correctly, you reward it, just as you would reward a child for making it to the restroom without an accident. In the future, when robots are more ubiquitous, this will be the most efficient way to train them on many tasks.
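Here is a toy sketch of that reward-driven idea in plain Python. The two actions and the reward rule are invented for illustration; the point is simply that the action that earns the reward ends up with the higher learned value.

```python
# A toy Reinforcement Learning sketch in plain Python (illustrative only).
# The "robot" tries two actions; it receives a reward of 1.0 when it picks the
# right one, and over many attempts it learns to prefer the rewarded action.
import random

value = {"pick_up_gently": 0.0, "grab_quickly": 0.0}  # how good each action seems so far

def reward(action):
    return 1.0 if action == "pick_up_gently" else 0.0  # the gentle pickup succeeds

for attempt in range(1000):
    if random.random() < 0.1:                       # occasionally explore at random
        action = random.choice(list(value))
    else:                                           # otherwise use what has been learned
        action = max(value, key=value.get)
    r = reward(action)
    value[action] += 0.1 * (r - value[action])      # nudge the estimate toward the reward

print(value)  # "pick_up_gently" ends up with the higher learned value
```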

In a nutshell, Machine Learning and Deep Learning have many similarities, but it is the subtle differences that have led Deep Learning to become a driving force behind the wide adoption of A.I. in society today. While it is possible to implement each of the Deep Learning subsets, it may be more efficient to choose one subset of Deep Learning to master. Whichever method allows you personally to achieve the highest levels of accuracy is the method I recommend. Now, let's move on to something that will fuel tomorrow's delivery drones, self-driving cars, robotic companions, and smart cities… the GPU.

Finally, we have the latest addition to the family: Transformers, which are revolutionary in a subtle way. Transformers allow Deep Learning models to order information according to its context, thereby allowing information to be output in a manner that sounds like natural human language. The AI gains context through its human-language interface, and the user experience becomes more natural and engaging, causing the user to spend more time with the technology and to push it further through a commonality in language and contextual understanding. This is the revolution of AI, the machine-human interface: an ability to create together what neither could create alone. ChatGPT is the most popular example of an application powered by transformer technology, and it brought the LLM, or Large Language Model, to widespread attention.
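For the curious, here is a minimal sketch of putting a small pretrained transformer to work through the Hugging Face transformers library. It assumes the library is installed and that the small "gpt2" model can be downloaded; the prompt is only an example.

```python
# A minimal Transformer sketch using the Hugging Face "transformers" library
# (illustrative only). The model uses the context of the words seen so far to
# continue the prompt in natural-sounding language.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # a small transformer-based language model

result = generator("Artificial intelligence will change the way we", max_new_tokens=20)
print(result[0]["generated_text"])
```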

Exercises

  1. Can you explain the difference between Machine Learning and Deep Learning? What is a Transformer?

  2. Can you name the three different types of Deep Learning? What does LLM stand for?

  3. Collaborate with your team, or work individually, to discuss and select the specific version of Deep Learning that you will use to create your company's proprietary technology.

  4. What type of Machine Learning method will you choose, and why?

  5. For Beginners, here are more examples of Deep Learning.

  6. For the more advanced, here are examples of Deep Learning models for AI development.

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 companies, Big Three consulting firms, and leading world governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an industry expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale