This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Three of Seventeen - The Power of GPUs


Chapter Three: The Power of GPUs

Have you ever played, or watched someone else play, video games on an Xbox or PlayStation? Did you ever wonder how the graphics came to life? That is the work of the Graphics Processing Unit, or GPU. A GPU is a programmable computer processor that renders the images on a computer's screen. There are stark differences between a GPU and a CPU, or Central Processing Unit. Let's dig into those differences to discover the key to the performance gap between the two.


By way of comparison, a CPU is like a Ford F-150, while a GPU is closer to a Ferrari. The reason has to do with the number and size of the cores, the gates that information passes through. Information passes through a CPU's gates as binary code, zeros and ones. The more gates a processor has, the more information it can push through at once. The CPU in your average laptop may have four cores, but they are normally larger, more powerful cores than those of a GPU.

Each program running in the background, or each open browser window, typically occupies a CPU core. If you have a four-core computer with twelve windows open, the computer will usually run slowly, because twelve streams of data are competing to get through four gates. This is why GPUs are much more efficient for heavier workloads. Rather than an average of four to six larger cores, your average GPU has hundreds, if not thousands, of smaller cores for processing large batches of information.
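As a toy illustration of that bottleneck (not a real scheduler, and the numbers are invented), you can model the twelve windows as equal-sized tasks queueing for cores: each "wave" runs as many tasks as there are cores, so fewer cores means more waves of waiting.

```python
import math

def makespan(n_tasks, n_cores, task_time=1.0):
    # Equal tasks queue for the cores: each wave runs n_cores tasks at
    # once, so total time is the number of waves times the time per task.
    return math.ceil(n_tasks / n_cores) * task_time

# Twelve open programs competing for four CPU cores: three waves of work.
print(makespan(12, 4))   # 3.0
# The same twelve tasks spread over twelve of a GPU's many small cores: one wave.
print(makespan(12, 12))  # 1.0
```

Real operating systems time-slice cores between programs rather than making them wait in strict waves, but the intuition holds: more cores, fewer waves.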

While GPUs were initially used for gaming and simulations, from around 2012 to 2015 scientists began to use GPUs for Machine-learning and Deep Learning in place of CPUs with their limited core counts. As early as 2010, the Chinese supercomputer Tianhe-1A took the world's top speed ranking using more than seven thousand GPUs. The result was that research in Deep Learning skyrocketed, allowing much more complex problems to be solved with previously unimaginable computational power. Computing on this scale was once reserved solely for the Defense sector. Now, for the first time, the technology is easily accessible to the lay person.

The next related concept involves the real magic behind the GPU, which is called "parallelism". This means that the computer, thanks to the GPU, can process many hundreds or thousands of instructions at the same time. Once again, this is a result of the many cores that make up a GPU. Parallelism is extremely important for real-time devices, such as drones, self-driving cars, or medical devices: the device needs to operate with almost zero buffering or latency, so that the machine can quickly process massive numbers of data points and react safely to unpredictable situations in the real world.
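A minimal sketch of that idea in plain Python, no GPU required: the same operation is applied to many data elements, each independent of the others, so the elements can be handed to many workers at once. The thread pool below only stands in for a GPU's cores; the point is that every element is processed the same way, independently, and the parallel result matches the serial one.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # Each pixel is adjusted independently of every other pixel --
    # exactly the kind of work a GPU spreads across thousands of cores.
    return min(pixel + 10, 255)

pixels = list(range(0, 256, 8))  # a toy "image" of 32 brightness values

# Serial: one worker walks through every pixel, like a lone CPU core.
serial = [brighten(p) for p in pixels]

# Parallel: many workers each take pixels, like a GPU's many small cores.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(brighten, pixels))

assert serial == parallel  # same answer either way; only the throughput differs
```

On a real GPU this pattern is called data parallelism: one instruction, thousands of data elements at once.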

It would be the difference between 2,000 engineers working on many small problems, as opposed to four to six engineers working on a few big problems. However, recall that the CPU is a workhorse, so when you begin to program CPUs and GPUs, you want to delegate the workload efficiently between the two. Note also that the GPU, although more powerful, needs the CPU to operate, but this is not the case the other way around. Next, let us look at a few commercially available GPUs and their applications to gain an enhanced perspective.

[Image: Nvidia Titan V graphics card]

This GPU is the Nvidia Titan V, a powerful graphics card for desktop processing with 5,120 cores. Its target market is the A.I. community, for Deep Learning and Machine-learning applications.

This GPU is the Nvidia Jetson AGX Orin, the world's most powerful single-board computer for embedded A.I., reaching 275 TOPS (trillions of operations per second). In contrast, your average laptop reaches anywhere between 2 and 4 TOPS.

Here we have the Nvidia Jetson Orin Nano and the original Nvidia Jetson Nano. The Nano has the power of about 75 MacBook Pro computers, with 128 cores, and is an entry-level device for deploying A.I. applications on real-world embedded systems. If that impresses you, the newer Jetson Orin Nano, with 1,024 cores, has the power of roughly 1,750 MacBook Pro computers, making it a great intermediate device for modern A.I. applications.

To give a deeper dive into where GPUs are applied, I will provide a few examples. One example is Oak Ridge National Laboratory, which is using NVIDIA GPU-powered A.I. to accelerate the mapping and analysis of population distribution around the world. NASA uses NVIDIA GPUs to develop DeepSat, a deep learning framework for satellite image classification and segmentation. The WEpods project also uses Nvidia GPUs for self-driving public transportation in the Smart Cities of the future. Pinterest is even using GPUs to analyze the billions of Pins curated by its users.

While Nvidia GPUs are expensive compared to rivals such as AMD, the developer community and online support are second to none. Likewise, NVIDIA provides workflows from Machine-learning/Deep Learning all the way to hardware deployment using open-source tools, which is also convenient and cost-saving. Another large player in the GPU space is ARM, the world's number one provider of GPU designs for mobile devices, with its chips embedded in 95 percent of the world's mobile devices today.

Ultimately, while it is possible to conduct Machine-learning and Deep Learning using a CPU alone, it is much slower than using a GPU, with its far greater number of cores and its parallelism. To be fair, CPUs have their own benefits: since the CPU is the brains of the operation, small, latency-sensitive tasks are often more efficient on the CPU, because information has further to travel to reach the GPU for processing. These are edge cases, however; the majority of the time, processing will be much faster when certain operations are passed off to the GPU.
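That trade-off can be sketched with a toy cost model (the numbers here are invented purely for illustration): the GPU pays a fixed cost to have the data transferred over, then chews through it in parallel waves, while the CPU starts immediately but works item by item.

```python
import math

def cpu_time(n_items, per_item=1.0):
    # The CPU starts right away but processes items one after another.
    return n_items * per_item

def gpu_time(n_items, per_item=1.0, cores=1000, transfer=50.0):
    # The GPU first pays a fixed cost to move the data over, then runs
    # the items in parallel waves of `cores` items at a time.
    return transfer + math.ceil(n_items / cores) * per_item

# Tiny job: the transfer overhead dominates, so the CPU wins.
print(cpu_time(10) < gpu_time(10))              # True
# Big batch: parallelism dominates, so the GPU wins by a wide margin.
print(gpu_time(100_000) < cpu_time(100_000))    # True
```

The crossover point in real systems depends on the hardware and the workload, but the shape of the trade-off is the same: small and urgent stays on the CPU, big and batched goes to the GPU.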

Here, we have explored GPUs for embedded and desktop devices. While embedded devices are the most secure and affordable overall, the trend has surged toward cloud-based GPUs for building A.I., which we shall now explore in the next chapter. Homework: Students will receive 1 Quantum token for every completed chapter. Visit Metamask.io and create a Metamask account. Visit the following link for instructions to set up the Metamask account correctly and receive your Loyalty Rewards: www.infinite8institute.com/wallet

Exercises

  1. Can you or your team explain whether a GPU or CPU is more powerful?

  2. Can you or your team explain whether a GPU or CPU has more cores?

  3. When might you or your team want to use a CPU vs. a GPU for your project?

  4. Why do you think GPUs have not yet gotten smaller?

  5. Does it make sense why many computers need a GPU to run? Why or why not?

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He initiated and has directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale