This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Four of Seventeen - Deploying A.I. in the Cloud

Chapter Four of SEVENTEEN

Our next topic of discussion is modern cloud-based A.I. applications and infrastructure. I am not the biggest fan of these platforms, and I will explain why in Chapter Nine, where we discuss A.I. and Cyber-Security. For now, however, there is a huge demand for developers of cloud-based A.I. technology, and there are many platforms to choose from. We will explore the cloud and a few of the different cloud-based platforms, discuss the pros and cons of each, and hopefully help you narrow down which one is right for your project or enterprise.

Before we explore the different cloud-based platforms, let's first answer the question, "What is the cloud?" Have you ever thought about where all your information goes when you upload a document, a YouTube video, or a slide deck? All around the world there are clusters of servers, essentially large-capacity computers, that store and exchange information and data distributed over the internet. Over 95% of servers worldwide run on Linux-based, open-source operating systems. This means that the owners of those servers do not pay licensing fees for the software running on them. This is hugely convenient, as the cloud allows users to accomplish what they could not with the computing resources available to them alone.

For example, if your computer is low on memory or running on an older system, the cloud allows you to extend and scale your computing resources extensively. However, the servers are thirsty for electricity, which requires hosting companies to charge a micro-fee for most worthwhile cloud-based services. The caveat is that building A.I. takes time, and a micro-fee can become a large fee very quickly. If you are using open-source tools on your own hardware, this does not apply. Now, let's visit the different platforms for additional context.

[Image: Nvidia GPU Cloud dashboard]

The Nvidia GPU Cloud is Nvidia's effort at simplifying Machine Learning and Deep Learning workflows. The development of A.I. currently requires the skill sets of a Software Developer, Network Administrator, Data Scientist, and A.I. Architect. Many of the tasks required to develop A.I., oddly enough, do not directly involve the development of A.I. at all; they mostly involve the time-intensive downloading of software dependencies and other platform-specific requirements.

The interface is meant to be simple and intuitive, allowing the user to select various Machine Learning and Deep Learning system configurations that might otherwise take days to set up locally. Next, the user selects the number of GPUs they want to use to train and fine-tune A.I. models. The platform then connects users to their favorite cloud providers, such as Amazon Web Services or Microsoft Azure, or even to Nvidia's own Saturn V, one of the most powerful supercomputers in the world.
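Much of what the Nvidia GPU Cloud provides is delivered as pre-built, GPU-ready containers. As a rough illustration only, the sketch below uses the Docker SDK for Python to pull one of Nvidia's pre-built deep-learning images from the NGC registry and run it against the local GPUs; the image tag is illustrative, and it assumes Docker and the NVIDIA Container Toolkit are already installed on the host.

    import docker

    # Connect to the local Docker daemon (Docker and the NVIDIA Container
    # Toolkit are assumed to be installed on the host).
    client = docker.from_env()

    # Pull a pre-built deep-learning framework image from Nvidia's NGC registry.
    # The tag below is illustrative; check the NGC catalog for current releases.
    image = client.images.pull("nvcr.io/nvidia/pytorch", tag="24.01-py3")

    # Run the container with access to all available GPUs and print what it sees.
    logs = client.containers.run(
        image,
        "nvidia-smi",
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
        remove=True,
    )
    print(logs.decode())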

[Image: IBM Watson dashboard]

IBM's Watson supercomputer was named after the company's first CEO, industrialist Thomas J. Watson. Watson gained international fame overnight after beating two of the most successful Jeopardy! champions in 2011 and taking the $1 million first prize. Watson has since evolved into a suite of software applications and tools for building and deploying A.I. applications. IBM has created a self-contained ecosystem that does not necessarily play well with outside platforms, which is something to consider when weighing options. Watson's servers also run on Nvidia GPUs, if that means anything to you. I recommend IBM's Watson most highly for beginners, as the self-contained ecosystem makes learning easier, rather than forcing the student to learn multiple platforms at once.
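To give a feel for how the suite is used in practice, here is a minimal sketch that calls Watson's Natural Language Understanding service through IBM's Python SDK, assuming you have already provisioned that service in IBM Cloud; the API key, service URL, and version date are placeholders.

    from ibm_watson import NaturalLanguageUnderstandingV1
    from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Credentials come from your own IBM Cloud service instance (placeholders here).
    authenticator = IAMAuthenticator("YOUR_API_KEY")
    nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
    nlu.set_service_url("YOUR_SERVICE_URL")

    # Ask Watson for the sentiment of a short piece of text.
    result = nlu.analyze(
        text="The cloud makes building A.I. accessible to the average person.",
        features=Features(sentiment=SentimentOptions()),
    ).get_result()
    print(result["sentiment"]["document"]["label"])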

Amazon Web Services (AWS) provides on-demand cloud services on a pay-as-you-go basis. The user has the option to access remote CPUs and GPUs, as well as pre-configured software applications, or instances. The user can remotely access physical computing power, such as powerful GPUs, or virtual computing power, where the functionality and processes of a computing system are mimicked in software. This allows for immense scalability and distributed computing, where users access many computers or servers at the same time. Of all the platforms, AWS is likely the most extensive, flexible, and popular among users and researchers in the A.I. community. If you are stepping into the A.I. industry with a focus on cloud-based computing, AWS is unavoidable, and mastering it must become a priority.
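As one small example of that pay-as-you-go model, the sketch below uses the boto3 library to launch a single GPU instance from a Deep Learning machine image; the AMI ID, key pair name, and instance type are placeholders to be replaced with values valid for your own account and region.

    import boto3

    # Create an EC2 client; credentials are read from your AWS configuration.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one GPU instance from a Deep Learning AMI. The AMI ID and key pair
    # name are placeholders; look up the current AMI for your region.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="p3.2xlarge",   # an Nvidia V100-backed GPU instance type
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",
    )
    print("Launched:", response["Instances"][0]["InstanceId"])

Remember that the instance starts billing as soon as it launches, so terminate it when your training run is finished.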

[Image: Microsoft Azure dashboard]

Microsoft Azure is a cloud-computing platform for deploying, testing, and managing software applications. Until a few years ago, Microsoft offered Azure services primarily for Windows users. This was severely limiting, and a step backwards, since most A.I.-deployable devices run on Linux-based systems. That changed within the past five to six years, as Microsoft began opening its ecosystem to the Linux community, a shift likely reflected in the 2014 name change from "Windows Azure" to "Microsoft Azure". The platform supports various programming languages, frameworks, and third-party integrations. It also provides compute services, mobile services, storage, data management, and messaging, similar to AWS's offerings. The prime reason to choose Microsoft Azure over the more popular AWS is if you develop in C-family languages (arguably Java as well) or specialize in Windows-based applications.
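For a taste of Azure from Python rather than from the portal, the sketch below lists the virtual machines in a subscription using Microsoft's management SDK; the subscription ID is a placeholder, and it assumes you have already signed in (for example with the Azure CLI) so that DefaultAzureCredential can find credentials.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Authenticate with whatever credentials the environment provides
    # (Azure CLI login, environment variables, managed identity, and so on).
    credential = DefaultAzureCredential()
    subscription_id = "YOUR_SUBSCRIPTION_ID"  # placeholder

    compute = ComputeManagementClient(credential, subscription_id)

    # List every virtual machine in the subscription, along with its size.
    for vm in compute.virtual_machines.list_all():
        print(vm.name, vm.hardware_profile.vm_size)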

[Image: Google Cloud dashboard]

Google Cloud is Google's crack at creating a suite of cloud-computing services. The platform runs on the same infrastructure that Google uses for its own proprietary services. Google also offers infrastructure as a service, platform as a service, and virtual computing environments. The benefit of the Google platform comes into play for developers focused on deploying mobile applications, particularly within the Android ecosystem. With over 90 cloud-based services, the resources for building and developing A.I. applications with Google are quite extensive.
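As a small, hedged example, the sketch below uses Google's Cloud Storage client library to upload a local archive of training images so that a cloud training job can reach it; the bucket name, object path, and local file are placeholders for your own project.

    from google.cloud import storage

    # The client picks up credentials from the GOOGLE_APPLICATION_CREDENTIALS
    # environment variable or another configured identity.
    client = storage.Client()

    # Bucket and object names below are placeholders for your own project.
    bucket = client.bucket("my-training-data")
    blob = bucket.blob("datasets/images.tar.gz")

    # Upload a local archive of training images for use by a cloud training job.
    blob.upload_from_filename("images.tar.gz")
    print("Uploaded to gs://my-training-data/datasets/images.tar.gz")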

[Image: Docker dashboard]

Docker is a unique ecosystem that was a game-changer in the space of virtualization. Docker allows for the digital "boxing-up" of an application and its dependencies, along with as much of the operating system userland as it needs, in the form of a "container", which can then be redeployed in the cloud or locally on embedded or desktop computing systems. Docker can run on any Linux server, and Microsoft has also made it available natively on Windows 10. Docker containers are lightweight and memory-efficient, allowing several containers to run simultaneously. In the development of A.I. applications, it is likely that you will eventually run into Docker, so you should become comfortable with its usage.
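To make the idea of lightweight, simultaneous containers concrete, here is a minimal sketch using the Docker SDK for Python; the image name and the thirty-second sleep are arbitrary choices for illustration, and it assumes Docker is installed and running locally.

    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    # Launch three lightweight containers side by side; each is an isolated
    # process with its own filesystem, yet all of them share the host kernel.
    containers = [
        client.containers.run("python:3.11-slim", "sleep 30", detach=True)
        for _ in range(3)
    ]

    for c in containers:
        print(c.short_id, c.status)

    # Stop and remove the containers when finished with them.
    for c in containers:
        c.stop()
        c.remove()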

[Image: Nvidia DIGITS dashboard]

Nvidia's DIGITS is an open-source platform for building, fine-tuning, and deploying A.I. applications. The platform currently allows for the development of image classification, object detection, and image segmentation models. For data inputs, the platform is limited to only four file formats: PNG, JPEG, BMP, and PPM. Thus, if you are looking to explore more extensive applications requiring different data inputs, such as 3D models, then another platform may be more appropriate, such as using PyTorch from the terminal.
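For comparison, a bare-bones PyTorch training loop of the kind you would run from the terminal might look like the sketch below; the data directory, image size, and tiny network are placeholders chosen only to show the shape of a custom workflow that is not tied to DIGITS's four file formats.

    import torch
    import torch.nn as nn
    from torchvision import datasets, transforms

    # Expects images arranged as data/train/<class_name>/*.jpg
    # (the path is a placeholder for your own dataset).
    transform = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
    train_set = datasets.ImageFolder("data/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # A deliberately tiny classifier, just to illustrate the training loop.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 128),
        nn.ReLU(),
        nn.Linear(128, len(train_set.classes)),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for images, labels in loader:  # one pass over the data
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()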

The benefit of DIGITS is that it is free. You will still pay, but you will pay with your time, as the platform must be built from source if you want the completely free version on your local computer; Docker images are, however, available for the beginner. DIGITS is also available on the various cloud-based platforms as a pre-configured virtual container, but the cloud usage must then be paid for. Please note that DIGITS is optimized for the Nvidia ecosystem, which may not be desirable for those looking to create applications on non-Nvidia hardware.

In summary, there are many cloud-based platforms to choose from, and this list is far from exhaustive. The best platform for an individual use case depends on the user's budget, desired applications, specifications, and knowledge. The popularity and flexibility of cloud-based platforms mean that they are not going anywhere anytime soon. However, there are limitations to cloud-based systems, which will be explored later. In the meantime, by mastering such systems, you will without a doubt be in high demand. In the next chapter, we will explore one of my favorite topics and areas of expertise, one that will become even more important as the world automates and deploys new A.I. applications: Embedded Systems.

Exercises

  1. Homework: Students will receive 1 Quantum BNB (www.quantumcoins.org) for every completed chapter. Visit Metamask.io and create a Metamask account. Visit the following link for instructions on setting up the Metamask account correctly and receiving your Loyalty Rewards: www.infinite8institute.com/wallet. In what ways could you deploy Artificial Intelligence in the cloud? Compare and contrast each provider to determine which cloud platform is best for your project.

  2. What is Docker, and why might it become more important in the future of Artificial Intelligence and Supercomputing?

  3. How can you use Pose Recognition and Image Recognition to develop your product or service using Google Teachable AI?

  4. What are the pros and cons of utilizing this technology in the cloud?

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 companies, Big Three consulting firms, and leading world governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an industry expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale