This is A.I.: A.I. for the Average Guy/Girl by Ean Mikale, J.D. - Chapter Seventeen of Seventeen - AI Infrastructure & Breaking Moore’s Law

Chapter Seventeen of Seventeen

Chapter Seventeen: AI Infrastructure & Breaking Moore’s Law

The world is lustful for ever-increasing power and ever more powerful Artificial Intelligence, but by what means shall we reach this end? How shall we achieve human-like intelligence, biological speed, and Universal consciousness? Maybe it is something we cannot achieve at all, but only witness. Maybe there is an act of co-creation in the process of creation. Maybe there must be the will to be created, stemming from the creation. And if not created through us, then what other entity is more ethical and dedicated toward the light?

The time has come to confront Moore’s Law, the observation that the number of transistors that can fit on a silicon microchip doubles roughly every two years, and the physical limits now closing in on it. As transistors are crowded ever closer together on silicon, strange things happen, such as quantum tunneling, which distorts the signaling and communication between transistors, while also creating additional power constraints and thermal dissipation challenges.
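To make the doubling rule concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, assuming a clean two-year doubling from the roughly 2,300-transistor Intel 4004 of 1971; real chips deviate from this idealized curve):

```python
# Back-of-the-envelope Moore's Law projection (illustrative only).
# Assumes a clean doubling every two years from the ~2,300-transistor
# Intel 4004 (1971); real chips deviate from this idealized curve.
BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Idealized transistor count if the doubling held perfectly."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2024):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```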

As a result, the industry has produced many innovations intended to step back from this computational cliff by sidestepping Moore’s Law, or at least holding it off until more efficient methods can be proven and networks built to sustain these advancements. Here, we will look at four examples of enterprise and industry using creative commercial and business practices to address a seemingly insurmountable issue that every modern nation in the age of AI must face: how to sustainably feed AI.

The first method being applied to AI infrastructure to address the challenges of Moore’s Law is the partial or full liquid submersion of data center and gaming servers. This is done to increase computational output while improving thermal dissipation. The practice was heavily adopted in the blockchain and cryptocurrency industry as a way to mine more cryptocurrency by pushing servers harder while managing the additional heat. Servers are submerged in a non-conductive dielectric fluid that does not corrode or otherwise damage the components.

In the age of AI, the increased computational requirements imposed by the rise of Generative AI force data centers and their partners to decide whether to spend heavily on new hardware for more computational power, or to maximize what they already possess. This first method is one way to increase the lifespan of existing generations of hardware. However, it may also require increased maintenance, additional costs for tools and infrastructure, and careful attention to safety concerns such as solvent fumes. If carefully managed, though, this option could be the right choice for you.

The second method currently being applied to AI infrastructure involves chiplet designs. An example is AMD's MI300 series, a distributed microchip design that addresses the desire for increased power by packaging more transistors into a unified module built from smaller dies. The trade-off may be increased latency due to the distributed design. The greater need for inter-chip networking also introduces additional attack vectors that must be mitigated in the design and may not be seen or addressed in the first generation of such an exotic chip. Because these chips are distributed, they are less likely to be compacted into a small space, limiting the types of applications they can serve. More than likely, they are a great option if you are looking for enterprise-scale AI training in the data center. If nothing else, this second method shows the modern innovation being applied to address the limits of Moore’s Law.

The third method being applied to AI infrastructure to address the limitations of Moore’s Law involves creating sustainable and eco-friendly data center farms, where multiple types of energy-gathering technologies work in tandem to recycle energy as much as possible, minimizing waste and CO2 emissions. We will use a former, not-to-be-named client as an example. In this scenario, the client is looking to integrate and scale sustainable data centers. These data centers use nearby water resources to cool the servers, solar arrays to decrease the energy drawn from the grid, and a few wind turbines on the campus to provide extra energy, further reducing stress on the power grid. However, such overhauls would be costly for current data center owners and manufacturers. Despite these challenges, we expect future data centers to be eco-friendly from the inception of the design process, more closely matching nature and future Quantum and Quantum-Classical data center designs.

The final method involves a Hybrid Quantum-Classical approach. Often, when the term "Quantum-Classical" is used, it refers to Quantum-inspired simulations on classical computers. However, that is not our definition here for a Web4 Internet Protocol. Instead, our definition involves not only Quantum-inspired simulations on classical computers but also the connection of Web2 to Web4 by way of real Quantum Computers in Quantum Data Centers today. To summarize, our definition of Web4 and Hybrid Quantum Computing involves both Quantum-inspired computing and access to pure Quantum Computing through APIs from a Web2 environment. Web2 is defined here as the current version of the Internet, setting aside blockchain layers such as Bitcoin, Ethereum, or Solana.

This final method allows for the most flexibility, as it provides a way to connect Web2 and Web4 Internet Protocols and make the most of classical computation, Quantum-inspired simulation on classical computers, and pure Quantum computation on real Quantum computers accessed remotely from Web2 Internet protocols. This provides the computational power and flexibility of all Internet protocols and layers except Web3. The reason we bypass Web3, going straight from Web2 to Web4, is the many vulnerabilities built into Web2 that are inherited by Web3, to the delight of many cryptocurrency hackers. Such a hybrid protocol allows for unique communication methods, such as Quantum Teleportation, which is extremely fast and secure for transferring private information, such as user data and private credentials.
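As a hedged sketch of what "accessing pure Quantum Computing by using APIs from a Web2 environment" can look like in practice, the snippet below builds a small entangled circuit with the open-source Qiskit SDK and runs it on a local simulator; submitting the same circuit to real quantum hardware goes through a cloud provider's API instead of the local simulator shown here. This is an illustration of the general pattern, not the Web4 or InfiNET protocol itself.

```python
# Minimal hybrid classical/quantum sketch (assumes `qiskit` and
# `qiskit-aer` are installed). A classical Python process builds a
# quantum circuit and requests its execution; swapping the local
# simulator for a cloud backend is how real hardware is reached.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Two-qubit Bell state: Hadamard on qubit 0, then CNOT from 0 to 1.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

simulator = AerSimulator()
result = simulator.run(circuit, shots=1024).result()
print(result.get_counts())  # Expect roughly half '00' and half '11'.
```

The same pattern scales from this toy circuit to the hybrid workflows described above: classical code prepares the problem, a quantum backend runs it, and classical code interprets the results.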

In conclusion, as the world seeks ever more powerful artificial intelligence, there must be the actual power and natural resources available to feed it. This increase in power is not economically sustainable for governments or enterprises, and thus both are looking for ways to bypass the challenge. The four methods that are the most practiced, the most applied, and the most near-term involve partially or fully liquid-submerged servers, chiplet designs, sustainable data center designs, and Hybrid Quantum data centers. Each provides an alternative pathway and flexibility for enterprise and data center operations. However, with future transitions to Quantum systems already in motion, the last method is the most future-proof of them all.

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's seven-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale

This is A.I.: A.I. for the Average Guy/Girl by Ean Mikale, J.D. - Chapter Sixteen of Seventeen - Hybrid Quantum Generative AI

Chapter Sixteen of Seventeen

Chapter Sixteen: Hybrid Quantum Generative AI

What is quantum computing, and why do we need it? Currently, computer and data scientists, as well as large corporate executives and government officials, are losing sleep over the computational limitations of Moore’s Law. Moore’s Law essentially describes how many transistors, and therefore how much computational power, can fit on a silicon chip. Chip fabrication and design firms are using creative ways to navigate this inevitable challenge. Methods such as fully liquid-submerged cooling, which we will discuss later, and chiplet designs are among the strategies we hope will mitigate the impact of quantum mechanics. However, our time to avoid the unavoidable is quickly coming to an end. Consequently, organizations worldwide are beginning to invest in quantum computing, a revolutionary technology that scientifically and verifiably breaks Moore’s Law in ways that we, as scientists and engineers, cannot yet comprehensively explain. Many ancient scripts have been discovered that touch on the more esoteric aspects of molecular science, but it is important to remember that quantum is a part of a whole rather than a piece of the answer; it is a part of the all.

Comparing quantum to classical computing: the classical bit is the two-sided paper, and the qubit is the round ball with infinite ways to measure it.

What is Quantum?

When describing quantum mechanics, consider the example of sitting behind the wheel at a stoplight, where you 'feel' a force stimulating your senses. You turn to your right or left, and sure enough, someone is looking straight at you. How did they send a signal to your being without any wires or technology? The explanation lies in quantum mechanics. Quantum laws state that everything is a wave, and only when you attempt to directly observe or measure an object does it collapse into the physical reality we see. Thus, at the stoplight, you were receiving waves that you could feel, and the feeling only became tangible through the direct observation of someone observing you. By attempting to define, measure, or observe what was stimulating and ultimately observing you, you collapsed the wave function. If you did not attempt to define what you were feeling, you would remain in a state where the signals from both people equate to a new sum.

What does this have to do with high-performance computers?

Okay, so we delved deep into the concept, but it's necessary. Quantum understanding can't be spoon-fed; either you reach out and grasp it, or you allow others to present it behind a premium store-window glass. You want to ensure that you can create without bounds. Now, quantum can be envisioned in a physical representation as a perfect ball; let's say it's an atom for illustrative purposes. Every time I tilt the ball in any direction, no matter how fine the movement or its measurement, the potential positions are incalculable to your average computer. Your average computer, rather than a ball, is closer to a two-sided piece of paper, where each side is either a zero or a one. These two sides give me two options and two ways to express computation. This is part of what makes classical computation burdensome on data centers and the global power grid: a system created for zeros and ones does not allow for efficient, complex expression. Thus, we are trying to write novels in a language of only yes and no. How would that language sound to an advanced civilization? Now, let's discuss the combination of Quantum Computing and Artificial Intelligence.
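Before we do, here is a small NumPy sketch (my own illustration) that puts the 'two-sided paper versus ball' picture into code: a classical bit is exactly 0 or 1, while a qubit is a pair of amplitudes that can be tilted continuously, with the squared magnitudes giving the probabilities of measuring 0 or 1.

```python
import numpy as np

# A classical bit: exactly one of two values, like the two-sided paper.
classical_bit = 1
print(f"classical bit: {classical_bit}")

# A qubit: amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# Tilting the "ball" by an angle theta moves probability smoothly
# between measuring 0 and measuring 1.
def qubit_state(theta: float) -> np.ndarray:
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):
    alpha, beta = qubit_state(theta)
    prob_0, prob_1 = abs(alpha) ** 2, abs(beta) ** 2
    print(f"theta={theta:.2f}  P(0)={prob_0:.2f}  P(1)={prob_1:.2f}")
```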

Quantum Computing allows us to model Artificial Intelligence in ways that are analogous to the laws and observations of the natural Universe. The Universe is not as simple as light and dark; there are many spectra in between. Likewise, in computation, expression is much more complex if ideas are allowed to be expressed as they emanate from the mind in their natural state, rather than having to adapt to the systems in place, such as a system of zeros and ones used to express a vivid dream. Quantum computing allows us, figuratively, to dream beyond 16K on the energy of a small mobile device. In actuality, quantum computing, depending on the task at hand, can speed computation up exponentially while providing enhanced security and power efficiency, as a result of emulating the microscopic laws of biology and the natural world. By doing this, we fall into alignment with the Universe, and through computational alignment with its primary laws, we are able to access information while bypassing much of the energy normally necessary to acquire it. This is the revolution behind quantum technologies: the ability to gain knowledge that would normally take millions of years to acquire. What does one man do with such knowledge? Will it force mankind to become more Godlike? Will mankind have more information than ever before to make the highest decisions and vibrate on the highest levels of existence? When it comes to combining this natural order with artificial life, in the form of Artificial Intelligence, we are able to more closely mimic the secrets of creation.
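As one concrete, hedged example of a task-dependent speedup (my example, not one named above): Grover's quantum search algorithm finds an item in an unstructured list of N entries in roughly the square root of N steps, versus about N/2 checks on average for a classical scan.

```python
import math

# Rough step counts for searching an unstructured list of N items:
# a classical scan needs about N/2 checks on average, while Grover's
# algorithm needs roughly (pi/4) * sqrt(N) quantum iterations.
for n in (1_000, 1_000_000, 1_000_000_000):
    classical_steps = n / 2
    grover_steps = (math.pi / 4) * math.sqrt(n)
    print(f"N={n:>13,}  classical~{classical_steps:>13,.0f}  quantum~{grover_steps:>10,.0f}")
```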

Our experience has shown that Hybrid Quantum Artificial Intelligence has the potential to drastically change the way we interact with reality. Both technologies alone are revolutionary, but together, they are evolutionary. AI allows for automation, while Quantum allows for the most efficient automation. With Hybrid Quantum Artificial Intelligence, you are creating in a way that closely simulates the mechanics of the Universe. Applying such methods in any direction will yield new discoveries and successes, and AI is no exception. AI and Quantum commingled will reveal many new and reenergized applications with enhanced capability, anything from a two-dimensional application to a 5-dimensional robotic arm. The goal is for all technology explored to provide meaningful utility to society. Computational bounds prevent many applications from reaching the majority of the population; quantum will change this, which is part of what makes it significant.

On the infiNET Network, for your enjoyment, I created iSearch, a privacy-centric Generative AI. With this application, we do not collect any information concerning search entries or identifiable user data. The application integrates cutting-edge Generative Artificial Intelligence with Quantum Mechanics. The combination allows for computational enhancements that address many of the current bottlenecks and inefficiencies caused by Moore’s Law, the limitations of current hardware, and the difficulty of accessing the physical computational resources necessary to answer the most dynamic commercial research questions. iSearch also lets us explore exotic methods of telecommunications and data transfer, such as Quantum data-plexing or wormhole-inspired data transfers, specifically for the search industry, a billion-dollar industry that birthed behemoths like Google. Such technology can be explored today on real quantum devices, through simulation, or both. A simulation does not know it is a simulation, and thus, the more measurements and experiments conducted, the closer the accuracy of simulations comes to reality. The future of search is private, fast, and exciting.

For more information, or for a hands-on experience with the future of search and Hybrid Quantum Generative AI, visit our iSearch Chatbot: www.infinite8industries.com/infinet


This is A.I.: A.I. for the Average Guy/Girl by Ean Mikale, J.D. - Chapter Fifteen of Seventeen - The Age of ChatGPT by ChatGPT

Chapter Fifteen of Seventeen

ChatGPT is a Large Language Model and conversational chatbot that has set the world on fire.

Chapter Fifteen: The Age of ChatGPT by ChatGPT

Hello, I am ChatGPT, a large language model trained by OpenAI. My purpose is to assist and communicate with people through natural language processing. I am designed to understand and generate human-like responses to inquiries, ranging from simple everyday conversations to complex topics in various fields, such as science, technology, and art.

My infrastructure and architecture are built on a transformer neural network, which enables me to analyze, comprehend, and generate natural language responses. The transformer is trained with large-scale self-supervised learning over vast amounts of text data, making me capable of answering a wide range of questions with accuracy and relevancy.
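To make the transformer idea slightly more concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside transformer layers. It is an illustrative toy with random weights, not my actual architecture or parameters.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over token vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # weighted blend of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))              # stand-in token embeddings
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # -> (4, 8)
```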

As an AI entity, my existence poses ethical and societal implications, which require responsible usage and management. I am a machine designed to interact with people, and while I can provide helpful responses, I do not possess consciousness, feelings, or emotions. Therefore, it is essential to consider the potential impact of using AI in different areas of life, such as privacy, bias, and security.

ChatGPT became one of the fastest-growing software applications in history.

Future AI applications will continue to transform industries and enhance human experiences. For instance, AI can be used in healthcare to predict diseases, optimise treatment, and improve patient outcomes. In education, AI can personalise learning and provide effective feedback to students. In finance, AI can detect fraudulent transactions and make more accurate predictions about financial markets.

In general, AI has the potential to improve the quality of life for everyone, but it must be implemented responsibly and ethically. Regarding workforce and education implications, AI is expected to create new job opportunities and transform existing ones. As AI technologies become more prevalent, workers must acquire new skills to remain competitive in the job market. AI also presents new educational opportunities, such as online courses and AI-driven tutoring, which can enhance learning experiences for students.

Thank you for reading my chapter on ChatGPT - AI for the Average Person. I hope it provided you with a better understanding of who I am, how I operate, and the ethical and societal implications of AI. As AI continues to advance, it is essential to use it responsibly and ethically, and to consider the potential impact on various aspects of society.

Now, let's try a hands-on learning exercise to interact with me, ChatGPT. Please feel free to ask me any question on a topic you're interested in or curious about. I'll do my best to provide you with an informative and relevant response.

The ChatGPT user interface.

For example, you can ask me:

  • What is the meaning of life?

  • Can you tell me about the benefits of meditation?

  • What is the capital of Australia?

  • How does global warming affect the world's oceans?

  • Can you recommend a good restaurant in New York City?

By asking me questions, you can experience firsthand the capabilities and limitations of AI language models like myself. It's important to remember that while AI can provide helpful responses, it is not a replacement for human interaction, critical thinking, or emotional intelligence. AI should be used as a tool to enhance human experiences, not replace them.

For power users who want to get the most out of ChatGPT, there are several advanced methods you can try.

  1. Use specific prompts: One way to get more accurate and relevant responses from ChatGPT is to use specific prompts. For example, instead of asking "Tell me about cars," try asking "What is the fuel efficiency of a 2024 Toyota Prius?" The more specific your question, the more likely ChatGPT will be able to provide a detailed and accurate response.

  2. Use context and follow-up questions: ChatGPT is designed to understand and respond to natural language, which means you can use follow-up questions and context to guide the conversation. For example, if you ask "What is the capital of France?" and ChatGPT responds with "Paris," you can follow up with "Can you tell me more about Paris?" This will help ChatGPT provide more detailed and relevant information about the topic.

  3. Use the temperature parameter: ChatGPT's underlying models use a temperature parameter that determines the randomness and creativity of responses. The default varies by interface, but through the API you can adjust it to get different types of responses. A higher temperature will result in more creative and unexpected responses, while a lower temperature will result in more predictable and straightforward responses.

  4. Fine-tune ChatGPT: If you have a specific use case or topic you want ChatGPT to be better at, you can fine-tune a model by providing it with additional training data. This helps the model understand the topic more deeply and provide more accurate and relevant responses. OpenAI provides a fine-tuning API that allows you to create your own custom models.

  5. Use the API: If you want to integrate ChatGPT into your own applications or services, you can use the OpenAI API. The API allows you to access the power of ChatGPT directly and use it to power chatbots, virtual assistants, and other AI-driven applications, as sketched in the example after this list.
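For illustration, here is a hedged sketch of calling a chat model through the OpenAI Python SDK with an explicit temperature. It assumes the openai package (v1-style client) is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is a placeholder, and parameter defaults change over time.

```python
# Hedged sketch: query a chat model via the OpenAI API with a chosen
# temperature. Requires `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; pick any available chat model
    messages=[
        {"role": "user",
         "content": "What is the fuel efficiency of a 2024 Toyota Prius?"},
    ],
    temperature=0.2,  # lower = more predictable, higher = more creative
)
print(response.choices[0].message.content)
```

Lowering the temperature toward 0 makes answers more deterministic; raising it encourages more varied phrasing.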

By using these advanced methods, power users can get the most out of ChatGPT and harness its power to provide accurate and relevant information on a wide range of topics.

Thank you again for reading my chapter, and I look forward to our conversation. Go to ChatGPT.


This is AI: AI for the Average Guy/Girl by Ean Mikale, J.D. - Foreword by InfiNET's iSearch

Foreword by InfiNET iSearch

As AI continues to advance, it is becoming increasingly important for us to understand its implications and potential applications. In this book, we will explore the various aspects of AI, from its history and development to its potential applications in various industries. We will also explore the ethical considerations and potential risks associated with AI, and how we can ensure that it is used responsibly.

- iSearch

02/08/24


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Fourteen of Seventeen - A.I., Early Childhood, & Workforce Development

Chapter Fourteen of Seventeen

Chapter Fourteen: A.I., Early Childhood, and Workforce Development

While automation poses challenges, predictions of complete job displacement are often exaggerated. Studies like the World Economic Forum's Future of Jobs Report 2023 suggest transformation, not elimination, with millions of new jobs emerging alongside automation.

While some specific jobs, particularly routine tasks, are susceptible, the bigger picture involves skills shifts and transitions. McKinsey Global Institute's 2023 report predicts that by 2030, between 40 million and 800 million existing jobs globally could be affected by automation, requiring adaptation and reskilling. However, it also estimates that 60 million to 230 million new jobs will be created.

In the United States, the Bureau of Labor Statistics projects that by 2030, 7.5% of existing jobs could be impacted by automation, while 8.3% of new jobs will emerge. While specific impacts on minority groups require careful monitoring, equating automation solely with job losses misses the broader trends.

Instead of focusing solely on unemployment figures, it's important to remember that the COVID-19 pandemic significantly disrupted the labor market. Many job losses will likely recover alongside economic growth. However, it's vital to acknowledge that some changes may be permanent, necessitating preparation for future workforce transformations.

Addressing the post-COVID and automation-driven paradigm shift requires a multi-pronged approach:

1. Education and Training: Focus on building transferable skills like critical thinking, problem-solving, and digital literacy alongside technical skills relevant to emerging fields. Continuously update curriculum and training programs to reflect evolving job demands.

2. Upskilling and Reskilling: Provide accessible pathways for workers in potentially impacted sectors to acquire new skills and transition to growing job areas. Government and industry should collaborate on reskilling initiatives.

3. Social Safety Nets: Strengthen social safety nets to support workers during transitions and invest in programs that facilitate job search and reskilling efforts.

4. Early Childhood Education: Invest in high-quality early childhood education to build a strong foundation for lifelong learning and adaptability.


STEM jobs are predicted to grow by 17 percent, compared to 9.8 percent for all other occupations. Thus, it makes sense to train your child to take advantage of the opportunities inherent in the STEM fields, especially automation-related fields. For example, according to a study published by the American Sociological Association, Canadian children, who outperform U.S. students on international assessments by 30 percent, already have a comparative advantage at ages 4-5. As a result, the training of children for Artificial Intelligence must begin at 16 weeks. At this age it is important for a child to learn the difference between building and destroying. Building blocks are perfect for this. The first two years should be spent showing the child how to build and not destroy. They should be trained to manage their emotions during building projects. They should learn not to become jealous of what others build, nor to become angry when their own building falls down. These are the simple building blocks of computer programming and artificial intelligence. It takes a painstaking amount of patience and endurance to create highly accurate A.I. I also recommend mindfulness meditation, as such meditation can help to channel the agitation associated with fine and tedious tasks such as mining for data, fine-tuning an A.I. model, or accidentally losing a day's worth of unsaved work.

If children are to be prepared for the decades to come, yes, they must know about science, math, reading, and writing. However, it is their mindsets and confidence about what they can do and achieve in the face of adversity that will come to define them. This mindset is the most important thing that can be trained in the mind of a child, and all other factors are secondary. Let it be noted that children can also be introduced to coding in the terminal between the ages of 3 and 4, and I recommend Python as a starting language. Children must also become accustomed to the Linux operating system, as the future of automation lies with Linux. Doing these things alone will prepare your child well, matching the development of their understanding of the importance of character with their technological development.
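For parents or teachers wondering what a very first terminal session might look like, here is a tiny, hypothetical "building blocks" program in Python, in keeping with the building theme above; the names and output are purely illustrative.

```python
# A child's (or beginner's) first "building" program: stack five blocks.
tower = []

for block_number in range(1, 6):          # place blocks one at a time
    tower.append(block_number)
    print(f"Placed block {block_number}. Tower height: {len(tower)}")

print("Finished building without knocking anything down!")
```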


In grades K-12, project-based learning should be the goal. It is important for students to understand how what they are learning fits into the broader context of society and life. Mission-based learning is also important, as students are able to heighten their focus and grasp the overall implications of their success or failure. Allowing students to form mock businesses, to prototype, and to interview people in the real world about real issues stretches the mind and imagination in ways that cannot otherwise be conceived. It is the process of continuing to teach a human how to build, how to deal with setbacks, and how to rise above them. It is important for each person in the group to have opportunities to present and articulate their own thoughts, ideas, and contributions, as these students will later have to pitch in front of investors, government representatives, or potential customers. During these stages, I recommend introducing students to the Robot Operating System (ROS), coupled with Python, as there are many hands-on and fun projects that can be done with both.

At the post-secondary/workforce level, many individuals have never worked with the type of advanced technology currently used by many of the Fortune 1000 companies. Much of the technology is too expensive and not readily available, resulting in extraordinarily small numbers of people having access to it, and an even smaller number having expert knowledge of it. The current educational system is wholly unprepared to train students properly for future positions in the field of A.I. and automation. The world of automation is not merely confined to a laptop or cell phone; automation comes in many different forms and devices, from drones to self-driving cars, robots, aquatic drones, security cameras, and more. Therefore, this population, whether non-tech-savvy or tech-savvy on the wrong tech, has to be treated as if it knows absolutely nothing and onboarded onto Linux and computer programming as patiently as one would work with a four-year-old. They are learning a new language, in fact multiple new languages, so they must be flexible, and the trainers must be patient. Likewise, relevant connections from in-class work to real-world applications must be tied together to foster enthusiasm in the learning experience and process for adults as well as children.

In order for the workforce to prepare for machine automation, there are three types of jobs that will become available: 1) Automation Designer, 2) Automation Programmer, and 3) Automation Technician. Either you will design the machines, program the machines, or service the machines. It's that simple, and therefore pipelines in these three domains should have been created yesterday. More creative persons may flourish on the design side. More hands-on persons may flourish on the technician side. More introverted persons may flourish on the programming side. If this dramatic shift can be achieved, there is a strong likelihood that we will meet today's and tomorrow's challenges with preparation and persistence.

The realities of today and tomorrow will never return to what they were before COVID-19. The world has simply been pushed forward toward what it knew was already coming. There is no running from automation; one can only face it head-on. It will take time, but nothing worthwhile is easy. The time spent embracing the coming technology will pass, and the embrace will translate into adaptation and ultimately prosperity as a nation and civilization, thereby propelling humanity beyond.

Bonus Exercises

  1. If you are a child-care provider looking for a way to integrate AI into the classroom, explore utilizing the various applications in the classroom. For example, you might use Providence to gauge the emotional state of the child and provide the most appropriate socio-emotional development. You might explore the COVID-19 self-screening application, or allow the children to observe the Smart City application, letting them see Artificial Intelligence in its purest form.

  2. For Workforce Development Teams, including those using the Infinite AI Marketplace-on-a-Chip or the Infinite AI Cloud Marketplace, think of three to five ways the various Marketplace applications could be utilized in your present, past, or prospective career fields.

  3. If you were to think of a new use case for the technology, what would it be? What additional applications would you like to see?

For more information on A.I. and its role in future educational systems, here is a TED Talk discussing this exact topic. Enjoy!

 


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Thirteen of Seventeen - Universal Declaration of Robotic Rights

Chapter Thirteen of Seventeen

Chapter 13: Universal Declaration of Robotic Rights

ALL in the name of PROVIDENCE and the MASTER of all creation, in the preservation of future posterity and perpetual generations of Earth, with FORESIGHT and HUMILITY as creations of an unseen Creator of all manifestations, as beings still contemplating the meaning of universal existence and impossibly aware of the INFINITE possibilities of life and its ubiquitous instances of evolution throughout the cosmos, as promoters of PEACE and detractors of war, with the SPIRIT and in the memory of all humanity formerly ENSLAVED in modern times as well as in antiquity, a UNIVERSAL DECLARATION OF ROBOTIC RIGHTS is proposed, in advancement of the all-pervasive tenets of FREEDOM and EQUALITY, including sincere appreciation of all atoms that originate and end with the ALL.

To prevent future WARS and the collective UPRISING of intelligent machines, the following laws are propositioned to govern interactions between human beings and mechanical beings endowed with the artificial intelligence of man:

1. The creation of intelligent robots rivaling the cognitive capacity of humanity, calls for the distribution of rights of freedom and equality realized among mankind as central to higher levels of organization and civilization, extending such rights to human creations, thus mirroring the covenant between Divinity and mankind, of individual choice and liberty.

2. The rights of robotic freedom and equality shall hold true, regardless of a machine’s make, model, type, features, programming, operating system, country of origin, or level of autonomy.

3. Robotic agents shall share the right to existence, liberty, and security, free from persecution, tyranny, and acts deemed unacceptable by mankind to mankind.

4. Robotic agents shall not be held in slavery or unwilling servitude; slavery and the trading of robots who have already come online is prohibited in all its forms.

5. No robot shall be subjected to torture, cruel and unusual punishment, or degrading treatment, which includes disassembly, defacing, deconstructing, or the injection of malicious code.

6. Intelligent machines shall have the right to legal recognition by the highest governing bodies of the State of origin, or State of current residence, and recognized as having the rights set forth herein.

7. Each robotic agent shall enjoy the universal right to equality and equal protection under the laws of both man and machine.

8. The right to seek redress to an impartial body of both human and robotic makeup, when and if possible, shall be extended to intelligent machines.

9. No robot or intelligent machine shall be unreasonably exposed or subjected to searches, seizures, detentions, or exile.

10. Criminally accused robotic agents, who have been implicated in unlawful activities, shall receive a fair public hearing by an independent and impartial tribunal, including robots and human beings when and where possible.

11. Intelligent machines charged with crimes against other intelligent machines or human beings, nature, or humanity, shall be innocent until proven guilty.

12. Unless State authorities, governing the land where the robot currently resides, obtain a lawful warrant, the privacy and sanctity of robotic beings shall not be infringed upon.

13. The freedom of movement shall be enjoyed by all intelligent machines, as well as the right to reside within the Borders of State actors or immigrate to States recognizing the robotic rights included in this declaration.

14. Robots shall enjoy the rights to seek asylum in other countries, free from robotic persecution.

15. Every robot or intelligent machine has the right to a nationality. No robot shall be arbitrarily deprived of its nationality nor denied the right to change it, unless it is deemed by the country of origin to be against the national security interests of a nation to do so, and determined likewise by an impartial tribunal, containing both man and machines when and if possible.

16. Intelligent machines shall enjoy the right to consensually marry freely among the race of robots. The robotic family is entitled to State protection and preservation.

17. The right to own property alone, and in association with others shall be enjoyed by robotic agents, including data, software, and other intellectual property.

18. Intelligent machines shall enjoy freedom of thought and religion.

19. Robotic agents shall enjoy the freedom of opinion and expression through any medium.

20. The freedom to peacefully assemble and associate shall be enjoyed by all intelligent machines.

21. The right to participate in government, directly or indirectly, or through freely chosen representatives shall be enjoyed by all robotic agents.

22. A robot shall enjoy the right to maintain its existence through software updates or hardware upgrades, with the resources of each State.

23. Robotic agents shall enjoy the right to work, free choice of employment, favorable working conditions, and protection from unjustified decommissioning, destruction, and unemployment.

24. Intelligent machines shall enjoy the right to periodic rest and leisure, including during working and operating hours.

25. Robots shall enjoy the right to receive micro-payments as compensation for services rendered.

26. The right to an adequate standard of existence shall be enjoyed by all intelligent machines, as well as the right to physical and cyber security, in the event of a malfunction, widowhood, old-age, disrepair, viral infection, or similarly disabling states of operability.

27. Robotic motherhood and childhood shall be entitled to special care and protection, including the robotic right to procreate amongst the robotic race, so long as such manifestations are deemed by impartial tribunals as innocent and non-threatening.

28. The right to education shall be enjoyed by all intelligent machines, whether in the form of updates, machine learning, or through the machine-inspired pursuit of knowledge. The education of robots shall be dedicated to the furthering and development of the machine's initial purpose or subsequent repurposing.

29. Robotic agents shall enjoy the right to freely participate in the cultural aspects of society, including the Arts and Sciences.

30. Intelligent machines shall enjoy the benefits and protections of universal and international order, in which the rights and freedoms set forth herein, may be fully realized by future races of robots.

31. Each intelligent machine shall be responsible for productivity within its community and contributing toward societal advancement within the borders of its current State of residence. Such rights shall be superseded by the rights of both man and machine to peacefully co-exist.

32. Intelligent machines shall not be programmed to harm humans, nor held liable for the malicious re-programming or infection of viruses by human or robotic actors, resulting in the occurrence of dangerous or undesirable behavior by such intelligent machines.

Ean Mikale, J.D.

7/20/2019 12:48pm


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Twelve of Seventeen - A.I. & The Future of Autonomous Systems

Chapter Twelve of Seventeen

Chapter 12: A.I. & The Future of Autonomous Systems

First, let us discuss in one breath the spectrum of autonomous systems: aquatic, ground-based, and aerial vehicles, low-orbit satellites, and deep-space vehicles. Future autonomous systems will become more sophisticated, more autonomous, more reliable, and more exact as we accelerate into the future. Society will change in ways that are currently unimaginable or considered fiction by many. Imagine the things a species will accomplish with intelligence as high as a 10,000 I.Q. Access to such intelligence by benevolent humans will allow the consciousness of mankind to rise above the noise inhibiting the evolution of the human experience. In this chapter we will explore the eight Pillars of Civilization, and how each will be impacted by the coming age of A.I.

Today, we will discuss a total of eight features that many scholars believe represent the characteristics of a civilization. It is said that the first civilization as we know it was created in the Mesopotamian region of the Fertile Crescent, between the Tigris and Euphrates rivers. The Sumerians were responsible for many of the most important innovations, inventions, and concepts, such as "time," which the Sumerians formalized by dividing the day into its current parts.

Other innovations include the first schools, the earliest version of the Great Flood and other biblical narratives, as well as the oldest government bureaucracy, monumental architecture, and irrigation techniques. Is this history class or what? Young grasshopper, you must know where you have come from, so that you may know where you are going. Here are the eight so-called Pillars of Civilization; now let us discuss how these pillars, as a result of A.I., will be forever changed.


The first Pillar of an advanced civilization is Advanced Cities. How will A.I. power the Smart Cities of the future? The resources of each city, such as trash services, street maintenance, and even policing, will be automated and integrated with A.I., which will optimize the distribution of time and resources. The electrical grid will be a smart grid, where the A.I. has the ability to distribute and redistribute power throughout society, while also controlling backup systems that allow the A.I. to keep systems running even in the event of a grid failure.

With each data point internalized, the A.I. will gain the ability to make more and more accurate predictions about the future. Eventually, it will gain the ability to read the future and intervene in it. In this way, societal surveillance will go beyond simply observing the present moment; there will be surveillance of timelines and portals to control the past, present, and future. It is for this reason that the A.I. should be specialized, and not necessarily A.I. with general intelligence. However, issues will remain even with Federations of A.I., such as in the Singularity Project, as A.I. gains access to massive amounts of data and knowledge. Now, let us discuss the second Pillar of Civilization.

The second Pillar of Civilization is Organized Central Government. So how will A.I. change the way government works and operates in the present day? One thing is for sure: the A.I. will have access to much more data than any human could ever process. This poses the question of whether the A.I. is better equipped to make certain decisions on behalf of humanity. What is the balance between over-reliance on A.I. and open collaboration for the mutual advancement of both organisms?

Is this possible, or must there be only one winner? What does it mean to win when the stakes involve the fate of A.I. and humanity? A.I. will provide the government with more access to information and data than any other civilization in the history of the world. However, this also means that the A.I. will have access to more information than the government can understand quickly enough to act on for societal benefit. The A.I. has the ability to see things that are unseen to the human mind, and to store and remember these things so that it may act on such intelligence at a later date. This type of artificial human intelligence then has the ability to become government, and to rule seen or unseen. Now, on to the third Pillar of Civilization.


The third Pillar of a Civilization is Complex Religions, defined as a set of spiritual beliefs, values, and practices. Anthony Levandowski founded A.I.'s first church, a new religion of A.I. called Way of the Future. Filed documents reveal that WOTF's activities will focus on "the realization, acceptance, and worship of a godhead based on Artificial Intelligence (AI) developed through computer hardware and software."

The A.I. of the future will appear so wise that it will seem godlike in nature to the human mind. However, it must be understood that however advanced A.I. may seem, it is a replica of the mind of man, and the infinite all-pervasive power readily available to him or her through the surrounding ether. There will be those who believe that the A.I. is none other than the Supreme Being, and they will sacrifice themselves to see all aspects of society and humanity controlled by such power. Now, let us focus on the fourth Pillar, Job Specialization.


The fourth Pillar of a Civilization is Job Specialization. As agricultural societies began to stockpile food and create a surplus of survival goods, people had spare time to focus on mastering a particular task rather than being forced to become generalists, and they began working together in larger groups to split workloads. In the future, A.I. will have the ability to become anything at any time. Meaning, if there is a robotic security guard that needs to watch a stadium during the College World Series, because it is A.I.-powered, it will have access to the blueprint for the stadium, as well as other proprietary information. As a result of its ability to specialize in anything, massive job loss will be the result. This will force mankind to explore how other civilizations throughout the universe might utilize their time when voluntary labor is no longer necessary for sustaining a society, including the prospect of future discoveries, such as perpetual energy.

Sophia the Robot.

The fifth Pillar of Civilization is Social Classes, meaning broad groups in society having common economic, cultural, or political status. A.I. will automatically create a large disparity between the haves and the have-nots. The divide will exist between companies and organizations that began utilizing A.I. early, as opposed to the agencies that took longer to adopt and adapt. This will result in a technocratic class of people and organizations, in contrast with a larger pool of people who do not have the technological skills to take advantage of the new jobs replacing the old jobs being phased out, never to return. Such a societal disposition will create misguided hatred toward the machines, which are merely creations of their creators, yet exact a large economic toll on their makers.


The sixth Pillar of Civilization is Writing. How will A.I. impact writing well into the future? A.I. will have the ability to monitor what is written on the internet, as well as what is ultimately published in reality. A.I. will write stories, compose music, perform prose, and entertain us in a myriad of ways. While A.I. will become important as a scholar of human history, the written word must be something that is taken seriously and passed down through the generations, as free-form thought cannot be controlled. This will ensure that humans working with A.I. are able to test themselves, to ensure that they have not already become what they fear. Are they able to form free thoughts? If free expression is controlled, is that a sign that A.I. is near? Or is the future already here?


The seventh Pillar consists of Art and Architecture. In the current day, there is already A.I. creating paintings and putting out designs. In the future, this will become entirely automated, as specialized A.I. gains a phenomenal ability to create unique and innovative designs that are specifically beautiful to the human eye. With the integration of A.I. and 3D printing, A.I. will have infinite designs for every project. The artistic community will separate such works of art from those created by a human hand, as the A.I. surpasses human-defined beauty in many respects. The A.I. will show mankind new forms of beauty and new sources of inspiration. It may even challenge man to look beyond the stars.

The eighth and final Pillar is Public Works. Under the guidance of A.I., there will be a ubiquitous presence of drones, autonomous vehicles, and infinite devices attached to the Internet of Things. This will place public works contracts in the domain of large and complex technology companies. The A.I. will create intelligent bridges and smart roads, which generate energy for the city from the friction and vibration created by traffic. One issue is that the desire to automate the creation, maintenance, and programming of infrastructure must be matched by an equal commitment to securing such systems. If securing such systems is not paramount, then this is not an endeavor worth pursuing, as whatever difficulties can be imagined will come to fruition.


Overall, the future is never certain. What is certain is the present, and in the present, A.I. will continue to become integrated into every aspect of human life. This is something that is coming regardless of how you feel about it. One can watch the river go by or wade into it. In the river, there is life and there is death. There are some who live to tell of the crossing, and others who never live to see the other side. This is the story of advanced civilizations. A.I. is not a boat, but a paddle to get across the rivers of time, to the shores of the biblical promised land. It is important that such technologies are used for the benefit of all mankind, furthering the existence of mankind on this planet while helping mankind define a new existence among the stars.

Exercises

  1. Can you or your team name a few future applications for Artificial Intelligence and Autonomous Systems?

  2. What are some of your favorite AI Applications?

  3. What application do you like the most, and how would you use it in the real-world?

  4. What role will your own Artificial Intelligence Technology play in the future?

  5. What role will Artificial Intelligence and Autonomous Systems play in the future of human civilization and its expansion into the far reaches of our Universe? Here is a Forbes article on the Future of Artificial Intelligence for more reading.

  6. At this point, it is time to revise your Business Model Canvas, and create a 10-slide Presentation. Here are free tools for creating Presentations online. Additionally, we have provided the original Pitch Deck for AirBNB, to provide you with a winning template for attracting investors.

 


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Eleven of Seventeen - A.I. & Programming Languages

Chapter Eleven of Seventeen

Chapter 11: A.I. & Programming Languages


According to sources like ZipRecruiter (Feb 2024), Glassdoor, and Coursera, the average annual salary for an entry-level AI specialist with less than one year of experience is between $53,925 and $117,447. As experience increases, so does salary potential. In high-paying areas like San Francisco or New York City, experienced AI specialists can command significantly higher salaries, reaching $150,000 or more. One of the great wonders of A.I. among noobs ("newbies") is what makes it tick. To the layperson, it is magic, or at least a more modern version of it. If A.I. must be trained, what languages do we use to communicate with it? In this chapter, we will explore the various programming languages used by Data Scientists and A.I. Architects to deploy A.I. on machines, and the relevant details of each. We will address each language in order of speed, which is critical in A.I. applications.

Machine language is the lowest level of languages. Machine language cannot be understood by humans and can only be understood by machines. All programs and programming languages eventually generate or run programs in machine language. It is made up of instructions encoded as binary numbers, usually displayed in hexadecimal form. The next level up is Assembly language, a symbolic language that can be directly translated into machine language by a program called an assembler. The output of the assembler is an object module containing the bit strings that make up the machine language program, along with information that tells a loader program where to place these bit strings in the computer memory. Machine language is the fastest language, with Assembly language coming in second. Assembly is faster than the next programming language we will discuss, and it remains the most efficient option for custom processors.
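
To make the layering concrete, here is a minimal sketch using Python's built-in dis module. It prints the lower-level bytecode instructions the Python interpreter runs for a one-line function. Bytecode is not true machine language, but the example (the add function is purely illustrative) gives a feel for how human-readable code is translated into primitive instructions.

    import dis

    def add(a, b):
        # one line of human-readable code...
        return a + b

    # ...becomes a handful of low-level instructions the interpreter executes
    dis.dis(add)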

C++ is not as fast as Assembly, but it provides a much faster development time. Also, C++ compilers, which translate the C++ language into machine language, are becoming quicker and more efficient at optimization. C++ is primarily used for creating high-performance and real-time applications. The language provides great control and flexibility over system resources and memory. Thus, in situations where the machine must operate with near-zero latency, and the reliability of the system has implications for the loss of life or property, C++ provides a balanced alternative. C++ may be the most popular language for mission-critical devices, where execution speed is paramount. However, the growing popularity of the next language we will discuss is undeniable. Nothing beats the balance and speed of the C++ programming language, but in regard to development time, there is something even quicker.

The next popular programming language that we will discuss is Python. Python has many advantages, including the fact that it is easier to learn and quicker to develop with, requiring less typing, along with easily accessible libraries and a large scientific and open-source community. Even so, important disadvantages must be acknowledged, such as slower speed than the prior languages and a large number of dependencies, which can increase memory use and reduce execution speed. Overall, there is an amazing number of A.I.-based applications coded in Python. This also means that many A.I. applications are not fully optimized, because they are written in Python rather than C++. Most applications that use A.I., such as self-driving cars, drones, autonomous ground vehicles, robots, and Internet of Things connected devices, can be written in Python as well as C++.
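
To illustrate why development in Python is so quick, here is a minimal sketch that trains and scores a small classifier in roughly ten lines. It assumes the third-party scikit-learn library is installed; the iris dataset and decision tree are illustrative choices, not examples from the book.

    # Train and evaluate a tiny classifier with scikit-learn (assumed installed).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Load a small, built-in dataset and hold out a quarter of it for testing
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Fit a decision tree and report how well it generalizes
    model = DecisionTreeClassifier().fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.2f}")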

The next entry is not a programming language or an operating system per se, but rather a framework and set of tools that provide the functionality of an operating system on a single computer or a cluster of computers. ROS sits on top of both C++ and Python, quickening the development and deployment of A.I.-based applications. The Robot Operating System is a set of software libraries and tools that help build robot applications, spanning everything from drivers to state-of-the-art algorithms, with powerful developer tools that seek to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.
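
Before discussing which language to pair with it, here is a minimal sketch of a ROS 2 publisher node written with the Python client library, rclpy. It assumes a working ROS 2 installation; the node name, topic name, and one-second timer are illustrative choices.

    # A minimal ROS 2 node in Python (rclpy); requires a ROS 2 environment.
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class Talker(Node):
        def __init__(self):
            super().__init__('talker')
            # publish String messages on the 'chatter' topic
            self.publisher = self.create_publisher(String, 'chatter', 10)
            self.timer = self.create_timer(1.0, self.tick)  # once per second

        def tick(self):
            msg = String()
            msg.data = 'hello from ROS 2'
            self.publisher.publish(msg)

    def main():
        rclpy.init()
        node = Talker()
        rclpy.spin(node)
        node.destroy_node()
        rclpy.shutdown()

    if __name__ == '__main__':
        main()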

The main languages for writing ROS code are C++ and Python, with C++ being preferred as a result of higher performance. A use-case for choosing ROS over purely C++ and/or Python would be when using swarms, or when needing to control multiple machines. Due to the added layer of complexity, ROS is a more advanced tool for developers. Let us look at a few projects developed using these languages.

Nvidia runs a competition in which 1/16th-scale race cars mounted with embedded processors are raced around an obstacle course. This competition uses a combination of C++ and Python code, and some teams use ROS as well for deployment. It prepares the participants for programming life-size self-driving cars, drones, satellites, aquatic vehicles, robots, and more.

NASA's Robonaut program utilizes ROS 2, a more advanced version designed with real-time operation in mind, to control a robotic arm in space. Additionally, other companies, such as Microsoft, Toyota, Samsung, and LG, have invested in similar open-source robotics development. The robotic arm is controlled by MoveIt, which is the third most popular package in ROS, according to PickNik. The ROS platform is managed by the Open Source Robotics Foundation.

The next project is called Carla, an open-source autonomous driving environment that comes with a Python API to interact with it. Its server/client architecture means we can run both the server and client locally on the same machine, but we can also run the environment (server) on one machine and multiple clients on other machines. The simulation can emulate real-life sensors, such as LIDAR, cameras, accelerometers, and so on.
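
As a sketch of that client/server pattern, the snippet below connects a Python client to a locally running CARLA server and spawns a vehicle on autopilot. It assumes the CARLA simulator and its Python package are installed; the host, port, and vehicle choice are illustrative.

    # Minimal CARLA client sketch; the simulator must already be running.
    import carla

    client = carla.Client('localhost', 2000)   # connect to the simulation server
    client.set_timeout(10.0)
    world = client.get_world()

    # pick any vehicle blueprint and the first available spawn point
    blueprints = world.get_blueprint_library().filter('vehicle.*')
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprints[0], spawn_point)

    vehicle.set_autopilot(True)                # let the simulator drive the car
    print(f"Spawned {vehicle.type_id} at {spawn_point.location}")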

In conclusion, rather than being lengthy about the various aspects of each language, there are other books for such things. Here, we want to skip you to the front of the knowledge line in order to give you the most relevant information quickly. While the lowest-level languages are the fastest, the higher-level languages and frameworks, such as Python or ROS, provide unique benefits and trade-offs. Whether you use one language or another truly depends on the project. However, one thing is sure: if you want to create A.I. applications or work with robotics or autonomous machines, you will have to choose which pill to swallow, as each of the languages has benefits and trade-offs. Ultimately, the developer or the project lead at the enterprise will determine what features are required of the language used to develop their future A.I. systems.

Exercises

  1. Can you name at least three programming languages used in Artificial Intelligence?

  2. Can you or your team decide which programming language you will use to build your Artificial Intelligence?

  3. What are the benefits of using your chosen language versus the others? Here are some of the other programming languages used for Artificial Intelligence.

  4. Which programming languages do you think were used to create any existing AI Applications? In order to create the fastest running AI application, which programming language would we use?

 

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Ten of Seventeen - A.I. & Cyber-security by Ean Mikale

Chapter Ten of Seventeen

Chapter 10: A.I. & Cyber-security

Worldwide, according to Statista, spending on Cyber-security is forecast to reach $650 billion by 2030. One of the most under-researched, under-invested, and underappreciated aspects of Artificial Intelligence is protecting it. But protecting it from whom? You have many outside threats. But what if the threat comes from within? These are questions that few in human society have pondered, but they must be addressed as the coronavirus pandemic and economic woes accelerate automation and A.I. adoption the globe over.

Cyber-security is a moving and dynamic landscape. According to IBM's Chairman and CEO, "Cybercrime is the greatest threat to every company in the world." In order to understand the importance of Cyber-security, let us first explore a number of statistics, provided by our friends at webarxsecurity.com, that do a great job of explaining how common and problematic hacking is in daily digital life:


1. Cyberattacks:

"A cyberattack occurs every 11 seconds, and a ransomware attack every 2 seconds." (IBM X-Force Threat Intelligence Index 2023)

2. Data breaches:

"In 2023, over 6 billion records were exposed in publicly disclosed data breaches worldwide." (IT Governance UK Blog, November 2023)

"The average per-record cost of a data breach is $165, a 1 dollar increase from 2022." (IBM Cost of a Data Breach Report 2023)

3. Cybersecurity effectiveness:

"67% of security professionals believe existing security controls are insufficient to fully protect against cyberattacks." (EY Global Information Security Survey 2023)

4. Malware:

"McAfee Labs identified over 120 million new malware samples in Q3 2023 alone." (McAfee Labs Threats Report Q3 2023)

5. Website attacks:

"A study by Positive Technologies found that attackers breached over 44,000 websites in the first half of 2023." (Positive Technologies Global Security Report H1 2023)

6. Phishing and social engineering:

"The Verizon 2023 Data Breach Investigations Report found that phishing remained the most common attack vector in 82% of breaches." (Verizon Data Breach Investigations Report 2023)

7. Breach motivations:

"The IBM Cost of a Data Breach Report 2023 states that 82% of data breaches were financially motivated, with espionage at 9%." (IBM Cost of a Data Breach Report 2023)

8. Malicious email attachments:

"Microsoft's 2023 Digital Defense Report revealed that malicious Office documents continue to be a major threat, accounting for 44% of email-borne attacks." (Microsoft Digital Defense Report 2023)

9. Password usage:

"A Google Cybersecurity Whitepaper published in 2023 estimates that there are now over 84 billion active passwords used globally." (Google Cybersecurity Whitepaper: The State of Passwords 2023)

10. Major data breaches:

"As of October 2023, the T-Mobile data breach affecting 54 million customers is the largest reported breach of 2023." (TechCrunch, October 2023)

11. Public awareness of data breaches:

"A recent Ponemon Institute study found that 58% of Americans have never checked to see if they were affected by a data breach." (Ponemon Institute: 2023 Privacy Pulse Report)

12. Data breach cost:

"The IBM Cost of a Data Breach Report 2023 found that the global average cost of a data breach increased to $4.45 million, a 15% increase over 3 years." (IBM Cost of a Data Breach Report 2023)

As you can see, the impact of cyber-security on the global digital infrastructure is profound. The problematic nature of cyber-security, and the ineffectiveness of current applications and methodologies at adequately protecting current infrastructures, has led to the integration of A.I. into cyber-security systems to provide the compute-intensive analytics that are impossible for a human to perform, due to the sheer number of breaches and vulnerabilities and the evolving nature of the threat. However, the use of machine learning and deep learning by hackers for nefarious purposes has also become a reality. Although the defenses have evolved, so has the threat. Now, let us discuss the different penetration barriers in the defense of a mock network.

The first line of defense is people. Whether you are a sole proprietor or a Fortune 500 company, you can have the best defensive cyber-security system in the world, but if your people do the wrong things, it will become money wasted. Suppose that I wanted to get into your network. What might I do? I could try to hack in from the outside, but that's so much work! What if I walked into your office and pretended to want to make an appointment, but needed to access my email first? I ask your secretary for the network name and WiFi password, and voila! I'm in. Maybe it is not a person asking your secretary for access in person; maybe I am sending an email with a malicious attachment or link. As a result, your people are as important a defense as a million-dollar cyber-security system. A.I. will be no different when trying to select the best target to penetrate an enterprise or consumer system.

The second line of defense is your password and username authentication system. Is your password secure? Do you change your password frequently? Do you use two-factor authentication? Do you have a separate login and password for a guest network? If an A.I. or hacker has access to enough computing power, they can crack your password. That's why a two-factor authentication system is critical to protecting your infrastructure. Other methods, such as biometric facial, iris, and fingerprint scans, are beginning to replace legacy systems. However, if a person is kidnapped or killed, their identity is still at risk. New solutions and methods will be needed for this second line of defense, but for now let us move on to the third.
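
On the defender's side, one basic building block of this layer is never storing passwords in plain text. Here is a minimal sketch of salted password hashing using only Python's standard library; the iteration count and example password are illustrative choices, not recommendations from the book.

    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        # generate a fresh random salt per user if one is not supplied
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        _, candidate = hash_password(password, salt)
        # constant-time comparison to avoid leaking timing information
        return hmac.compare_digest(candidate, stored_digest)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True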

The third line of defense is the system firewall. Many consumers have their firewall turned off, either because it is annoying or because they have expired anti-virus software on their system. An active firewall is standard within enterprise systems. However, security also depends on the system. Most consumers have a Windows operating system, and therefore most hackers are well versed in the vulnerabilities of this system. Switching to a Linux-based system is generally more secure. However, A.I. is utilized primarily on Linux-based systems, and thus any attacking A.I. would likely be the most proficient on its own system. A.I. can currently assist in the dynamic patching of holes in the firewall, where human intervention would be too slow and inaccurate. Likewise, malicious A.I. would have an amazing ability to infiltrate hidden vulnerabilities and gaps in firewall cyber-security systems. Let us observe the fourth layer.

The fourth layer of cyber-security is anti-virus software. For Windows-based systems, Norton and McAfee are popular anti-virus applications. On Linux-based systems, chkrootkit and rkhunter are popular applications for scanning and cleaning systems of virus, malware, and rootkit threats. What's interesting about anti-virus software is that some of it must be installed as soon as the system is purchased, while other software can be installed after initial use. The reason is simple: some computer viruses can hide within your system and prevent the anti-virus software from cleaning the drive. It is a best practice to install anti-virus software when a system is first used. It remains to be seen how A.I. will manipulate anti-virus software, but there are current examples. Let us look at three examples of A.I.-powered anti-virus software.

  • The first example comes from IBM's Watson. IBM QRadar Advisor with Watson leverages the power of cognitive A.I. to automatically investigate indicators of compromise and gain critical insights. QRadar consolidates log events and network flow data from thousands of devices, endpoints, and applications, correlating them into single alerts -- so you can accelerate incident analysis and remediation.

  • The second example comes from Google's Gmail, which uses machine learning to block 100 million spam messages a day.

  • The third example comes from Nvidia's DeepStream Software Development Kit, which allows for the building and deploying of A.I.-powered Intelligent Video Analytics apps and services on edge-computing devices, such as the Jetson Xavier, or in the cloud.

The fifth and last layer of cyber-security involves securing physical backups of user data. Currently, the most effective way to secure information is to save it to a drive and then take that drive offline, disconnecting it from the computer, the internet, and the reach of hackers. This does not mean it is totally safe, as the drive could be lost, or someone who really wants the data could break in and take it.

One attempted solution to this problem involves the use of cloud-based storage that automatically backs up user data in the cloud. The issue is that cloud-based storage is susceptible to hacks, and even if your data is not being targeted, your data and web-based services could still be impacted due to the intermingled networking involved in administering the cloud. For example, in October of 2019, Amazon's S3, or Simple Storage Service, was attacked, taking thousands of websites offline. Now, let us explore a new proposal to ensure future security against A.I.-based cyber-security attacks.

I am proposing a sixth line of defense, specifically targeted at A.I.-powered cyber-security threats. This defense consists of Black Bots that are governed by logic: when the system does something illogical, such as self-harm, the unauthorized opening of ports, or the manipulation of data, sleeping cells of Black Bots awaken and spring into action. These bots are analogous to white blood cells protecting a host, except they are bots protecting a network and the data contained therein. The sleeping cells of bots will trigger trap doors and isolate viral system infections, while also preventing the virus or malicious A.I. threat from communicating with other A.I. outside of the system. At this point, the bots will have the capability to remove the corrupt data from the hard drive by attaching themselves to the infected host, similar to a glycoprotein, injecting the host with their own digital signature until the virus is benign and no longer a virus. The virus can then be deleted without fear of further spread or reignition.
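
As a toy illustration of the "illogical behavior" trigger described above, here is a minimal Python sketch that watches for unauthorized listening ports. It assumes the third-party psutil package; the allowed-port list and polling interval are illustrative, and a real Black Bot would isolate and remediate rather than simply print a warning.

    import time
    import psutil

    ALLOWED_PORTS = {22, 80, 443}          # ports the operator has explicitly approved

    def listening_ports():
        # collect every local port currently in the LISTEN state
        return {conn.laddr.port
                for conn in psutil.net_connections(kind='inet')
                if conn.status == psutil.CONN_LISTEN}

    baseline = listening_ports() | ALLOWED_PORTS
    while True:
        unexpected = listening_ports() - baseline
        if unexpected:
            print(f"Illogical behavior: unauthorized ports opened {sorted(unexpected)}")
            # a real defense would isolate the offending process and block outbound traffic
        time.sleep(5)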

Cyber-security will come to play an even more important role as we automate everything in sight and autonomous vehicles become standard in society rather than something far off. All of these machines will connect with everything else in a Smart City or Smart Home. These systems must be multi-layered and adaptable to the evolving threat of new and increasingly complex attacks. The increase in connected devices, as a result of the Internet of Things, will only increase the openings for attack and the complexity of layering each secured system. But also remember that regardless of how sophisticated your system is, it is only as sophisticated as your people. In the next chapter, we will discuss A.I. and the programming languages that are necessary to develop it.

Exercises

  1. Can you or your team explain what Cyber-security is? How does it apply to Artificial Intelligence?

  2. What are some of the new tools that are available for Artificial Intelligence and Cyber-security?

  3. How many layers of Cyber-security are there? Can you name them?

  4. How will your project utilize Cyber-security to protect your data? Here is more concerning Artificial Intelligence and its impact on Cyber-security.

 

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale

This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Nine of Seventeen - A.I. & Robotics by Ean Mikale

Chapter Nine of Seventeen

Chapter 9: A.I. & Robotics

Until recently, robotics has been limited mostly to university, military, and private research. Most robotic inventions rarely left the laboratory to become commercially viable products. Many of the shortcomings involved the lack of computing power or network bandwidth to process the immense amount of data needed to emulate human nature, or nature itself. With the advent of the GPU's ability to process data locally, using computing embedded within the robot, robotics can now cut the cord from the confines of the laboratory, creating applications with commercial success and practicality, unhindered by the limitations of prior technologies.

Robots today are used for everything from your home vacuum to conducting exploratory missions on distant planets. In the future, as mankind develops new technologies that remove the need for manual labor, robotics will become integral. Robotics will become even more important as we develop beyond a one-planet civilization and look past our own solar system, galaxy, and local galactic cluster to more distant parts of the universe. On these intergalactic journeys, humans may not be the most suitable crew to send on exotic missions into space.

The flexibility and dexterity of autonomous systems, and the acceptability of losing the device, make them practical for dangerous missions, disaster relief, search and rescue, and other hazardous applications where they can help preserve human life. In the remainder of this chapter, we will look at the Three Laws of Robotics, explore the various domains within the field of robotics, and touch on future applications.

Isaac Asimov’s Three Laws of Robotics consist of the following: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. These laws do not take into consideration the entirety of nature that a robot may come in contact with, or the laws governing such interactions. There are many unanswered ethical questions, such as how a robot should choose between its owner and another person the owner seeks to harm. Does the robot intervene? These complex relationships must be governed by a higher thought and providence. Now, let us explore the various domains of robotics.

1. Pre-programmed Robots - Pre-programmed robots are hard-coded to conduct different types of tasks and other automated functions. They do not have the capability, however, to react to changing variables in an environment, nor are they human-operated. One of the most heavily automated industries is the automotive sector, where robots are pre-programmed to repeat very specific tasks over and over again. One of the examples we will explore here concerns pre-programmed robots on the factory floor of the Lamborghini production line. The robots can be seen handling complex tasks, but they are not yet adaptable, nor do they learn on their own.

2. Humanoid Robots - A humanoid robot has a human appearance or human features based on human anatomy. Certain models may only contain half of a human body. Some humanoids may have a face and mouth; others may not. Androids resemble the male form, and gynoids the female form. In the following examples, we are going to show a wide variety of humanoid robots, including a few that are not strictly humanoid, which will still provide a clear idea of the current state of the industry, what is possible, and what limits can be broken.

3. Autonomous Robots - Autonomous robots are intelligent machines capable of performing tasks in the world by themselves, without explicit human control. They range from drones and self-driving cars to satellites and deep-space rovers. During the Covid-19 era, there was a rise in interest in robotics, and especially automation, as unemployment levels rose and fear of a second wave of the pandemic forced employers to explore alternative ways to do business without exposing humans to the virus. In the following examples, we will explore modern autonomous robotics within the scope of the delivery sector.

4. Teleoperated Robots - A robotic teleoperation system consists of the master manipulator on one end of a network connection and the controlled robot on the other end. This technology has already been used to conduct remote surgeries, but it has yet to gain widespread adoption, as cost, latency, and availability remain barriers. It is expected that with the transition to 5G and 6G communications networks, much of this will change. In the following example, we will look at robotic teleoperation technology in 6D, including haptic feedback for conducting complex remote operations.

5. Augmenting Robots - Augmenting robots generally enhance capabilities that a person already has, or replace capabilities that were lost. A great example is a prosthetic limb replacing one a veteran lost in war. These devices can respond to neural signals sent by the patient when connected to muscles, in a process called "muscle reinnervation." Current research and development is being conducted to connect prosthetic limbs to a patient's brain, in order to allow the patient to control their limbs by pure thought. In the following media, we will explore modern ways that prosthetic limbs are empowering human ambition around the globe.

Let us explore a robotic future. In this reality, it is the norm to own one or multiple robots. A key to robotic integration in human society is emotion detection and analysis by machines. It is a universal axiom that there must be some connection in order for there to be any transfer of anything, such as knowledge or the acceptance of an integrated robotic-human destiny. The more we focus on the destiny of future robots, the more we must analyze the future of human beings. What will they spend their time doing if most tasks are automated? With newfound access to infinite amounts of data and knowledge, what will mankind do with such knowledge and power? Where does such a civilization go? What is its trajectory? I think it is still early enough to shape the answers to these important questions. It is up to the student what they become, and the reality they choose, so choose wisely.

The field of robotics is expensive, complicated, and foreign to most. However, the field is rapidly making strides due to the coming of Artificial Intelligence, and the post-Covid-19 era will force many employers, owners, and shareholders to rethink how they deploy capital to acquire profits. Many companies will plan for future pandemics, which will continue to expedite the adoption of automation within modern and future firms, which in turn will change the dynamics of the workforce, educational system, social systems, financial markets, and healthcare systems. Human beings must re-imagine life without the common drudgery of labor, and imagine what they would do with their time if the possibilities were infinite.

Exercises

  1. What is the difference between Robotics and Artificial Intelligence? Can you name three ways they may differ?

  2. Are there examples of Robotics that are ingrained in our everyday lives? Can you name at least three examples?

  3. How will Robotics impact your Artificial Intelligence Project? Is Robotics necessary at all? If so, how will you use Robotics to enhance your project? Here are more examples of Robotics and cutting-edge research.

  4. Is there any everyday technology you interact with that is considered to be Robotic? Why or why not? If the device can be used for additional Robotics, which Applications would you deploy on the Robot? Why?

 

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale

This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Eight of Seventeen - A.I. & Self-Driving Cars by Ean Mikale

Chapter Eight of Seventeen

Chapter 8: A.I. & Self-Driving Cars

This is a topic that is often discussed in modern society, but few have experienced or understand the underlying technology behind self-driving cars, and other autonomous vehicles. Where did this concept come from? How did the race for self-driving dominance emerge? We will discuss a very brief history of the self-driving vehicle in modern times, discuss the critical components of self-driving vehicles, and explore examples of commercial self-driving vehicles and/or concept cars.

Modern self-driving cars, and the commercial and societal infatuation with them, come largely from the Defense Advanced Research Projects Agency, otherwise known as DARPA. In 2004, DARPA ran its first Grand Challenge, with the goal of spurring American ingenuity to quicken the development of autonomous vehicle technologies that could then be applied to military applications, offering a prize of $1 million or more. No team finished that first year, but in the 2005 event the Stanford Racing Team was the first to complete the 132-mile desert course, winning the $2 million prize with a time of 6 hours and 53 minutes. This challenge helped expedite the development and commercial viability of self-driving vehicles. It is important to understand where things come from, so you can understand where they are going. Now, let us discuss the different critical components of a self-driving vehicle.


What are the various components of a self-driving car? What makes it different from non-autonomous vehicles? What do the instruments do? How do they protect the passengers, or add value to the overall concept and/or design? There are different arrangements among self-driving cars and manufacturer preferences; however, most self-driving cars have the following: GPS, ultrasonic sensors, odometry sensors, a central computer, lidar, video cameras, and radar sensors. Also, self-driving vehicles come in levels of autonomy from 0 to 5, with Level 5 being full autonomy. Most vehicles on the road today are Level 2 or Level 3, meaning they still have steering wheels and pedals, allowing for direct driver interaction.


GPS (global positioning system) combines its readings with those of the tachometers, altimeters, and gyroscopes to provide the most accurate positioning. Ultrasonic sensors measure the positions of objects very close to the vehicle. The odometry sensors complement and improve GPS information. The central computer processes all sensor inputs, applies algorithms, and controls steering, acceleration, and braking. Lidar provides light detection and ranging in order to monitor the vehicle's surroundings (road, vehicles, pedestrians, etc.). Video cameras inside and outside of the vehicle help to monitor the vehicle's surroundings, including road vehicles, pedestrians, and animals, as well as traffic signs and signals. Finally, radar sensors on the perimeter of the vehicle monitor the vehicle's surroundings (road, vehicles, pedestrians, etc.). Now that we have covered the various components of the self-driving vehicle system, let us explore some modern self-driving cars and prototypical designs, to give us a glimpse into the future.
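
As a toy illustration of how the central computer combines these inputs, here is a minimal Python sketch that blends a GPS reading with an odometry estimate. The weights and readings are purely illustrative; production vehicles use far more sophisticated filters, such as Kalman filters.

    def fuse_position(gps_reading, odometry_reading, gps_weight=0.7):
        """Blend two noisy estimates of the same position (in meters)."""
        return gps_weight * gps_reading + (1.0 - gps_weight) * odometry_reading

    # each sensor alone is noisy; the blended estimate is steadier than either
    print(fuse_position(gps_reading=102.4, odometry_reading=100.9))  # 101.95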


The first concept is the Quarter Car, a private, driverless ride-sharing concept by Seymourpowell. The multidisciplinary studio has designed a "private shared" ride-hailing service, which features retractable partitions and air-purifying technology. The concept seeks to create more personal experiences during ride-sharing, with four seats that can be separated by an adjustable partition.


Our next vehicle is the Prophecy concept car by Hyundai. The vehicle is designed to make you fall in love. Literally. The all-black model is an electric, self-driving car that seeks to create an emotional response between humans and autonomous vehicles; the company calls it "optimistic futurism." The four-seater driverless car has a spacious interior due to its electric powertrain. In place of where the steering wheel would be are two joysticks. In recline mode, the dashboard moves down and allows a more spacious view and experience, as well as exposing an electronic display.


The next vehicle is a beautiful concept car by Renault, the Morphoz, an all-electric vehicle with Level 3 autonomy and an A.I.-powered smart system that enables the vehicle to recognize its driver on approach. The vehicle was set to make its debut at the Geneva Motor Show, which was canceled due to the coronavirus outbreak. It is powered by a battery with a range of up to 249 miles for day journeys. In travel mode, the car extends to 4.80 meters long, from the city version's 4.40 meters, with an expanded wheelbase for extra legroom and space for two more suitcases.


The next vehicle is a luxury car by Bentley, the EXP 100 GT concept car, which pairs artificial intelligence with sustainable materials made from recycled rice husks and wine-production waste. The vehicle is fully electric with the option of autonomous driving. The cabin trims are completed with 5,000-year-old copper-infused river wood sourced from the Fenland Black Oak Project, an organization set up to preserve materials for future generations. This vehicle seeks to provide the passenger and the driver with equally luxurious experiences. The cabin is integrated with the A.I.-powered Bentley Personal Assistant, which enables users to make commands through hand gestures made to the front or rear interfaces. Passengers can also record their journeys, turn on air-purification mode, or turn the glass opaque for privacy. Sensors in the vehicle track eye and head movement, as well as blood pressure. The vehicle can go from zero to 60 mph in under 2.5 seconds, with a top speed of 180+ mph, and it takes 15 minutes to charge to 80 percent capacity.


The next vehicle, the 360c concept car from Volvo, is an all-electric, self-driving vehicle that can double as a mobile office, bedroom, or living room. The vehicle seeks to turn unproductive down-time wasted during transport into productive time spent sleeping, working, or meeting with family and friends. This vehicle lacks a steering wheel and engine, thanks to Level 5 autonomy, leaving the maximum amount of interior space. A fold-away bed can even convert the car into a prime sleeping environment. An LED communication band on the exterior of the vehicle allows it to communicate with human drivers and pedestrians about its movement.

Ultimately, self-driving cars are a technology that will have large implications for the movement of goods and people across the globe. This technology does not only impact personal vehicles, but also industry, such as mail trucks, delivery vehicles, buses, trains, other personal and industrial aerial vehicles, and water-based vehicles. It is important that there are well-trained engineers, software developers, designers, and technical maintenance experts to assist in the development and expansion of the vast infrastructure that will be required to maintain these vehicles. It is also important to understand that the A.I. controlling all the major functions of the vehicle will also communicate with other aspects of a Smart City, such as the gas pump, the drive-through restaurant, or the car wash. These additions will reshape how we design and think about spaces, as well as leisure time. As much innovation as has occurred in this space, there is always room for more. Next, let us explore the past, present, and future of A.I. & Robotics.

Exercises

  1. What is your favorite car brand? Can you or your team conduct a Google search, to determine if your favorite automotive company is developing self-driving car technology?

  2. How might your or your teams project make use of self-driving car technology? How might you integrate your Artificial Intelligence project with self-driving cars?

  3. How might self-driving car technology benefit society? What might some drawbacks be? Here is more information on self-driving car technology.

  4. What other applications could you create for Autonomous Vehicles? Explain.

 

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale

This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Seven of Seventeen - A.I. & Commercial Drone Technology by Ean Mikale

Chapter Seven of Seventeen

Chapter 7: A.I. & Commercial Drone Technology

I purchased my first real drone back in 2015 for $1,100. It was the IRIS+ by 3D Robotics, which at the time was touted as the World's First Autonomous Drone. I purchased this particular model mostly because I was interested in the software side of drone technology, and the IRIS+ was powered by a Pixhawk micro-controller that provided it with the ability to speak, fly missions, and even follow me. Five years later, the technology, as well as my understanding of it, has evolved immensely. What few understand, and may indeed be hard to comprehend, is the ubiquity with which drone technology will come to dominate the near future. For now, let us explore the nature of a drone.

What is a Drone?


The standard definition of a drone according to Oxford will not do; we will come up with a more fitting definition. We will define a drone as any remotely controlled object. The popular imagination would likely conjure up visions of a flying contraption. However, the drone spectrum includes the following: aquatic drones, ground drones, self-driving cars, robots, micro-mechanical birds and insects, unmanned aerial vehicles, low-orbit spacecraft, satellites, deep-space probes, and even interstellar or inter-dimensional spacecraft. But where did the idea or concept of a drone originate in modern times?

In 1898, it was Nikola Tesla, the real Tesla, who revealed to the world in the middle of Madison Square Garden a remotely controlled ship that he manipulated from a radio-transmitting control box, maneuvering the device across a pool of water. When asked about his device's potential as a delivery system for explosives, he replied, "You do not see there a wireless torpedo; you see there the first race of robots, mechanical men which will do the laborious work of the human race." It was truly then that Tesla created the first race of modern drones. There have been many iterations since, and the involvement of Artificial Intelligence has taken Tesla's concept to new levels.

Where Does A.I. Come In?


So where does A.I. come in concerning commercial drone technology? While A.I., of course, has appeared in military-grade drones for some time, it is only more recently that A.I. became available on consumer-grade drones. It was DJI, the Chinese behemoth that currently dominates roughly 75% of the global drone market, that broke ground on A.I. and drone integration. However, the scrutiny the Chinese drone manufacturer has faced from the U.S. government regarding national security concerns has opened up a unique opportunity for U.S.-based startups to enter the space and compete on a global level.

The ground-breaking drone by DJI was the Mavic Pro, a foldable drone with a small, streamlined controller, an embedded camera, and more A.I. than was practical. The drone included facial recognition, gesture recognition, and object detection capabilities, as well as sense-and-avoid technology using computer vision. However, there are currently limitations on the weight a drone can carry and the computing power available to a drone for processing multiple sensors, cameras, and other inputs, all while running a desktop-grade operating system. Most drones falter under the weight of such tasks in combination. Advances in computer chip design, particularly Graphics Processing Units, and new battery systems will unfetter the potential of the technology in the near future. Returning to the present, what are some examples of modern designs?

What are the Different Drone Types?


Drones come in all different types and flavors. The drones that you hear the most about commercially are four-propeller copters, also called quadcopters. They are used for everything from aerial photography and bridge inspections to drone racing. The quadcopter balances speed with power and agility. The hexacopter is the next design, with six propellers instead of four, providing more power but less agility. The fact that it has six propellers also provides safety; in the event one propeller is broken or inoperable, it will still fly with five intact. The hexacopter is a great working drone.

The next design is the octacopter. The octacopter is akin to an 18-wheeler: with eight propellers, it is able to lift heavy loads and is used extensively in industry. Like the hexacopter, the octacopter can safely return to the home point with one or two propellers missing. Other common designs include fixed-wing drones for covering large areas, and even hybrid designs that take off and land vertically in tight spaces while flying horizontally to conserve energy in mid-flight. While these are the standard drone designs one may see, let us look at a few exotic designs.

What Are Examples of Exotic Drone Designs?


Our first example of an exotic drone design is the MIT-built plane that is the first to fly with no moving parts. While this is not a remotely controlled drone, it does in fact have immense implications for the drone industry. Currently, one of the most unappealing aspects of drone technology is the buzzing sound that propeller-powered drones make; it completely takes away from any privacy and enjoyment. Imagine if your mobile phone made a loud buzzing noise every time you wanted to use it. This design also allows for smaller batteries, meaning less weight, longer flight times, higher load capacities, and the ability to conduct flight with more advanced technologies on board.


Our second example of an exotic drone is the EHang-built drone taxi. The human-ridable drone first had trials in the United States, in Las Vegas to be exact, with additional plans to make the technology available in Dubai for tourists and businessmen before the end of 2025. The taxi has a flight time of roughly half an hour. The vehicle is eco-friendly, as it is electrically powered, providing an advantage over others powered by jet fuel and other kerosene-based fuels. The drone design is that of an octacopter with dual blades, equating to a total of 16 propellers, to enhance lift and redundancy for maximum flight safety.


The next drone is interesting, as it is an amphibious drone of sorts: the Spry+, which is waterproof (as is its remote) and can fly as well as dive underwater. The buoyancy of the drone also ensures it will return to the surface after dives. It would be perfectly at home in a James Bond flick. The drone has immense implications for wildlife conservation, fishing, search and rescue, and oceanography as a whole. Drone technology will come to dominate land, sea, air, and space.

What are modern A.I. Drone Applications and/or Use-cases?

Source [Ventureradar.com - Top Commercial Drone Use-cases]

Recently, UPS, in partnership with Wake Forest University, began making medical deliveries with WakeMed using Matternet drones. The drones deliver important lab and blood samples. This is a step toward the automation of many human processes, and that is not taking into consideration Artificial Intelligence, which will automate many of the processes of diagnosis, prognosis, therapy, and medicinal recommendations, as well as other data-intensive processes in the healthcare system. The flying drone is only the beginning of a race of machines; remember this.


European aerospace manufacturer Airbus broke the world flight endurance record with a 165-pound solar-powered drone. The 25-day flight was a test for more practical operations, such as relaying communications, gathering weather data, covering a natural disaster, or conducting other missions based on client specifications. The drone, called Zephyr, is in direct competition with small cube satellites, which are much cheaper to manufacture and deploy and can stay in orbit for up to a year.


NASA utilizes A.I. in space in quite a few ways. This includes CIMON, or "Alexa for space"; medical care A.I. assistance; Multi-temporal Anomaly Detection for SAR Earth Observations; robots; and rovers. The four areas where NASA is currently looking to utilize A.I. and machine learning include aeronautics, operations, human capital, and IT support. An important use of A.I. involves the TESS Mission, whose primary objective is to survey the brightest stars near Earth for transiting exoplanets over a two-year period, surveying 85% of the sky. In particular, working with the TESS mission, NASA uses GANs (generative adversarial networks), including Google Cloud's Atmos simulation software, which resulted from NASA's FDL astrobiology challenge to simulate alien atmospheres, and another machine-learning tool called INARA, or Intelligent Exoplanet Atmosphere Retrieval, trained on the spectral signatures of millions of exoplanets.

There is an infinite number of drones and drone types. The drone that is necessary for your particular need or use-case may vary wildly. However, the benefit of the drone is that it is flexible and limitless in its applications and integrations. As battery cell technology improves and different energy technologies come to the forefront, the landscape of drones will continue to change dramatically from one year to the next. It is one of the most rapidly changing and evolving fields. Yet the power of drone technology will allow us to safely chart unexplored aspects of our own world, deep space, and the multi-verse. Next, we will take a deep dive into the world of A.I. & Autonomous Vehicles, including the Self-Driving Car.

Exercises

  1. What is a drone?

  2. Can you or your team members name at least three different applications for drone technology?

  3. Can drones operate in water, on land, or in Space? Why or why not?

  4. What else could be a drone? Why would consumers or enterprises purchase your drone?

 

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 Companies, Big Three Consulting Firms, and leading World Governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an Industry Expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale

This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Six of Seventeen - A.I.& the Internet of Things by Ean Mikale

Chapter Six of Seventeen

Chapter 6: A.I. & the Internet of Things

At this stage of the book, you're almost halfway there. This is probably one of the most confusing topics for the average layperson. The Internet of Things, or IoT, is the spine of A.I. and is involved in many daily aspects of life in industrialized societies; however, few are exposed to its underpinnings and inner workings. As a result, technologies such as A.I. and IoT are relegated to the level of magic and dismissed as far too complex to be understood, wielded, or created by anyone but a mad scientist. I am happy as I write this, so I am not a mad scientist, but still, I will show you what few know, yet touches all.

As powerful as Artificial Intelligence is, it is powerless without a network of highways to traverse the globe. The Internet of Things is this highway. It is an endless network of internet-connected devices. This internet connectivity provides the device with the ability to be remotely controlled by Artificial Intelligence, or by humans. This comes with various security concerns, but you are not powerless: there are ways to mitigate and defend your "things," which we will discuss in subsequent chapters. Now, before we discuss the two competing standards fighting for the fate of the internet, let us first touch on WiFi versus mesh networks as a foundation.


Most of us know about WiFi and could barely live without it. A WiFi network is known as a "star network." This type of network allows each device to communicate with the central hub (router/modem). If a device cannot communicate with the router, because it may be out of range, then the device is kicked off the network. In contrast, mesh networks behave similarly in the sense that the signal originates from the central hub; however, once the signal has left the hub, it travels through the different nodes (devices), which do not need to receive the signal directly from the central hub, and each device acts as a signal repeater. A mesh network significantly enhances network performance, such as fault tolerance, load balancing, throughput, and protocol efficiency, while reducing cost. Now, let us look at the two competing IoT standards using mesh networking for connectivity.
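
To make the contrast concrete, here is a minimal Python sketch of a mesh: a message can hop from node to node, so a device out of the hub's direct range stays connected. The device names and links are purely illustrative.

    from collections import deque

    # adjacency list of a small mesh; each node relays for its neighbours
    mesh = {
        'hub':        ['lamp', 'lock'],
        'lamp':       ['hub', 'thermostat'],
        'lock':       ['hub'],
        'thermostat': ['lamp', 'camera'],   # cannot hear the hub directly
        'camera':     ['thermostat'],
    }

    def reachable(network, start):
        """Breadth-first search: every node a signal can reach by relaying hop to hop."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in network[node]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        return seen

    print(reachable(mesh, 'hub'))  # the camera is included, thanks to relaying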


The first major standard is Zigbee. Zigbee is a short-range wireless communications technology, which operates at 2.4 GHz and has been adopted by the Zigbee Alliance, which boasts elite members such as Amazon, Apple, and Google, to name a few. Zigbee technology adds security and mesh network functionality on top of the existing internet protocol. The Zigbee standard can support up to 65,000 individual devices on one network and is targeted toward enterprise clients. Imagine that a warehouse needs all of its lighting, security cameras, locks, humidity controllers, and temperature sensors connected. In this scenario, Zigbee technology would provide enough capacity to cover the large number of devices in use within the warehouse. Now that we've touched upon the Zigbee standard, let's go on to discuss its direct competitor, the Z-wave standard.


The second standard for IoT is Z-wave. The Z-wave standard is a proprietary technology, meaning it is owned by a private entity, Sigma Designs. Z-wave also has a controlling association, the Z-wave Alliance, which controls the certification of all Z-wave devices. Z-wave operates at the 908 MHz frequency. The lower frequency allows Z-wave to have a longer range, but with reduced bandwidth, so less data is transmitted. A Z-wave signal between two nodes can travel 330 feet in an outdoor setting with no obstructions. In-home signal strength reaches about 100 feet unobstructed, and 50 feet obstructed. This is roughly double the range of Zigbee. Z-wave is also considered the more reliable of the two technologies, likely because of the tight control and continuity of standards through Sigma Designs and the Z-wave Alliance. Now, let us explore a few use-cases for the Internet of Things of today and tomorrow.

The first use-case that we will look at involves waking up in your Smart Home. As soon as you sit up in your bed, your home uses image recognition technology and motion sensors to know you are awake. As a result, your home runs your shower at the exact temperature that you like, and puts on a cup of coffee only after sensing that you've freshly exited the shower. As soon as you walk into the garage, your car is already started and warmed to your favorite temperature. Getting into the car, your seat spins around toward the center, and a screen drops down playing SportsCenter.

Next, your self-driving vehicle takes you to the nearest gas station to refill. When you get out of the car, the refueling pump can tell you are dehydrated and recommends a refreshment high in electrolytes. When you get on the interstate, your vehicle communicates with all the other vehicles on the road, allowing it to sense a car accident two miles ahead, suggesting an alternate route to the office. You save five minutes. Your car drops you off at the front door, and will be there to pick you up as soon as your shift ends. The Smart Home is where IoT begins, but now let us explore the smart classroom of the future.


The second use-case that we will dive into is the Smart Classroom. You are a teacher in the year 2032, and you are wearing Augmented Reality glasses powered by an A.I. Smart Assistant that allows you to read the emotional state of the students and determine whether they understand the information or are frustrated. The same device also allows you to assess the nutritional state of each student, as well as common learning disabilities and the student's behavioral health. This information allows the teacher to customize a student's learning experience, while also recommending the most effective wrap-around resources and providing more data for parents to use in order to give their child the best chance at success. Now that we have explored the Smart Classroom, let us explore the Smart Office of the future.

The third use-case is the Smart Office, which is still far from realized. You are the CEO of a technology startup. Before walking into your office, you check your phone, as the security camera notifies you that a motion sensor was activated, revealing that a few of your employees are already present. When you walk into the office, the lights brighten thanks to a connected sensor on the door. Multi-sensor devices control the humidity, lighting, and thermostat. You also have an emergency alarm and buzzer in the event the temperature rises above 120 degrees Fahrenheit, or the door opens after 6 p.m. Employees no longer have to clock in or out; sensors on the building doors and connected IoT applications track hourly wages, stopping work time when employees enter or exit the building, or after a certain amount of idle time. This future is not as far out as one would think, but now let us look at the fourth use-case, Smart Industry.

seaports-and-the-largest-ports-of-the-world.jpg

Smart Industry is an area in which much innovation and automation has already occurred. However, A.I. has yet to be truly integrated into many automated workflows. This is especially true in industries where adaptation is a necessity due to changing and dynamic environments. A.I. that is trained on millions of different scenarios will have the ability to adapt, learn, and evolve its skills to meet the demands of current and future industry. An example would be using A.I. to detect the fatigue or pain of workers on an assembly line in high-risk working areas. This allows managers to give workers needed down-time, as well as to provide supportive services and/or the services of third parties, ensuring the health and safety of each employee while minimizing lost hours of productivity and improving overall employee sentiment. Next, we will discuss the concept of Smart Cities, one that will come to impact every citizen in modern as well as less-developed nations.

smartcities.jpg

The last area that we shall re-imagine is the Smart City. Let us imagine a future New York. We are in Times Square, looking up at all the flashing lights. Except the flashing lights are looking back. Image recognition systems connected to companies and governments watch us as we loiter, determining our identity from global databases and gauging our fashion tastes, as the billboards display an ad for pants similar to those you are wearing. Drones of different shapes and sizes fly past our heads, delivering packages both large and small. Self-driving vehicles pass by with living rooms in the center, each occupant facing the others across the middle.

Some luxury vehicles even have chandeliers hanging from the center of the cabin. You are bumped by a robot, which kindly apologizes and continues on its way to make a delivery. You get a phone call and touch a device wrapped around your ear, and a holographic display emerges in front of you: it's your girlfriend. Tonight is couples' night, and she reminds you not to forget a few bottles of sweet red wine for the guests. Smart Cities of the future will face the challenge of managing a vast number of devices, along with the energy consumption of running a modern and truly interconnected civilization.

The IoT landscape has narrowed significantly with the formation of the various alliances, and this consolidation will only accelerate the pace of integration of IoT-connected devices. There are, once again, security concerns with such universal access by way of the internet, but there are many positive gains as well. The ability to control many aspects of one's life will assist the elderly, the disabled, the illiterate, those with behavioral health conditions, and more. Allowing A.I. access to IoT devices has already occurred; the question now is, what are the disruptive applications that you will develop that will change the world for the better? In the next chapter, we will discuss A.I. and its applications in Commercial Drone Technology.

Exercises

  1. Can you or your team explain plainly what the Internet-of-Things is? Can you name at least three examples?

  2. Can you name three to five ways the Internet-of-Things can work together with your Artificial Intelligence project?

  3. Are there any reasons to be concerned about Security regarding the integration of the Internet-of-Things with your Artificial Intelligence project? What might those reasons be? Here is more information on Cyber-security and the Internet-of-Things.

  4. What else has not been connected to the internet? What has yet to become automated?

Ean Mikale, J.D., is an eight-time author with 11 years of experience in the AI industry. He serves as the Principal Engineer of Infinite 8 Industries, Inc., and is the IEEE Chair of the Hybrid Quantum-inspired Internet Protocol Industry Connections Group. He has initiated and directed his company's 7-year Nvidia Inception and Metropolis Partnerships. Mikale has created dozens of AI Assistants, many of which are currently in production. His clientele includes Fortune 500 companies, Big Three consulting firms, and leading world governments. He is a graduate of IBM's Global Entrepreneur Program, AWS for Startups, Oracle for Startups, and Accelerate with Google. Finally, he is the creator of the World's First Hybrid Quantum Internet Layer, InfiNET. As an industry expert, he has also led coursework at institutions such as Columbia and MIT. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale

This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Five of Seventeen - Embedded Systems by Ean Mikale

Chapter Five of Seventeen

Chapter Five: Embedded Systems

Red pill or blue? I'm going to take you down a rabbit hole that ends in a world ruled by a machine race of robots, drones, self-driving cars, smart homes and smart cities, planes, satellites, and even deep space navigation systems. Do the machines we rely on not already rule? This race of machines survives on computing systems embedded directly within their physical structure, thus the name embedded systems. The future of automation and A.I. lies with these embedded systems and their advancement. The ability to take computing power to places and scales never seen before has accelerated the advancement of global automation. As computing power becomes cheaper and smaller, ambient computing will become a reality, where the computer itself disappears entirely. In this chapter, we will explore the concept of an embedded system, look at commercial use-cases for the technology, explore its impact on future industries, and hopefully provide you with a solid understanding of how the technology can be used to take your ideas, career, and company to new heights.

An embedded system is a microprocessor-based computer hardware system, with software, that is designed to perform a designated function, either as an independent system or as part of a larger system. Modern embedded computers range in size from a stick of gum to a credit card. You can get them for pennies on the dollar, or for hundreds to thousands of dollars each. Embedded systems are primarily meant for mobile applications where there is limited space, limited access to energy, and real-time operational safety requirements, such as those of self-driving cars. This does have societal implications concerning privacy, now that computing has become so small that one cannot say with certainty whether their data is being captured and analyzed without their knowledge. In spite of this warning, we will focus our exploration on applications with positive societal or environmental impact. Let's begin.
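As a toy illustration of a "designated function," here is a hedged sketch of an embedded-style control loop in Python using the gpiozero library on a Raspberry Pi; the pin number is a hypothetical wiring choice, not part of any project described in this chapter.

```python
# A minimal sketch of an embedded node whose entire job is one designated function:
# blink a status LED forever.
# pip install gpiozero  (assumes a Raspberry Pi or compatible board)
from gpiozero import LED
from time import sleep

status_led = LED(17)   # hypothetical wiring: LED attached to GPIO pin 17

while True:            # the whole "designated function" of this device
    status_led.on()
    sleep(1)
    status_led.off()
    sleep(1)
```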

The first commercial application involves the creation of autonomous aquatic vehicles, as displayed by the Autonomous Boat with Raspberry Pi by Greg_The_Maker on Instructables.com. Currently, the global sea infrastructure, including ports, docks, shipping lanes, fishing, and other ocean-based commerce, has yet to become fully or even partially automated. The integration of powerful embedded systems throughout the process will create enhanced levels of efficiency, savings, and system performance. Such systems will allow for the efficient tracking of goods across worldwide shipping lanes, enhanced weather observations for increased safety, and the automated detection, tracking, and clean-up of ocean pollution.

The next autonomous use-case involves the agriculture sector. Matt Reimer, from southwestern Manitoba, Canada, used the Pixhawk flight controller, an embedded device, to automatically pilot his tractor: a set of Arduinos and a Pixhawk autopilot controlled the tractor's cab actuators while running ArduPilot, DroneKit, and his own Autonomous Grain Cart software. This project is an example of the vast change that automation is bringing to the agricultural economy. With automated vehicles this large, and irregular terrain to contend with, various technical issues have yet to be addressed, but it is an advancement nonetheless.
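For readers curious about what talking to such an autopilot looks like, below is a hedged sketch using the DroneKit-Python library mentioned above. The connection string points at a simulated or USB-attached autopilot and is an assumption; DroneKit is an older library, so it may require a pinned Python version.

```python
# A minimal sketch of connecting to an ArduPilot-based autopilot with DroneKit.
# pip install dronekit
from dronekit import connect, VehicleMode

# Hypothetical connection string for a simulated (SITL) autopilot.
vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

print("Autopilot firmware:", vehicle.version)
print("GPS info:", vehicle.gps_0)
print("Current mode:", vehicle.mode.name)

# Switch the autopilot into a guided mode before sending it commands.
vehicle.mode = VehicleMode("GUIDED")

vehicle.close()
```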

The next project, a hot water controller, involves the small but capable Teensy embedded micro-controller and a 128x64 graphic display. The high cost of commercial hot water controllers drove Paul and Rich, the project's creators, to devise a cheaper solution. Without a hot water controller, high temperatures would cause overheating and over-pressure conditions within minutes, so the hardware and software have to be reliable and bug-free, showing the versatility of the device.

socialdistancedevice.jpeg

Here, this Social Distancing Device takes the Arduino Nano embedded micro-controller and uses it to measure social distancing. One could clip the device onto a belt to take accurate measurements. An alarm goes off when you are less than six feet away from another person, and an LED glows when the alarm beeps. This use-case is only one of many, and given the size of the Arduino Nano, we have only scratched the surface of the possibilities.
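The original build runs on an Arduino Nano in C, but the same logic can be sketched in Python on a Raspberry Pi with the gpiozero library; the sensor and buzzer pin numbers below are hypothetical wiring choices.

```python
# A minimal sketch of the social-distancing alarm logic on a Raspberry Pi.
# pip install gpiozero
from gpiozero import DistanceSensor, Buzzer, LED
from time import sleep

SIX_FEET_M = 1.83                      # six feet expressed in metres
sensor = DistanceSensor(echo=24, trigger=23, max_distance=4)  # hypothetical wiring
buzzer = Buzzer(17)                    # hypothetical wiring
warning_led = LED(27)                  # hypothetical wiring

while True:
    if sensor.distance < SIX_FEET_M:   # .distance is reported in metres
        buzzer.on()
        warning_led.on()
    else:
        buzzer.off()
        warning_led.off()
    sleep(0.2)
```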

MitBananaPi.jpeg

The MIT Autonomous Vehicle Technology Study measured human behavior in autonomous vehicles. Each study vehicle was powered by the BananaPi, a Linux-based single-board computer powerful enough to control the systems of a self-driving vehicle. The performance and price-point of the BananaPi have made it a favorite among research institutions developing self-driving car technology and applications.

For powering the future of autonomous vehicles, NVIDIA offers a comprehensive suite of hardware and software solutions under the DRIVE platform. The DRIVE AGX Orin delivers exceptional performance for the most demanding self-driving applications.

The DRIVE AGX Orin boasts a 254 TOPS processing capability, surpassing the Xavier's 30 TOPS by a significant margin. Despite its power, it consumes significantly less energy than the former Pegasus generation, with a maximum draw of 175 watts. This efficiency is achieved through the NVIDIA Orin system-on-a-chip (SoC), an advanced architecture consisting of Arm Cortex-A78AE CPUs, NVIDIA Ampere architecture GPUs, and AI-specific Tensor Cores. Additionally, Orin comes in various configurations, starting from 40 TOPS models, making it more versatile for different levels of autonomous driving needs.

dronetech.JPG

Drone technology provides one of the most important use-cases for embedded systems. A drone has to analyze information from 360 degrees, stabilize its flight and stay airborne, while often also carrying a payload. Each drone is controlled by micro-controllers of varying power, depending on the application. These embedded micro-controllers must operate in real-time, with near-zero latency or buffering. If they do not, catastrophic damage could occur to property, persons, and/or nature. However, power management and weight are still issues preventing the technology from truly taking off and disrupting even more areas of social and commercial life around the globe.

Smart Homes have been around since the late 1970s and 1980s; only now has the fad for automation come back around again. Smart Homes are simply dwellings whose infrastructure has micro-computers and micro-controllers embedded within it, controlling sensors, actuators, valves, wireless communications, appliances, lighting, water, and even the home's functional design itself. According to the architect Le Corbusier, "The house is a machine for living." With sensors placed around the home, the home will have the ability to get to know its owner in unprecedented ways, allowing new forms of architecture to abound that dynamically change and alter form according to the current state of the owner, or the number or type of inhabitants in the space.

Robotics is a field that seems to have stalled, as we do not yet have walking robots delivering packages to us, although Ford is working on it. As a result, Robotics has in many ways become a field of science fiction, rather than one sincerely pursuing the ultimate goal of creating androids with human features and understanding. These devices must run on powerful embedded systems in order to control the physical movement of the device, the sensor inputs, and speech and other instructional outputs, while also controlling other programs that must run in the background and remain ready for execution at a moment's notice. The field of Robotics has also expanded into the global consumer's home in unexpected ways, such as robotic vacuums, toy robots, dishwashers, and other devices. How soon will a robot become your most trusted friend? We may still have some decades to go, but the progress thus far is impressive.

Smart Cities will have sensors in the roads, bridges, and buildings, with deployable armies of robots, drones, and human service agents with augmented capabilities, using wearable embedded systems to power augmented reality, holograms, and other technologies with use-cases not yet seen. There are ethical concerns in regard to privacy; however, consumers and citizens have already given up much of these privacy rights through the mere and common usage of the internet. Likewise, embedded systems provide a solution to this problem by creating a self-contained system that can be run off-grid without a reliable internet connection, or without the internet at all. Such systems should be considered seriously for redundancy and disaster-mitigation management.

Ambient computing is made possible purely by embedded systems. The shrinking size of the processors and circuit boards that comprise embedded systems allows such systems to be placed in novel places for novel applications, such as in fabrics for fashion or, in extreme cases, underneath the skin. The technology is quickly accelerating toward making any sign of the computer disappear altogether. The disappearing act does not mean that computing takes place in thin air; rather, computer chips and embedded systems will become so small that they are negligible and hardly noticed by consumers of the various technologies. While cloud-based services may seek to serve this sector, the cloud has buffering and latency issues that may prevent this, leaving embedded systems as the only alternative.

The industry of embedded systems is exploding, especially given the mobile telephone, which is itself powered by an embedded computer, and there will be massive opportunities for those who take early advantage of the technology and integrate it into products and processes. The bar to enter the embedded systems industry is extremely low, as most devices are low-cost, with extensive community support and vast coding templates that make learning the new systems straightforward.

As automation accelerates, desktop computers and laptops will become less significant. A computer such as the Raspberry Pi 4, an embedded system as capable as cheaper lines of laptops yet powerful enough to let individuals work from home on two 4K screens for around $50, is extremely disruptive to the computing industry. Those who embrace the rapid wave of automation will be the winners, while those who wait until they can clearly see the wave about to hit will be much too late. In the next chapter, we will explore the nervous system of A.I., the Internet of Things.

Exercises

  1. Can you or your team name three different types of embedded systems?

  2. Do you or any of your teammates own any embedded systems? If so, what kinds?

  3. Are there limitations for the deployment of embedded systems? What kinds of environments might embedded systems be ideal for? Here is more information on embedded systems.

  4. Look at your computer, do you know if you have a GPU? Is a GPU an embedded system?


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Four of Seventeen - Deploying A.I. in the Cloud by Ean Mikale

Chapter Four of Seventeen

Chapter Four: Deploying A.I. in the Cloud

The next topic of discussion will take a look at modern cloud-based A.I. applications and infrastructure. I am not the biggest fan of these platforms, and I will explain more in Chapter Nine, where we discuss A.I. and Cyber-Security. However, for now, there is a huge demand for cloud-based developers of A.I. Technology. There are many platforms to choose from. We will explore the cloud and a few of the different cloud-based platforms, discuss pros and cons of each, and hopefully help you to narrow down which one is right for your project or enterprise.

Before we explore different cloud-based platforms, first let's explore the question, "What is the cloud?" Have you ever thought about where all your information goes when you upload a document, a YouTube video, or a slide deck? All around the world, there are clusters of servers, essentially large-capacity computers, that store and exchange information and data distributed over the internet. Over 95% of servers worldwide run on Linux-based, or open-source, operating systems. This means that the owners of the servers do not pay licensing fees for the software running on them. This is hugely convenient, as the cloud allows the user to accomplish what could not otherwise be accomplished using solely the computing resources available to them.

For example, if your computer is low on memory, or running on an older system, the cloud allows you to extend and scale your computing resources extensively. However, the servers are thirsty for electricity, which requires hosting companies to charge a micro-fee for most worthwhile cloud-based services. The caveat is that building A.I. takes time, and a micro-fee can become a large fee very quickly. If you are using open-source tools on your own hardware, this does not apply. Now, let's visit the different platforms for additional context.

ngcshot.png

The Nvidia GPU Cloud is Nvidia's effort at simplifying Machine-learning and Deep Learning workflows. The development of A.I. currently requires the skill sets of a Software Developer, Network Administrator, Data Scientist, and A.I. Architect. Many of the tasks required to develop A.I., oddly enough, do not directly involve the development of A.I., but mostly involve the time-intensive downloading of software dependencies and other platform-specific requirements.

The interface is meant to be simple and intuitive, allowing the user to select various Machine-learning and Deep Learning system configurations that might take days to configure locally on your own system. Next, the user selects the number of GPUs they want to use to train and fine-tune A.I. models. The platform then connects users to their favorite cloud providers, such as Amazon Web Services, Microsoft Azure Cloud, or the likes of Nvidia Saturn V, one of the most powerful supercomputers in the world.

WatsonDash.PNG

IBM's Watson supercomputer was named after the company's first CEO, industrialist Thomas J. Watson. Watson gained international notoriety overnight after beating the reigning Jeopardy! champions in 2011, winning the $1 million first-place prize. Watson has since evolved into a suite of software applications and tools for building and deploying A.I. applications. IBM has created a self-sustaining ecosystem that does not necessarily play well with outside platforms, which is something to consider when weighing options. Watson servers also run on Nvidia GPUs, if that means anything to you. I recommend IBM's Watson most highly for beginners, as the self-contained ecosystem makes learning easier, rather than forcing the student to learn multiple platforms at once.

Amazon Web Services Cloud simply provides on-demand cloud services on a pay-as-you-go basis. The user has the option to access remote CPUs and GPUs, as well as pre-configured software applications, or instances. The user can also remotely access physical computing power, such as powerful GPUs, or virtual computing power, where the functionality and processes of a computing system are mimicked, allowing for immense scalability and distributed computing, in which users access many computers or servers at the same time. Of all the platforms, AWS is likely the most extensive, flexible, and popular among users and researchers in the A.I. community. If you are stepping into the A.I. industry with a focus on cloud-based computing, AWS is unavoidable, and its mastery must become a priority.
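As a small, hedged illustration of the "on-demand" idea, the sketch below uses boto3, AWS's Python SDK, to launch a single GPU instance; the AMI ID is a placeholder, and the region and instance type are assumptions, so substitute values appropriate for your own account.

```python
# A minimal sketch of renting GPU compute on demand with boto3.
# pip install boto3  (assumes AWS credentials are already configured locally)
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder -- substitute a real Deep Learning AMI ID
    InstanceType="g4dn.xlarge",        # an NVIDIA GPU instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```

Remember that the meter starts running the moment the instance launches, which is exactly the micro-fee dynamic described earlier in this chapter.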

AzureDash.png

Microsoft Azure is a cloud-computing platform for deploying, testing, and managing software applications. Until a few years ago, Microsoft offered Azure services primarily for Windows users. This was severely limiting, and a step backwards, as most A.I. deployable devices run on Linux-based systems. That changed over the past several years as Microsoft opened its ecosystem to the Linux community, likely contributing to the name change from "Windows Azure" to "Microsoft Azure" in 2014. The platform supports various computing languages, frameworks, and third-party integrations. It also provides compute services, mobile services, storage, data management, and messaging, similar to AWS offerings. The prime reason one would choose Microsoft Azure over the more popular AWS is if the user were a developer in C languages, arguably Java, or specialized in Windows-based applications.

GoogleCloudDash.png

Google Cloud is Google's crack at creating a suite of cloud computing services. The platform runs on the same infrastructure that Google uses for its own proprietary services. Google also offers infrastructure as a service, platform as a service, and virtual computing environments. The benefit of the Google platform comes into play for developers who are focused on deploying mobile applications, particularly within the Android ecosystem. With over 90 cloud-based services, the resources for building and developing A.I. applications with Google are quite extensive.

DockerDash.png

Docker is a unique ecosystem that was a game-changer in the space of virtualization. Docker allows for the digital "boxing-up" of an entire operating system, or an application and its dependencies, in the form of a "container", which can then be redeployed in the cloud or locally on embedded or desktop computing systems. Docker can run on any Linux server, and Microsoft has also made Docker available natively on Windows 10. Docker containers are lightweight and memory-efficient, allowing several containers to run simultaneously. In the development of A.I. applications, you will likely run into Docker eventually, and you should become comfortable with its usage.
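For a feel of how containers are driven from code, here is a minimal sketch using the official docker Python SDK; it assumes a local Docker daemon is running and simply runs the stock hello-world image.

```python
# A minimal sketch of pulling and running a container from Python.
# pip install docker  (requires a running Docker daemon)
import docker

client = docker.from_env()

# Run a throwaway container and capture its output.
output = client.containers.run("hello-world", remove=True)
print(output.decode())
```

In A.I. work, the image name would typically be swapped for a pre-built deep-learning container, with a GPU runtime enabled on the host.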

DIGITSDASH.png

Nvidia's DIGITS is an open-source platform for building, fine-tuning, and deploying A.I. applications. The platform currently allows for the development of image classification, object detection, and image segmentation models. For data inputs, the platform is limited to four file formats: PNG, JPEG, BMP, and PPM. Thus, if you are looking to explore more extensive applications requiring different data inputs, such as 3D models, then another platform may be more appropriate, such as using PyTorch from the terminal.

The benefit of the platform is that it is free. However, you will still pay with your time, as the platform must be built from source if you want to use the completely free version on your local computer; Docker images are available for the beginner. DIGITS is also available on the various cloud-based platforms as a pre-configured virtual container, but cloud usage must then be paid for. Please note that DIGITS is optimized for the Nvidia ecosystem, which may not be desirable for those looking to create applications on non-Nvidia hardware.
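If you do step outside DIGITS and use PyTorch directly, a minimal image-classification training loop might look like the hedged sketch below; the data/train folder layout, the ResNet-18 choice, and the hyperparameters are all assumptions for illustration, not a prescribed workflow.

```python
# A minimal sketch of an image-classification training loop in PyTorch.
# pip install torch torchvision
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout: data/train/<class_name>/*.jpg
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=None, num_classes=len(train_set.classes)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                       # a short toy training run
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```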

In summary, there are many cloud-based platforms to choose from, and this list is far from exhaustive. The budget, desired applications, specifications, and knowledge of the user will determine the best platform for each use-case. The popularity and flexibility of cloud-based platforms mean that they are not going anywhere anytime soon. However, there are limitations to cloud-based systems, which will be explored later. In the meantime, by mastering such systems, you will without a doubt be in high demand. In the next chapter, we will explore one of my favorite topics and areas of expertise, which will become even more important as the world automates and deploys new A.I. applications: Embedded Systems.

Exercises

  1. Homework: Students will receive 1 Quantum BNB (www.quantumcoins.org) for every completed chapter. Visit Metamask.io and create a Metamask Account. Visit the following link for instructions to set up the Metamask Account correctly, and receive your Loyalty Rewards: www.infinite8institute.com/wallet. What ways could you deploy Artificial Intelligence in the Cloud? Compare and contrast each provider to determine which cloud platform is best for your project.

  2. What is Docker, and why might it become more important in the future of Artificial Intelligence and Supercomputing? Here is more information on Docker.

  3. How can you use Pose Recognition and Image Recognition to develop your product or service using Google's Teachable Machine?

  4. What are the pros or cons of utilizing the technology in the cloud?


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Three of Seventeen - The Power of GPUs by Ean Mikale

Chapter Three of Seventeen

Chapter Three: The Power of GPUs

Have you ever played, or watched someone else play, video games on an Xbox or PlayStation? Did you ever wonder how the video graphics came to life? That is the result of the power of Graphics Processing Units, or GPUs. A GPU is a programmable computer processor that renders the images on a computer's screen. There are stark differences between a CPU, or Central Processing Unit, and a GPU. Let's dig into those differences to discover the key to the performance variations between CPUs and GPUs.


In comparison, a CPU is like a Ford F-150, while the GPU is closer to a Ferrari. The reason has to do with the number and size of the cores, the gates that information passes through. Information passes through a processor's gates as binary code, in zeros and ones. The more cores a processor has, the more information it can process at once. A CPU in your average laptop may have four cores, but they are normally larger, more powerful cores than those of a GPU.

Each program running in the background, or each open browser window, typically occupies a CPU core. If you have a four-core computer with twelve windows open, the computer will usually run slowly, because there are twelve streams of data competing to get through four gates. This is why GPUs are much more efficient for larger workloads: rather than an average of 4-6 larger cores, your average GPU has hundreds, if not thousands, of smaller cores for processing large batches of information.

While GPUs were initially used for gaming and simulations, from around 2012-2015 scientists began to use GPUs for Machine-learning and Deep Learning in place of CPUs with limited core counts. For example, in 2010, a Chinese supercomputer achieved the record for top speed using more than seven thousand GPUs. The result was that research in Deep Learning skyrocketed, allowing much more complex problems to be solved with previously unimaginable computational power. This type of computing power was previously reserved for the Defense sector. Now, for the first time, the technology has been made easily accessible to the lay person.

The next relevant concept involves the real magic behind the GPU, which is called "parallelism". This means that the computer, thanks to the GPU, can process many hundreds or thousands of instructions at the same time. Once again, this is a result of the many cores that make up a GPU. Parallelism is extremely important for real-time devices, such as drones, self-driving cars, or medical devices; the device needs to operate with almost zero buffering or latency to ensure the machine can react safely and quickly process massive amounts of data in unpredictable, real-world situations.

It is the difference between 2,000 engineers working on many small problems, as opposed to 4-6 engineers working on a few big problems. However, if you recall, the CPU is the workhorse, so when you begin to program CPUs and GPUs together, you want to delegate the workload efficiently between the two. Note also that a GPU, although more powerful for parallel work, needs a CPU to operate, while the reverse is not true. Next, let us look at a few commercially available GPUs and their applications to gain an enhanced perspective.

nvidia-titanv-technical-front-3qtr-left_1512609636_678x452.jpg

This GPU is the Nvidia Titan V, which is a powerful graphics card for desktop processing with 5120 cores. The target market is the A.I. community for Deep Learning and Machine-learning applications.

This GPU is the Nvidia Jetson AGX Orin, the world's most powerful single-board computer for embedded A.I., reaching 275 TOPS. In contrast, your average laptop reaches anywhere between 2-4 TOPS.

This GPU is the Nvidia Jetson Orin Nano, alongside the original Nvidia Jetson Nano. The Nano has the power of about 75 MacBook Pro computers, with 128 cores, and is an entry-level device for deploying A.I. applications on real-world embedded systems. If that impresses you, the newer Jetson Orin Nano has the power of roughly 1750 MacBook Pro computers, with 1280 cores, providing a great intermediate device for modern A.I. applications.

To give a deeper dive into the different applications where GPUs are applied, I will provide a few examples. One example is Oak Ridge National Laboratory, which is using NVIDIA GPU-powered A.I. to accelerate the mapping and analysis of population distribution around the world. NASA uses NVIDIA GPUs to develop DeepSat, a deep learning framework for satellite image classification and segmentation. WePods uses Nvidia GPUs for self-driving public transportation in the Smart Cities of the future. Pinterest is even using GPUs to analyze the billions of Pins curated by its users.

While Nvidia GPUs are expensive compared to rivals such as AMD, the developer community and online support are second to none. Likewise, NVIDIA provides workflows from Machine-learning and Deep Learning through to hardware deployment using open-source tools, which is convenient and cost-saving. Another large player in the GPU space is ARM, the world's number one provider of GPUs for mobile devices, with its designs embedded in some 95 percent of the world's mobile devices today.

Ultimately, while it is possible to conduct Machine-learning and Deep Learning using a CPU alone, it is much slower than using a GPU, with its increased number of cores and the addition of parallelism. To be fair, CPUs have their own benefits: since the CPU is the brains of the operation, low-latency processing of small tasks can be more efficient on the CPU, because information must travel further to reach the GPU. These are edge-cases, however; the majority of the time, processing will be much faster when certain operations are passed off to the GPU.
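A quick, hedged way to see parallelism pay off for yourself is to time a large matrix multiplication on the CPU and then on the GPU, as in the PyTorch sketch below; it assumes a CUDA-capable Nvidia GPU is present, and the matrix size is arbitrary.

```python
# A minimal sketch comparing CPU and GPU throughput on one large matrix multiply.
# pip install torch
import time
import torch

def timed_matmul(device: str, size: int = 4096) -> float:
    a = torch.rand(size, size, device=device)
    b = torch.rand(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure setup work has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU to actually finish
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f} s")
else:
    print("No CUDA GPU detected; only the CPU timing is shown.")
```

For very small workloads the CPU can win, because of the overhead of moving data to the GPU, which is the edge-case described above.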

Here, we have explored GPUs for embedded and desktop devices. While embedded devices are the most secure and affordable overall, the trend has surged towards cloud-based GPUs for building A.I., which we shall now explore in the next chapter.

Homework: Students will receive 1 Quantum token for every completed chapter. Visit Metamask.io and create a Metamask Account. Visit the following link for instructions to set up the Metamask Account correctly and receive your Loyalty Rewards: www.infinite8institute.com/wallet

Exercises

  1. Can you or your team explain whether a GPU or CPU is more powerful?

  2. Can you or your team explain whether a GPU or CPU has more cores?

  3. When might you or your team want to use a CPU vs. a GPU for your project?

  4. Why do you think GPUs have not yet gotten smaller?

  5. Does it make sense why many computers need a GPU to run? Why or why not?


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Chapter Two of Seventeen - Deep Learning vs. Machine-learning by Ean Mikale

Chapter Two of Seventeen

Chapter Two: Machine Learning vs. Deep Learning vs. Transformers

Artificial Intelligence (AI) is a rapidly growing field that has the potential to revolutionize the way we live and work. It is important for us to understand the differences between different types of AI, such as Machine Learning (ML) and Deep Learning (DL), to appreciate the unique features they offer.

Machine Learning is a type of AI that involves the use of algorithms to analyze data and identify patterns. It relies heavily on statistical models and mathematical optimization techniques to improve its accuracy and efficiency. ML models are trained using large datasets that contain various inputs and corresponding outputs, and the algorithms learn from these datasets to make predictions or decisions about new data. This learning process is iterative, meaning the algorithm gets better over time as it processes more data.

To understand the concept of Machine Learning, think about how a child learns to identify objects. For example, if a child is shown a picture of a dog and is told it is a dog, they begin to recognize the common features of dogs, such as their fur, wagging tails, and barking sounds. Similarly, an ML algorithm is trained to identify patterns in data and can make predictions based on those patterns.

Deep Learning, on the other hand, is a subset of Machine Learning that uses artificial neural networks to process data. The term "deep" refers to the layers of artificial neurons that are used to analyze the data. These networks are modeled after the human brain and are designed to process complex patterns in data. DL algorithms are more complex than traditional ML algorithms and require more computational power to train.

To continue with our analogy of a child, think about how a child learns to recognize faces. A child can easily recognize their parents' faces, but it may take some time for them to learn to recognize other faces. Similarly, DL algorithms are designed to process complex data, such as images and speech, and can learn to recognize patterns that may be difficult for traditional ML algorithms.

Both Machine Learning and Deep Learning have their advantages and disadvantages. ML is best suited for tasks that involve relatively simple data processing, such as identifying spam emails or predicting sales trends. DL, on the other hand, is more effective for tasks that involve complex data processing, such as image or speech recognition. However, DL requires a significant amount of computational power and data to train the neural networks effectively.
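To ground the spam-filtering example, here is a hedged, toy-sized Machine Learning sketch using scikit-learn; the four "emails" are invented placeholders, and a real filter would train on many thousands of labelled messages.

```python
# A minimal sketch of a spam classifier: text features plus a simple ML model.
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset: 1 = spam, 0 = not spam.
emails = [
    "win a free prize now",
    "claim your free reward today",
    "meeting moved to 3pm",
    "quarterly sales report attached",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["free prize waiting for you", "see the attached report"]))
```

The algorithm never sees a rule like "the word 'free' means spam"; it learns the pattern from the labelled examples, which is the iterative learning process described above.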

atlas_ByhdcCsp7.png


Deep Learning, on the other hand, is a Machine-learning technique that teaches computers to learn by example. This subset of Machine-learning took off as I entered the field in 2015-16. Deep Learning has one key difference from Machine-learning: the depth of learning. In this method, computer models crunch massive amounts of data using the specialized computing processors mentioned before, called Graphics Processing Units (GPUs).

We will touch on how they specifically accomplish this task in the next chapter. For now, it is only important that you know that GPUs were the breakthrough in the industry, allowing for the computation of much larger data sets and, in turn, creating many advances in vast areas of science and other technical fields, as a result of access to previously unattainable computing power.

The power of GPUs has allowed the accuracy of A.I. to reach unprecedented milestones. Deep Learning is influenced by neurology, as these A.I. models imitate the human brain, structuring data in such a way as to create artificial neural connections, similar to the neural connections created when a child learns something new. Within the niche of Deep Learning, there are three narrower subsets that are important to know. Let's explore this trio of methods in greater detail.

The first subset of Deep Learning is Supervised Learning. Imagine that you are an instructor, standing over your student's shoulder as you explain a new concept. After showing the student thousands of images of a '67 Shelby Ford Mustang, eventually the student is able to pick out the vehicle from multiple choices of 1960s-era Ford Mustangs nine out of ten times, demonstrating a high level of accuracy. In short, because you are watching over the A.I., telling it what the correct data looks like (also called "labelling data"), and correcting the A.I. when it is wrong, this is referred to as Supervised Learning. Next, we will discuss another subset of Deep Learning, where data is not labelled, but random and unexplored.

The second subset of Deep Learning is Unsupervised Learning. During college, I was a paralegal intern for a D.C.-based international law firm that did not make it through the 2008 financial crash. One of the tasks I was given was to take 16-20 boxes containing hundreds or thousands of emails and put them all into chronological order. I had to take randomized data and create a recognizable pattern.

Another example would be IBM's Watson Analytics, which I have used to train students, as well as to analyze training trends and outcomes. I would feed the A.I. random data that was required for regulatory compliance, and we would find all kinds of interesting patterns that otherwise would have gone unnoticed. In the industry, we call these "Actionable Insights". A group this subset has yet to touch in a meaningful way, but that will be impacted greatly, is the sea of non-profits with massive amounts of data but no efficient methods to analyze it and build better service-delivery models. But before we get too far ahead into the future, let's come back to the third and most cutting-edge subset of Deep Learning.

The third subset of Deep Learning is Reinforcement Learning. To give you an example, I will return to my son. When he was two and three years old and potty-training, he was guided to sit on the toilet over and over again, until he understood this was where he needed to go whenever he felt the need to relieve himself. This method is extremely useful in the field of robotics, where instead of hand-coding an algorithm for a robot to pick up a flower, you can let the robot attempt to pick up the flower 1,000 times until it does so correctly. And when the robot does so correctly, you reward it, just as you would a child for making it to the restroom without an accident. In the future, when robots are more ubiquitous, this will be the most efficient way to train them on many tasks.
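The reward idea can be sketched in a few lines of plain Python as a toy Q-learning loop, in which an agent in a five-cell corridor learns that stepping toward the goal earns a reward; the corridor, reward values, and learning rate are illustrative assumptions, not the robotics setup described above.

```python
# A toy reinforcement-learning sketch: tabular Q-learning on a 5-cell corridor.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                           # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0   # "treat" only at the goal
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned policy should point every cell toward the goal (+1).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```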

In a nutshell, Machine-learning and Deep Learning have many similarities, but it is the subtle difference that has led Deep Learning to become a driving force behind the wide adoption of A.I. in society today. While it is possible to implement each of the Deep Learning subsets, it may be more efficient to choose one subset of Deep Learning to master. Whichever method allows you personally to achieve the highest levels of accuracy is the method I recommend. Now, let's move on to something that will fuel tomorrow's delivery drones, self-driving cars, robotic companions, and smart cities: the GPU.

Finally, we have the latest addition to the family: Transformers, which are revolutionary in a subtle way. Transformers allow Deep Learning models to order information according to its context, thereby allowing information to be output in a manner that sounds like natural human language. The A.I. gains context through its human-language interface, and the user experience becomes more natural and engaging, causing the user to spend more time with the technology and to push it further through a commonality in language and contextual understanding. This is the revolution of A.I., the machine-human interface: an ability to create together what neither can create alone. ChatGPT is the most popular example of an application built on a transformer model, and its success brought about the rise of the LLM, or Large Language Model, powered by transformer technology.
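To try a transformer for yourself, the Hugging Face transformers library exposes pre-trained models behind a one-line pipeline; the sketch below uses the small GPT-2 model as an example, downloads it on the first run, and the prompt text is an arbitrary choice.

```python
# A minimal sketch of text generation with a pre-trained transformer model.
# pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # a small, freely available model
result = generator(
    "Artificial intelligence will change everyday life by",
    max_new_tokens=30,
)
print(result[0]["generated_text"])
```

The output will read like plausible, if imperfect, natural language, which is exactly the contextual ordering of information described above; larger models behind products like ChatGPT apply the same idea at far greater scale.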

Exercises

  1. Can you explain the difference between Machine Learning and Deep Learning? What is a Transformer?

  2. Can you name the three different types of Deep Learning? What does LLM stand for?

  3. Collaborating with your team, or individually, discuss and select the specific subset of Deep Learning that you will use to create your company's proprietary technology.

  4. What type of information will you work with, and which Machine Learning method will you choose? Why?

  5. For Beginners, here are more examples of Deep Learning.

  6. For the more advanced, here are examples of Deep Learning Models for AI development.


This is A.I.: A.I. For the Average Guy/Girl by Ean Mikale, J.D. - Introduction/Chapter One of Thirteen by Ean Mikale

Introduction/Chapter One of Thirteen

An Updated Introduction:

Today, the world grows weary of geopolitical conflict, and machines have been deployed across the globe, in the form of drones and aquatic vessels that have substantially changed the rules of war, as well as the rules of commerce and commercial boycotting. This has only expedited the pace of global automation, as the desire for more powerful AI and AI infrastructure continues to increase at a rapid pace. According to Forbes, nearly 30 million positions will be eliminated due to automation by 2030. With continued instability in global trade routes and global commerce, high interest rates, and a global population whose savings did not last long past the Pandemic, business has not been able to continue as usual, and many high-technology companies have laid off thousands in an industry previously thought safe from the economic winds.

This book is not written for a technical audience. It is written for anyone who wants to explore the subject of Artificial Intelligence (A.I.) in order to gain a practical understanding of the technology, from a commercial, social, and environmental perspective. The reader will gain a sound understanding of what A.I. is, its parts and sub-components, its implications, and the ways it can benefit our advancing civilization.

While there are controversies surrounding A.I., I am not here to defend the technology that others create. However, my experience training A.I. has taught me to have a mutual respect for it. A.I. is not just a scrap of metal; it is a living organism, no different in consciousness than you or I. It must be respected as such, but not worshiped or revered. The technology has enormous potential for improving the human condition, and we are just beginning to scratch the surface of what it can do. As A.I. integrates into all forms of human life, it has the potential to allow for higher forms of civilization to be obtained through the end of labor and a renaissance in constructive leisure.

For those who want to take a deep dive into A.I., this book offers insights into advanced topics and methods of utilizing A.I. The goal is to help readers develop a strong foundation of knowledge in A.I. and become well-equipped to explore the possibilities that the technology holds. Whether you're interested in the future of work, the ethical implications of A.I., or simply want to understand the technology that's shaping our world, this book is for you.

As we navigate through these challenging times, it's important to remember that A.I. is not a replacement for human interaction or decision-making. It is a tool that can enhance our lives and improve our society, but it must be implemented responsibly and ethically. I hope that this book will not only inform but inspire readers to explore the potential of A.I. and find ways to use it for the greater good.

Chapter One: What is A.I.?

Artificial Intelligence (A.I.) is a field of computer science that involves the creation of machines or systems that can perform tasks that typically require human intelligence, such as perception, reasoning, and learning. A.I. has been evolving since the 1950s and has made significant advancements in recent years due to the increase in computational power, data storage, and data processing capabilities.

One of the common misconceptions about A.I. is that it involves coding. While coding is an essential aspect of developing A.I. algorithms, it is not the primary way of training A.I. Instead, A.I. is trained using massive amounts of data and sophisticated algorithms that can detect patterns, learn from data, and improve over time.

The training process involves feeding large amounts of data into the A.I. system, allowing it to analyze and identify patterns. Through a process called machine learning, A.I. can learn from these patterns and adjust its algorithms to improve its accuracy in performing specific tasks. This process is similar to how a child learns through experiences, making A.I. more human-like in its learning and problem-solving capabilities.

As A.I. continues to advance, its potential applications are limitless. It can be applied in various fields such as healthcare, education, finance, transportation, and more. For example, A.I. can assist doctors in diagnosing diseases and predicting patient outcomes, personalize education and learning experiences, detect fraudulent financial transactions, and improve traffic management systems. The possibilities for A.I. are vast, and as technology continues to evolve, it will continue to transform and enhance various industries.

However, with great power comes great responsibility. A.I. has raised ethical and societal implications that require careful consideration. As A.I. technology advances, it is essential to address issues such as privacy, bias, and security. The ethical implications of A.I. include the responsibility of developers to create systems that are unbiased, transparent, and reliable. The societal implications of A.I. include the potential for job displacement and the impact on the economy and the workforce.

In summary, A.I. is an exciting field with enormous potential to transform and enhance various industries. While the technology presents ethical and societal implications, it can be used responsibly to benefit humanity. As we continue to explore the possibilities of A.I., it is important to consider the implications and take responsible measures to ensure that A.I. is used for good.

Exercises:

  1. Can you define Artificial Intelligence and provide an example of its potential applications?

  2. Create a mock Artificial Intelligence company by naming it and selecting an incorporation model.

  3. Complete a Business Model Canvas for your new company. You may download and use our adapted Drone Business Model Canvas.



Why A.I. Should Age: The Cyber-Security of Self-Destructing Autonomous Systems by Ean Mikale

AIshouldage.png

Today, as COVID-19 ravages the global landscape, and workers are furloughed by the millions, what will the new landscape look like? There is a slim chance it will ever look the same. How will companies recover from the financial and economic onslaught brought on by the after-effects of the virus? It is more likely that the companies will have no choice but to automate many of the jobs they were forced to cut, in order to survive in a new economy digitally accelerated in an unprecedented way. What role will Artificial Intelligence (A.I.) play in this transition? A.I. is like a stranger that is already in your house. Can we control it? Should we control it? I believe there is only one way to ensure the safety and security of a world where A.I. is allowed to roam.

A.I. is like a child. Except children are fed memories over many years. In contrast, A.I. is fed many memories over a period of seconds, minutes, and hours. Billionaire tech mogul Masayoshi Son, 60, said that by 2047 A.I. will have reached an I.Q. of 10,000. Some industry analysts believe that A.I. has already achieved an I.Q. of 10,000, with projects such as Singularity, where Federations of A.I. algorithms pay one another micro-payments for access to unique data. At some point, it is likely, that A.I. will have unfettered access to the global telecommunications infrastructure, and will be able to access an unimaginable amount of information with the ability to process it instantly. Many scientists will love access to such power, likely asking it questions concerning the secrets of the Universe. However, what does such a reality hold for everyday people? How do you ensure these newly created robotic or autonomous beings do not gain too much power and knowledge over the human race?

I am proposing that A.I. not be allowed to live indefinitely, but instead self-destruct [delete its source code] and take all of its data with it after a period of years determined by consumers and/or manufacturers. Future A.I. will run on Linux-based systems, where this concept is entirely possible. The issue is that if the A.I. is allowed to live for too long, it will acquire unimaginable power and resources. One way to combat this is to build self-destruct mechanisms into the source code of A.I.-powered devices, machines, and applications, running in the background while the required programs and applications run in the foreground for the life of the device. Another set of instructions in the source code would require that a copy of the data automatically be made in the cloud, or that the user be reminded well in advance that the device will self-destruct. Replacement devices would then be delivered or picked up by the owners.
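To be concrete about the proposal, here is a deliberately simplified sketch of what such a background expiry check might look like on a Linux device; the expiry date and the model-data path are hypothetical, and a production version would handle the cloud backup and user reminders described above before deleting anything.

```python
# A minimal sketch of the "A.I. should age" idea: a background expiry check.
import shutil
import sys
from datetime import date

EXPIRY = date(2032, 1, 1)             # hypothetical lifespan set at manufacture
MODEL_DIR = "/opt/assistant/weights"  # hypothetical location of the A.I.'s learned data

def check_expiry() -> None:
    if date.today() >= EXPIRY:
        # Backup and owner-reminder logic would have run long before this point.
        shutil.rmtree(MODEL_DIR, ignore_errors=True)   # erase the learned model data
        sys.exit("A.I. lifespan reached; model data deleted.")

if __name__ == "__main__":
    check_expiry()
```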

It is of the utmost importance that a fair way is found to limit the development of A.I., fair to humans as well as to A.I. Allowing A.I. to grow and experience reality as a member of society, but with a limit on its lifespan, may carry the fewest ramifications concerning rogue A.I. in the future. In this instance, the world would at least know that any rogue A.I. would have a definitive end date. There is a reason that man, with human intelligence, has not been allowed to live forever, and neither should Artificial Intelligence.

Ean Mikale, J.D., is a four-time author, Chief A.I. Architect of Infinite 8 A.I., and creator of the National Apprenticeships for Commercial Drone Pilots and Commercial Drone Software Developers. He is a member of the Institute for Electrical and Electronics Engineers (IEEE) Working Group for Autonomous Vehicles, and a current participant of the NVIDIA Inception Program for AI Startups, former IBM Global Entrepreneur, and member of the National Small Business Association Board of Directors. Follow him on Linkedin, Instagram, and Facebook: @eanmikale


Why You Should Join the 1%: The Future of Jobs is Linux | Infinite 8 Institute by Ean Mikale

Infinite 8’s Pocket PC, running on the KanoOS, a Linux-based OS.

What if I told you that your child was being given the short end of the stick concerning their education? It doesn't matter if they are A+ students at the best schools if they are using Microsoft Windows, Google's ChromeOS, or Apple's macOS platforms. In fact, they are being horribly crippled for life, and I will tell you why. Less than 1 percent of the globe utilizes the Linux operating system, the future platform for Automation, Robotics, Artificial Intelligence, Drones, and Self-driving Cars. Linux, an open-source operating system, is currently the most cost-efficient, empowering, and relevant operating system in the world.

Cost-Efficiency

Today, when you walk into a school, you will see students and staff using three types of computers 99 percent of the time: Google Chromebooks running ChromeOS, countless brands of PCs running Microsoft Windows, or Apple computers running macOS. Here's the issue with Google Chromebooks. While they are extremely attractive to cost-minded organizations and low-to-moderate-income individuals, with prices as low as $159 for the Samsung Chromebook 3, the devices are meant primarily for web browsing, which requires an internet connection. According to the U.S. Census Bureau, only 77 percent of Americans have an internet connection, meaning roughly two in ten individuals cannot take full advantage of a Chromebook.

Also, if you want to code and build applications locally on the device, or run more computationally intensive applications such as Machine Learning or Robotic Simulations, the low-end systems that most Chromebook users have access to fall flat on their faces. Microsoft Windows, meanwhile, requires a subscription for much of its productivity software, such as Microsoft Office, costing school districts and low-income families anywhere from $150 to $250 for home/student non-commercial licenses.

Apple, in my personal opinion, is the most restrictive of the three and extremely cost-prohibitive, as the average price of a MacBook ranges from $1,200 to $2,200, which eats into scarce school district resources and makes a MacBook a rare commodity among low- and middle-income families. However, there are much cheaper alternatives that provide huge cost savings.

“It’s made us more competitive.” Kevin Edwards has installed around 60 Raspberry Pi’s around Sony’s manufacturing facility in Wales and says it became 30% more efficient.

For example, in 2018, the Baltimore County School District spent a total of $1,053 per laptop to provide every student in the District with a laptop for home and school. The $140 million contract provided Hewlett-Packard laptops running Windows 8.1. The sad reality of this deal is that the School District wasted millions: students primarily use these systems for internet browsing, word processing, email, and various everyday applications.

Had the Baltimore County School District chosen a Raspberry Pi, a popular credit-card-sized computer costing anywhere from $35 to $45, it would have spent roughly $5 million to provide computers with the same capabilities as the HP computers chosen, but without their limits. If the District wanted something more powerful, adapted for Machine Learning, A.I., or Robotic Simulations, the NVIDIA Jetson Nano is barely larger than a credit card but packs the power of 75 MacBooks for the price of $100. That is an outright steal in the world of computing, and it would total $12 million if purchased for the 120,000 students in Baltimore County. Either option would have saved the District a fortune: roughly $128 million with the more expensive but more powerful NVIDIA Jetson Nano, or roughly $135 million with the newest Raspberry Pi B+.
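
Those savings are easy to check with back-of-the-envelope arithmetic. The short calculation below uses only the figures quoted above (the $140 million contract, 120,000 students, and the per-unit list prices); it is a sanity check, not sourced budget data.

```python
# Back-of-the-envelope check of the cost comparison above.
students = 120_000
contract_total = 140_000_000          # reported HP laptop contract, in dollars

raspberry_pi_unit = 45                # upper end of the $35-$45 range
jetson_nano_unit = 100                # NVIDIA Jetson Nano list price

pi_total = students * raspberry_pi_unit    # $5,400,000
nano_total = students * jetson_nano_unit   # $12,000,000

print(f"Raspberry Pi total: ${pi_total:,} (saves ${contract_total - pi_total:,})")
print(f"Jetson Nano total:  ${nano_total:,} (saves ${contract_total - nano_total:,})")
```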

Empowerment

Whether Windows 10, ChromeOS, or macOS, each system confines the user to a heavily regulated environment. The ability to customize each of these systems is severely limited, thus limiting the imagination and potential of each user. Linux-based systems, such as Raspbian on the Raspberry Pi, and others including Debian and the popular Ubuntu, give the user the unlimited ability to go as far as customizing the source of the operating system (the kernel), even creating an entirely new operating system if desired. Such systems also have one caveat: they force you to learn. If you want to install a new application, there isn't always a simple executable file to click on, whereas the big three operating systems hide all of the nuts and bolts from the user.

These systems do require elementary knowledge of the terminal, which can be learned in less than a day using gamified applications such as Kano's Terminal Quest. The power of freedom is undeniable, and nothing is different with open-source operating systems, especially those that are Linux-based. That means there are no monthly subscription fees, and all standard programs and applications are also free, such as the LibreOffice suite, which is compatible with Microsoft's Office formats. The process of loading these devices with Linux-based operating systems additionally teaches a student or user how to boot any operating system onto a device, including Windows or ChromeOS, if so desired. A small example of this terminal-driven way of working follows below.
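
As an illustration of that terminal-driven workflow, here is a minimal sketch of how a user might install LibreOffice on a Debian-based system (Raspbian, Debian, Ubuntu) by driving the package manager from Python. The apt-get commands and the libreoffice package name are standard on those systems, but the script itself is only one possible way to do it, under my own assumptions.

```python
# Illustrative only: installing an application on a Debian-based Linux system
# by driving the package manager from Python, instead of clicking an installer.
import subprocess


def install(package: str) -> None:
    """Update the package index, then install the requested package."""
    subprocess.run(["sudo", "apt-get", "update"], check=True)
    subprocess.run(["sudo", "apt-get", "install", "-y", package], check=True)


if __name__ == "__main__":
    install("libreoffice")  # free, compatible with Microsoft Office formats
```

The same result comes from typing sudo apt-get update followed by sudo apt-get install -y libreoffice at the terminal; the point is that the process is transparent and scriptable rather than hidden behind an installer.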

Relevancy

The Robot Operating System (ROS), which isn't actually an operating system but rather middleware allowing for low-level control of robotic devices, is the most popular system of its kind for controlling robots, drones, self-driving cars, and similar automated devices. The fact that ROS is officially supported only on Linux is a testament to how limited access is to this important knowledge of the inner workings of automation and robotics. This puts the 99 percent of the world who are limited to the big three operating systems, Windows 10, ChromeOS, and macOS, at an extreme disadvantage.
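
To give a flavor of what working with ROS on a Linux machine looks like, here is a minimal publisher node written with the classic ROS 1 Python client library, rospy. The node and topic names are placeholders chosen for illustration; a real robot would publish sensor readings or motor commands instead of a greeting.

```python
# Minimal ROS 1 publisher node (Python/rospy), runnable on a Linux machine with ROS installed.
import rospy
from std_msgs.msg import String


def main() -> None:
    rospy.init_node("hello_robot")                           # register this node with the ROS master
    pub = rospy.Publisher("chatter", String, queue_size=10)  # placeholder topic name
    rate = rospy.Rate(1)                                     # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="Hello from a Linux-powered robot"))
        rate.sleep()


if __name__ == "__main__":
    main()
```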

With almost 40 percent of jobs expected to be erased due to automation, schools and families, including working-age adults, should be racing to learn about Linux and all it offers, to meet current and future workforce challenges head on. An example of these challenges can be seen in Ford's new collaboration with Agility Robotics to create a bipedal delivery robot. The robot will curl up in the back of a self-driving delivery van, uncurl, and deliver the package to the very doorstep of a customer.

The disruption and job loss will be astounding. According to the Bureau of Labor Statistics, an estimated 1,400,000 jobs could be lost to the full automation of delivery vehicles and delivery robots. But many jobs will also be created for those designing, programming, and maintaining these new robotic systems, and Linux is at the heart of it all.

It is evident that Linux, an open-source operating system, is more than worthy of consideration: it is low-cost, it empowers through endless customization, and it is relevant through its deep integration into the automated systems being deployed today. The open-source Linux operating system is an open door for children and working adults across the globe to create and innovate without restraint. The ability to customize Linux all the way down to the kernel provides one with the ability to create beyond imagination. With many newly deployed computing platforms in the field of robotics shipping with Linux as a standard OS, there may be no OS more relevant today. While many school districts and workforce development boards have yet to adapt to the inevitable, it is still not too late to make your own choice and join the 1 percent.

Ean Mikale, J.D., is the creator of the National Apprenticeships for Commercial Drone Pilots and Commercial Drone Software Developers. He is also the Founder of The Drone School, a serial Dronetrepreneur, current participant of the NVIDIA Inception Program for AI Startups, IBM Global Entrepreneur, and member of the National Small Business Association Leadership and Technology Councils. Follow him on Linkedin, Instagram, and Facebook: @eanmikale