Why A.I. Should Age: The Cyber-Security of Self-Destructing Autonomous Systems / by Ean Mikale


Today, as COVID-19 ravages the global landscape and workers are furloughed by the millions, what will the new landscape look like? There is a slim chance it will ever look the same. How will companies recover from the financial and economic onslaught brought on by the after-effects of the virus? More likely, companies will have no choice but to automate many of the jobs they were forced to cut in order to survive in an economy undergoing unprecedented digital acceleration. What role will Artificial Intelligence (A.I.) play in this transition? A.I. is like a stranger that is already in your house. Can we control it? Should we control it? I believe there is only one way to ensure the safety and security of a world where A.I. is allowed to roam.

A.I. is like a child, except that where children accumulate memories over many years, A.I. is fed many lifetimes' worth of memories in seconds, minutes, and hours. Billionaire tech mogul Masayoshi Son, 60, has said that by 2047 A.I. will have reached an I.Q. of 10,000. Some industry analysts believe A.I. has already achieved an I.Q. of 10,000, pointing to projects such as Singularity, where federations of A.I. algorithms pay one another micro-payments for access to unique data. At some point, it is likely that A.I. will have unfettered access to the global telecommunications infrastructure and will be able to reach an unimaginable amount of information, with the ability to process it instantly. Many scientists would welcome such power, likely asking it questions about the secrets of the Universe. But what does such a reality hold for everyday people? How do we ensure these newly created robotic or autonomous beings do not gain too much power and knowledge over the human race?

I am proposing that A.I. not be allowed to live indefinitely, but instead self-destruct [delete its source code] and take all of its data with it after a period of years determined by consumers and/or manufacturers. Much of today's A.I. already runs on Linux-based systems, where this concept is entirely feasible. The issue is that if A.I. is allowed to live too long, it will acquire unimaginable power and resources. One way to combat this is to build self-destruct mechanisms into the source code of A.I.-powered devices, machines, and applications: a mechanism that runs in the background for the life of the device, while the required programs and applications run in the foreground. Another set of instructions in the source code would either automatically copy the device's data to the cloud or remind the user well in advance that the device will self-destruct. Replacement devices would then be delivered to, or picked up by, the owners.
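The lifespan mechanism described above can be sketched in a few lines. This is a minimal illustration, not a hardened implementation: the manufacture date, lifespan, and warning period are hypothetical values that in practice would come from tamper-resistant device configuration, and the actual backup and wipe steps are only indicated by the returned action names.

```python
from datetime import date, timedelta

# Hypothetical values; a real device would read these from protected storage.
MANUFACTURE_DATE = date(2020, 1, 1)
LIFESPAN = timedelta(days=5 * 365)     # lifespan set by consumer/manufacturer
WARNING_PERIOD = timedelta(days=90)    # remind the owner well in advance

def check_lifespan(today: date) -> str:
    """Return the action the background watchdog should take on a given day."""
    expiry = MANUFACTURE_DATE + LIFESPAN
    if today >= expiry:
        # Past end of life: back up data to the cloud, then delete source code.
        return "self-destruct"
    if today >= expiry - WARNING_PERIOD:
        # Inside the warning window: notify the owner and arrange replacement.
        return "warn-owner"
    return "run-normally"

print(check_lifespan(date(2021, 6, 1)))
print(check_lifespan(date(2025, 6, 1)))
```

A real deployment would run such a check periodically as a background service (for example, a scheduled job on the Linux systems mentioned above) rather than as a one-off script.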

It is of the utmost importance that we find a fair way to limit the development of A.I., fair to humans as well as to A.I. Allowing A.I. to grow and experience reality as a member of society, but with a limit on its lifespan, may carry the fewest long-term risks of rogue A.I. In that case, the world would at least know that any rogue A.I. has a definitive end date. There is a reason that humans, for all their intelligence, have not been allowed to live forever, and neither should Artificial Intelligence.

Ean Mikale, J.D., is a four-time author, Chief A.I. Architect of Infinite 8 A.I., and creator of the National Apprenticeships for Commercial Drone Pilots and Commercial Drone Software Developers. He is a member of the Institute of Electrical and Electronics Engineers (IEEE) Working Group for Autonomous Vehicles, a current participant in the NVIDIA Inception Program for AI Startups, a former IBM Global Entrepreneur, and a member of the National Small Business Association Board of Directors. Follow him on LinkedIn, Instagram, and Facebook: @eanmikale