Artificial Intelligence (AI). Deep Learning. Neural Networks. All are terms for a subject that is growing in importance and impact. Essentially, it is a way to gather and use the multitude of information sources available: combining the information from those sources to drive behavior, inform businesses and consumers of their choices, and move them toward immediate decision making based upon that intelligent combination of information.

However, any discussion of AI must be understood in the context of the ethical considerations it raises, as well as how to build “trust” into the AI “child.” “Raising AI” faces the same challenges as human education. How do we foster the understanding of right and wrong? How do we teach what it means to behave responsibly? How does the AI child learn to impart knowledge without bias? How do we teach self-reliance while also teaching the importance of collaboration and communication? Melinda Gates has stated that there are so few women and people of color in the AI sector of high tech that it will be difficult to keep bias from finding its way into the AI world. But strive we must to do exactly that.

For trust in AI to occur, companies are going to have to build and train AI systems to provide clear explanations for the actions they decide to take. Yes, these systems are being built to actually make decisions. Therefore, it is critical that businesses “raise” them to act responsibly.

Europe is a bit ahead of the United States in considering the implications. European policymakers, in the spirit of the European Union’s General Data Protection Regulation (GDPR), are considering regulations that would give individuals a “right to explanation” when AI actions are taken, even to the extent of covering the AI system itself and the use of its algorithms. Basically, the intent seems to be to treat the AI system as a “person” and require that “person” to explain why it chose the action it took over other possible actions.

The German government has adopted rules requiring the algorithms of autonomous cars to “choose” material damage over human injury and to not discriminate based upon gender, age or race. The implications of that are enormous. High-capacity, real-time recognition and processing will be an absolute requirement.

In Germany, Audi has announced that the company will assume liability for accidents involving its 2019 A8 when its Traffic Jam Pilot is engaged.

Utilized properly, AI can give companies collaborative and powerful new members of the workforce. (For example: in medicine, an AI deep learning machine could, within one to three years, be working alongside the doctor in evaluating, diagnosing and providing treatment recommendations.)

In lube centers, think about how AI and deep learning could permit the shop owner to effectively manage the increasing technological complexity of the next generation of vehicles, both for customer-facing and tech-facing reasons. With a world of public information available, it is not beyond the realm of imagination to have a car drive in and have AI begin interacting with the driver. Once the POS system’s information about the customer, vehicle history and current mileage becomes available to the AI, it can begin to do some predictive analysis. Based upon all available data, AI can make suggestions to the tech (or even the driver directly) about the next most likely items that will require service, all in a way that educates the customer and builds trust with the shop.
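To make that idea concrete, here is a minimal, rule-based sketch of the kind of mileage-interval prediction described above. The field names (service, interval_miles, last_service_mileage) and the fixed intervals are illustrative assumptions only; a real AI system would learn these patterns from fleet-wide service history and the shop’s own POS data rather than relying on hard-coded values.

from dataclasses import dataclass

@dataclass
class ServiceRecord:
    service: str                 # e.g. "oil change", "cabin air filter"
    interval_miles: int          # assumed recommended interval (illustrative)
    last_service_mileage: int    # mileage recorded at the last service (from POS history)

def suggest_services(current_mileage: int, history: list[ServiceRecord],
                     lookahead_miles: int = 1000) -> list[str]:
    """Return services that are due now or within the next lookahead_miles."""
    suggestions = []
    for record in history:
        due_at = record.last_service_mileage + record.interval_miles
        if current_mileage + lookahead_miles >= due_at:
            suggestions.append(f"{record.service} (due around {due_at:,} miles)")
    return suggestions

# Example: a vehicle arriving at 61,500 miles
history = [
    ServiceRecord("oil change", 5_000, 57_000),
    ServiceRecord("cabin air filter", 15_000, 45_000),
    ServiceRecord("transmission fluid", 60_000, 0),
]
print(suggest_services(61_500, history))
# -> ['oil change (due around 62,000 miles)',
#     'cabin air filter (due around 60,000 miles)',
#     'transmission fluid (due around 60,000 miles)']

The point of the sketch is the workflow, not the rules: the same inputs (customer, vehicle history, current mileage) feed the prediction whether the intervals are fixed defaults or learned by a deep learning model.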

AI may be one of our newest and most reliable “employees” in the not-so-distant future. Are we ready?

Steve Barram

STEVE BARRAM is CEO of Integrated Services, Inc. (ISI), software makers of LubeSoft. He has been actively involved in the fast lube segment of the automotive aftermarket industry for over 30 years through leadership roles, speaking engagements, and serving on boards.