Byrne (2016) introduces the concept and history of the Locomotive Act of 1865, also known as the Red Flag Act, which was passed by the U.K. Parliament to increase safety and awareness surrounding self-propelled vehicles. The Red Flag Act required that when a vehicle had several carriages attached, a pedestrian carrying a red flag walk at least 60 yards in front of the train. The author explains that the Red Flag Act illustrates the idea that there should be a warning when a large, dangerous machine is approaching. The same concept can be applied to artificial intelligence and computer interactions, as opposed to human-to-human interactions: a machine would be clearly labeled a machine so as not to mislead the human user. This concept has been called the Turing Red Flag Law by Toby Walsh, an Australian artificial intelligence professor.
Potential Benefits for Society
Artificial intelligence is an increasingly powerful tool that continues to benefit society in new ways. As it grows in complexity, artificial intelligence becomes more difficult to distinguish from human computer users. A Turing Red Flag Law would establish a responsibility to maintain accountability and transparency for all artificial intelligence, which would reduce uncertainty among computer users and might even positively influence the evolution of artificial intelligence itself, further benefiting society as a whole.
Enforcing transparency through the computer architecture itself mitigates the potential harms and risks that arise when human and artificial intelligence users cannot be distinguished. If artificial intelligence continues to be used in an increasing number of applications, its recommendations will likely come to be trusted more than an anonymous human user’s recommendations because of the regulation enforced within its architecture. Flagging artificial intelligence could also benefit the user’s security and transparency by making it more difficult for a human threat actor to intercept the user’s interactions. With additional development, the human-to-AI authentication methods available to an artificial intelligence model could easily surpass those available to most human users.
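As a minimal sketch of what architecture-level flagging might look like (the names, key, and message format here are hypothetical illustrations, not anything proposed by Byrne or Walsh), an artificial intelligence agent could attach a cryptographically signed disclosure flag to each message it sends, which a client could verify before displaying the message:

```python
import hashlib
import hmac
import json

# Hypothetical registration key for a disclosed AI agent. In practice this
# would be a public-key signature checked against a registry, not a shared
# secret; HMAC is used here only to keep the sketch self-contained.
REGISTRY_KEY = b"example-registration-key"

def sign_ai_message(text: str, agent_id: str) -> dict:
    """Wrap an AI-generated message with a machine-readable disclosure flag."""
    payload = {"agent_id": agent_id, "is_ai": True, "text": text}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(REGISTRY_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_ai_message(message: dict) -> bool:
    """Return True if the disclosure flag is present and untampered."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

message = sign_ai_message("Here is my recommendation...", agent_id="assistant-001")
assert message["payload"]["is_ai"] and verify_ai_message(message)
```

The point of the sketch is that the flag lives in the message architecture itself: a user (or their client software) can verify the disclosure mechanically rather than relying on the agent to announce itself in prose.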
Threat actors will inevitably continue to use artificial intelligence as a tool for malice, and a Turing Red Flag Law would bring additional transparency to the harmful actions of an artificial intelligence so that they could be more easily detected and stopped. Artificial intelligence systems have a history of exhibiting bias, and with a Turing Red Flag Law in effect, a human user could more easily recognize such behavior as machine bias.
Potential Consequences for Society
One of the technology industry’s primary concerns, should a Turing Red Flag Law take effect, is that the continued development of artificial intelligence would be stifled or stunted by the additional limitations placed on the software. Such a restriction would also likely not be adopted globally, raising the possibility of advanced rogue artificial intelligence being used by organizations in non-participating jurisdictions. Technology creators might fear legal repercussions when developing their artificial intelligence systems to their full potential. It is vital that a Turing Red Flag Law not stand in opposition to the creativity and freedom of experimentation necessary for the continuous development of these important technologies.
Byrne quotes Toby Walsh in his discussion of artificial intelligence in self-driving cars and how the Turing Red Flag Law would apply in that specific example. Walsh states that human drivers often make far more mistakes on the road than artificial intelligence and are a more dangerous threat than a self-driving system. There would therefore be a potential benefit to distinguishing between human drivers and artificial intelligence systems on the road, because the human driver could be recognized as less predictable, more dangerous, and requiring more attention. At the same time, artificial intelligence drivers could be expected to drive predictably, follow traffic rules appropriately, and fare better in low-visibility conditions. This example reinforces the value inherent in knowing whether a user is human or computer.
Reflecting on the Author’s Proposal
The development of artificial intelligence systems requires balancing the forces of regulation and rapid development so that the technology continues to progress appropriately. The added transparency and accountability of a Turing Red Flag Law are powerful arguments for adoption, while the slowing of development and the attention regulation demands present considerable challenges to implementation. Regardless of whether a Turing Red Flag Law is adopted in its current conceptualization, society must develop a way for artificial intelligence algorithms to provide accountability and transparency, as well as a means of detecting and intervening in artificial intelligence’s potentially harmful behavior.
Armed with artificial intelligence and machine learning tools, threat actors will make many attempts to disguise their artificial intelligence systems in hopes of masquerading as human users; I think it is extremely likely that tools to detect artificial intelligence algorithms for authentication will be brought into use. I think that defensive artificial intelligence models will quickly become more adept at detecting other artificial intelligence algorithms and labeling them as artificial intelligence, with more proficiency than is possible for humans. Given the evolutionary nature of cybersecurity and artificial intelligence tools, the tools used to detect artificial intelligence will likely themselves be AI systems trained on data from all known AI models. This technology would function similarly to antivirus software in that it takes signatures of the algorithm and matches them against a collection of signatures in a database of known AI algorithms.
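A minimal sketch of that antivirus analogy follows (the fingerprints and database entries are invented for illustration; a real detector would extract statistical features of a model’s output and use a learned classifier rather than exact hash matching):

```python
import hashlib

# Hypothetical database mapping signature hashes to known AI systems,
# analogous to an antivirus signature database.
KNOWN_AI_SIGNATURES = {
    hashlib.sha256(b"model-fingerprint-alpha").hexdigest(): "ExampleBot v1",
    hashlib.sha256(b"model-fingerprint-beta").hexdigest(): "ExampleBot v2",
}

def extract_fingerprint(output_sample: bytes) -> str:
    """Reduce an output sample to a signature, mirroring antivirus hashing.
    A real system would compute statistical features, not a raw hash."""
    return hashlib.sha256(output_sample).hexdigest()

def identify_ai(output_sample: bytes) -> str | None:
    """Match a sample's signature against the database of known AI systems."""
    return KNOWN_AI_SIGNATURES.get(extract_fingerprint(output_sample))

print(identify_ai(b"model-fingerprint-alpha"))  # "ExampleBot v1"
print(identify_ai(b"unrecognized output"))      # None (unknown or human)
```

As with antivirus, the weakness of a signature approach is that it only recognizes what is already in the database, which is why the detection tools would need to evolve alongside the AI systems they are meant to flag.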
Overenforcement of a law like a Turing Red Flag Law would likely put small businesses and individuals at a disadvantage relative to large corporations in developing artificial intelligence systems rapidly, because of the regulatory attention that must be paid and the severe penalties for potential artificial intelligence mistakes. These regulations would lead to longer testing cycles and potentially greater ethical scrutiny when working with artificial intelligence, which I think is a positive and necessary change.
AI models are trained on data created by humans, and a system inherits all the bias, prejudice, and hypocrisy portrayed in its training data. Attention to developing artificial intelligence regulation that corrects the inherent problems in training data will hopefully lead society to address those same issues as they exist in people. I hope that, as we make societal decisions surrounding these issues, we are drawn into a global conversation covering all aspects of them, so that we can begin to shape the architecture of the next generation of artificial intelligence tools. Without transparency, accountability, and due diligence given to ethics, the benefits of artificial intelligence tools will not optimally serve the common goals of humanity. I do not look forward to a future with many different personalities of artificial intelligence developed with varying intent. The potential of artificial intelligence as a unifying element and a technology for global communication and understanding is unprecedented, its significance comparable in effect to the internet and the telephone.
References
Byrne, M. (2016). AI professor proposes 'Turing Red Flag Law'. Vice Media. http://motherboard.vice.com/read/ai-professor-proposes-turing-red-flag-law