Isaac Asimov’s Three Laws of Robotics are an important reference point in the development of artificial intelligence laws. They are as follows:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov devised these laws as a narrative device for his short stories and novels, but they have also influenced theories on the ethics of artificial intelligence. More recently, the following federal legislation has been introduced to regulate artificial intelligence: (1) the Self Drive Act; (2) the AV Start Act; and (3) the Future of AI Act.
The Self Drive Act establishes the federal role in ensuring the safety of highly automated vehicles by encouraging the testing and deployment of such vehicles. A “highly automated vehicle” is a motor vehicle, other than a commercial motor vehicle, that is equipped with an automated driving system capable of performing the entire dynamic driving task on a sustained basis. It preempts states from enacting laws regarding the design, construction, or performance of highly automated vehicles or automated driving systems unless such laws enact standards identical to federal standards.
The AV Start Act is the bipartisan Senate companion bill that addresses driverless vehicles. The bill: (1) establishes a framework for a federal role in ensuring the safety of highly automated vehicles (HAVs); (2) preempts states from adopting, maintaining, or enforcing any law, rule, or standard regulating an HAV or automated driving system (ADS) with regard to certain safety evaluation report subject areas; (3) sets forth conditions under which HAVs may be introduced into interstate commerce for testing, evaluation, or demonstration; and (4) applies certain safety exemptions to HAVs.
The Future of AI Act is another bipartisan Senate bill, designed to establish an advisory committee on artificial intelligence issues. The bill directs the Department of Commerce to establish the Federal Advisory Committee on the Development and Implementation of Artificial Intelligence to advise Commerce on matters relating to the development of artificial intelligence, including:
- U.S. competitiveness;
- U.S. workforce;
- Education to prepare workers as employers’ needs change;
- Open sharing of data and research on artificial intelligence; and
- International cooperation and competitiveness.
Artificial intelligence laws address the rights and responsibilities of artificial intelligence products or services and of their users. It is important to realize that a neural network stores a large amount of learned information (its parameters), which is applied by an algorithm to new input data in order to produce an output. Products that operate by using artificial intelligence are granted a substantial degree of autonomy, and regulators are struggling with the legal issues that result.
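To make that point concrete, the sketch below shows, in simplified form, what such a system does at its core: stored parameters learned during training are applied to new input data to produce an output that the product then acts on. The weights, the input values, and the predict function are invented purely for illustration and do not represent any real product or library.

```python
# Minimal, hypothetical sketch: a trained neural network is stored parameters
# (weights) applied to new input data to produce an output.
import numpy as np

def relu(x):
    # Simple activation function applied between layers.
    return np.maximum(0, x)

# "Stored information": weights and biases learned during training
# (the numbers here are made up purely for illustration).
W1 = np.array([[0.5, -0.2],
               [0.1,  0.8]])      # input layer -> hidden layer
b1 = np.array([0.0, 0.1])
W2 = np.array([[0.3],
               [-0.7]])           # hidden layer -> output layer
b2 = np.array([0.05])

def predict(x):
    """Run one input through the network and return its output."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

# New input (e.g., sensor readings from an autonomous product).
sensor_reading = np.array([0.9, 0.4])
print(predict(sensor_reading))    # the "result" the product acts on
```

Because the output is determined by parameters the end user never sees or sets, assigning responsibility when that output causes harm is part of what regulators are grappling with.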
The Applicable Principles
In general, product liability laws determine the outcome if an individual is injured while using a product (e.g., a robot). These laws are grounded in common law and statutory causes of action such as negligence, breach of warranty, and strict liability. The Restatement of Torts discusses design defect, manufacturing defect, and failure-to-warn principles. The judicial system has also evaluated cases involving complications that result from the use of artificial intelligence products or services.
In Cruz v. Talmadge, the claimants were injured when their bus struck an overpass. The plaintiffs alleged that the bus driver was relying on a global positioning system (“GPS”) device that was arguably semi-autonomous and that failed to function properly, so they sued the product manufacturer and the transportation company.
In Nilsson v. General Motors, LLC, the claimant sued General Motors after one of its autonomous vehicles failed to function properly, veered into his lane, and injured him. The plaintiff alleged negligence but did not claim that the product was defective.
In Holbrook v. Prodomax Automation, Ltd., an auto-parts worker was killed by a robot in her workplace. The plaintiff alleged that the robot should never have entered the victim’s work area and sued the manufacturer under various legal theories. The outcome of this case may set a course that effectively requires manufacturers to obtain adequate insurance coverage for their products.
Autonomous Vehicle Regulations
At this time, some twenty-nine states have enacted legislation addressing autonomous vehicles. These regulations share a common premise: to promote innovation while ensuring safety and protecting the public. The National Conference of State Legislatures maintains a list of the current laws.
For example, under SB 1298, California passed legislation requiring the Department of the California Highway Patrol to adopt safety standards and performance requirements to ensure the safe operation and testing of autonomous vehicles on the state’s public roads. The bill permits autonomous vehicles to be operated or tested on public roads pending the adoption of those safety standards and performance requirements.
Finally, according to the National Conference of State Legislatures’ website, six states have issued executive orders to regulate autonomous vehicles. Moreover, the National Highway Traffic Safety Administration has released new federal guidelines for automated driving systems.