Artificial intelligence, or “AI” as it is commonly known, is becoming more prevalent as businesses realize that AI can accomplish many basic tasks more efficiently and economically than a human employee. AI refers to machines or systems that simulate human intelligence in performing functions and solving problems. AI frequently uses algorithms to make decisions and, with help from real-time data, can learn from experience to become a better problem solver. Most people interact with AI regularly, with examples ranging from Amazon’s virtual assistant, Alexa, to route-optimization tools like Waze, to Netflix’s recommendations for a viewer’s next series binge.
AI really shines when it comes to reviewing and culling vast amounts of data to give business owners the insight they need to make more informed decisions. For example, a medical group with a shortage of radiologists may use AI to review patients’ computerized tomography (CT) scans and flag early indications of cancer for physician follow-up. The transportation industry has also utilized AI in developing driverless vehicles. Beyond providing autonomous functionality, the AI collects and analyzes driving data to improve early-warning systems that predict vehicular problems and safety issues that need to be addressed.
Legal Compliance Is Crucial
Innovative developers seem to find more applications for AI all the time, and AI’s capacity to expand the efficiencies of the industries into which it is introduced is limited only by developers’ imaginations. However, one limitation that is often overlooked, but could prove disastrous, is failing to ensure that the decisions AI is making comply with applicable laws and regulations. For example, a company might use an AI assistant to sift through countless résumés for job openings to flag the best candidates. In so doing, the AI may look for patterns or relationships in the data it is reviewing to help streamline that process. Business owners need to know the resulting analytics do not discriminate against candidates on the basis of, among other protected characteristics, race, sex, religion or sexual orientation, which would violate federal law.
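To make that compliance check concrete, the short Python sketch below illustrates one common screening test, the EEOC’s “four-fifths rule,” under which a selection rate for any group below 80 percent of the highest group’s rate is treated as evidence of adverse impact. The data and function names are hypothetical; this is an illustration of the kind of audit a business might run, not a substitute for legal review.

```python
# Minimal sketch of a disparate-impact check using the EEOC "four-fifths rule":
# a selection rate for any group below 80% of the highest group's rate is
# treated as evidence of adverse impact. All data below is hypothetical.

from collections import Counter

def selection_rates(candidates):
    """Return the fraction of candidates the screener selected, per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_violations(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the best rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical screener output: (protected group label, selected by AI?)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.667, 'B': 0.25} (approx.)
print(four_fifths_violations(rates))  # {'B': 0.375} -> potential adverse impact
```

Even a simple audit like this surfaces the pattern a regulator would look for, which is why owners should run such checks before a screener goes live, not after a complaint arrives.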
As another example, a developer may create an AI program that recommends which products to market to website visitors based on the information it gleans from them. Business owners need to ensure that visitors’ privacy is respected and that visitors have consented to the collection of their information. Otherwise, there is the risk of violating privacy rights here and abroad. European privacy laws, most notably the General Data Protection Regulation (GDPR), are more stringent and, in some cases, require consent before data collected by the website may be used.
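As an illustration of building consent in from the start, the hypothetical sketch below gates all data collection on an affirmative, time-stamped opt-in, in the spirit of GDPR-style consent requirements. The class and field names are invented for the example.

```python
# Hypothetical sketch: refuse to collect or use visitor data unless the
# visitor has affirmatively opted in, and record when consent was given.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VisitorProfile:
    visitor_id: str
    consented_at: datetime | None = None   # None means no consent on record
    browsing_events: list[str] = field(default_factory=list)

    def record_consent(self) -> None:
        """Time-stamp the visitor's affirmative opt-in."""
        self.consented_at = datetime.now(timezone.utc)

    def track(self, event: str) -> None:
        """Collection is gated: without recorded consent, nothing is stored."""
        if self.consented_at is None:
            return
        self.browsing_events.append(event)

visitor = VisitorProfile("anon-123")
visitor.track("viewed: running shoes")   # silently dropped, no consent yet
visitor.record_consent()
visitor.track("viewed: trail shoes")     # stored, consent is on record
print(visitor.browsing_events)           # ['viewed: trail shoes']
```

The design choice matters: consent is checked at the point of collection, so a missing opt-in fails safe rather than relying on a later cleanup of improperly gathered data.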
More Oversight Is Around the Corner
Some may believe that, because AI is not transparent, it will be difficult for a law enforcement agency or potential plaintiff to prove that an AI’s analytics violate a particular law. That opacity should offer little comfort, however, for at least two reasons. First, regardless of how the analytics are developed, in some cases it will be evident from patterns or outcomes that an AI tool is not complying with a particular law. If a large bank is denying loans to people of color whose credit scores are comparable to those of applicants being approved, it may prove difficult for the bank to contend the loan review analytics are not, in fact, discriminatory. Second, and perhaps more importantly, it will not take long for governments and private companies to develop AI oversight programs that can monitor and assess what another AI tool is doing and the underlying bases for its algorithms and analytics. Such programs could conceptually accomplish more than a human overseer in a fraction of the time and at a fraction of the cost, and, most importantly, would hold AI users accountable under applicable law.
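To see how such an outcome audit might work, the hypothetical sketch below compares loan approval rates across demographic groups within the same credit-score band, so that differences in credit quality cannot explain any gap. The data and names are invented; real fair-lending analysis is considerably more involved.

```python
# Hypothetical outcome audit: compare loan approval rates across groups
# within the same credit-score band, so credit quality cannot explain gaps.

from collections import defaultdict

def approval_rates_by_band(decisions, band_width=50):
    """decisions: iterable of (group, credit_score, approved) tuples."""
    counts = defaultdict(lambda: [0, 0])   # (band, group) -> [approved, total]
    for group, score, approved in decisions:
        band = score // band_width * band_width
        counts[(band, group)][1] += 1
        counts[(band, group)][0] += int(approved)
    return {key: approved / total for key, (approved, total) in counts.items()}

decisions = [("A", 710, True), ("A", 715, True), ("A", 720, True),
             ("B", 705, False), ("B", 715, False), ("B", 725, True)]

for (band, group), rate in sorted(approval_rates_by_band(decisions).items()):
    print(f"scores {band}-{band + 49}, group {group}: {rate:.0%} approved")
# Same 700-749 band, yet group A is approved 100% vs. group B 33%:
# a pattern an overseer could flag without ever inspecting the algorithm.
```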
California has already proposed a statute, the Automated Decision Systems Accountability Act, which would require state agencies using AI to develop algorithms that minimize the risk of adverse and discriminatory impacts from the resulting decisions. Other states are likely to follow. Similarly, Senator Kirsten Gillibrand of New York has introduced a bill in the U.S. Senate that would create a federal agency responsible for, among other things, conducting impact assessments of AI that uses consumers’ personal data.
Involve Attorneys in AI Development
It would be unfortunate to spend considerable money and effort developing AI to create a competitive advantage, only to be fined later by a government agency or sued by a customer for failing to comply with legal requirements. This outcome can be averted by including a lawyer in the AI development process to ensure the AI accounts for applicable legal requirements in its analytics. Most companies already include attorneys on teams convened to make critical decisions. Developing AI should be no different, and doing so may save a company considerable loss, both in potential fines and in harm to its reputation.
Todd Kartchner is an attorney with Fennemore, where he focuses his practice in the areas of data privacy, telecommunications, cybersecurity, intellectual property, blockchain and cryptocurrency law. He oversees litigation throughout the firm. In 2020, Kartchner was recognized by Super Lawyers as one of the top 50 attorneys in Arizona.