Maine Holds Strong Against Facial Recognition
The new law is the strongest state-level effort in the country to regulate law enforcement's use of the technology, and it offers a pathway to legal action when the technology is used in violation of the law.
With the passage of LD 1585, Maine has enacted the nation's toughest restriction on the use of facial recognition technology and offered an example of how other states can enact their own limits on the practice, reports Slate. The bill was approved unanimously by both the Maine House and Senate and became law without the signature of Governor Janet Mills.

The law states that law enforcement can use facial recognition technology only with "probable cause to believe an unidentified person in an image committed a serious crime," or when seeking to identify a deceased or missing person. It stipulates that facial recognition data alone cannot establish probable cause for an arrest, and that the Maine State Police and Bureau of Motor Vehicles must maintain de-identified records of every search requested and performed, which will be designated as public records. Facial recognition data obtained in violation of the law will be inadmissible as evidence, and the law provides a pathway for anyone who believes the technology was used unlawfully to bring legal action against the state.
Maine's law stands alone in the limitations it places on law enforcement, especially when compared with the federal government's regulation of facial recognition (or, more accurately, its lack of regulation). A recent report from the Government Accountability Office found that 14 of the 42 federal agencies surveyed had used privately built facial recognition systems for criminal investigations. Despite this widespread use, only one of those 14 agencies had any awareness of which systems its employees were using in criminal investigations. A central worry for opponents of facial recognition is its racial bias: the technology has been found to be least accurate when analyzing images of people who are Black, female, and 18 to 30 years old. A study by the National Institute of Standards and Technology found the highest error rates in identifying Native Americans, while Black and Asian faces were falsely identified 10 to 100 times more frequently than white faces.