Concerns over China’s adoption, spread, and export of artificial intelligence-driven surveillance technology have grown as the technology has been applied to public services. While public support for the innovative potential of AI and big data-driven programs—such as those promising to help recover lost children, improve traffic safety and policing, or track and contain disease in a time of pandemic—has been encouraged by positive state media coverage, distress has also been voiced over the technologies’ imperfect application and the horrifying potential of their normalization.
In an essay recently shared on WeChat, one user reflected on a case in which Shanghai’s AI-powered traffic enforcement system issued a jaywalking ticket to a man who had died months earlier. The essay is translated below:
There was a report about an old man surnamed Xie, of Shanghai’s Pudong District, who passed away in January of this year. Recently, his family received a traffic ticket that claimed the elderly Mr. Xie ran a red light at an intersection on October 1. The fine: 20 yuan. When the police discovered the mistake, they personally went to the Xie household to apologize, claiming they would upgrade and update the system as soon as possible, increase manual verification, and improve the recognition accuracy of their systems as much as possible.
It was a humorous and intriguing case of misjudgment by the traffic police’s artificial intelligence system. But the story leaves us with two points worth discussing: First, the only reason the police had to admit the system made a mistake this time was that Mr. Xie had already passed away. If the system mistakenly targeted a living person, how would they clear their name and get rid of this baseless ticket? Second, is it really okay to use a public intersection as a biological information collection point, and then to make that private information public?
According to the police officers, the computer system mistakenly identified someone else running the red light as Mr. Xie, matching the individual against old household registration data. So first there was an error in the data collected by the crosswalk camera, and then the system associated that data with the wrong person. One possibility is that an image taken haphazardly at the front end was matched against an equally messy database. The AI used by the traffic police made a quiet mistake in a place hidden from human view.
Shanghai is quite strict about catching pedestrians who cross the street on a red light. In addition to traffic officers enforcing the law on the ground, the city has also launched automated systems, like the one that misidentified Mr. Xie, to further enforce jaywalking laws. Judging from the social media reaction, city residents generally understand or even support measures to punish pedestrians who cross at red lights. To a certain degree, stories of injuries and deaths involving jaywalking have contributed to people’s willingness to accept these measures. But more consideration should be given to how these AI systems are used.
A person jaywalks, an eye in the sky takes a picture, the person is identified through facial recognition, and a ticket is sent—all in one fell swoop. Efficiency has no doubt increased. But what about the important principles and privacy boundaries that also have to be considered? Should a person’s image be blown up and displayed on a huge screen above the intersection? Should they lose their privacy and be publicly shamed, just for jaywalking? Evidently, Shanghai’s AI surveillance system has tacitly accepted these intrusions—that is, until this “mentally impaired AI” made a mistake.
From artificial intelligence to artificial mental impairment—thank goodness Mr. Xie’s name was “cleared through death,” or else it’s really hard to say what would have happened. If someone else were to run into the same problem, erroneously identified as an offender disrupting street order, I bet it would be an ordeal to prove their innocence. First you’d need to provide an airtight alibi, then submit an appeal. Don’t tell me you’d be able to get all this done in one trip. It’s daunting just thinking about it. Why should anyone have to suffer through that?
The traffic police and other supporters of the AI system would say that the increase in efficiency is so substantial that it outweighs the mistakes. It’s a really marketable rationale. Take a look at what the Shanghai traffic police brought up when they made their house call apology. They said they would work to make the system better—not that they’d be more judicious in their use of the system. As always, the focus was not on the trouble the system causes for people, but on continuing to increase its efficiency, making it even stronger.
The Shanghai traffic police revealed some potentially vital information in their explanation: when you walk down the street, AI collects your biological data, and the system then crunches your information into its database. Every person whose data has gone through the system has had no choice but to passively hand over their biological information. Like guinea pigs, they have their information incorporated into a gigantic comparative database, where the system uses it to hone its recognition abilities, to teach itself.
False accusations aside, what’s really worthy of discussion is that this is a system that relies on huge amounts of biological data to train the AI. Those in favor of using such public surveillance systems to catch jaywalkers perhaps have the ends and the means mixed up. Catching jaywalkers is not the only goal of the system; that framing is perhaps merely a strategy for heading off public anxiety about it. Perhaps preventing jaywalking is just a cover for openly collecting biological data from the public, and the real goal is improving the recognition and matching power of the system.
Jaywalking is inevitable, and the outcomes don’t seem to justify the manpower and resources spent on stopping it. It doesn’t make economic sense to expend all this energy on AI just to prevent jaywalking. In the end, the only reasonable conclusion is that the traffic police have much larger, more ambitious plans for their AI systems, and the collection of the public’s biological information is the foundation on which they will carry out those plans.
So when the Shanghai traffic cops explained their mistake, they unintentionally brought up other, unanswered questions they have a duty to explain: What is the true purpose behind such large-scale collection of people’s biological information? Is it legal? Does it comply with relevant regulations? How can you ensure mistakes by the AI system won’t be abused? These questions cannot be ignored. They await definitive answers. [Chinese]
Translation by Bluegill.