London Underground uses AI to detect trespassers in real time

At one London Underground station, CCTV cameras were combined with artificial intelligence to track violations of the network's rules. During the trial, more than 44,000 alerts were issued, most of them concerning fare evasion.
This was reported by Wired.

From October 2022 to the end of September 2023, Transport for London (TfL), the public body that operates the London Underground network, tested AI algorithms for recognizing events in video footage. The trial took place at Willesden Green station in the northwest of the city.

The software was combined with live feeds from CCTV cameras to try to detect aggressive behavior and the brandishing of weapons or knives, as well as to spot people falling onto the tracks or evading fares.
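TfL has not published technical details of the system, but the description above maps onto a familiar pattern: frames from a live feed are run through a detection model, and recognized events trigger alerts for station staff. The sketch below illustrates that pattern only; the event labels, the detect_events stub, the confidence threshold, and the stream URL are all hypothetical, not TfL's actual system.

import cv2

# Event types of the kind the trial reportedly looked for; labels are illustrative.
WATCHED_EVENTS = {"weapon", "person_on_tracks", "fare_evasion", "aggression"}

def detect_events(frame):
    """Hypothetical stand-in for a trained video-analytics model.
    A real detector would return (event_label, confidence) pairs."""
    return []  # stub: no detections

def alert_staff(event, confidence):
    # In the trial, alerts of this kind were delivered to station staff.
    print(f"ALERT: {event} (confidence {confidence:.2f})")

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # hypothetical feed URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for event, confidence in detect_events(frame):
        if event in WATCHED_EVENTS and confidence >= 0.8:  # threshold is an assumption
            alert_staff(event, confidence)
cap.release()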

This is the first time in the UK that a transport authority has combined artificial intelligence with real-time video to generate alerts for station staff. During the trial, more than 44,000 alerts were issued, of which about 19,000 were delivered to staff in real time.

Most of the alerts, around 26,000, concerned fare evasion. According to TfL, fare evasion costs it up to £130 million (over €150 million) a year.

Mandy McGregor, TfL's head of public safety, says there were no signs at the station mentioning the trial of the AI tracking tools, so that passengers would behave naturally.

Much of the analysis was aimed at helping staff understand what was happening at the station and respond quickly to incidents. The system sent 2,200 alerts about people crossing the yellow safety lines, 39 about people leaning over the edge of the platform, and almost 2,000 about people sitting on a bench for an extended period.

The publication writes that the AI made mistakes during the tests: for example, it labeled children following their parents through the turnstiles as potential fare evaders. Police officers also helped set up the system, holding a machete and a shotgun in the cameras' field of view so that it could learn to recognize weapons more reliably.

Throughout the tests, people's faces were blurred in the footage and the data was stored for a maximum of 14 days. However, six months into the trial, TfL decided to retain the data for longer because of the large number of alerts that staff did not have time to respond to.
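Wired does not describe how the blurring was implemented; a common approach is to detect face regions in each frame and blur them before footage is stored. The sketch below uses OpenCV's stock Haar-cascade face detector and a Gaussian blur purely as an illustration of that general technique, not as TfL's actual method.

import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    """Blur every detected face region in a BGR frame, in place."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = frame[y:y + h, x:x + w]
        # Kernel size controls blur strength; 51x51 is an arbitrary choice.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return frame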

The testing also aimed to find out whether artificial intelligence could detect bicycles and scooters. “The AI could not distinguish a folded bicycle from a regular one, nor an electric scooter from a children’s scooter,” the article says.

Source: hromadske