
Seeing is believing – or is it?

January 3, 2025 by Alistair Enser

Deepfake video is on the rise, so what can be learned from the security industry’s experiences with video?

At the recent Reliance High-Tech cloud conferences, Chief Technology Officer Andy Schofield showed delegates some deepfake videos from a few years ago. Created with early AI tools, they mimicked well-known figures impressively. Andy then shared some videos he had created recently with cutting-edge AI, and it was incredible how much more believable they had become.

Deepfake video is becoming a real issue in a year when more than half the planet will vote in elections. The technology is now so advanced that it is easy to trick people online with what appear to be messages from real politicians but are in fact the product of state-sponsored misinformation or the work of troublemakers.

Even politicians can be fooled: this weekend it was revealed that Foreign Secretary David Cameron fell for a hoax video call with someone pretending to be Petro Poroshenko, the former President of Ukraine. “Whilst the video call clearly appeared to be with Mr Poroshenko, following the conversation, the foreign secretary became suspicious”, said a spokesperson.

So, with a General Election only weeks away, can you believe what you see on social media? TikTok has a growing influence on young voters and, according to Ofcom, was the fastest-growing source of news in the UK in 2023, for the second year running. Yet recent deepfakes on the platform have featured convincing videos of Prime Minister Rishi Sunak and Labour Party leader Keir Starmer.

Spotting AI-generated deepfakes is possible, and guidance on identifying the telltale signs is widely available. But I wonder if something can be learned from the security industry, where the integrity of images is paramount. After all, law enforcement agencies rely on video surveillance footage to identify and investigate criminal activity, and that footage often supports prosecutions. If justice is to be served, there can be no question over the veracity of video that captures, say, a bank robbery.

Only a few years ago, if evidence was required from your security system, the relevant footage was downloaded onto a DVD or USB stick and biked over to the recipient, or even put in the post! Now the process is far more robust: Reliance High-Tech partners such as Genetec offer full evidence management in their platforms, including the generation of a secure link that can be shared with a recipient yet restricted to their use alone. Various degrees of access can be assigned, downloading and sharing can be restricted, and the original video remains untouched at its original storage site. With these measures in place, it is practically impossible to introduce fake video into the evidential process.
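To make the idea concrete, here is a minimal sketch of how a recipient-restricted, expiring link can work, using nothing more than a keyed signature. Everything in it is illustrative: the service address, function names and parameters are my assumptions for the example, not Genetec's actual API.

```python
import hashlib
import hmac
import time

# Illustrative only: a server-side secret used to sign share links.
SECRET_KEY = b"replace-with-a-real-secret"

def make_share_link(clip_id: str, recipient: str, ttl_seconds: int = 86400) -> str:
    """Build a view-only link bound to one recipient and an expiry time.

    The signature covers the clip, the recipient and the expiry, so changing
    any of them invalidates the link. The original footage is never copied
    or modified; the link only grants read access.
    """
    expires = int(time.time()) + ttl_seconds
    payload = f"{clip_id}|{recipient}|{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    # evidence.example.com is a hypothetical evidence service, not a real endpoint.
    return f"https://evidence.example.com/view/{clip_id}?to={recipient}&exp={expires}&sig={sig}"

def verify_share_link(clip_id: str, recipient: str, expires: int, sig: str) -> bool:
    """Server-side check: reject expired links or any tampered parameter."""
    if time.time() > expires:
        return False
    payload = f"{clip_id}|{recipient}|{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the signature covers the recipient and the expiry, forwarding the link to someone else or extending its lifetime breaks the check; access control on the server does the rest.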

Of course, the integrity of video used for evidential purposes also rests on a range of other protective measures: limiting physical access to the video management system, encrypting footage, properly cyber-hardening systems and networks, and using trusted cloud providers. Today, your choice of partner is every bit as important as your choice of equipment; modern security technology needs proper IT expertise, not just an electrician.
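The same thinking underpins tamper-evidence. As a simple illustration, and not any particular vendor's implementation, exported footage can be fingerprinted with a cryptographic hash the moment it leaves the system, so any later alteration is detectable. The file path below is hypothetical.

```python
import hashlib

def fingerprint(video_path: str) -> str:
    """Compute a SHA-256 digest of an exported clip, reading in chunks
    so large video files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest at the moment of export...
original = fingerprint("export/robbery_cam3.mp4")

# ...and re-check it whenever the clip changes hands. Any edit, re-encode
# or substituted fake produces a different digest and fails the comparison.
assert fingerprint("export/robbery_cam3.mp4") == original
```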

I am not saying the proliferation of deepfake videos on social media can be easily fixed. Education is important, of course, so viewers know how to spot the signs of manipulation. But as tech companies develop tools to address the plague of deepfakes, they might consider managing video on their platforms with the same rigour the security industry applies to keeping video secure and trusted.