AI in physical security

By George Moawad, Country Manager ANZ, Genetec
Wednesday, 27 September, 2023


Artificial intelligence elicits all sorts of reactions: some see it as a powerful tool, while others view it with fear and trepidation. In the world of physical security, AI can be weaponised or harnessed as a powerful enabler. Knowing where it fits, and how it can and should be leveraged, means examining its potential use cases, its risks and the shared responsibility for using it well.

With the Australian Government investigating how to regulate AI, or more specifically the outcomes AI produces, it is important to understand what we are really looking at in order to weigh the threats against the opportunities. Generative AI tools such as ChatGPT and Google Bard show the great strides made in developing these systems, but they also have many limitations.

Physical and digital security practitioners face a major problem today: the sheer number of sensors, alerts and logs they must process to determine whether a threat is real. The AI tools we see today are complex systems that use probabilities to determine whether specific words or inputs belong together, but this is not true intelligence.

We can look at the journey to AI as having four distinct stages. The first is automation, where systems simply mimic human action. The second is intelligent automation, where systems mimic or augment human judgement by executing simple ‘if this, then that’ rules. The third is cognitive automation, where systems augment human intelligence by handling more complex inputs, such as questions. Generative AI, like ChatGPT, sits between intelligent automation and cognitive automation. At the highest level sits artificial general intelligence: systems that truly mimic human intelligence.
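
To make ‘if this, then that’ concrete, here is a minimal sketch of an intelligent automation rule in Python. The event fields, door ID and business hours are hypothetical, chosen purely for illustration:

```python
# A minimal 'if this, then that' rule (hypothetical event fields).
def handle_event(event: dict):
    # If a door is forced outside business hours, then alert the operator.
    if event.get("type") == "door_forced" and event.get("hour", 12) not in range(7, 19):
        return f"ALERT: door {event.get('door_id')} forced after hours"
    return None  # no rule matched, so no action is taken

# Example: a door forced at 11 pm triggers the rule.
print(handle_event({"type": "door_forced", "door_id": "B2", "hour": 23}))
```

The rule executes a fixed, human-written condition; nothing in it learns or reasons.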

Those first three stages are not AI. Generative AI, for example, is not intelligent: it analyses a question and then provides a response based on the frequency and probability with which certain words occur together. This is a sophisticated form of machine learning and deep learning, not understanding.
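
As a rough illustration of that ‘frequency and probability’ idea, consider a toy bigram model. Real large language models use deep neural networks rather than simple counts, so this is only an analogy under that simplification; the training text is invented:

```python
from collections import Counter, defaultdict

# Invented training text standing in for the web-scale corpora LLMs learn from.
words = "the door is open the door is locked the gate is open".split()

# Count how often each word follows each other word (a bigram table).
successors = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the most frequent successor, with no grasp of meaning."""
    return successors[word].most_common(1)[0][0]

print(predict_next("door"))  # 'is', purely from co-occurrence counts
print(predict_next("is"))    # 'open' (seen twice) beats 'locked' (seen once)
```

The model picks ‘is’ after ‘door’ because that pairing is most frequent in its data, not because it knows anything about doors.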

Machine learning and deep learning are important elements of modern physical security systems. Machine learning uses statistical techniques to solve problems, make predictions or improve the efficiency of specific tasks, drawing on data collected by physical security devices such as cameras, doors and other sensors. Deep learning goes further, analysing the relationships between inputs and outputs to surface new insights.

For example, licence plate recognition software collects data about vehicles, sensors collect information on people, and intrusion systems detect when a door is opened or a barrier is breached. A security platform can draw all of this together and provide an operator with an alert that considers the full context of a series of connected events. A recognised licence plate followed by the use of a security tag could indicate that an authorised person has parked their car and entered the premises; a breached barrier combined with camera footage, by contrast, could indicate an intrusion that needs to be investigated.

While that may look like intelligence, it is machine learning and deep learning working to understand inputs, put them into context and deliver actionable information for security teams to analyse and humans to ultimately respond to. The system has not learned this independently: the software that underpins it is created by people who program the logic that understands the context.
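
A simplified sketch of that human-programmed correlation logic might look like the following. The event types, field names and rules are hypothetical, but they mirror the scenario above:

```python
# Hypothetical event stream from plate readers, access control and sensors.
events = [
    {"type": "plate_read", "plate": "ABC123", "authorised": True},
    {"type": "badge_scan", "person": "j.smith", "authorised": True},
    {"type": "barrier_breach", "zone": "north_fence"},
]

def correlate(events):
    """Apply human-written rules to put individual events into context."""
    alerts = []
    for event in events:
        if event["type"] == "barrier_breach":
            # A breach is always escalated for a person to investigate.
            alerts.append(f"Investigate breach in {event['zone']}: check camera footage")
        elif event.get("authorised"):
            # Authorised plate reads and badge scans are logged, not escalated.
            alerts.append(f"Routine: authorised {event['type']}")
    return alerts

for alert in correlate(events):
    print(alert)
```

Every escalation path here was written by a person; the software only applies those rules at machine speed.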

This is why generative AI tools that use large language models, such as ChatGPT, are not suitable for security applications. The output from those models is not reliable and could produce false positives or false negatives that waste resources or miss important indicators of compromise.

But machine learning and deep learning can be immensely valuable. They can scan hundreds of hours of video to find specific patterns, count the number of people in an area to manage occupancy, monitor queues and alert staff to overcrowding, all within seconds or minutes and far faster and more accurately than is humanly possible. Retailers, for instance, can use that data to improve sales conversions, stadium operators can use it to control crowd flow, and transit authorities can better understand and address peak travel times.
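
An occupancy alert built on people counting can be as simple as a threshold check over analytics output. The zone names, counts and capacity limit below are invented for illustration:

```python
# Hypothetical per-zone people counts produced by video analytics.
zone_counts = {"entrance": 42, "platform_1": 187, "platform_2": 96}
OCCUPANCY_LIMIT = 150  # assumed safe capacity per zone

# If a zone exceeds capacity, then alert staff to manage the crowd.
for zone, count in zone_counts.items():
    if count > OCCUPANCY_LIMIT:
        print(f"ALERT: {zone} over capacity ({count}/{OCCUPANCY_LIMIT})")
```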

These tools cannot replace people, but they can help them be more efficient and effective. Rather than scrubbing through footage and analysing logs, machine learning systems can find anomalies and correlate them. But humans will always be needed as these systems can make mistakes that only people can detect and understand.

Responsible and ethical use of AI data in security

Security systems rely on data and are subject to the ‘garbage in, garbage out’ maxim: without large volumes of high-quality data, the outputs these systems generate are unreliable. That data must also be collected ethically and in compliance with local laws, which in Australia vary across the states and territories as well as federally.

AI has the potential to be a powerful tool in physical security. But it is not a silver bullet that will remove all physical security risks. It complements the work of security personnel and helps them become more efficient and effective by reducing the time taken to analyse data so they can focus on real threats and respond faster when required.

But using AI requires policies and procedures to ensure data is collected and managed ethically, with people overseeing the results to confirm the models are delivering sensible outputs.

And no… this was not written by AI.

Image credit: iStock.com/da-kuk
