Artificial Intelligence: Firefighting game changer or useful aid to human decision making?

In the first of a two-part exclusive report on AI, FIRE’s Security Correspondent Dr Dave Sloggett investigates the potential for it to take decision making to the next level

The idea of replicating human decision making with computers has been around for nearly as long as the information technology industry has been in existence. Artificial Intelligence (AI) and, more recently, what has been termed Machine Learning (ML) are topics of enduring interest in the international research community. The popularity of AI has waxed and waned, despite its potential to replace human beings in hazardous working environments, such as those that face firefighters every day.

Today the subjects of AI and ML are going through another renaissance with the ever-increasing speed of contemporary computing technologies, guided by Moore’s Law, and the emerging insights from the relatively new developments in data science. This is sometimes referred to as ‘big data’.

The latest resurgence of interest in AI has been driven in part by the emergence of ML techniques and by a technique known as ‘reinforcement learning’, used to fine tune algorithms so that they better replicate human decision making. This is the ultimate goal of AI. While this dream has yet to be fully realised, the most recent developments have taken the field of AI to a new level.

Given these recent developments it is appropriate to ask a number of questions. These include: to what extent might AI and ML techniques have some relevance to the emergency services community? What applications of AI might prove beneficial? How robust would those solutions be? Will AI ever be able to think creatively? Will a human always be involved in supervising the decision making in the foreseeable future?

Before addressing these questions, it is worthwhile reflecting on some of the historical perspectives that have emerged from research over the last few decades. Of these perhaps the most enduring application of AI technologies outside the military research sphere has been in the medical world.

AI Diagnosis

The first attempt at creating an AI capability in this domain was called MYCIN. Strictly speaking, it was not a full AI system but an example of a sub-class of AI developments known as ‘expert systems’. Essentially, it was based upon information captured from experts and turned into rules that could be tracked and evaluated according to answers provided by a patient. It was an attempt to formulate the questions a doctor would ask during a consultation with someone suspected of being at risk of blood-clotting diseases.

Its implementation was based upon what is known as the backward chaining (backward reasoning or backward induction) method of inference: assuming that the patient has a specific blood-clotting disease and working backwards through the symptoms presented to see if they provide a coherent indication of the risks involved. This is one of a number of ways of conducting reasoning.
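To make the idea concrete, here is a minimal sketch of backward chaining in Python. The rules, symptoms and patient facts are invented for illustration; they are not MYCIN's actual knowledge base:

```python
# Minimal backward-chaining sketch. Each rule maps a hypothesis to the
# list of conditions that must all hold for it to be accepted.
RULES = {
    "clotting_disorder": ["abnormal_bleeding", "low_platelets"],
    "low_platelets": ["lab_platelet_count_low"],
}

# Facts gathered from the (hypothetical) patient consultation.
FACTS = {"abnormal_bleeding", "lab_platelet_count_low"}

def prove(goal, facts, rules):
    """Work backwards from the hypothesis to the observed evidence."""
    if goal in facts:                      # directly observed
        return True
    if goal in rules:                      # try to prove via a rule
        return all(prove(c, facts, rules) for c in rules[goal])
    return False                           # no support for this goal

print(prove("clotting_disorder", FACTS, RULES))  # → True
```

Starting from the suspected disease, the system asks only the questions needed to establish each sub-condition, which is what made the consultation-style dialogue possible.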

Modern ML applications use deep neural networks, loosely mirroring the operation of neurons in the brain, which adjust the weights given to specific inputs according to data derived from a training set. The weakness of this approach is any unknown bias that exists in the training data.
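The weight-adjustment idea can be illustrated with a single artificial neuron trained by gradient descent. The inputs, target and learning rate below are arbitrary example values, not drawn from any real application:

```python
import math

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

weights = [0.1, -0.2]   # initial weights on the two inputs
bias = 0.0
x, target = [1.0, 0.5], 1.0   # one training example
lr = 0.5                      # learning rate

for _ in range(100):          # repeated passes over the example
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    y = sigmoid(z)
    err = y - target                  # prediction error
    grad = err * y * (1 - y)          # chain rule through the sigmoid
    weights = [w - lr * grad * xi for w, xi in zip(weights, x)]
    bias -= lr * grad

# After training, the prediction has moved towards the target of 1.0
print(sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias))
```

The network only ever learns what its training examples expose it to, which is exactly why hidden bias in the training data carries straight through into the trained weights.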

On paper this appears sound. In practice, of course, many parts of the human system interact in ways that mean specific symptoms cannot be so clearly broken out and analysed. What one person says about how they are feeling and where the pain is does not necessarily get replicated in a conversation with another human being.

While blood-clotting disease forecasting was selected for its specific nature, the ability to isolate it from other issues being experienced by a patient was not clear. In the end, under evaluation, MYCIN achieved an acceptable diagnosis rating of only 65 per cent against a validation by eight independent specialists.

This was comparable to an accuracy rate of between 42.5 and 62.5 per cent as rated by five faculty members in Stanford Medical School where MYCIN was developed. History shows that MYCIN was never used in practice. Medical teams raised several ethical objections to its introduction into service. This will not be the last time reservations about the introduction of AI systems into an operational environment will be raised.

The story of MYCIN is a good indicator of the problems associated with the first generation of AI systems, or expert systems as they became more commonly known. They were rule-based, built on the idea that rules could resolve a sequence of symptoms into a unique diagnosis pathway. Clearly the linkages within the human system did not allow such unique pathways to be defined.

What also showed up from the MYCIN development was the issue of how to capture expert knowledge in a form that allowed the rules-based formulation to be implemented in a computer system. The medical world is notorious for its multiplicity of views on the relevance of specific symptoms to a particular disease. Doctors are also very vulnerable to biases that arise from anecdotal evidence and their own specific experiences. At best, in the 1970s and 1980s, so-called expert systems were able to provide an aide-memoire that helped some of the more obvious biases to be avoided, such as when a medical practitioner was tired.

These lessons, however, did have a transformational impact on the second generation of AI systems that would emerge in the 1990s. While some simple rules-based systems did emerge, where the application space offered limited horizons and outcomes, the fundamental problem was that of handling uncertainty. How could you say a rule was right on 35 per cent of occasions in the presence of certain symptoms and (say) 72 per cent right if they changed?

A mathematical theory developed by the Reverend Thomas Bayes – Bayes’ Theorem – provided the basis of the next developments in AI: an ability to combine a-priori mathematical probabilities of certain events into a likelihood of a hypothesis being true. Applications in medicine were again of major interest in the research community. As more and more statistics were recorded in the medical world, these were seen to offer the ability to programme probabilities into what became the second generation of expert systems – though still a long way from a true AI-based capability.
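A small worked example shows Bayes’ Theorem in action for a diagnostic test. The prior, sensitivity and false-positive figures below are invented purely for illustration:

```python
# Bayes' Theorem: update the probability of a disease after a positive
# test result. All numbers are illustrative, not real medical statistics.
prior = 0.01          # P(disease) before any evidence
sensitivity = 0.90    # P(positive test | disease)
false_pos = 0.05      # P(positive test | no disease)

# P(positive) by the law of total probability
p_positive = sensitivity * prior + false_pos * (1 - prior)

# P(disease | positive) by Bayes' Theorem
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # → 0.154
```

Even with a fairly accurate test, a rare condition yields a modest posterior probability, which is why chaining such probabilities through an expert system was so much more powerful than fixed yes/no rules.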

Recent years have seen a third wave of developments in the AI field. Of specific interest is oncology, the study of human cancers. One area where developments have shown that human decision makers can be rivalled is the diagnosis of lung cancer.

Research work has developed algorithms capable of analysing X-rays to detect early signs of the onset of cancers. Recent results have been hailed by some enthusiasts as heralding the end of those who have trained to study lung disease. As with examples from history, this may prove to be yet another false dawn.

Arguably the clearest area of research in the AI field at this moment is the ability to replicate the performance of the human eye. Developments in neural networks have shown how closely they can mirror the ability of the eye to detect and recognise shapes. This is where ML and reinforcement learning are seeing some exciting developments in the military world. But how might these translate into the emergency services field?

Firefighting Applications

For the Fire and Rescue Service, recognising the seat of a fire has always been a priority. With drone-based technology and hand-held thermal cameras now being routinely used in incidents, the question arises as to how these images might be used to help firefighters deal with incidents, no matter how small.

This is where the idea of fusing information from several sources into an integrated approach to firefighting has its potential. Imagine combining the technologies of virtual reality overlaid with imagery derived in real time from a drone hovering over a fire and thermal cameras located on firefighters or their equipment, such as turntable ladders. These multi-aspect images, covering the fire in detail from different directions, fused together and presented in a virtual reality environment to an incident commander, would allow them to make decisions about how to identify the seat of a fire and direct resources to minimise its impact.

Couple this with research on how fires develop in certain situations and the possibility arises that an AI system might be trained, given accurate data from real-world fires, to recognise and even forecast how a fire might evolve. Provided the infra-red cameras involved have sufficient spectral and spatial resolution, the ability to get to the heart of the fire and its evolution would be straightforward.

Of course, while this may sound simple on paper, fusing the data into a single recognised picture is not easy. The location of each sensor contributing to the formation of the image has to be known with some precision – ideally better than that afforded by GPS in its current form. This then has to be merged with a forecasting model to create an integrated decision support system, which would guide a human operator to specific areas of interest in a fire and make recommendations on the response.
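One standard way to combine position estimates that carry different uncertainties is inverse-variance weighting, sketched below. The sensors and readings are invented example values, not a real deployment:

```python
# Fuse estimates of the seat of a fire from several sensors. Each
# reading is an (x, y) position in metres plus the standard deviation
# of that sensor's fix. All values are illustrative.
readings = [
    ((10.2, 4.8), 2.0),   # drone-mounted camera (GPS-derived, least precise)
    ((9.6, 5.1), 0.5),    # thermal camera on a turntable ladder
    ((9.9, 5.0), 0.8),    # handheld thermal camera
]

def fuse(readings):
    """Inverse-variance weighted average: precise sensors count for more."""
    weights = [1.0 / (sd * sd) for _, sd in readings]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(readings, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(readings, weights)) / total
    return x, y

x, y = fuse(readings)
print(round(x, 2), round(y, 2))  # → 9.71 5.06
```

The fused estimate sits closest to the most precise sensor, which is the point: a poor GPS-derived fix contributes, but does not dominate, the recognised picture.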

It is possible to suggest that the next generation of AI systems will emerge once the current technologies, which are heavily dependent on the training set provided, are able to think outside the limitations imposed by the data. Being able to extrapolate from a training set to a wider range of situations, some of which are seen very infrequently, is a measure of the ability of an AI system truly to replace human beings. Developments in quantum computing will be a massive driver for this next stage.

Some of this is still some time in the future. For the moment, the important task is to get multi-sensor data accurately fused and presented in a 3D format to the firefighter. They must then use their knowledge and experience to decide what to do as situations vary and present nuances that require adaptations of standard responses. That human edge, the ability to think outside the constraints of the training data, is where the human advantage remains.
