
All (Threat) Models Are Wrong...

  • Writer: Logic Hazard Labs Staff
  • Apr 3
  • 3 min read

Updated: Apr 4

"All (threat) models are wrong, but some are useful." - George Box

Some quotes are also wrong, but nonetheless useful. Models are a frequent topic of conversation in modern tech - most of the buzz is around AI/ML models, such as large language models (LLMs) or other generative models that take training data and produce output resembling natural speech or a statistically improbable picture.



[Image: an AI-generated picture of a pig flying through a city. For reasons known only to the model that made the image, a car is on fire in the lower left corner.]
AI image prompt: "an ultra-realistic photograph of a pig with dragon wings flying through a bustling city alongside a unicorn." We are assuming that the car is on fire because of the dragons circling above and not the flying pig, but sadly, we will never know.

In security, though, we conduct threat modeling to help us predict the conditions that a theoretical system, which may or may not have been built yet, would face in the real world. Some conditions are fairly easy to predict from observable data - a publicly facing IPv4 address will be scanned by platforms like SHODAN, and your open-source project will be scraped by any number of AI companies - but complex interactions between systems and dependencies can manifest risks in extremely diverse ways. Still other analysis may surface vulnerabilities that are possible in theory, even though an attacker could achieve similar results through entirely different methods.


The usefulness of a threat model depends on its intended function. Will the model be leveraged by a team of engineers maintaining an active software project, to ensure adequate security measures are proactively built into the solution alongside new functionality? If so, the model will need to contain significant technical detail, including comprehensive asset inventories, easily citable diagrams, and full control catalogs.


But what would happen if we gave that same threat model to a third-party consultant or auditor with a specific task to complete? Consuming the full model would take significant investments of time and (as is often the case when third parties are involved in business operations) money, with diminishing returns. Much like a database can be reconfigured with different views based on user roles, a threat model intended for a narrow application can be justifiably abstracted and optimized for the consultant's project or the audit scope, while keeping the engineering model intact and usable by the permanent team.
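As a rough illustration of that database-view analogy, here is a minimal Python sketch. The `Threat` fields and component names are hypothetical, not a standard schema: the full engineering model keeps every detail, while a projection function hands a consultant only the slice that is in scope.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    component: str            # asset or interface the threat applies to
    description: str
    mitigations: list[str] = field(default_factory=list)
    confidence: str = "high"  # how well the team understands this flow

# The engineering model: full detail across every component (hypothetical examples).
full_model = [
    Threat("payment-api", "SQL injection via order lookup",
           ["parameterized queries", "WAF rules"]),
    Threat("admin-portal", "credential stuffing against login",
           ["MFA", "rate limiting"]),
    Threat("etl-pipeline", "poisoned upstream data feed", confidence="low"),
]

def scoped_view(model: list[Threat], in_scope: set[str]) -> list[Threat]:
    """Project an abstracted 'view' of the model, like a database view,
    keeping only the components relevant to a consultant's engagement."""
    return [t for t in model if t.component in in_scope]

# A penetration tester engaged only on externally facing services gets
# a narrow slice; the full engineering model stays intact for the team.
pentest_view = scoped_view(full_model, {"payment-api", "admin-portal"})
print([t.component for t in pentest_view])  # ['payment-api', 'admin-portal']
```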


In this case, the threat model may be less precise in some ways, but that loss of fidelity doesn't affect its usefulness. In fact, an abstracted threat model used to inform a penetration test may help an independent assessor concentrate on interfaces where the model has sparse supporting data. Certain data flows may be simplifications of complex interactions that even the engineering team doesn't fully understand; the goal is to expand that understanding with real-world investigation, knowing the limitations of the current model. After the technical assessment, that information can be fed back into the full threat model, increasing confidence in the updated model's validity.
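Continuing the sketch above (and again using purely illustrative names), that feedback loop might look something like this:

```python
def incorporate_findings(model: list[Threat], findings: dict[str, str]) -> None:
    """Feed real-world assessment results back into the full engineering model.

    `findings` maps a component name to the assessor's note; any flow that
    was actually exercised gets its confidence raised.
    """
    for threat in model:
        note = findings.get(threat.component)
        if note is not None:
            threat.mitigations.append(f"validated by assessment: {note}")
            threat.confidence = "high"

# After the pentest, the poorly understood ETL interface is better characterized.
incorporate_findings(full_model, {"etl-pipeline": "no injection path found in data feed"})
```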


There are plenty of resources available for learning about threat modeling, and tools to help you get started. But throughout the entire process, keep in mind the processes that the threat model supports, and place less emphasis on the model for its own sake.


Need help getting started with threat modeling in your organization? Contact sales@logichazard.com today to see how we can help you capture the complexities of your security posture. In the meantime, we may need to update our flying-pig threat models...
