
[Idea]: Apply GenAI to threat model use-cases and reference architectures using our governance framework data #203

Description

@chamindra

Contact Details

[email protected]

What is the idea

Now that we have a catalog of risks and mitigations that is independent of any specific use-case and associated reference architecture, we should use an LLM / agentic approach to identify the top risks and mitigations from our catalog for a given use-case and reference architecture. It can also propose additional risks and mitigations that can then be reviewed and refined by humans in our AI governance framework working group.

This should be open source code that can be deployed for internal consumption where data input/output is sensitive. It could also be an endpoint hosted by FINOS if there is no liability associated with it.

This has to be benchmarked and tested for accuracy, and that testing would also be part of this exercise to assure that the outputs are dependable, with appropriate disclaimers.

Though input formats can be recommended, LLMs are quite capable of processing images in addition to structured definitions of use-cases and reference architectures. Outputs can be given as Markdown files, CALM or CCC definitions (see the sketch below).
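As a rough illustration of the input/output shape only (not a proposed design), the sketch below uses the OpenAI Python client to map a use-case description onto catalog risk/mitigation IDs and return JSON that could later be rendered to Markdown, CALM or CCC. The model name, catalog entries and JSON shape are all placeholders, not part of the governance framework.

```python
"""Minimal sketch: map a use-case onto catalog risks/mitigations.

Assumptions (not part of the framework): the catalog is available as a list
of {id, title} dicts, the model name is a placeholder, and the JSON shape
is purely illustrative.
"""
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

catalog = [
    {"id": "RISK-001", "title": "Prompt injection"},        # placeholder entries
    {"id": "MIT-004", "title": "Input/output guardrails"},
]

use_case = "A retail-banking chatbot that answers customer queries using RAG."

prompt = f"""You are a threat-modelling assistant.
Given this catalog of risks and mitigations:
{json.dumps(catalog, indent=2)}

And this use-case:
{use_case}

Return JSON with two keys: "top_risks" (catalog IDs with a one-line rationale)
and "proposed_additions" (risks not yet in the catalog, for human review).
"""

response = client.chat.completions.create(
    model="gpt-4o",                                # placeholder model name
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": prompt}],
)

result = json.loads(response.choices[0].message.content)
print(result["top_risks"])           # rendered later to Markdown / CALM / CCC
print(result["proposed_additions"])  # queued for working-group review
```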

@ColinEberhardt @vicenteherrera

Why is it a good idea

There are four benefits:

  1. It will help us accelerate our coverage of use-cases, risk identification and mitigation identification, and help us deliver an AI governance framework with broader coverage more rapidly with the help of GenAI
  2. The approach will also support other reference architectures for use-cases more dynamically through our threat modelling engine
  3. This utility can also be a useful reference for financial institutions in their own threat modelling activities, especially if it is open source and can be deployed internally
  4. It will also cross-reference against regulatory risks to identify mitigations rapidly

How does it work?

Approach:

  • We will use GenAI (prompt/LLM or an agentic framework) to develop the threat modelling engine.
  • We can make our AI governance framework an MCP server, in addition to the other standards referenced here (see the MCP sketch after this list).
  • We would recommend using the least complex approach, ideally, to make it easy to test and benchmark.
  • We recommend a modular/plugin architecture to allow for more efficient open source contributions.
  • We recommend a Python-based approach using popular and trusted open source (or most open) components such as LangChain and OpenAI to make the code accessible (a sketch follows this list).
  • All components used should ideally be open source and explainable.
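
A minimal sketch of the least-complex prompt/LLM variant of the engine, assuming LangChain with an OpenAI chat model. The catalog path, prompt wording and model name are assumptions for illustration; the real engine would load the published framework content.

```python
"""Sketch of a prompt/LLM threat-modelling engine using LangChain (LCEL)."""
from pathlib import Path

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Hypothetical location of the framework's risk/mitigation catalog files.
catalog_text = "\n\n".join(
    p.read_text() for p in sorted(Path("docs/_risks").glob("*.md"))
)

prompt = ChatPromptTemplate.from_template(
    "You are a threat-modelling assistant for financial-services AI systems.\n"
    "Catalog of risks and mitigations:\n{catalog}\n\n"
    "Use-case and reference architecture:\n{use_case}\n\n"
    "List the top 5 applicable risks (by catalog ID) with their mitigations, "
    "then any additional risks not covered by the catalog, flagged for human review."
)

# Keeping the chain to prompt -> model -> parser keeps it easy to test and benchmark.
engine = prompt | ChatOpenAI(model="gpt-4o", temperature=0) | StrOutputParser()

report = engine.invoke({
    "catalog": catalog_text,
    "use_case": "Internal code assistant integrated with the bank's SDLC tooling.",
})
print(report)  # Markdown report; CALM/CCC renderers could be added as plugins
```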
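
The MCP idea could look roughly like the following, assuming the official MCP Python SDK's FastMCP helper. The tool names and the in-memory catalog are placeholders, not an agreed interface.

```python
"""Sketch: expose the governance framework catalog as an MCP server."""
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ai-governance-framework")

# Placeholder catalog; a real server would read the published framework files.
RISKS = {
    "RISK-001": "Prompt injection in user-facing LLM applications.",
}

@mcp.tool()
def list_risks() -> list[str]:
    """Return the IDs of all risks in the catalog."""
    return sorted(RISKS)

@mcp.tool()
def get_risk(risk_id: str) -> str:
    """Return the description of a single catalog risk."""
    return RISKS.get(risk_id, f"Unknown risk id: {risk_id}")

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; agents can then query the catalog
```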

Labels: primer, artificial intelligence

Any other key information

Code of Conduct

  • I agree to follow the FINOS Code of Conduct
