Terraform Version
Terraform v1.5.1
on linux_amd64
+ provider registry.terraform.io/hashicorp/http v3.4.0
+ provider registry.terraform.io/newrelic/newrelic v3.25.0
+ provider registry.terraform.io/opsgenie/opsgenie v0.6.26
Your version of Terraform is out of date! The latest version
is 1.5.2. You can update by downloading from https://www.terraform.io/downloads.html
Use Cases
To understand the use cases, it's important to understand a little about the implementation: I have a couple of applications that procedurally generate *.auto.tfvars.json files by querying our CMDB before Terraform runs in CI pipelines.
Procedurally claim and manage resources
Resources can be imported into CMDBs via discovery and other methods, so these resources will already exist before Terraform runs. In this use case, we not only discover these resources but, where applicable, manage them too.
Build onto existing solutions in bulk
This use case best fits situations where Terraform is used to manage CIs (configuration items) rather than traditional resources, such as Teams in OpsGenie or sub-accounts in New Relic. To enable self-service within an organisation, we want to allow a senior member of staff to create a team or service in OpsGenie, or a sub-account in New Relic, safe in the knowledge that the new resource will be picked up by Terraform and have everything set up for them: the integration between New Relic and OpsGenie, Service Incident rules in OpsGenie, workloads for each of their services in New Relic, and so on.
Attempted Solutions
For simplicity, I'll just cover Services in OpsGenie, as the same principle applies to everything else.
Step-by-step
1. Pre-execution
Before Terraform is run, another application is called that queries our CMDB, cleans up the result, and writes the output to *.auto.tfvars.json files in the root of the Terraform project.
Example:
{
  "services": [
    {
      "attributes": {
        "Key": "SVC-1",
        "Name": "Test Service 1",
        "Created": "___",
        "Updated": "___",
        "Description": "This is a test service",
        "Tier": "Tier 3",
        "Service_ID": "___",
        "Revision": "___",
        "Service_Owners": {
          "opsgenieTeam": {
            "id": "___",
            "name": "___"
          }
        }
      },
      "id": "1",
      "label": "Test Service 1",
      "name": "Test Service 1",
      "objectKey": "SVC-1",
      "objectTypeId": "1",
      "objectTypeName": "Service",
      "workspaceId": "___"
    }
  ]
}
2. The variable structure
This response is interpreted by Terraform as a variable like so:
variable "services" {
  description = "The Services to create."
  type = list(
    object(
      {
        attributes = object(
          {
            Key            = string
            Name           = string
            Created        = string
            Updated        = string
            Description    = optional(string)
            Tier           = optional(string)
            Service_ID     = optional(string)
            Revision       = optional(string)
            Service_Owners = optional(
              object(
                {
                  opsgenieTeam = object(
                    {
                      id   = string
                      name = string
                    }
                  )
                }
              )
            )
          }
        )
        id             = string
        label          = string
        name           = string
        objectKey      = string
        objectTypeId   = string
        objectTypeName = string
        workspaceId    = string
      }
    )
  )
}
3. The root module
The root module calls the OpsGenie Service submodule for each service object like so:
# Create the OpsGenie Services and necessary additional components.
# Only include services that have an owner configured, to avoid errors.
# Service_Owners is an optional attribute, so it defaults to null when
# absent and can be compared against null directly.
module "opsgenie_service" {
  source = "./modules/opsgenie_service"

  for_each = {
    for service in var.services : service.id => {
      id          = service.attributes.Service_ID
      name        = service.attributes.Name
      description = service.attributes.Description
      team_id     = service.attributes.Service_Owners.opsgenieTeam.id
    } if service.attributes.Service_Owners != null
  }

  id          = each.value.id
  name        = each.value.name
  description = each.value.description
  team_id     = each.value.team_id
}
4. Import and manage the service object
The OpsGenie Service submodule should import each existing service and then manage it, like so:
import {
  to = opsgenie_service.this
  id = var.id
}

resource "opsgenie_service" "this" {
  name    = var.name
  team_id = var.team_id
}

# Do more stuff with this service...
Issues encountered
1. Can't import to non-root module
An import block cannot be run as part of a non-root module. While this can be worked around by running the import in the root module before calling the submodule, that's messy, and it would be better for the import to be contained within the same module that manages the resource. Additionally, since import blocks don't support for_each, the root-module workaround doesn't account for the inherent proceduralism in this implementation, whereas calling an import from a submodule would.
│ Error: Invalid import configuration
│
│ on modules/opsgenie_service/main.tf line 10:
│ 10: import {
│
│ An import block was detected in "module.opsgenie_service". Import blocks are only allowed in the root module.
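For reference, the root-module workaround is possible because the to address of an import block may point into a child module; only the block itself must live in the root. A minimal sketch follows; the service ID is a placeholder, since import IDs must be literal values in Terraform 1.5.x:

```hcl
# Root-module workaround sketch: one hand-written import block per
# module instance, targeting the resource inside the child module.
# Terraform 1.5.x requires the id to be a literal string, so these
# blocks would themselves have to be generated by the pre-execution
# tooling rather than derived from var.services.
import {
  to = module.opsgenie_service["1"].opsgenie_service.this
  id = "service-id-placeholder" # hypothetical literal OpsGenie service ID
}
```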
2. Variables not allowed
Variables are not allowed in import blocks, giving the following error:
│ Error: Variables not allowed
│
│ on modules/opsgenie_service/main.tf line 12, in import:
│ 12: id = var.id
│
│ Variables may not be used here.
3. Value for import must be known
This is likely a consequence of using a variable to declare the import ID. However, the value here is known: it is a static value supplied by the JSON input generated in step 1.
│ Error: Unsuitable value type
│
│ on modules/opsgenie_service/main.tf line 12, in import:
│ 12: id = var.id
│
│ Unsuitable value: value must be known
Proposal
Allow import blocks in submodules
In cases where the ID is a known, static value, it should be possible to run imports in a submodule, even submodules that are called with for_each, given that all the information needed to complete a plan exists. This would be an awesome first step towards enabling proceduralism in Terraform runs.
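To make the proposal concrete, here is a hypothetical sketch of what a for_each-driven import could look like if import blocks were allowed to expand per module instance. None of this is valid syntax in Terraform 1.5.x:

```hcl
# Hypothetical syntax sketch - not valid in Terraform 1.5.x.
# An import block that expands alongside the module call, using the
# same filtered collection that drives the module's for_each.
import {
  for_each = {
    for service in var.services : service.id => service
    if service.attributes.Service_Owners != null
  }
  to = module.opsgenie_service[each.key].opsgenie_service.this
  id = each.value.attributes.Service_ID
}
```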
Allow variables as import IDs
In cases where the variable is a static, known value, it should be allowed in an import block. For the edge cases where a variable changes between runs, it should be acceptable to destroy the previously imported resource and replace it with the newly imported version. This is how the codebase remains declarative: where the input value changes, it should be treated as a declaration of intent, no different to manually writing out the ID in your codebase. Moreover, allowing variables as import IDs promotes good coding practice by keeping potentially sensitive information out of the codebase.
Pre-execution queries
This is a hypothetical solution for future discussion. It would be really nice if something similar to an http data block could be marked in such a way that it is not run a second time during the apply stage, removing the need for external applications to generate JSON input.
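As a rough sketch of what exists today, the CMDB query could already move into Terraform using the hashicorp/http provider that this configuration pulls in; the missing piece is a way to stop the read from being repeated during apply. The endpoint URL and response shape below are assumptions:

```hcl
# Sketch using the hashicorp/http data source (v3.x). The endpoint is
# hypothetical, and the response is assumed to match the JSON shown in
# step 1 above.
data "http" "cmdb_services" {
  url = "https://cmdb.example.com/api/services" # hypothetical endpoint
}

locals {
  # Decode the JSON response into the same structure as var.services.
  services = jsondecode(data.http.cmdb_services.response_body).services
}

# Caveat: this data source is re-read at apply time, which is exactly
# the behaviour the proposal above would allow opting out of.
```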
References
No response