
Provider crash: invalid memory address or nil pointer dereference with aws_elasticache_serverless_cache and data.aws_vpc #42954

Open
@DavidFrahmVS

Description


Terraform and AWS Provider Version

$ terraform --version
Terraform v1.5.7
on darwin_arm64
+ provider registry.terraform.io/datadog/datadog v3.39.0
+ provider registry.terraform.io/hashicorp/aws v5.99.1

Your version of Terraform is out of date! The latest version
is 1.12.1. You can update by downloading from https://www.terraform.io/downloads.html

Affected Resource(s) or Data Source(s)

resource "aws_elasticache_serverless_cache"
data "aws_vpc"

Expected Behavior

terraform plan completes, showing the tag drift so that we can update the tags in our module (they were changed in the AWS Console).

Actual Behavior

terraform plan crashes; two resources in the module report that the plugin did not respond.

Relevant Error/Panic Output

│ Error: Plugin did not respond

│   with module.memcached-vivid-rn-bff-serverless.aws_elasticache_serverless_cache.cache,
│   on .terraform/modules/memcached-vivid-rn-bff-serverless/memcached-serverless/main.tf line 1, in resource "aws_elasticache_serverless_cache" "cache":
│    1: resource "aws_elasticache_serverless_cache" "cache" {

│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadResource call. The plugin logs may contain more details.


│ Error: Plugin did not respond

│   with module.memcached-vivid-rn-bff-serverless.data.aws_vpc.vpc,
│   on .terraform/modules/memcached-vivid-rn-bff-serverless/memcached-serverless/main.tf line 34, in data "aws_vpc" "vpc":
│   34: data "aws_vpc" "vpc" {

│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.

Releasing state lock. This may take a few moments...

Stack trace from the terraform-provider-aws_v5.99.1_x5 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x101406518]

goroutine 669 [running]:
github.com/hashicorp/terraform-provider-aws/internal/tags.Map.MapSemanticEquals({{0x140037669c0?, {0x11b4ddcf0?, 0x1275da1a0?}, 0x20?}}, {0x1?, 0x1275da1a0?}, {0x11b516e90?, 0x140047e9d60?})
        github.com/hashicorp/terraform-provider-aws/internal/tags/map.go:169 +0x3c8
github.com/hashicorp/terraform-plugin-framework/internal/fwschemadata.ValueSemanticEqualityMap({0x11b49af08, 0x14003766ae0}, {{{0x14004463e10, 0x1, 0x1}}, {0x11b4ddca8, 0x140047e9d60}, {0x11b4ddca8, 0x140047e9de0}}, 0x1400292cfc0)
        github.com/hashicorp/[email protected]/internal/fwschemadata/value_semantic_equality_map.go:50 +0x230
github.com/hashicorp/terraform-plugin-framework/internal/fwschemadata.ValueSemanticEquality({0x11b49af08, 0x14002bf6780}, {{{0x14004463e10, 0x1, 0x1}}, {0x11b4ddca8, 0x140047e9d60}, {0x11b4ddca8, 0x140047e9de0}}, 0x1400292cfc0)
        github.com/hashicorp/[email protected]/internal/fwschemadata/value_semantic_equality.go:77 +0x344
github.com/hashicorp/terraform-plugin-framework/internal/fwserver.SchemaSemanticEquality({0x11b49af08, 0x14002bf6780}, {{{0x1155aa075, 0x5}, {0x11b570308, 0x140033ef9f0}, {{0x11b5476a0, 0x140036cb1a0}, {0x119047900, 0x140036ca570}}}, ...}, ...)
        github.com/hashicorp/[email protected]/internal/fwserver/schema_semantic_equality.go:80 +0x214
github.com/hashicorp/terraform-plugin-framework/internal/fwserver.(*Server).ReadResource(0x140022b0b48, {0x11b49af08, 0x14002bf6780}, 0x14002bf67e0, 0x1400292d548)
        github.com/hashicorp/[email protected]/internal/fwserver/server_readresource.go:154 +0x974
github.com/hashicorp/terraform-plugin-framework/internal/proto5server.(*Server).ReadResource(0x140022b0b48, {0x11b49af08?, 0x14002bf6690?}, 0x1400223d640)
        github.com/hashicorp/[email protected]/internal/proto5server/server_readresource.go:56 +0x2e0
github.com/hashicorp/terraform-plugin-mux/tf5muxserver.(*muxServer).ReadResource(0x14001639380, {0x11b49af08?, 0x14002bf63c0?}, 0x1400223d640)
        github.com/hashicorp/[email protected]/tf5muxserver/mux_server_ReadResource.go:35 +0x184
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadResource(0x140016243c0, {0x11b49af08?, 0x1400162fb90?}, 0x14001c9d5e0)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:784 +0x21c
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadResource_Handler({0x11af5a120, 0x140016243c0}, {0x11b49af08, 0x1400162fb90}, 0x14009925300, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:575 +0x1c0
google.golang.org/grpc.(*Server).processUnaryRPC(0x14000fc7600, {0x11b49af08, 0x1400162fb00}, 0x1400407b620, 0x14003109e90, 0x12757c048, 0x0)
        google.golang.org/[email protected]/server.go:1405 +0xca8
google.golang.org/grpc.(*Server).handleStream(0x14000fc7600, {0x11b4d28f8, 0x140031d8000}, 0x1400407b620)
        google.golang.org/[email protected]/server.go:1815 +0x910
google.golang.org/grpc.(*Server).serveStreams.func2.1()
        google.golang.org/[email protected]/server.go:1035 +0x84
created by google.golang.org/grpc.(*Server).serveStreams.func2 in goroutine 55
        google.golang.org/[email protected]/server.go:1046 +0x13c

Error: The terraform-provider-aws_v5.99.1_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.
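For orientation: the stack trace shows the panic originating in tags.Map.MapSemanticEquals (internal/tags/map.go:169), which terraform-plugin-framework invokes while checking semantic equality of the tags map during ReadResource/ReadDataSource. The following is a minimal, hypothetical Go sketch of that class of bug, not the provider's actual code: a map comparison that dereferences per-key value pointers without a nil check, which panics in exactly this way when one side carries a null tag value (for example, after tags are changed out of band).

package main

import "fmt"

// tagMap is a stand-in for the provider's framework map type; the real
// implementation lives in terraform-provider-aws internal/tags.
type tagMap map[string]*string

// semanticEquals illustrates the failure mode only: each value pointer
// is dereferenced without a nil check.
func semanticEquals(a, b tagMap) bool {
	for k, av := range a {
		bv := b[k]
		if *av != *bv { // panics when av or bv is nil
			return false
		}
	}
	return true
}

func main() {
	v := "prod"
	plan := tagMap{"env": &v}
	state := tagMap{"env": nil} // a tag whose value is null after out-of-band drift
	// Panics: runtime error: invalid memory address or nil pointer dereference
	fmt.Println(semanticEquals(plan, state))
}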

Sample Terraform Configuration


module "memcached-serverless" {
source = ""

name = "memcached-serverless"
description = "Memcached"
vpc_id = local.vpc_id
subnet_ids = local.private_subnet_ids
allow_vpc_ingress = true
tags_required = {
REDACTED
}
}

Module contents:

main.tf

resource "aws_elasticache_serverless_cache" "cache" {
  engine      = "memcached"
  name        = var.name
  description = var.description
  dynamic "cache_usage_limits" {
    for_each = var.data_storage_limit_gb != null ? [1] : []
    content {
      data_storage {
        maximum = var.data_storage_limit_gb
        unit    = "GB"
      }
    }
  }
  major_engine_version = var.cluster_version
  security_group_ids   = [aws_security_group.cache.id]
  subnet_ids           = var.subnet_ids
  tags = merge({
    Name = var.name
  }, local.tags)
}

resource "aws_security_group" "cache" {
  name        = "${var.name}-sg"
  description = "Security Group for Memcache ${var.name}"
  vpc_id      = var.vpc_id

  tags = merge({
    Name = "${var.name}-sg"
  }, local.tags)
}

data "aws_vpc" "vpc" {
  id = var.vpc_id
}

resource "aws_security_group_rule" "vpc_ingress" {
  count             = var.allow_vpc_ingress ? 1 : 0
  security_group_id = aws_security_group.cache.id
  type              = "ingress"
  protocol          = "tcp"
  from_port         = aws_elasticache_serverless_cache.cache.endpoint[0].port
  to_port           = aws_elasticache_serverless_cache.cache.endpoint[0].port
  cidr_blocks       = data.aws_vpc.vpc.cidr_block_associations[*].cidr_block
  description       = "VPC Access"
}

resource "aws_security_group_rule" "sg_ingress" {
  for_each                 = toset(var.ingress_security_groups)
  security_group_id        = aws_security_group.cache.id
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = aws_elasticache_serverless_cache.cache.endpoint[0].port
  to_port                  = aws_elasticache_serverless_cache.cache.endpoint[0].port
  source_security_group_id = each.value
  description              = "SG Access"
}

resource "aws_security_group_rule" "additional_cidrs" {
  count             = length(var.additional_cidrs) > 0 ? 1 : 0
  security_group_id = aws_security_group.cache.id
  type              = "ingress"
  protocol          = "tcp"
  from_port         = aws_elasticache_serverless_cache.cache.endpoint[0].port
  to_port           = aws_elasticache_serverless_cache.cache.endpoint[0].port
  cidr_blocks       = var.additional_cidrs
  description       = "CIDR Access"
}

locals.tf

locals {
  tags = merge(var.tags_required, var.tags_custom)
}

variables.tf

variable "tags_custom" {
  description = "Custom tags for resources"
  type        = map(string)
  default     = {}
}

variable "tags_required" {
  description = "Tags required for all resources"
  type = object({
    env            = string
    service        = string
    team           = string
    terraform-repo = string
  })
}

variable "name" {
  type        = string
  description = "Cache Name"
}

variable "description" {
  type        = string
  description = "Cache Description"
}

variable "cluster_version" {
  type        = string
  description = "Memcached Version"
  default     = "1.6"
}

variable "subnet_ids" {
  type        = list(string)
  description = "Subnet List"
}

variable "vpc_id" {
  type        = string
  description = "VPC ID"
}

variable "data_storage_limit_gb" {
  type        = string
  description = "Max Data Storage in GB"
  default     = null
}

variable "allow_vpc_ingress" {
  type        = bool
  description = "Allow ingress from VPC CIDR"
  default     = false
}

variable "ingress_security_groups" {
  type        = list(string)
  description = "Security Groups allows to connect to cluster"
  default     = []
}

variable "additional_cidrs" {
  type        = list(string)
  description = "Additional CIDRs to allow access"
  default     = []
}

versions.tf

terraform {
  required_version = ">= 0.12"
  required_providers {
    aws = {
      version = ">= 5.30.0"
      source  = "hashicorp/aws"
    }
    datadog = {
      source  = "DataDog/datadog"
      version = "3.39.0"
    }
  }
}

Steps to Reproduce

  1. Change tags in the AWS Console (keys and values: remove old keys and add new keys/values; see the SDK sketch after this list)
  2. Run terraform plan on the existing repo/module to find the list of tags to update
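
As a hedged illustration of step 1, the same out-of-band tag change can be scripted with the AWS SDK for Go v2. The ARN, region, account, and tag key/value below are placeholders, not values from this report:

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/elasticache"
	"github.com/aws/aws-sdk-go-v2/service/elasticache/types"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := elasticache.NewFromConfig(cfg)

	// Placeholder ARN: substitute the serverless cache's actual ARN.
	arn := "arn:aws:elasticache:us-east-1:111122223333:serverlesscache:memcached-serverless"

	// Add/overwrite a tag outside of Terraform, mimicking the console edit.
	_, err = client.AddTagsToResource(ctx, &elasticache.AddTagsToResourceInput{
		ResourceName: aws.String(arn),
		Tags: []types.Tag{
			{Key: aws.String("team"), Value: aws.String("new-team")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("tag updated out of band; run terraform plan to observe the drift (and, per this report, the crash)")
}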

Debug Logging


GenAI / LLM Assisted Development

n/a

Important Facts and References

Tags were changed in the AWS Console; terraform plan is being run to find where to update the Terraform configuration.

Would you like to implement a fix?

No


Labels

  - bug: Addresses a defect in current functionality.
  - crash: Results from or addresses a Terraform crash or kernel panic.
  - prioritized: Part of the maintainer team's immediate focus. To be addressed within the current quarter.
  - tags: Pertains to resource tagging.
