Conversation


@yasirfolio3 yasirfolio3 commented Mar 3, 2023

Summary

  • Provides two out-of-the-box ODP caches: in-memory and redis.
  • in-memory uses the go-sdk's built-in LRU cache with a user-provided size and timeout.
  • Adds support for custom ODP caches.
  • If no ODP cache is provided, agent will use the go-sdk's built-in LRU cache with its default size and timeout.

Out-of-the-Box ODP Cache Usage

  1. To use the in-memory ODPCache, update the config.yaml as shown below:
## configure optional ODP Cache
odp:
  segmentsCache:
    default: "in-memory"
    services:
      in-memory: 
        ## 0 means cache will be disabled
        size: 0
        ## timeout after which the cached item will become invalid.
        ## 0 means records will never be deleted
        timeout: 0s
  2. To use the redis ODPCache, update the config.yaml as shown below:
## configure optional ODP Cache
odp:
  segmentsCache:
    default: "redis"
    services:
      redis: 
        host: "your_host"
        password: "your_password"
        database: 0 ## your database
        timeout: 0s

Custom ODPCache Implementation

To implement a custom ODP cache, the following steps need to be taken:

  1. Create a struct that implements the cache.Cache interface in plugins/odpcache/services, for example:
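A minimal sketch of such a struct, assuming the go-sdk cache.Cache interface exposes Save, Lookup, and Reset (the MyCache name and the map-backed storage here are hypothetical; a real service would add its own eviction policy):

package services

import "sync"

// MyCache is a hypothetical, thread-safe cache.Cache implementation backed by a plain map.
type MyCache struct {
	lock  sync.RWMutex
	items map[string]interface{}
}

// Save stores a value against the given key.
func (c *MyCache) Save(key string, value interface{}) {
	c.lock.Lock()
	defer c.lock.Unlock()
	if c.items == nil {
		c.items = map[string]interface{}{}
	}
	c.items[key] = value
}

// Lookup returns the value stored against the given key, or nil if absent.
func (c *MyCache) Lookup(key string) interface{} {
	c.lock.RLock()
	defer c.lock.RUnlock()
	return c.items[key]
}

// Reset clears all cached entries.
func (c *MyCache) Reset() {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.items = map[string]interface{}{}
}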
  2. Add an init function inside your ODPCache file as shown below:
// Imports assumed from the plugin layout described above.
import (
	"github.com/optimizely/agent/plugins/odpcache"
	"github.com/optimizely/go-sdk/pkg/odp/cache"
)

func init() {
	myCacheCreator := func() cache.Cache {
		return &yourCacheStruct{}
	}
	// Register the creator so "my_cache_name" can be referenced from config.yaml.
	odpcache.Add("my_cache_name", myCacheCreator)
}
  3. Update the config.yaml file with your ODPCache config as shown below:
## configure optional ODPCache
odp:
  segmentsCache:
    default: "my_cache_name"
    services:
      my_cache_name:
        ## Add any parameters here that need to be mapped onto the ODPCache struct.
        ## For example, if the struct has a json-mappable property called `host`,
        ## it can be set to the value `abc.com` as shown:
        host: "abc.com"
  • If a user has created multiple ODPCache services and wants to override the default ODPCache for a specific sdkKey, they can do so by providing the ODPCache name in the X-Optimizely-ODP-Cache-Name request header.

  • Whenever a request is made with a unique sdkKey, the agent node handling that request creates and caches a new ODPCache. To keep the ODPCache type consistent among all nodes in a cluster, it is recommended to send the X-Optimizely-ODP-Cache-Name request header in every request, as sketched below.
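For illustration, a minimal request that pins the cache by name, assuming a local Agent on port 8080 and a hypothetical flag key (the /v1/decide endpoint and request body are simplified here):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader(`{"userId": "user123"}`)
	req, err := http.NewRequest(http.MethodPost, "http://localhost:8080/v1/decide?keys=my_flag", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Optimizely-SDK-Key", "<your_sdk_key>")
	// Pin the ODP cache implementation so every node resolves the same service.
	req.Header.Set("X-Optimizely-ODP-Cache-Name", "my_cache_name")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}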

Tests

  • Unit tests added.
  • FSC should pass.

Ticket

FSSDK-8838

@yasirfolio3 yasirfolio3 changed the title feat(odp-cache): Adds support for odp cache. [FSSDK-8838] feat(odp-cache): Adds support for odp cache. Mar 3, 2023
@yasirfolio3 yasirfolio3 requested a review from jaeopt March 7, 2023 20:43
@jaeopt jaeopt left a comment

Looks great! Some suggestions to reuse go-sdk cache. Let's discuss options.

config/config.go Outdated
Comment on lines 186 to 188
EventsRequestTimeout time.Duration `json:"eventsRequestTimeout"`
EventsFlushInterval time.Duration `json:"eventsFlushInterval"`
Disable bool `json:"disable"`
jaeopt (Contributor):

Suggested change
EventsRequestTimeout time.Duration `json:"eventsRequestTimeout"`
EventsFlushInterval time.Duration `json:"eventsFlushInterval"`
Disable bool `json:"disable"`
Disable bool `json:"disable"`
EventsRequestTimeout time.Duration `json:"eventsRequestTimeout"`
EventsFlushInterval time.Duration `json:"eventsFlushInterval"`

@@ -0,0 +1,64 @@
# ODP Cache
Use a ODP Cache to persist segments and ensure they are sticky.
jaeopt (Contributor):

Suggested change
Use a ODP Cache to persist segments and ensure they are sticky.
Use a ODP Cache to cache user segments fetched from the ODP server.

)

// InMemoryCache represents the in-memory implementation of Cache interface
type InMemoryCache struct {
jaeopt (Contributor):

Not clear about the value of providing an option to use this in-memory cache. It looks like it instantiates the same LRUCache implemented in go-sdk. Can't we use the default one already in go-sdk?

yasirfolio3 (Contributor Author):

We are now using this as the default ODP cache, with both timeout and size as arguments. The following is the default configuration:

odp:
  ## Disable odp
  disable: false
  ## Timeout in seconds after which event requests will timeout.
  eventsRequestTimeout: 10s
  ## Flush interval in seconds for odp events
  eventsFlushInterval: 1s
  ## Timeout in seconds after which segment requests will timeout.
  segmentsRequestTimeout: 10s
  ## This will override all other settings
  segmentsCache:
    default: "in-memory"
    services:
      in-memory:
        size: 10000
        timeout: 600 # seconds
      # redis:
      #   host: "localhost:6379"
      #   password: ""
      #   database: 0
      #   timeout: 0 # seconds

}

// Check if JSON string was set using OPTIMIZELY_CLIENT_ODP_SEGMENTSCACHE environment variable
if odpSegmentsCache := v.GetStringMap("client.odp.segmentsCache"); odpSegmentsCache != nil {
jaeopt (Contributor):

Wondering why we need this support for odpCache only? What about other odp config like "client.odp"?

yasirfolio3 (Contributor Author):

No specific reason, it's just something that we did for UPS as well.

config.yaml Outdated
segmentsCacheTimeout: 600s
## Timeout in seconds after which segment requests will timeout.
segmentsRequestTimeout: 10s
## This will override all other settings
jaeopt (Contributor):

Can we be more specific about overriding? CacheSize and CacheTimeout will be overridden. Others like "disable" etc?

yasirfolio3 (Contributor Author):

Deleted this comment, as segmentsCacheTimeout and segmentsCacheSize no longer exist outside the scope of the in-memory cache.

config.yaml Outdated
segmentsRequestTimeout: 10s
## This will override all other settings
segmentsCache:
  default: ""
jaeopt (Contributor):

What does this empty default imply? Will agent use the go-sdk's native LRUCache with segmentsCacheSize and segmentsCacheTimeout? Or is the cache disabled?

@yasirfolio3 yasirfolio3 (Contributor Author) commented Mar 13, 2023:

As per the new implementation, an empty default simply means that go-sdk will use its default LRU cache with its default timeout and size.

config.yaml Outdated
Comment on lines 177 to 179
# in-memory:
#   size: 0
#   timeout: 0 # seconds
@jaeopt jaeopt (Contributor) commented Mar 9, 2023:

Is this in-memory cache the same as the one in go-sdk (implemented and active by default)?
If so, we can use size and timeout for its config (segmentsCacheSize and segmentsCacheTimeout above can be removed).
What about "in-memory" as the default for "default:"? Then they'll use the go-sdk cache unless overridden with redis etc.

jaeopt (Contributor):

Does this default config make sense?

disable: false
eventsRequestTimeout: 10s
eventsFlushInterval: 1s
segmentsRequestTimeout: 10s
segmentsCache:
  default: "in-memory"
  services:
    in-memory:
      size: 10000
      timeout: 600s
    # redis:
    #   host:
    #

yasirfolio3 (Contributor Author):

Thanks for the suggestion, went with the same approach.

# size: 0
# timeout: 0 # seconds
# redis:
# host: "localhost:6379"
jaeopt (Contributor):

Don't we need to control redis timeout as well? We can reuse segmentsCacheTimeout but it may be confusing since we do not use segmentsCacheSize for redis.

yasirfolio3 (Contributor Author):

Added timeout to redis as well.

func init() {
	redisCacheCreator := func() cache.Cache {
		return &RedisCache{
			Expiration: 0 * time.Second,
jaeopt (Contributor):

We need timeout control for redis cache. Can't we add timeout to config?

@yasirfolio3 yasirfolio3 requested a review from jaeopt March 13, 2023 19:08
@jaeopt jaeopt left a comment

All changes look good! A few small fixes suggested.

redis:
  host: "localhost:6379"
  password: ""
  timeout: 5
jaeopt (Contributor):

5s ?

yasirfolio3 (Contributor Author):

5s was possible when the key was at the root and viper was unmarshalling it. In this case we are unmarshalling at runtime using the native json library, which does not support converting 5s to a time duration.
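For context, a minimal sketch of the kind of duration wrapper this implies (names here are assumed; the string branch is illustrative only, since per the comment above this path accepts bare numbers):

package config

import (
	"encoding/json"
	"fmt"
	"time"
)

// Duration wraps time.Duration so it can be decoded from plain JSON.
type Duration struct {
	time.Duration
}

// UnmarshalJSON accepts a bare number (interpreted as nanoseconds by the
// time.Duration conversion) or, illustratively, a unit-suffixed string like "5s".
func (d *Duration) UnmarshalJSON(b []byte) error {
	var unmarshalledJSON interface{}
	if err := json.Unmarshal(b, &unmarshalledJSON); err != nil {
		return err
	}
	switch value := unmarshalledJSON.(type) {
	case float64:
		d.Duration = time.Duration(value)
	case string:
		parsed, err := time.ParseDuration(value)
		if err != nil {
			return err
		}
		d.Duration = parsed
	default:
		return fmt.Errorf("invalid duration: %v", unmarshalledJSON)
	}
	return nil
}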

config.yaml Outdated
# timeout: 0 # seconds
in-memory:
  size: 10000
  timeout: 600 # seconds
jaeopt (Contributor):

600s ?

yasirfolio3 (Contributor Author):

same as above.

config.yaml Outdated
# host: "localhost:6379"
# password: ""
# database: 0
# timeout: 0 # seconds
jaeopt (Contributor):

600s default?

yasirfolio3 (Contributor Author):

same as above.

config.yaml Outdated
segmentsCacheTimeout: 600s
## Timeout in seconds after which segment requests will timeout.
segmentsRequestTimeout: 10s
## This will override all other settings
jaeopt (Contributor):

No overriding any more. This line should be removed.

@yasirfolio3 yasirfolio3 requested a review from jaeopt March 14, 2023 15:49
@jaeopt jaeopt left a comment

Looks great! A few minor fixes suggested.

in-memory:
  ## 0 means cache will be disabled
  size: 0
  ## timeout after which least recently used records will be deleted
jaeopt (Contributor):

Suggested change
## timeout after which least recently used records will be deleted
## timeout after which the cached item will become invalid.

## configure optional ODP Cache
odp:
  segmentsCache:
    default: "in-memory"
@jaeopt jaeopt (Contributor) commented Mar 17, 2023:

It would be better if the default "default" were clarified. If no segmentsCache is defined (or no default is defined), will we use the default in-memory cache with default size and timeout?

yasirfolio3 (Contributor Author):

This statement is true: if none is provided, the sdk uses its own in-memory cache with default values: https://github.com/optimizely/go-sdk/blob/master/pkg/odp/segment/segment_manager.go#L87


switch value := unmarshalledJSON.(type) {
case float64:
	duration.Duration = time.Duration(value)
jaeopt (Contributor):

Do we accept a number without a unit? What is the default unit in that case?

yasirfolio3 (Contributor Author):

Yes we do; the default unit is nanoseconds, per Go's time.Duration conversion.
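A quick standalone illustration of that default (not code from this PR):

package main

import (
	"fmt"
	"time"
)

func main() {
	// A bare number converted to time.Duration is interpreted as nanoseconds.
	fmt.Println(time.Duration(600)) // 600ns

	// A unit-suffixed string must go through time.ParseDuration instead.
	d, err := time.ParseDuration("600s")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // 10m0s
}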

@yasirfolio3 yasirfolio3 requested a review from jaeopt March 17, 2023 18:36
@jaeopt jaeopt left a comment

LGTM!

@yasirfolio3 yasirfolio3 merged commit ce3b3a7 into master Mar 17, 2023
@yasirfolio3 yasirfolio3 deleted the yasir/odp-cache branch March 17, 2023 20:49