Hybrid caching unreliable across process since version 1.2.0 #283

Closed

brutaldev opened this issue Feb 1, 2021 · 3 comments

Comments

@brutaldev

Description

Since updating to version 1.2.0, reading hybrid cache values across processes returns stale data: each process always returns the last cache value that it set itself. This problem does not occur in version 1.1.0.

Steps to Reproduce

  1. Start process #1 and cache an object.
  2. The object will be in memory and in Redis with the correct values.
  3. Start process #2 and cache different object values with the same key.
  4. The object will be in memory and is updated in Redis correctly.
  5. Read the cache key in process #1; notice the value is the stale one from memory, not what is currently in Redis.
  6. Do this with as many processes as you want; they will all contain different values.

Expected behavior: Cache is invalidated in other processes via the Redis Bus. Version 1.1.0 behaves as expected.

Actual behavior: Cache is not invalidated across process instances, so each process always reads what's in its own memory cache (see the sketch below).

Specifications

  • Provider : InMemory + Redis (version 1.2.0)
  • Interceptor : None
  • Serializer : Json
  • System : Windows 10
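
For reference, here is a minimal sketch of the setup described above (my reconstruction, not our actual service code — the provider names `m1`/`myredis`, the topic name, and a local Redis at `127.0.0.1:6379` are all assumptions). Running two copies of this program shows each process holding on to its own in-memory value for the shared key:

```csharp
using System;
using EasyCaching.Core;
using EasyCaching.Core.Configurations;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddEasyCaching(options =>
{
    options.UseInMemory("m1");
    options.UseRedis(config =>
    {
        config.DBConfig.Endpoints.Add(new ServerEndPoint("127.0.0.1", 6379));
    }, "myredis");
    options.UseHybrid(config =>
    {
        config.TopicName = "cache-bus-topic";
        config.LocalCacheProviderName = "m1";
        config.DistributedCacheProviderName = "myredis";
    });
    // The bus is what should broadcast invalidations to other processes.
    options.WithRedisBus(bus =>
    {
        bus.Endpoints.Add(new ServerEndPoint("127.0.0.1", 6379));
    });
});

var provider = services.BuildServiceProvider()
                       .GetRequiredService<IHybridCachingProvider>();

// Each process writes its own value under the same key...
provider.Set("demo-key", $"value-from-pid-{Environment.ProcessId}", TimeSpan.FromMinutes(5));

// ...and on 1.2.0 keeps reading its own value back, even after
// another process has overwritten the key in Redis.
var cached = provider.Get<string>("demo-key");
Console.WriteLine(cached.HasValue ? cached.Value : "<miss>");
```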
@catcherwong
Member

@brutaldev Thanks for your interest in this project.

I gave it a try with 1.2.0, and it works well for me. Here is the sample repo:

https://github.com/catcherwong-archive/EasyCachingHybridDemo

Can you provide a sample that can reproduce your issue?

@brutaldev
Author

No sample project, which is why I provided a basic scenario. The solution where this happens contains dozens of services, all load balanced. Switching between 1.1.0 and 1.2.0 completely changes the behaviour, as if the cache is never invalidated.

I'll strip things down and provide a sample if I get time; for now, going back to 1.1.0 has solved all of these issues for us in both dev and production.

@brutaldev
Author

brutaldev commented Feb 11, 2021

This is related to using TrySet vs. Set, and to how the cache is updated (or not).

After reviewing the internals of the Redis cache provider, we discovered that TrySet only updates the cache value if it does not already exist (offending line 355).
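
For illustration, the conditional write TrySet ends up issuing behaves like a Redis `SET ... NX`. A sketch of those semantics using StackExchange.Redis directly (my own example, not the provider's code):

```csharp
using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("127.0.0.1:6379");
var db = redis.GetDatabase();

// TrySet-style write: SET key value EX 300 NX — succeeds only when
// the key is absent, so an "update" from a second process is
// silently dropped and the original value stays in Redis.
bool written = db.StringSet("shared-key", "new-value",
                            TimeSpan.FromMinutes(5), When.NotExists);
Console.WriteLine(written); // false whenever the key already exists
```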

Using Set overwrites the value in Redis as we expected, so we changed all calls to avoid TrySet. TrySet also attempts to update the distributed cache first and leaves the memory cache untouched if that fails, causing further stale-data issues within the same process. Set, by contrast, immediately updates the memory cache before attempting the distributed update, which is preferable.
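
Concretely, the change we made amounts to the following (a sketch — the key, the value, and the `Refresh` wrapper are placeholders for illustration):

```csharp
using System;
using EasyCaching.Core;

public static class CacheUpdates
{
    // `provider` is the injected IHybridCachingProvider.
    public static void Refresh(IHybridCachingProvider provider, string refreshedValue)
    {
        // Before: TrySet only writes when the key is absent in Redis, so
        // a genuine update from this process is silently dropped.
        bool updated = provider.TrySet("user:42", refreshedValue, TimeSpan.FromMinutes(10));

        // After: Set updates the local memory cache immediately, then
        // overwrites the value in Redis.
        provider.Set("user:42", refreshedValue, TimeSpan.FromMinutes(10));
    }
}
```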

The hybrid demo application does not use TrySet anywhere, so this problem could not be observed there. Changing its calls to TrySet would likely cause havoc, given that the distributed cache is not overwritten.
