A cache is a component that makes data access faster. A KV cache is usually an in-memory data store that uses an associative array (think of a map or dictionary) as its fundamental data model.
In concurrent scenarios, especially under frequent writes, dirty (stale) data can appear in the cache. We can mitigate this by:
- Adding a distributed lock when writing data to the cache
- Setting a short cache expiration time (TTL)
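The two mitigations above can be sketched in a minimal in-process cache. This is an illustrative sketch, not production code: the `threading.Lock` stands in for a real distributed lock (e.g. one built on Redis `SET NX PX`), and the class and method names are my own.

```python
import threading
import time

class TTLCache:
    """Sketch of a cache combining both mitigations: a lock serializes
    writes so concurrent writers cannot leave dirty data behind, and
    every entry carries a short TTL so stale data expires quickly."""

    def __init__(self, ttl_seconds=5.0):
        self._data = {}                 # key -> (value, expires_at)
        self._lock = threading.Lock()   # stand-in for a distributed lock
        self._ttl = ttl_seconds

    def set(self, key, value):
        # Acquire the lock before writing, so writes do not interleave.
        with self._lock:
            self._data[key] = (value, time.monotonic() + self._ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Entry expired: drop it so a fresh value gets reloaded.
            with self._lock:
                self._data.pop(key, None)
            return None
        return value
```

With a short TTL, even if a stale value does slip in, it is bounded in time: a `get` after expiry returns `None`, forcing a reload from the source of truth.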
We seldom use read-through/write-through because popular distributed KV systems like memcached and Redis do not support write-through natively. Also, write-through is synchronous, which may hurt write performance: every update must hit the backing store before it completes.
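To make the synchrony cost concrete, here is a minimal write-through sketch. The dict-backed `store` is a stand-in for a real database; all names are illustrative.

```python
class WriteThroughCache:
    """Sketch of write-through: every write goes synchronously to the
    backing store AND the cache, so the caller pays the store's write
    latency on each update."""

    def __init__(self, store):
        self._cache = {}
        self._store = store   # stand-in for a database

    def write(self, key, value):
        self._store[key] = value   # synchronous write to the store first
        self._cache[key] = value   # then update the cache

    def read(self, key):
        if key in self._cache:
            return self._cache[key]
        value = self._store.get(key)   # cache miss: load from the store
        if value is not None:
            self._cache[key] = value
        return value
```

The upside of paying that latency is consistency: the cache and store never disagree after a completed write.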
This write-back (write-behind) strategy is commonly used in OS designs such as the page cache and asynchronous log flushing. However, it is seldom used in database-cache scenarios, because dirty data that has not yet been flushed is lost if the machine crashes or the hardware is physically damaged.
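The write-back pattern and its data-loss window can be sketched as follows; the batch `flush` here is manual for clarity, where a real system would trigger it on a timer or when enough dirty entries accumulate. All names are illustrative.

```python
class WriteBackCache:
    """Sketch of write-back: writes land only in the cache and are
    flushed to the backing store later in a batch. Any dirty entry
    not yet flushed is lost if the process dies -- the data-loss risk
    that makes write-back rare for database caching."""

    def __init__(self, store):
        self._cache = {}
        self._dirty = set()   # keys written but not yet persisted
        self._store = store   # stand-in for a database

    def write(self, key, value):
        self._cache[key] = value
        self._dirty.add(key)   # deferred: not in the store yet

    def flush(self):
        # Persist all dirty entries in one batch.
        for key in self._dirty:
            self._store[key] = self._cache[key]
        self._dirty.clear()
```

Between `write` and `flush`, the store lags the cache; that gap is exactly what the page cache trades for fast writes, and what a database cache usually cannot afford.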