Conversation
Implement a thread-safe local in-memory cache with TTL and LRU eviction. When local_cache_ttl and local_cache_max_size are configured, a local cache layer is automatically added for GET commands, reducing the latency of high-frequency reads against the remote Redis server. Cache synchronization for SET/DEL commands is also handled.
Hi, I’m Jit, a friendly security platform designed to help developers build secure applications from day zero with an MVS (Minimal Viable Security) mindset. In case there are security findings, they will be communicated to you as a comment inside the PR. Hope you’ll enjoy using Jit. Questions? Comments? Want to learn more? Get in touch with us.
Cursor Bugbot has reviewed your changes and found 4 potential issues.
Reviewed by Cursor Bugbot for commit 02c0c50. Configure here.
command_upper = command_name.upper()
if command_upper == "GET" and len(args) >= 2:
    key = args[1]
    self._local_cache.set(key, result)
Cache stores None values that can never be served
Medium Severity
LocalCache.get() returns None for both "key not in cache" and "cached value is None", but the cache-hit check uses if cached_value is not None. When a Redis GET returns None (non-existent key), it gets stored via self._local_cache.set(key, result), but subsequent lookups always treat the cached None as a miss. These dead entries consume cache space and can evict valid entries through LRU, without ever providing a cache hit.
Additional Locations (1)
if cached_value is not None:
    if not self.connection:
        pool.release(conn)
    return cached_value
Connection pool acquired unnecessarily on every cache hit
Medium Severity
pool.get_connection() is called at line 794 before the local cache is checked at line 807. On a cache hit, the connection is immediately released without being used. Both get_connection and release acquire locks, modify internal data structures, and record metrics — significant overhead that defeats the cache's purpose of reducing latency for hot key reads.
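The fix is to reorder the fast path so the pool is only touched on a miss. The sketch below uses stub pool/connection classes to make the reordering runnable; the real redis-py internals (`get_connection`/`release` signatures, parsing) differ:

```python
# Sketch: consult the local cache BEFORE acquiring a pooled connection.
# StubPool/StubConn are stand-ins; only the ordering matters here.

class StubConn:
    def send_command(self, *args):
        return b"remote-value"


class StubPool:
    def __init__(self):
        self.acquisitions = 0  # counts lock/metrics overhead events

    def get_connection(self):
        self.acquisitions += 1
        return StubConn()

    def release(self, conn):
        pass


class Client:
    def __init__(self):
        self.pool = StubPool()
        self._local_cache = {}

    def get(self, key):
        cached = self._local_cache.get(key)
        if cached is not None:
            return cached                    # hit: pool never touched
        conn = self.pool.get_connection()    # acquire only on a miss
        try:
            result = conn.send_command("GET", key)
        finally:
            self.pool.release(conn)
        self._local_cache[key] = result
        return result


c = Client()
c.get("hot-key")
c.get("hot-key")
print(c.pool.acquisitions)  # 1 — the second read never reached the pool
```

On a hit this skips both lock acquisitions and the metrics bookkeeping, which is the whole point of caching hot-key reads.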
except UnicodeDecodeError:
    command_upper = command_name.decode('latin-1').upper()
else:
    command_upper = command_name.upper()
Duplicate command name decoding logic in execute path
Low Severity
The command_name bytes-decoding and uppercasing logic at lines 852–858 is an exact duplicate of lines 797–803. Since self._local_cache doesn't change mid-execution, command_upper is already in scope and valid by the time the second block runs, so the second block can simply reuse it instead of recomputing.
Additional Locations (1)
    self._local_cache.delete(key)
else:
    key = args[1]
    self._local_cache.delete(key)
Many write commands skip cache invalidation causing stale reads
High Severity
Cache invalidation only handles SET, SETEX, and DEL, but many other common write commands — INCR/INCRBY, DECR/DECRBY, APPEND, GETDEL, UNLINK, PSETEX, SETNX, MSET, and RENAME — also modify key values through execute_command without invalidating the local cache. This causes subsequent GET calls to return stale cached data. INCR/DECR are especially concerning since counters and rate limiters are classic "hot key" use cases — the exact scenario this cache targets.
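A table-driven approach can cover these cases, sketched below. The command sets are illustrative rather than exhaustive, and multi-key commands need per-arity handling:

```python
# Sketch: map write commands to the cache keys they may modify.
# Command lists are an assumption for illustration, not a complete inventory.

SINGLE_KEY_WRITES = {
    "SET", "SETEX", "PSETEX", "SETNX", "APPEND",
    "INCR", "DECR", "INCRBY", "DECRBY", "GETDEL", "GETSET",
}
MULTI_KEY_WRITES = {"DEL", "UNLINK"}   # every argument after the command is a key
PAIRWISE_WRITES = {"MSET"}             # arguments alternate key, value, key, value...


def keys_to_invalidate(command_upper, args):
    """Return the local-cache keys a write command may have modified."""
    if command_upper in SINGLE_KEY_WRITES:
        return [args[1]]
    if command_upper in MULTI_KEY_WRITES:
        return list(args[1:])
    if command_upper in PAIRWISE_WRITES:
        return list(args[1::2])        # every other argument is a key
    if command_upper == "RENAME":
        return list(args[1:3])         # both source and destination keys
    return []


print(keys_to_invalidate("INCR", ("INCR", "counter")))       # ['counter']
print(keys_to_invalidate("MSET", ("MSET", "a", 1, "b", 2)))  # ['a', 'b']
```

The execute path would then loop over `keys_to_invalidate(...)` and call `self._local_cache.delete(key)` for each, instead of special-casing SET/SETEX/DEL inline.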
Hi @zilvya , thank you for your contribution! Could you please update the PR title and description to be in English, and also replace the non-English comments in the code so they’re clear to the broader community? Additionally, could you share more context on the motivation behind this feature? We already have client-side caching implemented in the synchronous standalone client, so it would be helpful to understand the use case this PR is addressing and how it differs or extends the existing functionality.
Description of change
Please provide a description of the change here.
Pull Request check-list
Please make sure to review and check all of these items:
NOTE: these things are not required to open a PR and can be done
afterwards / while the PR is open.
Note
Medium Risk
Introduces an experimental client-side caching layer that can change read-after-write behavior and return stale values within the configured TTL. It also touches the core command execution path (_execute_command), so bugs could impact all Redis operations when enabled.
Overview
Adds an experimental opt-in local in-memory cache to Redis via new local_cache_ttl and local_cache_max_size constructor args. When enabled, _execute_command serves GET results from the local cache, populates the cache after successful GET calls, and invalidates cached keys on SET/SETEX and DEL. A new LocalCache implementation (redis/local_cache.py) provides thread-safe TTL expiry and LRU eviction.
Reviewed by Cursor Bugbot for commit 02c0c50. Bugbot is set up for automated code reviews on this repo. Configure here.
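For readers unfamiliar with the pattern, a thread-safe TTL + LRU cache of the kind the overview describes can be sketched as follows (this is an assumption about the shape of redis/local_cache.py, not the PR's actual code):

```python
# Sketch of a thread-safe local cache with TTL expiry and LRU eviction.
import threading
import time
from collections import OrderedDict


class LocalCache:
    def __init__(self, ttl=5.0, max_size=128):
        self._ttl = ttl
        self._max_size = max_size
        self._lock = threading.Lock()
        self._data = OrderedDict()  # key -> (value, expires_at); order = recency

    def set(self, key, value):
        with self._lock:
            self._data.pop(key, None)  # re-insert to mark as most recent
            self._data[key] = (value, time.monotonic() + self._ttl)
            while len(self._data) > self._max_size:
                self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        with self._lock:
            item = self._data.get(key)
            if item is None:
                return None
            value, expires_at = item
            if time.monotonic() >= expires_at:
                del self._data[key]      # lazy TTL expiry on read
                return None
            self._data.move_to_end(key)  # mark as recently used
            return value

    def delete(self, key):
        with self._lock:
            self._data.pop(key, None)


cache = LocalCache(ttl=60, max_size=2)
cache.set("a", 1)
cache.set("b", 2)
cache.set("c", 3)  # capacity exceeded: "a" is evicted as least recent
print(cache.get("a"), cache.get("c"))  # None 3
```

Note this sketch, like the finding above, conflates a cached None with a miss; a production version would want a sentinel return for absent keys.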