In a multithreaded scenario, the intent was to use a cache plus a per-key lock to avoid repeating the same processing, and in particular the same I/O, more than once. In practice, however, the logs show the work being executed again for each thread:
2025-06-05 17:30:27.683 [ForkJoinPool.commonPool-worker-3] INFO Rule - [vagueNameMilvusReacll,285] - embedding time-consuming:503
2025-06-05 17:30:29.693 [ForkJoinPool.commonPool-worker-3] INFO Rule - [vagueNameMilvusReacll,314] - milvus time-consuming:2010
2025-06-05 17:30:29.701 [ForkJoinPool.commonPool-worker-3] INFO Rule - [vagueNameMilvusReacll,358] - vagueName time-consuming:2534
2025-06-05 17:30:30.135 [ForkJoinPool.commonPool-worker-11] INFO Rule - [vagueNameMilvusReacll,285] - embedding time-consuming:434
2025-06-05 17:30:30.363 [ForkJoinPool.commonPool-worker-11] INFO Rule - [vagueNameMilvusReacll,314] - milvus time-consuming:228
2025-06-05 17:30:30.369 [ForkJoinPool.commonPool-worker-11] INFO Rule - [vagueNameMilvusReacll,358] - vagueName time-consuming:3202
2025-06-05 17:30:30.750 [ForkJoinPool.commonPool-worker-8] INFO Rule - [vagueNameMilvusReacll,285] - embedding time-consuming:381
2025-06-05 17:30:31.021 [ForkJoinPool.commonPool-worker-8] INFO Rule - [vagueNameMilvusReacll,314] - milvus time-consuming:270
2025-06-05 17:30:31.022 [ForkJoinPool.commonPool-worker-8] INFO Rule - [vagueNameMilvusReacll,358] - vagueName time-consuming:3855
The code looks like this:
public final static Map<String, Lock> keyLockMap = new ConcurrentHashMap<>();

Rule cacheRule = (Rule) CacheMap.get(nodeValue);
if (cacheRule != null) {
    // cache hit: return the cached result
}
Lock lock = keyLockMap.computeIfAbsent(nodeValue, k -> new ReentrantLock());
lock.lock();
try {
    // expensive embedding + Milvus query, then write the result into CacheMap
} finally {
    lock.unlock();
    // release the lock entry so the map does not keep unused lock objects around for long
    keyLockMap.remove(nodeValue);
}
The actual problem:
The cache is checked once before the lock is acquired (which is fine), but after the lock is acquired there is no second check to see whether another thread has already filled the cache in the meantime. As a result, every thread that missed the first check enters the block after lock.lock() in turn, and each of them runs the real query logic.
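As a minimal illustration (hypothetical names: cache, keyLockMap, load; Thread.sleep stands in for the embedding/Milvus I/O), the sketch below usually prints the "runs the expensive query" line twice, because both threads pass the single pre-lock check before either of them has stored a result:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SingleCheckRace {

    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, Lock> keyLockMap = new ConcurrentHashMap<>();

    static String load(String key) throws InterruptedException {
        String cached = cache.get(key);
        if (cached != null) {
            return cached; // the only cache check, done before locking
        }
        Lock lock = keyLockMap.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " runs the expensive query");
            Thread.sleep(500); // stands in for embedding + Milvus I/O
            String value = "result-for-" + key;
            cache.put(key, value);
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch start = new CountDownLatch(1);
        Runnable task = () -> {
            try {
                start.await();
                load("nodeValue");
            } catch (InterruptedException ignored) {
            }
        };
        Thread t1 = new Thread(task, "T1");
        Thread t2 = new Thread(task, "T2");
        t1.start();
        t2.start();
        start.countDown(); // both threads pass the first check before either has cached anything
        t1.join();
        t2.join();
    }
}

Repeating the same cache check right after lock.lock() makes the expensive line print only once, which is exactly the fix below.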
Solution: double-checked caching
Rule cacheRule = (Rule) CacheMap.get(nodeValue);
if (cacheRule != null) {
    rule.setKey(cacheRule.getKey());
    rule.setValue(cacheRule.getValue());
    return;
}
Lock lock = keyLockMap.computeIfAbsent(nodeValue, k -> new ReentrantLock());
lock.lock();
try {
    // [key point] check the cache a second time, now that we hold the lock
    cacheRule = (Rule) CacheMap.get(nodeValue);
    if (cacheRule != null) {
        rule.setKey(cacheRule.getKey());
        rule.setValue(cacheRule.getValue());
        return;
    }
    // actually run the Milvus query...
    // ...
    // finally, update the cache
    CacheMap.put(nodeValue, rule);
} finally {
    lock.unlock();
    keyLockMap.remove(nodeValue); // optionally release the lock entry
}
This can still leave the lock ineffective, because of this line:

keyLockMap.remove(nodeValue); // optionally release the lock entry

Timeline: T1 enters, finishes the work, puts the result into the cache, and then removes the lock entry.
sleep(xxx)
T2 enters, finds no lock for the key, creates and acquires a new one, and only then reads the cache.
The problem: if finally removes the lock entry eagerly, the next thread has to build a fresh lock object and pay the locking and cache lookup again, even though the result is already cached.
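There is also a sharper version of the same issue: if the entry is removed while another thread is still waiting on (or holding) the old lock object, a later thread will mint a brand-new lock for the same key, and the two locks no longer exclude each other. The sketch below (hypothetical class name and timings, with the lock handling isolated from the cache) usually shows T2 and T3 inside the critical section at the same time:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class RemoveInFinallyRace {

    static final Map<String, Lock> keyLockMap = new ConcurrentHashMap<>();

    static void criticalSection(String key) {
        Lock lock = keyLockMap.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " entered critical section");
            Thread.sleep(300); // simulated work
            System.out.println(Thread.currentThread().getName() + " leaving critical section");
        } catch (InterruptedException ignored) {
        } finally {
            lock.unlock();
            keyLockMap.remove(key); // eager removal: the next arrival gets a brand-new lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> criticalSection("k"), "T1");
        Thread t2 = new Thread(() -> criticalSection("k"), "T2");
        Thread t3 = new Thread(() -> criticalSection("k"), "T3");
        t1.start();        // T1 enters and works for ~300 ms
        Thread.sleep(50);
        t2.start();        // T2 blocks on the same lock object as T1
        Thread.sleep(350);  // by now T1 has removed the entry and T2 holds the old lock
        t3.start();        // T3 creates a new lock for "k" and enters while T2 is still inside
        t1.join();
        t2.join();
        t3.join();
    }
}

With the double-checked cache in place the practical damage is usually just the extra lock and cache lookup, but this shows why removing the lock entry in finally is fragile and why it is worth handing the lock lifetime over to a TTL-based manager instead.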
To avoid this, introduce a LockManager class that owns the lock objects and, instead of removing them in finally, retains each lock for a TTL and removes expired keys with a periodic cleanup task.
import lombok.extern.slf4j.Slf4j;

import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.locks.ReentrantLock;

@Slf4j
public class LockManager {

    private final Map<String, ReentrantLock> lockMap = new ConcurrentHashMap<>();
    private final Map<String, Long> lastAccessTime = new ConcurrentHashMap<>();

    private static final long TTL = TimeUnit.MINUTES.toMillis(5); // keep idle locks for 5 minutes
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public LockManager() {
        startCleanupTask();
    }

    // Get (or create) the lock for a key and refresh its last-access time.
    public ReentrantLock getLock(String key) {
        lastAccessTime.put(key, System.currentTimeMillis());
        return lockMap.computeIfAbsent(key, k -> new ReentrantLock());
    }

    // Cleanup task: scan for keys that have not been accessed within the TTL and drop their locks.
    // Note: eviction is based purely on last access time; it assumes no thread still holds a lock
    // that has not been requested for 5 minutes.
    private void startCleanupTask() {
        scheduler.scheduleAtFixedRate(() -> {
            log.info("------------start check expired lock---------------");
            long now = System.currentTimeMillis();
            lastAccessTime.forEach((key, timestamp) -> {
                if (now - timestamp > TTL) {
                    lockMap.remove(key);
                    lastAccessTime.remove(key);
                    log.info("Removed expired lock for key: {}", key);
                }
            });
        }, 1, 1, TimeUnit.MINUTES); // run the cleanup once per minute
    }

    public void shutdown() {
        scheduler.shutdownNow();
    }
}
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    // register LockManager as a bean; destroyMethod shuts the scheduler down with the context
    @Bean(destroyMethod = "shutdown")
    public LockManager lockManager() {
        return new LockManager();
    }
}
Inject it and use it where the lock is needed:
Lock lock = lockManager.getLock(nodeValue);
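For completeness, here is a sketch of how the injected LockManager slots into the double-checked caching code above (Rule, rule, CacheMap and nodeValue are the names from the earlier snippets; the Milvus call is left as a placeholder):

Rule cacheRule = (Rule) CacheMap.get(nodeValue);
if (cacheRule != null) {
    rule.setKey(cacheRule.getKey());
    rule.setValue(cacheRule.getValue());
    return;
}
Lock lock = lockManager.getLock(nodeValue); // lock lifetime is now owned by LockManager
lock.lock();
try {
    // second check, now that we hold the lock
    cacheRule = (Rule) CacheMap.get(nodeValue);
    if (cacheRule != null) {
        rule.setKey(cacheRule.getKey());
        rule.setValue(cacheRule.getValue());
        return;
    }
    // run the embedding + Milvus query here, then cache the result
    CacheMap.put(nodeValue, rule);
} finally {
    lock.unlock();
    // no keyLockMap.remove(...) here: the cleanup task evicts locks idle longer than the TTL
}

The difference from the earlier version is that nothing is removed in finally: a thread arriving shortly after T1 reuses the same lock object and immediately hits the cache on the second check, and two threads can no longer end up holding different locks for the same key.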