Locker context deadline exceeded #653
`GetWriteTimeout=10s`
Hi @lenonqing, you can't return an existing client in the code at rueidis/rueidislock/lock.go, Lines 84 to 95 in 25821d0.
OK, thanks!
NewLocker uses a new Redis client; however, there are still some requests (very few) that cannot obtain locks, and when acquisition fails I cannot find the locker key in Redis.

```go
func newLocker(c *client, opts ...LockerOption) (Locker, error) {
	if c.version.LessThan(mustNewSemVersion(fallbackSETPXVersion)) {
		opts = append(opts, WithLockerOptionFallbackSETPX(true))
	}
	cc := newLockerOptions(opts...)
	l, err := rueidislock.NewLocker(rueidislock.LockerOption{
		KeyPrefix:      cc.GetKeyPrefix(),
		KeyValidity:    cc.GetKeyValidity(),
		TryNextAfter:   cc.GetTryNextAfter(),
		KeyMajority:    cc.GetKeyMajority(),
		NoLoopTracking: cc.GetNoLoopTracking(),
		FallbackSETPX:  cc.GetFallbackSETPX(),
		ClientOption:   confVisitor2ClientOption(c.v),
		ClientBuilder: func(option rueidis.ClientOption) (rueidis.Client, error) {
			return rueidis.NewClient(option)
		},
	})
	if err != nil {
		return nil, err
	}
	return &wrapLocker{Locker: l, v: c.v}, nil
}
```

```go
lockerCtx, lockerCancel, err := redis.Locker.WithContext(ctx, lockerName)
if err != nil {
	var exists bool
	var ttl time.Duration
	val := redis.Client.Exists(ctx, lockerName).Val()
	if exists = val > 0; exists {
		ttl = redis.Client.PTTL(ctx, lockerName).Val()
	}
	err = errors.WrapError(err, "acquire fid lock error, name: %v, uid: %d, locker_name: %s, key_exists: %v, key_ttl: %s", name, id, lockerName, exists, ttl)
	return lockerCtx, lockerCancel, err
}
```

log:
There is no key stored under that name. What does your full LockerOption look like?
```go
const (
	defaultKeyPrefix      = "redislock"
	defaultKeyValidity    = 5 * time.Second
	defaultExtendInterval = 1 * time.Second
	defaultTryNextAfter   = 20 * time.Millisecond
	defaultKeyMajority    = int32(2)
)

//go:generate optiongen --option_with_struct_name=true --new_func=newLockerOptions --empty_composite_nil=true --usage_tag_name=usage
func LockerOptionsOptionDeclareWithDefault() any {
	return map[string]any{
		// annotation@KeyPrefix(KeyPrefix is the prefix of redis key for locks. Default value is defaultKeyPrefix)
		"KeyPrefix": string(defaultKeyPrefix),
		// annotation@KeyValidity(KeyValidity is the validity duration of locks and will be extended periodically by the ExtendInterval. Default value is defaultKeyValidity)
		"KeyValidity": time.Duration(defaultKeyValidity),
		// annotation@TryNextAfter(TryNextAfter is the timeout duration before trying the next redis key for locks. Default value is defaultTryNextAfter)
		"TryNextAfter": time.Duration(defaultTryNextAfter),
		// annotation@KeyMajority(KeyMajority is at least how many redis keys in a total of KeyMajority*2-1 should be acquired to be a valid lock. Default value is defaultKeyMajority)
		"KeyMajority": int32(defaultKeyMajority),
		// annotation@NoLoopTracking(NoLoopTracking will use NOLOOP in the CLIENT TRACKING command to avoid unnecessary notifications and thus have better performance. This can only be enabled if all your redis nodes >= 7.0.5)
		"NoLoopTracking": false,
		// annotation@FallbackSETPX(Use SET PX instead of SET PXAT when acquiring locks to be compatible with Redis < 6.2)
		"FallbackSETPX": false,
	}
}
```

`NewLocker(redisson.WithLockerOptionKeyMajority(1))`
Should I check whether "rueidislock:0:lockerName" exists, since "KeyMajority" is 1?
You changed the KeyPrefix, so it should be `redislock:0:locker:room_close:{63347223}`.
Hi @lenonqing, did you find more details?
I will have the latest logs after the next service upgrade. If there is a result, I will inform you promptly.
```go
fs := make([]logbus.Field, 0, 4)
fs = append(fs, logbus.String("name", name), logbus.Uint64("id", id), logbus.String("locker_name", lockerName), logbus.String("source", source))
logbus.Info("want to acquire locker from redis", fs...)
lockerCtx, lockerCancel, err := redis.Locker.WithContext(ctx, lockerName)
if err != nil {
	var exists bool
	var ttl time.Duration
	fullLockerName := fmt.Sprintf("redislock:0:%s", lockerName)
	val := redis.Client.Exists(ctx, fullLockerName).Val()
	if exists = val > 0; exists {
		ttl = redis.Client.PTTL(ctx, fullLockerName).Val()
	}
	err = errors.WrapError(err, "acquire lock error, name: %v, uid: %d, source: %s, locker_name: %s, exists: %v, ttl: %s", name, id, source, lockerName, exists, ttl)
	return lockerCtx, lockerCancel, err
}
```
@rueian The locker key does not exist, but acquiring the locker can still fail.
Hi @lenonqing, is there any deadline associated with your ctx? How long is it? Is there any concurrent access to the same key?
It would be helpful if you had all the request logs from Redis.
@rueian Yes, the context has a deadline; it is canceled after 3 seconds with deadline exceeded.
There is no concurrent access. I have updated the log.
Are these all the logs related to the key?
Yes, there are only error logs. There are multiple servers, but no concurrent access to this key.
I think it is caused by some race between … Could you try …?
OK, but we need to wait for the next update of the server.
Hi @lenonqing, please use …
OK
Hi @lenonqing, did that happen on v1.0.50-alpha.4? The only information I found related to the panic indicates that there are too many goroutines waiting for the fd to become available. Do you know how many goroutines you had at that time? Or do you have a full goroutine dump from that time?
200+ goroutines
200+ goroutines shouldn't be a problem for an fd, according to https://stackoverflow.com/questions/44678216/panic-net-inconsistent-fdmutex. Did your application run on the x86 platform?
@rueian Yes, it runs on the x86 platform.
Thanks for the confirmation. However, I currently can't think of a situation that could cause the panic. Did it happen on v1.0.50-alpha.4? Would you be able to capture a goroutine dump the next time it happens?
The current version is v1.0.47; the online server has not been upgraded yet.
version: v1.0.47
test code:
rueidis/rueidislock/lock.go
output: