
limits > 2.0.3 breaks with Redis Sentinel

See original GitHub issue

Hi, I am the maintainer of the Falcon-Limiter package (see https://github.com/zoltan-fedor/falcon-limiter), and one of my users has raised an issue about Falcon-Limiter breaking when using limits version 2.3.0 with Redis Sentinel, while it works with limits version 2.0.3. See the original issue at https://github.com/zoltan-fedor/falcon-limiter/issues/1

I did manage to reproduce the user’s issue in Falcon-Limiter after deploying Redis Sentinel onto my Kubernetes cluster.

Then I tested the limits library directly (without Falcon-Limiter) and unfortunately was able to reproduce the same error there too, so it seems this is an issue with the limits library itself, not with how Falcon-Limiter calls it (correct me if I am wrong here).

Below is the test code I use:

from limits import storage, strategies, parse

redis_password = "xxxxxxx"

r_storage = storage.storage_from_string(
    f"redis+sentinel://:{redis_password}@127.0.0.1:26379/mymaster",
    sentinel_kwargs={"password": redis_password},
)
moving_window = strategies.MovingWindowRateLimiter(r_storage)
one_per_minute = parse("1/minute")

assert True == moving_window.hit(one_per_minute, "test_namespace", "foo")

When running this with limits version 2.3.0 I get the following error, while there is no error with version 2.0.3.

Traceback (most recent call last):
  File "test-limits.py", line 9, in <module>
    assert True == moving_window.hit(one_per_minute, "test_namespace", "foo")
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/strategies.py", line 84, in hit
    return self.storage().acquire_entry(  # type: ignore
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/storage/redis.py", line 178, in acquire_entry
    return super()._acquire_entry(key, limit, expiry, self.storage, amount)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/limits/storage/redis.py", line 79, in _acquire_entry
    acquired = self.lua_acquire_window([key], [timestamp, limit, expiry, amount])
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/commands/core.py", line 4440, in __call__
    return client.evalsha(self.sha, len(keys), *args)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/commands/core.py", line 3891, in evalsha
    return self.execute_command("EVALSHA", sha, numkeys, *keys_and_args)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1176, in execute_command
    return conn.retry.call_with_retry(
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/retry.py", line 44, in call_with_retry
    fail(error)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1180, in <lambda>
    lambda error: self._disconnect_raise(conn, error),
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1166, in _disconnect_raise
    raise error
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/retry.py", line 41, in call_with_retry
    return do()
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1177, in <lambda>
    lambda: self._send_command_parse_response(
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1153, in _send_command_parse_response
    return self.parse_response(conn, command_name, **options)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/client.py", line 1192, in parse_response
    response = connection.read_response()
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/sentinel.py", line 61, in read_response
    return super().read_response(disable_decoding=disable_decoding)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/connection.py", line 800, in read_response
    response = self._parser.read_response(disable_decoding=disable_decoding)
  File "/home/myuser/.local/share/virtualenvs/test-limits-kWUTuU1U/lib/python3.8/site-packages/redis/connection.py", line 336, in read_response
    raise error
redis.exceptions.AuthenticationError: Authentication required.

Does limits version 2.3.0 require the password for Sentinel to be passed differently?

In case you want to reproduce the issue on your side, you will unfortunately need to deploy a Redis Sentinel cluster. There are additional details in the original ticket, but here is a quick summary:

# deploy Redis Sentinel onto a Kubernetes cluster (or Minikube):
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install redis-sentinel bitnami/redis --set sentinel.enabled=true

# port-forward the services
$ kubectl port-forward --namespace default svc/redis-sentinel 26379:26379
$ kubectl port-forward --namespace default svc/redis-sentinel 6379:6379

# get the Redis password
$ export REDIS_PASSWORD=$(kubectl get secret --namespace default redis-sentinel -o jsonpath="{.data.redis-password}" | base64 --decode)

# you might also need to alias the redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local domain name to localhost in your hosts file

Based on this test (https://github.com/alisaifee/limits/blob/master/tests/storage/test_redis_sentinel.py#L33) this scenario should work, but it obviously doesn't; the mocking is likely unable to capture the issue.
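For what it's worth, the password in the `redis+sentinel://` URI above travels as the standard userinfo component of a URL. A stdlib-only sketch of how such a URI decomposes (the password and service name here are placeholders, not values from the cluster above):

```python
from urllib.parse import urlparse

# The sentinel password rides in the userinfo part of the URI,
# before the "@"; the path carries the sentinel service (master) name.
uri = "redis+sentinel://:s3cret@127.0.0.1:26379/mymaster"
parsed = urlparse(uri)

print(parsed.password)          # s3cret (intended for the sentinel)
print(parsed.hostname)          # 127.0.0.1 (sentinel host)
print(parsed.port)              # 26379 (sentinel port)
print(parsed.path.lstrip("/"))  # mymaster (service name)
```

The question the bug raises is which of the two password-protected layers (the sentinels themselves vs. the master/slave nodes behind them) this URI password should be applied to.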

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
alisaifee commented, Jan 22, 2022

For future readers: this issue exists in limits versions >2.0.3, <=2.3.0. Since the releases in that range happened within a reasonably short period of time, I'll only be making a fix in the 2.3 line (i.e. 2.3.1 contains the fix that restores backward compatibility).

1 reaction
alisaifee commented, Jan 22, 2022

Thanks for the report and my apologies for the backward incompatibility - this was entirely unintentional.

The behaviour in versions < 2.1 was incorrect and had resulted in people having to work around it by supplying the password for the master/slave nodes in the URI and passing the password for the sentinel (correctly) in the sentinel_kwargs. I didn't even realise that, and as part of the refactoring I ended up breaking this.
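In other words, the pre-2.1 workaround looked roughly like this (a sketch with placeholder hosts and secrets; note the two passwords ending up in reversed roles):

```python
# Pre-2.1 workaround: the URI password was (incorrectly) applied to the
# master/slave nodes, so the sentinels' own password had to be supplied
# via sentinel_kwargs instead.
uri = "redis+sentinel://:master-secret@sentinel-host:26379/mymaster"
storage_options = {"sentinel_kwargs": {"password": "sentinel-secret"}}
```

This is the pattern the reporter's test code follows, which is why it stopped working once the URI password started being routed to the sentinels.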

The “expected” way to pass the parameters when both the sentinel and the master/slave are password protected would be either:

uri = f"redis+sentinel://{sentinel_host}:{sentinel_port}"
storage_options = {"sentinel_kwargs": {"password": sentinel_password}, "password": master_password}

or

uri = f"redis+sentinel://:{sentinel_password}@{sentinel_host}:{sentinel_port}"
storage_options = {"password": master_password}
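Putting the two options side by side with concrete (placeholder) values, as they would be expanded into the storage_from_string call — a sketch only, not tested against a live sentinel:

```python
# Option 1: both passwords passed explicitly.
# sentinel_kwargs authenticates against the sentinel processes themselves;
# the top-level password authenticates against the master/slave nodes.
uri = "redis+sentinel://sentinel-host:26379/mymaster"
storage_options = {
    "sentinel_kwargs": {"password": "sentinel-secret"},
    "password": "master-secret",
}

# Option 2: sentinel password embedded in the URI instead.
uri_alt = "redis+sentinel://:sentinel-secret@sentinel-host:26379/mymaster"
storage_options_alt = {"password": "master-secret"}

# Either pair would then be passed along the lines of:
# r_storage = storage.storage_from_string(uri, **storage_options)
```

If the sentinels are not password protected, only the top-level password for the master/slave nodes is needed.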

At the moment I don't have a clear idea of how to add a patch that would be backward compatible while still encouraging the correct behaviour; I'm wondering if this can be communicated via documentation/release notes.

Open to suggestions though!
