Implementing Rate Limiting in Flask Applications
Learn how to protect your Flask API from abuse with efficient rate-limiting strategies
Introduction
Rate limiting is a crucial security measure in Flask applications to prevent abuse, DDoS attacks, and excessive API usage. By controlling the number of requests a client can make within a specific time frame, you can improve performance and protect backend resources.
In this guide, we’ll explore how to implement rate limiting in Flask using Flask-Limiter and Redis, ensuring scalability and security for your APIs.
Why Use Rate Limiting?
Rate limiting helps:
- Prevent API abuse from malicious bots or excessive usage
- Protect server resources from unnecessary load
- Ensure fair access for all users
- Enhance security against brute-force attacks
Installing Flask-Limiter
Flask-Limiter is a powerful library that provides flexible rate-limiting capabilities for Flask.
Install Flask-Limiter
pip install flask-limiter
Basic Rate Limiting in Flask
Setting Up Flask-Limiter
from flask import Flask, jsonify
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

# Initialize Limiter with a default rate limit
limiter = Limiter(
    get_remote_address,  # Identifies clients by IP
    app=app,
    default_limits=["5 per minute"]
)

@app.route('/public')
@limiter.limit("10 per minute")
def public_endpoint():
    return jsonify({"message": "This is a public endpoint"}), 200

@app.route('/private')
@limiter.limit("2 per minute")
def private_endpoint():
    return jsonify({"message": "This is a private endpoint"}), 200

if __name__ == '__main__':
    app.run(debug=True)
How It Works
- The default limit is 5 requests per minute for all endpoints.
- The /public endpoint allows 10 requests per minute.
- The /private endpoint allows 2 requests per minute.
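Any endpoint without its own @limiter.limit decorator simply inherits the default. For example, a hypothetical /status route (not part of the app above) would be capped at 5 requests per minute:
@app.route('/status')
def status_endpoint():
    # No per-route limit, so the default "5 per minute" applies
    return jsonify({"status": "ok"}), 200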
Customizing Rate Limiting Strategies
Limiting Based on API Keys
Instead of identifying users by IP address, you can limit based on API keys or user IDs.
from flask import request

def get_api_key():
    # Fall back to a shared "anonymous" bucket when no API key is supplied
    return request.headers.get("X-API-Key", "anonymous")

limiter = Limiter(
    key_func=get_api_key,
    app=app,
    default_limits=["5 per minute"]
)
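With this key function, clients presenting different X-API-Key headers are counted separately, while requests without a key all share the "anonymous" bucket. For example, assuming the app is running locally on port 5000 ("my-test-key" is just a placeholder):
curl -H "X-API-Key: my-test-key" http://127.0.0.1:5000/public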
Applying Rate Limits Dynamically
You can set dynamic limits based on user roles:
def dynamic_limit():
    return "100 per hour" if request.headers.get("X-User-Role") == "premium" else "10 per hour"

@app.route('/dynamic')
@limiter.limit(dynamic_limit)
def dynamic_endpoint():
    return jsonify({"message": "Rate limit applied dynamically"}), 200
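To see both tiers in action, assuming the app is running locally on port 5000 (in practice, X-User-Role should be set by a trusted gateway or proxy, since a header sent directly by the client can be spoofed):
curl -H "X-User-Role: premium" http://127.0.0.1:5000/dynamic   # 100 per hour
curl http://127.0.0.1:5000/dynamic                             # 10 per hour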
Storing Rate Limits in Redis
For scalability, store rate limit data in Redis instead of in-memory storage.
Install the Redis Python Client (redis-py)
pip install redis
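Note that pip install redis only installs the Python client library; a Redis server also needs to be running. If you don't have one locally, one quick option (assuming Docker is installed) is:
docker run -d -p 6379:6379 redis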
Configure Flask-Limiter with Redis
# Flask-Limiter connects to Redis through the storage URI,
# so no manual Redis connection object is needed
limiter = Limiter(
    key_func=get_remote_address,
    app=app,
    default_limits=["5 per minute"],
    storage_uri="redis://localhost:6379"
)
Now, rate limit data is persisted across multiple instances, making it production-ready.
Handling Rate Limit Errors
By default, Flask-Limiter returns a 429 Too Many Requests response when the limit is exceeded.
Customizing Error Messages
@app.errorhandler(429)
def ratelimit_handler(e):
    return jsonify(error="Too many requests. Please slow down!"), 429
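Flask-Limiter can also attach informational headers such as X-RateLimit-Limit, X-RateLimit-Remaining, and Retry-After to responses, which lets well-behaved clients back off before they hit the limit. A minimal sketch, assuming you enable this when constructing the limiter:
limiter = Limiter(
    get_remote_address,
    app=app,
    default_limits=["5 per minute"],
    headers_enabled=True  # attach X-RateLimit-* and Retry-After headers to responses
)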
Rate Limiting in a Flask + Gunicorn Deployment
Gunicorn manages multiple workers, but Flask-Limiter’s default in-memory storage does not share limits across workers.
To fix this, use Redis as the storage backend, as shown earlier.
Run Gunicorn with multiple workers:
gunicorn -w 4 -b 0.0.0.0:5000 app:app
Testing Rate Limits
Use cURL to simulate multiple requests:
for i in {1..15}; do curl -i http://127.0.0.1:5000/public; sleep 1; done
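Alternatively, a short Python sketch (assuming the requests library is installed and the app is running locally on port 5000) makes it easy to see exactly when the 429 responses start:
import requests

for i in range(15):
    # Expect 200 until the limit is hit, then 429 for the remaining requests
    response = requests.get("http://127.0.0.1:5000/public")
    print(i + 1, response.status_code)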
If you exceed the limit, you’ll receive:
{
  "error": "Too many requests. Please slow down!"
}
Conclusion
Rate limiting is essential for API security and performance. By implementing Flask-Limiter with Redis, you can:
✅ Protect APIs from abuse and excessive requests
✅ Ensure fair access for all users
✅ Improve scalability with Redis-backed storage
✅ Customize limits based on IP, API keys, or user roles
With these techniques, your Flask application will be secure, efficient, and production-ready! 🚀