Rate limiting HTTP requests in Go using Redis

2025-06-11


Imagine you've built a great API that offers functionality many customers are interested in, but because users call it so heavily you can't effectively handle the load from all of them. Scaling the service and improving reliability is an option, but that alone isn't enough: the load might be unevenly distributed, usage patterns might not match what your application expects, or you might have constraints you simply can't fix right now. This is where rate limiting comes in.

The idea behind rate limiting is that you cap the number of requests each client can make, and once that cap is reached within a given period you start dropping requests until the period ends and the counter restarts. For example, a client might be allowed at most 60 requests per minute; once it goes over 60, you start rejecting its requests, telling it that it has exceeded its quota and has to wait a while before you'll process its requests again.

The goal here is to improve the reliability of your service. Rate limiting is a protection you put in place yourself to make sure a bad actor or a misconfigured client can't take the whole service down, or degrade it, by pushing it beyond its expected usage limits. Ideally you never penalize well-behaved clients, since you've already given them enough requests to do their daily work, but you stop bad clients from wreaking havoc on your service.

You can find the full source code for the project we'll discuss below on GitHub.

Building the rate limiter

We'll build multiple implementations of the rate limiter so we can discuss the pros and cons of each, so let's start with what's common across the implementations:

package redis_rate_limiter

import (
    "context"
    "time"
)

// Request defines a request that needs to be checked if it will be rate-limited or not.
// The `Key` is the identifier you're using for the client making calls. This could be a user/account ID if the user is
// signed into your application, the IP of the client making requests (this might not be reliable if you're not behind a
// proxy like Cloudflare, as clients can try to spoof IPs). The `Key` should be the same for multiple calls of the
// same client so we can correctly identify that this is the same app calling anywhere.
// `Limit` is the number of requests the client is allowed to make over the `Duration` period. If you set this to
// 100 and `Duration` to `1m` you'd have at most 100 requests over a minute.
type Request struct {
    Key      string
    Limit    uint64
    Duration time.Duration
}

// State is the result of evaluating the rate limit, either `Deny` or `Allow` a request.
type State int64

const (
    Deny  State = 0
    Allow State = 1
)

// Result represents the response to a check if a client should be rate-limited or not. The `State` will be either
// `Allow` or `Deny`, `TotalRequests` holds the number of requests this specific caller has already made over
// the current period and `ExpiresAt` defines when the rate limit will expire/roll over for clients that
// have gone over the limit.
type Result struct {
    State         State
    TotalRequests uint64
    ExpiresAt     time.Time
}

// Strategy is the interface the rate limit implementations must implement to be used, it takes a `Request` and
// returns a `Result` and an `error`, any errors the rate-limiter finds should be bubbled up so the code can make a
// decision about what it wants to do with the request.
type Strategy interface {
    Run(ctx context.Context, r *Request) (*Result, error)
}
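Before we move on to the real implementations, here's a minimal sketch of what satisfying this interface looks like. This is not part of the project and the name allowAllStrategy is hypothetical; it's a no-op strategy that can stand in for a real counter in tests or while wiring things up:

package redis_rate_limiter

import (
    "context"
    "time"
)

// allowAllStrategy is a hypothetical Strategy that never denies a request.
// It is only useful as a stand-in in tests or while wiring things up.
type allowAllStrategy struct {
    now func() time.Time
}

func (a *allowAllStrategy) Run(ctx context.Context, r *Request) (*Result, error) {
    return &Result{
        State:         Allow,
        TotalRequests: 0, // we don't track anything, so there is no real count to report
        ExpiresAt:     a.now().Add(r.Duration),
    }, nil
}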

A counter-based implementation

So now we've covered the basics, what goes in, what comes out, and the small interface every implementation has to satisfy. Let's look at the first one, which just uses a counter in Redis:

package redis_rate_limiter

import (
    "context"
    "github.com/go-redis/redis/v8"
    "github.com/pkg/errors"
    "time"
)

var (
    _ Strategy = &counterStrategy{}
)

const (
    keyWithoutExpire = -1
)

func NewCounterStrategy(client *redis.Client, now func() time.Time) *counterStrategy {
    return &counterStrategy{
        client: client,
        now:    now,
    }
}

type counterStrategy struct {
    client *redis.Client
    now    func() time.Time
}

// Run this implementation uses a simple counter with an expiration set to the rate limit duration.
// This implementation is functional but not very effective if you have to deal with bursty traffic as
// it will still allow a client to burn through its full limit quickly once the key expires.
func (c *counterStrategy) Run(ctx context.Context, r *Request) (*Result, error) {
    // a pipeline in Redis is a way to send multiple commands that will all be run together.
    // this is not a transaction and there are many ways in which these commands could fail
    // (only the first, only the second) so we have to make sure all errors are handled, this
    // is a network performance optimization.

    p := c.client.Pipeline()
    incrResult := p.Incr(ctx, r.Key)
    ttlResult := p.TTL(ctx, r.Key)

    if _, err := p.Exec(ctx); err != nil {
        return nil, errors.Wrapf(err, "failed to execute increment to key %v", r.Key)
    }

    totalRequests, err := incrResult.Result()
    if err != nil {
        return nil, errors.Wrapf(err, "failed to increment key %v", r.Key)
    }

    var ttlDuration time.Duration

    // we want to make sure there is always an expiration set on the key, so on every
    // increment we check again to make sure it has a TTL and if it doesn't we add one.
    // a duration of -1 means that the key has no expiration so we need to make sure there
    // is one set, this should, most of the time, happen when we increment for the
    // first time but there could be cases where we fail at the previous commands so we should
    // check for the TTL on every request.
    if d, err := ttlResult.Result(); err != nil || d == keyWithoutExpire {
        ttlDuration = r.Duration
        if err := c.client.Expire(ctx, r.Key, r.Duration).Err(); err != nil {
            return nil, errors.Wrapf(err, "failed to set an expiration to key %v", r.Key)
        }
    } else {
        ttlDuration = d
    }

    expiresAt := c.now().Add(ttlDuration)

    requests := uint64(totalRequests)

    if requests > r.Limit {
        return &Result{
            State:         Deny,
            TotalRequests: requests,
            ExpiresAt:     expiresAt,
        }, nil
    }

    return &Result{
        State:         Allow,
        TotalRequests: requests,
        ExpiresAt:     expiresAt,
    }, nil
}

This implementation is what's usually called a fixed window strategy: once the expiration is set, a client that hits the limit is blocked from making further requests until the expiration arrives. If a client is allowed 50 requests per minute and makes all 50 in the first 5 seconds of that minute, it will have to wait 55 seconds before it can make another request. That is also the main drawback of this implementation: it still lets a client burn through its whole limit quickly (a burst of traffic), which can still overload your service if the service expects that traffic to be spread across the whole period.
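To see this burst problem in practice, here's a hedged sketch (it assumes a Redis server on localhost:6379 and the import path for the package above is a placeholder) that fires 51 back-to-back requests against a 50-per-minute limit, where all 50 initial requests go through instantly and only the 51st is denied:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/go-redis/redis/v8"

    limiter "github.com/you/redis_rate_limiter" // hypothetical import path
)

func main() {
    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    strategy := limiter.NewCounterStrategy(client, time.Now)

    for i := 1; i <= 51; i++ {
        result, err := strategy.Run(context.Background(), &limiter.Request{
            Key:      "client-1",
            Limit:    50,
            Duration: time.Minute,
        })
        if err != nil {
            panic(err)
        }

        if result.State == limiter.Deny {
            // only request 51 lands here, and the client now waits for the key to expire
            fmt.Printf("request %d denied, counter resets at %v\n", i, result.ExpiresAt)
        } else {
            fmt.Printf("request %d allowed (%d used so far)\n", i, result.TotalRequests)
        }
    }
}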

A sorted set implementation to better handle bursts of traffic

A rolling window strategy protects against bursts better because it never fully resets the counter; instead it keeps a history of requests over the window. If you have a 5-minute window, it always counts the traffic produced over the last 5 minutes to decide whether a client should be blocked, instead of just waiting for a key to expire. This implementation is more CPU- and memory-intensive, since you have to keep more information in memory (every request and its timestamp), but it offers much better protection against quick bursts of traffic.

Here's what it looks like:

package redis_rate_limiter

import (
    "context"
    "github.com/go-redis/redis/v8"
    "github.com/google/uuid"
    "github.com/pkg/errors"
    "strconv"
    "time"
)

var (
    _ Strategy = &sortedSetCounter{}
)

func NewSortedSetCounterStrategy(client *redis.Client, now func() time.Time) Strategy {
    return &sortedSetCounter{
        client: client,
        now:    now,
    }
}

type sortedSetCounter struct {
    client *redis.Client
    now    func() time.Time
}

// Run this implementation uses a sorted set that holds a UUID for every request with a score that is the
// time the request has happened. This allows us to delete events from *before* the current window if the window
// is 5 minutes, we want to remove all events from before 5 minutes ago, this way we make sure we roll old
// requests off the counters creating a rolling window for the rate limiter. So, if your window is 100 requests
// in 5 minutes and you spread the load evenly across the minutes, once you hit 6 minutes the requests you made
// on the first minute have now expired but the other 4 minutes of requests are still valid.
// A rolling window counter is usually never 0 if traffic is consistent so it is very effective at preventing
// bursts of traffic as the counter won't ever expire.
func (s *sortedSetCounter) Run(ctx context.Context, r *Request) (*Result, error) {
    now := s.now()
    // every request needs a UUID
    item := uuid.New()

    minimum := now.Add(-r.Duration)

    p := s.client.Pipeline()

    // we then remove all requests that have already expired on this set
    removeByScore := p.ZRemRangeByScore(ctx, r.Key, "0", strconv.FormatInt(minimum.UnixMilli(), 10))

    // we add the current request
    add := p.ZAdd(ctx, r.Key, &redis.Z{
        Score:  float64(now.UnixMilli()),
        Member: item.String(),
    })

    // count how many non-expired requests we have on the sorted set
    count := p.ZCount(ctx, r.Key, "-inf", "+inf")

    if _, err := p.Exec(ctx); err != nil {
        return nil, errors.Wrapf(err, "failed to execute sorted set pipeline for key: %v", r.Key)
    }

    if err := removeByScore.Err(); err != nil {
        return nil, errors.Wrapf(err, "failed to remove items from key %v", r.Key)
    }

    if err := add.Err(); err != nil {
        return nil, errors.Wrapf(err, "failed to add item to key %v", r.Key)
    }

    totalRequests, err := count.Result()
    if err != nil {
        return nil, errors.Wrapf(err, "failed to count items for key %v", r.Key)
    }

    expiresAt := now.Add(r.Duration)
    requests := uint64(totalRequests)

    if requests > r.Limit {
        return &Result{
            State:         Deny,
            TotalRequests: requests,
            ExpiresAt:     expiresAt,
        }, nil
    }

    return &Result{
        State:         Allow,
        TotalRequests: requests,
        ExpiresAt:     expiresAt,
    }, nil
}

The goal here is to use a sorted set where the sort key is the timestamp of the request, so we can quickly delete all requests in a given time range and clear out the ones that have already expired. Since we use UUIDs as the values, the odds of requests colliding with each other in the sorted set are very low, and even with millions of requests an occasional collision wouldn't cause much trouble.
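For reference, these are the raw Redis commands the pipeline above ends up issuing, with placeholder values (timestamps are in milliseconds):

ZREMRANGEBYSCORE client-key 0 <now minus duration, in ms>
ZADD client-key <now in ms> <request uuid>
ZCOUNT client-key -inf +inf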

Write-heavy solutions

Both implementations so far are write-first: they write to Redis and only then check whether the request should be accepted. While that makes for simpler code and less network traffic between client and server, it makes them much more expensive for the server to run, since it is always writing and changing data in memory, and in the sorted set case that data is even unbounded. There is no guarantee these sorted sets are ever cleaned up and no limit on how many requests can be stored, which is a general red flag you shouldn't let happen in a system you're working on.

We have to make sure resource usage here is bounded, that is, it has a well-known limit. You should always assume anything you build will somehow be abused, deliberately or not, and make sure your code enforces limits on these resources somehow. On the Redis side there are several ways to make sure you don't run out of memory, like setting the maxmemory value to control how much memory it can use and setting maxmemory-policy to a value other than no-eviction. For a rate limiter like the one we're building here, allkeys-lru is a pretty good option.
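As a sketch, the matching redis.conf directives could look like the following; the 512mb value is an arbitrary example, size it for your own deployment:

maxmemory 512mb
maxmemory-policy allkeys-lru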

A bounded counter implementation

Here's what the updated counter implementation (read first, write later) looks like:

package redis_rate_limiter

import (
    "context"
    "github.com/go-redis/redis/v8"
    "github.com/pkg/errors"
    "time"
)

var (
    _ Strategy = &counterStrategy{}
)

const (
    keyThatDoesNotExist = -2
    keyWithoutExpire    = -1
)

func NewCounterStrategy(client *redis.Client, now func() time.Time) *counterStrategy {
    return &counterStrategy{
        client: client,
        now:    now,
    }
}

type counterStrategy struct {
    client *redis.Client
    now    func() time.Time
}

// Run this implementation uses a simple counter with an expiration set to the rate limit duration.
// This implementation is functional but not very effective if you have to deal with bursty traffic as
// it will still allow a client to burn through its full limit quickly once the key expires.
func (c *counterStrategy) Run(ctx context.Context, r *Request) (*Result, error) {

    // a pipeline in Redis is a way to send multiple commands that will all be run together.
    // this is not a transaction and there are many ways in which these commands could fail
    // (only the first, only the second) so we have to make sure all errors are handled, this
    // is a network performance optimization.

    // here we try to get the current value and also try to set an expiration on it
    getPipeline := c.client.Pipeline()
    getResult := getPipeline.Get(ctx, r.Key)
    ttlResult := getPipeline.TTL(ctx, r.Key)

    if _, err := getPipeline.Exec(ctx); err != nil && !errors.Is(err, redis.Nil) {
        return nil, errors.Wrapf(err, "failed to execute pipeline with get and ttl to key %v", r.Key)
    }

    var ttlDuration time.Duration

    // we want to make sure there is always an expiration set on the key, so on every
    // increment we check again to make sure it has a TTL and if it doesn't we add one.
    // a duration of -1 means that the key has no expiration so we need to make sure there
    // is one set, this should, most of the time, happen when we increment for the
    // first time but there could be cases where we fail at the previous commands so we should
    // check for the TTL on every request.
    // a duration of -2 means that the key does not exist, given we're already here we should set an expiration
    // to it anyway as it means this is a new key that will be incremented below.
    if d, err := ttlResult.Result(); err != nil || d == keyWithoutExpire || d == keyThatDoesNotExist {
        ttlDuration = r.Duration
        if err := c.client.Expire(ctx, r.Key, r.Duration).Err(); err != nil {
            return nil, errors.Wrapf(err, "failed to set an expiration to key %v", r.Key)
        }
    } else {
        ttlDuration = d
    }

    expiresAt := c.now().Add(ttlDuration)

    // if the key does not exist yet the GET returns redis.Nil and we just move on to the
    // increment below. otherwise, if the client is already at the limit, we deny the
    // request right away without writing anything to Redis.
    if total, err := getResult.Uint64(); err == nil && total >= r.Limit {
        return &Result{
            State:         Deny,
            TotalRequests: total,
            ExpiresAt:     expiresAt,
        }, nil
    }

    incrResult := c.client.Incr(ctx, r.Key)

    totalRequests, err := incrResult.Uint64()
    if err != nil {
        return nil, errors.Wrapf(err, "failed to increment key %v", r.Key)
    }

    if totalRequests > r.Limit {
        return &Result{
            State:         Deny,
            TotalRequests: totalRequests,
            ExpiresAt:     expiresAt,
        }, nil
    }

    return &Result{
        State:         Allow,
        TotalRequests: totalRequests,
        ExpiresAt:     expiresAt,
    }, nil
}

Now we read before incrementing, making sure we only increment if the request is actually below the limit and will be allowed. Otherwise we deny the request before writing anything at all.

Now let's look at the updated sorted set implementation:

package redis_rate_limiter

import (
    "context"
    "github.com/go-redis/redis/v8"
    "github.com/google/uuid"
    "github.com/pkg/errors"
    "strconv"
    "time"
)

var (
    _ Strategy = &sortedSetCounter{}
)

const (
    sortedSetMax = "+inf"
    sortedSetMin = "-inf"
)

func NewSortedSetCounterStrategy(client *redis.Client, now func() time.Time) Strategy {
    return &sortedSetCounter{
        client: client,
        now:    now,
    }
}

type sortedSetCounter struct {
    client *redis.Client
    now    func() time.Time
}

// Run this implementation uses a sorted set that holds a UUID for every request with a score that is the
// time the request has happened. This allows us to delete events from *before* the current window, if the window
// is 5 minutes, we want to remove all events from before 5 minutes ago, this way we make sure we roll old
// requests off the counters creating a rolling window for the rate limiter. So, if your window is 100 requests
// in 5 minutes and you spread the load evenly across the minutes, once you hit 6 minutes the requests you made
// on the first minute have now expired but the other 4 minutes of requests are still valid.
// A rolling window counter is usually never 0 if traffic is consistent so it is very effective at preventing
// bursts of traffic as the counter won't ever expire.
func (s *sortedSetCounter) Run(ctx context.Context, r *Request) (*Result, error) {
    now := s.now()
    expiresAt := now.Add(r.Duration)
    minimum := now.Add(-r.Duration)

    // first count how many requests over the period we're tracking on this rolling window to check whether
    // we're already over the limit or not. this prevents new requests from being added if a client is already
    // rate limited, not allowing it to add an infinite amount of requests to the system, overloading Redis.
    // if the client continues to send requests it also means that the memory for this specific key will not
    // be reclaimed (as we're not writing data here), so make sure there is an eviction policy that will
    // clear up the memory if Redis starts to get close to its memory limit.
    result, err := s.client.ZCount(ctx, r.Key, strconv.FormatInt(minimum.UnixMilli(), 10), sortedSetMax).Uint64()
    if err == nil && result >= r.Limit {
        return &Result{
            State:         Deny,
            TotalRequests: result,
            ExpiresAt:     expiresAt,
        }, nil
    }

    // every request needs a UUID
    item := uuid.New()

    p := s.client.Pipeline()

    // we then remove all requests that have already expired on this set
    removeByScore := p.ZRemRangeByScore(ctx, r.Key, "0", strconv.FormatInt(minimum.UnixMilli(), 10))

    // we add the current request
    add := p.ZAdd(ctx, r.Key, &redis.Z{
        Score:  float64(now.UnixMilli()),
        Member: item.String(),
    })

    // count how many non-expired requests we have on the sorted set
    count := p.ZCount(ctx, r.Key, sortedSetMin, sortedSetMax)

    if _, err := p.Exec(ctx); err != nil {
        return nil, errors.Wrapf(err, "failed to execute sorted set pipeline for key: %v", r.Key)
    }

    if err := removeByScore.Err(); err != nil {
        return nil, errors.Wrapf(err, "failed to remove items from key %v", r.Key)
    }

    if err := add.Err(); err != nil {
        return nil, errors.Wrapf(err, "failed to add item to key %v", r.Key)
    }

    totalRequests, err := count.Result()
    if err != nil {
        return nil, errors.Wrapf(err, "failed to count items for key %v", r.Key)
    }

    requests := uint64(totalRequests)

    if requests > r.Limit {
        return &Result{
            State:         Deny,
            TotalRequests: requests,
            ExpiresAt:     expiresAt,
        }, nil
    }

    return &Result{
        State:         Allow,
        TotalRequests: requests,
        ExpiresAt:     expiresAt,
    }, nil
}

Now, before we add a request to the sorted set in Redis, we check whether there have already been too many requests over the period, and if so we deny the request before writing anything new to Redis.

Integrating it as an HTTP middleware

Now that we have two separate counter implementations, how do we use them?

One way is to build a middleware handler that wraps an existing HTTP handler and adds rate limiting to it, so we can compose handlers with rate limiting when we need it, or without it if we prefer, like this:

package redis_rate_limiter

import (
    "fmt"
    "net/http"
    "strconv"
    "strings"
    "time"
)

var (
    _            http.Handler = &httpRateLimiterHandler{}
    _            Extractor    = &httpHeaderExtractor{}
    stateStrings              = map[State]string{
        Allow: "Allow",
        Deny:  "Deny",
    }
)

const (
    rateLimitingTotalRequests = "Rate-Limiting-Total-Requests"
    rateLimitingState         = "Rate-Limiting-State"
    rateLimitingExpiresAt     = "Rate-Limiting-Expires-At"
)

// Extractor represents the way we will extract a key from an HTTP request, this could be
// a value from a header, request path, method used, user authentication information, any information that
// is available at the HTTP request that wouldn't cause side effects if it was collected (this object shouldn't
// read the body of the request).
type Extractor interface {
    Extract(r *http.Request) (string, error)
}

type httpHeaderExtractor struct {
    headers []string
}

// Extract extracts a collection of http headers and joins them to build the key that will be used for
// rate limiting. You should use headers that are guaranteed to be unique for a client.
func (h *httpHeaderExtractor) Extract(r *http.Request) (string, error) {
    values := make([]string, 0, len(h.headers))

    for _, key := range h.headers {
        // if we can't find a value for the headers, give up and return an error.
        if value := strings.TrimSpace(r.Header.Get(key)); value == "" {
            return "", fmt.Errorf("the header %v must have a value set", key)
        } else {
            values = append(values, value)
        }
    }

    return strings.Join(values, "-"), nil
}

// NewHTTPHeadersExtractor creates a new HTTP header extractor
func NewHTTPHeadersExtractor(headers ...string) Extractor {
    return &httpHeaderExtractor{headers: headers}
}

// RateLimiterConfig holds the basic config we need to create a middleware http.Handler object that
// performs rate limiting before offloading the request to an actual handler.
type RateLimiterConfig struct {
    Extractor   Extractor
    Strategy    Strategy
    Expiration  time.Duration
    MaxRequests uint64
}

// NewHTTPRateLimiterHandler wraps an existing http.Handler object performing rate limiting before
// sending the request to the wrapped handler. If any errors happen while trying to rate limit a request
// or if the request is denied, the rate limiting handler will send a response to the client and will not
// call the wrapped handler.
func NewHTTPRateLimiterHandler(originalHandler http.Handler, config *RateLimiterConfig) http.Handler {
    return &httpRateLimiterHandler{
        handler: originalHandler,
        config:  config,
    }
}

type httpRateLimiterHandler struct {
    handler http.Handler
    config  *RateLimiterConfig
}

func (h *httpRateLimiterHandler) writeResponse(writer http.ResponseWriter, status int, msg string, args ...interface{}) {
    writer.Header().Set("Content-Type", "text/plain")
    writer.WriteHeader(status)
    if _, err := writer.Write([]byte(fmt.Sprintf(msg, args...))); err != nil {
        fmt.Printf("failed to write body to HTTP response: %v", err)
    }
}

// ServeHTTP performs rate limiting with the configuration it was provided and if there were no errors
// and the request was allowed it is sent to the wrapped handler. It also adds rate limiting headers that will be
// sent to the client to make it aware of what state it is in terms of rate limiting.
func (h *httpRateLimiterHandler) ServeHTTP(writer http.ResponseWriter, request *http.Request) {
    key, err := h.config.Extractor.Extract(request)
    if err != nil {
        h.writeResponse(writer, http.StatusBadRequest, "failed to collect rate limiting key from request: %v", err)
        return
    }

    result, err := h.config.Strategy.Run(request.Context(), &Request{
        Key:      key,
        Limit:    h.config.MaxRequests,
        Duration: h.config.Expiration,
    })

    if err != nil {
        h.writeResponse(writer, http.StatusInternalServerError, "failed to run rate limiting for request: %v", err)
        return
    }

    // set the rate limiting headers both on allow or deny results so the client knows what is going on
    writer.Header().Set(rateLimitingTotalRequests, strconv.FormatUint(result.TotalRequests, 10))
    writer.Header().Set(rateLimitingState, stateStrings[result.State])
    writer.Header().Set(rateLimitingExpiresAt, result.ExpiresAt.Format(time.RFC3339))

    // when the state is Deny, just return a 429 response to the client and stop the request handling flow
    if result.State == Deny {
        h.writeResponse(writer, http.StatusTooManyRequests, "you have sent too many requests to this service, slow down please")
        return
    }

    // if the request was not denied we assume it was allowed and call the wrapped handler.
    // by leaving this to the end we make sure the wrapped handler is only called once and doesn't have to worry
    // about any rate limiting at all (it doesn't even have to know there was rate limiting happening for this request)
    // as we have already set the headers, so when the handler flushes the response the headers above will be sent.
    h.handler.ServeHTTP(writer, request)
}

So what do we have here? We start with the Extractor interface, which returns the string that is used as the key for the rate limit check. You can have multiple implementations of this interface, looking at HTTP headers or any other field available on the HTTP request that identifies the client. The best option is a user/account ID that can be pulled from a cookie or header, since IPs change with some frequency, so implement whatever makes sense for your application.
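For example, here's a hedged sketch of an alternative Extractor that keys clients by the IP taken from RemoteAddr. The type name ipExtractor is hypothetical and, as mentioned earlier, this is only reliable behind a trusted proxy or CDN:

package redis_rate_limiter

import (
    "net"
    "net/http"
)

// ipExtractor is a hypothetical Extractor that keys clients by their remote IP.
// only use something like this behind a trusted proxy/CDN, since the remote
// address can otherwise be rotated or spoofed by clients.
type ipExtractor struct{}

func (e *ipExtractor) Extract(r *http.Request) (string, error) {
    host, _, err := net.SplitHostPort(r.RemoteAddr)
    if err != nil {
        // RemoteAddr may come without a port in some setups, use it as-is then
        return r.RemoteAddr, nil
    }
    return host, nil
}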

Then we have the RateLimiterConfig struct, which wraps every field we need to rate limit requests: an Extractor, a Strategy (one of our counters), the number of requests a client can make, and the duration. With all that wrapped around an http.Handler, we have a fully functional HTTP rate limiting middleware. You could use this same pattern to build the same kind of middleware for any other request/response protocol; you'd only have to change how this handler is built.
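Wiring it all together could look like the sketch below. The import path, header name, and limits here are assumptions for illustration, not part of the project:

package main

import (
    "net/http"
    "time"

    "github.com/go-redis/redis/v8"

    limiter "github.com/you/redis_rate_limiter" // hypothetical import path
)

func main() {
    client := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        _, _ = w.Write([]byte("hello!"))
    })

    // 60 requests per minute per client, identified by a hypothetical header
    rateLimited := limiter.NewHTTPRateLimiterHandler(hello, &limiter.RateLimiterConfig{
        Extractor:   limiter.NewHTTPHeadersExtractor("X-Client-Id"),
        Strategy:    limiter.NewSortedSetCounterStrategy(client, time.Now),
        Expiration:  time.Minute,
        MaxRequests: 60,
    })

    _ = http.ListenAndServe(":8080", rateLimited)
}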

What should I be worried about?

First, as we covered before, bursts of traffic. A client burning through its full limit too fast can still cause trouble for your systems. The rolling window limiter diminishes this a bit, but it shouldn't be your only protection against bursts. Another possible solution is to use shorter durations: instead of letting a client make 10,000 requests over an hour, let it make 160 requests every minute, so the worst it can do in a short window is 160 requests instead of 10,000.

By the same token, don't set limits over very long periods, like 100 requests per day, as that makes using the API deeply frustrating: you make your 100 requests and now have to wait a full day before you can hit the limit again. Use shorter periods so people can spread their load more evenly.

A dynamic denylist that completely blocks traffic from specific clients is also a must-have in a system like this. You can set it up at a proxy in front of the app doing the rate limiting (like an Nginx/Apache server) or at the CDN you're using (like Cloudflare), so you can quickly blackhole all traffic coming from known abusers.

The implementation we've built here is a fail-close solution, meaning any error (talking to Redis, running the extractor) causes the request to be denied. That's mostly because paranoid implementations are usually safer, but it might not be the best fit for your service. You might prefer a best-effort, fail-open solution that lets requests through once it detects a failure somewhere in the rate limiting path.

While that is friendlier to users, it also increases the risk of abuse, like the rate limiter getting overloaded and then overloading the systems behind it, which now sit unprotected while the limiter is down. So before going all-in on a fail-open solution, make sure you have other mitigations in place for when the rate limiter misbehaves, and that every part of the stack (the app and the Redis servers) is monitored, so you're quickly notified when any of them fails and starts letting traffic through. You could even add an extra HTTP header for downstream clients, signaling that rate limiting failed but the request went through anyway, so they can decide whether they want to accept it.
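As a sketch of what fail-open could look like under these assumptions (the type name failOpenStrategy is hypothetical and not part of the project), you can wrap any Strategy and turn errors into an Allow:

package redis_rate_limiter

import (
    "context"
    "log"
)

// failOpenStrategy is a hypothetical wrapper that allows requests whenever the
// inner Strategy fails, trading protection for availability. pair it with
// monitoring/alerts so you know when you're running unprotected.
type failOpenStrategy struct {
    inner Strategy
}

func (f *failOpenStrategy) Run(ctx context.Context, r *Request) (*Result, error) {
    result, err := f.inner.Run(ctx, r)
    if err != nil {
        // the rate limiter itself failed, let the request through but make the
        // failure visible; a real implementation should also emit a metric and
        // could flag the Result so the middleware sets the extra header above.
        log.Printf("rate limiter failed open for key %v: %v", r.Key, err)
        return &Result{State: Allow}, nil
    }
    return result, nil
}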

Once again, your main goal with rate limiting should be to protect your systems so they offer the best service possible to most of your users, preventing the few misbehaving clients from breaking your application and blocking well-behaved users from using it.

Source: https://dev.to/mauriciolinhares/rate-limiting-http-requests-in-go-using-redis-51m7