| field | value | |
|---|---|---|
| author | David S. Miller <davem@davemloft.net> | 2016-05-19 11:36:50 -0700 |
| committer | David S. Miller <davem@davemloft.net> | 2016-05-19 11:36:50 -0700 |
| commit | 87553aa5212f43d3d14b9b5d1dfba89f1a6e6f21 | |
| tree | 694ab01c605470329b19bf7ae06f2d411b8687d9 /net/rds/tcp_send.c | |
| parent | e00be9e4d0ffcc0121606229f0aa4b246d6881d7 | |
| parent | b91083a45e4c41b8c952cf02ceb0ce16f0b1b9b1 | |
Merge branch 'tcp_bh_fixes'
Eric Dumazet says:
====================
net: block BH in TCP callbacks
Four layers using the TCP stack were assuming sk_callback_lock could
be locked using read_lock() in their handlers, because the TCP stack
was running with BH disabled.

This is no longer the case. Since presumably the rest of these handlers
could also depend on BH being disabled, just use read_lock_bh().

Each layer might then consider switching to RCU protection
and no longer depend on BH being disabled.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
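The hazard the series addresses can be sketched in kernel-style C. The handler below is hypothetical (not one of the four patched callbacks); the comments restate the reasoning from the cover letter, assuming sk_callback_lock is also taken from BH (softirq) context elsewhere:

```c
/* Hypothetical socket callback -- illustrative only, not from the series. */
static void example_sk_callback(struct sock *sk)
{
	/* Before: a bare read_lock() was safe only because the TCP stack
	 * always invoked callbacks with BH already disabled:
	 *
	 *	read_lock(&sk->sk_callback_lock);
	 *
	 * That guarantee is gone; the callback may now run with BH
	 * enabled, while other paths take sk_callback_lock from BH
	 * context, so the critical section must mask BH itself.
	 */
	read_lock_bh(&sk->sk_callback_lock);
	if (sk->sk_user_data) {
		/* ... inspect per-layer state under the lock ... */
	}
	read_unlock_bh(&sk->sk_callback_lock);
}
```

read_lock_bh() is simply read_lock() plus local_bh_disable(), so the conversion is mechanical; the cover letter notes that switching a layer to RCU would remove the dependence on BH entirely.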
Diffstat (limited to 'net/rds/tcp_send.c')
| -rw-r--r-- | net/rds/tcp_send.c | 4 |
1 file changed, 2 insertions(+), 2 deletions(-)
```diff
diff --git a/net/rds/tcp_send.c b/net/rds/tcp_send.c
index 2894e6095e3b..22d0f2020a79 100644
--- a/net/rds/tcp_send.c
+++ b/net/rds/tcp_send.c
@@ -180,7 +180,7 @@ void rds_tcp_write_space(struct sock *sk)
 	struct rds_connection *conn;
 	struct rds_tcp_connection *tc;
 
-	read_lock(&sk->sk_callback_lock);
+	read_lock_bh(&sk->sk_callback_lock);
 	conn = sk->sk_user_data;
 	if (!conn) {
 		write_space = sk->sk_write_space;
@@ -200,7 +200,7 @@ void rds_tcp_write_space(struct sock *sk)
 	queue_delayed_work(rds_wq, &conn->c_send_w, 0);
 out:
-	read_unlock(&sk->sk_callback_lock);
+	read_unlock_bh(&sk->sk_callback_lock);
 
 	/*
 	 * write_space is only called when data leaves tcp's send queue if
```