test Github pull request 2 - ignore #7

Closed
wants to merge 1 commit into from

Conversation

LinuxMinion
Member

Test 2

gregmarsden pushed a commit that referenced this pull request Dec 14, 2018
[ Upstream commit 46b3722 ]

We occasionally hit the following assert failure in 'perf top' when processing
the /proc info in multiple threads.

  perf: ...include/linux/refcount.h:109: refcount_inc:
        Assertion `!(!refcount_inc_not_zero(r))' failed.

The gdb backtrace looks like this:

  [Switching to Thread 0x7ffff11ba700 (LWP 13749)]
  0x00007ffff50839fb in raise () from /lib64/libc.so.6
  (gdb)
  #0  0x00007ffff50839fb in raise () from /lib64/libc.so.6
  #1  0x00007ffff5085800 in abort () from /lib64/libc.so.6
  #2  0x00007ffff507c0da in __assert_fail_base () from /lib64/libc.so.6
  #3  0x00007ffff507c152 in __assert_fail () from /lib64/libc.so.6
  #4  0x0000000000535373 in refcount_inc (r=0x7fffdc009be0)
      at ...include/linux/refcount.h:109
  #5  0x00000000005354f1 in comm_str__get (cs=0x7fffdc009bc0)
      at util/comm.c:24
  #6  0x00000000005356bd in __comm_str__findnew (str=0x7fffd000b260 ":2",
      root=0xbed5c0 <comm_str_root>) at util/comm.c:72
  #7  0x000000000053579e in comm_str__findnew (str=0x7fffd000b260 ":2",
      root=0xbed5c0 <comm_str_root>) at util/comm.c:95
  #8  0x000000000053582e in comm__new (str=0x7fffd000b260 ":2",
      timestamp=0, exec=false) at util/comm.c:111
  #9  0x00000000005363bc in thread__new (pid=2, tid=2) at util/thread.c:57
  #10 0x0000000000523da0 in ____machine__findnew_thread (machine=0xbfde38,
      threads=0xbfdf28, pid=2, tid=2, create=true) at util/machine.c:457
  #11 0x0000000000523eb4 in __machine__findnew_thread (machine=0xbfde38,
  ...

The failing assertion is this one:

  REFCOUNT_WARN(!refcount_inc_not_zero(r), ...

The problem is that we keep a global comm_str_root list, which
is accessed by multiple threads during 'perf top' startup,
and the following 2 paths can race:

  thread 1:
    ...
    thread__new
      comm__new
        comm_str__findnew
          down_write(&comm_str_lock);
          __comm_str__findnew
            comm_str__get

  thread 2:
    ...
    comm__override or comm__free
      comm_str__put
        refcount_dec_and_test
          down_write(&comm_str_lock);
          rb_erase(&cs->rb_node, &comm_str_root);

Because thread 2 first decrements the refcnt and only then removes the
struct comm_str from the list, thread 1 can find this object on the list
with refcnt equal to 0 and hit the assert.

This patch fixes the thread 1 __comm_str__findnew path by ignoring objects
whose refcnt has already dropped to 0. For the remaining objects we take the
refcnt before comparing their name and release it afterwards with comm_str__put,
which can also release the object completely.
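A minimal sketch of the lookup pattern described above; it uses a simple linear
walk rather than the actual rb-tree search in util/comm.c, so names and
structure are illustrative only:

  /* Illustrative only: skip entries whose refcount already dropped to 0,
   * and hold a reference while the name is compared so a concurrent
   * comm_str__put() cannot free the entry under us. */
  static struct comm_str *comm_str__search(const char *str, struct rb_root *root)
  {
          struct rb_node *p;

          for (p = rb_first(root); p; p = rb_next(p)) {
                  struct comm_str *iter = rb_entry(p, struct comm_str, rb_node);

                  /* Racing with comm_str__put(): refcnt is already 0 and
                   * the node is about to be erased, so ignore it. */
                  if (!refcount_inc_not_zero(&iter->refcnt))
                          continue;

                  if (!strcmp(str, iter->str))
                          return iter;            /* keep the reference we took */

                  comm_str__put(iter);            /* may free the entry */
          }

          return NULL;                            /* caller allocates a new one */
  }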

Signed-off-by: Jiri Olsa <[email protected]>
Acked-by: Namhyung Kim <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Kan Liang <[email protected]>
Cc: Lukasz Odzioba <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Wang Nan <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/20180720101740.GA27176@krava
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
gregmarsden pushed a commit that referenced this pull request Jan 18, 2019
…error

The sequence that leads to this state is as follows.

1) First we see CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck here in rds_ib_conn_shutdown forever:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty.
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When a CQ moves to error (e.g. due to CQ overrun or CQ access violation), FW
will generate an async event to notify of this error; also, the QPs that try to
access this CQ will be put into error state but will not be flushed, since we
must not post CQEs to a broken CQ. The QP that tries to access it will also
issue an async catas event.

In summary we cannot wait for any more WR_FLUSH_ERRs in that state.
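A hedged sketch of the kind of bail-out this analysis points to; the
i_cq_in_error flag is a made-up name for illustration, not the actual patch:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Hypothetical: once the CQ has gone into error
                         * state the QP will never be flushed, so the ring
                         * cannot drain - give up instead of looping forever. */
                        if (ic->i_cq_in_error)
                                break;

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }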

Orabug: 29180452

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>
gregmarsden pushed a commit that referenced this pull request Jan 18, 2019
The sequence that leads to this state is as follows.

1) First we see CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck here in rds_ib_conn_shutdown forever:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty.
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When a CQ moves to error (e.g. due to CQ overrun or CQ access violation), FW
will generate an async event to notify of this error; also, the QPs that try to
access this CQ will be put into error state but will not be flushed, since we
must not post CQEs to a broken CQ. The QP that tries to access it will also
issue an async catas event.

In summary we cannot wait for any more WR_FLUSH_ERRs in that state.

Orabug: 28733324

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>
Signed-off-by: Brian Maly <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Feb 27, 2019
…error

The sequence that leads to this state is as follows.

1) First we see CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck here in rds_ib_conn_shutdown forever:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty.
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When a CQ moves to error (e.g. due to CQ overrun or CQ access violation), FW
will generate an async event to notify of this error; also, the QPs that try to
access this CQ will be put into error state but will not be flushed, since we
must not post CQEs to a broken CQ. The QP that tries to access it will also
issue an async catas event.

In summary we cannot wait for any more WR_FLUSH_ERRs in that state.

Orabug: 29180514

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>
gregmarsden pushed a commit that referenced this pull request Mar 5, 2019
The customer hit this crash a few times.

PID: 31556  TASK: ffff880f823caa00  CPU: 1   COMMAND: "cellsrv"
 #0 [ffff880f823db850] machine_kexec at ffffffff8105d93c
 #1 [ffff880f823db8b0] crash_kexec at ffffffff811103b3
 #2 [ffff880f823db980] oops_end at ffffffff8101a788
 #3 [ffff880f823db9b0] no_context at ffffffff8106b9cf
 #4 [ffff880f823dba20] __bad_area_nosemaphore at ffffffff8106bc9d
 #5 [ffff880f823dba70] bad_area at ffffffff8106be97
 #6 [ffff880f823dbaa0] __do_page_fault at ffffffff8106c71e
 #7 [ffff880f823dbb00] do_page_fault at ffffffff8106c81f
 #8 [ffff880f823dbb40] page_fault at ffffffff816b5a9f
    [exception RIP: rds_ib_inc_copy_to_user+104]
    RIP: ffffffffa04607b8  RSP: ffff880f823dbbf8  RFLAGS: 00010287
    RAX: 0000000000000340  RBX: 0000000000001000  RCX: 0000000000004000
    RDX: 0000000000001000  RSI: ffff88176cea2000  RDI: ffff8817d291f520
    RBP: ffff880f823dbc48   R8: 0000000000001340   R9: 0000000000001000
    R10: 0000000000001200  R11: ffff880f823dc000  R12: ffff880f823dbed0
    R13: 0000000000000000  R14: 0000000000000000  R15: 0000000000001000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880f823dbc50] rds_recvmsg at ffffffffa041d837 [rds]

int rds_ib_inc_copy_to_user(struct rds_incoming *inc, struct iov_iter *to)
...
...
        ibinc = container_of(inc, struct rds_ib_incoming, ii_inc);
        frag = list_entry(ibinc->ii_frags.next, struct rds_page_frag, f_item);
        len = be32_to_cpu(inc->i_hdr.h_len);
        sg = frag->f_sg;

        while (iov_iter_count(to) && copied < len) {
                to_copy = min_t(unsigned long, iov_iter_count(to),
                                sg->length - frag_off);
                ...

sg is NULL and it crashes accessing sg->length above.

The cause appears to be ic->i_frag_sz returning an incorrect value:
16KB when 4KB was expected.

                if (copied % ic->i_frag_sz == 0) {
                        frag = list_entry(frag->f_item.next,
                                          struct rds_page_frag, f_item);
                        frag_off = 0;
                        sg = frag->f_sg;
                }

The other end is using 4KB RDS fragsize (Solaris Super Cluster).
This end is UEK4 (4.1.12-94.8.4.el6uek.x86_64).

The message being copied arrived over a 4KB RDS frag size connection,
but during the above check ic->i_frag_sz is 16KB.
This can happen during a reconnect at the connection setup phase:
we start off with ic->i_frag_sz as 16KB and then settle down at 4KB.

Failing this check
  if (copied % ic->i_frag_sz == 0) {
can result in sg not getting set correctly.

Say, "copied" = 4KB but ic->i_frag_sz is 16KB when it should be 4KB.

During a race condition with a reconnect, ic->i_frag_sz can be 16KB
even though it settles down to 4KB once the connection is set up.
It can change from 4KB to 16KB and back to 4KB during connection setup
due to a reconnect.

We started seeing this crash after bug 26848749, but prior to that the
same scenario could result in data being copied to user space from an
incorrect "sg", resulting in data corruption.
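A hedged sketch of one way to make the copy loop immune to the race: use a
fragment size remembered with the incoming message rather than the connection's
current ic->i_frag_sz. The ii_frag_sz field is an assumption for illustration,
not necessarily the actual fix:

        /* Hypothetical: frag size recorded when the message was received,
         * so a reconnect changing ic->i_frag_sz cannot skew the check. */
        u32 frag_sz = ibinc->ii_frag_sz;

        if (copied % frag_sz == 0) {
                frag = list_entry(frag->f_item.next,
                                  struct rds_page_frag, f_item);
                frag_off = 0;
                sg = frag->f_sg;
        }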

Orabug: 28748049

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>
gregmarsden pushed a commit that referenced this pull request Mar 5, 2019
…error

The sequence that leads to this state is as follows.

1) First we see CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck here in rds_ib_conn_shutdown forever:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty.
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When a CQ moves to error (e.g. due to CQ overrun or CQ access violation), FW
will generate an async event to notify of this error; also, the QPs that try to
access this CQ will be put into error state but will not be flushed, since we
must not post CQEs to a broken CQ. The QP that tries to access it will also
issue an async catas event.

In summary we cannot wait for any more WR_FLUSH_ERRs in that state.

Orabug: 29180535

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>
gregmarsden pushed a commit that referenced this pull request Mar 29, 2019
[ Upstream commit ebaf39e ]

The *_frag_reasm() functions are susceptible to miscalculating the byte
count of packet fragments in case the truesize of a head buffer changes.
The truesize member may be changed by the call to skb_unclone(), leaving
the fragment memory limit counter unbalanced even if all fragments are
processed. This miscalculation goes unnoticed as long as the network
namespace which holds the counter is not destroyed.

Should an attempt be made to destroy a network namespace that holds an
unbalanced fragment memory limit counter the cleanup of the namespace
never finishes. The thread handling the cleanup gets stuck in
inet_frags_exit_net() waiting for the percpu counter to reach zero. The
thread is usually in running state with a stacktrace similar to:

 PID: 1073   TASK: ffff880626711440  CPU: 1   COMMAND: "kworker/u48:4"
  #5 [ffff880621563d48] _raw_spin_lock at ffffffff815f5480
  #6 [ffff880621563d48] inet_evict_bucket at ffffffff8158020b
  #7 [ffff880621563d80] inet_frags_exit_net at ffffffff8158051c
  #8 [ffff880621563db0] ops_exit_list at ffffffff814f5856
  #9 [ffff880621563dd8] cleanup_net at ffffffff814f67c0
 #10 [ffff880621563e38] process_one_work at ffffffff81096f14

It is not possible to create new network namespaces, and processes
that call unshare() end up being stuck in uninterruptible sleep state
waiting to acquire the net_mutex.

The bug was observed in the IPv6 netfilter code by Per Sundstrom.
I thank him for his analysis of the problem. The parts of this patch
that apply to IPv4 and IPv6 fragment reassembly are preemptive measures.
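A minimal sketch of the accounting this implies: sample the head buffer's
truesize before skb_unclone() and feed the difference back into the per-netns
fragment memory counter (structure follows the reassembly helpers; details of
the actual patch may differ):

        /* Keep the frag memory counter balanced if skb_unclone() changes
         * the head buffer's truesize. */
        int delta = -head->truesize;

        if (skb_unclone(head, GFP_ATOMIC))
                goto out_fail;

        delta += head->truesize;
        if (delta)
                add_frag_mem_limit(qp->q.net, delta);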

Signed-off-by: Jiri Wiesner <[email protected]>
Reported-by: Per Sundstrom <[email protected]>
Acked-by: Peter Oskolkov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
gregmarsden pushed a commit that referenced this pull request Mar 29, 2019
[ Upstream commit c5a94f4 ]

It was observed that a process blocked indefinitely in
__fscache_read_or_alloc_page(), waiting for FSCACHE_COOKIE_LOOKING_UP
to be cleared via fscache_wait_for_deferred_lookup().

At this time, ->backing_objects was empty, which would normally prevent
__fscache_read_or_alloc_page() from getting to the point of waiting.
This implies that ->backing_objects was cleared *after*
__fscache_read_or_alloc_page was entered.

When an object is "killed" and then "dropped",
FSCACHE_COOKIE_LOOKING_UP is cleared in fscache_lookup_failure(), then
KILL_OBJECT and DROP_OBJECT are "called" and only in DROP_OBJECT is
->backing_objects cleared.  This leaves a window where
something else can set FSCACHE_COOKIE_LOOKING_UP and
__fscache_read_or_alloc_page() can start waiting, before
->backing_objects is cleared.

There is some uncertainty in this analysis, but it seems to fit the
observations.  The wake added in this patch will be handled correctly
by __fscache_read_or_alloc_page(), as it checks whether ->backing_objects
is empty again after waiting.

The customer who reported the hang also reports that the hang cannot be
reproduced with this fix.
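A hedged sketch of the added wake (the exact call site in the patch may
differ): once the object is dropped, waiters on FSCACHE_COOKIE_LOOKING_UP are
woken so they re-check ->backing_objects instead of sleeping forever.

        /* Wake anyone stuck in fscache_wait_for_deferred_lookup() so that
         * __fscache_read_or_alloc_page() re-checks ->backing_objects. */
        wake_up_bit(&cookie->flags, FSCACHE_COOKIE_LOOKING_UP);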

The backtrace for the blocked process looked like:

PID: 29360  TASK: ffff881ff2ac0f80  CPU: 3   COMMAND: "zsh"
 #0 [ffff881ff43efbf8] schedule at ffffffff815e56f1
 #1 [ffff881ff43efc58] bit_wait at ffffffff815e64ed
 #2 [ffff881ff43efc68] __wait_on_bit at ffffffff815e61b8
 #3 [ffff881ff43efca0] out_of_line_wait_on_bit at ffffffff815e625e
 #4 [ffff881ff43efd08] fscache_wait_for_deferred_lookup at ffffffffa04f2e8f [fscache]
 #5 [ffff881ff43efd18] __fscache_read_or_alloc_page at ffffffffa04f2ffe [fscache]
 #6 [ffff881ff43efd58] __nfs_readpage_from_fscache at ffffffffa0679668 [nfs]
 #7 [ffff881ff43efd78] nfs_readpage at ffffffffa067092b [nfs]
 #8 [ffff881ff43efda0] generic_file_read_iter at ffffffff81187a73
 #9 [ffff881ff43efe50] nfs_file_read at ffffffffa066544b [nfs]
#10 [ffff881ff43efe70] __vfs_read at ffffffff811fc756
#11 [ffff881ff43efee8] vfs_read at ffffffff811fccfa
#12 [ffff881ff43eff18] sys_read at ffffffff811fda62
#13 [ffff881ff43eff50] entry_SYSCALL_64_fastpath at ffffffff815e986e

Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: David Howells <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
@LinuxMinion
Member Author

Closing test pull request

gregmarsden pushed a commit that referenced this pull request Apr 19, 2019
This workaround should be reverted when upstream commit d8b91dd
("Merge branch 'perf-core-for-linus' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")
is available in uek.

The issue appears to be fixed in upstream tag 4.16.0; tag 4.15.0 still has
this issue.

The last known commit on the perf topic branch that solves this issue
is c19d084 (tag: perf-core-for-mingo-4.16-20180125,
"perf trace beauty flock: Move to separate object file").

Without this commit the perf topic branch has the below issue. With
this commit the branch does not have the issue.

The problem is that the above commit does not fix the issue on top of upstream
tag 4.15.0. So the issue is probably fixed by this commit plus some additional
commits on the perf topic branch and/or on the master branch below the point
where the perf branch was branched.

Also, this specific commit is not a fix; its only possible relation to this
bug is that it touches the 'flock' code, which is used by bash/scripts to
synchronize.

To find the additional commits via git bisect I need to re-order the commits so
that the above commit will be *below* the other commits that solve this issue.
To do that I need to know the lowest commit that relates to this fix.

I do not know and have no way to know that.

An attempt to merge the perf topic branch on top of uek5 produces ~20k commits
and tons of merge conflicts, as uek5 is way behind upstream. So we can't even
know whether the topic branch with its ~270 commits fixes this issue for uek5.

So I chose to work around the issue and wait for the upstream topic merge to
obsolete this commit.

When the issue occurs:

The serial console is flooded with messages:

[71266.680745] bondib0: link status up for interface ib0, enabling it in 0 ms
[71266.682740] bondib0: link status up for interface ib0, enabling it in 0 ms
[71266.685738] bondib0: link status up for interface ib0, enabling it in 0 ms

Then a panic occurs:

[71266.695757] INFO: task NetworkManager:5837 blocked for more than 120 seconds.
[71266.695759]       Not tainted 4.14.35-1902.0.6.el7uek.x86_64 #2
[71266.695760] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[71266.695761] NetworkManager  D    0  5837      1 0x00000082
[71266.695765] Call Trace:
[71266.695778]  __schedule+0x2bc/0x8da
[71266.695782]  schedule+0x36/0x7c
[71266.695785]  schedule_preempt_disabled+0xe/0x10
[71266.695788]  __mutex_lock.isra.5+0x20c/0x634
[71266.695792]  __mutex_lock_slowpath+0x13/0x15
[71266.695794]  mutex_lock+0x2f/0x3a
[71266.695800]  rtnetlink_rcv_msg+0x1d0/0x289
[71266.695806]  ? __skb_try_recv_datagram+0xca/0x174
[71266.695809]  ? rtnl_calcit.isra.25+0x110/0x103
[71266.695812]  netlink_rcv_skb+0xdf/0x111
[71266.695816]  rtnetlink_rcv+0x15/0x17
[71266.695818]  netlink_unicast+0x18d/0x255
[71266.695820]  netlink_sendmsg+0x2df/0x3cc
[71266.695825]  sock_sendmsg+0x3e/0x4a
[71266.695828]  ___sys_sendmsg+0x2b5/0x2c6
[71266.695832]  ? arch_tlb_finish_mmu+0x1b/0xcb
[71266.695835]  ? tlb_finish_mmu+0x23/0x30
[71266.695838]  ? unmap_region+0xf4/0x12d
[71266.695844]  ? lockref_put_or_lock+0x44/0x72
[71266.695846]  ? __vma_rb_erase+0x10f/0x1f4
[71266.695850]  __sys_sendmsg+0x54/0x8d
[71266.695854]  SyS_sendmsg+0x12/0x1c
[71266.695860]  do_syscall_64+0x79/0x1ae
[71266.695864]  entry_SYSCALL_64_after_hwframe+0x151/0x0
[71266.695866] RIP: 0033:0x7f16f2553c5d
[71266.695867] RSP: 002b:00007ffff7a493f0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e
[71266.695870] RAX: ffffffffffffffda RBX: 00005570a5026380 RCX: 00007f16f2553c5d
[71266.695874] RDX: 0000000000000000 RSI: 00007ffff7a49420 RDI: 0000000000000007
[71266.695875] RBP: 00007ffff7a49420 R08: 0000000000000001 R09: 0000000000000000
[71266.695876] R10: 0000000000000808 R11: 0000000000000293 R12: 00005570a5026380
[71266.695876] R13: 0000000000000000 R14: 0000000000000000 R15: 00007f16d4004b70

Issue analysis:

The ip process is hung in addrconf_notify while trying to print one of the
messages below to the serial console:
"ADDRCONF(NETDEV_UP): %s: link is not ready\n"
"ADDRCONF(NETDEV_CHANGE): %s: link becomes ready\n"
The ip process holds the rtnl_lock while the network-manager process tries to
grab this lock in a 1 msec loop, and every time it fails to grab the lock,
network-manager sends an additional line to the serial log, as seen in the dmesg:
"bondib0: link status up for interface ib0, enabling it in 0 ms"
So the bond device floods the serial console while waiting for the rtnl_lock,
while ip holds the rtnl_lock while waiting for the serial console.
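One generic way to defuse such a console flood (illustrative only, not the
workaround this commit applies; bond, slave and delay are assumed locals from
the bonding link-monitor path) is to rate-limit the bonding message:

        /* Do not let the bonding path flood a slow serial console while
         * another task holds rtnl_lock and is blocked on that console. */
        if (net_ratelimit())
                netdev_info(bond->dev,
                            "link status up for interface %s, enabling it in %d ms\n",
                            slave->dev->name, delay);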

The offending stack trace from the vmcore is this:

PID: 30063  TASK: ffff909c3f675a00  CPU: 7   COMMAND: "ip"
 #0 [fffffe000013ce38] crash_nmi_callback at ffffffff8e059ba7
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/paravirt.h: 99
 #1 [fffffe000013ce48] nmi_handle at ffffffff8e032748
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 137
 #2 [fffffe000013cea0] default_do_nmi at ffffffff8e032c96
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 336
 #3 [fffffe000013cec8] do_nmi at ffffffff8e032e76
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 521
 #4 [fffffe000013cef0] end_repeat_nmi at ffffffff8ea0436f
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/entry_64.S: 1750
    [exception RIP: delay_tsc+51]
    RIP: ffffffff8e8558f3  RSP: ffff9f63c6c07390  RFLAGS: 00000046
    RAX: 0000000016d23977  RBX: ffffffff903fbc00  RCX: 00009b7616d23038
    RDX: 0000000000009b76  RSI: 0000000000000007  RDI: 000000000000095a
    RBP: ffff9f63c6c07390   R8: 00000000fffffffe   R9: 0000000000000000
    R10: 0000000000000005  R11: 0000000000020503  R12: 000000000000261f
    R13: 0000000000000020  R14: ffffffff8f96de2f  R15: ffffffff903fbc00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/msr.h: 193
    <NMI exception stack>
 #5 [ffff9f63c6c07390] delay_tsc at ffffffff8e8558f3
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/msr.h: 193
 #6 [ffff9f63c6c07398] __const_udelay at ffffffff8e855838
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/lib/delay.c: 176
 #7 [ffff9f63c6c073a8] wait_for_xmitr at ffffffff8e510dcc
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/nmi.h: 126
 #8 [ffff9f63c6c073d0] serial8250_console_putchar at ffffffff8e510e6c
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/serial_core.h: 265
 #9 [ffff9f63c6c073f0] uart_console_write at ffffffff8e509573
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/serial_core.c: 1886
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/8250/8250_port.c: 3256
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/8250/8250_core.c: 598
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1574
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1766
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1808
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk_safe.c: 402
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1842
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/ipv6/addrconf.c: 3532
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/notifier.c: 95
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/notifier.c: 402
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 1682
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 1697
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 6903
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 2072
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 2624
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 4255
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 2433
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 4268
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 1287
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 1877
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 646
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 2061
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/file.h: 26
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 2102
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/common.c: 295
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/entry_64.S: 247
    RIP: 00007faf75ccafd0  RSP: 00007ffc710a9368  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 000000005c65f66d  RCX: 00007faf75ccafd0
    RDX: 0000000000000000  RSI: 00007ffc710a93b0  RDI: 0000000000000003
    RBP: 00007ffc710a93b0   R8: 0000000000000000   R9: 0000000000000008
    R10: 00007ffc710a8f30  R11: 0000000000000246  R12: 0000000000000000
    R13: 000000000066a440  R14: 00007ffc710a9458  R15: 00007ffc710a9b88
    ORIG_RAX: 000000000000002e  CS: 0033  SS: 002b

Orabug: 29357838

Signed-off-by: Shamir Rabinovitch <[email protected]>
Signed-off-by: Aron Silverton <[email protected]>
Reviewed-by: John Haxby <[email protected]>
gregmarsden pushed a commit that referenced this pull request Apr 19, 2019
[ Upstream commit 1d1bbc3 ]

ibmvnic_reset allocated new reset work item objects in a non-atomic
context. This can be called from a tasklet, generating the output below.
Allocate work items with the GFP_ATOMIC flag instead.

BUG: sleeping function called from invalid context at mm/slab.h:421
in_atomic(): 1, irqs_disabled(): 1, pid: 93, name: kworker/0:2
INFO: lockdep is turned off.
irq event stamp: 66049
hardirqs last  enabled at (66048): [<c000000000122468>] tasklet_action_common.isra.12+0x78/0x1c0
hardirqs last disabled at (66049): [<c000000000befce8>] _raw_spin_lock_irqsave+0x48/0xf0
softirqs last  enabled at (66044): [<c000000000a8ac78>] dev_deactivate_queue.constprop.28+0xc8/0x160
softirqs last disabled at (66045): [<c0000000000306e0>] call_do_softirq+0x14/0x24
CPU: 0 PID: 93 Comm: kworker/0:2 Kdump: loaded Not tainted 4.20.0-rc6-00001-g1b50a8f03706 #7
Workqueue: events linkwatch_event
Call Trace:
[c0000003fffe7ae0] [c000000000bc83e4] dump_stack+0xe8/0x164 (unreliable)
[c0000003fffe7b30] [c00000000015ba0c] ___might_sleep+0x2dc/0x320
[c0000003fffe7bb0] [c000000000391514] kmem_cache_alloc_trace+0x3e4/0x440
[c0000003fffe7c30] [d000000005b2309c] ibmvnic_reset+0x16c/0x360 [ibmvnic]
[c0000003fffe7cc0] [d000000005b29834] ibmvnic_tasklet+0x1054/0x2010 [ibmvnic]
[c0000003fffe7e00] [c0000000001224c8] tasklet_action_common.isra.12+0xd8/0x1c0
[c0000003fffe7e60] [c000000000bf1238] __do_softirq+0x1a8/0x64c
[c0000003fffe7f90] [c0000000000306e0] call_do_softirq+0x14/0x24
[c0000003f3967980] [c00000000001ba50] do_softirq_own_stack+0x60/0xb0
[c0000003f39679c0] [c0000000001218a8] do_softirq+0xa8/0x100
[c0000003f39679f0] [c000000000121a74] __local_bh_enable_ip+0x174/0x180
[c0000003f3967a60] [c000000000bf003c] _raw_spin_unlock_bh+0x5c/0x80
[c0000003f3967a90] [c000000000a8ac78] dev_deactivate_queue.constprop.28+0xc8/0x160
[c0000003f3967ad0] [c000000000a8c8b0] dev_deactivate_many+0xd0/0x520
[c0000003f3967b70] [c000000000a8cd40] dev_deactivate+0x40/0x60
[c0000003f3967ba0] [c000000000a5e0c4] linkwatch_do_dev+0x74/0xd0
[c0000003f3967bd0] [c000000000a5e694] __linkwatch_run_queue+0x1a4/0x1f0
[c0000003f3967c30] [c000000000a5e728] linkwatch_event+0x48/0x60
[c0000003f3967c50] [c0000000001444e8] process_one_work+0x238/0x710
[c0000003f3967d20] [c000000000144a48] worker_thread+0x88/0x4e0
[c0000003f3967db0] [c00000000014e3a8] kthread+0x178/0x1c0
[c0000003f3967e20] [c00000000000bfd0] ret_from_kernel_thread+0x5c/0x6c
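The fix described above amounts to an atomic allocation in the tasklet path; a
minimal sketch (simplified from the ibmvnic driver's reset work item handling):

        /* ibmvnic_reset() may run from a tasklet, so the reset work item
         * must be allocated without sleeping. */
        struct ibmvnic_rwi *rwi = kzalloc(sizeof(*rwi), GFP_ATOMIC);

        if (!rwi)
                return -ENOMEM;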

Signed-off-by: Thomas Falcon <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
gregmarsden pushed a commit that referenced this pull request May 17, 2019
This workaround should be reverted when upstream commit d8b91dd
("Merge branch 'perf-core-for-linus' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")
is available in uek.

The issue appears to be fixed in upstream tag 4.16.0; tag 4.15.0 still has
this issue.

The last known commit on the perf topic branch that solves this issue
is c19d084 (tag: perf-core-for-mingo-4.16-20180125,
"perf trace beauty flock: Move to separate object file").

Without this commit the perf topic branch has the below issue. With
this commit the branch does not have the issue.

The problem is that the above commit does not fix the issue on top of upstream
tag 4.15.0. So the issue is probably fixed by this commit plus some additional
commits on the perf topic branch and/or on the master branch below the point
where the perf branch was branched.

Also, this specific commit is not a fix; its only possible relation to this
bug is that it touches the 'flock' code, which is used by bash/scripts to
synchronize.

To find the additional commits via git bisect I need to re-order the commits so
that the above commit will be *below* the other commits that solve this issue.
To do that I need to know the lowest commit that relates to this fix.

I do not know and have no way to know that.

An attempt to merge the perf topic branch on top of uek5 produces ~20k commits
and tons of merge conflicts, as uek5 is way behind upstream. So we can't even
know whether the topic branch with its ~270 commits fixes this issue for uek5.

So I chose to work around the issue and wait for the upstream topic merge to
obsolete this commit.

When the issue occurs:

The serial console is flooded with messages:

[71266.680745] bondib0: link status up for interface ib0, enabling it in 0 ms
[71266.682740] bondib0: link status up for interface ib0, enabling it in 0 ms
[71266.685738] bondib0: link status up for interface ib0, enabling it in 0 ms

Then a panic occurs:

[71266.695757] INFO: task NetworkManager:5837 blocked for more than 120 seconds.
[71266.695759]       Not tainted 4.14.35-1902.0.6.el7uek.x86_64 #2
[71266.695760] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[71266.695761] NetworkManager  D    0  5837      1 0x00000082
[71266.695765] Call Trace:
[71266.695778]  __schedule+0x2bc/0x8da
[71266.695782]  schedule+0x36/0x7c
[71266.695785]  schedule_preempt_disabled+0xe/0x10
[71266.695788]  __mutex_lock.isra.5+0x20c/0x634
[71266.695792]  __mutex_lock_slowpath+0x13/0x15
[71266.695794]  mutex_lock+0x2f/0x3a
[71266.695800]  rtnetlink_rcv_msg+0x1d0/0x289
[71266.695806]  ? __skb_try_recv_datagram+0xca/0x174
[71266.695809]  ? rtnl_calcit.isra.25+0x110/0x103
[71266.695812]  netlink_rcv_skb+0xdf/0x111
[71266.695816]  rtnetlink_rcv+0x15/0x17
[71266.695818]  netlink_unicast+0x18d/0x255
[71266.695820]  netlink_sendmsg+0x2df/0x3cc
[71266.695825]  sock_sendmsg+0x3e/0x4a
[71266.695828]  ___sys_sendmsg+0x2b5/0x2c6
[71266.695832]  ? arch_tlb_finish_mmu+0x1b/0xcb
[71266.695835]  ? tlb_finish_mmu+0x23/0x30
[71266.695838]  ? unmap_region+0xf4/0x12d
[71266.695844]  ? lockref_put_or_lock+0x44/0x72
[71266.695846]  ? __vma_rb_erase+0x10f/0x1f4
[71266.695850]  __sys_sendmsg+0x54/0x8d
[71266.695854]  SyS_sendmsg+0x12/0x1c
[71266.695860]  do_syscall_64+0x79/0x1ae
[71266.695864]  entry_SYSCALL_64_after_hwframe+0x151/0x0
[71266.695866] RIP: 0033:0x7f16f2553c5d
[71266.695867] RSP: 002b:00007ffff7a493f0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e
[71266.695870] RAX: ffffffffffffffda RBX: 00005570a5026380 RCX: 00007f16f2553c5d
[71266.695874] RDX: 0000000000000000 RSI: 00007ffff7a49420 RDI: 0000000000000007
[71266.695875] RBP: 00007ffff7a49420 R08: 0000000000000001 R09: 0000000000000000
[71266.695876] R10: 0000000000000808 R11: 0000000000000293 R12: 00005570a5026380
[71266.695876] R13: 0000000000000000 R14: 0000000000000000 R15: 00007f16d4004b70

Issue analysis:

The ip process is hung in addrconf_notify while trying to print one of the
messages below to the serial console:
"ADDRCONF(NETDEV_UP): %s: link is not ready\n"
"ADDRCONF(NETDEV_CHANGE): %s: link becomes ready\n"
The ip process holds the rtnl_lock while the network-manager process tries to
grab this lock in a 1 msec loop, and every time it fails to grab the lock,
network-manager sends an additional line to the serial log, as seen in the dmesg:
"bondib0: link status up for interface ib0, enabling it in 0 ms"
So the bond device floods the serial console while waiting for the rtnl_lock,
while ip holds the rtnl_lock while waiting for the serial console.

The offending stack trace from the vmcore is this:

PID: 30063  TASK: ffff909c3f675a00  CPU: 7   COMMAND: "ip"
 #0 [fffffe000013ce38] crash_nmi_callback at ffffffff8e059ba7
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/paravirt.h: 99
 #1 [fffffe000013ce48] nmi_handle at ffffffff8e032748
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 137
 #2 [fffffe000013cea0] default_do_nmi at ffffffff8e032c96
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 336
 #3 [fffffe000013cec8] do_nmi at ffffffff8e032e76
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 521
 #4 [fffffe000013cef0] end_repeat_nmi at ffffffff8ea0436f
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/entry_64.S: 1750
    [exception RIP: delay_tsc+51]
    RIP: ffffffff8e8558f3  RSP: ffff9f63c6c07390  RFLAGS: 00000046
    RAX: 0000000016d23977  RBX: ffffffff903fbc00  RCX: 00009b7616d23038
    RDX: 0000000000009b76  RSI: 0000000000000007  RDI: 000000000000095a
    RBP: ffff9f63c6c07390   R8: 00000000fffffffe   R9: 0000000000000000
    R10: 0000000000000005  R11: 0000000000020503  R12: 000000000000261f
    R13: 0000000000000020  R14: ffffffff8f96de2f  R15: ffffffff903fbc00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/msr.h: 193
    <NMI exception stack>
 #5 [ffff9f63c6c07390] delay_tsc at ffffffff8e8558f3
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/msr.h: 193
 #6 [ffff9f63c6c07398] __const_udelay at ffffffff8e855838
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/lib/delay.c: 176
 #7 [ffff9f63c6c073a8] wait_for_xmitr at ffffffff8e510dcc
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/nmi.h: 126
 #8 [ffff9f63c6c073d0] serial8250_console_putchar at ffffffff8e510e6c
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/serial_core.h: 265
 #9 [ffff9f63c6c073f0] uart_console_write at ffffffff8e509573
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/serial_core.c: 1886
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/8250/8250_port.c: 3256
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/8250/8250_core.c: 598
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1574
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1766
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1808
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk_safe.c: 402
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1842
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/ipv6/addrconf.c: 3532
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/notifier.c: 95
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/notifier.c: 402
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 1682
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 1697
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 6903
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 2072
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 2624
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 4255
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 2433
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 4268
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 1287
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 1877
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 646
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 2061
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/file.h: 26
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 2102
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/common.c: 295
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/entry_64.S: 247
    RIP: 00007faf75ccafd0  RSP: 00007ffc710a9368  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 000000005c65f66d  RCX: 00007faf75ccafd0
    RDX: 0000000000000000  RSI: 00007ffc710a93b0  RDI: 0000000000000003
    RBP: 00007ffc710a93b0   R8: 0000000000000000   R9: 0000000000000008
    R10: 00007ffc710a8f30  R11: 0000000000000246  R12: 0000000000000000
    R13: 000000000066a440  R14: 00007ffc710a9458  R15: 00007ffc710a9b88
    ORIG_RAX: 000000000000002e  CS: 0033  SS: 002b

Orabug: 29016284

Signed-off-by: Shamir Rabinovitch <[email protected]>
Reviewed-by: John Haxby <[email protected]>
gregmarsden pushed a commit that referenced this pull request May 17, 2019
This workaround should be reverted when upstream commit d8b91dd
("Merge branch 'perf-core-for-linus' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")
is available in uek.

The issue appears to be fixed in upstream tag 4.16.0; tag 4.15.0 still has
this issue.

The last known commit on the perf topic branch that solves this issue
is c19d084 (tag: perf-core-for-mingo-4.16-20180125,
"perf trace beauty flock: Move to separate object file").

Without this commit the perf topic branch has the below issue. With
this commit the branch does not have the issue.

The problem is that the above commit does not fix the issue on top of upstream
tag 4.15.0. So the issue is probably fixed by this commit plus some additional
commits on the perf topic branch and/or on the master branch below the point
where the perf branch was branched.

Also, this specific commit is not a fix; its only possible relation to this
bug is that it touches the 'flock' code, which is used by bash/scripts to
synchronize.

To find the additional commits via git bisect I need to re-order the commits so
that the above commit will be *below* the other commits that solve this issue.
To do that I need to know the lowest commit that relates to this fix.

I do not know and have no way to know that.

An attempt to merge the perf topic branch on top of uek5 produces ~20k commits
and tons of merge conflicts, as uek5 is way behind upstream. So we can't even
know whether the topic branch with its ~270 commits fixes this issue for uek5.

So I chose to work around the issue and wait for the upstream topic merge to
obsolete this commit.

When the issue occurs:

The serial console is flooded with messages:

[71266.680745] bondib0: link status up for interface ib0, enabling it in 0 ms
[71266.682740] bondib0: link status up for interface ib0, enabling it in 0 ms
[71266.685738] bondib0: link status up for interface ib0, enabling it in 0 ms

Then a panic occurs:

[71266.695757] INFO: task NetworkManager:5837 blocked for more than 120 seconds.
[71266.695759]       Not tainted 4.14.35-1902.0.6.el7uek.x86_64 #2
[71266.695760] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[71266.695761] NetworkManager  D    0  5837      1 0x00000082
[71266.695765] Call Trace:
[71266.695778]  __schedule+0x2bc/0x8da
[71266.695782]  schedule+0x36/0x7c
[71266.695785]  schedule_preempt_disabled+0xe/0x10
[71266.695788]  __mutex_lock.isra.5+0x20c/0x634
[71266.695792]  __mutex_lock_slowpath+0x13/0x15
[71266.695794]  mutex_lock+0x2f/0x3a
[71266.695800]  rtnetlink_rcv_msg+0x1d0/0x289
[71266.695806]  ? __skb_try_recv_datagram+0xca/0x174
[71266.695809]  ? rtnl_calcit.isra.25+0x110/0x103
[71266.695812]  netlink_rcv_skb+0xdf/0x111
[71266.695816]  rtnetlink_rcv+0x15/0x17
[71266.695818]  netlink_unicast+0x18d/0x255
[71266.695820]  netlink_sendmsg+0x2df/0x3cc
[71266.695825]  sock_sendmsg+0x3e/0x4a
[71266.695828]  ___sys_sendmsg+0x2b5/0x2c6
[71266.695832]  ? arch_tlb_finish_mmu+0x1b/0xcb
[71266.695835]  ? tlb_finish_mmu+0x23/0x30
[71266.695838]  ? unmap_region+0xf4/0x12d
[71266.695844]  ? lockref_put_or_lock+0x44/0x72
[71266.695846]  ? __vma_rb_erase+0x10f/0x1f4
[71266.695850]  __sys_sendmsg+0x54/0x8d
[71266.695854]  SyS_sendmsg+0x12/0x1c
[71266.695860]  do_syscall_64+0x79/0x1ae
[71266.695864]  entry_SYSCALL_64_after_hwframe+0x151/0x0
[71266.695866] RIP: 0033:0x7f16f2553c5d
[71266.695867] RSP: 002b:00007ffff7a493f0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e
[71266.695870] RAX: ffffffffffffffda RBX: 00005570a5026380 RCX: 00007f16f2553c5d
[71266.695874] RDX: 0000000000000000 RSI: 00007ffff7a49420 RDI: 0000000000000007
[71266.695875] RBP: 00007ffff7a49420 R08: 0000000000000001 R09: 0000000000000000
[71266.695876] R10: 0000000000000808 R11: 0000000000000293 R12: 00005570a5026380
[71266.695876] R13: 0000000000000000 R14: 0000000000000000 R15: 00007f16d4004b70

Issue analysis:

The ip process is hung in addrconf_notify while trying to print one of the
messages below to the serial console:
"ADDRCONF(NETDEV_UP): %s: link is not ready\n"
"ADDRCONF(NETDEV_CHANGE): %s: link becomes ready\n"
The ip process holds the rtnl_lock while the network-manager process tries to
grab this lock in a 1 msec loop, and every time it fails to grab the lock,
network-manager sends an additional line to the serial log, as seen in the dmesg:
"bondib0: link status up for interface ib0, enabling it in 0 ms"
So the bond device floods the serial console while waiting for the rtnl_lock,
while ip holds the rtnl_lock while waiting for the serial console.

The offending stack trace from the vmcore is this:

PID: 30063  TASK: ffff909c3f675a00  CPU: 7   COMMAND: "ip"
 #0 [fffffe000013ce38] crash_nmi_callback at ffffffff8e059ba7
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/paravirt.h: 99
 #1 [fffffe000013ce48] nmi_handle at ffffffff8e032748
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 137
 #2 [fffffe000013cea0] default_do_nmi at ffffffff8e032c96
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 336
 #3 [fffffe000013cec8] do_nmi at ffffffff8e032e76
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/kernel/nmi.c: 521
 #4 [fffffe000013cef0] end_repeat_nmi at ffffffff8ea0436f
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/entry_64.S: 1750
    [exception RIP: delay_tsc+51]
    RIP: ffffffff8e8558f3  RSP: ffff9f63c6c07390  RFLAGS: 00000046
    RAX: 0000000016d23977  RBX: ffffffff903fbc00  RCX: 00009b7616d23038
    RDX: 0000000000009b76  RSI: 0000000000000007  RDI: 000000000000095a
    RBP: ffff9f63c6c07390   R8: 00000000fffffffe   R9: 0000000000000000
    R10: 0000000000000005  R11: 0000000000020503  R12: 000000000000261f
    R13: 0000000000000020  R14: ffffffff8f96de2f  R15: ffffffff903fbc00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/msr.h: 193
    <NMI exception stack>
 #5 [ffff9f63c6c07390] delay_tsc at ffffffff8e8558f3
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/include/asm/msr.h: 193
 #6 [ffff9f63c6c07398] __const_udelay at ffffffff8e855838
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/lib/delay.c: 176
 #7 [ffff9f63c6c073a8] wait_for_xmitr at ffffffff8e510dcc
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/nmi.h: 126
 #8 [ffff9f63c6c073d0] serial8250_console_putchar at ffffffff8e510e6c
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/serial_core.h: 265
 #9 [ffff9f63c6c073f0] uart_console_write at ffffffff8e509573
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/serial_core.c: 1886
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/8250/8250_port.c: 3256
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/drivers/tty/serial/8250/8250_core.c: 598
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1574
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1766
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1808
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk_safe.c: 402
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/printk/printk.c: 1842
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/ipv6/addrconf.c: 3532
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/notifier.c: 95
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/kernel/notifier.c: 402
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 1682
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 1697
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/dev.c: 6903
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 2072
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 2624
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 4255
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 2433
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/core/rtnetlink.c: 4268
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 1287
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/netlink/af_netlink.c: 1877
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 646
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 2061
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/include/linux/file.h: 26
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/net/socket.c: 2102
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/common.c: 295
    /usr/src/debug/kernel-4.14.35/linux-4.14.35-1902.0.6.el7uek/arch/x86/entry/entry_64.S: 247
    RIP: 00007faf75ccafd0  RSP: 00007ffc710a9368  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 000000005c65f66d  RCX: 00007faf75ccafd0
    RDX: 0000000000000000  RSI: 00007ffc710a93b0  RDI: 0000000000000003
    RBP: 00007ffc710a93b0   R8: 0000000000000000   R9: 0000000000000008
    R10: 00007ffc710a8f30  R11: 0000000000000246  R12: 0000000000000000
    R13: 000000000066a440  R14: 00007ffc710a9458  R15: 00007ffc710a9b88
    ORIG_RAX: 000000000000002e  CS: 0033  SS: 002b

Orabug: 29631452

Signed-off-by: Shamir Rabinovitch <[email protected]>
Signed-off-by: Aron Silverton <[email protected]>
Reviewed-by: John Haxby <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
…_map

[ Upstream commit 39df730 ]

Detected via gcc's ASan:

  Direct leak of 2048 byte(s) in 64 object(s) allocated from:
    6     #0 0x7f606512e370 in __interceptor_realloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xee370)
    7     #1 0x556b0f1d7ddd in thread_map__realloc util/thread_map.c:43
    8     #2 0x556b0f1d84c7 in thread_map__new_by_tid util/thread_map.c:85
    9     #3 0x556b0f0e045e in is_event_supported util/parse-events.c:2250
   10     #4 0x556b0f0e1aa1 in print_hwcache_events util/parse-events.c:2382
   11     #5 0x556b0f0e3231 in print_events util/parse-events.c:2514
   12     #6 0x556b0ee0a66e in cmd_list /home/changbin/work/linux/tools/perf/builtin-list.c:58
   13     #7 0x556b0f01e0ae in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
   14     #8 0x556b0f01e859 in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
   15     #9 0x556b0f01edc8 in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
   16     #10 0x556b0f01f71f in main /home/changbin/work/linux/tools/perf/perf.c:520
   17     #11 0x7f6062ccf09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)
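The fix implied by the report is to drop the thread_map reference once the
probe is done; a hedged sketch of is_event_supported(), simplified from
tools/perf/util/parse-events.c:

  static bool is_event_supported(u8 type, unsigned int config)
  {
          bool ret = false;
          struct thread_map *tmap = thread_map__new_by_tid(0);

          if (!tmap)
                  return false;

          /* ... open the event on tmap and set ret accordingly ... */

          thread_map__put(tmap);          /* release the map - this was leaked */
          return ret;
  }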

Signed-off-by: Changbin Du <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Fixes: 8989605 ("perf tools: Do not put a variable sized type not at the end of a struct")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
[ Upstream commit 54569ba ]

Detected with gcc's ASan:

  Direct leak of 66 byte(s) in 5 object(s) allocated from:
      #0 0x7ff3b1f32070 in __interceptor_strdup (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x3b070)
      #1 0x560c8761034d in collect_config util/config.c:597
      #2 0x560c8760d9cb in get_value util/config.c:169
      #3 0x560c8760dfd7 in perf_parse_file util/config.c:285
      #4 0x560c8760e0d2 in perf_config_from_file util/config.c:476
      #5 0x560c876108fd in perf_config_set__init util/config.c:661
      #6 0x560c87610c72 in perf_config_set__new util/config.c:709
      #7 0x560c87610d2f in perf_config__init util/config.c:718
      #8 0x560c87610e5d in perf_config util/config.c:730
      #9 0x560c875ddea0 in main /home/changbin/work/linux/tools/perf/perf.c:442
      #10 0x7ff3afb8609a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

Signed-off-by: Changbin Du <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Cc: Taeung Song <[email protected]>
Fixes: 20105ca ("perf config: Introduce perf_config_set class")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
[ Upstream commit 8bde851 ]

Detected with gcc's ASan:

  Direct leak of 4356 byte(s) in 120 object(s) allocated from:
      #0 0x7ff1a2b5a070 in __interceptor_strdup (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x3b070)
      #1 0x55719aef4814 in build_id_cache__origname util/build-id.c:215
      #2 0x55719af649b6 in print_sdt_events util/parse-events.c:2339
      #3 0x55719af66272 in print_events util/parse-events.c:2542
      #4 0x55719ad1ecaa in cmd_list /home/changbin/work/linux/tools/perf/builtin-list.c:58
      #5 0x55719aec745d in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
      #6 0x55719aec7d1a in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
      #7 0x55719aec8184 in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
      #8 0x55719aeca41a in main /home/changbin/work/linux/tools/perf/perf.c:520
      #9 0x7ff1a07ae09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

Signed-off-by: Changbin Du <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Fixes: 40218da ("perf list: Show SDT and pre-cached events")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
[ Upstream commit 42dfa45 ]

Using gcc's ASan, Changbin reports:

  =================================================================
  ==7494==ERROR: LeakSanitizer: detected memory leaks

  Direct leak of 48 byte(s) in 1 object(s) allocated from:
      #0 0x7f0333a89138 in calloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xee138)
      #1 0x5625e5330a5e in zalloc util/util.h:23
      #2 0x5625e5330a9b in perf_counts__new util/counts.c:10
      #3 0x5625e5330ca0 in perf_evsel__alloc_counts util/counts.c:47
      #4 0x5625e520d8e5 in __perf_evsel__read_on_cpu util/evsel.c:1505
      #5 0x5625e517a985 in perf_evsel__read_on_cpu /home/work/linux/tools/perf/util/evsel.h:347
      #6 0x5625e517ad1a in test__openat_syscall_event tests/openat-syscall.c:47
      #7 0x5625e51528e6 in run_test tests/builtin-test.c:358
      #8 0x5625e5152baf in test_and_print tests/builtin-test.c:388
      #9 0x5625e51543fe in __cmd_test tests/builtin-test.c:583
      #10 0x5625e515572f in cmd_test tests/builtin-test.c:722
      #11 0x5625e51c3fb8 in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
      #12 0x5625e51c44f7 in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
      #13 0x5625e51c48fb in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
      #14 0x5625e51c5069 in main /home/changbin/work/linux/tools/perf/perf.c:520
      #15 0x7f033214d09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

  Indirect leak of 72 byte(s) in 1 object(s) allocated from:
      #0 0x7f0333a89138 in calloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xee138)
      #1 0x5625e532560d in zalloc util/util.h:23
      #2 0x5625e532566b in xyarray__new util/xyarray.c:10
      #3 0x5625e5330aba in perf_counts__new util/counts.c:15
      #4 0x5625e5330ca0 in perf_evsel__alloc_counts util/counts.c:47
      #5 0x5625e520d8e5 in __perf_evsel__read_on_cpu util/evsel.c:1505
      #6 0x5625e517a985 in perf_evsel__read_on_cpu /home/work/linux/tools/perf/util/evsel.h:347
      #7 0x5625e517ad1a in test__openat_syscall_event tests/openat-syscall.c:47
      #8 0x5625e51528e6 in run_test tests/builtin-test.c:358
      #9 0x5625e5152baf in test_and_print tests/builtin-test.c:388
      #10 0x5625e51543fe in __cmd_test tests/builtin-test.c:583
      #11 0x5625e515572f in cmd_test tests/builtin-test.c:722
      #12 0x5625e51c3fb8 in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
      #13 0x5625e51c44f7 in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
      #14 0x5625e51c48fb in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
      #15 0x5625e51c5069 in main /home/changbin/work/linux/tools/perf/perf.c:520
      #16 0x7f033214d09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

His patch took care of evsel->prev_raw_counts, but the above backtraces
are about evsel->counts, so fix that instead.

Reported-by: Changbin Du <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
…_event_on_all_cpus test

[ Upstream commit 93faa52 ]

  =================================================================
  ==7497==ERROR: LeakSanitizer: detected memory leaks

  Direct leak of 40 byte(s) in 1 object(s) allocated from:
      #0 0x7f0333a88f30 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xedf30)
      #1 0x5625e5326213 in cpu_map__trim_new util/cpumap.c:45
      #2 0x5625e5326703 in cpu_map__read util/cpumap.c:103
      #3 0x5625e53267ef in cpu_map__read_all_cpu_map util/cpumap.c:120
      #4 0x5625e5326915 in cpu_map__new util/cpumap.c:135
      #5 0x5625e517b355 in test__openat_syscall_event_on_all_cpus tests/openat-syscall-all-cpus.c:36
      #6 0x5625e51528e6 in run_test tests/builtin-test.c:358
      #7 0x5625e5152baf in test_and_print tests/builtin-test.c:388
      #8 0x5625e51543fe in __cmd_test tests/builtin-test.c:583
      #9 0x5625e515572f in cmd_test tests/builtin-test.c:722
      #10 0x5625e51c3fb8 in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
      #11 0x5625e51c44f7 in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
      #12 0x5625e51c48fb in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
      #13 0x5625e51c5069 in main /home/changbin/work/linux/tools/perf/perf.c:520
      #14 0x7f033214d09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

Signed-off-by: Changbin Du <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Fixes: f30a79b ("perf tools: Add reference counting for cpu_map object")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
[ Upstream commit f97a899 ]

  =================================================================
  ==7506==ERROR: LeakSanitizer: detected memory leaks

  Direct leak of 13 byte(s) in 3 object(s) allocated from:
      #0 0x7f03339d6070 in __interceptor_strdup (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x3b070)
      #1 0x5625e53aaef0 in expr__find_other util/expr.y:221
      #2 0x5625e51bcd3f in test__expr tests/expr.c:52
      #3 0x5625e51528e6 in run_test tests/builtin-test.c:358
      #4 0x5625e5152baf in test_and_print tests/builtin-test.c:388
      #5 0x5625e51543fe in __cmd_test tests/builtin-test.c:583
      #6 0x5625e515572f in cmd_test tests/builtin-test.c:722
      #7 0x5625e51c3fb8 in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
      #8 0x5625e51c44f7 in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
      #9 0x5625e51c48fb in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
      #10 0x5625e51c5069 in main /home/changbin/work/linux/tools/perf/perf.c:520
      #11 0x7f033214d09a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

Signed-off-by: Changbin Du <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Fixes: 0751673 ("perf tools: Add a simple expression parser for JSON")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Sep 6, 2019
[ Upstream commit d982b33 ]

  =================================================================
  ==20875==ERROR: LeakSanitizer: detected memory leaks

  Direct leak of 1160 byte(s) in 1 object(s) allocated from:
      #0 0x7f1b6fc84138 in calloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xee138)
      #1 0x55bd50005599 in zalloc util/util.h:23
      #2 0x55bd500068f5 in perf_evsel__newtp_idx util/evsel.c:327
      #3 0x55bd4ff810fc in perf_evsel__newtp /home/work/linux/tools/perf/util/evsel.h:216
      #4 0x55bd4ff81608 in test__perf_evsel__tp_sched_test tests/evsel-tp-sched.c:69
      #5 0x55bd4ff528e6 in run_test tests/builtin-test.c:358
      #6 0x55bd4ff52baf in test_and_print tests/builtin-test.c:388
      #7 0x55bd4ff543fe in __cmd_test tests/builtin-test.c:583
      #8 0x55bd4ff5572f in cmd_test tests/builtin-test.c:722
      #9 0x55bd4ffc4087 in run_builtin /home/changbin/work/linux/tools/perf/perf.c:302
      #10 0x55bd4ffc45c6 in handle_internal_command /home/changbin/work/linux/tools/perf/perf.c:354
      #11 0x55bd4ffc49ca in run_argv /home/changbin/work/linux/tools/perf/perf.c:398
      #12 0x55bd4ffc5138 in main /home/changbin/work/linux/tools/perf/perf.c:520
      #13 0x7f1b6e34809a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2409a)

  Indirect leak of 19 byte(s) in 1 object(s) allocated from:
      #0 0x7f1b6fc83f30 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xedf30)
      #1 0x7f1b6e3ac30f in vasprintf (/lib/x86_64-linux-gnu/libc.so.6+0x8830f)

Signed-off-by: Changbin Du <[email protected]>
Reviewed-by: Jiri Olsa <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Fixes: 6a6cd11 ("perf test: Add test for the sched tracepoint format fields")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
gregmarsden pushed a commit that referenced this pull request Nov 15, 2019
[ Upstream commit 347ab94 ]

This patch fixes deadlock warning if removing PWM device
when CONFIG_PROVE_LOCKING is enabled.

This issue can be reproduced by the following steps on
the R-Car H3 Salvator-X board if the backlight is disabled:

 # cd /sys/class/pwm/pwmchip0
 # echo 0 > export
 # ls
 device  export  npwm  power  pwm0  subsystem  uevent  unexport
 # cd device/driver
 # ls
 bind  e6e31000.pwm  uevent  unbind
 # echo e6e31000.pwm > unbind

[   87.659974] ======================================================
[   87.666149] WARNING: possible circular locking dependency detected
[   87.672327] 5.0.0 #7 Not tainted
[   87.675549] ------------------------------------------------------
[   87.681723] bash/2986 is trying to acquire lock:
[   87.686337] 000000005ea0e178 (kn->count#58){++++}, at: kernfs_remove_by_name_ns+0x50/0xa0
[   87.694528]
[   87.694528] but task is already holding lock:
[   87.700353] 000000006313b17c (pwm_lock){+.+.}, at: pwmchip_remove+0x28/0x13c
[   87.707405]
[   87.707405] which lock already depends on the new lock.
[   87.707405]
[   87.715574]
[   87.715574] the existing dependency chain (in reverse order) is:
[   87.723048]
[   87.723048] -> #1 (pwm_lock){+.+.}:
[   87.728017]        __mutex_lock+0x70/0x7e4
[   87.732108]        mutex_lock_nested+0x1c/0x24
[   87.736547]        pwm_request_from_chip.part.6+0x34/0x74
[   87.741940]        pwm_request_from_chip+0x20/0x40
[   87.746725]        export_store+0x6c/0x1f4
[   87.750820]        dev_attr_store+0x18/0x28
[   87.754998]        sysfs_kf_write+0x54/0x64
[   87.759175]        kernfs_fop_write+0xe4/0x1e8
[   87.763615]        __vfs_write+0x40/0x184
[   87.767619]        vfs_write+0xa8/0x19c
[   87.771448]        ksys_write+0x58/0xbc
[   87.775278]        __arm64_sys_write+0x18/0x20
[   87.779721]        el0_svc_common+0xd0/0x124
[   87.783986]        el0_svc_compat_handler+0x1c/0x24
[   87.788858]        el0_svc_compat+0x8/0x18
[   87.792947]
[   87.792947] -> #0 (kn->count#58){++++}:
[   87.798260]        lock_acquire+0xc4/0x22c
[   87.802353]        __kernfs_remove+0x258/0x2c4
[   87.806790]        kernfs_remove_by_name_ns+0x50/0xa0
[   87.811836]        remove_files.isra.1+0x38/0x78
[   87.816447]        sysfs_remove_group+0x48/0x98
[   87.820971]        sysfs_remove_groups+0x34/0x4c
[   87.825583]        device_remove_attrs+0x6c/0x7c
[   87.830197]        device_del+0x11c/0x33c
[   87.834201]        device_unregister+0x14/0x2c
[   87.838638]        pwmchip_sysfs_unexport+0x40/0x4c
[   87.843509]        pwmchip_remove+0xf4/0x13c
[   87.847773]        rcar_pwm_remove+0x28/0x34
[   87.852039]        platform_drv_remove+0x24/0x64
[   87.856651]        device_release_driver_internal+0x18c/0x21c
[   87.862391]        device_release_driver+0x14/0x1c
[   87.867175]        unbind_store+0xe0/0x124
[   87.871265]        drv_attr_store+0x20/0x30
[   87.875442]        sysfs_kf_write+0x54/0x64
[   87.879618]        kernfs_fop_write+0xe4/0x1e8
[   87.884055]        __vfs_write+0x40/0x184
[   87.888057]        vfs_write+0xa8/0x19c
[   87.891887]        ksys_write+0x58/0xbc
[   87.895716]        __arm64_sys_write+0x18/0x20
[   87.900154]        el0_svc_common+0xd0/0x124
[   87.904417]        el0_svc_compat_handler+0x1c/0x24
[   87.909289]        el0_svc_compat+0x8/0x18
[   87.913378]
[   87.913378] other info that might help us debug this:
[   87.913378]
[   87.921374]  Possible unsafe locking scenario:
[   87.921374]
[   87.927286]        CPU0                    CPU1
[   87.931808]        ----                    ----
[   87.936331]   lock(pwm_lock);
[   87.939293]                                lock(kn->count#58);
[   87.945120]                                lock(pwm_lock);
[   87.950599]   lock(kn->count#58);
[   87.953908]
[   87.953908]  *** DEADLOCK ***
[   87.953908]
[   87.959821] 4 locks held by bash/2986:
[   87.963563]  #0: 00000000ace7bc30 (sb_writers#6){.+.+}, at: vfs_write+0x188/0x19c
[   87.971044]  #1: 00000000287991b2 (&of->mutex){+.+.}, at: kernfs_fop_write+0xb4/0x1e8
[   87.978872]  #2: 00000000f739d016 (&dev->mutex){....}, at: device_release_driver_internal+0x40/0x21c
[   87.988001]  #3: 000000006313b17c (pwm_lock){+.+.}, at: pwmchip_remove+0x28/0x13c
[   87.995481]
[   87.995481] stack backtrace:
[   87.999836] CPU: 0 PID: 2986 Comm: bash Not tainted 5.0.0 #7
[   88.005489] Hardware name: Renesas Salvator-X board based on r8a7795 ES1.x (DT)
[   88.012791] Call trace:
[   88.015235]  dump_backtrace+0x0/0x190
[   88.018891]  show_stack+0x14/0x1c
[   88.022204]  dump_stack+0xb0/0xec
[   88.025514]  print_circular_bug.isra.32+0x1d0/0x2e0
[   88.030385]  __lock_acquire+0x1318/0x1864
[   88.034388]  lock_acquire+0xc4/0x22c
[   88.037958]  __kernfs_remove+0x258/0x2c4
[   88.041874]  kernfs_remove_by_name_ns+0x50/0xa0
[   88.046398]  remove_files.isra.1+0x38/0x78
[   88.050487]  sysfs_remove_group+0x48/0x98
[   88.054490]  sysfs_remove_groups+0x34/0x4c
[   88.058580]  device_remove_attrs+0x6c/0x7c
[   88.062671]  device_del+0x11c/0x33c
[   88.066154]  device_unregister+0x14/0x2c
[   88.070070]  pwmchip_sysfs_unexport+0x40/0x4c
[   88.074421]  pwmchip_remove+0xf4/0x13c
[   88.078163]  rcar_pwm_remove+0x28/0x34
[   88.081906]  platform_drv_remove+0x24/0x64
[   88.085996]  device_release_driver_internal+0x18c/0x21c
[   88.091215]  device_release_driver+0x14/0x1c
[   88.095478]  unbind_store+0xe0/0x124
[   88.099048]  drv_attr_store+0x20/0x30
[   88.102704]  sysfs_kf_write+0x54/0x64
[   88.106359]  kernfs_fop_write+0xe4/0x1e8
[   88.110275]  __vfs_write+0x40/0x184
[   88.113757]  vfs_write+0xa8/0x19c
[   88.117065]  ksys_write+0x58/0xbc
[   88.120374]  __arm64_sys_write+0x18/0x20
[   88.124291]  el0_svc_common+0xd0/0x124
[   88.128034]  el0_svc_compat_handler+0x1c/0x24
[   88.132384]  el0_svc_compat+0x8/0x18

The sysfs unexport in pwmchip_remove() is completely asymmetric
to what we do in pwmchip_add_with_polarity() and commit 0733424
("pwm: Unexport children before chip removal") is a strong indication
that this was wrong to begin with. We should just move
pwmchip_sysfs_unexport() where it belongs, which is right after
pwmchip_sysfs_unexport_children(). In that case, we do not need
separate functions anymore either.

We also really want to remove sysfs irrespective of whether or not
the chip will be removed as a result of pwmchip_remove(). We can only
assume that the driver will be gone after that, so we shouldn't leave
any dangling sysfs files around.

This warning disappears if we move pwmchip_sysfs_unexport() to
the top of pwmchip_remove(), right after pwmchip_sysfs_unexport_children().
That way it is also outside of the pwm_lock section, which indeed
doesn't seem to be needed.

Moving the pwmchip_sysfs_export() call outside of that section also
seems fine and it'd be perfectly symmetric with pwmchip_remove() again.

So, this patch fixes them.
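
A simplified sketch of the resulting ordering (illustrative only, not the
literal drivers/pwm/core.c diff; chip teardown details are elided): the sysfs
entries are dropped before pwm_lock is taken.

  int pwmchip_remove(struct pwm_chip *chip)
  {
          /* Remove the sysfs entries (children included) first, outside
           * of pwm_lock, so kernfs removal never nests inside the mutex.
           */
          pwmchip_sysfs_unexport(chip);

          mutex_lock(&pwm_lock);
          /* ... unregister the chip and release its PWM descriptors ... */
          mutex_unlock(&pwm_lock);

          return 0;
  }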

Signed-off-by: Phong Hoang <[email protected]>
[shimoda: revise the commit log and code]
Fixes: 76abbdd ("pwm: Add sysfs interface")
Fixes: 0733424 ("pwm: Unexport children before chip removal")
Signed-off-by: Yoshihiro Shimoda <[email protected]>
Tested-by: Hoan Nguyen An <[email protected]>
Reviewed-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Reviewed-by: Uwe Kleine-König <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
gregmarsden pushed a commit that referenced this pull request Nov 29, 2019
commit d0a255e upstream.

A deadlock with this stacktrace was observed.

The loop thread does a GFP_KERNEL allocation, which calls into the dm-bufio
shrinker, and the shrinker depends on I/O completion in the dm-bufio
subsystem.

In order to fix the deadlock (and other similar ones), we set the flag
PF_MEMALLOC_NOIO at loop thread entry.

PID: 474    TASK: ffff8813e11f4600  CPU: 10  COMMAND: "kswapd0"
   #0 [ffff8813dedfb938] __schedule at ffffffff8173f405
   #1 [ffff8813dedfb990] schedule at ffffffff8173fa27
   #2 [ffff8813dedfb9b0] schedule_timeout at ffffffff81742fec
   #3 [ffff8813dedfba60] io_schedule_timeout at ffffffff8173f186
   #4 [ffff8813dedfbaa0] bit_wait_io at ffffffff8174034f
   #5 [ffff8813dedfbac0] __wait_on_bit at ffffffff8173fec8
   #6 [ffff8813dedfbb10] out_of_line_wait_on_bit at ffffffff8173ff81
   #7 [ffff8813dedfbb90] __make_buffer_clean at ffffffffa038736f [dm_bufio]
   #8 [ffff8813dedfbbb0] __try_evict_buffer at ffffffffa0387bb8 [dm_bufio]
   #9 [ffff8813dedfbbd0] dm_bufio_shrink_scan at ffffffffa0387cc3 [dm_bufio]
  #10 [ffff8813dedfbc40] shrink_slab at ffffffff811a87ce
  #11 [ffff8813dedfbd30] shrink_zone at ffffffff811ad778
  #12 [ffff8813dedfbdc0] kswapd at ffffffff811ae92f
  #13 [ffff8813dedfbec0] kthread at ffffffff810a8428
  #14 [ffff8813dedfbf50] ret_from_fork at ffffffff81745242

  PID: 14127  TASK: ffff881455749c00  CPU: 11  COMMAND: "loop1"
   #0 [ffff88272f5af228] __schedule at ffffffff8173f405
   #1 [ffff88272f5af280] schedule at ffffffff8173fa27
   #2 [ffff88272f5af2a0] schedule_preempt_disabled at ffffffff8173fd5e
   #3 [ffff88272f5af2b0] __mutex_lock_slowpath at ffffffff81741fb5
   #4 [ffff88272f5af330] mutex_lock at ffffffff81742133
   #5 [ffff88272f5af350] dm_bufio_shrink_count at ffffffffa03865f9 [dm_bufio]
   #6 [ffff88272f5af380] shrink_slab at ffffffff811a86bd
   #7 [ffff88272f5af470] shrink_zone at ffffffff811ad778
   #8 [ffff88272f5af500] do_try_to_free_pages at ffffffff811adb34
   #9 [ffff88272f5af590] try_to_free_pages at ffffffff811adef8
  #10 [ffff88272f5af610] __alloc_pages_nodemask at ffffffff811a09c3
  #11 [ffff88272f5af710] alloc_pages_current at ffffffff811e8b71
  #12 [ffff88272f5af760] new_slab at ffffffff811f4523
  #13 [ffff88272f5af7b0] __slab_alloc at ffffffff8173a1b5
  #14 [ffff88272f5af880] kmem_cache_alloc at ffffffff811f484b
  #15 [ffff88272f5af8d0] do_blockdev_direct_IO at ffffffff812535b3
  #16 [ffff88272f5afb00] __blockdev_direct_IO at ffffffff81255dc3
  #17 [ffff88272f5afb30] xfs_vm_direct_IO at ffffffffa01fe3fc [xfs]
  #18 [ffff88272f5afb90] generic_file_read_iter at ffffffff81198994
  #19 [ffff88272f5afc50] __dta_xfs_file_read_iter_2398 at ffffffffa020c970 [xfs]
  #20 [ffff88272f5afcc0] lo_rw_aio at ffffffffa0377042 [loop]
  #21 [ffff88272f5afd70] loop_queue_work at ffffffffa0377c3b [loop]
  #22 [ffff88272f5afe60] kthread_worker_fn at ffffffff810a8a0c
  #23 [ffff88272f5afec0] kthread at ffffffff810a8428
  #24 [ffff88272f5aff50] ret_from_fork at ffffffff81745242
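
A minimal sketch of the idea (illustrative, not the exact loop driver hunk):
mark the worker thread at entry so that every allocation it makes implicitly
behaves as GFP_NOIO and reclaim cannot recurse back into the I/O path.

  #include <linux/kthread.h>
  #include <linux/sched.h>

  static int example_worker_fn(void *worker)
  {
          /* All allocations from this thread now avoid starting new I/O. */
          current->flags |= PF_MEMALLOC_NOIO;
          return kthread_worker_fn(worker);
  }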

Signed-off-by: Mikulas Patocka <[email protected]>
Cc: [email protected]
Signed-off-by: Jens Axboe <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
gregmarsden pushed a commit that referenced this pull request Nov 29, 2019
commit cf3591e upstream.

Revert the commit bd293d0. The proper
fix has been made available with commit d0a255e ("loop: set
PF_MEMALLOC_NOIO for the worker thread").

Note that the fix offered by commit bd293d0 doesn't really prevent
the deadlock from occurring - if we look at the stacktrace reported by
Junxiao Bi, we see that it hangs in bit_wait_io and not on the mutex -
i.e. it has already successfully taken the mutex. Changing the mutex
from mutex_lock to mutex_trylock won't help with deadlocks that happen
afterwards.

PID: 474    TASK: ffff8813e11f4600  CPU: 10  COMMAND: "kswapd0"
   #0 [ffff8813dedfb938] __schedule at ffffffff8173f405
   #1 [ffff8813dedfb990] schedule at ffffffff8173fa27
   #2 [ffff8813dedfb9b0] schedule_timeout at ffffffff81742fec
   #3 [ffff8813dedfba60] io_schedule_timeout at ffffffff8173f186
   #4 [ffff8813dedfbaa0] bit_wait_io at ffffffff8174034f
   #5 [ffff8813dedfbac0] __wait_on_bit at ffffffff8173fec8
   #6 [ffff8813dedfbb10] out_of_line_wait_on_bit at ffffffff8173ff81
   #7 [ffff8813dedfbb90] __make_buffer_clean at ffffffffa038736f [dm_bufio]
   #8 [ffff8813dedfbbb0] __try_evict_buffer at ffffffffa0387bb8 [dm_bufio]
   #9 [ffff8813dedfbbd0] dm_bufio_shrink_scan at ffffffffa0387cc3 [dm_bufio]
  #10 [ffff8813dedfbc40] shrink_slab at ffffffff811a87ce
  #11 [ffff8813dedfbd30] shrink_zone at ffffffff811ad778
  #12 [ffff8813dedfbdc0] kswapd at ffffffff811ae92f
  #13 [ffff8813dedfbec0] kthread at ffffffff810a8428
  #14 [ffff8813dedfbf50] ret_from_fork at ffffffff81745242

Signed-off-by: Mikulas Patocka <[email protected]>
Cc: [email protected]
Fixes: bd293d0 ("dm bufio: fix deadlock with loop device")
Depends-on: d0a255e ("loop: set PF_MEMALLOC_NOIO for the worker thread")
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
gregmarsden pushed a commit that referenced this pull request Nov 29, 2019
[ Upstream commit 0216234 ]

We release the wrong pointer on the error path in the cpu_cache_level__read()
function, leading to a segfault:

  (gdb) r record ls
  Starting program: /root/perf/tools/perf/perf record ls
  ...
  [ perf record: Woken up 1 times to write data ]
  double free or corruption (out)

  Thread 1 "perf" received signal SIGABRT, Aborted.
  0x00007ffff7463798 in raise () from /lib64/power9/libc.so.6
  (gdb) bt
  #0  0x00007ffff7463798 in raise () from /lib64/power9/libc.so.6
  #1  0x00007ffff7443bac in abort () from /lib64/power9/libc.so.6
  #2  0x00007ffff74af8bc in __libc_message () from /lib64/power9/libc.so.6
  #3  0x00007ffff74b92b8 in malloc_printerr () from /lib64/power9/libc.so.6
  #4  0x00007ffff74bb874 in _int_free () from /lib64/power9/libc.so.6
  #5  0x0000000010271260 in __zfree (ptr=0x7fffffffa0b0) at ../../lib/zalloc..
  #6  0x0000000010139340 in cpu_cache_level__read (cache=0x7fffffffa090, cac..
  #7  0x0000000010143c90 in build_caches (cntp=0x7fffffffa118, size=<optimiz..
  ...

Release the proper pointer instead.
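
For illustration only (plain C, not the actual tools/perf code), the bug
pattern and the fix look roughly like this: the error path must free only
what this call allocated and clear the pointer so the caller cannot free it
again.

  #include <stdlib.h>
  #include <string.h>

  struct level {
          char *type;
          char *map;
  };

  static int read_level(struct level *l)
  {
          l->type = strdup("Unified");
          if (!l->type)
                  return -1;

          l->map = strdup("0-3");
          if (!l->map) {
                  /* Free what we allocated here and clear it, instead of
                   * freeing a pointer the caller still owns.
                   */
                  free(l->type);
                  l->type = NULL;
                  return -1;
          }

          return 0;
  }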

Fixes: 720e98b ("perf tools: Add perf data cache feature")
Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Michael Petlan <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected] # v4.6+
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
gregmarsden pushed a commit that referenced this pull request Nov 29, 2019
[ Upstream commit 443f2d5 ]

Observe a segmentation fault when 'perf stat' is asked to repeat forever
with the interval option.

Without fix:

  # perf stat -r 0 -I 5000 -e cycles -a sleep 10
  #           time             counts unit events
       5.000211692  3,13,89,82,34,157      cycles
      10.000380119  1,53,98,52,22,294      cycles
      10.040467280       17,16,79,265      cycles
  Segmentation fault

This problem was only observed with the forever option, aka -r 0; it
works fine with a limited number of repeats. Calling print_counter()
with ts set to NULL is not correct when the interval option is set.
Hence avoid print_counter(NULL, ...) if interval is set.

With fix:

  # perf stat -r 0 -I 5000 -e cycles -a sleep 10
   #           time             counts unit events
       5.019866622  3,15,14,43,08,697      cycles
      10.039865756  3,15,16,31,95,261      cycles
      10.059950628     1,26,05,47,158      cycles
       5.009902655  3,14,52,62,33,932      cycles
      10.019880228  3,14,52,22,89,154      cycles
      10.030543876       66,90,18,333      cycles
       5.009848281  3,14,51,98,25,437      cycles
      10.029854402  3,15,14,93,04,918      cycles
       5.009834177  3,14,51,95,92,316      cycles
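
The shape of the fix is a simple guard (a sketch with assumed variable names,
not the literal builtin-stat.c diff):

  /* 'forever' corresponds to -r 0, 'interval' to -I <msecs>. */
  if (forever && !interval)
          print_counters(NULL, argc, argv);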

Committer notes:

I did a 'git bisect' to find the cset introducing the problem and add the
Fixes tag below; at that time the problem reproduced as:

  (gdb) run stat -r0 -I500 sleep 1
  <SNIP>
  Program received signal SIGSEGV, Segmentation fault.
  print_interval (prefix=prefix@entry=0x7fffffffc8d0 "", ts=ts@entry=0x0) at builtin-stat.c:866
  866		sprintf(prefix, "%6lu.%09lu%s", ts->tv_sec, ts->tv_nsec, csv_sep);
  (gdb) bt
  #0  print_interval (prefix=prefix@entry=0x7fffffffc8d0 "", ts=ts@entry=0x0) at builtin-stat.c:866
  #1  0x000000000041860a in print_counters (ts=ts@entry=0x0, argc=argc@entry=2, argv=argv@entry=0x7fffffffd640) at builtin-stat.c:938
  #2  0x0000000000419a7f in cmd_stat (argc=2, argv=0x7fffffffd640, prefix=<optimized out>) at builtin-stat.c:1411
  #3  0x000000000045c65a in run_builtin (p=p@entry=0x6291b8 <commands+216>, argc=argc@entry=5, argv=argv@entry=0x7fffffffd640) at perf.c:370
  #4  0x000000000045c893 in handle_internal_command (argc=5, argv=0x7fffffffd640) at perf.c:429
  #5  0x000000000045c8f1 in run_argv (argcp=argcp@entry=0x7fffffffd4ac, argv=argv@entry=0x7fffffffd4a0) at perf.c:473
  #6  0x000000000045cac9 in main (argc=<optimized out>, argv=<optimized out>) at perf.c:588
  (gdb)

Mostly the same as just before this patch:

  Program received signal SIGSEGV, Segmentation fault.
  0x00000000005874a7 in print_interval (config=0xa1f2a0 <stat_config>, evlist=0xbc9b90, prefix=0x7fffffffd1c0 "`", ts=0x0) at util/stat-display.c:964
  964		sprintf(prefix, "%6lu.%09lu%s", ts->tv_sec, ts->tv_nsec, config->csv_sep);
  (gdb) bt
  #0  0x00000000005874a7 in print_interval (config=0xa1f2a0 <stat_config>, evlist=0xbc9b90, prefix=0x7fffffffd1c0 "`", ts=0x0) at util/stat-display.c:964
  #1  0x0000000000588047 in perf_evlist__print_counters (evlist=0xbc9b90, config=0xa1f2a0 <stat_config>, _target=0xa1f0c0 <target>, ts=0x0, argc=2, argv=0x7fffffffd670)
      at util/stat-display.c:1172
  #2  0x000000000045390f in print_counters (ts=0x0, argc=2, argv=0x7fffffffd670) at builtin-stat.c:656
  #3  0x0000000000456bb5 in cmd_stat (argc=2, argv=0x7fffffffd670) at builtin-stat.c:1960
  #4  0x00000000004dd2e0 in run_builtin (p=0xa30e00 <commands+288>, argc=5, argv=0x7fffffffd670) at perf.c:310
  #5  0x00000000004dd54d in handle_internal_command (argc=5, argv=0x7fffffffd670) at perf.c:362
  #6  0x00000000004dd694 in run_argv (argcp=0x7fffffffd4cc, argv=0x7fffffffd4c0) at perf.c:406
  #7  0x00000000004dda11 in main (argc=5, argv=0x7fffffffd670) at perf.c:531
  (gdb)

Fixes: d4f63a4 ("perf stat: Introduce print_counters function")
Signed-off-by: Srikar Dronamraju <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Tested-by: Ravi Bangoria <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Naveen N. Rao <[email protected]>
Cc: [email protected] # v4.2+
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
gregmarsden pushed a commit that referenced this pull request Mar 6, 2020
Orabug: 30657780

The shrinker holds the sb->s_umount lock and invokes .destroy_inode to
reclaim an inode. If the fs was frozen, the shrinker would hang on the
freeze lock, but the unfreeze could never happen because it would in
turn hang on sb->s_umount.

The background inode inactivation feature could fix this, but it has not
been merged into mainline yet. According to Darrick, even if it were
merged, it would be nearly impossible to backport to 4.14. The effort
here is to make an OFF-MAINLINE fix for uek only; if a future uek merges
that feature, this patch should be dropped.

To avoid the deadlock, add inodes that need inactivation to a list and
destroy them asynchronously.

 crash7latest> set 132
     PID: 132
 COMMAND: "kswapd0:0"
    TASK: ffff9cdc9dfb5f00  [THREAD_INFO: ffff9cdc9dfb5f00]
     CPU: 6
   STATE: TASK_UNINTERRUPTIBLE
 crash7latest> bt
 PID: 132    TASK: ffff9cdc9dfb5f00  CPU: 6   COMMAND: "kswapd0:0"
  #0 [ffffaa5d075bf900] __schedule at ffffffff8186487c
  #1 [ffffaa5d075bf998] schedule at ffffffff81864e96
  #2 [ffffaa5d075bf9b0] rwsem_down_read_failed at ffffffff818689ee
  #3 [ffffaa5d075bfa40] call_rwsem_down_read_failed at ffffffff81859308
  #4 [ffffaa5d075bfa90] __percpu_down_read at ffffffff810ebd38
  #5 [ffffaa5d075bfab0] __sb_start_write at ffffffff812859ef
  #6 [ffffaa5d075bfad0] xfs_trans_alloc at ffffffffc07ebe9c [xfs]
  #7 [ffffaa5d075bfb18] xfs_free_eofblocks at ffffffffc07c39d1 [xfs]
  #8 [ffffaa5d075bfb80] xfs_inactive at ffffffffc07de878 [xfs]
  #9 [ffffaa5d075bfba0] __dta_xfs_fs_destroy_inode_3543 at ffffffffc07e885e [xfs]
 #10 [ffffaa5d075bfbd0] destroy_inode at ffffffff812a25de
 #11 [ffffaa5d075bfbe8] evict at ffffffff812a2b73
 #12 [ffffaa5d075bfc10] dispose_list at ffffffff812a2c1d
 #13 [ffffaa5d075bfc38] prune_icache_sb at ffffffff812a421a
 #14 [ffffaa5d075bfc70] super_cache_scan at ffffffff812870a1
 #15 [ffffaa5d075bfcc8] shrink_slab at ffffffff811eebb3
 #16 [ffffaa5d075bfdb0] shrink_node at ffffffff811f4788
 #17 [ffffaa5d075bfe38] kswapd at ffffffff811f58c3
 #18 [ffffaa5d075bff08] kthread at ffffffff810b75d5
 #19 [ffffaa5d075bff50] ret_from_fork at ffffffff81a0035e
 crash7latest> set 31060
     PID: 31060
 COMMAND: "safefreeze"
    TASK: ffff9cd292868000  [THREAD_INFO: ffff9cd292868000]
     CPU: 2
   STATE: TASK_UNINTERRUPTIBLE
 crash7latest> bt
 PID: 31060  TASK: ffff9cd292868000  CPU: 2   COMMAND: "safefreeze"
  #0 [ffffaa5d10047c90] __schedule at ffffffff8186487c
  #1 [ffffaa5d10047d28] schedule at ffffffff81864e96
  #2 [ffffaa5d10047d40] rwsem_down_write_failed at ffffffff81868f18
  #3 [ffffaa5d10047dd8] call_rwsem_down_write_failed at ffffffff81859367
  #4 [ffffaa5d10047e20] down_write at ffffffff81867cfd
  #5 [ffffaa5d10047e38] thaw_super at ffffffff81285d2d
  #6 [ffffaa5d10047e60] do_vfs_ioctl at ffffffff81299566
  #7 [ffffaa5d10047ee8] sys_ioctl at ffffffff81299709
  #8 [ffffaa5d10047f28] do_syscall_64 at ffffffff81003949
  #9 [ffffaa5d10047f50] entry_SYSCALL_64_after_hwframe at ffffffff81a001ad
     RIP: 0000000000453d67  RSP: 00007ffff9c1ce78  RFLAGS: 00000206
     RAX: ffffffffffffffda  RBX: 0000000001cbe92c  RCX: 0000000000453d67
     RDX: 0000000000000000  RSI: 00000000c0045878  RDI: 0000000000000014
     RBP: 00007ffff9c1cf80   R8: 0000000000000000   R9: 0000000000000012
     R10: 0000000000000008  R11: 0000000000000206  R12: 0000000000401fb0
     R13: 0000000000402040  R14: 0000000000000000  R15: 0000000000000000
     ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b
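
A rough sketch of the deferred-destroy idea (hypothetical names, not the
actual UEK XFS patch): the shrinker only queues the inode, and a worker
performs the heavy inactivation outside of the reclaim and freeze paths.

  #include <linux/fs.h>
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  struct deferred_inode {
          struct list_head list;
          struct inode *inode;
  };

  static LIST_HEAD(inactive_list);
  static DEFINE_SPINLOCK(inactive_lock);

  static void inactive_worker(struct work_struct *work)
  {
          struct deferred_inode *di, *tmp;
          LIST_HEAD(todo);

          spin_lock(&inactive_lock);
          list_splice_init(&inactive_list, &todo);
          spin_unlock(&inactive_lock);

          list_for_each_entry_safe(di, tmp, &todo, list) {
                  list_del(&di->list);
                  /* the expensive inactivation runs here, outside the
                   * shrinker and freeze paths
                   */
                  kfree(di);
          }
  }
  static DECLARE_WORK(inactive_work, inactive_worker);

The .destroy_inode path would then only queue the inode on inactive_list and
call schedule_work(&inactive_work) instead of doing the inactivation inline.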

Signed-off-by: Junxiao Bi <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Signed-off-by: Somasundaram Krishnasamy <[email protected]>
gregmarsden pushed a commit that referenced this pull request Mar 6, 2020
Orabug: 30898008

The shrinker holds the sb->s_umount lock and invokes .destroy_inode to
reclaim an inode. If the fs was frozen, the shrinker would hang on the
freeze lock, but the unfreeze could never happen because it would in
turn hang on sb->s_umount.

The background inode inactivation feature could fix this, but it has not
been merged into mainline yet. According to Darrick, even if it were
merged, it would be nearly impossible to backport to 4.14. The effort
here is to make an OFF-MAINLINE fix for uek only; if a future uek merges
that feature, this patch should be dropped.

To avoid the deadlock, add inodes that need inactivation to a list and
destroy them asynchronously.

 crash7latest> set 132
     PID: 132
 COMMAND: "kswapd0:0"
    TASK: ffff9cdc9dfb5f00  [THREAD_INFO: ffff9cdc9dfb5f00]
     CPU: 6
   STATE: TASK_UNINTERRUPTIBLE
 crash7latest> bt
 PID: 132    TASK: ffff9cdc9dfb5f00  CPU: 6   COMMAND: "kswapd0:0"
  #0 [ffffaa5d075bf900] __schedule at ffffffff8186487c
  #1 [ffffaa5d075bf998] schedule at ffffffff81864e96
  #2 [ffffaa5d075bf9b0] rwsem_down_read_failed at ffffffff818689ee
  #3 [ffffaa5d075bfa40] call_rwsem_down_read_failed at ffffffff81859308
  #4 [ffffaa5d075bfa90] __percpu_down_read at ffffffff810ebd38
  #5 [ffffaa5d075bfab0] __sb_start_write at ffffffff812859ef
  #6 [ffffaa5d075bfad0] xfs_trans_alloc at ffffffffc07ebe9c [xfs]
  #7 [ffffaa5d075bfb18] xfs_free_eofblocks at ffffffffc07c39d1 [xfs]
  #8 [ffffaa5d075bfb80] xfs_inactive at ffffffffc07de878 [xfs]
  #9 [ffffaa5d075bfba0] __dta_xfs_fs_destroy_inode_3543 at ffffffffc07e885e [xfs]
 #10 [ffffaa5d075bfbd0] destroy_inode at ffffffff812a25de
 #11 [ffffaa5d075bfbe8] evict at ffffffff812a2b73
 #12 [ffffaa5d075bfc10] dispose_list at ffffffff812a2c1d
 #13 [ffffaa5d075bfc38] prune_icache_sb at ffffffff812a421a
 #14 [ffffaa5d075bfc70] super_cache_scan at ffffffff812870a1
 #15 [ffffaa5d075bfcc8] shrink_slab at ffffffff811eebb3
 #16 [ffffaa5d075bfdb0] shrink_node at ffffffff811f4788
 #17 [ffffaa5d075bfe38] kswapd at ffffffff811f58c3
 #18 [ffffaa5d075bff08] kthread at ffffffff810b75d5
 #19 [ffffaa5d075bff50] ret_from_fork at ffffffff81a0035e
 crash7latest> set 31060
     PID: 31060
 COMMAND: "safefreeze"
    TASK: ffff9cd292868000  [THREAD_INFO: ffff9cd292868000]
     CPU: 2
   STATE: TASK_UNINTERRUPTIBLE
 crash7latest> bt
 PID: 31060  TASK: ffff9cd292868000  CPU: 2   COMMAND: "safefreeze"
  #0 [ffffaa5d10047c90] __schedule at ffffffff8186487c
  #1 [ffffaa5d10047d28] schedule at ffffffff81864e96
  #2 [ffffaa5d10047d40] rwsem_down_write_failed at ffffffff81868f18
  #3 [ffffaa5d10047dd8] call_rwsem_down_write_failed at ffffffff81859367
  #4 [ffffaa5d10047e20] down_write at ffffffff81867cfd
  #5 [ffffaa5d10047e38] thaw_super at ffffffff81285d2d
  #6 [ffffaa5d10047e60] do_vfs_ioctl at ffffffff81299566
  #7 [ffffaa5d10047ee8] sys_ioctl at ffffffff81299709
  #8 [ffffaa5d10047f28] do_syscall_64 at ffffffff81003949
  #9 [ffffaa5d10047f50] entry_SYSCALL_64_after_hwframe at ffffffff81a001ad
     RIP: 0000000000453d67  RSP: 00007ffff9c1ce78  RFLAGS: 00000206
     RAX: ffffffffffffffda  RBX: 0000000001cbe92c  RCX: 0000000000453d67
     RDX: 0000000000000000  RSI: 00000000c0045878  RDI: 0000000000000014
     RBP: 00007ffff9c1cf80   R8: 0000000000000000   R9: 0000000000000012
     R10: 0000000000000008  R11: 0000000000000206  R12: 0000000000401fb0
     R13: 0000000000402040  R14: 0000000000000000  R15: 0000000000000000
     ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b

Signed-off-by: Junxiao Bi <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Signed-off-by: Somasundaram Krishnasamy <[email protected]>
LinuxMinion pushed a commit that referenced this pull request Mar 20, 2020
Orabug: 30898016

The shrinker holds the sb->s_umount lock and invokes .destroy_inode to
reclaim an inode. If the fs was frozen, the shrinker would hang on the
freeze lock, but the unfreeze could never happen because it would in
turn hang on sb->s_umount.

The background inode inactivation feature could fix this, but it has not
been merged into mainline yet. According to Darrick, even if it were
merged, it would be nearly impossible to backport to 4.14. The effort
here is to make an OFF-MAINLINE fix for uek only; if a future uek merges
that feature, this patch should be dropped.

To avoid the deadlock, add inodes that need inactivation to a list and
destroy them asynchronously.

 crash7latest> set 132
     PID: 132
 COMMAND: "kswapd0:0"
    TASK: ffff9cdc9dfb5f00  [THREAD_INFO: ffff9cdc9dfb5f00]
     CPU: 6
   STATE: TASK_UNINTERRUPTIBLE
 crash7latest> bt
 PID: 132    TASK: ffff9cdc9dfb5f00  CPU: 6   COMMAND: "kswapd0:0"
  #0 [ffffaa5d075bf900] __schedule at ffffffff8186487c
  #1 [ffffaa5d075bf998] schedule at ffffffff81864e96
  #2 [ffffaa5d075bf9b0] rwsem_down_read_failed at ffffffff818689ee
  #3 [ffffaa5d075bfa40] call_rwsem_down_read_failed at ffffffff81859308
  #4 [ffffaa5d075bfa90] __percpu_down_read at ffffffff810ebd38
  #5 [ffffaa5d075bfab0] __sb_start_write at ffffffff812859ef
  #6 [ffffaa5d075bfad0] xfs_trans_alloc at ffffffffc07ebe9c [xfs]
  #7 [ffffaa5d075bfb18] xfs_free_eofblocks at ffffffffc07c39d1 [xfs]
  #8 [ffffaa5d075bfb80] xfs_inactive at ffffffffc07de878 [xfs]
  #9 [ffffaa5d075bfba0] __dta_xfs_fs_destroy_inode_3543 at ffffffffc07e885e [xfs]
 #10 [ffffaa5d075bfbd0] destroy_inode at ffffffff812a25de
 #11 [ffffaa5d075bfbe8] evict at ffffffff812a2b73
 #12 [ffffaa5d075bfc10] dispose_list at ffffffff812a2c1d
 #13 [ffffaa5d075bfc38] prune_icache_sb at ffffffff812a421a
 #14 [ffffaa5d075bfc70] super_cache_scan at ffffffff812870a1
 #15 [ffffaa5d075bfcc8] shrink_slab at ffffffff811eebb3
 #16 [ffffaa5d075bfdb0] shrink_node at ffffffff811f4788
 #17 [ffffaa5d075bfe38] kswapd at ffffffff811f58c3
 #18 [ffffaa5d075bff08] kthread at ffffffff810b75d5
 #19 [ffffaa5d075bff50] ret_from_fork at ffffffff81a0035e
 crash7latest> set 31060
     PID: 31060
 COMMAND: "safefreeze"
    TASK: ffff9cd292868000  [THREAD_INFO: ffff9cd292868000]
     CPU: 2
   STATE: TASK_UNINTERRUPTIBLE
 crash7latest> bt
 PID: 31060  TASK: ffff9cd292868000  CPU: 2   COMMAND: "safefreeze"
  #0 [ffffaa5d10047c90] __schedule at ffffffff8186487c
  #1 [ffffaa5d10047d28] schedule at ffffffff81864e96
  #2 [ffffaa5d10047d40] rwsem_down_write_failed at ffffffff81868f18
  #3 [ffffaa5d10047dd8] call_rwsem_down_write_failed at ffffffff81859367
  #4 [ffffaa5d10047e20] down_write at ffffffff81867cfd
  #5 [ffffaa5d10047e38] thaw_super at ffffffff81285d2d
  #6 [ffffaa5d10047e60] do_vfs_ioctl at ffffffff81299566
  #7 [ffffaa5d10047ee8] sys_ioctl at ffffffff81299709
  #8 [ffffaa5d10047f28] do_syscall_64 at ffffffff81003949
  #9 [ffffaa5d10047f50] entry_SYSCALL_64_after_hwframe at ffffffff81a001ad
     RIP: 0000000000453d67  RSP: 00007ffff9c1ce78  RFLAGS: 00000206
     RAX: ffffffffffffffda  RBX: 0000000001cbe92c  RCX: 0000000000453d67
     RDX: 0000000000000000  RSI: 00000000c0045878  RDI: 0000000000000014
     RBP: 00007ffff9c1cf80   R8: 0000000000000000   R9: 0000000000000012
     R10: 0000000000000008  R11: 0000000000000206  R12: 0000000000401fb0
     R13: 0000000000402040  R14: 0000000000000000  R15: 0000000000000000
     ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b

Signed-off-by: Junxiao Bi <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Signed-off-by: Somasundaram Krishnasamy <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Jan 28, 2025
Add a check to mlx5e_xmit() for shorter frames. A corrupted/malformed
packet with a shorter length can eventually cause a system panic further
down in the code path. Avoid this by validating the length and dropping
such packets as early as possible.

Following is seen in our env with shorter skb->len

crash> bt
PID: 76981    TASK: ff19828cfe508000  CPU: 106  COMMAND: "vhost-76942"
 #0 [ff2d20159b39f2c8] machine_kexec at ffffffffad884801
 #1 [ff2d20159b39f328] __crash_kexec at ffffffffad976142
 #2 [ff2d20159b39f3f8] panic at ffffffffad8b3640
 #3 [ff2d20159b39f4a0] no_context at ffffffffad8954e1
 #4 [ff2d20159b39f518] __bad_area_nosemaphore at ffffffffad8958de
 #5 [ff2d20159b39f578] bad_area_nosemaphore at ffffffffad895a96
 #6 [ff2d20159b39f588] do_kern_addr_fault at ffffffffad89688e
 #7 [ff2d20159b39f5b0] __do_page_fault at ffffffffad896b30
 #8 [ff2d20159b39f618] do_page_fault at ffffffffad896db6
 #9 [ff2d20159b39f650] page_fault at ffffffffae402acd
    [exception RIP: memcpy_erms+6]
    RIP: ffffffffae261ab6  RSP: ff2d20159b39f700  RFLAGS: 00010293
    RAX: ff198291741ecf2e  RBX: ff19828e70d6a100  RCX: fffffffffea1af2b
    RDX: fffffffffffffffd  RSI: ff19828eba6d7e5e  RDI: ff198291757d2000
    RBP: ff2d20159b39f760   R8: ff198291741ecf00   R9: 000000000000037c
    R10: 000000000000003c  R11: ff19828ffe953940  R12: ff198291741ecf20
    R13: ff198267dcb1b600  R14: ff19828eeebb09c0  R15: ff198291741ecf00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #10 [ff2d20159b39f700] mlx5e_sq_xmit_wqe at ffffffffc05c162e [mlx5_core]
 #11 [ff2d20159b39f768] mlx5e_xmit at ffffffffc05c1ca3 [mlx5_core]
 #12 [ff2d20159b39f800] dev_hard_start_xmit at ffffffffae083766
 #13 [ff2d20159b39f860] sch_direct_xmit at ffffffffae0e2564
 #14 [ff2d20159b39f8b0] __qdisc_run at ffffffffae0e294e
 #15 [ff2d20159b39f928] __dev_queue_xmit at ffffffffae083eee
 #16 [ff2d20159b39f9a8] dev_queue_xmit at ffffffffae084370
 #17 [ff2d20159b39f9b8] vlan_dev_hard_start_xmit at ffffffffc2fb6fec [8021q]
 #18 [ff2d20159b39f9d8] dev_hard_start_xmit at ffffffffae083766
 #19 [ff2d20159b39fa38] __dev_queue_xmit at ffffffffae08416a
 #20 [ff2d20159b39fab8] dev_queue_xmit_accel at ffffffffae08438e
 #21 [ff2d20159b39fac8] macvlan_start_xmit at ffffffffc2fc18d9 [macvlan]
 #22 [ff2d20159b39faf0] dev_hard_start_xmit at ffffffffae083766
 #23 [ff2d20159b39fb50] sch_direct_xmit at ffffffffae0e2564
 #24 [ff2d20159b39fba0] __qdisc_run at ffffffffae0e294e
 #25 [ff2d20159b39fc18] __dev_queue_xmit at ffffffffae083c81
 #26 [ff2d20159b39fc90] dev_queue_xmit at ffffffffae084370
 #27 [ff2d20159b39fca0] tap_sendmsg at ffffffffc07206ed [tap]
 #28 [ff2d20159b39fd20] vhost_tx_batch at ffffffffc2fd6590 [vhost_net]
 #29 [ff2d20159b39fd68] handle_tx_copy at ffffffffc2fd70f3 [vhost_net]
 #30 [ff2d20159b39fe80] handle_tx at ffffffffc2fd7651 [vhost_net]
 #31 [ff2d20159b39feb0] handle_tx_kick at ffffffffc2fd76b5 [vhost_net]
 #32 [ff2d20159b39fec0] vhost_worker at ffffffffc12a5be8 [vhost]
 #33 [ff2d20159b39ff08] kthread at ffffffffad8dbfe5
 #34 [ff2d20159b39ff50] ret_from_fork at ffffffffae400364

This change was discussed with Nvidia and they are in agreement.
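
A minimal sketch of the early validation (illustrative, driver-agnostic; the
exact threshold and its placement inside mlx5e_xmit() may differ):

  #include <linux/etherdevice.h>
  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
  {
          /* Drop runt/corrupted frames before any header parsing or copying. */
          if (unlikely(skb->len < ETH_HLEN)) {
                  dev_kfree_skb_any(skb);
                  return NETDEV_TX_OK;
          }

          /* ... normal transmit path ... */
          return NETDEV_TX_OK;
  }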

Orabug: 36660755

Fixes: e4cf27b ("net/mlx5e: Re-eanble client vlan TX acceleration")
Reported-and-tested-by: Dongli Zhang <[email protected]>
Signed-off-by: Manjunath Patil <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Reviewed-by: Jack Vogel <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Feb 7, 2025
…le_direct_reclaim()

commit 6aaced5 upstream.

The task sometimes continues looping in throttle_direct_reclaim() because
allow_direct_reclaim(pgdat) keeps returning false.

 #0 [ffff80002cb6f8d0] __switch_to at ffff8000080095ac
 #1 [ffff80002cb6f900] __schedule at ffff800008abbd1c
 #2 [ffff80002cb6f990] schedule at ffff800008abc50c
 #3 [ffff80002cb6f9b0] throttle_direct_reclaim at ffff800008273550
 #4 [ffff80002cb6fa20] try_to_free_pages at ffff800008277b68
 #5 [ffff80002cb6fae0] __alloc_pages_nodemask at ffff8000082c4660
 #6 [ffff80002cb6fc50] alloc_pages_vma at ffff8000082e4a98
 #7 [ffff80002cb6fca0] do_anonymous_page at ffff80000829f5a8
 #8 [ffff80002cb6fce0] __handle_mm_fault at ffff8000082a5974
 #9 [ffff80002cb6fd90] handle_mm_fault at ffff8000082a5bd4

At this point, the pgdat contains the following two zones:

        NODE: 4  ZONE: 0  ADDR: ffff00817fffe540  NAME: "DMA32"
          SIZE: 20480  MIN/LOW/HIGH: 11/28/45
          VM_STAT:
                NR_FREE_PAGES: 359
        NR_ZONE_INACTIVE_ANON: 18813
          NR_ZONE_ACTIVE_ANON: 0
        NR_ZONE_INACTIVE_FILE: 50
          NR_ZONE_ACTIVE_FILE: 0
          NR_ZONE_UNEVICTABLE: 0
        NR_ZONE_WRITE_PENDING: 0
                     NR_MLOCK: 0
                    NR_BOUNCE: 0
                   NR_ZSPAGES: 0
            NR_FREE_CMA_PAGES: 0

        NODE: 4  ZONE: 1  ADDR: ffff00817fffec00  NAME: "Normal"
          SIZE: 8454144  PRESENT: 98304  MIN/LOW/HIGH: 68/166/264
          VM_STAT:
                NR_FREE_PAGES: 146
        NR_ZONE_INACTIVE_ANON: 94668
          NR_ZONE_ACTIVE_ANON: 3
        NR_ZONE_INACTIVE_FILE: 735
          NR_ZONE_ACTIVE_FILE: 78
          NR_ZONE_UNEVICTABLE: 0
        NR_ZONE_WRITE_PENDING: 0
                     NR_MLOCK: 0
                    NR_BOUNCE: 0
                   NR_ZSPAGES: 0
            NR_FREE_CMA_PAGES: 0

In allow_direct_reclaim(), while processing ZONE_DMA32, the sum of
inactive/active file-backed pages calculated in zone_reclaimable_pages()
based on the result of zone_page_state_snapshot() is zero.

Additionally, since this system lacks swap, the calculation of inactive/
active anonymous pages is skipped.

        crash> p nr_swap_pages
        nr_swap_pages = $1937 = {
          counter = 0
        }

As a result, ZONE_DMA32 is deemed unreclaimable and skipped, moving on to
the processing of the next zone, ZONE_NORMAL, despite ZONE_DMA32 having
free pages significantly exceeding the high watermark.

The problem is that the pgdat->kswapd_failures hasn't been incremented.

        crash> px ((struct pglist_data *) 0xffff00817fffe540)->kswapd_failures
        $1935 = 0x0

This is because the node is deemed balanced.  The node balancing logic in
balance_pgdat() evaluates all zones collectively.  If one or more zones
(e.g., ZONE_DMA32) have enough free pages to meet their watermarks, the
entire node is deemed balanced.  This causes balance_pgdat() to exit early
before incrementing the kswapd_failures, as it considers the overall
memory state acceptable, even though some zones (like ZONE_NORMAL) remain
under significant pressure.

The patch ensures that zone_reclaimable_pages() includes free pages
(NR_FREE_PAGES) in its calculation when no other reclaimable pages are
available (e.g., file-backed or anonymous pages).  This change prevents
zones like ZONE_DMA32, which have sufficient free pages, from being
mistakenly deemed unreclaimable.  By doing so, the patch ensures proper
node balancing, avoids masking pressure on other zones like ZONE_NORMAL,
and prevents infinite loops in throttle_direct_reclaim() caused by
allow_direct_reclaim(pgdat) repeatedly returning false.

The kernel hangs due to a task stuck in throttle_direct_reclaim(), caused
by a node being incorrectly deemed balanced despite pressure in certain
zones, such as ZONE_NORMAL.  This issue arises from
zone_reclaimable_pages() returning 0 for zones without reclaimable file-
backed or anonymous pages, causing zones like ZONE_DMA32 with sufficient
free pages to be skipped.

The lack of swap or reclaimable pages results in ZONE_DMA32 being ignored
during reclaim, masking pressure in other zones.  Consequently,
pgdat->kswapd_failures remains 0 in balance_pgdat(), preventing fallback
mechanisms in allow_direct_reclaim() from being triggered, leading to an
infinite loop in throttle_direct_reclaim().

This patch modifies zone_reclaimable_pages() to account for free pages
(NR_FREE_PAGES) when no other reclaimable pages exist.  This ensures zones
with sufficient free pages are not skipped, enabling proper balancing and
reclaim behavior.
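
A simplified sketch of the new accounting (standalone C with assumed inputs,
not the exact mm/vmscan.c hunk):

  #include <stdbool.h>

  /* Number of pages reclaim can plausibly act on in a zone. The fix: when
   * nothing file-backed or anonymous is reclaimable, fall back to the
   * zone's free pages so it is not treated as unreclaimable while it
   * still holds plenty of memory.
   */
  static unsigned long zone_reclaimable_pages_sketch(unsigned long file_pages,
                                                     unsigned long anon_pages,
                                                     unsigned long free_pages,
                                                     bool have_swap)
  {
          unsigned long nr = file_pages;

          if (have_swap)
                  nr += anon_pages;

          if (!nr)
                  nr = free_pages;        /* NR_FREE_PAGES fallback */

          return nr;
  }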

[[email protected]: coding-style cleanups]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 5a1c84b ("mm: remove reclaim and compaction retry approximations")
Signed-off-by: Seiji Nishikawa <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
(cherry picked from commit 66cd37660ec34ec444fe42f2277330ae4a36bb19)
Signed-off-by: Sherry Yang <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Feb 14, 2025
When following a trailing symlink in rcu-walk mode, it's possible for
the dentry to become invalid between the last dentry seq lock check
and getting the link (e.g. due to an unlink), leading to a backtrace
similar to this:

crash> bt
PID: 10964  TASK: ffff951c8aa92f80  CPU: 3   COMMAND: "TaniumCX"
…
 #7 [ffffae44d0a6fbe0] page_fault at ffffffff8d6010fe
    [exception RIP: unknown or invalid address]
    RIP: 0000000000000000  RSP: ffffae44d0a6fc90  RFLAGS: 00010246
    RAX: ffffffff8da3cc80  RBX: ffffae44d0a6fd30  RCX: 0000000000000000
    RDX: ffffae44d0a6fd98  RSI: ffff951aa9af3008  RDI: 0000000000000000
    RBP: 0000000000000000   R8: ffffae44d0a6fb94   R9: 0000000000000000
    R10: ffff951c95d8c318  R11: 0000000000080000  R12: ffffae44d0a6fd98
    R13: ffff951aa9af3008  R14: ffff951c8c9eb840  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #8 [ffffae44d0a6fc90] trailing_symlink at ffffffff8cf24e61
 #9 [ffffae44d0a6fcc8] path_lookupat at ffffffff8cf261d1

Most of the time this is not a problem because the inode is unchanged
while the rcu read lock is held.

But xfs can re-use inodes which can result in the inode ->get_link()
method becoming invalid (or NULL).

This case needs to be checked for in fs/namei.c:get_link(), and if it
is detected, the walk must be restarted.

Signed-off-by: Ian Kent <[email protected]>

Orabug: 37536393

This is the backport of an upstream patch, yet to be merged:
https://lore.kernel.org/lkml/[email protected]

One of our customers found a similar issue on uek5 in bug 37322383.
Investigation of vmcore revealed that the dentry became invalid
between the last dentry seq lock check and getting the link. The
customer has tested this patch and verified that this patch avoids
the crash.

We want to merge this patch into uek6 only for now, as it is not yet
upstream.

Signed-off-by: Srikanth C S <[email protected]>
Signed-off-by: Gautham Ananthakrishna <[email protected]>
Reviewed-by: Mark Tinguely <[email protected]>
Signed-off-by: Alok Tiwari <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Feb 24, 2025
…le_direct_reclaim()

commit 6aaced5 upstream.

The task sometimes continues looping in throttle_direct_reclaim() because
allow_direct_reclaim(pgdat) keeps returning false.

 #0 [ffff80002cb6f8d0] __switch_to at ffff8000080095ac
 #1 [ffff80002cb6f900] __schedule at ffff800008abbd1c
 #2 [ffff80002cb6f990] schedule at ffff800008abc50c
 #3 [ffff80002cb6f9b0] throttle_direct_reclaim at ffff800008273550
 #4 [ffff80002cb6fa20] try_to_free_pages at ffff800008277b68
 #5 [ffff80002cb6fae0] __alloc_pages_nodemask at ffff8000082c4660
 #6 [ffff80002cb6fc50] alloc_pages_vma at ffff8000082e4a98
 #7 [ffff80002cb6fca0] do_anonymous_page at ffff80000829f5a8
 #8 [ffff80002cb6fce0] __handle_mm_fault at ffff8000082a5974
 #9 [ffff80002cb6fd90] handle_mm_fault at ffff8000082a5bd4

At this point, the pgdat contains the following two zones:

        NODE: 4  ZONE: 0  ADDR: ffff00817fffe540  NAME: "DMA32"
          SIZE: 20480  MIN/LOW/HIGH: 11/28/45
          VM_STAT:
                NR_FREE_PAGES: 359
        NR_ZONE_INACTIVE_ANON: 18813
          NR_ZONE_ACTIVE_ANON: 0
        NR_ZONE_INACTIVE_FILE: 50
          NR_ZONE_ACTIVE_FILE: 0
          NR_ZONE_UNEVICTABLE: 0
        NR_ZONE_WRITE_PENDING: 0
                     NR_MLOCK: 0
                    NR_BOUNCE: 0
                   NR_ZSPAGES: 0
            NR_FREE_CMA_PAGES: 0

        NODE: 4  ZONE: 1  ADDR: ffff00817fffec00  NAME: "Normal"
          SIZE: 8454144  PRESENT: 98304  MIN/LOW/HIGH: 68/166/264
          VM_STAT:
                NR_FREE_PAGES: 146
        NR_ZONE_INACTIVE_ANON: 94668
          NR_ZONE_ACTIVE_ANON: 3
        NR_ZONE_INACTIVE_FILE: 735
          NR_ZONE_ACTIVE_FILE: 78
          NR_ZONE_UNEVICTABLE: 0
        NR_ZONE_WRITE_PENDING: 0
                     NR_MLOCK: 0
                    NR_BOUNCE: 0
                   NR_ZSPAGES: 0
            NR_FREE_CMA_PAGES: 0

In allow_direct_reclaim(), while processing ZONE_DMA32, the sum of
inactive/active file-backed pages calculated in zone_reclaimable_pages()
based on the result of zone_page_state_snapshot() is zero.

Additionally, since this system lacks swap, the calculation of inactive/
active anonymous pages is skipped.

        crash> p nr_swap_pages
        nr_swap_pages = $1937 = {
          counter = 0
        }

As a result, ZONE_DMA32 is deemed unreclaimable and skipped, moving on to
the processing of the next zone, ZONE_NORMAL, despite ZONE_DMA32 having
free pages significantly exceeding the high watermark.

The problem is that the pgdat->kswapd_failures hasn't been incremented.

        crash> px ((struct pglist_data *) 0xffff00817fffe540)->kswapd_failures
        $1935 = 0x0

This is because the node is deemed balanced.  The node balancing logic in
balance_pgdat() evaluates all zones collectively.  If one or more zones
(e.g., ZONE_DMA32) have enough free pages to meet their watermarks, the
entire node is deemed balanced.  This causes balance_pgdat() to exit early
before incrementing the kswapd_failures, as it considers the overall
memory state acceptable, even though some zones (like ZONE_NORMAL) remain
under significant pressure.

The patch ensures that zone_reclaimable_pages() includes free pages
(NR_FREE_PAGES) in its calculation when no other reclaimable pages are
available (e.g., file-backed or anonymous pages).  This change prevents
zones like ZONE_DMA32, which have sufficient free pages, from being
mistakenly deemed unreclaimable.  By doing so, the patch ensures proper
node balancing, avoids masking pressure on other zones like ZONE_NORMAL,
and prevents infinite loops in throttle_direct_reclaim() caused by
allow_direct_reclaim(pgdat) repeatedly returning false.

The kernel hangs due to a task stuck in throttle_direct_reclaim(), caused
by a node being incorrectly deemed balanced despite pressure in certain
zones, such as ZONE_NORMAL.  This issue arises from
zone_reclaimable_pages() returning 0 for zones without reclaimable file-
backed or anonymous pages, causing zones like ZONE_DMA32 with sufficient
free pages to be skipped.

The lack of swap or reclaimable pages results in ZONE_DMA32 being ignored
during reclaim, masking pressure in other zones.  Consequently,
pgdat->kswapd_failures remains 0 in balance_pgdat(), preventing fallback
mechanisms in allow_direct_reclaim() from being triggered, leading to an
infinite loop in throttle_direct_reclaim().

This patch modifies zone_reclaimable_pages() to account for free pages
(NR_FREE_PAGES) when no other reclaimable pages exist.  This ensures zones
with sufficient free pages are not skipped, enabling proper balancing and
reclaim behavior.

[[email protected]: coding-style cleanups]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 5a1c84b ("mm: remove reclaim and compaction retry approximations")
Signed-off-by: Seiji Nishikawa <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
(cherry picked from commit 58d0d02dbc67438fc80223fdd7bbc49cf0733284)
Signed-off-by: Jack Vogel <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Feb 24, 2025
commit 59d9094 upstream.

The folio refcount may be increased unexpectedly through try_get_folio() by
callers such as split_huge_pages.  In huge_pmd_unshare(), we use the refcount
to check whether a pmd page table is shared.  The check is incorrect if the
refcount is increased by the above caller, and this can cause the page table
to be leaked:

 BUG: Bad page state in process sh  pfn:109324
 page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x66 pfn:0x109324
 flags: 0x17ffff800000000(node=0|zone=2|lastcpupid=0xfffff)
 page_type: f2(table)
 raw: 017ffff800000000 0000000000000000 0000000000000000 0000000000000000
 raw: 0000000000000066 0000000000000000 00000000f2000000 0000000000000000
 page dumped because: nonzero mapcount
 ...
 CPU: 31 UID: 0 PID: 7515 Comm: sh Kdump: loaded Tainted: G    B              6.13.0-rc2master+ #7
 Tainted: [B]=BAD_PAGE
 Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
 Call trace:
  show_stack+0x20/0x38 (C)
  dump_stack_lvl+0x80/0xf8
  dump_stack+0x18/0x28
  bad_page+0x8c/0x130
  free_page_is_bad_report+0xa4/0xb0
  free_unref_page+0x3cc/0x620
  __folio_put+0xf4/0x158
  split_huge_pages_all+0x1e0/0x3e8
  split_huge_pages_write+0x25c/0x2d8
  full_proxy_write+0x64/0xd8
  vfs_write+0xcc/0x280
  ksys_write+0x70/0x110
  __arm64_sys_write+0x24/0x38
  invoke_syscall+0x50/0x120
  el0_svc_common.constprop.0+0xc8/0xf0
  do_el0_svc+0x24/0x38
  el0_svc+0x34/0x128
  el0t_64_sync_handler+0xc8/0xd0
  el0t_64_sync+0x190/0x198

The issue may be triggered by damon, offline_page, page_idle, etc., which
will increase the refcount of the page table.

1. The page table itself will be discarded after reporting the
   "nonzero mapcount".

2. The HugeTLB page mapped by the page table misses being freed, since
   we treat the page table as shared and a shared page table will not be
   unmapped.

Fix it by introducing an independent PMD page table shared count.  As
described by the comment, pt_index/pt_mm/pt_frag_refcount are already used
for s390 gmap, x86 pgds and powerpc, so the same field can be reused as
pt_share_count for x86/arm64/riscv pmds.
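
As a rough illustration of the idea (a dedicated share counter instead of
overloading the generic refcount), here is a minimal userspace sketch; the
struct pmd_pt type and helper names are invented for illustration and do not
mirror the kernel's struct ptdesc:

    /* Sketch only: a dedicated share counter for a PMD page table, separate
     * from the generic refcount that callers like try_get_folio() may bump. */
    #include <stdatomic.h>
    #include <stdio.h>

    struct pmd_pt {
        atomic_long refcount;        /* may be raised by unrelated speculative users */
        atomic_long pt_share_count;  /* raised only when the PMD table is truly shared */
    };

    static void pt_share_inc(struct pmd_pt *pt)
    {
        atomic_fetch_add(&pt->pt_share_count, 1);
    }

    /* Returns non-zero when the last sharer went away and the table may be freed. */
    static int pt_share_dec_and_test(struct pmd_pt *pt)
    {
        return atomic_fetch_sub(&pt->pt_share_count, 1) == 1;
    }

    int main(void)
    {
        struct pmd_pt pt = { .refcount = 3, .pt_share_count = 2 };

        pt_share_inc(&pt);            /* a third mapping shares the table: 2 -> 3 */
        pt_share_dec_and_test(&pt);   /* 3 -> 2, still shared */
        pt_share_dec_and_test(&pt);   /* 2 -> 1, still shared */
        printf("last sharer gone: %d\n", pt_share_dec_and_test(&pt)); /* 1 -> 0 */
        return 0;
    }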

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 39dde65 ("[PATCH] shared page table for hugetlb page")
Signed-off-by: Liu Shixin <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Ken Chen <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Nanyong Sun <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
(cherry picked from commit 2e31443a0d18ae43b9d29e02bf0563f07772193d)
Signed-off-by: Jack Vogel <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Feb 24, 2025
[ Upstream commit c7b87ce0dd10b64b68a0b22cb83bbd556e28fe81 ]

libtraceevent parses and returns an array of argument fields, sometimes
larger than RAW_SYSCALL_ARGS_NUM (6) because it includes "__syscall_nr";
idx can then reach index 6 (the 7th element) whereas sc->fmt->arg holds at
most 6 elements, creating an out-of-bounds access.  This runtime error was
found by UBSan.  The error message:

  $ sudo UBSAN_OPTIONS=print_stacktrace=1 ./perf trace -a --max-events=1
  builtin-trace.c:1966:35: runtime error: index 6 out of bounds for type 'syscall_arg_fmt [6]'
    #0 0x5c04956be5fe in syscall__alloc_arg_fmts /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:1966
    #1 0x5c04956c0510 in trace__read_syscall_info /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:2110
    #2 0x5c04956c372b in trace__syscall_info /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:2436
    #3 0x5c04956d2f39 in trace__init_syscalls_bpf_prog_array_maps /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:3897
    #4 0x5c04956d6d25 in trace__run /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:4335
    #5 0x5c04956e112e in cmd_trace /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:5502
    #6 0x5c04956eda7d in run_builtin /home/howard/hw/linux-perf/tools/perf/perf.c:351
    #7 0x5c04956ee0a8 in handle_internal_command /home/howard/hw/linux-perf/tools/perf/perf.c:404
    #8 0x5c04956ee37f in run_argv /home/howard/hw/linux-perf/tools/perf/perf.c:448
    #9 0x5c04956ee8e9 in main /home/howard/hw/linux-perf/tools/perf/perf.c:556
    #10 0x79eb3622a3b7 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
    #11 0x79eb3622a47a in __libc_start_main_impl ../csu/libc-start.c:360
    #12 0x5c04955422d4 in _start (/home/howard/hw/linux-perf/tools/perf/perf+0x4e02d4) (BuildId: 5b6cab2d59e96a4341741765ad6914a4d784dbc6)

     0.000 ( 0.014 ms): Chrome_ChildIO/117244 write(fd: 238, buf: !, count: 1)                                      = 1
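
A minimal sketch of the kind of bounds check the description implies
(clamping the parsed field count to the size of the arg-format array); the
names below are simplified stand-ins for the builtin-trace code, not the
exact patch:

    #include <stdio.h>

    #define RAW_SYSCALL_ARGS_NUM 6

    struct syscall_arg_fmt { const char *name; };

    /* Copy at most RAW_SYSCALL_ARGS_NUM parsed fields; "__syscall_nr" can make
     * the parsed array one entry longer than the destination. */
    static void fill_arg_fmts(struct syscall_arg_fmt *dst,
                              const char **parsed, int nr_parsed)
    {
        int idx;

        for (idx = 0; idx < nr_parsed && idx < RAW_SYSCALL_ARGS_NUM; idx++)
            dst[idx].name = parsed[idx];
    }

    int main(void)
    {
        const char *parsed[] = { "__syscall_nr", "fd", "buf", "count",
                                 "flags", "mode", "extra" };   /* 7 entries */
        struct syscall_arg_fmt fmt[RAW_SYSCALL_ARGS_NUM] = { { 0 } };

        fill_arg_fmts(fmt, parsed, 7);   /* loop stops before idx == 6, no overflow */
        printf("last stored field: %s\n", fmt[5].name);
        return 0;
    }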

Fixes: 5e58fcf ("perf trace: Allow allocating sc->arg_fmt even without the syscall tracepoint")
Signed-off-by: Howard Chu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
(cherry picked from commit 161348aea66fde8356030df21f998d64f585bd51)
Signed-off-by: Jack Vogel <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Feb 24, 2025
[ Upstream commit 2c2ebb2b49573e5f8726112ad06b1dffc3c9ea03 ]

Fix the suspend/resume path by ensuring the rtnl lock is held where
required. Calls to ravb_open, ravb_close and wol operations must be
performed under the rtnl lock to prevent conflicts with ongoing ndo
operations.
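
The locking pattern being described can be sketched as follows; the mutex
stands in for the kernel's rtnl lock and the stub functions are placeholders
for the driver's close/WoL helpers, so this is an illustration of the rule
rather than the driver code itself:

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for the kernel's rtnl lock; the real driver would call
     * rtnl_lock()/rtnl_unlock() instead. */
    static pthread_mutex_t rtnl_stub = PTHREAD_MUTEX_INITIALIZER;

    static void close_netdev_stub(void) { printf("close net device\n"); }
    static void setup_wol_stub(void)    { printf("configure Wake-on-LAN\n"); }

    /* Suspend-path model: anything that can race with ndo_open/ndo_close or
     * ethtool WoL operations runs with the (modelled) rtnl lock held. */
    static int suspend_model(void)
    {
        pthread_mutex_lock(&rtnl_stub);
        close_netdev_stub();
        setup_wol_stub();
        pthread_mutex_unlock(&rtnl_stub);
        return 0;
    }

    int main(void)
    {
        return suspend_model();
    }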

Without this fix, the following warning is triggered:
[   39.032969] =============================
[   39.032983] WARNING: suspicious RCU usage
[   39.033019] -----------------------------
[   39.033033] drivers/net/phy/phy_device.c:2004 suspicious
rcu_dereference_protected() usage!
...
[   39.033597] stack backtrace:
[   39.033613] CPU: 0 UID: 0 PID: 174 Comm: python3 Not tainted
6.13.0-rc7-next-20250116-arm64-renesas-00002-g35245dfdc62c #7
[   39.033623] Hardware name: Renesas SMARC EVK version 2 based on
r9a08g045s33 (DT)
[   39.033628] Call trace:
[   39.033633]  show_stack+0x14/0x1c (C)
[   39.033652]  dump_stack_lvl+0xb4/0xc4
[   39.033664]  dump_stack+0x14/0x1c
[   39.033671]  lockdep_rcu_suspicious+0x16c/0x22c
[   39.033682]  phy_detach+0x160/0x190
[   39.033694]  phy_disconnect+0x40/0x54
[   39.033703]  ravb_close+0x6c/0x1cc
[   39.033714]  ravb_suspend+0x48/0x120
[   39.033721]  dpm_run_callback+0x4c/0x14c
[   39.033731]  device_suspend+0x11c/0x4dc
[   39.033740]  dpm_suspend+0xdc/0x214
[   39.033748]  dpm_suspend_start+0x48/0x60
[   39.033758]  suspend_devices_and_enter+0x124/0x574
[   39.033769]  pm_suspend+0x1ac/0x274
[   39.033778]  state_store+0x88/0x124
[   39.033788]  kobj_attr_store+0x14/0x24
[   39.033798]  sysfs_kf_write+0x48/0x6c
[   39.033808]  kernfs_fop_write_iter+0x118/0x1a8
[   39.033817]  vfs_write+0x27c/0x378
[   39.033825]  ksys_write+0x64/0xf4
[   39.033833]  __arm64_sys_write+0x18/0x20
[   39.033841]  invoke_syscall+0x44/0x104
[   39.033852]  el0_svc_common.constprop.0+0xb4/0xd4
[   39.033862]  do_el0_svc+0x18/0x20
[   39.033870]  el0_svc+0x3c/0xf0
[   39.033880]  el0t_64_sync_handler+0xc0/0xc4
[   39.033888]  el0t_64_sync+0x154/0x158
[   39.041274] ravb 11c30000.ethernet eth0: Link is Down

Reported-by: Claudiu Beznea <[email protected]>
Closes: https://lore.kernel.org/netdev/[email protected]/
Fixes: 0184165 ("ravb: add sleep PM suspend/resume support")
Signed-off-by: Kory Maincent <[email protected]>
Tested-by: Niklas Söderlund <[email protected]>
Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
(cherry picked from commit 0296981941cf291edfbc318d3255a93439f368e4)
Signed-off-by: Jack Vogel <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Mar 7, 2025
[ Upstream commit 24c6843 ]

The 5760X (P7) chip's HW GRO/LRO interface is very similar to that of
the previous generation (5750X or P5).  However, the aggregation ID
fields in the completion structures on P7 have been redefined from
16 bits to 12 bits.  The freed up 4 bits are redefined for part of the
metadata such as the VLAN ID.  The aggregation ID mask was not modified
when adding support for P7 chips.  Including the extra 4 bits for the
aggregation ID can potentially cause the driver to store or fetch the
packet header of GRO/LRO packets in the wrong TPA buffer.  It may hit
the BUG() condition in __skb_pull() because the SKB contains no valid
packet header:

kernel BUG at include/linux/skbuff.h:2766!
Oops: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
CPU: 4 UID: 0 PID: 0 Comm: swapper/4 Kdump: loaded Tainted: G           OE      6.12.0-rc2+ #7
Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
Hardware name: Dell Inc. PowerEdge R760/0VRV9X, BIOS 1.0.1 12/27/2022
RIP: 0010:eth_type_trans+0xda/0x140
Code: 80 00 00 00 eb c1 8b 47 70 2b 47 74 48 8b 97 d0 00 00 00 83 f8 01 7e 1b 48 85 d2 74 06 66 83 3a ff 74 09 b8 00 04 00 00 eb a5 <0f> 0b b8 00 01 00 00 eb 9c 48 85 ff 74 eb 31 f6 b9 02 00 00 00 48
RSP: 0018:ff615003803fcc28 EFLAGS: 00010283
RAX: 00000000000022d2 RBX: 0000000000000003 RCX: ff2e8c25da334040
RDX: 0000000000000040 RSI: ff2e8c25c1ce8000 RDI: ff2e8c25869f9000
RBP: ff2e8c258c31c000 R08: ff2e8c25da334000 R09: 0000000000000001
R10: ff2e8c25da3342c0 R11: ff2e8c25c1ce89c0 R12: ff2e8c258e0990b0
R13: ff2e8c25bb120000 R14: ff2e8c25c1ce89c0 R15: ff2e8c25869f9000
FS:  0000000000000000(0000) GS:ff2e8c34be300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055f05317e4c8 CR3: 000000108bac6006 CR4: 0000000000773ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <IRQ>
 ? die+0x33/0x90
 ? do_trap+0xd9/0x100
 ? eth_type_trans+0xda/0x140
 ? do_error_trap+0x65/0x80
 ? eth_type_trans+0xda/0x140
 ? exc_invalid_op+0x4e/0x70
 ? eth_type_trans+0xda/0x140
 ? asm_exc_invalid_op+0x16/0x20
 ? eth_type_trans+0xda/0x140
 bnxt_tpa_end+0x10b/0x6b0 [bnxt_en]
 ? bnxt_tpa_start+0x195/0x320 [bnxt_en]
 bnxt_rx_pkt+0x902/0xd90 [bnxt_en]
 ? __bnxt_tx_int.constprop.0+0x89/0x300 [bnxt_en]
 ? kmem_cache_free+0x343/0x440
 ? __bnxt_tx_int.constprop.0+0x24f/0x300 [bnxt_en]
 __bnxt_poll_work+0x193/0x370 [bnxt_en]
 bnxt_poll_p5+0x9a/0x300 [bnxt_en]
 ? try_to_wake_up+0x209/0x670
 __napi_poll+0x29/0x1b0

Fix it by redefining the aggregation ID mask for P5_PLUS chips to be
12 bits.  This will work because the maximum aggregation ID is less
than 4096 on all P5_PLUS chips.
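
The effect of the narrower mask can be shown with a few lines of arithmetic;
the field layout and macro names below are simplified stand-ins chosen only
to show why a 16-bit mask pulls in 4 bits of unrelated metadata:

    #include <stdint.h>
    #include <stdio.h>

    #define AGG_ID_MASK_16BIT 0xffffu   /* mask used before the fix            */
    #define AGG_ID_MASK_12BIT 0x0fffu   /* P7: aggregation ID is only 12 bits  */

    int main(void)
    {
        /* Hypothetical completion word: aggregation ID 0x123 with 0xA of
         * metadata (e.g. part of a VLAN ID) packed into the upper 4 bits. */
        uint16_t cmpl = (0xA << 12) | 0x123;

        printf("16-bit mask -> 0x%04x (indexes the wrong TPA buffer)\n",
               cmpl & AGG_ID_MASK_16BIT);
        printf("12-bit mask -> 0x%04x (the real aggregation ID)\n",
               cmpl & AGG_ID_MASK_12BIT);
        return 0;
    }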

Fixes: 13d2d3d ("bnxt_en: Add new P7 hardware interface definitions")
Reviewed-by: Damodharam Ammepalli <[email protected]>
Reviewed-by: Kalesh AP <[email protected]>
Reviewed-by: Andy Gospodarek <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Link: https://patch.msgid.link/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
(cherry picked from commit fe9274027697c69c1430dc7ee568f82b331ef972)

Orabug: 37434220
CVE: CVE-2024-56656

Signed-off-by: Saeed Mirzamohammadi <[email protected]>
Reviewed-by: Harshit Mogalapalli <[email protected]>
Signed-off-by: Vijayendra Suman <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Mar 14, 2025
On an NFS client node, some files saved under the mountpoint of the NFS
server were copied to another location on the same NFS server.
Unexpectedly, nfs42_complete_copies() hit a NULL-pointer dereference crash
with the following syslog:

[232064.838881] NFSv4: state recovery failed for open file nfs/pvc-12b5200d-cd0f-46a3-b9f0-af8f4fe0ef64.qcow2, error = -116
[232064.839360] NFSv4: state recovery failed for open file nfs/pvc-12b5200d-cd0f-46a3-b9f0-af8f4fe0ef64.qcow2, error = -116
[232066.588183] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000058
[232066.588586] Mem abort info:
[232066.588701]   ESR = 0x0000000096000007
[232066.588862]   EC = 0x25: DABT (current EL), IL = 32 bits
[232066.589084]   SET = 0, FnV = 0
[232066.589216]   EA = 0, S1PTW = 0
[232066.589340]   FSC = 0x07: level 3 translation fault
[232066.589559] Data abort info:
[232066.589683]   ISV = 0, ISS = 0x00000007
[232066.589842]   CM = 0, WnR = 0
[232066.589967] user pgtable: 64k pages, 48-bit VAs, pgdp=00002000956ff400
[232066.590231] [0000000000000058] pgd=08001100ae100003, p4d=08001100ae100003, pud=08001100ae100003, pmd=08001100b3c00003, pte=0000000000000000
[232066.590757] Internal error: Oops: 96000007 [#1] SMP
[232066.590958] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache netfs ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm vhost_net vhost vhost_iotlb tap tun ipt_rpfilter xt_multiport ip_set_hash_ip ip_set_hash_net xfrm_interface xfrm6_tunnel tunnel4 tunnel6 esp4 ah4 wireguard libcurve25519_generic veth xt_addrtype xt_set nf_conntrack_netlink ip_set_hash_ipportnet ip_set_hash_ipportip ip_set_bitmap_port ip_set_hash_ipport dummy ip_set ip_vs_sh ip_vs_wrr ip_vs_rr ip_vs iptable_filter sch_ingress nfnetlink_cttimeout vport_gre ip_gre ip_tunnel gre vport_geneve geneve vport_vxlan vxlan ip6_udp_tunnel udp_tunnel openvswitch nf_conncount dm_round_robin dm_service_time dm_multipath xt_nat xt_MASQUERADE nft_chain_nat nf_nat xt_mark xt_conntrack xt_comment nft_compat nft_counter nf_tables nfnetlink ocfs2 ocfs2_nodemanager ocfs2_stackglue iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ipmi_ssif nbd overlay 8021q garp mrp bonding tls rfkill sunrpc ext4 mbcache jbd2
[232066.591052]  vfat fat cas_cache cas_disk ses enclosure scsi_transport_sas sg acpi_ipmi ipmi_si ipmi_devintf ipmi_msghandler ip_tables vfio_pci vfio_pci_core vfio_virqfd vfio_iommu_type1 vfio dm_mirror dm_region_hash dm_log dm_mod nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter bridge stp llc fuse xfs libcrc32c ast drm_vram_helper qla2xxx drm_kms_helper syscopyarea crct10dif_ce sysfillrect ghash_ce sysimgblt sha2_ce fb_sys_fops cec sha256_arm64 sha1_ce drm_ttm_helper ttm nvme_fc igb sbsa_gwdt nvme_fabrics drm nvme_core i2c_algo_bit i40e scsi_transport_fc megaraid_sas aes_neon_bs
[232066.596953] CPU: 6 PID: 4124696 Comm: 10.253.166.125- Kdump: loaded Not tainted 5.15.131-9.cl9_ocfs2.aarch64 #1
[232066.597356] Hardware name: Great Wall .\x93\x8e...RF6260 V5/GWMSSE2GL1T, BIOS T656FBE_V3.0.18 2024-01-06
[232066.597721] pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[232066.598034] pc : nfs4_reclaim_open_state+0x220/0x800 [nfsv4]
[232066.598327] lr : nfs4_reclaim_open_state+0x12c/0x800 [nfsv4]
[232066.598595] sp : ffff8000f568fc70
[232066.598731] x29: ffff8000f568fc70 x28: 0000000000001000 x27: ffff21003db33000
[232066.599030] x26: ffff800005521ae0 x25: ffff0100f98fa3f0 x24: 0000000000000001
[232066.599319] x23: ffff800009920008 x22: ffff21003db33040 x21: ffff21003db33050
[232066.599628] x20: ffff410172fe9e40 x19: ffff410172fe9e00 x18: 0000000000000000
[232066.599914] x17: 0000000000000000 x16: 0000000000000004 x15: 0000000000000000
[232066.600195] x14: 0000000000000000 x13: ffff800008e685a8 x12: 00000000eac0c6e6
[232066.600498] x11: 0000000000000000 x10: 0000000000000008 x9 : ffff8000054e5828
[232066.600784] x8 : 00000000ffffffbf x7 : 0000000000000001 x6 : 000000000a9eb14a
[232066.601062] x5 : 0000000000000000 x4 : ffff70ff8a14a800 x3 : 0000000000000058
[232066.601348] x2 : 0000000000000001 x1 : 54dce46366daa6c6 x0 : 0000000000000000
[232066.601636] Call trace:
[232066.601749]  nfs4_reclaim_open_state+0x220/0x800 [nfsv4]
[232066.601998]  nfs4_do_reclaim+0x1b8/0x28c [nfsv4]
[232066.602218]  nfs4_state_manager+0x928/0x10f0 [nfsv4]
[232066.602455]  nfs4_run_state_manager+0x78/0x1b0 [nfsv4]
[232066.602690]  kthread+0x110/0x114
[232066.602830]  ret_from_fork+0x10/0x20
[232066.602985] Code: 1400000d f9403f20 f9402e61 91016003 (f9402c00)
[232066.603284] SMP: stopping secondary CPUs
[232066.606936] Starting crashdump kernel...
[232066.607146] Bye!

Analysing the vmcore, we know that the nfs4_copy_state listed on the
destination nfs_server->ss_copies was added via the copies field in
handle_async_copy(), and we found a waiting copy process with the following
stack:
PID: 3511963  TASK: ffff710028b47e00  CPU: 0   COMMAND: "cp"
 #0 [ffff8001116ef740] __switch_to at ffff8000081b92f4
 #1 [ffff8001116ef760] __schedule at ffff800008dd0650
 #2 [ffff8001116ef7c0] schedule at ffff800008dd0a00
 #3 [ffff8001116ef7e0] schedule_timeout at ffff800008dd6aa0
 #4 [ffff8001116ef860] __wait_for_common at ffff800008dd166c
 #5 [ffff8001116ef8e0] wait_for_completion_interruptible at ffff800008dd1898
 #6 [ffff8001116ef8f0] handle_async_copy at ffff8000055142f4 [nfsv4]
 #7 [ffff8001116ef970] _nfs42_proc_copy at ffff8000055147c8 [nfsv4]
 #8 [ffff8001116efa80] nfs42_proc_copy at ffff800005514cf0 [nfsv4]
 #9 [ffff8001116efc50] __nfs4_copy_file_range.constprop.0 at ffff8000054ed694 [nfsv4]

The NULL-pointer dereference happened because nfs42_complete_copies()
walked nfs_server->ss_copies through the ss_copies field of nfs4_copy_state.
So the nfs4_copy_state address ffff0100f98fa3f0 was offset by 0x10 and
the data accessed through this pointer was also incorrect.  Generally, the
ordered list nfs4_state_owner->so_states indicates that open(O_RDWR) or
open(O_WRITE) states are reclaimed first by nfs4_reclaim_open_state().
When reclaim of the destination state fails with NFS_STATE_RECOVERY_FAILED
and the copies are not deleted from nfs_server->ss_copies, the source state
may be passed to the nfs42_complete_copies() process earlier, finally
resulting in this crash.  To solve this issue, we add a list_head
nfs_server->ss_src_copies specifically for server-to-server copies.
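
A rough, self-contained model of that list split (a separate list for
source-side copies so destination-state recovery never walks them); the
structures are simplified stand-ins, not the nfs_server/nfs4_copy_state
definitions:

    #include <stdio.h>

    /* Tiny intrusive-list model: copies where this server is the *source* of
     * a server-to-server copy go on ss_src_copies; destination-side copies
     * stay on ss_copies, so state recovery only ever walks ss_copies. */
    struct copy_state {
        const char *name;
        struct copy_state *next;
    };

    struct server_model {
        struct copy_state *ss_copies;      /* destination-side copies        */
        struct copy_state *ss_src_copies;  /* source-side copies (new list)  */
    };

    static void add_copy(struct copy_state **head, struct copy_state *cs)
    {
        cs->next = *head;
        *head = cs;
    }

    int main(void)
    {
        struct server_model srv = { 0, 0 };
        struct copy_state dst = { "dst-copy", 0 };
        struct copy_state src = { "src-copy", 0 };

        add_copy(&srv.ss_copies, &dst);
        add_copy(&srv.ss_src_copies, &src);   /* kept off the recovery path */

        for (struct copy_state *c = srv.ss_copies; c; c = c->next)
            printf("recovery walks: %s\n", c->name);
        return 0;
    }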

Fixes: 0e65a32 ("NFS: handle source server reboot")
Signed-off-by: Yanjun Zhang <[email protected]>
Reviewed-by: Trond Myklebust <[email protected]>
Signed-off-by: Anna Schumaker <[email protected]>

(cherry picked from commit a848c29)
Orabug: 37206487
Signed-off-by: Dai Ngo <[email protected]>
Signed-off-by: Sherry Yang <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 4, 2025
[ Upstream commit c7b87ce0dd10b64b68a0b22cb83bbd556e28fe81 ]

libtraceevent parses and returns an array of argument fields, sometimes
larger than RAW_SYSCALL_ARGS_NUM (6) because it includes "__syscall_nr";
idx can then reach index 6 (the 7th element) whereas sc->fmt->arg holds at
most 6 elements, creating an out-of-bounds access.  This runtime error was
found by UBSan.  The error message:

  $ sudo UBSAN_OPTIONS=print_stacktrace=1 ./perf trace -a --max-events=1
  builtin-trace.c:1966:35: runtime error: index 6 out of bounds for type 'syscall_arg_fmt [6]'
    #0 0x5c04956be5fe in syscall__alloc_arg_fmts /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:1966
    #1 0x5c04956c0510 in trace__read_syscall_info /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:2110
    #2 0x5c04956c372b in trace__syscall_info /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:2436
    #3 0x5c04956d2f39 in trace__init_syscalls_bpf_prog_array_maps /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:3897
    #4 0x5c04956d6d25 in trace__run /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:4335
    #5 0x5c04956e112e in cmd_trace /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:5502
    #6 0x5c04956eda7d in run_builtin /home/howard/hw/linux-perf/tools/perf/perf.c:351
    #7 0x5c04956ee0a8 in handle_internal_command /home/howard/hw/linux-perf/tools/perf/perf.c:404
    #8 0x5c04956ee37f in run_argv /home/howard/hw/linux-perf/tools/perf/perf.c:448
    #9 0x5c04956ee8e9 in main /home/howard/hw/linux-perf/tools/perf/perf.c:556
    #10 0x79eb3622a3b7 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
    #11 0x79eb3622a47a in __libc_start_main_impl ../csu/libc-start.c:360
    #12 0x5c04955422d4 in _start (/home/howard/hw/linux-perf/tools/perf/perf+0x4e02d4) (BuildId: 5b6cab2d59e96a4341741765ad6914a4d784dbc6)

     0.000 ( 0.014 ms): Chrome_ChildIO/117244 write(fd: 238, buf: !, count: 1)                                      = 1

Fixes: 5e58fcf ("perf trace: Allow allocating sc->arg_fmt even without the syscall tracepoint")
Signed-off-by: Howard Chu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
(cherry picked from commit a48ebcd853a4e973566e3ed313655a8d96789e78)
Signed-off-by: Vijayendra Suman <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 11, 2025
[ Upstream commit c7b87ce0dd10b64b68a0b22cb83bbd556e28fe81 ]

libtraceevent parses and returns an array of argument fields, sometimes
larger than RAW_SYSCALL_ARGS_NUM (6) because it includes "__syscall_nr";
idx can then reach index 6 (the 7th element) whereas sc->fmt->arg holds at
most 6 elements, creating an out-of-bounds access.  This runtime error was
found by UBSan.  The error message:

  $ sudo UBSAN_OPTIONS=print_stacktrace=1 ./perf trace -a --max-events=1
  builtin-trace.c:1966:35: runtime error: index 6 out of bounds for type 'syscall_arg_fmt [6]'
    #0 0x5c04956be5fe in syscall__alloc_arg_fmts /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:1966
    #1 0x5c04956c0510 in trace__read_syscall_info /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:2110
    #2 0x5c04956c372b in trace__syscall_info /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:2436
    #3 0x5c04956d2f39 in trace__init_syscalls_bpf_prog_array_maps /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:3897
    #4 0x5c04956d6d25 in trace__run /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:4335
    #5 0x5c04956e112e in cmd_trace /home/howard/hw/linux-perf/tools/perf/builtin-trace.c:5502
    #6 0x5c04956eda7d in run_builtin /home/howard/hw/linux-perf/tools/perf/perf.c:351
    #7 0x5c04956ee0a8 in handle_internal_command /home/howard/hw/linux-perf/tools/perf/perf.c:404
    #8 0x5c04956ee37f in run_argv /home/howard/hw/linux-perf/tools/perf/perf.c:448
    #9 0x5c04956ee8e9 in main /home/howard/hw/linux-perf/tools/perf/perf.c:556
    #10 0x79eb3622a3b7 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
    #11 0x79eb3622a47a in __libc_start_main_impl ../csu/libc-start.c:360
    #12 0x5c04955422d4 in _start (/home/howard/hw/linux-perf/tools/perf/perf+0x4e02d4) (BuildId: 5b6cab2d59e96a4341741765ad6914a4d784dbc6)

     0.000 ( 0.014 ms): Chrome_ChildIO/117244 write(fd: 238, buf: !, count: 1)                                      = 1

Fixes: 5e58fcf ("perf trace: Allow allocating sc->arg_fmt even without the syscall tracepoint")
Signed-off-by: Howard Chu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
(cherry picked from commit 093c20a38c9c81c653ced839e241cbf1b3b2a8b3)
Signed-off-by: Sherry Yang <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
commit d093c90892607be505e801469d6674459e69ab89 upstream.

Currently, when no active threads are running, a root user using the
nfsdctl command can try to remove a particular listener from the list of
previously added ones and then start the server by increasing the number of
threads, which leads to the following problem:

[  158.835354] refcount_t: addition on 0; use-after-free.
[  158.835603] WARNING: CPU: 2 PID: 9145 at lib/refcount.c:25 refcount_warn_saturate+0x160/0x1a0
[  158.836017] Modules linked in: rpcrdma rdma_cm iw_cm ib_cm ib_core nfsd auth_rpcgss nfs_acl lockd grace overlay isofs uinput snd_seq_dummy snd_hrtimer nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 rfkill ip_set nf_tables qrtr sunrpc vfat fat uvcvideo videobuf2_vmalloc videobuf2_memops uvc videobuf2_v4l2 videodev videobuf2_common snd_hda_codec_generic mc e1000e snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device snd_pcm snd_timer snd soundcore sg loop dm_multipath dm_mod nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs libcrc32c crct10dif_ce ghash_ce vmwgfx sha2_ce sha256_arm64 sr_mod sha1_ce cdrom nvme drm_client_lib drm_ttm_helper ttm nvme_core drm_kms_helper nvme_auth drm fuse
[  158.840093] CPU: 2 UID: 0 PID: 9145 Comm: nfsd Kdump: loaded Tainted: G    B   W          6.13.0-rc6+ #7
[  158.840624] Tainted: [B]=BAD_PAGE, [W]=WARN
[  158.840802] Hardware name: VMware, Inc. VMware20,1/VBSA, BIOS VMW201.00V.24006586.BA64.2406042154 06/04/2024
[  158.841220] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[  158.841563] pc : refcount_warn_saturate+0x160/0x1a0
[  158.841780] lr : refcount_warn_saturate+0x160/0x1a0
[  158.842000] sp : ffff800089be7d80
[  158.842147] x29: ffff800089be7d80 x28: ffff00008e68c148 x27: ffff00008e68c148
[  158.842492] x26: ffff0002e3b5c000 x25: ffff600011cd1829 x24: ffff00008653c010
[  158.842832] x23: ffff00008653c000 x22: 1fffe00011cd1829 x21: ffff00008653c028
[  158.843175] x20: 0000000000000002 x19: ffff00008653c010 x18: 0000000000000000
[  158.843505] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[  158.843836] x14: 0000000000000000 x13: 0000000000000001 x12: ffff600050a26493
[  158.844143] x11: 1fffe00050a26492 x10: ffff600050a26492 x9 : dfff800000000000
[  158.844475] x8 : 00009fffaf5d9b6e x7 : ffff000285132493 x6 : 0000000000000001
[  158.844823] x5 : ffff000285132490 x4 : ffff600050a26493 x3 : ffff8000805e72bc
[  158.845174] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff000098588000
[  158.845528] Call trace:
[  158.845658]  refcount_warn_saturate+0x160/0x1a0 (P)
[  158.845894]  svc_recv+0x58c/0x680 [sunrpc]
[  158.846183]  nfsd+0x1fc/0x348 [nfsd]
[  158.846390]  kthread+0x274/0x2f8
[  158.846546]  ret_from_fork+0x10/0x20
[  158.846714] ---[ end trace 0000000000000000 ]---

nfsd_nl_listener_set_doit() would manipulate the server's sv_permsocks list
of transports and close the specified listener, but the other list of
transports (the server's sp_xprts list) was left unchanged, leading to the
problem above.

Instead, determine whether nfsdctl is trying to remove a listener, in which
case delete all the existing listener transports and re-create all but the
removed ones.
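
Sketched in plain C, that approach (tear everything down and re-add all but
the removed listener, so both transport lists stay consistent) looks roughly
like this; the helpers are placeholders, not the actual nfsd functions:

    #include <stdio.h>
    #include <string.h>

    #define MAX_LISTENERS 8

    /* Placeholder listener table standing in for both transport lists. */
    static const char *listeners[MAX_LISTENERS];
    static int nr_listeners;

    static void close_all_listeners(void)         { nr_listeners = 0; }
    static void create_listener(const char *addr) { listeners[nr_listeners++] = addr; }

    /* Re-create every previously configured listener except the removed one,
     * so nothing is left half torn down in only one of the lists. */
    static void listener_set(const char **wanted, int nr_wanted, const char *removed)
    {
        close_all_listeners();
        for (int i = 0; i < nr_wanted; i++)
            if (strcmp(wanted[i], removed) != 0)
                create_listener(wanted[i]);
    }

    int main(void)
    {
        const char *wanted[] = { "0.0.0.0:2049", "0.0.0.0:20049" };

        listener_set(wanted, 2, "0.0.0.0:20049");
        printf("%d listener(s) left: %s\n", nr_listeners, listeners[0]);
        return 0;
    }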

Fixes: 16a4711 ("NFSD: add listener-{set,get} netlink command")
Signed-off-by: Olga Kornievskaia <[email protected]>
Reviewed-by: Jeff Layton <[email protected]>
Cc: [email protected]
Signed-off-by: Chuck Lever <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
(cherry picked from commit a84c80515ca8a0cdf6d06f1b6ca721224b08453e)
Signed-off-by: Jack Vogel <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Scenario:
1. Port goes down and a failover occurs
2. The application does an rds_bind syscall

PID: 47039  TASK: ffff89887e2fe640  CPU: 47  COMMAND: "kworker/u:6"
 #0 [ffff898e35f159f0] machine_kexec at ffffffff8103abf9
 #1 [ffff898e35f15a60] crash_kexec at ffffffff810b96e3
 #2 [ffff898e35f15b30] oops_end at ffffffff8150f518
 #3 [ffff898e35f15b60] no_context at ffffffff8104854c
 #4 [ffff898e35f15ba0] __bad_area_nosemaphore at ffffffff81048675
 #5 [ffff898e35f15bf0] bad_area_nosemaphore at ffffffff810487d3
 #6 [ffff898e35f15c00] do_page_fault at ffffffff815120b8
 #7 [ffff898e35f15d10] page_fault at ffffffff8150ea95
    [exception RIP: unknown or invalid address]
    RIP: 0000000000000000  RSP: ffff898e35f15dc8  RFLAGS: 00010282
    RAX: 00000000fffffffe  RBX: ffff889b77f6fc00  RCX:ffffffff81c99d88
    RDX: 0000000000000000  RSI: ffff896019ee08e8  RDI:ffff889b77f6fc00
    RBP: ffff898e35f15df0   R8: ffff896019ee08c8  R9:0000000000000000
    R10: 0000000000000400  R11: 0000000000000000  R12:ffff896019ee08c0
    R13: ffff889b77f6fe68  R14: ffffffff81c99d80  R15: ffffffffa022a1e0
    ORIG_RAX: ffffffffffffffff  CS: 0010 SS: 0018
 #8 [ffff898e35f15dc8] cma_ndev_work_handler at ffffffffa022a228 [rdma_cm]
 #9 [ffff898e35f15df8] process_one_work at ffffffff8108a7c6
 #10 [ffff898e35f15e58] worker_thread at ffffffff8108bda0
 #11 [ffff898e35f15ee8] kthread at ffffffff81090fe6

PID: 45659  TASK: ffff880d313d2500  CPU: 31  COMMAND: "oracle_45659_ap"
 #0 [ffff881024ccfc98] __schedule at ffffffff8150bac4
 #1 [ffff881024ccfd40] schedule at ffffffff8150c2cf
 #2 [ffff881024ccfd50] __mutex_lock_slowpath at ffffffff8150cee7
 #3 [ffff881024ccfdc0] mutex_lock at ffffffff8150cdeb
 #4 [ffff881024ccfde0] rdma_destroy_id at ffffffffa022a027 [rdma_cm]
 #5 [ffff881024ccfe10] rds_ib_laddr_check at ffffffffa0357857 [rds_rdma]
 #6 [ffff881024ccfe50] rds_trans_get_preferred at ffffffffa0324c2a [rds]
 #7 [ffff881024ccfe80] rds_bind at ffffffffa031d690 [rds]
 #8 [ffff881024ccfeb0] sys_bind at ffffffff8142a670

PID: 45659                          PID: 47039
rds_ib_laddr_check
  /* create id_priv with a null event_handler */
  rdma_create_id
  rdma_bind_addr
    cma_acquire_dev
      /* add id_priv to cma_dev->id_list */
      cma_attach_to_dev
                                    cma_ndev_work_handler
                                      /* event_handler is null */
                                      id_priv->id.event_handler
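
One way to picture a guard against the race shown above is a NULL check
before invoking the handler; this is only an illustration of the hazard with
simplified stand-in types, not necessarily the fix that was applied:

    #include <stdio.h>

    /* Simplified stand-ins for the rdma_cm id/event types. */
    struct cm_event { int type; };
    struct cm_id {
        int (*event_handler)(struct cm_id *id, struct cm_event *ev);
    };

    /* Work-handler model: skip ids that were created without an event handler
     * (as in the rds_ib_laddr_check() path) instead of calling through NULL. */
    static void ndev_work_model(struct cm_id *id, struct cm_event *ev)
    {
        if (!id->event_handler)
            return;
        id->event_handler(id, ev);
    }

    int main(void)
    {
        struct cm_id bare_id = { 0 };   /* event_handler == NULL */
        struct cm_event ev = { 1 };

        ndev_work_model(&bare_id, &ev); /* safely skipped, no crash */
        printf("handled safely\n");
        return 0;
    }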

Orabug: 27530931

Signed-off-by: Guanglei Li <[email protected]>
Signed-off-by: Honglei Wang <[email protected]>
Reviewed-by: Junxiao Bi <[email protected]>
Reviewed-by: Yanjun Zhu <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Acked-by: Santosh Shilimkar <[email protected]>
Acked-by: Doug Ledford <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
(cherry picked from commit 2c0aa08)
Reviewed-by: Håkon Bugge <[email protected]>
Signed-off-by: Somasundaram Krishnasamy <[email protected]>

Orabug: 33590097

UEK6 => UEK7

(cherry picked from commit 39e0939)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>

Orabug: 33590087

UEK7 => LUCI

(cherry picked from commit 7d342f8)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
The customer hit this crash a few times.

PID: 31556  TASK: ffff880f823caa00  CPU: 1   COMMAND: "cellsrv"
 #0 [ffff880f823db850] machine_kexec at ffffffff8105d93c
 #1 [ffff880f823db8b0] crash_kexec at ffffffff811103b3
 #2 [ffff880f823db980] oops_end at ffffffff8101a788
 #3 [ffff880f823db9b0] no_context at ffffffff8106b9cf
 #4 [ffff880f823dba20] __bad_area_nosemaphore at ffffffff8106bc9d
 #5 [ffff880f823dba70] bad_area at ffffffff8106be97
 #6 [ffff880f823dbaa0] __do_page_fault at ffffffff8106c71e
 #7 [ffff880f823dbb00] do_page_fault at ffffffff8106c81f
 #8 [ffff880f823dbb40] page_fault at ffffffff816b5a9f
    [exception RIP: rds_ib_inc_copy_to_user+104]
    RIP: ffffffffa04607b8  RSP: ffff880f823dbbf8  RFLAGS: 00010287
    RAX: 0000000000000340  RBX: 0000000000001000  RCX: 0000000000004000
    RDX: 0000000000001000  RSI: ffff88176cea2000  RDI: ffff8817d291f520
    RBP: ffff880f823dbc48   R8: 0000000000001340   R9: 0000000000001000
    R10: 0000000000001200  R11: ffff880f823dc000  R12: ffff880f823dbed0
    R13: 0000000000000000  R14: 0000000000000000  R15: 0000000000001000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880f823dbc50] rds_recvmsg at ffffffffa041d837 [rds]

int rds_ib_inc_copy_to_user(struct rds_incoming *inc, struct iov_iter *to)
...
...
        ibinc = container_of(inc, struct rds_ib_incoming, ii_inc);
        frag = list_entry(ibinc->ii_frags.next, struct rds_page_frag, f_item);
        len = be32_to_cpu(inc->i_hdr.h_len);
        sg = frag->f_sg;

        while (iov_iter_count(to) && copied < len) {
                to_copy = min_t(unsigned long, iov_iter_count(to),
                                sg->length - frag_off);
                ...

sg is NULL and it crashes accessing sg->length above.

The cause appears to be ic->i_frag_sz returning an incorrect value:
16KB when 4KB was expected.

                if (copied % ic->i_frag_sz == 0) {
                        frag = list_entry(frag->f_item.next,
                                          struct rds_page_frag, f_item);
                        frag_off = 0;
                        sg = frag->f_sg;
                }

The other end is using 4KB RDS fragsize (Solaris Super Cluster).
This end is UEK4 (4.1.12-94.8.4.el6uek.x86_64).

The message being copied arrived over 4KB RDS frag size connection.
But during the above check ic->i_frag_sz is 16KB.
This can happen during a reconnect at the connection setup phase.
We start off with ic->i_frag_sz as 16KB. Then settle down at 4KB.

Failing this check
  if (copied % ic->i_frag_sz == 0) {
can result in sg not getting set correctly.

Say, "copied" = 4KB but ic->i_frag_sz is 16KB when it should be 4KB.

During a race with a reconnect, ic->i_frag_sz can be 16KB even though it
settles down to 4KB once the connection is established.  It can change from
4KB to 16KB and back to 4KB during connection setup due to the reconnect.

We started seeing this crash after bug 26848749, but prior to that the same
scenario could result in data being copied to user space from an incorrect
"sg", causing data corruption.
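
The arithmetic behind the missed fragment advance can be checked directly
with the numbers from this report, assuming the check uses the stale 16KB
ic->i_frag_sz while the data actually arrived in 4KB fragments:

    #include <stdio.h>

    int main(void)
    {
        unsigned long frag_sz_stale = 16384;  /* ic->i_frag_sz seen during reconnect  */
        unsigned long frag_sz_real  = 4096;   /* fragment size the data actually used */
        unsigned long copied        = 4096;   /* one full 4KB fragment copied so far  */

        /* With the stale size the "advance to the next frag" branch is skipped,
         * so the copy keeps reading past the 4KB scatterlist entry. */
        printf("advance with stale 16KB size? %s\n",
               (copied % frag_sz_stale == 0) ? "yes" : "no (bug)");
        printf("advance with real 4KB size?   %s\n",
               (copied % frag_sz_real == 0) ? "yes" : "no");
        return 0;
    }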

Orabug: 28748008

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>

Orabug: 33590097

UEK6 => UEK7

(cherry picked from commit 14858a3)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>

Orabug: 33590087

UEK7 => LUCI

(cherry picked from commit e86878f)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
…error

The sequence that leads to this state is as follows.

1) First we see CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck here in rds_ib_conn_shutdown forever:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty.
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When the CQ moves to error (e.g. due to CQ overrun or CQ access violation),
FW will generate an async event to notify of this error; the QPs that try to
access this CQ will also be put into the error state, but they will not be
flushed, since we must not post CQEs to a broken CQ.  A QP that tries to
access it will also issue an async catas event.

In summary, we cannot wait for any more WR_FLUSH_ERRs in that state.
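
A sketch of the shutdown-loop behaviour that conclusion suggests (give up
waiting for flush completions once the connection has hit a fatal CQ error);
this is an illustration of the idea in the text, with a made-up entry count,
not the actual patch:

    #include <stdio.h>

    /* Modelled shutdown wait: normally we poll until the receive ring drains,
     * but after a fatal CQ error no more flush completions will ever arrive,
     * so the loop has to give up instead of spinning forever. */
    static void conn_shutdown_model(int ring_entries, int fatal_cq_error)
    {
        while (ring_entries > 0) {
            if (fatal_cq_error) {
                printf("CQ in error: abandoning %d un-flushed entries\n",
                       ring_entries);
                break;
            }
            ring_entries--;   /* one flush completion reaped */
        }
        printf("shutdown can proceed\n");
    }

    int main(void)
    {
        conn_shutdown_model(304, 1);   /* made-up count of outstanding entries */
        return 0;
    }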

Orabug: 29180452

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>

Orabug: 33590097

UEK6 => UEK7

(cherry picked from commit 964cad6)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>

Orabug: 33590087

UEK7 => LUCI

(cherry picked from commit e40c8e4)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Multiple RDS/TCP backend connections need to be brought up deterministically
(i.e. first #0, then #1, ..., then #7), in order not to shuffle per-path
connection state that is preserved across reconnects (e.g. "cp_next_rx_seq").
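
A few lines are enough to show the ordering requirement (bring the paths up
strictly in index order rather than in whatever order workers happen to run);
the connect call below is a placeholder:

    #include <stdio.h>

    #define NUM_PATHS 8

    /* Placeholder for the real per-path connect. */
    static void bring_up_path(int index)
    {
        printf("connecting path #%d\n", index);
    }

    int main(void)
    {
        /* Deterministic: path #0 first, then #1, ..., then #7, so per-path
         * state such as cp_next_rx_seq lines up across reconnects. */
        for (int i = 0; i < NUM_PATHS; i++)
            bring_up_path(i);
        return 0;
    }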

Orabug: 32422417

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: Sharath Srinivasan <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
One of our customers reported the following stack.

crash-7.3.0> bt
PID: 250515  TASK: ffff888189482f80  CPU: 1   COMMAND: "vmbackup"
 #0 [ffffc90025017878] die at ffffffff81033c22
 #1 [ffffc900250178a8] do_trap at ffffffff81030990
 #2 [ffffc900250178f8] do_error_trap at ffffffff810311d7
 #3 [ffffc900250179c0] do_invalid_op at ffffffff81031310
 #4 [ffffc900250179d0] invalid_op at ffffffff81a01f2a
    [exception RIP: ocfs2_truncate_rec+1914]
    RIP: ffffffffc1e73b4a  RSP: ffffc90025017a80  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: 0000000000053a75  RCX: 0000000000000000
    RDX: 0000000000000000  RSI: ffff8882d385be08  RDI: ffff8882d385be08
    RBP: ffffc90025017b10   R8: 0000000000000000   R9: 0000000000005900
    R10: 0000000000000001  R11: 0000000000aaaaaa  R12: 0000000000000001
    R13: ffff88829e5a9900  R14: ffffc90025017cf0  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: e030  SS: e02b
 #5 [ffffc90025017b18] ocfs2_remove_extent at ffffffffc1e73e6c [ocfs2]
 #6 [ffffc90025017bc8] ocfs2_remove_btree_range at ffffffffc1e745f2 [ocfs2]
 #7 [ffffc90025017c60] ocfs2_commit_truncate at ffffffffc1e75b1f [ocfs2]
 #8 [ffffc90025017d68] __dta_ocfs2_wipe_inode_606 at ffffffffc1e9a3e0 [ocfs2]
 #9 [ffffc90025017dd8] ocfs2_evict_inode at ffffffffc1e9ac10 [ocfs2]
    RIP: 00007f9b26ec8307  RSP: 00007ffc5a193f68  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000ddd0a0  RCX: 00007f9b26ec8307
    RDX: 0000000000000001  RSI: 00007f9b2719e770  RDI: 0000000001010400
    RBP: 0000000001263d80   R8: 0000000000000000   R9: 00000000012146a0
    R10: 000000000000000d  R11: 0000000000000246  R12: 0000000000ddd0a0
    R13: 00007f9b27ba9595  R14: 00007f9b27ca4a50  R15: 00000000ffffffff
    ORIG_RAX: 0000000000000057  CS: 0033  SS: 002b crash-7.3.0>

This crash resulted from an invalid extent record being selected for truncate.

At the top of the function ocfs2_truncate_rec(), the code checks whether the
first extent record in the leaf extent list corresponding to the input path
is still empty. In that case the tree is supposed to be rotated left to get
rid of the empty extent record, but this rotation did not happen.

However, the function ocfs2_truncate_rec() assumes that the top-level call
to ocfs2_rotate_tree_left() to get rid of the empty extent always succeeds,
and hence it decrements the input "index" value. This results in the
selection of a wrong record for truncate, which leads to hitting BUG() with
the message "Owner %llu: Invalid record truncate: (%u, %u) ".
The stack above is the panic stack caused by hitting BUG().

Though the function ocfs2_rotate_tree_left() was intended to get rid of
the first empty record in the extent block, it did not call the function
ocfs2_rotate_rightmost_leaf_left(), because it did not find h_next_leaf_blk
in the extent leaf block to be zero; instead, it proceeded to call
__ocfs2_rotate_tree_left(). However, the input "index" value was in fact
pointing to the last extent record in the leaf block. The macro
path_leaf_bh() was returning the rightmost extent block as per the tree
depth, and the function ocfs2_find_cpos_for_right_leaf() also found that
the extent block in question is indeed the rightmost, so there was nothing
to rotate at the last extent record pointed to by the input "index" value.
Hence the extent tree in the leaf block was not rotated at all.

The real reason for the above panic is therefore that the value of the field
h_next_leaf_blk in the rightmost leaf block was non-zero, which prevented
the tree from rotating left and resulted in the selection of an invalid
record for truncate.

The reason why h_next_leaf_blk was not cleared for the last extent block
is still not known, and the code changes here are a workaround that avoids
the panic by verifying that the extent block in question is indeed the
rightmost leaf block in the tree and then correcting the invalid
h_next_leaf_blk value. These changes have been verified by the customer
by running the provided rpm in their environment.
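
A minimal sketch of the workaround described above, assuming a helper named
ocfs2_fixup_rightmost_leaf() and eliding error handling; the real change may
be structured differently:

        static void ocfs2_fixup_rightmost_leaf(struct super_block *sb,
                                               struct ocfs2_path *path)
        {
                struct buffer_head *bh = path_leaf_bh(path);
                struct ocfs2_extent_block *eb =
                                (struct ocfs2_extent_block *)bh->b_data;
                u32 right_cpos = 0;

                /* A right-leaf cpos of zero means this leaf is already the
                 * rightmost one in the tree (error handling elided). */
                ocfs2_find_cpos_for_right_leaf(sb, path, &right_cpos);

                if (right_cpos == 0 && le64_to_cpu(eb->h_next_leaf_blk) != 0) {
                        /* Stale link: the rightmost leaf must not point to a
                         * next leaf, so clear it before the left rotation. */
                        eb->h_next_leaf_blk = 0;
                }
        }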

Orabug: 34393593

Signed-off-by: Gautham Ananthakrishna <[email protected]>
Reviewed-by: Junxiao Bi <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
…error

The sequence that leads to this state is as follows.

1) First we see CQ error logged.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core
0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2
vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection.

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection
<192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that connection.

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4   COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck forever here in rds_ib_conn_shutdown:

                /* quiesce tx and rx completion before tearing down */
                while (!wait_event_timeout(rds_ib_ring_empty_wait,
                                rds_ib_ring_empty(&ic->i_recv_ring) &&
                                (atomic_read(&ic->i_signaled_sends) == 0),
                                msecs_to_jiffies(5000))) {

                        /* Try to reap pending RX completions every 5 secs */
                        if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
                                spin_lock_bh(&ic->i_rx_lock);
                                rds_ib_rx(ic);
                                spin_unlock_bh(&ic->i_rx_lock);
                        }
                }

The recv ring was not empty.
w_alloc_ptr = 560
w_free_ptr  = 256

This is what Mellanox had to say:
When the CQ moves to error (e.g. due to CQ Overrun or CQ Access violation),
FW will generate an async event to notify of this error; the QPs that try to
access this CQ will also be put into error state but will not be flushed,
since we must not post CQEs to a broken CQ. The QP that tries to access it
will also issue an async catastrophic event.

In summary, we cannot wait for any more WR_FLUSH_ERRs in that state.
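
A hedged sketch of the complementary piece: an async event handler that
records the broken-CQ condition so the wait loop quoted above can stop
waiting. Both the handler name and the ic->i_cq_err flag are assumptions,
not the literal upstream fix.

        static void rds_ib_cq_error_event(struct ib_event *event, void *data)
        {
                struct rds_ib_connection *ic = data;

                if (event->event == IB_EVENT_CQ_ERR) {
                        /* No more completions will ever be flushed; note the
                         * error and wake whoever is waiting on the ring. */
                        atomic_set(&ic->i_cq_err, 1);
                        wake_up(&rds_ib_ring_empty_wait);
                }
        }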

Orabug: 29180452

Reviewed-by: Rama Nichanamatlu <[email protected]>
Signed-off-by: Venkat Venkatsubra <[email protected]>

Orabug: 33590097

UEK6 => UEK7

(cherry picked from commit 964cad6)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>

Orabug: 33590087

UEK7 => LUCI

(cherry picked from commit e40c8e4)
cherry-pick-repo=UEK/production/linux-uek.git

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: William Kucharski <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Multiple RDS/TCP backend connections need to be brought up
deterministically (i.e. first #0, then #1, ... then #7),
in order not to shuffle per-path connection state
that is preserved across reconnects (e.g. "cp_next_rx_seq").
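
A hedged illustration of the intent, reusing the existing RDS multipath
fields (c_npaths, c_path[]) and rds_conn_path_connect_if_down(); waiting for
each path to finish connecting before starting the next is elided here and
is where the real change does its work:

        /* Kick off the paths strictly in ascending index order (#0 first,
         * then #1, ... then #7) so per-path state such as cp_next_rx_seq
         * lines up the same way on both peers after a reconnect. */
        for (i = 0; i < conn->c_npaths; i++)
                rds_conn_path_connect_if_down(&conn->c_path[i]);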

Orabug: 32422417

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: Sharath Srinivasan <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Add a check to mlx5e_xmit() for shorter frames. A corrupted/malformed
packet with a shorter length can eventually cause a system panic further
down in the code path. Avoid this by validating the length and dropping
such packets as early as possible.
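
A hedged sketch of such a check at the top of mlx5e_xmit(); the exact
minimum length used by the real patch may differ, ETH_HLEN is only an
illustrative lower bound:

        /* Early in mlx5e_xmit(): a frame shorter than an Ethernet header
         * cannot be valid, so drop it before the WQE build path copies
         * headers that are not there. */
        if (unlikely(skb->len < ETH_HLEN)) {
                dev_kfree_skb_any(skb);
                return NETDEV_TX_OK;    /* consumed (dropped) */
        }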

The following is seen in our environment with a shorter skb->len:

crash> bt
PID: 76981    TASK: ff19828cfe508000  CPU: 106  COMMAND: "vhost-76942"
 #0 [ff2d20159b39f2c8] machine_kexec at ffffffffad884801
 #1 [ff2d20159b39f328] __crash_kexec at ffffffffad976142
 #2 [ff2d20159b39f3f8] panic at ffffffffad8b3640
 #3 [ff2d20159b39f4a0] no_context at ffffffffad8954e1
 #4 [ff2d20159b39f518] __bad_area_nosemaphore at ffffffffad8958de
 #5 [ff2d20159b39f578] bad_area_nosemaphore at ffffffffad895a96
 #6 [ff2d20159b39f588] do_kern_addr_fault at ffffffffad89688e
 #7 [ff2d20159b39f5b0] __do_page_fault at ffffffffad896b30
 #8 [ff2d20159b39f618] do_page_fault at ffffffffad896db6
 #9 [ff2d20159b39f650] page_fault at ffffffffae402acd
    [exception RIP: memcpy_erms+6]
    RIP: ffffffffae261ab6  RSP: ff2d20159b39f700  RFLAGS: 00010293
    RAX: ff198291741ecf2e  RBX: ff19828e70d6a100  RCX: fffffffffea1af2b
    RDX: fffffffffffffffd  RSI: ff19828eba6d7e5e  RDI: ff198291757d2000
    RBP: ff2d20159b39f760   R8: ff198291741ecf00   R9: 000000000000037c
    R10: 000000000000003c  R11: ff19828ffe953940  R12: ff198291741ecf20
    R13: ff198267dcb1b600  R14: ff19828eeebb09c0  R15: ff198291741ecf00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #10 [ff2d20159b39f700] mlx5e_sq_xmit_wqe at ffffffffc05c162e [mlx5_core]
 #11 [ff2d20159b39f768] mlx5e_xmit at ffffffffc05c1ca3 [mlx5_core]
 #12 [ff2d20159b39f800] dev_hard_start_xmit at ffffffffae083766
 #13 [ff2d20159b39f860] sch_direct_xmit at ffffffffae0e2564
 #14 [ff2d20159b39f8b0] __qdisc_run at ffffffffae0e294e
 #15 [ff2d20159b39f928] __dev_queue_xmit at ffffffffae083eee
 #16 [ff2d20159b39f9a8] dev_queue_xmit at ffffffffae084370
 #17 [ff2d20159b39f9b8] vlan_dev_hard_start_xmit at ffffffffc2fb6fec [8021q]
 #18 [ff2d20159b39f9d8] dev_hard_start_xmit at ffffffffae083766
 #19 [ff2d20159b39fa38] __dev_queue_xmit at ffffffffae08416a
 #20 [ff2d20159b39fab8] dev_queue_xmit_accel at ffffffffae08438e
 #21 [ff2d20159b39fac8] macvlan_start_xmit at ffffffffc2fc18d9 [macvlan]
 #22 [ff2d20159b39faf0] dev_hard_start_xmit at ffffffffae083766
 #23 [ff2d20159b39fb50] sch_direct_xmit at ffffffffae0e2564
 #24 [ff2d20159b39fba0] __qdisc_run at ffffffffae0e294e
 #25 [ff2d20159b39fc18] __dev_queue_xmit at ffffffffae083c81
 #26 [ff2d20159b39fc90] dev_queue_xmit at ffffffffae084370
 #27 [ff2d20159b39fca0] tap_sendmsg at ffffffffc07206ed [tap]
 #28 [ff2d20159b39fd20] vhost_tx_batch at ffffffffc2fd6590 [vhost_net]
 #29 [ff2d20159b39fd68] handle_tx_copy at ffffffffc2fd70f3 [vhost_net]
 #30 [ff2d20159b39fe80] handle_tx at ffffffffc2fd7651 [vhost_net]
 #31 [ff2d20159b39feb0] handle_tx_kick at ffffffffc2fd76b5 [vhost_net]
 #32 [ff2d20159b39fec0] vhost_worker at ffffffffc12a5be8 [vhost]
 #33 [ff2d20159b39ff08] kthread at ffffffffad8dbfe5
 #34 [ff2d20159b39ff50] ret_from_fork at ffffffffae400364

This change was discussed with Nvidia and they are in agreement.

Orabug: 36879156
CVE: CVE-2024-41090
CVE: CVE-2024-41091

Fixes: e4cf27b ("net/mlx5e: Re-eanble client vlan TX acceleration")
Reported-and-tested-by: Dongli Zhang <[email protected]>
Signed-off-by: Manjunath Patil <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Reviewed-by: Jack Vogel <[email protected]>
Signed-off-by: Brian Maly <[email protected]>
(cherry picked from commit 0dd4b99)

Orabug: 36879126
CVE: CVE-2024-41090
CVE: CVE-2024-41091

Signed-off-by: Harshvardhan Jha <[email protected]>
Reviewed-by: Vijayendra Suman <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Multiple RDS/TCP backend connections need to be brought up
deterministically (i.e. first #0, then #1, ... then #7),
in order not to shuffle per-path connection state
that is preserved across reconnects (e.g. "cp_next_rx_seq").

Orabug: 32422417

Signed-off-by: Gerd Rausch <[email protected]>
Reviewed-by: Sharath Srinivasan <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
One of our customers reported the following stack.

crash-7.3.0> bt
PID: 250515  TASK: ffff888189482f80  CPU: 1   COMMAND: "vmbackup"
 #0 [ffffc90025017878] die at ffffffff81033c22
 #1 [ffffc900250178a8] do_trap at ffffffff81030990
 #2 [ffffc900250178f8] do_error_trap at ffffffff810311d7
 #3 [ffffc900250179c0] do_invalid_op at ffffffff81031310
 #4 [ffffc900250179d0] invalid_op at ffffffff81a01f2a
    [exception RIP: ocfs2_truncate_rec+1914]
    RIP: ffffffffc1e73b4a  RSP: ffffc90025017a80  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: 0000000000053a75  RCX: 0000000000000000
    RDX: 0000000000000000  RSI: ffff8882d385be08  RDI: ffff8882d385be08
    RBP: ffffc90025017b10   R8: 0000000000000000   R9: 0000000000005900
    R10: 0000000000000001  R11: 0000000000aaaaaa  R12: 0000000000000001
    R13: ffff88829e5a9900  R14: ffffc90025017cf0  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: e030  SS: e02b
 #5 [ffffc90025017b18] ocfs2_remove_extent at ffffffffc1e73e6c [ocfs2]
 #6 [ffffc90025017bc8] ocfs2_remove_btree_range at ffffffffc1e745f2 [ocfs2]
 #7 [ffffc90025017c60] ocfs2_commit_truncate at ffffffffc1e75b1f [ocfs2]
 #8 [ffffc90025017d68] __dta_ocfs2_wipe_inode_606 at ffffffffc1e9a3e0 [ocfs2]
 #9 [ffffc90025017dd8] ocfs2_evict_inode at ffffffffc1e9ac10 [ocfs2]
    RIP: 00007f9b26ec8307  RSP: 00007ffc5a193f68  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000ddd0a0  RCX: 00007f9b26ec8307
    RDX: 0000000000000001  RSI: 00007f9b2719e770  RDI: 0000000001010400
    RBP: 0000000001263d80   R8: 0000000000000000   R9: 00000000012146a0
    R10: 000000000000000d  R11: 0000000000000246  R12: 0000000000ddd0a0
    R13: 00007f9b27ba9595  R14: 00007f9b27ca4a50  R15: 00000000ffffffff
    ORIG_RAX: 0000000000000057  CS: 0033  SS: 002b crash-7.3.0>

This crash resulted from an invalid extent record being selected for truncate.

At the top of the function ocfs2_truncate_rec(), the code checks whether the
first extent record in the leaf extent list corresponding to the input path
is still empty. In that case the tree is supposed to be rotated left to get
rid of the empty extent record, but this rotation did not happen.

However, the function ocfs2_truncate_rec() assumes that the top-level call
to ocfs2_rotate_tree_left() to get rid of the empty extent always succeeds,
and hence it decrements the input "index" value. This results in the
selection of a wrong record for truncate, which leads to hitting BUG() with
the message "Owner %llu: Invalid record truncate: (%u, %u) ".
The stack above is the panic stack caused by hitting BUG().

Though the function ocfs2_rotate_tree_left() was intended to get rid of
the first empty record in the extent block, it did not call the function
ocfs2_rotate_rightmost_leaf_left(), because it did not find h_next_leaf_blk
in the extent leaf block to be zero; instead, it proceeded to call
__ocfs2_rotate_tree_left(). However, the input "index" value was in fact
pointing to the last extent record in the leaf block. The macro
path_leaf_bh() was returning the rightmost extent block as per the tree
depth, and the function ocfs2_find_cpos_for_right_leaf() also found that
the extent block in question is indeed the rightmost, so there was nothing
to rotate at the last extent record pointed to by the input "index" value.
Hence the extent tree in the leaf block was not rotated at all.

The real reason for the above panic is therefore that the value of the field
h_next_leaf_blk in the rightmost leaf block was non-zero, which prevented
the tree from rotating left and resulted in the selection of an invalid
record for truncate.

The reason why h_next_leaf_blk was not cleared for the last extent block
is still not known, and the code changes here are a workaround that avoids
the panic by verifying that the extent block in question is indeed the
rightmost leaf block in the tree and then correcting the invalid
h_next_leaf_blk value. These changes have been verified by the customer
by running the provided rpm in their environment.
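
For illustration, a simplified and non-verbatim view of the assumption in
ocfs2_truncate_rec() that the analysis above refers to (argument lists
abbreviated):

        if (ocfs2_is_empty_extent(&el->l_recs[0])) {
                ret = ocfs2_rotate_tree_left(handle, et, path, dealloc);
                if (ret)
                        goto out;

                /* Assumes the rotation removed the empty record; when the
                 * rotation silently does nothing (non-zero h_next_leaf_blk
                 * on the rightmost leaf), this now selects the wrong record
                 * for truncate. */
                index--;
        }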

Orabug: 34393593

Signed-off-by: Gautham Ananthakrishna <[email protected]>
Reviewed-by: Junxiao Bi <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
One of our customers reported the following stack.

crash-7.3.0> bt
PID: 250515  TASK: ffff888189482f80  CPU: 1   COMMAND: "vmbackup"
 #0 [ffffc90025017878] die at ffffffff81033c22
 #1 [ffffc900250178a8] do_trap at ffffffff81030990
 #2 [ffffc900250178f8] do_error_trap at ffffffff810311d7
 #3 [ffffc900250179c0] do_invalid_op at ffffffff81031310
 #4 [ffffc900250179d0] invalid_op at ffffffff81a01f2a
    [exception RIP: ocfs2_truncate_rec+1914]
    RIP: ffffffffc1e73b4a  RSP: ffffc90025017a80  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: 0000000000053a75  RCX: 0000000000000000
    RDX: 0000000000000000  RSI: ffff8882d385be08  RDI: ffff8882d385be08
    RBP: ffffc90025017b10   R8: 0000000000000000   R9: 0000000000005900
    R10: 0000000000000001  R11: 0000000000aaaaaa  R12: 0000000000000001
    R13: ffff88829e5a9900  R14: ffffc90025017cf0  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: e030  SS: e02b
 #5 [ffffc90025017b18] ocfs2_remove_extent at ffffffffc1e73e6c [ocfs2]
 #6 [ffffc90025017bc8] ocfs2_remove_btree_range at ffffffffc1e745f2 [ocfs2]
 #7 [ffffc90025017c60] ocfs2_commit_truncate at ffffffffc1e75b1f [ocfs2]
 #8 [ffffc90025017d68] __dta_ocfs2_wipe_inode_606 at ffffffffc1e9a3e0 [ocfs2]
 #9 [ffffc90025017dd8] ocfs2_evict_inode at ffffffffc1e9ac10 [ocfs2]
    RIP: 00007f9b26ec8307  RSP: 00007ffc5a193f68  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 0000000000ddd0a0  RCX: 00007f9b26ec8307
    RDX: 0000000000000001  RSI: 00007f9b2719e770  RDI: 0000000001010400
    RBP: 0000000001263d80   R8: 0000000000000000   R9: 00000000012146a0
    R10: 000000000000000d  R11: 0000000000000246  R12: 0000000000ddd0a0
    R13: 00007f9b27ba9595  R14: 00007f9b27ca4a50  R15: 00000000ffffffff
    ORIG_RAX: 0000000000000057  CS: 0033  SS: 002b crash-7.3.0>

This crash resulted from an invalid extent record being selected for truncate.

At the top of the function ocfs2_truncate_rec(), the code checks whether the
first extent record in the leaf extent list corresponding to the input path
is still empty. In that case the tree is supposed to be rotated left to get
rid of the empty extent record, but this rotation did not happen.

However, the function ocfs2_truncate_rec() assumes that the top-level call
to ocfs2_rotate_tree_left() to get rid of the empty extent always succeeds,
and hence it decrements the input "index" value. This results in the
selection of a wrong record for truncate, which leads to hitting BUG() with
the message "Owner %llu: Invalid record truncate: (%u, %u) ".
The stack above is the panic stack caused by hitting BUG().

Though the function ocfs2_rotate_tree_left() was intended to get rid of
the first empty record in the extent block, it did not call the function
ocfs2_rotate_rightmost_leaf_left(), because it did not find h_next_leaf_blk
in the extent leaf block to be zero; instead, it proceeded to call
__ocfs2_rotate_tree_left(). However, the input "index" value was in fact
pointing to the last extent record in the leaf block. The macro
path_leaf_bh() was returning the rightmost extent block as per the tree
depth, and the function ocfs2_find_cpos_for_right_leaf() also found that
the extent block in question is indeed the rightmost, so there was nothing
to rotate at the last extent record pointed to by the input "index" value.
Hence the extent tree in the leaf block was not rotated at all.

The real reason for the above panic is therefore that the value of the field
h_next_leaf_blk in the rightmost leaf block was non-zero, which prevented
the tree from rotating left and resulted in the selection of an invalid
record for truncate.

The reason why h_next_leaf_blk was not cleared for the last extent block
is still not known, and the code changes here are a workaround that avoids
the panic by verifying that the extent block in question is indeed the
rightmost leaf block in the tree and then correcting the invalid
h_next_leaf_blk value. These changes have been verified by the customer
by running the provided rpm in their environment.

Orabug: 34393593

Signed-off-by: Gautham Ananthakrishna <[email protected]>
Reviewed-by: Junxiao Bi <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Add a check to mlx5e_xmit() for shorter frames. A corrupted/malformed
packet with a shorter length can eventually cause a system panic further
down in the code path. Avoid this by validating the length and dropping
such packets as early as possible.

The following is seen in our environment with a shorter skb->len:

crash> bt
PID: 76981    TASK: ff19828cfe508000  CPU: 106  COMMAND: "vhost-76942"
 #0 [ff2d20159b39f2c8] machine_kexec at ffffffffad884801
 #1 [ff2d20159b39f328] __crash_kexec at ffffffffad976142
 #2 [ff2d20159b39f3f8] panic at ffffffffad8b3640
 #3 [ff2d20159b39f4a0] no_context at ffffffffad8954e1
 #4 [ff2d20159b39f518] __bad_area_nosemaphore at ffffffffad8958de
 #5 [ff2d20159b39f578] bad_area_nosemaphore at ffffffffad895a96
 #6 [ff2d20159b39f588] do_kern_addr_fault at ffffffffad89688e
 #7 [ff2d20159b39f5b0] __do_page_fault at ffffffffad896b30
 #8 [ff2d20159b39f618] do_page_fault at ffffffffad896db6
 #9 [ff2d20159b39f650] page_fault at ffffffffae402acd
    [exception RIP: memcpy_erms+6]
    RIP: ffffffffae261ab6  RSP: ff2d20159b39f700  RFLAGS: 00010293
    RAX: ff198291741ecf2e  RBX: ff19828e70d6a100  RCX: fffffffffea1af2b
    RDX: fffffffffffffffd  RSI: ff19828eba6d7e5e  RDI: ff198291757d2000
    RBP: ff2d20159b39f760   R8: ff198291741ecf00   R9: 000000000000037c
    R10: 000000000000003c  R11: ff19828ffe953940  R12: ff198291741ecf20
    R13: ff198267dcb1b600  R14: ff19828eeebb09c0  R15: ff198291741ecf00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #10 [ff2d20159b39f700] mlx5e_sq_xmit_wqe at ffffffffc05c162e [mlx5_core]
 #11 [ff2d20159b39f768] mlx5e_xmit at ffffffffc05c1ca3 [mlx5_core]
 #12 [ff2d20159b39f800] dev_hard_start_xmit at ffffffffae083766
 #13 [ff2d20159b39f860] sch_direct_xmit at ffffffffae0e2564
 #14 [ff2d20159b39f8b0] __qdisc_run at ffffffffae0e294e
 #15 [ff2d20159b39f928] __dev_queue_xmit at ffffffffae083eee
 #16 [ff2d20159b39f9a8] dev_queue_xmit at ffffffffae084370
 #17 [ff2d20159b39f9b8] vlan_dev_hard_start_xmit at ffffffffc2fb6fec [8021q]
 #18 [ff2d20159b39f9d8] dev_hard_start_xmit at ffffffffae083766
 #19 [ff2d20159b39fa38] __dev_queue_xmit at ffffffffae08416a
 #20 [ff2d20159b39fab8] dev_queue_xmit_accel at ffffffffae08438e
 #21 [ff2d20159b39fac8] macvlan_start_xmit at ffffffffc2fc18d9 [macvlan]
 #22 [ff2d20159b39faf0] dev_hard_start_xmit at ffffffffae083766
 #23 [ff2d20159b39fb50] sch_direct_xmit at ffffffffae0e2564
 #24 [ff2d20159b39fba0] __qdisc_run at ffffffffae0e294e
 #25 [ff2d20159b39fc18] __dev_queue_xmit at ffffffffae083c81
 #26 [ff2d20159b39fc90] dev_queue_xmit at ffffffffae084370
 #27 [ff2d20159b39fca0] tap_sendmsg at ffffffffc07206ed [tap]
 #28 [ff2d20159b39fd20] vhost_tx_batch at ffffffffc2fd6590 [vhost_net]
 #29 [ff2d20159b39fd68] handle_tx_copy at ffffffffc2fd70f3 [vhost_net]
 #30 [ff2d20159b39fe80] handle_tx at ffffffffc2fd7651 [vhost_net]
 #31 [ff2d20159b39feb0] handle_tx_kick at ffffffffc2fd76b5 [vhost_net]
 #32 [ff2d20159b39fec0] vhost_worker at ffffffffc12a5be8 [vhost]
 #33 [ff2d20159b39ff08] kthread at ffffffffad8dbfe5
 #34 [ff2d20159b39ff50] ret_from_fork at ffffffffae400364

This change was discussed with Nvidia and they are in agreement.

Orabug: 36879156
CVE: CVE-2024-41090
CVE: CVE-2024-41091

Fixes: e4cf27b ("net/mlx5e: Re-eanble client vlan TX acceleration")
Reported-and-tested-by: Dongli Zhang <[email protected]>
Signed-off-by: Manjunath Patil <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Reviewed-by: Jack Vogel <[email protected]>
Signed-off-by: Brian Maly <[email protected]>
(cherry picked from commit 0dd4b99)

Orabug: 36879126
CVE: CVE-2024-41090
CVE: CVE-2024-41091

Signed-off-by: Harshvardhan Jha <[email protected]>
Reviewed-by: Vijayendra Suman <[email protected]>
oraclelinuxkernel pushed a commit that referenced this pull request Apr 18, 2025
Add a check to mlx5e_xmit() for shorter frames. A corrupted/malformed
packet with a shorter length can eventually cause a system panic further
down in the code path. Avoid this by validating the length and dropping
such packets as early as possible.

The following is seen in our environment with a shorter skb->len:

crash> bt
PID: 76981    TASK: ff19828cfe508000  CPU: 106  COMMAND: "vhost-76942"
 #0 [ff2d20159b39f2c8] machine_kexec at ffffffffad884801
 #1 [ff2d20159b39f328] __crash_kexec at ffffffffad976142
 #2 [ff2d20159b39f3f8] panic at ffffffffad8b3640
 #3 [ff2d20159b39f4a0] no_context at ffffffffad8954e1
 #4 [ff2d20159b39f518] __bad_area_nosemaphore at ffffffffad8958de
 #5 [ff2d20159b39f578] bad_area_nosemaphore at ffffffffad895a96
 #6 [ff2d20159b39f588] do_kern_addr_fault at ffffffffad89688e
 #7 [ff2d20159b39f5b0] __do_page_fault at ffffffffad896b30
 #8 [ff2d20159b39f618] do_page_fault at ffffffffad896db6
 #9 [ff2d20159b39f650] page_fault at ffffffffae402acd
    [exception RIP: memcpy_erms+6]
    RIP: ffffffffae261ab6  RSP: ff2d20159b39f700  RFLAGS: 00010293
    RAX: ff198291741ecf2e  RBX: ff19828e70d6a100  RCX: fffffffffea1af2b
    RDX: fffffffffffffffd  RSI: ff19828eba6d7e5e  RDI: ff198291757d2000
    RBP: ff2d20159b39f760   R8: ff198291741ecf00   R9: 000000000000037c
    R10: 000000000000003c  R11: ff19828ffe953940  R12: ff198291741ecf20
    R13: ff198267dcb1b600  R14: ff19828eeebb09c0  R15: ff198291741ecf00
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #10 [ff2d20159b39f700] mlx5e_sq_xmit_wqe at ffffffffc05c162e [mlx5_core]
 #11 [ff2d20159b39f768] mlx5e_xmit at ffffffffc05c1ca3 [mlx5_core]
 #12 [ff2d20159b39f800] dev_hard_start_xmit at ffffffffae083766
 #13 [ff2d20159b39f860] sch_direct_xmit at ffffffffae0e2564
 #14 [ff2d20159b39f8b0] __qdisc_run at ffffffffae0e294e
 #15 [ff2d20159b39f928] __dev_queue_xmit at ffffffffae083eee
 #16 [ff2d20159b39f9a8] dev_queue_xmit at ffffffffae084370
 #17 [ff2d20159b39f9b8] vlan_dev_hard_start_xmit at ffffffffc2fb6fec [8021q]
 #18 [ff2d20159b39f9d8] dev_hard_start_xmit at ffffffffae083766
 #19 [ff2d20159b39fa38] __dev_queue_xmit at ffffffffae08416a
 #20 [ff2d20159b39fab8] dev_queue_xmit_accel at ffffffffae08438e
 #21 [ff2d20159b39fac8] macvlan_start_xmit at ffffffffc2fc18d9 [macvlan]
 #22 [ff2d20159b39faf0] dev_hard_start_xmit at ffffffffae083766
 #23 [ff2d20159b39fb50] sch_direct_xmit at ffffffffae0e2564
 #24 [ff2d20159b39fba0] __qdisc_run at ffffffffae0e294e
 #25 [ff2d20159b39fc18] __dev_queue_xmit at ffffffffae083c81
 #26 [ff2d20159b39fc90] dev_queue_xmit at ffffffffae084370
 #27 [ff2d20159b39fca0] tap_sendmsg at ffffffffc07206ed [tap]
 #28 [ff2d20159b39fd20] vhost_tx_batch at ffffffffc2fd6590 [vhost_net]
 #29 [ff2d20159b39fd68] handle_tx_copy at ffffffffc2fd70f3 [vhost_net]
 #30 [ff2d20159b39fe80] handle_tx at ffffffffc2fd7651 [vhost_net]
 #31 [ff2d20159b39feb0] handle_tx_kick at ffffffffc2fd76b5 [vhost_net]
 #32 [ff2d20159b39fec0] vhost_worker at ffffffffc12a5be8 [vhost]
 #33 [ff2d20159b39ff08] kthread at ffffffffad8dbfe5
 #34 [ff2d20159b39ff50] ret_from_fork at ffffffffae400364

This change was discussed with Nvidia and they are in agreement.

Orabug: 36879156
CVE: CVE-2024-41090
CVE: CVE-2024-41091

Fixes: e4cf27b ("net/mlx5e: Re-eanble client vlan TX acceleration")
Reported-and-tested-by: Dongli Zhang <[email protected]>
Signed-off-by: Manjunath Patil <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Reviewed-by: Jack Vogel <[email protected]>
Signed-off-by: Brian Maly <[email protected]>
(cherry picked from commit 0dd4b99)

Orabug: 36879126
CVE: CVE-2024-41090
CVE: CVE-2024-41091

Signed-off-by: Harshvardhan Jha <[email protected]>
Reviewed-by: Vijayendra Suman <[email protected]>