
ASP.NET Core 1.1.7 Memory Leak #6167

Closed
@guneysus

Description


Describe the bug

We have a memory leak problem that occurs at least once a day, sometimes within half an hour, sometimes within a couple of hours. We do not know the root cause, so we need help narrowing down the issue.

To Reproduce

Steps to reproduce the behavior:

  1. We are using ASP.NET Core with the 'Microsoft.AspNetCore.Mvc 1.1.7' package
  2. The application is deployed on Windows Server 2012 R2 with IIS (version 6.2, build 9200)

Expected behavior

The memory usage of our API project occasionally spikes suddenly, eating all available memory within a couple of minutes. We do not know the root cause, but we noticed that when it happens, usage starts from ~140 MB of RAM (stable) and grows at roughly 50-100 MB/s.

We took a memory dump, analysed it with DebugDiag, and generated a report. Since the report is long, I will share some selected parts of it; if necessary, I can provide more.
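In case it helps correlate the spike with GC and thread-pool behaviour, here is a rough sketch of the kind of in-process counter logging we are considering adding. This is illustrative only, not our actual code; the class name, logging target, and interval are made up:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: periodically log a few memory and thread counters so we can
// see what is growing when the spike starts.
public static class LeakProbe
{
    public static void Start(CancellationToken token)
    {
        Task.Run(async () =>
        {
            var proc = Process.GetCurrentProcess();
            while (!token.IsCancellationRequested)
            {
                proc.Refresh();
                Console.WriteLine(
                    $"{DateTime.UtcNow:O} " +
                    $"WorkingSet={proc.WorkingSet64 / (1024 * 1024)} MB " +
                    $"GCHeap={GC.GetTotalMemory(false) / (1024 * 1024)} MB " +
                    $"Gen0={GC.CollectionCount(0)} Gen1={GC.CollectionCount(1)} Gen2={GC.CollectionCount(2)} " +
                    $"Threads={proc.Threads.Count}");
                await Task.Delay(TimeSpan.FromSeconds(5), token);
            }
        }, token);
    }
}
```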

Additional context

PS > dotnet --version
Microsoft .NET Core Shared Framework Host

  Version  : 1.1.0
  Build    : 928f77c4bc3f49d892459992fb6e1d5542cb5e86
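Since the output above reports the shared framework host installed on the machine, here is a tiny sketch (illustrative only) of how we could also log the runtime the process actually runs on:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: print the runtime and OS the process is actually running on.
class RuntimeInfo
{
    static void Main()
    {
        Console.WriteLine(RuntimeInformation.FrameworkDescription);
        Console.WriteLine(RuntimeInformation.OSDescription);
    }
}
```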

DebugDiag Report (some selected parts)

Warning: The following threads in BA.ECommerce.Api.DMP are waiting in a WaitMultiple ( 11 13 15 18 20 22 23 24 25 28 30 ); 26.19% of threads blocked. Typically threads waiting in WaitMultiple are monitoring threads in a process and this may be ignored; however, too many threads waiting in WaitOne\WaitMultiple may be a problem. Review the call stacks of the waiting threads to see what they are waiting on.

Warning: The following threads in BA.ECommerce.Api.DMP are waiting in a WaitOne ( 0 7 8 9 10 34 ); 14.29% of threads blocked. Typically threads waiting in WaitMultiple are monitoring threads in a process and this may be ignored; however, too many threads waiting in WaitOne\WaitMultiple may be a problem. Review the call stacks of the waiting threads to see what they are waiting on.

Warning: The following threads in BA.ECommerce.Api.DMP are waiting for .NET garbage collection to finish; the current set of scripts was not able to determine which thread induced the GC ( 35 36 ); 4.76% of threads blocked. When a GC is running, the .NET objects are not in a valid state and the reported analysis may be inaccurate. Also, the thread that triggered the garbage collection may or may not be a problematic thread. Too many garbage collections in a process are bad for the performance of the application and may indicate memory pressure or symptoms of fragmentation. Review the blog "ASP.NET Case Study: High CPU in GC - Large objects and high allocation rates" for more details.

Warning: Detected possible blocking or leaked critical section at 0x000000a3`c4337db0, owned by thread 37 in BA.ECommerce.Api.DMP. Impact of this lock: 2.38% of threads blocked (thread 40). The following functions are trying to enter this critical section: 0x00000000. The following vendors were identified for follow up based on root cause analysis.

Possible Blocking or Leaked Critical Section Details

Critical Section 0x000000a3`c4337db0
Lock State Locked
Lock Count 1
Recursion Count 1
Entry Count 0
Contention Count 19
Spin Count 33556429
Owner Thread 37
Owner Thread System ID 3592
Entry point ntdll!RtlUserThreadStart+34
Create time 12/24/2018 12:06:09 AM
Time spent in user mode 0 Days 00:00:00.00
Time spent in kernel mode 0 Days 00:00:00.00
Function Source
ntdll!NtWaitForSingleObject+a  
ntdll!RtlpWaitOnCriticalSection+e1  
ntdll!RtlpEnterCriticalSectionContended+a4  
coreclr!CrstBase::Enter+7a  
coreclr!ThreadSuspend::LockThreadStore+b8  
coreclr!ThreadStore::AddThread+29  
coreclr!SetupUnstartedThread+50  
coreclr!ThreadpoolMgr::CreateUnimpersonatedThread+99  
coreclr!ThreadpoolMgr::MaybeAddWorkingWorker+19e  
coreclr!ThreadpoolMgr::GateThreadStart+6e0  
kernel32!BaseThreadInitThunk+22  
ntdll!RtlUserThreadStart+34
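
For context on the ThreadpoolMgr frames above: the gate thread is trying to inject a new worker thread while blocked on the thread-store lock. One common trigger we are checking on our side (an assumption on our part, not something the dump proves) is sync-over-async blocking in request handlers, which starves the thread pool and forces it to keep adding workers. Illustrative sketch only, not from our code base; the controller, routes, and URL are made up:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class ProductsController : Controller
{
    private static readonly HttpClient Http = new HttpClient();

    // Suspect pattern: blocking a thread-pool thread on an async call.
    [HttpGet("/products/sync")]
    public string GetSync()
    {
        return Http.GetStringAsync("http://backend/products").Result;
    }

    // Non-blocking equivalent: the thread goes back to the pool while waiting.
    [HttpGet("/products/async")]
    public async Task<string> GetAsync()
    {
        return await Http.GetStringAsync("http://backend/products");
    }
}
```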
