
[WIP] Parallel garbage collector #5362


Open — wants to merge 137 commits into `master` from `parallel-gc`.

Commits (137, showing changes from all commits)
1533d7b
Noop thread using volatile flags and busy wait
petermz Feb 17, 2022
6195b82
Noop thread with safepoints and synchronization
petermz Feb 28, 2022
3dfd353
OldGeneration.scanGreyObjects() runs on worker thread (but logging cr…
petermz Mar 2, 2022
ce6b153
Worker thread synchronizing via a queue, never put on safepoint
petermz Apr 21, 2022
674df60
Two parallel workers synchronizing via queue
petermz Apr 22, 2022
ae4f77f
`releaseSpaces` stage working on 2 background threads
petermz May 31, 2022
7db5c49
Added a custom Timer to HeapImpl
petermz Jun 2, 2022
aef1e38
Parallel `dirtyCardIfNecessary`
petermz Jun 7, 2022
54683ec
Parallel objRef update
petermz Jun 7, 2022
2bfc96e
Passing `original` object safely via Pointer
petermz Jun 8, 2022
7d69248
`installFwPtr` works but not on workers
petermz Jun 8, 2022
f036854
`installForwardingPointer` works on parallel threads
petermz Jun 9, 2022
b8dd7c8
`enableRememberedSetForObject` works in parallel
petermz Jun 9, 2022
ac78015
Protect against promoting an object twice.
petermz Jun 9, 2022
d8b11d0
Uninlined several methods
petermz Jun 21, 2022
0820d8d
Parallel memory allocation in to-space.
petermz Jun 23, 2022
980155b
Better sync strategy in `HeapChunk.walkObjectsFromInline`
petermz Jun 24, 2022
66aa7fc
Simplified parallel invocation, assuming complete gc, innerOffset=0, …
petermz Jun 24, 2022
d526393
Restored call to `Space.promoteAlignedObject()`
petermz Jun 24, 2022
3bf93c0
Restored call to `GCImpl.promoteObject()`
petermz Jun 24, 2022
1b15e01
Restored call to `GreyToBlackObjRefVisitor.visitObjectReferenceInline()`
petermz Jun 24, 2022
043da38
Extended TaskQueue to 1024 task items
petermz Jun 27, 2022
0d936b0
Put objects on the queue rather than refs
petermz Jun 27, 2022
d47b6d2
Don't put G2BObjectVisitor instances on TaskQueue
petermz Jun 28, 2022
239c849
Added max queue size statistic
petermz Jun 28, 2022
e04d49a
Made TaskQueue private in ParallelGCImpl
petermz Jun 28, 2022
6b692e2
Queue grey objects instead of scanning chunks
petermz Jun 28, 2022
69bbd0d
Queue unaligned objects for parallel scan
petermz Jun 30, 2022
23b19ac
TaskQueue fixes: `drain` and `idleCount`
petermz Jun 30, 2022
57e8275
Made TaskQueue reference-free
petermz Jun 30, 2022
c0129b4
Protect `GCImpl.promoteObject` with critical section
petermz Jul 4, 2022
ff66f18
Fixed `TaskQueue.waitUntilIdle`
petermz Jul 4, 2022
af6f70b
Removed locking on fromSpace
petermz Jul 5, 2022
972002c
`scanGreyObjects` cleanup
petermz Jul 6, 2022
b6c5168
Added `ParallelGCImpl.checkThrowable`
petermz Jul 8, 2022
2295b15
Increased queue size for HyperAlloc
petermz Jul 10, 2022
81c02c7
Forwarding pointers done right + VerifyHeap hack
petermz Jul 8, 2022
db7c057
Protected chunk promotion with mutex
petermz Jul 10, 2022
57d2372
Cleanup
petermz Jul 10, 2022
e376e13
Handle queue overflow by executing tasks synchronously
petermz Jul 11, 2022
1c891d1
Cleanup
petermz Jul 11, 2022
7b880b6
Added thread local Stats
petermz Jul 19, 2022
cdf4815
(Incomplete) CAS based queue with minimal synchronization.
petermz Jul 18, 2022
71c515a
Thread local memory allocation
petermz Jul 19, 2022
1d2f791
Retract speculatively allocated memory
petermz Aug 8, 2022
17c695b
Thread local task stacks
petermz Aug 10, 2022
d9a47b7
Post merge cleanup
petermz Aug 12, 2022
ef9c35f
Introduced `UseParallelGC` option
petermz Aug 15, 2022
4c77aa3
Removed some static state from ParallelGCImpl
petermz Aug 16, 2022
3d6b797
Replaced `Space.mutex` with `ParallelGCImpl.mutex`
petermz Aug 16, 2022
4a3c01c
Cleanup: removed Stats printout
petermz Aug 16, 2022
d99ca0f
Enqueue objects as they are copied
petermz Aug 17, 2022
e29e81d
Cleanup
petermz Aug 17, 2022
e068f43
Fixed hangup; better worker thread management
petermz Aug 17, 2022
e67370c
Reuse GC thread as one of the workers
petermz Aug 17, 2022
8e33c3d
Determine number of workers at runtime
petermz Aug 26, 2022
73177bd
Shared synchronized buffer, so far holds object pointers
petermz Sep 15, 2022
060bbd4
Use buffer for aligned chunks instead of objects
petermz Sep 16, 2022
f0a26b7
Support for unaligned chunks
petermz Sep 17, 2022
ec5dd99
HyperAlloc works but load is badly unbalanced
petermz Sep 19, 2022
d59cb36
Report stats for relevant threads only
petermz Sep 19, 2022
c4c4b34
Fixed SerialGC compilation
petermz Oct 3, 2022
c061049
Cleanup
petermz Oct 3, 2022
1dbd6c1
Determine buffer size at runtime
petermz Oct 3, 2022
ca5edac
Removed Stats
petermz Oct 3, 2022
757075f
Cleanup
petermz Oct 4, 2022
01627fc
Fixed crash due to unscanned chunks
petermz Nov 2, 2022
80df70e
Added copyright notices
petermz Nov 3, 2022
b9fed22
Merge branch 'master' of github.com:oracle/graal into parallel-gc
petermz Nov 3, 2022
2a540d3
Fixed merge errors
petermz Nov 10, 2022
f228f74
Fixed Substrate options to work with ParallelGC
petermz Nov 10, 2022
3cbe52e
Post-review renaming and cleanup
petermz Nov 21, 2022
c8ae8a3
Review: forwarding header installation was not atomic enough
petermz Nov 22, 2022
1a22815
Worker thread shutdown
petermz Nov 23, 2022
00a8f37
Made `ChunkBuffer` grow as needed and free memory in the end
petermz Nov 23, 2022
2af8a30
More robust worker thread routine
petermz Nov 25, 2022
b849c4a
Thread safe Reference handling
petermz Dec 7, 2022
14c2047
Made SerialGC options work with ParallelGC
petermz Dec 7, 2022
271134d
Enabled incremental collections
petermz Dec 8, 2022
bced7d1
Use try-finally for mutex locking
petermz Dec 12, 2022
165a8dc
Added assertion to `ParallelGCImpl.getScannedChunk()`
petermz Dec 13, 2022
8ec4ce9
Wait for workers to be blocked before starting parallel phase
petermz Dec 13, 2022
93bfef8
Merge branch 'master' of github.com:oracle/graal into parallel-gc
petermz Dec 13, 2022
e4ec854
Don't retire scanned allocation chunks
petermz Jan 26, 2023
793deff
Merge branch 'master' of github.com:oracle/graal into parallel-gc
petermz Mar 6, 2023
dd32219
Merged ParallelGC and ParallelGCImpl
petermz Mar 7, 2023
d9f38f3
Introduced special SafepointBehavior for worker threads
petermz Mar 7, 2023
ec3b2a5
Style fixes
petermz Mar 7, 2023
1462c2f
Fall back to serial GC if collection occurs before worker threads hav…
petermz Mar 9, 2023
df09255
Style fixes
petermz Mar 14, 2023
7086973
Merge with master.
christianhaeubl Mar 15, 2023
34ad44d
Fix after rebasing to master.
christianhaeubl Mar 13, 2023
35325d6
Make the inner-most part of the GC uninterruptible.
christianhaeubl Mar 15, 2023
5b16c29
Use unattached threads as GC worker threads.
christianhaeubl Mar 15, 2023
f432220
Cleanup and assertion fixes
petermz Mar 16, 2023
22d9dc5
Addressed minor review comments
petermz Mar 20, 2023
43b944d
Adapted to IdentityHash changes
petermz Mar 20, 2023
c824b1b
Worker thread shutdown routine
petermz Mar 20, 2023
13557ba
Merge remote-tracking branch 'upstream/master' into parallel-gc
petermz Mar 20, 2023
5954d36
Report OOME on main GC thread only
petermz Mar 21, 2023
cf486e6
Added javadoc for ParallelGC
petermz Mar 21, 2023
d31e9d7
Workaround for @Uninterruptible visitors
petermz Mar 27, 2023
65e7ad2
Fixed crash due to chunk space being null
petermz Mar 29, 2023
3579f90
Merge branch 'master' of github.com:oracle/graal into parallel-gc
petermz Mar 29, 2023
4f6b3f6
Merge with master.
christianhaeubl Apr 3, 2023
5b47410
Fix object forwarding.
christianhaeubl Apr 3, 2023
7c1aa13
Use at most 8 worker threads
petermz Apr 4, 2023
c87e357
Merge branch 'master' of github.com:oracle/graal into parallel-gc
petermz Apr 28, 2023
5a820e0
Merge with master.
christianhaeubl May 2, 2023
91eec09
Cleanups.
christianhaeubl May 2, 2023
cc7b107
Fix counter initialization.
christianhaeubl May 2, 2023
c4957c6
Fix an overflow.
christianhaeubl May 2, 2023
ad940e9
Fix a race that could result in an incorrect remembered set.
christianhaeubl May 3, 2023
b338cb3
Support unattached threads in VMError.shouldNotReachHere().
christianhaeubl May 4, 2023
c698f80
Various cleanups and fixes for the parallel GC.
christianhaeubl May 4, 2023
2e11172
Throw an error if the parallel GC is used on other platforms than Lin…
christianhaeubl May 5, 2023
24051eb
Style fix.
christianhaeubl May 5, 2023
6ea15c5
Verify that SpawnIsolates is enabled if the ParallelGC is used.
christianhaeubl May 5, 2023
101726a
Fix parallel GC teardown.
christianhaeubl May 8, 2023
393bb27
Fix identity hashcode computation.
christianhaeubl May 8, 2023
657f774
Fix validation of UseParallelGC option.
christianhaeubl May 8, 2023
91e3177
Avoid that unattached threads are interpreted as the error handling t…
christianhaeubl May 8, 2023
35013d3
Skip the parallel GC phase if there isn't any work.
christianhaeubl May 8, 2023
31bb51d
Cleanups.
christianhaeubl May 8, 2023
0c55642
Fixed race in reference processing
petermz May 11, 2023
383484a
Fixed racy assertion in GreyToBlackObjRefVisitor
petermz May 11, 2023
c5190e1
Merge branch 'master' of github.com:oracle/graal into parallel-gc
petermz May 12, 2023
1e02a09
Fixed assertion that broke SerialGC
petermz May 15, 2023
e520318
Merge with master.
christianhaeubl May 16, 2023
c38406c
Added some documentation.
christianhaeubl May 16, 2023
59c56b0
Fixed some races
petermz Jun 19, 2023
73f0b54
Merge with master.
christianhaeubl Jun 27, 2023
e43b3e9
Fix an issue where the parallel GC could destroy the array length.
christianhaeubl Jun 29, 2023
80f27d4
Fix the object size used in retractAllocation().
christianhaeubl Jun 29, 2023
2047127
Support the parallel GC on Windows.
christianhaeubl Jun 29, 2023
9710da3
Enable RememberedSet before promoting unaligned chunk
petermz Jul 13, 2023
3e57bab
Log number of worker threads
petermz Jul 17, 2023
AlignedHeapChunk.java
@@ -114,6 +114,14 @@ static Pointer allocateMemory(AlignedHeader that, UnsignedWord size) {
         return result;
     }
 
+    /** Retract the latest allocation. */
+    @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
+    static void retractAllocation(AlignedHeader that, UnsignedWord size) {
+        Pointer newTop = HeapChunk.getTopPointer(that).subtract(size);
+        assert newTop.aboveOrEqual(getObjectsStart(that));
+        HeapChunk.setTopPointer(that, newTop);
+    }
+
     @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
     static UnsignedWord getCommittedObjectMemory(AlignedHeader that) {
         return HeapChunk.getEndOffset(that).subtract(getObjectsStartOffset());
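The `retractAllocation` added above is bump-pointer allocation in reverse: since `allocateMemory` hands out memory by bumping the chunk's top pointer, the most recent allocation can be undone by moving the top pointer back by the same size (the commit log calls this "Retract speculatively allocated memory"). A minimal sketch of the idea, using a hypothetical `BumpChunk` with plain `long` offsets standing in for the real `AlignedHeader`/`Pointer`/`UnsignedWord` types:

```java
/**
 * Hypothetical stand-in for an aligned heap chunk: objects are allocated by
 * bumping `top`, and the latest allocation can be retracted by moving `top`
 * back. Not GraalVM code; the real methods operate on raw chunk headers.
 */
final class BumpChunk {
    private final long objectsStart;
    private long top;

    BumpChunk(long objectsStart) {
        this.objectsStart = objectsStart;
        this.top = objectsStart;
    }

    /** Bump-pointer allocation: return the old top, then advance it by size. */
    long allocate(long size) {
        long result = top;
        top += size;
        return result;
    }

    /** Retract the latest allocation, mirroring retractAllocation() above. */
    void retract(long size) {
        long newTop = top - size;
        assert newTop >= objectsStart : "cannot retract past the first object";
        top = newTop;
    }

    long top() {
        return top;
    }
}
```

Note that this only works for the *latest* allocation on a chunk, which presumably is how the parallel GC uses it: a worker that speculatively allocated a copy but then lost the promotion race can give the memory back immediately.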
@@ -169,10 +177,5 @@ static final class MemoryWalkerAccessImpl extends HeapChunk.MemoryWalkerAccessImpl
         public boolean isAligned(AlignedHeapChunk.AlignedHeader heapChunk) {
             return true;
         }
-
-        @Override
-        public UnsignedWord getAllocationStart(AlignedHeapChunk.AlignedHeader heapChunk) {
-            return getObjectsStart(heapChunk);
-        }
     }
 }
GCImpl.java
@@ -63,6 +63,7 @@
 import com.oracle.svm.core.genscavenge.BasicCollectionPolicies.NeverCollect;
 import com.oracle.svm.core.genscavenge.HeapChunk.Header;
 import com.oracle.svm.core.genscavenge.UnalignedHeapChunk.UnalignedHeader;
+import com.oracle.svm.core.genscavenge.parallel.ParallelGC;
 import com.oracle.svm.core.genscavenge.remset.RememberedSet;
 import com.oracle.svm.core.graal.RuntimeCompilation;
 import com.oracle.svm.core.heap.CodeReferenceMapDecoder;
@@ -126,6 +127,8 @@ public final class GCImpl implements GC {
     public String getName() {
         if (SubstrateOptions.UseEpsilonGC.getValue()) {
             return "Epsilon GC";
+        } else if (SubstrateOptions.UseParallelGC.getValue()) {
+            return "Parallel GC";
         } else {
             return "Serial GC";
         }
@@ -196,6 +199,10 @@ private void collectOperation(CollectionVMOperationData data) {
         assert VMOperation.isGCInProgress() : "Collection should be a VMOperation.";
         assert getCollectionEpoch().equal(data.getRequestingEpoch());
 
+        if (ParallelGC.isEnabled()) {
+            ParallelGC.singleton().initialize();
+        }
+
         timers.mutator.closeAt(data.getRequestingNanoTime());
         startCollectionOrExit();
 
@@ -1071,6 +1078,8 @@ private void scanGreyObjects(boolean isIncremental) {
         try {
             if (isIncremental) {
                 scanGreyObjectsLoop();
+            } else if (ParallelGC.isEnabled()) {
+                ParallelGC.singleton().waitForIdle();
+            } else {
                 HeapImpl.getHeapImpl().getOldGeneration().scanGreyObjects();
             }
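In the parallel case the coordinator does not run the scan loop itself here; it calls `waitForIdle()` and lets the workers drain the shared grey-object queue. The subtlety (visible in the "TaskQueue fixes: `drain` and `idleCount`" commit) is that an empty queue alone is not a valid termination condition: a worker still processing a task may push new grey objects. A single-threaded sketch of that idle condition, with hypothetical names; the real `ParallelGC` coordinates real worker threads:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of the termination condition behind waitForIdle(): the parallel
 * phase is over only when the queue is empty AND no worker is mid-task,
 * because a running task may still produce new grey objects.
 */
final class IdleBarrier {
    final ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();
    private final AtomicInteger busyWorkers = new AtomicInteger();

    /** Drain loop executed by each worker; tasks may enqueue further tasks. */
    void drain() {
        Runnable task;
        while ((task = queue.poll()) != null) {
            busyWorkers.incrementAndGet();
            try {
                task.run(); // may add new work to the queue
            } finally {
                busyWorkers.decrementAndGet();
            }
        }
    }

    /** True only when there is neither queued nor in-flight work. */
    boolean isIdle() {
        return queue.isEmpty() && busyWorkers.get() == 0;
    }
}
```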
@@ -1093,32 +1102,37 @@ private static void scanGreyObjectsLoop() {
 
     @AlwaysInline("GC performance")
     @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
+    @SuppressWarnings("static-method")
     Object promoteObject(Object original, UnsignedWord header) {
         HeapImpl heap = HeapImpl.getHeapImpl();
         boolean isAligned = ObjectHeaderImpl.isAlignedHeader(header);
         Header<?> originalChunk = getChunk(original, isAligned);
 
+        /* If the parallel GC is used, then the space may be outdated or null. */
         Space originalSpace = HeapChunk.getSpace(originalChunk);
-        if (!originalSpace.isFromSpace()) {
+        assert originalSpace != null || ParallelGC.isEnabled() && ParallelGC.singleton().isInParallelPhase();
+        if (originalSpace == null || !originalSpace.isFromSpace()) {
             /* Object was already promoted or is currently being promoted. */
             return original;
         }
 
         Object result = null;
         if (!completeCollection && originalSpace.getNextAgeForPromotion() < policy.getTenuringAge()) {
             if (isAligned) {
-                result = heap.getYoungGeneration().promoteAlignedObject(original, (AlignedHeader) originalChunk, originalSpace);
+                result = heap.getYoungGeneration().promoteAlignedObject(original, originalSpace);
             } else {
                 result = heap.getYoungGeneration().promoteUnalignedObject(original, (UnalignedHeader) originalChunk, originalSpace);
             }
             if (result == null) {
                 accounting.onSurvivorOverflowed();
             }
         }
-        if (result == null) { // complete collection, tenuring age reached, or survivor space full
+
+        /* Complete collection, tenuring age reached, or survivor space full. */
+        if (result == null) {
             if (isAligned) {
-                result = heap.getOldGeneration().promoteAlignedObject(original, (AlignedHeader) originalChunk, originalSpace);
+                result = heap.getOldGeneration().promoteAlignedObject(original, originalSpace);
             } else {
-                result = heap.getOldGeneration().promoteUnalignedObject(original, (UnalignedHeader) originalChunk, originalSpace);
+                result = heap.getOldGeneration().promoteUnalignedObject(original, (UnalignedHeader) originalChunk);
             }
             assert result != null : "promotion failure in old generation must have been handled";
         }
@@ -1152,7 +1166,7 @@ private void promotePinnedObject(PinnedObjectImpl pinned) {
             }
         }
         if (!promoted) {
-            heap.getOldGeneration().promoteChunk(originalChunk, isAligned, originalSpace);
+            heap.getOldGeneration().promoteChunk(originalChunk, isAligned);
         }
     }
 }
@@ -1241,7 +1255,7 @@ public static boolean hasNeverCollectPolicy() {
     }
 
     @Fold
-    GreyToBlackObjectVisitor getGreyToBlackObjectVisitor() {
+    public GreyToBlackObjectVisitor getGreyToBlackObjectVisitor() {
         return greyToBlackObjectVisitor;
     }
 
GenScavengeMemoryPoolMXBeans.java
@@ -46,7 +46,7 @@ public class GenScavengeMemoryPoolMXBeans {
 
     @Platforms(Platform.HOSTED_ONLY.class)
     public static MemoryPoolMXBean[] createMemoryPoolMXBeans() {
-        if (SubstrateOptions.UseSerialGC.getValue()) {
+        if (SubstrateOptions.useSerialOrParallelGC()) {
             mxBeans = new AbstractMemoryPoolMXBean[]{
                             new EdenMemoryPoolMXBean(YOUNG_GEN_SCAVENGER, COMPLETE_SCAVENGER),
                             new SurvivorMemoryPoolMXBean(YOUNG_GEN_SCAVENGER, COMPLETE_SCAVENGER),
Generation.java
@@ -27,8 +27,6 @@
 import org.graalvm.nativeimage.Platform;
 import org.graalvm.nativeimage.Platforms;
 
-import com.oracle.svm.core.AlwaysInline;
-import com.oracle.svm.core.Uninterruptible;
 import com.oracle.svm.core.heap.ObjectVisitor;
 import com.oracle.svm.core.log.Log;
 
@@ -55,46 +53,4 @@ public String getName() {
 
     /** Report some statistics about the Generation to a Log. */
     public abstract Log report(Log log, boolean traceHeapChunks);
-
-    /**
-     * Promote an Object to this Generation, typically by copying and leaving a forwarding pointer
-     * to the new Object in place of the original Object. If the object cannot be promoted due to
-     * insufficient capacity, returns {@code null}.
-     *
-     * This turns an Object from white to grey: the object is in this Generation, but has not yet
-     * had its interior pointers visited.
-     *
-     * @return a reference to the promoted object, which is different to the original reference if
-     *         promotion was done by copying, or {@code null} if there was insufficient capacity in
-     *         this generation.
-     */
-    @AlwaysInline("GC performance")
-    @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
-    protected abstract Object promoteAlignedObject(Object original, AlignedHeapChunk.AlignedHeader originalChunk, Space originalSpace);
-
-    /**
-     * Promote an Object to this Generation, typically by HeapChunk motion. If the object cannot be
-     * promoted due to insufficient capacity, returns {@code null}.
-     *
-     * This turns an Object from white to grey: the object is in this Generation, but has not yet
-     * had its interior pointers visited.
-     *
-     * @return a reference to the promoted object, which is the same as the original if the object
-     *         was promoted through HeapChunk motion, or {@code null} if there was insufficient
-     *         capacity in this generation.
-     */
-    @AlwaysInline("GC performance")
-    @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
-    protected abstract Object promoteUnalignedObject(Object original, UnalignedHeapChunk.UnalignedHeader originalChunk, Space originalSpace);
-
-    /**
-     * Promote a HeapChunk from its original space to the appropriate space in this generation if
-     * there is sufficient capacity.
-     *
-     * This turns all the Objects in the chunk from white to grey: the objects are in the target
-     * Space, but have not yet had their interior pointers visited.
-     *
-     * @return true on success, false if the there was insufficient capacity.
-     */
-    protected abstract boolean promoteChunk(HeapChunk.Header<?> originalChunk, boolean isAligned, Space originalSpace);
 }
GreyToBlackObjRefVisitor.java
@@ -31,11 +31,13 @@
 
 import com.oracle.svm.core.AlwaysInline;
 import com.oracle.svm.core.Uninterruptible;
+import com.oracle.svm.core.genscavenge.parallel.ParallelGC;
 import com.oracle.svm.core.genscavenge.remset.RememberedSet;
 import com.oracle.svm.core.heap.ObjectHeader;
 import com.oracle.svm.core.heap.ObjectReferenceVisitor;
 import com.oracle.svm.core.heap.ReferenceAccess;
 import com.oracle.svm.core.hub.LayoutEncoding;
+import com.oracle.svm.core.jdk.UninterruptibleUtils.AtomicLong;
 import com.oracle.svm.core.log.Log;
 
 /**
@@ -97,6 +99,8 @@ public boolean visitObjectReferenceInline(Pointer objRef, int innerOffset, boolean compressed, Object holderObject) {
             counters.noteForwardedReferent();
             // Update the reference to point to the forwarded Object.
             Object obj = ohi.getForwardedObject(p, header);
+            assert ParallelGC.isEnabled() && ParallelGC.singleton().isInParallelPhase() ||
+                            innerOffset < LayoutEncoding.getSizeFromObjectInGC(obj).rawValue();
             Object offsetObj = (innerOffset == 0) ? obj : Word.objectToUntrackedPointer(obj).add(innerOffset).toObject();
             ReferenceAccess.singleton().writeObjectAt(objRef, offsetObj, compressed);
             RememberedSet.get().dirtyCardIfNecessary(holderObject, obj);
@@ -105,11 +109,12 @@
 
         // Promote the Object if necessary, making it at least grey, and ...
         Object obj = p.toObject();
-        assert innerOffset < LayoutEncoding.getSizeFromObjectInGC(obj).rawValue();
         Object copy = GCImpl.getGCImpl().promoteObject(obj, header);
         if (copy != obj) {
            // ... update the reference to point to the copy, making the reference black.
            counters.noteCopiedReferent();
+            assert ParallelGC.isEnabled() && ParallelGC.singleton().isInParallelPhase() ||
+                            innerOffset < LayoutEncoding.getSizeFromObjectInGC(copy).rawValue();
             Object offsetCopy = (innerOffset == 0) ? copy : Word.objectToUntrackedPointer(copy).add(innerOffset).toObject();
             ReferenceAccess.singleton().writeObjectAt(objRef, offsetCopy, compressed);
         } else {
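The `innerOffset` guarded by these assertions is nonzero when a reference points into the interior of an object rather than at its start. When such an object moves, the updated reference is the copy's base address plus the same inner offset, which is what the `offsetObj`/`offsetCopy` expressions compute. The arithmetic, with plain `long`s standing in for the Pointer/Word types:

```java
/**
 * Relocating a possibly-interior reference after its target object moved.
 * An innerOffset of 0 is the common case: a reference to the object's start.
 */
final class InnerOffsetSketch {
    /** Compute where an interior reference points after the object moved. */
    static long relocate(long oldRef, long oldBase, long newBase) {
        long innerOffset = oldRef - oldBase; // 0 for a plain reference
        assert innerOffset >= 0;
        return newBase + innerOffset;        // same offset within the copy
    }
}
```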
@@ -152,33 +157,31 @@ public interface Counters extends AutoCloseable {
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         void noteUnmodifiedReference();
 
-        void toLog();
-
         void reset();
     }
 
     public static class RealCounters implements Counters {
-        private long objRef;
-        private long nullObjRef;
-        private long nullReferent;
-        private long forwardedReferent;
-        private long nonHeapReferent;
-        private long copiedReferent;
-        private long unmodifiedReference;
+        private final AtomicLong objRef = new AtomicLong(0);
+        private final AtomicLong nullObjRef = new AtomicLong(0);
+        private final AtomicLong nullReferent = new AtomicLong(0);
+        private final AtomicLong forwardedReferent = new AtomicLong(0);
+        private final AtomicLong nonHeapReferent = new AtomicLong(0);
+        private final AtomicLong copiedReferent = new AtomicLong(0);
+        private final AtomicLong unmodifiedReference = new AtomicLong(0);
 
         RealCounters() {
             reset();
         }
 
         @Override
         public void reset() {
-            objRef = 0L;
-            nullObjRef = 0L;
-            nullReferent = 0L;
-            forwardedReferent = 0L;
-            nonHeapReferent = 0L;
-            copiedReferent = 0L;
-            unmodifiedReference = 0L;
+            objRef.set(0L);
+            nullObjRef.set(0L);
+            nullReferent.set(0L);
+            forwardedReferent.set(0L);
+            nonHeapReferent.set(0L);
+            copiedReferent.set(0L);
+            unmodifiedReference.set(0L);
         }
 
         @Override
@@ -196,50 +199,49 @@ public void close() {
         @Override
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         public void noteObjRef() {
-            objRef += 1L;
+            objRef.incrementAndGet();
         }
 
         @Override
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         public void noteNullReferent() {
-            nullReferent += 1L;
+            nullReferent.incrementAndGet();
         }
 
         @Override
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         public void noteForwardedReferent() {
-            forwardedReferent += 1L;
+            forwardedReferent.incrementAndGet();
         }
 
         @Override
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         public void noteNonHeapReferent() {
-            nonHeapReferent += 1L;
+            nonHeapReferent.incrementAndGet();
         }
 
         @Override
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         public void noteCopiedReferent() {
-            copiedReferent += 1L;
+            copiedReferent.incrementAndGet();
         }
 
         @Override
         @Uninterruptible(reason = "Called from uninterruptible code.", mayBeInlined = true)
         public void noteUnmodifiedReference() {
-            unmodifiedReference += 1L;
+            unmodifiedReference.incrementAndGet();
         }
 
-        @Override
-        public void toLog() {
+        private void toLog() {
             Log log = Log.log();
             log.string("[GreyToBlackObjRefVisitor.counters:");
-            log.string(" objRef: ").signed(objRef);
-            log.string(" nullObjRef: ").signed(nullObjRef);
-            log.string(" nullReferent: ").signed(nullReferent);
-            log.string(" forwardedReferent: ").signed(forwardedReferent);
-            log.string(" nonHeapReferent: ").signed(nonHeapReferent);
-            log.string(" copiedReferent: ").signed(copiedReferent);
-            log.string(" unmodifiedReference: ").signed(unmodifiedReference);
+            log.string(" objRef: ").signed(objRef.get());
+            log.string(" nullObjRef: ").signed(nullObjRef.get());
+            log.string(" nullReferent: ").signed(nullReferent.get());
+            log.string(" forwardedReferent: ").signed(forwardedReferent.get());
+            log.string(" nonHeapReferent: ").signed(nonHeapReferent.get());
+            log.string(" copiedReferent: ").signed(copiedReferent.get());
+            log.string(" unmodifiedReference: ").signed(unmodifiedReference.get());
             log.string("]").newline();
         }
     }
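The counters are switched from plain `long` fields to atomic longs because multiple GC worker threads now increment them concurrently: `count += 1` is a non-atomic read-modify-write and silently loses updates under contention, while `incrementAndGet()` does not. A small demo of the difference, using `java.util.concurrent.atomic.AtomicLong` (the real code uses Substrate's `UninterruptibleUtils.AtomicLong`, which is GC-safe):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Demonstrates why the visitor counters became atomic: concurrent `+= 1` on
 * a plain field can lose increments; AtomicLong.incrementAndGet() cannot.
 */
final class CounterDemo {
    static volatile long plain = 0;           // still racy despite volatile
    static final AtomicLong atomic = new AtomicLong();

    static void run(int threads, int incrementsPerThread) {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    plain += 1;               // read-modify-write: may lose updates
                    atomic.incrementAndGet(); // atomic: never loses updates
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

After `CounterDemo.run(4, 100000)`, `atomic` is exactly 400000, while `plain` is typically less on a multicore machine.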
Expand Down Expand Up @@ -288,10 +290,6 @@ public void noteCopiedReferent() {
public void noteUnmodifiedReference() {
}

@Override
public void toLog() {
}

@Override
public void reset() {
}
Expand Down