Commit 33c3fc7

Vladimir Davydov authored and torvalds committed

mm: introduce idle page tracking

Knowing the portion of memory that is not used by a certain application or
memory cgroup (idle memory) can be useful for partitioning the system
efficiently, e.g. by setting memory cgroup limits appropriately. Currently,
the only means to estimate the amount of idle memory provided by the kernel
is /proc/PID/{clear_refs,smaps}: the user can clear the access bit for all
pages mapped to a particular process by writing 1 to clear_refs, wait for
some time, and then count smaps:Referenced. However, this method has two
serious shortcomings:

 - it does not count unmapped file pages

 - it affects the reclaimer logic

To overcome these drawbacks, this patch introduces two new page flags, Idle
and Young, and a new sysfs file, /sys/kernel/mm/page_idle/bitmap. A page's
Idle flag can only be set from userspace by setting the bit in
/sys/kernel/mm/page_idle/bitmap at the offset corresponding to the page, and
it is cleared whenever the page is accessed either through page tables (it is
cleared in page_referenced() in this case) or using the read(2) system call
(mark_page_accessed()). Thus by setting the Idle flag for pages of a
particular workload, which can be found e.g. by reading /proc/PID/pagemap,
waiting for some time to let the workload access its working set, and then
reading the bitmap file, one can estimate the amount of pages that are not
used by the workload.

The Young page flag is used to avoid interference with the memory reclaimer.
A page's Young flag is set whenever the Access bit of a page table entry
pointing to the page is cleared by writing to the bitmap file. If
page_referenced() is called on a Young page, it will add 1 to its return
value, therefore concealing the fact that the Access bit was cleared.

Note, since there is no room for extra page flags on 32 bit, this feature
uses extended page flags when compiled on 32 bit.

[[email protected]: fix build]
[[email protected]: kpageidle requires an MMU]
[[email protected]: decouple from page-flags rework]
Signed-off-by: Vladimir Davydov <[email protected]>
Reviewed-by: Andres Lagar-Cavilla <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Raghavendra K T <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Pavel Emelyanov <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

1 parent 1d7715c

17 files changed, +512 -3 lines
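
To make the bitmap layout described in the commit message concrete, here is a
minimal userspace sketch (not part of the patch itself) that marks a single
page idle given its page frame number. The PFN value is a made-up example,
error handling is minimal, and it assumes a kernel built with
CONFIG_IDLE_PAGE_TRACKING=y and sufficient privileges to write to the sysfs
file.

    /* Minimal sketch: mark the page at a given PFN idle.
     * The PFN here is a made-up example value.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            uint64_t pfn = 0x12345;              /* example PFN */
            uint64_t word = 1ULL << (pfn % 64);  /* bit #pfn%64 ...          */
            off_t offset = (pfn / 64) * 8;       /* ... of 8-byte word #pfn/64 */

            int fd = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Writes are OR-ed into the bitmap, so only this page is affected. */
            if (pwrite(fd, &word, sizeof(word), offset) != sizeof(word))
                    perror("pwrite");

            close(fd);
            return 0;
    }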

Documentation/vm/00-INDEX (+2)

@@ -14,6 +14,8 @@ hugetlbpage.txt
 	- a brief summary of hugetlbpage support in the Linux kernel.
 hwpoison.txt
 	- explains what hwpoison is
+idle_page_tracking.txt
+	- description of the idle page tracking feature.
 ksm.txt
 	- how to use the Kernel Samepage Merging feature.
 numa
Documentation/vm/idle_page_tracking.txt (new file, +98)

@@ -0,0 +1,98 @@
+MOTIVATION
+
+The idle page tracking feature allows to track which memory pages are being
+accessed by a workload and which are idle. This information can be useful for
+estimating the workload's working set size, which, in turn, can be taken into
+account when configuring the workload parameters, setting memory cgroup limits,
+or deciding where to place the workload within a compute cluster.
+
+It is enabled by CONFIG_IDLE_PAGE_TRACKING=y.
+
+USER API
+
+The idle page tracking API is located at /sys/kernel/mm/page_idle. Currently,
+it consists of the only read-write file, /sys/kernel/mm/page_idle/bitmap.
+
+The file implements a bitmap where each bit corresponds to a memory page. The
+bitmap is represented by an array of 8-byte integers, and the page at PFN #i is
+mapped to bit #i%64 of array element #i/64, byte order is native. When a bit is
+set, the corresponding page is idle.
+
+A page is considered idle if it has not been accessed since it was marked idle
+(for more details on what "accessed" actually means see the IMPLEMENTATION
+DETAILS section). To mark a page idle one has to set the bit corresponding to
+the page by writing to the file. A value written to the file is OR-ed with the
+current bitmap value.
+
+Only accesses to user memory pages are tracked. These are pages mapped to a
+process address space, page cache and buffer pages, swap cache pages. For other
+page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
+and hence such pages are never reported idle.
+
+For huge pages the idle flag is set only on the head page, so one has to read
+/proc/kpageflags in order to correctly count idle huge pages.
+
+Reading from or writing to /sys/kernel/mm/page_idle/bitmap will return
+-EINVAL if you are not starting the read/write on an 8-byte boundary, or
+if the size of the read/write is not a multiple of 8 bytes. Writing to
+this file beyond max PFN will return -ENXIO.
+
+That said, in order to estimate the amount of pages that are not used by a
+workload one should:
+
+ 1. Mark all the workload's pages as idle by setting corresponding bits in
+    /sys/kernel/mm/page_idle/bitmap. The pages can be found by reading
+    /proc/pid/pagemap if the workload is represented by a process, or by
+    filtering out alien pages using /proc/kpagecgroup in case the workload is
+    placed in a memory cgroup.
+
+ 2. Wait until the workload accesses its working set.
+
+ 3. Read /sys/kernel/mm/page_idle/bitmap and count the number of bits set. If
+    one wants to ignore certain types of pages, e.g. mlocked pages since they
+    are not reclaimable, he or she can filter them out using /proc/kpageflags.
+
+See Documentation/vm/pagemap.txt for more information about /proc/pid/pagemap,
+/proc/kpageflags, and /proc/kpagecgroup.
+
+IMPLEMENTATION DETAILS
+
+The kernel internally keeps track of accesses to user memory pages in order to
+reclaim unreferenced pages first on memory shortage conditions. A page is
+considered referenced if it has been recently accessed via a process address
+space, in which case one or more PTEs it is mapped to will have the Accessed bit
+set, or marked accessed explicitly by the kernel (see mark_page_accessed()). The
+latter happens when:
+
+ - a userspace process reads or writes a page using a system call (e.g. read(2)
+   or write(2))
+
+ - a page that is used for storing filesystem buffers is read or written,
+   because a process needs filesystem metadata stored in it (e.g. lists a
+   directory tree)
+
+ - a page is accessed by a device driver using get_user_pages()
+
+When a dirty page is written to swap or disk as a result of memory reclaim or
+exceeding the dirty memory limit, it is not marked referenced.
+
+The idle memory tracking feature adds a new page flag, the Idle flag. This flag
+is set manually, by writing to /sys/kernel/mm/page_idle/bitmap (see the USER API
+section), and cleared automatically whenever a page is referenced as defined
+above.
+
+When a page is marked idle, the Accessed bit must be cleared in all PTEs it is
+mapped to, otherwise we will not be able to detect accesses to the page coming
+from a process address space. To avoid interference with the reclaimer, which,
+as noted above, uses the Accessed bit to promote actively referenced pages, one
+more page flag is introduced, the Young flag. When the PTE Accessed bit is
+cleared as a result of setting or updating a page's Idle flag, the Young flag
+is set on the page. The reclaimer treats the Young flag as an extra PTE
+Accessed bit and therefore will consider such a page as referenced.
+
+Since the idle memory tracking feature is based on the memory reclaimer logic,
+it only works with pages that are on an LRU list, other pages are silently
+ignored. That means it will ignore a user memory page if it is isolated, but
+since there are usually not many of them, it should not affect the overall
+result noticeably. In order not to stall scanning of the idle page bitmap,
+locked pages may be skipped too.
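
The three-step procedure above can be driven entirely from userspace. Below is
a rough sketch (not part of the patch) that applies it to one virtual address
range of one process: the PID and hexadecimal address range are hypothetical
command-line arguments, a 4 KiB page size is assumed, and CAP_SYS_ADMIN is
required so that /proc/<pid>/pagemap reports real PFNs.

    /* Rough sketch of steps 1-3 for one address range of one process. */
    #define _FILE_OFFSET_BITS 64        /* large offsets into pagemap */
    #include <fcntl.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define PAGE_SIZE   4096ULL
    #define PM_PFN_MASK ((1ULL << 55) - 1)      /* pagemap bits 0-54: PFN */
    #define PM_PRESENT  (1ULL << 63)            /* pagemap bit 63: present */

    /* Fetch the PFN backing one virtual page, or 0 if it is not present. */
    static uint64_t pfn_of(int pagemap_fd, uint64_t vaddr)
    {
            uint64_t entry;

            if (pread(pagemap_fd, &entry, sizeof(entry),
                      (vaddr / PAGE_SIZE) * sizeof(entry)) != sizeof(entry))
                    return 0;
            return (entry & PM_PRESENT) ? (entry & PM_PFN_MASK) : 0;
    }

    int main(int argc, char **argv)
    {
            if (argc != 4) {
                    fprintf(stderr, "usage: %s <pid> <start-hex> <end-hex>\n", argv[0]);
                    return 1;
            }

            char path[64];
            snprintf(path, sizeof(path), "/proc/%s/pagemap", argv[1]);
            int pagemap = open(path, O_RDONLY);
            int bitmap = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
            if (pagemap < 0 || bitmap < 0) {
                    perror("open");
                    return 1;
            }

            uint64_t start = strtoull(argv[2], NULL, 16);
            uint64_t end = strtoull(argv[3], NULL, 16);

            /* Step 1: mark every present page in the range idle. */
            for (uint64_t va = start; va < end; va += PAGE_SIZE) {
                    uint64_t pfn = pfn_of(pagemap, va);
                    uint64_t bit = 1ULL << (pfn % 64);

                    if (pfn && pwrite(bitmap, &bit, sizeof(bit),
                                      (pfn / 64) * sizeof(bit)) != sizeof(bit))
                            perror("pwrite");
            }

            /* Step 2: let the workload touch its working set. */
            sleep(60);

            /* Step 3: count pages whose idle bit is still set. */
            uint64_t idle = 0;
            for (uint64_t va = start; va < end; va += PAGE_SIZE) {
                    uint64_t pfn = pfn_of(pagemap, va);
                    uint64_t word;

                    if (pfn && pread(bitmap, &word, sizeof(word),
                                     (pfn / 64) * sizeof(word)) == sizeof(word) &&
                        (word & (1ULL << (pfn % 64))))
                            idle++;
            }

            printf("%" PRIu64 " of %" PRIu64 " pages idle\n",
                   idle, (end - start) / PAGE_SIZE);
            return 0;
    }

The sketch re-reads pagemap in step 3 because pages can be remapped while the
workload runs; a real tool would also filter out mlocked or otherwise
unreclaimable pages via /proc/kpageflags, as suggested in the USER API section.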

fs/proc/page.c (+3)

@@ -10,12 +10,15 @@
 #include <linux/seq_file.h>
 #include <linux/hugetlb.h>
 #include <linux/memcontrol.h>
+#include <linux/mmu_notifier.h>
+#include <linux/page_idle.h>
 #include <linux/kernel-page-flags.h>
 #include <asm/uaccess.h>
 #include "internal.h"

 #define KPMSIZE sizeof(u64)
 #define KPMMASK (KPMSIZE - 1)
+#define KPMBITS (KPMSIZE * BITS_PER_BYTE)

 /* /proc/kpagecount - an array exposing page counts
  *

fs/proc/task_mmu.c (+4, -1)

@@ -13,6 +13,7 @@
 #include <linux/swap.h>
 #include <linux/swapops.h>
 #include <linux/mmu_notifier.h>
+#include <linux/page_idle.h>

 #include <asm/elf.h>
 #include <asm/uaccess.h>
@@ -459,7 +460,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,

         mss->resident += size;
         /* Accumulate the size in pages that have been accessed. */
-        if (young || PageReferenced(page))
+        if (young || page_is_young(page) || PageReferenced(page))
                 mss->referenced += size;
         mapcount = page_mapcount(page);
         if (mapcount >= 2) {
@@ -807,6 +808,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,

                 /* Clear accessed and referenced bits. */
                 pmdp_test_and_clear_young(vma, addr, pmd);
+                test_and_clear_page_young(page);
                 ClearPageReferenced(page);
 out:
                 spin_unlock(ptl);
@@ -834,6 +836,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,

                 /* Clear accessed and referenced bits. */
                 ptep_test_and_clear_young(vma, addr, pte);
+                test_and_clear_page_young(page);
                 ClearPageReferenced(page);
         }
         pte_unmap_unlock(pte - 1, ptl);

include/linux/mmu_notifier.h (+2)

@@ -471,6 +471,8 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)

 #define ptep_clear_flush_young_notify ptep_clear_flush_young
 #define pmdp_clear_flush_young_notify pmdp_clear_flush_young
+#define ptep_clear_young_notify ptep_test_and_clear_young
+#define pmdp_clear_young_notify pmdp_test_and_clear_young
 #define ptep_clear_flush_notify ptep_clear_flush
 #define pmdp_huge_clear_flush_notify pmdp_huge_clear_flush
 #define pmdp_huge_get_and_clear_notify pmdp_huge_get_and_clear

include/linux/page-flags.h (+11)

@@ -108,6 +108,10 @@ enum pageflags {
 #endif
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
         PG_compound_lock,
+#endif
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
+        PG_young,
+        PG_idle,
 #endif
         __NR_PAGEFLAGS,

@@ -289,6 +293,13 @@ PAGEFLAG_FALSE(HWPoison)
 #define __PG_HWPOISON 0
 #endif

+#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
+TESTPAGEFLAG(Young, young)
+SETPAGEFLAG(Young, young)
+TESTCLEARFLAG(Young, young)
+PAGEFLAG(Idle, idle)
+#endif
+
 /*
  * On an anonymous page mapped into a user virtual memory area,
  * page->mapping points to its anon_vma, not to a struct address_space;

include/linux/page_ext.h (+4)

@@ -26,6 +26,10 @@ enum page_ext_flags {
         PAGE_EXT_DEBUG_POISON,          /* Page is poisoned */
         PAGE_EXT_DEBUG_GUARD,
         PAGE_EXT_OWNER,
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
+        PAGE_EXT_YOUNG,
+        PAGE_EXT_IDLE,
+#endif
 };

 /*

include/linux/page_idle.h (new file, +110)

@@ -0,0 +1,110 @@
+#ifndef _LINUX_MM_PAGE_IDLE_H
+#define _LINUX_MM_PAGE_IDLE_H
+
+#include <linux/bitops.h>
+#include <linux/page-flags.h>
+#include <linux/page_ext.h>
+
+#ifdef CONFIG_IDLE_PAGE_TRACKING
+
+#ifdef CONFIG_64BIT
+static inline bool page_is_young(struct page *page)
+{
+        return PageYoung(page);
+}
+
+static inline void set_page_young(struct page *page)
+{
+        SetPageYoung(page);
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+        return TestClearPageYoung(page);
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+        return PageIdle(page);
+}
+
+static inline void set_page_idle(struct page *page)
+{
+        SetPageIdle(page);
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+        ClearPageIdle(page);
+}
+#else /* !CONFIG_64BIT */
+/*
+ * If there is not enough space to store Idle and Young bits in page flags, use
+ * page ext flags instead.
+ */
+extern struct page_ext_operations page_idle_ops;
+
+static inline bool page_is_young(struct page *page)
+{
+        return test_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
+}
+
+static inline void set_page_young(struct page *page)
+{
+        set_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+        return test_and_clear_bit(PAGE_EXT_YOUNG,
+                                  &lookup_page_ext(page)->flags);
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+        return test_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+
+static inline void set_page_idle(struct page *page)
+{
+        set_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+        clear_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+#endif /* CONFIG_64BIT */
+
+#else /* !CONFIG_IDLE_PAGE_TRACKING */
+
+static inline bool page_is_young(struct page *page)
+{
+        return false;
+}
+
+static inline void set_page_young(struct page *page)
+{
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+        return false;
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+        return false;
+}
+
+static inline void set_page_idle(struct page *page)
+{
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+}
+
+#endif /* CONFIG_IDLE_PAGE_TRACKING */
+
+#endif /* _LINUX_MM_PAGE_IDLE_H */

mm/Kconfig (+12)

@@ -649,6 +649,18 @@ config DEFERRED_STRUCT_PAGE_INIT
           processes running early in the lifetime of the systemm until kswapd
           finishes the initialisation.

+config IDLE_PAGE_TRACKING
+        bool "Enable idle page tracking"
+        depends on SYSFS && MMU
+        select PAGE_EXTENSION if !64BIT
+        help
+          This feature allows to estimate the amount of user pages that have
+          not been touched during a given period of time. This information can
+          be useful to tune memory cgroup limits and/or for job placement
+          within a compute cluster.
+
+          See Documentation/vm/idle_page_tracking.txt for more details.
+
 config ZONE_DEVICE
         bool "Device memory (pmem, etc...) hotplug support" if EXPERT
         default !ZONE_DMA

mm/Makefile (+1)

@@ -79,3 +79,4 @@ obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
 obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
+obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o

mm/debug.c (+4)

@@ -48,6 +48,10 @@ static const struct trace_print_flags pageflag_names[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
         {1UL << PG_compound_lock, "compound_lock" },
 #endif
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
+        {1UL << PG_young, "young" },
+        {1UL << PG_idle, "idle" },
+#endif
 };

 static void dump_flags(unsigned long flags,

mm/huge_memory.c (+10, -2)

@@ -25,6 +25,7 @@
 #include <linux/migrate.h>
 #include <linux/hashtable.h>
 #include <linux/userfaultfd_k.h>
+#include <linux/page_idle.h>

 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -1757,6 +1758,11 @@ static void __split_huge_page_refcount(struct page *page,
                 /* clear PageTail before overwriting first_page */
                 smp_wmb();

+                if (page_is_young(page))
+                        set_page_young(page_tail);
+                if (page_is_idle(page))
+                        set_page_idle(page_tail);
+
                 /*
                  * __split_huge_page_splitting() already set the
                  * splitting bit in all pmd that could map this
@@ -2262,7 +2268,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
                 VM_BUG_ON_PAGE(PageLRU(page), page);

                 /* If there is no mapped pte young don't collapse the page */
-                if (pte_young(pteval) || PageReferenced(page) ||
+                if (pte_young(pteval) ||
+                    page_is_young(page) || PageReferenced(page) ||
                     mmu_notifier_test_young(vma->vm_mm, address))
                         referenced = true;
         }
@@ -2693,7 +2700,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
                  */
                 if (page_count(page) != 1 + !!PageSwapCache(page))
                         goto out_unmap;
-                if (pte_young(pteval) || PageReferenced(page) ||
+                if (pte_young(pteval) ||
+                    page_is_young(page) || PageReferenced(page) ||
                     mmu_notifier_test_young(vma->vm_mm, address))
                         referenced = true;
         }
