Consider the following program that I run with Go 1.5.2 on 64-bit Fedora Linux:
package main

import "fmt"

func main() {
    a := make([]byte, 185 * 1024 * 1024)
    for i := 0; i < len(a); i += 4096 {
        a[i] = 'x'
    }
    fmt.Printf("%c\n", a[0])
}
It allocates a 185MB byte array and then forces the OS to commit memory to it by touching all of the pages. This program runs OK and prints the expected x even if I restrict the available virtual memory per process to 200MB using ulimit:
~/s> ulimit -S -v 204800
~/s> go run test.go
x
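For diagnosis, the runtime's own accounting can be read with runtime.ReadMemStats. The following is a small sketch of my own, not one of the failing programs; Sys is the total virtual memory the runtime has obtained from the OS, and HeapReleased is how much of the heap has been handed back to it:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    a := make([]byte, 85 * 1024 * 1024)
    a[0] = 'x' // use the slice so the program compiles and a page is committed
    a = nil
    runtime.GC()

    // Sys: total virtual memory obtained from the OS.
    // HeapReleased: heap bytes returned to the OS so far.
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("Sys = %d MB, HeapReleased = %d MB\n",
        m.Sys/(1024*1024), m.HeapReleased/(1024*1024))
}

If HeapReleased stays near zero after the GC, the 85MB is presumably still counted against the process.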
Now consider this modified version:
package main

import "fmt"

func main() {
    a := make([]byte, 85 * 1024 * 1024)
    a = nil
    a = make([]byte, 150 * 1024 * 1024)
    for i := 0; i < len(a); i += 4096 {
        a[i] = 'x'
    }
    fmt.Printf("%c\n", a[0])
}
It first allocates 85MB, then clears the reference to the slice, and then allocates 150MB. This time, under the same 200MB limit set with ulimit, it fails:
~/s> go run test.go
fatal error: runtime: out of memory
The same failure happens even with an explicit GC call after a = nil:
package main

import (
    "fmt"
    "runtime"
)

func main() {
    a := make([]byte, 85 * 1024 * 1024)
    a = nil
    runtime.GC()
    a = make([]byte, 150 * 1024 * 1024)
    for i := 0; i < len(a); i += 4096 {
        a[i] = 'x'
    }
    fmt.Printf("%c\n", a[0])
}
Is it just a runtime bug? If not, how can I force the runtime to release a large allocation?
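For completeness, the only related knob I am aware of is runtime/debug.FreeOSMemory, which is documented to force a garbage collection and then return as much memory to the operating system as possible. Below is my own untested variant of the failing program with that call slotted in; note that ulimit -v caps virtual address space, so if the runtime releases pages with madvise rather than unmapping them, this may still not help:

package main

import (
    "fmt"
    "runtime/debug"
)

func main() {
    a := make([]byte, 85 * 1024 * 1024)
    a = nil
    // Documented to force a GC and then return as much memory as
    // possible to the operating system.
    debug.FreeOSMemory()
    a = make([]byte, 150 * 1024 * 1024)
    for i := 0; i < len(a); i += 4096 {
        a[i] = 'x'
    }
    fmt.Printf("%c\n", a[0])
}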