
Possible memory leak #25

Closed
begelundmuller opened this issue Nov 19, 2019 · 14 comments

Comments

@begelundmuller

Hi,

I've been playing with v8go, and so far liking it a lot!

I'm running into a fatal error when benchmarking isolate initialization (and running a simple workload). Here's my code:

package main

import (
	"encoding/json"
	"fmt"
	"math/rand"

	"rogchap.com/v8go"
)

const n = 10000 // tweak this

const script = `
	const process = (record) => {
		const res = [];
		for (let [k, v] of Object.entries(record)) {
			res.push({
				name: k,
				value: v,
			});
		}
		return JSON.stringify(res);
	};
`

func main() {
	for i := 0; i < n; i++ {
		vm, _ := v8go.NewIsolate()
		ctx, _ := v8go.NewContext(vm)
		ctx.RunScript(script, "main.js")
		str, _ := json.Marshal(makeObject())
		cmd := fmt.Sprintf("process(%s)", str)
		ctx.RunScript(cmd, "cmd.js")
	}
}

func makeObject() interface{} {
	return map[string]interface{}{
		"a": rand.Intn(1000000),
		"b": "AAAABBBBAAAABBBBAAAABBBBAAAABBBBAAAABBBB",
	}
}

When running it with n = 1000, it completes just fine. But running with n = 10000 gives me this error:

fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xb01dfacedebac1e pc=0x4204ba0]

runtime stack:
runtime.throw(0x4ac92d0, 0x2a)
	/usr/local/go/src/runtime/panic.go:617 +0x72
runtime.sigpanic()
	/usr/local/go/src/runtime/signal_unix.go:374 +0x4a9

goroutine 18 [syscall]:
runtime.cgocall(0x40b5370, 0xc0000466b8, 0x0)
	/usr/local/go/src/runtime/cgocall.go:128 +0x5b fp=0xc000046688 sp=0xc000046650 pc=0x4004dfb
rogchap.com/v8go._Cfunc_ValueDispose(0x9ceed8d0)
	_cgo_gotypes.go:179 +0x41 fp=0xc0000466b8 sp=0xc000046688 pc=0x40b3961
rogchap.com/v8go.(*Value).finalizer.func1(0xc000422d58)
	/Users/benjamin/go/pkg/mod/rogchap.com/[email protected]/value.go:26 +0x5e fp=0xc0000466f8 sp=0xc0000466b8 pc=0x40b4c3e
rogchap.com/v8go.(*Value).finalizer(0xc000422d58)
	/Users/benjamin/go/pkg/mod/rogchap.com/[email protected]/value.go:26 +0x2b fp=0xc000046728 sp=0xc0000466f8 pc=0x40b449b
runtime.call32(0x0, 0x4aca8f0, 0xc0003c2000, 0x1000000010)
	/usr/local/go/src/runtime/asm_amd64.s:519 +0x3b fp=0xc000046758 sp=0xc000046728 pc=0x40522cb
runtime.runfinq()
	/usr/local/go/src/runtime/mfinal.go:222 +0x1e2 fp=0xc0000467e0 sp=0xc000046758 pc=0x40179d2
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1337 +0x1 fp=0xc0000467e8 sp=0xc0000467e0 pc=0x4053fc1
created by runtime.createfing
	/usr/local/go/src/runtime/mfinal.go:156 +0x61

goroutine 1 [syscall]:
rogchap.com/v8go._Cfunc_NewContext(0x9d8df000, 0x0)
	_cgo_gotypes.go:139 +0x4a
rogchap.com/v8go.NewContext.func1(0xc00046a008, 0xc000054ee0)
	/Users/benjamin/go/pkg/mod/rogchap.com/[email protected]/context.go:35 +0x5e
rogchap.com/v8go.NewContext(0xc00046a008, 0x0, 0x0, 0x4ac311b)
	/Users/benjamin/go/pkg/mod/rogchap.com/[email protected]/context.go:35 +0x3d
main.main()
	/Users/benjamin/Documents/streamtest/v8/main.go:29 +0x114
exit status 2

Monitoring the process, I noticed it's allocating several gigabytes of memory until eventually crashing.

Moving the initialization of cmd out of the loop prevents the crash, but the process still racks up lots of memory. It seems to me that this whole process should be lightweight, as it never runs more than a single isolate at once. Would appreciate some thoughts :)

@asido

asido commented Nov 26, 2019

Any update on this?

@rogchap
Owner

rogchap commented Dec 1, 2019

@bem7 thanks for reporting; I’ll take a closer look and see what is happening for this use-case.
I’m presuming that you are using the latest version?

@begelundmuller
Author

begelundmuller commented Dec 1, 2019 via email

@rogchap
Owner

rogchap commented Dec 2, 2019

@bem7 So I could easily replicate your issue;

What's happening is that the Go garbage collector doesn't consider the vm (or the ctx) a dead block during the loop, so it never calls finalizer() on those objects.
This means we keep allocating memory for new Isolates over and over. Only at the end of the loop does it finally call finalizer() on the last Isolate and exit, which is way too late.

If I manually call finalizer() on each loop, then it runs fine with only 8-10MB RAM, Eg:

func BenchmarkIsolateInitialization(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		vm, _ := v8go.NewIsolate()
		ctx, _ := v8go.NewContext(vm)
		ctx.RunScript(script, "main.js")
		str, _ := json.Marshal(makeObject())
		cmd := fmt.Sprintf("process(%s)", str)
		ctx.RunScript(cmd, "cmd.js")
	ctx.Close() // this calls the finalizer function
	vm.Close()  // this calls the finalizer function
	}
}

Please note that the Close() functions do NOT exist yet; this was just a test.

Need to have a think about whether adding Close() functions is the right way to go. You'll notice that I had to close the context first, otherwise it will panic; I need to see if we can avoid that, as I don't think having to close both is very intuitive.

@rogchap
Owner

rogchap commented Dec 2, 2019

Another approach might be to dig in and find out why the GC can't deallocate the Isolate.
There are a lot of memory blocks that depend on the Isolate, so I'll need to find a good way to clean them up.

@rogchap
Owner

rogchap commented Dec 2, 2019

@bem7 #26 is a possible fix.

Because the allocations happen in the cgo world, it's slightly harder to profile the memory usage (I wish I could just use pprof); but I tried running the benchmarks continuously for some time and could see the memory flatline around the 14MB mark using top (not that sophisticated).

If you could run any tests against this PR, that would be helpful?

@begelundmuller
Author

@rogchap Just tested the PR and I'm observing similar memory usage. However, consider this slightly different benchmark, which will still consume hundreds of MB of memory:

func BenchmarkContextInitAndRun(b *testing.B) {
	b.ReportAllocs()
	vm, _ := v8go.NewIsolate()
	for n := 0; n < b.N; n++ {
		ctx, _ := v8go.NewContext(vm)
		ctx.RunScript(script, "main.js")
		str, _ := json.Marshal(makeObject())
		cmd := fmt.Sprintf("process(%s)", str)
		ctx.RunScript(cmd, "cmd.js")
	}
	vm.Dispose() // force disposal of the VM
}

It seems that – barring figuring out why GC is failing – it's necessary to manually close both isolates and contexts (as you suggested in a previous comment).

I don't necessarily think it's that big of a deal. It wouldn't be uncommon to create isolates and contexts in two different places in the code. And for one-off use, calls like defer f.Close() are idiomatic Go.

Is there a reason you're calling the close function Dispose instead of Close?

@rogchap
Owner

rogchap commented Dec 2, 2019

@bem7 Thanks for testing. Yea; I’ll add a Close for context too.
I was debating what to call the method; I originally had Close but then changed to Dispose because that is what the underlying V8 method is called. I was also thinking that the Isolate or Context is not a “writer”... but in a way, it is.
I’m now thinking that Close is more idiomatic Go; I’ll change it back.

@rogchap
Owner

rogchap commented Dec 3, 2019

I've added a Close() for the context in #26

@begelundmuller
Author

@rogchap Works great!

I was thinking about why the GC is failing. My hypothesis: Go doesn't know about the allocations being made in v8, it only tracks the size of the Isolate and Context structs themselves. So they appear much more lightweight to Go than they actually are, causing Go to allow a large number of them to be allocated before it thinks it's time to garbage collect.

@asido

asido commented Dec 3, 2019

Read up on the GOGC variable to understand how Go's GC triggers. It likely doesn't run because you haven't allocated much on Go's heap. You also can't really gauge memory usage just by looking at top, because Go doesn't simply free() the memory; instead it uses madvise() with MADV_FREE, which keeps the memory mapped unless you run into OOM (Linux 4.12 and above). Set GODEBUG=gctrace=1 to make the GC report its collections to stderr.

Go finalizers are not ideal destructors. There are too many cases when they don't run. I wouldn't use them in production. Instead, provide 1-to-1 Go mappings to C++.

@asido

asido commented Dec 10, 2019

I was thinking about why the GC is failing. My hypothesis: Go doesn't know about the allocations being made in v8, it only tracks the size of the Isolate and Context structs themselves. So they appear much more lightweight to Go than they actually are, causing Go to allow a large number of them to be allocated before it thinks it's time to garbage collect.

This is exactly what happens. Go memory usage is growing very slowly while the C memory usage is growing very quickly, so the Go garbage collector never bothers to run and the C memory usage gets enormous. That is why finalizers can lead to broken programs. One has to be aware of the limitations.

Generally, I think V8 Context and Isolate lifetime is pretty easy to understand. In many cases, one would probably defer the dispose. Value lifetime can be trickier, but it is trivial to embed them in a finalizable struct in clients.

@aidansteele

@rogchap For what it's worth, your PR fixes an issue I was just about to raise before seeing this existing issue.

On macOS, it would consistently crash with a SIGSEGV, and with PR #26 it now works fine.

For what it's worth, I hadn't yet bothered looking into memory usage - nor did I even change the code beyond the go.mod file. Maybe a coincidence?

@rogchap
Owner

rogchap commented Jan 25, 2020

Closing, as fixed by #26.
