Performance compared to J2V8?

See original GitHub issue

First of all, thank you for taking on the challenge of replacing J2V8. I really like your approach and thoughtfulness!

We are excited to consider the migration from J2V8 for our Android app, but I was doing some very rudimentary profiling, and it looks like Javet might be slower in a number of cases. One of our basic use cases is calling into a JavaScript function frequently (to approximate a very chatty Java<->JS API), and it looks like Javet is about 40% slower there. I haven’t done much other profiling yet, but the JS engine itself appears to be similarly fast: if I call into a single large, slow JS function, I see similar performance with J2V8 and Javet.

I know microbenchmarks can be misleading, and this isn’t the only way we are evaluating Javet, but my question is twofold: (1) Have you done much profiling work against J2V8? (2) Is there something I should do to speed this up?

// Assumed imports for this snippet (J2V8, Javet, JUnit, Kotlin stdlib):
import com.eclipsesource.v8.V8
import com.caoccao.javet.interop.V8Host
import com.caoccao.javet.interop.V8Runtime
import org.junit.Test
import kotlin.system.measureNanoTime

// Simple timing helper (not shown in the original snippet): runs the block once and prints the elapsed time.
fun measure(label: String, block: () -> Unit) {
    val elapsedMs = measureNanoTime(block) / 1_000_000.0
    println("⏱ $label: $elapsedMs ms")
}

@Test
fun compareJ2V8JavetPerf() {
    // J2V8: create a runtime and define an empty function once.
    val v8 = V8.createV8Runtime()
    v8.executeScript("function emptyFunc(){return;}")

    measure("j2V8Round1") {
        for (i in 0 until 1_000_000) {
            v8.executeVoidFunction("emptyFunc", null)
        }
    }
    measure("j2V8Round2") {
        for (i in 0 until 1_000_000) {
            v8.executeVoidFunction("emptyFunc", null)
        }
    }
    v8.release()

    // Javet: create a runtime, hold the V8 locker for the whole run, and define the same function.
    val instance: V8Host = V8Host.getV8Instance()
    val v8Runtime: V8Runtime = instance.createV8Runtime()
    v8Runtime.v8Locker.use {
        v8Runtime.getExecutor("function emptyFunc(){}").executeVoid()

        measure("JavetRound1") {
            val executor = v8Runtime.getExecutor("emptyFunc();")
            for (i in 0 until 1_000_000) {
                executor.executeVoid()
            }
        }
        measure("JavetRound2") {
            val executor = v8Runtime.getExecutor("emptyFunc();")
            for (i in 0 until 1_000_000) {
                executor.executeVoid()
            }
        }
    }
    v8Runtime.close()
}

I’m seeing output like this (on my MacBook, running the simulator):

⏱ j2V8Round1: 962.841126 ms
⏱ j2V8Round2: 934.805917 ms
⏱ JavetRound1: 1524.791793 ms
⏱ JavetRound2: 1512.266751 ms

(If I take out the v8Locker line, it is about twice as slow).
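For context, “taking out the v8Locker line” just means running the same loop without the explicit locker, as in the minimal sketch below (a sketch, not from the thread, assuming the same measure() helper as above and an illustrative label). Javet then presumably handles the lock on each call rather than once for the whole run, which would line up with the roughly 2× slowdown.

// Hedged sketch: the Javet half of the benchmark without an explicit V8Locker.
// Each executeVoid() call is then responsible for its own lock bookkeeping,
// which is presumably why it measures roughly twice as slow.
val host: V8Host = V8Host.getV8Instance()
val runtime: V8Runtime = host.createV8Runtime()
runtime.getExecutor("function emptyFunc(){}").executeVoid()

measure("JavetNoLocker") {
    val executor = runtime.getExecutor("emptyFunc();")
    for (i in 0 until 1_000_000) {
        executor.executeVoid()
    }
}
runtime.close()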

The real benchmark that we are working on is more about marshaling a large object from JS to Java, and it’s quite possible that Javet is better there, but after seeing these results, I wanted to ask about it.

I was surprised that Javet was slower, since J2V8 spends a third of its executeVoidFunction time inside checkThread, and Javet gets rid of that completely! From what little I can tell from the profiler and some basic code poking, about 25% of the time is spent in checkV8Runtime(). Even when I eliminated that by calling into v8Runtime.v8Internal directly, 1M calls still took around 1,250 ms.

Thanks again!

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 12 (6 by maintainers)

Top GitHub Comments

1 reaction
caoccao commented, Aug 1, 2022

There are a few things to keep an eye on.

  • How many rounds of GC take place.
  • Which test cases are impacted by the GC.
  • Whether there is a warm-up phase for the JVM.
  • The scope of the lock could be smaller. (Similar to a DB transaction: you may want to insert 1 million rows and then commit in one transaction, but the performance would be…)
  • What the test code that calls the internal looks like.

By the way, I’m not a big fan of this kind of test, because the code paths are similar and so is the performance. It’s the features that attract Javet users.
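To make the warm-up and lock-scope points above concrete, here is a minimal sketch (not from the thread) of how the Javet side could be warmed up before measuring. The function name, round label, and iteration counts are illustrative, and it reuses the measure() helper and the Javet API from the snippet earlier in the issue.

// Hedged sketch: warm up the JIT and Javet code paths before measuring, and hint a GC
// between the warm-up and the measured round so collections triggered by the warm-up
// are less likely to land inside the measured window.
fun benchmarkJavetWithWarmUp(v8Runtime: V8Runtime) {
    v8Runtime.v8Locker.use {
        v8Runtime.getExecutor("function emptyFunc(){}").executeVoid()
        val executor = v8Runtime.getExecutor("emptyFunc();")

        // Warm-up round: exercised but not measured.
        for (i in 0 until 100_000) {
            executor.executeVoid()
        }
        System.gc() // only a hint; the JVM may ignore it

        measure("JavetWarmedRound") {
            for (i in 0 until 1_000_000) {
                executor.executeVoid()
            }
        }
    }
}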

1 reaction
Taytay commented, Jul 23, 2022

Excellent point about checkV8Runtime being essentially JITted away and how the profiler might have interfered with that. And yes, once I saw the actual timing results, I agree that trading that safety for the minuscule perf increase isn’t worth it.

I bet once you get used to Javet, you won’t want to go back to J2V8 anymore.

Agreed!

Wish you a successful migration.

Thanks!

Read more comments on GitHub >

Top Results From Across the Web

  • JavaScript Performance V8 vs Nashorn (for Typescript ...
    So performance indicates that the service should go with j2v8, but requiring that as a hard dependency has the following disadvantages: you need to ......
  • update J2V8 or migrate to graalvm or something else ...
    I did research against Embedded Node.js vs. Node.js with gRPC. I have to admit the performance gap is like ~4M calls/s vs. ~10K...
  • J2V8 a Highly Efficient JS Runtime for Java
    Summary: Ian Bull introduces J2V8 and its API, how it was inspired by SWT, how V8 (C++) was integrated with Java, and some...
  • J2V8
    In this tutorial we will demonstrate how to execute scripts with J2V8. Primitive When Possible. J2V8 was designed with performance and memory consumption...
  • Javascript benchmarks
    Nashorn vs. ... Comparing these numbers with the results of current runs gave ... Using an older version of J2V8 gave a similar...
