Performance regression with ZincWorkerMain subprocesses #5693

@lihaoyi

Description

If we enable per-module jvmIds in our example/thirdparty/netty example build:

@@ -15,6 +15,7 @@ import mill.api.BuildCtx
 def isOSX = System.getProperty("os.name").toLowerCase.contains("mac")
 
 trait NettyBaseModule extends MavenModule {
+  def jvmId = "17"
   def javacOptions = Seq("-source", "1.8", "-target", "1.8")
 }
 trait NettyBaseTestSuiteModule extends NettyBaseModule, TestModule.Junit5 {
@@ -78,6 +79,7 @@ trait NettyModule extends NettyBaseModule {
   def testMvnDeps: T[Seq[Dep]] = Task { Seq.empty[Dep] }
 
   object test extends NettyBaseTestSuiteModule, MavenTests {
+    def jvmId = "17"
     def moduleDeps = super.moduleDeps ++ testModuleDeps
     def mvnDeps = super.mvnDeps() ++ testMvnDeps()
     def forkWorkingDir = NettyModule.this.moduleDir

Running ./mill clean && time ./mill __.compile on the Netty codebase now takes ~234 seconds, where previously it took ~34 seconds. In contrast, without the per-module jvmIds, it takes ~10 seconds.

The 10s -> 34s slowdown from per-module jvmIds is due to forking. I would expect the long-lived ZincWorkerMain daemon to avoid forking and thus take ~10s, similar to the default behavior, so the fact that it becomes even slower at ~234s is surprising. We should figure out where the slowdown is and fix it.
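
For illustration, here is a minimal sketch of the expected behavior: one warm worker process per jvmId, started once and reused across compile tasks instead of forking a fresh JVM per module. The names ZincWorkerHandle, spawnWorker, and workerFor are hypothetical and not from the Mill codebase; this only shows the caching pattern the daemon is assumed to follow.

    import java.util.concurrent.ConcurrentHashMap

    object WorkerPoolSketch {
      // Stand-in for a handle to a live, long-running worker subprocess
      final case class ZincWorkerHandle(jvmId: String)

      private val workers = new ConcurrentHashMap[String, ZincWorkerHandle]()

      // Assumed behavior: the first compile task for a given jvmId pays the
      // process-startup cost; subsequent tasks reuse the same warm worker.
      def workerFor(jvmId: String): ZincWorkerHandle =
        workers.computeIfAbsent(jvmId, id => spawnWorker(id))

      private def spawnWorker(jvmId: String): ZincWorkerHandle = {
        // In the real system this would start a ZincWorkerMain subprocess on
        // the requested JVM; here it is just a placeholder value.
        ZincWorkerHandle(jvmId)
      }
    }

Under that assumption the per-module jvmIds should add roughly one JVM startup per distinct jvmId, not per module, which is why the observed ~234s is unexpected.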
