This case is harder to diagnose, or rather: it is harder to tell whether the JVM is honoring the CPU limits at all. When CPU shares are used (as Swarm most likely does) we cannot rely on the classic Runtime.getRuntime().availableProcessors(), because of a bug fixed only starting from JVM 9 that makes it return the host's core count instead of the container's limit. This is no small matter, since many APIs rely on that value to size their thread pools, for example the garbage collector and the fork-join framework.
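A minimal probe makes the problem visible. The sketch below just prints the value of availableProcessors() and the parallelism of the common ForkJoinPool, which is derived from that same value; run inside a pre-Java-9 container with a CPU-share limit, both numbers reflect the host, not the limit.

```java
import java.util.concurrent.ForkJoinPool;

public class CpuProbe {
    public static void main(String[] args) {
        // Number of processors the JVM believes it has; before the
        // container-awareness fixes (JVM 9 onwards) this ignores
        // cgroup CPU shares and reports the host's core count.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("availableProcessors: " + cpus);

        // The common ForkJoinPool sizes itself from that same value,
        // so an inflated count silently oversizes every pool built on it.
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        System.out.println("commonPool parallelism: " + parallelism);
    }
}
```

The same inflated count feeds the default sizing of GC worker threads and many framework pools, which is why a limited container can still spawn host-sized thread pools.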
This actually confused me at first: the core count reported by http://localhost:8080/jvm is always 4 (the host's cores) when the stack is started in Swarm mode, regardless of any changes we make, yet we do observe performance differences as we vary the CPU limits. The only case in which the reported value seems correct is when a single container is started with the --cpuset-cpus parameter:
docker run --name ...
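For comparison, a hedged sketch of the two kinds of limit ("my-jvm-app" and the container names are placeholders, not from the original setup):

```shell
# Pin the container to two specific host cores: the JVM sees 2
# processors even on an older runtime, because CPU affinity
# (unlike a cgroup share/quota) is visible to availableProcessors().
docker run --name jvm-pinned --cpuset-cpus="0,1" my-jvm-app

# By contrast, a share/quota-based limit is invisible to pre-JVM-9
# runtimes, which keep reporting the host's core count.
docker run --name jvm-quota --cpus="2" my-jvm-app
```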