The profiled code ran for {{TotalTime}}ms. Of this, {{OverheadTime}}ms were spent on garbage collection and dynamic optimization (that's {{OverheadTimePercent}}%).
Executing Code | {{ExecutingTimePercent}}% ({{ExecutingTime}}ms) |
Garbage Collection | {{GCTimePercent}}% ({{GCTime}}ms) |
Dynamic Optimization | {{SpeshTimePercent}}% ({{SpeshTime}}ms) |
In total, {{EntriesWithoutInline}} call frames were entered and exited by the profiled code. Inlining eliminated the need to create {{EntriesInline}} call frames (that's {{InlinePercent}}%).
Interpreted Frames | {{InterpFramesPercent}}% ({{InterpFrames}}) |
Specialized Frames | {{SpeshFramesPercent}}% ({{SpeshFrames}}) |
JIT-Compiled Frames | {{JITFramesPercent}}% ({{JITFrames}}) |
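As an illustration (a sketch invented for this explanation, not code from this profile), small and frequently called routines are typical inlining candidates; in code like the following, the hot call to a tiny helper may end up inlined into its caller, so no separate call frame is created for it at run time:

```raku
# Hypothetical example: a tiny routine called from a hot loop is a
# likely candidate for inlining by the dynamic optimizer.
sub double(Int $n) { $n * 2 }

my $total = 0;
$total += double($_) for ^100_000;   # hot, monomorphic call site
say $total;
```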
The profiled code performed {{GCRuns}} garbage collections. Of these, {{FullGCRuns}} were full collections involving the entire heap.
The average nursery collection time was {{NurseryAverage}}ms. The average full collection time was {{FullAverage}}ms.
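As a hedged illustration (the workload is invented for this example), nursery collections deal with short-lived allocations, while objects that survive long enough are promoted and only scanned during full collections:

```raku
my @kept;                                # long-lived: examined by full collections
for ^1_000_000 -> $i {
    my %tmp = idx => $i;                 # short-lived: typically reclaimed in the nursery
    @kept.push(%tmp) if $i %% 100_000;   # the few survivors get promoted
}
say @kept.elems;
```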
Of {{OptimizedFrames}} specialized or JIT-compiled frames, there were {{DeoptOnes}} deoptimizations (that's {{DeoptOnePercent}}% of all optimized frames).
There was no global deoptimization triggered by the profiled code.
There was one global deoptimization triggered by the profiled code.
There were {{DeoptAlls}} global deoptimizations triggered by the profiled code.
There was no On Stack Replacement performed while executing the profiled code (normal if the code lacks long-running loops with many iterations).
There was one On Stack Replacement performed while executing the profiled code.
There were {{OSRs}} On Stack Replacements performed while executing the profiled code.
Name | Entries | Inclusive Time | Exclusive Time | Interp / Spesh / JIT |
---|---|---|---|---|
{{routine.Name}} {{routine.File}}:{{routine.Line}} | {{routine.Entries}} | {{routine.InclusiveTimePercent}}% ({{routine.InclusiveTime}}ms) | {{routine.ExclusiveTimePercent}}% ({{routine.ExclusiveTime}}ms) | OSR |
{{Current.name == '' ? '<anon>' : Current.name}}
{{File}}:{{Line}}
Calls (Inlined) | {{Entries}} + {{InlineEntries}} ({{InlinePercent}}%) |
Interpreted Calls | {{InterpPercent}}% ({{InterpEntries}}) |
Specialized Calls | {{SpeshPercent}}% ({{SpeshEntries}}) |
JIT-Compiled Calls | {{JITPercent}}% ({{JITEntries}}) |
Name | Calls | Time In Callee | Interp / Spesh / JIT | Inlined |
---|---|---|---|---|
{{callee.Name}} {{callee.File}}:{{callee.Line}} | {{callee.Calls}} | {{callee.TimePercent}}% ({{callee.Time}}ms) | | {{callee.InlinedPercent}}% |
Name | Allocations | Allocating Routines |
---|---|---|
{{alloc.Name}} | {{alloc.Allocations}} | View |
On Stack Replacement detects routines containing hot loops that are still being interpreted and swaps the running frame over to specialized or JIT-compiled code.
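For example (a sketch, not code from this profile), a routine that is entered once and then spins in a long loop would never reach optimized code without OSR, because specialization normally takes effect only on a later entry to the routine:

```raku
sub sum-squares(Int $n) {
    my $sum = 0;
    # A single long-running loop: OSR can swap this frame over to
    # specialized or JIT-compiled code while the loop is still running.
    for 1..$n -> $i {
        $sum += $i * $i;
    }
    $sum
}
say sum-squares(10_000_000);
```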
Routine | On Stack Replacements |
---|---|
{{osr.Name}} {{osr.File}}:{{osr.Line}} | {{osr.Count}} |
No OSR was performed during this profile.
Local deoptimization happens when a guard in specialized or JIT-compiled code fails. Since the code was produced assuming the guard would hold, the VM falls back to running the safe, but slower, interpreted code.
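For example (a sketch; whether a deoptimization actually occurs depends on the optimizer's heuristics), code that has been specialized after seeing only Int arguments may carry a type guard that a later call with a different type fails:

```raku
sub describe($x) { "got { $x.^name }" }

# Many monomorphic calls: the optimizer may specialize this code path
# under a guard that the argument is an Int.
describe($_) for ^50_000;

# A differently typed argument can fail that guard, forcing this frame
# back to the slower interpreted code (a local deoptimization).
say describe("not an Int");
```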
Routine | Deoptimizations |
---|---|
{{deopt.Name}} {{deopt.File}}:{{deopt.Line}} | {{deopt.Count}} |
No local deoptimizations occurred during this profile.
Global deoptimization happens when an event occurs that renders all currently type-specialized or JIT-compiled code on the call stack potentially invalid. Mixins (changing the type of an object in place) are a common reason.
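For example (a sketch with invented class and role names), mixing a role into an existing object with `does` rewrites that object's type in place, which can invalidate optimized code that assumed the old type:

```raku
class Point { has $.x; has $.y }
role Named  { has $.name = 'unnamed' }

my $p = Point.new(x => 1, y => 2);
say $p.^name;    # Point

# The mixin changes $p's type in place; this kind of event can trigger
# a global deoptimization of optimized code on the call stack.
$p does Named;
say $p.^name;    # e.g. Point+{Named}
```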
Routine | Deoptimizations |
---|---|
{{deopt.Name}} {{deopt.File}}:{{deopt.Line}} | {{deopt.Count}} |
No global deoptimizations occurred during this profile.