The golden rule of optimisation is, DON'T.
The second golden rule is, NOT YET.
The third golden rule is actually a question, HAVE YOU MEASURED?
To answer your question: go with whatever is the accepted standard in your team. Your code should not look any different unless it does something really different; in a good team, all the code is similar and boring. Any step aside is a huge flashing sign saying, "Careful, this is an exception to the rule!" So unless that is exactly what you want to say, write what everybody else writes.
Now, code-wise, the answer is in the compiler:
import java.util.logging.*;

public class ForLoop {
    private static Logger logger = Logger.getLogger("log");

    public static void a(Throwable e) {
        StackTraceElement elements[] = e.getStackTrace();
        for (int i = 0, n = elements.length; i < n; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }

    public static void b(Throwable e) {
        StackTraceElement elements[] = e.getStackTrace();
        for (int i = 0; i < elements.length; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }

    public static void c(Throwable e) {
        StackTraceElement elements[] = e.getStackTrace();
        int n = elements.length;
        for (int i = 0; i < n; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }
}
compiles into
alf-pro:Learning alf$ javap -p -c ForLoop.class
Compiled from "ForLoop.java"
public class ForLoop {
  private static java.util.logging.Logger logger;

  public ForLoop();
    Code:
      0: aload_0
      1: invokespecial #1 // Method java/lang/Object."<init>":()V
      4: return

  public static void a(java.lang.Throwable);
    Code:
      0: aload_0
      1: invokevirtual #2 // Method java/lang/Throwable.getStackTrace:()[Ljava/lang/StackTraceElement;
      4: astore_1
      5: iconst_0
      6: istore_2
      7: aload_1
      8: arraylength
      9: istore_3
      10: iload_2
      11: iload_3
      12: if_icmpge 36
      15: getstatic #3 // Field logger:Ljava/util/logging/Logger;
      18: getstatic #4 // Field java/util/logging/Level.WARNING:Ljava/util/logging/Level;
      21: aload_1
      22: iload_2
      23: aaload
      24: invokevirtual #5 // Method java/lang/StackTraceElement.getMethodName:()Ljava/lang/String;
      27: invokevirtual #6 // Method java/util/logging/Logger.log:(Ljava/util/logging/Level;Ljava/lang/String;)V
      30: iinc 2, 1
      33: goto 10
      36: return

  public static void b(java.lang.Throwable);
    Code:
      0: aload_0
      1: invokevirtual #2 // Method java/lang/Throwable.getStackTrace:()[Ljava/lang/StackTraceElement;
      4: astore_1
      5: iconst_0
      6: istore_2
      7: iload_2
      8: aload_1
      9: arraylength
      10: if_icmpge 34
      13: getstatic #3 // Field logger:Ljava/util/logging/Logger;
      16: getstatic #4 // Field java/util/logging/Level.WARNING:Ljava/util/logging/Level;
      19: aload_1
      20: iload_2
      21: aaload
      22: invokevirtual #5 // Method java/lang/StackTraceElement.getMethodName:()Ljava/lang/String;
      25: invokevirtual #6 // Method java/util/logging/Logger.log:(Ljava/util/logging/Level;Ljava/lang/String;)V
      28: iinc 2, 1
      31: goto 7
      34: return

  public static void c(java.lang.Throwable);
    Code:
      0: aload_0
      1: invokevirtual #2 // Method java/lang/Throwable.getStackTrace:()[Ljava/lang/StackTraceElement;
      4: astore_1
      5: aload_1
      6: arraylength
      7: istore_2
      8: iconst_0
      9: istore_3
      10: iload_3
      11: iload_2
      12: if_icmpge 36
      15: getstatic #3 // Field logger:Ljava/util/logging/Logger;
      18: getstatic #4 // Field java/util/logging/Level.WARNING:Ljava/util/logging/Level;
      21: aload_1
      22: iload_3
      23: aaload
      24: invokevirtual #5 // Method java/lang/StackTraceElement.getMethodName:()Ljava/lang/String;
      27: invokevirtual #6 // Method java/util/logging/Logger.log:(Ljava/util/logging/Level;Ljava/lang/String;)V
      30: iinc 3, 1
      33: goto 10
      36: return

  static {};
    Code:
      0: ldc #7 // String log
      2: invokestatic #8 // Method java/util/logging/Logger.getLogger:(Ljava/lang/String;)Ljava/util/logging/Logger;
      5: putstatic #3 // Field logger:Ljava/util/logging/Logger;
      8: return
}
As you can see, a() and c() differ only in the order of variable assignments, and will therefore behave pretty much the same in the JVM, JIT-compiled or not. b() is slightly different: there, arraylength is executed on every loop iteration. As @thurstycrow notes, that is not necessarily a performance problem, but it may be.
So when it comes to real-life performance, first make sure the code is worth optimising at all. Code that examines a stack trace is almost never worth it, since you're not expected to grab a stack trace over and over again. Moreover, I would argue that the logging call is far more expensive than the array length check.
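To make the "logging dominates" point concrete, here is a small sketch (mine, not from the original question) of the usual `Logger.isLoggable` guard: a cheap level check that skips the whole loop, so the per-element cost matters only when the level is actually enabled.

```java
import java.util.logging.*;

public class LogGuard {
    private static final Logger logger = Logger.getLogger("log");

    // Returns how many elements were actually handed to the logger,
    // so the effect of the guard is observable.
    static int log(StackTraceElement[] elements) {
        int logged = 0;
        // isLoggable is a cheap level comparison; the log() call itself
        // (record creation, handler dispatch) is the expensive part.
        if (logger.isLoggable(Level.WARNING)) {
            for (StackTraceElement e : elements) {
                logger.log(Level.WARNING, e.getMethodName());
                logged++;
            }
        }
        return logged;
    }

    public static void main(String[] args) {
        StackTraceElement[] elements = new Exception().getStackTrace();
        logger.setLevel(Level.OFF);
        System.out.println(log(elements));  // guard skips the loop: prints 0
        logger.setLevel(Level.WARNING);
        System.out.println(log(elements) == elements.length);  // prints true
    }
}
```

With logging disabled, no loop shape makes any difference at all, which is the point.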
But suppose you do have a case where you need to optimise this particular bit of code. Say you're writing a custom monitoring tool that examines every thrown exception, and you want it to be merely awfully slow rather than terminally slow (collecting the stack trace is expensive; navigating it is usually dirt cheap). Then you need to measure it in the wild, because the JIT can and will take into account everything that happens around this code as well.
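As a crude illustration of why collecting is the expensive part (a one-shot timing of my own, emphatically not a proper benchmark; see the JMH run below for how to do it right):

```java
public class StackCost {
    public static void main(String[] args) {
        // Crude one-shot timing, for illustration only. Collecting the
        // trace does the heavy lifting (walking the call stack and
        // materialising StackTraceElement objects); iterating the result
        // afterwards is a plain array scan.
        long t0 = System.nanoTime();
        StackTraceElement[] elements = new Exception().getStackTrace();
        long collectNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        int chars = 0;
        for (StackTraceElement e : elements) {
            chars += e.getMethodName().length();  // touch every element
        }
        long walkNs = System.nanoTime() - t0;

        System.out.println("collect: " + collectNs + " ns, walk: " + walkNs
                + " ns, elements: " + elements.length);
    }
}
```

The absolute numbers are noise (no warmup, no repetition), which is exactly why the real comparison below uses JMH.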
But that's a boring answer, so let's do some benchmarking anyway, even though we know it doesn't make much sense (we can, so why not?). A good benchmarking tool for this is JMH; you can play with it by following the tutorial, or just grab this answer.
The benchmark will be,
package org.sample;

import java.util.logging.*;
import java.util.*;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Scope;

@State(Scope.Thread)
public class ForLoop {
    private static Logger logger = Logger.getLogger("log");
    static {
        logger.setLevel(Level.OFF);
    }

    private StackTraceElement elements[] = new Exception().getStackTrace();

    @Benchmark
    public void a0() {
        for (int i = 0, n = elements.length; i < n; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }

    @Benchmark
    public void a() {
        for (int i = 0, n = elements.length; i < n; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }

    @Benchmark
    public void b() {
        for (int i = 0; i < elements.length; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }

    @Benchmark
    public void c() {
        int n = elements.length;
        for (int i = 0; i < n; i++) {
            logger.log(Level.WARNING, elements[i].getMethodName());
        }
    }

    @Benchmark
    public void d() {
        for (StackTraceElement e : elements) {
            logger.log(Level.WARNING, e.getMethodName());
        }
    }

    @Benchmark
    public void e() {
        Arrays.stream(elements)
              .forEach(item -> logger.log(Level.WARNING, item.getMethodName()));
    }
}
(more on a0 later; the for-each and stream variants are included as well). When we run it,
alf-pro:test alf$ java -jar target/benchmarks.jar -wi 5 -i 5 -t 4 -f 1
# JMH 1.13 (released 41 days ago)
# VM version: JDK 1.8.0, VM 25.0-b70
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.ForLoop.a
# Run progress: 0.00% complete, ETA 00:01:00
# Fork: 1 of 1
# Warmup Iteration 1: 139092694.098 ops/s
# Warmup Iteration 2: 130833602.261 ops/s
# Warmup Iteration 3: 121355640.666 ops/s
# Warmup Iteration 4: 114037414.954 ops/s
# Warmup Iteration 5: 110355992.809 ops/s
Iteration 1: 111331613.966 ops/s
Iteration 2: 111401698.736 ops/s
Iteration 3: 108193438.523 ops/s
Iteration 4: 105277721.260 ops/s
Iteration 5: 103549199.780 ops/s
Result "a":
107950734.453 ±(99.9%) 13602765.938 ops/s [Average]
(min, avg, max) = (103549199.780, 107950734.453, 111401698.736), stdev = 3532595.117
CI (99.9%): [94347968.515, 121553500.391] (assumes normal distribution)
# JMH 1.13 (released 41 days ago)
# VM version: JDK 1.8.0, VM 25.0-b70
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.ForLoop.a0
# Run progress: 16.67% complete, ETA 00:00:51
# Fork: 1 of 1
# Warmup Iteration 1: 96143259.419 ops/s
# Warmup Iteration 2: 107561397.775 ops/s
# Warmup Iteration 3: 97488364.065 ops/s
# Warmup Iteration 4: 95880266.969 ops/s
# Warmup Iteration 5: 96943938.140 ops/s
Iteration 1: 95208831.922 ops/s
Iteration 2: 94012219.834 ops/s
Iteration 3: 95369199.325 ops/s
Iteration 4: 96090174.793 ops/s
Iteration 5: 90375159.036 ops/s
Result "a0":
94211116.982 ±(99.9%) 8743077.939 ops/s [Average]
(min, avg, max) = (90375159.036, 94211116.982, 96090174.793), stdev = 2270549.576
CI (99.9%): [85468039.043, 102954194.921] (assumes normal distribution)
# JMH 1.13 (released 41 days ago)
# VM version: JDK 1.8.0, VM 25.0-b70
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.ForLoop.b
# Run progress: 33.33% complete, ETA 00:00:41
# Fork: 1 of 1
# Warmup Iteration 1: 58778298.331 ops/s
# Warmup Iteration 2: 66786552.916 ops/s
# Warmup Iteration 3: 67834850.418 ops/s
# Warmup Iteration 4: 69421299.220 ops/s
# Warmup Iteration 5: 72392517.533 ops/s
Iteration 1: 70791625.464 ops/s
Iteration 2: 70393808.080 ops/s
Iteration 3: 68931230.026 ops/s
Iteration 4: 67113234.799 ops/s
Iteration 5: 71628641.383 ops/s
Result "b":
69771707.950 ±(99.9%) 6847578.759 ops/s [Average]
(min, avg, max) = (67113234.799, 69771707.950, 71628641.383), stdev = 1778294.458
CI (99.9%): [62924129.192, 76619286.709] (assumes normal distribution)
# JMH 1.13 (released 41 days ago)
# VM version: JDK 1.8.0, VM 25.0-b70
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.ForLoop.c
# Run progress: 50.00% complete, ETA 00:00:31
# Fork: 1 of 1
# Warmup Iteration 1: 87120233.617 ops/s
# Warmup Iteration 2: 88506588.802 ops/s
# Warmup Iteration 3: 82506886.728 ops/s
# Warmup Iteration 4: 75379852.092 ops/s
# Warmup Iteration 5: 83735974.951 ops/s
Iteration 1: 80107472.633 ops/s
Iteration 2: 85088925.886 ops/s
Iteration 3: 81051339.754 ops/s
Iteration 4: 85997882.597 ops/s
Iteration 5: 87092494.956 ops/s
Result "c":
83867623.165 ±(99.9%) 11946241.585 ops/s [Average]
(min, avg, max) = (80107472.633, 83867623.165, 87092494.956), stdev = 3102401.003
CI (99.9%): [71921381.580, 95813864.750] (assumes normal distribution)
# JMH 1.13 (released 41 days ago)
# VM version: JDK 1.8.0, VM 25.0-b70
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.ForLoop.d
# Run progress: 66.67% complete, ETA 00:00:20
# Fork: 1 of 1
# Warmup Iteration 1: 90916967.864 ops/s
# Warmup Iteration 2: 91629586.405 ops/s
# Warmup Iteration 3: 107866944.554 ops/s
# Warmup Iteration 4: 102023453.435 ops/s
# Warmup Iteration 5: 111336773.130 ops/s
Iteration 1: 107770637.269 ops/s
Iteration 2: 107103283.175 ops/s
Iteration 3: 107545862.850 ops/s
Iteration 4: 113804266.277 ops/s
Iteration 5: 114628060.965 ops/s
Result "d":
110170422.107 ±(99.9%) 14295432.924 ops/s [Average]
(min, avg, max) = (107103283.175, 110170422.107, 114628060.965), stdev = 3712478.533
CI (99.9%): [95874989.183, 124465855.031] (assumes normal distribution)
# JMH 1.13 (released 41 days ago)
# VM version: JDK 1.8.0, VM 25.0-b70
# VM invoker: /Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/bin/java
# VM options: <none>
# Warmup: 5 iterations, 1 s each
# Measurement: 5 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 4 threads, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.sample.ForLoop.e
# Run progress: 83.33% complete, ETA 00:00:10
# Fork: 1 of 1
# Warmup Iteration 1: 33385239.816 ops/s
# Warmup Iteration 2: 38698292.771 ops/s
# Warmup Iteration 3: 38876327.513 ops/s
# Warmup Iteration 4: 37957299.742 ops/s
# Warmup Iteration 5: 39190117.656 ops/s
Iteration 1: 38286235.065 ops/s
Iteration 2: 39449652.416 ops/s
Iteration 3: 38541883.894 ops/s
Iteration 4: 40384962.473 ops/s
Iteration 5: 36975457.540 ops/s
Result "e":
38727638.278 ±(99.9%) 4934050.936 ops/s [Average]
(min, avg, max) = (36975457.540, 38727638.278, 40384962.473), stdev = 1281357.359
CI (99.9%): [33793587.342, 43661689.213] (assumes normal distribution)
# Run complete. Total time: 00:01:02
Benchmark    Mode  Cnt          Score          Error  Units
ForLoop.a   thrpt    5  107950734.453 ± 13602765.938  ops/s
ForLoop.a0  thrpt    5   94211116.982 ±  8743077.939  ops/s
ForLoop.b   thrpt    5   69771707.950 ±  6847578.759  ops/s
ForLoop.c   thrpt    5   83867623.165 ± 11946241.585  ops/s
ForLoop.d   thrpt    5  110170422.107 ± 14295432.924  ops/s
ForLoop.e   thrpt    5   38727638.278 ±  4934050.936  ops/s
we can see that all the loop variants are so fast that even literally the same method (that's what a0 is: an exact copy of a) produces noticeably different measurements across runs. So for this example, I'll just stop here. For your real case, feel free to pick this up with the actual code that is causing you trouble.
PS. I've added the for-each and the streaming solutions. As you can see, for-each performs pretty much the same (while being more readable); streams are noticeably slower. I'm leaving the javap examination of those two as an exercise for the reader.
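For the curious, a head start on that exercise: per JLS 14.14.2, a for-each over an array is specified as an equivalent indexed loop, and in the bytecode javac emits, both the array reference and its length end up cached in synthetic locals, much like in a() above. A rough hand-desugared sketch (my own variable names, illustrative only):

```java
public class ForEachDesugar {
    // `for (int v : values) total += v;` compiles roughly to the
    // indexed form below: array and length are read into synthetic
    // locals once, then a plain counted loop runs over them.
    static int sum(int[] values) {
        int total = 0;
        int[] arr = values;    // synthetic copy of the array expression
        int len = arr.length;  // length read once, outside the loop
        for (int i = 0; i < len; i++) {
            int v = arr[i];    // the for-each loop variable
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3}));  // prints 6
    }
}
```

Which is consistent with d() benchmarking neck and neck with a().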