
I am trying to figure out the use of strings.Join method compared to regular concatenation with +=.

For this comparison, I am using both methods on os.Args as the list of strings to concatenate.

My code for the concatenating functions is:

func regular_concatenation() {
    var s, sep string
    for i := 1; i < len(os.Args); i++ {
        s += sep + os.Args[i]
        sep = " "
    }
    fmt.Println(s)
}

func join_concatenation() {
    fmt.Println(strings.Join(os.Args, " "))
}

And the main function for the performance check is:

func main() {
    var end_1, end_2 float64
    var start_1, start_2 time.Time

    start_1 = time.Now()
    for i := 0; i < 100; i++ {
        join_concatenation()
    }
    end_1 = time.Since(start_1).Seconds()

    start_2 = time.Now()
    for i := 0; i < 100; i++ {
        regular_concatenation()
    }
    end_2 = time.Since(start_2).Seconds()

    fmt.Println(end_1)
    fmt.Println(end_2)
}

Problem is - when I run the code, say with 20 arguments (os.Args), I get the result that the strings.Join method is slower than regular concatenation.

This is confusing to me because, the way I understood it, the += method creates a new string each time (strings are immutable in Go), so the garbage collector has to run to collect the unused data, and this wastes time.

So the question is - is strings.Join really a faster method? And if it is - what am I doing wrong in this example?

  • Start by creating real, independent benchmarks; poor benchmarks give poor results. The join method will have fewer allocations, and will be faster provided there aren't other confounding variables. Your join function also handles one more argument than the other. Commented Mar 15, 2020 at 0:47
  • You'll have to look up how to benchmark Go code; there's nothing to discuss here, I think. Commented Mar 15, 2020 at 1:06
  • Well, I would suggest you ask what's wrong with this benchmark, because your simple question has a simple answer: strings.Join is faster than string concatenation. Commented Mar 15, 2020 at 5:31
  • 2
    One difference may be that you are adding os.Args[0] in the 2nd benchmark. Commented Mar 15, 2020 at 6:51
  • Try using an array of 1000 or so elements instead of os.Args and you will see that strings.Join is faster every time. Commented Mar 2, 2021 at 11:30

1 Answer


Due to various compiler optimizations, string concatenation can be quite efficient, but in your case I found that strings.Join is faster (see benchmarks of your code below).

In general, for building up a string it is recommended to use strings.Builder. See How to efficiently concatenate strings in Go.

BTW you should be using the excellent benchmarking facility that comes with Go. Just put these functions in a file ending with _test.go (e.g. string_test.go) and run "go test -bench=.".

func BenchmarkConcat(b *testing.B) { // 132 ns/op
    ss := []string {"sadsadsa", "dsadsakdas;k", "8930984"}
    for i := 0; i < b.N; i++ {
        var s, sep string
        for j := 0; j < len(ss); j++ {
            s += sep + ss[j]
            sep = " "
        }
        _ = s
    }
}

func BenchmarkJoin(b *testing.B) {  // 56.7 ns/op
    ss := []string {"sadsadsa", "dsadsakdas;k", "8930984"}
    for i := 0; i < b.N; i++ {
        s := strings.Join(ss, " ")
        _ = s
    }
}

func BenchmarkBuilder(b *testing.B) { // 58.5 ns/op
    ss := []string {"sadsadsa", "dsadsakdas;k", "8930984"}
    for i := 0; i < b.N; i++ {
        var s strings.Builder
        // Grow builder to expected max length (maybe this
        // needs to be calculated dep. on your requirements)
        s.Grow(32)
        var sep string
        for j := 0; j < len(ss); j++ {
            // Write the separator before the element so the
            // output matches the other two benchmarks
            s.WriteString(sep)
            s.WriteString(ss[j])
            sep = " "
        }
        _ = s.String()
    }
}

7 Comments

There's nothing to benchmark. Concatenation explicitly makes an allocation for each operation, whereas strings.Join uses a preallocated buffer. Of course concatenation might be faster in the case of 1-2 elements because of less work.
There's no need to use a builder because that's what strings.Join does internally. If the size is known, I'd use make([]byte, size) and copy if it needs to be fast.
If there is nothing to benchmark, isn't it weird that the code I used with the "time" testing does not show the results you are speaking of (that strings.Join is faster either way)?
Sure, I upvoted this answer because it is really helpful information, but I can't see the answer to his actual question: if Join is supposed to be faster, why isn't it in his case (without a benchmark)?
Good answer; a benchmark is always necessary. I recently met a different situation where I had to create a new string slice and had no idea about the capacity. In that code strings.Builder was nearly 3 times faster than strings.Join.
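The make([]byte, size) + copy idea mentioned in the comments can be sketched like this; it is essentially what strings.Join does internally (joinPrealloc is a hypothetical name, not a standard function):

```go
package main

import "fmt"

// joinPrealloc concatenates ss with sep into one preallocated buffer:
// one allocation for the buffer, plus the final string conversion.
func joinPrealloc(ss []string, sep string) string {
	if len(ss) == 0 {
		return ""
	}
	// Compute the exact final length up front.
	n := len(sep) * (len(ss) - 1)
	for _, s := range ss {
		n += len(s)
	}
	b := make([]byte, 0, n) // single buffer for the whole result
	b = append(b, ss[0]...)
	for _, s := range ss[1:] {
		b = append(b, sep...)
		b = append(b, s...)
	}
	return string(b)
}

func main() {
	fmt.Println(joinPrealloc([]string{"a", "bb", "ccc"}, " ")) // a bb ccc
}
```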
