Your example lacks an exit condition, so it's a bit odd as written. That said, you don't always know in advance how many goroutines you are about to launch. A typical way of dealing with this is to use sync.WaitGroup. For instance:
var wg sync.WaitGroup
for i := 0; ; i++ {
	wg.Add(1) // declare a new goroutine
	go func(i int, wg *sync.WaitGroup) {
		defer wg.Done() // signal termination when the work is done
		log.Printf("hello wait group %v\n", i)
	}(i, &wg)
	// Please add a stop condition!
}
wg.Wait() // prevents your main program from returning until all goroutines have ended
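Since the snippet above deliberately leaves the stop condition open, here is a minimal, complete sketch with one concrete choice of stop condition (a fixed worker count; the `runWorkers` helper and the counter are illustrative, not part of the original answer):

```go
package main

import (
	"log"
	"sync"
	"sync/atomic"
)

// runWorkers launches n goroutines, waits for all of them,
// and returns how many actually completed.
func runWorkers(n int) int32 {
	var wg sync.WaitGroup
	var completed int32
	for i := 0; i < n; i++ { // the stop condition: a fixed number of workers
		wg.Add(1) // declare a new goroutine
		go func(i int) {
			defer wg.Done() // signal termination when the work is done
			atomic.AddInt32(&completed, 1)
			log.Printf("hello wait group %v\n", i)
		}(i)
	}
	wg.Wait() // block until every goroutine has called Done
	return completed
}

func main() {
	runWorkers(5)
}
```

Note that wg.Add must be called before the goroutine starts (not inside it), otherwise wg.Wait may return before the goroutine has even been counted.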
But in your case it looks like creating thousands of goroutines won't help (you probably have far fewer CPUs available than that). In that case you can use a pool with limited concurrency instead. If you care to use it, I wrote a library for that:
import (
	"log"
	"time"

	"github.com/aherve/gopool"
)

func main() {
	pool := gopool.NewPool(8) // creates a pool with a concurrency limit of 8
	for i := 0; ; i++ {
		pool.Add(1)
		go func(i int, pool *gopool.GoPool) {
			defer pool.Done()
			time.Sleep(time.Second)
			log.Printf("hello pool %v\n", i)
		}(i, pool)
		// Please add a stop condition!
	}
	pool.Wait()
}
With this version, no more than 8 goroutines will run simultaneously, and the code remains very similar to the WaitGroup version.
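If you'd rather avoid a dependency, the same bounded-concurrency idea can be sketched with only the standard library, using a buffered channel as a semaphore (this is an illustration of the pattern, not how gopool is implemented; the names and the peak-tracking bookkeeping are mine):

```go
package main

import (
	"fmt"
	"sync"
)

// boundedRun starts `jobs` goroutines but allows at most `limit`
// of them to run at once. It returns the peak observed concurrency,
// which lets us check that the limit actually held.
func boundedRun(jobs, limit int) int {
	sem := make(chan struct{}, limit) // buffered channel used as a semaphore
	var wg sync.WaitGroup
	var mu sync.Mutex
	inFlight, peak := 0, 0

	for i := 0; i < jobs; i++ {
		wg.Add(1)
		sem <- struct{}{} // blocks once `limit` goroutines hold a slot
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done

			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()

			fmt.Printf("hello bounded worker %v\n", i)

			mu.Lock()
			inFlight--
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	return peak
}

func main() {
	boundedRun(20, 8)
}
```

Acquiring the semaphore in the loop (before `go`) means the loop itself blocks when the pool is full, which naturally throttles how fast new work is spawned.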