
Yeah, this particular instance seems ok to me. This one makes the example feel weirder:

  package main

  import (
  	"fmt"
  	"runtime"
  	"sync/atomic"
  	"time"
  )

  var a uint64

  func main() {
  	runtime.GOMAXPROCS(runtime.NumCPU())
  	fmt.Println(runtime.NumCPU(), runtime.GOMAXPROCS(0))
  	started := make(chan bool)

  	go func() {
  		started <- true
  		for {
  			atomic.AddUint64(&a, uint64(1))
  		}
  	}()

  	<-started
  	for {
  		fmt.Println(atomic.LoadUint64(&a))
  		time.Sleep(time.Second)
  	}
  }
Here we explicitly wait until the goroutine has started, so we know it's been scheduled by the time our other loop runs. On my machine (go1.8 linux/amd64) it still optimizes out the busy loop, which makes sense: nothing has changed that would convince the optimizer the loop should remain, given the compiler's current logic.

If you add time.Sleep(time.Millisecond) to the goroutine loop, or any other synchronization, it works fine. I'm having trouble thinking of a real-world example where you'd want an atomic operation going ham in a loop without any sort of timing or synchronization. At the very least, a channel indicating when the loop is done would keep the loop from being optimized away.



The started channel only ensures the goroutine has been scheduled once; it makes no guarantee the goroutine will ever be scheduled again. So it really does nothing to extend the original example code.


FWIW, "runtime.GOMAXPROCS(runtime.NumCPU())" has been done automatically for you since Go 1.5, so there's no need to include it.


> I'm having trouble thinking of a real world example where you'd want an atomic operation going ham in a loop without any sort of time or synchronization.

By using another atomic op as synchronization? After all, that's their stated purpose.


But I don't think in go atomic ops are considered synchronization in the sense that they force two goroutines to synchronize at a particular point like a chan. I.e. a chan send in one goroutine must be matched with a chan receive in another (unless they're buffered). If you have an atomic operation between two synchronization points I'd expect the only guarantee is that it occurs between the two points, and when it does it happens atomically.


>>Strictly saying this is conforming behavior as we don't give any guarantees about scheduler (just like any other language, e.g. C++). This can be explained as "the goroutine is just not scheduled".

Not a Go developer. On a multi-processor machine, how is this conforming behavior? Is "the scheduler cannot give any guarantees" acceptable?


'Is "scheduler cannot give any guarantees" acceptable?'

Most schedulers give far fewer guarantees than you might think. For a guarantee to be a guarantee, it must hold no matter what you do within the boundaries of the language. If you create a goroutine fork bomb

    func fork_bomb() {
        for {
            go fork_bomb()
        }
    }
Go doesn't, to the best of my knowledge, guarantee that any other goroutine will get any execution time, or guarantee much of anything will happen. Your OS is likely to have similarly weak guarantees for the equivalent process bomb, unless you do something to turn on more guarantees/protection.

You have to go into some relatively special-purpose stuff before you can get schedulers that will guarantee that some process will get scheduled for at least 10ms out of every 100ms or something. And then, once you get that guarantee, you'll pay some other way.

Given that most of our machines are incredibly powerful, and that a lot of them still get upgraded on a fairly routine schedule in many dimensions even if single-core clock speed has stalled out, most of us prefer to work with things that just promise to do their best as long as you don't overload them. The other prices you'd pay to get hard guarantees turn out not to be worth it on our monster machines. Of course one should always keep an eye out for when that stops being the case, but in general we're headed away from rigorous guarantees, toward shared resources that are cheaper and more scalable, making up the difference in volume.


My understanding is that the only scheduling guarantees are around synchronization points between goroutines: chans, mutexes, and the like.

Go's concurrency model is inspired by Hoare's communicating sequential processes which kinda has the same idea: http://usingcsp.com/cspbook.pdf

For instance in this program:

  package main

  import (
  	"fmt"
  	"time"
  )

  func main() {
  	point1 := make(chan bool)
  	point2 := make(chan bool)

  	go func() {
  		point1 <- true
  		fmt.Println("hello")
  		point2 <- true
  	}()

  	<-point1
  	time.Sleep(3 * time.Second)
  	<-point2
  }
An unbuffered channel send is always matched with a corresponding receive. Call the point at which `point1 <- true` and `<-point1` occur T1, and the point at which `point2 <- true` and `<-point2` occur T2. fmt.Println("hello") and time.Sleep(3 * time.Second) are both guaranteed to occur between T1 and T2. If we didn't have T2, there would be no guarantee fmt.Println("hello") runs before the program exits.

Maybe I'm wrong, but this is my understanding of Hoare's and Go's concurrency model.


How about a long-running calculation that uses atomic variables to report progress or poll for cancellation? Might it move all of the atomic ops outside of the loop?



