Concurrency in Go: From First Goroutine to Production-Ready Patterns

Go makes concurrency look deceptively simple. A go keyword here, a channel there — and your program feels fast and modern. Until it doesn't. And when it breaks, it breaks in ways that leave you scratching your head for hours.

This article bridges the gap between "I've started goroutines" and "I actually understand what's happening." We'll build intuition from the ground up, spot the traps before you fall into them, and land on the patterns that hold up in production.


1. Concurrency vs Parallelism — The Mental Model

Before writing a single goroutine, get this distinction locked in:

  • Concurrency is about structure — breaking a program into independently runnable pieces.
  • Parallelism is about execution — running those pieces at the same time on multiple cores.

Go gives you concurrency primitives. Whether things actually run in parallel depends on your hardware and the Go scheduler's decisions. You don't control it — and that's the point. The runtime manages scheduling across OS threads automatically.

Goroutines are not threads. They are lightweight tasks managed by the Go runtime. Thousands of goroutines can share a single OS thread. You ask the runtime to schedule something; you don't command when or where it runs.
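To make "lightweight" concrete, here is a small sketch (the helper name spawnAndCount is ours, not a standard API) that parks 100,000 goroutines on a WaitGroup and asks the runtime how many it is tracking:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// spawnAndCount launches n goroutines that park on a WaitGroup,
// then reports how many goroutines the runtime is tracking.
func spawnAndCount(n int) int {
	var start, done sync.WaitGroup
	start.Add(1)
	done.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer done.Done()
			start.Wait() // park until released
		}()
	}
	count := runtime.NumGoroutine() // includes main plus all parked goroutines
	start.Done()                    // release everyone
	done.Wait()
	return count
}

func main() {
	fmt.Println("goroutines while parked:", spawnAndCount(100_000))
}
```

The same experiment with OS threads would exhaust memory long before 100,000: goroutine stacks start at a few kilobytes and grow on demand.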


2. Goroutines: Lightweight, Not Free

Starting a goroutine is as simple as prepending go:

go func() {
    fmt.Println("Hello from goroutine")
}()

Here's the catch hidden in that simplicity: you just surrendered control. You don't know when this runs. You don't know if it completes before your program exits. You have no handle to it.

Let's see this problem surface step by step.

Sequential (blocks forever on "dog", never reaches "cat"):

func infiniteCount(thing string) {
    for i := 1; true; i++ {
        fmt.Println(i, thing)
        time.Sleep(time.Second)
    }
}

func main() {
    infiniteCount("dog") // blocks here forever
    infiniteCount("cat") // never reached
}

With goroutines (both run concurrently):

func main() {
    go infiniteCount("dog") // runs in background
    infiniteCount("cat")    // main goroutine runs this
}
// Output: interleaved "dog" and "cat" counts

The go keyword makes the call non-blocking. The function runs concurrently while your main goroutine continues.


3. The Goroutine Lifetime Problem

What happens when you run both as goroutines?

func main() {
    go infiniteCount("dog")
    go infiniteCount("cat")
    // main returns immediately — program exits, goroutines vanish
}
// Output: (nothing)

The program exits before either goroutine gets scheduled. When main returns, the entire program terminates — regardless of running goroutines. This is one of the most common "it worked in my head" bugs for beginners.


4. Synchronization with WaitGroups

sync.WaitGroup is your primary tool for waiting on goroutines to finish.

func count(thing string) {
    for i := 1; i <= 5; i++ {
        fmt.Println(i, thing)
        time.Sleep(time.Millisecond * 500)
    }
}

func main() {
    var wg sync.WaitGroup

    wg.Add(1) // declare: 1 goroutine to wait for
    go func() {
        defer wg.Done() // decrement when done
        count("dog")
    }()

    wg.Wait() // block until count reaches zero
}

For multiple goroutines:

func main() {
    var wg sync.WaitGroup
    things := []string{"dog", "cat", "fish"}

    for _, thing := range things {
        wg.Add(1)
        go func(t string) {
            defer wg.Done()
            count(t)
        }(thing) // pass loop variable as argument — see pitfalls section
    }

    wg.Wait()
}

The three rules of WaitGroup:

  • Call wg.Add(n) before launching goroutines, not inside them.
  • Always pair Add with Done — use defer wg.Done() to guarantee it runs even on panic.
  • Call wg.Wait() from only one goroutine (usually main).


5. Channels: More Than Just Pipes

Channels are Go's preferred mechanism for goroutine communication. But here's the insight that most tutorials miss: channels don't just transfer data — they synchronize execution.

func countWithChannel(thing string, c chan string) {
    for i := 1; i <= 5; i++ {
        c <- thing             // send: blocks until receiver is ready
        time.Sleep(time.Millisecond * 500)
    }
    close(c) // signal: no more values
}

func main() {
    c := make(chan string) // unbuffered channel
    go countWithChannel("dog", c)

    for msg := range c { // receive until channel is closed
        fmt.Println(msg)
    }
}

range over a channel reads until the channel is closed. Always close channels from the sender side, never the receiver.
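One detail worth knowing alongside close: the comma-ok form of receive tells you whether a value came from a send or from a closed, drained channel. A minimal sketch:

```go
package main

import "fmt"

func main() {
	c := make(chan int, 2)
	c <- 10
	c <- 20
	close(c)

	v, ok := <-c
	fmt.Println(v, ok) // 10 true
	v, ok = <-c
	fmt.Println(v, ok) // 20 true
	v, ok = <-c
	fmt.Println(v, ok) // 0 false: channel is closed and empty
}
```

This is exactly the test that range performs for you on every iteration.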

Unbuffered Channels as Synchronization Points

An unbuffered channel's send blocks until a receiver is ready, and vice versa. This creates a strict ordering guarantee: per Go's memory model, the send happens before the corresponding receive completes. This is not just a data transfer; it's a rendezvous point.

var x int
done := make(chan struct{})

go func() {
    x = 42        // write happens here
    close(done)   // then signal
}()

<-done            // wait for signal
fmt.Println(x)   // guaranteed to see 42

This is safe. The channel operation establishes a happens-before relationship as defined by Go's memory model. A time.Sleep does not.

Channel Ownership

Every channel should have a clear owner — the goroutine responsible for creating, writing to, and closing it. Ask yourself for every channel in your code:

  • Who creates it?
  • Who sends to it?
  • Who closes it?
  • Who receives from it?

If the answer is "I'm not sure," that's a design problem.
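One way to make ownership explicit is the generator pattern: a function creates the channel, is its only sender, closes it when done, and hands consumers a receive-only view. A sketch (generate is an illustrative name, not a library function):

```go
package main

import "fmt"

// generate owns the channel: it creates it, is the only sender,
// and closes it when done. Callers can only receive.
func generate(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out) // the owner closes; receivers never do
		for _, n := range nums {
			out <- n
		}
	}()
	return out
}

func main() {
	for v := range generate(1, 2, 3) {
		fmt.Println(v)
	}
}
```

The receive-only return type (`<-chan int`) makes misuse a compile error: consumers cannot send to or close the channel even by accident.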


6. The Deadlock Trap

Deadlocks happen when goroutines are waiting on each other with no way out. The most common beginner version:

func main() {
    c := make(chan string)
    c <- "hello" // blocks — no receiver exists
    msg := <-c
    fmt.Println(msg)
}
// fatal error: all goroutines are asleep - deadlock!

The send on an unbuffered channel blocks until a receiver is ready. Since no goroutine is receiving, the main goroutine blocks forever. The Go runtime detects that every goroutine is blocked and aborts with a fatal error (not a recoverable panic).

Fix: put the send in a goroutine:

func main() {
    c := make(chan string)
    go func() {
        c <- "hello"
    }()
    msg := <-c
    fmt.Println(msg)
}

Now there's a sender goroutine and a receiver (main). They meet at the channel and both proceed.


7. Buffered Channels

A buffered channel lets senders proceed without waiting for a receiver — up to the buffer capacity.

c := make(chan string, 2) // buffer size 2

c <- "hello" // doesn't block (buffer has room)
c <- "world" // doesn't block (buffer has room)
// c <- "overflow" // this would block — buffer is full

msg := <-c
fmt.Println(msg) // "hello"

msg = <-c
fmt.Println(msg) // "world"

When to use buffered channels:

  • Decoupling producers and consumers that run at different speeds
  • Rate-limiting / semaphore patterns
  • Collecting results from goroutines when you know the count upfront

When NOT to use them as a crutch:

  • Don't add a buffer just to "fix" a deadlock. If a channel is deadlocking, it usually means your design needs revisiting.


8. Select: Multiplexing Channels

select lets a goroutine wait on multiple channels simultaneously, proceeding with whichever is ready first.

func main() {
    c1 := make(chan string)
    c2 := make(chan string)

    go func() {
        for {
            time.Sleep(500 * time.Millisecond)
            c1 <- "tick (every 500ms)"
        }
    }()

    go func() {
        for {
            time.Sleep(2 * time.Second)
            c2 <- "tock (every 2s)"
        }
    }()

    for {
        select {
        case msg := <-c1:
            fmt.Println(msg)
        case msg := <-c2:
            fmt.Println(msg)
        }
    }
}

select blocks until one of its cases can proceed. If multiple are ready simultaneously, it picks one at random — this is intentional, not a bug.

Select with a Default

Adding a default case makes select non-blocking:

select {
case msg := <-c:
    fmt.Println("received:", msg)
default:
    fmt.Println("no message ready, moving on")
}

Select for Cancellation

select is how you implement cancellation and timeouts:

func doWork(ctx context.Context, jobs <-chan int) {
    for {
        select {
        case j, ok := <-jobs:
            if !ok {
                return
            }
            process(j)
        case <-ctx.Done():
            fmt.Println("cancelled:", ctx.Err())
            return
        }
    }
}

9. Data Races and Memory Visibility

This is where most production bugs hide. Consider:

var x int

go func() {
    x = 42
}()

fmt.Println(x) // might print 0, might print 42 — no guarantee the write is ever visible

This is a data race. Without synchronization, Go makes no promises about when the write to x becomes visible to other goroutines. This is not a timing issue you can fix with time.Sleep. It is undefined behavior.

Run the race detector — always:

go run -race main.go
go test -race ./...

The race detector catches races at runtime and is invaluable. Enable it in your CI pipeline.

Safe Shared State with Mutex

When multiple goroutines genuinely need to share state, use a mutex:

type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

Or Use sync/atomic for Simple Cases

var counter int64

go func() {
    atomic.AddInt64(&counter, 1)
}()

val := atomic.LoadInt64(&counter)
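The snippet above omits any waiting, so the final Load could run before the Add. A complete, race-free sketch (atomicCount is an illustrative helper) that reads only after all increments finish:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// atomicCount increments a shared counter from n goroutines
// without a mutex, then reads it after all writes complete.
func atomicCount(n int) int64 {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1)
		}()
	}
	wg.Wait() // happens-before: all increments finish before the read
	return atomic.LoadInt64(&counter)
}

func main() {
	fmt.Println(atomicCount(1000)) // a plain counter++ here would race
}
```

Atomics are the right tool only for single-word counters and flags; once you have multiple related fields, reach for a mutex.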

10. Bounded Concurrency: Worker Pools

Spawning an unbounded number of goroutines is a production anti-pattern:

// ⚠️ Don't do this with large slices
for _, v := range millionItems {
    go process(v) // spawns 1,000,000 goroutines
}

Each goroutine has overhead: memory and scheduler time. And if process does I/O, a million in-flight calls will exhaust file descriptors or connection pools.

The worker pool pattern gives you bounded, predictable concurrency:

func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        fmt.Printf("worker %d processing job %d\n", id, j)
        time.Sleep(time.Second) // simulate work
        results <- j * 2
    }
}

func main() {
    const numJobs    = 5
    const numWorkers = 3

    jobs    := make(chan int, numJobs)
    results := make(chan int, numJobs)

    // Start fixed pool of workers
    for w := 1; w <= numWorkers; w++ {
        go worker(w, jobs, results)
    }

    // Send all jobs
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs) // signals workers: no more jobs

    // Collect results
    for a := 1; a <= numJobs; a++ {
        fmt.Println("result:", <-results)
    }
}

Sample output:

worker 3 processing job 1
worker 1 processing job 2
worker 2 processing job 3
worker 3 processing job 4   (worker 3 picked up next job after finishing job 1)
worker 1 processing job 5
result: 2
result: 4
result: 6
result: 8
result: 10

Three workers share five jobs. The channel acts as a work queue; workers pull from it as they become free.

Semaphore Pattern (Simpler Alternative)

For simpler cases, a buffered channel works as a semaphore:

sem := make(chan struct{}, 10) // max 10 concurrent goroutines

for _, v := range items {
    sem <- struct{}{} // acquire slot
    go func(val Item) {
        defer func() { <-sem }() // release slot
        process(val)
    }(v)
}

// Drain the semaphore to wait for all goroutines
for i := 0; i < cap(sem); i++ {
    sem <- struct{}{}
}

11. Common Pitfalls and How to Avoid Them

Pitfall 1: Loop Variable Capture

// ❌ Bug: all goroutines share the same 'v' variable
for _, v := range items {
    go func() {
        process(v) // v may have changed by the time this runs
    }()
}

// ✅ Fix: pass as argument
for _, v := range items {
    go func(val Item) {
        process(val)
    }(v)
}

Note: as of Go 1.22, each loop iteration declares a fresh variable, so this bug no longer occurs in modules built with go 1.22 or later. Passing the value explicitly is still good practice for clarity.

Pitfall 2: Forgetting to Close Channels

// ❌ The receiver range loop will block forever
func producer(c chan int) {
    for i := 0; i < 5; i++ {
        c <- i
    }
    // forgot: close(c)
}

func main() {
    c := make(chan int)
    go producer(c)
    for v := range c { // blocks forever waiting for more
        fmt.Println(v)
    }
}

Always close a channel when the sender is done. Only the sender should close.

Pitfall 3: Using sleep for Synchronization

// ❌ This is gambling, not synchronization
go func() { x = 42 }()
time.Sleep(time.Millisecond)
fmt.Println(x) // maybe works, will fail under load

Use channels, WaitGroups, or mutexes. Sleep-based synchronization will fail under load or on different hardware.

Pitfall 4: Goroutine Leaks

A goroutine that's blocked with no way to exit is a leak. Common cause: sending to a channel nobody reads.

// ❌ If nobody reads from results, this goroutine leaks
go func() {
    result := heavyWork()
    results <- result // blocks forever if caller moved on
}()

Use context.Context for cancellation, and always ensure your goroutines have an exit path.

Pitfall 5: Nil Channel Tricks

A send or receive on a nil channel blocks forever. This is actually useful in select:

var c chan int // nil

select {
case v := <-c:  // blocks forever, effectively disabling this case
    fmt.Println(v)
case msg := <-other:
    fmt.Println(msg)
}

Setting a channel to nil inside a select loop disables that case — a useful technique for draining multiple channels.
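A sketch of that draining technique (mergeTwo is an illustrative name): receive from two channels until both are closed, disabling each case by nil-ing its channel once it is drained:

```go
package main

import "fmt"

// mergeTwo collects values from both channels until each is closed,
// setting a drained channel to nil so its select case goes dormant.
func mergeTwo(a, b <-chan int) []int {
	var out []int
	for a != nil || b != nil {
		select {
		case v, ok := <-a:
			if !ok {
				a = nil // closed: this case now blocks forever
				continue
			}
			out = append(out, v)
		case v, ok := <-b:
			if !ok {
				b = nil
				continue
			}
			out = append(out, v)
		}
	}
	return out
}

func main() {
	a := make(chan int, 2)
	a <- 1
	a <- 2
	close(a)
	b := make(chan int, 1)
	b <- 3
	close(b)
	fmt.Println(mergeTwo(a, b)) // three values; interleaving may vary
}
```

Without the nil trick, a closed channel would keep winning the select with zero values and the loop would spin forever.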


12. Concurrency Checklist

Before shipping concurrent code, run through this:

Design

  • Is ownership of each channel clear? (who creates, sends, closes, receives?)
  • Does every goroutine have a defined exit path?
  • Is concurrency actually bounded, or could you spawn unbounded goroutines?

Correctness

  • Did you run go test -race ./...?
  • Are all shared variables protected by a mutex or accessed only through channels?
  • Are all channels closed by the sender, not the receiver?
  • Is wg.Add(n) called before launching goroutines?

Robustness

  • Do goroutines handle panics (with recover) if they're long-lived?
  • Do goroutines respect context.Context cancellation?
  • Are goroutine leaks guarded against (e.g., timeout, cancellation, buffered channels)?

Key Principles

Communicate to synchronize, don't synchronize to communicate. Channel operations in Go carry a formal happens-before guarantee. Passing data through a channel is also synchronizing. This is the design center of Go concurrency.

Ownership is everything. Every piece of shared state needs a clear owner. If ownership is ambiguous, you'll get races, leaks, or deadlocks. Good concurrent design makes ownership obvious.

Boring is better. Experienced Go developers use a small set of well-understood patterns: worker pools, WaitGroups, select with context. Not because they lack creativity, but because these patterns make failure modes visible and behavior predictable.

The race detector is your friend. Enable it in tests and CI. It catches bugs that review and intuition miss.