Advanced Go Concurrency Patterns
Jul 29, 2025

Summary:

1. Use `context.Context` to propagate cancellation so that child goroutines terminate promptly.
2. Use `errgroup.Group` to aggregate errors and fail fast across concurrent tasks.
3. Use the pipeline pattern with fan-out/fan-in to improve data-processing throughput.
4. Use `sync.Once` and `atomic.Value` for efficient concurrent initialization.
5. Use `rate.Limiter` to control request rates and prevent system overload.
6. Avoid the memory leak of `time.After` inside `select` loops; use a stoppable timer instead.
7. Use `atomic.Value` for lock-free configuration hot-reloads, storing only immutable objects.

These patterns are meant to be combined: `context` forms the lifecycle backbone, while error handling, rate limiting, and atomic operations round out a robust, efficient concurrent system.
Go's concurrency model is known for its simplicity and efficiency, with goroutines and channels at its core. As system complexity grows, however, the bare `go` statement and `chan` alone are no longer enough for sophisticated concurrency control. Mastering advanced concurrency patterns helps us write more robust, maintainable, and high-performance concurrent programs.

Below are several advanced Go concurrency patterns that are very useful in real-world development, together with usage scenarios and best practices.
1. Context and Cancellation Propagation
In a concurrent program, an operation may start multiple subtasks (goroutines). When the main task is cancelled, all subtasks should be terminated in time to avoid wasting resources.

Key point: use `context.Context` to manage life cycles uniformly.

```go
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()

resultCh := make(chan string, 1)
go func() {
	result := slowOperation(ctx) // slowOperation must listen on ctx.Done() internally
	select {
	case resultCh <- result:
	default:
	}
}()

select {
case <-ctx.Done():
	log.Println("operation cancelled:", ctx.Err())
case result := <-resultCh:
	log.Println("got result:", result)
}
```
Best practices:

- All long-running functions should accept a `context.Context` parameter.
- Check `ctx.Done()` regularly inside `for` loops, or listen for it with `select`.
- Use `context.WithCancel`, `WithTimeout`, and `WithDeadline` to build a hierarchical cancellation tree.
2. ErrGroup: Error aggregation of concurrent tasks
When multiple tasks must run concurrently and you want to fail fast as soon as any task errors, while still waiting for all tasks to clean up, `errgroup.Group` is the best choice.
```go
g, ctx := errgroup.WithContext(context.Background())

urls := []string{"http://example1.com", "http://example2.com"}
for _, url := range urls {
	url := url // capture the loop variable (unnecessary since Go 1.22)
	g.Go(func() error {
		req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
		if err != nil {
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		// Process the response...
		return nil
	})
}

if err := g.Wait(); err != nil {
	log.Printf("failed to fetch: %v", err)
}
```
Advantages:

- Automatically propagates the `context` to all subtasks.
- As soon as one task returns a non-`nil` error, the others are cancelled through `ctx`.
- Waits for all started goroutines to complete, even when an error occurs.
3. Pipeline mode: Fan-out/Fan-in
Split the data stream into multiple concurrent processing stages to improve throughput. Commonly found in data processing pipelines.
```go
func gen(ctx context.Context, nums ...int) <-chan int {
	out := make(chan int, len(nums))
	go func() {
		defer close(out)
		for _, n := range nums {
			select {
			case out <- n:
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}

func sq(ctx context.Context, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			select {
			case out <- n * n:
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}

// Usage:
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

nums := gen(ctx, 1, 2, 3, 4)
squared := sq(ctx, nums)
for n := range squared {
	fmt.Println(n)
}
```
Optimization tips:

- Fan-out: run multiple `sq` workers concurrently:

```go
// Start multiple sq workers reading from the same input channel
var chans []<-chan int
for i := 0; i < 3; i++ {
	chans = append(chans, sq(ctx, nums))
}

// Merge the results (fan-in)
merged := merge(ctx, chans...)
```

- Use a `merge` function to combine multiple channels into one.
4. Double-Checked Locking and sync.Once (Controlling Initialization Concurrency)
Although `sync.Once` guarantees that initialization runs exactly once, in some scenarios a hand-rolled double-checked lock can reduce lock contention.

```go
var once sync.Once
var client *http.Client

func GetClient() *http.Client {
	once.Do(func() {
		client = &http.Client{Timeout: 10 * time.Second}
	})
	return client
}
```
Note: Go's `sync.Once` is already well optimized internally, so manual double-checked locking is usually unnecessary. In extreme performance scenarios, however, it can be combined with `atomic.Value` for lock-free reads:

```go
var (
	client atomic.Value
	once   sync.Once
)

func initClient() {
	client.Store(&http.Client{Timeout: 10 * time.Second})
}

func GetClient() *http.Client {
	c := client.Load()
	if c == nil {
		once.Do(initClient)
		c = client.Load()
	}
	return c.(*http.Client)
}
```
5. Rate Limiting (rate.Limiter) and Resource Control
Use `golang.org/x/time/rate` to implement smooth rate limiting and prevent system overload.
```go
limiter := rate.NewLimiter(10, 1) // 10 events per second, burst of 1

for {
	if err := limiter.Wait(ctx); err != nil {
		break // the context was cancelled
	}
	go processRequest()
}
```
Applicable scenarios:

- Throttling calls to third-party APIs.
- Controlling the concurrency of background tasks.
- Preventing database connections from spiking.
6. select Timeouts and the time.After Trap
`select` is the core control structure of Go concurrency, but it is easy to misuse.
```go
select {
case msg := <-ch:
	fmt.Println("received:", msg)
case <-time.After(1 * time.Second):
	fmt.Println("timeout")
}
```
Problem: `time.After` creates a new timer on every call, which can leak memory until each timer fires (especially in loops).
Fix: use `time.NewTimer` and stop it manually:

```go
timer := time.NewTimer(1 * time.Second)

select {
case msg := <-ch:
	fmt.Println(msg)
	if !timer.Stop() {
		<-timer.C // drain the channel if the timer already fired
	}
case <-timer.C:
	fmt.Println("timeout")
}
```
7. Concurrency-Safe Configuration Hot-Reloads: atomic.Value
Instead of protecting the whole configuration struct with a mutex, use `atomic.Value` to get lock-free reads.

```go
var config atomic.Value

// Initialize
config.Store(loadConfig())

// Read (high frequency)
current := config.Load().(*Config)

// Update (low frequency)
newCfg := loadConfig()
config.Store(newCfg)
```
Requirement: the stored objects must be immutable, or a fresh copy must be stored on each update.
Summary
These advanced concurrency patterns are not isolated and are often combined:

- Use `context` to control life cycles.
- Use `errgroup` to manage errors across concurrent tasks.
- Build data flows with pipelines.
- Use `rate.Limiter` to control resources.
- Use `atomic.Value` for high-performance configuration updates.

Go's concurrency philosophy is: "Don't communicate by sharing memory; share memory by communicating." But these advanced patterns show that a truly reliable system also depends on judicious use of locks, atomic operations, and context-based control to balance performance against correctness. None of this is complicated, but it is easy to overlook.