First, the short answer: the core of a worker pool in Go is a fixed number of goroutines that read and process tasks from a shared job channel, with a sync.WaitGroup guaranteeing that all tasks complete and with the channels closed in the right order to avoid leaks. The steps are: 1. define Job and Result structs for passing tasks and results; 2. create buffered job and result channels; 3. start a fixed number of worker goroutines, each reading jobs from the job channel and processing them; 4. use sync.WaitGroup to wait for all workers to finish; 5. close the job channel once the main goroutine has sent all jobs; 6. close the result channel after all workers have exited; 7. have the main goroutine collect results until the result channel is closed; 8. optionally add context-based cancellation for finer control. This pattern suits large batches of similar tasks, keeps resource usage bounded, and prevents goroutine explosions; it is widely used in batch processing, rate limiting, and background job systems in production.
Implementing a worker pool in Go is a common and effective way to manage concurrent tasks efficiently, especially when dealing with a large number of jobs without overwhelming system resources. Instead of spawning a goroutine for every task (which can lead to high memory usage and context-switching overhead), a worker pool limits concurrency by reusing a fixed number of workers to process jobs from a shared queue.

Here's how to implement a simple yet practical worker pool in Go.
Understanding the Worker Pool Pattern
A worker pool consists of:

- A fixed number of workers (goroutines)
- A job queue (channel) that holds incoming tasks
- A result queue (optional channel) for collecting results
- A way to signal completion (often using `sync.WaitGroup`)
Workers continuously pull jobs from the job channel and process them until the channel is closed.
Basic Implementation
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Job represents a task to be processed
type Job struct {
	ID   int
	Data string
}

// Result represents the output of a job
type Result struct {
	JobID   int
	Success bool
	Msg     string
}

// worker pulls jobs from the jobs channel until it is closed
func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		// Simulate work
		time.Sleep(500 * time.Millisecond)
		fmt.Printf("Worker %d processing job %d: %s\n", id, job.ID, job.Data)

		// Simulate success/failure
		success := job.ID%2 == 0

		// Send result
		results <- Result{
			JobID:   job.ID,
			Success: success,
			Msg:     fmt.Sprintf("Job %d completed", job.ID),
		}
	}
}

// StartWorkerPool initializes the pool
func StartWorkerPool(numWorkers int, jobs <-chan Job, results chan<- Result) {
	var wg sync.WaitGroup

	// Start workers
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, jobs, results, &wg)
	}

	// Close results once every worker has finished
	go func() {
		wg.Wait()
		close(results) // Signal no more results
	}()
}
```
Using the Worker Pool
```go
func main() {
	numJobs := 10
	numWorkers := 3

	jobs := make(chan Job, numJobs)
	results := make(chan Result, numJobs)

	// Send jobs
	go func() {
		for i := 1; i <= numJobs; i++ {
			jobs <- Job{ID: i, Data: fmt.Sprintf("payload-%d", i)}
		}
		close(jobs) // Important: close job channel to stop workers
	}()

	// Start pool
	StartWorkerPool(numWorkers, jobs, results)

	// Collect results
	for result := range results {
		status := "SUCCESS"
		if !result.Success {
			status = "FAILED"
		}
		fmt.Printf("Result: Job %d | %s | %s\n", result.JobID, status, result.Msg)
	}
	fmt.Println("All jobs processed.")
}
```
Key Design Points
- Buffered job channel: decouples job submission from processing.
- Closing the job channel: tells workers there are no more jobs. Using `range` on a closed channel drains it and exits cleanly.
- WaitGroup in worker pool: ensures all workers finish before the results channel is closed.
- Results channel: optional. Use it if you need feedback (e.g., success/failure, processed data).
- No goroutine leaks: proper channel closure ensures workers exit and don't block forever.
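The design points above hinge on the shutdown order: close the job channel, let each worker's `range` loop drain and return, then close the results channel only after `wg.Wait()`. A minimal, self-contained sketch of just that sequence (the names `drainJobs`, `nWorkers`, `nJobs` are illustrative, not from the article's code):

```go
package main

import (
	"fmt"
	"sync"
)

// drainJobs demonstrates leak-free shutdown: close(jobs) lets each
// worker's range loop drain remaining jobs and return, and wg.Wait()
// guarantees every worker has exited before results is closed.
func drainJobs(nWorkers, nJobs int) int {
	jobs := make(chan int, nJobs)
	results := make(chan int, nJobs)

	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs { // exits when jobs is closed and empty
				results <- j * 2
			}
		}()
	}

	for i := 1; i <= nJobs; i++ {
		jobs <- i
	}
	close(jobs) // no more jobs: workers drain and return

	wg.Wait()      // all workers have exited...
	close(results) // ...so closing results cannot race with a send

	count := 0
	for range results {
		count++
	}
	return count
}

func main() {
	fmt.Println(drainJobs(3, 10)) // every job is processed, no goroutine leaks
}
```

Closing `results` inside a goroutine after `wg.Wait()`, as the full implementation does, is the same idea; here it is inlined for clarity.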
When to Use a Worker Pool
- Processing thousands of similar tasks (e.g., file parsing, HTTP requests, DB inserts)
- Rate limiting or controlling resource usage
- Background job processing
- Avoiding unbounded goroutine creation
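When the goal is only to cap concurrency rather than to reuse long-lived workers, a buffered channel used as a semaphore is a lighter alternative worth knowing. This is a sketch under our own naming (`runBounded`, the `limit` parameter); it is not part of the article's worker-pool code:

```go
package main

import (
	"fmt"
	"sync"
)

// runBounded caps in-flight goroutines with a buffered channel used
// as a counting semaphore: at most `limit` tasks run at once.
func runBounded(tasks []string, limit int) int {
	sem := make(chan struct{}, limit) // holds at most `limit` tokens
	var wg sync.WaitGroup
	var mu sync.Mutex
	done := 0

	for _, t := range tasks {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot (blocks once limit is reached)
		go func(task string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			_ = task                 // real work would go here
			mu.Lock()
			done++
			mu.Unlock()
		}(t)
	}
	wg.Wait()
	return done
}

func main() {
	tasks := []string{"a", "b", "c", "d", "e"}
	fmt.Println(runBounded(tasks, 3))
}
```

The trade-off: a semaphore spawns one goroutine per task (bounded in-flight count), while a worker pool reuses a fixed set of goroutines; for very large task counts the pool is cheaper.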
Possible Enhancements
You can extend this pattern with:

- Error handling and retries
- Context cancellation for graceful shutdown
- Dynamic worker scaling (advanced)
- Prioritized job queues using multiple channels or `select`
Example with context:
```go
func workerWithContext(ctx context.Context, id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case job, ok := <-jobs:
			if !ok {
				return // Channel closed
			}
			// Process job
			time.Sleep(200 * time.Millisecond)
			results <- Result{JobID: job.ID, Success: true}
		case <-ctx.Done():
			fmt.Printf("Worker %d shutting down...\n", id)
			return
		}
	}
}
```
This pattern gives you control, visibility, and efficiency. It's widely used in production Go services for batch processing, API rate limiting, and background workers.
Basically, keep it simple: fixed workers, a job channel, and proper cleanup.
The above is the detailed content of Implementing a Worker Pool in Go, from the PHP Chinese website.