The short answer: implement the Worker Pool pattern using channels and goroutines to bound concurrency and process batch tasks efficiently. 1. Define a Job struct for tasks and a Result struct for outcomes; 2. Create buffered jobs and results channels; 3. Start a fixed number of worker goroutines, each reading from the jobs channel, writing to the results channel, and calling wg.Done() when it finishes; 4. The main goroutine sends all tasks and then closes the jobs channel; 5. Use sync.WaitGroup to wait for all workers, closing the results channel in a separate goroutine; 6. The main goroutine collects from results and counts successes and failures. This pattern prevents resource exhaustion by capping the number of goroutines, suits batch task processing, and can be extended with context timeouts, retry mechanisms, and so on. The final program prints the processing results correctly and exits cleanly.
In Go, the Worker Pool is a common concurrency pattern used to process large numbers of tasks efficiently while limiting how many goroutines run simultaneously, avoiding resource exhaustion. It is well suited to batch workloads such as file processing, network requests, and data cleaning.

The following is a specific example to demonstrate how to implement a simple Worker Pool using Go.
Scenario: process a batch of tasks (such as downloading URLs)
Suppose we have 100 tasks (such as HTTP requests), but we only want 10 goroutines running at the same time, to prevent system overload.

? Core structure
- A
jobs
channel - Multiple workers (goroutines)
- A result queue (
results
channel) - Wait for all tasks to complete (
sync.WaitGroup
)
Complete sample code

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// Job describes a single task.
type Job struct {
	ID int
}

// Result describes the outcome of one task.
type Result struct {
	JobID   int
	Success bool
	Message string
}

func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		fmt.Printf("Worker %d started job %d\n", id, job.ID)
		// Simulate a time-consuming operation (such as a network request).
		time.Sleep(time.Duration(rand.Intn(1000)) * time.Millisecond)
		success := rand.Float32() > 0.3 // ~70% success rate
		msg := "processed successfully"
		if !success {
			msg = "failed"
		}
		results <- Result{JobID: job.ID, Success: success, Message: msg}
		fmt.Printf("Worker %d finished job %d\n", id, job.ID)
	}
}

func main() {
	const numJobs = 100
	const numWorkers = 10

	jobs := make(chan Job, numJobs)
	results := make(chan Result, numJobs)

	var wg sync.WaitGroup

	// 1. Start the workers.
	for i := 1; i <= numWorkers; i++ {
		wg.Add(1)
		go worker(i, jobs, results, &wg)
	}

	// 2. Send the tasks.
	for i := 1; i <= numJobs; i++ {
		jobs <- Job{ID: i}
	}
	close(jobs) // Close the jobs channel: no more tasks are coming.

	// 3. Wait for all workers in another goroutine, then close results.
	go func() {
		wg.Wait()
		close(results)
	}()

	// 4. Collect the results.
	successful := 0
	failed := 0
	for result := range results {
		if result.Success {
			successful++
		} else {
			failed++
		}
		// Optional: print or log each result.
		// fmt.Printf("Job %d: %s\n", result.JobID, result.Message)
	}

	fmt.Printf("Processing complete. Success: %d, Failed: %d\n", successful, failed)
}
```
Key points
- jobs and results are buffered channels: jobs := make(chan Job, numJobs) buffers the channel so sending tasks never blocks. Unbuffered channels also work, but then each send blocks until some worker is ready to receive.
- After close(jobs), range jobs exits automatically: every worker's for job := range jobs loop ends naturally once the closed channel has been drained.
- sync.WaitGroup waits for all workers to exit: each worker calls wg.Done() (via defer) when it finishes.
- results must be closed as well: a helper goroutine closes results after wg.Wait() returns; otherwise for range results in the main goroutine would block forever (a deadlock).
Scalable optimizations
- Error retry mechanism: add retry logic inside the worker
- Timeout control: use context.WithTimeout
- Dynamic worker count: adjust the number of workers according to load
- Priority task queues: use multiple channels or a priority queue
Applicable scenarios
- Batch processing of files or database records
- Concurrent crawling of web pages
- Email Send Queue
- Image/video transcoding
- Log processing
Basically that's it. Go's channels and goroutines make the worker pool very simple and intuitive to implement. The key is understanding when to close channels and how to use WaitGroup, so as to avoid deadlocks and goroutine leaks.
The above is the detailed content of go by example worker pools. For more information, please follow other related articles on the PHP Chinese website!
