Add solution for Challenge 4 by macborowy #575
Conversation
Walkthrough

Adds a concurrent BFS query processor using a worker pool. Introduces an exported ConcurrentBFSQueries that distributes multiple start-node BFS traversals across goroutines, coordinating via channels, a WaitGroup, and a mutex-protected results map. Includes an internal bfs helper and a main placeholder for local testing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant C as Caller
    participant CBQ as ConcurrentBFSQueries
    participant Q as QueryChan
    participant W1 as Worker 1
    participant Wn as Worker n
    participant R as Result Map
    C->>CBQ: call ConcurrentBFSQueries(graph, queries, numWorkers)
    activate CBQ
    CBQ->>Q: preload queries
    loop spawn workers
        CBQ->>W1: start goroutine
        CBQ->>Wn: start goroutine
    end
    par workers process
        W1->>Q: dequeue query
        alt got query
            W1->>W1: bfs(graph, start)
            W1->>R: write result (mutex)
        else no query
            W1-->>CBQ: done
        end
        Wn->>Q: dequeue query
        alt got query
            Wn->>Wn: bfs(graph, start)
            Wn->>R: write result (mutex)
        else no query
            Wn-->>CBQ: done
        end
    end
    CBQ-->>C: return results map
    deactivate CBQ
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 2
🧹 Nitpick comments (2)
challenge-4/submissions/macborowy/solution-template.go (2)
Lines 16-18: Use range for more idiomatic Go.

The loop can be simplified by ranging over the queries slice:

```diff
-	for i := 0; i < len(queries); i++ {
-		queue <- queries[i]
+	for _, query := range queries {
+		queue <- query
 	}
```
Line 43: Consider adding a nil check for the graph parameter.

While not strictly required depending on the challenge specification, a nil check would prevent potential panics and make the function more robust:

```diff
 func bfs(graph map[int][]int, start int) []int {
+	if graph == nil {
+		return []int{start}
+	}
 	queue := []int{start}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
challenge-4/submissions/macborowy/solution-template.go (1 hunks)
🔇 Additional comments (2)
challenge-4/submissions/macborowy/solution-template.go (2)
Lines 65-67: LGTM!

The empty main function with a comment for optional local testing is appropriate for a challenge submission.
Lines 50-51: Inefficient queue dequeue operation.

Using `queue[0]` and `queue = queue[1:]` for dequeuing is O(n) per operation because it requires slice reallocation or memory copying. For a graph with n nodes, this results in O(n²) time complexity just for queue operations, significantly degrading performance on large graphs.

Consider using a ring buffer approach with head/tail indices:

```diff
 func bfs(graph map[int][]int, start int) []int {
 	queue := []int{start}
+	head := 0
 	visited := make(map[int]bool)
 	visited[start] = true
 	var order []int
-	for len(queue) > 0 {
-		u := queue[0]
-		queue = queue[1:]
+	for head < len(queue) {
+		u := queue[head]
+		head++
 		order = append(order, u)
```

Likely an incorrect or invalid review comment.
```go
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
```
Validate numWorkers to prevent an indefinite hang.

If `numWorkers` is 0 or negative, no workers will be spawned, but `wg.Wait()` on line 38 will wait indefinitely since no goroutines call `wg.Done()`. This causes the function to hang.

Add validation at the start of the function:

```diff
 func ConcurrentBFSQueries(graph map[int][]int, queries []int, numWorkers int) map[int][]int {
+	if numWorkers <= 0 {
+		numWorkers = 1
+	}
 	queue := make(chan int, len(queries))
```
🤖 Prompt for AI Agents
In challenge-4/submissions/macborowy/solution-template.go around lines 24-25
(with wg.Wait() at line 38), the code may call wg.Wait() while spawning zero
workers if numWorkers <= 0, causing an indefinite hang; add validation at the
start of the function to handle non-positive numWorkers by either returning an
error immediately or normalizing it (e.g., set numWorkers = 1) before the loop
so at least one goroutine calls wg.Done(), and ensure any early return does not
leave the caller waiting.
```go
			mutex.Lock()
			result[elem] = bfs(graph, elem)
			mutex.Unlock()
```
Critical: Move the BFS computation outside the mutex lock.

The mutex currently protects both the expensive BFS computation and the result map write. This serializes all BFS work across goroutines, completely negating the performance benefits of concurrency. The mutex should only protect the shared result map write.

Apply this diff to fix the issue:

```diff
 	for elem := range queue {
+		bfsResult := bfs(graph, elem)
 		mutex.Lock()
-		result[elem] = bfs(graph, elem)
+		result[elem] = bfsResult
 		mutex.Unlock()
 	}
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
	for elem := range queue {
		bfsResult := bfs(graph, elem)
		mutex.Lock()
		result[elem] = bfsResult
		mutex.Unlock()
	}
```
🤖 Prompt for AI Agents
In challenge-4/submissions/macborowy/solution-template.go around lines 30 to 32,
the code currently calls bfs(graph, elem) while holding the mutex which
serializes expensive work; compute the BFS result before acquiring the mutex,
then acquire the mutex only to write the computed value into result (assign and
release). This ensures the expensive bfs call runs without holding the lock and
the mutex only protects the shared map write.
Challenge 4 Solution
Submitted by: @macborowy
Challenge: Challenge 4
Description
This PR contains my solution for Challenge 4.
Changes
challenge-4/submissions/macborowy/solution-template.go
Testing
Thank you for reviewing my submission! 🚀