
Conversation

macborowy

Challenge 4 Solution

Submitted by: @macborowy
Challenge: Challenge 4

Description

This PR contains my solution for Challenge 4.

Changes

  • Added solution file to challenge-4/submissions/macborowy/solution-template.go

Testing

  • Solution passes all test cases
  • Code follows Go best practices

Thank you for reviewing my submission! 🚀


coderabbitai bot commented Oct 12, 2025

Walkthrough

Adds a concurrent BFS query processor using a worker pool. Introduces an exported ConcurrentBFSQueries that distributes multiple start-node BFS traversals across goroutines, coordinating via channels, WaitGroup, and a mutex-protected results map. Includes an internal bfs helper and a main placeholder for local testing.
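
For orientation, a minimal self-contained sketch of this worker-pool pattern follows. It is not the submitted file: the example graph, main, and identifier names are illustrative, and it already applies two suggestions from the review below (a numWorkers guard and a lock scoped to the map write).

package main

import (
	"fmt"
	"sync"
)

// bfs returns the order in which nodes are visited starting from start.
func bfs(graph map[int][]int, start int) []int {
	queue := []int{start}
	visited := map[int]bool{start: true}
	var order []int
	for len(queue) > 0 {
		u := queue[0]
		queue = queue[1:]
		order = append(order, u)
		for _, v := range graph[u] {
			if !visited[v] {
				visited[v] = true
				queue = append(queue, v)
			}
		}
	}
	return order
}

// ConcurrentBFSQueries runs one BFS per query, fanned out over numWorkers goroutines.
func ConcurrentBFSQueries(graph map[int][]int, queries []int, numWorkers int) map[int][]int {
	if numWorkers <= 0 {
		numWorkers = 1 // never spawn zero workers
	}
	tasks := make(chan int, len(queries))
	for _, q := range queries {
		tasks <- q
	}
	close(tasks) // workers exit their range loop once the buffer drains

	results := make(map[int][]int, len(queries))
	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for start := range tasks {
				order := bfs(graph, start) // expensive work outside the lock
				mu.Lock()
				results[start] = order // lock only the shared map write
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return results
}

func main() {
	graph := map[int][]int{0: {1, 2}, 1: {3}, 2: {3}, 3: nil}
	fmt.Println(ConcurrentBFSQueries(graph, []int{0, 3}, 2))
}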

Changes

Cohort / File(s): Concurrent BFS worker pool — challenge-4/submissions/macborowy/solution-template.go
Summary: Added ConcurrentBFSQueries(graph, queries, numWorkers), which executes per-query bfs concurrently via a worker pool; uses a buffered channel for tasks, a WaitGroup for lifecycle, and a mutex-guarded map for results; includes a bfs helper returning visitation order and a main stub for local tests.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant C as Caller
  participant CBQ as ConcurrentBFSQueries
  participant Q as QueryChan
  participant W1 as Worker 1
  participant Wn as Worker n
  participant R as Result Map

  C->>CBQ: call ConcurrentBFSQueries(graph, queries, numWorkers)
  activate CBQ
  CBQ->>Q: preload queries
  loop spawn workers
    CBQ->>W1: start goroutine
    CBQ->>Wn: start goroutine
  end
  par workers process
    W1->>Q: dequeue query
    alt got query
      W1->>W1: bfs(graph, start)
      W1->>R: write result (mutex)
    else no query
      W1-->>CBQ: done
    end
    Wn->>Q: dequeue query
    alt got query
      Wn->>Wn: bfs(graph, start)
      Wn->>R: write result (mutex)
    else no query
      Wn-->>CBQ: done
    end
  end
  CBQ-->>C: return results map
  deactivate CBQ

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 33.33%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: the title clearly indicates that a solution for Challenge 4 is being added, which aligns with the main change of introducing the solution file, though it includes the author’s handle rather than describing implementation details such as the concurrent BFS function.
  • Description Check — ✅ Passed: the pull request description directly relates to the changeset; it states that the Challenge 4 solution is introduced in the specified file, outlines what was added, and confirms testing and adherence to Go best practices.



coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
challenge-4/submissions/macborowy/solution-template.go (2)

16-18: Use range for more idiomatic Go.

The loop can be simplified using range over the queries slice.

-	for i := 0; i < len(queries); i++ {
-		queue <- queries[i]
+	for _, query := range queries {
+		queue <- query
 	}

43-43: Consider adding a nil check for the graph parameter.

While not strictly required by the challenge specification, an explicit nil guard makes the degenerate case clear and the function more robust.

 func bfs(graph map[int][]int, start int) []int {
+	if graph == nil {
+		return []int{start}
+	}
 	queue := []int{start}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2868df and f839731.

📒 Files selected for processing (1)
  • challenge-4/submissions/macborowy/solution-template.go (1 hunks)
🔇 Additional comments (2)
challenge-4/submissions/macborowy/solution-template.go (2)

65-67: LGTM!

The empty main function with a comment for optional local testing is appropriate for a challenge submission.


50-51: Inefficient queue dequeue operation.

Dequeuing with queue[0] and queue = queue[1:] re-slices in O(1), but it keeps already-dequeued nodes referenced in the backing array and shrinks the slice's remaining capacity, so append may reallocate and copy more often than necessary. On large graphs this adds avoidable memory and copying overhead.

Consider using a ring buffer approach with head/tail indices:

 func bfs(graph map[int][]int, start int) []int {
 	queue := []int{start}
+	head := 0
 	visited := make(map[int]bool)
 	visited[start] = true
 	var order []int
 
-	for len(queue) > 0 {
-		u := queue[0]
-		queue = queue[1:]
+	for head < len(queue) {
+		u := queue[head]
+		head++
 		order = append(order, u)

Likely an incorrect or invalid review comment.

Comment on lines +24 to +25
for i := 0; i < numWorkers; i++ {
wg.Add(1)

⚠️ Potential issue | 🟠 Major

Validate numWorkers to avoid silently dropping queries.

If numWorkers is 0 or negative, the worker loop body never runs: no goroutines are spawned and wg.Add is never called, so wg.Wait() on line 38 returns immediately and the function returns an empty result map without processing any query.

Add validation at the start of the function:

 func ConcurrentBFSQueries(graph map[int][]int, queries []int, numWorkers int) map[int][]int {
+	if numWorkers <= 0 {
+		numWorkers = 1
+	}
 	queue := make(chan int, len(queries))
🤖 Prompt for AI Agents
In challenge-4/submissions/macborowy/solution-template.go around lines 24-25
(with wg.Wait() at line 38), a non-positive numWorkers means no worker
goroutines are spawned and the preloaded queries are never processed, so the
function returns an empty result map; add validation at the start of the
function to handle non-positive numWorkers by either returning an error
immediately or normalizing it (e.g., set numWorkers = 1) before the loop so at
least one worker drains the queue.

Comment on lines +30 to +32
mutex.Lock()
result[elem] = bfs(graph, elem)
mutex.Unlock()

⚠️ Potential issue | 🔴 Critical

Critical: Move BFS computation outside the mutex lock.

The mutex currently protects both the expensive BFS computation and the result map write. This serializes all BFS work across goroutines, completely negating the performance benefits of concurrency. The mutex should only protect the shared result map write.

Apply this diff to fix the issue:

 			for elem := range queue {
+				bfsResult := bfs(graph, elem)
 				mutex.Lock()
-				result[elem] = bfs(graph, elem)
+				result[elem] = bfsResult
 				mutex.Unlock()
 			}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-				mutex.Lock()
-				result[elem] = bfs(graph, elem)
-				mutex.Unlock()
+			for elem := range queue {
+				bfsResult := bfs(graph, elem)
+				mutex.Lock()
+				result[elem] = bfsResult
+				mutex.Unlock()
+			}
🤖 Prompt for AI Agents
In challenge-4/submissions/macborowy/solution-template.go around lines 30 to 32,
the code currently calls bfs(graph, elem) while holding the mutex which
serializes expensive work; compute the BFS result before acquiring the mutex,
then acquire the mutex only to write the computed value into result (assign and
release). This ensures the expensive bfs call runs without holding the lock and
the mutex only protects the shared map write.
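
As a general illustration of the design point (separate from the submitted code; expensive here stands in for a BFS traversal), holding the lock across the work serializes the goroutines, while locking only around the shared write lets the work overlap:

package main

import (
	"fmt"
	"sync"
	"time"
)

// expensive simulates per-query work such as a BFS traversal.
func expensive(n int) int {
	time.Sleep(50 * time.Millisecond)
	return n * n
}

// run executes four tasks concurrently and reports the wall-clock time.
func run(lockAroundWork bool) time.Duration {
	results := make(map[int]int)
	var mu sync.Mutex
	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if lockAroundWork {
				mu.Lock()
				results[i] = expensive(i) // work runs under the lock: serialized
				mu.Unlock()
			} else {
				v := expensive(i) // work runs in parallel
				mu.Lock()
				results[i] = v // only the shared write is locked
				mu.Unlock()
			}
		}(i)
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	fmt.Println("lock around work: ", run(true))  // roughly 4 × 50ms
	fmt.Println("lock around write:", run(false)) // roughly 50ms
}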
