Use a faster priority queue implementation #8
Conversation
Depends on Graphviz (http://www.graphviz.org/) and the BProfile package (pip install bprofile).
This will make it easier to test different priority queue implementations.
Based on runs of profile.py, the heapq implementation is significantly faster.
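The thread below refers to a shared interface with methods like minimum, removeminimum, insert and decreasekey. A minimal sketch of what such an abstract PriorityQueue could look like (the actual class in the PR may declare a different or larger set of methods):

```python
from abc import ABC, abstractmethod

class PriorityQueue(ABC):
    """Sketch of a common interface for the PQ implementations; illustrative only."""

    @abstractmethod
    def __len__(self):
        """Number of entries, so solvers can test for emptiness."""

    @abstractmethod
    def insert(self, key, value):
        """Add value with priority key."""

    @abstractmethod
    def minimum(self):
        """Return (without removing) the entry with the smallest key."""

    @abstractmethod
    def removeminimum(self):
        """Remove the entry with the smallest key."""

    @abstractmethod
    def decreasekey(self, value, newkey):
        """Lower the key associated with value."""
```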
While testing more mazes, I found a minor bug with the new priority queue. Please wait to merge until I've submitted a fix.
This is interesting. I ignored the HeapPQ because I thought it didn't have a "decreasekey" function. It seems you've implemented one through a remove + add operation. It certainly looks faster. Wikipedia suggests that a fib heap is the optimal PQ structure, but I strongly suspect that depends on the way it's being used, and it also relies on my implementation being optimised, which it probably isn't!
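The remove + add decreasekey can be sketched on top of heapq roughly as below. This is a lazy-deletion sketch under my own assumptions about the entry layout (a [key, count, value] list, a counter tie-breaker and a sentinel), not necessarily how the PR's HeapPQ is actually written:

```python
import heapq
import itertools

class HeapPQ:
    # Illustrative heapq-backed queue; decreasekey is done as "remove + add".

    def __init__(self):
        self.heap = []                     # entries are [key, count, value]
        self.entries = {}                  # value -> its live heap entry
        self.counter = itertools.count()   # tie-breaker so values never get compared
        self._removed = object()           # sentinel marking a lazily removed entry

    def __len__(self):
        return len(self.entries)

    def insert(self, key, value):
        entry = [key, next(self.counter), value]
        self.entries[value] = entry
        heapq.heappush(self.heap, entry)

    def decreasekey(self, value, newkey):
        # "remove + add": mark the old entry dead in place, then push a fresh one
        old = self.entries.pop(value)
        old[2] = self._removed
        self.insert(newkey, value)

    def minimum(self):
        self._prune()
        key, _, value = self.heap[0]
        return key, value

    def removeminimum(self):
        self._prune()
        entry = heapq.heappop(self.heap)
        del self.entries[entry[2]]

    def _prune(self):
        # drop entries invalidated by decreasekey before peeking or popping
        while self.heap and self.heap[0][2] is self._removed:
            heapq.heappop(self.heap)
```

With lazy deletion the old entry is never searched for in the heap; it just costs one extra heappop when it eventually reaches the top.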
Mazes with multiple solutions were failing when A-Star and Dijkstra
used the new HeapPQ implementation. Those issues are now
resolved.
Based on the current profile.py, FibPQ and HeapPQ are now neck and neck.
FibPQ:
profile.py 228.22s user 2.53s system 100% cpu 3:50.53 total
HeapPQ:
profile.py 228.36s user 3.66s system 100% cpu 3:51.86 total
There's still room for further optimization, like refactoring so client code doesn't call `unvisited.minimum` to get a copy of the minimum entry and then immediately call `unvisited.removeminimum`. If `removeminimum` were changed to return the removed entry, redundant `minimum`, `remove`, and `insert` calls could be eliminated. Based on the profiler output images, HeapPQ's bottleneck is by and large `heappop` and `heappush`, so this change could be a major speed improvement.
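For illustration, here is roughly what that refactor looks like from the caller's side. SimpleHeapPQ and the toy loop below are hypothetical stand-ins, not the repo's astar.py or dijkstra.py:

```python
import heapq
import itertools

class SimpleHeapPQ:
    # Stripped-down sketch (no decreasekey) whose removeminimum returns the
    # entry it removes, so callers no longer need a separate minimum() call.

    def __init__(self):
        self.heap = []
        self.counter = itertools.count()   # tie-breaker so values never get compared

    def __len__(self):
        return len(self.heap)

    def insert(self, key, value):
        heapq.heappush(self.heap, (key, next(self.counter), value))

    def removeminimum(self):
        key, _, value = heapq.heappop(self.heap)
        return key, value                  # hand back the removed entry directly

# Hypothetical solver loop:
unvisited = SimpleHeapPQ()
unvisited.insert(3, "b")
unvisited.insert(1, "a")
while len(unvisited) > 0:
    distance, node = unvisited.removeminimum()   # was: minimum() then removeminimum()
    print(distance, node)
```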
The new priority queue, QueuePQ, is based on Python's
Queue.PriorityQueue. Underneath it's also implemented using heapq, but
adds synchronization primitives. This makes it slower than HeapPQ;
however, the synchronization features may be desirable in some
contexts.
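A sketch of what such a wrapper could look like, using Python 3's queue module (Queue in Python 2); the real QueuePQ may differ:

```python
import itertools
import queue   # named Queue in Python 2

class QueuePQ:
    # Illustrative wrapper around queue.PriorityQueue, which itself uses heapq
    # plus a mutex and condition variables for thread safety.

    def __init__(self):
        self.q = queue.PriorityQueue()
        self.counter = itertools.count()   # tie-breaker so values never get compared

    def __len__(self):
        return self.q.qsize()

    def insert(self, key, value):
        self.q.put((key, next(self.counter), value))   # takes the queue's lock

    def removeminimum(self):
        key, _, value = self.q.get()                   # also takes the lock
        return key, value
```

The per-call locking is pure overhead for a single-threaded solver, which is consistent with it benchmarking slower than HeapPQ.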
I fixed the bug and profiled more extensively on other inputs. As for the initial gains I saw over the Fibonacci heap: as I understand it, Fibonacci heaps are theoretically ideal, but not necessarily in practice. Binary heaps, like the one behind heapq, are simpler and have much lower constant-factor overhead.
Rather than getting the minimum element and then calling
removeminimum, just have removeminimum return the removed
element. This gives a significant speed improvement to all priority
queue implementations. HeapPQ's relative gain in performance exceeds
FibPQ's, so it's now the default.
FibPQ:
profile.py 203.06s user 3.63s system 100% cpu 3:26.50 total
HeapPQ:
profile.py 130.79s user 2.84s system 100% cpu 2:13.50 total
The latest change is a nice speed improvement for all priority queues. It includes changing the default from FibPQ to HeapPQ.
Also, having thought about it, even if the fib heap decreasekey is very efficient, I'm not actually sure it's called on a perfect maze. There's never an alternative path to a node, so we're never in a situation where it's needed. It's pretty rare even in the braid mazes. A pq that focuses on speed of insert and removemin is likely to be faster.


A priority queue based on Python's heapq is significantly faster than FibHeap. The PriorityQueue abstract class establishes a standard interface used by both implementations, and minimal changes were required to astar.py and dijkstra.py to use it.

Performance analysis

FibPQ: [profiler output image]
HeapPQ: [profiler output image]