Why explore a data structure in a website about concurrency?
Because timers/schedulers rely on them to be efficient.
In this article:
- Why timers/schedulers require a binary heap.
- How they work internally.
- How to implement a working one from absolute scratch.
The working thread of a scheduler (i.e. a timer) has one responsibility: to run the tasks submitted to it.
To do that though, that thread needs to find the next task due to run. What if millions of tasks have been submitted? If it needs to sort millions of tasks to find the next, the working thread would spend more time sorting tasks than running them.
When we developed a Java Timer from scratch in the previous article, we required a data structure to hold the tasks to run. This data structure acts as a buffer between the client threads submitting (i.e. inserting) new tasks and the working thread responsible for removing tasks and running them.
Shamefully, we ended up using an existing `java.util.PriorityQueue`. There was too much to talk about as it was. Now we atone.
Originally, the main concern was thread safety: the data structure needs to handle multiple threads mutating it concurrently. And so, we overlooked what kind of data structure was suitable from an efficiency point of view.
Imagine a scenario where clients submit millions of tasks for execution. These tasks are not sorted. The order in which they are submitted is not the order by which they are to be executed:
```java
// pseudocode
Timer timer;
timer.schedule(<task-1>, 5.seconds)
timer.schedule(<task-2>, 6.seconds)
timer.schedule(<task-3>, 1.seconds)
// expected order of execution: [task-3, task-1, task-2]
```
On the receiving end, the worker thread needs to correctly pick the task to execute next. Sorting through millions of elements each time means it would spend more time sorting than executing. This would inevitably lead to tasks executing past their due date because the worker thread was busy sorting.
Alternatively, you could move the responsibility of sorting to the calling thread, but that would mean the client calls would block for an unreasonable amount of time.
| | insert | removeMin | who pays |
|---|---|---|---|
| sorted “list” | O(n) | O(1) | client threads |
| unsorted list | O(1) | O(n) | worker thread |
For reference, here is the time it takes to sort an array list in Java:
n | sort time (ms) |
---|---|
1000 | 0.2 |
10,000 | 1 |
100,000 | 14 |
1,000,000 | 206 |
10,000,000 | 357 |
As expected, sorting time grows roughly linearly with the number of elements (sorting is O(n log₂ n)). It becomes infeasible to use a scheduler backed by an array list when many tasks are expected.
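If you want to reproduce numbers of this kind, here is a minimal sketch (sizes and class name are illustrative; a rigorous measurement would use a harness such as JMH to account for JIT warm-up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Random;

public class SortTiming {
    public static void main(String[] args) {
        int n = 1_000_000; // vary this to reproduce each row of the table
        Random random = new Random(42);
        ArrayList<Integer> list = new ArrayList<>(n);
        for (int i = 0; i < n; i++) list.add(random.nextInt());

        long start = System.nanoTime();
        Collections.sort(list); // O(n log n)
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sorted " + n + " elements in " + elapsedMs + " ms");
    }
}
```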
Overview
With the scheduler diagram in mind, there are only two important operations the data structure has to perform efficiently:
- Inserting an element (any element)
- Removing the least element
All other operations are irrelevant for schedulers. Namely, it is irrelevant:
- How fast we can sort all the elements. Critically, notice this is different from finding the least element.
- How fast we can determine if a particular element exists.
- How fast we can find the last element inserted.
Important
Schedulers do 2 operations on the underlying data structure: they insert new elements and remove the least element. Therefore, they can sacrifice performance on all other operations.
Given the above, two data structures stand out: Binary Heaps and Binary Search Trees (BSTs):
| | Binary Heap | Binary Search Tree |
|---|---|---|
| insert (average) | O(1) 🔥 | O(log₂ n) |
| insert (worst) | O(log₂ n) | O(log₂ n) |
| findMin | O(1) 🔥 | O(log₂ n) |
| findAny | O(n) ⚠️ | O(log₂ n) |
| removeMin | O(log₂ n) | O(log₂ n) |
| comments | always balanced 🔥 | requires explicit balancing |
Both do a good job removing the least element: O(log₂ n). Additionally, Binary Search Trees are good at fast lookup and removal of any element. But schedulers are not interested in finding just any element; they only need the next one. That is why the binary heap wins: it provides constant-time insertions on average.
Additionally, binary heaps are simpler than BSTs, as the former are always balanced trees. Vanilla BSTs can degenerate, losing their performance. They require balancing algorithms, which introduce overhead and add complexity.
Fundamentals
Binary heaps come in two flavours. The figure below illustrates a minHeap, whereby the least element is at the top. There is a maxHeap version, which maintains the greatest element at the top. They are otherwise identical.
This visual representation is extremely useful. Most arguments are explained and understood by analysing it. Even scientific papers on the topic use them. So have a good look:
Note
The diagram exemplifies a binary heap where the elements are integers. This is for simplicity. A heap can contain any type of element that can be compared. For schedulers, the elements will be tuples of a task to run (in the JVM world, a `Runnable` instance) and a timestamp for when that task should run. In that case, the comparison of the tuples is determined solely by the timestamp.
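As a minimal sketch of what such a tuple could look like (the `ScheduledTask` record and its field names are illustrative, not taken from the original Timer article):

```java
import java.util.Comparator;

// A task paired with the moment (epoch millis) at which it should run.
record ScheduledTask(Runnable task, long dueAtMillis) {

    // Heap ordering is determined solely by the timestamp; the Runnable is ignored.
    static final Comparator<ScheduledTask> BY_DUE_TIME =
            Comparator.comparingLong(ScheduledTask::dueAtMillis);
}
```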
There are only two properties for a binary heap:
- The value of each node must be less than or equal to the values at both its children.
- All nodes have both children, except possibly the nodes on the bottom level, in which case that level must be filled left to right.
Corollary of the 1st property
The least element is always the first (i.e. at the top). As we will see, that is why we can find the minimum so fast.
Corollary of the 2nd property
The binary heap is always balanced. This is unlike Binary Search Trees, which can degenerate and lose all their advantages.
Key observations
From the 2nd property, it follows that the number of elements of a tree deterministically determines both 1) its height, and 2) how many elements exist on the last level. The reverse is also true: the height and the number of elements on the last level uniquely determine the total number of elements.
The most important observation is that the total number of elements within a binary heap grows exponentially fast with its height.
Because every node on a particular level has 2 children, the number of elements doubles with each level. If some level has `p` elements, then the next has `2*p`, the one after `2*2*p`, the next after that `2*2*2*p`, and so on.
Hopefully this is enough to convince you that the number of elements on each level is given by $2^h$, where $h$ is the height of that level, and where the first level is $h = 0$.
| height | elements |
|---|---|
| 0 | $2^0 = 1$ |
| 1 | $2^1 = 2$ |
| 2 | $2^2 = 4$ |
| 10 | $2^{10} = 1024$ |
| 15 | $2^{15} = 32{,}768$ |
| 20 | $2^{20} = 1{,}048{,}576$ |
| 30 | $2^{30} = 1{,}073{,}741{,}824$ |
Important
The number of elements in a tree grows exponentially with its height.
The brilliance of the algorithm
The core idea of the data structure is to have the insert/deleteMin operations grow linearly with the height of the tree, which means they grow logarithmically with the number of elements.
Operations
As stressed before, the 2 operations binary heaps are good at, which coincide with the only 2 operations schedulers are interested in, are:
- Insertion of an element (any element)
- Removal of the least element
This is achieved if the operations scale linearly with the height of the tree, which means scaling logarithmically with the number of elements.
Insert
In order to respect the second property, inserting a new element implies the creation of a new node/element in the next available (i.e. leftmost) “slot” of the last level; or, if that level is already full, in the leftmost slot of a new level.
At the same time, in order to respect the first property, the above does not mean the new element will occupy that slot. We might have to re-shuffle elements.
By way of example, in the binary heap above, the next slot is the one to the right of element `76`, which will be the right child of element `60`.
However, if the element to be inserted is `1`, then it cannot stay there, as that would violate the 1st property given that `1 < 60`. We have to re-organize the positions of the elements so that the 1st property holds.
This can be achieved by comparing the child with its parent, and, if necessary, swapping their positions. The process – called percolation – is repeated until the 1st property holds across the entire tree.
Critically, notice that the number of comparisons done is only 4, which is equal to the height of the tree.
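Tracing that example step by step (values taken from the example heap in the figure; each line is one comparison):

```text
insert 1:
  appended as right child of 60   ->  1 < 60 : swap
  now a child of 40               ->  1 < 40 : swap
  now a child of 5                ->  1 < 5  : swap
  now a child of 2 (the root)     ->  1 < 2  : swap
  1 is the new root: 4 comparisons, one per level
```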
Algorithm
1. The new element is initially appended to the end of the tree.
2. Compare the new element with its parent, and then:
   - If the element is lower, swap positions with the parent and repeat step 2 from the current position.
   - If the element is greater or equal, stop.
In the “worst case” scenario, the new element inserted is a new minimum, and therefore we must “pop it up” from the lowest level to the very first level. Each level change costs one comparison. Therefore, in the worst case, we must make as many comparisons as there are levels. Because the number of levels is equal to $\log_2 N$, the number of operations grows only logarithmically with the total number of elements of the tree.
Where is the magic? The magic lies in the fact that every time we jump one level, we are ignoring half of the elements!
O(1) – on average
(skip to DeleteMin if not interested)
It gets better.
We need to make $\log_2 N$ operations in the worst case. In the best case, we would need only one – the comparison with the parent. What about on average? Mathematics tells us that on average we need to make only ~2 operations. That is, the average number of operations required for an insert is constant.
We won’t go into the formal explanation for it. The intuitive argument goes like this: every extra level adds twice as many elements as the level before it. Furthermore, the number of elements on the last level is given by $2^h$, and, as we prove in the Appendix, the number of all elements above that level is given by $2^h - 1$. In other words, at least half the elements of the binary heap are at the very bottom!
For example, our binary heap has 8 elements on level $h = 3$, which is greater than the 7 elements above that level.
The argument is then: if you insert elements at random, there is a 50% probability that a new element “lands” on the last level, 25% that it lands on level $h-1$, 12.5% on level $h-2$, 6.25% on level $h-3$, and so forth. So, on average, it is very likely the random element stays at the bottom levels, and therefore does not need to be propagated up the tree.
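A back-of-the-envelope version of that argument: if a new element stops at the last level with probability 1/2 after 1 comparison, one level up with probability 1/4 after 2 comparisons, and so on, the expected number of comparisons is bounded by

$$\sum_{k=1}^{\infty} \frac{k}{2^k} = 2$$

This is only a heuristic, not a rigorous derivation.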
The proof of this is not trivial. You can search on the internet for scientific articles deriving this rigorously.
DeleteMin
We don’t want to delete just any element. We want to delete the least element, the minimum, which we know is at the top of the heap.
Deletion is similar to insertion. But it goes from the top of the tree to the bottom.
Again, to respect the 2nd property, even though we delete the first element, that position (i.e. “slot”) of the tree remains; only its content changes. The “slot” which ceases to exist is that of the last element of the heap.
Algorithm
1. The first element of the heap is removed.
2. The last element of the heap is moved into the now vacant 1st “slot”.
3. To respect the 1st property, the new first element is compared with its children:
   - If any of its children is lower, swap positions with the lowest child and repeat step 3 from the current position.
   - If the children’s values are higher or equal, stop.
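Tracing this on the example heap (after `2` is removed, the last element `76` moves to the top and sinks):

```text
deleteMin:
  76 vs children (5, 6)    ->  5 is the lowest, 76 > 5  : swap
  76 vs children (40, 7)   ->  7 is the lowest, 76 > 7  : swap
  76 vs children (12, 14)  -> 12 is the lowest, 76 > 12 : swap
  no children left         -> stop
```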
Notice again the core idea: in the worst-case scenario, after moving the last element to the top, that element will sink all the way down to the bottom of the tree. So the number of comparisons depends on the height of the tree, not on the total number of elements.
Implementation
In the original article we already focused on concurrency. Therefore, to keep things simple, let us focus on the core implementation of the minHeap without the complexities of thread safety.
The most common approach is to use an array. Recall that arrays are conceptually contiguous memory locations whereby each element can be accessed by index in constant time.
The challenge is how to translate the tree structure, and the abstract insert/deleteMin algorithms onto an array.
```java
import java.util.Comparator;

// For conciseness, we overlook important points, such as: 1) checking for nulls,
// 2) exception handling, and 3) growing/shrinking the backing array as required.
public final class BinaryHeap<E> {

    private static final int DEFAULT_CAPACITY = 1024;

    private final Comparator<? super E> comparator;
    private E[] queue; // zero-based
    private int numberElem;

    @SuppressWarnings("unchecked")
    public BinaryHeap(Comparator<? super E> comparator) {
        this.comparator = comparator;
        this.numberElem = 0;
        queue = (E[]) new Object[DEFAULT_CAPACITY];
    }

    public E peekMin() {
        return queue[0];
    }

    // true if the element at index i is strictly lower than the element at index j
    private boolean lower(int i, int j) {
        return comparator.compare(queue[i], queue[j]) < 0;
    }

    private void swap(int i, int j) {
        E savedI = queue[i];
        queue[i] = queue[j];
        queue[j] = savedI;
    }

    private int indexChildL(int index) {
        return 2 * index + 1;
    }

    private int indexChildR(int index) {
        return 2 * index + 2;
    }

    private int indexParent(int index) {
        return (int) Math.ceil(index / 2.0) - 1;
    }
}
```
There are many ways you could index the elements into the array. One that works is to do it breadth-first: elements lower on the tree have higher indexes, and within each level, elements to the left come earlier in the array. For our example binary tree, you would have the following array: `[2, 5, 6, 40, 7, 8, 90, 45, 60, 12, 14, 75, 10, 91, 95, 99, 50, 76]`.
In this case, the first element of the array is the minimum and can be accessed with `queue[0]`. The last element of the tree is also the last element of the array.
This arrangement feels the most natural, but others would also be possible.
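To make the breadth-first layout concrete, here is how the levels of the example heap map onto segments of that array (level $h$ starts at index $2^h - 1$):

```text
level 0: indices [0]       -> {2}
level 1: indices [1..2]    -> {5, 6}
level 2: indices [3..6]    -> {40, 7, 8, 90}
level 3: indices [7..14]   -> {45, 60, 12, 14, 75, 10, 91, 95}
level 4: indices [15..17]  -> {99, 50, 76}   (partially filled, left to right)
```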
The critical thing is of course to be able to implement the algorithms underlying the `insert` and `deleteMin` operations. For that we need to be able to:
- Determine the first element of the heap. ✅ From earlier, the index is `0`.
- Determine the last element of the heap. ✅ From earlier, the index is `queue.size - 1`.
- Determine the index of the parent of a given element. 🤔 Required for the `insert` algorithm, whereby one iterates up the height of the tree, swapping elements as needed.
- Determine the index of the children of a given element. 🤔 Required for the `deleteMin` algorithm, whereby one iterates down the height of the tree, swapping elements as needed.
Most online resources, as well as authoritative books, avoid explaining the derivation and simply state the following:

Given an element at index $i$ of the binary heap, the following holds:

$$\text{indexParent}(i) = \left\lceil \frac{i}{2} \right\rceil - 1$$

$$\text{indexChildLeft}(i) = 2i + 1 \qquad \text{indexChildRight}(i) = 2i + 2$$

where the odd-looking, square-bracket-like symbol on the first expression is the mathematical notation for the ceiling function `Math.ceil(<some-double>)`.
We will take this as the truth. However, we can deduce these expressions on our own. Have a look at the Appendix if you are curious; there we explain the derivation step by step.
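As a quick sanity check (a throwaway sketch, not part of the heap class), we can verify that the parent formula inverts the two child formulas for the first few thousand indices:

```java
public class IndexFormulaCheck {
    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            int left = 2 * i + 1;
            int right = 2 * i + 2;
            // the parent of both children must be i itself
            if ((int) Math.ceil(left / 2.0) - 1 != i) throw new AssertionError("left child of " + i);
            if ((int) Math.ceil(right / 2.0) - 1 != i) throw new AssertionError("right child of " + i);
        }
        System.out.println("parent/child index formulas agree");
    }
}
```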
The algorithm is simple. The code snippets don’t require much explanation, aside from highlighting that this is not thread-safe.
Importantly, even if this code is only ever used from a single thread, you should attempt to make the elements immutable. You must not mutate the elements of the binary heap in a way that violates the heap invariants.
Insert implementation
```java
public final class BinaryHeap<E> {

    // constructor and parameters hidden
    // utility methods hidden

    public void insert(E elem) {
        queue[numberElem++] = elem; // append to the first free slot at the end
        percolateUp();
    }

    private void percolateUp() {
        int iC = numberElem - 1; // index of child: start at the newly appended element
        while (iC > 0) {
            var iP = indexParent(iC); // index of parent
            if (!lower(iC, iP)) break; // child >= parent: the 1st property holds, stop
            swap(iC, iP);
            iC = iP;
        }
    }
}
```
DeleteMin implementation
```java
import java.util.NoSuchElementException;

public final class BinaryHeap<E> {

    // constructor and parameters hidden
    // utility methods hidden

    public E deleteMin() {
        if (numberElem == 0) throw new NoSuchElementException("No elements");
        E elemToReturn = queue[0];
        queue[0] = queue[numberElem - 1]; // move the last element to the top
        queue[--numberElem] = null;       // clear the vacated slot for GC
        percolateDown();
        return elemToReturn;
    }

    private void percolateDown() {
        int iP = 0; // index of parent: start at the root
        while (indexChildL(iP) <= numberElem - 1) { // while a left child exists
            var iL = indexChildL(iP);
            var iR = indexChildR(iP);
            int iC; // index of the lowest child
            if (iR <= numberElem - 1 && lower(iR, iL)) iC = iR;
            else iC = iL;
            if (!lower(iC, iP)) break; // lowest child >= parent: the 1st property holds, stop
            swap(iC, iP);
            iP = iC;
        }
    }
}
```
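A quick usage sketch of the class above (combining the snippets into one `BinaryHeap`; the values are illustrative):

```java
import java.util.Comparator;

public class BinaryHeapDemo {
    public static void main(String[] args) {
        BinaryHeap<Integer> heap = new BinaryHeap<>(Comparator.naturalOrder());
        heap.insert(5);
        heap.insert(1);
        heap.insert(3);

        System.out.println(heap.peekMin());   // 1
        System.out.println(heap.deleteMin()); // 1
        System.out.println(heap.deleteMin()); // 3
        System.out.println(heap.deleteMin()); // 5
    }
}
```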
The benchmark results further down – which you can compare with the array-list sorting times above – show that the time it takes to `deleteMin` grows much slower (logarithmically) than the number of elements.
We also show the results of Java’s `PriorityQueue`, as that was the data structure used in the original article about schedulers, and because that structure is also implemented with a binary heap:
```java
/**
 * Priority queue represented as a balanced binary heap: the two
 * children of queue[n] are queue[2*n+1] and queue[2*(n+1)]. The
 * priority queue is ordered by comparator, or by the elements'
 * natural ordering, if comparator is null: For each node n in the
 * heap and each descendant d of n, n <= d. The element with the
 * lowest value is in queue[0], assuming the queue is nonempty.
 */
transient Object[] queue;
```
n | java.util.PriorityQueue | our BinaryHeap |
---|---|---|
1000 | 3.7 | 3.7 |
10,000 | 3.3 | 3.1 |
100,000 | 3.2 | 3.3 |
1,000,000 | 12.1 | 8.6 |
10,000,000 | 15.8 | 16.2 |
Appendix
Let’s temporarily introduce a coordinate system, whereby the position of an element in the binary heap is determined by:
- The height `h` of its level (starting at 0)
- Its index/position `p` within that level (starting at 1)

So, from above, the last element – `76` – has coordinates `(4, 3)`.
Index
Given an element with coordinates $(h, p)$, its index in the array is the sum of 1) the number of elements on all the levels above $h$ and 2) $p$, minus 3) the value 1, since the array is 0-indexed.

Since we established that the number of elements at height $h$ is $2^h$, the number of all elements above $h$ is:

$$\sum_{k=0}^{h-1} 2^k$$

From mathematics, the following equality on finite geometric series can be proven:

$$\sum_{k=0}^{h-1} 2^k = 2^h - 1$$

which leads to:

$$\text{index}(h, p) = (2^h - 1) + p - 1 = 2^h + p - 2$$

For example, element `75`, with coordinates `(3, 5)`, has index $2^3 + 5 - 2 = 11$.
Children coordinates
For the element $i$ we can say that $p_i - 1$ is the number of elements that come before it on its level. Because every such element has two children, and because the children of element $i$ must come after those children, it follows that the left child of $i$ has position $p = 2(p_i - 1) + 1$ within its level. The right child is that plus 1.

For example, the element with value `7` is at position 2 within its level. Therefore, its left and right children are at positions $2(2-1)+1 = 3$ and $2(2-1)+2 = 4$ within their level.
At the same time, it follows from the definition that the height of the children of an element is one greater than the height of that element: $h_{\text{child}(i)} = h_i + 1$.

Linking this with the earlier expression for the index in the array, we have:

$$\text{index}_{\text{left}(i)} = 2^{h_i + 1} + \big(2(p_i - 1) + 1\big) - 2$$

which can be simplified to:

$$\text{index}_{\text{left}(i)} = 2i + 1 \qquad \text{index}_{\text{right}(i)} = 2i + 2$$
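As a quick check against the example array from the Implementation section: the element `7` sits at index 4, so its children should be at indices

$$2 \cdot 4 + 1 = 9 \qquad 2 \cdot 4 + 2 = 10$$

which indeed hold the values `12` and `14`.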
Parent coordinates
The parent has height $h_{\text{parent}} = h - 1$. Coordinate $p_{\text{parent}}$ is harder to derive, but the reasoning is very similar to the child case above.

If the current element $i$ is at position $p$, then it is the $p$-th element on its level. This element, and the $p - 1$ before it, are each one of two children of the elements on the level above, and the parent of $i$ must be the last of all those parents. This should intuitively convince you that $p_{\text{parent}} = p/2$.

More precisely, it is $p_{\text{parent}} = \lceil p/2 \rceil$, to account for the case where $i$ is a left child and $p/2$ would not be a whole number.
Linking this with the earlier expression for the index in the array, we have:

$$\text{index}_{\text{parent}(i)} = 2^{h-1} + \left\lceil \frac{p}{2} \right\rceil - 2$$

where the odd-looking, square-bracket-like symbol is the mathematical notation for the ceiling function `Math.ceil(<some-double>)`.

This can be factorized into:

$$\text{index}_{\text{parent}(i)} = \left\lceil \frac{i}{2} \right\rceil - 1$$
For example, the element with value `75` has coordinates `(3, 5)` and index 11. Its parent is the node with the value `8`, which is at index $\lceil 11/2 \rceil - 1 = 5$.