Talk:Binomial heap


heap

The decrease operation is explained incorrectly! Of course the height of the tree is at most lg n, and thus you need to "move up" the element at most lg n times. But each single move cannot be done in constant time, since you need to update the parent field of all neighbors (= elements which have the same parent). Since there are in the worst case lg n of these, the worst-case running time for the whole operation is lg²n, which is not O(lg n) anymore. The link given in the article also describes the operation incorrectly.

What you have to do is remove the element from the heap, change the key, and insert it again. This implies that the remove operation is also explained incorrectly, since here it is based on decrease. Since the correct implementation of decrease is itself based on remove, remove cannot be implemented based on decrease. Instead you have to implement it directly, and this operation is not very easy to explain briefly. You have to split the binomial tree into binomial subtrees, such that each split needs constant time, the number of subtrees is at most lg n, and the element you want to remove is the root of one of these subtrees. Then you can use the same operation as delete_min to remove the element (just put all its children as new binomial trees into the tree list and remove the element) and merge all the new subtrees, similar to the union operation. --Coma 01:03, 3 Sep 2004 (UTC)

Thank you for reading the entry carefully. However, I think that the decrease key operation can indeed be implemented as described in the article. When you need to exchange a node and its parent, you do not exchange the whole node (which would require changing the pointers of all children, as you point out); you only exchange the key and any other information stored in the node. Thus the pointers remain valid. This is explicitly mentioned in the pseudocode on the webpage with the animation:
  exchange key[y] and key[z]
  if y and z have satellite fields, exchange them, too.
If you want to make this point clearer in the article, please go ahead. --Brona 20:09, 3 Sep 2004 (UTC)
Ok, that would work, but it is somewhat more complicated than you describe! Since you cannot find an object/key pair efficiently in a binomial heap, you have to return a pointer to the element which stores the object/key pair after inserting it. This pointer is needed to go directly to the object/key pair if you want to delete it or change the key. If you implement decrease-key as described, several object/key pairs are "suddenly" associated with another element, and you have to tell this to the surrounding program which uses this data structure. Of course this is not impossible, but it makes the binomial heap less user-friendly. Secondly, you can only decrease the key. The operation outlined above allows you to both decrease and increase the key. --Coma 11:07, 5 Sep 2004 (UTC)


The textbook by Cormen, Leiserson & Rivest uses the method described in the article whereas the original paper by Vuillemin uses the method you describe. Vuillemin's method may be more desirable in actual implementation, but it is harder to explain (and is less similar to the more familiar binary heap). Since this is an encyclopedia article, I think it is more important to illustrate the principles rather than to dwell on some little implementation details. Therefore I suggest sticking with the textbook version.
In principle even the textbook version can be made user-friendly by using some additional memory – for example, you have one structure describing a node of the tree and its connections with other nodes, and a separate structure holding the key and other user information. These two structures have pointers to each other. The user only has pointers to the structure with the key. I agree it is not ideal, but it works in the stated asymptotic running time.
This is just my opinion and I will not mind if you decide to change the article. Just please do not replace the current more or less complete description by something like "this operation is not very easy to explain in short time". Thanks --Brona 18:30, 5 Sep 2004 (UTC)
Ohh no, I surely would not do that! You are right, the explanation should be as easy (but also complete) as possible. This discussion can be a first piece of information about an alternative implementation. I would rather write a German article about that topic than change this one with my bad English. I have now implemented binomial heaps in C++ with my method (which should be the method of Vuillemin, but I am not sure, since I have not read that paper; I have read another one explaining it incompletely and somewhat differently than I would). Actually it turned out that the implementation is not as hard as I thought. Even the proof would not be very hard, but it needs some more background information about binomial trees. Here is the split method, which splits a binomial tree in the described way into at most log n binomial trees:
template<class ObjType, class KeyType>
void BinomialHeap<ObjType, KeyType>::split(BinomialElement<ObjType, KeyType> *e)
{
  if (e->parent != NULL) {
    // First make e->parent a root by splitting the path above it.
    split(e->parent);
    // Cut off e's elder siblings; each is itself a binomial tree and
    // goes into the root list (insertLL), each cut in constant time.
    while (e->parent->children != e)
      insertLL(e->parent->removeFirstChild());
    // Finally cut off e itself, so it becomes a root too.
    insertLL(e->parent->removeFirstChild());
  }
}
--Coma 22:05, 5 Sep 2004 (UTC)

Complexity of Insert might be wrong

When reading the page, Insert is based on merge, which has O(log n) as worst case runtime (not amortized). So to me, Insert cannot perform better than what it's based on - or am I wrong?

A specific example: if one binomial heap is "full", i.e. it has a binomial tree of every order (equivalently, n = 2**k - 1), the insertion of an element must produce a chain of carries, like incrementing a binary counter. This will take logarithmic time. --OOo.Rax 15:03, 15 September 2007 (UTC)


It seems a distinction should be made for the amortized analysis of each operation in addition to the individual worst-case costs. In the case of insert, an amortized analysis gives an O(1) cost. If amortized and individual costs are mixed, this could be highly confusing for the reader. Dfj225 (talk) 03:25, 9 December 2008 (UTC)

Other algorithms

Hm... at my university I stumbled upon a different algorithm that uses binomial trees for creating "binomial heaps" (as a sort of forest), where the root has children only on the left; but in the rest it is all the same. Should this be included? AdrianCo (talk) 23:38, 19 January 2008 (UTC) AdrianCo

Deletions

In most cases, when you are deleting an element, you don't know where in the structure it is, so you have to find it first, and I don't see how you can do that in log(n) time. Just saying, this should be mentioned in the article. 64.91.186.214 (talk) 02:39, 21 April 2008 (UTC)

Please read the Heap (data structure) article first. You'll learn from it that a heap is a specialized type of tree, which does not necessarily support deletion of an arbitrary node, but supports quick deletion of the node with the smallest (or the biggest, depending on the sort order applied) key. A binomial heap supports it too – and you only need to scan the root nodes of ~log(n) subtrees to do that, thus logarithmic time. --CiaPan (talk) 13:40, 25 June 2008 (UTC)

Name

The article is seven years old and nobody ever bothered explaining the name? Bo Lindbergh (talk) 23:30, 9 August 2010 (UTC)

@Bo Lindbergh: You did, apparently: Special:Diff/378084399. Thank you.
 Done
CiaPan (talk) 07:48, 21 July 2020 (UTC)