Talk:Multiplication algorithm


Example in Objective Caml

The example given is I'm sure great, but I have no idea what caml is, and it's kinda confusing to read. A C or pseudocode example would convey the meaning of the code far more clearly to the majority of the reader base. Could someone translate the example to C/Pseudocode for us? AdamSebWolf 09:09, 30 May 2006 (UTC)[reply]
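The OCaml example discussed here was later moved out to the Karatsuba article (see the Karatsuba section below). For anyone who still wants pseudocode, here is a minimal Python sketch of one level of the same divide-and-conquer idea; it is an illustration of the technique, not a translation of the exact code that was removed.

    def karatsuba(x, y):
        """Multiply non-negative integers using 3 recursive products instead of 4."""
        if x < 10 or y < 10:  # base case: a single-digit factor
            return x * y
        m = max(len(str(x)), len(str(y))) // 2
        high_x, low_x = divmod(x, 10 ** m)
        high_y, low_y = divmod(y, 10 ** m)
        z0 = karatsuba(low_x, low_y)                    # low halves
        z2 = karatsuba(high_x, high_y)                  # high halves
        z1 = karatsuba(low_x + high_x, low_y + high_y)  # combined halves
        return z2 * 10 ** (2 * m) + (z1 - z2 - z0) * 10 ** m + z0

    assert karatsuba(1234, 5678) == 1234 * 5678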

Article needs attention tag

I've excerpted the following comment made at Wikipedia:Pages needing attention/Mathematics concerning this article:

The article … contains a false statement: "The fastest known method based on this idea was described in 1971 by Schönhage and Strassen (Schönhage-Strassen algorithm) and has a time complexity of Θ(n ln(n) ln(ln(n)))". (About multiplying long integers - Θ(n ln(n)) is evidently enough). The part "A simple improvement to the basic recursive multiplication algorithm..." contains a warm-up example and shouldn't be at the end. Discussion about complexity is mixed with the discussion about rounding errors and the GIMPS project. (Unsigned entry at 08:06, May 10, 2005 (UTC) by 212.193.10.2. Paul August 18:54, Jun 5, 2005 (UTC))

Some of the above user's comments may no longer apply because of subsequent edits, but I thought I would copy this here for the editors of this page to consider. Paul August 19:28, Jun 5, 2005 (UTC)

To be more clear, the commenter above was the same person who added the "article needs attention" message. So if we think all of that person's issues have been adequately addressed, then we can remove the tag and its entry on Wikipedia:Pages needing attention/Mathematics. What concerns me most is the alleged factual error. I don't know enough to confirm or fix this. Paul August 19:48, Jun 5, 2005 (UTC)

In fact the article is correct: the complexity of S-S multiplication is O(n log n log log n), and it is the best known. Often FFT multiplication is quoted as having complexity O(n log n), because it is assumed the "double" data type on a computer is sufficiently accurate to perform the convolution (which is almost always true in practice). In fact, when you take into account the number of bits the base data type needs to guarantee a sufficiently small error, FFT multiplication has complexity O(n log^2 n).--Luke Gustafson 07:24, 1 January 2006 (UTC)[reply]
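To make Luke's point concrete, here is a minimal Python sketch of FFT-based multiplication using double-precision floating point (numpy's FFT; the names are my own). Rounding the convolution back to integers is only safe while the accumulated floating-point error stays below 1/2, which is exactly the caveat described above.

    import numpy as np

    def fft_multiply(a, b):
        # Base-10 digits, least significant first.
        da = [int(d) for d in str(a)][::-1]
        db = [int(d) for d in str(b)][::-1]
        n = 1
        while n < len(da) + len(db):  # pad to a power of two
            n *= 2
        # Pointwise product of the transforms = transform of the convolution.
        conv = np.rint(np.fft.irfft(np.fft.rfft(da, n) * np.fft.rfft(db, n), n)).astype(np.int64)
        result, carry = 0, 0
        for i, c in enumerate(conv):  # resolve carries digit by digit
            carry += int(c)
            result += (carry % 10) * 10 ** i
            carry //= 10
        return result

    assert fft_multiply(12345, 6789) == 12345 * 6789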

After reviewing the article thoroughly I removed the attention tag and removed it from articles needing attention. (I'm a little new to these sorts of procedures so hopefully this was done appropriately.)--Luke Gustafson 07:37, 1 January 2006 (UTC)[reply]

Number theoretic transform

Is it correct to say the finite field should have at least characteristic m? -- Nic Roets 12:24, 21 July 2005 (UTC)[reply]

I believe (although I haven't actually seen the NTT algorithm) that the characteristic needs to be larger than possible values of the convolutions-- 2^w+1 I think.--Luke Gustafson 07:31, 1 January 2006 (UTC)[reply]

Details can/could be found in Nussbaumer's Fast Fourier Transform and Convolution Algorithms, if anyone has the time to spare. I might double back and add something on NTTs at a later date. Polynomial transforms probably also deserve a mention; I've used them to avoid the rounding error problems mentioned alongside the NTT reference. 118.208.79.170 (talk) 10:47, 6 April 2009 (UTC)[reply]
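To make the constraint concrete: when each operand is split into groups of w bits, a convolution coefficient is a sum of up to (number of groups) products of two w-bit values, so the modulus (and hence the characteristic of the field) has to exceed that sum. A small Python sketch of the bound (my own illustration, not taken from Nussbaumer):

    def convolution_bound(num_groups, bits_per_group):
        """Largest value a convolution coefficient can reach when multiplying
        numbers split into num_groups groups of bits_per_group bits each.
        The NTT modulus must exceed this, or coefficients wrap around."""
        max_group = 2 ** bits_per_group - 1
        return num_groups * max_group * max_group

    # e.g. 1024 groups of 16 bits: coefficients can reach about 2^42,
    # so a 64-bit NTT prime such as 2^64 - 2^32 + 1 is safe, while any
    # 32-bit modulus is not.
    print(convolution_bound(1024, 16))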

Hardware

Note that it's possible to implement any of these in combinational logic hardware. Is there a reference to show that long multiplication is the dominant implementation in silicon? -- Nic Roets 12:22, 21 July 2005 (UTC)[reply]

Long Multiplication

Seems like a more thorough walkthrough of long multiplication should be given, to parallel the long division article. The example doesn't explain the process at all.

Is there a corresponding "short multiplication"? — Omegatron 03:36, 29 November 2005 (UTC)[reply]
I think the name 'long multiplication' was given to distinguish it from multiplication that uses tricks such as lookup tables -- Nic Roets 11:12, 12 August 2006 (UTC)[reply]
Should the algorithm be labeled as "fairly accurate"? The algorithm is accurate and will produce a correct answer if used correctly. Any algorithm is subject to operator error -- EconProf86 19:09, 1 July 2007 (UTC)[reply]

I think it would be good to add a proof of correctness. It is a little strange that everybody learns the algorithm, yet I have never encountered a proof. It is as simple as the fact that a number is a sum: 123 is 1*10^2 + 2*10^1 + 3*10^0. From geometry we know that the area of a rectangle with sides (a+b+c) and (d+e+f) is the sum of the products of each part with each other part, ad+ae+af+bd+be+bf+cd+ce+cf, and that is exactly what this algorithm does. — Preceding unsigned comment added by Marekmosiewicz (talk · contribs) 11:45, 24 January 2024 (UTC)[reply]
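The argument above can be spelled out in a few lines of Python (an illustration only; the names are arbitrary):

    # Write each factor as a sum of digit * power-of-ten terms, expand the
    # product of the two sums by distributivity, and the partial products of
    # the schoolbook method appear term by term.
    def digit_terms(n):
        return [int(d) * 10 ** i for i, d in enumerate(str(n)[::-1])]

    x, y = 123, 456
    terms = [a * b for a in digit_terms(x) for b in digit_terms(y)]
    assert sum(terms) == x * y  # (100 + 20 + 3) * (400 + 50 + 6), expanded termwise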

lattice multiplication

the lattice isn't explained very well. it should also be converted into an image, like this: http://online.edfac.unimelb.edu.au/485129/wnproj/multiply/images/lattice4.gif

I started to convert it to table markup, but then saw that the examples on other sites draw diagonal lines. it might be possible to draw it in tex, too? but an image is probably the best method. — Omegatron 03:32, 29 November 2005 (UTC)[reply]

Karatsuba

I thought this section could use more references and more in the way of practical application -- what ranges it's efficient in and so forth. Since the section was already too long, I spun this out into its own article: Karatsuba multiplication. As a result, I've shortened the explanation here by making the example less explicit and technical and removing the Objective Caml example (both of which were moved over). CRGreathouse (talk · contribs) 00:13, 12 August 2006 (UTC)[reply]

Lower bounds?

Are there proven lower bounds on how long any general multiplication algorithm of n-bit numbers must take? It's trivially Ω(n), and S-S shows it's O(n log n log log n), but are any better bounds known? CRGreathouse (t | c) 03:42, 22 September 2006 (UTC)[reply]

I think the article summary should list the best possible/known lower bounds. ~ Booya Bazooka 14:46, 17 November 2010 (UTC)[reply]
Fürer's algorithm is asymptotically better than S-S. I'm not aware of any nontrivial lower bounds on serial multiplication - however, multiplication is known to lie outside of AC0, meaning it can't be solved with bounded fan-in constant-depth logical circuits, because there is a constant-depth reduction from parity to multiplication, and parity is not in AC0 (see e.g. [1]). This is the best lower bound I know of, and I will add it. Dcoetzee 23:21, 19 November 2010 (UTC)[reply]

FFT

Can someone explain why this is true (in the section regarding FFTs): "We choose the largest integer w that will not cause overflow during the process outlined below. Then we split the two numbers into groups of w bits"?

I haven't messed around with my own examples (and I'm only in high school, so I probably am just missing a point somewhere), but why does it make sense to have the integer w be the size of the groups? Shouldn't the size of w be the size of the groups?

The most watered down example I could come up with to prove this (to myself) is this: Say we want to do FFT multiplication to multiply 6 by 9 (I know this is pointless in actuality, stick with me), the largest integer w that will not cause overflow is 3. The size of 3 in bits is 2, but we are going to group the numbers into 3 bit pieces, with a maximum value for each of 7. Would 7 not cause errors due to overflow?

I just think there's one or two words amiss in the quoted text; but, again, I don't really have a clue what I'm talking about (did you notice? :D) — KyleP 07:33, 24 February 2007 (UTC)[reply]

Hi Kyle - the short answer is, w is already a size in bits, not the number of values possible for each group, which would be 2^w. I don't quite follow your example but this may be your source of confusion. Dcoetzee 08:37, 7 April 2009 (UTC)[reply]
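For concreteness, a small Python sketch of what splitting into groups of w bits means (my own illustration): each group is a value between 0 and 2^w - 1, and w controls the size of each group, not the number of possible group values.

    def split_bits(n, w):
        """Split n into groups of w bits each, least significant group first."""
        groups = []
        while n:
            groups.append(n & ((1 << w) - 1))  # low w bits
            n >>= w                            # drop them
        return groups

    assert split_bits(0b110101101, 3) == [0b101, 0b101, 0b110]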

I think there might be a slight mistake in the convolution equations: in the double sum after the 2nd '=' sign, k goes from 0 to 2m and i from 0 to k, which means we multiply, for example, a_0 and b_2m, while b only has m values. Or am I wrong somewhere? —Preceding unsigned comment added by 46.116.195.141 (talk) 17:16, 14 February 2011 (UTC)[reply]

Oops, just realized it's defined as 0 later on. sorry. —Preceding unsigned comment added by 46.116.195.141 (talk) 17:18, 14 February 2011 (UTC)[reply]

Leonardo

There is a reference to a Leonardo, but there is no clear way of knowing who Leonardo really is. 129.32.94.123 (talk) 17:43, 19 February 2010 (UTC)[reply]

Leonardo was the author Fibonacci's first name. Dmcq (talk) 18:19, 19 February 2010 (UTC)[reply]

Quarter square multiplication and recurrence relations

I have in mind putting in something about this, but first I thought I should put it up for comment, both to get it debugged and to allay any objections about original research (it isn't original, but I can't remember where I came across it, so I had to redo it - and it does seem obvious once you see it). Read on...PMLawrence (talk) 14:11, 11 March 2011 (UTC)[reply]

With qs the quarter square function on integers (rounded down), then because we have n² = (n - 2r)² + 4(n - r)r, putting n and n - 2r through qs (their discarded remainders agree, since they have the same parity), we get a family of recurrence relations, one for each r: qs(n) = qs(n - 2r) + (n - r)r. In general this would involve more multiplications, but we can choose r to make those convenient, say powers of 2 so that the multiplications can be done by repeated doubling. Then it is convenient to work out values of qs(n) using a suitable subfamily of the recurrence relations, e.g.:-

qs(n) = qs(n - 16) + 8(n - 8)
qs(n) = qs(n - 8) + 4(n - 4)
qs(n) = qs(n - 4) + 2(n - 2)
qs(n) = qs(n - 2) + (n - 1)
qs(1) = qs(0) = 0

... or using more or fewer members of the family, or using tabulated values of qs for more than just n = 0 and n = 1. (Clearly, if only the recurrence relation for r = 1 is used, there is no multiplication needed to evaluate qs at all, but then multiplying using qs resembles multiplying using repeated addition. In particular, it is slow for most multiplications.)

I really can't see how anyone could ever have got any gain from using this, so I would need to see a citation which also mentioned it helping with multiplication. Dmcq (talk) 14:38, 11 March 2011 (UTC)[reply]
The gain comes from not needing such a large table as the straight look-up method, while still being reasonably fast. Code would typically be a function using a (much smaller) table of the number of times to double (doubling by adding) and the amounts to subtract (2r), read off against the values of r, highest first, and would drop through looking for the first recurrence relation to use. It would then use n, those values, and a recursive call to get the result. The base case would be a small, direct look-up table it would hit if no number were found (just two entries in the example above). Alternatively, even those values could be calculated rather than looked up, which is slightly slower but more flexible. But what does the gain have to do with it, anyway? PMLawrence (talk) 00:04, 12 March 2011 (UTC)[reply]
The gain has to do with citation in an article about multiplication. I didn't see it gained anything useful so I asked for a citation because it seemed unlikely to me that anyone actually even contemplated using this recurrence in the context of multiplication. It still seems extremely unlikely so yes you definitely need a citation and your assurance you saw it somewhere is not enough. Dmcq (talk) 00:25, 12 March 2011 (UTC)[reply]
And there I was thinking you were asking what using quarter square multiplication, you know, gained from being evaluated this way - since you did write "I really can't see how anyone could ever have got any gain from using [emphasis added] this". Are you sure you aren't simply shifting your position? But clearly citation isn't necessary for certain levels of discussion, as that section already contains "A simple algebraic proof shows that the discarded remainder would have canceled [sic] when the final difference is taken, so no accuracy is lost by discarding the remainders". So how about "A simple algebraic proof shows that quarter squares can also be evaluated conveniently using recurrence relations", at least to be going on with? If I don't give readers any more useful - i.e. more detailed - information, that will be both true and easily verified, by the standards of the section. Or would you regard that as a challenge to delete that necessary part of the table method, since the cited reference doesn't actually cover that, even though it is necessary for completeness? In any case, it strikes me that providing the level of detail I suggested is obvious once you notice the recurrence relations, and is really only for illustration. I don't see why it should be contentious, since it is so easy to confirm its behaviour (though I put it up here partly so as not to have to debug it myself while I was still so close to it). PMLawrence (talk) 03:00, 12 March 2011 (UTC)[reply]
The relevance to multiplication hasn't been shown. It is missing a citation. And it is big. There is no good reason to include it and good reasons not to. Dmcq (talk) 08:44, 12 March 2011 (UTC)[reply]
Huh? The section itself shows the relevance of quarter squares to multiplication, and this is clearly relevant to quarter squares (by the way, when I first heard of quarter square multiplication, it was described as a method the Romans used, with look up tables; with their notation, I wouldn't be surprised if they used the r = 1 recurrence relation to build up their tables rather than actually calculating the entries individually - and that, I freely admit, is the sort of speculation about historical fact that would need a citation to justify going in the section, but I mention it here simply to show that there is nothing prima facie unlikely that people would ever have used recurrence relations to evaluate quarter squares, or prima facie absurd and unhelpful about referring to recurrence relations in this section). You have not addressed my interim suggestion of a brief mention with the same level of citation as a comparable sentence already in there, and you have not addressed the reasons I gave for thinking that a free standing source is not so necessary in this case (i.e., that the level of detail is only for completeness/purposes of illustration and can easily be verified without needing outside references, since the principle is covered by my interim suggestion above - it's not as though verification would need readers to conduct a laboratory experiment or anything like that). While the particular illustration I canvassed above is "big", that's only how it appears on screen with images enabled - it's actually only an ordinary paragraph of notation between two shorter ones that are mostly text; and this current exercise is about seeking constructive criticism, so would you be happy if I suggested a briefer illustration or if you tried your hand at one yourself? And I thought I gave good reasons for including something along these lines, if not precisely the draft above (at any rate, so far you have not addressed the reasons I gave, showing whether they were good or bad and why but only that you did not agree with them - which is no proper test of "good", in this sense), while I have yet to see your reasons, good or otherwise, for not doing so. That is, I have seen your stated conclusions, but no reasons over and above being "big" and a lack of citation, but you can't mean those in your paragraph just there as that would be double counting (that you don't see a gain is not a reason in itself but a statement of your position, a conclusion you have come to). PMLawrence (talk) 11:19, 12 March 2011 (UTC)[reply]
Just because quarter squares are relevant doesn't mean this recurrence is. Please just follow the WP:5P. We're supposed to find things which are halfway notable written about in reliable sources and summarize what's written about them. You really need a reliable source if you want to take this any further. Dmcq (talk) 23:46, 12 March 2011 (UTC)[reply]
Again, huh? By that standard, much of what is in this section shouldn't be in there. You have failed to address any of my specific points bringing this out. In particular, you have failed to state any reasons not to go with this, in the sense of specific reasons of the sort I asked for. So, I'll start again. Unless I soon receive specific reasons not to, or constructive criticism suggesting alternatives, in the near future I shall put in an interim entry roughly like this: "A simple algebraic argument shows that quarter squares can also be evaluated using recurrence relations, qs(n) = qs(n - 2r) + (n - r)r. This is fast for large r, and easy when r is a power of 2 as the multiplications of n reduce to doubling by repeated addition and drop out completely in the simplest, slowest case." I believe only the first sentence is strictly necessary for this matter, and it meets the standards already followed in this section, but the second sentence is necessary to clarify its usefulness. Even so, a fuller illustration would be better, and I would welcome any constructive criticism offering that. PMLawrence (talk) 03:04, 15 March 2011 (UTC)[reply]
And I'd remove it citing WP:Original research. I do not believe this is a basis for a good algorithm, and I don't believe it has ever been used or written about in this context. Dmcq (talk) 10:04, 15 March 2011 (UTC)[reply]
You have given a strong indication that you are not acting in good faith. Now that I have provided references - you are claiming that the particular use proves that the material does not belong here. You are inventing objections on the fly rather than carrying out constructive criticism to provide helpful suggestions, you are failing and refusing to provide specific objections so I can see how to address them, and when I meet your general and generic objections you claim - irrelevantly as well as incorrectly - that those links make that material more specific to something else and thus irrelevant here. But not only is that wrong - about it being irrelevant - it misses the point about how encyclopaedias work. If that material were carefully confined in a different article, the article on multiplication would be missing stuff on how that is achieved. I believe I have given plenty of leeway, asked for constructive criticism in advance, met and more than met your stated objections insofar as you have condescended to reveal them, and now been confronted with an objection invented after the fact that was not among any objection given before and is anyway both unfounded and irrelevant. In all the circumstances I feel justified in asking for arbitration, since I have gone out of my way to avoid the necessity without success. Feel free to offer constructive criticism while we wait, if you can bring yourself to it (but note that putting this "how" stuff elsewhere does not help this article). PMLawrence (talk) 03:35, 20 March 2011 (UTC)[reply]
I would say this is a perfectly adequate section, as long as the references are appropriately set up. I haven't seen this analysis before and wouldn't know where to go to look for it, and until I get the latter I wouldn't be happy about this section going in - but the work itself looks fine. --Matt Westwood 23:07, 20 March 2011 (UTC)[reply]
The article is about multiplication algorithms. The recurrence relations themselves are no more than a dressed-up version of (a+b)² = a² + 2ab + b², and hopefully you would have seen an 'analysis' of that somewhere or be able to figure out where to go for it. They are not used in quarter square multiplication. As I have said to the person, the references could be used in the mathematical table or human computer article as these are about table construction. However particular methods of table construction do not belong here, and the recurrence relations as given here were not used in table construction anyway and have never appeared in the literature as far as I'm aware for any purpose. Dmcq (talk) 08:32, 21 March 2011 (UTC)[reply]
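For anyone following this thread, here is a minimal Python sketch of the recurrence as written out above, qs(n) = qs(n - 2r) + (n - r)r with r a power of two. It is offered purely as an illustration of the idea under discussion, not as a claim that any published table or algorithm was built this way.

    def qs(n):
        """Quarter square via the recurrences above: repeatedly peel off the
        largest step 2r (r a power of two) and add (n - r) * r, which is a
        shift rather than a true multiplication."""
        total = 0
        while n > 1:
            r = 1
            while 2 * (2 * r) <= n:  # largest power of two r with 2r <= n
                r *= 2
            total += (n - r) << (r.bit_length() - 1)  # (n - r) * r
            n -= 2 * r
        return total  # base cases qs(0) = qs(1) = 0

    def multiply(x, y):  # quarter-square multiplication itself, for x >= y >= 0
        return qs(x + y) - qs(x - y)

    assert all(qs(n) == n * n // 4 for n in range(200))
    assert multiply(59, 17) == 59 * 17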

Construction of tables

The justification for a section about recursive generation of the quarter squares was that it was used in the table generation.

That is absolutely false. The fact is, you asked for a reference showing that recurrence relations were in fact used. I provided more than one relevant citation. However, that is not the justification for mentioning it, that is, the thing that makes it relevant. It is relevant because it shows how to obtain conveniently the things quarter square multiplication uses - the values of the quarter squares. Bluntly, this is not about table construction, it is about quarter square multiplication, and the reference to using this approach to construct tables was solely and simply to satisfy you in your asking for a reference to a real situation in which this approach was used. That does not turn it into something that only matters to table construction, it meets your demand for a demonstration of usefulness. That does not make it useless for anything else; by that Morton's Forkish reasoning, if I don't give you a citation I can't make the edit, and if I do I must not make the edit. PMLawrence (talk) 03:35, 20 March 2011 (UTC)[reply]

The method was a standard one in the pre-computer era. It doesn't have anything to do with multiplication as such so I removed it.

By that logic, most of that section should be removed. However, as I have just pointed out, it is indeed relevant to multiplication because it shows how to obtain conveniently the things quarter square multiplication uses - the values of the quarter squares. If you don't believe me, read Samuel Laundy's own words in a report to the Institute of Actuaries, in their magazine I cited: "THE paper which I have the honour to submit is so elementary in its details that I have to claim the indulgence of the meeting; but, as the practical application of the simple formulæ to which I shall have occasion to refer may be interesting, I trust I may not unworthily occupy a short portion of your time. A few months ago it occurred to me that tables to facilitate the process of multiplication, based upon the identity xy = ¼(x+y)² - ¼(x-y)², would be of extensive use in actuarial and other calculations, and would effect a considerable saving of labour—to an extent even greater than that acquired by the use of logarithms... Before proceeding to explain the use of the tables, I may be allowed to refer to the process by which they have been computed. To form each value separately by multiplication would require an amount of labour that few would voluntarily undergo : but considering the fact that the squares of numbers beginning from unity may be found by the successive addition of a constantly and uniformly increasing difference, it is evident that every tabular number may be found from that preceding it by the process of addition... But the computation from No. (2000) and onward can be more conveniently continued by a modification of the above method..." Mr. Laundy, himself, pointed out the relevance of the method of construction, and the Institute of Actuaries was willing to accept that in the report. PMLawrence (talk) 03:35, 20 March 2011 (UTC)[reply]
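For what it's worth, the successive-addition scheme Laundy describes is easy to sketch for the quarter-square table itself: the first differences of n*n//4 are 0, 1, 1, 2, 2, 3, 3, ..., so every entry follows from the previous one by addition alone. A short Python illustration (my reading of the quoted passage, not Laundy's own procedure):

    def quarter_square_table(limit):
        table, value = [], 0
        for n in range(limit):
            table.append(value)
            value += (n + 1) // 2  # difference between qs(n+1) and qs(n)
        return table

    assert quarter_square_table(10) == [n * n // 4 for n in range(10)]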

A bit more in the article human computer about how they were used to compute tables would be useful I think.

While that would help that article, it fails to help this one, which would - as I have shown - benefit from the presence of material in this area. PMLawrence (talk) 03:35, 20 March 2011 (UTC)[reply]

It should be based on what's written about how to divide up the work between multiple computers. The method here is a slight variation on what a difference engine normally does and could be done by one by having two extra digits which are ignored and of course they worked in decimal. Dmcq (talk) 14:22, 19 March 2011 (UTC)[reply]

Please do not split up people's contributions this way. You are not replying to an email. You can quote a bit of what a person says if it is not obvious from your reply.
If you can point to other parts of this article which are not directly pertinent to multiplication algorithms please do so.
Computing the quarter squares is no problem. You just square, divide by four and discard the remainder as explained in the article. What was being described by Samuel Laundy was the standard technique to generate tables and is relevant to human computer or mathematical table. It isn't relevant here. And sticking in your own formula and saying Laundy used a variant of it isn't exactly following WP:Reliable sources either, is it?
I guess from what you are saying you don't feel like trying to improve human computer or mathematical table and want to continue here instead. I have said my piece a couple of times and doubt saying it again is a terribly useful occupation so I shall raise this at Wikipedia talk:WikiProject Mathematics#Quarter squares. Dmcq (talk) 13:07, 20 March 2011 (UTC)[reply]

Linear time multiplication

I think there is an error in the section "Linear time multiplication". In the reference, Knuth discusses that in the model described in this article, multiplication can be done in O(n log n), not O(n). After that he leaves as an exercise to prove that in the pointer machine model it can be done in O(n). — Preceding unsigned comment added by 181.28.64.87 (talk) 04:15, 28 August 2012 (UTC)[reply]

Cheprasov Algorithm

There seems to be some edit warring about this. Please discuss on this talk page first if still insistent on including it.

Personally I see no reason to include it at present. There does not seem to be any general interest that I can see. Also, if there was some interest, I believe mental calculation would be more appropriate.

As to the actual system as described, personally I think it is silly. I can make up a better one myself in a couple of minutes, for instance, using the symbolism there:

t = (a-b)(c-d)
(10a+b)(10c+d) = 100ac + 10(ac + bd - t) + bd

Which also possibly has negative numbers and adds up to something positive. I don't specially feel inclined to write a book about that though. Dmcq (talk) 12:02, 4 January 2013 (UTC)[reply]

I agree with Dmcq. One can remark that the algorithm presented by Dmcq is well known (up to some changes of signs) as the Karatsuba algorithm, and has the great advantage of using only 3 multiplications of digits instead of 4 in the classical algorithm and in the so-called "Cheprasov Algorithm". D.Lazard (talk) 12:53, 4 January 2013 (UTC)[reply]
Yes, the proposed algorithm has no particular advantages in time complexity or for mental calculation, which is precisely why it hasn't gotten published in any peer-reviewed venue. Dcoetzee 05:56, 5 January 2013 (UTC)[reply]

justification for taking remainders in quarter squaring

IMO the explanation

If x+y is odd then x-y will also be odd, this means any fraction will cancel out so no accuracy is lost by discarding the remainders.

is not valid as it stands. It is not immediately obvious (without "writing out" (2n-1)²=4n²-4n+1) that the remainders could not add up their effects rather than cancel out; at least the argumentation must be changed. I'd rather suggest

We know that the result is an integer, so any possible fractional part of +/- 1/4 in either (x+y)²/4 or (x-y)²/4 (in case the squares are of the form (2n+1)²=4n²+4n+1) must necessarily cancel.

This argumentation leads (IMO) more obviously to the correct result. — MFH:Talk 14:40, 19 March 2013 (UTC)[reply]

The statement is true. You never get a -1/4 remainder when squaring. It isn't a textbook and checking is fairly easy for a reader if they want to do so. Dmcq (talk) 18:01, 19 March 2013 (UTC)[reply]
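The check is indeed easy; a throwaway Python verification of both the parity observation and the identity with discarded remainders (not proposed article content):

    # x+y and x-y always have the same parity, so the two squarings discard
    # equal remainders, which cancel in the difference.
    for x in range(100):
        for y in range(x + 1):
            assert (x + y) % 2 == (x - y) % 2
            assert (x + y) ** 2 // 4 - (x - y) ** 2 // 4 == x * y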

A possible memory trade for time gain for the common O(N*N) multiplications

I think there is a chance that a purely creative, paradigm-approaching way of 1->K data-length extrapolation could lead us to an algorithm derived from the common O(N*N) algorithm, one that trades O(N) memory complexity for an Sqrt(N) factor removed from the N*N time complexity. Take this as a challenge... it looks to me that the idea cannot (at least not easily) be extrapolated for recursivity, so a better algorithm using this suggested idea seems harder to obtain. Good Luck, Yankees!! Florin Matei, Romania. 93.118.212.93 (talk) 14:15, 5 May 2013 (UTC)[reply]

ok, purely creative part: finding similarities with the quicksort paradigm

qsort might use small sortings in order to partition the data, practically forcing a D&C solution. I wonder if there is a similar idea for multiplications. Here is the purely creative approach: we have a rectangle, we know the lengths of the edges but we don't know its surface area; we might split both sides in such a way that we form a square or... (a more "docile" rectangle)... the formed square might help us, using algebra, by taking advantage of its equal borders... it's not quite easy to bring this to an end but, again, it might be worth a try. Florin Matei, Romania... 93.118.212.93 (talk) 17:16, 13 June 2013 (UTC)[reply]

I don't understand what you are saying. I think you are proposing some of your own ideas. We only put things into articles which have been written about in books and magazines and suchlike and not editors own ideas. See WP:Verifiability. Dmcq (talk) 18:36, 13 June 2013 (UTC)[reply]

wow: that's a cold war specialist talking, but I agree you got some of your own policy: we got ours; too bad my ideas are working, ha? 93.118.212.93 (talk) 20:36, 13 June 2013 (UTC)[reply]

take these as homework, please; why not some credits for me anyway?

n*log(n) multiplications or even better

wow, im sorry lets say we want to multiply a1 n b1, to avoid confusion a1*b1=m*m-n*n; m*m =(m1;m2)*(m1;m2) , m1 the most significant part m1=3*a+b, m2=3*b+a, computing m1*m1, m2*m2 n m1*m2 from "autosolve". good luck n dont forget credits, ok?? Florin Matei, Romania 93.118.212.93 (talk) 19:16, 6 December 2012 (UTC)

   What exactly are you trying to say? What is "autosolve"? The idea to reduce multiplication to two squarings is old. How is 3a+b the most significant part of m? What is m? Usually one would take m=(a+b)/2 and n=(a-b)/2. m=3a+b would require n=3a-b to cancel the squares, then m*m-n*n=12*a*b. How do you propose to achieve the squarings? Using Karatsuba? Do you resolve the mixed scale multiplication again to a difference of two squares? What is the complexity relation, that is, how is T(2n) expressed in terms of n and T(n)? n*log(n)-Multiplication is Schönhage-Strassen, an algebraic variant of the FFT. Did you check that?--LutzL (talk) 12:35, 7 December 2012 (UTC)
   Please study Daniel J. Bernstein: Multidigit multiplication for mathematicians. for an overview of fast multiplication algorithms and ways to communicate them. If the formalism is right, the language doesn't matter that much.--LutzL (talk) 20:16, 19 December 2012 (UTC)

autosolve in a nonexpansive way to compute m1*m2: using the notations m1=3a+b and m2=3b+a (m1, m2 the 2 parts of splitting m - whose m? don't make me laugh, you are the one to say the idea is old), and the computed values m1*m1 and m2*m2, that works recursively, no extra ideas needed... once we have m1*m1, m2*m2 and m1*m2 we've completed the current recursion level by computing m*m... the idea is to use a decent sum m1*m1 + r*m2*m2 with the suggested notation in order to find m1*m2 (the algorithm's main idea). I know it's a bit crazy but it might even work, and if so then the order is n*log(n)... and don't bother me with master method formulas, I practically reinvented it once, and the way to verify it also :P 93.118.212.93 (talk) 14:47, 2 February 2013 (UTC)

   Please publish a paper, preferably refereed, where you give a detailed description of what happens when in your algorithm, some example and also a detailed account of your reinvention of the "master theorem". But even then this discussion page is not the right place to talk about this fundamentally different "idea".--LutzL (talk) 20:32, 5 February 2013 (UTC)

I don't know about the right place for this; apparently it's all I can afford here, not to mention the brain treat... they are doing this to me for their fun, I guess 93.118.212.93 (talk) 09:29, 9 February 2013 (UTC)

Karatsuba idea and estimating MSb mantissa for factorials

I think that, due to the arithmetic progression of the operands and using binary codings for numbers, we could use ideas similar to Karatsuba to multiply in polynomial time a great many numbers, such as factorials, only for the significant mantissa (and exponent); the details are left to you :) — Preceding unsigned comment added by 93.118.212.93 (talk) 11:53, 2 February 2013 (UTC)

Estimating the modulus of such a product could be possible, anyway, for primality tests, but I think I am not able to do that one, not in polynomial time anyway. 93.118.212.93 (talk) 12:00, 2 February 2013 (UTC)

Please stop copying stuff into different talk pages. You have been told what is needed. What you are doing is wasting your time and anybody's who reads this. Please stop. Also if you cannot be bothered to try and write understandably and correctly others will not bother trying to make out what you say. Your time is not worth more than mine. Dmcq (talk) 21:20, 13 June 2013 (UTC)[reply]

Who was it above who wrote: "wow, im sorry lets say we want to multiply a1 n b1, to avoid confusion a1*b1=m*m-n*n;" ??? You write "a1 n b1" which actually CREATES confusion, because: 1) That includes the letter n so it looks as if you're multiplying a1 * n * b1 2) You go on to use the letter n in the very next equation

Do you think you could write in NON-CONFUSING English, please? "Let's say we want to multiply a1 and b1 ..." or "Let's say we want to multiply a1 by b1..." 46.64.255.189 (talk) 19:58, 10 January 2014 (UTC)[reply]

Ancient Indian algorithm for multiplying numbers close to a round number

What is the complexity of this algorithm? 31.42.233.14 (talk) 15:08, 10 January 2014 (UTC)
Like, really unbelievably complex, man! 46.64.255.189 (talk) 19:59, 10 January 2014 (UTC)[reply]

Peasant Multiplication

It is necessary to say that only one column is added. If this is not specified, the reader will follow the recipe "add numbers which are not scratched out" and that is most of them, i.e. 7 of them in the example of 11 x 3.

It is also necessary to say that it is the column of numbers formed by doubling which is added. Adding the OTHER column will give garbage. If this is not specified, the reader will not know which column should be added.

Neither of the above is obvious from just an example. 46.64.255.189 (talk) 22:45, 10 January 2014 (UTC)[reply]

Try looking up "Egyptian multiplication" on Wikipedia. You'll find the same method is described. Is there a case for merging or cross-linking them? More fundamentally, we could use "multiplication algorithm" for something which is usually done by computer, and "multiplication method" for something which is normally done by hand. 132.244.72.6 (talk) 15:22, 6 February 2014 (UTC)[reply]

The above is correct. One has to say that the DOUBLING column is added. The article DOES NOT SAY that only one column is added. It just says "the numbers are added" which gives garbage. Why can't you bring yourself to just say ADD THE CORRECT COLUMN, DAMMIT!!! ? 132.244.72.6 (talk) 15:22, 6 February 2014 (UTC)[reply]

The article just says "All not-scratched-out values are summed" which is WRONG. What it ought to say is "all non-scratched-out numbers IN THE DOUBLING COLUMN are summed". 132.244.72.6 (talk) 15:24, 6 February 2014 (UTC)[reply]
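In executable form, what the comments above are asking the article to spell out looks like this (a Python sketch; the names are my own):

    def peasant_multiply(a, b):
        """Halve a, double b, and sum ONLY the doubling-column entries in
        rows where the halving column is odd; rows with an even halving
        entry are the 'scratched out' ones."""
        total = 0
        while a >= 1:
            if a % 2 == 1:  # row not scratched out
                total += b  # add the doubling-column value only
            a //= 2         # halving column
            b *= 2          # doubling column
        return total

    assert peasant_multiply(11, 3) == 33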

The single source used to explain that Peasant Multiplication has been used 'widely' in Russia is unreliable. It itself claims that the history of its name is "unexpectedly murky", and that it may have Egyptian roots. It also cites text that no longer exists from another Wikipedia page. The passage also claims that it is the *same* algorithm as Egyptian multiplication, which isn't true; the Egyptian multiplication article has a separate section dedicated to Peasant Multiplication. I believe it would be best to remove this section for now, as it requires heavy modification in its current state. BOOT9989 (talk) 03:26, 1 December 2022 (UTC)[reply]

Multiply by averaging

Original research is not accepted in Wikipedia
' Method developed by albert_redditt@yahoo.com

' Written in FreeBasic for Windows http://www.freebasic.net

' best to use FBIDE and copy and paste and press F5

' First we set up r_bottom and l_bottom

' Then we double those two numbers for r_top and l_top

' then we average each side.

' If l_avg is higher than num2 then we set l_top to l_avg and r_top to r_avg

' if l_avg is lower than num2 then we set l_bottom to l_avg and r_bottom to r_avg

' eventually l_top and l_bottom are straddling num2 and l_avg = num2

' at this point r_avg = multiplied answer

' if you divide (l_top / num2) the answer is the same as (r_top / mul answer)

do

   dim as ulongint num1 = int(rnd*1e7)
   dim as ulongint num2 = int(rnd*1e7)
   if num2 > num1 then swap num1 , num2
   '===============================================================================
   'during averaging,
   'if the l_avg ends in .5 then we add .5 to it and
   'the point_5 var (half of num1 ) to the r_avg
   dim as double point_5 = num1/2
   
   dim as string str1 = str(num1)
   dim as string str2 = str(num2)
   
   dim as double l_bottom , l_avg , l_top
   dim as double r_bottom , r_avg , r_top
   '===============================================================================
   'r_bottom = num1* leftmost digit of num2 + zero padding to make it the right length
   r_bottom = val(str( val(str1) * (val(left(str2,1))) ) + string(len(str2)-1,"0"))
   r_avg = r_bottom
   r_top = r_bottom + r_bottom
   '===============================================================================
   'l_bottom = left most digit of num2 + zero padding to make it right length
   l_bottom = val( str(val(left(str2,1))) + string( len(str2)-1,"0") )
   l_avg = l_bottom
   l_top = l_bottom + l_bottom
   '===============================================================================
   'for each unit on the left side there's a num1 on the right side
   'So if l_top minus l_bottom = 1000 units, then 
   'r_top minus r_bottom = (1000 * num1) units..
   
   dim as ulongint count = 0
   
   do
       count = count + 1
       
       l_avg = (l_top + l_bottom) / 2
       r_avg = (r_top + r_bottom) / 2
       
       if l_avg = num2 then exit do
       
       if right(str(l_avg),2) = ".5" then 
           l_avg = l_avg + .5
           r_avg = r_avg + point_5
       end if
       
       if l_avg = num2 then exit do
       
       if l_avg > num2 then 
           l_top = l_avg
           r_top = r_avg
       elseif l_avg < num2 then 
           l_bottom = l_avg
           r_bottom = r_avg
       end if
   
   loop until l_avg = num2
   
   print
   locate ,1  : print "loop req = " ; count
   locate ,1  : print "n1       = " ; num1 
   locate ,1  : print "n2       = " ; num2 
   locate ,1  : print "l_avg   = " ; l_avg
   locate ,1  : print "r_avg   = " ; r_avg
   locate ,1  : print "n1 * n2  = " ; num1*num2
   
   print
   print "Press a key to continue.."
   sleep

loop until inkey = chr(27)
sleep
end
'===============================================================================

Method takes approx 3.3 averages per digit, so 10,000 x 10,000 digits = 33,000 averages — Preceding unsigned comment added by Albert redditt (talk · contribs) 21:03, 18 February 2014 (UTC)[reply]

This method appears to be original research, being developed by the author of the post and not published in any reliable journal. As such Wikipedia policy is to forbid the insertion of the method in any article. Sorry, Wikipedia is not a media for publishing your work. D.Lazard (talk) 22:08, 18 February 2014 (UTC)[reply]

Quarter Square on the CP-1600 (Intellivision)

Inspired by the article here, I implemented quarter-square multiplication for the old school CP-1600 processor family, used in the Intellivision. I implemented unsigned 8-bit × unsigned 8-bit => unsigned 16-bit, 16-bit × 16-bit => 16-bit (signed/unsigned doesn't matter), and a funky fixed point implementation that did signed 8.8 × signed 8.8, where the RHS was constrained to the range -1.00 to 1.00. You can read all about it here: http://atariage.com/forums/topic/236848-quarter-square-multiplication-blindingly-fast-multiplies/ Mr z (talk) 05:13, 15 April 2015 (UTC)[reply]

Grid method citation

A user tagged the first line of the Grid method section as failing verification. The line says:

The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school level. It has been a standard part of the national primary-school mathematics curriculum in England and Wales since the late 1990s.[1]
  1. Gary Eason, Back to school for parents, BBC News, 13 February 2000
     Rob Eastaway, Why parents can't do maths today, BBC News, 10 September 2010
The tag's explanation was: 'Grid method: The first citation talks about maths from 2000 onwards. The second one is over a decade later. Nothing supports this utterly irrelevant point.' It seems pretty evident to me from the source, but if someone can point out why they think it fails, or can provide something more explicit, maybe that would help. The citation from 2000 is talking about recently introduced methods for doing arithmetic and talks about the grid method for multiplication. The second says 'The revolution in the teaching of maths at primary school kicked in with the National Numeracy Strategy in 1999'. Dmcq (talk) 20:35, 12 January 2016 (UTC)[reply]

Phinary multiplication

    Maybe it would be nice to add a section about multiplying with the help of (general) Fibonacci sequences.
    Like binary multiplication, the Fibonacci way also codes a number in 0 and 1 digits. The code is phinary instead of binary. The phinary number is used to look up values from a general Fibonacci sequence. This is analogous to "Peasant or binary multiplication", where the binary number is used to look up values from a sequence of powers of 2 times an initial value.

    Binary multiplication example (3 × 11; 11 in binary is 1011, listed least significant bit first in the rows below):

        3 * powers of 2    Binary 11 (1011)    lookup
        3                  1                   3
        6                  1                   6
        12                 0
        24                 1                   24
        Total                                  33


    Two analogous phinary examples (phinary 11 = 10101.0101; the "1." marks the digit at the radix point):

        Fibonacci sequence    Phinary 11    lookup        Alt-fib sequence    Phinary 11    lookup
        (center at 3)         10101.0101                  (center at 3)       10101.0101
        0                     1             0             -6                  1             -6
        1                     0                           5                   0
        1                     1             1             -1                  1             -1
        2                     0                           4                   0
        3                     1.            3             3                   1.            3
        5                     0                           7                   0
        8                     1             8             10                  1             10
        13                    0                           17                  0
        21                    1             21            27                  1             27
        TOTAL                               33            TOTAL                             33

    At first impression both methods share the same Big-O classification, but figuring out the phinary notation is more time-consuming than the binary notation, and a classical computation of a Fibonacci sequence is more complex than generating a sequence based on bit-shift operations.
    From the computation point of view, the positive thing is the amount of freedom to figure out creative optimizations. Phinary 11 (10101.0101), for example, can also be written as 100000.0000 - 1.
    The computation then becomes: fibonacci(9) - fibonacci(-1) = 34 - 1 = 33, or from the non-standard Fibonacci sequence: 27 - (-6) = 33, a lot fewer computations to make. Swiersma (talk) 12:06, 28 February 2016 (UTC)[reply]

    It hasn't anywhere near the weight or relevance to go in this article but a link to Golden ratio base would probably be okay. Dmcq (talk) 14:32, 28 February 2016 (UTC)[reply]

    "Gauss multiplication trick??"[edit]

    Are there any reliable sources explaining why people think Gauss knew this? The reference to [11] Knuth, Donald E. (1988), The Art of Computer Programming volume 2: Seminumerical algorithms, Addison-Wesley, pp. 519, 706 is misleading in several aspects. Technical: it is not clear which edition is meant; the third edition seems to be from around 1998, the second from around 1981. Important: the Karatsuba multiplication algorithm (with reference to Karatsuba, of course) is mentioned in Section 4.3.3, p. 294 (p. 278 of the second edition), with no references to Gauss. On pp. 519 and 706 of the third edition the complex multiplication is indeed mentioned. On p. 519 (p. 501 of the second edition) there is exercise number 41 for section 4.6.4, asking to provide a complex multiplication with 3 real multiplications and 5 real additions, and p. 706 (p. 647 of the second edition) gives an answer to this exercise: "41. 𝑎(𝑐 + 𝑑) − (𝑎 + 𝑏)𝑑 + 𝑖(𝑎(𝑐 + 𝑑) + (𝑏 − 𝑎)𝑐). ⟨…⟩ Without the restriction on additions there are other possibilities. For example, the symmetric formula 𝑎𝑐 − 𝑏𝑑 + 𝑖((𝑎 + 𝑏)(𝑐 + 𝑑) − 𝑎𝑐 − 𝑏𝑑) was suggested by Peter Ungar in 1963. See I. Munro, STOC 3 (1971), 40–44; S. Winograd, Linear Algebra and its Applications, 4 (1971), 381–388." There is no explicit reference to any work of Ungar. Winograd and Munro do not say anything about Gauss (or Ungar) either. Knuth (in the index) says that Gauss is mentioned on pp. 20, 101, 363, 417, 422, 449, 578, 679, 685, 688, 701; none of these pages says anything about complex multiplication. There are several places where Gauss is claimed to have known the Gauss multiplication trick (e.g., Dasgupta et al.'s classical textbook), but I did not manage to find any sources explaining why this is said... — Preceding unsigned comment added by Alexander.shen (talk · contribs) 10:12, 20 January 2017 (UTC)[reply]
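    Whatever the attribution question, the formula quoted from exercise 4.6.4-41 is at least easy to machine-check. A quick Python sketch (my own, for verification only): the product uses the three real multiplications a(c+d), (a+b)d and (b-a)c.

        # (a+bi)(c+di) = a(c+d) - (a+b)d + i*(a(c+d) + (b-a)c)
        def complex_mult_3(a, b, c, d):
            t = a * (c + d)         # multiplication 1, shared by both parts
            real = t - (a + b) * d  # multiplication 2
            imag = t + (b - a) * c  # multiplication 3
            return real, imag

        a, b, c, d = 3, 5, 7, 11
        assert complex_mult_3(a, b, c, d) == (a * c - b * d, a * d + b * c)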

    (Changed the page since no verified information about Gauss appeared) — Preceding unsigned comment added by Alexander.shen (talk · contribs) 15:35, 12 February 2017 (UTC)[reply]


    This issue was discussed recently at MathOverflow: https://mathoverflow.net/q/319559 Brienanni (talk) 11:00, 31 December 2018 (UTC)[reply]

External links modified (February 2018)

    Hello fellow Wikipedians,

    I have just modified one external link on Multiplication algorithm. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

    When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

    This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

    • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
    • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

    Cheers.—InternetArchiveBot (Report bug) 04:45, 8 February 2018 (UTC)[reply]

Even Faster Integer Multiplication

    The 2014 paper "Even Faster Integer Multiplication" by Harvey, David; Van Der Hoeven, Joris; Lecerf, Grégoire is not mentioned, that I can tell. It claims to improve the complexity over Fürer's algorithm by a factor of 2^(log*n), where log* is the iterated logarithm. Has the paper not been validated by the mathematical community? If it is indeed faster than Fürer's algorithm, then it deserves a mention (and also other articles such as Schönhage–Strassen algorithm and Fürer's algorithm would need to be updated to reflect that there is a new algorithm with the lowest known asymptotic complexity). I thought I would mention it here and let those more mathematically qualified than me make the decision. — Preceding unsigned comment added by Karatekid430 (talk · contribs) 04:27, 10 September 2018 (UTC)[reply]

Optimizing space complexity

    The formulation of the indices j and k, with j+k=i as the running variable of the summation, appears incorrect. When j and k index the rightmost digits of the operands, e.g. when they are both 1, the summation does not compute the rightmost resulting digit r1 but r2.

    The assertion "A simple inductive argument shows that the carry can never exceed n" is certainly wrong. Let two two-digit numbers be 99 and 99. Multiply any two digits: 9*9=81. The carry 8 is greater than 2 (digits). I think the sentence is worded incorrectly, while the concept may be right. — Preceding unsigned comment added by 24.46.146.12 (talk) 03:53, 11 October 2018 (UTC)[reply]
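    For reference, a Python sketch of the column-at-a-time scheme the section describes (my own reading of it, with base-10 digits, least significant first). Note that the carry is divided by the base after every column, which is what keeps it bounded; it is never the carry of a single digit product as in the 9*9 example above.

        def column_multiply(a, b):
            # Result digit i is (sum of A[j]*B[k] with j+k == i, plus carry) mod 10;
            # the quotient is carried into column i+1. Only O(1) extra storage
            # beyond the output itself is needed.
            A = [int(d) for d in str(a)][::-1]
            B = [int(d) for d in str(b)][::-1]
            digits, carry = [], 0
            for i in range(len(A) + len(B) - 1):
                column = carry
                for j in range(len(A)):
                    k = i - j
                    if 0 <= k < len(B):
                        column += A[j] * B[k]
                carry, digit = divmod(column, 10)
                digits.append(digit)
            while carry:  # leftover high-order digits
                carry, digit = divmod(carry, 10)
                digits.append(digit)
            return int("".join(map(str, digits[::-1])))

        assert column_multiply(99, 99) == 9801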

O(n log n)

    Shouldn't the article be updated to mention the discovery of an integer multiplication algorithm (https://hal.archives-ouvertes.fr/hal-02070778/document)? -- Meni Rosenfeld (talk) 11:03, 16 April 2019 (UTC)[reply]

    Normally, we must wait for secondary sources, or at least for acceptance by a notable journal; this is needed for verifying the correctness of the result. However, because of the importance of the result, a sentence such as "In March 2019, an algorithm has been presented with a claimed complexity of O(n log n)" could be acceptable. D.Lazard (talk) 12:14, 16 April 2019 (UTC)[reply]
    I've added a sentence about it. Feel free to modify or update. Wqwt (talk) 20:20, 3 May 2019 (UTC)[reply]

Lower bounds again

    The 1st sentence in section "Lower bounds" currently says

    There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any better lower bound is known.

    It leaves me with these questions:

    1. If the lower bound is trivial, it is not a matter of current knowledge whether a matching algorithm exists, so the word "known" seems inappropriate.
    2. I guess, "better bound" means a sharper (i.e. larger) bound? - Here, the word "known" is of course appropriate.
    3. While I understand D.Lazard's motivation for inserting the (somewhat eastereggy) link to Turing completeness, I guess that a conventional Turing machine, extended by a "special-purpose processor" (oracle) to perform addition in O(1), would still be Turing-complete, but violate the lower bound? On the other hand, every machine with a tape would need at least O(n) steps to read the input numbers. (Maybe, this is the trivial proof for the lower bound; in that case, I suggest to mention the word "tape" in the sentence.)
    4. Looking up "Ω(n)" at Big_O_notation#Family_of_Bachmann–Landau_notations, I found two slightly different definitions; it should be made clear which is meant - or why this doesn't matter.

    As might be seen from my last question, my knowledge in complexity theory doesn't go much beyond Big-O and NP-completeness; so please apologize when some of my questions are silly. - Jochen Burghardt (talk) 13:14, 6 October 2019 (UTC)[reply]

    1. I believe "matching algorithm" means a Θ(n) algorithm. In other words, it's trying to say that it's not known that the trivial bound is the best one.
    2. Yes.
    3. The Ω(n) bound indeed comes from the necessity of reading the input. The oracle machine you mentioned would do multiplication in Θ(n), in contradiction to the claim that no matching algorithm is known. The edit about Turing completeness is incorrect and should be reverted.
    4. The article you linked says: "Computer science uses […] Knuth's big Omega Ω notations".
    JWilk (talk) 15:35, 8 October 2019 (UTC)[reply]

Fast base 256 lookup tables

    I've been looking around for content about the computational advantage of multiplication lookup tables. For example, to create a fast base 256 lookup table, one can precalculate:

    uint16 multiplybase256[256*256];


    where

    multiplybase256[ (a<<8) | b ] = a*b;


    then

    x = a*b;


    becomes

    x = multiplybase256[ (a<<8) | b ];


    which is much faster than a simple single-instruction multiplication, and in small loops it frequently executes in one CPU clock cycle or less, since there isn't much math involved beyond the low-overhead SHIFT and OR operations. In many applications, small integers get multiplied repeatedly, which causes unnecessary stalling; small read-only tables fit in the CPU memory caches, which are faster.


    Essentially, the fastest math operations are the ones you don't do, or only do once.


    Youjaes (talk) 06:00, 13 November 2019 (UTC)[reply]

Multiplication Algorithm Based on Collatz Function

    I'm not sure if it's notable, but a new multiplication algorithm based on Collatz's function has emerged:

    --DaBler (talk) 08:15, 16 May 2020 (UTC)[reply]

    For being notable, it must be mentioned in secondary sources. This is not the case here, as it is very recent original research. So, at this time, it is not suitable for WP. D.Lazard (talk) 08:23, 16 May 2020 (UTC)[reply]

N÷(1÷M)

    N×M multiplication can be done in terms of calculating N÷(1÷M). 2A01:119F:253:7000:694A:A3D8:66A5:C4B2 (talk) 11:52, 28 January 2022 (UTC)[reply]

Merger proposal

    The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
    The result of this discussion was to merge. Fawly (talk) 17:38, 28 July 2022 (UTC)[reply]

    I propose merging Fürer's algorithm into Multiplication algorithm. This article was created when Fürer's algorithm had the best asymptotic runtime for multiplication, but this hasn't been the case for some time, and now I don't think it's very significant. A similar situation happened with Coppersmith–Winograd algorithm, which I ended up merging into Matrix multiplication algorithm (and later split into Computational complexity of matrix multiplication.) (In fact, arguably, Coppersmith–Winograd is more significant to the matrix multiplication literature than Fürer's is to integer multiplication.)

    Most of the Fürer's algorithm article is devoted to other, faster algorithms for multiplication, so it makes sense to dissolve it into the section on that here. Another option would be to make a separate article Computational complexity of multiplication or something similar. Fawly (talk) 21:30, 28 February 2022 (UTC)[reply]

    "Most of the Fürer's algorithm article is devoted to other" That is what is needed to be changed. Algorithm described since source code is available. 2A00:1370:8184:164:389D:FB46:1A30:ED39 (talk) 08:14, 30 March 2022 (UTC)[reply]
    My argument is not about the state of the article exactly, but that the topic is not notable enough for its own article. I don't know what the notability guidelines are for algorithms, but Fürer's algorithm does not have the press coverage or significance in the literature that the Schönhage–Strassen algorithm or the algorithm of Harvey and van der Hoeven has. So, I think a merge is appropriate.
    Adding more content to the article would not change this. It might not even be desirable, since what one would have to add to distinguish it from other FFT algorithms would likely be overly technical. Fawly (talk) 18:56, 30 March 2022 (UTC)[reply]
    Merge It seems to me that Fürer's algorithm is best appreciated as a step along the way of finding multiplication algorithms with good asymptotic runtime, and therefore it is most useful for the reader to see it in context. If and when the target section outgrows its current article one could consider a split, but a substantial section on asymptotically performant algorithms should be due weight in an article expressly on multiplication algorithms. Felix QW (talk) 11:25, 1 April 2022 (UTC)[reply]
    The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

x, *

    Please change all instances of * and the letter x used as multiplication operators to ×, except in the computers section. Please remove excess vertical whitespace in the computers section. — Preceding unsigned comment added by 109.70.40.55 (talk)

     Not done: According to the page's protection level you should be able to edit the page yourself. If you seem to be unable to, please reopen the request with further details. ‑‑ElHef (Meep?) 20:26, 9 April 2022 (UTC)[reply]

Lattice multiplication inconsistency

    In the lattice multiplication section of this article, it is claimed that the first known appearance of the lattice method in Europe was in a book of Fibonacci, and that in Arab mathematics, the first known appearance of the method is due to Al-Khawarizmi.

    However, the history section of the article on lattice multiplication says that this is totally false and that neither of the two authors mentioned the lattice method in his book.

    As such, there are contradictory statements in two different articles on the same topic. Someone should look into the sources and see which one is correct (I might get around to it once I have more time but no guarantee) المبتدئ18 (talk) 20:35, 21 November 2023 (UTC)[reply]