Fast Matrix Multiplication

Consider the following two-by-two matrix multiplication:

    [ a  b ] [ e  f ]   [ ae+bg  af+bh ]
    [ c  d ] [ g  h ] = [ ce+dg  cf+dh ]

Note that we need eight multiplications and four additions in order to perform the multiplication. In general, the straightforward algorithm computes each of the n^2 entries of the product of two n-by-n matrices as a dot product of length n, so it performs n^3 multiplications and n^2(n-1) additions.
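
As a baseline, here is a minimal sketch of the straightforward algorithm in Python; the function name and the list-of-lists representation are illustrative choices, not anything from the text:

    def naive_multiply(A, B):
        """Multiply two n-by-n matrices (lists of lists) the straightforward way."""
        n = len(A)
        C = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Each entry is a dot product of length n:
                # n multiplications and n - 1 additions.
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]
        return C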

In 1969, Strassen [Str69] surprised everyone by discovering a way to multiply the above matrices using only seven multiplications. He first computed:

x1 = (a + d)(e + h)
x2 = (b - d)(g + h)
x3 = (a - c)(e + f)
x4 = (a + b)h
x5 = (c + d)e
x6 = a(f - h)
x7 = d(-e + g)

Then he noted that:

ae + bg = x1 + x2 - x4 + x7
af + bh = x4 + x6
ce + dg = x5 + x7
cf + dh = x1 - x3 - x5 + x6

Of course, the number of additions and subtractions went up to eighteen, but this is a worthwhile trade: multiplications are the expensive operations, and in the recursive application below each of the seven multiplications is itself a matrix multiplication, while the eighteen additions are cheap entrywise operations.
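
These identities are easy to verify numerically. Here is a small Python check (the random test values are an illustrative choice; the variable names match the formulas above):

    import random

    # Pick random values for the eight entries.
    a, b, c, d, e, f, g, h = (random.randint(-9, 9) for _ in range(8))

    # Strassen's seven products.
    x1 = (a + d) * (e + h)
    x2 = (b - d) * (g + h)
    x3 = (a - c) * (e + f)
    x4 = (a + b) * h
    x5 = (c + d) * e
    x6 = a * (f - h)
    x7 = d * (-e + g)

    # The four entries of the product, recovered from x1..x7.
    assert a*e + b*g == x1 + x2 - x4 + x7
    assert a*f + b*h == x4 + x6
    assert c*e + d*g == x5 + x7
    assert c*f + d*h == x1 - x3 - x5 + x6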

In order to extend this from two-by-two to n-by-n, we merely divide each matrix into four equal n/2-by-n/2 blocks and multiply the blocks as above in a recursive manner. (Strassen's identities never use commutativity, so they remain valid when the entries are themselves matrices.) This requires n to be a power of two, but that is fine: we can always pad the matrices with zeros up to the next power of two. Then, if you solve the recurrence relation that this gives (setting it up is a nice exercise), the complexity comes out to be O(n^(log2 7)), which is roughly O(n^2.81).
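
For concreteness, here is a minimal recursive sketch in Python using NumPy. The function name, the NumPy representation, and the choice to recurse all the way down to one-by-one blocks are assumptions made for illustration; a practical implementation would switch to the classical algorithm below some modest block size.

    import numpy as np

    def strassen(A, B):
        """Multiply two n-by-n matrices, n a power of two, by Strassen's scheme."""
        n = A.shape[0]
        if n == 1:
            return A * B  # base case: one-by-one blocks multiply as scalars

        # Split each matrix into four n/2-by-n/2 blocks.
        m = n // 2
        a, b = A[:m, :m], A[:m, m:]
        c, d = A[m:, :m], A[m:, m:]
        e, f = B[:m, :m], B[:m, m:]
        g, h = B[m:, :m], B[m:, m:]

        # Seven recursive multiplications instead of eight.
        x1 = strassen(a + d, e + h)
        x2 = strassen(b - d, g + h)
        x3 = strassen(a - c, e + f)
        x4 = strassen(a + b, h)
        x5 = strassen(c + d, e)
        x6 = strassen(a, f - h)
        x7 = strassen(d, g - e)

        # Reassemble the four blocks of the product.
        C = np.empty((n, n), dtype=A.dtype)
        C[:m, :m] = x1 + x2 - x4 + x7
        C[:m, m:] = x4 + x6
        C[m:, :m] = x5 + x7
        C[m:, m:] = x1 - x3 - x5 + x6
        return C

As a quick sanity check, strassen(A, B) agrees with A @ B for random 8-by-8 integer matrices. Counting operations, the seven recursive calls plus the eighteen block additions give exactly the recurrence whose solution is the O(n^(log2 7)) bound above.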

For the curious, Pan [Pan80] discovered an even faster way to do this.


[Pan80] Pan, V. Y. "New fast algorithms for matrix operations." SIAM Journal on Computing 9, 1980, pp. 321-342.

[Str69] Strassen, V. "Gaussian elimination is not optimal." Numerische Mathematik 13, 1969, pp. 354-356.