# Tweaking integer overflows

published on 2013-08-28T13:00:40+00:00, last updated 2013-08-28T13:00:40+00:00

Integer overflows when calculating the memory size for data structures (such as to hold image data from an image file) are a common source of security vulnerabilities. Often, such integer overflows are initially reported as denial-of-service issues, as the result of an arbitrarily large memory allocation. But with some tweaking, they can be turned into the successful allocation of a memory area that is too small because the integer overflow results in the wrong computed allocation size. Subsequent code writes beyond the allocated memory area because it is unexpectedly small, and a heap-based buffer overflow vulnerability is the result.

In the following, I present some techniques to achieve an out-of-bounds heap write operation after an overflowing size computation that involves one or more multiplications.  This is still very far away from an actual exploit which achieves code execution, but a proof-of-concept showing the out-of-bounds write is still helpful for vulnerability assessment. (In general, we do not require reporters to provide us with fully working exploits before we fix something as a security vulnerability.)

It is often beneficial if the resulting product is as small as possible because the data written to the heap after the allocation comes from the document or protocol data unit, following the length information, and there might be additional constraints on the amount of data that can be present there. To simplify the presentation, I assume a machine word size of 32 bits.

The first important case is size computations of the form a·C, where a is controlled by the attacker and C is some compile-time constant (e.g., the size of a struct).  We write C = 2^i·C' such that 2 does not divide C'.  We can then use the extended Euclidean algorithm to compute a multiplicative inverse a of C' modulo 2^32, that is, find integers a and b such that a·C' + b·2^32 = 1.  This gives our desired value of a, and the result of the multiplication after the overflow is 2^i.
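As a sketch of this computation, assuming a 32-bit word and a hypothetical element size of C = 12 bytes (so C' = 3 and i = 2); Python's three-argument `pow` with exponent −1 performs the extended-Euclidean inversion internally (Python 3.8+):

```python
# Sketch: choose a count a so that a*C overflows to 2^i (mod 2^32).
# C = 12 is a hypothetical struct size: C = 2^2 * 3, so C' = 3 and i = 2.
WORD = 2**32
C = 12
i = (C & -C).bit_length() - 1   # exponent of the largest power of 2 dividing C
C_odd = C >> i                  # C', the odd part of C

# Multiplicative inverse of C' modulo 2^32 (extended Euclidean algorithm).
a = pow(C_odd, -1, WORD)

print(a)                        # the attacker-supplied element count
print((a * C) % WORD)           # allocation size after the overflow: 2^i = 4
```

The allocation then receives only 4 bytes, while the code believes it reserved space for roughly 2.8 billion 12-byte elements.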

The other case occurs mainly in image formats and involves two or more factors that are controlled by the attacker.  We assume that the code imposes bounds on the factors, perhaps requiring that they are not larger than 2^18.  (Otherwise, we can use the extended Euclidean algorithm as before.)

When the expression which computes the allocation size is of the form a·b + C for a small, compile-time constant C, it is possible to use the identity 2^32 - 1 = (2^16 - 1)·(2^16 + 1), that is, put a = 2^16 - 1 and b = 2^16 + 1.  The end result modulo the word size will be C - 1.  Such an addition happens behind the scenes in the old implementation of `operator new[]` in GCC, provided that the array allocation needs a cookie.
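A quick check of this identity, with a hypothetical cookie size of C = 8 bytes:

```python
WORD = 2**32
a = 2**16 - 1          # attacker-controlled element count
b = 2**16 + 1          # attacker-controlled element size
C = 8                  # hypothetical allocation cookie added by the runtime

# a*b = 2^32 - 1, so the computed size wraps around to C - 1.
size = (a * b + C) % WORD
print(size)            # 7
```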

If the size computation is just a product a·b, approximating the square root of 2^32 with a = 2^16 + 1 and b = 2^16, making sure that the product still overflows, yields an end result of 2^16 (modulo the word size).  This might be too large to actually achieve an out-of-bounds heap write using the available payload data.  A completely different approach looks at the factorization of integers slightly larger than the word size:

2^32 + 1 = 4294967297 = 641·6700417
2^32 + 2 = 4294967298 = 2·3·715827883
2^32 + 3 = 4294967299 = 7·613566757
2^32 + 4 = 4294967300 = 2·2·5·5·13·41·61·1321

2^32 + 4 is a very interesting candidate because we can arrange the prime factors like this: a = 2·5·5·1321 = 66050, b = 2·13·41·61 = 65026, and the final product will be 4 (modulo the word size).
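This arrangement is easy to verify directly (the 2^18 bound here is the hypothetical per-factor limit assumed above):

```python
WORD = 2**32
# Rearranged prime factors of 2^32 + 4 = 2*2*5*5*13*41*61*1321:
a = 2 * 5 * 5 * 1321   # 66050
b = 2 * 13 * 41 * 61   # 65026

assert a * b == 2**32 + 4
assert a < 2**18 and b < 2**18   # both satisfy the assumed per-factor bound
print((a * b) % WORD)            # allocation size after the overflow: 4
```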

Such smooth integers which consist entirely of small prime factors are quite frequent, so this technique applies to other word sizes as well.  It is also possible to take an additional constant factor into account, as in a·b·C.  Factoring numbers close to 2^64 is still straightforward using Pollard's ρ algorithm.  The distribution of the prime factors to match size constraints on the final factors can be automated, too.
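A minimal sketch of such a factoring step, using textbook Pollard's ρ together with a Miller-Rabin primality test (an illustration, not code from any particular tool):

```python
import math
import random

def is_prime(n):
    """Deterministic Miller-Rabin; the fixed witness set covers all n < 3.3e24."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    """Find a nontrivial factor of composite n (Floyd cycle detection)."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c, d = random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                       # retry with a new polynomial on failure
            return d

def factor(n):
    """Full prime factorization, smallest factors first."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(factor(d) + factor(n // d))

print(factor(2**32 + 4))   # [2, 2, 5, 5, 13, 41, 61, 1321]
print(factor(2**64 + 4))   # numbers of this size factor in well under a second
```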

Although these techniques also apply to a 64-bit word size, there is an important practical difference: often, the attacker-controlled factors are (implicitly) cast to `size_t`, a 64-bit type.  If those factors are still restricted to 32 bits, this can make overflows impossible to achieve.  This is one of the primary ways in which switching to 64-bit architectures can increase security.  However, in contrast to the increased randomness available in a 64-bit address space for ASLR (Address Space Layout Randomization), this is more or less an accident of the way integers are handled in C and C++.
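The arithmetic behind that observation is simple: two factors that each fit in 32 bits can never overflow a 64-bit product:

```python
# Largest possible product of two factors that are each limited to 32 bits:
largest = (2**32 - 1) * (2**32 - 1)     # = 2^64 - 2^33 + 1
print(largest < 2**64)                  # True: the product always fits in 64 bits
```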