In commits a004e72b9 (1.0.2) and 6f35f6deb (1.0.1) we released a fix for CVE-2016-2177. The fix corrects a common coding idiom present in OpenSSL 1.0.2 and OpenSSL 1.0.1 which relies on pointer arithmetic that is undefined in the C specification. The problem does not exist in master (OpenSSL 1.1.0), which refactored this code some time ago. This usage could give rise to a low severity security issue in certain unusual scenarios. The OpenSSL security policy (https://www.openssl.org/policies/secpolicy.html) states that we publish low severity issues directly to our public repository, and they get rolled up into the next release whenever that happens. The rest of this blog post describes the problem in a little more detail, explains the scenarios where a security issue could arise, and why this issue has been rated as low severity.
The coding idiom we are talking about here is this one:
```c
if (p + len > limit)
{
    return; /* Too long */
}
```
Where `p` points to some malloc’d data of `SIZE` bytes and `limit == p + SIZE`. `len` here could be from some externally supplied data (e.g. from a TLS message).
The idea here is that we are performing a length check on peer supplied data to ensure that `len` (received from the peer) is actually within the bounds of the supplied data. The problem is that this pointer arithmetic is only defined by the C90 standard (which we aim to conform to) if `len <= SIZE`, so if `len` is too long then this is undefined behaviour, and (theoretically) anything could happen. In practice it usually works the way you expect it to work and there is no issue, except in the case where an overflow occurs.
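The fix rewrites checks of this form so that the possibly out-of-bounds pointer is never computed. A minimal sketch of the idea (this shows the general approach rather than the exact patched code): compare `len` against the space remaining in the buffer, since `limit - p` is always defined when both point within the same object.

```c
/*
 * Sketch of the well-defined form of the check. Assumes p and limit
 * both point into (or one past the end of) the same buffer, so the
 * subtraction limit - p is defined; no out-of-bounds pointer such as
 * p + len is ever formed.
 */
if (len > (size_t)(limit - p))
{
    return; /* Too long */
}
```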
In order for an overflow to occur `p + len` would have to be sufficiently large to exceed the maximum addressable location. Recall that `len` here is received from the peer. However, in all the instances that we fixed it represents either one byte or two bytes, i.e. its maximum value is `0xFFFF`. Assuming a flat 32 bit address space this means that if `p < 0xFFFF0000` then there is no issue; putting it another way, approx. 0.0015% (`0xFFFF` out of 2^32 addresses) of all conceivable addresses are potentially “at risk”.
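To make the wrap-around concrete, here is a small standalone illustration that simulates a hypothetical flat 32 bit address space using unsigned integer arithmetic (the addresses are invented for the example; unsigned wrap-around is well defined in C, unlike the pointer arithmetic the buggy check relies on):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical 0x100 byte buffer near the very top of memory */
    uint32_t p = 0xFFFFFE00u;
    uint32_t limit = 0xFFFFFF00u;   /* p + SIZE */
    uint32_t len = 0xFFFFu;         /* maximum two byte length */

    uint32_t end = p + len;         /* wraps around to 0x0000FDFF */

    /* The wrapped value compares as small, so the length check that
     * should reject this len instead lets it through. */
    if (end > limit)
        printf("rejected as too long\n");
    else
        printf("wrongly accepted: end=0x%08X limit=0x%08X\n", end, limit);
    return 0;
}
```

Compiled and run, this takes the “wrongly accepted” branch, which is exactly the failure mode the overflow creates.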
Most architectures do not have the heap at the edge of high memory. Typically, one finds:
```
0:    Reserved low-memory
LOW:  TEXT
      DATA
      BSS
      HEAP
      BREAK
      ...
      mapped libraries and files
      STACK
```
And often (on systems with a unified kernel/user address space):
```
HIGH: kernel
```
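One quick way to see this ordering on a particular machine is a throwaway program like the one below (my own sketch, not part of the fix; the exact addresses, and even the ordering, depend on the platform and on ASLR):

```c
#include <stdio.h>
#include <stdlib.h>

int in_data = 1;                          /* initialised: DATA segment */
int in_bss;                               /* zero-initialised: BSS */

int main(void)
{
    static const char in_rodata[] = "x";  /* read-only data, near TEXT */
    int on_stack = 0;                     /* STACK */
    void *on_heap = malloc(16);           /* HEAP */

    /* On a typical layout these print in roughly ascending order:
     * rodata/text, data, bss, heap, stack. */
    printf("rodata: %p\n", (const void *)in_rodata);
    printf("data:   %p\n", (void *)&in_data);
    printf("bss:    %p\n", (void *)&in_bss);
    printf("heap:   %p\n", on_heap);
    printf("stack:  %p\n", (void *)&on_stack);

    free(on_heap);
    return 0;
}
```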
I don’t claim to know much about memory layouts, so perhaps there are architectures out there that don’t conform to this. So, assuming that they exist, what would the impact be if `p` was sufficiently large that an overflow could occur?
There are two primary locations where a 2 byte `len` could be an issue (I will ignore 1 byte `len`s because it seems highly unlikely that we will ever run into a problem with those in reality). The first location is whilst reading in the ciphersuites in a ClientHello, and the second is whilst reading in the extensions in the ClientHello.
Here is the (pre-patch) ciphersuite code:
```c
n2s(p, i);
if (i == 0) {
    al = SSL_AD_ILLEGAL_PARAMETER;
    SSLerr(SSL_F_SSL3_GET_CLIENT_HELLO, SSL_R_NO_CIPHERS_SPECIFIED);
    goto f_err;
}
/* i bytes of cipher data + 1 byte for compression length later */
if ((p + i + 1) > (d + n)) {
    /* not enough data */
    al = SSL_AD_DECODE_ERROR;
    SSLerr(SSL_F_SSL3_GET_CLIENT_HELLO, SSL_R_LENGTH_MISMATCH);
    goto f_err;
}
if (ssl_bytes_to_cipher_list(s, p, i, &(ciphers)) == NULL) {
    goto err;
}
p += i;
```
Here `i` represents the two byte length read from the peer, `p` is the pointer into our buffer, and `d + n` is the end of the buffer. If `p + i + 1` overflows then we will end up passing an excessively large `i` value to `ssl_bytes_to_cipher_list()`. Analysing that function it can be seen that it will loop over all of the memory from `p` onwards, interpreting it as ciphersuite data bytes and attempting to read their values. This is likely to cause a crash once we overflow (if not before).
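For completeness, here is a sketch of how that check can be expressed without the undefined arithmetic, following the same length-comparison approach as above (a sketch of the idea, not necessarily the exact code committed in the fix):

```c
/*
 * Sketch: compare lengths rather than pointers. d is the start of the
 * buffer, n its total length (a signed length in the surrounding
 * function), and p has only ever been advanced within the buffer, so
 * p - d (the amount consumed) is always defined. i is at most 0xFFFF,
 * so i + 1 cannot itself overflow.
 */
if ((long)i + 1 > n - (p - d)) {
    /* not enough data */
    al = SSL_AD_DECODE_ERROR;
    SSLerr(SSL_F_SSL3_GET_CLIENT_HELLO, SSL_R_LENGTH_MISMATCH);
    goto f_err;
}
```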
The analysis of the extensions code is similar, though more complicated, but the outcome is the same: it will loop over the out-of-bounds memory, interpreting it as extension data, and will eventually crash.
It seems that the only way the above two issues could be exploited is via a DoS.
The final consideration in all of this is how much of the above is under the control of the attacker. Clearly `len` is, but not the value of `p`. In order to exploit this an attacker would have to first find a system which is likely to allocate at the very highest end of address space and then send a high number of requests through until one “got lucky” and a DoS results. A possible alternative approach is to send out many requests to many hosts attempting to find one that is vulnerable.
It appears that such attacks, whilst possible, are unlikely and would be difficult to achieve successfully in reality, with a worst case scenario being a DoS. Given the difficulty of the attack and the relatively low impact, we rated this as a low severity issue.
With thanks to Guido Vranken who originally reported this issue.