Antonio Bianchi, PhD Candidate

New Website (2019-05-27)<h3>
<b>My personal website is now: <a href="http://antoniobianchi.me/">antoniobianchi.me</a></b></h3>
<h3>
<b><br /></b></h3>
mathconsole (iCTF 2013): writeup (2013-12-10)<div style="text-align: justify;">
<br />
For the <a href="http://ictf.cs.ucsb.edu/">iCTF</a> 2013 competition, I developed the service called mathconsole. This writeup will be a little different from usual: I will try not only to explain how to exploit this service, but also to justify some design choices I made while developing it. Additionally, I will give some insights into how teams exploited and patched this service.<br />
<br />
iCTF is a CTF competition organized by <a href="http://ucsb.edu/">UCSB</a>; its rules differ from those of typical CTFs and tend to change from year to year. You can read the rules of the 2013 edition <a href="https://lists.cs.ucsb.edu/pipermail/ctf-participants/20131204/000499.html">here</a>. My main goal for this service was to implement a server with two well-known vulnerabilities found in the Wii and PlayStation 3 gaming consoles. As far as I know, both vulnerabilities were discovered by team Twiizers/fail0verflow <a href="#r1">[1]</a><a href="#r2">[2]</a><a href="#r3">[3]</a><a href="#r4">[4]</a>.</div>
<div style="text-align: justify;">
<br />
The first vulnerability (found in the Wii console) is caused by incorrectly checking an RSA signature; the second (found in the PlayStation 3) is caused by using a non-random nonce in the generation of an ECDSA signature. I will explain these vulnerabilities in detail later in this writeup. Unfortunately, nobody exploited the second vulnerability (the ECDSA one), probably because the other one was easier and, given the structure of the competition, it was not really worth exploiting. If you are not interested in this vulnerability, you can skip the sections "Authorization level bug" and "ECDSA vulnerability".<br />
<br />
<a href="http://cs.ucsb.edu/~antoniob/blog/mathconsole_blog.zip">This zip file</a> contains the source code of the service, the scripts used to generate benign traffic, the .deb package installed on teams' machines, and all the other files I will refer to during this writeup (forgive me for the "not exactly clean" code).<br />
<br /></div>
<h2 style="text-align: justify;">
Overview</h2>
<div style="text-align: justify;">
This service is written in Python 2.7 and implemented as a classical forking server. For the networking part, I mainly reused the code from PowerPlan, the service I wrote for the previous iCTF (you can read a very good writeup about it <a href="http://www.rogdham.net/2013/03/23/ictf-powerplan-write-up.en">here</a>). The service listens on port 9898; when you connect to it you get the following message:</div>
<div style="text-align: justify;">
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: 'Courier New',Courier,monospace;">Welcome to the most powerful backdoor-free calculator<br />You can use it to safely perform your calculations about Nuclear reactions and trajectories of missiles<br />A new on-the-cloud hacker-proof file storage functionality has been just added<br />available commands:<br />sum|&lt;n1&gt;|&lt;n2&gt; required authorization: L1 or L2<br />multiply|&lt;n1&gt;|&lt;n2&gt; required authorization: L1<br />mod|&lt;n1&gt;|&lt;n2&gt; required authorization: L1 or L2<br />power|&lt;n1&gt;|&lt;n2&gt; required authorization: L3<br />write|&lt;fname&gt;|&lt;b64 content&gt; required authorization: L2<br />read_encrypted1|&lt;fname&gt;|&lt;key&gt; required authorization: L2<br />read_encrypted2|&lt;fname&gt;|&lt;key&gt; required authorization: L2<br />read_protected|&lt;fname&gt;|&lt;password&gt; required authorization: L1 + password<br />what do you want to do?</span></span></blockquote>
<br />
When a command is inserted (in the format &lt;command&gt;|&lt;p1&gt;|&lt;p2&gt;), a signature is requested. Different commands require different authorization levels, which a user obtains by providing a (base64-encoded) 256-byte signature. We will see that a valid L1 signature can be trivially forged, and that the verification of both L2 and L3 signatures contains fundamental flaws that can be exploited. When a correct signature is provided, the specified command is executed.<br />
<br />
The commands <i>sum</i>, <i>multiply</i>, <i>mod</i>, and <i>power</i> just perform the corresponding mathematical operations and return the result to the user. <i>write</i> allows the user to write a file in the folder <i>service_files</i> (we used this command to set the flags on teams' machines). <i>read_protected</i> returns the content of a file in the <i>service_files</i> folder, but it requires a password. Specifically, it checks that the provided password matches the first line of the requested file's content. We used this command to check that the flags we stored on the teams' machines were not deleted.<br />
<br />
Finally, <i>read_encrypted1</i> and <i>read_encrypted2</i> read and return the content of a file in the folder <i>service_files</i>; however, they require proper authentication. In theory, it should not be possible to execute them without the correct RSA and ECDSA private keys (you can find both private keys in the file private_keys.txt). In practice, the vulnerabilities of this service allow an attacker to skip almost entirely the verification of the RSA signature and to recover the ECDSA private key.<br />
<br />
In this service, flags were written by us using the command <i>write</i>. Specifically, the first line was a random password protecting the file (whose value was known only to us), whereas the second line was the flag. The name of the file was the <i>flag_id</i>. Attackers had to write exploits able to steal the <i>flag</i> corresponding to the provided <i>flag_id</i>. To do so, the only possibility was to use the commands <i>read_encrypted1</i> or <i>read_encrypted2</i>. Replay attacks were not possible, since the requested <i>flag_id</i> (and thus the command that an attacker needed to sign) was random and always different.<br />
<br />
<h2>
Anti-decompilation</h2>
I tried to make this service a little tricky to decompile. In my experience, <a href="https://github.com/wibiti/uncompyle2">uncompyle2</a> is usually able to recover the original Python code "almost perfectly" from a compiled .pyc file. My goal was to create a .pyc file that worked perfectly when executed by the Python interpreter, but made the decompilation process fail.<br />
<br />
By randomly fuzzing a .pyc file and looking into the Python bytecode specification, I realized that:<br />
<ul>
<li>a division operation (like this: a = 0/0) is compiled to the following 10-byte sequence: 15 5A 12 00 64 04 00 64 04 00</li>
<li>changing the second byte from 0x5A to 0x00 makes uncompyle2 fail with the following error:<br /><span style="font-size: x-small;">[...]</span><br /><span style="font-size: x-small;">194 LOAD_CONST 0</span><br /><span style="font-size: x-small;">197 LOAD_CONST 0</span><br /><span style="font-size: x-small;">200 BINARY_DIVIDE None</span><br /><span style="font-size: x-small;">201 STOP_CODE None</span><br /><span style="font-size: x-small;">202 &lt;18&gt; None</span><br /><span style="font-size: x-small;">203 STOP_CODE None</span><br /><br /><span style="font-size: x-small;">204 LOAD_CONST 0</span><br /><span style="font-size: x-small;">207 LOAD_CONST 0</span><br /><span style="font-size: x-small;">210 BINARY_DIVIDE None</span><br /><span style="font-size: x-small;">211 STORE_NAME 'a'</span><br /><br /><span style="font-size: x-small;">214 LOAD_CONST 0</span><br /><span style="font-size: x-small;">217 LOAD_CONST 0</span><br /><span style="font-size: x-small;">220 BINARY_DIVIDE None</span><br /><span style="font-size: x-small;">221 STORE_NAME 'a'</span><br /><span style="font-size: x-small;">[...]</span><br /><span style="font-size: x-small;">Syntax error at or near `STOP_CODE' token at offset 201</span></li>
</ul>
So, at the top level of the game_server.py file, I inserted the following code (which, obviously, is never executed):<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;">#uncompyle trick<br />fff=False<br />if(fff):<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0<br /> a = 0/0 </span></blockquote>
<br />
compile.py compiles game_server.py into game_server.pyc (using the standard Python mechanism), then looks for the byte sequence corresponding to the 15 division instructions shown above. When it finds this sequence, it patches the second byte of the first division instruction from 0x5A to 0x00. This makes uncompyle2 unable to decompile the .pyc file correctly.<br />
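The patching step can be sketched as follows (a Python 3 sketch; the function name <i>patch_divide</i> and the surrounding structure are mine, only the 10-byte pattern and the 0x5A to 0x00 flip come from the description above):

```python
# Each "a = 0/0" statement compiles down to the repeating 10-byte unit the
# post describes: BINARY_DIVIDE (0x15), STORE_NAME (0x5A 0x12 0x00),
# LOAD_CONST (0x64 0x04 0x00), LOAD_CONST (0x64 0x04 0x00).
PATTERN = bytes([0x15, 0x5A, 0x12, 0x00, 0x64, 0x04, 0x00, 0x64, 0x04, 0x00])

def patch_divide(blob: bytes) -> bytes:
    """Replace the STORE_NAME opcode (0x5A) right after the first
    BINARY_DIVIDE with STOP_CODE (0x00). The patched instruction is never
    reached at run time (the guarding `if` is always false), but it makes
    uncompyle2 choke."""
    i = blob.find(PATTERN)
    if i == -1:
        raise ValueError("division pattern not found in .pyc blob")
    out = bytearray(blob)
    out[i + 1] = 0x00  # 0x5A (STORE_NAME) becomes 0x00 (STOP_CODE)
    return bytes(out)
```

Reverting the patch is the same operation in the opposite direction, which is exactly the fix described below.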
<br />
The easiest way to fix this is to change the byte at offset 0xE7 of game_server.pyc from 0x00 back to 0x5A. After this modification, uncompyle2 is able to recover the original source code.<br />
<br />
<br />
<h2>
Signatures and authorization levels</h2>
Before executing a command, the service checks whether the provided signature grants the authorization level required to execute it. For instance, to execute the command <i>multiply</i> users have to provide an L1 signature; to execute the command <i>power</i>, an L3 signature is necessary; and to execute the command <i>sum</i>, they can provide either an L1 or an L2 signature. Every signature is 256 bytes long and must be base64-encoded before being sent to the server.<br />
<br />
I will now explain how L1, L2, and L3 signatures are legitimately created by the scripts generating benign traffic (see benign1.py, benign2.py, benign3.py, benign4.py). In this context, <i>command</i> is the full string of the requested command, for instance "mod|23|5":<br />
<ul>
<li><i>L1</i>: a sha1 hash of the sent command:<br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">hash = self.sha1(command)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">cert = "X"*(256-len(hash)) + hash</span></span><br />
Of course, this can be trivially forged by anyone.</li>
<li><i>L2</i>: an RSA signature of the sent command:<br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> signer = PKCS1_v1_5.new(private_key_rsa)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> digest = SHA.new() </span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> digest.update(command)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;"> cert = signer.sign(digest)</span></span><br />
To compute it, I use a (secret) private key and the PyCrypto library. See the file benign1.py for the complete source code.</li>
<li><i>L3</i>: an ECDSA signature of the sent command:<br /><span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">cert_key = private_key_ec.sign_digest(hash,k=4)</span></span><br />
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">cert = "X"*(256-len(cert_key)) + cert_key</span></span><br />
To compute it, I use a (secret) private key and the Python ecdsa library (install it by using <i>pip install ecdsa</i>). See the file benign2.py for the complete source code.</li>
</ul>
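To make the L1 weakness concrete, here is a minimal Python 3 sketch of how anyone can forge an L1 signature (assuming the hash is the 40-character hex digest, as the padding arithmetic suggests; <i>forge_l1</i> is a name I made up):

```python
import base64
import hashlib

def forge_l1(command: str) -> bytes:
    """Forge an L1 'signature': just the sha1 of the command, left-padded
    with 'X' to 256 bytes, then base64-encoded for transport."""
    h = hashlib.sha1(command.encode()).hexdigest()
    cert = b"X" * (256 - len(h)) + h.encode()
    return base64.b64encode(cert)

sig = forge_l1("mod|23|5")
```

No secret is involved at all, so any client that knows the command string can compute a valid L1 signature.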
<br />
<h2>
Authorization level bug</h2>
The code managing this authorization-checking functionality is contained in the functions <i>init_access</i> and <i>check_security_level</i>. I implemented it in a deliberately over-complicated way, in order to hide a subtle bug. My goal was to allow the command <i>read_encrypted2</i> to be used with an L3 signature (even though the textual description of the commands says that <i>read_encrypted2</i> needs an L2 signature). This is relevant for the full exploitation of the ECDSA vulnerability; if you are not interested in it, you can skip this section.<br />
<br />
I will now explain the permission checking in terms of mathematical operations; the actual implementation is slightly different. A function <i>indexOf()</i> is defined such that, given a tuple <i>p</i> = &lt;command, permission&gt;, it returns an index between 1 and 24. For instance, <i>indexOf(&lt;sum, L3&gt;)</i> = 3 and <i>indexOf(&lt;multiply, L1&gt;)</i> = 4. The variable <i>access_summary</i> is computed with the following formula:<br />
$$access\_summary = \prod_{p \in P}{(primes[indexOf(p)])^2}$$<br />
<br />
where <i>primes</i> is a sorted list of all the prime numbers up to 97 and <i>P</i> is the list of tuples &lt;command, permission&gt; specified in the textual description of each command. For instance:<br />
$$\langle sum, L1 \rangle \in P \text{, but } \langle power, L2 \rangle \not\in P$$<br />
<br />
To check whether a given command C can be accessed by a signature of level L, the service checks whether:<br />
$$(access\_summary \; \% \; primes[indexOf(\langle C, L \rangle)]) == 0$$<br />
If true, then the command C can be executed by providing a signature of level L, otherwise it cannot.<br />
<br />
<i>indexOf</i>(&lt;<i>read_encrypted2</i>, <i>L3</i>&gt;) = 21. The list of primes has been deliberately set incorrectly, so that primes[21] = 75 (instead of 79). Since <i>access_summary</i> contains the factor 5 twice (corresponding to the tuple &lt;<i>sum</i>, <i>L2</i>&gt;) and the factor 3 twice (corresponding to the tuple &lt;<i>sum</i>, <i>L1</i>&gt;), it is divisible by 75 = 3 &middot; 5&sup2;, and so the command <i>read_encrypted2</i> can be used with an L3 signature, contrary to the textual specification.<br />
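A toy reconstruction of this scheme makes the bug easy to see (Python 3; the command ordering and the list of allowed pairs are my reconstruction from the banner and the indices given above, not the service's actual code):

```python
# Sorted primes up to 97, with 79 (at index 21) "accidentally" replaced by 75.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37,
          41, 43, 47, 53, 59, 61, 67, 71, 73, 75, 83, 89, 97]

COMMANDS = ["sum", "multiply", "mod", "power", "write",
            "read_encrypted1", "read_encrypted2", "read_protected"]

def index_of(command: str, level: int) -> int:
    """Map a <command, L-level> pair to an index in 1..24,
    so that index_of("sum", 3) == 3 and index_of("multiply", 1) == 4."""
    return COMMANDS.index(command) * 3 + level

# Pairs allowed by the textual spec (reconstructed from the welcome banner).
ALLOWED = [("sum", 1), ("sum", 2), ("multiply", 1), ("mod", 1), ("mod", 2),
           ("power", 3), ("write", 2), ("read_encrypted1", 2),
           ("read_encrypted2", 2), ("read_protected", 1)]

access_summary = 1
for cmd, lvl in ALLOWED:
    access_summary *= PRIMES[index_of(cmd, lvl)] ** 2

def check_security_level(command: str, level: int) -> bool:
    return access_summary % PRIMES[index_of(command, level)] == 0

# 75 = 3 * 5**2, and both 3 (<sum, L1>) and 5 (<sum, L2>) appear squared in
# access_summary, so the undocumented combination slips through:
print(check_security_level("read_encrypted2", 3))  # True (the hidden bug)
print(check_security_level("read_encrypted2", 2))  # True (as documented)
print(check_security_level("power", 1))            # False
```

With the correct prime 79 at index 21, the first check would fail, which is exactly the patch suggested later.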
<br />
<h2>
RSA vulnerability</h2>
As I said, in implementing the RSA vulnerability I tried to mimic the one found in the Wii console <a href="#r2">[2]</a>. For a general overview of how RSA can be used to sign a message, look here <a href="#r5">[5]</a>. In theory, to produce a valid L2 signature (which can be used to read the flags by invoking the command <i>read_encrypted1</i> or <i>read_encrypted2</i>), it is necessary to have the original private key. However, there is a fundamental bug in how the signature is checked:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">def verify_with_pubkey_rsa(self,public_key_rsa,local_hash,certificate):<br /> m = public_key_rsa.encrypt(certificate,None)[0]<br /> m = "\x00"*(256-len(m)) + m <br /> remote_hash = m[-20:] #the padding is not checked<br /> libc = ctypes.CDLL("libc.so.6")<br /> if(libc.strncmp(local_hash,remote_hash,20)==0):#strncmp is REALLY bad here<br /> return True<br /> else:<br /> return False </span></span></blockquote>
As you can see, <i>strncmp</i> is used instead of <i>memcmp</i> (I used <i>ctypes</i> to directly invoke the <i>libc</i> implementation of <i>strncmp</i>). <i>strncmp</i> returns 0 and stops the comparison between the signed hash and the computed one as soon as a 0x00 byte is encountered at the same position in both strings. In addition, the padding of the original (20-byte) message is not checked.<br />
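A pure-Python model of <i>strncmp</i> semantics shows why this check is broken (this sketch models the libc behavior instead of calling it, so it runs anywhere):

```python
def strncmp(a: bytes, b: bytes, n: int) -> int:
    """Pure-Python model of C strncmp: compare at most n bytes, but stop
    as soon as a NUL byte is found at the same position in both strings."""
    for i in range(n):
        ca = a[i] if i < len(a) else 0
        cb = b[i] if i < len(b) else 0
        if ca != cb:
            return ca - cb
        if ca == 0:  # NUL terminates the comparison early
            return 0
    return 0

# A forged "hash" of all NUL bytes matches any computed hash that merely
# STARTS with a NUL byte: the remaining 19 bytes are never compared.
print(strncmp(b"\x00" * 20, b"\x00" + b"\x13" * 19, 20))  # 0: considered equal!
```

memcmp (or Python's ==) would compare all 20 bytes regardless of NULs, which is why the suggested patch works.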
<br />
The easiest way to exploit this is to provide a signature of all 0x00 bytes: given the mathematical properties of RSA, this makes m = "\x00" * 256. At this point, it is just necessary to provide a command whose sha1 hash starts with 0x00. It is possible to bruteforce this by changing the value &lt;key&gt; of the command <i>read_encrypted1</i> or <i>read_encrypted2</i> (&lt;key&gt; cannot be bigger than 4000; however, as we will see, we do not actually need that many tries).<br />
<br />
Assuming that the sha1 function returns uniformly distributed values, the probability of randomly finding such a hash in N tries is:<br />
$$1 - (255/256) ^ N $$ <br />
For instance, trying as &lt;key&gt; all the numbers between 0 and 2000, the probability of finding a command whose hash starts with 0x00 is more than 99.9%. So, a value of &lt;key&gt; generating a hash with the wanted property can be found in a negligible amount of time. This attack is implemented in exploit.py.<br />
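The forgery can be sketched like this (Python 3; the flag_id and the helper names <i>forge_rsa_signature</i>/<i>find_key</i> are made up for illustration, and only the command format from the banner is assumed; on average about 256 tries are needed):

```python
import hashlib

def forge_rsa_signature() -> bytes:
    """The all-zero 'signature': RSA-decrypting 0 yields 0, so the recovered
    message m is 256 NUL bytes and the 'remote hash' m[-20:] is all NULs."""
    return b"\x00" * 256

def find_key(fname: str = "some_flag_id", max_key: int = 4000):
    """Brute-force a <key> value so that sha1(command) starts with a NUL
    byte; then strncmp("\\x00"*20, hash, 20) == 0 and the forged signature
    is accepted."""
    for key in range(max_key):
        command = "read_encrypted1|%s|%d" % (fname, key)
        if hashlib.sha1(command.encode()).digest()[0] == 0:
            return key, command
    return None, None

key, command = find_key()
print(key, command)
```

Once the command is accepted, the file content still has to be decoded, as explained below in the writeup.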
<br />
It is important to notice that, even if the padding check had been implemented correctly, it would still have been possible to forge a signature. Refer to <a href="#r2">[2]</a> for further details.<br />
<br />
An easy patch for this vulnerability is to change <i>strncmp</i> to <i>memcmp</i> (or to use the == Python operator). Alternatively, it is possible to directly use the <i>verify</i> function provided by PyCrypto (which also checks the validity of the padding).<br />
<br />
To have a complete and working exploit, it is also necessary to revert the manipulation that the service applies to the file content before sending it to the client. In fact, the command <i>read_encrypted1</i> uses (&lt;key&gt; % 256) as a xor key, whereas <i>read_encrypted2</i> works similarly, but rotates the value (&lt;key&gt; % 256) right before xoring it with each new character. Both operations can be trivially reverted (knowing the value of &lt;key&gt;).<br />
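A decoding sketch (Python 3): <i>decode1</i> follows directly from the description; for <i>decode2</i>, the exact rotation schedule is my assumption (one right-rotation of the key byte per character), since the text does not spell it out:

```python
def ror8(x: int, n: int = 1) -> int:
    """Rotate an 8-bit value right by n bits."""
    n %= 8
    return ((x >> n) | (x << (8 - n))) & 0xFF

def decode1(data: bytes, key: int) -> bytes:
    """Undo read_encrypted1: a plain single-byte xor with (key % 256)."""
    k = key % 256
    return bytes(b ^ k for b in data)

def decode2(data: bytes, key: int) -> bytes:
    """Undo read_encrypted2, ASSUMING the key byte is rotated right once
    before each new character is xored."""
    k = key % 256
    out = bytearray()
    for b in data:
        k = ror8(k)
        out.append(b ^ k)
    return bytes(out)

# Both transforms are xor-based, hence self-inverse:
flag = b"password\nFLAG{...}"
assert decode1(decode1(flag, 1234), 1234) == flag
assert decode2(decode2(flag, 1234), 1234) == flag
```

Whatever the exact schedule, the keystream depends only on &lt;key&gt;, so the decoder is always the encoder run again.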
<br />
<h2>
ECDSA vulnerability</h2>
The code generating the ECDSA signature (see benign2.py):<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"><span style="font-family: "Courier New",Courier,monospace;">cert_key = private_key_ec.sign_digest(hash,k=4)</span></span></blockquote>
uses a fixed value for <i>k</i>. This is a well-known implementation flaw that can be exploited to recover the private key. See <a href="#r4">[4]</a> (or <a href="#r3">[3]</a> at minute 35:30) for the mathematical explanation.<br />
<br />
To do so, it is necessary to obtain two tuples &lt;message1, signature1&gt; and &lt;message2, signature2&gt;. This can be done by recording the signatures used when the command <i>power</i> (which requires an L3 signature) is invoked (benign2.py invokes such a command). The service conveniently prints these values on standard output. The script recover_ecdsa_key.py recovers the private key in this way.<br />
<br />
The code performing the mathematical operations is the following:<br />
<blockquote class="tr_bq">
<span style="font-size: x-small;"> #using the same variable names as in: http://en.wikipedia.org/wiki/Elliptic_Curve_DSA<br /> n = curve_order<br /> s1 = string_to_number(sig1[-24:])<br /> s2 = string_to_number(sig2[-24:])<br /> r = string_to_number(sig1[-48:-24])<br /><br /> z1 = string_to_number(sha1(c1))<br /> z2 = string_to_number(sha1(c2))<br /><br /> sdiff_inv = inverse_mod(((s1-s2)%n),n)<br /> k = ( ((z1-z2)%n) * sdiff_inv) %n<br /> r_inv = inverse_mod(r,n)<br /> da = (((((s1*k) %n) -z1) %n) * r_inv) % n<br /> <br /> recovered_private_key_ec = SigningKey.from_secret_exponent(da)</span></blockquote>
Refer to the file recover_ecdsa_key.py for the full implementation.<br />
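The algebra behind the recovery can be demonstrated without any elliptic-curve machinery, since only the relation s = k^(-1) * (z + r*da) mod n matters. All parameters below are made up for the demonstration; in real ECDSA, n is the curve order and r is the x-coordinate of k*G (so reusing k also means r1 == r2, which is how the attack is spotted):

```python
n = 2**127 - 1           # a Mersenne prime, standing in for the curve order
da = 123456789123456789  # the "secret" signing key
k = 4                    # the fixed nonce, as in benign2.py
r = 0xCAFEBABE           # stand-in for the x-coordinate of k*G

def inverse_mod(a: int, m: int) -> int:
    return pow(a, -1, m)  # modular inverse (Python 3.8+)

def sign(z: int) -> int:
    """s component of an ECDSA signature: s = k^-1 * (z + r*da) mod n."""
    return (inverse_mod(k, n) * (z + r * da)) % n

z1, z2 = 1111, 2222       # two different message hashes, same k
s1, s2 = sign(z1), sign(z2)

# Recovery, following the same steps as recover_ecdsa_key.py:
# s1 - s2 = k^-1 * (z1 - z2), so k = (z1 - z2) / (s1 - s2) mod n,
# and then da = (s1*k - z1) / r mod n.
k_rec = ((z1 - z2) % n) * inverse_mod((s1 - s2) % n, n) % n
da_rec = ((s1 * k_rec - z1) % n) * inverse_mod(r, n) % n

print(k_rec == k, da_rec == da)  # True True
```

Two signatures sharing the same nonce are therefore enough to leak the whole private key, which is exactly the PlayStation 3 failure.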
<br />
After recovering the private key, it is possible to use it to generate a valid L3 signature that, due to the bug explained in the section "Authorization level bug", can be used to invoke the command <i>read_encrypted2</i>.<br />
<br />
Once the private key has been recovered, an attacker can produce traffic indistinguishable from the benign one. In fact, in this case, the vulnerability is not in the service, but in how the client (which I wrote to generate "benign" traffic) uses the (secret) ECDSA private key.<br />
<br />
The generated benign traffic never invokes <i>read_encrypted2</i> with an L3 signature, both because this is against the textual description of the permissions and because, otherwise, it would have been impossible for teams to distinguish benign from malicious traffic.<br />
<br />
For this reason, an easy patch is to change the value 75 in the list of primes to 79, making the command <i>read_encrypted2</i> inaccessible with an L3 signature (see section "Authorization level bug").<br />
<br />
<h2>
What happened during the iCTF </h2>
Unfortunately, nobody exploited the ECDSA vulnerability, even though two hours before the end of the competition I gave this hint:<br />
<blockquote class="tr_bq">
<div style="-qt-block-indent: 0; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; text-indent: 0px;">
HINT: <a href="http://xkcd.com/221/">http://xkcd.com/221/</a> (and 75 is not prime!)</div>
</blockquote>
<br />
35 exploits were submitted, of which 18 worked. 12 of the working exploits were just copies of the exploit for the RSA vulnerability that I developed (teams could buy exploits from us). Interestingly, one team obfuscated the source code of its attack. This was pointless: teams could not see the source code of attacks developed by other teams.<br />
<br />
Some teams submitted exploits generating a signature different from "\x00" * 256. Specifically, some generated signatures with the following property:<br />
$$rsa\_public\_key.encrypt(b64decode(signature))[-20] == 0x00$$<br />
Given the bug in the hash comparison, such signatures were still considered valid if the hash of the submitted command started with a 0x00 byte.<br />
<br />
The first working exploit was submitted by PPP (congratulations!).<br />
<br />
I did not dig into all the teams' machines, but from a quick look I saw some teams patching the service by replacing the <i>strncmp</i> call either with the standard "==" Python operator or with the <i>memcmp</i> libc function.<br />
<br />
<h2>
References</h2>
<span style="font-size: small;"> [1] <a href="http://hackmii.com/2008/04/keys-keys-keys" name="r1">http://hackmii.com/2008/04/keys-keys-keys</a></span><br />
<span style="font-size: small;">[2] <a href="http://wiibrew.org/wiki/Signing_bug" name="r2">http://wiibrew.org/wiki/Signing_bug</a></span><br />
<span style="font-size: small;">[3] <a href="http://www.youtube.com/watch?v=PR9tFXz4Quc" name="r3">http://www.youtube.com/watch?v=PR9tFXz4Quc</a></span><br />
<span style="font-size: small;">[4] <a href="http://en.wikipedia.org/wiki/Elliptic_Curve_DSA" name="r4">http://en.wikipedia.org/wiki/Elliptic_Curve_DSA</a></span><br />
<span style="font-size: small;">[5] <a href="http://en.wikipedia.org/wiki/RSA_%28algorithm%29" name="r5">http://en.wikipedia.org/wiki/RSA_%28algorithm%29</a> </span></div>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } }
});
</script>
<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">
</script>
Android forkbomb (2013-12-09)<br />
Recently, we found that it is possible to freeze any
Android device with a simple forkbomb attack.<br />
<br />
Not really exciting; however, it is strange that a forkbomb is not considered a security issue
by Google, whereas <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=2011-3918">this</a> is (and it has been <a href="https://code.google.com/p/android-source-browsing/source/detail?repo=platform--system--core&r=e7fd911fd42b">patched</a> by removing a functionality that was originally intended to be used).<br />
<br />
For more details, read this blogpost <a href="http://reyammer.blogspot.com/2013/06/what-fork-how-to-immediately-block-any.html">here</a>.<br />
Old writeups (2013-12-08)<br />
In 2011, I wrote two writeups for the bin400 and bin500 challenges of the Defcon 19 Quals.<br />
You can find them <a href="http://gnisrevereversing.blogspot.com/2011/07/defcon-19-2011-binary-l33tness-400.html">here</a> and <a href="http://gnisrevereversing.blogspot.com/2011/07/defcon-19-2011-binary-l33tness-500.html">here</a>.<br />
<br />