CTFZone Paper: Little Knowledge

Intro

If you are unfamiliar with CTFs, they are competitions for practitioners of applied information security, hosted by infosec-related companies, CTF teams and universities. There are two types of competitions: jeopardy and attack-defense.

Why a cryptographic task was necessary

At the initial planning phase, we argued about whether to make a cryptographic task for AD at all. You see, each vulnerable service in AD usually has several bugs of varying difficulty. Task creators design services this way with less experienced teams in mind; it also helps stronger teams start pulling flags from their opponents earlier, resulting in a more dynamic game.

How we chose our development stack

How would we achieve this? You see, it is usually quite easy to find and fix bugs in crypto tasks, since they are most often developed in a scripting language like Python: you read the code, you notice the bug, you fix it.

How we decided on the task idea

We didn’t want to make a task full of vulnerabilities familiar to every CTF player, so we decided to pick one of the slightly less conventional subfields: Zero-Knowledge Proofs of Knowledge (ZKPoK). The idea is to prove that you know some secret without revealing the secret itself. We decided to use a ZKPoK as an identification scheme: when the checker proves knowledge of the secret, it receives the flag. The particular scheme we chose is based on graph Hamiltonicity.

A. What is graph Hamiltonicity?

A graph is Hamiltonian if it contains a Hamiltonian cycle, i.e. a cycle that visits every vertex of the graph exactly once. Finding such a cycle in an arbitrary graph is an NP-complete problem, which is what makes it a suitable hard problem to build a proof of knowledge on.

B. How Hamiltonicity can be used for identification

The scheme is simple:

  • publish a graph with an embedded Hamiltonian cycle, visible to everyone;
  • connect to a team server and prove that we know the cycle without revealing the cycle itself.
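
As a reference point, here is a minimal Python sketch of one round of the classic zero-knowledge proof of Hamiltonicity (Blum’s protocol), which identification schemes of this kind build on. The hash-based commitment and all names below are illustrative assumptions, not the service’s actual implementation.

import hashlib
import os
import random

def commit(bit):
    # Hash commitment to a single bit; returns (commitment, opening nonce).
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([bit])).digest(), nonce

def prove_round(adj, cycle, challenge):
    # adj: adjacency matrix of the public graph G
    # cycle: a Hamiltonian cycle in G, as a list of vertices
    # challenge: the Verifier's bit (0 or 1)
    n = len(adj)
    pi = list(range(n))
    random.shuffle(pi)                        # random relabelling of vertices
    permuted = [[0] * n for _ in range(n)]    # adjacency matrix of pi(G)
    for i in range(n):
        for j in range(n):
            permuted[pi[i]][pi[j]] = adj[i][j]
    coms, opens = {}, {}
    for i in range(n):
        for j in range(n):
            coms[(i, j)], opens[(i, j)] = commit(permuted[i][j])
    if challenge == 0:
        # Reveal the permutation and open every cell of pi(G).
        return coms, ("permutation", pi, opens)
    # challenge == 1: open only the n edges of the permuted Hamiltonian cycle.
    edges = [(pi[cycle[k]], pi[cycle[(k + 1) % n]]) for k in range(n)]
    return coms, ("cycle", [(u, v, opens[(u, v)]) for u, v in edges])

def verify_round(adj, coms, response):
    n = len(adj)
    if response[0] == "permutation":
        # The opened matrix must be exactly the permuted public graph.
        _, pi, opens = response
        return all(
            hashlib.sha256(opens[(pi[i], pi[j])] + bytes([adj[i][j]])).digest()
            == coms[(pi[i], pi[j])]
            for i in range(n) for j in range(n)
        )
    # Every opened cell must contain an edge, and the opened edges must form
    # a single cycle that visits all n vertices exactly once.
    _, edges = response
    opened_ok = all(hashlib.sha256(nonce + b"\x01").digest() == coms[(u, v)]
                    for u, v, nonce in edges)
    chained = all(edges[k][1] == edges[(k + 1) % n][0] for k in range(n))
    return opened_ok and chained and sorted(u for u, _, _ in edges) == list(range(n))

# Toy run: a 4-vertex graph whose hidden cycle is 0-1-2-3-0, plus one extra edge.
adj = [[0] * 4 for _ in range(4)]
cycle = [0, 1, 2, 3]
for k in range(4):
    u, v = cycle[k], cycle[(k + 1) % 4]
    adj[u][v] = adj[v][u] = 1
adj[0][2] = adj[2][0] = 1
for challenge in (0, 1):
    coms, response = prove_round(adj, cycle, challenge)
    assert verify_round(adj, coms, response)

A prover who does not know the cycle can prepare for only one of the two possible challenges, so a single round catches a cheater with probability 1/2; repeating the round (or running parallel proofs, as discussed later) drives that probability down exponentially.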

How we developed the task

Protocol: high-level view

We decided that the number of vertices in the graph had to be set by the teams themselves, so we couldn’t use the same graph for everyone. To handle this, at the start of each round the checker asked each team’s server how big a graph it needed and generated a suitable graph with an embedded Hamiltonian cycle (a sketch of this generation step follows the list below). It then sent the team’s server:

  • the graph;
  • the round flag for the team;
  • a 16-byte random string RANDOMR received previously from the team’s server;
  • an RSA signature of all the above.
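
The sketch below shows one simple way such a graph can be generated: plant a random cycle, then add noise edges around it. The function name and edge density are arbitrary illustrative choices, not the checker’s real parameters.

import random

def generate_graph(n, extra_edge_prob=0.3):
    # Returns (adjacency matrix, hidden Hamiltonian cycle) for n vertices.
    cycle = list(range(n))
    random.shuffle(cycle)                         # the secret cycle
    adj = [[0] * n for _ in range(n)]
    for k in range(n):
        u, v = cycle[k], cycle[(k + 1) % n]
        adj[u][v] = adj[v][u] = 1                 # edges of the embedded cycle
    for u in range(n):
        for v in range(u + 1, n):
            if not adj[u][v] and random.random() < extra_edge_prob:
                adj[u][v] = adj[v][u] = 1         # noise edges that hide the cycle
    return adj, cycle

adj, secret_cycle = generate_graph(16)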

Protocol: internals

Now that we’ve covered the general protocol, let’s look at the vulnerabilities we’ve introduced into the challenge.

  1. The desired number of vertices in the graph.
  2. The number of parallel proofs.
  3. The chosen commitment schemes.

One of the supported commitment schemes was based on CRC32, which made it possible to forge a commitment by finding a CRC32 collision (a code sketch follows this list):

  1. Compute around 2¹⁷ values of CRC32_FH with random bytes b_1, b_2, b_3 and put them in a hash table, so that the bytes that produced any given value of CRC32_FH can be looked up in near-constant time.
  2. Compute CRC32_SH_INVERTED in a loop, feeding it the CRC32 value we are trying to collide with and random bytes b_4, b_5, b_6. During each iteration, check whether the result is in the hash table compiled earlier. This should take around 2¹⁵–2¹⁶ iterations.
  3. Once we’ve found a match in the hash table, we’ve also found a collision for CRC32: y = CRC32(b_1, b_2, b_3, b_4, b_5, b_6).
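
This is a classic meet-in-the-middle trade-off: roughly 2¹⁷ stored forward states combined with roughly 2¹⁵ backward probes covers the 2³² CRC32 output space. Below is a rough Python sketch, under the assumption that CRC32_FH and CRC32_SH_INVERTED correspond to the standard zlib CRC32 register run forwards and backwards over three bytes each; the function names are mine, not the service’s.

import os
import zlib

POLY = 0xEDB88320
MASK = 0xFFFFFFFF

def crc32_forward(state, data):
    # Standard bitwise CRC32 register update (zlib's reflected polynomial).
    for b in data:
        state ^= b
        for _ in range(8):
            state = (state >> 1) ^ (POLY if state & 1 else 0)
    return state

def crc32_backward(state, data):
    # Run the CRC32 register backwards through data, last byte first.
    for b in reversed(data):
        for _ in range(8):
            if state >> 31:                        # the polynomial was XORed in
                state = (((state ^ POLY) << 1) | 1) & MASK
            else:
                state = (state << 1) & MASK
        state ^= b
    return state

def find_collision(target, table_size=1 << 17):
    # Step 1: hash table of forward states after three random bytes (CRC32_FH).
    table = {}
    while len(table) < table_size:
        prefix = os.urandom(3)
        table[crc32_forward(MASK, prefix)] = prefix    # register starts at 0xFFFFFFFF
    # Step 2: walk backwards from the target through three random bytes
    # (CRC32_SH_INVERTED) until we hit a state already stored in the table.
    final_state = target ^ MASK                        # undo zlib's final XOR
    while True:
        suffix = os.urandom(3)
        mid = crc32_backward(final_state, suffix)
        if mid in table:
            return table[mid] + suffix                 # step 3: b_1..b_6

target = zlib.crc32(b"some committed value")
forged = find_collision(target)
assert zlib.crc32(forged) == target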

The final vulnerability

We’ve covered how the whole system works and we’ve already discussed four intended vulnerabilities. However, there was also a fifth bug: the challenge bits sent to the Prover were sourced from an insecure PRNG (pseudorandom number generator). The common choice of an insecure primitive in such cases is C’s rand, but we wanted to introduce something a bit more interesting. In recent years there have been several papers about the Legendre PRF, so we chose to implement it as our PRNG instead of the overused Mersenne Twister.
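
As a rough sketch of the idea, assuming the textbook Legendre PRF construction, where output bit i is derived from the Legendre symbol of key + i modulo a public prime (the prime and key below are illustrative, not the values used in the service):

P = 2**127 - 1                      # a public prime (illustrative Mersenne prime)

def legendre_symbol(a, p):
    # Euler's criterion: 1 for quadratic residues, p - 1 for non-residues, 0 for a == 0.
    return pow(a, (p - 1) // 2, p)

class LegendrePRNG:
    def __init__(self, key):
        self.key = key % P
        self.counter = 0

    def next_bit(self):
        # Output bit i is derived from the Legendre symbol of (key + i) mod P.
        s = legendre_symbol((self.key + self.counter) % P, P)
        self.counter += 1
        return 0 if s == 1 else 1   # residue -> 0, non-residue (or zero) -> 1

prng = LegendrePRNG(key=0xC0FFEE)   # the key would be the service's secret
challenge_bits = [prng.next_bit() for _ in range(16)]

Recovering the secret key requires observing many output bits, which is why the attack needs a large number of queries to the Verifier, as discussed in the anti-cheat section below.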

Development quirks

Frankly, writing the task was quite fun.

Testing

As you might imagine, the task had to go through some rough testing.

Anti-cheat

The Legendre PRNG bug requires lots of queries to the Verifier. We didn’t want team servers to simply drop connections in the event of multiple sequential challenge requests. To dissuade teams from implementing such basic and boring mitigations, we diversified the checker’s functionality.

  1. Simple pull. The checker simply proves that it knows the cycle. If anything goes wrong, the team’s SLA (Service Level Agreement) points are deducted.
  2. The checker creates valid commitments, but before sending them to the team’s server it damages them in a random fashion, ensuring that the Verifier will treat the resulting commitment as erroneous. It repeats this cycle of wrong proofs for several seconds, then completes one correct interaction and pulls the flag. If at any point the answer from the team’s server is unexpected, or the server drops the connection, the team loses SLA points.
  3. The checker initializes the protocol and sends a valid commitment. It then asks the Verifier to send the challenge in a loop for several seconds. After the time runs out, it picks the latest challenge, creates a packet that reveals the commitment and sends it to the team’s server. If the server replies with the flag, everything is fine. Any deviation from the checker’s expectations is once again penalized by deducting SLA points.

Conclusion

When I started writing this article, I didn’t expect to drag this out for so long, but I didn’t want to leave out anything important.
