CTFZone Paper: Trust Area — Infra

OFFZONE
Sep 2, 2020


By Roman Shemyakin

Part I: CTFZone Paper: Trust Area — Backend Part
Part II: CTFZone Paper: Trust Area — Client Part

Apps bootstrap

We had an emulator and a clean snapshot. Every round, we did a reset to the initial ‘clean’ state and installed the teams’ applications.

We grabbed the teams’ APK files from their backends ten times per round, but deployed only the latest version, once at the beginning of each new round. This layout appeared to confuse the players and resulted in many questions along the lines of ‘I saw my APK grabbed, but not deployed’. The scheme was meant to mitigate the impact of DoS (on the teams’ services or ours) and of possible network problems. Everything for you, guys :)
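As a rough illustration (not the actual CTFZone code), a round start could look something like the sketch below, assuming a single emulator reachable via adb; the snapshot name, the APK directory layout and the helper names are made up:

```python
import glob
import os
import subprocess

ADB = "adb"            # assumes adb is on PATH
SNAPSHOT = "clean"     # hypothetical name of the pristine snapshot
APK_DIR = "/data/apk"  # hypothetical directory filled by the APK grabber

def reset_emulator() -> None:
    """Roll the running emulator back to the clean snapshot."""
    # one way to do it: the emulator console's snapshot command
    # (availability depends on the emulator version)
    subprocess.run([ADB, "emu", "avd", "snapshot", "load", SNAPSHOT], check=True)
    subprocess.run([ADB, "wait-for-device"], check=True)

def deploy_latest_apks() -> None:
    """Install the newest grabbed APK of every team on the fresh emulator."""
    for team_dir in sorted(glob.glob(os.path.join(APK_DIR, "team_*"))):
        apks = sorted(glob.glob(os.path.join(team_dir, "*.apk")),
                      key=os.path.getmtime)
        if not apks:
            continue  # the team's backend was down for the whole round
        subprocess.run([ADB, "install", "-r", apks[-1]], check=True)

def start_round() -> None:
    reset_emulator()
    deploy_latest_apks()
```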

We also tracked updates to the teams’ APKs and saved every unique version for analysis: to see the payloads, spot the funny stuff and catch out the lazy asses 😉

Concurrent communication chains

We inherited the checker infrastructure from the previous CTFs as a global checking platform. Within this platform, there were several checking modules (system checkers), one per service. The platform spawned tasks for the checkers to test the condition of the teams’ services or to update their round flags. The Trust Area checker will be referred to as the System Checker.

In our part of the checker infrastructure, we introduced two very important extra components: Trust Area and Checker Agent.

The Trust Area Core was responsible for the communication between all the checker and team components, as well as for managing the Android emulator. It included:

  • REST API to accept commands from the System Checker and results from the Checker Agent
  • Emulator Manager for Android-side deployments
  • APK Grabber 3000 to collect the teams’ applications and control their versions
  • small async glue to organize the other parts

The Checker Agent was a special application pre-installed in the Android system. It was responsible for running a pre-defined set of scenarios and supplying the results to the System Checker.

This chart depicts the flow of actions (e.g. check the state, put a flag, etc.) initiated by the System Checker:

Fig. 1. System Checker-initiated action flow

There were two points of interest:

  1. The System Checker interacted with a simple REST API of the Trust Area Core and knew nothing about the complexity of the underlying communications — therefore, it was programmed in a simple synchronous manner.
  2. The Emulator Manager (within the Trust Area Core) spawned an ADB command to send the intent to the Checker Agent and waited for an HTTP request with the results from the Checker Agent (try to implement it yourself with Futures 😉); a rough sketch of this pattern follows below.
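Here is a minimal sketch of that Future trick, assuming an aiohttp-based Core with asyncio; the endpoint paths, the intent action and the timeout are illustrative, not taken from the real code:

```python
import asyncio
import uuid

from aiohttp import web

pending: dict[str, asyncio.Future] = {}  # task id -> future awaiting the agent's result

async def send_intent(task_id: str, action: str) -> None:
    """Fire an intent at the Checker Agent through adb (fire-and-forget)."""
    proc = await asyncio.create_subprocess_exec(
        "adb", "shell", "am", "broadcast",
        "-a", "area.trust.CHECK",          # hypothetical intent action
        "--es", "task_id", task_id,
        "--es", "action", action,
    )
    await proc.wait()

async def handle_check(request: web.Request) -> web.Response:
    """Called by the System Checker: plain synchronous HTTP from its point of view."""
    action = (await request.json())["action"]
    task_id = uuid.uuid4().hex
    fut: asyncio.Future = asyncio.get_running_loop().create_future()
    pending[task_id] = fut
    await send_intent(task_id, action)
    try:
        result = await asyncio.wait_for(fut, timeout=30)  # wait for the agent's callback
    finally:
        pending.pop(task_id, None)
    return web.json_response(result)

async def handle_result(request: web.Request) -> web.Response:
    """Called by the Checker Agent over HTTP when a scenario has finished."""
    data = await request.json()
    fut = pending.get(data["task_id"])
    if fut is not None and not fut.done():
        fut.set_result(data)
    return web.Response()

app = web.Application()
app.add_routes([web.post("/check", handle_check),
                web.post("/result", handle_result)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```

The System Checker only sees a blocking POST to /check; all the juggling with intents and HTTP callbacks stays inside the Core.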

Flags delivery

Another interesting implementation detail was the delivery of flags at the beginning of every new round. Our checking platform was designed to process such tasks/events separately and asynchronously, which satisfies the needs of classic attack-defense challenges.

However, we encountered a problem in our Trust Area. The issue emerged at the beginning of the rounds: all ten independent flag-update tasks had to stop, wait for the teams' applications to be redeployed, and only then continue running concurrently. So, this was another place where we practiced concurrent programming.

This is how the flags were changed at the beginning of a round:

Fig. 2. Flags change flow

Again, the System Checker interacted with a simple REST API and knew nothing about the underlying synchronization barrier (try to implement it yourself with Futures 😉).
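Here is a minimal sketch of that barrier idea with asyncio; the gate class and the redeploy coroutine are placeholders for the real Emulator Manager calls:

```python
import asyncio

TEAMS = 10

class RedeployGate:
    """All flag-update tasks park here; redeploy runs once, then everyone continues."""

    def __init__(self, n_tasks: int):
        self._n_tasks = n_tasks
        self._arrived = 0
        self._released = asyncio.Event()

    async def wait(self) -> None:
        self._arrived += 1
        if self._arrived == self._n_tasks:
            # the last task to arrive triggers the redeploy, then releases everybody
            await redeploy_all_apps()
            self._released.set()
        await self._released.wait()

async def redeploy_all_apps() -> None:
    await asyncio.sleep(1)                      # stand-in for snapshot reset + APK installs

async def put_flag(team: int, gate: RedeployGate) -> None:
    await gate.wait()                           # barrier: nobody proceeds before redeploy
    print(f"putting new flag for team {team}")  # stand-in for the real put-flag scenario

async def new_round() -> None:
    gate = RedeployGate(TEAMS)
    await asyncio.gather(*(put_flag(t, gate) for t in range(TEAMS)))

asyncio.run(new_round())
```

A counter plus an Event does the job here; asyncio.Barrier would too, but it only appeared in Python 3.11.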

Source code:
