We had an emulator and a clean snapshot. Every round, we reset the emulator to the initial 'clean' state and installed the teams' applications.
We grabbed the teams' APK files from their backends ten times per round, but deployed only the latest version, once, at the beginning of a new round. This layout appeared to confuse the players, resulting in many questions along the lines of 'I saw my APK grabbed, but not deployed'. The design was meant to mitigate the impact of DoS (against the teams' services or ours) and of possible network problems. Everything for you, guys :)
We also tracked updates to the teams' APKs and saved every unique version for analysis: to see the payloads and funny stuff, and to catch out the lazy asses 😉
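The grab-and-dedup idea can be sketched roughly like this (a minimal illustration; the URL handling, storage layout, and function names are our assumptions, not the actual grabber code):

```python
import hashlib
import urllib.request

# Content hashes of APK versions we have already seen
SEEN_HASHES: set[str] = set()

def grab_apk(team_url: str) -> bytes:
    """Download the team's current APK (called several times per round)."""
    with urllib.request.urlopen(team_url, timeout=10) as resp:
        return resp.read()

def remember_unique_version(apk: bytes, store: dict[str, bytes]) -> bool:
    """Keep every unique APK version (by content hash) for later analysis.

    Returns True if this is a version we have not seen before.
    """
    digest = hashlib.sha256(apk).hexdigest()
    if digest in SEEN_HASHES:
        return False          # same version as before, nothing new to save
    SEEN_HASHES.add(digest)
    store[digest] = apk       # saved for payload analysis later
    return True
```

Only the latest of the grabbed versions would then actually be deployed at round start.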
Concurrent communication chains
We inherited the checker infrastructure from the previous CTFs as a global checking platform. Within this platform, there were several checking modules (system checkers), one per service. The platform spawned tasks for the checkers to test the condition of a team's services or to update its round flag. The Trust Area checker will be referred to as the System Checker below.
In our part of the checker infrastructure, we introduced two very important extra components: the Trust Area Core and the Checker Agent.
The Trust Area Core was responsible for the communication between all the checker and team components and management of the Android emulator. It included:
- REST API to accept commands from the System Checker and results from the Checker Agent
- Emulator Manager for Android-side deployments
- APK Grabber 3000 to collect the teams' applications and control their versions
- small async glue to organize the other parts
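The command/result plumbing inside the Core can be sketched with one future per task (a hypothetical, simplified sketch; the class and method names are assumptions, not the real code):

```python
import asyncio

class TrustAreaCore:
    """Glue between the System Checker, the Emulator Manager and the Checker Agent."""

    def __init__(self, emulator_manager):
        self.emulator_manager = emulator_manager
        # task_id -> future resolved when the Checker Agent reports back
        self.pending: dict[str, asyncio.Future] = {}

    async def handle_checker_command(self, task_id: str, action: str) -> dict:
        """Body of the REST endpoint the System Checker calls."""
        fut = asyncio.get_running_loop().create_future()
        self.pending[task_id] = fut
        # Ask the Emulator Manager to fire the action inside the emulator
        await self.emulator_manager.dispatch(task_id, action)
        return await fut    # the System Checker's HTTP request blocks here

    def handle_agent_result(self, task_id: str, result: dict) -> None:
        """Body of the REST endpoint the Checker Agent posts results to."""
        self.pending.pop(task_id).set_result(result)

class FakeEmulatorManager:
    """Stand-in that immediately 'reports back' like a Checker Agent would."""
    core = None

    async def dispatch(self, task_id: str, action: str) -> None:
        # Simulate the agent finishing the scenario and posting its result
        asyncio.get_running_loop().call_soon(
            self.core.handle_agent_result, task_id,
            {"action": action, "status": "OK"},
        )

async def demo() -> dict:
    mgr = FakeEmulatorManager()
    core = TrustAreaCore(mgr)
    mgr.core = core
    return await core.handle_checker_command("task-1", "check")
```

This keeps the System Checker's view synchronous (one blocking HTTP call) while everything behind the REST API stays asynchronous.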
The Checker Agent was a special application pre-installed in the Android system. It was responsible for running a pre-defined set of scenarios and supplying the results to the Trust Area Core.
This chart depicts the flow of actions (e.g. check the state, put a flag, etc.) initiated by the System Checker:
There were two points of interest:
- The System Checker interacted with a simple REST API of the Trust Area Core and knew nothing about the complexity of the underlying communications; therefore, it was programmed in a simple synchronous manner.
- The Emulator Manager (within the Trust Area Core) spawned an ADB command to send the intent to the Checker Agent and waited for an HTTP request with the results from the Checker Agent (try to implement it yourself!).
Another interesting implementation detail was the delivery of flags at the beginning of every new round. Our checking platform was designed to process such tasks/events separately in an asynchronous manner, which satisfies the needs of classic attack-defense challenges.
However, we encountered a problem in our Trust Area. The issue emerged at the beginning of the rounds: all ten independent flag-update tasks had to stop together, wait for the teams' applications to redeploy, and then continue to run concurrently. So, this was another place where we practiced concurrent programming.
This is how the flags were changed at the beginning of a round:
The System Checker interacted with a simple REST API and knew nothing about the underlying synchronization barrier (try to implement it yourself!).
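A hand-rolled reusable barrier along these lines might look like this (a sketch of the idea, not the actual Trust Area code; all names are ours):

```python
import asyncio

class RoundBarrier:
    """All flag tasks stop here at round start; the redeploy runs exactly
    once; then every task continues concurrently."""

    def __init__(self, parties: int, redeploy):
        self.parties = parties
        self.redeploy = redeploy        # coroutine to run once per round
        self.waiting = 0
        self.release = asyncio.Event()

    async def wait(self) -> None:
        self.waiting += 1
        if self.waiting == self.parties:
            # The last task to arrive performs the redeploy, then frees everyone
            await self.redeploy()
            self.waiting = 0
            event, self.release = self.release, asyncio.Event()  # re-arm
            event.set()
        else:
            await self.release.wait()

async def demo(n: int = 10) -> list[str]:
    events: list[str] = []

    async def redeploy():
        events.append("redeploy")

    barrier = RoundBarrier(n, redeploy)

    async def flag_task(i: int):
        await barrier.wait()            # stop until the round's redeploy is done
        events.append(f"flag-{i}")      # then update the flag concurrently

    await asyncio.gather(*(flag_task(i) for i in range(n)))
    return events
```

Re-arming the event inside `wait()` makes the barrier reusable across rounds; Python 3.11+ ships `asyncio.Barrier`, which covers the same pattern out of the box.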
- The Client Part: https://github.com/ctf-zone/CTFZone-TrustArea-Client-1
- The Backend: https://github.com/ctf-zone/CTFZone-2020-TrustArea-Backend
- The Emulators: https://github.com/ctf-zone/CTFZone-2020-Trust-Area-Infra
- The Checker Agent: https://github.com/ctf-zone/CTFZone-2020-Trust-Area-Checker-Agent