Hey, it's been a hot minute since we've talked about intelligence (i.e. spy stuff). As a reminder, micronations generally DON'T need an intelligence service, but it's fun to talk about, so here we are.
Today I want to talk about "red teaming". Red teaming means different things in different contexts, but it ultimately boils down to the same thing - having a team on your side whose job is to "think like the other guys", so you can anticipate what "the other guys" might actually do. (The name comes from military exercises, where the friendly force is designated "blue" and the opposing force is "red".)
For example, the United States Air Force operates "aggressor squadrons" (the Navy and USMC call them "adversary squadrons"), which fly using the enemy's tactics, techniques, and procedures so that friendly pilots are already familiar with them - and that familiarity can give an edge in a real fight.
Over in the cybersecurity world, "red teaming" is a form of security testing that goes one big step beyond "ethical hacking". Red teams can use almost any tactic, within reason, to try to gain access to facilities and information. This can include picking locks, "seeding" the parking lot with USB drives that carry malware, launching their own phishing campaigns, and other tactics a real attacker might use.
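To make the phishing piece concrete: a red-team phishing campaign is a simulation, and the deliverable is usually a metric like "how many people clicked", not stolen credentials. Here's a minimal sketch in Python (every name, token, and address is hypothetical, not from any real toolkit) of the kind of click-tracking endpoint a red team might stand up - each test email embeds a unique token in its link, so clicks can be tallied per recipient without collecting anything sensitive:

```python
# Minimal sketch of a phishing-simulation click tracker (hypothetical names throughout).
# Each simulated phishing email embeds a link like:
#   http://tracker.example.internal:8080/t/a1b2c3
# where the trailing token maps to one recipient. Clicks get logged, not exploited.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-email tokens issued when the test emails were sent.
TOKENS = {
    "a1b2c3": "alice@example.com",
    "d4e5f6": "bob@example.com",
}
clicked = set()  # recipients who clicked at least once

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The token is the last path segment of the link the recipient clicked.
        token = self.path.rstrip("/").rsplit("/", 1)[-1]
        recipient = TOKENS.get(token)
        if recipient:
            clicked.add(recipient)
            print(f"click: {recipient} ({len(clicked)}/{len(TOKENS)} recipients so far)")
        # Always serve a harmless "this was a test" page - measure, don't punish.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"This link was part of a security awareness exercise.\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), TrackerHandler).serve_forever()
```

The design choice worth noting is that the landing page immediately tells the visitor it was an exercise - the point of the simulation is to measure and teach, not to trap anyone.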
Back in Spook Country, red teaming can help you avoid two separate but related pitfalls, either of which could lead to intelligence failures. First, done properly, it helps you internalize that your adversary doesn't think like you. They have different values, goals, and priorities, and may come from a different culture entirely; red teaming helps you identify those differences and avoid "mirror-imaging" (assuming the other side thinks just like you do, with the same motives, values, and understandings). Second, it can help you escape the mental trap of "they're the bad guys" - because once you start believing that, you can start believing all of the tropes that modern media assigns to "bad guys". It's important to remember that they're doing their job, just like you're doing yours.
In order to use red teaming successfully, you need to understand the other side's culture, history, organizations, and overall objectives from their perspective, not yours. This is why, for example, the CIA used to hire a lot of people who had studied Russian literature in university: Russian literature shaped the national psyche, and thus informed the culture to a large degree.
The big risk of red teaming (in intelligence or in cybersecurity) is overconfidence - assuming you've got THE answer. In cybersecurity, for example, you might assume your red team found all of the ways in and you've blocked them. Well, no - your red team found all of the ways in that they looked for, but there's no guarantee they looked for everything. The reverse is also true: you can go nuts trying to figure out what might have been missed, and you can always run another test, and another, and another - if you're so inclined, you can never finish. After all, the organization you tested today isn't the same organization that'll be there next week - new servers, new software vulnerabilities, new people joining the company - which means there's always another potential way in. And since red teams can be sneaky and underhanded (as can the other side!), their methods can also breed resentment among the people they've targeted, and even mistrust of management.
On the other hand, there are definite benefits to "thinking like the other guys", and to be honest, it can be quite fun sometimes. Just make sure you can take their shoes back off when you're done wearing them...
If you want to know more about red teaming, you can check out the CIA's Tradecraft Primer (which devotes a page or two to red teaming, among other techniques) here, or the US Army's Red Team Manual (which is devoted to the subject) here.
*Picture is unrelated. (Probably...)*
