The book includes a number of scenarios designed to help solidify the content. I have invited readers to provide their solutions and was fortunate to have Andre provide his take.

Andre Gironda has an impressive résumé that includes structuring organizations around models that delegate risk decisions to Cyber Operations teams. He has spoken worldwide on risk models, cyber threat intelligence, DFIR, APT hunter-killer teaming, and red-team analysis.

Scenario 1:

  1. Most-valuable assets — data at rest, communications, signals, locations (localities), timing patterns (temporals), and personal knowledge
  2. Threat actors and sources targeting me — Most likely: competitors (my company's and the foreign company's, internal and external to each company; gather data on LinkedIn, Owler, DataFox, Data.com, and CorpWatch; automate with recon-ng, as in the recon sketch after this list). Likely: ad-hoc or organized criminal elements, weather disturbances, car accidents. Least likely: national/regional intelligence services, terrorists, natural disasters, infrastructure accidents, brownouts/blackouts (check local news, the State Department's OSAC Crime & Safety reports, or equivalents). Minor effects: jet lag, sickness. It is good to know both threat actors and threat sources because threat actors can use a threat source to conceal their operations, providing denial and deception capabilities.
  3. How a threat source could access and obtain my data — surveillance reconnaissance of my most-valuable assets; actors will likely check the hotel desk and safe first (but possibly other areas, including vehicles) while I'm away; actors are somewhat likely to compromise networks (especially WiFi, which can deliver JavaScript and other malware); actors are unlikely to load implants via SIM card, LTE network, RF/power cables, or firmware; and actors are unlikely to grab devices out of my hands or from close to my body.
  4. Impact from a successful threat incident/event — malicious-logic installation or execution; user- or root-level compromise of systems and/or data; compromise of my mission or tasking; compromise of my company's mission; and successful or partial completion of the threat actor's mission or tasking.
  5. Mitigation and control — Encrypt data at rest (e.g., BitLocker with a TPM and UEFI Secure Boot is somewhat ideal, but preferably LUKS with the Nuke patch); cover and conceal assets; keep the overall trip clandestine when and where necessary, and keep my identity and my assets covert; prepare before the trip by converting presentations and data to secured written form; keep assets close to the body (e.g., sleep on top of material and devices); forgo RF-emitting devices as well as the spoken word (or use them sparingly, powered down or keeping quiet unless use is expected or required); prepare and plan for TSCM if relevant, but otherwise speak only when required, softly or in a whisper for the most critical dealings, or write on secured material; store and transfer data encrypted, whether symmetric and passphrase-based or asymmetric (preferably RSA, DSA, or ElGamal), using, e.g., gpg (preferred), 7-Zip, WinZip, or IzArc, with passphrase distribution performed by splitting the passphrase and sending the splits over two or more channels (see the gpg/passphrase sketch after this list); and suggest or ensure that others do the same, operating at the same level of OPSEC as I do to meet covert and/or clandestine requirements.
     An excellent principle for compartmenting assets is to establish one-to-one communications that are always off except when broadcasting, such as a separate cell phone, email address, or instant-messaging handle for each individual I communicate with.
     Canaries/tripwires, especially if properly tuned, can provide an early-warning system (and detection engine) against threat actors targeting assets. Once a threat actor/community is identified, additional surveillance reconnaissance, especially along with counterintelligence and counterdeception (CI/CD), can be worked to provide strategic warning capabilities (i.e., indications and warnings), which ideally include intentions analysis and ultimately capabilities analysis.
     For computer accounts on Windows computers, the RID 500 Administrator (not the account I would log in with, since I prefer a least-privilege, non-admin account with Device Guard or AppLocker) can have a hidden desktop.ini file (generated from canarytokens.org zip files) placed in C:\Users\Administrator, which will alert any time that profile folder is browsed. The cell phone could be used to log in to a secure cloud email account that receives the canarytoken alerts (and the cell phone would do nothing else but this one task). Additionally, on the hard drive and on the computer, filenames that appear to be my presentation and contract materials could instead be canarytoken documents (or a diversified canary mechanism). My real files could be renamed and hidden, via steganography tools, inside everyday-looking photos of scenes taken on the way to my destination. File-integrity monitoring, via tools such as OSSEC and UpGuard, could watch access to all of my files, immediately alerting/prompting me when the fake or real files are accessed (see the decoy-monitoring sketch after this list).
     I don't think I'd use the WiFi or the LTE except for my own surveillance reconnaissance and CI TECHINT via a VirtualBox Kali-Linux-2016.1-vbox-amd64.7z image or similar. When a compromised asset can be tied to a threat actor/community, a careful balance between CI/CD and DFIR courses of action must be struck, but observables, indicators, TTPs, and strategies could be elicited and/or collected as evidence.
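
A note on automating the OSINT collection in item 2: recon-ng can be driven non-interactively by feeding it a resource file of commands. The sketch below is only illustrative; the workspace/module commands and the -r flag follow recent recon-ng releases but vary between versions, and the module name and target domain are placeholders that would need adapting.

    # Hypothetical sketch: drive recon-ng non-interactively by writing a resource
    # script and running it via the CLI. Command syntax differs between recon-ng
    # 4.x and 5.x; the module name and domain below are placeholders.
    import subprocess
    import tempfile

    TARGET_DOMAINS = ["competitor.example.com"]  # placeholder competitor domains

    def build_resource_script(domains):
        """Build a recon-ng resource script that runs a host-discovery module per domain."""
        lines = ["workspaces create travel_prep"]
        for domain in domains:
            lines += [
                "modules load recon/domains-hosts/hackertarget",  # placeholder module
                "options set SOURCE {}".format(domain),
                "run",
            ]
        lines.append("exit")
        return "\n".join(lines) + "\n"

    def run_recon(domains):
        with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as rc:
            rc.write(build_resource_script(domains))
            rc_path = rc.name
        # -r loads commands from a resource file in recent recon-ng releases.
        subprocess.run(["recon-ng", "-r", rc_path], check=False)

    if __name__ == "__main__":
        run_recon(TARGET_DOMAINS)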
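
For the passphrase splitting in item 5, a minimal sketch (assuming GnuPG 2.1+ is installed and on the PATH): the file is encrypted symmetrically with gpg, and the passphrase is XOR-split into two shares so each share can travel over a different channel; both shares are required to reconstruct it.

    # Minimal sketch: symmetric gpg encryption plus a two-share XOR split of the
    # passphrase, so the shares can be sent over two different channels.
    # Assumes GnuPG 2.1+ ("gpg") on the PATH; drop --pinentry-mode for legacy 1.4.
    import secrets
    import subprocess

    def encrypt_file(path, passphrase):
        """Encrypt `path` to `path`.gpg using gpg's symmetric AES-256 mode."""
        subprocess.run(
            ["gpg", "--batch", "--yes", "--pinentry-mode", "loopback",
             "--symmetric", "--cipher-algo", "AES256",
             "--passphrase", passphrase, path],
            check=True,
        )
        return path + ".gpg"

    def split_passphrase(passphrase):
        """XOR-split the passphrase into two hex shares; both are needed to rebuild it."""
        raw = passphrase.encode("utf-8")
        pad = secrets.token_bytes(len(raw))              # share 1: random pad
        mixed = bytes(a ^ b for a, b in zip(raw, pad))   # share 2: passphrase XOR pad
        return pad.hex(), mixed.hex()

    def join_passphrase(share1_hex, share2_hex):
        """Recombine the two hex shares into the original passphrase."""
        s1, s2 = bytes.fromhex(share1_hex), bytes.fromhex(share2_hex)
        return bytes(a ^ b for a, b in zip(s1, s2)).decode("utf-8")

    if __name__ == "__main__":
        passphrase = secrets.token_urlsafe(24)
        share_a, share_b = split_passphrase(passphrase)
        print("send over channel A:", share_a)
        print("send over channel B:", share_b)
        assert join_passphrase(share_a, share_b) == passphrase
        # encrypt_file("contract.docx", passphrase)  # hypothetical file name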
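
The decoy-document and file-integrity ideas in item 5 can also be approximated locally without an external service. The sketch below (decoy paths are hypothetical) polls the access and modification times of decoy files and prints an alert when one is touched, a crude stand-in for OSSEC/UpGuard-style monitoring or a canarytoken; note that NTFS often has last-access-time updates disabled by default, so modification time is the more reliable signal.

    # Crude, host-local tripwire: poll decoy files and alert when their access or
    # modification times change. The decoy paths are hypothetical, and the print()
    # call would be swapped for an email/SMS/canary webhook in practice.
    import os
    import time

    DECOYS = [
        r"C:\Users\Administrator\contract_draft.docx",
        r"C:\Users\Administrator\conference_presentation.pptx",
    ]
    POLL_SECONDS = 5

    def snapshot(paths):
        """Record (atime, mtime) for each decoy that currently exists."""
        state = {}
        for path in paths:
            try:
                st = os.stat(path)
                state[path] = (st.st_atime, st.st_mtime)
            except FileNotFoundError:
                state[path] = None
        return state

    def watch(paths):
        baseline = snapshot(paths)
        while True:
            time.sleep(POLL_SECONDS)
            current = snapshot(paths)
            for path in paths:
                if current[path] != baseline[path]:
                    print("ALERT: decoy touched:", path)
                    baseline[path] = current[path]

    if __name__ == "__main__":
        watch(DECOYS)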

Scenario 2:

  1. Company's assets — Most important: value-chain maintenance; security/safety of admins/appdevs and their workstations; security of the cloud configuration and of systems/apps/data; availability of the API; integrity of consumer and payment-card data; company insurance policies. Moderately important: availability of the web portal; security/safety of other employees, partners (e.g., the payment provider and cloud provider), and customers, including their workstations. Least important: company facilities and back-office components.
  2. Impact from a threat incident/event — Almost any successful threat-actor compromise more significant than non-targeted malware will result in an insurance claim that could force the company into bankruptcy within the first year. Even non-targeted malware is of serious concern: multiple such events likely won't force a claim or bankruptcy, but they will drive up the company's cost of operations. Denial-of-service and/or defacement are other major concerns, as API, cloud, or portal downtime or degradation will likely affect contractual obligations and cause significant secondary losses. If the value chain cannot be maintained, customers will leave for competitors; a trusted insider or even a business partner/customer (even a janitor) could clone the company's value chain(s), creating unfair or unplanned-for competition that would, at best, result in a prolonged lawsuit.
  3. Three steps to monitor for the threat group: 1) CI/CD from HUMINT and TECHINT sources at the least, all-source at most; prefer deception controls such as canarytokens or deception-engagement servers/services/apps/data/platforms. 2) PCI DSS controls for the payment-card network could be bolstered by anti-fraud and anti-ATO (account take-over) techniques available from Cybersource or similar (while also providing the necessary tokenization). Network-based automated malware analysis will significantly reduce costs (e.g., put a PAN VM-100, or a redundant pair, in locations where employees work and ensure WildFire licensing is active), especially when combined with endpoint anti-malware/anti-exploit, e.g., macOS with Malwarebytes, less-secure Windows 10 with Defender, or least-secure pre-10 Windows with EMET/Defender. It is also prudent to collect, at least, Event Logs from endpoints via WEF or similar (see the wevtutil sketch after this list). 3) CloudFlare or similar can alert on and protect against DDoS/DoS incidents, and other services can alert on defacement; the PCI DSS file-integrity monitoring control will also catch defacement in most situations, especially via UpGuard or OSSEC agents (see the portal-monitoring sketch after this list).
  4. Likely harm to the business from threats — Most likely/impactful: for such a small business, the risk of an unplanned competitor cloning the value chain is foremost. Moderately likely/impactful: ransomware or cyber extortion (especially extortion over a real or imagined payment-card data breach). Least likely/impactful: the more generic threats catalogued at http://advisera.com/27001academy/knowledgebase/threats-vulnerabilities/
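
Step 2's suggestion to collect endpoint Event Logs can be prototyped on a single Windows host before WEF is rolled out (WEF itself is configured through Group Policy rather than code). A small sketch using the built-in wevtutil tool; the log name and event count are just examples.

    # Prototype endpoint log pull on one Windows host with the built-in wevtutil
    # tool; WEF proper is configured via Group Policy, not code.
    import subprocess

    def recent_events(log_name="Security", count=20):
        """Return the newest `count` events from `log_name` as readable text."""
        result = subprocess.run(
            ["wevtutil", "qe", log_name,
             "/c:{}".format(count),   # number of events to return
             "/rd:true",              # reverse direction: newest first
             "/f:text"],              # plain-text rendering
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(recent_events("Security", 10))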
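
For step 3, a lightweight external check can complement CloudFlare and the file-integrity agents: hash the portal's homepage and alert on unexpected changes (possible defacement) or on fetch failures (possible DoS). A minimal portal-monitoring sketch with a placeholder URL and hash; it only suits pages that are genuinely static, or content with dynamic elements stripped first.

    # Minimal external monitor: fetch the portal page, compare its SHA-256 against
    # a known-good value (a change suggests defacement), and flag fetch failures
    # (possible DoS/degradation). URL and hash are placeholders.
    import hashlib
    import time
    import urllib.request

    PORTAL_URL = "https://portal.example.com/"     # placeholder
    KNOWN_GOOD_SHA256 = "replace-with-known-good"  # placeholder
    CHECK_SECONDS = 60

    def fetch_hash(url, timeout=10):
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    def monitor():
        while True:
            try:
                digest = fetch_hash(PORTAL_URL)
                if digest != KNOWN_GOOD_SHA256:
                    print("ALERT: content changed (possible defacement):", digest)
            except Exception as exc:   # timeouts, HTTP errors, DNS failures
                print("ALERT: portal unreachable (possible DoS):", exc)
            time.sleep(CHECK_SECONDS)

    if __name__ == "__main__":
        monitor()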

Scenario 3:

  1. First priority and most-important step during the merger [assuming Umulot and Blonks run compatible Windows Server forest technology] — Stop the establishment of any identity trust relationships (especially privileged-identity trusts; most commonly, Windows Server forest and domain trusts) between the orgs, and verify that none appear (see the trust-enumeration sketch after this list). Ensure that all domain admins and other sysadmins learn privileged-access-management etiquette, especially JIT/JEA (Just-in-Time and Just-Enough Administration). Set up policies, standards, guidelines, procedures, and baselines as seen in the book Enterprise Cybersecurity: How to Build a Successful Cyberdefense Program Against Advanced Threats.
  2. Next three steps for the company — Meet with the board and executives from both sides. Figure out what the new board structure will be and how it will influence the merger, especially changes to short- and mid-term cyber-security governance. Ensure proper legal practice is followed, especially before proxy voting occurs and before the SEC or other regulatory agencies approve the merger. Many M&A regulations dictate that the orgs must operate business-as-usual, as if the merger or acquisition may or may not go through, or was never conceived in the first place.
  3. Three threat actors — Trusted insiders, unintentional insiders, and APT. APT loves M&A activity across the Global 2000.
  4. Threat-actor harm how-to — APT will establish a foothold through unintentional insiders and/or external, public-facing Internet assets one could find on Shodan (or via a project such as altdns). If APT targets unintentional insiders, they will use endpoints as the initial entry point, often Internet Explorer (using Scanbox.js or BeEF for strategic web surveillance reconnaissance leading to strategic web compromise), but also the Microsoft Office suite and Adobe Flash/Reader. A very common vector is MS Office malicious macros (i.e., malicious-script OLE/VBA) delivered via spear phishing or general phishing, often directed by the results of the strategic web recon/compromise. Another common vector is office documents carrying Flash or other malicious scripts.
     Once an initial entry point is established, APT will work through an election system towards either Precaution (with as many hosts as possible on each network segment, typically Entry Points per segment = natural_log(number of devices); see the foothold-estimate sketch after this list) or Stealth (which can follow the same natural-log formula but without persistence, e.g., preferring systems with high uptimes while residing only in memory), whichever best suits their targeting, mission, and tasking.
     Trusted insiders are thought to work alone (providing Stealth), but they may coordinate a series of unintentional insiders in order to provide Precaution. However, trusted insiders will likely first target non-computer resources (e.g., filing cabinets and paper documents) and/or offline computer resources (e.g., disk drives rather than business-as-usual laptops/desktops). It is less likely that trusted insiders will target live compute resources, but when they do they usually operate as a sysadmin, appdev, or regular user would, i.e., with valid credentials (N.B., not always their own) obtained as part of their business-as-usual job duties. In many cases, trusted insiders will grow in pace (greedily) with the number of unauthorized entities that they access, very often culminating in a watershed moment that involves both: a) a quick, massive asset grab (e.g., running wget or httrack to mirror multiple web servers' content, copying tons of files from one or more huge file shares, printing a lot, physically muscling a server out of its rack, etc.), and b) leaving the company, taking a new position, and/or physically moving locations (e.g., starting to work remotely). This watershed moment may or may not coincide with, or relate to, Locard's exchange principle, which is likely also present throughout the trusted-insider compromise process.
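
To back up item 1, trusts can be re-checked periodically from a domain-joined Windows host with the built-in nltest tool, flagging anything that is not on an expected list. A rough trust-enumeration sketch; the expected trust names are assumptions (here, Umulot's own existing domain only, so any Blonks trust that appears would be flagged for review).

    # Enumerate domain/forest trusts with the built-in Windows nltest tool and
    # flag any trust entry that does not mention an expected name. Run from a
    # domain-joined host; EXPECTED_TRUSTS is an assumed known-good list.
    import subprocess

    EXPECTED_TRUSTS = {"UMULOT"}   # assumed known-good trust names

    def list_trusts():
        """Return the raw 'nltest /domain_trusts' output, one line per entry."""
        result = subprocess.run(
            ["nltest", "/domain_trusts"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.splitlines()

    def unexpected_trusts(lines):
        """Report trust entries (lines like '0: NAME dns.name ...') naming nothing we expect."""
        hits = []
        for line in lines:
            entry = line.strip()
            if not entry or not entry[0].isdigit():
                continue   # skip headers/footers in the nltest output
            if not any(name in entry.upper() for name in EXPECTED_TRUSTS):
                hits.append(entry)
        return hits

    if __name__ == "__main__":
        for entry in unexpected_trusts(list_trusts()):
            print("REVIEW: unexpected trust entry:", entry)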
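
Item 4's rule of thumb (entry points per segment roughly equal to the natural log of the number of devices on that segment) is easy to turn into a quick foothold estimate during threat modeling; the segment names and device counts below are made up for illustration.

    # Quick estimate of expected APT footholds per network segment using the
    # rule of thumb from item 4: entry points per segment = ln(device count).
    # Segment names and device counts are made-up examples.
    import math

    SEGMENTS = {
        "user-workstations": 400,
        "server-vlan": 60,
        "dmz": 12,
    }

    for name, devices in SEGMENTS.items():
        footholds = math.log(devices)   # natural logarithm
        print(f"{name:>20}: {devices:4d} devices -> ~{footholds:.1f} entry points")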