CyberEffective
The product is the process.
Friday, May 1, 2020
Patient Gardening
Wednesday, September 4, 2019
Measuring Training
Most organizations require all their users to undergo some form of cyber security awareness training, and most organizations squander their users' time and attention by trying to boil the ocean.
Rather than share practical skills for avoiding common online threats, the vast majority of security shops use their awareness training to test their colleagues on how well they can regurgitate the company security policy or jargon. Under this model, there is no good way to quantify the effectiveness of the training, or to see which parts worked and which could use some improvement. More importantly, most of the training I've seen doesn't address real-world security issues that the organization is grappling with. Instead, it deals mainly with policy and HR issues. Not only is it no fun to take, but it's impossible to know whether it does any good.
Instead of using a shotgun approach, what if you had your incident response team work with management to determine the issues behind the three or four most common real security incidents from the past year? Things that can be counted and quantified in terms of cost, labor, and damage to reputation. Things like phishing, NSFW web surfing, and overly permissive file shares.
Now build a training module around each issue you selected. Explain the risks, what the problem looks like, and how to avoid and report it. Use anonymized, real-world case studies from your organization to illustrate the issue. Rather than bore them with acronyms and policy jargon, engage your students by talking about things they care about - like downtime, lost revenue, and the loss of private data. This resonates: "The reason the holiday bonus was smaller than usual is that we had to purchase credit monitoring for 10,000 customers whose data was stolen in a phishing attack."
At the end of the year, compare the incident counts from the last two years. If your training module is effective, the numbers for that incident category should have gone down. If so, work with your incident response team to identify a new issue to target. If you see the numbers for a particular incident type start to creep up again, you can rotate the corresponding module back into your training.
If, on the other hand, the numbers for a particular incident category aren't going down, or at least remaining static, that module may not be effective. At this point, you have a couple of options:
Solicit feedback from your users as to how the module could have been more effective.
Try attacking a different issue. It could be that training just doesn't prevent that type of incident.
Over time, you will develop a library of training modules for all of your most painful security issues. You can continue to expand and update them based on emerging threats. And even if you don't like the results, you can still quantify the business value and ROI of cyber security awareness training, enabling your management to make informed business decisions about the program.
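To make the year-end comparison concrete, here's a minimal Python sketch of the idea. The categories and numbers are made up for illustration - in practice you'd export these counts from your own ticketing system.

    # Year-over-year incident counts per category (made-up numbers).
    last_year = {"phishing": 42, "nsfw_surfing": 17, "open_file_shares": 9}
    this_year = {"phishing": 24, "nsfw_surfing": 19, "open_file_shares": 4}

    for category in last_year:
        before, after = last_year[category], this_year[category]
        change = (after - before) / before * 100  # negative = improvement
        if after < before:
            verdict = "improving - consider targeting a new issue"
        elif after == before:
            verdict = "flat - the module may not be working"
        else:
            verdict = "worse - solicit feedback or attack a different issue"
        print(f"{category}: {before} -> {after} ({change:+.0f}%) {verdict}")

That one loop is the whole measurement program: every module gets a number, and every number drives a decision.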
Wednesday, April 10, 2019
Limitations
I'm not going to go into the Capability Maturity Model in this post - you can look it up yourself. I don't really like the idea of giving an organization a single score on their capabilities, because I think most organizations are great at some things and pretty terrible at others, and you lose a lot of resolution if you try to pack all of that into one score.
I do think that a lot of organizations are delusional about their capabilities. I recently read a Computer Weekly article by Warwick Ashford saying that 60% of the organizations they surveyed had had an outage due to digital certificates in the past year. Sixty percent of organizations are having trouble managing their certificates.
In case you hadn't heard, certificates are the foundation of the Secure Web. Browsers are starting to break sessions if the certs aren't good. In words of two syllables or less: If you can't do certs, you will fail at the sexy stuff. Stuff like automation, single sign-on, big data, AI, and all the other cutting-edge things your boss wants you to do this year.
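If you want to know where you stand, even a basic expiry check goes a long way. Here's a minimal Python sketch using only the standard library - the example.com hostname and the 30-day threshold are placeholders, not recommendations:

    import socket
    import ssl
    import time

    def days_until_cert_expiry(host: str, port: int = 443) -> int:
        """Fetch host's TLS certificate and return the days until it expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # getpeercert() returns 'notAfter' like 'Jun  1 12:00:00 2025 GMT'
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    # "example.com" stands in for your own host inventory.
    for host in ["example.com"]:
        days = days_until_cert_expiry(host)
        status = "WARNING" if days < 30 else "OK"
        print(f"{status}: {host} cert expires in {days} days")

Point something like this at your real inventory on a schedule, and you're ahead of a surprising number of shops.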
Almost any next-gen technology you build is going to rely on your infrastructure, and if you're having trouble with foundational capabilities like certificate management, or DNS, or routing, you may want to seriously rethink your roadmap. Maybe it's time to stop fishing and cut bait for a while.
After all, a man's got to know his limitations.
Wednesday, April 3, 2019
Positions and Interests
Monday, April 1, 2019
Count and Measure
Why do you count?
What do you count?
Do you want the thing you're counting to get bigger or smaller?
What is important to measure?
How do you measure it?
What do you measure against, so you know whether the number is moving in the right direction in an absolute sense?
Things change: for example, if you're counting the number of machines that aren't configured up to your standard and you add a thousand new machines, would you expect the number of non-standard machines to go up or down?
Assuming the machines are brand new, you'd expect them to be released to the current spec, so you'd expect the count of non-standard machines to remain the same, while the proportion of non-standard machines to the total goes down.
Even though the proportion looks better, you're not allowed to call this a victory, because you haven't actually fixed anything.
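Here's that arithmetic as a quick Python sketch, with made-up numbers:

    total_before, bad_before = 5000, 500   # 10% non-standard to start
    new_machines = 1000                    # all released to the current spec

    total_after = total_before + new_machines
    bad_after = bad_before                 # count unchanged: nothing was fixed

    print(f"Before: {bad_before}/{total_before} = {bad_before / total_before:.1%} non-standard")
    print(f"After:  {bad_after}/{total_after} = {bad_after / total_after:.1%} non-standard")
    # The proportion drops from 10.0% to 8.3%, but the count of broken
    # machines hasn't moved - which is why this isn't a victory.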
Similarly, if the count of non-standard devices goes up while you're releasing those 1,000 new machines, you should stop. Stop and figure out why your brand new machines aren't going out with the correct configuration. Stop the bleeding. Fix the process, then release the rest of the machines with the correct configurations. Unless it's easy to wipe them and start over, it's usually a bad idea to go back and do a special fix for the machines that already went out with the bad configs. It's cheaper and easier to simply treat them the same way you're treating the misconfigured machines you already had.
The earlier in the lifecycle you can fix an issue, the cheaper and faster it is to fix. So if you see bad work, stop and fix it right away. Then figure out what to do with the broken stuff.
The only way you know something is broken is if you're measuring it.
Thursday, March 21, 2019
Responsiveness vs. reactivity.
Tuesday, March 19, 2019
Intelligence Test
If you were a nation-state, how would you test a rival state's intelligence system? What if you fed them fake information, and then sat back and observed how quickly they reacted to it? You could measure your own effectiveness at disinformation at the same time you measured their response time. You'd also begin to understand how they react to different stimuli. Simply forcing your adversary to react to non-existent issues would throw them off balance and create general malaise.
What if you made them question their tools? Got them to throw away perfectly good - maybe even best-of-breed - systems just because you were able to convince them they were no good, or had a bug?
Now, instead of moving forward, your rival is tied up replacing resources that work just fine - at great cost in labor, money, and time. All for nothing. They're operating at diminished capacity during the replacement, and may end up replacing something effective with something not-so-effective. And you've figured out what buttons to push to make them react, at virtually no cost.
Am I the only person who thinks this is a pretty efficient way to test an opponent's capabilities?