A research paper by Cormac Herley, published under Microsoft Research, on why users reject much (if not most, or all) of the security education and advice given nowadays by security researchers and professionals alike.
The direction taken is that users aren’t dumb at all. In fact, they make economically sound and rational decisions with regard to security measures (i.e. discarding most of them), even if they don’t make these decisions consciously.
After defining some of the terms of analysis (costs and benefits, implicit and explicit), Cormac proceeds to discuss a few cases where not following the advice makes better economic sense than following it: passwords, phishing URL recognition, and SSL certificate errors. In all of these, the cost of following the advice outweighs the actual risks and damages involved in the real world.
Analysis-wise, he has these to say: (greatly paraphrased :P)
- Users aren’t dumb.
- Worst-case scenarios are greatly disparate from actual risk scenarios; we need real stats to determine the actual risk and the relevant actions.
- User efforts aren’t free, security advice shouldn’t assume otherwise.
- The cost of implementing security advice must not be greater than the actual potential damages; it isn’t enough to determine solely whether there are “benefits” from implementation.
- Using worst-case scenarios to calculate potential damages (and thus damages averted) will most likely overestimate the actual damages by orders of magnitude.
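The cost-benefit argument above can be sketched with some back-of-envelope arithmetic. All the figures below are hypothetical, made up purely for illustration; they do not come from the paper:

```python
# Hypothetical numbers illustrating why advice that is cheap per-user can
# still be a bad deal in aggregate. None of these figures are from the paper.
population = 200_000_000          # users asked to follow some piece of advice
minutes_per_user_per_year = 5     # effort the advice demands of each user
hourly_rate = 15.0                # assumed value of a user's time, $/hour

victimization_rate = 1 / 100_000  # fraction of users actually hit per year
loss_per_victim = 1_000.0         # assumed average damage per victim, $

# Aggregate cost of following the advice, borne by the entire population
cost_of_advice = population * (minutes_per_user_per_year / 60) * hourly_rate

# Expected damage averted, even assuming the advice is 100% effective
damage_averted = population * victimization_rate * loss_per_victim

print(f"Cost of following advice: ${cost_of_advice:,.0f}")
print(f"Damage averted:           ${damage_averted:,.0f}")
print("Rational to follow:", cost_of_advice < damage_averted)
```

With these (invented) numbers, five minutes a year across 200 million users costs $250M of user time to avert roughly $2M of damage, which is exactly the kind of mismatch the paper describes.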
Takeaways for us security folks: (also greatly paraphrased :P)
- We need to better understand the actual harms faced by the users, and use that to design the risk and mitigation scenarios.
- User education is a cost borne by the entire population. The cost of any security advice should be in proportion to the victimization rate. A (good) way to improve the cost-benefit ratio, especially for rare exploits, is to target the at-risk population.
- Retiring irrelevant advice is necessary.
- Prioritization of advice is a must.
- Users’ time and effort must be recognized and taken into account.
What I agree with:
- (Actual) stats collection for the relevant/entire population.
- Cost-benefit analysis and comparisons.
- Remembering that users’ time isn’t free.
- Customizing advice for different groups based on risk levels (to a certain extent).
What I’m unsure about:
- Retiring irrelevant advice is going to be difficult, or is going to make things even more complicated for the end users. Take, for example, this particular phishing trick: http://email@example.com. Not all browsers will warn about or drop the basic-auth credentials in such a link. If we’re going to give advice tailored to specific browsers’ users, talk about swamping them with too much information…
- If security education is going to be customized properly for the user’s risk level, then why do we still need prioritization?
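The phishing trick mentioned above works because anything before the `@` in a URL's authority is basic-auth userinfo, so the real destination is whatever comes after the `@`. A minimal sketch of detecting it with Python's standard `urllib.parse` (the host names below are made up for illustration):

```python
from urllib.parse import urlsplit

def has_userinfo(url: str) -> bool:
    """Flag URLs whose authority carries userinfo, a common phishing trick."""
    # urlsplit exposes anything before '@' in the authority as username
    return urlsplit(url).username is not None

# The part before '@' looks like a trusted site but is just a "username";
# the browser actually connects to evil.example.
print(has_userinfo("http://www.paypal.com@evil.example/login"))  # True
print(has_userinfo("http://www.paypal.com/login"))               # False
print(urlsplit("http://www.paypal.com@evil.example/login").hostname)  # evil.example
```

A check like this is trivial for software, which is part of the point: asking every user to eyeball URLs for this pattern is far costlier than having browsers or mail filters do it.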
The main thing lacking in the (IT) security world, from what I see, is a way to measure and collect stats for just about anything related to risks and damages, especially for web applications. If that can be gotten (if ever), there’s going to be plenty of improvement, I think. Like they say: “If it can’t be measured, it can’t be improved (and vice versa).”
If you’ve read this far, you’re either a stalker/assassin, like me a lot, are as crazy about security education as I am, or have too much time on your hands. I’m thinking most of you would fit into the last category heh 😀