Security Quotient Research

AI scammers, holographic PMs and losing the race to the research pole


We live in interesting times.

If a royal wedding watched by half the planet or the pending implementation of an EU privacy regulation doesn’t float your boat – 5 days to GDPR! – tomorrow New Zealand’s Prime Minister will address the crowds at Techweek in holographic form. Likely it’s so she can keep up with work commitments and be in two places at once – and who wouldn’t benefit from cloning themselves to stay on top of email?

“Help me NZ techies, you’re my only hope….”

Meanwhile the boffins at Google have taken decades of research into AI and computer speech synthesis and produced an autonomous assistant in the form of ‘Duplex’ that can book a hair appointment for you and sound uncannily real in the process. Parody makers start your engines…

If the loping, door-opening robots of Boston Dynamics don’t have you reaching for that classic ’80s Terminator DVD, Juha Saarinen’s observations of Duplex’s abilities in adversarial human hands should prove a lightbulb moment:

Humanity has an infallible ability to subvert and pervert the coolest technology, and use it to hurt each other with.

Unfortunately, it’s all too easy to imagine how Duplex could be misused by robocallers and phone fraudsters who won’t start off the conversations with a “you are talking to an AI” warning.

Think email spam, phishing, romance scamming and 419ing, except they’ll arrive on your mobile phone.

More naturally sounding and behaving digital assistants backed by self-learning AI will make them more attractive to people, not less, so expect to speak to machines more often.

Google CEO Sundar Pichai told cheering crowds that Duplex understands the context and the nuance of conversation – no mean feat for those of us struggling to improve our EQ scores. His Duplex demo also prompted concern that more effort should go into protecting humans from AI deception.

As someone researching human vulnerabilities and the role they play in socio-technical internet attacks, this latest development reminded me just how far behind in my project timeline I’ve slipped in 2018.

In January this year I presented an update on pilot survey data that looked promising based on research into OCEAN personality facets and the role they may play in social engineering susceptibility.

The pilot survey requested basic demographics and used 62 questions from 3 psychometric scales to measure computer use, health and lifestyle factors and how they may shape risk appetite and risk perception:

SeBIS – Security Behavior Intentions Scale
Measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness

DOSPERT – Domain-Specific Risk Taking Scale
Assesses individual risk taking and risk attitude

CFC – Consideration of Future Consequences Scale
Identifies individuals who are more inclined to act in ways that are protective of their future health and well-being

Five basic hypotheses underlie the research:

  1. An average individual with average security knowledge, an average appreciation of future consequences and an average propensity for risk taking scores 60% across all three scales.
  2. Security knowledge, an appreciation of future consequences and a risk-averse nature result in higher scores.
  3. A lack of security knowledge, a desire for immediate returns and a risk-taking or sensation-seeking nature results in lower scores.
  4. A lower score correlates with previous adverse experiences. Testing this requires next-stage data bearing evidence of cybercrime/security impacts, e.g. falling victim to credential harvesting, or financial losses.
  5. A low score is predictive of being predisposed to socio-technical internet attacks.

The high-level concept is to generate a ‘Security Quotient’ score and to see whether it’s possible to test for high-risk human behaviour and mitigate it, either through additional security controls or by educating people in a targeted manner.

In short: can predictive analytics utilising psychometric profiling prevent internet users from falling victim to cybercrime?
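To make the scoring idea concrete, here is a minimal sketch of how a composite ‘Security Quotient’ might be computed. The normalisation approach, the equal weighting and the inversion of DOSPERT (where a higher score means more risk taking) are all illustrative assumptions, not the pilot survey’s actual scoring method:

```python
# Hypothetical sketch of a composite 'Security Quotient' score.
# Scale ranges, equal weighting and the DOSPERT inversion are
# illustrative assumptions, not the survey's actual method.

def normalise(raw: float, min_raw: float, max_raw: float) -> float:
    """Map a raw scale total onto a 0-100 range."""
    return 100 * (raw - min_raw) / (max_raw - min_raw)

def security_quotient(sebis: float, dospert: float, cfc: float) -> float:
    """Equal-weighted average of three normalised (0-100) scale scores.
    Higher DOSPERT means more risk taking, so it is inverted."""
    return round((sebis + (100 - dospert) + cfc) / 3, 1)

# A raw total of 48 on an assumed 16-item, 1-5 Likert scale:
print(normalise(raw=48, min_raw=16, max_raw=80))            # -> 50.0

# A respondent at 60% on every scale lands on the hypothesised
# 'average individual' benchmark of 60:
print(security_quotient(sebis=60, dospert=40, cfc=60))      # -> 60.0
```

Any real instrument would weight and validate the scales empirically; the point here is only that three scale scores collapse into one comparable number.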

Could personality profiling be used for more than just targeted advertising remarketing on search engines and social media? What if you could understand and quantify the nature of the people risk in your organisation as you can the technology risk?

Results from the pilot showed a distribution of scores from 28 valid responses, with one anonymous respondent identified as very high risk on two of the three scales.

To those attending, I summarised the next steps:

  • A larger survey dataset is necessary to validate the ‘average individual score’ concept of 60%.
  • Submissions by victims of cybercrime are required to validate the predictive ability of any such Security Quotient score.
  • Nationality should be captured in the full survey to evaluate whether cultural ‘individualism’ is a protective factor.

2018 project delays

A mix of family commitments and a new role working in Deloitte’s cyber team has pushed back the final survey by three months. The race is now on to complete this second stage and write up the findings.

Race might be the wrong word though. Two weeks ago – thanks to a good friend working in Westpac’s security team – I discovered that researchers at the Universities of Cambridge and Helsinki had developed the ‘Susceptibility to Persuasion II (StP-II)’ test that can be used to predict who will be more likely to become a victim of cybercrime.

Whilst this initially left me feeling like Robert Scott beaten to the South Pole by Roald Amundsen (but without the cold and suffering), my reading of their work suggests the Security Quotient concept is still valid.

Dr David Modic’s team developed the StP-II scale with an initial 138 items based on significant research into scam compliance. They had used the 12-item Consideration of Future Consequences Scale and confirmed that self-control is an important predictor of various behaviours including victimisation. Lack of premeditation – acting without thinking first – is a significant predictor of scam compliance. They also made use of the full DOSPERT-R scale (as opposed to just the recreational risk elements highlighted by Elie Bursztein’s 2016 research into USB drops) to evaluate individual risk preferences.

Read the full research and you’ll find the eventual StP-II scale drops to 54 core items to measure susceptibility to persuasion. The best part is that the test is now online, so give it a go and see how your personality stacks up.

But please be sure to take the updated Security Quotient survey once the final tweaks have been made, hopefully later this month. I don’t want to suffer the fate of Antarctic explorers…

Photo by @franckinjapan
