Tesla incident proves insider threat mitigation strategies must evolve
Last week, it was reported that Tesla co-founder and CEO Elon Musk had sent an email to the company’s employees accusing one of their co-workers, later identified as Martin Tripp, of engaging in malicious activity in an attempt to sabotage the electric carmaker after failing to receive a promotion he wanted. Specifically, Musk said Tripp, a former engineer for the company, made code changes to the organization’s operating system using phony usernames and exported a large amount of sensitive data to unknown third parties.
For his part, Tripp claims that he is merely a whistle-blower being made a scapegoat for leaking his concerns about the company to the media. Tesla is now suing Tripp for his alleged misconduct.
While mitigating the risks posed by insider threats has always been one of the top priorities of corporate security professionals, the alleged incident at Tesla demonstrates the complexities many companies face today in trying to identify potentially suspicious behavior by trusted employees. The fact that even an organization as technologically advanced as Tesla could be victimized by such an age-old problem also shows that no one is immune to these types of incidents.
Changing the Mindset
Although statistics show that the bulk of cyber-attacks and data breaches against organizations are perpetrated by external actors, the damage a malicious insider can inflict is immense, and many companies have only recently begun putting security controls in place to mitigate internal threats, according to Saryu Nayyar, CEO of behavior analytics firm Gurucul. “When we talk with many organizations, insider threat hasn’t been a key part of their security program, but we’re starting to see it become more and more important for organizations,” she says.
Traditionally, Nayyar says, because companies have treated their workers as trusted employees and given them the “keys to the kingdom,” in a sense, organizations have a hard time implementing countermeasures that treat those same employees as potentially untrustworthy.
“Are we really going to go in with the thought that our trusted people are going to do something that’s not good for the company? It’s like not trusting your co-worker who has been sitting next to you every day for 15 years, so there is a very different mindset there and I think that is a big obstacle,” Nayyar adds.
In fact, Nayyar says this type of mindset was pervasive at a manufacturing company where Gurucul’s technology was recently implemented as part of a pilot project that uncovered a serious insider threat.
“In talking to the CISO, the first reaction was, ‘No, these are trusted people.’ But what they started seeing was that over a period of time – they had lost 25 percent of their market share in the last three years and data was getting exfiltrated – they learned there was a competitive product right next to theirs on the market at half the price,” she says.
Mitigation Strategies
Nikolai Vargas, CISSP, CTO of Switchfast Technologies, says that while details about the alleged data theft at Tesla are still emerging, the incident should serve as a useful case study in what companies should do when they suspect data theft is occurring.
“Whenever data theft is suspected, IT staff need to first enter information gathering and preservation mode – collect access logs, capture forensic disk images, review network logs to determine where data is being transmitted, etc. This work is hard because it has to be done in a clandestine way so as to not tip off the bad actor, it has to be done very quickly, and there may be limits on what can be collected in cases of BYOD or personal accounts. Once you have proof that data theft is occurring, then you can switch to containment and cut off access,” Vargas explains. “In this regard, Tesla is making the right moves in using legal channels to preserve Tripp’s personal accounts, but in the end the data is ‘out there,’ and as we have seen with other cases of data theft, that exposure can have a ripple effect for years to come depending on the amount and sensitivity of the data stolen.”
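To picture what that “gather and preserve” step can look like in practice, the minimal sketch below copies a handful of log sources into an evidence store and hashes each copy for a simple chain-of-custody manifest before any containment action tips off the actor. It is illustrative only; the paths, file names and tooling are assumptions for the sake of example, not anything Tesla or Switchfast has described.

```python
#!/usr/bin/env python3
"""Illustrative only: a minimal log-preservation sketch.

The log sources and evidence directory below are hypothetical examples,
not details from the Tesla case or any vendor's incident-response tooling.
"""
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

SOURCES = [Path("/var/log/auth.log")]       # hypothetical log sources to preserve
EVIDENCE_DIR = Path("/srv/ir-evidence")     # hypothetical preservation store

def sha256(path: Path) -> str:
    """Hash a file so later tampering with the preserved copy is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def preserve(sources, evidence_dir):
    """Copy each source into the evidence store and record a custody manifest."""
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in sources:
        if not src.exists():
            continue
        dest = evidence_dir / src.name
        shutil.copy2(src, dest)  # copy2 preserves timestamps along with contents
        manifest.append({
            "source": str(src),
            "copied_to": str(dest),
            "sha256": sha256(dest),
            "collected_at": datetime.now(timezone.utc).isoformat(),
        })
    (evidence_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

if __name__ == "__main__":
    preserve(SOURCES, EVIDENCE_DIR)
```

A real response would, of course, extend this to forensic disk images and network captures, but the ordering Vargas describes is the point: preserve first, contain second.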
Nayyar says organizations need to monitor a wide range of employee activity on the network so they can spot changes in behavior that could indicate malicious intent.
“You already have this data, so the data is not being recreated. Most companies also have some sort of log aggregation capability. If not, they could live stream log data,” Nayyar recommends. “You should also look at your users and what access they have, as well as set up HR flags. Most of our customers are getting mature enough to where they have flags for things like performance reviews, so if someone’s performance wasn’t great, there is a disgruntled employee who didn’t get the promotion they were looking for, etc. – you can create those flags. No one has to see them, to maintain the privacy of the employee, but it can help you with risk modeling and evaluating their behavior deviation differently from other users.”
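For readers curious how an HR flag might feed into risk modeling, the toy sketch below scores how far a user’s daily data movement deviates from their own baseline and weights that deviation more heavily when a flag is present. The field names, thresholds and weighting are assumptions made for illustration; they are not Gurucul’s actual model.

```python
"""Illustrative only: a toy behavior-deviation score of the kind Nayyar
describes. Values and weights are assumptions, not a vendor's algorithm."""
from statistics import mean, stdev

def deviation_score(history_mb, today_mb, hr_flag=False):
    """Score today's data movement against the user's own historical baseline."""
    baseline = mean(history_mb)
    spread = stdev(history_mb) or 1.0     # guard against a perfectly flat history
    z = (today_mb - baseline) / spread    # z-score relative to personal baseline
    # An HR flag (e.g. a missed promotion) proves nothing on its own, but it can
    # weight the same deviation more heavily in a risk model.
    return z * (1.5 if hr_flag else 1.0)

# Example: a user who normally moves ~50 MB a day suddenly exports 900 MB.
history = [48, 52, 47, 55, 50, 49, 51]
print(deviation_score(history, 900))                # large positive score
print(deviation_score(history, 900, hr_flag=True))  # same deviation, weighted higher
```

The privacy point Nayyar raises carries over: the flag only adjusts a score, so no analyst has to see the underlying HR detail for it to influence the model.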
The bottom line, according to Nayyar, is that “behaviors don’t lie.”
“You can steal an identity but you can’t steal behavior,” she concludes.
About the Author:
Joel Griffin is the Editor-in-Chief of SecurityInfoWatch.com and a veteran security journalist. You can reach him at [email protected].