According to Pogue, the process starts with working backwards to find the individual components. “It’s both art and science,” requiring deep technical prowess combined with finely tuned human instinct.
The starting point is a cybersecurity team with diverse skills. Depending on the nature of the attack, these might include Linux, network analysis, or Windows registry expertise. Technology-assisted analysis helps narrow down vast amounts of data, but human intuition remains vital in identifying patterns and anomalies. These include small evidentiary components, such as the time of day a file was accessed, whether it is a system file or one the user typically accesses, and the IP address associated with those logins. That’s the science side, Pogue says.
The art is human analysis. “We look for anomalies within those logs and activities to show us those deviations, and sometimes those deviations are very subtle, and sometimes they are completely overt. Human beings have brains that are amazing association machines. We can spot patterns and anomalies like nothing else.”
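The split Pogue describes, with tooling narrowing the data before analysts judge the deviations, can be illustrated with a minimal log-triage sketch. The field names, thresholds, and sample records below are illustrative assumptions, not taken from any tool mentioned here.

```python
from collections import defaultdict

# Hypothetical login records: (user, hour_of_day, source_ip)
LOGINS = [
    ("alice", 9, "10.0.0.5"),
    ("alice", 10, "10.0.0.5"),
    ("alice", 11, "10.0.0.5"),
    ("alice", 3, "203.0.113.7"),   # odd hour for this user
    ("bob", 14, "10.0.0.9"),
]

def flag_anomalies(logins):
    """Flag logins at unusual hours, or from IPs absent from a user's baseline."""
    usual_hours = defaultdict(set)
    usual_ips = defaultdict(set)
    flagged = []
    for user, hour, ip in logins:
        seen = usual_hours[user]
        # Once any baseline exists, compare new activity against it
        if seen and all(abs(hour - h) > 4 for h in seen):
            flagged.append((user, hour, ip, "unusual hour"))
        elif usual_ips[user] and ip not in usual_ips[user]:
            flagged.append((user, hour, ip, "new IP"))
        usual_hours[user].add(hour)
        usual_ips[user].add(ip)
    return flagged

print(flag_anomalies(LOGINS))
# The 3am login from an unfamiliar IP is the only record flagged
```

A real deployment would score many more signals, but the structure mirrors the article’s point: the tooling surfaces candidate deviations, and a human decides which are subtle and which are overt.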
Cyber criminals take advantage of real-world events
Threat actors are well-informed and responsive to news and information about organizations, looking to exploit security weaknesses or spin up false alarms if an organization appears unprepared or has suffered other incidents. “There’s a risk of subsequent attacks as a result of an organization being known as not being particularly well prepared,” says Pogue.
Attackers are experts at communications, utilizing dark web channels, mobile phones, and encrypted chat platforms like Signal. “They know who’s responding well and who’s not. Who’s getting breached and who isn’t. This is their business. So, they know all about it,” Pogue says.
Other communication channels used by threat actors include Tor chat rooms and anonymous email services. “Organizations need to be aware of these channels when investigating breach claims,” Wong says.
To counter this, if there has been an arrest, law enforcement will look to assume the nickname or handle of the criminal and emulate that person for as long as possible to gather intelligence.
This feeds into the behavioral analysis required to verify the legitimacy of cyber criminals when dealing with an incident. Being human, they make mistakes, such as forgetting to use anonymity tools like Tor or VPNs. “Behavioral analysis of these mistakes can help identify and trace these actors,” says Wong.
A communications and response plan is key, even if it’s a false flag
An organization needs to have done its due diligence and practiced its incident response plan, which includes how to handle claims that turn out to be false. “This is where a strong, well-practiced relationship between legal, corporate communications, and security teams, including any outside support or retainer services available, really pays off for the organization,” says Netscout CISO, Debby Briggs.
Being able to engage a team across a range of disciplines to discuss the pros and cons of the situation is valuable. Likewise, it’s helpful to make sense of and learn from what others have done in similar situations. “At times, you may decide to take no action on the report of a false incident. Responding to a report may only serve to legitimize these reports and increase the visibility to a false statement and bad actor,” Briggs says.
Mandiant’s Wong says organizations need to be prepared to respond to breach alerts effectively, even if they turn out to be false, by undertaking drills beforehand. “Having a well-defined plan, conducting tabletop exercises, and ensuring that the right personnel are informed and trained are essential,” he says.
While handling breach announcements, organizations need to consider their communication strategy carefully, aiming to be transparent and factual while avoiding unnecessary panic or reputational damage.
Messaging should be along the lines of ‘we’re investigating a suspected security incident’ until there’s actual evidence that data was taken or customers were impacted. Otherwise, “it’s hard to have to walk it back,” Wong says.
How to respond to a claimed breach of a third party
In the case of auDA, the claim turned out to relate to a third party and didn’t involve auDA data. But assessing the credibility of claimed breaches of third parties can be more challenging because it’s done at arm’s length, not directly.
It’s good practice to maintain an updated list of all third parties that includes the scope of services they provide, the name of the internal business owner, the main point of contact, the type of data involved, and whether the third party maintains access to the network.
When facing a claimed breach on a third-party network or systems, a good rule of thumb in analyzing the credibility of a specific report would be to first examine the source of the report. “Consulting experts who are familiar with the background and history of the reporter is a good practice. All reports should be investigated,” Netscout’s Briggs says.
“When an incident with a third party arises, there are a number of questions that need to be addressed before an organization can decide its next best course of action. Generally, it needs to consider the amount and type of data that’s involved, as well as the nature of services that are being provided,” she says.
Briggs recommends formulating a risk score for each third party to help prioritize and analyze their associated risk. This can be applied in the event of a third-party breach. “Depending upon the situation, an organization may consider suspending access to its network, out of an abundance of caution, while the investigation is ongoing.”
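Briggs doesn’t prescribe a scoring formula. One common approach, sketched here purely as an assumption, is a weighted sum over the factors she raises, such as the type of data involved and whether the vendor has network access. All weights, factor names, and vendors below are hypothetical.

```python
# Hypothetical third-party risk scoring. Weights and factors are
# illustrative assumptions, not a formula prescribed by Briggs.
WEIGHTS = {
    "data_sensitivity": 3,    # 0-3 rating of data the vendor handles
    "network_access": 3,      # 0-3 rating of the vendor's network reach
    "service_criticality": 2, # 0-3 rating of how essential the service is
}

def risk_score(vendor):
    """Weighted sum of factor ratings; higher scores get reviewed first."""
    return sum(WEIGHTS[f] * vendor.get(f, 0) for f in WEIGHTS)

vendors = {
    "payroll-provider": {"data_sensitivity": 3, "network_access": 2,
                         "service_criticality": 3},
    "office-catering": {"data_sensitivity": 0, "network_access": 0,
                        "service_criticality": 1},
}

# Rank vendors so the riskiest are investigated, or suspended, first
ranked = sorted(vendors, key=lambda v: risk_score(vendors[v]), reverse=True)
print(ranked)
```

Precomputing such scores is what lets an organization act quickly when a third-party breach claim surfaces, including deciding whether to suspend a vendor’s network access while the investigation is ongoing.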
Can a false breach improve the incident response plan?
Acting on a false breach alert can be an opportunity to test the organization’s response plan in a way that shifts tabletop exercises to real-world drills. There’s always something to be discovered, even if an event turns out to be a non-incident.
“Look at the playbook. Did your team follow the playbook? If it’s a vendor incident, did it play out the way you thought it would? Learn from every single incident,” Wong says.
Briggs says that incident plans should address a range of different scenarios and if these haven’t been considered previously, the CISO may want to add them to their planning. “You should be prepared to address any scenario publicly, to your customers and to your employees,” she says.
Employees should know who to contact about incidents if they are the recipients of a breach report. And the experience of a false breach can sometimes reveal that the point of contact and process for escalating breaches may have gaps or problems.
“A well-tested and practiced incident response plan is an important tool to have implemented ahead of time. And a plan tailored to your organization should set out individual roles and specific procedures your organization needs to take depending upon the specific circumstances at hand,” she says.
CyberCX’s Pogue says a post-incident review, even if it’s a false breach, is vital to see what worked and where additional training and education may be needed, or where the playbook for the incident response plan may need to be updated.
False breaches should also be plotted on the organization’s risk matrix. “We can look at the risk register and the likelihood and the impact of this kind of breach on a risk matrix. We can see, had this been real, it would have been catastrophic,” Pogue says.
On the other hand, it can open conversations about when it’s appropriate to increase the organization’s risk appetite, and even add some additional steps to qualify security incidents. But the bottom line is that learning from false positives is crucial.
“You operate under the assumption that it’s the worst-case scenario, that this data really has been compromised, and you start all of those activities, because it’s far easier as the CISO to call a timeout if you find out it is not a real breach,” he says.
“The unfortunate truth of being a CISO is that your every decision is going to be evaluated after the fact: Did you wait too long? Did you inform the right people? Did you do all of these things? The last thing you want is not to follow the playbooks and incident response plan. Now you look foolish, the organization suffers loss of customer confidence, loss of market share and all sorts of negative things like that. It’s unnecessary, self-inflicted damage,” Pogue says.