Security Posts


The DAA File Format
Categories: Security Posts

Easier path discovery for network troubleshooting

BreakingPoint Labs Blog - 5 hours 12 min ago
The cost of managing complex networks is driven up by the time and effort you must spend to…
Categories: Security Posts

iBypass and Thoughts in a Traffic Jam

BreakingPoint Labs Blog - 5 hours 12 min ago
Each of us has sat in standstill traffic, trying to understand why this major highway we drive all…
Categories: Security Posts

Are you Feeling the Need for Speed?

BreakingPoint Labs Blog - 5 hours 12 min ago
If you haven’t seen the official trailer for Top Gun: Maverick, you need to. And if watching the…
Categories: Security Posts

Net Optics, Anue, BreakingPoint and Veriwave - Great Ixia Acquisitions

BreakingPoint Labs Blog - 5 hours 12 min ago
Acquisitions can be tough to get right and easy to get wrong. For example, HP acquired Autonomy for…
Categories: Security Posts

EVPN over SRv6 – Simplification with Unified Technology

BreakingPoint Labs Blog - 5 hours 12 min ago
In my last blog about SRv6, I reviewed SRv6 technology and Ixia’s solution to validate SRv6…
Categories: Security Posts

Buddy, Can You Spare a Nano-Second?

BreakingPoint Labs Blog - 5 hours 12 min ago
Customers in the finance sector often ask the question – “What is latency across an optical tap?”.…
Categories: Security Posts

Broadcast Industry Revolution - Migration to IP Infrastructure

BreakingPoint Labs Blog - 5 hours 12 min ago
The broadcast industry has embraced IP technology and is transitioning from traditional serial…
Categories: Security Posts

How to Implement Security Monitoring For Critical Infrastructure

BreakingPoint Labs Blog - 5 hours 12 min ago
I ran across an interesting statistic a couple weeks ago. According to a Ponemon Institute report…
Categories: Security Posts

Hybrid IT Monitoring: The ABCs of Network Visibility

BreakingPoint Labs Blog - 5 hours 12 min ago
The recently released 2019 State of the Cloud report by RightScale found that 58% of 800…
Categories: Security Posts

Flex Your Time-Sensitive Networking (TSN) Conformance Testing

BreakingPoint Labs Blog - 5 hours 12 min ago
Many times, my customers tell me that validating time-sensitive networking (TSN) is a big challenge…
Categories: Security Posts

Facebook's Voice Transcripts Were More Invasive Than Amazon's

Wired: Security - Sat, 2019/08/17 - 15:00
The Capital One hacker, a Bluetooth vulnerability, and more of the week's top security news.
Categories: Security Posts

Rashomon of disclosure

ADD / XOR / ROL - Sat, 2019/08/17 - 10:36
In a world of changing technology, there are few constants - but if there is one constant in security, it is the rhythmic flare-up of discussions about disclosure on the social-media-du-jour (mailing lists in the past, now mostly Twitter and Facebook).

Many people in the industry have wrestled with, and contributed to, the discussions, norms, and modes of operation - I would particularly like to highlight contributions by Katie Moussouris and Art Manion, but there are many that would deserve mentioning whose true impact is unknown outside a small circle. In all discussions of disclosure, it is important to keep in mind that many smart people have struggled with the problem. There may not be easy answers.

In this blog post, I would like to highlight a few aspects of the discussion that are important to me personally - aspects which influenced my thinking, and which are underappreciated in my view.

I have been on many (but not most) sides of the table during my career:
  • On the side of the independent bug-finder who reports to a vendor and who is subsequently threatened.
  • On the side of the independent bug-finder that decided reporting is not worth my time.
  • On the side of building and selling a security appliance that handles malicious input and that needs to be built in a way that we do not add net exposure to our clients.
  • On the side of building and selling a software that clients install.
  • On the side of Google Project Zero, which tries to influence the industry to improve its practices and rectify some of the bad incentives.
The sides of the table that are notably missing here are the role of the middle- or senior-level manager that makes his living shipping software on a tight deadline and who is in competition for features, and the role of the security researcher directly selling bugs to governments. I will return to this in the last section.

I expect almost every reader will find something to vehemently disagree with. This is expected, and to some extent, the point of this blog post.

The simplistic view of reporting vulnerabilities to vendors

I will quickly describe the simplistic view of vulnerability reporting / patching. It is commonly brought up in discussions, especially by folks that have not wrestled with the topic for long. The gist of the argument is:
  1. Prior to publishing a vulnerability, the vulnerability is unknown except to the finder and the software vendor.
  2. Very few, if any, people are at risk while we are in this state.
  3. Publishing the information before the vendor publishes a patch puts many people at risk (because they can now be hacked); this should hence not happen.
Variants of this argument are used to claim that no researcher should publish vulnerability information before patches are available, or that no researcher should publish information until patches are applied, or that no researcher should publish information that is helpful for building exploits.

This argument, at first glance, is simple, plausible, and wrong. In the following, I will explain the various ways in which this view is flawed.

The Zardoz experience

For those that joined cybersecurity in recent years: Zardoz was a mailing list on which "whitehats" discussed and shared security vulnerabilities with each other so they could be fixed without the "public" knowing about them.
The result of this activity was: Every hacker and active intelligence shop at the time wanted to have access to this mailing list (because it would regularly contain important new attacks). They generally succeeded. Quote from the Wikipedia entry on Zardoz:
"On the other hand, the circulation of Zardoz postings among computer hackers was an open secret, mocked openly in a famous Phrack parody of an IRC channel populated by notable security experts."

History shows, again and again, that small groups of people that share vulnerability information ahead of time always have at least one member compromised; there are always attackers that read the communication.

It is reasonably safe to assume that the same holds for the email addresses to which security vulnerabilities are reported. These are high-value targets, and getting access to them (even if it means physical tampering or HUMINT) is so useful that well-funded persistent adversaries must be assumed to have access to them. It is their job, after all.

(Zardoz isn't unique. Other examples are unfortunately less-well documented. Mail spools of internal mailing lists of various CERTs were circulated in hobbyist hacker circles in the early 2000s, and it is safe to assume that any dedicated intelligence agency today can reproduce that level of access.)
The fallacy of uniform risk

Risk is not uniformly distributed throughout society. Some people are more at risk than others: dissidents in oppressive countries, holders of large quantities of cryptocurrency, people who think their work is journalism when the US government thinks their work is espionage, political stakeholders and negotiators. Some of them face quite severe consequences from getting hacked, ranging from mild discomfort to death.

The majority of users in the world are much less at risk: The worst-case scenario for them, in terms of getting hacked, is inconvenience and a moderate amount of financial loss.
This means that the naive "counting" of victims in the original argument makes a false assumption:  Everybody has the same "things to lose" by getting hacked. This is not the case: Some people have their life and liberty at risk, but most people don't. For those that do not, it may actually be rational behavior to not update their devices immediately, or to generally not care much about security - why take precautions against an event that you think is either unlikely or largely not damaging to you?
For those at risk, though, it is often rational to be paranoid - to avoid using technology entirely for a while, to keep things patched, and to invest time and resources into keeping their things secure.
Any discussion of the pros and cons of disclosure should take into account that risk profiles vary drastically. Taking this argument to the extreme, the question arises: "Is it OK to put 100m people at risk of inconvenience if I can reduce the risk of death for 5 people?"
I do not have an answer for this sort of calculation, and given the uncertainty of all probabilities and data points in this, I am unsure whether one exists.

Forgetting about patch diffing

One of the lessons that our industry sometimes (and to my surprise) forgets is: public availability of a patch is, from the attacker's perspective, not much different from a detailed analysis of the vulnerability, including a vulnerability trigger.
There used to be a cottage industry of folks that analyzed patches and wrote reports on what the fixed bugs were, whether they were fixed correctly, and how to trigger them. They usually operated away from the spotlight, but that does not mean they did not exist - many were our customers.
People in the offensive business can build infrastructure that helps them rapidly analyze patches and get the information they need out of them. Defenders, mostly for organizational rather than technical reasons, cannot do this. This means that in the absence of a full discussion of the vulnerability, defenders will be at a significant information disadvantage compared to attackers.
Without understanding the details of the vulnerability, networks and hosts cannot be monitored for its exploitation, and mitigations-other-than-patching cannot be applied.
Professional attackers, on the other hand, will have all the information about a vulnerability not long after they obtain a patch (if they did not have it beforehand already).
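As a deliberately naive illustration of this information asymmetry, the sketch below shows why a public patch already leaks the rough location of a vulnerability: a patch changes only a small part of a binary, so even a crude comparison of the old and new build points an analyst at the fixed code. Real patch-diffing pipelines work at the function and basic-block level with dedicated diffing tools; this standard-library-only sketch just hashes fixed-size chunks of the two files.

```python
# Naive patch-locating sketch: hash fixed-size chunks of an old and a new build
# and report where they differ. Illustrative only; real tooling diffs functions
# and basic blocks after disassembly.
import hashlib
import sys

CHUNK = 4096  # compare the binaries in 4 KiB chunks

def chunk_hashes(path):
    """Return one SHA-256 digest per CHUNK-sized block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def diff_binaries(old_path, new_path):
    """Print the rough file offsets where the two builds differ."""
    old, new = chunk_hashes(old_path), chunk_hashes(new_path)
    for i in range(max(len(old), len(new))):
        a = old[i] if i < len(old) else None
        b = new[i] if i < len(new) else None
        if a != b:
            print("changed region near offset 0x%x" % (i * CHUNK))

if __name__ == "__main__":
    # usage: python chunkdiff.py old_build.dll new_build.dll
    diff_binaries(sys.argv[1], sys.argv[2])
```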

The fallacy of "do not publish triggers"When publishing about a vulnerability, should "triggers", small pieces of data that hit the vulnerability and crash the program, be published?
Yes, building the first trigger is often time-consuming for an attacker. Why would we save them the time?
Well, because without a public trigger for a vulnerability, at least, it is extremely hard for defensive staff to determine whether a particular product in use may contain the bug in question. A prime example of this is CVE-2012-6706: everybody assumed that the vulnerability was only present in Sophos; no public PoC was provided. So nobody realized that the bug lived in upstream Unrar, and it wasn't until 2017 that it got re-discovered and fixed. Five years of extra time for a bug because no trigger was published.
If you have an Antivirus Gateway running somewhere, or any piece of legacy software, you need at least a trigger to check whether the product includes the vulnerable software. If you are attempting to build any form of custom detection for an attack, you also need the trigger.

The fallacy of "do not publish exploits"Now, should exploits be published? Clearly the answer should be no?
In my experience, even large organizations with mature security teams and programs often struggle to understand the changing nature of attacks. Many people that are now in management positions cut their teeth on (from today's perspective) relatively simple bugs, and have not fully understood or appreciated how exploitation has changed.
In general, defenders are almost always at an information disadvantage: Attackers will not tell them what they do, and gleefully applaud and encourage when the defender gets a wrong idea in his head about what to do. Read the declassified cryptolog_126.pdf Eurocrypt trip report to get a good impression of how this works.
"Three of the last four sessions were of no value whatever, and indeed there was almost nothing at Eurocrypt to interest us (this is good news!). The scholarship was actually extremely good; it's just that the directions which external cryptologic researchers have taken are remarkably far from our own lines of interest."

Defense has many resources, but many of them are misapplied: mitigations that do not hold up to an attacker slightly changing strategies, products bought that do not change the attacker's calculus or the exploit economics, etc.
A nontrivial part of this misapplication is information scarcity about real exploits. My personal view is that Project Zero's exploit write-ups, and the many great write-ups by Pwn2Own competitors and other security research teams (Pangu and other Chinese teams come to mind) about the actual internal mechanisms of their exploits, are invaluable for transmitting an understanding of actual attacks to defenders, and are necessary to help the industry stay on course.
Real exploits can be studied, understood, and potentially used by skilled defenders for both mitigation and detection, and to test other defensive measures.

The reality of software shipping and prioritization

Companies that sell software make their money by shipping new features. Managers in these organizations get promoted for shipping said features and reaching more customers. If they succeed in doing so, their career prospects are bright, and by the time the security flaws in the newly-shipped features become evident, they are four steps up the career ladder and two companies away from the risk they created.

The true cost of attack surface is not properly accounted for in modern software development (even if you have an SDLC); largely because this cost is shouldered by the customers that run the software - and even then, only by a select few that have unusual risk profiles.

A sober look at current incentive structures in software development shows that there is next to zero incentive for a team that ships a product to invest in security on a 4-5 year horizon. Everybody perceives themselves to be in breakneck competition, and velocity is prioritized. This includes bug reports: The entire reason for the 90-day deadline that Project Zero enforced was the fact that without a hard deadline, software vendors would routinely not prioritize fixing an obvious defect, because ... why would you distract yourself with doing it if you could be shipping features instead?

The only disincentive to adding new attack surface these days is getting heckled on a blog post or in a Blackhat talk. Has any manager in the software industry ever had their career damaged by shipping particularly broken software and incurring risks for their users? I know of precisely zero examples. If you know of one, please reach out, I would be extremely interested to learn more.

The tech industry as risk-taker on behalf of others

(I will use Microsoft as an example in the following paragraphs, but you can replace it with Apple or Google/Android with only minor changes. The tech giants are quite similar in this.)
Microsoft has made more than $248bn in profits since 2005. In no year did they make less than $1bn in profits per month. Profits in the decade leading up to 2005 were lower, and I could not find numbers, but even in 2000 MS was raking in more than a billion in profits a quarter. And part of these profits were made by incurring risks on behalf of their customers - by making decisions to not properly sandbox the SMB components, by under-staffing security, by not deprecating and migrating customers away from insecure protocols.

The software product industry (including mobile phone makers) has reaped excess profits for decades by selling risky products and offloading the risk onto their clients and society. My analogy is that they constructed financial products that yield a certain amount of excess return but blow up disastrously under certain geopolitical events, then sold some of the excess return and *all* of the risk to a third party that is not informed of the risk.
Any industry that can make profits while offloading the risks will incur excess risks, and regulation is required to make sure that those that make the profits also carry the risks. Due to historical accidents (the fact that software falls under copyright) and unwillingness to regulate the golden goose, we have allowed 30 years of societal-risk-buildup, largely driven by excess profits in the software and tech industry.
Now that MS (and the rest of the tech industry) has sold the rest of society a bunch of toxic paper that blows up in case of some geopolitical tail events (like the resurgence of great-power competition), they really do not wish to take the blame for it - after all, there may be regulation in the future, and they may have to actually shoulder some of the risks they are incurring.
What is the right solution to such a conundrum? Lobbying, and a concerted PR effort to deflect the blame. Security researchers, 0-day vendors, and people that happen to sell tools that could be useful to 0-day vendors are much more convenient targets than admitting: All this risk that is surfaced by security research and 0-day vendors is originally created for excess profit by the tech industry.
FWIW, it is rational for them to do so, but I disagree that we should let them do it :-). 
A right to know

My personal view on disclosure is influenced by the view that consumers have a right to get all available information about the known risks of the products they use. If an internal Tobacco industry study showed that smoking may cause cancer, that should have been public from day 1, including all data.

Likewise, consumers of software products should have access to all known information about the security of their product, all the time. My personal view is that the 90-day deadlines that are accepted these days are an attempt at balancing competing interests (availability of patches vs. telling users about the insecurity of their device).

Delaying much further or withholding data from the customer is - in my personal opinion - a form of deceit; my personal opinion is that the tech industry should be much more aggressive in warning users that under current engineering practices, their personal data is never fully safe in any consumer-level device. Individual bug chains may cost a million dollars now, but that million dollars is amortized over a large number of targets, so the cost-per-individual compromise is reasonably low.

I admit that my view (giving users all the information so that they can (at least in theory) make good decisions using all available information) is a philosophical one: I believe that withholding available information that may alter someone's decision is a form of deceit, and that consent (even in business relationships) requires transparency with each other. Other people may have different philosophies.

Rashomon, or how opinions are driven by career incentives

The movie Rashomon, which gave this blog post its title, is a beautiful black-and-white film from 1950, directed by the famous Akira Kurosawa. From the Wikipedia page:
"The film is known for a plot device that involves various characters providing subjective, alternative, self-serving, and contradictory versions of the same incident."

If you haven't seen it, I greatly recommend watching it.

The reason why I gave this blog post the title "Rashomon of disclosure" is to emphasize the complexity of the situation. There are many facets, and my views are strongly influenced by the sides of the table I have been on - and those I haven't been on.

Everybody participating in the discussion has some underlying interests and/or philosophical views that influence their argument.

Software vendors do not want to face up to generating excess profits by offloading risk to society. 0-day vendors do not want to face up to the fact that a fraction of their clients kill people (sometimes entirely innocent ones), or at least break laws in some jurisdiction. Security researchers want to have the right to publish their research, even if they fail to significantly impact the broken economics of security.

Everybody wants to be the hero of their own story, and in their own account of the state of the world, they are.

None of the questions surrounding vulnerability disclosure, vulnerability discovery, and the trade-offs involved in it are easy. People that claim there is an easy and obvious path to go about security vulnerability disclosure have either not thought about it very hard, or have sufficiently strong incentives to self-delude that there is one true way.

After 20+ years of seeing this debate go to and fro, my request to everybody is: When you explain to the world why you are the hero of your story, take a moment to reflect on alternative narratives, and make an effort to recognize that the story is probably not that simple.

Categories: Security Posts

IDA 7.4: IDAPython and Python 3

Hex blog - Thu, 2019/08/01 - 09:34
IDA 7.4 will still ship with IDAPython for Python 2.7 by default, but users will now have the opportunity to pick IDAPython for Python 3.x at installation-time!
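As an illustrative sketch (not from the linked article), the snippet below shows IDAPython code written so that it runs unchanged under either the Python 2.7 or the Python 3.x interpreter that IDA 7.4 can be installed with, using only standard IDA 7.x modules.

```python
# Sketch of a 2.7/3.x-portable IDAPython script; run it from within IDA
# (File > Script file...). Uses the standard IDA 7.x API modules.
from __future__ import print_function  # no-op on Python 3, needed for 2.7

import idautils
import idc

# Enumerate all functions in the open database and print address and name.
for ea in idautils.Functions():
    print("0x%x  %s" % (ea, idc.get_func_name(ea)))
```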
Categories: Security Posts

IDA 7.4: Turning off IDA 6.x compatibility in IDAPython by default

Hex blog - Thu, 2019/08/01 - 09:32
IDA 7.4 will ship with the IDAPython “IDA 6.x” compatibility layer off by default. Please see this article for more information!
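An illustrative sketch of what this means for existing scripts (not from the linked article): old IDA 6.x-style global names stop resolving once the compatibility layer is off, while the IDA 7.x-native equivalents keep working regardless of that setting.

```python
# Sketch: porting away from the IDA 6.x compatibility layer.

# IDA 6.x style, provided by the compatibility layer:
#   ea = ScreenEA()
#   name = GetFunctionName(ea)

# IDA 7.x-native equivalents:
import idc

ea = idc.get_screen_ea()      # replaces ScreenEA()
name = idc.get_func_name(ea)  # replaces GetFunctionName()
print("cursor is inside %s at 0x%x" % (name, ea))
```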
Categories: Security Posts

Using Anomaly Detection to find malicious domains

Fox-IT - Tue, 2019/06/11 - 15:00
Applying unsupervised machine learning to find 'randomly generated' domains. Authors: Ruud van Luijk and Anne Postma.

At Fox-IT we perform a variety of research and investigation projects to detect malicious activity, to improve the service of our Security Operations Center. One of these areas is applying data science techniques to real-world data in real-world production environments, such as anomalous SMB sequences, beaconing patterns, and other unexpected patterns. This blog entry shares an application of machine learning to detect random-like patterns, indicating possible malicious activity.

Attackers use domain generation algorithms[1] (DGAs) to make a resilient Command and Control[2] (C2) infrastructure. Automatic and large-scale malware operations pose a challenge for the C2 infrastructure of malware: if defenders identify key domains of the malware, these can be taken down or sinkholed, weakening the C2. To overcome this challenge, attackers may use a domain generation algorithm. A DGA dynamically generates a large number of seemingly random domain names and then selects a small subset of these domains for C2 communication. The generated domains are computed from a given seed, which can consist of numeric constants, the current date, or even the Twitter trend of the day. Based on this same seed, each infected device will produce the same domain. The rapid change of C2 domains in use allows attackers to create a large network of servers that is resilient to sinkholing, takedowns, and blacklisting. If you sinkhole one domain, another pops up the next day or the next minute. This technique is commonly used by multiple malware families and actors; for example, Ramnit, Gozi, and Quakbot use generated domains.

Methods for detection

Machine-learning approaches have proven effective at detecting DGA domains, in contrast to static rules. The input of these approaches may for example consist of the entropy, frequency of occurrence, top-level domain, number of dictionary words, length of the domain, and n-grams. However, many of these approaches need labelled data: you need to know a lot of 'good' domains and a lot of DGA domains. Good domains can be taken, for example, from the Alexa and Majestic million sets, and DGA domains can be generated from known malicious algorithms. While these DGA domains are valid, they are only valid for as long as that specific algorithm remains in use. If there is a new type of DGA, chances are your model is no longer correct and does not detect newly generated domains. Language regions pose a challenge for the 'good' domains: each language has different structures and combinations, and taking the Alexa or Majestic million is a one-size-fits-all approach in which nuances might get lost.

To overcome the challenges of labelled data, unsupervised machine learning might be a solution. These approaches do not need an explicit DGA training set – you only need to know what is normal or expected. A majority of research moves to variants of neural networks, which require a lot of computational power to train and predict. With the amount of network data this is not necessarily a deal-breaker if there is ample computing power, but it certainly is a factor to consider. An easier-to-implement solution is to look solely at the occurrences of n-grams to define what is normal.
N-grams are sequences of N consecutive elements such as words or letters, where bi-grams (2-grams) are sequences of two, tri-grams (3-grams) are sequences of three, etc. To illustrate with the domain 'google.com': the tri-grams of 'google' are 'goo', 'oog', 'ogl', and 'gle'. This is an intuitive way to dissect language. Because, what are the odds you see 'kzp' in a domain? And what are the odds you see 'oog' in a domain?

We calculate the domain probability by multiplying the probabilities of each of its tri-grams and normalise by dividing by the length of the domain. We chose an unconditional probability, meaning we ignore the dependency between n-grams, as this speeds up training and calculation times. We also ignored the top-level domain (e.g. ".co.uk", ".org"), as these are common in both normal and DGA domains, to focus our model on the parts of the domain that are distinctive. If the domain probability is below a predefined threshold, the domain deviates from the baseline and is likely a DGA domain.

Results

To evaluate this technique we trained on roughly 8 million non-unique common names of a network, thereby creating a baseline of what is normal for this network. We evaluated the model by scoring one million non-unique common names and roughly 125,000 DGA domains from multiple algorithms, provided by Johannes Bader[3]. We excluded from both the training and evaluation sets some domains that are known to use randomly generated (sub)domains, such as content delivery networks.

The log-probability distributions of the baseline domains (i.e. the domains you would expect to see) and the DGA domains show a clear distinction between the two, but also a small overlap between -10 and -7.5. This is because some DGA domains are much like regular domains, some baseline domains are random-like, and for some domains our model wasn't able to correctly distinguish them from DGA domains.

For our detection to be practically useful in large operations, such as Security Operations Centers, we need a very low false positive rate. We also assumed that every baseline has a small contamination ratio; we chose a ratio of 0.001% and use this as the cut-off value between predicting a domain as DGA or not. During hunting this threshold may be increased or completely ignored.

                      True DGA    True Normal
  Predicted DGA       94.67%      ~0
  Predicted Normal    6.33%       ~100%
  Total               100%        100%

If we take the cut-off value at this point we get an accuracy (the percentage correct) of 99.35% and an F1-score of 97.26.

Conclusion

DGA domains are a tactic used by various malware families. Machine learning approaches have proven useful in the detection of this tactic, but fail to generalize into a simple and robust solution for production. By relaxing some restrictions on the math and compensating with a lot of baseline data, a simple and effective solution can be found. This solution does not rely on labelled data, is on par with scientific research, and has the benefit of taking into account the common language of regular domains used in the network. We demonstrated this solution with hostnames in common names, but it is also applicable to HTTP and DNS. Moreover, a wide range of applications is possible, since it detects deviations from the expected: for example, randomly generated file names, deviating hostnames, unexpected sequences of connections, etc.
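The following is a minimal sketch of the tri-gram scoring described above, not Fox-IT's production code. The baseline hostnames and the cut-off value in the usage example are made-up placeholders; a real deployment would train on millions of observed names and derive the threshold from the assumed contamination ratio.

```python
# Minimal tri-gram baseline model: score hostnames by their length-normalised
# log-probability under a model of "normal" names; low scores look DGA-like.
import math
from collections import Counter

def trigrams(name):
    return [name[i:i+3] for i in range(len(name) - 2)]

class TrigramModel:
    def __init__(self, baseline_names, alpha=1.0):
        counts = Counter()
        for name in baseline_names:
            counts.update(trigrams(name.lower()))
        total = sum(counts.values())
        vocab = len(counts) + 1
        # Smoothed, unconditional tri-gram log-probabilities.
        self._logp = {g: math.log((c + alpha) / (total + alpha * vocab))
                      for g, c in counts.items()}
        self._unseen = math.log(alpha / (total + alpha * vocab))

    def score(self, name):
        """Length-normalised log-probability of a hostname (TLD already stripped)."""
        name = name.lower()
        grams = trigrams(name)
        if not grams:
            return 0.0
        logp = sum(self._logp.get(g, self._unseen) for g in grams)
        return logp / len(name)

# Usage sketch with made-up baseline data and a placeholder threshold.
baseline = ["mail", "intranet", "fileserver01", "vpn-gateway", "printserver"]
model = TrigramModel(baseline)
for host in ["mailserver02", "xkqzjwtrpl"]:
    s = model.score(host)
    print(host, round(s, 2), "DGA-like" if s < -5.0 else "normal")
```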
  1. This technique was recently added to MITRE ATT&CK. https://attack.mitre.org/techniques/T1483/
  2. For more information about C2, see: https://attack.mitre.org/tactics/TA0011/
  3. https://github.com/baderj/domain_generation_algorithms
Categories: Security Posts

Pattern Welding Explained as Wearable Art

Niels Provos - Tue, 2018/08/28 - 06:37

Pattern-welding was used throughout the Viking age to imbue swords with intricate patterns that were associated with mystical qualities. This visualization shows the pattern progression in a twisted rod with increasing removal of material. It took me two years of intermittent work to get to this image. I liked this image so much that I ordered it for myself as a t-shirt and am looking forward to people asking me what the image is all about. If you want a t-shirt yourself, you can order this design via RedBubble. If you end up ordering one, let me know if it gets you into any interesting conversations!

Categories: Security Posts
