Stop using Virustotal to measure how AV sucks!

We recently came across an article which is, once again, FUD about how AV sucks. VirusTotal itself wrote about this problem years ago. Although we agree that most AV is not as good as its marketing materials claim, that is not the point.

Let’s look at this particular exploit for a moment. Quoting from the article, “The attack, originating from, was launched though an iFrame which was not detected by 52 anti-virus products, researchers said.” Uploading a malicious Flash file to VirusTotal and pointing at the 0/57 detection rate is just lame.

Here are just a few AV components which can block the infection:

  • URL blacklisting (not effective, but still)
  • URL reputation – high false positive rates but can be very effective
  • Analysis of HTML/Javascript code to detect the exploit kit (this looks pretty effective when implemented properly)
  • Analysis of the exploit code itself, whether it is Flash, Javascript, Silverlight, whatever. Exploit kits are very good at obfuscating this layer.
  • Blocking the malware download – URL blacklisting/reputation/static AV signatures/heuristics. These detections can be bypassed, but work most of the time.
  • Blocking the execution of the malware – some AV engines do have real exploit protection, which can detect that an unknown exploit is trying to start malware on the machine – and block it. I know it, I have seen it with my very own eyes. Multiple times. I have also seen this protection being bypassed. It is not 100% perfect.
  • Blocking the malware by reputation – this can be very effective, as previously unknown binaries are blocked. A few AV products use this technique.
  • Blocking malware based on how it interacts with the OS – I don’t consider this real protection, as it means the malware has already started and done something (malicious), and only after some time is one of its actions flagged as malicious by the AV. Although this is somewhat late, it can still block the real risk, e.g. banking trojans stealing your money.
  • Scheduled scans: Very late detection, but still, it is better late than never.
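One of the layers above, static analysis of HTML/Javascript code for exploit-kit markers, can be illustrated with a minimal sketch. The patterns and page content below are our own illustrative assumptions, not any vendor's real signatures:

```python
import re

# Illustrative heuristics only -- real engines use far richer signals
# (emulation, deobfuscation, reputation) than simple pattern matching.
SUSPICIOUS_PATTERNS = [
    (re.compile(r'<iframe[^>]*(?:width|height)\s*=\s*["\']?0', re.I),
     'zero-sized iframe'),
    (re.compile(r'<iframe[^>]*visibility\s*:\s*hidden', re.I),
     'hidden iframe'),
    (re.compile(r'eval\s*\(\s*unescape\s*\(', re.I),
     'eval(unescape(...)) obfuscation'),
    (re.compile(r'String\.fromCharCode\s*\(', re.I),
     'fromCharCode decoding'),
]

def scan_html(html: str) -> list[str]:
    """Return the names of suspicious markers found in an HTML page."""
    return [name for pattern, name in SUSPICIOUS_PATTERNS if pattern.search(html)]

page = '<html><iframe src="http://evil.example" width=0 height=0></iframe></html>'
print(scan_html(page))  # -> ['zero-sized iframe']
```

This is exactly the layer that exploit kits target with obfuscation, which is why a proper engine pairs such static checks with script emulation.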

Looking at this particular exploit, I can confirm that some AV products did detect and block this threat on day 0. Others did not. The Angler exploit kit is especially dangerous because of its in-memory malware, which renders a lot of AV protection components useless.



And yes, all AV can be bypassed. Imagine an AV which provides 100% proactive protection, out of the box. If something like that existed, all the developers at the AV company could be fired, because the AV would never need any maintenance or updates. In other words, most of the updates you get for your AV engine are about threats which bypassed the AV yesterday …

Only a real-world protection test, exercising all AV components, can measure how effective today’s endpoint protection products/internet security suites/AV engines really are.

We would like to thank Kafeine for sharing the sample with us.


Using VirusTotal to compare AV protection is unprofessional and lame. Don’t do that. Especially when it comes to exploit kits …

PS: using VirusTotal to test Android AV is more realistic than testing Windows AV, especially against exploits. AV on Android is most of the time just a plain static scanner.

Update (2015/01/30): the screenshot has been fixed

Read More

The many fails of Internet Security Suites

This blog post is a follow-up to our quarterly Online Banking Certification project.

During our Q3 tests – especially the Botnet test – we witnessed many problems with the Internet Security Suites, and in this blog post we share our experiences with these problems. The Botnet test measures the remediation capabilities of Internet Security Suites. First, we infect a clean, unprotected system with a common banking trojan (e.g. Zeus, Citadel, SpyEye), and after that we install the Internet Security Suite. Because the botnet C&C is fully operated by MRG Effitas, we can verify the information-stealing capabilities of the banking trojans with 100% certainty: we navigate to an online banking site, log in, and check the malware C&C panel for the extracted credentials. The C&C server is firewalled, so that only our lab can connect to it.
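The verification step of this methodology can be sketched in a few lines. The function name, marker value, and record format below are our own illustrative assumptions, not the actual MRG Effitas tooling:

```python
# Sketch of the verification logic: plant a unique marker credential,
# log in to the banking site from the infected machine, then check
# whether that marker shows up in the data exfiltrated to the C&C panel.

MARKER_PASSWORD = "Lab-Only-Secret-20140901"  # unique, never used elsewhere

def credentials_were_stolen(cc_panel_records: list[str]) -> bool:
    """True if any record captured by the C&C contains the marker credential."""
    return any(MARKER_PASSWORD in record for record in cc_panel_records)

# Example: records scraped from the (lab-only, firewalled) C&C panel
records = ["user=test&pass=Lab-Only-Secret-20140901&bank=example"]
print(credentials_were_stolen(records))  # -> True
```

Because the marker is unique to the test run, its appearance on the C&C side proves the trojan stole it, with no ambiguity about where the credential came from.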


During the test, we witnessed the following problems.

Botnet files not detected

In this test we used rather old samples (we will improve this in our next test), and yet some of the Internet Security Suites (ISS) were not capable of detecting the banking trojans. E.g. some vendors do not initiate a mandatory quick scan during or after installation, nor do they schedule any quick scans. This is really bad practice.

Inconsistent behaviour/block

Some vendors failed to protect the user in the first test run, but protected the user in subsequent runs. During the first run, the protected browser usually crashed and was restarted automatically. We believe this was caused by key components not being loaded into the browser consistently. This problem can result in stolen banking credentials.

Missing alert during initial scan

Some vendors detected the banking trojans during the security product installation, but failed to warn the user about the detected and removed threat. However, the detailed AV log revealed the threat detection and removal. In the case of any malware, it is important to notify the user about what has been detected, so the user can take precautionary measures (e.g. change passwords, notify the bank, etc.). Especially with a banking trojan, informing the user about what was detected on the computer is non-optional. We also believe that some high-level instructions should be displayed to the user, e.g. that it is advisable to change passwords and contact the financial institution (to change credit cards, check transactions, etc.).

Missing log and alert

Some vendors detected the threat during the security product installation, but failed to warn the user about the detected and removed threat, and even failed to log the action in the detailed AV log. As detailed in the previous paragraph, it is important to notify users about detected and removed threats.

Missing mandatory reboot after remediation

Some vendors detected the banking trojan (SpyEye) on the disk and successfully removed it, but failed to detect the malware in memory. On top of that, these ISSs did not enforce (or even suggest) a reboot, so the malware stayed fully operational in memory until the next restart. Nowadays most people don’t restart their systems very often, so the threat could stay in memory for weeks.
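The “removed from disk, alive in memory” condition can be modelled with a simple sketch: a running process whose backing executable was just deleted by remediation is a likely in-memory survivor. This is a conceptual model of the check, not how any tested product actually works:

```python
# Conceptual sketch: flag processes whose on-disk executable has been
# removed by remediation but which are still running in memory.

def in_memory_survivors(running: dict[int, str],
                        removed_files: set[str]) -> dict[int, str]:
    """Map of pid -> exe path for processes whose executable was deleted."""
    return {pid: exe for pid, exe in running.items() if exe in removed_files}

running = {
    1204: r"C:\Windows\explorer.exe",
    3388: r"C:\Users\victim\AppData\spyeye.exe",
}
removed = {r"C:\Users\victim\AppData\spyeye.exe"}  # deleted by the AV

# Flags pid 3388: the trojan still runs even though its file is gone,
# which is exactly why a mandatory reboot (or process kill) is needed.
print(in_memory_survivors(running, removed))
```

Killing the flagged process, or forcing a reboot, would close exactly the window the failing products left open.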

Failed categorization

One vendor categorized SpyEye as Citadel. Although this is not a big issue, we believe proper categorization could help users. In another case, SpyEye was categorized as a low-risk threat. A banking trojan is anything but a low-risk threat.

Fail to block

And last but not least, there was one product which detected the threats in all three cases and gave the user the option to block them. After clicking on block, we tested the password-stealing capability of the trojans, and all three were still able to steal the passwords from the browser. In this case, the product gave the user a false sense of security: the user would have thought the browser was protected, although it was not.


These issues highlight the shortcomings of Internet Security Suites. We believe these products should have been tested more thoroughly, either internally or by independent test companies. Vendors can find contact information at the following URL in case they want to test their product with MRG Effitas:

Read More

New anti-APT tools are no silver bullets: An independent test of APT attack detection appliances


CrySyS Lab, BME

November 26, 2014.

The term Advanced Persistent Threat (APT) refers to a potential attacker that has the capability and the intent to carry out advanced attacks against specific high profile targets in order to compromise their systems and maintain permanent control over them in a stealthy manner. APT attacks often rely on new malware, which is not yet known to and recognized by traditional anti-virus products. Therefore, a range of new solutions, specifically designed to detect APT attacks, have appeared on the market in the recent past, including Cisco’s SourceFire, Checkpoint, Damballa, Fidelis XPS, FireEye, Fortinet, LastLine, Palo Alto’s WildFire, Trend Micro’s Deep Discovery and Websense.

While these tools are useful, determining their real effectiveness is challenging, because measuring their detection rate would require testing them with new, previously unseen malware samples with characteristics similar to those of advanced malware used by APT attackers. Developing such test samples requires special expertise and experience, obtained either through the development of advanced targeted malware or at least through extensive analysis of known samples.

We at MRG Effitas, together with our colleagues at the CrySyS Lab, decided to join forces and test leading APT attack detection tools using custom-developed samples. MRG Effitas has a lot of experience in testing anti-virus products, while the CrySyS Lab has a very good understanding of APT attacks gained through the analysis of many targeted malware campaigns. Therefore, bringing together our complementary sets of expertise looked like a promising idea. Our goal was not to determine the detection rates of different APT attack detection products, because that would have required testing with a large set of custom-developed malware samples, which was not feasible within the limited time frame and with the limited resources we had for the test. Instead, our goal was simply to implement some ideas we had for bypassing cutting-edge APT attack detection tools without being detected, and to test whether our ideas really work in practice.

We developed 4 custom samples in 2 weeks, without access to any APT attack detection tools during development, and then tested 5 APT attack detection solutions with these samples in Q3 2014. All 5 tested products are well-established in the market; however, we cannot mention vendor names publicly. The result of the test was alarming:
– one of our 4 custom samples bypassed all 5 products,
– another one of the remaining 3 samples bypassed 3 of the 5 products,
– only the two simplest samples were detected by the tested products, and even those triggered only low-severity alarms in some cases.

We made the full report on our test available online. It contains our test methodology, including a brief description of each sample we developed for the purpose of the test, and presents the test results in more detail. We decided to publish this report for multiple reasons:
– First of all, we believe that our test was more appropriate for evaluating the detection capabilities of APT attack detection tools than some earlier, heavily criticized tests were, because unlike earlier tests, we used custom developed samples that resemble the malware used in APT attacks.
– Second, some of the products that we tested seem to be overestimated by users who believe that those products are silver bullets. We have already emphasized on multiple occasions that these products can and will be bypassed by determined attackers. Our test is clear proof of this, and if we could do it, then APT attackers can do it too, if they have not done so already.
– Third, we observed that some vendors of APT attack detection tools are often reluctant to participate in tests that try to evaluate the effectiveness of their products. On the one hand, we understand their caution, but on the other hand, we all know that the approach of security by obscurity has its pitfalls. By publishing this report, we would like to encourage anti-APT tool vendors to participate in independent tests more readily and cooperatively, in order to produce a sufficient amount of convincing results about their products, based on which users can make well-informed decisions.
– And last but not least, we believe that there are significant differences in the APT detection capabilities of the tested products, and users should be aware that not all vendors provide the same detection rate.

The test sample that bypassed all 5 tested products was developed by the CrySyS Lab. It is a custom designed sample written in C++ with a server side written in PHP. It was designed to be as stealthy as possible. It is downloaded by the victim as part of an HTML page, where it is actually hidden in an image with steganography. The downloaded page also contains scripts that extract an executable from the image when the user clicks on something that appears to be a download button. Once the sample is running, it can communicate with a remote C&C server. To hide the C&C network traffic, the sample simulates a user clicking on links in a web forum, and downloads full HTML pages with CSS style sheets and images. The real C&C traffic is hidden inside these HTTP requests. The sample allows for file download from and upload to the C&C server, as well as remote execution of commands on the victim computer.
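The steganographic delivery described above can be illustrated with a minimal least-significant-bit scheme. This is our own toy reconstruction of the general technique; BAB0's actual encoding is not described in this post:

```python
# Toy LSB steganography: hide a payload in the low bits of cover bytes
# (standing in for image pixel data), then extract it on the victim side.

def embed(cover: bytes, payload: bytes) -> bytes:
    """Overwrite the least significant bit of each cover byte with payload bits."""
    bits = [(byte >> shift) & 1 for byte in payload for shift in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover too small for payload"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # keep the top 7 bits, set the LSB
    return bytes(out)

def extract(stego: bytes, length: int) -> bytes:
    """Reassemble `length` payload bytes from the LSBs of the stego data."""
    bits = [b & 1 for b in stego[: length * 8]]
    return bytes(
        sum(bit << shift for bit, shift in zip(bits[i : i + 8], range(7, -1, -1)))
        for i in range(0, length * 8, 8)
    )

cover = bytes(range(64))          # stand-in for image pixel data
stego = embed(cover, b"MZ\x90")   # hide the first bytes of a PE header
print(extract(stego, 3))          # -> b'MZ\x90'
```

Because each cover byte changes by at most 1, the carrier image looks unmodified to the eye and to naive content filters, which is what makes this delivery channel hard to flag.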

We named this test sample BAB0, which (babo) means hobbit in Hungarian, as its objective was to stealthily bypass all state-of-the-art defenses, while actually being very simple, and this situation shows a parallel to the story of the Lord of the Rings, where Frodo, the small hobbit managed to bypass all defenses of the fearsome Sauron, the Lord of Mordor, and reached Amon Amarth, where the One Ring was finally destroyed.

We have a strong intention to publish BAB0 in the near future. This may seem controversial, as making the details of BAB0 publicly available can help attackers. We have a different opinion: powerful attackers have probably already been using similar tricks, yet apparently detection tools are not prepared to cope with them. By publishing BAB0, we push anti-APT vendors to strengthen their products, which will ultimately make the attackers’ job harder.

For further information, please contact either Zoltan Balázs ([email protected]) or Levente Buttyán ([email protected]). Please note that we cannot provide any vendor specific information about the tests, but we can help organizations to test the products integrated in their environment.

Read More