Security Posts

Infocon: green

ISC Stormcast For Monday, June 24th 2019 https://isc.sans.edu/podcastdetail.html?id=6552
Categories: Security Posts

What Is a Next Generation Firewall?

BreakingPoint Labs Blog - 2 hours 25 min ago
A Next Generation Firewall (NGFW) is a device that incorporates both the features of a traditional…
Categories: Security Posts

Vision X Best of Show Special Prize at Interop Tokyo 2019

BreakingPoint Labs Blog - 2 hours 25 min ago
Ixia's Vision X - 2019 Tokyo Interop Best of Show Special Prize Winner  There are a number of…
Categories: Security Posts

How The New TLS1.3 Standard Will Affect Your Encryption Tactics

BreakingPoint Labs Blog - 2 hours 25 min ago
The IETF released a new version of their encryption standard called RFC 8446 (Transport Layer…
Categories: Security Posts

Why SPAN when you can Tap?

BreakingPoint Labs Blog - 2 hours 25 min ago
In networking, as is the case with life, there are usually multiple ways of trying to get to the…
Categories: Security Posts

Introducing Ixia’s Newest Packet Broker: Vision X

BreakingPoint Labs Blog - 2 hours 25 min ago
As the FIFA Women’s World Cup matches start up, you can bet I will be live-streaming games — at…
Categories: Security Posts

Investigating Windows Graphics Vulnerabilities: A Reverse Engineering and Fuzzing Story

BreakingPoint Labs Blog - 2 hours 25 min ago
Introduction It is not surprising that vulnerabilities targeting Windows applications and…
Categories: Security Posts

Join us at Cisco Live ‘19

BreakingPoint Labs Blog - 2 hours 25 min ago
The one constant in networking is change, and that usually means a little added complexity, at…
Categories: Security Posts

Game of Vulnerabilities: Bluekeep

BreakingPoint Labs Blog - 2 hours 25 min ago
If you have been following what’s happening in the field of computer security, or perhaps even if…
Categories: Security Posts

Dynamic Analysis of a Windows Malicious Self-Propagating Binary

BreakingPoint Labs Blog - 2 hours 25 min ago
Dynamic analysis (execution of malware in a controlled, supervised environment) is one of the most…
Categories: Security Posts

GDPR is here to stay

BreakingPoint Labs Blog - 2 hours 25 min ago
What is GDPR? General Data Protection Regulation, or GDPR, is a European regulatory package for…
Categories: Security Posts

RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 2 of 5)

Un informático en el lado del mal - 4 hours 55 min ago
Once the original idea for RUBIKA emerged, we went back and forth on it a lot. The ideas ranged from building a Cryptex shaped like a Rubik's Cube that would store an OAuth token, released to authenticate an account only once the right person had solved it, to using it as a second (or third) authentication factor (2FA). In any case, one thing was clear: we needed a Rubik's Cube of a special size to fit the electronics we wanted inside it.

Figure 14: RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 2 of 5)
Building the Rubik's Cube went through several phases, and the design evolved along the way. It is great to look back at the photographs of the whole process, which took us from handcrafted work to, after the initial prototypes, ending up working with a professional company.

The 3D Pre-Prototype of the Rubik's Cube

Before starting work on building the 3D-printed Rubik's Cube, we needed to test whether the electronics were viable. So the first task was to work out how we could measure the rotations, which hardware components we needed, and how many degrees of freedom were required to do the math properly and know how, and in what way, the cube was being moved.

Figure 15: A disassembled Rubik's Cube on the test bench
So the first thing we did was disassemble a Rubik's Cube and place it on a test bench, moving it by hand and capturing movement data on a microcontroller that could serve as the basis for controlling the complete system. In the end, as we will see later on, the micro we chose was an Atmel Tiny1634R, but for the moment

3D-Printing the First Rubik's Cube Prototype

I have to say, we thought this would be a simple process. That is: find some 3D Rubik's Cube template, which surely had to exist, scale it to the right size, and let the 3D printer we have on the third floor of the building do its thing. But it was not that simple. Here is one of the several tutorials you can find on how to build a 3D-printed Rubik's Cube.


Figure 16: How to print a Rubik's Cube with a 3D printer
So we started building the necessary parts: first the core of the cube, and then its corners. The result was quite peculiar. It looked much easier at first, but little by little we got a first version of our Rubik's Cube, as you can see in the following images.

Figure 17: Structure of the center pieces of the faces
As you can see, the six center pieces around the core are identical, so we started with those; they are also the pieces that should carry all the electronics. The rest of the pieces are not as important, since only those six pieces rotate, so the center pieces needed the most care.

Figure 18: Central core of the 3D-printed Rubik's Cube
Now the cube could be built up face by face. As you can see from the color changes, some of the pieces had to be redone many times, because they did not fit well, moved too much, or simply were not of adequate quality.

Figure 19: Building the first face of the cube around the core
And as it grew, pieces had to be made and remade so they would fit as well as possible. It was not easy, as I have said before.

Figure 20: Assembling the cube piece by piece
Here is a photo of all the pieces stacked together, with the core left out. As you can see, it has the shape of a Rubik's Cube, but there was still a lot of work to do.

Figure 21: All the pieces assembled, with the core outside
And here is the first prototype of this version, assembled and ready to work with. I can assure you that moving that Rubik's Cube without it falling apart in your hands was a real challenge. Big, with imperfect movements, and fragile. But even so, I have to say it was functional.

Figure 22: First functional prototype of the 3D-printed Rubik's Cube
As you can see, we had not taken charging the electronics into account, so we had to disassemble the cube every time we needed to recharge the batteries. It also did not turn smoothly enough to capture good data from people used to handling a traditional Rubik's Cube.

Figure 23: Testing other pieces in search of better fits and smoother movements
As you can see, we ran many tests, using many colors to identify the pieces, looking for better fits, and also adding tweaks to make the movement more fluid, such as placing magnets between some pieces so they would snap toward certain positions.

First Data Capture

Even with this early version of our RUBIKA, we could already start capturing data, which we processed in the back end to build the Machine Learning algorithms and draw our first conclusions.


Figure 24: Capturing data from the first prototype
But we still had to improve this a great deal, so we decided to go for a much more professional design and to capture data that would really let us build the algorithms the process needed.

Saludos Malignos!

**************************************************************************************************
- RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 1 of 5)
- RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 2 of 5)
- RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 3 of 5)
- RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 4 of 5)
- RUBIKA: An Anti-Rubber-Hose System Based on a Rubik's Cube (Part 5 of 5)
**************************************************************************************************
Categories: Security Posts

ISC Stormcast For Monday, June 24th 2019 https://isc.sans.edu/podcastdetail.html?id=6552, (Sun, Jun 23rd)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts

Hunting for Linux library injection with Osquery

AlienVault Blogs - Thu, 2019/06/20 - 15:00
When analyzing malware and adversary activity in Windows environments, DLL injection techniques are commonly used, and there are plenty of resources on how to detect them. On Linux, this is less commonly seen in the wild. I recently came across a great blog from TrustedSec that describes a few techniques and tools that can be used to perform library injection in Linux. In this blog post, we are going to review some of those techniques and focus on how we can hunt for them using Osquery.

LD_PRELOAD

LD_PRELOAD is the easiest and most popular way to load a shared library into a process at startup. This environment variable can be set to the path of a shared library to be loaded before any other shared object. For most of this post, we will be using the examples available on GitHub. Let's use sample-target as the target process and sample-library as the shared library we will be injecting.

We can use the ldd tool to inspect the shared libraries that are loaded into a process. If we execute the sample-target binary with ldd, we can see that information: linux-vdso.so.1 is a virtual dynamic shared object that the kernel automatically maps into the address space of every process (depending on the architecture, it can have other names); libc.so.6 is one of the dynamic libraries that sample-target requires to run; and ld-linux.so.2 is in charge of finding and loading the shared libraries. We can see how this is defined in the sample-target ELF file by using readelf.

Now, let's set the LD_PRELOAD environment variable to load our library by executing:

export LD_PRELOAD=/home/ubuntu/linux-inject/sample-library.so; ldd /home/ubuntu/linux-inject/sample-target

We can see our sample-library being loaded now. We can also get more verbose information by setting the LD_DEBUG environment variable.
export LD_DEBUG=files

A simple way to hunt for malicious LD_PRELOAD usage with Osquery is to query the process_envs table and look for processes with the LD_PRELOAD environment variable set:

SELECT process_envs.pid as source_process_id, process_envs.key as environment_variable_key, process_envs.value as environment_variable_value, processes.name as source_process, processes.path as file_path, processes.cmdline as source_process_commandline, processes.cwd as current_working_directory, 'T1055' as event_attack_id, 'Process Injection' as event_attack_technique, 'Defense Evasion, Privilege Escalation' as event_attack_tactic FROM process_envs join processes USING (pid) WHERE key = 'LD_PRELOAD';

Since some monitoring and security software uses LD_PRELOAD for benign purposes, you will have to create a baseline of known processes in your environment that use LD_PRELOAD. A few benign examples we have encountered using LD_PRELOAD include the following.

From an attacker's perspective, as TrustedSec mentions in their blog, there are some inconveniences to using LD_PRELOAD - mainly that you need to restart the process you want to inject code into for it to work. Below, I give an overview of other techniques that do not require this. In addition to LD_PRELOAD, there are other similar techniques an attacker could use to achieve the same results. For example, by setting the LD_LIBRARY_PATH environment variable, one could specify a directory where the loader will look for the required libraries first, so an attacker could create a modified version of libc.so, or another required shared library, and load malicious code into the process. Finally, it is recommended to monitor changes to /etc/ld.so.conf and /etc/ld.so.conf.d/*.conf, since they can be used for the same purpose. This can be done using Osquery's FIM functionality by adding "/etc/ld.so.conf" and "/etc/ld.so.conf.d/%%" to the file_paths configuration.
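Outside Osquery, the same LD_PRELOAD check can be sketched directly against /proc: each /proc/&lt;pid&gt;/environ file holds the process environment as NUL-separated KEY=VALUE pairs. The following is an illustrative sketch only; the helper names are my own and not part of Osquery or the TrustedSec tooling:

```python
def parse_environ(raw: bytes) -> dict:
    """Parse the NUL-separated KEY=VALUE format used by /proc/<pid>/environ."""
    env = {}
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, value = entry.split(b"=", 1)
            env[key.decode(errors="replace")] = value.decode(errors="replace")
    return env

def ld_preload_value(raw_environ: bytes):
    """Return the LD_PRELOAD value for a process, or None if it is not set."""
    return parse_environ(raw_environ).get("LD_PRELOAD")
```

In practice you would feed it the bytes of /proc/&lt;pid&gt;/environ for each running process and compare hits against your baseline of known-benign LD_PRELOAD users.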
As for examples in the wild, a recent one is a version of Winnti targeting Linux systems that was unveiled by Kris McConkey during his presentation "Skeletons in the supply chain," given at the SAS conference earlier this year. You can find a detailed analysis by Intezer.

Linux Inject

The linux-inject tool can be used to load a shared library into a running process using ptrace, similar to the well-known DLL injection technique on Windows. Let's take a look at how it works. First, it attaches to the target process using ptrace() and injects the code that will load the library. Then, the loader code allocates memory using malloc(), copies the path of the shared library into the buffer, and calls __libc_dlopen_mode() to load the shared library. Let's give it a try:

./inject -n sample-target sample-library.so

It failed! What happened? The cause is a Linux security module called Yama, which implements discretionary access control (DAC) for specific kernel functions such as ptrace. You can check the current state by looking at /proc/sys/kernel/yama/ptrace_scope or by using sysctl. As you can see in the docs, when ptrace_scope is set to 1, only a parent process can be debugged (this is the default in Ubuntu 18.04.2). When set to 3, no process can be debugged with ptrace, and a reboot is required to change the value. This is actually great from a defender's perspective, because it means an attacker would have to modify the value of ptrace_scope before using ptrace. We can take advantage of this and use Osquery's system_controls table to query the current configuration value of ptrace_scope:

osquery> select * from system_controls WHERE name == 'kernel.yama.ptrace_scope';

You can also use the following scheduled query in Osquery to monitor for changes to ptrace_scope.
"detection_ptrace_scope_changed": {
  "platform": "linux",
  "description": "Detects changes to kernel.yama.ptrace_scope",
  "query": "SELECT name, current_value, config_value from system_controls WHERE name == 'kernel.yama.ptrace_scope';",
  "interval": 3600,
  "removed": false
}

We could also hunt for systems where the current value of ptrace_scope has been modified from the one configured in /etc/sysctl.conf:

SELECT name, subsystem, current_value, config_value from system_controls WHERE name == 'kernel.yama.ptrace_scope' AND current_value != config_value;

Or we could simply check for systems where ptrace is always allowed and flag this as a potential security issue:

SELECT name, subsystem, current_value, config_value from system_controls WHERE name == 'kernel.yama.ptrace_scope' AND current_value = 0;

More importantly, when ptrace is blocked, a syslog message is recorded that can be used to detect unsuccessful attempts to use ptrace:

Jun 10 20:59:24 ip-172-31-32-145 kernel: [955105.055910] ptrace attach of "./sample-target"[13134] was attempted by "./inject -n sample-target sample-library.so"[13148]

If you are already collecting syslog messages in your security stack, I recommend alerting on multiple attempts to use ptrace on a system. In Osquery, the following query can be used to monitor for this specific syslog message:

SELECT * from syslog WHERE tag = 'kernel' AND message LIKE '%ptrace attach%';

OK, so now let's go back to the linux-inject tool. For ptrace to work, the attacker would need to set ptrace_scope to 0:

echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

Let's take a look at the process memory using Osquery's process_memory_map table:

SELECT process_memory_map.*, pid as mpid from process_memory_map WHERE pid in (select PID from processes where name LIKE '%sample-target');

Among other common memory regions, we can see some of the shared objects we are already familiar with (libc, ld, etc.).
In addition to that, the injected library, sample-library.so, is also there. We know that we are looking for memory regions with execute permission, and we can also discard the original image and the regions marked as pseudo:

SELECT count(distinct(process_memory_map.pid)) from process_memory_map LEFT JOIN processes USING (pid) WHERE process_memory_map.path LIKE '/%' and process_memory_map.pseudo != 1 AND process_memory_map.path != processes.path AND process_memory_map.permissions LIKE '%x%';

That query is still too broad. Let's check what the most common paths for shared libraries are:

SELECT split(process_memory_map.path, '/', 0) AS folder, count(*) as cnt from process_memory_map LEFT JOIN processes USING (pid) WHERE process_memory_map.path LIKE '/%' AND process_memory_map.pseudo != 1 AND process_memory_map.path != processes.path AND process_memory_map.permissions LIKE '%x%' GROUP by folder order by cnt desc;

That makes sense; we can create a query that ignores the common paths. This helps us hunt for shared libraries loaded from non-standard locations:

SELECT process_memory_map.*, pid as mpid from process_memory_map LEFT JOIN processes USING (pid) WHERE process_memory_map.path LIKE '/%' and process_memory_map.pseudo != 1 AND process_memory_map.path NOT LIKE '/lib/%' AND process_memory_map.path NOT LIKE '/usr/lib%' AND process_memory_map.path != processes.path AND process_memory_map.permissions LIKE '%x%';

The TrustedSec blog mentions that in many cases attackers will remove the file from disk after loading it, to make analysis more difficult and avoid detection. If the injected .so file is removed, the following query can detect that a mapped shared library has been deleted from disk.
SELECT process_memory_map.pid, process_memory_map.start, process_memory_map.end, process_memory_map.permissions, process_memory_map.offset, process_memory_map.path from process_memory_map LEFT join file USING (path) where pseudo != 1 AND process_memory_map.path NOT LIKE '/lib/%' AND process_memory_map.path NOT LIKE '/usr/lib%' AND process_memory_map.permissions LIKE '%x%' AND filename IS NULL and process_memory_map.inode != 0 AND process_memory_map.permissions = 'r-xp';

In terms of detecting the code that gets injected into the target process, we can use the following Yara rule.
Interestingly, when I searched for ELF files matching those patterns in VirusTotal, I quickly discovered that the popular Pupy RAT actually uses linux-inject in its Linux client.

ReflectiveSOInjection

ReflectiveSOInjection is a tool based on linux-inject. The main difference is the way the shared object is injected. In linux-inject, the shellcode uses __libc_dlopen_mode to load the shared object. ReflectiveSOInjection maps the shared object into memory and then forces the main program to call the ReflectiveLoader export. The ReflectiveLoader takes care of resolving functions, loading required libraries, and mapping the program segments into memory. Let's use ReflectiveSOInjection on the same sample-target that we used with linux-inject. If we take a look at the memory map, we can see that there is a new memory region marked rwxp with an empty path. As TrustedSec mentions in their blog, the advantage of this method is that the injected shared object does not have to be on disk. We can use the following query to hunt for this activity:

SELECT processes.name, process_memory_map.*, pid as mpid from process_memory_map join processes USING (pid) WHERE process_memory_map.permissions = 'rwxp' AND process_memory_map.path = '';

As a bonus detection: if the attacker is lazy and does not modify the ReflectiveSOInjection code, by default it looks for an export named "ReflectiveLoader" in the injected shared library. So we can write a simple Yara signature to detect shared libraries on disk with that export.
import "elf"

rule ReflectiveLoader : LinuxMalware {
    meta:
        author = "AlienVault Labs"
        description = "Detects a shared object with the name of an export used by ReflectiveLoader"
        reference = "https://github.com/infosecguerrilla/ReflectiveSOInjection"
    condition:
        uint32(0) == 0x464c457f and elf.type == elf.ET_DYN and
        for any i in (0..elf.symtab_entries): ( elf.symtab[i].name == "ReflectiveLoader" )
}

GDB

The last method we are going to explore is using the GNU Project Debugger (GDB) to load a shared object. From an attacker's perspective, GDB may already be installed on the target system, which is less noisy than bringing your own tool. Under the covers, this method is almost exactly the same as linux-inject: GDB uses ptrace to attach to the process and then calls the same __libc_dlopen_mode() function we are familiar with to load the shared object. The results are the same as with linux-inject, and we can use the same queries to hunt for it.

In summary, we analyzed multiple ways an attacker can inject a shared object into a running process, and we shared Osquery queries and detection ideas that blue teams can use to hunt for this behavior in their environments. We also shared some examples of how these techniques are being used in the wild.
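To close, two of the signals discussed in this post (the blocked-ptrace syslog line and suspicious process memory regions) can also be approximated without Osquery, by parsing syslog and /proc/&lt;pid&gt;/maps directly. This is a hedged sketch assuming the standard kernel message and maps formats; the function and type names are my own:

```python
import re
from typing import NamedTuple

# Matches Yama's blocked-ptrace kernel message, e.g.:
#   ptrace attach of "./sample-target"[13134] was attempted by "./inject ..."[13148]
PTRACE_ATTEMPT = re.compile(
    r'ptrace attach of "(?P<target>[^"]+)"\[(?P<target_pid>\d+)\] '
    r'was attempted by "(?P<source>[^"]+)"\[(?P<source_pid>\d+)\]'
)

def ptrace_attempts(syslog_lines):
    """Return one dict per blocked ptrace attempt found in the given syslog lines."""
    return [m.groupdict() for line in syslog_lines
            if (m := PTRACE_ATTEMPT.search(line))]

class Region(NamedTuple):
    start: int
    end: int
    perms: str
    path: str

def parse_maps(text: str):
    """Parse /proc/<pid>/maps content into Region tuples."""
    regions = []
    for line in text.splitlines():
        parts = line.split(None, 5)  # address perms offset dev inode [path]
        if len(parts) < 5:
            continue
        start, end = (int(x, 16) for x in parts[0].split("-"))
        regions.append(Region(start, end, parts[1], parts[5] if len(parts) == 6 else ""))
    return regions

def suspicious_regions(regions):
    """Flag executable mappings whose backing file was deleted, plus anonymous rwxp regions."""
    return [r for r in regions
            if ("x" in r.perms and r.path.endswith("(deleted)"))
            or (r.perms == "rwxp" and r.path == "")]
```

These are illustrations of the same heuristics, not replacements for the Osquery queries above, which also join against process metadata and handle baselining.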
Categories: Security Posts

Getting 2FA Right in 2019

Since March, Trail of Bits has been working with the Python Software Foundation to add two-factor authentication (2FA) to Warehouse, the codebase that powers PyPI. As of today, PyPI members can enable time-based OTP (TOTP) and WebAuthn (currently in beta). If you have an account on PyPI, go enable your preferred 2FA method before you continue reading! 2018 and 2019 have been big years for two-factor authentication: All told, there's never been a better time to add 2FA to your services. Keep reading to find out how you can do it right.

What 2FA is and is not

Before we get into the right decisions to make when implementing two-factor authentication, it's crucial to understand what second factors are and shouldn't be. In particular:
  • Second factor methods should not be knowable. Second factor methods are something the user has or is, not something the user knows.
  • Second factor methods should not be a replacement for a user’s first factor (usually their password). Because they’re something the user has or is, they are an attestation of identity. WebAuthn is a partial exception to this: it can serve as a single factor due to its stronger guarantees.
  • Second factor methods are orderable by security. In particular, WebAuthn is always better than TOTP, so a user who has both enabled should be prompted for WebAuthn first. Don’t let users default to a less secure second factor. If you support SMS as a second factor for legacy reasons, do let users know that they can remove it once they add a better method.
  • 2FA implementations should not request the user’s second factor before the first factor. This isn’t really feasible with TOTP anyways, but you might be tempted to do it with a WebAuthn credential’s ID or public key. This doesn’t introduce a security risk per se, but inconsistent ordering only serves to confuse users that already have difficulty understanding the role their security key plays in authentication.
  • Recovery codes should be available and should be opt-out with sufficient warnings for users who prefer their accounts to fail-deadly. Recovery codes are not second factors — they circumvent the 2FA scheme. However, users don’t understand 2FA (see below) and will disable it out of frustration if not given a straightforward recovery method. When stored securely, recovery codes represent an acceptable compromise between usability and the soundness of your 2FA scheme.
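To make the recovery-code guidance concrete (including the site-prefix advice given later in this post), here is a hedged sketch of server-side generation using Python's secrets module. The alphabet, code length, and function name are illustrative choices of mine, not a standard:

```python
import secrets

# Alphabet avoids visually ambiguous characters (0/O, 1/l/I).
ALPHABET = "abcdefghjkmnpqrstuvwxyz23456789"

def make_recovery_codes(site_prefix: str, count: int = 8, length: int = 10):
    """Generate `count` one-time recovery codes, each prefixed with the site name."""
    return [
        site_prefix + "-" + "".join(secrets.choice(ALPHABET) for _ in range(length))
        for _ in range(count)
    ]
```

Server-side, you would store only a salted hash of each code and mark a code as consumed the first time it is redeemed, so a leaked backup file cannot be replayed.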
Your users don't understand 2FA, and will try to break it

Users, even extremely technical ones (like the average PyPI package maintainer), do not understand 2FA or its constraints. They will, to varying degrees:

  • Attempt: Screenshot their TOTP QR codes and leave them lying in their Downloads folder. Risk: Exposed TOTP secrets. Remedy: Documentation; warn users not to save their QR codes.
  • Attempt: Use the same QR code to provision multiple TOTP applications. Risk: Poor understanding of what/where their second factor is. Remedy: Documentation; tell users to provision only one device.
  • Attempt: Use TOTP applications that allow them to export their codes as unencrypted text. Risk: Exposed TOTP secrets; unsafe secret management. Remedy: Documentation; suggest TOTP applications that don't support unencrypted export.
  • Attempt: Use broken TOTP applications, or applications that don't respect TOTP parameters. Risk: Incorrect TOTP code generation; confusing TOTP labeling within the application. Remedy: Little to none; virtually every TOTP application ignores or imposes additional constraints on provisioning. Use default provisioning parameters!
  • Attempt: Scan the TOTP QR code with the wrong application. Risk: Lost second factor; inability to log in. Remedy: Require the user to enter a TOTP code before accepting the provisioning request.
  • Attempt: Enter the provisioning URI or secret by hand and get it wrong. Risk: Lost second factor; inability to log in. Remedy: Same as above; require a TOTP code to complete provisioning.
  • Attempt: Label their TOTP logins incorrectly and get them confused. Risk: Mislabeled second factor; inability to log in. Remedy: Provide all username and issuer name fields supported by otpauth. Discourage users from using TOTP applications that only support a whitelist of services or require manual labeling.
  • Attempt: Delete their TOTP secret from their application before your service. Risk: Account lockout. Remedy: Documentation; warn users against doing this, and recommend TOTP applications that provide similar warnings.
  • Attempt: Save their recovery codes to a text file on their Desktop. Risk: Second factor bypass. Remedy: Make recovery codes opt-in, and tell users to save only a printed copy of their recovery codes.
  • Attempt: Get recovery codes for different services mixed up. Risk: Lost bypass; inability to log in. Remedy: Prefix recovery codes with the site name or other distinguishing identifier.
  • Attempt: Ignore their second factors entirely and only use recovery codes. Risk: Not real 2FA. Remedy: Track recovery code usage and warn repeat offenders.
  • Attempt: Use their old RSA SecurID, weird corporate HOTP fob, or pre-U2F key. Risk: Not supported by WebAuthn. Remedy: Provide explicit errors when provisioning fails. Most browsers should do this for pre-U2F keys.
  • Attempt: Get their hardware keys mixed up. Risk: Mislabeled second factor; inability to log in. Remedy: Give your users the ability to label their registered keys with human-friendly names on your service, and encourage them to mark them physically.
  • Attempt: Give away or re-sell their hardware keys without deprovisioning them. Risk: Second factor compromise. Remedy: Documentation; warn users against doing this. For more aggressive security, challenge them to assert each of their WebAuthn credentials on some interval.

Technical users can be even worse: while writing this post, an acquaintance related a tale of using Twilio and a notification-pushing service to circumvent his university's SMS-based 2FA. Many of these scenarios are partially unavoidable, and not all of them fundamentally weaken or threaten to weaken the soundness of your 2FA setup. You should, however, be aware of each of them, and seek to user-proof your scheme to the greatest extent possible.

WebAuthn and TOTP are the only things you need

You don't need SMS or voice codes. If you currently support SMS or voice codes for legacy reasons, then you should be:
  1. Preventing new users from enabling them,
  2. Telling current users to remove them and change to either WebAuthn or TOTP, and
  3. Performing extra logging and alerting on users who still have SMS and/or voice codes enabled.
Paranoid? Yes. But if you hold any cryptocurrency, you probably should be paranoid. Overkill? Maybe. SIM-port attacks remain relatively uncommon (and targeted), despite requiring virtually no technical skill. It's still better to have 2FA via SMS or voice codes than nothing at all: Google's own research shows that just SMS prevents nearly all untargeted phishing attacks. The numbers for targeted attacks are more bleak: nearly one quarter of targeted attacks were successful against users with only SMS-based 2FA. Worried about anything other than SMS being impractical and/or costly? Don't be. There is a plethora of free TOTP applications for both iOS and Android. On the WebAuthn front, Google will sell you a kit with two security keys for $50. You can even buy a fully open-source key that will work with WebAuthn for $25! Most importantly of all: the fact that TOTP is not as good as a hardware key is not an excuse to continue allowing either SMS or voice codes.

Contrasting TOTP and WebAuthn

TOTP and WebAuthn are both solid choices for adding 2FA to your service and, given the opportunity, you should support both. Here are some factors to consider.

TOTP is symmetric and simple, WebAuthn is asymmetric and complex

TOTP is a symmetric cryptographic scheme, meaning that the client and server share a secret. This, plus TOTP's relatively simple code-generation process, makes it a breeze to implement, but results in some gotchas:
  1. Because clients are required to store the symmetric secret, TOTP is only as secure as the containing application or device. If a malicious program can extract the user’s TOTP secrets, then they can produce as many valid TOTP codes as they want without the user’s awareness.
  2. Because the only state shared between the client and server in TOTP is the initial secret and subsequent generated codes, TOTP lacks a notion of device identity. As a result a misinformed user can provision multiple devices with the same secret, increasing their attack surface.
  3. TOTP provides no inherent replay protection. Services may elect to guard against replays by refusing to accept a valid code more than once, but this can ensnare legitimate users who log in more than once within a TOTP window.
  4. Potentially brute-forceable. Most services use 6 or 8-digit TOTP codes and offer an expanded validation window to accommodate mobility-impaired users (and clock drift), putting an online brute-force just barely on the edge of feasibility. The solution: rate-limit login attempts.
  5. All of the above combine to make TOTP codes into ideal phishing targets. Both private and nation-state groups have successfully used fake login forms and other techniques to successfully fool users into sharing their TOTP codes.
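The shared-secret mechanics described above are small enough to sketch with the standard library alone. This is for illustration only; as this post stresses, never roll your own crypto in production, and use a vetted implementation instead:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time: float = None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP over the number of `step`-second intervals since the epoch."""
    if at_time is None:
        at_time = time.time()
    return hotp(secret, int(at_time // step), digits)

# With the RFC test secret b"12345678901234567890", t=59s falls in counter
# interval 1, and RFC 4226's vectors give the 6-digit code "287082".
```

Note how gotchas 1 and 2 above fall directly out of this design: anyone holding `secret` can mint valid codes forever, and nothing distinguishes one provisioned device from another.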
By contrast, WebAuthn uses asymmetric, public-key cryptography: the client generates a keypair after receiving a list of options from the server, sends the public half to the server for verification purposes, and securely stores the private half for signing operations during authentication. This design results in a substantially more complex attestation model, but yields numerous benefits:
  1. Device identity: WebAuthn devices are identified by their credential ID, typically paired with a human-readable label for user management purposes. WebAuthn’s notion of identity makes it easy to support multiple security keys per user — don’t artificially constrain your users to a single WebAuthn key per account!
  2. Anti-replay and anti-cloning protections: device registration and assertion methods include a random challenge generated by the authenticating party, and well-behaved WebAuthn devices send an updated sign counter after each assertion.
  3. Origin and secure context guarantees: WebAuthn includes origin information during device registration and attestation and only allows transactions within secure contexts, preventing common phishing vectors.
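As one concrete illustration of the anti-cloning point, a relying party can reject assertions whose signature counter has not increased. A hedged sketch (the names are mine, and real WebAuthn server libraries handle this check for you):

```python
class ClonedAuthenticatorWarning(Exception):
    """Raised when an assertion's sign counter suggests a cloned credential."""

def verify_sign_count(stored_count: int, asserted_count: int) -> int:
    """Return the new counter value to store, or raise if it did not increase.

    Authenticators that do not implement a counter always report 0, which is
    permitted; any nonzero counter must strictly increase between assertions.
    """
    if asserted_count == 0 and stored_count == 0:
        return 0  # counter not supported by this authenticator
    if asserted_count <= stored_count:
        raise ClonedAuthenticatorWarning(
            f"sign counter went from {stored_count} to {asserted_count}"
        )
    return asserted_count
```

A cloned key replaying an old counter value trips the check, which is exactly the signal the sign counter exists to provide.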
TOTP is free, WebAuthn (mostly, currently) is not

As mentioned above, there are many free TOTP applications, available for just about every platform your users will be on. Almost all of them support Google's otpauth URI "standard," albeit with varying degrees of completeness/correctness. In contrast, most potential users of WebAuthn will need to buy a security key. The relationship between the various hardware key standards is confusing (and could occupy an entire separate blog post), but most U2F keys should be WebAuthn-compatible. WebAuthn is not, however, limited to security keys: as mentioned earlier, Google is working to make their mobile devices function as WebAuthn-compatible second factors, and we hope that Apple is doing the same. Once that happens, many of your users will be able to switch to WebAuthn without an additional purchase.

Use the right tools

TOTP's simplicity makes it an alluring target for reimplementation. Don't do that — it's still a cryptosystem, and you should never roll your own crypto. Instead, use a mature and misuse-resistant implementation, like PyCA's hazmat.primitives.twofactor. WebAuthn is still relatively new, and as such doesn't have as many server-side implementations available. The fine folks at Duo are working hard to remedy that: they've already open-sourced Go and Python libraries, and have some excellent online demos and documentation for users and implementers alike.

Learn from our work

Want to add 2FA to your service, but have no idea where to start? Take a look at our TOTP and WebAuthn implementations within the Warehouse codebase. Our public interfaces are well documented, and (per Warehouse standards) all branches are test-covered. Multiple WebAuthn keys are supported, and support for optional recovery codes will be added in the near future. If you have other, more bespoke cryptography needs, contact us for help.
Categories: Security Posts

IDA 7.3: CSS styling

Hex blog - Wed, 2019/06/19 - 13:08
Since version 7.3, IDA is styled using CSS. Please see this article to see what can be done, and how!
Categories: Security Posts

Using Anomaly Detection to find malicious domains

Fox-IT - Tue, 2019/06/11 - 15:00
Applying unsupervised machine learning to find 'randomly generated' domains. Authors: Ruud van Luijk and Anne Postma

At Fox-IT we perform a variety of research and investigation projects to detect malicious activity and improve the service of our Security Operations Center. One of these areas is applying data science techniques to real-world data in real-world production environments: detecting anomalous SMB sequences, beaconing patterns, and other unexpected patterns. This blog entry shares an application of machine learning to detect random-like patterns that indicate possible malicious activity.

Attackers use domain generation algorithms[1] (DGAs) to build a resilient Command and Control[2] (C2) infrastructure. Automatic, large-scale malware operations pose a challenge for the C2 infrastructure of malware: if defenders identify key domains of the malware, these can be taken down or sinkholed, weakening the C2. To overcome this challenge, attackers may use a domain generation algorithm. A DGA dynamically generates a large number of seemingly random domain names and then selects a small subset of these domains for C2 communication. The generated domains are computed from a given seed, which can consist of numeric constants, the current date, or even the Twitter trend of the day. Based on this same seed, each infected device will produce the same domains. The rapid change of C2 domains in use allows attackers to create a large network of servers that is resilient to sinkholing, takedowns, and blacklisting: if you sinkhole one domain, another pops up the next day or the next minute. This technique is used by multiple malware families and actors; for example, Ramnit, Gozi, and Qakbot all use generated domains.

Methods for detection

Machine-learning approaches have proven effective at detecting DGA domains, in contrast to static rules.
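As a toy illustration of the idea described above (this is not any real family's algorithm; the seed, alphabet, and domain length are all invented), a DGA can be as simple as hashing a shared seed together with the current date:

```python
import hashlib
from datetime import date


def generate_domains(seed: str, day: date, count: int = 10, length: int = 12) -> list[str]:
    """Toy DGA: hash the seed, the date, and an index, then map bytes to letters.

    Every infected host running the same algorithm with the same seed derives
    the same candidate C2 domains for a given day, with no communication needed.
    """
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}|{day.isoformat()}|{i}".encode()).digest()
        name = "".join(chr(ord("a") + b % 26) for b in digest[:length])
        domains.append(name + ".com")
    return domains


# Two "infected hosts" with the same seed and date agree on the rendezvous domains,
# while the defender faces a fresh list every day.
print(generate_domains("k3y", date(2019, 6, 24))[:3])
```

The attacker registers only one or two of the day's candidates; the malware tries them all. Blocking yesterday's list accomplishes nothing, which is why detection has to target the randomness itself.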
The input of these machine-learning approaches may, for example, consist of the entropy, frequency of occurrence, top-level domain, number of dictionary words, length of the domain, and n-grams. However, many of these approaches need labelled data: you need to know a lot of 'good' domains and a lot of DGA domains. Good domains can be taken, for example, from the Alexa and Majestic million sets, and DGA domains can be generated from known malicious algorithms. While these DGA domains are valid, they are only valid for as long as that specific algorithm remains in use. If a new type of DGA appears, chances are your model is no longer correct and will not detect the newly generated domains. Language regions pose a further challenge for the 'good' domains: each language has different structures and combinations, and taking the Alexa or Majestic million is a one-size-fits-all approach in which such nuances may get lost.

To overcome the challenges of labelled data, unsupervised machine learning might be a solution. These approaches do not need an explicit DGA training set – you only need to know what is normal or expected. The majority of research moves toward variants of neural networks, which require a lot of computational power to train and predict. Given the volume of network data this is not necessarily a deal-breaker if there is ample computing power, but it certainly is a factor to consider. An easier-to-implement solution is to look solely at the occurrences of n-grams to define what is normal. N-grams are sequences of N consecutive elements such as words or letters: bi-grams (2-grams) are sequences of two, tri-grams (3-grams) are sequences of three, etc. To illustrate with the domain 'google.com': its tri-grams are 'goo', 'oog', 'ogl', and 'gle'. This is an intuitive way to dissect language. Because, what are the odds you see a 'kzp' in a domain? And what are the odds you see an 'oog' in a domain?

We calculate the domain probability by multiplying the probabilities of each of its tri-grams and normalise by dividing by the length of the domain. We chose an unconditional probability, meaning we ignore the dependency between n-grams, as this speeds up training and calculation times. We also ignored the top-level domain (e.g. '.co.uk', '.org'), as these are common in both normal and DGA domains; this focuses the model on the parts of the domain that are distinctive. If the domain probability is below a predefined threshold, the domain deviates from the baseline and is likely a DGA domain.

Results

To evaluate this technique we trained on roughly 8 million non-unique common names from a network, thereby creating a baseline of what is normal for this network. We evaluated the model by scoring one million non-unique common names and roughly 125,000 DGA domains from multiple algorithms, provided by Johannes Bader[3]. We excluded from both the training and evaluation sets some domains that are known to use randomly generated (sub)domains, such as content delivery networks.

The figure below illustrates the log-probability distributions of the blue baseline domains, i.e. the domains you would expect to see, and the red DGA domains. Although a clear distinction between the two distributions can be seen, there is also a small overlap visible between -10 and -7.5. This is because some DGA domains closely resemble regular domains, some baseline domains are random-like, and for some domains our model simply was not able to distinguish the two.

For our detection to be practically useful in large operations, such as Security Operations Centers, we need a very low false-positive rate. We also assumed that every baseline has a small contamination ratio; we chose a ratio of 0.001%, and use this as the cut-off value between predicting a domain as DGA or not.
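The scoring just described fits in a few lines. The sketch below is an illustrative reimplementation, not Fox-IT's production code: the function names are made up, and the add-alpha smoothing for unseen tri-grams is an assumption, since the post does not say how zero counts are handled.

```python
import math
from collections import Counter


def trigrams(name: str) -> list:
    """All overlapping 3-character substrings of a domain name (TLD already stripped)."""
    return [name[i:i + 3] for i in range(len(name) - 2)]


def train(baseline) -> Counter:
    """Count tri-gram occurrences over a baseline of expected domain names."""
    counts = Counter()
    for name in baseline:
        counts.update(trigrams(name))
    return counts


def score(name: str, counts: Counter, alpha: float = 1.0) -> float:
    """Length-normalised sum of unconditional log tri-gram probabilities.

    Unseen tri-grams get a small smoothed probability (add-alpha; an assumption).
    Lower scores mean the name deviates further from the baseline.
    """
    total = sum(counts.values())
    vocab = 26 ** 3  # all lowercase a-z tri-grams
    logp = sum(
        math.log((counts[t] + alpha) / (total + alpha * vocab))
        for t in trigrams(name)
    )
    return logp / len(name)


counts = train(["google", "facebook", "amazon", "wikipedia", "netflix"])
# A dictionary-like name scores well above a random-looking DGA name.
print(score("google", counts), score("xkqzvwmh", counts))
```

The cut-off can then be chosen empirically, e.g. as the score below which only the assumed contamination fraction of the baseline falls.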
During hunting, this threshold may be increased or ignored completely.

                   True DGA   True Normal
Predicted DGA       94.67%        ~0%
Predicted Normal     5.33%      ~100%
Total                 100%       100%

If we take the cut-off value at this point we get an accuracy (the percentage correct) of 99.35% and an F1-score of 97.26%.

Conclusion

DGA domains are a tactic used by various malware families. Machine-learning approaches have proven useful in detecting this tactic, but fail to generalize into a simple and robust solution for production. By relaxing some restrictions on the math and compensating with a large amount of baseline data, a simple and effective solution can be found. This solution does not rely on labelled data, is on par with scientific research, and has the benefit of taking into account the common language of the regular domains used in the network. We demonstrated this solution on hostnames in common names, but it is also applicable to HTTP and DNS. Moreover, a wide range of applications is possible, since the approach detects deviations from the expected: for example, randomly generated file names, deviating hostnames, unexpected sequences of connections, etc.
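The reported metrics can be roughly reproduced from the confusion matrix. This sketch assumes absolute counts of one million baseline names and 125,000 DGA domains; the post says "roughly" for both, so a small difference from the reported 99.35% accuracy is expected.

```python
# Assumed absolute counts, taken from the rounded figures in the post.
n_dga, n_normal = 125_000, 1_000_000
tp = round(0.9467 * n_dga)    # DGA domains correctly flagged (94.67% recall)
fn = n_dga - tp               # DGA domains missed
fp = round(1e-5 * n_normal)   # the 0.001% contamination cut-off on the baseline
tn = n_normal - fp

accuracy = (tp + tn) / (n_dga + n_normal)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.2%}  F1={f1:.2%}")  # close to the reported 99.35% / 97.26%
```

The F1 lands almost exactly on the reported 97.26%, which is what makes the 94.67% detection rate and ~0% false-positive column mutually consistent.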
  1. This technique was recently added to the MITRE ATT&CK tactics. https://attack.mitre.org/techniques/T1483/
  2. For more information about C2, see: https://attack.mitre.org/tactics/TA0011/
  3. https://github.com/baderj/domain_generation_algorithms
Categories: Security Posts

Pattern Welding Explained as Wearable Art

Niels Provos - Tue, 2018/08/28 - 06:37

Pattern welding was used throughout the Viking age to imbue swords with intricate patterns that were associated with mystical qualities. This visualization shows the pattern progression in a twisted rod with increasing removal of material. It took me two years of intermittent work to get to this image. I liked it so much that I ordered it for myself as a t-shirt and am looking forward to people asking me what the image is all about. If you want a t-shirt yourself, you can order this design via RedBubble. And if you do order one, let me know whether it gets you into any interesting conversations!

Categories: Security Posts
