Feed aggregator
The best VPN routers of 2024
We found the best Wi-Fi routers on the market with built-in VPNs or easy VPN installation to combine privacy, security, and speedy Wi-Fi.
Categories: Security Posts
AI Steve: A Digital Human created for Politics that you can vote for in the UK
Today I have been reading about how the system proposed by an entrepreneur works, built around a Digital Human called AI Steve, and it struck me as very unusual: an opening toward a future full of possibilities, such as AI agents genuinely making decisions in the management of our society, acting as a political manager, or a managerial politician. Let me walk you through the idea behind AI Steve.
Figure 1: AI Steve, a Digital Human created for Politics that you can vote for in the UK
AI Steve is built as a Digital Human, like those created by the company BeHumans, with an avatar that uniquely identifies it. Remember that Digital Humans are conceived with realistic avatars (not entirely the case here) that also handle emotional empathy and interact with people as humanly as possible.
Figure 2: Do you know what Digital Humans are?
Behind the scenes, GenAI models let Digital Humans access data, perform tasks, and resolve situations. In AI Steve's case, its mission is to decide which policies its voters want it to support or oppose, depending on whether they are good for those voters. That is the goal, "policies for the people created by the people": the politician people elect does not act on his own, but must vote for whatever AI Steve has consolidated with its Creators and Validators.
Figure 3: How AI Steve works
To that end, AI Steve lets anyone who wants a policy created for the Brighton & Hove area join as a "Creator". There, they talk with AI Steve, discussing and analyzing the problems around a given topic, while AI Steve asks questions, checks the proposals against existing legislation, and finally produces summaries of the conversations, so that every citizen can be heard at any time.
Figure 4: AI Steve's model
Then those conversations, in the form of proposals, are validated by people who have joined as "Validators", who assign scores to the different ideas; in the end, the proposals that exceed 50% support are the ones AI Steve will back with its vote. Of course, since a Digital Human does not yet have the right to hold public office (I just reread that sentence and I swear I never imagined, when I started this blog, that I would write something like it), it will be the human entrepreneur Steve Endacott who officially executes what AI Steve has decided.
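The Validator step amounts to a simple threshold vote over proposals. Here is a minimal sketch of that idea in Python; the proposal names and the 0/1 scoring scale are hypothetical illustrations, since the platform's actual scoring details are not public:

```python
# Hypothetical sketch of the Validator step: proposals with more than
# 50% support are the ones the candidate backs. All names and the
# 0/1 scoring scale are assumptions for illustration.

def approved_policies(scores: dict[str, list[int]], threshold: float = 0.5) -> list[str]:
    """Return proposals whose share of positive (>= 1) votes exceeds the threshold."""
    approved = []
    for proposal, votes in scores.items():
        if not votes:
            continue
        support = sum(1 for v in votes if v >= 1) / len(votes)
        if support > threshold:
            approved.append(proposal)
    return approved

validator_scores = {
    "more bike lanes": [1, 1, 0, 1],   # 75% support, approved
    "new parking tax": [1, 0, 0, 0],   # 25% support, rejected
}
print(approved_policies(validator_scores))
```

In this toy model a proposal needs a strict majority of positive scores; ties and abstentions would need explicit rules in a real system.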
Figure 5: The "Human" Steve Endacott
The idea is that people vote for their candidate and then have a direct impact on the decisions he makes. That is not how it works today: you vote for a candidate, and once he is elected, the chances a citizen has of speaking with him and being heard are very small. Thanks to GenAI technology, this can be solved through conversations at all hours, in parallel.
Figure 6: Control your candidate via AI
Many comments, doubts, interpretations, doors that open, corner cases, and so on come to mind, but I am going to let them mature a bit more before speculating about the kind of dystopian futures we have seen in so many science-fiction films since we were kids.
¡Saludos Malignos!
Author: Chema Alonso (Contact Chema Alonso)
Follow Un informático en el lado del mal RSS 0xWord
- Contact Chema Alonso on MyPublicInbox.com
Categories: Security Posts
A Guide to RCS, Why Apple’s Adopting It, and How It Makes Texting Better
The messaging standard promises better security and cooler features than plain old SMS. Android has had it for years, but now iPhones are getting it too.
Categories: Security Posts
Ukrainian Sailors Are Using Telegram to Avoid Being Tricked Into Smuggling Oil for Russia
Contract seafarers in Ukraine are turning to online whisper networks to keep themselves from being hired into Russia’s sanctions-busting shadow fleet.
Categories: Security Posts
Update: python-per-line.py Version 0.0.12
New option -O allows using a function that receives an object per line as its argument.
Like option -n, option -O is used to invoke a single Python function taking one argument, but this time the argument is an object instead of a string. The object has several properties: item is the line (a string), left is the previous line, right is the next line, and index is equal to the line counter minus 1.
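The behavior of such a per-line object can be sketched in a few lines of plain Python. This is an illustration of the idea, not python-per-line's actual implementation; in particular, returning None at the file edges is an assumption here:

```python
# Illustrative sketch (not python-per-line's actual code) of an object
# passed per line: item is the line, left/right are the neighboring
# lines (None at the edges is an assumption), index is line counter - 1.

class LineObject:
    def __init__(self, lines, index):
        self.item = lines[index]
        self.left = lines[index - 1] if index > 0 else None
        self.right = lines[index + 1] if index + 1 < len(lines) else None
        self.index = index

def process(lines, function):
    """Apply a single-argument function to a LineObject for each line."""
    return [function(LineObject(lines, i)) for i in range(len(lines))]

# Example: mark lines that duplicate their predecessor.
lines = ["alpha", "alpha", "beta"]
result = process(lines, lambda o: f"{o.index}: {o.item}" + (" (dup)" if o.item == o.left else ""))
print(result)
```

Having left and right available makes context-sensitive one-liners (deduplication, diffing adjacent lines) straightforward without manual bookkeeping.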
python-per-line_V0_0_12.zip (http)
MD5: 16ADE95E968CAE357D1725659224FA0B
SHA256: 1B8D1D8B27A5F5D66FBAB5BACD0594B6A08E96EC03D0BAE08C616A3C172BFD0B
Categories: Security Posts
Ransomware Attacks Are Getting Worse
Plus: US lawmakers have nothing to say about an Israeli influence campaign aimed at US voters, a former LA Dodgers owner wants to fix the internet, and more.
Categories: Security Posts
Overview of My Tools That Handle JSON Data, (Sat, Jun 15th)
I wrote a couple of diary entries showing my tools that produce and consume JSON data, like "Analyzing PDF Streams", "Another PDF Streams Example: Extracting JPEGs" and "Analyzing MSG Files".
The tools that can produce MyJSON output (option --jsonoutput) to stdout are:
- base64dump.py
- cut-bytes.py
- emldump.py
- file-magic.py
- myjson-transform.py
- oledump.py
- pdf-parser.py
- rtfdump.py
- zipdump.py
The tools that can consume MyJSON data are:
- 1768.py
- amsiscan.py
- base64dump.py
- file-magic.py
- format-bytes.py
- hash.py
- isodump.py
- onedump.py
- pdftool.py
- pngdump.py
- search-for-compression.py
- strings.py
- xmldump.py
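These tools exchange data serialized as JSON. As a rough sketch of how a consumer might decode such output, here is a minimal Python example; the exact schema used below (a top-level "items" list with base64-encoded "content" fields) is an assumption for illustration, not the format's specification:

```python
import base64
import json

# Hypothetical MyJSON-style document; the field names are assumptions
# for illustration, not the official specification of the format.
myjson_doc = json.dumps({
    "version": 2,
    "items": [
        {"id": 1, "name": "stream-1", "content": base64.b64encode(b"%PDF-1.7").decode()},
    ],
})

def decode_items(doc: str) -> dict[str, bytes]:
    """Decode each item's base64 content back into raw bytes."""
    parsed = json.loads(doc)
    return {item["name"]: base64.b64decode(item["content"]) for item in parsed["items"]}

print(decode_items(myjson_doc))
```

Base64-encoding the content keeps arbitrary binary data (extracted streams, embedded files) safe inside a text-based JSON pipeline between tools.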
Senior handler
blog.DidierStevens.com (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts
Understanding Apple’s On-Device and Server Foundation Models release
By Artem Dinaburg
Earlier this week, at Apple’s WWDC, we finally witnessed Apple’s AI strategy. The videos and live demos were accompanied by two long-form releases: Apple’s Private Cloud Compute and Apple’s On-Device and Server Foundation Models. This blog post is about the latter.
So, what is Apple releasing, and how does it compare to the current open-source ecosystem? We integrate the video and long-form releases and parse through the marketing speak to bring you the nuggets of information within.
The sound of silence
No NVIDIA/CUDA Tax. What’s unsaid is as important as what is, and those words are CUDA and NVIDIA. Apple goes out of its way to specify that it is not dependent on NVIDIA hardware or CUDA APIs for anything. The training uses Apple’s AXLearn (which runs on TPUs and Apple Silicon), Server model inference runs on Apple Silicon (!), and the on-device APIs are CoreML and Metal.
Why? Apple hates NVIDIA with the heat of a thousand suns. Tim Cook would rather sit in a data center and do matrix multiplication with an abacus than spend millions on NVIDIA hardware. Aside from personal enmity, it is a good business idea. Apple has its own ML stack from the hardware on up and is not hobbled by GPU supply shortages. Apple also gets to dogfood its hardware and software for ML tasks, ensuring that it’s something ML developers want.
What’s the downside? Apple’s hardware and software ML engineers must learn new frameworks and may accidentally repeat prior mistakes. For example, Apple devices were originally vulnerable to LeftoverLocals, but NVIDIA devices were not. If anyone from Apple is reading this, we’d love to audit AXLearn, MLX, and anything else you have cooking! Our interests are in the intersection of ML, program analysis, and application security, and your frameworks pique our interest.
The models
There are (at least) five models being released. Let’s count them:
- The ~3B parameter on-device model used for language tasks like summarization and Writing Tools.
- The large Server model is used for language tasks too complex to do on-device.
- The small on-device code model built into XCode used for Swift code completion.
- The large Server code model (“Swift Assist”) that is used for complex code generation and understanding tasks.
- The diffusion model powering Genmoji and Image Playground.
Apple's report also mentions the parallelism techniques used to scale training across hardware:
- Data Parallelism: Each GPU has a copy of the full model but is assigned a chunk of the training data. The gradients from all GPUs are aggregated and used to update the weights, which are synchronized across replicas.
- Tensor Parallelism: Specific parts of the model are split across multiple GPUs. PyTorch docs say you will need this once you have a big model or GPU communication overhead becomes an issue.
- Sequence Parallelism was the hardest topic to find; I had to dig to page 6 of this paper. Parts of the transformer can be split to process multiple data items at once.
- FSDP shards your model across multiple GPUs or even CPUs. Sharding reduces peak GPU memory usage, since the whole model does not have to be kept in memory, at the expense of communication overhead to synchronize state. FSDP is supported by PyTorch and is regularly used for finetuning large models.
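The data-parallelism idea above (each worker computes gradients on its own shard, then an averaged gradient updates the shared weights) can be simulated in a few lines of pure Python. This is a conceptual toy, not Apple's or PyTorch's implementation:

```python
# Toy data parallelism: fit y = w * x by gradient descent, splitting the
# data across simulated "GPUs" and averaging their local gradients.

def local_gradient(w, shard):
    """Gradient of mean squared error on one worker's shard of (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    grads = [local_gradient(w, shard) for shard in shards]  # each "GPU" works independently
    avg_grad = sum(grads) / len(grads)                      # all-reduce: aggregate gradients
    return w - lr * avg_grad                                # synchronized weight update

# Data generated from y = 3x, split across two workers.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges to 3.0
```

In a real framework the "all-reduce" step is a collective communication operation across devices, which is exactly the overhead that tensor and sequence parallelism trade against.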
Categories: Security Posts
PCC: Bold step forward, not without flaws
By Adelin Travers
Earlier this week, Apple announced Private Cloud Compute (or PCC for short). Without deep context on the state of the art of Artificial Intelligence (AI) and Machine Learning (ML) security, some sensible design choices may seem surprising. Conversely, some of the risks linked to this design are hidden in the fine print. In this blog post, we’ll review Apple’s announcement, both good and bad, focusing on the context of AI/ML security. We recommend Matthew Green’s excellent thread on X for a more general security context on this announcement:
https://x.com/matthew_d_green/status/1800291897245835616
Disclaimer: This breakdown is based solely on Apple’s blog post and thus subject to potential misinterpretations of wording. We do not have access to the code yet, but we look forward to Apple’s public PCC Virtual Environment release to examine this further!
Review summary
This design is excellent on the conventional non-ML security side. Apple seems to be doing everything possible to make PCC a secure, privacy-oriented solution. However, the amount of review that security researchers can do will depend on what code is released, and Apple is notoriously secretive.
On the AI/ML side, the key challenges identified are on point. These challenges result from Apple’s desire to provide additional processing power for compute-heavy ML workloads today, which incidentally requires moving away from on-device data processing to the cloud. Homomorphic Encryption (HE) is a big hope in the confidential ML field but doesn’t currently scale. Thus, Apple’s choice to process data in its cloud at scale requires decryption. Moreover, the PCC guarantees vary depending on whether Apple will use a PCC environment for model training or inference. Lastly, because Apple is introducing its own custom AI/ML hardware, implementation flaws that lead to information leakage will likely occur in PCC when these flaws have already been patched in leading AI/ML vendor devices.
Running commentary
We’ll follow the release post’s text in order, section-by-section, as if we were reading and commenting, halting on specific passages.
Introduction
When I first read this post, I'll admit that I misunderstood this passage as Apple announcing that they had achieved end-to-end encryption in Machine Learning. That would have been even bigger news than the actual announcement, because Apple would need to use Homomorphic Encryption (HE) to achieve full end-to-end encryption in an ML context. HE allows computing a function, typically an ML model, without decrypting the underlying data. It has been making steady progress and is a future candidate for confidential ML (see, for instance, this 2018 paper). However, this would have been a major shift in the ML security landscape, because HE is still considered too slow to be deployed at cloud scale and on complex functions like ML models. More on this later.
Note that Multi-Party Computation (MPC), which allows multiple agents, for instance the server and the edge device, to compute different parts of a function like an ML model and aggregate the result privately, would be a scheme distributed across both the server and the edge device, which is different from what is presented here.
The term "requires unencrypted access" is the key to the PCC design challenges. Apple could continue processing data on-device, but that means abiding by mobile hardware limitations. The complex ML workloads Apple wants to offload, like Large Language Models (LLMs), exceed what is practical for battery-powered mobile devices. Apple wants to move the compute to the cloud to provide these extended capabilities, but HE doesn't currently scale to that level. Thus, to provide these new capabilities today, Apple requires access to unencrypted data. That being said, Apple's design for PCC is exceptional, and the effort required to develop this solution was extremely high, going beyond most other cloud AI applications to date.
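The MPC idea mentioned above, where parties jointly compute a result without any of them seeing the others' inputs, is often built on additive secret sharing. Here is a toy illustration in Python; it is purely conceptual and is not PCC's design (PCC decrypts data server-side):

```python
import random

# Toy additive secret sharing, a building block of many MPC protocols:
# a secret is split into random shares so that no single party learns it,
# yet the parties can compute a sum by combining local partial results.
# Purely illustrative; not how Apple's PCC works.

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret, n_parties=2):
    """Split a secret into n additive shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# The edge device secret-shares two inputs between itself and the server.
a_shares = share(42)
b_shares = share(100)

# Each party adds its shares locally, seeing only random-looking values...
partial_sums = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

# ...and combining the partial results reveals only the sum of the secrets.
result = sum(partial_sums) % MOD
print(result)  # 142, learned without any party seeing both inputs in the clear
```

Real MPC for ML extends this to multiplications and non-linearities, which is where the heavy communication and performance costs come from.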
Thus, the security and privacy of ML models in the cloud is an unsolved and active research domain when an auditor only has access to the model. A good example of these difficulties can be found in Machine Unlearning, a privacy scheme that allows removing data from a model, which was shown to be impossible to formally prove by just querying a model; unlearning must instead be proven at the algorithm implementation level. When the entirely custom and proprietary technical stack underlying Apple's PCC is factored in, external audits become significantly more complex.
Matthew Green notes that it's unclear what parts of the stack, ML code, and binaries Apple will release to audit ML algorithm implementations. This is also definitely true. Members of the ML Assurance team at Trail of Bits have been releasing attacks that modify the ML software stack at runtime since 2021. Our attacks have exploited the widely used pickle VM for traditional RCE backdoors and malicious custom ML graph operators on Microsoft's ONNXRuntime. Sleepy Pickle, our most recent attack, uses a runtime attack to dynamically swap an ML model's weights when the model is loaded. This is also true; the design later introduced by Apple is far better than many other existing designs.
Designing Private Cloud Compute
From an ML perspective, this claim depends on the intended use case for PCC, as it cannot hold true in general. The claim may be true if PCC is only used for model inference. The rest of the PCC post only mentions inference, which suggests that PCC is not currently used for training. However, if PCC is used for training, then data will be retained, and stateless computation that leaves no trace is likely impossible. This is because ML models retain data encoded in their weights as part of their training; it is why the research field of Machine Unlearning introduced above exists. The big question Apple needs to answer is thus whether it will use PCC to train models in the future.
As others have noted, this is an easy slope to slip into. Non-targetability is a really interesting design idea that hasn't been applied to ML before. It also mitigates hardware leakage vulnerabilities, which we will see next.
Introducing Private Cloud Compute nodes
As others have noted, using Secure Enclaves and Secure Boot is excellent, since it ensures only legitimate code is run. GPUs will likely continue to play a large role in AI acceleration, and Apple has been building its own GPUs for some time, with its M series now in its third generation, rather than using Nvidia's, which are more pervasive in ML. However, enclaves and attestation will provide only limited guarantees to end users, as Apple effectively owns the attestation keys. Moreover, enclaves and GPUs have had vulnerabilities and side channels that resulted in exploitable leakage in ML. Apple GPUs have not yet been battle-tested in the AI domain as much as Nvidia's; thus, these accelerators may have security issues that their Nvidia counterparts do not. For instance, Apple's custom hardware was and remains affected by the LeftoverLocals vulnerability, while Nvidia's hardware was not. LeftoverLocals is a GPU hardware vulnerability released by Trail of Bits earlier this year; it allows an attacker collocated with a victim on a vulnerable device to listen to the victim's LLM output. Apple's M2 processors are still impacted at the time of writing. That said, the PCC design's non-targetability property may help mitigate LeftoverLocals for PCC, since it prevents an attacker from identifying and achieving collocation with the victim's device.
This is important, as Swift is a compiled language and thus not prone to the dynamic runtime attacks that affect languages like Python, which are more pervasive in ML. Note that Swift would likely only be used for CPU code; the GPU code would likely be written in Apple's Metal GPU programming framework.
More on dynamic runtime attacks and Metal in the next section.
Stateless computation and enforceable guarantees
Apple's solution is not end-to-end encrypted but rather an enclave-based solution. Thus, it does not represent an advancement in HE for ML but rather a well-thought-out combination of established technologies. This is, again, impressive, but the data is decrypted on Apple's servers.
As presented in the introduction, using compiled Swift and signed code throughout the stack should prevent runtime attacks on the ML software stack. Indeed, the ONNXRuntime attack defines a backdoored custom ML primitive operator by loading an adversary-built shared library object, while the Sleepy Pickle attack relies on dynamic features of Python. Just-in-Time (JIT) compiled code has historically been a steady source of remote code execution vulnerabilities; JIT compilers are notoriously difficult to implement and create new executable code by design, making them a highly desirable attack vector. It may surprise most readers, but JIT is widely used in ML stacks to speed up otherwise slow Python code. JAX, the ML framework that is the basis for Apple's own AXLearn framework, is a particularly prolific user of JIT. Apple avoids the security issues of JIT by not using it: its ML stack is instead built in Swift, a memory-safe ahead-of-time compiled language that does not need JIT for runtime performance. As we've said, the GPU code would likely be written in Metal, and Metal does not enforce memory safety. Without memory safety, attacks like LeftoverLocals are possible (with limitations on the attacker, like machine collocation).
No privileged runtime access
This is an interesting approach, because it shows Apple is willing to trade off infrastructure monitoring capabilities (and thus potentially reduce PCC's reliability) for additional security and privacy guarantees.
To fully understand the benefits and limits of this solution, ML security researchers would need to know what exact information is captured in the structured logs. A complete analysis thus depends on Apple's willingness to release the schema and pre-determined fields for these logs. Interestingly, limiting the type of logs could increase ML model risks by preventing ML teams from collecting adequate information to manage those risks. For instance, the collected logs and metrics may be insufficient for ML teams to detect distribution drift, when input data no longer matches training data and model performance decreases. If our understanding is correct, most of the collected metrics will be for SRE purposes, meaning that data drift detection would not be possible. If the collected logs include ML information, accidental data leakage is possible but unlikely.
Non-targetability
This is excellent, as lower levels of the ML stack, including the physical layer, are sometimes overlooked in ML threat models. The term "metadata" is important here: only the metadata can be filtered away in the manner Apple describes. There are virtually no ways of filtering out all PII in the body content sent to the LLM, and any PII in the body content will be processed unencrypted by the LLM. If PCC is used for inference only, this risk is mitigated by structured logging. If PCC is also used for training, which Apple has yet to clarify, we recommend not sharing PII with systems like these when it can be avoided. It might be possible for an attacker to obtain identifying information in the presence of side-channel vulnerabilities, for instance linked to implementation flaws, that leak some information. However, this is unlikely in practice: the cost of simultaneously exploiting both the load balancer and side channels will be prohibitive for non-nation-state threat actors.
An adversary with this level of control should be able to spoof the statistical distribution of nodes unless the auditing and statistical analysis are done at the network level.
Verifiable transparency
This is nice to see! Of course, we do not know if these will need to be analyzed through extensive reverse engineering, which will be difficult, if not impossible, for Apple's custom ML hardware. It is still a commendable and rare occurrence for projects of this scale.
PCC: Security wins, ML questions
Apple's design is excellent from a security standpoint. Improvements on the ML side are always possible, but it is important to remember that those improvements are tied to open research questions, like the scalability of homomorphic encryption. Only future vulnerability research will shed light on whether implementation flaws in hardware and software will impact Apple. Lastly, only time will tell if Apple continuously commits to security and privacy, by using PCC only for inference rather than training and by implementing homomorphic encryption as soon as it is sufficiently scalable.
When I first read this post, I’ll admit that I misunderstood this passage as Apple starting an announcement that they had achieved end-to-end encryption in Machine Learning. This would have been even bigger news than the actual announcement. That’s because Apple would need to use Homomorphic Encryption to achieve full end-to-end encryption in an ML context. HE allows computation of a function, typically an ML model, without decrypting the underlying data. HE has been making steady progress and is a future candidate for confidential ML (see for instance this 2018 paper). However, this would have been a major announcement and shift in the ML security landscape because HE is still considered too slow to be deployed at the cloud scale and in complex functions like ML. More on this later on. Note that Multi-Party Computation (MPC)—which allows multiple agents, for instance the server and the edge device, to compute different parts of a function like an ML model and aggregate the result privately—would be a distributed scheme on both the server and edge device which is different from what is presented here. The term “requires unencrypted access” is the key to the PCC design challenges. Apple could continue processing data on-device, but this means abiding by mobile hardware limitations. The complex ML workloads Apple wants to offload, like using Large Language Models (LLM), exceed what is practical for battery-powered mobile devices. Apple wants to move the compute to the cloud to provide these extended capabilities, but HE doesn’t currently scale to that level. Thus to provide these new capabilities of service presently, Apple requires access to unencrypted data. This being said, Apple’s design for PCC is exceptional, and the effort required to develop this solution was extremely high, going beyond most other cloud AI applications to date. 
Thus, the security and privacy of ML models in the cloud is an unsolved and active research domain when an auditor only has access to the model. A good example of these difficulties can be found in Machine Unlearning—a privacy scheme that allows removing data from a model—that was shown to be impossible to formally prove by just querying a model. Unlearning must thus be proven at the algorithm implementation level. When the underlying entirely custom and proprietary technical stack of Apple’s PCC is factored in, external audits become significantly more complex. Matthew Green notes that it’s unclear what part of the stack and ML code and binaries Apple will release to audit ML algorithm implementations. This is also definitely true. Members of the ML Assurance team at Trail of Bits have been releasing attacks that modify the ML software stack at runtime since 2021. Our attacks have exploited the widely used pickle VM for traditional RCE backdoors and malicious custom ML graph operators on Microsoft’s ONNXRuntime. Sleepy Pickles, our most recent attack, uses a runtime attack to dynamically swap an ML model’s weights when the model is loaded. This is also true; the design later introduced by Apple is far better than many other existing designs. Designing Private Cloud compute From an ML perspective, this claim depends on the intended use case for PCC, as it cannot hold true in general. This claim may be true if PCC is only used for model inference. The rest of the PCC post only mentions inference which suggests that PCC is not currently used for training. However, if PCC is used for training, then data will be retained, and stateless computation that leaves no trace is likely impossible. This is because ML models retain data encoded in their weights as part of their training. This is why the research field of Machine Unlearning introduced above exists. The big question that Apple needs to answer is thus whether it will use PCC for training models in the future. 
As others have noted, this is an easy slope to slip into. Non-targetability is a really interesting design idea that hasn’t been applied to ML before. It also mitigates hardware leakage vulnerabilities, which we will see next. Introducing Private Cloud Compute nodes As others have noted, using Secure Enclaves and Secure Boot is excellent since it ensures only legitimate code is run. GPUs will likely continue to play a large role in AI acceleration. Apple has been building its own GPUs for some time, with its M series now in the third generation rather than using Nvidia’s, which are more pervasive in ML. However, enclaves and attestation will provide only limited guarantees to end-users, as Apple effectively owns the attestation keys. Moreover, enclaves and GPUs have had vulnerabilities and side channels that resulted in exploitable leakage in ML. Apple GPUs have not yet been battle-tested in the AI domain as much as Nvidia’s; thus, these accelerators may have security issues that their Nvidia counterparts do not have. For instance, Apple’s custom hardware was and remains affected by the LeftoverLocals vulnerability when Nvidia’s hardware was not. LeftoverLocals is a GPU hardware vulnerability released by Trail of Bits earlier this year. It allows an attacker collocated with a victim on a vulnerable device to listen to the victim’s LLM output. Apple’s M2 processors are still currently impacted at the time of writing. This being said, the PCC design’s non-targetability property may help mitigate LeftoverLocals for PCC since it prevents an attacker from identifying and achieving collocation to the victim’s device. This is important as Swift is a compiled language. Swift is thus not prone to the dynamic runtime attacks that affect languages like Python which are more pervasive in ML. Note that Swift would likely only be used for CPU code. The GPU code would likely be written in Apple’s Metal GPU programming framework. 
More on dynamic runtime attacks and Metal in the next section. Stateless computation and enforceable guarantees Apple’s solution is not end-to-end encrypted but rather an enclave-based solution. Thus, it does not represent an advancement in HE for ML but rather a well-thought-out combination of established technologies. This is, again, impressive, but the data is decrypted on Apple’s server. As presented in the introduction, using compiled Swift and signed code throughout the stack should prevent attacks on ML software stacks at runtime. Indeed, the ONNXRuntime attack defines a backdoored custom ML primitive operator by loading an adversary-built shared library object, while the Sleepy Pickle attack relies on dynamic features of Python. Just-in-Time (JIT) compiled code has historically been a steady source of remote code execution vulnerabilities. JIT compilers are notoriously difficult to implement and create new executable code by design, making them a highly desirable attack vector. It may surprise most readers, but JIT is widely used in ML stacks to speed up otherwise slow Python code. JAX, an ML framework that is the basis for Apple’s own AXLearn ML framework, is a particularly prolific user of JIT. Apple avoids the security issues of JIT by not using it. Apple’s ML stack is instead built in Swift, a memory safe ahead-of-time compiled language that does not need JIT for runtime performance. As we’ve said, the GPU code would likely be written in Metal. Metal does not enforce memory safety. Without memory safety, attacks like LeftoverLocals are possible (with limitations on the attacker, like machine collocation). No privileged runtime access This is an interesting approach because it shows Apple is willing to trade off infrastructure monitoring capabilities (and thus potentially reduce PCC’s reliability) for additional security and privacy guarantees. 
To fully understand the benefits and limits of this solution, ML security researchers would need to know exactly what information is captured in the structured logs. A complete analysis thus depends on Apple’s willingness to release the schema and pre-determined fields for these logs.
Interestingly, limiting the types of logs could increase ML model risks by preventing ML teams from collecting the information needed to manage those risks. For instance, the collected logs and metrics may be insufficient for detecting distribution drift: the situation where input data no longer matches training data and model performance degrades. If our understanding is correct, most of the collected metrics will serve SRE purposes, meaning that drift detection would not be possible. If the collected logs do include ML information, accidental data leakage is possible but unlikely.
Non-targetability
This is excellent, as lower levels of the ML stack, including the physical layer, are sometimes overlooked in ML threat models.
The term “metadata” is important here. Only the metadata can be filtered away in the manner Apple describes; there is virtually no way of filtering out all PII in the body content sent to the LLM. Any PII in the body content will be processed unencrypted by the LLM. If PCC is used for inference only, this risk is mitigated by structured logging. If PCC is also used for training, which Apple has yet to clarify, we recommend not sharing PII with systems like these when it can be avoided.
It might be possible for an attacker to obtain identifying information in the presence of side-channel vulnerabilities, for instance those linked to implementation flaws, that leak some information. However, this is unlikely in practice: the cost of simultaneously exploiting both the load balancer and side channels would be prohibitive for non-nation-state threat actors.
An adversary with this level of control should be able to spoof the statistical distribution of nodes unless the auditing and statistical analysis are done at the network level.
Verifiable transparency
This is nice to see! Of course, we do not know whether these will need to be analyzed through extensive reverse engineering, which will be difficult, if not impossible, for Apple’s custom ML hardware. It is still a commendable and rare step for projects of this scale.
PCC: Security wins, ML questions
Apple’s design is excellent from a security standpoint. Improvements on the ML side are always possible, but those improvements are tied to open research questions, such as the scalability of homomorphic encryption. Only future vulnerability research will show whether implementation flaws in hardware and software affect PCC. Lastly, only time will tell whether Apple stays committed to security and privacy by using PCC only for inference rather than training, and by implementing homomorphic encryption as soon as it is sufficiently scalable.
Categories: Security Posts
Internet Safety Month: Keep Your Online Experience Safe and Secure
What is Internet Safety Month?
Each June, the online safety community observes Internet Safety Month as a time to reflect on our digital habits and ensure we’re taking the best precautions to stay safe online. It serves as a reminder for everyone—parents, teachers, and kids alike—to be mindful of our online activities and to take steps to protect ourselves.
Why is it important?
As summer approaches and we all pursue a bit more leisure time—that typically includes more screen time—it’s important to understand the risks and safeguard our digital well-being. While the Internet offers us countless opportunities, it also comes with risks that we must be aware of:
- 37% of children and adolescents have been the target of cyberbullying.1
- 50% of tweens (kids ages 10 to 12) have been exposed to inappropriate online content.2
- 64% of Americans have experienced a data breach.3
- 95% of cybersecurity breaches are due to human error.4
- 30% of phishing emails are opened by targeted users.5
1. Protect your devices from malware
How to protect it
Install reputable antivirus software like Webroot on all your devices and keep it updated. Regularly scan your devices for malware and avoid clicking on suspicious links or downloading unknown files.
2. Be skeptical of offers that appear too good to be true
If an offer seems too good to be true, it probably is. Scammers often use enticing offers or promotions to lure victims into sharing personal information or clicking on malicious links. These can lead to financial loss, identity theft, or installation of malware.
How to protect it
If an offer seems too good to be true, it probably is. Research the company or website before pursuing an offer or providing any personal information.
3. Monitor your identity for fraud activity
Identity theft happens when someone swipes your personal information to commit fraud or other crimes. This can wreak havoc on your finances, tank your credit score, and bring about a host of other serious consequences.
How to protect it
Consider using an identity protection service like Webroot Premium that monitors your personal information for signs of unauthorized use. Review your bank and credit card statements regularly for any unauthorized transactions.
4. Ensure your online privacy with a VPN
Without proper protection, your sensitive information—like passwords and credit card details—can be easily intercepted by cybercriminals while you browse. Public Wi-Fi networks often lack security, giving hackers a prime opportunity to snatch your data.
How to protect it
Use a Virtual Private Network (VPN) when connecting to the internet. A VPN encrypts your internet traffic, making it unreadable to hackers. Choose a reputable VPN service and enable it whenever you connect to the internet.
5. Avoid clicking on links from unknown sources
Clicking on links in emails, text messages, or social media from unknown or suspicious sources can expose you to phishing attacks or malware. These seemingly harmless clicks can quickly compromise your security and personal information.
How to protect it
Verify the sender’s identity before clicking on any links. Hover over links to see the actual URL before clicking. If you’re unsure about a link, type the company’s name directly into your browser instead.
6. Avoid malicious websites
Malicious websites are crafted to deceive you into downloading malware or revealing sensitive information. Visiting these sites can expose your device to viruses, phishing attempts, and other online threats, putting your security at risk.
How to protect it
Install a web threat protection tool or browser extension that can block access to malicious websites. Products like Webroot Internet Security Plus and Webroot AntiVirus make it easy to avoid threatening websites with secure web browsing on your desktop, laptop, tablet, or mobile phone.
7. Keep your passwords safe
Weak or reused passwords can easily be guessed or cracked by attackers, compromising your online accounts. Keeping track of unique passwords is difficult without a password manager, and if one account is compromised, attackers can gain access to your other accounts, potentially leading to identity theft or financial loss.
How to protect your passwords
Use a password manager to create and store strong, unique passwords for each of your online accounts. A password manager encrypts your passwords and helps you automatically fill them in on websites, reducing the risk of phishing attacks and password theft.
Take action now
As we celebrate Internet Safety Month, take a moment to review your current online habits and security measures. Are you doing everything you can to protect yourself and your family? If not, now is the perfect time to make some changes. By following these tips, you can enjoy a safer and more secure online experience.
Remember, Internet Safety Month is not just about protecting yourself—it’s also about spreading awareness and educating others. You can share this flyer, “9 Things to Teach Kids to Help Improve Online Safety,” with your friends and family to spread the word and help create a safer online community for everyone.
Sources:
[1] Forbes. The Ultimate Internet Safety Guide for Kids.
[2] Forbes. The Ultimate Internet Safety Guide for Kids.
[3] Pew Research Center.
[4] Information Week. What Cybersecurity Gets Wrong.
[5] MIT. Learn how to avoid a phishing scam.
The post Internet Safety Month: Keep Your Online Experience Safe and Secure appeared first on Webroot Blog.
Categories: Security Posts
2024 RSA Recap: Centering on Cyber Resilience
Cyber resilience is becoming increasingly complex to achieve with the changing nature of computing. Appropriate for this year’s conference theme, organizations are exploring “the art of the possible”, ushering in an era of dynamic computing as they explore new technologies. Simultaneously, as innovation expands and computing becomes more dynamic, more threats become possible – thus, the approach to securing business environments must also evolve.
As part of this year’s conference, I delivered a keynote presentation on the possibilities, risks, and rewards of cyber technology convergence and integration across network and security operations. More specifically, we looked into the future of more open, adaptable security architectures, and what this means for security teams.
LevelBlue Research Reveals New Trends for Cyber Resilience
This year, we also launched the inaugural LevelBlue Futures™ Report: Beyond the Barriers to Cyber Resilience. Led by Theresa Lanowitz, Chief Evangelist of AT&T Cybersecurity / LevelBlue, we hosted an in-depth session based on our research that examined the complexities of dynamic computing. This included an analysis of how dynamic computing merges IT and business operations, taps into data-driven decision-making, and redefines cyber resilience for the modern era. Some of the notable findings she discussed include:
- 85% of respondents say computing innovation is increasing risk, while 74% confirmed that the opportunity of computing innovation outweighs the corresponding increase in cybersecurity risk.
- The adoption of Cybersecurity-as-a-Service (CSaaS) is on the rise, with 32% of organizations opting to outsource their cybersecurity needs rather than managing them in-house.
- 66% of respondents say cybersecurity is an afterthought, while another 64% say cybersecurity is siloed. This isn’t surprising when 61% report a lack of understanding of cybersecurity at the board level.
Categories: Security Posts
Sifting through the spines: identifying (potential) Cactus ransomware victims
Authored by Willem Zeeman and Yun Zheng Hu
This blog is part of a series written by various Dutch cyber security firms that have collaborated on the Cactus ransomware group, which exploits Qlik Sense servers for initial access. To view all of them please check the central blog by Dutch special interest group Cyberveilig Nederland [1]
The effectiveness of the public-private partnership called Melissa [2] is increasingly evident. The Melissa partnership, which includes Fox-IT, has identified overlap in a specific ransomware tactic. Multiple partners, sharing information from incident response engagements for their clients, found that the Cactus ransomware group uses a particular method for initial access. Following that discovery, NCC Group’s Fox-IT developed a fingerprinting technique to identify which systems around the world are vulnerable to this method of initial access or, even more critically, are already compromised.
Qlik Sense vulnerabilities
Qlik Sense, a popular data visualisation and business intelligence tool, has recently become a focal point in cybersecurity discussions. This tool, designed to aid businesses in data analysis, has been identified as a key entry point for cyberattacks by the Cactus ransomware group.
The Cactus ransomware campaign
Since November 2023, the Cactus ransomware group has been actively targeting vulnerable Qlik Sense servers. These attacks are not just about exploiting software vulnerabilities; they also involve a psychological component where Cactus misleads its victims with fabricated stories about the breach. This likely is part of their strategy to obscure their actual method of entry, thus complicating mitigation and response efforts for the affected organizations.
For those looking for in-depth coverage of these exploits, the Arctic Wolf blog [3] provides detailed insights into the specific vulnerabilities being exploited, notably CVE-2023-41266, CVE-2023-41265 also known as ZeroQlik, and potentially CVE-2023-48365 also known as DoubleQlik.
Threat statistics and collaborative action
The scope of this threat is significant. In total we identified 5205 Qlik Sense servers, of which 3143 appear vulnerable to the exploits used by the Cactus group, based on an initial scan on 17 April 2024. Closer to home in the Netherlands, we identified 241 vulnerable systems; fortunately, most do not seem to have been compromised. However, 6 Dutch systems weren’t so lucky and have already fallen victim to the Cactus group. It’s crucial to understand that “already compromised” can mean either that the ransomware has been deployed and the initial access artifacts left behind were not removed, or that the system remains compromised and is potentially poised for a future ransomware attack.
Since 17 April 2024, the DIVD (Dutch Institute for Vulnerability Disclosure) and the governmental bodies NCSC (Nationaal Cyber Security Centrum) and DTC (Digital Trust Center) have teamed up to globally inform (potential) victims of cyberattacks resembling those from the Cactus ransomware group. This collaborative effort has enabled them to reach out to affected organisations worldwide, sharing crucial information to help prevent further damage where possible.
Identifying vulnerable Qlik Sense servers
Expanding on Praetorian’s thorough vulnerability research on the ZeroQlik and DoubleQlik vulnerabilities [4,5], we found a method to identify the version of a Qlik Sense server by retrieving a file called product-info.json from the server. While we acknowledge the existence of Nuclei templates for the vulnerability checks, using the server version allows for a more reliable evaluation of potential vulnerability status, e.g. whether it’s patched or end of support.
This JSON file contains the release label and version numbers by which we can identify the exact version that this Qlik Sense server is running.
Figure 1: Qlik Sense product-info.json file containing version information
Keep in mind that although Qlik Sense servers are assigned version numbers, the vendor typically refers to advisories and updates by their release label, such as “February 2022 Patch 3”.
The following cURL command can be used to retrieve the product-info.json file from a Qlik server:
curl -H "Host: localhost" -vk 'https://<ip>/resources/autogenerated/product-info.json?.ttf'
Note that we specify ?.ttf at the end of the URL to let the Qlik proxy server think that we are requesting a .ttf file, as font files can be accessed unauthenticated. Also, we set the Host header to localhost or else the server will return 400 - Bad Request - Qlik Sense, with the message The http request header is incorrect.
Retrieving this file with the ?.ttf extension trick has been fixed in the patch that addresses CVE-2023-48365 and you will always get a 302 Authenticate at this location response:
> GET /resources/autogenerated/product-info.json?.ttf HTTP/1.1
> Host: localhost
> Accept: */*
>
< HTTP/1.1 302 Authenticate at this location
< Cache-Control: no-cache, no-store, must-revalidate
< Location: https://localhost/internal_forms_authentication/?targetId=2aa7575d-3234-4980-956c-2c6929c57b71
< Content-Length: 0
<
Nevertheless, this is still a good way to determine the state of a Qlik instance, because if it redirects using 302 Authenticate at this location it is likely that the server is not vulnerable to CVE-2023-48365.
An example response from a vulnerable server would return the JSON file:
> GET /resources/autogenerated/product-info.json?.ttf HTTP/1.1
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
< Set-Cookie: X-Qlik-Session=893de431-1177-46aa-88c7-b95e28c5f103; Path=/; HttpOnly; SameSite=Lax; Secure
< Cache-Control: public, max-age=3600
< Transfer-Encoding: chunked
< Content-Type: application/json;charset=utf-8
< Expires: Tue, 16 Apr 2024 08:14:56 GMT
< Last-Modified: Fri, 04 Nov 2022 23:28:24 GMT
< Accept-Ranges: bytes
< ETag: 638032013040000000
< Server: Microsoft-HTTPAPI/2.0
< Date: Tue, 16 Apr 2024 07:14:55 GMT
< Age: 136
<
{"composition":{"contentHash":"89c9087978b3f026fb100267523b5204","senseId":"qliksenseserver:14.54.21","releaseLabel":"February 2022 Patch 12","originalClassName":"Composition","deprecatedProductVersion":"4.0.X","productName":"Qlik Sense","version":"14.54.21","copyrightYearRange":"1993-2022","deploymentType":"QlikSenseServer"},
<snipped>
We utilised Censys and Google BigQuery [6] to compile a list of potential Qlik Sense servers accessible on the internet and conducted a version scan against them. Subsequently, we extracted the Qlik release label from the JSON response to assess vulnerability to CVE-2023-48365.
Our vulnerability assessment for DoubleQlik / CVE-2023-48365 operated on the following criteria:
- The release label corresponds to vulnerability statuses outlined in the original ZeroQlik and DoubleQlik vendor advisories [7,8].
- The release label is designated as End of Support (EOS) by the vendor [9], such as “February 2019 Patch 5”.
- The release label date is post-November 2023, as the advisory states that “November 2023” is not affected.
- The server responded with HTTP/1.1 302 Authenticate at this location.
We shared our fingerprints and scan data with the Dutch Institute for Vulnerability Disclosure (DIVD), who then issued responsible disclosure notifications to the administrators of the affected Qlik Sense servers.
Call to action
Ensure the security of your Qlik Sense installations by checking your current version. If your software is still supported, apply the latest patches immediately. For systems that are at the end of support, consider upgrading or replacing them to maintain robust security.
Additionally, to enhance your defences, avoid exposing these services to the entire internet. Implement IP whitelisting if public access is necessary, or better yet, make them accessible only through secure remote working solutions.
If you discover you’ve been running a vulnerable version, it’s crucial to contact your (external) security experts for a thorough check-up to confirm that no breaches have occurred. Taking these steps will help safeguard your data and infrastructure from potential threats.
References
- https://cyberveilignederland.nl/actueel/persbericht-samenwerkingsverband-melissa-vindt-diverse-nederlandse-slachtoffers-van-ransomwaregroepering-cactus ︎
- https://www.ncsc.nl/actueel/nieuws/2023/oktober/3/melissa-samenwerkingsverband-ransomwarebestrijding ︎
- https://arcticwolf.com/resources/blog/qlik-sense-exploited-in-cactus-ransomware-campaign/ ︎
- https://www.praetorian.com/blog/qlik-sense-technical-exploit/ ︎
- https://www.praetorian.com/blog/doubleqlik-bypassing-the-original-fix-for-cve-2023-41265/ ︎
- https://support.censys.io/hc/en-us/articles/360038759991-Google-BigQuery-Introduction ︎
- https://community.qlik.com/t5/Official-Support-Articles/Critical-Security-fixes-for-Qlik-Sense-Enterprise-for-Windows/ta-p/2110801 ︎
- https://community.qlik.com/t5/Official-Support-Articles/Critical-Security-fixes-for-Qlik-Sense-Enterprise-for-Windows/ta-p/2120325 ︎
- https://community.qlik.com/t5/Product-Lifecycle/Qlik-Sense-Enterprise-on-Windows-Product-Lifecycle/ta-p/1826335 ︎
Categories: Security Posts
Cybersecurity Concerns for Ancillary Strength Control Subsystems
Additive manufacturing (AM) engineers have been incredibly creative in developing ancillary systems that modify a printed part’s mechanical properties. These systems mostly address the anisotropic properties of additively built components. This blog post is a good reference if you are unfamiliar with isotropic vs. anisotropic properties and how they impact 3D printing. […]
The post Cybersecurity Concerns for Ancillary Strength Control Subsystems appeared first on BreakPoint Labs - Blog.
Categories: Security Posts
Update on Naked Security
To consolidate all of our security intelligence and news in one location, we have migrated Naked Security to the Sophos News platform.
Categories: Security Posts