Security Posts

AppSec Mistake No. 3: Neglecting to Integrate AppSec Into Developer Processes

Zero in a bit - Wed, 2018/08/15 - 17:58
We’ve been in the application security business for more than 10 years, and we’ve learned a lot in that time about what works and what doesn’t. This is the third in a blog series that looks at some of the most common mistakes we see that lead to failed AppSec initiatives. Use our experience to make sure you avoid these mistakes and set yourself up for application security success.

Why do you need to integrate security testing into the development cycle?

In a nutshell: if your organization is using a DevOps model or moving toward it, your days of manually testing code for security at the end of the development cycle are over. You won’t reap the benefits of DevOps if you tack any kind of testing onto the end of the development process. Those benefits include continuous feedback, continuous delivery, learning quickly, responding to the marketplace quickly, and avoiding rework. All of them require security testing that is seamless, unobtrusive, and integrated into existing processes. In turn, AppSec tools that lack flexible APIs and customizable integrations will eventually be under-used, or not used at all. Ultimately, even if you are not practicing DevOps, you will struggle to keep up with the fast pace of software development today if your testing isn’t integrated. With integrated testing, you:
  • Avoid costly “context switching,” where developers have to switch gears to focus on security testing, then try to get back to their coding.
  • Make the testing process repeatable.
  • Make fixing flaws cheaper. The earlier you find flaws, the easier and less expensive they are to fix.
  • Avoid rework.
  • Empower development to test for security themselves.
How do you integrate security into the development cycle?

The main goals for integrated security testing are to ensure it complements developer tools and processes, and that it’s relatively “invisible.” Our integrations approach is to “follow the code” and integrate at natural touchpoints along the lifecycle of the code. Therefore, consider integrating with developers’:

IDEs: This is about as “shift left” as it gets. With this integration, developers assess code for security and fix flaws as they’re writing it. For instance, CA Veracode Greenlight allows developers to test individual classes as they work on them in their IDE, getting results back in seconds and highlighting areas where they’ve successfully applied secure coding principles. Then, before checking in their code, developers can start a full application scan, review security findings and triage the results, all from within their own IDE. In addition, they can easily see which findings violate their security policy and view the data path and call stack information to understand how their code may be vulnerable to attack.

Ticketing and bug tracking systems: With this integration, you add security findings into the mix of issues developers need to address. With CA Veracode, this integration enables security findings to automatically appear as tickets in the developer’s “to-do list.” Based on scan results, the Veracode integration will open, update and close tickets related to security flaws automatically in developers’ bug tracking systems, embedding Veracode scans into developers’ work cycles.

Build systems: With this integration, application security scanning is an automated step in the build or release process. Security testing simply becomes another automated test the build server performs, along with its other functionality and quality tests. You decide whether an application should be released if it does not pass policy.

Learn From Others’ Mistakes

Don’t repeat the mistakes of the past; learn from other organizations and avoid the most common AppSec pitfalls. Today’s tip: don’t neglect to integrate security testing into development processes. Get details on all six of the most popular mistakes in our eBook, AppSec: What Not to Do.
Categories: Security Posts

Discovering CVE-2018-11512 - wityCMS 0.6.1 Persistent XSS

AlienVault Blogs - Wed, 2018/08/15 - 15:00
Content Management Systems (CMS) are usually good to check out for security issues, especially if the system is gaining popularity or is used by a number of people. Doing a white box type of assessment not only offers the potential to discover security issues, it also opens up interesting possibilities if a bug is found. This is because a white box assessment looks into the internal structure of how an application works.
 
WityCMS, for instance, is a system made by CreatiWity which assists in managing content for different uses, like personal blogging, business websites, or any other customized system. In this post, I will walk through the steps of setting up the CMS, finding a web application issue, and processing a CVE for it.

Installation (Windows with XAMPP)

1. Download a copy of the source code (Version 0.6.1).
2. Extract the folder /witycms-0.6.1 from the archive to C:\xampp\htdocs\ or wherever you have installed XAMPP on Windows.
3. Assuming Apache and MySQL are running, visit http://localhost/phpmyadmin/index.php.
4. Click on the "databases" tab.
5. Type in “creatiwity_cms” as the name of the database and click create.


6. You should now be able to browse the application by visiting http://localhost/witycms-0.6.1/


7. Fill in the required data. For “Site name”, for example, I’ve entered “Test”. Click on the Next button.

8. Next comes defining the homepage of the system. You can choose any from the options. For example:

9. Setting up the database is next. From step #5, I have used the database name “creatiwity_cms” so this goes in the database setup.

10. Enter the administrator account details and click “Launch install!” (I have added user “admin” with the password of “admin” here)

11. Once successful, this page should pop up:
Finding a Web Application Security Issue

Since this article is about CVE-2018-11512, I will limit the scope of the vulnerability hunt to a persistent XSS vulnerability. But first, let’s try to understand what a persistent XSS is.
 
According to OWASP, “Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites”. This simply means that an attack can happen if an injection point can be taken advantage of in a website. Basically, there are three types of XSS but I'll discuss the common ones - namely reflected and persistent.
 
Reflected XSS can happen whenever input data is thrown back at us after a request has been made. A very good example of a potentially vulnerable point for reflected XSS is a search function in a website. When a user enters a term in the search field and the website returns the term entered, that search function is potentially vulnerable to a reflected XSS.
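As a minimal illustration (hypothetical, not taken from wityCMS), a search endpoint that echoes the query back unescaped is vulnerable in exactly this way:

Code:
# reflected_xss_demo.py - a deliberately vulnerable, hypothetical search endpoint.
# Whatever the user puts in ?q=... is echoed back into the HTML unescaped, so a
# request like /search?q=<script>alert(1)</script> executes in the victim's browser.
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def search():
    term = request.args.get("q", "")
    # Vulnerable: user input is concatenated straight into the response markup.
    # The fix is to escape it first, e.g. with markupsafe.escape(term).
    return "<h1>Results for: " + term + "</h1>"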
 
Persistent XSS, on the other hand, is also called “stored” XSS. This type of XSS can only happen if the value is saved somewhere in the system, whether in a database or a file, and later retrieved for presentation. An example of this is a field that stores user details such as the user’s email, first name, last name, address, and more. It can also be a system setting that a user is able to change at any time. In the case of wityCMS, the target is to find fields that can save data in the system. This can be done either manually or by automatically searching for these fields. Since I have installed it on Windows, I had to use the command “findstr” instead of “grep” (sorry, “grep” fans). A reference for “findstr” can be found here.
 
To list down the files having input fields, we can use the following flags:
 
/S = Recursive searching
/P = Skip files with non-printable characters
/I = Case insensitive
/N = Prints the line number
/c:<STR> = String to look for
 
Code:
findstr /SPIN /c:"<input" "c:\xampp\htdocs\witycms-0.6.1\*.html"
 
The result of running the command above will be:

The result is overwhelming because there are a lot of fields, but we can narrow down the potential input boxes to start with once we log in to the administrator panel. By visiting the URL http://localhost/witycms-0.6.1/, a noticeable value can be seen, as shown in the image:

When we were setting up the system, we were asked to input the site name, and it currently shows up on the main page. To find out whether that site name could lead to a persistent XSS, let’s see if it can be modified within the administrative settings. Log in to the administration panel with the credentials entered during the setup. Once logged in, a small link to the administration panel should look like the one below:

When I clicked on the “Administration” hyperlink, I got redirected to the Settings page because this was the page I entered during the setup and the first field there is the website’s name too.

A very basic test for XSS is to add JavaScript code such as:
 
Code:
<script>alert(1)</script>


When you click the “save” button, the field returns the value:


Notice that the tags <script> and </script> were stripped off. Since the tags were stripped, we now know that there is a sanitizing mechanism in the code. The next step is finding out how the sanitizing method works.
 
Whenever data like the above is saved in the database, a request is processed. In this case, we can identify whether the request method is POST or GET by right-clicking the field and selecting “Inspect”. After viewing the client source code, it can be confirmed that the method is a POST request.


At this point, we should try to find where the POST request happens so we can see the sanitizing method. To do this, type in the following command in cmd:
 
Code:
findstr /SPIN /c:"$_POST" "c:\xampp\htdocs\witycms-0.6.1\*.php"
 
The command is similar to what we did earlier to find files containing the “input” tag but this time, we are trying to find references of “$_POST” in .php files.

The result of the command points us to the files WMain.php, WRequest.php, and WSession.php, because the other files belong to included libraries. Browsing these files then points us to an interesting function in WRequest.php, shown below; notice that when a script tag is found, it is replaced with an empty string:

Replacing the “script” tag with an empty string works as a sanitizing technique only if the filtering is applied recursively. Further analysis shows that the “filter” function is called only once, as can be seen from this function in the same file:

Since there is no recursion for the filter function, the filter can only work for an input like this:
The filter can then be bypassed by entering an input like:
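To see why a single, non-recursive replacement is not enough, here is a minimal Python sketch that mimics such a filter; the nested payload shown is a generic example of this bypass technique, not necessarily the exact string used against wityCMS:

Code:
# filter_bypass_demo.py - sketch of a single-pass "strip the script tags" filter.
def naive_filter(value):
    # Mimics a sanitizer that removes the script tags exactly once.
    return value.replace("<script>", "").replace("</script>", "")

# A plain payload is neutralized:
print(naive_filter("<script>alert(1)</script>"))   # -> alert(1)

# A nested payload survives, because stripping the inner tags
# reassembles the outer ones:
payload = "<scr<script>ipt>alert(1)</scr</script>ipt>"
print(naive_filter(payload))                       # -> <script>alert(1)</script>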

Trying this out as the input in the website’s name field will get us a result of:

Once this payload becomes the site name, a visiting user will be able to come across this script even when he or she is unauthenticated:

This opens up a whole new world of opportunities, because being able to execute an unwanted script when a user visits the website can be disastrous. Examples could be redirecting a user to a phishing site, running miner scripts without the user’s knowledge, or many other possibilities.

Processing a CVE Number

Since this bug leads to a security issue and the CMS is used by a hundred or more people, I decided to apply for a CVE number in order to get a public advisory. CVE, or “Common Vulnerabilities and Exposures”, is simply a list of entries that reference vulnerabilities in applications used in computing. There are CNAs, or “CVE Numbering Authorities”, that process these CVE numbers depending on which products they cover. For instance, if a security issue has been found in a Lenovo device, it should be reported to Lenovo’s PSIRT (Product Security Incident Response Team). After they assess the vulnerability, they will process a CVE number for it.

This simply means that if a security issue has been found in a product or project of a company that is also a CNA, that company can process the CVE number directly. A list of CNAs can be found here. In the case of wityCMS, CreatiWity, the creator of the product, is not a registered CNA, so we can request a CVE number for this persistent XSS through the MITRE Corporation. Below are the steps to process a CVE number.
 
1. Confirm if the product is managed by a CNA. If it is managed by a CNA, report the vulnerability to that specific CNA. If not, report it to MITRE Corporation.
2. Confirm if the vulnerability found has already been assigned a CVE number. This can be done using a simple Google search about the product. Always check for the product updates to confirm if a public advisory already exists.
3. For wityCMS’s case, I have used MITRE’s CVE form which can be found here. 
4. Fill in the form with the required details. For wityCMS, I have added in the following:
Vulnerability Type: Cross-Site Scripting
Product: wityCMS
Version: 0.6.1
Vendor confirmed the vulnerability? No (Not acknowledged yet at the time of request)
Attack Type: Remote
Impact: Code execution
Affected Components: Source code files showing “site_title” as output
Attack Vector: To exploit the vulnerability, one must craft and enter a script in the Site name field of the system
Suggested Description: Stored cross-site scripting (XSS) vulnerability in the "Website's name" field found in the "Settings" page under the "General" menu in Creatiwity wityCMS 0.6.1 allows remote attackers to inject arbitrary web script or HTML via a crafted website name by doing an authenticated POST HTTP request to admin/settings/general.
Discoverer: Nathu Nandwani
Reference(s): https://github.com/Creatiwity/wityCMS/commit/7967e5bf15b4d2ee6b85b56e82d7e1229147de44
 
Information you provide should be detailed. To make the CVE number processing fast, a public reference should exist that discusses details of the vulnerability and a possible fix (if one exists). For example, before sending in this report, I opened an issue on the project’s GitHub page with the suggested description. Since there are a lot of CVE numbers representing persistent XSS issues, I figured there would be good examples. I found one and used it as a model.

Final Tips:
  • CVE number processing takes only a day or two if the details have been disclosed publicly, so it is always best to communicate with the developer or the response team associated with the project for a proper fix first.
  • Details of CVEs should be accurate. Changing details of the report sent to CNAs will slow down the processing of the application, so the vulnerability has to be confirmed first to make sure that time is not wasted on either side. More details about the conditions for CVE number applications can be found in the document at this website: http://common-vulnerabilities-and-exposures-cve-board.1128451.n5.nabble.com/attachment/850/0/Draft%20CVE%20ID%20Request%20Guidelines%20v1.0.docx
  • VulDB helps with public advisories. Register in VulDB and you can submit an entry there. For example, here is the VulDB entry of this security issue.
  • Submit an entry for exploit-db.com too. This not only shows proof of the issue, it also adds a credible reference for the CVE number, because offensive-security teams try their best to test the proof of concept. It's here https://vuldb.com/?id.118269 and notice that it is currently pending verification. The submission instructions can be found here.

I found other persistent XSS vulnerabilities in this specific version of wityCMS, but I haven’t gotten CVE numbers for them. Can you identify them? Looking forward to hearing comments or questions. Cheers!
Categories: Security Posts

Fault Analysis on RSA Signing

Aditi Gupta

This spring and summer, as an intern at Trail of Bits, I researched modeling fault attacks on RSA signatures. I looked at an optimization of RSA signing that uses the Chinese Remainder Theorem (CRT) and induced calculation faults that reveal private keys. I analyzed fault attacks at a low level rather than in a mathematical context. After analyzing both a toy program and the mbed TLS implementation of RSA, I identified bits in memory that leak private keys when flipped.

The Signature Process with RSA-CRT

Normally, an RSA signing operation would use this algorithm: s = m^d (mod n). Here, s represents the signature, m the message, d the private exponent, and n the public modulus. This algorithm is effective, but when the numbers involved grow to the size necessary for security, the computation begins to take an extremely long time. For this reason, many cryptography libraries use the Chinese Remainder Theorem (CRT) to speed up decryption and signing. The CRT splits the single large calculation into two smaller ones before stitching their results together. Given the private exponent d, we calculate two values, dp and dq, as dp = d (mod (p-1)) and dq = d (mod (q-1)). We then compute two partial signatures, each using one of these two numbers: the first partial signature is s1 = m^dp (mod p), while the second is s2 = m^dq (mod q). The inverses of p (mod q) and q (mod p) are calculated, and finally the two partial signatures are combined to form the final signature s = (s1*q*qInv + s2*p*pInv) (mod n).

The Fault Attack

The problem arises when one of the two partial signatures (let’s assume it’s s2, calculated using q) is incorrect. It happens. Combining the two partial signatures will then give us a faulty final signature. If the signature were correct, we would be able to verify it by comparing the original message to s^e (mod n), where e is the public exponent. However, with the faulted signature, s^e (mod p) will still equal m, but s^e (mod q) will not. From here, we can say that p, but not q, is a factor of s^e - m. Because p is also a factor of n itself, the attacker can take the greatest common divisor of n and s^e - m to extract p. n divided by p is simply q, and the attacker now has both secret primes, and with them the private key.

Faulting a Toy Program

I began by writing a simple toy program in C to conduct RSA signing using the Chinese Remainder Theorem. This program included no padding and no checks, using textbook RSA to sign fairly small numbers. I used a debugger to modify one of the partial signatures manually and produce a faulted final signature. I wrote a program in Python to use this faulted signature to calculate the private keys and successfully decrypt another encrypted message. I tried altering data at various different stages of the signing process to see whether I could still extract the private keys. When I felt comfortable carrying out these fault attacks by hand, I began to automate the process.

Flipping Bits with Manticore

I used Binary Ninja to view the disassembly of my program and identify the memory locations of the data that I was interested in, so that when I tried to solve for the private keys, I would know where to look. Then, I installed and learned how to use Manticore, the binary analysis tool developed by Trail of Bits with which I was going to conduct the fault attacks. I wrote a Manticore script that would iterate through each consecutive byte of memory, alter an instruction by flipping a bit in that byte, and execute the RSA signing program.
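To make the key-recovery step concrete, here is a minimal Python sketch of the CRT fault attack described above, using textbook RSA with tiny made-up parameters (no padding, no checks, and nothing like real key sizes):

# rsa_crt_fault_demo.py - textbook RSA-CRT signing with an injected fault, followed
# by gcd-based recovery of the secret prime p. Toy parameters only; requires
# Python 3.8+ for pow(x, -1, m) modular inverses.
from math import gcd

p, q = 1009, 1013                   # made-up "secret" primes
n = p * q                           # public modulus
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent
m = 4242                            # message representative

# CRT signing: two partial signatures, then recombination as in the formula above.
dp, dq = d % (p - 1), d % (q - 1)
s1 = pow(m, dp, p)
s2 = pow(m, dq, q) ^ 1              # FAULT: flip one bit of the second partial signature
q_inv = pow(q, -1, p)               # inverse of q mod p
p_inv = pow(p, -1, q)               # inverse of p mod q
s_faulty = (s1 * q * q_inv + s2 * p * p_inv) % n

# Attacker's side: the faulty signature is still correct mod p but not mod q,
# so gcd(n, s^e - m) reveals p, and n // p gives q.
p_recovered = gcd(n, (pow(s_faulty, e, n) - m) % n)
assert p_recovered == p
print("recovered p =", p_recovered, "q =", n // p_recovered)

With real key sizes the arithmetic is identical; only the numbers get bigger.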
For each execution that did not result in a crash or a timeout, I used the output to try to extract the private keys. I checked them against the correct keys by attempting to successfully decrypt another message. With all of this data, I generated a CSV file of the intermediate and final results from each bit flip, including the partial signatures, the private keys, and whether the private keys were accurate.

Fig. 1: Excerpt from code to find faultable addresses in toy program

Results

I tested a total of 938 bit flips, and I found that 45 of them, or 4.8%, successfully produced the correct private keys. Nearly 55% resulted in either a crash or a timeout, meaning that the program failed to create a signature. Approximately 31% did not alter the partial signatures.

Fig. 2: Output of analysis code

Fig. 3: Bit flip results for toy program

This kind of automation offers a massive speedup in developing exploits for vulnerabilities like this: once you describe the vulnerability to Manticore, you get back a comprehensive list of ways to exploit it. This is particularly useful if you’re able to introduce some imprecise fault (e.g. using Rowhammer), as you can find clusters of bits which, when flipped, leak a private key.

Faulting mbed TLS

Once I had the file of bit flip results for my toy program, I looked for a real cryptographic library to attack. I settled on mbed TLS, an implementation that is primarily used on embedded systems. Because it was much more complex than the program I had written, I spent some time looking at the mbed TLS source code to try to understand the RSA signing process before compiling it and looking at the disassembled binary using Binary Ninja. One key difference between mbed TLS and my toy program was that signatures using mbed TLS were padded. The fault attacks I was trying to model are applicable only to deterministic padding, in which a given message will always result in the same padded value, and not to probabilistic schemes. Although mbed TLS can implement a variety of different padding schemes, I looked at RSA signing using PKCS#1 v1.5, a deterministic alternative to the more complex, randomized PSS padding scheme.

Again, I used a debugger to locate the target data. When I knew what memory locations I would be reading from, I began to fault one of the partial signatures and produce an incorrect signature. I soon realized, however, that there were some runtime checks in place to prevent a fault attack of the type I was trying to conduct. In particular, two of the checks, if failed, would stop execution and output an error message without creating the signature. I used the debugger to skip over the checks and produce the faulted signature I was looking for. With the faulted signature and all of the public key data, I was able to replicate the process I had used on my toy program to extract the private keys successfully.

Automating the Attacks

Just as I had with the toy program, I started to automate the fault attacks and identify the bit flips that would leak the private keys. In order to speed up the process, I wrote a GDB script instead of using Manticore. I found bit flips that would allow me to bypass both of the checks that would normally prevent the creation of a faulted signature, and I used GDB to alter both of those instructions. In a process identical to the toy program, I also flipped one bit in a given memory address.
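The harness behind this is conceptually simple. Below is a minimal Python sketch of such a per-address driver; the GDB script name, the target binary, and the output format are hypothetical stand-ins for the author's actual code, which is excerpted in Figs. 4 and 5 below:

# fault_driver_sketch.py - conceptual driver for a GDB-based fault campaign.
# Everything here (flip_and_sign.gdb, ./rsa_sign, the "signature=" output format)
# is a hypothetical stand-in, not the author's actual code.
import csv
import subprocess
from math import gcd

def run_one_flip(addr, bit):
    """Flip one bit of the byte at addr via a GDB batch script and return stdout."""
    cmd = ["gdb", "--batch",
           "-ex", f"set $addr = {addr:#x}",
           "-ex", f"set $bit = {bit}",
           "-x", "flip_and_sign.gdb",   # hypothetical: patches the checks, XORs the byte, runs the signer
           "./rsa_sign"]                # hypothetical target binary
    return subprocess.run(cmd, capture_output=True, text=True, timeout=30).stdout

def campaign(n, e, m, addresses, outfile="bitflips.csv"):
    """Try every bit of every address, recording which flips leak the prime p."""
    rows = []
    for addr in addresses:
        for bit in range(8):
            try:
                out = run_one_flip(addr, bit)
            except (subprocess.TimeoutExpired, OSError):
                rows.append((hex(addr), bit, "crash/timeout"))
                continue
            sig_lines = [line for line in out.splitlines() if line.startswith("signature=")]
            if not sig_lines:
                rows.append((hex(addr), bit, "no signature"))
                continue
            s = int(sig_lines[0].split("=", 1)[1], 16)
            p_guess = gcd(n, (pow(s, e, n) - m) % n)   # same recovery as in the toy example
            rows.append((hex(addr), bit, "key leaked" if 1 < p_guess < n else "no leak"))
    with open(outfile, "w", newline="") as fh:
        csv.writer(fh).writerows(rows)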
I then used Python to loop through each byte of memory, call this script, and try to extract the private keys, again checking whether they were correct by attempting to decrypt a known message. I collected the solved private keys and wrote the results to a CSV file of all the bit flips.

Fig. 4: Excerpt from code to find faultable locations in mbed TLS

Fig. 5: Excerpt from GDB script called from Python code to induce faults in mbed TLS

Results

I tested 566 bit flips, all within the portion of the mbed TLS code that carried out the signing operation. Combined with the two bit flips that ensured that the checks would pass, I found that 28 of them (nearly 5%) leaked the private keys. About 55% failed to produce a signature.

Fig. 6: Bit flip results for mbed TLS

The fact that this kind of analysis works on real programs is exciting, but unfortunately, I ran out of time in the summer before I got a chance to test it in the “real world.” Nonetheless, the ability to input real TLS code and get a comprehensive description of fault attacks against it is exciting, and yields fascinating possibilities for future research.

Conclusion

I loved working at Trail of Bits. I gained a better understanding of cryptography, and became familiar with some of the tools used by security engineers. It was a wonderful experience, and I’m excited to apply everything I learned to my classes and projects at Carnegie Mellon University when I start next year.
Categories: Security Posts

Phishing – Ask and ye shall receive

Fox-IT - Tue, 2018/08/14 - 15:25
During penetration tests, our primary goal is to identify the different paths that can be used to obtain the goal(s) agreed upon with our customers. This often succeeds due to insufficient hardening, lack of awareness, or poor password hygiene. Sometimes we do get access to a resource but do not have the username or password of the user that is logged on. In that case, a solution can be to simply ask for credentials in order to increase our access or escalate our privileges throughout the network. This blog post goes into the details of how the default credential-gathering module in a pentesting framework like Metasploit can be further improved, and introduces a new tool and a Cobalt Strike module that demonstrate these improvements.

Current situation

Let’s say that we have a Meterpreter session running on our target system but were unable to extract user credentials. Since the Meterpreter is not running with sufficient privileges, we also cannot access the part of memory where the passwords reside. To ask the user for their credentials, we can use a post module that spawns a credential box on the user’s desktop. This credential box looks like the one in the image below. While this often works in practice, a few problems arise with this technique:
  • The style of the input box stems from Windows XP. When newer versions of Windows ask for your credentials, a different type of input box is used;
  • The credential box spawns out-of-the-blue. Even though a message and a title can be provided, it does not really look genuine; it misses a scenario where a credential box asking for your credentials can be justified.
A better solution

Because of these issues, this technique will perhaps not work on more security-aware users. These users can be interesting targets as well, so we created a new script that solves the aforementioned problems. In creating a realistic scenario, the main question was: “What would work on us?” The answer to this question must at least entail the following:
  • The credential box should be genuine and the same as the one that Windows uses;
  • The credential box should not be spawned out-of-the-blue; the user must be persuaded or should expect a credential box;
  • If (error) messages are used, the messages should be realistic. Real life examples are even better;
  • No or limited visible indications that the scenario is not real.
As a proof of concept, Fox-IT created a tool that uses the following two scenarios:
  • Notifications that stem from an (installed) application;
  • System notifications that can be postponed.
With this, the attacker can use his creativity to deceive the user. Below are some examples that were created during the development of this tool.

Outlook lost connection to Microsoft Exchange: in order to reconnect, the user must specify credentials to re-establish the connection to Microsoft Exchange.

Password that expires within a short period of time.

The second scenario imitates notifications from Windows itself, such as pending updates that need to be installed. The notification toast tricks the user into thinking that the updates can be postponed or dismissed. When the user clicks on one of these notifications, they are asked to supply their credentials and the following credential box pops up. The text of the credential box is fully customizable. Once the user has submitted their credentials, the result is printed on the console, where it can be intercepted with a tool of your choosing, such as Cobalt Strike. We created an aggressor script for Cobalt Strike that extends the user interface with a variety of phishing options. Clicking on one of these options will launch the phishing attack on the remote computer. And if users enter their credentials, the aggressor script will store these in Cobalt Strike as well. The tool as well as the Cobalt Strike aggressor script are available on Fox-IT’s GitHub page: https://github.com/fox-it/Invoke-CredentialPhisher

Technical details

During the development of this tool, there were some hurdles we needed to overcome. At first, we created a tool that pops a notification balloon. That worked quite well; however, the originating source of the balloon was mentioned in the balloon as well. It does not look genuine when Windows PowerShell ISE asks for Outlook credentials, so that was not a solution that satisfied us. In recent versions of Windows, toast notifications were introduced. These notifications look almost the same as the notification balloons we used earlier, but work entirely differently. By using toast notifications, the problem of the originating source being shown was solved. However, it proved impossible to use event handlers on the toast notifications with native PowerShell. We needed an additional library that acts as a wrapper, which can be found on the following GitHub page: https://github.com/rkeithhill/PoshWinRT

That library solved one part of the issue, but it needed to be present on the filesystem of the target computer. That leaves traces of our attack, which we do not want; we want to leave as few traces of our malicious code as possible. Therefore, we encoded the library as base64 and stored it in the PowerShell script. The base64 equivalent of the library is evaluated and loaded from memory at runtime and will leave no trace on the filesystem once the tool has been executed. So now we had a tool capable of sending toast notifications that look genuine. Because of how Windows works, we could also create an extra layer of trust by using an application ID as the source of the notification toast. That way, if you are able to find the corresponding AppID, it looks like the toast notification was issued by the application rather than by an attacker. The notification toasts support the following:
  • Custom toast notification title;
  • Custom credential box title;
  • Custom multiline toast notification message.
To make it more personal, it is possible to use references to attributes that are part of the System.DirectoryServices.AccountManagement.UserPrincipal object. These attributes can be found in the following MSDN article: https://msdn.microsoft.com/en-us/library/system.directoryservices.accountmanagement.userprincipal_properties(v=vs.110).aspx. Additionally, the application scenario supports the following extra features when an application name is provided and can be found by the tool:
  • AppID lookup for adding extra layer of credibility. If no AppID is found, the tool will default to control panel;
  • Extraction of the application icon. The extracted icon will be used in the notification toast;
  • If no process is given or the process cannot be found, the tool will extract the information icon from the C:\Windows\system32\shell32.dll library. By modifying the script, it is easy to incorporate icons from other libraries as well;
  • Hiding of application processes. All windows will be hidden from the user for extra persuasion. The visibility will be restored once the tool is finished or when the user supplied their credentials.
Cmdline examples

For the examples above, the following one-liners were used:

Outlook connection:
.\Invoke-CredentialPhisher.ps1 -ToastTitle "Microsoft Office Outlook" -ToastMessage "Connection to Microsoft Exchange has been lost.`r`nClick here to restore the connection" -Application "Outlook" -credBoxTitle "Microsoft Outlook" -credBoxMessage "Enter password for user '{emailaddress|samaccountname}'" -ToastType Application -HideProcesses

Updates are available:
.\Invoke-CredentialPhisher.ps1 -ToastTitle "Updates are available" -ToastMessage "Your computer will restart in 5 minutes to install the updates" -credBoxTitle "Credentials needed" -credBoxMessage "Please specify your credentials in order to postpone the updates" -ToastType System -Application "System Configuration"

Password expires:
.\Invoke-CredentialPhisher.ps1 -ToastTitle "Consider changing your password" -ToastMessage "Your password will expire in 5 minutes.`r`nTo change your password, click here or press CTRL+ALT+DELETE and then click 'Change a password'." -Application "Control Panel" -credBoxTitle "Windows Password reset" -credBoxMessage "Enter password for user '{samaccountname}'" -ToastType Application

Recommendations

There are no specific recommendations that are applicable to this phishing technique; however, some more generic recommendations still apply:
  • Check if PowerShell script logging and transcript logging is enabled;
  • Raise security awareness;
  • Although it is quite hard to distinguish a fake notification toast from a genuine notification toast, users should have a paranoid approach when it comes to processes asking for their credentials.
Categories: Security Posts

Update: format-bytes Version 0.0.5

Didier Stevens - Tue, 2018/08/14 - 02:00
This new version has many new features and options. First there is the remainder (*) when using option -f to specify a parsing format. For example, -f "<i25s" directs format-bytes to interpret the provided data as a little-endian integer followed by a 25-byte long string: With the remainder (-f "<i25s*"), format-bytes will provide info for the remaining bytes (if any) after parsing (e.g. after the 25-byte long string): Options -c and -s were changed to -C and -S, so that option -s can be used to select items (to be consistent across my tools). Option -s can be used to select an item, like a string, to be dumped (options -a, -x and -d). If no dump option is provided, a hex/ascii dump (-a) is the default. And option --jsoninput can be used to process JSON output produced by oledump.py or zipdump.py, for example.

format-bytes_V0_0_5.zip (https)
MD5: 3D92BCAF8E31BFBF6F4917B3AAB64AEF
SHA256: AD43756F69C8C2ABF0F5778BC466AD480630727FA7B03A6D4DEC80743549845A
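The -f parsing formats use the same notation as Python's struct module, so a quick way to get a feel for what a format like "<i25s" matches is to try it on some sample data yourself (a small sketch, independent of format-bytes itself):

# struct_demo.py - what a "<i25s" parsing format means in Python struct terms:
# "<" little-endian, "i" a 4-byte signed integer, "25s" a 25-byte string.
import struct

data = b"\x2a\x00\x00\x00" + b"This is a 25 byte string." + b"trailing remainder bytes"
fmt = "<i25s"
size = struct.calcsize(fmt)        # 29 bytes consumed by this format
number, text = struct.unpack(fmt, data[:size])
remainder = data[size:]            # what the "*" remainder reports on

print(number)      # 42
print(text)        # b'This is a 25 byte string.'
print(remainder)   # b'trailing remainder bytes'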
Categories: Security Posts

Microcode in pictures

Hex blog - Fri, 2018/06/15 - 23:07
Since a picture is worth a thousand words, below are a few drawings for your perusal. Let us start at the top level, with the mbl_array_t class, which represents the entire microcode object: The above picture does not show the control flow graph. For that we use predecessor and successor lists: Pay attention to the block … Continue reading Microcode in pictures
Categories: Security Posts

Fabricating a Trellis

Niels Provos - Fri, 2018/05/04 - 06:10

The garden needed some trellises for roses. We came up with a circle design and are fabricating it in the shop. Mild steel bar is bent into many different ring sizes and then put together into a fairly large trellis. I am also showing some really beautiful slow motion shots of welding and grinding in high dynamic range.
Categories: Security Posts

An Elaborate Hack Shows How Much Damage IoT Bugs Can Do

Wired: Security - Mon, 2018/04/16 - 19:00
Rube-Goldbergesque IoT hacks are surprisingly simple to pull off—and can do a ton of damage.
Categories: Security Posts

How Russian Facebook Ads Divided and Targeted US Voters Before the 2016 Election

Wired: Security - Mon, 2018/04/16 - 15:00
New research shows just how prevalent political advertising was from suspicious groups in 2016—including Russian trolls.
Categories: Security Posts

Infocon: green

SANS Internet Storm Center, InfoCON: green - Fri, 2018/04/06 - 17:46
ISC Stormcast For Friday, April 6th 2018 https://isc.sans.edu/podcastdetail.html?id=5943
Categories: Security Posts

ISC Stormcast For Friday, April 6th 2018 https://isc.sans.edu/podcastdetail.html?id=5943, (Fri, Apr 6th)

SANS Internet Storm Center, InfoCON: green - Fri, 2018/04/06 - 03:30
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts

Threat Hunting & Adversary Emulation: The HELK vs APTSimulator - Part 1, (Thu, Apr 5th)

SANS Internet Storm Center, InfoCON: green - Thu, 2018/04/05 - 19:26

Ladies and gentlemen, for our main attraction, I give you...The HELK vs APTSimulator, in a Death Battle! The late, great Randy "Macho Man" Savage said many things in his day, in his own special way, but "Expect the unexpected in the kingdom of madness!" could be our theme. I'm having a flashback to my college days, many moons ago. :-) The HELK just brought it on. Yes, I know, HELK is the Hunting ELK stack, got it, but it reminded me of the Hulk, and then, I thought of a Hulkamania showdown with APTSimulator, and Randy Savage's classic, raspy voice popped in my head with "Hulkamania is like a single grain of sand in the Sahara desert that is Macho Madness." And that, dear reader, is a glimpse into exactly three seconds or less in the mind of your scribe, a strange place to be certain. But alas, that's how we came up with this fabulous showcase.
In this corner, from Roberto Rodriguez, @Cyb3rWard0g, the specter in SpecterOps, it's...The...HELK! This, my friends, is worth every ounce of hype we can muster.
And in the other corner, from Florian Roth, @cyb3rops, the Fracas of Frankfurt, we have APTSimulator. All your worst adversary apparitions in one APT mic drop. This...is...Death Battle! Now with that out of our system, let's begin. There's a lot of goodness here, so I'm definitely going to do this in two parts so as not to undervalue these two offerings.
HELK is incredibly easy to install. It's also well documented, with lots of related reading material; let me propose that you take the time to review it all. Pay particular attention to the wiki, gain comfort with the architecture, then review the installation steps.
On an Ubuntu 16.04 LTS system I ran:
git clone https://github.com/Cyb3rWard0g/HELK.git
cd HELK/
sudo ./helk_install.sh 
Of the three installation options I was presented with (pulling the latest HELK Docker image from cyb3rward0g's Docker Hub, building the HELK image from a local Dockerfile, or installing the HELK from a local bash script), I chose the first and went with the latest Docker image. The installation script does a fantastic job of fulfilling dependencies for you; if you haven't installed Docker, the HELK install script does it for you. You can observe the entire install process in Figure 1.

Figure 1: HELK Installation
You can immediately confirm your clean installation by navigating to your HELK KIBANA URL, in my case http://192.168.248.29.
For my test Windows system I created a Windows 7 x86 virtual machine with VirtualBox. The key to success here is ensuring that you install Winlogbeat on the Windows systems from which you'd like to ship logs to HELK. More important is ensuring that you run Winlogbeat with the right winlogbeat.yml file. You'll want to modify and copy this to your target systems. The critical modification is line 123, under Kafka output, where you need to add the IP address for your HELK server in three spots. My modification appeared as hosts: ["192.168.248.29:9092","192.168.248.29:9093","192.168.248.29:9094"]. As noted in the HELK architecture diagram, HELK consumes Winlogbeat event logs via Kafka.
On your Windows systems, with a properly modified winlogbeat.yml, you'll run:
./winlogbeat -c winlogbeat.yml -e
./winlogbeat setup -e
You'll definitely want to set up Sysmon on your target hosts as well. I prefer to do so with the @SwiftOnSecurity configuration file. If you're doing so with your initial setup, use sysmon.exe -accepteula -i sysmonconfig-export.xml. If you're modifying an existing configuration, use sysmon.exe -c sysmonconfig-export.xml.  This will ensure rich data returns from Sysmon, when using adversary emulation services from APTsimulator, as we will, or experiencing the real deal.
With everything set up and working, you should see results in your Kibana dashboard, as seen in Figure 2.
Figure 2: Initial HELK Kibana Sysmon dashboard.
Now for the showdown. :-) Florian's APTSimulator does some comprehensive emulation to make your systems appear compromised under the following scenarios:
  • POCs: Endpoint detection agents / compromise assessment tools
  • Test your security monitoring's detection capabilities
  • Test your SOCs response on a threat that isn't EICAR or a port scan
  • Prepare an environment for digital forensics classes 
This is a truly admirable effort, one I advocate for most heartily as a blue team leader. With particular attention to testing your security monitoring's detection capabilities: if you don't do so regularly and comprehensively, you are, quite simply, incomplete in your practice. If you haven't tested and validated, don't consider it detection; it's just a rule with a prayer. APTSimulator can be observed conducting the likes of:
  • Creating typical attacker working directory C:\TMP...
  • Activating guest user account
    • Adding the guest user to the local administrators group
  • Placing a svchost.exe (which is actually srvany.exe) into C:\Users\Public
  • Modifying the hosts file
    • Adding update.microsoft.com mapping to private IP address
  • Using curl to access well-known C2 addresses
    • C2: msupdater.com
  • Dropping a Powershell netcat alternative into the APT dir
  • Executing nbtscan on the local network
  • Dropping a modified PsExec into the APT dir
  • Registering mimikatz in At job
  • Registering a malicious RUN key
  • Registering mimikatz in scheduled task
  • Registering cmd.exe as debugger for sethc.exe
  • Dropping web shell in new WWW directory
A couple of notes here.
Download and install APTSimulator from the Releases section of its GitHub pages.
APTSimulator includes curl.exe, 7z.exe, and 7z.dll in its helpers directory. Be sure that you drop the correct version of 7-Zip for your system architecture; I'm assuming the default bits are 64-bit, and I was testing on a 32-bit VM. Let's do a fast run-through with HELK's Kibana Discover option, looking for the above-mentioned APTSimulator activities. Starting with a search for TMP in the sysmon-* index yields immediate results and strikes #1, 6, 7, and 8 from our APTSimulator list above; see for yourself in Figure 3.
Figure 3: TMP, PS nc, nbtscan, and PsExec in one shot
Created TMP, dropped a PowerShell netcat, nbtscanned the local network, and dropped a modified PsExec, check, check, check, and check.
How about enabling the guest user account and adding it to the local administrator's group? Figure 4 confirms.
Figure 4: Guest enabled and escalated
Strike #2 from the list. Something tells me we'll immediately find svchost.exe in C:\Users\Public. Aye, Figure 5 makes it so.
Figure 5: I've got your svchost right here
Knock #3 off the to-do, including the process.commandline, process.name, and file.creationtime references. Up next, the At job and scheduled task creation. Indeed, see Figure 6.
Figure 6: tasks OR schtasks
I think you get the point, there weren't any misses here. There are, of course, visualization options. Don't forget about Kibana's Timelion feature. Forensicators and incident responders live and die by timelines, use it to your advantage (Figure 7).
Figure 7: Timelion
Finally, under HELK's Kibana Visualize menu, you'll note 34 visualizations. By default, these are pretty basic, but you can quickly add value with sub-buckets. As an example, I selected the Sysmon_UserName visualization. Initially, it yielded a donut graph inclusive of malman (my pwned user), SYSTEM and LOCAL SERVICE. That wasn't detailed enough to be particularly useful, so I added a sub-bucket to include process names associated with each user. The resulting graph is more detailed and tells us that of the 242 events in the last four hours associated with the malman user, 32 of those were specific to cmd.exe processes, or 18.6% (Figure 8).
Figure 8: Powerful visualization capabilities
I am thrilled with both HELK and APTSimulator. The true principles of blue team and detection quality are innate in these projects. The fact that Roberto considers HELK still in alpha state leads me to believe there is so much more to come. Be sure to dig deeply into APTSimulator's Advanced Solutions as well; there's more than one way to emulate an adversary.
Part 2 will explore HELK integration with Spark, Graphframes & Jupyter notebooks.
Russ McRee | @holisticinfosec (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts
