pritha's Posts (624)


Here is a list of my top 10 blogs on DLP solutions, worth going through if you are in charge of creating, implementing and managing a DLP program in your organisation.

 

1. A business case for Data loss prevention:

A short write-up with tips for building a business case for DLP, framed around the immediate benefits it brings to the organisation, such as data security and meeting compliance obligations.

 

2. Building a business case for DLP tools:

A comprehensive article and guide to help you build a business case for a DLP solution.

 

3. Positioning DLP for executive buy-in:

A blog from Digital Guardian, one of the leading vendors in the DLP market, on how to build allies and properly position DLP to decision makers. It is part of the more comprehensive guide "The Definitive Guide to DLP".

 

4. Tips for creating a data Classification policy:

A good data classification policy is perhaps the most important pre-requisite for a successful DLP program in any organisation. This blog from TechTarget offers tips for a workable data classification policy.

 

5. Key considerations in preventing sensitive data leakage using DLP tools:

This article from ISACA highlights 10 key considerations that can help organisations plan, implement, enforce and manage DLP solutions. It also gives a good overview of DLP solutions in general.

 

6. 5 tips to evaluate your readiness before implementing a DLP solution:

This blog from CISO Platform lists five questions to ask yourself to assess your organisation's readiness for implementing a DLP solution. Take care of these five things before you go ahead with your DLP project.

 

7. 7 Strategies for a successful DLP deployment:

This blog from CSO Online lists a set of strategies to help you see a DLP implementation through to success. Though they may seem obvious, people often miss them.

 

8. How to evaluate DLP solutions: 6 steps to follow and 10 questions to ask:

Choosing the right DLP solution for your company can be overwhelming; in order to make an educated buying decision, each vendor must be properly evaluated for its strengths and weaknesses.

 

9. Top 6 reasons why DLP implementations fail:

Another blog from CISO Platform, listing some of the top reasons why a DLP implementation may fail or fall short of the stated company objectives.

 

10. An Expert Guide to Securing Sensitive Data: 34 Experts Reveal the Biggest Mistakes Companies Make with Data Security:

Digital Guardian has some good resources on DLP solutions. This blog gathers insights from data security experts on the top mistakes organisations make when approaching a data security problem.

Read more…

Bug bounty programs are quite common these days, with several of the biggest names in the industry having launched various avatars of the program. A few security managers and management teams have asked me whether they should launch a bug bounty program. A bug bounty certainly has the advantage of crowdsourcing; however, an organization should be mature and prepared enough before launching such a program. Here are some questions that will tell you whether you are prepared. You are ready only if the answer to every question is "Yes".

 

( Read More: 16 Application Security Trends That You Can’t Ignore In 2016 )

 

You are ready if you can say “Yes” to all of the following:

 

1. Have you conducted a deep penetration testing exercise before?

A bug bounty should be adopted not as the first step but as one of the last few steps in your application security testing journey. If you are not secure enough and have not done the homework, it will simply expose weaknesses you do not want to show. It will also expose you to unnecessary risk, apart from costing a lot of money, since there will be too many vulnerabilities for which you will have to pay.

 

2. Do you regularly conduct security testing for your apps?

Do you test your app with every release? If not, your organization's maturity in application security testing is not sufficient to expose yourself to the hackers (both black hat and white hat) around the world.


3. Do you have an application security management program in place?

You should ideally have a defined application security management program in place. How do you test, remediate, manage and respond to vulnerabilities? Is it ad hoc? Do you have a written process? Do you have an organization structure with the right team, defined KRAs/KPIs and a management process for application security?

 

4. Do you have the capacity to fix vulnerabilities very fast?

It is a bad situation to have a vulnerability reported to you and not be able to close it fast enough. Several people may report the same vulnerability, and you might have a policy of paying only the one who reports it first. If your closing time is not fast enough, you risk denying the bounty to more people and hence creating more dissatisfied souls.

 

5. Does the bug bounty program affect any of your customer SLAs?

Do make sure to check your customer SLAs before you expose yourself to bug bounty. Do you have a multi-tenant system? What are the bindings and the rights which are there with you or your customers? What are the SLAs and terms you have with your internal customers?

( Read More: 9 Top Features To Look For In Next Generation Firewall (NGFW) )

6. Does the bounty affect your organization's Risk Management Program?

You need to check with your Chief Risk Officer or CFO or whoever manages the Risk Management Program. A bug bounty will definitely have implications for your organizational risk, so it should be routed through the proper channel to assess whether the risk is acceptable.

 

7. Did you calculate your financial ROI metrics?

You should calculate the financial ROI before you jump in. How many vulnerabilities do you expect to be discovered? How much money will you need to pay out? What would be the cost of discovering the same vulnerabilities using other models like internal testing or testing through a known vendor? Does it make financial sense to launch such a program? If yes, what should be the right payout for each vulnerability? A rough calculation like the one sketched below can help.
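A back-of-the-envelope sketch of this comparison, in Python. All figures are hypothetical placeholders, not benchmarks; plug in your own estimates.

```python
# Rough bug bounty ROI estimate -- every number below is a made-up assumption.
expected_valid_reports = 40        # valid, payable reports you expect in a year
avg_payout = 800                   # average bounty per valid report (USD)
platform_and_triage_cost = 15000   # platform fees plus internal triage effort (USD)

bounty_cost = expected_valid_reports * avg_payout + platform_and_triage_cost

# Estimated cost of finding roughly the same issues via a known vendor or in-house team.
alternative_cost = 45000

print(f"Bug bounty cost:   {bounty_cost} USD")
print(f"Alternative cost:  {alternative_cost} USD")
print(f"Estimated savings: {alternative_cost - bounty_cost} USD")
```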

 

8. Did you create a detailed document on the program, policies and procedures? Do you have an exit strategy?

Make sure that you create a detailed written document of the program, policies and procedures. Please get it vetted by a few pair of extra eyes. It is even better to get some feedback from somebody who did it before. Suppose the program does not work, do you have a failover and exit strategy?

 

9. Do you a single owner for the program and organizational support structure?

You should ideally have a single owner with the right set of KRAs and KPIs defined for him/her. Also, make sure that the person is provided with the right amount of organization support to make the program successful.

 

10. Do you have enough marketing reach/support to make the bug bounty successful?

Bug bounty will be successful if you have the reach and access to the right set of audience. Have you identified the channels for reach out/ marketing? You need to have a sustained program to make it work.

 

Bug bounty can work if executed right. If your organization is not geared up for bug bounty you can definitely work with the more traditional means like using various solutions from consultants, in-house teams or even the emerging cloud based testing solutions.

 

Read more…

Over the last few years, our on-demand and hybrid penetration testing platform has performed security testing of applications across various verticals and domains, including banking, e-commerce, manufacturing, enterprise applications, gaming and so on. On one hand, SQL Injection, XSS and CSRF are still the top classes of vulnerabilities found by our automated scanning system; on the other hand, there are many business logic vulnerabilities that are typically found only by our security experts, backed by a comprehensive knowledge base. Here we will discuss the top business logic vulnerabilities in banking applications.


Business logic vulnerabilities are defined as security weaknesses or bugs in the functional or design aspects of the application. Because the weakness lies in the function or design, it is often missed by existing automated web application scanners.


In this blog we share the most commonly found business logic vulnerabilities in the Virtual Credit Card (VCC) creation module of a banking application.

Consider the following scenario: a banking application provides web-based functionality for users to pay bills online as well as to create and manage virtual credit cards, which are used to shop online. A virtual credit card creation use case involves the following steps:

1. User visits banking application.
2. User opts to create virtual credit card.
3. User fills up personal details, required amount, expiry date of VCC etc.
4. User chooses a payment gateway.
5. User fills up credit / debit card details.
6. Banking Application redirects user to a Payment Gateway.
7. Required amount + Service Charge are debited from user’s Debit / Credit card.
8. Payment Gateway redirects user to a Callback URL provided by the Banking Application.
9. Banking Application verifies the Payment Gateway confirmation.
10. Banking Application generates a CVV number.
11. Banking Application presents VCC details to the user.
12. Banking application performs SMS verification of the user.


Some of the security weaknesses found in the above scenario are as follows:

(Read more:  Technology/Solution Guide for Single Sign-On)

TAMPERING OF DATA COMMUNICATION BETWEEN PAYMENT GATEWAY AND BANKING APPLICATION:

Weakness: The banking application does not verify whether the required amount was successfully paid on the payment gateway side, or what amount was actually paid there. As a result, a virtual card can be recharged with a higher amount while paying a lower amount, by modifying the amount in the request sent from the payment gateway back to the bank.


Mitigation: There should be sufficient validation between the banking application and the payment gateway, and the callback URL should not be directly controllable by an attacker.



NO VALIDATION ON BANKING APPLICATION’S CALLBACK URL

Weakness: There is a lack of validation on the banking application side when the payment gateway redirects a user to the banking application's callback URL. As a result, a virtual credit card can be created without paying any service charges, by sending the request directly to the banking application's callback URL.


Mitigation: There should be sufficient validation on the callback URL, including verifying whether the request was genuinely redirected by the payment gateway or called directly by an attacker.
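A minimal sketch of both mitigations above, assuming the gateway signs its callback with a shared-secret HMAC; the field names (order_id, amount, status, signature) are illustrative, not any specific gateway's API.

```python
import hashlib
import hmac

SHARED_SECRET = b"gateway-shared-secret"   # hypothetical secret provisioned out of band

def callback_is_valid(params: dict, expected_amount: str) -> bool:
    """Validate a payment-gateway callback before issuing the virtual card.

    `params` is the parsed callback payload; `expected_amount` is the amount the
    bank recorded when the payment was initiated (never taken from the callback).
    """
    # 1. Verify the gateway's signature, so a request crafted directly by an
    #    attacker (bypassing the gateway) is rejected.
    message = f"{params['order_id']}|{params['amount']}|{params['status']}".encode()
    expected_sig = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, params.get("signature", "")):
        return False

    # 2. Verify the paid amount against the bank's own record, so tampering with
    #    the amount in transit is detected.
    if params["amount"] != expected_amount:
        return False

    # 3. Only a successful payment should trigger card creation.
    return params["status"] == "SUCCESS"
```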

 

VIRTUAL CREDIT NUMBER IS PREDICTABLE

Weakness: Generated virtual credit card numbers are predictable or follow certain patterns. As a result, an attacker can predict which virtual credit card numbers are in use by other legitimate users.


Mitigation: Virtual credit card numbers should be sufficiently random.
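One way to satisfy this, sketched below: draw the account digits from a cryptographically secure RNG (Python's secrets module) and append a Luhn check digit so the number still passes basic card validation. The issuer prefix is a made-up placeholder.

```python
import secrets

def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit for a partial card number."""
    digits = [int(d) for d in partial]
    # Double every second digit counting from the right of the final number
    # (the check digit we are about to append occupies the rightmost position).
    for i in range(len(digits) - 1, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return str((10 - sum(digits) % 10) % 10)

def generate_vcc_number(issuer_prefix: str = "411111") -> str:
    """Return a 16-digit virtual card number with an unpredictable account part."""
    account_part = "".join(str(secrets.randbelow(10)) for _ in range(9))
    partial = issuer_prefix + account_part            # 15 digits so far
    return partial + luhn_check_digit(partial)        # no counters, no timestamps

print(generate_vcc_number())
```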

NO ANTI-AUTOMATION IN VIRTUAL CREDIT CARD DETAILS VERIFICATION

Weakness: There is no anti-automation (e.g. CAPTCHA) while verifying the virtual credit card details such as the CVV number and expiry date. The card number itself is sufficiently long; however, the CVV is generally a 3-digit number and the expiry date offers only a small number of combinations. As a result, it is possible to brute-force the CVV and expiry date and shop online using a stolen virtual credit card number.


Mitigation: There should be sufficient anti-automation (e.g. CAPTCHA) while verifying the CVV and expiry date along with the card number.
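CAPTCHA is one option; a server-side attempt throttle illustrates the same anti-automation idea. A minimal in-memory sketch (a real deployment would persist counters in a shared store):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5          # hypothetical threshold per card
WINDOW_SECONDS = 3600     # attempts counted per rolling hour

_attempts = defaultdict(list)   # card number -> timestamps of recent attempts

def cvv_attempt_allowed(card_number: str) -> bool:
    """Return False once a card has seen too many CVV/expiry checks in the window.

    With only ~1,000 possible CVV values, an unthrottled endpoint can be
    brute-forced in minutes; a per-card cap makes that impractical.
    """
    now = time.time()
    recent = [t for t in _attempts[card_number] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _attempts[card_number] = recent
        return False
    recent.append(now)
    _attempts[card_number] = recent
    return True
```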

 

NO ANTI-AUTOMATION IN CARD CREATION PROCESS

Weakness: There is no anti-automation while creating a virtual credit card, so an attacker can use automated scripts to exhaust the available card numbers. As a result, card numbers can be exhausted and made unavailable to legitimate users, leading to a Denial of Service (DoS). It can also enable other attacks, including card number pattern prediction.


Mitigation: There should be sufficient anti-automation (e.g. CAPTCHA) while creating virtual credit card numbers.

 

Adapted from the original post on the iViZ Security website.

 

(Read more: CISO Guide for Denial-of-Service (DoS) Security)

Read more…

Top 5 Application Security Technology Trends

Following are the top 5 Application Security Technology Trends:

1.    Runtime Application Self-Protection (RASP)

Today, applications mostly rely on external protection like IPS (Intrusion Prevention Systems), WAF (Web Application Firewall), etc., and there is great scope for building these security features into the application itself so that it can protect itself at run time.

RASP becomes an integral part of an application's runtime environment and can be implemented, for example, as an extension of the Java debugger interface. RASP can detect an attempt to write high volumes of data into the application's runtime memory, or detect unauthorized database access. It can take real-time actions such as terminating sessions, raising alerts, etc. WAF and RASP can work together in a complementary way: the WAF can detect potential attacks, and RASP can verify them by studying the actual behaviour inside the application.

Once RASP is built into the application itself, it is more powerful than external devices, which have only limited information about how the application's internal processes work.
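A purely conceptual illustration of the idea in Python: wrap a sensitive call so the application can observe and veto it at run time. Real RASP agents hook the runtime itself (for example through JVM instrumentation) rather than decorating code, and the policy below is a made-up example.

```python
import functools

BLOCKED_TABLES = {"users", "credit_cards"}   # hypothetical sensitive tables

class SecurityViolation(Exception):
    """Raised when the in-app guard blocks a suspicious operation."""

def rasp_guard(func):
    """Decorator that inspects queries before the real database driver sees them."""
    @functools.wraps(func)
    def wrapper(query, *args, **kwargs):
        lowered = query.lower()
        # Block unscoped reads of sensitive tables and alert, instead of executing.
        if any(f"from {t}" in lowered for t in BLOCKED_TABLES) and "where" not in lowered:
            raise SecurityViolation(f"Blocked suspicious query: {query!r}")
        return func(query, *args, **kwargs)
    return wrapper

@rasp_guard
def run_query(query):
    ...   # hand the query to the real database driver in a real application
```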


(Read more:  Top 5 Big Data Vulnerability Classes)


2.    Collaborative Security Intelligence

By collaborative security, I mean collaboration or integration between different Application Security technologies.

 

DAST+SAST: DAST (Dynamic Application Security Testing) does not need access to the code and is easy to adopt. SAST (Static Application Security Testing), on the other hand, needs access to the code but has the advantage of deeper insight into your application's internal logic. Both technologies have their own pros and cons; however, there is great merit in the ability to connect and correlate the results of both SAST and DAST. This can not only reduce false positives but also improve efficiency by finding more vulnerabilities.
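A toy sketch of what such correlation might look like: a DAST finding is promoted to high confidence when a SAST finding reports the same CWE in the code behind the same endpoint. The field names and the route map are illustrative assumptions, not any product's schema.

```python
def correlate(sast_findings, dast_findings, route_map):
    """route_map maps a URL path to the source file that implements it."""
    confirmed = []
    for d in dast_findings:
        source_file = route_map.get(d["url_path"])
        for s in sast_findings:
            if s["cwe"] == d["cwe"] and s["file"] == source_file:
                # Same weakness class seen from outside and inside: high confidence.
                confirmed.append({**d, "confirmed_by": s["id"], "confidence": "high"})
    return confirmed

print(correlate(
    sast_findings=[{"id": "S-17", "cwe": "CWE-89", "file": "app/orders.py"}],
    dast_findings=[{"id": "D-4", "cwe": "CWE-89", "url_path": "/orders"}],
    route_map={"/orders": "app/orders.py"},
))
```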

 

SAST+DAST+WAF: The vulnerabilities detected by the SAST or DAST technologies can be provided as input to WAF. The vulnerability information is used to create specific rule sets so that WAF can stop those attacks even before the fixes are implemented.

 

SAST+DAST+SIM/SIEM: The SAST/DAST vulnerability information can be very valuable for SIM (Security Incident Management) or SIEM (Security Information and Event Management) correlation engines. The vulnerability information can help provide more accurate correlation and attack detection.

 

WAF+RASP: WAF and RASP are complementary. WAF can provide information which can be validated by RASP and hence help in more accurate detection and prevention of attacks.

 

Grand Unification: One day we will have all of the above (and more) combined in such a way that organizations can have true security intelligence.

 

(Read more:  5 easy ways to build your personal brand !)

 

3.    Hybrid Application Security Testing

By “Hybrid” I mean combining automation and manual testing in a manner “beyond what consultants do” so that we can achieve higher scalability, predictability and cost effectiveness.


DAST and SAST both have their limitations. Two of the major problem areas are false positives and business logic testing. Unlike network testing, where you look for known vulnerabilities in known software, application testing deals with unknown code. This makes the model of vulnerability detection quite different and harder to automate, so the best-quality results come from consultants or your in-house security experts. However, this model does not scale: there are more than a billion applications that need testing, and we do not have enough humans on earth to test them.

 

It is not a question of "man vs. machine" but a matter of "man and machine". The future lies in combining automation and manual validation in smart ways. iViZ is an interesting example: it uses automated technology along with workflow automation (for manual checks) to assure zero false positives and business logic testing with 100% WASC class coverage. In fact, it offers unlimited application security testing at a fixed flat fee while operating at a gross margin better than the average SaaS player.

 

(Read more: Phishers Target Social Media, Are you the Victim?)

 

4.    Application Security as a Service

I believe in the "as a Service" model for a very simple reason: we do not need technology for the sake of technology but to solve a problem, i.e. it is the solution/service that we need. With the growing focus on core competency, it makes more sense to procure services than to acquire products. "Get it done" makes more sense than "do it yourself" (of course there are exceptions).

 

Today we have SAST as a Service, DAST as a Service, and WAF as a Service. Virtually everything is available as a service. Gartner, in fact has created a separate hype cycle for “Application Security as a Service”.

 

Application Security as a Service has several benefits: reducing fixed operational costs, helping the organization focus on its core competency, resolving the problems of talent acquisition and retention, reducing operational management overhead, and many more.

 

(Watch more : 3 causes of stress which we are unaware of !)

 

5.    Beyond Secure SDLC: Integrating Development and Operations in a secure thread

Now is the time to look beyond the Secure SDLC (Software Development Life Cycle). There was a time when we saw a huge drive to integrate security with the SDLC, and I believe the industry has made decent progress. The future is to do the same for "Security + Development + Operations". The entire thread from design, development and testing through to production, management, maintenance and operations should be tied together seamlessly, with security as the major focus. Today there is a "security divide" between Development and Operations; this divide will blur some day with a more integrated view of the security life cycle.

 

Adapted from the original blog on the iViZ Security website.

 

Read more…

Application security has emerged over the years both as a market and as a technology. Some of the key drivers have been the explosion in the number of applications (web and mobile), attacks moving to the application layer, and compliance needs.

Following are 16 Application Security Trends which we believe the industry will observe in 2016.

 

1. Beyond Tools – Build an Application Security Program

As the industry matures, organizations shall look at application security not as a technology and tool problem but as a holistic program. BSIMM lists more than 100 elements of an application security program observed across its 78 participating organizations.

 

2. Hacking of Everything shall be on the rise: Internet of Things (IoT), Cars, Airplanes and more

With growing adoption of the Internet of Things (IoT) and not-so-secure practices by startups, we will see a surge in IoT devices getting hacked. Now your camera, light bulb, refrigerator, car or anything else that is connected can be hacked.

 

( Read More: 8 Questions To Ask Your Application Security Testing Provider! )

 

3. Security Testing for Continuous Integration and Continuous Deployment (CI/CD)

More and more organizations shall integrate security testing into Continuous Integration (CI) and Continuous Deployment (CD) pipelines. Scanning tools shall gradually evolve and mature to support CI/CD.

 

4. Emergence of Run Time Application Self Protection (RASP), Interactive Application Security Testing (IAST) and Real Time Polymorphism tools

RASP (Runtime Application Self-Protection) and IAST (Interactive Application Security Testing) are being aggressively promoted by vendors. This year shall be more a year of awareness, with potential mainstream adoption at least two years away. Both RASP and IAST have their strengths and weaknesses, and time will tell whether they win. Real-time polymorphism has potential, but adoption has been slow so far.

 

5. Third Party Vendor Risk Management shall become more important

Increasingly, organizations will ask for penetration testing reports for applications developed by third parties in order to manage vendor risk. Acceptance criteria shall cover not just the functional but also the security aspects.

 

( Read More: 5 Questions You Want Answered Before Implementing Enterprise Mobili… )

 

6. Higher due diligence before adopting a new cloud solution

Most larger enterprises shall ask for a third-party pen test report or conduct more thorough due diligence before they adopt a cloud solution. Newer Software as a Service (SaaS) or cloud solution providers, especially, will have to provide a pen test report as part of the sales process.

 

7. Dynamic Application Security Testing (DAST) will remain the most popular form of testing with Static Application Security Testing (SAST) playing the catch up game

DAST (Dynamic Application Security Testing) has been the primary mode of application security testing and will continue to be so. It is the easiest to adopt and gives exactly the perspective of an external attacker, who will not have access to your code. For web-based applications there is resistance to providing binaries or code; however, for mobile apps organizations are more willing to provide the binary of the client-side application. This shall be one of the drivers for higher adoption of SAST (Static Application Security Testing).

 

8. Customers will ask for a combination of Static Application Security Testing (SAST) & Dynamic Application Security Testing (DAST), especially for Mobile Apps

Though organizations understand the importance of combining SAST and DAST, it is mobile app testing that shall drive higher adoption of this combination. More security-sensitive organizations at higher maturity levels shall conduct SAST and DAST together. DAST will continue to be the first and most important type of testing.

 

9. Large organizations will scan more than 80% of their portfolio applications at least once a year

Large organizations with more than 100 apps will strive to test more than 80% of their applications at least once a year. Testing all the apps shall be one of the priorities of Chief Information Security Officers (CISOs).

 

( Read More: 9 Top Features To Look For In Next Generation Firewall (NGFW) )

 

10. Application hacking incidents shall rise, along with the need for a mature response program

Last year was the year of hacks for big companies, and 2016 shall be no different. Apart from detection and prevention, the industry shall need mature breach response programs. No matter what you do, hacks happen.

 

11. Jobs in Application Security will be more plentiful than ever before and will continue to grow

The industry has a severe shortage of application security testers; there are more jobs than eligible professionals available. A few of the major trends in ethical hacking as a profession are covered in this blog: Click Here

 

12. The majority of large organizations shall outsource their Application Security Testing

Large organizations shall not be able to manage application security testing in-house due to the shortage of available talent and the management overhead. Most large organizations shall outsource application security testing as a continuous program.

  

13. Organizations will move toward continuous/regular vulnerability management program

Organizations have understood that one-time or sporadic testing is not enough. The industry has recognised the importance of continuous or regular testing and the criticality of adopting it as a management program.

 

14. Integration of Vulnerability management program with Security Information & Event Management (SIEM) Or Web Application Firewall (WAF)

The industry shall see a higher number of integrations between vulnerability management programs and preventive solutions like Security Information & Event Management (SIEM) or Web Application Firewall (WAF). This shall become one of the criteria for choosing security testing vendors.

 

( Watch More: Webinar on “Defusing Cyber Threats Using Malware Intelligence” )

15. Difficult to detect but more dangerous Logical Vulnerabilities

The importance of logical vulnerabilities in application security testing is one of the topics least spoken about by security testing product vendors. Most security testing products or cloud solutions are unable to cover them. Logical vulnerabilities are among the most critical and the most difficult to detect. Mature organizations shall ask for business logic testing as a mandatory requirement.

 

16. Changing the habit of coders

Awareness alone is not enough. Think of how many of us know about the importance of exercise, but how few actually do it. We need habit-forming tools and products that embed secure coding behavior right at the moment somebody types out a function. Testing is too late in the game.


Read more…

5 Key Benefits of Source Code Analysis

Static Code Analysis: Binary vs. Source

Static Code Analysis is the technique of automatically analyzing an application's source and binary code to find security vulnerabilities. According to Gartner's 2011 Magic Quadrant for Static Application Security Testing (SAST), "SAST should be considered a mandatory requirement for all IT organizations that develop or procure application". In fact, in recent years we have seen a shift in application security, where code analysis has become a standard method of introducing secure software development and gauging inherent software risk.

 

Two categories exist in this realm:

1. Binary or byte-code analysis (BCA): analyzes the binary/byte code that is created by the compiler.

2. Source code analysis (SCA): analyzes the actual source code of the program without requiring all code to be retrieved for compilation.

Both offerings promise to deliver security and to satisfy the requirement of incorporating security into the software development lifecycle (SDLC). Faced with the BCA vs. SCA dilemma, which should you choose?


(Read more: Checklist to Evaluate A Cloud Based WAF Vendor)


The Inherent Flaws of Binary Code Analysis (BCA)

On the one hand, BCA saves some code analysis effort, since the compiler automates parts of the work such as resolving code symbols. Ironically, however, it is precisely this compiler off-loading that presents the fundamental flaw of BCA: in order to use BCA, all code must be compiled before it is scanned. This raises a plethora of problems that push back the SDLC process and give security a bad, nagging name.

Issues include:

  • Vulnerabilities exposed too late in the game. Since all the code must be compiled prior to the scan, security gets pushed to a relatively late stage in the SDLC. At this point, the scan usually finds too many vulnerabilities to handle, with no time to fix them and with pressure from sales and marketing teams to release the product. As a result, these vulnerabilities, albeit uncovered, are pushed into the release. In fact, actual vulnerabilities have already slipped through the scanning process in real-world projects, such as occurred in a Linux OS distribution release.

 

  • Compiler optimization hurts the accuracy of the results. One of the many roles compilers fulfill is to optimize code in terms of efficiency and size. However, this optimization may come at the expense of the accuracy of results. For example, compilers might remove so-called “irrelevant” lines, aka dead code. These are lines of code that developers insert as part of their debugging process. While the compiler removes these code snippets, they can contain code that breaches corporate standards.
  • No byte-code to retrieve in PaaS scenarios. In a cloud computing scenario, the PaaS provider is responsible for the validation, proprietary compilation and execution of the programs, so the code has no manifestation as byte-code or binary that a BCA tool could retrieve and scan.

(Read more:  Checklist to Evaluate a DLP Provider)


Benefits of Source Code Analysis (SCA)
 

By scanning the source code itself, SCA can be integrated smoothly into the SDLC and provide near real-time feedback on the code and its security. Source code analysis compensates for BCA's shortcomings and provides an efficient, workable alternative. How?

 

1. Scans Code Fragments and Non-Compiling Code

An SCA tool is capable of scanning code fragments, regardless of compilation errors arising from syntax or other issues. Both auditors and developers can scan incomplete code in the midst of the development process without having to achieve a build, ultimately allowing the discovery of vulnerabilities much earlier in the Software Development Life Cycle (SDLC).

 

2. Supports Cloud Compiled Language

New breeds of coding languages have developed in cloud computing scenarios. Here the developer codes in the PaaS provider's language, while the PaaS provider is responsible for the validation, proprietary compilation and execution of the programs. The code has no manifestation as byte-code or binary, so SCA must be done on the source code itself. The best-known example is the Force.com platform supplied by Salesforce.com, which is based on the server-side language Apex and the client-side language Visualforce. Only an SCA product can support this new paradigm.

 

3. Assesses Security of Non Linking Code

Where the code references infrastructure libraries whose source is missing, BCA tools immediately fail with the unfortunate "Missing Library" message. Days may be spent building stubs for these missing parts just to make the code compile: a lot of hard work without any added value.

An SCA product easily identifies vulnerabilities, such as SQL Injection – even when the actual library code of the executing SQL function call is missing.

 

4. Compiler Agnostic

In a multi-compiler environment, typically found at code auditors and large corporations, SCA provides a one-solution-fits-all approach. This stands in stark contrast to BCA, which must support an endless number of compilers and versions. The reason? Each compiler transforms source code into its own version of binary/byte code, forcing the BCA tool to read, understand and analyze the different outputs of different compilers. Since an SCA tool runs on the code itself, not on post-compilation output, it provides a single standard regardless of compiler version or compiler upgrades.

 

5. Platform Agnostic

Similarly, when integrating SCA into the SDLC, the exact same tool can be used to scan the code anywhere – regardless of the operating system or development environment. This eliminates the inherent redundancy of BCA which must deliver separate scanning tools for each platform.

Disclaimer: This report is from Checkmarx and if you want more details or want to connect you can write to contact@cisoplatform.com

(Read more: Checklist for PCI DSS Implementation & Certification)

 

Read more…

Penetration Testing for E-commerce Applications

Over the past decade, e-commerce applications have grown both in number and in complexity. They are becoming more personalized, more mobile-friendly and richer in functionality, with complex recommendation algorithms constantly running at the back end to make content search as personalized as possible. Here we will learn about the necessity of penetration testing for e-commerce applications.

 

Why is conventional application penetration testing not enough for e-commerce applications?

E-commerce applications are growing in complexity; as a result, conventional application penetration testing is simply not enough. Conventional testing focuses on vulnerability classes described in the OWASP or WASC standards, such as SQL Injection, XSS, CSRF, etc.

 

A specialized, tailored penetration testing framework for e-commerce applications is required, with the following features:

  • Comprehensive coverage of business logic vulnerabilities for the various functional modules of e-commerce applications.
  • Comprehensive coverage of flaws related to integrations with various third-party products.

(Read more:  Can your SMART TV get hacked?)


Key Vulnerability Classes Covered:

Some of the vulnerability classes covered as part of E-commerce penetration testing are listed below.


Order Management Flaws

Order management flaws primarily consist of misuse of the order placement functionality. The exact vulnerabilities depend on the kind of application; some examples are listed below:


  • Possibility of price manipulation during order placement.

  • Possibility of manipulating the shipping address after order placement.
  • Absence of Mobile Verification for Cash-on-Delivery orders.
  • Obtaining cash-back/refunds even after order cancellation.
  • Non deduction of discounts offered even after order cancellation
  • Possibility of illegitimate ticket blocking for certain time using automation techniques.
  • Client side validation bypass for max seat limit on a single order.
  • Bookings/Reservations using fake a/c info.
  • Usage of Burner (Disposable) phones for verification.


Coupon and Reward Management Flaws

Coupon and reward management flaws are extremely complex in nature. Some examples are listed below:

 

  • Coupon Redemption possibility even after order cancellation.
  • Bypass of coupon’s terms & conditions.
  • Bypass of coupon’s validity.
  • Usage of multiple coupons for the same transaction.
  • Predictable Coupon codes.
  • Failure of re-computation in coupon value after partial order cancellation.
  • Bypass of coupon’s validity date.
  • Illegitimate usage of coupons with other products.


(Read more:  How to choose your Security / Penetration Testing Vendor?)

Payment Gateway Integration (PG) Flaws

Many of the classic attacks on e-commerce applications stem from payment gateway integrations; buying a pizza for $1 is a classic example of an attacker misusing a PG integration. Some examples are listed below, followed by a short server-side price check sketch.

  • Price modification at client side with zero or negative values.
  • Price modification at client side with varying price values.
  • Call back URL manipulation.
  • Checksum bypass.
  • Possibility of price manipulation at Run Time.
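A minimal sketch of the "never trust the client for price" rule behind several of these flaws: the total sent to the payment gateway is recomputed from the server-side catalogue, so a tampered client-side price (zero, negative or reduced) is simply ignored. The catalogue and field names are made-up placeholders.

```python
CATALOGUE = {"SKU-PIZZA": 9.99, "SKU-DRINK": 2.50}   # authoritative server-side prices

def compute_order_total(items):
    """items: list of {"sku": str, "qty": int} taken from the client request."""
    total = 0.0
    for item in items:
        qty = int(item["qty"])
        if qty <= 0:
            raise ValueError("Quantity must be a positive integer")
        # Any price field the client may have sent is never consulted.
        total += CATALOGUE[item["sku"]] * qty
    return round(total, 2)

print(compute_order_total([{"sku": "SKU-PIZZA", "qty": 2}, {"sku": "SKU-DRINK", "qty": 1}]))
```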


Content Management System (CMS) Flaws

Most e-commerce applications have a backend content management system to upload and update content. In most cases, the CMS is integrated with resellers, content providers and partners; for example, a hotel e-commerce application will be integrated with individual hotels or with multiple partners. As a result of this increased complexity, there are multiple sub-classes of vulnerabilities that need to be tested, some of which are listed below:

  • File management logical flaws
  • RBAC Flaws
  • Notification System Flaws
  • Misusing Rich Editor Functionalities
  • 3rd Party APIs Flaws
  • Flaws in Integration with PoS (Point of Sales Devices)


Conventional Vulnerabilities

Apart from business logic vulnerabilities, conventional vulnerabilities are also part of the penetration testing framework. Examples of conventional vulnerabilities are SQL Injection, Cross Site Scripting (XSS), CSRF and other vulnerabilities defined as part of OWASP.


This is a re-post of the blog originally published on CISO Platform

Link to original blog: http://www.cisoplatform.com/profiles/blogs/penetration-testing-e-commerce-applications

 

Read more…

The AppSec How-To: Visualizing and Effectively Remediating Your Vulnerabilities

The biggest challenge when working with Source Code Analysis (SCA) tools is how to effectively prioritize and fix the numerous results. Developers are quickly overwhelmed trying to analyze security reports containing results that are presented independently from one another.

 

Take, for example, WebGoat, OWASP's deliberately insecure web application used as a test bed for security training: it has more than 100 Cross-Site Scripting (XSS) flaws. Assuming that each vulnerability takes 30 minutes to fix and another 30 minutes to validate, 100+ flaws at an hour each add up to nearly three weeks of work. This turnaround is too long and costly, and even impractical, for large projects containing thousands of lines of code or for environments with quick development cycles such as DevOps. With such a large number of vulnerabilities, it should come as no surprise that vulnerable and unfixed code gets released.

 

In this article, we show how visual insights into the vulnerability – from origin to impact – can help developers to:

  • Picture the security state of their code
  • View the effect of fixing vulnerabilities in different locations
  • Automatically narrow down the results of extra-large code bases to a manageable amount


In fact, using this method we were able to cut down the number of fixing locations of WebGoat XSS vulnerabilities to only 16 – even without looking at the code.

 

(Read more:  Annual Survey on Cloud Adoption Status Across Industry Verticals)

 

A Picture is Worth a Thousand LoC: Visualizing Your Vulnerabilities

“Know your Enemy” is the mantra of any security professional. It defines what they’re up against, how to face it and what tactics to employ. It sets the groundwork for all future outcomes. The same goes for developers – and the enemy is vulnerable code. In the practice of secure coding, developers should receive an overview of the security posture of their code, the amount of vulnerabilities contained within the code and how they manifest themselves to the point of exploitation. This is where the graph view comes in.

The Basics: Data Flow

A data flow is best described as a visualization of the code's path from the source of the vulnerability to the point where it can be exploited (aka the "sink"). Each step in the flow is reflected as a node in the graph.

 


Traditionally, each vulnerability result has a single data flow – independent from other findings. Accordingly, for numerous results, say 14 different vulnerability findings, we can view a graph with 14 separate flows:

 


Obviously, such a graph does not help much in understanding how to prioritize fixes. What developers really need is to understand the relationships between the different flows and simplify the resulting graph as much as possible.

 

(Read more:  Annual Survey on Security Budget Analysis Across Industry Verticals)


Improving Visibility: The Graph View

The graph view takes those separate data flows and depicts them in a way that easily presents the relationships between flows.
Building the graph is a two-step process:

1. Combine the same node appearing in multiple paths. In other words, identify and merge those pieces of code that are actually shared by different data flows. Taking the 14-path graph from above, consider the case where the 5 leftmost sources share the same node, which in turn shares a node closer to the sink with another node on its level.

 

2. Simplify the graph to reduce the number of data flow levels. This can be done by combining similar-looking data flows into a single node. For those familiar with graph theory, you might recognise by now that we are building a homeomorph of the original graph, i.e. a graph with an identical structure but a simplified representation. We do this by first grouping the nodes.


As we continue this process the resulting graph eventually looks like this:


With this simplified graph flow we now have a visual mapping of the security of the code. Moving away from just looking at code bits and at seemingly disparate code flaws, the graph flow actually allows us to see the correlation between vulnerabilities. Furthermore, a quick glance at the graph provides us with a deep understanding of the effect that a certain vulnerability has over the rest of the code – a relationship that’s much too intricate to understand through a code review.


The Butterfly Effect: Considering Fixing Scenarios

What if you fix the code in a certain location? How will that affect the code? How about in another location? With the graph view in hand, we can consider all these scenarios, see the overall effect quickly, and decide for ourselves which route to take.
Let's look again at our simplified view (the homeomorph) of our original example. A fix of the single node pointed to by the arrow results in fixing two separate paths.

 


On the other hand, the following graph shows what happens if we try to fix a different node. In this case, the node pointed to by the arrow only leads to a partial fixing of the path. The reason is that the bottom “branch” of that code is also affected by other nodes that are not yet fixed.

 


We can continue to interact with the graph and consider different "what-if" scenarios. Not only will they show us the ripple effect of fixing a certain vulnerability, but after some time of getting into this habit, we will intuitively understand the impact of certain vulnerabilities and start to recognize the "best places" to fix.

 

Only the Best: Optimizing Vulnerability Fixing

Ideally, we’d also like to accurately and automatically pinpoint those “best-fix” locations on the graph.


Once again, this calls for the adoption of graph-theory concepts. In particular, the "Max-Flow Min-Cut" theorem helps us calculate the smallest number of node updates that fix the highest number of flows. Applying this calculation to our example graph, we can visually locate the 3 nodes that, if fixed, amount to rectifying the complete flow graph.

This is incredible considering that we started with a 14-path graph equivalent to 70 nodes.
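A small sketch of the best-fix idea on a toy graph, assuming the networkx library. We tie all taint sources to a super-source and all sinks to a super-sink, then ask for the minimum node cut, i.e. the smallest set of intermediate nodes whose fix disconnects every flow. The graph below is illustrative, not WebGoat's.

```python
import networkx as nx

# Toy data-flow graph: three tainted sources funnel through one shared helper.
G = nx.DiGraph()
G.add_edges_from([
    ("src1", "shared_helper"), ("src2", "shared_helper"), ("src3", "shared_helper"),
    ("shared_helper", "sink_a"), ("shared_helper", "sink_b"),
])

sources = [n for n in G if G.in_degree(n) == 0]
sinks = [n for n in G if G.out_degree(n) == 0]
G.add_edges_from(("SUPER_SRC", s) for s in sources)
G.add_edges_from((t, "SUPER_SINK") for t in sinks)

# Smallest set of nodes whose removal (i.e. whose fix) breaks every source-to-sink flow.
best_fix = nx.minimum_node_cut(G, "SUPER_SRC", "SUPER_SINK")
print(best_fix)   # {'shared_helper'}
```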

 

(Read more: Security Technology Implementation Report- Annual CISO Survey)


Summary

Graph flows are a visually appealing way for developers and security professionals alike to fully comprehend the relationships between the different parts in the code and the propagation of a tainted piece of code to its sink.
The visualization of the code provides an interactive tool allowing the developer to proactively consider the effect of fixing various vulnerabilities at different places. Most importantly, the graph flow allows us to locate the best-fix locations in a quick, efficient and accurate manner.

Disclaimer: This report is from Checkmarx and if you want more details or want to connect you can write to contact@cisoplatform.com

Read more…

Top 10 Mistakes in Cyber Security Buying

Acquiring new security tools is not an easy task. Some procurement activities are tedious and require months of effort to select the right tool that meets all your expectations. In this blog, we list the top 10 mistakes to avoid while procuring new security tools. Let's get to it.

 

The value is not communicated to all the stakeholders (from boards to employees) 

CISOs often find it hard to articulate the value that a security control will bring to the organisation. Be it the board, a specific department or a group of employees, they must understand the value of security and the reason for using any such control.

 

In-depth use cases are not clearly defined:

Identify the most important use-cases specific to your organisation before you buy any security tool. It helps you create custom rule-sets and policies which in turn will help you get the most out of your tool.

 

Holistic search of vendors and product comparisons not done

There may be many vendors with the same or similar offerings. Some vendors' capabilities may be comprehensive; others may be very basic. Pricing and licensing models may also vary greatly from vendor to vendor. Security managers need to evaluate and compare products from as many vendors as they can before zeroing in on any single vendor.

 

Read more:( Top 50 Emerging Vendors to look out for in 2017)

 

Enough Peer reviews & user feedback not collected

Most security managers may not know this, but peer reviews and ratings of security products are available online and can be leveraged to learn from other people's experience. Check peer reviews before you select any vendor or product. They can tell you about a vendor's after-sales support, product bottlenecks, implementation challenges and so on.


Tool’s compatibility with existing technology and process stack is not tested

Check whether the tool is compatible with your organisation's existing processes. It should support and enhance them, not conflict with any of them. If it does conflict, define exceptions and document them properly, or request feature customization from the vendor.

 

Vendor's local support, direct or through partners, is not considered

Check their support services, because the human factor is important. It is not only about the product; after-sales service matters too. Check whether local support is available to you directly from the vendor or through their partners.

 

Vendor’s background check and ability to execute is not verified

Do your due diligence before finalising any vendor. Ask for case studies, run a proof of concept and ask about their competitors.

 

Vendor’s risk due diligence is not conducted properly

Organizations also often neglect to check the vendor's risk profile and third-party risks. You need a robust vendor risk management program before buying security products. Verify, using open source intelligence, whether there are any major vulnerabilities in their products, as well as their patching cadence, past security history, the strength of their internal security program, benchmarking against their competition, etc.

  

The Governance policy for that security program is not designed and documented

For any security program to be successful, it needs a proper governance framework and policies. Create a policy document specific to each security program.

 

What do you think could be other points?

Please suggest your ideas in the comments. We would love to hear your opinion.

Read more…

Top 10 Metrics for your Vulnerability Management Program

Security Metrics are essential for quantitative measurement of any security program. Below, we’ve listed some security metrics (in no particular order) which can be used to measure the performance of your Vulnerability Management (VM) program. For demonstrating performance improvements, you can create dashboards / graphs which can show trends over time for some of these metrics. Consider  using Vulnerability Management Platforms or GRC Solutions to help automate collection and reporting of some of these metrics.

 

  1. Mean Time to Detect

    Measures how long it takes before known vulnerabilities get detected, across the organization. If a Heartbleed 2 or EternalBlue 2 were discovered today, how long will it take to identify all the impacted systems across the organization?

  2. Mean Time to Resolve

    The mean time taken to remediate or patch vulnerabilities after they are identified by the Vulnerability Assessment (VA) tool, i.e. post-detection. (A small computation sketch follows this list.)

  3. Average Window of Exposure

    The time from when a vulnerability first became publicly known to when the impacted systems get patched.

  4. Scanner Coverage

    This measures the ratio of known assets (e.g.: from Asset Management solution) to those which actually get scanned. Can be split by Internal Assets & External assets.

  5. Scan Frequency by Asset Group

    How frequently are the assets scanned based on different groupings (e.g.: Internal Assets, BU Assets, Impacting Compliance like PCI etc.)

    ( Do More : Check out the top technologies in Vulnerability Assessment Domain )

  6. Number of Open Critical / High Vulnerabilities

    Based on Risk based Prioritization of vulnerability, considering a number of factors (e.g.: CVSS, Asset Criticality, Exploit Availability, Asset Accessibility (Internet vs Intranet), Asset Owner etc.)

  7. Average Risk by BU / Asset Group etc.

    Based on Risk based Prioritization of vulnerabilities (outlined above), the average risk exposure can be calculated based on different groupings.

  8. Number of Exceptions Granted

    This metric tracks the vulnerabilities that have not been remediated for various reasons. You may set rules in your scanner to overlook such vulnerabilities, but you have to track them for auditing and/or future action, as they may still affect your risk posture.

  9. Vulnerability Reopen Rate

    This measures the effectiveness of the remediation process. A high rate means that the patching process is flawed.

  10. % of Systems with no open High / Critical Vulnerability

    What % of systems are fully patched and have no high severity vulnerability present. Can be reported by asset groups.
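A small sketch of how a metric like Mean Time to Resolve might be computed from a vulnerability export; the record layout and field names (detected_at, resolved_at) are illustrative assumptions.

```python
from datetime import datetime
from statistics import mean

findings = [
    {"id": "V-1", "detected_at": "2024-01-03", "resolved_at": "2024-01-20"},
    {"id": "V-2", "detected_at": "2024-01-05", "resolved_at": "2024-02-01"},
    {"id": "V-3", "detected_at": "2024-01-10", "resolved_at": None},   # still open
]

def mean_time_to_resolve(records):
    """Average days from detection to remediation, counting only closed findings."""
    durations = [
        (datetime.fromisoformat(r["resolved_at"]) - datetime.fromisoformat(r["detected_at"])).days
        for r in records
        if r["resolved_at"]
    ]
    return mean(durations) if durations else None

print(f"MTTR: {mean_time_to_resolve(findings)} days")   # -> MTTR: 22 days
```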

Do let me know if you want us to add or modify any of the listed metrics. Check out the Vulnerability Assessment market within Product Comparison Platform to get more information on these markets.

Read more…

CISO Viewpoint: Safe Penetration Testing

Safe Penetration Testing – 3 Myths and the Facts behind them

Penetration testing vendors will often promise and assure you that they can test your web applications safely and comprehensively in your production environment. So, when performing penetration testing of a web application hosted in a production environment, you need to consider the following myths and facts; ignoring them can directly or indirectly end up causing you to do to yourself exactly what you are trying to prevent hackers from doing to you in the first place.

 

(Read more:  Under the hood of Top 4 BYOD Security Technologies: Pros & Cons)

 

Myth 1 – Vendors promise that testing on your production environment is perfectly safe and that penetration testing will not cause any disruption to your end users.

 

The Facts

  • During testing, the application or its host may suffer degradation in performance if it is not designed, configured and implemented adequately. This will result in end users of the application suffering a diminished user experience or even a Denial of Service situation under the wrong circumstances. This is quite often out of the hands of the testing vendor and can be neither predicted nor fully avoided if any decent level of penetration testing is to be done.
  • Safe testing is usually limited to reducing the number of threads and requests made by any scanners used, which makes testing take much longer than usually quoted by your testing vendor (a minimal throttling sketch follows this list). Another way vendors claim to do safe testing is by disabling automated form fills by the scanner, which results in substantially lower test coverage.
  • During our testing, we have encountered quite a few cases where the target application suffered performance issues due to bad design even though automated form fill was disabled and the scan was limited to only one thread with request throttling. In one case, we found that the application was performing detailed logging which was disk intensive. The application was normally very sparsely used, but during testing, the logs quickly filled up and caused a Denial of Service.
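For illustration, the kind of throttling described above boils down to something like the sketch below, assuming the requests library; the delay and timeout values are arbitrary placeholders.

```python
import time
import requests

def throttled_scan(urls, delay_seconds=2.0):
    """Fetch each URL sequentially with a pause, trading scan speed for low load
    on the production target."""
    results = {}
    for url in urls:
        results[url] = requests.get(url, timeout=10).status_code
        time.sleep(delay_seconds)
    return results
```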

 

(Read more: CISO Mantra on data sanitization)

 


Myth 2 – Your penetration testing vendor may tell you that your data is safe for full-blown penetration testing on a production system.


The Facts

  • SQL injection, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) can in some cases only be confirmed by actually attempting to insert data into the web application's underlying database, particularly where forms are present on the URL and the test case is crafted to perform a create or update function.
  • Also, any application function designed to perform data insertion, updating or deletion in the database within the confines of the expected design may be executed during exploit testing, resulting in undesirable data corruption. Again, safe testing will mean that a lot of test cases won't be performed, and hence vulnerabilities will be missed.

(Read more:  BYOD Security: From Defining the Requirements to Choosing a Vendor)


Myth 3 – There will be no disruption to your business during penetration testing.


The Facts

  • If the target application to be scanned is linked to other servers and applications that are part of a business process chain, then they are likely to be affected. The effects could range from flooding the system with dummy emails, orders, info request forms etc. which can all potentially disrupt the business if not handled carefully.
  • In one case, the target application was generating multiple synchronous back-end requests for each request sent to it. This led to an amplification of requests which quickly overloaded the servers and led to a Denial of Service. Safe testing may be done by disabling form filling, which will severely limit the coverage of the testing performed.


(Watch more : Top Myths of IPV-6 Security)


Advantages of Performing Pen Testing on a Staging Environment

What seems obvious from all the above is that, wherever possible, you should try to perform penetration testing on a staging or test deployment. This has two main advantages:

  • First, you don't impact your business directly in any way.
  • Second, and more importantly, you do not put constraints on your penetration testing vendor that would not apply to a hacker. Once your testing regime is mature and you have fixed all the vulnerabilities in the staging environment, you can consider a full penetration test of your production environment as a final assurance check.


Adapted from the original blog on the iViZ Security website.

Read more…

9 Key Security Metrics for Monitoring Cloud Risks

Most organizations use multiple cloud applications daily (by some estimates 100+). These applications need to be closely monitored based on the risk they pose and the purpose they serve. Here are some key security metrics which can help you monitor the use of cloud applications (primarily SaaS) within your organization. You can automate the measurement and reporting of most of these metrics using solutions like Cloud Access Security Brokers (CASBs).

 

1- High-Risk Cloud Apps Discovered
Number of High-Risk Cloud Apps Detected based on Risk classification parameters for apps (e.g.: Apps without a well-defined privacy policy, hosting data outside EU etc.)

 

2- Cloud Apps Unauthorized / Authorized:
The ratio of unauthorized vs. authorized cloud apps in use. Business units often purchase cloud services on their own without informing IT, which results in shadow IT. Some of these apps might not be authorized due to security concerns.

  

3- Number of Redundant Cloud Apps:
The number of duplicate or redundant cloud apps, based on app discovery and use case. This can also help demonstrate cost savings, providing a metric the business can directly relate to. E.g.: cloud-based file storage can be consolidated to one provider from the current four (Google Drive, SkyDrive, Box and Dropbox).

 

4- Sensitive Data Exposures Detected
Files accessible by unauthorized users either via the internet or intranet

 

5- Number of External Collaborators
Count of people from outside the organization who are collaborating on files containing sensitive data, hosted within or outside your domain.

 

6- Cloud Services Having Access to Sensitive Data
Number of cloud services which store or process any data which is classified as sensitive by the organization.

 

7- Number of Cloud Services by Category
Number of cloud services in use by the organization in various categories (e.g.: Social Media, File Sharing, Screen Sharing etc.)

 

8- Cloud Policy Violations
These vary based on the cloud policy defined by the organization, but policy violations and exceptions need to be closely monitored, which is why we included this metric. Some examples:

  1. # Unmanaged Devices having Access to Sensitive Data on Cloud
  2. # Instances of Sensitive Data on Cloud without Organization Managed Encryption Keys
  3. # Unmanaged cloud applications (e.g.: apps for which there are no logs to track user activities/logins)

9- Administrative or Privileged Logins per Cloud Service
Average number of users having admin privileges for each authorized cloud application in use.

Did we miss something? Drop a note and we’ll update the list based on the feedback.

Read more…

This blog covers the pros and cons of different types of Application Security Testing technologies, along with a checklist to choose among them.

Static Application Security Testing (SAST)

SAST or Static Application Security Testing is the process of testing the source code, binary or byte code of an application. In SAST you do not need a running system.
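For illustration only, the snippet below shows the kind of code-level flaw a SAST tool would typically flag, pointing at the exact line where untrusted input is concatenated into a SQL query. The function, table and column names are made up for the example.

```python
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # A typical finding a SAST tool would pinpoint: untrusted input is
    # concatenated directly into the SQL statement (SQL injection).
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # The remediation such reports usually suggest: a parameterized query.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```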

 

Pros

  • SAST can pinpoint the exact code where the flaw is.
  • You can detect vulnerabilities before the application is deployed: SAST does not need a running application.
  • Using SAST you can find vulnerabilities in an earlier phase of the application’s development.

 

Cons

  • SAST fails to find vulnerabilities located outside the code or in third-party interfacing.
  • SAST cannot find vulnerabilities related to operational deployment.
  • Business logic vulnerabilities cannot be discovered by a typical automated SAST tool.
  • SAST is more expensive and has higher overhead.
  • You need to provide the source code or binaries for SAST.

 

(Read more:  CISO Round Table on Effective Implementation of DLP & Data Security)

 

Dynamic Application Security Testing (DAST)

DAST or Dynamic Application Security Testing is the process of testing an application in its running state. In DAST you do not need the source code or the binaries. It is a method of probing the application from the outside, just like a hacker would.
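To make the contrast concrete, here is a minimal sketch of the kind of outside-in probe a DAST tool (or a tester doing hybrid testing) performs. The target URL, parameter name and payloads are purely illustrative assumptions; a real DAST tool crawls the application and fuzzes every input it discovers.

```python
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical target and payloads for illustration only.
TARGET = "http://staging.example.com/search"
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]

def probe(param: str = "q") -> None:
    for payload in PAYLOADS:
        url = TARGET + "?" + urllib.parse.urlencode({param: payload})
        try:
            with urllib.request.urlopen(url) as resp:
                status, body = resp.status, resp.read().decode(errors="replace")
        except urllib.error.HTTPError as err:
            status, body = err.code, err.read().decode(errors="replace")
        # Crude heuristics: a server error or a reflected payload is a signal
        # worth a manual look, not proof of a vulnerability.
        if status >= 500 or payload in body:
            print(f"Possible issue with payload {payload!r} -> HTTP {status}")

# probe()
```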

 

Pros

  • Can detect vulnerabilities related to operational deployment.
  • Business Logic Flaws can be figured out by DAST if you are using Hybrid Testing (with manual augmentation).
  • Does not need access to the code.
  • Easier to adopt, lower in cost and is more mature in terms of industry adoption.
  • Can find vulnerabilities located outside the code or in third-party interfacing.

 

Cons

  • Cannot pinpoint the exact location of a vulnerability in the code.
  • Coding quality or adherence to coding guidelines cannot be assessed easily.

 

(Read more:  Can your SMART TV get hacked?)

 

DAST vs SAST: What should I choose?

 

Step 1: Conduct DAST.

This is the low-hanging fruit: easy to adopt, less expensive, more mature.

Exception: Choose SAST if your application needs to be installed and is not web-based (e.g. client-based apps like chat clients, VoIP clients etc.)

 

Step 2: Conduct SAST+DAST

Lower false negatives and better coverage, but more costly and with higher overhead.

 

Adapted from the original blog written on the Iviz Security website.

Read more…

Secure SDLC Program: “The Art of Starting Small”

I have seen several organizations trying to adopt secure SDLC and failing badly right at the beginning. One of the biggest reasons is that they try to use a “Big Bang” approach. Yes, there are several consultants who will push you to go for a big project and use the classical waterfall model to adopt secure SDLC. But that’s asking too much. Changing the habits of a group is not easy.

 

Typically there is a big pushback and, depending on how determined you are and the amount of dedicated resources you have, the exercise will either be a half-hearted success or a failure. However, with less effort than that you can be more successful. Here is how.

 

( Read More: 5 Major Types Of Hardware Attacks You Need To Know )

 

Why is starting small important?

  1. Changing a group’s habits is very tough. Remember the last time you or a friend tried to quit smoking?
  2. Defining the optimal (minimal but effective) process is tougher than you think.
  3. What you think will work might actually not.
  4. Every organization is different. You will have your own learning.
  5. Secure SDLC is not just technology. You will have to deal with human minds, habits and resistance.

 

Phase 1:  Art of starting small

Define only one small area (in terms of secure coding) or a small group, and implement the most important coding guidelines first. Keep the scope minimal so that you get the least pushback in adoption and start building the desired habit/mindset among the users. During this phase make sure you have the following:

  1. Define the most important goals. There should be no more than one or two. Changing the habits of a group is not easy, so keeping it small makes it easier. Once your pilot is successful you will have enough learning to do the complete roll-out. Select the top 20% of guidelines which will help you the most in phase 1.
  2. Define the measures of success. It is very important to measure the success of adoption. Implementation just for the sake of implementation will produce almost the same amount of junk code as before.
  3. Do weekly huddles. Measure the weekly adoption and success metrics. Review target vs. achievement, roadblocks, solutions and next week’s plan.
  4. Create a Secure SDLC learning document. Document what you learnt from the process and define the model which worked. This document will be your guide when you launch the bigger mission across the organization and across all areas of coding.

 

( Read More: 5 Reasons Why You Should Consider Evaluating Security Information & Event Management (SIEM) Solution )

 

Phase 2: Big Bang Implementation

Now that you have done a small implementation and have gone through the learning, you will be better equipped to implement for the larger organization or the larger domain. I am not discussing the details of this phase here since I wanted to focus on the “Lean model” of “Starting Small”.

 

This is a re-post of the blog originally published on CISO Platform

Link to original blog: http://www.cisoplatform.com/profiles/blogs/secure-sdlc-implementation-art-of-starting-small

 

Read more…

SAST vs. DAST: How should you choose?

This blog provides information about SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing), and answers the common question of SAST vs. DAST.

What is SAST?

SAST or Static Application Security Testing is the process of testing the source code, binary or byte code of an application. In SAST you do not need a running system.

 

What is DAST?

DAST or Dynamic Application Security Testing is the process of testing an application in its running state. In DAST you do not need the source code or the binaries. It is a method of probing the application from the outside, just like a hacker would.

 

SAST: Pros and Cons

Pros
• SAST can pinpoint the exact code where the flaw is
• You can detect vulnerabilities before the application is deployed: SAST does not need a running application
• Using SAST you can find vulnerabilities in an earlier phase of the application’s development

Cons
• SAST fails to find vulnerabilities located outside the code or in third-party interfacing
• SAST cannot find vulnerabilities related to operational deployment
• Business logic vulnerabilities cannot be discovered by a typical automated SAST tool
• SAST is more expensive and has higher overhead
• You need to provide the source code or binaries for SAST

 

(Read more:   How Should a CISO choose the right Anti-Malware Technology?)

 

DAST: Pros and Cons

Pros
• DAST can detect vulnerabilities related to operational deployment
• Business logic flaws can be figured out by DAST if you are using hybrid testing (with manual augmentation)
• Does not need access to the code
• DAST is easier to adopt, lower in cost and more mature in terms of industry adoption
• DAST can find vulnerabilities located outside the code or in third-party interfacing

Cons
• DAST cannot pinpoint the exact location of a vulnerability in the code
• Coding quality or adherence to coding guidelines cannot be assessed easily

 

(Read more: 5 of the most famous and all time favourite white hat hackers!)

 

A Few SAST myths

• Myth 1: SAST gives better coverage: It is a myth that SAST gives better coverage. SAST cannot find vulnerabilities in business logic or in third-party code/interfacing.
• Myth 2: SAST has lower false positives: This is not true. All tools, SAST or DAST, throw out a lot of false positives. Human augmentation is the only way to remove all false positives.


When to choose DAST?

• Ideally, DAST should be adopted irrespective of SAST, since you want to know the flaws (including business logic flaws, flaws due to third-party code etc.) which SAST cannot find. DAST gives you the picture from the perspective of a hacker.
• DAST should be adopted prior to the system going live, or during every release to production.
• When you do not have access to the code or don’t want to give access to it.


(Watch more : South Asia’s Cyber Security Landscape after the Snowden Revelations)


When to choose SAST?

• SAST is ideal if you want to test the application while it is being built.
• Choose SAST when you have access to the code/binary and have enough maturity in the organization, and the budget, to handle it.


Final words

Neither SAST nor DAST is enough on its own. They are complementary to a certain extent. The future lies in the smart integration of SAST and DAST technologies.


This is a re-post of the blog originally published on CISO Platform

Link to original blog: http://www.cisoplatform.com/profiles/blogs/technologies-in-penatration-testing-what-to-choose

Read more…

The proliferation of the BYOD trend has been a bonus for businesses, from cost savings to productivity gains. But for IT departments, security and compliance are a headache as they scramble to catch up with the mobility requirements of the workforce. Here are some of the key metrics which can help your organization monitor its enterprise mobility management.

Unmanaged devices in the enterprise network:

This is the total number of unmanaged devices being used in the enterprise. Unmanaged devices pose a security risk to any organization; hence, this number should be kept as low as possible.

 

Average number of hours an unauthorized device is found on the network:

This is the average duration for which an unauthorized device appeared on the network. Such devices may hide themselves through different approaches, such as personal firewalls or having their services disabled.

 

Number of OWASP Mobile Top 10 Risks Identified and Fixed:

By evaluating mobile apps for flaws and vulnerabilities in 10 distinct categories, security teams can work on a mitigation plan to reduce these flaws in each risk category.

 

Risk/Vulnerability Score:

This is a risk score that can be derived using factors like the number of unauthorized devices, the average hours an unauthorized device is found on the network, and whether device threats or unauthorized apps have been detected. The reporting should assign a total risk score, summarize discovered vulnerabilities, and provide suggestions on how to resolve threats.
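As a rough illustration, a composite score along these lines could be computed as in the sketch below. The factors and weights are assumptions to be tuned to your own environment, not a standard formula.

```python
def mobility_risk_score(
    unauthorized_devices: int,
    avg_hours_unauthorized_on_network: float,
    devices_with_unauthorized_apps: int,
    total_devices: int,
) -> float:
    """Toy composite risk score (0-100) built from the factors mentioned above.

    The 0.5/0.3/0.2 weights are illustrative assumptions only.
    """
    if total_devices == 0:
        return 0.0
    unauthorized_ratio = unauthorized_devices / total_devices
    exposure = min(avg_hours_unauthorized_on_network / 24.0, 1.0)  # cap at a full day
    app_ratio = devices_with_unauthorized_apps / total_devices
    score = 100 * (0.5 * unauthorized_ratio + 0.3 * exposure + 0.2 * app_ratio)
    return round(score, 1)

# Example: mobility_risk_score(12, 6.0, 40, 500) -> 10.3
```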

 

Shadow IT apps used by employees on mobile devices:

This metric identifies the number of unauthorized apps used on employees’ enterprise mobile devices. It should give detailed reporting: for example, determining the most frequently blacklisted or whitelisted apps, and showing the number of devices and the applications each user has.

 

Benchmarking:

It should stack your security risk score against competitors and identify gaps across deployment, devices, and apps. It should also give tips to improve the organization’s approach to mobile productivity and security.

 
 

Read more: (TOP 6 VENDORS IN ENTERPRISE MOBILITY MANAGEMENT (EMM) MARKET AT RSAC 2017)

 

Policy violations per month:
This is the total number of policy violations per month. This metric indicates possible false positives/false negatives and helps in policy fine-tuning.

 

Mean time to provision and deprovision mobile devices in an enterprise network:
This metric refers to the mean time it takes to provision or deprovision any mobile device in the network. With an EMM solution offering centralized management and control, this time should usually be in minutes.

 

Do let me know if you want us to add or modify any of the listed metrics. Check out the Enterprise Mobility Management market within Product Comparison Platform to get more information on these markets.

Read more…

Key Metrics for your IT GRC Program

IT GRC is a very broad topic encompassing nearly all aspects of information security. In this blog, we’ve tried to list down some key metrics that you should be tracking as part of your IT GRC program. Like all metrics these can be tracked on a periodic basis (monthly, quarterly etc.) and represented using a trending graph. Solutions like IT GRC Platforms can help automate the collection and reporting of metrics.

 

Maturity Score

This will be based on the frameworks the organization is following, like NIST Cybersecurity Framework (CSF), COBIT etc. Demonstrating progress based on maturity levels should be a key requirement for your IT GRC program.

Policy Related Metrics

These metrics provide insights into the effectiveness of your policies and can include metrics like:

  1. # Policy Exceptions and/or violations
  2. Avg Duration of Policy Exceptions
  3. # of Redundant Controls

Risk Metrics

This is a very broad topic and should be based on organizational context. Organizations can also look at frameworks like FAIR for adopting a quantitative approach to cyber risk management. Here are some generic metrics organizations can consider:

  1. Risk Assessment Frequency
  2. Risk Tolerance or Risk Appetite (in $ value if possible)
  3. Residual Risk / Risk Tolerance Level
  4. # of Open Critical / High findings (via Risk Assessment)
  5. Average Time to Remediate Risk

 


Audit & Compliance

Audit-related issues grab attention quickly. These are some of the metrics which can help you track your audit program (monthly / quarterly / annual):

  1. # Critical or High Audit Findings
  2. Audit Exceptions Index (this can be calculated as Audit Exceptions / Audit Findings; see the sketch after this list)
  3. # Control Test Failures (by Criticality)
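A minimal sketch of how these audit metrics, including the Audit Exceptions Index, might be computed from a list of findings is shown below. The findings and their field names are hypothetical; in practice they would come from your IT GRC tool.

```python
from collections import Counter

# Hypothetical audit findings for illustration only.
findings = [
    {"id": "A-01", "criticality": "High", "exception_granted": False, "control_test_failed": True},
    {"id": "A-02", "criticality": "Low", "exception_granted": True, "control_test_failed": False},
    {"id": "A-03", "criticality": "Critical", "exception_granted": False, "control_test_failed": True},
]

critical_or_high = sum(1 for f in findings if f["criticality"] in ("Critical", "High"))
exceptions = sum(1 for f in findings if f["exception_granted"])
audit_exceptions_index = exceptions / len(findings) if findings else 0.0
control_test_failures = Counter(f["criticality"] for f in findings if f["control_test_failed"])

print(critical_or_high, round(audit_exceptions_index, 2), dict(control_test_failures))
# 2 0.33 {'High': 1, 'Critical': 1}
```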

Read more: (TOP 6 VENDORS IN IT GOVERNANCE, RISK AND COMPLIANCE (IT GRC) MARKET AT RSAC 2017)
 

Incidents Metrics

Here’s a short list of key metrics which you can consider to monitor your incident management program:

  1. Incident Cost or Loss (brand impact)
  2. Critical or High Incidents Frequency
  3. Number of Incidents by Category (e.g.: Malware, Data Loss, Downtime etc.)

This is a short list of metrics; help us expand it by listing your favorite metrics in the comments section.

Read more…

Top 6 Metrics for your Data Loss Prevention Program

This blog lists out 6 key metrics to measure the maturity and effectiveness of your Data Loss Prevention (DLP) program. All the metrics are operational and can be measured quantitatively to help you fine-tune your DLP program.

 

  • Number of policy exceptions granted for any defined time period:

This is the number of exceptions granted over a defined time period. Exceptions are temporary permissions granted on a case-by-case basis. If exceptions are not tracked and documented, they could result in potential vulnerabilities open to exploitation. Ideally, the number of exceptions for a defined time period should remain as low as possible.

 

  • Number of False positives generated for any defined time period:

One of the major challenges in a DLP program is dealing with false positives. Any mature DLP program within an organisation will try to reduce false positives to a near-zero value. This metric is a very good indicator of your data classification effectiveness, DLP rule-set effectiveness etc.
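A minimal sketch of tracking this metric from analyst dispositions of DLP alerts is shown below. The disposition labels are assumptions and would map to whatever your DLP console actually records.

```python
from collections import Counter

def false_positive_rate(dispositions):
    """Share of DLP alerts closed as false positives in a reporting period.

    `dispositions` is a list of analyst verdicts, e.g. "false_positive",
    "confirmed_incident", "policy_exception" (labels are illustrative).
    """
    if not dispositions:
        return 0.0
    counts = Counter(dispositions)
    return counts["false_positive"] / len(dispositions)

# Example: 120 alerts this month, 90 closed as false positives -> 0.75
sample = ["false_positive"] * 90 + ["confirmed_incident"] * 25 + ["policy_exception"] * 5
print(round(false_positive_rate(sample), 2))  # 0.75
```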

 

  • Mean time to respond to any DLP alerts:

This is the mean time to respond and initiate action on DLP alerts regarding a possible data exfiltration attempt. This metric is important because most DLP implementations are alert-only and aren’t put into blocking mode due to high false positives. DLP alerts are among the most significant security events; if not prioritised, they can result in a major data breach. DLP alerts can uncover malicious insider attacks and advanced persistent threats.

 

  • Number of un-managed devices in your network handling sensitive data:

This is the number of unmanaged devices which process and store sensitive data. These could be file shares, endpoints, servers etc. Each of these devices is a potential egress point for sensitive data. A good DLP program will have all devices that handle sensitive data managed using the DLP tool.

  • Number of Databases not yet fingerprinted:

Database fingerprinting is one of the key methods which any modern Data Loss Prevention tool uses to protect your sensitive data against possible leakage. Ideally, all the databases holding sensitive data should be fingerprinted and available to the DLP tool. This metric gives an indication of the risks associated with databases which are yet to be fingerprinted.

 

  • Number of Databases and data-resident devices not yet classified:

The first step in any Data Loss Prevention program is data classification. Data classification is done to identify sensitive data wherever it resides. It is imperative to classify databases and other data-resident devices so that effective controls can be applied to them. If you are blind to your sensitive data sources, your DLP is already a failure. This metric indicates the number of databases, devices, endpoints and file shares which are still in your blind spots.

 

Do let me know if you want us to add or modify any of the listed metrics. Check out the Data Loss Prevention market within FireCompass to get more information on these markets.

Read more…

Top Metrics to manage your SIEM Program

The SIEM tool is among the most complex security tools to manage and operate. Here are the key parameters which you can track to make your SIEM tool more effective:

 

  1. Percentage reduction in False Positives/Negatives over a specified period of time:

This metric tracks the maturity and effectiveness of the SIEM tool’s rule sets. A SIEM rule set which is not properly defined can throw up a lot of alerts in a day, which overloads the resources available to analyse them. Fine-tuning rule sets can reduce this number drastically and help you focus your resources on more genuine alerts.

 

  2. Number of Redundant/Outdated SIEM rule sets:

SIEM rule sets are continuously updated with new rules. Over a period of time, some rules become redundant and obsolete. Redundant SIEM rule sets add management overhead and also pose difficulty in auditing. This can also be a security risk for the organisation. This metric is tracked to optimise the SIEM rule set.

 

  3. Ratio of Alerts triggered to Alerts remediated:

A mature SIEM program will generate only high-fidelity alerts. If a SIEM tool is generating thousands of alerts every day with lots of false positives, then it probably needs fine-tuning. This metric gives you an idea of your organisation’s risk score. Ideally, all the alerts generated by the SIEM should be looked into by analysts in a timely manner; alerts triggered by the SIEM solution, if not followed up and remediated on time, can render the SIEM program useless.
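A minimal sketch of how the reduction percentage (metric 1 above) and this triggered-to-remediated ratio might be calculated is shown below; the alert counts used in the example are illustrative.

```python
def percent_reduction(previous: int, current: int) -> float:
    """Metric 1: percentage reduction in false positives between two periods."""
    if previous == 0:
        return 0.0
    return 100 * (previous - current) / previous

def triggered_to_remediated_ratio(triggered: int, remediated: int) -> float:
    """Metric 3: ratio of alerts triggered to alerts remediated."""
    return triggered / remediated if remediated else float("inf")

# Illustrative numbers: false positives fell from 4,000 to 2,500 after tuning,
# and 1,200 alerts were triggered of which 1,000 were remediated.
print(round(percent_reduction(4000, 2500), 1))              # 37.5
print(round(triggered_to_remediated_ratio(1200, 1000), 2))  # 1.2
```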

 

  4. Number of undocumented SIEM rules:

It is of utmost importance that all SIEM rules are documented properly for audits. Ideally, the number of undocumented SIEM rules should be zero.

 

  5. Mean time to respond to security incidents:

The time interval between when an alert is generated and when the first response to it is initiated. This time should be as short as possible.

 

  6. Number of open incidents related to your critical assets (devices, systems, applications and users):

SIEM tools can classify alerts and incidents with respect to their criticality. If an alert is raised and the device, user, endpoint or application in question handles a critical business function or data, then it should be remediated on a priority basis. This metric counts the incidents that are critical in nature. Ideally, it should be zero, as open critical incidents leave your organisation vulnerable to severe disruptions or data breach incidents.

 

Check out the Security Information and Event Management (SIEM) market within Product Comparison Platform to get more information on these markets.

Read more…

50 Emerging IT Security Vendors To Look Out For In 2017

We have completed our selection of the final list of 50 emerging IT security vendors to look out for in 2017, out of 1500+ vendors globally. Believe me, this was not easy, and we don’t claim this is an exhaustive list, as it probably never will be; we might have missed some products. Still, we did our best to give you the top guns who are uniquely innovative.

 

Emerging IT Security Vendors:

Here is the list of Top 50 Emerging IT Security vendors to watch out for: 

Acalvio provides Advanced Threat Defense (ATD) solutions to detect, engage and respond to malicious activity inside the perimeter. The solutions are anchored on patented innovations in Deception and Data Science. This enables a DevOps approach to ATD, enabling ease of deployment, monitoring and management. Acalvio enriches its threat intelligence by data obtained from internal and partner ecosystems, enabling customers to benefit from defense in depth, reduce false positives, and derive actionable intelligence for remediation.

 

Anomali delivers earlier detection and identification of adversaries in your organization’s network by making it possible to correlate tens of millions of threat indicators against your real-time network activity logs and up to a year or more of forensic log data. Its approach enables detection at every point along the kill chain, making it possible to mitigate threats before material damage to your organization has occurred. They have offerings like STAXX (Free), ThreatStream and Anomali Enterprise.

Arxan is the world’s most comprehensive enterprise solution for application protection, period. Specializing in Mobile and IoT, Arxan protects sensitive data, prevents copying, tampering, unauthorized access and modifications to applications. It also blocks the insertion of malicious code and determines whether or not environments are safe for running mobile apps.


Baffle™ Encryption as a Service (end-to-end encryption for the sensitive data in your database with no risk of breach): Baffle addresses the insider threat by providing an easy way to keep data encrypted on database servers. This solution protects data irrespective of whether the data is on disk, in memory, or being processed in the database. Baffle is pioneering a solution that makes data breaches irrelevant by keeping data encrypted from production through processing.

BigID is transforming enterprise protection and privacy of personal data. Organizations are facing record breaches of personal information and proliferating global privacy regulations with fines reaching 4% of annual revenue. Today enterprises lack dedicated purpose-built technology to help them track and govern their customer data. By bringing data science to data privacy, it aims to give enterprises the software to safeguard and steward the most important asset organizations manage: their customer data.

BluVector is a cyber-threat detection and hunting platform that defends enterprises against evolving security threats. Leveraging patented machine learning technology and based upon years of malware analysis and classification, BluVector delivers fast, highly scalable, and integrated detection of malicious software targeting enterprise networks to help security teams stay ahead of advanced threats and protect against data breaches and theft.

Cato Management Application enables full traffic visibility for the entire organizational network and a way to manage a unified policy across all users, locations, data, and applications (both internal and Internet/Cloud-based). The Cato Cloud environment is managed by Cato’s global Network and Security Operations Center, manned by a team of network and security experts to ensure maximum up-time, optimal performance, and highest level of security.

Cavirin provides security and compliance across physical, public, and hybrid clouds, supporting AWS, Microsoft Azure, Google Cloud Platform, VMware, KVM, and Docker. It has capabilities like Continuous Visibility Extended to the Cloud, Automated Analysis and Reporting, Cloud-Agnostic Security & Continuous Security Compliance etc.

Centrify is the next generation enterprise security platform, built to protect against the leading point of attack for cyber threats & data breaches — compromised credentials. It protects against the leading point of attack used in data breaches — the password. It protects end users and privileged users by stopping the breach at multiple points in the cyber threat chain and secures access to apps and infrastructure across your boundary-less hybrid enterprise through the power of identity services.


 

Claroty discovers the most granular OT network elements, extracts the critical information, and distils it into actionable insights needed to secure and optimize complex industrial control environments. Claroty provides a clear view of each site’s control assets, and displays real-time status.  Claroty provides the deepest and broadest visibility across complex multi-vendor OT environments. It uncovers hidden issues and provides real-time monitoring of critical control systems.

Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyberattacks, heralding the new era of self-protecting software. Contrast’s patented deep security instrumentation is the breakthrough technology that enables highly accurate assessment and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts.

CryptoMove is a fundamental innovation that protects data with continuous movement. As CryptoMove moves data, distributed and decentralized CryptoMove nodes perform dynamic mutation, fragmentation, distribution, and re-encryption with any algorithm. Their solution’s key value offerings are an active defense that fights back (against integrity attacks, data destruction & ransomware, and data recon & exfiltration) and future-proof software-defined secure storage (encryption agnostic, re-encryption, security orchestration etc.).

Cybellum’s Zero-Day Prevention Platform™ is easily deployed with no need to configure learning algorithms prior to the set-up. Their platform gives fully automatic forensics and visibility into each incident without the need for cyber experts to operate the platform. Deterministically tackling the cause of zero-days gives you a real solution for known and unknown threats in your organization. Unlike Cybellum, approaches that rely on heuristic algorithms are always prone to error and cannot achieve true security; their vast daily alerts and false positives consume a lot of management resources.

Cyence brings together data science, cybersecurity, and economics to build a unique analytics platform that quantifies the financial impact of cyber risk. It is used by leaders across the insurance industry to prospect and select risks, assess and price risks, manage risk portfolios and accumulations, and bring new insurance products to market.

 


Cymmetria‘s  MazeRunner platform lets you dominate an attacker’s movements from the very beginning – and lead them to a monitored deception network. MazeRunner shifts the balance of power to the defender’s side. It intercepts attackers during the reconnaissance phase, when they have no knowledge of the network. The hackers are led through a carefully planned path toward a controlled location. At this point, believing the target is real, the attacker is revealed and their tools confiscated.

 

 

Dark Cubed helps companies save money and improve security by reducing complexity, designing new workflows, and improving data quality. It delivers enterprise-grade capability without impossible investment or armies of analysts. The Dark Cubed Cyber Security Platform demonstrates their commitment to meeting their customers’ needs, wherever they are. Whether you need a product that can be deployed physically, virtually or in the cloud, they have the solution.

Demisto Enterprise 2.0 is the industry’s first comprehensive incident management platform to offer integrated threat intelligence and security orchestration. The new capabilities enable enterprises to integrate leading threat feeds with it to manage indicators and automate threat hunting operations, saving time and significantly reducing the risk of exposure. It gives unprecedented insight into, and resolution of, complex incidents for the front-line teams performing security incident response and protecting their companies from cyber-attacks.

Powered by homomorphic encryption, EN|VEIL’s scalable framework lets enterprises operate on data (query/analytics) without ever revealing the content of the interaction, the results, or the data itself. They won 2nd place in the RSA Conference 2017 Innovation Sandbox contest.

The Evident.io security platform gives you a bird’s-eye view of your AWS infrastructure, helping ensure you are delivering a secure and solid service to your customers. It provides continuous security & compliance for your public cloud, with capabilities like security & compliance for AWS, a design built for modern cloud environments, continuous monitoring etc.

FinalCode makes implementing enterprise-grade file encryption and granular usage control easy and manageable, in a way that provides persistent protection of files wherever they go. By providing file security management, not file storage, distribution, or content management, FinalCode allows for rapid and flexible deployment. This patented approach preserves user workflows, file storage and collaboration platform investments, while protecting files across all communication channels: trusted, untrusted, private, or public.

Fugue is an infrastructure-level cloud operating system. It builds, operates, and terminates cloud infrastructure and services and automates the continuous enforcement of declared infrastructure configurations. Fugue completes the DevOps workflow by automating cloud lifecycle management via enforced and versionable infrastructure as code. Fugue is a single source of truth and trust for the cloud. Fugue removes the complexity and undifferentiated burden of configuring and maintaining cloud infrastructure, allowing you and your team to focus on creating value with your applications.

GreatHorn is built on a foundation of machine learning, automation, and cloud-native technology; it deploys in minutes, reducing risk and simplifying compliance through a combination of real-time monitoring and policy-driven response. They have offerings like Inbound Email Security, Messaging Security & GH Threat Platform.


GuardiCore is specially designed for today’s software-defined and virtualized data center and clouds, providing unparalleled visibility, active breach detection and real-time response. Its lightweight architecture scales easily to support the performance requirements of high traffic data center environments. A unique combination of threat deception, process-level visibility, semantics-based analysis, and automated response engages, investigates, and then thwarts confirmed attacks with pin-point accuracy.


Hexadite is a Cyber analyst thinking at the speed of automation. Modelled after the investigative and decision-making skills of top cyber analysts and driven by artificial intelligence, Hexadite Automated Incident Response Solution (AIRS™) remediates threats and compresses weeks of work into minutes. With analysts, free to focus on the most advanced threats, Hexadite optimizes overtaxed security resources for increased productivity, reduced costs, and stronger overall security. Hexadite AIRS integrates with a full range of enterprise detection tools to investigate every alert your system receives.

Click Here to know more about Security Operations, Analytics and Reporting (SOAR) Market


 

Illusive networks is a cybersecurity company at the forefront of deception technology, the most effective protection against Advanced Attacks. illusive creates an alternate reality, transparently woven into your existing network. Attackers led into this reality will be instantly identified beyond all doubt, triggering a high-fidelity alert you can act upon.

 


Immunio is based on patented runtime self-protection technology that protects your web apps and your customers against application-layer attacks. When an attacker attempts to exploit your app, IMMUNIO collects and reports information about the attacker, the exploit attempt, and the code vulnerability. The attack is automatically prevented, and you have the information to stop it from ever happening again.


Intsights is an intelligence-driven security provider, established to meet the growing need for rapid, accurate cyber intelligence and incident mitigation. Their founders are veterans of elite military cybersecurity and intelligence units, where they acquired a deep understanding of how hackers think, collaborate and act. This is achieved through a subscription-based service which infiltrates the cyber threat underworld to detect and analyse planned or potential attacks and threats that are specific to their partners, and provides warning and customized insight concerning potential cyber-attacks, including recommended steps to avoid or withstand the attacks.


Kenna uses almost any vulnerability scanner you may have (Qualys, Nessus, Rapid7) and integrates it with over 8 threat feeds, giving you unparalleled insight into what you need to fix first. It’s like having a team of data scientists working on your behalf. Use the power of Kenna to correlate vulnerability scan data, real-time threat intelligence, and zero-day data into one easy-to-understand dashboard display. With less time spent on parsing scan results, integrating with threat intelligence, and creating reports, your InfoSec team can double their efficiency and productivity.

Nehemiah Security operates throughout an enterprise’s network to make security operations – and the business – run better. They have capabilities for detecting the most harmful exploits without any prior knowledge, reducing the time required to respond and remediate down to seconds, and unleashing artificial intelligence for continuous optimization and learning.

Sophisticated attackers can inflict damage without triggering your security mechanisms. By focusing on the behavior of humans, applications, and networks, PerimeterX catches real-time automated attacks with unparalleled accuracy. Their solution has key capabilities like detecting abnormal behavior, diagnosing a user as a human or malicious bot, and deployment in minutes.

Phantom reduces dwell times with automated detection and investigation, and reduces response times with playbooks that execute at machine speed. Integrate your existing security infrastructure so that each part actively participates in your defense strategy: improve security by reducing your Mean Time to Resolution (MTTR), marshal the full power of your security investment with defenses that operate in unison, and deploy apps developed by Phantom, the community, or your own team. Automate repetitive tasks to force-multiply your team’s efforts and better focus your attention on mission-critical decisions.


PhishMe Simulator embraces the concept of learning through doing.  It was never meant to be “computer-based training” like the traditional videos employees have to watch once a month or quarter. It is the leading provider of anti-phishing CBT and enjoys robust success globally… This capability is supported with flexible and effective analysis and reporting capabilities.

RedLock is a platform that provides the ease of use, visibility, continuous monitoring, and investigation tools that security and compliance teams need to do their jobs at SecDevOps speed. They have capabilities like frictionless Deployment, Instant Visibility, Continuous Monitoring, Easy Audits & Security Investigations and Unprecedented Due Diligence etc.

SafeBreach has a unique approach to offensive security – a fundamentally different platform that automates adversary breach methods across the entire kill chain, without impacting users or your infrastructure. It has capabilities like deploying simulators to “play the hacker”, orchestrating and executing breach scenarios, continuous validation, and quickly taking corrective action.

Silent Circle is a secure communications company offering mobile devices, software and applications, and communication management services to the enterprise. Silent Manager is a user-friendly, web-based service that manages the Silent Circle users, groups, plans, and devices in use across your enterprise with simple, zero-touch deployment. It can be used in conjunction with identity management systems to authorize a user’s account, or it can stand independently.

Sparkcognition is the world’s first Cognitive Security Analytics company, adding human intelligence at machine scale. It adds a cognitive layer to traditional security solutions, increasing the operational efficiency and knowledge retention of your incident response and security analyst teams. It identifies new attacks automatically; with over 45,000 zero-day attacks occurring every day, solutions that rely solely on signature matching are behind the times.

StackPath is the only web services platform built on security, with a fortified, machine learning core that aggregates, analyses, and syndicates real-time threat data both to and from each of their secure services. With StackPath, security is what’s built on, not bolted on. They have quite a few capabilities like Web Application Firewall, DDoS Mitigation, Infrastructure, and Compliance.


ThinAir is the industry’s first Data Defense and Intelligence Platform. On their platform, enterprises have unprecedented visibility, control, and insight into all the data in their organization. Sensitive data is protected from insider threats, malware, and even human error. They see everything and protect what matters. It automatically tags all your digital assets—no complex processes or end-user involvement required. The full spectrum of metadata feeds directly into the powerful ThinAir platform.

Topspin Security empowers your security professionals to go on the offensive against APT and other sophisticated network threats. Their solutions learn your network topography and sniff all egresses to keep ahead of attackers. Using their deep network insights to intelligently plant mini-traps (breadcrumbs), it identifies attacks early and diverts attackers to a decoy network. Then, they track Command and Control communications and catch attackers in the act.

Trusona solves the fundamental problem with the Internet: you don’t know who is on the other end. For this reason, Trusona identity-proofs Internet users to become TruUsers. Identity proofing is done one time. Then, on every use of Trusona, the user’s dynamic credentials and their patented anti-replay run behind the scenes to ensure the user is who they say they are.

UnifyID combines implicit authentication with machine learning to uniquely identify you on more than 500 websites and unlock a new generation of IoT devices, making remembering passwords a thing of the past. UnifyID is a service that can authenticate a user based on unique factors like the way you walk, type and sit. They won the Most Innovative Startup award at the RSA Conference 2017 Innovation Sandbox contest.

Uplevel applies advanced data science to aggregate and contextualize cybersecurity data from internal systems and external sources, extract meaningful insights and provide automation throughout the incident response lifecycle. They offer a sophisticated platform for informed response, with capabilities like managing incidents and threat intelligence, orchestrating workflows, and assessing and applying threat intelligence etc.


Vera (formerly Veradocs) enables businesses to easily secure and track any digital information across all platforms and devices. It has capabilities like Secure any file, on any device, Seamless user experience, Granular visibility and control, Military-grade file encryption, Real-time policy enforcement, and Centralized control and analytics.


Veridium offers an end-to-end, biometrics-based authentication solution for the enterprise. Everyone acknowledges that passwords are a weak link in enterprise security. You can lose them, share them, and crack them. Biometrics can strengthen legacy systems by adding an additional layer of security. With their technology, a company can deploy biometrics as a second factor or replace passwords altogether. Either way, you can now truly verify the identity of the end user. VeridiumID is a server-side protocol for biometric authentication that works in conjunction with a front-end mobile SDK that allows you to embed biometrics into your company’s mobile app.

Veriflow pioneered a new way for enterprises to model, manage and protect their networks from vulnerabilities and outages. Leveraging Veriflow’s patented continuous network verification technology, enterprises can now predict all possible network-wide behavior and mathematically verify availability and security, instead of waiting for users to experience outages or vulnerabilities to be exploited. Their solution has capabilities like Network Segmentation & Vulnerability Detection, Network Availability & Resilience, Continuous Compliance & Dynamic Mapping etc.


Votiro’s patented Advanced Content Disarm and Reconstruction (CDR) technology is a proactive, signature-less technology that targets the file formats most commonly exploited via spear phishing, other advanced persistent threats, and cyber-attacks. Even analyst firms, including Gartner, state that organizations will increasingly need to add CDR technology to their cyber security protection to cope with today’s ever-rising sandbox evasion techniques.


vThreat helps companies verify the efficacy of the three pillars of cybersecurity: people, process, and products. Their solutions imitate the techniques, tactics, and procedures that real-world attackers use, such as: phishing, lateral movement, data exfiltration, and malware distribution. Its 100% cloud-based solution makes it easy to verify your security posture in seconds.

The Zentera CoIP® solution directly addresses the security and networking needs of the multi-cloud market. CoIP’s security capabilities are deeply integrated with its virtual overlay network, accelerating productivity and business agility. CoIP works with any transport in any environment, does not interfere with existing infrastructure, and can be up and running in less than a day. The company is a Red Herring Top 100 winner based in Silicon Valley, and offers CoIP through select partners.

Zingbox leads a new generation of cybersecurity solutions focused on service protection with IoT Guardian: the industry’s first offering that uses Deep Learning algorithms to discern each device’s unique personality and enforce acceptable behavior. IoT Guardian’s self-learning approach continually builds on previous knowledge to discover, detect, and defend critical IoT services and data while avoiding false positives with 99.9 percent accuracy. It works for any IoT device, models trusted behavior, and ensures business continuity.

360 Security provides 360° of protection, backed by a leading antivirus engine. Their intelligent boost and clean technology keeps your device junk-free and fast. They provide capabilities like real-time protection at all times, impossibly fast smartphone acceleration, and keeping your device spotless, like it’s still new.

 

 

1000+ Products (Product Comparison Platform):

It is the platform for simplifying your IT-security buying process. The Product Comparison Platform currently has 30+ IT security markets and 700+ IT-security products listed. With PCP, you can perform:

  • Benchmarking & Product Portfolio Management
  • Product discovery and comparison, Fitment
  • RFP and Product Evaluation
Read more…