CISO Platform's Posts (224)

This talk will cover the concept of misusing hardware (the x86 translation lookaside buffer) to provide code hiding, how the evolution of the Intel x86 architecture has rendered previous techniques obsolete, and new techniques for performing TLB splitting on modern hardware. After the requisite background is provided, the talk will move to the new research: the author's method for splitting a TLB on Core i-series and newer processors, and how it can again be used for defensive purposes (MoRE code-injection detection) and offensive ones (EPT Shadow Walker rootkit). This talk will be very high-level but aims to convey the complexities of the hardware and the attack vectors possible at the lowest levels of an organization's IT infrastructure.

(Read more:  Technology/Solution Guide for Single Sign-On)

(Read more: CISO Guide for Denial-of-Service (DoS) Security)

Read more…

6 Key Principles for Creating a Secure Cloud

Securing a cloud environment both requires and enables a new approach to security: holistic Security Intelligence. Many organizations have dozens of different point products to address security concerns. For example, they may have a firewall from one vendor, identity management from another, and application scanning from a third. This creates a siloed approach to security. However, as attacks become more complex and sophisticated, it has become a priority to look across all of these different products in order to identify and respond to threats. By reducing the number of point products in an environment and adopting a unified approach, organizations gain better insight into unknown threats while also managing ongoing security risks.

Whether deploying a traditional data center, or a cloud, organizations must protect the infrastructure and applications while monitoring and controlling access to all resources. This security must be accomplished in a way that meets industry regulatory and compliance standards. Organizations must be able to protect against both known and unknown threats across all of these elements of the computing environment.

(Read more:  5 Best Practices to secure your Big Data Implementation)


6 Key Principles for Creating a Secure Cloud

  • Create a Secure Infrastructure. Creating a secure infrastructure means that the underlying systems architecture must be protected against traditional vulnerabilities such as network threats and hypervisor vulnerabilities. In addition, virtual machines must be securely isolated from each other and patches must be kept up to date.
  • Build Security into Applications Development. Developers of web and cloud based applications often lack deep expertise in security and therefore do not appreciate the vulnerabilities that exist with applications. Securing applications requires building application scanning into the development process combined with a patch management plan.
  • Establish an Automated and Unified Approach to Identity Management. With the introduction of cloud computing, more employees and external users need access to a broad range of systems and services ranging from virtual desktops to public SaaS environments. All of this activity might take place in just a few minutes. A successful identity strategy gives administrators federated identity management and gives users Single Sign On (SSO) capabilities.
  • Keep Data Secure Regardless of the Deployment Model. A successful cloud data management strategy allows an organization to know where data is located and who has accessed it. Often this data is not static; it will change and move based on business transactions. In addition, data must remain secure whether it is being accessed in the office or from a mobile device. All of this data must be backed up in a reliable and secure manner.

  • Ensure Compliance within a Hybrid Computing Model. Compliance and regulatory requirements are quickly evolving, and organizations are struggling to stay current. Many industries require compliance with specific regulations related to the protection of customer and corporate data.

  • Prepare for Advanced Persistent Threats (APTs). APTs are ongoing slow attacks that masquerade as ordinary activity and are typically not identified by traditional security technology. These sophisticated threats are becoming commonplace. Companies need to be able to anticipate these threats so they can be stopped before they cause significant damage.

>> Download the Complete Report

(Read more:  7 Key Lessons from the LinkedIn Breach)

Read more…

5 Key Benefits of Source Code Analysis

Static Code Analysis: Binary vs. Source

Static Code Analysis is the technique of automatically analyzing an application’s source and binary code to find security vulnerabilities. According to Gartner’s 2011 Magic Quadrant for Static Application Security Testing (SAST), “SAST should be considered a mandatory requirement for all IT organizations that develop or procure applications”. In fact, recent years have seen a shift in application security, with code analysis becoming a standard method of introducing secure software development and gauging inherent software risk.

Two categories exist in this realm:

1. Binary or byte-code analysis (BCA), which analyzes the binary/byte code created by the compiler.

2. Source code analysis (SCA), which analyzes the actual source code of the program without requiring all code to be retrieved for compilation.


Both offerings promise to deliver security and to meet the requirement of incorporating security into the software development lifecycle (SDLC). Faced with the BCA vs. SCA dilemma, which should you choose?

 

(Read more: Checklist to Evaluate A Cloud Based WAF Vendor)

The Inherent Flaws of Binary Code Analysis (BCA)

On the one hand, BCA saves some code analysis effort, since the compiler automates parts of the work, such as resolving code symbols. Ironically, however, it is precisely this off-loading to the compiler that presents BCA’s fundamental flaw. To use BCA, all code must be compiled before it is scanned. This raises a plethora of problems that push back the SDLC process and give security a bad, nagging name.

Issues include:

  • Vulnerabilities exposed too late in the game. Since all the code must be compiled prior to the scan, security gets pushed to a relatively late stage in the SDLC. At this point, the scan usually finds too many vulnerabilities to handle, there is no time to fix them, and sales and marketing teams press to release the product. As a result, these vulnerabilities – albeit uncovered – are pushed to release. In fact, actual vulnerabilities have slipped through the scanning process in real-world projects, as occurred in a Linux OS distribution release.

 

  • Compiler optimization hurts the accuracy of the results. One of the many roles compilers fulfill is optimizing code for efficiency and size. However, this optimization may come at the expense of result accuracy. For example, compilers might remove so-called “irrelevant” lines, aka dead code: lines that developers insert as part of their debugging process. While the compiler removes these code snippets, they can contain code that breaches corporate standards (a sketch follows this list).

 

  • PaaS providers incapable of retrieving the byte-code. In a cloud computing scenario, the PaaS provider is responsible for validation, proprietary compilation, and execution of the programs. The customer’s code often has no retrievable manifestation as byte-code or binary, leaving nothing for a binary-level scan to work on.
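
To make the dead-code point concrete, here is a minimal, hypothetical Java sketch (the class and method names are invented for illustration). Because DEBUG is a compile-time constant, javac drops the branch from the emitted byte-code, so a binary-level scan never sees it; a source-level scan still flags the sensitive logging:

    public class PaymentService {
        // Compile-time constant: javac removes the if-branch below from
        // the byte-code entirely when DEBUG is false.
        private static final boolean DEBUG = false;

        void charge(String cardNumber, int amountCents) {
            if (DEBUG) {
                // Dead code after compilation, but a corporate-standards
                // violation (card data in a plain log) that only a scan of
                // the source can report.
                System.out.println("charging card " + cardNumber);
            }
            // ... actual charging logic elided ...
        }
    }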

 (Read more:  Checklist to Evaluate a DLP Provider)

Benefits of Source Code Analysis (SCA) 

By scanning the source code itself, SCA can be integrated smoothly within the SDLC and provide near real-time feedback on the code and its security. Source code analysis compensates for BCA’s shortcomings and provides an efficient, workable alternative. How?

 

1. Scans Code Fragments and Non-Compiling Code 

An SCA tool is capable of scanning code fragments, regardless of compilation errors arising from syntactic or other issues. Both auditors and developers can scan incomplete code in the midst of the development process without having to achieve a build, ultimately allowing the discovery of vulnerabilities much earlier in the Software Development Life Cycle (SDLC).

 

2. Supports Cloud-Compiled Languages

New breeds of coding languages have emerged under cloud computing scenarios. In these cases, the developer codes in the PaaS provider’s language, while the PaaS provider is responsible for the validation, proprietary compilation, and execution of the programs. The code has no manifestation as byte-code or binary, so the analysis must be performed on the source code itself. The best-known example is the Force.com platform supplied by Salesforce.com, which is based on the server-side language Apex and the client-side language Visualforce. Only an SCA product can support this paradigm.

 

3. Assesses Security of Non-Linking Code

Where the code references infrastructure libraries whose source is missing, BCA tools immediately fail with the unfortunate “Missing Library” message. Days may be spent building stubs for these missing parts just to make the code compile – a lot of hard work without any added value.

An SCA product easily identifies vulnerabilities, such as SQL Injection, even when the actual library code of the executing SQL function call is missing.
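
As a hedged illustration, consider a fragment referencing a hypothetical internal library, com.example.legacy.Database, whose jar and source are unavailable (the import is deliberately unresolvable, to mirror the “Missing Library” scenario – exactly the stubbing work described above would be needed just to compile it). A source-level scan can nevertheless trace the tainted flow:

    // Hypothetical library: the unavailable jar/source blocks compilation,
    // and therefore any binary-level scan, until stubs are written.
    import com.example.legacy.Database;

    public class UserLookup {
        Object findUser(Database db, String userName) {
            // SQL Injection: untrusted userName is concatenated straight
            // into the query string that reaches the execute() sink.
            return db.execute("SELECT * FROM users WHERE name = '" + userName + "'");
        }
    }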

 

4. Compiler Agnostic 

In a multi-compiler environment – typical of code auditors and large corporations – SCA provides a single solution that fits all. This stands in stark contrast to BCA, which must support an endless number of compilers and versions. The reason? Each compiler transforms source code into its own version of binary/byte code, forcing the BCA tool to read, understand, and analyze the different outputs of different compilers. Since an SCA tool runs on the code itself, not on post-compilation output, it provides a single standard regardless of compiler version or compiler upgrades.

 

5. Platform Agnostic 

Similarly, when integrating SCA into the SDLC, the exact same tool can be used to scan the code anywhere, regardless of the operating system or development environment. This eliminates the inherent redundancy of BCA, which must deliver separate scanning tools for each platform.

Disclaimer: This report is from Checkmarx and if you want more details or want to connect you can write to contact@cisoplatform.com

(Read more: Checklist for PCI DSS Implementation & Certification)

Read more…

The AppSec How-To: Visualizing and Effectively Remediating Your Vulnerabilities: The biggest challenge when working with Source Code Analysis (SCA) tools is how to effectively prioritize and fix the numerous results. Developers are quickly overwhelmed trying to analyze security reports containing results that are presented independently from one another.

Take, for example, WebGoat – OWASP’s deliberately insecure Web application used as a test-bed for security training – which has more than 100 Cross-Site Scripting (XSS) flaws. Assuming each vulnerability takes 30 minutes to fix and another 30 minutes to validate, that is one hour apiece – over 100 hours, or nearly three weeks of full-time work. This turnaround is too long and costly – and even impractical – for large projects containing thousands of lines of code, or for environments with quick development cycles such as DevOps. With such a large number of vulnerabilities, it should come as no surprise that vulnerable and unfixed code gets released.


In this article, we show how visual insights into the vulnerability – from origin to impact – can help developers to:

  • Picture the security state of their code
  • View the effect of fixing vulnerabilities in different locations
  • Automatically narrow down the results of extra-large code bases to a manageable amount

In fact, using this method we were able to cut the number of fix locations for WebGoat’s XSS vulnerabilities down to only 16 – even without looking at the code.

(Read more:  Annual Survey on Cloud Adoption Status Across Industry Verticals)

A Picture is Worth a Thousand LoC: Visualizing Your Vulnerabilities

“Know your Enemy” is the mantra of any security professional. It defines what they’re up against, how to face it, and what tactics to employ. It sets the groundwork for all future outcomes. The same goes for developers – and the enemy is vulnerable code. In the practice of secure coding, developers should receive an overview of the security posture of their code, the number of vulnerabilities contained within it, and how they manifest themselves to the point of exploitation. This is where the graph view comes in.

The Basics: Data Flow

A data flow is best described as a visualization of the code’s path from the source of the vulnerability to the point where it can be exploited (aka the “sink”). Each step in the flow is reflected as a node in the graph:

[Figure: a single data flow from source to sink, each step a node]
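
As a concrete illustration, here is a minimal, hypothetical servlet fragment (assuming the standard Servlet API) in which each commented line would become a node in such a graph:

    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GreetingServlet {
        void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String name = req.getParameter("name"); // node 1: the source (untrusted input)
            String greeting = "Hello, " + name;     // node 2: taint propagates by concatenation
            resp.getWriter().println(greeting);     // node 3: the sink (reflected XSS)
        }
    }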

Traditionally, each vulnerability result has a single data flow, independent of other findings. Accordingly, with numerous results – say, 14 different vulnerability findings – we get a graph with 14 separate flows:

[Figure: 14 separate, independent data flows]

Obviously, such a graph does not help much in understanding how to prioritize fixes. What developers really need is to understand the relationships between the different flows and simplify the resulting graph as much as possible.

(Read more:  Annual Survey on Security Budget Analysis Across Industry Verticals)

Improving Visibility: The Graph View

The graph view takes those separate data flows and depicts them in a way that easily presents the relationships between flows.


Building the graph is a two-step process:

  1. Combine the same node appearing in multiple paths. In other words, identify and merge those pieces of code that are actually shared by the same data flows. Taking the 14-path graph from above, consider the case where the 5 leftmost sources share the same node. This node, in turn, shares a node closer to the sink with another node on its level:

    [Figure: shared nodes merged across flows]
  2. Simplify the graph to reduce the number of data-flow levels. This can be done by combining similar-looking data flows into a single node. Those familiar with graph theory might recognize by now that we’re building a homeomorph of the original graph, i.e., a graph with an identical structure but a simplified representation. We do this by first grouping the nodes:

[Figure: grouping similar-looking nodes]

As we continue this process the resulting graph eventually looks like this:

[Figure: the final simplified flow graph]

With this simplified graph flow we now have a visual map of the security of the code. Moving away from looking at code bits and seemingly disparate code flaws, the graph flow actually allows us to see the correlation between vulnerabilities. Furthermore, a quick glance at the graph provides a deep understanding of the effect that a certain vulnerability has on the rest of the code – a relationship much too intricate to grasp through a code review.
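
To make step 1 above concrete in code, here is a minimal sketch (names invented; it assumes each data flow arrives as an ordered list of node labels) that merges identical nodes shared by multiple flows into a single graph node:

    import java.util.*;

    public class FlowGraphBuilder {
        // Merge identical node labels across flows, so a code location shared
        // by several flows appears once, with edges contributed by each flow.
        static Map<String, Set<String>> merge(List<List<String>> flows) {
            Map<String, Set<String>> adjacency = new LinkedHashMap<>();
            for (List<String> flow : flows) {
                for (int i = 0; i < flow.size() - 1; i++) {
                    adjacency.computeIfAbsent(flow.get(i), k -> new LinkedHashSet<>())
                             .add(flow.get(i + 1));
                }
            }
            return adjacency;
        }

        public static void main(String[] args) {
            // Two flows sharing "concat#18" collapse onto one shared node.
            List<List<String>> flows = List.of(
                    List.of("getParameter#12", "concat#18", "println#25"),
                    List.of("getHeader#31", "concat#18", "println#25"));
            System.out.println(merge(flows));
        }
    }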

The Butterfly Effect: Considering Fixing Scenarios

What if you fix the code in a certain location? How will that affect the rest of the code? What about another location? With the graph view in hand, we can consider all these scenarios, see the overall effect quickly, and decide for ourselves which route to take.
Let’s look again at the simplified view (the homeomorph) of our original example. A fix of the single node pointed to by the arrow results in fixing two separate paths:

[Figure: fixing the indicated node repairs two full paths]

On the other hand, the following graph shows what happens if we try to fix a different node. In this case, fixing the node pointed to by the arrow leads only to a partial fix of the path. The reason is that the bottom “branch” of that code is also affected by other nodes that are not yet fixed.

[Figure: fixing this node repairs the path only partially]

We can continue to interact with the graph and consider different “what-if” scenarios. Not only do these show the ripple effect of fixing a certain vulnerability; after spending some time in this habit, we come to understand intuitively the impact of certain vulnerabilities and invariably start to recognize our own “best places” to fix.

Only the Best: Optimizing Vulnerability Fixing

Ideally, we’d also like to accurately and automatically pinpoint those “best-fix” locations on the graph. Once again, this calls for the adoption of graph-theory concepts. In particular, the “Max-Flow Min-Cut” theorem helps us calculate the smallest number of node updates that fix the highest number of flows. Applying this calculation to our example graph, we can visually locate the 3 nodes that, if fixed, amount to rectifying the complete flow graph.

This is remarkable considering that we started with a 14-path graph comprising some 70 nodes.
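
A compact sketch of the underlying idea (ours, not the product’s actual algorithm): give every edge unit capacity, run a max flow from a super-source feeding all vulnerability origins to the sink, and read the min cut off the saturated edges; fixing a node rather than an edge can be modeled by the standard node-splitting reduction. Assuming a small adjacency-matrix graph:

    import java.util.ArrayDeque;
    import java.util.Arrays;
    import java.util.Queue;

    public class MinCutFixes {
        // Edmonds-Karp max flow. With unit capacities the max-flow value
        // equals the minimum number of edge "fixes" needed to disconnect
        // every vulnerability source from the sink (Max-Flow Min-Cut).
        static int maxFlow(int[][] cap, int s, int t) {
            int n = cap.length, flow = 0;
            while (true) {
                int[] parent = new int[n];
                Arrays.fill(parent, -1);
                parent[s] = s;
                Queue<Integer> q = new ArrayDeque<>();
                q.add(s);
                while (!q.isEmpty() && parent[t] == -1) {     // BFS for an augmenting path
                    int u = q.poll();
                    for (int v = 0; v < n; v++)
                        if (parent[v] == -1 && cap[u][v] > 0) { parent[v] = u; q.add(v); }
                }
                if (parent[t] == -1) return flow;             // no augmenting path left
                for (int v = t; v != s; v = parent[v]) {      // unit capacities: augment by 1
                    cap[parent[v]][v] -= 1;
                    cap[v][parent[v]] += 1;
                }
                flow++;
            }
        }

        public static void main(String[] args) {
            // Toy graph: node 0 = super-source, 4 = sink. Two flows converge
            // on shared node 3, so the min cut is the single edge 3 -> 4:
            // one well-placed fix covers both flows.
            int[][] cap = new int[5][5];
            cap[0][1] = 1; cap[0][2] = 1; // super-source feeds two origins
            cap[1][3] = 1; cap[2][3] = 1; // both flows reach shared node 3
            cap[3][4] = 1;                // shared edge into the sink
            System.out.println("minimum fixes = " + maxFlow(cap, 0, 4)); // prints 1
        }
    }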

(Read more: Security Technology Implementation Report- Annual CISO Survey)

Summary

Graph flows are a visually appealing way for developers and security professionals alike to fully comprehend the relationships between the different parts of the code and the propagation of a tainted piece of code to its sink.


The visualization of the code provides an interactive tool allowing the developer to proactively consider the effect of fixing various vulnerabilities at different places. Most importantly, the graph flow allows us to locate the best-fix locations in a quick, efficient and accurate manner.

Disclaimer: This report is from Checkmarx and if you want more details or want to connect you can write to contact@cisoplatform.com

Read more…

10 Steps to Secure Agile Development

In Agile’s fast-paced environment of frequent releases, security reviews and testing sound like an impediment to success. How can you keep up with Agile demands of continuous integration and continuous deployment without abandoning security best practices?

Companies have found the following ten practices helpful for achieving a holistic secure Software Development Life Cycle (SDLC) process in an Agile SaaS world. The approaches these companies take follow a basic philosophy: keep security as simple as possible and remove any unnecessary load from the development team.


(Read more:  APT Secrets that Vendors Don't Tell)

1 Be part of the process

Security requirements should be treated as additional development checkpoints: each benchmark needs to be achieved before proceeding to the next stage of the Agile process. For each step in Agile, associate a security milestone that must be achieved. Start as early as post-release planning by performing a high-level security design. This includes the following aspects:
- Security in code development. For example, inspect the planned application in terms of which APIs are going to be used.
- Security in technologies. Identify the technologies that are going to be used. For example, if system testing is performed within a Maven process, security tests should be integrated within that system.
- Security in features. For example, forecast any problems associated with regulations. Say tracking cookies are used within a product delivered to the UK; then prepare for compliance with UK privacy regulations.

2 Enforce your policy by using a security package API in each product

There are two aspects to this stage:


a. Use a security package such as OWASP’s Enterprise Security API (ESAPI)


ESAPI is a toolkit that enables developers to easily consume various utilities. It provides a variety of out-of-the-box utilities such as validators, encoders, encryptors, and randomizers. By using ESAPI, developers do not need to investigate best security practices or spend time researching correct implementation methods. Consider hashing as an example. Instead of relying on the developer to add a hash salt, the salt can already be implemented as part of the ESAPI configuration. The developer, in turn, is left simply to consume the provided API.
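
A brief sketch of what this looks like in practice (the method calls follow ESAPI’s documented Validator and Encryptor interfaces; the rule name, context strings, and configuration details are assumptions for illustration):

    import org.owasp.esapi.ESAPI;
    import org.owasp.esapi.errors.ValidationException;

    public class SignupHandler {
        String normalizeEmail(String rawInput) throws ValidationException {
            // "Email" names a validation rule configured centrally in
            // validation.properties; non-matching input is rejected here.
            return ESAPI.validator().getValidInput("signup.email", rawInput, "Email", 254, false);
        }

        String hashPassword(String password, String accountSalt) throws Exception {
            // Hash algorithm and iteration count come from the central ESAPI
            // configuration, so the developer never chooses crypto parameters.
            return ESAPI.encryptor().hash(password, accountSalt);
        }
    }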

Particular emphasis should be placed on validators, because these prevent the most common Web application vulnerabilities, such as SQLi and XSS. Each organization needs to evaluate where to integrate the validators. Some businesses may decide to apply validators at the controller level (e.g., on the Apache layer or within a Tomcat filter). Other companies prefer to integrate validators within the development code to test each input. While each company needs to decide the right strategy for itself, we have found that many companies choose to validate each input within their code. This decision is based on two main reasons:

  • All the regular expressions written to validate input can be constructed as simply as possible in order to avoid any performance issues. These regular expressions are in fact closer to business-oriented validation.
  • If a problem arises – or a specific validator needs to be changed – only the specific input needs to be changed. A higher-level validator, on the other hand, requires a whole QA process to verify the entire system.

One organization we worked with took code-level validation one step further. It implemented a validator that does not return the traditional true/false boolean values, but rather returns null if the input is invalid. In this manner, the security team was able to prevent developers from mistakenly using that same value later on in the code.
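
A minimal sketch of that null-returning pattern (names invented): invalid input yields null, so code that skips the validity check fails fast with a NullPointerException instead of silently carrying tainted data forward:

    import java.util.regex.Pattern;

    public final class StrictValidator {
        private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,32}$");

        // Returns the input itself when valid, null otherwise. Downstream
        // code that "forgets" the check trips immediately on the null.
        public static String username(String input) {
            return (input != null && USERNAME.matcher(input).matches()) ? input : null;
        }
    }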


b. Validate that the developers are using the right API


For each input, ensure that the developer uses the right validator as provided by the security team. This entails failing the security test if the developer chooses not to use the API. Enforcement can be achieved through source code analysis customized to the security team’s requirements.
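
One naive way to picture such enforcement (a toy sketch, not a real SCA product): flag any source line that reads request input without routing it through the approved validator, here the hypothetical StrictValidator from the sketch above:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class ApiUsageCheck {
        // Toy rule: a line that calls getParameter( must also mention the
        // approved StrictValidator API, or the security check fails.
        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Path.of(args[0]));
            for (int i = 0; i < lines.size(); i++) {
                String line = lines.get(i);
                if (line.contains("getParameter(") && !line.contains("StrictValidator.")) {
                    System.out.printf("%s:%d: input read without approved validator%n", args[0], i + 1);
                }
            }
        }
    }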

3 Integrate Source Code Analysis (SCA) within the native process of code management

Enforcing the security policy means that developers cannot proceed with the build process if the checked-in code does not comply with policy. To keep up with the fast-paced development environment, developers must be able to consume the policy without a long training period.
The way to address this challenge is by integrating SCA within the different stages of the development process. Particular aspects to pay attention to are:

  • Integrating the SCA within the build automation tool (such as Maven). Organizations typically run the SCA in two modes. The first is running the scan as an incremental test each time the developer performs a commit; this way, only the change between the last scan and the current one is checked. The second is running a full security scan within a full-system test, say during the nightly build. If the build fails, the developer has to fix the flaw before continuing with development.
  • Presenting SCA findings within the build management and Continuous Integration server (such as TeamCity). In case of an SCA alert, it is more efficient for the developer to click on the finding and dynamically identify the specific vulnerability.
  • Enhancing the SCA with a knowledge base for the developer. Just as with a compilation error, the developer needs to know how to fix the faulty code. For this, the SCA tool should also contain a knowledge base that describes the risk and gives proper remediation advice.

(Read more:  5 Security Trends from Defcon 2014 - The Largest Hacker Conference)

4 Break the build for any “high” or “medium” findings

Do not compromise security by releasing a product that contains any high or medium findings. This requires eliminating the flaw at the build stage: if the developer checks in even a few high-severity security bugs, the build breaks. Unless the vulnerabilities are fixed, the developer cannot build a package.

5 Use automation to collaborate with the security dynamic test

Dynamic testing within the product can be implemented through positive and negative unit tests.

  • Use positive testing to validate the input. For instance, a positive test validating input in the form of an email address will test that the characters “@” and “.” appear, but no other special characters.
  • Complement the positive test with a negative test. The negative test should “pick up” all input that does not conform to the positive test. In the above scenario, an email address embedding a SQL Injection payload will be caught by the negative test. Essentially, the complementary negative test acts as the dynamic test (see the sketch after this list).
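
A hedged JUnit-style sketch of such a pair (the validator, pattern, and payload are illustrative, not a complete email grammar):

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import java.util.regex.Pattern;
    import org.junit.Test;

    public class EmailValidationTest {
        // Deliberately simple positive grammar: letters, digits, and dots,
        // one '@', and a dotted domain; no other special characters.
        private static final Pattern EMAIL =
                Pattern.compile("^[A-Za-z0-9.]+@[A-Za-z0-9]+\\.[A-Za-z]{2,}$");

        private static boolean isValidEmail(String s) {
            return EMAIL.matcher(s).matches();
        }

        @Test
        public void positiveTestAcceptsWellFormedAddress() {
            assertTrue(isValidEmail("alice@example.com"));
        }

        @Test
        public void negativeTestRejectsSqlInjectionPayload() {
            // Anything outside the positive grammar, such as an embedded
            // SQL Injection attempt, must be rejected.
            assertFalse(isValidEmail("alice'; DROP TABLE users;--@example.com"));
        }
    }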

6 Run a penetration test

Engage both professionals and your customers as penetration testers:

  • Perform a penetration test of the final product through an external vendor. This includes an automated or manual test of the product once it is released in its Alpha stage.
  • Allow customers to run a penetration test, and work as a community to succeed. The organization must perform all the necessary steps to release a secure product. That said, many customers are themselves subject to regulation which requires running penetration tests on third-party products. The benefit to you? Customer confidence. This is particularly important for Software-as-a-Service (SaaS) products, whose success is built on the trust that customers place in their providers.


7 Engage technology leaders as security champions by showing them the value

Even with large security teams, developers inevitably outnumber security staff. To extend security’s outreach, engage the technology leaders and position them as security champions. Gaining such cooperation with R&D ensures that security comes up in each and every scrum meeting, even when the security team is not physically present.

8 Train developers on a regular basis

The point here is not necessarily to establish a formal training process where developers are sent to a Web application security course. There are other means for training, such as:

  • Providing developers with security knowledge. Enhancing the SCA with a knowledge base for a specific vulnerability, as recommended in one of the practices above, is part of this kind of training. By helping developers understand the risk and its mitigation, security awareness increases and developers start viewing security and code differently.
  • Being accessible to developers. Once security becomes an ingrained process, questions from the developers begin to pile up at the security team’s desk. The security team should have an open-door policy to address all the developers’ concerns.

9 Provide a collaboration platform for security discussions

This practice goes hand in hand with the previous one on training developers. The point is not only to accumulate or disseminate information related to security practices; this practice focuses on establishing a security collaboration platform with the intent of sharing information and raising discussions around security issues.

10 Start small but think big

Many of these practices – especially breaking the build for any “high” or “medium” finding – require the support of management and superiors. We recognize that gaining this type of trust is not an easy goal to achieve. Various companies have found the following steps helpful:

  • Take one small project and turn it into a success story. Listen to R&D during this process. Learn from mistakes and refine the process moving forward.
  • With one success story under your belt, move on to a new project. Continue refining and learning from mistakes. Create 2-3 success stories.
  • Review the security bugs that are returned by customers. Compare the number of vulnerabilities in one of these success stories with a project that does not follow the security practices. Show management how these vulnerabilities interfere with the normal delivery and maintenance of the product.
  • Progress to the big legacy project. At first, don’t break the build for security findings in this big project. It is enough at this stage to identify the gaps, close them, and create a program for how development will fix the flaws.

  • Fix the flaws on the legacy systems only after achieving confidence – that is, only after understanding how to correctly deliver the security package (such as ESAPI), how to inspect it correctly, and where to apply the validation without any impact on the product.

  • Proceed to the big project in the making. Naturally, this is the ultimate goal, and security should already be integrated within the Agile process. At this step, the validators should already be packaged and set within a single framework.

Disclaimer: This report is from Checkmarx and if you want more details or want to connect you can write to contact@cisoplatform.com

Read more: Technology/Solution Guide for Single Sign-On

Read more…

Your Guide to Multi-Layered Web Security

Why Read This Report

The data center perimeter is dead. But its memory lives on in the way many IT departments continue to secure their infrastructure. The meteoric rise of the Internet brought with it an ever-changing landscape of new attacks and completely disrupted organizations’ old models of guarding their IT infrastructure. Previously, information assets that needed protection all resided in a fortress that IT controlled, namely a secured data center. Attacks typically came from outside the data center’s four walls or from insiders abusing their privileges. Companies placed protections, such as firewalls, at the border crossings and guarded against inside attacks through strict roles and access privileges.

>> Download the Complimentary Guide

Websites and applications, however, increasingly live outside the data center, in the cloud. How can you protect a perimeter that no longer exists? First, you need to understand which of your assets are most at risk and determine your company’s tolerance for risk. Then you need to manage that risk by extending security controls to the cloud and by guarding against the types of attacks that occur over the Internet. This guide details the threats common to websites and web applications and what you can do to mitigate them.

>> Download the Complimentary Guide

Table Of Contents

  • A multi-layered approach to securing web applications
  • How to choose a solution
  • Common web application vulnerabilities
  • How to address common web application vulnerabilities

Read more…

Why do we need a common security technology evaluation framework?

Floating an RFP (Request for Proposal) or evaluating a new technology is a substantial effort for a CISO. Wading through the sea of data and marketing buzz to judge a vendor and its product is definitely not an easy task. We started creating frameworks for various security technologies to solve the following problems:

  • Creating a framework to evaluate a technology is a substantial effort.
  • There is tremendous duplication of effort in the industry, since everybody invents their own framework. There is scope for reuse and collaboration on the basic structure (at least).
  • Going through the sea of product information and marketing buzz is a massive effort.
  • There are too many "buyer's guides" in the market. It is time for an open, community-based, unbiased framework where anybody can provide feedback and incorporate their learning.

(Read more:  5 Best Practices to secure your Big Data Implementation)

What will be the process of building the framework?

As part of our initiative, we have created a common framework and then 20 specific frameworks for specific technologies. As an example, if you want to evaluate a DLP solution, you may use the framework and then build on top of it for your custom requirements. We intend to save you at least some of this effort.

Step 1: Release of the generic framework for community feedback. We shall finalize the overall structure in the next 15 days.

Step 2: Release of technology-specific frameworks (e.g., Web Application Firewall, DLP, etc.) for community feedback and extensive community sessions during the CISO Platform Annual Summit.

Step 3: Release of final technology frameworks for the members.

Community Feedback Process and Documentation Trail: Any member can comment on the framework using the commenting feature. Our analyst and advisory team shall take the final decision on incorporating such feedback. In any case, all feedback (whether accepted or not) shall remain published in the comment section for other members to use based on their own judgment/discretion.

(Read more:  How Should a CISO choose the right Anti-Malware Technology?)

Common Framework structure

In this article we introduce the structure of the Evaluation Framework. This is a generic framework; for each specific technology we shall add specific evaluation questions. It takes a top-down approach in which the high-level categories are the major areas of security product evaluation, namely Functionality, Commercial, Management and Support, Organization, and Reference. Each category is further divided into sub-categories.

Important Note: The following is just a structure. We will soon release 20 specific frameworks with a specific checklist and questionnaire for each sub-category.

[Figure: Common evaluation framework structure – categories and sub-categories]

>> Formal launch of the frameworks @ Annual Summit

The final draft of the above framework will be launched at the Annual Summit 2014 (20-21 Nov), where active discussions with CISOs and technology vendors will take place. Following that feedback, we will release the final checklists for the community.

We invite your review/feedback as comments below or via mail directly to 'pritha.aash@cisoplatform.com'.

(Read more:  7 Key Lessons from the LinkedIn Breach)

Read more…