The two cornerstones of next-generation cybersecurity (Part 1)

Applications, endpoints, networks, and servers will enforce security policies related to identity and data security.

So what should CISOs do to address the “shadow IT” dilemma? As IT loses control of some of its traditional assets, my suggestion is that CISOs double down on security controls and oversight for the things they still own. In my humble opinion, there are two key areas to focus on: sensitive data and identity. Everything else – applications, endpoints, networks, and servers – must kowtow to these two cornerstones and enforce specific data security and identity policies.

Allow me to be a bit more specific. I’ll focus on data security in this blog (Part 1) and then move on to identity in a future blog (Part 2).

As we’ve seen over the past few years, our cyber adversaries want to steal our data, whether it’s credit card information, emails, or intellectual property. Furthermore, we are moving toward an era where data privacy will become a much bigger deal, even here in the money-driven U.S. of A. To better mitigate risks, data security processes and technologies must improve in four areas (rough code sketches of each idea follow the list):

Sensitive data discovery and classification. We need more efficient tools to scan networks and discover anything that looks like regulated data, intellectual property, or company confidential information. Scanning and keyword-matching technologies have been relatively weak and cumbersome thus far. My guess is that big data analytics will be used to model sensitive data, assess files and unstructured data, classify data with a high degree of confidence, and add automation to help organizations with tedious data discovery and classification projects.
Sensitive data identity tagging. I’ve been squawking about the need for standards in this area for years. If a piece of data is classified as sensitive, it should be tagged as such so every host- and network-based security enforcement point knows it’s sensitive and can enforce security policies accordingly. Are you listening to this requirement, Adobe and Microsoft? Cisco is making progress with its TrustSec/ISE technologies, but it would be incredibly helpful if the industry got together and agreed on universal, standard data-tagging schemas and protocols. Think about the ramifications here; they are pretty powerful. With a standard tag, every DLP device could easily detect and block an email when a junior administrator uses Gmail to send a copy of the HR database to her home computer. Employees could use Box, Dropbox, or any other file-sharing service because the files themselves would have their own “firewalls.” Finally, all endpoints (inside or outside the organization) could enforce DRM policies without the need for goofy proprietary software agents. Good stuff.
Federated encryption and key management. Yeah, I know we are encrypting a lot of stuff today, but there’s no coordination between technologies. It would be much better if the potpourri of encryption and key management technologies worked together. This would require something akin to object-level encryption (i.e., encrypt the “object,” whether it is a file, a database, or a piece of unstructured data) as well as a federated key management model for intra-organizational distribution. Complex, but we’ve done things like this before (think DNS, federated identity, etc.).
Proactive “phone home” data monitoring. In the current model, security professionals are forced to hunt for data using simple file system scanning tools – a cumbersome and somewhat ineffective task. It would be great if sensitive data itself actually “phoned home” to issue regular status reports to some central auditing/monitoring/reporting system. For example, suppose Alice is working on a sensitive spreadsheet. She sends copies to Bob in legal and to a third party named Carol. As this happens, each copy reports into a central management system, which logs each file’s location, maps the distribution pattern (i.e., Alice as the source, Bob and Carol as destinations), and assigns each copy a unique identifier. Remember that each copy is already instrumented with DRM policy rules, so policies can be enforced to safeguard the data on all endpoints. Meanwhile, any further proliferation or changes are logged as they occur. This model could also support a failsafe policy like, “do not allow access to the sensitive data until the file checks into central management for verification and approval.”
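
To make the discovery and classification idea a bit more concrete, here is a minimal Python sketch of naive, pattern-based scanning. The regex patterns, labels, and the /shared/finance path are purely illustrative assumptions on my part; a real product (let alone the big data analytics I’m describing) would combine far richer signals.

```python
# Minimal sketch of pattern-based sensitive-data discovery.
# Patterns, labels, and paths are illustrative assumptions only,
# not a description of any vendor's product.
import os
import re

# Naive indicators of regulated or confidential data; real classifiers
# would use many more signals (context, checksums, learned models).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marking": re.compile(r"company confidential", re.IGNORECASE),
}

def classify_file(path):
    """Return the set of sensitivity labels detected in one file."""
    labels = set()
    try:
        with open(path, "r", errors="ignore") as handle:
            text = handle.read()
    except OSError:
        return labels
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            labels.add(label)
    return labels

def scan_tree(root):
    """Walk a directory tree and report files that look sensitive."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            labels = classify_file(path)
            if labels:
                yield path, sorted(labels)

if __name__ == "__main__":
    for path, labels in scan_tree("/shared/finance"):  # hypothetical file share
        print(path, labels)
```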
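Next, here is what a universal sensitivity tag might look like, along with the sort of check a DLP enforcement point could run against it. To be clear, the schema fields and the dlp_allows decision are my own assumptions; no such industry standard exists today, which is exactly the problem.

```python
# Sketch of a hypothetical, vendor-neutral sensitivity tag and the kind of
# policy check a DLP enforcement point could run against it. The schema
# fields are assumptions; there is no such industry standard today.
import json
from dataclasses import dataclass, asdict

@dataclass
class SensitivityTag:
    data_id: str             # unique identifier for this piece of data
    classification: str      # e.g. "regulated", "intellectual_property"
    owner: str               # accountable business owner
    allowed_channels: tuple  # channels copies may travel over

def tag_to_json(tag):
    """Serialize the tag so it can ride along with the file as metadata."""
    return json.dumps(asdict(tag))

def dlp_allows(tag, channel):
    """A DLP gateway decision: permit only channels listed in the tag."""
    return channel in tag.allowed_channels

hr_export = SensitivityTag(
    data_id="hr-db-2014-06",
    classification="regulated",
    owner="hr@example.com",
    allowed_channels=("internal_mail",),
)
print(tag_to_json(hr_export))
print(dlp_allows(hr_export, "gmail"))  # False: block the outbound copy
```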
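For the federated encryption idea, an object-level (envelope) scheme might look something like the sketch below: each object gets its own key, and that key is wrapped with a key resolved for the recipient organization, a bit like a DNS lookup. The ORG_WRAPPING_KEYS table is a stand-in assumption for a real federated key service; the crypto primitives are Fernet from the open-source cryptography package.

```python
# Rough sketch of object-level (envelope) encryption with a federated key
# lookup. The key-service lookup is a placeholder assumption; Fernet comes
# from the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Hypothetical stand-in for a federated key service: each organization
# resolves a wrapping key for a partner domain, much as DNS resolves names.
ORG_WRAPPING_KEYS = {
    "example.com": Fernet.generate_key(),
    "partner.org": Fernet.generate_key(),
}

def encrypt_object(plaintext, recipient_domain):
    """Encrypt one object with a fresh key, then wrap that key for the recipient."""
    object_key = Fernet.generate_key()
    ciphertext = Fernet(object_key).encrypt(plaintext)
    wrapping_key = ORG_WRAPPING_KEYS[recipient_domain]      # federated lookup
    wrapped_key = Fernet(wrapping_key).encrypt(object_key)  # key travels wrapped
    return ciphertext, wrapped_key

def decrypt_object(ciphertext, wrapped_key, recipient_domain):
    """Unwrap the object key with the recipient's org key, then decrypt."""
    wrapping_key = ORG_WRAPPING_KEYS[recipient_domain]
    object_key = Fernet(wrapping_key).decrypt(wrapped_key)
    return Fernet(object_key).decrypt(ciphertext)

blob, key_blob = encrypt_object(b"Q3 acquisition model", "partner.org")
print(decrypt_object(blob, key_blob, "partner.org"))
```

The point of wrapping a per-object key rather than sharing a master key is that organizations can exchange encrypted objects without ever exchanging the keys that protect their own data at rest.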
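Finally, the “phone home” model boils down to a check-in protocol: every copy of a sensitive file registers with a central audit service, its lineage is recorded, and the failsafe policy denies access to any copy that never checked in. The AuditService class, identifiers, and policy below are illustrative assumptions, not anyone’s shipping product.

```python
# Minimal sketch of "phone home" data monitoring: every copy of a sensitive
# file registers with a central audit service before it can be opened.
# The service, identifiers, and policy are illustrative assumptions only.
import uuid
from datetime import datetime, timezone

class AuditService:
    """Central registry that logs each copy, its holder, and its lineage."""

    def __init__(self):
        self.copies = {}  # copy_id -> record

    def register_copy(self, data_id, holder, parent_copy_id=None):
        copy_id = str(uuid.uuid4())
        self.copies[copy_id] = {
            "data_id": data_id,
            "holder": holder,
            "parent": parent_copy_id,  # maps the distribution pattern
            "first_seen": datetime.now(timezone.utc).isoformat(),
        }
        return copy_id

    def allow_access(self, copy_id):
        """Failsafe policy: no check-in on record, no access."""
        return copy_id in self.copies

audit = AuditService()
alice_copy = audit.register_copy("forecast.xlsx", "alice@example.com")
bob_copy = audit.register_copy("forecast.xlsx", "bob@example.com", alice_copy)
carol_copy = audit.register_copy("forecast.xlsx", "carol@partner.org", alice_copy)
print(audit.allow_access(bob_copy))        # True: Bob's copy checked in
print(audit.allow_access("unknown-copy"))  # False: block until verified
```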

Some elements of this model exist today in data security tools from Cisco, Ionic Security, McAfee, RSA, Symantec, Varonis, Verdasys, Vormetric, and others. Advancing this model further will require industry standards and lots of cooperation from companies like Adobe, Box, Dropbox, Microsoft, Oracle, etc. To align these technology advances with business needs, security organizations and consulting firms should put more work into enterprise data governance models.

Finally, this kind of data security model would greatly benefit from a push from the Federal government. I could see NIST spinning up an initiative that mirrors what was done with the National Strategy for Trusted Identities in Cyberspace (NSTIC) project.

Of course, data security is only one side of the coin – we also need to know who is accessing the data, where they are located, what type of device they are using, etc. In Part 2 of this blog series, I’ll dig into the other critical cornerstone – identity.
