As I briefly mentioned in my last guest column, one of the keys to a secure data center is understanding what “normal” looks like within your environment. Doing so makes it possible to recognize abnormalities that may indicate a security breach. This baselining process should be part of a robust security policy, improving your team’s ability to build and execute a predetermined response plan when a deviation is detected.

There are several ways to develop such baselines, but one tool that is often overlooked is the data center monitoring system you probably already have in place. Combined with security-specific tools and a disciplined approach, monitoring can play a key role in a comprehensive defense-in-depth strategy.


READING TEA LEAVES AND HISTOGRAMS

I’m consistently surprised by how many IT departments are not using historical and baseline data as part of their security processes. Perhaps part of the issue is a lack of understanding of which performance metrics can and should be taken into account from a security standpoint. Here are a few to consider:

  • Network bandwidth utilization. Understanding the amount of network traffic that normally flows in and out of your data center is a good place to start, as an unexpected spike in traffic can indicate data being pilfered. Using your monitoring software to alert on increases or decreases in traffic in and out of your data center is an effective, yet not especially common, way to spot changes proactively, before a breach runs its course (see the sketch after this list). And of course, NetFlow, sFlow®, or J-Flow™ traffic analysis tracks data from endpoint to endpoint, allowing you to assess and locate potential threats.

  • Data storage volume. Storage baselines establish what is normal for data volume and placement. With that knowledge, you can watch for unexpected volume increases or decreases and for files being moved, any of which could be signs of data being deleted, duplicated, or relocated as part of a breach.

  • CPU and memory. CPU and memory usage are also important performance metrics for security awareness. Knowing what’s normal from a CPU and memory standpoint lets you watch for sustained increases that could indicate a previously undetected malware infection.
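All three of these metrics follow the same pattern: establish what is normal from history, then alert on deviation. Below is a minimal sketch of that pattern in Python, assuming your monitoring system lets you export metric samples (traffic rates, storage volumes, CPU percentages); the function name, sample values, and three-sigma threshold are illustrative, not taken from any particular product.

    from statistics import mean, stdev

    def is_anomalous(history, current, sigmas=3.0):
        """Return True if the latest reading deviates from its baseline.

        history -- at least a week of samples for one metric
        current -- the most recent sample of that same metric
        sigmas  -- how many standard deviations count as a deviation
        """
        baseline = mean(history)
        spread = stdev(history)
        return abs(current - baseline) > sigmas * spread

    # Example: a week of daily outbound traffic averages (MB/s), then a spike.
    outbound = [42.1, 39.8, 44.3, 41.0, 40.5, 43.2, 42.7]
    if is_anomalous(outbound, current=97.4):
        print("Outbound traffic is far above baseline -- investigate.")

The same check works unchanged for storage volume or CPU samples; only the data source changes.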


SECURITY DATA WITH PURPOSE

Performance data also offers another benefit: informing the development and implementation of a comprehensive security policy and process. Here’s an outline of the steps I typically recommend, several of which draw directly on performance monitoring and baselining.

In coordination with the entire IT department and key business leaders, determine:

  • What government policies (if any) apply to your business and the data you collect and store?

  • What departments have access to sensitive data?

  • What level of access is allowed, and from which devices and applications (tablets, smartphones, laptops, etc.)?

  • What are the key data center performance baselines that should be included?

Write up all agreed-upon elements of the policy and distribute them.

Create a security maintenance schedule, which should remain fluid and be updated frequently.

Implement IT monitoring software within the data center and on the network, setting up alerts that trigger when a deviation from the previously determined performance baselines occurs (a sketch follows).
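To make that last step concrete, here is one hedged sketch of what alerting on baseline deviation can look like, assuming your monitoring software exposes readings you can script against. The metric names and bounds are hypothetical stand-ins for whatever your team agreed on in the earlier steps.

    # Hypothetical thresholds derived from the agreed baselines:
    # each bound is the baseline plus or minus an agreed tolerance.
    ALERT_RULES = {
        "net.outbound_mbps": (20.0, 60.0),
        "storage.used_tb":   (14.0, 18.0),
        "cpu.sustained_pct": (0.0, 75.0),
    }

    def check_readings(readings):
        """Return the (metric, value) pairs that fall outside their bounds."""
        violations = []
        for metric, value in readings.items():
            low, high = ALERT_RULES[metric]
            if not low <= value <= high:
                violations.append((metric, value))
        return violations

    latest = {"net.outbound_mbps": 97.4,
              "storage.used_tb": 15.2,
              "cpu.sustained_pct": 91.0}
    for metric, value in check_readings(latest):
        print(f"ALERT: {metric} = {value} is outside its baseline range")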

Implement security procedures. Do this only after you have established performance baselines, so you can assess the effect the new procedures themselves have on performance.

Develop predetermined response plans for when an attack or other deviation from normal is detected. Make sure all team leads know and understand these response plans; scheduled drills give the team a chance to practice responding.

Because performance changes are expected over time, re-evaluate your performance baselines regularly, basing each new baseline on at least one week’s worth of performance data (one simple approach is sketched below).
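One simple way to keep a baseline current, assuming hourly samples: hold a rolling window of the most recent week of readings and recompute the baseline from it. The window size below is an assumption; adjust it to match your sampling interval.

    from collections import deque
    from statistics import mean

    HOURLY_SAMPLES_PER_WEEK = 24 * 7  # assumes one sample per hour

    # A deque with maxlen drops the oldest sample as each new one arrives,
    # so the window always holds at most the latest week of readings.
    window = deque(maxlen=HOURLY_SAMPLES_PER_WEEK)

    def record(sample):
        window.append(sample)

    def current_baseline():
        """Recompute the baseline from the most recent week of samples."""
        if len(window) < HOURLY_SAMPLES_PER_WEEK:
            return None  # not enough history yet; keep the old baseline
        return mean(window)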

Train all employees, including the C-level, on security policies and processes.


CONCLUSION

Remember, the best IT projects are those that let you repurpose what you already have in new ways. In this case, the system you’re using for availability and performance troubleshooting can become a critical part of a more comprehensive security policy and process. Security policies and processes that use monitoring data to establish baselines for normalcy can bring data breaches and other attacks to your attention sooner, and in some cases can surface attacks that have evaded your other security measures.
