A key part of any information security strategy is disposing of data once it’s no longer needed. Failure to do so can lead to serious breaches of data-protection and privacy policies, compliance problems and added costs.

When it comes to selecting ways to destroy data, organizations have a short menu. There are basically three options: overwriting, which is covering up old data with new information; degaussing, which erases the magnetic field of the storage media; and physical destruction, which employs techniques such as disk shredding. Each of these techniques has benefits and drawbacks.

Some organizations use more than one method. For example, microprocessor maker Intel uses all three.

Still, some organizations, particularly smaller ones, need more education about data destruction. Enterprise clients generally have a pretty good idea of how to deal with this; the practices have been relatively consistent over the years.

Unfortunately, there are still many small-to-midsize businesses that haven’t fully thought through the risks of undestroyed data.

There are also persistent questions among all types of companies about how to handle data that’s in the hands of cloud computing providers.

Although the storage architecture of most SaaS services probably means that data from former customers will quickly be written over and soon become virtually impossible to recover, there’s no good way to know if this is the case. The SaaS market also has little or no convention surrounding the treatment of former client data on backup media.

With the massive move toward the cloud, the remaining vestiges of physical destruction are being killed off. In other words, logical destruction, for all but truly classified data, is further entrenched as the norm. The problem is not destruction so much as discovery: how do we find the data that we need to destroy?

As for on-premises data, organizations need to consider several factors before choosing a method of destruction.

The first is the time spent on data destruction. For example, is this something the company does frequently, and does it have a lot of disks to go through?

The second is cost. Can the company afford to destroy disks, or do they need to be reused? And can it afford specialized destruction hardware?

Finally, think about validation and certification. Is data destruction a regulatory compliance requirement? How will you prove to regulators or auditors that you have met the requirements?

Here’s a look at some of the advantages and disadvantages of the three main methods of data destruction.

Overwriting

One of the most common ways to address data remanence (the residual representation of data that remains on storage media after attempts to erase it) is to overwrite the media with new data.

Because overwriting can be done by software and can be used selectively on part or all of a storage medium, it’s a relatively easy, low-cost option for some applications, experts say.

Among the biggest advantages of this method is that a single pass is adequate for data removal, as long as all data storage regions are addressed.

Software can also be configured to clear specific data, files, partitions or just the free space on storage media. Overwriting erases all remnants of deleted data to maintain security, and because the media remain usable afterward, it's an environmentally friendly option.
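To make that concrete, here is a minimal Python sketch of a single-pass, file-level overwrite. The file name, chunk size and helper name are assumptions for illustration only; in practice an organization would rely on a vetted erasure tool rather than a hand-rolled script, since a sketch like this cannot reach snapshots, backups or blocks remapped by SSD wear leveling.

```python
import os
import secrets

def overwrite_and_delete(path, chunk_size=1 << 20):
    """Single-pass overwrite of one file with random data, then deletion.

    Illustrative sketch only: it covers the file's currently allocated
    blocks on a simple filesystem, not copies held in snapshots, journals,
    backups, or cells remapped by SSD wear leveling.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(chunk_size, size - written)
            f.write(secrets.token_bytes(n))  # random data over the old bytes
            written += n
        f.flush()
        os.fsync(f.fileno())  # force the overwrite down to the device
    os.remove(path)

# Hypothetical usage:
# overwrite_and_delete("retired_customer_export.csv")
```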

On the downside, it takes a long time to overwrite an entire high-capacity drive. This process might not be able to sanitize data from inaccessible regions such as host-protected areas. In addition, there is no security protection during the erasure process, and it is subject to intentional or accidental parameter changes. Overwriting might require a separate license for every hard drive, and the process is ineffective without good quality assurance processes.

Another factor to consider is that overwriting works only when the storage media is not damaged and is still writable.

Media degradation will render this method ineffective, and overwriting won't work on disks with advanced storage-management features. For example, the use of RAID means that data is written to multiple locations for fault tolerance, so remnants of the data are scattered across the enterprise storage architecture.

Security practitioners point out that while overwriting is cost-effective, it's not free. Overwriting is definitely cheaper than other methods, but you still have to have the headcount to manage it, so there are costs there.

By following standards created by the Department of Defense and the National Institute of Standards and Technology, you can be reasonably sure the overwritten data will be unreadable and unusable. Some studies have shown that data can still be recovered from overwritten drives, but following the standards greatly minimizes the likelihood of that being the case.
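As a rough illustration of what a standards-driven, quality-assured overwrite might involve, the sketch below runs a multi-pass wipe (zeros, ones, then random, in the style of patterns commonly attributed to DoD 5220.22-M) and finishes with a read-back check, reflecting the earlier point that overwriting is ineffective without good quality assurance. The pass sequence, function name and verification depth are illustrative assumptions, not a certified sanitization procedure.

```python
import os
import secrets

# Pass sequence loosely modeled on common multi-pass schemes (zeros, ones, random).
# NIST SP 800-88 generally treats a single pass as sufficient for modern drives.
PASSES = [b"\x00", b"\xff", None]  # None = cryptographically random data

def wipe_with_verification(path, chunk_size=1 << 20, final=b"\x00"):
    """Overwrite `path` with each pass, finish with a fixed pattern,
    then read it back as a basic quality-assurance check."""
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        for pattern in PASSES + [final]:
            f.seek(0)
            written = 0
            while written < size:
                n = min(chunk_size, size - written)
                data = secrets.token_bytes(n) if pattern is None else pattern * n
                f.write(data)
                written += n
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device before the next
        # Quality-assurance step: confirm the final fixed pattern landed everywhere.
        f.seek(0)
        checked = 0
        while checked < size:
            chunk = f.read(min(chunk_size, size - checked))
            if chunk != final * len(chunk):
                raise RuntimeError(f"verification failed at offset {checked}")
            checked += len(chunk)
```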

Still, overwriting is by no means foolproof. Errors might occur in some areas and the data might not be fully overwritten; if the media fall into the wrong hands, someone might still be able to recover that data.

Degaussing

Degaussing is the removal or reduction of the magnetic field of a storage disk or drive. It’s done using a device called a degausser, which is specifically designed for the medium being erased.

When applied to magnetic storage media such as hard disks, magnetic tape or floppy disks, the process of degaussing can quickly and effectively purge an entire storage medium.

A key advantage of degaussing is that it makes data completely unrecoverable, making this method of destruction particularly appealing for dealing with highly sensitive data.

On the negative side, strong degausser products can be expensive and heavy, and they can have especially strong electromagnetic fields that can produce collateral damage to vulnerable equipment nearby.

In addition, degaussing can create irreversible damage to hard drives. It destroys the special servo control data on the drive, which is meant to be permanently embedded. Once the servo is damaged, the drive is unusable.

Degaussing makes data unrecoverable, but it can damage certain media types so that they are no longer usable. If you plan to reuse those media, this may not be the right method.

Once disks are rendered inoperable by degaussing, manufacturers may not be able to fix drives or honor replacement warranties and service contracts.

There’s also the issue of securing media during the degaussing process. If strict requirements prevent failed and decommissioned media from leaving the data center, then the organization must assign physical space in the data center to secure the media and equipment for the disk-eradication process.

The effectiveness of degaussing can also depend on the density of drives. Because of technology changes in hard drives and increases in their capacity, some organizations have found that degaussing capabilities are diminishing over time.

How effective the method is also depends on the people doing the degaussing. If people make mistakes, your control is diminished. Say the person responsible for degaussing drives is supposed to run the degausser for 15 minutes, but they have to go to lunch and run it for only five minutes instead. Breakdowns like this can happen, though all three methods are susceptible to human error.

Physical Destruction

Organizations can physically destroy data in a number of ways, such as disk shredding, melting or any other method that renders physical storage media unusable and unreadable.

One of the biggest advantages of this method is that it provides the highest assurance of absolute destruction of the data. There’s no likelihood that someone will be able to reconstruct or recover the data from a disk or drive that’s been physically destroyed.

Intel has found that physical destruction is an efficient method of getting rid of data when transporting storage media for degaussing is not practical or secure.

For example, when the company needed to wipe data from thousands of drives in multiple locations, its choices were to either degauss at multiple sites, which would have been costly, or ship the drives to a single location, which would have been risky if the drives got into the wrong hands.

The company ended up stockpiling thousands of old drives while pondering how to destroy them in a way that was not prohibitively expensive but still resulted in the complete destruction of the data. Intel had been working with scrap contractors that melt down drives and reclaim the precious metals. There was no cost impact to the IT budget, and the approach was also green because the metals were recycled.
