6 Golden Rules for Reliable Backups

Malo Paletou
· 7 min read

Our mission at Datashelter is to offer a simple and accessible solution for backing up your Linux servers in just a few clicks. We have already automated the backup of thousands of servers for our clients (SMBs, startups, managed service providers) and have identified recurring patterns in businesses that do not yet have reliable backup solutions.

This article aims to provide you with the keys and best practices based on our experience to help you define, improve, or replace your backup plans. Let’s dive into the 6 golden rules for reliable and resilient data backups!

Your backups must be encrypted

The first rule is that all your backups must be encrypted. While implementing encryption isn’t always straightforward, remember that the data you’re backing up often includes your users’ personal data (hello GDPR!) or strategic business data you definitely don’t want falling into the wrong hands. Your backups contain that same data, so they must be encrypted end to end.

In an ideal world, this encryption should occur at two levels: during transit, and while the data is at rest.

Data in transit

Encrypting data in transit is the simplest part to implement. All you need is to transmit your data over a protocol that encrypts traffic, typically with TLS or SSH: HTTPS, FTPS, SFTP, or a VPN tunnel such as IPsec or OpenVPN.

This way, your backup data never travels unencrypted over your network, whether it’s public or private.
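As a minimal sketch, assuming a reachable backup host (the hostname, user, and paths below are illustrative), an rsync transfer tunneled through SSH keeps every byte encrypted on the wire:

```shell
# Push the local backup directory to a remote server over SSH;
# all traffic is encrypted by the SSH transport, so nothing
# travels in cleartext even on an untrusted network.
rsync -az -e "ssh -p 22" \
  /var/backups/ \
  backup-user@backup.example.com:/srv/backups/
```

The same idea applies to any of the protocols above: what matters is that the transport itself performs the encryption, so the backup tool doesn't have to.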

Data at rest

Encrypting your backups at rest is done using symmetric encryption algorithms like AES-256. An encryption key ensures that your backups are inaccessible to anyone without knowledge of the key.

Ideally, this encryption should take place directly on the server responsible for creating your backups to guarantee true end-to-end encryption.

The encryption key, which safeguards your data's confidentiality, should be stored in a password manager or a Key Management System (KMS) as offered by many cloud providers.
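As a minimal sketch of at-rest encryption with openssl (the key is shown inline only for the demo; in production you would fetch it from your password manager or KMS, never hard-code it):

```shell
# Create a small archive and encrypt it with AES-256.
echo "customer data" > /tmp/demo.txt
tar -czf /tmp/backup.tar.gz -C /tmp demo.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in /tmp/backup.tar.gz -out /tmp/backup.tar.gz.enc \
  -pass pass:example-key

# Restoring requires the same key: decrypt, then verify the round trip.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in /tmp/backup.tar.gz.enc -out /tmp/restored.tar.gz \
  -pass pass:example-key
cmp -s /tmp/backup.tar.gz /tmp/restored.tar.gz && echo "round-trip OK"
```

Because the archive is encrypted on the server that produced it, it can then be shipped to any storage provider without the provider ever seeing the plaintext.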

Protection against ransomware

We couldn’t write an article about data backups without addressing ransomware. While backup protection covers a broader range of risks than "just" ransomware, this type of attack remains one of the most dreaded by business leaders.

In general, you’ll come across three critical features when researching protection against these attacks: versioning, immutability, and air-gapped backups.

Versioning

First, versioning refers to the ability to manage different versions of the same file. For instance, imagine a file named fileA.txt on your storage device. What happens if we overwrite it with a new version bearing the same name?

In systems that don’t support versioning, the previous file is overwritten and lost. In systems that support versioning, a new version of the file is created and served by default when accessing fileA.txt, while the older version remains on disk. You can then retrieve an earlier version by referencing its version number, which is particularly useful when responding to a ransomware attack.

This feature is essential for data backups. Most backup software implements it natively or indirectly uses it when connected to object storage.
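The principle can be seen locally with GNU cp’s numbered backups, a toy stand-in for what versioned object storage does transparently:

```shell
# Overwriting a file while keeping the previous version on disk.
mkdir -p /tmp/versioning-demo && cd /tmp/versioning-demo
echo "version 1" > fileA.txt
echo "version 2" > new.txt

# --backup=numbered renames the old copy instead of destroying it.
cp --backup=numbered new.txt fileA.txt
cat fileA.txt        # → version 2 (served by default)
cat fileA.txt.~1~    # → version 1 (previous version, still retrievable)
```

Object stores such as S3 do the same thing with version IDs instead of `~1~` suffixes, and your backup tool selects which version to restore.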

Immutability

Immutability takes data protection a step further but is inherently linked to versioning. There are two ways to alter data: deleting or overwriting it.

In the case of file overwriting, the previous version is retained, as explained in the versioning section. For file deletion, the system can simply block this action for users.

Thus, immutability generally relies on versioning functionality with the added restriction that users cannot delete file versions. In object storage systems (such as S3), features like object locking address this requirement effectively.

However, keep in mind that pure immutability has a trade-off: since data cannot be deleted or overwritten, the storage volume required for your backups will only grow.
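As a sketch of how this looks on S3-compatible storage (the bucket and key names are hypothetical, and the bucket must have been created with object lock enabled), a retention date can be attached at upload time:

```shell
# Upload a backup with a COMPLIANCE-mode object lock: until the
# retention date passes, this version cannot be deleted or
# overwritten, even with the account's root credentials.
aws s3api put-object \
  --bucket my-backup-bucket \
  --key 2025/backup.tar.gz.enc \
  --body backup.tar.gz.enc \
  --object-lock-mode COMPLIANCE \
  --object-lock-retain-until-date "2026-01-01T00:00:00Z"
```

Setting the retention date to match your retention policy (rather than "forever") is how you keep the storage growth mentioned above under control.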

Air-gapped backups

A backup copy is considered “air-gapped” when stored on a device disconnected from the network. The goal is to shield this backup from most attacks by completely isolating it from the IT system.

The main downside of this method is that it requires disconnecting the storage device after each backup, which can be a cumbersome and complex procedure for many organizations.

At Datashelter, we chose a software segmentation approach. We implemented a software architecture that makes it impossible to alter backups—even if the credentials used for backups are stolen.

From a broader perspective, Datashelter was built with a focus on security. That’s why we highly recommend reading our dedicated article on securing your backups to learn more.

A backup is not just a simple copy of your data

Now, let’s touch on a key aspect of data backups: they’re not just simple copies of your data. To be precise, it’s crucial to understand the difference between backups and data synchronization.

You cannot consider synchronizing your data to a NAS or other storage medium as a true backup. Let’s take the example of a ransomware attack. The ransomware will encrypt your data as quickly as possible, holding it hostage and demanding a ransom for its release.

A common setup we’ve observed in SMBs involves synchronizing production data to a NAS overnight. This approach exposes your backups to two critical risks:

  1. Your organization is not protected against ransomware.
  2. You’re often unaware of this vulnerability.

In this setup, you only have a few hours to react between the start of the attack and the next synchronization. After that, your healthy data will be overwritten by encrypted data, leaving you with no choice but to pay the ransom to recover your data and save your business.

Unlike synchronization, a backup is independent of the state of the source data. If the source is corrupted, it doesn’t affect previous backups. You can simply restore your most recent backup to resume operations with clean data.

Your production and backups must never be simultaneously impacted in the event of an attack.
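A toy illustration of the difference: each backup run below produces an independent archive, whereas a sync would have overwritten the only copy with the corrupted data.

```shell
# First backup run captures clean data.
mkdir -p /tmp/src /tmp/backups
echo "clean data" > /tmp/src/data.txt
tar -czf /tmp/backups/backup-1.tar.gz -C /tmp src

# Simulated ransomware corrupts the source, then a second run happens.
echo "ENCRYPTED-BY-RANSOMWARE" > /tmp/src/data.txt
tar -czf /tmp/backups/backup-2.tar.gz -C /tmp src

# The first archive is untouched and still restores clean data.
mkdir -p /tmp/check
tar -xzf /tmp/backups/backup-1.tar.gz -C /tmp/check
cat /tmp/check/src/data.txt   # → clean data
```

Real backup tools add deduplication and retention policies on top, but the independence of each snapshot is the property that matters here.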

Always have at least one external backup

You may have heard of the 3-2-1 rule, which recommends:

  • At least 3 copies of your data,
  • Stored on 2 different media (hard drives, tapes),
  • With 1 copy off-site.

This is theoretically the ideal policy for backup management. However, its implementation is not always straightforward, technically or economically, especially for small businesses that cannot afford costly equipment.

It’s common for this realization to delay decision-making, leading you to postpone addressing the backup issue. This is why we believe you must approach backup planning pragmatically, tailoring it to your needs and budget.

As with many topics, the Pareto principle applies: 80% of the results (or more) come from 20% of the effort (or less). A simple external backup solution like Datashelter allows you to:

  • Store at least 2 copies of your data,
  • On 2 different media,
  • With 1 off-site copy.

This easy-to-implement strategy covers the majority of data loss risks while requiring minimal effort and investment (our plans start at €9/month for 1TB of data).

You can later complement this strategy with a third local copy for faster restoration compared to external backups.

Regularly test your backups

The fifth golden rule is to avoid being caught off guard when a major incident affects your organization.

Imagine a scenario where an attack renders your production data unusable, forcing you to recover it from your most recent backup. What would happen if you discovered the backup process had been stuck for six months due to an error?

Email notifications

Prevent such situations with email notifications. These alert you when a backup fails. However, ensuring you reliably receive these notifications can be challenging. If your backup software is responsible for sending alerts, how can you be sure a bug won’t prevent it from doing so?

The solution is to delegate the notification responsibility to an external system. For example, Datashelter separates backup creation (handled by "snaper") from notifications. An independent system monitors backups and sends alerts when certain conditions are met (e.g., missing backups, significant changes between backups).
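A minimal sketch of that dead-man’s-switch logic, runnable from any machine independent of the backup job (the path and 26-hour threshold for a daily schedule are illustrative):

```shell
# The backup job touches this file on every successful run;
# an independent monitor alerts when it goes stale.
touch /tmp/last-backup-ok            # normally updated by the backup job

max_age=$((26 * 3600))               # 26 h of slack for a daily schedule
age=$(( $(date +%s) - $(stat -c %Y /tmp/last-backup-ok) ))

if [ "$age" -gt "$max_age" ]; then
  echo "ALERT: no successful backup in the last 26 hours"  # send email here
else
  echo "backup heartbeat OK"
fi
```

Because the monitor only checks for the *absence* of a recent success, it still fires even if the backup software crashes before it can send any alert itself.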

Why regularly test your backups?

That said, email notifications are a safety net but don’t replace the need for routine backup testing (quarterly or semi-annually). Integrating backup tests into your routine helps avoid painful situations such as:

  • Backups stopped working after a system update, and I didn’t receive alerts. My last valid backup is 6 months old.
  • Restoring a dataset of millions of files would take 3 months due to poor restoration parallelization.
  • I backed up /var/lib/mysql, but my database won’t restart after restoration because certain indexes are corrupted.

Testing backups can be cumbersome and prone to procrastination. To simplify the process, create a checklist during your first restoration test. Note each step and any observations in a document you can reuse for future tests.

A clear procedure ensures more reliable and efficient backup testing, reducing errors and saving time.
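One step worth putting on that checklist is an integrity check: restore the backup somewhere disposable and verify checksums against a manifest recorded at backup time. A minimal sketch, with illustrative paths:

```shell
# At backup time: record a checksum manifest alongside the archive.
mkdir -p /tmp/data /tmp/restore
echo "payload" > /tmp/data/file.txt
( cd /tmp/data && sha256sum file.txt ) > /tmp/manifest.sha256
tar -czf /tmp/test-backup.tar.gz -C /tmp/data .

# At test time: restore to a scratch directory and verify every file.
tar -xzf /tmp/test-backup.tar.gz -C /tmp/restore
( cd /tmp/restore && sha256sum -c /tmp/manifest.sha256 )   # → file.txt: OK
```

For application data (databases in particular), follow the checksum pass with a functional check, such as starting the service against the restored files.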

Develop a Disaster Recovery Plan (DRP)

This brings us to the Disaster Recovery Plan (DRP). It can be as simple as a Word document detailing steps, key points, and the responsibilities of all stakeholders (if you’re not the sole IT manager).

This document serves as your reference during incidents, much like an airline pilot’s checklist. Carefully reviewing each point minimizes oversights and reduces reliance on your reasoning capacity, which may be strained under pressure.

Rather than being a burdensome administrative task, the DRP provides structure and peace of mind, ensuring you’re well-prepared for IT crises.

A plan that’s economically and technically feasible

Our final golden rule once again emphasizes pragmatism. The biggest barriers to adopting a reliable backup process are often cost or technical feasibility.

More specifically, the lack of clarity surrounding these two aspects explains why 80% of organizations admit they don’t have reliable backups. Taking action requires a clear plan. You need a precise understanding of your backup solution's cost and installation process.

This straightforward realization led to the creation of Datashelter. Our vision is to modernize the backup industry by offering solutions that are easy to understand and transparent, catering to the needs of small and medium-sized businesses.

We’ve built a comprehensive solution (software and backup storage) that lets you leverage our data backup expertise in just a few clicks. All our features and advice are included in a single offer, priced based on storage capacity and the number of servers to back up.

Data backups are your first line of defense against cyberattacks. No matter the nature of the attack, only reliable and resilient backups will enable your recovery if the worst-case scenario occurs.