Planning a Smooth Transition From Your Current Unix Host
When you decide to switch hosting providers or add a new server to your infrastructure, the first thing you’ll notice is the sheer volume of files, directories, and configuration artifacts that belong to your domain. A domain is not just a handful of HTML files; it is a collection of static assets, server‑side scripts, database dumps, and often a set of CGI programs that depend on exact file permissions and ownership. If any of these components get corrupted or misconfigured during the move, the entire site can become inaccessible or, at the very least, lose functionality such as dynamic content, authentication, or search engine visibility.
One common pitfall is to treat the migration as a simple “download and re‑upload” exercise using an FTP client. This approach is tempting because it seems straightforward: connect to the old server, pull everything to a local machine, then push it all back to the new host. Unfortunately, the FTP transfer can change file permissions, strip ownership information, and mishandle text files that have no explicit extension. CGI scripts, in particular, rely on executable flags and a proper owner; if those flags are lost, the scripts will refuse to run or even return a 500 internal server error. In addition, uncompressed transfers waste bandwidth and time, especially when dealing with large media libraries or compressed assets.
Another complication is the environment differences between the two servers. Even subtle differences such as the user ID (UID) or group ID (GID) of the web server account can lead to permission errors. The directory layout on the new host might also differ; if your scripts reference absolute paths or assume a particular directory structure, the migration will break until you adjust those references.
Given these challenges, the most reliable approach is to use the Unix `tar` command to create a single, compressed archive of your entire domain. The archive preserves ownership, permissions, and the full directory hierarchy. By transferring the archive as a single file, you reduce the risk of losing any metadata and you also cut transfer time by avoiding multiple round‑trips for each file. Moreover, once the archive is extracted on the new server, the environment is almost identical to the old one, which makes troubleshooting far easier.
Before you start the actual migration, make a quick inventory of the key elements that need to be preserved: the user account that owns the files, the group the web server belongs to, the exact directory tree, and any configuration files such as `.htaccess` or `httpd.conf` snippets that affect your domain. Also, check that the new host supports the same filesystem type (most use ext4 or XFS) and that you have sufficient disk space to hold the archive and the unpacked content. If you’re working with multiple domains on the same server, keep their directories separate and ensure that you don’t overwrite one domain’s files with another’s during the extraction step.
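A few standard commands cover most of that inventory; run them in the domain’s document root before archiving anything:

```shell
# Pre-flight inventory of the environment the new host must reproduce.
id          # owner account: username, UID, and group memberships
du -sh .    # total size of the content you are about to archive
df -h .     # free space on this filesystem (the archive needs room too)
```

These are ordinary coreutils and are safe to run anywhere; the account name and mount points will of course differ from host to host.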
Once you’ve gathered that information, you’re ready to create the archive on the old server. Open a terminal session via SSH or Telnet, navigate to the root of your domain’s document root, and execute the tar command. Below is a step‑by‑step guide that demonstrates how to preserve ownership, compress the archive with gzip, and include special files like `.htaccess` that might otherwise be omitted because they begin with a dot.
The command you’ll use looks like this:

tar -cpz --same-owner -f yourdomain.tar.gz * .htaccess
Here’s what each flag means:
- `c` – create a new archive
- `p` – preserve file permissions so they survive extraction
- `z` – compress the archive with gzip to keep the file size small
- `--same-owner` – have extracted files retain the owner recorded in the archive (ownership data is always stored at create time; restoring it is the default when extracting as root)
- `-f` – specify the name of the archive file
By including the asterisk (`*`) you tell tar to add every file and directory in the current directory, and the explicit mention of `.htaccess` makes sure that file is also bundled in. When the archive finishes, you’ll find a file named `yourdomain.tar.gz` in your current directory. If you prefer another name, just replace the filename in the command, but keep the `.tar.gz` extension to signal that it’s a compressed tarball.
After creating the archive, it’s a good idea to verify its integrity before you transfer it. Run `tar -tzf yourdomain.tar.gz` to list its contents and confirm that all expected files appear. You can also check the archive’s size with `ls -lh yourdomain.tar.gz` to ensure that the compression worked as expected.
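A checksum recorded before the transfer also lets you confirm later that the archive survived the trip intact. A self-contained sketch, using a tiny scratch archive as a stand-in for `yourdomain.tar.gz`:

```shell
# Build a stand-in archive so these commands run as-is;
# in practice, operate on the real yourdomain.tar.gz.
WORK=$(mktemp -d)
cd "$WORK"
echo '<html></html>' > index.html
tar -cpzf site.tar.gz index.html

tar -tzf site.tar.gz                        # list contents without extracting
sha256sum site.tar.gz > site.tar.gz.sha256  # record the checksum before transfer
sha256sum -c site.tar.gz.sha256             # re-run on the destination to verify
```

Copy the `.sha256` file alongside the archive and run the `-c` check again on the new server; any corruption in transit shows up as a FAILED line.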
With the archive verified, you’re ready to move it to the new server. The next section walks through how to set up the destination environment and extract the archive while keeping permissions and ownership intact.
Setting Up the Destination Server and Restoring Your Site
Once the archive is on hand, the next step is to create the user account and directory structure on the new Unix host that mirror those on the old server. You’ll typically use the same username that the web server runs under (for example, `domainowner`). If you’re working with a hosting provider’s control panel, they may provide a user creation wizard; otherwise, use the `adduser` command on the command line:
sudo adduser --system domainowner

After adding the user, create the target document root. Most hosting environments use `/usr/www/htdocs` or `/var/www/html` as the base directory for web content. Create the exact path you used before: for instance, `/usr/www/htdocs/yourdomain`. Set the ownership of the directory to the newly created user and to the web server group (often `www-data` on Debian‑based systems or `apache` on Red Hat‑based ones). You can set this with:
sudo chown -R domainowner:www-data /usr/www/htdocs/yourdomain

Ensuring that the directory permissions allow the web server to read files is essential: typically `755` for directories and `644` for files, with `755` on any CGI scripts or other executables so they can run. The `tar` command will preserve the original permissions when you extract, but it’s still worth double‑checking the base directory’s permissions after extraction.
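Those permission rules can be applied in bulk with `find`. The sketch below works on a scratch tree so it is safe to run anywhere; in practice, point `DOCROOT` at the real document root:

```shell
# Scratch tree standing in for /usr/www/htdocs/yourdomain
DOCROOT=$(mktemp -d)
mkdir -p "$DOCROOT/cgi-bin"
touch "$DOCROOT/index.html" "$DOCROOT/cgi-bin/form.cgi"

find "$DOCROOT" -type d -exec chmod 755 {} +   # directories: rwxr-xr-x
find "$DOCROOT" -type f -exec chmod 644 {} +   # static files: rw-r--r--
chmod 755 "$DOCROOT/cgi-bin/form.cgi"          # CGI scripts need the execute bit
```

Run the blanket `find` passes first and then re-grant execute permission to scripts, since the file pass would otherwise strip it.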
With the environment ready, the next step is to transfer the archive to the new host. There are two common methods:
- Direct server‑to‑server transfer: Log into the new server, navigate to the target directory, and use `wget` or `curl` to pull the archive from the old server’s public URL, if you’ve made it temporarily accessible.
- Local relay: Download the archive to your workstation using SFTP, then upload it to the new server via SFTP. This gives you a chance to verify the file on your local machine before sending it out.
Whichever method you choose, once the archive arrives in `/usr/www/htdocs/yourdomain`, log into the new server via SSH or Telnet as `domainowner` (or use `sudo` if needed). Change to the document root and run the extraction command:
tar -xzpvf yourdomain.tar.gz

The flags are straightforward: `x` extracts files from the archive, `z` decompresses gzip data, `p` restores the permissions recorded in the archive (without it, your umask applies), and `v` (verbose) prints the list of extracted files to the console, which is useful for debugging. If you prefer a quieter operation, drop the `v` flag. The extraction will rebuild the entire directory tree with each file’s original permissions; ownership is restored too when you extract as root, otherwise the files belong to the extracting user.
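For extra safety you can extract into a staging directory first and inspect the result before it touches the live document root. A self-contained sketch (it builds a throwaway archive so the commands run as-is):

```shell
WORK=$(mktemp -d)
cd "$WORK"
mkdir src
echo '<html></html>' > src/index.html
touch src/.htaccess
tar -cpzf yourdomain.tar.gz -C src .     # archiving "." also catches dot-files

mkdir staging
tar -xzpf yourdomain.tar.gz -C staging   # -C extracts into the staging tree
ls -la staging                           # confirm .htaccess and permissions survived
```

Once the staging copy looks right, move or re-extract it into the real document root.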
After extraction, it’s time to perform a sanity check. Open a browser and navigate to your domain’s URL. If everything is correct, you should see the same pages as before the move. Check for missing images, broken links, or 403/500 errors that might indicate permission problems. Also verify that CGI scripts run as expected; try to access a script you know is sensitive to the `x` flag. If a script fails, use `ls -l` on the script file to confirm that it has execute permissions and that the owner is correct.
Once the site is live, you may need to update any environment‑specific configuration, such as database credentials or API keys, that were hard‑coded in configuration files. If you’re using a CMS like WordPress, run any post‑migration scripts or update the `wp-config.php` file accordingly. Don’t forget to update DNS records if you’re changing IP addresses; keep propagation times in mind so that users don’t hit the old server by mistake.
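As a hypothetical illustration of that kind of edit, here is how the WordPress `DB_HOST` setting might be pointed at a new database server with `sed`; the host names are placeholders, and the stand-in file lets the commands run as-is:

```shell
WORK=$(mktemp -d)
cd "$WORK"
# Stand-in wp-config.php with the old credentials line
cat > wp-config.php <<'EOF'
define( 'DB_HOST', 'old-db.example.com' );
EOF

# Rewrite the DB_HOST value in place; back the file up first on a real site.
sed -i "s/'DB_HOST', '[^']*'/'DB_HOST', 'new-db.example.com'/" wp-config.php
grep DB_HOST wp-config.php   # confirm the change took effect
```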
Now that the site is fully migrated, the last piece of the puzzle is maintaining sync between the old and new servers if you still have active changes on the original host. The following section explains how to identify new or modified files and update the new server accordingly.
Keeping the Old and New Servers in Sync After Migration
Once your domain is live on the new server, you may discover that certain files on the old host were added or updated after the initial migration. Perhaps a new article was published, a static image was updated, or a configuration file was tweaked for performance. To keep the new server current, you need a reliable method for detecting those changes, packaging only the updated files, and applying them to the new environment without disrupting the live site.
The Unix `find` command is perfect for this task because it can compare modification times against a reference file. On the old server, navigate to the domain’s document root and run:
/usr/bin/find . -newer yourdomain.tar.gz > newerfiles.txt

This command scans the entire directory tree and writes the relative path of every file newer than the original archive to `newerfiles.txt`. The resulting list covers each new or modified file, but it also includes directory entries and a line with just a dot (`.`) representing the current directory. Since directories themselves don’t need to be archived again, you should clean the list before creating a new archive.
Open `newerfiles.txt` in a text editor that preserves line breaks, and delete any line that contains only a dot or any directory name. You can do this locally by downloading the file, editing it, and uploading it back, or by using an inline stream editor like `sed`:
sed -i '/^\.$/d' newerfiles.txt

which removes the line consisting of a single dot. Directory entries are easier to avoid at the source: add `-type f` to the `find` command so that only regular files are listed in the first place. Now that the list contains only the files you want to update, create a new compressed archive containing those files only. The `tar` command can read file names from a list with the `-T` flag:
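An alternative that skips the intermediate list file entirely: have `find` emit only regular files and pipe them straight into GNU `tar`, which reads names from stdin when given `-T -`. A runnable sketch with a scratch site (the archives themselves are excluded by name so they don’t end up inside the update):

```shell
WORK=$(mktemp -d)
cd "$WORK"
touch old.html
tar -cpzf yourdomain.tar.gz old.html   # the "original" full archive
sleep 1                                # ensure a visibly newer timestamp
echo 'updated' > article.html          # file added after the migration

find . -type f -newer yourdomain.tar.gz ! -name '*.tar.gz' \
  | tar -cpzf yourdomain-new-files.tar.gz -T -
tar -tzf yourdomain-new-files.tar.gz   # lists only ./article.html
```

The `-type f` test drops directories, and `! -name '*.tar.gz'` keeps the update archive from trying to include itself.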
tar -cpz --same-owner -f yourdomain-new-files.tar.gz -T newerfiles.txt

When this archive is ready, transfer it to the new server using the same method you used for the original archive. Once on the new host, extract it in the same directory where the site resides:
tar -xzpvf yourdomain-new-files.tar.gz

With the permissions flag in place, the extraction replaces the existing files with the updated ones while preserving the original directory structure. Any new files will be created, and modified files will be overwritten. This approach avoids re‑uploading the entire site and keeps bandwidth usage minimal.
After applying the updates, it’s good practice to perform a quick integrity check. Compare the checksums of the updated files between the old and new servers, or simply reload the pages that contain the changes to ensure they render correctly. If you’re using version control, you could also check out the repository on both servers and run a diff, but for most simple deployments, the `find`‑based workflow is sufficient.
Finally, remember to update any scheduled tasks, cron jobs, or server‑level configurations that might rely on the old paths or usernames. Once everything is verified, the new server should be a mirror of the old one with all recent changes in place. Keep a record of the date and time of each sync operation so that you can trace back any issues if they arise. This systematic approach keeps your domain running smoothly across server migrations and ongoing updates.




