Why Resumable Downloads Matter and How WinInet Helps
When an application pulls a large file from the Internet, any hiccup in the network can kill the transfer midstream. Restarting the download from the beginning wastes bandwidth and time, and it frustrates users. Most modern browsers hide this problem by automatically resuming interrupted downloads, but native applications built with the WinInet API often lack a simple way to pick up where they left off. Understanding how WinInet can support resumable transfers is the first step toward writing robust download utilities.
At the core of the WinInet library is the notion of a streaming connection. Once you call InternetOpen and InternetOpenUrl, you receive an HINTERNET handle that represents a continuous flow of bytes. The API provides InternetReadFile to read chunks of data, but by default it starts from the beginning of the resource. If the connection drops after reading 5 MB of a 50 MB file, the next attempt will begin from zero unless you tell WinInet otherwise.
WinInet offers two main mechanisms for this: InternetSetFilePointer and HTTP range requests. The former is a low‑level file‑pointer manipulation that lets you jump to an offset within the resource. The latter is an HTTP protocol feature that requests a specific byte range from the server. Each has its own constraints, and the choice depends on the scenario you face.
Resumable downloads are especially valuable in two contexts. First, when network reliability is unpredictable, such as on mobile connections or in corporate environments with intermittent VPNs. Second, when downloading very large files - hundreds of megabytes or gigabytes - where a single interruption could cost minutes or more. By enabling resumable downloads, you can provide a smoother user experience and reduce overall bandwidth usage.
Before diving into code, consider the server side. Not every web server honours HTTP range requests. Some serve dynamic content that is streamed on demand and cannot be sliced. Others may have range support disabled for security or configuration reasons. If the server rejects a range request, WinInet falls back to a full transfer. Therefore, a pre‑flight check that queries the server for range support is a prudent step. A simple HEAD request that looks for the Accept-Ranges header is enough to decide whether to use the range method.
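One way to perform that pre-flight check is sketched below. This is a minimal illustration, not a hardened implementation: the function name, the agent string, and the assumption that the URL has already been split into host and path are all choices made for this example. It sends a HEAD request via HttpOpenRequest and reads the Accept-Ranges header back with HttpQueryInfo.

```cpp
#include <windows.h>
#include <wininet.h>
#include <string>
#pragma comment(lib, "wininet.lib")

// Returns true if the server advertises byte-range support via Accept-Ranges.
bool serverSupportsRanges(const std::wstring &host, const std::wstring &path)
{
    bool supported = false;
    HINTERNET hSession = InternetOpen(L"RangeProbe", INTERNET_OPEN_TYPE_PRECONFIG,
                                      nullptr, nullptr, 0);
    if (!hSession) return false;
    HINTERNET hConnect = InternetConnect(hSession, host.c_str(), INTERNET_DEFAULT_HTTP_PORT,
                                         nullptr, nullptr, INTERNET_SERVICE_HTTP, 0, 0);
    if (hConnect) {
        HINTERNET hRequest = HttpOpenRequest(hConnect, L"HEAD", path.c_str(), nullptr,
                                             nullptr, nullptr, INTERNET_FLAG_RELOAD, 0);
        if (hRequest && HttpSendRequest(hRequest, nullptr, 0, nullptr, 0)) {
            // With HTTP_QUERY_CUSTOM, the buffer holds the header name on
            // input and receives the header value on output.
            wchar_t value[64] = L"Accept-Ranges";
            DWORD size = sizeof(value);
            if (HttpQueryInfo(hRequest, HTTP_QUERY_CUSTOM, value, &size, nullptr)) {
                supported = (wcscmp(value, L"bytes") == 0);
            }
        }
        if (hRequest) InternetCloseHandle(hRequest);
        InternetCloseHandle(hConnect);
    }
    InternetCloseHandle(hSession);
    return supported;
}
```

If this probe returns false, fall back to a plain full-transfer download rather than sending Range headers the server will ignore.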
The next sections walk through two concrete strategies. The first covers the classic InternetSetFilePointer technique, which is straightforward but has limitations when dealing with multi‑threaded downloads. The second focuses on the more modern HTTP range approach, ideal for parallel transfers that exploit multiple TCP connections. By the end of this article you should be able to choose the right strategy for your application and implement it with confidence.
While the examples below use C++ and the Windows API, the concepts translate to other languages that wrap WinInet, such as Delphi, C#, or even PowerShell scripts. What matters is understanding the state machine that governs the download flow and the proper sequencing of API calls. Keep this in mind as you read on.
In practice, resumable downloads are a small addition to your codebase that yields big gains in reliability and user satisfaction. As you experiment, pay attention to error handling, timeouts, and the user interface: informing the user of progress, pause, or resume states can make the difference between a polished tool and a brittle utility. The next section demonstrates the simplest resume pattern, which many developers find surprisingly effective.
Implementing a Simple Resume with InternetSetFilePointer
When a download fails, the most straightforward way to continue is to reopen the URL, skip the bytes already received, and append the rest to your local file. The InternetSetFilePointer function gives you direct access to the internal file pointer used by InternetReadFile. By positioning the pointer just after the last byte you wrote, the next call to InternetReadFile resumes the stream at that offset.
Here’s the typical workflow: first, open the URL and start reading. If the read loop fails - say, due to a lost socket - the program closes the handle. It then opens the local file for appending, records its current size, and re‑opens the URL. Before the next read loop, it calls InternetSetFilePointer with an offset equal to the file size. Once set, the next InternetReadFile call continues from the point where the data is missing.
The key to success is keeping the offset calculation accurate. Windows file I/O returns the number of bytes written, but you must also consider partial reads from the network. A robust implementation typically tracks the total bytes written across all reads, then uses that value when resetting the pointer. If the server sends less data than requested, InternetReadFile will return a smaller byte count; you must sum these values correctly.
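The bookkeeping itself has nothing Windows-specific about it. The sketch below isolates the accumulation logic; the `reader` callback is a stand-in for InternetReadFile, returning the number of bytes produced per call and 0 at end of stream.

```cpp
#include <cstddef>
#include <functional>

// Sums the bytes delivered by a chunked reader, exactly as a download loop
// must when the network delivers fewer bytes than requested per call.
// The running total is the offset to pass when resuming.
unsigned long long totalBytesTransferred(const std::function<std::size_t()> &reader)
{
    unsigned long long total = 0;
    std::size_t got = 0;
    while ((got = reader()) > 0) {
        total += got;   // partial reads are summed, never assumed full-size
    }
    return total;
}
```

The same running total should drive both the file writes and the later InternetSetFilePointer call, so the two can never drift apart.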
Below is a simplified example that demonstrates this pattern in C++. The code omits extensive error handling for brevity, but the skeleton shows the critical steps:
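The following sketch of a resumeDownload routine fills in that skeleton. Error handling is trimmed for brevity, and the agent string and buffer size are arbitrary choices for this example.

```cpp
#include <windows.h>
#include <wininet.h>
#include <fstream>
#include <string>
#pragma comment(lib, "wininet.lib")

// Re-opens the URL and continues appending to a partially downloaded file.
bool resumeDownload(const std::wstring &url, const std::wstring &filePath)
{
    // Open the local file for appending and note how much we already have.
    std::ofstream out(filePath, std::ios::binary | std::ios::app);
    if (!out) return false;
    out.seekp(0, std::ios::end);
    ULONGLONG alreadyHave = static_cast<ULONGLONG>(out.tellp());

    HINTERNET hSession = InternetOpen(L"ResumeDemo", INTERNET_OPEN_TYPE_PRECONFIG,
                                      nullptr, nullptr, 0);
    if (!hSession) return false;
    HINTERNET hUrl = InternetOpenUrl(hSession, url.c_str(), nullptr, 0, 0, 0);
    if (!hUrl) { InternetCloseHandle(hSession); return false; }

    // Skip the bytes we already wrote; the next read continues from there.
    // InternetSetFilePointer takes a LONG offset plus an optional high-order
    // part for files larger than 4 GB.
    LONG high = static_cast<LONG>(alreadyHave >> 32);
    InternetSetFilePointer(hUrl, static_cast<LONG>(alreadyHave & 0xFFFFFFFF),
                           &high, FILE_BEGIN, 0);

    char buffer[8192];
    DWORD bytesRead = 0;
    while (InternetReadFile(hUrl, buffer, sizeof(buffer), &bytesRead) && bytesRead > 0)
        out.write(buffer, bytesRead);

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hSession);
    return true;
}
```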
When you invoke resumeDownload after an interruption, the function checks the current file length, resets the pointer, and continues from that byte. This approach works well for single‑threaded downloads because the file handle and the WinInet handle remain in sync. The main drawback surfaces when you try to split the download across multiple threads. Each thread would need its own InternetSetFilePointer call, but WinInet does not allow simultaneous pointer adjustments on the same handle. Consequently, the method is limited to sequential retries.
Despite this limitation, many applications benefit from the simplicity of InternetSetFilePointer. It requires no extra HTTP headers, no server configuration, and minimal code. If you only need to recover from sporadic network hiccups and are fine with a single thread, this technique is the easiest path to add resumable functionality.
Before relying on it, consider the server’s content length. InternetSetFilePointer fails - returning INVALID_SET_FILE_POINTER - if the remote resource does not report its size. It also depends on WinInet’s cache: per the documentation, it cannot be used on handles opened with INTERNET_FLAG_DONT_CACHE or INTERNET_FLAG_NO_CACHE_WRITE. In those cases the function cannot position the stream, and you must fall back to another method - usually the HTTP range approach discussed next. Testing your download against a variety of servers will reveal whether InternetSetFilePointer is viable for your use case.
In summary, InternetSetFilePointer gives you a quick way to pick up a stalled download. It’s lightweight, works out of the box with any server that supports byte offsets, and integrates neatly into existing single‑threaded loops. However, if you plan to parallelize the transfer or need finer control over partial content, you’ll need to move beyond this method.
Optimizing Performance with HTTP Range Requests and Multithreaded Downloading
When bandwidth is at a premium or the file is massive, the single‑threaded resume strategy becomes a bottleneck. Modern web servers support HTTP range requests, a protocol feature that allows clients to ask for specific byte ranges. By combining this feature with multiple simultaneous connections, you can dramatically speed up downloads and still recover from interruptions.
To use ranges, you must add the Range header to your request. If you build the request with HttpOpenRequest, add the header with HttpAddRequestHeaders before calling HttpSendRequest; if you use InternetOpenUrl, pass the header through its lpszHeaders parameter, because the request is sent as soon as the call returns. The header’s format is Range: bytes=start-end. For a resume, set start to the size of the already downloaded file and leave end empty to request the rest of the file. For parallel streams, split the file into N segments, each with its own start and end values, and fetch each segment on a separate thread with its own HINTERNET handle.
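Computing the N segments is plain arithmetic. The sketch below (the struct and function names are invented for this example, and it assumes fileSize is at least n) divides the file into contiguous inclusive ranges, with the last range absorbing any remainder.

```cpp
#include <vector>

struct ByteRange {
    unsigned long long start;
    unsigned long long end;   // inclusive, as in "Range: bytes=start-end"
};

// Divides [0, fileSize) into n contiguous inclusive ranges covering every
// byte exactly once. Assumes fileSize >= n and both are non-zero.
std::vector<ByteRange> splitIntoRanges(unsigned long long fileSize, unsigned int n)
{
    std::vector<ByteRange> ranges;
    if (fileSize == 0 || n == 0) return ranges;
    unsigned long long chunk = fileSize / n;
    unsigned long long start = 0;
    for (unsigned int i = 0; i < n; ++i) {
        // The last segment runs to the final byte, absorbing the remainder.
        unsigned long long end = (i == n - 1) ? fileSize - 1 : start + chunk - 1;
        ranges.push_back({start, end});
        start = end + 1;
    }
    return ranges;
}
```

Because the ranges are inclusive, a 101-byte file split in two yields bytes=0-49 and bytes=50-100, with no gap and no overlap.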
Below is a minimal example that creates two threads to download the first and second halves of a file. The code illustrates how to set the range header, open the URL, read the data, and write it to the correct offset in the local file.
#include <windows.h>
#include <wininet.h>
#include <fstream>
#include <string>
#include <thread>

#pragma comment(lib, "wininet.lib")

struct RangeChunk {
    std::wstring url;
    std::wstring filePath;
    ULONGLONG start;
    ULONGLONG end;
};

void downloadRange(const RangeChunk &chunk)
{
    HINTERNET hSession = InternetOpen(L"RangeDemo", INTERNET_OPEN_TYPE_DIRECT,
                                      nullptr, nullptr, 0);
    if (!hSession) return;

    // The Range header must accompany the initial request. InternetOpenUrl
    // sends the request immediately, so pass the header in its lpszHeaders
    // parameter rather than calling HttpAddRequestHeaders afterwards.
    wchar_t rangeHeader[64];
    swprintf_s(rangeHeader, L"Range: bytes=%llu-%llu", chunk.start, chunk.end);

    HINTERNET hUrl = InternetOpenUrl(hSession, chunk.url.c_str(), rangeHeader,
                                     static_cast<DWORD>(-1), INTERNET_FLAG_RELOAD, 0);
    if (!hUrl) { InternetCloseHandle(hSession); return; }

    // Open the pre-sized local file and seek to this chunk's offset.
    std::fstream out(chunk.filePath, std::ios::binary | std::ios::in | std::ios::out);
    out.seekp(static_cast<std::streamoff>(chunk.start), std::ios::beg);

    char buffer[8192];
    DWORD bytesRead = 0;
    while (InternetReadFile(hUrl, buffer, sizeof(buffer), &bytesRead) && bytesRead > 0)
        out.write(buffer, bytesRead);
    out.close();

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hSession);
}

int main()
{
    std::wstring url = L"http://example.com/largefile.zip";
    std::wstring filePath = L"C:\\Downloads\\largefile.zip";

    // Determine file size via a HEAD request
    ULONGLONG fileSize = 0;
    // ... code to get Content-Length ...

    // Pre-create the local file at its full size so both threads can open
    // it with std::ios::in | std::ios::out and seek to their own offsets.
    {
        std::ofstream create(filePath, std::ios::binary);
        if (fileSize > 0) {
            create.seekp(static_cast<std::streamoff>(fileSize - 1), std::ios::beg);
            create.put('\0');
        }
    }

    // Create two ranges covering the first and second halves of the file
    RangeChunk first{url, filePath, 0, fileSize / 2 - 1};
    RangeChunk second{url, filePath, fileSize / 2, fileSize - 1};

    std::thread t1(downloadRange, first);
    std::thread t2(downloadRange, second);
    t1.join();
    t2.join();
    return 0;
}
In the example above, each thread opens its own HTTP connection and writes directly to the shared file at the correct offset. Because the two streams operate independently, the total download time can be close to half of the serial download duration, assuming the network and server can handle two parallel streams.
When an interruption occurs, you can restart only the affected chunk. Each thread can report its last byte received. On resume, the program rebuilds the range headers to request the missing portion from the same start point. The advantage is that you need not re‑download data that was successfully fetched by the other thread.
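Rebuilding the header for a partially fetched chunk is a one-liner. The sketch below assumes the thread recorded the offset of the last byte it wrote and the chunk's original inclusive end; the function name is invented for this example.

```cpp
#include <string>

// Formats a Range header that re-requests only the unfetched tail of a chunk.
// lastByteReceived is the offset of the last byte already written;
// rangeEnd is the chunk's original inclusive end offset.
std::wstring resumeRangeHeader(unsigned long long lastByteReceived,
                               unsigned long long rangeEnd)
{
    return L"Range: bytes=" + std::to_wstring(lastByteReceived + 1) +
           L"-" + std::to_wstring(rangeEnd);
}
```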
Some servers limit the number of concurrent connections from a single client. Respecting Connection and Keep-Alive headers can help avoid throttling. Additionally, the Accept-Ranges header from the HEAD request informs you whether the server allows partial content. If the header is absent or set to none, the server will ignore the range request and stream the entire file. In that scenario, revert to the single‑threaded InternetSetFilePointer method.
Implementing range requests also requires careful file locking. In the sample, std::fstream is opened with read/write and no exclusive lock; on some platforms this may cause race conditions. A robust implementation might use CreateFile with FILE_SHARE_READ | FILE_SHARE_WRITE and OVERLAPPED I/O, or simply serialize writes with a mutex if the file system does not support concurrent writes.
Finally, error handling becomes more complex. If one thread fails, you may want to retry only that segment while letting the other continue. A cancellation token can signal all threads to abort gracefully if the user requests it. Logging progress for each segment helps diagnose throttling or server issues.
By using HTTP ranges and multi‑threading, you unlock the full bandwidth potential of both the client and the server. This approach scales to large files, reduces total download time, and still offers graceful recovery from transient failures. The technique is widely used by download managers and many high‑performance file transfer tools.




