Understanding HTTP Response Methods: How Servers Deliver Large File Downloads via GET Requests

When you send a GET request to a server, the response doesn't use a "method" in the same way the request does. The HTTP protocol specifies that responses simply include:

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="large_file.iso"
Transfer-Encoding: chunked

Here's what happens during a file download:

  1. Client sends GET request for resource
  2. Server responds with status code and headers
  3. Response body contains the actual file data
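On the wire, the request side of that exchange is equally simple (the host and path here are illustrative):

GET /download/large_file.iso HTTP/1.1
Host: example.com
Accept: application/octet-stream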

For handling large downloads in a web app:

// Using the Fetch API (note: response.blob() buffers the whole file in memory)
fetch('/download/large_file.iso', {
  method: 'GET',
  headers: {
    'Accept': 'application/octet-stream'
  }
})
.then(response => {
  if (!response.ok) throw new Error(`Download failed: ${response.status}`);
  return response.blob();
})
.then(blob => {
  // Wrap the blob in an object URL and trigger a synthetic click to save it
  const url = window.URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'large_file.iso';
  document.body.appendChild(a);
  a.click();
  a.remove();
  window.URL.revokeObjectURL(url);
});
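Because response.blob() holds the entire file in memory before the save dialog can appear, you may want progress reporting while the body streams in. Here's a minimal sketch that consumes the body with a reader, assuming a browser that exposes response.body (the URL is illustrative):

// Streaming read with progress (sketch; still assembles the file in memory)
async function downloadWithProgress(url) {
  const response = await fetch(url);
  if (!response.ok) throw new Error(`Download failed: ${response.status}`);

  const total = Number(response.headers.get('Content-Length')) || 0;
  const reader = response.body.getReader();
  const chunks = [];
  let received = 0;

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.length;
    if (total) console.log(`Progress: ${((100 * received) / total).toFixed(1)}%`);
  }
  return new Blob(chunks); // hand this to the object-URL trick shown above
}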

For resumable downloads, implement Range requests:

// Server-side (Node.js/Express example)
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();

app.get('/download/:file', (req, res) => {
  // NOTE: in production, validate req.params.file to prevent path traversal
  const filePath = path.join(__dirname, 'files', req.params.file);
  const stat = fs.statSync(filePath);
  const fileSize = stat.size;
  const range = req.headers.range;

  if (range) {
    // Parse "bytes=start-end"; an open-ended range like "bytes=1000-"
    // means "from byte 1000 to the end of the file"
    const parts = range.replace(/bytes=/, '').split('-');
    const start = parseInt(parts[0], 10);
    const end = parts[1] ? parseInt(parts[1], 10) : fileSize - 1;
    const chunkSize = (end - start) + 1;
    const file = fs.createReadStream(filePath, { start, end });

    res.writeHead(206, {
      'Content-Range': `bytes ${start}-${end}/${fileSize}`,
      'Accept-Ranges': 'bytes',
      'Content-Length': chunkSize,
      'Content-Type': 'application/octet-stream'
    });
    file.pipe(res);
  } else {
    res.writeHead(200, {
      'Content-Length': fileSize,
      'Content-Type': 'application/octet-stream'
    });
    fs.createReadStream(filePath).pipe(res);
  }
});

app.listen(3000);

  • Always use GET for file downloads (never POST)
  • Implement proper Content-Disposition headers (see the sketch after this list)
  • Consider using a CDN for very large files
  • Enable compression when possible (except for already compressed files)
  • Implement proper caching headers
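In Express, the Content-Disposition and caching headers might be set like this (a sketch reusing the app and path setup from above; the cache lifetime is illustrative):

// Explicit download headers (sketch; values are illustrative)
app.get('/files/:name', (req, res) => {
  res.set({
    // Prompt a save dialog and suggest a filename
    'Content-Disposition': `attachment; filename="${req.params.name}"`,
    // Large binaries rarely change, so let clients and CDNs cache them
    'Cache-Control': 'public, max-age=86400',
    // Advertise resumability to download managers
    'Accept-Ranges': 'bytes'
  });
  // res.sendFile handles Range requests and streaming for us
  res.sendFile(path.join(__dirname, 'files', req.params.name));
});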

For optimal large file serving:

# Nginx configuration example
server {
    location /downloads/ {
        sendfile on;            # kernel-level file-to-socket copy
        tcp_nopush on;          # send headers and file start in one packet
        aio on;                 # asynchronous I/O for large reads
        directio 512;           # bypass the page cache for files over 512 bytes
        output_buffers 1 128k;  # one 128k buffer per connection
    }
}

When you send an HTTP GET request, the server responds to that request - but crucially, the response itself doesn't have a "method" like GET or POST. The response is simply an HTTP message containing a status code, headers, and the requested content. Let me clarify how this works in practice:

// Client sends GET request
fetch('/download/large-file.iso', {
  method: 'GET',
  headers: {
    'Accept': 'application/octet-stream'
  }
})
.then(response => {
  // Server responds with status code, headers and body stream
  if (response.ok) return response.blob();
  throw new Error('Download failed');
});

The server doesn't "send" files in the HTTP method sense - it responds to your GET request with the file data in the response body. For large files like ISO images:

  • The response uses HTTP status code 200 (OK) for successful requests
  • Content-Type header specifies the file type (e.g., application/octet-stream)
  • Content-Length header indicates the full file size
  • The actual file data streams in the response body
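You can inspect this metadata on the client before committing to the full transfer, for example with a HEAD request (a sketch; the URL is illustrative):

// Probe the file's metadata without downloading the body
fetch('/download/large-file.iso', { method: 'HEAD' })
  .then(response => {
    console.log('Status:', response.status);                          // e.g. 200
    console.log('Type:', response.headers.get('Content-Type'));       // e.g. application/octet-stream
    console.log('Size:', response.headers.get('Content-Length'));     // full size in bytes
    console.log('Resumable:', response.headers.get('Accept-Ranges')); // "bytes" if ranges are supported
  });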

To make downloads resumable, you need to implement Range requests on both client and server:

// Client request with Range header (for resuming)
fetch('/download/large-file.iso', {
  headers: {
    'Range': 'bytes=1000-'
  }
});

// Node.js server handling partial content
// (sketch; fileSize is obtained via fs.statSync as shown earlier)
app.get('/download/:file', (req, res) => {
  const range = req.headers.range;
  if (range) {
    const parts = range.replace(/bytes=/, '').split('-');
    const start = parseInt(parts[0], 10);
    const end = parts[1] ? parseInt(parts[1], 10) : fileSize - 1;
    // Implement partial content logic here
    res.writeHead(206, {
      'Content-Range': `bytes ${start}-${end}/${fileSize}`,
      'Accept-Ranges': 'bytes'
    });
    // Stream the specific chunk
  } else {
    // Regular full file download
  }
});
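Putting the two sides together, a client can resume an interrupted transfer by asking only for the missing bytes (a sketch; how you persist the partial data is up to you):

// Resume sketch: request only the bytes we don't have yet
async function resumeDownload(url, bytesAlreadyHave) {
  const response = await fetch(url, {
    headers: { 'Range': `bytes=${bytesAlreadyHave}-` }
  });
  if (response.status === 206) {
    // Server honored the range: append this to the partial data
    return response.arrayBuffer();
  }
  if (response.status === 200) {
    // Server ignored the range: we received the whole file again
    return response.arrayBuffer();
  }
  throw new Error(`Unexpected status: ${response.status}`);
}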

For optimal large file downloads:

  1. Always use GET for downloads (GET responses are cacheable; POST responses generally are not)
  2. Implement proper Content-Disposition headers for file naming
  3. Enable compression where applicable, though not for already compressed files like ISOs (see the configuration sketch below)
  4. Consider CDN distribution for globally accessed large files
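For point 3, you might explicitly disable compression for a downloads location, extending the earlier Nginx example (a sketch):

# Skip gzip for already-compressed archives such as ISO images
location /downloads/ {
    gzip off;
}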

The key takeaway is that HTTP methods only apply to requests - responses simply carry a status code, headers, and the requested data. For large files, proper implementation of Range requests and streaming is what makes the difference between a robust and a fragile download experience.