How to Fix S3 Upload Errors in Kubernetes Environments

Published on 20 Oct 2025
Last updated 04 May 2026
Reading Time 10 minutes
Written By Can Şentay
Executive Summary
This document details a critical issue encountered during uploads to an S3-compatible object storage service from a Kubernetes environment. Users were intermittently experiencing ValidationError and NotFound errors during the UploadPart operation. The errors occurred more frequently when using path-style access and uploading smaller file chunks, while downloads from the same buckets succeeded consistently.
After detailed analysis, the root cause was identified as an incompatibility in how the S3-compatible service processes chunked uploads when the Content-Length header is not explicitly defined. This behavior deviates from standard S3 protocols and leads to the observed upload errors.
The solution involves applying a configuration change to ensure proper checksum calculation for upload requests. This can be implemented programmatically using botocore.config.Config in the boto3 client, or globally via environment variables (AWS_REQUEST_CHECKSUM_CALCULATION, AWS_RESPONSE_CHECKSUM_VALIDATION) or the AWS CLI configuration file. Setting request_checksum_calculation to when_required makes the client compute and transmit checksums only when an operation requires them, ensuring compatibility with the S3-compatible service. This significantly improves the reliability of Kubernetes-based S3 uploads, ensuring data integrity and operational stability.
-
Problem Description
Users were consistently encountering errors when uploading files from Kubernetes to an S3-compatible service. These errors appeared as ValidationError during UploadPart operations and container-related NotFound errors. The issue was particularly observed when using path-style access and handling smaller file chunks.

Interestingly, download operations from the same S3-compatible service and bucket were consistently successful. This indicated that the problem was not a general connectivity issue but specific to upload operations. The VZ team confirmed that the S3-compatible service does not support chunked uploads when the Content-Length is not explicitly defined. The customer confirmed using Python and boto3 with default configurations, without passing explicit arguments such as ExtraArgs, Callback, or Config. The issue was tracked internally as DTCS-3193.
-
Root Cause Analysis
The investigation revealed a critical mismatch between the client upload mechanism and how the S3-compatible service processes chunked data. Specifically, the ValidationError and NotFound errors during UploadPart operations were caused by a service limitation: it does not support chunked uploads without a defined Content-Length.

This limitation is exacerbated when using path-style access. While virtual-host style is generally preferred for compatibility, path-style access can expose underlying service differences. The customer relied on default boto3 behavior without custom configuration. Combined with the service limitation, this resulted in ValidationError. The NotFound errors are likely cascading effects where the service cannot properly locate or process incomplete uploads.

Confirmation from the VZ team reinforced this finding, indicating that their service does not support chunked uploads without Content-Length. This points directly to boto3's default multipart upload behavior as the root cause.
-
Solution Architecture and Implementation
The core of the solution addresses the S3-compatible service's limitation regarding chunked uploads with an undefined content length. This is achieved by configuring request_checksum_calculation. Setting this parameter to when_required instructs boto3 to calculate checksums only when an operation explicitly requires them, avoiding the problematic chunked-upload behavior.
Method 1: Boto3 Code Configuration
import boto3
from botocore.config import Config

s3_client = boto3.client(
    's3',
    endpoint_url='YOUR_S3_ENDPOINT',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
    # Disables TLS certificate verification; only use with trusted,
    # self-signed test endpoints.
    verify=False,
    config=Config(
        # SigV4 signing (replaces the invalid s3_use_sigv4 flag).
        signature_version='s3v4',
        # Path-style addressing is set via the nested s3 dict.
        s3={'addressing_style': 'path'},
        request_checksum_calculation='when_required',
        response_checksum_validation='when_required',
    ),
)

# s3_client.upload_file(...)
Method 2: Environment Variables
export AWS_REQUEST_CHECKSUM_CALCULATION=WHEN_REQUIRED
export AWS_RESPONSE_CHECKSUM_VALIDATION=WHEN_REQUIRED
# export AWS_ACCESS_KEY_ID=...
# export AWS_SECRET_ACCESS_KEY=...
# export AWS_ENDPOINT_URL=...
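Since the affected workloads run in Kubernetes, the same variables can be injected through the pod spec rather than a shell. The Deployment name, labels, container name, and image below are hypothetical placeholders; only the env entries carry the fix.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-uploader            # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3-uploader
  template:
    metadata:
      labels:
        app: s3-uploader
    spec:
      containers:
        - name: uploader                       # hypothetical container
          image: my-registry/uploader:latest   # hypothetical image
          env:
            - name: AWS_REQUEST_CHECKSUM_CALCULATION
              value: "WHEN_REQUIRED"
            - name: AWS_RESPONSE_CHECKSUM_VALIDATION
              value: "WHEN_REQUIRED"
```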
Method 3: AWS CLI Configuration
[default]
output = json
region = us-east-1

[profile my-s3-profile]
request_checksum_calculation = when_required
response_checksum_validation = when_required
s3 =
    endpoint_url = YOUR_S3_ENDPOINT
    signature_version = s3v4
    addressing_style = path
-
Benefits and Results
Implementing this solution directly resolved the S3 upload errors from Kubernetes pods, significantly improving reliability. The configuration works effectively even under specific conditions such as path-style access and small file chunks.
Additionally, this improves compatibility with various S3-compatible storage solutions, ensures data integrity, reduces operational overhead, and establishes a more robust and predictable data transfer pipeline.