cf-uploader/MEMORY_FIX.md
2025-10-22 09:18:11 -05:00


# Fix for `OutOfMemoryError: Cannot reserve direct buffer memory`

## Problem

```
java.lang.OutOfMemoryError: Cannot reserve 10485760 bytes of direct buffer memory
```

This occurs because Tanzu's default JVM configuration allocates very little direct (off-heap) memory, and multipart file uploads use direct buffers.
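To see where that memory goes: direct buffers are allocated off-heap via `ByteBuffer.allocateDirect`, and the total is capped by `-XX:MaxDirectMemorySize`. The snippet below is just an illustration of that mechanism, not code from this app:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {

    // Direct buffers live outside the Java heap and count against
    // -XX:MaxDirectMemorySize. When that limit is exhausted, allocation
    // fails with "OutOfMemoryError: Cannot reserve ... bytes of direct
    // buffer memory" -- the error seen above.
    static ByteBuffer reserve(int bytes) {
        return ByteBuffer.allocateDirect(bytes);
    }

    public static void main(String[] args) {
        ByteBuffer buf = reserve(10 * 1024 * 1024); // 10485760 bytes, as in the error
        System.out.println("direct=" + buf.isDirect() + " capacity=" + buf.capacity());
    }
}
```

With a small direct-memory limit, a handful of concurrent 5MB or 10MB multipart buffers is enough to exhaust it, which is why the fixes below both shrink the buffers and raise the limit.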

## Solutions Applied

### 1. Code Changes (Already Applied)

- `ChunkedUploadService.java` - stream chunks through 8KB buffers instead of loading each entire chunk into memory
- `MultipartConfig.java` - write all uploads directly to disk (`file-size-threshold=0`)
- `application.properties` - reduce chunk size from 5MB to 2MB and enable disk-based uploads
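The actual `ChunkedUploadService` code isn't shown here, but the 8KB streaming change can be sketched roughly like this (class and method names are hypothetical):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChunkStreamer {

    // Copies an incoming chunk to disk through a fixed 8KB buffer, so peak
    // memory use stays constant regardless of chunk size -- unlike reading
    // the whole part with getBytes(), which materializes the chunk at once.
    static long streamToDisk(InputStream in, Path target) {
        byte[] buffer = new byte[8192]; // fixed 8KB window over the stream
        long total = 0;
        try (OutputStream out = Files.newOutputStream(target)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }
}
```

The key point is that `buffer` is the only in-memory copy of the data, so a 2MB (or 200MB) chunk never needs 2MB of heap or direct memory at once.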

### 2. Tanzu Manifest Configuration (You Need to Apply)

#### Option A: Set in `manifest.yml`

Create or update your manifest.yml:

```yaml
applications:
- name: cf-deployer
  memory: 1G
  instances: 1
  path: build/libs/cf-deployer.jar
  buildpacks:
    - java_buildpack
  env:
    # Increase direct memory allocation
    JAVA_TOOL_OPTIONS: "-XX:MaxDirectMemorySize=256m -XX:+UseG1GC"
    # Alternative if using Java Buildpack Memory Calculator
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 17.+ }, memory_calculator: { memory_sizes: { metaspace: 128m, direct: 256m } } }'
```

Then deploy:

```bash
cf push
```

#### Option B: Set environment variable directly

```bash
# Increase direct memory to 256MB
cf set-env cf-deployer JAVA_TOOL_OPTIONS "-XX:MaxDirectMemorySize=256m -XX:+UseG1GC"

# Restage to apply changes
cf restage cf-deployer
```

#### Option C: Increase overall memory

If you have more memory available:

```bash
# Increase app memory to 2GB (gives more headroom)
cf scale cf-deployer -m 2G
```

Or in `manifest.yml`:

```yaml
memory: 2G
```

### 3. Client-Side Changes

Update your client to use 2MB chunks instead of 5MB:

Bash script:

```bash
CHUNK_SIZE=2097152  # 2MB instead of 5MB
```

JavaScript:

```javascript
const CHUNK_SIZE = 2 * 1024 * 1024; // 2MB
```

Python:

```python
CHUNK_SIZE = 2 * 1024 * 1024  # 2MB
```

## Verification

After applying fixes, check the logs:

```bash
cf logs cf-deployer --recent
```

You should see successful chunk uploads:

```
2025-10-21 16:30:00 - Session xxx: Received chunk 1/50 for jarFile (2097152 bytes)
2025-10-21 16:30:01 - Session xxx: Received chunk 2/50 for jarFile (2097152 bytes)
```

## Why This Works

1. `file-size-threshold=0` - Spring writes uploads directly to disk instead of buffering them in memory
2. Streaming chunks - chunks are read and written through 8KB buffers instead of being loaded whole
3. Smaller chunks - 2MB chunks use less memory than 5MB chunks
4. Increased direct memory - more headroom for the JVM's direct buffers
5. G1GC - better garbage collection for managing off-heap memory
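Point 1 corresponds to the standard Spring Boot multipart properties; a minimal `application.properties` sketch (the size limits shown are illustrative assumptions, not this app's actual values):

```properties
# Write every multipart part straight to disk, never buffer it in memory
spring.servlet.multipart.file-size-threshold=0
# Temp directory for the spooled parts
spring.servlet.multipart.location=${java.io.tmpdir}
# Per-file and per-request caps (assumed values)
spring.servlet.multipart.max-file-size=5MB
spring.servlet.multipart.max-request-size=10MB
```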

## Testing

Test with a small file first:

```bash
# Create test session
SESSION_ID=$(curl -s -X POST https://your-app.apps.cf.example.com/api/cf/upload/init \
  -H "Content-Type: application/json" \
  -d '{"apiEndpoint":"https://api.cf.example.com","username":"user","password":"pass","organization":"org","space":"space","appName":"test","skipSslValidation":false}' \
  | grep -o '"uploadSessionId":"[^"]*' | cut -d'"' -f4)

# Upload a 2MB chunk
head -c 2097152 /dev/urandom > test-chunk.bin

curl -X POST "https://your-app.apps.cf.example.com/api/cf/upload/chunk" \
  -F "uploadSessionId=$SESSION_ID" \
  -F "fileType=jarFile" \
  -F "chunkIndex=0" \
  -F "totalChunks=1" \
  -F "fileName=test.jar" \
  -F "chunk=@test-chunk.bin"
```

If this succeeds, the fix is working!

## Production Configuration

For production deployments handling large files:

```yaml
applications:
- name: cf-deployer
  memory: 2G                    # Total memory
  disk_quota: 2G                # Disk for temp files
  instances: 2                  # For high availability
  health-check-type: http
  health-check-http-endpoint: /actuator/health
  env:
    JAVA_TOOL_OPTIONS: "-XX:MaxDirectMemorySize=512m -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 17.+ }, memory_calculator: { memory_sizes: { direct: 512m, metaspace: 128m, reserved: 256m } } }'
```

This gives you:

- 512MB direct memory (plenty for chunked uploads)
- G1 garbage collector (better for large objects)
- 2GB total memory (Java heap + direct + metaspace + overhead)
- Health check endpoint for monitoring