# Chunk Size Quick Reference

## TL;DR

**You control the chunk size in your upload script.** The server doesn't care what size you use: it accepts ANY chunk size and reassembles the chunks sequentially.
## Recommended Sizes

| Environment | Chunk Size | Reason |
|-------------|-----------|--------|
| **Tanzu (low memory)** | **1MB** | Safe for the 10MB direct memory limit |
| **Strict nginx** | **512KB - 1MB** | Works with any nginx config |
| **Normal setup** | **2-5MB** | Good balance of speed vs. safety |
| **High bandwidth** | **5-10MB** | Faster uploads, fewer requests |
## Setting Chunk Size

### Bash Script

```bash
CHUNK_SIZE=1048576  # 1MB
```
### JavaScript

```javascript
const CHUNK_SIZE = 1 * 1024 * 1024; // 1MB
```

### Python

```python
CHUNK_SIZE = 1 * 1024 * 1024  # 1MB
```
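
Whichever language you use, the value only controls how the file is sliced before upload. A minimal sketch of the slicing step in bash (GNU `split` assumed; the `chunks/` directory, `chunk_` prefix, and `app.jar` name are illustrative, and your `deploy-chunked.sh` may already do this internally):

```bash
#!/usr/bin/env bash
set -euo pipefail

CHUNK_SIZE=$((1 * 1024 * 1024))   # 1MB, same value as 1048576

# Cut the JAR into numbered pieces of exactly CHUNK_SIZE bytes each
# (the last piece may be smaller): chunks/chunk_000, chunks/chunk_001, ...
mkdir -p chunks
split -b "$CHUNK_SIZE" -d -a 3 app.jar chunks/chunk_

echo "Chunks to upload: $(ls chunks | wc -l)"
```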
## Common Sizes in Bytes

| Size | Bytes | Setting |
|------|-------|---------|
| 100KB | 102,400 | `CHUNK_SIZE=102400` |
| 256KB | 262,144 | `CHUNK_SIZE=262144` |
| 512KB | 524,288 | `CHUNK_SIZE=524288` |
| **1MB** | **1,048,576** | **`CHUNK_SIZE=1048576`** ✅ |
| **2MB** | **2,097,152** | **`CHUNK_SIZE=2097152`** ✅ |
| 5MB | 5,242,880 | `CHUNK_SIZE=5242880` |
| 10MB | 10,485,760 | `CHUNK_SIZE=10485760` |
## Trade-offs

### Smaller Chunks (100KB - 1MB)

- ✅ Less memory per request
- ✅ Works with ANY nginx config
- ✅ Safe for Tanzu low-memory instances
- ❌ More HTTP requests
- ❌ Slower overall upload

### Larger Chunks (5MB - 10MB)

- ✅ Fewer HTTP requests
- ✅ Faster overall upload
- ❌ More memory needed
- ❌ May exceed nginx limits
- ❌ Can cause OutOfMemoryError on Tanzu
## For Your Tanzu Issue

Based on your `OutOfMemoryError: Cannot reserve 10485760 bytes of direct buffer memory`:

**Use 1MB chunks:**

```bash
CHUNK_SIZE=1048576  # 1MB
```

This keeps each request under 1MB, well below your 10MB direct memory limit, leaving plenty of headroom for multiple concurrent requests and garbage collection delays.
## Testing

Quick test with different chunk sizes:

```bash
# Test with 512KB chunks
CHUNK_SIZE=524288 ./deploy-chunked.sh

# Test with 1MB chunks
CHUNK_SIZE=1048576 ./deploy-chunked.sh

# Test with 2MB chunks
CHUNK_SIZE=2097152 ./deploy-chunked.sh
```
Watch the logs:

```bash
cf logs cf-deployer --recent | grep "Received chunk"
```

If you see OutOfMemoryError, use smaller chunks.
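
To sweep several candidate sizes in one run, the same variable can be set in a loop (a sketch; it assumes `deploy-chunked.sh` reads `CHUNK_SIZE` from the environment, as in the examples above):

```bash
# Try progressively larger chunks and stop at the first size that fails
for size in 524288 1048576 2097152 5242880; do
  echo "=== Testing CHUNK_SIZE=$size ==="
  CHUNK_SIZE="$size" ./deploy-chunked.sh || { echo "Failed at $size bytes"; break; }
done
```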
## Rules

1. **Chunks MUST be uploaded in order**: 0, 1, 2, 3... (enforced by the server)
2. **All chunks of the same file MUST use the same chunk size** (except the last chunk, which can be smaller)
3. **Different files can use different chunk sizes** (jarFile vs manifest can differ)
4. **Total chunks must be accurate**: calculate it as `ceil(file_size / chunk_size)` (see the sketch after this list)
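
A minimal bash sketch of rules 1 and 4, slicing the JAR on the fly with `dd`; the upload URL and form field names (`jarFile`, `chunkIndex`, `totalChunks`) are placeholders for whatever your server actually expects:

```bash
#!/usr/bin/env bash
set -euo pipefail

CHUNK_SIZE=1048576
JAR=app.jar
UPLOAD_URL="https://cf-deployer.example.com/upload"    # placeholder URL

# Rule 4: total chunks = ceil(file_size / chunk_size), via integer arithmetic
FILE_SIZE=$(stat -c%s "$JAR")                           # GNU stat; on macOS use: stat -f%z
TOTAL_CHUNKS=$(( (FILE_SIZE + CHUNK_SIZE - 1) / CHUNK_SIZE ))
echo "Uploading $TOTAL_CHUNKS chunks of up to $CHUNK_SIZE bytes"

# Rule 1: upload strictly in order 0, 1, 2, ...
for (( i = 0; i < TOTAL_CHUNKS; i++ )); do
  dd if="$JAR" of=chunk.bin bs="$CHUNK_SIZE" skip="$i" count=1 status=none
  curl -sf -X POST "$UPLOAD_URL" \
       -F "jarFile=@chunk.bin" \
       -F "chunkIndex=$i" \
       -F "totalChunks=$TOTAL_CHUNKS"    # field names are placeholders
done
rm -f chunk.bin
```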
## Example

For a 50MB JAR file:

| Chunk Size | Number of Chunks | Total Requests |
|-----------|-----------------|----------------|
| 512KB | 100 chunks | ~100 requests |
| 1MB | 50 chunks | ~50 requests |
| 2MB | 25 chunks | ~25 requests |
| 5MB | 10 chunks | ~10 requests |

All of these work equally well; pick based on your constraints!
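
The counts above follow directly from the ceiling formula; a quick shell check (assuming 50MB means 52,428,800 bytes):

```bash
FILE_SIZE=$((50 * 1024 * 1024))   # 50MB = 52428800 bytes
for CHUNK_SIZE in 524288 1048576 2097152 5242880; do
  echo "$CHUNK_SIZE bytes -> $(( (FILE_SIZE + CHUNK_SIZE - 1) / CHUNK_SIZE )) chunks"
done
```

The output matches the table: 100, 50, 25, and 10 chunks.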