Cleanup: improve pod naming, remove dead code, update docs

Mondo Diaz
2026-01-14 14:47:11 -06:00
parent 1bb0c4e911
commit 32162c4ec7
7 changed files with 122 additions and 158 deletions

View File

@@ -26,13 +26,9 @@ stages:
   - deploy

 kics:
-  allow_failure: true
   variables:
     KICS_CONFIG: kics.config

-hadolint:
-  allow_failure: true
-
 # Post-deployment integration tests template
 .integration_test_template: &integration_test_template
   stage: deploy  # Runs in deploy stage, but after deployment due to 'needs'

@@ -179,7 +175,7 @@ frontend_tests:
 # Shared deploy configuration
 .deploy_template: &deploy_template
   stage: deploy
-  needs: [build_image]
+  needs: [build_image, kics, hadolint, python_tests, frontend_tests]
   image: deps.global.bsf.tools/registry-1.docker.io/alpine/k8s:1.29.12

 .helm_setup: &helm_setup

@@ -250,7 +246,7 @@ deploy_stage:
       --set image.tag=git.linux-amd64-$CI_COMMIT_SHA \
       --wait \
       --timeout 5m
-    - kubectl rollout status deployment/orchard-stage -n $NAMESPACE --timeout=5m
+    - kubectl rollout status deployment/orchard-stage-server -n $NAMESPACE --timeout=5m
    - *verify_deployment
   environment:
     name: stage

@@ -285,7 +281,7 @@ deploy_feature:
       --set minioIngress.tls.secretName=minio-$CI_COMMIT_REF_SLUG-tls \
       --wait \
       --timeout 5m
-    - kubectl rollout status deployment/orchard-$CI_COMMIT_REF_SLUG -n $NAMESPACE --timeout=5m
+    - kubectl rollout status deployment/orchard-$CI_COMMIT_REF_SLUG-server -n $NAMESPACE --timeout=5m
     - export BASE_URL="https://orchard-$CI_COMMIT_REF_SLUG.common.global.bsf.tools"
     - *verify_deployment
   environment:
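
Since the rollout checks now target the `-server` suffixed Deployment names (produced by the Helm helper change at the bottom of this commit), here is a hedged sketch for confirming the rendered name by hand; it assumes kubectl access to the cluster and that `NAMESPACE` matches the pipeline's variable (the value shown is illustrative):

```bash
# List Deployments in the target namespace and confirm the -server name exists.
NAMESPACE=orch-stage-namespace
kubectl get deployments -n "$NAMESPACE" -o name | grep -- '-server$'

# Then watch the rollout the same way the pipeline does:
kubectl rollout status deployment/orchard-stage-server -n "$NAMESPACE" --timeout=5m
```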

View File

@@ -11,6 +11,20 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Added `deploy_feature` job with dynamic hostnames and unique release names (#51)
 - Added `cleanup_feature` job with `on_stop` for automatic cleanup on merge (#51)
 - Added `values-dev.yaml` Helm values for lightweight ephemeral environments (#51)
+- Added main branch deployment to stage environment (#51)
+- Added post-deployment integration tests (#51)
+- Added internal proxy configuration for npm, pip, helm, and apt (#51)
+
+### Changed
+- Improved pod naming: Orchard pods now named `orchard-{env}-server-*` for clarity (#51)
+
+### Fixed
+- Fixed `cleanup_feature` job failing when branch is deleted (`GIT_STRATEGY: none`) (#51)
+- Fixed gitleaks false positives with fingerprints for historical commits (#51)
+- Fixed integration tests running when deploy fails (`when: on_success`) (#51)
+
+### Removed
+- Removed unused `store_streaming()` method from storage.py (#51)
+
 ## [0.4.0] - 2026-01-12

 ### Added
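
The gitleaks fingerprint fix noted above is straightforward to reproduce locally. A rough sketch, assuming gitleaks v8 and its standard flags (the pipeline's exact invocation may differ):

```bash
# Scan the repo and write findings, including their fingerprints, to JSON.
gitleaks detect --source . --report-format json --report-path gitleaks.json

# Each finding's "Fingerprint" value can be added to .gitleaksignore
# to suppress that specific historical match in future scans.
```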

View File

@@ -46,6 +46,12 @@ Orchard is a centralized binary artifact storage system that provides content-ad
   - `.whl` - Python wheels (name, version, author)
   - `.jar` - Java JARs (manifest info, Maven coordinates)
   - `.zip` - ZIP files (file count, uncompressed size)
+- **Authentication** - Multiple authentication methods:
+  - Session-based login with username/password
+  - API keys for programmatic access (`orch_` prefixed tokens)
+  - OIDC integration for SSO
+  - Admin user management
+- **Garbage Collection** - Clean up orphaned artifacts (ref_count=0) via admin API

 ### API Endpoints

@@ -522,15 +528,48 @@ Configuration is provided via environment variables prefixed with `ORCHARD_`:
 | `ORCHARD_DOWNLOAD_MODE` | Download mode: `presigned`, `redirect`, or `proxy` | `presigned` |
 | `ORCHARD_PRESIGNED_URL_EXPIRY` | Presigned URL expiry in seconds | `3600` |

+## CI/CD Pipeline
+
+The GitLab CI/CD pipeline automates building, testing, and deploying Orchard.
+
+### Pipeline Stages
+
+| Stage | Jobs | Description |
+|-------|------|-------------|
+| lint | `kics`, `hadolint`, `secrets` | Security and code quality scanning |
+| build | `build_image` | Build and push Docker image |
+| test | `python_tests`, `frontend_tests` | Run unit tests with coverage |
+| deploy | `deploy_stage`, `deploy_feature` | Deploy to Kubernetes |
+| deploy | `integration_test_*` | Post-deployment integration tests |
+
+### Environments
+
+| Environment | Branch | Namespace | URL |
+|-------------|--------|-----------|-----|
+| Stage | `main` | `orch-stage-namespace` | `orchard-stage.common.global.bsf.tools` |
+| Feature | `*` (non-main) | `orch-dev-namespace` | `orchard-{branch}.common.global.bsf.tools` |
+
+### Feature Branch Workflow
+
+1. Push a feature branch
+2. Pipeline builds, tests, and deploys to an isolated environment
+3. Integration tests run against the deployed environment
+4. GitLab UI shows an environment link for manual testing
+5. On merge to main, the environment is automatically cleaned up
+6. Environments also auto-expire after 1 week if the branch is not deleted
+
+### Manual Cleanup
+
+Feature environments can be manually cleaned up via:
+- GitLab UI: Environments → Stop environment
+- CLI: `helm uninstall orchard-{branch} -n orch-dev-namespace`
+
 ## Kubernetes Deployment

 ### Using Helm

 ```bash
-# Add Bitnami repo for dependencies
-helm repo add bitnami https://charts.bitnami.com/bitnami
-
-# Update dependencies
+# Update dependencies (uses internal OCI registry)
 cd helm/orchard
 helm dependency update

@@ -593,10 +632,16 @@ The following features are planned but not yet implemented:
 - [ ] Export/Import for air-gapped systems
 - [ ] Consumer notification
 - [ ] Automated update propagation
-- [ ] OIDC/SAML authentication
-- [ ] API key management
+- [ ] SAML authentication
 - [ ] Redis caching layer
-- [ ] Garbage collection for orphaned artifacts
+- [ ] Download integrity verification (see `docs/design/integrity-verification.md`)
+
+### Recently Implemented
+
+- [x] OIDC authentication
+- [x] API key management
+- [x] Garbage collection for orphaned artifacts
+- [x] User authentication with sessions

 ## License
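
For the manual CLI cleanup described in the README section above, the release name uses the branch's slug, not the raw branch name. A hedged sketch that approximates GitLab's `CI_COMMIT_REF_SLUG` locally (lowercased, runs of non-alphanumerics replaced with `-`, trimmed to 63 chars; edge cases may differ from GitLab's exact rules):

```bash
# Tear down the feature environment for the current branch.
SLUG=$(git rev-parse --abbrev-ref HEAD | tr '[:upper:]' '[:lower:]' \
  | sed -E 's/[^a-z0-9]+/-/g; s/^-+|-+$//g' | cut -c1-63)
helm uninstall "orchard-$SLUG" -n orch-dev-namespace
```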

View File

@@ -6,7 +6,6 @@ from typing import (
     Optional,
     Dict,
     Any,
-    Generator,
     NamedTuple,
     Protocol,
     runtime_checkable,
@@ -511,127 +510,6 @@ class S3Storage:
             )
             raise

-    def store_streaming(self, chunks: Generator[bytes, None, None]) -> StorageResult:
-        """
-        Store a file from a stream of chunks.
-
-        First accumulates to compute hash, then uploads.
-        For truly large files, consider using initiate_resumable_upload instead.
-        """
-        # Accumulate chunks and compute all hashes
-        sha256_hasher = hashlib.sha256()
-        md5_hasher = hashlib.md5()
-        sha1_hasher = hashlib.sha1()
-        all_chunks = []
-        size = 0
-        for chunk in chunks:
-            sha256_hasher.update(chunk)
-            md5_hasher.update(chunk)
-            sha1_hasher.update(chunk)
-            all_chunks.append(chunk)
-            size += len(chunk)
-
-        sha256_hash = sha256_hasher.hexdigest()
-        md5_hash = md5_hasher.hexdigest()
-        sha1_hash = sha1_hasher.hexdigest()
-        s3_key = f"fruits/{sha256_hash[:2]}/{sha256_hash[2:4]}/{sha256_hash}"
-        s3_etag = None
-
-        # Check if already exists
-        if self._exists(s3_key):
-            obj_info = self.get_object_info(s3_key)
-            s3_etag = obj_info.get("etag", "").strip('"') if obj_info else None
-            return StorageResult(
-                sha256=sha256_hash,
-                size=size,
-                s3_key=s3_key,
-                md5=md5_hash,
-                sha1=sha1_hash,
-                s3_etag=s3_etag,
-                already_existed=True,
-            )
-
-        # Upload based on size
-        if size < MULTIPART_THRESHOLD:
-            content = b"".join(all_chunks)
-            response = self.client.put_object(
-                Bucket=self.bucket, Key=s3_key, Body=content
-            )
-            s3_etag = response.get("ETag", "").strip('"')
-        else:
-            # Use multipart for large files
-            mpu = self.client.create_multipart_upload(Bucket=self.bucket, Key=s3_key)
-            upload_id = mpu["UploadId"]
-            try:
-                parts = []
-                part_number = 1
-                buffer = b""
-                for chunk in all_chunks:
-                    buffer += chunk
-                    while len(buffer) >= MULTIPART_CHUNK_SIZE:
-                        part_data = buffer[:MULTIPART_CHUNK_SIZE]
-                        buffer = buffer[MULTIPART_CHUNK_SIZE:]
-                        response = self.client.upload_part(
-                            Bucket=self.bucket,
-                            Key=s3_key,
-                            UploadId=upload_id,
-                            PartNumber=part_number,
-                            Body=part_data,
-                        )
-                        parts.append(
-                            {
-                                "PartNumber": part_number,
-                                "ETag": response["ETag"],
-                            }
-                        )
-                        part_number += 1
-
-                # Upload remaining buffer
-                if buffer:
-                    response = self.client.upload_part(
-                        Bucket=self.bucket,
-                        Key=s3_key,
-                        UploadId=upload_id,
-                        PartNumber=part_number,
-                        Body=buffer,
-                    )
-                    parts.append(
-                        {
-                            "PartNumber": part_number,
-                            "ETag": response["ETag"],
-                        }
-                    )
-
-                complete_response = self.client.complete_multipart_upload(
-                    Bucket=self.bucket,
-                    Key=s3_key,
-                    UploadId=upload_id,
-                    MultipartUpload={"Parts": parts},
-                )
-                s3_etag = complete_response.get("ETag", "").strip('"')
-            except Exception as e:
-                logger.error(f"Streaming multipart upload failed: {e}")
-                self.client.abort_multipart_upload(
-                    Bucket=self.bucket,
-                    Key=s3_key,
-                    UploadId=upload_id,
-                )
-                raise
-
-        return StorageResult(
-            sha256=sha256_hash,
-            size=size,
-            s3_key=s3_key,
-            md5=md5_hash,
-            sha1=sha1_hash,
-            s3_etag=s3_etag,
-            already_existed=False,
-        )
-
     def initiate_resumable_upload(self, expected_hash: str) -> Dict[str, Any]:
         """
         Initiate a resumable upload session.
View File

@@ -46,8 +46,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 1g
-    cpus: 1.0
+    deploy:
+      resources:
+        limits:
+          cpus: '1.0'
+          memory: 1G

   postgres:
     image: postgres:16-alpine

@@ -72,8 +75,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 512m
-    cpus: 0.5
+    deploy:
+      resources:
+        limits:
+          cpus: '0.5'
+          memory: 512M

   minio:
     image: minio/minio:latest

@@ -98,8 +104,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 512m
-    cpus: 0.5
+    deploy:
+      resources:
+        limits:
+          cpus: '0.5'
+          memory: 512M

   minio-init:
     image: minio/mc:latest

@@ -119,8 +128,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 128m
-    cpus: 0.25
+    deploy:
+      resources:
+        limits:
+          cpus: '0.25'
+          memory: 128M

   redis:
     image: redis:7-alpine

@@ -141,8 +153,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 256m
-    cpus: 0.25
+    deploy:
+      resources:
+        limits:
+          cpus: '0.25'
+          memory: 256M

 volumes:
   postgres-data-local:

View File

@@ -44,8 +44,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 1g
-    cpus: 1.0
+    deploy:
+      resources:
+        limits:
+          cpus: '1.0'
+          memory: 1G

   postgres:
     image: containers.global.bsf.tools/postgres:16-alpine

@@ -70,8 +73,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 512m
-    cpus: 0.5
+    deploy:
+      resources:
+        limits:
+          cpus: '0.5'
+          memory: 512M

   minio:
     image: containers.global.bsf.tools/minio/minio:latest

@@ -96,8 +102,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 512m
-    cpus: 0.5
+    deploy:
+      resources:
+        limits:
+          cpus: '0.5'
+          memory: 512M

   minio-init:
     image: containers.global.bsf.tools/minio/mc:latest

@@ -117,8 +126,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 128m
-    cpus: 0.25
+    deploy:
+      resources:
+        limits:
+          cpus: '0.25'
+          memory: 128M

   redis:
     image: containers.global.bsf.tools/redis:7-alpine

@@ -139,8 +151,11 @@ services:
       - no-new-privileges:true
     cap_drop:
       - ALL
-    mem_limit: 256m
-    cpus: 0.25
+    deploy:
+      resources:
+        limits:
+          cpus: '0.25'
+          memory: 256M

 volumes:
   postgres-data:
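
One note on the `mem_limit`/`cpus` → `deploy.resources.limits` migration in both compose files: Compose v2 applies `deploy.resources.limits` outside Swarm, whereas legacy docker-compose v1 only honored `deploy.*` keys with `--compatibility`. A quick way to confirm the limits are parsed as intended:

```bash
# Render the effective configuration and inspect the per-service limits.
docker compose config | grep -A4 'resources:'

# Compose v2 then enforces these limits on the containers it starts.
docker compose up -d
```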

View File

@@ -7,6 +7,7 @@ Expand the name of the chart.

 {{/*
 Create a default fully qualified app name.
+Appends "-server" to distinguish from subcharts (minio, postgresql, redis).
 */}}
 {{- define "orchard.fullname" -}}
 {{- if .Values.fullnameOverride }}

@@ -14,9 +15,9 @@ Create a default fully qualified app name.
 {{- else }}
 {{- $name := default .Chart.Name .Values.nameOverride }}
 {{- if contains $name .Release.Name }}
-{{- .Release.Name | trunc 63 | trimSuffix "-" }}
+{{- printf "%s-server" .Release.Name | trunc 63 | trimSuffix "-" }}
 {{- else }}
-{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
+{{- printf "%s-%s-server" .Release.Name $name | trunc 63 | trimSuffix "-" }}
 {{- end }}
 {{- end }}
 {{- end }}
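
To see the effect of the fullname change without deploying, the chart can be rendered locally. A minimal sketch, assuming a release name of `orchard-stage` (mirroring the pipeline's stage release):

```bash
# Render the chart and inspect the names the fullname helper produces.
cd helm/orchard
helm dependency update
helm template orchard-stage . | grep -E 'kind:|name:' | head -20
# Because the release name contains the chart name, the helper takes the
# first branch and yields "orchard-stage-server"; pods of that Deployment
# then get names like orchard-stage-server-<hash>, matching the rollout
# checks in the updated CI jobs.
```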