10 Commits

Author SHA1 Message Date
Mondo Diaz
2843335f6d Document curl -OJ flag for correct download filenames
- Update download examples to use -OJ flag
- Add note explaining curl download flags (-O, -J, -OJ, -o)
- Add example for saving to a specific filename
2025-12-12 13:53:15 -06:00
Mondo Diaz
2097865874 Remove redundant search bar from Home, rename page filters
- Remove SearchInput from Home page (use GlobalSearch in header instead)
- Rename "Search packages..." to "Filter packages..." on ProjectPage
- Rename "Search tags..." to "Filter tags..." on PackagePage
- Update FilterChip labels from "Search" to "Filter"

This differentiates the global search (header) from page-level filtering.
2025-12-12 12:55:31 -06:00
Mondo Diaz
0e1474bf6c Merge branch 'feature/database-storage-layer' into 'main'
Implement database storage layer

Closes #17

See merge request esv/bsf/bsf-integration/orchard/orchard-mvp!11
2025-12-12 12:45:33 -06:00
Mondo Diaz
9604540dd3 Implement database storage layer 2025-12-12 12:45:33 -06:00
Mondo Diaz
a6df5aba5a Merge branch 'feature/search-filtering-enhancements' into 'main'
Add global search and filtering enhancements

Closes #6

See merge request esv/bsf/bsf-integration/orchard/orchard-mvp!10
2025-12-12 12:12:46 -06:00
Mondo Diaz
096887d4da Add global search and filtering enhancements 2025-12-12 12:12:46 -06:00
Mondo Diaz
7d80bef39a Fix: restore enhanced tags API endpoints 2025-12-12 10:57:27 -06:00
Mondo Diaz
96198dc127 Merge branch 'fix/restore-merged-features' 2025-12-12 10:55:19 -06:00
Mondo Diaz
fd06dfb3ce Reapply "Add API endpoints for listing tagged versions and artifacts"
This reverts commit 11852adc66.
2025-12-12 10:55:15 -06:00
Mondo Diaz
5d0122fc36 Revert "Add API endpoints for listing tagged versions and artifacts"
This reverts commit 54e33e67ce.
2025-12-12 10:33:21 -06:00
30 changed files with 2724 additions and 39 deletions


@@ -275,14 +275,17 @@ curl -X POST http://localhost:8080/api/v1/project/my-project/releases/upload/abc
### Download an Artifact
```bash
# By tag
curl -O http://localhost:8080/api/v1/project/my-project/releases/+/v1.0.0
# By tag (use -OJ to save with the correct filename from Content-Disposition header)
curl -OJ http://localhost:8080/api/v1/project/my-project/releases/+/v1.0.0
# By artifact ID
curl -O http://localhost:8080/api/v1/project/my-project/releases/+/artifact:a3f5d8e12b4c6789...
curl -OJ http://localhost:8080/api/v1/project/my-project/releases/+/artifact:a3f5d8e12b4c6789...
# Using the short URL pattern
curl -O http://localhost:8080/project/my-project/releases/+/latest
curl -OJ http://localhost:8080/project/my-project/releases/+/latest
# Save to a specific filename
curl -o myfile.tar.gz http://localhost:8080/api/v1/project/my-project/releases/+/v1.0.0
# Partial download (range request)
curl -H "Range: bytes=0-1023" http://localhost:8080/api/v1/project/my-project/releases/+/v1.0.0
@@ -291,6 +294,12 @@ curl -H "Range: bytes=0-1023" http://localhost:8080/api/v1/project/my-project/re
curl -I http://localhost:8080/api/v1/project/my-project/releases/+/v1.0.0
```
> **Note on curl flags:**
> - `-O` saves the file using the URL path as the filename (e.g., `latest`, `v1.0.0`)
> - `-J` tells curl to use the filename from the `Content-Disposition` header (e.g., `app-v1.0.0.tar.gz`)
> - `-OJ` combines both: download to a file using the server-provided filename
> - `-o <filename>` saves to a specific filename you choose
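
For scripted clients, the same header-driven naming can be reproduced in Python; a minimal sketch, assuming the `requests` library is available (the `download` helper below is illustrative, not an Orchard API):

```python
import re
import requests

def download(url: str) -> str:
    """Hypothetical helper: save a response using the Content-Disposition
    filename, falling back to the last URL path segment (curl -OJ behavior)."""
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    disposition = resp.headers.get("Content-Disposition", "")
    match = re.search(r'filename="?([^";]+)"?', disposition)
    filename = match.group(1) if match else url.rstrip("/").rsplit("/", 1)[-1]
    with open(filename, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=8192):
            fh.write(chunk)
    return filename

# e.g. download("http://localhost:8080/api/v1/project/my-project/releases/+/v1.0.0")
```
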
### Create a Tag
```bash

backend/alembic.ini Normal file

@@ -0,0 +1,83 @@
# Alembic Configuration File
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library
# timezone =
# max length of characters to apply to the "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during the 'revision' command,
# regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without a source .py file
# to be detected as revisions in the versions/ directory
# sourceless = false
# version location specification
# version_locations = %(here)s/bar:%(here)s/bat:alembic/versions
# version path separator
# version_path_separator = :
# set to 'true' to search source files recursively
# in each "version_locations" directory
# recursive_version_locations = false
# the output encoding used when revision files are written from script.py.mako
# output_encoding = utf-8
# Database URL - will be overridden by env.py
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts.
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

backend/alembic/README Normal file

@@ -0,0 +1,27 @@
Alembic Migrations for Orchard
This directory contains database migration scripts managed by Alembic.
Common Commands:
# Generate a new migration (autogenerate from model changes)
alembic revision --autogenerate -m "description of changes"
# Apply all pending migrations
alembic upgrade head
# Rollback one migration
alembic downgrade -1
# Show current migration status
alembic current
# Show migration history
alembic history
# Generate SQL without applying (for review)
alembic upgrade head --sql
Notes:
- Always review autogenerated migrations before applying
- Test migrations in development before applying to production
- Migrations are stored in the versions/ directory

backend/alembic/env.py Normal file

@@ -0,0 +1,95 @@
"""
Alembic migration environment configuration.
"""
from logging.config import fileConfig
import sys
from pathlib import Path
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# Add the app directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent))
from app.config import get_settings
from app.models import Base
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Get database URL from settings
settings = get_settings()
config.set_main_option("sqlalchemy.url", settings.database_url)
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata,
compare_type=True, # Detect column type changes
compare_server_default=True, # Detect default value changes
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
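
With this env.py in place, migrations are normally driven by the CLI commands listed in the README above, but they can also be invoked from Python (for example at application startup). A minimal sketch, assuming it runs from the `backend/` directory where `alembic.ini` lives; programmatic invocation is an illustration, not part of this diff:

```python
from alembic import command
from alembic.config import Config

# Load the alembic.ini shown above; env.py then overrides sqlalchemy.url
# with the real database URL from app settings, so nothing is set here.
alembic_cfg = Config("alembic.ini")

# Apply all pending migrations (equivalent to `alembic upgrade head`).
command.upgrade(alembic_cfg, "head")

# Or emit the SQL for review without applying it (offline mode).
command.upgrade(alembic_cfg, "head", sql=True)
```
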


@@ -0,0 +1,26 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}
def upgrade() -> None:
${upgrades if upgrades else "pass"}
def downgrade() -> None:
${downgrades if downgrades else "pass"}


@@ -18,6 +18,12 @@ class Settings(BaseSettings):
database_dbname: str = "orchard"
database_sslmode: str = "disable"
# Database connection pool settings
database_pool_size: int = 5 # Number of connections to keep open
database_max_overflow: int = 10 # Max additional connections beyond pool_size
database_pool_timeout: int = 30 # Seconds to wait for a connection from pool
database_pool_recycle: int = 1800 # Recycle connections after this many seconds (30 min)
# S3
s3_endpoint: str = ""
s3_region: str = "us-east-1"


@@ -1,7 +1,10 @@
from sqlalchemy import create_engine, text
from sqlalchemy import create_engine, text, event
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.pool import QueuePool
from typing import Generator
from contextlib import contextmanager
import logging
import time
from .config import get_settings
from .models import Base
@@ -9,10 +12,44 @@ from .models import Base
settings = get_settings()
logger = logging.getLogger(__name__)
engine = create_engine(settings.database_url, pool_pre_ping=True)
# Create engine with connection pool configuration
engine = create_engine(
settings.database_url,
pool_pre_ping=True, # Check connection health before using
poolclass=QueuePool,
pool_size=settings.database_pool_size,
max_overflow=settings.database_max_overflow,
pool_timeout=settings.database_pool_timeout,
pool_recycle=settings.database_pool_recycle,
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# Connection pool monitoring
@event.listens_for(engine, "checkout")
def receive_checkout(dbapi_connection, connection_record, connection_proxy):
"""Log when a connection is checked out from the pool"""
logger.debug(f"Connection checked out from pool: {id(dbapi_connection)}")
@event.listens_for(engine, "checkin")
def receive_checkin(dbapi_connection, connection_record):
"""Log when a connection is returned to the pool"""
logger.debug(f"Connection returned to pool: {id(dbapi_connection)}")
def get_pool_status() -> dict:
"""Get current connection pool status for monitoring"""
pool = engine.pool
return {
"pool_size": pool.size(),
"checked_out": pool.checkedout(),
"overflow": pool.overflow(),
"checked_in": pool.checkedin(),
}
def init_db():
"""Create all tables and run migrations"""
Base.metadata.create_all(bind=engine)
@@ -62,6 +99,51 @@ def _run_migrations():
END IF;
END $$;
""",
# Add ref_count index and constraints for artifacts
"""
DO $$
BEGIN
-- Add ref_count index
IF NOT EXISTS (
SELECT 1 FROM pg_indexes WHERE indexname = 'idx_artifacts_ref_count'
) THEN
CREATE INDEX idx_artifacts_ref_count ON artifacts(ref_count);
END IF;
-- Add ref_count >= 0 constraint
IF NOT EXISTS (
SELECT 1 FROM pg_constraint WHERE conname = 'check_ref_count_non_negative'
) THEN
ALTER TABLE artifacts ADD CONSTRAINT check_ref_count_non_negative CHECK (ref_count >= 0);
END IF;
END $$;
""",
# Add composite indexes for packages and tags
"""
DO $$
BEGIN
-- Composite index for package lookup by project and name
IF NOT EXISTS (
SELECT 1 FROM pg_indexes WHERE indexname = 'idx_packages_project_name'
) THEN
CREATE UNIQUE INDEX idx_packages_project_name ON packages(project_id, name);
END IF;
-- Composite index for tag lookup by package and name
IF NOT EXISTS (
SELECT 1 FROM pg_indexes WHERE indexname = 'idx_tags_package_name'
) THEN
CREATE UNIQUE INDEX idx_tags_package_name ON tags(package_id, name);
END IF;
-- Composite index for recent tags queries
IF NOT EXISTS (
SELECT 1 FROM pg_indexes WHERE indexname = 'idx_tags_package_created_at'
) THEN
CREATE INDEX idx_tags_package_created_at ON tags(package_id, created_at);
END IF;
END $$;
""",
]
with engine.connect() as conn:
@@ -80,3 +162,75 @@ def get_db() -> Generator[Session, None, None]:
yield db
finally:
db.close()
@contextmanager
def transaction(db: Session):
"""
Context manager for explicit transaction management (see savepoint() below for nested savepoints).
Usage:
with transaction(db):
# operations here
# automatically commits on success, rolls back on exception
"""
try:
yield db
db.commit()
except Exception:
db.rollback()
raise
@contextmanager
def savepoint(db: Session, name: str = None):
"""
Create a savepoint for partial rollback support.
Usage:
with savepoint(db, "my_savepoint"):
# operations here
# rolls back to the savepoint on exception without rolling back the whole transaction
"""
savepoint_obj = db.begin_nested()
try:
yield savepoint_obj
savepoint_obj.commit()
except Exception:
savepoint_obj.rollback()
raise
def retry_on_deadlock(func, max_retries: int = 3, delay: float = 0.1):
"""
Decorator/wrapper to retry operations on deadlock detection.
Usage:
@retry_on_deadlock
def my_operation(db):
...
Or:
retry_on_deadlock(lambda: my_operation(db))()
"""
import functools
from sqlalchemy.exc import OperationalError
@functools.wraps(func)
def wrapper(*args, **kwargs):
last_exception = None
for attempt in range(max_retries):
try:
return func(*args, **kwargs)
except OperationalError as e:
# Check for deadlock error codes (PostgreSQL: 40P01, MySQL: 1213)
error_str = str(e).lower()
if "deadlock" in error_str or "40p01" in error_str:
last_exception = e
logger.warning(f"Deadlock detected, retrying (attempt {attempt + 1}/{max_retries})")
time.sleep(delay * (attempt + 1)) # Back off before retrying (delay grows linearly with the attempt number)
else:
raise
raise last_exception
return wrapper
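
Taken together, the new helpers compose as follows; a minimal usage sketch, assuming the module is importable as `app.database` (matching the imports used elsewhere in this diff), with `do_risky_update` as a hypothetical caller-supplied operation:

```python
from app.database import SessionLocal, transaction, savepoint, retry_on_deadlock, get_pool_status
from app.models import Project

def do_risky_update(db, project, description):
    # Hypothetical operation that may deadlock under concurrent writers.
    project.description = description
    db.flush()

db = SessionLocal()
try:
    with transaction(db):                      # commits on success, rolls back on exception
        project = db.query(Project).first()
        if project is not None:
            with savepoint(db):                # only this block is undone if it fails
                retry_on_deadlock(do_risky_update)(db, project, "updated description")
finally:
    db.close()

print(get_pool_status())  # e.g. {"pool_size": 5, "checked_out": 0, "overflow": 0, "checked_in": 5}
```
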


@@ -53,6 +53,7 @@ class Package(Base):
Index("idx_packages_name", "name"),
Index("idx_packages_format", "format"),
Index("idx_packages_platform", "platform"),
Index("idx_packages_project_name", "project_id", "name", unique=True), # Composite unique index
CheckConstraint(
"format IN ('generic', 'npm', 'pypi', 'docker', 'deb', 'rpm', 'maven', 'nuget', 'helm')",
name="check_package_format"
@@ -84,6 +85,9 @@ class Artifact(Base):
__table_args__ = (
Index("idx_artifacts_created_at", "created_at"),
Index("idx_artifacts_created_by", "created_by"),
Index("idx_artifacts_ref_count", "ref_count"), # For cleanup queries
CheckConstraint("ref_count >= 0", name="check_ref_count_non_negative"),
CheckConstraint("size > 0", name="check_size_positive"),
)
@@ -104,6 +108,8 @@ class Tag(Base):
__table_args__ = (
Index("idx_tags_package_id", "package_id"),
Index("idx_tags_artifact_id", "artifact_id"),
Index("idx_tags_package_name", "package_id", "name", unique=True), # Composite unique index
Index("idx_tags_package_created_at", "package_id", "created_at"), # For recent tags queries
)
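
These model-level indexes mirror the SQL added in `_run_migrations` above, and the check constraints make invalid rows fail fast at flush time. A tiny sketch of what they reject, with placeholder values and assuming `SessionLocal` from `app.database`:

```python
from sqlalchemy.exc import IntegrityError
from app.database import SessionLocal
from app.models import Artifact

db = SessionLocal()
try:
    db.add(Artifact(
        id="f" * 64,                      # placeholder SHA256
        size=0,                           # violates check_size_positive (size > 0)
        s3_key="artifacts/" + "f" * 64,   # placeholder key layout
        created_by="alice",
        ref_count=-1,                     # violates check_ref_count_non_negative
    ))
    db.flush()
except IntegrityError:
    db.rollback()                         # the CHECK constraints reject the row
finally:
    db.close()
```
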


@@ -0,0 +1,22 @@
"""
Repository pattern implementation for data access layer.
Repositories abstract database operations from business logic,
providing clean interfaces for CRUD operations on each entity.
"""
from .base import BaseRepository
from .project import ProjectRepository
from .package import PackageRepository
from .artifact import ArtifactRepository
from .tag import TagRepository
from .upload import UploadRepository
__all__ = [
"BaseRepository",
"ProjectRepository",
"PackageRepository",
"ArtifactRepository",
"TagRepository",
"UploadRepository",
]
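
As a rough illustration of how a route or service might use these repositories instead of raw session queries, a sketch assuming the package is importable as `app.repositories` (project and user names are examples):

```python
from app.database import SessionLocal
from app.repositories import ProjectRepository, PackageRepository

db = SessionLocal()
try:
    projects = ProjectRepository(db)
    packages = PackageRepository(db)

    project = projects.get_by_name("my-project")
    if project is None:
        project = projects.create_project(name="my-project", created_by="alice")

    pkgs, total = packages.list_by_project(project.id, page=1, limit=20, search="app")
    projects.commit()   # repositories only flush; the caller decides when to commit
finally:
    db.close()
```
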


@@ -0,0 +1,157 @@
"""
Artifact repository for data access operations.
"""
from typing import Optional, List, Tuple
from sqlalchemy.orm import Session
from sqlalchemy import func, or_
from uuid import UUID
from .base import BaseRepository
from ..models import Artifact, Tag, Upload, Package, Project
class ArtifactRepository(BaseRepository[Artifact]):
"""Repository for Artifact entity operations."""
model = Artifact
def get_by_sha256(self, sha256: str) -> Optional[Artifact]:
"""Get artifact by SHA256 hash (primary key)."""
return self.db.query(Artifact).filter(Artifact.id == sha256).first()
def exists_by_sha256(self, sha256: str) -> bool:
"""Check if artifact with SHA256 exists."""
return self.db.query(
self.db.query(Artifact).filter(Artifact.id == sha256).exists()
).scalar()
def create_artifact(
self,
sha256: str,
size: int,
s3_key: str,
created_by: str,
content_type: Optional[str] = None,
original_name: Optional[str] = None,
format_metadata: Optional[dict] = None,
) -> Artifact:
"""Create a new artifact."""
artifact = Artifact(
id=sha256,
size=size,
s3_key=s3_key,
created_by=created_by,
content_type=content_type,
original_name=original_name,
format_metadata=format_metadata or {},
ref_count=1,
)
self.db.add(artifact)
self.db.flush()
return artifact
def increment_ref_count(self, artifact: Artifact) -> Artifact:
"""Increment artifact reference count."""
artifact.ref_count += 1
self.db.flush()
return artifact
def decrement_ref_count(self, artifact: Artifact) -> Artifact:
"""
Decrement artifact reference count.
Returns the artifact with updated count.
Does not delete the artifact even if ref_count reaches 0.
"""
if artifact.ref_count > 0:
artifact.ref_count -= 1
self.db.flush()
return artifact
def get_orphaned_artifacts(self, limit: int = 100) -> List[Artifact]:
"""Get artifacts with ref_count = 0 (candidates for cleanup)."""
return (
self.db.query(Artifact)
.filter(Artifact.ref_count == 0)
.limit(limit)
.all()
)
def get_artifacts_without_tags(self, limit: int = 100) -> List[Artifact]:
"""Get artifacts that have no tags pointing to them."""
# Subquery to find artifact IDs that have tags
tagged_artifacts = self.db.query(Tag.artifact_id).distinct().subquery()
return (
self.db.query(Artifact)
.filter(~Artifact.id.in_(tagged_artifacts))
.limit(limit)
.all()
)
def find_by_package(
self,
package_id: UUID,
page: int = 1,
limit: int = 20,
content_type: Optional[str] = None,
) -> Tuple[List[Artifact], int]:
"""Find artifacts uploaded to a package."""
# Get distinct artifact IDs from uploads
artifact_ids_subquery = (
self.db.query(func.distinct(Upload.artifact_id))
.filter(Upload.package_id == package_id)
.subquery()
)
query = self.db.query(Artifact).filter(Artifact.id.in_(artifact_ids_subquery))
if content_type:
query = query.filter(Artifact.content_type == content_type)
total = query.count()
offset = (page - 1) * limit
artifacts = query.order_by(Artifact.created_at.desc()).offset(offset).limit(limit).all()
return artifacts, total
def get_referencing_tags(self, artifact_id: str) -> List[Tuple[Tag, Package, Project]]:
"""Get all tags referencing this artifact with package and project info."""
return (
self.db.query(Tag, Package, Project)
.join(Package, Tag.package_id == Package.id)
.join(Project, Package.project_id == Project.id)
.filter(Tag.artifact_id == artifact_id)
.all()
)
def search(self, query_str: str, limit: int = 10) -> List[Tuple[Tag, Artifact, str, str]]:
"""
Search artifacts by tag name or original filename.
Returns (tag, artifact, package_name, project_name) tuples.
"""
search_lower = query_str.lower()
return (
self.db.query(Tag, Artifact, Package.name, Project.name)
.join(Artifact, Tag.artifact_id == Artifact.id)
.join(Package, Tag.package_id == Package.id)
.join(Project, Package.project_id == Project.id)
.filter(
or_(
func.lower(Tag.name).contains(search_lower),
func.lower(Artifact.original_name).contains(search_lower)
)
)
.order_by(Tag.name)
.limit(limit)
.all()
)
def update_metadata(self, artifact: Artifact, metadata: dict) -> Artifact:
"""Update or merge format metadata."""
if artifact.format_metadata:
artifact.format_metadata = {**artifact.format_metadata, **metadata}
else:
artifact.format_metadata = metadata
self.db.flush()
return artifact
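
The repository makes the deduplication flow explicit: on upload, look the hash up first and either bump the reference count or create a new row. A condensed sketch, with placeholder SHA256, S3 key layout, and user name:

```python
from app.database import SessionLocal
from app.repositories import ArtifactRepository

db = SessionLocal()
artifacts = ArtifactRepository(db)

sha256 = "a3f5d8e12b4c6789" + "0" * 48          # placeholder content hash
existing = artifacts.get_by_sha256(sha256)

if existing is not None:
    # Same bytes uploaded again: reuse the stored object, just count the reference.
    artifact = artifacts.increment_ref_count(existing)
else:
    artifact = artifacts.create_artifact(
        sha256=sha256,
        size=1024,
        s3_key=f"artifacts/{sha256}",           # placeholder key layout
        created_by="alice",
        original_name="app-v1.0.0.tar.gz",
    )

db.commit()
db.close()
```
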


@@ -0,0 +1,96 @@
"""
Base repository class with common CRUD operations.
"""
from typing import TypeVar, Generic, Type, Optional, List, Any, Dict
from sqlalchemy.orm import Session
from sqlalchemy import func, asc, desc
from uuid import UUID
from ..models import Base
T = TypeVar("T", bound=Base)
class BaseRepository(Generic[T]):
"""
Base repository providing common CRUD operations.
Subclasses should set `model` class attribute to the SQLAlchemy model.
"""
model: Type[T]
def __init__(self, db: Session):
self.db = db
def get_by_id(self, id: Any) -> Optional[T]:
"""Get entity by primary key."""
return self.db.query(self.model).filter(self.model.id == id).first()
def get_all(
self,
skip: int = 0,
limit: int = 100,
order_by: str = None,
order_desc: bool = False,
) -> List[T]:
"""Get all entities with pagination and optional ordering."""
query = self.db.query(self.model)
if order_by and hasattr(self.model, order_by):
column = getattr(self.model, order_by)
query = query.order_by(desc(column) if order_desc else asc(column))
return query.offset(skip).limit(limit).all()
def count(self) -> int:
"""Count total entities."""
return self.db.query(func.count(self.model.id)).scalar() or 0
def create(self, **kwargs) -> T:
"""Create a new entity."""
entity = self.model(**kwargs)
self.db.add(entity)
self.db.flush() # Flush to get ID without committing
return entity
def update(self, entity: T, **kwargs) -> T:
"""Update an existing entity."""
for key, value in kwargs.items():
if hasattr(entity, key):
setattr(entity, key, value)
self.db.flush()
return entity
def delete(self, entity: T) -> None:
"""Delete an entity."""
self.db.delete(entity)
self.db.flush()
def delete_by_id(self, id: Any) -> bool:
"""Delete entity by ID. Returns True if deleted, False if not found."""
entity = self.get_by_id(id)
if entity:
self.delete(entity)
return True
return False
def exists(self, id: Any) -> bool:
"""Check if entity exists by ID."""
return self.db.query(
self.db.query(self.model).filter(self.model.id == id).exists()
).scalar()
def commit(self) -> None:
"""Commit the current transaction."""
self.db.commit()
def rollback(self) -> None:
"""Rollback the current transaction."""
self.db.rollback()
def refresh(self, entity: T) -> T:
"""Refresh entity from database."""
self.db.refresh(entity)
return entity
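
One behavior worth calling out: `create`, `update`, and `delete` only `flush()`, so generated values and constraint violations surface immediately while the transaction stays open until the caller commits. A brief sketch using one of the subclasses from this package (names are examples):

```python
from app.database import SessionLocal
from app.repositories import ProjectRepository

db = SessionLocal()
repo = ProjectRepository(db)

# create() adds and flushes: the row exists inside the open transaction,
# but nothing is committed yet.
project = repo.create(name="scratch-project", created_by="alice", is_public=True)
assert repo.exists(project.id)

# Nothing persists unless the caller (route, service, or test) commits.
repo.rollback()        # discard the flushed-but-uncommitted row
db.close()
```
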


@@ -0,0 +1,177 @@
"""
Package repository for data access operations.
"""
from typing import Optional, List, Tuple
from sqlalchemy.orm import Session
from sqlalchemy import func, or_, asc, desc
from uuid import UUID
from .base import BaseRepository
from ..models import Package, Project, Tag, Upload, Artifact
class PackageRepository(BaseRepository[Package]):
"""Repository for Package entity operations."""
model = Package
def get_by_name(self, project_id: UUID, name: str) -> Optional[Package]:
"""Get package by name within a project."""
return (
self.db.query(Package)
.filter(Package.project_id == project_id, Package.name == name)
.first()
)
def get_by_project_and_name(self, project_name: str, package_name: str) -> Optional[Package]:
"""Get package by project name and package name."""
return (
self.db.query(Package)
.join(Project, Package.project_id == Project.id)
.filter(Project.name == project_name, Package.name == package_name)
.first()
)
def exists_by_name(self, project_id: UUID, name: str) -> bool:
"""Check if package with name exists in project."""
return self.db.query(
self.db.query(Package)
.filter(Package.project_id == project_id, Package.name == name)
.exists()
).scalar()
def list_by_project(
self,
project_id: UUID,
page: int = 1,
limit: int = 20,
search: Optional[str] = None,
format: Optional[str] = None,
platform: Optional[str] = None,
sort: str = "name",
order: str = "asc",
) -> Tuple[List[Package], int]:
"""
List packages in a project with filtering and pagination.
Returns tuple of (packages, total_count).
"""
query = self.db.query(Package).filter(Package.project_id == project_id)
# Apply search filter
if search:
search_lower = search.lower()
query = query.filter(
or_(
func.lower(Package.name).contains(search_lower),
func.lower(Package.description).contains(search_lower)
)
)
# Apply format filter
if format:
query = query.filter(Package.format == format)
# Apply platform filter
if platform:
query = query.filter(Package.platform == platform)
# Get total count
total = query.count()
# Apply sorting
sort_columns = {
"name": Package.name,
"created_at": Package.created_at,
"updated_at": Package.updated_at,
}
sort_column = sort_columns.get(sort, Package.name)
if order == "desc":
query = query.order_by(desc(sort_column))
else:
query = query.order_by(asc(sort_column))
# Apply pagination
offset = (page - 1) * limit
packages = query.offset(offset).limit(limit).all()
return packages, total
def create_package(
self,
project_id: UUID,
name: str,
description: Optional[str] = None,
format: str = "generic",
platform: str = "any",
) -> Package:
"""Create a new package."""
return self.create(
project_id=project_id,
name=name,
description=description,
format=format,
platform=platform,
)
def update_package(
self,
package: Package,
name: Optional[str] = None,
description: Optional[str] = None,
format: Optional[str] = None,
platform: Optional[str] = None,
) -> Package:
"""Update package fields."""
updates = {}
if name is not None:
updates["name"] = name
if description is not None:
updates["description"] = description
if format is not None:
updates["format"] = format
if platform is not None:
updates["platform"] = platform
return self.update(package, **updates)
def get_stats(self, package_id: UUID) -> dict:
"""Get package statistics (tag count, artifact count, total size)."""
tag_count = (
self.db.query(func.count(Tag.id))
.filter(Tag.package_id == package_id)
.scalar() or 0
)
artifact_stats = (
self.db.query(
func.count(func.distinct(Upload.artifact_id)),
func.coalesce(func.sum(Artifact.size), 0)
)
.join(Artifact, Upload.artifact_id == Artifact.id)
.filter(Upload.package_id == package_id)
.first()
)
return {
"tag_count": tag_count,
"artifact_count": artifact_stats[0] if artifact_stats else 0,
"total_size": artifact_stats[1] if artifact_stats else 0,
}
def search(self, query_str: str, limit: int = 10) -> List[Tuple[Package, str]]:
"""Search packages by name or description. Returns (package, project_name) tuples."""
search_lower = query_str.lower()
return (
self.db.query(Package, Project.name)
.join(Project, Package.project_id == Project.id)
.filter(
or_(
func.lower(Package.name).contains(search_lower),
func.lower(Package.description).contains(search_lower)
)
)
.order_by(Package.name)
.limit(limit)
.all()
)
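
Beyond listing and search, the repository also centralizes the aggregate queries the UI needs; a short sketch, assuming a project and package with these example names already exist:

```python
from app.database import SessionLocal
from app.repositories import PackageRepository

db = SessionLocal()
packages = PackageRepository(db)

pkg = packages.get_by_project_and_name("my-project", "my-package")
if pkg is not None:
    stats = packages.get_stats(pkg.id)
    # e.g. {"tag_count": 3, "artifact_count": 2, "total_size": 10485760}
    print(stats)

db.close()
```
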


@@ -0,0 +1,132 @@
"""
Project repository for data access operations.
"""
from typing import Optional, List, Tuple
from sqlalchemy.orm import Session
from sqlalchemy import func, or_, asc, desc
from uuid import UUID
from .base import BaseRepository
from ..models import Project
class ProjectRepository(BaseRepository[Project]):
"""Repository for Project entity operations."""
model = Project
def get_by_name(self, name: str) -> Optional[Project]:
"""Get project by unique name."""
return self.db.query(Project).filter(Project.name == name).first()
def exists_by_name(self, name: str) -> bool:
"""Check if project with name exists."""
return self.db.query(
self.db.query(Project).filter(Project.name == name).exists()
).scalar()
def list_accessible(
self,
user_id: str,
page: int = 1,
limit: int = 20,
search: Optional[str] = None,
visibility: Optional[str] = None,
sort: str = "name",
order: str = "asc",
) -> Tuple[List[Project], int]:
"""
List projects accessible to user with filtering and pagination.
Returns tuple of (projects, total_count).
"""
# Base query - filter by access
query = self.db.query(Project).filter(
or_(Project.is_public == True, Project.created_by == user_id)
)
# Apply visibility filter
if visibility == "public":
query = query.filter(Project.is_public == True)
elif visibility == "private":
query = query.filter(Project.is_public == False, Project.created_by == user_id)
# Apply search filter
if search:
search_lower = search.lower()
query = query.filter(
or_(
func.lower(Project.name).contains(search_lower),
func.lower(Project.description).contains(search_lower)
)
)
# Get total count before pagination
total = query.count()
# Apply sorting
sort_columns = {
"name": Project.name,
"created_at": Project.created_at,
"updated_at": Project.updated_at,
}
sort_column = sort_columns.get(sort, Project.name)
if order == "desc":
query = query.order_by(desc(sort_column))
else:
query = query.order_by(asc(sort_column))
# Apply pagination
offset = (page - 1) * limit
projects = query.offset(offset).limit(limit).all()
return projects, total
def create_project(
self,
name: str,
created_by: str,
description: Optional[str] = None,
is_public: bool = True,
) -> Project:
"""Create a new project."""
return self.create(
name=name,
description=description,
is_public=is_public,
created_by=created_by,
)
def update_project(
self,
project: Project,
name: Optional[str] = None,
description: Optional[str] = None,
is_public: Optional[bool] = None,
) -> Project:
"""Update project fields."""
updates = {}
if name is not None:
updates["name"] = name
if description is not None:
updates["description"] = description
if is_public is not None:
updates["is_public"] = is_public
return self.update(project, **updates)
def search(self, query_str: str, limit: int = 10) -> List[Project]:
"""Search projects by name or description."""
search_lower = query_str.lower()
return (
self.db.query(Project)
.filter(
or_(
func.lower(Project.name).contains(search_lower),
func.lower(Project.description).contains(search_lower)
)
)
.order_by(Project.name)
.limit(limit)
.all()
)


@@ -0,0 +1,168 @@
"""
Tag repository for data access operations.
"""
from typing import Optional, List, Tuple
from sqlalchemy.orm import Session
from sqlalchemy import func, or_, asc, desc
from uuid import UUID
from .base import BaseRepository
from ..models import Tag, TagHistory, Artifact, Package, Project
class TagRepository(BaseRepository[Tag]):
"""Repository for Tag entity operations."""
model = Tag
def get_by_name(self, package_id: UUID, name: str) -> Optional[Tag]:
"""Get tag by name within a package."""
return (
self.db.query(Tag)
.filter(Tag.package_id == package_id, Tag.name == name)
.first()
)
def get_with_artifact(self, package_id: UUID, name: str) -> Optional[Tuple[Tag, Artifact]]:
"""Get tag with its artifact."""
return (
self.db.query(Tag, Artifact)
.join(Artifact, Tag.artifact_id == Artifact.id)
.filter(Tag.package_id == package_id, Tag.name == name)
.first()
)
def exists_by_name(self, package_id: UUID, name: str) -> bool:
"""Check if tag with name exists in package."""
return self.db.query(
self.db.query(Tag)
.filter(Tag.package_id == package_id, Tag.name == name)
.exists()
).scalar()
def list_by_package(
self,
package_id: UUID,
page: int = 1,
limit: int = 20,
search: Optional[str] = None,
sort: str = "name",
order: str = "asc",
) -> Tuple[List[Tuple[Tag, Artifact]], int]:
"""
List tags in a package with artifact metadata.
Returns tuple of ((tag, artifact) tuples, total_count).
"""
query = (
self.db.query(Tag, Artifact)
.join(Artifact, Tag.artifact_id == Artifact.id)
.filter(Tag.package_id == package_id)
)
# Apply search filter (tag name or artifact original filename)
if search:
search_lower = search.lower()
query = query.filter(
or_(
func.lower(Tag.name).contains(search_lower),
func.lower(Artifact.original_name).contains(search_lower)
)
)
# Get total count
total = query.count()
# Apply sorting
sort_columns = {
"name": Tag.name,
"created_at": Tag.created_at,
}
sort_column = sort_columns.get(sort, Tag.name)
if order == "desc":
query = query.order_by(desc(sort_column))
else:
query = query.order_by(asc(sort_column))
# Apply pagination
offset = (page - 1) * limit
results = query.offset(offset).limit(limit).all()
return results, total
def create_tag(
self,
package_id: UUID,
name: str,
artifact_id: str,
created_by: str,
) -> Tag:
"""Create a new tag."""
return self.create(
package_id=package_id,
name=name,
artifact_id=artifact_id,
created_by=created_by,
)
def update_artifact(
self,
tag: Tag,
new_artifact_id: str,
changed_by: str,
record_history: bool = True,
) -> Tag:
"""
Update tag to point to a different artifact.
Optionally records change in tag history.
"""
old_artifact_id = tag.artifact_id
if record_history and old_artifact_id != new_artifact_id:
history = TagHistory(
tag_id=tag.id,
old_artifact_id=old_artifact_id,
new_artifact_id=new_artifact_id,
changed_by=changed_by,
)
self.db.add(history)
tag.artifact_id = new_artifact_id
tag.created_by = changed_by
self.db.flush()
return tag
def get_history(self, tag_id: UUID) -> List[TagHistory]:
"""Get tag change history."""
return (
self.db.query(TagHistory)
.filter(TagHistory.tag_id == tag_id)
.order_by(TagHistory.changed_at.desc())
.all()
)
def get_latest_in_package(self, package_id: UUID) -> Optional[Tag]:
"""Get the most recently created/updated tag in a package."""
return (
self.db.query(Tag)
.filter(Tag.package_id == package_id)
.order_by(Tag.created_at.desc())
.first()
)
def get_by_artifact(self, artifact_id: str) -> List[Tag]:
"""Get all tags pointing to an artifact."""
return (
self.db.query(Tag)
.filter(Tag.artifact_id == artifact_id)
.all()
)
def count_by_artifact(self, artifact_id: str) -> int:
"""Count tags pointing to an artifact."""
return (
self.db.query(func.count(Tag.id))
.filter(Tag.artifact_id == artifact_id)
.scalar() or 0
)
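
Retagging goes through `update_artifact`, which also writes the audit row. A sketch of moving a tag to a new artifact, assuming the example project, package, and a tag named `latest` exist (the new SHA256 is a placeholder):

```python
from app.database import SessionLocal
from app.repositories import PackageRepository, TagRepository

db = SessionLocal()
pkg = PackageRepository(db).get_by_project_and_name("my-project", "my-package")
tags = TagRepository(db)

tag = tags.get_by_name(pkg.id, "latest")
if tag is not None:
    new_sha = "b7e2c9d4a1f0" + "0" * 52                    # placeholder SHA256
    tags.update_artifact(tag, new_artifact_id=new_sha, changed_by="alice")
    history = tags.get_history(tag.id)                     # newest entry records old -> new artifact
    db.commit()

db.close()
```

In the full flow, the `ArtifactCleanupService.on_tag_updated` hook further down would also adjust the reference counts of the old and new artifacts.
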


@@ -0,0 +1,136 @@
"""
Upload repository for data access operations.
"""
from typing import Optional, List, Tuple
from datetime import datetime
from sqlalchemy.orm import Session
from sqlalchemy import func, desc
from uuid import UUID
from .base import BaseRepository
from ..models import Upload, Artifact, Package, Project
class UploadRepository(BaseRepository[Upload]):
"""Repository for Upload entity operations."""
model = Upload
def create_upload(
self,
artifact_id: str,
package_id: UUID,
uploaded_by: str,
original_name: Optional[str] = None,
source_ip: Optional[str] = None,
) -> Upload:
"""Record a new upload event."""
return self.create(
artifact_id=artifact_id,
package_id=package_id,
original_name=original_name,
uploaded_by=uploaded_by,
source_ip=source_ip,
)
def list_by_package(
self,
package_id: UUID,
page: int = 1,
limit: int = 20,
) -> Tuple[List[Upload], int]:
"""List uploads for a package with pagination."""
query = self.db.query(Upload).filter(Upload.package_id == package_id)
total = query.count()
offset = (page - 1) * limit
uploads = query.order_by(Upload.uploaded_at.desc()).offset(offset).limit(limit).all()
return uploads, total
def list_by_artifact(self, artifact_id: str) -> List[Upload]:
"""List all uploads of a specific artifact."""
return (
self.db.query(Upload)
.filter(Upload.artifact_id == artifact_id)
.order_by(Upload.uploaded_at.desc())
.all()
)
def get_latest_for_package(self, package_id: UUID) -> Optional[Upload]:
"""Get the most recent upload for a package."""
return (
self.db.query(Upload)
.filter(Upload.package_id == package_id)
.order_by(Upload.uploaded_at.desc())
.first()
)
def get_latest_timestamp(self, package_id: UUID) -> Optional[datetime]:
"""Get timestamp of most recent upload for a package."""
result = (
self.db.query(func.max(Upload.uploaded_at))
.filter(Upload.package_id == package_id)
.scalar()
)
return result
def count_by_artifact(self, artifact_id: str) -> int:
"""Count uploads of a specific artifact."""
return (
self.db.query(func.count(Upload.id))
.filter(Upload.artifact_id == artifact_id)
.scalar() or 0
)
def count_by_package(self, package_id: UUID) -> int:
"""Count total uploads for a package."""
return (
self.db.query(func.count(Upload.id))
.filter(Upload.package_id == package_id)
.scalar() or 0
)
def get_distinct_artifacts_count(self, package_id: UUID) -> int:
"""Count distinct artifacts uploaded to a package."""
return (
self.db.query(func.count(func.distinct(Upload.artifact_id)))
.filter(Upload.package_id == package_id)
.scalar() or 0
)
def get_uploads_by_user(
self,
user_id: str,
page: int = 1,
limit: int = 20,
) -> Tuple[List[Upload], int]:
"""List uploads by a specific user."""
query = self.db.query(Upload).filter(Upload.uploaded_by == user_id)
total = query.count()
offset = (page - 1) * limit
uploads = query.order_by(Upload.uploaded_at.desc()).offset(offset).limit(limit).all()
return uploads, total
def get_upload_stats(self, package_id: UUID) -> dict:
"""Get upload statistics for a package."""
stats = (
self.db.query(
func.count(Upload.id),
func.count(func.distinct(Upload.artifact_id)),
func.min(Upload.uploaded_at),
func.max(Upload.uploaded_at),
)
.filter(Upload.package_id == package_id)
.first()
)
return {
"total_uploads": stats[0] if stats else 0,
"unique_artifacts": stats[1] if stats else 0,
"first_upload": stats[2] if stats else None,
"last_upload": stats[3] if stats else None,
}
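
Upload rows are the audit trail the package statistics are derived from; a short sketch of recording and summarizing uploads, assuming the example package exists and using a placeholder artifact hash (in practice the artifact row must exist first because of the foreign key):

```python
from app.database import SessionLocal
from app.repositories import PackageRepository, UploadRepository

db = SessionLocal()
pkg = PackageRepository(db).get_by_project_and_name("my-project", "my-package")
uploads = UploadRepository(db)

if pkg is not None:
    uploads.create_upload(
        artifact_id="a3f5d8e12b4c6789" + "0" * 48,   # placeholder SHA256 of an existing artifact
        package_id=pkg.id,
        uploaded_by="alice",
        original_name="app-v1.0.0.tar.gz",
    )
    db.commit()
    print(uploads.get_upload_stats(pkg.id))
    # e.g. {"total_uploads": 1, "unique_artifacts": 1, "first_upload": ..., "last_upload": ...}

db.close()
```
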


@@ -1,3 +1,4 @@
from datetime import datetime
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File, Form, Request, Query, Header, Response
from fastapi.responses import StreamingResponse
from sqlalchemy.orm import Session
@@ -10,13 +11,13 @@ import hashlib
from .database import get_db
from .storage import get_storage, S3Storage, MULTIPART_CHUNK_SIZE
from .models import Project, Package, Artifact, Tag, Upload, Consumer
from .models import Project, Package, Artifact, Tag, TagHistory, Upload, Consumer
from .schemas import (
ProjectCreate, ProjectResponse,
PackageCreate, PackageResponse, PackageDetailResponse, TagSummary,
PACKAGE_FORMATS, PACKAGE_PLATFORMS,
ArtifactResponse,
TagCreate, TagResponse,
ArtifactResponse, ArtifactDetailResponse, ArtifactTagInfo, PackageArtifactResponse,
TagCreate, TagResponse, TagDetailResponse, TagHistoryResponse,
UploadResponse,
ConsumerResponse,
HealthResponse,
@@ -27,6 +28,7 @@ from .schemas import (
ResumableUploadCompleteRequest,
ResumableUploadCompleteResponse,
ResumableUploadStatusResponse,
GlobalSearchResponse, SearchResultProject, SearchResultPackage, SearchResultArtifact,
)
from .metadata import extract_metadata
@@ -50,32 +52,155 @@ def health_check():
return HealthResponse(status="ok")
# Global search
@router.get("/api/v1/search", response_model=GlobalSearchResponse)
def global_search(
request: Request,
q: str = Query(..., min_length=1, description="Search query"),
limit: int = Query(default=5, ge=1, le=20, description="Results per type"),
db: Session = Depends(get_db),
):
"""
Search across all entity types (projects, packages, artifacts/tags).
Returns limited results for each type plus total counts.
"""
user_id = get_user_id(request)
search_lower = q.lower()
# Search projects (name and description)
project_query = db.query(Project).filter(
or_(Project.is_public == True, Project.created_by == user_id),
or_(
func.lower(Project.name).contains(search_lower),
func.lower(Project.description).contains(search_lower)
)
)
project_count = project_query.count()
projects = project_query.order_by(Project.name).limit(limit).all()
# Search packages (name and description) with project name
package_query = db.query(Package, Project.name.label("project_name")).join(
Project, Package.project_id == Project.id
).filter(
or_(Project.is_public == True, Project.created_by == user_id),
or_(
func.lower(Package.name).contains(search_lower),
func.lower(Package.description).contains(search_lower)
)
)
package_count = package_query.count()
package_results = package_query.order_by(Package.name).limit(limit).all()
# Search tags/artifacts (tag name and original filename)
artifact_query = db.query(
Tag, Artifact, Package.name.label("package_name"), Project.name.label("project_name")
).join(
Artifact, Tag.artifact_id == Artifact.id
).join(
Package, Tag.package_id == Package.id
).join(
Project, Package.project_id == Project.id
).filter(
or_(Project.is_public == True, Project.created_by == user_id),
or_(
func.lower(Tag.name).contains(search_lower),
func.lower(Artifact.original_name).contains(search_lower)
)
)
artifact_count = artifact_query.count()
artifact_results = artifact_query.order_by(Tag.name).limit(limit).all()
return GlobalSearchResponse(
query=q,
projects=[SearchResultProject(
id=p.id,
name=p.name,
description=p.description,
is_public=p.is_public
) for p in projects],
packages=[SearchResultPackage(
id=pkg.id,
project_id=pkg.project_id,
project_name=project_name,
name=pkg.name,
description=pkg.description,
format=pkg.format
) for pkg, project_name in package_results],
artifacts=[SearchResultArtifact(
tag_id=tag.id,
tag_name=tag.name,
artifact_id=artifact.id,
package_id=tag.package_id,
package_name=package_name,
project_name=project_name,
original_name=artifact.original_name
) for tag, artifact, package_name, project_name in artifact_results],
counts={
"projects": project_count,
"packages": package_count,
"artifacts": artifact_count,
"total": project_count + package_count + artifact_count
}
)
# Project routes
@router.get("/api/v1/projects", response_model=PaginatedResponse[ProjectResponse])
def list_projects(
request: Request,
page: int = Query(default=1, ge=1, description="Page number"),
limit: int = Query(default=20, ge=1, le=100, description="Items per page"),
search: Optional[str] = Query(default=None, description="Search by project name"),
search: Optional[str] = Query(default=None, description="Search by project name or description"),
visibility: Optional[str] = Query(default=None, description="Filter by visibility (public, private)"),
sort: str = Query(default="name", description="Sort field (name, created_at, updated_at)"),
order: str = Query(default="asc", description="Sort order (asc, desc)"),
db: Session = Depends(get_db),
):
user_id = get_user_id(request)
# Validate sort field
valid_sort_fields = {"name": Project.name, "created_at": Project.created_at, "updated_at": Project.updated_at}
if sort not in valid_sort_fields:
raise HTTPException(status_code=400, detail=f"Invalid sort field. Must be one of: {', '.join(valid_sort_fields.keys())}")
# Validate order
if order not in ("asc", "desc"):
raise HTTPException(status_code=400, detail="Invalid order. Must be 'asc' or 'desc'")
# Base query - filter by access
query = db.query(Project).filter(
or_(Project.is_public == True, Project.created_by == user_id)
)
# Apply search filter (case-insensitive)
# Apply visibility filter
if visibility == "public":
query = query.filter(Project.is_public == True)
elif visibility == "private":
query = query.filter(Project.is_public == False, Project.created_by == user_id)
# Apply search filter (case-insensitive on name and description)
if search:
query = query.filter(func.lower(Project.name).contains(search.lower()))
search_lower = search.lower()
query = query.filter(
or_(
func.lower(Project.name).contains(search_lower),
func.lower(Project.description).contains(search_lower)
)
)
# Get total count before pagination
total = query.count()
# Apply sorting
sort_column = valid_sort_fields[sort]
if order == "desc":
query = query.order_by(sort_column.desc())
else:
query = query.order_by(sort_column.asc())
# Apply pagination
offset = (page - 1) * limit
projects = query.order_by(Project.name).offset(offset).limit(limit).all()
projects = query.offset(offset).limit(limit).all()
# Calculate total pages
total_pages = math.ceil(total / limit) if total > 0 else 1
@@ -850,8 +975,17 @@ def download_artifact_compat(
# Tag routes
@router.get("/api/v1/project/{project_name}/{package_name}/tags", response_model=List[TagResponse])
def list_tags(project_name: str, package_name: str, db: Session = Depends(get_db)):
@router.get("/api/v1/project/{project_name}/{package_name}/tags", response_model=PaginatedResponse[TagDetailResponse])
def list_tags(
project_name: str,
package_name: str,
page: int = Query(default=1, ge=1, description="Page number"),
limit: int = Query(default=20, ge=1, le=100, description="Items per page"),
search: Optional[str] = Query(default=None, description="Search by tag name"),
sort: str = Query(default="name", description="Sort field (name, created_at)"),
order: str = Query(default="asc", description="Sort order (asc, desc)"),
db: Session = Depends(get_db),
):
project = db.query(Project).filter(Project.name == project_name).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
@@ -860,8 +994,71 @@ def list_tags(project_name: str, package_name: str, db: Session = Depends(get_db
if not package:
raise HTTPException(status_code=404, detail="Package not found")
tags = db.query(Tag).filter(Tag.package_id == package.id).order_by(Tag.name).all()
return tags
# Validate sort field
valid_sort_fields = {"name": Tag.name, "created_at": Tag.created_at}
if sort not in valid_sort_fields:
raise HTTPException(status_code=400, detail=f"Invalid sort field. Must be one of: {', '.join(valid_sort_fields.keys())}")
# Validate order
if order not in ("asc", "desc"):
raise HTTPException(status_code=400, detail="Invalid order. Must be 'asc' or 'desc'")
# Base query with JOIN to artifact for metadata
query = db.query(Tag, Artifact).join(Artifact, Tag.artifact_id == Artifact.id).filter(Tag.package_id == package.id)
# Apply search filter (case-insensitive on tag name OR artifact original filename)
if search:
search_lower = search.lower()
query = query.filter(
or_(
func.lower(Tag.name).contains(search_lower),
func.lower(Artifact.original_name).contains(search_lower)
)
)
# Get total count before pagination
total = query.count()
# Apply sorting
sort_column = valid_sort_fields[sort]
if order == "desc":
query = query.order_by(sort_column.desc())
else:
query = query.order_by(sort_column.asc())
# Apply pagination
offset = (page - 1) * limit
results = query.offset(offset).limit(limit).all()
# Calculate total pages
total_pages = math.ceil(total / limit) if total > 0 else 1
# Build detailed responses with artifact metadata
detailed_tags = []
for tag, artifact in results:
detailed_tags.append(TagDetailResponse(
id=tag.id,
package_id=tag.package_id,
name=tag.name,
artifact_id=tag.artifact_id,
created_at=tag.created_at,
created_by=tag.created_by,
artifact_size=artifact.size,
artifact_content_type=artifact.content_type,
artifact_original_name=artifact.original_name,
artifact_created_at=artifact.created_at,
artifact_format_metadata=artifact.format_metadata,
))
return PaginatedResponse(
items=detailed_tags,
pagination=PaginationMeta(
page=page,
limit=limit,
total=total,
total_pages=total_pages,
),
)
@router.post("/api/v1/project/{project_name}/{package_name}/tags", response_model=TagResponse)
@@ -908,6 +1105,70 @@ def create_tag(
return db_tag
@router.get("/api/v1/project/{project_name}/{package_name}/tags/{tag_name}", response_model=TagDetailResponse)
def get_tag(
project_name: str,
package_name: str,
tag_name: str,
db: Session = Depends(get_db),
):
"""Get a single tag with full artifact metadata"""
project = db.query(Project).filter(Project.name == project_name).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
package = db.query(Package).filter(Package.project_id == project.id, Package.name == package_name).first()
if not package:
raise HTTPException(status_code=404, detail="Package not found")
result = db.query(Tag, Artifact).join(Artifact, Tag.artifact_id == Artifact.id).filter(
Tag.package_id == package.id,
Tag.name == tag_name
).first()
if not result:
raise HTTPException(status_code=404, detail="Tag not found")
tag, artifact = result
return TagDetailResponse(
id=tag.id,
package_id=tag.package_id,
name=tag.name,
artifact_id=tag.artifact_id,
created_at=tag.created_at,
created_by=tag.created_by,
artifact_size=artifact.size,
artifact_content_type=artifact.content_type,
artifact_original_name=artifact.original_name,
artifact_created_at=artifact.created_at,
artifact_format_metadata=artifact.format_metadata,
)
@router.get("/api/v1/project/{project_name}/{package_name}/tags/{tag_name}/history", response_model=List[TagHistoryResponse])
def get_tag_history(
project_name: str,
package_name: str,
tag_name: str,
db: Session = Depends(get_db),
):
"""Get the history of artifact assignments for a tag"""
project = db.query(Project).filter(Project.name == project_name).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
package = db.query(Package).filter(Package.project_id == project.id, Package.name == package_name).first()
if not package:
raise HTTPException(status_code=404, detail="Package not found")
tag = db.query(Tag).filter(Tag.package_id == package.id, Tag.name == tag_name).first()
if not tag:
raise HTTPException(status_code=404, detail="Tag not found")
history = db.query(TagHistory).filter(TagHistory.tag_id == tag.id).order_by(TagHistory.changed_at.desc()).all()
return history
# Consumer routes
@router.get("/api/v1/project/{project_name}/{package_name}/consumers", response_model=List[ConsumerResponse])
def get_consumers(project_name: str, package_name: str, db: Session = Depends(get_db)):
@@ -923,10 +1184,122 @@ def get_consumers(project_name: str, package_name: str, db: Session = Depends(ge
return consumers
# Package artifacts
@router.get("/api/v1/project/{project_name}/{package_name}/artifacts", response_model=PaginatedResponse[PackageArtifactResponse])
def list_package_artifacts(
project_name: str,
package_name: str,
page: int = Query(default=1, ge=1, description="Page number"),
limit: int = Query(default=20, ge=1, le=100, description="Items per page"),
content_type: Optional[str] = Query(default=None, description="Filter by content type"),
created_after: Optional[datetime] = Query(default=None, description="Filter artifacts created after this date"),
created_before: Optional[datetime] = Query(default=None, description="Filter artifacts created before this date"),
db: Session = Depends(get_db),
):
"""List all unique artifacts uploaded to a package"""
project = db.query(Project).filter(Project.name == project_name).first()
if not project:
raise HTTPException(status_code=404, detail="Project not found")
package = db.query(Package).filter(Package.project_id == project.id, Package.name == package_name).first()
if not package:
raise HTTPException(status_code=404, detail="Package not found")
# Get distinct artifacts uploaded to this package via uploads table
artifact_ids_subquery = db.query(func.distinct(Upload.artifact_id)).filter(
Upload.package_id == package.id
).subquery()
query = db.query(Artifact).filter(Artifact.id.in_(artifact_ids_subquery))
# Apply content_type filter
if content_type:
query = query.filter(Artifact.content_type == content_type)
# Apply date range filters
if created_after:
query = query.filter(Artifact.created_at >= created_after)
if created_before:
query = query.filter(Artifact.created_at <= created_before)
# Get total count before pagination
total = query.count()
# Apply pagination
offset = (page - 1) * limit
artifacts = query.order_by(Artifact.created_at.desc()).offset(offset).limit(limit).all()
# Calculate total pages
total_pages = math.ceil(total / limit) if total > 0 else 1
# Build responses with tag info
artifact_responses = []
for artifact in artifacts:
# Get tags pointing to this artifact in this package
tags = db.query(Tag.name).filter(
Tag.package_id == package.id,
Tag.artifact_id == artifact.id
).all()
tag_names = [t.name for t in tags]
artifact_responses.append(PackageArtifactResponse(
id=artifact.id,
size=artifact.size,
content_type=artifact.content_type,
original_name=artifact.original_name,
created_at=artifact.created_at,
created_by=artifact.created_by,
format_metadata=artifact.format_metadata,
tags=tag_names,
))
return PaginatedResponse(
items=artifact_responses,
pagination=PaginationMeta(
page=page,
limit=limit,
total=total,
total_pages=total_pages,
),
)
# Artifact by ID
@router.get("/api/v1/artifact/{artifact_id}", response_model=ArtifactResponse)
@router.get("/api/v1/artifact/{artifact_id}", response_model=ArtifactDetailResponse)
def get_artifact(artifact_id: str, db: Session = Depends(get_db)):
"""Get artifact metadata including list of packages/tags referencing it"""
artifact = db.query(Artifact).filter(Artifact.id == artifact_id).first()
if not artifact:
raise HTTPException(status_code=404, detail="Artifact not found")
return artifact
# Get all tags referencing this artifact with package and project info
tags_with_context = db.query(Tag, Package, Project).join(
Package, Tag.package_id == Package.id
).join(
Project, Package.project_id == Project.id
).filter(
Tag.artifact_id == artifact_id
).all()
tag_infos = [
ArtifactTagInfo(
id=tag.id,
name=tag.name,
package_id=package.id,
package_name=package.name,
project_name=project.name,
)
for tag, package, project in tags_with_context
]
return ArtifactDetailResponse(
id=artifact.id,
size=artifact.size,
content_type=artifact.content_type,
original_name=artifact.original_name,
created_at=artifact.created_at,
created_by=artifact.created_by,
ref_count=artifact.ref_count,
format_metadata=artifact.format_metadata,
tags=tag_infos,
)
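
The new endpoints can be exercised end to end with FastAPI's test client; a minimal sketch, assuming the application instance is exposed at `app.main:app` (that module path is an assumption, not shown in this diff):

```python
from fastapi.testclient import TestClient
from app.main import app   # assumption: the FastAPI instance lives here

client = TestClient(app)

# Global search across projects, packages, and tags/artifacts.
resp = client.get("/api/v1/search", params={"q": "my", "limit": 5})
print(resp.json()["counts"])    # {"projects": ..., "packages": ..., "artifacts": ..., "total": ...}

# Paginated, filterable tag listing with embedded artifact metadata.
resp = client.get(
    "/api/v1/project/my-project/my-package/tags",
    params={"page": 1, "limit": 20, "search": "v1", "sort": "created_at", "order": "desc"},
)
print(resp.json()["pagination"])

# Artifact detail now includes every tag referencing it.
resp = client.get("/api/v1/artifact/" + "a3f5d8e12b4c6789" + "0" * 48)
print(resp.status_code)         # 404 unless the placeholder hash exists
```
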


@@ -129,6 +129,78 @@ class TagResponse(BaseModel):
from_attributes = True
class TagDetailResponse(BaseModel):
"""Tag with embedded artifact metadata"""
id: UUID
package_id: UUID
name: str
artifact_id: str
created_at: datetime
created_by: str
# Artifact metadata
artifact_size: int
artifact_content_type: Optional[str]
artifact_original_name: Optional[str]
artifact_created_at: datetime
artifact_format_metadata: Optional[Dict[str, Any]] = None
class Config:
from_attributes = True
class TagHistoryResponse(BaseModel):
"""History entry for tag changes"""
id: UUID
tag_id: UUID
old_artifact_id: Optional[str]
new_artifact_id: str
changed_at: datetime
changed_by: str
class Config:
from_attributes = True
class ArtifactTagInfo(BaseModel):
"""Tag info for embedding in artifact responses"""
id: UUID
name: str
package_id: UUID
package_name: str
project_name: str
class ArtifactDetailResponse(BaseModel):
"""Artifact with list of tags/packages referencing it"""
id: str
size: int
content_type: Optional[str]
original_name: Optional[str]
created_at: datetime
created_by: str
ref_count: int
format_metadata: Optional[Dict[str, Any]] = None
tags: List[ArtifactTagInfo] = []
class Config:
from_attributes = True
class PackageArtifactResponse(BaseModel):
"""Artifact with tags for package artifact listing"""
id: str
size: int
content_type: Optional[str]
original_name: Optional[str]
created_at: datetime
created_by: str
format_metadata: Optional[Dict[str, Any]] = None
tags: List[str] = [] # Tag names pointing to this artifact
class Config:
from_attributes = True
# Upload response
class UploadResponse(BaseModel):
artifact_id: str
@@ -197,6 +269,51 @@ class ConsumerResponse(BaseModel):
from_attributes = True
# Global search schemas
class SearchResultProject(BaseModel):
"""Project result for global search"""
id: UUID
name: str
description: Optional[str]
is_public: bool
class Config:
from_attributes = True
class SearchResultPackage(BaseModel):
"""Package result for global search"""
id: UUID
project_id: UUID
project_name: str
name: str
description: Optional[str]
format: str
class Config:
from_attributes = True
class SearchResultArtifact(BaseModel):
"""Artifact/tag result for global search"""
tag_id: UUID
tag_name: str
artifact_id: str
package_id: UUID
package_name: str
project_name: str
original_name: Optional[str]
class GlobalSearchResponse(BaseModel):
"""Combined search results across all entity types"""
query: str
projects: List[SearchResultProject]
packages: List[SearchResultPackage]
artifacts: List[SearchResultArtifact]
counts: Dict[str, int] # Total counts for each type
# Health check
class HealthResponse(BaseModel):
status: str


@@ -0,0 +1,9 @@
"""
Service layer for business logic.
"""
from .artifact_cleanup import ArtifactCleanupService
__all__ = [
"ArtifactCleanupService",
]


@@ -0,0 +1,181 @@
"""
Service for artifact reference counting and cleanup.
"""
from typing import List, Optional, Tuple
from sqlalchemy.orm import Session
import logging
from ..models import Artifact, Tag, Upload, Package
from ..repositories.artifact import ArtifactRepository
from ..repositories.tag import TagRepository
from ..storage import S3Storage
logger = logging.getLogger(__name__)
class ArtifactCleanupService:
"""
Service for managing artifact reference counts and cleaning up orphaned artifacts.
Reference counting rules:
- ref_count starts at 1 when artifact is first uploaded
- ref_count increments when the same artifact is uploaded again (deduplication)
- ref_count decrements when a tag is deleted or updated to point elsewhere
- ref_count decrements when a package is deleted (once for each tag pointing to the artifact)
- When ref_count reaches 0, the artifact is a candidate for deletion from S3
"""
def __init__(self, db: Session, storage: Optional[S3Storage] = None):
self.db = db
self.storage = storage
self.artifact_repo = ArtifactRepository(db)
self.tag_repo = TagRepository(db)
def on_tag_deleted(self, artifact_id: str) -> Artifact:
"""
Called when a tag is deleted.
Decrements ref_count for the artifact the tag was pointing to.
"""
artifact = self.artifact_repo.get_by_sha256(artifact_id)
if artifact:
artifact = self.artifact_repo.decrement_ref_count(artifact)
logger.info(f"Decremented ref_count for artifact {artifact_id}: now {artifact.ref_count}")
return artifact
def on_tag_updated(self, old_artifact_id: str, new_artifact_id: str) -> Tuple[Optional[Artifact], Optional[Artifact]]:
"""
Called when a tag is updated to point to a different artifact.
Decrements ref_count for old artifact, increments for new (if different).
Returns (old_artifact, new_artifact) tuple.
"""
old_artifact = None
new_artifact = None
if old_artifact_id != new_artifact_id:
# Decrement old artifact ref_count
old_artifact = self.artifact_repo.get_by_sha256(old_artifact_id)
if old_artifact:
old_artifact = self.artifact_repo.decrement_ref_count(old_artifact)
logger.info(f"Decremented ref_count for old artifact {old_artifact_id}: now {old_artifact.ref_count}")
# Increment new artifact ref_count
new_artifact = self.artifact_repo.get_by_sha256(new_artifact_id)
if new_artifact:
new_artifact = self.artifact_repo.increment_ref_count(new_artifact)
logger.info(f"Incremented ref_count for new artifact {new_artifact_id}: now {new_artifact.ref_count}")
return old_artifact, new_artifact
def on_package_deleted(self, package_id) -> List[str]:
"""
Called when a package is deleted.
Decrements ref_count for all artifacts that had tags in the package.
Returns list of artifact IDs that were affected.
"""
# Get all tags in the package before deletion
tags = self.db.query(Tag).filter(Tag.package_id == package_id).all()
affected_artifacts = []
for tag in tags:
artifact = self.artifact_repo.get_by_sha256(tag.artifact_id)
if artifact:
self.artifact_repo.decrement_ref_count(artifact)
affected_artifacts.append(tag.artifact_id)
logger.info(f"Decremented ref_count for artifact {tag.artifact_id} (package delete)")
return affected_artifacts
def cleanup_orphaned_artifacts(self, batch_size: int = 100, dry_run: bool = False) -> List[str]:
"""
Find and delete artifacts with ref_count = 0.
Args:
batch_size: Maximum number of artifacts to process
dry_run: If True, only report what would be deleted without actually deleting
Returns:
List of artifact IDs that were (or would be) deleted
"""
orphaned = self.artifact_repo.get_orphaned_artifacts(limit=batch_size)
deleted_ids = []
for artifact in orphaned:
if dry_run:
logger.info(f"[DRY RUN] Would delete orphaned artifact: {artifact.id}")
deleted_ids.append(artifact.id)
else:
try:
# Delete from S3 first
if self.storage:
self.storage.delete(artifact.s3_key)
logger.info(f"Deleted artifact from S3: {artifact.s3_key}")
# Then delete from database
self.artifact_repo.delete(artifact)
deleted_ids.append(artifact.id)
logger.info(f"Deleted orphaned artifact from database: {artifact.id}")
except Exception as e:
logger.error(f"Failed to delete artifact {artifact.id}: {e}")
if not dry_run and deleted_ids:
self.db.commit()
return deleted_ids
def get_orphaned_count(self) -> int:
"""Get count of artifacts with ref_count = 0."""
from sqlalchemy import func
return (
self.db.query(func.count(Artifact.id))
.filter(Artifact.ref_count == 0)
.scalar() or 0
)
def verify_ref_counts(self, fix: bool = False) -> List[dict]:
"""
Verify that ref_counts match actual tag references.
Args:
fix: If True, fix any mismatched ref_counts
Returns:
List of artifacts with mismatched ref_counts
"""
from sqlalchemy import func
# Get actual tag counts per artifact
tag_counts = (
self.db.query(Tag.artifact_id, func.count(Tag.id).label("tag_count"))
.group_by(Tag.artifact_id)
.all()
)
tag_count_map = {artifact_id: count for artifact_id, count in tag_counts}
# Check all artifacts
artifacts = self.db.query(Artifact).all()
mismatches = []
for artifact in artifacts:
actual_count = tag_count_map.get(artifact.id, 0)
# ref_count includes the initial upload plus any deduplicated re-uploads, so it may
# legitimately exceed the tag count; only flag artifacts whose ref_count is below it
if artifact.ref_count < actual_count:
mismatch = {
"artifact_id": artifact.id,
"stored_ref_count": artifact.ref_count,
"actual_tag_count": actual_count,
}
mismatches.append(mismatch)
if fix:
artifact.ref_count = max(actual_count, 1)
logger.warning(f"Fixed ref_count for artifact {artifact.id}: {mismatch['stored_ref_count']} -> {artifact.ref_count}")
if fix and mismatches:
self.db.commit()
return mismatches
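
> The reference-counting rules in the class docstring are easiest to follow as a lifecycle. A minimal usage sketch, assuming the SQLAlchemy session, the S3Storage handle, and the sha256 values are supplied by the surrounding request handlers or a maintenance job (none of that wiring appears in this diff):

```python
# Minimal sketch, assuming `db` (SQLAlchemy Session) and `storage` (S3Storage)
# are configured elsewhere; the sha256 values below are placeholders.
service = ArtifactCleanupService(db, storage=storage)

old_sha = "sha256-of-previous-artifact"     # placeholder
new_sha = "sha256-of-replacement-artifact"  # placeholder

# Tag deleted: the artifact it pointed to loses one reference.
service.on_tag_deleted(old_sha)

# Tag retargeted: the old artifact loses a reference, the new one gains one (if different).
service.on_tag_updated(old_artifact_id=old_sha, new_artifact_id=new_sha)

# Periodic maintenance: preview first, then delete artifacts whose ref_count reached 0.
would_delete = service.cleanup_orphaned_artifacts(dry_run=True)
deleted_ids = service.cleanup_orphaned_artifacts(batch_size=50)

# Consistency check: flag artifacts whose stored ref_count is below their live tag count.
mismatches = service.verify_ref_counts(fix=False)
```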

View File

@@ -11,6 +11,8 @@ import {
TagListParams,
PackageListParams,
ArtifactListParams,
ProjectListParams,
GlobalSearchResponse,
} from './types';
const API_BASE = '/api/v1';
@@ -34,8 +36,15 @@ function buildQueryString(params: Record<string, unknown>): string {
return query ? `?${query}` : '';
}
// Global Search API
export async function globalSearch(query: string, limit: number = 5): Promise<GlobalSearchResponse> {
const params = buildQueryString({ q: query, limit });
const response = await fetch(`${API_BASE}/search${params}`);
return handleResponse<GlobalSearchResponse>(response);
}
// Project API
export async function listProjects(params: ListParams = {}): Promise<PaginatedResponse<Project>> {
export async function listProjects(params: ProjectListParams = {}): Promise<PaginatedResponse<Project>> {
const query = buildQueryString(params as Record<string, unknown>);
const response = await fetch(`${API_BASE}/projects${query}`);
return handleResponse<PaginatedResponse<Project>>(response);

View File

@@ -0,0 +1,75 @@
.filter-dropdown {
position: relative;
}
.filter-dropdown__trigger {
display: flex;
align-items: center;
gap: 8px;
padding: 8px 12px;
background: var(--bg-tertiary);
border: 1px solid var(--border-primary);
border-radius: var(--radius-md);
color: var(--text-secondary);
font-size: 0.875rem;
cursor: pointer;
transition: all var(--transition-fast);
}
.filter-dropdown__trigger:hover {
background: var(--bg-hover);
color: var(--text-primary);
}
.filter-dropdown__trigger--active {
border-color: var(--accent-primary);
color: var(--text-primary);
}
.filter-dropdown__chevron {
transition: transform var(--transition-fast);
}
.filter-dropdown__chevron--open {
transform: rotate(180deg);
}
.filter-dropdown__menu {
position: absolute;
top: calc(100% + 4px);
left: 0;
min-width: 150px;
background: var(--bg-secondary);
border: 1px solid var(--border-primary);
border-radius: var(--radius-md);
box-shadow: var(--shadow-lg);
z-index: 50;
overflow: hidden;
}
.filter-dropdown__option {
display: flex;
align-items: center;
justify-content: space-between;
width: 100%;
padding: 8px 12px;
background: transparent;
border: none;
color: var(--text-primary);
font-size: 0.875rem;
text-align: left;
cursor: pointer;
transition: background var(--transition-fast);
}
.filter-dropdown__option:hover {
background: var(--bg-hover);
}
.filter-dropdown__option--selected {
color: var(--accent-primary);
}
.filter-dropdown__option svg {
color: var(--accent-primary);
}

View File

@@ -0,0 +1,80 @@
import { useState, useRef, useEffect } from 'react';
import './FilterDropdown.css';
export interface FilterOption {
value: string;
label: string;
}
interface FilterDropdownProps {
label: string;
options: FilterOption[];
value: string;
onChange: (value: string) => void;
className?: string;
}
export function FilterDropdown({ label, options, value, onChange, className = '' }: FilterDropdownProps) {
const [isOpen, setIsOpen] = useState(false);
const dropdownRef = useRef<HTMLDivElement>(null);
const selectedOption = options.find((o) => o.value === value);
useEffect(() => {
function handleClickOutside(event: MouseEvent) {
if (dropdownRef.current && !dropdownRef.current.contains(event.target as Node)) {
setIsOpen(false);
}
}
document.addEventListener('mousedown', handleClickOutside);
return () => document.removeEventListener('mousedown', handleClickOutside);
}, []);
return (
<div className={`filter-dropdown ${className}`.trim()} ref={dropdownRef}>
<button
type="button"
className={`filter-dropdown__trigger ${value ? 'filter-dropdown__trigger--active' : ''}`}
onClick={() => setIsOpen(!isOpen)}
aria-expanded={isOpen}
>
<span>{selectedOption ? selectedOption.label : label}</span>
<svg
className={`filter-dropdown__chevron ${isOpen ? 'filter-dropdown__chevron--open' : ''}`}
width="14"
height="14"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
>
<polyline points="6 9 12 15 18 9" />
</svg>
</button>
{isOpen && (
<div className="filter-dropdown__menu">
{options.map((option) => (
<button
key={option.value}
type="button"
className={`filter-dropdown__option ${option.value === value ? 'filter-dropdown__option--selected' : ''}`}
onClick={() => {
onChange(option.value);
setIsOpen(false);
}}
>
{option.label}
{option.value === value && (
<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2">
<polyline points="20 6 9 17 4 12" />
</svg>
)}
</button>
))}
</div>
)}
</div>
);
}

View File

@@ -0,0 +1,216 @@
.global-search {
position: relative;
flex: 1;
max-width: 400px;
margin: 0 24px;
}
.global-search__input-wrapper {
position: relative;
display: flex;
align-items: center;
}
.global-search__icon {
position: absolute;
left: 12px;
color: var(--text-secondary);
pointer-events: none;
}
.global-search__input {
width: 100%;
padding: 8px 40px 8px 36px;
background: var(--bg-tertiary);
border: 1px solid var(--border-primary);
border-radius: var(--radius-md);
color: var(--text-primary);
font-size: 0.875rem;
transition: all var(--transition-fast);
}
.global-search__input:focus {
outline: none;
border-color: var(--accent-primary);
box-shadow: 0 0 0 3px rgba(16, 185, 129, 0.15);
}
.global-search__input::placeholder {
color: var(--text-muted);
}
.global-search__shortcut {
position: absolute;
right: 8px;
padding: 2px 6px;
background: var(--bg-secondary);
border: 1px solid var(--border-primary);
border-radius: var(--radius-sm);
color: var(--text-muted);
font-family: inherit;
font-size: 0.75rem;
pointer-events: none;
}
.global-search__spinner {
position: absolute;
right: 36px;
width: 14px;
height: 14px;
border: 2px solid var(--border-primary);
border-top-color: var(--accent-primary);
border-radius: 50%;
animation: spin 0.6s linear infinite;
}
@keyframes spin {
to {
transform: rotate(360deg);
}
}
/* Dropdown */
.global-search__dropdown {
position: absolute;
top: calc(100% + 8px);
left: 0;
right: 0;
background: var(--bg-secondary);
border: 1px solid var(--border-primary);
border-radius: var(--radius-lg);
box-shadow: var(--shadow-lg);
max-height: 400px;
overflow-y: auto;
z-index: 1000;
}
.global-search__empty {
padding: 24px;
text-align: center;
color: var(--text-secondary);
font-size: 0.875rem;
}
/* Sections */
.global-search__section {
padding: 8px 0;
border-bottom: 1px solid var(--border-primary);
}
.global-search__section:last-child {
border-bottom: none;
}
.global-search__section-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 4px 12px 8px;
color: var(--text-secondary);
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
}
.global-search__count {
background: var(--bg-tertiary);
padding: 2px 6px;
border-radius: var(--radius-sm);
font-size: 0.7rem;
}
/* Results */
.global-search__result {
display: flex;
align-items: flex-start;
gap: 12px;
width: 100%;
padding: 8px 12px;
background: transparent;
border: none;
text-align: left;
color: var(--text-primary);
cursor: pointer;
transition: background var(--transition-fast);
}
.global-search__result:hover,
.global-search__result.selected {
background: var(--bg-hover);
}
.global-search__result svg {
flex-shrink: 0;
margin-top: 2px;
color: var(--text-secondary);
}
.global-search__result-content {
flex: 1;
min-width: 0;
display: flex;
flex-direction: column;
gap: 2px;
}
.global-search__result-name {
font-weight: 500;
color: var(--text-primary);
}
.global-search__result-path {
font-size: 0.75rem;
color: var(--text-secondary);
}
.global-search__result-desc {
font-size: 0.75rem;
color: var(--text-muted);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
/* Badges */
.global-search__badge {
flex-shrink: 0;
padding: 2px 8px;
border-radius: var(--radius-sm);
font-size: 0.7rem;
font-weight: 500;
text-transform: uppercase;
}
.global-search__badge.public {
background: rgba(16, 185, 129, 0.15);
color: var(--accent-primary);
}
.global-search__badge.private {
background: rgba(234, 179, 8, 0.15);
color: #eab308;
}
.global-search__badge.format {
background: var(--bg-tertiary);
color: var(--text-secondary);
}
/* Responsive */
@media (max-width: 768px) {
.global-search {
max-width: none;
margin: 0 12px;
}
.global-search__shortcut {
display: none;
}
}
@media (max-width: 640px) {
.global-search {
display: none;
}
}

View File

@@ -0,0 +1,265 @@
import { useState, useEffect, useRef, useCallback } from 'react';
import { useNavigate } from 'react-router-dom';
import { globalSearch } from '../api';
import { GlobalSearchResponse } from '../types';
import './GlobalSearch.css';
export function GlobalSearch() {
const navigate = useNavigate();
const [query, setQuery] = useState('');
const [results, setResults] = useState<GlobalSearchResponse | null>(null);
const [loading, setLoading] = useState(false);
const [isOpen, setIsOpen] = useState(false);
const [selectedIndex, setSelectedIndex] = useState(-1);
const inputRef = useRef<HTMLInputElement>(null);
const containerRef = useRef<HTMLDivElement>(null);
// Build flat list of results for keyboard navigation
const flatResults = results
? [
...results.projects.map((p) => ({ type: 'project' as const, item: p })),
...results.packages.map((p) => ({ type: 'package' as const, item: p })),
...results.artifacts.map((a) => ({ type: 'artifact' as const, item: a })),
]
: [];
const handleSearch = useCallback(async (searchQuery: string) => {
if (!searchQuery.trim()) {
setResults(null);
setIsOpen(false);
return;
}
setLoading(true);
try {
const data = await globalSearch(searchQuery);
setResults(data);
setIsOpen(true);
setSelectedIndex(-1);
} catch (err) {
console.error('Search failed:', err);
setResults(null);
} finally {
setLoading(false);
}
}, []);
// Debounced search
useEffect(() => {
const timer = setTimeout(() => {
handleSearch(query);
}, 300);
return () => clearTimeout(timer);
}, [query, handleSearch]);
// Close on click outside
useEffect(() => {
function handleClickOutside(event: MouseEvent) {
if (containerRef.current && !containerRef.current.contains(event.target as Node)) {
setIsOpen(false);
}
}
document.addEventListener('mousedown', handleClickOutside);
return () => document.removeEventListener('mousedown', handleClickOutside);
}, []);
// Keyboard navigation
useEffect(() => {
function handleKeyDown(event: KeyboardEvent) {
if (event.key === '/' && !['INPUT', 'TEXTAREA'].includes((event.target as HTMLElement).tagName)) {
event.preventDefault();
inputRef.current?.focus();
}
if (!isOpen) return;
switch (event.key) {
case 'ArrowDown':
event.preventDefault();
setSelectedIndex((prev) => Math.min(prev + 1, flatResults.length - 1));
break;
case 'ArrowUp':
event.preventDefault();
setSelectedIndex((prev) => Math.max(prev - 1, -1));
break;
case 'Enter':
if (selectedIndex >= 0 && flatResults[selectedIndex]) {
event.preventDefault();
navigateToResult(flatResults[selectedIndex]);
}
break;
case 'Escape':
setIsOpen(false);
inputRef.current?.blur();
break;
}
}
document.addEventListener('keydown', handleKeyDown);
return () => document.removeEventListener('keydown', handleKeyDown);
}, [isOpen, selectedIndex, flatResults]);
function navigateToResult(result: (typeof flatResults)[0]) {
setIsOpen(false);
setQuery('');
switch (result.type) {
case 'project':
navigate(`/project/${result.item.name}`);
break;
case 'package':
navigate(`/project/${result.item.project_name}/${result.item.name}`);
break;
case 'artifact':
navigate(`/project/${result.item.project_name}/${result.item.package_name}`);
break;
}
}
const hasResults = results && results.counts.total > 0;
return (
<div className="global-search" ref={containerRef}>
<div className="global-search__input-wrapper">
<svg
className="global-search__icon"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
strokeWidth="2"
>
<circle cx="11" cy="11" r="8" />
<line x1="21" y1="21" x2="16.65" y2="16.65" />
</svg>
<input
ref={inputRef}
type="text"
value={query}
onChange={(e) => setQuery(e.target.value)}
onFocus={() => query && results && setIsOpen(true)}
placeholder="Search projects, packages, artifacts..."
className="global-search__input"
/>
<kbd className="global-search__shortcut">/</kbd>
{loading && <span className="global-search__spinner" />}
</div>
{isOpen && (
<div className="global-search__dropdown">
{!hasResults && query && (
<div className="global-search__empty">No results found for "{query}"</div>
)}
{hasResults && (
<>
{results.projects.length > 0 && (
<div className="global-search__section">
<div className="global-search__section-header">
Projects
<span className="global-search__count">{results.counts.projects}</span>
</div>
{results.projects.map((project, index) => {
const flatIndex = index;
return (
<button
key={project.id}
className={`global-search__result ${selectedIndex === flatIndex ? 'selected' : ''}`}
onClick={() => navigateToResult({ type: 'project', item: project })}
onMouseEnter={() => setSelectedIndex(flatIndex)}
>
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2">
<path d="M22 19a2 2 0 0 1-2 2H4a2 2 0 0 1-2-2V5a2 2 0 0 1 2-2h5l2 3h9a2 2 0 0 1 2 2z" />
</svg>
<div className="global-search__result-content">
<span className="global-search__result-name">{project.name}</span>
{project.description && (
<span className="global-search__result-desc">{project.description}</span>
)}
</div>
<span className={`global-search__badge ${project.is_public ? 'public' : 'private'}`}>
{project.is_public ? 'Public' : 'Private'}
</span>
</button>
);
})}
</div>
)}
{results.packages.length > 0 && (
<div className="global-search__section">
<div className="global-search__section-header">
Packages
<span className="global-search__count">{results.counts.packages}</span>
</div>
{results.packages.map((pkg, index) => {
const flatIndex = results.projects.length + index;
return (
<button
key={pkg.id}
className={`global-search__result ${selectedIndex === flatIndex ? 'selected' : ''}`}
onClick={() => navigateToResult({ type: 'package', item: pkg })}
onMouseEnter={() => setSelectedIndex(flatIndex)}
>
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2">
<path d="M16.5 9.4l-9-5.19M21 16V8a2 2 0 0 0-1-1.73l-7-4a2 2 0 0 0-2 0l-7 4A2 2 0 0 0 3 8v8a2 2 0 0 0 1 1.73l7 4a2 2 0 0 0 2 0l7-4A2 2 0 0 0 21 16z" />
<polyline points="3.27 6.96 12 12.01 20.73 6.96" />
<line x1="12" y1="22.08" x2="12" y2="12" />
</svg>
<div className="global-search__result-content">
<span className="global-search__result-name">{pkg.name}</span>
<span className="global-search__result-path">{pkg.project_name}</span>
{pkg.description && (
<span className="global-search__result-desc">{pkg.description}</span>
)}
</div>
<span className="global-search__badge format">{pkg.format}</span>
</button>
);
})}
</div>
)}
{results.artifacts.length > 0 && (
<div className="global-search__section">
<div className="global-search__section-header">
Artifacts / Tags
<span className="global-search__count">{results.counts.artifacts}</span>
</div>
{results.artifacts.map((artifact, index) => {
const flatIndex = results.projects.length + results.packages.length + index;
return (
<button
key={artifact.tag_id}
className={`global-search__result ${selectedIndex === flatIndex ? 'selected' : ''}`}
onClick={() => navigateToResult({ type: 'artifact', item: artifact })}
onMouseEnter={() => setSelectedIndex(flatIndex)}
>
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2">
<path d="M20.59 13.41l-7.17 7.17a2 2 0 0 1-2.83 0L2 12V2h10l8.59 8.59a2 2 0 0 1 0 2.82z" />
<line x1="7" y1="7" x2="7.01" y2="7" />
</svg>
<div className="global-search__result-content">
<span className="global-search__result-name">{artifact.tag_name}</span>
<span className="global-search__result-path">
{artifact.project_name} / {artifact.package_name}
</span>
{artifact.original_name && (
<span className="global-search__result-desc">{artifact.original_name}</span>
)}
</div>
</button>
);
})}
</div>
)}
</>
)}
</div>
)}
</div>
);
}

View File

@@ -1,5 +1,6 @@
import { ReactNode } from 'react';
import { Link, useLocation } from 'react-router-dom';
import { GlobalSearch } from './GlobalSearch';
import './Layout.css';
interface LayoutProps {
@@ -32,6 +33,7 @@ function Layout({ children }: LayoutProps) {
</div>
<span className="logo-text">Orchard</span>
</Link>
<GlobalSearch />
<nav className="nav">
<Link to="/" className={location.pathname === '/' ? 'active' : ''}>
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth="2">

View File

@@ -4,6 +4,9 @@ export { Breadcrumb } from './Breadcrumb';
export { SearchInput } from './SearchInput';
export { SortDropdown } from './SortDropdown';
export type { SortOption } from './SortDropdown';
export { FilterDropdown } from './FilterDropdown';
export type { FilterOption } from './FilterDropdown';
export { FilterChip, FilterChipGroup } from './FilterChip';
export { DataTable } from './DataTable';
export { Pagination } from './Pagination';
export { GlobalSearch } from './GlobalSearch';

View File

@@ -3,8 +3,8 @@ import { Link, useSearchParams } from 'react-router-dom';
import { Project, PaginatedResponse } from '../types';
import { listProjects, createProject } from '../api';
import { Badge } from '../components/Badge';
import { SearchInput } from '../components/SearchInput';
import { SortDropdown, SortOption } from '../components/SortDropdown';
import { FilterDropdown, FilterOption } from '../components/FilterDropdown';
import { FilterChip, FilterChipGroup } from '../components/FilterChip';
import { Pagination } from '../components/Pagination';
import './Home.css';
@@ -15,6 +15,12 @@ const SORT_OPTIONS: SortOption[] = [
{ value: 'updated_at', label: 'Updated' },
];
const VISIBILITY_OPTIONS: FilterOption[] = [
{ value: '', label: 'All Projects' },
{ value: 'public', label: 'Public Only' },
{ value: 'private', label: 'Private Only' },
];
function Home() {
const [searchParams, setSearchParams] = useSearchParams();
@@ -27,9 +33,9 @@ function Home() {
// Get params from URL
const page = parseInt(searchParams.get('page') || '1', 10);
const search = searchParams.get('search') || '';
const sort = searchParams.get('sort') || 'name';
const order = (searchParams.get('order') || 'asc') as 'asc' | 'desc';
const visibility = searchParams.get('visibility') || '';
const updateParams = useCallback(
(updates: Record<string, string | undefined>) => {
@@ -49,7 +55,12 @@ function Home() {
const loadProjects = useCallback(async () => {
try {
setLoading(true);
const data = await listProjects({ page, search, sort, order });
const data = await listProjects({
page,
sort,
order,
visibility: visibility as 'public' | 'private' | undefined || undefined,
});
setProjectsData(data);
setError(null);
} catch (err) {
@@ -57,7 +68,7 @@ function Home() {
} finally {
setLoading(false);
}
}, [page, search, sort, order]);
}, [page, sort, order, visibility]);
useEffect(() => {
loadProjects();
@@ -78,14 +89,14 @@ function Home() {
}
}
const handleSearchChange = (value: string) => {
updateParams({ search: value, page: '1' });
};
const handleSortChange = (newSort: string, newOrder: 'asc' | 'desc') => {
updateParams({ sort: newSort, order: newOrder, page: '1' });
};
const handleVisibilityChange = (value: string) => {
updateParams({ visibility: value, page: '1' });
};
const handlePageChange = (newPage: number) => {
updateParams({ page: String(newPage) });
};
@@ -94,7 +105,7 @@ function Home() {
setSearchParams({});
};
const hasActiveFilters = search !== '';
const hasActiveFilters = visibility !== '';
const projects = projectsData?.items || [];
const pagination = projectsData?.pagination;
@@ -154,18 +165,24 @@ function Home() {
)}
<div className="list-controls">
<SearchInput
value={search}
onChange={handleSearchChange}
placeholder="Search projects..."
className="list-controls__search"
<FilterDropdown
label="Visibility"
options={VISIBILITY_OPTIONS}
value={visibility}
onChange={handleVisibilityChange}
/>
<SortDropdown options={SORT_OPTIONS} value={sort} order={order} onChange={handleSortChange} />
</div>
{hasActiveFilters && (
<FilterChipGroup onClearAll={clearFilters}>
{search && <FilterChip label="Search" value={search} onRemove={() => handleSearchChange('')} />}
{visibility && (
<FilterChip
label="Visibility"
value={visibility === 'public' ? 'Public' : 'Private'}
onRemove={() => handleVisibilityChange('')}
/>
)}
</FilterChipGroup>
)}

View File

@@ -325,7 +325,7 @@ function PackagePage() {
<SearchInput
value={search}
onChange={handleSearchChange}
placeholder="Search tags..."
placeholder="Filter tags..."
className="list-controls__search"
/>
<SortDropdown options={SORT_OPTIONS} value={sort} order={order} onChange={handleSortChange} />
@@ -333,7 +333,7 @@ function PackagePage() {
{hasActiveFilters && (
<FilterChipGroup onClearAll={clearFilters}>
{search && <FilterChip label="Search" value={search} onRemove={() => handleSearchChange('')} />}
{search && <FilterChip label="Filter" value={search} onRemove={() => handleSearchChange('')} />}
</FilterChipGroup>
)}

View File

@@ -226,7 +226,7 @@ function ProjectPage() {
<SearchInput
value={search}
onChange={handleSearchChange}
placeholder="Search packages..."
placeholder="Filter packages..."
className="list-controls__search"
/>
<select
@@ -246,7 +246,7 @@ function ProjectPage() {
{hasActiveFilters && (
<FilterChipGroup onClearAll={clearFilters}>
{search && <FilterChip label="Search" value={search} onRemove={() => handleSearchChange('')} />}
{search && <FilterChip label="Filter" value={search} onRemove={() => handleSearchChange('')} />}
{format && <FilterChip label="Format" value={format} onRemove={() => handleFormatChange('')} />}
</FilterChipGroup>
)}

View File

@@ -117,3 +117,47 @@ export interface UploadResponse {
package: string;
tag: string | null;
}
// Global search types
export interface SearchResultProject {
id: string;
name: string;
description: string | null;
is_public: boolean;
}
export interface SearchResultPackage {
id: string;
project_id: string;
project_name: string;
name: string;
description: string | null;
format: string;
}
export interface SearchResultArtifact {
tag_id: string;
tag_name: string;
artifact_id: string;
package_id: string;
package_name: string;
project_name: string;
original_name: string | null;
}
export interface GlobalSearchResponse {
query: string;
projects: SearchResultProject[];
packages: SearchResultPackage[];
artifacts: SearchResultArtifact[];
counts: {
projects: number;
packages: number;
artifacts: number;
total: number;
};
}
export interface ProjectListParams extends ListParams {
visibility?: 'public' | 'private';
}
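
> `ProjectListParams` adds the optional `visibility` flag that the Home page now passes to `listProjects`. The backend side of that filter is not part of this diff; a sketch of one way it could map onto the ORM, assuming the `Project` model exposes an `is_public` column (as the search schemas above suggest):

```python
# Hypothetical sketch (not in this diff): narrowing the projects query by the
# `visibility` query parameter. The import path and `Project.is_public` are assumptions.
from typing import Optional

from ..models import Project  # assumed import path


def apply_visibility_filter(query, visibility: Optional[str]):
    if visibility == "public":
        return query.filter(Project.is_public.is_(True))
    if visibility == "private":
        return query.filter(Project.is_public.is_(False))
    return query  # empty string or None selects "All Projects"
```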