base-repository

English | 한국어

Performance Test

BaseRepository is a library that wraps SQLAlchemy, so the performance validation focuses on two things:

  1. How small the wrapping overhead is (CPU-bound)
  2. How it performs compared to other code that serves the same purpose (including DB-bound)
    • 2-1. Compared to SQLAlchemy best-practice query and usage patterns
    • 2-2. Compared to other wrapper libraries (SQLModel)

Note: SQLModel comparisons are limited to the feature set SQLModel directly provides.

Performance Test Environment

NOTE

This performance test was executed only for a subset of representative cases. Therefore, the results serve as comparative indicators under specific environments and conditions and do not guarantee general performance across all features or production environments.


1. CPU-bound Tests (Wrapping Overhead Verification)

1.1 Common Measurement Method

→ Jump to results

1.2 Read: get_list performance

→ Jump to results

Measures overhead on the “fetch multiple rows” path.
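Conceptually, the comparison pairs a raw SQLAlchemy fetch with the wrapped call. A minimal sketch, assuming an illustrative Result model and a hypothetical repo.get_list call shape (not the library's confirmed API):

from sqlalchemy import select

# Baseline: plain SQLAlchemy "fetch multiple rows"
rows = (await session.execute(select(Result).limit(100))).scalars().all()

# Wrapped path under test (hypothetical call shape)
rows = await repo.get_list(limit=100)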

Baselines

Notes


1.3 Converting: ORM → schema conversion performance

→ Jump to results

Baselines

Test schema

from pydantic import BaseModel, ConfigDict

class ResultStrictSchema(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    id: int | None = None
    item_id: int
    sub_category_id: int | None
    result_value: str | None
    is_abnormal: bool | None
    tenant_id: int
    checkup_id: int
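
Under Pydantic v2, the conversion being measured corresponds to the model_validate path enabled by from_attributes=True. For example, with orm_rows standing in for loaded ORM objects:

# Convert each ORM row into the strict schema
schemas = [ResultStrictSchema.model_validate(row) for row in orm_rows]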

Measurement conditions


1.4 Create: bulk create preparation cost

→ Jump to results

Measures the CPU cost of the path from input data to ORM object creation and create-path preparation (see the sketch after the schema below).

Test cases

Schema

from pydantic import BaseModel

class ResultCreateSchema(BaseModel):
    item_id: int
    sub_category_id: int
    result_value: str
    is_abnormal: bool
    tenant_id: int
    checkup_id: int
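
A minimal sketch of the measured preparation step (the Result model name, the batch size n, and the dict-unpack construction are assumptions; DB execution is excluded):

payloads = [
    ResultCreateSchema(
        item_id=i, sub_category_id=i, result_value=f"value-{i}",
        is_abnormal=bool(i % 2), tenant_id=1, checkup_id=1,
    )
    for i in range(n)
]
# Create-path preparation: schema -> ORM object
orm_objs = [Result(**p.model_dump()) for p in payloads]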

Baselines


1.5 Update: bulk update preparation cost

→ Jump to results

Same measurement concept as Create. DB execution is excluded.

Test cases

Payload example

# i is the row index within the batch
{
    "result_value": f"updated-{i}",
    "is_abnormal": bool(i % 2),
}
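
A sketch of how a bulk-update input might be assembled from this payload shape (pairing each payload with a target id is an assumption):

updates = [
    {"id": i, "result_value": f"updated-{i}", "is_abnormal": bool(i % 2)}
    for i in range(n)
]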

2. DB-bound Tests (Real DB Performance)

2.1 Environment and Common Conditions

→ Jump to results

Goal: measure CRUD performance against a real DB, including network, driver, and transaction overhead.

Baselines

Common conditions

Databases tested

Runtime environment

Seed data (rows per table)


2.2 Target table schema and seed generation rules

→ Jump to results

Target table: PerfResult

# (column name, value generator applied to the row index i)
PERF_RESULT_COLUMNS = [
    ("payload",   lambda i: f"row-{i}"),
    ("value",     lambda i: f"{i}"),
    ("category",  lambda i: f"cat{i % 10}"),
    ("status",    lambda i: f"status{i % 3}"),
    ("tag",       lambda i: f"tag{i % 5}"),
    ("group_no",  lambda i: f"{i % 20}"),
    ("flag",      lambda i: f"{1 if i % 2 == 0 else 0}"),
    ("value2",    lambda i: f"{i * 2}"),
    ("extra",     lambda i: f"extra-{i % 7}"),
]
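
Each seed row can be materialized by applying the generators to the row index, e.g. with a small helper (name illustrative):

def build_seed_row(i: int) -> dict[str, str]:
    # Apply every column generator to the row index i
    return {name: fn(i) for name, fn in PERF_RESULT_COLUMNS}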

2.3 Metrics and measurement window

→ Jump to results

Metrics

Measurement window (transaction included)

start → object creation + API call + commit + return results → end

Schema conversion is disabled for DB-bound tests.
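
A rough illustration of the timed window (build_rows and the repo.bulk_create name are hypothetical):

import time

t0 = time.perf_counter()                # start
objs = build_rows(n)                    # object creation
created = await repo.bulk_create(objs)  # API call (hypothetical name)
await session.commit()                  # commit; results returned above
elapsed = time.perf_counter() - t0      # end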


2.4 DB Test Cases
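
All query snippets below assume the standard SQLAlchemy imports:

from sqlalchemy import delete, func, select
from sqlalchemy import update as sa_update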


(1) bulk_create

→ Jump to results

Baselines

Input sizes: INSERT_ROW_VALUES = [100, 500, 1_000, 5_000]

Iterations: ITERATIONS = 100
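
A plausible raw-SQLAlchemy baseline for this case, reusing the seed-row helper sketched in 2.2:

# n is taken from INSERT_ROW_VALUES; repeated for ITERATIONS rounds
rows = [PerfResult(**build_seed_row(i)) for i in range(n)]
session.add_all(rows)
await session.commit()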


(2) bulk_update

→ Jump to results

Update query example:

stmt = (
    sa_update(PerfResult)
    .where(PerfResult.id <= n)
    .values(value2=999)
)
res = await session.execute(stmt)

(3) fetch (get_one)

→ Jump to results

row = (await session.execute(
    select(PerfResult).where(PerfResult.id == target_id)
)).scalar_one_or_none()

(4) fetch (get_list)

→ Jump to results

Fetch performance varies significantly with the WHERE and ORDER BY composition, so the cases are measured separately.


Case 1: 8 WHERE + 1 ORDER BY

stmt = (
    select(PerfResult)
    .where(
        PerfResult.id.in_([1, 2, 3, 4]),
        PerfResult.category.in_(["cat-1", "cat-2"]),
        PerfResult.status.in_(["status-1", "status-2"]),
        PerfResult.tag.in_(["tag-0", "tag-1"]),
        PerfResult.group_no.in_([10, 11, 12]),
        PerfResult.value == 100,
        PerfResult.value2 == 1_000_000,
        PerfResult.flag.in_([0, 1]),
    )
    .order_by(PerfResult.id.asc())
    .limit(1_000)
    .offset(0)
)

Case 2: 3 WHERE + 3 ORDER BY

stmt = (
    select(PerfResult)
    .where(
        PerfResult.category == "cat-1",
        PerfResult.status == "status-1",
        PerfResult.flag == 1,
    )
    .order_by(
        PerfResult.category.asc(),
        PerfResult.value2.desc(),
        PerfResult.id.asc(),
    )
    .limit(1_000)
    .offset(0)
)

Case 3: 8 ORDER BY

stmt = (
    select(PerfResult)
    .order_by(
        PerfResult.category.asc(),
        PerfResult.status.asc(),
        PerfResult.tag.asc(),
        PerfResult.group_no.asc(),
        PerfResult.flag.desc(),
        PerfResult.value.desc(),
        PerfResult.value2.desc(),
        PerfResult.id.asc(),
    )
    .limit(1_000)
    .offset(0)
)

(5) delete one

→ Jump to results

res = await session.execute(
    delete(PerfResult).where(PerfResult.id == target_id)
)

(6) count all

→ Jump to results

cnt = await session.scalar(select(func.count()).select_from(PerfResult))

(7) count with 3 WHERE predicates

→ Jump to results

stmt = (
    select(func.count())
    .select_from(PerfResult)
    .where(
        PerfResult.category == "cat-1",
        PerfResult.status == "status-1",
        PerfResult.flag == 1,
    )
)
cnt = await session.scalar(stmt)

3. Result Graphs and Datasets

3.1 Artifact Paths

NOTE: Report images (tests/perf/report/**) are not committed. They are generated locally during benchmark execution.


3.2 Attached Results

  • CPU BOUND: CPU-bound benchmark graphs (generated locally; see the note above)
  • USE DB: DB-bound benchmark graphs (generated locally; see the note above)