BaseRepository is a library that wraps SQLAlchemy, so the performance validation focuses on two things: the CPU-side overhead the wrapper adds (measured without DB execution) and end-to-end CRUD performance against a real database.
This performance test was executed only for a subset of representative cases. Therefore, the results serve as comparative indicators under specific environments and conditions and do not guarantee general performance across all features or production environments.
Row counts [10, 50, 100, 200, 500, 1000, 5000], ITERATIONS = 50. Measures overhead on the “fetch multiple rows” path.
get_list (selected because it internally uses the chaining-based list flow)
order_by = id, limit = 50, offset = 0

Baselines
Notes
Baselines
Test schema
```python
from pydantic import BaseModel, ConfigDict

class ResultStrictSchema(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: int | None = None
    item_id: int
    sub_category_id: int | None
    result_value: str | None
    is_abnormal: bool | None
    tenant_id: int
    checkup_id: int
```
Measurement conditions
Row counts [10, 50, 100, 200, 500, 1000, 5000], ITERATIONS = 50. Measures CPU cost of the path from input data to ORM object creation and create-path preparation.
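This CPU-bound measurement can be sketched as a simple `perf_counter` loop. A minimal illustration, not the project's actual harness; `build_create_payloads` is a hypothetical stand-in for the input-data-to-ORM-object preparation step:

```python
import statistics
import time

ROW_COUNTS = [10, 50, 100, 200, 500, 1000, 5000]
ITERATIONS = 50

def build_create_payloads(n: int) -> list[dict]:
    # Hypothetical stand-in for "input data -> ORM object creation
    # and create-path preparation". No DB execution happens here.
    return [{"item_id": i, "tenant_id": 1, "checkup_id": 1} for i in range(n)]

def bench(n: int) -> float:
    # Time ITERATIONS runs for a given row count and return the mean in ms.
    samples = []
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        build_create_payloads(n)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.mean(samples)

results_ms = {n: bench(n) for n in ROW_COUNTS}
```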
Test cases
Schema
```python
from pydantic import BaseModel

class ResultCreateSchema(BaseModel):
    item_id: int
    sub_category_id: int
    result_value: str
    is_abnormal: bool
    tenant_id: int
    checkup_id: int
```
Baselines
Same measurement concept as Create. DB execution is excluded.
Test cases
Payload example
```python
{
    "result_value": f"updated-{i}",
    "is_abnormal": bool(i % 2),
}
```
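For illustration, the per-iteration payloads above could be generated like this. A sketch only; `make_update_payload` is a hypothetical helper, not part of the library:

```python
def make_update_payload(i: int) -> dict:
    # Varies the payload per iteration; is_abnormal alternates False/True.
    return {
        "result_value": f"updated-{i}",
        "is_abnormal": bool(i % 2),
    }

payloads = [make_update_payload(i) for i in range(4)]
# payloads[0] -> {"result_value": "updated-0", "is_abnormal": False}
```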
Goal: measure CRUD performance with a real DB (network, driver, transaction included)
Baselines
Common conditions
Databases tested
Runtime environment
Seed data (rows per table)
Target table: PerfResult
```python
PERF_RESULT_COLUMNS = [
    ("payload", lambda i: f"row-{i}"),
    ("value", lambda i: f"{i}"),
    ("category", lambda i: f"cat{i % 10}"),
    ("status", lambda i: f"status{i % 3}"),
    ("tag", lambda i: f"tag{i % 5}"),
    ("group_no", lambda i: f"{i % 20}"),
    ("flag", lambda i: f"{1 if i % 2 == 0 else 0}"),
    ("value2", lambda i: f"{i * 2}"),
    ("extra", lambda i: f"extra-{i % 7}"),
]
```
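Given the `(column, generator)` pairs above, seed rows can be materialized as plain dicts before insertion. This is a sketch under the assumption that seeding builds one dict per row index; `build_seed_rows` is a hypothetical helper:

```python
PERF_RESULT_COLUMNS = [
    ("payload", lambda i: f"row-{i}"),
    ("value", lambda i: f"{i}"),
    ("category", lambda i: f"cat{i % 10}"),
    ("status", lambda i: f"status{i % 3}"),
    ("tag", lambda i: f"tag{i % 5}"),
    ("group_no", lambda i: f"{i % 20}"),
    ("flag", lambda i: f"{1 if i % 2 == 0 else 0}"),
    ("value2", lambda i: f"{i * 2}"),
    ("extra", lambda i: f"extra-{i % 7}"),
]

def build_seed_rows(n: int) -> list[dict]:
    # One dict per row; every column value is derived from the row index.
    return [{name: gen(i) for name, gen in PERF_RESULT_COLUMNS} for i in range(n)]

rows = build_seed_rows(3)
# rows[2] -> {"payload": "row-2", "value": "2", "category": "cat2", ...}
```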
Metrics
Measurement window (transaction included)
start → object creation + API call + commit + return results → end
Schema conversion is disabled for DB-bound tests.
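The window can be sketched with `perf_counter` around the awaited call; `run_case` below is a hypothetical stand-in for object creation, the repository API call, and the commit:

```python
import asyncio
import time

async def run_case() -> list[int]:
    # Hypothetical stand-in for: object creation + API call + commit + return results.
    await asyncio.sleep(0)
    return [1, 2, 3]

async def timed_run() -> tuple[float, list[int]]:
    start = time.perf_counter()       # window start
    results = await run_case()        # everything inside the window, commit included
    elapsed_ms = (time.perf_counter() - start) * 1000
    return elapsed_ms, results        # window end

elapsed_ms, results = asyncio.run(timed_run())
```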
Baselines
Input size: `INSERT_ROW_VALUES = [100, 500, 1_000, 5_000]`
Iterations: `ITERATIONS = 100`
Update query example:

```python
from sqlalchemy import update as sa_update

stmt = (
    sa_update(PerfResult)
    .where(PerfResult.id <= n)
    .values(value2=999)
)
```
```python
from sqlalchemy import select

row = (await session.execute(
    select(PerfResult).where(PerfResult.id == target_id)
)).scalar_one_or_none()
```
Fetch performance varies significantly with the WHERE and ORDER BY composition, so the cases below are measured separately.
```python
stmt = (
    select(PerfResult)
    .where(
        PerfResult.id.in_([1, 2, 3, 4]),
        PerfResult.category.in_(["cat-1", "cat-2"]),
        PerfResult.status.in_(["status-1", "status-2"]),
        PerfResult.tag.in_(["tag-0", "tag-1"]),
        PerfResult.group_no.in_([10, 11, 12]),
        PerfResult.value == 100,
        PerfResult.value2 == 1_000_000,
        PerfResult.flag.in_([0, 1]),
    )
    .order_by(PerfResult.id.asc())
    .limit(1_000)
    .offset(0)
)
```
```python
stmt = (
    select(PerfResult)
    .where(
        PerfResult.category == "cat-1",
        PerfResult.status == "status-1",
        PerfResult.flag == 1,
    )
    .order_by(
        PerfResult.category.asc(),
        PerfResult.value2.desc(),
        PerfResult.id.asc(),
    )
    .limit(1_000)
    .offset(0)
)
```
```python
stmt = (
    select(PerfResult)
    .order_by(
        PerfResult.category.asc(),
        PerfResult.status.asc(),
        PerfResult.tag.asc(),
        PerfResult.group_no.asc(),
        PerfResult.flag.desc(),
        PerfResult.value.desc(),
        PerfResult.value2.desc(),
        PerfResult.id.asc(),
    )
    .limit(1_000)
    .offset(0)
)
```
```python
from sqlalchemy import delete

res = await session.execute(
    delete(PerfResult).where(PerfResult.id == target_id)
)
```
```python
from sqlalchemy import func, select

cnt = await session.scalar(select(func.count()).select_from(PerfResult))
```
```python
stmt = (
    select(func.count())
    .select_from(PerfResult)
    .where(
        PerfResult.category == "cat-1",
        PerfResult.status == "status-1",
        PerfResult.flag == 1,
    )
)
```
tests/perf/results/cpu/<RUN_ID>.jsonl
tests/perf/results/db/<RUN_ID>.jsonl

NOTE: Report images (tests/perf/report/**) are not committed. They are generated locally during benchmark execution.
run_id: 20251127T050031Z, iter: 50, unit: ms

→ View full HTML report
MySQL — run_id: 20251126T065306Z, iter: 100, unit: ms, seed: 10000000
→ View full HTML report
PostgreSQL — run_id: 20251205T025441Z, iter: 100, unit: ms, seed: 100000
→ View full HTML report
SQLite — run_id: 20251205T030413Z, iter: 100, unit: ms, seed: 100000
→ View full HTML report