Add TopN plan node for O(limit) ORDER BY + LIMIT #24
Open
philcunliffe wants to merge 11 commits into master from
Conversation
Add multi-level caching and reduce per-row overhead:
- parseSql: LRU cache (64 entries) avoids re-tokenizing/parsing the same SQL strings
- planSql: WeakMap cache on parsed ASTs avoids re-planning identical queries
- asyncRow: attach a `_data` field for zero-copy collection
- collect: sync fast path skips Promise.all when all rows have pre-materialized `_data`
- executeProject: pre-compute static column names; fast path for simple identifier projections with direct cell passthrough and `_data` propagation
- executeSql: skip table normalization when no array tables are present
- compareForTerm: use a module-level Set instead of a per-call array allocation
- memorySource: hoist column computation outside the scan loop; use a Set for validation
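The LRU behavior described for the parseSql cache can be sketched with a plain `Map`, which iterates keys in insertion order (the factory name `makeLru` and the 64-entry size are illustrative, not the PR's actual code):

```javascript
// Minimal LRU cache sketch: a Map keeps insertion order, so deleting and
// re-setting a key marks it most-recently-used; the first key is the
// least-recently-used and is evicted when the cache overflows.
function makeLru(maxSize) {
  const cache = new Map()
  return {
    get(key) {
      if (!cache.has(key)) return undefined
      const value = cache.get(key)
      cache.delete(key)
      cache.set(key, value) // move to most-recently-used position
      return value
    },
    set(key, value) {
      if (cache.has(key)) cache.delete(key)
      cache.set(key, value)
      if (cache.size > maxSize) {
        // evict the least-recently-used entry (first key in insertion order)
        cache.delete(cache.keys().next().value)
      }
    },
  }
}

const lru = makeLru(2)
lru.set('a', 1)
lru.set('b', 2)
lru.get('a')    // touch 'a', making 'b' the least-recently-used entry
lru.set('c', 3) // evicts 'b'
console.log(lru.get('b')) // undefined
console.log(lru.get('a')) // 1
```

A WeakMap (as used for the planSql cache) needs no eviction at all, since entries keyed by a parsed AST disappear when the AST itself is garbage-collected.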
- Add `_data` to the AsyncRow type definition
- Cast to DerivedColumn/IdentifierNode where type narrowing is needed
- Type `_data` as `Record<string, SqlPrimitive>`
- Fix JSDoc placement for compareForTerm
Adapt optimizations to the new QueryResults return type:
- executeSql: keep the table normalization skip; use the new inline plan+execute
- executeProject: move pre-computation outside rows(); keep the identifier fast path and static column names inside the rows() generator
- Add `_data` to the AsyncRow type definition
- Fix JSDoc placement and type casts for tsc
Drop the parseSql/planSql memoization caches added in 881a031. Also rename the pre-materialized row payload from `_data` to `resolved` for clarity, and delete stale scratch files (query-parquet.mjs, repro-525.mjs).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Resolves all cell values when rows are buffered for ORDER BY, replacing AsyncRow closures (which capture decompressed parquet row group data) with plain value-returning functions. The original closures become GC-eligible immediately. For tables with large text columns (~10KB/row), this reduces per-row buffer cost from ~10KB (closure over parquet data) to ~100B (plain value).
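The buffering change above can be sketched as follows (the function name `materializeRow` and the `cells` field are illustrative assumptions, not the project's actual API): each cell closure is awaited once and its value stored in a plain object, so the closure, and the decompressed row group data it captures, becomes garbage-collectible.

```javascript
// Sketch: force every lazy cell to a plain value before buffering for
// ORDER BY. After this, only the resolved primitives stay alive; the
// original closures (which capture parquet row group buffers) can be GC'd.
async function materializeRow(asyncRow) {
  const resolved = {}
  for (const [column, getCell] of Object.entries(asyncRow.cells)) {
    resolved[column] = await getCell() // evaluate now, drop the closure
  }
  return resolved
}
```

For a row with large text columns this trades one up-front await per cell for a roughly hundred-byte retained payload instead of the multi-kilobyte closure described above.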
Fuses Sort + Limit into a TopN node that uses a bounded binary max-heap. ORDER BY x LIMIT N now buffers only N rows instead of the entire dataset. The planner detects two patterns:
- Limit(Sort(child)) → TopN(child)
- Limit(Project(Sort(child))) → Project(TopN(child))
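The bounded max-heap idea can be illustrated with a standalone sketch (class and method names are hypothetical, not the PR's implementation): keep at most N rows, with the worst retained row at the heap root so it is the first to be evicted when a better row arrives.

```javascript
// Sketch of a bounded TopN: retains the N smallest rows under `compare`
// in a binary max-heap, so heap[0] is always the worst row currently kept.
class TopN {
  constructor(n, compare) {
    this.n = n
    this.compare = compare
    this.heap = [] // max-heap over retained rows
  }
  push(row) {
    if (this.heap.length < this.n) {
      this.heap.push(row)
      this.siftUp(this.heap.length - 1)
    } else if (this.compare(row, this.heap[0]) < 0) {
      this.heap[0] = row // replace the worst retained row
      this.siftDown(0)
    } // otherwise drop the row: it cannot be in the top N
  }
  siftUp(i) {
    while (i > 0) {
      const parent = (i - 1) >> 1
      if (this.compare(this.heap[i], this.heap[parent]) <= 0) break
      ;[this.heap[i], this.heap[parent]] = [this.heap[parent], this.heap[i]]
      i = parent
    }
  }
  siftDown(i) {
    for (;;) {
      let largest = i
      const l = 2 * i + 1
      const r = 2 * i + 2
      if (l < this.heap.length && this.compare(this.heap[l], this.heap[largest]) > 0) largest = l
      if (r < this.heap.length && this.compare(this.heap[r], this.heap[largest]) > 0) largest = r
      if (largest === i) break
      ;[this.heap[i], this.heap[largest]] = [this.heap[largest], this.heap[i]]
      i = largest
    }
  }
  result() {
    return [...this.heap].sort(this.compare) // final ascending order
  }
}
```

Each push is O(log N) and memory stays O(N), which is where the O(limit) buffering claim in the title comes from; the full sort of the entire dataset is avoided.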
# Conflicts:
#	src/execute/execute.js
#	src/execute/sort.js
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- Fuses `Sort` + `Limit` into a `TopN` node using a bounded binary max-heap
- `ORDER BY x LIMIT N` now buffers only N rows instead of the entire dataset
- The planner rewrites `Limit(Sort(...))` and `Limit(Project(Sort(...)))` patterns
- `materializeRow` resolves buffered cell values to plain values

Test plan