Mark provenance of SQL via the branded types SafeSqlFragment and
UntrustedSqlFragment. Only SafeSqlFragment should be executed;
UntrustedSqlFragments require explicit user approval (the SQL is shown
on screen and the user has to click to confirm) before they are promoted
to SafeSqlFragment.
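A minimal sketch of how such branded types can work in TypeScript; the brand field and helper names (`markUntrusted`, `promoteToSafe`, `execute`) are illustrative, not the actual implementation:

```typescript
// Branded types: structurally both are strings, but the phantom brand
// prevents accidentally passing untrusted SQL where safe SQL is required.
type SafeSqlFragment = string & { readonly __brand: 'SafeSqlFragment' }
type UntrustedSqlFragment = string & { readonly __brand: 'UntrustedSqlFragment' }

// AI-generated SQL and snippets enter the system as untrusted.
function markUntrusted(sql: string): UntrustedSqlFragment {
  return sql as UntrustedSqlFragment
}

// Promotion is the only way to obtain a SafeSqlFragment, and it requires
// an explicit approval signal (e.g. the user clicked confirm after review).
function promoteToSafe(sql: UntrustedSqlFragment, userApproved: boolean): SafeSqlFragment {
  if (!userApproved) throw new Error('SQL must be reviewed and approved before execution')
  return sql as unknown as SafeSqlFragment
}

// Execution only accepts SafeSqlFragment; passing an UntrustedSqlFragment
// directly is a compile-time error.
function execute(sql: SafeSqlFragment): void {
  console.log('executing:', sql)
}

const generated = markUntrusted('select * from users')
execute(promoteToSafe(generated, true)) // ok after approval
// execute(generated) // <- type error: untrusted SQL cannot be executed
```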
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Editor and RLS tester show loading states for inferred/generated SQL
and include a dedicated user SQL editor for safer edits.
* **Refactor**
* Platform-wide SQL handling tightened: snippets and AI-generated SQL
are treated as untrusted/display-only until promoted, improving safety
and consistency.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
## Summary
- Reset the table rows query after the user confirms loading data on a
high-cost table, so React Query re-executes the fetch without the
preflight check
- Close the confirmation dialog after the user clicks "I understand,
proceed"
**Root cause:** `preflightCheck` is intentionally excluded from the
React Query query key (to avoid duplicate cache entries). When the user
clicked "Load data", the preflight flag flipped to `false` but the query
key stayed the same — so React Query returned the cached error instead
of refetching.
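The root cause can be modeled with a tiny cache sketch (a plain TypeScript stand-in for React Query's cache; all names here are illustrative, not the actual dashboard code):

```typescript
// Model of the bug: the cache key deliberately excludes `preflightCheck`,
// so flipping the flag alone returns the cached error instead of refetching.
type FetchResult = { ok: boolean; rows?: string[]; error?: string }

const cache = new Map<string, FetchResult>()

function tableRowsKey(projectRef: string, table: string): string {
  // `preflightCheck` is intentionally NOT part of the key, mirroring the
  // real query key, to avoid duplicate cache entries.
  return JSON.stringify(['table-rows', projectRef, table])
}

function fetchTableRows(projectRef: string, table: string, preflightCheck: boolean): FetchResult {
  const key = tableRowsKey(projectRef, table)
  const cached = cache.get(key)
  if (cached) return cached // same key -> the cached error is returned
  const result: FetchResult = preflightCheck
    ? { ok: false, error: 'Data not loaded to protect database performance' }
    : { ok: true, rows: ['row-1', 'row-2'] }
  cache.set(key, result)
  return result
}

// The fix: when the user confirms, remove the cached entry so the next
// fetch actually re-executes without the preflight check.
function confirmHighCostLoad(projectRef: string, table: string): void {
  cache.delete(tableRowsKey(projectRef, table))
}
```

In the real code this corresponds to resetting the query (e.g. removing the cached `table-rows` entry so React Query refetches) rather than relying on the key changing.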
## Test plan
- [x] Navigate to a table with high estimated query cost (triggers "Data
not loaded to protect database performance")
- [x] Click "Load data" → "I understand, proceed"
- [x] Verify the dialog closes and table data loads
- [x] Verify the warning does not reappear for the same table in the
same session
To test this, you can run it locally with `COST_THRESHOLD` set to a low
value (< 10)
Fixes FE-2979
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Bug Fixes**
* Confirming the high-cost warning now closes the dialog and proceeds
with loading as expected.
* Improved query cache key composition so queries reflect the full set
of relevant parameters for correct caching.
* Loading from the grid error now properly clears related cached results
and proceeds when the user confirms.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
## Context
Shifts all remaining dashboard queries into pg-meta so that we
centralize all manually written queries in one place
Having them in packages/pg-meta also allows us to write tests for them
## To test
Just needs a smoke test on
- Role Impersonation
- Lints
- Data API
- Database
- Enumerated Types
- Integrations
- Foreign Data Wrappers
- Vault
Previously `connectionString` was passed into the query key for
`table-rows`. Since `connectionString` is unstable, it caused the query
key to change randomly, often leaving users with brief random "no rows
found" states while data loaded for the new key.
This PR uses `readReplicaIdentifier` as a stable version of
`connectionString`, maintaining the existing functionality while
avoiding the unnecessary data reloads.
Note that there are still quite a few places where `connectionString` is
passed into the query key. We'll fix these as follow up PRs.
I'm opting to also include the fix from
https://github.com/supabase/supabase/pull/43572 in here for consistency.
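The key-instability problem can be sketched like this (an illustrative key builder, not the real hook; the unstable connection string is simulated):

```typescript
// Query keys are compared by value, so any unstable input yields a "new"
// query on every render, discarding the cached rows.
function makeTableRowsKey(identifier: string, table: string): string {
  return JSON.stringify(['table-rows', identifier, table])
}

// An unstable connection string (e.g. carrying a rotating credential)
// produces a different key each time it is rebuilt, so React Query briefly
// shows "no rows found" while it loads data for the new key.
const unstableKey1 = makeTableRowsKey(`postgres://u:${Math.random()}@db/prod`, 'orders')
const unstableKey2 = makeTableRowsKey(`postgres://u:${Math.random()}@db/prod`, 'orders')

// A stable replica identifier produces the same key every time, so the
// cached rows are reused and no spurious reload happens.
const stableKey1 = makeTableRowsKey('read-replica-sin-1', 'orders')
const stableKey2 = makeTableRowsKey('read-replica-sin-1', 'orders')
```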
**To test:**
- Ensure table editor functions normally
- Use the table editor with a read replica selected
## Context
Part of dashboard scalability project
Opting to use the connection string of the project's read replica (if
available) for read queries on the database.
Trialing with the Table Editor as a first pass - changes involved will
opt to use replica connection string for `useTableRowsQuery`,
`useTableRowsCountQuery`, and `useForeignKeyConstraintsQuery`
There are definitely optimizations to be done for deciding which replica
to use - but I'm starting off with rather naive logic that prioritizes
replicas in the same region as the project.
## Changes involved
- We're no longer passing `connectionString` as a param into the
affected hooks, the `connectionString` is derived from within those
hooks instead
- Change is feature flagged, so things should be status quo if flag is
off (use primary database's connection string)
- Added `useConnectionStringForReadOps` hook which returns the replica's
connection string if the feature flag is on and the project has a replica
available (otherwise defaults to the primary database's connection
string)
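A hypothetical sketch of the selection logic such a hook could use, based on the description above (the types, function name, and region-preference rule are assumptions, not the actual implementation):

```typescript
interface Replica {
  connectionString: string
  region: string
}

interface Project {
  connectionString: string // primary database
  region: string
  replicas: Replica[]
}

// Returns the replica's connection string only when the feature flag is on
// and a replica exists; otherwise falls back to the primary database.
function connectionStringForReadOps(project: Project, featureFlagEnabled: boolean): string {
  if (!featureFlagEnabled || project.replicas.length === 0) {
    return project.connectionString // status quo: primary database
  }
  // Naive prioritization: prefer a replica in the same region as the project.
  const sameRegion = project.replicas.find((r) => r.region === project.region)
  return (sameRegion ?? project.replicas[0]).connectionString
}
```

Mutations would keep using the primary database's connection string, as noted in the test plan below.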
## To test
- [ ] Verify that the table editor works as expected for a project that
has read replicas (There shouldn't be any change really)
- [ ] Also just double check that updating cells in the table editor
works as well (There's no change there, we're using the primary DB's
connection string for mutation ops)
- [ ] ^ Same thing for a project that doesn't have read replicas
- [ ] ^ Same thing for local / self-host
## Context
Noticed that the table editor prefetch wasn't working as intended: the
table's rows were getting fetched again when opening the table.
The fix is that `preflightCheck` should've been excluded from the
`table-rows` query key
## To test
Within the Table Editor
- [ ] Verify that `table-rows` is getting prefetched when hovering over
a table in the side menu
- [ ] Verify that `table-rows` doesn't get fetched again when
subsequently opening the table (There shouldn't be a UI loader either -
rows should render immediately)
## Context
Related to this previous PR
[here](https://github.com/supabase/supabase/pull/42321)
Table Editor: Adding a CTA to the `HighQueryCost` UI to allow users to
proceed with fetching data despite the high query cost warning, to avoid
completely blocking users from their workflows (realised that certain
heavy queries are required and this safeguard shouldn't be creating
dead-ends for users)
<img width="1159" height="264" alt="image"
src="https://github.com/user-attachments/assets/5fa01f7f-4442-4349-91f2-f4275e177f89"
/>
Clicking "Load data" will open a confirmation dialog; proceeding to load
the data will suppress this preflight check for that table for the rest
of the browser session
<img width="450" height="305" alt="image"
src="https://github.com/user-attachments/assets/d3197a5d-a861-47a8-95da-e157972ce092"
/>
## Other changes
- Also bumped the query cost threshold from 100,000 to 200,000 - the
former might have been too aggressive 😓
- (Unrelated) Added query cost tooltip for cron jobs high query cost
warning
<img width="450" height="230" alt="image"
src="https://github.com/user-attachments/assets/d2c66972-7c4c-4f99-818c-e90a0991c2f5"
/>
## Context
Part of an investigation to see how we can make the dashboard more
resilient for large databases by ensuring that the dashboard never
becomes the reason for taking down the database accidentally.
Am proposing that for interfaces that rely heavily on database queries to
render data, we add preflight checks to ensure that we never run queries
exceeding a certain cost threshold (and have UI handlers to communicate
this). This can be done by running an EXPLAIN query before the actual
query; if the cost from the EXPLAIN exceeds the specified threshold, the
UI surfaces an error and skips the actual query.
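The preflight check can be sketched as follows, assuming the shape of Postgres `EXPLAIN (FORMAT JSON)` output (an array of rows, each with a top-level `Plan` object carrying a `Total Cost` field); the function name and threshold handling are illustrative:

```typescript
// Minimal shape of the EXPLAIN (FORMAT JSON) result we care about.
interface ExplainPlanNode {
  'Total Cost': number
}
interface ExplainRow {
  Plan: ExplainPlanNode
}

// Threshold mentioned in this PR series (bumped from 100,000 to 200,000).
const COST_THRESHOLD = 200_000

function exceedsCostThreshold(explainRows: ExplainRow[], threshold = COST_THRESHOLD): boolean {
  // Take the max cost across returned plans (covers the multi-statement case).
  const maxCost = Math.max(...explainRows.map((r) => r.Plan['Total Cost']))
  return maxCost > threshold
}

// Example: a seq-scan + sort plan on a large table with an unindexed column.
const expensivePlan: ExplainRow[] = [{ Plan: { 'Total Cost': 512_345.67 } }]
if (exceedsCostThreshold(expensivePlan)) {
  // Skip the real query and surface the high-cost warning UI instead.
  console.log('Data not loaded to protect database performance')
}
```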
## Demo
Am piloting this with the Table Editor, and got an example here in which
my table has 500K+ rows, and I'm trying to sort on an unindexed column:
https://github.com/user-attachments/assets/ccad2ea9-d62c-4106-8295-2a6df5941474
With this UX, the pros are that
- It's relatively seamless and not too invasive, most users won't notice
this unless they run into this specific scenario
- We can incrementally apply this to other parts of the dashboard, next
will probably be Auth Users for example
However there are some considerations:
- The additional EXPLAIN query adds a bit more latency to the query,
since it's a separate API request to the query endpoint
- ^ On a similar note, it will hammer the API a bit more, which may
result in a higher probability of 429s
- However, I reckon that the preflight checks are meant to be used
sparingly and only for certain parts of the dashboard that we believe
may cause high load.
- e.g. for the Table Editor, I reckon we only need this for fetching
rows; the count query is largely optimized already (although we could
add a preflight check there too)
- It's just meant to be a safeguard to prevent running heavy queries on
the database
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **New Features**
* Query preflight with cost checks and a user-facing high-cost dialog
showing cost details and remediation suggestions.
* Grid exposes an explicit error flag and surfaces richer error
metadata.
* **Bug Fixes**
* Standardized error handling and more consistent error displays across
the app.
* Explain analysis now reports an additional max-cost metric for
queries.
* **UI**
* Tweaked empty-state interaction/layout and slightly wider header
delete control.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Ali Waseem <waseema393@gmail.com>
Exporting all rows (in CSV, SQL, or JSON format) currently uses offset pagination, which can cause performance problems if the table is large. There is also a correctness problem if the table is being actively updated as the export happens, because the relative row offsets could shift between queries.
Now that composite filters are available in postgres-meta, we can change to using cursor pagination on the primary key (or any non-null unique keys) wherever possible. Where this is not possible, the user will be shown a confirmation dialog explaining the possible performance impact.
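Cursor (keyset) pagination on a possibly composite primary key can be sketched with a SQL row-value comparison. This builder is illustrative only: it uses naive literal encoding where real code would use parameterized queries, and its names are assumptions:

```typescript
// Builds one page of a keyset-paginated export query. The row-value
// comparison (a, b) > ($lastA, $lastB) resumes exactly after the previous
// page's last row, so concurrent inserts/deletes can't shift row offsets
// the way they can with OFFSET pagination.
function keysetPageSql(
  table: string,
  pkColumns: string[],
  lastRow: Record<string, unknown> | null,
  pageSize: number
): string {
  const cols = pkColumns.map((c) => `"${c}"`).join(', ')
  // NOTE: JSON.stringify is a stand-in for proper SQL parameter binding.
  const cursor = lastRow
    ? `where (${cols}) > (${pkColumns.map((c) => JSON.stringify(lastRow[c])).join(', ')}) `
    : ''
  return `select * from "${table}" ${cursor}order by ${cols} limit ${pageSize}`
}

// First page, then the next page resuming after the last row seen.
const page1 = keysetPageSql('orders', ['tenant_id', 'id'], null, 500)
const page2 = keysetPageSql('orders', ['tenant_id', 'id'], { tenant_id: 7, id: 1234 }, 500)
```

Ordering by the full primary key makes the sort total, which is what makes the cursor deterministic; without a primary key or non-null unique key, the fallback remains offset pagination behind the confirmation dialog.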
---------
Co-authored-by: Ali Waseem <waseema393@gmail.com>
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
There is an edge case interaction between the Postgres query parser and
MSSQL foreign tables, where the query parser may drop sort clauses that
are redundant with applied filters. This leads to invalid MSSQL syntax,
because the resulting query has a `limit` but no `sort`, and the user
sees a confusing error message.
This PR detects this edge case on MSSQL foreign tables. There are three
cases:
1. The user filters by a column, but there are still other columns
available for sorting. The default search for a sorting column will
leave out the filtered column.
2. The user filters by a column/several columns, and there are no more
columns that can be used for sorting. We stop the query and show an
admonition.
3. The user filters by a column, then tries to sort by the same column.
We stop the query and show an admonition.
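The three cases can be sketched as a pair of illustrative helper functions (the names are assumptions, not the actual code):

```typescript
// Case 1: pick a default sort column for an MSSQL foreign table, skipping
// columns already used in filters (the parser would drop a sort that is
// redundant with an applied filter, leaving a `limit` with no `sort`).
function pickDefaultSortColumn(allColumns: string[], filteredColumns: string[]): string | null {
  const candidate = allColumns.find((c) => !filteredColumns.includes(c))
  // Case 2: null means no sortable column remains -> stop the query and
  // show an admonition instead of sending invalid MSSQL syntax.
  return candidate ?? null
}

// Case 3: an explicit sort on a filtered column would also be dropped by
// the parser, so it must be rejected up front with an admonition.
function validateExplicitSort(sortColumn: string, filteredColumns: string[]): boolean {
  return !filteredColumns.includes(sortColumn)
}
```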
* Add custom types for queries, mutations and infinite queries.
* Migrate all queries to use the new type.
* Migrate all infinite queries to useCustomInfiniteQueryOptions.
* Migrate all mutations to use useCustomMutationOptions.
* Add type to all imports in `types` folder.
* Migrate all uses of invalidateQueries to use object syntax.
* Migrate the remainder of useInfiniteQuery.
* Migrate all setQueriesData.
* Migrate all fetchQuery uses.
* Migrate some leftover functions from RQ.
* Fix issues found by Charis.
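For reference, the object syntax targeted by the `invalidateQueries` migration looks like this. The types below are a minimal stand-in mirroring the shape of @tanstack/react-query v5's filters object, not the library's actual definitions:

```typescript
// Minimal stand-in types for the filters object shape.
type QueryKey = readonly unknown[]
interface InvalidateFilters {
  queryKey?: QueryKey
  exact?: boolean
}

// Before the migration (positional style):
//   queryClient.invalidateQueries(['table-rows', projectRef])
// After the migration (object style), which every call site now uses:
//   queryClient.invalidateQueries({ queryKey: ['table-rows', projectRef] })
function buildInvalidateFilters(key: QueryKey, exact = false): InvalidateFilters {
  return { queryKey: key, exact }
}

const filters = buildInvalidateFilters(['table-rows', 'project-ref'], true)
// queryClient.invalidateQueries(filters) // with a real QueryClient instance
```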
Increasing this because of reports that CSV uploads are hitting 429s. (I
checked and there are no Retry-After headers, so we should be falling
back to the default exponential backoff.)
Also brings the implementation in line with the comment :P Before, the
comment claimed we were starting at 1s but we were starting at 500ms.
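The retry-delay policy described here can be sketched as follows (the cap and helper name are assumptions; the 1s base matches the comment referenced above, and Retry-After takes precedence when the server sends it):

```typescript
// Returns the delay before retry `attempt` (0-based) in milliseconds.
function retryDelayMs(attempt: number, retryAfterSeconds?: number): number {
  // Honor an explicit Retry-After header when present...
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000
  // ...otherwise fall back to exponential backoff starting at 1s
  // (previously the code started at 500ms while the comment claimed 1s).
  const base = 1000
  const cap = 30_000 // assumed upper bound to avoid unbounded waits
  return Math.min(base * 2 ** attempt, cap)
}
```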
* feat: optimized users page
* Update UI
* Reinstate footer with count if mode is freeform
* Simplify disabled sort in performance mode
* Clean
* Small fix
* Final fixes
* Shift users SQL query to packages/pg-meta
* Nit unrelated: Clear query params from useLogsUrlState when going to logs tab of a selected user
* Shift user count SQL and get count estimate SQL into packages/pg-meta
* Fix
* Nit
* Nit
* Minor nits
* Refactor UX for searching
---------
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* chore(studio): move Query to pgMeta add tests
- Move the Query builder from studio to pgMeta
- Add e2e tests over the generated sql to ensure syntax and runtime
result over pg database
- fix bug with orde by for table with undefined column
* chore: add table-row-query to pgMeta and tests
* chore: fix query import path
* chore: reduce maxArraySize
* chore: use pg-meta getTableRowsSql implementation in studio
* chore: add truncation on large array fields
* chore: set ES target for lint
* chore: update comment
* chore: reduce test size for CI
* chore(studio): move Query to pgMeta add tests
- Move the Query builder from studio to pgMeta
- Add e2e tests over the generated sql to ensure syntax and runtime
result over pg database
- fix bug with orde by for table with undefined column
* chore: fix query import path
* chore: set ES target for lint
* chore: add github action for pg-meta test package
* chore: add tsconfig to sparse checkout
* chore: remove useExecuteSqlQuery part 1
* fix invalidation
* fixes
* more fixes
* I fixed it, but tested on preview instead of local 🤦🏻
* only refetch table rows and not count on row updates
* removed unneeded invalidations and prefetched new table after create
* Update default sort order to use all primary keys in the case of composite primary keys, to prevent rows resorting after updating a row (still within the threshold of applying only when there are at most 50k rows)
* Fix updating column fk value when there are multiple columns whose target schema, table and column are the same
* Fix updating composite foreign key from within the grid editor itself
* Set the primary key as the default sort if no other sort exists
* Rename
* Add contextual toast messages when trying to update/delete row on table with no PKs
---------
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Support rendering large text/jsons in grid
* Support rendering large json in side panel, and large text in grid
* Support rendering large text in side panel
* Fix
* Update comment
* Fix the editValue type for json fields.
* Fix
* Update blur value
* Fix
* Feex
* Padding bump
* Small refactor
---------
Co-authored-by: Ivan Vasilov <vasilov.ivan@gmail.com>
Co-authored-by: Terry Sutton <saltcod@gmail.com>
* fix: Unable to export data from the table to CSV
* Update apps/studio/components/layouts/TableEditorLayout/EntityListItem.tsx
* Prettier
---------
Co-authored-by: Terry Sutton <saltcod@gmail.com>
* feat: user impersonation
* refactor
* add anon impersonation
* add user impersonation to graphiql
* thanks Ivan for this one
* add radio buttons
* progress
* working user selector
* adds loading and error states to user selector
* nicer popover button
* default graphiql to use service role key
* styling updates
* fix padding in graphiql
* add service role warning to graphiql
* add user impersonation to realtime inspector
* add feature flag
* use Alert_Shadcn_ instead of old Alert
* Update apps/studio/lib/role-impersonation.ts
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* update title
* Update apps/studio/components/interfaces/RoleImpersonationSelector/UserImpersonationSelector.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Update apps/studio/components/interfaces/RoleImpersonationSelector/Icons.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Update apps/studio/components/interfaces/RoleImpersonationSelector/Icons.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Update apps/studio/components/interfaces/RoleImpersonationSelector/Icons.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Update apps/studio/components/interfaces/RoleImpersonationSelector/Icons.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Update apps/studio/components/interfaces/RoleImpersonationSelector/Icons.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* Update apps/studio/components/interfaces/RoleImpersonationSelector/UserImpersonationSelector.tsx
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* add constant padding at the bottom of user selector
* update graphiql buttons font size
* make rls warning dismissable
* use text-foreground on rls warning title
* fix no results while impersonating a role
* fix error line number when impersonating a role
---------
Co-authored-by: Jonathan Summers-Muir <MildTomato@users.noreply.github.com>
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>