Mark provenance of SQL via the branded types `SafeSqlFragment` and
`UntrustedSqlFragment`. Only `SafeSqlFragment` should be executed; an
`UntrustedSqlFragment` requires explicit user approval (the SQL is shown
on screen and the user has to click something) before it is promoted to
`SafeSqlFragment`.
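For illustration, a minimal sketch of how the branding could work; the two type names come from this change, while `asUntrustedSql`, `promoteAfterUserApproval`, and `executeSql` are hypothetical stand-ins for the real call sites:

```ts
// The brand exists only at the type level, so promotion must go through
// an explicit, auditable cast.
type SafeSqlFragment = string & { readonly __brand: 'SafeSqlFragment' }
type UntrustedSqlFragment = string & { readonly __brand: 'UntrustedSqlFragment' }

// Hypothetical constructor: snippets and AI-generated SQL start life as untrusted.
function asUntrustedSql(sql: string): UntrustedSqlFragment {
  return sql as UntrustedSqlFragment
}

// Hypothetical promotion point: the only place an untrusted fragment may become
// safe, called only after the user has seen the SQL and clicked to approve it.
function promoteAfterUserApproval(sql: UntrustedSqlFragment): SafeSqlFragment {
  return sql as unknown as SafeSqlFragment
}

// Execution only accepts the safe brand; passing an UntrustedSqlFragment
// here is a compile-time error.
declare function executeSql(sql: SafeSqlFragment): Promise<unknown>
```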
## Summary by CodeRabbit
* **New Features**
  * Editor and RLS tester show loading states for inferred/generated SQL and include a dedicated user SQL editor for safer edits.
* **Refactor**
  * Platform-wide SQL handling tightened: snippets and AI-generated SQL are treated as untrusted/display-only until promoted, improving safety and consistency.
This PR migrates the whole monorepo to use Tailwind v4:
- Removed the `@tailwindcss/container-queries` plugin, since it's included by default in v4
- Bumped all instances of Tailwind to v4 and made minimal changes to the shared config to remove unsupported features (`alpha` mentions)
- Migrated all apps to be compatible with v4 configs
- Fixed the `typography.css` import in 3 apps
- Added missing rules which were included by default in v3
- Ran `pnpm dlx @tailwindcss/upgrade` on all apps, which renames a lot of classes
- Renamed all misnamed classes according to https://tailwindcss.com/docs/upgrade-guide#renamed-utilities in all apps
---------
Co-authored-by: Jordi Enric <jordi.err@gmail.com>
## Summary
- Reset the table rows query after the user confirms loading data on a
high-cost table, so React Query re-executes the fetch without the
preflight check
- Close the confirmation dialog after the user clicks "I understand,
proceed"
**Root cause:** `preflightCheck` is intentionally excluded from the
React Query query key (to avoid duplicate cache entries). When the user
clicked "Load data", the preflight flag flipped to `false` but the query
key stayed the same — so React Query returned the cached error instead
of refetching.
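A sketch of the confirm handler under these assumptions (the key shape is hypothetical; the point is that resetting the cached entry forces a refetch even though the key is unchanged):

```ts
import { useQueryClient } from '@tanstack/react-query'

// Since `preflightCheck` is deliberately not part of the query key,
// flipping the flag alone cannot trigger a refetch. Resetting the cached
// entry makes React Query re-run the fetch without the preflight check.
function useConfirmLoadData(projectRef: string, tableId: number) {
  const queryClient = useQueryClient()
  return (closeDialog: () => void) => {
    queryClient.resetQueries({ queryKey: ['projects', projectRef, 'table-rows', tableId] })
    closeDialog()
  }
}
```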
## Test plan
- [x] Navigate to a table with high estimated query cost (triggers "Data
not loaded to protect database performance")
- [x] Click "Load data" → "I understand, proceed"
- [x] Verify the dialog closes and table data loads
- [x] Verify the warning does not reappear for the same table in the
same session
To test this, you can run it locally with `COST_THRESHOLD` set to a low
value (< 10).
Fixes FE-2979
## Summary by CodeRabbit
* **Bug Fixes**
  * Confirming the high-cost warning now closes the dialog and proceeds with loading as expected.
  * Improved query cache key composition so queries reflect the full set of relevant parameters for correct caching.
  * Loading from the grid error now properly clears related cached results and proceeds when the user confirms.
## Context
Resolves https://github.com/supabase/supabase/issues/43548
There's currently an issue with the Table Editor where if you have, for
example, a nullable `text` column with a default value, inserting a new
row and selecting "Set to NULL" doesn't do anything, and saving will
insert the row with the default value.
<img width="700" height="258" alt="image"
src="https://github.com/user-attachments/assets/6a284ebb-c346-40a6-9a30-793118844084"
/>
This stems from legacy logic in the Table Editor whereby we treat
`null` values as "no input" - which is incorrect, as `null` values are
also valid values. So the PR here changes a few things to resolve this
properly:
## Changes involved
Main fix:
- `undefined` will be the "no input" value instead, and it'll be the
default value when generating the row object for inserting a new row
- `NULL`, or even an empty string like `''`, will be treated as-is
(valid inputs); see the sketch after this list
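A minimal sketch of these semantics, with a hypothetical row-serialization helper:

```ts
// `undefined` means "no input" (let Postgres apply the column default),
// while `null` and '' are valid user-provided values sent as-is.
type CellInput = string | null | undefined

function toInsertPayload(row: Record<string, CellInput>) {
  const payload: Record<string, string | null> = {}
  for (const [column, value] of Object.entries(row)) {
    if (value === undefined) continue // omit the column -> Postgres uses DEFAULT
    payload[column] = value // includes null and '' as explicit values
  }
  return payload
}
```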
Secondary adjustments:
- (Queue operations) Queueing an insert with no value for a column
whose default value is NULL will show the placeholder as `DEFAULT`
instead of `NULL`, for a more accurate representation
<img width="892" height="96" alt="image"
src="https://github.com/user-attachments/assets/02cf86bf-c17b-4e25-9a8f-17960b1d2575"
/>
- Added a `Set to Default` CTA here, which only shows up when adding a
new row or updating a queued insert row operation; it sets the value of
the input field back to `undefined` so that PG handles it as the default
value
<img width="734" height="208" alt="image"
src="https://github.com/user-attachments/assets/23887c0c-533e-4494-acbe-61309ff5d7c5"
/>
## To test
Verify within the Table Editor (along with queue operation feature
preview)
- For inserting a new row, setting the value to NULL and setting the
value to Default both work
- For updating a row, setting the value to NULL works
## Context
Shifts all remaining dashboard queries into pg-meta so that we
centralize all manually written queries in one place.
Having them in packages/pg-meta also allows us to write tests for them.
## To test
Just needs a smoke test on
- Role Impersonation
- Lints
- Data API
- Database
- Enumerated Types
- Integrations
- Foreign Data Wrappers
- Vault
## Context
Related to FE-2557
Part of shifting manually written dashboard queries into
packages/pg-meta, where:
- pg-meta can be code owners of the queries
- we can write tests for the queries
This PR just shifts all the `.sql.ts` files that we previously created
into packages/pg-meta.
There are still other areas we need to shift over as well, which I'll
address in subsequent PRs.
## Notable changes
- `getTableRowsCountSql` -> Opted to shift `formatFilterValue` logic out
before calling this method (ref `table-rows-count-query`)
- `getDeleteOldCronJobRunDetailsByCtidSql` -> Opted to shift
`validatePageNumber` logic out before calling this method (ref
`CronJobsTab.useCleanupActions`)
Previously `connectionString` was passed into the query key for
`table-rows`. Since `connectionString` is unstable, it would cause the
query key to change randomly, often making users briefly see random
"no rows found" states while data loaded for the new key.
This PR uses `readReplicaIdentifier` as a stable version of
`connectionString`, maintaining the existing functionality while
avoiding the unnecessary data reloads.
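A sketch of the key change, with hypothetical names:

```ts
// `connectionString` can change between renders even when the underlying
// database is the same, so using it in the key caused spurious cache misses.
// `readReplicaIdentifier` is a stable stand-in for the same information.
const tableRowsKeys = {
  // before: ['table-rows', connectionString, tableId, ...]
  tableRows: (readReplicaIdentifier: string | undefined, tableId: number) =>
    ['table-rows', readReplicaIdentifier, tableId] as const,
}
```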
Note that there are still quite a few places where `connectionString` is
passed into the query key. We'll fix these as follow up PRs.
I'm opting to also include the fix from
https://github.com/supabase/supabase/pull/43572 in here for consistency.
**To test:**
- Ensure table editor functions normally
- Use the table editor with a read replica selected
## Context
Part of dashboard scalability project
Opting to use the connection string of the project's read replica (if
available) for read queries on the database.
Trialing this with the Table Editor as a first pass - the changes
involved will opt to use the replica connection string for
`useTableRowsQuery`, `useTableRowsCountQuery`, and
`useForeignKeyConstraintsQuery`.
There are definitely optimizations to be done for deciding which replica
to use - but I'm starting off with rather naive logic that prioritizes
replicas in the same region as the project.
## Changes involved
- We're no longer passing `connectionString` as a param into the
affected hooks; the `connectionString` is derived from within those
hooks instead
- The change is feature flagged, so things should be status quo if the
flag is off (use the primary database's connection string)
- Added a `useConnectionStringForReadOps` hook which returns the
replica's connection string if the feature flag is on and the project
has a replica available (otherwise it defaults to the primary database's
connection string) - see the sketch after this list
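A hedged sketch of the hook; the flag name and project shape are assumptions, not the actual implementation:

```ts
declare function useFlag(name: string): boolean // hypothetical feature-flag hook

interface ProjectConnections {
  connectionString: string // primary database
  region: string
  replicas: { connectionString: string; region: string }[]
}

function useConnectionStringForReadOps(project: ProjectConnections): string {
  const readReplicasEnabled = useFlag('readReplicaReadOps') // assumed flag name
  if (!readReplicasEnabled || project.replicas.length === 0) {
    return project.connectionString
  }
  // Naive prioritization: prefer a replica in the same region as the project
  const sameRegion = project.replicas.find((r) => r.region === project.region)
  return (sameRegion ?? project.replicas[0]).connectionString
}
```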
## To test
- [ ] Verify that the table editor works as expected for a project that
has read replicas (There shouldn't be any change really)
- [ ] Also just double check that updating cells in the table editor
works as well (There's no change there, we're using the primary DB's
connection string for mutation ops)
- [ ] ^ Same thing for a project that doesn't have read replicas
- [ ] ^ Same thing for local / self-host
## Context
Noticed that the table editor prefetch wasn't working as intended, as
the table's rows were getting fetched again when opening the table.
The fix is that `preflightCheck` should've been excluded from the
`table-rows` query key.
## To test
Within the Table Editor
- [ ] Verify that `table-rows` is getting prefetched when hovering over
a table in the side menu
- [ ] Verify that `table-rows` doesn't get fetched again when
subsequently opening the table (There shouldn't be a UI loader either -
rows should render immediately)
## Context
Related to this previous PR
[here](https://github.com/supabase/supabase/pull/42321)
Table Editor: Adding a CTA to the `HighQueryCost` UI to allow users to
proceed with fetching data despite the high query cost warning, so that
this safeguard doesn't completely block users from their workflows (we
realised that certain heavy queries are required and the safeguard
shouldn't be creating dead-ends for users)
<img width="1159" height="264" alt="image"
src="https://github.com/user-attachments/assets/5fa01f7f-4442-4349-91f2-f4275e177f89"
/>
Clicking "Load more" will open a confirmation dialog, in which
proceeding to load the data will thereafter suppress this preflight
check for the table, for the rest of the browser session
<img width="450" height="305" alt="image"
src="https://github.com/user-attachments/assets/d3197a5d-a861-47a8-95da-e157972ce092"
/>
## Other changes
- Also bumped the query cost threshold from 100,000 to 200,000 - the
former might have been too aggressive 😓
- (Unrelated) Added query cost tooltip for cron jobs high query cost
warning
<img width="450" height="230" alt="image"
src="https://github.com/user-attachments/assets/d2c66972-7c4c-4f99-818c-e90a0991c2f5"
/>
## I have read the
[CONTRIBUTING.md](https://github.com/supabase/supabase/blob/master/CONTRIBUTING.md)
file.
YES
## What kind of change does this PR introduce?
Completion of batch edits on the table editor
## Demo
https://github.com/user-attachments/assets/ab5a7112-3dcc-456a-a5fc-1c9a99fccf34
## Summary by CodeRabbit
* **New Features**
  * Queued add/edit/delete operations with optimistic UI, conflict resolution, and queue-based flows
  * Side-panel items showing queued add/delete row previews
* **UI**
  * Pending-add placeholders plus a visible "DEFAULT" marker in grid cells
  * Visual row states: green for pending adds, red with strike-through for pending deletes
  * Queue-based deletes can bypass confirmation when queue mode is enabled
* **Tests**
  * Expanded tests covering queue conflict resolution and queue utilities
---------
Co-authored-by: Alaister Young <a@alaisteryoung.com>
## Context
Part of an investigation to see how we can make the dashboard more
resilient for large databases, by ensuring that the dashboard never
becomes the reason for accidentally taking down the database.
Am proposing that for interfaces that rely heavily on queries to the
database for data to render, we add preflight checks to ensure that we
never run queries that exceed a certain cost threshold (and also have UI
handlers to communicate this). This can be done by running an EXPLAIN
query before running the actual query; if the cost from the EXPLAIN
exceeds a specified threshold, the UI throws an error and skips calling
the actual query.
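A sketch of the preflight flow under these assumptions; `executeSql` is a placeholder for the query endpoint, the threshold is illustrative, and the exact shape of the EXPLAIN result depends on the SQL execution layer:

```ts
const COST_THRESHOLD = 100_000 // illustrative

async function runWithPreflight(sql: string, executeSql: (query: string) => Promise<any[]>) {
  // EXPLAIN (FORMAT JSON) returns the planner's estimate without executing the query
  const [row] = await executeSql(`explain (format json) ${sql}`)
  const totalCost: number = row['QUERY PLAN'][0]['Plan']['Total Cost']
  if (totalCost > COST_THRESHOLD) {
    // Skip the real query; the UI catches this and shows the high-cost error state
    throw new Error(`Estimated cost ${totalCost} exceeds threshold ${COST_THRESHOLD}`)
  }
  return executeSql(sql)
}
```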
## Demo
Am piloting this with the Table Editor - here's an example in which my
table has 500K+ rows and I'm trying to sort on an unindexed column:
https://github.com/user-attachments/assets/ccad2ea9-d62c-4106-8295-2a6df5941474
With this UX, the pros are that
- It's relatively seamless and not too invasive, most users won't notice
this unless they run into this specific scenario
- We can incrementally apply this to other parts of the dashboard, next
will probably be Auth Users for example
However there are some considerations:
- The additional EXPLAIN query adds a bit more latency to the query
since it's a separate API request to the query endpoint
- ^ On a similar note, it will hammer the API a bit more, which may
result in a higher probability of 429s
- However, I reckon that the preflight checks are meant to be used
sparingly and only for certain parts of the dashboard that we believe
may cause high load.
- e.g. for the Table Editor, I reckon we only need this for fetching
rows? The count query is largely optimized already (although we could
just add a preflight check there too)
- It's just meant to be a safeguard to prevent running heavy queries on
the database
## Summary by CodeRabbit
* **New Features**
  * Query preflight with cost checks and a user-facing high-cost dialog showing cost details and remediation suggestions.
  * Grid exposes an explicit error flag and surfaces richer error metadata.
* **Bug Fixes**
  * Standardized error handling and more consistent error displays across the app.
  * Explain analysis now reports an additional max-cost metric for queries.
* **UI**
  * Tweaked empty-state interaction/layout and slightly wider header delete control.
---------
Co-authored-by: Ali Waseem <waseema393@gmail.com>
Exporting all rows (in CSV, SQL, or JSON format) currently uses offset pagination, which can cause performance problems if the table is large. There is also a correctness problem if the table is being actively updated as the export happens, because the relative row offsets could shift between queries.
Now that composite filters are available in postgres-meta, we can change to using cursor pagination on the primary key (or any non-null unique keys) wherever possible. Where this is not possible, the user will be shown a confirmation dialog explaining the possible performance impact.
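A minimal sketch of the keyset approach, with a hypothetical SQL builder (real code must quote identifiers and values properly):

```ts
// Each page filters past the last-seen primary key instead of using OFFSET,
// so page cost stays proportional to the page size and rows can't shift
// between pages while the table is being written to.
function exportPageSql(table: string, pkColumn: string, lastPk: string | null, pageSize: number) {
  const where = lastPk === null ? '' : `where ${pkColumn} > '${lastPk}'`
  return `select * from ${table} ${where} order by ${pkColumn} asc limit ${pageSize}`
}
```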
---------
Co-authored-by: Ali Waseem <waseema393@gmail.com>
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
There is an edge case interaction between the Postgres query parser and
MSSQL foreign tables, where the query parser may drop sort clauses that
are redundant with applied filters. This leads to invalid MSSQL syntax,
because the resulting query has a `limit` but no `sort`, and the user
sees a confusing error message.
This PR detects this edge case on MSSQL foreign tables. There are three
cases:
1. The user filters by a column, but there are still other columns
available for sorting. The default search for a sorting column will
leave out the filtered column.
2. The user filters by a column/several columns, and there are no more
columns that can be used for sorting. We stop the query and show an
admonition.
3. The user filters by a column, then tries to sort by the same column.
We stop the query and show an admonition.
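A rough sketch of the three-case check, with hypothetical names:

```ts
function pickMssqlSortColumn(
  columns: string[],
  filteredColumns: Set<string>,
  requestedSort?: string
): { column?: string; error?: string } {
  // Case 3: sorting by an already-filtered column produces the invalid query
  if (requestedSort && filteredColumns.has(requestedSort)) {
    return { error: 'Cannot sort by a column that is already used in a filter' }
  }
  if (requestedSort) return { column: requestedSort }
  // Case 1: the default search for a sort column skips filtered columns
  const candidate = columns.find((c) => !filteredColumns.has(c))
  // Case 2: every column is filtered, so no valid ORDER BY can be built
  if (!candidate) return { error: 'No sortable columns remain after filtering' }
  return { column: candidate }
}
```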
* Add custom types for queries, mutations and infinite queries.
* Migrate all queries to use the new type.
* Migrate all infinite queries to useCustomInfiniteQueryOptions.
* Migrate all mutations to use useCustomMutationOptions.
* Add `type` to all imports in the `types` folder.
* Migrate all uses of invalidateQueries to use object syntax.
* Migrate the remainder of useInfiniteQuery.
* Migrate all setQueriesData.
* Migrate all fetchQuery uses.
* Migrate some leftover functions from RQ.
* Fix issues found by Charis.
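For reference, the invalidateQueries object-syntax change mentioned above looks roughly like this (the query key is illustrative):

```ts
import { QueryClient } from '@tanstack/react-query'

const queryClient = new QueryClient()
const projectRef = 'abc123' // illustrative

// before (positional style): queryClient.invalidateQueries(['projects', projectRef, 'tables'])
// after (object syntax):
queryClient.invalidateQueries({ queryKey: ['projects', projectRef, 'tables'] })
```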
Increasing this because of reports that CSV uploads are hitting 429s. (I
checked and there are no Retry-After headers, so we should be falling
back to the default exponential backoff.)
Also brings the implementation in line with the comment :P Before, the
comment claimed we were starting at 1s but we were starting at 500ms.
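A sketch of the resulting delay curve, assuming a simple doubling backoff with an arbitrary cap:

```ts
// Start at 1s (matching the comment, rather than the previous 500ms) and
// double on each retry; the 30s cap is an assumption for illustration.
function retryDelayMs(attempt: number): number {
  return Math.min(1_000 * 2 ** attempt, 30_000)
}
```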
* feat: optimized users page
* Update UI
* Reinstate footer with count if mode is freeform
* Simplify disabled sort in performance mode
* Clean
* Small fix
* Final fixes
* Shift users SQL query to packages/pg-meta
* Nit unrelated: Clear query params from useLogsUrlState when going to logs tab of a selected user
* Shift user count SQL and get count estimate SQL into packages/pg-meta
* Fix
* Nit
* Nit
* Minor nits
* Refactor UX for searching
---------
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
Add telemetry tracking for activation-related table operations
- Implement SQL event parser to detect table creation, data insertion, and RLS enablement
- Add telemetry tracking for these operations in table editor as well
- Add test coverage for SQL event parser
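A hypothetical sketch of the detection; the real parser may work on a SQL AST, but naive pattern checks convey the intent:

```ts
type SqlEvent = 'table_created' | 'data_inserted' | 'rls_enabled'

function detectSqlEvents(sql: string): SqlEvent[] {
  const events: SqlEvent[] = []
  if (/\bcreate\s+table\b/i.test(sql)) events.push('table_created')
  if (/\binsert\s+into\b/i.test(sql)) events.push('data_inserted')
  if (/\balter\s+table\b[\s\S]*\benable\s+row\s+level\s+security\b/i.test(sql)) {
    events.push('rls_enabled')
  }
  return events
}
```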
* Update Supabase docs URLs to use env variable
Co-authored-by: a <a@alaisteryoung.com>
* Refactor: Use DOCS_URL constant for documentation links
This change centralizes documentation links using a new DOCS_URL constant, improving maintainability and consistency.
Co-authored-by: a <a@alaisteryoung.com>
* Refactor: Use DOCS_URL constant for all documentation links
This change replaces hardcoded documentation URLs with a centralized constant, improving maintainability and consistency.
Co-authored-by: a <a@alaisteryoung.com>
* replace more instances
* ci: Autofix updates from GitHub workflow
* remaining instances
* fix duplicate useRouter
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: alaister <10985857+alaister@users.noreply.github.com>
fix: handle undefined count when fetching table rows
count can be undefined in the return from fetching table rows (it uses
executeSqlQuery under the hood, which has no type safety), so we need to
deal with that case.
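A minimal sketch of the guard, with an assumed result shape:

```ts
interface TableRowsResult {
  rows: unknown[]
  count?: number // may be undefined: executeSqlQuery is untyped under the hood
}

function getRowCount(result: TableRowsResult): number | undefined {
  return typeof result.count === 'number' ? result.count : undefined
}
```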
* fix: estimate number of users when count is large
In the Auth Users table, we always fetch an exact count of users. This
can be a problem for projects with many (>50K) users, as the count(*)
might cause performance issues on the database. We already have logic in
the Table Editor to only run automatic count estimates (fetching the
exact count only if the user requests it); this change ports the same
logic over to Auth Users.
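A sketch of the estimate-first approach being ported, assuming a generic `executeSql` helper; the result handling is illustrative:

```ts
async function getUsersCount(executeSql: (query: string) => Promise<any[]>, exact: boolean) {
  if (exact) {
    // Only run the expensive exact count when the user explicitly asks for it
    const [row] = await executeSql(`select count(*)::int as count from auth.users`)
    return { count: row.count as number, isEstimate: false }
  }
  // Cheap planner estimate from pg_class instead of scanning the table
  const [row] = await executeSql(
    `select reltuples::bigint as estimate from pg_class where oid = 'auth.users'::regclass`
  )
  return { count: Number(row.estimate), isEstimate: true }
}
```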
* Nit refactor
---------
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
* chore(studio): move Query to pgMeta, add tests
- Move the Query builder from studio to pgMeta
- Add e2e tests over the generated SQL to verify syntax and runtime
results against a pg database
- Fix bug with order by for a table with an undefined column
* chore: add table-row-query to pgMeta and tests
* chore: fix query import path
* chore: reduce maxArraySize
* chore: use pg-meta getTableRowsSql implementation in studio
* chore: add truncation on large array fields
* chore: set ES target for lint
* chore: update comment
* chore: reduce test size for CI
* chore(studio): move Query to pgMeta, add tests
- Move the Query builder from studio to pgMeta
- Add e2e tests over the generated SQL to verify syntax and runtime
results against a pg database
- Fix bug with order by for a table with an undefined column
* chore: fix query import path
* chore: set ES target for lint
* chore: add github action for pg-meta test package
* chore: add tsconfig to sparse checkout
* chore: remove useExecuteSqlQuery part 1
* fix invalidation
* fixes
* more fixes
* I fixed it, but tested on preview instead of local 🤦🏻
* only refetch table rows and not count on row updates
* removed unneeded invalidations and prefetched new table after create
* Update the design of the sonner toasts. Add the close button by default.
* Migrate studio and www apps to use the SonnerToaster.
* Migrate all toasts from studio.
* Migrate all leftover toasts in studio.
* Add a new toast component with progress. Use it in studio.
* Migrate the design-system app.
* Refactor the consent toast to use sonner.
* Switch docs to use the new sonner toasts.
* Remove toast examples from the design-system app.
* Remove all toast-related components and old code.
* Fix the progress bar in the toast progress component. Also make the bottom components vertically centered.
* Fix the width of the toast progress.
* Use text-foreground-lighter instead of muted for ToastProgress text
* Rename ToastProgress to SonnerProgress.
* Shorten the text in sonner progress.
* Use the correct classes for the close button. Add a const var for the default toast duration. Remove the custom width class from sonner.
* Set the position for all progress toasts to bottom right. Set the duration for all toasts to the default (when reusing a toast id from loading/progress toast, the duration is set to infinity).
* Fix the playwright tests.
* Refactor imports to use ui instead of @ui.
* Change all imports of react-hot-toast with sonner. These components were merged since the last commit to this branch.
* Remove react-hot-toast lib.
---------
Co-authored-by: Joshen Lim <joshenlimek@gmail.com>
Co-authored-by: Jonathan Summers-Muir <MildTomato@users.noreply.github.com>