docs: Fix typos (#40906)

This commit is contained in:
omahs
2025-12-01 13:19:35 +01:00
committed by GitHub
parent 4804642b18
commit 96fb9fb99d
10 changed files with 11 additions and 11 deletions
@@ -19,11 +19,11 @@ You can examine your database and queries for these issues using either the [Sup
The Supabase CLI comes with a range of tools to help inspect your Postgres instances for potential issues. The CLI gets the information from <a href="https://www.postgresql.org/docs/current/internals.html" target="_blank">Postgres internals</a>. Therefore, most tools provided are compatible with any Postgres database, regardless of whether it is a Supabase project or not.
-You can find installation instructions for the the Supabase CLI <a href="/docs/guides/cli" target="_blank">here</a>.
+You can find installation instructions for the Supabase CLI <a href="/docs/guides/cli" target="_blank">here</a>.
### The `inspect db` command
-The inspection tools for your Postgres database are under then `inspect db` command. You can get a full list of available commands by running `supabase inspect db help`.
+The inspection tools for your Postgres database are under the `inspect db` command. You can get a full list of available commands by running `supabase inspect db help`.
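As a minimal sketch of how these tools can be pointed at any Postgres instance, the `inspect db` subcommands accept a `--db-url` flag (the connection string below is a placeholder, not a real database, and `cache-hit` is used here only as one example subcommand):

```shell
# Placeholder connection string; replace with your own instance.
DB_URL="postgres://postgres:your-password@db.example.com:5432/postgres"

# Uncomment once the Supabase CLI is installed:
# supabase inspect db cache-hit --db-url "$DB_URL"
echo "$DB_URL"
```

Because the commands read from Postgres internals, the same invocation works whether or not the target database is a Supabase project.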
```
$ supabase inspect db help
@@ -63,7 +63,7 @@ Attribute Statements allow Supabase to get information about your Okta users on
Supabase needs to finalize enabling single sign-on with your Okta application.
-To do this scroll down to the _SAML Signing Certificates_ section on the _Sign On_ tab of the _Supabase_ application. Pick the the _SHA-2_ row with an _Active_ status. Click on the _Actions_ dropdown button and then on the _View IdP Metadata_.
+To do this scroll down to the _SAML Signing Certificates_ section on the _Sign On_ tab of the _Supabase_ application. Pick the _SHA-2_ row with an _Active_ status. Click on the _Actions_ dropdown button and then on the _View IdP Metadata_.
This will open up the SAML 2.0 Metadata XML file in a new tab in your browser. You will need to enter this URL later in [Step 9](#dashboard-configure-metadata).
@@ -168,7 +168,7 @@ By default, the database is not accessible from outside the local machine but th
You may also want to connect to your Postgres database via an ORM or another method besides `psql`.
For this you can use the standard Postgres connection string.
-You can find the the environment values mentioned below in the `.env` file which will be covered in the next section.
+You can find the environment values mentioned below in the `.env` file which will be covered in the next section.
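As a sketch, assuming `POSTGRES_PASSWORD` and `POSTGRES_DB` come from your `.env` file and `203.0.113.10` stands in for your server IP, the connection string can be assembled like this:

```shell
# Placeholder values standing in for the ones in your .env file.
POSTGRES_PASSWORD="example-password"
POSTGRES_DB="postgres"
SERVER_IP="203.0.113.10"

# Standard Postgres connection string, usable by psql or most ORMs.
DB_URL="postgres://postgres:${POSTGRES_PASSWORD}@${SERVER_IP}:5432/${POSTGRES_DB}"
echo "$DB_URL"
```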
```
postgres://postgres:[POSTGRES_PASSWORD]@[your-server-ip]:5432/[POSTGRES_DB]
@@ -12,7 +12,7 @@ import { Result } from '~/features/helpers.fn'
import { nanoId } from '~/features/helpers.misc'
/**
- * Extracts the name from a a GraphQLOutputType.
+ * Extracts the name from a GraphQLOutputType.
*/
function extractNodeTypeName(
/**
@@ -7,7 +7,7 @@ run: download transform generate format
###############################################################################
# Download all the specs
###############################################################################
-# commment out download.auth.v1 temporarily, we're manually creating the file
+# comment out download.auth.v1 temporarily, we're manually creating the file
# download: download.api.v1 download.auth.v1 download.storage.v1 download.tsdoc.v2
download: download.api.v1 download.storage.v1 download.tsdoc.v2
@@ -58,7 +58,7 @@ We are still working on Open Sourcing our Dashboard, and took another step close
There are a huge number of Supabase developers in Japan and China, and at their request we've launched Tokyo as a region.
-![Tokyo is now availabel as a region](/images/blog/2021-may/japan-region.png)
+![Tokyo is now available as a region](/images/blog/2021-may/japan-region.png)
## Return data as CSV
@@ -53,7 +53,7 @@ Watch us as we kick off Launch Week, shipping this blog post.
### What?
All for one, and one for all. It has become a tradition to kick off Launch Week with Community Day -
-a day where we shine a spotlight on some of the the open source tools that we use and community contributions.
+a day where we shine a spotlight on some of the open source tools that we use and community contributions.
This Community Day is going to be our biggest one yet.
Have a look at the [#SupaLaunchWeek hashtag on Twitter](https://twitter.com/hashtag/SupaLaunchWeek?src=hashtag_click)
to see some of the awesome guests that will be joining us.
@@ -1276,7 +1276,7 @@ The required function in our class and service already exists, so you can now al
## Handling Realtime Table Changes
-The cool thing is is how easy we are now able to implement real time functionality - the only thing required for this is to turn it on.
+The cool thing is how easy we are now able to implement real time functionality - the only thing required for this is to turn it on.
We can do this right inside the table editor of Supabase, so go to your tables, click the little arrow next to **edit** so you can edit the table, and then enable realtime for both **cards** and **lists**!
@@ -140,7 +140,7 @@ The Realtime engine seems like a great _compliment_ for `pg_crdt`, but before we
These are a few of the _known_ limitations:
-- Realtime broadcasts database changes from the Postgres write ahead log (WAL). The WAL includes a complete copy of the the underlying data so small updates cause the entire document to broadcast to all collaborators
+- Realtime broadcasts database changes from the Postgres write ahead log (WAL). The WAL includes a complete copy of the underlying data so small updates cause the entire document to broadcast to all collaborators
- Frequently updated CRDTs produce a lot of WAL and dead tuples
- Large CRDT types in Postgres generate significant serialization/deserialization overhead on-update.
@@ -37,7 +37,7 @@ Under the hood, we use [WAL-G](https://github.com/wal-g/wal-g), an open source a
Consider your Recovery Point Objective (RPO) when deciding whether to enable Point in Time Recovery. RPO is the threshold for how much data, measured in time, a business could lose when disaster strikes. This is dependent on a business and its underlying requirements. The agreed upon RPO would be a deciding factor in choosing which solution best fits a project.
-While all Pro Plan projects and above are backed up on a daily basis, this means that at the worst case, a project could lose up to 24 hours worth of data if disaster hits at the most inopportune time. With Point in Time Recovery however, backups are made at much shorter intervals, shortening the RPO. WAL files are backed up at two minute intervals. This could be faster if it hits a certain file threshold before the the two minute mark.
+While all Pro Plan projects and above are backed up on a daily basis, this means that at the worst case, a project could lose up to 24 hours worth of data if disaster hits at the most inopportune time. With Point in Time Recovery however, backups are made at much shorter intervals, shortening the RPO. WAL files are backed up at two minute intervals. This could be faster if it hits a certain file threshold before the two minute mark.
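As a back-of-the-envelope illustration of the worst-case RPO difference described above (illustrative numbers only, not a guarantee):

```shell
# Daily backups: up to a full day of data could be lost in the worst case.
daily_rpo_minutes=$((24 * 60))

# PITR: WAL files are shipped roughly every two minutes (or sooner, if the
# file-size threshold is reached first).
pitr_rpo_minutes=2

echo "daily backup worst case: ${daily_rpo_minutes} min; PITR worst case: ~${pitr_rpo_minutes} min"
```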
## Getting started