# Description of Changes
The first commit defines a type `TableName`, used in e.g. `TxData` and in other places where this change was determined profitable and necessary.
`TableName` is backed by
[`ecow::EcoString`](https://docs.rs/ecow/0.2.6/ecow/string/struct.EcoString.html)
which affords O(1) clones and 15 bytes of inline storage, with `mem::size_of::<EcoString>() == 16`.
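As a sketch of the idea, here is the newtype pattern with `std::sync::Arc<str>` standing in for `EcoString` (the real type wraps `ecow::EcoString`; `Arc<str>` shares the O(1) clone and 16-byte size, though not the inline storage):

```rust
use std::sync::Arc;

/// Stand-in for the `TableName` newtype; the real type wraps
/// `ecow::EcoString` rather than `Arc<str>`.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct TableName(Arc<str>);

impl TableName {
    fn new(name: &str) -> Self {
        TableName(Arc::from(name))
    }
    fn as_str(&self) -> &str {
        &self.0
    }
}

// Like `EcoString`, an `Arc<str>` is 16 bytes on 64-bit targets,
// and cloning only bumps a reference count.
const _: () = assert!(std::mem::size_of::<Arc<str>>() == 16);
```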
The second commit does the same for `ReducerName`. This is also used in
reducer execution.
Together, these commits increase throughput by around 5-7k TPS.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Covered by existing tests.
# Description of Changes
Rework `TxData` to:
- store all information for a table together in a single type
`TxDataTableEntry` rather than having several different maps.
- be constructed as fast as possible.
- fit every entry within a single cache line, i.e., `(TableId,
TxDataTableEntry)` takes up 64 bytes.
- fit a single table entry inline to optimize for small transactions.
- expose methods `{inserts, deletes}_for_table` to make `DeltaTx`
faster.
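A minimal sketch of the consolidated shape (type names follow the PR; field types and the map type are assumptions):

```rust
use std::collections::BTreeMap;

type TableId = u32; // stand-in
type RowRef = u64;  // stand-in for whatever row representation TxData stores

/// All per-table information lives together in one entry
/// rather than being spread across several maps.
#[derive(Default)]
struct TxDataTableEntry {
    inserts: Vec<RowRef>,
    deletes: Vec<RowRef>,
}

#[derive(Default)]
struct TxData {
    tables: BTreeMap<TableId, TxDataTableEntry>,
}

impl TxData {
    /// Accessors like these let `DeltaTx` fetch a table's delta directly.
    fn inserts_for_table(&self, id: TableId) -> &[RowRef] {
        self.tables.get(&id).map_or(&[][..], |e| e.inserts.as_slice())
    }
    fn deletes_for_table(&self, id: TableId) -> &[RowRef] {
        self.tables.get(&id).map_or(&[][..], |e| e.deletes.as_slice())
    }
}
```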
Rework `DatabaseUpdate` to:
- store a single `DatabaseTableUpdate` inline
- make `from_writes` profit from the changes to `TxData`, avoiding the
temporary hash map and allocating the necessary capacity from the start.
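The `from_writes` idea can be sketched as follows (a simplification with assumed types; the point is that the per-table grouping already exists, so the output can be allocated once with exact capacity and no temporary hash map):

```rust
type TableId = u32;
type Row = u64;

struct DatabaseTableUpdate {
    table_id: TableId,
    inserts: Vec<Row>,
    deletes: Vec<Row>,
}

/// Because the source already groups writes per table, the output
/// capacity is known exactly and no intermediate map is needed.
fn from_writes(per_table: &[(TableId, Vec<Row>, Vec<Row>)]) -> Vec<DatabaseTableUpdate> {
    let mut out = Vec::with_capacity(per_table.len());
    for (table_id, inserts, deletes) in per_table {
        out.push(DatabaseTableUpdate {
            table_id: *table_id,
            inserts: inserts.clone(),
            deletes: deletes.clone(),
        });
    }
    out
}
```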
# API and ABI breaking changes
None
# Expected complexity level and risk
3? Fairly simple changes, but in important places.
# Testing
Existing tests are changed to match the changes to `TxData`.
# Description of Changes
Skip more work in `MutTxId::view_for_update` when there are no views.
This is a gain of a few kTPS and results in `view_for_update`
disappearing from flamegraphs.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Covered by existing tests.
# Description of Changes
Add `impl MemoryUsage for TxState` and all the types below.
Extracted from https://github.com/clockworklabs/SpacetimeDB/pull/3831 to reduce the diff and make it easier to figure out why it's not helping perf.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Description of Changes
Just adds public accessors for some existing internal functionality, to
enable a change in another repo. Review starting from there.
# API and ABI breaking changes
These crates are marked unstable.
# Expected complexity level and risk
0.5
# Testing
See other PR.
---------
Co-authored-by: Kim Altintop <kim@eagain.io>
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/1122.
Adds hash indices and exposes them through `#[index(hash)]` for Rust
modules,
with support for typescript and C# to come in follow ups.
On the client/SDK side, for now, any index is backed by a BTree/native index, as it is the most general.
A hash index may only be queried through point scans, never ranged scans.
Attempting a ranged scan results in an error, with the mechanism
implemented in the previous PR
(https://github.com/clockworklabs/SpacetimeDB/pull/3974).
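The point-scan-only restriction can be illustrated with a self-contained sketch (toy types, not the actual datastore code):

```rust
use std::collections::{BTreeMap, HashMap};
use std::ops::Range;

type RowId = u64;

enum Index {
    BTree(BTreeMap<u32, Vec<RowId>>),
    Hash(HashMap<u32, Vec<RowId>>),
}

impl Index {
    /// Point scans are supported by both index kinds.
    fn point_scan(&self, key: u32) -> Vec<RowId> {
        match self {
            Index::BTree(m) => m.get(&key).cloned().unwrap_or_default(),
            Index::Hash(m) => m.get(&key).cloned().unwrap_or_default(),
        }
    }

    /// Ranged scans are only meaningful for ordered indices;
    /// on a hash index they return an error instead.
    fn range_scan(&self, range: Range<u32>) -> Result<Vec<RowId>, &'static str> {
        match self {
            Index::BTree(m) => Ok(m.range(range).flat_map(|(_, v)| v.iter().copied()).collect()),
            Index::Hash(_) => Err("ranged scan on a hash index"),
        }
    }
}
```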
# API and ABI breaking changes
None
# Expected complexity level and risk
2?
# Testing
A test for ensuring that hash indices cannot be used for range scans is
added.
Tests exercising hash indices will come in the next PR.
# Description of Changes
When doing a ranged seek on a non-ranged index (none such exist yet, but
will be added in a follow up), return an (ABI) error.
Also:
- Use point scans in query execution (`IxScan(Delta)Eq`).
- Refactor table index code with macro `same_for_all_types`.
# API and ABI breaking changes
None
# Expected complexity level and risk
2?
# Testing
Testing the error handling will be possible once hash indices are added
(follow up PR).
# Description of Changes
Sets the `committed` label to `false` when we rollback a txn. It was
always set to `true` before this change.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
- [x] Unit test
# Description of Changes
When "fixing" a clippy lint, I accidentally flipped a boolean condition
in a `filter` call, breaking replay of column-type-altering
automigrations.
I'm also fixing an error message containing a typo I apparently wrote
many months ago, where I was printing the "old" layout in both places
rather than also printing the "new" layout. This is used only as a
diagnostic message, and is never programmatically inspected.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
- [x] Replayed the broken module manually locally.
# Description of Changes
Prior to this commit, we had special handling for deletes from
`st_table` during replay, but we did not pair them with inserts to form
updates, instead only treating the delete as a dropped table.
With this commit, we record inserts to `st_table` which will form update
pairs during replay, and handle them appropriately, updating the table's
schema and not dropping the table.
There is a tricky case where a table exists but is empty, and then
within a single transaction:
- The table undergoes a migration s.t. its `table_access` or
`primary_key` changes.
- At least one row is inserted into the table.
In this case, the in-memory table structure will not exist at the point
of the `st_table` insert, but will be created before the corresponding
`st_table` delete, meaning there will be two conflicting `st_table` rows
resident. To handle this case, `CommittedState` tracks a side table,
`replay_table_updated`, which stores a `RowPointer` to the correct
most-recent `st_table` row for the migrating table.
I've also renamed the one previously-extant replay-only side table,
`table_dropped`, to include the `replay_` prefix, which IMO improves
clarity. And I've made it so `replay_table_dropped` is cleared at the
end of each transaction, as the previous behavior of continuing to
ignore a table that should be unreachable masked errors which would have
been helpful when debugging this issue.
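The replay-only bookkeeping described above can be sketched roughly like this (types simplified; `RowPointer` and the struct shape are stand-ins):

```rust
use std::collections::{HashMap, HashSet};

type TableId = u32;
type RowPointer = usize;

/// Replay-only side tables held by `CommittedState` (sketch).
#[derive(Default)]
struct ReplaySideTables {
    /// Most-recent `st_table` row for a table migrating within a tx,
    /// disambiguating the two conflicting resident rows.
    replay_table_updated: HashMap<TableId, RowPointer>,
    /// Tables dropped during replay.
    replay_table_dropped: HashSet<TableId>,
}

impl ReplaySideTables {
    /// Cleared at the end of each transaction so stale entries
    /// can't mask later errors.
    fn end_of_tx(&mut self) {
        self.replay_table_dropped.clear();
    }
}
```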
This PR also includes an extended error message when encountering a
unique constraint violation while replaying, which I found helpful while
debugging.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
2 - replay is complicated and scary, but this PR isn't gonna make things
*more* broken than they already were.
# Testing
- [x] Manually replayed a commitlog which included a migration that
altered a table's `table_access`, which was previously broken but now
replays successfully.
# Description of Changes
With the addition of module-defined views, subscriptions are no longer
read-only as they may invoke view materialization.
The way this works is that a subscription starts off as a mutable
transaction, materializes views if necessary, and then downgrades to a
read-only transaction to evaluate the subscription.
Before this patch, we were calling `commit_downgrade` directly on the
`MutTxId` in order to downgrade the transaction. This would update the
in-memory `CommittedState`, but it wouldn't make the transaction
durable.
This would result in us incrementing the transaction offset of the
in-memory `CommittedState` without writing anything to the commitlog.
This in turn would invalidate snapshots as they would be pointing
further ahead into the commitlog than they should, and so when replaying
from a snapshot we would potentially skip over commits that were not
included in the snapshot.
This patch changes those call sites to use
`RelationalDB::commit_tx_downgrade` which both updates the in-memory
state **and** makes the transaction durable.
**NOTE:** The fact that views are materialized is purely an
implementation detail at this point in time. And technically view tables
are ephemeral meaning they are not persisted to the commitlog. So the
real bug here was that we were updating the tx offset of the in-memory
committed state at all. This is technically fixed by
https://github.com/clockworklabs/SpacetimeDB/pull/3884 and so after
https://github.com/clockworklabs/SpacetimeDB/pull/3884 lands this change
becomes a no-op. However, we still shouldn't be calling `commit` and
`commit_downgrade` directly on a `MutTxId` since in most cases it is
wrong to bypass the durability layer. And without this change, the bug
would still be present were view tables not ephemeral, which they may
not be at some point in the future.
# API and ABI breaking changes
None
# Expected complexity level and risk
1. The change itself is trivial, the bug is not.
# Testing
Adding an automated test for this is not so straightforward. First, it's view-related, which means we don't have many options apart from a smoke test, but I don't believe the smoke tests have a mechanism for replaying the commitlog.
If transaction offsets are supposed to be linear, without any gaps, then
it would be useful to assert that on each append, in which case we could
write a smoke test that would fail as soon as the offsets diverged.
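The assertion suggested above could look something like this (hypothetical shape; the real append path lives in the durability layer):

```rust
/// Sketch of a commitlog appender that asserts transaction
/// offsets are contiguous, catching any gap as soon as it appears.
struct Appender {
    next_offset: u64,
}

impl Appender {
    fn append(&mut self, tx_offset: u64) -> Result<(), String> {
        if tx_offset != self.next_offset {
            return Err(format!(
                "non-contiguous tx offset: expected {}, got {}",
                self.next_offset, tx_offset
            ));
        }
        self.next_offset += 1;
        Ok(())
    }
}
```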
# Description of Changes
Based on #3887. Review starting from commit 233b48cc4.
We've encountered a commitlog which includes inserts into `st_table`, `st_column`, &c. of the rows which describe `st_view`, `st_view_param`, &c. This caused replay to fail, as those rows were already inserted during bootstrapping, so we got set-semantic duplicate errors. With this commit, we ignore set-semantic duplicate errors when replaying a commitlog, specifically for rows in system tables which describe system tables.
We also have to do an additional fixup for sequences. This is described
in-depth in comments added at the relevant locations.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1 - I was careful not to swallow any errors which aren't obviously safe.
# Testing
- [x] Manually replayed commitlog which includes the above mentioned
inserts, got error prior to this commit, no error with this commit.
# Description of Changes
Controlled shutdown of a database should drain the outstanding transaction queue(s) and flush them to the durability layer.
With the introduction of another queueing layer in #3868, it became harder to observe when or if this process has completed.
This patch thus introduces an explicit (async) shutdown method for `RelationalDB` and below, which will wait until all submitted transactions are either reported durable or an error occurs in the durability layer.
`RelationalDB` is made `!Clone`, such that shutdown can be initiated in the `Drop` impl. Note that this requires access to a tokio runtime, which we thread through via the `Persistence` services in order to allow control over which of the various runtimes is used for durability-related tasks.
Also moves `RelationalDB::open` to a blocking thread when a persistence-enabled database is constructed by the `HostController` -- this process performs heavy I/O and can take a substantial amount of time, during which we don't want to block a worker thread.
# API and ABI breaking changes
None
# Expected complexity level and risk
3
# Testing
- [ ] some testing added
- [ ] existing tests still pass
- [ ] `impl Drop for RelationalDB` difficult to test, extra eyeballs
needed
---------
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
When debugging broken commitlogs, we want to inspect the whole
commitlog, including the part after the first error.
This is in contrast with the way we want to replay in prod, where we'd
rather get a hard error than an incorrect state.
This commit adds a new flag to commitlog replay, `ErrorBehavior`. The
`core` crate passes `ErrorBehavior::FailFast`
when replaying commitlogs to reconstruct databases. Internal tooling
(not in this repository) uses `ErrorBehavior::Warn` to print the
entirety of a broken commitlog.
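A sketch of the flag's effect on the replay loop (simplified; actual signatures differ):

```rust
#[derive(Clone, Copy, PartialEq)]
enum ErrorBehavior {
    /// Abort replay on the first error (production behavior).
    FailFast,
    /// Record the error and keep going, so the whole log can be inspected.
    Warn,
}

/// Replays commits, returning how many were processed and the errors seen.
fn replay(
    commits: &[Result<u64, String>],
    behavior: ErrorBehavior,
) -> (usize, Vec<String>) {
    let mut processed = 0;
    let mut errors = Vec::new();
    for commit in commits {
        match commit {
            Ok(_) => processed += 1,
            Err(e) => {
                errors.push(e.clone());
                if behavior == ErrorBehavior::FailFast {
                    break;
                }
            }
        }
    }
    (processed, errors)
}
```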
# API and ABI breaking changes
Changes internal APIs only.
# Expected complexity level and risk
1 - no change to behavior of SpacetimeDB.
# Testing
None.
# Description of Changes
Views are materialized in mutable transactions, but should not increment the transaction offset maintained in the committed state.
This fixes storing completely empty transactions in the commitlog, and keeps the committed state tx offset in sync with the commitlog's tx offset.
# Expected complexity level and risk
2
# Testing
Added a test.
---------
Signed-off-by: Kim Altintop <kim@eagain.io>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Provides new WASM ABIs:
- `datastore_index_scan_point_bsatn`
- `datastore_delete_by_index_scan_point_bsatn`
These are then used where applicable to speed up `.find(_)` and friends.
Point scans are also used more internally where applicable.
What remains after this is use in C# module bindings and to expose this
in TS as well.
The PR makes TPS go from roughly 36k to 38k on my machine, and also makes a difference in flamegraphs, where the time spent in some index scans is substantially decreased.
# API and ABI breaking changes
None
# Expected complexity level and risk
3? This touches the datastore and how we expose it to modules.
# Testing
Some existing tests now exercise the new ABIs by changing what
`.find(_)` and friends do.
---------
Signed-off-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3617.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
A proptest `empty_range_scans_dont_panic` is added.
# Description of Changes
Fixes the following issues:
1. When dropping a view, we deleted its row from `st_view`, but didn't
drop the backing table.
2. `delete_col_eq` returned a nonsensical error if the delete set was
empty.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
- [x] Auto-migrate smoketests
# Description of Changes
Fixes #3715.
The patch makes snapshots skip ephemeral tables.
# API and ABI breaking changes
NA
# Expected complexity level and risk
1
# Testing
---------
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
This commit adds several new metrics to `DB_METRICS` for tracking
procedures' HTTP requests:
- `procedure_http_request_size_bytes`.
- `procedure_http_response_size_bytes`.
- `procedure_num_http_requests`.
- `procedure_num_successful_http_requests`.
- `procedure_num_failed_http_requests`.
- `procedure_num_timeout_http_requests`.
- `procedure_num_in_progress_http_requests`.
See help strings in `crates/datastore/src/db_metrics/mod.rs` for details
on what each of these tracks.
Closes #3712.
# API and ABI breaking changes
N/a - I don't think we count metrics as a stable API.
# Expected complexity level and risk
2, I guess? If we intend to use these for billing, some of the choices
I've made about tracking may impact our business.
# Testing
None; I don't know how to test Prometheus metrics.
Co-authored-by: Noa <coolreader18@gmail.com>
# Description of Changes
There were mentions of `hashbrown` in the repo that did not go through
`spacetimedb_data_structures::map`.
This caused compile errors on master when running certain tests locally.
These have been replaced with the proper imports.
The PR also bumps `hashbrown` to 0.16.1 and `foldhash` to 0.2.0.
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
Covered by existing tests.
# Description of Changes
Precise index readsets.
Fixes https://github.com/clockworklabs/SpacetimeDBPrivate/issues/2118.
# API and ABI breaking changes
NA
# Expected complexity level and risk
2.5
Potential to regress performance.
# Testing
Updated smoketests.
# Description of Changes
A background task to clean up unsubscribed views.
Fixes #3587.
# API and ABI breaking changes
NA
# Expected complexity level and risk
2
# Testing
Added a test
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
# Description of Changes
Makes view backing tables and related `st_` tables not persistent.
1. Modifies `CommittedState` to hold a set of ephemeral tables.
2. Updates `TxData` to contain the subset of ephemeral tables that have been modified in the current transaction.
`do_durability` filters those tables out before writing the transaction to the commitlog.
depends on: https://github.com/clockworklabs/SpacetimeDB/pull/3651
# API and ABI breaking changes
NA
# Expected complexity level and risk
2.5.
Looks simple, but the changes are in the hot path. I ensured we don't do unnecessary heap allocations, but the patch has the potential to regress performance.
# Testing
- unit test.
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Adds `ProcedureContext::{with_tx, try_with_tx}`.
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3515.
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
An integration test `test_calling_with_tx` is added.
# Description of Changes
This makes us go from 3 threads to 2.
The next step is to core pin the V8 worker thread.
# API and ABI breaking changes
None
# Expected complexity level and risk
4
# Testing
Existing tests should cover this.
---------
Co-authored-by: Noa <coolreader18@gmail.com>
# Description of Changes
Improves auto-migration support for views by minimizing the cases where
we must disconnect clients.
Before this patch, any schema-compatible view update, and even no view update at all, would require us to disconnect clients, because we had to assume that the view was modified, thereby rendering its currently materialized result set stale and out of date.
The patch adds code to re-evaluate all views whose entry is in `st_view_sub` and makes `update_database` use `commit_and_broadcast_event`, so that now we only have to disconnect clients for incompatible view updates or dropped views.
# API and ABI breaking changes
NA
# Expected complexity level and risk
2
# Testing
Added smoketests.
# Description of Changes
This patch tests calling, updating, and materialization of views through the SQL API.
# API and ABI breaking changes
None
# Expected complexity level and risk
1.5
# Testing
Smoketests
---------
Signed-off-by: joshua-spacetime <josh@clockworklabs.io>
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
Updates views atomically on commit, but before downgrading to a
read-only transaction for subscription evaluation.
What this patch does:
1. Renames `ViewId` to `ViewFnPtr`
2. Renames `ViewDatabaseId` to `ViewId`
3. Removes the `module_rx` module watcher from the subscription manager
4. Refactors read sets to only track table scans (index key tracking
will be added later)
5. Drops read sets and removes rows from `st_view_sub` when dropping a
view in an auto-migrate
6. Re-evaluates and updates views (`call_views_with_tx`) from
`call_reducer_with_tx` for any view whose read set overlaps with the
reducer's write set
7. Does the same for SQL DML
# API and ABI breaking changes
None
# Expected complexity level and risk
3
It's a bit of a messy diff.
# Testing
- [x] Integrate with
https://github.com/clockworklabs/SpacetimeDB/pull/3616
---------
Signed-off-by: joshua-spacetime <josh@clockworklabs.io>
Co-authored-by: Shubham Mishra <shivam828787@gmail.com>
# Description of Changes
`schema_for_table_raw` was hardcoded to return `None` for `ViewInfo`.
This issue was not observable when the module launched for the first time, but during module re-launches it caused problems with `impl CollectViews`, as the implementation was treating view tables as normal tables.
# API and ABI breaking changes
NA
# Expected complexity level and risk
1, pretty obvious.
# Description of Changes
When a new commitlog segment is created, allocate disk space for it up to the maximum segment size. Also do this when resuming writes to an existing segment, such that segments created without preallocation will allocate as well when the database is opened.
Preallocation is gated behind the feature "fallocate", because it is not always desirable to preallocate, e.g. for local `standalone` users. The feature can only be enabled on Linux targets, because allocation is done using the Linux-specific `fallocate(2)` system call. Unlike `ftruncate(2)` or the portable `posix_fallocate(3)`, `fallocate(2)` supports allocating disk space without zeroing. This is currently required, because the commitlog format does not handle padding bytes.
If not enough space can be allocated, the commitlog refuses writes. For commitlogs that were created without preallocation, this means that the commitlog cannot even be opened in this situation. The local durability impl will crash if it detects that the commitlog is unable to allocate enough space. This means that a database will eventually crash and be unable to start in an out-of-space situation.
Allocated space is not included in the reported size of the commitlog.
Instead, allocated blocks are reported separately.
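The refuse-writes behavior can be modeled with a tiny sketch (this mirrors the idea behind the modified `repo::Memory` impl mentioned under Testing; names are assumptions):

```rust
/// A segment with a fixed preallocated capacity that refuses
/// writes rather than growing past it.
struct Segment {
    capacity: usize,
    data: Vec<u8>,
}

impl Segment {
    fn new(capacity: usize) -> Self {
        Segment { capacity, data: Vec::with_capacity(capacity) }
    }

    /// Refuse the write entirely if it would exceed the allocation.
    fn append(&mut self, bytes: &[u8]) -> Result<(), &'static str> {
        if self.data.len() + bytes.len() > self.capacity {
            return Err("out of space");
        }
        self.data.extend_from_slice(bytes);
        Ok(())
    }
}
```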
# Expected complexity level and risk
3 - Disk size monitoring may need to be adjusted.
# Testing
- [x] Adds a test that demonstrates the crash behavior of [`spacetimedb_durability::Local`] when there is insufficient space. The test performs I/O against a loop device.
- [x] Modified the `repo::Memory` impl so that it can run out of space. No test currently utilizes this, but existing tests assuming infinite space still pass.
# Description of Changes
This patch:
1. Materializes views on subscribe and SQL calls by invoking `call_view` on the `ModuleHost`.
2. Downgrades to a read-only transaction after view materialization but
before query execution.
3. Updates the `st_view_sub` system table on both subscribe and
unsubscribe.
4. Makes subscribe methods on the SubscriptionManager async.
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
End-to-end tests to be added with atomic view updates
# Description of Changes
This currently needs changes from both #3548 and #3527; the latter is merged into master, but the former isn't rebased on top of it, so we stay silly for the moment.
I'd like to pull out my first commit into its own PR, but that's not really possible until #3548 rebases onto master.
# Expected complexity level and risk
2 - pretty straightforward translation of the wasm/Rust view
implementation to typescript.
# Testing
# Description of Changes
Host implementation to invoke the `call_view` method.
It also covers:
1. An API `MutTxId::is_materialized` to check whether an existing view exists and has been updated.
2. Updates to the read set logic to remove stale views.
3. The SQL caller implementation.
# API and ABI breaking changes
NA
# Expected complexity level and risk
3
# Description of Changes
Refactored `st_view_client` and renamed it to `st_view_sub`, which tracks the number of clients subscribed to a view. On disconnect, we decrement the `num_subscribers` column in the appropriate rows. An async task will be in charge of cleaning up views (and their read sets) whose subscriber count has gone to zero (not in this patch).
On module init, we clear the entirety of each view table.
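The bookkeeping can be sketched as follows (a toy in-memory model; the real data lives in the `st_view_sub` system table):

```rust
use std::collections::HashMap;

type ViewId = u32;

#[derive(Default)]
struct ViewSubs {
    num_subscribers: HashMap<ViewId, u64>,
}

impl ViewSubs {
    fn subscribe(&mut self, view: ViewId) {
        *self.num_subscribers.entry(view).or_insert(0) += 1;
    }

    /// Decrement on disconnect; views are not removed here --
    /// a separate cleanup task collects the zero-count ones later.
    fn unsubscribe(&mut self, view: ViewId) {
        if let Some(n) = self.num_subscribers.get_mut(&view) {
            *n = n.saturating_sub(1);
        }
    }

    /// What the async cleanup task would look for.
    fn garbage(&self) -> Vec<ViewId> {
        let mut v: Vec<_> = self
            .num_subscribers
            .iter()
            .filter(|&(_, &n)| n == 0)
            .map(|(&id, _)| id)
            .collect();
        v.sort();
        v
    }
}
```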
# API and ABI breaking changes
None. Technically this updates the schema of a system table, but the
system table was added and modified between releases.
# Expected complexity level and risk
~2
Need to make sure we cover all cases so that we don't leave dangling
data. Making these tables ephemeral in the future should simplify this.
# Testing
Will add tests once we can subscribe to views
# Description of Changes
Not many changes were required for the query compiler to be able to
resolve views. This is because the query engine can always assume a view
is materialized and therefore has a backing table. So from the
perspective of the query engine, a view is just another table with one
small caveat: The physical table in the datastore has two internal
metadata columns - `sender` and `arg_id`. These columns are not user
facing and so should be hidden from name resolution/type checking.
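A sketch of the name-resolution filtering (the column names come from the PR; the surrounding types are assumptions):

```rust
/// Internal metadata columns of a view's backing table that must
/// not participate in name resolution or type checking.
const HIDDEN_COLS: [&str; 2] = ["sender", "arg_id"];

/// Resolve a user-facing column name against a view's physical schema,
/// returning its physical index while skipping the metadata columns.
fn resolve_col(physical_cols: &[&str], name: &str) -> Option<usize> {
    physical_cols
        .iter()
        .enumerate()
        .filter(|(_, col)| !HIDDEN_COLS.contains(*col))
        .find(|(_, col)| **col == name)
        .map(|(i, _)| i)
}
```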
# API and ABI breaking changes
None
# Expected complexity level and risk
1.5
# Testing
- [x] SQL type checking tests
# Description of Changes
Specifically generate client table handles for views.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Added a `#[view]` to `module-test` and generated new client snapshots
# Description of Changes
Adds two new system tables for views. One tracks view arguments, the
other tracks view subscribers.
`st_view_arg` generates a unique id for each unique argument
instantiation. This id will act as a foreign key that both read sets and
backing tables can reference.
`st_view_client` is needed so that we can drop views when clients
unsubscribe or disconnect. Note this disconnect logic is not implemented
in this patch.
Eventually both of these tables should be ephemeral. There's no reason
they need to write to the commitlog.
Note also that the schema of a view's backing table has been updated. It
no longer stores a view's argument values but rather the `id` from
`st_view_arg`.
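The argument-interning idea behind `st_view_arg` can be sketched like this (a toy in-memory model; the real table is a system table and the encoding is an assumption):

```rust
use std::collections::HashMap;

type ArgId = u64;

/// Assigns one stable id per unique argument instantiation, so read
/// sets and backing tables can reference the id as a foreign key
/// instead of storing the argument values themselves.
#[derive(Default)]
struct ViewArgs {
    ids: HashMap<Vec<u8>, ArgId>, // encoded args -> id (encoding assumed)
    next: ArgId,
}

impl ViewArgs {
    fn intern(&mut self, args: &[u8]) -> ArgId {
        if let Some(&id) = self.ids.get(args) {
            return id; // same arguments, same id
        }
        let id = self.next;
        self.next += 1;
        self.ids.insert(args.to_vec(), id);
        id
    }
}
```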
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Tests will be added in the patch that computes read sets
---------
Signed-off-by: joshua-spacetime <josh@clockworklabs.io>
Co-authored-by: Shubham Mishra <shivam828787@gmail.com>
# Description of Changes
This commit builds support for executing procedures in WASM modules.
This includes an HTTP endpoint,
`/v1/database/:name_or_address/procedure/:name POST`, as well as an
extension to the WS protocol. These new APIs are not wired up to the CLI
or SDKs, but I have manually tested the HTTP endpoint via `curl`. The
new WS extensions are completely untested.
Several TODOs are scattered throughout the new code, most notably for
sensibly tracking procedure execution time in the metrics.
I also expect that we will want to remove the `procedure_sleep_until`
syscall and the `ProcedureContext::sleep_until` method prior to release.
# API and ABI breaking changes
Adds new APIs and ABIs.
# Expected complexity level and risk
3? 4? Unlikely to break existing stuff, 'cause it's mostly additive, but it adds plenty of potentially-fragile new stuff. Notably, this is the first time we're doing anything actually `async`hronous on a database core Tokio worker, and we don't yet have strong evidence of how that will affect reducer execution.
# Testing
- [x] Manually published `modules/module-test` and executed procedures
with the following `curl` invocations:
- `curl -X POST -H "Content-Type:application/json" -d '[]'
http://localhost:3000/v1/database/module-test/procedure/sleep_one_second`
- `curl -X POST -H "Content-Type:application/json" -d '[1223]'
http://localhost:3000/v1/database/module-test/procedure/return_value`
- [ ] Need to write automated tests.
---------
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
The query engine can now execute queries using mutable transactions
because this patch implements the `Datastore` trait for `MutTxId`.
Note this is both a refactor and a feature patch. It is a refactor
because the `Datastore` trait was updated to allow for mutable
transactions.
# API and ABI breaking changes
None
# Expected complexity level and risk
1.5
# Testing
- [x] Current tests pass after refactor
- [ ] Use `MutTxId` for query execution
# Description of Changes
The `InstanceEnv` will now compute read sets when executing a view and
store them in the `MutTxId`.
These read sets track table scans as well as singular index key scans.
Index key ranges will be tracked in the future, but for now an index
range scan is treated as a full table scan.
These read sets are maintained as part of the `CommittedState`.
TODO: Check write sets against read sets. Re-evaluate views in the case
of overlap and update the read sets.
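A sketch of the read-set shape and the planned overlap check (types simplified; names are assumptions):

```rust
use std::collections::{HashMap, HashSet};

type TableId = u32;
type Key = u64;

/// What a view read during evaluation: full table scans, plus
/// singular index key scans (an index *range* scan is conservatively
/// recorded as a full table scan for now).
#[derive(Default)]
struct ReadSet {
    table_scans: HashSet<TableId>,
    index_keys: HashMap<TableId, HashSet<Key>>,
}

impl ReadSet {
    /// Would a write of `key` into `table` invalidate this view?
    fn overlaps_write(&self, table: TableId, key: Key) -> bool {
        self.table_scans.contains(&table)
            || self
                .index_keys
                .get(&table)
                .map_or(false, |keys| keys.contains(&key))
    }
}
```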
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [ ] Unit tests
# Description of Changes
Defines auto-migration rules for views.
> What are they?
Answer: You can always auto-migrate a view. It's just a matter of
whether you have to disconnect clients. Removal of a view or changing
the return type in a breaking way requires that we disconnect clients.
TODO: Even if a view is updated in a completely compatible way, we still
have to wipe or dirty the read set in order to force a re-computation
because the behavior could have changed.
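The rules above could be classified roughly like this (a hedged sketch; `ViewDef` and the compatibility check are placeholders):

```rust
/// Placeholder for a view's definition; only the return type matters here.
struct ViewDef {
    return_type: String,
}

enum ViewMigration {
    /// Auto-migrates without disconnecting clients.
    Compatible,
    /// Auto-migrates, but clients must be disconnected.
    RequiresDisconnect,
}

fn classify(old: &ViewDef, new: Option<&ViewDef>) -> ViewMigration {
    match new {
        // Removing a view requires a disconnect.
        None => ViewMigration::RequiresDisconnect,
        // So does changing the return type in a breaking way
        // (string equality stands in for a real compatibility check).
        Some(n) if n.return_type != old.return_type => ViewMigration::RequiresDisconnect,
        // Everything else auto-migrates cleanly.
        Some(_) => ViewMigration::Compatible,
    }
}
```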
# API and ABI breaking changes
None
# Expected complexity level and risk
2.5
Not just a mechanical change.
Most of the time you can think of views as though they were tables.
However, for auto-migration there is a key distinction: views don't
have observably persistent state. While we can persist state
internally, a view's result set can always be re-derived from the
database state. Hence the auto-migration rules are not as strict.
# Testing
- [x] Auto-migration planner tests
# Description of Changes
Extra debug info for #3465.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] Manual testing on a borked database.
# Description of Changes
Adds the `#[view]` procedural macro and module describers for views.
```rust
#[view(public)]
fn player(ctx: &ViewContext) -> Vec<Player> {
    ctx.db.player().identity().find(ctx.sender).into_iter().collect()
}

#[view(public)]
fn players_at_level(ctx: &AnonymousViewContext, level: u32) -> Vec<Player> {
    ctx.db.player().level().filter(level).collect()
}
```
Note that this deviates from the proposal in that views may only return
`Vec<T>` or `Option<T>`; they can't return an arbitrary `SpacetimeType`.
# API and ABI breaking changes
This technically isn't a breaking change, but it's worth mentioning that
this patch refactors `ReducerInfo` so that we can use it for views as
well.
# Expected complexity level and risk
2
# Testing
- [x] Negative compile tests
- [x] Negative publish (module validation) tests
- [x] Test system tables are updated accurately
# Description of Changes
Adds type definitions for `RawViewDefV9` and `ViewDef`. Also validates
these module defs and populates the system tables:
- `st_table`
- `st_column`
- `st_view`
- `st_view_param`
- `st_view_column`
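As a rough illustration of the bookkeeping involved, a single validated view definition fans out into one `st_view` row plus one row per parameter and per result column. The types below are simplified stand-ins invented for this sketch; they do not mirror the real `ViewDef` or system-table schemas:

```rust
// Invented stand-ins for the real schema types; illustration only.
struct ViewDefSketch {
    name: String,
    params: Vec<String>,  // parameter names -> rows in st_view_param
    columns: Vec<String>, // result columns  -> rows in st_view_column
}

/// Row counts this view contributes to (st_view, st_view_param,
/// st_view_column) when the system tables are populated.
fn system_table_rows(view: &ViewDefSketch) -> (usize, usize, usize) {
    (1, view.params.len(), view.columns.len())
}

fn main() {
    let v = ViewDefSketch {
        name: "player".into(),
        params: vec!["level".into()],
        columns: vec!["identity".into(), "name".into(), "level".into()],
    };
    assert_eq!(v.name, "player");
    assert_eq!(system_table_rows(&v), (1, 1, 3));
}
```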
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
Integration tests to follow when views are added to
`__describe_module__`
---------
Signed-off-by: joshua-spacetime <josh@clockworklabs.io>
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
This exposes client credentials in reducer calls for Rust.
# API and ABI breaking changes
API changes:
The main API change is the addition of `AuthCtx` and the `sender_auth`
field on `ReducerContext`. This also adds `JwtClaims`, which has
helpers for getting commonly used claims.
ABI changes:
This adds one new function, `get_jwt`, which uses
`st_connection_credentials` to look up the credentials associated with
a connection id.
This adds ABI version 10.2.
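To illustrate the kind of helper `JwtClaims` provides for commonly used claims, here is a self-contained mock. The type and method names below are assumptions for the sketch, not the real SDK API:

```rust
// Mock of a JWT-claims object; the real `JwtClaims` API may differ.
use std::collections::HashMap;

struct JwtClaimsMock {
    claims: HashMap<String, String>,
}

impl JwtClaimsMock {
    /// Helper for the standard "iss" (issuer) claim.
    fn issuer(&self) -> Option<&str> {
        self.claims.get("iss").map(String::as_str)
    }

    /// Helper for the standard "sub" (subject) claim.
    fn subject(&self) -> Option<&str> {
        self.claims.get("sub").map(String::as_str)
    }
}

fn main() {
    let mut claims = HashMap::new();
    claims.insert("iss".to_string(), "https://issuer.example".to_string());
    claims.insert("sub".to_string(), "user-123".to_string());
    let jwt = JwtClaimsMock { claims };
    assert_eq!(jwt.issuer(), Some("https://issuer.example"));
    assert_eq!(jwt.subject(), Some("user-123"));
}
```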
# Expected complexity level and risk
2. This adds a new ABI function.
# Testing
I've done some manual testing with modified versions of the quickstart.
We should add some examples that use the new API.
# Description of Changes
This patch defines the system table schemas for views and allocates IDs
for them. It **does not** populate these tables.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Tests will be added in the patch that populates these tables
# Description of Changes
This commit extends various schema and schema-adjacent structures to
describe procedures, a new kind of database function that is allowed
to perform side effects.
This includes extending `RawModuleDefV9` with a way to register
`RawProcedureDefV9`s in the `misc_exports`, preserving compatibility
with modules that predate procedures.
The module validation path is reorganized somewhat to validate various
properties related to procedures while preserving code clarity and
maintainability.
Additionally, the `ArgsTuple` machinery for ser/de-ing reducer arguments
using the argument type as a seed is extended to also support procedure
arguments.
All of this is currently unused.
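The compatibility property can be sketched like this: a module that predates procedures simply contributes no entries to `misc_exports`, so decoding yields an empty list and validation proceeds as before. The types here are simplified stand-ins invented for the sketch, not the real `RawModuleDefV9` layout:

```rust
// Simplified stand-ins for the raw module-def types; sketch only.
#[derive(Default)]
struct RawModuleDefSketch {
    reducers: Vec<String>,
    /// Procedures ride along in `misc_exports`; modules that predate
    /// procedures simply have an empty list here and remain valid.
    misc_exports: Vec<MiscExportSketch>,
}

enum MiscExportSketch {
    Procedure { name: String },
}

fn procedure_names(def: &RawModuleDefSketch) -> Vec<&str> {
    def.misc_exports
        .iter()
        .map(|e| match e {
            MiscExportSketch::Procedure { name } => name.as_str(),
        })
        .collect()
}

fn main() {
    // An "old" module registers no misc_exports at all.
    let old = RawModuleDefSketch::default();
    assert!(procedure_names(&old).is_empty());

    let new = RawModuleDefSketch {
        reducers: vec!["init".into()],
        misc_exports: vec![MiscExportSketch::Procedure { name: "charge".into() }],
    };
    assert_eq!(new.reducers.len(), 1);
    assert_eq!(procedure_names(&new), vec!["charge"]);
}
```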
# API and ABI breaking changes
Additive and backwards-compatible additions to `RawModuleDefV9` and
friends.
# Expected complexity level and risk
2 - some minor complexity in schema validation which may have gotten
borked in a merge at some point.
# Testing
Unsure what tests would be useful, open to suggestions from reviewers.
# Description of Changes
Necessary for pulling in rolldown.
# API and ABI breaking changes
None
# Expected complexity level and risk
1, with the caveat that this updates the Rust version and therefore
touches all the code.
# Testing
- [ ] Just the automated testing