# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
This bumps the TypeScript SDK to 1.11.3.
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
None
# Expected complexity level and risk
1
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
NA - this is just a version bump and the CI passes
I will change the base to master once the `1.11.2` version bump merges.
---------
Signed-off-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
This bumps the TypeScript SDK to 1.11.2 so that we can send out Noa's fix.
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
None
# Expected complexity level and risk
1
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
NA - this is just a version bump and the CI passes
# Description of Changes
When "fixing" a clippy lint, I accidentally flipped a boolean condition
in a `filter` call, breaking replay of column-type-altering
automigrations.
I'm also fixing an error message containing a typo I apparently wrote
many months ago, where I was printing the "old" layout in both places
rather than also printing the "new" layout. This is used only as a
diagnostic message, and is never programmatically inspected.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
- [x] Replayed the broken module manually locally.
# Description of Changes
Prior to this commit, we had special handling for deletes from
`st_table` during replay, but we did not pair them with inserts to form
updates, instead only treating the delete as a dropped table.
With this commit, we record inserts to `st_table` which will form update
pairs during replay, and handle them appropriately, updating the table's
schema and not dropping the table.
There is a tricky case where a table exists but is empty, and then
within a single transaction:
- The table undergoes a migration s.t. its `table_access` or
`primary_key` changes.
- At least one row is inserted into the table.
In this case, the in-memory table structure will not exist at the point
of the `st_table` insert, but will be created before the corresponding
`st_table` delete, meaning there will be two conflicting `st_table` rows
resident. To handle this case, `CommittedState` tracks a side table,
`replay_table_updated`, which stores a `RowPointer` to the correct
most-recent `st_table` row for the migrating table.
I've also renamed the one previously-extant replay-only side table,
`table_dropped`, to include the `replay_` prefix, which IMO improves
clarity. And I've made it so `replay_table_dropped` is cleared at the
end of each transaction, as the previous behavior of continuing to
ignore a table that should be unreachable masked errors which would have
been helpful when debugging this issue.
This PR also includes an extended error message when encountering a
unique constraint violation while replaying, which I found helpful while
debugging.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
2 - replay is complicated and scary, but this PR isn't gonna make things
*more* broken than they already were.
# Testing
- [x] Manually replayed a commitlog which included a migration that
altered a table's `table_access`, which was previously broken but now
replays successfully.
# Description of Changes
Fixes a deadlock in the subscription code and HTTP SQL handler that was
caused by calling view methods on the module while holding the
transaction lock.
I tried a couple of approaches to make the closures `Send` for all code
paths that need to hold the transaction while working with views, but
that didn’t work out well. The V8 module communicates with the host
through channels, which would require dynamic dispatch.
In the current approach, all existing methods that were calling views from the host are now invoked from inside the module itself. In the future, it would be better to move these methods to a common place rather than leaving them scattered.
# Description of Changes
Update the `useTable` hook in the `spacetimedb/react` package to use the client-language-convention-aware table accessor key to look up the correct table.
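In practice the accessor key is the module's snake_case table name converted to the client-language convention (camelCase for TypeScript clients). A minimal sketch of that mapping, with a hypothetical `toCamelCase` helper and cache shape (not the SDK's actual internals):

```typescript
// Hypothetical sketch: resolve a table accessor key that follows the
// client language convention (camelCase) from a module-side snake_case name.
function toCamelCase(snake: string): string {
  return snake.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase());
}

// e.g. a Rust module table `crew_assignments` is exposed to TypeScript
// clients under the camelCase key `crewAssignments`.
const tables: Record<string, unknown> = { crewAssignments: [] };
const key = toCamelCase("crew_assignments"); // "crewAssignments"
const table = tables[key]; // found; a raw snake_case lookup would miss
```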
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
I built and ran a client using this code change before and after.
Previously, with multi-word table names in a Rust module (e.g. the `crew_assignments` table), the `useTable` hook would fail to look up the table information and hook up the onInsert/onDelete/onUpdate callbacks.
With this change, they successfully connect and data returned by the
useTable hook now flows.
All of my testing was with module_bindings generated with the 1.11.0 Rust module crate. Additional testing for backwards compatibility might be useful. I'm not sure what the Clockwork Labs target is for that sort of thing.
# Description of Changes
Fixes #3919.
# Expected complexity level and risk
1
# Testing
- [x] Erroring code now works
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
To resolve #3875, we added exact-match unique index point lookup support
to the C# bindings by introducing and using
`datastore_index_scan_point_bsatn`.
Previously, generated unique index `Find()` was (in at least one
codepath) implemented as:
* A range scan (`datastore_index_scan_range_bsatn`) over a BTree bound,
then
* `SingleOrDefault()` to collapse the results into a single row.
When the scan is empty, `SingleOrDefault()` returns `default(T)`. For
value-type rows this can manifest as a default-initialized row instead
of “missing”, which is what surfaced as “default-ish row” behavior in
views.
Using `datastore_index_scan_point_bsatn` makes the C# implementation
match Rust semantics more closely by performing an exact point lookup
and returning:
* `null` when no rows are found
* the row when exactly one row is found
* (defensively) an error if >1 row is returned (unique index invariant
violation)
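The contract in the bullets above can be sketched generically (TypeScript pseudocode with hypothetical names, not the actual C# bindings): null on a miss, the row on a hit, and an error when the unique invariant is violated:

```typescript
// Sketch of the unique-index point-lookup contract: null on miss, the
// row on a single hit, and an error if more than one row matches
// (which would mean the unique-index invariant was violated).
function findUnique<K, Row>(index: Map<K, Row[]>, key: K): Row | null {
  const matches = index.get(key) ?? [];
  if (matches.length === 0) return null; // miss: no default-initialized row
  if (matches.length > 1) {
    throw new Error("unique index invariant violated: >1 row for key");
  }
  return matches[0];
}

const index = new Map<number, string[]>([[1, ["row-1"]]]);
const hit = findUnique(index, 1);  // "row-1"
const miss = findUnique(index, 2); // null, never a default-ish row
```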
Similarly, `datastore_delete_by_index_scan_point_bsatn` was added and
used so deletes-by-unique-key are also exact-match point operations
rather than range deletes.
Runtime updates were made to utilize point scan in `FindSingle(key)` and
in both mutable/read-only unique-index paths.
To keep this non-breaking for existing modules, codegen now detects
whether the table row is a struct or a class and chooses the appropriate
base type:
* Struct rows: `Find()` returns `Row?` (`Nullable<Row>`).
* Class rows: `Find()` returns `Row?` (nullable reference, `null` on
miss).
# API and ABI breaking changes
This change is non-breaking with respect to row type kinds, because
class/record table rows continue to work via
RefUniqueIndex/ReadOnlyRefUniqueIndex while struct rows use
UniqueIndex/ReadOnlyUniqueIndex.
API surface changes:
* Generated `Find()` return type is now nullable (`Row?`) to correctly
represent “missing”.
ABI/runtime:
* Requires the point-scan hostcall import
(`datastore_index_scan_point_bsatn`) to be available; the runtime uses
point-scan for unique lookup (and point delete for unique delete).
# Expected complexity level and risk
Low 2
# Testing
- [X] Local testing: repro module + client validate view and direct
Find() behavior
---------
Signed-off-by: rekhoff <r.ekhoff@clockworklabs.io>
# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
- Just bumping typescript size limits again
# API and ABI breaking changes
No
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
# Expected complexity level and risk
0 - this is just a release change.
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
I have not tested this but I've verified the new limit is greater than
the package size.
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/2824.
Defines a global pool `BsatnRowListBuilderPool` which reclaims the
buffers of a `ServerMessage<BsatnFormat>` and which is then used when
building new `ServerMessage<BsatnFormat>`s.
Notes:
1. The new pool `BsatnRowListBuilderPool` reports the same kind of
metrics to prometheus as `PagePool` does.
2. `BsatnRowListBuilder` now works in terms of `BytesMut`.
3. The trait method `fn to_bsatn_extend` is redefined to be capable of
dealing with `BytesMut` as well as `Vec<u8>`.
4. A trait `ConsumeEachBuffer` is defined and implemented for `ServerMessage<BsatnFormat>` and everything below it, to extract buffers.
`<ServerMessage<_> as ConsumeEachBuffer>::consume_each_buffer(...)` is
then called in `messages::serialize(...)` just after bsatn-encoding the
entire message and before any compression is done. This is the place
where the pool reclaims buffers.
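The reclamation flow -- serialize, hand the message's buffers back to a global pool, reuse them for the next message -- can be sketched generically (the class name, metrics field, and structure are illustrative, not the actual Rust types):

```typescript
// Hypothetical sketch of a buffer pool: take() reuses a previously
// reclaimed buffer when a suitable one is available, otherwise allocates
// fresh; consumeEachBuffer-style code returns buffers after serialization.
class BufferPool {
  private free: Uint8Array[] = [];
  takenFromPool = 0; // stands in for the prometheus-style metrics

  take(size: number): Uint8Array {
    const buf = this.free.pop();
    if (buf !== undefined && buf.length >= size) {
      this.takenFromPool += 1;
      return buf;
    }
    return new Uint8Array(size);
  }

  putBack(buf: Uint8Array): void {
    this.free.push(buf);
  }
}

const pool = new BufferPool();
const a = pool.take(64); // fresh allocation
pool.putBack(a);         // reclaimed after the message is serialized
const b = pool.take(64); // reuses the reclaimed buffer
```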
# Benchmarks
Benchmark numbers vs. master using `cargo bench --bench subscription --
--baseline subs` on i7-7700K, 64GB RAM:
```
footprint-scan time: [21.607 ms 21.873 ms 22.187 ms]
change: [-62.090% -61.438% -60.787%] (p = 0.00 < 0.05)
Performance has improved.
full-scan time: [22.185 ms 22.245 ms 22.324 ms]
change: [-36.884% -36.497% -36.166%] (p = 0.00 < 0.05)
Performance has improved.
```
The improvements in `footprint-scan` are mostly thanks to
https://github.com/clockworklabs/SpacetimeDB/pull/2918, but 7 ms of the
improvements here are thanks to the pool. The improvements to
`full-scan` should be only thanks to the pool.
# API and ABI breaking changes
None
# Expected complexity level and risk
2?
# Testing
- Tests for `Pool<T>` also apply to `BsatnRowListBuilderPool`.
# Description of Changes
A few main goals here:
* have our iterator functions return an [`Iterator`
object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Iterator)
so that users can use its combinators like `filter()` and `find()` and
`reduce()`. It's a very new JS API, but we happen to know that the module code will always be run in an environment that has it* :)
* improve lifecycle handling for iterator handles - mainly, if an
iterator is not run to completion, it will now eventually get garbage
collected, whereas before we would have a resource leak.
It turns out that the easiest way to do both of those things was to turn
TableIterator into a generator function, which also happens to make the
code much easier to read. Hooray :)
\* I did mention it in `table_cache` (which isn't run in our module
host) but it's fine, since that's only in the type system and
`IteratorObject` is defined in TypeScript's `lib.es2015.iterable.d.ts`
but is only given fancy methods in `lib.esnext.iterator.d.ts` - so if
the user uses esnext, they'll have access to them, but otherwise not.
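The lifecycle improvement falls out of generator semantics: a `try`/`finally` in a generator body runs when the iterator is closed early (or collected), so the handle can be released even if the consumer never exhausts it. A simplified sketch, where the `handleClosed` bookkeeping stands in for the real host-side handle release:

```typescript
// Hypothetical sketch of a generator-backed table iterator. The finally
// block releases the underlying handle even when the consumer stops
// early (e.g. via break, or when the iterator is garbage collected).
let handleClosed = false;

function* tableIterator(rows: number[]): Generator<number> {
  try {
    for (const row of rows) {
      yield row;
    }
  } finally {
    handleClosed = true; // stands in for releasing the host-side handle
  }
}

const it = tableIterator([1, 2, 3]);
const first = it.next().value; // 1
it.return(undefined); // early exit: finally runs, handle is released
```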
# Expected complexity level and risk
1: this better separates concerns and makes the code clearer in its
purpose.
# Testing
- [x] Refactor, so automated testing is sufficient.
# Description of Changes
Implements the C# equivalent of #3638
This implementation uses inheritance, where abstract base classes (like
`ProcedureContextBase` in `ProcedureContext.cs`) store the core of the
implementation, and then generated wrappers (like `ProcedureContext` in
the generated FFI.cs file) inherit from them.
For error handling, we work like Rust's implementation of `Result<T,E>`
but we require `where E : Exception` because of how exceptions work in
C#. Transaction-level failures come back as a `TxOutcome` and user
errors should follow the `Result<T,E>` pattern. In this implementation,
we have `UnwrapOrThrow()` throw exceptions directly because of C#'s error-handling pattern.
Unlike the Rust implementation's direct `Result` propagation, we are
using an `AbortGuard` pattern (in `ProcedureContext.cs`) for exception
handling, which uses `IDisposable` for automatic cleanup.
Most changes should have fairly similar Rust-equivalents beyond that.
For module authors, the changes here allow for the transaction logic to
work like:
```csharp
ctx.TryWithTx<ResultType, Exception>(tx => {
    // transaction logic
    return Result<ResultType, Exception>.Ok(result);
});
```
This change includes a number of tests added to the
`sdks/csharp/examples~/regression-tests/`'s `server` and `client` to
validate the behavior of the changes. `server` changes provide further
usage examples for module authors.
# API and ABI breaking changes
Should not be a breaking change
# Expected complexity level and risk
2
# Testing
- [x] Created Regression Tests that show transitions in procedures
working in various ways, all of which pass.
# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
- Bump version numbers to `1.11.1`
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
None
# Expected complexity level and risk
1
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
- [x] Verified that the license has been updated
- [x] `spacetime --version` on this commit is correct
There is also a corresponding private PR.
Using `#[tokio::test(start_paused = true)]` pauses time, yet tokio will
still advance it when encountering `sleep`s while it has no other work
to do.
This makes the tests that rely on timeouts deterministic and should
prevent those tests from becoming flaky on busy machines.
# Expected complexity level and risk
2
# Testing
This modifies tests.
It does appear to work as described, but it can't hurt if the reviewers
convince themselves that it does indeed.
# Description of Changes
With the addition of module-defined views, subscriptions are no longer
read-only as they may invoke view materialization.
The way this works is that a subscription starts off as a mutable
transaction, materializes views if necessary, and then downgrades to a
read-only transaction to evaluate the subscription.
Before this patch, we were calling `commit_downgrade` directly on the
`MutTxId` in order to downgrade the transaction. This would update the
in-memory `CommittedState`, but it wouldn't make the transaction
durable.
This would result in us incrementing the transaction offset of the
in-memory `CommittedState` without writing anything to the commitlog.
This in turn would invalidate snapshots as they would be pointing
further ahead into the commitlog than they should, and so when replaying
from a snapshot we would potentially skip over commits that were not
included in the snapshot.
This patch changes those call sites to use
`RelationalDB::commit_tx_downgrade` which both updates the in-memory
state **and** makes the transaction durable.
**NOTE:** The fact that views are materialized is purely an
implementation detail at this point in time. And technically view tables
are ephemeral meaning they are not persisted to the commitlog. So the
real bug here was that we were updating the tx offset of the in-memory
committed state at all. This is technically fixed by
https://github.com/clockworklabs/SpacetimeDB/pull/3884 and so after
https://github.com/clockworklabs/SpacetimeDB/pull/3884 lands this change
becomes a no-op. However, we still shouldn't be calling `commit` and
`commit_downgrade` directly on a `MutTxId` since in most cases it is
wrong to bypass the durability layer. And without this change, the bug
would still be present were view tables not ephemeral, which they may
not be at some point in the future.
# API and ABI breaking changes
None
# Expected complexity level and risk
1. The change itself is trivial; the bug is not.
# Testing
Adding an automated test for this is not so straightforward. First it's
view related which means we don't have many options apart from a smoke
test, but I don't believe the smoke tests have a mechanism for replaying
the commitlog.
If transaction offsets are supposed to be linear, without any gaps, then
it would be useful to assert that on each append, in which case we could
write a smoke test that would fail as soon as the offsets diverged.
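The assertion suggested above could look like this sketch (hypothetical names; the real check would live in the commitlog append path):

```typescript
// Sketch: assert that transaction offsets appended to the commitlog are
// gapless and strictly increasing, so a divergence like the one this
// patch fixes would fail immediately instead of corrupting snapshots.
class OffsetChecker {
  private next = 0;

  onAppend(offset: number): void {
    if (offset !== this.next) {
      throw new Error(`offset gap: expected ${this.next}, got ${offset}`);
    }
    this.next += 1;
  }
}

const checker = new OffsetChecker();
checker.onAppend(0);
checker.onAppend(1);
// checker.onAppend(3) would throw: the skipped offset 2 is exactly the
// symptom of incrementing the in-memory offset without a commitlog write.
```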
# Description of Changes
Resolves algebraic type refs recursively in order to check the product
type of a query builder view.
This should fix the issue reported
[here](https://discord.com/channels/1037340874172014652/1448796556366057513).
However, I've so far been unsuccessful in trying to repro it.
Also adds further commentary to `Typespace::resolve` to make it clear
that it is not recursive.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
TODO. So far I haven't been able to repro with a smoketest.
# Description of Changes
Based on #3887. Review starting from commit 233b48cc4.
We've encountered a commitlog which includes inserts into `st_table`,
`st_column`, &c of the rows which describe `st_view`, `st_view_param`,
&c. This caused replay to fail, as those rows were already inserted
during bootstrapping,
so we got set-semantic duplicate errors. With this commit, we ignore
set-semantic duplicate errors when replaying a commitlog specifically
for rows in system tables which describe system tables.
We also have to do an additional fixup for sequences. This is described
in-depth in comments added at the relevant locations.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1 - I was careful not to swallow any errors which aren't obviously safe.
# Testing
- [x] Manually replayed commitlog which includes the above mentioned
inserts, got error prior to this commit, no error with this commit.
Controlled shutdown of a database should drain the outstanding
transactions
queue(s) and flush them to the durability layer.
With the introduction of another queueing layer in #3868, it became
harder to
observe when or if this process is completed.
This patch thus introduces an explicit (async) shutdown method for
`RelationalDB` and below, which will wait until all submitted
transactions are
either reported durable, or an error occurs in the durability layer.
`RelationalDB` is made `!Clone`, such that shutdown can be initiated in
the
`Drop` impl. Note that this requires access to a tokio runtime, which we
thread
through via the `Persistence` services in order to allow control over
which of
the various runtimes is being used for durability-related tasks.
Also moves `RelationalDB::open` to a blocking thread when a
persistence-enabled
database is constructed by the `HostController` -- this process performs
heavy
I/O and can take a substantial amount of time, during which we don't
want to
block a worker thread.
# API and ABI breaking changes
None
# Expected complexity level and risk
3
# Testing
- [ ] some testing added
- [ ] existing tests still pass
- [ ] `impl Drop for RelationalDB` difficult to test, extra eyeballs
needed
---------
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
When debugging broken commitlogs, we want to inspect the whole
commitlog, including the part after the first error.
This is in contrast with the way we want to replay in prod, where we'd
rather get a hard error than an incorrect state.
This commit adds a new flag to commitlog replay, `ErrorBehavior`. The
`core` crate passes `ErrorBehavior::FailFast`
when replaying commitlogs to reconstruct databases. Internal tooling
(not in this repository) uses `ErrorBehavior::Warn` to print the
entirety of a broken commitlog.
# API and ABI breaking changes
Changes internal APIs only.
# Expected complexity level and risk
1 - no change to behavior of SpacetimeDB.
# Testing
None.
# Description of Changes
This helps with the issue reported in
https://github.com/clockworklabs/SpacetimeDB/issues/3811.
Right now we have a type representing the reducer, and a type for the
reducer args, and both have the same name. This adds `Reducer` to the
end of the args type, which is similar to what we are doing for
procedure arguments or the `Row` suffix for tables.
This will still cause some potential problems, since someone could have
a type that ends in `Reducer` (or `Row`), but this will fix the majority
of issues that are currently breaking people.
This also has some changes to get the basic react example to build.
# API and ABI breaking changes
This is technically a breaking change if people are using this for type
annotations (which doesn't seem too likely), but it should be an easy
one for people to fix.
# Expected complexity level and risk
1.
# Testing
I tested the quickstart.
Views are materialized in mutable transactions, but should not increment
the transaction offset maintained in the committed state.
This fixes storing completely empty transactions in the commitlog, and
maintains that the committed state tx offset is in-sync with the
commitlog's tx offset.
# Expected complexity level and risk
2
# Testing
Added a test.
---------
Signed-off-by: Kim Altintop <kim@eagain.io>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
`spacetime delete` may print the database tree and ask for confirmation.
We did not flush stdout, causing the tree to be interleaved with the
yes/no prompt.
# Expected complexity level and risk
1
# Description of Changes
`reqwest` includes the full URL in its errors, including query params.
This is unfortunate, as query params can contain sensitive info like API
tokens. It's difficult for modules to clean these themselves, as they
see errors as strings, losing the structure of `reqwest::Error`.
In this commit, we strip query parts out of URLs in errors before
returning them to modules. I've also audited all of the error return
paths in the `http_request` method and left comments justifying why the
unchanged ones are safe.
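The transformation can be pictured with the WHATWG `URL` API; the actual implementation is in Rust on the host, so this TypeScript sketch only illustrates the intent:

```typescript
// Sketch: strip the query string from a URL before embedding it in an
// error message, so secrets in query params don't leak into module logs.
function stripQuery(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.search = ""; // drop all query parameters, including API tokens
  return url.toString();
}

const leaky = "https://api.example.com/v1/data?api_token=SECRET123";
const safe = stripQuery(leaky);
// safe === "https://api.example.com/v1/data"
```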
# API and ABI breaking changes
Only if you consider the format of error messages part of our API, which
I don't. Procedure APIs aren't stable yet anyways.
# Expected complexity level and risk
1
# Testing
None yet - accepting input from reviewers about desired tests if we feel
that's necessary.
- [ ] <!-- maybe a test you want to do -->
- [ ] <!-- maybe a test you want a reviewer to do, so they can check it
off when they're satisfied. -->
---------
Signed-off-by: Phoebe Goldman <phoebe@goldman-tribe.org>
Co-authored-by: Julien Lavocat <JulienLavocat@users.noreply.github.com>
# Description of Changes
Fixes #3807. Not sure what the best way to model this in the API is
(another argument to `t.row()`?), but it does work.
# Expected complexity level and risk
1
# Testing
- [ ] <!-- maybe a test you want to do -->
- [ ] <!-- maybe a test you want a reviewer to do, so they can check it
off when they're satisfied. -->
# Description of Changes
Add back the instructions for regenerating CLI docs, which were removed
in https://github.com/clockworklabs/SpacetimeDB/pull/3343. I also made a
script for it.
This also fixes the CI checking this file, which was silently broken in
the same PR.
I have **not** verified that this works in Git Bash in Windows.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [x] CI passes
- [x] CI fails if I change the CLI reference
- [x] CLI reference looks visually reasonable on a local `pnpm dev`
---------
Signed-off-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
`ParamsType` is a tuple, so we need to spread when we actually call
reducer functions.
Currently reducer arguments aren't being sent with the `useReducer`
hook.
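The fix amounts to spreading the tuple at the call site. A minimal illustration with hypothetical names:

```typescript
// Sketch: reducer args arrive as a tuple, so the call site must spread
// them rather than passing the tuple as a single first argument.
type ParamsType = [name: string, count: number];

function sendMessage(name: string, count: number): string {
  return `${name}:${count}`;
}

const args: ParamsType = ["alice", 3];

// Buggy: passing `args` as one value would bind the whole tuple to `name`.
// Fixed: spread the tuple into positional arguments.
const result = sendMessage(...args); // "alice:3"
```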
# Expected complexity level and risk
1
# Testing
I tested this manually with the quickstart chat app.
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
# Description of Changes
This improves the type safety a bit from
https://github.com/clockworklabs/SpacetimeDB/pull/3812.
The core change is that the previous version typed queries based on the
typescript type, not the spacetime type. This meant that we allowed
queries for incorrect tables, like a table that had the same column
names and types, but had a u32 instead of a u64 somewhere.
This still has an issue with allowing results from tables where the rows
are reordered, which would actually be a problem, but hopefully that is
not too common.
# API and ABI breaking changes
This is technically a breaking change, because it changes some type
parameters. I don't think people should be relying on these though, so I
don't think we should be worried about breaking them.
This would only cause new type errors for apps that are likely to error
at runtime anyway.
# Expected complexity level and risk
1.5. This should be low risk, since it is just a typing change.
# Testing
This has some type checks in `view.test-d.ts`, and I've done some manual
e2e testing.
# Description of Changes
Provides new WASM ABIs:
- `datastore_index_scan_point_bsatn`
- `datastore_delete_by_index_scan_point_bsatn`
These are then used where applicable to speed up `.find(_)` and friends.
Point scans are also used more internally where applicable.
What remains after this is use in C# module bindings and to expose this
in TS as well.
The PR makes TPS go from roughly 36k to 38k on my machine and also makes a difference in flamegraphs, where the time spent in some index scans is substantially decreased.
# API and ABI breaking changes
None
# Expected complexity level and risk
3? This touches the datastore and how we expose it to modules.
# Testing
Some existing tests now exercise the new ABIs by changing what
`.find(_)` and friends do.
---------
Signed-off-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Uses `with_host_stack` to provide a `StackCreator` that pools
`FiberStack`s.
This does not use the pooling instance allocator and is limited to just
stacks.
# API and ABI breaking changes
None
# Expected complexity level and risk
3? Some unsafe code and wasmtime internals are relied upon.
# Testing
Covered by existing tests.
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3617.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
A proptest `empty_range_scans_dont_panic` is added.
# Description of Changes
As title.
# API and ABI breaking changes
Incremental change to bindings.
# Expected complexity level and risk
# Testing
Unit and smoketest
# Description of Changes
Fixes #3861.
While running prechecks for automigration, the `range` passed to
`iter_by_col_range_mut` was of type `AlgebraicValue::I128`, even though
it should have matched the column’s type.
The fix is straightforward: introduce an `AlgebraicValue::from_i128` helper.
# API and ABI breaking changes
NA
# Expected complexity level and risk
1
# Testing
- Modified smoketest to repro reported bug.
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Fixes #3827.
Adds `lte` and `gte` support for `query_builder` in Rust.
# API and ABI breaking changes
Incremental change to bindings.
# Expected complexity level and risk
1
# Testing
unit test and smoketest
# Description of Changes
Don't throw an error if there is no `reducerInfo`. The code was
previously trying to handle the case of an unknown reducer, but was
effectively asserting that the reducerInfo existed too soon.
Now we should be fine handling a transaction, even if we can't determine
reducer information for it.
# Expected complexity level and risk
1.
# Testing
I tested this by running the quickstart chat app, and using the CLI to
delete rows (via `spacetime sql`). Before this change, the client errored, but now it handles this correctly.
I also tested with the repro in
https://github.com/clockworklabs/SpacetimeDB/issues/3817
# Description of Changes
This should fix part of #3503. Adds an override modifier to generated
code and fixes a warning from the angular compiler.
# Expected complexity level and risk
1
# Testing
- [x] Code works fine with the added `override` modifier.
- [ ] Perhaps we should have an angular test project?
Mainly a smoketest to exercise the intended behaviour. Also return an
error if we end up delegating to the reset database endpoint, which
itself doesn't accept a `parent` parameter.
# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
Updated package sizes from a failed release dry-run:
https://github.com/clockworklabs/SpacetimeDBPrivate/actions/runs/20045893743/job/57491806595
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
None
# Expected complexity level and risk
1 - this is just a release fix
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
I have not tested this, it is a trivial change.
RFC 6455, Section 5.4 describes message fragmentation, and we can do
that with tungstenite.
It does seem to help get control messages (ping, pong, close) through without head-of-line blocking.
# Expected complexity level and risk
2 - Need to test with clients
# Testing
TBD - some more abstraction is needed due to the difficulty of
synthetically producing a large outgoing message.
# Description of Changes
Uses the `sourcemap` crate to map text locations in the bundle to text
locations in the original source code.
# Expected complexity level and risk
1 - essentially only related to diagnostics
# Testing
- [x] Manually tested
- [ ] Add an automated test for backtrace output
# Description of Changes
As title
# API and ABI breaking changes
NA
# Expected complexity level and risk
0
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
Fixes the following issues:
1. When dropping a view, we deleted its row from `st_view`, but didn't
drop the backing table.
2. `delete_col_eq` returned a nonsensical error if the delete set was
empty.
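For the second issue, the natural shape of the fix is an early return for the empty case; a minimal sketch with hypothetical names (the actual `delete_col_eq` internals live in the datastore and are not shown here):

```rust
// Hypothetical sketch of an empty-delete-set guard: deleting zero rows
// is a no-op that reports zero rows deleted, not an error.
fn delete_rows(row_ptrs: &[u64], deleted: &mut Vec<u64>) -> Result<u32, String> {
    if row_ptrs.is_empty() {
        return Ok(0); // nothing to delete is not an error
    }
    deleted.extend_from_slice(row_ptrs);
    Ok(row_ptrs.len() as u32)
}
```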
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
- [x] Auto-migrate smoketests
# Description of Changes
This reapplies the patch from #3704, and fixes the issues that were
causing it to deadlock.
The reason it was deadlocking was that it allowed for the following
sequence of events:
* `SchedulerActor::handle_queued()` begins mutable tx
* `ModuleHost::disconnect_client()` submits call to `call_reducer(tx:
None)`
* scheduler submits call to `call_reducer(tx: Some)`
* `WasmModuleInstance::disconnect_client` now has to try to take tx
lock, but the scheduler's call_reducer already holds it and is behind it
in the queue
So, I moved most of the logic from `handle_queued` back to being
executed in the module worker thread, but kept the code in
`scheduler.rs` so that it can all be reasoned about locally.
Fixes #3645. Should I uncomment the implementation of
`ExportFunctionForScheduledTable for F: Procedure` now?
# Expected complexity level and risk
2 - there's a chance that this patch hasn't fully fixed the deadlock
issue from #3704, but I'm quite confident it has.
# Testing
- [x] Manually verified that deadlock no longer occurs - previously,
`while true; do python -m smoketests schedule_reducer -k
test_scheduled_table_subscription; done` would freeze up in only 2 or 3
iterations, but now it can run for 10 minutes without issues.
# Description of Changes
In the past we converted CPU instructions into energy. We no longer do
that conversion on the SpacetimeDB side, so we should report the
Wasmtime fuel directly.
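The change amounts to dropping the conversion step; schematically (hypothetical names and factor, not the actual host code):

```rust
// Before (schematic): executed instructions were scaled into "energy".
fn energy_from_instructions(instructions: u64, factor: u64) -> u64 {
    instructions.saturating_mul(factor)
}

// After (schematic): report the Wasmtime fuel consumed directly, as the
// difference between the fuel budget and what remains after execution.
fn fuel_used(initial_fuel: u64, remaining_fuel: u64) -> u64 {
    initial_fuel.saturating_sub(remaining_fuel)
}
```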
# Description of Changes
Fixes #3715.
The patch makes snapshots skip ephemeral tables.
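Schematically, snapshotting now filters on the table's ephemeral flag (hypothetical types; the real check lives in the snapshot code):

```rust
// Hypothetical sketch: ephemeral tables are not durable, so the
// snapshot only includes tables where the ephemeral flag is unset.
#[derive(Debug, PartialEq)]
struct TableSchema {
    name: &'static str,
    ephemeral: bool,
}

fn tables_to_snapshot(tables: &[TableSchema]) -> Vec<&TableSchema> {
    tables.iter().filter(|t| !t.ephemeral).collect()
}
```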
# API and ABI breaking changes
NA
# Expected complexity level and risk
1
# Testing
---------
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
This adds a way to build queries with typescript views, and it allows
views to return queries (if the return type of the query is an array).
For examples and syntax, you can look at the tests in
[crates/bindings-typescript/tests/query.test.ts](https://github.com/clockworklabs/SpacetimeDB/compare/jsdt/ts-query-builder?expand=1#diff-4fd25c191f1207085a491cf84996c601f805f5e8280d1cf2a812ebad6aa6e75a).
To play around with the syntax, you might find it easier to look in
[crates/bindings-typescript/src/server/view.test-d.ts](https://github.com/clockworklabs/SpacetimeDB/compare/jsdt/ts-query-builder?expand=1#diff-4fd25c191f1207085a491cf84996c601f805f5e8280d1cf2a812ebad6aa6e75a).
This could still use some cleanup, and there are some places where the
type safety is imperfect. I'll try to list the known limitations here:
1. This will allow the use of `eq` for columns that are product types,
even though the query engine doesn't support it. This can be fixed
later, and it would only be a breaking change for modules that have
invalid queries.
2. When we check whether a view is returning a query of the correct
type, we check against the typescript row type. We should be checking
against the spacetime type, since the current check allows a few
incorrect things to be returned:
   1. A different table with any superset of the fields (for example, a
      different table that has one extra field). That will fail when
      executing, but will be allowed by the typescript compiler.
   2. A table with the same fields, but with those fields in a different
      order, would also fail at runtime but be allowed by the typescript
      compiler.
   3. A table with fields of a different spacetime type that map to the
      same typescript type (like `u16` and `u32`).
I can also add back functions for things like inequality once we are ok
with the rest of it.
# API and ABI breaking changes
This adds some new API surface, but does not break existing code.
# Expected complexity level and risk
2
# Testing
For automated tests, there are unit tests checking what SQL gets emitted
in `tests/query.test.ts`, and some tests of the types in
`view.test-d.ts`.
I've also run some manual tests with a typescript module with views.
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3729
I genuinely don't know what came over me.
# API and ABI breaking changes
None
# Expected complexity level and risk
1.5 - very straightforward, but not strictly trivial
# Testing
Adds automated integration tests, written in Rust and run with `cargo
test` (though note this comment from @matklad about integration tests,
for the future:
https://internals.rust-lang.org/t/running-test-crates-in-parallel/15639/2):
- [x] Can publish an updated module if no migration is required
- [x] Can publish an updated module if auto-migration is required (with
the yes-break flag true/false)
- [x] Cannot publish if a manual migration is required
- [x] Can publish if a manual migration is required but the user
specified `--delete-data`
- [x] Can publish if a manual migration is required but the user
specified `--delete-data=on-conflict`
- [x] No data deletion occurs if no migration is required and
`--delete-data=on-conflict` is specified
---------
Signed-off-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
This implements (a subset of) the TextEncoder/TextDecoder web APIs using
native functions to do the actual `Uint8Array <-> String` conversion.
This should be a good bit faster than the `fast-text-encoding` package.
# Expected complexity level and risk
2 - this introduces new kinds of JS code and host calls to the v8 host,
but they're pretty well encapsulated.
# Testing
- [x] All TS modules already use these for encoding/decoding strings in
BSATN. The `fast-text-encoding` polyfill we pull in only takes effect
if the classes don't already exist, so the smoketests passing means
they work.
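On the host side, the conversions these classes delegate to are plain UTF-8 transcoding; in Rust terms (a sketch of the semantics, not the actual v8 host-call code), `TextEncoder.encode` corresponds to UTF-8 encoding of the string, and `TextDecoder.decode` with the default `"utf-8"` label and `fatal: false` corresponds to lossy decoding that substitutes U+FFFD for invalid sequences:

```rust
// Sketch of the semantics the native host calls provide (not the actual
// host code). TextEncoder.encode: String -> bytes is UTF-8 encoding;
// TextDecoder.decode with fatal:false replaces invalid byte sequences
// with U+FFFD, like String::from_utf8_lossy.
fn encode(s: &str) -> Vec<u8> {
    s.as_bytes().to_vec()
}

fn decode_lossy(bytes: &[u8]) -> String {
    String::from_utf8_lossy(bytes).into_owned()
}
```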