# Description of Changes
This PR fixes a C# SDK/BSATN serialization issue where inserting a
`null` string into a non-nullable `string` column could throw an
`ArgumentNullException` with an unhelpful stack trace, as reported in
[#3960](https://github.com/clockworklabs/SpacetimeDB/issues/3960).
* Updated the C# BSATN runtime to reject `null` for non-nullable
`string` serialization with a clear, actionable error message (guides
users to model nullable strings as `string?` so BSATN encodes an
option).
* Added BSATN runtime unit tests to cover:
* Non-nullable string rejects `null` with an actionable message
* Nullable string (`RefOption<string, BSATN.String>`) round-trips `null`
* Added end-to-end C# regression-test module + client assertions for
#3960:
* Inserting `""` into non-nullable `string` succeeds
* Inserting `null` into non-nullable `string` fails with the expected
message
* Inserting `null` into nullable `string?` succeeds and round-trips as
`null`
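A minimal Rust-flavored sketch of the check described above (illustrative only; the actual fix lives in the C# BSATN runtime, and the length-prefix encoding shown is an assumption for the sketch):

```rust
// Reject a missing value for a non-nullable string column with an
// actionable message instead of an opaque ArgumentNullException.
fn serialize_non_nullable_string(value: Option<&str>, out: &mut Vec<u8>) -> Result<(), String> {
    let s = value.ok_or_else(|| {
        "cannot serialize null as a non-nullable BSATN String; \
         declare the field as nullable (`string?` in C#) so BSATN encodes an option"
            .to_string()
    })?;
    // Assumed encoding for this sketch: little-endian u32 length prefix + UTF-8 bytes.
    out.extend_from_slice(&(s.len() as u32).to_le_bytes());
    out.extend_from_slice(s.as_bytes());
    Ok(())
}
```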
# API and ABI breaking changes
None.
* No schema or wire-format changes.
* Behavior change is limited to improving the error surfaced when
attempting to serialize null as a non-nullable BSATN `String`, plus
stricter input validation for `ConnectionId` / `Identity` parsing.
# Expected complexity level and risk
2 - Low
* Changes are isolated to the error path and regression-test coverage.
* No impact on valid serialization paths; only affects invalid/malformed
inputs and improves diagnostics.
# Testing
- [X] Ran C# regression-tests client and verified new tests pass.
---------
Signed-off-by: Ryan <r.ekhoff@clockworklabs.io>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
This PR fixes Release-only flakiness in C# runtime/property tests by
making `ConnectionId` / `Identity` parsing and related low-level byte
decoding strict about input length.
Previously, length validation relied on `Debug.Assert`, which is
compiled out in Release builds. That meant malformed inputs
(wrong-length spans/strings) could slip through in Release and produce
inconsistent behavior.
Changes:
* Updated `ConnectionId.FromHexString(string hex)` to throw when the
input is not exactly 32 hex characters.
* Updated `Identity.FromHexString(string hex)` to throw when the input
is not exactly 64 hex characters.
* Updated `Util.Read<T>(ReadOnlySpan<byte> source, bool littleEndian)` to throw
an `ArgumentException` when `source.Length != sizeof(T)` (instead of
relying on `Debug.Assert`), preventing Release-mode “partial reads” from
incorrectly-sized spans.
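A Rust-flavored sketch of the same hardening (hypothetical function; the real change is in the C# `Util.Read`). Rust's `debug_assert!`, like C#'s `Debug.Assert`, is compiled out of release builds, so the length check must be an unconditional runtime error:

```rust
// Strictly validate the input length before decoding, rather than
// relying on a debug-only assertion that vanishes in release builds.
fn read_u32_le(source: &[u8]) -> Result<u32, String> {
    if source.len() != std::mem::size_of::<u32>() {
        return Err(format!(
            "expected exactly {} bytes, got {}",
            std::mem::size_of::<u32>(),
            source.len()
        ));
    }
    let mut buf = [0u8; 4];
    buf.copy_from_slice(source);
    Ok(u32::from_le_bytes(buf))
}
```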
# API and ABI breaking changes
None.
* No schema or wire-format changes.
* Behavior change is limited to stricter validation of invalid/malformed
inputs for `ConnectionId` / `Identity` parsing and internal runtime
decoding. Valid inputs are unaffected.
# Expected complexity level and risk
2 - Low
* Changes are isolated to the C# BSATN runtime input validation paths.
* No impact on valid serialization/normal runtime behavior; only affects
invalid inputs and makes behavior consistent between Debug and Release.
# Testing
- [X] Ran C# BSATN runtime tests locally (Debug + Release) and verified
they pass.
# Description of Changes
This PR fixes a C# reducer error-reporting issue where exceptions thrown
in reducers were returned to clients with full stack traces, as reported
in [#3959](https://github.com/clockworklabs/SpacetimeDB/issues/3959).
- Updated the C# reducer host-call error serialization to return only
`Exception.Message` (instead of `Exception.ToString()`), preventing
server-side stack traces from being leaked to clients.
- Added a C# regression-test client assertion to validate reducer
failures do not contain stack traces:
- `exception.Message` matches the thrown message exactly
- the message does not contain newlines or `" at "` stack-trace markers
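A sketch of the regression assertion described above (a hypothetical helper; the real check lives in the C# regression-test client): a sanitized reducer error carries only the message, with no stack-trace markers.

```rust
// A reducer error string counts as sanitized if it contains neither
// newlines nor the " at " frame marker that .NET stack traces emit.
fn is_sanitized_reducer_error(msg: &str) -> bool {
    !msg.contains('\n') && !msg.contains(" at ")
}
```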
# API and ABI breaking changes
None.
- No schema or wire-format changes.
- Behavior change is limited to the *contents* of reducer error strings
returned to clients (stack traces removed).
# Expected complexity level and risk
2 - Low
- Change is isolated to C# reducer error formatting plus a regression
assertion.
- Removes unintended information exposure without affecting reducer
success paths.
# Testing
- [X] Ran C# regression-tests client; verified reducer error message is
preserved and stack trace is not present.
## Summary
This fixes a regression introduced in commit afe169ac4 ("Fix the issues
with scheduling procedures #3816", Dec 5, 2025) where scheduled reducers
stopped recording their `ReducerContext` to the commitlog.
**Impact:**
- No `inputs` section was written to the commitlog for scheduled
reducers
- The reducer's name, caller, timestamp, and arguments were not
persisted
## Root Cause
The old code in `module_host.rs::call_scheduled_reducer_inner` patched
`tx.ctx` with the `ReducerContext`:
```rust
tx.ctx = ExecutionContext::with_workload(
tx.ctx.database_identity(),
Workload::Reducer(ReducerContext { ... }),
);
```
This was removed when the logic moved to `scheduler.rs`. The new code in
`call_reducer_with_tx` only applies the `ReducerContext` when `tx` is
`None` — since the scheduler passes `Some(tx)`, the context was never
applied.
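A stripped-down sketch of the bug shape (names simplified; not the actual host code): the context is applied only when no transaction is supplied, so the scheduler's `Some(tx)` path never gets a `ReducerContext`.

```rust
struct Tx {
    ctx: Option<String>,
}

fn call_reducer_with_tx(tx: Option<Tx>, reducer_ctx: String) -> Tx {
    match tx {
        // Buggy path: an existing tx keeps whatever ctx it already had.
        Some(tx) => tx,
        // The ctx is applied only when the tx is created here.
        None => Tx { ctx: Some(reducer_ctx) },
    }
}
```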
## Fix
Restore the `tx.ctx` patching in `scheduler.rs` before calling
`call_reducer_with_tx`.
## Test Plan
- [ ] Existing tests pass
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
# Description of Changes
We would like to move all of the templates to a central directory.
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
---------
Co-authored-by: spacetimedb-bot <spacetimedb-bot@users.noreply.github.com>
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3240.
Non-unique indices are now backed by a type `SameKeyEntry` which holds
the `RowPointer`s for the same key.
When these `RowPointer`s exceed 4KiB (512 entries), the data structure
switches from using an array list to a hash set.
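A hypothetical Rust sketch of the spill strategy described above (type and field names are stand-ins; `u64` stands in for `RowPointer`, and 512 entries of 8 bytes matches the 4KiB figure):

```rust
use std::collections::HashSet;

const SPILL_THRESHOLD: usize = 512; // 512 entries * 8 bytes = 4KiB

// Stand-in for `RowPointer`.
type RowPtr = u64;

enum SameKeyEntry {
    // Small cardinalities: a flat array list, cheap to scan and append.
    Vec(Vec<RowPtr>),
    // Past the threshold: a hash set for O(1) membership and removal.
    Set(HashSet<RowPtr>),
}

impl SameKeyEntry {
    fn insert(&mut self, ptr: RowPtr) {
        match self {
            SameKeyEntry::Vec(v) if v.len() < SPILL_THRESHOLD => v.push(ptr),
            SameKeyEntry::Vec(v) => {
                // Spill: migrate the existing pointers into a hash set.
                let mut set: HashSet<RowPtr> = v.drain(..).collect();
                set.insert(ptr);
                *self = SameKeyEntry::Set(set);
            }
            SameKeyEntry::Set(s) => {
                s.insert(ptr);
            }
        }
    }

    fn len(&self) -> usize {
        match self {
            SameKeyEntry::Vec(v) => v.len(),
            SameKeyEntry::Set(s) => s.len(),
        }
    }
}
```

Row pointers for a given key are unique, so the differing duplicate handling of the two representations does not matter in practice.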
# API and ABI breaking changes
None
# Expected complexity level and risk
2?
# Testing
Covered by existing tests, though more tests will come in future PRs.
# Description of Changes
- Version upgrade to `1.11.2`
# API and ABI breaking changes
- None, this is just a version bump
# Expected complexity level and risk
1 - no real changes here
# Testing
- NA this is just a version bump, no functionality change here.
# Description of Changes
Closes #3673
*NOTE*: C++ part will be in another PR
# Expected complexity level and risk
2
Adding a new type touches code everywhere
# Testing
- [x] Adding smoke and unit test
---------
Signed-off-by: Ryan <r.ekhoff@clockworklabs.io>
Co-authored-by: rekhoff <r.ekhoff@clockworklabs.io>
Co-authored-by: Phoebe Goldman <phoebe@goldman-tribe.org>
Co-authored-by: Jason Larabie <jason@clockworklabs.io>
This is the first step to make in-memory only databases not touch the
disk at all. Pending is an in-memory only sink for module logs.
Responsibility for the lock file is transferred to `Durability`, which
means that only persistent databases opened for writing acquire the
lock.
As a consequence, the `Durability` trait gains a `close` method that
prevents further writes and drains the internal buffers, even when
multiple `Arc`-pointers to the `Durability` exist.
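An illustrative Rust sketch (type and method names assumed, not the actual SpacetimeDB definitions) of close-through-shared-handles semantics: `close` takes `&self`, so it works even while multiple `Arc` handles exist, flips an atomic flag to refuse further writes, and drains the buffer.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};

struct BufferedDurability {
    closed: AtomicBool,
    buffer: Mutex<Vec<u8>>,  // pending, not yet durable
    flushed: Mutex<Vec<u8>>, // stand-in for the durable sink
}

impl BufferedDurability {
    fn new() -> Arc<Self> {
        Arc::new(Self {
            closed: AtomicBool::new(false),
            buffer: Mutex::new(Vec::new()),
            flushed: Mutex::new(Vec::new()),
        })
    }

    fn append(&self, bytes: &[u8]) -> Result<(), &'static str> {
        // Note: a real implementation would guard the check-then-append
        // race; this sketch only shows the intended semantics.
        if self.closed.load(Ordering::Acquire) {
            return Err("durability is closed");
        }
        self.buffer.lock().unwrap().extend_from_slice(bytes);
        Ok(())
    }

    // Prevents further writes and drains internal buffers, even when
    // multiple Arc handles to this Durability still exist.
    fn close(&self) {
        self.closed.store(true, Ordering::Release);
        let mut pending = self.buffer.lock().unwrap();
        self.flushed.lock().unwrap().append(&mut pending);
    }
}
```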
# Expected complexity level and risk
2
# Testing
Covered by existing tests.
# Description of Changes
Previously the run file was used with `ci-check`, while the summary hash
went unused. I have updated `ensure_mode` to always update the hash in
the summary file.
# API and ABI breaking changes
# Expected complexity level and risk
# Testing
# Description of Changes
#3538 added a `uuid_counter` field to ReducerCtx, which has no need to
be public. Implementing ReducerCtx as a class allows us to encapsulate
better, and lets us enumerate exactly the data that it needs to hold so
that the runtime could possibly optimize it.
# Expected complexity level and risk
1: straightforward switch
# Testing
- [x] Automated testing is sufficient.
# Description of Changes
This PR ports procedure-scoped HTTP via the `procedure_http_request` ABI
(added on the Rust side in PR #3684) to the C# host bindings.
What changed:
1) Fix for BSATN wire format
* Updated C# BSATN wire types for HTTP
(`BSATN.Runtime/HttpWireTypes.cs`) to match exactly the Rust stable
types:
* `spacetimedb_lib::http::Request` fields and order: `method`,
`headers`, `timeout`, `uri`, `version`
* `spacetimedb_lib::http::Response` fields and order: `headers`,
`version`, `code`
* Header pairs are (`name: string, value: bytes`); no additional
metadata is encoded.
* Added explicit “do not reorder/extend” note in the C# wire type file
since the host BSATN-decodes these bytes directly and may trap on
mismatch.
2) C# runtime HTTP client implementation
* Implemented `ProcedureContext.Http` support (`Runtime/Http.cs`) that:
* BSATN-serializes `HttpRequestWire` + sends body bytes via
`procedure_http_request`
* On success, BSATN-decodes `HttpResponseWire` and returns `(response,
body)` as a typed `HttpResponse`
* On `HTTP_ERROR`, decodes the error payload as a BSATN `string`
* On `WOULD_BLOCK_TRANSACTION`, returns a stable error message (host
rejects blocking HTTP while a mut tx is open)
* Clamps timeouts to 500ms max (matching host behavior documented on the
Rust side)
3) ABI wiring / import plumbing
* Added the `procedure_http_request` import to the C# wasm shim
(`Runtime/bindings.c`) under the `spacetime_10.3` import module.
* Verified the generated C# P/Invoke signature matches the host ABI
(request ptr/len, body ptr/len, out `[BytesSource; 2]`).
4) Procedure entrypoint contract (trap vs errno)
* Updated/clarified the module procedure trampoline behavior
(`Runtime/Internal/Module.call_procedure`): it must either return
`Errno.OK` or trap (log + rethrow). This mirrors the host’s expectations
and avoids “unexpected errno values from guest entrypoints” behavior.
Behavioral notes vs Rust
* Rust bindings may `expect(...)`/panic on “should never happen”
serialization failures; C# runtime explicitly avoids throwing across the
procedure boundary for the HTTP client path and instead returns
`Result.Err` for unexpected failures. This prevents whole-module traps
from user-space HTTP usage while still surfacing rich errors.
* The host remains authoritative on timeout enforcement; C# now mirrors
the effective host clamp (500ms) to keep behavior aligned.
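The client-side timeout clamp mentioned above can be sketched as follows (a hypothetical helper; the host remains authoritative, and the guest clamp only mirrors its documented maximum):

```rust
// Requests asking for more than the host's documented maximum (500ms)
// are clamped down; shorter requests pass through unchanged.
const MAX_HTTP_TIMEOUT_MS: u64 = 500;

fn effective_timeout_ms(requested_ms: u64) -> u64 {
    requested_ms.min(MAX_HTTP_TIMEOUT_MS)
}
```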
# API and ABI breaking changes
No new host ABI is introduced, and no syscall signatures change.
* This is not an ABI “break” in the host, but it is a guest-side wire
compatibility correction: older C# guests built with the incorrect wire
types could cause the host to trap during BSATN decode. After this PR,
C# guests are compatible with the host’s expected layout.
**Adds API**: `ProcedureContextBase.Http` is available for procedures.
No existing public types/method signatures are removed.
* A notable difference from Rust's panic behavior is that Rust code
sometimes `expect(...)`s and may panic/trap on internal “should not
happen” cases. On the other hand, C#'s `HttpClient.Send` explicitly
converts unexpected failures into `Result.Err` to avoid trapping the
module. This is a behavioral difference, but not an API break.
# Expected complexity level and risk
2 - Mostly just creating equivalents to the Rust version.
# Testing
C# regression tests updated to cover:
- [X] Successful request path (`ReadMySchemaViaHttp`)
- [X] Invalid request path (`InvalidHttpRequest`) returning error
without trapping
Testing performed:
- [X] Full regression suite run against local node (see chat log); no
wasm traps, all suites reported `Success`.
---------
Signed-off-by: Ryan <r.ekhoff@clockworklabs.io>
# Description of Changes
Introduce a new **LLM benchmarking app** and supporting code.
* **CLI:** `llm` with subcommands `run`, `routes list`, `diff`,
`ci-check`.
* **Runner:** executes globally numbered tasks; filters by `--lang`,
`--categories`, `--tasks`, `--providers`, `--models`.
* **Providers/clients:** route layer (`provider:model`) with HTTP LLM
Vendor clients; env-driven keys/base URLs.
* **Evaluation:** deterministic scorers (hash/equality, JSON
shape/count, light schema/reducer parity) with clear failure messages.
* **Results:** stable JSON schema; single-file HTML viewer to
inspect/filter/export CSV.
* **Build & guards:** build script for compile-time setup.
* **Docs:** `DEVELOP.md` includes `cargo llm …` usage.
This PR is the initial addition of the app and its modules (runner,
config, routes, prompt/segmentation, scorers, schema/types,
defaults/constants/paths/hashing/combine, publishers, spacetime guard,
HTML stats viewer).
### How it works
1. **Pick what to run**
* Choose tasks (`--tasks 0,7,12`), or a language (`--lang rust|csharp`),
or categories (`--categories basics,schema`).
* Optionally limit vendors/models (`--providers …`, `--models …`).
2. **Resolve routes**
* Read env (API keys + base URLs) and build the active set (e.g.,
`openai:gpt-5`).
3. **Build context**
* Start Spacetime
* Publish golden answer modules
* Prepare prompts and send to LLM model
* Attempt to publish LLM module
4. **Execute calls**
* Run the selected tasks within each test against selected models and
languages.
5. **Score outputs**
* Apply deterministic scorers (hash/equality, JSON shape/count, simple
schema/reducer checks).
* Record the score and any short failure reason.
6. **Update results file**
* Write/update the single results JSON with task/route outcomes,
timings, and summaries.
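Step 5's deterministic scoring can be sketched roughly like this (hypothetical types; the real scorers live in the llm app's modules):

```rust
// A minimal deterministic equality scorer: exact match passes, anything
// else fails with a short, reproducible reason string.
struct ScoreResult {
    pass: bool,
    reason: Option<String>,
}

fn score_equality(expected: &str, actual: &str) -> ScoreResult {
    if expected == actual {
        ScoreResult { pass: true, reason: None }
    } else {
        ScoreResult {
            pass: false,
            reason: Some(format!("expected {expected:?}, got {actual:?}")),
        }
    }
}
```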
# API and ABI breaking changes
None. New application and modules; no existing public APIs/ABIs altered.
# Expected complexity level and risk
**4/5.** New CLI, routing, evaluation, and artifact format.
* External model APIs may rate-limit/timeout; concurrency tunable via
`LLM_BENCH_CONCURRENCY` / `LLM_BENCH_ROUTE_CONCURRENCY`.
# Testing
I ran the full test matrix and generated results for every task against
every vendor, model, and language (Rust + C#). I also tested the CI
check locally using [act](https://github.com/nektos/act).
**Please verify**
* [ ] `llm run --tasks 0,1,2` (explicit `run`)
* [ ] `llm run --lang rust --categories basics` (filters)
* [ ] `llm run --categories basics,schema` (multiple categories)
* [ ] `llm run --lang csharp` (language switch)
* [ ] `llm run --providers openai,anthropic --models "openai:gpt-5
anthropic:claude-sonnet-4-5"` (provider/model limits)
* [ ] `llm run --hash-only` (dry integrity)
* [ ] `llm run --goldens-only` (test goldens only)
* [ ] `llm run --force` (skip hash check)
* [ ] `llm ci-check`
* [ ] Stats viewer loads the JSON; filtering and CSV export work
* [ ] CI works as intended
---------
Signed-off-by: bradleyshep <148254416+bradleyshep@users.noreply.github.com>
Signed-off-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Tyler Cloutier <cloutiertyler@aol.com>
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
Co-authored-by: spacetimedb-bot <spacetimedb-bot@users.noreply.github.com>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
TS equivalent of #3863. I also optimized single-column indices with a
special case that avoids branching.
# Expected complexity level and risk
1 - straightforward and there's Rust precedent to draw from.
# Testing
- [x] Automated testing is sufficient
# Description of Changes
Sets the `committed` label to `false` when we rollback a txn. It was
always set to `true` before this change.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
- [x] Unit test
# Description of Changes
This PR fixes a C# codegen performance/behavior issue triggered by views
that include nullable value-type fields (e.g. `DbVector2?`), as reported
in #3914.
* Updated the C# BSATN code generator to emit non-boxing equality for
`Nullable<T>` members by using `System.Nullable.Equals(a, b)` rather
than `a.Equals(b)` (which can box and cause excessive host logging/work
in view diff/equality paths).
This causes code generation to change from something like:
``` csharp
// From:
var ___eqNullableIntField = this.NullableIntField.Equals(that.NullableIntField);
// To:
var ___eqNullableIntField = System.Nullable.Equals(
this.NullableIntField,
that.NullableIntField
);
```
* Added a regression scenario to the C# regression-test module that
exercises the problematic pattern:
* A table containing a nullable struct field (`DbVector2? Pos`).
* A public view that returns rows containing that nullable field.
* A reducer to mutate the nullable field from `some` → `none` → `some`
to force view re-evaluation and diffing.
* Updated the regression-test client to:
* Subscribe to the new view.
* Validate that the view evaluates successfully and contains the
expected rows.
* Call the reducer and validate view state after updates.
* Fail the test if view evaluation/diffing produces errors.
# API and ABI breaking changes
None.
* No changes to public SpacetimeDB schema or wire format semantics.
* Changes are limited to generated equality code and regression tests.
# Expected complexity level and risk
2 - Low
* The codegen change is small and localized (special-casing
`System.Nullable<T>` equality generation).
* Risk is primarily around subtle behavior differences in equality for
nullable value types; however, `System.Nullable.Equals` matches the
expected semantics and avoids boxing.
* Regression tests specifically cover the nullable-struct-in-view
scenario to guard against regressions.
# Testing
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
- [X] Ran the C# regression test suite via `run-regression-tests.sh` and
confirmed no errors.
# Description of Changes
Reverted the incorrect inversion of the `build_debug` condition
introduced by
https://github.com/clockworklabs/SpacetimeDB/commit/bb432132457b023d98a379beea2a52d76d3bd8d6,
which resulted in only debug builds being optimized instead of only
release ones, as the comment just below the condition says.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] Manual testing
- [x] Personal experience (saw `.opt.wasm` files disappear at some
point)
# Description of Changes
Extension of https://github.com/clockworklabs/SpacetimeDB/pull/3938 to
polish some of the code, which was not done in the original PR due to
time constraints.
- Adds some missing commentary.
- Code cleanup, mostly limited to changed function signatures.
# API and ABI breaking changes
NA
# Expected complexity level and risk
0
# Testing
Existing tests should be enough.
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
Closes
[#3290](https://github.com/clockworklabs/SpacetimeDB/issues/3290).
Adds a new "special" type to SATS, `UUID`, which is represented as the
product `{ __uuid__: u128 }`. Adds versions of this type to all of our
various languages' module bindings libraries and client SDKs, and
updates codegen to recognize it and output references to those named
library types. Adds methods for creating new UUIDs according to the V4
(all random) and V7 (timestamp, monotonic counter and random)
specifications.
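As an illustration of the `u128` representation, here is a sketch (not the SpacetimeDB implementation) of how a V4 UUID stamps its version and variant bits into random data per RFC 9562; `random_bits` stands in for a real RNG:

```rust
// Byte 6's high nibble (bits 79..=76) holds the version; byte 8's top
// two bits (bits 63..=62) hold the variant.
fn uuid_v4_from_random(random_bits: u128) -> u128 {
    let mut u = random_bits;
    u &= !(0xFu128 << 76); // clear the version nibble
    u |= 0x4u128 << 76;    // version = 4
    u &= !(0x3u128 << 62); // clear the variant bits
    u |= 0x2u128 << 62;    // variant = 0b10 (RFC 9562)
    u
}
```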
# API and ABI breaking changes
We add a new type
# Expected complexity level and risk
2
it impacts all over the code
# Testing
- [x] Extends the Rust and Unreal SDK tests, and the associated
`module-test` modules in Rust, C# and TypeScript, with uses of UUIDs.
- [x] Extends the C# SDK regression tests with uses of UUIDs.
- [x] Extends the TypeScript test suite with tests with uses of UUIDs.
---------
Signed-off-by: Mario Montoya <mamcx@elmalabarista.com>
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
Co-authored-by: Jason Larabie <jason@clockworklabs.io>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
Closes #3659
# API and ABI breaking changes
Removes a route and alters the semantics of the `call` route on both the
server and the `cli`.
# Expected complexity level and risk
1
# Testing
- [x] Published a module with `procedures` and observed that calling it
via the `cli` prints the result.
---------
Co-authored-by: Phoebe Goldman <phoebe@goldman-tribe.org>
# Description of Changes
This bumps the TypeScript SDK to 1.11.3.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
NA - this is just a version bump and the CI passes
I will change the base to master once the `1.11.2` version bump merges.
---------
Signed-off-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
This bumps the TypeScript SDK to 1.11.2 so that we can send out Noa's
fix.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
NA - this is just a version bump and the CI passes
# Description of Changes
This reverts unnecessary changes to whitespace to prevent conflicts in
old PRs.
If you hide whitespace, you can confirm that this commit is a no-op.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Description of Changes
When "fixing" a clippy lint, I accidentally flipped a boolean condition
in a `filter` call, breaking replay of column-type-altering
automigrations.
I'm also fixing an error message containing a typo I apparently wrote
many months ago, where I was printing the "old" layout in both places
rather than also printing the "new" layout. This is used only as a
diagnostic message, and is never programmatically inspected.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
- [x] Replayed the broken module manually locally.
# Description of Changes
Prior to this commit, we had special handling for deletes from
`st_table` during replay, but we did not pair them with inserts to form
updates, instead only treating the delete as a dropped table.
With this commit, we record inserts to `st_table` which will form update
pairs during replay, and handle them appropriately, updating the table's
schema and not dropping the table.
There is a tricky case where a table exists but is empty, and then
within a single transaction:
- The table undergoes a migration s.t. its `table_access` or
`primary_key` changes.
- At least one row is inserted into the table.
In this case, the in-memory table structure will not exist at the point
of the `st_table` insert, but will be created before the corresponding
`st_table` delete, meaning there will be two conflicting `st_table` rows
resident. To handle this case, `CommittedState` tracks a side table,
`replay_table_updated`, which stores a `RowPointer` to the correct
most-recent `st_table` row for the migrating table.
I've also renamed the one previously-extant replay-only side table,
`table_dropped`, to include the `replay_` prefix, which IMO improves
clarity. And I've made it so `replay_table_dropped` is cleared at the
end of each transaction, as the previous behavior of continuing to
ignore a table that should be unreachable masked errors which would have
been helpful when debugging this issue.
This PR also includes an extended error message when encountering a
unique constraint violation while replaying, which I found helpful while
debugging.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
2 - replay is complicated and scary, but this PR isn't gonna make things
*more* broken than they already were.
# Testing
- [x] Manually replayed a commitlog which included a migration that
altered a table's `table_access`, which was previously broken but now
replays successfully.
# Description of Changes
Fixes a deadlock in the subscription code and HTTP SQL handler that was
caused by calling view methods on the module while holding the
transaction lock.
I tried a couple of approaches to make the closures `Send` for all code
paths that need to hold the transaction while working with views, but
that didn’t work out well. The V8 module communicates with the host
through channels, which would require dynamic dispatch.
In the current approach, all existing methods that were calling views
from the host are now invoked from inside the module itself. In future,
It will be better to move these methods to common place rather than
being scattrered.
# Description of Changes
Update the `useTable` hook in the `spacetimedb/react` package to use the
client-language-convention-aware table accessor key to look up the
correct table.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
I built and ran a client using this code change before and after.
Previously, with multi-word table names in a Rust module (e.g. the
`crew_assignments` table), the `useTable` hook would fail to look up the
table information and hook up the onInsert/onDelete/onUpdate callbacks.
With this change, they successfully connect and data returned by the
`useTable` hook now flows.
All of my testing was with module_bindings generated with the 1.11.0
rust module crate. Additional testing for backwards compatibility might
be useful. I'm not sure what the clockwork labs target is for that sort
of thing.
# Description of Changes
Closes: #3895
- Added third tier expanding folder
- Merged most into intro + core concepts
- Collapsed modules + database into one
- Moved several documents to how-to to streamline the core concepts
- Added warning to RLS to use views
- Refactored all links
- Moved How To and References into Developer Resources
- Fixed TypeScript View code
Sample: (Getting Started will start expanded due to being the initial
page)
<img width="293" height="746" alt="image"
src="https://github.com/user-attachments/assets/6abc3a0d-1ae2-4886-bda7-11f5565afecf"
/>
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1
# Testing
- [X] Updated the quickstarts.py and generate-cli tooling
---------
Signed-off-by: Jason Larabie <jason@clockworklabs.io>
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
# Description of Changes
Fixes #3919.
# Expected complexity level and risk
1
# Testing
- [x] Erroring code now works
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
To resolve #3875, we added exact-match unique index point lookup support
to the C# bindings by introducing and using
`datastore_index_scan_point_bsatn`.
Previously, generated unique index `Find()` was (in at least one
codepath) implemented as:
* A range scan (`datastore_index_scan_range_bsatn`) over a BTree bound,
then
* `SingleOrDefault()` to collapse the results into a single row.
When the scan is empty, `SingleOrDefault()` returns `default(T)`. For
value-type rows this can manifest as a default-initialized row instead
of “missing”, which is what surfaced as “default-ish row” behavior in
views.
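The semantic difference described above can be sketched in Rust terms (illustrative only; `Row` is a stand-in type):

```rust
// Collapsing an empty scan into default(T) cannot distinguish "missing"
// from a legitimately default row; a point lookup returning Option can.
#[derive(Clone, Debug, Default, PartialEq)]
struct Row {
    id: u32,
}

// Mirrors the old SingleOrDefault() behavior: default(T) on a miss.
fn find_via_scan(rows: &[Row], id: u32) -> Row {
    rows.iter().find(|r| r.id == id).cloned().unwrap_or_default()
}

// Mirrors the point-lookup behavior: None on a miss.
fn find_via_point(rows: &[Row], id: u32) -> Option<Row> {
    rows.iter().find(|r| r.id == id).cloned()
}
```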
Using `datastore_index_scan_point_bsatn` makes the C# implementation
match Rust semantics more closely by performing an exact point lookup
and returning:
* `null` when no rows are found
* the row when exactly one row is found
* (defensively) an error if >1 row is returned (unique index invariant
violation)
Similarly, `datastore_delete_by_index_scan_point_bsatn` was added and
used so deletes-by-unique-key are also exact-match point operations
rather than range deletes.
Runtime updates were made to utilize point scan in `FindSingle(key)` and
in both mutable/read-only unique-index paths.
To keep this non-breaking for existing modules, codegen now detects
whether the table row is a struct or a class and chooses the appropriate
base type:
* Struct rows: `Find()` returns `Row?` (`Nullable<Row>`).
* Class rows: `Find()` returns `Row?` (nullable reference, `null` on
miss).
# API and ABI breaking changes
This change is non-breaking with respect to row type kinds, because
class/record table rows continue to work via
RefUniqueIndex/ReadOnlyRefUniqueIndex while struct rows use
UniqueIndex/ReadOnlyUniqueIndex.
API surface changes:
* Generated `Find()` return type is now nullable (`Row?`) to correctly
represent “missing”.
ABI/runtime:
* Requires the point-scan hostcall import
(`datastore_index_scan_point_bsatn`) to be available; the runtime uses
point-scan for unique lookup (and point delete for unique delete).
# Expected complexity level and risk
2 - Low
# Testing
- [X] Local testing: repro module + client validate view and direct
Find() behavior
---------
Signed-off-by: rekhoff <r.ekhoff@clockworklabs.io>
# Description of Changes
Removes the `TimestampCapabilities` test from the C# SDK's regression
test suite, due to creating inconsistent failures in automated CI
testing.
# API and ABI breaking changes
Not API/ABI breaking
# Expected complexity level and risk
1
# Testing
- [X] Ran locally, no failures in building or running C# regression
tests
# Description of Changes
Introduce a hacky workaround to our `csharp-testsuite` to address
missing `librusty_v8.a`: manually check for that file and manually build
the package if it's missing.
# API and ABI breaking changes
CI-only change
# Expected complexity level and risk
1
# Testing
- [x] Locally tested by removing `librusty_v8.a` and then running
`cargo clean -p v8 && cargo build -p v8`, which does repair it.
- [x] The CI has run with a cache that is "broken", but successfully
passes `csharp-testsuite`
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Avoid emitting a warning when the table is valid.
# API and ABI breaking changes
-
# Expected complexity level and risk
1
# Testing
Accessed tables via Blueprint and verified that no warning appears in
the output log when the table is valid.
# Description of Changes
- Just bumping the TypeScript size limits again
# API and ABI breaking changes
No
# Expected complexity level and risk
0 - this is just a release change.
# Testing
I have not tested this but I've verified the new limit is greater than
the package size.
# Description of Changes
During the docs refactor I left the original module folder in place and
mistakenly didn't remove it, as requested in Tyler's comment:
https://github.com/clockworklabs/SpacetimeDB/pull/3877#pullrequestreview-3585291570:
> Also please remove the "old files" before merging.
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1
# Testing
This only removes documents; there is no test to run.
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/2824.
Defines a global pool `BsatnRowListBuilderPool` which reclaims the
buffers of a `ServerMessage<BsatnFormat>` and which is then used when
building new `ServerMessage<BsatnFormat>`s.
Notes:
1. The new pool `BsatnRowListBuilderPool` reports the same kind of
metrics to Prometheus as `PagePool` does.
2. `BsatnRowListBuilder` now works in terms of `BytesMut`.
3. The trait method `fn to_bsatn_extend` is redefined so that it can
handle `BytesMut` as well as `Vec<u8>`.
4. A trait `ConsumeEachBuffer` is defined, implemented from
`ServerMessage<BsatnFormat>` on down, to extract buffers.
`<ServerMessage<_> as ConsumeEachBuffer>::consume_each_buffer(...)` is
then called in `messages::serialize(...)` just after bsatn-encoding the
entire message and before any compression is done. This is the place
where the pool reclaims buffers.
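The reclaim-then-reuse pattern can be sketched with a minimal, stdlib-only pool (hypothetical names; the real `BsatnRowListBuilderPool` works over `BytesMut` and reports metrics):

```rust
use std::sync::Mutex;

// Hypothetical stand-in for `BsatnRowListBuilderPool`: reclaimed buffers
// are cleared (length reset, capacity kept) and handed back out when a
// new row list is built, avoiding fresh allocations.
struct BufferPool {
    free: Mutex<Vec<Vec<u8>>>,
}

impl BufferPool {
    fn new() -> Self {
        BufferPool { free: Mutex::new(Vec::new()) }
    }

    /// Take a buffer from the pool, or allocate a fresh one if empty.
    fn take(&self) -> Vec<u8> {
        self.free.lock().unwrap().pop().unwrap_or_default()
    }

    /// Reclaim a buffer: clear its contents but keep its capacity.
    fn reclaim(&self, mut buf: Vec<u8>) {
        buf.clear();
        self.free.lock().unwrap().push(buf);
    }
}

fn main() {
    let pool = BufferPool::new();
    let mut buf = pool.take();
    buf.extend_from_slice(b"serialized message");
    let cap = buf.capacity();
    // Reclaim after encoding, before compression (as described above).
    pool.reclaim(buf);
    let reused = pool.take();
    assert_eq!(reused.len(), 0);
    assert!(reused.capacity() >= cap); // capacity survives the round trip
    println!("ok");
}
```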
# Benchmarks
Benchmark numbers vs. master using `cargo bench --bench subscription --
--baseline subs` on i7-7700K, 64GB RAM:
```
footprint-scan time: [21.607 ms 21.873 ms 22.187 ms]
change: [-62.090% -61.438% -60.787%] (p = 0.00 < 0.05)
Performance has improved.
full-scan time: [22.185 ms 22.245 ms 22.324 ms]
change: [-36.884% -36.497% -36.166%] (p = 0.00 < 0.05)
Performance has improved.
```
The improvements in `footprint-scan` are mostly thanks to
https://github.com/clockworklabs/SpacetimeDB/pull/2918, but 7 ms of the
improvements here are thanks to the pool. The improvements to
`full-scan` should be only thanks to the pool.
# API and ABI breaking changes
None
# Expected complexity level and risk
2?
# Testing
- Tests for `Pool<T>` also apply to `BsatnRowListBuilderPool`.
# Description of Changes
A few main goals here:
* have our iterator functions return an [`Iterator`
object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Iterator)
so that users can use its combinators like `filter()` and `find()` and
`reduce()`. It's a very new JS API, but we happen to know that the module
code will always be run in an environment that has it* :)
* improve lifecycle handling for iterator handles - mainly, if an
iterator is not run to completion, it will now eventually get garbage
collected, whereas before we would have a resource leak.
It turns out that the easiest way to do both of those things was to turn
TableIterator into a generator function, which also happens to make the
code much easier to read. Hooray :)
\* I did mention it in `table_cache` (which isn't run in our module
host) but it's fine, since that's only in the type system and
`IteratorObject` is defined in typescript's `lib.es2015.iterable.d.ts`
but is only given fancy methods in `lib.esnext.iterator.d.ts` - so if
the user uses esnext, they'll have access to them, but otherwise not.
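The refactor itself is TypeScript, but the lifecycle guarantee (abandoning an iterator early still releases its handle) can be illustrated with a Rust analogue (hypothetical types) where the cleanup runs in `Drop`:

```rust
use std::cell::Cell;
use std::rc::Rc;

// Hypothetical analogue of a table-iterator handle: when the iterator is
// dropped, even before completion, the underlying handle is released.
// This mirrors the GC-based cleanup described above for the JS generator.
struct TableIter {
    items: Vec<u32>,
    pos: usize,
    released: Rc<Cell<bool>>,
}

impl Iterator for TableIter {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        let item = self.items.get(self.pos).copied();
        self.pos += 1;
        item
    }
}

impl Drop for TableIter {
    fn drop(&mut self) {
        // Release the iterator handle here.
        self.released.set(true);
    }
}

fn main() {
    let released = Rc::new(Cell::new(false));
    let iter = TableIter { items: vec![1, 2, 3], pos: 0, released: released.clone() };
    // Abandon the iterator after one element; the handle is still freed
    // once the iterator goes out of scope.
    let first = iter.take(1).sum::<u32>();
    assert_eq!(first, 1);
    assert!(released.get());
    println!("ok");
}
```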
# Expected complexity level and risk
1: this better separates concerns and makes the code clearer in its
purpose.
# Testing
- [x] Refactor, so automated testing is sufficient.
# Description of Changes
Implements the C# equivalent of #3638
This implementation uses inheritance, where abstract base classes (like
`ProcedureContextBase` in `ProcedureContext.cs`) store the core of the
implementation, and then generated wrappers (like `ProcedureContext` in
the generated FFI.cs file) inherit from them.
For error handling, we work like Rust's implementation of `Result<T,E>`
but we require `where E : Exception` because of how exceptions work in
C#. Transaction-level failures come back as a `TxOutcome` and user
errors should follow the `Result<T,E>` pattern. In this implementation,
`UnwrapOrThrow()` throws exceptions directly, in keeping with C#'s
error-handling conventions.
Unlike the Rust implementation's direct `Result` propagation, we are
using an `AbortGuard` pattern (in `ProcedureContext.cs`) for exception
handling, which uses `IDisposable` for automatic cleanup.
Most changes should have fairly similar Rust-equivalents beyond that.
For module authors, the changes here allow the transaction logic to
work like:
```csharp
ctx.TryWithTx<ResultType, Exception>(tx => {
// transaction logic
return Result<ResultType, Exception>.Ok(result);
});
```
This change includes a number of tests added to the
`sdks/csharp/examples~/regression-tests/`'s `server` and `client` to
validate the behavior of the changes. `server` changes provide further
usage examples for module authors.
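Since the C# shape deliberately mirrors Rust's `Result`, a rough Rust sketch of the same transaction pattern (hypothetical names, not the actual SpacetimeDB API) may clarify the intended flow:

```rust
// Hypothetical sketch of the Result-based transaction shape that the C#
// `TryWithTx` mirrors: the closure returns Ok to commit or Err to abort.
#[derive(Debug, PartialEq)]
enum TxOutcome<T, E> {
    Committed(T),
    Aborted(E),
}

fn try_with_tx<T, E>(f: impl FnOnce() -> Result<T, E>) -> TxOutcome<T, E> {
    // A real implementation would open a transaction here and arm an
    // abort guard (the `AbortGuard` role) so any early exit rolls back.
    match f() {
        Ok(v) => TxOutcome::Committed(v), // guard disarmed, tx commits
        Err(e) => TxOutcome::Aborted(e),  // guard fires, tx rolls back
    }
}

fn main() {
    let ok = try_with_tx(|| Ok::<_, String>(42));
    assert_eq!(ok, TxOutcome::Committed(42));
    let err = try_with_tx(|| Err::<u32, _>("boom".to_string()));
    assert_eq!(err, TxOutcome::Aborted("boom".to_string()));
    println!("ok");
}
```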
# API and ABI breaking changes
Should not be a breaking change
# Expected complexity level and risk
2
# Testing
- [x] Created regression tests that show transactions in procedures
working in various ways, all of which pass.
# Description of Changes
Disable `cache-on-failure` as we think that it's contributing to
mysterious `rusty_v8` linker issues.
I bumped the prefix key so that PRs with this change don't share caches
with PRs missing this change.
# API and ABI breaking changes
CI only.
# Expected complexity level and risk
1
# Testing
None. We'll have to see if we stop having issues once this is merged.
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
- Bump version numbers to `1.11.1`
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [x] Verified that the license has been updated
- [x] `spacetime --version` on this commit is correct
There is also a corresponding private PR.
# Description of Changes
Using `#[tokio::test(start_paused = true)]` pauses time, yet tokio will
still advance it when encountering `sleep`s while it has no other work
to do.
This makes the tests that rely on timeouts deterministic and should
prevent those tests from becoming flaky on busy machines.
# Expected complexity level and risk
2
# Testing
This modifies tests.
It does appear to work as described, but it can't hurt if the reviewers
convince themselves that it does indeed.
# Description of Changes
With the addition of module-defined views, subscriptions are no longer
read-only as they may invoke view materialization.
The way this works is that a subscription starts off as a mutable
transaction, materializes views if necessary, and then downgrades to a
read-only transaction to evaluate the subscription.
Before this patch, we were calling `commit_downgrade` directly on the
`MutTxId` in order to downgrade the transaction. This would update the
in-memory `CommittedState`, but it wouldn't make the transaction
durable.
This would result in us incrementing the transaction offset of the
in-memory `CommittedState` without writing anything to the commitlog.
This in turn would invalidate snapshots as they would be pointing
further ahead into the commitlog than they should, and so when replaying
from a snapshot we would potentially skip over commits that were not
included in the snapshot.
This patch changes those call sites to use
`RelationalDB::commit_tx_downgrade` which both updates the in-memory
state **and** makes the transaction durable.
**NOTE:** The fact that views are materialized is purely an
implementation detail at this point in time. And technically, view
tables are ephemeral, meaning they are not persisted to the commitlog. So the
real bug here was that we were updating the tx offset of the in-memory
committed state at all. This is technically fixed by
https://github.com/clockworklabs/SpacetimeDB/pull/3884 and so after
https://github.com/clockworklabs/SpacetimeDB/pull/3884 lands this change
becomes a no-op. However, we still shouldn't be calling `commit` and
`commit_downgrade` directly on a `MutTxId` since in most cases it is
wrong to bypass the durability layer. And without this change, the bug
would still be present were view tables not ephemeral, which they may
not be at some point in the future.
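The offset divergence can be illustrated with a toy model (hypothetical types, not the real `RelationalDB`): bumping the in-memory offset without appending to the log leaves snapshots pointing past the commitlog, while going through the durability path keeps the two in sync.

```rust
// Toy model of the bug: the in-memory committed state tracks a tx
// offset, while durability is a commitlog of appended entries.
struct Db {
    committed_offset: u64, // in-memory CommittedState offset
    commitlog: Vec<u64>,   // durable log of tx offsets
}

impl Db {
    fn new() -> Self {
        Db { committed_offset: 0, commitlog: Vec::new() }
    }

    // Buggy path: like calling `commit_downgrade` directly on the
    // MutTxId, this bumps the in-memory offset but writes nothing durable.
    fn commit_downgrade_in_memory_only(&mut self) {
        self.committed_offset += 1;
    }

    // Fixed path: like `RelationalDB::commit_tx_downgrade`, this updates
    // the in-memory state AND appends to the commitlog.
    fn commit_tx_downgrade(&mut self) {
        self.committed_offset += 1;
        self.commitlog.push(self.committed_offset);
    }
}

fn main() {
    let mut db = Db::new();
    db.commit_downgrade_in_memory_only();
    // The offset has diverged from the log: a snapshot taken now points
    // further ahead in the commitlog than it should.
    assert_ne!(db.committed_offset, db.commitlog.len() as u64);

    let mut db2 = Db::new();
    db2.commit_tx_downgrade();
    assert_eq!(db2.committed_offset, db2.commitlog.len() as u64);
    println!("ok");
}
```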
# API and ABI breaking changes
None
# Expected complexity level and risk
1. The change itself is trivial, the bug is not.
# Testing
Adding an automated test for this is not so straightforward. First it's
view related which means we don't have many options apart from a smoke
test, but I don't believe the smoke tests have a mechanism for replaying
the commitlog.
If transaction offsets are supposed to be linear, without any gaps, then
it would be useful to assert that on each append, in which case we could
write a smoke test that would fail as soon as the offsets diverged.
# Description of Changes
Resolves algebraic type refs recursively in order to check the product
type of a query builder view.
This should fix the issue reported
[here](https://discord.com/channels/1037340874172014652/1448796556366057513).
However, I've so far been unsuccessful in trying to repro it.
Also adds further commentary to `Typespace::resolve` to make it clear
that it is not recursive.
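Why a single-step resolve is insufficient can be shown with a toy typespace (hypothetical types, not the actual `Typespace` API): a ref may resolve to another ref, so deciding whether a view's type is really a product type requires recursing.

```rust
// Toy typespace: types are stored in a table, and `Ref(i)` points at
// entry i. A single-step resolve can land on another Ref, so deciding
// whether a type is really a Product requires resolving recursively.
// (A real implementation would also guard against cyclic refs.)
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    U32,
    Product(Vec<Ty>),
    Ref(usize),
}

fn resolve_recursive(typespace: &[Ty], ty: &Ty) -> Ty {
    match ty {
        Ty::Ref(i) => resolve_recursive(typespace, &typespace[*i]),
        other => other.clone(),
    }
}

fn main() {
    // Entry 0 is a product; entry 1 is a ref to entry 0.
    let typespace = vec![Ty::Product(vec![Ty::U32]), Ty::Ref(0)];
    // One-level resolution of Ref(1) yields Ref(0): still not a product.
    let one_step = match &Ty::Ref(1) {
        Ty::Ref(i) => typespace[*i].clone(),
        other => other.clone(),
    };
    assert_eq!(one_step, Ty::Ref(0));
    // Recursive resolution reaches the underlying product type.
    let full = resolve_recursive(&typespace, &Ty::Ref(1));
    assert_eq!(full, Ty::Product(vec![Ty::U32]));
    println!("ok");
}
```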
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
TODO. So far I haven't been able to repro with a smoketest.