# Description of Changes
CI was running `gen-quickstart.sh` and then checking for a diff, but it
was checking in the wrong directory.
I have also regenerated the files because the fixed check was failing.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] CI passes
- [x] updated CI failed without the changes to the other files
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Adds a way to pass a "closure" when compressing a commitlog segment,
such that the source and destination are polymorphic. This allows them
to be wrapped externally, which is used to apply bandwidth limiting
where necessary.
Also return bytes in + bytes out stats from the compressor.
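A rough sketch of the shape such an API might take (names and signatures here are illustrative, not the actual crate API; the "compression" is stubbed as a copy):

```rust
use std::io::{self, Read, Write};

/// Byte counters reported by the compressor.
pub struct CompressStats {
    pub bytes_in: u64,
    pub bytes_out: u64,
}

/// Compress `src` into `dst`. The caller's closure wraps the raw source
/// and destination (e.g. in bandwidth-limited adapters) before the
/// compression loop runs.
pub fn compress_segment<R, W, R2, W2>(
    src: R,
    dst: W,
    wrap: impl FnOnce(R, W) -> (R2, W2),
) -> io::Result<CompressStats>
where
    R2: Read,
    W2: Write,
{
    let (mut src, mut dst) = wrap(src, dst);
    let mut stats = CompressStats { bytes_in: 0, bytes_out: 0 };
    let mut buf = [0u8; 8192];
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            break;
        }
        stats.bytes_in += n as u64;
        // A real implementation would run `buf[..n]` through a
        // compression codec here; this sketch copies bytes through.
        dst.write_all(&buf[..n])?;
        stats.bytes_out += n as u64;
    }
    dst.flush()?;
    Ok(stats)
}
```

Because the wrapper types are generic, a caller can pass an identity closure or swap in rate-limited readers/writers without the compressor knowing.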
# Expected complexity level and risk
1
# Testing
Elsewhere.
Commitlog compression is coordinated externally, and it is safe to
compress the same segment concurrently. Therefore, we can release the
lock before starting the actual compression work, which avoids
interfering with writes.
Also, just ignore the currently active segment instead of panicking.
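The locking pattern described above can be sketched as follows (hypothetical names; the real segment bookkeeping differs):

```rust
use std::sync::{Arc, Mutex};

/// Pick a segment to compress while holding the lock as briefly as
/// possible; the compression work itself runs without the lock.
fn pick_compression_candidate(segments: &Arc<Mutex<Vec<u64>>>, active: u64) -> Option<u64> {
    let candidate = {
        let guard = segments.lock().unwrap();
        // Skip the currently active segment instead of panicking on it.
        guard.iter().copied().find(|&offset| offset != active)
    }; // guard dropped here, so writers are no longer blocked
    // ... the actual compression of `candidate` happens after this
    // point; compressing the same segment concurrently is safe, so
    // releasing the lock early cannot corrupt anything.
    candidate
}
```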
# Description of Changes
This is a fix for #4959.
Unity 6 upgraded to a newer Emscripten version that removed the
`dynCall()` library function. The SpacetimeDB C# SDK's
`WebSocket.jslib`
plugin used `dynCall()` in 6 places to invoke C# callbacks from
JavaScript WebSocket events.
This PR adds a `$WebSocketDynCall` helper function that detects which
API is available at runtime:
- On Unity 6+: uses `getWasmTableEntry(ptr).apply(null, args)` (direct
WASM function table access)
- On Unity 2022 and earlier: uses the legacy `dynCall(sig, ptr, args)`
All `dynCall` call sites in `WebSocket.jslib` now route through this
helper, maintaining backward compatibility.
# API and ABI breaking changes
No breaking changes. This is a backward-compatible fix that works on
both older and newer Unity versions.
# Expected complexity level and risk
1. Very low risk. The change is isolated to the WebGL `jslib` plugin and
uses runtime feature detection to choose the appropriate calling
mechanism. Both code paths are well-tested:
- Unity 2022.3.62f2: uses legacy `dynCall` path
- Unity 6000.4.5f1: uses `getWasmTableEntry` path
# Testing
- [x] Create test server module with reducers and tables
- [x] Unity 2022.3.62f2 Editor: Connect and subscribe works
- [x] Unity 2022.3.62f2 WebGL Build: Connect and subscribe works
- [x] Unity 6000.4.5f1 Editor: Connect and subscribe works
- [x] Unity 6000.4.5f1 WebGL Build: Connect and subscribe works (was
previously failing with `dynCall is not defined`)
# Description of Changes
Defers commitlog segment compression until write load has been idle for
a short window, while still forcing progress when the uncompressed
segment backlog grows too large.
This tries to avoid creating a durability backlog that eventually blocks
the main execution thread.
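The deferral policy can be sketched roughly like this (names and thresholds are hypothetical, not the actual implementation):

```rust
use std::time::{Duration, Instant};

/// Defer compression until writes have been idle for `idle_window`,
/// but force progress once the uncompressed backlog exceeds
/// `max_backlog` segments.
fn should_compress(
    last_write: Instant,
    now: Instant,
    idle_window: Duration,
    uncompressed_segments: usize,
    max_backlog: usize,
) -> bool {
    uncompressed_segments > max_backlog
        || now.duration_since(last_write) >= idle_window
}
```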
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
TODO
# Description of Changes
Update keynote readme with updated benchmark figures
# API and ABI breaking changes
N/A
# Expected complexity level and risk
0
# Testing
N/A
In #4338, the read-only path was made resilient against empty segments
at the end of the log, but corresponding logic was not applied to
re-opening the commitlog for writing.
This patch rectifies that by ignoring and removing segments from the
tail of the log if they contain at most `segment::Header::LEN` bytes.
Additionally, zero-sized segments are eliminated entirely by ensuring
that the header is written before the segment is moved into place
atomically. The benefit of this is not huge, but it could simplify
commitlog-consuming code by removing the need to worry about empty
(zero-sized) segments. Happy to revert if that is deemed too small a
benefit.
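The tail-trimming rule can be sketched as follows (a hypothetical helper over segment sizes; the real code operates on segment files):

```rust
/// Stand-in for `segment::Header::LEN`; a segment holding at most a
/// header's worth of bytes contains no commits.
const HEADER_LEN: u64 = 16;

fn trim_empty_tail(segment_sizes: &mut Vec<u64>) -> usize {
    let mut removed = 0;
    // Only trim from the tail: stop at the first segment with data.
    while let Some(&len) = segment_sizes.last() {
        if len <= HEADER_LEN {
            segment_sizes.pop();
            removed += 1;
        } else {
            break;
        }
    }
    removed
}
```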
# Expected complexity level and risk
2
# Testing
Adds a test.
# Description of Changes
The core motivation for this change is simple: avoid cross-thread
handoffs and synchronization on the main execution path.
Before this change, the ingress task for each websocket connection would
wait for a completion response on each request before submitting the
next request to the database. This was mainly used to guarantee that we
delivered message responses in receive-order per connection. However it
also meant that for every request, we notified a waiting Tokio task,
which potentially incurred kernel-assisted wakeup and scheduler
overhead.
Note this design existed mainly for historical reasons. Before the
database had a dedicated job thread, requests were not serialized
through a single queue. The module instance was gated behind a semaphore
which guaranteed mutual exclusion, but it did not guarantee FIFO
ordering. Awaiting the completion of each request in `ws_recv_task` was
therefore the mechanism that enforced per-connection receive-order
semantics. However it now serves primarily as a source of overhead.
Procedures are the important exception. They are not serialized through
the main worker queue. Instead they use their own instance pool so as to
be able to run concurrently with other requests. However procedures may
be composed of multiple transactions and they may effectively yield
between transactions. This means that before this change, if a procedure
were to yield, it would effectively block all subsequent requests from
that client until it returned, which is quite undesirable.
So with this change, procedures may execute out of order with other
operations received on the same WebSocket. Hence if this is not a
desirable property, clients must enforce ordering themselves by waiting
for a response before submitting the next request.
## What changed?
### 1. Different instance managers for procedures and everything else
Procedures use a bounded instance pool where each instance is backed by
an isolate running in a thread. Reducers and all other operations are
serialized through an mpsc queue that feeds a single isolate running in
a thread.
Trapped isolates are replaced inline. Only a fatal error within one of
the instance threads results in the `ModuleHost` and all its connections
being dropped. The host controller will create a new `ModuleHost`
lazily on the next request.
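The serialization scheme can be sketched roughly like this (hypothetical types; the real host uses its own job and isolate machinery): everything except procedures flows through one queue into a single worker thread, which preserves enqueue order without completion handshakes.

```rust
use std::sync::mpsc;
use std::thread;

enum Job {
    Reducer(String),
}

fn spawn_worker<F>(mut handle: F) -> mpsc::Sender<Job>
where
    F: FnMut(Job) + Send + 'static,
{
    let (tx, rx) = mpsc::channel::<Job>();
    // Name the worker thread for better diagnostics.
    thread::Builder::new()
        .name("main-worker".into())
        .spawn(move || {
            // Jobs run strictly in the order they were enqueued;
            // callers return immediately after `tx.send(...)`.
            for job in rx {
                handle(job);
            }
        })
        .unwrap();
    tx
}
```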
### 2. New enqueue-only `ModuleHost` interface
`ClientConnection` now calls enqueue-only methods on `ModuleHost` which
return immediately after enqueuing on the main instance lane or in the
case of a procedure, checking out an available instance and starting the
operation.
### 3. Separate `ModuleHost` interfaces for scheduled reducers and
scheduled procedures
Scheduled reducers now target the main js instance/worker, while
scheduled procedures go through the pool. The scheduler now
distinguishes between reducers and procedures and calls the appropriate
method.
Note that the scheduler does not pipeline its operations. It waits for
each one to complete before scheduling the next. This means that a
long-running procedure will block all other operations from being
scheduled. This will need to be fixed at some point, but this patch
doesn't change the current behavior.
### 4. Misc
This patch also names the main js worker thread for better diagnostics.
It also disables core pinning by default and makes it an explicit
opt-in.
This last one is pretty important. The current architecture reduces
thread and context switching significantly such that naive core pinning
may perform worse than just deferring to the OS scheduler on certain
platforms. As it stands, the main motivation which led us to our
original core pinning strategy no longer exists, so we should probably
just defer to the OS until we've designed a proper scheduler that suits
our needs.
# API and ABI breaking changes
As mentioned above, with this change, procedures may execute out of
order with other operations received on the same WebSocket. Hence if
this is not a desirable property, clients must enforce ordering
themselves by waiting for a response before submitting the next request.
# Expected complexity level and risk
4
# Testing
This is mainly a performance oriented refactor, so no additional
correctness tests were added. However this patch does touch a lot of
code that could probably use more coverage in general. Benchmarks were
run to verify expected performance characteristics.
---------
Signed-off-by: joshua-spacetime <josh@clockworklabs.io>
Co-authored-by: Noa <coolreader18@gmail.com>
# Description of Changes
A user in the public Discord was confused about the intended way(s) to
interact with SpacetimeDB, and was attempting to (mis)use the HTTP API's
SQL endpoint in production rather than or in addition to the WebSocket
API.
After some discussion, we determined these additions to the
documentation would be helpful.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
N/a
# Description of Changes
This comes up when doing a fresh `dotnet test`, which causes `dotnet
test -warnaserror` to fail in CI:
```
➜ dotnet test -warnaserror
Determining projects to restore...
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Runtime/Runtime.csproj (in 126 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/BSATN.Runtime.csproj (in 133 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime.Tests/BSATN.Runtime.Tests.csproj (in 142 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Codegen/BSATN.Codegen.csproj (in 148 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen/Codegen.csproj (in 148 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen.Tests/Codegen.Tests.csproj (in 280 ms).
BSATN.Codegen -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Codegen/bin/Debug/netstandard2.0/SpacetimeDB.BSATN.Codegen.dll
BSATN.Runtime -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/bin/Debug/netstandard2.1/SpacetimeDB.BSATN.Runtime.dll
Codegen -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen/bin/Debug/netstandard2.0/SpacetimeDB.Codegen.dll
/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/QueryBuilder.cs(880,9): error CA1510: Use 'ArgumentNullException.ThrowIfNull' instead of explicitly throwing a new exception instance (https://learn.microsoft.com/dotnet/fundamentals/code-analysis/quality-rules/ca1510) [/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/BSATN.Runtime.csproj::TargetFramework=net8.0]
/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/Builtins.cs(3,1): error IDE0005: Using directive is unnecessary. (https://learn.microsoft.com/dotnet/fundamentals/code-analysis/style-rules/ide0005) [/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/BSATN.Runtime.csproj::TargetFramework=net8.0]
BSATN.Runtime -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/bin/Debug/net8.0/SpacetimeDB.BSATN.Runtime.dll
Codegen.Tests -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen.Tests/bin/Debug/net8.0/Codegen.Tests.dll
```
The fix is done by AI.
# API and ABI breaking changes
None. Behavior should be the same.
# Expected complexity level and risk
1
# Testing
`git clean -fdx . && dotnet test -warnaserror` no longer fails
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
This PR updates Rust SDK subscription unsubscribe handling to return an
error instead of panicking when the internal pending mutation channel is
closed.
## Motivation
`SubscriptionHandle::unsubscribe_then` currently calls `unwrap()` after
sending an unsubscribe mutation through the internal pending mutation
channel. If the connection or subscription manager has already shut
down, that send can fail with a disconnected channel error, causing the
application to panic.
I encountered this in [`bevy_stdb`](https://github.com/onx2/bevy_stdb),
which tracks subscriptions across reconnects and unsubscribes old
handles after applying a new subscription for the same key.
## Changes
- Replaces `unwrap()` on `pending_mutation_sender.unbounded_send(...)`
with error propagation.
- Returns `crate::Error::Internal(...)` when the unsubscribe mutation
cannot be sent.
- Updates local unsubscribe state only after the unsubscribe mutation is
successfully queued.
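The shape of the change can be sketched like this (the real code uses a `futures` unbounded sender; this self-contained sketch substitutes a std channel and hypothetical types):

```rust
use std::sync::mpsc::SyncSender; // stands in for the SDK's unbounded sender

#[derive(Debug)]
enum Error {
    Internal(String),
}

struct PendingMutation; // placeholder for the SDK's mutation type

/// Propagate a send failure instead of unwrapping it.
fn queue_unsubscribe(sender: &SyncSender<PendingMutation>) -> Result<(), Error> {
    sender
        .try_send(PendingMutation)
        .map_err(|e| Error::Internal(format!("failed to queue unsubscribe: {e}")))?;
    // Only after the mutation is successfully queued do we update
    // local unsubscribe state (elided here).
    Ok(())
}
```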
# API and ABI breaking changes
None expected.
This changes an internal error path from panicking to returning
`crate::Error::Internal(...)` through the existing `crate::Result<()>`
return type.
# Expected complexity level and risk
Complexity: 1/5
This is a small, localized change. The method already returns
`crate::Result<()>`, so replacing the `unwrap()` with error propagation
fits the existing API shape.
Risk is low. Callers now receive an error if the internal pending
mutation channel is closed instead of panicking. Existing `AlreadyEnded`
and `AlreadyUnsubscribed` behavior is unchanged.
# Testing
- [x] Verified that `unsubscribe_then` returns
`Err(crate::Error::Internal(...))` instead of panicking when the pending
mutation channel is disconnected.
- [ ] Verified that successful unsubscribe behavior is unchanged.
- [ ] Verified that `AlreadyEnded` is still returned when the
subscription has already ended.
- [ ] Verified that `AlreadyUnsubscribed` is still returned when
unsubscribe was already requested.
---------
Signed-off-by: Jeff Rooks <onx2rj@gmail.com>
# Description of Changes
Merged the `upgrade-version-check.yml` into `ci.yml`, and moved the
business logic under `cargo ci`.
I would also be very open to just removing this test until we choose to
define a better suite of tests for `cargo bump-version`.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [x] Ran it locally. It made a diff
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
https://github.com/clockworklabs/SpacetimeDB/pull/4231 changed our CI to
always pass a parameter corresponding to the PR number, which broke on
`master` commits since they don't have a PR number.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1.
# Testing
I'm not sure how to test this. But it's basically the old behavior on
`master` commits, so it should work fine? One hopes?
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
## Summary
Prevents `spacetime version uninstall <ver>` from showing a confirmation
prompt when the version isn't installed, which previously resulted in a
cryptic "No such file or directory (os error 2)" error after the user
confirmed.
## Changes
### Bug fix
- Check if the version directory exists **before** showing the y/N
prompt
- Return a clear error message: `v{version} is not installed`
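The fix can be sketched as follows (a hypothetical helper; the real CLI code differs in structure):

```rust
use std::io::{self, Write};
use std::path::Path;

/// Bail out with a clear error *before* any confirmation prompt if the
/// version directory does not exist.
fn uninstall(versions_dir: &Path, version: &str) -> Result<(), String> {
    let dir = versions_dir.join(version);
    if !dir.exists() {
        return Err(format!("v{version} is not installed"));
    }
    // Only now show the y/N prompt and remove the directory.
    print!("Uninstall v{version}? [y/N] ");
    io::stdout().flush().ok();
    // ... read the answer, then `std::fs::remove_dir_all(&dir)` ...
    Ok(())
}
```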
### Tests (4 unit tests)
- `test_uninstall_nonexistent_version_errors_before_prompt` — confirms
error fires before prompt for missing versions
- `test_uninstall_current_version_errors` — confirms you can't uninstall
the active version
- `test_uninstall_current_keyword_errors` — confirms the literal string
"current" is rejected
- `test_uninstall_existing_version_with_yes` — confirms normal uninstall
flow works
## Verification
```
cargo check -p spacetimedb-update
cargo clippy -p spacetimedb-update -- -D warnings
cargo test -p spacetimedb-update uninstall
```
## Reproduction
Before this fix:
```
$ spacetime version uninstall 2.0.3
Uninstall v2.0.3? yes
Error: No such file or directory (os error 2)
```
After this fix:
```
$ spacetime version uninstall 2.0.3
Error: v2.0.3 is not installed
```
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
## Summary
- add `ReducerContext::database_identity()` in Rust bindings
- deprecate `ReducerContext::identity()` and keep it as a compatibility
alias
- update reducer docs example to use `ctx.database_identity()`
- add C# reducer-context equivalent: `DatabaseIdentity` and obsolete
`Identity` alias
- update Rust/C# module test callsites to the new API name
- update C# codegen snapshots for generated `ReducerContext` API output
## Why
Issue #3201 reports user confusion between reducer `ctx.identity()`
(database/module identity) and `ctx.sender`. This change clarifies
naming while preserving compatibility.
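On the Rust side, the rename-with-compatibility-alias pattern looks roughly like this (method names match the PR; the struct body is a simplified stand-in):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Identity(u128);

pub struct ReducerContext {
    db_identity: Identity,
    pub sender: Identity,
}

impl ReducerContext {
    /// The identity of the database (module) itself.
    pub fn database_identity(&self) -> Identity {
        self.db_identity
    }

    /// Compatibility alias; prefer `database_identity()`, which is
    /// harder to confuse with `ctx.sender`.
    #[deprecated(note = "use `database_identity()` instead")]
    pub fn identity(&self) -> Identity {
        self.database_identity()
    }
}
```

Existing callers keep compiling with a deprecation warning instead of breaking.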
## Validation
- `cargo check -p spacetimedb -p module-test` (Passed)
- `dotnet test crates/bindings-csharp/Codegen.Tests/Codegen.Tests.csproj
--nologo` (Passed)
- `dotnet test crates/bindings-csharp/Runtime.Tests/Runtime.Tests.csproj
--nologo` (Failed) pre-existing unrelated failure:
- `Runtime.Tests/JwtClaimsTest.cs(10,23): CS1729: 'JwtClaims' does not
contain a constructor that takes 2 arguments`
## Compatibility
- Rust: `identity()` still works but is deprecated in favor of
`database_identity()`.
- C#: `Identity` still works but is marked `[Obsolete]` in favor of
`DatabaseIdentity`.
Closes #3201
# Description of Changes
This patch improves the situation where you need a function that is
generic over multiple `ctx` types.
The `DbContext` trait was introduced for this, but due to the
associated-type ambiguity when trying to implement against it, another
method is needed.
This PR implements a `db_read_only()` method which always returns a
`LocalReadOnly`, so Rust can infer the returned type (which prior to
this was one of `__view`, `__query`, or a third one I can't remember
right now 😓).
I have chosen to be defensive by implementing it as `unstable`, but I
can also remove this if that is desired.
It allows the following pattern, which has been working great in my
project, so it's time to contribute it :>
```rust
pub(crate) trait YourTableNameRead {
    // Your methods which only need read access.
    fn test(&self, args: Args);
}

// The read version is a supertrait to give access to the read methods.
pub(crate) trait YourTableNameWrite: YourTableNameRead {
    // Your methods which need read-write access.
}

// The read version gets implemented for every DbContext since we can always read.
impl<Db: DbContext> YourTableNameRead for Db {
    fn test(&self, args: Args) {
        self.db_read_only().table_name().whatever(args);
    }
}

// By constraining the associated type to Local we only get this for writeable ctxs.
impl<Db: DbContext<DbView = Local>> YourTableNameWrite for Db {}
```
These allow you to do on the calling site:
```rust
use YourTableNameRead;

// The attribute and context type vary: #[view], #[reducer], or
// #[procedure], with the matching ctx type.
#[reducer]
fn my_func(ctx: &ReducerContext, args: Args) {
    ctx.test(args);
}
```
# API and ABI breaking changes
None
# Expected complexity level and risk
1. Minor API and QoL addition, which is furthermore unstable.
# Testing
- [x] Works in my project
---------
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
This pull request improves the handling of connection lifecycle events
in the Rust client SDK for SpacetimeDB, particularly distinguishing
between connection failures and disconnections. It introduces a new
`ConnectionLifecycle` state machine to track connection progress and
ensures that the correct callback (`on_connect_error` or
`on_disconnect`) is invoked based on the connection state.
**Changes**
* `ConnectionLifecycle` enum to track the connection state
(`Connecting`, `Connected`, `Ended`)
* Refactored error handling so that if a connection fails before
establishment, the `on_connect_error` callback is invoked; if the
connection fails after establishment, the `on_disconnect` callback is
invoked. See `end_connection`.
* Updated where disconnections are handled
(`advance_one_message_blocking`, `advance_one_message_async`, and
message processing) to use `finish_connection`
* Improved handling of user-initiated disconnects during the connection
process to avoid reporting them as connection errors and to ensure
proper cleanup.
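A minimal sketch of the state machine (the callback dispatch here is hypothetical wiring, not the SDK's actual code): the state at the moment a connection ends decides which callback fires.

```rust
#[derive(Debug, PartialEq)]
enum ConnectionLifecycle {
    Connecting,
    Connected,
    Ended,
}

#[derive(Debug, PartialEq)]
enum Callback {
    OnConnectError,
    OnDisconnect,
    None,
}

fn end_connection(state: &mut ConnectionLifecycle) -> Callback {
    let cb = match state {
        // Failed before the connection was established.
        ConnectionLifecycle::Connecting => Callback::OnConnectError,
        // Failed (or closed) after establishment.
        ConnectionLifecycle::Connected => Callback::OnDisconnect,
        // Already ended: don't fire anything twice.
        ConnectionLifecycle::Ended => Callback::None,
    };
    *state = ConnectionLifecycle::Ended;
    cb
}
```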
# API and ABI breaking changes
Possibly: if clients relied on connection failures firing
`on_disconnect` rather than `on_connect_error`, this changes that
behavior.
# Expected complexity level and risk
Maybe a 2? Seems pretty low risk but I'm still new to the codebase,
please double check.
This doesn't fix the websocket issues, that'll be for another day. I
noticed websocket.rs has some places it just drops and the error isn't
handled properly. We could technically surface that information and run
our callbacks with more specific error messages.
# Testing
I had an agent build and run loads of tests for this but didn't commit
those since it would have made the PR massive. I was planning on testing
locally though to see if I could trigger a connection failure at some
point, maybe via an invalid access token.
# Description of Changes
Add documentation for how to deploy on Railway under "Hosting" in the
docs
https://github.com/user-attachments/assets/b36a7f53-e0db-4fa3-8b0e-65930c71b3a8
# API and ABI breaking changes
# Expected complexity level and risk
1 - just a docs change
# Testing
Ran the docs locally and verified the page looked good and that the
content is correct. See video above.
# Description of Changes
Just removing an old script that we would like to maintain via a cargo
subcommand now
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
- Bumps version to 2.2.0
# API and ABI breaking changes
None
# Expected complexity level and risk
- 1 - this is just a version bump
# Testing
- [x] Version number is correct (`2.2.0`)
- [x] BSL license file has been updated with the new date and version
number
---------
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
As reported in #4886, `metadata.toml` can get lost if the server
crashes at an unfortunate point in time. To mitigate that, make
`path_type::write` replace the file atomically, and issue `fsync` on the
file and the enclosing directory.
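The atomic-replace pattern looks roughly like this (a sketch; the real `path_type::write` may differ in details): write to a temp file, fsync it, rename it into place, then fsync the directory so the rename itself is durable.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

fn write_atomically(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().expect("path has a parent directory");
    let tmp = path.with_extension("tmp");

    let mut f = File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // data + metadata reach disk before the rename

    fs::rename(&tmp, path)?; // atomic replace on POSIX filesystems

    // fsync the directory so the new directory entry is durable too.
    #[cfg(unix)]
    File::open(dir)?.sync_all()?;
    #[cfg(not(unix))]
    let _ = dir;
    Ok(())
}
```

A crash before the rename leaves the old file intact; a crash after leaves the new file intact, so no window exists where the file is missing or half-written.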
# Expected complexity level and risk
1
# Description of Changes
Disclaimer: This description was written by claude:
The two `.leader()` call sites in
[`client-api/src/routes/database.rs`](crates/client-api/src/routes/database.rs)
and
[`client-api/src/routes/subscribe.rs`](crates/client-api/src/routes/subscribe.rs)
pipe `GetLeaderHostError` through `log_and_500`, which:
1. Logs every error variant at `error` level — including `Suspended`,
`Bootstrapping`, `NoLeader`, etc., which are normal operational states.
This produces noisy log lines like:
```
ERROR /app/.../client-api/src/lib.rs:623: internal error: database is
suspended
```
2. Forces every such error into a **500 Internal Server Error**
response, even when the appropriate status code is something else (e.g.
503 Service Unavailable for a suspended database).
`GetLeaderHostError` already implements
`Into<axum::response::ErrorResponse>` with the correct per-variant
mapping:
| Variant | Status |
|---|---|
| `NoSuchDatabase` | 404 Not Found |
| `LaunchError`, `Misdirected` | 500 Internal Server Error |
| `NoNodeId`, `NoLeader`, `ControlConnection`, `Suspended`,
`Bootstrapping` | 503 Service Unavailable |
The standalone implementation already uses `?`-propagation directly.
This PR makes the two client-api call sites match that pattern.
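The per-variant mapping from the table above can be encoded as a simple match (variant names from the PR; the enum body here is a simplified stand-in for the real error type):

```rust
enum GetLeaderHostError {
    NoSuchDatabase,
    LaunchError,
    Misdirected,
    NoNodeId,
    NoLeader,
    ControlConnection,
    Suspended,
    Bootstrapping,
}

fn status_code(err: &GetLeaderHostError) -> u16 {
    use GetLeaderHostError::*;
    match err {
        NoSuchDatabase => 404,
        LaunchError | Misdirected => 500,
        NoNodeId | NoLeader | ControlConnection | Suspended | Bootstrapping => 503,
    }
}
```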
## Result
- Suspended / bootstrapping / no-leader databases now return **503**
instead of 500.
- These expected states no longer produce `error`-level log spam in the
request path. Genuinely unexpected internal errors elsewhere in the
codebase continue to log via `log_and_500` unchanged.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [ ] Deploy to staging to see if we still see this error when trying to
access a suspended database
# Description of Changes
Add EV code signing for Windows CLI binaries using DigiCert KeyLocker.
The workflow now signs `spacetimedb-update.exe`, `spacetimedb-cli.exe`,
and `spacetimedb-standalone.exe` on tag pushes using `smctl sign` with a
cloud HSM-backed certificate.
These changes reflect the updated DigiCert guidance for code signing
through GitHub found here:
https://github.com/marketplace/actions/digicert-binary-signing
# API and ABI breaking changes
No API or ABI changes. This change only affects the CI/CD packaging
workflow.
# Expected complexity level and risk
1 - This PR only adds code signing to existing CI packaging. Risk is
limited to the Windows packaging step failing on tags; Linux and macOS
builds are unaffected.
# Testing
- [X] Tested via workflow dispatch on tag `test-signing-v0.0.1`
- [X] All three executables signed and verified successfully
- [X] Signature verification confirms certificate chain
- [X] Signed artifacts uploaded successfully
# Description of Changes
`UClientCache::ApplyDiff` (`sdks/unreal/.../DBCache/ClientCache.h`) has
an asymmetry between its insert and delete paths.
**Phase 1 (deletes)** correctly only emits a `Diff.Deletes` entry when
the row's refcount transitions to 0 — overlapping subscriptions just
decrement.
**Phase 2 (inserts)** always appends to `Diff.Inserts`, regardless of
whether the row was already cached:
```cpp
// before
FRowEntry<RowType>* Entry = Table->Entries.Find(Key);
if (!Entry) { /* refcount = 1 */ }
else { /* refcount + 1 */ }
Diff.Inserts.Add(Key, *NewRow); // ← fires for both branches
```
`BroadcastDiff` then fires `OnInsert` for every `Diff.Inserts` entry, so
any table subscribed by two overlapping queries (e.g. a global `SELECT *
FROM t` plus a per-row `WHERE id = ...`) re-fires its insert handler on
every later subscription apply — once per cached row, every time. Game
code that does work in `OnInsert` (positioning, spawning, snapping to
terrain) re-runs and clobbers state that was meant to be set once.
The intent is documented in `RowEntry.h`: *"Wrapper storing a row value
with a reference count used by overlapping subscriptions."* Phase 1
follows that design; phase 2 doesn't.
### Fix
Move the `Diff.Inserts.Add` into the `!Entry` branch only, so it fires
only on the absent → refcount=1 transition:
```cpp
if (!Entry) {
Table->Entries.Add(Key, FRowEntry<RowType>{NewRow, 1});
Diff.Inserts.Add(Key, *NewRow);
}
else {
Table->Entries.Add(Key, FRowEntry<RowType>{NewRow, Entry->RefCount + 1});
}
```
### Why real updates still work
Cache keys are BSATN row-bytes, not primary keys. A real update arrives
as a `(delete old_bytes, insert new_bytes)` pair where `old_bytes ≠
new_bytes` — so the insert side still takes the `!Entry` branch and gets
a `Diff.Inserts` entry. `FTableAppliedDiff::DeriveUpdatesByPrimaryKey`
then pairs the delete and insert by PK into
`UpdateInserts`/`UpdateDeletes`, and `OnUpdate` (not `OnInsert`) fires,
exactly as today.
Edge cases:
| Scenario | Phase 2 branch | Result | Correct? |
|---|---|---|---|
| New row | `!Entry` | `OnInsert` | ✓ |
| Real update (different bytes) | `!Entry` | `OnInsert`+`OnDelete` reconciled to `OnUpdate` by PK | ✓ |
| Overlapping sub re-delivers cached row | `else` | refcount bump, no event | ✓ (was broken — fired duplicate `OnInsert`) |
| Trivial update (identical bytes) | `else` | refcount bump | irrelevant — server diffs identical rows away before emitting |
# API and ABI breaking changes
None. Purely internal cache bookkeeping. Existing
`OnInsert`/`OnDelete`/`OnUpdate` semantics are preserved for all
non-overlapping cases; the only behavior change is that overlapping
subscriptions stop emitting duplicate `OnInsert` events for
already-cached rows — which matches the documented `RowEntry` refcount
contract.
# Expected complexity level and risk
**1.** Two lines moved into a branch; comments updated. Mirrors logic
already present and known-correct in phase 1.
# Testing
Reproduced and validated downstream in an Unreal project. The repro
setup is straightforward to replicate against any module:
1. A table `t` with ~150 rows.
2. A global subscription `SELECT * FROM t`, applied first.
3. A diagnostic actor that binds `t.OnInsert`, then every 10s submits a
new overlapping subscription (e.g. `SELECT * FROM t` again, or any
`SELECT * FROM t WHERE 'id' = X` covering already-cached rows) and
counts the `OnInsert` events that arrive in that round.
Expected: round 1 fires once per row that is *new to the cache*;
subsequent rounds against already-cached rows fire 0.
Observed (~161 rows cached after initial load):
```
Pre-fix Post-fix
Global sub (empty → 161) 161 161 ← genuine inserts; unchanged
Round 1 (overlapping) 134 0 ← 134 cached dupes
Round 2 (overlapping) 134 0
Round 3 (overlapping) 134 0
Round 4–6 (overlapping) 134 each 0 each
```
The genuine "empty cache → 161 entries" wave on the initial global
subscription is unaffected — same `OnInsert` count both pre- and
post-fix. Only the duplicate fires from later overlapping subscriptions
on already-cached rows are eliminated. `OnUpdate` still fires correctly
when underlying rows actually change.
- [x] Reviewer: confirm an existing real-update (different bytes, same
PK) test still produces `OnUpdate` and not `OnInsert`+`OnDelete`.
- [x] Reviewer: confirm any test that relies on `OnInsert` firing on
every subscription apply (rather than only on cache 0→1 transition) is
not present — if it exists, it was relying on the bug.
# Description of Changes
Tests for case conversion.
# API and ABI breaking changes
NA
# Expected complexity level and risk
1
---------
Co-authored-by: clockwork-labs-bot <bot@clockworklabs.com>
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
The update-version tool pins the version in
`/templates/basic-cpp/spacetimedb/CMakeLists.txt`, but as we're in the
middle of a release this can block the smoketest. This PR changes the
smoketest to dynamically use the `latest` release, allowing
release-night smoketests to work correctly.
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1 - Small smoketest change
# Testing
- [x] Ran the C++ quickstart smoketest locally
- [x] Updated local template and tested with / without the change
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
When creating or compressing a snapshot, `fsync` all files and
directories, so as to ensure that the snapshot is durable on the local
disk.
This obviously amounts to a large number of `fsync` calls, which may
negatively impact performance of taking a snapshot -- since we hold a
transaction lock while taking a snapshot, this is not to be taken
lightly.
# Expected complexity level and risk
3 -- performance impact
# Testing
I haven't quantified the performance impact.
# Description of Changes
[Some
runtimes](https://developer.mozilla.org/en-US/docs/Web/API/DecompressionStream#browser_compatibility)
support brotli for `DecompressionStream`, so I figure we may as
well allow it. Also reorganizes some of the websocket code for better
separation of concerns.
# Expected complexity level and risk
1
# Testing
# Description of Changes
This was prompted by another request on the Discord. Conversation
context:
https://discord.com/channels/1037340874172014652/1138987509834059867/1496964407278698656
We've had the ticket to make `--yes` take an enum value:
https://github.com/clockworklabs/spacetimedb/issues/3784 . Since we can
do this in a non-API/ABI breaking way we're just implementing this
ticket.
*Disclaimer: I used claude to write ~90% of this PR.*
# API and ABI breaking changes
None. `--yes` now takes an optional argument string that we parse;
existing CLI commands that use `--yes` are unaffected.
# Expected complexity level and risk
1 - This is just modifying the `--yes` argument and I've included tests
in the PR.
# Testing
- [x] `spacetime publish --yes` still works as expected
- [x] `spacetime publish --yes=all` does the same thing as `spacetime
publish --yes`
- [x] `spacetime publish --yes=remote` only skips asking if publishing
to a remote server is ok.
- [x] `spacetime publish --yes=migrate|remote` skips both the migrate
prompt and the remote server prompt.
- [x] new tests are passing
# Description of Changes
Make Timestamp a FilterableValue in Rust, C#, and TypeScript. I'm not
sure this changes every necessary place, because we have both server
and client implementations in those three languages.
# API and ABI breaking changes
It's an additive change.
# Expected complexity level and risk
3. There are some design decisions involved, like whether timestamps
can be compared to strings/numbers.
# Testing
Added unit tests for the 3 languages.
# Description of Changes
This API used to be unimplemented, and the SDK tests did not exercise
it. Now it is implemented, but while playing with blackholio I noticed
the C# implementation was wrong.
For now I am going to fix blackholio by avoiding this API, but we
should also correct the implementation and test it.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
Working on adding tests. If someone is more familiar with the SDK tests
I would appreciate help amending them.
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
Fix the `spacetime start --listen-addr` help text to match the actual
default value.
The flag already defaults to `0.0.0.0:3000`, but the help text
incorrectly said port 80.
# API and ABI breaking changes
None.
# Expected complexity level and risk
Complexity 1.
# Testing
- [x] `cargo fmt --all --check`
- [x] `cargo check -p spacetimedb-standalone`
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
# Description of Changes
- Updated the Unreal SDK test harness to allow Unreal Editor to work
on macOS
- Updated the Unreal SDK test handler to work with Nil as it's a special
case
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1 - Small changes to the Unreal SDK tests
# Testing
- [x] Ran full suite of tests on Mac for Unreal SDK
- [x] Ran full suite of tests on Windows + Linux for Unreal SDK to
confirm no regression
---------
Co-authored-by: Jason Larabie <jasonlarabie@Mac.lan>
# Description of Changes
Reapply changes from #4515 after reversion
# API and ABI breaking changes
No API or ABI changes
# Expected complexity level and risk
2 - The change itself is trivial, as it just reimplements #4515;
however, since #4515 had broken the `quickstart` smoketest, this
should be considered when reviewing this PR.
# Testing
- [X] Tested against `python3 -m smoketests quickstart` locally
---------
Signed-off-by: Ryan <r.ekhoff@clockworklabs.io>
Co-authored-by: Tyler Cloutier <cloutiertyler@aol.com>
Co-authored-by: Jason Larabie <jason@clockworklabs.io>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
Remove the Python smoketests and the CI check that tests for edits.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [ ] All CI passes
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
See the corresponding commits/PRs for descriptions.
# API and ABI breaking changes
None
# Expected complexity level and risk
1 -- individual PRs already reviewed.
# Testing
No semantic changes.
# Description of Changes
This adds an `enabled` option to the `useTable` hook in the React SDK,
allowing devs to conditionally disable a subscription without having to
wrap it in a component that is conditionally mounted.
With `enabled`, you can control the subscription lifecycle like so:
```tsx
const [rows, isReady] = useTable(tables.messages, { enabled: isChatOpen });
```
This follows a similar pattern to other data hooks (React Query's
`useQuery`, SWR, Apollo's `useSubscription`, etc.).
When `enabled` is `false`:
- `computeSnapshot` returns `[[], true]` immediately (no data, ready
state)
- The subscription effect skips setup and resets `subscribeApplied`
- The event listener callback returns a no-op cleanup
When `enabled` flips back to `true`, the subscription is re-established
automatically via the dependency arrays.
# API and ABI breaking changes
None. The `enabled` field is optional and defaults to `true`, so
existing usage is unaffected.
# Expected complexity level and risk
1 — Single file change, additive only. The `enabled` flag gates existing
behavior behind early returns and is wired into the existing dependency
arrays. No interaction with other components.
# Testing
- [x] Verify `useTable(tables.foo)` works as before (no `enabled` option
passed)
- [x] Verify `useTable(tables.foo, { enabled: false })` returns `[[],
true]` and does not subscribe
- [x] Verify toggling `enabled` from `false` to `true` establishes the
subscription and returns rows
- [x] Verify toggling `enabled` from `true` to `false` clears rows and
unsubscribes
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
# Description of Changes
Users have an odd tendency to overuse this param, which is really meant
for testing and some exceptional circumstances.
This PR hides it from the help so that it is less discoverable.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] Helptext no longer includes `--server-issued-login`
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Resolves #3652
# Description of Changes
Update the docs' nginx configuration to better support `spacetime logs
--follow` when self-hosting is configured to use SSL.
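A hedged sketch of the kind of `location` block that helps long-lived streaming endpoints behind nginx; the path, port, and upstream are assumptions, not the repo's actual configuration:

```nginx
location /v1/database/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;    # flush log lines to the client immediately
    proxy_cache off;
    proxy_read_timeout 1h;  # don't drop quiet --follow streams at 60s
}
```

`proxy_buffering off` is what makes logs appear immediately, and the raised `proxy_read_timeout` is what keeps an idle `--follow` stream alive past nginx's 60-second default.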
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
I have added this location to my own nginx configuration and confirmed I
am able to `spacetime logs -s some-https-server <my-module> --follow`
and see logs immediately and continue following even if no log is
produced after 1 minute.
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
# Description of Changes
`MutTxId::add_columns_to_table` creates a new table but only copies
sequences into the in-memory state, which causes `autoinc` columns to
reset on module restart.
The existing implementation relies on `create_table_and_update_seq`
helper, which only updates the sequence state in memory. This change
ensures that the `allocation` is also persisted to the system table,
keeping it consistent across restarts.
# API and ABI breaking changes
NA
# Expected complexity level and risk
2
# Testing
Added a test, which migrates a table and checks the `autoinc` column
value both without and with a restart.
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>