# Description of Changes
CI was running `gen-quickstart.sh` and then checking for a diff, but it
was checking in the wrong directory.
I have also regenerated the files because the fixed check was failing.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] CI passes
- [x] updated CI failed without the changes to the other files
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Adds a way to pass a "closure" when compressing a commitlog segment,
such that the source and destination are polymorphic. This allows them
to be wrapped externally, which is used to apply bandwidth limiting
where necessary.
Also return bytes in + bytes out stats from the compressor.
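A minimal sketch of what such an API shape could look like. The function name, signatures, and pass-through "compression" below are assumptions for illustration, not the actual commitlog code: the point is that the caller supplies closures that wrap the raw source and destination, and gets byte counts back.

```rust
use std::io::{self, Cursor, Read, Write};

// Hypothetical shape of the compressor entry point: the caller passes
// closures that wrap the raw source and destination (e.g. with a
// bandwidth-limiting adapter), and gets back (bytes_in, bytes_out).
fn compress_segment<R, W, R2, W2>(
    src: R,
    dst: W,
    wrap_src: impl FnOnce(R) -> R2,
    wrap_dst: impl FnOnce(W) -> W2,
) -> io::Result<(u64, u64)>
where
    R: Read,
    W: Write,
    R2: Read,
    W2: Write,
{
    let mut src = wrap_src(src);
    let mut dst = wrap_dst(dst);
    // A pass-through copy stands in for the actual compression step, so
    // bytes_in == bytes_out here; with real compression they would differ.
    let mut buf = [0u8; 8192];
    let (mut bytes_in, mut bytes_out) = (0u64, 0u64);
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            break;
        }
        dst.write_all(&buf[..n])?;
        bytes_in += n as u64;
        bytes_out += n as u64;
    }
    dst.flush()?;
    Ok((bytes_in, bytes_out))
}

fn main() -> io::Result<()> {
    // Identity closures; a real caller could return throttling wrappers here.
    let (bytes_in, bytes_out) =
        compress_segment(Cursor::new(b"hello commitlog".to_vec()), Vec::new(), |r| r, |w| w)?;
    assert_eq!((bytes_in, bytes_out), (15, 15));
    Ok(())
}
```

Because the wrapping closures are generic, callers that don't need throttling pass identity closures and pay nothing for the flexibility.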
# Expected complexity level and risk
1
# Testing
Elsewhere.
# Description of Changes
This is a fix for #4959.
Unity 6 upgraded to a newer Emscripten version that removed the
`dynCall()` library function. The SpacetimeDB C# SDK's
`WebSocket.jslib`
plugin used `dynCall()` in 6 places to invoke C# callbacks from
JavaScript WebSocket events.
This PR adds a `$WebSocketDynCall` helper function that detects which
API is available at runtime:
- On Unity 6+: uses `getWasmTableEntry(ptr).apply(null, args)` (direct
WASM function table access)
- On Unity 2022 and earlier: uses the legacy `dynCall(sig, ptr, args)`
All `dynCall` call sites in `WebSocket.jslib` now route through this
helper, maintaining backward compatibility.
# API and ABI breaking changes
No breaking changes. This is a backward-compatible fix that works on
both older and newer Unity versions.
# Expected complexity level and risk
1. Very low risk. The change is isolated to the WebGL `jslib` plugin and
uses runtime feature detection to choose the appropriate calling
mechanism. Both code paths are well-tested:
- Unity 2022.3.62f2: uses legacy `dynCall` path
- Unity 6000.4.5f1: uses `getWasmTableEntry` path
# Testing
- [x] Create test server module with reducers and tables
- [x] Unity 2022.3.62f2 Editor: Connect and subscribe works
- [x] Unity 2022.3.62f2 WebGL Build: Connect and subscribe works
- [x] Unity 6000.4.5f1 Editor: Connect and subscribe works
- [x] Unity 6000.4.5f1 WebGL Build: Connect and subscribe works (was
previously failing with `dynCall is not defined`)
# Description of Changes
Defers commitlog segment compression until write load has been idle for
a short window, while still forcing progress when the uncompressed
segment backlog grows too large.
This aims to avoid building up a durability backlog that eventually
blocks the main execution thread.
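The decision logic described above can be reduced to a small predicate. This is an illustrative sketch only; the window and backlog thresholds are made-up values, not the ones used in the actual patch.

```rust
use std::time::Duration;

// Illustrative thresholds, not the real configuration values.
const IDLE_WINDOW: Duration = Duration::from_secs(5);
const MAX_BACKLOG_SEGMENTS: usize = 8;

/// Compress when writes have been idle long enough, but force progress
/// once the uncompressed segment backlog grows too large.
fn should_compress(idle_for: Duration, backlog_segments: usize) -> bool {
    idle_for >= IDLE_WINDOW || backlog_segments >= MAX_BACKLOG_SEGMENTS
}

fn main() {
    // Under write load with a small backlog: defer compression.
    assert!(!should_compress(Duration::from_millis(100), 2));
    // Idle for longer than the window: compress.
    assert!(should_compress(Duration::from_secs(6), 2));
    // Backlog too large: compress even while writes are still coming in.
    assert!(should_compress(Duration::from_millis(100), 10));
}
```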
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
TODO
# Description of Changes
Update keynote readme with updated benchmark figures
# API and ABI breaking changes
N/A
# Expected complexity level and risk
0
# Testing
N/A
In #4338, the read-only path was made resilient against empty segments
at the end of the log, but corresponding logic was not applied to
re-opening the commitlog for writing.
This patch rectifies that by ignoring and removing segments from the
tail of the log if they contain `segment::Header::LEN` bytes or fewer.
Additionally, zero-sized segments are eliminated entirely by ensuring
that the header is written before moving the segment into place
atomically. The benefit of this is not huge, but it could simplify
commitlog-consuming code by removing the need to worry about empty
(zero-sized) segments. Happy to revert if that is deemed too small a
benefit.
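The tail-pruning rule can be sketched as follows. `HEADER_LEN` here is an assumed stand-in for `segment::Header::LEN`, and the list-of-lengths representation is a simplification of the on-disk segment files.

```rust
// Illustrative only: HEADER_LEN stands in for `segment::Header::LEN`.
const HEADER_LEN: u64 = 32;

/// Drop segments from the tail of the log that hold at most a header
/// (i.e. contain no commits), mirroring the recovery behavior described
/// above. `segment_lens` is the on-disk size of each segment, in order.
fn prune_empty_tail(mut segment_lens: Vec<u64>) -> Vec<u64> {
    while matches!(segment_lens.last(), Some(&len) if len <= HEADER_LEN) {
        segment_lens.pop();
    }
    segment_lens
}

fn main() {
    // Header-only and zero-sized segments at the tail are removed;
    // interior segments are untouched.
    assert_eq!(prune_empty_tail(vec![1024, 512, 32, 0]), vec![1024, 512]);
    assert_eq!(prune_empty_tail(vec![1024, 512]), vec![1024, 512]);
}
```

Note that only the tail is pruned: an empty segment in the middle of the log would indicate corruption rather than an interrupted write, so it is not silently removed.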
# Expected complexity level and risk
2
# Testing
Adds a test.
# Description of Changes
The core motivation for this change is simple: avoid cross-thread
handoffs and synchronization on the main execution path.
Before this change, the ingress task for each websocket connection would
wait for a completion response on each request before submitting the
next request to the database. This was mainly used to guarantee that we
delivered message responses in receive-order per connection. However it
also meant that for every request, we notified a waiting Tokio task,
which potentially incurred kernel-assisted wakeup and scheduler
overhead.
Note this design existed mainly for historical reasons. Before the
database had a dedicated job thread, requests were not serialized
through a single queue. The module instance was gated behind a semaphore
which guaranteed mutual exclusion, but it did not guarantee FIFO
ordering. Awaiting the completion of each request in `ws_recv_task` was
therefore the mechanism that enforced per-connection receive-order
semantics. However it now serves primarily as a source of overhead.
Procedures are the important exception. They are not serialized through
the main worker queue. Instead they use their own instance pool so as to
be able to run concurrently with other requests. However procedures may
be composed of multiple transactions and they may effectively yield
between transactions. This means that before this change, if a procedure
were to yield, it would effectively block all subsequent requests from
that client until it returned, which is quite undesirable.
So with this change, procedures may execute out of order with other
operations received on the same WebSocket. If this is not a desirable
property, clients must enforce ordering themselves by waiting for a
response before submitting the next request.
## What changed?
### 1. Different instance managers for procedures and everything else
Procedures use a bounded instance pool where each instance is backed by
an isolate running in a thread. Reducers and all other operations are
serialized through an mpsc queue that feeds a single isolate running in
a thread.
Trapped isolates are replaced inline. Only a fatal error within one of
the instance threads results in the `ModuleHost` and all its connections
being dropped. The host controller will recreate a new `ModuleHost`
lazily on the next request.
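The serialization scheme for non-procedure operations can be sketched with a plain mpsc channel. The types below are illustrative, not the real `ModuleHost` internals: the point is that one queue feeding one worker thread gives FIFO ordering per connection without a cross-thread completion handoff per request.

```rust
use std::sync::mpsc;
use std::thread;

// The single worker drains its queue strictly in receive order, which is
// what preserves per-connection ordering for reducers and other
// non-procedure operations.
fn run_worker(jobs: mpsc::Receiver<u32>) -> Vec<u32> {
    jobs.into_iter().collect()
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || run_worker(rx));
    for request in 0..4 {
        // Enqueue-only: the sender returns immediately, without awaiting
        // completion of the previous request.
        tx.send(request).unwrap();
    }
    drop(tx); // close the queue so the worker terminates
    assert_eq!(worker.join().unwrap(), vec![0, 1, 2, 3]);
}
```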
### 2. New enqueue-only `ModuleHost` interface
`ClientConnection` now calls enqueue-only methods on `ModuleHost` which
return immediately after enqueuing on the main instance lane or in the
case of a procedure, checking out an available instance and starting the
operation.
### 3. Separate `ModuleHost` interfaces for scheduled reducers and
scheduled procedures
Scheduled reducers now target the main js instance/worker, while
scheduled procedures go through the pool. The scheduler now
distinguishes between reducers and procedures and calls the appropriate
method.
Note, the scheduler does not pipeline its operations. It waits for each
one to complete before scheduling the next operation. This means that a
long running procedure will block all other operations from being
scheduled. This will need to be fixed at some point, but this patch
doesn't change the current behavior.
### 4. Misc
This patch also names the main js worker thread for better diagnostics.
It also disables core pinning by default and makes it an explicit
opt-in.
This last one is pretty important. The current architecture reduces
thread and context switching significantly such that naive core pinning
may perform worse than just deferring to the OS scheduler on certain
platforms. As it stands, the main motivation which led us to our
original core pinning strategy no longer exists, so we should probably
just defer to the OS until we've designed a proper scheduler that suits
our needs.
# API and ABI breaking changes
As mentioned above, with this change, procedures may execute out of
order with other operations received on the same WebSocket. Hence if
this is not a desirable property, clients must enforce ordering
themselves by waiting for a response before submitting the next request.
# Expected complexity level and risk
4
# Testing
This is mainly a performance oriented refactor, so no additional
correctness tests were added. However this patch does touch a lot of
code that could probably use more coverage in general. Benchmarks were
run to verify expected performance characteristics.
---------
Signed-off-by: joshua-spacetime <josh@clockworklabs.io>
Co-authored-by: Noa <coolreader18@gmail.com>
# Description of Changes
A user in the public Discord was confused about the intended way(s) to
interact with SpacetimeDB, and was attempting to (mis)use the HTTP API's
SQL endpoint in production rather than or in addition to the WebSocket
API.
After some discussion, we determined these additions to the
documentation would be helpful.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
N/a
# Description of Changes
This comes up when doing a fresh `dotnet test`, which causes `dotnet
test -warnaserror` to fail in CI:
```
➜ dotnet test -warnaserror
Determining projects to restore...
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Runtime/Runtime.csproj (in 126 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/BSATN.Runtime.csproj (in 133 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime.Tests/BSATN.Runtime.Tests.csproj (in 142 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Codegen/BSATN.Codegen.csproj (in 148 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen/Codegen.csproj (in 148 ms).
Restored /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen.Tests/Codegen.Tests.csproj (in 280 ms).
BSATN.Codegen -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Codegen/bin/Debug/netstandard2.0/SpacetimeDB.BSATN.Codegen.dll
BSATN.Runtime -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/bin/Debug/netstandard2.1/SpacetimeDB.BSATN.Runtime.dll
Codegen -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen/bin/Debug/netstandard2.0/SpacetimeDB.Codegen.dll
/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/QueryBuilder.cs(880,9): error CA1510: Use 'ArgumentNullException.ThrowIfNull' instead of explicitly throwing a new exception instance (https://learn.microsoft.com/dotnet/fundamentals/code-analysis/quality-rules/ca1510) [/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/BSATN.Runtime.csproj::TargetFramework=net8.0]
/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/Builtins.cs(3,1): error IDE0005: Using directive is unnecessary. (https://learn.microsoft.com/dotnet/fundamentals/code-analysis/style-rules/ide0005) [/home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/BSATN.Runtime.csproj::TargetFramework=net8.0]
BSATN.Runtime -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/BSATN.Runtime/bin/Debug/net8.0/SpacetimeDB.BSATN.Runtime.dll
Codegen.Tests -> /home/work/SpacetimeDBPrivate/public/crates/bindings-csharp/Codegen.Tests/bin/Debug/net8.0/Codegen.Tests.dll
```
The fix was done by AI.
# API and ABI breaking changes
None. Behavior should be the same.
# Expected complexity level and risk
1
# Testing
`git clean -fdx . && dotnet test -warnaserror` no longer fails
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
This PR updates Rust SDK subscription unsubscribe handling to return an
error instead of panicking when the internal pending mutation channel is
closed.
## Motivation
`SubscriptionHandle::unsubscribe_then` currently calls `unwrap()` after
sending an unsubscribe mutation through the internal pending mutation
channel. If the connection or subscription manager has already shut
down, that send can fail with a disconnected channel error, causing the
application to panic.
I encountered this in [`bevy_stdb`](https://github.com/onx2/bevy_stdb),
which tracks subscriptions across reconnects and unsubscribes old
handles after applying a new subscription for the same key.
## Changes
- Replaces `unwrap()` on `pending_mutation_sender.unbounded_send(...)`
with error propagation.
- Returns `crate::Error::Internal(...)` when the unsubscribe mutation
cannot be sent.
- Updates local unsubscribe state only after the unsubscribe mutation is
successfully queued.
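The shape of the change can be sketched as follows. The error type and channel are simplified stand-ins for the SDK's internals; only the unwrap-to-error-propagation change is illustrated.

```rust
use std::sync::mpsc;

// Simplified stand-in for the SDK's error type.
#[derive(Debug)]
enum Error {
    Internal(String),
}

fn queue_unsubscribe(sender: &mpsc::Sender<&'static str>) -> Result<(), Error> {
    // Before: sender.send("unsubscribe").unwrap() would panic here if the
    // connection had already shut down. Now the failure is propagated.
    sender
        .send("unsubscribe")
        .map_err(|e| Error::Internal(format!("pending mutation channel closed: {e}")))?;
    // Local unsubscribe state is only updated after this point, i.e. once
    // the mutation has been successfully queued.
    Ok(())
}

fn main() {
    let (tx, rx) = mpsc::channel();
    assert!(queue_unsubscribe(&tx).is_ok());
    drop(rx); // simulate the subscription manager shutting down
    assert!(matches!(queue_unsubscribe(&tx), Err(Error::Internal(_))));
}
```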
# API and ABI breaking changes
None expected.
This changes an internal error path from panicking to returning an
existing `crate::Error::Internal(...)` through the existing
`crate::Result<()>` return type.
# Expected complexity level and risk
Complexity: 1/5
This is a small, localized change. The method already returns
`crate::Result<()>`, so replacing the `unwrap()` with error propagation
fits the existing API shape.
Risk is low. Callers now receive an error if the internal pending
mutation channel is closed instead of panicking. Existing `AlreadyEnded`
and `AlreadyUnsubscribed` behavior is unchanged.
# Testing
- [x] Verified that `unsubscribe_then` returns
`Err(crate::Error::Internal(...))` instead of panicking when the pending
mutation channel is disconnected.
- [ ] Verified that successful unsubscribe behavior is unchanged.
- [ ] Verified that `AlreadyEnded` is still returned when the
subscription has already ended.
- [ ] Verified that `AlreadyUnsubscribed` is still returned when
unsubscribe was already requested.
---------
Signed-off-by: Jeff Rooks <onx2rj@gmail.com>
# Description of Changes
Merged the `upgrade-version-check.yml` into `ci.yml`, and moved the
business logic under `cargo ci`.
I would also be very open to just removing this test until we choose to
define a better suite of tests for `cargo bump-version`.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [x] Ran it locally. It made a diff
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
https://github.com/clockworklabs/SpacetimeDB/pull/4231 changed our CI to
always pass a parameter corresponding to the PR number, which broke on
`master` commits since they don't have a PR number.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1.
# Testing
I think I don't know how to test this. But it's basically the old
behavior on `master` commits, so it should work fine? One hopes?
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
## Summary
Prevents `spacetime version uninstall <ver>` from showing a confirmation
prompt when the version isn't installed, which previously resulted in a
cryptic "No such file or directory (os error 2)" error after the user
confirmed.
## Changes
### Bug fix
- Check if the version directory exists **before** showing the y/N
prompt
- Return a clear error message: `v{version} is not installed`
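An illustrative version of the ordering fix, with the install-directory layout and error-message format as assumptions: the existence check runs before any confirmation prompt is shown.

```rust
use std::fs;
use std::path::Path;

// Hypothetical helper: verify the version directory exists under the
// install root before prompting for confirmation, so the user never
// confirms a removal that is doomed to fail.
fn check_installed(install_root: &Path, version: &str) -> Result<(), String> {
    if !install_root.join(version).is_dir() {
        // Fail fast with a clear message instead of prompting and then
        // hitting "No such file or directory" during removal.
        return Err(format!("v{version} is not installed"));
    }
    Ok(())
}

fn main() {
    let root = std::env::temp_dir().join("stdb-uninstall-demo");
    fs::create_dir_all(root.join("2.0.2")).unwrap();
    assert!(check_installed(&root, "2.0.2").is_ok());
    assert_eq!(
        check_installed(&root, "2.0.3"),
        Err("v2.0.3 is not installed".to_string())
    );
}
```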
### Tests (4 unit tests)
- `test_uninstall_nonexistent_version_errors_before_prompt` — confirms
error fires before prompt for missing versions
- `test_uninstall_current_version_errors` — confirms you can't uninstall
the active version
- `test_uninstall_current_keyword_errors` — confirms the literal string
"current" is rejected
- `test_uninstall_existing_version_with_yes` — confirms normal uninstall
flow works
## Verification
```
cargo check -p spacetimedb-update
cargo clippy -p spacetimedb-update -- -D warnings
cargo test -p spacetimedb-update uninstall
```
## Reproduction
Before this fix:
```
$ spacetime version uninstall 2.0.3
Uninstall v2.0.3? yes
Error: No such file or directory (os error 2)
```
After this fix:
```
$ spacetime version uninstall 2.0.3
Error: v2.0.3 is not installed
```
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
## Summary
- add `ReducerContext::database_identity()` in Rust bindings
- deprecate `ReducerContext::identity()` and keep it as a compatibility
alias
- update reducer docs example to use `ctx.database_identity()`
- add C# reducer-context equivalent: `DatabaseIdentity` and obsolete
`Identity` alias
- update Rust/C# module test callsites to the new API name
- update C# codegen snapshots for generated `ReducerContext` API output
## Why
Issue #3201 reports user confusion between reducer `ctx.identity()`
(database/module identity) and `ctx.sender`. This change clarifies
naming while preserving compatibility.
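The rename-with-alias pattern looks roughly like this. The struct below is an illustrative reduction; the real methods live on `ReducerContext` and return an `Identity`, not a `u64`.

```rust
// Illustrative stand-in for ReducerContext.
struct Ctx {
    db_identity: u64,
}

impl Ctx {
    /// New, clearer name: the identity of the database itself (as opposed
    /// to `ctx.sender`, the identity of the caller).
    fn database_identity(&self) -> u64 {
        self.db_identity
    }

    /// Old name kept as a compatibility alias.
    #[deprecated(note = "use `database_identity` instead")]
    fn identity(&self) -> u64 {
        self.database_identity()
    }
}

fn main() {
    let ctx = Ctx { db_identity: 42 };
    assert_eq!(ctx.database_identity(), 42);
    // Existing callers still compile, with a deprecation warning.
    #[allow(deprecated)]
    {
        assert_eq!(ctx.identity(), 42);
    }
}
```

The C# side mirrors this with `[Obsolete]` on the old `Identity` member, so both SDKs warn without breaking existing modules.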
## Validation
- `cargo check -p spacetimedb -p module-test` (Passed)
- `dotnet test crates/bindings-csharp/Codegen.Tests/Codegen.Tests.csproj
--nologo` (Passed)
- `dotnet test crates/bindings-csharp/Runtime.Tests/Runtime.Tests.csproj
--nologo` (Failed) pre-existing unrelated failure:
- `Runtime.Tests/JwtClaimsTest.cs(10,23): CS1729: 'JwtClaims' does not
contain a constructor that takes 2 arguments`
## Compatibility
- Rust: `identity()` still works but is deprecated in favor of
`database_identity()`.
- C#: `Identity` still works but is marked `[Obsolete]` in favor of
`DatabaseIdentity`.
Closes #3201
# Description of Changes
This patch aims to improve support for writing generic functions over
multiple `ctx` types.
The `DbContext` trait was introduced for this, but due to the
associated-type ambiguity encountered when trying to implement against
it, another method is needed.
This PR implements a `db_read_only()` method which always returns a
`LocalReadOnly`, so Rust can infer the returned type (which prior to this
was one of `__view`, `__query`, and a third one I can't remember right
now 😓).
I have chosen to be defensive by implementing it as `unstable`, but I can
also remove that if desired.
It allows the following pattern, which has been working great in my
project, so it's time to contribute it :>
```rust
pub(crate) trait YourTableNameRead {
    // Your methods which only need read access. (`Args` is a placeholder
    // for whatever argument types you use.)
    fn test(&self, args: Args);
}

// The read version is a supertrait to give access to the read methods.
pub(crate) trait YourTableNameWrite: YourTableNameRead {
    // Your methods which need read-write access.
}

// The read version gets implemented for every DbContext since we can always read.
impl<Db: DbContext> YourTableNameRead for Db {
    fn test(&self, args: Args) {
        self.db_read_only().table_name().whatever(args);
    }
}

// By constraining the associated type to Local we only get this for writeable ctxs.
impl<Db: DbContext<DbView = Local>> YourTableNameWrite for Db {}
```
This allows you to write the following at the call site:
```rust
use YourTableNameRead;

// Works the same under #[view], #[reducer], or #[procedure], whatever
// the concrete ctx type is:
#[reducer]
fn my_func(ctx: ReducerContext, args: Args) {
    ctx.test(args);
}
```
# API and ABI breaking changes
None
# Expected complexity level and risk
1. Minor API and QoL addition, which is furthermore unstable.
# Testing
- [x] Works in my project
---------
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
This pull request improves the handling of connection lifecycle events
in the Rust client SDK for SpacetimeDB, particularly distinguishing
between connection failures and disconnections. It introduces a new
`ConnectionLifecycle` state machine to track connection progress, and
ensures that the correct callback (`on_connect_error` or
`on_disconnect`) is invoked based on the connection state.
**Changes**
* `ConnectionLifecycle` enum to track the connection state
(`Connecting`, `Connected`, `Ended`)
* Refactored error handling so that if a connection fails before
establishment, the `on_connect_error` callback is invoked; if the
connection fails after establishment, the `on_disconnect` callback is
invoked. See `end_connection`.
* Updated where disconnections are handled
(`advance_one_message_blocking`, `advance_one_message_async`, and
message processing) to use `finish_connection`
* Improved handling of user-initiated disconnects during the connection
process to avoid reporting them as connection errors and to ensure
proper cleanup.
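The dispatch described above reduces to a match on the lifecycle state. The state names below match the PR; the `Callback` enum and the free function are simplifications of the real SDK code.

```rust
#[derive(Debug, PartialEq)]
enum ConnectionLifecycle {
    Connecting,
    Connected,
    Ended,
}

#[derive(Debug, PartialEq)]
enum Callback {
    OnConnectError,
    OnDisconnect,
    None,
}

/// Decide which callback fires when the connection ends.
fn end_connection(state: ConnectionLifecycle) -> Callback {
    match state {
        // The connection failed before it was ever established.
        ConnectionLifecycle::Connecting => Callback::OnConnectError,
        // The connection dropped after establishment.
        ConnectionLifecycle::Connected => Callback::OnDisconnect,
        // Already ended (e.g. a user-initiated disconnect that was
        // handled earlier): nothing further to report.
        ConnectionLifecycle::Ended => Callback::None,
    }
}

fn main() {
    assert_eq!(end_connection(ConnectionLifecycle::Connecting), Callback::OnConnectError);
    assert_eq!(end_connection(ConnectionLifecycle::Connected), Callback::OnDisconnect);
    assert_eq!(end_connection(ConnectionLifecycle::Ended), Callback::None);
}
```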
# API and ABI breaking changes
Possibly: if clients relied on `on_disconnect` (rather than
`on_connect_error`) firing when a connection failed before
establishment, this changes that behavior.
# Expected complexity level and risk
Maybe a 2? Seems pretty low risk but I'm still new to the codebase,
please double check.
This doesn't fix the websocket issues, that'll be for another day. I
noticed websocket.rs has some places it just drops and the error isn't
handled properly. We could technically surface that information and run
our callbacks with more specific error messages.
# Testing
I had an agent build and run loads of tests for this but didn't commit
those since it would have made the PR massive. I was planning on testing
locally though to see if I could trigger a connection failure at some
point, maybe via an invalid access token.
# Description of Changes
Add documentation for how to deploy on Railway under "Hosting" in the
docs
https://github.com/user-attachments/assets/b36a7f53-e0db-4fa3-8b0e-65930c71b3a8
# API and ABI breaking changes
# Expected complexity level and risk
1 - just a docs change
# Testing
Ran the docs locally and verified the page looked good and that the
content is correct. See video above.
# Description of Changes
Just removing an old script that we would like to maintain via a cargo
subcommand now
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
- Bumps version to 2.2.0
# API and ABI breaking changes
None
# Expected complexity level and risk
- 1 - this is just a version bump
# Testing
- [x] Version number is correct (`2.2.0`)
- [x] BSL license file has been updated with the new date and version
number
---------
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
As reported in #4886, `metadata.toml` can get lost if the server
crashes at an unfortunate point in time. To mitigate that, make
`path_type::write` replace the file atomically, and issue `fsync` on the
file and enclosing directory.
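The write-temp, fsync, rename, fsync-directory sequence can be sketched with the standard library. This is a minimal sketch of the pattern, not the actual `path_type::write` implementation; the directory fsync as written is POSIX-specific.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Crash-safe replace: write to a temporary file, fsync it, rename it into
// place atomically, then fsync the parent directory so the rename itself
// survives a crash.
fn atomic_write(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().expect("path must have a parent directory");
    let tmp = path.with_extension("tmp");

    let mut f = File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // flush file contents + metadata to disk

    fs::rename(&tmp, path)?; // atomic replacement on POSIX filesystems

    File::open(dir)?.sync_all()?; // persist the new directory entry
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("metadata-demo.toml");
    atomic_write(&path, b"version = 1")?;
    assert_eq!(fs::read(&path)?, b"version = 1");
    Ok(())
}
```

The key property is that at every instant the destination path holds either the complete old file or the complete new one, so a crash mid-write can no longer leave a truncated or missing `metadata.toml`.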
# Expected complexity level and risk
1