# Description of Changes
Apparently, I missed several license files in #3002; whatever method I
was using to find them was evidently insufficient.
**This replaces all empty `LICENSE` files with an explicit (symlink to)
BSL license, and all apache licenses with symlinks to the root apache
license.** This PR does not intentionally change any license terms, so
if you see one that changed, **it's a mistake**.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
```bash
$ find . -name '*LICENSE*' -type f | grep -v '\.meta$'
./crates/sqltest/standards/LICENSE # this one is an external library that we are not allowed to re-license
./LICENSE.txt # this is the root license
```
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Add a RemoteQuery regression test to C# SDK to address issue:
https://github.com/clockworklabs/SpacetimeDB/issues/3064
This adds on to existing regression tests, creating a rather small
change.
# API and ABI breaking changes
Not a breaking change
# Expected complexity level and risk
1
# Testing
- [X] Ran `dotnet test`, all tests pass
# Description of Changes
This enables smoketests to run against remote servers, such as maincloud
/ maincloud staging.
I also added a `--spacetime-login` param, for servers that require a
"proper" spacetime login (such as both servers above).
Usage:
```bash
python3 -m smoketests \
--remote-server https://maincloud.staging.spacetimedb.com \
--spacetime-login \
-x replication # for some reason this is required, even though I swear it should be disabled by not passing `--docker`
```
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [x] Smoketests pass on this PR
- [x] Smoketests pass when run against maincloud staging (using the
instructions above)
- [x] Manual review to check whether I've accidentally de-fanged any
"test for negative case" tests
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
`fn alter_table_row_type` now writes the new row type to `st_column`.
This is not a fix for the adding-variants problem, but it does get us
closer.
# API and ABI breaking changes
None
# Expected complexity level and risk
2, system tables stuff is always risky.
# Testing
`test_alter_table_row_type_is_transactional` is amended with assertions.
# Description of Changes
We had weird caching issues in the C#/Unity testsuite. Somehow, they got
triggered only as of
https://github.com/clockworklabs/SpacetimeDB/pull/3181 merging, and I
have no idea why/how.
I've restored the `id` field of the checkout step (which is used by the
cache step), and this _seems_ to have fixed it.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] It passes on this PR
- [x] It passes in a test PR that combines this change with
https://github.com/clockworklabs/SpacetimeDB/pull/3182
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Refactors the machinery of `call_reducer` and `update_database` to:
a) break it into smaller pieces that make up the whole, so that the code
becomes clearer, and
b) extract all the VM-independent parts so that they can be reused by V8
modules.
This is best reviewed commit by commit.
# API and ABI breaking changes
None.
# Expected complexity level and risk
2, it's an important place, but this is just code motion.
# Testing
No semantic changes, just code motion.
# Description of Changes
Disables automigrations for column type changes
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
- [x] Updated smoketests
# Description of Changes
Iterating over a HashMap does not guarantee any ordering of the items.
To ensure consistent and predictable pretty-printing, explicitly sort
entries.
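The fix pattern can be sketched in Python (hypothetical names; the actual change is in Rust): collect the map's entries, sort them by key, and only then render.

```python
def pretty_print(entries: dict) -> str:
    """Render a map deterministically by sorting entries by key first.

    Iteration order over a hash map is not guaranteed, so we sort
    before printing to make the output stable across runs.
    """
    lines = [f"{key}: {value}" for key, value in sorted(entries.items())]
    return "\n".join(lines)

# The same logical map always renders identically, regardless of
# insertion order.
a = pretty_print({"b": 2, "a": 1})
b = pretty_print({"a": 1, "b": 2})
assert a == b == "a: 1\nb: 2"
```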
# API and ABI breaking changes
NA
# Expected complexity level and risk
0
# Description of Changes
This obscured other code that is actually used, so let's remove it as
it's dead.
# API and ABI breaking changes
None
# Expected complexity level and risk
-1
# Testing
Not applicable.
# Description of Changes
I'm moving `crates/sdk` to `sdks/rust` to be more in line with where the
rest of our SDKs are listed. I updated the corresponding paths etc. that
pointed to the previous location.
This PR is based on
https://github.com/clockworklabs/SpacetimeDB/pull/3185, because if we
merge this without that, our release scripts will be broken.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [x] Existing CI passes
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
I updated `tools/publish-crates.sh` and `tools/find-publish-list.py` to
be more flexible to where our crates can be located (rather than
hardcoding `crates/foo`). This is in preparation for
https://github.com/clockworklabs/SpacetimeDB/pull/3181.
Now, `find-publish-list.py` loads the output of `cargo metadata` to
dynamically find the path of the crate in question. This also allows us
to "properly" determine which crates are ours, instead of the
`spacetimedb-` string prefix check that we were doing before.
It also now supports `--directories` to output directory paths, rather
than just crate names. `publish-crates.sh` now uses those output paths
rather than assuming that it knows where to find crates.
As a bonus, this approach is also much faster (~0.3s to find the crate
list, vs ~8s before to load and process all the TOMLs).
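The core of the new approach can be sketched like this (simplified, with a stand-in for the real `cargo metadata` output; the actual logic lives in `tools/find-publish-list.py`): parse the JSON from `cargo metadata --format-version 1` and map each workspace crate's name to its manifest path, instead of assuming `crates/<name>` or a name-prefix check.

```python
import json

def workspace_manifests(metadata_json: str) -> dict:
    """Map workspace crate names to their Cargo.toml paths.

    `cargo metadata` lists every package, including third-party deps;
    `workspace_members` holds the IDs of the packages that belong to
    this workspace, which identifies "our" crates without relying on
    a `spacetimedb-` name prefix.
    """
    meta = json.loads(metadata_json)
    members = set(meta["workspace_members"])
    return {
        pkg["name"]: pkg["manifest_path"]
        for pkg in meta["packages"]
        if pkg["id"] in members
    }

# Tiny stand-in for real `cargo metadata` output.
fake = json.dumps({
    "workspace_members": ["id-sdk"],
    "packages": [
        {"id": "id-sdk", "name": "spacetimedb-sdk",
         "manifest_path": "/repo/sdks/rust/Cargo.toml"},
        {"id": "id-serde", "name": "serde",
         "manifest_path": "/registry/serde/Cargo.toml"},
    ],
})
assert workspace_manifests(fake) == {
    "spacetimedb-sdk": "/repo/sdks/rust/Cargo.toml"
}
```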
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
- [x] `find-publish-list.py` lists the same crates before and after:
```bash
$ ( git checkout master && python3 tools/find-publish-list.py --recursive --quiet bindings sdk cli standalone > before.txt )
$ ( git checkout bfops/flexible-publish-scripts && python3 tools/find-publish-list.py --recursive --quiet spacetimedb spacetimedb-sdk spacetimedb-cli spacetimedb-standalone > after.txt )
# the new script prints out crate names rather than directory names, so we need to tweak a bit
$ diff -U2 before.txt <(cat after.txt | sed 's/^spacetimedb-//' | sed 's/^spacetimedb$/bindings/')
--- before.txt 2025-08-20 10:18:07.323217870 -0700
+++ /dev/fd/63 2025-08-20 10:35:38.344074842 -0700
@@ -8,17 +8,17 @@
data-structures
schema
-table
-expr
-physical-plan
paths
fs-utils
commitlog
+table
durability
-execution
+expr
+physical-plan
snapshot
+execution
client-api-messages
query
-subscription
vm
+subscription
datastore
auth
```
Some lines are reordered because we now find dependencies via `cargo
metadata` instead of the TOMLs; I spot-checked some of the moved lines
and found no evidence that any were in an invalid place.
- [x] `--directories` flag prints correct directories instead of names
```bash
$ python3 tools/find-publish-list.py --directories --quiet --recursive spacetimedb-sdk
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/memory-usage/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/primitives/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/metrics/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/bindings-macro/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/sats/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/lib/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/client-api-messages/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/data-structures/Cargo.toml
/home/lead/work/clockwork-localhd/SpacetimeDBPrivate/public/crates/sdk/Cargo.toml
```
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
This adds auto migration support to the `schema` crate.
Additional changes are needed to fully implement this feature: see
https://github.com/clockworklabs/SpacetimeDB/issues/2912
# API and ABI breaking changes
None
# Expected complexity level and risk
1, at this level the change is fairly straightforward. The integration
will be more difficult.
# Testing
Added `schema` unit tests for success/failure cases.
---------
Co-authored-by: Shubham Mishra <shivam828787@gmail.com>
# Description of Changes
Fixes #3174.
During initialization, entries were added to the `DelayQueue` but not to
`key_map`.
### Detailed Explanation:
1. `DelayQueue` is **not set-semantic**, so we track uniqueness with a
`key_map: FxHashMap`, but that map wasn't updated during initialization.
2. `Scheduler::schedule` is **not transactional**: it enqueues reducers
even if the DB transaction later fails (abort, duplicate row, etc.). On
yield, `SchedulerActor` checks the DB before execution.
**Combined Effect**: A transaction that does not actually change a
scheduled entry but still calls `Scheduler::schedule` after a module
update will cause a duplicate entry in the `DelayQueue`, since `key_map`
does not yet contain that entry.
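The invariant can be sketched in Python (hypothetical names; the real code uses tokio-util's `DelayQueue` plus an `FxHashMap`): every insert into the queue must also record the key in `key_map`, and `schedule` must consult `key_map` first, including for entries enqueued during initialization.

```python
import heapq

class SetSemanticDelayQueue:
    """A delay queue that rejects duplicate item IDs.

    The queue itself (a heap here) is not set-semantic, so `key_map`
    tracks which IDs are already scheduled. The bug fixed by this PR
    was inserting into the queue during initialization *without*
    updating `key_map`, so a later `schedule` call re-inserted the
    same ID.
    """

    def __init__(self):
        self._queue = []    # (fire_at, item_id) pairs
        self._key_map = {}  # item_id -> fire_at

    def schedule(self, item_id, fire_at):
        if item_id in self._key_map:
            return False    # already scheduled; don't duplicate
        heapq.heappush(self._queue, (fire_at, item_id))
        self._key_map[item_id] = fire_at
        return True

q = SetSemanticDelayQueue()
assert q.schedule("timer-1", 10.0) is True    # initialization
assert q.schedule("timer-1", 10.0) is False   # later call: no duplicate
assert len(q._queue) == 1
```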
**Why It Didn’t Show Earlier**:
When a repeating reducer executes, we re-schedule it by updating both
`DelayQueue` and `key_map` correctly.
The bug only appears in the window after updating the module but before
the first execution, if a transaction calls `schedule` without actually
modifying the DB row.
This was indeed happening, as per the Discord chat:

> but yeah most likely order of events was module was updated
> and then update_scheduled_timers_from_static_data was called

The window between the module update and the first execution is 1 hour
in this case.
## Repro steps:
1. Publish this module; it causes the `send_scheduled_message` reducer
to be called every 10 seconds.
```rust
use std::time::Duration;

use log::info;
use spacetimedb::{ReducerContext, ScheduleAt, Table};

#[spacetimedb::table(name = scheduled_message, public, scheduled(send_scheduled_message))]
pub struct ScheduledMessage {
    #[primary_key]
    #[auto_inc]
    scheduled_id: u64,
    scheduled_at: ScheduleAt,
}

#[spacetimedb::reducer]
fn send_scheduled_message(ctx: &ReducerContext, _sched: ScheduledMessage) -> Result<(), String> {
    info!("Sending scheduled message: {:?}", ctx.timestamp);
    Ok(())
}

#[spacetimedb::reducer(init)]
pub fn init(ctx: &ReducerContext) {
    ctx.db.scheduled_message().insert(ScheduledMessage {
        scheduled_id: 0,
        scheduled_at: Duration::from_secs(10).into(),
    });
}

#[spacetimedb::reducer]
pub fn update_timer(ctx: &ReducerContext) {
    for mut timer in ctx.db.scheduled_message().iter() {
        timer.scheduled_at = Duration::from_secs(10).into();
        ctx.db.scheduled_message().scheduled_id().update(timer);
        info!("building decay agent timer was updated");
    }
}
```
2. Update the module (add a table, so automigration applies) and
re-publish it.
3. Call the reducer `update_timer` before the first execution of
`send_scheduled_message` after the module update.
4. Since `update_timer` doesn't change the existing schedule row but
still calls `Scheduler::schedule`, it causes a duplicate entry in the
`DelayQueue`.
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1, pretty obvious fix.
# Testing
Manually. The code fix is straightforward, but the issue only becomes
visible under specific conditions.
# Description of Changes
This change adds the following `System.TimeSpan`-style static
construction methods to `SpacetimeDB.TimeDuration`:
* `static TimeDuration FromMilliseconds(double milliseconds)`
* `static TimeDuration FromSeconds(double seconds)`
* `static TimeDuration FromMinutes(double minutes)`
* `static TimeDuration FromHours(double hours)`
* `static TimeDuration FromDays(double days)`
These mirror the equivalently named static methods on `System.TimeSpan`
and dramatically improve usability and familiarity for experienced C#
users, with no more overhead than if the user performed the
multiplication themselves.
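Assuming `TimeDuration` is backed by a signed microsecond count (a sketch under that assumption; names here are hypothetical, the real methods are C#), the new constructors amount to the following arithmetic:

```python
MICROS_PER_MILLISECOND = 1_000
MICROS_PER_SECOND = 1_000_000

def from_milliseconds(ms: float) -> int:
    """TimeSpan-style constructor: milliseconds -> whole microseconds."""
    return round(ms * MICROS_PER_MILLISECOND)

def from_seconds(s: float) -> int:
    return round(s * MICROS_PER_SECOND)

def from_minutes(m: float) -> int:
    return from_seconds(m * 60)

def from_hours(h: float) -> int:
    return from_minutes(h * 60)

def from_days(d: float) -> int:
    return from_hours(d * 24)

# Fractional inputs work like TimeSpan's: 1.5 seconds is 1,500,000 us.
assert from_seconds(1.5) == 1_500_000
assert from_days(1) == 86_400_000_000
```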
Wish I'd thought to do this before v1.1.2 got released. Ah well.
# API and ABI breaking changes
None. Convenience methods added in bindings only.
# Expected complexity level and risk
1 (potentially up to a low 2 if cleanup is desired elsewhere in the
bindings to leverage these new methods).
# Testing
- [x] Ensure the changes build. <!-- maybe a test you want to do -->
- [ ] New contributor check! Review to make sure repo style & substance
standards are complied with. <!-- maybe a test you want a reviewer to
do, so they can check it off when they're satisfied. -->
# Description of Changes
We stopped incrementing the incoming queue length metric. This patch
increments it again and adds a regression test.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [x] Regression test
# Description of Changes
Closes #3170.
Commit messages:
### Increase the default incoming-queue-length limit
2048 turned out to be too low a value for BitCraft, as their world
upload process requests on the order of 6000 reducers very rapidly. We
still feel that having a limit is valuable to prevent malicious or
misguided clients from taking an arbitrarily large amount of host
memory, so we bump the value to give a wide safety margin for BitCraft's
needs without removing the limit entirely.
### Add log at `warn` when the host disconnects a client due to too many
requests
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
- [x] @mamcx to run a BitCraft bot test.
# Description of Changes
Defines a building block, `call_describe_module`, that can be used to
call the expected ABI function `describe_module` in JS.
There may be further changes to this, but it is a first working pass.
# API and ABI breaking changes
None
# Expected complexity level and risk
2, this does not integrate with existing code reachable in production,
but the exception handling required some googling/reading docs.
# Testing
A test `call_describe_module_works` is added that tests returning an
empty `RawModuleDef` from JS.
# Description of Changes
The `Clone` impl for `ClientConnection` would create an independent
instance that could not observe module hotswapping. This would cause
methods called on a replaced `ModuleHost` to fail, because that host
had already exited.
Fix by reading the `ModuleHost` from the watch channel directly, instead
of maintaining a redundant copy.
Also fix `watch_module_host` to properly mark the current module host as
seen.
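The failure mode can be sketched in Python (hypothetical names; the real code uses a tokio watch channel): copying the current `ModuleHost` at clone time freezes the clone's view, while reading through the shared channel always observes the latest host.

```python
class WatchChannel:
    """Minimal stand-in for a watch channel: one shared, updatable value."""
    def __init__(self, value):
        self._value = value
    def borrow(self):
        return self._value
    def send(self, value):
        self._value = value

# Buggy pattern: snapshot the module host at clone time.
class ClonedConnection:
    def __init__(self, channel):
        self.module_host = channel.borrow()  # frozen copy

# Fixed pattern: always read the host through the channel.
class Connection:
    def __init__(self, channel):
        self._channel = channel
    @property
    def module_host(self):
        return self._channel.borrow()

channel = WatchChannel("host-v1")
stale = ClonedConnection(channel)
live = Connection(channel)
channel.send("host-v2")                 # module hotswap
assert stale.module_host == "host-v1"   # the clone can't see the swap
assert live.module_host == "host-v2"    # the channel read sees it
```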
# Expected complexity level and risk
2
# Testing
- [x] test suite passes
- [x] ran @bfops's repro script
# Description of Changes
We recently merged several repos together. This PR clarifies the license
terms for several subdirectories, as well as the relationship between
the licenses.
The licenses in our subdirectories have become symbolic links to
licenses in our toplevel `licenses` directory. For any particular
subdirectory's license file in the diff, you can click `... -> View
file` and then click on the text that says "Symbolic Link" on that page.
This will take you to the license file that it links to.
I have also updated the `tools/upgrade-version` script to update the
change date in the new `licenses/BSL.txt` file.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
None. Only changes to license files.
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
This does 2 things:
1. simplifies the deserialization of optional sums so that they are
treated as regular sums.
2. adds higher level v8 ser/de interfaces and privatizes what can be
private.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Existing ser/de tests are adjusted to use the new interfaces.
# Description of Changes
Fixes a TypeScript SDK bug where `onclose` of
`WebsocketDecompressAdapter` is never called.
# API and ABI breaking changes
None
# Expected complexity level and risk
Potential risk: TypeScript SDK consumers might currently be relying on
`onConnectError` alone to report disconnection. Users should instead use
`onDisconnect`, or can simply subscribe to both callbacks with the same
function to get the previous behavior.
This is a trivial change, but could have minor repercussions on current
code. I'd still say it's a level 1 change, with consumers able to easily
change to the correct callback.
# Testing
# Description of Changes
Updated outdated repository link in documentation from
`https://github.com/ClockworkLabs/tree/master/demo/Blackholio`
to
`https://github.com/clockworklabs/SpacetimeDB/tree/master/demo/Blackholio`
to reflect the current location of the Blackholio demo.
# API and ABI breaking changes
None.
# Expected complexity level and risk
Complexity level: **1** (trivial change)
This is a simple documentation update with no impact on code execution,
APIs, or ABIs.
# Testing
Verified that the new link is valid and accessible.
No additional testing required.
* [ ] Reviewer can confirm that the new link resolves correctly.
# Description of Changes
As requested by @gefjon
Most of this is:
- `'a` -> `'this`
- `'s` -> `'scope`
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
No semantic changes.
# Description of Changes
On databases with many already-compressed snapshots, this was leading to
log spam without providing any useful information.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
1
# Testing
N/a
# Description of Changes
- Prior to this PR, if you log in, create a player, then log out and log
back in, you are prompted for a username again instead of just resuming
your gameplay. This is confusing to players because it seems like your
token is not being reused and you are just creating a new player.
# API and ABI breaking changes
None
# Expected complexity level and risk
1 - this is a demo change but technically affects the testsuite
# Testing
1. Clear your token, then open the game and create a player. Then log
out.
2. Without clearing your token, log back into the game.
3. Make sure that:
   - Your username is correct
   - Your circles are where you left them (and the number of circles is
correct)
# Description of Changes
Adds methods and free-standing functions to allow folds to stop at an
upper bound, by passing a range instead of only a start offset.
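The shape of the new API can be sketched in Python (hypothetical names): a fold that takes a range of offsets, where the old start-offset-only form is the special case of an unbounded end.

```python
def fold_range(entries, func, acc, start=0, end=None):
    """Fold over entries[start:end], stopping at the upper bound.

    Passing only `start` reproduces the old behavior of folding all
    the way to the end of the log.
    """
    stop = len(entries) if end is None else end
    for i in range(start, min(stop, len(entries))):
        acc = func(acc, entries[i])
    return acc

log = [1, 2, 3, 4, 5]
# Old behavior: from offset 2 to the end.
assert fold_range(log, lambda a, b: a + b, 0, start=2) == 12
# New behavior: stop at an upper bound (offsets 2 and 3 only).
assert fold_range(log, lambda a, b: a + b, 0, start=2, end=4) == 7
```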
# Expected complexity level and risk
1
# Testing
# Description of Changes
Currently, using the function `maybe_log_error` to log errors hides the
real source line where the error happened. To keep the same logic but
report a more accurate source line, I moved it into a macro, preserving
the same logging abstraction.
# API and ABI breaking changes
-
# Expected complexity level and risk
1
# Testing
- [ ] I have tested it in my application and it now shows the correct
source line in the log message
---------
Signed-off-by: ResuBaka <ResuBaka@users.noreply.github.com>
Co-authored-by: Phoebe Goldman <phoebe@goldman-tribe.org>
# Description of Changes
Added a repo migration notice workflow to each repository that we've
merged into SpacetimeDB.
These need to be mirrored back to the actual repos with a process like:
```bash
git subtree split --prefix=sdks/typescript -b release/typescript
git push -f git@github.com:clockworklabs/spacetimedb-typescript-sdk.git release/typescript:main
git branch -D release/typescript
```
Once they're migrated there, the bot will automatically comment on any
PR or issue opened in those repos.
# API and ABI breaking changes
None. CI-only changes.
# Expected complexity level and risk
2
# Testing
- [x] In a demo repo, it properly commented and closed a PR:
https://github.com/clockworklabs/github-tooling-test/pull/42
- [x] In a demo repo, it properly commented and closed an issue:
https://github.com/clockworklabs/github-tooling-test/issues/43
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
We want to release #2980 for the TS SDK, so I'm bumping our version
number.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None, just a version bump.
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
## Description of Changes
Moves table-specific operations out of `SpacetimeDBClient.cs` and into
`Table.cs` in order to reduce coupling between the two files.
This is the implementation of
https://github.com/clockworklabs/SpacetimeDB/issues/3047
This is a PR being duplicated/migrated from
https://github.com/clockworklabs/com.clockworklabs.spacetimedbsdk/pull/346,
which contains prior discussion/review/approval.
## API
- [ ] This is an API breaking change to the SDK
## Requires SpacetimeDB PRs
No PRs needed, works with latest master
## Testsuite
SpacetimeDB branch name: master
## Testing
- [X] Opened and ran Blackhol.io within the Unity Editor and
successfully connected to a locally running server
- [X] Compiled a WebGL build of Blackhol.io and successfully connected
to a locally running server
- [x] Opened and ran BitCraft within the Unity Editor and successfully
connected to a locally running server
# Description of Changes
We haven't used this in a while and we're pretty sure it's currently
broken.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
None
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
I forgot to do this in the original version bump PR... I just ran
`dotnet pack` and then moved the meta files.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None.
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
A few of our new CI checks weren't set up to run in the merge queue,
which prevented PRs from merging when those checks were marked required.
# API and ABI breaking changes
None. CI only change.
# Expected complexity level and risk
1
# Testing
This will have to be tested by making these checks required again, and
then seeing if this PR can merge.
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Just what it says on the can. This test suite can sometimes mysteriously
hang for a long time, so a timeout will help kill this (and free up
resources) earlier.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None really. I guess I could test this by adding a step that just sleeps
forever, but this is basically just setting some data.
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
This changes the behavior of the Rust client so that we let the server
generate the connection id if the client hasn't called the unstable
method to set one.
This is awkward for the `connection_id` function, since it now panics if
the connection id hasn't been received from the server. We should
deprecate this function in favor of a `try` version.
This also changes the behavior of reusing the connection id if a client
reconnects.
The corresponding private PR is
https://github.com/clockworklabs/SpacetimeDBPrivate/pull/1921.
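The suggested `try` variant could look like this (a sketch in Python with hypothetical names; the real API is Rust): `connection_id()` keeps its current shape but now fails before the server assigns an ID, while `try_connection_id()` reports that case as "not yet available".

```python
class DbConnection:
    """Sketch of a client connection whose ID is assigned by the server."""
    def __init__(self):
        self._connection_id = None  # unset until the server sends one

    def connection_id(self):
        # Mirrors the panicking behavior described above.
        if self._connection_id is None:
            raise RuntimeError("connection id not yet received from server")
        return self._connection_id

    def try_connection_id(self):
        # Non-panicking variant: None until the server assigns an ID.
        return self._connection_id

conn = DbConnection()
assert conn.try_connection_id() is None
try:
    conn.connection_id()
    raise AssertionError("expected an error before the server assigns an ID")
except RuntimeError:
    pass
conn._connection_id = "c0ffee"  # simulate the server's assignment
assert conn.connection_id() == "c0ffee"
```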
# API and ABI breaking changes
Technically not changing any API signatures, but this is
behavior-changing, since the `connection_id` function can now panic.
# Expected complexity level and risk
2. The risk here is people relying on that behavior.
# Testing
I think some tests need to be updated.
# Description of Changes
The `ScheduleAt` type appears to have an outdated structure. I've
updated the structure of the `ScheduleAt` type represented in TypeScript
to be in line with the Rust type:
https://docs.rs/spacetimedb/latest/spacetimedb/enum.ScheduleAt.html
Namely, we were missing the inner Spacetime library types of
`TimeDuration` and `Timestamp`.
This is to address #2969.
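Based on the Rust enum linked above, `ScheduleAt` is a two-variant sum type over those inner wrapper types; a sketch of the corrected shape in Python (hypothetical class names mirroring the Rust docs):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class TimeDuration:
    """Wrapper for an interval, as a microsecond count."""
    micros: int

@dataclass
class Timestamp:
    """Wrapper for an absolute time, as microseconds since the epoch."""
    micros_since_epoch: int

@dataclass
class Interval:
    """ScheduleAt::Interval(TimeDuration): fire repeatedly."""
    value: TimeDuration

@dataclass
class Time:
    """ScheduleAt::Time(Timestamp): fire once at an absolute time."""
    value: Timestamp

ScheduleAt = Union[Interval, Time]

# A reducer scheduled every 10 seconds carries a TimeDuration, not a
# bare number: this inner wrapper type is what was missing.
every_10s: ScheduleAt = Interval(TimeDuration(10_000_000))
assert isinstance(every_10s.value, TimeDuration)
```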
# API and ABI breaking changes
This is an API breaking change in that it changes the API, but it breaks
it only to fix a bug.
# Expected complexity level and risk
2
# Testing
I have not done additional testing, but I thought I would get this PR
started to make it easier for the next person who comes along.
The ideal automated test to add would be one which connects a TypeScript
client to a SpacetimeDB module with a scheduled table, and verifies that
the type is correctly deserialized and represented in TypeScript when
the rows are sent down.
---------
Co-authored-by: Jeffrey Dallatezza <jeffreydallatezza@gmail.com>
# Description of Changes
Until we have confirmed reads, we need to wait for a tick until any
batched commits are flushed.
Also cleaned up the code a bit to do less string manipulation of SQL
results all over the place, and to output less irrelevant logging.
# Expected complexity level and risk
1
# Testing
- [x] yes