# Description of Changes
This patch improves the situation when you need a generic function over
multiple `ctxs`.
The `DbContext` trait was introduced for this, but due to the
associated-type ambiguity when trying to implement it, another method is needed.
This PR implements a `db_read_only()` method which always results in a
`LocalReadOnly`, hence Rust can infer the returned type
(which prior to this was one of `__view`, `__query`, and a third one I
can't remember right now 😓).
I have chosen to be defensive by implementing it as `unstable`, but I can
also remove that if desired.
It allows the following pattern, which has been working great in my
project, so it's time to contribute it :>
```rust
pub(crate) trait YourTableNameRead {
    // Your methods which only need read access.
    fn test(&self, args: Args);
}

// The read version is a supertrait to give access to the read methods.
pub(crate) trait YourTableNameWrite: YourTableNameRead {
    // Your methods which need read-write access.
}

// The read version gets implemented for every DbContext since we can always read.
impl<Db: DbContext> YourTableNameRead for Db {
    fn test(&self, args: Args) {
        self.db_read_only().table_name().whatever(args);
    }
}

// By constraining the associated type to Local we only get this for writeable ctxs.
impl<Db: DbContext<DbView = Local>> YourTableNameWrite for Db {}
```
This allows the following at the call site:
```rust
use YourTableNameRead;
#[view|reducer|procedure]
fn my_func(ctx: Reducer|(Anon)View|Tx, args ) {
ctx.test(args);
}
```
# API and ABI breaking changes
None
# Expected complexity level and risk
1. Minor API and QoL addition, which is furthermore unstable.
# Testing
- [x] Works in my project
---------
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
This pull request improves the handling of connection lifecycle events
in the Rust client SDK for SpacetimeDB, particularly distinguishing
between connection failures and disconnections. It introduces a new
`ConnectionLifecycle` state machine to track connection progress,
and ensures that the correct callback (`on_connect_error` or
`on_disconnect`) is invoked based on the connection state.
**Changes**
* `ConnectionLifecycle` enum to track the connection state
(`Connecting`, `Connected`, `Ended`)
* Refactored error handling so that if a connection fails before
establishment, the `on_connect_error` callback is invoked; if the
connection fails after establishment, the `on_disconnect` callback is
invoked. See `end_connection`.
* Updated where disconnections are handled
(`advance_one_message_blocking`, `advance_one_message_async`, and
message processing) to use `finish_connection`
* Improved handling of user-initiated disconnects during the connection
process to avoid reporting them as connection errors and to ensure
proper cleanup.
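The lifecycle logic described above can be sketched roughly as follows. This is an assumed shape for illustration: the `Connection` struct, the `Option`-returning signature, and the string return values are mine; only `ConnectionLifecycle`, its three states, and the two callback names come from this PR.

```rust
// Assumed sketch of the state machine; not the actual SDK code.
enum ConnectionLifecycle {
    Connecting,
    Connected,
    Ended,
}

struct Connection {
    lifecycle: ConnectionLifecycle,
}

impl Connection {
    /// Decide which callback fires when the connection ends.
    fn end_connection(&mut self) -> Option<&'static str> {
        let callback = match self.lifecycle {
            // Failed before establishment: report a connect error.
            ConnectionLifecycle::Connecting => Some("on_connect_error"),
            // Failed after establishment: report a disconnect.
            ConnectionLifecycle::Connected => Some("on_disconnect"),
            // Already ended (e.g. user-initiated disconnect): fire nothing.
            ConnectionLifecycle::Ended => None,
        };
        self.lifecycle = ConnectionLifecycle::Ended;
        callback
    }
}
```

Transitioning to `Ended` before returning is what makes a double-end (e.g. a user disconnect racing a failure) a no-op rather than a duplicate callback.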
# API and ABI breaking changes
Possibly: if anyone relied on `on_disconnect` firing in cases where
`on_connect_error` now fires instead, this changes that behavior.
# Expected complexity level and risk
Maybe a 2? Seems pretty low risk but I'm still new to the codebase,
please double check.
This doesn't fix the websocket issues, that'll be for another day. I
noticed websocket.rs has some places it just drops and the error isn't
handled properly. We could technically surface that information and run
our callbacks with more specific error messages.
# Testing
I had an agent build and run loads of tests for this, but didn't commit
them since that would have made the PR massive. I'm planning to test
locally to see if I can trigger a connection failure, maybe via an
invalid access token.
# Description of Changes
Add documentation for how to deploy on Railway under "Hosting" in the
docs
https://github.com/user-attachments/assets/b36a7f53-e0db-4fa3-8b0e-65930c71b3a8
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
# Expected complexity level and risk
1 - just a docs change
# Testing
Ran the docs locally and verified the page looked good and that the
content is correct. See video above.
# Description of Changes
Just removing an old script that we would like to maintain via a cargo
subcommand now
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
- Bumps version to 2.2.0
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
None
# Expected complexity level and risk
- 1 - this is just a version bump
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
- [x] Version number is correct (`2.2.0`)
- [x] BSL license file has been updated with the new date and version
number
---------
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
As reported in #4886, `metadata.toml` can get lost if the server
crashes at an unfortunate point in time. To mitigate that, make
`path_type::write` replace the file atomically, and issue `fsync` on the
file and enclosing directory.
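The atomic-replace-plus-fsync pattern described above can be sketched like this. This is an illustration of the technique, not the actual `path_type::write` implementation; the function name and temp-file naming are mine.

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Sketch: write to a temp file, fsync it, atomically rename it into
// place, then fsync the directory so the rename itself is durable.
fn write_atomically(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().expect("path must have a parent directory");
    // A crash mid-write only ever leaves a partial *.tmp file behind,
    // never a partial file at the final path.
    let tmp = path.with_extension("tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(bytes)?;
    // Flush file contents and metadata to disk before the rename.
    f.sync_all()?;
    // rename(2) atomically replaces the destination on POSIX systems.
    fs::rename(&tmp, path)?;
    // fsync the enclosing directory so the directory entry update
    // (the rename) survives a crash.
    File::open(dir)?.sync_all()
}
```

Without the final directory fsync, the rename can still be lost on a crash even though the file data itself was synced.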
# Expected complexity level and risk
1
# Description of Changes
Disclaimer: This description was written by claude:
The two `.leader()` call sites in
[`client-api/src/routes/database.rs`](crates/client-api/src/routes/database.rs)
and
[`client-api/src/routes/subscribe.rs`](crates/client-api/src/routes/subscribe.rs)
pipe `GetLeaderHostError` through `log_and_500`, which:
1. Logs every error variant at `error` level — including `Suspended`,
`Bootstrapping`, `NoLeader`, etc., which are normal operational states.
This produces noisy log lines like:
```
ERROR /app/.../client-api/src/lib.rs:623: internal error: database is
suspended
```
2. Forces every such error into a **500 Internal Server Error**
response, even when the appropriate status code is something else (e.g.
503 Service Unavailable for a suspended database).
`GetLeaderHostError` already implements
`Into<axum::response::ErrorResponse>` with the correct per-variant
mapping:
| Variant | Status |
|---|---|
| `NoSuchDatabase` | 404 Not Found |
| `LaunchError`, `Misdirected` | 500 Internal Server Error |
| `NoNodeId`, `NoLeader`, `ControlConnection`, `Suspended`,
`Bootstrapping` | 503 Service Unavailable |
The standalone implementation already uses `?`-propagation directly.
This PR makes the two client-api call sites match that pattern.
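The per-variant mapping in the table above can be sketched as follows. The enum variants mirror `GetLeaderHostError` as described in this PR, but the `status_code` helper is an illustration, not the actual `Into<ErrorResponse>` impl.

```rust
// Illustration of the per-variant status mapping; not the real impl.
enum GetLeaderHostError {
    NoSuchDatabase,
    LaunchError,
    Misdirected,
    NoNodeId,
    NoLeader,
    ControlConnection,
    Suspended,
    Bootstrapping,
}

fn status_code(err: &GetLeaderHostError) -> u16 {
    use GetLeaderHostError::*;
    match err {
        NoSuchDatabase => 404,            // Not Found
        LaunchError | Misdirected => 500, // Internal Server Error
        // Normal operational states: Service Unavailable, not an
        // internal error, and not worth an error-level log line.
        NoNodeId | NoLeader | ControlConnection | Suspended
        | Bootstrapping => 503,
    }
}
```

Plain `?`-propagation lets this mapping take effect, where routing through `log_and_500` had flattened every variant to 500.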
## Result
- Suspended / bootstrapping / no-leader databases now return **503**
instead of 500.
- These expected states no longer produce `error`-level log spam in the
request path. Genuinely unexpected internal errors elsewhere in the
codebase continue to log via `log_and_500` unchanged.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [ ] Deploy to staging to see if we still see this error when trying to
access a suspended database
# Description of Changes
Add EV code signing for Windows CLI binaries using DigiCert KeyLocker.
The workflow now signs `spacetimedb-update.exe`, `spacetimedb-cli.exe`,
and `spacetimedb-standalone.exe` on tag pushes using `smctl sign` with a
cloud HSM-backed certificate.
These changes reflect the updated DigiCert guidance for code signing
through GitHub found here:
https://github.com/marketplace/actions/digicert-binary-signing
# API and ABI breaking changes
No API or ABI changes. This change only affects the CI/CD packaging
workflow.
# Expected complexity level and risk
1 - This PR only adds code signing to existing CI packaging. Risk is
limited to the Windows packaging step failing on tags; Linux and macOS
builds are unaffected.
# Testing
- [X] Tested via workflow dispatch on tag `test-signing-v0.0.1`
- [X] All three executables signed and verified successfully
- [X] Signature verification confirms certificate chain
- [X] Signed artifacts uploaded successfully
# Description of Changes
`UClientCache::ApplyDiff` (`sdks/unreal/.../DBCache/ClientCache.h`) has
an asymmetry between its insert and delete paths.
**Phase 1 (deletes)** correctly only emits a `Diff.Deletes` entry when
the row's refcount transitions to 0 — overlapping subscriptions just
decrement.
**Phase 2 (inserts)** always appends to `Diff.Inserts`, regardless of
whether the row was already cached:
```cpp
// before
FRowEntry<RowType>* Entry = Table->Entries.Find(Key);
if (!Entry) { /* refcount = 1 */ }
else { /* refcount + 1 */ }
Diff.Inserts.Add(Key, *NewRow); // ← fires for both branches
```
`BroadcastDiff` then fires `OnInsert` for every `Diff.Inserts` entry, so
any table subscribed by two overlapping queries (e.g. a global `SELECT *
FROM t` plus a per-row `WHERE id = ...`) re-fires its insert handler on
every later subscription apply — once per cached row, every time. Game
code that does work in `OnInsert` (positioning, spawning, snapping to
terrain) re-runs and clobbers state that was meant to be set once.
The intent is documented in `RowEntry.h`: *"Wrapper storing a row value
with a reference count used by overlapping subscriptions."* Phase 1
follows that design; phase 2 doesn't.
### Fix
Move the `Diff.Inserts.Add` into the `!Entry` branch only, so it fires
only on the absent → refcount=1 transition:
```cpp
if (!Entry) {
Table->Entries.Add(Key, FRowEntry<RowType>{NewRow, 1});
Diff.Inserts.Add(Key, *NewRow);
}
else {
Table->Entries.Add(Key, FRowEntry<RowType>{NewRow, Entry->RefCount + 1});
}
```
### Why real updates still work
Cache keys are BSATN row-bytes, not primary keys. A real update arrives
as a `(delete old_bytes, insert new_bytes)` pair where `old_bytes ≠
new_bytes` — so the insert side still takes the `!Entry` branch and gets
a `Diff.Inserts` entry. `FTableAppliedDiff::DeriveUpdatesByPrimaryKey`
then pairs the delete and insert by PK into
`UpdateInserts`/`UpdateDeletes`, and `OnUpdate` (not `OnInsert`) fires,
exactly as today.
Edge cases:
| Scenario | Phase 2 branch | Result | Correct? |
|---|---|---|---|
| New row | `!Entry` | `OnInsert` | ✓ |
| Real update (different bytes) | `!Entry` | `OnInsert`+`OnDelete`
reconciled to `OnUpdate` by PK | ✓ |
| Overlapping sub re-delivers cached row | `else` | refcount bump, no
event | ✓ (was broken — fired duplicate `OnInsert`) |
| Trivial update (identical bytes) | `else` | refcount bump | irrelevant
— server diffs identical rows away before emitting |
# API and ABI breaking changes
None. Purely internal cache bookkeeping. Existing
`OnInsert`/`OnDelete`/`OnUpdate` semantics are preserved for all
non-overlapping cases; the only behavior change is that overlapping
subscriptions stop emitting duplicate `OnInsert` events for
already-cached rows — which matches the documented `RowEntry` refcount
contract.
# Expected complexity level and risk
**1.** Two lines moved into a branch; comments updated. Mirrors logic
already present and known-correct in phase 1.
# Testing
Reproduced and validated downstream in an Unreal project. The repro
setup is straightforward to replicate against any module:
1. A table `t` with ~150 rows.
2. A global subscription `SELECT * FROM t`, applied first.
3. A diagnostic actor that binds `t.OnInsert`, then every 10s submits a
new overlapping subscription (e.g. `SELECT * FROM t` again, or any
`SELECT * FROM t WHERE 'id' = X` covering already-cached rows) and
counts the `OnInsert` events that arrive in that round.
Expected: round 1 fires once per row that is *new to the cache*;
subsequent rounds against already-cached rows fire 0.
Observed (~161 rows cached after initial load):
```
Pre-fix Post-fix
Global sub (empty → 161) 161 161 ← genuine inserts; unchanged
Round 1 (overlapping) 134 0 ← 134 cached dupes
Round 2 (overlapping) 134 0
Round 3 (overlapping) 134 0
Round 4–6 (overlapping) 134 each 0 each
```
The genuine "empty cache → 161 entries" wave on the initial global
subscription is unaffected — same `OnInsert` count both pre- and
post-fix. Only the duplicate fires from later overlapping subscriptions
on already-cached rows are eliminated. `OnUpdate` still fires correctly
when underlying rows actually change.
- [x] Reviewer: confirm an existing real-update (different bytes, same
PK) test still produces `OnUpdate` and not `OnInsert`+`OnDelete`.
- [x] Reviewer: confirm any test that relies on `OnInsert` firing on
every subscription apply (rather than only on cache 0→1 transition) is
not present — if it exists, it was relying on the bug.
# Description of Changes
Tests for case conversion.
# API and ABI breaking changes
NA
# Expected complexity level and risk
1
---------
Co-authored-by: clockwork-labs-bot <bot@clockworklabs.com>
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
The version update tool sets the version in
`/templates/basic-cpp/spacetimedb/CMakeLists.txt`, but while we're in the
middle of a release this can block the smoketest. This PR changes it to
dynamically use the `latest` release so that release-night smoketests
work correctly.
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1 - Small smoketest change
# Testing
- [x] Ran the C++ quickstart smoketest locally
- [x] Updated local template and tested with / without the change
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
When creating or compressing a snapshot, `fsync` all files and
directories, so as to ensure that the snapshot is durable on the local
disk.
This obviously amounts to a large number of `fsync` calls, which may
negatively impact performance of taking a snapshot -- since we hold a
transaction lock while taking a snapshot, this is not to be taken
lightly.
# Expected complexity level and risk
3 -- performance impact
# Testing
I haven't quantified the performance impact.
# Description of Changes
[Some
runtimes](https://developer.mozilla.org/en-US/docs/Web/API/DecompressionStream#browser_compatibility)
support brotli for `DecompressionStream`, so I figure we may as
well allow it. Also reorganizes some of the websocket code for better
separation of concerns.
# Expected complexity level and risk
1
# Testing
- [ ] <!-- maybe a test you want to do -->
- [ ] <!-- maybe a test you want a reviewer to do, so they can check it
off when they're satisfied. -->
# Description of Changes
<!-- Please describe your change, mention any related tickets, and so on
here. -->
This was prompted from another request from the discord. Conversation
context:
https://discord.com/channels/1037340874172014652/1138987509834059867/1496964407278698656
We've had the ticket to make `--yes` take an enum value:
https://github.com/clockworklabs/spacetimedb/issues/3784 . Since we can
do this in a non-API/ABI breaking way we're just implementing this
ticket.
*Disclaimer: I used claude to write ~90% of this PR.*
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
None. `--yes` now just takes an optional argument string that we parse,
existing CLI commands that use `--yes` will be unaffected.
# Expected complexity level and risk
1 - This is just modifying the `--yes` argument and I've included tests
in the PR.
<!--
How complicated do you think these changes are? Grade on a scale from 1
to 5,
where 1 is a trivial change, and 5 is a deep-reaching and complex
change.
This complexity rating applies not only to the complexity apparent in
the diff,
but also to its interactions with existing and future code.
If you answered more than a 2, explain what is complex about the PR,
and what other components it interacts with in potentially concerning
ways. -->
# Testing
<!-- Describe any testing you've done, and any testing you'd like your
reviewers to do,
so that you're confident that all the changes work as expected! -->
- [x] `spacetime publish --yes` still works as expected
- [x] `spacetime publish --yes=all` does the same thing as `spacetime
publish --yes`
- [x] `spacetime publish --yes=remote` only skips asking if publishing
to a remote server is ok.
- [x] `spacetime publish --yes=migrate|remote` skips both the migrate
prompt and the remote server prompt.
- [x] new tests are passing
# Description of Changes
Make `Timestamp` a `FilterableValue` in Rust, C#, and TypeScript. I'm not
sure this changes all the relevant places, because we have both the
server and the client in those 3 languages.
# API and ABI breaking changes
It's an additive change.
# Expected complexity level and risk
3. There are some design decisions involved, like comparing timestamps to
strings/numbers.
# Testing
Added unit tests for the 3 languages.
# Description of Changes
This API used to be unimplemented, and the SDK tests did not exercise it.
It is now implemented, but while playing with Blackholio I noticed the C#
implementation was wrong.
For now I am going to fix Blackholio by avoiding use of this API, but we
should also correct the implementation and test it.
# API and ABI breaking changes
None
# Expected complexity level and risk
0
# Testing
Working on adding tests. If someone is more familiar with the SDK tests
I would appreciate help amending them.
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
# Description of Changes
Fix the `spacetime start --listen-addr` help text to match the actual
default value.
The flag already defaults to `0.0.0.0:3000`, but the help text
incorrectly said port 80.
# API and ABI breaking changes
None.
# Expected complexity level and risk
Complexity 1.
# Testing
- [x] `cargo fmt --all --check`
- [x] `cargo check -p spacetimedb-standalone`
Co-authored-by: clockwork-labs-bot <clockwork-labs-bot@users.noreply.github.com>
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
# Description of Changes
- Updated the Unreal SDK test harness to allow the Unreal Editor to work
on macOS
- Updated the Unreal SDK test handler to work with Nil as it's a special
case
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1 - Small changes to the Unreal SDK tests
# Testing
- [x] Ran full suite of tests on Mac for Unreal SDK
- [x] Ran full suite of tests on Windows + Linux for Unreal SDK to
confirm no regression
---------
Co-authored-by: Jason Larabie <jasonlarabie@Mac.lan>
# Description of Changes
Reapply changes from #4515 after reversion
# API and ABI breaking changes
No API or ABI changes
# Expected complexity level and risk
2 - The change itself is trivial, as it just reimplements #4515; however,
since #4515 had broken the `quickstart` smoketest, this should be
considered when reviewing this PR.
# Testing
- [X] Tested against `python3 -m smoketests quickstart` locally
---------
Signed-off-by: Ryan <r.ekhoff@clockworklabs.io>
Co-authored-by: Tyler Cloutier <cloutiertyler@aol.com>
Co-authored-by: Jason Larabie <jason@clockworklabs.io>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
Remove the Python smoketests and the CI check that tests for edits.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [ ] All CI passes
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
See the corresponding commits/PRs for descriptions.
# API and ABI breaking changes
None
# Expected complexity level and risk
1 -- individual PRs already reviewed.
# Testing
No semantic changes.
# Description of Changes
This adds an `enabled` option to the `useTable` hook in the React SDK,
allowing devs to conditionally disable a subscription without having to
wrap it in a component that is conditionally mounted.
With `enabled`, you can control subscription lifecycle as such:
```tsx
const [rows, isReady] = useTable(tables.messages, { enabled: isChatOpen });
```
This is a similar pattern in other data hooks (React Query's `useQuery`,
SWR, Apollo's `useSubscription`, etc).
When `enabled` is `false`:
- `computeSnapshot` returns `[[], true]` immediately (no data, ready
state)
- The subscription effect skips setup and resets `subscribeApplied`
- The event listener callback returns a no-op cleanup
When `enabled` flips back to `true`, the subscription is re-established
automatically via the dependency arrays.
# API and ABI breaking changes
None. The `enabled` field is optional and defaults to `true`, so
existing usage is unaffected.
# Expected complexity level and risk
1 — Single file change, additive only. The `enabled` flag gates existing
behavior behind early returns and is wired into the existing dependency
arrays. No interaction with other components.
# Testing I Did
- [x] Verify `useTable(tables.foo)` works as before (no `enabled` option
passed)
- [x] Verify `useTable(tables.foo, { enabled: false })` returns `[[],
true]` and does not subscribe
- [x] Verify toggling `enabled` from `false` to `true` establishes the
subscription and returns rows
- [x] Verify toggling `enabled` from `true` to `false` clears rows and
unsubscribes
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
# Description of Changes
Users have an odd tendency to overuse this param, which is really meant
for testing and some exceptional circumstances.
This PR hides it from the help so that it is less discoverable.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
- [x] Helptext no longer includes `--server-issued-login`
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Resolves #3652
# Description of Changes
Update the docs' nginx configuration to better support `spacetime logs
--follow` when self-hosting is configured to use SSL.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
I have added this location to my own nginx configuration and confirmed I
am able to `spacetime logs -s some-https-server <my-module> --follow`
and see logs immediately and continue following even if no log is
produced after 1 minute.
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
# Description of Changes
`MutTxId::add_columns_to_table` creates a new table but only copies
sequences to the in-memory state, which causes `autoinc` columns to reset
on module restart.
The existing implementation relies on `create_table_and_update_seq`
helper, which only updates the sequence state in memory. This change
ensures that the `allocation` is also persisted to the system table,
keeping it consistent across restarts.
# API and ABI breaking changes
NA
# Expected complexity level and risk
2
# Testing
Added a test which migrates a table and checks the `autoinc` column value
with and without a restart.
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Flipping on some new inputs to `Internal Tests` to get new
functionality.
I'll follow-up with a more detailed description in discord.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
2
# Testing
- [x] public PR without private PR just uses master
- [x] public PR with private PR uses that one
- [x] fails if private PR is not approved
- [x] fails if private PR does not pass its CI
- [x] passes if private PR is ready to go
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
1. This adds some basic tests of `LockedFile` (which would generally
have also passed before these changes).
2. `LockedFile` can now have contents, which are added to the errors if
we can't acquire the lock. The default metadata has the PID and a
timestamp.
I now see that there is a separate `Lockfile` that doesn't clean up
locks on a crash, so that should probably also be cleaned up (but in a
separate PR).
# Expected complexity level and risk
1.
# Testing
This has unit tests.
# Description of Changes
This is a copy of #4006 with only the updates to the way Unreal handles
ticks. As per @brougkr's findings:
- **FTickableGameObject initialization order bug** - Replaced with
FTSTicker for reliable tick registration:
- Removed FTickableGameObject inheritance
- Added FTSTicker::FDelegateHandle for manual tick registration
- Added destructor to clean up ticker registration
- Added OnTickerTick() method
### Issue
FTickableGameObject registers itself in its constructor BEFORE
UDbConnectionBase's constructor body runs. Even with
ETickableTickType::Never, UE's GENERATED_BODY() macro can interfere with
base class initialization order, causing the default constructor to be
called instead.
# API and ABI breaking changes
- Refactor of an underlying component of the SDK and non-breaking.
# Expected complexity level and risk
2 - This changes the structure of how the database tooling can
auto-tick, but is invisible to the developer
# Testing
As this changes a core feature I tested all aspects:
- [x] Reproduced the bug and confirmed against `master` and this branch
to see the fix working
- [x] Ran full suite of Unreal tests
- [x] Manually tested Unreal Blackholio
# Description of Changes
Revert the following PRs that have caused some breakage:
```
a32cffa76 Finish refactoring out replay (#4850)
d639be0af Replay: some code motion & reuse `ReplayCommittedState` (#4849)
78d6b6f7d Update NativeAOT-LLVM infrastructure to current ABI (#4515)
d5c1738c1 Better module backtraces for panics and whatnot (#577)
6f23b19f3 Wait for database update to become durable (#4846)
81c9eab86 Add `spacetime lock/unlock` to prevent accidental database deletion (#4502)
809aebd7c Move field `replay_table_updated` to `ReplayCommittedState` (#4807)
21b58ef99 Update axum (#2713)
b5cadff7a Extract replay stuff out of `CommittedState`, part 1 (#4804)
```
I also updated the Python smoketests for breakage introduced in
https://github.com/clockworklabs/SpacetimeDB/pull/4502. Reverting that
PR caused conflicts, so this fix is more straightforward.
# API and ABI breaking changes
Maybe kind of, but we haven't released any of these.
# Expected complexity level and risk
1
# Testing
Ask @bfops about testing
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Move the rest of replay logic to `mod replay`.
Closes #4055.
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
Just code motion.
# Description of Changes
Index code housekeeping: Privatizes `TableIndex` fields, fixes
`insert_index` panic logic, and adds some useful helpers for future
work.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
Covered by existing tests
# Description of Changes
Add support for btree indices where the keys are encoded byte strings,
e.g. for multi-column indices over types that are not unbounded (arrays
and strings) and are not floats.
The main interesting stuff in this PR is in `bytes_key.rs`, which defines
`RangeCompatBytesKey`, a type derived from `BytesKey` by converting
little-endian encoded integers to big-endian. Signed integers are now
also supported, but floats are not. `table_index/mod.rs` also includes a
bunch of interesting stuff.
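The core encoding trick can be illustrated on a single column. This is an assumed sketch of what `RangeCompatBytesKey` does per integer, based on the description above: big-endian bytes compare lexicographically in the same order as the unsigned numbers they encode, and XOR-ing the sign bit extends that to signed integers.

```rust
// Assumed illustration; not the actual RangeCompatBytesKey code.
// Flipping the sign bit maps i32::MIN..=i32::MAX onto 0..=u32::MAX in
// order, and big-endian bytes then sort the same way as the numbers.
fn order_preserving_i32(v: i32) -> [u8; 4] {
    ((v as u32) ^ 0x8000_0000).to_be_bytes()
}
```

Byte-wise comparison of these keys (as a btree over byte strings does) then matches numeric comparison, which is what makes range scans over the encoded keys come out in the right order.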
# API and ABI breaking changes
Technically this fixes pre-existing bugs in the handling of `Excluded`
ranges for multi-col indices.
# Expected complexity level and risk
2?
# Testing
- A proptest `order_in_bsatn_is_preserved` is now adjusted and enabled
to exercise the ordering of `RangeCompatBytesKey`.
- A proptest `btree_multi_col_range_scans_work` is added to check the
behavior of range scans on multi-col indices.
---------
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
This fix is intended to resolve the community reported issue #4659
# Description of Changes
Add handling for `UnaryOperator::Minus` and `UnaryOperator::Plus` in a
new `parse_insert_value()` used during `INSERT` operations. This works by
converting negative unary expressions to signed numeric strings in
`INSERT ... VALUES` clauses.
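The conversion can be sketched like this. The AST types here are a hypothetical minimal stand-in; the real code operates on the SQL parser's `UnaryOperator` and expression types.

```rust
// Hypothetical minimal AST for illustration only.
enum UnaryOperator {
    Minus,
    Plus,
}

enum Expr {
    Number(String),
    UnaryOp { op: UnaryOperator, operand: Box<Expr> },
}

// Fold a unary +/- into the numeric literal's string form, so that
// `-100.0` in a VALUES clause becomes the signed string "-100.0".
fn parse_insert_value(expr: &Expr) -> Option<String> {
    match expr {
        Expr::Number(s) => Some(s.clone()),
        Expr::UnaryOp { op: UnaryOperator::Minus, operand } => {
            parse_insert_value(operand).map(|s| format!("-{s}"))
        }
        // Unary plus is a no-op: just parse the operand.
        Expr::UnaryOp { op: UnaryOperator::Plus, operand } => {
            parse_insert_value(operand)
        }
    }
}
```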
# API and ABI breaking changes
None.
# Expected complexity level and risk
1 (Low). Localized parser fix.
# Testing
- [X] Ran local tests to verify negative numbers work in Rust,
TypeScript, and C# modules
- [X] Confirmed positive numbers and invalid expressions still work
correctly
* Testing involved running `spacetime sql module "INSERT INTO table
(col) VALUES (-100.0);"` from a CLI.
---------
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
Similar to some changes made in
https://github.com/clockworklabs/SpacetimeDB/pull/4269.
Found another smoketest with a name collision which could cause flakes
when running against a remote server.
# API and ABI breaking changes
<!-- If this is an API or ABI breaking change, please apply the
corresponding GitHub label. -->
# Expected complexity level and risk
1
# Testing
- [ ] Smoketests pass
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Migrate these checks into `cargo ci`:
- Check that packages are publishable
- Docs test
- TypeScript - Tests
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
2
# Testing
- [ ] CI passes
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Removed some "if we're on windows" checks in the CI so that we're always
running through `cargo ci update-flow`.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1.
# Testing
- [x] Upgrade flow tests pass
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Merged `typescript-test.yml` and `docs-test.yml` into `ci.yml`.
Note: The required checks will need to be updated when this PR is ready
to merge.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [x] CI passes
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Just moving code into functions so it can be reused, no functional
changes.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
None
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Several args for `cargo ci` had empty helptext, because we were only
printing the explicitly-annotated helptext. This PR updates it so that
inline helptext also shows in the README.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [x] Updated README has more helptext
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Invoke a private workflow when a PR merges, so that we can do extra
follow-up actions.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
2
# Testing
- [x] When a PR merged with a corresponding private PR, I got a discord
notification:
<img width="543" height="70" alt="image"
src="https://github.com/user-attachments/assets/209347c3-57be-47d7-8d75-6154c9e222cb"
/>
- [x] When a PR merged without a corresponding private PR, no discord
notification
---------
Signed-off-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Uses `Isolate::add_near_heap_limit_callback` to prevent unbounded heap
growth. Upon nearing the heap limit, we will now:
1. Call `Isolate::terminate_execution`.
2. Request that v8 double the heap limit.
Then, upon finishing a function call, we lower the heap limit back down.
This should hopefully fix the issue where v8 hits the heap limit and
crashes the whole process.
Also improves the way termination requests are checked for and
processed.
# Expected complexity level and risk
2
# Testing
- [ ] Manual testing with memory-leaky modules
# Description of Changes
Merge the `TypeScript - Lint` CI job into `cargo ci lint`.
Note that this removes the custom caching for the pnpm store, but we're
planning to overhaul our CI cache approach anyway.
# API and ABI breaking changes
None. CI only.
# Expected complexity level and risk
1
# Testing
- [x] Lint step passes on this PR
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
First two commits are code motion.
The second commit fixes a mistake I made in a previous PR that made us
use potentially several `ReplayCommittedState`s per
`datastore.replay(..)`.
More to come in terms of PRs; stay tuned.
# API and ABI breaking changes
None
# Expected complexity level and risk
3