During bootstrapping, indexes don't exist,
so `iter_by_col_eq` returns rows in physical storage order.
In simple cases (without unrelated deletes) this will be the same as insertion order,
and therefore also the same as sorted order,
but in cases with multiple column-changing automigrations
the physical storage order of the rows diverges from the correct sorted order.
Because this isn't a hot path, we can just call `.sort()` and not worry about it.
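A minimal sketch of the idea (names hypothetical; the real `iter_by_col_eq` yields row references, not integers): rows come back in physical storage order, and an explicit sort restores the expected order.

```rust
// Hypothetical sketch: during bootstrapping there is no index, so rows come
// back in physical storage order; an explicit sort restores sorted order.
fn rows_in_sorted_order(physical_order: &[u64]) -> Vec<u64> {
    let mut rows: Vec<u64> = physical_order.to_vec();
    // Not a hot path, so a plain sort is fine.
    rows.sort();
    rows
}

fn main() {
    // After column-changing automigrations, physical order can diverge
    // from insertion (and therefore sorted) order.
    let physical = [3, 1, 2];
    assert_eq!(rows_in_sorted_order(&physical), vec![1, 2, 3]);
    println!("{:?}", rows_in_sorted_order(&physical));
}
```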
# Description of Changes
Fixes #3174.
During initialization, entries were added to the `DelayQueue` but not to
`key_map`.
### Detailed Explanation:
1. `DelayQueue` is **not set-semantic**, so we track uniqueness with a
`key_map: FxHashMap`, but that map wasn't updated during initialization.
2. `Scheduler::schedule` is **not transactional**: it enqueues reducers
even if the DB transaction later fails (abort, duplicate row, etc.). On
yield, `SchedulerActor` checks the DB before execution.
**Combined Effect**: A transaction that does not actually change a
scheduled entry but still calls `Scheduler::schedule` after a module
update will cause a duplicate entry in the `DelayQueue`, since `key_map`
does not yet contain that entry.
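A minimal sketch of the invariant the fix restores (a `Vec` stands in for `DelayQueue`; names and types are simplified): every insertion into the queue, including those made during initialization, must be gated by the key map.

```rust
use std::collections::HashMap;

// Hypothetical sketch: `DelayQueue` is not set-semantic, so a separate key
// map must gate every enqueue, including those made during initialization.
struct Scheduler {
    delay_queue: Vec<u64>,        // stand-in for the real DelayQueue
    key_map: HashMap<u64, usize>, // schedule id -> queue key
}

impl Scheduler {
    fn schedule(&mut self, id: u64) {
        // Only enqueue if this id has not already been scheduled.
        if !self.key_map.contains_key(&id) {
            self.delay_queue.push(id);
            self.key_map.insert(id, self.delay_queue.len() - 1);
        }
    }
}

fn main() {
    let mut s = Scheduler { delay_queue: Vec::new(), key_map: HashMap::new() };
    s.schedule(42); // initialization now records the entry in key_map too
    s.schedule(42); // a later `Scheduler::schedule` call is a no-op
    assert_eq!(s.delay_queue.len(), 1);
}
```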
**Why It Didn’t Show Earlier**:
When a repeating reducer executes, we re-schedule it by updating both
`DelayQueue` and `key_map` correctly.
The bug only appears in the window after a module update but before the
first execution, if a transaction calls `schedule` without actually
modifying the DB row.
Which was indeed happening, per the Discord chat:
> but yeah most likely order of event was module was updated
> and then update_scheduled_timers_from_static_data was called
The window between the module update and the first execution is 1 hour in
this case.
## Repro steps:
1. Publish this module; it causes the `send_scheduled_message` reducer to be
called every 10 seconds.
```rust
use std::time::Duration;
use spacetimedb::{ReducerContext, ScheduleAt, Table};

#[spacetimedb::table(name = scheduled_message, public, scheduled(send_scheduled_message))]
pub struct ScheduledMessage {
    #[primary_key]
    #[auto_inc]
    scheduled_id: u64,
    scheduled_at: ScheduleAt,
}

#[spacetimedb::reducer]
fn send_scheduled_message(ctx: &ReducerContext, _sched: ScheduledMessage) -> Result<(), String> {
    log::info!("Sending scheduled message: {:?}", ctx.timestamp);
    Ok(())
}

#[spacetimedb::reducer(init)]
pub fn init(ctx: &ReducerContext) {
    ctx.db.scheduled_message().insert(ScheduledMessage {
        scheduled_id: 0,
        scheduled_at: Duration::from_secs(10).into(),
    });
}

#[spacetimedb::reducer]
pub fn update_timer(ctx: &ReducerContext) {
    // Rewrites each row with the same values: no actual DB change,
    // but it still triggers `Scheduler::schedule`.
    for mut timer in ctx.db.scheduled_message().iter() {
        timer.scheduled_at = Duration::from_secs(10).into();
        ctx.db.scheduled_message().scheduled_id().update(timer);
        log::info!("building decay agent timer was updated");
    }
}
```
2. Update module to support automigration (add a table) and re-publish
it.
3. Call reducer `update_timer` and do it before first execution of
`send_scheduled_message` after updating module.
4. Since `update_timer` doesn't change the existing schedule row but still
calls `Scheduler::schedule`, it causes a duplicate entry in the `DelayQueue`.
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1, pretty obvious fix.
# Testing
Manually. The code fix is straightforward, but the issue only becomes
visible under specific conditions.
The `Clone` impl for `ClientConnection` would create an independent
instance that could not observe module hotswapping. This caused methods
called on a replaced `ModuleHost` to fail, because that host had already
exited.
Fix by reading the `ModuleHost` from the watch channel directly, instead
of maintaining a redundant copy.
Also fix `watch_module_host` to properly mark the current module host as
seen.
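A minimal sketch of the fix's shape, using `Arc<RwLock<_>>` as a stand-in for the watch channel (the real code uses a watch channel of `ModuleHost`): clones share one slot and read the current value on every call, instead of each holding a private copy that goes stale after a hotswap.

```rust
use std::sync::{Arc, RwLock};

// Hypothetical sketch: rather than each clone keeping a redundant copy of
// the module host, clones share one slot and read through it on every call.
#[derive(Clone)]
struct ClientConnection {
    module: Arc<RwLock<String>>, // stand-in for the watch channel of ModuleHost
}

impl ClientConnection {
    fn current_module(&self) -> String {
        // Always read the shared slot, never a private copy.
        self.module.read().unwrap().clone()
    }
}

fn main() {
    let slot = Arc::new(RwLock::new("host-v1".to_string()));
    let conn = ClientConnection { module: slot.clone() };
    let cloned = conn.clone();
    // Simulate a module hotswap.
    *slot.write().unwrap() = "host-v2".to_string();
    // Both the original and the clone observe the replacement.
    assert_eq!(conn.current_module(), "host-v2");
    assert_eq!(cloned.current_module(), "host-v2");
}
```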
# Expected complexity level and risk
2
# Testing
- [x] test suite passes
- [x] ran @bfops repro script
# Description of Changes
Bumped all versions to 1.3.0 in preparation for an upcoming minor
release.
# API and ABI breaking changes
No breaking changes.
# Expected complexity level and risk
1
# Testing
None
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
New Unity test suite runs will cancel existing in-progress runs on the
same PR or GitHub ref.
I tried to add a global limit of 2 test suites running at the same time,
but I did not find a successful way of making this work.
# API and ABI breaking changes
None. CI-only change.
# Expected complexity level and risk
1
# Testing
- [x] Test suite still runs and succeeds
- [x] If I push several commits in a row, previous runs get cancelled
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Out-of-band discussions with the BitCraft team brought up questions
about whether it was possible for a rejected client connection to start
an expensive computation like a subscription before their connection was
killed, e.g. by sending a `Subscribe` message along the WebSocket before
`client_connected` had finished returning `Err`.
I don't believe this was actually possible, as `ClientConnection::spawn`
called and awaited `call_identity_connected` before invoking its `actor`
closure, and it was that `actor` which processed `Subscribe` messages.
But it was somewhat difficult to verify that behavior, and so I
re-organized the code so that the outer layer of the `subscribe` handler
obviously had that property without having to step into
`ClientConnection::spawn`.
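A minimal sketch of the ordering property this reorganization makes obvious (names hypothetical, written synchronously for brevity; the real code is async): the connect callback runs to completion, and may reject, before the actor that processes `Subscribe` messages ever starts.

```rust
// Hypothetical sketch: a rejected client never reaches the actor that
// processes `Subscribe` messages, because the connect callback runs first.
fn call_identity_connected(reject: bool) -> Result<(), String> {
    if reject { Err("client_connected returned Err".into()) } else { Ok(()) }
}

fn handle_ws_connection(reject: bool, log: &mut Vec<&'static str>) -> Result<(), String> {
    call_identity_connected(reject)?; // completes (or short-circuits) first
    log.push("actor started");        // only reached for accepted clients
    Ok(())
}

fn main() {
    let mut log = Vec::new();
    assert!(handle_ws_connection(true, &mut log).is_err());
    assert!(log.is_empty()); // rejected client never started the actor
    assert!(handle_ws_connection(false, &mut log).is_ok());
    assert_eq!(log, vec!["actor started"]);
}
```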
I also added some additional logging to the subscribe route, including
the `X-Forwarded-For` header in more messages, as the BitCraft team
complained about having difficulty correlating IP addresses with
connections. The log levels remain the same as before, just with
additional information added:
- Successful connections are at `debug` level,
- Rejected connections are at `info` level (these are the ones BitCraft
cares about in this case).
- Failed connections are at `warn`.
As the levels are unchanged, this should not add undesirable log noise.
# API and ABI breaking changes
N/a
# Expected complexity level and risk
3? The `subscribe` route was complex and remains so. I believe this
change simplifies the code and makes the logic more obvious, but
reviewers should take care to verify that the behavior actually is
equivalent as I believe.
# Testing
- [ ] I would like a test deployment of BitCraft to staging or
something, or to include this patch in the next bot test that we do
anyways, just for sanity's sake.
# Description of Changes
Some small CI tweaks related to the monorepo merge.
Once we've landed these, I think we can make most of the new checks
required.
# API and ABI breaking changes
CI only changes
# Expected complexity level and risk
1
# Testing
None, just tweaked names.
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Bumping this in preparation for an upcoming release.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None, just a version bump
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
I ran `python3 tools~/upgrade-version.py 1.2.2` in preparation for an
upcoming release.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None, just a version bump.
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
This PR helps to support React Native in the Typescript SDK. We have
identified two issues:
1. Certain versions of React Native exhibit a bug where the constructor
mistakenly treats a URL object as a string. This causes an error when
the constructor attempts to call `.endsWith()` on the URL object,
leading to runtime errors. This PR adds a `.toString()` call when passing
a URL instance to the URL constructor.
2. React Native doesn't provide implementations of `TextEncoder` and
`TextDecoder`, which are used to read/write strings in our binary reader
and writer. This PR introduces a library providing these two types.
Note: this still requires a polyfill, as React Native's `URL`
implementation is missing most methods in versions older than 0.79 (more
than 6 months old).
# Description of Changes
We recently merged several repos into the SpacetimeDB repo. This PR
updates the docs accordingly.
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
None, just docs
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
Add a guard against stack overflow in the case of deeply nested
expressions and joins.
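A minimal sketch of this style of guard (the limit, expression type, and error type are hypothetical): each recursive call threads a depth counter and bails out with an error before the stack can overflow.

```rust
// Hypothetical sketch of a recursion-depth guard for a planner walking
// nested expressions/joins; the real limit and error type will differ.
const MAX_DEPTH: usize = 64;

enum Expr {
    Leaf,
    Join(Box<Expr>, Box<Expr>),
}

fn check_depth(expr: &Expr, depth: usize) -> Result<(), String> {
    if depth > MAX_DEPTH {
        return Err(format!("expression nested deeper than {MAX_DEPTH}"));
    }
    match expr {
        Expr::Leaf => Ok(()),
        Expr::Join(l, r) => {
            check_depth(l, depth + 1)?;
            check_depth(r, depth + 1)
        }
    }
}

fn main() {
    // Build a left-deep chain of joins deeper than the limit.
    let mut e = Expr::Leaf;
    for _ in 0..100 {
        e = Expr::Join(Box::new(e), Box::new(Expr::Leaf));
    }
    assert!(check_depth(&e, 0).is_err());
    assert!(check_depth(&Expr::Leaf, 0).is_ok());
}
```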
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [x] Determined how deep the affected functions can recurse and put a
limit on it
- [x] Added an extra test to prove we can (in theory) do lots of joins in
the planning steps, even if executing them would be slow
# Description of Changes
Adds a method on `SnapshotDir` to return snapshot offset.
# API and ABI breaking changes
N/A
# Expected complexity level and risk
1
# Testing
Added unit test
# Description of Changes
This is a fix for a community-reported bug where the auth token is not
used when creating the WebSocket connection in Unity WebGL builds.
The issue:
https://github.com/clockworklabs/com.clockworklabs.spacetimedbsdk/issues/352
# API and ABI breaking changes
None
# Expected complexity level and risk
1
# Testing
- [ ] Using OAuth2.0 tokens, you should get the same `Identity` even
when the token is refreshed in the WebGL build
Python has a funny way of dealing with absent values.
Fixes (again) the restart tests to handle slowly restarting containers.
# Expected complexity level and risk
1
# Testing
n/a
The `subscribe` command would drop the connection without sending a
close frame. Doing so creates a warning on the server, which can be
distracting when debugging other connection issues.
# Expected complexity level and risk
1
# Testing
Use the `subscribe` command with a local database and compare server
logs before and after.