# Description of Changes
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3729
I genuinely don't know what came over me.
# API and ABI breaking changes
None
# Expected complexity level and risk
1.5 - very straightforward, but not strictly trivial
# Testing
Adds automated integration tests (written in Rust and run with `cargo
test`; for the future, note this comment from @matklad about integration
tests:
https://internals.rust-lang.org/t/running-test-crates-in-parallel/15639/2):
- [x] Can publish an updated module if no migration is required
- [x] Can publish an updated module if auto-migration is required (with
the yes-break flag true/false)
- [x] Cannot publish if a manual migration is required
- [x] Can publish if a manual migration is required but the user
specified `--delete-data` (see the sketch after this list)
- [x] Can publish if a manual migration is required but the user
specified `--delete-data=on-conflict`
- [x] No data deletion occurs if no migration is required and
`--delete-data=on-conflict` is specified
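For illustration, a minimal sketch of how the `--delete-data` check might look; the harness names (`TestEnv`, `publish_module`, `publish_module_with_args`, `MODULE_V1`, `MODULE_V2`) are hypothetical stand-ins, not the actual test utilities:

```rust
// Hypothetical sketch only: `TestEnv` and friends stand in for the real harness.
#[test]
fn republish_with_delete_data_after_manual_migration() {
    let env = TestEnv::new();
    env.publish_module("my-db", MODULE_V1)
        .expect("initial publish should succeed");

    // MODULE_V2 requires a manual migration, so a plain republish must fail...
    assert!(env.publish_module("my-db", MODULE_V2).is_err());

    // ...but republishing with --delete-data clears the data and succeeds.
    env.publish_module_with_args("my-db", MODULE_V2, &["--delete-data"])
        .expect("publish with --delete-data should succeed");
}
```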
---------
Signed-off-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
Adds `ProcedureContext::{with_tx, try_with_tx}`.
Fixes https://github.com/clockworklabs/SpacetimeDB/issues/3515.
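A hedged usage sketch of the new methods; the exact signatures and the `#[spacetimedb::procedure]` attribute are assumptions based on the names above, not confirmed API:

```rust
// Assumed API shape: `with_tx` runs a closure inside a transaction, and
// `try_with_tx` is the fallible variant that surfaces errors to the caller.
#[spacetimedb::procedure]
fn adjust_balance(ctx: &mut ProcedureContext, account: u64, delta: i64) -> Result<(), String> {
    // Procedures are not themselves transactional, so table access is
    // scoped to an explicit transaction.
    ctx.with_tx(|tx| {
        // read and write tables via `tx` here
        let _ = (tx, account, delta);
    });
    ctx.try_with_tx(|tx| {
        // fallible transactional work
        let _ = tx;
        Ok::<(), String>(())
    })?;
    Ok(())
}
```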
# API and ABI breaking changes
None
# Expected complexity level and risk
2
# Testing
An integration test `test_calling_with_tx` is added.
# Description of Changes
Historically, controldb reads could be treated as just a data structure,
but that became a lie when reconnections were introduced.
Some tricks were employed to enter the async context when needed, but
those always bear the risk of introducing a deadlock somewhere.
So, just make it async.
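A hedged before/after sketch of the shape of this change; the type and method names are illustrative, not the real controldb API:

```rust
// Illustrative only: these names stand in for the actual controldb types.
struct Database {
    name: String,
}

struct ControlDb {/* e.g. a sled::Db plus reconnection state */}

impl ControlDb {
    // Before (sync): callable like a plain data structure, which tempts
    // async callers into block-in-place tricks that can deadlock.
    //
    // fn get_database(&self, name: &str) -> Option<Database> { ... }

    // After (async): a reconnection can be awaited directly, with no
    // blocking workarounds on the read path.
    async fn get_database(&self, name: &str) -> Option<Database> {
        let _ = name;
        None // placeholder body for the sketch
    }
}
```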
# Expected complexity level and risk
2
It may or may not be safe to use sled in an async context. We already
did so for the write path, but if it's a problem, it will show up now.
# Testing
Not a functional change.
# Description of Changes
So far, the `--clear-database` option to `publish` has simply dropped
and then re-created the database (if it did exist).
This will no longer work when databases can have "children": because
dropping and re-creating is not atomic, children would either become
orphans, or be dropped as well.
To solve this, `reset_database` is introduced as a separate action that:
- shuts down all replicas
- if a `program_bytes` is supplied, replaces the database's initial
program
- if a `host_type` is supplied, replaces the database's host type
- starts `num_replicas` or the previous number of replicas, which
initialize themselves as normal
As this could be its own CLI command, the action is provided as its own
API endpoint (undocumented).
However, since `publish` has no way of knowing whether the database it
operates on actually exists, the `publish_database` handler will just
invoke the `reset_database` handler if `clear=true` and the database
exists, and return its result. This is to avoid starting the transfer of
the program in the request body, only to receive a redirect.
Some refactoring was necessary to dissect and understand the flow.
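A hedged sketch of the `publish` delegation described above; the handler signatures and helper names are simplified stand-ins, not the real axum handlers:

```rust
// Simplified stand-ins for the real handlers and types.
struct Response(&'static str);

async fn database_exists(_name: &str) -> bool {
    true // placeholder
}

async fn reset_database(_name: String, _program: Option<Vec<u8>>) -> Response {
    // Shut down replicas, swap the program/host type, restart replicas
    // (see the list above).
    Response("reset")
}

async fn create_or_update_database(_name: String, _program: Vec<u8>) -> Response {
    Response("published")
}

async fn publish_database(name: String, clear: bool, program: Vec<u8>) -> Response {
    if clear && database_exists(&name).await {
        // Reset in place instead of drop + re-create, so child databases
        // stay attached, and return the reset handler's result directly
        // rather than redirecting after the program upload already began.
        return reset_database(name, Some(program)).await;
    }
    create_or_update_database(name, program).await
}
```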
# API and ABI breaking changes
Introduces a new, undocumented API endpoint.
We may want to nest it under `/unstable`.
# Expected complexity level and risk
2
# Testing
From the outside, the observed behavior should be as before,
so smoketests should cover it.
# Description of Changes
This reverts commit #3496.
# API and ABI breaking changes
Technically maybe yes? But definitely nothing is using the new code yet.
# Expected complexity level and risk
1
# Testing
CI only
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
This commit builds support for executing procedures in WASM modules.
This includes an HTTP endpoint,
`/v1/database/:name_or_address/procedure/:name POST`, as well as an
extension to the WS protocol. These new APIs are not wired up to the CLI
or SDKs, but I have manually tested the HTTP endpoint via `curl`. The
new WS extensions are completely untested.
Several TODOs are scattered throughout the new code, most notably for
sensibly tracking procedure execution time in the metrics.
I also expect that we will want to remove the `procedure_sleep_until`
syscall and the `ProcedureContext::sleep_until` method prior to release.
# API and ABI breaking changes
Adds new APIs and ABIs.
# Expected complexity level and risk
3? 4? Unlikely to break existing stuff, 'cause it's mostly additive, but
adds plenty of potentially-fragile new stuff. Notably, this is the first
time we're doing anything actually `async`hronous on a database core Tokio
worker, and we don't yet have strong evidence of how that will affect
reducer execution.
# Testing
- [x] Manually published `modules/module-test` and executed procedures
with the following `curl` invocations:
- `curl -X POST -H "Content-Type:application/json" -d '[]'
http://localhost:3000/v1/database/module-test/procedure/sleep_one_second`
- `curl -X POST -H "Content-Type:application/json" -d '[1223]'
http://localhost:3000/v1/database/module-test/procedure/return_value`
- [ ] Need to write automated tests.
---------
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
# Description of Changes
Adds `modules/benchmarks-ts`.
# API and ABI breaking changes
None
# Expected complexity level and risk
2?
# Testing
This is a test.
---------
Co-authored-by: Phoebe Goldman <phoebe@goldman-tribe.org>
Co-authored-by: Noa <coolreader18@gmail.com>
# Description of Changes
This commit extends various schema and schema-adjacent structures to
describe procedures, a new kind of database function that is allowed
to perform side effects.
This includes extending `RawModuleDefV9` with a way to register
`RawProcedureDefV9`s in the `misc_exports`, preserving compatibility
with modules that predate procedures.
The module validation path is reorganized somewhat to validate various
properties related to procedures while preserving code clarity and
maintainability.
Additionally, the `ArgsTuple` machinery for ser/de-ing reducer arguments
using the argument type as a seed is extended to also support procedure
arguments.
All of this is currently unused.
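A self-contained sketch of why routing new definitions through `misc_exports` preserves compatibility; these mimic types are illustrative, not the real `RawModuleDefV9`/`RawProcedureDefV9` definitions:

```rust
// Mimic types for illustration; the real definitions live in SpacetimeDB's
// schema crates and carry much more information.
#[derive(Default)]
struct RawModuleDefV9 {
    // ...tables, reducers, types, etc...
    misc_exports: Vec<MiscExport>,
}

struct RawProcedureDefV9 {
    name: String,
    // ...argument and return types, registered against the typespace...
}

enum MiscExport {
    Procedure(RawProcedureDefV9),
    // Future export kinds can be added here without breaking old modules.
}

fn main() {
    let mut module = RawModuleDefV9::default();
    module.misc_exports.push(MiscExport::Procedure(RawProcedureDefV9 {
        name: "send_webhook".into(),
    }));
    // A module built before procedures simply has an empty misc_exports
    // list, so it validates exactly as it did before.
    assert_eq!(module.misc_exports.len(), 1);
}
```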
# API and ABI breaking changes
Additive and backwards-compatible additions to `RawModuleDefV9` and
friends.
# Expected complexity level and risk
2 - some minor complexity in schema validation which may have gotten
borked in a merge at some point.
# Testing
Unsure what tests would be useful, open to suggestions from reviewers.
# Description of Changes
Currently based on #3361
Implements most of the TS module API (there is not yet a function for
type aliases).
# Expected complexity level and risk
# Testing
- [x] Extremely basic module stuff works
---------
Signed-off-by: Noa <coolreader18@gmail.com>
Co-authored-by: Tyler Cloutier <cloutiertyler@aol.com>
Co-authored-by: = <cloutiertyler@gmail.com>
Co-authored-by: Tyler Cloutier <cloutiertyler@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
# Description of Changes
In service of adding procedures, which will need to execute WASM code in
an `async` environment so as to suspend execution while making HTTP
requests and whatnot.
Prior to this PR, `JobCores` worked by spawning an OS thread for each
database. These threads would then each be pinned to a specific core,
and in multi-tenant deployments multiple threads could be pinned to the
same core. Now, instead, we spawn one thread per available core at
startup. Each of these threads runs a single-threaded Tokio executor.
Each database is assigned to one of these executors, and runs tasks on
it via `tokio::spawn`.
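A minimal sketch of this scheme under stated assumptions (it uses the `core_affinity` crate for pinning; the real `JobCores` implementation surely differs):

```rust
use std::sync::mpsc;
use tokio::runtime;

// One pinned OS thread per core, each driving a single-threaded Tokio
// runtime; databases receive a Handle and spawn their jobs into it.
fn spawn_core_executors() -> Vec<runtime::Handle> {
    let cores = core_affinity::get_core_ids().unwrap_or_default();
    cores
        .into_iter()
        .map(|core| {
            let (tx, rx) = mpsc::channel();
            std::thread::spawn(move || {
                core_affinity::set_for_current(core); // pin this thread
                let rt = runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .expect("failed to build core runtime");
                tx.send(rt.handle().clone()).expect("receiver dropped");
                // Park forever, driving whatever gets spawned onto this runtime.
                rt.block_on(std::future::pending::<()>());
            });
            rx.recv().expect("executor thread died during startup")
        })
        .collect()
}
```

Load balancing then amounts to handing a database a different `Handle` from this list, as described below.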
When we run without core pinning (usually due to having too few hardware
cores), we won't spawn any additional threads or Tokio runtimes at all;
instead we will run database jobs on the "global" Tokio executor. These
jobs may block Tokio worker threads, which might be an issue if a very
core-constrained device runs multiple databases with very long-running
reducers. If this is an issue, we could in this case instead build a
second Tokio runtime only for running database jobs, and let the OS
scheduler figure things out like it did previously.
Previously, we implemented load-balancing among the database cores by
occasionally instructing per-database threads to re-pin themselves. Now,
we instead periodically send the database a new
`tokio::runtime::Handle`, which it will `spawn` future jobs into.
Previously, it was possible for a database thread to become canceled,
most likely as a result of `ModuleHost::exit`, after which calls would
fail with `NoSuchModule`. Cancellation is no longer meaningful, as the
database holds a `Handle` to a long-lived `tokio::runtime::Runtime`,
which should always outlive the `ModuleHost`. I have added an
`AtomicBool` flag to `ModuleHost` which is flipped by `exit` and checked
by calls to maintain the previous functionality.
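A hedged sketch of that exit flag; the field and method names here are assumptions, not the actual `ModuleHost` internals:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

struct NoSuchModule;

// Illustrative stand-in for the real ModuleHost.
struct ModuleHost {
    exited: Arc<AtomicBool>,
}

impl ModuleHost {
    // `exit` no longer cancels a thread; it just flips the flag.
    fn exit(&self) {
        self.exited.store(true, Ordering::Release);
    }

    // Every call checks the flag to preserve the old NoSuchModule behavior.
    fn check_not_exited(&self) -> Result<(), NoSuchModule> {
        if self.exited.load(Ordering::Acquire) {
            Err(NoSuchModule)
        } else {
            Ok(())
        }
    }
}
```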
Within this PR, the jobs run on the database-execution Tokio tasks are
not actually asynchronous; they will never yield. This is important
because these jobs may (will) hold a transaction open, and attempting to
swap execution to another job which wants a transaction on the same
database would be undesirable.
Note that this may regress our multi-tenant performance / fairness:
previously, in multi-tenant environments, the OS scheduler would divide
the database cores' time between the per-database threads, potentially
causing one high-load database to be interrupted in the middle of a
reducer in order to run other databases pinned to the same core. Now, a
high-load database will instead run its entire reducer to completion
before any other database gets to run.
We could, in the future, change this by instructing Wasmtime to yield
periodically, either via [epoch
interruption](https://docs.wasmtime.dev/api/wasmtime/struct.Store.html#method.epoch_deadline_async_yield_and_update)
or
[fuel](https://docs.wasmtime.dev/api/wasmtime/struct.Store.html#method.fuel_async_yield_interval),
both of which we're already configuring Wasmtime to track. We'd need (or
at least want) to (re-)introduce a queue such that we only attempt to run one
job for each database at a time. I have chosen not to do so within this
patch because I felt the changeset was complex enough already, and we
have so far not treated fairness in multi-tenant environments as a high
priority.
I have also reworked our module host machinery to no longer use dynamic
dispatch and trait polymorphism to manage modules and their instances,
and instead introduced `enum Module` and `enum Instance`, each of which
has a variant for Wasm and another for V8.
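A self-contained sketch of that enum-based dispatch; the variants here are stubs, whereas the real `Module`/`Instance` carry Wasmtime and V8 state:

```rust
// Stub payloads standing in for the real Wasmtime/V8 instance state.
struct WasmInstance;
struct V8Instance;

enum Instance {
    Wasm(WasmInstance),
    V8(V8Instance),
}

impl Instance {
    // Static dispatch via match replaces the previous `dyn Trait`
    // indirection, and a plain enum is easy to send to a job executor
    // and get back when the job exits.
    fn call_reducer(&mut self, _name: &str) {
        match self {
            Instance::Wasm(_inst) => { /* call into the Wasm instance */ }
            Instance::V8(_inst) => { /* call into the V8 isolate */ }
        }
    }
}
```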
During this rewrite, I reworked `AutoReplacingModuleInstance`, which
previously used type-erased trait generics in a way that was brittle and
hard to re-use in the new `async` context. (Specifically, the module
instance no longer lives on the job thread; rather, the database grabs
the instance and sends it to the job thread, then gets it back when the
job exits. This is necessary to allow the re-worked load balancing
described above, as we can't have a single long-lived async task.) While
refactoring, I replaced it with `ModuleInstanceManager`, which can now
maintain multiple instances of the same module. This is not yet useful,
but will become necessary with procedures, as each concurrent procedure
will need its own instance. Relatedly, I changed
`ModuleHost::on_module_thread` (used by one-off and initial subscription
queries) to no longer acquire the/an instance. I discussed this with the
team, and consensus was that "locking" the module instance in that path
was not a useful behavior, just an artifact of the previous
implementation.
I have also switched our Wasmtime configuration to set
`async_support(true)`. This causes a variety of methods, notably
`InstancePre::instantiate` and `TypedFunc::call`, to panic, and requires
that we instead call their `_async` variants. As mentioned above, I have
not yet introduced any actual asynchronicity or concurrency, so these
methods should never yield. Rather than `.await`ing their futures, I
have defined a degenerate `async` executor, `poll_once_executor`, which
polls a future exactly once, failing if it does not return
`Poll::Ready`. This means that we will panic if one of these futures
returns `Poll::Pending` unexpectedly.
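For concreteness, a sketch of what such a poll-once executor can look like; this illustrates the idea, and is not necessarily the actual `poll_once_executor`:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker: the future is required to complete on its first poll,
// so nothing ever needs waking.
fn noop_raw_waker() -> RawWaker {
    unsafe fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Poll the future exactly once, failing loudly if it is not ready.
fn poll_once<F: Future>(fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let fut = pin!(fut);
    match fut.poll(&mut cx) {
        Poll::Ready(out) => out,
        Poll::Pending => panic!("future returned Poll::Pending unexpectedly"),
    }
}
```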
The previous `trait Module` had a method `initial_instances`. `Module`
is now a concrete type, and I gave it this method, but it appears to be
unused. This is causing lints to fail. I am unsure what, if anything,
that method was for.
The previous `AutoReplacingModuleInstance` called `create_instance` on
the job thread. I am unsure if this was intentional, or just an artifact
of the previous implementation, where the `AutoReplacingModuleInstance`
lived on the job thread. I have written the new `ModuleInstanceManager`
to call `create_instance` on the calling thread, but it would be easy to
move that call into the job executor if that behavior is desired.
# API and ABI breaking changes
None user-facing
# Expected complexity level and risk
4: significant rewrite of performance-sensitive fiddly concurrency code.
Note specifically in above description:
- Running database jobs on the global Tokio runtime when not using core
pinning.
- Multi-tenant fairness issue: no longer possible to interrupt a
performance-intensive database mid-reducer to run another database
pinned to the same core.
- Unused method `initial_instances`.
- Running `create_instance` on the calling thread rather than the
database thread.
# Testing
- [x] Will arrange for a bot test.
- [ ] Determine to what extent we can run with real or synthetic
multi-tenant load in a test or staging environment.
---------
Signed-off-by: Phoebe Goldman <phoebe@goldman-tribe.org>
Co-authored-by: Mazdak Farrokhzad <twingoow@gmail.com>
Co-authored-by: joshua-spacetime <josh@clockworklabs.io>
# Description of Changes
Closes #3219
This adds the Unreal SDK, the new Unreal test cases, updates the test
runner to handle Unreal, codegen updates for Unreal, and a QuickStart
Chat.
# API and ABI breaking changes
No breaking changes.
# Expected complexity level and risk
2 - This impacts the `generate` subcommand (generate.rs) to include
unrealcpp, and expands crates/testing for Unreal
# Testing
- [x] Run the new Unreal tests
- [x] Run any previous automation testing - with all the changes to
generate/testing I'm uncertain if there is an impact
- [x] Review the new CLI generate documentation changes
---------
Co-authored-by: Phoebe Goldman <phoebe@goldman-tribe.org>
Co-authored-by: Zeke Foppa <196249+bfops@users.noreply.github.com>
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>
Co-authored-by: John Detter <4099508+jdetter@users.noreply.github.com>
# Description of Changes
This adds a new system table to store the jwt payloads of connected
clients. I'm planning to use this system table to expose client claims
to modules in subsequent PRs.
The new table is called `st_connection_credentials`. It is a **private**
system table which stores a mapping from `connection_id` to
`jwt_payload`. Note that a JWT payload is a JSON representation of the
client's claims, not a fully signed token.
The times when we need to insert and delete these rows closely mirror
those of the existing `st_client` table, with 1.5 exceptions:
1. We weren't previously inserting into `st_client` until after the
`OnConnect` reducer ran (even though it was in the same transaction). We
want `st_connection_credentials` to be populated before calling the
reducer, so that the reducer can use it to get the credentials, so I made
a change to insert into `st_client` and `st_connection_credentials` before
calling the reducer.
2. This difference has not yet materialized, but when clients start
sending refresh tokens, we will probably need to update the credentials
stored in this table.
This also enforces uniqueness of connection ids. A duplicate connection
id will now make the on-connect reducer fail (since it will violate
uniqueness when trying to insert to `st_connection_credentials`).
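For illustration, the shape of the new table as described above, sketched as a Rust struct; the exact column types are assumptions:

```rust
// Illustrative shape only; real column names/types may differ beyond
// what the description above states.
struct StConnectionCredentialsRow {
    /// Unique: inserting a duplicate connection id now fails the
    /// on-connect reducer's transaction.
    connection_id: u128,
    /// The JSON-encoded claims from the client's JWT (not a signed token).
    jwt_payload: String,
}
```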
# Expected complexity level and risk
2.5
Adding a system table is a bit risky. This is almost rollback safe, with
one annoying case that is worth calling out:
If a database is created with this system table, opening it with an
older version of spacetimedb will only work if there is a snapshot of
the database. If we try to load a table without a snapshot, replaying
will fail on the first row for that table. This is because we don't
write the table schema information to the commit log when creating a
database. In practice, this is unlikely to be an issue, because new
databases asynchronously trigger a snapshot immediately after creation.
Migrating existing databases will be fine. On startup this will detect
that there is a missing system table, and add it in a way that writes it
to the commit log. Since it is in the commit log, we can open the
database with an older version and still understand the data for that
table.
# Testing
There are unit tests that cover opening a database created with an older
version (which doesn't have this table).
I manually tested opening a migrated database with an older version of
spacetimedb.
# Description of Changes
- Adds an endpoint for pretty-printing the migration plan.
- It also changes the current `publish` endpoint to optionally accept a
`MigrationToken` and `MigrationPolicy`, allowing migrations that break
clients.
# API and ABI breaking changes
Backward-compatible change to the existing API, plus a new API
# Expected complexity level and risk
2
# Testing
- Existing smoketests should cover the changes to the `publish` endpoint.
- For the pretty-print endpoint, smoketests can only be written after the
CLI changes land.
---------
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
Co-authored-by: James Gilles <jameshgilles@gmail.com>
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
# Description of Changes
We recently merged several repos together. This PR clarifies the license
terms for several subdirectories, as well as the relationship between
the licenses.
The licenses in our subdirectories have become symbolic links to
licenses in our toplevel `licenses` directory. For any particular
subdirectory's license file in the diff, you can click `... -> View
file` and then click on the text that says "Symbolic Link" on that page.
This will take you to the license file that it links to.
I have also updated the `tools/upgrade-version` script to update the
change date in the new `licenses/BSL.txt` file.
# API and ABI breaking changes
None.
# Expected complexity level and risk
1
# Testing
None. Only changes to license files.
---------
Co-authored-by: Zeke Foppa <bfops@users.noreply.github.com>