* datastore: impl mem arch using spacetimedb_table
* instance_env/iters: write bflatn -> bsatn directly
* Make tests build, fix 1 failure (of 4)
Commit message from SpacetimeDBPrivate 0ae26519d181cb4b13f748af53d036c322d4f642:
Bug fix in `delete_by_rel`: don't trigger unique constraints in dummy insert
Prior to this commit, `MutTxId::delete_by_rel` on a table with a unique constraint
would fail, because the `delete_by_rel` would attempt an insert of the row to be deleted,
with the intention of rolling back the insertion later.
This would trigger the unique constraint, aborting the `delete_by_rel`,
and the actual row would not be deleted.
With this commit, `MutTxId::delete_by_rel` properly uses
`Table::insert_internal_allow_duplicate` and `Table::delete_internal_skip_pointer_map`
to bypass unique constraints.
This requires exporting some methods from `Table` which were previously internal.
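In shape, the fixed path looks roughly like the following sketch; apart from
the two `Table` methods named above, every type and helper here is a
hypothetical stand-in, not the actual `spacetimedb-table` API:

```rust
// Illustrative sketch only; `Table`, `ProductValue`, `Error`, `find_same_row`
// and `mark_deleted` are hypothetical stand-ins.
fn delete_by_row_value(table: &mut Table, row: &ProductValue) -> Result<bool, Error> {
    // Materialize the row temporarily WITHOUT running unique-constraint checks,
    // so that the row already being present cannot abort the delete.
    let temp_ptr = table.insert_internal_allow_duplicate(row)?;

    // Use the temporary copy to locate the committed row, then remove the
    // temporary copy again, bypassing the pointer map.
    let committed_ptr = table.find_same_row(temp_ptr);
    table.delete_internal_skip_pointer_map(temp_ptr);

    // Mark the committed row (if any) as deleted in this transaction.
    Ok(committed_ptr.map(|ptr| table.mark_deleted(ptr)).is_some())
}
```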
* Remove forgotten `dbg`
* `bflatn_to`: panic rather than UB on an ill-typed `ProductValue`
The docs in `bflatn_to.rs` claim that we'll panic
if the `ProductValue` passed to `write_row_to_pages`
does not conform to the type.
Prior to this commit, we would panic if some element of the `val`
had a different type constructor than the corresponding element of the `ty`,
but for a `ProductValue`, we would not panic if the value had more or fewer elements
than its `ProductType`.
When the value had more elements, this was merely a logic bug;
we'd drop the tail elements and go about our lives.
When the value had fewer elements, this invoked UB;
we'd leave the remainder of the row in the page uninit,
and later access uninitialized memory e.g. when hashing the row.
This commit adds a check in `write_product`
that `val` and `ty` have the same number of elements.
If they do not, we now panic.
This will need to be revised to return an `Err`, as spacetimedb-core has tests
that attempting to insert an ill-typed value returns an `Err` rather than panicking.
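The added guard is roughly of the following shape (a standalone sketch with
simplified types, not the actual `bflatn_to.rs` code):

```rust
// Simplified stand-ins for the real BFLATN types.
struct ProductType { elements: Vec<&'static str> } // element type tags
struct ProductValue { elements: Vec<String> }      // element values

fn write_product(ty: &ProductType, val: &ProductValue) {
    // Without this check, a too-long value silently dropped its tail elements,
    // and a too-short value left the rest of the row uninitialized (UB when
    // the row was later read, e.g. for hashing).
    assert_eq!(
        val.elements.len(),
        ty.elements.len(),
        "ProductValue has {} elements but its ProductType has {}",
        val.elements.len(),
        ty.elements.len(),
    );
    for (elem_ty, elem_val) in ty.elements.iter().zip(&val.elements) {
        // ... write each element into the page (elided) ...
        let _ = (elem_ty, elem_val);
    }
}
```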
* Error handling, runtime schema mutations
This commit makes two sorts of changes:
- It begins filling in the error-handling story of the new mem-arch
by defining a few new error enums in the `spacetimedb-table` crate,
and converting them into `TableError` in `spacetimedb-core`.
- It implements `drop_table`, `rename_table` and friends to mutate a schema at runtime.
These operations are broken in master in that they violate transactionality,
and they're broken here too.
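The error-handling shape is roughly the following (a hypothetical sketch: the
variant names, the use of `thiserror`, and the exact `TableError` conversion
are all assumptions for illustration):

```rust
use thiserror::Error;

// Defined in `spacetimedb-table` (illustrative variants):
#[derive(Debug, Error)]
pub enum InsertError {
    #[error("unique constraint violated on column {col}")]
    UniqueConstraintViolation { col: usize },
    #[error("row value does not match the table's row type")]
    TypeMismatch,
}

// In `spacetimedb-core`, the low-level errors are folded into `TableError`:
#[derive(Debug, Error)]
pub enum TableError {
    #[error(transparent)]
    Insert(#[from] InsertError),
}
```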
* Re-enable metrics
* Resolve some TODOs, incl. error type in `page.rs`
* Fixes for Tyler's comments
- Restrict some `pub` to `pub(crate)`
- Rename `MutTxId::delete_by_rel` to `delete_by_row_value`, since it deletes only one row.
- `relations` -> `relation`.
- Rename `ignore_duplicate_insert` -> `ignore_duplicate_insert_error` and add docs.
- nix `with_table_*` methods in favor of imperative `get_table_*`.
* Note correctness of `state_view::Iter` and add test in `relational_db`.
Tyler had concerns about the behavior of `state_view::Iter`
w.r.t. the edge case of a committed-deleted-inserted row,
since the new datastore maintains the invariant that
the committed and inserted tables are disjoint,
where the previous datastore did not.
This commit adds comments describing that invariant to the iterator,
and adds a test in `relational_db` that the external semantics are preserved.
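The edge case being tested is roughly the following (a hedged sketch; every
helper and type name here is a hypothetical stand-in, not the actual
`relational_db` test API):

```rust
#[test]
fn committed_deleted_reinserted_row_is_yielded_exactly_once() {
    let db = TestDb::new();
    let table_id = db.create_table("t");

    // Commit a row first.
    db.with_mut_tx(|tx| insert_row(tx, table_id, row(1)));

    // Then, in a later transaction, delete it and insert it again.
    db.with_mut_tx(|tx| {
        delete_row(tx, table_id, row(1));
        insert_row(tx, table_id, row(1));
        // Whether or not the datastore keeps the committed and inserted tables
        // disjoint internally, the iterator must yield the row exactly once.
        assert_eq!(scan_rows(tx, table_id), vec![row(1)]);
    });
}
```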
* Add notes for future MVCC implementors re: disjoint committed and insert tables
Tyler convinced me via video call that, in MVCC, it is no longer valid
to treat the committed and insert tables as disjoint,
i.e. to elide inserts of already-present rows.
Doing so is still valid, and is more performant, in the current locking model,
so we retain that behavior here.
This commit adds comments to relevant code paths so that, when we move to MVCC,
we do not forget to remove the optimization.
---------
Co-authored-by: Phoebe Goldman <phoebe@goldman-tribe.org>
A database may continue to write to the log after a previous write
failed (due to an I/O error). At that point, offsets have already been
updated, which results in gaps in the offset sequence (which we can't
tolerate currently).
To fix this, the lock on `unwritten_commit` is acquired and held across
the write to the log segment. Only after that write has succeeded
are the offsets and the parent hash updated.
Closes #748.
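In shape, the fix looks roughly like this sketch (the struct, its fields and
the surrounding types are illustrative stand-ins for the commitlog's actual
ones):

```rust
use std::io::Write;
use std::sync::Mutex;

struct UnwrittenCommit {
    next_offset: u64,
    parent_hash: [u8; 32],
}

struct CommitLog<W: Write> {
    unwritten_commit: Mutex<UnwrittenCommit>,
    segment: Mutex<W>,
}

impl<W: Write> CommitLog<W> {
    fn append(&self, bytes: &[u8], new_hash: [u8; 32]) -> std::io::Result<()> {
        // Hold the guard across the whole operation so a failed write cannot be
        // interleaved with another append, which would create a gap in offsets.
        let mut commit = self.unwritten_commit.lock().unwrap();
        self.segment.lock().unwrap().write_all(bytes)?; // may fail with an I/O error
        // Only reached if the write succeeded: now it is safe to advance.
        commit.next_offset += 1;
        commit.parent_hash = new_hash;
        Ok(())
    }
}
```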
This patch adds a temporary feature flag for enabling db metrics.
Metrics are recorded synchronously at the moment.
This can have a noticeable impact on latency.
Compiling with this flag will enable metrics collection.
This new flag will be turned on by default.
Hence metrics will be collected by default.
Note this flag is temporary.
It will be removed once metrics are recorded async.
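In shape, the flag looks roughly like this (the feature name "db-metrics" and
the call site are assumptions, not the actual flag added by this patch):

```rust
// In Cargo.toml (illustrative):
//     [features]
//     default = ["db-metrics"]   # on by default until recording is made async
//     db-metrics = []
pub fn record_query_duration(millis: u64) {
    #[cfg(feature = "db-metrics")]
    {
        // Synchronous recording: this is the latency cost the flag lets us opt out of.
        println!("query took {millis} ms"); // stand-in for the real metrics store
    }
    #[cfg(not(feature = "db-metrics"))]
    let _ = millis;
}
```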
* made MutTxId -> MutTx
* adopt using an associated type for tx
* fmt
* read tx in datastore
lint
test
lint
made MutTxId -> MutTx
adopt using an associated type for tx
fmt
added more methods to TxId
fix test
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
comments on Test
Co-authored-by: Phoebe Goldman <phoebe@clockworklabs.io>
Signed-off-by: Shubham Mishra <shivam828787@gmail.com>
fmt
typo
* fix imports
* clippy
* read tx in datastore
lint
test
lint
added read tx in subs
fmt
fix iters
* comments
* comments
* fix schema_for_table
* fmt
* merge queue helped
* moved all_read_query analyzing to vm
* lint
* added StateView trait
* lint
* fix tests for read-tx-in-subs (#695)
The intention of this PR is to keep the diff of the main PR (feat: subscription to use read type tx, #685) smaller by separating trivial unit-test-related changes from the main logic.
Naming changes: begin_tx() -> begin_mut_tx() and begin_read_tx() -> begin_tx()
* rollback_tx -> release_tx
* Compiler should work with Write Tx
* execute sql with mut tx
* fix result len
* compiler to work with write tx (#716)
* Compiler should work with Write Tx
* merge queue helped
When updating a database, we may falsely report schemas as not
compatible due to the `ConstraintDef`s coming out in different orders in
the proposed and existing schemas.
Not sorting them was an oversight in 9bb2b215.
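The fix amounts to sorting both sides by a stable key before comparing, roughly
as in this standalone sketch (`ConstraintDef` here is a simplified stand-in):

```rust
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct ConstraintDef {
    name: String,
    columns: Vec<u32>,
}

// Order-insensitive comparison: sort both sides, then compare element-wise.
fn constraints_compatible(proposed: &[ConstraintDef], existing: &[ConstraintDef]) -> bool {
    let mut a = proposed.to_vec();
    let mut b = existing.to_vec();
    a.sort();
    b.sort();
    a == b
}
```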
Max value metrics are currently reset to zero after each collection period.
However this won't be reflected in the metrics store immediately.
Once the database records new values for said metrics,
only then will the reset be reflected in the metrics store.
This subtlety is imperceptible for metrics that are frequently updated.
But if a metric is not frequently updated,
or for periods of reduced load in general,
the metrics store may present stale max value metrics.
This anomaly was thought to have been fixed by #709.
However, it was only fixed for DB_METRICS.
This patch removes the anomaly for WORKER_METRICS as well.
Query metrics have very high cardinality.
The current metrics store does not handle high cardinality particularly well.
Query compilation contributes very little to overall resource consumption.
Therefore query metrics have been disabled until the cardinality issue has been addressed.
* chore: Remove unused metrics
* chore: Move table size metrics back into core
Table size metrics were previously moved out of core.
This was due to the query planner needing access to them.
However that dependency was ultimately managed differently via #677.
Fixes #684.
Prometheus does not reset metrics.
It only updates them with new values that it collects.
If it does not receive a new value for a metric, it will keep the old value.
Therefore it is important to periodically reset certain metrics.
Namely gauge metrics that only receive updates occasionally.
Maximum reducer duration is one such example,
since some reducers run much more infrequently than others.
Fixes #707.
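The reset pattern is roughly the following (a sketch using the prometheus
crate; the metric itself is illustrative):

```rust
use prometheus::IntGauge;

// Raise the max gauge between scrapes...
fn observe_reducer_duration(max_gauge: &IntGauge, duration_us: i64) {
    if duration_us > max_gauge.get() {
        max_gauge.set(duration_us);
    }
}

// ...and reset it after each collection period, so an infrequently run reducer
// does not leave a stale maximum sitting in the metrics store.
fn on_collection_period_end(max_gauge: &IntGauge) {
    max_gauge.set(0);
}
```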
Previously the table size metric definition was moved to a different crate.
This was to facilitate its use in query optimization.
However as part of that change it was never added back into the prometheus registry.
This resulted in the metric no longer being collected by prometheus.
This patch reintroduces it into the registry so that it is collected again.
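Defining a metric is not enough on its own; it also has to be registered, or
the registry's gather step never sees it. Roughly (a sketch using the
prometheus crate; the metric name is illustrative):

```rust
use prometheus::{IntGauge, Registry};

fn register_table_size_metric(registry: &Registry) -> prometheus::Result<IntGauge> {
    let table_size = IntGauge::new("table_size_bytes", "Size of a table in bytes")?;
    // The step that was missing after the metric definition moved crates:
    registry.register(Box::new(table_size.clone()))?;
    Ok(table_size)
}
```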