mirror of
https://github.com/clockworklabs/SpacetimeDB.git
synced 2026-05-14 03:37:55 -04:00
c83f55f65e
# Description of Changes

This PR moves most of the contents of `@clockworklabs/spacetimedb-sdk` into the `spacetimedb` module. The `spacetimedb` module now exports `sdk` and `server` as separate subpaths, where `sdk` contains the code which was previously in `@clockworklabs/spacetimedb-sdk`. In particular, it makes the following moves:

- `/sdks/typescript/packages/sdk` -> `/sdks/typescript`
- most of the contents of `/sdks/typescript/packages/sdk` -> `crates/bindings-typescript`
- `/sdks/typescript/packages/test-app` -> `crates/bindings-typescript/test-app`

The following package was NOT moved: `/sdks/typescript/examples/quickstart-chat`

## Motivation

In accordance with https://github.com/clockworklabs/SpacetimeDB/issues/3250, we would like to consolidate `@clockworklabs/spacetimedb-sdk` into a single `spacetimedb` package so that users can import the different things they need from a single package.

### Pros:

- Allows users to install a single package with subpaths `spacetimedb`, `spacetimedb/react`, `spacetimedb/sdk`, `spacetimedb/server`, etc.
- Is much simpler for bundling, etc.
- Is backwards compatible with `@clockworklabs/spacetimedb-sdk`, which now becomes a thin wrapper.
- Eventually allows us to break up the `spacetimedb` package into other packages if we want to split them up (e.g. `@spacetimedb/lib`, `@spacetimedb/sdk`, etc.), and we can solve the build complexity that introduces when we get to it.
- Eventually allows us to move `bindings-csharp` out of the crates directory, where it probably doesn't belong anyway.
- Organizes all TypeScript packages into the packages directory where you'd normally expect them, with the possible exception of `/sdks/typescript` if we wanted to leave that separate.

### Cons:

- The `sdk` directory is now a bit of a ruse as to where the code actually lives, since it's just a thin wrapper.
- If it eventually becomes its own independent package, we'll also have to break up `spacetimedb` into `@spacetimedb/lib` and `@spacetimedb/server` so that `@clockworklabs/spacetimedb-sdk` can depend on `@spacetimedb/lib` while being a dependency of `spacetimedb`.

Ideally this change would have been made later; however, it became necessary due to the following **heinously disastrous chain of forcing moves**:

1. Adding `react` support necessitated shipping react as an optional peer dependency under `@clockworklabs/spacetimedb-sdk/react`.
2. This required adding a new build target/export/bundle.
3. Previously `@clockworklabs/spacetimedb-sdk` was configured with `noExternal` for `spacetimedb`, meaning it would collect the library into the sdk bundle. I attempted to continue this for react support, but...
4. Creating a new `react` bundle which also pulled in `spacetimedb` caused there to be nominal type conflicts between classes in the duplicate `spacetimedb` bundles.
5. Changing `spacetimedb` to be included as `external` caused compile errors, because `@clockworklabs/spacetimedb-sdk` is configured in `tsconfig.json` to fail on unused variables, and it was now including the source of `spacetimedb`, which is not configured to error on unused variables and has "unused" private variables which are actually used by us internally, but not exposed to clients.

   > SIDE NOTE: The unused-variables setting cannot be turned off on a line-by-line basis, so it has to be turned off entirely. In order to keep the linting checks we had, I used `eslint` to enforce the rule instead, so that we could disable it line by line. (This caused me to discover quite a lot of broken things that were caught by `eslint` being applied to the entire project; `eslint` was previously only applied to `quickstart-chat` and the `crates/bindings-typescript` library.)

6. Changing the build to be external now requires `spacetimedb` to also be published to npm as its own module which `@clockworklabs/spacetimedb-sdk` imports, which in turn requires adding `tsup` config to `spacetimedb` to publish a built version of the library.
7. The only way to avoid that is to move the `sdk` and `react` code from `@clockworklabs/spacetimedb-sdk` into the existing `spacetimedb` package, avoiding the duplicate import problem from step 4, and to change `@clockworklabs/spacetimedb-sdk` back to using `noExternal` for its `spacetimedb` dependency.

And here we are.

I chose not to move `/crates/bindings-typescript`, even though that's probably not a great place long term. It would be better to have it in `/packages/spacetimedb` or `/npm-packages/spacetimedb` or `/ts-packages/spacetimedb` or something, and move all our TypeScript packages in there. But that is a different matter.

The net result, however, is that we have a new `spacetimedb` package which exports the different parts of the API under:

- `spacetimedb`
- `spacetimedb/server`
- `spacetimedb/sdk`
- `spacetimedb/react`

while still not breaking the existing deploy process, nor any users/developers who are currently using `@clockworklabs/spacetimedb-sdk`.

I think long term, should we ever decide to split `spacetimedb` up into multiple packages, or if we have additional unrelated packages, we should publish them to the `@spacetimedb` org, which I reserved for us here: https://www.npmjs.com/org/spacetimedb

> NOTE: `spacetimedb` is a package and `@spacetimedb` is an org. `spacetimedb/sdk` is not a separate package; it's a subpath export of the `spacetimedb` package, whereas `@spacetimedb/sdk` would be (and would need to be) its own separate package. You can certainly have both `spacetimedb/sdk` and `@spacetimedb/sdk`. We could, for example, host the code for the sdk at `@spacetimedb/sdk` and just reexport it from `spacetimedb` under the `spacetimedb/sdk` subpath.
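The difference between subpath exports and separate packages can be sketched with a hypothetical `exports` map for the `spacetimedb` package, expressed here as a TypeScript object. The `./dist/...` entry points are assumptions for illustration, not the actual build layout:

```typescript
// Hypothetical shape of the `exports` field in spacetimedb's package.json.
// Keys are the subpaths users import; values are built entry points.
// The dist/ paths below are illustrative only.
const exportsMap: Record<string, string> = {
  '.': './dist/index.js', // import ... from 'spacetimedb'
  './server': './dist/server/index.js', // import ... from 'spacetimedb/server'
  './sdk': './dist/sdk/index.js', // import ... from 'spacetimedb/sdk'
  './react': './dist/react/index.js', // import ... from 'spacetimedb/react'
};

// 'spacetimedb/sdk' resolves through this map inside one package,
// while '@spacetimedb/sdk' would be a separate package under the org.
```

All four entry points ship in a single npm install; splitting any of them into an `@spacetimedb/*` package would require its own `package.json` and publish step.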
# API and ABI breaking changes

This should not change or modify the API or ABI in any way. If it does so accidentally, it is a bug, although I carefully went through the exports.

# Expected complexity level and risk

3, because it changes how the SDK is built a bit and rearranges a lot of paths.

# Testing

- [x] All of the CI passes
- [x] I also ran quickstart-chat to confirm that it is not broken
- [x] I also ran test-app
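For reference, the `noExternal` arrangement described in the forcing-move chain above (the thin `@clockworklabs/spacetimedb-sdk` wrapper inlining `spacetimedb` into its own bundle rather than importing it at runtime) can be sketched as a `tsup` config. This is an illustrative sketch only, not the repository's actual build file; the entry paths are assumptions:

```typescript
import { defineConfig } from 'tsup';

// Illustrative sketch: inline the `spacetimedb` dependency into the
// wrapper's bundles via `noExternal`, so `spacetimedb` does not need
// to be published as its own built npm module. Entry paths are hypothetical.
export default defineConfig({
  entry: {
    index: 'src/index.ts',
    react: 'src/react.ts', // the new optional-peer-dependency bundle
  },
  format: ['esm', 'cjs'],
  dts: true,
  // Bundling `spacetimedb` avoids a separate publish, at the cost of
  // duplicating its classes in every bundle that inlines it — which is
  // exactly what produced the nominal type conflicts in step 4.
  noExternal: ['spacetimedb'],
});
```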
267 lines
8.1 KiB
TypeScript
import fs from 'fs';
import path from 'path';
import nav from '../nav'; // Import the nav object directly
import GitHubSlugger from 'github-slugger';
import { unified } from 'unified';
import remarkParse from 'remark-parse';
import { visit } from 'unist-util-visit';

// Function to map slugs to file paths from nav.ts
function extractSlugToPathMap(nav: { items: any[] }): Map<string, string> {
  const slugToPath = new Map<string, string>();

  function traverseNav(items: any[]): void {
    items.forEach(item => {
      if (item.type === 'page' && item.slug && item.path) {
        const resolvedPath = path.resolve(__dirname, '../docs', item.path);
        slugToPath.set(`/docs/${item.slug}`, resolvedPath);
      } else if (item.type === 'section' && item.items) {
        traverseNav(item.items); // Recursively traverse sections
      }
    });
  }

  traverseNav(nav.items);
  return slugToPath;
}

// Function to assert that all files in slugToPath exist
function validatePathsExist(slugToPath: Map<string, string>): void {
  slugToPath.forEach((filePath, slug) => {
    if (!fs.existsSync(filePath)) {
      throw new Error(
        `File not found: ${filePath} (Referenced by slug: ${slug})`
      );
    }
  });
}

// Function to extract links and images from markdown files with line numbers
function extractLinksAndImagesFromMarkdown(
  filePath: string
): { link: string; type: 'image' | 'link'; line: number }[] {
  const content = fs.readFileSync(filePath, 'utf-8');
  const tree = unified().use(remarkParse).parse(content);

  const results: { link: string; type: 'image' | 'link'; line: number }[] = [];

  visit(tree, ['link', 'image', 'definition'], (node: any) => {
    const link = node.url;
    const line = node.position?.start?.line ?? 0;
    if (link) {
      results.push({
        link,
        type: node.type === 'image' ? 'image' : 'link',
        line,
      });
    }
  });

  return results;
}

// Function to resolve relative links using slugs
function resolveLink(link: string, currentSlug: string): string {
  if (link.startsWith('#')) {
    // If the link is a fragment, resolve it to the current slug
    return `${currentSlug}${link}`;
  }

  if (link.startsWith('/')) {
    // Absolute links are returned as-is
    return link;
  }

  // Resolve relative links based on slug
  const currentSlugDir = path.dirname(currentSlug);
  const resolvedSlug = path
    .normalize(path.join(currentSlugDir, link))
    .replace(/\\/g, '/');
  return resolvedSlug.startsWith('/docs')
    ? resolvedSlug
    : `/docs${resolvedSlug}`;
}

// Function to check if the links in .md files match the slugs in nav.ts and validate fragments/images
function checkLinks(): void {
  const brokenLinks: {
    file: string;
    link: string;
    type: 'image' | 'link';
    line: number;
  }[] = [];
  let totalFiles = 0;
  let totalLinks = 0;
  let validLinks = 0;
  let invalidLinks = 0;
  let totalFragments = 0;
  let validFragments = 0;
  let invalidFragments = 0;

  // Extract the slug-to-path mapping from nav.ts
  const slugToPath = extractSlugToPathMap(nav);

  // Validate that all paths in slugToPath exist
  validatePathsExist(slugToPath);

  console.log(`Validated ${slugToPath.size} paths from nav.ts`);

  // Extract valid slugs
  const validSlugs = Array.from(slugToPath.keys());

  // Hacky workaround because the slug for the root is /docs/index. No other slugs have a /index at the end.
  validSlugs.push('/docs');

  // Reverse map from file path to slug for current file resolution
  const pathToSlug = new Map<string, string>();
  slugToPath.forEach((filePath, slug) => {
    pathToSlug.set(filePath, slug);
  });

  // Get all .md files to check
  const mdFiles = getMarkdownFiles(path.resolve(__dirname, '../docs'));

  totalFiles = mdFiles.length;

  mdFiles.forEach(file => {
    const linksAndImages = extractLinksAndImagesFromMarkdown(file);
    totalLinks += linksAndImages.length;

    const currentSlug = pathToSlug.get(file) || '';

    linksAndImages.forEach(({ link, type, line }) => {
      // Exclude external links (starting with http://, https://, mailto:, etc.)
      if (/^([a-z][a-z0-9+.-]*):/.test(link)) {
        return; // Skip external links
      }

      if (!link.startsWith('/docs')) {
        return; // Skip site links
      }

      // Resolve the link
      const resolvedLink = resolveLink(link, currentSlug);

      if (type === 'image') {
        // Validate image paths
        const normalizedLink = resolvedLink.startsWith('/')
          ? resolvedLink.slice(1)
          : resolvedLink;
        const imagePath = path.resolve(__dirname, '../', normalizedLink);

        if (!fs.existsSync(imagePath)) {
          brokenLinks.push({ file, link: resolvedLink, type: 'image', line });
          invalidLinks += 1;
        } else {
          validLinks += 1;
        }
        return;
      }

      // Split the resolved link into base and fragment
      const [baseLinkRaw, fragmentRaw] = resolvedLink.split('#');
      let baseLink = baseLinkRaw;
      if (baseLink.endsWith('/')) {
        baseLink = baseLink.slice(0, -1);
      }
      const fragment: string | null = fragmentRaw || null;

      if (fragment) {
        totalFragments += 1;
      }

      // Check if the base link matches a valid slug
      if (!validSlugs.includes(baseLink)) {
        brokenLinks.push({ file, link: resolvedLink, type: 'link', line });
        invalidLinks += 1;
        return;
      } else {
        validLinks += 1;
      }

      // Validate the fragment, if present
      if (fragment) {
        const targetFile = slugToPath.get(baseLink);
        if (targetFile) {
          const targetHeadings = extractHeadingsFromMarkdown(targetFile);

          if (!targetHeadings.includes(fragment)) {
            brokenLinks.push({ file, link: resolvedLink, type: 'link', line });
            invalidFragments += 1;
            invalidLinks += 1;
          } else {
            validFragments += 1;
          }
        }
      }
    });
  });

  if (brokenLinks.length > 0) {
    console.error(`\nFound ${brokenLinks.length} broken links/images:`);
    brokenLinks.forEach(({ file, link, type, line }) => {
      const typeLabel = type === 'image' ? 'Image' : 'Link';
      console.error(`${typeLabel}: ${file}:${line}, Path: ${link}`);
    });
  } else {
    console.log('All links and images are valid!');
  }

  // Print statistics
  console.log('\n=== Validation Statistics ===');
  console.log(`Total markdown files processed: ${totalFiles}`);
  console.log(`Total links/images processed: ${totalLinks}`);
  console.log(`  Valid: ${validLinks}`);
  console.log(`  Invalid: ${invalidLinks}`);
  console.log(`Total links with fragments processed: ${totalFragments}`);
  console.log(`  Valid links with fragments: ${validFragments}`);
  console.log(`  Invalid links with fragments: ${invalidFragments}`);
  console.log('===============================');

  if (brokenLinks.length > 0) {
    process.exit(1); // Exit with an error code if there are broken links
  }
}

// Function to extract headings from a markdown file
function extractHeadingsFromMarkdown(filePath: string): string[] {
  if (!fs.existsSync(filePath) || !fs.lstatSync(filePath).isFile()) {
    return []; // Return an empty list if the file does not exist or is not a file
  }

  const fileContent = fs.readFileSync(filePath, 'utf-8');
  const headingRegex = /^(#{1,6})\s+(.*)$/gm; // Match markdown headings like # Heading
  const headings: string[] = [];
  let match: RegExpExecArray | null;

  const slugger = new GitHubSlugger();
  while ((match = headingRegex.exec(fileContent)) !== null) {
    const heading = match[2].trim(); // Extract the heading text
    const slug = slugger.slug(heading); // Slugify the heading text
    headings.push(slug);
  }

  return headings;
}

// Function to get all markdown files recursively
function getMarkdownFiles(dir: string): string[] {
  let files: string[] = [];
  const items = fs.readdirSync(dir);

  items.forEach(item => {
    const fullPath = path.join(dir, item);
    const stat = fs.lstatSync(fullPath);

    if (stat.isDirectory()) {
      files = files.concat(getMarkdownFiles(fullPath)); // Recurse into directories
    } else if (fullPath.endsWith('.md')) {
      files.push(fullPath);
    }
  });

  return files;
}

checkLinks();