automated SEA builds #409

Merged 2 commits on Oct 25, 2024
59 changes: 59 additions & 0 deletions .github/workflows/build-binaries.yml
@@ -0,0 +1,59 @@
name: Build binaries
on:
push:
branches: [build]

jobs:
build-binaries:
name: Build binaries

strategy:
matrix:
runner: [macos-13, macos-latest, ubuntu-latest, windows-latest]
node: [22.x]
include:
- runner: macos-13
os: mac
arch: x64
- runner: macos-latest
os: mac
arch: arm
- runner: ubuntu-latest
os: linux
arch: x64
- runner: windows-latest
os: windows
arch: x64

runs-on: ${{ matrix.runner }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node }}
- run: npm ci
- name: Build binary
run: npm run build
- uses: actions/upload-artifact@v4
with:
# Name of the artifact to upload.
name: fauna-shell-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.node }}

# A file, directory or wildcard pattern that describes what to upload
path: ${{ matrix.os == 'windows' && 'dist\fauna.exe' || 'dist/fauna' }}

# Fail the action with an error message if no files are found at the path.
if-no-files-found: error

# Duration after which artifact will expire in days. 0 means use the repository's default retention.
retention-days: 0

# The level of compression for Zlib to be applied to the artifact archive from 0 (none) to 9 (most).
compression-level: 6

# Deletes any artifact with a matching name before a new one is uploaded.
# Does not fail if the artifact does not exist.
overwrite: true

# Don't upload hidden files in the provided path.
include-hidden-files: false
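The artifact `path` above uses GitHub Actions' expression short-circuiting (`&& … || …`) as a makeshift ternary, the same trick that works in JavaScript. A sketch of the equivalent logic (the function name is illustrative):

```javascript
// The workflow's artifact path expression,
//   ${{ matrix.os == 'windows' && 'dist\fauna.exe' || 'dist/fauna' }},
// relies on &&/|| short-circuiting as a ternary, just like JavaScript.
// Caveat: the pattern misfires if the middle operand is falsy ("" or 0),
// in which case the || fallback is returned instead.
function artifactPath(os) {
  return (os === "windows" && "dist\\fauna.exe") || "dist/fauna";
}
```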
40 changes: 20 additions & 20 deletions .github/workflows/test.yml
@@ -6,9 +6,9 @@ name: Test
on:
# temporarily "v3"; change to "main" after merge
push:
branches: [ "v3" ]
branches: ["v3"]
pull_request:
branches: [ "v3" ]
branches: ["v3"]

jobs:
build:
@@ -20,21 +20,21 @@ jobs:
# See supported Node.js release schedule at https://nodejs.org/en/about/releases/

steps:
- uses: actions/checkout@v4
- name: Test (nodeJS ${{ matrix.node-version }})
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
- run: npm ci
- run: npm test
env:
TERM: xterm-256color
# Set to the correct color level; 2 is 256 colors
# https://github.com/chalk/chalk?tab=readme-ov-file#supportscolor
FORCE_COLOR: 2
- name: Publish Test Report
uses: mikepenz/action-junit-report@v4
if: success() || failure() # always run even if the previous step fails
with:
report_paths: '**/test-results.xml'
- uses: actions/checkout@v4
- name: Test (nodeJS ${{ matrix.node-version }})
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
cache: "npm"
- run: npm ci
- run: npm test
env:
TERM: xterm-256color
# Set to the correct color level; 2 is 256 colors
# https://github.com/chalk/chalk?tab=readme-ov-file#supportscolor
FORCE_COLOR: 2
- name: Publish Test Report
uses: mikepenz/action-junit-report@v4
if: success() || failure() # always run even if the previous step fails
with:
report_paths: "**/test-results.xml"
8 changes: 8 additions & 0 deletions DEV-README.md
@@ -1,15 +1,23 @@
### Application versions

This project has 3 runnable entrypoints (a raw ESM one, a built CJS one, and an SEA one). You can read more about them [here](./sea/README.md).

### General style guidelines

- Prefer to throw errors instead of exiting the process. Exit is harder to mock well in tests, and the global error-handler in `src/cli.mjs` should already do verbosity-aware error-handling. You can request a specific exit code by attaching an `exitCode` property to your error before throwing it. The error-handling has a series of tests in `yargs-test/general-cli.mjs`; if you find a case where throwing results in bad output to the user, replicate that case in a test suite.
- Prefer to re-throw an existing error after modifying its message over catching and throwing a newly-constructed error. The `exitCode` and `stack` properties on the existing error are worth keeping.
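A minimal sketch of the `exitCode` pattern described above; the helper and handler names here are illustrative, not the CLI's actual implementation:

```javascript
// Illustrative sketch of the "throw, don't exit" pattern: attach an
// exitCode to the error and let one global handler translate it into
// a process exit code. Names are hypothetical, not the CLI's real API.
function makeExitError(message, exitCode) {
  const err = new Error(message);
  err.exitCode = exitCode; // read later by the global error-handler
  return err;
}

function runWithGlobalHandler(fn) {
  try {
    fn();
    return 0;
  } catch (err) {
    // a real handler would also do verbosity-aware reporting here
    return err.exitCode ?? 1;
  }
}

const code = runWithGlobalHandler(() => {
  throw makeExitError("schema push failed", 3);
});
// code is now 3; the error's message and stack survive intact
```

Because the error object itself travels to the handler, re-throwing an existing error (after amending its message) preserves `stack` and `exitCode` for free.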

#### Testing guidelines

- Prefer to mock the "far" edges of the application - methods on `fs`, complex async libraries (e.g., `http#Server`), `fetch`. This results in the test code traversing all of the CLI's business logic, but not interacting with error-prone external resources like disk, network, port availability, etc. `sinon` records all calls to a mock, and allows asserting against them. Use this if, e.g., your business logic calls `fetch` multiple times.
- ~~Prefer to run local tests in watch mode (e.g., with `yarn local-test`) while developing.~~ This is currently broken.
- Use debug logs to output the shape of objects (especially network responses) to determine how to structure mocks. For instance, to get a quick mock for a network request caused by `fauna schema status ...`, set the situation up and run `fauna schema status ... --verbosity 5`. You can then use the logged objects as mocks.
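To make the far-edge mocking idea concrete, here is a hand-rolled recording stub in the spirit of sinon's (sinon's real stubs expose recorded calls the same way, via `stub.args`); the business logic and URLs below are invented for the example:

```javascript
// A minimal recording stub: it stands in for a "far edge" (fetch) and
// records every call so the test can assert against them afterwards.
// The pushSchema function and URLs are made up for illustration.
function makeRecordingStub(fakeResult) {
  const stub = (...args) => {
    stub.args.push(args); // same shape as sinon's stub.args
    return Promise.resolve(fakeResult);
  };
  stub.args = []; // one [url, options] entry per call
  return stub;
}

// Pretend business logic that calls fetch twice.
function pushSchema(fetchImpl) {
  fetchImpl("https://db.fauna.com/schema", { method: "GET" });
  fetchImpl("https://db.fauna.com/schema", { method: "POST" });
}

const fetchStub = makeRecordingStub({ ok: true });
pushSchema(fetchStub);
// fetchStub.args now holds both calls, in order
```

When an assertion against such a stub fails confusingly, logging `stub.args` directly is usually the fastest way to see what the business logic actually passed.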

#### Debugging strategies

- Fetch is not particularly amenable to debugging, but if you need to see the raw requests being made, open a netcat server locally (`nc -l 8080`) and then change the code to call the netcat server, either by passing the flag `--url http://localhost:8080` or by editing the code.
- This project has debug logging with a somewhat configurable logging strategy. To use it, provide either:
- `--verbose-component foo`, which logs all debug info for the component `foo`. Because it's an array, you can specify it multiple times. To see the available components, look at the help or tab completion.
- `--verbosity level`, where `level` is a number from 0 (no debug logging) to 5 (all debug logging) for all components.
- To investigate why a sinon stub is failing in a way you don't expect, you can log out `stub.args`, which is an array of the arguments provided each time the stub was called. This can be particularly helpful for assertions where the error message is hard to read or involves objects that don't stringify correctly (like FormData).
- To trigger [SEA builds](./sea/README.md) on all supported OS/architecture combinations, make changes and push commits to the `build` branch in GitHub. This will [trigger GitHub Actions](./.github/workflows/build-binaries.yml) that build the SEA binaries.
145 changes: 74 additions & 71 deletions README.md
@@ -25,16 +25,17 @@ $ npm update -g fauna-shell
```

<!-- toc -->
* [Fauna CLI](#fauna-cli)
* [Usage](#usage)
* [Technical Requirements](#technical-requirements)
* [Shell](#shell)
* [Connecting to different endpoints](#connecting-to-different-endpoints)
* [Connecting to local endpoints](#connecting-to-local-endpoints)
* [Overriding Connection Parameters](#overriding-connection-parameters)
* [Executing queries from a file](#executing-queries-from-a-file)
* [List of Commands](#list-of-commands)
* [Development](#development)

- [Fauna CLI](#fauna-cli)
- [Usage](#usage)
- [Technical Requirements](#technical-requirements)
- [Shell](#shell)
- [Connecting to different endpoints](#connecting-to-different-endpoints)
- [Connecting to local endpoints](#connecting-to-local-endpoints)
- [Overriding Connection Parameters](#overriding-connection-parameters)
- [Executing queries from a file](#executing-queries-from-a-file)
- [List of Commands](#list-of-commands)
- [Development](#development)
<!-- tocstop -->

# Usage
@@ -224,31 +225,31 @@ my_app> Post.create({ title: "What I had for breakfast .." })
We can also insert items in bulk by using iterator functions on arrays.

```ts
my_app> [
"My cat and other marvels",
"Pondering during a commute",
"Deep meanings in a latte"
].map(title => Post.create({ title: title }))
[
{
id: "373143473418666496",
coll: Post,
ts: Time("2023-08-15T16:16:36.960Z"),
title: "My cat and other marvels"
},
{
id: "373143473419715072",
coll: Post,
ts: Time("2023-08-15T16:16:36.960Z"),
title: "Pondering during a commute"
},
{
id: "373143473420763648",
coll: Post,
ts: Time("2023-08-15T16:16:36.960Z"),
title: "Deep meanings in a latte"
}
]
my_app >
[
"My cat and other marvels",
"Pondering during a commute",
"Deep meanings in a latte",
].map((title) => Post.create({ title: title }))[
({
id: "373143473418666496",
coll: Post,
ts: Time("2023-08-15T16:16:36.960Z"),
title: "My cat and other marvels",
},
{
id: "373143473419715072",
coll: Post,
ts: Time("2023-08-15T16:16:36.960Z"),
title: "Pondering during a commute",
},
{
id: "373143473420763648",
coll: Post,
ts: Time("2023-08-15T16:16:36.960Z"),
title: "Deep meanings in a latte",
})
];
```

Now let's try to fetch our post about _latte_. We need to access it by _id_ like this:
@@ -294,15 +295,15 @@ my_app> Post.byId("373143473418666496")!.replace({ title: "My dog and other marv
Now let's try to delete our post about _latte_:

```ts
my_app> Post.byId("373143473420763648")!.delete()
Post.byId("373143473420763648") /* not found */
my_app > Post.byId("373143473420763648")!.delete();
Post.byId("373143473420763648"); /* not found */
```

If we try to fetch it, we will receive a null document:

```ts
my_app> Post.byId("373143473420763648")
Post.byId("373143473420763648") /* not found */
my_app > Post.byId("373143473420763648");
Post.byId("373143473420763648"); /* not found */
```

Finally you can exit the _shell_ by pressing `ctrl+d`.
@@ -431,10 +432,10 @@ Collection.create({
name: "Post",
indexes: {
byTitle: {
terms: [{ field: ".title" }]
}
}
})
terms: [{ field: ".title" }],
},
},
});
```

Once the collection is created, you can execute queries against it in another
@@ -473,34 +474,35 @@ the queries file on the default fauna shell endpoint.
# List of Commands

<!-- commands -->
* [`fauna add-endpoint [NAME]`](#fauna-add-endpoint-name)
* [`fauna cloud-login`](#fauna-cloud-login)
* [`fauna create-database DBNAME`](#fauna-create-database-dbname)
* [`fauna create-key DBNAME [ROLE]`](#fauna-create-key-dbname-role)
* [`fauna default-endpoint [NAME]`](#fauna-default-endpoint-name)
* [`fauna delete-database DBNAME`](#fauna-delete-database-dbname)
* [`fauna delete-endpoint NAME`](#fauna-delete-endpoint-name)
* [`fauna delete-key KEYNAME`](#fauna-delete-key-keyname)
* [`fauna endpoint add [NAME]`](#fauna-endpoint-add-name)
* [`fauna endpoint list`](#fauna-endpoint-list)
* [`fauna endpoint remove NAME`](#fauna-endpoint-remove-name)
* [`fauna endpoint select [NAME]`](#fauna-endpoint-select-name)
* [`fauna environment add`](#fauna-environment-add)
* [`fauna environment list`](#fauna-environment-list)
* [`fauna environment select ENVIRONMENT`](#fauna-environment-select-environment)
* [`fauna eval [DBNAME] [QUERY]`](#fauna-eval-dbname-query)
* [`fauna help [COMMANDS]`](#fauna-help-commands)
* [`fauna import`](#fauna-import)
* [`fauna list-databases`](#fauna-list-databases)
* [`fauna list-endpoints`](#fauna-list-endpoints)
* [`fauna list-keys`](#fauna-list-keys)
* [`fauna project init [PROJECTDIR]`](#fauna-project-init-projectdir)
* [`fauna run-queries [DBNAME] [QUERY]`](#fauna-run-queries-dbname-query)
* [`fauna schema diff`](#fauna-schema-diff)
* [`fauna schema pull`](#fauna-schema-pull)
* [`fauna schema push`](#fauna-schema-push)
* [`fauna shell [DB_PATH]`](#fauna-shell-db_path)
* [`fauna upload-graphql-schema GRAPHQLFILEPATH`](#fauna-upload-graphql-schema-graphqlfilepath)

- [`fauna add-endpoint [NAME]`](#fauna-add-endpoint-name)
- [`fauna cloud-login`](#fauna-cloud-login)
- [`fauna create-database DBNAME`](#fauna-create-database-dbname)
- [`fauna create-key DBNAME [ROLE]`](#fauna-create-key-dbname-role)
- [`fauna default-endpoint [NAME]`](#fauna-default-endpoint-name)
- [`fauna delete-database DBNAME`](#fauna-delete-database-dbname)
- [`fauna delete-endpoint NAME`](#fauna-delete-endpoint-name)
- [`fauna delete-key KEYNAME`](#fauna-delete-key-keyname)
- [`fauna endpoint add [NAME]`](#fauna-endpoint-add-name)
- [`fauna endpoint list`](#fauna-endpoint-list)
- [`fauna endpoint remove NAME`](#fauna-endpoint-remove-name)
- [`fauna endpoint select [NAME]`](#fauna-endpoint-select-name)
- [`fauna environment add`](#fauna-environment-add)
- [`fauna environment list`](#fauna-environment-list)
- [`fauna environment select ENVIRONMENT`](#fauna-environment-select-environment)
- [`fauna eval [DBNAME] [QUERY]`](#fauna-eval-dbname-query)
- [`fauna help [COMMANDS]`](#fauna-help-commands)
- [`fauna import`](#fauna-import)
- [`fauna list-databases`](#fauna-list-databases)
- [`fauna list-endpoints`](#fauna-list-endpoints)
- [`fauna list-keys`](#fauna-list-keys)
- [`fauna project init [PROJECTDIR]`](#fauna-project-init-projectdir)
- [`fauna run-queries [DBNAME] [QUERY]`](#fauna-run-queries-dbname-query)
- [`fauna schema diff`](#fauna-schema-diff)
- [`fauna schema pull`](#fauna-schema-pull)
- [`fauna schema push`](#fauna-schema-push)
- [`fauna shell [DB_PATH]`](#fauna-shell-db_path)
- [`fauna upload-graphql-schema GRAPHQLFILEPATH`](#fauna-upload-graphql-schema-graphqlfilepath)

## `fauna add-endpoint [NAME]`

@@ -1317,6 +1319,7 @@ EXAMPLES
```

_See code: [dist/commands/upload-graphql-schema.ts](https://github.com/fauna/fauna-shell/blob/v1.2.1/dist/commands/upload-graphql-schema.ts)_

<!-- commandsstop -->

# Development
4 changes: 2 additions & 2 deletions package.json
@@ -69,8 +69,8 @@
"test:local": "mocha --recursive ./test --require ./test/mocha-root-hooks.mjs",
"build": "npm run build:app && npm run build:sea",
"build:app": "esbuild --bundle ./src/user-entrypoint.mjs --platform=node --outfile=./dist/cli.cjs --format=cjs --inject:./sea/import-meta-url.js --define:import.meta.url=importMetaUrl",
"build:sea": "./sea/build-sea.sh",
"format": "prettier -w src test package.json prettier.config.js eslint.config.mjs"
"build:sea": "node ./sea/build.cjs",
"format": "prettier -w ."
},
"husky": {
"hooks": {
21 changes: 21 additions & 0 deletions sea/README.md
@@ -0,0 +1,21 @@
### SEA (single executable application)

This directory contains the infrastructure for building `fauna-shell` as a single executable application (SEA). You can find the docs for SEA [here](https://nodejs.org/docs/latest-v22.x/api/single-executable-applications.html#single-executable-applications); since this feature is experimental, make sure you're reading the docs for the same nodeJS version the project uses, as there are breaking changes across nodeJS versions.

The process generally looks like this:

1. A developer (or CI) runs [npm run build](../package.json).
2. `build:app` runs `esbuild` to bundle the ES module CLI into a single-file CJS module with its dependencies inlined. There are a few wrinkles here with `import.meta.url` and `__dirname`, but it's otherwise fairly straightforward. This is what `./sea/import-meta-url.js` is for.
3. `build:sea` runs `./sea/build.cjs`. This nodeJS script detects the OS and builds an SEA for that OS. One of the inputs to this process is `./sea/config.json`, which specifies some paths and settings for the resulting build. We could optimize our builds here by enabling `useSnapshot` and `useCodeCache`, but that's likely not worth the effort until we have a (basic) perf benchmark in place.

### Versions of the CLI

1. The raw (runnable!) ESM CLI can be invoked by `./src/user-entrypoint.mjs <command> [subcommand] [args]`.
2. The built CJS CLI can be invoked by `./dist/cli.cjs <command> [subcommand] [args]`.
3. The SEA CLI can be invoked by `./dist/fauna <command> [subcommand] [args]`.

### Differences between versions

_All 3 versions should be runnable and behave the same, with these exceptions:_

- Currently, no exceptions.
8 changes: 0 additions & 8 deletions sea/build-sea.sh

This file was deleted.
