node, sdk/node: add options to configure reconnection backoff #11

Status: Open. Wants to merge 3 commits into base `master`.
README.md: 203 changes (113 additions, 90 deletions)
@@ -7,7 +7,6 @@
[![npm downloads](https://img.shields.io/npm/dm/flatend.svg?style=flat)](https://www.npmjs.com/package/flatend)
[![Security Responsible Disclosure](https://img.shields.io/badge/Security-Responsible%20Disclosure-yellow.svg)](https://github.com/nodejs/security-wg/blob/master/processes/responsible_disclosure_template.md)


<img align="right" width ="200" height="200" src="https://lh3.googleusercontent.com/pw/ACtC-3c6eZvrCLM-wV5UkBn8JZVBf-C-lAJ7XmCLgX5Gz4tCdbhCtREUw_o2bsYIbibU1fCk5A43h_9dBSV7y9hwtv9iIifKVk6QkGEGXYV1E1Kd0jyH62k8zZBsbbT3JSSfGRYW660frbzTO0wtTR4FQECl=s599-no">

**flatend** is an experimental framework and protocol to make microservices more modular, simpler, safer, cheaper, and faster to build using [p2p networking](https://github.com/lithdew/monte).
@@ -16,30 +15,29 @@

## Features

- Fully agnostic and compatible with any type of language, database, tool, library, or framework.
- P2P-based service discovery, load balancing, routing, and PKI via [Kademlia](https://en.wikipedia.org/wiki/Kademlia).
- Fully-encrypted, end-to-end, bidirectional streaming RPC via [Monte](https://github.com/lithdew/monte).
- Automatic reconnect/retry upon crashes or connection loss.
- Zero-hassle serverless: every function is a microservice.
- Stream multiple gigabytes of data across microservices.

## Gateways

**flatend** additionally comes with scalable, high-performance, production-ready, easily-deployable API gateways that are bundled into a [small, single executable binary](https://github.com/lithdew/flatend/releases) to help you quickly deploy your microservices.

- Written in [Go](https://golang.org/).
- HTTP/1.1, HTTP/2 support.
- Automatic HTTPS via [LetsEncrypt](https://letsencrypt.org/).
- Expose/load-balance across microservices.
- Serve static files and directories.
- REPL for real-time management (_coming soon!_).
- Prometheus metrics (_coming soon!_).
- WebSocket support (_coming soon!_).
- gRPC support (_coming soon!_).

All gateways have been extensively tested on [Rackspace](https://www.rackspace.com/), [Scaleway](https://www.scaleway.com/en/), [AWS](https://aws.amazon.com/), [Google Cloud](https://cloud.google.com/), and [DigitalOcean](https://www.digitalocean.com/).


## Requirements

Although **flatend** at its core is a protocol, and hence agnostic to whichever programming language you use, there are currently only two reference implementations in NodeJS and Go.
@@ -50,7 +48,7 @@
The rationale for starting with NodeJS and Go is so that, for any new product/service, you may:

1. Quickly prototype and deploy in NodeJS with SQLite using a 2USD/month bare-metal server.
2. Once you start scaling up, split up your microservice and rewrite the performance-critical parts in Go.
3. Run a red/blue deployment easily to gradually deploy your new microservices and experience zero downtime.

Support is planned for the following runtimes/languages:
@@ -165,7 +163,7 @@ Hello world!
Try restarting your API gateway and watch your service re-discover it.

```shell
$ go run main.go
2020/06/18 04:11:06 Listening for Flatend nodes on '[::]:39313'.
2020/06/18 04:11:06 You are now connected to 127.0.0.1:9000. Services: []
2020/06/18 04:11:06 Re-probed 127.0.0.1:9000. Services: []
@@ -207,34 +205,34 @@ success Saved X new dependencies.
Write a function that describes how to handle requests for the service `hello_world` in `index.js`.

```js
const { Node, Context } = require("flatend");

const helloWorld = (ctx) => ctx.send("Hello world!");
```

Register the function as a handler for the service `hello_world`. Start the node and have it connect to Flatend's API gateway.

```js
const { Node, Context } = require("flatend");

const helloWorld = (ctx) => ctx.send("Hello world!");

async function main() {
  await Node.start({
    addrs: ["127.0.0.1:9000"],
    services: {
      hello_world: helloWorld,
    },
  });
}

main().catch((err) => console.error(err));
```

Run it.

```shell
$ DEBUG=* node index.js
flatend You are now connected to 127.0.0.1:9000. Services: [] +0ms
flatend Discovered 0 peer(s). +19ms
```
@@ -249,7 +247,7 @@ Hello world!
Try restarting your API gateway and watch your service re-discover it.

```shell
$ DEBUG=* node index.js
flatend You are now connected to 127.0.0.1:9000. Services: [] +0ms
flatend Discovered 0 peer(s). +19ms
flatend Trying to reconnect to 127.0.0.1:9000. Sleeping for 500ms. +41s
@@ -276,21 +274,37 @@ package flatend
import "github.com/lithdew/kademlia"

type Node struct {
	// A reachable, public address which peers may reach you on.
	// The format of the address must be [host]:[port].
	PublicAddr string

	// A 32-byte Ed25519 private key. A secret key must be provided
	// to allow for peers to reach you. A secret key may be generated
	// by calling `flatend.GenerateSecretKey()`.
	SecretKey kademlia.PrivateKey

	// A list of IPv4/IPv6 addresses and ports assembled as [host]:[port] which
	// your Flatend node will listen for other nodes from.
	BindAddrs []string

	// Total number of attempts to reconnect to a peer we reached that disconnected.
	// Default is 8 attempts; set to a negative integer to not attempt to reconnect at all.

> **Reviewer:** Can 0 be no reconnect and -1 be forever retry?
>
> **lithdew (Owner, Author):** I tried to figure out a way to do that, though I wonder how we could force the default value for `NumReconnectAttempts` to be 8 in that case. The only idiomatic way I could think of is to switch to functional options, or to encapsulate options into a separate `Options` struct with sensible defaults, though might you have any suggestions?
>
> **Reviewer:** When I write services in Go, I typically create some kind of config object that is used by `main.go`. The main program is then the one that sets the defaults, and the user can set parameters in `main.go` to override them.
>
> **Reviewer:** @lithdew Why did you choose 8 as the default?

	NumReconnectAttempts int

	// A mapping of service names to their respective handlers.
	Services map[string]Handler

	// A multiplicative factor by which the delay between reconnection
	// attempts grows upon each attempt. Default is 1.25.
	ReconnectBackoffFactor float64

	// The minimum amount of time to wait before each reconnection attempt. Default is 500
	// milliseconds.
	ReconnectBackoffMinDuration time.Duration

	// The maximum amount of time to wait before each reconnection attempt. Default is 1
	// second.
	ReconnectBackoffMaxDuration time.Duration

	// ....
}
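
For illustration only, here is a minimal sketch of how these knobs might be set together. It assumes the import path `github.com/lithdew/flatend`, that `flatend.GenerateSecretKey()` exists as documented above, and that the delay between attempts grows geometrically from `ReconnectBackoffMinDuration` by `ReconnectBackoffFactor` until it is clamped at `ReconnectBackoffMaxDuration`, which is what the field comments suggest; the actual retry loop lives inside flatend and is not part of this diff.

```go
package main

import (
	"fmt"
	"time"

	"github.com/lithdew/flatend"
)

func main() {
	// All values below are hypothetical; only the field names come from the
	// struct documented above.
	node := flatend.Node{
		SecretKey: flatend.GenerateSecretKey(),
		BindAddrs: []string{"127.0.0.1:12000"},

		// Retry a lost peer up to 16 times, starting at 250ms and growing by
		// 1.5x per attempt, capped at 2 seconds.
		NumReconnectAttempts:        16,
		ReconnectBackoffFactor:      1.5,
		ReconnectBackoffMinDuration: 250 * time.Millisecond,
		ReconnectBackoffMaxDuration: 2 * time.Second,
	}

	// Assumed schedule: delay(n) = min(MaxDuration, MinDuration * Factor^n).
	delay := node.ReconnectBackoffMinDuration
	for i := 0; i < node.NumReconnectAttempts; i++ {
		fmt.Printf("reconnect attempt %d: wait %v\n", i+1, delay)

		delay = time.Duration(float64(delay) * node.ReconnectBackoffFactor)
		if delay > node.ReconnectBackoffMaxDuration {
			delay = node.ReconnectBackoffMaxDuration
		}
	}
}
```

Under that assumed schedule, the defaults (8 attempts, factor 1.25, 500 ms minimum, 1 s maximum) would wait roughly 500 ms, 625 ms, 781 ms, and 977 ms before the first four retries, then 1 s before each of the remaining four.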
@@ -307,9 +321,9 @@ func helloWorld(ctx *flatend.Context) {
_ = ctx.ID

// All headers must be written before writing any response body data.

// Headers are used to send small amounts of metadata to a requester.

// For example, the HTTP API gateway directly sets headers provided
// as a response as the headers of a HTTP response to a HTTP request
// which has been transcribed to a Flatend service request that is
@@ -329,7 +343,7 @@ func helloWorld(ctx *flatend.Context) {

// The body of a request may be accessed via `ctx.Body`. Request bodies
// are unbounded in size, and represented as a `io.ReadCloser`.

// It is advised to wrap the body under an `io.LimitReader` to limit
// the size of the bodies of requests.
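//
// For instance (an illustrative sketch, not part of the original README;
// it assumes a 64 KiB cap on request bodies):
//
//   limited := io.LimitReader(ctx.Body, 65536)
//   buf, err := ioutil.ReadAll(limited)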

@@ -344,18 +358,18 @@ func helloWorld(ctx *flatend.Context) {
### NodeJS SDK

```js
const { Node } = require("flatend");

export interface NodeOptions {
  // A reachable, public address which peers may reach you on.
  // The format of the address must be [host]:[port].
  publicAddr?: string;

  // A list of [host]:[port] addresses which this node will bind a listener
  // against to accept new Flatend nodes.
  bindAddrs?: string[];

  // A list of addresses to nodes to initially reach out
  // for/bootstrap from first.
  addrs?: string[];

@@ -366,69 +380,78 @@ export interface NodeOptions {

  // A mapping of service names to their respective handlers.
  services?: { [key: string]: Handler };

  // Total number of attempts to reconnect to a peer we reached that disconnected.
  // Default is 8 attempts: set to 0 to not attempt to reconnect, or a negative number
  // to always attempt to reconnect.
  numReconnectAttempts?: number;

  // The amount of time to wait before each reconnection attempt. Default is 500
  // milliseconds.
  reconnectBackoffDuration?: number;
}
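
// For example (an illustrative sketch, not from the original README): retry
// forever, waiting one second between attempts.
//
//   await Node.start({
//     addrs: ["127.0.0.1:9000"],
//     numReconnectAttempts: -1,
//     reconnectBackoffDuration: 1000,
//   });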

await Node.start(opts); // opts: NodeOptions

const { Context } = require("flatend");

// Handlers may optionally be declared as async, and may optionally
// return promises.

const helloWorld = async (ctx) => {
  // 'ctx' is a NodeJS Duplex stream. Writing to it writes a response
  // body, and reading from it reads a request body.

  _ = ctx.id; // The ID of the requester.

  ctx.pipe(ctx); // This would pipe all request data as response data.

  // Headers are used to send small amounts of metadata to a requester.

  // For example, the HTTP API gateway directly sets headers provided
  // as a response as the headers of a HTTP response to a HTTP request
  // which has been transcribed to a Flatend service request that is
  // handled by some given node.

  ctx.header("header key", "header val");

  // All request headers may be accessed via 'ctx.headers'. Headers
  // are represented as an object.

  // The line below closes the response with the body being a
  // JSON-encoded version of the request headers provided.

  ctx.json(ctx.headers);

  // Arbitrary streams may be piped into 'ctx', like the contents of
  // a file for example.

  const fs = require("fs");
  fs.createReadStream("index.js").pipe(ctx);

  // Any errors thrown in a handler are caught and sent as a JSON
  // response.

  throw new Error("This shouldn't happen!");

  // The 'ctx' stream must be closed, either manually via 'ctx.end()' or
  // via a function. Not closing 'ctx' will cause the handler to deadlock.

  // DO NOT DO THIS!
  // ctx.write("hello world!");

  // DO THIS!
  ctx.write("hello world!");
  ctx.end();

  // OR THIS!
  ctx.send("hello world!");

  // The line below reads the request body into a buffer up to 65536 bytes.
  // If the body exceeds 65536 bytes, an error will be thrown.

  const body = await ctx.read({ limit: 65536 });
  console.log("I got this message:", body.toString("utf8"));
};
```

@@ -531,12 +554,12 @@ Got a question? Either:

#### Is flatend production-ready? Who uses flatend today?

_flatend is still a heavy work-in-progress_. That being said, it is being field tested with a few enterprise projects related to energy and IoT right now.

Deployments of flatend have also served a few hundred thousand visitors.

#### Will I be able to run flatend myself?

It was built from the start to allow for self-hosting on the cloud, on bare-metal servers, in Docker containers, on Kubernetes, etc. The cloud is your limit (see what I did there?).

#### I'm worried about vendor lock-in - what happens if flatend goes out of business?
@@ -573,4 +596,4 @@ Reach out to us on Discord, maybe the system you are looking to support may be a

## License

**flatend** and all of its source code are released under the [MIT License](LICENSE).