diff --git a/BUILDING.md b/BUILDING.md
index 1ff82736..eb85b954 100644
--- a/BUILDING.md
+++ b/BUILDING.md
@@ -4,7 +4,7 @@ Building
 
 Clone the repo:
 
 ```sh
-git clone https://github.com/Kindelia/HVM.git
+git clone https://github.com/HigherOrderCO/HVM.git
 cd HVM
 ```
diff --git a/NIX.md b/NIX.md
index af60823a..f0af2ee9 100644
--- a/NIX.md
+++ b/NIX.md
@@ -6,7 +6,7 @@ Usage (Nix)
 [Install Nix](https://nixos.org/manual/nix/stable/installation/installation.html)
 and enable [Flakes](https://nixos.wiki/wiki/Flakes#Enable_flakes)
 then, in a shell, run:
 ```sh
-git clone https://github.com/Kindelia/HVM.git
+git clone https://github.com/HigherOrderCO/HVM.git
 cd HVM
 # Start a shell that has the `hvm` command without installing it.
 nix shell .#hvm
diff --git a/README.md b/README.md
index e1158217..f207a787 100644
--- a/README.md
+++ b/README.md
@@ -242,7 +242,7 @@ purpose is to show yet another important advantage of HVM: beta-optimality. This
 λ-encoded numbers **exponentially faster** than GHC, since it can deal with
 very higher-order programs with optimal asymptotics, while GHC can not. As
 esoteric as this technique may look, it can actually be very useful to design
 efficient functional algorithms. One application, for example, is to implement [runtime
-deforestation](https://github.com/Kindelia/HVM/issues/167#issuecomment-1314665474) for immutable datatypes. In general,
+deforestation](https://github.com/HigherOrderCO/HVM/issues/167#issuecomment-1314665474) for immutable datatypes. In general,
 HVM is capable of applying any fusible function `2^n` times in linear time,
 which sounds impossible, but is indeed true.
 
 *Charts made on [plotly.com](https://chart-studio.plotly.com/).*
@@ -276,7 +276,7 @@ More Information
 
 - To learn more about the **underlying tech**, check [guide/HOW.md](guide/HOW.md).
 
-- To ask questions and **join our community**, check our [Discord Server](https://discord.gg/kindelia).
+- To ask questions and **join our community**, check our [Discord Server](https://discord.higherorderco.com).
 
 - To **contact the author** directly, send an email to .
@@ -416,13 +416,13 @@ let f = (2 + x) in [λx. f, λx. f]
 
 The solution to that question is the main insight that the Interaction Net
 model brought to the table, and it is described in more details on the
-[HOW.md](https://github.com/Kindelia/HVM/blob/master/guide/HOW.md) document.
+[HOW.md](https://github.com/HigherOrderCO/HVM/blob/master/guide/HOW.md) document.
 
 ### Is HVM always *asymptotically* faster than GHC?
 
 No. In most common cases, it will have the same asymptotics. In some cases, it
 is exponentially faster. In [this
-issue](https://github.com/Kindelia/HVM/issues/60), a user noticed that HVM
+issue](https://github.com/HigherOrderCO/HVM/issues/60), a user noticed that HVM
 displays quadratic asymptotics for certain functions that GHC computes in
 linear time. That was a surprise to me, and, as far as I can tell, despite the
 "optimal" brand, seems to be a limitation of the underlying theory. That said,
@@ -458,7 +458,7 @@ foldr (.) id funcs :: [Int -> Int]
 
 GHC won't be able to "fuse" the functions on the `funcs` list, since they're
 not known at compile time. HVM will do that just fine. See [this
-issue](https://github.com/Kindelia/HVM/issues/167) for a practical example.
+issue](https://github.com/HigherOrderCO/HVM/issues/167) for a practical example.
 
 Another practical application for λ-encodings is for monads. On Haskell, the
 Free Monad library uses Church encodings as an important optimization. Without
diff --git a/guide/HOW.md b/guide/HOW.md
index dd19893d..e293a602 100644
--- a/guide/HOW.md
+++ b/guide/HOW.md
@@ -38,7 +38,7 @@ exist in one place greatly simplifies parallelism. This was all known and
 possible since years ago (see other implementations of optimal reduction), but
 all implementations of this algorithm, until now, represented terms as graphs.
 This demanded a lot of pointer indirection, making
-it slow in practice. A new memory format, based on the [Interaction Calculus](https://github.com/VictorTaelin/Symmetric-Interaction-Calculus),
+it slow in practice. A new memory format, based on the [Interaction Calculus](https://github.com/VictorTaelin/Interaction-Calculus),
 takes advantage of the fact that inputs are known to be λ-terms, allowing for
 a 50% lower memory usage, and letting us avoid several impossible cases. This
 made the runtime 50x (!) faster, which finally allowed it to compete with GHC
@@ -126,7 +126,7 @@ having incremented each number in `list` by 1. Notes:
 
 - You may write `@` instead of `λ`.
 
-- Check [this](https://github.com/Kindelia/HVM/issues/64#issuecomment-1030688993) issue about how constructors, applications and currying work.
+- Check [this](https://github.com/HigherOrderCO/HVM/issues/64#issuecomment-1030688993) issue about how constructors, applications and currying work.
 
 What makes it fast
 ==================
diff --git a/guide/README.md b/guide/README.md
index 893518d7..8d950a2e 100644
--- a/guide/README.md
+++ b/guide/README.md
@@ -463,9 +463,9 @@ hvm::runtime::eval(file, term, funs, size, tids, dbug);
 
 *To learn how to design the `apply` function, first learn HVM's memory model
 (documented on
-[runtime/base/memory.rs](https://github.com/Kindelia/HVM/blob/master/src/runtime/base/memory.rs)),
+[runtime/base/memory.rs](https://github.com/HigherOrderCO/HVM/blob/master/src/runtime/base/memory.rs)),
 and then consult some of the precompiled IO functions
-[here](https://github.com/Kindelia/HVM/blob/master/src/runtime/base/precomp.rs).
+[here](https://github.com/HigherOrderCO/HVM/blob/master/src/runtime/base/precomp.rs).
 You can also use this API to extend HVM with new compute primitives, but to
 make this efficient, you'll need to use the `visit` function too. You can see
 some examples by compiling a `.hvm` file to Rust, and then checking the `precomp.rs`