bliss-rs is the Rust improvement of bliss, a library used to make playlists by analyzing songs and computing the distance between them.
Like bliss, it eases the creation of « intelligent » playlists and/or continuous play, à la Spotify/Grooveshark Radio, as well as the creation of plug-ins for existing audio players. For instance, you can use it to make calm playlists to help you sleep, fast playlists to get you started during the day, etc.
For now (and if you're looking for an easy-to-use, smooth play experience), blissify implements bliss for MPD.
There are also Python bindings. The wheels are compiled using maturin; the sources are available here for inspiration.
Note: the features bliss-rs outputs are not compatible with the ones used by C-bliss, since it uses different, more accurate values based on actual literature. It is also faster.
To use bliss-rs, you'll need a few packages: a C linker, ffmpeg, and the associated development packages (libavformat, libavutil, libavcodec, libavfilter, libavdevice), as well as the clang development packages. These steps are necessary to e.g. run the examples below.
On Ubuntu:
$ sudo apt install build-essential pkg-config libavutil-dev libavformat-dev \
libavfilter-dev libavdevice-dev libclang-dev
On Archlinux:
$ sudo pacman -S base-devel clang ffmpeg
If you want to use the library module, in order to e.g. make a bliss plugin for an existing audio player, and build the examples associated with it, you will also need libsqlite3-dev on Ubuntu, or sqlite on Archlinux.
For simple analysis / distance computing, take a look at examples/distance.rs and examples/analyze.rs.
If you simply want to try out making playlists from a folder containing songs, this example contains all you need. Usage:

cargo run --features=serde --release --example=playlist /path/to/folder /path/to/first/song

Don't forget the --release flag!

By default, it outputs the playlist to stdout, but you can use -o <path> to output it to a specific path. To avoid having to analyze the entire folder several times, it also stores the analysis in /tmp/analysis.json. You can customize this behavior by using -a <path> to store this file in a specific place.
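For instance, a run that writes the playlist to a file and keeps the analysis cache in a custom location could look like this (note the -- so that cargo passes the flags through to the example; all paths, including the playlist.m3u name, are placeholders):

cargo run --features=serde --release --example=playlist -- -o /path/to/playlist.m3u -a /path/to/analysis.json /path/to/folder /path/to/first/song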
Ready-to-use code examples:
```rust
use bliss_audio::decoder::bliss_ffmpeg::FFmpeg as Decoder;
use bliss_audio::decoder::Decoder as DecoderTrait;
use bliss_audio::BlissError;

fn main() -> Result<(), BlissError> {
    // Decode and analyze both songs.
    let song1 = Decoder::song_from_path("/path/to/song1")?;
    let song2 = Decoder::song_from_path("/path/to/song2")?;
    println!("Distance between song1 and song2 is {}", song1.distance(&song2));
    Ok(())
}
```
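The distance is computed between the two songs' analysis vectors: the smaller it is, the more similar the songs.

The second example builds a playlist from a set of songs, sorted by distance to the first song: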
```rust
use bliss_audio::decoder::bliss_ffmpeg::FFmpeg as Decoder;
use bliss_audio::decoder::Decoder as DecoderTrait;
use bliss_audio::{BlissError, Song};
use noisy_float::prelude::n32;

fn main() -> Result<(), BlissError> {
    let paths = vec!["/path/to/song1", "/path/to/song2", "/path/to/song3"];
    // Analyze every song, bailing out on the first error.
    let mut songs: Vec<Song> = paths
        .iter()
        .map(|path| Decoder::song_from_path(path))
        .collect::<Result<Vec<Song>, BlissError>>()?;

    // Assuming there is a first song
    let first_song = songs.first().unwrap().to_owned();

    // Sort the songs by their distance to the first song.
    songs.sort_by_cached_key(|song| n32(first_song.distance(song)));
    println!(
        "Playlist is: {:?}",
        songs.iter().map(|song| &song.path).collect::<Vec<_>>()
    );
    Ok(())
}
```
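A note on the sorting: f32 does not implement Ord (NaN is not comparable), so the distance is wrapped in a noisy_float n32 to get a totally ordered sort key, and sort_by_cached_key ensures the distance to first_song is only computed once per song.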
Instead of reinventing ways to fetch a user library, play songs, etc., and embedding that into bliss, it is easier to look at the library module. It implements common analysis functions, and allows analyzed songs to be stored in an SQLite database seamlessly.
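As an illustration of what the module automates, here is a rough, hand-rolled sketch that analyzes a few songs and stores their features in SQLite. It is a minimal sketch only: it assumes bliss_audio's serde feature is enabled so the analysis can be serialized to JSON, uses the rusqlite and serde_json crates, and the table layout is made up for the example:

```rust
use bliss_audio::decoder::bliss_ffmpeg::FFmpeg as Decoder;
use bliss_audio::decoder::Decoder as DecoderTrait;
use rusqlite::{params, Connection};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let conn = Connection::open("bliss.db")?;
    // Made-up schema: one row per song, features stored as JSON.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS song (path TEXT PRIMARY KEY, analysis TEXT)",
        [],
    )?;
    for path in ["/path/to/song1", "/path/to/song2"] {
        // Analyze the song (serializing the analysis requires
        // bliss_audio's `serde` feature).
        let song = Decoder::song_from_path(path)?;
        let analysis = serde_json::to_string(&song.analysis)?;
        conn.execute(
            "INSERT OR REPLACE INTO song (path, analysis) VALUES (?1, ?2)",
            params![song.path.to_string_lossy().to_string(), analysis],
        )?;
    }
    Ok(())
}
```

The real library module does this (and more) seamlessly, so prefer it over hand-rolling storage like the above.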
See blissify for a reference implementation.
If you want to experiment with e.g. distance metrics, taking a look at the Mahalanobis distance function is a good idea. It allows customizing the distance metric, putting more emphasis on certain parts of the features, e.g. making the tempo twice as prominent as the timbral features.
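For reference, the Mahalanobis-style distance is d(x, y) = sqrt((x - y)^T M (x - y)); with a diagonal M, it reduces to a per-feature weighting. Here is a minimal, self-contained sketch of that computation using the ndarray crate (the feature vectors and weights are made up for illustration, not bliss' actual features):

```rust
use ndarray::{arr1, Array1, Array2};

/// Mahalanobis-style distance: sqrt((x - y)^T M (x - y)).
fn mahalanobis(x: &Array1<f32>, y: &Array1<f32>, m: &Array2<f32>) -> f32 {
    let diff = x - y;
    diff.dot(&m.dot(&diff)).sqrt()
}

fn main() {
    // Made-up 3-dimensional "feature vectors" for two songs.
    let song1 = arr1(&[0.5f32, 0.1, 0.9]);
    let song2 = arr1(&[0.4f32, 0.3, 0.8]);
    // A diagonal M giving the first feature (say, tempo)
    // twice the weight of the two others.
    let m = Array2::from_diag(&arr1(&[2.0f32, 1.0, 1.0]));
    println!("Weighted distance: {}", mahalanobis(&song1, &song2, &m));
}
```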
It is also possible to perform metric learning, i.e. to make playlists tailored to your specific tastes by learning a distance metric for the Mahalanobis distance, using the code in the metric learning repository. blissify-rs is made to work with it, but it should be fairly straightforward to adapt it for other uses. The metric learning repo also contains a bit more context on what metric learning is.
To cross-compile bliss-rs from Linux to x86_64 Windows, install the x86_64-pc-windows-gnu target via:

rustup target add x86_64-pc-windows-gnu

Make sure you have x86_64-w64-mingw32-gcc installed on your computer.
Then, after downloading and extracting ffmpeg's prebuilt binaries, running:
FFMPEG_DIR=/path/to/prebuilt/ffmpeg cargo build --target x86_64-pc-windows-gnu --release
will produce a .rlib library file. If you want to generate a shared .dll library, add:

```toml
[lib]
crate-type = ["cdylib"]
```

to Cargo.toml before compiling, and if you want to generate a .lib static library, add:

```toml
[lib]
crate-type = ["staticlib"]
```
You can of course test the examples yourself by compiling them as .exe files:
FFMPEG_DIR=/path/to/prebuilt/ffmpeg cargo build --target x86_64-pc-windows-gnu --release --examples
WARNING: doing all of the above and making it work on Windows requires having ffmpeg's DLLs on your Windows %PATH% (avcodec-59.dll, etc.). Usually, installing ffmpeg on the target Windows machine is enough, but you can also just extract them from /path/to/prebuilt/ffmpeg/bin and put them next to the artifact cargo generated (either bliss' DLL or the executable).
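For example, assuming cargo's default target directory layout (the destination path below is illustrative, for the compiled examples):

cp /path/to/prebuilt/ffmpeg/bin/*.dll target/x86_64-pc-windows-gnu/release/examples/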
- This library relies heavily on aubio's Rust bindings for the spectral / timbral analysis, so a big thanks to both the creators and contributors of aubio, and to @katyo for making aubio bindings for Rust.
- The first part of the chroma extraction is basically a rewrite of librosa's chroma feature extraction from Python to Rust, with only the features that were needed. Thanks to both its creators and contributors as well.
- Finally, a big thanks to Christof Weiss for pointing me in the right direction for the chroma feature summarization, which is basically also a rewrite from Python to Rust of some of the awesome notebooks by AudioLabs Erlangen, which you can find here.