A Node-RED node that plays a provided sound (WAV, OGG) on a defined output (HDMI, the 3.5 mm analog jack, or a configured device) on a Raspberry Pi with a speaker attached. This node works well with IBM Watson APIs such as Text to Speech to demonstrate Cognitive APIs and IoT. See IBM Cloud for more information.
Run the following command in the root directory of your Node-RED install or your Node-RED user directory (usually ~/.node-red):
npm install node-red-contrib-speakerpi
The stream-based mode depends on node-speaker, which needs the ALSA development headers:
sudo apt-get install libasound2-dev
To route the Raspberry Pi's audio output to the 3.5 mm analog jack:
amixer cset numid=3 1
To route it to HDMI:
amixer cset numid=3 2
To adjust the output volume, use:
alsamixer
Speakerpi provides a sound node that sends a sound object to the connected speaker. To use this node with the IBM Cloud Watson services, the input message must carry the WAV/OGG data in msg.speech.
As output you get the complete, unchanged message object after the sound has been played.
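As a sketch, a Function node placed before the speakerpi node can copy the audio buffer into msg.speech. The assumption that the upstream node delivers the audio in msg.payload is illustrative and depends on the node you use:

```javascript
// Copy the audio buffer produced by an upstream node (assumed here to be
// in msg.payload) into msg.speech, the property that speakerpi plays.
function toSpeech(msg) {
    msg.speech = msg.payload;
    return msg;
}

// Example with a tiny placeholder buffer ("RIFF" header bytes):
const msg = toSpeech({ payload: Buffer.from([0x52, 0x49, 0x46, 0x46]) });
console.log(msg.speech.length); // 4
```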
In file-based mode the buffer is dumped to a temporary file and the Raspberry Pi player aplay is run in the background with that file. This gives the best quality with the fewest resources needed for playback. msg.speech should contain the WAV/OGG data (for example, directly from the Text to Speech service on IBM Cloud). The data is written to a temporary file, which is deleted after playback.
You can also play your own existing files by using msg.filename (e.g. /path/filename.wav). In that case, set msg.choose to "givenfile" and msg.filename to the file name including its path.
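For example, a Function node upstream of the speakerpi node could select a pre-recorded file like this (the path and helper name are illustrative):

```javascript
// Prepare a message that tells speakerpi to play an existing file
// instead of the buffer in msg.speech.
function prepareFileMessage(msg, path) {
    msg.choose = "givenfile";  // switch the node to the given-file mode
    msg.filename = path;       // absolute path to the WAV/OGG file
    return msg;
}

const fileMsg = prepareFileMessage({ payload: "play" }, "/home/pi/sounds/chime.wav");
console.log(fileMsg.choose, fileMsg.filename);
// givenfile /home/pi/sounds/chime.wav
```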
The stream-based mode streams the buffer directly into a speaker framework (using node-speaker), which is not very good from a quality perspective.
The node also needs a sound configuration describing the audio in msg.speech: channels (1 or 2), bit depth (8 or 16), and sample rate (11025, 22050, or 44100). It can be set in the node or passed in msg.speakerConfig:
speakerConfig = {
    channels: 1,
    bitdepth: 16,
    samplerate: 22050
}
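A minimal sketch of setting this per message in a Function node; the helper name is illustrative, and the values must match the actual format of the audio data:

```javascript
// Attach a per-message sound configuration for the buffer in msg.speech.
function withSpeakerConfig(msg) {
    msg.speakerConfig = {
        channels: 1,        // mono
        bitdepth: 16,       // 16-bit samples
        samplerate: 22050   // in Hz
    };
    return msg;
}

const out = withSpeakerConfig({ payload: "tone" });
console.log(JSON.stringify(out.speakerConfig));
// {"channels":1,"bitdepth":16,"samplerate":22050}
```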
This node runs fine with Node.js 12.x LTS, npm v6, and Node-RED v1.1.