An Interactive Smart Mirror using AVS
- Authenticate with Amazon via Login with Amazon (LWA)
- Establish connection with AVS API
- Send request to AVS and handle audio feedback
- Handle other Alexa capabilities (to-dos and such)
- Implement Web Sockets and Design User Interface
- Move everything to Raspberry Pi and assemble Mirror
In order to clone this repository locally and run Francisco in its entirety, you need the following:
- Raspberry Pi 3 running Debian Jessie
- Node.js v6.10 (change the version accordingly)
- SoX: to install it, copy and paste the following command: `sudo apt-get install sox libsox-fmt-all`
- ffmpeg, installed from source (Instructions)
- a USB microphone and speakers
- a Mirror... obviously
- Go to developer.amazon.com and create a new device project under AVS, then substitute every part that takes `<TOKEN SECRET>` or `<TOKEN CLIENT>` with your own values, including the `refreshToken` function (a rough sketch of the whole login/refresh flow follows these setup steps).
- After you've created a security profile, go to it and add `http://localhost:3000/login` to the Allowed Origins, then add `http://localhost:3000/authd` to the Allowed Redirect URLs.
- After you've gathered those required resources, `git clone` this repository to your Raspberry Pi.
- `cd` into the directory and run `npm install`, then wait a few minutes for all of the dependencies to build. Google any error you encounter and fix it.
- Run `electron .`. If you encounter an error, refer to this: Issue. You might encounter a localStorage error; that's fine.
- If it's running successfully, go to `http://localhost:3000/login` to complete the one-time setup.
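
For orientation, here is a rough sketch of what the `/login` and `/authd` routes plus the `refreshToken` step boil down to. The Login with Amazon endpoints (`https://www.amazon.com/ap/oa` and `https://api.amazon.com/auth/o2/token`) and the `alexa:all` scope are standard for AVS; the Express wiring, the `PRODUCT_ID` constant, and the helper names below are illustrative assumptions, not necessarily how this repository lays it out.

```js
// Rough sketch only: how the LWA pieces referenced above typically fit together.
const express = require('express');
const https = require('https');
const querystring = require('querystring');

const app = express();
const CLIENT_ID = '<TOKEN CLIENT>';
const CLIENT_SECRET = '<TOKEN SECRET>';
const PRODUCT_ID = '<YOUR AVS PRODUCT ID>';         // from the device project on developer.amazon.com
const REDIRECT_URI = 'http://localhost:3000/authd';

// /login sends the browser to Amazon's consent page with the alexa:all scope.
app.get('/login', (req, res) => {
  const scopeData = JSON.stringify({
    'alexa:all': {
      productID: PRODUCT_ID,
      productInstanceAttributes: { deviceSerialNumber: '123' }
    }
  });
  res.redirect('https://www.amazon.com/ap/oa?' + querystring.stringify({
    client_id: CLIENT_ID,
    scope: 'alexa:all',
    scope_data: scopeData,
    response_type: 'code',
    redirect_uri: REDIRECT_URI
  }));
});

// /authd receives ?code=... back from Amazon; the code gets exchanged at the
// LWA token endpoint for an access token plus a long-lived refresh token.
app.get('/authd', (req, res) => {
  exchangeToken({
    grant_type: 'authorization_code',
    code: req.query.code,
    redirect_uri: REDIRECT_URI
  });
  res.send('Authorized. You can close this tab.');
});

// The refreshToken step is the same POST with
// { grant_type: 'refresh_token', refresh_token: <stored token> } instead.
function exchangeToken(params) {
  const body = querystring.stringify(Object.assign({
    client_id: CLIENT_ID,
    client_secret: CLIENT_SECRET
  }, params));
  const req = https.request({
    hostname: 'api.amazon.com',
    path: '/auth/o2/token',
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(body)
    }
  }, (res) => {
    let data = '';
    res.on('data', (chunk) => { data += chunk; });
    res.on('end', () => console.log(JSON.parse(data)));  // access_token, refresh_token, expires_in
  });
  req.write(body);
  req.end();
}

app.listen(3000);
```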
Francisco uses AVS for most of its commands, but when a certain hotword is spoken (like `play`, since using AVS in development mode does not allow for music playback), it switches to internal command functions and the speech is not sent to AVS. Let's take a closer look at the `play` internal command.
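
In other words, the hotword that fired decides the route before any audio goes anywhere. A minimal sketch of that decision is below; `onHotword` and its wiring are assumptions for illustration only, since the actual switching logic already lives in `main.js` (see the `play` branch further down).

```js
// Illustrative only: route the utterance based on which hotword fired.
function onHotword(hotword) {
  if (hotword === 'play') {
    internal_cmd = 'play';   // handled locally; the speech is never sent to AVS
  } else {
    internal_cmd = null;     // normal wake word: hand the request off to AVS
  }
}
```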
- In order to add a new hotword, go to https://snowboy.kitt.ai, record a new hotword, download the `.pmdl` file, and add it to the `resources` folder.
- In the `main.js` file, add this (a condensed sketch of the surrounding snowboy wiring follows the snippet):
models.add({
  file: 'resources/<YOUR HOTWORD FILE>.pmdl',  // the model you downloaded from snowboy.kitt.ai
  sensitivity: '0.5',
  hotwords: '<YOUR HOTWORD>'
});
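
For context, that `models` object is part of snowboy's standard wiring: the models collection feeds a `Detector`, which emits a `hotword` event as microphone audio is piped into it. The condensed sketch below shows where the `models.add` call sits; the `common.res` path and the gain value are snowboy's usual defaults, shown here as assumptions rather than this repo's exact code.

```js
// Condensed snowboy wiring, for orientation only.
const { Detector, Models } = require('snowboy');

const models = new Models();
models.add({
  file: 'resources/<YOUR HOTWORD FILE>.pmdl',
  sensitivity: '0.5',
  hotwords: '<YOUR HOTWORD>'
});

const detector = new Detector({
  resource: 'resources/common.res', // shared acoustic model that ships with snowboy
  models: models,
  audioGain: 2.0
});

detector.on('hotword', (index, hotword, buffer) => {
  // `hotword` is the string registered above; recording and command
  // dispatch (AVS vs. internal) start here.
  console.log('hotword detected:', hotword);
});

// Microphone audio (e.g. from the `mic` package or a SoX stream) is piped in:
// micInputStream.pipe(detector);
```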
- The logic for switching between AVS and internal commands is mostly written, so you'd just need to decide what to do according to what `internal_cmd` is equal to:
if (internal_cmd == 'play') {
    speech.recognize('command.wav', conf)
        .then((results) => {
            const transcription = results[0];
            let res;
            try {
                // Everything after the "play" hotword is treated as the search query
                res = transcription.split('play')[1].trim();
                console.log(res);
                getYT(res);
                in_session = false;
                io.sockets.emit('VOLUME', 100);
                io.sockets.emit('STATE', 'Ready to Listen');
                io.sockets.emit('STATUS', 'Listening to ' + res);
                setTimeout(function() {
                    listen();
                }, 250);
            }
            catch (e) {
                // The transcription was empty or didn't contain "play"; recover and keep listening
                in_session = false;
                io.sockets.emit('STATE', 'Ready to Listen');
                io.sockets.emit('STATUS', "Sorry, I didn't get that");
                setTimeout(function() {
                    listen();
                }, 250);
            }
        })
        .catch((err) => {
            console.error('ERROR:', err);
        });
}
- Then voilà: adding a new command is really up to the programmer, as you can do anything you want with it once you've configured the hotword; see the sketch below for one possible shape of a new branch.
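
As a concrete starting point, a new branch could sit right next to the `play` handler above. Everything specific to this example is hypothetical (`'weather'` as the hotword value, the `getWeather` helper, and its string result); only `internal_cmd`, `in_session`, the socket events, and `listen()` come from the existing code.

```js
// Hypothetical extra internal command, modeled on the 'play' branch above.
if (internal_cmd == 'weather') {
  getWeather()                                  // placeholder helper returning a Promise<string>
    .then((forecast) => {
      in_session = false;
      io.sockets.emit('STATE', 'Ready to Listen');
      io.sockets.emit('STATUS', forecast);      // e.g. 'Sunny and 24 degrees'
      setTimeout(function() {
        listen();
      }, 250);
    })
    .catch((err) => {
      console.error('ERROR:', err);
      in_session = false;
      io.sockets.emit('STATE', 'Ready to Listen');
      setTimeout(function() {
        listen();
      }, 250);
    });
}
```

Register the matching hotword with snowboy as shown earlier, set `internal_cmd` accordingly when it fires, and the rest of the flow stays the same.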