- Install pyenv & install the same python version as in our Dockerfile
- Install poetry
- Clone the github repo to `gooey-server` (and make sure that's the folder name)
- Create & activate a virtualenv (e.g. `poetry shell`)
- Run `poetry install --with dev`
- Install redis, rabbitmq, and postgresql (e.g. `brew install redis rabbitmq postgresql@15`)
- Enable background services for `redis`, `rabbitmq`, and `postgresql` (e.g. with `brew services start redis` and similar for `rabbitmq` and `postgresql`)
- Use the `sqlcreate` helper to create a user and database for gooey: `./manage.py sqlcreate | psql postgres`
- Make sure you are able to access the database with `psql -W -U gooey gooey` (and when prompted for a password, enter `gooey`)
- Create an `.env` file from `.env.example` (Read 12factor.net/config)
- Run `./manage.py migrate`
- Install the zbar library (`brew install zbar`)
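Once the steps above are done, a quick sanity check that the required tools landed on your PATH can be sketched as follows (the tool names are the Homebrew defaults; adjust if yours differ):

```shell
# print "ok" or "missing" for each tool the setup steps above install
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

for tool in pyenv poetry redis-server rabbitmq-server psql; do
    check_cmd "$tool"
done
```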
- Create a google cloud project
- Create a firebase project (using the same google cloud project)
- Enable the following services:
- Go to IAM, Create a service account with the following roles:
    - Cloud Datastore User
    - Cloud Speech Administrator
    - Cloud Translation API Admin
    - Firebase Authentication Admin
    - Storage Admin
- Create and download a JSON key for this service account and save it to the project root as `serviceAccountKey.json`.
- Add your project & bucket name to `.env`
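If you prefer the CLI over the console, the service account setup can be sketched with `gcloud` roughly as below. The project and account names are placeholders, and the role IDs are a best-effort mapping of the console role names above — verify them in the IAM console before relying on them:

```shell
# placeholders -- substitute your own project name
PROJECT_ID="my-gooey-project"
SA_NAME="gooey-server"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# grant the roles listed above (role IDs assumed; double-check in IAM)
for role in roles/datastore.user roles/speech.admin \
            roles/cloudtranslate.admin roles/firebaseauth.admin \
            roles/storage.admin; do
    gcloud projects add-iam-policy-binding "$PROJECT_ID" \
        --member "serviceAccount:${SA_EMAIL}" --role "$role"
done

# download the JSON key into the project root
gcloud iam service-accounts keys create serviceAccountKey.json \
    --iam-account "$SA_EMAIL"
```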
- Run tests to see if everything is working fine: `./scripts/run-tests.sh`
  (If you run into issues with the number of open files, you can remove the limit with `ulimit -n unlimited`)
You can start all required processes in one command with Honcho:

```shell
poetry run honcho start
```
The processes that it starts are defined in `Procfile`. Currently they are:
| Service          | Port |
| ---------------- | ---- |
| API + GUI Server | 8080 |
| Admin site       | 8000 |
| Usage dashboard  | 8501 |
| Celery           | -    |
| UI               | 3000 |
| Vespa            | 8085 |
This default startup assumes that Redis, RabbitMQ, and PostgreSQL are installed and running
as background services on ports 6379, 5672, and 5432 respectively.
It also assumes that the gooey-ui repo can be found at `../gooey-ui/` (adjacent to where the
gooey-server repo sits). You can open the `Procfile` and comment this out if you don't need
to run it.
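For orientation, a Procfile wiring up the services in the table above might look roughly like this — the process names and exact commands here are illustrative assumptions, and the repo's own `Procfile` is authoritative:

```
api: poetry run uvicorn server:app --host 127.0.0.1 --port 8080
admin: poetry run python manage.py runserver 127.0.0.1:8000
dashboard: poetry run streamlit run Home.py --server.port 8501
celery: poetry run celery -A celeryapp worker
ui: cd ../gooey-ui/ && PORT=3000 npm run dev
```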
Note: the Celery worker must be manually restarted on code changes. You can do this by stopping and starting Honcho.
You need to install OrbStack or Docker Desktop for this to work.
- Create a persistent volume for Vespa: `docker volume create vespa`
- Run the container:

```shell
docker run \
    --hostname vespa-container \
    -p 8085:8080 -p 19071:19071 \
    --volume vespa:/opt/vespa/var \
    -it --rm --name vespa vespaengine/vespa
```
- Run the setup script: `./manage.py runscript setup_vespa_db`
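Before running the setup script, you can wait for Vespa's config server to come up; it exposes a health endpoint on port 19071 that reports status "up" when the container is ready:

```shell
# the config server health endpoint returns JSON with status "up" when ready
VESPA_HEALTH_URL="http://localhost:19071/state/v1/health"
curl -s "$VESPA_HEALTH_URL"
```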
- Connect to the k8s cluster: `gcloud container clusters get-credentials cluster-5 --zone us-central1-a`
- Port-forward the rabbitmq and redis services:
  `kubectl port-forward rabbitmq-1-rabbitmq-0 15674:15672 5674:5672 & kubectl port-forward redis-ha-1-server-0 6374:6379`
- Add the following to the `.env` file:

```text
GPU_CELERY_BROKER_URL="amqp://rabbit:<password>@localhost:5674"
GPU_CELERY_RESULT_BACKEND="redis://:<password>@localhost:6374"
```
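A quick way to confirm the port-forwards are alive before starting any workers (port numbers match the forwards above):

```shell
# nc -z probes a port without sending data; exit 0 means something is listening
check_port() {
    if nc -z localhost "$1" 2>/dev/null; then
        echo "port $1: open"
    else
        echo "port $1: closed"
    fi
}
check_port 5674   # forwarded rabbitmq
check_port 6374   # forwarded redis
```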
Needed for HEIC image support - https://docs.wand-py.org/en/0.5.7/guide/install.html

```shell
brew install freetype imagemagick
export MAGICK_HOME=/opt/homebrew
```
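The `/opt/homebrew` prefix assumes Apple Silicon; Intel Macs install Homebrew under `/usr/local`. A portable way to set the variable is to ask brew itself:

```shell
# Homebrew's prefix differs by architecture; ask brew if it's available,
# otherwise fall back to the Apple Silicon default
if command -v brew >/dev/null 2>&1; then
    export MAGICK_HOME="$(brew --prefix)"
else
    export MAGICK_HOME=/opt/homebrew
fi
echo "$MAGICK_HOME"
```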
Use black - https://pypi.org/project/black
Recommended: follow the Black IDE integration guide for PyCharm
We use the following facebook app for testing -
gooey.ai (dev)
App ID: 228027632918921
Create a meta developer account & send an admin your facebook ID to be added to the test app here
- Start ngrok: `ngrok http 8080`
- Set env var `FB_WEBHOOK_TOKEN = asdf1234`
- Open WhatsApp Configuration, set the Callback URL and Verify Token
- Open WhatsApp API Setup, send yourself a message from the test number.
- Copy the temporary access token there and set env var `WHATSAPP_ACCESS_TOKEN = XXXX`
(Optional) Use the test script to send yourself messages:

```shell
python manage.py runscript test_wa_msg_send --script-args 104696745926402 +918764022384
```

Replace `+918764022384` with your number and `104696745926402` with the test number ID.
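For reference, sending a message on the WhatsApp Cloud API is a plain Graph API POST, so you can also message yourself with `curl`. The IDs below mirror the script arguments above; the API version and message body are assumptions:

```shell
PHONE_NUMBER_ID="104696745926402"   # test number ID
TO_NUMBER="+918764022384"           # replace with your number
GRAPH_URL="https://graph.facebook.com/v17.0/${PHONE_NUMBER_ID}/messages"

# requires WHATSAPP_ACCESS_TOKEN to be set in your environment
curl -s -X POST "$GRAPH_URL" \
    -H "Authorization: Bearer $WHATSAPP_ACCESS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"messaging_product": "whatsapp", "to": "'"$TO_NUMBER"'", "type": "text", "text": {"body": "hello from gooey"}}'
```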
on server
```shell
# select a running container
cid=$(docker ps | grep gooey-api-prod | cut -d " " -f 1 | head -1)
# give the dump a nice name
fname=gooey_db_$(date +"%Y-%m-%d_%I-%M-%S_%p").dump
# exec pg_dump inside the container to create the dump
docker exec -it $cid pg_dump --dbname $PGDATABASE --format c -f "$fname"
# copy the dump out of the container
docker cp $cid:/app/$fname .
# print the absolute path
echo $PWD/$fname
```
on local
```shell
# reset the database
./manage.py reset_db -c
# create the database from an empty template
createdb -T template0 $PGDATABASE
# restore the dump
pg_restore --no-privileges --no-owner -d $PGDATABASE $fname
```
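A quick spot-check that the restore produced a populated schema (the exact table count will vary with migrations, but zero means the restore failed):

```shell
# count user tables in the public schema after the restore
CHECK_SQL="select count(*) from information_schema.tables where table_schema = 'public'"
psql -d "$PGDATABASE" -t -c "$CHECK_SQL"
```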
on server
```shell
# select a running container
cid=$(docker ps | grep gooey-api-prod | cut -d " " -f 1 | head -1)
# exec the script to create the fixture
docker exec -it $cid poetry run ./manage.py runscript create_fixture
# upload the fixture
docker exec -it $cid poetry run ./manage.py runscript upload_fixture
```
To load the fixture on the local db:

```shell
# reset the database
./manage.py reset_db -c
# create the database
./manage.py sqlcreate | psql postgres
# run migrations
./manage.py migrate
# load the fixture
./manage.py loaddata fixture.json
# create a superuser to access admin
./manage.py createsuperuser
```
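If you want the superuser step to be scriptable, Django's `createsuperuser` honors the `DJANGO_SUPERUSER_*` environment variables when run with `--noinput` (the credentials below are placeholders):

```shell
export DJANGO_SUPERUSER_USERNAME=admin
export DJANGO_SUPERUSER_EMAIL=admin@example.com
export DJANGO_SUPERUSER_PASSWORD=changeme   # placeholder, change it
./manage.py createsuperuser --noinput
```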
on server
```shell
./manage.py reset_db
createdb -T template0 $PGDATABASE
pg_dump $SOURCE_DATABASE | psql -q $PGDATABASE
```