In this tutorial we explain how to use ailia from Python. If you want to use ailia from other languages (C++/C#(Unity)/JNI/Kotlin), see the links at the bottom of this tutorial.
- Python 3.6 or later
- Download a free evaluation version of ailia SDK
- Unzip ailia SDK
- Run the following commands
```
cd ailia_sdk/python
python3 bootstrap.py
pip3 install .
```
- In the evaluation version, place the license file in the same folder as libailia.dll ([python_path]/site-packages/ailia) on Windows, and in ~/Library/SHALO/ on Mac.
- You can find the location of the Python site-packages directory using the following command.
```
pip3 show ailia
```
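After installation, you can quickly check that the bindings import and see which execution environments ailia detects. This is only a minimal smoke-test sketch, assuming the ailia.get_environment_count() / ailia.get_environment() helpers of the Python bindings:

```python
# Quick smoke test for the ailia Python bindings (not part of the official setup).
import ailia

# List the execution environments (CPU, GPU, ...) that ailia detects.
# The printed id is the value you can later pass to the -e / --env_id option.
for idx in range(ailia.get_environment_count()):
    env = ailia.get_environment(idx)
    print(f"env_id={env.id} name={env.name}")
```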
Install the libraries required by the model scripts.

```
pip install -r requirements.txt
```
On Jetson, install the following packages.

```
sudo apt install python3-pip
sudo apt install python3-matplotlib
sudo apt install python3-scipy
pip3 install cython
pip3 install numpy
pip3 install pillow
```
OpenCV for Python 3 is pre-installed on Jetson. You only need to run the following command if you get a cv2 import error.
```
sudo apt install nvidia-jetpack
```
- Note that Jetson Orin requires ailia 1.2.13 or later. Please contact us if you would like to use an early build of ailia 1.2.13.
On Raspberry Pi, install the following packages.

```
pip3 install numpy
pip3 install opencv-python
pip3 install matplotlib
pip3 install scikit-image
sudo apt-get install libatlas-base-dev
```
The following options can be specified for each model.
```
optional arguments:
  -h, --help            show this help message and exit
  -i IMAGE/VIDEO, --input IMAGE/VIDEO
                        The default (model-dependent) input data (image /
                        video) path. If a directory name is specified, the
                        model will be run for the files inside. File type is
                        specified by --ftype argument (default: lenna.png)
  -v VIDEO, --video VIDEO
                        Run the inference against live camera image. If an
                        integer value is given, corresponding webcam input
                        will be used. (default: None)
  -s SAVE_PATH, --savepath SAVE_PATH
                        Save path for the output (image / video / text).
                        (default: output.png)
  -b, --benchmark       Running the inference on the same input 5 times to
                        measure execution performance. (Cannot be used in
                        video mode) (default: False)
  -e ENV_ID, --env_id ENV_ID
                        A specific environment id can be specified. By
                        default, the return value of
                        ailia.get_gpu_environment_id will be used
                        (default: 2)
  --env_list            display environment list (default: False)
  --ftype FILE_TYPE     file type list: image | video | audio
                        (default: image)
  --debug               set default logger level to DEBUG (enable to show
                        DEBUG logs) (default: False)
  --profile             set profile mode (enable to show PROFILE logs)
                        (default: False)
  -bc BENCHMARK_COUNT, --benchmark_count BENCHMARK_COUNT
                        set iteration count of benchmark (default: 5)
```
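For reference, here is a minimal sketch of how a model script typically wires these options into the ailia API. The file names and the preprocessing are placeholders (every model downloads and preprocesses its own data); only ailia.Net, net.predict and ailia.get_gpu_environment_id are assumed from the SDK:

```python
import ailia
import cv2
import numpy as np

# Placeholder file names -- each model script downloads its own pair.
MODEL_PATH = "model.onnx.prototxt"
WEIGHT_PATH = "model.onnx"

# env_id selects the execution environment (this is what -e / --env_id overrides).
env_id = ailia.get_gpu_environment_id()
net = ailia.Net(MODEL_PATH, WEIGHT_PATH, env_id=env_id)

# Placeholder preprocessing: resize and reorder to NCHW float32 (model dependent).
img = cv2.imread("input.png")
x = cv2.resize(img, (224, 224)).transpose(2, 0, 1)[np.newaxis, :].astype(np.float32)

out = net.predict(x)  # output format depends on the model
```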
Input an image file, perform AI processing, and save the output to a file.
```
python3 yolov3-tiny.py -i input.png -s output.png
```
Input a video file, perform AI processing, and save the output to a video file.
```
python3 yolov3-tiny.py -i input.mp4 -s output.mp4
```
Measure the execution time of the AI model.
```
python3 yolov3-tiny.py -b
```
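The -b option essentially repeats predict on the same input and prints the elapsed time per run. A hand-rolled equivalent, reusing the hypothetical net and x from the sketch above:

```python
import time

# Roughly what -b / --benchmark does: run the same input several times
# (default: 5 iterations, configurable via -bc) and report per-run latency.
for i in range(5):
    start = time.perf_counter()
    net.predict(x)
    print(f"iteration {i}: {(time.perf_counter() - start) * 1000:.2f} ms")
```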
Run the AI model on the CPU instead of the GPU.
```
python3 yolov3-tiny.py -e 0
```
Get a list of executable environments.
```
python3 yolov3-tiny.py --env_list
```
Run the inference against a live video stream. (Press 'Q' to quit.)
```
python3 yolov3-tiny.py -v 0
```
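Internally, the -v option wraps the same predict call in an OpenCV capture loop. A generic sketch, again reusing the hypothetical net from above and omitting model-specific pre/post-processing:

```python
import cv2
import numpy as np

capture = cv2.VideoCapture(0)  # 0 = first webcam, matching `-v 0`
while capture.isOpened():
    ret, frame = capture.read()
    if not ret:
        break
    # Placeholder preprocessing; each model has its own input size and format.
    x = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)[np.newaxis, :].astype(np.float32)
    out = net.predict(x)
    # Model-specific post-processing / drawing would go here.
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
capture.release()
cv2.destroyAllWindows()
```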
You can use a GUI and select a model from the list using the command below. (Press 'Q' to quit each AI model app.)
```
python3 launcher.py
```
- ailia AI showcase for iOS
- ailia AI showcase for Android
- Contact us for other platforms (Windows/macOS/Linux)
- ailia SDK python Tutorial (EN) (JP)
- API reference (EN)
- ailia Models (* This site)
- Note: All Python models will also work with C++/Unity(C#)/Java(JNI)/Kotlin, but you may need to write the pre/post-processing code yourself.