# LandingLens Python Library

The LandingLens Python library contains the LandingLens development library and examples that show how to integrate your app with LandingLens in a variety of scenarios. The examples cover different model types, image acquisition sources, and post-processing techniques.

## Documentation

## Quick start

### Install

First, install the LandingAI Python library:

```bash
pip install landingai
```

### Acquire Your First Images

After installing the LandingAI Python library, you can start acquiring images from one of many image sources.

For example, from a single image file:

```python
from landingai.pipeline.frameset import Frame

frame = Frame.from_image("/path/to/your/image.jpg")
frame.resize(width=512, height=512)
frame.save_image("/tmp/resized-image.png")
```

You can also extract frames from your webcam. For example:

```python
from landingai.pipeline.image_source import Webcam

with Webcam(fps=0.5) as webcam:
    for frame in webcam:
        frame.resize(width=512, height=512)
        frame.save_image("/tmp/webcam-image.png")
```

To learn how to acquire images from more sources, go to Image Acquisition.
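For instance, you can already batch-process a folder of images using the same `Frame` API shown above. This is a minimal sketch that uses only the standard library for file discovery; the library also ships with dedicated image sources (such as video files and network cameras), described in Image Acquisition. The `/data/images` path is just a placeholder:

```python
from glob import glob

from landingai.pipeline.frameset import Frame

# Resize every JPEG in a folder and save the results elsewhere.
# The /data/images path is a placeholder for this sketch.
for index, path in enumerate(sorted(glob("/data/images/*.jpg"))):
    frame = Frame.from_image(path)
    frame.resize(width=512, height=512)
    frame.save_image(f"/tmp/resized-image-{index}.png")
```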

### Run Inference

If you have deployed a computer vision model in LandingLens, you can use this library to send images to that model for inference.

For example, let's say we've created and deployed a model in LandingLens that detects coffee mugs. Now, we'll use the code below to extract images (frames) from a webcam and run inference on those images.

> **Note:** If you don't have a LandingLens account, create one here. You will need to get an "endpoint ID" and "API key" from LandingLens in order to run inferences. Check our Running Inferences / Getting Started guide.
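If you prefer not to hard-code the API key (as the example below does for brevity), here is a minimal sketch that reads it from an environment variable instead. The `LANDINGAI_API_KEY` variable name is just an assumption chosen for this sketch:

```python
import os

from landingai.predict import Predictor

# Read the API key from the environment instead of embedding it in source code.
# LANDINGAI_API_KEY is a hypothetical variable name used only for this sketch.
predictor = Predictor(
    endpoint_id="abcdef01-abcd-abcd-abcd-01234567890",  # your endpoint ID from LandingLens
    api_key=os.environ["LANDINGAI_API_KEY"],
)
```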

> **Note:** Learn how to use LandingLens from our Support Center and Video Tutorial Library. Need help with specific use cases? Post your questions in our Community.

```python
from landingai.pipeline.image_source import Webcam
from landingai.predict import Predictor

predictor = Predictor(
    endpoint_id="abcdef01-abcd-abcd-abcd-01234567890",
    api_key="land_sk_xxxxxx",
)
with Webcam(fps=0.5) as webcam:
    for frame in webcam:
        frame.resize(width=512)
        frame.run_predict(predictor=predictor)
        frame.overlay_predictions()
        if "coffee-mug" in frame.predictions:
            frame.save_image("/tmp/latest-webcam-image.png", include_predictions=True)
```

## Examples

We've provided some examples in Jupyter Notebooks to focus on ease of use, and some examples in Python apps to provide a more robust and complete experience.

| Example | Description | Type |
| --- | --- | --- |
| Poker Card Suit Identification | This notebook shows how to use an object detection model from LandingLens to detect suits on playing cards. A webcam is used to take photos of playing cards. | Jupyter Notebook / Colab |
| Door Monitoring for Home Automation | This notebook shows how to use an object detection model from LandingLens to detect whether a door is open or closed. An RTSP camera is used to acquire images. | Jupyter Notebook / Colab |
| Satellite Images and Post-Processing | This notebook shows how to use a Visual Prompting model from LandingLens to identify different objects in satellite images. The notebook includes post-processing scripts that calculate the percentage of ground cover that each object takes up. | Jupyter Notebook / Colab |
| License Plate Detection and Recognition | This notebook shows how to extract frames from a video file and use an object detection model and OCR from LandingLens to identify and recognize different license plates. | Jupyter Notebook / Colab |
| Streaming Video | This application shows how to continuously run inference on images extracted from a streaming RTSP video camera feed. | Python application |

### Run Examples Locally

All the examples in this repo can be run locally.

To give you some guidance, here's how you can run the rtsp-capture example locally in a shell environment (the same commands are collected into a single copy-paste block after the list):

1. Clone the repo locally: `git clone https://github.com/landing-ai/landingai-python.git`
2. Install the library: `poetry install --with examples` (see the Poetry docs for how to install `poetry`)
3. Activate the virtual environment: `poetry shell`
4. Run: `python landingai-python/examples/capture-service/run.py`
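Equivalently, here is the same sequence as an interactive shell session, assuming `git` and `poetry` are already installed. Because this sketch changes into the cloned repo, the final run path is relative:

```bash
git clone https://github.com/landing-ai/landingai-python.git
cd landingai-python
poetry install --with examples
poetry shell
python examples/capture-service/run.py
```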