
Bus Error when running model #54

Open
phil-fill opened this issue Dec 19, 2023 · 0 comments

phil-fill commented Dec 19, 2023

Description

I have a project where I want to run the BlazeFace short-range model (https://github.com/google/mediapipe/blob/master/mediapipe/examples/coral/models/face-detector-quantized_edgetpu.tflite), compiled for the Edge TPU, on my Raspberry Pi 4 (Debian Bookworm) with the Coral USB Accelerator, using C++. I already tried the model with the PyCoral API and it runs on my Edge TPU (~400 fps), so I assume the model itself is fine.

I wrote code similar to the example suggested for running a model with the Edge TPU and TensorFlow Lite (https://coral.ai/docs/edgetpu/tflite-cpp/#set-up-the-tf-lite-interpreter-with-libedgetpu). This is my code:

#include <stdio.h>
#include <iostream>
#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <ctime>
#include <cstdlib>
#include <vector>
#include <tensorflow/lite/c/common.h>
#include <tensorflow/lite/model.h>
#include <memory>
#include <tensorflow/lite/tools/gen_op_registration.h>
#include <opencv2/opencv.hpp>
#include <chrono>
#include <tflite/public/edgetpu.h>

using namespace edgetpu;
using namespace tflite;

int main() {
    // Load the test image.
    std::string image_path = "/home/jean/Screenshot.png";
    cv::Mat img = cv::imread(image_path);
    if (img.empty()) {
        std::cout << "Failed to load image!";
    }

    // Resize to the model's required 128x128 input.
    int new_width = 128;
    int new_height = 128;
    cv::resize(img, img, cv::Size(new_width, new_height));

    // Load the model and open the Edge TPU device.
    const std::string model_path = "face-detector-quantized_edgetpu.tflite";
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromFile(model_path.c_str());
    std::shared_ptr<edgetpu::EdgeTpuContext> edgetpu_context =
        edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
    edgetpu::EdgeTpuContext* edgetpu_context_ptr = edgetpu_context.get();
    tflite::ops::builtin::BuiltinOpResolver resolver;
    resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());
    std::unique_ptr<tflite::Interpreter> interpreter = std::make_unique<tflite::Interpreter>();
    tflite::InterpreterBuilder builder(*model, resolver);
    std::cout << "i was here 1" << std::endl;
    interpreter->SetExternalContext(kTfLiteEdgeTpuContext,
                                    reinterpret_cast<TfLiteExternalContext*>(edgetpu_context_ptr));
    std::cout << "i was here2" << std::endl;
    interpreter->SetNumThreads(1);
    std::cout << "i was here3" << std::endl;
    interpreter->AllocateTensors();
    std::cout << "i was here4" << std::endl;
    int* inputImg_ptr = img.ptr<int>(0);

    // Fill the input tensor and run inference.
    TfLiteTensor* input_tensor = interpreter->tensor(interpreter->inputs()[0]);
    TfLiteTensor* output_box = interpreter->tensor(interpreter->outputs()[0]);
    TfLiteTensor* output_score = interpreter->tensor(interpreter->outputs()[1]);
    memcpy(input_tensor->data.f, img.ptr<int>(0), 128 * 128 * 3 * sizeof(int));
    interpreter->Invoke();
}

This is the output:

i was here 1
i was here2
Bus error
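
For reference, the setup pattern on the Coral docs page linked above differs from my code in one spot that might matter: the docs actually invoke the InterpreterBuilder on the interpreter (builder(&interpreter)), while my code constructs the builder but never applies it, so the interpreter is still the default-constructed one when SetNumThreads runs. The docs helper looks roughly like this (lightly trimmed; the early returns are my addition):

std::unique_ptr<tflite::Interpreter> BuildEdgeTpuInterpreter(
        const tflite::FlatBufferModel& model,
        edgetpu::EdgeTpuContext* edgetpu_context) {
    tflite::ops::builtin::BuiltinOpResolver resolver;
    resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());
    std::unique_ptr<tflite::Interpreter> interpreter;
    // The builder is *called* here; it populates the interpreter from the model.
    if (tflite::InterpreterBuilder(model, resolver)(&interpreter) != kTfLiteOk) {
        std::cerr << "Failed to build interpreter." << std::endl;
        return nullptr;
    }
    // Bind the Edge TPU context with the interpreter
    // (EdgeTpuContext derives from TfLiteExternalContext, so no cast is needed).
    interpreter->SetExternalContext(kTfLiteEdgeTpuContext, edgetpu_context);
    interpreter->SetNumThreads(1);
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        std::cerr << "Failed to allocate tensors." << std::endl;
        return nullptr;
    }
    return interpreter;
}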

So something goes wrong as soon as SetNumThreads is called. (If I read the TFLite source correctly, SetNumThreads also refreshes any registered external contexts, which would touch the EdgeTpuContext I set two lines earlier.) One possible problem: I cross-compiled libtensorflowlite.so from the newest TF (2.15) with Bazel for ARM (https://www.tensorflow.org/lite/guide/build_arm), while I cross-compiled libedgetpu for aarch64 with Docker and Bazel (https://github.com/google-coral/libedgetpu). The catch is that libedgetpu pins a default TF version (2.5) in its workspace.bzl file. I don't know whether that mismatch is the problem, but I tried cross-compiling libedgetpu.so against the newest TF by adjusting the commit and sha256 in workspace.bzl, and I also tried cross-compiling libtensorflowlite.so from TF 2.5. Neither worked. Do you have any hints for me?
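
As a cheap sanity check on the version-mismatch theory, a tiny program like the sketch below could compare the TFLite version the headers claim with the one the runtime library reports. (My assumption: the C API symbol TfLiteVersion() is actually exported by the libtensorflowlite.so build; if not, the compile-time TFLITE_VERSION_STRING alone still shows what the headers were.)

#include <iostream>
#include <tensorflow/lite/c/c_api.h>   // TfLiteVersion() - resolved from the runtime .so
#include <tensorflow/lite/version.h>   // TFLITE_VERSION_STRING - baked in from the headers

int main() {
    std::cout << "Headers compiled against: " << TFLITE_VERSION_STRING << std::endl;
    std::cout << "Runtime library reports:  " << TfLiteVersion() << std::endl;
    return 0;
}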

Best wishes


Issue Type

Build/Install

Operating System

Ubuntu

Coral Device

USB Accelerator

Other Devices

Raspberry Pi 4

Programming Language

C++

Relevant Log Output

No response

@google-coral-bot google-coral-bot bot added labels on Dec 19, 2023: comp:model (Model related issues), comp:thirdparty (Thirdparty related issues), Hardware:USB Accelerator (Coral USB Accelerator issues), subtype:ubuntu/linux (Ubuntu/Linux Build/installation issues), type:build/install (Build and install issues)