Pipeline for ZED Camera on Tegra TX2 #58

D0CX4ND3R opened this issue May 8, 2018 · 2 comments

@D0CX4ND3R

Hi, I ran the pipeline on a Tegra TX2 with a ZED Camera, but a problem occurred.
To make the ZED Camera usable, I modified the Application structure as follows:

  1. I added a new struct VideoSource to initialize the ZED camera:
```
struct VideoSource
{
    sl::Mat frame_zed;
    sl::Camera zed_camera;

    VideoSource()
    {
        sl::InitParameters init_params;
        init_params.camera_resolution = sl::RESOLUTION_HD720;
        init_params.depth_mode = sl::DEPTH_MODE_PERFORMANCE;
        init_params.coordinate_units = sl::UNIT_METER;
        init_params.camera_fps = 30;

        sl::ERROR_CODE err = zed_camera.open(init_params);
        if (err != sl::SUCCESS)
        {
            std::cout << sl::toString(err) << std::endl;
            zed_camera.close();
            //return; // Quit if an error occurred
        }
        else
        {
            std::cout << "ZED Camera created!!" << std::endl;
        }
    }

    // Convert the ZED sl::Mat to an OpenCV cv::Mat (wraps the same CPU buffer)
    virtual cv::Mat slMat2cvMat(sl::Mat &input)
    {
        // Mapping between sl::MAT_TYPE and CV_TYPE
        int cv_type = -1;
        switch (input.getDataType())
        {
            case sl::MAT_TYPE_32F_C1: cv_type = CV_32FC1; break;
            case sl::MAT_TYPE_32F_C2: cv_type = CV_32FC2; break;
            case sl::MAT_TYPE_32F_C3: cv_type = CV_32FC3; break;
            case sl::MAT_TYPE_32F_C4: cv_type = CV_32FC4; break;
            case sl::MAT_TYPE_8U_C1: cv_type = CV_8UC1; break;
            case sl::MAT_TYPE_8U_C2: cv_type = CV_8UC2; break;
            case sl::MAT_TYPE_8U_C3: cv_type = CV_8UC3; break;
            case sl::MAT_TYPE_8U_C4: cv_type = CV_8UC4; break;
            default: break;
        }
        return cv::Mat(input.getHeight(), input.getWidth(), cv_type, input.getPtr<sl::uchar1>(sl::MEM_CPU), input.getStepBytes(sl::MEM_CPU));
    }

    // Get the left image from the ZED camera via the ZED SDK
    virtual void operator>>(cv::Mat &output)
    {
        zed_camera.retrieveImage(frame_zed, sl::VIEW_LEFT);
        output = slMat2cvMat(frame_zed);
    }

    virtual int getWidth() { return zed_camera.getResolution().width; }
    virtual int getHeight() { return zed_camera.getResolution().height; }
};
```
  2. I modified the Application structure's constructor:
```
    // clang-format off
    Application
    (
        const std::string &input,
        const std::string &model,
        float acfCalibration,
        int minWidth,
        bool window,
        float resolution
    ) : resolution(resolution)
    // clang-format on
    {
        // Create a video source:
        // 1) integer == index to device camera
        // 2) filename == supported video formats
        // 3) "/fullpath/Image_%03d.png" == list of stills
        // http://answers.opencv.org/answers/761/revisions/
        //video = create(input);
        //zed_camera = create();

        // create zed camera
        zed_source = std::make_shared<VideoSource>();

        //video = create(0);

        // Create an OpenGL context:
        cv::Size size(zed_source->getWidth(),zed_source->getHeight());
        //const auto size = getSize(*video);

        context = aglet::GLContext::create(aglet::GLContext::kAuto, window ? "acf" : "", size.width, size.height);

        // Create an object detector:
        detector = std::make_shared<acf::Detector>(model);
        detector->setDoNonMaximaSuppression(true);

        if (acfCalibration != 0.f)
        {
            acf::Detector::Modify dflt;
            dflt.cascThr = { "cascThr", -1.0 };
            dflt.cascCal = { "cascCal", acfCalibration };
            detector->acfModify(dflt);
        }

        // Create the asynchronous scheduler:
        pipeline = std::make_shared<acf::GPUDetectionPipeline>(detector, size, 5, 0, minWidth);

        // Instantiate an ogles_gpgpu display class that will draw to the
        // default texture (0) which will be managed by aglet (typically glfw)
        if (window && context->hasDisplay())
        {
            display = std::make_shared<ogles_gpgpu::Disp>();
            display->init(size.width, size.height, TEXTURE_FORMAT);
            display->setOutputRenderOrientation(ogles_gpgpu::RenderOrientationFlipped);
        }
    }
```
  3. The update function is also modified correspondingly:
```
cv::Mat frame;
(*zed_source) >> frame;
```

The program compiles successfully, but there is no image in the window, only a black frame.
Does the code have any mistakes?
Thank you for helping me.

P.S. Another question: when I run the acf-detect project, I want to show the captured frame in real time, but the following OpenCV error occurs:

```
OpenCV(3.4.1) Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /home/nvidia/.hunter/_Base/8fee57e/c3fbf9e/a0ab86d/Build/OpenCV/Source/modules/highgui/src/window.cpp, line 636
Exception: OpenCV(3.4.1) /home/nvidia/.hunter/_Base/8fee57e/c3fbf9e/a0ab86d/Build/OpenCV/Source/modules/highgui/src/window.cpp:636: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage
```

How can I add these two libraries during the Hunter build? Thank you.

@D0CX4ND3R (Author)

I solved the camera problem.
But the Hunter build problem from the P.S. is still there. How can I add the CMake parameters?
Thank you.
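
For reference, a likely cause of the black frame (the exact fix is not spelled out in this thread, so this is an assumption) is that `operator>>` calls `retrieveImage()` without a preceding `grab()`. A minimal sketch of the kind of change that addresses this, using the ZED SDK 2.x API already shown above:

```
// Sketch only (assumed fix): retrieveImage() only returns valid data after a
// successful grab(), so grab a new frame before converting it for OpenCV.
virtual void operator>>(cv::Mat &output)
{
    if (zed_camera.grab() == sl::SUCCESS)
    {
        zed_camera.retrieveImage(frame_zed, sl::VIEW_LEFT);
        output = slMat2cvMat(frame_zed);
    }
}
```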

@headupinclouds (Contributor) commented May 11, 2018

> I solved the camera problem.

👍

> But the Hunter build problem from the P.S. is still there.


SEE: https://github.com/elucideye/acf/blob/3862fe398c8acb567d0f0b55bf170c1f17bf65d3/src/app/pipeline/pipeline.cpp#L326

Okay, but usually build problems are related to the package + compiler combinations, not really Hunter itself (i.e., they would occur if you weren't using Hunter to manage the build).

> How can I add the CMake parameters?

You can set CMake options in your local configuration `hunter_config()` commands using the `CMAKE_ARGS` tag, like:

``` 
hunter_config(foo VERSION v1.0.0 CMAKE_ARGS OPTION1=ON OPTION2=OFF)
```

in the `LOCAL` config of your top-level project. This will be used if you specify a `LOCAL` argument in your top `HunterGate()` call, which tells it to look in `cmake/Hunter/config.cmake`. This will override the default settings associated with the Hunter release you are using. That's where you can customize things on a per-package basis (both VERSION|GIT_SUBMODULE and CMAKE_ARGS).

* https://docs.hunter.sh/en/latest/reference/user-modules/hunter_config.html?highlight=hunter_config
* https://github.com/hunter-packages/gate#usage-custom-config

Example:
```
HunterGate(
    URL "https://github.com/ruslo/hunter/archive/v0.20.72.tar.gz"
    SHA1 "bd3cb40902ccf2fdde1d0cc71d5a7acd24a0696c"
    LOCAL # load `${CMAKE_CURRENT_LIST_DIR}/cmake/Hunter/config.cmake`
)
```

You can take a look at the LOCAL configuration in this repo, for example:

https://github.com/elucideye/acf/blob/3862fe398c8acb567d0f0b55bf170c1f17bf65d3/cmake/Hunter/config.cmake#L21

```
if(IOS OR ANDROID)
  # local workaround for protobuf compiler crash with Xcode 8.1
  # see https://github.com/elucideye/acf/issues/41
  set(opencv_cmake_args
    WITH_PROTOBUF=OFF
    BUILD_PROTOBUF=OFF
    BUILD_LIBPROTOBUF_FROM_SOURCES=NO
    BUILD_opencv_dnn=OFF
     
    WITH_JASPER=OFF
    BUILD_JASPER=OFF
  )
  hunter_config(OpenCV VERSION ${HUNTER_OpenCV_VERSION} CMAKE_ARGS ${opencv_cmake_args})  
endif()

### ogles_gpgpu ###
set(ogles_gpgpu_cmake_args
  OGLES_GPGPU_VERBOSE=OFF
  OGLES_GPGPU_OPENGL_ES3=${ACF_OPENGL_ES3}
)
hunter_config(ogles_gpgpu VERSION ${HUNTER_ogles_gpgpu_VERSION} CMAKE_ARGS ${ogles_gpgpu_cmake_args})
```
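
For the GTK/highgui error specifically, a minimal sketch of a local config entry (assuming Hunter's OpenCV package forwards the standard OpenCV CMake options, and that libgtk2.0-dev and pkg-config are installed on the system, as the error message suggests):

```
hunter_config(OpenCV
  VERSION ${HUNTER_OpenCV_VERSION}
  CMAKE_ARGS
    WITH_GTK=ON # enable the highgui GTK backend so cvShowImage/cv::imshow work
)
```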

> Tegra TX2 with a ZED Camera

I'm curious to see how this actually runs. I haven't found time to make this pipeline ready for the main lib yet, but I plan to.
