-
As the title says. The system is

xmake.lua:

add_rules("mode.debug", "mode.release")
set_runtimes("MD")
add_requires("libtorch", {configs = {shared = true, cuda = true}})
set_languages("c++20")

target("main")
    set_kind("binary")
    add_packages("libtorch")
    add_files("src/*.cpp")
    set_default(true)

The project currently has only a single source file used for testing:

#include <iostream>
#include <torch/torch.h>

int main(int argc, char **argv) {
    std::cout << "hello world!" << std::endl;
    std::cout << torch::cuda::is_available() << std::endl;
    return 0;
}

It compiles and runs successfully; the terminal output is:

hello world!
0

Problem description:
Requirement:
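A slightly expanded version of the same diagnostic may help narrow things down (a sketch only, not part of the original test; torch::cuda::device_count() and torch::cuda::cudnn_is_available() are standard libtorch C++ API calls):

#include <iostream>
#include <torch/torch.h>

int main() {
    // Print a few more CUDA-related facts than the original test does.
    std::cout << "CUDA available:  " << torch::cuda::is_available() << std::endl;
    std::cout << "Device count:    " << torch::cuda::device_count() << std::endl;
    std::cout << "cuDNN available: " << torch::cuda::cudnn_is_available() << std::endl;
    return 0;
}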
Answered by xq114 on Oct 8, 2024
Replies: 1 comment, 4 replies
-
This needs further investigation. As a workaround for now, you can download the official prebuilt GPU package and pull it in with add_requires("cmake::Torch", {configs = {envs = {CMAKE_PREFIX_PATH = "path/to/libtorch"}}}).
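A minimal xmake.lua sketch built around that suggestion (the alias, the system = true option, and the path placeholder are assumptions beyond the add_requires line quoted above):

-- Sketch: import a locally downloaded GPU libtorch through its CMake config files.
-- "path/to/libtorch" is a placeholder for wherever the official package was unpacked.
add_rules("mode.debug", "mode.release")
set_runtimes("MD")
set_languages("c++20")

add_requires("cmake::Torch", {alias = "torch", system = true,
    configs = {envs = {CMAKE_PREFIX_PATH = "path/to/libtorch"}}})

target("main")
    set_kind("binary")
    add_packages("torch")
    add_files("src/*.cpp")
    set_default(true)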
Beta Was this translation helpful? Give feedback.
4 replies
Tried it with your code
Compiled