Hello,

I have a few questions regarding MAX Engine.

I want to install MAX Engine on an Orange Pi with an ARM CPU, but I'm running into problems during the "Updating apt repository metadata cache" step ("Failed to update via apt-get update"). The output includes this note:

N: Skipping acquire of configured file 'main/binary-armhf/Packages' as repository 'https://dl.modular.com/public/installer/deb/ubuntu focal InRelease' doesn't support architecture 'armhf'

Can you confirm that this architecture is unsupported? Is there any workaround to install MAX Engine on an Orange Pi?
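For context, armhf is the 32-bit hard-float ARM ABI, and many Orange Pi images ship a 32-bit userland even on 64-bit SoCs; the note above says the Modular repository simply does not publish armhf packages. A quick sanity check (a sketch, assuming a Debian-based image) is to compare what the kernel reports against what apt will request:

```shell
# Compare the kernel architecture with the architecture apt requests.
# A 64-bit kernel (aarch64) paired with a 32-bit userland (armhf) means
# apt asks for armhf packages even though the CPU could run arm64.
uname -m                                                 # e.g. aarch64 or armv7l
command -v dpkg >/dev/null && dpkg --print-architecture  # e.g. armhf or arm64
```

If `uname -m` reports aarch64 but dpkg reports armhf, reflashing the board with a 64-bit (arm64) OS image may be a workaround worth trying; whether the repository actually publishes arm64 packages is something the Modular team would need to confirm.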
Is it possible to enable Hyper-Threading during inference with MAX Engine? On a 12th Gen Intel(R) Core(TM) i7-12700K, CPU usage looks as if half of the cores are not engaged. Is there any way to use all logical cores? With ONNX Runtime, for example, this can be enabled.
Another question about the number of engaged cores: is it possible to reduce the number of cores that take part in inference? I want to test inference time with, for example, only 4 cores.
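On both thread-count questions: `os.cpu_count()` in Python reports logical processors, which is a quick way to confirm what the OS exposes. I can't confirm whether MAX Engine offers a thread-count option, but for OpenMP-backed runtimes the `OMP_NUM_THREADS` environment variable is a common, runtime-agnostic knob (whether MAX Engine honors it is an assumption to verify); it must be set before the runtime initializes its thread pool. A minimal sketch:

```python
import os

# Logical processors visible to the OS (Hyper-Threading siblings included).
logical = os.cpu_count()
print(f"logical cores reported by the OS: {logical}")

# Many inference runtimes size their thread pool from the physical core
# count by default, which can make half of the logical cores look idle.
# OMP_NUM_THREADS is a common knob for OpenMP-backed thread pools; whether
# MAX Engine honors it is an assumption that needs confirming.
os.environ["OMP_NUM_THREADS"] = str(logical)  # try all logical cores
# os.environ["OMP_NUM_THREADS"] = "4"         # or pin to 4 for benchmarking
```

For comparison, ONNX Runtime exposes this directly: `SessionOptions.intra_op_num_threads` controls the intra-op thread pool size per session.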