shmem error when fuzzing solidity #17

Open
CityOfLight77 opened this issue Dec 9, 2021 · 1 comment

@CityOfLight77

I got an error when I ran nautilus with solidity as my target. It said thread 'fuzzer_2' panicked at 'shm_id "No space left on device"'.

thread 'fuzzer_1' panicked at 'shm_id "No space left on device"', forksrv/src/lib.rs:205:17
stack backtrace:
thread 'fuzzer_2' panicked at 'shm_id "No space left on device"', forksrv/src/lib.rs:205:17
   0:     0x5555556e6bfc - std::backtrace_rs::backtrace::libunwind::trace::h2ab374bc2a3b7023
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/../../backtrace/src/backtrace/libunwind.rs:90:5
   1:     0x5555556e6bfc - std::backtrace_rs::backtrace::trace_unsynchronized::h128cb5178b04dc46
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
   2:     0x5555556e6bfc - std::sys_common::backtrace::_print_fmt::h5344f9eefca2041f
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/sys_common/backtrace.rs:67:5
   3:     0x5555556e6bfc - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h213003bc5c7acf04
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/sys_common/backtrace.rs:46:22
   4:     0x555555708e5c - core::fmt::write::h78bf85fc3e93663f
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/core/src/fmt/mod.rs:1126:17
   5:     0x5555556e4165 - std::io::Write::write_fmt::he619515c888f21a5
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/io/mod.rs:1667:15
   6:     0x5555556e87c0 - std::sys_common::backtrace::_print::hf706674f77848203
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/sys_common/backtrace.rs:49:5
   7:     0x5555556e87c0 - std::sys_common::backtrace::print::hf0b6c7a88804ec56
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/sys_common/backtrace.rs:36:9
   8:     0x5555556e87c0 - std::panicking::default_hook::{{closure}}::h2dde766cd83a333a
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/panicking.rs:210:50
   9:     0x5555556e8377 - std::panicking::default_hook::h501e3b2e134eb149
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/panicking.rs:227:9
  10:     0x5555556e8e74 - std::panicking::rust_panic_with_hook::hc09e869c4cf00885
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/panicking.rs:624:17
  11:     0x5555556e8950 - std::panicking::begin_panic_handler::{{closure}}::hc2c6d70142458fc8
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/panicking.rs:521:13
  12:     0x5555556e70a4 - std::sys_common::backtrace::__rust_end_short_backtrace::had58f7c459a1cd6e
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/sys_common/backtrace.rs:141:18
  13:     0x5555556e88b9 - rust_begin_unwind
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/panicking.rs:517:5
  14:     0x55555559db3b - std::panicking::begin_panic_fmt::h72e1f9ab89522086
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/panicking.rs:460:5
  15:     0x5555556c4322 - forksrv::ForkServer::new::h32c3a55efbd4dcac
  16:     0x5555555c827e - fuzzer::fuzzer::Fuzzer::new::hc13533b0a5d67be8
  17:     0x5555555a44f5 - fuzzer::fuzzing_thread::h8950c25ab0c74bcf
  18:     0x5555555ae752 - std::sys_common::backtrace::__rust_begin_short_backtrace::h256253a3ae85ff90
  19:     0x5555555c1497 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h85a6a903e174724b
  20:     0x5555556ec653 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h59eef3b9c8a82350
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/alloc/src/boxed.rs:1636:9
  21:     0x5555556ec653 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hb5bbe017c347469c
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/alloc/src/boxed.rs:1636:9
  22:     0x5555556ec653 - std::sys::unix::thread::Thread::new::thread_start::h62931528f61e35f5
                               at /rustc/9dd4ce80fb01d1ff5cb5002f08b7b3847b59e664/library/std/src/sys/unix/thread.rs:106:17
  23:     0x7ffff7a1f609 - start_thread
                               at /build/glibc-eX1tMB/glibc-2.31/nptl/pthread_create.c:477:8
  24:     0x7ffff77ef293 - clone
  25:                0x0 - <unknown>
stack backtrace:
Segmentation fault

But I still have plenty of free space on disk and in /dev/shm.

cityoflight77@vps:~/nautilus$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       150G   35G  110G  24% /
cityoflight77@vps:~/nautilus$ df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.7G     0  7.7G   0% /dev/shm
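
For what it's worth, the "No space left on device" message comes from shmget(2): ENOSPC there means the kernel ran out of SysV shared memory identifiers (the shmmni limit) or hit the shmall ceiling, not that any filesystem is full, which is why df looks fine. A minimal sketch (an assumption on my side, using the libc crate on Linux, not nautilus code) that reproduces the same error by exhausting the segment table:

// Reproduction sketch: shmget() starts failing with ENOSPC
// ("No space left on device") once kernel.shmmni segments exist
// system-wide, regardless of free disk or /dev/shm space.
use libc::{shmctl, shmget, IPC_CREAT, IPC_PRIVATE, IPC_RMID};

fn main() {
    let mut ids = Vec::new();
    loop {
        let id = unsafe { shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0o600) };
        if id == -1 {
            // With a default shmmni of 4096 (see the ipcs output below)
            // this prints ENOSPC after roughly 4096 allocations.
            eprintln!("shmget failed: {}", std::io::Error::last_os_error());
            break;
        }
        ids.push(id);
    }
    // Remove the segments again; SysV segments outlive the process otherwise.
    for id in ids {
        unsafe { shmctl(id, IPC_RMID, std::ptr::null_mut()) };
    }
}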

These are the default shared memory limits:

cityoflight77@vps:~/nautilus$ ipcs -l

------ Messages Limits --------
max queues system wide = 32000
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509465599
max total shared memory (kbytes) = 18014398509481980
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 32000
max semaphores per array = 32000
max semaphores system wide = 1024000000
max ops per semop call = 500
semaphore max value = 32767

When I change shmmni to a value bigger than the default and reboot, nautilus still returns the same error.
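
One thing worth ruling out (an assumption, not something the backtrace proves): if earlier fuzzer runs crashed, their SysV segments stay allocated until explicitly removed, so the shmmni slots can fill up across runs even though the limits above look generous. ipcs -m lists them; the sketch below (again assuming the libc crate on Linux) does the equivalent of running ipcrm -m for every segment no process is attached to:

// Cleanup sketch: walk /proc/sysvipc/shm and remove segments with
// nattch == 0 (nothing attached), i.e. likely leftovers from crashed
// runs. Be careful: some unrelated software keeps detached segments
// around on purpose.
use std::fs;

fn main() {
    let table = fs::read_to_string("/proc/sysvipc/shm").expect("Linux only");
    for line in table.lines().skip(1) {
        let cols: Vec<&str> = line.split_whitespace().collect();
        if cols.len() < 7 {
            continue;
        }
        let shmid: i32 = cols[1].parse().unwrap(); // column 2: shmid
        let nattch: u64 = cols[6].parse().unwrap(); // column 7: nattch
        if nattch == 0 {
            // Same effect as `ipcrm -m <shmid>`.
            unsafe { libc::shmctl(shmid, libc::IPC_RMID, std::ptr::null_mut()) };
            println!("removed orphaned segment {}", shmid);
        }
    }
}

Checking ipcs -m before and after a crashed run would confirm or rule this out.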

The same error also happens when I try to fuzz mruby as root, but it doesn't happen when I fuzz mruby as a sudo user.

Any idea @andreafioraldi ?

@domenukk

I assume this was on macOS? I'm getting the same error there.
