From 4bff4d94e5771bbf321115018dd86205537a4647 Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Sun, 16 Aug 2020 00:51:02 +0100
Subject: [PATCH 001/294] Documentation: Add an initial glossary

---
 doc/source/glossary.rst | 179 ++++++++++++++++++++++++++++++++++++++++
 doc/source/index.rst    |   1 +
 2 files changed, 180 insertions(+)
 create mode 100644 doc/source/glossary.rst

diff --git a/doc/source/glossary.rst b/doc/source/glossary.rst
new file mode 100644
index 0000000000..674b674690
--- /dev/null
+++ b/doc/source/glossary.rst
@@ -0,0 +1,179 @@
+Glossary
+========
+There are many terms used when talking about memory forensics; this list aims to define the common ones and
+provide some commonality on how to refer to particular ideas within the field.
+
+A
+-
+.. _Address:
+    An address is another name for an :ref:`offset`, specifically an offset within memory. Offsets can be
+    either relative or absolute, whereas addresses are almost always absolute.
+
+.. _Address Space:
+
+Address Space
+    This is the name in volatility 2 for what's referred to as a :ref:`Translation Layer`. It
+    encompasses all values that can be addresses, usually in reference to addresses in memory.
+
+.. _Alignment:
+
+Alignment
+    The value that all data :ref:`offsets` within a :ref:`type` will typically be a multiple of.
+
+.. _Array:
+
+Array
+    This represents a list of items, which can be accessed by an index, which is zero-based (meaning the first
+    element has index 0). Items in arrays are almost always the same size (it is not a generic list, as in python)
+    even if they are :ref:`pointers` to different sized objects.
+
+D
+-
+.. _Data Layer:
+
+Data Layer
+    A group of bytes, where each byte can be addressed by a specific offset. Data layers are usually contiguous
+    chunks of data.
+
+.. _Dereference:
+
+Dereference
+    The act of taking the value of a pointer and using it as an offset to another object, as a reference.
+
+.. _Domain:
+
+Domain
+    This is the grouping of input values for a mapping or mathematical function.
+
+M
+-
+.. _Map:
+
+Map, mapping
+    A mapping is a relationship between two values, where one value (the :ref:`Domain` maps to the :ref:`Range` value).
+    Mappings can be seen as a mathematical function, and therefore volatility 3 attempts to use mathematical functional
+    notation where possible.
+
+.. _Member:
+
+Member
+    The name of subcomponents of a type, similar to attributes of objects in common programming parlance. These
+    are usually recorded as :ref:`offset` and :ref:`type` pairs within a :ref:`structure`.
+
+O
+-
+.. _Object:
+
+Object
+    This has a specific meaning within computer programming (as in Object Oriented Programming), but within the world
+    of Volatility it is used to refer to a type that has been associated with a chunk of data. See all :ref:`Type`.
+
+.. _Offset:
+
+Offset
+    A numeric value that identifies a distance within a group of bytes, to uniquely identify a single byte, or the
+    start of a run of bytes. This is often relative (offset from another object/item) but can be absolute (offset from
+    the start of a region of data).
+
+P
+-
+.. _Packed:
+
+Packed
+    Structures are often :ref:`aligned` meaning that the various members (subtypes) are always aligned at
+    particular values (usually multiples of 2, 4 or 8). Thus if a particular value is an odd number of bytes, the
+    next chunk of data containing useful information would start at an even offset, and a single byte of
+    :ref:`padding` would be used to ensure appropriate :ref:`alignment`. In packed structures, no
+    padding is used, and offsets may be at odd offsets.
+
+.. _Padding:
+
+Padding
+    Data that (usually) contains no useful information. The typical value used for padding is 0, so should a string
+    :ref:`object` that has been allocated a particular number of bytes, contain a string of fewer bytes, the remaing bytes
+    will be padded with null (0) bytes.
+
+.. _Page:
+
+Page
+    A specific chunk of contiguous data. It is an organizational quantity of memory (usually 0x1000, or 4096 bytes).
+    Pages, like pages in a book, make up the whole, but allow for specific chunks to be allocated and used as necessary.
+    Operating systems use pages as a means to have granular control over chunks of memory. This allows them to be
+    reordered and reused as necessary (without having to move large chunks of data around), and allows them to have
+    access controls placed upon them, limiting actions such as reading and writing.
+
+.. _Page Table:
+
+Page Table
+    A table that points to a series of :ref:`pages`. Each page table is typically the size of a single page,
+    and page tables can point to pages that are in fact other page tables. Using tables that point to tables, it's
+    possible to use them as a way to map a particular address within a (potentially larger, but sparsely populated)
+    virtual space to a concrete (and usually contiguous) physical space, through the process of :ref:`mapping`.
+
+.. _Pointer:
+
+Pointer
+    A value within memory that points to a different area of memory. This allows objects to contain references to
+    other objects without containing all the data of the other object. Following a pointer is known as :ref:`dereferencing`
+    a pointer. Pointers are usually as large as the size of the
+
+R
+-
+.. _Range:
+
+Range
+    This is the grouping the output values for a mapping or mathematical function.
+
+S
+-
+.. _Struct:
+
+Struct, Structure
+    A means of containing multiple different :ref:`type` associated together. A struct typically contains
+    other :ref:`type`, one directly after another (unless :ref:`packing` is involved). In this way
+    the :ref:`members` of a type can be accessed by finding the data at the relative :ref:`offset` to
+    the start of the structure.
+
+.. _Symbol:
+
+Symbol
+    This is used in many different contexts, as short term for many things. A symbol is a construct that usually
+    encompasses a specific :ref:`offset` and a :ref:`type`, representing a specific instance of a type within the memory of a
+    compiled and running program.
+
+T
+-
+.. _Template:
+
+Template
+    Within volatility 3, the term template applies to a :ref:`type` that has not yet been instantiated or linked
+    to any data or a specific location within memory. Once a type has been tied to a particular chunk of data, it is
+    called an :ref:`object`.
+
+.. _Translation Layer:
+
+Translation Layer
+    This is a specific type of :ref:`data layer`, a non-contiguous group of bytes that can be referenced by
+    a unique :ref:`offset` within the layer. In particular, a translation layer translates (or :ref:`maps`)
+    requests made of it to a location within a lower layer. This can be either linear (a one-to-one mapping between bytes)
+    or non-linear (a group of bytes :ref:`maps` to a larger or smaller group of bytes).
+
+.. _Type:
+
+Type
+    This is a structure definition of multiple elements that expresses how data is laid out. Basic types define how
+    the data should be interpreted in terms of a run of bits (or more commonly a collection of 8 bits at a time,
+    called bytes). More complex types can be made up of other types combined together at specific locations, known
+    as :ref:`structs`, or repeated, known as :ref:`array`. They can even define types at the same
+    location depending on the data itself, known as :ref:`Unions`. Once a type has been linked to a specific
+    chunk of data, the result is referred to as an :ref:`object`.
+
+U
+-
+.. _Union:
+
+Union
+    A union is a type that can hold multiple different subtypes, which specifically overlap. A union is a means
+    for holding two different types within the same size of data, meaning that not all types within the union will hold
+    valid data at the same time; rather, depending on what the union is holding, a subset of the type will point to
+    accurate data (assuming no corruption).
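[Editorial aside, not part of the patch: the glossary's alignment, packing and padding concepts can be illustrated with Python's standard `struct` module. The format strings below are generic examples chosen for this illustration, not anything defined by Volatility.]

```python
import struct

# Natively aligned ("@"): a 1-byte value followed by a 2-byte short gains
# one byte of padding so the short starts at a multiple of its alignment.
print(struct.calcsize("@bh"))  # 4 (1 data byte + 1 padding byte + 2 data bytes)

# Packed ("="): no alignment padding, so members follow each other directly.
print(struct.calcsize("=bh"))  # 3

# Padding of a fixed-size string field: a 16-byte allocation holding a
# shorter string is filled out with null (0) bytes.
print(struct.pack("16s", b"hello"))  # b'hello' followed by eleven 0x00 bytes
```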
diff --git a/doc/source/index.rst b/doc/source/index.rst index c61e6e8475..2ba85da5a4 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -17,6 +17,7 @@ Here are some guidelines for using Volatility 3 effectively: complex-plugin using-as-a-library symbol-tables + glossary Python Packages =============== From 2fa4bf245d8bb58db24511c08f7233d177fc99be Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Fri, 23 Oct 2020 12:45:54 -0500 Subject: [PATCH 002/294] implement a smear proof mac lsmod --- volatility/framework/plugins/mac/lsmod.py | 21 +++++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/volatility/framework/plugins/mac/lsmod.py b/volatility/framework/plugins/mac/lsmod.py index 27439efdcb..66dc82e5a8 100644 --- a/volatility/framework/plugins/mac/lsmod.py +++ b/volatility/framework/plugins/mac/lsmod.py @@ -38,12 +38,29 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: """ kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel_layer = context.layers[layer_name] + kmod_ptr = kernel.object_from_symbol(symbol_name = "kmod") - # TODO - use smear-proof list walking API after dev release kmod = kmod_ptr.dereference().cast("kmod_info") - while kmod != 0: + + yield kmod + + kmod = kmod.next + + seen = set() + + while kmod != 0 and \ + kmod not in seen and \ + len(seen) < 1024: + + if not kernel_layer.is_valid(kmod.dereference().vol.offset, kmod.dereference().vol.size): + break + + seen.add(kmod) + yield kmod + kmod = kmod.next def _generator(self): From 84741c30fda75b43d14499d10b634b6da896f67c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 7 Dec 2020 23:13:22 +0000 Subject: [PATCH 003/294] Documentation: Make improvements based on feedback from NiklasBeierl --- doc/source/glossary.rst | 45 ++++++++++++++++++++++++----------------- 1 file changed, 27 insertions(+), 18 deletions(-) diff --git a/doc/source/glossary.rst b/doc/source/glossary.rst index 674b674690..50bc69490b 100644 
--- a/doc/source/glossary.rst
+++ b/doc/source/glossary.rst
@@ -50,9 +50,13 @@ M
 .. _Map:
 
 Map, mapping
-    A mapping is a relationship between two values, where one value (the :ref:`Domain` maps to the :ref:`Range` value).
-    Mappings can be seen as a mathematical function, and therefore volatility 3 attempts to use mathematical functional
-    notation where possible.
+    A mapping is a relationship between two sets (where elements of the :ref:`Domain` map to elements
+    of the :ref:`Range`). Mappings can be seen as a mathematical function, and therefore volatility 3
+    attempts to use mathematical functional notation where possible. Within volatility a mapping is most often
+    used to refer to the function for translating addresses from a higher layer (domain) to a lower layer (range).
+    For further information, please see
+    `Function (mathematics) on Wikipedia <https://en.wikipedia.org/wiki/Function_(mathematics)>`_
+
 
 .. _Member:
 
@@ -66,13 +70,14 @@ O
 
 Object
     This has a specific meaning within computer programming (as in Object Oriented Programming), but within the world
-    of Volatility it is used to refer to a type that has been associated with a chunk of data. See all :ref:`Type`.
+    of Volatility it is used to refer to a type that has been associated with a chunk of data, or a specific instance
+    of a type. See also :ref:`Type`.
 
 .. _Offset:
 
 Offset
     A numeric value that identifies a distance within a group of bytes, to uniquely identify a single byte, or the
-    start of a run of bytes. This is often relative (offset from another object/item) but can be absolute (offset from
+    start of a run of bytes. An offset is often relative (offset from another object/item) but can be absolute (offset from
     the start of a region of data).
 
 P
 -
@@ -81,17 +86,18 @@ P
 .. _Packed:
 
 Packed
     Structures are often :ref:`aligned` meaning that the various members (subtypes) are always aligned at
-    particular values (usually multiples of 2, 4 or 8). Thus if a particular value is an odd number of bytes, the
-    next chunk of data containing useful information would start at an even offset, and a single byte of
-    :ref:`padding` would be used to ensure appropriate :ref:`alignment`. In packed structures, no
-    padding is used, and offsets may be at odd offsets.
+    particular values (usually multiples of 2, 4 or 8). Thus if the data used to represent a particular value has
+    an odd number of bytes, not a multiple of the chosen number, there will be :ref:`padding` between it and
+    the next member. In packed structs, no padding is used and the offset of the next member depends on the length of
+    the previous one.
 
 .. _Padding:
 
 Padding
-    Data that (usually) contains no useful information. The typical value used for padding is 0, so should a string
-    :ref:`object` that has been allocated a particular number of bytes, contain a string of fewer bytes, the remaing bytes
-    will be padded with null (0) bytes.
+    Data that (usually) contains no useful information. The typical value used for padding is 0 (sometimes called
+    a null byte). As an example, if a string :ref:`object` that has been allocated a particular number of
+    bytes actually contains fewer bytes, the rest of the data (to make up the original length) will be padded with
+    null (0) bytes.
 
 .. _Page:
 
@@ -115,14 +121,15 @@ Page Table
 Pointer
     A value within memory that points to a different area of memory. This allows objects to contain references to
     other objects without containing all the data of the other object. Following a pointer is known as :ref:`dereferencing`
-    a pointer. Pointers are usually as large as the size of the
+    a pointer. Pointers are usually the same length as the maximum address of the address space, since they
+    should be able to point to any address within the space.
 
 R
 -
 .. _Range:
 
 Range
-    This is the grouping the output values for a mapping or mathematical function.
+    This is the set of the possible output values for a mapping or mathematical function.
 
 S
 -
@@ -130,16 +137,18 @@ S
 .. _Struct:
 
 Struct, Structure
     A means of containing multiple different :ref:`type` associated together. A struct typically contains
-    other :ref:`type`, one directly after another (unless :ref:`packing` is involved). In this way
+    other :ref:`type`, usually :ref:`aligned` (unless :ref:`packing` is involved). In this way
     the :ref:`members` of a type can be accessed by finding the data at the relative :ref:`offset` to
     the start of the structure.
 
 .. _Symbol:
 
 Symbol
-    This is used in many different contexts, as short term for many things. A symbol is a construct that usually
-    encompasses a specific :ref:`offset` and a :ref:`type`, representing a specific instance of a type within the memory of a
-    compiled and running program.
+    This is used in many different contexts, as a short term for many things. Within Volatility, a symbol is a
+    construct that usually encompasses a specific :ref:`type` at a specific :ref:`offset`,
+    representing a particular instance of that type within the memory of a compiled and running program. An example
+    would be the location in memory of a list of active TCP endpoints maintained by the networking stack
+    within an operating system.
T - From 972d3d7076fa3fca5fad2ad54e66a9107cabe75e Mon Sep 17 00:00:00 2001 From: Jan Date: Tue, 8 Dec 2020 20:04:03 +0100 Subject: [PATCH 004/294] netlist for 16299 and 17134 --- .../framework/plugins/windows/netlist.py | 363 ++++++++++++++++++ .../windows/netscan-win10-16299-x64.json | 8 +- .../windows/netscan-win10-17134-x64.json | 205 +++++++++- .../windows/netscan-win10-17763-x64.json | 205 +++++++++- .../windows/netscan-win10-18363-x64.json | 190 +++++++++ 5 files changed, 965 insertions(+), 6 deletions(-) create mode 100644 volatility/framework/plugins/windows/netlist.py diff --git a/volatility/framework/plugins/windows/netlist.py b/volatility/framework/plugins/windows/netlist.py new file mode 100644 index 0000000000..15f2cf6516 --- /dev/null +++ b/volatility/framework/plugins/windows/netlist.py @@ -0,0 +1,363 @@ +# This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 +# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 +# + +import logging +import datetime +from typing import Iterable, List, Optional, Callable + +from volatility.framework import constants, exceptions, interfaces, renderers, symbols, layers +from volatility.framework.configuration import requirements +from volatility.framework.renderers import format_hints +from volatility.framework.symbols import intermed +from volatility.framework.symbols.windows.extensions import network +from volatility.framework.symbols.windows.pdbutil import PDBUtility +from volatility.plugins import timeliner +from volatility.plugins.windows import info, poolscanner, netscan, modules + +vollog = logging.getLogger(__name__) + + +class NetList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): + """Scans for network objects present in a particular windows memory image.""" + + _required_framework_version = (2, 0, 0) + _version = (1, 0, 0) + + @classmethod + def get_requirements(cls): + return [ + 
requirements.TranslationLayerRequirement(name = 'primary', + description = 'Memory layer for the kernel', + architectures = ["Intel32", "Intel64"]), + requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.VersionRequirement(name = 'netscan', component = netscan.NetScan, version = (1, 0, 0)), + requirements.BooleanRequirement( + name = 'include-corrupt', + description = + "Radically eases result validation. This will show partially overwritten data. WARNING: the results are likely to include garbage and/or corrupt data. Be cautious!", + default = False, + optional = True), + ] + + @classmethod + def _decode_pointer(self, value): + """Windows encodes pointers to objects and decodes them on the fly + before using them. + + This function mimics the decoding routine so we can generate the + proper pointer values as well. + """ + + value = value & 0xFFFFFFFFFFFFFFFC + + return value + + @classmethod + def read_pointer(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + offset: int, + length: int) -> int: + + return int.from_bytes(context.layers[layer_name].read(offset, length), "little") + + @classmethod + def parse_bitmap(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + bitmap_offset: int, + bitmap_size_in_byte: int) -> list: + ret = [] + for idx in range(bitmap_size_in_byte-1): + current_byte = context.layers[layer_name].read(bitmap_offset + idx, 1)[0] + current_offs = idx*8 + if current_byte&1 != 0: + ret.append(0 + current_offs) + if current_byte&2 != 0: + ret.append(1 + current_offs) + if current_byte&4 != 0: + ret.append(2 + current_offs) + if current_byte&8 != 0: + ret.append(3 + current_offs) + if current_byte&16 != 0: + ret.append(4 + current_offs) + if current_byte&32 != 0: + ret.append(5 + current_offs) + if current_byte&64 != 0: + ret.append(6 + current_offs) + if current_byte&128 != 0: + ret.append(7 + current_offs) + return ret + + @classmethod + def 
enumerate_structures_by_port(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + net_symbol_table: str, + port: int, + ppobj, + proto="tcp"): + if proto == "tcp": + obj_name = net_symbol_table + constants.BANG + "_TCP_LISTENER" + ptr_offset = context.symbol_space.get_type(obj_name).relative_child_offset("Next") + elif proto == "udp": + obj_name = net_symbol_table + constants.BANG + "_UDP_ENDPOINT" + ptr_offset = context.symbol_space.get_type(obj_name).relative_child_offset("Next") + else: + yield + list_index = port >> 8 + truncated_port = port & 0xff + inpa = ppobj.PortAssignments[list_index].dereference() + assignment = inpa.InPaBigPoolBase.dereference().Assignments[truncated_port] + if not assignment: + yield + netw_inside = cls._decode_pointer(assignment.Entry) + if netw_inside: + curr_obj = context.object(obj_name, layer_name = layer_name, offset = netw_inside - ptr_offset) + vollog.debug("Found object @ 0x{:2x}, yielding...".format(curr_obj.vol.offset)) + yield curr_obj + + vollog.debug("PrevPointer val: {}".format(curr_obj.Next)) + while curr_obj.Next: + curr_obj = context.object(obj_name, layer_name = layer_name, offset = cls._decode_pointer(curr_obj.Next) - ptr_offset) + yield curr_obj + vollog.debug("Checking if PrevPointer is valid (val: {})".format(curr_obj.Next)) + + @classmethod + def get_tcpip_module(cls, context, layer_name, nt_symbols): + for mod in modules.Modules.list_modules(context, layer_name, nt_symbols): + # ~ print(mod.BaseDllName.get_string()) + if mod.BaseDllName.get_string() == "tcpip.sys": + vollog.debug("Found tcpip.sys offset @ 0x{:x}".format(mod.DllBase)) + return mod + + @classmethod + def get_tcpip_guid(cls, context, layer_name, tcpip_module): + return list( + PDBUtility.pdbname_scan( + context, + layer_name, + context.layers[layer_name].page_size, + [b"tcpip.pdb"], + start=tcpip_module.DllBase, + end=tcpip_module.DllBase + tcpip_module.SizeOfImage + ) + ) + + @classmethod + def parse_hashtable(cls, context, 
layer_name, ht_offset, ht_length, pointer_length) -> list: + # ~ ret = [] + for idx in range(ht_length): + current_qword = (0xffff000000000000 | cls.read_pointer(context, layer_name, ht_offset + idx * 16, pointer_length)) + if current_qword == (0xffff000000000000 | (ht_offset + idx * 16)): + continue + yield current_qword + + @classmethod + def parse_partitions(cls, context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset, pointer_length): + # ~ endpoints = [] + obj_name = net_symbol_table + constants.BANG + "_TCP_ENDPOINT" + pto = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionTable").address + pco = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionCount").address + part_table = cls.read_pointer(context, layer_name, tcpip_module_offset + pto, pointer_length) + part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + pco, 1), "little") + partitions = [] + for part_idx in range(part_count): + current_partition = context.object(net_symbol_table + "!_PARTITION", layer_name = layer_name, offset = part_table + 128 * part_idx) + partitions.append(current_partition) + for partition in partitions: + if partition.Endpoints.NumEntries > 0: + for endpoint_entry in cls.parse_hashtable(context, layer_name, partition.Endpoints.Directory, 128, pointer_length): + # ~ yield endpoint + entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("HashTableEntry") + endpoint = context.object(obj_name, layer_name = layer_name, offset = endpoint_entry - entry_offset) + yield endpoint + # ~ endpoints.extend(parse_hashtable(partition.Endpoints.Directory, 128)) + # ~ return endpoints + + @classmethod + def create_tcpip_symbol_table(cls, + context: interfaces.context.ContextInterface, + config_path: str, + layer_name: str, + tcpip_module): + + guids = cls.get_tcpip_guid(context, layer_name, tcpip_module) + + if not guids: + print("no pdb found!") + raise + + guid = 
guids[0]
+
+        vollog.debug("Found {}: {}-{}".format(guid["pdb_name"], guid["GUID"], guid["age"]))
+
+        return PDBUtility.load_windows_symbol_table(context,
+                                                    guid["GUID"],
+                                                    guid["age"],
+                                                    guid["pdb_name"],
+                                                    "volatility.framework.symbols.intermed.IntermediateSymbolTable",
+                                                    config_path="tcpip")
+
+    @classmethod
+    def list_sockets(cls,
+                     context: interfaces.context.ContextInterface,
+                     layer_name: str,
+                     nt_symbols,
+                     net_symbol_table: str,
+                     tcpip_module,
+                     tcpip_symbol_table: str) -> \
+            Iterable[interfaces.objects.ObjectInterface]:
+        """Lists all UDP endpoints, TCP endpoints and TCP listeners from the
+        tcpip module's partition table and port pools.
+
+        Args:
+            context: The context to retrieve required elements (layers, symbol tables) from
+            layer_name: The name of the layer on which to operate
+            nt_symbols: The name of the table containing the kernel symbols
+            net_symbol_table: The name of the table containing the tcpip symbols
+
+        Returns:
+            The list of network objects from the `layer_name` layer's `PartitionTable` and `PortPools`
+        """
+
+        tcpip_vo = tcpip_module.DllBase
+
+        pointer_length = context.symbol_space.get_type(net_symbol_table + constants.BANG + "pointer").size
+
+        # tcpe
+
+        for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_vo, pointer_length):
+            yield endpoint
+
+        # listeners
+
+        ucs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpCompartmentSet").address
+        tcs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpCompartmentSet").address
+
+        ucs_offset = cls.read_pointer(context, layer_name, tcpip_vo + ucs, pointer_length)
+        tcs_offset = cls.read_pointer(context, layer_name, tcpip_vo + tcs, pointer_length)
+
+        ucs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = ucs_offset)
+        upp_addr = ucs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool
+
+        upp_obj = context.object(net_symbol_table +
constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = upp_addr) + udpa_ports = cls.parse_bitmap(context, layer_name, upp_obj.PortBitMap.Buffer, upp_obj.PortBitMap.SizeOfBitMap // 8) + + tcs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = tcs_offset) + tpp_addr = tcs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool + + tpp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = tpp_addr) + tcpl_ports = cls.parse_bitmap(context, layer_name, tpp_obj.PortBitMap.Buffer, tpp_obj.PortBitMap.SizeOfBitMap // 8) + + for port in tcpl_ports: + if port == 0: + continue + for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, tpp_obj, "tcp"): + yield obj + + for port in udpa_ports: + if port == 0: + continue + for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, upp_obj, "udp"): + yield obj + + def _generator(self, show_corrupt_results: Optional[bool] = None): + """ Generates the network objects for use in rendering. """ + + netscan_symbol_table = netscan.NetScan.create_netscan_symbol_table(self.context, self.config["primary"], + self.config["nt_symbols"], self.config_path) + + tcpip_module = self.get_tcpip_module(self.context, self.config["primary"], self.config["nt_symbols"]) + + tcpip_symbol_table = self.create_tcpip_symbol_table(self.context, self.config_path, self.config["primary"], tcpip_module) + + for netw_obj in self.list_sockets(self.context, + self.config['primary'], + self.config['nt_symbols'], + netscan_symbol_table, + tcpip_module, + tcpip_symbol_table): + + vollog.debug("Found netw obj @ 0x{:2x} of assumed type {}".format(netw_obj.vol.offset, type(netw_obj))) + # objects passed pool header constraints. check for additional constraints if strict flag is set. 
+ if not show_corrupt_results and not netw_obj.is_valid(): + continue + + if isinstance(netw_obj, network._UDP_ENDPOINT): + vollog.debug("Found UDP_ENDPOINT @ 0x{:2x}".format(netw_obj.vol.offset)) + + # For UdpA, the state is always blank and the remote end is asterisks + for ver, laddr, _ in netw_obj.dual_stack_sockets(): + yield (0, (format_hints.Hex(netw_obj.vol.offset), "UDP" + ver, laddr, netw_obj.Port, "*", 0, "", + netw_obj.get_owner_pid() or renderers.UnreadableValue(), netw_obj.get_owner_procname() + or renderers.UnreadableValue(), netw_obj.get_create_time() + or renderers.UnreadableValue())) + + elif isinstance(netw_obj, network._TCP_ENDPOINT): + vollog.debug("Found _TCP_ENDPOINT @ 0x{:2x}".format(netw_obj.vol.offset)) + if netw_obj.get_address_family() == network.AF_INET: + proto = "TCPv4" + elif netw_obj.get_address_family() == network.AF_INET6: + proto = "TCPv6" + else: + proto = "TCPv?" + + try: + state = netw_obj.State.description + except ValueError: + state = renderers.UnreadableValue() + + yield (0, (format_hints.Hex(netw_obj.vol.offset), proto, netw_obj.get_local_address() + or renderers.UnreadableValue(), netw_obj.LocalPort, netw_obj.get_remote_address() + or renderers.UnreadableValue(), netw_obj.RemotePort, state, netw_obj.get_owner_pid() + or renderers.UnreadableValue(), netw_obj.get_owner_procname() or renderers.UnreadableValue(), + netw_obj.get_create_time() or renderers.UnreadableValue())) + + # check for isinstance of tcp listener last, because all other objects are inherited from here + elif isinstance(netw_obj, network._TCP_LISTENER): + vollog.debug("Found _TCP_LISTENER @ 0x{:2x}".format(netw_obj.vol.offset)) + + # For TcpL, the state is always listening and the remote port is zero + for ver, laddr, raddr in netw_obj.dual_stack_sockets(): + yield (0, (format_hints.Hex(netw_obj.vol.offset), "TCP" + ver, laddr, netw_obj.Port, raddr, 0, + "LISTENING", netw_obj.get_owner_pid() or renderers.UnreadableValue(), + netw_obj.get_owner_procname() 
or renderers.UnreadableValue(), netw_obj.get_create_time() + or renderers.UnreadableValue())) + else: + # this should not happen therefore we log it. + vollog.debug("Found network object unsure of its type: {} of type {}".format(netw_obj, type(netw_obj))) + + def generate_timeline(self): + for row in self._generator(): + _depth, row_data = row + # Skip network connections without creation time + if not isinstance(row_data[9], datetime.datetime): + continue + row_data = [ + "N/A" if isinstance(i, renderers.UnreadableValue) or isinstance(i, renderers.UnparsableValue) else i + for i in row_data + ] + description = "Network connection: Process {} {} Local Address {}:{} " \ + "Remote Address {}:{} State {} Protocol {} ".format(row_data[7], row_data[8], + row_data[2], row_data[3], + row_data[4], row_data[5], + row_data[6], row_data[1]) + yield (description, timeliner.TimeLinerType.CREATED, row_data[9]) + + def run(self): + show_corrupt_results = self.config.get('include-corrupt', None) + + return renderers.TreeGrid([ + ("Offset", format_hints.Hex), + ("Proto", str), + ("LocalAddr", str), + ("LocalPort", int), + ("ForeignAddr", str), + ("ForeignPort", int), + ("State", str), + ("PID", int), + ("Owner", str), + ("Created", datetime.datetime), + ], self._generator(show_corrupt_results = show_corrupt_results)) diff --git a/volatility/framework/symbols/windows/netscan-win10-16299-x64.json b/volatility/framework/symbols/windows/netscan-win10-16299-x64.json index 074b854c50..73bd6054a8 100644 --- a/volatility/framework/symbols/windows/netscan-win10-16299-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-16299-x64.json @@ -105,7 +105,7 @@ } }, - "MaskedPrevObj": { + "Next": { "offset": 112, "type":{ "kind": "pointer", @@ -175,7 +175,7 @@ "name": "unsigned be short" } }, - "MaskedPrevObj": { + "Next": { "offset": 120, "type":{ "kind": "pointer", @@ -259,7 +259,7 @@ "name": "TCPStateEnum" } }, - "MaskedPrevObj": { + "Next": { "offset": 112, "type":{ "kind": 
"pointer", @@ -459,7 +459,7 @@ }, "_PORT_ASSIGNMENT_ENTRY": { "fields": { - "MaskedObjectPtr": { + "Entry": { "offset": 8, "type": { "kind": "pointer", diff --git a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json index c0890845ab..5b9c344114 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json @@ -49,7 +49,20 @@ "endian": "little" } }, - "symbols": {}, + "symbols": { + "TcpCompartmentSet": { + "address": 2010312 + }, + "UdpCompartmentSet": { + "address": 2006416 + }, + "PartitionCount": { + "address": 2008196 + }, + "PartitionTable": { + "address": 2008200 + } + }, "user_types": { "_UDP_ENDPOINT": { "fields": { @@ -92,6 +105,16 @@ } }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, "Port": { "offset": 120, "type": { @@ -145,6 +168,16 @@ } }, + "Next": { + "offset": 120, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } + }, "Port": { "offset": 114, "type": { @@ -209,12 +242,32 @@ "name": "unsigned be short" } }, + "HashTableEntry": { + "offset": 40, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LIST_ENTRY" + } + } + }, "State": { "offset": 108, "type": { "kind": "enum", "name": "TCPStateEnum" } + }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_ENDPOINT" + } + } } }, "kind": "struct", @@ -355,6 +408,156 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 
32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 6144 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 24, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 232, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 216, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json 
b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json index 1eb3c754ce..072185d685 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json @@ -49,7 +49,20 @@ "endian": "little" } }, - "symbols": {}, + "symbols": { + "TcpCompartmentSet": { + "address": 2010312 + }, + "UdpCompartmentSet": { + "address": 2006416 + }, + "PartitionCount": { + "address": 2008196 + }, + "PartitionTable": { + "address": 2008200 + } + }, "user_types": { "_UDP_ENDPOINT": { "fields": { @@ -92,6 +105,16 @@ } }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, "Port": { "offset": 120, "type": { @@ -145,6 +168,16 @@ } }, + "Next": { + "offset": 120, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } + }, "Port": { "offset": 114, "type": { @@ -209,6 +242,26 @@ "name": "unsigned be short" } }, + "HashTableEntry": { + "offset": 40, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LIST_ENTRY" + } + } + }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_ENDPOINT" + } + } + }, "State": { "offset": 108, "type": { @@ -355,6 +408,156 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": 
"struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 6144 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 24, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 232, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 216, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json index d3537ef48c..a2f6f41304 100644 --- a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json @@ -92,6 +92,16 @@ } }, + "Next": { + "offset": 112, + "type":{ + 
"kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, "Port": { "offset": 128, "type": { @@ -145,6 +155,16 @@ } }, + "Next": { + "offset": 120, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } + }, "Port": { "offset": 114, "type": { @@ -209,6 +229,26 @@ "name": "unsigned be short" } }, + "HashTableEntry": { + "offset": 40, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LIST_ENTRY" + } + } + }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_ENDPOINT" + } + } + }, "State": { "offset": 108, "type": { @@ -355,6 +395,156 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 6144 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 24, + "type": { 
+ "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 232, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 216, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { From bcc79d4cb9e1c1a182a3178852925a96223a336c Mon Sep 17 00:00:00 2001 From: Jan Date: Sat, 19 Dec 2020 17:13:53 +0100 Subject: [PATCH 005/294] better documentation and support for more x64 Windows versions --- .../framework/plugins/windows/netlist.py | 311 +++++++-- .../framework/plugins/windows/netscan.py | 2 +- .../windows/netscan-win10-16299-x64.json | 2 +- .../windows/netscan-win10-17134-x64.json | 2 +- .../windows/netscan-win10-17763-x64.json | 2 +- .../windows/netscan-win10-18362-x64.json | 591 ++++++++++++++++++ .../windows/netscan-win10-18363-x64.json | 6 +- .../symbols/windows/netscan-win7-x64.json | 172 ++++- 8 files changed, 1015 insertions(+), 73 deletions(-) create mode 100644 volatility/framework/symbols/windows/netscan-win10-18362-x64.json diff --git a/volatility/framework/plugins/windows/netlist.py b/volatility/framework/plugins/windows/netlist.py index 15f2cf6516..415cc3ee53 100644 --- a/volatility/framework/plugins/windows/netlist.py +++ b/volatility/framework/plugins/windows/netlist.py @@ -42,7 +42,9 @@ 
def get_requirements(cls): @classmethod def _decode_pointer(self, value): - """Windows encodes pointers to objects and decodes them on the fly + """Copied from `windows.handles`. + + Windows encodes pointers to objects and decodes them on the fly before using them. This function mimics the decoding routine so we can generate the @@ -59,6 +61,17 @@ def read_pointer(cls, layer_name: str, offset: int, length: int) -> int: + """Reads a pointer at a given offset and returns the address it points to. + + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + offset: Offset of pointer + length: Pointer length + + Returns: + The value the pointer points to. + """ return int.from_bytes(context.layers[layer_name].read(offset, length), "little") @@ -68,6 +81,17 @@ def parse_bitmap(cls, layer_name: str, bitmap_offset: int, bitmap_size_in_byte: int) -> list: + """Parses a given bitmap and looks for each occurence of a 1. + + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + bitmap_offset: Start address of bitmap + bitmap_size_in_byte: Bitmap size in Byte, not in bit. + + Returns: + The list of indices at which a 1 was found. + """ ret = [] for idx in range(bitmap_size_in_byte-1): current_byte = context.layers[layer_name].read(bitmap_offset + idx, 1)[0] @@ -96,8 +120,22 @@ def enumerate_structures_by_port(cls, layer_name: str, net_symbol_table: str, port: int, - ppobj, - proto="tcp"): + port_pool: interfaces.objects.ObjectInterface, + proto="tcp") -> \ + Iterable[interfaces.objects.ObjectInterface]: + """Lists all UDP Endpoints and TCP Listeners by parsing UdpPortPool and TcpPortPool. 
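[editor's aside, outside the patch] Two pieces of arithmetic used by the helpers above — walking the allocation bitmap for 1-bits (`parse_bitmap`, later run over each pool's `PortBitMap`), and splitting a port number into a pool-list index plus a slot (`enumerate_structures_by_port`) — can be sketched standalone. The function names below are illustrative, not Volatility API; the LSB-first bit order is an assumption matching the diff's `(1 << i)` test:

```python
def set_bits(bitmap: bytes):
    """Yield the index of every 1-bit, LSB-first within each byte,
    mirroring parse_bitmap's walk over a port pool's PortBitMap."""
    for byte_idx, byte in enumerate(bitmap):
        for bit in range(8):
            if byte & (1 << bit):
                yield byte_idx * 8 + bit

def split_port(port: int):
    """A port indexes the pool as (high byte -> assignment list,
    low byte -> slot in that list), as in enumerate_structures_by_port."""
    return port >> 8, port & 0xFF

list(set_bits(bytes([0b00000101])))  # ports 0 and 2 marked in use
split_port(445)                      # (1, 189): list 1, slot 189
```

Each yielded index is then handed to `enumerate_structures_by_port` as the `port` argument to locate the backing `_TCP_LISTENER` or `_UDP_ENDPOINT`.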
+ + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + net_symbol_table: The name of the table containing the tcpip types + port: Current port as integer to lookup the associated object. + port_pool: Port pool object + proto: Either "tcp" or "udp" to decide which types to use. + + Returns: + The list of network objects from this image's TCP and UDP `PortPools` + """ if proto == "tcp": obj_name = net_symbol_table + constants.BANG + "_TCP_LISTENER" ptr_offset = context.symbol_space.get_type(obj_name).relative_child_offset("Next") @@ -105,89 +143,180 @@ def enumerate_structures_by_port(cls, obj_name = net_symbol_table + constants.BANG + "_UDP_ENDPOINT" ptr_offset = context.symbol_space.get_type(obj_name).relative_child_offset("Next") else: + # invalid argument. yield + + # the given port serves as a shifted index into the port pool lists list_index = port >> 8 truncated_port = port & 0xff - inpa = ppobj.PortAssignments[list_index].dereference() + + # first, grab the given port's PortAssignment (`_PORT_ASSIGNMENT`) + inpa = port_pool.PortAssignments[list_index].dereference() + + # then parse the port assignment list (`_PORT_ASSIGNMENT_LIST`) and grab the correct entry assignment = inpa.InPaBigPoolBase.dereference().Assignments[truncated_port] + if not assignment: yield + + # the value within assignment.Entry is a) masked and b) points inside of the network object + # first decode the pointer netw_inside = cls._decode_pointer(assignment.Entry) + if netw_inside: + # if the value is valid, calculate the actual object address by subtracting the offset curr_obj = context.object(obj_name, layer_name = layer_name, offset = netw_inside - ptr_offset) - vollog.debug("Found object @ 0x{:2x}, yielding...".format(curr_obj.vol.offset)) + vollog.debug("Found {} object @ 0x{:2x}, yielding...".format(proto, curr_obj.vol.offset)) yield curr_obj - vollog.debug("PrevPointer val: 
{}".format(curr_obj.Next)) + # if the same port is used on different interfaces multiple objects are created + # those can be found by following the pointer within the object's `Next` field until it is empty while curr_obj.Next: curr_obj = context.object(obj_name, layer_name = layer_name, offset = cls._decode_pointer(curr_obj.Next) - ptr_offset) yield curr_obj - vollog.debug("Checking if PrevPointer is valid (val: {})".format(curr_obj.Next)) @classmethod - def get_tcpip_module(cls, context, layer_name, nt_symbols): + def get_tcpip_module(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + nt_symbols: str) -> interfaces.objects.ObjectInterface: + """Uses `windows.modules` to find tcpip.sys in memory. + + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + nt_symbols: The name of the table containing the kernel symbols + + Returns: + The constructed tcpip.sys module object. + """ for mod in modules.Modules.list_modules(context, layer_name, nt_symbols): - # ~ print(mod.BaseDllName.get_string()) if mod.BaseDllName.get_string() == "tcpip.sys": - vollog.debug("Found tcpip.sys offset @ 0x{:x}".format(mod.DllBase)) + vollog.debug("Found tcpip.sys image base @ 0x{:x}".format(mod.DllBase)) return mod @classmethod - def get_tcpip_guid(cls, context, layer_name, tcpip_module): - return list( - PDBUtility.pdbname_scan( - context, - layer_name, - context.layers[layer_name].page_size, - [b"tcpip.pdb"], - start=tcpip_module.DllBase, - end=tcpip_module.DllBase + tcpip_module.SizeOfImage - ) - ) + def parse_hashtable(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + ht_offset: int, + ht_length: int, + alignment: int, + pointer_length: int) -> list: + """Parses a hashtable quick and dirty. 
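[editor's aside, outside the patch] The masking trick in `parse_hashtable` — OR-ing `0xffff000000000000` onto both the stored qword and the bucket's own offset before comparing — rests on the assumption (flagged by the diff's own x86 TODO) that all involved addresses are canonical 64-bit kernel pointers, so a bucket whose entry points back at itself is an empty `_LIST_ENTRY` head. A minimal sketch under that assumption, with illustrative names:

```python
KERNEL_HIGH = 0xFFFF000000000000

def canonical(addr: int) -> int:
    # Restore the high 16 bits that canonical 64-bit kernel
    # addresses always carry (they may be stored truncated).
    return addr | KERNEL_HIGH

def occupied_entries(entries):
    """entries: iterable of (bucket_offset, stored_qword) pairs.
    A bucket whose qword equals its own offset is an empty list
    head and is skipped; everything else is a live entry."""
    for offset, qword in entries:
        if canonical(qword) == canonical(offset):
            continue  # empty bucket: LIST_ENTRY points to itself
        yield canonical(qword)
```

The live entries are the hash-table links from which `parse_partitions` then rebases to the enclosing `_TCP_ENDPOINT`.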
- @classmethod - def parse_hashtable(cls, context, layer_name, ht_offset, ht_length, pointer_length) -> list: - # ~ ret = [] - for idx in range(ht_length): - current_qword = (0xffff000000000000 | cls.read_pointer(context, layer_name, ht_offset + idx * 16, pointer_length)) - if current_qword == (0xffff000000000000 | (ht_offset + idx * 16)): + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + ht_offset: Beginning of the hash table + ht_length: Length of the hash table + pointer_length: Length of this architecture's pointers + + Returns: + The hash table entries which are _not_ empty + """ + for index in range(ht_length): + # mask pointer so we do not get confused with abbreviated virtual offsets + # this currently only works for 64-bit. + # TODO: add x86 support. + current_qword = (0xffff000000000000 | cls.read_pointer(context, layer_name, ht_offset + index * alignment, pointer_length)) + if current_qword == (0xffff000000000000 | (ht_offset + index * alignment)): continue yield current_qword @classmethod - def parse_partitions(cls, context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset, pointer_length): - # ~ endpoints = [] + def parse_partitions(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + net_symbol_table: str, + tcpip_symbol_table: str, + tcpip_module_offset: int, + pointer_length: int) -> Iterable[interfaces.objects.ObjectInterface]: + """Parses tcpip.sys's PartitionTable containing established TCP connections. + The amount of Partition depends on the value of the symbol `PartitionCount` and correlates with + the maximum processor count (refer to Art of Memory Forensics, chapter 11). 
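[editor's aside, outside the patch] The docstring above says `PartitionCount` fixes how many `_PARTITION` structures sit back-to-back behind the `PartitionTable` pointer, so locating each one is a simple stride walk. A sketch (the struct size is taken from the symbol table at runtime; 128 bytes is the size declared in the bundled win10 symbol files):

```python
def partition_offsets(table_base: int, struct_size: int, count: int):
    """The i-th _PARTITION starts struct_size bytes after the
    (i-1)-th, so the offsets form an arithmetic sequence."""
    return [table_base + i * struct_size for i in range(count)]

# e.g. with the 128-byte _PARTITION and a 4-processor image:
partition_offsets(0xFFFFA00000200000, 128, 4)
```

`parse_partitions` instantiates one `_PARTITION` object at each of these offsets and then walks each partition's `Endpoints` hash table.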
+ + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + nt_symbols: The name of the table containing the kernel symbols + net_symbol_table: The name of the table containing the tcpip types + tcpip_module: The created vol Windows module object of the given memory image + tcpip_symbol_table: The name of the table containing the tcpip driver symbols + + Returns: + The list of TCP endpoint objects from the `layer_name` layer's `PartitionTable` + """ + if symbols.symbol_table_is_64bit(context, net_symbol_table): + alignment = 0x10 + else: + alignment = 8 + obj_name = net_symbol_table + constants.BANG + "_TCP_ENDPOINT" - pto = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionTable").address - pco = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionCount").address - part_table = cls.read_pointer(context, layer_name, tcpip_module_offset + pto, pointer_length) - part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + pco, 1), "little") + + # part_table_symbol is the offset within tcpip.sys which contains the address of the partition table itself + part_table_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionTable").address + part_count_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionCount").address + + # part_table is the actual partition table offset + part_table = cls.read_pointer(context, layer_name, tcpip_module_offset + part_table_symbol, pointer_length) + part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + part_count_symbol, 1), "little") + + part_table_size = context.symbol_space.get_type(net_symbol_table + constants.BANG + "_PARTITION").size + partitions = [] + + # create partition objects for each partition and append to list for part_idx in range(part_count): - current_partition 
= context.object(net_symbol_table + "!_PARTITION", layer_name = layer_name, offset = part_table + 128 * part_idx) + current_partition = context.object(net_symbol_table + constants.BANG + "_PARTITION", + layer_name = layer_name, + offset = part_table + part_table_size * part_idx) + partitions.append(current_partition) + for partition in partitions: if partition.Endpoints.NumEntries > 0: - for endpoint_entry in cls.parse_hashtable(context, layer_name, partition.Endpoints.Directory, 128, pointer_length): - # ~ yield endpoint - entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("HashTableEntry") + for endpoint_entry in cls.parse_hashtable(context, + layer_name, + partition.Endpoints.Directory, + part_table_size, + alignment, + pointer_length): + + entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("ListEntry") endpoint = context.object(obj_name, layer_name = layer_name, offset = endpoint_entry - entry_offset) yield endpoint - # ~ endpoints.extend(parse_hashtable(partition.Endpoints.Directory, 128)) - # ~ return endpoints @classmethod def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, config_path: str, layer_name: str, - tcpip_module): + tcpip_module: interfaces.objects.ObjectInterface) -> str: + """Creates - guids = cls.get_tcpip_guid(context, layer_name, tcpip_module) + Args: + context: The context to retrieve required elements (layers, symbol tables) from + config_path: The config path where to find symbol files + layer_name: The name of the layer on which to operate + tcpip_module: The created vol Windows module object of the given memory image + + Returns: + The name of the constructed and loaded symbol table + """ + guids = list( + PDBUtility.pdbname_scan( + context, + layer_name, + context.layers[layer_name].page_size, + [b"tcpip.pdb"], + start=tcpip_module.DllBase, + end=tcpip_module.DllBase + tcpip_module.SizeOfImage + ) + ) if not guids: - print("no pdb found!") - raise + raise 
exceptions.VolatilityException("Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format(tcpip_module.DllBase)) guid = guids[0] @@ -200,57 +329,105 @@ def create_tcpip_symbol_table(cls, "volatility.framework.symbols.intermed.IntermediateSymbolTable", config_path="tcpip") + @classmethod + def find_port_pools(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + net_symbol_table: str, + tcpip_symbol_table: str, + tcpip_module_offset: int, + pointer_length: int) -> (int, int): + """Finds the given image's port pools. Older Windows versions (presumably < Win10 build 14251) use driver + symbols called `UdpPortPool` and `TcpPortPool` which point towards the pools. + Newer Windows versions use `UdpCompartmentSet` and `TcpCompartmentSet`, which we first have to translate into + the port pool address. See also: http://redplait.blogspot.com/2016/06/tcpip-port-pools-in-fresh-windows-10.html + + Args: + context: The context to retrieve required elements (layers, symbol tables) from + layer_name: The name of the layer on which to operate + net_symbol_table: The name of the table containing the tcpip types + tcpip_module_offset: This memory dump's tcpip.sys image offset + tcpip_symbol_table: The name of the table containing the tcpip driver symbols + pointer_length: Length of this architecture's pointers + + Returns: + The tuple containing the address of the UDP and TCP port pool respectively. 
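[editor's aside, outside the patch] The version dispatch described above — direct `UdpPortPool`/`TcpPortPool` symbols on older builds versus the `CompartmentSet` → `InetCompartment` → `ProtocolCompartment` → `PortPool` chain on newer ones — can be sketched with the two resolution strategies injected as callables. Both function names below are placeholders, not framework API:

```python
def resolve_port_pools(symbol_names, read_symbol, walk_compartment_set):
    """Return (udp_pool_addr, tcp_pool_addr) using whichever symbol
    generation this tcpip.sys exposes.

    read_symbol(name) -> pool address        (older builds: the symbol
        is a pointer straight to the _INET_PORT_POOL)
    walk_compartment_set(name) -> pool address  (newer builds: follow
        InetCompartment -> ProtocolCompartment -> PortPool)
    """
    if "UdpPortPool" in symbol_names:
        return read_symbol("UdpPortPool"), read_symbol("TcpPortPool")
    if "UdpCompartmentSet" in symbol_names:
        return (walk_compartment_set("UdpCompartmentSet"),
                walk_compartment_set("TcpCompartmentSet"))
    raise LookupError("no known tcpip.sys port-pool symbol present")
```

This mirrors the branch order in `find_port_pools`: the direct symbols are tried first, the compartment-set walk second, and anything else raises.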
+ """ + + if "UdpPortPool" in context.symbol_space[tcpip_symbol_table].symbols: + # older Windows versions + upp_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpPortPool").address + upp_addr = cls.read_pointer(context, layer_name, tcpip_module_offset + upp_symbol, pointer_length) + + tpp_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpPortPool").address + tpp_addr = cls.read_pointer(context, layer_name, tcpip_module_offset + tpp_symbol, pointer_length) + + elif "UdpCompartmentSet" in context.symbol_space[tcpip_symbol_table].symbols: + # newer Windows versions since 10.14xxx + ucs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpCompartmentSet").address + tcs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpCompartmentSet").address + + ucs_offset = cls.read_pointer(context, layer_name, tcpip_module_offset + ucs, pointer_length) + tcs_offset = cls.read_pointer(context, layer_name, tcpip_module_offset + tcs, pointer_length) + + ucs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = ucs_offset) + upp_addr = ucs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool + + tcs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = tcs_offset) + tpp_addr = tcs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool + + else: + # this branch should not be reached. 
+ raise exceptions.SymbolError("UdpPortPool", tcpip_symbol_table, + "Neither UdpPortPool nor UdpCompartmentSet found in {} table".format(tcpip_symbol_table)) + + vollog.debug("Found PortPools @ 0x{:x} (TCP) && 0x{:x} (UDP)".format(upp_addr, tpp_addr)) + return upp_addr, tpp_addr + @classmethod def list_sockets(cls, context: interfaces.context.ContextInterface, layer_name: str, - nt_symbols, + nt_symbols: str, net_symbol_table: str, - tcpip_module, + tcpip_module: interfaces.objects.ObjectInterface, tcpip_symbol_table: str) -> \ Iterable[interfaces.objects.ObjectInterface]: - """Lists all the processes in the primary layer that are in the pid - config option. + """Lists all UDP Endpoints, TCP Listeners and TCP Endpoints in the primary layer that + are in tcpip.sys's UdpPortPool, TcpPortPool and TCP Endpoint partition table, respectively. Args: context: The context to retrieve required elements (layers, symbol tables) from layer_name: The name of the layer on which to operate nt_symbols: The name of the table containing the kernel symbols - net_symbol_table: The name of the table containing the tcpip symbols + net_symbol_table: The name of the table containing the tcpip types + tcpip_module: The created vol Windows module object of the given memory image + tcpip_symbol_table: The name of the table containing the tcpip driver symbols Returns: The list of network objects from the `layer_name` layer's `PartitionTable` and `PortPools` """ - tcpip_vo = tcpip_module.DllBase + tcpip_module_offset = tcpip_module.DllBase pointer_length = context.symbol_space.get_type(net_symbol_table + constants.BANG + "pointer").size - # tcpe - - for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_vo, pointer_length): + # first, TCP endpoints by parsing the partition table + for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset, pointer_length): yield endpoint - # listeners - - ucs 
= context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpCompartmentSet").address - tcs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpCompartmentSet").address - - ucs_offset = cls.read_pointer(context, layer_name, tcpip_vo + ucs, pointer_length) - tcs_offset = cls.read_pointer(context, layer_name, tcpip_vo + tcs, pointer_length) - - ucs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = ucs_offset) - upp_addr = ucs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool + # then, towards the UDP and TCP port pools + # first, find their addresses + upp_addr, tpp_addr = cls.find_port_pools(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset, pointer_length) + # create port pool objects at the detected address and parse the port bitmap upp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = upp_addr) udpa_ports = cls.parse_bitmap(context, layer_name, upp_obj.PortBitMap.Buffer, upp_obj.PortBitMap.SizeOfBitMap // 8) - tcs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = tcs_offset) - tpp_addr = tcs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool - tpp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = tpp_addr) tcpl_ports = cls.parse_bitmap(context, layer_name, tpp_obj.PortBitMap.Buffer, tpp_obj.PortBitMap.SizeOfBitMap // 8) + # given the list of TCP / UDP ports, calculate the address of their respective objects and yield them. for port in tcpl_ports: if port == 0: continue @@ -266,6 +443,10 @@ def list_sockets(cls, def _generator(self, show_corrupt_results: Optional[bool] = None): """ Generates the network objects for use in rendering. """ + # can this be checked via a PluginRequirement? 
+ if not symbols.symbol_table_is_64bit(self.context, self.config['nt_symbols']): + raise exceptions.LayerException("This plugin currently only supports 64-bit memory images.") + netscan_symbol_table = netscan.NetScan.create_netscan_symbol_table(self.context, self.config["primary"], self.config["nt_symbols"], self.config_path) diff --git a/volatility/framework/plugins/windows/netscan.py b/volatility/framework/plugins/windows/netscan.py index 4aa3e5d5fc..d6a08977f8 100644 --- a/volatility/framework/plugins/windows/netscan.py +++ b/volatility/framework/plugins/windows/netscan.py @@ -179,7 +179,7 @@ def determine_tcpip_version(cls, context: interfaces.context.ContextInterface, l (10, 0, 16299): "netscan-win10-16299-x64", (10, 0, 17134): "netscan-win10-17134-x64", (10, 0, 17763): "netscan-win10-17763-x64", - (10, 0, 18362): "netscan-win10-17763-x64", + (10, 0, 18362): "netscan-win10-18362-x64", (10, 0, 18363): "netscan-win10-18363-x64", (10, 0, 19041): "netscan-win10-19041-x64" } diff --git a/volatility/framework/symbols/windows/netscan-win10-16299-x64.json b/volatility/framework/symbols/windows/netscan-win10-16299-x64.json index 73bd6054a8..243a9c42d4 100644 --- a/volatility/framework/symbols/windows/netscan-win10-16299-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-16299-x64.json @@ -218,7 +218,7 @@ } } }, - "HashTableEntry": { + "ListEntry": { "offset": 40, "type":{ "kind": "pointer", diff --git a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json index 5b9c344114..d48f64e984 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json @@ -242,7 +242,7 @@ "name": "unsigned be short" } }, - "HashTableEntry": { + "ListEntry": { "offset": 40, "type":{ "kind": "pointer", diff --git a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json 
b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json index 072185d685..f4f237ce38 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json @@ -242,7 +242,7 @@ "name": "unsigned be short" } }, - "HashTableEntry": { + "ListEntry": { "offset": 40, "type":{ "kind": "pointer", diff --git a/volatility/framework/symbols/windows/netscan-win10-18362-x64.json b/volatility/framework/symbols/windows/netscan-win10-18362-x64.json new file mode 100644 index 0000000000..216d8bf42a --- /dev/null +++ b/volatility/framework/symbols/windows/netscan-win10-18362-x64.json @@ -0,0 +1,591 @@ +{ + "base_types": { + "unsigned long": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + }, + "unsigned char": { + "kind": "char", + "size": 1, + "signed": false, + "endian": "little" + }, + "pointer": { + "kind": "int", + "size": 8, + "signed": false, + "endian": "little" + }, + "unsigned int": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + }, + "unsigned short": { + "kind": "int", + "size": 2, + "signed": false, + "endian": "little" + }, + "unsigned be short": { + "kind": "int", + "size": 2, + "signed": false, + "endian": "big" + }, + "long long": { + "endian": "little", + "kind": "int", + "signed": true, + "size": 8 + }, + "long": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + } + }, + "symbols": { + "TcpCompartmentSet": { + "address": 2010312 + }, + "UdpCompartmentSet": { + "address": 2006416 + }, + "PartitionCount": { + "address": 2008196 + }, + "PartitionTable": { + "address": 2008200 + } + }, + "user_types": { + "_UDP_ENDPOINT": { + "fields": { + "Owner": { + "offset": 40, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + + } + }, + "CreateTime": { + "offset": 88, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "LocalAddr": { + 
"offset": 128, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS_WIN10_UDP" + } + } + }, + "InetAF": { + "offset": 32, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + + } + }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, + "Port": { + "offset": 120, + "type": { + "kind": "base", + "name": "unsigned be short" + } + } + }, + "kind": "struct", + "size": 132 + }, + "_TCP_LISTENER": { + "fields": { + "Owner": { + "offset": 48, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + + } + }, + "CreateTime": { + "offset": 64, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "LocalAddr": { + "offset": 96, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + + } + }, + "InetAF": { + "offset": 40, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + + } + }, + "Next": { + "offset": 120, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } + }, + "Port": { + "offset": 114, + "type": { + "kind": "base", + "name": "unsigned be short" + } + } + }, + "kind": "struct", + "size": 116 + }, + "_TCP_ENDPOINT": { + "fields": { + "Owner": { + "offset": 656, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + } + }, + "CreateTime": { + "offset": 672, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "AddrInfo": { + "offset": 24, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_ADDRINFO" + } + } + }, + "InetAF": { + "offset": 16, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + } + }, + "LocalPort": { + "offset": 112, + "type": { + "kind": "base", + "name": "unsigned 
be short" + } + }, + "RemotePort": { + "offset": 114, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "ListEntry": { + "offset": 40, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LIST_ENTRY" + } + } + }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_ENDPOINT" + } + } + }, + "State": { + "offset": 108, + "type": { + "kind": "enum", + "name": "TCPStateEnum" + } + } + }, + "kind": "struct", + "size": 632 + }, + "_LOCAL_ADDRESS": { + "fields": { + "pData": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + } + }, + "kind": "struct", + "size": 20 + }, + "_LOCAL_ADDRESS_WIN10_UDP": { + "fields": { + "pData": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + }, + "kind": "struct", + "size": 4 + }, + "_ADDRINFO": { + "fields": { + "Local": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + } + }, + "Remote": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + }, + "kind": "struct", + "size": 4 + }, + "_IN_ADDR": { + "fields": { + "addr4": { + "offset": 0, + "type": { + "count": 4, + "subtype": { + "kind": "base", + "name": "unsigned char" + }, + "kind": "array" + } + }, + "addr6": { + "offset": 0, + "type": { + "count": 16, + "subtype": { + "kind": "base", + "name": "unsigned char" + }, + "kind": "array" + } + } + }, + "kind": "struct", + "size": 6 + }, + "_INETAF": { + "fields": { + "AddressFamily": { + "offset": 24, + "type": { + "kind": "base", + "name": "unsigned short" + } + } + }, + "kind": "struct", + "size": 26 + }, + "_LARGE_INTEGER": { + "fields": { + "HighPart": { + "offset": 4, + "type": { + "kind": "base", + "name": "long" + } + }, + 
"LowPart": { + "offset": 0, + "type": { + "kind": "base", + "name": "unsigned long" + } + }, + "QuadPart": { + "offset": 0, + "type": { + "kind": "base", + "name": "long long" + } + }, + "u": { + "offset": 0, + "type": { + "kind": "struct", + "name": "__unnamed_2" + } + } + }, + "kind": "union", + "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 6144 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 24, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 224, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 208, + "type": { + 
"kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 128 + } + }, + "enums": { + "TCPStateEnum": { + "base": "long", + "constants": { + "CLOSED": 0, + "LISTENING": 1, + "SYN_SENT": 2, + "SYN_RCVD": 3, + "ESTABLISHED": 4, + "FIN_WAIT1": 5, + "FIN_WAIT2": 6, + "CLOSE_WAIT": 7, + "CLOSING": 8, + "LAST_ACK": 9, + "TIME_WAIT": 12, + "DELETE_TCB": 13 + }, + "size": 4 + } + }, + "metadata": { + "producer": { + "version": "0.0.1", + "name": "japhlange-by-hand", + "datetime": "2020-05-29T19:28:34" + }, + "format": "6.0.0" + } +} diff --git a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json index a2f6f41304..9266069b4e 100644 --- a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json @@ -229,7 +229,7 @@ "name": "unsigned be short" } }, - "HashTableEntry": { + "ListEntry": { "offset": 40, "type":{ "kind": "pointer", @@ -496,7 +496,7 @@ "_INET_PORT_POOL": { "fields": { "PortAssignments": { - "offset": 232, + "offset": 224, "type": { "count": 256, "kind": "array", @@ -510,7 +510,7 @@ } }, "PortBitMap": { - "offset": 216, + "offset": 208, "type": { "kind": "struct", "name": "nt_symbols!_RTL_BITMAP" diff --git a/volatility/framework/symbols/windows/netscan-win7-x64.json b/volatility/framework/symbols/windows/netscan-win7-x64.json index 88aae05356..27cef122b0 100644 --- a/volatility/framework/symbols/windows/netscan-win7-x64.json +++ 
b/volatility/framework/symbols/windows/netscan-win7-x64.json @@ -230,6 +230,16 @@ } }, + "Next": { + "offset": 136, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, "Port": { "offset": 128, "type": { @@ -239,7 +249,7 @@ } }, "kind": "struct", - "size": 130 + "size": 138 }, "_TCP_LISTENER": { "fields": { @@ -289,6 +299,16 @@ "kind": "base", "name": "unsigned be short" } + }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } } }, "kind": "struct", @@ -568,6 +588,156 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 4096 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + 
}, + "kind": "struct", + "size": 40 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 160, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 144, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { From 2e70a9cb45edd340d9d547986c4b3aed5469bc62 Mon Sep 17 00:00:00 2001 From: Jan Date: Tue, 22 Dec 2020 17:11:55 +0100 Subject: [PATCH 006/294] addresses issues raised in pr #399 --- .../framework/plugins/windows/netlist.py | 96 +++++++++++-------- .../windows/netscan-win10-16299-x64.json | 26 +++-- .../windows/netscan-win10-17134-x64.json | 9 +- .../windows/netscan-win10-17763-x64.json | 9 +- .../windows/netscan-win10-18362-x64.json | 9 +- .../windows/netscan-win10-18363-x64.json | 9 +- .../symbols/windows/netscan-win7-x64.json | 86 ++++------------- 7 files changed, 108 insertions(+), 136 deletions(-) diff --git a/volatility/framework/plugins/windows/netlist.py b/volatility/framework/plugins/windows/netlist.py index 415cc3ee53..7a071676c3 100644 --- a/volatility/framework/plugins/windows/netlist.py +++ b/volatility/framework/plugins/windows/netlist.py @@ -1,4 +1,4 @@ -# This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 +# This file is Copyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at 
https://www.volatilityfoundation.org/license/vsl-v1.0 # @@ -10,10 +10,10 @@ from volatility.framework.configuration import requirements from volatility.framework.renderers import format_hints from volatility.framework.symbols import intermed +from volatility.framework.symbols.windows import pdbutil from volatility.framework.symbols.windows.extensions import network -from volatility.framework.symbols.windows.pdbutil import PDBUtility from volatility.plugins import timeliner -from volatility.plugins.windows import info, poolscanner, netscan, modules +from volatility.plugins.windows import netscan, modules vollog = logging.getLogger(__name__) @@ -32,6 +32,7 @@ def get_requirements(cls): architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.VersionRequirement(name = 'netscan', component = netscan.NetScan, version = (1, 0, 0)), + requirements.VersionRequirement(name = 'modules', component = modules.Modules, version = (1, 0, 0)), requirements.BooleanRequirement( name = 'include-corrupt', description = @@ -151,10 +152,10 @@ def enumerate_structures_by_port(cls, truncated_port = port & 0xff # first, grab the given port's PortAssignment (`_PORT_ASSIGNMENT`) - inpa = port_pool.PortAssignments[list_index].dereference() + inpa = port_pool.PortAssignments[list_index] # then parse the port assignment list (`_PORT_ASSIGNMENT_LIST`) and grab the correct entry - assignment = inpa.InPaBigPoolBase.dereference().Assignments[truncated_port] + assignment = inpa.InPaBigPoolBase.Assignments[truncated_port] if not assignment: yield @@ -258,42 +259,39 @@ def parse_partitions(cls, part_table_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionTable").address part_count_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionCount").address - # part_table is the actual partition table offset - part_table = 
cls.read_pointer(context, layer_name, tcpip_module_offset + part_table_symbol, pointer_length) + part_table_addr = context.object(net_symbol_table + constants.BANG + "pointer", + layer_name = layer_name, + offset = tcpip_module_offset + part_table_symbol) + # part_table is the actual partition table offset and consists of a dynamic number of _PARTITION objects + part_table = context.object(net_symbol_table + constants.BANG + "_PARTITION_TABLE", + layer_name = layer_name, + offset = part_table_addr) part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + part_count_symbol, 1), "little") + part_table.Partitions.count = part_count + partition_size = context.symbol_space.get_type(net_symbol_table + constants.BANG + "_PARTITION").size - part_table_size = context.symbol_space.get_type(net_symbol_table + constants.BANG + "_PARTITION").size - - partitions = [] - - # create partition objects for each partition and append to list - for part_idx in range(part_count): - current_partition = context.object(net_symbol_table + constants.BANG + "_PARTITION", - layer_name = layer_name, - offset = part_table + part_table_size * part_idx) - - partitions.append(current_partition) - - for partition in partitions: + entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("ListEntry") + for partition in part_table.Partitions: if partition.Endpoints.NumEntries > 0: for endpoint_entry in cls.parse_hashtable(context, layer_name, partition.Endpoints.Directory, - part_table_size, + partition_size, alignment, pointer_length): - entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("ListEntry") endpoint = context.object(obj_name, layer_name = layer_name, offset = endpoint_entry - entry_offset) yield endpoint - @classmethod def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, config_path: str, layer_name: str, tcpip_module: interfaces.objects.ObjectInterface) -> str: - """Creates + """Creates 
symbol table for the current image's tcpip.sys driver. + + Searches the memory section of the loaded tcpip.sys module for its PDB GUID + and loads the associated symbol table into the symbol space. Args: context: The context to retrieve required elements (layers, symbol tables) from @@ -305,7 +303,7 @@ def create_tcpip_symbol_table(cls, The name of the constructed and loaded symbol table """ guids = list( - PDBUtility.pdbname_scan( + pdbutil.PDBUtility.pdbname_scan( context, layer_name, context.layers[layer_name].page_size, @@ -322,7 +320,7 @@ def create_tcpip_symbol_table(cls, vollog.debug("Found {}: {}-{}".format(guid["pdb_name"], guid["GUID"], guid["age"])) - return PDBUtility.load_windows_symbol_table(context, + return pdbutil.PDBUtility.load_windows_symbol_table(context, guid["GUID"], guid["age"], guid["pdb_name"], @@ -357,24 +355,32 @@ def find_port_pools(cls, if "UdpPortPool" in context.symbol_space[tcpip_symbol_table].symbols: # older Windows versions upp_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpPortPool").address - upp_addr = cls.read_pointer(context, layer_name, tcpip_module_offset + upp_symbol, pointer_length) + upp_addr = context.object(net_symbol_table + constants.BANG + "pointer", + layer_name = layer_name, + offset = tcpip_module_offset + upp_symbol) tpp_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpPortPool").address - tpp_addr = cls.read_pointer(context, layer_name, tcpip_module_offset + tpp_symbol, pointer_length) + tpp_addr = context.object(net_symbol_table + constants.BANG + "pointer", + layer_name = layer_name, + offset = tcpip_module_offset + tpp_symbol) elif "UdpCompartmentSet" in context.symbol_space[tcpip_symbol_table].symbols: # newer Windows versions since 10.14xxx ucs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpCompartmentSet").address tcs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + 
"TcpCompartmentSet").address - ucs_offset = cls.read_pointer(context, layer_name, tcpip_module_offset + ucs, pointer_length) - tcs_offset = cls.read_pointer(context, layer_name, tcpip_module_offset + tcs, pointer_length) + ucs_offset = context.object(net_symbol_table + constants.BANG + "pointer", + layer_name = layer_name, + offset = tcpip_module_offset + ucs) + tcs_offset = context.object(net_symbol_table + constants.BANG + "pointer", + layer_name = layer_name, + offset = tcpip_module_offset + tcs) ucs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = ucs_offset) - upp_addr = ucs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool + upp_addr = ucs_obj.InetCompartment.ProtocolCompartment.PortPool tcs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = tcs_offset) - tpp_addr = tcs_obj.InetCompartment.dereference().ProtocolCompartment.dereference().PortPool + tpp_addr = tcs_obj.InetCompartment.ProtocolCompartment.PortPool else: # this branch should not be reached. @@ -429,13 +435,15 @@ def list_sockets(cls, # given the list of TCP / UDP ports, calculate the address of their respective objects and yield them. 
for port in tcpl_ports: - if port == 0: + # port value can be 0, which we can skip + if not port: continue for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, tpp_obj, "tcp"): yield obj for port in udpa_ports: - if port == 0: + # same as above, skip port 0 + if not port: continue for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, upp_obj, "udp"): yield obj @@ -483,6 +491,8 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): elif netw_obj.get_address_family() == network.AF_INET6: proto = "TCPv6" else: + vollog.debug("TCP Endpoint @ 0x{:2x} has unknown address family 0x{:x}".format(netw_obj.vol.offset, + netw_obj.get_address_family())) proto = "TCPv?" try: @@ -513,19 +523,25 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): def generate_timeline(self): for row in self._generator(): _depth, row_data = row + row_dict = {} + row_dict["Offset"], row_dict["Proto"], row_dict["LocalAddr"], row_dict["LocalPort"], \ + row_dict["ForeignAddr"], row_dict["ForeignPort"], row_dict["State"], \ + row_dict["PID"], row_dict["Owner"], row_dict["Created"] = row_data + # Skip network connections without creation time - if not isinstance(row_data[9], datetime.datetime): + if not isinstance(row_dict["Created"], datetime.datetime): continue row_data = [ "N/A" if isinstance(i, renderers.UnreadableValue) or isinstance(i, renderers.UnparsableValue) else i for i in row_data ] description = "Network connection: Process {} {} Local Address {}:{} " \ - "Remote Address {}:{} State {} Protocol {} ".format(row_data[7], row_data[8], - row_data[2], row_data[3], - row_data[4], row_data[5], - row_data[6], row_data[1]) - yield (description, timeliner.TimeLinerType.CREATED, row_data[9]) + "Remote Address {}:{} State {} Protocol {} ".format(row_dict["PID"], row_dict["Owner"], + row_dict["LocalAddr"], row_dict["LocalPort"], + row_dict["ForeignAddr"], row_dict["ForeignPort"], + row_dict["State"], 
row_dict["Proto"]) + + yield (description, timeliner.TimeLinerType.CREATED, row_dict["Created"]) def run(self): show_corrupt_results = self.config.get('include-corrupt', None) diff --git a/volatility/framework/symbols/windows/netscan-win10-16299-x64.json b/volatility/framework/symbols/windows/netscan-win10-16299-x64.json index 243a9c42d4..9991438f8f 100644 --- a/volatility/framework/symbols/windows/netscan-win10-16299-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-16299-x64.json @@ -220,12 +220,9 @@ }, "ListEntry": { "offset": 40, - "type":{ - "kind": "pointer", - "subtype": { - "kind": "struct", - "name": "_LIST_ENTRY" - } + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" } }, "InetAF": { @@ -558,6 +555,23 @@ }, "kind": "struct", "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json index d48f64e984..088c67b21d 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json @@ -244,12 +244,9 @@ }, "ListEntry": { "offset": 40, - "type":{ - "kind": "pointer", - "subtype": { - "kind": "struct", - "name": "_LIST_ENTRY" - } + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" } }, "State": { diff --git a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json index f4f237ce38..cfccb265dd 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json @@ -244,12 +244,9 @@ }, "ListEntry": { "offset": 40, - "type":{ - "kind": 
"pointer", - "subtype": { - "kind": "struct", - "name": "_LIST_ENTRY" - } + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" } }, "Next": { diff --git a/volatility/framework/symbols/windows/netscan-win10-18362-x64.json b/volatility/framework/symbols/windows/netscan-win10-18362-x64.json index 216d8bf42a..dcda8ae581 100644 --- a/volatility/framework/symbols/windows/netscan-win10-18362-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-18362-x64.json @@ -244,12 +244,9 @@ }, "ListEntry": { "offset": 40, - "type":{ - "kind": "pointer", - "subtype": { - "kind": "struct", - "name": "_LIST_ENTRY" - } + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" } }, "Next": { diff --git a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json index 9266069b4e..3d15330805 100644 --- a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json @@ -231,12 +231,9 @@ }, "ListEntry": { "offset": 40, - "type":{ - "kind": "pointer", - "subtype": { - "kind": "struct", - "name": "_LIST_ENTRY" - } + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" } }, "Next": { diff --git a/volatility/framework/symbols/windows/netscan-win7-x64.json b/volatility/framework/symbols/windows/netscan-win7-x64.json index 27cef122b0..34a524b615 100644 --- a/volatility/framework/symbols/windows/netscan-win7-x64.json +++ b/volatility/framework/symbols/windows/netscan-win7-x64.json @@ -489,72 +489,6 @@ "kind": "struct", "size": 48 }, - "_PARTITION_TABLE": { - "fields": { - "HashTable": { - "offset": 0, - "type": { - "kind": "pointer", - "subtype": { - "kind": "base", - "name": "void" - } - } - }, - "Unknown2": { - "offset": 8, - "type": { - "kind": "pointer", - "subtype": { - "kind": "base", - "name": "void" - } - } - }, - "Unknown3": { - "offset": 16, - "type": { - "kind": "pointer", - "subtype": { - 
"kind": "base", - "name": "void" - } - } - }, - "Unknown4": { - "offset": 24, - "type": { - "kind": "pointer", - "subtype": { - "kind": "base", - "name": "void" - } - } - }, - "Unknown5": { - "offset": 32, - "type": { - "kind": "pointer", - "subtype": { - "kind": "base", - "name": "void" - } - } - }, - "Unknown6": { - "offset": 40, - "type": { - "kind": "pointer", - "subtype": { - "kind": "base", - "name": "void" - } - } - } - }, - "kind": "struct", - "size": 128 - }, "_LARGE_INTEGER": { "fields": { "HighPart": { @@ -738,6 +672,26 @@ }, "kind": "struct", "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { From 0e136b9437ff27dfcf2cca17d5851c20644e1d9c Mon Sep 17 00:00:00 2001 From: Jan Date: Tue, 22 Dec 2020 17:18:20 +0100 Subject: [PATCH 007/294] adds partition_table type to ISFs --- .../windows/netscan-win10-17134-x64.json | 17 +++++++++++++++++ .../windows/netscan-win10-17763-x64.json | 17 +++++++++++++++++ .../windows/netscan-win10-18362-x64.json | 17 +++++++++++++++++ .../windows/netscan-win10-18363-x64.json | 17 +++++++++++++++++ .../symbols/windows/netscan-win7-x64.json | 7 ++----- 5 files changed, 70 insertions(+), 5 deletions(-) diff --git a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json index 088c67b21d..a20dd3799a 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17134-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17134-x64.json @@ -555,6 +555,23 @@ }, "kind": "struct", "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + 
"size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json index cfccb265dd..b7c8325a39 100644 --- a/volatility/framework/symbols/windows/netscan-win10-17763-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-17763-x64.json @@ -555,6 +555,23 @@ }, "kind": "struct", "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-18362-x64.json b/volatility/framework/symbols/windows/netscan-win10-18362-x64.json index dcda8ae581..938b3fa841 100644 --- a/volatility/framework/symbols/windows/netscan-win10-18362-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-18362-x64.json @@ -555,6 +555,23 @@ }, "kind": "struct", "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json index 3d15330805..9ecfdc6420 100644 --- a/volatility/framework/symbols/windows/netscan-win10-18363-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-18363-x64.json @@ -542,6 +542,23 @@ }, "kind": "struct", "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win7-x64.json 
b/volatility/framework/symbols/windows/netscan-win7-x64.json index 34a524b615..12dc8df7b3 100644 --- a/volatility/framework/symbols/windows/netscan-win7-x64.json +++ b/volatility/framework/symbols/windows/netscan-win7-x64.json @@ -681,11 +681,8 @@ "count": 1, "kind": "array", "subtype": { - "kind": "pointer", - "subtype": { - "kind": "struct", - "name": "_PARTITION" - } + "kind": "struct", + "name": "_PARTITION" } } } From 991a0a1091352938b5a3413b4f0fe0357d78c06d Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 22 Dec 2020 17:11:46 +0000 Subject: [PATCH 008/294] Objects: Make repr(object) more useful --- volatility/framework/objects/__init__.py | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/volatility/framework/objects/__init__.py b/volatility/framework/objects/__init__.py index 30bbcbe740..dffc031de4 100644 --- a/volatility/framework/objects/__init__.py +++ b/volatility/framework/objects/__init__.py @@ -549,6 +549,10 @@ def count(self, value: int) -> None: self._vol['count'] = value self._vol['size'] = value * self._vol['subtype'].size + def __repr__(self) -> str: + """Describes the object appropriately""" + return AggregateType.__repr__(self) + class VolTemplateProxy(interfaces.objects.ObjectInterface.VolTemplateProxy): @classmethod @@ -643,6 +647,15 @@ def has_member(self, member_name: str) -> bool: member_name.""" return member_name in self.vol.members + def __repr__(self) -> str: + """Describes the object appropriately""" + extras = member_name = '' + if self.vol.native_layer_name != self.vol.layer_name: + extras += f' (Native: {self.vol.native_layer_name})' + if self.vol.member_name: + member_name = f' (.{self.vol.member_name})' + return f'<{self.__class__.__name__} {self.vol.type_name}{member_name}: {self.vol.layer_name} @ 0x{self.vol.offset:x} #{self.vol.size}{extras}>' + class VolTemplateProxy(interfaces.objects.ObjectInterface.VolTemplateProxy): @classmethod From f731ceba3ab9f455ccc749ff4cd213cfb7c076e9 Mon Sep 17 00:00:00 2001 From: 
Jan Date: Mon, 28 Dec 2020 21:22:50 +0100 Subject: [PATCH 009/294] optimizations as raised by #397 --- .../framework/plugins/windows/netlist.py | 73 +++---- .../symbols/windows/extensions/network.py | 8 +- .../windows/netscan-win10-15063-x64.json | 194 +++++++++++++++++ .../windows/netscan-win10-15063-x86.json | 187 +++++++++++++++++ .../windows/netscan-win10-19041-x64.json | 198 ++++++++++++++++-- .../symbols/windows/netscan-win10-x86.json | 189 ++++++++++++++++- .../symbols/windows/netscan-win7-x86.json | 189 ++++++++++++++++- 7 files changed, 974 insertions(+), 64 deletions(-) diff --git a/volatility/framework/plugins/windows/netlist.py b/volatility/framework/plugins/windows/netlist.py index 7a071676c3..c1a8de592d 100644 --- a/volatility/framework/plugins/windows/netlist.py +++ b/volatility/framework/plugins/windows/netlist.py @@ -96,23 +96,10 @@ def parse_bitmap(cls, ret = [] for idx in range(bitmap_size_in_byte-1): current_byte = context.layers[layer_name].read(bitmap_offset + idx, 1)[0] - current_offs = idx*8 - if current_byte&1 != 0: - ret.append(0 + current_offs) - if current_byte&2 != 0: - ret.append(1 + current_offs) - if current_byte&4 != 0: - ret.append(2 + current_offs) - if current_byte&8 != 0: - ret.append(3 + current_offs) - if current_byte&16 != 0: - ret.append(4 + current_offs) - if current_byte&32 != 0: - ret.append(5 + current_offs) - if current_byte&64 != 0: - ret.append(6 + current_offs) - if current_byte&128 != 0: - ret.append(7 + current_offs) + current_offs = idx * 8 + for bit in range(8): + if current_byte & (1 << bit) != 0: + ret.append(bit + current_offs) return ret @classmethod @@ -147,6 +134,7 @@ def enumerate_structures_by_port(cls, # invalid argument. 
yield + vollog.debug("Current Port: {}".format(port)) # the given port serves as a shifted index into the port pool lists list_index = port >> 8 truncated_port = port & 0xff @@ -167,7 +155,6 @@ def enumerate_structures_by_port(cls, if netw_inside: # if the value is valid, calculate the actual object address by subtracting the offset curr_obj = context.object(obj_name, layer_name = layer_name, offset = netw_inside - ptr_offset) - vollog.debug("Found {} object @ 0x{:2x}, yielding...".format(proto, curr_obj.vol.offset)) yield curr_obj # if the same port is used on different interfaces multiple objects are created @@ -203,7 +190,7 @@ def parse_hashtable(cls, ht_offset: int, ht_length: int, alignment: int, - pointer_length: int) -> list: + net_symbol_table: str) -> list: """Parses a hashtable quick and dirty. Args: @@ -211,19 +198,20 @@ def parse_hashtable(cls, layer_name: The name of the layer on which to operate ht_offset: Beginning of the hash table ht_length: Length of the hash table - pointer_length: Length of this architecture's pointers Returns: The hash table entries which are _not_ empty """ + # we are looking for entries whose values are not their own address for index in range(ht_length): - # mask pointer so we do not get confused with abbreviated virtual offsets - # this currently only works for 64-bit. - # TODO: add x86 support. 
- current_qword = (0xffff000000000000 | cls.read_pointer(context, layer_name, ht_offset + index * alignment, pointer_length)) - if current_qword == (0xffff000000000000 | (ht_offset + index * alignment)): + current_addr = ht_offset + index * alignment + current_pointer = context.object(net_symbol_table + constants.BANG + "pointer", + layer_name = layer_name, + offset = current_addr) + # check if addr of pointer is equal to the value pointed to + if current_pointer.vol.offset == current_pointer: continue - yield current_qword + yield current_pointer @classmethod def parse_partitions(cls, @@ -231,8 +219,7 @@ def parse_partitions(cls, layer_name: str, net_symbol_table: str, tcpip_symbol_table: str, - tcpip_module_offset: int, - pointer_length: int) -> Iterable[interfaces.objects.ObjectInterface]: + tcpip_module_offset: int) -> Iterable[interfaces.objects.ObjectInterface]: """Parses tcpip.sys's PartitionTable containing established TCP connections. The amount of Partition depends on the value of the symbol `PartitionCount` and correlates with the maximum processor count (refer to Art of Memory Forensics, chapter 11). 
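The `parse_hashtable` rewrite above stops masking raw qwords and instead builds pointer objects, skipping any hash-table slot whose stored value equals the slot's own address — the signature of an empty, self-referential list head. The same idea can be sketched without the Volatility object API; the function and buffer below are illustrative only (assuming flat little-endian 8-byte pointers, not the plugin's actual types):

```python
import struct

def scan_hashtable(data: bytes, base_addr: int, entry_count: int,
                   alignment: int = 8) -> list:
    """Return hash-table slot values that are not empty.

    An empty bucket in an RTL_DYNAMIC_HASH_TABLE-style directory is a
    list head whose stored pointer is its own address, so any slot whose
    value equals the address it was read from is skipped.  Sketch only:
    assumes 8-byte little-endian pointers in a flat buffer that is
    mapped at `base_addr`.
    """
    results = []
    for index in range(entry_count):
        offset = index * alignment
        addr = base_addr + offset          # where this slot lives
        value = struct.unpack_from("<Q", data, offset)[0]
        if value == addr:                  # self-referential => empty
            continue
        results.append(value)
    return results

# Two empty (self-pointing) buckets around one occupied bucket
base = 0x1000
buf = struct.pack("<QQQ", base, 0xdeadbeef, base + 16)
assert scan_hashtable(buf, base, 3) == [0xdeadbeef]
```

This mirrors the patch's `current_pointer.vol.offset == current_pointer` comparison: the address a value was read from is checked against the value itself, which also removes the hard-coded `0xffff000000000000` canonical-address mask that made the old code 64-bit only.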
@@ -254,7 +241,6 @@ def parse_partitions(cls, alignment = 8 obj_name = net_symbol_table + constants.BANG + "_TCP_ENDPOINT" - # part_table_symbol is the offset within tcpip.sys which contains the address of the partition table itself part_table_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionTable").address part_count_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionCount").address @@ -262,23 +248,25 @@ def parse_partitions(cls, part_table_addr = context.object(net_symbol_table + constants.BANG + "pointer", layer_name = layer_name, offset = tcpip_module_offset + part_table_symbol) + # part_table is the actual partition table offset and consists out of a dynamic amount of _PARTITION objects part_table = context.object(net_symbol_table + constants.BANG + "_PARTITION_TABLE", layer_name = layer_name, offset = part_table_addr) part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + part_count_symbol, 1), "little") part_table.Partitions.count = part_count - partition_size = context.symbol_space.get_type(net_symbol_table + constants.BANG + "_PARTITION").size + vollog.debug("Found TCP connection PartitionTable @ 0x{:x} (partition count: {})".format(part_table_addr, part_count)) entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("ListEntry") - for partition in part_table.Partitions: + for ctr, partition in enumerate(part_table.Partitions): + vollog.debug("Parsing partition {}".format(ctr)) if partition.Endpoints.NumEntries > 0: for endpoint_entry in cls.parse_hashtable(context, layer_name, partition.Endpoints.Directory, - partition_size, + partition.Endpoints.TableSize, alignment, - pointer_length): + net_symbol_table): endpoint = context.object(obj_name, layer_name = layer_name, offset = endpoint_entry - entry_offset) yield endpoint @@ -333,8 +321,7 @@ def find_port_pools(cls, layer_name: str, net_symbol_table: str, tcpip_symbol_table: str, - 
tcpip_module_offset: int, - pointer_length: int) -> (int, int): + tcpip_module_offset: int) -> (int, int): """Finds the given image's port pools. Older Windows versions (presumably < Win10 build 14251) use driver symbols called `UdpPortPool` and `TcpPortPool` which point towards the pools. Newer Windows versions use `UdpCompartmentSet` and `TcpCompartmentSet`, which we first have to translate into @@ -346,7 +333,6 @@ def find_port_pools(cls, net_symbol_table: The name of the table containing the tcpip types tcpip_module_offset: This memory dump's tcpip.sys image offset tcpip_symbol_table: The name of the table containing the tcpip driver symbols - pointer_length: Length of this architecture's pointers Returns: The tuple containing the address of the UDP and TCP port pool respectively. @@ -387,7 +373,7 @@ def find_port_pools(cls, raise exceptions.SymbolError("UdpPortPool", tcpip_symbol_table, "Neither UdpPortPool nor UdpCompartmentSet found in {} table".format(tcpip_symbol_table)) - vollog.debug("Found PortPools @ 0x{:x} (TCP) && 0x{:x} (UDP)".format(upp_addr, tpp_addr)) + vollog.debug("Found PortPools @ 0x{:x} (UDP) && 0x{:x} (TCP)".format(upp_addr, tpp_addr)) return upp_addr, tpp_addr @classmethod @@ -416,15 +402,13 @@ def list_sockets(cls, tcpip_module_offset = tcpip_module.DllBase - pointer_length = context.symbol_space.get_type(net_symbol_table + constants.BANG + "pointer").size - # first, TCP endpoints by parsing the partition table - for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset, pointer_length): + for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset): yield endpoint # then, towards the UDP and TCP port pools # first, find their addresses - upp_addr, tpp_addr = cls.find_port_pools(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset, pointer_length) + upp_addr, tpp_addr = cls.find_port_pools(context, 
layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset) # create port pool objects at the detected address and parse the port bitmap upp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = upp_addr) @@ -433,6 +417,8 @@ def list_sockets(cls, tpp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = tpp_addr) tcpl_ports = cls.parse_bitmap(context, layer_name, tpp_obj.PortBitMap.Buffer, tpp_obj.PortBitMap.SizeOfBitMap // 8) + vollog.debug("Found TCP Ports: {}".format(tcpl_ports)) + vollog.debug("Found UDP Ports: {}".format(udpa_ports)) # given the list of TCP / UDP ports, calculate the address of their respective objects and yield them. for port in tcpl_ports: # port value can be 0, which we can skip @@ -451,10 +437,6 @@ def list_sockets(cls, def _generator(self, show_corrupt_results: Optional[bool] = None): """ Generates the network objects for use in rendering. """ - # can this be checked via a PluginRequirement? - if not symbols.symbol_table_is_64bit(self.context, self.config['nt_symbols']): - raise exceptions.LayerException("This plugin currently only supports 64-bit memory images.") - netscan_symbol_table = netscan.NetScan.create_netscan_symbol_table(self.context, self.config["primary"], self.config["nt_symbols"], self.config_path) @@ -469,7 +451,6 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): tcpip_module, tcpip_symbol_table): - vollog.debug("Found netw obj @ 0x{:2x} of assumed type {}".format(netw_obj.vol.offset, type(netw_obj))) # objects passed pool header constraints. check for additional constraints if strict flag is set. 
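The port enumeration above reads `PortBitMap.SizeOfBitMap // 8` bytes and walks them with the loop-based `parse_bitmap` that this patch introduces in place of eight unrolled bit tests. A minimal standalone sketch of that bitmap walk, testing all eight bits of each byte (the function name matches the plugin's, but the flat-`bytes` interface is an illustrative simplification of its layer-read API):

```python
def parse_bitmap(bitmap: bytes) -> list:
    """Return the indices of all set bits in a port bitmap.

    Sketch of the RTL_BITMAP walk used to recover allocated port
    numbers: byte index * 8 gives the base index, and each of the
    eight bits within the byte marks one port as in use.
    """
    ports = []
    for idx, current_byte in enumerate(bitmap):
        current_offs = idx * 8
        for bit in range(8):               # all eight bits per byte
            if current_byte & (1 << bit):
                ports.append(current_offs + bit)
    return ports

# 0x81 sets bits 0 and 7; 0x01 in the second byte is bit 8 overall
assert parse_bitmap(bytes([0x81, 0x01])) == [0, 7, 8]
```

Port 0 comes back from the scan like any other set bit, which is why the callers above explicitly `continue` on a falsy port value before resolving it to an endpoint object.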
if not show_corrupt_results and not netw_obj.is_valid(): continue diff --git a/volatility/framework/symbols/windows/extensions/network.py b/volatility/framework/symbols/windows/extensions/network.py index 6c6f722b08..e13af67075 100644 --- a/volatility/framework/symbols/windows/extensions/network.py +++ b/volatility/framework/symbols/windows/extensions/network.py @@ -199,21 +199,21 @@ def get_remote_address(self): def is_valid(self): if self.State not in self.State.choices.values(): - vollog.debug("invalid due to invalid tcp state {}".format(self.State)) + vollog.debug("{} 0x{:x} invalid due to invalid tcp state {}".format(type(self), self.vol.offset, self.State)) return False try: if self.get_address_family() not in (AF_INET, AF_INET6): - vollog.debug("invalid due to invalid address_family {}".format(self.get_address_family())) + vollog.debug("{} 0x{:x} invalid due to invalid address_family {}".format(type(self), self.vol.offset, self.get_address_family())) return False if not self.get_local_address() and (not self.get_owner() or self.get_owner().UniqueProcessId == 0 or self.get_owner().UniqueProcessId > 65535): - vollog.debug("invalid due to invalid owner data") + vollog.debug("{} 0x{:x} invalid due to invalid owner data".format(type(self), self.vol.offset)) return False except exceptions.InvalidAddressException: - vollog.debug("invalid due to invalid address access") + vollog.debug("{} 0x{:x} invalid due to invalid address access".format(type(self), self.vol.offset)) return False return True diff --git a/volatility/framework/symbols/windows/netscan-win10-15063-x64.json b/volatility/framework/symbols/windows/netscan-win10-15063-x64.json index c212ea6be1..e15f6eb509 100644 --- a/volatility/framework/symbols/windows/netscan-win10-15063-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-15063-x64.json @@ -92,6 +92,16 @@ } }, + "Next": { + "offset": 112, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } 
+ }, "Port": { "offset": 120, "type": { @@ -116,6 +126,16 @@ } }, + "Next": { + "offset": 120, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } + }, "CreateTime": { "offset": 64, "type": { @@ -195,6 +215,13 @@ } } }, + "ListEntry": { + "offset": 40, + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" + } + }, "LocalPort": { "offset": 112, "type": { @@ -355,6 +382,173 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 6144 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 24, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 232, + "type": { + "count": 256, + "kind": "array", + 
"subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 216, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 128 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-15063-x86.json b/volatility/framework/symbols/windows/netscan-win10-15063-x86.json index 08ba89dbb4..0f934b7d51 100644 --- a/volatility/framework/symbols/windows/netscan-win10-15063-x86.json +++ b/volatility/framework/symbols/windows/netscan-win10-15063-x86.json @@ -238,6 +238,16 @@ "kind": "base", "name": "unsigned be short" } + }, + "Next": { + "offset": 76, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } } }, "kind": "struct", @@ -291,6 +301,16 @@ "kind": "base", "name": "unsigned be short" } + }, + "Next": { + "offset": 72, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } } }, "kind": "struct", @@ -502,6 +522,173 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 324, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + 
"_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 20, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 8 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 4096 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 152, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 144, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 12, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 64 + }, + "_PARTITION_TABLE": { + "fields": { + 
"Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 64 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-19041-x64.json b/volatility/framework/symbols/windows/netscan-win10-19041-x64.json index ec8e7d8704..0ff62ee999 100644 --- a/volatility/framework/symbols/windows/netscan-win10-19041-x64.json +++ b/volatility/framework/symbols/windows/netscan-win10-19041-x64.json @@ -71,35 +71,35 @@ "name": "_LARGE_INTEGER" } }, - "LocalAddr": { - "offset": 168, + "Next": { + "offset": 112, "type":{ "kind": "pointer", "subtype": { "kind": "struct", - "name": "_LOCAL_ADDRESS_WIN10_UDP" + "name": "_UDP_ENDPOINT" } } }, - "InetAF": { - "offset": 32, + "LocalAddr": { + "offset": 168, "type":{ "kind": "pointer", "subtype": { "kind": "struct", - "name": "_INETAF" + "name": "_LOCAL_ADDRESS_WIN10_UDP" } - } }, - "MaskedPrevObj": { - "offset": 112, + "InetAF": { + "offset": 32, "type":{ "kind": "pointer", "subtype": { "kind": "struct", - "name": "_UDP_ENDPOINT" + "name": "_INETAF" } + } }, "Port": { @@ -111,7 +111,7 @@ } }, "kind": "struct", - "size": 132 + "size": 168 }, "_TCP_LISTENER": { "fields": { @@ -155,7 +155,7 @@ } }, - "MaskedPrevObj": { + "Next": { "offset": 120, "type":{ "kind": "pointer", @@ -205,6 +205,13 @@ } } }, + "ListEntry": { + "offset": 40, + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" + } + }, "InetAF": { "offset": 16, "type":{ @@ -375,6 +382,173 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": 
"_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 6144 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 24, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 32 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 224, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 208, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 192 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + 
"kind": "struct", + "size": 128 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win10-x86.json b/volatility/framework/symbols/windows/netscan-win10-x86.json index bbd77d4aa1..b27380f4c4 100644 --- a/volatility/framework/symbols/windows/netscan-win10-x86.json +++ b/volatility/framework/symbols/windows/netscan-win10-x86.json @@ -211,6 +211,16 @@ "name": "_LARGE_INTEGER" } }, + "Next": { + "offset": 76, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, "LocalAddr": { "offset": 56, "type":{ @@ -291,10 +301,20 @@ "kind": "base", "name": "unsigned be short" } + }, + "Next": { + "offset": 72, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } } }, "kind": "struct", - "size": 72 + "size": 78 }, "_TCP_ENDPOINT": { "fields": { @@ -502,6 +522,173 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 8 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": 
"_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 4096 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 152, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 144, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 12, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 64 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 64 } }, "enums": { diff --git a/volatility/framework/symbols/windows/netscan-win7-x86.json b/volatility/framework/symbols/windows/netscan-win7-x86.json index 891e7072c7..03cbe7bf28 100644 --- a/volatility/framework/symbols/windows/netscan-win7-x86.json +++ b/volatility/framework/symbols/windows/netscan-win7-x86.json @@ -231,6 +231,16 @@ } }, + "Next": { + "offset": 76, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, "Port": { "offset": 72, "type": { @@ -290,10 +300,20 @@ "kind": "base", "name": "unsigned be short" } + }, + "Next": { + "offset": 64, + "type":{ + "kind": "pointer", + "subtype": { + 
"kind": "struct", + "name": "_TCP_LISTENER" + } + } } }, "kind": "struct", - "size": 64 + "size": 72 }, "_TCP_ENDPOINT": { "fields": { @@ -503,6 +523,173 @@ }, "kind": "union", "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 8 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 4096 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 20, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 88, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 80, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { 
+ "fields": { + "Endpoints" : { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 12, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + } + }, + "kind": "struct", + "size": 64 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 64 } }, "enums": { From 71e48cc67274963b7f25bd550b476182b24538db Mon Sep 17 00:00:00 2001 From: Jan Date: Mon, 28 Dec 2020 21:51:28 +0100 Subject: [PATCH 010/294] complex objects are not passed as arguments directly anymore, but rather by offset --- .../framework/plugins/windows/netlist.py | 39 ++++++++++++------- 1 file changed, 25 insertions(+), 14 deletions(-) diff --git a/volatility/framework/plugins/windows/netlist.py b/volatility/framework/plugins/windows/netlist.py index c1a8de592d..9fc46bed5f 100644 --- a/volatility/framework/plugins/windows/netlist.py +++ b/volatility/framework/plugins/windows/netlist.py @@ -108,7 +108,7 @@ def enumerate_structures_by_port(cls, layer_name: str, net_symbol_table: str, port: int, - port_pool: interfaces.objects.ObjectInterface, + port_pool_addr: int, proto="tcp") -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all UDP Endpoints and TCP Listeners by parsing UdpPortPool and TcpPortPool. @@ -118,7 +118,7 @@ def enumerate_structures_by_port(cls, layer_name: The name of the layer on which to operate net_symbol_table: The name of the table containing the tcpip types port: Current port as integer to lookup the associated object. - port_pool: Port pool object + port_pool_addr: Address of port pool object proto: Either "tcp" or "udp" to decide which types to use. 
Returns: @@ -139,6 +139,11 @@ def enumerate_structures_by_port(cls, list_index = port >> 8 truncated_port = port & 0xff + # constructing port_pool object here so callers don't have to + port_pool = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", + layer_name = layer_name, + offset = port_pool_addr) + # first, grab the given port's PortAssignment (`_PORT_ASSIGNMENT`) inpa = port_pool.PortAssignments[list_index] @@ -270,12 +275,14 @@ def parse_partitions(cls, endpoint = context.object(obj_name, layer_name = layer_name, offset = endpoint_entry - entry_offset) yield endpoint + @classmethod def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, config_path: str, layer_name: str, - tcpip_module: interfaces.objects.ObjectInterface) -> str: + tcpip_module_offset: int, + tcpip_module_size: int) -> str: """Creates symbol table for the current image's tcpip.sys driver. Searches the memory section of the loaded tcpip.sys module for its PDB GUID @@ -285,19 +292,21 @@ def create_tcpip_symbol_table(cls, context: The context to retrieve required elements (layers, symbol tables) from config_path: The config path where to find symbol files layer_name: The name of the layer on which to operate - tcpip_module: The created vol Windows module object of the given memory image + tcpip_module_offset: This memory dump's tcpip.sys image offset + tcpip_module_size: The size of `tcpip.sys` for this dump Returns: The name of the constructed and loaded symbol table """ + guids = list( pdbutil.PDBUtility.pdbname_scan( context, layer_name, context.layers[layer_name].page_size, [b"tcpip.pdb"], - start=tcpip_module.DllBase, - end=tcpip_module.DllBase + tcpip_module.SizeOfImage + start=tcpip_module_offset, + end=tcpip_module_offset + tcpip_module_size ) ) @@ -382,7 +391,7 @@ def list_sockets(cls, layer_name: str, nt_symbols: str, net_symbol_table: str, - tcpip_module: interfaces.objects.ObjectInterface, + tcpip_module_offset: int, tcpip_symbol_table: 
str) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all UDP Endpoints, TCP Listeners and TCP Endpoints in the primary layer that @@ -393,15 +402,13 @@ def list_sockets(cls, layer_name: The name of the layer on which to operate nt_symbols: The name of the table containing the kernel symbols net_symbol_table: The name of the table containing the tcpip types - tcpip_module: The created vol Windows module object of the given memory image + tcpip_module_offset: Offset of `tcpip.sys`'s PE image in memory tcpip_symbol_table: The name of the table containing the tcpip driver symbols Returns: The list of network objects from the `layer_name` layer's `PartitionTable` and `PortPools` """ - tcpip_module_offset = tcpip_module.DllBase - # first, TCP endpoints by parsing the partition table for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset): yield endpoint @@ -424,14 +431,14 @@ def list_sockets(cls, # port value can be 0, which we can skip if not port: continue - for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, tpp_obj, "tcp"): + for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, tpp_addr, "tcp"): yield obj for port in udpa_ports: # same as above, skip port 0 if not port: continue - for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, upp_obj, "udp"): + for obj in cls.enumerate_structures_by_port(context, layer_name, net_symbol_table, port, upp_addr, "udp"): yield obj def _generator(self, show_corrupt_results: Optional[bool] = None): @@ -442,13 +449,17 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): tcpip_module = self.get_tcpip_module(self.context, self.config["primary"], self.config["nt_symbols"]) - tcpip_symbol_table = self.create_tcpip_symbol_table(self.context, self.config_path, self.config["primary"], tcpip_module) + tcpip_symbol_table = 
self.create_tcpip_symbol_table(self.context, + self.config_path, + self.config["primary"], + tcpip_module.DllBase, + tcpip_module.SizeOfImage) for netw_obj in self.list_sockets(self.context, self.config['primary'], self.config['nt_symbols'], netscan_symbol_table, - tcpip_module, + tcpip_module.DllBase, tcpip_symbol_table): # objects passed pool header constraints. check for additional constraints if strict flag is set. From b10b0254b7f02eed1ce778d49c1303eb7056c417 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 11 Dec 2020 11:23:24 +0000 Subject: [PATCH 011/294] Volshell: Add support for running scriptlets --- volatility/cli/volshell/generic.py | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/volatility/cli/volshell/generic.py b/volatility/cli/volshell/generic.py index 18d76ae04f..7c0d45d0b9 100644 --- a/volatility/cli/volshell/generic.py +++ b/volatility/cli/volshell/generic.py @@ -4,6 +4,7 @@ import binascii import code import io +import os import random import string import struct @@ -31,6 +32,7 @@ class Volshell(interfaces.plugins.PluginInterface): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.__current_layer = None # type: Optional[str] + self.__console = None def random_string(self, length: int = 32) -> str: return ''.join(random.sample(string.ascii_uppercase + string.digits, length)) @@ -73,7 +75,8 @@ def run(self, additional_locals: Dict[str, Any] = None) -> interfaces.renderers. 
""".format(mode, self.current_layer) sys.ps1 = "({}) >>> ".format(self.current_layer) - code.interact(banner = banner, local = self._construct_locals_dict()) + self.__console = code.InteractiveConsole(locals = self._construct_locals_dict()) + self.__console.interact(banner = banner) return renderers.TreeGrid([("Terminating", str)], None) @@ -109,7 +112,8 @@ def construct_locals(self) -> List[Tuple[List[str], Any]]: (['gt', 'generate_treegrid'], self.generate_treegrid), (['rt', 'render_treegrid'], self.render_treegrid), (['ds', 'display_symbols'], self.display_symbols), (['hh', 'help'], self.help), - (['cc', 'create_configurable'], self.create_configurable), (['lf', 'load_file'], self.load_file)] + (['cc', 'create_configurable'], self.create_configurable), (['lf', 'load_file'], self.load_file), + (['rs', 'run_script'], self.run_script)] def _construct_locals_dict(self) -> Dict[str, Any]: """Returns a dictionary of the locals """ @@ -318,6 +322,13 @@ def display_symbols(self, symbol_table: str = None): len_offset = len(hex(symbol.address)) print(" " * (longest_offset - len_offset), hex(symbol.address), " ", symbol.name) + def run_script(self, filename: str): + """Runs a python script within the context of volshell""" + print("Running code from {}\n".format(filename)) + with open(filename) as fp: + self.__console.runsource(fp.read(), symbol = 'exec') + print("\nCode complete") + def load_file(self, location: str = None, filename: str = ''): """Loads a file into a Filelayer and returns the name of the layer""" layer_name = self.context.layers.free_layer_name() From 01b2a38d84e0889fabac2e527738f1e849603880 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 11 Dec 2020 14:57:18 +0000 Subject: [PATCH 012/294] Volshell: Allow a script to be run at startup --- volatility/cli/volshell/generic.py | 34 ++++++++++++++++++++++-------- 1 file changed, 25 insertions(+), 9 deletions(-) diff --git a/volatility/cli/volshell/generic.py b/volatility/cli/volshell/generic.py index 
7c0d45d0b9..de43f8c717 100644 --- a/volatility/cli/volshell/generic.py +++ b/volatility/cli/volshell/generic.py @@ -10,12 +10,12 @@ import struct import sys from typing import Any, Dict, List, Optional, Tuple, Union, Type -from urllib import request +from urllib import request, parse from volatility.cli import text_renderer from volatility.framework import renderers, interfaces, objects, plugins, exceptions from volatility.framework.configuration import requirements -from volatility.framework.layers import intel, physical +from volatility.framework.layers import intel, physical, resources try: import capstone @@ -39,7 +39,17 @@ def random_string(self, length: int = 32) -> str: @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: - return [requirements.TranslationLayerRequirement(name = 'primary', description = 'Memory layer for the kernel')] + reqs = [] # type: List[interfaces.configuration.RequirementInterface] + if cls == Volshell: + reqs = [ + requirements.URIRequirement(name = 'script', + description = 'File to load and execute at start', + default = None, + optional = True) + ] + return reqs + [ + requirements.TranslationLayerRequirement(name = 'primary', description = 'Memory layer for the kernel'), + ] def run(self, additional_locals: Dict[str, Any] = None) -> interfaces.renderers.TreeGrid: """Runs the interactive volshell plugin. @@ -76,6 +86,9 @@ def run(self, additional_locals: Dict[str, Any] = None) -> interfaces.renderers. 
sys.ps1 = "({}) >>> ".format(self.current_layer) self.__console = code.InteractiveConsole(locals = self._construct_locals_dict()) + if self.config['script'] is not None: + self.run_script(location = self.config['script']) + self.__console.interact(banner = banner) return renderers.TreeGrid([("Terminating", str)], None) @@ -322,18 +335,21 @@ def display_symbols(self, symbol_table: str = None): len_offset = len(hex(symbol.address)) print(" " * (longest_offset - len_offset), hex(symbol.address), " ", symbol.name) - def run_script(self, filename: str): + def run_script(self, location: str = None): """Runs a python script within the context of volshell""" - print("Running code from {}\n".format(filename)) - with open(filename) as fp: + if not parse.urlparse(location).scheme: + location = "file:" + request.pathname2url(location) + print("Running code from {}\n".format(location)) + accessor = resources.ResourceAccessor() + with io.TextIOWrapper(accessor.open(url = location), encoding = 'utf-8') as fp: self.__console.runsource(fp.read(), symbol = 'exec') print("\nCode complete") - def load_file(self, location: str = None, filename: str = ''): + def load_file(self, location: str = None): """Loads a file into a Filelayer and returns the name of the layer""" layer_name = self.context.layers.free_layer_name() - if location is None: - location = "file:" + request.pathname2url(filename) + if not parse.urlparse(location).scheme: + location = "file:" + request.pathname2url(location) current_config_path = 'volshell.layers.' 
+ layer_name self.context.config[interfaces.configuration.path_join(current_config_path, "location")] = location layer = physical.FileLayer(self.context, current_config_path, layer_name) From 84aa3b5e70fdb1dc35dc5818b64e9684b691d7e6 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 22 Dec 2020 17:41:05 +0000 Subject: [PATCH 013/294] Automagic: Improve vmware layer stacker diagnostics --- volatility/framework/layers/vmware.py | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/volatility/framework/layers/vmware.py b/volatility/framework/layers/vmware.py index fc412fb2e0..ea449f7b65 100644 --- a/volatility/framework/layers/vmware.py +++ b/volatility/framework/layers/vmware.py @@ -2,6 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import logging import struct from typing import Any, Dict, List, Optional @@ -10,6 +11,8 @@ from volatility.framework.layers import physical, segmented, resources from volatility.framework.symbols import native +vollog = logging.getLogger(__name__) + class VmwareFormatException(exceptions.LayerException): """Thrown when an error occurs with the underlying VMware vmem file format.""" @@ -130,14 +133,16 @@ def stack(cls, current_config_path = interfaces.configuration.path_join("automagic", "layer_stacker", "stack", current_layer_name) + vmss_success = False try: _ = resources.ResourceAccessor().open(vmss).read(10) context.config[interfaces.configuration.path_join(current_config_path, "location")] = vmss context.layers.add_layer(physical.FileLayer(context, current_config_path, current_layer_name)) vmss_success = True except IOError: - vmss_success = False + pass + vmsn_success = False if not vmss_success: try: _ = resources.ResourceAccessor().open(vmsn).read(10) @@ -145,7 +150,9 @@ def stack(cls, context.layers.add_layer(physical.FileLayer(context, current_config_path, current_layer_name)) vmsn_success = True except IOError: - vmsn_success = False + pass + + 
vollog.log(constants.LOGLEVEL_VVVV, "Metadata found: VMSS ({}) or VMSN ({})".format(vmss_success, vmsn_success)) if not vmss_success and not vmsn_success: return None From 8de599181271bddc81cd8cf7d88a3a10358d56ed Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 3 Jan 2021 00:31:33 +0000 Subject: [PATCH 014/294] Timeliner: Improve timeliner caching This duplicates config values if the requirements are identical, which speeds up the automagic considerably. Note that this means even the requirement descriptions must match, but for safety that probably makes sense. --- volatility/framework/interfaces/configuration.py | 8 ++++++++ volatility/framework/plugins/timeliner.py | 16 +++++++++++++++- 2 files changed, 23 insertions(+), 1 deletion(-) diff --git a/volatility/framework/interfaces/configuration.py b/volatility/framework/interfaces/configuration.py index a52b9984fb..450bccc215 100644 --- a/volatility/framework/interfaces/configuration.py +++ b/volatility/framework/interfaces/configuration.py @@ -324,6 +324,14 @@ def __init__(self, def __repr__(self) -> str: return "<" + self.__class__.__name__ + ": " + self.name + ">" + def __eq__(self, other): + if not isinstance(other, self.__class__): + return False + for name in self.__dict__: + if other.__dict__.get(name, None) != self.__dict__[name]: + return False + return True + @property def name(self) -> str: """The name of the Requirement.
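The `__eq__` added to `configuration.py` above compares requirement instances attribute-by-attribute via their instance dictionaries, which is why even the descriptions must match before Timeliner will reuse a cached config value. A minimal standalone sketch of that comparison pattern (the `Requirement` class here is a simplified stand-in for illustration, not the framework's actual `RequirementInterface`):

```python
class Requirement:
    """Simplified stand-in for a plugin requirement."""

    def __init__(self, name, description, default=None, optional=False):
        self.name = name
        self.description = description
        self.default = default
        self.optional = optional

    def __eq__(self, other):
        # Equal only if the classes match and every instance attribute matches
        if not isinstance(other, self.__class__):
            return False
        return all(other.__dict__.get(key, None) == value
                   for key, value in self.__dict__.items())


a = Requirement("primary", "Memory layer for the kernel")
b = Requirement("primary", "Memory layer for the kernel")
c = Requirement("primary", "A different description")
print(a == b, a == c)  # True False
```

Because the comparison walks the whole `__dict__`, two requirements that differ only in their description strings are treated as distinct, matching the caution in the commit message.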
diff --git a/volatility/framework/plugins/timeliner.py b/volatility/framework/plugins/timeliner.py index 40148800d9..d757067c9c 100644 --- a/volatility/framework/plugins/timeliner.py +++ b/volatility/framework/plugins/timeliner.py @@ -48,7 +48,7 @@ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.timeline = {} self.usable_plugins = None - self.automagics = None + self.automagics = None # type: Optional[List[interfaces.automagic.AutomagicInterface]] @classmethod def get_usable_plugins(cls, selected_list: List[str] = None) -> List[Type]: @@ -176,6 +176,7 @@ def run(self): self.usable_plugins = self.usable_plugins or self.get_usable_plugins() self.automagics = self.automagics or automagic.available(self._context) plugins_to_run = [] + requirement_configs = {} filter_list = self.config['plugin-filter'] # Identify plugins that we can run which output datetimes @@ -183,9 +184,22 @@ def run(self): try: automagics = automagic.choose_automagic(self.automagics, plugin_class) + for requirement in plugin_class.get_requirements(): + if requirement.name in requirement_configs: + config_req, config_value = requirement_configs[requirement.name] + if requirement == config_req: + self.context.config[interfaces.configuration.path_join( + self.config_path, plugin_class.__name__)] = config_value + plugin = plugins.construct_plugin(self.context, automagics, plugin_class, self.config_path, self._progress_callback, self.open) + for requirement in plugin.get_requirements(): + if requirement.name not in requirement_configs: + config_value = plugin.config.get(requirement.name, None) + if config_value: + requirement_configs[requirement.name] = (requirement, config_value) + if isinstance(plugin, TimeLinerInterface): if not len(filter_list) or any( [filter in plugin.__module__ + '.' 
+ plugin.__class__.__name__ for filter in filter_list]): From f0f5f1978d27582261b2c92b539e9d6857952dcb Mon Sep 17 00:00:00 2001 From: Jan Date: Thu, 7 Jan 2021 11:00:58 +0100 Subject: [PATCH 015/294] renamed to more consistent plugin name --- .../framework/plugins/windows/{netlist.py => netstat.py} | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename volatility/framework/plugins/windows/{netlist.py => netstat.py} (99%) diff --git a/volatility/framework/plugins/windows/netlist.py b/volatility/framework/plugins/windows/netstat.py similarity index 99% rename from volatility/framework/plugins/windows/netlist.py rename to volatility/framework/plugins/windows/netstat.py index 9fc46bed5f..b0d3c8bcf7 100644 --- a/volatility/framework/plugins/windows/netlist.py +++ b/volatility/framework/plugins/windows/netstat.py @@ -18,8 +18,8 @@ vollog = logging.getLogger(__name__) -class NetList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): - """Scans for network objects present in a particular windows memory image.""" +class NetStat(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): + """Traverses network tracking structures present in a particular windows memory image.""" _required_framework_version = (2, 0, 0) _version = (1, 0, 0) From e108d8a5c4efb5a670ea571d04d6723ba262ff37 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 7 Jan 2021 10:14:35 +0000 Subject: [PATCH 016/294] Layers: Fix typo in pae patch --- volatility/framework/layers/intel.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility/framework/layers/intel.py b/volatility/framework/layers/intel.py index 86bf26de74..cdff9ce7e2 100644 --- a/volatility/framework/layers/intel.py +++ b/volatility/framework/layers/intel.py @@ -247,7 +247,7 @@ class IntelPAE(Intel): _maxphyaddr = 40 _maxvirtaddr = 32 _structure = [('page directory pointer', 2, False), ('page directory', 9, True), ('page table', 9, True)] - _direct_metadata = collections.ChainMap({'pae', 
True}, Intel._direct_metadata) + _direct_metadata = collections.ChainMap({'pae': True}, Intel._direct_metadata) class Intel32e(Intel): From 065dcb6cdb157c4d68db42a5b946f64734a5ef35 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 7 Jan 2021 10:21:55 +0000 Subject: [PATCH 017/294] CLI: Dampen exceptions at low log levels --- volatility/cli/__init__.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/volatility/cli/__init__.py b/volatility/cli/__init__.py index 0c038a322f..ef39d98cad 100644 --- a/volatility/cli/__init__.py +++ b/volatility/cli/__init__.py @@ -188,6 +188,8 @@ def run(self): vollog.addHandler(file_logger) vollog.info("Logging started") if partial_args.verbosity < 3: + if partial_args.verbosity < 1: + sys.tracebacklimit = None console.setLevel(30 - (partial_args.verbosity * 10)) else: console.setLevel(10 - (partial_args.verbosity - 2)) From 63ba549bcab61799042ecc4b020bcae7359af787 Mon Sep 17 00:00:00 2001 From: Jan Date: Sun, 10 Jan 2021 17:08:14 +0100 Subject: [PATCH 018/294] fixes off-by-one error in port bitmap parsing --- volatility/framework/plugins/windows/netstat.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility/framework/plugins/windows/netstat.py b/volatility/framework/plugins/windows/netstat.py index b0d3c8bcf7..96da1618b0 100644 --- a/volatility/framework/plugins/windows/netstat.py +++ b/volatility/framework/plugins/windows/netstat.py @@ -97,7 +97,7 @@ def parse_bitmap(cls, for idx in range(bitmap_size_in_byte-1): current_byte = context.layers[layer_name].read(bitmap_offset + idx, 1)[0] current_offs = idx * 8 - for bit in range(7): + for bit in range(8): if current_byte & (1 << bit) != 0: ret.append(bit + current_offs) return ret From e92920f6e66a0dfb3f56354676d5963446d4e041 Mon Sep 17 00:00:00 2001 From: Jan Date: Sun, 10 Jan 2021 17:23:00 +0100 Subject: [PATCH 019/294] another off-by-one error when parsing bitmaps --- volatility/framework/plugins/windows/netstat.py | 2 +- 1 file changed, 1 insertion(+), 1 
deletion(-) diff --git a/volatility/framework/plugins/windows/netstat.py b/volatility/framework/plugins/windows/netstat.py index 96da1618b0..67e781380c 100644 --- a/volatility/framework/plugins/windows/netstat.py +++ b/volatility/framework/plugins/windows/netstat.py @@ -94,7 +94,7 @@ def parse_bitmap(cls, The list of indices at which a 1 was found. """ ret = [] - for idx in range(bitmap_size_in_byte-1): + for idx in range(bitmap_size_in_byte): current_byte = context.layers[layer_name].read(bitmap_offset + idx, 1)[0] current_offs = idx * 8 for bit in range(8): From 2a180c135ed3e293453060929ac847a20e2ecd27 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Wed, 6 Jan 2021 10:10:51 -0600 Subject: [PATCH 020/294] refs #368 fix handles on 32-bit windows 8 and 10 - finding SAR is not necessary on these versions --- .../framework/plugins/windows/handles.py | 33 ++++++++++++------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/volatility/framework/plugins/windows/handles.py b/volatility/framework/plugins/windows/handles.py index d09536868b..5f697bc234 100644 --- a/volatility/framework/plugins/windows/handles.py +++ b/volatility/framework/plugins/windows/handles.py @@ -5,7 +5,7 @@ import logging from typing import List, Optional, Dict -from volatility.framework import constants, exceptions, renderers, interfaces +from volatility.framework import constants, exceptions, renderers, interfaces, symbols from volatility.framework.configuration import requirements from volatility.framework.objects import utility from volatility.framework.renderers import format_hints @@ -80,20 +80,29 @@ def _get_item(self, handle_table_entry, handle_value): object_header.GrantedAccess = handle_table_entry.GrantedAccess except AttributeError: # starting with windows 8 - if handle_table_entry.LowValue == 0: - return None + is_64bit = symbols.symbol_table_is_64bit(self.context, self.config["nt_symbols"]) + + if is_64bit: + if handle_table_entry.LowValue == 0: + return None + + magic = 
self.find_sar_value() - magic = self.find_sar_value() + # is this the right thing to raise here? + if magic is None: + if has_capstone: + raise AttributeError("Unable to find the SAR value for decoding handle table pointers") + else: + raise exceptions.MissingModuleException( + "capstone", "Requires capstone to find the SAR value for decoding handle table pointers") + + offset = self._decode_pointer(handle_table_entry.LowValue, magic) + else: + if handle_table_entry.InfoTable == 0: + return None - # is this the right thing to raise here? - if magic is None: - if has_capstone: - raise AttributeError("Unable to find the SAR value for decoding handle table pointers") - else: - raise exceptions.MissingModuleException( - "capstone", "Requires capstone to find the SAR value for decoding handle table pointers") + offset = handle_table_entry.InfoTable & ~7 - offset = self._decode_pointer(handle_table_entry.LowValue, magic) # print("LowValue: {0:#x} Magic: {1:#x} Offset: {2:#x}".format(handle_table_entry.InfoTable, magic, offset)) object_header = self.context.object(self.config["nt_symbols"] + constants.BANG + "_OBJECT_HEADER", virtual, From 92bf92eeff7817edb5e3ae01d75382da6526e599 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 13 Jan 2021 01:26:01 +0000 Subject: [PATCH 021/294] Documentation: Make suggested changes from @NiklasBeierl --- doc/source/glossary.rst | 30 +++++++++++++++++------------- 1 file changed, 17 insertions(+), 13 deletions(-) diff --git a/doc/source/glossary.rst b/doc/source/glossary.rst index 50bc69490b..68bc41e4c1 100644 --- a/doc/source/glossary.rst +++ b/doc/source/glossary.rst @@ -162,27 +162,31 @@ Template .. _Translation Layer: Translation Layer - This is a specific type of :ref:`data layer`, a non-contiguous group of bytes that can be references by - a unique :ref:`offset` within the layer. In particular, translation layers translates (or :ref:`maps`) - requests made of it to a location within a lower layer. 
This can be either linear (a one-to-one mapping between bytes) - or non-linear (a group of bytes :ref:`maps` to a larger or smaller group of bytes. + This is a type of data layer which allows accessing data from lower layers using addresses different to those + used by the lower layers themselves. When accessing data in a translation layer, it translates (or :ref:`maps`) + addresses from its own :ref:`address space
` to the address space of the lower layer and returns the + corresponding data from the lower layer. Note that multiple addresses in the higher layer might refer to the same + address in the lower layer. Conversely, some addresses in the higher layer might have no corresponding address in the + lower layer at all. Translation layers most commonly handle the translation from virtual to physical addresses, + but can be used to translate data to and from a compressed form or translate data from a particular file format + into another format. .. _Type: Type This is a structure definition of multiple elements that expresses how data is laid out. Basic types define how - the data should be interpretted in terms of a run of bits (or more commonly a collection of 8 bits at a time, - called bytes). More complex types can be made up of other types combined together at specific locations known - as :ref:`structs` or repeated, known as :ref:`array`. They can even defined types at the same - location depending on the data itself, known as :ref:`Unions`. Once a type has been linked to a specific - chunk of data, the result is referred to as an :ref:`object`. + the data should be interpreted in terms of a run of bits (or more commonly a collection of 8 bits at a time, + called bytes). New types can be constructed by combining other types at specific relative offsets, forming something + called a :ref:`struct`, or by repeating the same type, known as an :ref:`array`. They can even + contain other types at the same offset depending on the data itself, known as :ref:`Unions`. Once a type + has been linked to a specific chunk of data, the result is referred to as an :ref:`object`. U - .. _Union: Union - A union is a type that can have can hold multiple different subtypes, which specifically overlap. 
A union is means - for holding two different types within the same size of data, meaning that not all types within the union will hold - valid data at the same time, more that depending on what the union is holding, a subset of the type will point to - accurate data (assumption no corruption). + A union is a type that can hold multiple different subtypes, whose relative offsets specifically overlap. + A union is a means for holding multiple different types within the same size of data, the relative offsets of the + types within the union specifically overlap. This means that the data in a union object is interpreted differently + based on the types of the union used to access it. From df89f1ceb9848ff01ec6483e6589de6a10666a00 Mon Sep 17 00:00:00 2001 From: Jan Date: Thu, 14 Jan 2021 19:49:49 +0100 Subject: [PATCH 022/294] adds symbols for win10 10586 x86 --- .../framework/plugins/windows/netscan.py | 4 +- ...-x86.json => netscan-win10-10240-x86.json} | 0 .../windows/netscan-win10-10586-x86.json | 722 ++++++++++++++++++ 3 files changed, 724 insertions(+), 2 deletions(-) rename volatility/framework/symbols/windows/{netscan-win10-x86.json => netscan-win10-10240-x86.json} (100%) create mode 100644 volatility/framework/symbols/windows/netscan-win10-10586-x86.json diff --git a/volatility/framework/plugins/windows/netscan.py b/volatility/framework/plugins/windows/netscan.py index abf36b6bad..69ad9d02d8 100644 --- a/volatility/framework/plugins/windows/netscan.py +++ b/volatility/framework/plugins/windows/netscan.py @@ -151,8 +151,8 @@ def determine_tcpip_version(cls, context: interfaces.context.ContextInterface, l (6, 1, 8400): "netscan-win7-x86", (6, 2, 9200): "netscan-win8-x86", (6, 3, 9600): "netscan-win81-x86", - (10, 0, 10240): "netscan-win10-x86", - (10, 0, 10586): "netscan-win10-x86", + (10, 0, 10240): "netscan-win10-10240-x86", + (10, 0, 10586): "netscan-win10-10586-x86", (10, 0, 14393): "netscan-win10-14393-x86", (10, 0, 15063): "netscan-win10-15063-x86", (10, 0, 
16299): "netscan-win10-15063-x86", diff --git a/volatility/framework/symbols/windows/netscan-win10-x86.json b/volatility/framework/symbols/windows/netscan-win10-10240-x86.json similarity index 100% rename from volatility/framework/symbols/windows/netscan-win10-x86.json rename to volatility/framework/symbols/windows/netscan-win10-10240-x86.json diff --git a/volatility/framework/symbols/windows/netscan-win10-10586-x86.json b/volatility/framework/symbols/windows/netscan-win10-10586-x86.json new file mode 100644 index 0000000000..7a6b9827e6 --- /dev/null +++ b/volatility/framework/symbols/windows/netscan-win10-10586-x86.json @@ -0,0 +1,722 @@ +{ + "base_types": { + "unsigned long": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + }, + "unsigned char": { + "kind": "char", + "size": 1, + "signed": false, + "endian": "little" + }, + "pointer": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + }, + "unsigned int": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + }, + "unsigned short": { + "kind": "int", + "size": 2, + "signed": false, + "endian": "little" + }, + "unsigned be short": { + "kind": "int", + "size": 2, + "signed": false, + "endian": "big" + }, + "long long": { + "endian": "little", + "kind": "int", + "signed": true, + "size": 8 + }, + "long": { + "kind": "int", + "size": 4, + "signed": false, + "endian": "little" + } + }, + "symbols": {}, + "user_types": { + "_TCP_SYN_ENDPOINT": { + "fields": { + "Owner": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_SYN_OWNER" + } + } + }, + "CreateTime": { + "offset": 0, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "ListEntry": { + "offset": 8, + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" + } + }, + "InetAF": { + "offset": 24, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + + } + }, + "LocalPort": { + "offset": 
60, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "RemotePort": { + "offset": 62, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "LocalAddr": { + "offset": 28, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + } + }, + "RemoteAddress": { + "offset": 40, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + }, + "kind": "struct", + "size": 64 + }, + "_TCP_TIMEWAIT_ENDPOINT": { + "fields": { + "CreateTime": { + "offset": 0, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "ListEntry": { + "offset": 20, + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" + } + }, + "InetAF": { + "offset": 12, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + + } + }, + "LocalPort": { + "offset": 28, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "RemotePort": { + "offset": 30, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "LocalAddr": { + "offset": 32, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + } + }, + "RemoteAddress": { + "offset": 36, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + }, + "kind": "struct", + "size": 40 + }, + "_UDP_ENDPOINT": { + "fields": { + "Owner": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + + } + }, + "CreateTime": { + "offset": 40, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "Next": { + "offset": 64, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_UDP_ENDPOINT" + } + } + }, + "LocalAddr": { + "offset": 48, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + } + }, + "InetAF": { + "offset": 52, + 
"type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + + } + }, + "Port": { + "offset": 60, + "type": { + "kind": "base", + "name": "unsigned be short" + } + } + }, + "kind": "struct", + "size": 74 + }, + "_TCP_LISTENER": { + "fields": { + "Owner": { + "offset": 20, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + + } + }, + "CreateTime": { + "offset": 32, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "LocalAddr": { + "offset": 52, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + + } + }, + "InetAF": { + "offset": 16, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + + } + }, + "Port": { + "offset": 62, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "Next": { + "offset": 72, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_TCP_LISTENER" + } + } + } + }, + "kind": "struct", + "size": 78 + }, + "_TCP_ENDPOINT": { + "fields": { + "Owner": { + "offset": 424, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + } + }, + "CreateTime": { + "offset": 432, + "type": { + "kind": "union", + "name": "_LARGE_INTEGER" + } + }, + "ListEntry": { + "offset": 32, + "type": { + "kind": "union", + "name": "nt_symbols!_LIST_ENTRY" + } + }, + "AddrInfo": { + "offset": 4, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_ADDRINFO" + } + } + }, + "InetAF": { + "offset": 0, + "type":{ + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INETAF" + } + } + }, + "LocalPort": { + "offset": 52, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "RemotePort": { + "offset": 54, + "type": { + "kind": "base", + "name": "unsigned be short" + } + }, + "State": { + "offset": 48, + "type": { + "kind": "enum", + "name": 
"TCPStateEnum" + } + } + }, + "kind": "struct", + "size": 448 + }, + "_LOCAL_ADDRESS": { + "fields": { + "pData": { + "offset": 12, + "type": { + "kind": "pointer", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_ADDRINFO": { + "fields": { + "Local": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_LOCAL_ADDRESS" + } + } + }, + "Remote": { + "offset": 12, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_IN_ADDR" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_IN_ADDR": { + "fields": { + "addr4": { + "offset": 0, + "type": { + "count": 4, + "subtype": { + "kind": "base", + "name": "unsigned char" + }, + "kind": "array" + } + }, + "addr6": { + "offset": 0, + "type": { + "count": 16, + "subtype": { + "kind": "base", + "name": "unsigned char" + }, + "kind": "array" + } + } + }, + "kind": "struct", + "size": 6 + }, + "_INETAF": { + "fields": { + "AddressFamily": { + "offset": 12, + "type": { + "kind": "base", + "name": "unsigned short" + } + } + }, + "kind": "struct", + "size": 16 + }, + "_SYN_OWNER": { + "fields": { + "Process": { + "offset": 24, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_EPROCESS" + } + } + } + }, + "kind": "struct", + "size": 14 + }, + "_LARGE_INTEGER": { + "fields": { + "HighPart": { + "offset": 4, + "type": { + "kind": "base", + "name": "long" + } + }, + "LowPart": { + "offset": 0, + "type": { + "kind": "base", + "name": "unsigned long" + } + }, + "QuadPart": { + "offset": 0, + "type": { + "kind": "base", + "name": "long long" + } + }, + "u": { + "offset": 0, + "type": { + "kind": "struct", + "name": "__unnamed_2" + } + } + }, + "kind": "union", + "size": 8 + }, + "_INET_COMPARTMENT_SET": { + "fields": { + "InetCompartment": { + "offset": 328, + "type": { + "kind": "pointer", + "subtype": { + "kind": 
"struct", + "name": "_INET_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 384 + }, + "_INET_COMPARTMENT": { + "fields": { + "ProtocolCompartment": { + "offset": 32, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PROTOCOL_COMPARTMENT" + } + } + } + }, + "kind": "struct", + "size": 48 + }, + "_PROTOCOL_COMPARTMENT": { + "fields": { + "PortPool": { + "offset": 0, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_INET_PORT_POOL" + } + } + } + }, + "kind": "struct", + "size": 16 + }, + "_PORT_ASSIGNMENT_ENTRY": { + "fields": { + "Entry": { + "offset": 4, + "type": { + "kind": "pointer", + "subtype": { + "kind": "base", + "name": "void" + } + } + } + }, + "kind": "struct", + "size": 8 + }, + "_PORT_ASSIGNMENT_LIST": { + "fields": { + "Assignments": { + "offset": 0, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_ENTRY" + } + } + } + }, + "kind": "struct", + "size": 4096 + }, + "_PORT_ASSIGNMENT": { + "fields": { + "InPaBigPoolBase": { + "offset": 16, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT_LIST" + } + } + } + }, + "kind": "struct", + "size": 24 + }, + "_INET_PORT_POOL": { + "fields": { + "PortAssignments": { + "offset": 152, + "type": { + "count": 256, + "kind": "array", + "subtype": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "_PORT_ASSIGNMENT" + } + } + } + }, + "PortBitMap": { + "offset": 144, + "type": { + "kind": "struct", + "name": "nt_symbols!_RTL_BITMAP" + } + } + }, + "kind": "struct", + "size": 11200 + }, + "_PARTITION": { + "fields": { + "Endpoints" : { + "offset": 8, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" + } + } + }, + "UnknownHashTable" : { + "offset": 12, + "type": { + "kind": "pointer", + "subtype": { + "kind": "struct", + "name": "nt_symbols!_RTL_DYNAMIC_HASH_TABLE" 
+ } + } + } + }, + "kind": "struct", + "size": 72 + }, + "_PARTITION_TABLE": { + "fields": { + "Partitions": { + "offset": 0, + "type": { + "count": 1, + "kind": "array", + "subtype": { + "kind": "struct", + "name": "_PARTITION" + } + } + } + }, + "kind": "struct", + "size": 72 + } + }, + "enums": { + "TCPStateEnum": { + "base": "long", + "constants": { + "CLOSED": 0, + "LISTENING": 1, + "SYN_SENT": 2, + "SYN_RCVD": 3, + "ESTABLISHED": 4, + "FIN_WAIT1": 5, + "FIN_WAIT2": 6, + "CLOSE_WAIT": 7, + "CLOSING": 8, + "LAST_ACK": 9, + "TIME_WAIT": 12, + "DELETE_TCB": 13 + }, + "size": 4 + } + }, + "metadata": { + "producer": { + "version": "0.0.1", + "name": "japhlange-by-hand", + "datetime": "2021-01-14T18:28:34" + }, + "format": "6.0.0" + } +} From f13f5e438d06d0b22281e9750361e263e2bf1a66 Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Tue, 19 Jan 2021 14:27:24 -0600 Subject: [PATCH 023/294] Address comments --- volatility/framework/plugins/mac/lsmod.py | 21 ++++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-) diff --git a/volatility/framework/plugins/mac/lsmod.py b/volatility/framework/plugins/mac/lsmod.py index 66dc82e5a8..0372eb79bb 100644 --- a/volatility/framework/plugins/mac/lsmod.py +++ b/volatility/framework/plugins/mac/lsmod.py @@ -3,7 +3,7 @@ # """A module containing a collection of plugins that produce data typically found in Mac's lsmod command.""" -from volatility.framework import renderers, interfaces, contexts +from volatility.framework import exceptions, renderers, interfaces, contexts from volatility.framework.configuration import requirements from volatility.framework.interfaces import plugins from volatility.framework.objects import utility @@ -42,11 +42,17 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: kmod_ptr = kernel.object_from_symbol(symbol_name = "kmod") - kmod = kmod_ptr.dereference().cast("kmod_info") + try: + kmod = kmod_ptr.dereference().cast("kmod_info") + except 
exceptions.InvalidAddressException: + return [] yield kmod - kmod = kmod.next + try: + kmod = kmod.next + except exceptions.InvalidAddressException: + return [] seen = set() @@ -54,14 +60,19 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: kmod not in seen and \ len(seen) < 1024: - if not kernel_layer.is_valid(kmod.dereference().vol.offset, kmod.dereference().vol.size): + kmod_obj = kmod.dereference() + + if not kernel_layer.is_valid(kmod_obj.vol.offset, kmod_obj.vol.size): break seen.add(kmod) yield kmod - kmod = kmod.next + try: + kmod = kmod.next + except exceptions.InvalidAddressException: + return def _generator(self): for module in self.list_modules(self.context, self.config['primary'], self.config['darwin']): From e44094fb7389bab38e80f632d9d483847e14f07c Mon Sep 17 00:00:00 2001 From: cstation Date: Tue, 19 Jan 2021 21:55:01 +0100 Subject: [PATCH 024/294] Fix VMware layer tag reading The VMware layer did not handle tags having a different data-size. This fixes the majority of cases, since the needed tags for determining the memory regions will often be located in the regular-sized tags. 
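For reference, the tag record layout these sizes describe can be sketched in isolation as below. This is a simplified, hypothetical illustration for little-endian input — the standalone `parse_tag` function and its names are not part of the actual layer code, and `version` stands in for the low nibble of the first magic byte that the layer reads from the header:

```python
import struct

def parse_tag(buf: bytes, offset: int, version: int = 1):
    """Sketch of one VMware snapshot tag record (illustrative only).

    Layout: 1 byte of flags, 1 byte of name length, the name, zero to
    three 4-byte indices, then the data.  The low six bits of the flags
    give the inline data size; the special values 62 and 63 signal an
    extended record whose explicit size fields are 4 bytes on version 0
    files and 8 bytes otherwise.
    """
    flags = buf[offset]
    name_len = buf[offset + 1]
    name = buf[offset + 2:offset + 2 + name_len].decode("ascii")
    indices_len = (flags >> 6) & 3
    indices = struct.unpack_from("<%dI" % indices_len, buf, offset + 2 + name_len)
    body = offset + 2 + name_len + indices_len * 4
    data_len = flags & 0x3F
    if data_len in (62, 63):
        # Extended tag: data size, decompressed ("memory") size, two
        # bytes of padding, then data_size bytes of payload.
        size_len = 4 if version == 0 else 8
        fmt = "<I" if size_len == 4 else "<Q"
        data_size = struct.unpack_from(fmt, buf, body)[0]
        start = body + 2 * size_len + 2
        data = buf[start:start + data_size]
        next_offset = start + data_size
    else:
        # Regular tag: data_len bytes of payload stored inline.
        data = buf[body:body + data_len]
        next_offset = body + data_len
    return name, tuple(indices), data, next_offset
```

Regular tags carry their payload inline, so a reader only needs `flags & 0x3f` to step to the next record; the 62/63 sizes introduce the longer self-describing records that this change initially skips over.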
--- volatility/framework/layers/vmware.py | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/volatility/framework/layers/vmware.py b/volatility/framework/layers/vmware.py index fc412fb2e0..d7b74332fe 100644 --- a/volatility/framework/layers/vmware.py +++ b/volatility/framework/layers/vmware.py @@ -48,8 +48,7 @@ def _read_header(self) -> None: if magic not in [b"\xD2\xBE\xD2\xBE"]: raise VmwareFormatException(self.name, "Wrong magic bytes for Vmware layer: {}".format(repr(magic))) - # TODO: Change certain structure sizes based on the version - # version = magic[1] & 0xf + version = magic[0] & 0xf group_size = struct.calcsize(self.group_structure) @@ -81,12 +80,20 @@ def _read_header(self) -> None: self._context.object("vmware!unsigned int", offset = offset + name_len + 2 + (index * index_len), layer_name = self._meta_layer)) - data = self._context.object("vmware!unsigned int", + data_len = flags & 0x3f + + # TODO: Read special data sizes (signalling a longer data stream) properly instead of skipping them + if data_len in (62, 63): + data_len = 4 if version == 0 else 8 + offset += 2 + name_len + (indicies_len * index_len) + 2 * data_len + continue + + data = self._context.object("vmware!unsigned int" if data_len == 4 else "vmware!unsigned long long", layer_name = self._meta_layer, offset = offset + 2 + name_len + (indicies_len * index_len)) tags[(name, tuple(indicies))] = (flags, data) offset += 2 + name_len + (indicies_len * - index_len) + self._context.symbol_space.get_type("vmware!unsigned int").size + index_len) + data_len if tags[("regionsCount", ())][1] == 0: raise VmwareFormatException(self.name, "VMware VMEM is not split into regions") From 9ef341907f102a77d36958d70565b7634a314ac8 Mon Sep 17 00:00:00 2001 From: cstation Date: Thu, 21 Jan 2021 10:08:20 +0100 Subject: [PATCH 025/294] Improve VMware Tag reading Fix magic headers and properly read irregular-sized tags --- volatility/framework/layers/vmware.py | 50 
++++++++++++++++++--------- 1 file changed, 33 insertions(+), 17 deletions(-) diff --git a/volatility/framework/layers/vmware.py b/volatility/framework/layers/vmware.py index d7b74332fe..2150edc982 100644 --- a/volatility/framework/layers/vmware.py +++ b/volatility/framework/layers/vmware.py @@ -36,6 +36,10 @@ def _load_segments(self) -> None: """Loads up the segments from the meta_layer.""" self._read_header() + @staticmethod + def _choose_type(size: int) -> str: + return "vmware!unsigned int" if size == 4 else "vmware!unsigned long long" + def _read_header(self) -> None: """Checks the vmware header to make sure it's valid.""" if "vmware" not in self._context.symbol_space: @@ -45,11 +49,10 @@ def _read_header(self) -> None: header_size = struct.calcsize(self.header_structure) data = meta_layer.read(0, header_size) magic, unknown, groupCount = struct.unpack(self.header_structure, data) - if magic not in [b"\xD2\xBE\xD2\xBE"]: + if magic not in [b"\xD0\xBE\xD2\xBE", b"\xD1\xBA\xD1\xBA", b"\xD2\xBE\xD2\xBE", b"\xD3\xBE\xD3\xBE"]: raise VmwareFormatException(self.name, "Wrong magic bytes for Vmware layer: {}".format(repr(magic))) version = magic[0] & 0xf - group_size = struct.calcsize(self.group_structure) groups = {} @@ -73,27 +76,40 @@ def _read_header(self) -> None: layer_name = self._meta_layer, offset = offset + 2, max_length = name_len) - indicies_len = (flags >> 6) & 3 - indicies = [] - for index in range(indicies_len): - indicies.append( + indices_len = (flags >> 6) & 3 + indices = [] + for index in range(indices_len): + indices.append( self._context.object("vmware!unsigned int", offset = offset + name_len + 2 + (index * index_len), layer_name = self._meta_layer)) data_len = flags & 0x3f - - # TODO: Read special data sizes (signalling a longer data stream) properly instead of skipping them - if data_len in (62, 63): + + if data_len in [62, 63]: # Handle special data sizes that indicate a longer data stream data_len = 4 if version == 0 else 8 - offset += 2 + 
name_len + (indicies_len * index_len) + 2 * data_len - continue - - data = self._context.object("vmware!unsigned int" if data_len == 4 else "vmware!unsigned long long", + # Read the size of the data + data_size = self._context.object(self._choose_type(data_len), + layer_name = self._meta_layer, + offset = offset + 2 + name_len + (indices_len * index_len)) + # Read the size of the data when it would be decompressed + data_mem_size = self._context.object(self._choose_type(data_len), layer_name = self._meta_layer, - offset = offset + 2 + name_len + (indicies_len * index_len)) - tags[(name, tuple(indicies))] = (flags, data) - offset += 2 + name_len + (indicies_len * - index_len) + data_len + offset = offset + 2 + name_len + (indices_len * index_len) + data_len) + # Skip two bytes of padding (as it seems?) + # Read the actual data + data = self._context.object("vmware!bytes", + layer_name = self._meta_layer, + offset = offset + 2 + name_len + (indices_len * index_len) + + 2 * data_len + 2, + length = data_size) + offset += 2 + name_len + (indices_len * index_len) + 2 * data_len + 2 + data_size + else: # Handle regular cases + data = self._context.object(self._choose_type(data_len), + layer_name = self._meta_layer, + offset = offset + 2 + name_len + (indices_len * index_len)) + offset += 2 + name_len + (indices_len * index_len) + data_len + + tags[(name, tuple(indices))] = (flags, data) if tags[("regionsCount", ())][1] == 0: raise VmwareFormatException(self.name, "VMware VMEM is not split into regions") From 2c077c08c2e89445bc593c5091027dca447ea797 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 24 Jan 2021 16:44:39 +0000 Subject: [PATCH 026/294] Windows: Catch more PDB URL errors --- volatility/framework/symbols/windows/pdbconv.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility/framework/symbols/windows/pdbconv.py b/volatility/framework/symbols/windows/pdbconv.py index d2dbcfc508..8de01f53c1 100644 --- 
a/volatility/framework/symbols/windows/pdbconv.py +++ b/volatility/framework/symbols/windows/pdbconv.py @@ -935,7 +935,7 @@ def retreive_pdb(self, try: vollog.debug("Attempting to retrieve {}".format(url + suffix)) result = resources.ResourceAccessor(progress_callback).open(url + suffix) - except error.HTTPError as excp: + except (error.HTTPError, error.URLError) as excp: vollog.debug("Failed with {}".format(excp)) if result: break From f5504e19c6cfaf833b0c606799447fec03a48640 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 24 Jan 2021 16:58:00 +0000 Subject: [PATCH 027/294] Windows: Improve pdb downloading messages --- volatility/framework/symbols/windows/pdbutil.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/volatility/framework/symbols/windows/pdbutil.py b/volatility/framework/symbols/windows/pdbutil.py index 7e1558304a..df83a8c93e 100644 --- a/volatility/framework/symbols/windows/pdbutil.py +++ b/volatility/framework/symbols/windows/pdbutil.py @@ -83,6 +83,9 @@ def load_windows_symbol_table(cls, if not isf_path: vollog.debug("Required symbol library path not found: {}".format(filter_string)) + vollog.info("The symbols can be downloaded later using pdbconv.py -p {} -g {}".format( + pdb_name.strip('\x00'), + guid.upper() + str(age))) return None vollog.debug("Using symbol library: {}".format(filter_string)) @@ -200,7 +203,7 @@ def download_pdb_isf(cls, # After we've successfully written it out, record the fact so we don't clear it out data_written = True else: - vollog.warning("Symbol file could not be found on remote server" + (" " * 100)) + vollog.warning("Symbol file could not be downloaded from remote server" + (" " * 100)) break except PermissionError: vollog.warning("Cannot write necessary symbol file, please check permissions on {}".format( From a13fa5ff6627dfeca68b7964674d87aadcdd05e4 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 25 Jan 2021 00:04:57 +0000 Subject: [PATCH 028/294] Objects: Use 3.5.3 compatible format 
strings --- volatility/framework/objects/__init__.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/volatility/framework/objects/__init__.py b/volatility/framework/objects/__init__.py index dffc031de4..ad2ef21b55 100644 --- a/volatility/framework/objects/__init__.py +++ b/volatility/framework/objects/__init__.py @@ -651,10 +651,10 @@ def __repr__(self) -> str: """Describes the object appropriately""" extras = member_name = '' if self.vol.native_layer_name != self.vol.layer_name: - extras += f' (Native: {self.vol.native_layer_name})' + extras += " (Native: {})".format(self.vol.native_layer_name) if self.vol.member_name: - member_name = f' (.{self.vol.member_name})' - return f'<{self.__class__.__name__} {self.vol.type_name}{member_name}: {self.vol.layer_name} @ 0x{self.vol.offset:x} #{self.vol.size}{extras}>' + member_name = " (.{})".format(self.vol.member_name) + return "<{} {}{}: {} @ 0x{:x} #{}{}>".format(self.__class__.__name__, self.vol.type_name, member_name, self.vol.layer_name, self.vol.offset, self.vol.size, extras) class VolTemplateProxy(interfaces.objects.ObjectInterface.VolTemplateProxy): From 6b7177d285d58770115d8aed881ffb5f4f9fa2df Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 25 Jan 2021 00:05:21 +0000 Subject: [PATCH 029/294] Volshell: Fix recent script option for non-generic volshell --- volatility/cli/volshell/generic.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/volatility/cli/volshell/generic.py b/volatility/cli/volshell/generic.py index de43f8c717..be76eabe90 100644 --- a/volatility/cli/volshell/generic.py +++ b/volatility/cli/volshell/generic.py @@ -86,7 +86,9 @@ def run(self, additional_locals: Dict[str, Any] = None) -> interfaces.renderers. 
sys.ps1 = "({}) >>> ".format(self.current_layer) self.__console = code.InteractiveConsole(locals = self._construct_locals_dict()) - if self.config['script'] is not None: + # Since we have to do work to add the option only once for all different modes of volshell, we can't + # rely on the default having been set + if self.config.get('script', None) is not None: self.run_script(location = self.config['script']) self.__console.interact(banner = banner) From 04629182a783c68e418f0e7f094067774df32bf5 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 26 Jan 2021 00:39:02 +0000 Subject: [PATCH 030/294] Pdbconv: Set a more useful default Fixes #434 --- .../framework/symbols/windows/pdbconv.py | 26 ++++++++++++++++--- 1 file changed, 23 insertions(+), 3 deletions(-) diff --git a/volatility/framework/symbols/windows/pdbconv.py b/volatility/framework/symbols/windows/pdbconv.py index 8de01f53c1..3d7bc4e286 100644 --- a/volatility/framework/symbols/windows/pdbconv.py +++ b/volatility/framework/symbols/windows/pdbconv.py @@ -2,9 +2,12 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import binascii +import bz2 import datetime +import gzip import json import logging +import lzma import os from bisect import bisect from typing import Tuple, Dict, Any, Optional, Union, List @@ -971,7 +974,7 @@ def __call__(self, progress: Union[int, float], description: str = None): parser = argparse.ArgumentParser( description = "Read PDB files and convert to Volatility 3 Intermediate Symbol Format") - parser.add_argument("-o", "--output", metavar = "OUTPUT", help = "Filename for data output", required = True) + parser.add_argument("-o", "--output", metavar = "OUTPUT", help = "Filename for data output", default = None) file_group = parser.add_argument_group("file", description = "File-based conversion of PDB to ISF") file_group.add_argument("-f", "--file", metavar = "FILE", help = "PDB file to translate to ISF") data_group = parser.add_argument_group("data", 
description = "Convert based on a GUID and filename pattern") @@ -1010,8 +1013,25 @@ def __call__(self, progress: Union[int, float], description: str = None): convertor = PdbReader(ctx, location, database_name = args.pattern, progress_callback = pg_cb) - with open(args.output, "w") as f: - json.dump(convertor.get_json(), f, indent = 2, sort_keys = True) + converted_json = convertor.get_json() + if args.output is None: + guid = args.guid[:-1] or converted_json['metadata']['windows']['pdb']['GUID'] + age = args.guid[-1:] or converted_json['metadata']['windows']['pdb']['age'] + args.output = "{}-{}.json.xz".format(guid, age) + + output_url = os.path.abspath(args.output) + + open_method = open + if args.output.endswith('.gz'): + open_method = gzip.open + elif args.output.endswith('.bz2'): + open_method = bz2.open + elif args.output.endswith('.xz'): + open_method = lzma.open + + with open_method(output_url, "wb") as f: + json_string = json.dumps(converted_json, indent = 2, sort_keys = True) + f.write(bytes(json_string, 'latin-1')) if args.keep: print("Temporary PDB file: {}".format(filename)) From 55bbd115ceb4fdb7478cf39ea793b153034cc7b1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 26 Jan 2021 00:45:42 +0000 Subject: [PATCH 031/294] Pdbconv: Fix up guid when no argument provided --- volatility/framework/symbols/windows/pdbconv.py | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/volatility/framework/symbols/windows/pdbconv.py b/volatility/framework/symbols/windows/pdbconv.py index 3d7bc4e286..b6f8d5794c 100644 --- a/volatility/framework/symbols/windows/pdbconv.py +++ b/volatility/framework/symbols/windows/pdbconv.py @@ -1015,8 +1015,12 @@ def __call__(self, progress: Union[int, float], description: str = None): converted_json = convertor.get_json() if args.output is None: - guid = args.guid[:-1] or converted_json['metadata']['windows']['pdb']['GUID'] - age = args.guid[-1:] or converted_json['metadata']['windows']['pdb']['age'] + if 
args.guid: + guid = args.guid[:-1] + age = args.guid[-1:] + else: + guid = converted_json['metadata']['windows']['pdb']['GUID'] + age = converted_json['metadata']['windows']['pdb']['age'] args.output = "{}-{}.json.xz".format(guid, age) output_url = os.path.abspath(args.output) From 69ca85b50a1ee3a319920d178f34f0fd50a47efc Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 27 Jan 2021 12:28:35 +0000 Subject: [PATCH 032/294] Layers: Add in check and document scanner behaviour --- volatility/framework/interfaces/layers.py | 3 +++ volatility/framework/layers/scanners/__init__.py | 2 ++ 2 files changed, 5 insertions(+) diff --git a/volatility/framework/interfaces/layers.py b/volatility/framework/interfaces/layers.py index 49d6920aa9..75f900a9fa 100644 --- a/volatility/framework/interfaces/layers.py +++ b/volatility/framework/interfaces/layers.py @@ -328,6 +328,9 @@ def _scan_chunk(self, scanner: 'ScannerInterface', progress: 'ProgressValue', vollog.debug("Invalid address in layer {} found scanning {} at address {:x}".format( layer_name, self.name, address)) + if len(data) > scanner.chunk_size + scanner.overlap: + vollog.debug("Scan chunk too large: {}".format(hex(len(data)))) + progress.value = chunk_end return list(scanner(data, chunk_end - len(data))) diff --git a/volatility/framework/layers/scanners/__init__.py b/volatility/framework/layers/scanners/__init__.py index 45208161d3..6ac8fc915f 100644 --- a/volatility/framework/layers/scanners/__init__.py +++ b/volatility/framework/layers/scanners/__init__.py @@ -21,6 +21,8 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]: where the needle is found.""" find_pos = data.find(self.needle) while find_pos >= 0: + # Ensure that if we're in the overlap, we don't report it + # It'll be returned when the next block is scanned if find_pos < self.chunk_size: yield find_pos + data_offset find_pos = data.find(self.needle, find_pos + 1) From 2b1c68679b6c49c11ae70bb3de97fff1f7f93f9e Mon Sep 17 00:00:00 
2001 From: Mike Auty Date: Wed, 27 Jan 2021 23:51:29 +0000 Subject: [PATCH 033/294] Mac: Fix merging typos --- volatility3/framework/plugins/mac/lsmod.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/mac/lsmod.py b/volatility3/framework/plugins/mac/lsmod.py index 7e31a3e322..fe5d25adf1 100644 --- a/volatility3/framework/plugins/mac/lsmod.py +++ b/volatility3/framework/plugins/mac/lsmod.py @@ -44,10 +44,12 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: kmod_ptr = kernel.object_from_symbol(symbol_name = "kmod") try: - kmod = kmod_ptr.dereference().cast("kmod_info") + kmod = kmod_ptr.dereference().cast("kmod_info") except exceptions.InvalidAddressException: return [] - yield kmod + + yield kmod + try: kmod = kmod.next except exceptions.InvalidAddressException: From b6f8e36fb514970c91438d2481e863a073839c2c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 29 Jan 2021 09:17:25 +0000 Subject: [PATCH 034/294] Documentation: Ensure the JSON files are included --- MANIFEST.in | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/MANIFEST.in b/MANIFEST.in index 4e5bbf6a1f..504c7d89ad 100644 --- a/MANIFEST.in +++ b/MANIFEST.in @@ -2,5 +2,5 @@ prune development include * .* include doc/make.bat doc/Makefile recursive-include doc/source * -recursive-include volatility *.json -recursive-exclude doc/source volatility*.rst +recursive-include volatility3 *.json +recursive-exclude doc/source volatility3.*.rst From 6e0f785ccc04cae1b7aaa974c9d480953da6bae0 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 30 Jan 2021 14:15:11 +0000 Subject: [PATCH 035/294] CLI: Add the ability to set the CACHE_PATH --- volatility3/cli/__init__.py | 7 +++++++ volatility3/cli/volshell/__init__.py | 7 +++++++ 2 files changed, 14 insertions(+) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 96d4bc5997..290e08ddc0 100644 --- a/volatility3/cli/__init__.py +++ 
b/volatility3/cli/__init__.py @@ -160,6 +160,10 @@ def run(self): help = "Clears out all short-term cached items", default = False, action = 'store_true') + parser.add_argument("--cache-path", + help = "Change the default path ({}) used to store the cache".format(constants.CACHE_PATH), + default = constants.CACHE_PATH, + type = str) # We have to filter out help, otherwise parse_known_args will trigger the help message before having # processed the plugin choice or had the plugin subparser added. @@ -179,6 +183,9 @@ def run(self): volatility3.symbols.__path__ = [os.path.abspath(p) for p in partial_args.symbol_dirs.split(";")] + constants.SYMBOL_BASEPATHS + if partial_args.cache_path: + constants.CACHE_PATH = partial_args.cache_path + if partial_args.log: file_logger = logging.FileHandler(partial_args.log) file_logger.setLevel(1) diff --git a/volatility3/cli/volshell/__init__.py b/volatility3/cli/volshell/__init__.py index 6dde839057..b67cc8a6ce 100644 --- a/volatility3/cli/volshell/__init__.py +++ b/volatility3/cli/volshell/__init__.py @@ -90,6 +90,10 @@ def run(self): help = "Clears out all short-term cached items", default = False, action = 'store_true') + parser.add_argument("--cache-path", + help = "Change the default path ({}) used to store the cache".format(constants.CACHE_PATH), + default = constants.CACHE_PATH, + type = str) # Volshell specific flags os_specific = parser.add_mutually_exclusive_group(required = False) @@ -113,6 +117,9 @@ def run(self): volatility3.symbols.__path__ = [os.path.abspath(p) for p in partial_args.symbol_dirs.split(";")] + constants.SYMBOL_BASEPATHS + if partial_args.cache_path: + constants.CACHE_PATH = partial_args.cache_path + vollog.info("Volatility plugins path: {}".format(volatility3.plugins.__path__)) vollog.info("Volatility symbols path: {}".format(volatility3.symbols.__path__)) From 22561368bdf31fe9aae46b71317f6b7c80bc68a5 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 30 Jan 2021 17:24:08 +0000 Subject: [PATCH 036/294] 
Documentation: Ensure all references in the docs are accurate --- doc/source/basics.rst | 56 ++++++++++++------------- doc/source/complex-plugin.rst | 68 +++++++++++++++---------------- doc/source/simple-plugin.rst | 56 ++++++++++++------------- doc/source/using-as-a-library.rst | 46 ++++++++++----------- doc/source/vol2to3.rst | 18 ++++---- 5 files changed, 122 insertions(+), 122 deletions(-) diff --git a/doc/source/basics.rst b/doc/source/basics.rst index 77dedf63b4..d493c61b37 100644 --- a/doc/source/basics.rst +++ b/doc/source/basics.rst @@ -7,7 +7,7 @@ Volatility splits memory analysis down to several components: * Templates and Objects * Symbol Tables -Volatility 3 stores all of these within a :py:class:`Context `, +Volatility 3 stores all of these within a :py:class:`Context `, which acts as a container for all the various layers and tables necessary to conduct memory analysis. Memory layers @@ -21,8 +21,8 @@ two other sources. These are typically handled by programs that process file fo processor, but these are all translations (either in the geometric or linguistic sense) of the original data. In Volatility 3 this is represented by a directed graph, whose end nodes are -:py:class:`DataLayers ` and whose internal nodes are -specifically called a :py:class:`TranslationLayer `. +:py:class:`DataLayers ` and whose internal nodes are +specifically called a :py:class:`TranslationLayer `. In this way, a raw memory image in the LiME file format and a page file can be combined to form a single Intel virtual memory layer. 
When requesting addresses from the Intel layer, it will use the Intel memory mapping algorithm, along with the address of the directory table base or page table map, to translate that @@ -39,17 +39,17 @@ Templates and Objects Once we can address contiguous chunks of memory with a means to translate a virtual address (as seen by the programs) into the actual data used by the processor, we can start pulling out -:py:class:`Objects ` by taking a -:py:class:`~volatility.framework.interfaces.objects.Template` and constructing -it on the memory layer at a specific offset. A :py:class:`~volatility.framework.interfaces.objects.Template` contains +:py:class:`Objects ` by taking a +:py:class:`~volatility3.framework.interfaces.objects.Template` and constructing +it on the memory layer at a specific offset. A :py:class:`~volatility3.framework.interfaces.objects.Template` contains all the information you can know about the structure of the object without actually being populated by any data. -As such a :py:class:`~volatility.framework.interfaces.objects.Template` can tell you the size of a structure and its +As such a :py:class:`~volatility3.framework.interfaces.objects.Template` can tell you the size of a structure and its members, how far into the structure a particular member lives and potentially what various values in that field would mean, but not what resides in a particular member. -Using a :py:class:`~volatility.framework.interfaces.objects.Template` on a memory layer at a particular offset, an -:py:class:`Object ` can be constructed. In Volatility 3, once an -:py:class:`Object ` has been created, the data has been read from the +Using a :py:class:`~volatility3.framework.interfaces.objects.Template` on a memory layer at a particular offset, an +:py:class:`Object ` can be constructed. In Volatility 3, once an +:py:class:`Object ` has been created, the data has been read from the layer and is not read again. 
An object allows its members to be interrogated and in particular allows pointers to be followed, providing easy access to the data contained in the object. @@ -62,23 +62,23 @@ Symbol Tables ------------- Most compiled programs know of their own templates, and define the structure (and location within the program) of these -templates as a :py:class:`Symbol `. A -:py:class:`Symbol ` is often an address and a template and can +templates as a :py:class:`Symbol `. A +:py:class:`Symbol ` is often an address and a template and can be used to refer to either independently. Lookup tables of these symbols are often produced as debugging information alongside the compilation of the program. Volatility 3 provides access to these through a -:py:class:`SymbolTable `, many of which can be collected -within a :py:class:`~volatility.framework.contexts.Context` as a :py:class:`SymbolSpace `. -A :py:class:`~volatility.framework.contexts.Context` can store only one :py:class:`~volatility.framework.symbols.SymbolSpace` -at a time, although a :py:class:`~volatility.framework.symbols.SymbolSpace` can store as -many :py:class:`~volatility.framework.symbols.SymbolTable` items as necessary. +:py:class:`SymbolTable `, many of which can be collected +within a :py:class:`~volatility3.framework.contexts.Context` as a :py:class:`SymbolSpace `. +A :py:class:`~volatility3.framework.contexts.Context` can store only one :py:class:`~volatility.framework.symbols.SymbolSpace` +at a time, although a :py:class:`~volatility3.framework.symbols.SymbolSpace` can store as +many :py:class:`~volatility3.framework.symbols.SymbolTable` items as necessary. Volatility 3 uses the de facto naming convention for symbols of `module!symbol` to refer to them. 
It reads them from its own JSON formatted file, which acts as a common intermediary between Windows PDB files, Linux DWARF files, other symbol formats and the internal Python format that Volatility 3 uses to represent -a :py:class:`~volatility.framework.interfaces.objects.Template` or -a :py:class:`Symbol `. +a :py:class:`~volatility3.framework.interfaces.objects.Template` or +a :py:class:`Symbol `. -.. note:: Volatility 2's name for a :py:class:`~volatility.framework.symbols.SymbolSpace` was a profile, but it could +.. note:: Volatility 2's name for a :py:class:`~volatility3.framework.symbols.SymbolSpace` was a profile, but it could not differentiate between symbols from different modules and required special handling for 32-bit programs that used Wow64 on Windows. This meant that all symbols lived in a single namespace with the possibility of symbol name collisions. It read the symbols using a format called *vtypes*, written in Python code directly. @@ -88,18 +88,18 @@ Plugins ------- A plugin acts as a means of requesting data from the user interface (and so the user) and then using it to carry out a -specific form of analysis on the :py:class:`Context ` +specific form of analysis on the :py:class:`Context ` (containing whatever symbol tables and memory layers it may). The means of communication between the user interface and -the library is the configuration tree, which is used by components within the :py:class:`~volatility.framework.contexts.Context` +the library is the configuration tree, which is used by components within the :py:class:`~volatility3.framework.contexts.Context` to store configurable data. After the plugin has been run, it then returns the results in a specific format known as a -:py:class:`~volatility.framework.interfaces.renderers.TreeGrid`. This ensures that the data can be handled by consumers of +:py:class:`~volatility3.framework.interfaces.renderers.TreeGrid`. 
This ensures that the data can be handled by consumers of the library, without knowing exactly what the data is or how it's formatted. Output Renderers ---------------- User interfaces can choose how best to present the output of the results to their users. The library always responds from -every plugin with a :py:class:`~volatility.framework.renderers.TreeGrid`, and the user interface can then determine how +every plugin with a :py:class:`~volatility3.framework.renderers.TreeGrid`, and the user interface can then determine how best to display it. For the Command Line Interface, that might be via text output as a table, or it might output to an SQLite database or a CSV file. For a web interface, the best output is probably as JSON where it could be displayed as a table, or inserted into a database like Elastic Search and trawled using an existing frontend such as Kibana. @@ -111,9 +111,9 @@ Configuration Tree ------------------ The configuration tree acts as the interface between the calling program and Volatility 3 library. Elements of the -library (such as a :py:class:`Plugin `, -a :py:class:`TranslationLayer `, -an :py:class:`Automagic `, etc.) can use the configuration +library (such as a :py:class:`Plugin `, +a :py:class:`TranslationLayer `, +an :py:class:`Automagic `, etc.) can use the configuration tree to inform the calling program of the options they require and/or optionally support, and allows the calling program to provide that information when the library is then called. @@ -122,7 +122,7 @@ Automagic There are certain setup tasks that establish the context in a way favorable to a plugin before it runs, removing several tasks that are repetitive and also easy to get wrong. 
These are called -:py:class:`Automagic `, since they do things like magically +:py:class:`Automagic `, since they do things like magically taking a raw memory image and automatically providing the plugin with an appropriate Intel translation layer and an accurate symbol table without either the plugin or the calling program having to specify all the necessary details. diff --git a/doc/source/complex-plugin.rst b/doc/source/complex-plugin.rst index 23a45f0791..f06b398e8a 100644 --- a/doc/source/complex-plugin.rst +++ b/doc/source/complex-plugin.rst @@ -6,17 +6,17 @@ which are discussed below. Writing Reusable Methods ------------------------ -Classes which inherit from :py:class:`~volatility.framework.interfaces.plugins.PluginInterface` all have a :py:meth:`run()` method -which takes no parameters and will return a :py:class:`~volatility.framework.interfaces.renderers.TreeGrid`. Since most useful +Classes which inherit from :py:class:`~volatility3.framework.interfaces.plugins.PluginInterface` all have a :py:meth:`run()` method +which takes no parameters and will return a :py:class:`~volatility3.framework.interfaces.renderers.TreeGrid`. Since most useful functions are parameterized, to provide parameters to a plugin the `configuration` for the context must be appropriately manipulated. There is scope for this, in order to run multiple plugins (see `Writing plugins that run other plugins`) but a much simpler method is to provide a parameterized `classmethod` within the plugin, which will allow the method to yield whatever kind of output it will generate and take whatever parameters it might need. This is how processes are listed, which is an often used function. 
The code lives within the -:py:class:`~volatility.plugins.windows.pslist.PsList` plugin but can be used by other plugins by providing the +:py:class:`~volatility3.plugins.windows.pslist.PsList` plugin but can be used by other plugins by providing the appropriate parameters (see -:py:meth:`~volatility.plugins.windows.pslist.PsList.list_processes`). +:py:meth:`~volatility3.plugins.windows.pslist.PsList.list_processes`). It is up to the author of a plugin to validate that any required plugins are present and are the appropriate version. Writing plugins that run other plugins @@ -34,7 +34,7 @@ available plugins that feature a Timeliner interface). This can be achieved wit This code will first generate suitable automagics for running against the context. Unfortunately this must be re-run for each plugin in order to populate the context's configuration correctly based on the plugin's requirements (which may vary between plugins). Once the automagics have been constructed, the plugin can be instantiated using the helper function -:py:func:`~volatility.framework.plugins.construct_plugin` providing: +:py:func:`~volatility3.framework.plugins.construct_plugin` providing: * the base context (containing the configuration and any already loaded layers or symbol tables), * the plugin class to run, @@ -43,7 +43,7 @@ between plugins). Once the automagics have been constructed, the plugin can be * an open method for the plugin to create files during the run With the constructed plugin, it can either be run by calling its -:py:meth:`~volatility.framework.interfaces.plugins.PluginInterface.run` method, or any other known method can +:py:meth:`~volatility3.framework.interfaces.plugins.PluginInterface.run` method, or any other known method can be invoked on it. Writing plugins that output files @@ -55,7 +55,7 @@ an abstraction layer is used. 
The user interface specifies an open_method (which is actually a class constructor that can double as a python ContextManager, so it can be used by the python `with` keyword). This is set on the plugin using `plugin.set_open_method` and can then be called or accessed using `plugin.open(preferred_filename)`. There are no additional options -that can be set on the filename, and a :py:class:`~volatility.framework.interfaces.plugins.FileHandlerInterface` is the result. +that can be set on the filename, and a :py:class:`~volatility3.framework.interfaces.plugins.FileHandlerInterface` is the result. This mimics an `IO[bytes]` object, which closely mimics a standard python file-like object. As such code for outputting to a file would be expected to look something like: @@ -73,10 +73,10 @@ closed to allow the preferred filename to be changed (or data to be added/modifi Writing Scanners ---------------- -Scanners are objects that adhere to the :py:class:`~volatility.framework.interfaces.layers.ScannerInterface`. They are -passed to the :py:meth:`~volatility.framework.interfaces.layers.TranslationLayerInterface.scan` method on layers which will +Scanners are objects that adhere to the :py:class:`~volatility3.framework.interfaces.layers.ScannerInterface`. They are +passed to the :py:meth:`~volatility3.framework.interfaces.layers.TranslationLayerInterface.scan` method on layers which will divide the provided range of sections (or the entire layer -if none are provided) and call the :py:meth:`~volatility.framework.interfaces.layers.ScannerInterface`'s call method +if none are provided) and call the :py:meth:`~volatility3.framework.interfaces.layers.ScannerInterface`'s call method method with each chunk as a parameter, ensuring a suitable amount of overlap (as defined by the scanner). The offset of the chunk, within the layer, is also provided as a parameter. 
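The chunk-and-overlap contract just described can be sketched with a stand-in. `NeedleScanner` and `scan` are hypothetical names, not the volatility3 API: the "layer" slices itself into overlapping chunks and calls the scanner with each chunk plus that chunk's offset within the layer, and the scanner yields absolute offsets of hits.

```python
# Illustrative stand-in for the scanner calling convention: the scanner is
# invoked as scanner(chunk, chunk_offset_within_layer) and yields hit offsets.
class NeedleScanner:
    def __init__(self, needle: bytes):
        self.needle = needle
        # Overlap of len(needle) - 1 bytes catches hits straddling two chunks.
        self.overlap = len(needle) - 1

    def __call__(self, data: bytes, data_offset: int):
        position = data.find(self.needle)
        while position != -1:
            yield data_offset + position
            position = data.find(self.needle, position + 1)

def scan(layer_data: bytes, scanner, chunk_size: int = 8):
    hits = set()  # a set deduplicates hits found twice via the overlap region
    for start in range(0, len(layer_data), chunk_size):
        chunk = layer_data[start:start + chunk_size + scanner.overlap]
        hits.update(scanner(chunk, start))
    return sorted(hits)

print(scan(b"..MZ....MZ..MZ", NeedleScanner(b"MZ")))  # → [2, 8, 12]
```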
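The file-output pattern described earlier in this hunk (an `open_method` whose constructor doubles as a context manager, yielding an `IO[bytes]`-like handler with a `preferred_filename` hint) can be sketched as follows. `FakeFileHandler` and `dump_region` are hypothetical stand-ins, not the real `FileHandlerInterface`:

```python
import io

# Stand-in mimicking the shape of a file handler: a class constructor that
# doubles as a context manager and carries a preferred_filename attribute.
class FakeFileHandler(io.BytesIO):
    def __init__(self, preferred_filename: str):
        super().__init__()
        self.preferred_filename = preferred_filename

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # A real handler would commit the file here; this stand-in leaves the
        # buffer open so its contents can be inspected after the with-block.
        return False

def dump_region(open_method, filename: str, data: bytes):
    """Plugin-side sketch: request a handler, write bytes, return the handler."""
    handler = open_method(filename)
    with handler:
        handler.write(data)
    return handler

handler = dump_region(FakeFileHandler, "process.0x1000.dmp", b"\x4d\x5a\x90\x00")
print(handler.preferred_filename, handler.getvalue())
```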
@@ -96,7 +96,7 @@ Writing/Using Intermediate Symbol Format Files ---------------------------------------------- It can occasionally be useful to create a data file containing the static structures that can create a -:py:class:`~volatility.framework.interfaces.objects.Template` to be instantiated on a layer. +:py:class:`~volatility3.framework.interfaces.objects.Template` to be instantiated on a layer. Volatility has all the machinery necessary to construct these for you from properly formatted JSON data. The JSON format is documented by the JSON schema files located in schemas. These are versioned using standard .so @@ -136,7 +136,7 @@ Another useful parameter is `table_mapping` which allows for type referenced ins table_mapping = {'one_table': 'another_table'}) The last parameter that can be used is called `class_types` which allows a particular structure to be instantiated on -a class other than :py:class:`~volatility.framework.objects.StructType`, allowing for additional methods to be defined +a class other than :py:class:`~volatility3.framework.objects.StructType`, allowing for additional methods to be defined and associated with the type. The table name can then by used to access the constructed table from the context, such as: @@ -152,7 +152,7 @@ Translation layers offer a way for data to be translated from a higher (domain) The main method that must be overloaded for a translation layer is the `mapping` method. Usually this is a linear mapping whereby a value at an offset in the domain maps directly to an offset in the range. -Most new layers should inherit from :py:class:`~volatility.framework.layers.linear.LinearlyMappedLayer` where they +Most new layers should inherit from :py:class:`~volatility3.framework.layers.linear.LinearlyMappedLayer` where they can define a mapping method as follows: .. 
code-block:: python @@ -205,7 +205,7 @@ This mechanism also allowed for some minor optimization in scanning such a layer scanning of layers be needed, please refer to the Layer Scanning page. Whilst it may seem as though some of the data seems redundant (the length values are always the same) this is not the -case for :py:class:`~volatility.framework.layers.segmented.NonLinearlySegmentedLayer`. These layers do not guarantee +case for :py:class:`~volatility3.framework.layers.segmented.NonLinearlySegmentedLayer`. These layers do not guarantee that each domain address maps directly to a range address, and in fact can carry out processing on the data. These layers are most commonly encountered as compression or encryption layers (whereby a domain address may map into a chunk of the range, but not directly). In this instance, the mapping will likely define additional methods that can @@ -285,8 +285,8 @@ Writing new Templates and Objects --------------------------------- In most cases, a whole new type of object is unnecessary. It will usually be derived from an -:py:class:`~volatility.framework.objects.StructType` (which is itself just another name for a -:py:class:`~volatility.framework.objects.AggregateType`, but it's better to use `StructType` for readability). +:py:class:`~volatility3.framework.objects.StructType` (which is itself just another name for a +:py:class:`~volatility3.framework.objects.AggregateType`, but it's better to use `StructType` for readability). This can be used as a class override for a particular symbol table, so that an existing structure can be augmented with additional methods. An example of this would be: @@ -300,27 +300,27 @@ This will mean that when a specific structure is loaded from the symbol_space, i `StructType`, but instead is instantiated using the NewStructureClass, meaning new methods can be called directly on it. 
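The segment-style `mapping` method described earlier in this section can be sketched with illustrative values. The segment layout and the `(domain_offset, range_offset, length)` return shape below are assumptions made for this sketch; the real `LinearlyMappedLayer.mapping` method yields additional fields (including the destination layer name):

```python
# Illustrative linear translation over fixed segments; a run that crosses a
# segment boundary is split into one tuple per contiguous range run.
SEGMENTS = [  # (domain_start, range_start, length) — hypothetical layout
    (0x0000, 0x8000, 0x1000),
    (0x1000, 0x2000, 0x1000),
]

def mapping(offset: int, length: int):
    runs = []
    for domain_start, range_start, segment_length in SEGMENTS:
        if length <= 0:
            break
        if domain_start <= offset < domain_start + segment_length:
            available = min(length, domain_start + segment_length - offset)
            runs.append((offset, range_start + (offset - domain_start), available))
            offset += available
            length -= available
    return runs

# A 0x1000-byte read starting at 0x800 straddles both segments:
print([(hex(d), hex(r), hex(n)) for d, r, n in mapping(0x0800, 0x1000)])
```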
If the situation really calls for an entirely new object, that isn't covered by one of the existing -:py:class:`~volatility.framework.objects.PrimativeObject` objects (such as -:py:class:`~volatility.framework.objects.Integer`, -:py:class:`~volatility.framework.objects.Boolean`, -:py:class:`~volatility.framework.objects.Float`, -:py:class:`~volatility.framework.objects.Char`, -:py:class:`~volatility.framework.objects.Bytes`) +:py:class:`~volatility3.framework.objects.PrimitiveObject` objects (such as +:py:class:`~volatility3.framework.objects.Integer`, +:py:class:`~volatility3.framework.objects.Boolean`, +:py:class:`~volatility3.framework.objects.Float`, +:py:class:`~volatility3.framework.objects.Char`, +:py:class:`~volatility3.framework.objects.Bytes`) or the other builtins (such as -:py:class:`~volatility.framework.objects.Array`, -:py:class:`~volatility.framework.objects.Bitfield`, -:py:class:`~volatility.framework.objects.Enumeration`, -:py:class:`~volatility.framework.objects.Pointer`, -:py:class:`~volatility.framework.objects.String`, -:py:class:`~volatility.framework.objects.Void`) then you can review the following information about defining an entirely +:py:class:`~volatility3.framework.objects.Array`, +:py:class:`~volatility3.framework.objects.Bitfield`, +:py:class:`~volatility3.framework.objects.Enumeration`, +:py:class:`~volatility3.framework.objects.Pointer`, +:py:class:`~volatility3.framework.objects.String`, +:py:class:`~volatility3.framework.objects.Void`) then you can review the following information about defining an entirely new object. 
-All objects must inherit from :py:class:`~volatility.framework.interfaces.objects.ObjectInterface` which defines a -constructor that takes a context, a `type_name`, an :py:class:`~volatility.framework.interfaces.objects.ObjectInformation` +All objects must inherit from :py:class:`~volatility3.framework.interfaces.objects.ObjectInterface` which defines a +constructor that takes a context, a `type_name`, an :py:class:`~volatility3.framework.interfaces.objects.ObjectInformation` object and then can accept additional keywords (which will not necessarily be provided if the object is constructed from a JSON reference). -The :py:class:`~volatility.framework.interfaces.objects.ObjectInformation` class contains all the basic elements that +The :py:class:`~volatility3.framework.interfaces.objects.ObjectInformation` class contains all the basic elements that define an object, which include: * layer_name @@ -345,10 +345,10 @@ should be. Note, the size can change throughout the lifespan of the object, and it compensates for such a change. Objects must also contain a specific class called `VolTemplateProxy` which must inherit from -:py:class:`~volatility.framework.interfaces.objects.ObjectInterface`. This is used to access information about +:py:class:`~volatility3.framework.interfaces.objects.ObjectInterface`. This is used to access information about a structure before it has been associated with data and becomes an Object. The -:py:class:`~volatility.framework.interfaces.objects.ObjectInterface.VolTemplateProxy` class contains a number of -abstract classmethods, which take a :py:class:`~volatility.framework.interfaces.objects.Template`. The main method +:py:class:`~volatility3.framework.interfaces.objects.ObjectInterface.VolTemplateProxy` class contains a number of +abstract classmethods, which take a :py:class:`~volatility3.framework.interfaces.objects.Template`. 
The main method that is likely to need overwriting is the `size` method, which should return the size of the object (for the template of a dynamically-sized object, this should be a suitable value, and calculated based on the best available information). For most objects, this can be determined from the JSON data used to construct a normal `Struct` and therefore only needs diff --git a/doc/source/simple-plugin.rst b/doc/source/simple-plugin.rst index e857e830cd..52dabfd4fb 100644 --- a/doc/source/simple-plugin.rst +++ b/doc/source/simple-plugin.rst @@ -3,19 +3,19 @@ How to Write a Simple Plugin This guide will step through how to construct a simple plugin using Volatility 3. -The example plugin we'll use is :py:class:`~volatility.plugins.windows.dlllist.DllList`, which features the main traits +The example plugin we'll use is :py:class:`~volatility3.plugins.windows.dlllist.DllList`, which features the main traits of a normal plugin, and reuses other plugins appropriately. Inherit from PluginInterface ---------------------------- -The first step is to define a class that inherits from :py:class:`~volatility.framework.interfaces.plugins.PluginInterface`. +The first step is to define a class that inherits from :py:class:`~volatility3.framework.interfaces.plugins.PluginInterface`. Volatility automatically finds all plugins defined under the various plugin directories by importing them and then -making use of any classes that inherit from :py:class:`~volatility.framework.interfaces.plugins.PluginInterface`. +making use of any classes that inherit from :py:class:`~volatility3.framework.interfaces.plugins.PluginInterface`. :: - from volatility.framework import interfaces + from volatility3.framework import interfaces class DllList(interfaces.plugins.PluginInterface): @@ -56,7 +56,7 @@ to instantiate the plugin). 
At the moment these requirements are fairly straigh architectures = ["Intel32", "Intel64"]), This requirement indicates that the plugin will operate on a single -:py:class:`TranslationLayer `. The name of the +:py:class:`TranslationLayer `. The name of the loaded layer will appear in the plugin's configuration under the name ``primary``. Requirement values can be accessed within the plugin through the plugin's `config` attribute (for example ``self.config['pid']``). @@ -71,7 +71,7 @@ layers, for example a plugin that carries out some form of difference or statist This requirement (and the next two) are known as Complex Requirements, and user interfaces will likely not directly request a value for this from a user. The value stored in the configuration tree for a -:py:class:`~volatility.framework.configuration.requirements.TranslationLayerRequirement` is +:py:class:`~volatility3.framework.configuration.requirements.TranslationLayerRequirement` is the string name of a layer present in the context's memory that satisfies the requirement. :: @@ -80,14 +80,14 @@ the string name of a layer present in the context's memory that satisfies the re description = "Windows kernel symbols"), This requirement specifies the need for a particular -:py:class:`SymbolTable ` +:py:class:`SymbolTable ` to be loaded. This gets populated by various -:py:class:`Automagic ` as the nearest sibling to a particular -:py:class:`~volatility.framework.configuration.requirements.TranslationLayerRequirement`. -This means that if the :py:class:`~volatility.framework.configuration.requirements.TranslationLayerRequirement` -is satisfied and the :py:class:`Automagic ` can determine -the appropriate :py:class:`SymbolTable `, the -name of the :py:class:`SymbolTable ` will be stored in the configuration. +:py:class:`Automagic ` as the nearest sibling to a particular +:py:class:`~volatility3.framework.configuration.requirements.TranslationLayerRequirement`. 
+This means that if the :py:class:`~volatility3.framework.configuration.requirements.TranslationLayerRequirement` +is satisfied and the :py:class:`Automagic ` can determine +the appropriate :py:class:`SymbolTable `, the +name of the :py:class:`SymbolTable ` will be stored in the configuration. This requirement is also a Complex Requirement and therefore will not be requested directly from the user. @@ -119,10 +119,10 @@ Define the `run` method The run method is the primary method called on a plugin. It takes no parameters (these have been passed through the context's configuration tree, and the context is provided at plugin initialization time) and returns an unpopulated -:py:class:`~volatility.framework.interfaces.renderers.TreeGrid` object. These are typically constructed based on a +:py:class:`~volatility3.framework.interfaces.renderers.TreeGrid` object. These are typically constructed based on a generator that carries out the bulk of the plugin's processing. The -:py:class:`~volatility.framework.interfaces.renderers.TreeGrid` also specifies the column names and types -that will be output as part of the :py:class:`~volatility.framework.interfaces.renderers.TreeGrid`. +:py:class:`~volatility3.framework.interfaces.renderers.TreeGrid` also specifies the column names and types +that will be output as part of the :py:class:`~volatility3.framework.interfaces.renderers.TreeGrid`. :: @@ -143,28 +143,28 @@ that will be output as part of the :py:class:`~volatility.framework.interfaces.r In this instance, the plugin constructs a filter (using the PsList plugin's *classmethod* for creating filters). It checks the plugin's configuration for the ``pid`` value, and passes it in as a list if it finds it, or None if -it does not. The :py:func:`~volatility.plugins.windows.pslist.PsList.create_pid_filter` method accepts a list of process +it does not. 
The :py:func:`~volatility3.plugins.windows.pslist.PsList.create_pid_filter` method accepts a list of process identifiers that are included in the list. If the list is empty, all processes are returned. The next line specifies the columns by their name and type. The types are simple types (int, str, bytes, float, and bool) but can also provide hints as to how the output should be displayed (such as a hexadecimal number, using -:py:class:`volatility.framework.renderers.format_hints.Hex`). +:py:class:`volatility3.framework.renderers.format_hints.Hex`). This indicates to user interfaces that the value should be displayed in a particular way, but does not guarantee that the value will be displayed that way (for example, if it doesn't make sense to do so in a particular interface). Finally, the generator is provided. The generator accepts a list of processes, which is gathered using a different plugin, -the :py:class:`~volatility.plugins.windows.pslist.PsList` plugin. That plugin features a *classmethod*, +the :py:class:`~volatility3.plugins.windows.pslist.PsList` plugin. That plugin features a *classmethod*, so that other plugins can call it. As such, it takes all the necessary parameters rather than accessing them from a configuration. Since it must be portable code, it takes a context, as well as the layer name, symbol table and optionally a filter. In this instance we unconditionally pass it the values from the configuration for the ``primary`` and ``nt_symbols`` requirements. 
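The filter-factory behaviour described above can be mimicked with a small stand-in. `create_pid_filter` is the real classmethod name on `PsList`, but the body below is a simplified sketch (the real method operates on process objects from a memory image): the returned callable reports ``True`` when a process should be *excluded* from the results, and an empty list means every process is included.

```python
# Simplified stand-in for the create_pid_filter factory pattern; the returned
# callable reports True when a process should be *excluded* from the results.
def create_pid_filter(pid_list=None):
    filter_list = [pid for pid in (pid_list or []) if pid is not None]
    if not filter_list:
        return lambda proc: False  # empty list: include every process
    return lambda proc: proc["pid"] not in filter_list

# Hypothetical process records standing in for real EPROCESS objects:
processes = [{"pid": 4}, {"pid": 564}, {"pid": 632}]
filter_func = create_pid_filter([564])
print([p["pid"] for p in processes if not filter_func(p)])  # → [564]
```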
This will generate a list -of :py:class:`~volatility.framework.symbols.windows.extensions.EPROCESS` objects, as provided by the :py:class:`~volatility.plugins.windows.pslist.PsList` plugin, +of :py:class:`~volatility3.framework.symbols.windows.extensions.EPROCESS` objects, as provided by the :py:class:`~volatility3.plugins.windows.pslist.PsList` plugin, and is not covered here but is used as an example for how to share code across plugins (both as the provider and the consumer of the shared code). Define the generator -------------------- -The :py:class:`~volatility.framework.interfaces.renderers.TreeGrid` can be populated without a generator, +The :py:class:`~volatility3.framework.interfaces.renderers.TreeGrid` can be populated without a generator, but it is quite a common model to use. This is where the main processing for this plugin lives. :: @@ -189,10 +189,10 @@ but it is quite a common model to use. This is where the main processing for th format_hints.Hex(entry.DllBase), format_hints.Hex(entry.SizeOfImage), BaseDllName, FullDllName)) -This iterates through the list of processes and for each one calls the :py:meth:`~volatility.framework.symbols.windows.extensions.EPROCESS.load_order_modules` method on it. This provides +This iterates through the list of processes and for each one calls the :py:meth:`~volatility3.framework.symbols.windows.extensions.EPROCESS.load_order_modules` method on it. This provides a list of the loaded modules within the process. -The plugin then defaults the ``BaseDllName`` and ``FullDllName`` variables to an :py:class:`~volatility.framework.renderers.UnreadableValue`, +The plugin then defaults the ``BaseDllName`` and ``FullDllName`` variables to an :py:class:`~volatility3.framework.renderers.UnreadableValue`, which is a way of indicating to the user interface that the value couldn't be read for some reason (but that it isn't fatal). 
There are currently four different reasons a value may be unreadable: @@ -204,7 +204,7 @@ There are currently four different reasons a value may be unreadable: This is a safety provision to ensure that the data returned by the Volatility library is accurate and describes why information may not be provided. -The plugin then takes the process's ``BaseDllName`` value, and calls :py:meth:`~volatility.framework.symbols.windows.extensions.UNICODE_STRING.get_string` on it. All structure attributes, +The plugin then takes the process's ``BaseDllName`` value, and calls :py:meth:`~volatility3.framework.symbols.windows.extensions.UNICODE_STRING.get_string` on it. All structure attributes, as defined by the symbols, are directly accessible and use the case-style of the symbol library it came from (in Windows, attributes are CamelCase), such as ``entry.BaseDllName`` in this instance. Any attributes not defined by the symbol but added by Volatility extensions cannot be properties (in case they overlap with the attributes defined in the symbol libraries) @@ -215,16 +215,16 @@ read the data at a particular offset. This will cause an exception to be thrown as a means of communicating when something exceptional happens. It is the responsibility of the plugin developer to appropriately catch and handle any non-fatal exceptions and otherwise allow the exception to be thrown by the user interface. -In this instance, the :py:class:`~volatility.framework.exceptions.InvalidAddressException` class is caught, which is thrown +In this instance, the :py:class:`~volatility3.framework.exceptions.InvalidAddressException` class is caught, which is thrown by any layer which cannot access an offset requested of it. Since we have already populated both values with ``UnreadableValue`` we do not need to write code for the exception handler. 
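The default-then-try pattern described above can be sketched as follows. The names `InvalidAddressException` and `UnreadableValue` match volatility3, but the classes defined here are simplified substitutes (a string sentinel and a toy layer), not the real implementations:

```python
# Stand-in sketch: default the value to a sentinel, attempt the read, and let
# a read failure silently fall back to the sentinel already in place.
class InvalidAddressException(Exception):
    """Raised by a layer that cannot access a requested offset."""

UNREADABLE = "UnreadableValue"  # sentinel standing in for renderers.UnreadableValue

class FakeLayer:
    def __init__(self, data: bytes):
        self.data = data

    def read(self, offset: int, length: int) -> bytes:
        if offset + length > len(self.data):
            raise InvalidAddressException(f"offset {offset:#x} is not available")
        return self.data[offset:offset + length]

def read_name(layer, offset: int):
    name = UNREADABLE          # populate the default before attempting the read
    try:
        name = layer.read(offset, 4).decode("ascii")
    except InvalidAddressException:
        pass                   # nothing to do: the sentinel is already in place
    return name

layer = FakeLayer(b"lsass.exe")
print(read_name(layer, 0), read_name(layer, 0x1000))
```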
-Finally, we yield the record in the format required by the :py:class:`~volatility.framework.interfaces.renderers.TreeGrid`, +Finally, we yield the record in the format required by the :py:class:`~volatility3.framework.interfaces.renderers.TreeGrid`, a tuple, listing the indentation level (for trees) and then the list of values for each column. This plugin demonstrates casting a value ``ImageFileName`` to ensure it's returned as a string with a specific maximum length, rather than its original type (potentially an array of characters, etc). -This is carried out using the :py:meth:`~volatility.framework.interfaces.objects.ObjectInterface.cast` method which takes a type (either a native type, such as string or pointer, or a -structure type defined in a :py:class:`SymbolTable ` +This is carried out using the :py:meth:`~volatility3.framework.interfaces.objects.ObjectInterface.cast` method which takes a type (either a native type, such as string or pointer, or a +structure type defined in a :py:class:`SymbolTable ` such as ``!_UNICODE``) and the parameters to that type. Since the cast value must populate a string typed column, it had to be a Python string (such as being cast to the native diff --git a/doc/source/using-as-a-library.rst b/doc/source/using-as-a-library.rst index abed65225c..ded861de45 100644 --- a/doc/source/using-as-a-library.rst +++ b/doc/source/using-as-a-library.rst @@ -25,7 +25,7 @@ from versions 1.1 or 1.2: :: - volatility.framework.require_interface_version(1, 0, 0) + volatility3.framework.require_interface_version(1, 0, 0) Contexts can be spun up quite easily, just construct one. It's not a singleton, so multiple contexts can be constructed and operate independently, but be aware of which context you're handing where and make sure to use @@ -42,20 +42,20 @@ Determine what plugins are available ------------------------------------ You can also interrogate the framework to see which plugins are available. 
First we have to try to load all -available plugins. The :py:func:`~volatility.framework.import_files` method will automatically use the module -paths for the provided module (in this case, volatility.plugins) and walk the directory (or directories) loading up +available plugins. The :py:func:`~volatility3.framework.import_files` method will automatically use the module +paths for the provided module (in this case, volatility3.plugins) and walk the directory (or directories) loading up all python files. Any import failures will be provided in the failures return value, unless the second parameter is False in which case the call will raise any exceptions encountered. Any additional directories containing plugins -should be added to the `__path__` attribute for the `volatility.plugins` module. The standard paths should generally -also be included, which can be found in `volatility.constants.PLUGINS_PATH`. +should be added to the `__path__` attribute for the `volatility3.plugins` module. The standard paths should generally +also be included, which can be found in `volatility3.constants.PLUGINS_PATH`. :: - volatility.plugins.__path__ = <new_plugin_paths> + constants.PLUGINS_PATH - failures = framework.import_files(volatility.plugins, True) + volatility3.plugins.__path__ = <new_plugin_paths> + constants.PLUGINS_PATH + failures = framework.import_files(volatility3.plugins, True) Once the plugins have been imported, we can interrogate which plugins are available. The -:py:func:`~volatility.framework.list_plugins` call will +:py:func:`~volatility3.framework.list_plugins` call will return a dictionary of plugin names and the plugin classes. :: @@ -68,9 +68,9 @@ Determine what configuration options a plugin requires ------------------------------------------------------ For each plugin class, we can call the classmethod `requirements` on it, which will return a list of objects that -adhere to the :py:class:`~volatility.framework.interfaces.configuration.RequirementInterface` method. 
The various +adhere to the :py:class:`~volatility3.framework.interfaces.configuration.RequirementInterface` method. The various types of Requirement are split roughly in two, -:py:class:`~volatility.framework.interfaces.configuration.SimpleTypeRequirement` (such as integers, booleans, floats +:py:class:`~volatility3.framework.interfaces.configuration.SimpleTypeRequirement` (such as integers, booleans, floats and strings) and more complex requirements (such as lists, choices, multiple requirements, translation layer requirements or symbol table requirements). A requirement just specifies a type of data and a name, and must be combined with a configuration hierarchy to have meaning. @@ -98,7 +98,7 @@ underneaths its own branch). To set the hierarchy, you'll need to know where th For this example, we'll assume plugins' base_config_path is set as `plugins`, and that automagics are configured under the `automagic` tree. We'll see later how to ensure this matches up with the plugins and automagic when they're constructed. Joining configuration options should always be carried out using -:py:func:`~volatility.framework.interfaces.configuration.path_join` +:py:func:`~volatility3.framework.interfaces.configuration.path_join` in case the separator value gets changed in the future. Configuration items can then be set as follows: :: @@ -170,7 +170,7 @@ be called whenever a plugin produces an auxiliary file. constructed = plugin(context, plugin_config_path, progress_callback = progress_callback) constructed.set_open_method(file_handler) -The file_handler must adhere to the :py:class:`~volatility.framework.interfaces.plugins.FileHandlerInterface`, +The file_handler must adhere to the :py:class:`~volatility3.framework.interfaces.plugins.FileHandlerInterface`, which represents an IO[bytes] object but also contains a `preferred_filename` attribute as a hint. 
All of this functionality has been condensed into a framework method called `construct_plugin` which will @@ -181,7 +181,7 @@ accepts an optional progress_callback and an optional file_consumer. constructed = plugins.construct_plugin(ctx, automagics, plugin, base_config_path, progress_callback, file_consumer) -Finally the plugin can be run, and will return a :py:class:`~volatility.framework.interfaces.renderers.TreeGrid`. +Finally the plugin can be run, and will return a :py:class:`~volatility3.framework.interfaces.renderers.TreeGrid`. :: @@ -201,22 +201,22 @@ does the actual work. This can return an exception if one occurs during the run The results can be accessed either as the results are being processed, or by visiting the nodes in the tree once it is fully populated. In either case, a visitor method will be required. The visitor method -should accept a :py:class:`~volatility.framework.interfaces.renderers.TreeNode` and an `accumulator`. It will +should accept a :py:class:`~volatility3.framework.interfaces.renderers.TreeNode` and an `accumulator`. It will return an updated accumulator. -When provided a :py:class:`~volatility.framework.interfaces.renderers.TreeNode`, it can be accessed as a dictionary +When provided a :py:class:`~volatility3.framework.interfaces.renderers.TreeNode`, it can be accessed as a dictionary based on the column names that the treegrid contains. It should be noted that each column can contain only the type specified in the `column.type` field (which can be a simple type like string, integer, float, bytes or a more complex type, like a DateTime, a Disassembly or a descendant of -:py:class:`~volatility.framework.interfaces.renderers.BaseAbsentValue`). The various fields may also be wrapped in +:py:class:`~volatility3.framework.interfaces.renderers.BaseAbsentValue`). The various fields may also be wrapped in `format_hints` designed to tell the user interface how to render the data. 
These hints can be things like Bin, Hex or HexBytes, so that fields like offsets are displayed in hex form or so that bytes are displayed in their hex form rather -than their raw form. Descendants of :py:class:`~volatility.framework.interfaces.renderers.BaseAbsentValue` can currently +than their raw form. Descendants of :py:class:`~volatility3.framework.interfaces.renderers.BaseAbsentValue` can currently be one of -:py:class:`~volatility.framework.renderers.UnreadableValue`, -:py:class:`~volatility.framework.renderers.UnparsableValue`, -:py:class:`~volatility.framework.renderers.NotApplicableValue` or -:py:class:`~volatility.framework.renderers.NotAvailableValue`. These indicate that data could not be read from the +:py:class:`~volatility3.framework.renderers.UnreadableValue`, +:py:class:`~volatility3.framework.renderers.UnparsableValue`, +:py:class:`~volatility3.framework.renderers.NotApplicableValue` or +:py:class:`~volatility3.framework.renderers.NotAvailableValue`. These indicate that data could not be read from the memory for some reason, could not be parsed properly, was not applicable or was not available. A simple text renderer (that returns output immediately) would appear as follows. This doesn't use @@ -240,5 +240,5 @@ the accumulator, but instead uses print to directly produce the output. This is grid.populate(visitor, None) More complex examples of renderers can be found in the default CLI implementation, such as the -:py:class:`~volatility.cli.text_renderer.QuickTextRenderer` or the -:py:class:`~volatility.cli.text_renderer.PrettyTextRenderer`. +:py:class:`~volatility3.cli.text_renderer.QuickTextRenderer` or the +:py:class:`~volatility3.cli.text_renderer.PrettyTextRenderer`. 
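The visitor pattern described above can be sketched with a stand-in tree. The `Node` class and `populate` function here are hypothetical simplifications, not the real `TreeNode`/`TreeGrid` implementations; they show only the accumulator-threading contract:

```python
# Illustrative depth-first visit: the visitor receives (node, accumulator)
# and returns the updated accumulator, exactly as the text describes.
class Node:
    def __init__(self, values, children=()):
        self.values = values            # column-name -> value mapping
        self.children = list(children)

def populate(node, visitor, accumulator):
    accumulator = visitor(node, accumulator)
    for child in node.children:
        accumulator = populate(child, visitor, accumulator)
    return accumulator

def collect_names(node, accumulator):
    return accumulator + [node.values["Name"]]

# A tiny stand-in process tree (values are illustrative):
root = Node({"PID": 4, "Name": "System"},
            [Node({"PID": 564, "Name": "smss.exe"}),
             Node({"PID": 632, "Name": "csrss.exe"})])
print(populate(root, collect_names, []))  # → ['System', 'smss.exe', 'csrss.exe']
```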
diff --git a/doc/source/vol2to3.rst b/doc/source/vol2to3.rst index 8e82ba8a2d..bc1733dcf8 100644 --- a/doc/source/vol2to3.rst +++ b/doc/source/vol2to3.rst @@ -6,7 +6,7 @@ Library and Context Volatility 3 has been designed from the ground up to be a library, this means the components are independent and all state required to run a particular plugin at a particular time is self-contained in an object derived from -a :py:class:`~volatility.framework.interfaces.context.ContextInterface`. +a :py:class:`~volatility3.framework.interfaces.context.ContextInterface`. The context contains the two core components that make up Volatility, layers of data and the available symbols. @@ -14,7 +14,7 @@ Symbols and Types ----------------- Volatility 3 no longer uses profiles, it comes with an extensive library of -:py:class:`symbol tables `, and can generate new symbol +:py:class:`symbol tables `, and can generate new symbol tables for most windows memory images, based on the memory image itself. This allows symbol tables to include specific offsets for locations (symbol locations) based on that operating system in particular. This means it is easier and quicker to identify structures within an operating system, by having known offsets for those structures provided by the official @@ -37,11 +37,11 @@ re-read many times over for no benefit (particularly since each re-read could re from following page table translations). Finally, in order to provide Volatility specific information without impact on the ability for structures to have members -with arbitrary names, all the metadata about the object (such as its layer or offset) have been moved to a read-only :py:meth:`~volatility.framework.interfaces.objects.ObjectInterface.vol` +with arbitrary names, all the metadata about the object (such as its layer or offset) have been moved to a read-only :py:meth:`~volatility3.framework.interfaces.objects.ObjectInterface.vol` dictionary. 
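The design choice described above, keeping object metadata apart from structure members so the two namespaces can never collide, can be sketched as follows. `FakeObject` is hypothetical; volatility3's actual implementation differs, but the read-only-mapping idea is the same:

```python
from types import MappingProxyType

# Sketch: metadata lives behind a read-only mapping, while structure members
# occupy ordinary attribute space, so member names cannot shadow metadata.
class FakeObject:
    def __init__(self, layer_name: str, offset: int, type_name: str):
        # MappingProxyType provides a read-only view over the metadata dict.
        self._vol = MappingProxyType({"layer_name": layer_name,
                                      "offset": offset,
                                      "type_name": type_name})
        self.ImageFileName = b"System"  # an ordinary structure member

    @property
    def vol(self):
        return self._vol

obj = FakeObject("primary", 0x823C8830, "nt_symbols!_EPROCESS")
print(obj.vol["offset"], obj.ImageFileName)
```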
-Further the distinction between a :py:class:`~volatility.framework.interfaces.objects.Template` (the thing that -constructs an object) and the :py:class:`Object ` itself has +Further the distinction between a :py:class:`~volatility3.framework.interfaces.objects.Template` (the thing that +constructs an object) and the :py:class:`Object ` itself has been made more explicit. In Volatility 2, some information (such as size) could only be determined from a constructed object, leading to instantiating a template on an empty buffer, just to determine the size. In Volatility 3, templates contain information such as their size, which can be queried directly without constructing the object. @@ -49,7 +49,7 @@ information such as their size, which can be queried directly without constructi Layer and Layer dependencies ---------------------------- Address spaces in Volatility 2, are now more accurately referred to as -:py:class:`Translation Layers `, since each one typically sits +:py:class:`Translation Layers `, since each one typically sits atop another and can translate addresses between the higher logical layer and the lower physical layer. Address spaces in Volatility 2 were strictly limited to a stack, one on top of one other. In Volatility 3, layers can have multiple "dependencies" (lower layers), which allows for the integration of features such as swap space. @@ -65,13 +65,13 @@ included a stacker automagic to emulate the most common feature of Volatility 2, Searching and Scanning ---------------------- Scanning is very similar to scanning in Volatility 2, a scanner object (such as a -:py:class:`~volatility.framework.layers.scanners.BytesScanner` or :py:class:`~volatility.framework.layers.scanners.RegExScanner`) is -primed with the data to be searched for, and the :py:meth:`~volatility.framework.interfaces.layers.DataLayerInterface.scan` method is called on the layer to be searched. 
+:py:class:`~volatility3.framework.layers.scanners.BytesScanner` or :py:class:`~volatility3.framework.layers.scanners.RegExScanner`) is +primed with the data to be searched for, and the :py:meth:`~volatility3.framework.interfaces.layers.DataLayerInterface.scan` method is called on the layer to be searched. Output Rendering ---------------- This is extremely similar to Volatility 2, because we were developing it for Volatility 3 when we added it to Volatility 2. -We now require that all plugins produce output in a :py:class:`~volatility.framework.interfaces.renderers.TreeGrid` object, +We now require that all plugins produce output in a :py:class:`~volatility3.framework.interfaces.renderers.TreeGrid` object, which ensure that the library can be used regardless of which interface is driving it. An example web GUI is also available called Volumetric which allows all the plugins that can be run from the command line to be run from a webpage, and offers features such as automatic formatting and sorting of the data, which previously couldn't be provided easily from the CLI.
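The translation-layer idea summarised above — a higher layer translating its addresses into runs on a lower layer before reading — can be sketched as follows. `DataLayer` and `ShiftLayer` are invented illustrations of the concept, not the volatility3 layer classes:

```python
# A higher layer exposes mapping() describing how its addresses translate to
# offsets on a lower layer, and read() goes through that mapping. A real
# paging layer would yield many discontiguous runs; this one applies a
# single constant shift for clarity.
from typing import Iterator, Tuple


class DataLayer:
    """Lowest layer: a contiguous run of bytes addressed by offset."""

    def __init__(self, data: bytes) -> None:
        self._data = data

    def read(self, offset: int, length: int) -> bytes:
        return self._data[offset:offset + length]


class ShiftLayer:
    """A trivial translation layer: address x maps to x - shift below."""

    def __init__(self, lower: DataLayer, shift: int) -> None:
        self._lower = lower
        self._shift = shift

    def mapping(self, offset: int, length: int) -> Iterator[Tuple[int, int, int]]:
        # (offset in this layer, mapped offset in the lower layer, length)
        yield offset, offset - self._shift, length

    def read(self, offset: int, length: int) -> bytes:
        return b"".join(self._lower.read(mapped, sublen)
                        for _, mapped, sublen in self.mapping(offset, length))


physical = DataLayer(b"\x00\x00MZ\x90\x00")
virtual = ShiftLayer(physical, shift=0x1000)
print(virtual.read(0x1002, 2))  # b'MZ'
```

Because each layer only declares its dependencies, a layer like this could equally sit over several lower layers (for example, memory plus swap), which is the generalisation Volatility 3 makes over the strict address-space stack of Volatility 2.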
From b66fb35ffb3f6a0bdec889ba4080641001186121 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 30 Jan 2021 18:24:07 +0000 Subject: [PATCH 037/294] Layers: Document the choice of mode for cached files --- volatility3/framework/layers/resources.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index d99f7135c2..63cfd95707 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -142,6 +142,8 @@ def open(self, url: str, mode: str = "rb") -> Any: block = fp.read(block_size) cache_file.close() # Re-open the cache with a different mode + # Since we don't want people thinking they're able to save to the cache file, + # open it in read mode only and allow breakages to happen if they wanted to write curfile = open(temp_filename, mode = "rb") # Determine whether the file is a particular type of file, and if so, open it as such From 6947e8ead5d744a3373fcf64f66ce398b1f5b601 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 1 Feb 2021 11:23:13 +0000 Subject: [PATCH 038/294] Windows: Memmap include file offset --- volatility3/framework/plugins/windows/memmap.py | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py index 86d4531152..45d7158d9f 100644 --- a/volatility3/framework/plugins/windows/memmap.py +++ b/volatility3/framework/plugins/windows/memmap.py @@ -50,11 +50,12 @@ def _generator(self, procs): file_handle = self.open("pid.{}.dmp".format(pid)) with file_handle as file_data: - + file_offset = 0 for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): offset, size, mapped_offset, mapped_size, maplayer = mapval file_output = "Disabled" + file_offset += size if self.config['dump']: try: data = proc_layer.read(offset, size, pad = True) @@ -66,14 +67,14 @@ def _generator(self, procs): 
proc_layer_name, offset, file_handle.preferred_filename)) yield (0, (format_hints.Hex(offset), format_hints.Hex(mapped_offset), format_hints.Hex(mapped_size), - format_hints.Hex(offset), file_output)) + format_hints.Hex(file_offset), file_output)) offset += mapped_size def run(self): filter_func = pslist.PsList.create_pid_filter([self.config.get('pid', None)]) return renderers.TreeGrid([("Virtual", format_hints.Hex), ("Physical", format_hints.Hex), - ("Size", format_hints.Hex), ("Offset", format_hints.Hex), ("File output", str)], + ("Size", format_hints.Hex), ("Offset in File", format_hints.Hex), ("File output", str)], self._generator( pslist.PsList.list_processes(context = self.context, layer_name = self.config['primary'], From 1e580de86a4296dec01f768f67bb28b4be477104 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 2 Feb 2021 00:08:00 +0000 Subject: [PATCH 039/294] Codebase: Minor LGTM issue fixes --- volatility3/cli/volshell/generic.py | 1 - volatility3/framework/automagic/stacker.py | 2 +- volatility3/framework/layers/vmware.py | 4 ---- volatility3/framework/plugins/windows/dumpfiles.py | 1 - volatility3/framework/plugins/windows/netstat.py | 5 ++--- 5 files changed, 3 insertions(+), 10 deletions(-) diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py index 9407c77eec..a1fa7c172d 100644 --- a/volatility3/cli/volshell/generic.py +++ b/volatility3/cli/volshell/generic.py @@ -4,7 +4,6 @@ import binascii import code import io -import os import random import string import struct diff --git a/volatility3/framework/automagic/stacker.py b/volatility3/framework/automagic/stacker.py index 78fbab76c2..142bfd3bbf 100644 --- a/volatility3/framework/automagic/stacker.py +++ b/volatility3/framework/automagic/stacker.py @@ -13,7 +13,7 @@ import logging import sys import traceback -from typing import Any, List, Optional, Tuple, Type +from typing import List, Optional, Tuple, Type from volatility3 import framework from volatility3.framework 
import interfaces, constants diff --git a/volatility3/framework/layers/vmware.py b/volatility3/framework/layers/vmware.py index db7fbe6b8d..cd9cf651c6 100644 --- a/volatility3/framework/layers/vmware.py +++ b/volatility3/framework/layers/vmware.py @@ -94,10 +94,6 @@ def _read_header(self) -> None: data_size = self._context.object(self._choose_type(data_len), layer_name = self._meta_layer, offset = offset + 2 + name_len + (indices_len * index_len)) - # Read the size of the data when it would be decompressed - data_mem_size = self._context.object(self._choose_type(data_len), - layer_name = self._meta_layer, - offset = offset + 2 + name_len + (indices_len * index_len) + data_len) # Skip two bytes of padding (as it seems?) # Read the actual data data = self._context.object("vmware!bytes", diff --git a/volatility3/framework/plugins/windows/dumpfiles.py b/volatility3/framework/plugins/windows/dumpfiles.py index 757dbd3c32..9fc9daa54f 100755 --- a/volatility3/framework/plugins/windows/dumpfiles.py +++ b/volatility3/framework/plugins/windows/dumpfiles.py @@ -9,7 +9,6 @@ from volatility3.plugins.windows import pslist from volatility3.framework.configuration import requirements from volatility3.framework.renderers import format_hints -from volatility3.framework.objects import utility from typing import List, Tuple, Type, Optional vollog = logging.getLogger(__name__) diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index b7bcd4d2c2..0ec50b23de 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -4,12 +4,11 @@ import logging import datetime -from typing import Iterable, List, Optional, Callable +from typing import Iterable, Optional -from volatility3.framework import constants, exceptions, interfaces, renderers, symbols, layers +from volatility3.framework import constants, exceptions, interfaces, renderers, symbols from 
volatility3.framework.configuration import requirements from volatility3.framework.renderers import format_hints -from volatility3.framework.symbols import intermed from volatility3.framework.symbols.windows import pdbutil from volatility3.framework.symbols.windows.extensions import network from volatility3.plugins import timeliner From 25a74a8450b66febff26eb5ebb28efa40e6c8a9c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 2 Feb 2021 00:20:53 +0000 Subject: [PATCH 040/294] Documentation: Clarify progress_callback and open_method --- doc/source/using-as-a-library.rst | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/doc/source/using-as-a-library.rst b/doc/source/using-as-a-library.rst index ded861de45..95d2b10808 100644 --- a/doc/source/using-as-a-library.rst +++ b/doc/source/using-as-a-library.rst @@ -161,9 +161,16 @@ If unsatisfied is an empty list, then the plugin has been given everything it re Dictionary of the hierarchy paths and their associated requirements that weren't satisfied. The plugin can then be instantiated with the context (containing the plugin's configuration) and the path that the -plugin can find its configuration at. A progress_callback can also be provided to give users feedback whilst the -plugin is running. Also, should the plugin produce files, an open_method can be set on the plugin, which will -be called whenever a plugin produces an auxiliary file. +plugin can find its configuration at. This configuration path only needs to be a unique value to identify where the +configuration details can be found, similar to a registry key in Windows. + +A progress_callback can also be provided to give users feedback whilst the plugin is running. A progress callback +is a function (callable) that takes a percentage and a descriptive string. 
User interfaces implementing these can +therefore provide progress feedback to a user, as the framework will call these every so often during intensive actions, +to update the user as to how much has been completed so far. + +Also, should the plugin produce files, an open_method can be set on the plugin, which will be called whenever a plugin +produces an auxiliary file. :: @@ -171,7 +178,11 @@ be called whenever a plugin produces an auxiliary file. constructed.set_open_method(file_handler) The file_handler must adhere to the :py:class:`~volatility3.framework.interfaces.plugins.FileHandlerInterface`, -which represents an IO[bytes] object but also contains a `preferred_filename` attribute as a hint. +which represents an IO[bytes] object but also contains a `preferred_filename` attribute as a hint indicating what the +file being produced should be called. When a plugin produces a new file, rather than opening it with the python `open` +method, it will use the `FileHandlerInterface` and construct it with a descriptive filename, and then write bytes to it +using the `write` method, just like other python file-like objects. This allows web user interfaces to offer the files +for download, whilst CLIs to write them to disk and other UIs to handle files however they need. All of this functionality has been condensed into a framework method called `construct_plugin` which will take and run the automagics, and instantiate the plugin on the provided `base_config_path`. It also From f52bd8d74fc47e63ea2611d0171b63dc589d4fdf Mon Sep 17 00:00:00 2001 From: leohearts Date: Thu, 4 Feb 2021 23:54:00 +0800 Subject: [PATCH 041/294] Replace \xe2\x80\x94 with normal "-" in README.md --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index f48bf728f7..dee2455dce 100644 --- a/README.md +++ b/README.md @@ -45,7 +45,7 @@ git clone https://github.com/volatilityfoundation/volatility3.git 2. 
See available options: ```shell - python3 vol.py —h + python3 vol.py -h ``` 3. To get more information on a Windows memory sample and to make sure @@ -55,10 +55,10 @@ Volatility supports that sample type, run Example: ```shell - python3 vol.py —f /home/user/samples/stuxnet.vmem windows.info + python3 vol.py -f /home/user/samples/stuxnet.vmem windows.info ``` -4. Run some other plugins. The `-f` or `—-single-location` is not strictly +4. Run some other plugins. The `-f` or `--single-location` is not strictly required, but most plugins expect a single sample. Some also require/accept other options. Run `python3 vol.py -h` for more information on a particular command. From d66357c64c1df03fc2c0060f82ad918c1d8ae04e Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 4 Feb 2021 16:07:54 +0000 Subject: [PATCH 042/294] Documentation: Fix an unnecessary unicode apostrophe --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index dee2455dce..ce285e76c0 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # Volatility 3: The volatile memory extraction framework -Volatility is the world’s most widely used framework for extracting digital +Volatility is the world's most widely used framework for extracting digital artifacts from volatile memory (RAM) samples. The extraction techniques are performed completely independent of the system being investigated but offer visibility into the runtime state of the system. 
The framework is intended From afd394dab2d47e3fc2b3f7907b429168cb3b161b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 4 Feb 2021 16:46:55 +0000 Subject: [PATCH 043/294] Layers: Don't map intel out-of-bounds addresses --- volatility3/framework/layers/intel.py | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index cf78bbcd11..89ffcdbafa 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -52,6 +52,7 @@ def __init__(self, self._index_shift = int(math.ceil(math.log2(struct.calcsize(self._entry_format)))) @classproperty + @functools.lru_cache() def page_size(cls) -> int: """Page size for the intel memory layers. @@ -60,16 +61,19 @@ def page_size(cls) -> int: return 1 << cls._page_size_in_bits @classproperty + @functools.lru_cache() def bits_per_register(cls) -> int: """Returns the bits_per_register to determine the range of an IntelTranslationLayer.""" return cls._bits_per_register @classproperty + @functools.lru_cache() def minimum_address(cls) -> int: return 0 @classproperty + @functools.lru_cache() def maximum_address(cls) -> int: return (1 << cls._maxvirtaddr) - 1 @@ -119,6 +123,10 @@ def _translate_entry(self, offset: int) -> Tuple[int, int]: position = self._initial_position entry = self._initial_entry + if self.minimum_address > offset > self.maximum_address: + raise exceptions.PagedInvalidAddressException(self.name, offset, position + 1, entry, + "Entry outside virtual address range: " + hex(entry)) + # Run through the offset in various chunks for (name, size, large_page) in self._structure: # Check we're valid From 116df8d166d89d7437b7862466a791ceecdbb3d4 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 22 Dec 2020 17:17:14 +0000 Subject: [PATCH 044/294] Objects: Avoid reconstructing pointed objects --- volatility3/framework/objects/__init__.py | 2 ++ 1 file changed, 2 insertions(+) diff --git 
a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index c3b4f93438..4660b14cc9 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -3,6 +3,7 @@ # import collections +import functools import logging import struct from typing import Any, ClassVar, Dict, List, Iterable, Optional, Tuple, Type, Union as TUnion, overload @@ -307,6 +308,7 @@ def _unmarshall(cls, context: interfaces.context.ContextInterface, data_format: value = int.from_bytes(data, byteorder = endian, signed = signed) return value & mask + @functools.lru_cache(3) def dereference(self, layer_name: Optional[str] = None) -> interfaces.objects.ObjectInterface: """Dereferences the pointer. From 7a2ce9e9997df2abeed292cf7d576797e378cd85 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 27 Jan 2021 00:25:06 +0000 Subject: [PATCH 045/294] Layers: Make linear scanning more efficient Previously we'd chop the entire space up into scan_chunk sized blocks and then chop those up into mapped chunks. The mapping process can much more efficiently provide which blocks exist, so now we ask the layer to map itself in its entirety, and then any chunks of data we get, we cut into scan_chunk blocks. 
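The strategy this commit message describes — ask the layer for its mapped runs once, then slice each contiguous run into scan-sized chunks, rather than slicing the whole space first and testing every slice against the mapping — might be sketched like this. Runs are plain `(start, length)` pairs, and the scanner overlap handling is deliberately omitted:

```python
# Simplified chunking over pre-mapped runs: unmapped holes between runs are
# never visited, which is the efficiency gain the commit describes. This is
# an illustration of the approach, not the _scan_iterator implementation.
from typing import Iterator, List, Tuple


def iter_scan_chunks(runs: List[Tuple[int, int]], chunk_size: int) -> Iterator[Tuple[int, int]]:
    for start, length in runs:
        position = start
        remaining = length
        while remaining > 0:
            size = min(chunk_size, remaining)
            yield position, size
            position += size
            remaining -= size


# Two mapped runs with a large unmapped hole between them: the hole costs nothing.
chunks = list(iter_scan_chunks([(0x0, 0x2500), (0x100000, 0x800)], chunk_size=0x1000))
print(chunks)
```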
--- volatility3/framework/interfaces/layers.py | 70 ++++++++++++---------- 1 file changed, 40 insertions(+), 30 deletions(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 85c6d6c981..57208c629c 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -477,39 +477,49 @@ def _scan_iterator(self, assumed to have no holes """ for (section_start, section_length) in sections: - # For each section, split it into scan size chunks - for chunk_start in range(section_start, section_start + section_length, scanner.chunk_size): - # Shorten it, if we're at the end of the section - chunk_length = min(section_start + section_length - chunk_start, scanner.chunk_size + scanner.overlap) - - # Prev offset keeps track of the end of the previous subchunk - prev_offset = chunk_start - output = [] # type: List[Tuple[str, int, int]] - - # We populate the response based on subchunks that may be mapped all over the place - for mapped in self.mapping(chunk_start, chunk_length, ignore_errors = True): - # We don't bother with the other data in case the data's been processed by a lower layer - offset, sublength, mapped_offset, mapped_length, layer_name = mapped - - # We need to check if the offset is next to the end of the last one (contiguous) - if offset != prev_offset: - # Only yield if we've accumulated output - if len(output): - # Yield all the (joined) items so far - # and the ending point of that subchunk (where we'd gotten to previously) - yield output, prev_offset + chunk_end = section_start + output = [] + + # For each section, find out which bits of its exist and where they map to + # This is faster than cutting the entire space into scan_chunk sized blocks and then + # finding out what exists (particularly if most of the space isn't mapped) + for mapped in self.mapping(section_start, section_length, ignore_errors = True): + offset, sublength, mapped_offset, mapped_length, 
layer_name = mapped + + # Check if this chunk and the previous aren't next to each other, + if len(output) and (offset != chunk_end): + # if so we can ship everything so far, because this must be a new chunk + chunk_end = offset + output[-1][1] + yield output, chunk_end + output = [] + else: + # Otherwise we're in a long run, and we can chunk it up in chunk_size blocks + current_chunk_size = min(sublength, scanner.chunk_size + scanner.overlap) + + # Cut it into scan_chunk + overlap sized chunks (each scan_chunk apart from each other) + while current_chunk_size == scanner.chunk_size + scanner.overlap and current_chunk_size > 0: + if current_chunk_size > 0: + output += [(self.name if not linear else layer_name, + offset if not linear else mapped_offset, current_chunk_size)] + chunk_end = offset + current_chunk_size + # Ship this scan_chunk size block + yield output, chunk_end output = [] + offset += scanner.chunk_size + mapped_offset += scanner.chunk_size + sublength -= scanner.chunk_size + current_chunk_size = min(sublength, scanner.chunk_size + scanner.overlap) - # Shift the marker up to the end of what we just received and add it to the output - prev_offset = offset + sublength + # We should now only have less than chunk_size data in it, but don't forget to ship it off too + if current_chunk_size > 0: + output += [(self.name if not linear else layer_name, offset if not linear else mapped_offset, + current_chunk_size)] - if not linear: - output += [(self.name, offset, sublength)] - else: - output += [(layer_name, mapped_offset, mapped_length)] - # If there's still output left, output it - if len(output): - yield output, prev_offset + # Ship anything we've accumulated + chunk_end = offset + current_chunk_size + + if len(output): + yield output, chunk_end class LayerContainer(collections.abc.Mapping): From f5ddb8880651e104d06db724c2e76a42df1236c4 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 27 Jan 2021 16:48:53 +0000 Subject: [PATCH 046/294] Layers: Scan by 
available and then chunk This changes the order of scanning to make it (potentially) more efficient. A layer can efficiently map (and ignore large segments that don't exist) and then have the available space chunked, rather than creating all the chunks and repeatedly asking the layer whether each can be mapped or not. --- volatility3/framework/interfaces/layers.py | 69 ++++++++++++---------- 1 file changed, 37 insertions(+), 32 deletions(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 57208c629c..4f5314d740 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -477,49 +477,54 @@ def _scan_iterator(self, assumed to have no holes """ for (section_start, section_length) in sections: - chunk_end = section_start output = [] - # For each section, find out which bits of its exist and where they map to + # Hold the offsets of each chunk (including how much has been filled) + chunk_start = chunk_position = 0 + + # For each section, find out which bits of its exists and where they map to # This is faster than cutting the entire space into scan_chunk sized blocks and then # finding out what exists (particularly if most of the space isn't mapped) for mapped in self.mapping(section_start, section_length, ignore_errors = True): offset, sublength, mapped_offset, mapped_length, layer_name = mapped - # Check if this chunk and the previous aren't next to each other, - if len(output) and (offset != chunk_end): - # if so we can ship everything so far, because this must be a new chunk - chunk_end = offset + output[-1][1] - yield output, chunk_end + # Setup the variables for this block + block_start = offset + block_end = offset + sublength + conversion = mapped_offset - offset + + # If this isn't contiguous, start a new chunk + if chunk_position < block_start: + yield output, chunk_position output = [] - else: - # Otherwise we're in a long run, and we can chunk it up in 
chunk_size blocks - current_chunk_size = min(sublength, scanner.chunk_size + scanner.overlap) - - # Cut it into scan_chunk + overlap sized chunks (each scan_chunk apart from each other) - while current_chunk_size == scanner.chunk_size + scanner.overlap and current_chunk_size > 0: - if current_chunk_size > 0: - output += [(self.name if not linear else layer_name, - offset if not linear else mapped_offset, current_chunk_size)] - chunk_end = offset + current_chunk_size - # Ship this scan_chunk size block - yield output, chunk_end - output = [] - offset += scanner.chunk_size - mapped_offset += scanner.chunk_size - sublength -= scanner.chunk_size - current_chunk_size = min(sublength, scanner.chunk_size + scanner.overlap) + chunk_start = chunk_position = block_start - # We should now only have less than chunk_size data in it, but don't forget to ship it off too - if current_chunk_size > 0: - output += [(self.name if not linear else layer_name, offset if not linear else mapped_offset, - current_chunk_size)] + return_name = self.name if not linear else layer_name - # Ship anything we've accumulated - chunk_end = offset + current_chunk_size + # Halfway through a chunk, finish the chunk, then take more + if chunk_position != chunk_start: + chunk_size = min(chunk_position - chunk_start, scanner.chunk_size + scanner.overlap) + output += [(return_name, chunk_position + conversion, chunk_size)] + chunk_start = chunk_position + chunk_size + chunk_position = chunk_start - if len(output): - yield output, chunk_end + # Pack chunks, if we're enter the loop (starting a new chunk) and there's already chunk there, ship it + for chunk_start in range(chunk_position, block_end, scanner.chunk_size): + if output: + yield output, chunk_position + output = [] + chunk_position = chunk_start + # Take from chunk_position as far as far as the block can go, + # or as much left of a scanner chunk as we can + chunk_size = min(block_end - chunk_position, + scanner.chunk_size + scanner.overlap - 
(chunk_position - chunk_start)) + output += [(return_name, chunk_position + conversion, chunk_size)] + chunk_start = chunk_position + chunk_size + chunk_position = chunk_start + + # Ship anything that might be left + if output: + yield output, chunk_position class LayerContainer(collections.abc.Mapping): From c6181d0dcfdcdea00481d563edf35b1b43f9ffdc Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 27 Jan 2021 16:58:13 +0000 Subject: [PATCH 047/294] Layers: Group the linearity settings together --- volatility3/framework/interfaces/layers.py | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 4f5314d740..0cf7e087aa 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -491,7 +491,14 @@ def _scan_iterator(self, # Setup the variables for this block block_start = offset block_end = offset + sublength - conversion = mapped_offset - offset + + # Setup the necessary bits for non-linear mappings + # For linear we give one layer down and mapped offsets (therefore the conversion) + # This saves an tiny amount of time not have to redo lookups we've already done + # For non-linear layers, we give the layer name and the offset in the layer name + # so that the read/conversion occurs properly + conversion = mapped_offset - offset if linear else 0 + return_name = layer_name if linear else self.name # If this isn't contiguous, start a new chunk if chunk_position < block_start: @@ -499,8 +506,6 @@ def _scan_iterator(self, output = [] chunk_start = chunk_position = block_start - return_name = self.name if not linear else layer_name - # Halfway through a chunk, finish the chunk, then take more if chunk_position != chunk_start: chunk_size = min(chunk_position - chunk_start, scanner.chunk_size + scanner.overlap) From bf9329fa93d12a627b88e19d339c68b5a07d7a60 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 7 
Feb 2021 20:34:33 +0000 Subject: [PATCH 048/294] Plugins: netstat yapf and remove unused codeblock --- .../framework/plugins/windows/netstat.py | 186 ++++++++---------- 1 file changed, 82 insertions(+), 104 deletions(-) diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index 0ec50b23de..b6332e6e36 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -56,10 +56,7 @@ def _decode_pointer(self, value): return value @classmethod - def read_pointer(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - offset: int, + def read_pointer(cls, context: interfaces.context.ContextInterface, layer_name: str, offset: int, length: int) -> int: """Reads a pointer at a given offset and returns the address it points to. @@ -76,10 +73,7 @@ def read_pointer(cls, return int.from_bytes(context.layers[layer_name].read(offset, length), "little") @classmethod - def parse_bitmap(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - bitmap_offset: int, + def parse_bitmap(cls, context: interfaces.context.ContextInterface, layer_name: str, bitmap_offset: int, bitmap_size_in_byte: int) -> list: """Parses a given bitmap and looks for each occurence of a 1. 
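A bitmap parse of the kind `parse_bitmap` performs typically reports the index of every set bit in a run of bytes (here, candidate port numbers). This stand-alone sketch illustrates the technique only; it is not the netstat implementation:

```python
# Walk each byte of the bitmap and yield the absolute index of every bit
# that is set. Bit 0 of byte 0 is index 0, bit 0 of byte 1 is index 8, etc.
from typing import List


def parse_bitmap(bitmap: bytes) -> List[int]:
    set_bits = []
    for byte_index, byte in enumerate(bitmap):
        for bit in range(8):
            if byte & (1 << bit):
                set_bits.append(byte_index * 8 + bit)
    return set_bits


# Bits 0 and 1 of byte 0, bit 7 of byte 1 -> indices 0, 1 and 15
print(parse_bitmap(b"\x03\x80"))  # [0, 1, 15]
```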
@@ -164,13 +158,13 @@ def enumerate_structures_by_port(cls, # if the same port is used on different interfaces multiple objects are created # those can be found by following the pointer within the object's `Next` field until it is empty while curr_obj.Next: - curr_obj = context.object(obj_name, layer_name = layer_name, offset = cls._decode_pointer(curr_obj.Next) - ptr_offset) + curr_obj = context.object(obj_name, + layer_name = layer_name, + offset = cls._decode_pointer(curr_obj.Next) - ptr_offset) yield curr_obj @classmethod - def get_tcpip_module(cls, - context: interfaces.context.ContextInterface, - layer_name: str, + def get_tcpip_module(cls, context: interfaces.context.ContextInterface, layer_name: str, nt_symbols: str) -> interfaces.objects.ObjectInterface: """Uses `windows.modules` to find tcpip.sys in memory. @@ -188,13 +182,8 @@ def get_tcpip_module(cls, return mod @classmethod - def parse_hashtable(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - ht_offset: int, - ht_length: int, - alignment: int, - net_symbol_table: str) -> list: + def parse_hashtable(cls, context: interfaces.context.ContextInterface, layer_name: str, ht_offset: int, + ht_length: int, alignment: int, net_symbol_table: str) -> list: """Parses a hashtable quick and dirty. 
Args: @@ -210,20 +199,17 @@ def parse_hashtable(cls, for index in range(ht_length): current_addr = ht_offset + index * alignment current_pointer = context.object(net_symbol_table + constants.BANG + "pointer", - layer_name = layer_name, - offset = current_addr) + layer_name = layer_name, + offset = current_addr) # check if addr of pointer is equal to the value pointed to if current_pointer.vol.offset == current_pointer: continue yield current_pointer @classmethod - def parse_partitions(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - net_symbol_table: str, - tcpip_symbol_table: str, - tcpip_module_offset: int) -> Iterable[interfaces.objects.ObjectInterface]: + def parse_partitions(cls, context: interfaces.context.ContextInterface, layer_name: str, net_symbol_table: str, + tcpip_symbol_table: str, + tcpip_module_offset: int) -> Iterable[interfaces.objects.ObjectInterface]: """Parses tcpip.sys's PartitionTable containing established TCP connections. The amount of Partition depends on the value of the symbol `PartitionCount` and correlates with the maximum processor count (refer to Art of Memory Forensics, chapter 11). 
@@ -246,42 +232,38 @@ def parse_partitions(cls, obj_name = net_symbol_table + constants.BANG + "_TCP_ENDPOINT" # part_table_symbol is the offset within tcpip.sys which contains the address of the partition table itself - part_table_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionTable").address - part_count_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "PartitionCount").address + part_table_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + + "PartitionTable").address + part_count_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + + "PartitionCount").address part_table_addr = context.object(net_symbol_table + constants.BANG + "pointer", - layer_name = layer_name, - offset = tcpip_module_offset + part_table_symbol) + layer_name = layer_name, + offset = tcpip_module_offset + part_table_symbol) # part_table is the actual partition table offset and consists out of a dynamic amount of _PARTITION objects part_table = context.object(net_symbol_table + constants.BANG + "_PARTITION_TABLE", layer_name = layer_name, offset = part_table_addr) - part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + part_count_symbol, 1), "little") + part_count = int.from_bytes(context.layers[layer_name].read(tcpip_module_offset + part_count_symbol, 1), + "little") part_table.Partitions.count = part_count - vollog.debug("Found TCP connection PartitionTable @ 0x{:x} (partition count: {})".format(part_table_addr, part_count)) + vollog.debug("Found TCP connection PartitionTable @ 0x{:x} (partition count: {})".format( + part_table_addr, part_count)) entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("ListEntry") for ctr, partition in enumerate(part_table.Partitions): vollog.debug("Parsing partition {}".format(ctr)) if partition.Endpoints.NumEntries > 0: - for endpoint_entry in cls.parse_hashtable(context, - layer_name, 
- partition.Endpoints.Directory, - partition.Endpoints.TableSize, - alignment, - net_symbol_table): + for endpoint_entry in cls.parse_hashtable(context, layer_name, partition.Endpoints.Directory, + partition.Endpoints.TableSize, alignment, net_symbol_table): endpoint = context.object(obj_name, layer_name = layer_name, offset = endpoint_entry - entry_offset) yield endpoint @classmethod - def create_tcpip_symbol_table(cls, - context: interfaces.context.ContextInterface, - config_path: str, - layer_name: str, - tcpip_module_offset: int, - tcpip_module_size: int) -> str: + def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, config_path: str, layer_name: str, + tcpip_module_offset: int, tcpip_module_size: int) -> str: """Creates symbol table for the current image's tcpip.sys driver. Searches the memory section of the loaded tcpip.sys module for its PDB GUID @@ -299,37 +281,31 @@ def create_tcpip_symbol_table(cls, """ guids = list( - pdbutil.PDBUtility.pdbname_scan( - context, - layer_name, - context.layers[layer_name].page_size, - [b"tcpip.pdb"], - start=tcpip_module_offset, - end=tcpip_module_offset + tcpip_module_size - ) - ) + pdbutil.PDBUtility.pdbname_scan(context, + layer_name, + context.layers[layer_name].page_size, [b"tcpip.pdb"], + start = tcpip_module_offset, + end = tcpip_module_offset + tcpip_module_size)) if not guids: - raise exceptions.VolatilityException("Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format(tcpip_module.DllBase)) + raise exceptions.VolatilityException("Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format( + tcpip_module.DllBase)) guid = guids[0] vollog.debug("Found {}: {}-{}".format(guid["pdb_name"], guid["GUID"], guid["age"])) - return pdbutil.PDBUtility.load_windows_symbol_table(context, - guid["GUID"], - guid["age"], - guid["pdb_name"], - "volatility3.framework.symbols.intermed.IntermediateSymbolTable", - config_path="tcpip") + return 
pdbutil.PDBUtility.load_windows_symbol_table( + context, + guid["GUID"], + guid["age"], + guid["pdb_name"], + "volatility3.framework.symbols.intermed.IntermediateSymbolTable", + config_path = "tcpip") @classmethod - def find_port_pools(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - net_symbol_table: str, - tcpip_symbol_table: str, - tcpip_module_offset: int) -> (int, int): + def find_port_pools(cls, context: interfaces.context.ContextInterface, layer_name: str, net_symbol_table: str, + tcpip_symbol_table: str, tcpip_module_offset: int) -> (int, int): """Finds the given image's port pools. Older Windows versions (presumably < Win10 build 14251) use driver symbols called `UdpPortPool` and `TcpPortPool` which point towards the pools. Newer Windows versions use `UdpCompartmentSet` and `TcpCompartmentSet`, which we first have to translate into @@ -350,13 +326,13 @@ def find_port_pools(cls, # older Windows versions upp_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "UdpPortPool").address upp_addr = context.object(net_symbol_table + constants.BANG + "pointer", - layer_name = layer_name, - offset = tcpip_module_offset + upp_symbol) + layer_name = layer_name, + offset = tcpip_module_offset + upp_symbol) tpp_symbol = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpPortPool").address tpp_addr = context.object(net_symbol_table + constants.BANG + "pointer", - layer_name = layer_name, - offset = tcpip_module_offset + tpp_symbol) + layer_name = layer_name, + offset = tcpip_module_offset + tpp_symbol) elif "UdpCompartmentSet" in context.symbol_space[tcpip_symbol_table].symbols: # newer Windows versions since 10.14xxx @@ -364,22 +340,27 @@ def find_port_pools(cls, tcs = context.symbol_space.get_symbol(tcpip_symbol_table + constants.BANG + "TcpCompartmentSet").address ucs_offset = context.object(net_symbol_table + constants.BANG + "pointer", - layer_name = layer_name, - offset = tcpip_module_offset 
+ ucs) + layer_name = layer_name, + offset = tcpip_module_offset + ucs) tcs_offset = context.object(net_symbol_table + constants.BANG + "pointer", - layer_name = layer_name, - offset = tcpip_module_offset + tcs) + layer_name = layer_name, + offset = tcpip_module_offset + tcs) - ucs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = ucs_offset) + ucs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", + layer_name = layer_name, + offset = ucs_offset) upp_addr = ucs_obj.InetCompartment.ProtocolCompartment.PortPool - tcs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", layer_name = layer_name, offset = tcs_offset) + tcs_obj = context.object(net_symbol_table + constants.BANG + "_INET_COMPARTMENT_SET", + layer_name = layer_name, + offset = tcs_offset) tpp_addr = tcs_obj.InetCompartment.ProtocolCompartment.PortPool else: # this branch should not be reached. - raise exceptions.SymbolError("UdpPortPool", tcpip_symbol_table, - "Neither UdpPortPool nor UdpCompartmentSet found in {} table".format(tcpip_symbol_table)) + raise exceptions.SymbolError( + "UdpPortPool", tcpip_symbol_table, + "Neither UdpPortPool nor UdpCompartmentSet found in {} table".format(tcpip_symbol_table)) vollog.debug("Found PortPools @ 0x{:x} (UDP) && 0x{:x} (TCP)".format(upp_addr, tpp_addr)) return upp_addr, tpp_addr @@ -409,19 +390,27 @@ def list_sockets(cls, """ # first, TCP endpoints by parsing the partition table - for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset): + for endpoint in cls.parse_partitions(context, layer_name, net_symbol_table, tcpip_symbol_table, + tcpip_module_offset): yield endpoint # then, towards the UDP and TCP port pools # first, find their addresses - upp_addr, tpp_addr = cls.find_port_pools(context, layer_name, net_symbol_table, tcpip_symbol_table, tcpip_module_offset) + upp_addr, 
tpp_addr = cls.find_port_pools(context, layer_name, net_symbol_table, tcpip_symbol_table, + tcpip_module_offset) # create port pool objects at the detected address and parse the port bitmap - upp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = upp_addr) - udpa_ports = cls.parse_bitmap(context, layer_name, upp_obj.PortBitMap.Buffer, upp_obj.PortBitMap.SizeOfBitMap // 8) - - tpp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", layer_name = layer_name, offset = tpp_addr) - tcpl_ports = cls.parse_bitmap(context, layer_name, tpp_obj.PortBitMap.Buffer, tpp_obj.PortBitMap.SizeOfBitMap // 8) + upp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", + layer_name = layer_name, + offset = upp_addr) + udpa_ports = cls.parse_bitmap(context, layer_name, upp_obj.PortBitMap.Buffer, + upp_obj.PortBitMap.SizeOfBitMap // 8) + + tpp_obj = context.object(net_symbol_table + constants.BANG + "_INET_PORT_POOL", + layer_name = layer_name, + offset = tpp_addr) + tcpl_ports = cls.parse_bitmap(context, layer_name, tpp_obj.PortBitMap.Buffer, + tpp_obj.PortBitMap.SizeOfBitMap // 8) vollog.debug("Found TCP Ports: {}".format(tcpl_ports)) vollog.debug("Found UDP Ports: {}".format(udpa_ports)) @@ -444,22 +433,15 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): """ Generates the network objects for use in rendering. 
""" netscan_symbol_table = netscan.NetScan.create_netscan_symbol_table(self.context, self.config["primary"], - self.config["nt_symbols"], self.config_path) + self.config["nt_symbols"], self.config_path) tcpip_module = self.get_tcpip_module(self.context, self.config["primary"], self.config["nt_symbols"]) - tcpip_symbol_table = self.create_tcpip_symbol_table(self.context, - self.config_path, - self.config["primary"], - tcpip_module.DllBase, - tcpip_module.SizeOfImage) + tcpip_symbol_table = self.create_tcpip_symbol_table(self.context, self.config_path, self.config["primary"], + tcpip_module.DllBase, tcpip_module.SizeOfImage) - for netw_obj in self.list_sockets(self.context, - self.config['primary'], - self.config['nt_symbols'], - netscan_symbol_table, - tcpip_module.DllBase, - tcpip_symbol_table): + for netw_obj in self.list_sockets(self.context, self.config['primary'], self.config['nt_symbols'], + netscan_symbol_table, tcpip_module.DllBase, tcpip_symbol_table): # objects passed pool header constraints. check for additional constraints if strict flag is set. if not show_corrupt_results and not netw_obj.is_valid(): @@ -482,8 +464,8 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): elif netw_obj.get_address_family() == network.AF_INET6: proto = "TCPv6" else: - vollog.debug("TCP Endpoint @ 0x{:2x} has unknown address family 0x{:x}".format(netw_obj.vol.offset, - netw_obj.get_address_family())) + vollog.debug("TCP Endpoint @ 0x{:2x} has unknown address family 0x{:x}".format( + netw_obj.vol.offset, netw_obj.get_address_family())) proto = "TCPv?" 
try: @@ -522,10 +504,6 @@ def generate_timeline(self): # Skip network connections without creation time if not isinstance(row_dict["Created"], datetime.datetime): continue - row_data = [ - "N/A" if isinstance(i, renderers.UnreadableValue) or isinstance(i, renderers.UnparsableValue) else i - for i in row_data - ] description = "Network connection: Process {} {} Local Address {}:{} " \ "Remote Address {}:{} State {} Protocol {} ".format(row_dict["PID"], row_dict["Owner"], row_dict["LocalAddr"], row_dict["LocalPort"], From 130ae7e446d1cc1ac4957418e2b8a57a48b77733 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 7 Feb 2021 20:50:16 +0000 Subject: [PATCH 049/294] Interfaces: Fix LGTM warning without changing behaviour --- volatility3/framework/interfaces/configuration.py | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/volatility3/framework/interfaces/configuration.py b/volatility3/framework/interfaces/configuration.py index 770ac0b4e1..396926b68d 100644 --- a/volatility3/framework/interfaces/configuration.py +++ b/volatility3/framework/interfaces/configuration.py @@ -477,6 +477,11 @@ def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) self._cls = None + def __eq__(self, other): + # We can just use super because it checks all members of `__dict__` + # This appeases LGTM and does the right thing + return super().__eq__(other) + @property def cls(self) -> Type: """Contains the actual chosen class based on the configuration value's @@ -525,6 +530,11 @@ def __init__(self, *args, **kwargs) -> None: self.add_requirement(ClassRequirement("class", "Class of the constructable requirement")) self._current_class_requirements = set() + def __eq__(self, other): + # We can just use super because it checks all members of `__dict__` + # This appeases LGTM and does the right thing + return super().__eq__(other) + @abstractmethod def construct(self, context: 'interfaces.context.ContextInterface', config_path: str) -> None: """Method for
constructing within the context any required elements From 2de3cfcce2b25352b0eb2e3a952ce3ebef97d3fb Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 16 Feb 2021 16:51:29 +0000 Subject: [PATCH 050/294] Layers: Metadata inheritance wasn't per-instance --- volatility3/framework/interfaces/layers.py | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 0cf7e087aa..2fc8bffbc0 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -99,10 +99,7 @@ class DataLayerInterface(interfaces.configuration.ConfigurableInterface, metacla accesses a data source and exposes it within volatility. """ - _direct_metadata = collections.ChainMap({}, { - 'architecture': 'Unknown', - 'os': 'Unknown' - }) # type: collections.ChainMap + _direct_metadata = interfaces.objects.ReadOnlyMapping({'architecture': 'Unknown', 'os': 'Unknown'}) def __init__(self, context: 'interfaces.context.ContextInterface', @@ -111,8 +108,7 @@ def __init__(self, metadata: Optional[Dict[str, Any]] = None) -> None: super().__init__(context, config_path) self._name = name - if metadata: - self._direct_metadata.update(metadata) + self._metadata = metadata or {} # Standard attributes @@ -360,7 +356,7 @@ def build_configuration(self) -> interfaces.configuration.HierarchicalDict: def metadata(self) -> Mapping: """Returns a ReadOnly copy of the metadata published by this layer.""" maps = [self.context.layers[layer_name].metadata for layer_name in self.dependencies] - return interfaces.objects.ReadOnlyMapping(collections.ChainMap({}, self._direct_metadata, *maps)) + return interfaces.objects.ReadOnlyMapping(collections.ChainMap(self._metadata, self._direct_metadata, *maps)) class TranslationLayerInterface(DataLayerInterface, metaclass = ABCMeta): From 00a4d5ea4f572ab16ea23216e0dabd232cb5294c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 16 Feb 2021 
18:03:21 +0000 Subject: [PATCH 051/294] Layers: Fix circular import issue --- volatility3/framework/interfaces/layers.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 2fc8bffbc0..29b2922a78 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -99,7 +99,7 @@ class DataLayerInterface(interfaces.configuration.ConfigurableInterface, metacla accesses a data source and exposes it within volatility. """ - _direct_metadata = interfaces.objects.ReadOnlyMapping({'architecture': 'Unknown', 'os': 'Unknown'}) + _direct_metadata = {'architecture': 'Unknown', 'os': 'Unknown'} def __init__(self, context: 'interfaces.context.ContextInterface', From f562c3e864f459a744f17d748a0a3f4bf2fe6b19 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Feb 2021 02:01:36 +0000 Subject: [PATCH 052/294] Plugins: Layerwriter should not require an architecture --- volatility3/framework/plugins/layerwriter.py | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/volatility3/framework/plugins/layerwriter.py b/volatility3/framework/plugins/layerwriter.py index 721845fd3d..3597792d09 100644 --- a/volatility3/framework/plugins/layerwriter.py +++ b/volatility3/framework/plugins/layerwriter.py @@ -23,9 +23,7 @@ class LayerWriter(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), + requirements.TranslationLayerRequirement(name = 'primary', description = 'Memory layer for the kernel'), requirements.IntRequirement(name = 'block_size', description = "Size of blocks to copy over", default = cls.default_block_size, From 8b6c6b604d9d2f5bbf6d25cba09151c40698ed2a Mon Sep 17 00:00:00 2001 From: Mike Auty 
Date: Fri, 19 Feb 2021 01:17:34 +0000 Subject: [PATCH 053/294] Layers: Allow handler to avoid caching --- volatility3/framework/layers/resources.py | 25 ++++++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index 63cfd95707..d25c8d5f1d 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -13,7 +13,7 @@ import urllib.parse import urllib.request import zipfile -from typing import Optional, Any, IO +from typing import Optional, Any, IO, List from urllib import error from volatility3 import framework @@ -79,7 +79,15 @@ def uses_cache(self, url: str) -> bool: """Determines whether a URLs contents should be cached""" parsed_url = urllib.parse.urlparse(url) - return not parsed_url.scheme in ['file', 'jar'] + return not parsed_url.scheme in self._non_cached_schemes() + + @staticmethod + def _non_cached_schemes() -> List[str]: + """Returns the list of schemes not to be cached""" + result = ['file'] + for clazz in framework.class_subclasses(VolatilityHandler): + result += clazz.non_cached_schemes() + return result # Current urllib.request.urlopen returns Any, so we do the same def open(self, url: str, mode: str = "rb") -> Any: @@ -201,7 +209,14 @@ def open(self, url: str, mode: str = "rb") -> Any: return curfile -class JarHandler(urllib.request.BaseHandler): +class VolatilityHandler(urllib.request.BaseHandler): + + @classmethod + def non_cached_schemes(cls) -> List[str]: + return [] + + +class JarHandler(VolatilityHandler): """Handles the jar scheme for URIs. 
Reference used for the schema syntax: @@ -211,6 +226,10 @@ class JarHandler(urllib.request.BaseHandler): http://developer.java.sun.com/developer/onlineTraining/protocolhandlers/ """ + @classmethod + def non_cached_schemes(cls) -> List[str]: + return ['jar'] + @staticmethod def default_open(req: urllib.request.Request) -> Optional[Any]: """Handles the request if it's the jar scheme.""" From b8c8e0f852e9f0bc627c3e7c6c8a759b01010c03 Mon Sep 17 00:00:00 2001 From: AsafEitani Date: Tue, 23 Feb 2021 12:19:12 +0200 Subject: [PATCH 054/294] Update cachedump.py fix bytes to str concat --- volatility3/framework/plugins/windows/cachedump.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/cachedump.py b/volatility3/framework/plugins/windows/cachedump.py index 42bb93a732..b436db7539 100644 --- a/volatility3/framework/plugins/windows/cachedump.py +++ b/volatility3/framework/plugins/windows/cachedump.py @@ -43,11 +43,11 @@ def decrypt_hash(self, edata, nlkm, ch, xp): else: # based on Based on code from http://lab.mediaservice.net/code/cachedump.rb aes = AES.new(nlkm[16:32], AES.MODE_CBC, ch) - data = "" + data = b"" for i in range(0, len(edata), 16): buf = edata[i:i + 16] if len(buf) < 16: - buf += (16 - len(buf)) * "\00" + buf += (16 - len(buf)) * b"\00" data += aes.decrypt(buf) return data From cf3ad1a6ebf13eca492d7764d8520d70fa56b0ac Mon Sep 17 00:00:00 2001 From: AsafEitani Date: Tue, 23 Feb 2021 12:28:17 +0200 Subject: [PATCH 055/294] changing PID field names Changing the pid name field of the result to match the PID of all other plugins --- volatility3/framework/plugins/windows/svcscan.py | 2 +- volatility3/framework/plugins/windows/vadyarascan.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/svcscan.py b/volatility3/framework/plugins/windows/svcscan.py index 4820e5735b..1a035901a3 100644 --- a/volatility3/framework/plugins/windows/svcscan.py +++ 
b/volatility3/framework/plugins/windows/svcscan.py @@ -160,7 +160,7 @@ def run(self): return renderers.TreeGrid([ ('Offset', format_hints.Hex), ('Order', int), - ('Pid', int), + ('PID', int), ('Start', str), ('State', str), ('Type', str), diff --git a/volatility3/framework/plugins/windows/vadyarascan.py b/volatility3/framework/plugins/windows/vadyarascan.py index cdb349cf8e..248a2545b4 100644 --- a/volatility3/framework/plugins/windows/vadyarascan.py +++ b/volatility3/framework/plugins/windows/vadyarascan.py @@ -83,5 +83,5 @@ def get_vad_maps(task: interfaces.objects.ObjectInterface) -> Iterable[Tuple[int yield (start, end - start) def run(self): - return renderers.TreeGrid([('Offset', format_hints.Hex), ('Pid', int), ('Rule', str), ('Component', str), + return renderers.TreeGrid([('Offset', format_hints.Hex), ('PID', int), ('Rule', str), ('Component', str), ('Value', bytes)], self._generator()) From 8131a9e817ee847ab59ed7c47bdd9b536178513c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 23 Feb 2021 14:46:43 +0000 Subject: [PATCH 056/294] Layers: Add in LeechCore physical file support --- setup.py | 1 + volatility3/framework/layers/leechcore.py | 169 ++++++++++++++++++++++ 2 files changed, 170 insertions(+) create mode 100644 volatility3/framework/layers/leechcore.py diff --git a/setup.py b/setup.py index ea0ddb8ddf..62f5bd2d54 100644 --- a/setup.py +++ b/setup.py @@ -39,6 +39,7 @@ }, install_requires = ["pefile"], extras_require = { + 'leechcorepyc': ["leechcorepyc>=2.4.0"], 'jsonschema': ["jsonschema>=2.3.0"], 'yara': ["yara-python>=3.8.0"], 'crypto': ["pycryptodome>=3"], diff --git a/volatility3/framework/layers/leechcore.py b/volatility3/framework/layers/leechcore.py new file mode 100644 index 0000000000..a2ccf48b5e --- /dev/null +++ b/volatility3/framework/layers/leechcore.py @@ -0,0 +1,169 @@ +import io +import logging +import urllib.parse +from typing import Optional, Any, List + +import leechcorepyc + +from volatility3.framework import exceptions +from 
volatility3.framework.layers import resources + +vollog = logging.getLogger(__file__) + + +class LeechCoreFile(io.RawIOBase): + """Class to mimic python-native file access to a LeechCore memory space""" + + _leechcore = None + + def __init__(self, leechcore_device): + self._chunk_size = 0x1000000 + self._device = leechcore_device + self._cursor = 0 + self._handle = None + self._pad = True + self._chunk_size = 0x1000000 + + @property + def maxaddr(self): + return self.handle.maxaddr + + @property + def handle(self): + """The actual LeechCore file object returned by leechcorepyc + + Accessing this attribute will create/attach the handle if it hasn't already been opened + """ + if not self._handle: + try: + self._handle = leechcorepyc.LeechCore(self._device) + except TypeError: + raise IOError("Unable to open LeechCore device {}".format(self._device)) + return self._handle + + def fileno(self): + raise OSError + + def flush(self): + pass + + def isatty(self): + return False + + def readable(self): + """This returns whether the handle is open + + This doesn't access self.handle so that it doesn't accidentally attempt to open the device + """ + return bool(self._handle) + + def seek(self, offset, whence = io.SEEK_SET): + if whence == io.SEEK_SET: + self._cursor = offset + elif whence == io.SEEK_CUR: + self._cursor += offset + elif whence == io.SEEK_END: + self._cursor = self.maxaddr + offset + + def tell(self): + """Return how far into the memory we are""" + return self._cursor + + def writable(self): + """Leechcore supports writing, so this is always true""" + return True + + def writelines(self, lines: List[bytes]): + return self.write(b"".join(lines)) + + def in_memmap(self, start, size): + chunk_start = start + chunk_size = size + output = [] + for entry in self.handle.memmap: + + if entry['base'] + entry['size'] <= chunk_start or entry['base'] >= chunk_start + chunk_size: + continue + output += [(max(entry['base'], chunk_start), min(entry['size'], chunk_size))] + 
chunk_start = output[-1][0] + output[-1][1] + chunk_size = max(0, size - chunk_start) + + if chunk_size <= 0: + break + return output + + def write(self, b: bytes): + result = self.handle.write(self._cursor, b) + self._cursor += len(b) + return result + + def read(self, size: int = -1) -> bytes: + """We ask leechcore to pad the data, because otherwise determining holes in the underlying file would + be extremely inefficient, borderline impossible, to do consistently""" + data = self.handle.read(self._cursor, size, True) + + if len(data) > size: + data = data[:size] + else: + data = data + b'\x00' * (size - len(data)) + self._cursor += len(data) + if not len(data): + raise exceptions.InvalidAddressException('LeechCore layer read failure', self._cursor + len(data)) + return data + + def readline(self, __size: Optional[int] = ...) -> bytes: + data = b'' + while __size > self._chunk_size or __size < 0: + data += self.read(self._chunk_size) + index = data.find(b"\n") + __size -= self._chunk_size + if index >= 0: + __size = 0 + break + data += self.read(__size) + index = data.find(b"\n") + return data[:index] + + def readlines(self, __hint: int = ...) -> List[bytes]: + counter = 0 + result = [] + while counter < __hint or __hint < 0: + line = self.readline() + counter += len(line) + result += [line] + return result + + def readall(self) -> bytes: + return self.read() + + def readinto(self, b: bytearray) -> Optional[int]: + data = self.read() + for index in range(len(data)): + b[index] = data[index] + return len(data) + + def close(self): + if self._handle: + self._handle.close() + self._handle = None + + def closed(self): + return self._handle + + +class LeechCoreHandler(resources.VolatilityHandler): + """Handler for the invented `leechcore` scheme.
This is an unofficial scheme and not registered with IANA + """ + + @classmethod + def non_cached_schemes(cls) -> List[str]: + """We need to turn caching *off* for a live filesystem""" + return ['leechcore'] + + @staticmethod + def default_open(req: urllib.request.Request) -> Optional[Any]: + """Handles the request if it's the leechcore scheme.""" + if req.type == 'leechcore': + device_uri = '://'.join(req.full_url.split('://')[1:]) + return LeechCoreFile(device_uri) + return None From e18e9700f2011860bdb97f1399ad82593d18407b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 24 Feb 2021 01:13:49 +0000 Subject: [PATCH 057/294] Core: Improve typing across codebase --- volatility3/cli/volargparse.py | 22 +++++++---- volatility3/cli/volshell/generic.py | 20 +++++----- volatility3/framework/__init__.py | 8 ++-- volatility3/framework/automagic/__init__.py | 2 +- .../framework/configuration/requirements.py | 7 +++- volatility3/framework/exceptions.py | 4 +- .../framework/interfaces/configuration.py | 12 ++++-- volatility3/framework/interfaces/layers.py | 4 +- volatility3/framework/plugins/mac/lsmod.py | 10 +++-- .../framework/plugins/windows/cachedump.py | 35 +++++++++++------ .../framework/plugins/windows/dumpfiles.py | 7 ++-- .../framework/plugins/windows/hashdump.py | 38 ++++++++----------- .../framework/plugins/windows/lsadump.py | 13 ++++--- .../framework/plugins/windows/netscan.py | 4 +- .../framework/plugins/windows/netstat.py | 18 ++++----- .../framework/plugins/windows/vadinfo.py | 6 +-- volatility3/framework/renderers/__init__.py | 12 +++--- .../framework/renderers/format_hints.py | 11 ++++-- .../framework/symbols/windows/pdbutil.py | 2 +- 19 files changed, 133 insertions(+), 102 deletions(-) diff --git a/volatility3/cli/volargparse.py b/volatility3/cli/volargparse.py index 3935d912f0..8f239867fc 100644 --- a/volatility3/cli/volargparse.py +++ b/volatility3/cli/volargparse.py @@ -5,7 +5,8 @@ import argparse import gettext import re -from typing import List 
+from typing import List, Optional, Sequence, Any, Union + # This effectively overrides/monkeypatches the core argparse module to provide more helpful output around choices # We shouldn't really steal a private member from argparse, but otherwise we're just duplicating code @@ -21,15 +22,22 @@ class HelpfulSubparserAction(argparse._SubParsersAction): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) # We don't want the action self-check to kick in, so we remove the choices list, the check happens in __call__ - self.choices = None + self.choices = None # type: ignore def __call__(self, - parser: 'HelpfulArgParser', + parser: argparse.ArgumentParser, namespace: argparse.Namespace, - values: List[str], - option_string: None = None) -> None: - parser_name = values[0] - arg_strings = values[1:] + values: Union[str, Sequence[Any], None], + option_string: Optional[str] = None) -> None: + + parser_name = '' + arg_strings = [] # type: List[str] + if values is not None: + for value in values: + if not parser_name: + parser_name = value + else: + arg_strings += [value] # set the parser name if requested if self.dest != argparse.SUPPRESS: diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py index a1fa7c172d..78ab9ba2c6 100644 --- a/volatility3/cli/volshell/generic.py +++ b/volatility3/cli/volshell/generic.py @@ -8,13 +8,13 @@ import string import struct import sys -from typing import Any, Dict, List, Optional, Tuple, Union, Type +from typing import Any, Dict, List, Optional, Tuple, Union, Type, Iterable from urllib import request, parse from volatility3.cli import text_renderer from volatility3.framework import renderers, interfaces, objects, plugins, exceptions from volatility3.framework.configuration import requirements -from volatility3.framework.layers import intel, physical +from volatility3.framework.layers import intel, physical, resources try: import capstone @@ -57,7 +57,7 @@ def run(self, 
additional_locals: Dict[str, Any] = None) -> interfaces.renderers. Return a TreeGrid but this is always empty since the point of this plugin is to run interactively """ - self._current_layer = self.config['primary'] + self.__current_layer = self.config['primary'] # Try to enable tab completion try: @@ -174,13 +174,13 @@ def _ascii_bytes(bytes): @property def current_layer(self): - return self._current_layer + return self.__current_layer def change_layer(self, layer_name = None): """Changes the current default layer""" if not layer_name: layer_name = self.config['primary'] - self._current_layer = layer_name + self.__current_layer = layer_name sys.ps1 = "({}) >>> ".format(self.current_layer) def display_bytes(self, offset, count = 128, layer_name = None): @@ -336,7 +336,7 @@ def display_symbols(self, symbol_table: str = None): len_offset = len(hex(symbol.address)) print(" " * (longest_offset - len_offset), hex(symbol.address), " ", symbol.name) - def run_script(self, location: str = None): + def run_script(self, location: str): """Runs a python script within the context of volshell""" if not parse.urlparse(location).scheme: location = "file:" + request.pathname2url(location) @@ -346,7 +346,7 @@ def run_script(self, location: str = None): self.__console.runsource(fp.read(), symbol = 'exec') print("\nCode complete") - def load_file(self, location: str = None): + def load_file(self, location: str): """Loads a file into a Filelayer and returns the name of the layer""" layer_name = self.context.layers.free_layer_name() if not parse.urlparse(location).scheme: @@ -399,10 +399,10 @@ def __init__(self, preferred_name: str): interfaces.plugins.FileHandlerInterface.__init__(self, preferred_name) super().__init__() - def writelines(self, lines): + def writelines(self, lines: Iterable[bytes]): """Dummy method""" pass - def write(self, data): + def write(self, b: bytes): """Dummy method""" - return len(data) + return len(b) diff --git a/volatility3/framework/__init__.py 
b/volatility3/framework/__init__.py index e8c67588cb..b0a2f2b5c6 100644 --- a/volatility3/framework/__init__.py +++ b/volatility3/framework/__init__.py @@ -53,22 +53,22 @@ def require_interface_version(*args) -> None: ".".join([str(x) for x in interface_version()[0:1]]), ".".join([str(x) for x in args[0:2]]))) -class noninheritable(object): +class NonInheritable(object): def __init__(self, value: Any, cls: Type) -> None: self.default_value = value self.cls = cls - def __get__(self, obj: Any, type: Type = None) -> Any: + def __get__(self, obj: Any, get_type: Type = None) -> Any: if type == self.cls: if hasattr(self.default_value, '__get__'): - return self.default_value.__get__(obj, type) + return self.default_value.__get__(obj, get_type) return self.default_value raise AttributeError def hide_from_subclasses(cls: Type) -> Type: - cls.hidden = noninheritable(True, cls) + cls.hidden = NonInheritable(True, cls) return cls diff --git a/volatility3/framework/automagic/__init__.py b/volatility3/framework/automagic/__init__.py index be1badb6b5..844d9a273a 100644 --- a/volatility3/framework/automagic/__init__.py +++ b/volatility3/framework/automagic/__init__.py @@ -48,7 +48,7 @@ def available(context: interfaces.context.ContextInterface) -> List[interfaces.a def choose_automagic( - automagics: List[interfaces.automagic.AutomagicInterface], + automagics: List[Type[interfaces.automagic.AutomagicInterface]], plugin: Type[interfaces.plugins.PluginInterface]) -> List[Type[interfaces.automagic.AutomagicInterface]]: """Chooses which automagics to run, maintaining the order they were handed in.""" diff --git a/volatility3/framework/configuration/requirements.py b/volatility3/framework/configuration/requirements.py index 2a8881ace0..20a7af0f6a 100644 --- a/volatility3/framework/configuration/requirements.py +++ b/volatility3/framework/configuration/requirements.py @@ -208,6 +208,9 @@ def construct(self, context: interfaces.context.ContextInterface, config_path: s num_layers_path = 
interfaces.configuration.path_join(new_config_path, "number_of_elements") number_of_layers = context.config[num_layers_path] + if not isinstance(number_of_layers, int): + raise TypeError("Number of layers must be an integer") + # Build all the layers that can be built for i in range(number_of_layers): layer_req = self.requirements.get(self.name + str(i), None) @@ -363,6 +366,8 @@ def construct(self, context: interfaces.context.ContextInterface, config_path: s raise TypeError("Class requirement is not of type ClassRequirement: {}".format( repr(self.requirements["class"]))) cls = self.requirements["class"].cls + if cls is None: + return None node_config = context.config.branch(config_path) for req in cls.get_requirements(): if req.name in node_config.data and req.name != "class": @@ -392,7 +397,7 @@ def __init__(self, super().__init__(name = name, description = description, default = default, optional = optional) if component is None: raise TypeError("Component cannot be None") - self._component = component + self._component = component # type: Type[interfaces.configuration.VersionableInterface] if version is None: raise TypeError("Version cannot be None") self._version = version diff --git a/volatility3/framework/exceptions.py b/volatility3/framework/exceptions.py index 117933bbd5..7f381b1662 100644 --- a/volatility3/framework/exceptions.py +++ b/volatility3/framework/exceptions.py @@ -96,6 +96,6 @@ def __init__(self, unsatisfied: Dict[str, interfaces.configuration.RequirementIn class MissingModuleException(VolatilityException): - def __init__(self, module: str, *args, **kwargs) -> None: - super().__init__(*args, **kwargs) + def __init__(self, module: str, *args) -> None: + super().__init__(*args) self.module = module diff --git a/volatility3/framework/interfaces/configuration.py b/volatility3/framework/interfaces/configuration.py index 396926b68d..8b02d0d930 100644 --- a/volatility3/framework/interfaces/configuration.py +++ 
b/volatility3/framework/interfaces/configuration.py @@ -23,7 +23,7 @@ import string import sys from abc import ABCMeta, abstractmethod -from typing import Any, ClassVar, Dict, Generator, Iterator, List, Optional, Type, Union, Tuple +from typing import Any, ClassVar, Dict, Generator, Iterator, List, Optional, Type, Union, Tuple, Set from volatility3 import classproperty from volatility3.framework import constants, interfaces @@ -186,7 +186,8 @@ def _sanitize_value(self, value: Any) -> ConfigSimpleType: element_value = self._sanitize_value(element) if isinstance(element_value, list): raise TypeError("Configuration list types cannot contain list types") - new_list.append(element_value) + if element_value is not None: + new_list.append(element_value) return new_list elif value is None: return None @@ -483,7 +484,7 @@ def __eq__(self, other): return super().__eq__(other) @property - def cls(self) -> Type: + def cls(self) -> Optional[Type]: """Contains the actual chosen class based on the configuration value's class name.""" return self._cls @@ -528,7 +529,7 @@ class ConstructableRequirementInterface(RequirementInterface): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) self.add_requirement(ClassRequirement("class", "Class of the constructable requirement")) - self._current_class_requirements = set() + self._current_class_requirements = set() # type: Set[Any] def __eq__(self, other): # We can just use super because it checks all member of `__dict__` @@ -581,6 +582,9 @@ def _construct_class(self, return None cls = self.requirements["class"].cls + if cls is None: + return None + # These classes all have a name property # We could subclass this out as a NameableInterface, but it seems a little excessive # FIXME: We can't test this, because importing the other interfaces causes all kinds of import loops diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 29b2922a78..cce80f0a1f 100644 --- 
a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -99,7 +99,7 @@ class DataLayerInterface(interfaces.configuration.ConfigurableInterface, metacla accesses a data source and exposes it within volatility. """ - _direct_metadata = {'architecture': 'Unknown', 'os': 'Unknown'} + _direct_metadata = {'architecture': 'Unknown', 'os': 'Unknown'} # type: Mapping def __init__(self, context: 'interfaces.context.ContextInterface', @@ -473,7 +473,7 @@ def _scan_iterator(self, assumed to have no holes """ for (section_start, section_length) in sections: - output = [] + output = [] # type: List[Tuple[str, int, int]] # Hold the offsets of each chunk (including how much has been filled) chunk_start = chunk_position = 0 diff --git a/volatility3/framework/plugins/mac/lsmod.py b/volatility3/framework/plugins/mac/lsmod.py index fe5d25adf1..89a3c08af0 100644 --- a/volatility3/framework/plugins/mac/lsmod.py +++ b/volatility3/framework/plugins/mac/lsmod.py @@ -3,7 +3,9 @@ # """A module containing a collection of plugins that produce data typically found in Mac's lsmod command.""" -from volatility3.framework import renderers, interfaces, contexts +from typing import Set + +from volatility3.framework import renderers, interfaces, contexts, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -55,11 +57,11 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: except exceptions.InvalidAddressException: return [] - seen = set() + seen = set() # type: Set while kmod != 0 and \ - kmod not in seen and \ - len(seen) < 1024: + kmod not in seen and \ + len(seen) < 1024: kmod_obj = kmod.dereference() diff --git a/volatility3/framework/plugins/windows/cachedump.py b/volatility3/framework/plugins/windows/cachedump.py index b436db7539..dc8246ef8b 100644 --- 
a/volatility3/framework/plugins/windows/cachedump.py +++ b/volatility3/framework/plugins/windows/cachedump.py @@ -3,12 +3,14 @@ # from struct import unpack +from typing import Tuple from Crypto.Cipher import ARC4, AES from Crypto.Hash import HMAC -from volatility3.framework import interfaces, renderers +from volatility3.framework import interfaces, renderers, exceptions from volatility3.framework.configuration import requirements +from volatility3.framework.layers import registry from volatility3.framework.symbols.windows import versions from volatility3.plugins.windows import hashdump, lsadump from volatility3.plugins.windows.registry import hivelist @@ -31,10 +33,12 @@ def get_requirements(cls): requirements.PluginRequirement(name = 'lsadump', plugin = lsadump.Lsadump, version = (1, 0, 0)) ] - def get_nlkm(self, sechive, lsakey, is_vista_or_later): + @staticmethod + def get_nlkm(sechive: registry.RegistryHive, lsakey: bytes, is_vista_or_later: bool): return lsadump.Lsadump.get_secret_by_name(sechive, 'NL$KM', lsakey, is_vista_or_later) - def decrypt_hash(self, edata, nlkm, ch, xp): + @staticmethod + def decrypt_hash(edata: bytes, nlkm: bytes, ch, xp: bool): if xp: hmac_md5 = HMAC.new(nlkm, ch) rc4key = hmac_md5.digest() @@ -51,16 +55,19 @@ def decrypt_hash(self, edata, nlkm, ch, xp): data += aes.decrypt(buf) return data - def parse_cache_entry(self, cache_data): + @staticmethod + def parse_cache_entry(cache_data: bytes) -> Tuple[int, int, int, bytes, bytes]: (uname_len, domain_len) = unpack(" Tuple[str, str, str, bytes]: """Get the data from the cache and separate it into the username, domain name, and hash data""" uname_offset = 72 pad = 2 * ((uname_len / 2) % 2) @@ -68,12 +75,9 @@ def parse_decrypted_cache(self, dec_data, uname_len, domain_len, domain_name_len pad = 2 * ((domain_len / 2) % 2) domain_name_offset = int(domain_offset + domain_len + pad) hashh = dec_data[:0x10] - username = dec_data[uname_offset:uname_offset + uname_len] - username = 
username.decode('utf-16-le', 'replace') - domain = dec_data[domain_offset:domain_offset + domain_len] - domain = domain.decode('utf-16-le', 'replace') - domain_name = dec_data[domain_name_offset:domain_name_offset + domain_name_len] - domain_name = domain_name.decode('utf-16-le', 'replace') + username = dec_data[uname_offset:uname_offset + uname_len].decode('utf-16-le', 'replace') + domain = dec_data[domain_offset:domain_offset + domain_len].decode('utf-16-le', 'replace') + domain_name = dec_data[domain_name_offset:domain_name_offset + domain_name_len].decode('utf-16-le', 'replace') return (username, domain, domain_name, hashh) @@ -116,6 +120,8 @@ def _generator(self, syshive, sechive): def run(self): offset = self.config.get('offset', None) + syshive = sechive = None + for hive in hivelist.HiveList.list_hives(self.context, self.config_path, self.config['primary'], @@ -127,5 +133,10 @@ def run(self): if hive.get_name().split('\\')[-1].upper() == 'SECURITY': sechive = hive + if syshive is None: + raise exceptions.VolatilityException('Unable to locate SYSTEM hive') + if sechive is None: + raise exceptions.VolatilityException('Unable to locate SECURITY hive') + return renderers.TreeGrid([("Username", str), ("Domain", str), ("Domain name", str), ('Hashh', bytes)], self._generator(syshive, sechive)) diff --git a/volatility3/framework/plugins/windows/dumpfiles.py b/volatility3/framework/plugins/windows/dumpfiles.py index 9fc9daa54f..f1e54d0f97 100755 --- a/volatility3/framework/plugins/windows/dumpfiles.py +++ b/volatility3/framework/plugins/windows/dumpfiles.py @@ -9,7 +9,8 @@ from volatility3.plugins.windows import pslist from volatility3.framework.configuration import requirements from volatility3.framework.renderers import format_hints -from typing import List, Tuple, Type, Optional +from typing import List, Tuple, Type, Optional, Generator + vollog = logging.getLogger(__name__) FILE_DEVICE_DISK = 0x7 @@ -91,13 +92,13 @@ def dump_file_producer(cls, file_object: 
interfaces.objects.ObjectInterface, @classmethod def process_file_object(cls, context: interfaces.context.ContextInterface, primary_layer_name: str, open_method: Type[interfaces.plugins.FileHandlerInterface], - file_obj: interfaces.objects.ObjectInterface) -> Tuple: + file_obj: interfaces.objects.ObjectInterface) -> Generator[Tuple, None, None]: """Given a FILE_OBJECT, dump data to separate files for each of the three file caches. :param context: the context to operate upon :param primary_layer_name: primary/virtual layer to operate on :param open_method: class for constructing output files - :param file_object: the FILE_OBJECT + :param file_obj: the FILE_OBJECT """ # Filtering by these types of devices prevents us from processing other types of devices that diff --git a/volatility3/framework/plugins/windows/hashdump.py b/volatility3/framework/plugins/windows/hashdump.py index ed2f1988a8..97fba7608d 100644 --- a/volatility3/framework/plugins/windows/hashdump.py +++ b/volatility3/framework/plugins/windows/hashdump.py @@ -133,7 +133,8 @@ def get_hbootkey(cls, samhive: registry.RegistryHive, bootkey: bytes) -> Optiona return None @classmethod - def decrypt_single_salted_hash(cls, rid, hbootkey: bytes, enc_hash: bytes, lmntstr, salt: bytes) -> Optional[bytes]: + def decrypt_single_salted_hash(cls, rid, hbootkey: bytes, enc_hash: bytes, _lmntstr, + salt: bytes) -> Optional[bytes]: (des_k1, des_k2) = cls.sid_to_key(rid) des1 = DES.new(des_k1, DES.MODE_ECB) des2 = DES.new(des_k2, DES.MODE_ECB) @@ -143,7 +144,7 @@ def decrypt_single_salted_hash(cls, rid, hbootkey: bytes, enc_hash: bytes, lmnts @classmethod def get_user_hashes(cls, user: registry.CM_KEY_NODE, samhive: registry.RegistryHive, - hbootkey: bytes) -> Tuple[bytes, bytes]: + hbootkey: bytes) -> Optional[Tuple[bytes, bytes]]: ## Will sometimes find extra user with rid = NAMES, returns empty strings right now try: rid = int(str(user.get_name()), 16) @@ -199,22 +200,16 @@ def sid_to_key(cls, sid: int) -> Tuple[bytes, 
bytes]: @classmethod def sidbytes_to_key(cls, s: bytes) -> bytes: """Builds final DES key from the strings generated in sid_to_key""" - key = [] - key.append(s[0] >> 1) - key.append(((s[0] & 0x01) << 6) | (s[1] >> 2)) - key.append(((s[1] & 0x03) << 5) | (s[2] >> 3)) - key.append(((s[2] & 0x07) << 4) | (s[3] >> 4)) - key.append(((s[3] & 0x0F) << 3) | (s[4] >> 5)) - key.append(((s[4] & 0x1F) << 2) | (s[5] >> 6)) - key.append(((s[5] & 0x3F) << 1) | (s[6] >> 7)) - key.append(s[6] & 0x7F) + key = [s[0] >> 1, ((s[0] & 0x01) << 6) | (s[1] >> 2), ((s[1] & 0x03) << 5) | (s[2] >> 3), + ((s[2] & 0x07) << 4) | (s[3] >> 4), ((s[3] & 0x0F) << 3) | (s[4] >> 5), + ((s[4] & 0x1F) << 2) | (s[5] >> 6), ((s[5] & 0x3F) << 1) | (s[6] >> 7), s[6] & 0x7F] for i in range(8): key[i] = (key[i] << 1) key[i] = cls.odd_parity[key[i]] return bytes(key) @classmethod - def decrypt_single_hash(cls, rid, hbootkey, enc_hash: bytes, lmntstr): + def decrypt_single_hash(cls, rid: int, hbootkey: bytes, enc_hash: bytes, lmntstr: bytes): (des_k1, des_k2) = cls.sid_to_key(rid) des1 = DES.new(des_k1, DES.MODE_ECB) des2 = DES.new(des_k2, DES.MODE_ECB) @@ -225,24 +220,23 @@ def decrypt_single_hash(cls, rid, hbootkey, enc_hash: bytes, lmntstr): rc4 = ARC4.new(rc4_key) obfkey = rc4.encrypt(enc_hash) - hash = des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:]) - return hash + return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:]) @classmethod - def get_user_name(cls, user: interfaces.objects.ObjectInterface, samhive: registry.RegistryHive) -> Optional[bytes]: - V = None + def get_user_name(cls, user: registry.CM_KEY_NODE, samhive: registry.RegistryHive) -> Optional[bytes]: + value = None for v in user.get_values(): if v.get_name() == 'V': - V = samhive.read(v.Data + 4, v.DataLength) - if not V: + value = samhive.read(v.Data + 4, v.DataLength) + if not value: return None - name_offset = unpack(" len(V): + name_offset = unpack(" len(value): return None - username = V[name_offset:name_offset + name_length] + 
username = value[name_offset:name_offset + name_length] return username # replaces the dump_hashes method in vol2 diff --git a/volatility3/framework/plugins/windows/lsadump.py b/volatility3/framework/plugins/windows/lsadump.py index b9f8df9fe0..a9ee1737b5 100644 --- a/volatility3/framework/plugins/windows/lsadump.py +++ b/volatility3/framework/plugins/windows/lsadump.py @@ -3,12 +3,14 @@ # import logging from struct import unpack +from typing import Optional from Crypto.Cipher import ARC4, DES, AES from Crypto.Hash import MD5, SHA256 from volatility3.framework import interfaces, renderers from volatility3.framework.configuration import requirements +from volatility3.framework.layers import registry from volatility3.framework.symbols.windows import versions from volatility3.plugins.windows import hashdump from volatility3.plugins.windows.registry import hivelist @@ -33,7 +35,7 @@ def get_requirements(cls): ] @classmethod - def decrypt_aes(cls, secret, key): + def decrypt_aes(cls, secret: bytes, key: bytes) -> bytes: """ Based on code from http://lab.mediaservice.net/code/cachedump.rb """ @@ -54,7 +56,7 @@ def decrypt_aes(cls, secret, key): return data @classmethod - def get_lsa_key(cls, sechive, bootkey, vista_or_later): + def get_lsa_key(cls, sechive: registry.RegistryHive, bootkey: bytes, vista_or_later: bool) -> Optional[bytes]: if not bootkey: return None @@ -91,7 +93,7 @@ def get_lsa_key(cls, sechive, bootkey, vista_or_later): return lsa_key @classmethod - def get_secret_by_name(cls, sechive, name, lsakey, is_vista_or_later): + def get_secret_by_name(cls, sechive: registry.RegistryHive, name: str, lsakey: bytes, is_vista_or_later: bool): try: enc_secret_key = sechive.get_key("Policy\\Secrets\\" + name + "\\CurrVal") except KeyError: @@ -112,7 +114,7 @@ def get_secret_by_name(cls, sechive, name, lsakey, is_vista_or_later): return secret @classmethod - def decrypt_secret(cls, secret, key): + def decrypt_secret(cls, secret: bytes, key: bytes): """Python 
implementation of SystemFunction005. Decrypts a block of data with DES using given key. @@ -135,7 +137,7 @@ def decrypt_secret(cls, secret, key): return decrypted_data[8:8 + dec_data_len] - def _generator(self, syshive, sechive): + def _generator(self, syshive: registry.RegistryHive, sechive: registry.RegistryHive): vista_or_later = versions.is_vista_or_later(context = self.context, symbol_table = self.config['nt_symbols']) @@ -174,6 +176,7 @@ def _generator(self, syshive, sechive): def run(self): offset = self.config.get('offset', None) + syshive = sechive = None for hive in hivelist.HiveList.list_hives(self.context, self.config_path, diff --git a/volatility3/framework/plugins/windows/netscan.py b/volatility3/framework/plugins/windows/netscan.py index 77bf678416..7c0bdef0bf 100644 --- a/volatility3/framework/plugins/windows/netscan.py +++ b/volatility3/framework/plugins/windows/netscan.py @@ -4,7 +4,7 @@ import datetime import logging -from typing import Iterable, List, Optional +from typing import Iterable, List, Optional, Tuple, Type from volatility3.framework import constants, exceptions, interfaces, renderers, symbols from volatility3.framework.configuration import requirements @@ -82,7 +82,7 @@ def create_netscan_constraints(context: interfaces.context.ContextInterface, @classmethod def determine_tcpip_version(cls, context: interfaces.context.ContextInterface, layer_name: str, - nt_symbol_table: str) -> str: + nt_symbol_table: str) -> Tuple[str, Type]: """Tries to determine which symbol filename to use for the image's tcpip driver. The logic is partially taken from the info plugin. 
Args: diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index b6332e6e36..1d37258ab7 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -4,7 +4,7 @@ import logging import datetime -from typing import Iterable, Optional +from typing import Iterable, Optional, Generator, Tuple from volatility3.framework import constants, exceptions, interfaces, renderers, symbols from volatility3.framework.configuration import requirements @@ -125,7 +125,7 @@ def enumerate_structures_by_port(cls, ptr_offset = context.symbol_space.get_type(obj_name).relative_child_offset("Next") else: # invalid argument. - yield + return vollog.debug("Current Port: {}".format(port)) # the given port serves as a shifted index into the port pool lists @@ -144,7 +144,7 @@ def enumerate_structures_by_port(cls, assignment = inpa.InPaBigPoolBase.Assignments[truncated_port] if not assignment: - yield + return # the value within assignment.Entry is a) masked and b) points inside of the network object # first decode the pointer @@ -165,7 +165,7 @@ def enumerate_structures_by_port(cls, @classmethod def get_tcpip_module(cls, context: interfaces.context.ContextInterface, layer_name: str, - nt_symbols: str) -> interfaces.objects.ObjectInterface: + nt_symbols: str) -> Optional[interfaces.objects.ObjectInterface]: """Uses `windows.modules` to find tcpip.sys in memory. 
Args: @@ -180,10 +180,11 @@ def get_tcpip_module(cls, context: interfaces.context.ContextInterface, layer_na if mod.BaseDllName.get_string() == "tcpip.sys": vollog.debug("Found tcpip.sys image base @ 0x{:x}".format(mod.DllBase)) return mod + return None @classmethod def parse_hashtable(cls, context: interfaces.context.ContextInterface, layer_name: str, ht_offset: int, - ht_length: int, alignment: int, net_symbol_table: str) -> list: + ht_length: int, alignment: int, net_symbol_table: str) -> Generator[interfaces.objects.ObjectInterface, None, None]: """Parses a hashtable quick and dirty. Args: @@ -217,10 +218,9 @@ def parse_partitions(cls, context: interfaces.context.ContextInterface, layer_na Args: context: The context to retrieve required elements (layers, symbol tables) from layer_name: The name of the layer on which to operate - nt_symbols: The name of the table containing the kernel symbols net_symbol_table: The name of the table containing the tcpip types - tcpip_module: The created vol Windows module object of the given memory image tcpip_symbol_table: The name of the table containing the tcpip driver symbols + tcpip_module_offset: The offset of the tcpip module Returns: The list of TCP endpoint objects from the `layer_name` layer's `PartitionTable` @@ -289,7 +289,7 @@ def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, if not guids: raise exceptions.VolatilityException("Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format( - tcpip_module.DllBase)) + tcpip_module_offset)) guid = guids[0] @@ -305,7 +305,7 @@ def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, @classmethod def find_port_pools(cls, context: interfaces.context.ContextInterface, layer_name: str, net_symbol_table: str, - tcpip_symbol_table: str, tcpip_module_offset: int) -> (int, int): + tcpip_symbol_table: str, tcpip_module_offset: int) -> Tuple[int, int]: """Finds the given image's port pools. 
Older Windows versions (presumably < Win10 build 14251) use driver symbols called `UdpPortPool` and `TcpPortPool` which point towards the pools. Newer Windows versions use `UdpCompartmentSet` and `TcpCompartmentSet`, which we first have to translate into diff --git a/volatility3/framework/plugins/windows/vadinfo.py b/volatility3/framework/plugins/windows/vadinfo.py index d50f596aa0..aa6567ace1 100644 --- a/volatility3/framework/plugins/windows/vadinfo.py +++ b/volatility3/framework/plugins/windows/vadinfo.py @@ -132,11 +132,11 @@ def vad_dump(cls, vad_end = vad.get_end() except AttributeError: vollog.debug("Unable to find the starting/ending VPN member") - return + return None if maxsize > 0 and (vad_end - vad_start) > maxsize: vollog.debug("Skip VAD dump {0:#x}-{1:#x} due to maxsize limit".format(vad_start, vad_end)) - return + return None proc_id = "Unknown" try: @@ -163,7 +163,7 @@ def vad_dump(cls, except Exception as excp: vollog.debug("Unable to dump VAD {}: {}".format(file_name, excp)) - return + return None return file_handle diff --git a/volatility3/framework/renderers/__init__.py b/volatility3/framework/renderers/__init__.py index c7dc126b21..a3fa7cef37 100644 --- a/volatility3/framework/renderers/__init__.py +++ b/volatility3/framework/renderers/__init__.py @@ -9,7 +9,7 @@ import collections import datetime import logging -from typing import Any, Callable, Iterable, List, Optional, Sequence, Tuple, TypeVar, Union +from typing import Any, Callable, Iterable, List, Optional, Tuple, TypeVar, Union from volatility3.framework import interfaces from volatility3.framework.interfaces import renderers @@ -48,7 +48,7 @@ class NotAvailableValue(interfaces.renderers.BaseAbsentValue): class TreeNode(interfaces.renderers.TreeNode): """Class representing a particular node in a tree grid.""" - def __init__(self, path: str, treegrid: 'TreeGrid', parent: Optional['TreeNode'], + def __init__(self, path: str, treegrid: 'TreeGrid', parent: 
Optional[interfaces.renderers.TreeNode], values: List[interfaces.renderers.BaseTypes]) -> None: if not isinstance(treegrid, TreeGrid): raise TypeError("Treegrid must be an instance of TreeGrid") @@ -70,7 +70,7 @@ def __len__(self) -> int: def _validate_values(self, values: List[interfaces.renderers.BaseTypes]) -> None: """A function for raising exceptions if a given set of values is invalid according to the column properties.""" - if not (isinstance(values, collections.abc.Sequence) and len(values) == len(self._treegrid.columns)): + if not (isinstance(values, collections.Sequence) and len(values) == len(self._treegrid.columns)): raise TypeError( "Values must be a list of objects made up of simple types and number the same as the columns") for index in range(len(self._treegrid.columns)): @@ -85,10 +85,10 @@ def _validate_values(self, values: List[interfaces.renderers.BaseTypes]) -> None # tznaive = val.tzinfo is None or val.tzinfo.utcoffset(val) is None @property - def values(self) -> Sequence[interfaces.renderers.BaseTypes]: + def values(self) -> List[interfaces.renderers.BaseTypes]: """Returns the list of values from the particular node, based on column index.""" - return self._values + return list(self._values) @property def path(self) -> str: @@ -101,7 +101,7 @@ def path(self) -> str: return self._path @property - def parent(self) -> Optional['TreeNode']: + def parent(self) -> Optional[interfaces.renderers.TreeNode]: """Returns the parent node of this node or None.""" return self._parent diff --git a/volatility3/framework/renderers/format_hints.py b/volatility3/framework/renderers/format_hints.py index 029049df32..b169f44c95 100644 --- a/volatility3/framework/renderers/format_hints.py +++ b/volatility3/framework/renderers/format_hints.py @@ -8,7 +8,7 @@ Text renderers should attempt to honour all hints provided in this module where possible """ -from typing import Type +from typing import Type, Union class Bin(int): @@ -30,13 +30,16 @@ class 
MultiTypeData(bytes): """The contents are supposed to be a string, but may contain binary data.""" def __new__(cls: Type['MultiTypeData'], - original: int, + original: Union[int, bytes], encoding: str = 'utf-16-le', split_nulls: bool = False, show_hex: bool = False) -> 'MultiTypeData': + if isinstance(original, int): - original = str(original).encode(encoding) - return super().__new__(cls, original) + data = str(original).encode(encoding) + else: + data = original + return super().__new__(cls, data) def __init__(self, original: bytes, diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 0b4f760c9b..7e39cbc1a9 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -68,7 +68,7 @@ def load_windows_symbol_table(cls, filter_string = os.path.join(pdb_name.strip('\x00'), guid.upper() + "-" + str(age)) - isf_path = False + isf_path = None # Take the first result of search for the intermediate file for value in intermed.IntermediateSymbolTable.file_symbol_url("windows", filter_string): isf_path = value From e27fc115dcf4cec3f0f4f1dc47c88aa0c209e218 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 24 Feb 2021 17:39:43 +0000 Subject: [PATCH 058/294] Layers: Improve regex scanning for linux/mac --- .../framework/layers/scanners/__init__.py | 73 +++++++++++++++++-- 1 file changed, 66 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/layers/scanners/__init__.py b/volatility3/framework/layers/scanners/__init__.py index 2364c77c9d..2a167fc4e4 100644 --- a/volatility3/framework/layers/scanners/__init__.py +++ b/volatility3/framework/layers/scanners/__init__.py @@ -1,9 +1,8 @@ # This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # - import re -from typing import Generator, List, Tuple +from typing import 
Generator, List, Tuple, Dict, Union, Optional from volatility3.framework.interfaces import layers from volatility3.framework.layers.scanners import multiregexp @@ -44,19 +43,79 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]: if offset < self.chunk_size: yield offset + data_offset - class MultiStringScanner(layers.ScannerInterface): thread_safe = True def __init__(self, patterns: List[bytes]) -> None: super().__init__() - self._patterns = multiregexp.MultiRegexp() + self._pattern_trie = {} # type: Optional[Dict[int, Optional[Dict]]] for pattern in patterns: - self._patterns.add_pattern(pattern) - self._patterns.preprocess() + self._process_pattern(pattern) + self._regex = self._process_trie(self._pattern_trie) + + def _process_pattern(self, value: bytes) -> None: + trie = self._pattern_trie + if trie is None: + return None + + for char in value: + trie[char] = trie.get(char, {}) + trie = trie[char] + + # Mark the end of a string + trie[-1] = None + + def _process_trie(self, trie: Optional[Dict[int, Optional[Dict]]]) -> bytes: + if trie is None or len(trie) == 1 and -1 in trie: + # We've reached the end of this path, return the empty byte string + return b'' + + choices = [] + suffixes = [] + finished = False + + for entry in sorted(trie): + # Clump together different paths + if entry >= 0: + remainder = self._process_trie(trie[entry]) + if remainder: + choices.append(re.escape(bytes([entry])) + remainder) + else: + suffixes.append(re.escape(bytes([entry]))) + else: + # If we've finished one of the strings at this point, remember it for later + finished = True + + if len(suffixes) == 1: + choices.append(suffixes[0]) + elif len(suffixes) > 1: + choices.append(b'[' + b''.join(suffixes) + b']') + + if len(choices) == 0: + # If there's none, return the empty byte string + response = b'' + elif len(choices) == 1: + # If there's only one return it + response = choices[0] + else: + response = b'(?:' + b'|'.join(choices) + b')' + + if
finished: + # We finished one string, so everything after this is optional + response = b"(?:" + response + b")?" + + return response def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[int, bytes], None, None]: """Runs through the data looking for the needles.""" - for offset, pattern in self._patterns.search(data): + for offset, pattern in self.search(data): if offset < self.chunk_size: yield offset + data_offset, pattern + + def search(self, haystack: bytes) -> Generator[Tuple[int, bytes], None, None]: + if not isinstance(haystack, bytes): + raise TypeError("Search haystack must be a byte string") + if not self._regex: + raise ValueError("MultiRegexp cannot be used with an empty set of search strings") + for match in re.finditer(self._regex, haystack): + yield match.start(0), match.group() From 70a94bf205c8fe2a3d4ea9198e4eb1c42cb79116 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 25 Feb 2021 18:02:16 +0000 Subject: [PATCH 059/294] Automagic: Make sure Linux gets in the metadata too --- volatility3/framework/automagic/linux.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/linux.py b/volatility3/framework/automagic/linux.py index 34d6e79365..70b89dd2d9 100644 --- a/volatility3/framework/automagic/linux.py +++ b/volatility3/framework/automagic/linux.py @@ -79,7 +79,7 @@ def stack(cls, layer = layer_class(context, config_path = config_path, name = new_layer_name, - metadata = {'kaslr_value': aslr_shift}) + metadata = {'kaslr_value': aslr_shift, 'os': 'Linux'}) if layer and dtb: vollog.debug("DTB was found at: 0x{:0x}".format(dtb)) From 77e406e408f18642ca199967a2ba4ff02d184c93 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 25 Feb 2021 18:07:28 +0000 Subject: [PATCH 060/294] Layers: Fix issue if leechcore not available --- volatility3/framework/layers/leechcore.py | 305 +++++++++++----------- 1 file changed, 155 insertions(+), 150 deletions(-) diff --git 
a/volatility3/framework/layers/leechcore.py b/volatility3/framework/layers/leechcore.py index a2ccf48b5e..d968fa715f 100644 --- a/volatility3/framework/layers/leechcore.py +++ b/volatility3/framework/layers/leechcore.py @@ -3,167 +3,172 @@ import urllib.parse from typing import Optional, Any, List -import leechcorepyc +try: + import leechcorepyc + HAS_LEECHCORE = True +except ImportError: + HAS_LEECHCORE = False from volatility3.framework import exceptions from volatility3.framework.layers import resources vollog = logging.getLogger(__file__) +if HAS_LEECHCORE: + + class LeechCoreFile(io.RawIOBase): + """Class to mimic python-native file access to a LeechCore memory space""" + + _leechcore = None + + def __init__(self, leechcore_device): + self._chunk_size = 0x1000000 + self._device = leechcore_device + self._cursor = 0 + self._handle = None + self._pad = True + self._chunk_size = 0x1000000 + + @property + def maxaddr(self): + return self.handle.maxaddr + + @property + def handle(self): + """The actual LeechCore file object returned by leechcorepyc + + Accessing this attribute will create/attach the handle if it hasn't already been opened + """ + if not self._handle: + try: + self._handle = leechcorepyc.LeechCore(self._device) + except TypeError: + raise IOError("Unable to open LeechCore device {}".format(self._device)) + return self._handle + + def fileno(self): + raise OSError + + def flush(self): + pass + + def isatty(self): + return False + + def readable(self): + """This returns whether the handle is open + + This doesn't access self.handle so that it doesn't accidentally attempt to open the device + """ + return bool(self._handle) + + def seek(self, offset, whence = io.SEEK_SET): + if whence == io.SEEK_SET: + self._cursor = offset + elif whence == io.SEEK_CUR: + self._cursor += offset + elif whence == io.SEEK_END: + self._cursor = self.maxaddr + offset + + def tell(self): + """Return how far into the memory we are""" + return self._cursor + + def 
writable(self): + """Leechcore supports writing, so this is always true""" + return True + + def writelines(self, lines: List[bytes]): + return self.write(b"".join(lines)) + + def in_memmap(self, start, size): + chunk_start = start + chunk_size = size + output = [] + for entry in self.handle.memmap: + + if entry['base'] + entry['size'] <= chunk_start or entry['base'] >= chunk_start + chunk_size: + continue + output += [(max(entry['base'], chunk_start), min(entry['size'], chunk_size))] + chunk_start = output[-1][0] + output[-1][1] + chunk_size = max(0, size - chunk_start) + + if chunk_size <= 0: + break + return output + + def write(self, b: bytes): + result = self.handle.write(self._cursor, b) + self._cursor += len(b) + return result + + def read(self, size: int = -1) -> bytes: + """We ask leechcore to pad the data, because otherwise determining holes in the underlying file would + be extremely inefficient, if not borderline impossible, to do consistently""" + data = self.handle.read(self._cursor, size, True) + + if len(data) > size: + data = data[:size] + else: + data = data + b'\x00' * (size - len(data)) + self._cursor += len(data) + if not len(data): + raise exceptions.InvalidAddressException('LeechCore layer read failure', self._cursor + len(data)) + return data + + def readline(self, __size: Optional[int] = ...) 
-> bytes: + data = b'' + while __size > self._chunk_size or __size < 0: + data += self.read(self._chunk_size) + index = data.find(b"\n") + __size -= self._chunk_size + if index >= 0: + __size = 0 + break + data += self.read(__size) + index = data.find(b"\n") + return data[:index] -class LeechCoreFile(io.RawIOBase): - """Class to mimic python-native file access to a LeechCore memory space""" - - _leechcore = None - - def __init__(self, leechcore_device): - self._chunk_size = 0x1000000 - self._device = leechcore_device - self._cursor = 0 - self._handle = None - self._pad = True - self._chunk_size = 0x1000000 - - @property - def maxaddr(self): - return self.handle.maxaddr - - @property - def handle(self): - """The actual LeechCore file object returned by leechcorepyc + def readlines(self, __hint: int = ...) -> List[bytes]: + counter = 0 + result = [] + while counter < __hint or __hint < 0: + line = self.readline() + counter += len(line) + result += [line] + return result - Accessing this attribute will create/attach the handle if it hasn't already been opened - """ - if not self._handle: - try: - self._handle = leechcorepyc.LeechCore(self._device) - except TypeError: - raise IOError("Unable to open LeechCore device {}".format(self._device)) - return self._handle + def readall(self) -> bytes: + return self.read() - def fileno(self): - raise OSError + def readinto(self, b: bytearray) -> Optional[int]: + data = self.read() + for index in range(len(data)): + b[index] = data[index] + return len(data) - def flush(self): - pass + def close(self): + if self._handle: + self._handle.close() + self._handle = None - def isatty(self): - return False + def closed(self): + return self._handle - def readable(self): - """This returns whether the handle is open - This doesn't access self.handle so that it doesn't accidentally attempt to open the device + class LeechCoreHandler(resources.VolatilityHandler): + """Handler for the invented `leechcore` scheme. 
This is an unofficial scheme and not registered with IANA """ - return bool(self._handle) - - def seek(self, offset, whence = io.SEEK_SET): - if whence == io.SEEK_SET: - self._cursor = offset - elif whence == io.SEEK_CUR: - self._cursor += offset - elif whence == io.SEEK_END: - self._cursor = self.maxaddr + offset - - def tell(self): - """Return how far into the memory we are""" - return self._cursor - - def writable(self): - """Leechcore supports writing, so this is always true""" - return True - - def writelines(self, lines: List[bytes]): - return self.write(b"".join(lines)) - - def in_memmap(self, start, size): - chunk_start = start - chunk_size = size - output = [] - for entry in self.handle.memmap: - - if entry['base'] + entry['size'] <= chunk_start or entry['base'] >= chunk_start + chunk_size: - continue - output += [(max(entry['base'], chunk_start), min(entry['size'], chunk_size))] - chunk_start = output[-1][0] + output[-1][1] - chunk_size = max(0, size - chunk_start) - - if chunk_size <= 0: - break - return output - - def write(self, b: bytes): - result = self.handle.write(self._cursor, b) - self._cursor += len(b) - return result - - def read(self, size: int = -1) -> bytes: - """We ask leechcore to pad the data, because otherwise determining holes in the underlying file would - be extremely inefficient borderline impossible to do consistently""" - data = self.handle.read(self._cursor, size, True) - - if len(data) > size: - data = data[:size] - else: - data = data + b'\x00' * (size - len(data)) - self._cursor += len(data) - if not len(data): - raise exceptions.InvalidAddressException('LeechCore layer read failure', self._cursor + len(data)) - return data - - def readline(self, __size: Optional[int] = ...) 
-> bytes: - data = b'' - while __size > self._chunk_size or __size < 0: - data += self.read(self._chunk_size) - index = data.find(b"\n") - __size -= self._chunk_size - if index >= 0: - __size = 0 - break - data += self.read(__size) - index = data.find(b"\n") - return data[:index] - - def readlines(self, __hint: int = ...) -> List[bytes]: - counter = 0 - result = [] - while counter < __hint or __hint < 0: - line = self.readline() - counter += len(line) - result += [line] - return result - - def readall(self) -> bytes: - return self.read() - - def readinto(self, b: bytearray) -> Optional[int]: - data = self.read() - for index in range(len(data)): - b[index] = data[index] - return len(data) - - def close(self): - if self._handle: - self._handle.close() - self._handle = None - - def closed(self): - return self._handle - - -class LeechCoreHandler(resources.VolatilityHandler): - """Handler for the invented `leechcore` scheme. This is an unofficial scheme and not registered with IANA - """ - - @classmethod - def non_cached_schemes(cls) -> List[str]: - """We need to turn caching *off* for a live filesystem""" - return ['leechcore'] - - @staticmethod - def default_open(req: urllib.request.Request) -> Optional[Any]: - """Handles the request if it's the leechcore scheme.""" - if req.type == 'leechcore': - device_uri = '://'.join(req.full_url.split('://')[1:]) - return LeechCoreFile(device_uri) - return None + + @classmethod + def non_cached_schemes(cls) -> List[str]: + """We need to turn caching *off* for a live filesystem""" + return ['leechcore'] + + @staticmethod + def default_open(req: urllib.request.Request) -> Optional[Any]: + """Handles the request if it's the leechcore scheme.""" + if req.type == 'leechcore': + device_uri = '://'.join(req.full_url.split('://')[1:]) + return LeechCoreFile(device_uri) + return None From 681b68f252d78de09eb9c191f199b8fb7ca50fa2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 26 Feb 2021 00:34:36 +0000 Subject: [PATCH 061/294] 
Objects: Improve writing of objects --- volatility3/framework/objects/__init__.py | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index c3b4f93438..6eb8e12fbb 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -40,7 +40,7 @@ def convert_value_to_data(value: TUnion[int, float, bytes, str, bool], struct_ty data_format: DataFormatInfo) -> bytes: """Converts a particular value to a series of bytes.""" if not isinstance(value, struct_type): - raise TypeError("Written value is not of the correct type for {}".format(struct_type.__class__.__name__)) + raise TypeError("Written value is not of the correct type for {}".format(struct_type.__name__)) if struct_type == int and isinstance(value, int): # Doubling up on the isinstance is for mypy @@ -621,7 +621,11 @@ def __len__(self) -> int: return self.vol.count def write(self, value) -> None: - raise NotImplementedError("Writing to Arrays is not yet implemented") + if not isinstance(value, collections.Sequence): + raise TypeError("Only Sequences can be written to arrays") + self.count = len(value) + for index in range(len(value)): + self[index].write(value[index]) class AggregateType(interfaces.objects.ObjectInterface): @@ -748,6 +752,13 @@ def __getattr__(self, attr: str) -> Any: agg_name = agg_type.__name__ raise AttributeError("{} has no attribute: {}.{}".format(agg_name, self.vol.type_name, attr)) + def __setattr__(self, name, value): + """Method for writing specific members of a structure""" + if name in ['_concrete_members', 'vol', '_vol'] or not self.has_member(name): + return super().__setattr__(name, value) + attr = self.__getattr__(name) + return attr.write(value) + def __dir__(self) -> Iterable[str]: """Returns a complete list of members when dir is called.""" return list(super().__dir__()) + list(self.vol.members) From 
9deb0c9d8a00cd6466f11e450cc73c66d26678bf Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 2 Mar 2021 11:15:16 +0000 Subject: [PATCH 062/294] CLI: Fix early logging target --- volatility3/cli/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 290e08ddc0..019f28f16b 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -32,7 +32,7 @@ # Make sure we log everything -vollog = logging.getLogger() +vollog = logging.getLogger(__name__) console = logging.StreamHandler() console.setLevel(logging.WARNING) formatter = logging.Formatter('%(levelname)-8s %(name)-12s: %(message)s') From 1d35406ffee9890927efe693d79353446bea0734 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 2 Mar 2021 11:14:25 +0000 Subject: [PATCH 063/294] CLI: Improve file parameter handling --- volatility3/cli/__init__.py | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 019f28f16b..9988022bc6 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -16,9 +16,11 @@ import json import logging import os +import pathlib import sys import tempfile import traceback +import urllib from typing import Dict, Type, Union, Any from urllib import parse, request @@ -267,12 +269,14 @@ def run(self): # NOTE: This will *BREAK* if LayerStacker, or the automagic configuration system, changes at all ### if args.file: - file_name = os.path.abspath(args.file) - if not os.path.exists(file_name): - vollog.log(logging.INFO, "File does not exist: {}".format(file_name)) - else: - single_location = "file:" + request.pathname2url(file_name) - ctx.config['automagic.LayerStacker.single_location'] = single_location + # We want to work in URLs, but we need to accept absolute and relative files (including on windows) + single_location = urllib.parse.urlparse(args.file, 'file') + if single_location.scheme 
== 'file' or len(single_location.scheme) == 1: + # Otherwise construct a URL parameter + single_location = urllib.parse.urlparse(urllib.parse.urljoin('file:', request.pathname2url(os.path.abspath(args.file)))) + if not os.path.exists(single_location.path): + parser.error("File does not exist: {}".format(single_location.path)) + ctx.config['automagic.LayerStacker.single_location'] = urllib.parse.urlunparse(single_location) # UI fills in the config, here we load it from the config file and do it before we process the CL parameters if args.config: From 2a41b1eee9342bb34b9ac98383cbba9e343201c7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 2 Mar 2021 15:50:21 +0000 Subject: [PATCH 064/294] Linux: Make sure bang_addrs has something to search for --- volatility3/framework/plugins/linux/bash.py | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/volatility3/framework/plugins/linux/bash.py b/volatility3/framework/plugins/linux/bash.py index 8e6de7241b..8f9bc9d246 100644 --- a/volatility3/framework/plugins/linux/bash.py +++ b/volatility3/framework/plugins/linux/bash.py @@ -72,15 +72,16 @@ def _generator(self, tasks): history_entries = [] - for address, _ in proc_layer.scan(self.context, - scanners.MultiStringScanner(bang_addrs), - sections = task.get_process_memory_sections(heap_only = True)): - hist = self.context.object(bash_table_name + constants.BANG + "hist_entry", - offset = address - ts_offset, - layer_name = proc_layer_name) - - if hist.is_valid(): - history_entries.append(hist) + if bang_addrs: + for address, _ in proc_layer.scan(self.context, + scanners.MultiStringScanner(bang_addrs), + sections = task.get_process_memory_sections(heap_only = True)): + hist = self.context.object(bash_table_name + constants.BANG + "hist_entry", + offset = address - ts_offset, + layer_name = proc_layer_name) + + if hist.is_valid(): + history_entries.append(hist) for hist in sorted(history_entries, key = lambda x: x.get_time_as_integer()): yield (0, 
(task.pid, task_name, hist.get_time_object(), hist.get_command())) From 0b4f49736c9ac6b8c051c5112c4a95c6907b64a7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 4 Mar 2021 18:48:58 +0000 Subject: [PATCH 065/294] Volshell: Fix missing import --- volatility3/cli/volshell/generic.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py index a1fa7c172d..238b12e04f 100644 --- a/volatility3/cli/volshell/generic.py +++ b/volatility3/cli/volshell/generic.py @@ -14,7 +14,7 @@ from volatility3.cli import text_renderer from volatility3.framework import renderers, interfaces, objects, plugins, exceptions from volatility3.framework.configuration import requirements -from volatility3.framework.layers import intel, physical +from volatility3.framework.layers import intel, physical, resources try: import capstone From 0c8c52ba329228931d9cfa59b6802799f581b971 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 30 Jan 2021 16:16:49 +0000 Subject: [PATCH 066/294] Layers: Allow manual cache control and disable for PDB --- volatility3/framework/layers/resources.py | 3 +-- volatility3/framework/symbols/windows/pdbconv.py | 3 ++- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index d25c8d5f1d..a56d54dc37 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -79,7 +79,7 @@ def uses_cache(self, url: str) -> bool: """Determines whether a URLs contents should be cached""" parsed_url = urllib.parse.urlparse(url) - return not parsed_url.scheme in self._non_cached_schemes() + return self._enable_cache and not parsed_url.scheme in self._non_cached_schemes() @staticmethod def _non_cached_schemes() -> List[str]: @@ -225,7 +225,6 @@ class JarHandler(VolatilityHandler): Actual reference (found from https://www.w3.org/wiki/UriSchemes/jar) seemed not to return: 
http://developer.java.sun.com/developer/onlineTraining/protocolhandlers/ """ - @classmethod def non_cached_schemes(cls) -> List[str]: return ['jar'] diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 7d8f8109fe..6d50c9a588 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -937,7 +937,8 @@ def retreive_pdb(self, for suffix in [file_name, file_name[:-1] + '_']: try: vollog.debug("Attempting to retrieve {}".format(url + suffix)) - result = resources.ResourceAccessor(progress_callback).open(url + suffix) + # Don't cache the PDB files since they might build up and there's little benefit + result = resources.ResourceAccessor(progress_callback, enable_cache = False).open(url + suffix) except (error.HTTPError, error.URLError) as excp: vollog.debug("Failed with {}".format(excp)) if result: From 2fe7e22bf83070920c206b427597d817877b28a2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 4 Mar 2021 23:25:53 +0000 Subject: [PATCH 067/294] CLI: Tidy up single_location handling --- volatility3/cli/__init__.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 9988022bc6..ef99658728 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -273,7 +273,8 @@ def run(self): single_location = urllib.parse.urlparse(args.file, 'file') if single_location.scheme == 'file' or len(single_location.scheme) == 1: # Otherwise construct a URL parameter - single_location = urllib.parse.urlparse(urllib.parse.urljoin('file:', request.pathname2url(os.path.abspath(args.file)))) + file_path = request.pathname2url(os.path.abspath(args.file)) + single_location = urllib.parse.urlparse(urllib.parse.urljoin('file:', file_path)) if not os.path.exists(single_location.path): parser.error("File does not exist: {}".format(single_location.path)) 
ctx.config['automagic.LayerStacker.single_location'] = urllib.parse.urlunparse(single_location) From a75ccd72b1027b8dac032785497c668480e92ebf Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 4 Mar 2021 23:34:04 +0000 Subject: [PATCH 068/294] Layers: Restore a loss from a rebase/merge --- volatility3/framework/layers/resources.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index a56d54dc37..88ae9f2ec3 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -62,7 +62,8 @@ class ResourceAccessor(object): def __init__(self, progress_callback: Optional[constants.ProgressCallback] = None, - context: Optional[ssl.SSLContext] = None) -> None: + context: Optional[ssl.SSLContext] = None, + enable_cache: bool = True) -> None: """Creates a resource accessor. Note: context is an SSL context, not a volatility context @@ -70,6 +71,7 @@ def __init__(self, self._progress_callback = progress_callback self._context = context self._handlers = list(framework.class_subclasses(urllib.request.BaseHandler)) + self._enable_cache = enable_cache if self.list_handlers: vollog.log(constants.LOGLEVEL_VVV, "Available URL handlers: {}".format(", ".join([x.__name__ for x in self._handlers]))) From bc470739031c22425fa29a82fec9fa1ce7288cca Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 5 Mar 2021 00:17:09 +0000 Subject: [PATCH 069/294] CLI: Fix logging issue after root logger change --- volatility3/cli/__init__.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index ef99658728..cc33385193 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -16,7 +16,6 @@ import json import logging import os -import pathlib import sys import tempfile import traceback @@ -34,6 +33,7 @@ # Make sure we log everything +rootlog = 
logging.getLogger() vollog = logging.getLogger(__name__) console = logging.StreamHandler() console.setLevel(logging.WARNING) @@ -82,8 +82,8 @@ def __init__(self): @classmethod def setup_logging(cls): # Delay the setting of vollog for those that want to import volatility3.cli (issue #241) - vollog.setLevel(1) - vollog.addHandler(console) + rootlog.setLevel(1) + rootlog.addHandler(console) def run(self): """Executes the command line module, taking the system arguments, @@ -194,7 +194,7 @@ def run(self): file_formatter = logging.Formatter(datefmt = '%y-%m-%d %H:%M:%S', fmt = '%(asctime)s %(name)-12s %(levelname)-8s %(message)s') file_logger.setFormatter(file_formatter) - vollog.addHandler(file_logger) + rootlog.addHandler(file_logger) vollog.info("Logging started") if partial_args.verbosity < 3: if partial_args.verbosity < 1: From f251a1db15f24fca65dc27df01cc438093204d26 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 5 Mar 2021 01:10:26 +0000 Subject: [PATCH 070/294] Windows: Update PDB code to work without cache --- .../framework/symbols/windows/pdbconv.py | 21 ++++++++++++------- .../framework/symbols/windows/pdbutil.py | 10 ++++++--- 2 files changed, 20 insertions(+), 11 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 6d50c9a588..9bb51ae83d 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -9,9 +9,10 @@ import logging import lzma import os +import urllib from bisect import bisect from typing import Tuple, Dict, Any, Optional, Union, List -from urllib import request, error +from urllib import request, error, parse from volatility3.framework import contexts, interfaces, constants from volatility3.framework.layers import physical, msf, resources @@ -937,17 +938,17 @@ def retreive_pdb(self, for suffix in [file_name, file_name[:-1] + '_']: try: vollog.debug("Attempting to retrieve {}".format(url + suffix)) - # 
Don't cache the PDB files since they might build up and there's little benefit + # We no longer cache it, so this is a glorified remote endpoint check result = resources.ResourceAccessor(progress_callback, enable_cache = False).open(url + suffix) except (error.HTTPError, error.URLError) as excp: vollog.debug("Failed with {}".format(excp)) - if result: - break + if result: + break if progress_callback is not None: progress_callback(100, "Downloading {}".format(url + suffix)) if result is None: return None - return result.name + return url + suffix if __name__ == '__main__': @@ -1008,9 +1009,13 @@ def __call__(self, progress: Union[int, float], description: str = None): parser.error("No suitable filename provided or retrieved") ctx = contexts.Context() - if not os.path.exists(filename): - parser.error("File {} does not exists".format(filename)) - location = "file:" + request.pathname2url(filename) + url = parse.urlparse(filename, scheme = 'file') + if url.scheme == 'file': + if not os.path.exists(filename): + parser.error("File {} does not exist".format(filename)) + location = "file:" + request.pathname2url(os.path.abspath(filename)) + else: + location = filename convertor = PdbReader(ctx, location, database_name = args.pattern, progress_callback = pg_cb) diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 7e39cbc1a9..ac7357f6ae 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -9,7 +9,7 @@ import os import struct from typing import Any, Dict, Generator, List, Optional, Tuple, Union -from urllib import request +from urllib import request, parse from volatility3 import symbols from volatility3.framework import constants, interfaces @@ -196,8 +196,12 @@ def download_pdb_isf(cls, file_name = pdb_name, progress_callback = progress_callback) if filename: - tmp_files.append(filename) - location = "file:" + 
request.pathname2url(tmp_files[-1]) + url = parse.urlparse(filename, scheme = 'file') + if url.scheme == 'file' or len(url.scheme) == 1: + tmp_files.append(filename) + location = "file:" + request.pathname2url(os.path.abspath(tmp_files[-1])) + else: + location = filename json_output = pdbconv.PdbReader(context, location, pdb_name, progress_callback).get_json() of.write(bytes(json.dumps(json_output, indent = 2, sort_keys = True), 'utf-8')) # After we've successfully written it out, record the fact so we don't clear it out From 963b448d367f117dff66c65107b590dac96b5f0e Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 5 Mar 2021 01:13:34 +0000 Subject: [PATCH 071/294] Objects: Revert setting attributes to write data --- volatility3/framework/objects/__init__.py | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 6eb8e12fbb..05679af7f5 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -752,12 +752,16 @@ def __getattr__(self, attr: str) -> Any: agg_name = agg_type.__name__ raise AttributeError("{} has no attribute: {}.{}".format(agg_name, self.vol.type_name, attr)) - def __setattr__(self, name, value): - """Method for writing specific members of a structure""" - if name in ['_concrete_members', 'vol', '_vol'] or not self.has_member(name): - return super().__setattr__(name, value) - attr = self.__getattr__(name) - return attr.write(value) + # Disable messing around with setattr until the consequences have been considered properly + # For example pdbutil constructs objects and then sets values for them + # Some don't always match the type (for example, the data read is encoded and interpreted) + # + # def __setattr__(self, name, value): + # """Method for writing specific members of a structure""" + # if name in ['_concrete_members', 'vol', '_vol'] or not self.has_member(name): + # return 
super().__setattr__(name, value) + # attr = self.__getattr__(name) + # return attr.write(value) def __dir__(self) -> Iterable[str]: """Returns a complete list of members when dir is called.""" return list(super().__dir__()) + list(self.vol.members) From 62c1d1df35661dce49be3d2ca358454995546fea Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 5 Mar 2021 01:14:44 +0000 Subject: [PATCH 072/294] Layers: Avoid writing to unwritable physical layers --- volatility3/framework/layers/physical.py | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/volatility3/framework/layers/physical.py b/volatility3/framework/layers/physical.py index 7e4e76a6e8..6010725bdf 100644 --- a/volatility3/framework/layers/physical.py +++ b/volatility3/framework/layers/physical.py @@ -1,6 +1,7 @@ # This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import logging import threading from typing import Any, Dict, IO, List, Optional, Union @@ -8,6 +9,8 @@ from volatility3.framework.configuration import requirements from volatility3.framework.layers import resources +vollog = logging.getLogger(__name__) + class BufferDataLayer(interfaces.layers.DataLayerInterface): """A DataLayer class backed by a buffer in memory, designed for testing and @@ -80,6 +83,7 @@ def __init__(self, metadata: Optional[Dict[str, Any]] = None) -> None: super().__init__(context = context, config_path = config_path, name = name, metadata = metadata) + self._write_warning = False self._location = self.config["location"] self._accessor = resources.ResourceAccessor() self._file_ = None # type: Optional[IO[Any]] @@ -157,6 +161,11 @@ def write(self, offset: int, data: bytes) -> None: This will technically allow writes beyond the extent of the file """ + if not self._file.writable(): + if not self._write_warning: + self._write_warning = True + vollog.warning("Attempted to write to unwritable layer: {}".format(self.name)) + return None if not 
self.is_valid(offset, len(data)): invalid_address = offset if self.minimum_address < offset <= self.maximum_address: From 1b57a899e81b7cdc37ce03b275f28672764e02bb Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 5 Mar 2021 01:16:14 +0000 Subject: [PATCH 073/294] Objects: Encode strings as bytes when writing --- volatility3/framework/objects/__init__.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 05679af7f5..cebdc27c71 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -56,6 +56,8 @@ def convert_value_to_data(value: TUnion[int, float, bytes, str, bool], struct_ty raise ValueError("Invalid float size") struct_format = ("<" if data_format.byteorder == 'little' else ">") + float_vals[data_format.length] elif struct_type in [bytes, str]: + if isinstance(value, str): + value = bytes(value, 'latin-1') struct_format = str(data_format.length) + "s" else: raise TypeError("Cannot construct struct format for type {}".format(type(struct_type))) From 69d303da8ca0b33f9eca6e02e15c16a124211516 Mon Sep 17 00:00:00 2001 From: Alec Petridis Date: Sat, 6 Mar 2021 22:44:19 -0800 Subject: [PATCH 074/294] Update copyright notice --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index ce285e76c0..c04188ced7 100644 --- a/README.md +++ b/README.md @@ -91,7 +91,7 @@ The latest generated copy of the documentation can be found at: Date: Sun, 7 Mar 2021 12:44:15 +0000 Subject: [PATCH 075/294] CLI: Fixes windows file path handling Closes: #470 --- volatility3/cli/__init__.py | 10 +++++----- volatility3/cli/volshell/__init__.py | 18 ++++++++++++------ 2 files changed, 17 insertions(+), 11 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index cc33385193..4965959b2f 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -272,11 +272,11 
@@ def run(self): # We want to work in URLs, but we need to accept absolute and relative files (including on windows) single_location = urllib.parse.urlparse(args.file, 'file') if single_location.scheme == 'file' or len(single_location.scheme) == 1: - # Otherwise construct a URL parameter - file_path = request.pathname2url(os.path.abspath(args.file)) - single_location = urllib.parse.urlparse(urllib.parse.urljoin('file:', file_path)) - if not os.path.exists(single_location.path): - parser.error("File does not exist: {}".format(single_location.path)) + if len(single_location.scheme) == 1: + # Mis-parsed a windows drive as a scheme, it doesn't need abspath because it features a drive letter + single_location = urllib.parse.urlparse(urllib.parse.urljoin('file:', urllib.request.pathname2url(args.file))) + if not os.path.exists(urllib.request.url2pathname(single_location.path)): + parser.error("File does not exist: {}".format(os.path.exists(urllib.request.url2pathname(single_location.path)))) ctx.config['automagic.LayerStacker.single_location'] = urllib.parse.urlunparse(single_location) # UI fills in the config, here we load it from the config file and do it before we process the CL parameters diff --git a/volatility3/cli/volshell/__init__.py b/volatility3/cli/volshell/__init__.py index b67cc8a6ce..8de128bb90 100644 --- a/volatility3/cli/volshell/__init__.py +++ b/volatility3/cli/volshell/__init__.py @@ -7,6 +7,7 @@ import logging import os import sys +import urllib from urllib import request import volatility3.plugins @@ -199,12 +200,17 @@ def run(self): # NOTE: This will *BREAK* if LayerStacker, or the automagic configuration system, changes at all ### if args.file: - file_name = os.path.abspath(args.file) - if not os.path.exists(file_name): - vollog.log(logging.INFO, "File does not exist: {}".format(file_name)) - else: - single_location = "file:" + request.pathname2url(file_name) - ctx.config['automagic.LayerStacker.single_location'] = single_location + # We want to 
work in URLs, but we need to accept absolute and relative files (including on windows) + single_location = urllib.parse.urlparse(args.file, 'file') + if single_location.scheme == 'file' or len(single_location.scheme) == 1: + if len(single_location.scheme) == 1: + # Mis-parsed a windows drive as a scheme, it doesn't need abspath because it features a drive letter + single_location = urllib.parse.urlparse( + urllib.parse.urljoin('file:', urllib.request.pathname2url(args.file))) + if not os.path.exists(urllib.request.url2pathname(single_location.path)): + parser.error("File does not exist: {}".format( + os.path.exists(urllib.request.url2pathname(single_location.path)))) + ctx.config['automagic.LayerStacker.single_location'] = urllib.parse.urlunparse(single_location) # UI fills in the config, here we load it from the config file and do it before we process the CL parameters if args.config: From 6a6e2a0e5c021242c6edcc0013a7f7c257bc8f03 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 7 Mar 2021 13:03:45 +0000 Subject: [PATCH 076/294] CLI: Fix and refactor the URI handling code --- volatility3/cli/__init__.py | 33 ++++++++++++++++++++-------- volatility3/cli/volshell/__init__.py | 16 +++++--------- 2 files changed, 29 insertions(+), 20 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 4965959b2f..b5e6f9721f 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -269,15 +269,11 @@ def run(self): # NOTE: This will *BREAK* if LayerStacker, or the automagic configuration system, changes at all ### if args.file: - # We want to work in URLs, but we need to accept absolute and relative files (including on windows) - single_location = urllib.parse.urlparse(args.file, 'file') - if single_location.scheme == 'file' or len(single_location.scheme) == 1: - if len(single_location.scheme) == 1: - # Mis-parsed a windows drive as a scheme, it doesn't need abspath because it features a drive letter - single_location = 
urllib.parse.urlparse(urllib.parse.urljoin('file:', urllib.request.pathname2url(args.file))) - if not os.path.exists(urllib.request.url2pathname(single_location.path)): - parser.error("File does not exist: {}".format(os.path.exists(urllib.request.url2pathname(single_location.path)))) - ctx.config['automagic.LayerStacker.single_location'] = urllib.parse.urlunparse(single_location) + try: + single_location = self.location_from_file(args.file) + ctx.config['automagic.LayerStacker.single_location'] = single_location + except ValueError as excp: + parser.error(str(excp)) # UI fills in the config, here we load it from the config file and do it before we process the CL parameters if args.config: @@ -332,6 +328,25 @@ def run(self): except (exceptions.VolatilityException) as excp: self.process_exceptions(excp) + def location_from_file(self, filename: str) -> str: + """Returns the URL location from a file parameter (which may be a URL) + + Args: + filename: The path to the file (either an absolute, relative, or URL path) + + Returns: + The URL for the location of the file + """ + # We want to work in URLs, but we need to accept absolute and relative files (including on windows) + single_location = urllib.parse.urlparse(filename, '') + if single_location.scheme == '' or len(single_location.scheme) == 1: + single_location = urllib.parse.urlparse( + urllib.parse.urljoin('file:', urllib.request.pathname2url(os.path.abspath(filename)))) + if single_location.scheme == 'file': + if not os.path.exists(urllib.request.url2pathname(single_location.path)): + raise ValueError("File does not exist: {}".format(urllib.request.url2pathname(single_location.path))) + return urllib.parse.urlunparse(single_location) + def process_exceptions(self, excp): """Provide useful feedback if an exception occurs during a run of a plugin.""" # Ensure there's nothing in the cache diff --git a/volatility3/cli/volshell/__init__.py b/volatility3/cli/volshell/__init__.py index 8de128bb90..88ab103af4 100644 --- 
a/volatility3/cli/volshell/__init__.py +++ b/volatility3/cli/volshell/__init__.py @@ -200,17 +200,11 @@ def run(self): # NOTE: This will *BREAK* if LayerStacker, or the automagic configuration system, changes at all ### if args.file: - # We want to work in URLs, but we need to accept absolute and relative files (including on windows) - single_location = urllib.parse.urlparse(args.file, 'file') - if single_location.scheme == 'file' or len(single_location.scheme) == 1: - if len(single_location.scheme) == 1: - # Mis-parsed a windows drive as a scheme, it doesn't need abspath because it features a drive letter - single_location = urllib.parse.urlparse( - urllib.parse.urljoin('file:', urllib.request.pathname2url(args.file))) - if not os.path.exists(urllib.request.url2pathname(single_location.path)): - parser.error("File does not exist: {}".format( - os.path.exists(urllib.request.url2pathname(single_location.path)))) - ctx.config['automagic.LayerStacker.single_location'] = urllib.parse.urlunparse(single_location) + try: + single_location = self.location_from_file(args.file) + ctx.config['automagic.LayerStacker.single_location'] = single_location + except ValueError as excp: + parser.error(str(excp)) # UI fills in the config, here we load it from the config file and do it before we process the CL parameters if args.config: From 5e01ab811bc3b1cc8b0df5d311159f4a5bf7069f Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 7 Mar 2021 13:08:48 +0000 Subject: [PATCH 077/294] CLI: Improve URI handling error for windows --- volatility3/cli/__init__.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index b5e6f9721f..07cd579558 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -344,7 +344,10 @@ def location_from_file(self, filename: str) -> str: urllib.parse.urljoin('file:', urllib.request.pathname2url(os.path.abspath(filename)))) if single_location.scheme == 'file': if not 
os.path.exists(urllib.request.url2pathname(single_location.path)): - raise ValueError("File does not exist: {}".format(urllib.request.url2pathname(single_location.path))) + filename = urllib.request.url2pathname(single_location.path) + if not filename: + raise ValueError("File URL looks incorrect (potentially missing /)") + raise ValueError("File does not exist: {}".format(filename)) return urllib.parse.urlunparse(single_location) def process_exceptions(self, excp): From a9e91ea19b469c665f845539088de41f904616f3 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 7 Mar 2021 16:08:36 +0000 Subject: [PATCH 078/294] Objects: Writing an object should return the re-read object --- volatility3/framework/objects/__init__.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index cebdc27c71..f258fa4410 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -150,11 +150,12 @@ def size(cls, template: interfaces.objects.Template) -> int: """Returns the size of the templated object.""" return template.vol.data_format.length - def write(self, value: TUnion[int, float, bool, bytes, str]) -> None: + def write(self, value: TUnion[int, float, bool, bytes, str]) -> interfaces.objects.ObjectInterface: """Writes the object into the layer of the context at the current offset.""" data = convert_value_to_data(value, self._struct_type, self._data_format) - return self._context.layers.write(self.vol.layer_name, self.vol.offset, data) + self._context.layers.write(self.vol.layer_name, self.vol.offset, data) + return self.cast(self.vol.type_name) # This must be int (and the _struct_type must be int) because bool cannot be inherited from: From 3bb8ddf089f0503884fcffa249b3082638411242 Mon Sep 17 00:00:00 2001 From: Frank Block Date: Thu, 11 Mar 2021 14:11:39 +0100 Subject: [PATCH 079/294] Temporary workaround for changes in transition PTE 
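[Editor's note] The URL-handling logic that patches 075–077 above converge on can be sketched as a standalone function (a simplified version of `location_from_file`; the empty-path error message from patch 077 is omitted here):

```python
import os
import urllib.parse
import urllib.request

def location_from_file(filename: str) -> str:
    """Normalize a file argument (relative path, absolute path, or URL) to a URL.

    Simplified sketch of the logic in patches 075-077, not the exact
    Volatility 3 implementation.
    """
    parsed = urllib.parse.urlparse(filename, '')
    # A bare path parses with no scheme; a windows path such as C:\dump.raw
    # mis-parses its drive letter as a one-character scheme, so both cases
    # are treated as local files and converted via pathname2url.
    if parsed.scheme == '' or len(parsed.scheme) == 1:
        parsed = urllib.parse.urlparse(
            urllib.parse.urljoin('file:', urllib.request.pathname2url(os.path.abspath(filename))))
    if parsed.scheme == 'file':
        # url2pathname undoes any percent-encoding before the existence check
        path = urllib.request.url2pathname(parsed.path)
        if not os.path.exists(path):
            raise ValueError("File does not exist: {}".format(path))
    return urllib.parse.urlunparse(parsed)
```

Non-file schemes (for example `http:`) pass through untouched, which is what lets the same `single_location` config value accept remote images.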
--- volatility3/framework/layers/intel.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 89ffcdbafa..bd5e49a17f 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -265,7 +265,7 @@ class Intel32e(Intel): _direct_metadata = collections.ChainMap({'architecture': 'Intel64'}, Intel._direct_metadata) _entry_format = " Date: Thu, 11 Mar 2021 14:14:33 +0100 Subject: [PATCH 080/294] Temporary workaround for changes in transition PTE --- volatility3/framework/symbols/windows/extensions/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index f402c800f6..68660c41ad 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -972,7 +972,7 @@ def get_available_pages(self) -> Iterable[Tuple[int, int, int]]: # If the entry is not a valid physical address then see if it is in transition. 
elif mmpte.u.Trans.Transition == 1: - physoffset = mmpte.u.Trans.PageFrameNumber << 12 + physoffset = (mmpte.u.Trans.PageFrameNumber &~ (0b1111 << 32)) << 12 yield physoffset, file_offset, self.PAGE_SIZE # Go to the next PTE entry From 5eaa5ef7b24783aed7ea920dd5bd0feae688e033 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 11 Mar 2021 15:50:45 +0000 Subject: [PATCH 081/294] CLI: Remove unnecessary urllib imports --- volatility3/cli/__init__.py | 12 +++++------- volatility3/cli/volshell/__init__.py | 2 -- volatility3/framework/symbols/windows/pdbconv.py | 1 - 3 files changed, 5 insertions(+), 10 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 07cd579558..58a80749cd 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -19,7 +19,6 @@ import sys import tempfile import traceback -import urllib from typing import Dict, Type, Union, Any from urllib import parse, request @@ -338,17 +337,16 @@ def location_from_file(self, filename: str) -> str: The URL for the location of the file """ # We want to work in URLs, but we need to accept absolute and relative files (including on windows) - single_location = urllib.parse.urlparse(filename, '') + single_location = parse.urlparse(filename, '') if single_location.scheme == '' or len(single_location.scheme) == 1: - single_location = urllib.parse.urlparse( - urllib.parse.urljoin('file:', urllib.request.pathname2url(os.path.abspath(filename)))) + single_location = parse.urlparse(parse.urljoin('file:', request.pathname2url(os.path.abspath(filename)))) if single_location.scheme == 'file': - if not os.path.exists(urllib.request.url2pathname(single_location.path)): - filename = urllib.request.url2pathname(single_location.path) + if not os.path.exists(request.url2pathname(single_location.path)): + filename = request.url2pathname(single_location.path) if not filename: raise ValueError("File URL looks incorrect (potentially missing /)") raise ValueError("File does not 
exist: {}".format(filename)) - return urllib.parse.urlunparse(single_location) + return parse.urlunparse(single_location) def process_exceptions(self, excp): """Provide useful feedback if an exception occurs during a run of a plugin.""" diff --git a/volatility3/cli/volshell/__init__.py b/volatility3/cli/volshell/__init__.py index 88ab103af4..25336f06c6 100644 --- a/volatility3/cli/volshell/__init__.py +++ b/volatility3/cli/volshell/__init__.py @@ -7,8 +7,6 @@ import logging import os import sys -import urllib -from urllib import request import volatility3.plugins import volatility3.symbols diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 9bb51ae83d..1ef322eb6e 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -9,7 +9,6 @@ import logging import lzma import os -import urllib from bisect import bisect from typing import Tuple, Dict, Any, Optional, Union, List from urllib import request, error, parse From 085aacc86a6e5f7884cf171a3a56178131210936 Mon Sep 17 00:00:00 2001 From: Frank Block Date: Thu, 11 Mar 2021 23:24:21 +0100 Subject: [PATCH 082/294] Moved new _maxphyaddr to WindowsIntel32e --- volatility3/framework/layers/intel.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index bd5e49a17f..296ea36ca1 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -265,7 +265,7 @@ class Intel32e(Intel): _direct_metadata = collections.ChainMap({'architecture': 'Intel64'}, Intel._direct_metadata) _entry_format = " Tuple[int, int, str]: class WindowsIntel32e(WindowsMixin, Intel32e): + _maxphyaddr = 45 + def _translate(self, offset: int) -> Tuple[int, int, str]: return self._translate_swap(self, offset, self._bits_per_register // 2) From 5d55bcfa52c993b1ef7a953dc16dfced1c7e98d2 Mon Sep 17 00:00:00 
2001 From: Frank Block Date: Thu, 11 Mar 2021 23:33:39 +0100 Subject: [PATCH 083/294] Adjusted bit operation for PFN calculation --- volatility3/framework/symbols/windows/extensions/__init__.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index 68660c41ad..bf054a91be 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -972,7 +972,8 @@ def get_available_pages(self) -> Iterable[Tuple[int, int, int]]: # If the entry is not a valid physical address then see if it is in transition. elif mmpte.u.Trans.Transition == 1: - physoffset = (mmpte.u.Trans.PageFrameNumber &~ (0b1111 << 32)) << 12 + # Strips the bit flag in 'PageFrameNumber' for pages in transition state + physoffset = (mmpte.u.Trans.PageFrameNumber & (( 1 << 33 ) - 1 ) ) << 12 yield physoffset, file_offset, self.PAGE_SIZE # Go to the next PTE entry From 3e04b347cd08c1b31b2b2b8a48422dbac5e83031 Mon Sep 17 00:00:00 2001 From: Frank Block Date: Thu, 11 Mar 2021 23:54:03 +0100 Subject: [PATCH 084/294] Added Comment/TODO for transition state issue See https://github.com/volatilityfoundation/volatility3/pull/475 --- volatility3/framework/layers/intel.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 296ea36ca1..94b15d3a0e 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -329,6 +329,10 @@ def _translate(self, offset: int) -> Tuple[int, int, str]: class WindowsIntel32e(WindowsMixin, Intel32e): + # TODO: Fix appropriately in a future release. + # Currently just a temporary workaround to deal with custom bit flag + # in the PFN field for pages in transition state.
+ # See https://github.com/volatilityfoundation/volatility3/pull/475 _maxphyaddr = 45 def _translate(self, offset: int) -> Tuple[int, int, str]: From 9c1603e37259b1472ea6bb98948d643542eee0f5 Mon Sep 17 00:00:00 2001 From: Frank Block Date: Thu, 11 Mar 2021 23:55:37 +0100 Subject: [PATCH 085/294] Added Comment/TODO for transition state issue See https://github.com/volatilityfoundation/volatility3/pull/475 --- .../framework/symbols/windows/extensions/__init__.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index bf054a91be..997175fbae 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -972,8 +972,12 @@ def get_available_pages(self) -> Iterable[Tuple[int, int, int]]: # If the entry is not a valid physical address then see if it is in transition. elif mmpte.u.Trans.Transition == 1: - # Strips the bit flag in 'PageFrameNumber' for pages in transition state + # TODO: Fix appropriately in a future release. + # Currently just a temporary workaround to deal with custom bit flag + # in the PFN field for pages in transition state.
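# [Editor's note] The PFN masking that patches 083-085 settle on can be
# checked in isolation (a standalone sketch, not Volatility code):

```python
# Newer Windows builds set extra flag bits in the high end of the
# 'PageFrameNumber' field of transition-state PTEs, so the workaround keeps
# only the low 33 bits before converting the page frame number to a byte
# offset (4 KiB pages, hence the shift by 12).
PAGE_SHIFT = 12
PFN_MASK = (1 << 33) - 1

def transition_physoffset(page_frame_number: int) -> int:
    return (page_frame_number & PFN_MASK) << PAGE_SHIFT

# A clean PFN translates directly...
assert transition_physoffset(0x1234) == 0x1234 << 12
# ...while stray flag bits above bit 32 are stripped instead of producing
# an impossibly high physical address.
flagged = (0b1111 << 33) | 0x1234
assert transition_physoffset(flagged) == 0x1234 << 12
```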
+ # See https://github.com/volatilityfoundation/volatility3/pull/475 physoffset = (mmpte.u.Trans.PageFrameNumber & (( 1 << 33 ) - 1 ) ) << 12 + yield physoffset, file_offset, self.PAGE_SIZE # Go to the next PTE entry From 5c8bd3c9d9b303e5a626fda9742b020ed5db1637 Mon Sep 17 00:00:00 2001 From: Arcuri Davide Date: Fri, 12 Mar 2021 11:30:04 +0100 Subject: [PATCH 086/294] enable support for compiled rules --- volatility3/framework/plugins/yarascan.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index 1c04e23110..b30279d826 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -58,6 +58,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] description = "Yara rules (as a string)", optional = True), requirements.URIRequirement(name = "yara_file", description = "Yara rules (as a file)", optional = True), + requirements.URIRequirement(name = "yara_compiled_file", description = "Yara compiled rules (as a file)", optional = True), requirements.IntRequirement(name = "max_size", default = 0x40000000, description = "Set the maximum size (default is 1GB)", @@ -78,6 +79,8 @@ def process_yara_options(cls, config: Dict[str, Any]): rules = yara.compile(sources = {'n': 'rule r1 {{strings: $a = {} condition: $a}}'.format(rule)}) elif config.get('yara_file', None) is not None: rules = yara.compile(file = resources.ResourceAccessor().open(config['yara_file'], "rb")) + elif config.get('yara_compiled_file', None) is not None: + rules = yara.load(file = resources.ResourceAccessor().open(config['yara_file'], "rb")) else: vollog.error("No yara rules, nor yara rules file were specified") return rules From cd3ad1e067f981c04e5b795c1bfa3550feafdfd1 Mon Sep 17 00:00:00 2001 From: dadokkio Date: Fri, 12 Mar 2021 13:09:38 +0100 Subject: [PATCH 087/294] fix config --- volatility3/framework/plugins/yarascan.py | 2 +- 1 file
changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index b30279d826..b3c3acddfe 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -80,7 +80,7 @@ def process_yara_options(cls, config: Dict[str, Any]): elif config.get('yara_file', None) is not None: rules = yara.compile(file = resources.ResourceAccessor().open(config['yara_file'], "rb")) elif config.get('yara_compiled_file', None) is not None: - rules = yara.load(file = resources.ResourceAccessor().open(config['yara_file'], "rb")) + rules = yara.load(file = resources.ResourceAccessor().open(config['yara_compiled_file'], "rb")) else: vollog.error("No yara rules, nor yara rules file were specified") return rules From cdf67cad2b58f104b891e9b4d06f058f15c6a831 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 13 Mar 2021 01:50:03 +0000 Subject: [PATCH 088/294] Windows: Cache PDB files again Unfortunately, opening the file as a layer will cause it to cache anyway --- volatility3/framework/symbols/windows/pdbconv.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 1ef322eb6e..76d2282fe6 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -937,8 +937,8 @@ def retreive_pdb(self, for suffix in [file_name, file_name[:-1] + '_']: try: vollog.debug("Attempting to retrieve {}".format(url + suffix)) - # We no longer cache it, so this is a glorified remote endpoint check - result = resources.ResourceAccessor(progress_callback, enable_cache = False).open(url + suffix) + # We have to cache this because the file is opened by a layer and we can't control whether that caches + result = resources.ResourceAccessor(progress_callback).open(url + suffix) except (error.HTTPError, error.URLError) as excp: 
vollog.debug("Failed with {}".format(excp)) if result: From 638d73b822ef8601d9402ef1b71a0a8869b4b5a6 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 13 Mar 2021 15:33:35 +0000 Subject: [PATCH 089/294] PDB: Improve pdb parsing --- volatility3/framework/symbols/windows/pdbconv.py | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 76d2282fe6..16a9919270 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -277,7 +277,7 @@ def __init__(self, self.symbols = {} # type: Dict[str, Any] self._omap_mapping = [] # type: List[Tuple[int, int]] self._sections = [] # type: List[interfaces.objects.ObjectInterface] - self.metadata = {"format": "6.1.0", "windows": {}} + self.metadata = {"format": "6.1.1", "windows": {}} self._database_name = database_name @property @@ -481,16 +481,16 @@ def read_symbol_stream(self): name = None address = None if sym.segment < len(self._sections): - if leaf_type == 0x110e: - # v3 symbol (c-string) - name = self.parse_string(sym.name, False, sym.length - sym.vol.size + 2) - address = self._sections[sym.segment - 1].VirtualAddress + sym.offset - elif leaf_type == 0x1009: + if leaf_type == 0x1009: # v2 symbol (pascal-string) name = self.parse_string(sym.name, True, sym.length - sym.vol.size + 2) address = self._sections[sym.segment - 1].VirtualAddress + sym.offset + elif leaf_type == 0x110e or leaf_type == 0x1127: + # v3 symbol (c-string) + name = self.parse_string(sym.name, False, sym.length - sym.vol.size + 2) + address = self._sections[sym.segment - 1].VirtualAddress + sym.offset else: - vollog.debug("Only v2 and v3 symbols are supported") + vollog.debug("Only v2 and v3 symbols are supported: {:x}".format(leaf_type)) if name: if self._omap_mapping: address = self.omap_lookup(address) From 5d77a7ca6b7b1a98cc8ab47274d79ea5182134ce Mon Sep 17 
00:00:00 2001 From: Mike Auty Date: Sat, 13 Mar 2021 18:50:26 +0000 Subject: [PATCH 090/294] PDB: Don't bump the version unnecessarily The version field of the producer should be enough to determine the difference --- volatility3/framework/symbols/windows/pdbconv.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 16a9919270..5db2e91375 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -277,7 +277,7 @@ def __init__(self, self.symbols = {} # type: Dict[str, Any] self._omap_mapping = [] # type: List[Tuple[int, int]] self._sections = [] # type: List[interfaces.objects.ObjectInterface] - self.metadata = {"format": "6.1.1", "windows": {}} + self.metadata = {"format": "6.1.0", "windows": {}} self._database_name = database_name @property From f41c75775adcd1cc8639c4c985a0de4a5dc65880 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 14 Mar 2021 19:24:45 +0000 Subject: [PATCH 091/294] Windows: Group JSON symbols under directories --- volatility3/framework/plugins/windows/bigpools.py | 3 ++- volatility3/framework/plugins/windows/netscan.py | 3 ++- volatility3/framework/plugins/windows/svcscan.py | 2 +- .../symbols/windows/{ => bigpools}/bigpools-vista-x64.json | 0 .../symbols/windows/{ => bigpools}/bigpools-vista-x86.json | 0 .../symbols/windows/{ => bigpools}/bigpools-win10-x64.json | 0 .../symbols/windows/{ => bigpools}/bigpools-win10-x86.json | 0 .../framework/symbols/windows/{ => bigpools}/bigpools-x64.json | 0 .../framework/symbols/windows/{ => bigpools}/bigpools-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-vista-sp12-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-vista-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-vista-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-10240-x86.json | 0 .../symbols/windows/{ => 
netscan}/netscan-win10-10586-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-14393-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-15063-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-15063-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-16299-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-17134-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-17134-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-17763-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-18362-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-18363-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-19041-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-19041-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win10-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win7-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win7-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win8-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win8-x86.json | 0 .../symbols/windows/{ => netscan}/netscan-win81-x64.json | 0 .../symbols/windows/{ => netscan}/netscan-win81-x86.json | 0 .../symbols/windows/{ => services}/services-vista-x64.json | 0 .../symbols/windows/{ => services}/services-vista-x86.json | 0 .../windows/{ => services}/services-win10-15063-x64.json | 0 .../windows/{ => services}/services-win10-15063-x86.json | 0 .../windows/{ => services}/services-win10-16299-x64.json | 0 .../windows/{ => services}/services-win10-16299-x86.json | 0 .../symbols/windows/{ => services}/services-win8-x64.json | 0 .../symbols/windows/{ => services}/services-win8-x86.json | 0 .../symbols/windows/{ => services}/services-xp-2003-x64.json | 0 .../symbols/windows/{ => services}/services-xp-x86.json | 0 42 files changed, 5 insertions(+), 3 deletions(-) rename volatility3/framework/symbols/windows/{ => bigpools}/bigpools-vista-x64.json (100%) rename 
volatility3/framework/symbols/windows/{ => bigpools}/bigpools-vista-x86.json (100%) rename volatility3/framework/symbols/windows/{ => bigpools}/bigpools-win10-x64.json (100%) rename volatility3/framework/symbols/windows/{ => bigpools}/bigpools-win10-x86.json (100%) rename volatility3/framework/symbols/windows/{ => bigpools}/bigpools-x64.json (100%) rename volatility3/framework/symbols/windows/{ => bigpools}/bigpools-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-vista-sp12-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-vista-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-vista-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-10240-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-10586-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-14393-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-15063-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-15063-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-16299-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-17134-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-17134-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-17763-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-18362-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-18363-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-19041-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win10-19041-x86.json (100%) rename volatility3/framework/symbols/windows/{ => 
netscan}/netscan-win10-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win7-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win7-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win8-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win8-x86.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win81-x64.json (100%) rename volatility3/framework/symbols/windows/{ => netscan}/netscan-win81-x86.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-vista-x64.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-vista-x86.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-win10-15063-x64.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-win10-15063-x86.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-win10-16299-x64.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-win10-16299-x86.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-win8-x64.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-win8-x86.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-xp-2003-x64.json (100%) rename volatility3/framework/symbols/windows/{ => services}/services-xp-x86.json (100%) diff --git a/volatility3/framework/plugins/windows/bigpools.py b/volatility3/framework/plugins/windows/bigpools.py index a2c23a31cf..5ffe8b7c71 100644 --- a/volatility3/framework/plugins/windows/bigpools.py +++ b/volatility3/framework/plugins/windows/bigpools.py @@ -3,6 +3,7 @@ # import logging +import os from typing import List, Optional, Tuple, Iterator from volatility3.framework import interfaces, renderers, exceptions, symbols @@ -83,7 +84,7 @@ def list_big_pools(cls, new_table_name = 
intermed.IntermediateSymbolTable.create( context = context, config_path = configuration.path_join(context.symbol_space[symbol_table].config_path, "bigpools"), - sub_path = "windows", + sub_path = os.path.join("windows", "bigpools"), filename = big_pools_json_filename, table_mapping = {'nt_symbols': symbol_table}, class_types = {'_POOL_TRACKER_BIG_PAGES': extensions.pool.POOL_TRACKER_BIG_PAGES}) diff --git a/volatility3/framework/plugins/windows/netscan.py b/volatility3/framework/plugins/windows/netscan.py index 7c0bdef0bf..1bd9165e61 100644 --- a/volatility3/framework/plugins/windows/netscan.py +++ b/volatility3/framework/plugins/windows/netscan.py @@ -4,6 +4,7 @@ import datetime import logging +import os from typing import Iterable, List, Optional, Tuple, Type from volatility3.framework import constants, exceptions, interfaces, renderers, symbols @@ -244,7 +245,7 @@ def create_netscan_symbol_table(cls, context: interfaces.context.ContextInterfac return intermed.IntermediateSymbolTable.create(context, config_path, - "windows", + os.path.join("windows", "netscan"), symbol_filename, class_types = class_types, table_mapping = table_mapping) diff --git a/volatility3/framework/plugins/windows/svcscan.py b/volatility3/framework/plugins/windows/svcscan.py index 1a035901a3..078445bb12 100644 --- a/volatility3/framework/plugins/windows/svcscan.py +++ b/volatility3/framework/plugins/windows/svcscan.py @@ -87,7 +87,7 @@ def create_service_table(context: interfaces.context.ContextInterface, symbol_ta return intermed.IntermediateSymbolTable.create(context, config_path, - "windows", + os.path.join("windows", "services"), symbol_filename, class_types = services.class_types, native_types = native_types) diff --git a/volatility3/framework/symbols/windows/bigpools-vista-x64.json b/volatility3/framework/symbols/windows/bigpools/bigpools-vista-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/bigpools-vista-x64.json rename to 
volatility3/framework/symbols/windows/bigpools/bigpools-vista-x64.json diff --git a/volatility3/framework/symbols/windows/bigpools-vista-x86.json b/volatility3/framework/symbols/windows/bigpools/bigpools-vista-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/bigpools-vista-x86.json rename to volatility3/framework/symbols/windows/bigpools/bigpools-vista-x86.json diff --git a/volatility3/framework/symbols/windows/bigpools-win10-x64.json b/volatility3/framework/symbols/windows/bigpools/bigpools-win10-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/bigpools-win10-x64.json rename to volatility3/framework/symbols/windows/bigpools/bigpools-win10-x64.json diff --git a/volatility3/framework/symbols/windows/bigpools-win10-x86.json b/volatility3/framework/symbols/windows/bigpools/bigpools-win10-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/bigpools-win10-x86.json rename to volatility3/framework/symbols/windows/bigpools/bigpools-win10-x86.json diff --git a/volatility3/framework/symbols/windows/bigpools-x64.json b/volatility3/framework/symbols/windows/bigpools/bigpools-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/bigpools-x64.json rename to volatility3/framework/symbols/windows/bigpools/bigpools-x64.json diff --git a/volatility3/framework/symbols/windows/bigpools-x86.json b/volatility3/framework/symbols/windows/bigpools/bigpools-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/bigpools-x86.json rename to volatility3/framework/symbols/windows/bigpools/bigpools-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-vista-sp12-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-vista-sp12-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-vista-sp12-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-vista-sp12-x64.json diff --git 
a/volatility3/framework/symbols/windows/netscan-vista-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-vista-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-vista-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-vista-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-vista-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-vista-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-vista-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-vista-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-10240-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-10240-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-10240-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-10240-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-10586-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-10586-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-10586-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-10586-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-14393-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-14393-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-14393-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-14393-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-15063-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-15063-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-15063-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-15063-x64.json 
diff --git a/volatility3/framework/symbols/windows/netscan-win10-15063-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-15063-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-15063-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-15063-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-16299-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-16299-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-16299-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-16299-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-17134-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-17134-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-17134-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-17134-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-17134-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-17134-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-17134-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-17134-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-17763-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-17763-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-17763-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-17763-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-18362-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-18362-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-18362-x64.json rename to 
volatility3/framework/symbols/windows/netscan/netscan-win10-18362-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-18363-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-18363-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-18363-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-18363-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-19041-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-19041-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-19041-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-19041-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-19041-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-19041-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-19041-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-19041-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win10-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win10-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win10-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win7-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win7-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win7-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win7-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win7-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win7-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win7-x86.json rename to 
volatility3/framework/symbols/windows/netscan/netscan-win7-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win8-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win8-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win8-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win8-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win8-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win8-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win8-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win8-x86.json diff --git a/volatility3/framework/symbols/windows/netscan-win81-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win81-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win81-x64.json rename to volatility3/framework/symbols/windows/netscan/netscan-win81-x64.json diff --git a/volatility3/framework/symbols/windows/netscan-win81-x86.json b/volatility3/framework/symbols/windows/netscan/netscan-win81-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/netscan-win81-x86.json rename to volatility3/framework/symbols/windows/netscan/netscan-win81-x86.json diff --git a/volatility3/framework/symbols/windows/services-vista-x64.json b/volatility3/framework/symbols/windows/services/services-vista-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/services-vista-x64.json rename to volatility3/framework/symbols/windows/services/services-vista-x64.json diff --git a/volatility3/framework/symbols/windows/services-vista-x86.json b/volatility3/framework/symbols/windows/services/services-vista-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/services-vista-x86.json rename to volatility3/framework/symbols/windows/services/services-vista-x86.json diff --git 
a/volatility3/framework/symbols/windows/services-win10-15063-x64.json b/volatility3/framework/symbols/windows/services/services-win10-15063-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/services-win10-15063-x64.json rename to volatility3/framework/symbols/windows/services/services-win10-15063-x64.json diff --git a/volatility3/framework/symbols/windows/services-win10-15063-x86.json b/volatility3/framework/symbols/windows/services/services-win10-15063-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/services-win10-15063-x86.json rename to volatility3/framework/symbols/windows/services/services-win10-15063-x86.json diff --git a/volatility3/framework/symbols/windows/services-win10-16299-x64.json b/volatility3/framework/symbols/windows/services/services-win10-16299-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/services-win10-16299-x64.json rename to volatility3/framework/symbols/windows/services/services-win10-16299-x64.json diff --git a/volatility3/framework/symbols/windows/services-win10-16299-x86.json b/volatility3/framework/symbols/windows/services/services-win10-16299-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/services-win10-16299-x86.json rename to volatility3/framework/symbols/windows/services/services-win10-16299-x86.json diff --git a/volatility3/framework/symbols/windows/services-win8-x64.json b/volatility3/framework/symbols/windows/services/services-win8-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/services-win8-x64.json rename to volatility3/framework/symbols/windows/services/services-win8-x64.json diff --git a/volatility3/framework/symbols/windows/services-win8-x86.json b/volatility3/framework/symbols/windows/services/services-win8-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/services-win8-x86.json rename to 
volatility3/framework/symbols/windows/services/services-win8-x86.json diff --git a/volatility3/framework/symbols/windows/services-xp-2003-x64.json b/volatility3/framework/symbols/windows/services/services-xp-2003-x64.json similarity index 100% rename from volatility3/framework/symbols/windows/services-xp-2003-x64.json rename to volatility3/framework/symbols/windows/services/services-xp-2003-x64.json diff --git a/volatility3/framework/symbols/windows/services-xp-x86.json b/volatility3/framework/symbols/windows/services/services-xp-x86.json similarity index 100% rename from volatility3/framework/symbols/windows/services-xp-x86.json rename to volatility3/framework/symbols/windows/services/services-xp-x86.json From 4e50402f1123519f71d7d8878c9344341d7557b6 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 14 Mar 2021 20:17:06 +0000 Subject: [PATCH 092/294] Windows: Add additional version info finding method --- .../framework/plugins/windows/verinfo.py | 38 ++++++++++++++++++- 1 file changed, 36 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/verinfo.py b/volatility3/framework/plugins/windows/verinfo.py index d65402a6ad..92449588f2 100644 --- a/volatility3/framework/plugins/windows/verinfo.py +++ b/volatility3/framework/plugins/windows/verinfo.py @@ -4,10 +4,12 @@ import io import logging -from typing import Generator, List, Tuple +import struct +from typing import Generator, List, Tuple, Optional from volatility3.framework import exceptions, renderers, constants, interfaces from volatility3.framework.configuration import requirements +from volatility3.framework.layers import scanners from volatility3.framework.renderers import format_hints from volatility3.framework.symbols import intermed from volatility3.framework.symbols.windows.extensions import pe @@ -25,7 +27,7 @@ class VerInfo(interfaces.plugins.PluginInterface): """Lists version information from PE files.""" - _required_framework_version = (1, 0, 0) + 
_required_framework_version = (1, 1, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: @@ -39,8 +41,32 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] description = 'Memory layer for the kernel', architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.BooleanRequirement(name = "extensive", + description = "Search physical layer for version information", + optional = True, + default = False), ] + @classmethod + def find_version_info(cls, context: interfaces.context.ContextInterface, layer_name: str, + filename: str) -> Optional[Tuple[int, int, int, int]]: + """Searches for an original filename, then tracks back to find the VS_VERSION_INFO and read the fixed + version information structure""" + premable_max_distance = 0x500 + filename = "OriginalFilename\x00" + filename + iterator = context.layers[layer_name].scan(context = context, + scanner = scanners.BytesScanner(bytes(filename, 'utf-16be'))) + for offset in iterator: + data = context.layers[layer_name].read(offset - premable_max_distance, premable_max_distance) + vs_ver_info = b"\xbd\x04\xef\xfe" + verinfo_offset = data.find(vs_ver_info) + len(vs_ver_info) + if verinfo_offset >= 0: + structure = ' Tuple[int, int, int, int]: @@ -103,6 +129,9 @@ def _generator(self, procs: Generator[interfaces.objects.ObjectInterface, None, "pe", class_types = pe.class_types) + # TODO: Fix this so it works with more than just intel layers + physical_layer_name = self.context.layers[self.config['primary']].config.get('memory_layer', None) + for mod in mods: try: BaseDllName = mod.BaseDllName.get_string() @@ -115,6 +144,11 @@ def _generator(self, procs: Generator[interfaces.objects.ObjectInterface, None, session_layer_name, mod.DllBase) except (exceptions.InvalidAddressException, TypeError, AttributeError): (major, minor, product, build) = 
[renderers.UnreadableValue()] * 4 + if (not isinstance(BaseDllName, renderers.UnreadableValue) and physical_layer_name is not None + and self.config['extensive']): + result = self.find_version_info(self._context, physical_layer_name, BaseDllName) + if result is not None: + (major, minor, product, build) = result # the pid and process are not applicable for kernel modules yield (0, (renderers.NotApplicableValue(), renderers.NotApplicableValue(), format_hints.Hex(mod.DllBase), From 970d15a82e658e047c96b74561e04a82f77ba8a0 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 14 Mar 2021 21:47:53 +0000 Subject: [PATCH 093/294] Documentation: Document volshell --- doc/source/index.rst | 1 + doc/source/volshell.rst | 191 ++++++++++++++++++++++++++++ volatility3/cli/__init__.py | 3 +- volatility3/cli/volshell/generic.py | 5 +- 4 files changed, 196 insertions(+), 4 deletions(-) create mode 100644 doc/source/volshell.rst diff --git a/doc/source/index.rst b/doc/source/index.rst index e22963f4d4..0dc3b50258 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -17,6 +17,7 @@ Here are some guidelines for using Volatility 3 effectively: complex-plugin using-as-a-library symbol-tables + volshell glossary Python Packages diff --git a/doc/source/volshell.rst b/doc/source/volshell.rst new file mode 100644 index 0000000000..1b6846e6ba --- /dev/null +++ b/doc/source/volshell.rst @@ -0,0 +1,191 @@ +Volshell - A CLI tool for working with memory +============================================= + +Volshell is a utility to access the volatility framework interactively with a specific memory image. It allows for +direct introspection and access to all features of the volatility library from within a command line environment. + +Starting volshell +----------------- + +Volshell is started in much the same way as volatility. Rather than providing a plugin, you just specify the file. 
+If the operating system of the memory image is known, a flag can be provided allowing additional methods for the +specific operating system. + +:: + + $ volshell.py -f [-w|-m|-l] + +The flags to specify a known operating system are -w for windows, -m for mac and -l for linux. Volshell will run +through the usual automagic, trying to load the memory image. If no operating system is specified, all automagic will +be run. + +When volshell starts, it will show the version of volshell, a brief message indicating how to get more help, the current +operating system mode for volshell, and the current layer available for use. + +.. code-block:: python + + Volshell (Volatility 3 Framework) 1.0.1 + Readline imported successfully PDB scanning finished + + Call help() to see available functions + + Volshell mode: Generic + Current Layer: primary + + (primary) >>> + +Volshell itself is essentially a plugin, but an interactive one. As such, most values are accessed through `self`, +although there is also a `context` object whenever a context must be provided. + +The prompt for the tool will indicate the name of the current layer (which can be accessed as `self.current_layer` +from within the tool). + +The generic mode is quite limited: it won't have any symbols loaded and therefore won't be able to display much +information. When an operating system is chosen, the appropriate symbols should be loaded and additional functions +become available. The mode cannot easily be changed once the tool has started. + +Accessing objects +----------------- +All operating systems come with their equivalent of a process list, aliased to the function `ps()`. Running this +will provide a list of volatility objects, based on the operating system in question. We will use these objects to +run our examples against. + +We'll start by creating a process variable, and putting the first result from `ps()` in it. Since the shell is a +python environment, we can do the following: + +.. 
code-block:: python + + (primary) >>> proc = ps()[0] + (primary) >>> proc + + +When printing a volatility structure, various information is output, in this case the `type_name`, the `layer` and +`offset` that it's been constructed on, and the size of the structure. + +We can directly access the volatility information about a structure, using the `.vol` attribute, which contains +basic information such as structure size, type_name, and the list of members amongst others. However, volshell has a +built-in mechanism for providing more information about a structure, called `display_type` or `dt`. This can be given +either a type name (which, if not prefixed with a symbol table name, will use the kernel symbol table identified by the +automagic) or an object. + +.. code-block:: python + + (primary) >>> dt('_EPROCESS') + nt_symbols1!_EPROCESS (2624 bytes) + 0x0 : Pcb nt_symbols1!_KPROCESS + 0x438 : ProcessLock nt_symbols1!_EX_PUSH_LOCK + 0x440 : UniqueProcessId nt_symbols1!pointer + 0x448 : ActiveProcessLinks nt_symbols1!_LIST_ENTRY + ... + +When provided with an object, it will also interpret the data for each member: + +.. code-block:: python + + (primary) >>> dt(proc) + nt_symbols1!_EPROCESS (2624 bytes) + 0x0 : Pcb nt_symbols1!_KPROCESS 0x8c0bccf8d040 + 0x438 : ProcessLock nt_symbols1!_EX_PUSH_LOCK 0x8c0bccf8d478 + 0x440 : UniqueProcessId nt_symbols1!pointer 356 + 0x448 : ActiveProcessLinks nt_symbols1!_LIST_ENTRY 0x8c0bccf8d488 + ... + +These values can be accessed directly as attributes: + +.. code-block:: python + + (primary) >>> proc.UniqueProcessId + 356 + +Pointer structures contain the value they point to, but attributes accessed are forwarded to the object they point to. +This means that pointers do not need to be explicitly dereferenced to access underlying objects. + +.. 
code-block:: python + + (primary) >>> proc.Pcb.DirectoryTableBase + 4355817472 + +Running plugins +--------------- + +It's possible to run any plugin by importing it appropriately and passing it to the `display_plugin_output` or `dpo` +method. In the following example we'll provide no additional parameters. Volatility will show us which parameters +were required: + +.. code-block:: python + + (primary) >>> from volatility3.plugins.windows import pslist + (primary) >>> display_plugin_output(pslist.PsList) + Unable to validate the plugin requirements: ['plugins.Volshell.9QZLXJKFWESI0BAP3M1U7Y5VCT468GRN.PsList.primary', 'plugins.Volshell.9QZLXJKFWESI0BAP3M1U7Y5VCT468GRN.PsList.nt_symbols'] + +We can see that it's made a temporary configuration path for the plugin, and that neither `primary` nor `nt_symbols` +was fulfilled. + +We can see all the options that the plugin can accept by accessing the `get_requirements()` method of the plugin. +This is a classmethod, so it can be called on an uninstantiated copy of the plugin. + +.. code-block:: python + + (primary) >>> pslist.PsList.get_requirements() + [, , , , ] + +We can provide arguments via the `dpo` method call: + +.. code-block:: python + + (primary) >>> display_plugin_output(pslist.PsList, primary = self.current_layer, nt_symbols = self.config['nt_symbols']) + + PID PPID ImageFileName Offset(V) Threads Handles SessionId Wow64 CreateTime ExitTime File output + + 4 0 System 0x8c0bcac87040 143 - N/A False 2021-03-13 17:25:33.000000 N/A Disabled + 92 4 Registry 0x8c0bcac5d080 4 - N/A False 2021-03-13 17:25:28.000000 N/A Disabled + 356 4 smss.exe 0x8c0bccf8d040 3 - N/A False 2021-03-13 17:25:33.000000 N/A Disabled + ... + +Here we've provided the current layer as the TranslationLayerRequirement, and used the symbol table requirement +requested by the volshell plugin itself. A different table could be loaded and provided instead. The context used +by the `dpo` method is always `context`.
+ +Instead of printing the results directly to screen, they can be gathered into a TreeGrid object for direct access by +using the `generate_treegrid` or `gt` command. + +.. code-block:: python + + (primary) >>> treegrid = gt(pslist.PsList, primary = self.current_layer, nt_symbols = self.config['nt_symbols']) + (primary) >>> treegrid.populate() + +Treegrids must be populated before the data in them can be accessed. This is where the plugin actually runs and +produces data. + + +Running scripts +--------------- + +It might be beneficial to code up a small snippet of code and execute it against a memory image, rather than writing +a full plugin. + +The snippet should be lines that will be executed within the volshell context (as such they can immediately access +`self` and `context`, for example). These can be executed using the `run_script` or `rs` command, or by providing the +file on the command line with `--script`. + +For example, to load a layer and extract bytes from a particular offset into a new file, the following snippet could be +used: + +.. code-block:: python + + import volatility3.framework.layers.mynewlayer as mynewlayer + + layer = cc(mynewlayer.MyNewLayer, on_top_of = 'primary', other_parameter = 'important') + with open('output.dmp', 'wb') as fp: + for i in range(0, 1073741824, 0x1000): + data = layer.read(i, 0x1000, pad = True) + fp.write(data) + +As this demonstrates, all of Python is accessible, as are the volshell built-in functions (such as `cc`, which +creates a constructable, like a layer or a symbol table). + +Loading files +------------- + +Files can be loaded as physical layers using the `load_file` or `lf` command, which takes a filename or a URI. This will be added +to `context.layers` and can be accessed by the name returned by `lf`.
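The filename-or-URI normalization that `lf` performs can be sketched in a few lines. This mirrors the `location_from_file` logic visible in the CLI diff below, but it is a simplified illustration, not the exact Volatility implementation:

```python
from urllib import parse, request

def location_from_file(filename):
    # Sketch: bare filesystem paths become file: URLs, while strings that
    # already carry a URL scheme are passed through untouched.
    if not parse.urlparse(filename).scheme:
        return "file:" + request.pathname2url(filename)
    return filename

# A URL is returned unchanged; a plain path gains a file: scheme.
assert location_from_file("http://example.com/image.dmp") == "http://example.com/image.dmp"
assert location_from_file("/tmp/image.dmp").startswith("file:")
```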
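The padded chunk-read used in the run_script example above (`layer.read(i, 0x1000, pad = True)`) can be illustrated with a plain-Python stand-in. `read_padded` is a hypothetical helper written only to show the assumed pad-on-short-read behaviour, not part of the Volatility API:

```python
import io

def read_padded(stream, offset, length, pad=b"\x00"):
    # Hypothetical helper mimicking the assumed semantics of
    # layer.read(offset, length, pad=True): a read that runs past the end
    # of the data is padded out to the requested length instead of failing.
    stream.seek(offset)
    data = stream.read(length)
    return data + pad * (length - len(data))

buf = io.BytesIO(b"\x01\x02\x03")
chunk = read_padded(buf, 0, 8)
assert chunk == b"\x01\x02\x03" + b"\x00" * 5
```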
diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 58a80749cd..98b769c3b6 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -327,7 +327,8 @@ def run(self): except (exceptions.VolatilityException) as excp: self.process_exceptions(excp) - def location_from_file(self, filename: str) -> str: + @classmethod + def location_from_file(cls, filename: str) -> str: """Returns the URL location from a file parameter (which may be a URL) Args: diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py index 78ab9ba2c6..6c81cda4bc 100644 --- a/volatility3/cli/volshell/generic.py +++ b/volatility3/cli/volshell/generic.py @@ -11,7 +11,7 @@ from typing import Any, Dict, List, Optional, Tuple, Union, Type, Iterable from urllib import request, parse -from volatility3.cli import text_renderer +from volatility3.cli import text_renderer, volshell from volatility3.framework import renderers, interfaces, objects, plugins, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.layers import intel, physical, resources @@ -349,8 +349,7 @@ def run_script(self, location: str): def load_file(self, location: str): """Loads a file into a Filelayer and returns the name of the layer""" layer_name = self.context.layers.free_layer_name() - if not parse.urlparse(location).scheme: - location = "file:" + request.pathname2url(location) + location = volshell.VolShell.location_from_file(location) current_config_path = 'volshell.layers.' 
+ layer_name self.context.config[interfaces.configuration.path_join(current_config_path, "location")] = location layer = physical.FileLayer(self.context, current_config_path, layer_name) From 95a9effbe39676c4f22b35bab538f57e7b2e3ef1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 14 Mar 2021 23:27:53 +0000 Subject: [PATCH 094/294] Windows: Generalize symbol_table_from_pdb --- .../framework/plugins/windows/netstat.py | 13 +++-- .../framework/symbols/windows/pdbutil.py | 47 ++++++++++++++++++- 2 files changed, 53 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index 1d37258ab7..e81a4d7232 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -32,6 +32,7 @@ def get_requirements(cls): requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.VersionRequirement(name = 'netscan', component = netscan.NetScan, version = (1, 0, 0)), requirements.VersionRequirement(name = 'modules', component = modules.Modules, version = (1, 0, 0)), + requirements.VersionRequirement(name = 'pdbutil', component = pdbutil.PDBUtility, version = (1, 0, 0)), requirements.BooleanRequirement( name = 'include-corrupt', description = @@ -184,7 +185,8 @@ def get_tcpip_module(cls, context: interfaces.context.ContextInterface, layer_na @classmethod def parse_hashtable(cls, context: interfaces.context.ContextInterface, layer_name: str, ht_offset: int, - ht_length: int, alignment: int, net_symbol_table: str) -> Generator[interfaces.objects.ObjectInterface, None, None]: + ht_length: int, alignment: int, + net_symbol_table: str) -> Generator[interfaces.objects.ObjectInterface, None, None]: """Parses a hashtable quick and dirty. 
Args: @@ -288,8 +290,8 @@ def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, end = tcpip_module_offset + tcpip_module_size)) if not guids: - raise exceptions.VolatilityException("Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format( - tcpip_module_offset)) + raise exceptions.VolatilityException( + "Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format(tcpip_module_offset)) guid = guids[0] @@ -437,8 +439,9 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): tcpip_module = self.get_tcpip_module(self.context, self.config["primary"], self.config["nt_symbols"]) - tcpip_symbol_table = self.create_tcpip_symbol_table(self.context, self.config_path, self.config["primary"], - tcpip_module.DllBase, tcpip_module.SizeOfImage) + tcpip_symbol_table = pdbutil.PDBUtility.symbol_table_from_pdb( + self.context, interfaces.configuration.path_join(self.config_path, 'tcpip'), self.config["primary"], + "tcpip.pdb", tcpip_module.DllBase, tcpip_module.SizeOfImage) for netw_obj in self.list_sockets(self.context, self.config['primary'], self.config['nt_symbols'], netscan_symbol_table, tcpip_module.DllBase, tcpip_symbol_table): diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index ac7357f6ae..5db71ca019 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -12,7 +12,7 @@ from urllib import request, parse from volatility3 import symbols -from volatility3.framework import constants, interfaces +from volatility3.framework import constants, interfaces, exceptions from volatility3.framework.configuration.requirements import SymbolTableRequirement from volatility3.framework.symbols import intermed from volatility3.framework.symbols.windows import pdbconv @@ -20,9 +20,11 @@ vollog = logging.getLogger(__name__) -class PDBUtility: +class PDBUtility(interfaces.configuration.VersionableInterface): 
"""Class to handle and manage all getting symbols based on MZ header""" + _version = (1, 0, 0) + @classmethod def symbol_table_from_offset( cls, @@ -279,6 +281,47 @@ def pdbname_scan(cls, 'mz_offset': mz_offset } + @classmethod + def symbol_table_from_pdb(cls, context: interfaces.context.ContextInterface, config_path: str, layer_name: str, + pdb_name: str, module_offset: int, module_size: int) -> str: + """Creates symbol table for a module in the specified layer_name. + + Searches the memory section of the loaded module for its PDB GUID + and loads the associated symbol table into the symbol space. + + Args: + context: The context to retrieve required elements (layers, symbol tables) from + config_path: The config path where to find symbol files + layer_name: The name of the layer on which to operate + module_offset: This memory dump's module image offset + module_size: The size of the module for this dump + + Returns: + The name of the constructed and loaded symbol table + """ + + guids = list( + cls.pdbname_scan(context, + layer_name, + context.layers[layer_name].page_size, [bytes(pdb_name, 'latin-1')], + start = module_offset, + end = module_offset + module_size)) + + if not guids: + raise exceptions.VolatilityException( + "Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format(module_offset)) + + guid = guids[0] + + vollog.debug("Found {}: {}-{}".format(guid["pdb_name"], guid["GUID"], guid["age"])) + + return cls.load_windows_symbol_table(context, + guid["GUID"], + guid["age"], + guid["pdb_name"], + "volatility3.framework.symbols.intermed.IntermediateSymbolTable", + config_path = config_path) + class PdbSignatureScanner(interfaces.layers.ScannerInterface): """A :class:`~volatility3.framework.interfaces.layers.ScannerInterface` From 0a7a7496672d18b333cade5ab37701036603d273 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 15 Mar 2021 11:15:44 +0000 Subject: [PATCH 095/294] Documentation: Add text about banners and isfinfo --- 
doc/source/symbol-tables.rst | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/doc/source/symbol-tables.rst b/doc/source/symbol-tables.rst index fd94dea4d4..aecc4d6f91 100644 --- a/doc/source/symbol-tables.rst +++ b/doc/source/symbol-tables.rst @@ -49,5 +49,14 @@ under the operating system directory. Linux and Mac symbol tables can be generated from a DWARF file using a tool called `dwarf2json `_. Currently a kernel with debugging symbols is the only suitable means for recovering all the information required by most Volatility plugins. +To determine the banner string for a particular memory image, use the `banners` plugin. Once the specific banner is known, +try to locate that exact kernel debugging package for the operating system. + Once a kernel with debugging symbols/appropriate DWARF file has been located, `dwarf2json `_ will convert it into an -appropriate JSON file. +appropriate JSON file. Example code for automatically creating a JSON file from the URLs of the kernel debugging package and +the package containing the System.map can be found in `stock-linux-json.py `. + +The banners available for volatility to use can be found using the `isfinfo` plugin, but this will potentially take a +long time to run depending on the number of JSON files available. This will list all the JSON (ISF) files that +volatility3 is aware of, and, for Linux/Mac systems, which banner string they search for. For volatility to use the JSON +file, the banners must match exactly (down to the compilation date). From 0890a962495c0274994685ed46fd7fed65aa3667 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 15 Mar 2021 22:16:46 +0000 Subject: [PATCH 096/294] Windows: Check strings file before use This will cause the strings plugin to exit early if the strings file has formatting issues.
--- .../framework/plugins/windows/strings.py | 37 +++++++++++++------ 1 file changed, 25 insertions(+), 12 deletions(-) diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index 3e9fdfb855..5d0d3f8738 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -20,7 +20,7 @@ class Strings(interfaces.plugins.PluginInterface): """Reads output from the strings command and indicates which process(es) each string belongs to.""" _required_framework_version = (1, 0, 0) - strings_pattern = re.compile(rb"(?:\W*)([0-9]+)(?:\W*)(\w[\w\W]+)\n?") + strings_pattern = re.compile(rb"^(?:\W*)([0-9]+)(?:\W*)(\w[\w\W]+)\n?") @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: @@ -40,27 +40,40 @@ def run(self): def _generator(self) -> Generator[Tuple, None, None]: """Generates results from a strings file.""" - revmap = self.generate_mapping(self.config['primary']) + string_list = {} # type: Dict[int, List[bytes]] + # Test strings file format is accurate accessor = resources.ResourceAccessor() strings_fp = accessor.open(self.config['strings_file'], "rb") - strings_size = path.getsize(strings_fp.file.name) - line = strings_fp.readline() - last_prog = 0 + count = 0 while line: + count += 1 try: offset, string = self._parse_line(line) - try: - revmap_list = [name + ":" + hex(offset) for (name, offset) in revmap[offset >> 12]] - except (IndexError, KeyError): - revmap_list = ["FREE MEMORY"] - yield (0, (str(string, 'latin-1'), format_hints.Hex(offset), ", ".join(revmap_list))) + string_list[offset] = string_list.get(offset, []) + [string] except ValueError: - vollog.error("Strings file is in the wrong format") + vollog.error("Line in unrecognized format: line {}".format(count)) return line = strings_fp.readline() - prog = strings_fp.tell() / strings_size * 100 + + # TODO: Check the strings file *before* doing the expensive 
computation + revmap = self.generate_mapping(self.config['primary']) + + strings_fp = accessor.open(self.config['strings_file'], "rb") + strings_size = path.getsize(strings_fp.file.name) + + last_prog = count = 0 # type: float + num_strings = len(string_list) + for offset in string_list: + string = b"; ".join(string_list[offset]) + count += 1 + try: + revmap_list = [name + ":" + hex(offset) for (name, offset) in revmap[offset >> 12]] + except (IndexError, KeyError): + revmap_list = ["FREE MEMORY"] + yield (0, (str(string, 'latin-1'), format_hints.Hex(offset), ", ".join(revmap_list))) + prog = count / num_strings * 100 if round(prog, 1) > last_prog: last_prog = round(prog, 1) self._progress_callback(prog, "Matching strings in memory") From 05ffab9108b5f34455fbb9f79f4f05296d540b0b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 15 Mar 2021 22:39:24 +0000 Subject: [PATCH 097/294] Windows: Make generate_mapping externally visible --- .../framework/plugins/windows/strings.py | 100 ++++++++++-------- 1 file changed, 58 insertions(+), 42 deletions(-) diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index 5d0d3f8738..a44e596725 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -4,10 +4,9 @@ import logging import re -from os import path -from typing import Dict, Generator, List, Set, Tuple +from typing import Dict, Generator, List, Set, Tuple, Optional -from volatility3.framework import interfaces, renderers, exceptions +from volatility3.framework import interfaces, renderers, exceptions, constants from volatility3.framework.configuration import requirements from volatility3.framework.layers import intel, resources, linear from volatility3.framework.renderers import format_hints @@ -19,6 +18,7 @@ class Strings(interfaces.plugins.PluginInterface): """Reads output from the strings command and indicates which process(es) each string belongs 
to.""" + _version = (1, 0, 0) _required_framework_version = (1, 0, 0) strings_pattern = re.compile(rb"^(?:\W*)([0-9]+)(?:\W*)(\w[\w\W]+)\n?") @@ -30,6 +30,10 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] description = 'Memory layer for the kernel', architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ListRequirement(name = 'pid', + element_type = int, + description = "Process ID to include (all other processes are excluded)", + optional = True), requirements.URIRequirement(name = "strings_file", description = "Strings file") ] # TODO: Make URLRequirement that can accept a file address which the framework can open @@ -40,40 +44,38 @@ def run(self): def _generator(self) -> Generator[Tuple, None, None]: """Generates results from a strings file.""" - string_list = {} # type: Dict[int, List[bytes]] + string_list = [] # type: List[Tuple[int,bytes]] # Test strings file format is accurate accessor = resources.ResourceAccessor() strings_fp = accessor.open(self.config['strings_file'], "rb") line = strings_fp.readline() - count = 0 + count = 0 # type: float while line: count += 1 try: offset, string = self._parse_line(line) - string_list[offset] = string_list.get(offset, []) + [string] + string_list.append((offset, string)) except ValueError: vollog.error("Line in unrecognized format: line {}".format(count)) - return line = strings_fp.readline() - # TODO: Check the strings file *before* doing the expensive computation - revmap = self.generate_mapping(self.config['primary']) + revmap = self.generate_mapping(self.context, + self.config['primary'], + self.config['nt_symbols'], + progress_callback = self._progress_callback, + pid_list = self.config['pid']) - strings_fp = accessor.open(self.config['strings_file'], "rb") - strings_size = path.getsize(strings_fp.file.name) - - last_prog = count = 0 # type: float + last_prog = line_count = 0 # 
type: float num_strings = len(string_list) - for offset in string_list: - string = b"; ".join(string_list[offset]) - count += 1 + for offset, string in string_list: + line_count += 1 try: revmap_list = [name + ":" + hex(offset) for (name, offset) in revmap[offset >> 12]] except (IndexError, KeyError): revmap_list = ["FREE MEMORY"] yield (0, (str(string, 'latin-1'), format_hints.Hex(offset), ", ".join(revmap_list))) - prog = count / num_strings * 100 + prog = line_count / num_strings * 100 if round(prog, 1) > last_prog: last_prog = round(prog, 1) self._progress_callback(prog, "Matching strings in memory") @@ -94,17 +96,29 @@ def _parse_line(self, line: bytes) -> Tuple[int, bytes]: offset, string = match.group(1, 2) return int(offset), string - def generate_mapping(self, layer_name: str) -> Dict[int, Set[Tuple[str, int]]]: + @classmethod + def generate_mapping(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + symbol_table: str, + progress_callback: constants.ProgressCallback = None, + pid_list: Optional[List[int]] = None) -> Dict[int, Set[Tuple[str, int]]]: """Creates a reverse mapping between virtual addresses and physical addresses. 
Args: + context: the context for the method to run against layer_name: the layer to map against the string lines + symbol_table: the name of the symbol table for the provided layer + progress_callback: an optional callable to display progress + pid_list: a list of process IDs to consider when generating the reverse map Returns: A mapping of virtual offsets to strings and physical offsets """ - layer = self._context.layers[layer_name] + filter = pslist.PsList.create_pid_filter(pid_list) + + layer = context.layers[layer_name] reverse_map = dict() # type: Dict[int, Set[Tuple[str, int]]] if isinstance(layer, intel.Intel): # We don't care about errors, we just wanted chunks that map correctly @@ -114,31 +128,33 @@ def generate_mapping(self, layer_name: str) -> Dict[int, Set[Tuple[str, int]]]: cur_set = reverse_map.get(mapped_offset >> 12, set()) cur_set.add(("kernel", offset)) reverse_map[mapped_offset >> 12] = cur_set - self._progress_callback((offset * 100) / layer.maximum_address, "Creating reverse kernel map") + if progress_callback: + progress_callback((offset * 100) / layer.maximum_address, "Creating reverse kernel map") # TODO: Include kernel modules - for process in pslist.PsList.list_processes(self.context, self.config['primary'], - self.config['nt_symbols']): - proc_id = "Unknown" - try: - proc_id = process.UniqueProcessId - proc_layer_name = process.add_process_layer() - except exceptions.InvalidAddressException as excp: - vollog.debug("Process {}: invalid address {} in layer {}".format( - proc_id, excp.invalid_address, excp.layer_name)) - continue - - proc_layer = self.context.layers[proc_layer_name] - if isinstance(proc_layer, linear.LinearlyMappedLayer): - for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): - mapped_offset, _, offset, mapped_size, maplayer = mapval - for val in range(mapped_offset, mapped_offset + mapped_size, 0x1000): - cur_set = reverse_map.get(mapped_offset >> 12, set()) - cur_set.add(("Process
{}".format(process.UniqueProcessId), offset)) - reverse_map[mapped_offset >> 12] = cur_set - # FIXME: make the progress for all processes, rather than per-process - self._progress_callback((offset * 100) / layer.maximum_address, - "Creating mapping for task {}".format(process.UniqueProcessId)) + for process in pslist.PsList.list_processes(context, layer_name, symbol_table): + if not filter(process): + proc_id = "Unknown" + try: + proc_id = process.UniqueProcessId + proc_layer_name = process.add_process_layer() + except exceptions.InvalidAddressException as excp: + vollog.debug("Process {}: invalid address {} in layer {}".format( + proc_id, excp.invalid_address, excp.layer_name)) + continue + + proc_layer = context.layers[proc_layer_name] + if isinstance(proc_layer, linear.LinearlyMappedLayer): + for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): + mapped_offset, _, offset, mapped_size, maplayer = mapval + for val in range(mapped_offset, mapped_offset + mapped_size, 0x1000): + cur_set = reverse_map.get(mapped_offset >> 12, set()) + cur_set.add(("Process {}".format(process.UniqueProcessId), offset)) + reverse_map[mapped_offset >> 12] = cur_set + # FIXME: make the progress for all processes, rather than per-process + if progress_callback: + progress_callback((offset * 100) / layer.maximum_address, + "Creating mapping for task {}".format(process.UniqueProcessId)) return reverse_map From 88bddbd5965d895e0a787b2d237fb1ed4a4c973c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 15 Mar 2021 22:56:40 +0000 Subject: [PATCH 098/294] Windows: Clarify separate yara compiled method --- volatility3/framework/plugins/yarascan.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index b3c3acddfe..ca8a91f840 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -58,6 +58,8 @@ def get_requirements(cls) -> 
List[interfaces.configuration.RequirementInterface] description = "Yara rules (as a string)", optional = True), requirements.URIRequirement(name = "yara_file", description = "Yara rules (as a file)", optional = True), + # This additional requirement is to follow suit with upstream, who feel that compiled rules could potentially be used to execute malicious code + # As such, there's a separate option to run compiled files, as happened with yara-3.9 and later requirements.URIRequirement(name = "yara_compiled_file", description = "Yara compiled rules (as a file)", optional = True), requirements.IntRequirement(name = "max_size", default = 0x40000000, From 180087e746dbeb5bc96fd1c3f22e6f1199057c52 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 15 Mar 2021 22:58:48 +0000 Subject: [PATCH 099/294] Windows: Update vadyarascan with compiled file option --- volatility3/framework/plugins/windows/vadyarascan.py | 5 +++++ volatility3/framework/plugins/yarascan.py | 6 ++++-- 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/vadyarascan.py b/volatility3/framework/plugins/windows/vadyarascan.py index 248a2545b4..6c4723e88b 100644 --- a/volatility3/framework/plugins/windows/vadyarascan.py +++ b/volatility3/framework/plugins/windows/vadyarascan.py @@ -35,6 +35,11 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] description = "Yara rules (as a string)", optional = True), requirements.URIRequirement(name = "yara_file", description = "Yara rules (as a file)", optional = True), + # This additional requirement is to follow suit with upstream, who feel that compiled rules could potentially be used to execute malicious code + # As such, there's a separate option to run compiled files, as happened with yara-3.9 and later + requirements.URIRequirement(name = "yara_compiled_file", + description = "Yara compiled rules (as a file)", + optional = True), requirements.IntRequirement(name = "max_size", default = 
0x40000000, description = "Set the maximum size (default is 1GB)", diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index ca8a91f840..89d1eb49ae 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -60,7 +60,9 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] requirements.URIRequirement(name = "yara_file", description = "Yara rules (as a file)", optional = True), # This additional requirement is to follow suit with upstream, who feel that compiled rules could potentially be used to execute malicious code # As such, there's a separate option to run compiled files, as happened with yara-3.9 and later - requirements.URIRequirement(name = "yara_compiled_file", description = "Yara compiled rules (as a file)", optional = True), + requirements.URIRequirement(name = "yara_compiled_file", + description = "Yara compiled rules (as a file)", + optional = True), requirements.IntRequirement(name = "max_size", default = 0x40000000, description = "Set the maximum size (default is 1GB)", @@ -82,7 +84,7 @@ def process_yara_options(cls, config: Dict[str, Any]): elif config.get('yara_file', None) is not None: rules = yara.compile(file = resources.ResourceAccessor().open(config['yara_file'], "rb")) elif config.get('yara_compiled_file', None) is not None: - rules = yara.load(file = resources.ResourceAccessor().open(config['yara_compiled_file'], "rb")) + rules = yara.load(file = resources.ResourceAccessor().open(config['yara_compiled_file'], "rb")) else: vollog.error("No yara rules, nor yara rules file were specified") return rules From 8d8b2ea2dd529b4b640a7daa0216f57491940bb1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Mar 2021 15:31:58 +0000 Subject: [PATCH 100/294] Windows: Fix up hardcoded filename in pdbutil --- volatility3/framework/symbols/windows/pdbutil.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git 
a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 5db71ca019..56aa63f277 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -309,7 +309,7 @@ def symbol_table_from_pdb(cls, context: interfaces.context.ContextInterface, con if not guids: raise exceptions.VolatilityException( - "Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format(module_offset)) + "Did not find GUID of {} in module @ 0x{:x}!".format(pdb_name, module_offset)) guid = guids[0] From 9c30ed19ef600038db31b5150a0848a2e228fad0 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Mar 2021 20:07:29 +0000 Subject: [PATCH 101/294] Windows: Deprecate netstat create_tcpip_symbol_table --- .../framework/plugins/windows/netstat.py | 33 +++++-------------- 1 file changed, 9 insertions(+), 24 deletions(-) diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index e81a4d7232..70a47f85af 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -266,7 +266,9 @@ def parse_partitions(cls, context: interfaces.context.ContextInterface, layer_na @classmethod def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, config_path: str, layer_name: str, tcpip_module_offset: int, tcpip_module_size: int) -> str: - """Creates symbol table for the current image's tcpip.sys driver. + """DEPRECATED: Use PDBUtility.symbol_table_from_pdb instead + + Creates symbol table for the current image's tcpip.sys driver. Searches the memory section of the loaded tcpip.sys module for its PDB GUID and loads the associated symbol table into the symbol space. 
@@ -281,29 +283,12 @@ def create_tcpip_symbol_table(cls, context: interfaces.context.ContextInterface, Returns: The name of the constructed and loaded symbol table """ - - guids = list( - pdbutil.PDBUtility.pdbname_scan(context, - layer_name, - context.layers[layer_name].page_size, [b"tcpip.pdb"], - start = tcpip_module_offset, - end = tcpip_module_offset + tcpip_module_size)) - - if not guids: - raise exceptions.VolatilityException( - "Did not find GUID of tcpip.pdb in tcpip.sys module @ 0x{:x}!".format(tcpip_module_offset)) - - guid = guids[0] - - vollog.debug("Found {}: {}-{}".format(guid["pdb_name"], guid["GUID"], guid["age"])) - - return pdbutil.PDBUtility.load_windows_symbol_table( - context, - guid["GUID"], - guid["age"], - guid["pdb_name"], - "volatility3.framework.symbols.intermed.IntermediateSymbolTable", - config_path = "tcpip") + vollog.debug( + "Deprecation: This plugin uses netstat.create_tcpip_symbol_table instead of PDBUtility.symbol_table_from_pdb" + ) + return pdbutil.PDBUtility.symbol_table_from_pdb(context, + interfaces.configuration.path_join(config_path, 'tcpip'), + layer_name, "tcpip.pdb", tcpip_module_offset, tcpip_module_size) @classmethod def find_port_pools(cls, context: interfaces.context.ContextInterface, layer_name: str, net_symbol_table: str, From ea71cbe9c946d0ff9b3592338353e61b6c42d25d Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Mar 2021 21:02:53 +0000 Subject: [PATCH 102/294] Windows: Fix the verinfo versioning This should already have been versioned because it had a classmethod. Since it wasn't, we can start at (1, 0, 0) but it should only need framework version (1, 0, 0) as well.
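The deprecation pattern used in the netstat change above — keep the old classmethod as a thin shim that logs a deprecation message and forwards to the new shared utility — can be sketched in isolation. All class and function names below are illustrative stand-ins, not the real Volatility APIs:

```python
import logging

vollog = logging.getLogger(__name__)

class PDBUtilityStub:
    """Stand-in for the shared utility that now owns the real logic."""

    @classmethod
    def symbol_table_from_pdb(cls, config_path: str, layer_name: str, pdb_name: str) -> str:
        # Pretend to construct and load a symbol table, returning its name.
        return "{}!{}".format(config_path, pdb_name)

def create_tcpip_symbol_table(config_path: str, layer_name: str) -> str:
    """DEPRECATED: kept only so existing callers do not break."""
    vollog.debug("Deprecation: use PDBUtilityStub.symbol_table_from_pdb instead")
    # Forward with the hardcoded values the old API implied.
    return PDBUtilityStub.symbol_table_from_pdb(config_path + "/tcpip", layer_name, "tcpip.pdb")

print(create_tcpip_symbol_table("plugins.netstat", "layer_name"))
# -> plugins.netstat/tcpip!tcpip.pdb
```

The benefit of the shim over outright removal is that third-party plugins calling the old classmethod keep working while the debug log nudges authors toward the shared implementation.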
--- volatility3/framework/plugins/windows/verinfo.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/verinfo.py b/volatility3/framework/plugins/windows/verinfo.py index 92449588f2..25115136b6 100644 --- a/volatility3/framework/plugins/windows/verinfo.py +++ b/volatility3/framework/plugins/windows/verinfo.py @@ -27,7 +27,8 @@ class VerInfo(interfaces.plugins.PluginInterface): """Lists version information from PE files.""" - _required_framework_version = (1, 1, 0) + _version = (1, 0, 0) + _required_framework_version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: From d3b407515a5d1ea21fe00540c3357d5f9754870a Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Fri, 19 Mar 2021 11:29:19 -0500 Subject: [PATCH 103/294] Initial commit for mass testing --- .../plugins/windows/skeleton_key_check.py | 558 ++++++++++++++++++ .../symbols/windows/kerb_ecrypt.json | 97 +++ 2 files changed, 655 insertions(+) create mode 100644 volatility3/framework/plugins/windows/skeleton_key_check.py create mode 100644 volatility3/framework/symbols/windows/kerb_ecrypt.json diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py new file mode 100644 index 0000000000..88d2c9dd9d --- /dev/null +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -0,0 +1,558 @@ +# This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 +# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 +# + +# This module attempts to locate skeleton-key like function hooks. 
+# It does this by locating the CSystems array through a variety of methods, +# and then validating the entry for RC4 HMAC (0x17 / 23) +# +# For a thorough walkthrough on how the R&D was performed to develop this plugin, +# please see our blogpost here: +# +import logging, io + +from typing import Iterable, Tuple + +from volatility3.framework.symbols.windows import pdbutil +from volatility3.framework import interfaces, symbols, exceptions +from volatility3.framework import renderers, constants +from volatility3.framework.layers import scanners +from volatility3.framework.configuration import requirements +from volatility3.framework.objects import utility +from volatility3.framework.symbols import intermed +from volatility3.framework.renderers import format_hints +from volatility3.plugins.windows import pslist, vadinfo + +from volatility3.framework.symbols.windows.extensions import pe + +try: + import capstone + has_capstone = True +except ImportError: + has_capstone = False + +try: + import pefile + has_pefile = True +except ImportError: + has_pefile = False + +vollog = logging.getLogger(__name__) + +class Skeleton_Key_Check(interfaces.plugins.PluginInterface): + """Looks for signs of Skeleton Key malware by validating the RC4 HMAC CSystem inside lsass.exe.""" + + _required_framework_version = (1, 0, 0) + + @classmethod + def get_requirements(cls): + # Since we're calling the plugin, make sure we have the plugin's requirements + return [ + requirements.TranslationLayerRequirement(name = 'primary', + description = 'Memory layer for the kernel', + architectures = ["Intel32", "Intel64"]), + requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.VersionRequirement(name = 'pslist', component = pslist.PsList, version = (2, 0, 0)), + requirements.VersionRequirement(name = 'vadinfo', component = vadinfo.VadInfo, version = (2, 0, 0)), + requirements.VersionRequirement(name = 'pdbutil', component = pdbutil.PDBUtility, version = (1, 0,
0)), + ] + + # @ikelos + # these lines are copy/paste from inside of verinfo->get_version_information + # not sure if this is worthy of making it an API or not though + # basically it takes in a pe symbol table, layer name, and base address + # and then kicks back a pefile instance + # we can either make it a common API or we can just delete this comment + + # @ikelos I don't know how to specify the return value as a pefile object... + def _get_pefile_obj(self, pe_table_name: str, layer_name: str, base_address: int): + pe_data = io.BytesIO() + + try: + dos_header = self.context.object(pe_table_name + constants.BANG + "_IMAGE_DOS_HEADER", + offset = base_address, + layer_name = layer_name) + + for offset, data in dos_header.reconstruct(): + pe_data.seek(offset) + pe_data.write(data) + + pe_ret = pefile.PE(data = pe_data.getvalue(), fast_load = True) + + except exceptions.InvalidAddressException: + pe_ret = None + + return pe_ret + + def _check_for_skeleton_key_vad(self, csystem: interfaces.objects.ObjectInterface, + cryptdll_base: int, + cryptdll_size: int) -> bool: + """ + Checks if Initialize and/or Decrypt is hooked by determining if + these function pointers reference addresses inside of the cryptdll VAD + + Args: + csystem: The RC4HMAC KERB_ECRYPT instance + cryptdll_base: Base address of the cryptdll.dll VAD + cryptdll_size: Size of the VAD + Returns: + bool: if a skeleton key hook is present + """ + return not ((cryptdll_base <= csystem.Initialize <= cryptdll_base + cryptdll_size) and \ + (cryptdll_base <= csystem.Decrypt <= cryptdll_base + cryptdll_size)) + + def _check_for_skeleton_key_symbols(self, csystem: interfaces.objects.ObjectInterface, + rc4HmacInitialize: int, + rc4HmacDecrypt: int) -> bool: + """ + Uses the PDB information to specifically check if the csystem for RC4HMAC + has an initialization pointer to rc4HmacInitialize and a decryption pointer + for rc4HmacDecrypt.
+ + Args: + csystem: The RC4HMAC KERB_ECRYPT instance + rc4HmacInitialize: The expected address of csystem Initialization function + rc4HmacDecrypt: The expected address of the csystem Decryption function + + Returns: + bool: if a skeleton key hook was found + """ + return csystem.Initialize != rc4HmacInitialize or csystem.Decrypt != rc4HmacDecrypt + + def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, + cryptdll_types: interfaces.context.ModuleInterface, + proc_layer_name: str, + cryptdll_base: int) -> Tuple[interfaces.objects.ObjectInterface, int, int, int]: + + """ + Finds the CSystems array through use of PDB symbols + + Args: + cryptdll_symbols: The symbols table from the PDB file + cryptdll_types: The types from cryptdll binary analysis + proc_layer_name: The lsass.exe process layer name + cryptdll_base: Base address of cryptdll.dll inside of lsass.exe + + Returns: + Tuple of: + array_start: Where CSystems begins + count: Number of array elements + rc4HmacInitialize: The runtime address of the expected initialization function + rc4HmacDecrypt: The runtime address of the expected decryption function + """ + cryptdll_module = self.context.module(cryptdll_symbols, layer_name = proc_layer_name, offset = cryptdll_base) + + count_address = cryptdll_module.get_symbol("cCSystems").address + count = cryptdll_types.object(object_type = "unsigned long", offset = count_address) + + array_start = cryptdll_module.get_symbol("CSystems").address + cryptdll_base + + rc4HmacInitialize = cryptdll_module.get_symbol("rc4HmacInitialize").address + cryptdll_base + + rc4HmacDecrypt = cryptdll_module.get_symbol("rc4HmacDecrypt").address + cryptdll_base + + return array_start, count, rc4HmacInitialize, rc4HmacDecrypt + + def _get_cryptdll_types(self, context: interfaces.context.ContextInterface, + config, + config_path: str, + proc_layer_name: str, + cryptdll_base: int): + """ + Builds a symbol table from the cryptdll types generated after binary analysis + + Args: + 
context: the context to operate upon + config: the configuration for this plugin + config_path: the config path for this plugin + proc_layer_name: name of the lsass.exe process layer + cryptdll_base: base address of cryptdll.dll inside of lsass.exe + """ + table_mapping = {"nt_symbols": config["nt_symbols"]} + + cryptdll_symbol_table = intermed.IntermediateSymbolTable.create(context = context, + config_path = config_path, + sub_path = "windows", + filename = "kerb_ecrypt", + table_mapping = table_mapping) + + return context.module(cryptdll_symbol_table, proc_layer_name, offset = cryptdll_base) + + def _find_and_parse_cryptdll(self, proc_list: Iterable) -> \ + Tuple[interfaces.context.ContextInterface, str, int, int]: + """ + Finds the base address of cryptdll.dll inside of lsass.exe + + Args: + proc_list: the process list filtered to just lsass.exe instances + + Returns: + A tuple of: + lsass_proc: the process object for lsass.exe + proc_layer_name: the name of the lsass.exe process layer + cryptdll_base: the base address of cryptdll.dll + cryptdll_size: the size of the VAD for cryptdll.dll + """ + lsass_proc = None + proc_layer_name = None + cryptdll_base = None + cryptdll_size = None + + for proc in proc_list: + try: + proc_id = proc.UniqueProcessId + proc_layer_name = proc.add_process_layer() + except exceptions.InvalidAddressException as excp: + vollog.debug("Process {}: invalid address {} in layer {}".format(proc_id, excp.invalid_address, + excp.layer_name)) + continue + + proc_layer = self.context.layers[proc_layer_name] + + for vad in proc.get_vad_root().traverse(): + filename = vad.get_file_name() + if type(filename) == renderers.NotApplicableValue or not filename.lower().endswith("cryptdll.dll"): + continue + + cryptdll_base = vad.get_start() + cryptdll_size = vad.get_end() - cryptdll_base + + break + + lsass_proc = proc + break + + return lsass_proc, proc_layer_name, cryptdll_base, cryptdll_size + + def _find_csystems_with_symbols(self, proc_layer_name: str, + cryptdll_types: interfaces.context.ModuleInterface, +
cryptdll_base: int, + cryptdll_size: int) -> \ + Tuple[interfaces.objects.ObjectInterface, int, int]: + """ + Attempts to find CSystems and the expected address of the handlers. + Relies on downloading and parsing of the cryptdll PDB file. + + Args: + proc_layer_name: the name of the lsass.exe process layer + cryptdll_types: The types from cryptdll binary analysis + cryptdll_base: the base address of cryptdll.dll + cryptdll_size: the size of the VAD for cryptdll.dll + + Returns: + A tuple of: + array: An initialized Volatility array of _KERB_ECRYPT structures + rc4HmacInitialize: The expected address of csystem Initialization function + rc4HmacDecrypt: The expected address of the csystem Decryption function + """ + try: + cryptdll_symbols = pdbutil.PDBUtility.symbol_table_from_pdb(self.context, + interfaces.configuration.path_join(self.config_path, 'cryptdll'), + proc_layer_name, + "cryptdll.pdb", + cryptdll_base, + cryptdll_size) + except exceptions.VolatilityException: + return None, None, None + + array_start, count, rc4HmacInitialize, rc4HmacDecrypt = self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) + + array = cryptdll_types.object(object_type = "array", + offset = array_start, + subtype = cryptdll_types.get_type("_KERB_ECRYPT"), + count = count, + absolute = True) + + return array, rc4HmacInitialize, rc4HmacDecrypt + + def _get_rip_relative_target(self, inst) -> int: + """ + Returns the target address of a RIP-relative instruction. + + These instructions contain the offset of a target address + relative to the current instruction pointer.
+ + Args: + inst: A capstone instruction instance + + Returns: + None or the target address of the function + """ + try: + opnd = inst.operands[1] + except capstone.CsError: + return None + + if opnd.type != capstone.x86.X86_OP_MEM: + return None + + if inst.reg_name(opnd.mem.base) != "rip": + return None + + return inst.address + inst.size + opnd.mem.disp + + def _analyze_cdlocatecsystem(self, function_bytes: bytes, + function_start: int, + proc_layer_name: str) -> Tuple[int, int]: + """ + Performs static analysis on CDLocateCSystem to find the instructions that + reference CSystems as well as cCsystems + + Args: + function_bytes: the instruction bytes of CDLocateCSystem + function_start: the address of CDLocateCSystem + proc_layer_name: the name of the lsass.exe process layer + + Return: + Tuple of: + array_start: address of CSystem + count: the count from cCsystems or 16 + """ + found_count = False + array_start = None + count = None + + ## we only support 64bit disassembly analysis + md = capstone.Cs(capstone.CS_ARCH_X86, capstone.CS_MODE_64) + md.detail = True + + for inst in md.disasm(function_bytes, function_start): + # we should not reach debug traps + if inst.mnemonic == "int3": + break + + # cCsystems is referenced by a mov instruction + elif inst.mnemonic == "mov": + if found_count == False: + target_address = self._get_rip_relative_target(inst) + + # we do not want to fail just because the count is not memory + # 16 was the size on samples I tested, so I chose it as the default + if target_address: + count = int.from_bytes(self.context.layers[proc_layer_name].read(target_address, 4), "little") + else: + count = 16 + + found_count = True + + elif inst.mnemonic == "lea": + target_address = self._get_rip_relative_target(inst) + + if target_address: + array_start = target_address + + # we find the count before, so we can terminate the static analysis here + break + + return array_start, count + + def _find_csystems_with_export(self, proc_layer_name: str, + 
cryptdll_types: interfaces.context.ModuleInterface, + cryptdll_base: int, + _) -> Tuple[int, None, None]: + """ + Uses export table analysis to locate CDLocateCsystem + This function references CSystems and cCsystems + + Args: + proc_layer_name: The lsass.exe process layer name + cryptdll_types: The types from cryptdll binary analysis + cryptdll_base: Base address of cryptdll.dll inside of lsass.exe + _: unused in this source + Returns: + Tuple of: + array_start: Where CSystems begins + None: this method cannot find the expected initialization address + None: this method cannot find the expected decryption address + """ + if not has_capstone: + vollog.debug("capstone is not installed so cannot fall back to export table analysis.") + return None, None, None + + if not has_pefile: + vollog.debug("pefile is not installed so cannot fall back to export table analysis.") + return None, None, None + + vollog.debug("Unable to perform analysis using PDB symbols, falling back to export table analysis.") + + pe_table_name = intermed.IntermediateSymbolTable.create(self.context, + self.config_path, + "windows", + "pe", + class_types = pe.class_types) + + + cryptdll = self._get_pefile_obj(pe_table_name, proc_layer_name, cryptdll_base) + if not cryptdll or not hasattr(cryptdll, 'DIRECTORY_ENTRY_EXPORT'): + return None, None, None + + cryptdll.parse_data_directories(directories = [pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_EXPORT"]]) + + array_start = None + count = None + + # find the location of CDLocateCSystem and then perform static analysis + for export in cryptdll.DIRECTORY_ENTRY_EXPORT.symbols: + if export.name != b"CDLocateCSystem": + continue + + function_start = cryptdll_base + export.address + + try: + function_bytes = self.context.layers[proc_layer_name].read(function_start, 0x50) + except exceptions.InvalidAddressException: + break + + array_start, count = self._analyze_cdlocatecsystem(function_bytes, function_start, proc_layer_name) + + break + + if array_start: + 
array = cryptdll_types.object(object_type = "array", + offset = array_start, + subtype = cryptdll_types.get_type("_KERB_ECRYPT"), + count = count, + absolute = True) + + return array, None, None + + def _find_csystems_with_scanning(self, proc_layer_name: str, + cryptdll_types: interfaces.context.ModuleInterface, + cryptdll_base: int, + cryptdll_size: int) -> Tuple[int, None, None]: + """ + Performs scanning to find potential RC4 HMAC csystem instances + + This function may return several values as it cannot validate which is the active one + + Args: + proc_layer_name: the lsass.exe process layer name + cryptdll_types: the types from cryptdll binary analysis + cryptdll_base: base address of cryptdll.dll inside of lsass.exe + cryptdll_size: size of the VAD + Returns: + Tuple of: + array_start: Where CSystems begins + None: this method cannot find the expected initialization address + None: this method cannot find the expected decryption address + """ + + csystems = [] + + cryptdll_end = cryptdll_base + cryptdll_size + + proc_layer = self.context.layers[proc_layer_name] + + ecrypt_size = cryptdll_types.get_type("_KERB_ECRYPT").size + + # scan for potential instances of RC4 HMAC + # the signature is based on the type being 0x17 + # and the block size member being 1 in all test samples + for address in proc_layer.scan(self.context, + scanners.BytesScanner(b"\x17\x00\x00\x00\x01\x00\x00\x00"), + sections = [(cryptdll_base, cryptdll_size)]): + + # this occurs across page boundaries + if not proc_layer.is_valid(address, ecrypt_size): + continue + + kerb = cryptdll_types.object("_KERB_ECRYPT", + offset = address, + absolute = True) + + # ensure the Encrypt and Finish pointers are inside the VAD + # these are not manipulated in the attack + if (cryptdll_base < kerb.Encrypt < cryptdll_end) and \ + (cryptdll_base < kerb.Finish < cryptdll_end): + csystems.append(kerb) + + return csystems, None, None + + def _generator(self, procs): + """ + Finds instances of the RC4 HMAC 
CSystem structure + + Returns whether the instances are hooked as well as the function handler addresses + + Args: + procs: the process list filtered to lsass.exe instances + """ + + if not symbols.symbol_table_is_64bit(self.context, self.config["nt_symbols"]): + vollog.info("This plugin only supports 64bit Windows memory samples") + return + + lsass_proc, proc_layer_name, cryptdll_base, cryptdll_size = self._find_and_parse_cryptdll(procs) + + if not lsass_proc: + vollog.warn("Unable to find lsass.exe process in process list. This should never happen. Analysis cannot proceed.") + return + + if not cryptdll_base: + vollog.warn("Unable to find the location of cryptdll.dll inside of lsass.exe. Analysis cannot proceed.") + return + + # the custom type information from binary analysis + cryptdll_types = self._get_cryptdll_types(self.context, + self.config, + self.config_path, + proc_layer_name, + cryptdll_base) + + + # attempt to locate csystem and handlers in order of + # reliability and reporting accuracy + sources = [self._find_csystems_with_symbols, + self._find_csystems_with_export, + self._find_csystems_with_scanning] + + for source in sources: + csystems, rc4HmacInitialize, rc4HmacDecrypt = \ + source(proc_layer_name, + cryptdll_types, + cryptdll_base, + cryptdll_size) + + if csystems is not None: + break + + if csystems == None: + vollog.info("Unable to find CSystems inside of cryptdll.dll. 
Analysis cannot proceed.") + return + + found_target = False + + for csystem in csystems: + # filter for RC4 HMAC + if csystem.EncryptionType != 0x17: + continue + + # use the specific symbols if present, otherwise use the vad start and size + if rc4HmacInitialize and rc4HmacDecrypt: + skeleton_key_present = self._check_for_skeleton_key_symbols(csystem, rc4HmacInitialize, rc4HmacDecrypt) + else: + skeleton_key_present = self._check_for_skeleton_key_vad(csystem, cryptdll_base, cryptdll_size) + + yield 0, (lsass_proc.UniqueProcessId, "lsass.exe", skeleton_key_present, \ + format_hints.Hex(csystem.Initialize), format_hints.Hex(csystem.Decrypt)) + + def _lsass_proc_filter(self, proc): + """ + Used to filter to only lsass.exe processes + + There should only be one of these, but malware can/does make lsass.exe + named processes to blend in or uses lsass.exe as a process hollowing target + """ + process_name = utility.array_to_string(proc.ImageFileName) + + return process_name != "lsass.exe" + + def run(self): + return renderers.TreeGrid([("PID", int), ("Process", str), ("Skeleton Key Found", bool), ("rc4HmacInitialize", format_hints.Hex), ("rc4HmacDecrypt", format_hints.Hex)], + self._generator( + pslist.PsList.list_processes(context = self.context, + layer_name = self.config['primary'], + symbol_table = self.config['nt_symbols'], + filter_func = self._lsass_proc_filter))) diff --git a/volatility3/framework/symbols/windows/kerb_ecrypt.json b/volatility3/framework/symbols/windows/kerb_ecrypt.json new file mode 100644 index 0000000000..bcba19b76e --- /dev/null +++ b/volatility3/framework/symbols/windows/kerb_ecrypt.json @@ -0,0 +1,97 @@ +{ + "metadata": { + "producer": { + "version": "0.0.1", + "name": "acase-by-hand-from-mimikatz", + "datetime": "2021-03-01T14:30:00.000000" + }, + "format": "6.2.0" + }, + "symbols": { + }, + "enums": { + }, + "user_types": { + "_KERB_ECRYPT": { + "fields": { + "EncryptionType": { + "offset": 0, + "type": { + "kind": "base", + "name": 
"unsigned long"
+          }
+        },
+        "BlockSize": {
+          "offset": 4,
+          "type": {
+            "kind": "base",
+            "name": "unsigned long"
+          }
+        },
+        "KeySize": {
+          "offset": 12,
+          "type": {
+            "kind": "base",
+            "name": "unsigned long"
+          }
+        },
+        "Initialize": {
+          "offset": 40,
+          "type": {
+            "kind": "pointer",
+            "subtype": {
+              "kind": "base",
+              "name": "void"
+            }
+          }
+        },
+        "Encrypt": {
+          "offset": 48,
+          "type": {
+            "kind": "pointer",
+            "subtype": {
+              "kind": "base",
+              "name": "void"
+            }
+          }
+        },
+        "Decrypt": {
+          "offset": 56,
+          "type": {
+            "kind": "pointer",
+            "subtype": {
+              "kind": "base",
+              "name": "void"
+            }
+          }
+        },
+        "Finish": {
+          "offset": 64,
+          "type": {
+            "kind": "pointer",
+            "subtype": {
+              "kind": "base",
+              "name": "void"
+            }
+          }
+        }
+      },
+      "kind": "struct",
+      "size": 256
+    }
+  },
+  "base_types": {
+    "unsigned long": {
+      "endian": "little",
+      "kind": "int",
+      "signed": false,
+      "size": 4
+    },
+    "pointer": {
+      "kind": "int",
+      "size": 8,
+      "signed": false,
+      "endian": "little"
+    }
+  }
+}

From 0be5dad2bd563f5a65e875ad00e2c5e42f4cb605 Mon Sep 17 00:00:00 2001
From: atcuno
Date: Fri, 19 Mar 2021 11:51:23 -0500
Subject: [PATCH 104/294] Prevent pslist from backtracing on invalid process.
 Include a warning with the offset.
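The commit above describes a defensive pattern worth spelling out: catch the invalid-address error per process so that one unreadable entry is logged and skipped instead of aborting the whole listing with a backtrace. A minimal, self-contained sketch of that skip-and-log pattern follows — the exception class, `read_process` helper, and process dictionaries are stand-ins for illustration, not the real volatility3 types:

```python
import logging

vollog = logging.getLogger(__name__)


class InvalidAddressException(Exception):
    """Stand-in for volatility3.framework.exceptions.InvalidAddressException."""


def read_process(proc):
    # Hypothetical per-process work that may dereference unmapped memory
    if proc["offset"] == 0xBAD:
        raise InvalidAddressException("unmapped")
    return proc["name"]


def generate_rows(procs):
    for proc in procs:
        try:
            yield read_process(proc)
        except InvalidAddressException:
            # One unreadable process no longer aborts the whole listing;
            # we log the offset and move on, as the commit above does
            vollog.debug("Invalid process found at address: {:x}. Skipping".format(proc["offset"]))


procs = [{"offset": 0x100, "name": "lsass.exe"},
         {"offset": 0xBAD, "name": "corrupt"},
         {"offset": 0x200, "name": "smss.exe"}]
print(list(generate_rows(procs)))  # → ['lsass.exe', 'smss.exe']
```

Note that the `try` wraps the `yield` itself: an exception raised while producing any single row is contained inside the generator loop.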
--- .../framework/plugins/windows/pslist.py | 29 +++++++++++-------- 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index f9fb20aaf8..7d96571880 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -6,7 +6,7 @@ import logging from typing import Callable, Iterable, List, Type -from volatility3.framework import renderers, interfaces, layers, constants +from volatility3.framework import renderers, interfaces, layers, exceptions, constants from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility from volatility3.framework.renderers import format_hints @@ -187,17 +187,22 @@ def _generator(self): (_, _, offset, _, _) = list(memory.mapping(offset = proc.vol.offset, length = 0))[0] file_output = "Disabled" - if self.config['dump']: - file_handle = self.process_dump(self.context, self.config['nt_symbols'], pe_table_name, proc, self.open) - file_output = "Error outputting file" - if file_handle: - file_handle.close() - file_output = str(file_handle.preferred_filename) - - yield (0, (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, - proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, errors = 'replace'), - format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), - proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) + + try: + if self.config['dump']: + file_handle = self.process_dump(self.context, self.config['nt_symbols'], pe_table_name, proc, self.open) + file_output = "Error outputting file" + if file_handle: + file_handle.close() + file_output = str(file_handle.preferred_filename) + + yield (0, (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, + proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, errors = 
'replace'), + format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), + proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) + + except exceptions.InvalidAddressException: + vollog.debug("Invalid process found at address: {:x}. Skipping".format(proc.vol.offset)) def generate_timeline(self): for row in self._generator(): From 21b59e9458f04275a3519efd14d71a76df8b1cf2 Mon Sep 17 00:00:00 2001 From: atcuno Date: Fri, 19 Mar 2021 12:49:54 -0500 Subject: [PATCH 105/294] Catch exceptions triggered during testing --- .../plugins/windows/skeleton_key_check.py | 23 +++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index 88d2c9dd9d..945bd16461 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -145,7 +145,11 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, cryptdll_module = self.context.module(cryptdll_symbols, layer_name = proc_layer_name, offset = cryptdll_base) count_address = cryptdll_module.get_symbol("cCSystems").address - count = cryptdll_types.object(object_type = "unsigned long", offset = count_address) + + try: + count = cryptdll_types.object(object_type = "unsigned long", offset = count_address) + except exceptions.InvalidAddressException: + count = 16 array_start = cryptdll_module.get_symbol("CSystems").address + cryptdll_base @@ -258,13 +262,17 @@ def _find_csystems_with_symbols(self, proc_layer_name: str, return None, None, None array_start, count, rc4HmacInitialize, rc4HmacDecrypt = self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) - - array = cryptdll_types.object(object_type = "array", + + try: + array = cryptdll_types.object(object_type = "array", offset = array_start, subtype = 
cryptdll_types.get_type("_KERB_ECRYPT"), count = count, absolute = True) + except exceptions.InvalidAddressException: + return None, None, None + return array, rc4HmacInitialize, rc4HmacDecrypt def _get_rip_relative_target(self, inst) -> int: @@ -410,12 +418,16 @@ def _find_csystems_with_export(self, proc_layer_name: str, break if array_start: - array = cryptdll_types.object(object_type = "array", + try: + array = cryptdll_types.object(object_type = "array", offset = array_start, subtype = cryptdll_types.get_type("_KERB_ECRYPT"), count = count, absolute = True) + except exceptions.InvalidAddressException: + return None, None, None + return array, None, None def _find_csystems_with_scanning(self, proc_layer_name: str, @@ -525,6 +537,9 @@ def _generator(self, procs): found_target = False for csystem in csystems: + if not self.context.layers[proc_layer_name].is_valid(csystem.vol.offset, csystem.vol.size): + continue + # filter for RC4 HMAC if csystem.EncryptionType != 0x17: continue From 68a6fd252f686f386d60565281cf93655c2c98c6 Mon Sep 17 00:00:00 2001 From: atcuno Date: Fri, 19 Mar 2021 13:09:14 -0500 Subject: [PATCH 106/294] Fix typos and add more debug statements --- .../framework/plugins/windows/skeleton_key_check.py | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index 945bd16461..5976dfd479 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -82,6 +82,7 @@ def _get_pefile_obj(self, pe_table_name: str, layer_name: str, base_address: int pe_ret = pefile.PE(data = pe_data.getvalue(), fast_load = True) except exceptions.InvalidAddressException: + vollog.debug("Unable to reconstruct cryptdll.dll in memory") pe_ret = None return pe_ret @@ -109,7 +110,7 @@ def _check_for_skeleton_key_symbols(self, csystem: 
interfaces.objects.ObjectInte """ Uses the PDB information to specifically check if the csystem for RC4HMAC has an initialization pointer to rc4HmacInitialize and a decryption pointer - for rc4HmacDecrypt. + to rc4HmacDecrypt. Args: csystem: The RC4HMAC KERB_ECRYPT instance @@ -261,7 +262,8 @@ def _find_csystems_with_symbols(self, proc_layer_name: str, except exceptions.VolatilityException: return None, None, None - array_start, count, rc4HmacInitialize, rc4HmacDecrypt = self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) + array_start, count, rc4HmacInitialize, rc4HmacDecrypt = \ + self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) try: array = cryptdll_types.object(object_type = "array", @@ -271,6 +273,7 @@ def _find_csystems_with_symbols(self, proc_layer_name: str, absolute = True) except exceptions.InvalidAddressException: + vollog.debug("The CSystem array is not present in memory. Stopping PDB symbols based analysis.") return None, None, None return array, rc4HmacInitialize, rc4HmacDecrypt @@ -279,14 +282,14 @@ def _get_rip_relative_target(self, inst) -> int: """ Returns the target address of a RIP-relative instruction. - These instructions contain the offset of a target addresss + These instructions contain the offset of a target address relative to the current instruction pointer. Args: inst: A capstone instruction instance Returns: - None or the target address of the function + None or the target address of the instruction """ try: opnd = inst.operands[1] @@ -411,6 +414,7 @@ def _find_csystems_with_export(self, proc_layer_name: str, try: function_bytes = self.context.layers[proc_layer_name].read(function_start, 0x50) except exceptions.InvalidAddressException: + vollog.debug("The CDLocateCSystem function is not present in the lsass address space. 
Stopping export based analysis.") break array_start, count = self._analyze_cdlocatecsystem(function_bytes, function_start, proc_layer_name) @@ -426,6 +430,7 @@ def _find_csystems_with_export(self, proc_layer_name: str, absolute = True) except exceptions.InvalidAddressException: + vollog.debug("The CSystem array is not present in memory. Stopping export based analysis.") return None, None, None return array, None, None From 2402a51c609fd851702f4d4d8ffee456bdf5b7ec Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 7 Jan 2021 23:01:04 +0000 Subject: [PATCH 107/294] Symbols: Make the symbol shift optional The symbol_shift isn't quite as nice as it could be, because we use None to demark an unset state, which is different than a value of 0 (because unset will trip linux to try to identify, whereas 0 will not). Every where we use the value, we get it from the dictionary and use 0 if it's not found (essentially forcing a default), but ideally, the default would be set. As such, it's safe to set optional to true (and thus not require it for configuration files), but it's not ideal that the linux symbol finder can't determine whether to run or not without knowing whether the value's been intentionally set... 
---
 volatility3/framework/interfaces/symbols.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/volatility3/framework/interfaces/symbols.py b/volatility3/framework/interfaces/symbols.py
index 8f427d2330..ba43b95a2a 100644
--- a/volatility3/framework/interfaces/symbols.py
+++ b/volatility3/framework/interfaces/symbols.py
@@ -303,7 +303,7 @@ def build_configuration(self) -> 'configuration.HierarchicalDict':
     @classmethod
     def get_requirements(cls) -> List[RequirementInterface]:
         return super().get_requirements() + [
-            requirements.IntRequirement(name = 'symbol_shift', description = 'Symbol Shift', optional = False),
+            requirements.IntRequirement(name = 'symbol_shift', description = 'Symbol Shift', optional = True),
             requirements.IntRequirement(
                 name = 'symbol_mask', description = 'Address mask for symbols', optional = True, default = 0),
         ]

From 792fb7080c9ce0230e6638f590bcd0727a92a09d Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Thu, 7 Jan 2021 23:23:40 +0000
Subject: [PATCH 108/294] Symbols: Set symbol_shift default rather than None

Since all the checks for symbol_shift use essentially
"if not config['symbol_shift']" it doesn't matter whether 0 or None is
returned.

I'd like to test this on an ASLR image, but I think it should be fine
and I'd feel much happier about everything if we could give it a
numeric default.
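The argument in the commit message above is that callers test the value with a falsy check, which cannot distinguish an explicit 0 from None, so a numeric default of 0 is safe. A quick illustration of that equivalence — plain Python, not volatility3 code, with a hypothetical helper name:

```python
def needs_shift_detection(config):
    # The "if not config['symbol_shift']" pattern described in the commit above
    return not config.get('symbol_shift')

# An unset value (None) and an explicit 0 behave identically under the check,
# which is why changing the default from None to 0 does not alter behaviour here
assert needs_shift_detection({'symbol_shift': None}) is True
assert needs_shift_detection({'symbol_shift': 0}) is True
assert needs_shift_detection({}) is True

# Only a genuinely non-zero shift (e.g. a KASLR slide) flips the result
assert needs_shift_detection({'symbol_shift': 0xffff800000000000}) is False
```

The remaining wrinkle the commit notes is real, though: once 0 is the default, code can no longer tell "the user set no shift" apart from "no value was ever configured".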
--- volatility3/framework/interfaces/symbols.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/interfaces/symbols.py b/volatility3/framework/interfaces/symbols.py index ba43b95a2a..c1185bed9c 100644 --- a/volatility3/framework/interfaces/symbols.py +++ b/volatility3/framework/interfaces/symbols.py @@ -303,7 +303,8 @@ def build_configuration(self) -> 'configuration.HierarchicalDict': @classmethod def get_requirements(cls) -> List[RequirementInterface]: return super().get_requirements() + [ - requirements.IntRequirement(name = 'symbol_shift', description = 'Symbol Shift', optional = True), + requirements.IntRequirement( + name = 'symbol_shift', description = 'Symbol Shift', optional = True, default = 0), requirements.IntRequirement( name = 'symbol_mask', description = 'Address mask for symbols', optional = True, default = 0), ] From f3c3e2c4f98bda9a37ca55351b2f588c7cbed177 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 29 Mar 2021 17:50:35 +0100 Subject: [PATCH 109/294] Windows: Don't try to delete URLs --- volatility3/framework/symbols/windows/pdbconv.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 5db2e91375..a9ce875510 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -9,6 +9,7 @@ import logging import lzma import os +import urllib from bisect import bisect from typing import Tuple, Dict, Any, Optional, Union, List from urllib import request, error, parse @@ -998,7 +999,8 @@ def __call__(self, progress: Union[int, float], description: str = None): filename = None if args.guid is not None and args.pattern is not None: filename = PdbRetreiver().retreive_pdb(guid = args.guid, file_name = args.pattern, progress_callback = pg_cb) - delfile = True + if urllib.parse.urlparse(filename, 'file').scheme == 'file': + delfile = True 
elif args.file: filename = args.file else: From f4dee3c5f01019306abdd30e321eadf03e9b4577 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 30 Mar 2021 15:27:09 -0500 Subject: [PATCH 110/294] sync with fa1c03d of jxwegner/volatility3 --- volatility3/framework/layers/crash.py | 60 ++++++++++++------- .../framework/plugins/windows/crashinfo.py | 33 ++++++++++ 2 files changed, 70 insertions(+), 23 deletions(-) create mode 100644 volatility3/framework/plugins/windows/crashinfo.py diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index 8909eceb86..96c66d4831 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -1,14 +1,15 @@ -# This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 + + +# This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # - import logging import struct -from typing import Tuple, Optional +from typing import Tuple, Optional, Iterable -from volatility3.framework import constants, exceptions, interfaces -from volatility3.framework.layers import segmented -from volatility3.framework.symbols import intermed +from volatility.framework import constants, exceptions, interfaces +from volatility.framework.layers import segmented +from volatility.framework.symbols import intermed vollog = logging.getLogger(__name__) @@ -19,7 +20,6 @@ class WindowsCrashDumpFormatException(exceptions.LayerException): class WindowsCrashDump32Layer(segmented.SegmentedLayer): """A Windows crash format TranslationLayer. - This TranslationLayer supports Microsoft complete memory dump files. It currently does not support kernel or small memory dump files. 
""" @@ -42,7 +42,11 @@ def __init__(self, context: interfaces.context.ContextInterface, config_path: st self._context = context self._config_path = config_path self._page_size = 0x1000 - self._base_layer = self.config["base_layer"] + try: + self._base_layer = self.config["base_layer"] + except KeyError: + self._base_layer = 'base_layer' + self.config['base_layer']='base_layer' # Create a custom SymbolSpace self._crash_table_name = intermed.IntermediateSymbolTable.create(context, self._config_path, 'windows', @@ -71,30 +75,34 @@ def __init__(self, context: interfaces.context.ContextInterface, config_path: st def _load_segments(self) -> None: """Loads up the segments from the meta_layer.""" - header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, - offset = 0, - layer_name = self._base_layer) + segments = [] offset = self.headerpages + header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, + offset = 0, + layer_name = self._base_layer) + offset = self.headerpages header.PhysicalMemoryBlockBuffer.Run.count = header.PhysicalMemoryBlockBuffer.NumberOfRuns for x in header.PhysicalMemoryBlockBuffer.Run: segments.append((x.BasePage * 0x1000, offset * 0x1000, x.PageCount * 0x1000, x.PageCount * 0x1000)) - # print("Segments {:x} {:x} {:x}".format(x.BasePage * 0x1000, - # offset * 0x1000, - # x.PageCount * 0x1000)) + # print("Segments {:x} {:x} {:x}".format(x.BasePage * 0x1000, + # offset * 0x1000, + # x.PageCount * 0x1000)) offset += x.PageCount + if len(segments) == 0: raise WindowsCrashDumpFormatException(self.name, "No Crash segments defined in {}".format(self._base_layer)) - self._segments = segments + + @classmethod def check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: int = 0) -> Tuple[int, int]: # Verify the Window's crash dump file magic - + try: header_data = base_layer.read(offset, cls._magic_struct.size) except exceptions.InvalidAddressException: @@ -114,7 
+122,6 @@ def check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: class WindowsCrashDump64Layer(WindowsCrashDump32Layer): """A Windows crash format TranslationLayer. - This TranslationLayer supports Microsoft complete memory dump files. It currently does not support kernel or small memory dump files. """ @@ -133,7 +140,6 @@ def _load_segments(self) -> None: summary_header = self.context.object(self._crash_table_name + constants.BANG + "_SUMMARY_DUMP64", offset = 0x2000, layer_name = self._base_layer) - if self.dump_type == 0x1: header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, offset = 0, @@ -146,16 +152,19 @@ def _load_segments(self) -> None: offset += x.PageCount elif self.dump_type == 0x05: - summary_header.BufferLong.count = (summary_header.BitmapSize + 31) // 32 + #Add 0x2000 as some bitmaps are too short by one offset + summary_header.BufferLong.count = (summary_header.BitmapSize + 31) // 32 + 0x2000 previous_bit = 0 start_position = 0 # We cast as an int because we don't want to carry the context around with us for infinite loop reasons mapped_offset = int(summary_header.HeaderSize) current_word = None - for bit_position in range(len(summary_header.BufferLong) * 32): + bitmap_len=len(summary_header.BufferLong) * 32 + for bit_position in range(bitmap_len): if (bit_position % 32) == 0: current_word = summary_header.BufferLong[bit_position // 32] current_bit = (current_word >> (bit_position % 32)) & 1 + if current_bit != previous_bit: if previous_bit == 0: # Start @@ -166,11 +175,15 @@ def _load_segments(self) -> None: segments.append((start_position * 0x1000, mapped_offset, length, length)) mapped_offset += length - # Finish it off - if bit_position == (len(summary_header.BufferLong) * 32) - 1 and current_bit == 1: + + # Find the last segment in a file which will be at the end or two pages from the end. 
We multiply by 32 as we want to offset bby words rather than bits + if (bit_position == bitmap_len - 1 or bit_position == bitmap_len - 1 -32*0x2000) and current_bit == 1: length = (bit_position - start_position) * 0x1000 segments.append((start_position * 0x1000, mapped_offset, length, length)) mapped_offset += length + break + + previous_bit = current_bit else: @@ -179,7 +192,7 @@ def _load_segments(self) -> None: if len(segments) == 0: raise WindowsCrashDumpFormatException(self.name, "No Crash segments defined in {}".format(self._base_layer)) - + self._segments = segments @@ -200,3 +213,4 @@ def stack(cls, except WindowsCrashDumpFormatException: pass return None + diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py new file mode 100644 index 0000000000..22a8e16c00 --- /dev/null +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -0,0 +1,33 @@ + +# This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 +# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 +# + +import logging +from volatility.framework import interfaces, renderers +from volatility.framework.configuration import requirements +from volatility.framework.layers import crash +from volatility.framework import exceptions + +vollog = logging.getLogger(__name__) + +class Crashinfo(interfaces.plugins.PluginInterface): + _required_framework_version = (2, 0, 0) + + @classmethod + def get_requirements(cls): + return [ + requirements.TranslationLayerRequirement(name = 'primary', + description = 'Memory layer for the kernel', + architectures = ["Intel32", "Intel64"]), + ] + + def _generator(self, layer): + for offset, length, mapped_offset in layer.mapping(0x0, layer.maximum_address, ignore_errors = True): + yield(0,(offset,length,mapped_offset)) + + def run(self): + + layer = self._context.layers[self.config['primary.memory_layer']] + + return 
renderers.TreeGrid([("StartAddress", int),("FileOffset", int),("Length", int)],self._generator(layer)) \ No newline at end of file From f5e5fd00609e20950df80801ae17f25b44103631 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 30 Mar 2021 17:25:32 -0500 Subject: [PATCH 111/294] apply fixes and improvements for crash layer (see description) 1) Remove empty newlines before the license 2) Remove unused imports 3) Add support for 32-bit Bitmap crash dumps 4) Move _SUMMARY_DUMP to crash_common.json and fix the swapped Pages and BitmapSize offsets 5) Fix other errors in crash64.json (swapped SystemTime vs SystemUpTime, PsActiveProcessHead should be unsigned long long, several incorrect offsets for other members 6) Switched to new volatility3 namespace 7) Reverted required_framework_version to (1, 0, 0) 8) Fixed crashinfo plugin from unpacking the wrong number of values from layer.mapping(). Actually, the plugin no longer displays runs - it shows metadata instead. 9) Address Ikelos' comments in PR #452 --- volatility3/framework/layers/crash.py | 175 +++++++++--------- .../framework/plugins/windows/crashinfo.py | 50 +++-- .../framework/symbols/windows/crash64.json | 90 +-------- .../symbols/windows/crash_common.json | 138 ++++++++++++++ 4 files changed, 266 insertions(+), 187 deletions(-) create mode 100644 volatility3/framework/symbols/windows/crash_common.json diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index 96c66d4831..7f6fe32db9 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -1,15 +1,13 @@ - - # This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import logging import struct -from typing import Tuple, Optional, Iterable +from typing import Tuple, Optional -from volatility.framework import constants, exceptions, interfaces -from 
volatility.framework.layers import segmented -from volatility.framework.symbols import intermed +from volatility3.framework import constants, exceptions, interfaces +from volatility3.framework.layers import segmented +from volatility3.framework.symbols import intermed vollog = logging.getLogger(__name__) @@ -30,7 +28,7 @@ class WindowsCrashDump32Layer(segmented.SegmentedLayer): VALIDDUMP = 0x504d5544 crashdump_json = 'crash' - supported_dumptypes = [0x01] + supported_dumptypes = [0x01, 0x05] # we need 0x5 for 32-bit bitmaps dump_header_name = '_DUMP_HEADER' _magic_struct = struct.Struct(' None: - """Loads up the segments from the meta_layer.""" - - - segments = [] - - offset = self.headerpages - header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, - offset = 0, - layer_name = self._base_layer) - offset = self.headerpages - header.PhysicalMemoryBlockBuffer.Run.count = header.PhysicalMemoryBlockBuffer.NumberOfRuns - for x in header.PhysicalMemoryBlockBuffer.Run: - segments.append((x.BasePage * 0x1000, offset * 0x1000, x.PageCount * 0x1000, x.PageCount * 0x1000)) - # print("Segments {:x} {:x} {:x}".format(x.BasePage * 0x1000, - # offset * 0x1000, - # x.PageCount * 0x1000)) - offset += x.PageCount - - - if len(segments) == 0: - raise WindowsCrashDumpFormatException(self.name, "No Crash segments defined in {}".format(self._base_layer)) - self._segments = segments - - - - @classmethod - def check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: int = 0) -> Tuple[int, int]: - # Verify the Window's crash dump file magic - - try: - header_data = base_layer.read(offset, cls._magic_struct.size) - except exceptions.InvalidAddressException: - raise WindowsCrashDumpFormatException(base_layer.name, - "Crashdump header not found at offset {}".format(offset)) - (signature, validdump) = cls._magic_struct.unpack(header_data) - - if signature != cls.SIGNATURE: - raise WindowsCrashDumpFormatException( - base_layer.name, "Bad 
signature 0x{:x} at file offset 0x{:x}".format(signature, offset)) - if validdump != cls.VALIDDUMP: - raise WindowsCrashDumpFormatException(base_layer.name, - "Invalid dump 0x{:x} at file offset 0x{:x}".format(validdump, offset)) - - return signature, validdump - - -class WindowsCrashDump64Layer(WindowsCrashDump32Layer): - """A Windows crash format TranslationLayer. - This TranslationLayer supports Microsoft complete memory dump files. - It currently does not support kernel or small memory dump files. - """ - - VALIDDUMP = 0x34365544 - crashdump_json = 'crash64' - dump_header_name = '_DUMP_HEADER64' - supported_dumptypes = [0x1, 0x05] - headerpages = 2 + def get_header(self) -> interfaces.objects.ObjectInterface: + return self._header def _load_segments(self) -> None: """Loads up the segments from the meta_layer.""" segments = [] - summary_header = self.context.object(self._crash_table_name + constants.BANG + "_SUMMARY_DUMP64", - offset = 0x2000, - layer_name = self._base_layer) + # instead of hard coding 0x2000, use 0x1000 * self.headerpages so this works for + # both 32- and 64-bit dumps + summary_header = self.context.object(self._crash_common_table_name + constants.BANG + "_SUMMARY_DUMP", + offset=0x1000 * self.headerpages, + layer_name=self._base_layer) if self.dump_type == 0x1: header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, - offset = 0, - layer_name = self._base_layer) + offset=0, + layer_name=self._base_layer) offset = self.headerpages header.PhysicalMemoryBlockBuffer.Run.count = header.PhysicalMemoryBlockBuffer.NumberOfRuns @@ -152,14 +100,28 @@ def _load_segments(self) -> None: offset += x.PageCount elif self.dump_type == 0x05: - #Add 0x2000 as some bitmaps are too short by one offset - summary_header.BufferLong.count = (summary_header.BitmapSize + 31) // 32 + 0x2000 + + ## NOTE: In the original crash64.json, _SUMMARY_DUMP.Pages was offset 48 and + ## _SUMMARY_DUMP.BitmapSize was offset 40. 
From the volatility2 vtypes, that is + ## backwards! The correct offsets should be: + ## + ## 'Pages' : [ 0x28, ['unsigned long long']], -> This is 40 decimal + ## 'BitmapSize': [0x30, ['unsigned long long']], -> This is 48 decimal + ## + ## Most likely, some of the code that follows needs to be adjusted with those + ## newly refactored offsets in mind. I commented out the "+ 0x2000" below, and + ## this still works on Win10x64_17763_crash.dmp, but it fails on + ## the Win10x86_17763_crash.dmp version. + + # Add 0x2000 as some bitmaps are too short by one offset + summary_header.BufferLong.count = (summary_header.BitmapSize + 31) // 32 # + 0x2000 previous_bit = 0 start_position = 0 # We cast as an int because we don't want to carry the context around with us for infinite loop reasons mapped_offset = int(summary_header.HeaderSize) current_word = None - bitmap_len=len(summary_header.BufferLong) * 32 + bitmap_len = len(summary_header.BufferLong) * 32 + for bit_position in range(bitmap_len): if (bit_position % 32) == 0: current_word = summary_header.BufferLong[bit_position // 32] @@ -175,16 +137,12 @@ def _load_segments(self) -> None: segments.append((start_position * 0x1000, mapped_offset, length, length)) mapped_offset += length - # Find the last segment in a file which will be at the end or two pages from the end. 
We multiply by 32 as we want to offset bby words rather than bits - if (bit_position == bitmap_len - 1 or bit_position == bitmap_len - 1 -32*0x2000) and current_bit == 1: + if (bit_position == bitmap_len - 1 or bit_position == bitmap_len - 1 - 32 * 0x2000) and current_bit == 1: length = (bit_position - start_position) * 0x1000 segments.append((start_position * 0x1000, mapped_offset, length, length)) mapped_offset += length break - - - previous_bit = current_bit else: vollog.log(constants.LOGLEVEL_VVVV, "unsupported dump format 0x{:x}".format(self.dump_type)) @@ -192,9 +150,42 @@ def _load_segments(self) -> None: if len(segments) == 0: raise WindowsCrashDumpFormatException(self.name, "No Crash segments defined in {}".format(self._base_layer)) - + self._segments = segments + @classmethod + def check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: int = 0) -> Tuple[int, int]: + # Verify the Window's crash dump file magic + + try: + header_data = base_layer.read(offset, cls._magic_struct.size) + except exceptions.InvalidAddressException: + raise WindowsCrashDumpFormatException(base_layer.name, + "Crashdump header not found at offset {}".format(offset)) + (signature, validdump) = cls._magic_struct.unpack(header_data) + + if signature != cls.SIGNATURE: + raise WindowsCrashDumpFormatException( + base_layer.name, "Bad signature 0x{:x} at file offset 0x{:x}".format(signature, offset)) + if validdump != cls.VALIDDUMP: + raise WindowsCrashDumpFormatException(base_layer.name, + "Invalid dump 0x{:x} at file offset 0x{:x}".format(validdump, offset)) + + return signature, validdump + + +class WindowsCrashDump64Layer(WindowsCrashDump32Layer): + """A Windows crash format TranslationLayer. + This TranslationLayer supports Microsoft complete memory dump files. + It currently does not support kernel or small memory dump files. 
+ """ + + VALIDDUMP = 0x34365544 + crashdump_json = 'crash64' + dump_header_name = '_DUMP_HEADER64' + supported_dumptypes = [0x1, 0x05] + headerpages = 2 + class WindowsCrashDumpStacker(interfaces.automagic.StackerLayerInterface): stack_order = 11 diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 22a8e16c00..70f4239f5e 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -1,18 +1,18 @@ - # This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import logging -from volatility.framework import interfaces, renderers -from volatility.framework.configuration import requirements -from volatility.framework.layers import crash -from volatility.framework import exceptions +import datetime +from volatility3.framework import interfaces, renderers +from volatility3.framework.configuration import requirements +from volatility3.framework.renderers import format_hints, conversion +from volatility3.framework.objects import utility vollog = logging.getLogger(__name__) class Crashinfo(interfaces.plugins.PluginInterface): - _required_framework_version = (2, 0, 0) + _required_framework_version = (1, 0, 0) @classmethod def get_requirements(cls): @@ -23,11 +23,39 @@ def get_requirements(cls): ] def _generator(self, layer): - for offset, length, mapped_offset in layer.mapping(0x0, layer.maximum_address, ignore_errors = True): - yield(0,(offset,length,mapped_offset)) + header = layer.get_header() + uptime = datetime.timedelta(microseconds=int(header.SystemUpTime) / 10) - def run(self): + yield(0, (utility.array_to_string(header.Signature), + header.MajorVersion, + header.MinorVersion, + format_hints.Hex(header.DirectoryTableBase), + format_hints.Hex(header.PfnDataBase), + format_hints.Hex(header.PsLoadedModuleList), + 
format_hints.Hex(header.PsActiveProcessHead), + header.MachineImageType, + header.NumberProcessors, + format_hints.Hex(header.KdDebuggerDataBlock), + header.DumpType, + str(uptime), + utility.array_to_string(header.Comment), + conversion.wintime_to_datetime(header.SystemTime), + )) + def run(self): layer = self._context.layers[self.config['primary.memory_layer']] - - return renderers.TreeGrid([("StartAddress", int),("FileOffset", int),("Length", int)],self._generator(layer)) \ No newline at end of file + return renderers.TreeGrid([("Signature", str), + ("MajorVersion", int), + ("MinorVersion", int), + ("DirectoryTableBase", format_hints.Hex), + ("PfnDataBase", format_hints.Hex), + ("PsLoadedModuleList", format_hints.Hex), + ("PsActiveProcessHead", format_hints.Hex), + ("MachineImageType", int), + ("NumberProcessors", int), + ("KdDebuggerDataBlock", format_hints.Hex), + ("DumpType", int), + ("SystemUpTime", str), + ("Comment", str), + ("SystemTime", datetime.datetime), + ], self._generator(layer)) \ No newline at end of file diff --git a/volatility3/framework/symbols/windows/crash64.json b/volatility3/framework/symbols/windows/crash64.json index 3a71346340..6106575e26 100644 --- a/volatility3/framework/symbols/windows/crash64.json +++ b/volatility3/framework/symbols/windows/crash64.json @@ -65,25 +65,25 @@ "offset": 40, "type": { "kind": "base", - "name": "unsigned long" + "name": "unsigned long long" } }, "MachineImageType": { - "offset": 44, + "offset": 48, "type": { "kind": "base", "name": "unsigned long" } }, "NumberProcessors": { - "offset": 48, + "offset": 52, "type": { "kind": "base", "name": "unsigned long" } }, "BugCheckCode": { - "offset": 60, + "offset": 56, "type": { "kind": "base", "name": "unsigned long" @@ -157,7 +157,7 @@ "name": "unsigned long long" } }, - "SystemUpTime": { + "SystemTime": { "offset": 4008, "type": { "kind": "base", @@ -175,7 +175,7 @@ } } }, - "SystemTime": { + "SystemUpTime": { "offset": 4144, "type": { "kind": "base", @@ -242,84 
+242,6 @@ "kind": "struct", "size": 8192 }, - "_SUMMARY_DUMP64": { - "fields": { - "Signature": { - "offset": 0, - "type": { - "count": 4, - "kind": "array", - "subtype": { - "kind": "base", - "name": "unsigned char" - } - } - }, - "ValidDump": { - "offset": 4, - "type": { - "count": 4, - "kind": "array", - "subtype": { - "kind": "base", - "name": "unsigned char" - } - } - }, - "DumpOptions": { - "offset": 8, - "type": { - "kind": "base", - "name": "unsigned long" - } - }, - "HeaderSize": { - "offset": 32, - "type": { - "kind": "base", - "name": "unsigned long long" - } - }, - "BitmapSize": { - "offset": 40, - "type": { - "kind": "base", - "name": "unsigned long long" - } - }, - "Pages": { - "offset": 48, - "type": { - "kind": "base", - "name": "unsigned long long" - } - }, - "BufferLong": { - "offset": 56, - "type": { - "kind": "array", - "count": 1, - "subtype": { - "kind": "base", - "name": "unsigned long" - } - } - }, - "BufferChar": { - "offset": 56, - "type": { - "kind": "array", - "count": 1, - "subtype": { - "kind": "base", - "name": "unsigned char" - } - } - } - }, - "kind": "struct", - "size": 56 - }, "_EXCEPTION_RECORD64": { "fields": { "ExceptionCode": { diff --git a/volatility3/framework/symbols/windows/crash_common.json b/volatility3/framework/symbols/windows/crash_common.json new file mode 100644 index 0000000000..fe29b20a82 --- /dev/null +++ b/volatility3/framework/symbols/windows/crash_common.json @@ -0,0 +1,138 @@ +{ + "symbols": { + }, + "user_types": { + "_SUMMARY_DUMP": { + "fields": { + "Signature": { + "offset": 0, + "type": { + "count": 4, + "kind": "array", + "subtype": { + "kind": "base", + "name": "unsigned char" + } + } + }, + "ValidDump": { + "offset": 4, + "type": { + "count": 4, + "kind": "array", + "subtype": { + "kind": "base", + "name": "unsigned char" + } + } + }, + "DumpOptions": { + "offset": 8, + "type": { + "kind": "base", + "name": "unsigned long" + } + }, + "HeaderSize": { + "offset": 32, + "type": { + "kind": "base", + 
"name": "unsigned long long" + } + }, + "Pages": { + "offset": 40, + "type": { + "kind": "base", + "name": "unsigned long long" + } + }, + "BitmapSize": { + "offset": 48, + "type": { + "kind": "base", + "name": "unsigned long long" + } + }, + "BufferLong": { + "offset": 56, + "type": { + "kind": "array", + "count": 1, + "subtype": { + "kind": "base", + "name": "unsigned long" + } + } + }, + "BufferChar": { + "offset": 56, + "type": { + "kind": "array", + "count": 1, + "subtype": { + "kind": "base", + "name": "unsigned char" + } + } + } + }, + "kind": "struct", + "size": 56 + } + }, + "enums": { + }, + "base_types": { + "unsigned char": { + "endian": "little", + "kind": "char", + "signed": false, + "size": 1 + }, + "unsigned short": { + "endian": "little", + "kind": "int", + "signed": false, + "size": 2 + }, + "long": { + "endian": "little", + "kind": "int", + "signed": true, + "size": 4 + }, + "char": { + "endian": "little", + "kind": "char", + "signed": true, + "size": 1 + }, + "unsigned long": { + "endian": "little", + "kind": "int", + "signed": false, + "size": 4 + }, + "long long": { + "endian": "little", + "kind": "int", + "signed": true, + "size": 8 + }, + "unsigned long long": { + "endian": "little", + "kind": "int", + "signed": false, + "size": 8 + } + }, + "metadata": { + "producer": { + "version": "0.0.1", + "name": "ikelos-by-hand", + "datetime": "2020-09-10T00:20:00" + }, + "format": "6.2.0" + } +} From 3ebbddbd8dce0671e84bc8bddaa812b91e94f2f3 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 30 Mar 2021 21:12:32 -0500 Subject: [PATCH 112/294] don't save objects in self they contain a reference to the context, so if we ever pickle that, then it'll cause a massive recursion loop and fail --- volatility3/framework/layers/crash.py | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index 7f6fe32db9..895b4cedfa 100644 --- 
a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -58,25 +58,25 @@ def __init__(self, context: interfaces.context.ContextInterface, config_path: st self.check_header(hdr_layer, hdr_offset) # Need to create a header object - self._header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, - offset = hdr_offset, - layer_name = self._base_layer) + header = self.get_header() # Extract the DTB - self.dtb = int(self._header.DirectoryTableBase) + self.dtb = int(header.DirectoryTableBase) - self.dump_type = int(self._header.DumpType) + self.dump_type = int(header.DumpType) # Verify that it is a supported format - if self._header.DumpType not in self.supported_dumptypes: - vollog.log(constants.LOGLEVEL_VVVV, "unsupported dump format 0x{:x}".format(self._header.DumpType)) - raise WindowsCrashDumpFormatException(name, "unsupported dump format 0x{:x}".format(self._header.DumpType)) + if header.DumpType not in self.supported_dumptypes: + vollog.log(constants.LOGLEVEL_VVVV, "unsupported dump format 0x{:x}".format(header.DumpType)) + raise WindowsCrashDumpFormatException(name, "unsupported dump format 0x{:x}".format(header.DumpType)) # Then call the super, which will call load_segments (which needs the base_layer before it'll work) super().__init__(context, config_path, name) def get_header(self) -> interfaces.objects.ObjectInterface: - return self._header + return self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, + offset=0, + layer_name=self._base_layer) def _load_segments(self) -> None: """Loads up the segments from the meta_layer.""" From fb54a2cade948d791e2705f39bc99d672ff73e57 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 30 Mar 2021 21:23:46 -0500 Subject: [PATCH 113/294] report segments in the crash layer with LOGLEVEL_VVVV --- volatility3/framework/layers/crash.py | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/volatility3/framework/layers/crash.py 
b/volatility3/framework/layers/crash.py index 895b4cedfa..8d6b479fc6 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -150,6 +150,15 @@ def _load_segments(self) -> None: if len(segments) == 0: raise WindowsCrashDumpFormatException(self.name, "No Crash segments defined in {}".format(self._base_layer)) + else: + # report the segments for debugging. this is valuable for dev/troubleshooting but + # not important enough for a dedicated plugin. + for idx, (start_position, mapped_offset, length, _) in enumerate(segments): + vollog.log(constants.LOGLEVEL_VVVV, + "Segment {}: Position {:#x} Offset {:#x} Length {:#x}".format(idx, + start_position, + mapped_offset, + length)) self._segments = segments From 00db33e0feef43e8300c785cc0b0d35e38e753fe Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Wed, 31 Mar 2021 09:52:12 -0500 Subject: [PATCH 114/294] print human readable dump type in crashinfo, along with bitmap header size, bitmap size, and page count --- volatility3/framework/layers/crash.py | 9 ++++--- .../framework/plugins/windows/crashinfo.py | 26 +++++++++++++++++-- 2 files changed, 30 insertions(+), 5 deletions(-) diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index 8d6b479fc6..1a15ffde47 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -78,6 +78,11 @@ def get_header(self) -> interfaces.objects.ObjectInterface: offset=0, layer_name=self._base_layer) + def get_summary_header(self) -> interfaces.objects.ObjectInterface: + return self.context.object(self._crash_common_table_name + constants.BANG + "_SUMMARY_DUMP", + offset=0x1000 * self.headerpages, + layer_name=self._base_layer) + def _load_segments(self) -> None: """Loads up the segments from the meta_layer.""" @@ -85,9 +90,7 @@ def _load_segments(self) -> None: # instead of hard coding 0x2000, use 0x1000 * self.headerpages so this works for # both 32- and 64-bit dumps - summary_header 
= self.context.object(self._crash_common_table_name + constants.BANG + "_SUMMARY_DUMP", - offset=0x1000 * self.headerpages, - layer_name=self._base_layer) + summary_header = self.get_summary_header() if self.dump_type == 0x1: header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, offset=0, diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 70f4239f5e..5ec123b442 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -26,6 +26,22 @@ def _generator(self, layer): header = layer.get_header() uptime = datetime.timedelta(microseconds=int(header.SystemUpTime) / 10) + if header.DumpType == 0x1: + dump_type = "Full Dump (0x1)" + elif header.DumpType == 0x5: + dump_type = "Bitmap Dump (0x5)" + else: + # this should never happen since the crash layer only accepts 0x1 and 0x5 + dump_type = "Unknown/Unsupported ({:#x})".format(header.DumpType) + + if header.DumpType == 0x5: + summary_header = layer.get_summary_header() + bitmap_header_size = format_hints.Hex(summary_header.HeaderSize) + bitmap_size = format_hints.Hex(summary_header.BitmapSize) + bitmap_pages = format_hints.Hex(summary_header.Pages) + else: + bitmap_header_size = bitmap_size = bitmap_pages = renderers.NotApplicableValue() + yield(0, (utility.array_to_string(header.Signature), header.MajorVersion, header.MinorVersion, @@ -36,10 +52,13 @@ def _generator(self, layer): header.MachineImageType, header.NumberProcessors, format_hints.Hex(header.KdDebuggerDataBlock), - header.DumpType, + dump_type, str(uptime), utility.array_to_string(header.Comment), conversion.wintime_to_datetime(header.SystemTime), + bitmap_header_size, + bitmap_size, + bitmap_pages, )) def run(self): @@ -54,8 +73,11 @@ def run(self): ("MachineImageType", int), ("NumberProcessors", int), ("KdDebuggerDataBlock", format_hints.Hex), - ("DumpType", int), + ("DumpType", str), 
("SystemUpTime", str), ("Comment", str), ("SystemTime", datetime.datetime), + ("BitmapHeaderSize", format_hints.Hex), + ("BitmapSize", format_hints.Hex), + ("BitmapPages", format_hints.Hex), ], self._generator(layer)) \ No newline at end of file From 087b1721482935c3853868cbffab3a62690e0f18 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 3 Apr 2021 13:07:43 +0100 Subject: [PATCH 115/294] Automagic: Add secondary 64-bit self-referential value --- volatility3/framework/automagic/windows.py | 38 ++++++++++++---------- 1 file changed, 20 insertions(+), 18 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 198e64e993..1a5e6a235d 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -46,7 +46,8 @@ class parameters. back to that page's offset. """ - def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: int, mask: int) -> None: + def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: List[int], + mask: int) -> None: self.layer_type = layer_type self.ptr_struct = ptr_struct self.ptr_size = struct.calcsize(ptr_struct) @@ -69,20 +70,21 @@ def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[ Returns: A valid DTB within this page (and an additional parameter for data) """ - value = data[page_offset + (self.ptr_reference * self.ptr_size):page_offset + - ((self.ptr_reference + 1) * self.ptr_size)] - try: - ptr = self._unpack(value) - except struct.error: - return None - # The value *must* be present (bit 0) since it's a mapped page - # It's almost always writable (bit 1) - # It's occasionally Super, but not reliably so, haven't checked when/why not - # The top 3-bits are usually ignore (which in practice means 0 - # Need to find out why the middle 3-bits are usually 6 (0110) - if ptr != 0 and (ptr & self.mask == data_offset + page_offset) & (ptr & 0xFF1 == 
0x61): - dtb = (ptr & self.mask) - return self.second_pass(dtb, data, data_offset) + for ptr_reference in self.ptr_reference: + value = data[page_offset + (ptr_reference * self.ptr_size):page_offset + + ((ptr_reference + 1) * self.ptr_size)] + try: + ptr = self._unpack(value) + except struct.error: + return None + # The value *must* be present (bit 0) since it's a mapped page + # It's almost always writable (bit 1) + # It's occasionally Super, but not reliably so, haven't checked when/why not + # The top 3-bits are usually ignored (which in practice means 0) + # Need to find out why the middle 3-bits are usually 6 (0110) + if ptr != 0 and (ptr & self.mask == data_offset + page_offset) & (ptr & 0xFF1 == 0x61): + dtb = (ptr & self.mask) + return self.second_pass(dtb, data, data_offset) return None def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: @@ -117,7 +119,7 @@ class DtbTest32bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel, ptr_struct = "I", - ptr_reference = 0x300, + ptr_reference = [0x300], mask = 0xFFFFF000) @@ -126,7 +128,7 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = 0x1ED, + ptr_reference = [0x1ED, 0x1FB], mask = 0x3FFFFFFFFFF000) @@ -135,7 +137,7 @@ class DtbTestPae(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntelPAE, ptr_struct = "Q", - ptr_reference = 0x3, + ptr_reference = [0x3], mask = 0x3FFFFFFFFFF000) def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: From a1c5f5e5e5ed2ba53711b34eeb63271fddf18ef2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 3 Apr 2021 13:33:20 +0100 Subject: [PATCH 116/294] Automagic: Improve secondary 64-bit self-ref finder --- volatility3/framework/automagic/windows.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git 
a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 1a5e6a235d..873a020a0d 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -128,9 +128,13 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = [0x1ED, 0x1FB], + ptr_reference = range(0x1E0, 0x1FF), mask = 0x3FFFFFFFFFF000) + # As of Windows-10 RS1+, the ptr_reference is randomized: + # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ + # So far, we've only seen examples between 0x1e0 and 0x1ff + class DtbTestPae(DtbTest): From 191da08fb34ddf04a54aeaf119e286790667f3a3 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 4 Apr 2021 17:46:37 +0100 Subject: [PATCH 117/294] Automagic: Add a slow scan for kernel identification --- volatility3/framework/automagic/pdbscan.py | 62 ++++++++++++++++------ 1 file changed, 45 insertions(+), 17 deletions(-) diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py index 59b78c3cac..e2467dff57 100644 --- a/volatility3/framework/automagic/pdbscan.py +++ b/volatility3/framework/automagic/pdbscan.py @@ -10,7 +10,7 @@ import logging import math import os -from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union +from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union, Callable from volatility3.framework import constants, exceptions, interfaces, layers from volatility3.framework.configuration import requirements @@ -132,27 +132,27 @@ def set_kernel_virtual_offset(self, context: interfaces.context.ContextInterface def get_physical_layer_name(self, context, vlayer): return context.config.get(interfaces.configuration.path_join(vlayer.config_path, 'memory_layer'), None) + def method_slow_scan(self, + context: interfaces.context.ContextInterface, + vlayer: layers.intel.Intel, + 
progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: + + def test_virtual_kernel(physical_layer_name, virtual_layer_name, kernel): + return (virtual_layer_name, kernel['mz_offset'], kernel) + + vollog.debug("Kernel base determination - slow scan virtual layer") + return self._method_layer_pdb_scan(context, vlayer, test_virtual_kernel, False, progress_callback) + def method_fixed_mapping(self, context: interfaces.context.ContextInterface, vlayer: layers.intel.Intel, progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: - # TODO: Verify this is a windows image - vollog.debug("Kernel base determination - testing fixed base address") - valid_kernel = None - virtual_layer_name = vlayer.name - physical_layer_name = self.get_physical_layer_name(context, vlayer) - kernel_pdb_names = [bytes(name + ".pdb", "utf-8") for name in constants.windows.KERNEL_MODULE_NAMES] - kernels = PDBUtility.pdbname_scan(ctx = context, - layer_name = physical_layer_name, - page_size = vlayer.page_size, - pdb_names = kernel_pdb_names, - progress_callback = progress_callback) - for kernel in kernels: + def test_physical_kernel(physical_layer_name, virtual_layer_name, kernel): # It seems the kernel is loaded at a fixed mapping (presumably because the memory manager hasn't started yet) if kernel['mz_offset'] is None or not isinstance(kernel['mz_offset'], int): # Rule out kernels that couldn't find a suitable MZ header - continue + return None if vlayer.bits_per_register == 64: kvo = kernel['mz_offset'] + (31 << int(math.ceil(math.log2(vlayer.maximum_address + 1)) - 5)) else: @@ -161,13 +161,41 @@ def method_fixed_mapping(self, kvp = vlayer.mapping(kvo, 0) if (any([(p == kernel['mz_offset'] and layer_name == physical_layer_name) for (_, _, p, _, layer_name) in kvp])): - valid_kernel = (virtual_layer_name, kvo, kernel) - break + return (virtual_layer_name, kvo, kernel) else: vollog.debug("Potential kernel_virtual_offset did not map to 
expected location: {}".format( hex(kvo))) except exceptions.InvalidAddressException: vollog.debug("Potential kernel_virtual_offset caused a page fault: {}".format(hex(kvo))) + + vollog.debug("Kernel base determination - testing fixed base address") + return self._method_layer_pdb_scan(context, vlayer, test_physical_kernel, True, progress_callback) + + def _method_layer_pdb_scan(self, + context: interfaces.context.ContextInterface, + vlayer: layers.intel.Intel, + test_kernel: Callable, + physical: bool = True, + progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: + # TODO: Verify this is a windows image + valid_kernel = None + virtual_layer_name = vlayer.name + physical_layer_name = self.get_physical_layer_name(context, vlayer) + + layer_to_scan = physical_layer_name + if not physical: + layer_to_scan = virtual_layer_name + + kernel_pdb_names = [bytes(name + ".pdb", "utf-8") for name in constants.windows.KERNEL_MODULE_NAMES] + kernels = PDBUtility.pdbname_scan(ctx = context, + layer_name = layer_to_scan, + page_size = vlayer.page_size, + pdb_names = kernel_pdb_names, + progress_callback = progress_callback) + for kernel in kernels: + valid_kernel = test_kernel(physical_layer_name, virtual_layer_name, kernel) + if valid_kernel is not None: + break return valid_kernel def _method_offset(self, @@ -245,7 +273,7 @@ def check_kernel_offset(self, return valid_kernel # List of methods to be run, in order, to determine the valid kernels - methods = [method_kdbg_offset, method_module_offset, method_fixed_mapping] + methods = [method_kdbg_offset, method_module_offset, method_fixed_mapping, method_slow_scan] def determine_valid_kernel(self, context: interfaces.context.ContextInterface, From ce3b0fcc0bcce05efb879d91cd42ddee17f69286 Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Wed, 7 Apr 2021 12:04:50 -0500 Subject: [PATCH 118/294] Change debug to info for invalid process warning message --- volatility3/framework/plugins/windows/pslist.py | 2 
+- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index 7d96571880..1b5a726de1 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -202,7 +202,7 @@ def _generator(self): proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) except exceptions.InvalidAddressException: - vollog.debug("Invalid process found at address: {:x}. Skipping".format(proc.vol.offset)) + vollog.info("Invalid process found at address: {:x}. Skipping".format(proc.vol.offset)) def generate_timeline(self): for row in self._generator(): From 6f7d40f0ccb0014cdb7c7e6813d081f47ff7756f Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Wed, 7 Apr 2021 12:06:31 -0500 Subject: [PATCH 119/294] Commit all requested changes except those related to array handling/creating --- .../plugins/windows/skeleton_key_check.py | 41 ++++++------------- 1 file changed, 12 insertions(+), 29 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index 5976dfd479..51eb0d92f0 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -33,16 +33,10 @@ except ImportError: has_capstone = False -try: - import pefile - has_pefile = True -except ImportError: - has_pefile = False - vollog = logging.getLogger(__name__) class Skeleton_Key_Check(interfaces.plugins.PluginInterface): - """Lists process memory ranges that potentially contain injected code.""" + """ Looks for signs of Skeleton Key malware """ _required_framework_version = (1, 0, 0) @@ -59,15 +53,7 @@ def get_requirements(cls): requirements.VersionRequirement(name = 'pdbutil', component = pdbutil.PDBUtility, version = (1, 0, 0)), ] - # @ikelos - # these lines are copy/paste from inside of 
verinfo->get_version_information - # not sure if this is worthy of making it an API or not though - # basically it taskes in a pe symbol table, layer name, and base address - # and then kicks back a pefile instance - # we can either make it a common API or we can just delete this comment - - # @ikelos I don't know how to specify the return value as a pefile object... - def _get_pefile_obj(self, pe_table_name: str, layer_name: str, base_address: int): + def _get_pefile_obj(self, pe_table_name: str, layer_name: str, base_address: int) -> pefile.PE: pe_data = io.BytesIO() try: @@ -147,6 +133,8 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, count_address = cryptdll_module.get_symbol("cCSystems").address + # we do not want to fail just because the count is not in memory + # 16 was the size on samples I tested, so I chose it as the default try: count = cryptdll_types.object(object_type = "unsigned long", offset = count_address) except exceptions.InvalidAddressException: @@ -218,13 +206,12 @@ def _find_and_parse_cryptdll(self, proc_list: Iterable) -> \ for vad in proc.get_vad_root().traverse(): filename = vad.get_file_name() - if type(filename) == renderers.NotApplicableValue or not filename.lower().endswith("cryptdll.dll"): - continue - - cryptdll_base = vad.get_start() - cryptdll_size = vad.get_end() - cryptdll_base + + if isinstance(filename, str) and filename.lower().endswith("cryptdll.dll"): + cryptdll_base = vad.get_start() + cryptdll_size = vad.get_end() - cryptdll_base - break + break lsass_proc = proc break @@ -336,10 +323,10 @@ def _analyze_cdlocatecsystem(self, function_bytes: bytes, # cCsystems is referenced by a mov instruction elif inst.mnemonic == "mov": - if found_count == False: + if not found_count: target_address = self._get_rip_relative_target(inst) - # we do not want to fail just because the count is not memory + # we do not want to fail just because the count is not in memory # 16 was the size on samples I tested, so I chose it as 
the default if target_address: count = int.from_bytes(self.context.layers[proc_layer_name].read(target_address, 4), "little") @@ -382,10 +369,6 @@ def _find_csystems_with_export(self, proc_layer_name: str, vollog.debug("capstone is not installed so cannot fall back to export table analysis.") return None, None, None - if not has_pefile: - vollog.debug("pefile is not installed so cannot fall back to export table analysis.") - return None, None, None - vollog.debug("Unable to perform analysis using PDB symbols, falling back to export table analysis.") pe_table_name = intermed.IntermediateSymbolTable.create(self.context, @@ -535,7 +518,7 @@ def _generator(self, procs): if csystems is not None: break - if csystems == None: + if csystems is None: vollog.info("Unable to find CSystems inside of cryptdll.dll. Analysis cannot proceed.") return From 88ff4f1d0611f8883c049267a975f141c75f3c25 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 7 Apr 2021 22:30:45 +0100 Subject: [PATCH 120/294] Windows: Fix up double import in pdbconv --- volatility3/framework/symbols/windows/pdbconv.py | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index a9ce875510..03f765ad0b 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -9,7 +9,6 @@ import logging import lzma import os -import urllib from bisect import bisect from typing import Tuple, Dict, Any, Optional, Union, List from urllib import request, error, parse @@ -999,7 +998,7 @@ def __call__(self, progress: Union[int, float], description: str = None): filename = None if args.guid is not None and args.pattern is not None: filename = PdbRetreiver().retreive_pdb(guid = args.guid, file_name = args.pattern, progress_callback = pg_cb) - if urllib.parse.urlparse(filename, 'file').scheme == 'file': + if parse.urlparse(filename, 'file').scheme == 'file': delfile 
= True elif args.file: filename = args.file From f1d7d8610a62735cd382f6b68d3508873a5b8caf Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Wed, 7 Apr 2021 19:51:30 -0500 Subject: [PATCH 121/294] Updates and bug fixes --- .../plugins/windows/skeleton_key_check.py | 235 ++++++++++-------- 1 file changed, 137 insertions(+), 98 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index 51eb0d92f0..75a561d551 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -13,7 +13,7 @@ import logging, io -from typing import Iterable, Tuple +from typing import Iterable, Tuple, List from volatility3.framework.symbols.windows import pdbutil from volatility3.framework import interfaces, symbols, exceptions @@ -27,6 +27,8 @@ from volatility3.framework.symbols.windows.extensions import pe +import pefile + try: import capstone has_capstone = True @@ -54,6 +56,17 @@ def get_requirements(cls): ] def _get_pefile_obj(self, pe_table_name: str, layer_name: str, base_address: int) -> pefile.PE: + """ + Attempts to construct a pefile object from the bytes of the PE file + + Args: + pe_table_name: name of the pe types table + layer_name: name of the lsass.exe process layer + base_address: base address of cryptdll.dll in lsass.exe + + Returns: + the constructed pefile object + """ pe_data = io.BytesIO() try: @@ -108,6 +121,33 @@ def _check_for_skeleton_key_symbols(self, csystem: interfaces.objects.ObjectInte """ return csystem.Initialize != rc4HmacInitialize or csystem.Decrypt != rc4HmacDecrypt + def _construct_ecrypt_array(self, array_start: int, count: int, \ + cryptdll_types: interfaces.context.ModuleInterface) -> interfaces.context.ModuleInterface: + """ + Attempts to construct an array of _KERB_ECRYPT structures + + Args: + array_start: starting virtual address of the array + count: how many elements are in the array +
cryptdll_types: the reverse engineered types + + Returns: + The instantiated array + """ + + try: + array = cryptdll_types.object(object_type = "array", + offset = array_start, + subtype = cryptdll_types.get_type("_KERB_ECRYPT"), + count = count, + absolute = True) + + except exceptions.InvalidAddressException: + vollog.debug("Unable to construct cSystems array at given offset: {:x}".format(array_start)) + array = None + + return array + def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, cryptdll_types: interfaces.context.ModuleInterface, proc_layer_name: str, @@ -124,13 +164,16 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, Returns: Tuple of: - array_start: Where CSystems begins - count: Number of array elements + array: The cSystems array rc4HmacInitialize: The runtime address of the expected initialization function rc4HmacDecrypt: The runtime address of the expected decryption function """ cryptdll_module = self.context.module(cryptdll_symbols, layer_name = proc_layer_name, offset = cryptdll_base) + rc4HmacInitialize = cryptdll_module.get_symbol("rc4HmacInitialize").address + cryptdll_base + + rc4HmacDecrypt = cryptdll_module.get_symbol("rc4HmacDecrypt").address + cryptdll_base + count_address = cryptdll_module.get_symbol("cCSystems").address # we do not want to fail just because the count is not in memory @@ -142,11 +185,12 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, array_start = cryptdll_module.get_symbol("CSystems").address + cryptdll_base - rc4HmacInitialize = cryptdll_module.get_symbol("rc4HmacInitialize").address + cryptdll_base - - rc4HmacDecrypt = cryptdll_module.get_symbol("rc4HmacDecrypt").address + cryptdll_base + array = self._construct_ecrypt_array(array_start, count, cryptdll_types) + + if array is None: + vollog.debug("The CSystem array is not present in memory. 
Stopping PDB based analysis.") - return array_start, count, rc4HmacInitialize, rc4HmacDecrypt + return array, rc4HmacInitialize, rc4HmacDecrypt def _get_cryptdll_types(self, context: interfaces.context.ContextInterface, config, @@ -173,25 +217,19 @@ def _get_cryptdll_types(self, context: interfaces.context.ContextInterface, return context.module(cryptdll_symbol_table, proc_layer_name, offset = cryptdll_base) - def _find_and_parse_cryptdll(self, proc_list: Iterable) -> \ - Tuple[interfaces.context.ContextInterface, str, int, int]: + def _find_lsass_proc(self, proc_list: Iterable) -> \ + Tuple[interfaces.context.ContextInterface, str]: """ - Finds the base address of cryptdll.dll insode of lsass.exe + Walks the process list and returns the first valid lsass instance. + There should be only one lsass process, but malware will often use the + process name to try and blend in. Args: - proc_list: the process list filtered to just lsass.exe instances + proc_list: The process list generator - Returns: - A tuple of: - lsass_proc: the process object for lsass.exe - proc_layer_name: the name of the lsass.exe process layer - cryptdll_base: the base address of cryptdll.dll - crytpdll_size: the size of the VAD for cryptdll.dll + Returns: + The process object for lsass """ - lsass_proc = None - proc_layer_name = None - cryptdll_base = None - cryptdll_size = None for proc in proc_list: try: @@ -202,21 +240,31 @@ def _find_and_parse_cryptdll(self, proc_list: Iterable) -> \ excp.layer_name)) continue - proc_layer = self.context.layers[proc_layer_name] + return proc, proc_layer_name + + return None, None - for vad in proc.get_vad_root().traverse(): - filename = vad.get_file_name() - - if isinstance(filename, str) and filename.lower().endswith("cryptdll.dll"): - cryptdll_base = vad.get_start() - cryptdll_size = vad.get_end() - cryptdll_base + def _find_cryptdll(self, lsass_proc: interfaces.context.ContextInterface) -> \ + Tuple[int, int]: + """ + Finds the base address of cryptdll.dll
inside of lsass.exe - break + Args: + lsass_proc: the process object for lsass.exe - lsass_proc = proc - break + Returns: + A tuple of: + cryptdll_base: the base address of cryptdll.dll + cryptdll_size: the size of the VAD for cryptdll.dll + """ + for vad in lsass_proc.get_vad_root().traverse(): + filename = vad.get_file_name() + + if isinstance(filename, str) and filename.lower().endswith("cryptdll.dll"): + base = vad.get_start() + return base, vad.get_end() - base - return lsass_proc, proc_layer_name, cryptdll_base, cryptdll_size + return None, None def _find_csystems_with_symbols(self, proc_layer_name: str, cryptdll_types: interfaces.context.ModuleInterface, @@ -247,21 +295,14 @@ def _find_csystems_with_symbols(self, proc_layer_name: str, cryptdll_base, cryptdll_size) except exceptions.VolatilityException: + vollog.debug("Unable to use the cryptdll PDB. Stopping PDB symbols based analysis.") return None, None, None - array_start, count, rc4HmacInitialize, rc4HmacDecrypt = \ + array, rc4HmacInitialize, rc4HmacDecrypt = \ self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) - try: - array = cryptdll_types.object(object_type = "array", - offset = array_start, - subtype = cryptdll_types.get_type("_KERB_ECRYPT"), - count = count, - absolute = True) - - except exceptions.InvalidAddressException: + if array is None: vollog.debug("The CSystem array is not present in memory.
Stopping PDB symbols based analysis.") - return None, None, None return array, rc4HmacInitialize, rc4HmacDecrypt @@ -292,7 +333,8 @@ def _get_rip_relative_target(self, inst) -> int: return inst.address + inst.size + opnd.mem.disp def _analyze_cdlocatecsystem(self, function_bytes: bytes, - function_start: int, + function_start: int, + cryptdll_types: interfaces.context.ModuleInterface, proc_layer_name: str) -> Tuple[int, int]: """ Performs static analysis on CDLocateCSystem to find the instructions that @@ -304,9 +346,7 @@ def _analyze_cdlocatecsystem(self, function_bytes: bytes, proc_layer_name: the name of the lsass.exe process layer Return: - Tuple of: - array_start: address of CSystem - count: the count from cCsystems or 16 + The cSystems array of ecrypt instances """ found_count = False array_start = None @@ -344,12 +384,17 @@ def _analyze_cdlocatecsystem(self, function_bytes: bytes, # we find the count before, so we can terminate the static analysis here break - return array_start, count + if array_start and count: + array = self._construct_ecrypt_array(array_start, count, cryptdll_types) + else: + array = None + + return array def _find_csystems_with_export(self, proc_layer_name: str, cryptdll_types: interfaces.context.ModuleInterface, cryptdll_base: int, - _) -> Tuple[int, None, None]: + _) -> interfaces.context.ModuleInterface: """ Uses export table analysis to locate CDLocateCsystem This function references CSystems and cCsystems @@ -360,14 +405,12 @@ def _find_csystems_with_export(self, proc_layer_name: str, cryptdll_base: Base address of cryptdll.dll inside of lsass.exe _: unused in this source Returns: - Tuple of: - array_start: Where CSystems begins - None: this method cannot find the expected initialization address - None: this method cannot find the expected decryption address + The cSystems array """ + if not has_capstone: vollog.debug("capstone is not installed so cannot fall back to export table analysis.") - return None, None, None + return None 
vollog.debug("Unable to perform analysis using PDB symbols, falling back to export table analysis.") @@ -377,15 +420,14 @@ def _find_csystems_with_export(self, proc_layer_name: str, "pe", class_types = pe.class_types) - + cryptdll = self._get_pefile_obj(pe_table_name, proc_layer_name, cryptdll_base) - if not cryptdll or not hasattr(cryptdll, 'DIRECTORY_ENTRY_EXPORT'): - return None, None, None - - cryptdll.parse_data_directories(directories = [pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_EXPORT"]]) + if not cryptdll: + return None - array_start = None - count = None + cryptdll.parse_data_directories(directories = [pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_EXPORT"]]) + if not hasattr(cryptdll, 'DIRECTORY_ENTRY_EXPORT'): + return None # find the location of CDLocateCSystem and then perform static analysis for export in cryptdll.DIRECTORY_ENTRY_EXPORT.symbols: @@ -400,28 +442,18 @@ def _find_csystems_with_export(self, proc_layer_name: str, vollog.debug("The CDLocateCSystem function is not present in the lsass address space. Stopping export based analysis.") break - array_start, count = self._analyze_cdlocatecsystem(function_bytes, function_start, proc_layer_name) - - break - - if array_start: - try: - array = cryptdll_types.object(object_type = "array", - offset = array_start, - subtype = cryptdll_types.get_type("_KERB_ECRYPT"), - count = count, - absolute = True) - - except exceptions.InvalidAddressException: + array = self._analyze_cdlocatecsystem(function_bytes, function_start, cryptdll_types, proc_layer_name) + if array is None: vollog.debug("The CSystem array is not present in memory. 
Stopping export based analysis.") - return None, None, None - return array, None, None + return array + + return None def _find_csystems_with_scanning(self, proc_layer_name: str, cryptdll_types: interfaces.context.ModuleInterface, cryptdll_base: int, - cryptdll_size: int) -> Tuple[int, None, None]: + cryptdll_size: int) -> List[interfaces.context.ModuleInterface]: """ Performs scanning to find potential RC4 HMAC csystem instances @@ -433,10 +465,7 @@ def _find_csystems_with_scanning(self, proc_layer_name: str, cryptdll_base: base address of cryptdll.dll inside of lsass.exe cryptdll_size: size of the VAD Returns: - Tuple of: - array_start: Where CSystems begins - None: this method cannot find the expected initialization address - None: this method cannot find the expected decryption address + A list of csystem instances """ csystems = [] @@ -468,7 +497,7 @@ def _find_csystems_with_scanning(self, proc_layer_name: str, (cryptdll_base < kerb.Finish < cryptdll_end): csystems.append(kerb) - return csystems, None, None + return csystems def _generator(self, procs): """ @@ -484,14 +513,14 @@ def _generator(self, procs): vollog.info("This plugin only supports 64bit Windows memory samples") return - lsass_proc, proc_layer_name, cryptdll_base, cryptdll_size = self._find_and_parse_cryptdll(procs) - + lsass_proc, proc_layer_name = self._find_lsass_proc(procs) if not lsass_proc: - vollog.warn("Unable to find lsass.exe process in process list. This should never happen. Analysis cannot proceed.") + vollog.info("Unable to find a valid lsass.exe process in the process list. This should never happen. Analysis cannot proceed.") return + cryptdll_base, cryptdll_size = self._find_cryptdll(lsass_proc) if not cryptdll_base: - vollog.warn("Unable to find the location of cryptdll.dll inside of lsass.exe. Analysis cannot proceed.") + vollog.info("Unable to find the location of cryptdll.dll inside of lsass.exe. 
Analysis cannot proceed.") return # the custom type information from binary analysis cryptdll_types = self._get_cryptdll_types(self.context, self.config, self.config_path, proc_layer_name, cryptdll_base) - # attempt to locate csystem and handlers in order of - # reliability and reporting accuracy - sources = [self._find_csystems_with_symbols, - self._find_csystems_with_export, - self._find_csystems_with_scanning] + # attempt to find the array and symbols directly from the PDB + csystems, rc4HmacInitialize, rc4HmacDecrypt = \ + self._find_csystems_with_symbols(proc_layer_name, + cryptdll_types, + cryptdll_base, + cryptdll_size) - for source in sources: - csystems, rc4HmacInitialize, rc4HmacDecrypt = \ - source(proc_layer_name, - cryptdll_types, - cryptdll_base, - cryptdll_size) - if csystems is not None: - break + # if we can't find cSystems through the PDB then + # we fall back to export analysis and scanning + # we keep the address of the rc4 functions from the PDB + # though as it's our only source to get them + if csystems is None: + fallback_sources = [self._find_csystems_with_export, + self._find_csystems_with_scanning] + + for source in fallback_sources: + csystems = source(proc_layer_name, + cryptdll_types, + cryptdll_base, + cryptdll_size) + + if csystems is not None: + break if csystems is None: vollog.info("Unable to find CSystems inside of cryptdll.dll.
Analysis cannot proceed.") return From aa26601b3486b4afde3577810063f8881c189f90 Mon Sep 17 00:00:00 2001 From: cstation Date: Tue, 13 Apr 2021 22:12:53 +0200 Subject: [PATCH 122/294] Fix reading JSON configuration of QEMU-images --- volatility3/framework/layers/qemu.py | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/volatility3/framework/layers/qemu.py b/volatility3/framework/layers/qemu.py index 8a2621fa8d..caf768ed6c 100644 --- a/volatility3/framework/layers/qemu.py +++ b/volatility3/framework/layers/qemu.py @@ -58,12 +58,15 @@ def _read_configuration(self, base_layer: interfaces.layers.DataLayerInterface, data = b'' for i in range(base_layer.maximum_address, base_layer.minimum_address, -chunk_size): if i != base_layer.maximum_address: - data = base_layer.read(i, chunk_size) + data + data = (base_layer.read(i, chunk_size) + data).rstrip(b'\x00') if b'\x00' in data: - start = data.rfind(b'\x00') - data = data[data.find(b'{', start):] - return json.loads(data) - raise exceptions.LayerException(name, "Could not load JSON configuration from the end of the file") + last_null_byte = data.rfind(b'\x00') + start_of_json = data.find(b'{', last_null_byte) + if start_of_json >= 0: + data = data[start_of_json:] + return json.loads(data) + return dict() + raise exceptions.LayerException(name, "Invalid JSON configuration at the end of the file") def _get_ram_segments(self, index: int, page_size: int) -> Tuple[List[Tuple[int, int, int, int]], int]: """Recovers the new index and any sections of memory from a ram section""" From c5c726ab2865966597b647576e20dbf6175b7aa3 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 21 Apr 2021 00:02:36 +0100 Subject: [PATCH 123/294] Windows: Improve PDB scanning --- .../framework/symbols/windows/pdbutil.py | 31 +++++++++---------- 1 file changed, 14 insertions(+), 17 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbutil.py index
56aa63f277..85d8ab92e8 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -7,6 +7,7 @@ import logging import lzma import os +import re import struct from typing import Any, Dict, Generator, List, Optional, Tuple, Union from urllib import request, parse @@ -344,20 +345,16 @@ def __init__(self, pdb_names: List[bytes]) -> None: self._pdb_names = pdb_names def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[str, Any, bytes, int], None, None]: - sig = data.find(b"RSDS") - while sig >= 0: - null = data.find(b'\0', sig + 4 + self._RSDS_format.size) - if null > -1: - if (null - sig - self._RSDS_format.size) <= 100: - name_offset = sig + 4 + self._RSDS_format.size - pdb_name = data[name_offset:null] - if pdb_name in self._pdb_names: - - ## this ordering is intentional due to mixed endianness in the GUID - (g3, g2, g1, g0, g5, g4, g7, g6, g8, g9, ga, gb, gc, gd, ge, gf, a) = \ - self._RSDS_format.unpack(data[sig + 4:name_offset]) - - guid = (16 * '{:02X}').format(g0, g1, g2, g3, g4, g5, g6, g7, g8, g9, ga, gb, gc, gd, ge, gf) - if sig < self.chunk_size: - yield (guid, a, pdb_name, data_offset + sig) - sig = data.find(b"RSDS", sig + 1) + pattern = b'RSDS' + (b'.' 
* self._RSDS_format.size) + b'(' + b'|'.join(self._pdb_names) + b')\x00' + for match in re.finditer(pattern, data): + pdb_name = data[match.start(0) + 4 + self._RSDS_format.size:match.start(0) + len(match.group()) - 1] + print("MATCH", pdb_name) + if pdb_name in self._pdb_names: + ## this ordering is intentional due to mixed endianness in the GUID + (g3, g2, g1, g0, g5, g4, g7, g6, g8, g9, ga, gb, gc, gd, ge, gf, a) = \ + self._RSDS_format.unpack(data[match.start(0) + 4:match.start(0) + 4 + self._RSDS_format.size]) + + guid = (16 * '{:02X}').format(g0, g1, g2, g3, g4, g5, g6, g7, g8, g9, ga, gb, gc, gd, ge, gf) + if match.start(0) < self.chunk_size: + print("YIELDING", (guid, a, pdb_name, match.start(0))) + yield (guid, a, pdb_name, match.start(0)) From e241ac0ac07f70d38b8fe5585ea8283287aaf77d Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Apr 2021 17:18:52 +0100 Subject: [PATCH 124/294] Windows: Fixes missing os import Fixes #495 --- volatility3/framework/plugins/windows/svcscan.py | 1 + 1 file changed, 1 insertion(+) diff --git a/volatility3/framework/plugins/windows/svcscan.py b/volatility3/framework/plugins/windows/svcscan.py index 078445bb12..24bc271097 100644 --- a/volatility3/framework/plugins/windows/svcscan.py +++ b/volatility3/framework/plugins/windows/svcscan.py @@ -3,6 +3,7 @@ # import logging +import os from typing import List from volatility3.framework import interfaces, renderers, constants, symbols, exceptions From 7d408ce0f36df378b5b0685f991d09649c8f6a7e Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 26 Apr 2021 01:16:30 +0100 Subject: [PATCH 125/294] Windows: Make IPI handling of PDBs optional --- .../framework/symbols/windows/pdbconv.py | 33 +++++++++++-------- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 03f765ad0b..2880d44b8e 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ 
b/volatility3/framework/symbols/windows/pdbconv.py @@ -353,18 +353,21 @@ def read_ipi_stream(self): ipi_list = [] - type_references = self._read_info_stream(4, "IPI", ipi_list) + try: + type_references = self._read_info_stream(4, "IPI", ipi_list) + for name in type_references.keys(): + # This doesn't break, because we want to use the last string/pdbname in the list + if name.endswith('.pdb'): + self._database_name = name.split('\\')[-1] + except ValueError: + return None - for name in type_references.keys(): - # This doesn't break, because we want to use the last string/pdbname in the list - if name.endswith('.pdb'): - self._database_name = name.split('\\')[-1] def _read_info_stream(self, stream_number, stream_name, info_list): vollog.debug("Reading {}".format(stream_name)) info_layer = self._context.layers.get(self._layer_name + "_stream" + str(stream_number), None) if not info_layer: - raise ValueError("No TPI stream available") + raise ValueError("No {} stream available".format(stream_name)) module = self._context.module(module_name = info_layer.pdb_symbol_table, layer_name = info_layer.name, offset = 0) @@ -646,8 +649,8 @@ def get_size_from_index(self, index: int) -> int: else: leaf_type, name, value = self.types[index - 0x1000] if leaf_type in [ - leaf_type.LF_UNION, leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, - leaf_type.LF_STRUCTURE_ST, leaf_type.LF_INTERFACE + leaf_type.LF_UNION, leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, + leaf_type.LF_STRUCTURE_ST, leaf_type.LF_INTERFACE ]: if not value.properties.forward_reference: result = value.size @@ -691,8 +694,8 @@ def process_types(self, type_references: Dict[str, int]) -> None: self._progress_callback(index * 100 / max_len, "Processing types") leaf_type, name, value = self.types[index] if leaf_type in [ - leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, leaf_type.LF_STRUCTURE_ST, - leaf_type.LF_INTERFACE + leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, 
leaf_type.LF_STRUCTURE, leaf_type.LF_STRUCTURE_ST, + leaf_type.LF_INTERFACE ]: if not value.properties.forward_reference and name: self.user_types[name] = { @@ -726,9 +729,9 @@ def process_types(self, type_references: Dict[str, int]) -> None: self.user_types = self.replace_forward_references(self.user_types, type_references) def consume_type( - self, module: interfaces.context.ModuleInterface, offset: int, length: int + self, module: interfaces.context.ModuleInterface, offset: int, length: int ) -> Tuple[Tuple[Optional[interfaces.objects.ObjectInterface], Optional[str], Union[ - None, List, interfaces.objects.ObjectInterface]], int]: + None, List, interfaces.objects.ObjectInterface]], int]: """Returns a (leaf_type, name, object) Tuple for a type, and the number of bytes consumed.""" leaf_type = self.context.object(module.get_enumeration("LEAF_TYPE"), @@ -738,8 +741,8 @@ def consume_type( remaining = length - consumed if leaf_type in [ - leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, leaf_type.LF_STRUCTURE_ST, - leaf_type.LF_INTERFACE + leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, leaf_type.LF_STRUCTURE_ST, + leaf_type.LF_INTERFACE ]: structure = module.object(object_type = "LF_STRUCTURE", offset = offset + consumed) name_offset = structure.name.vol.offset - structure.vol.offset @@ -953,6 +956,7 @@ def retreive_pdb(self, if __name__ == '__main__': import argparse + class PrintedProgress(object): """A progress handler that prints the progress value and the description onto the command line.""" @@ -973,6 +977,7 @@ def __call__(self, progress: Union[int, float], description: str = None): self._max_message_len = max([self._max_message_len, message_len]) print(message, end = (' ' * (self._max_message_len - message_len)) + '\r') + parser = argparse.ArgumentParser( description = "Read PDB files and convert to Volatility 3 Intermediate Symbol Format") parser.add_argument("-o", "--output", metavar = "OUTPUT", help = "Filename for 
data output", default = None) From 7152ca0ffc824532000ec08927f97ac6240d20bf Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 11 May 2021 15:59:42 -0500 Subject: [PATCH 126/294] refactor load_segments() to fix 32-bit bitmap crashdumps --- volatility3/framework/layers/crash.py | 89 +++++++++---------- .../symbols/windows/extensions/crash.py | 27 ++++++ 2 files changed, 68 insertions(+), 48 deletions(-) create mode 100644 volatility3/framework/symbols/windows/extensions/crash.py diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index 1a15ffde47..b6c620cb16 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -8,6 +8,7 @@ from volatility3.framework import constants, exceptions, interfaces from volatility3.framework.layers import segmented from volatility3.framework.symbols import intermed +from volatility3.framework.symbols.windows.extensions import crash vollog = logging.getLogger(__name__) @@ -50,7 +51,8 @@ def __init__(self, context: interfaces.context.ContextInterface, config_path: st # the _SUMMARY_DUMP is shared between 32- and 64-bit self._crash_common_table_name = intermed.IntermediateSymbolTable.create(context, self._config_path, 'windows', - 'crash_common') + 'crash_common', + class_types=crash.class_types) # Check Header hdr_layer = self._context.layers[self._base_layer] @@ -88,9 +90,6 @@ def _load_segments(self) -> None: segments = [] - # instead of hard coding 0x2000, use 0x1000 * self.headerpages so this works for - # both 32- and 64-bit dumps - summary_header = self.get_summary_header() if self.dump_type == 0x1: header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, offset=0, @@ -103,50 +102,44 @@ def _load_segments(self) -> None: offset += x.PageCount elif self.dump_type == 0x05: - - ## NOTE: In the original crash64.json, _SUMMARY_DUMP.Pages was offset 48 and - ## _SUMMARY_DUMP.BitmapSize was offset 40. 
From the volatility2 vtypes, that is - ## backwards! The correct offsets should be: - ## - ## 'Pages' : [ 0x28, ['unsigned long long']], -> This is 40 decimal - ## 'BitmapSize': [0x30, ['unsigned long long']], -> This is 48 decimal - ## - ## Most likely, some of the code that follows needs to be adjusted with those - ## newly refactored offsets in mind. I commented out the "+ 0x2000" below, and - ## this still works on Win10x64_17763_crash.dmp, but it fails on - ## the Win10x86_17763_crash.dmp version. - - # Add 0x2000 as some bitmaps are too short by one offset - summary_header.BufferLong.count = (summary_header.BitmapSize + 31) // 32 # + 0x2000 - previous_bit = 0 - start_position = 0 - # We cast as an int because we don't want to carry the context around with us for infinite loop reasons - mapped_offset = int(summary_header.HeaderSize) - current_word = None - bitmap_len = len(summary_header.BufferLong) * 32 - - for bit_position in range(bitmap_len): - if (bit_position % 32) == 0: - current_word = summary_header.BufferLong[bit_position // 32] - current_bit = (current_word >> (bit_position % 32)) & 1 - - if current_bit != previous_bit: - if previous_bit == 0: - # Start - start_position = bit_position - else: - # Finish - length = (bit_position - start_position) * 0x1000 - segments.append((start_position * 0x1000, mapped_offset, length, length)) - mapped_offset += length - - # Find the last segment in a file which will be at the end or two pages from the end. 
We multiply by 32 as we want to offset bby words rather than bits - if (bit_position == bitmap_len - 1 or bit_position == bitmap_len - 1 - 32 * 0x2000) and current_bit == 1: - length = (bit_position - start_position) * 0x1000 - segments.append((start_position * 0x1000, mapped_offset, length, length)) - mapped_offset += length - break - previous_bit = current_bit + summary_header = self.get_summary_header() + first_bit = None # First bit in a run + first_offset = 0 # File offset of first bit + last_bit_seen = 0 # Most recent bit processed + offset = summary_header.HeaderSize # Size of file headers + buffer_char = summary_header.get_buffer_char() + buffer_long = summary_header.get_buffer_long() + + for outer_index in range(0, ((summary_header.BitmapSize + 31) // 32)): + if buffer_long[outer_index] == 0: + if first_bit is not None: + last_bit = ((outer_index - 1) * 32) + 31 + segment_length = (last_bit - first_bit + 1) * 0x1000 + segments.append((first_bit * 0x1000, first_offset, segment_length, segment_length)) + first_bit = None + elif buffer_long[outer_index] == 0xFFFFFFFF: + if first_bit is None: + first_offset = offset + first_bit = outer_index * 32 + offset = offset + (32 * 0x1000) + else: + for inner_index in range(0, 32): + bit_addr = outer_index * 32 + inner_index + if (buffer_char[bit_addr >> 3] >> (bit_addr & 0x7)) & 1: + if first_bit is None: + first_offset = offset + first_bit = bit_addr + offset = offset + 0x1000 + else: + if first_bit is not None: + segment_length = ((bit_addr - 1) - first_bit + 1) * 0x1000 + segments.append((first_bit * 0x1000, first_offset, segment_length, segment_length)) + first_bit = None + last_bit_seen = (outer_index * 32) + 31 + + if first_bit is not None: + segment_length = (last_bit_seen - first_bit + 1) * 0x1000 + segments.append((first_bit * 0x1000, first_offset, segment_length, segment_length)) else: vollog.log(constants.LOGLEVEL_VVVV, "unsupported dump format 0x{:x}".format(self.dump_type)) raise 
WindowsCrashDumpFormatException(self.name, "unsupported dump format 0x{:x}".format(self.dump_type)) diff --git a/volatility3/framework/symbols/windows/extensions/crash.py b/volatility3/framework/symbols/windows/extensions/crash.py new file mode 100644 index 0000000000..7c60842e0e --- /dev/null +++ b/volatility3/framework/symbols/windows/extensions/crash.py @@ -0,0 +1,27 @@ +# This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 +# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 +# + +from volatility3.framework import interfaces, constants +from volatility3.framework import objects + + +class SUMMARY_DUMP(objects.StructType): + + def get_buffer(self, sub_type: str, count: int) -> interfaces.objects.ObjectInterface: + symbol_table_name = self.get_symbol_table_name() + subtype = self._context.symbol_space.get_type(symbol_table_name + constants.BANG + sub_type) + return self._context.object(object_type=symbol_table_name + constants.BANG + "array", + layer_name=self.vol.layer_name, + offset=self.BufferChar.vol.offset, + count=count, + subtype=subtype) + + def get_buffer_char(self) -> interfaces.objects.ObjectInterface: + return self.get_buffer(sub_type="unsigned char", count=(self.BitmapSize + 7) // 8) + + def get_buffer_long(self) -> interfaces.objects.ObjectInterface: + return self.get_buffer(sub_type="unsigned long", count=(self.BitmapSize + 31) // 32) + + +class_types = {'_SUMMARY_DUMP': SUMMARY_DUMP} From b9d5ffd2570fc181906bcb638ba309c80e4b36cd Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Thu, 13 May 2021 19:11:05 +1000 Subject: [PATCH 127/294] Kernel ring buffer reader plugin --- volatility3/framework/plugins/linux/kmsg.py | 386 ++++++++++++++++++++ 1 file changed, 386 insertions(+) create mode 100644 volatility3/framework/plugins/linux/kmsg.py diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py new file mode 100644 index 
0000000000..447237c1c3 --- /dev/null +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -0,0 +1,386 @@ +# This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 +# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 +# +import logging +from typing import List, Iterator, Tuple, Generator + +from abc import ABC, abstractmethod +from enum import Enum + +from volatility3.framework import renderers, interfaces, constants, contexts, class_subclasses +from volatility3.framework.configuration import requirements +from volatility3.framework.interfaces import plugins +from volatility3.framework.objects import utility + + +vollog = logging.getLogger(__name__) + + +class DescStateEnum(Enum): + desc_miss = -1 # ID mismatch (pseudo state) + desc_reserved = 0x0 # reserved, in use by writer + desc_committed = 0x1 # committed by writer, could get reopened + desc_finalized = 0x2 # committed, no further modification allowed + desc_reusable = 0x3 # free, not yet used by any writer + + +class ABCKmsg(ABC): + """Kernel log buffer reader""" + LEVELS = ( + "emerg", # system is unusable + "alert", # action must be taken immediately + "crit", # critical conditions + "err", # error conditions + "warn", # warning conditions + "notice", # normal but significant condition + "info", # informational + "debug", # debug-level messages + ) + + FACILITIES = ( + "kern", # kernel messages + "user", # random user-level messages + "mail", # mail system + "daemon", # system daemons + "auth", # security/authorization messages + "syslog", # messages generated internally by syslogd + "lpr", # line printer subsystem + "news", # network news subsystem + "uucp", # UUCP subsystem + "cron", # clock daemon + "authpriv", # security/authorization messages (private) + "ftp" # FTP daemon + ) + + def __init__( + self, + context: interfaces.context.ContextInterface, + config: interfaces.configuration.HierarchicalDict + ): + self._context = 
context + self._config = config + self.layer_name = self._config['primary'] # type: ignore + symbol_table_name = self._config['vmlinux'] # type: ignore + self.vmlinux = contexts.Module(context, symbol_table_name, self.layer_name, 0) # type: ignore + + @classmethod + def run_all( + cls, + context: interfaces.context.ContextInterface, + config: interfaces.configuration.HierarchicalDict + ) -> Iterator[Tuple[str, str, str, str, str]]: + """Calls each subclass's symtab_checks() to test the conditions + required by that specific kernel implementation. + + Args: + context: The volatility3 context on which to operate + config: Core configuration + + Yields: + kmsg records + """ + + symbol_table_name = config['vmlinux'] # type: ignore + layer_name = config['primary'] # type: ignore + vmlinux = contexts.Module(context, symbol_table_name, layer_name, 0) # type: ignore + + kmsg_inst = None # type: ignore + for subclass in class_subclasses(cls): + if not subclass.symtab_checks(vmlinux=vmlinux): + vollog.log(constants.LOGLEVEL_VVVV, + "Kmsg implementation '%s' doesn't match this memory dump", subclass.__name__) + continue + + vollog.log(constants.LOGLEVEL_VVVV, "Kmsg implementation '%s' matches!", subclass.__name__) + kmsg_inst = subclass(context=context, config=config) + # More than one class could be executed for a specific kernel + # version, e.g.
Netfilter Ingress hooks + # We expect just one implementation to be executed for a specific kernel + yield from kmsg_inst.run() + break + + if kmsg_inst is None: + vollog.error("Unsupported kernel ring buffer implementation") + + @abstractmethod + def run(self) -> Iterator[Tuple[str, str, str, str, str]]: + """Walks through the specific kernel implementation.""" + + @classmethod + def symtab_checks(cls, vmlinux: interfaces.context.ModuleInterface) -> bool: + pass + + def get_string(self, addr: int, length: int) -> str: + txt = self._context.layers[self.layer_name].read(addr, length) # type: ignore + return txt.decode(encoding='utf8', errors='replace') + + def nsec_to_sec_str(self, nsec: int) -> str: + # See kernel/printk/printk.c:print_time() + # Here, we could simply do: + # "%.6f" % (nsec / 1000000000.0) + # However, that will cause a roundoff error. For instance, using + # 17110365556 as input, the above will result in 17.110366. + # While the kernel print_time function will result in 17.110365. + # This might seem insignificant but it could cause some issues + # when compared with userland tool results or when used in + # timelines. + return "%lu.%06lu" % (nsec / 1000000000, (nsec % 1000000000) / 1000) + + def get_timestamp_in_sec_str(self, obj) -> str: + # obj could be printk_log or printk_info + return self.nsec_to_sec_str(obj.ts_nsec) + + def get_caller(self, obj): + # In some kernel versions, it's only available if CONFIG_PRINTK_CALLER is defined.
+ caller_id is a member of printk_log struct from 5.1 to the latest 5.9 + From kernels 5.10 on, it's a member of printk_info struct + if obj.has_member('caller_id'): + return self.get_caller_text(obj.caller_id) + else: + return "" + + def get_caller_text(self, caller_id): + caller_name = 'CPU' if caller_id & 0x80000000 else 'Task' + caller = "%s(%u)" % (caller_name, caller_id & ~0x80000000) + return caller + + def get_prefix(self, obj) -> Tuple[int, int, str, str]: + # obj could be printk_log or printk_info + return obj.facility, obj.level, self.get_timestamp_in_sec_str(obj), self.get_caller(obj) + + @classmethod + def get_level_text(cls, level: int) -> str: + if level < len(cls.LEVELS): + return cls.LEVELS[level] + else: + vollog.debug(f"Level {level} unknown") + return str(level) + + @classmethod + def get_facility_text(cls, facility: int) -> str: + if facility < len(cls.FACILITIES): + return cls.FACILITIES[facility] + else: + vollog.debug(f"Facility {facility} unknown") + return str(facility) + +class KmsgLegacy(ABCKmsg): + """In Linux kernels prior to v5.10, the ringbuffer is initially kept in + __log_buf, and log_buf is a pointer to the former. __log_buf is declared as + a char array but it actually contains an array of printk_log structs. + The length of this array is defined in the kernel KConfig configuration via + the CONFIG_LOG_BUF_SHIFT value as a power of 2. + This can also be modified by the log_buf_len kernel boot parameter. + In SMP systems with more than 64 CPUs this ringbuffer size is dynamically + allocated according to the number of CPUs based on the value of + CONFIG_LOG_CPU_MAX_BUF_SHIFT, and the log_buf pointer is updated + accordingly to point to the new buffer. + In that case, the original static buffer in __log_buf is unused.
+ """ + @classmethod + def symtab_checks(cls, vmlinux) -> bool: + return vmlinux.has_type('printk_log') + + def get_text_from_printk_log(self, msg) -> str: + msg_offset = msg.vol.offset + self.vmlinux.get_type('printk_log').size + return self.get_string(msg_offset, msg.text_len) + + def get_log_lines(self, msg) -> Generator[str, None, None]: + if msg.text_len > 0: + text = self.get_text_from_printk_log(msg) + yield from text.splitlines() + + def get_dict_lines(self, msg) -> Generator[str, None, None]: + if msg.dict_len == 0: + return None + dict_offset = msg.vol.offset + self.vmlinux.get_type('printk_log').size + msg.text_len + dict_data = self._context.layers[self.layer_name].read(dict_offset, msg.dict_len) + for chunk in dict_data.split(b'\x00'): + yield " " + chunk.decode() + + def run(self) -> Iterator[Tuple[str, str, str, str, str]]: + log_buf_ptr = self.vmlinux.object_from_symbol(symbol_name='log_buf') + if log_buf_ptr == 0: + # This is weird, let's fallback to check the static ringbuffer. + log_buf_ptr = self.vmlinux.object_from_symbol(symbol_name='__log_buf').vol.offset + if log_buf_ptr == 0: + raise ValueError("Log buffer is not available") + + log_first_idx = int(self.vmlinux.object_from_symbol(symbol_name='log_first_idx')) + cur_idx = log_first_idx + end_idx = log_first_idx # We don't need log_next_idx here. See below msg.len == 0 + while True: + msg_offset = log_buf_ptr + cur_idx # type: ignore + msg = self.vmlinux.object(object_type='printk_log', offset=msg_offset) + if msg.len == 0: + # As per kernel/printk/printk.c: + # A length == 0 for the next message indicates a wrap-around to + # the beginning of the buffer. 
+ cur_idx = 0 + else: + facility, level, timestamp, caller = self.get_prefix(msg) + level_txt = self.get_level_text(level) + facility_txt = self.get_facility_text(facility) + + for line in self.get_log_lines(msg): + yield facility_txt, level_txt, timestamp, caller, line + for line in self.get_dict_lines(msg): + yield facility_txt, level_txt, timestamp, caller, line + + cur_idx += msg.len + + if cur_idx == end_idx: + break + + +class KmsgFiveTen(ABCKmsg): + """In 5.10 the kernel ringbuffer implementation changed. + Previously, only one process was expected to read /proc/kmsg, which is + kept permanently open and periodically read by the syslog daemon. + A high level structure 'printk_ringbuffer' was added to represent the printk + ringbuffer which actually contains two ringbuffers. The descriptor ring + 'desc_ring' contains the records' metadata, text offsets and states. + The data block ring 'text_data_ring' contains the records' text strings. + A pointer to the high level structure is kept in the prb pointer which is + initialized to a static ringbuffer. + static struct printk_ringbuffer *prb = &printk_rb_static; + In SMP systems with more than 64 CPUs this ringbuffer size is dynamically + allocated according to the number of CPUs based on the value of + CONFIG_LOG_CPU_MAX_BUF_SHIFT. The prb pointer is updated accordingly to + point to this dynamic ringbuffer in setup_log_buf(). + prb = &printk_rb_dynamic; + Behind the scenes, log_buf is still used as the external buffer. + When the static printk_ringbuffer struct is initialized, _DEFINE_PRINTKRB + sets the text_data_ring.data pointer to the address in log_buf which points + to the static buffer __log_buf. + If a dynamic ringbuffer is used, setup_log_buf() sets + text_data_ring.data of printk_rb_dynamic to the newly allocated external + buffer via the prb_init function. + In that case, the original external static buffer in __log_buf and + printk_rb_static are unused. + ...
+ new_log_buf = memblock_alloc(new_log_buf_len, LOG_ALIGN); + prb_init(&printk_rb_dynamic, new_log_buf, ...); + log_buf = new_log_buf; + prb = &printk_rb_dynamic; + ... + See printk.c and printk_ringbuffer.c in kernel/printk/ folder for more + details. + """ + @classmethod + def symtab_checks(cls, vmlinux) -> bool: + return vmlinux.has_symbol('prb') + + def get_text_from_data_ring(self, text_data_ring, desc, info) -> str: + text_data_sz = text_data_ring.size_bits + text_data_mask = 1 << text_data_sz + + begin = desc.text_blk_lpos.begin % text_data_mask + end = desc.text_blk_lpos.next % text_data_mask + + # This record doesn't contain text + if begin & 1: + return "" + + # This means a wrap-around to the beginning of the buffer + if begin > end: + begin = 0 + + # Each element in the ringbuffer is "ID + data". + # See prb_data_ring struct + desc_id_size = 8 # sizeof(long) + text_start = begin + desc_id_size + offset = text_data_ring.data + text_start + + # Safety first ;) + text_len = min(info.text_len, end - begin) + + return self.get_string(offset, text_len) + + def get_log_lines(self, text_data_ring, desc, info) -> Generator[str, None, None]: + text = self.get_text_from_data_ring(text_data_ring, desc, info) + yield from text.splitlines() + + def get_dict_lines(self, info) -> Generator[str, None, None]: + dict_text = utility.array_to_string(info.dev_info.subsystem) + if dict_text: + yield f" SUBSYSTEM={dict_text}" + + dict_text = utility.array_to_string(info.dev_info.device) + if dict_text: + yield f" DEVICE={dict_text}" + + def run(self) -> Iterator[Tuple[str, str, str, str, str]]: + # static struct printk_ringbuffer *prb = &printk_rb_static; + ringbuffers = self.vmlinux.object_from_symbol(symbol_name='prb').dereference() + + desc_ring = ringbuffers.desc_ring + text_data_ring = ringbuffers.text_data_ring + + desc_count = 1 << desc_ring.count_bits + desc_arr = self.vmlinux.object(object_type="array", + offset=desc_ring.descs, + 
subtype=self.vmlinux.get_type("prb_desc"), + count=desc_count) + info_arr = self.vmlinux.object(object_type="array", + offset=desc_ring.infos, + subtype=self.vmlinux.get_type("printk_info"), + count=desc_count) + + # See kernel/printk/printk_ringbuffer.h + desc_state_var_bytes_sz = 8 # sizeof(long) + desc_state_var_bits_sz = desc_state_var_bytes_sz * 8 + desc_flags_shift = desc_state_var_bits_sz - 2 + desc_flags_mask = 3 << desc_flags_shift + desc_id_mask = ~desc_flags_mask + + cur_id = desc_ring.tail_id.counter + end_id = desc_ring.head_id.counter + while True: + desc = desc_arr[cur_id % desc_count] # type: ignore + info = info_arr[cur_id % desc_count] # type: ignore + desc_state = DescStateEnum((desc.state_var.counter >> desc_flags_shift) & 3) + if desc_state in (DescStateEnum.desc_committed, DescStateEnum.desc_finalized): + facility, level, timestamp, caller = self.get_prefix(info) + level_txt = self.get_level_text(level) + facility_txt = self.get_facility_text(facility) + + for line in self.get_log_lines(text_data_ring, desc, info): + yield facility_txt, level_txt, timestamp, caller, line + for line in self.get_dict_lines(info): + yield facility_txt, level_txt, timestamp, caller, line + + cur_id += 1 + cur_id &= desc_id_mask + if cur_id == end_id: + break + + +class Kmsg(plugins.PluginInterface): + """Kernel log buffer reader""" + + _required_framework_version = (1, 0, 0) + + _version = (1, 0, 0) + + @classmethod + def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: + return [ + requirements.TranslationLayerRequirement(name='primary', + description="Memory layer for the kernel", + architectures=['Intel32', 'Intel64']), + requirements.SymbolTableRequirement(name='vmlinux', + description="Linux kernel symbols"), + ] + + def _generator(self) -> Iterator[Tuple[int, Tuple[str, str, str, str, str]]]: + for values in ABCKmsg.run_all(context=self.context, config=self.config): + yield (0, values) + + def run(self): + return 
renderers.TreeGrid([("facility", str), + ("level", str), + ("timestamp", str), + ("caller", str), + ("line", str)], + self._generator()) # type: ignore From 2705614306ad0db7582f623d66eb9be45a17d9cc Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 13 May 2021 21:49:25 +0100 Subject: [PATCH 128/294] Windows: Remove debugging statements from pdbutil --- volatility3/framework/symbols/windows/pdbutil.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 85d8ab92e8..4ec94cd949 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -348,7 +348,6 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[str, Any, b pattern = b'RSDS' + (b'.' * self._RSDS_format.size) + b'(' + b'|'.join(self._pdb_names) + b')\x00' for match in re.finditer(pattern, data): pdb_name = data[match.start(0) + 4 + self._RSDS_format.size:match.start(0) + len(match.group()) - 1] - print("MATCH", pdb_name) if pdb_name in self._pdb_names: ## this ordering is intentional due to mixed endianness in the GUID (g3, g2, g1, g0, g5, g4, g7, g6, g8, g9, ga, gb, gc, gd, ge, gf, a) = \ @@ -356,5 +355,4 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[str, Any, b guid = (16 * '{:02X}').format(g0, g1, g2, g3, g4, g5, g6, g7, g8, g9, ga, gb, gc, gd, ge, gf) if match.start(0) < self.chunk_size: - print("YIELDING", (guid, a, pdb_name, match.start(0))) yield (guid, a, pdb_name, match.start(0)) From f873ced04e6e7bdd0d91ad4147382fdb589b7aa7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 16 May 2021 17:01:52 +0100 Subject: [PATCH 129/294] Windows: Improve hashdumping plugin errors --- .../framework/plugins/windows/cachedump.py | 35 +++++++++----- .../framework/plugins/windows/hashdump.py | 27 ++++++++--- .../framework/plugins/windows/lsadump.py | 48 ++++++++++--------- 3 files changed, 68 insertions(+), 42 
deletions(-) diff --git a/volatility3/framework/plugins/windows/cachedump.py b/volatility3/framework/plugins/windows/cachedump.py index dc8246ef8b..8436b46f65 100644 --- a/volatility3/framework/plugins/windows/cachedump.py +++ b/volatility3/framework/plugins/windows/cachedump.py @@ -1,20 +1,22 @@ # This file is Copyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # - +import logging from struct import unpack from typing import Tuple from Crypto.Cipher import ARC4, AES from Crypto.Hash import HMAC -from volatility3.framework import interfaces, renderers, exceptions +from volatility3.framework import interfaces, renderers from volatility3.framework.configuration import requirements from volatility3.framework.layers import registry from volatility3.framework.symbols.windows import versions from volatility3.plugins.windows import hashdump, lsadump from volatility3.plugins.windows.registry import hivelist +vollog = logging.getLogger(__name__) + class Cachedump(interfaces.plugins.PluginInterface): """Dumps lsa secrets from memory""" @@ -30,7 +32,8 @@ def get_requirements(cls): architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'lsadump', plugin = lsadump.Lsadump, version = (1, 0, 0)) + requirements.PluginRequirement(name = 'lsadump', plugin = lsadump.Lsadump, version = (1, 0, 0)), + requirements.PluginRequirement(name = 'hashdump', plugin = hashdump.Hashdump, version = (1, 1, 0)) ] @staticmethod @@ -60,7 +63,7 @@ def parse_cache_entry(cache_data: bytes) -> Tuple[int, int, int, bytes, bytes]: (uname_len, domain_len) = unpack(" List[interfaces.objects.ObjectInterface]: user_key_path = "SAM\\Domains\\Account\\Users" - user_key = 
samhive.get_key(user_key_path) + user_key = cls.get_hive_key(samhive, user_key_path) + if not user_key: return [] return [k for k in user_key.get_subkeys() if k.Name != "Names"] @@ -75,7 +87,7 @@ def get_bootkey(cls, syshive: registry.RegistryHive) -> Optional[bytes]: lsa_base = "ControlSet{0:03}".format(cs) + "\\Control\\Lsa" lsa_keys = ["JD", "Skew1", "GBG", "Data"] - lsa = syshive.get_key(lsa_base) + lsa = cls.get_hive_key(syshive, lsa_base) if not lsa: return None @@ -83,9 +95,10 @@ def get_bootkey(cls, syshive: registry.RegistryHive) -> Optional[bytes]: bootkey = '' for lk in lsa_keys: - key = syshive.get_key(lsa_base + '\\' + lk) - - class_data = syshive.read(key.Class + 4, key.ClassLength) + key = cls.get_hive_key(syshive, lsa_base + '\\' + lk) + class_data = None + if key: + class_data = syshive.read(key.Class + 4, key.ClassLength) if class_data is None: return None @@ -102,7 +115,7 @@ def get_hbootkey(cls, samhive: registry.RegistryHive, bootkey: bytes) -> Optiona if not bootkey: return None - sam_account_key = samhive.get_key(sam_account_path) + sam_account_key = cls.get_hive_key(samhive, sam_account_path) if not sam_account_key: return None @@ -270,7 +283,7 @@ def _generator(self, syshive: registry.RegistryHive, samhive: registry.RegistryH rid = int(str(user.get_name()), 16) yield (0, (name, rid, lmout, ntout)) else: - raise ValueError("Hbootkey is not valid") + vollog.warning("Hbootkey is not valid") def run(self): offset = self.config.get('offset', None) diff --git a/volatility3/framework/plugins/windows/lsadump.py b/volatility3/framework/plugins/windows/lsadump.py index a9ee1737b5..c3d765f83f 100644 --- a/volatility3/framework/plugins/windows/lsadump.py +++ b/volatility3/framework/plugins/windows/lsadump.py @@ -31,7 +31,8 @@ def get_requirements(cls): description = 'Memory layer for the kernel', architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), - 
requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)) + requirements.VersionRequirement(name = 'hashdump', component = hashdump.Hashdump, version = (1, 1, 0)), + requirements.VersionRequirement(name = 'hivelist', component = hivelist.HiveList, version = (1, 0, 0)) ] @classmethod @@ -65,7 +66,7 @@ def get_lsa_key(cls, sechive: registry.RegistryHive, bootkey: bytes, vista_or_la else: policy_key = 'PolSecretEncryptionKey' - enc_reg_key = sechive.get_key("Policy\\" + policy_key) + enc_reg_key = hashdump.Hashdump.get_hive_key(sechive, "Policy\\" + policy_key) if not enc_reg_key: return None enc_reg_value = next(enc_reg_key.get_values()) @@ -94,23 +95,21 @@ def get_lsa_key(cls, sechive: registry.RegistryHive, bootkey: bytes, vista_or_la @classmethod def get_secret_by_name(cls, sechive: registry.RegistryHive, name: str, lsakey: bytes, is_vista_or_later: bool): - try: - enc_secret_key = sechive.get_key("Policy\\Secrets\\" + name + "\\CurrVal") - except KeyError: - raise ValueError("Unable to read cache from memory") + enc_secret_key = hashdump.Hashdump.get_hive_key(sechive, "Policy\\Secrets\\" + name + "\\CurrVal") - enc_secret_value = next(enc_secret_key.get_values()) - if not enc_secret_value: - return None + secret = None + if enc_secret_key: + enc_secret_value = next(enc_secret_key.get_values()) + if enc_secret_value: - enc_secret = sechive.read(enc_secret_value.Data + 4, enc_secret_value.DataLength) - if not enc_secret: - return None + enc_secret = sechive.read(enc_secret_value.Data + 4, enc_secret_value.DataLength) + if enc_secret: + + if not is_vista_or_later: + secret = cls.decrypt_secret(enc_secret[0xC:], lsakey) + else: + secret = cls.decrypt_aes(enc_secret, lsakey) - if not is_vista_or_later: - secret = cls.decrypt_secret(enc_secret[0xC:], lsakey) - else: - secret = cls.decrypt_aes(enc_secret, lsakey) return secret @classmethod @@ -133,7 +132,7 @@ def decrypt_secret(cls, secret: bytes, key: bytes): if len(key[j:j + 
7]) < 7: j = len(key[j:j + 7]) - (dec_data_len, ) = unpack(" Date: Sun, 16 May 2021 17:08:56 +0100 Subject: [PATCH 130/294] Windows: Improve netstat errors --- .../framework/plugins/windows/netstat.py | 41 +++++++++++-------- 1 file changed, 23 insertions(+), 18 deletions(-) diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index 70a47f85af..f47c34450a 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -2,8 +2,8 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -import logging import datetime +import logging from typing import Iterable, Optional, Generator, Tuple from volatility3.framework import constants, exceptions, interfaces, renderers, symbols @@ -98,12 +98,12 @@ def parse_bitmap(cls, context: interfaces.context.ContextInterface, layer_name: @classmethod def enumerate_structures_by_port(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - net_symbol_table: str, - port: int, - port_pool_addr: int, - proto="tcp") -> \ + context: interfaces.context.ContextInterface, + layer_name: str, + net_symbol_table: str, + port: int, + port_pool_addr: int, + proto = "tcp") -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all UDP Endpoints and TCP Listeners by parsing UdpPortPool and TcpPortPool. 
@@ -354,12 +354,12 @@ def find_port_pools(cls, context: interfaces.context.ContextInterface, layer_nam @classmethod def list_sockets(cls, - context: interfaces.context.ContextInterface, - layer_name: str, - nt_symbols: str, - net_symbol_table: str, - tcpip_module_offset: int, - tcpip_symbol_table: str) -> \ + context: interfaces.context.ContextInterface, + layer_name: str, + nt_symbols: str, + net_symbol_table: str, + tcpip_module_offset: int, + tcpip_symbol_table: str) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all UDP Endpoints, TCP Listeners and TCP Endpoints in the primary layer that are in tcpip.sys's UdpPortPool, TcpPortPool and TCP Endpoint partition table, respectively. @@ -424,9 +424,12 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): tcpip_module = self.get_tcpip_module(self.context, self.config["primary"], self.config["nt_symbols"]) - tcpip_symbol_table = pdbutil.PDBUtility.symbol_table_from_pdb( - self.context, interfaces.configuration.path_join(self.config_path, 'tcpip'), self.config["primary"], - "tcpip.pdb", tcpip_module.DllBase, tcpip_module.SizeOfImage) + try: + tcpip_symbol_table = pdbutil.PDBUtility.symbol_table_from_pdb( + self.context, interfaces.configuration.path_join(self.config_path, 'tcpip'), self.config["primary"], + "tcpip.pdb", tcpip_module.DllBase, tcpip_module.SizeOfImage) + except exceptions.VolatilityException: + vollog.warning("Unable to locate symbols for the memory image's tcpip module") for netw_obj in self.list_sockets(self.context, self.config['primary'], self.config['nt_symbols'], netscan_symbol_table, tcpip_module.DllBase, tcpip_symbol_table): @@ -494,8 +497,10 @@ def generate_timeline(self): continue description = "Network connection: Process {} {} Local Address {}:{} " \ "Remote Address {}:{} State {} Protocol {} ".format(row_dict["PID"], row_dict["Owner"], - row_dict["LocalAddr"], row_dict["LocalPort"], - row_dict["ForeignAddr"], row_dict["ForeignPort"], + row_dict["LocalAddr"], + 
row_dict["LocalPort"], + row_dict["ForeignAddr"], + row_dict["ForeignPort"], row_dict["State"], row_dict["Proto"]) yield (description, timeliner.TimeLinerType.CREATED, row_dict["Created"]) From ed9c2dd97973d9cb80eecdd2dbbb21ac78c4719d Mon Sep 17 00:00:00 2001 From: doomedraven Date: Tue, 18 May 2021 22:37:34 +0200 Subject: [PATCH 131/294] ability to reverse filtering see long description Hello, this is very useful to optimize some scans, like in case of sandboxing, imagine: 1 round it scans only all processes that was captured by sandbox aka pid_list 2. round it scans all the rest processes ignoring pid_list from round 1 if you have a better idea how to improve/implement this, let me know, we use our custom function, but i think it might be useful for the rest --- volatility3/framework/plugins/windows/pslist.py | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index 1b5a726de1..f0ef2f3cd8 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -83,12 +83,13 @@ def process_dump( return file_handle @classmethod - def create_pid_filter(cls, pid_list: List[int] = None) -> Callable[[interfaces.objects.ObjectInterface], bool]: + def create_pid_filter(cls, pid_list: List[int] = None, exclude: bool = False) -> Callable[[interfaces.objects.ObjectInterface], bool]: """A factory for producing filter functions that filter based on a list of process IDs. 
Args: pid_list: A list of process IDs that are acceptable, all other processes will be filtered out + exclude: Accept only tasks that are not in pid_list Returns: Filter function for passing to the `list_processes` method @@ -98,17 +99,20 @@ def create_pid_filter(cls, pid_list: List[int] = None) -> Callable[[interfaces.o pid_list = pid_list or [] filter_list = [x for x in pid_list if x is not None] if filter_list: - filter_func = lambda x: x.UniqueProcessId not in filter_list + if exclude: + filter_func = lambda x: x.UniqueProcessId in filter_list + else: + filter_func = lambda x: x.UniqueProcessId not in filter_list return filter_func @classmethod - def create_name_filter(cls, name_list: List[str] = None) -> Callable[[interfaces.objects.ObjectInterface], bool]: + def create_name_filter(cls, name_list: List[str] = None, exclude: bool = False) -> Callable[[interfaces.objects.ObjectInterface], bool]: """A factory for producing filter functions that filter based on a list of process names. 
Args: name_list: A list of process names that are acceptable, all other processes will be filtered out - + exclude: Accept only tasks that are not in name_list Returns: Filter function for passing to the `list_processes` method """ @@ -117,7 +121,10 @@ def create_name_filter(cls, name_list: List[str] = None) -> Callable[[interfaces name_list = name_list or [] filter_list = [x for x in name_list if x is not None] if filter_list: - filter_func = lambda x: utility.array_to_string(x.ImageFileName) not in filter_list + if exclude: + filter_func = lambda x: utility.array_to_string(x.ImageFileName) in filter_list + else: + filter_func = lambda x: utility.array_to_string(x.ImageFileName) not in filter_list return filter_func @classmethod From d6aba26aa76089e7524e0b94b30291f544be5512 Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Sat, 22 May 2021 09:14:38 +1000 Subject: [PATCH 132/294] fixing unsigned long size hardcoding --- volatility3/framework/plugins/linux/kmsg.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py index 447237c1c3..981b7b3f0b 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -62,6 +62,7 @@ def __init__( self.layer_name = self._config['primary'] # type: ignore symbol_table_name = self._config['vmlinux'] # type: ignore self.vmlinux = contexts.Module(context, symbol_table_name, self.layer_name, 0) # type: ignore + self.long_unsigned_int_size = self.vmlinux.get_type('long unsigned int').size @classmethod def run_all( @@ -288,7 +289,7 @@ def get_text_from_data_ring(self, text_data_ring, desc, info) -> str: # Each element in the ringbuffer is "ID + data". 
# See prb_data_ring struct - desc_id_size = 8 # sizeof(long) + desc_id_size = self.long_unsigned_int_size text_start = begin + desc_id_size offset = text_data_ring.data + text_start @@ -328,7 +329,7 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: count=desc_count) # See kernel/printk/printk_ringbuffer.h - desc_state_var_bytes_sz = 8 # sizeof(long) + desc_state_var_bytes_sz = self.long_unsigned_int_size desc_state_var_bits_sz = desc_state_var_bytes_sz * 8 desc_flags_shift = desc_state_var_bits_sz - 2 desc_flags_mask = 3 << desc_flags_shift From abafa58f1a474a2a6a6d60e01a611025a21d3108 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 23 May 2021 15:49:28 +0100 Subject: [PATCH 133/294] Windows: Increase self-referential check --- volatility3/framework/automagic/windows.py | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 873a020a0d..3844e89e60 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -128,12 +128,11 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = range(0x1E0, 0x1FF), + ptr_reference = range(0x100, 0x1FF), mask = 0x3FFFFFFFFFF000) # As of Windows-10 RS1+, the ptr_reference is randomized: # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ - # So far, we've only seen examples between 0x1e0 and 0x1ff class DtbTestPae(DtbTest): From dd6ad75b22e10ffc88e5536a39b1c137a6042b9e Mon Sep 17 00:00:00 2001 From: doomedraven Date: Sun, 23 May 2021 17:10:52 +0200 Subject: [PATCH 134/294] Update pslist.py --- volatility3/framework/plugins/windows/pslist.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index f0ef2f3cd8..7b1e504c2d 100644 --- 
a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -21,7 +21,7 @@ class PsList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Lists the processes present in a particular windows memory image.""" _required_framework_version = (1, 0, 0) - _version = (2, 0, 0) + _version = (2, 0, 1) PHYSICAL_DEFAULT = False @classmethod From 9e7e82f026f46b47bb8b14d7749107211bf32a23 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 26 May 2021 22:13:54 +0100 Subject: [PATCH 135/294] Development: Minor mac extract_kernel updates --- development/mac-kdk/extract_kernel.sh | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/development/mac-kdk/extract_kernel.sh b/development/mac-kdk/extract_kernel.sh index 21797a308a..69c5e4b7d0 100755 --- a/development/mac-kdk/extract_kernel.sh +++ b/development/mac-kdk/extract_kernel.sh @@ -44,7 +44,8 @@ ${DWARF2JSON} popd rm -fr tmp -${DWARF2JSON} mac --macho "${UNPACK_DIR}/${KERNEL_DIR}/kernel.dSYM" --macho-symbols "${UNPACK_DIRECTORY}/${KERNEL_DIR}/kernel" | xz -9 > ${JSON_DIR}/${KERNEL_DIR}.json.xz -if [ $? == 0 ]; then - ${DWARF2JSON} mac --arch i386 --macho "${UNPACK_DIR}/${KERNEL_DIR}/kernel.dSYM" --macho-symbols "${UNPACK_DIRECTORY}/${KERNEL_DIR}/kernel" | xz -9 > ${JSON_DIR}/${KERNEL_DIR}.json.xz +echo "Running ${DWARF2JSON} mac --macho "${UNPACK_DIR}/${KERNEL_DIR}/kernel.dSYM" --macho-symbols "${UNPACK_DIR}/${KERNEL_DIR}/kernel" | xz -9 > ${JSON_DIR}/${KERNEL_DIR}.json.xz" +${DWARF2JSON} mac --macho "${UNPACK_DIR}/${KERNEL_DIR}/kernel.dSYM" --macho-symbols "${UNPACK_DIR}/${KERNEL_DIR}/kernel" | xz -9 > ${JSON_DIR}/${KERNEL_DIR}.json.xz +if [ $? 
!= 0 ]; then + ${DWARF2JSON} mac --arch i386 --macho "${UNPACK_DIR}/${KERNEL_DIR}/kernel.dSYM" --macho-symbols "${UNPACK_DIR}/${KERNEL_DIR}/kernel" | xz -9 > ${JSON_DIR}/${KERNEL_DIR}.json.xz fi From 0214fa35a11697f03e7af33a6f90644cde0325c5 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 26 May 2021 23:23:27 +0100 Subject: [PATCH 136/294] Core: Increment minimum python version to 3.6 --- volatility3/framework/__init__.py | 3 +-- .../symbols/windows/extensions/registry.py | 18 ++---------------- 2 files changed, 3 insertions(+), 18 deletions(-) diff --git a/volatility3/framework/__init__.py b/volatility3/framework/__init__.py index b0a2f2b5c6..a8401950a0 100644 --- a/volatility3/framework/__init__.py +++ b/volatility3/framework/__init__.py @@ -3,11 +3,10 @@ # """Volatility 3 framework.""" # Check the python version to ensure it's suitable -# We currently require 3.5.3 since 3.5.1 has no typing.Type and 3.5.2 is broken for ''/delayed encapsulated types import glob import sys -required_python_version = (3, 5, 3) +required_python_version = (3, 6, 0) if (sys.version_info.major != required_python_version[0] or sys.version_info.minor < required_python_version[1] or (sys.version_info.minor == required_python_version[1] and sys.version_info.micro < required_python_version[2])): raise RuntimeError( diff --git a/volatility3/framework/symbols/windows/extensions/registry.py b/volatility3/framework/symbols/windows/extensions/registry.py index 3cf2469ff5..fea385617d 100644 --- a/volatility3/framework/symbols/windows/extensions/registry.py +++ b/volatility3/framework/symbols/windows/extensions/registry.py @@ -30,23 +30,9 @@ class RegValueTypes(enum.Enum): REG_QWORD = 11 REG_UNKNOWN = 99999 - # TODO: This _missing_() method can replace the get() method below - # if support for Python 3.6 is added in the future - # @classmethod - # def _missing_(cls, value): - # return cls(RegValueTypes.REG_UNKNOWN) - @classmethod - def get(cls, value): - """An alternative method for using this 
enum when the value may be - unknown. - - This is used to support unknown value requests in Python <3.6. - """ - try: - return cls(value) - except ValueError: - return cls(RegValueTypes.REG_UNKNOWN) + def _missing_(cls, value): + return cls(RegValueTypes.REG_UNKNOWN) class RegKeyFlags(enum.IntEnum): From 3bb23b29e38b27e0114a2af94861207048d0aa04 Mon Sep 17 00:00:00 2001 From: ikelos Date: Wed, 26 May 2021 23:43:35 +0100 Subject: [PATCH 137/294] Revert "Windows: Increase self-referential check" --- volatility3/framework/automagic/windows.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 3844e89e60..873a020a0d 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -128,11 +128,12 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = range(0x100, 0x1FF), + ptr_reference = range(0x1E0, 0x1FF), mask = 0x3FFFFFFFFFF000) # As of Windows-10 RS1+, the ptr_reference is randomized: # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ + # So far, we've only seen examples between 0x1e0 and 0x1ff class DtbTestPae(DtbTest): From b15e69e11dad74e5a7ee321b8a3fdfd416dde71b Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 1 Jun 2021 13:55:21 -0500 Subject: [PATCH 138/294] ensure the crashinfo plugin gets a crash layer --- volatility3/framework/plugins/windows/crashinfo.py | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 5ec123b442..8797a4855f 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -8,6 +8,7 @@ from volatility3.framework.configuration import requirements from volatility3.framework.renderers import 
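Patch 136 above replaces the enum's `get()` fallback with the `_missing_()` hook that Python 3.6 made usable. As a minimal sketch (using a trimmed-down stand-in for the real `RegValueTypes`, which has many more members), the hook is invoked whenever value lookup fails:

```python
import enum

class RegValueTypes(enum.Enum):
    # Trimmed-down stand-in for the registry value-type enum
    REG_SZ = 1
    REG_DWORD = 4
    REG_UNKNOWN = 99999

    @classmethod
    def _missing_(cls, value):
        # Enum calls this hook when no member matches `value` (Python 3.6+),
        # so unknown registry types degrade to REG_UNKNOWN instead of raising ValueError
        return cls.REG_UNKNOWN

# Known values resolve normally; unknown ones fall through to REG_UNKNOWN
assert RegValueTypes(4) is RegValueTypes.REG_DWORD
assert RegValueTypes(0xDEAD) is RegValueTypes.REG_UNKNOWN
```

Callers can then use plain `RegValueTypes(self.Type)` everywhere, which is exactly what patch 144 later relies on.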
format_hints, conversion from volatility3.framework.objects import utility +from volatility3.framework.layers import crash vollog = logging.getLogger(__name__) @@ -63,6 +64,10 @@ def _generator(self, layer): def run(self): layer = self._context.layers[self.config['primary.memory_layer']] + if not isinstance(layer, crash.WindowsCrashDump32Layer): + vollog.error("This plugin requires a Windows crash dump") + raise + return renderers.TreeGrid([("Signature", str), ("MajorVersion", int), ("MinorVersion", int), From a0adbde99495a8c9638f127c23b34928d5bdc9a1 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 1 Jun 2021 13:55:37 -0500 Subject: [PATCH 139/294] eliminate confusing single letter variable --- volatility3/framework/layers/crash.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index b6c620cb16..f6cf0e0ccd 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -97,9 +97,9 @@ def _load_segments(self) -> None: offset = self.headerpages header.PhysicalMemoryBlockBuffer.Run.count = header.PhysicalMemoryBlockBuffer.NumberOfRuns - for x in header.PhysicalMemoryBlockBuffer.Run: - segments.append((x.BasePage * 0x1000, offset * 0x1000, x.PageCount * 0x1000, x.PageCount * 0x1000)) - offset += x.PageCount + for run in header.PhysicalMemoryBlockBuffer.Run: + segments.append((run.BasePage * 0x1000, offset * 0x1000, run.PageCount * 0x1000, run.PageCount * 0x1000)) + offset += run.PageCount elif self.dump_type == 0x05: summary_header = self.get_summary_header() From d285519ecb502b0d377d090e3fbd6d27895d3cfc Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 1 Jun 2021 14:17:31 -0500 Subject: [PATCH 140/294] don't assume primary.memory_layer is a crash layer...instead, cycle through the layers until finding the crash layer --- volatility3/framework/plugins/windows/crashinfo.py | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 
deletions(-) diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 8797a4855f..47c18292fe 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -63,8 +63,14 @@ def _generator(self, layer): )) def run(self): - layer = self._context.layers[self.config['primary.memory_layer']] - if not isinstance(layer, crash.WindowsCrashDump32Layer): + crash_layer = None + for layer_name in self._context.layers: + layer = self._context.layers[layer_name] + if isinstance(layer, crash.WindowsCrashDump32Layer): + crash_layer = layer + break + + if crash_layer is None: vollog.error("This plugin requires a Windows crash dump") raise @@ -85,4 +91,4 @@ def run(self): ("BitmapHeaderSize", format_hints.Hex), ("BitmapSize", format_hints.Hex), ("BitmapPages", format_hints.Hex), - ], self._generator(layer)) \ No newline at end of file + ], self._generator(crash_layer)) \ No newline at end of file From d2db22c3103e514e123edf6269cf87f22a23e0a4 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 1 Jun 2021 14:20:59 -0500 Subject: [PATCH 141/294] use typing for the layer variable passed to _generator --- volatility3/framework/plugins/windows/crashinfo.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 47c18292fe..cdcbfc9205 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -23,7 +23,7 @@ def get_requirements(cls): architectures = ["Intel32", "Intel64"]), ] - def _generator(self, layer): + def _generator(self, layer: crash.WindowsCrashDump32Layer): header = layer.get_header() uptime = datetime.timedelta(microseconds=int(header.SystemUpTime) / 10) From 7b73f8d5451954d116f5944216b7568cf97507f1 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 1 Jun 2021 14:21:11 -0500 
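The `run()` rewrite above stops assuming `primary.memory_layer` is the crash layer and instead walks every loaded layer until one of the right type is found. A simplified sketch of that lookup pattern, with plain classes standing in for Volatility's layer objects (`context.layers` iterates over layer names and is indexed by name, as in the patch):

```python
def find_layer_of_type(layers, layer_type):
    # Return the first layer that is an instance of layer_type, or None.
    # `layers` mirrors context.layers: iterating yields names, indexing yields layers.
    for layer_name in layers:
        layer = layers[layer_name]
        if isinstance(layer, layer_type):
            return layer
    return None

class FileLayer:  # hypothetical base layer stand-in
    pass

class CrashLayer(FileLayer):  # stand-in for WindowsCrashDump32Layer
    pass

layers = {"base_layer": FileLayer(), "crash_layer": CrashLayer()}
assert isinstance(find_layer_of_type(layers, CrashLayer), CrashLayer)
assert find_layer_of_type({"base": FileLayer()}, CrashLayer) is None
```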
Subject: [PATCH 142/294] fix the copyright date for extensions/crash.py --- volatility3/framework/symbols/windows/extensions/crash.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/extensions/crash.py b/volatility3/framework/symbols/windows/extensions/crash.py index 7c60842e0e..9786f0d40f 100644 --- a/volatility3/framework/symbols/windows/extensions/crash.py +++ b/volatility3/framework/symbols/windows/extensions/crash.py @@ -1,4 +1,4 @@ -# This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 +# This file is Copyright 2021 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # From 7d9c66c407f7c534026f03594d348213fa27cd2b Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Tue, 1 Jun 2021 14:27:20 -0500 Subject: [PATCH 143/294] run yapf on the newly added files --- volatility3/framework/layers/crash.py | 34 ++++---- .../framework/plugins/windows/crashinfo.py | 83 ++++++++++--------- .../symbols/windows/extensions/crash.py | 14 ++-- 3 files changed, 67 insertions(+), 64 deletions(-) diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index f6cf0e0ccd..328e73ed4d 100644 --- a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -29,7 +29,7 @@ class WindowsCrashDump32Layer(segmented.SegmentedLayer): VALIDDUMP = 0x504d5544 crashdump_json = 'crash' - supported_dumptypes = [0x01, 0x05] # we need 0x5 for 32-bit bitmaps + supported_dumptypes = [0x01, 0x05] # we need 0x5 for 32-bit bitmaps dump_header_name = '_DUMP_HEADER' _magic_struct = struct.Struct(' interfaces.objects.ObjectInterface: return self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, - offset=0, - layer_name=self._base_layer) + offset = 0, + layer_name = self._base_layer) def get_summary_header(self) -> 
interfaces.objects.ObjectInterface: return self.context.object(self._crash_common_table_name + constants.BANG + "_SUMMARY_DUMP", - offset=0x1000 * self.headerpages, - layer_name=self._base_layer) + offset = 0x1000 * self.headerpages, + layer_name = self._base_layer) def _load_segments(self) -> None: """Loads up the segments from the meta_layer.""" @@ -92,13 +93,14 @@ def _load_segments(self) -> None: if self.dump_type == 0x1: header = self.context.object(self._crash_table_name + constants.BANG + self.dump_header_name, - offset=0, - layer_name=self._base_layer) + offset = 0, + layer_name = self._base_layer) offset = self.headerpages header.PhysicalMemoryBlockBuffer.Run.count = header.PhysicalMemoryBlockBuffer.NumberOfRuns for run in header.PhysicalMemoryBlockBuffer.Run: - segments.append((run.BasePage * 0x1000, offset * 0x1000, run.PageCount * 0x1000, run.PageCount * 0x1000)) + segments.append( + (run.BasePage * 0x1000, offset * 0x1000, run.PageCount * 0x1000, run.PageCount * 0x1000)) offset += run.PageCount elif self.dump_type == 0x05: @@ -150,18 +152,17 @@ def _load_segments(self) -> None: # report the segments for debugging. this is valuable for dev/troubleshooting but # not important enough for a dedicated plugin. 
for idx, (start_position, mapped_offset, length, _) in enumerate(segments): - vollog.log(constants.LOGLEVEL_VVVV, - "Segment {}: Position {:#x} Offset {:#x} Length {:#x}".format(idx, - start_position, - mapped_offset, - length)) + vollog.log( + constants.LOGLEVEL_VVVV, + "Segment {}: Position {:#x} Offset {:#x} Length {:#x}".format(idx, start_position, mapped_offset, + length)) self._segments = segments @classmethod def check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: int = 0) -> Tuple[int, int]: # Verify the Window's crash dump file magic - + try: header_data = base_layer.read(offset, cls._magic_struct.size) except exceptions.InvalidAddressException: @@ -209,4 +210,3 @@ def stack(cls, except WindowsCrashDumpFormatException: pass return None - diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index cdcbfc9205..0513456570 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -12,6 +12,7 @@ vollog = logging.getLogger(__name__) + class Crashinfo(interfaces.plugins.PluginInterface): _required_framework_version = (1, 0, 0) @@ -20,12 +21,12 @@ def get_requirements(cls): return [ requirements.TranslationLayerRequirement(name = 'primary', description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - ] - + architectures = ["Intel32", "Intel64"]), + ] + def _generator(self, layer: crash.WindowsCrashDump32Layer): header = layer.get_header() - uptime = datetime.timedelta(microseconds=int(header.SystemUpTime) / 10) + uptime = datetime.timedelta(microseconds = int(header.SystemUpTime) / 10) if header.DumpType == 0x1: dump_type = "Full Dump (0x1)" @@ -43,24 +44,25 @@ def _generator(self, layer: crash.WindowsCrashDump32Layer): else: bitmap_header_size = bitmap_size = bitmap_pages = renderers.NotApplicableValue() - yield(0, (utility.array_to_string(header.Signature), - header.MajorVersion, - 
header.MinorVersion, - format_hints.Hex(header.DirectoryTableBase), - format_hints.Hex(header.PfnDataBase), - format_hints.Hex(header.PsLoadedModuleList), - format_hints.Hex(header.PsActiveProcessHead), - header.MachineImageType, - header.NumberProcessors, - format_hints.Hex(header.KdDebuggerDataBlock), - dump_type, - str(uptime), - utility.array_to_string(header.Comment), - conversion.wintime_to_datetime(header.SystemTime), - bitmap_header_size, - bitmap_size, - bitmap_pages, - )) + yield (0, ( + utility.array_to_string(header.Signature), + header.MajorVersion, + header.MinorVersion, + format_hints.Hex(header.DirectoryTableBase), + format_hints.Hex(header.PfnDataBase), + format_hints.Hex(header.PsLoadedModuleList), + format_hints.Hex(header.PsActiveProcessHead), + header.MachineImageType, + header.NumberProcessors, + format_hints.Hex(header.KdDebuggerDataBlock), + dump_type, + str(uptime), + utility.array_to_string(header.Comment), + conversion.wintime_to_datetime(header.SystemTime), + bitmap_header_size, + bitmap_size, + bitmap_pages, + )) def run(self): crash_layer = None @@ -74,21 +76,22 @@ def run(self): vollog.error("This plugin requires a Windows crash dump") raise - return renderers.TreeGrid([("Signature", str), - ("MajorVersion", int), - ("MinorVersion", int), - ("DirectoryTableBase", format_hints.Hex), - ("PfnDataBase", format_hints.Hex), - ("PsLoadedModuleList", format_hints.Hex), - ("PsActiveProcessHead", format_hints.Hex), - ("MachineImageType", int), - ("NumberProcessors", int), - ("KdDebuggerDataBlock", format_hints.Hex), - ("DumpType", str), - ("SystemUpTime", str), - ("Comment", str), - ("SystemTime", datetime.datetime), - ("BitmapHeaderSize", format_hints.Hex), - ("BitmapSize", format_hints.Hex), - ("BitmapPages", format_hints.Hex), - ], self._generator(crash_layer)) \ No newline at end of file + return renderers.TreeGrid([ + ("Signature", str), + ("MajorVersion", int), + ("MinorVersion", int), + ("DirectoryTableBase", format_hints.Hex), + 
("PfnDataBase", format_hints.Hex), + ("PsLoadedModuleList", format_hints.Hex), + ("PsActiveProcessHead", format_hints.Hex), + ("MachineImageType", int), + ("NumberProcessors", int), + ("KdDebuggerDataBlock", format_hints.Hex), + ("DumpType", str), + ("SystemUpTime", str), + ("Comment", str), + ("SystemTime", datetime.datetime), + ("BitmapHeaderSize", format_hints.Hex), + ("BitmapSize", format_hints.Hex), + ("BitmapPages", format_hints.Hex), + ], self._generator(crash_layer)) diff --git a/volatility3/framework/symbols/windows/extensions/crash.py b/volatility3/framework/symbols/windows/extensions/crash.py index 9786f0d40f..8d8200aebf 100644 --- a/volatility3/framework/symbols/windows/extensions/crash.py +++ b/volatility3/framework/symbols/windows/extensions/crash.py @@ -11,17 +11,17 @@ class SUMMARY_DUMP(objects.StructType): def get_buffer(self, sub_type: str, count: int) -> interfaces.objects.ObjectInterface: symbol_table_name = self.get_symbol_table_name() subtype = self._context.symbol_space.get_type(symbol_table_name + constants.BANG + sub_type) - return self._context.object(object_type=symbol_table_name + constants.BANG + "array", - layer_name=self.vol.layer_name, - offset=self.BufferChar.vol.offset, - count=count, - subtype=subtype) + return self._context.object(object_type = symbol_table_name + constants.BANG + "array", + layer_name = self.vol.layer_name, + offset = self.BufferChar.vol.offset, + count = count, + subtype = subtype) def get_buffer_char(self) -> interfaces.objects.ObjectInterface: - return self.get_buffer(sub_type="unsigned char", count=(self.BitmapSize + 7) // 8) + return self.get_buffer(sub_type = "unsigned char", count = (self.BitmapSize + 7) // 8) def get_buffer_long(self) -> interfaces.objects.ObjectInterface: - return self.get_buffer(sub_type="unsigned long", count=(self.BitmapSize + 31) // 32) + return self.get_buffer(sub_type = "unsigned long", count = (self.BitmapSize + 31) // 32) class_types = {'_SUMMARY_DUMP': SUMMARY_DUMP} From 
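The `get_buffer_char()` and `get_buffer_long()` helpers above size their arrays with `(BitmapSize + 7) // 8` and `(BitmapSize + 31) // 32`, the standard ceiling-division idiom for converting a bit count into whole storage units:

```python
def units_for_bits(bit_count, unit_bits):
    # Ceiling division: how many unit_bits-sized words are needed to hold bit_count bits
    return (bit_count + unit_bits - 1) // unit_bits

assert units_for_bits(8, 8) == 1    # exactly one byte
assert units_for_bits(9, 8) == 2    # one spare bit forces a second byte
assert units_for_bits(64, 32) == 2  # two unsigned longs
```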
a11c94a82b99aa3899853b44fa4b5b8935b91963 Mon Sep 17 00:00:00 2001 From: Anthony Fey Date: Fri, 4 Jun 2021 18:52:00 +0200 Subject: [PATCH 144/294] Fixed CM_KEY_VALUE get_decode method --- volatility3/framework/symbols/windows/extensions/registry.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/extensions/registry.py b/volatility3/framework/symbols/windows/extensions/registry.py index fea385617d..fd01365566 100644 --- a/volatility3/framework/symbols/windows/extensions/registry.py +++ b/volatility3/framework/symbols/windows/extensions/registry.py @@ -260,7 +260,7 @@ def decode_data(self) -> Union[int, bytes]: # but the length at the start could be negative so just adding 4 to jump past it data = layer.read(self.Data + 4, datalen) - self_type = RegValueTypes.get(self.Type) + self_type = RegValueTypes(self.Type) if self_type == RegValueTypes.REG_DWORD: if len(data) != struct.calcsize(" Date: Fri, 4 Jun 2021 22:19:12 +0100 Subject: [PATCH 145/294] Poolscan: Further python 3.6 efficiencies --- volatility3/framework/plugins/windows/poolscanner.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py index b2ea449ead..297a99412c 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -18,9 +18,7 @@ vollog = logging.getLogger(__name__) -# TODO: When python3.5 is no longer supported, make this enum.IntFlag -# Revisit the page_type signature of PoolConstraint once using enum.IntFlag -class PoolType(enum.IntEnum): +class PoolType(enum.IntFlag): """Class to maintain the different possible PoolTypes The values must be integer powers of 2.""" @@ -37,7 +35,7 @@ def __init__(self, tag: bytes, type_name: str, object_type: Optional[str] = None, - page_type: Optional[int] = None, + page_type: Optional[PoolType] = None, size: 
Optional[Tuple[Optional[int], Optional[int]]] = None, index: Optional[Tuple[Optional[int], Optional[int]]] = None, alignment: Optional[int] = 1, From 772083ac75e0e5278ee956a63e36808dc5b4672c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 6 Jun 2021 11:50:23 +0100 Subject: [PATCH 146/294] Mac: Fix unguarded read in automagic #515 --- volatility3/framework/automagic/mac.py | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/automagic/mac.py b/volatility3/framework/automagic/mac.py index 3ba1790688..1a285783d6 100644 --- a/volatility3/framework/automagic/mac.py +++ b/volatility3/framework/automagic/mac.py @@ -6,7 +6,7 @@ import struct from typing import Optional -from volatility3.framework import interfaces, constants, layers +from volatility3.framework import interfaces, constants, layers, exceptions from volatility3.framework.automagic import symbol_cache, symbol_finder from volatility3.framework.layers import intel, scanners from volatility3.framework.symbols import mac @@ -79,7 +79,12 @@ def stack(cls, metadata = {'os': 'Mac'}) idlepml4_ptr = table.get_symbol("IdlePML4").address + kaslr_shift - idlepml4_str = layer.read(idlepml4_ptr, 4) + try: + idlepml4_str = layer.read(idlepml4_ptr, 4) + except exceptions.InvalidAddressException: + vollog.log(constants.LOGLEVEL_VVVV, f"Skipping invalid idlepml4_ptr: 0x{idlepml4_str:0x}") + continue + idlepml4_addr = struct.unpack(" Date: Mon, 7 Jun 2021 01:44:21 +0100 Subject: [PATCH 147/294] Mac: Fix minor typo in previous fix --- volatility3/framework/automagic/mac.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/mac.py b/volatility3/framework/automagic/mac.py index 1a285783d6..c8b42e5003 100644 --- a/volatility3/framework/automagic/mac.py +++ b/volatility3/framework/automagic/mac.py @@ -82,7 +82,7 @@ def stack(cls, try: idlepml4_str = layer.read(idlepml4_ptr, 4) except exceptions.InvalidAddressException: - 
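Patch 145 above promotes `PoolType` from `IntEnum` to `enum.IntFlag`, introduced in Python 3.6: members that are integer powers of two can be OR-ed together and tested for membership, which suits a pool constraint that may allow several pool types at once. A small sketch (the member names here are illustrative, not the framework's exact set):

```python
import enum

class PoolType(enum.IntFlag):
    # Values must be integer powers of 2 so they combine cleanly
    PAGED = 1
    NONPAGED = 2
    FREE = 4

allowed = PoolType.PAGED | PoolType.NONPAGED
assert PoolType.NONPAGED in allowed  # flag membership test
assert PoolType.FREE not in allowed
assert int(allowed) == 3             # still behaves as an int
```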
vollog.log(constants.LOGLEVEL_VVVV, f"Skipping invalid idlepml4_ptr: 0x{idlepml4_str:0x}") + vollog.log(constants.LOGLEVEL_VVVV, f"Skipping invalid idlepml4_ptr: 0x{idlepml4_ptr:0x}") continue idlepml4_addr = struct.unpack(" Date: Tue, 22 Jun 2021 10:43:31 -0400 Subject: [PATCH 148/294] Create a list_head to fix tty_check bug This commit casts `tty_driver` to a `list_head` to fix an AttributeError in the linux tty_check plugin --- volatility3/framework/plugins/linux/tty_check.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/linux/tty_check.py b/volatility3/framework/plugins/linux/tty_check.py index fe35d8fa32..f633b99859 100644 --- a/volatility3/framework/plugins/linux/tty_check.py +++ b/volatility3/framework/plugins/linux/tty_check.py @@ -41,7 +41,7 @@ def _generator(self): self.config['vmlinux'], modules) try: - tty_drivers = vmlinux.object_from_symbol("tty_drivers") + tty_drivers = vmlinux.object_from_symbol("tty_drivers").cast("list_head") except exceptions.SymbolError: tty_drivers = None From 4ab0528a276d2652aa5d085831c1c07cda13a6a8 Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Tue, 29 Jun 2021 15:19:37 +1000 Subject: [PATCH 149/294] symtab_checks needs to be abstract. Added a doc string. 
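Patches 146 and 147 above wrap the `IdlePML4` read in a try/except so an unmapped address is skipped and logged (with the pointer, not the never-assigned buffer). The pattern, sketched with stand-ins for the framework's exception and layer types rather than the real classes:

```python
import struct

class InvalidAddressException(Exception):
    # Stand-in for volatility3.framework.exceptions.InvalidAddressException
    pass

class FakeLayer:
    def __init__(self, data, valid):
        self._data, self._valid = data, valid

    def read(self, offset, length):
        if not self._valid:
            raise InvalidAddressException(f"Invalid address: 0x{offset:x}")
        return self._data[offset:offset + length]

def read_u32(layer, offset):
    # Guarded read: return None for unmapped addresses instead of aborting the scan
    try:
        raw = layer.read(offset, 4)
    except InvalidAddressException:
        return None
    return struct.unpack("<I", raw)[0]

assert read_u32(FakeLayer(b"\x78\x56\x34\x12", True), 0) == 0x12345678
assert read_u32(FakeLayer(b"", False), 0) is None
```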
--- volatility3/framework/plugins/linux/kmsg.py | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py index 981b7b3f0b..9ef9c7c6db 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -108,8 +108,15 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: """Walks through the specific kernel implementation.""" @classmethod + @abstractmethod def symtab_checks(cls, vmlinux: interfaces.context.ModuleInterface) -> bool: - pass + """This method on each subclass will be called to evaluate whether the kernel + being analyzed fulfills the type & symbol requirements for the implementation. + The first class returning True will be instantiated and called via the + run() method. + + :return: True if the kernel being analysed fulfills the class requirements. + """ def get_string(self, addr: int, length: int) -> str: txt = self._context.layers[self.layer_name].read(addr, length) # type: ignore From 3d5df753777b99a1ab921c9e767c6683620f2219 Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Tue, 29 Jun 2021 16:03:45 +1000 Subject: [PATCH 150/294] Replacing 'while True' with 'while (condition)' --- volatility3/framework/plugins/linux/kmsg.py | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py index 9ef9c7c6db..67a5540671 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -216,8 +216,9 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: log_first_idx = int(self.vmlinux.object_from_symbol(symbol_name='log_first_idx')) cur_idx = log_first_idx - end_idx = log_first_idx # We don't need log_next_idx here. See below msg.len == 0 + end_idx = None # We don't need log_next_idx here. 
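Patch 149 above marks `symtab_checks` as abstract. Stacking `@classmethod` over `@abstractmethod` (in that order) yields an abstract classmethod that every concrete subclass must override; the first subclass whose check passes is the one selected. A minimal sketch with a hypothetical subclass name and a string standing in for the `vmlinux` module:

```python
from abc import ABC, abstractmethod

class ABCKmsg(ABC):
    @classmethod
    @abstractmethod
    def symtab_checks(cls, vmlinux) -> bool:
        """Return True if this implementation supports the kernel being analysed."""

class KmsgLegacy(ABCKmsg):
    @classmethod
    def symtab_checks(cls, vmlinux) -> bool:
        # A real check would inspect the kernel's types/symbols, e.g. 'log_first_idx'
        return vmlinux == "legacy"

def select_implementation(vmlinux, implementations):
    # Return the first subclass whose check passes, else None
    for impl in implementations:
        if impl.symtab_checks(vmlinux):
            return impl
    return None

assert select_implementation("legacy", [KmsgLegacy]) is KmsgLegacy
assert select_implementation("5.10", [KmsgLegacy]) is None
```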
See below msg.len == 0 + while cur_idx != end_idx: + end_idx = log_first_idx msg_offset = log_buf_ptr + cur_idx # type: ignore msg = self.vmlinux.object(object_type='printk_log', offset=msg_offset) if msg.len == 0: @@ -237,9 +238,6 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: cur_idx += msg.len - if cur_idx == end_idx: - break - class KmsgFiveTen(ABCKmsg): """In 5.10 the kernel ringbuffer implementation changed. @@ -343,8 +341,9 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: desc_id_mask = ~desc_flags_mask cur_id = desc_ring.tail_id.counter - end_id = desc_ring.head_id.counter - while True: + end_id = None + while cur_id != end_id: + end_id = desc_ring.head_id.counter desc = desc_arr[cur_id % desc_count] # type: ignore info = info_arr[cur_id % desc_count] # type: ignore desc_state = DescStateEnum((desc.state_var.counter >> desc_flags_shift) & 3) @@ -360,8 +359,6 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: cur_id += 1 cur_id &= desc_id_mask - if cur_id == end_id: - break class Kmsg(plugins.PluginInterface): From ec04dc9caa8ff614d5aea92c19526ecb0f263713 Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Wed, 30 Jun 2021 13:18:37 +1000 Subject: [PATCH 151/294] Fix issue #522: private attribute names mangle As per https://docs.python.org/3/tutorial/classes.html#private-variables Python will mangle private attribute names from `__attrname` to `_classname__attrname` to avoid name clashes of names with names defined by subclasses. This will happen even if subclasses are not involved i.e.: calling `type_member.__foo` from a plugin classmethod. Note that `__foo` is not meant to be a Python private attribute, but the actual name of the type member. Like sock.__sk_common here: https://github.com/torvalds/linux/blob/62fb9874f5da54fdb243003b386128037319b219/include/net/sock.h#L354 We need to strip the '_classname' prefix from the attribute's name before continuing with the member attribute lookup. 
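The rewrite in patch 150 above replaces `while True` plus a bottom `break` with a `while cur != end` loop in which `end` starts as `None`: the first comparison always succeeds, giving a do-while in Python, and the end marker is refreshed inside the loop. A simplified sketch, using a modulo wrap instead of the kernel's `msg.len == 0` wrap marker:

```python
def walk_ring(entries, first_idx):
    # Visit every slot of a ring buffer starting at first_idx
    visited = []
    cur_idx = first_idx
    end_idx = None  # None forces at least one iteration, like a do-while loop
    while cur_idx != end_idx:
        end_idx = first_idx  # the loop ends once we wrap back to the start
        visited.append(entries[cur_idx])
        cur_idx = (cur_idx + 1) % len(entries)
    return visited

assert walk_ring(["c", "a", "b"], 1) == ["a", "b", "c"]
assert walk_ring(["x"], 0) == ["x"]  # single-slot ring still visited once
```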
--- volatility3/framework/objects/__init__.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 2f28296531..2129c9d819 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -732,6 +732,10 @@ def member(self, attr: str = 'member') -> object: def __getattr__(self, attr: str) -> Any: """Method for accessing members of the type.""" + + if attr.startswith("_") and not attr.startswith("__") and "__" in attr: + attr = attr[attr.find("_", 1):] # See issue #522 + if attr in ['_concrete_members', 'vol']: raise AttributeError("Object has not been properly initialized") if attr in self._concrete_members: From e441742a836829e416828f5d40902229e952b920 Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Thu, 1 Jul 2021 08:21:18 +1000 Subject: [PATCH 152/294] Moving the `if` down after the concrete members are checked. --- volatility3/framework/objects/__init__.py | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 2129c9d819..f9c87931ce 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -733,14 +733,13 @@ def member(self, attr: str = 'member') -> object: def __getattr__(self, attr: str) -> Any: """Method for accessing members of the type.""" - if attr.startswith("_") and not attr.startswith("__") and "__" in attr: - attr = attr[attr.find("_", 1):] # See issue #522 - if attr in ['_concrete_members', 'vol']: raise AttributeError("Object has not been properly initialized") if attr in self._concrete_members: return self._concrete_members[attr] - elif attr in self.vol.members: + if attr.startswith("_") and not attr.startswith("__") and "__" in attr: + attr = attr[attr.find("_", 1):] # See issue #522 + if attr in self.vol.members: mask = 
self._context.layers[self.vol.layer_name].address_mask relative_offset, template = self.vol.members[attr] if isinstance(template, templates.ReferenceTemplate): From f1079e7e9b5f93f6d0556cfc479253a089c05451 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Sat, 10 Jul 2021 08:32:40 -0500 Subject: [PATCH 153/294] fix up required framework version for crashinfo --- volatility3/framework/constants/__init__.py | 2 +- volatility3/framework/plugins/windows/crashinfo.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/constants/__init__.py b/volatility3/framework/constants/__init__.py index c04fbdb460..a43a55501b 100644 --- a/volatility3/framework/constants/__init__.py +++ b/volatility3/framework/constants/__init__.py @@ -39,7 +39,7 @@ # We use the SemVer 2.0.0 versioning scheme VERSION_MAJOR = 1 # Number of releases of the library with a breaking change -VERSION_MINOR = 0 # Number of changes that only add to the interface +VERSION_MINOR = 1 # Number of changes that only add to the interface VERSION_PATCH = 1 # Number of changes that do not change the interface VERSION_SUFFIX = "" diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 0513456570..46ea6f1f0e 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -14,7 +14,7 @@ class Crashinfo(interfaces.plugins.PluginInterface): - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 1, 0) @classmethod def get_requirements(cls): From 42ef21fd29dee0437e0f7a54cd7cf4d8a334cc2c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 11 Jul 2021 22:37:32 +0100 Subject: [PATCH 154/294] Documentation: Clarify linux ISF generation --- doc/source/symbol-tables.rst | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/doc/source/symbol-tables.rst b/doc/source/symbol-tables.rst index aecc4d6f91..31329b9ebc 
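The fix in patches 151 and 152 above deals with CPython's private-name mangling: inside a class body, `obj.__sk_common` is compiled to `obj._ClassName__sk_common`, even though `__sk_common` is a genuine kernel struct member, so `__getattr__` must strip the injected `_ClassName` prefix before looking the member up. A sketch of both the mangling and the unmangling step (`StructProxy` here simply echoes the looked-up name instead of performing a real member lookup):

```python
class StructProxy:
    def __getattr__(self, attr):
        # Undo CPython private-name mangling: '_Caller__member' -> '__member'
        if attr.startswith("_") and not attr.startswith("__") and "__" in attr:
            attr = attr[attr.find("_", 1):]
        return attr  # a real implementation would now resolve the member

class Plugin:
    @staticmethod
    def fetch(sock):
        # Compiled as sock._Plugin__sk_common because of name mangling
        return sock.__sk_common

assert Plugin.fetch(StructProxy()) == "__sk_common"
assert StructProxy().tasks == "tasks"  # unmangled names pass straight through
```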
100644 --- a/doc/source/symbol-tables.rst +++ b/doc/source/symbol-tables.rst @@ -47,14 +47,30 @@ banner), which Volatility's automagic can detect. Volatility caches the mapping tables they come from, meaning the precise file names don't matter and can be organized under any necessary hierarchy under the operating system directory. -Linux and Mac symbol tables can be generated from a DWARF file using a tool called `dwarf2json `_. Currently a kernel -with debugging symbols is the only suitable means for recovering all the information required by most Volatility plugins. +Linux and Mac symbol tables can be generated from a DWARF file using a tool called `dwarf2json `_. +Currently a kernel with debugging symbols is the only suitable means for recovering all the information required by +most Volatility plugins. Note that in most linux distributions, the standard kernel is stripped of debugging information +and the kernel with debugging information is stored in a package that must be acquired separately. + +A generic table isn't guaranteed to produce accurate results, and would reduce the number of structures +that all plugins could rely on. As such, and because linux kernels with different configurations can produce different structures, +volatility 3 requires that the banners in the JSON file match the banners found in the image *exactly*, not just the version +number. This can include elements such as the compilation time and even the version of gcc used for the compilation. +The exact match is required to ensure that the results volatility returns are accurate, therefore there is no simple means +provided to get the wrong JSON ISF file to easily match. + To determine the string for a particular memory image, use the `banners` plugin. Once the specific banner is known, -try to locate that exact kernel debugging package for the operating system. +try to locate that exact kernel debugging package for the operating system. 
Unfortunately each distribution provides +its debugging packages under different package names and there are so many that the distribution may not keep all old +versions of the debugging symbols, and therefore **it may not be possible to find the right symbols to analyze a linux +memory image with volatlity**. With Macs there are far fewer kernels and only one distribution, making it easier to +ensure that the right symbols can be found. Once a kernel with debugging symbols/appropriate DWARF file has been located, `dwarf2json `_ will convert it into an appropriate JSON file. Example code for automatically creating a JSON from URLs for the kernel debugging package and -the package containing the Systemp.map, can be found in `stock-linux-json.py `. +the package containing the System.map, can be found in `stock-linux-json.py `_ . +The System.map file is recommended for completeness, but a kernel with debugging information often contains the same +symbol offsets within the DWARF data, which dwarf2json can extract into the JSON ISF file. The banners available for volatility to use can be found using the `isfinfo` plugin, but this will potentially take a long time to run depending on the number of JSON files available. This will list all the JSON (ISF) files that From c2b290dc42dac2c02e820d01fc3a90f22661e657 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 12 Jul 2021 10:48:43 +0100 Subject: [PATCH 155/294] Windows: Fix up pdb.json for LF_UDT_SRC_LINE It looks like some copypasta snuck in the manual pdb.json. The offsets for the LF_UDT_SRC_LINE as was the total structure size for LF_UDT_MOD_SRC_LINE. 
Fixes #527 --- volatility3/framework/symbols/windows/pdb.json | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdb.json b/volatility3/framework/symbols/windows/pdb.json index 42c11ec26a..0496803266 100644 --- a/volatility3/framework/symbols/windows/pdb.json +++ b/volatility3/framework/symbols/windows/pdb.json @@ -1305,14 +1305,14 @@ } }, "source_file": { - "offset": 0, + "offset": 4, "type": { "kind": "base", "name": "unsigned long" } }, "line": { - "offset": 0, + "offset": 8, "type": { "kind": "base", "name": "unsigned long" @@ -1354,7 +1354,7 @@ } }, "kind": "struct", - "size": 12 + "size": 16 }, "LF_UNION": { "fields": { From e28910f670773e60d61a277c4fe75466c5f977b1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 12 Jul 2021 12:03:22 +0100 Subject: [PATCH 156/294] Windows: Ensure we set the kernel_virtual_offset before other reqs It appears in issue #524 that the step to fulfil symbol requirements can throw an exception (which then prevents the kernel_virtual_offset (which is optional) from being set appropriately. This sets the kvo first and adds a check to where it's set. It doesn't get to the root of why an exception is thrown, but should ensure it's easier to spot if it goes wrong. Fixes #524. 
--- volatility3/framework/automagic/pdbscan.py | 2 +- volatility3/framework/interfaces/context.py | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py index e2467dff57..786f3d3309 100644 --- a/volatility3/framework/automagic/pdbscan.py +++ b/volatility3/framework/automagic/pdbscan.py @@ -326,8 +326,8 @@ def __call__(self, if symbol_req.unsatisfied(context, parent_path): valid_kernel = self.determine_valid_kernel(context, potential_layers, progress_callback) if valid_kernel: - self.recurse_symbol_fulfiller(context, valid_kernel, progress_callback) self.set_kernel_virtual_offset(context, valid_kernel) + self.recurse_symbol_fulfiller(context, valid_kernel, progress_callback) if progress_callback is not None: progress_callback(100, "PDB scanning finished") diff --git a/volatility3/framework/interfaces/context.py b/volatility3/framework/interfaces/context.py index 6fede426a8..4e81d8e53a 100644 --- a/volatility3/framework/interfaces/context.py +++ b/volatility3/framework/interfaces/context.py @@ -143,6 +143,8 @@ def __init__(self, self._context = context self._module_name = module_name self._layer_name = layer_name + if not isinstance(offset, int): + raise TypeError(f"Module offset must be an int not {type(offset)}") self._offset = offset self._native_layer_name = None if native_layer_name: From 7f6378fd5def39d4e8326edfcc5142f558778fcc Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 12 Jul 2021 12:27:53 +0100 Subject: [PATCH 157/294] Automagic: Changing stacking order for linux/mac The linux/mac stackers are more accurate (based on banners) than the windows stacker (based on offsets). As such in cases where both would match, we should go for the more accurate match first (linux/mac) over windows. 
In most cases this will make no difference because of the exclusion lists, so only one stacker will run, but in cases such as volshell where all stackers are run, this may help with certain edge cases. --- volatility3/framework/automagic/linux.py | 2 +- volatility3/framework/automagic/mac.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/automagic/linux.py b/volatility3/framework/automagic/linux.py index 70b89dd2d9..fb1f6fc750 100644 --- a/volatility3/framework/automagic/linux.py +++ b/volatility3/framework/automagic/linux.py @@ -14,7 +14,7 @@ class LinuxIntelStacker(interfaces.automagic.StackerLayerInterface): - stack_order = 45 + stack_order = 35 exclusion_list = ['mac', 'windows'] @classmethod diff --git a/volatility3/framework/automagic/mac.py b/volatility3/framework/automagic/mac.py index c8b42e5003..1a039af8d3 100644 --- a/volatility3/framework/automagic/mac.py +++ b/volatility3/framework/automagic/mac.py @@ -15,7 +15,7 @@ class MacIntelStacker(interfaces.automagic.StackerLayerInterface): - stack_order = 45 + stack_order = 35 exclusion_list = ['windows', 'linux'] @classmethod From 88a8cde9ac2febc1028d0160587a97912e5fbd94 Mon Sep 17 00:00:00 2001 From: superponible Date: Wed, 24 Feb 2021 10:34:38 -0600 Subject: [PATCH 158/294] #457 - add debugging for registry mapping --- volatility3/framework/layers/registry.py | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/layers/registry.py b/volatility3/framework/layers/registry.py index 32acbc4a9c..82a6ad1507 100644 --- a/volatility3/framework/layers/registry.py +++ b/volatility3/framework/layers/registry.py @@ -66,13 +66,13 @@ def __init__(self, self._hive_maxaddr_non_volatile = self.hive.Storage[0].Length self._hive_maxaddr_volatile = self.hive.Storage[1].Length self._maxaddr = 0x80000000 | self._hive_maxaddr_volatile - vollog.log(constants.LOGLEVEL_VVV, "Setting hive max address to {}".format(hex(self._maxaddr))) + 
vollog.log(constants.LOGLEVEL_VVV, "Setting hive {} max address to {}".format(self.name, hex(self._maxaddr))) except exceptions.InvalidAddressException: self._hive_maxaddr_non_volatile = 0x7fffffff self._hive_maxaddr_volatile = 0x7fffffff self._maxaddr = 0x80000000 | self._hive_maxaddr_volatile vollog.log(constants.LOGLEVEL_VVV, - "Exception when setting hive max address, using {}".format(hex(self._maxaddr))) + "Exception when setting hive {} max address, using {}".format(self.name, hex(self._maxaddr))) def _get_hive_maxaddr(self, volatile): return self._hive_maxaddr_volatile if volatile else self._hive_maxaddr_non_volatile @@ -199,6 +199,12 @@ def _translate(self, offset: int) -> int: # Ignore the volatile bit when determining maxaddr validity volatile = self._mask(offset, 31, 31) >> 31 if offset & 0x7fffffff > self._get_hive_maxaddr(volatile): + vollog.log(constants.LOGLEVEL_VVV, + "Couldn't translate offset {}, greater than {} in {} store of {}".format( + hex(offset & 0x7fffffff), + hex(self._get_hive_maxaddr(volatile)), + "volatile" if volatile else "non-volatile", + self.name)) raise RegistryInvalidIndex(self.name, "Mapping request for value greater than maxaddr") storage = self.hive.Storage[volatile] From 8c7b6b29d1b94d750c059cdb211101997eb561b0 Mon Sep 17 00:00:00 2001 From: superponible Date: Wed, 10 Mar 2021 14:50:27 -0600 Subject: [PATCH 159/294] #457 - include hive name in debug message --- volatility3/framework/layers/registry.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/layers/registry.py b/volatility3/framework/layers/registry.py index 82a6ad1507..1cf7bcaf6a 100644 --- a/volatility3/framework/layers/registry.py +++ b/volatility3/framework/layers/registry.py @@ -200,11 +200,12 @@ def _translate(self, offset: int) -> int: volatile = self._mask(offset, 31, 31) >> 31 if offset & 0x7fffffff > self._get_hive_maxaddr(volatile): vollog.log(constants.LOGLEVEL_VVV, - "Couldn't translate offset {}, greater than 
{} in {} store of {}".format( + "Layer {} couldn't translate offset {}, greater than {} in {} store of {}".format( + self.name, hex(offset & 0x7fffffff), hex(self._get_hive_maxaddr(volatile)), "volatile" if volatile else "non-volatile", - self.name)) + self.get_name())) raise RegistryInvalidIndex(self.name, "Mapping request for value greater than maxaddr") storage = self.hive.Storage[volatile] From a7cc978ac6f9b2f650e97c8e17e35b332c8445c8 Mon Sep 17 00:00:00 2001 From: superponible Date: Wed, 10 Mar 2021 14:52:21 -0600 Subject: [PATCH 160/294] #457 - filter hives for getsids and getservicesids --- volatility3/framework/plugins/windows/getservicesids.py | 5 +++-- volatility3/framework/plugins/windows/getsids.py | 1 + 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/getservicesids.py b/volatility3/framework/plugins/windows/getservicesids.py index 2b5ab11f4c..f8a78dfcdd 100644 --- a/volatility3/framework/plugins/windows/getservicesids.py +++ b/volatility3/framework/plugins/windows/getservicesids.py @@ -62,13 +62,14 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] def _generator(self): - # Go all over the hives + # Get the system hive for hive in hivelist.HiveList.list_hives(context = self.context, base_config_path = self.config_path, layer_name = self.config['primary'], symbol_table = self.config['nt_symbols'], + filter_string = 'machine\\system', hive_offsets = None): - # Get ConrolSet\Services. + # Get ControlSet\Services. 
try: services = hive.get_key(r"CurrentControlSet\Services") except (KeyError, exceptions.InvalidAddressException): diff --git a/volatility3/framework/plugins/windows/getsids.py b/volatility3/framework/plugins/windows/getsids.py index 30503b2bff..89b1f5f4f7 100644 --- a/volatility3/framework/plugins/windows/getsids.py +++ b/volatility3/framework/plugins/windows/getsids.py @@ -81,6 +81,7 @@ def lookup_user_sids(self) -> Dict[str, str]: base_config_path = self.config_path, layer_name = self.config['primary'], symbol_table = self.config['nt_symbols'], + filter_string = 'config\\software', hive_offsets = None): try: From 446510ab5fc952bcffb3d5adb7f020e3da231ec8 Mon Sep 17 00:00:00 2001 From: superponible Date: Mon, 12 Jul 2021 11:40:30 -0500 Subject: [PATCH 161/294] issue 528 - change how registry enum members are accessed --- volatility3/framework/plugins/windows/getsids.py | 4 ++-- volatility3/framework/plugins/windows/registry/printkey.py | 6 +++--- volatility3/plugins/windows/registry/certificates.py | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/volatility3/framework/plugins/windows/getsids.py b/volatility3/framework/plugins/windows/getsids.py index 30503b2bff..a33a30e586 100644 --- a/volatility3/framework/plugins/windows/getsids.py +++ b/volatility3/framework/plugins/windows/getsids.py @@ -96,9 +96,9 @@ def lookup_user_sids(self) -> Dict[str, str]: value_data = node.decode_data() if isinstance(value_data, int): value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-8') - elif registry.RegValueTypes.get(node.Type) == registry.RegValueTypes.REG_BINARY: + elif registry.RegValueTypes[node.Type] == registry.RegValueTypes.REG_BINARY: value_data = format_hints.MultiTypeData(value_data, show_hex = True) - elif registry.RegValueTypes.get(node.Type) == registry.RegValueTypes.REG_MULTI_SZ: + elif registry.RegValueTypes[node.Type] == registry.RegValueTypes.REG_MULTI_SZ: value_data = format_hints.MultiTypeData(value_data, encoding = 
'utf-16-le', split_nulls = True) diff --git a/volatility3/framework/plugins/windows/registry/printkey.py b/volatility3/framework/plugins/windows/registry/printkey.py index 5b0a909fc4..4c90e08b58 100644 --- a/volatility3/framework/plugins/windows/registry/printkey.py +++ b/volatility3/framework/plugins/windows/registry/printkey.py @@ -123,7 +123,7 @@ def _printkey_iterator(self, value_node_name = renderers.UnreadableValue() try: - value_type = RegValueTypes.get(node.Type).name + value_type = RegValueTypes[node.Type].name except (exceptions.InvalidAddressException, RegistryFormatException) as excp: vollog.debug(excp) value_type = renderers.UnreadableValue() @@ -137,9 +137,9 @@ def _printkey_iterator(self, if isinstance(value_data, int): value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-8') - elif RegValueTypes.get(node.Type) == RegValueTypes.REG_BINARY: + elif RegValueTypes[node.Type] == RegValueTypes.REG_BINARY: value_data = format_hints.MultiTypeData(value_data, show_hex = True) - elif RegValueTypes.get(node.Type) == RegValueTypes.REG_MULTI_SZ: + elif RegValueTypes[node.Type] == RegValueTypes.REG_MULTI_SZ: value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-16-le', split_nulls = True) diff --git a/volatility3/plugins/windows/registry/certificates.py b/volatility3/plugins/windows/registry/certificates.py index 909e852fb0..c4ae0bf371 100644 --- a/volatility3/plugins/windows/registry/certificates.py +++ b/volatility3/plugins/windows/registry/certificates.py @@ -50,7 +50,7 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[str, str, str, str]]]: node_path = hive.get_key(top_key, return_list = True) for (depth, is_key, last_write_time, key_path, volatility, node) in printkey.PrintKey.key_iterator(hive, node_path, recurse = True): - if not is_key and RegValueTypes.get(node.Type).name == "REG_BINARY": + if not is_key and RegValueTypes[node.Type].name == "REG_BINARY": name, certificate_data = self.parse_data(node.decode_data()) 
unique_key_offset = key_path.index(top_key) + len(top_key) + 1 reg_section = key_path[unique_key_offset:key_path.index("\\", unique_key_offset)] From d613b384d2cc8b29943bc8e5b7f12af338cb174b Mon Sep 17 00:00:00 2001 From: superponible Date: Mon, 12 Jul 2021 21:55:30 -0500 Subject: [PATCH 162/294] #457 - raise log level and use f-strings --- volatility3/framework/layers/registry.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/layers/registry.py b/volatility3/framework/layers/registry.py index 1cf7bcaf6a..6f0d66bebe 100644 --- a/volatility3/framework/layers/registry.py +++ b/volatility3/framework/layers/registry.py @@ -66,13 +66,13 @@ def __init__(self, self._hive_maxaddr_non_volatile = self.hive.Storage[0].Length self._hive_maxaddr_volatile = self.hive.Storage[1].Length self._maxaddr = 0x80000000 | self._hive_maxaddr_volatile - vollog.log(constants.LOGLEVEL_VVV, "Setting hive {} max address to {}".format(self.name, hex(self._maxaddr))) + vollog.log(constants.LOGLEVEL_VVVV, f"Setting hive {self.name} max address to {hex(self._maxaddr)}") except exceptions.InvalidAddressException: self._hive_maxaddr_non_volatile = 0x7fffffff self._hive_maxaddr_volatile = 0x7fffffff self._maxaddr = 0x80000000 | self._hive_maxaddr_volatile - vollog.log(constants.LOGLEVEL_VVV, - "Exception when setting hive {} max address, using {}".format(self.name, hex(self._maxaddr))) + vollog.log(constants.LOGLEVEL_VVVV, + f"Exception when setting hive {self.name} max address, using {hex(self._maxaddr)}") def _get_hive_maxaddr(self, volatile): return self._hive_maxaddr_volatile if volatile else self._hive_maxaddr_non_volatile From e0f08cdead558ffb6932df0dee2ef6a0003797ba Mon Sep 17 00:00:00 2001 From: Gohar Irfan Chaudhry Date: Tue, 13 Jul 2021 10:00:32 -0700 Subject: [PATCH 163/294] Windows: Fix up pdb.json for LF_UDT_MOD_SRC_LINE The module field in `LF_UDT_MOD_SRC_LINE` should be `unsigned short` instead of `string` as per: ``` typedef struct 
lfUdtModSrcLine { unsigned short leaf; // LF_UDT_MOD_SRC_LINE CV_typ_t type; // UDT's type index CV_ItemId src; // index into string table where source file name is saved unsigned long line; // line number unsigned short imod; // module that contributes this UDT definition } lfUdtModSrcLine; ``` ([source](https://github.com/microsoft/microsoft-pdb/blob/082c5290e5aff028ae84e43affa8be717aa7af73/include/cvinfo.h#L1707)) This also changes the size of the struct from 16 to 14. --- volatility3/framework/symbols/windows/pdb.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdb.json b/volatility3/framework/symbols/windows/pdb.json index 0496803266..8c365a72b2 100644 --- a/volatility3/framework/symbols/windows/pdb.json +++ b/volatility3/framework/symbols/windows/pdb.json @@ -1349,12 +1349,12 @@ "offset": 12, "type": { "kind": "base", - "name": "string" + "name": "unsigned short" } } }, "kind": "struct", - "size": 16 + "size": 14 }, "LF_UNION": { "fields": { From a795a7e2d5204cff4f140cf942dabb73c67e95a0 Mon Sep 17 00:00:00 2001 From: Gustavo Moreira Date: Wed, 14 Jul 2021 20:45:43 +1000 Subject: [PATCH 164/294] Using double underscore in the find() will also include edge cases like when the calling class contains an underscore i.e. 
THE_CLASS --- volatility3/framework/objects/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index f9c87931ce..3f2e70b026 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -738,7 +738,7 @@ def __getattr__(self, attr: str) -> Any: if attr in self._concrete_members: return self._concrete_members[attr] if attr.startswith("_") and not attr.startswith("__") and "__" in attr: - attr = attr[attr.find("_", 1):] # See issue #522 + attr = attr[attr.find("__", 1):] # See issue #522 if attr in self.vol.members: mask = self._context.layers[self.vol.layer_name].address_mask relative_offset, template = self.vol.members[attr] From 3ca5461f9563f4e5b65a40da46a62145442a1e00 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 14 Jul 2021 15:47:28 +0100 Subject: [PATCH 165/294] Windows: Ensure suitable progress callback --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 873a020a0d..5b8340abd8 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -340,7 +340,7 @@ def stack(cls, # Check for the self-referential pointer if layer is None: - hits = base_layer.scan(context, PageMapScanner(WintelHelper.tests)) + hits = base_layer.scan(context, PageMapScanner(WintelHelper.tests), progress_callback = progress_callback) layer = None config_path = None for test, dtb in hits: From 9f52a15734bd07d8ff21d978d579bf8a03c92ea1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 18 Jul 2021 15:53:33 +0100 Subject: [PATCH 166/294] Core: Convert to format strings across the whole base --- volatility3/cli/__init__.py | 60 +++++++++---------- volatility3/cli/text_renderer.py | 38 ++++++------ volatility3/cli/volargparse.py | 6 +- 
volatility3/cli/volshell/__init__.py | 12 ++-- volatility3/cli/volshell/generic.py | 26 ++++---- volatility3/cli/volshell/linux.py | 4 +- volatility3/cli/volshell/mac.py | 4 +- volatility3/cli/volshell/windows.py | 2 +- volatility3/framework/__init__.py | 4 +- volatility3/framework/automagic/__init__.py | 4 +- .../framework/automagic/construct_layers.py | 4 +- volatility3/framework/automagic/linux.py | 4 +- volatility3/framework/automagic/mac.py | 10 ++-- volatility3/framework/automagic/pdbscan.py | 4 +- volatility3/framework/automagic/stacker.py | 12 ++-- .../framework/automagic/symbol_cache.py | 10 ++-- .../framework/automagic/symbol_finder.py | 8 +-- volatility3/framework/automagic/windows.py | 2 +- .../framework/configuration/requirements.py | 16 ++--- volatility3/framework/contexts/__init__.py | 4 +- .../framework/interfaces/configuration.py | 6 +- volatility3/framework/interfaces/layers.py | 22 +++---- volatility3/framework/interfaces/objects.py | 12 ++-- volatility3/framework/interfaces/plugins.py | 2 +- volatility3/framework/interfaces/symbols.py | 2 +- volatility3/framework/layers/crash.py | 16 ++--- volatility3/framework/layers/elf.py | 10 ++-- volatility3/framework/layers/intel.py | 2 +- volatility3/framework/layers/leechcore.py | 2 +- volatility3/framework/layers/lime.py | 10 ++-- volatility3/framework/layers/linear.py | 8 +-- volatility3/framework/layers/msf.py | 2 +- volatility3/framework/layers/physical.py | 2 +- volatility3/framework/layers/qemu.py | 8 +-- volatility3/framework/layers/registry.py | 2 +- volatility3/framework/layers/resources.py | 10 ++-- volatility3/framework/layers/segmented.py | 2 +- volatility3/framework/layers/vmware.py | 4 +- volatility3/framework/objects/__init__.py | 28 ++++----- volatility3/framework/objects/templates.py | 2 +- volatility3/framework/plugins/__init__.py | 2 +- volatility3/framework/plugins/configwriter.py | 2 +- volatility3/framework/plugins/isfinfo.py | 2 +- volatility3/framework/plugins/layerwriter.py | 8 
+-- volatility3/framework/plugins/linux/bash.py | 2 +- .../framework/plugins/linux/check_creds.py | 2 +- volatility3/framework/plugins/mac/bash.py | 2 +- volatility3/framework/plugins/mac/ifconfig.py | 2 +- volatility3/framework/plugins/mac/netstat.py | 4 +- volatility3/framework/plugins/mac/pslist.py | 2 +- volatility3/framework/plugins/timeliner.py | 8 +-- .../framework/plugins/windows/callbacks.py | 2 +- .../framework/plugins/windows/cmdline.py | 4 +- .../framework/plugins/windows/crashinfo.py | 2 +- .../framework/plugins/windows/dlllist.py | 4 +- .../framework/plugins/windows/dumpfiles.py | 20 +++---- .../framework/plugins/windows/handles.py | 10 ++-- .../framework/plugins/windows/hashdump.py | 4 +- volatility3/framework/plugins/windows/info.py | 4 +- .../framework/plugins/windows/memmap.py | 2 +- .../framework/plugins/windows/modscan.py | 2 +- .../framework/plugins/windows/netscan.py | 14 ++--- .../framework/plugins/windows/netstat.py | 22 +++---- .../framework/plugins/windows/poolscanner.py | 8 +-- .../framework/plugins/windows/privileges.py | 2 +- .../framework/plugins/windows/pslist.py | 10 ++-- .../framework/plugins/windows/psscan.py | 2 +- .../framework/plugins/windows/pstree.py | 2 +- .../plugins/windows/registry/hivelist.py | 4 +- .../plugins/windows/registry/printkey.py | 4 +- .../plugins/windows/registry/userassist.py | 2 +- .../framework/plugins/windows/strings.py | 6 +- .../framework/plugins/windows/symlinkscan.py | 2 +- .../framework/plugins/windows/vadinfo.py | 6 +- volatility3/framework/plugins/yarascan.py | 4 +- volatility3/framework/renderers/__init__.py | 6 +- volatility3/framework/symbols/__init__.py | 12 ++-- volatility3/framework/symbols/intermed.py | 38 ++++++------ .../framework/symbols/linux/__init__.py | 8 +-- .../symbols/linux/extensions/__init__.py | 6 +- .../framework/symbols/linux/extensions/elf.py | 2 +- volatility3/framework/symbols/mac/__init__.py | 2 +- .../symbols/mac/extensions/__init__.py | 4 +- 
volatility3/framework/symbols/native.py | 2 +- .../symbols/windows/extensions/__init__.py | 30 +++++----- .../symbols/windows/extensions/network.py | 10 ++-- .../symbols/windows/extensions/pe.py | 12 ++-- .../symbols/windows/extensions/pool.py | 10 ++-- .../symbols/windows/extensions/registry.py | 18 +++--- .../framework/symbols/windows/pdbconv.py | 36 +++++------ .../framework/symbols/windows/pdbutil.py | 10 ++-- volatility3/schemas/__init__.py | 2 +- 92 files changed, 386 insertions(+), 386 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 98b769c3b6..9ee4d570c0 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -56,7 +56,7 @@ def __call__(self, progress: Union[int, float], description: str = None): Args: progress: Percentage of progress of the current procedure """ - message = "\rProgress: {0: 7.2f}\t\t{1:}".format(round(progress, 2), description or '') + message = f"\rProgress: {round(progress, 2): 7.2f}\t\t{description or ''}" message_len = len(message) self._max_message_len = max([self._max_message_len, message_len]) sys.stderr.write(message + (' ' * (self._max_message_len - message_len)) + '\r') @@ -144,7 +144,7 @@ def run(self): parser.add_argument("-r", "--renderer", metavar = 'RENDERER', - help = "Determines how to render the output ({})".format(", ".join(list(renderers))), + help = f"Determines how to render the output ({', '.join(list(renderers))})", default = "quick", choices = list(renderers)) parser.add_argument("-f", @@ -162,7 +162,7 @@ def run(self): default = False, action = 'store_true') parser.add_argument("--cache-path", - help = "Change the default path ({}) used to store the cache".format(constants.CACHE_PATH), + help = f"Change the default path ({constants.CACHE_PATH}) used to store the cache", default = constants.CACHE_PATH, type = str) @@ -174,7 +174,7 @@ def run(self): banner_output = sys.stdout if renderers[partial_args.renderer].structured_output: banner_output = 
sys.stderr - banner_output.write("Volatility 3 Framework {}\n".format(constants.PACKAGE_VERSION)) + banner_output.write(f"Volatility 3 Framework {constants.PACKAGE_VERSION}\n") if partial_args.plugin_dirs: volatility3.plugins.__path__ = [os.path.abspath(p) @@ -202,8 +202,8 @@ def run(self): else: console.setLevel(10 - (partial_args.verbosity - 2)) - vollog.info("Volatility plugins path: {}".format(volatility3.plugins.__path__)) - vollog.info("Volatility symbols path: {}".format(volatility3.symbols.__path__)) + vollog.info(f"Volatility plugins path: {volatility3.plugins.__path__}") + vollog.info(f"Volatility symbols path: {volatility3.symbols.__path__}") # Set the PARALLELISM if partial_args.parallelism == 'processes': @@ -256,7 +256,7 @@ def run(self): if args.plugin is None: parser.error("Please select a plugin to run") - vollog.log(constants.LOGLEVEL_VVV, "Cache directory used: {}".format(constants.CACHE_PATH)) + vollog.log(constants.LOGLEVEL_VVV, f"Cache directory used: {constants.CACHE_PATH}") plugin = plugin_list[args.plugin] chosen_configurables_list[args.plugin] = plugin @@ -289,7 +289,7 @@ def run(self): ctx.config['automagic.LayerStacker.stackers'] = stacker.choose_os_stackers(plugin) self.output_dir = args.output_dir if not os.path.exists(self.output_dir): - parser.error("The output directory specified does not exist: {}".format(self.output_dir)) + parser.error(f"The output directory specified does not exist: {self.output_dir}") self.populate_config(ctx, chosen_configurables_list, args, plugin_config_path) @@ -318,7 +318,7 @@ def run(self): json.dump(dict(constructed.build_configuration()), f, sort_keys = True, indent = 2) except exceptions.UnsatisfiedException as excp: self.process_unsatisfied_exceptions(excp) - parser.exit(1, "Unable to validate the plugin requirements: {}\n".format([x for x in excp.unsatisfied])) + parser.exit(1, f"Unable to validate the plugin requirements: {[x for x in excp.unsatisfied]}\n") try: # Construct and run the plugin @@ 
-346,7 +346,7 @@ def location_from_file(cls, filename: str) -> str: filename = request.url2pathname(single_location.path) if not filename: raise ValueError("File URL looks incorrect (potentially missing /)") - raise ValueError("File does not exist: {}".format(filename)) + raise ValueError(f"File does not exist: {filename}") return parse.urlunparse(single_location) def process_exceptions(self, excp): @@ -363,20 +363,20 @@ def process_exceptions(self, excp): if isinstance(excp, exceptions.InvalidAddressException): general = "Volatility was unable to read a requested page:" if isinstance(excp, exceptions.SwappedInvalidAddressException): - detail = "Swap error {} in layer {} ({})".format(hex(excp.invalid_address), excp.layer_name, excp) + detail = f"Swap error {hex(excp.invalid_address)} in layer {excp.layer_name} ({excp})" caused_by = [ "No suitable swap file having been provided (locate and provide the correct swap file)", "An intentionally invalid page (operating system protection)" ] elif isinstance(excp, exceptions.PagedInvalidAddressException): - detail = "Page error {} in layer {} ({})".format(hex(excp.invalid_address), excp.layer_name, excp) + detail = f"Page error {hex(excp.invalid_address)} in layer {excp.layer_name} ({excp})" caused_by = [ "Memory smear during acquisition (try re-acquiring if possible)", "An intentionally invalid page lookup (operating system protection)", "A bug in the plugin/volatility3 (re-run with -vvv and file a bug)" ] else: - detail = "{} in layer {} ({})".format(hex(excp.invalid_address), excp.layer_name, excp) + detail = f"{hex(excp.invalid_address)} in layer {excp.layer_name} ({excp})" caused_by = [ "The base memory file being incomplete (try re-acquiring if possible)", "Memory smear during acquisition (try re-acquiring if possible)", @@ -385,7 +385,7 @@ ] elif isinstance(excp, exceptions.SymbolError): general = "Volatility experienced a symbol-related issue:" - detail = "{}{}{}: 
{}".format(excp.table_name, constants.BANG, excp.symbol_name, excp) + detail = f"{excp.table_name}{constants.BANG}{excp.symbol_name}: {excp}" caused_by = [ "An invalid symbol table", "A plugin requesting a bad symbol", @@ -393,32 +393,32 @@ ] elif isinstance(excp, exceptions.SymbolSpaceError): general = "Volatility experienced an issue related to a symbol table:" - detail = "{}".format(excp) + detail = f"{excp}" caused_by = [ "An invalid symbol table", "A plugin requesting a bad symbol", "A plugin requesting a symbol from the wrong table" ] elif isinstance(excp, exceptions.LayerException): - general = "Volatility experienced a layer-related issue: {}".format(excp.layer_name) - detail = "{}".format(excp) + general = f"Volatility experienced a layer-related issue: {excp.layer_name}" + detail = f"{excp}" caused_by = ["A faulty layer implementation (re-run with -vvv and file a bug)"] elif isinstance(excp, exceptions.MissingModuleException): - general = "Volatility could not import a necessary module: {}".format(excp.module) - detail = "{}".format(excp) + general = f"Volatility could not import a necessary module: {excp.module}" + detail = f"{excp}" caused_by = ["A required python module is not installed (install the module and re-run)"] else: general = "Volatility encountered an unexpected situation." 
detail = "" caused_by = [ - "Please re-run using with -vvv and file a bug with the output", "at {}".format(constants.BUG_URL) + "Please re-run using with -vvv and file a bug with the output", f"at {constants.BUG_URL}" ] # Code that actually renders the exception output = sys.stderr - output.write(general + "\n") - output.write(detail + "\n\n") + output.write(f"{general}\n") + output.write(f"{detail}\n\n") for cause in caused_by: - output.write("\t* " + cause + "\n") + output.write(f" * {cause}\n") output.write("\nNo further results will be produced\n") sys.exit(1) @@ -434,7 +434,7 @@ def process_unsatisfied_exceptions(self, excp): symbols_failed = symbols_failed or isinstance(excp.unsatisfied[config_path], configuration.requirements.SymbolTableRequirement) - print("Unsatisfied requirement {}: {}".format(config_path, excp.unsatisfied[config_path].description)) + print(f"Unsatisfied requirement {config_path}: {excp.unsatisfied[config_path].description}") if symbols_failed: print("\nA symbol table requirement was not fulfilled. 
Please verify that:\n" @@ -471,8 +471,8 @@ def populate_config(self, context: interfaces.context.ContextInterface, if not scheme or len(scheme) <= 1: if not os.path.exists(value): raise FileNotFoundError( - "Non-existant file {} passed to URIRequirement".format(value)) - value = "file://" + request.pathname2url(os.path.abspath(value)) + f"Non-existant file {value} passed to URIRequirement") + value = f"file://{request.pathname2url(os.path.abspath(value))}" if isinstance(requirement, requirements.ListRequirement): if not isinstance(value, list): raise TypeError("Configuration for ListRequirement was not a list: {}".format( @@ -499,11 +499,11 @@ def _get_final_filename(self): pref_name_array = self.preferred_filename.split('.') filename, extension = os.path.join(output_dir, '.'.join(pref_name_array[:-1])), pref_name_array[-1] - output_filename = "{}.{}".format(filename, extension) + output_filename = f"{filename}.{extension}" counter = 1 while os.path.exists(output_filename): - output_filename = "{}-{}.{}".format(filename, counter, extension) + output_filename = f"{filename}-{counter}.{extension}" counter += 1 return output_filename @@ -525,7 +525,7 @@ def close(self): with open(output_filename, "wb") as current_file: current_file.write(self.read()) self._committed = True - vollog.log(logging.INFO, "Saved stored plugin file: {}".format(output_filename)) + vollog.log(logging.INFO, f"Saved stored plugin file: {output_filename}") super().close() @@ -578,7 +578,7 @@ def populate_requirements_argparse(self, parser: Union[argparse.ArgumentParser, configurable: The plugin object to pull the requirements from """ if not issubclass(configurable, interfaces.configuration.ConfigurableInterface): - raise TypeError("Expected ConfigurableInterface type, not: {}".format(type(configurable))) + raise TypeError(f"Expected ConfigurableInterface type, not: {type(configurable)}") # Construct an argparse group diff --git a/volatility3/cli/text_renderer.py 
b/volatility3/cli/text_renderer.py
index f40a5d3239..19507b11c5 100644
--- a/volatility3/cli/text_renderer.py
+++ b/volatility3/cli/text_renderer.py
@@ -33,13 +33,13 @@ def hex_bytes_as_text(value: bytes) -> str:
         A text representation of the hexadecimal bytes plus their ascii equivalents, separated by newline characters
     """
     if not isinstance(value, bytes):
-        raise TypeError("hex_bytes_as_text takes bytes not: {}".format(type(value)))
+        raise TypeError(f"hex_bytes_as_text takes bytes not: {type(value)}")
     ascii = []
     hex = []
     count = 0
     output = ""
     for byte in value:
-        hex.append("{:02x}".format(byte))
+        hex.append(f"{byte:02x}")
         ascii.append(chr(byte) if 0x20 < byte <= 0x7E else ".")
         if (count % 8) == 7:
             output += "\n"
@@ -87,10 +87,10 @@ def wrapped(x: Any) -> str:
         if result == "-" or result == "N/A":
             return ""
         if isinstance(x, format_hints.MultiTypeData) and x.converted_int:
-            return "{}".format(result)
+            return f"{result}"
         if isinstance(x, int) and not isinstance(x, (format_hints.Hex, format_hints.Bin)):
-            return "{}".format(result)
-        return "\"{}\"".format(result)
+            return f"{result}"
+        return f"\"{result}\""

     return wrapped

@@ -115,7 +115,7 @@ def display_disassembly(disasm: interfaces.renderers.Disassembly) -> str:
         output = ""
         if disasm.architecture is not None:
             for i in disasm_types[disasm.architecture].disasm(disasm.data, disasm.offset):
-                output += "\n0x%x:\t%s\t%s" % (i.address, i.mnemonic, i.op_str)
+                output += f"\n0x{i.address:x}:\t{i.mnemonic}\t{i.op_str}"
         return output
     return QuickTextRenderer._type_renderers[bytes](disasm.data)

@@ -128,14 +128,14 @@ class CLIRenderer(interfaces.renderers.Renderer):

 class QuickTextRenderer(CLIRenderer):
     _type_renderers = {
-        format_hints.Bin: optional(lambda x: "0b{:b}".format(x)),
-        format_hints.Hex: optional(lambda x: "0x{:x}".format(x)),
+        format_hints.Bin: optional(lambda x: f"0b{x:b}"),
+        format_hints.Hex: optional(lambda x: f"0x{x:x}"),
         format_hints.HexBytes: optional(hex_bytes_as_text),
         format_hints.MultiTypeData: quoted_optional(multitypedata_as_text),
         interfaces.renderers.Disassembly: optional(display_disassembly),
-        bytes: optional(lambda x: " ".join(["{0:02x}".format(b) for b in x])),
+        bytes: optional(lambda x: " ".join([f"{b:02x}" for b in x])),
         datetime.datetime: optional(lambda x: x.strftime("%Y-%m-%d %H:%M:%S.%f %Z")),
-        'default': optional(lambda x: "{}".format(x))
+        'default': optional(lambda x: f"{x}")
     }

     name = "quick"

@@ -158,7 +158,7 @@ def render(self, grid: interfaces.renderers.TreeGrid) -> None:
         line = []
         for column in grid.columns:
             # Ignore the type because namedtuples don't realize they have accessible attributes
-            line.append("{}".format(column.name))
+            line.append(f"{column.name}")
         outfd.write("\n{}\n".format("\t".join(line)))

         def visitor(node: interfaces.renderers.TreeNode, accumulator):
@@ -184,14 +184,14 @@ def visitor(node: interfaces.renderers.TreeNode, accumulator):

 class CSVRenderer(CLIRenderer):
     _type_renderers = {
-        format_hints.Bin: quoted_optional(lambda x: "0b{:b}".format(x)),
-        format_hints.Hex: quoted_optional(lambda x: "0x{:x}".format(x)),
+        format_hints.Bin: quoted_optional(lambda x: f"0b{x:b}"),
+        format_hints.Hex: quoted_optional(lambda x: f"0x{x:x}"),
         format_hints.HexBytes: quoted_optional(hex_bytes_as_text),
         format_hints.MultiTypeData: quoted_optional(multitypedata_as_text),
         interfaces.renderers.Disassembly: quoted_optional(display_disassembly),
-        bytes: quoted_optional(lambda x: " ".join(["{0:02x}".format(b) for b in x])),
+        bytes: quoted_optional(lambda x: " ".join([f"{b:02x}" for b in x])),
         datetime.datetime: quoted_optional(lambda x: x.strftime("%Y-%m-%d %H:%M:%S.%f %Z")),
-        'default': quoted_optional(lambda x: "{}".format(x))
+        'default': quoted_optional(lambda x: f"{x}")
     }

     name = "csv"

@@ -212,7 +212,7 @@ def render(self, grid: interfaces.renderers.TreeGrid) -> None:
         for column in grid.columns:
             # Ignore the type because namedtuples don't realize they have accessible attributes
             line.append("{}".format('"' + column.name + '"'))
-        outfd.write("{}".format(",".join(line)))
+        outfd.write(f"{','.join(line)}")

         def visitor(node: interfaces.renderers.TreeNode, accumulator):
             accumulator.write("\n")
@@ -223,7 +223,7 @@ def visitor(node: interfaces.renderers.TreeNode, accumulator):
                 column = grid.columns[column_index]
                 renderer = self._type_renderers.get(column.type, self._type_renderers['default'])
                 line.append(renderer(node.values[column_index]))
-            accumulator.write("{}".format(",".join(line)))
+            accumulator.write(f"{','.join(line)}")
             return accumulator

         if not grid.populated:
@@ -273,7 +273,7 @@ def visitor(
                 renderer = self._type_renderers.get(column.type, self._type_renderers['default'])
                 data = renderer(node.values[column_index])
                 max_column_widths[column.name] = max(max_column_widths.get(column.name, len(column.name)),
-                                                     len("{}".format(data)))
+                                                     len(f"{data}"))
                 line[column] = data
             accumulator.append((node.path_depth, line))
             return accumulator
@@ -304,7 +304,7 @@ class JsonRenderer(CLIRenderer):
         format_hints.HexBytes: quoted_optional(hex_bytes_as_text),
         interfaces.renderers.Disassembly: quoted_optional(display_disassembly),
         format_hints.MultiTypeData: quoted_optional(multitypedata_as_text),
-        bytes: optional(lambda x: " ".join(["{0:02x}".format(b) for b in x])),
+        bytes: optional(lambda x: " ".join([f"{b:02x}" for b in x])),
         datetime.datetime: lambda x: x.isoformat() if not isinstance(x, interfaces.renderers.BaseAbsentValue) else None,
         'default': lambda x: x
     }
diff --git a/volatility3/cli/volargparse.py b/volatility3/cli/volargparse.py
index 8f239867fc..5ced541ae1 100644
--- a/volatility3/cli/volargparse.py
+++ b/volatility3/cli/volargparse.py
@@ -46,10 +46,10 @@ def __call__(self,
         matched_parsers = [name for name in self._name_parser_map if parser_name in name]

         if len(matched_parsers) < 1:
-            msg = 'invalid choice {} (choose from {})'.format(parser_name, ', '.join(self._name_parser_map))
+            msg = f"invalid choice {parser_name} (choose from {', '.join(self._name_parser_map)})"
             raise argparse.ArgumentError(self, msg)
         if len(matched_parsers) > 1:
-            msg = 'plugin {} matches multiple plugins ({})'.format(parser_name, ', '.join(matched_parsers))
+            msg = f"plugin {parser_name} matches multiple plugins ({', '.join(matched_parsers)})"
             raise argparse.ArgumentError(self, msg)
         parser = self._name_parser_map[matched_parsers[0]]
         setattr(namespace, 'plugin', matched_parsers[0])
@@ -88,7 +88,7 @@ def _match_argument(self, action, arg_strings_pattern) -> int:
             if msg is None:
                 msg = gettext.ngettext('expected %s argument', 'expected %s arguments', action.nargs) % action.nargs
                 if action.choices:
-                    msg = "{} (from: {})".format(msg, ", ".join(action.choices))
+                    msg = f"{msg} (from: {', '.join(action.choices)})"
             raise argparse.ArgumentError(action, msg)

         # return the number of arguments matched
diff --git a/volatility3/cli/volshell/__init__.py b/volatility3/cli/volshell/__init__.py
index 25336f06c6..bc1219e885 100644
--- a/volatility3/cli/volshell/__init__.py
+++ b/volatility3/cli/volshell/__init__.py
@@ -41,7 +41,7 @@ def __init__(self):
     def run(self):
         """Executes the command line module, taking the system arguments,
         determining the plugin to run and then running it."""
-        sys.stdout.write("Volshell (Volatility 3 Framework) {}\n".format(constants.PACKAGE_VERSION))
+        sys.stdout.write(f"Volshell (Volatility 3 Framework) {constants.PACKAGE_VERSION}\n")

         framework.require_interface_version(1, 0, 0)

@@ -90,7 +90,7 @@ def run(self):
                             default = False,
                             action = 'store_true')
         parser.add_argument("--cache-path",
-                            help = "Change the default path ({}) used to store the cache".format(constants.CACHE_PATH),
+                            help = f"Change the default path ({constants.CACHE_PATH}) used to store the cache",
                             default = constants.CACHE_PATH,
                             type = str)

@@ -119,8 +119,8 @@ def run(self):
         if partial_args.cache_path:
             constants.CACHE_PATH = partial_args.cache_path

-        vollog.info("Volatility plugins path: {}".format(volatility3.plugins.__path__))
-        vollog.info("Volatility symbols path: {}".format(volatility3.symbols.__path__))
+        vollog.info(f"Volatility plugins path: {volatility3.plugins.__path__}")
+        vollog.info(f"Volatility symbols path: {volatility3.symbols.__path__}")

         if partial_args.log:
             file_logger = logging.FileHandler(partial_args.log)
@@ -180,7 +180,7 @@ def run(self):
         # Run the argparser
         args = parser.parse_args()

-        vollog.log(constants.LOGLEVEL_VVV, "Cache directory used: {}".format(constants.CACHE_PATH))
+        vollog.log(constants.LOGLEVEL_VVV, f"Cache directory used: {constants.CACHE_PATH}")

         plugin = generic.Volshell
         if args.windows:
@@ -243,7 +243,7 @@ def run(self):
             constructed.run()
         except exceptions.VolatilityException as excp:
             self.process_exceptions(excp)
-            parser.exit(1, "Unable to validate the plugin requirements: {}\n".format([x for x in excp.unsatisfied]))
+            parser.exit(1, f"Unable to validate the plugin requirements: {[x for x in excp.unsatisfied]}\n")


 def main():
diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py
index 6c81cda4bc..46701b4acb 100644
--- a/volatility3/cli/volshell/generic.py
+++ b/volatility3/cli/volshell/generic.py
@@ -76,14 +76,14 @@ def run(self, additional_locals: Dict[str, Any] = None) -> interfaces.renderers.
         mode = self.__module__.split('.')[-1]
         mode = mode[0].upper() + mode[1:]

-        banner = """
+        banner = f"""
    Call help() to see available functions

-    Volshell mode: {}
-    Current Layer: {}
-        """.format(mode, self.current_layer)
+    Volshell mode: {mode}
+    Current Layer: {self.current_layer}
+        """

-        sys.ps1 = "({}) >>> ".format(self.current_layer)
+        sys.ps1 = f"({self.current_layer}) >>> "
         self.__console = code.InteractiveConsole(locals = self._construct_locals_dict())
         # Since we have to do work to add the option only once for all different modes of volshell, we can't
         # rely on the default having been set
@@ -105,14 +105,14 @@ def help(self, *args):
         for aliases, item in self.construct_locals():
             name = ", ".join(aliases)
             if item.__doc__ and callable(item):
-                print("* {}".format(name))
-                print("  {}".format(item.__doc__))
+                print(f"* {name}")
+                print(f"  {item.__doc__}")
             else:
                 variables.append(name)

         print("\nVariables:")
         for var in variables:
-            print("  {}".format(var))
+            print(f"  {var}")

     def construct_locals(self) -> List[Tuple[List[str], Any]]:
         """Returns a dictionary listing the functions to be added to the
@@ -181,7 +181,7 @@ def change_layer(self, layer_name = None):
         if not layer_name:
             layer_name = self.config['primary']
         self.__current_layer = layer_name
-        sys.ps1 = "({}) >>> ".format(self.current_layer)
+        sys.ps1 = f"({self.current_layer}) >>> "

     def display_bytes(self, offset, count = 128, layer_name = None):
         """Displays byte values and ASCII characters"""
@@ -221,7 +221,7 @@ def disassemble(self, offset, count = 128, layer_name = None, architecture = Non
         }
         if architecture is not None:
             for i in disasm_types[architecture].disasm(remaining_data, offset):
-                print("0x%x:\t%s\t%s" % (i.address, i.mnemonic, i.op_str))
+                print(f"0x{i.address:x}:\t{i.mnemonic}\t{i.op_str}")

     def display_type(self,
                      object: Union[str, interfaces.objects.ObjectInterface, interfaces.objects.Template],
@@ -245,7 +245,7 @@ def display_type(self,
             volobject = self.context.object(volobject.vol.type_name,
                                             layer_name = self.current_layer,
                                             offset = offset)

         if hasattr(volobject.vol, 'size'):
-            print("{} ({} bytes)".format(volobject.vol.type_name, volobject.vol.size))
+            print(f"{volobject.vol.type_name} ({volobject.vol.size} bytes)")
         elif hasattr(volobject.vol, 'data_format'):
             data_format = volobject.vol.data_format
             print("{} ({} bytes, {} endian, {})".format(volobject.vol.type_name, data_format.length,
@@ -301,7 +301,7 @@ def generate_treegrid(self, plugin: Type[interfaces.plugins.PluginInterface],
             constructed = plugins.construct_plugin(self.context, [], plugin, plugin_path, None, NullFileHandler)
             return constructed.run()
         except exceptions.UnsatisfiedException as excp:
-            print("Unable to validate the plugin requirements: {}\n".format([x for x in excp.unsatisfied]))
+            print(f"Unable to validate the plugin requirements: {[x for x in excp.unsatisfied]}\n")
         return None

     def render_treegrid(self,
@@ -340,7 +340,7 @@ def run_script(self, location: str):
         """Runs a python script within the context of volshell"""
         if not parse.urlparse(location).scheme:
             location = "file:" + request.pathname2url(location)
-        print("Running code from {}\n".format(location))
+        print(f"Running code from {location}\n")
         accessor = resources.ResourceAccessor()
         with io.TextIOWrapper(accessor.open(url = location), encoding = 'utf-8') as fp:
             self.__console.runsource(fp.read(), symbol = 'exec')
diff --git a/volatility3/cli/volshell/linux.py b/volatility3/cli/volshell/linux.py
index 73d5481aa1..850e3111cf 100644
--- a/volatility3/cli/volshell/linux.py
+++ b/volatility3/cli/volshell/linux.py
@@ -30,9 +30,9 @@ def change_task(self, pid = None):
                 if process_layer is not None:
                     self.change_layer(process_layer)
                     return
-                print("Layer for task ID {} could not be constructed".format(pid))
+                print(f"Layer for task ID {pid} could not be constructed")
                 return
-        print("No task with task ID {} found".format(pid))
+        print(f"No task with task ID {pid} found")

     def list_tasks(self):
         """Returns a list of task objects from the primary layer"""
diff --git a/volatility3/cli/volshell/mac.py b/volatility3/cli/volshell/mac.py
index 662f8dcb47..8218848baf 100644
--- a/volatility3/cli/volshell/mac.py
+++ b/volatility3/cli/volshell/mac.py
@@ -30,9 +30,9 @@ def change_task(self, pid = None):
                 if process_layer is not None:
                     self.change_layer(process_layer)
                     return
-                print("Layer for task ID {} could not be constructed".format(pid))
+                print(f"Layer for task ID {pid} could not be constructed")
                 return
-        print("No task with task ID {} found".format(pid))
+        print(f"No task with task ID {pid} found")

     def list_tasks(self):
         """Returns a list of task objects from the primary layer"""
diff --git a/volatility3/cli/volshell/windows.py b/volatility3/cli/volshell/windows.py
index d9de5a92ff..6c191ad28a 100644
--- a/volatility3/cli/volshell/windows.py
+++ b/volatility3/cli/volshell/windows.py
@@ -29,7 +29,7 @@ def change_process(self, pid = None):
                 process_layer = process.add_process_layer()
                 self.change_layer(process_layer)
                 return
-        print("No process with process ID {} found".format(pid))
+        print(f"No process with process ID {pid} found")

     def list_processes(self):
         """Returns a list of EPROCESS objects from the primary layer"""
diff --git a/volatility3/framework/__init__.py b/volatility3/framework/__init__.py
index a8401950a0..ba834f4f14 100644
--- a/volatility3/framework/__init__.py
+++ b/volatility3/framework/__init__.py
@@ -77,7 +77,7 @@ def hide_from_subclasses(cls: Type) -> Type:
 def class_subclasses(cls: Type[T]) -> Generator[Type[T], None, None]:
     """Returns all the (recursive) subclasses of a given class."""
     if not inspect.isclass(cls):
-        raise TypeError("class_subclasses parameter not a valid class: {}".format(cls))
+        raise TypeError(f"class_subclasses parameter not a valid class: {cls}")
     for clazz in cls.__subclasses__():
         # The typing system is not clever enough to realize that clazz has a hidden attr after the hasattr check
         if not hasattr(clazz, 'hidden') or not clazz.hidden:  # type: ignore
@@ -92,7 +92,7 @@ def import_files(base_module, ignore_errors = False) -> List[str]:
     if not isinstance(base_module.__path__, list):
         raise TypeError("[base_module].__path__ must be a list of paths")
     vollog.log(constants.LOGLEVEL_VVVV,
-               "Importing from the following paths: {}".format(", ".join(base_module.__path__)))
+               f"Importing from the following paths: {', '.join(base_module.__path__)}")
     for path in base_module.__path__:
         for root, _, files in os.walk(path, followlinks = True):
             # TODO: Figure out how to import pycache files
diff --git a/volatility3/framework/automagic/__init__.py b/volatility3/framework/automagic/__init__.py
index 844d9a273a..a10b526d28 100644
--- a/volatility3/framework/automagic/__init__.py
+++ b/volatility3/framework/automagic/__init__.py
@@ -72,7 +72,7 @@ def choose_automagic(
         vollog.info("No plugin category detected")
         return automagics

-    vollog.info("Detected a {} category plugin".format(plugin_category))
+    vollog.info(f"Detected a {plugin_category} category plugin")
    output = []
    for amagic in automagics:
        if amagic.__class__.__name__ in automagic_categories[plugin_category]:
@@ -127,7 +127,7 @@ def run(automagics: List[interfaces.automagic.AutomagicInterface],

     for automagic in automagics:
         try:
-            vollog.info("Running automagic: {}".format(automagic.__class__.__name__))
+            vollog.info(f"Running automagic: {automagic.__class__.__name__}")
             automagic(context, config_path, requirement, progress_callback)
         except Exception as excp:
             exceptions.append(traceback.TracebackException.from_exception(excp))
diff --git a/volatility3/framework/automagic/construct_layers.py b/volatility3/framework/automagic/construct_layers.py
index 15e6af4bde..8afc6326e3 100644
--- a/volatility3/framework/automagic/construct_layers.py
+++ b/volatility3/framework/automagic/construct_layers.py
@@ -48,12 +48,12 @@ def __call__(self,
                     self(context, subreq_config_path, subreq, optional = optional or subreq.optional)
                 except Exception as e:
                     # We don't really care if this fails, it tends to mean the configuration isn't complete for that item
-                    vollog.log(constants.LOGLEVEL_VVVV, "Construction Exception occurred: {}".format(e))
+                    vollog.log(constants.LOGLEVEL_VVVV, f"Construction Exception occurred: {e}")
                 invalid = subreq.unsatisfied(context, subreq_config_path)
                 # We want to traverse optional paths, so don't check until we've tried to validate
                 # We also don't want to emit a debug message when a parent is optional, hence the optional parameter
                 if invalid and not (optional or subreq.optional):
-                    vollog.log(constants.LOGLEVEL_V, "Failed on requirement: {}".format(subreq_config_path))
+                    vollog.log(constants.LOGLEVEL_V, f"Failed on requirement: {subreq_config_path}")
                     result.append(interfaces.configuration.path_join(subreq_config_path, subreq.name))
         if result:
             return result
diff --git a/volatility3/framework/automagic/linux.py b/volatility3/framework/automagic/linux.py
index fb1f6fc750..4d0b95ce3f 100644
--- a/volatility3/framework/automagic/linux.py
+++ b/volatility3/framework/automagic/linux.py
@@ -41,7 +41,7 @@ def stack(cls,
         mss = scanners.MultiStringScanner([x for x in linux_banners if x is not None])
         for _, banner in layer.scan(context = context, scanner = mss, progress_callback = progress_callback):
             dtb = None
-            vollog.debug("Identified banner: {}".format(repr(banner)))
+            vollog.debug(f"Identified banner: {repr(banner)}")

             symbol_files = linux_banners.get(banner, None)
             if symbol_files:
@@ -82,7 +82,7 @@ def stack(cls,
                                                    metadata = {'kaslr_value': aslr_shift, 'os': 'Linux'})

                 if layer and dtb:
-                    vollog.debug("DTB was found at: 0x{:0x}".format(dtb))
+                    vollog.debug(f"DTB was found at: 0x{dtb:0x}")
                     return layer
         vollog.debug("No suitable linux banner could be matched")
         return None
diff --git a/volatility3/framework/automagic/mac.py b/volatility3/framework/automagic/mac.py
index 1a039af8d3..07c995b360 100644
--- a/volatility3/framework/automagic/mac.py
+++ b/volatility3/framework/automagic/mac.py
@@ -44,7 +44,7 @@ def stack(cls,
         for banner_offset, banner in layer.scan(context = context, scanner = mss,
                                                 progress_callback = progress_callback):
             dtb = None
-            vollog.debug("Identified banner: {}".format(repr(banner)))
+            vollog.debug(f"Identified banner: {repr(banner)}")

             symbol_files = mac_banners.get(banner, None)
             if symbol_files:
@@ -63,7 +63,7 @@ def stack(cls,
                                              progress_callback = progress_callback)

                 if kaslr_shift == 0:
-                    vollog.log(constants.LOGLEVEL_VVV, "Invalid kalsr_shift found at offset: {}".format(banner_offset))
+                    vollog.log(constants.LOGLEVEL_VVV, f"Invalid kalsr_shift found at offset: {banner_offset}")
                     continue

                 bootpml4_addr = cls.virtual_to_physical_address(table.get_symbol("BootPML4").address + kaslr_shift)
@@ -90,7 +90,7 @@ def stack(cls,
                     tmp_dtb = idlepml4_addr

                 if tmp_dtb % 4096:
-                    vollog.log(constants.LOGLEVEL_VVV, "Skipping non-page aligned DTB: 0x{:0x}".format(tmp_dtb))
+                    vollog.log(constants.LOGLEVEL_VVV, f"Skipping non-page aligned DTB: 0x{tmp_dtb:0x}")
                     continue

                 dtb = tmp_dtb
@@ -108,7 +108,7 @@ def stack(cls,
                                                    metadata = {'kaslr_value': kaslr_shift})

                 if new_layer and dtb:
-                    vollog.debug("DTB was found at: 0x{:0x}".format(dtb))
+                    vollog.debug(f"DTB was found at: 0x{dtb:0x}")
                     return new_layer
         vollog.debug("No suitable mac banner could be matched")
         return None
@@ -164,7 +164,7 @@ def find_aslr(cls,
                 aslr_shift = tmp_aslr_shift & 0xffffffff
                 break

-        vollog.log(constants.LOGLEVEL_VVVV, "Mac find_aslr returned: {:0x}".format(aslr_shift))
+        vollog.log(constants.LOGLEVEL_VVVV, f"Mac find_aslr returned: {aslr_shift:0x}")
         return aslr_shift

diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py
index 786f3d3309..d1df22ebea 100644
--- a/volatility3/framework/automagic/pdbscan.py
+++ b/volatility3/framework/automagic/pdbscan.py
@@ -127,7 +127,7 @@ def set_kernel_virtual_offset(self, context: interfaces.context.ContextInterface
             kvo_path = interfaces.configuration.path_join(context.layers[virtual_layer].config_path,
                                                           'kernel_virtual_offset')
             context.config[kvo_path] = kvo
-            vollog.debug("Setting kernel_virtual_offset to {}".format(hex(kvo)))
+            vollog.debug(f"Setting kernel_virtual_offset to {hex(kvo)}")

     def get_physical_layer_name(self, context, vlayer):
         return context.config.get(interfaces.configuration.path_join(vlayer.config_path, 'memory_layer'), None)
@@ -166,7 +166,7 @@ def test_physical_kernel(physical_layer_name, virtual_layer_name, kernel):
                     vollog.debug("Potential kernel_virtual_offset did not map to expected location: {}".format(
                         hex(kvo)))
                 except exceptions.InvalidAddressException:
-                    vollog.debug("Potential kernel_virtual_offset caused a page fault: {}".format(hex(kvo)))
+                    vollog.debug(f"Potential kernel_virtual_offset caused a page fault: {hex(kvo)}")

         vollog.debug("Kernel base determination - testing fixed base address")
         return self._method_layer_pdb_scan(context, vlayer, test_physical_kernel, True, progress_callback)
diff --git a/volatility3/framework/automagic/stacker.py b/volatility3/framework/automagic/stacker.py
index 142bfd3bbf..928e3d0684 100644
--- a/volatility3/framework/automagic/stacker.py
+++ b/volatility3/framework/automagic/stacker.py
@@ -58,7 +58,7 @@ def __call__(self,
         # Bow out quickly if the UI hasn't provided a single_location
         unsatisfied = self.unsatisfied(self.context, self.config_path)
         if unsatisfied:
-            vollog.info("Unable to run LayerStacker, unsatisfied requirement: {}".format(unsatisfied))
+            vollog.info(f"Unable to run LayerStacker, unsatisfied requirement: {unsatisfied}")
             return list(unsatisfied)
         if not self.config or not self.config.get('single_location', None):
             raise ValueError("Unable to run LayerStacker, single_location parameter not provided")
@@ -123,7 +123,7 @@ def stack(self, context: interfaces.context.ContextInterface, config_path: str,
         # Stash the changed config items
         self._cached = context.config.get(path, None), context.config.branch(path)

-        vollog.debug("Stacked layers: {}".format(stacked_layers))
+        vollog.debug(f"Stacked layers: {stacked_layers}")

     @classmethod
     def stack_layer(cls,
@@ -158,7 +158,7 @@ def stack_layer(cls,

         for stacker_item in stack_set:
             if not issubclass(stacker_item, interfaces.automagic.StackerLayerInterface):
-                raise TypeError("Stacker {} is not a descendent of StackerLayerInterface".format(stacker_item.__name__))
+                raise TypeError(f"Stacker {stacker_item.__name__} is not a descendent of StackerLayerInterface")

         while stacked:
             stacked = False
@@ -167,17 +167,17 @@ def stack_layer(cls,
             for stacker_cls in stack_set:
                 stacker = stacker_cls()
                 try:
-                    vollog.log(constants.LOGLEVEL_VV, "Attempting to stack using {}".format(stacker_cls.__name__))
+                    vollog.log(constants.LOGLEVEL_VV, f"Attempting to stack using {stacker_cls.__name__}")
                     new_layer = stacker.stack(context, initial_layer, progress_callback)
                     if new_layer:
                         context.layers.add_layer(new_layer)
                         vollog.log(constants.LOGLEVEL_VV,
-                                   "Stacked {} using {}".format(new_layer.name, stacker_cls.__name__))
+                                   f"Stacked {new_layer.name} using {stacker_cls.__name__}")
                         break
                 except Exception as excp:
                     # Stacking exceptions are likely only of interest to developers, so the lowest level of logging
                     fulltrace = traceback.TracebackException.from_exception(excp).format(chain = True)
-                    vollog.log(constants.LOGLEVEL_VVV, "Exception during stacking: {}".format(str(excp)))
+                    vollog.log(constants.LOGLEVEL_VVV, f"Exception during stacking: {str(excp)}")
                     vollog.log(constants.LOGLEVEL_VVVV, "\n".join(fulltrace))
             else:
                 stacked = False
diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py
index b2407f8e9b..1b468a4cd0 100644
--- a/volatility3/framework/automagic/symbol_cache.py
+++ b/volatility3/framework/automagic/symbol_cache.py
@@ -90,10 +90,10 @@ def __call__(self, context, config_path, configurable, progress_callback = None)
         total = len(cacheables)
         if total > 0:
-            vollog.info("Building {} caches...".format(self.os))
+            vollog.info(f"Building {self.os} caches...")
         for current in range(total):
             if progress_callback is not None:
-                progress_callback(current * 100 / total, "Building {} caches".format(self.os))
+                progress_callback(current * 100 / total, f"Building {self.os} caches")
             isf_url = cacheables[current]

             isf = None
@@ -105,7 +105,7 @@ def __call__(self, context, config_path, configurable, progress_callback = None)
                 # We don't bother with the hash (it'll likely take too long to validate)
                 # but we should check at least that the banner matches on load.
                 banner = isf.get_symbol(self.symbol_name).constant_data
-                vollog.log(constants.LOGLEVEL_VV, "Caching banner {} for file {}".format(banner, isf_url))
+                vollog.log(constants.LOGLEVEL_VV, f"Caching banner {banner} for file {isf_url}")

                 bannerlist = banners.get(banner, [])
                 bannerlist.append(isf_url)
@@ -113,7 +113,7 @@ def __call__(self, context, config_path, configurable, progress_callback = None)
             except exceptions.SymbolError:
                 pass
             except json.JSONDecodeError:
-                vollog.log(constants.LOGLEVEL_VV, "Caching file {} failed due to JSON error".format(isf_url))
+                vollog.log(constants.LOGLEVEL_VV, f"Caching file {isf_url} failed due to JSON error")
             finally:
                 # Get rid of the loaded file, in case it sits in memory
                 if isf:
@@ -124,4 +124,4 @@ def __call__(self, context, config_path, configurable, progress_callback = None)
         self.save_banners(banners)

         if progress_callback is not None:
-            progress_callback(100, "Built {} caches".format(self.os))
+            progress_callback(100, f"Built {self.os} caches")
diff --git a/volatility3/framework/automagic/symbol_finder.py b/volatility3/framework/automagic/symbol_finder.py
index f299767699..b57ab54c83 100644
--- a/volatility3/framework/automagic/symbol_finder.py
+++ b/volatility3/framework/automagic/symbol_finder.py
@@ -33,7 +33,7 @@ def banners(self) -> symbol_cache.BannersType:
         requested."""
         if not self._banners:
             if not self.banner_cache:
-                raise RuntimeError("Cache has not been properly defined for {}".format(self.__class__.__name__))
+                raise RuntimeError(f"Cache has not been properly defined for {self.__class__.__name__}")
             self._banners = self.banner_cache.load_banners()
         return self._banners

@@ -98,11 +98,11 @@ def _banner_scan(self,
         banner_list = layer.scan(context = context, scanner = mss, progress_callback = progress_callback)

         for _, banner in banner_list:
-            vollog.debug("Identified banner: {}".format(repr(banner)))
+            vollog.debug(f"Identified banner: {repr(banner)}")
             symbol_files = self.banners.get(banner, None)
             if symbol_files:
                 isf_path = symbol_files[0]
-                vollog.debug("Using symbol library: {}".format(symbol_files[0]))
+                vollog.debug(f"Using symbol library: {symbol_files[0]}")
                 clazz = self.symbol_class
                 # Set the discovered options
                 path_join = interfaces.configuration.path_join
@@ -134,7 +134,7 @@ def _banner_scan(self,
                 break
             else:
                 if symbol_files:
-                    vollog.debug("Symbol library path not found: {}".format(symbol_files[0]))
+                    vollog.debug(f"Symbol library path not found: {symbol_files[0]}")
                 # print("Kernel", banner, hex(banner_offset))
         else:
             vollog.debug("No existing banners found")
diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py
index 5b8340abd8..1144978beb 100644
--- a/volatility3/framework/automagic/windows.py
+++ b/volatility3/framework/automagic/windows.py
@@ -272,7 +272,7 @@ def __call__(self,
             physical_layer_name = requirement.requirements["memory_layer"].config_value(
                 context, sub_config_path)
             if not isinstance(physical_layer_name, str):
-                raise TypeError("Physical layer name is not a string: {}".format(sub_config_path))
+                raise TypeError(f"Physical layer name is not a string: {sub_config_path}")
             physical_layer = context.layers[physical_layer_name]
             # Check lower layer metadata first
             if physical_layer.metadata.get('page_map_offset', None):
diff --git a/volatility3/framework/configuration/requirements.py b/volatility3/framework/configuration/requirements.py
index 20a7af0f6a..ea0a5fcc25 100644
--- a/volatility3/framework/configuration/requirements.py
+++ b/volatility3/framework/configuration/requirements.py
@@ -105,7 +105,7 @@ def unsatisfied(self, context: interfaces.context.ContextInterface,
             context.config[config_path] = []
         if not isinstance(value, list):
             # TODO: Check this is the correct response for an error
-            raise TypeError("Unexpected config value found: {}".format(repr(value)))
+            raise TypeError(f"Unexpected config value found: {repr(value)}")
         if not (self.min_elements <= len(value)):
             vollog.log(constants.LOGLEVEL_V, "TypeError - Too few values provided to list option.")
             return {config_path: self}
@@ -264,20 +264,20 @@ def unsatisfied(self, context: interfaces.context.ContextInterface,
         value = self.config_value(context, config_path, None)
         if isinstance(value, str):
             if value not in context.layers:
-                vollog.log(constants.LOGLEVEL_V, "IndexError - Layer not found in memory space: {}".format(value))
+                vollog.log(constants.LOGLEVEL_V, f"IndexError - Layer not found in memory space: {value}")
                 return {config_path: self}
             if self.oses and context.layers[value].metadata.get('os', None) not in self.oses:
-                vollog.log(constants.LOGLEVEL_V, "TypeError - Layer is not the required OS: {}".format(value))
+                vollog.log(constants.LOGLEVEL_V, f"TypeError - Layer is not the required OS: {value}")
                 return {config_path: self}
             if (self.architectures
                     and context.layers[value].metadata.get('architecture', None) not in self.architectures):
-                vollog.log(constants.LOGLEVEL_V, "TypeError - Layer is not the required Architecture: {}".format(value))
+                vollog.log(constants.LOGLEVEL_V, f"TypeError - Layer is not the required Architecture: {value}")
                 return {config_path: self}
             return {}
         if value is not None:
             vollog.log(constants.LOGLEVEL_V,
-                       "TypeError - Translation Layer Requirement only accepts string labels: {}".format(repr(value)))
+                       f"TypeError - Translation Layer Requirement only accepts string labels: {repr(value)}")
             return {config_path: self}

         # TODO: check that the space in the context lives up to the requirements for arch/os etc
@@ -285,7 +285,7 @@ def unsatisfied(self, context: interfaces.context.ContextInterface,
         ### NOTE: This validate method has side effects (the dependencies can change)!!!
         self._validate_class(context, interfaces.configuration.parent_path(config_path))
-        vollog.log(constants.LOGLEVEL_V, "IndexError - No configuration provided: {}".format(config_path))
+        vollog.log(constants.LOGLEVEL_V, f"IndexError - No configuration provided: {config_path}")
         return {config_path: self}

     def construct(self, context: interfaces.context.ContextInterface, config_path: str) -> None:
@@ -333,7 +333,7 @@ def unsatisfied(self, context: interfaces.context.ContextInterface,
         value = self.config_value(context, config_path, None)
         if not isinstance(value, str) and value is not None:
             vollog.log(constants.LOGLEVEL_V,
-                       "TypeError - SymbolTableRequirement only accepts string labels: {}".format(repr(value)))
+                       f"TypeError - SymbolTableRequirement only accepts string labels: {repr(value)}")
             return {config_path: self}
         if value and value in context.symbol_space:
             # This is an expected situation, so return rather than raise
@@ -345,7 +345,7 @@ def unsatisfied(self, context: interfaces.context.ContextInterface,
         ### NOTE: This validate method has side effects (the dependencies can change)!!!
         self._validate_class(context, interfaces.configuration.parent_path(config_path))
-        vollog.log(constants.LOGLEVEL_V, "Symbol table requirement not yet fulfilled: {}".format(config_path))
+        vollog.log(constants.LOGLEVEL_V, f"Symbol table requirement not yet fulfilled: {config_path}")
         return {config_path: self}

     def construct(self, context: interfaces.context.ContextInterface, config_path: str) -> None:
diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py
index c082a29dd2..9fb950a03b 100644
--- a/volatility3/framework/contexts/__init__.py
+++ b/volatility3/framework/contexts/__init__.py
@@ -155,7 +155,7 @@ def wrapper(self, name: str) -> Callable:
             if constants.BANG not in name:
                 name = self._module_name + constants.BANG + name
             else:
-                raise ValueError("Cannot reference another module when calling {}".format(method))
+                raise ValueError(f"Cannot reference another module when calling {method}")
             return getattr(self._context.symbol_space, method)(name)

         for entry in ['__annotations__', '__doc__', '__module__', '__name__', '__qualname__']:
@@ -232,7 +232,7 @@ def object_from_symbol(self,
             offset += self._offset

         if symbol_val.type is None:
-            raise TypeError("Symbol {} has no associated type".format(symbol_val.name))
+            raise TypeError(f"Symbol {symbol_val.name} has no associated type")

         # Ensure we don't use a layer_name other than the module's, why would anyone do that?
         if 'layer_name' in kwargs:
diff --git a/volatility3/framework/interfaces/configuration.py b/volatility3/framework/interfaces/configuration.py
index 8b02d0d930..d3c05d9a96 100644
--- a/volatility3/framework/interfaces/configuration.py
+++ b/volatility3/framework/interfaces/configuration.py
@@ -77,7 +77,7 @@ def __init__(self,
             separator: A custom hierarchy separator (defaults to CONFIG_SEPARATOR)
         """
         if not (isinstance(separator, str) and len(separator) == 1):
-            raise TypeError("Separator must be a one character string: {}".format(separator))
+            raise TypeError(f"Separator must be a one character string: {separator}")
         self._separator = separator
         self._data = {}  # type: Dict[str, ConfigSimpleType]
         self._subdict = {}  # type: Dict[str, 'HierarchicalDict']
@@ -88,7 +88,7 @@ def __init__(self,
                     self[k] = v
         elif initial_dict is not None:
             raise TypeError(
-                "Initial_dict must be a dictionary or JSON string containing a dictionary: {}".format(initial_dict))
+                f"Initial_dict must be a dictionary or JSON string containing a dictionary: {initial_dict}")

     def __eq__(self, other):
         """Define equality between HierarchicalDicts"""
@@ -315,7 +315,7 @@ def __init__(self,
         """
         super().__init__()
         if CONFIG_SEPARATOR in name:
-            raise ValueError("Name cannot contain the config-hierarchy divider ({})".format(CONFIG_SEPARATOR))
+            raise ValueError(f"Name cannot contain the config-hierarchy divider ({CONFIG_SEPARATOR})")
         self._name = name
         self._description = description or ""
         self._default = default
diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py
index cce80f0a1f..0b5a58d372 100644
--- a/volatility3/framework/interfaces/layers.py
+++ b/volatility3/framework/interfaces/layers.py
@@ -236,7 +236,7 @@ def scan(self,
                 for value in scan_iterator():
                     if progress_callback:
                         progress_callback(scan_metric(progress.value),
-                                          "Scanning {} using {}".format(self.name, scanner.__class__.__name__))
+                                          f"Scanning {self.name} using {scanner.__class__.__name__}")
                     yield from scan_chunk(value)
             else:
                 progress = multiprocessing.Manager().Value("Q", 0)
@@ -251,7 +251,7 @@ def scan(self,
                         if progress_callback:
                             # Run the progress_callback
                             progress_callback(scan_metric(progress.value),
-                                              "Scanning {} using {}".format(self.name, scanner.__class__.__name__))
+                                              f"Scanning {self.name} using {scanner.__class__.__name__}")
                         # Ensures we don't burn CPU cycles going round in a ready waiting loop
                         # without delaying the user too long between progress updates/results
                         result.wait(0.1)
@@ -259,7 +259,7 @@ def scan(self,
                     yield from result_value
         except Exception as e:
             # We don't care the kind of exception, so catch and report on everything, yielding nothing further
-            vollog.debug("Scan Failure: {}".format(str(e)))
+            vollog.debug(f"Scan Failure: {str(e)}")
             vollog.log(constants.LOGLEVEL_VVV,
                        "\n".join(traceback.TracebackException.from_exception(e).format(chain = True)))

@@ -325,7 +325,7 @@ def _scan_chunk(self, scanner: 'ScannerInterface', progress: 'ProgressValue',
                     layer_name, self.name, address))

         if len(data) > scanner.chunk_size + scanner.overlap:
-            vollog.debug("Scan chunk too large: {}".format(hex(len(data))))
+            vollog.debug(f"Scan chunk too large: {hex(len(data))}")

         progress.value = chunk_end
         return list(scanner(data, chunk_end - len(data)))
@@ -429,7 +429,7 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes:
                                                                                ignore_errors = pad):
             if not pad and layer_offset > current_offset:
                 raise exceptions.InvalidAddressException(
-                    self.name, current_offset, "Layer {} cannot map offset: {}".format(self.name, current_offset))
+                    self.name, current_offset, f"Layer {self.name} cannot map offset: {current_offset}")
             elif layer_offset > current_offset:
                 output += b"\x00" * (layer_offset - current_offset)
                 current_offset = layer_offset
@@ -452,7 +452,7 @@ def write(self, offset: int, value: bytes) -> None:
         for (layer_offset, sublength, mapped_offset, mapped_length, layer) in self.mapping(offset, length):
             if layer_offset > current_offset:
                 raise exceptions.InvalidAddressException(
-                    self.name, current_offset, "Layer {} cannot map offset: {}".format(self.name, current_offset))
+                    self.name, current_offset, f"Layer {self.name} cannot map offset: {current_offset}")

             value_chunk = value[layer_offset - offset:layer_offset - offset + sublength]
             new_data = self._encode_data(layer, mapped_offset, layer_offset, value_chunk)
@@ -566,12 +566,12 @@ def add_layer(self, layer: DataLayerInterface) -> None:
             layer: the layer to add to the list of layers (based on layer.name)
         """
         if layer.name in self._layers:
-            raise exceptions.LayerException(layer.name, "Layer already exists: {}".format(layer.name))
+            raise exceptions.LayerException(layer.name, f"Layer already exists: {layer.name}")
         if isinstance(layer, TranslationLayerInterface):
             missing_list = [sublayer for sublayer in layer.dependencies if sublayer not in self._layers]
             if missing_list:
                 raise exceptions.LayerException(
-                    layer.name, "Layer {} has unmet dependencies: {}".format(layer.name, ", ".join(missing_list)))
+                    layer.name, f"Layer {layer.name} has unmet dependencies: {', '.join(missing_list)}")
         self._layers[layer.name] = layer

     def del_layer(self, name: str) -> None:
@@ -587,7 +587,7 @@ def del_layer(self, name: str) -> None:
             if depend_list:
                 raise exceptions.LayerException(
                     self._layers[layer].name,
-                    "Layer {} is depended upon: {}".format(self._layers[layer].name, ", ".join(depend_list)))
+                    f"Layer {self._layers[layer].name} is depended upon: {', '.join(depend_list)}")
         self._layers[name].destroy()
         del self._layers[name]

@@ -604,9 +604,9 @@ def free_layer_name(self, prefix: str = "layer") -> str:
         if prefix not in self:
             return prefix
         count = 1
-        while "{}_{}".format(prefix, count) in self:
+        while f"{prefix}_{count}" in self:
             count += 1
-        return "{}_{}".format(prefix, count)
+        return f"{prefix}_{count}"

     def __getitem__(self, name: str) -> DataLayerInterface:
         """Returns the layer of specified name."""
diff --git a/volatility3/framework/interfaces/objects.py
b/volatility3/framework/interfaces/objects.py index 2f794b5655..4cb8bce42b 100644 --- a/volatility3/framework/interfaces/objects.py +++ b/volatility3/framework/interfaces/objects.py @@ -31,7 +31,7 @@ def __getattr__(self, attr: str) -> Any: return super().__getattribute__(attr) if attr in self._dict: return self._dict[attr] - raise AttributeError("Object has no attribute: {}.{}".format(self.__class__.__name__, attr)) + raise AttributeError(f"Object has no attribute: {self.__class__.__name__}.{attr}") def __getitem__(self, name: str) -> Any: """Returns the item requested.""" @@ -141,10 +141,10 @@ def get_symbol_table_name(self) -> str: KeyError: If the table_name is not valid within the object's context """ if constants.BANG not in self.vol.type_name: - raise ValueError("Unable to determine table for symbol: {}".format(self.vol.type_name)) + raise ValueError(f"Unable to determine table for symbol: {self.vol.type_name}") table_name = self.vol.type_name[:self.vol.type_name.index(constants.BANG)] if table_name not in self._context.symbol_space: - raise KeyError("Symbol table not found in context's symbol_space for symbol: {}".format(self.vol.type_name)) + raise KeyError(f"Symbol table not found in context's symbol_space for symbol: {self.vol.type_name}") return table_name def cast(self, new_type_name: str, **additional) -> 'ObjectInterface': @@ -231,14 +231,14 @@ def children(cls, template: 'Template') -> List['Template']: @abc.abstractmethod def replace_child(cls, template: 'Template', old_child: 'Template', new_child: 'Template') -> None: """Substitutes the old_child for the new_child.""" - raise KeyError("Template does not contain any children to replace: {}".format(template.vol.type_name)) + raise KeyError(f"Template does not contain any children to replace: {template.vol.type_name}") @classmethod @abc.abstractmethod def relative_child_offset(cls, template: 'Template', child: str) -> int: """Returns the relative offset from the head of the parent data to the child 
member.""" - raise KeyError("Template does not contain any children: {}".format(template.vol.type_name)) + raise KeyError(f"Template does not contain any children: {template.vol.type_name}") @classmethod @abc.abstractmethod @@ -330,7 +330,7 @@ def __getattr__(self, attr: str) -> Any: if attr != '_vol': if attr in self._vol: return self._vol[attr] - raise AttributeError("{} object has no attribute {}".format(self.__class__.__name__, attr)) + raise AttributeError(f"{self.__class__.__name__} object has no attribute {attr}") def __call__(self, context: 'interfaces.context.ContextInterface', object_info: ObjectInformation) -> ObjectInterface: diff --git a/volatility3/framework/interfaces/plugins.py b/volatility3/framework/interfaces/plugins.py index 5649629ca1..06316c221e 100644 --- a/volatility3/framework/interfaces/plugins.py +++ b/volatility3/framework/interfaces/plugins.py @@ -65,7 +65,7 @@ def __exit__(self, exc_type, exc_value, traceback): if exc_type is None and exc_value is None and traceback is None: self.close() else: - vollog.warning("File {} could not be written: {}".format(self._preferred_filename, str(exc_value))) + vollog.warning(f"File {self._preferred_filename} could not be written: {str(exc_value)}") self.close() diff --git a/volatility3/framework/interfaces/symbols.py b/volatility3/framework/interfaces/symbols.py index c1185bed9c..8b44198298 100644 --- a/volatility3/framework/interfaces/symbols.py +++ b/volatility3/framework/interfaces/symbols.py @@ -31,7 +31,7 @@ def __init__(self, """ self._name = name if constants.BANG in self._name: - raise ValueError("Symbol names cannot contain the symbol differentiator ({})".format(constants.BANG)) + raise ValueError(f"Symbol names cannot contain the symbol differentiator ({constants.BANG})") # Scope can be added at a later date self._location = None diff --git a/volatility3/framework/layers/crash.py b/volatility3/framework/layers/crash.py index 328e73ed4d..c690c8d8fa 100644 --- 
a/volatility3/framework/layers/crash.py +++ b/volatility3/framework/layers/crash.py @@ -70,8 +70,8 @@ def __init__(self, context: interfaces.context.ContextInterface, config_path: st # Verify that it is a supported format if header.DumpType not in self.supported_dumptypes: - vollog.log(constants.LOGLEVEL_VVVV, "unsupported dump format 0x{:x}".format(header.DumpType)) - raise WindowsCrashDumpFormatException(name, "unsupported dump format 0x{:x}".format(header.DumpType)) + vollog.log(constants.LOGLEVEL_VVVV, f"unsupported dump format 0x{header.DumpType:x}") + raise WindowsCrashDumpFormatException(name, f"unsupported dump format 0x{header.DumpType:x}") # Then call the super, which will call load_segments (which needs the base_layer before it'll work) super().__init__(context, config_path, name) @@ -143,11 +143,11 @@ def _load_segments(self) -> None: segment_length = (last_bit_seen - first_bit + 1) * 0x1000 segments.append((first_bit * 0x1000, first_offset, segment_length, segment_length)) else: - vollog.log(constants.LOGLEVEL_VVVV, "unsupported dump format 0x{:x}".format(self.dump_type)) - raise WindowsCrashDumpFormatException(self.name, "unsupported dump format 0x{:x}".format(self.dump_type)) + vollog.log(constants.LOGLEVEL_VVVV, f"unsupported dump format 0x{self.dump_type:x}") + raise WindowsCrashDumpFormatException(self.name, f"unsupported dump format 0x{self.dump_type:x}") if len(segments) == 0: - raise WindowsCrashDumpFormatException(self.name, "No Crash segments defined in {}".format(self._base_layer)) + raise WindowsCrashDumpFormatException(self.name, f"No Crash segments defined in {self._base_layer}") else: # report the segments for debugging. this is valuable for dev/troubleshooting but # not important enough for a dedicated plugin. 
@@ -167,15 +167,15 @@ def check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: header_data = base_layer.read(offset, cls._magic_struct.size) except exceptions.InvalidAddressException: raise WindowsCrashDumpFormatException(base_layer.name, - "Crashdump header not found at offset {}".format(offset)) + f"Crashdump header not found at offset {offset}") (signature, validdump) = cls._magic_struct.unpack(header_data) if signature != cls.SIGNATURE: raise WindowsCrashDumpFormatException( - base_layer.name, "Bad signature 0x{:x} at file offset 0x{:x}".format(signature, offset)) + base_layer.name, f"Bad signature 0x{signature:x} at file offset 0x{offset:x}") if validdump != cls.VALIDDUMP: raise WindowsCrashDumpFormatException(base_layer.name, - "Invalid dump 0x{:x} at file offset 0x{:x}".format(validdump, offset)) + f"Invalid dump 0x{validdump:x} at file offset 0x{offset:x}") return signature, validdump diff --git a/volatility3/framework/layers/elf.py b/volatility3/framework/layers/elf.py index 48876c2867..4eb93a1c88 100644 --- a/volatility3/framework/layers/elf.py +++ b/volatility3/framework/layers/elf.py @@ -46,7 +46,7 @@ def _load_segments(self) -> None: segments.append((int(phdr.p_paddr), int(phdr.p_offset), int(phdr.p_memsz), int(phdr.p_memsz))) if len(segments) == 0: - raise ElfFormatException(self.name, "No ELF segments defined in {}".format(self._base_layer)) + raise ElfFormatException(self.name, f"No ELF segments defined in {self._base_layer}") self._segments = segments @@ -56,12 +56,12 @@ def _check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: header_data = base_layer.read(offset, cls._header_struct.size) except exceptions.InvalidAddressException: raise ElfFormatException(base_layer.name, - "Offset 0x{:0x} does not exist within the base layer".format(offset)) + f"Offset 0x{offset:0x} does not exist within the base layer") (magic, elf_class, elf_data_encoding, elf_version) = cls._header_struct.unpack(header_data) if 
magic != cls.MAGIC: - raise ElfFormatException(base_layer.name, "Bad magic 0x{:x} at file offset 0x{:x}".format(magic, offset)) + raise ElfFormatException(base_layer.name, f"Bad magic 0x{magic:x} at file offset 0x{offset:x}") if elf_class != cls.ELF_CLASS: - raise ElfFormatException(base_layer.name, "ELF class is not 64-bit (2): {:d}".format(elf_class)) + raise ElfFormatException(base_layer.name, f"ELF class is not 64-bit (2): {elf_class:d}") # Virtualbox uses an ELF version of 0, which isn't to specification, but is ok to deal with return True @@ -78,7 +78,7 @@ def stack(cls, if not Elf64Layer._check_header(context.layers[layer_name]): return None except ElfFormatException as excp: - vollog.log(constants.LOGLEVEL_VVVV, "Exception: {}".format(excp)) + vollog.log(constants.LOGLEVEL_VVVV, f"Exception: {excp}") return None new_name = context.layers.free_layer_name("Elf64Layer") context.config[interfaces.configuration.path_join(new_name, "base_layer")] = layer_name diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 94b15d3a0e..555a8417bc 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -107,7 +107,7 @@ def _translate(self, offset: int) -> Tuple[int, int, str]: # Now we're done if not self._page_is_valid(entry): raise exceptions.PagedInvalidAddressException(self.name, offset, position + 1, entry, - "Page Fault at entry {} in page entry".format(hex(entry))) + f"Page Fault at entry {hex(entry)} in page entry") page = self._mask(entry, self._maxphyaddr - 1, position + 1) | self._mask(offset, position, 0) return page, 1 << (position + 1), self._base_layer diff --git a/volatility3/framework/layers/leechcore.py b/volatility3/framework/layers/leechcore.py index d968fa715f..8c492ca852 100644 --- a/volatility3/framework/layers/leechcore.py +++ b/volatility3/framework/layers/leechcore.py @@ -43,7 +43,7 @@ def handle(self): try: self._handle = leechcorepyc.LeechCore(self._device) except 
TypeError: - raise IOError("Unable to open LeechCore device {}".format(self._device)) + raise IOError(f"Unable to open LeechCore device {self._device}") return self._handle def fileno(self): diff --git a/volatility3/framework/layers/lime.py b/volatility3/framework/layers/lime.py index ae93f7b7a6..4f4a66f185 100644 --- a/volatility3/framework/layers/lime.py +++ b/volatility3/framework/layers/lime.py @@ -45,7 +45,7 @@ def _load_segments(self) -> None: if start < maxaddr or end < start: raise LimeFormatException( - self.name, "Bad start/end 0x{:x}/0x{:x} at file offset 0x{:x}".format(start, end, offset)) + self.name, f"Bad start/end 0x{start:x}/0x{end:x} at file offset 0x{offset:x}") segment_length = end - start + 1 segments.append((start, offset + header_size, segment_length, segment_length)) @@ -53,7 +53,7 @@ def _load_segments(self) -> None: offset = offset + header_size + segment_length if len(segments) == 0: - raise LimeFormatException(self.name, "No LiME segments defined in {}".format(self._base_layer)) + raise LimeFormatException(self.name, f"No LiME segments defined in {self._base_layer}") self._segments = segments @@ -63,13 +63,13 @@ def _check_header(cls, base_layer: interfaces.layers.DataLayerInterface, offset: header_data = base_layer.read(offset, cls._header_struct.size) except exceptions.InvalidAddressException: raise LimeFormatException(base_layer.name, - "Offset 0x{:0x} does not exist within the base layer".format(offset)) + f"Offset 0x{offset:0x} does not exist within the base layer") (magic, version, start, end, reserved) = cls._header_struct.unpack(header_data) if magic != cls.MAGIC: - raise LimeFormatException(base_layer.name, "Bad magic 0x{:x} at file offset 0x{:x}".format(magic, offset)) + raise LimeFormatException(base_layer.name, f"Bad magic 0x{magic:x} at file offset 0x{offset:x}") if version != cls.VERSION: raise LimeFormatException(base_layer.name, - "Unexpected version {:d} at file offset 0x{:x}".format(version, offset)) + f"Unexpected 
version {version:d} at file offset 0x{offset:x}") return start, end diff --git a/volatility3/framework/layers/linear.py b/volatility3/framework/layers/linear.py index 80f71ec35e..40341d86de 100644 --- a/volatility3/framework/layers/linear.py +++ b/volatility3/framework/layers/linear.py @@ -16,13 +16,13 @@ def translate(self, offset: int, ignore_errors: bool = False) -> Tuple[Optional[ original_offset, _, mapped_offset, _, layer = mapping[0] if original_offset != offset: raise exceptions.LayerException(self.name, - "Layer {} claims to map linearly but does not".format(self.name)) + f"Layer {self.name} claims to map linearly but does not") else: if ignore_errors: # We should only hit this if we ignored errors, but check anyway return None, None raise exceptions.InvalidAddressException(self.name, offset, - "Cannot translate {} in layer {}".format(offset, self.name)) + f"Cannot translate {offset} in layer {self.name}") return mapped_offset, layer # ## Read/Write functions for mapped pages @@ -37,7 +37,7 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes: for (offset, _, mapped_offset, mapped_length, layer) in self.mapping(offset, length, ignore_errors = pad): if not pad and offset > current_offset: raise exceptions.InvalidAddressException( - self.name, current_offset, "Layer {} cannot map offset: {}".format(self.name, current_offset)) + self.name, current_offset, f"Layer {self.name} cannot map offset: {current_offset}") elif offset > current_offset: output += [b"\x00" * (offset - current_offset)] current_offset = offset @@ -57,7 +57,7 @@ def write(self, offset: int, value: bytes) -> None: for (offset, _, mapped_offset, length, layer) in self.mapping(offset, length): if offset > current_offset: raise exceptions.InvalidAddressException( - self.name, current_offset, "Layer {} cannot map offset: {}".format(self.name, current_offset)) + self.name, current_offset, f"Layer {self.name} cannot map offset: {current_offset}") elif offset < current_offset: 
raise exceptions.LayerException(self.name, "Mapping returned an overlapping element") self._context.layers.write(layer, mapped_offset, value[:length]) diff --git a/volatility3/framework/layers/msf.py b/volatility3/framework/layers/msf.py index 442ba87289..7713c50247 100644 --- a/volatility3/framework/layers/msf.py +++ b/volatility3/framework/layers/msf.py @@ -224,7 +224,7 @@ def maximum_address(self) -> int: def _pdb_layer(self) -> PdbMultiStreamFormat: if self._base_layer not in self._context.layers: raise PDBFormatException(self._base_layer, - "No PdbMultiStreamFormat layer found: {}".format(self._base_layer)) + f"No PdbMultiStreamFormat layer found: {self._base_layer}") result = self._context.layers[self._base_layer] if isinstance(result, PdbMultiStreamFormat): return result diff --git a/volatility3/framework/layers/physical.py b/volatility3/framework/layers/physical.py index 6010725bdf..998d8cf123 100644 --- a/volatility3/framework/layers/physical.py +++ b/volatility3/framework/layers/physical.py @@ -164,7 +164,7 @@ def write(self, offset: int, data: bytes) -> None: if not self._file.writable(): if not self._write_warning: self._write_warning = True - vollog.warning("Try to write to unwritable layer: {}".format(self.name)) + vollog.warning(f"Try to write to unwritable layer: {self.name}") return None if not self.is_valid(offset, len(data)): invalid_address = offset diff --git a/volatility3/framework/layers/qemu.py b/volatility3/framework/layers/qemu.py index caf768ed6c..95d7d2a8f1 100644 --- a/volatility3/framework/layers/qemu.py +++ b/volatility3/framework/layers/qemu.py @@ -179,21 +179,21 @@ def _load_segments(self): index += 4 if section_id != current_section_id: raise exceptions.LayerException( - self._name, 'QEMU section footer mismatch: {} and {}'.format(current_section_id, section_id)) + self._name, f'QEMU section footer mismatch: {current_section_id} and {section_id}') elif section_byte == self.QEVM_EOF: pass else: - raise 
exceptions.LayerException(self._name, 'QEMU unknown section encountered: {}'.format(section_byte)) + raise exceptions.LayerException(self._name, f'QEMU unknown section encountered: {section_byte}') def extract_data(self, index, name, version_id): if name == 'ram': if version_id != 4: - raise exceptions.LayerException("QEMU unknown RAM version_id {}".format(version_id)) + raise exceptions.LayerException(f"QEMU unknown RAM version_id {version_id}") new_segments, index = self._get_ram_segments(index, self._configuration.get('page_size', None) or 4096) self._segments += new_segments elif name == 'spapr/htab': if version_id != 1: - raise exceptions.LayerException("QEMU unknown HTAB version_id {}".format(version_id)) + raise exceptions.LayerException(f"QEMU unknown HTAB version_id {version_id}") header = self.context.object(self._qemu_table_name + constants.BANG + 'unsigned long', offset = index, layer_name = self._base_layer) diff --git a/volatility3/framework/layers/registry.py b/volatility3/framework/layers/registry.py index 6f0d66bebe..ed6d045f6d 100644 --- a/volatility3/framework/layers/registry.py +++ b/volatility3/framework/layers/registry.py @@ -48,7 +48,7 @@ def __init__(self, # TODO: Check the checksum if self.hive.Signature != 0xbee0bee0: raise RegistryFormatException( - self.name, "Registry hive at {} does not have a valid signature".format(self._hive_offset)) + self.name, f"Registry hive at {self._hive_offset} does not have a valid signature") # Win10 17063 introduced the Registry process to map most hives. 
Check # if it exists and update RegistryHive._base_layer diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index 88ae9f2ec3..fb8bdb7cc5 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -74,7 +74,7 @@ def __init__(self, self._enable_cache = enable_cache if self.list_handlers: vollog.log(constants.LOGLEVEL_VVV, - "Available URL handlers: {}".format(", ".join([x.__name__ for x in self._handlers]))) + f"Available URL handlers: {', '.join([x.__name__ for x in self._handlers])}") self.__class__.list_handlers = False def uses_cache(self, url: str) -> bool: @@ -132,7 +132,7 @@ def open(self, url: str, mode: str = "rb") -> Any: "data_" + hashlib.sha512(bytes(url, 'raw_unicode_escape')).hexdigest() + ".cache") if not os.path.exists(temp_filename): - vollog.debug("Caching file at: {}".format(temp_filename)) + vollog.debug(f"Caching file at: {temp_filename}") try: content_length = fp.info().get('Content-Length', -1) @@ -147,7 +147,7 @@ def open(self, url: str, mode: str = "rb") -> Any: count += len(block) if self._progress_callback: self._progress_callback(count * 100 / max(count, int(content_length)), - "Reading file {}".format(url)) + f"Reading file {url}") cache_file.write(block) block = fp.read(block_size) cache_file.close() @@ -237,13 +237,13 @@ def default_open(req: urllib.request.Request) -> Optional[Any]: if req.type == 'jar': subscheme, remainder = req.full_url.split(":")[1], ":".join(req.full_url.split(":")[2:]) if subscheme != 'file': - vollog.log(constants.LOGLEVEL_VVV, "Unsupported jar subscheme {}".format(subscheme)) + vollog.log(constants.LOGLEVEL_VVV, f"Unsupported jar subscheme {subscheme}") return None zipsplit = remainder.split("!") if len(zipsplit) != 2: vollog.log(constants.LOGLEVEL_VVV, - "Path did not contain exactly one fragment indicator: {}".format(remainder)) + f"Path did not contain exactly one fragment indicator: {remainder}") return None 
zippath, filepath = zipsplit diff --git a/volatility3/framework/layers/segmented.py b/volatility3/framework/layers/segmented.py index 8334c722df..076838da6e 100644 --- a/volatility3/framework/layers/segmented.py +++ b/volatility3/framework/layers/segmented.py @@ -67,7 +67,7 @@ def _find_segment(self, offset: int, next: bool = False) -> Tuple[int, int, int, if next: if i < len(self._segments): return self._segments[i] - raise exceptions.InvalidAddressException(self.name, offset, "Invalid address at {:0x}".format(offset)) + raise exceptions.InvalidAddressException(self.name, offset, f"Invalid address at {offset:0x}") def mapping(self, offset: int, diff --git a/volatility3/framework/layers/vmware.py b/volatility3/framework/layers/vmware.py index cd9cf651c6..85e961b246 100644 --- a/volatility3/framework/layers/vmware.py +++ b/volatility3/framework/layers/vmware.py @@ -53,7 +53,7 @@ def _read_header(self) -> None: data = meta_layer.read(0, header_size) magic, unknown, groupCount = struct.unpack(self.header_structure, data) if magic not in [b"\xD0\xBE\xD2\xBE", b"\xD1\xBA\xD1\xBA", b"\xD2\xBE\xD2\xBE", b"\xD3\xBE\xD3\xBE"]: - raise VmwareFormatException(self.name, "Wrong magic bytes for Vmware layer: {}".format(repr(magic))) + raise VmwareFormatException(self.name, f"Wrong magic bytes for Vmware layer: {repr(magic)}") version = magic[0] & 0xf group_size = struct.calcsize(self.group_structure) @@ -171,7 +171,7 @@ def stack(cls, except IOError: pass - vollog.log(constants.LOGLEVEL_VVVV, "Metadata found: VMSS ({}) or VMSN ({})".format(vmss_success, vmsn_success)) + vollog.log(constants.LOGLEVEL_VVVV, f"Metadata found: VMSS ({vmss_success}) or VMSN ({vmsn_success})") if not vmss_success and not vmsn_success: return None diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 3f2e70b026..bf8dda515c 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -31,7 +31,7 @@ def 
convert_data_to_value(data: bytes, struct_type: Type[TUnion[int, float, byte elif struct_type in [bytes, str]: struct_format = str(data_format.length) + "s" else: - raise TypeError("Cannot construct struct format for type {}".format(type(struct_type))) + raise TypeError(f"Cannot construct struct format for type {type(struct_type)}") return struct.unpack(struct_format, data)[0] @@ -41,7 +41,7 @@ def convert_value_to_data(value: TUnion[int, float, bytes, str, bool], struct_ty data_format: DataFormatInfo) -> bytes: """Converts a particular value to a series of bytes.""" if not isinstance(value, struct_type): - raise TypeError("Written value is not of the correct type for {}".format(struct_type.__name__)) + raise TypeError(f"Written value is not of the correct type for {struct_type.__name__}") if struct_type == int and isinstance(value, int): # Doubling up on the isinstance is for mypy @@ -61,7 +61,7 @@ def convert_value_to_data(value: TUnion[int, float, bytes, str, bool], struct_ty value = bytes(value, 'latin-1') struct_format = str(data_format.length) + "s" else: - raise TypeError("Cannot construct struct format for type {}".format(type(struct_type))) + raise TypeError(f"Cannot construct struct format for type {type(struct_type)}") return struct.pack(struct_format, value) @@ -459,7 +459,7 @@ def _generate_inverse_choices(cls, choices: Dict[str, int]) -> Dict[int, str]: # Technically this shouldn't be a problem, but since we inverse cache # and can't map one value to two possibilities we throw an exception during build # We can remove/work around this if it proves a common issue - raise ValueError("Enumeration value {} duplicated as {} and {}".format(v, k, inverse_choices[v])) + raise ValueError(f"Enumeration value {v} duplicated as {k} and {inverse_choices[v]}") inverse_choices[v] = k return inverse_choices @@ -489,7 +489,7 @@ def __getattr__(self, attr: str) -> str: """Returns the value for a specific name.""" if attr in self._vol['choices']: return 
self._vol['choices'][attr] - raise AttributeError("Unknown attribute {} for Enumeration {}".format(attr, self._vol['type_name'])) + raise AttributeError(f"Unknown attribute {attr} for Enumeration {self._vol['type_name']}") def write(self, value: bytes): raise NotImplementedError("Writing to Enumerations is not yet implemented") @@ -589,7 +589,7 @@ def relative_child_offset(cls, template: interfaces.objects.Template, child: str the child member.""" if 'subtype' in template.vol and child == 'subtype': return 0 - raise IndexError("Member not present in array template: {}".format(child)) + raise IndexError(f"Member not present in array template: {child}") @overload def __getitem__(self, i: int) -> interfaces.objects.Template: @@ -660,10 +660,10 @@ def __repr__(self) -> str: """Describes the object appropriately""" extras = member_name = '' if self.vol.native_layer_name != self.vol.layer_name: - extras += " (Native: {})".format(self.vol.native_layer_name) + extras += f" (Native: {self.vol.native_layer_name})" if self.vol.member_name: - member_name = " (.{})".format(self.vol.member_name) - return "<{} {}{}: {} @ 0x{:x} #{}{}>".format(self.__class__.__name__, self.vol.type_name, member_name, self.vol.layer_name, self.vol.offset, self.vol.size, extras) + member_name = f" (.{self.vol.member_name})" + return f"<{self.__class__.__name__} {self.vol.type_name}{member_name}: {self.vol.layer_name} @ 0x{self.vol.offset:x} #{self.vol.size}{extras}>" class VolTemplateProxy(interfaces.objects.ObjectInterface.VolTemplateProxy): @@ -701,7 +701,7 @@ def relative_child_offset(cls, template: interfaces.objects.Template, child: str """Returns the relative offset of a child to its parent.""" retlist = template.vol.members.get(child, None) if retlist is None: - raise IndexError("Member not present in template: {}".format(child)) + raise IndexError(f"Member not present in template: {child}") return retlist[0] @classmethod @@ -722,9 +722,9 @@ def _check_members(cls, members: Dict[str, 
Tuple[int, interfaces.objects.Templat agg_name = agg_type.__name__ assert isinstance(members, collections.abc.Mapping) - "{} members parameter must be a mapping: {}".format(agg_name, type(members)) + f"{agg_name} members parameter must be a mapping: {type(members)}" assert all([(isinstance(member, tuple) and len(member) == 2) for member in members.values()]) - "{} members must be a tuple of relative_offsets and templates".format(agg_name) + f"{agg_name} members must be a tuple of relative_offsets and templates" def member(self, attr: str = 'member') -> object: """Specifically named method for retrieving members.""" @@ -758,7 +758,7 @@ def __getattr__(self, attr: str) -> Any: for agg_type in AggregateTypes: if isinstance(self, agg_type): agg_name = agg_type.__name__ - raise AttributeError("{} has no attribute: {}.{}".format(agg_name, self.vol.type_name, attr)) + raise AttributeError(f"{agg_name} has no attribute: {self.vol.type_name}.{attr}") # Disable messing around with setattr until the consequences have been considered properly # For example pdbutil constructs objects and then sets values for them @@ -782,7 +782,7 @@ def write(self, value): if isinstance(self, agg_type): agg_name = agg_type.__name__ raise TypeError( - "{}s cannot be written to directly, individual members must be written instead".format(agg_name)) + f"{agg_name}s cannot be written to directly, individual members must be written instead") class StructType(AggregateType): diff --git a/volatility3/framework/objects/templates.py b/volatility3/framework/objects/templates.py index 05938ff9c5..62094ff3d3 100644 --- a/volatility3/framework/objects/templates.py +++ b/volatility3/framework/objects/templates.py @@ -94,7 +94,7 @@ def _unresolved(self, *args, **kwargs) -> Any: symbol_name = type_name[-1] raise exceptions.SymbolError( symbol_name, table_name, - "Template contains no information about its structure: {}".format(self.vol.type_name)) + f"Template contains no information about its structure: 
{self.vol.type_name}") size = property(_unresolved) # type: ClassVar[Any] replace_child = _unresolved # type: ClassVar[Any] diff --git a/volatility3/framework/plugins/__init__.py b/volatility3/framework/plugins/__init__.py index aa9701a4cd..7dbce52085 100644 --- a/volatility3/framework/plugins/__init__.py +++ b/volatility3/framework/plugins/__init__.py @@ -44,7 +44,7 @@ def construct_plugin(context: interfaces.context.ContextInterface, if unsatisfied: for error in errors: error_string = [x for x in error.format_exception_only()][-1] - vollog.warning("Automagic exception occurred: {}".format(error_string[:-1])) + vollog.warning(f"Automagic exception occurred: {error_string[:-1]}") vollog.log(constants.LOGLEVEL_V, "".join(error.format(chain = True))) raise exceptions.UnsatisfiedException(unsatisfied) diff --git a/volatility3/framework/plugins/configwriter.py b/volatility3/framework/plugins/configwriter.py index 6dc504fb5c..5e96abcc52 100644 --- a/volatility3/framework/plugins/configwriter.py +++ b/volatility3/framework/plugins/configwriter.py @@ -42,7 +42,7 @@ def _generator(self): with self.open(filename) as file_data: file_data.write(bytes(json.dumps(config, sort_keys = True, indent = 2), 'raw_unicode_escape')) except Exception as excp: - vollog.warning("Unable to JSON encode configuration: {}".format(excp)) + vollog.warning(f"Unable to JSON encode configuration: {excp}") for k, v in config.items(): yield (0, (k, json.dumps(v))) diff --git a/volatility3/framework/plugins/isfinfo.py b/volatility3/framework/plugins/isfinfo.py index c1f5daef1b..e697994b74 100644 --- a/volatility3/framework/plugins/isfinfo.py +++ b/volatility3/framework/plugins/isfinfo.py @@ -117,7 +117,7 @@ def check_valid(data): windows_info = os.path.splitext(os.path.basename(entry))[0] valid = check_valid(data) except (UnicodeDecodeError, json.decoder.JSONDecodeError): - vollog.warning("Invalid ISF: {}".format(entry)) + vollog.warning(f"Invalid ISF: {entry}") yield (0, (entry, valid, num_bases, 
num_types, num_symbols, num_enums, windows_info, linux_banner, mac_banner)) diff --git a/volatility3/framework/plugins/layerwriter.py b/volatility3/framework/plugins/layerwriter.py index 3597792d09..cfa83ade26 100644 --- a/volatility3/framework/plugins/layerwriter.py +++ b/volatility3/framework/plugins/layerwriter.py @@ -73,7 +73,7 @@ def write_layer( data = layer.read(i, current_chunk_size, pad = True) file_handle.write(data) if progress_callback: - progress_callback((i / layer.maximum_address) * 100, 'Writing layer {}'.format(layer_name)) + progress_callback((i / layer.maximum_address) * 100, f'Writing layer {layer_name}') return file_handle def _generator(self): @@ -91,7 +91,7 @@ def _generator(self): for name in self.config['layers']: # Check the layer exists and validate the output file if name not in self.context.layers: - yield 0, ('Layer Name {} does not exist'.format(name), ) + yield 0, (f'Layer Name {name} does not exist', ) else: output_name = self.config.get('output', ".".join([name, "raw"])) try: @@ -103,9 +103,9 @@ def _generator(self): progress_callback = self._progress_callback) file_handle.close() except IOError as excp: - yield 0, ('Layer cannot be written to {}: {}'.format(self.config['output_name'], excp), ) + yield 0, (f"Layer cannot be written to {self.config['output_name']}: {excp}", ) - yield 0, ('Layer has been written to {}'.format(output_name), ) + yield 0, (f'Layer has been written to {output_name}', ) def _generate_layers(self): """List layer names from this run""" diff --git a/volatility3/framework/plugins/linux/bash.py b/volatility3/framework/plugins/linux/bash.py index 8f9bc9d246..3e8ac58903 100644 --- a/volatility3/framework/plugins/linux/bash.py +++ b/volatility3/framework/plugins/linux/bash.py @@ -106,5 +106,5 @@ def generate_timeline(self): self.config['vmlinux'], filter_func = filter_func)): _depth, row_data = row - description = "{} ({}): \"{}\"".format(row_data[0], row_data[1], row_data[3]) + description = f"{row_data[0]} 
({row_data[1]}): \"{row_data[3]}\"" yield (description, timeliner.TimeLinerType.CREATED, row_data[2]) diff --git a/volatility3/framework/plugins/linux/check_creds.py b/volatility3/framework/plugins/linux/check_creds.py index 2333eb1231..20e3d26fb5 100644 --- a/volatility3/framework/plugins/linux/check_creds.py +++ b/volatility3/framework/plugins/linux/check_creds.py @@ -55,7 +55,7 @@ def _generator(self): if len(pids) > 1: pid_str = "" for pid in pids: - pid_str = pid_str + "{0:d}, ".format(pid) + pid_str = pid_str + f"{pid:d}, " pid_str = pid_str[:-2] yield (0, [str(pid_str)]) diff --git a/volatility3/framework/plugins/mac/bash.py b/volatility3/framework/plugins/mac/bash.py index 6b453759e6..c769d405ac 100644 --- a/volatility3/framework/plugins/mac/bash.py +++ b/volatility3/framework/plugins/mac/bash.py @@ -107,5 +107,5 @@ def generate_timeline(self): for row in self._generator( list_tasks(self.context, self.config['primary'], self.config['darwin'], filter_func = filter_func)): _depth, row_data = row - description = "{} ({}): \"{}\"".format(row_data[0], row_data[1], row_data[3]) + description = f"{row_data[0]} ({row_data[1]}): \"{row_data[3]}\"" yield (description, timeliner.TimeLinerType.CREATED, row_data[2]) diff --git a/volatility3/framework/plugins/mac/ifconfig.py b/volatility3/framework/plugins/mac/ifconfig.py index 5c4e5728e4..4aeaf15647 100644 --- a/volatility3/framework/plugins/mac/ifconfig.py +++ b/volatility3/framework/plugins/mac/ifconfig.py @@ -45,7 +45,7 @@ def _generator(self): for ifaddr in mac.MacUtilities.walk_tailq(ifnet.if_addrhead, "ifa_link"): ip = ifaddr.ifa_addr.get_address() - yield (0, ("{0}{1}".format(name, unit), ip, mac_addr, prom)) + yield (0, (f"{name}{unit}", ip, mac_addr, prom)) def run(self): return renderers.TreeGrid([("Interface", str), ("IP Address", str), ("Mac Address", str), diff --git a/volatility3/framework/plugins/mac/netstat.py b/volatility3/framework/plugins/mac/netstat.py index 4b9f539fdd..3b6bba9007 100644 --- 
a/volatility3/framework/plugins/mac/netstat.py +++ b/volatility3/framework/plugins/mac/netstat.py @@ -95,7 +95,7 @@ def _generator(self): continue yield (0, (format_hints.Hex(socket.vol.offset), "UNIX", path, 0, "", 0, "", - "{}/{:d}".format(task_name, pid))) + f"{task_name}/{pid:d}")) elif family in [2, 30]: state = socket.get_state() @@ -107,7 +107,7 @@ def _generator(self): (lip, lport, rip, rport) = vals yield (0, (format_hints.Hex(socket.vol.offset), proto, lip, lport, rip, rport, state, - "{}/{:d}".format(task_name, pid))) + f"{task_name}/{pid:d}")) def run(self): return renderers.TreeGrid([("Offset", format_hints.Hex), ("Proto", str), ("Local IP", str), ("Local Port", int), diff --git a/volatility3/framework/plugins/mac/pslist.py b/volatility3/framework/plugins/mac/pslist.py index 8ef7876f14..0829b42fbd 100644 --- a/volatility3/framework/plugins/mac/pslist.py +++ b/volatility3/framework/plugins/mac/pslist.py @@ -68,7 +68,7 @@ def get_list_tasks( list_tasks = cls.list_tasks_pid_hash_table else: raise ValueError("Impossible method choice chosen") - vollog.debug("Using method {}".format(method)) + vollog.debug(f"Using method {method}") return list_tasks diff --git a/volatility3/framework/plugins/timeliner.py b/volatility3/framework/plugins/timeliner.py index 59b7701f0a..e95bba126f 100644 --- a/volatility3/framework/plugins/timeliner.py +++ b/volatility3/framework/plugins/timeliner.py @@ -113,9 +113,9 @@ def _generator(self, runable_plugins: List[TimeLinerInterface]) -> Optional[Iter for plugin in runable_plugins: plugin_name = plugin.__class__.__name__ self._progress_callback((runable_plugins.index(plugin) * 100) // len(runable_plugins), - "Running plugin {}...".format(plugin_name)) + f"Running plugin {plugin_name}...") try: - vollog.log(logging.INFO, "Running {}".format(plugin_name)) + vollog.log(logging.INFO, f"Running {plugin_name}") for (item, timestamp_type, timestamp) in plugin.generate_timeline(): times = self.timeline.get((plugin_name, item), {}) if 
times.get(timestamp_type, None) is not None: @@ -131,7 +131,7 @@ def _generator(self, runable_plugins: List[TimeLinerInterface]) -> Optional[Iter times.get(TimeLinerType.CHANGED, renderers.NotApplicableValue()) ])) except Exception: - vollog.log(logging.INFO, "Exception occurred running plugin: {}".format(plugin_name)) + vollog.log(logging.INFO, f"Exception occurred running plugin: {plugin_name}") vollog.log(logging.DEBUG, traceback.format_exc()) for data_item in sorted(data, key = self._sort_function): yield data_item @@ -206,7 +206,7 @@ def run(self): plugins_to_run.append(plugin) except exceptions.UnsatisfiedException as excp: # Remove the failed plugin from the list and continue - vollog.debug("Unable to satisfy {}: {}".format(plugin_class.__name__, excp.unsatisfied)) + vollog.debug(f"Unable to satisfy {plugin_class.__name__}: {excp.unsatisfied}") continue if self.config.get('record-config', False): diff --git a/volatility3/framework/plugins/windows/callbacks.py b/volatility3/framework/plugins/windows/callbacks.py index 545ff9811f..4c58c62c0d 100644 --- a/volatility3/framework/plugins/windows/callbacks.py +++ b/volatility3/framework/plugins/windows/callbacks.py @@ -90,7 +90,7 @@ def list_notify_routines(cls, context: interfaces.context.ContextInterface, laye try: symbol_offset = ntkrnlmp.get_symbol(symbol_name).address except exceptions.SymbolError: - vollog.debug("Cannot find {}".format(symbol_name)) + vollog.debug(f"Cannot find {symbol_name}") continue if is_vista_or_later and extended_list: diff --git a/volatility3/framework/plugins/windows/cmdline.py b/volatility3/framework/plugins/windows/cmdline.py index bf5cc4bfe6..4b814b9806 100644 --- a/volatility3/framework/plugins/windows/cmdline.py +++ b/volatility3/framework/plugins/windows/cmdline.py @@ -66,10 +66,10 @@ def _generator(self, procs): result_text = self.get_cmdline(self.context, self.config["nt_symbols"], proc) except exceptions.SwappedInvalidAddressException as exp: - result_text = "Required memory 
at {0:#x} is inaccessible (swapped)".format(exp.invalid_address) + result_text = f"Required memory at {exp.invalid_address:#x} is inaccessible (swapped)" except exceptions.PagedInvalidAddressException as exp: - result_text = "Required memory at {0:#x} is not valid (process exited?)".format(exp.invalid_address) + result_text = f"Required memory at {exp.invalid_address:#x} is not valid (process exited?)" except exceptions.InvalidAddressException as exp: result_text = "Process {}: Required memory at {:#x} is not valid (incomplete layer {}?)".format( diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py index 46ea6f1f0e..4c74582d45 100644 --- a/volatility3/framework/plugins/windows/crashinfo.py +++ b/volatility3/framework/plugins/windows/crashinfo.py @@ -34,7 +34,7 @@ def _generator(self, layer: crash.WindowsCrashDump32Layer): dump_type = "Bitmap Dump (0x5)" else: # this should never happen since the crash layer only accepts 0x1 and 0x5 - dump_type = "Unknown/Unsupported ({:#x})".format(header.DumpType) + dump_type = f"Unknown/Unsupported ({header.DumpType:#x})" if header.DumpType == 0x5: summary_header = layer.get_summary_header() diff --git a/volatility3/framework/plugins/windows/dlllist.py b/volatility3/framework/plugins/windows/dlllist.py index 4740e5751b..e02ad6e960 100644 --- a/volatility3/framework/plugins/windows/dlllist.py +++ b/volatility3/framework/plugins/windows/dlllist.py @@ -83,7 +83,7 @@ def dump_pe(cls, file_handle.seek(offset) file_handle.write(data) except (IOError, exceptions.VolatilityException, OverflowError, ValueError) as excp: - vollog.debug("Unable to dump dll at offset {}: {}".format(dll_entry.DllBase, excp)) + vollog.debug(f"Unable to dump dll at offset {dll_entry.DllBase}: {excp}") return None return file_handle @@ -131,7 +131,7 @@ def _generator(self, procs): entry, self.open, proc_layer_name, - prefix = "pid.{}.".format(proc_id)) + prefix = f"pid.{proc_id}.") file_output = 
"Error outputting file" if file_handle: file_handle.close() diff --git a/volatility3/framework/plugins/windows/dumpfiles.py b/volatility3/framework/plugins/windows/dumpfiles.py index f1e54d0f97..0258fbf21f 100755 --- a/volatility3/framework/plugins/windows/dumpfiles.py +++ b/volatility3/framework/plugins/windows/dumpfiles.py @@ -80,13 +80,13 @@ def dump_file_producer(cls, file_object: interfaces.objects.ObjectInterface, filedata.write(data) if not bytes_written: - vollog.debug("No data is cached for the file at {0:#x}".format(file_object.vol.offset)) + vollog.debug(f"No data is cached for the file at {file_object.vol.offset:#x}") return None else: - vollog.debug("Stored {}".format(filedata.preferred_filename)) + vollog.debug(f"Stored {filedata.preferred_filename}") return filedata except exceptions.InvalidAddressException: - vollog.debug("Unable to dump file at {0:#x}".format(file_object.vol.offset)) + vollog.debug(f"Unable to dump file at {file_object.vol.offset:#x}") return None @classmethod @@ -105,7 +105,7 @@ def process_file_object(cls, context: interfaces.context.ContextInterface, prima # use the "File" object type, such as \Device\Tcp and \Device\NamedPipe. 
if file_obj.DeviceObject.DeviceType not in [FILE_DEVICE_DISK, FILE_DEVICE_NETWORK_FILE_SYSTEM]: vollog.log(constants.LOGLEVEL_VVV, - "The file object at {0:#x} is not a file on disk".format(file_obj.vol.offset)) + f"The file object at {file_obj.vol.offset:#x} is not a file on disk") return # Depending on the type of object (DataSection, ImageSection, SharedCacheMap) we may need to @@ -134,7 +134,7 @@ def process_file_object(cls, context: interfaces.context.ContextInterface, prima dump_parameters.append((control_area, memory_layer, extension)) except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "{0} is unavailable for file {1:#x}".format(member_name, file_obj.vol.offset)) + f"{member_name} is unavailable for file {file_obj.vol.offset:#x}") # The SharedCacheMap is handled differently than the caches above. # We carve these "pages" from the primary_layer. @@ -145,7 +145,7 @@ def process_file_object(cls, context: interfaces.context.ContextInterface, prima dump_parameters.append((shared_cache_map, primary_layer, "vacb")) except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "SharedCacheMap is unavailable for file {0:#x}".format(file_obj.vol.offset)) + f"SharedCacheMap is unavailable for file {file_obj.vol.offset:#x}") for memory_object, layer, extension in dump_parameters: cache_name = EXTENSION_CACHE_MAP[extension] @@ -187,7 +187,7 @@ def _generator(self, procs: List, offsets: List): object_table = proc.ObjectTable except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot access _EPROCESS.ObjectTable at {0:#x}".format(proc.vol.offset)) + f"Cannot access _EPROCESS.ObjectTable at {proc.vol.offset:#x}") continue for entry in handles_plugin.handles(object_table): @@ -200,7 +200,7 @@ def _generator(self, procs: List, offsets: List): yield (0, result) except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot extract file from _OBJECT_HEADER at 
{0:#x}".format(entry.vol.offset)) + f"Cannot extract file from _OBJECT_HEADER at {entry.vol.offset:#x}") # Pull file objects from the VADs. This will produce DLLs and EXEs that are # mapped into the process as images, but that the process doesn't have an @@ -224,7 +224,7 @@ def _generator(self, procs: List, offsets: List): yield (0, result) except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot extract file from VAD at {0:#x}".format(vad.vol.offset)) + f"Cannot extract file from VAD at {vad.vol.offset:#x}") elif offsets: # Now process any offsets explicitly requested by the user. @@ -242,7 +242,7 @@ def _generator(self, procs: List, offsets: List): for result in self.process_file_object(self.context, self.config["primary"], self.open, file_obj): yield (0, result) except exceptions.InvalidAddressException: - vollog.log(constants.LOGLEVEL_VVV, "Cannot extract file at {0:#x}".format(offset)) + vollog.log(constants.LOGLEVEL_VVV, f"Cannot extract file at {offset:#x}") def run(self): # a list of tuples (, ) where is the address and is True for virtual. 
diff --git a/volatility3/framework/plugins/windows/handles.py b/volatility3/framework/plugins/windows/handles.py index d08f888e31..49098bb8b1 100644 --- a/volatility3/framework/plugins/windows/handles.py +++ b/volatility3/framework/plugins/windows/handles.py @@ -203,7 +203,7 @@ def get_type_map(cls, context: interfaces.context.ContextInterface, layer_name: type_name = objt.Name.String except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot access _OBJECT_HEADER Name at {0:#x}".format(objt.vol.offset)) + f"Cannot access _OBJECT_HEADER Name at {objt.vol.offset:#x}") continue type_map[i] = type_name @@ -305,7 +305,7 @@ def _generator(self, procs): object_table = proc.ObjectTable except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot access _EPROCESS.ObjectType at {0:#x}".format(proc.vol.offset)) + f"Cannot access _EPROCESS.ObjectType at {proc.vol.offset:#x}") continue process_name = utility.array_to_string(proc.ImageFileName) @@ -320,10 +320,10 @@ def _generator(self, procs): obj_name = item.file_name_with_device() elif obj_type == "Process": item = entry.Body.cast("_EPROCESS") - obj_name = "{} Pid {}".format(utility.array_to_string(proc.ImageFileName), item.UniqueProcessId) + obj_name = f"{utility.array_to_string(proc.ImageFileName)} Pid {item.UniqueProcessId}" elif obj_type == "Thread": item = entry.Body.cast("_ETHREAD") - obj_name = "Tid {} Pid {}".format(item.Cid.UniqueThread, item.Cid.UniqueProcess) + obj_name = f"Tid {item.Cid.UniqueThread} Pid {item.Cid.UniqueProcess}" elif obj_type == "Key": item = entry.Body.cast("_CM_KEY_BODY") obj_name = item.get_full_key_name() @@ -335,7 +335,7 @@ def _generator(self, procs): except (exceptions.InvalidAddressException): vollog.log(constants.LOGLEVEL_VVV, - "Cannot access _OBJECT_HEADER at {0:#x}".format(entry.vol.offset)) + f"Cannot access _OBJECT_HEADER at {entry.vol.offset:#x}") continue yield (0, (proc.UniqueProcessId, process_name, 
format_hints.Hex(entry.Body.vol.offset), diff --git a/volatility3/framework/plugins/windows/hashdump.py b/volatility3/framework/plugins/windows/hashdump.py index 9cf46414a6..7c3d8777c7 100644 --- a/volatility3/framework/plugins/windows/hashdump.py +++ b/volatility3/framework/plugins/windows/hashdump.py @@ -68,7 +68,7 @@ def get_hive_key(cls, hive: registry.RegistryHive, key: str): result = hive.get_key(key) except KeyError: vollog.info( - "Unable to load the required registry key {}\\{} from this memory image".format(hive.get_name(), key)) + f"Unable to load the required registry key {hive.get_name()}\\{key} from this memory image") return result @classmethod @@ -84,7 +84,7 @@ def get_user_keys(cls, samhive: registry.RegistryHive) -> List[interfaces.object @classmethod def get_bootkey(cls, syshive: registry.RegistryHive) -> Optional[bytes]: cs = 1 - lsa_base = "ControlSet{0:03}".format(cs) + "\\Control\\Lsa" + lsa_base = f"ControlSet{cs:03}" + "\\Control\\Lsa" lsa_keys = ["JD", "Skew1", "GBG", "Data"] lsa = cls.get_hive_key(syshive, lsa_base) diff --git a/volatility3/framework/plugins/windows/info.py b/volatility3/framework/plugins/windows/info.py index 76b81eb00d..e0de5a8512 100644 --- a/volatility3/framework/plugins/windows/info.py +++ b/volatility3/framework/plugins/windows/info.py @@ -162,7 +162,7 @@ def _generator(self): yield (0, ("IsPAE", str(self.context.layers[layer_name].metadata.get("pae", False)))) for i, layer in self.get_depends(self.context, "primary"): - yield (0, (layer.name, "{} {}".format(i, layer.__class__.__name__))) + yield (0, (layer.name, f"{i} {layer.__class__.__name__}")) if kdbg.Header.OwnerTag == 0x4742444B: @@ -173,7 +173,7 @@ def _generator(self): vers = self.get_version_structure(self.context, layer_name, symbol_table) yield (0, ("KdVersionBlock", hex(vers.vol.offset))) - yield (0, ("Major/Minor", "{0}.{1}".format(vers.MajorVersion, vers.MinorVersion))) + yield (0, ("Major/Minor", f"{vers.MajorVersion}.{vers.MinorVersion}")) yield (0, 
("MachineType", str(vers.MachineType))) ntkrnlmp = self.get_kernel_module(self.context, layer_name, symbol_table) diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py index 45d7158d9f..b0daa080f8 100644 --- a/volatility3/framework/plugins/windows/memmap.py +++ b/volatility3/framework/plugins/windows/memmap.py @@ -48,7 +48,7 @@ def _generator(self, procs): excp.layer_name)) continue - file_handle = self.open("pid.{}.dmp".format(pid)) + file_handle = self.open(f"pid.{pid}.dmp") with file_handle as file_data: file_offset = 0 for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): diff --git a/volatility3/framework/plugins/windows/modscan.py b/volatility3/framework/plugins/windows/modscan.py index 13b23577fb..968609ebc0 100644 --- a/volatility3/framework/plugins/windows/modscan.py +++ b/volatility3/framework/plugins/windows/modscan.py @@ -160,7 +160,7 @@ def _generator(self): if self.config['dump']: session_layer_name = self.find_session_layer(self.context, session_layers, mod.DllBase) - file_output = "Cannot find a viable session layer for {0:#x}".format(mod.DllBase) + file_output = f"Cannot find a viable session layer for {mod.DllBase:#x}" if session_layer_name: file_handle = dlllist.DllList.dump_pe(self.context, pe_table_name, diff --git a/volatility3/framework/plugins/windows/netscan.py b/volatility3/framework/plugins/windows/netscan.py index 1bd9165e61..aa5501e94d 100644 --- a/volatility3/framework/plugins/windows/netscan.py +++ b/volatility3/framework/plugins/windows/netscan.py @@ -212,12 +212,12 @@ def determine_tcpip_version(cls, context: interfaces.context.ContextInterface, l latest_version = current_versions[-1] filename = version_dict.get(latest_version) - vollog.debug("Unable to find exact matching symbol file, going with latest: {}".format(filename)) + vollog.debug(f"Unable to find exact matching symbol file, going with latest: {filename}") else: raise
NotImplementedError("This version of Windows is not supported: {}.{} {}.{}!".format( nt_major_version, nt_minor_version, vers.MajorVersion, vers_minor_version)) - vollog.debug("Determined symbol filename: {}".format(filename)) + vollog.debug(f"Determined symbol filename: {filename}") return filename, class_types @@ -285,13 +285,13 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): for netw_obj in self.scan(self.context, self.config['primary'], self.config['nt_symbols'], netscan_symbol_table): - vollog.debug("Found netw obj @ 0x{:2x} of assumed type {}".format(netw_obj.vol.offset, type(netw_obj))) + vollog.debug(f"Found netw obj @ 0x{netw_obj.vol.offset:2x} of assumed type {type(netw_obj)}") # objects passed pool header constraints. check for additional constraints if strict flag is set. if not show_corrupt_results and not netw_obj.is_valid(): continue if isinstance(netw_obj, network._UDP_ENDPOINT): - vollog.debug("Found UDP_ENDPOINT @ 0x{:2x}".format(netw_obj.vol.offset)) + vollog.debug(f"Found UDP_ENDPOINT @ 0x{netw_obj.vol.offset:2x}") # For UdpA, the state is always blank and the remote end is asterisks for ver, laddr, _ in netw_obj.dual_stack_sockets(): @@ -301,7 +301,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): or renderers.UnreadableValue())) elif isinstance(netw_obj, network._TCP_ENDPOINT): - vollog.debug("Found _TCP_ENDPOINT @ 0x{:2x}".format(netw_obj.vol.offset)) + vollog.debug(f"Found _TCP_ENDPOINT @ 0x{netw_obj.vol.offset:2x}") if netw_obj.get_address_family() == network.AF_INET: proto = "TCPv4" elif netw_obj.get_address_family() == network.AF_INET6: @@ -322,7 +322,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): # check for isinstance of tcp listener last, because all other objects are inherited from here elif isinstance(netw_obj, network._TCP_LISTENER): - vollog.debug("Found _TCP_LISTENER @ 0x{:2x}".format(netw_obj.vol.offset)) + vollog.debug(f"Found _TCP_LISTENER @ 
0x{netw_obj.vol.offset:2x}") # For TcpL, the state is always listening and the remote port is zero for ver, laddr, raddr in netw_obj.dual_stack_sockets(): @@ -332,7 +332,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): or renderers.UnreadableValue())) else: # this should not happen therefore we log it. - vollog.debug("Found network object unsure of its type: {} of type {}".format(netw_obj, type(netw_obj))) + vollog.debug(f"Found network object unsure of its type: {netw_obj} of type {type(netw_obj)}") def generate_timeline(self): for row in self._generator(): diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index f47c34450a..9739e5dc93 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -128,7 +128,7 @@ def enumerate_structures_by_port(cls, # invalid argument. return - vollog.debug("Current Port: {}".format(port)) + vollog.debug(f"Current Port: {port}") # the given port serves as a shifted index into the port pool lists list_index = port >> 8 truncated_port = port & 0xff @@ -179,7 +179,7 @@ def get_tcpip_module(cls, context: interfaces.context.ContextInterface, layer_na """ for mod in modules.Modules.list_modules(context, layer_name, nt_symbols): if mod.BaseDllName.get_string() == "tcpip.sys": - vollog.debug("Found tcpip.sys image base @ 0x{:x}".format(mod.DllBase)) + vollog.debug(f"Found tcpip.sys image base @ 0x{mod.DllBase:x}") return mod return None @@ -255,7 +255,7 @@ def parse_partitions(cls, context: interfaces.context.ContextInterface, layer_na part_table_addr, part_count)) entry_offset = context.symbol_space.get_type(obj_name).relative_child_offset("ListEntry") for ctr, partition in enumerate(part_table.Partitions): - vollog.debug("Parsing partition {}".format(ctr)) + vollog.debug(f"Parsing partition {ctr}") if partition.Endpoints.NumEntries > 0: for endpoint_entry in cls.parse_hashtable(context, 
layer_name, partition.Endpoints.Directory, partition.Endpoints.TableSize, alignment, net_symbol_table): @@ -347,9 +347,9 @@ def find_port_pools(cls, context: interfaces.context.ContextInterface, layer_nam # this branch should not be reached. raise exceptions.SymbolError( "UdpPortPool", tcpip_symbol_table, - "Neither UdpPortPool nor UdpCompartmentSet found in {} table".format(tcpip_symbol_table)) + f"Neither UdpPortPool nor UdpCompartmentSet found in {tcpip_symbol_table} table") - vollog.debug("Found PortPools @ 0x{:x} (UDP) && 0x{:x} (TCP)".format(upp_addr, tpp_addr)) + vollog.debug(f"Found PortPools @ 0x{upp_addr:x} (UDP) && 0x{tpp_addr:x} (TCP)") return upp_addr, tpp_addr @classmethod @@ -399,8 +399,8 @@ def list_sockets(cls, tcpl_ports = cls.parse_bitmap(context, layer_name, tpp_obj.PortBitMap.Buffer, tpp_obj.PortBitMap.SizeOfBitMap // 8) - vollog.debug("Found TCP Ports: {}".format(tcpl_ports)) - vollog.debug("Found UDP Ports: {}".format(udpa_ports)) + vollog.debug(f"Found TCP Ports: {tcpl_ports}") + vollog.debug(f"Found UDP Ports: {udpa_ports}") # given the list of TCP / UDP ports, calculate the address of their respective objects and yield them. 
for port in tcpl_ports: # port value can be 0, which we can skip @@ -439,7 +439,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): continue if isinstance(netw_obj, network._UDP_ENDPOINT): - vollog.debug("Found UDP_ENDPOINT @ 0x{:2x}".format(netw_obj.vol.offset)) + vollog.debug(f"Found UDP_ENDPOINT @ 0x{netw_obj.vol.offset:2x}") # For UdpA, the state is always blank and the remote end is asterisks for ver, laddr, _ in netw_obj.dual_stack_sockets(): @@ -449,7 +449,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): or renderers.UnreadableValue())) elif isinstance(netw_obj, network._TCP_ENDPOINT): - vollog.debug("Found _TCP_ENDPOINT @ 0x{:2x}".format(netw_obj.vol.offset)) + vollog.debug(f"Found _TCP_ENDPOINT @ 0x{netw_obj.vol.offset:2x}") if netw_obj.get_address_family() == network.AF_INET: proto = "TCPv4" elif netw_obj.get_address_family() == network.AF_INET6: @@ -472,7 +472,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): # check for isinstance of tcp listener last, because all other objects are inherited from here elif isinstance(netw_obj, network._TCP_LISTENER): - vollog.debug("Found _TCP_LISTENER @ 0x{:2x}".format(netw_obj.vol.offset)) + vollog.debug(f"Found _TCP_LISTENER @ 0x{netw_obj.vol.offset:2x}") # For TcpL, the state is always listening and the remote port is zero for ver, laddr, raddr in netw_obj.dual_stack_sockets(): @@ -482,7 +482,7 @@ def _generator(self, show_corrupt_results: Optional[bool] = None): or renderers.UnreadableValue())) else: # this should not happen therefore we log it. 
- vollog.debug("Found network object unsure of its type: {} of type {}".format(netw_obj, type(netw_obj))) + vollog.debug(f"Found network object unsure of its type: {netw_obj} of type {type(netw_obj)}") def generate_timeline(self): for row in self._generator(): diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py index 297a99412c..62a98615a6 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -141,7 +141,7 @@ def _generator(self): try: name = mem_object.FileName.String except exceptions.InvalidAddressException: - vollog.log(constants.LOGLEVEL_VVV, "Skipping file at {0:#x}".format(mem_object.vol.offset)) + vollog.log(constants.LOGLEVEL_VVV, f"Skipping file at {mem_object.vol.offset:#x}") continue else: name = renderers.NotApplicableValue() @@ -298,7 +298,7 @@ def generate_pool_scan(cls, kernel_symbol_table = symbol_table) if mem_object is None: - vollog.log(constants.LOGLEVEL_VVV, "Cannot create an instance of {}".format(constraint.type_name)) + vollog.log(constants.LOGLEVEL_VVV, f"Cannot create an instance of {constraint.type_name}") continue if constraint.object_type is not None and not constraint.skip_type_test: @@ -307,7 +307,7 @@ def generate_pool_scan(cls, continue except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot test instance type check for {}".format(constraint.type_name)) + f"Cannot test instance type check for {constraint.type_name}") continue yield constraint, mem_object, header @@ -341,7 +341,7 @@ def pool_scan(cls, constraint_lookup = {} # type: Dict[bytes, PoolConstraint] for constraint in pool_constraints: if constraint.tag in constraint_lookup: - raise ValueError("Constraint tag is used for more than one constraint: {}".format(repr(constraint.tag))) + raise ValueError(f"Constraint tag is used for more than one constraint: {repr(constraint.tag)}") 
constraint_lookup[constraint.tag] = constraint pool_header_table_name = cls.get_pool_header_table(context, symbol_table) diff --git a/volatility3/framework/plugins/windows/privileges.py b/volatility3/framework/plugins/windows/privileges.py index 8c652daa64..ec9653517b 100644 --- a/volatility3/framework/plugins/windows/privileges.py +++ b/volatility3/framework/plugins/windows/privileges.py @@ -64,7 +64,7 @@ def _generator(self, procs): # Skip privileges whose bit positions cannot be # translated to a privilege name if not self.privilege_info.get(int(value)): - vollog.log(constants.LOGLEVEL_VVV, 'Skeep invalid privilege ({}).'.format(value)) + vollog.log(constants.LOGLEVEL_VVV, f'Skip invalid privilege ({value}).') continue name, desc = self.privilege_info.get(int(value)) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index 7b1e504c2d..25b24153e6 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -73,12 +73,12 @@ def process_dump( dos_header = context.object(pe_table_name + constants.BANG + "_IMAGE_DOS_HEADER", offset = peb.ImageBaseAddress, layer_name = proc_layer_name) - file_handle = open_method("pid.{0}.{1:#x}.dmp".format(proc.UniqueProcessId, peb.ImageBaseAddress)) + file_handle = open_method(f"pid.{proc.UniqueProcessId}.{peb.ImageBaseAddress:#x}.dmp") for offset, data in dos_header.reconstruct(): file_handle.seek(offset) file_handle.write(data) except Exception as excp: - vollog.debug("Unable to dump PE with pid {}: {}".format(proc.UniqueProcessId, excp)) + vollog.debug(f"Unable to dump PE with pid {proc.UniqueProcessId}: {excp}") return file_handle @@ -209,12 +209,12 @@ def _generator(self): proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) except exceptions.InvalidAddressException: - vollog.info("Invalid process found at address: {:x}. 
Skipping".format(proc.vol.offset)) + vollog.info(f"Invalid process found at address: {proc.vol.offset:x}. Skipping") def generate_timeline(self): for row in self._generator(): _depth, row_data = row - description = "Process: {} {} ({})".format(row_data[0], row_data[2], row_data[3]) + description = f"Process: {row_data[0]} {row_data[2]} ({row_data[3]})" yield (description, timeliner.TimeLinerType.CREATED, row_data[8]) yield (description, timeliner.TimeLinerType.MODIFIED, row_data[9]) @@ -222,7 +222,7 @@ def run(self): offsettype = "(V)" if not self.config.get('physical', self.PHYSICAL_DEFAULT) else "(P)" return renderers.TreeGrid([("PID", int), ("PPID", int), ("ImageFileName", str), - ("Offset{0}".format(offsettype), format_hints.Hex), ("Threads", int), + (f"Offset{offsettype}", format_hints.Hex), ("Threads", int), ("Handles", int), ("SessionId", int), ("Wow64", bool), ("CreateTime", datetime.datetime), ("ExitTime", datetime.datetime), ("File output", str)], self._generator()) diff --git a/volatility3/framework/plugins/windows/psscan.py b/volatility3/framework/plugins/windows/psscan.py index a5e20ef62e..68c9efd4dd 100644 --- a/volatility3/framework/plugins/windows/psscan.py +++ b/volatility3/framework/plugins/windows/psscan.py @@ -178,7 +178,7 @@ def _generator(self): def generate_timeline(self): for row in self._generator(): _depth, row_data = row - description = "Process: {} {} ({})".format(row_data[0], row_data[2], row_data[3]) + description = f"Process: {row_data[0]} {row_data[2]} ({row_data[3]})" yield (description, timeliner.TimeLinerType.CREATED, row_data[8]) yield (description, timeliner.TimeLinerType.MODIFIED, row_data[9]) diff --git a/volatility3/framework/plugins/windows/pstree.py b/volatility3/framework/plugins/windows/pstree.py index 3b695ea70e..151a35d529 100644 --- a/volatility3/framework/plugins/windows/pstree.py +++ b/volatility3/framework/plugins/windows/pstree.py @@ -90,7 +90,7 @@ def run(self): offsettype = "(V)" if not 
self.config.get('physical', pslist.PsList.PHYSICAL_DEFAULT) else "(P)" return renderers.TreeGrid([("PID", int), ("PPID", int), ("ImageFileName", str), - ("Offset{0}".format(offsettype), format_hints.Hex), ("Threads", int), + (f"Offset{offsettype}", format_hints.Hex), ("Threads", int), ("Handles", int), ("SessionId", int), ("Wow64", bool), ("CreateTime", datetime.datetime), ("ExitTime", datetime.datetime)], self._generator()) diff --git a/volatility3/framework/plugins/windows/registry/hivelist.py b/volatility3/framework/plugins/windows/registry/hivelist.py index 249118c8ed..e75dc19a6c 100644 --- a/volatility3/framework/plugins/windows/registry/hivelist.py +++ b/volatility3/framework/plugins/windows/registry/hivelist.py @@ -83,7 +83,7 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[int, str]]]: maxaddr = hive.hive.Storage[0].Length hive_name = self._sanitize_hive_name(hive.get_name()) - file_handle = self.open('registry.{}.{}.hive'.format(hive_name, hex(hive.hive_offset))) + file_handle = self.open(f'registry.{hive_name}.{hex(hive.hive_offset)}.hive') with file_handle as file_data: if hive._base_block: hive_data = self.context.layers[hive.dependencies[0]].read(hive.hive.BaseBlock, 1 << 12) @@ -143,7 +143,7 @@ def list_hives(cls, try: hive = registry.RegistryHive(context, reg_config_path, name = 'hive' + hex(hive_offset)) except exceptions.InvalidAddressException: - vollog.warning("Couldn't create RegistryHive layer at offset {}, skipping".format(hex(hive_offset))) + vollog.warning(f"Couldn't create RegistryHive layer at offset {hex(hive_offset)}, skipping") continue context.layers.add_layer(hive) yield hive diff --git a/volatility3/framework/plugins/windows/registry/printkey.py b/volatility3/framework/plugins/windows/registry/printkey.py index 4c90e08b58..380d3bbca1 100644 --- a/volatility3/framework/plugins/windows/registry/printkey.py +++ b/volatility3/framework/plugins/windows/registry/printkey.py @@ -176,11 +176,11 @@ def _registry_walker(self, yield (x - 
len(node_path), y) except (exceptions.InvalidAddressException, KeyError, RegistryFormatException) as excp: if isinstance(excp, KeyError): - vollog.debug("Key '{}' not found in Hive at offset {}.".format(key, hex(hive.hive_offset))) + vollog.debug(f"Key '{key}' not found in Hive at offset {hex(hive.hive_offset)}.") elif isinstance(excp, RegistryFormatException): vollog.debug(excp) elif isinstance(excp, exceptions.InvalidAddressException): - vollog.debug("Invalid address identified in Hive: {}".format(hex(excp.invalid_address))) + vollog.debug(f"Invalid address identified in Hive: {hex(excp.invalid_address)}") result = (0, (renderers.UnreadableValue(), format_hints.Hex(hive.hive_offset), "Key", '?\\' + (key or ''), renderers.UnreadableValue(), renderers.UnreadableValue(), renderers.UnreadableValue())) diff --git a/volatility3/framework/plugins/windows/registry/userassist.py b/volatility3/framework/plugins/windows/registry/userassist.py index b2ce971bff..804e9f60e6 100644 --- a/volatility3/framework/plugins/windows/registry/userassist.py +++ b/volatility3/framework/plugins/windows/registry/userassist.py @@ -227,7 +227,7 @@ def _generator(self): yield from self.list_userassist(hive) continue except exceptions.PagedInvalidAddressException as excp: - vollog.debug("Invalid address identified in Hive: {}".format(hex(excp.invalid_address))) + vollog.debug(f"Invalid address identified in Hive: {hex(excp.invalid_address)}") except exceptions.InvalidAddressException as excp: vollog.debug("Invalid address identified in lower layer {}: {}".format( excp.layer_name, excp.invalid_address)) diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index a44e596725..7a55079a9e 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -57,7 +57,7 @@ def _generator(self) -> Generator[Tuple, None, None]: offset, string = self._parse_line(line) string_list.append((offset, 
string)) except ValueError: - vollog.error("Line in unrecognized format: line {}".format(count)) + vollog.error(f"Line in unrecognized format: line {count}") line = strings_fp.readline() revmap = self.generate_mapping(self.context, @@ -150,11 +150,11 @@ def generate_mapping(cls, mapped_offset, _, offset, mapped_size, maplayer = mapval for val in range(mapped_offset, mapped_offset + mapped_size, 0x1000): cur_set = reverse_map.get(mapped_offset >> 12, set()) - cur_set.add(("Process {}".format(process.UniqueProcessId), offset)) + cur_set.add((f"Process {process.UniqueProcessId}", offset)) reverse_map[mapped_offset >> 12] = cur_set # FIXME: make the progress for all processes, rather than per-process if progress_callback: progress_callback((offset * 100) / layer.maximum_address, - "Creating mapping for task {}".format(process.UniqueProcessId)) + f"Creating mapping for task {process.UniqueProcessId}") return reverse_map diff --git a/volatility3/framework/plugins/windows/symlinkscan.py b/volatility3/framework/plugins/windows/symlinkscan.py index abfbd1d6d9..8a699a5d4e 100644 --- a/volatility3/framework/plugins/windows/symlinkscan.py +++ b/volatility3/framework/plugins/windows/symlinkscan.py @@ -68,7 +68,7 @@ def _generator(self): def generate_timeline(self): for row in self._generator(): _depth, row_data = row - description = "Symlink: {} -> {}".format(row_data[2], row_data[3]) + description = f"Symlink: {row_data[2]} -> {row_data[3]}" yield (description, timeliner.TimeLinerType.CREATED, row_data[1]) def run(self): diff --git a/volatility3/framework/plugins/windows/vadinfo.py b/volatility3/framework/plugins/windows/vadinfo.py index aa6567ace1..996aba1d1c 100644 --- a/volatility3/framework/plugins/windows/vadinfo.py +++ b/volatility3/framework/plugins/windows/vadinfo.py @@ -135,7 +135,7 @@ def vad_dump(cls, return None if maxsize > 0 and (vad_end - vad_start) > maxsize: - vollog.debug("Skip VAD dump {0:#x}-{1:#x} due to maxsize limit".format(vad_start, vad_end)) + 
vollog.debug(f"Skip VAD dump {vad_start:#x}-{vad_end:#x} due to maxsize limit") return None proc_id = "Unknown" @@ -148,7 +148,7 @@ def vad_dump(cls, return None proc_layer = context.layers[proc_layer_name] - file_name = "pid.{0}.vad.{1:#x}-{2:#x}.dmp".format(proc_id, vad_start, vad_end) + file_name = f"pid.{proc_id}.vad.{vad_start:#x}-{vad_end:#x}.dmp" try: file_handle = open_method(file_name) chunk_size = 1024 * 1024 * 10 @@ -162,7 +162,7 @@ def vad_dump(cls, offset += to_read except Exception as excp: - vollog.debug("Unable to dump VAD {}: {}".format(file_name, excp)) + vollog.debug(f"Unable to dump VAD {file_name}: {excp}") return None return file_handle diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index 89d1eb49ae..3ca4362108 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -75,12 +75,12 @@ def process_yara_options(cls, config: Dict[str, Any]): if config.get('yara_rules', None) is not None: rule = config['yara_rules'] if rule[0] not in ["{", "/"]: - rule = '"{}"'.format(rule) + rule = f'"{rule}"' if config.get('case', False): rule += " nocase" if config.get('wide', False): rule += " wide ascii" - rules = yara.compile(sources = {'n': 'rule r1 {{strings: $a = {} condition: $a}}'.format(rule)}) + rules = yara.compile(sources = {'n': f'rule r1 {{strings: $a = {rule} condition: $a}}'}) elif config.get('yara_file', None) is not None: rules = yara.compile(file = resources.ResourceAccessor().open(config['yara_file'], "rb")) elif config.get('yara_compiled_file', None) is not None: diff --git a/volatility3/framework/renderers/__init__.py b/volatility3/framework/renderers/__init__.py index a3fa7cef37..5b1013574b 100644 --- a/volatility3/framework/renderers/__init__.py +++ b/volatility3/framework/renderers/__init__.py @@ -59,7 +59,7 @@ def __init__(self, path: str, treegrid: 'TreeGrid', parent: Optional[interfaces. 
self._values = treegrid.RowStructure(*values) # type: ignore def __repr__(self) -> str: - return "<TreeNode [{}] - {}>".format(self.path, self._values) + return f"<TreeNode [{self.path}] - {self._values}>" def __getitem__(self, item: Union[int, slice]) -> Any: return self._treegrid.children(self).__getitem__(item) @@ -219,7 +219,7 @@ def function(_x: interfaces.renderers.TreeNode, _y: Any) -> Any: except Exception as excp: if fail_on_errors: raise - vollog.debug("Exception during population: {}".format(excp)) + vollog.debug(f"Exception during population: {excp}") self._populated = True return excp self._populated = True @@ -363,7 +363,7 @@ def __init__(self, treegrid: TreeGrid, column_name: str, ascending: bool = True) _index = i self._type = column.type if _index is None: - raise ValueError("Column not found in TreeGrid columns: {}".format(column_name)) + raise ValueError(f"Column not found in TreeGrid columns: {column_name}") self._index = _index def __call__(self, values: List[Any]) -> Any: diff --git a/volatility3/framework/symbols/__init__.py b/volatility3/framework/symbols/__init__.py index c2444682c1..6a8ab8246f 100644 --- a/volatility3/framework/symbols/__init__.py +++ b/volatility3/framework/symbols/__init__.py @@ -117,7 +117,7 @@ class UnresolvedTemplate(objects.templates.ReferenceTemplate): """ def __init__(self, type_name: str, **kwargs) -> None: - vollog.debug("Unresolved reference: {}".format(type_name)) + vollog.debug(f"Unresolved reference: {type_name}") super().__init__(type_name = type_name, **kwargs) def _weak_resolve(self, resolve_type: SymbolType, name: str) -> SymbolSpaceReturnType: @@ -139,8 +139,8 @@ def _weak_resolve(self, resolve_type: SymbolType, name: str) -> SymbolSpaceRetur return getattr(self._dict[table_name], get_function)(component_name) except KeyError as e: raise exceptions.SymbolError(component_name, table_name, - 'Type {} references missing Type/Symbol/Enum: {}'.format(name, e)) - raise exceptions.SymbolError(name, None, "Malformed name: {}".format(name)) + f'Type {name} references 
missing Type/Symbol/Enum: {e}') + raise exceptions.SymbolError(name, None, f"Malformed name: {name}") def _iterative_resolve(self, traverse_list): """Iteratively resolves a type, populating linked child @@ -185,7 +185,7 @@ def get_type(self, type_name: str) -> interfaces.objects.Template: index = type_name.find(constants.BANG) if index > 0: table_name, type_name = type_name[:index], type_name[index + 1:] - raise exceptions.SymbolError(type_name, table_name, "Unresolvable symbol requested: {}".format(type_name)) + raise exceptions.SymbolError(type_name, table_name, f"Unresolvable symbol requested: {type_name}") return self._resolved[type_name] def get_symbol(self, symbol_name: str) -> interfaces.symbols.SymbolInterface: @@ -198,7 +198,7 @@ def get_symbol(self, symbol_name: str) -> interfaces.symbols.SymbolInterface: index = symbol_name.find(constants.BANG) if index > 0: table_name, symbol_name = symbol_name[:index], symbol_name[index + 1:] - raise exceptions.SymbolError(symbol_name, table_name, "Unresolvable Symbol: {}".format(symbol_name)) + raise exceptions.SymbolError(symbol_name, table_name, f"Unresolvable Symbol: {symbol_name}") return retval def _subresolve(self, object_template: interfaces.objects.Template) -> interfaces.objects.Template: @@ -220,7 +220,7 @@ def get_enumeration(self, enum_name: str) -> interfaces.objects.Template: index = enum_name.find(constants.BANG) if index > 0: table_name, enum_name = enum_name[:index], enum_name[index + 1:] - raise exceptions.SymbolError(enum_name, table_name, "Unresolvable Enumeration: {}".format(enum_name)) + raise exceptions.SymbolError(enum_name, table_name, f"Unresolvable Enumeration: {enum_name}") return retval def _membership(self, member_type: SymbolType, name: str) -> bool: diff --git a/volatility3/framework/symbols/intermed.py b/volatility3/framework/symbols/intermed.py index 0ff03cbcab..b20760d6b7 100644 --- a/volatility3/framework/symbols/intermed.py +++ b/volatility3/framework/symbols/intermed.py @@ -111,7 
+111,7 @@ def __init__(self, # Validation is expensive, but we cache to store the hashes of successfully validated json objects if validate and not schemas.validate(json_object): - raise exceptions.SymbolSpaceError("File does not pass version validation: {}".format(isf_url)) + raise exceptions.SymbolSpaceError(f"File does not pass version validation: {isf_url}") metadata = json_object.get('metadata', None) @@ -123,7 +123,7 @@ def __init__(self, raise RuntimeError("ISF version {} is no longer supported: {}".format(metadata.get('format', "0.0.0"), isf_url)) elif self._delegate.version < constants.ISF_MINIMUM_DEPRECATED: - vollog.warning("ISF version {} has been deprecated: {}".format(metadata.get('format', "0.0.0"), isf_url)) + vollog.warning(f"ISF version {metadata.get('format', '0.0.0')} has been deprecated: {isf_url}") # Inherit super().__init__(context, @@ -154,7 +154,7 @@ def _closest_version(version: str, versions: Dict[Tuple[int, int, int], Type['IS supported_versions = [x for x in versions if x[0] == major and x[1] >= minor] if not supported_versions: raise ValueError( - "No Intermediate Format interface versions support file interface version: {}".format(version)) + f"No Intermediate Format interface versions support file interface version: {version}") return versions[max(supported_versions)] symbols = _construct_delegate_function('symbols', True) @@ -188,7 +188,7 @@ def file_symbol_url(cls, sub_path: str, filename: Optional[str] = None) -> Gener zip_match = "/".join(os.path.split(filename)) # Check user symbol directory first, then fallback to the framework's library to allow for overloading - vollog.log(constants.LOGLEVEL_VVVV, "Searching for symbols in {}".format(", ".join(symbols.__path__))) + vollog.log(constants.LOGLEVEL_VVVV, f"Searching for symbols in {', '.join(symbols.__path__)}") for path in symbols.__path__: if not os.path.isabs(path): path = os.path.abspath(os.path.join(__file__, path)) @@ -300,7 +300,7 @@ def _get_natives(self) -> 
Optional[interfaces.symbols.NativeTableInterface]: # TODO: determine whether we should give voids a size - We don't give voids a length, whereas microsoft seemingly do pass else: - vollog.debug("Choosing appropriate natives for symbol library: {}".format(nc)) + vollog.debug(f"Choosing appropriate natives for symbol library: {nc}") return native_class.natives return None @@ -335,7 +335,7 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: return self._symbol_cache[name] symbol = self._json_object['symbols'].get(name, None) if not symbol: - raise exceptions.SymbolError(name, self.name, "Unknown symbol: {}".format(name)) + raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") address = symbol['address'] + self.config.get('symbol_shift', 0) if self.config.get('symbol_mask', 0): address = address & self.config['symbol_mask'] @@ -362,7 +362,7 @@ def get_type_class(self, name: str) -> Type[interfaces.objects.ObjectInterface]: def set_type_class(self, name: str, clazz: Type[interfaces.objects.ObjectInterface]) -> None: if name not in self.types: - raise ValueError("Symbol type not in {} SymbolTable: {}".format(self.name, name)) + raise ValueError(f"Symbol type not in {self.name} SymbolTable: {name}") self._overrides[name] = clazz def del_type_class(self, name: str) -> None: @@ -372,7 +372,7 @@ def del_type_class(self, name: str) -> None: def _interdict_to_template(self, dictionary: Dict[str, Any]) -> interfaces.objects.Template: """Converts an intermediate format dict into an object template.""" if not dictionary: - raise exceptions.SymbolSpaceError("Invalid intermediate dictionary: {}".format(dictionary)) + raise exceptions.SymbolSpaceError(f"Invalid intermediate dictionary: {dictionary}") type_name = dictionary['kind'] if type_name == 'base': @@ -407,7 +407,7 @@ def _interdict_to_template(self, dictionary: Dict[str, Any]) -> interfaces.objec # Otherwise if dictionary['kind'] not in objects.AggregateTypes.values(): - raise 
exceptions.SymbolSpaceError("Unknown Intermediate format: {}".format(dictionary)) + raise exceptions.SymbolSpaceError(f"Unknown Intermediate format: {dictionary}") reference_name = dictionary['name'] if constants.BANG not in reference_name: @@ -424,7 +424,7 @@ def _lookup_enum(self, name: str) -> Dict[str, Any]: parameters for an Enum.""" lookup = self._json_object['enums'].get(name, None) if not lookup: - raise exceptions.SymbolSpaceError("Unknown enumeration: {}".format(name)) + raise exceptions.SymbolSpaceError(f"Unknown enumeration: {name}") result = {"choices": copy.deepcopy(lookup['constants']), "base_type": self.natives.get_type(lookup['base'])} return result @@ -432,11 +432,11 @@ def get_enumeration(self, enum_name: str) -> interfaces.objects.Template: """Resolves an individual enumeration.""" if constants.BANG in enum_name: raise exceptions.SymbolError(enum_name, self.name, - "Enumeration for a different table requested: {}".format(enum_name)) + f"Enumeration for a different table requested: {enum_name}") if enum_name not in self._json_object['enums']: # Fall back to the natives table raise exceptions.SymbolError(enum_name, self.name, - "Enumeration not found in {} table: {}".format(self.name, enum_name)) + f"Enumeration not found in {self.name} table: {enum_name}") curdict = self._json_object['enums'][enum_name] base_type = self.natives.get_type(curdict['base']) # The size isn't actually used, the base-type defines it. 
@@ -452,7 +452,7 @@ def get_type(self, type_name: str) -> interfaces.objects.Template: table_name, type_name = type_name[:index], type_name[index + 1:] raise exceptions.SymbolError( type_name, table_name, - "Symbol for a different table requested: {}".format(table_name + constants.BANG + type_name)) + f"Symbol for a different table requested: {table_name + constants.BANG + type_name}") if type_name not in self._json_object['user_types']: # Fall back to the natives table return self.natives.get_type(self.name + constants.BANG + type_name) @@ -491,7 +491,7 @@ def _get_natives(self) -> Optional[interfaces.symbols.NativeTableInterface]: # TODO: determine whether we should give voids a size - We don't give voids a length, whereas microsoft seemingly do pass else: - vollog.debug("Choosing appropriate natives for symbol library: {}".format(nc)) + vollog.debug(f"Choosing appropriate natives for symbol library: {nc}") return native_class.natives return None @@ -502,13 +502,13 @@ def get_type(self, type_name: str) -> interfaces.objects.Template: table_name, type_name = type_name[:index], type_name[index + 1:] raise exceptions.SymbolError( type_name, table_name, - "Symbol for a different table requested: {}".format(table_name + constants.BANG + type_name)) + f"Symbol for a different table requested: {table_name + constants.BANG + type_name}") if type_name not in self._json_object['user_types']: # Fall back to the natives table if type_name in self.natives.types: return self.natives.get_type(self.name + constants.BANG + type_name) else: - raise exceptions.SymbolError(type_name, self.name, "Unknown symbol: {}".format(type_name)) + raise exceptions.SymbolError(type_name, self.name, f"Unknown symbol: {type_name}") curdict = self._json_object['user_types'][type_name] members = {} for member_name in curdict['fields']: @@ -536,7 +536,7 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: return self._symbol_cache[name] symbol = 
self._json_object['symbols'].get(name, None) if not symbol: - raise exceptions.SymbolError(name, self.name, "Unknown symbol: {}".format(name)) + raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") symbol_type = None if 'type' in symbol: symbol_type = self._interdict_to_template(symbol['type']) @@ -593,7 +593,7 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: return self._symbol_cache[name] symbol = self._json_object['symbols'].get(name, None) if not symbol: - raise exceptions.SymbolError(name, self.name, "Unknown symbol: {}".format(name)) + raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") symbol_type = None if 'type' in symbol: symbol_type = self._interdict_to_template(symbol['type']) @@ -666,7 +666,7 @@ def get_type(self, type_name: str) -> interfaces.objects.Template: table_name, type_name = type_name[:index], type_name[index + 1:] raise exceptions.SymbolError( type_name, table_name, - "Symbol for a different table requested: {}".format(table_name + constants.BANG + type_name)) + f"Symbol for a different table requested: {table_name + constants.BANG + type_name}") type_definition = self._json_object['user_types'].get(type_name) if type_definition is None: diff --git a/volatility3/framework/symbols/linux/__init__.py b/volatility3/framework/symbols/linux/__init__.py index 0d8e99e420..cd0495a2cb 100644 --- a/volatility3/framework/symbols/linux/__init__.py +++ b/volatility3/framework/symbols/linux/__init__.py @@ -83,7 +83,7 @@ def _do_get_path(cls, rdentry, rmnt, dentry, vfsmnt) -> str: except exceptions.InvalidAddressException: ino = 0 - ret_val = ret_val[:-1] + ":[{0}]".format(ino) + ret_val = ret_val[:-1] + f":[{ino}]" else: ret_val = ret_val.replace("/", "") @@ -132,12 +132,12 @@ def _get_new_sock_pipe_path(cls, context, task, filp) -> str: pre_name = cls._get_path_file(task, filp) else: - pre_name = "".format(sym) + pre_name = f"" - ret = "{0}:[{1:d}]".format(pre_name, dentry.d_inode.i_ino) + 
ret = f"{pre_name}:[{dentry.d_inode.i_ino:d}]" else: - ret = " {0:x}".format(sym_addr) + ret = f" {sym_addr:x}" return ret diff --git a/volatility3/framework/symbols/linux/extensions/__init__.py b/volatility3/framework/symbols/linux/extensions/__init__.py index 5bcfda11ee..7b14a86749 100644 --- a/volatility3/framework/symbols/linux/extensions/__init__.py +++ b/volatility3/framework/symbols/linux/extensions/__init__.py @@ -181,7 +181,7 @@ def add_process_layer(self, config_prefix: str = None, preferred_name: str = Non return None if preferred_name is None: - preferred_name = self.vol.layer_name + "_Process{}".format(self.pid) + preferred_name = self.vol.layer_name + f"_Process{self.pid}" # Add the constructed layer and return the name return self._add_process_layer(self._context, dtb, config_prefix, preferred_name) @@ -197,7 +197,7 @@ def get_process_memory_sections(self, heap_only: bool = False) -> Generator[Tupl continue else: # FIXME: Check if this actually needs to be printed out or not - vollog.info("adding vma: {:x} {:x} | {:x} {:x}".format(start, self.mm.brk, end, self.mm.start_brk)) + vollog.info(f"adding vma: {start:x} {self.mm.brk:x} | {end:x} {self.mm.start_brk:x}") yield (start, end - start) @@ -496,7 +496,7 @@ def is_valid(self): def _get_real_mnt(self): table_name = self.vol.type_name.split(constants.BANG)[0] - mount_struct = "{0}{1}mount".format(table_name, constants.BANG) + mount_struct = f"{table_name}{constants.BANG}mount" offset = self._context.symbol_space.get_type(mount_struct).relative_child_offset("mnt") return self._context.object(mount_struct, self.vol.layer_name, offset = self.vol.offset - offset) diff --git a/volatility3/framework/symbols/linux/extensions/elf.py b/volatility3/framework/symbols/linux/extensions/elf.py index 0774e937ac..1277afe936 100644 --- a/volatility3/framework/symbols/linux/extensions/elf.py +++ b/volatility3/framework/symbols/linux/extensions/elf.py @@ -45,7 +45,7 @@ def __init__(self, context: 
interfaces.context.ContextInterface, type_name: str, elif ei_class == 2: self._type_prefix = "Elf64_" else: - raise ValueError("Unsupported ei_class value {}".format(ei_class)) + raise ValueError(f"Unsupported ei_class value {ei_class}") # Construct the full header self._hdr = self._context.object(symbol_table_name + constants.BANG + self._type_prefix + "Ehdr", diff --git a/volatility3/framework/symbols/mac/__init__.py b/volatility3/framework/symbols/mac/__init__.py index 54a5259d0a..241b4ffba9 100644 --- a/volatility3/framework/symbols/mac/__init__.py +++ b/volatility3/framework/symbols/mac/__init__.py @@ -149,7 +149,7 @@ def files_descriptors_for_process(cls, context: interfaces.context.ContextInterf vnode = f.f_fglob.fg_data.dereference().cast("vnode") path = vnode.full_path() elif ftype: - path = "<{}>".format(ftype.lower()) + path = f"<{ftype.lower()}>" yield f, path, fd_num diff --git a/volatility3/framework/symbols/mac/extensions/__init__.py b/volatility3/framework/symbols/mac/extensions/__init__.py index 37d733af6d..538c4101e2 100644 --- a/volatility3/framework/symbols/mac/extensions/__init__.py +++ b/volatility3/framework/symbols/mac/extensions/__init__.py @@ -32,7 +32,7 @@ def add_process_layer(self, config_prefix: str = None, preferred_name: str = Non return None if preferred_name is None: - preferred_name = self.vol.layer_name + "_Process{}".format(self.p_pid) + preferred_name = self.vol.layer_name + f"_Process{self.p_pid}" # Add the constructed layer and return the name return self._add_process_layer(self._context, dtb, config_prefix, preferred_name) @@ -478,7 +478,7 @@ def __str__(self): e = e.cast("unsigned char") - ret = ret + "{:02X}:".format(e) + ret = ret + f"{e:02X}:" if ret and ret[-1] == ":": ret = ret[:-1] diff --git a/volatility3/framework/symbols/native.py b/volatility3/framework/symbols/native.py index fdc161322c..c53ff6f16d 100644 --- a/volatility3/framework/symbols/native.py +++ b/volatility3/framework/symbols/native.py @@ -45,7 +45,7 @@ 
def get_type(self, type_name: str) -> interfaces.objects.Template: if constants.BANG in type_name: name_split = type_name.split(constants.BANG) if len(name_split) > 2: - raise ValueError("SymbolName cannot contain multiple {} separators".format(constants.BANG)) + raise ValueError(f"SymbolName cannot contain multiple {constants.BANG} separators") table_name, type_name = name_split prefix = table_name + constants.BANG diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index 997175fbae..5b057ff629 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -94,7 +94,7 @@ def traverse(self, visited = None, depth = 0): # any node other than the root that doesn't have a recognized tag # is just garbage and we skip the node entirely vollog.log(constants.LOGLEVEL_VVV, - "Skipping VAD at {} depth {} with tag {}".format(self.vol.offset, depth, tag)) + f"Skipping VAD at {self.vol.offset} depth {depth} with tag {tag}") return if target: @@ -105,13 +105,13 @@ def traverse(self, visited = None, depth = 0): for vad_node in self.get_left_child().dereference().traverse(visited, depth + 1): yield vad_node except exceptions.InvalidAddressException as excp: - vollog.log(constants.LOGLEVEL_VVV, "Invalid address on LeftChild: {0:#x}".format(excp.invalid_address)) + vollog.log(constants.LOGLEVEL_VVV, f"Invalid address on LeftChild: {excp.invalid_address:#x}") try: for vad_node in self.get_right_child().dereference().traverse(visited, depth + 1): yield vad_node except exceptions.InvalidAddressException as excp: - vollog.log(constants.LOGLEVEL_VVV, "Invalid address on RightChild: {0:#x}".format(excp.invalid_address)) + vollog.log(constants.LOGLEVEL_VVV, f"Invalid address on RightChild: {excp.invalid_address:#x}") def get_right_child(self): """Get the right child member.""" @@ -329,7 +329,7 @@ class 
EX_FAST_REF(objects.StructType): def dereference(self) -> interfaces.objects.ObjectInterface: if constants.BANG not in self.vol.type_name: - raise ValueError("Invalid symbol table name syntax (no {} found)".format(constants.BANG)) + raise ValueError(f"Invalid symbol table name syntax (no {constants.BANG} found)") # the mask value is different on 32 and 64 bits symbol_table_name = self.vol.type_name.split(constants.BANG)[0] @@ -394,7 +394,7 @@ def file_name_with_device(self) -> Union[str, interfaces.renderers.BaseAbsentVal # be instantiated from a primary (virtual) layer or a memory (physical) layer. if self._context.layers[self.vol.native_layer_name].is_valid(self.DeviceObject): try: - name = "\\Device\\{}".format(self.DeviceObject.get_device_name()) + name = f"\\Device\\{self.DeviceObject.get_device_name()}" except ValueError: pass @@ -449,7 +449,7 @@ def get_cross_thread_flags(self) -> str: stringCrossThreadFlags = '' for flag in dictCrossThreadFlags: if flags & 2 ** dictCrossThreadFlags[flag]: - stringCrossThreadFlags += '{} '.format(flag) + stringCrossThreadFlags += f'{flag} ' return stringCrossThreadFlags[:-1] if stringCrossThreadFlags else stringCrossThreadFlags @@ -534,7 +534,7 @@ def add_process_layer(self, config_prefix: str = None, preferred_name: str = Non dtb = dtb & ((1 << parent_layer.bits_per_register) - 1) if preferred_name is None: - preferred_name = self.vol.layer_name + "_Process{}".format(self.UniqueProcessId) + preferred_name = self.vol.layer_name + f"_Process{self.UniqueProcessId}" # Add the constructed layer and return the name return self._add_process_layer(self._context, dtb, config_prefix, preferred_name) @@ -542,7 +542,7 @@ def add_process_layer(self, config_prefix: str = None, preferred_name: str = Non def get_peb(self) -> interfaces.objects.ObjectInterface: """Constructs a PEB object""" if constants.BANG not in self.vol.type_name: - raise ValueError("Invalid symbol table name syntax (no {} found)".format(constants.BANG)) + raise 
ValueError(f"Invalid symbol table name syntax (no {constants.BANG} found)") # add_process_layer can raise InvalidAddressException. # if that happens, we let the exception propagate upwards @@ -551,10 +551,10 @@ def get_peb(self) -> interfaces.objects.ObjectInterface: proc_layer = self._context.layers[proc_layer_name] if not proc_layer.is_valid(self.Peb): raise exceptions.InvalidAddressException(proc_layer_name, self.Peb, - "Invalid address at {:0x}".format(self.Peb)) + f"Invalid address at {self.Peb:0x}") sym_table = self.vol.type_name.split(constants.BANG)[0] - peb = self._context.object("{}{}_PEB".format(sym_table, constants.BANG), + peb = self._context.object(f"{sym_table}{constants.BANG}_PEB", layer_name = proc_layer_name, offset = self.Peb) return peb @@ -565,7 +565,7 @@ def load_order_modules(self) -> Iterable[interfaces.objects.ObjectInterface]: try: peb = self.get_peb() for entry in peb.Ldr.InLoadOrderModuleList.to_list( - "{}{}_LDR_DATA_TABLE_ENTRY".format(self.get_symbol_table_name(), constants.BANG), + f"{self.get_symbol_table_name()}{constants.BANG}_LDR_DATA_TABLE_ENTRY", "InLoadOrderLinks"): yield entry except exceptions.InvalidAddressException: @@ -577,7 +577,7 @@ def init_order_modules(self) -> Iterable[interfaces.objects.ObjectInterface]: try: peb = self.get_peb() for entry in peb.Ldr.InInitializationOrderModuleList.to_list( - "{}{}_LDR_DATA_TABLE_ENTRY".format(self.get_symbol_table_name(), constants.BANG), + f"{self.get_symbol_table_name()}{constants.BANG}_LDR_DATA_TABLE_ENTRY", "InInitializationOrderLinks"): yield entry except exceptions.InvalidAddressException: @@ -589,7 +589,7 @@ def mem_order_modules(self) -> Iterable[interfaces.objects.ObjectInterface]: try: peb = self.get_peb() for entry in peb.Ldr.InMemoryOrderModuleList.to_list( - "{}{}_LDR_DATA_TABLE_ENTRY".format(self.get_symbol_table_name(), constants.BANG), + f"{self.get_symbol_table_name()}{constants.BANG}_LDR_DATA_TABLE_ENTRY", "InMemoryOrderLinks"): yield entry except 
exceptions.InvalidAddressException: @@ -603,7 +603,7 @@ def get_handle_count(self): except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot access _EPROCESS.ObjectTable.HandleCount at {0:#x}".format(self.vol.offset)) + f"Cannot access _EPROCESS.ObjectTable.HandleCount at {self.vol.offset:#x}") return renderers.UnreadableValue() @@ -626,7 +626,7 @@ def get_session_id(self): except exceptions.InvalidAddressException: vollog.log(constants.LOGLEVEL_VVV, - "Cannot access _EPROCESS.Session.SessionId at {0:#x}".format(self.vol.offset)) + f"Cannot access _EPROCESS.Session.SessionId at {self.vol.offset:#x}") return renderers.UnreadableValue() diff --git a/volatility3/framework/symbols/windows/extensions/network.py b/volatility3/framework/symbols/windows/extensions/network.py index 95332f07c5..6e13ad45ba 100644 --- a/volatility3/framework/symbols/windows/extensions/network.py +++ b/volatility3/framework/symbols/windows/extensions/network.py @@ -164,7 +164,7 @@ def is_valid(self): return False except exceptions.InvalidAddressException: - vollog.debug("netw obj 0x{:x} invalid due to invalid address access".format(self.vol.offset)) + vollog.debug(f"netw obj 0x{self.vol.offset:x} invalid due to invalid address access") return False return True @@ -200,21 +200,21 @@ def get_remote_address(self): def is_valid(self): if self.State not in self.State.choices.values(): - vollog.debug("{} 0x{:x} invalid due to invalid tcp state {}".format(type(self), self.vol.offset, self.State)) + vollog.debug(f"{type(self)} 0x{self.vol.offset:x} invalid due to invalid tcp state {self.State}") return False try: if self.get_address_family() not in (AF_INET, AF_INET6): - vollog.debug("{} 0x{:x} invalid due to invalid address_family {}".format(type(self), self.vol.offset, self.get_address_family())) + vollog.debug(f"{type(self)} 0x{self.vol.offset:x} invalid due to invalid address_family {self.get_address_family()}") return False if not self.get_local_address() and (not 
self.get_owner() or self.get_owner().UniqueProcessId == 0 or self.get_owner().UniqueProcessId > 65535): - vollog.debug("{} 0x{:x} invalid due to invalid owner data".format(type(self), self.vol.offset)) + vollog.debug(f"{type(self)} 0x{self.vol.offset:x} invalid due to invalid owner data") return False except exceptions.InvalidAddressException: - vollog.debug("{} 0x{:x} invalid due to invalid address access".format(type(self), self.vol.offset)) + vollog.debug(f"{type(self)} 0x{self.vol.offset:x} invalid due to invalid address access") return False return True diff --git a/volatility3/framework/symbols/windows/extensions/pe.py b/volatility3/framework/symbols/windows/extensions/pe.py index b7b6d71df2..df461318f2 100644 --- a/volatility3/framework/symbols/windows/extensions/pe.py +++ b/volatility3/framework/symbols/windows/extensions/pe.py @@ -22,7 +22,7 @@ def get_nt_header(self) -> interfaces.objects.ObjectInterface: """ if self.e_magic != 0x5a4d: - raise ValueError("e_magic {0:04X} is not a valid DOS signature.".format(self.e_magic)) + raise ValueError(f"e_magic {self.e_magic:04X} is not a valid DOS signature.") layer_name = self.vol.layer_name symbol_table_name = self.get_symbol_table_name() @@ -32,7 +32,7 @@ def get_nt_header(self) -> interfaces.objects.ObjectInterface: offset = self.vol.offset + self.e_lfanew) if nt_header.Signature != 0x4550: - raise ValueError("NT header signature {0:04X} is not a valid".format(nt_header.Signature)) + raise ValueError(f"NT header signature {nt_header.Signature:04X} is not a valid") # this checks if we need a PE32+ header if nt_header.FileHeader.Machine == 34404: @@ -110,7 +110,7 @@ def reconstruct(self) -> Generator[Tuple[int, bytes], None, None]: # no legitimate PE is going to be larger than this if size_of_image > (1024 * 1024 * 100): - raise ValueError("The claimed SizeOfImage is too large: {}".format(size_of_image)) + raise ValueError(f"The claimed SizeOfImage is too large: {size_of_image}") read_layer = 
self._context.layers[layer_name] @@ -127,13 +127,13 @@ def reconstruct(self) -> Generator[Tuple[int, bytes], None, None]: for sect in nt_header.get_sections(): if sect.VirtualAddress > size_of_image: - raise ValueError("Section VirtualAddress is too large: {}".format(sect.VirtualAddress)) + raise ValueError(f"Section VirtualAddress is too large: {sect.VirtualAddress}") if sect.Misc.VirtualSize > size_of_image: - raise ValueError("Section VirtualSize is too large: {}".format(sect.Misc.VirtualSize)) + raise ValueError(f"Section VirtualSize is too large: {sect.Misc.VirtualSize}") if sect.SizeOfRawData > size_of_image: - raise ValueError("Section SizeOfRawData is too large: {}".format(sect.SizeOfRawData)) + raise ValueError(f"Section SizeOfRawData is too large: {sect.SizeOfRawData}") if sect is not None: # It doesn't matter if this is too big, because it'll get overwritten by the later layers diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index 8955338400..59213aaff0 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -175,7 +175,7 @@ def _calculate_optional_header_lengths(cls, context: interfaces.context.ContextI 'HANDLE_REVOCATION_INFO', 'PADDING_INFO' ]: try: - type_name = "{}{}_OBJECT_HEADER_{}".format(symbol_table_name, constants.BANG, header) + type_name = f"{symbol_table_name}{constants.BANG}_OBJECT_HEADER_{header}" header_type = context.symbol_space.get_type(type_name) headers.append(header) sizes.append(header_type.size) @@ -240,7 +240,7 @@ def get_pool_type(self) -> Union[str, interfaces.renderers.BaseAbsentValue]: if hasattr(self, 'PoolType'): if not self.pool_type_lookup: self._generate_pool_type_lookup() - return self.pool_type_lookup.get(self.PoolType, "Unknown choice {}".format(self.PoolType)) + return self.pool_type_lookup.get(self.PoolType, f"Unknown choice {self.PoolType}") else: return 
renderers.NotApplicableValue() @@ -259,7 +259,7 @@ class ExecutiveObject(interfaces.objects.ObjectInterface): def get_object_header(self) -> 'OBJECT_HEADER': if constants.BANG not in self.vol.type_name: - raise ValueError("Invalid symbol table name syntax (no {} found)".format(constants.BANG)) + raise ValueError(f"Invalid symbol table name syntax (no {constants.BANG} found)") symbol_table_name = self.vol.type_name.split(constants.BANG)[0] body_offset = self._context.symbol_space.get_type(symbol_table_name + constants.BANG + "_OBJECT_HEADER").relative_child_offset("Body") @@ -315,7 +315,7 @@ def get_object_type(self, type_map: Dict[int, str], cookie: int = None) -> Optio @property def NameInfo(self) -> interfaces.objects.ObjectInterface: if constants.BANG not in self.vol.type_name: - raise ValueError("Invalid symbol table name syntax (no {} found)".format(constants.BANG)) + raise ValueError(f"Invalid symbol table name syntax (no {constants.BANG} found)") symbol_table_name = self.vol.type_name.split(constants.BANG)[0] @@ -329,7 +329,7 @@ def NameInfo(self) -> interfaces.objects.ObjectInterface: kvo = layer.config.get("kernel_virtual_offset", None) if kvo is None: - raise AttributeError("Could not find kernel_virtual_offset for layer: {}".format(self.vol.layer_name)) + raise AttributeError(f"Could not find kernel_virtual_offset for layer: {self.vol.layer_name}") ntkrnlmp = self._context.module(symbol_table_name, layer_name = self.vol.layer_name, offset = kvo) address = ntkrnlmp.get_symbol("ObpInfoMaskToOffset").address diff --git a/volatility3/framework/symbols/windows/extensions/registry.py b/volatility3/framework/symbols/windows/extensions/registry.py index fd01365566..cbd9052cdd 100644 --- a/volatility3/framework/symbols/windows/extensions/registry.py +++ b/volatility3/framework/symbols/windows/extensions/registry.py @@ -166,13 +166,13 @@ def _get_subkeys_recursive( for subnode_offset in node.List[::listjump]: if (subnode_offset & 0x7fffffff) > 
hive.maximum_address: vollog.log(constants.LOGLEVEL_VVV, - "Node found with address outside the valid Hive size: {}".format(hex(subnode_offset))) + f"Node found with address outside the valid Hive size: {hex(subnode_offset)}") else: try: subnode = hive.get_node(subnode_offset) except (exceptions.InvalidAddressException, RegistryFormatException): vollog.log(constants.LOGLEVEL_VVV, - "Failed to get node at {}, skipping".format(hex(subnode_offset))) + f"Failed to get node at {hex(subnode_offset)}, skipping") continue yield from self._get_subkeys_recursive(hive, subnode) @@ -190,12 +190,12 @@ def get_values(self) -> Iterable[interfaces.objects.ObjectInterface]: try: node = hive.get_node(v) except (RegistryInvalidIndex, RegistryFormatException) as excp: - vollog.debug("Invalid address {}".format(excp)) + vollog.debug(f"Invalid address {excp}") continue if node.vol.type_name.endswith(constants.BANG + '_CM_KEY_VALUE'): yield node except (exceptions.InvalidAddressException, RegistryFormatException) as excp: - vollog.debug("Invalid address in get_values iteration: {}".format(excp)) + vollog.debug(f"Invalid address in get_values iteration: {excp}") return def get_name(self) -> interfaces.objects.ObjectInterface: @@ -240,7 +240,7 @@ def decode_data(self) -> Union[int, bytes]: # Remove the high bit datalen = datalen & 0x7fffffff if (0 > datalen or datalen > 4): - raise ValueError("Unable to read inline registry value with excessive length: {}".format(datalen)) + raise ValueError(f"Unable to read inline registry value with excessive length: {datalen}") else: data = layer.read(self.Data.vol.offset, datalen) elif layer.hive.Version == 5 and datalen > 0x4000: @@ -263,15 +263,15 @@ def decode_data(self) -> Union[int, bytes]: self_type = RegValueTypes(self.Type) if self_type == RegValueTypes.REG_DWORD: if len(data) != struct.calcsize("<L"): - raise ValueError("Size of data does not match the type of registry value {}".format(self.get_name())) + raise ValueError(f"Size of data does not match the type of registry value {self.get_name()}") return struct.unpack("<L", data)[0] if self_type == RegValueTypes.REG_DWORD_BIG_ENDIAN: if len(data) != struct.calcsize(">L"): - raise ValueError("Size of data does not match the type of registry value {}".format(self.get_name())) + raise ValueError(f"Size of data does
not match the type of registry value {self.get_name()}") return struct.unpack(">L", data)[0] if self_type == RegValueTypes.REG_QWORD: if len(data) != struct.calcsize("<Q"): - raise ValueError("Size of data does not match the type of registry value {}".format(self.get_name())) + raise ValueError(f"Size of data does not match the type of registry value {self.get_name()}") return struct.unpack("<Q", data)[0] @@ -285,7 +285,7 @@ def decode_data(self) -> Union[int, bytes]: return b'' # Fall back if it's something weird - vollog.debug("Unknown registry value type encountered: {}".format(self.Type)) + vollog.debug(f"Unknown registry value type encountered: {self.Type}") return data diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 2880d44b8e..e530fcbd7b 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -364,19 +364,19 @@ def read_ipi_stream(self): def _read_info_stream(self, stream_number, stream_name, info_list): - vollog.debug("Reading {}".format(stream_name)) + vollog.debug(f"Reading {stream_name}") info_layer = self._context.layers.get(self._layer_name + "_stream" + str(stream_number), None) if not info_layer: - raise ValueError("No {} stream available".format(stream_name)) + raise ValueError(f"No {stream_name} stream available") module = self._context.module(module_name = info_layer.pdb_symbol_table, layer_name = info_layer.name, offset = 0) header = module.object(object_type = "TPI_HEADER", offset = 0) # Check the header if not (56 <= header.header_size < 1024): - raise ValueError("{} Stream Header size outside normal bounds".format(stream_name)) + raise ValueError(f"{stream_name} Stream Header size outside normal bounds") if header.index_min < 4096: - raise ValueError("Minimum {} index is 4096, found: {}".format(stream_name, header.index_min)) + raise ValueError(f"Minimum {stream_name} index is 4096, found: {header.index_min}") if header.index_max < header.index_min: raise ValueError("Maximum {} index is smaller than minimum TPI index, found: {} < {} ".format( stream_name, header.index_max, header.index_min)) @@ -396,8 +396,8 @@ def _read_info_stream(self, stream_number, stream_name, info_list): output,
consumed = self.consume_type(module, offset, length) leaf_type, name, value = output for tag_type in ['unnamed', 'anonymous']: - if name == '<{}-tag>'.format(tag_type) or name == '__{}'.format(tag_type): - name = '__{}_'.format(tag_type) + hex(len(info_list) + 0x1000)[2:] + if name == f'<{tag_type}-tag>' or name == f'__{tag_type}': + name = f'__{tag_type}_' + hex(len(info_list) + 0x1000)[2:] if name: info_references[name] = len(info_list) info_list.append((leaf_type, name, value)) @@ -493,7 +493,7 @@ def read_symbol_stream(self): name = self.parse_string(sym.name, False, sym.length - sym.vol.size + 2) address = self._sections[sym.segment - 1].VirtualAddress + sym.offset else: - vollog.debug("Only v2 and v3 symbols are supported: {:x}".format(leaf_type)) + vollog.debug(f"Only v2 and v3 symbols are supported: {leaf_type:x}") if name: if self._omap_mapping: address = self.omap_lookup(address) @@ -674,9 +674,9 @@ def get_size_from_index(self, index: int) -> int: elif leaf_type in [leaf_type.LF_PROCEDURE]: raise ValueError("LF_PROCEDURE size could not be identified") else: - raise ValueError("Unable to determine size of leaf_type {}".format(leaf_type.lookup())) + raise ValueError(f"Unable to determine size of leaf_type {leaf_type.lookup()}") if result <= 0: - raise ValueError("Invalid size identified: {} ({})".format(index, name)) + raise ValueError(f"Invalid size identified: {index} ({name})") return result ### TYPE HANDLING CODE @@ -824,7 +824,7 @@ def consume_type( consumed += buildinfo.arguments.vol.size result = leaf_type, None, buildinfo else: - raise TypeError("Unhandled leaf_type: {}".format(leaf_type)) + raise TypeError(f"Unhandled leaf_type: {leaf_type}") return result, consumed @@ -934,20 +934,20 @@ def retreive_pdb(self, vollog.info("Download PDB file...") file_name = ".".join(file_name.split(".")[:-1] + ['pdb']) for sym_url in ['http://msdl.microsoft.com/download/symbols']: - url = sym_url + "/{}/{}/".format(file_name, guid) + url = sym_url + 
f"/{file_name}/{guid}/" result = None for suffix in [file_name, file_name[:-1] + '_']: try: - vollog.debug("Attempting to retrieve {}".format(url + suffix)) + vollog.debug(f"Attempting to retrieve {url + suffix}") # We have to cache this because the file is opened by a layer and we can't control whether that caches result = resources.ResourceAccessor(progress_callback).open(url + suffix) except (error.HTTPError, error.URLError) as excp: - vollog.debug("Failed with {}".format(excp)) + vollog.debug(f"Failed with {excp}") if result: break if progress_callback is not None: - progress_callback(100, "Downloading {}".format(url + suffix)) + progress_callback(100, f"Downloading {url + suffix}") if result is None: return None return url + suffix @@ -972,7 +972,7 @@ def __call__(self, progress: Union[int, float], description: str = None): Args: progress: Percentage of progress of the current procedure """ - message = "\rProgress: {0: 7.2f}\t\t{1:}".format(round(progress, 2), description or '') + message = f"\rProgress: {round(progress, 2): 7.2f}\t\t{description or ''}" message_len = len(message) self._max_message_len = max([self._max_message_len, message_len]) print(message, end = (' ' * (self._max_message_len - message_len)) + '\r') @@ -1017,7 +1017,7 @@ def __call__(self, progress: Union[int, float], description: str = None): url = parse.urlparse(filename, scheme = 'file') if url.scheme == 'file': if not os.path.exists(filename): - parser.error("File {} does not exists".format(filename)) + parser.error(f"File {filename} does not exists") location = "file:" + request.pathname2url(os.path.abspath(filename)) else: location = filename @@ -1032,7 +1032,7 @@ def __call__(self, progress: Union[int, float], description: str = None): else: guid = converted_json['metadata']['windows']['pdb']['GUID'] age = converted_json['metadata']['windows']['pdb']['age'] - args.output = "{}-{}.json.xz".format(guid, age) + args.output = f"{guid}-{age}.json.xz" output_url =
os.path.abspath(args.output) @@ -1049,6 +1049,6 @@ def __call__(self, progress: Union[int, float], description: str = None): f.write(bytes(json_string, 'latin-1')) if args.keep: - print("Temporary PDB file: {}".format(filename)) + print(f"Temporary PDB file: {filename}") elif delfile: os.remove(filename) diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 4ec94cd949..19ce551a70 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -85,13 +85,13 @@ def load_windows_symbol_table(cls, break if not isf_path: - vollog.debug("Required symbol library path not found: {}".format(filter_string)) + vollog.debug(f"Required symbol library path not found: {filter_string}") vollog.info("The symbols can be downloaded later using pdbconv.py -p {} -g {}".format( pdb_name.strip('\x00'), guid.upper() + str(age))) return None - vollog.debug("Using symbol library: {}".format(filter_string)) + vollog.debug(f"Using symbol library: {filter_string}") # Set the discovered options join = interfaces.configuration.path_join @@ -225,7 +225,7 @@ def download_pdb_isf(cls, try: os.remove(filename) except PermissionError: - vollog.warning("Temporary file could not be removed: {}".format(filename)) + vollog.warning(f"Temporary file could not be removed: {filename}") else: vollog.warning("Cannot write downloaded symbols, please add the appropriate symbols" " or add/modify a symbols directory that is writable") @@ -310,11 +310,11 @@ def symbol_table_from_pdb(cls, context: interfaces.context.ContextInterface, con if not guids: raise exceptions.VolatilityException( - "Did not find GUID of {} in module @ 0x{:x}!".format(pdb_name, module_offset)) + f"Did not find GUID of {pdb_name} in module @ 0x{module_offset:x}!") guid = guids[0] - vollog.debug("Found {}: {}-{}".format(guid["pdb_name"], guid["GUID"], guid["age"])) + vollog.debug(f"Found {guid['pdb_name']}:
{guid['GUID']}-{guid['age']}") return cls.load_windows_symbol_table(context, guid["GUID"], diff --git a/volatility3/schemas/__init__.py b/volatility3/schemas/__init__.py index ea7f0579d6..9340b29f47 100644 --- a/volatility3/schemas/__init__.py +++ b/volatility3/schemas/__init__.py @@ -44,7 +44,7 @@ def validate(input: Dict[str, Any], use_cache: bool = True) -> bool: basepath = os.path.abspath(os.path.dirname(__file__)) schema_path = os.path.join(basepath, 'schema-' + format + '.json') if not os.path.exists(schema_path): - vollog.debug("Schema for format not found: {}".format(schema_path)) + vollog.debug(f"Schema for format not found: {schema_path}") return False with open(schema_path, 'r') as s: schema = json.load(s) From 0d48261201e09735e791f2a301bb39d3bd5e90d9 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 18 Jul 2021 16:49:37 +0100 Subject: [PATCH 167/294] Core: Change all 3.5 type hints to variable annotations --- volatility3/__init__.py | 2 +- volatility3/cli/__init__.py | 2 +- volatility3/cli/text_renderer.py | 8 +++--- volatility3/cli/volargparse.py | 6 ++--- volatility3/cli/volshell/generic.py | 4 +-- .../framework/automagic/construct_layers.py | 2 +- volatility3/framework/automagic/linux.py | 2 +- volatility3/framework/automagic/pdbscan.py | 10 +++---- .../framework/automagic/symbol_cache.py | 8 +++--- .../framework/automagic/symbol_finder.py | 12 ++++----- volatility3/framework/automagic/windows.py | 2 +- .../framework/configuration/requirements.py | 14 +++++----- volatility3/framework/contexts/__init__.py | 4 +-- volatility3/framework/interfaces/automagic.py | 4 +-- .../framework/interfaces/configuration.py | 14 +++++----- volatility3/framework/interfaces/layers.py | 18 ++++++------- volatility3/framework/interfaces/objects.py | 4 +-- volatility3/framework/interfaces/plugins.py | 4 +-- volatility3/framework/interfaces/renderers.py | 4 +-- volatility3/framework/interfaces/symbols.py | 2 +- volatility3/framework/layers/intel.py | 2 +- 
volatility3/framework/layers/linear.py | 3 ++- volatility3/framework/layers/msf.py | 2 +- volatility3/framework/layers/physical.py | 6 ++--- volatility3/framework/layers/qemu.py | 2 +- volatility3/framework/layers/registry.py | 2 +- .../framework/layers/scanners/__init__.py | 2 +- .../framework/layers/scanners/multiregexp.py | 2 +- volatility3/framework/layers/segmented.py | 6 ++--- volatility3/framework/objects/__init__.py | 18 ++++++------- volatility3/framework/objects/templates.py | 10 +++---- volatility3/framework/plugins/mac/lsmod.py | 2 +- volatility3/framework/plugins/mac/psaux.py | 2 +- volatility3/framework/plugins/mac/pslist.py | 4 +-- volatility3/framework/plugins/timeliner.py | 2 +- .../framework/plugins/windows/callbacks.py | 4 +-- .../framework/plugins/windows/handles.py | 2 +- .../framework/plugins/windows/modscan.py | 2 +- .../framework/plugins/windows/modules.py | 2 +- .../framework/plugins/windows/poolscanner.py | 2 +- .../framework/plugins/windows/pstree.py | 6 ++--- .../plugins/windows/registry/printkey.py | 2 +- .../plugins/windows/registry/userassist.py | 4 +-- .../framework/plugins/windows/strings.py | 12 +++++---- .../framework/plugins/windows/virtmap.py | 4 +-- volatility3/framework/renderers/__init__.py | 6 ++--- volatility3/framework/renderers/conversion.py | 2 +- .../framework/renderers/format_hints.py | 2 +- volatility3/framework/symbols/__init__.py | 8 +++--- volatility3/framework/symbols/intermed.py | 4 +-- .../framework/symbols/linux/__init__.py | 2 +- volatility3/framework/symbols/mac/__init__.py | 2 +- .../symbols/mac/extensions/__init__.py | 2 +- volatility3/framework/symbols/native.py | 6 ++--- .../symbols/windows/extensions/__init__.py | 2 +- .../symbols/windows/extensions/pool.py | 2 +- .../framework/symbols/windows/pdbconv.py | 26 +++++++++---------- volatility3/schemas/__init__.py | 2 +- 58 files changed, 150 insertions(+), 147 deletions(-) diff --git a/volatility3/__init__.py b/volatility3/__init__.py index 
5d3c34e436..db52aa9b0e 100644 --- a/volatility3/__init__.py +++ b/volatility3/__init__.py @@ -43,7 +43,7 @@ def find_spec(fullname: str, path: Optional[List[str]], target: None = None, **k raise Warning(warning) -warning_find_spec = [WarningFindSpec()] # type: List[abc.MetaPathFinder] +warning_find_spec: List[abc.MetaPathFinder] = [WarningFindSpec()] sys.meta_path = warning_find_spec + sys.meta_path # We point the volatility3.plugins __path__ variable at BOTH diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 9ee4d570c0..e6f00728a3 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -583,7 +583,7 @@ def populate_requirements_argparse(self, parser: Union[argparse.ArgumentParser, # Construct an argparse group for requirement in configurable.get_requirements(): - additional = {} # type: Dict[str, Any] + additional: Dict[str, Any] = {} if not isinstance(requirement, interfaces.configuration.RequirementInterface): raise TypeError("Plugin contains requirements that are not RequirementInterfaces: {}".format( configurable.__name__)) diff --git a/volatility3/cli/text_renderer.py b/volatility3/cli/text_renderer.py index 19507b11c5..35b2468e85 100644 --- a/volatility3/cli/text_renderer.py +++ b/volatility3/cli/text_renderer.py @@ -278,7 +278,7 @@ def visitor( accumulator.append((node.path_depth, line)) return accumulator - final_output = [] # type: List[Tuple[int, Dict[interfaces.renderers.Column, bytes]]] + final_output: List[Tuple[int, Dict[interfaces.renderers.Column, bytes]]] = [] if not grid.populated: grid.populate(visitor, final_output) else: @@ -323,15 +323,15 @@ def render(self, grid: interfaces.renderers.TreeGrid): outfd = sys.stdout outfd.write("\n") - final_output = ( - {}, []) # type: Tuple[Dict[str, List[interfaces.renderers.TreeNode]], List[interfaces.renderers.TreeNode]] + final_output: Tuple[Dict[str, List[interfaces.renderers.TreeNode]], List[interfaces.renderers.TreeNode]] = ( + {}, []) def visitor( node: 
interfaces.renderers.TreeNode, accumulator: Tuple[Dict[str, Dict[str, Any]], List[Dict[str, Any]]] ) -> Tuple[Dict[str, Dict[str, Any]], List[Dict[str, Any]]]: # Nodes always have a path value, giving them a path_depth of at least 1, we use max just in case acc_map, final_tree = accumulator - node_dict = {'__children': []} # type: Dict[str, Any] + node_dict: Dict[str, Any] = {'__children': []} for column_index in range(len(grid.columns)): column = grid.columns[column_index] renderer = self._type_renderers.get(column.type, self._type_renderers['default']) diff --git a/volatility3/cli/volargparse.py b/volatility3/cli/volargparse.py index 5ced541ae1..996acc1506 100644 --- a/volatility3/cli/volargparse.py +++ b/volatility3/cli/volargparse.py @@ -12,7 +12,7 @@ # We shouldn't really steal a private member from argparse, but otherwise we're just duplicating code # HelpfulSubparserAction gives more information about the possible choices from a subparsed choice -# HelpfulArgParser gives the list of choices when no arguments are provided to a choice option whilst still using a METAVAR +# HelpfulArgParser gives the list of choices when no arguments are provided to a choice option whilst still using a class HelpfulSubparserAction(argparse._SubParsersAction): @@ -22,7 +22,7 @@ class HelpfulSubparserAction(argparse._SubParsersAction): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) # We don't want the action self-check to kick in, so we remove the choices list, the check happens in __call__ - self.choices = None # type: ignore + self.choices = None def __call__(self, parser: argparse.ArgumentParser, @@ -31,7 +31,7 @@ def __call__(self, option_string: Optional[str] = None) -> None: parser_name = '' - arg_strings = [] # type: List[str] + arg_strings: List[str] = [] if values is not None: for value in values: if not parser_name: diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py index 46701b4acb..7252c3b691 100644 
--- a/volatility3/cli/volshell/generic.py +++ b/volatility3/cli/volshell/generic.py @@ -30,7 +30,7 @@ class Volshell(interfaces.plugins.PluginInterface): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) - self.__current_layer = None # type: Optional[str] + self.__current_layer: Optional[str] = None self.__console = None def random_string(self, length: int = 32) -> str: @@ -38,7 +38,7 @@ def random_string(self, length: int = 32) -> str: @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: - reqs = [] # type: List[interfaces.configuration.RequirementInterface] + reqs: List[interfaces.configuration.RequirementInterface] = [] if cls == Volshell: reqs = [ requirements.URIRequirement(name = 'script', diff --git a/volatility3/framework/automagic/construct_layers.py b/volatility3/framework/automagic/construct_layers.py index 8afc6326e3..40a17419f8 100644 --- a/volatility3/framework/automagic/construct_layers.py +++ b/volatility3/framework/automagic/construct_layers.py @@ -37,7 +37,7 @@ def __call__(self, # Make sure we import the layers, so they can reconstructed framework.import_files(sys.modules['volatility3.framework.layers']) - result = [] # type: List[str] + result: List[str] = [] if requirement.unsatisfied(context, config_path): # Having called validate at the top level tells us both that we need to dig deeper # but also ensures that TranslationLayerRequirements have got the correct subrequirements if their class is populated diff --git a/volatility3/framework/automagic/linux.py b/volatility3/framework/automagic/linux.py index 4d0b95ce3f..767dd34070 100644 --- a/volatility3/framework/automagic/linux.py +++ b/volatility3/framework/automagic/linux.py @@ -57,7 +57,7 @@ def stack(cls, layer_name, progress_callback = progress_callback) - layer_class = intel.Intel # type: Type + layer_class: Type = intel.Intel if 'init_top_pgt' in table.symbols: layer_class = intel.Intel32e dtb_symbol_name = 'init_top_pgt' 
diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py index d1df22ebea..b32010506c 100644 --- a/volatility3/framework/automagic/pdbscan.py +++ b/volatility3/framework/automagic/pdbscan.py @@ -62,7 +62,7 @@ def find_virtual_layers_from_req(self, context: interfaces.context.ContextInterf A list of (layer_name, scan_results) """ sub_config_path = interfaces.configuration.path_join(config_path, requirement.name) - results = [] # type: List[str] + results: List[str] = [] if isinstance(requirement, requirements.TranslationLayerRequirement): # Check for symbols in this layer # FIXME: optionally allow a full (slow) scan @@ -207,13 +207,13 @@ def _method_offset(self, """Method for finding a suitable kernel offset based on a module table.""" vollog.debug("Kernel base determination - searching layer module list structure") - valid_kernel = None # type: Optional[ValidKernelType] + valid_kernel: Optional[ValidKernelType] = None # If we're here, chances are high we're in a Win10 x64 image with kernel base randomization physical_layer_name = self.get_physical_layer_name(context, vlayer) physical_layer = context.layers[physical_layer_name] # TODO: On older windows, this might be \WINDOWS\system32\nt rather than \SystemRoot\system32\nt results = physical_layer.scan(context, scanners.BytesScanner(pattern), progress_callback = progress_callback) - seen = set() # type: Set[int] + seen: Set[int] = set() # Because this will launch a scan of the virtual layer, we want to be careful for result in results: # TODO: Identify the specific structure we're finding and document this a bit better @@ -252,7 +252,7 @@ def check_kernel_offset(self, """Scans a virtual address.""" # Scan a few megs of the virtual space at the location to see if they're potential kernels - valid_kernel = None # type: Optional[ValidKernelType] + valid_kernel: Optional[ValidKernelType] = None kernel_pdb_names = [bytes(name + ".pdb", "utf-8") for name in 
constants.windows.KERNEL_MODULE_NAMES] virtual_layer_name = vlayer.name @@ -295,7 +295,7 @@ def determine_valid_kernel(self, Returns: A dictionary of valid kernels """ - valid_kernel = None # type: Optional[ValidKernelType] + valid_kernel: Optional[ValidKernelType] = None for virtual_layer_name in potential_layers: vlayer = context.layers.get(virtual_layer_name, None) if isinstance(vlayer, layers.intel.Intel): diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index 1b468a4cd0..627ffac8bc 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -26,15 +26,15 @@ class SymbolBannerCache(interfaces.automagic.AutomagicInterface): # The user would run it eventually either way, but running it first means it can be used that run priority = 0 - os = None # type: Optional[str] - symbol_name = "banner_name" # type: str - banner_path = None # type: Optional[str] + os: Optional[str] = None + symbol_name: str = "banner_name" + banner_path: Optional[str] = None @classmethod def load_banners(cls) -> BannersType: if not cls.banner_path: raise ValueError("Banner_path not appropriately set") - banners = {} # type: BannersType + banners: BannersType = {} if os.path.exists(cls.banner_path): with open(cls.banner_path, "rb") as f: # We use pickle over JSON because we're dealing with bytes objects diff --git a/volatility3/framework/automagic/symbol_finder.py b/volatility3/framework/automagic/symbol_finder.py index b57ab54c83..03d051c49d 100644 --- a/volatility3/framework/automagic/symbol_finder.py +++ b/volatility3/framework/automagic/symbol_finder.py @@ -17,15 +17,15 @@ class SymbolFinder(interfaces.automagic.AutomagicInterface): """Symbol loader based on signature strings.""" priority = 40 - banner_config_key = "banner" # type: str - banner_cache = None # type: Optional[Type[symbol_cache.SymbolBannerCache]] - symbol_class = None # type: Optional[str] - find_aslr = 
None # type: Optional[Callable] + banner_config_key: str = "banner" + banner_cache: Optional[Type[symbol_cache.SymbolBannerCache]] = None + symbol_class: Optional[str] = None + find_aslr: Optional[Callable] = None def __init__(self, context: interfaces.context.ContextInterface, config_path: str) -> None: super().__init__(context, config_path) - self._requirements = [] # type: List[Tuple[str, interfaces.configuration.RequirementInterface]] - self._banners = {} # type: symbol_cache.BannersType + self._requirements: List[Tuple[str, interfaces.configuration.RequirementInterface]] = [] + self._banners: symbol_cache.BannersType = {} @property def banners(self) -> symbol_cache.BannersType: diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 1144978beb..9f98bebfe6 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -325,7 +325,7 @@ def stack(cls, if arch not in ['Intel32', 'Intel64']: return None # Set the layer type - layer_type = intel.WindowsIntel # type: Type + layer_type: Type = intel.WindowsIntel if arch == 'Intel64': layer_type = intel.WindowsIntel32e elif base_layer.metadata.get('pae', False): diff --git a/volatility3/framework/configuration/requirements.py b/volatility3/framework/configuration/requirements.py index ea0a5fcc25..e0f8bb2d89 100644 --- a/volatility3/framework/configuration/requirements.py +++ b/volatility3/framework/configuration/requirements.py @@ -36,13 +36,13 @@ class BooleanRequirement(interfaces.configuration.SimpleTypeRequirement): class IntRequirement(interfaces.configuration.SimpleTypeRequirement): """A requirement type that contains a single integer.""" - instance_type = int # type: ClassVar[Type] + instance_type: ClassVar[Type] = int class StringRequirement(interfaces.configuration.SimpleTypeRequirement): """A requirement type that contains a single unicode string.""" # TODO: Maybe add string length limits? 
- instance_type = str # type: ClassVar[Type] + instance_type: ClassVar[Type] = str class URIRequirement(StringRequirement): @@ -53,7 +53,7 @@ class URIRequirement(StringRequirement): class BytesRequirement(interfaces.configuration.SimpleTypeRequirement): """A requirement type that contains a byte string.""" - instance_type = bytes # type: ClassVar[Type] + instance_type: ClassVar[Type] = bytes class ListRequirement(interfaces.configuration.RequirementInterface): @@ -83,9 +83,9 @@ def __init__(self, super().__init__(*args, **kwargs) if not issubclass(element_type, interfaces.configuration.BasicTypes): raise TypeError("ListRequirements can only be populated with simple InstanceRequirements") - self.element_type = element_type # type: Type - self.min_elements = min_elements or 0 # type: int - self.max_elements = max_elements # type: Optional[int] + self.element_type: Type = element_type + self.min_elements: int = min_elements or 0 + self.max_elements: Optional[int] = max_elements def unsatisfied(self, context: interfaces.context.ContextInterface, config_path: str) -> Dict[str, interfaces.configuration.RequirementInterface]: @@ -397,7 +397,7 @@ def __init__(self, super().__init__(name = name, description = description, default = default, optional = optional) if component is None: raise TypeError("Component cannot be None") - self._component = component # type: Type[interfaces.configuration.VersionableInterface] + self._component: Type[interfaces.configuration.VersionableInterface] = component if version is None: raise TypeError("Version cannot be None") self._version = version diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py index 9fb950a03b..412dfbc508 100644 --- a/volatility3/framework/contexts/__init__.py +++ b/volatility3/framework/contexts/__init__.py @@ -319,7 +319,7 @@ def deduplicate(self) -> 'ModuleCollection': included in the deduplicated version """ new_modules = [] - seen = set() # type: Set[str] + seen: 
Set[str] = set() for mod in self._modules: if mod.hash not in seen or mod.size == 0: new_modules.append(mod) @@ -334,7 +334,7 @@ def modules(self) -> Dict[str, List[SizedModule]]: @classmethod def _generate_module_dict(cls, modules: List[SizedModule]) -> Dict[str, List[SizedModule]]: - result = {} # type: Dict[str, List[SizedModule]] + result: Dict[str, List[SizedModule]] = {} for module in modules: modlist = result.get(module.name, []) modlist.append(module) diff --git a/volatility3/framework/interfaces/automagic.py b/volatility3/framework/interfaces/automagic.py index c6eb2e5cea..04f4958946 100644 --- a/volatility3/framework/interfaces/automagic.py +++ b/volatility3/framework/interfaces/automagic.py @@ -82,7 +82,7 @@ def find_requirements(self, A list of tuples containing the config_path, sub_config_path and requirement identifying the unsatisfied `Requirements` """ sub_config_path = interfaces.configuration.path_join(config_path, requirement_root.name) - results = [] # type: List[Tuple[str, interfaces.configuration.RequirementInterface]] + results: List[Tuple[str, interfaces.configuration.RequirementInterface]] = [] recurse = not shortcut if isinstance(requirement_root, requirement_type): if recurse or requirement_root.unsatisfied(context, config_path): @@ -105,7 +105,7 @@ class StackerLayerInterface(metaclass = ABCMeta): stack_order = 0 """The order in which to attempt stacking, the lower the earlier""" - exclusion_list = [] # type: List[str] + exclusion_list: List[str] = [] """The list operating systems/first-level plugin hierarchy that should exclude this stacker""" @classmethod diff --git a/volatility3/framework/interfaces/configuration.py b/volatility3/framework/interfaces/configuration.py index d3c05d9a96..d52d6a1768 100644 --- a/volatility3/framework/interfaces/configuration.py +++ b/volatility3/framework/interfaces/configuration.py @@ -79,8 +79,8 @@ def __init__(self, if not (isinstance(separator, str) and len(separator) == 1): raise 
TypeError(f"Separator must be a one character string: {separator}") self._separator = separator - self._data = {} # type: Dict[str, ConfigSimpleType] - self._subdict = {} # type: Dict[str, 'HierarchicalDict'] + self._data: Dict[str, ConfigSimpleType] = {} + self._subdict: Dict[str, 'HierarchicalDict'] = {} if isinstance(initial_dict, str): initial_dict = json.loads(initial_dict) if isinstance(initial_dict, dict): @@ -320,7 +320,7 @@ def __init__(self, self._description = description or "" self._default = default self._optional = optional - self._requirements = {} # type: Dict[str, RequirementInterface] + self._requirements: Dict[str, RequirementInterface] = {} def __repr__(self) -> str: return "<" + self.__class__.__name__ + ": " + self.name + ">" @@ -438,7 +438,7 @@ def unsatisfied(self, context: 'interfaces.context.ContextInterface', class SimpleTypeRequirement(RequirementInterface): """Class to represent a single simple type (such as a boolean, a string, an integer or a series of bytes)""" - instance_type = bool # type: ClassVar[Type] + instance_type: ClassVar[Type] = bool def add_requirement(self, requirement: RequirementInterface): """Always raises a TypeError as instance requirements cannot have @@ -529,7 +529,7 @@ class ConstructableRequirementInterface(RequirementInterface): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) self.add_requirement(ClassRequirement("class", "Class of the constructable requirement")) - self._current_class_requirements = set() # type: Set[Any] + self._current_class_requirements: Set[Any] = set() def __eq__(self, other): # We can just use super because it checks all member of `__dict__` @@ -620,7 +620,7 @@ def __init__(self, context: 'interfaces.context.ContextInterface', config_path: super().__init__() self._context = context self._config_path = config_path - self._config_cache = None # type: Optional[HierarchicalDict] + self._config_cache: Optional[HierarchicalDict] = None @property def context(self) 
-> 'interfaces.context.ContextInterface': @@ -729,7 +729,7 @@ class VersionableInterface: All version number should use semantic versioning """ - _version = (0, 0, 0) # type: Tuple[int, int, int] + _version: Tuple[int, int, int] = (0, 0, 0) @classproperty def version(cls) -> Tuple[int, int, int]: diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 0b5a58d372..c7c4feb8cd 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -58,8 +58,8 @@ def __init__(self) -> None: super().__init__() self.chunk_size = 0x1000000 # Default to 16Mb chunks self.overlap = 0x1000 # A page of overlap by default - self._context = None # type: Optional[interfaces.context.ContextInterface] - self._layer_name = None # type: Optional[str] + self._context: Optional[interfaces.context.ContextInterface] = None + self._layer_name: Optional[str] = None @property def context(self) -> Optional['interfaces.context.ContextInterface']: @@ -99,7 +99,7 @@ class DataLayerInterface(interfaces.configuration.ConfigurableInterface, metacla accesses a data source and exposes it within volatility. 
""" - _direct_metadata = {'architecture': 'Unknown', 'os': 'Unknown'} # type: Mapping + _direct_metadata: Mapping = {'architecture': 'Unknown', 'os': 'Unknown'} def __init__(self, context: 'interfaces.context.ContextInterface', @@ -227,7 +227,7 @@ def scan(self, sections = list(self._coalesce_sections(sections)) try: - progress = DummyProgress() # type: ProgressValue + progress: ProgressValue = DummyProgress() scan_iterator = functools.partial(self._scan_iterator, scanner, sections) scan_metric = self._scan_metric(scanner, sections) if not scanner.thread_safe or constants.PARALLELISM == constants.Parallelism.Off: @@ -240,7 +240,7 @@ def scan(self, yield from scan_chunk(value) else: progress = multiprocessing.Manager().Value("Q", 0) - parallel_module = multiprocessing # type: types.ModuleType + parallel_module: types.ModuleType = multiprocessing if constants.PARALLELISM == constants.Parallelism.Threading: progress = DummyProgress() parallel_module = threading @@ -266,7 +266,7 @@ def scan(self, def _coalesce_sections(self, sections: Iterable[Tuple[int, int]]) -> Iterable[Tuple[int, int]]: """Take a list of (start, length) sections and coalesce any adjacent sections.""" - result = [] # type: List[Tuple[int, int]] + result: List[Tuple[int, int]] = [] position = 0 for (start, length) in sorted(sections): if result and start <= position: @@ -423,7 +423,7 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes: """Reads an offset for length bytes and returns 'bytes' (not 'str') of length size.""" current_offset = offset - output = b'' # type: bytes + output: bytes = b'' for (layer_offset, sublength, mapped_offset, mapped_length, layer) in self.mapping(offset, length, ignore_errors = pad): @@ -473,7 +473,7 @@ def _scan_iterator(self, assumed to have no holes """ for (section_start, section_length) in sections: - output = [] # type: List[Tuple[str, int, int]] + output: List[Tuple[str, int, int]] = [] # Hold the offsets of each chunk (including how much has 
been filled) chunk_start = chunk_position = 0 @@ -532,7 +532,7 @@ class LayerContainer(collections.abc.Mapping): """Container for multiple layers of data.""" def __init__(self) -> None: - self._layers = {} # type: Dict[str, DataLayerInterface] + self._layers: Dict[str, DataLayerInterface] = {} def read(self, layer: str, offset: int, length: int, pad: bool = False) -> bytes: """Reads from a particular layer at offset for length bytes. diff --git a/volatility3/framework/interfaces/objects.py b/volatility3/framework/interfaces/objects.py index 4cb8bce42b..11040c503a 100644 --- a/volatility3/framework/interfaces/objects.py +++ b/volatility3/framework/interfaces/objects.py @@ -214,7 +214,7 @@ class VolTemplateProxy(metaclass = abc.ABCMeta): to control how their templates respond without needing to write new templates for each and every potental object type. """ - _methods = [] # type: List[str] + _methods: List[str] = [] @classmethod @abc.abstractmethod @@ -275,7 +275,7 @@ def __init__(self, type_name: str, **arguments) -> None: """Stores the keyword arguments for later object creation.""" # Allow the updating of template arguments whilst still in template form super().__init__() - empty_dict = {} # type: Dict[str, Any] + empty_dict: Dict[str, Any] = {} self._vol = collections.ChainMap(empty_dict, arguments, {'type_name': type_name}) @property diff --git a/volatility3/framework/interfaces/plugins.py b/volatility3/framework/interfaces/plugins.py index 06316c221e..d091f27cb7 100644 --- a/volatility3/framework/interfaces/plugins.py +++ b/volatility3/framework/interfaces/plugins.py @@ -95,7 +95,7 @@ class PluginInterface(interfaces.configuration.ConfigurableInterface, """ # Be careful with inheritance around this (We default to requiring a version which doesn't exist, so it must be set) - _required_framework_version = (0, 0, 0) # type: Tuple[int, int, int] + _required_framework_version: Tuple[int, int, int] = (0, 0, 0) """The _version variable is a quick way for plugins to 
define their current interface, it should follow SemVer rules""" def __init__(self, @@ -121,7 +121,7 @@ def __init__(self, if requirement.name not in self.config: self.config[requirement.name] = requirement.default - self._file_handler = FileHandlerInterface # type: Type[FileHandlerInterface] + self._file_handler: Type[FileHandlerInterface] = FileHandlerInterface framework.require_interface_version(*self._required_framework_version) diff --git a/volatility3/framework/interfaces/renderers.py b/volatility3/framework/interfaces/renderers.py index 54b9f922fc..dd53a96b33 100644 --- a/volatility3/framework/interfaces/renderers.py +++ b/volatility3/framework/interfaces/renderers.py @@ -38,7 +38,7 @@ def render(self, grid: 'TreeGrid') -> None: class ColumnSortKey(metaclass = ABCMeta): - ascending = True # type: bool + ascending: bool = True @abstractmethod def __call__(self, values: List[Any]) -> Any: @@ -129,7 +129,7 @@ class TreeGrid(object, metaclass = ABCMeta): and to create cycles. """ - base_types = (int, str, float, bytes, datetime.datetime, Disassembly) # type: ClassVar[Tuple] + base_types: ClassVar[Tuple] = (int, str, float, bytes, datetime.datetime, Disassembly) def __init__(self, columns: ColumnsType, generator: Generator) -> None: """Constructs a TreeGrid object using a specific set of columns. 
diff --git a/volatility3/framework/interfaces/symbols.py b/volatility3/framework/interfaces/symbols.py index 8b44198298..37c2824ebb 100644 --- a/volatility3/framework/interfaces/symbols.py +++ b/volatility3/framework/interfaces/symbols.py @@ -96,7 +96,7 @@ def __init__(self, table_mapping = {} self.table_mapping = table_mapping self._native_types = native_types - self._sort_symbols = [] # type: List[Tuple[int, str]] + self._sort_symbols: List[Tuple[int, str]] = [] # Set any provisioned class_types if class_types: diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 555a8417bc..48fa205bdf 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -39,7 +39,7 @@ def __init__(self, metadata: Optional[Dict[str, Any]] = None) -> None: super().__init__(context = context, config_path = config_path, name = name, metadata = metadata) self._base_layer = self.config["memory_layer"] - self._swap_layers = [] # type: List[str] + self._swap_layers: List[str] = [] self._page_map_offset = self.config["page_map_offset"] # Assign constants diff --git a/volatility3/framework/layers/linear.py b/volatility3/framework/layers/linear.py index 40341d86de..d94f7bcc0e 100644 --- a/volatility3/framework/layers/linear.py +++ b/volatility3/framework/layers/linear.py @@ -33,7 +33,7 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes: """Reads an offset for length bytes and returns 'bytes' (not 'str') of length size.""" current_offset = offset - output = [] # type: List[bytes] + output: List[bytes] = [] for (offset, _, mapped_offset, mapped_length, layer) in self.mapping(offset, length, ignore_errors = pad): if not pad and offset > current_offset: raise exceptions.InvalidAddressException( diff --git a/volatility3/framework/layers/msf.py b/volatility3/framework/layers/msf.py index 7713c50247..02fc570bcb 100644 --- a/volatility3/framework/layers/msf.py +++
b/volatility3/framework/layers/msf.py @@ -34,7 +34,7 @@ def __init__(self, if response is None: raise PDBFormatException(name, "Could not find a suitable header") self._version, self._header = response - self._streams = {} # type: Dict[int, str] + self._streams: Dict[int, str] = {} @property def pdb_symbol_table(self) -> str: diff --git a/volatility3/framework/layers/physical.py b/volatility3/framework/layers/physical.py index 998d8cf123..228815fea6 100644 --- a/volatility3/framework/layers/physical.py +++ b/volatility3/framework/layers/physical.py @@ -86,10 +86,10 @@ def __init__(self, self._write_warning = False self._location = self.config["location"] self._accessor = resources.ResourceAccessor() - self._file_ = None # type: Optional[IO[Any]] - self._size = None # type: Optional[int] + self._file_: Optional[IO[Any]] = None + self._size: Optional[int] = None # Construct the lock now (shared if made before threading) in case we ever need it - self._lock = DummyLock() # type: Union[DummyLock, threading.Lock] + self._lock: Union[DummyLock, threading.Lock] = DummyLock() if constants.PARALLELISM == constants.Parallelism.Threading: self._lock = threading.Lock() # Instantiate the file to throw exceptions if the file doesn't open diff --git a/volatility3/framework/layers/qemu.py b/volatility3/framework/layers/qemu.py index 95d7d2a8f1..ff0644dbc1 100644 --- a/volatility3/framework/layers/qemu.py +++ b/volatility3/framework/layers/qemu.py @@ -40,7 +40,7 @@ def __init__(self, metadata: Optional[Dict[str, Any]] = None) -> None: self._qemu_table_name = intermed.IntermediateSymbolTable.create(context, config_path, 'generic', 'qemu') self._configuration = None - self._compressed = set() # type: Set[int] + self._compressed: Set[int] = set() self._current_segment_name = b'' super().__init__(context = context, config_path = config_path, name = name, metadata = metadata) diff --git a/volatility3/framework/layers/registry.py b/volatility3/framework/layers/registry.py index 
ed6d045f6d..55a6e51863 100644 --- a/volatility3/framework/layers/registry.py +++ b/volatility3/framework/layers/registry.py @@ -141,7 +141,7 @@ def get_key(self, key: str, return_list: bool = False) -> Union[List[objects.Str if key.endswith("\\"): key = key[:-1] key_array = key.split('\\') - found_key = [] # type: List[str] + found_key: List[str] = [] while key_array and node_key: subkeys = node_key[-1].get_subkeys() for subkey in subkeys: diff --git a/volatility3/framework/layers/scanners/__init__.py b/volatility3/framework/layers/scanners/__init__.py index 2a167fc4e4..3407b57840 100644 --- a/volatility3/framework/layers/scanners/__init__.py +++ b/volatility3/framework/layers/scanners/__init__.py @@ -48,7 +48,7 @@ class MultiStringScanner(layers.ScannerInterface): def __init__(self, patterns: List[bytes]) -> None: super().__init__() - self._pattern_trie = {} # type: Optional[Dict[int, Optional[Dict]]] + self._pattern_trie: Optional[Dict[int, Optional[Dict]]] = {} for pattern in patterns: self._process_pattern(pattern) self._regex = self._process_trie(self._pattern_trie) diff --git a/volatility3/framework/layers/scanners/multiregexp.py b/volatility3/framework/layers/scanners/multiregexp.py index 36c6410aa8..45feb51d1c 100644 --- a/volatility3/framework/layers/scanners/multiregexp.py +++ b/volatility3/framework/layers/scanners/multiregexp.py @@ -10,7 +10,7 @@ class MultiRegexp(object): """Algorithm for multi-string matching.""" def __init__(self) -> None: - self._pattern_strings = [] # type: List[bytes] + self._pattern_strings: List[bytes] = [] self._regex = re.compile(b'') def add_pattern(self, pattern: bytes) -> None: diff --git a/volatility3/framework/layers/segmented.py b/volatility3/framework/layers/segmented.py index 076838da6e..80c89723ac 100644 --- a/volatility3/framework/layers/segmented.py +++ b/volatility3/framework/layers/segmented.py @@ -25,9 +25,9 @@ def __init__(self, super().__init__(context = context, config_path = config_path, name = name, metadata 
= metadata) self._base_layer = self.config["base_layer"] - self._segments = [] # type: List[Tuple[int, int, int, int]] - self._minaddr = None # type: Optional[int] - self._maxaddr = None # type: Optional[int] + self._segments: List[Tuple[int, int, int, int]] = [] + self._minaddr: Optional[int] = None + self._maxaddr: Optional[int] = None self._load_segments() diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index bf8dda515c..15d2ef63e5 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -95,7 +95,7 @@ class Function(interfaces.objects.ObjectInterface): class PrimitiveObject(interfaces.objects.ObjectInterface): """PrimitiveObject is an interface for any objects that should simulate a Python primitive.""" - _struct_type = int # type: ClassVar[Type] + _struct_type: ClassVar[Type] = int def __init__(self, context: interfaces.context.ContextInterface, type_name: str, object_info: interfaces.objects.ObjectInformation, data_format: DataFormatInfo) -> None: @@ -164,7 +164,7 @@ def write(self, value: TUnion[int, float, bool, bytes, str]) -> interfaces.objec # https://mail.python.org/pipermail/python-dev/2004-February/042537.html class Boolean(PrimitiveObject, int): """Primitive Object that handles boolean types.""" - _struct_type = int # type: ClassVar[Type] + _struct_type: ClassVar[Type] = int class Integer(PrimitiveObject, int): @@ -173,17 +173,17 @@ class Integer(PrimitiveObject, int): class Float(PrimitiveObject, float): """Primitive Object that handles double or floating point numbers.""" - _struct_type = float # type: ClassVar[Type] + _struct_type: ClassVar[Type] = float class Char(PrimitiveObject, int): """Primitive Object that handles characters.""" - _struct_type = int # type: ClassVar[Type] + _struct_type: ClassVar[Type] = int class Bytes(PrimitiveObject, bytes): """Primitive Object that handles specific series of bytes.""" - _struct_type = bytes # type: 
ClassVar[Type] + _struct_type: ClassVar[Type] = bytes def __init__(self, context: interfaces.context.ContextInterface, @@ -227,7 +227,7 @@ class String(PrimitiveObject, str): max_length: specifies the maximum possible length that the string could hold within memory (for multibyte characters, this will not be the maximum length of the string) """ - _struct_type = str # type: ClassVar[Type] + _struct_type: ClassVar[Type] = str def __init__(self, context: interfaces.context.ContextInterface, @@ -453,7 +453,7 @@ def __hash__(self): @classmethod def _generate_inverse_choices(cls, choices: Dict[str, int]) -> Dict[int, str]: """Generates the inverse choices for the object.""" - inverse_choices = {} # type: Dict[int, str] + inverse_choices: Dict[int, str] = {} for k, v in choices.items(): if v in inverse_choices: # Technically this shouldn't be a problem, but since we inverse cache @@ -601,7 +601,7 @@ def __getitem__(self, s: slice) -> List[interfaces.objects.Template]: def __getitem__(self, i): """Returns the i-th item from the array.""" - result = [] # type: List[interfaces.objects.Template] + result: List[interfaces.objects.Template] = [] mask = self._context.layers[self.vol.layer_name].address_mask # We use the range function to deal with slices for us series = range(self.vol.count)[i] @@ -649,7 +649,7 @@ def __init__(self, context: interfaces.context.ContextInterface, type_name: str, size = size, members = members) # self._check_members(members) - self._concrete_members = {} # type: Dict[str, Dict] + self._concrete_members: Dict[str, Dict] = {} def has_member(self, member_name: str) -> bool: """Returns whether the object would contain a member called diff --git a/volatility3/framework/objects/templates.py b/volatility3/framework/objects/templates.py index 62094ff3d3..b544d117fb 100644 --- a/volatility3/framework/objects/templates.py +++ b/volatility3/framework/objects/templates.py @@ -65,7 +65,7 @@ def __call__(self, context: interfaces.context.ContextInterface, 
Returns: an object adhereing to the :class:`~volatility3.framework.interfaces.objects.ObjectInterface` """ - arguments = {} # type: Dict[str, Any] + arguments: Dict[str, Any] = {} for arg in self.vol: if arg != 'object_class': arguments[arg] = self.vol[arg] @@ -96,10 +96,10 @@ def _unresolved(self, *args, **kwargs) -> Any: symbol_name, table_name, f"Template contains no information about its structure: {self.vol.type_name}") - size = property(_unresolved) # type: ClassVar[Any] - replace_child = _unresolved # type: ClassVar[Any] - relative_child_offset = _unresolved # type: ClassVar[Any] - has_member = _unresolved # type: ClassVar[Any] + size: ClassVar[Any] = property(_unresolved) + replace_child: ClassVar[Any] = _unresolved + relative_child_offset: ClassVar[Any] = _unresolved + has_member: ClassVar[Any] = _unresolved def __call__(self, context: interfaces.context.ContextInterface, object_info: interfaces.objects.ObjectInformation): template = context.symbol_space.get_type(self.vol.type_name) diff --git a/volatility3/framework/plugins/mac/lsmod.py b/volatility3/framework/plugins/mac/lsmod.py index 89a3c08af0..7227c32895 100644 --- a/volatility3/framework/plugins/mac/lsmod.py +++ b/volatility3/framework/plugins/mac/lsmod.py @@ -57,7 +57,7 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: except exceptions.InvalidAddressException: return [] - seen = set() # type: Set + seen: Set = set() while kmod != 0 and \ kmod not in seen and \ diff --git a/volatility3/framework/plugins/mac/psaux.py b/volatility3/framework/plugins/mac/psaux.py index 97f61c02ed..14d765ef81 100644 --- a/volatility3/framework/plugins/mac/psaux.py +++ b/volatility3/framework/plugins/mac/psaux.py @@ -52,7 +52,7 @@ def _generator(self, tasks: Iterator[Any]) -> Generator[Tuple[int, Tuple[int, st task_name = utility.array_to_string(task.p_comm) - args = [] # type: List[bytes] + args: List[bytes] = [] while argc > 0: try: diff --git 
a/volatility3/framework/plugins/mac/pslist.py b/volatility3/framework/plugins/mac/pslist.py index 0829b42fbd..1ebd89c97c 100644 --- a/volatility3/framework/plugins/mac/pslist.py +++ b/volatility3/framework/plugins/mac/pslist.py @@ -124,7 +124,7 @@ def list_tasks_allproc(cls, proc = kernel.object_from_symbol(symbol_name = "allproc").lh_first - seen = {} # type: Dict[int, int] + seen: Dict[int, int] = {} while proc is not None and proc.vol.offset != 0: if proc.vol.offset in seen: vollog.log(logging.INFO, "Recursive process list detected (a result of non-atomic acquisition).") @@ -165,7 +165,7 @@ def list_tasks_tasks(cls, queue_entry = kernel.object_from_symbol(symbol_name = "tasks") - seen = {} # type: Dict[int, int] + seen: Dict[int, int] = {} for task in queue_entry.walk_list(queue_entry, "tasks", "task"): if task.vol.offset in seen: vollog.log(logging.INFO, "Recursive process list detected (a result of non-atomic acquisition).") diff --git a/volatility3/framework/plugins/timeliner.py b/volatility3/framework/plugins/timeliner.py index e95bba126f..c3fe424b99 100644 --- a/volatility3/framework/plugins/timeliner.py +++ b/volatility3/framework/plugins/timeliner.py @@ -48,7 +48,7 @@ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.timeline = {} self.usable_plugins = None - self.automagics = None # type: Optional[List[interfaces.automagic.AutomagicInterface]] + self.automagics: Optional[List[interfaces.automagic.AutomagicInterface]] = None @classmethod def get_usable_plugins(cls, selected_list: List[str] = None) -> List[Type]: diff --git a/volatility3/framework/plugins/windows/callbacks.py b/volatility3/framework/plugins/windows/callbacks.py index 4c58c62c0d..ef9ca98c09 100644 --- a/volatility3/framework/plugins/windows/callbacks.py +++ b/volatility3/framework/plugins/windows/callbacks.py @@ -191,9 +191,9 @@ def list_bugcheck_reason_callbacks(cls, context: interfaces.context.ContextInter continue try: - component = ntkrnlmp.object( + component: 
Union[interfaces.renderers.BaseAbsentValue, interfaces.objects.ObjectInterface] = ntkrnlmp.object( "string", absolute = True, offset = callback.Component, max_length = 64, errors = "replace" - ) # type: Union[interfaces.renderers.BaseAbsentValue, interfaces.objects.ObjectInterface] + ) except exceptions.InvalidAddressException: component = renderers.UnreadableValue() diff --git a/volatility3/framework/plugins/windows/handles.py b/volatility3/framework/plugins/windows/handles.py index 49098bb8b1..9b249f83c2 100644 --- a/volatility3/framework/plugins/windows/handles.py +++ b/volatility3/framework/plugins/windows/handles.py @@ -172,7 +172,7 @@ def get_type_map(cls, context: interfaces.context.ContextInterface, layer_name: A mapping of type indicies to type names """ - type_map = {} # type: Dict[int, str] + type_map: Dict[int, str] = {} kvo = context.layers[layer_name].config['kernel_virtual_offset'] ntkrnlmp = context.module(symbol_table, layer_name = layer_name, offset = kvo) diff --git a/volatility3/framework/plugins/windows/modscan.py b/volatility3/framework/plugins/windows/modscan.py index 968609ebc0..4820a6fb86 100644 --- a/volatility3/framework/plugins/windows/modscan.py +++ b/volatility3/framework/plugins/windows/modscan.py @@ -81,7 +81,7 @@ def get_session_layers(cls, Returns: A list of session layer names """ - seen_ids = [] # type: List[interfaces.objects.ObjectInterface] + seen_ids: List[interfaces.objects.ObjectInterface] = [] filter_func = pslist.PsList.create_pid_filter(pids or []) for proc in pslist.PsList.list_processes(context = context, diff --git a/volatility3/framework/plugins/windows/modules.py b/volatility3/framework/plugins/windows/modules.py index b030066db7..ad2faefc96 100644 --- a/volatility3/framework/plugins/windows/modules.py +++ b/volatility3/framework/plugins/windows/modules.py @@ -86,7 +86,7 @@ def get_session_layers(cls, Returns: A list of session layer names """ - seen_ids = [] # type: List[interfaces.objects.ObjectInterface] + 
seen_ids: List[interfaces.objects.ObjectInterface] = [] filter_func = pslist.PsList.create_pid_filter(pids or []) for proc in pslist.PsList.list_processes(context = context, diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py index 62a98615a6..a6a6cae0c9 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -338,7 +338,7 @@ def pool_scan(cls, An Iterable of pool constraints and the pool headers associated with them """ # Setup the pattern - constraint_lookup = {} # type: Dict[bytes, PoolConstraint] + constraint_lookup: Dict[bytes, PoolConstraint] = {} for constraint in pool_constraints: if constraint.tag in constraint_lookup: raise ValueError(f"Constraint tag is used for more than one constraint: {repr(constraint.tag)}") diff --git a/volatility3/framework/plugins/windows/pstree.py b/volatility3/framework/plugins/windows/pstree.py index 151a35d529..b8c99688f8 100644 --- a/volatility3/framework/plugins/windows/pstree.py +++ b/volatility3/framework/plugins/windows/pstree.py @@ -18,9 +18,9 @@ class PsTree(interfaces.plugins.PluginInterface): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) - self._processes = {} # type: Dict[int, interfaces.objects.ObjectInterface] - self._levels = {} # type: Dict[int, int] - self._children = {} # type: Dict[int, Set[int]] + self._processes: Dict[int, interfaces.objects.ObjectInterface] = {} + self._levels: Dict[int, int] = {} + self._children: Dict[int, Set[int]] = {} @classmethod def get_requirements(cls): diff --git a/volatility3/framework/plugins/windows/registry/printkey.py b/volatility3/framework/plugins/windows/registry/printkey.py index 380d3bbca1..bad438256b 100644 --- a/volatility3/framework/plugins/windows/registry/printkey.py +++ b/volatility3/framework/plugins/windows/registry/printkey.py @@ -130,7 +130,7 @@ def _printkey_iterator(self, if 
isinstance(value_type, renderers.UnreadableValue): vollog.debug("Couldn't read registry value type, so data is unreadable") - value_data = renderers.UnreadableValue() # type: Union[interfaces.renderers.BaseAbsentValue, bytes] + value_data: Union[interfaces.renderers.BaseAbsentValue, bytes] = renderers.UnreadableValue() else: try: value_data = node.decode_data() diff --git a/volatility3/framework/plugins/windows/registry/userassist.py b/volatility3/framework/plugins/windows/registry/userassist.py index 804e9f60e6..b3a3b8f3ab 100644 --- a/volatility3/framework/plugins/windows/registry/userassist.py +++ b/volatility3/framework/plugins/windows/registry/userassist.py @@ -151,12 +151,12 @@ def list_userassist(self, hive: RegistryHive) -> Generator[Tuple[int, Tuple], No countkey_last_write_time = conversion.wintime_to_datetime(countkey.LastWriteTime.QuadPart) # output the parent Count key - result = ( + result: Tuple[int, Tuple[format_hints.Hex, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any]] = ( 0, (renderers.format_hints.Hex(hive.hive_offset), hive_name, countkey_path, countkey_last_write_time, "Key", renderers.NotApplicableValue(), renderers.NotApplicableValue(), renderers.NotApplicableValue(), renderers.NotApplicableValue(), renderers.NotApplicableValue(), renderers.NotApplicableValue(), renderers.NotApplicableValue()) - ) # type: Tuple[int, Tuple[format_hints.Hex, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any]] + ) yield result # output any subkeys under Count diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index 7a55079a9e..775ffc7322 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -40,17 +40,17 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] def run(self): return renderers.TreeGrid([("String", str), ("Physical Address", format_hints.Hex), ("Result", str)], self._generator())
def _generator(self) -> Generator[Tuple, None, None]: """Generates results from a strings file.""" - string_list = [] # type: List[Tuple[int,bytes]] + string_list: List[Tuple[int,bytes]] = [] # Test strings file format is accurate accessor = resources.ResourceAccessor() strings_fp = accessor.open(self.config['strings_file'], "rb") line = strings_fp.readline() - count = 0 # type: float + count: float = 0 while line: count += 1 try: @@ -66,7 +66,8 @@ def _generator(self) -> Generator[Tuple, None, None]: progress_callback = self._progress_callback, pid_list = self.config['pid']) - last_prog = line_count = 0 # type: float + last_prog: float = 0 + line_count: float = 0 num_strings = len(string_list) for offset, string in string_list: line_count += 1 @@ -119,7 +120,7 @@ def generate_mapping(cls, filter = pslist.PsList.create_pid_filter(pid_list) layer = context.layers[layer_name] - reverse_map = dict() # type: Dict[int, Set[Tuple[str, int]]] + reverse_map: Dict[int, Set[Tuple[str, int]]] = dict() if isinstance(layer, intel.Intel): # We don't care about errors, we just wanted chunks that map correctly for mapval in layer.mapping(0x0, layer.maximum_address, ignore_errors = True): diff --git a/volatility3/framework/plugins/windows/virtmap.py b/volatility3/framework/plugins/windows/virtmap.py index 552564cad3..238b0df197 100644 --- a/volatility3/framework/plugins/windows/virtmap.py +++ b/volatility3/framework/plugins/windows/virtmap.py @@ -41,7 +41,7 @@ def determine_map(cls, module: interfaces.context.ModuleInterface) -> \ if not isinstance(layer, intel.Intel): raise - result = {} # type: Dict[str, List[Tuple[int, int]]] + result: Dict[str, List[Tuple[int, int]]] = {} system_va_type = module.get_enumeration('_MI_SYSTEM_VA_TYPE') large_page_size = (layer.page_size ** 2) // module.get_type("_MMPTE").size @@ -84,7 +84,7 @@ def determine_map(cls, module: interfaces.context.ModuleInterface) -> \ def _enumerate_system_va_type(cls, large_page_size:
int, system_range_start: int, module: interfaces.context.ModuleInterface, type_array: interfaces.objects.ObjectInterface) -> Dict[str, List[Tuple[int, int]]]: - result = {} # type: Dict[str, List[Tuple[int, int]]] + result: Dict[str, List[Tuple[int, int]]] = {} system_va_type = module.get_enumeration('_MI_SYSTEM_VA_TYPE') start = system_range_start prev_entry = -1 diff --git a/volatility3/framework/renderers/__init__.py b/volatility3/framework/renderers/__init__.py index 5b1013574b..23e686a07d 100644 --- a/volatility3/framework/renderers/__init__.py +++ b/volatility3/framework/renderers/__init__.py @@ -158,8 +158,8 @@ def __init__(self, columns: List[Tuple[str, interfaces.renderers.BaseTypes]], """ self._populated = False self._row_count = 0 - self._children = [] # type: List[interfaces.renderers.TreeNode] - converted_columns = [] # type: List[interfaces.renderers.Column] + self._children: List[interfaces.renderers.TreeNode] = [] + converted_columns: List[interfaces.renderers.Column] = [] if len(columns) < 1: raise ValueError("Columns must be a list containing at least one column") for (name, column_type) in columns: @@ -207,7 +207,7 @@ def function(_x: interfaces.renderers.TreeNode, _y: Any) -> Any: if not self.populated: try: - prev_nodes = [] # type: List[interfaces.renderers.TreeNode] + prev_nodes: List[interfaces.renderers.TreeNode] = [] for (level, item) in self._generator: parent_index = min(len(prev_nodes), level) parent = prev_nodes[parent_index - 1] if parent_index > 0 else None diff --git a/volatility3/framework/renderers/conversion.py b/volatility3/framework/renderers/conversion.py index b60b47411c..5737abef51 100644 --- a/volatility3/framework/renderers/conversion.py +++ b/volatility3/framework/renderers/conversion.py @@ -24,7 +24,7 @@ def wintime_to_datetime(wintime: int) -> Union[interfaces.renderers.BaseAbsentVa def unixtime_to_datetime(unixtime: int) -> Union[interfaces.renderers.BaseAbsentValue, datetime.datetime]: - ret = 
renderers.UnparsableValue() # type: Union[interfaces.renderers.BaseAbsentValue, datetime.datetime] + ret: Union[interfaces.renderers.BaseAbsentValue, datetime.datetime] = renderers.UnparsableValue() if unixtime > 0: try: diff --git a/volatility3/framework/renderers/format_hints.py b/volatility3/framework/renderers/format_hints.py index b169f44c95..486e164b30 100644 --- a/volatility3/framework/renderers/format_hints.py +++ b/volatility3/framework/renderers/format_hints.py @@ -46,7 +46,7 @@ def __init__(self, encoding: str = 'utf-16-le', split_nulls: bool = False, show_hex: bool = False) -> None: - self.converted_int = False # type: bool + self.converted_int: bool = False if isinstance(original, int): self.converted_int = True self.encoding = encoding diff --git a/volatility3/framework/symbols/__init__.py b/volatility3/framework/symbols/__init__.py index 6a8ab8246f..3d0e01803c 100644 --- a/volatility3/framework/symbols/__init__.py +++ b/volatility3/framework/symbols/__init__.py @@ -31,15 +31,15 @@ class SymbolSpace(interfaces.symbols.SymbolSpaceInterface): def __init__(self) -> None: super().__init__() - self._dict = collections.OrderedDict() # type: Dict[str, interfaces.symbols.BaseSymbolTableInterface] + self._dict: Dict[str, interfaces.symbols.BaseSymbolTableInterface] = collections.OrderedDict() # Permanently cache all resolved symbols - self._resolved = {} # type: Dict[str, interfaces.objects.Template] - self._resolved_symbols = {} # type: Dict[str, interfaces.objects.Template] + self._resolved: Dict[str, interfaces.objects.Template] = {} + self._resolved_symbols: Dict[str, interfaces.objects.Template] = {} def clear_symbol_cache(self, table_name: str = None) -> None: """Clears the symbol cache for the specified table name. 
If no table name is specified, the caches of all symbol tables are cleared.""" - table_list = list() # type: List[interfaces.symbols.BaseSymbolTableInterface] + table_list: List[interfaces.symbols.BaseSymbolTableInterface] = list() if table_name is None: table_list = list(self._dict.values()) else: diff --git a/volatility3/framework/symbols/intermed.py b/volatility3/framework/symbols/intermed.py index b20760d6b7..a6e76ae7a6 100644 --- a/volatility3/framework/symbols/intermed.py +++ b/volatility3/framework/symbols/intermed.py @@ -282,8 +282,8 @@ def __init__(self, raise TypeError("Native table not provided") nt.name = name + "_natives" super().__init__(context, config_path, name, nt, table_mapping = table_mapping) - self._overrides = {} # type: Dict[str, Type[interfaces.objects.ObjectInterface]] - self._symbol_cache = {} # type: Dict[str, interfaces.symbols.SymbolInterface] + self._overrides: Dict[str, Type[interfaces.objects.ObjectInterface]] = {} + self._symbol_cache: Dict[str, interfaces.symbols.SymbolInterface] = {} def _get_natives(self) -> Optional[interfaces.symbols.NativeTableInterface]: """Determines the appropriate native_types to use from the JSON diff --git a/volatility3/framework/symbols/linux/__init__.py b/volatility3/framework/symbols/linux/__init__.py index cd0495a2cb..6d09bbe465 100644 --- a/volatility3/framework/symbols/linux/__init__.py +++ b/volatility3/framework/symbols/linux/__init__.py @@ -46,7 +46,7 @@ class LinuxUtilities(interfaces.configuration.VersionableInterface): @classmethod def _do_get_path(cls, rdentry, rmnt, dentry, vfsmnt) -> str: - ret_path = [] # type: List[str] + ret_path: List[str] = [] while dentry != rdentry or vfsmnt != rmnt: dname = dentry.path() diff --git a/volatility3/framework/symbols/mac/__init__.py b/volatility3/framework/symbols/mac/__init__.py index 241b4ffba9..152f0bbc23 100644 --- a/volatility3/framework/symbols/mac/__init__.py +++ b/volatility3/framework/symbols/mac/__init__.py @@ -160,7 +160,7 @@ def 
_walk_iterable(cls, list_next_member: str, next_member: str, max_elements: int = 4096) -> Iterable[interfaces.objects.ObjectInterface]: - seen = set() # type: Set[int] + seen: Set[int] = set() try: current = queue.member(attr = list_head_member) diff --git a/volatility3/framework/symbols/mac/extensions/__init__.py b/volatility3/framework/symbols/mac/extensions/__init__.py index 538c4101e2..b0d75b93b3 100644 --- a/volatility3/framework/symbols/mac/extensions/__init__.py +++ b/volatility3/framework/symbols/mac/extensions/__init__.py @@ -48,7 +48,7 @@ def get_map_iter(self) -> Iterable[interfaces.objects.ObjectInterface]: except exceptions.InvalidAddressException: return - seen = set() # type: Set[int] + seen: Set[int] = set() for i in range(task.map.hdr.nentries): if not current_map or current_map.vol.offset in seen: diff --git a/volatility3/framework/symbols/native.py b/volatility3/framework/symbols/native.py index c53ff6f16d..d9833e26dc 100644 --- a/volatility3/framework/symbols/native.py +++ b/volatility3/framework/symbols/native.py @@ -15,7 +15,7 @@ class NativeTable(interfaces.symbols.NativeTableInterface): def __init__(self, name: str, native_dictionary: Dict[str, Any]) -> None: super().__init__(name, self) self._native_dictionary = copy.deepcopy(native_dictionary) - self._overrides = {} # type: Dict[str, interfaces.objects.ObjectInterface] + self._overrides: Dict[str, interfaces.objects.ObjectInterface] = {} for native_type in self._native_dictionary: native_class, _native_struct = self._native_dictionary[native_type] self._overrides[native_type] = native_class @@ -49,8 +49,8 @@ def get_type(self, type_name: str) -> interfaces.objects.Template: table_name, type_name = name_split prefix = table_name + constants.BANG - additional = {} # type: Dict[str, Any] - obj = None # type: Optional[Type[interfaces.objects.ObjectInterface]] + additional: Dict[str, Any] = {} + obj: Optional[Type[interfaces.objects.ObjectInterface]] = None if type_name == 'void' or type_name 
== 'function': obj = objects.Void elif type_name == 'array': diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index 5b057ff629..13e07cb3b6 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -388,7 +388,7 @@ def is_valid(self) -> bool: self.FileName.Buffer) def file_name_with_device(self) -> Union[str, interfaces.renderers.BaseAbsentValue]: - name = renderers.UnreadableValue() # type: Union[str, interfaces.renderers.BaseAbsentValue] + name: Union[str, interfaces.renderers.BaseAbsentValue] = renderers.UnreadableValue() # this pointer needs to be checked against native_layer_name because the object may # be instantiated from a primary (virtual) layer or a memory (physical) layer. diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index 59213aaff0..d470797b5b 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -214,7 +214,7 @@ def is_nonpaged_pool(self): class POOL_TRACKER_BIG_PAGES(objects.StructType): """A kernel big page pool tracker.""" - pool_type_lookup = {} # type: Dict[str, str] + pool_type_lookup: Dict[str, str] = {} def _generate_pool_type_lookup(self): # Enumeration._generate_inverse_choices() raises ValueError because multiple enum names map to the same diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index e530fcbd7b..b582fc299d 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -265,18 +265,18 @@ def __init__(self, database_name: Optional[str] = None, progress_callback: constants.ProgressCallback = None) -> None: self._layer_name, self._context = self.load_pdb_layer(context, location) - 
self._dbiheader = None # type: Optional[interfaces.objects.ObjectInterface] + self._dbiheader: Optional[interfaces.objects.ObjectInterface] = None if not progress_callback: progress_callback = lambda x, y: None self._progress_callback = progress_callback - self.types = [ - ] # type: List[Tuple[interfaces.objects.ObjectInterface, Optional[str], interfaces.objects.ObjectInterface]] - self.bases = {} # type: Dict[str, Any] - self.user_types = {} # type: Dict[str, Any] - self.enumerations = {} # type: Dict[str, Any] - self.symbols = {} # type: Dict[str, Any] - self._omap_mapping = [] # type: List[Tuple[int, int]] - self._sections = [] # type: List[interfaces.objects.ObjectInterface] + self.types: List[Tuple[interfaces.objects.ObjectInterface, Optional[str], interfaces.objects.ObjectInterface]] = [ + ] + self.bases: Dict[str, Any] = {} + self.user_types: Dict[str, Any] = {} + self.enumerations: Dict[str, Any] = {} + self.symbols: Dict[str, Any] = {} + self._omap_mapping: List[Tuple[int, int]] = [] + self._sections: List[interfaces.objects.ObjectInterface] = [] self.metadata = {"format": "6.1.0", "windows": {}} self._database_name = database_name @@ -381,7 +381,7 @@ def _read_info_stream(self, stream_number, stream_name, info_list): raise ValueError("Maximum {} index is smaller than minimum TPI index, found: {} < {} ".format( stream_name, header.index_max, header.index_min)) # Reset the state - info_references = {} # type: Dict[str, int] + info_references: Dict[str, int] = {} offset = header.header_size # Ensure we use the same type everywhere length_type = "unsigned short" @@ -586,7 +586,7 @@ def get_type_from_index(self, index: int) -> Union[List[Any], Dict[str, Any]]: if index < 0x1000: base_name, base = primatives[index & 0xff] self.bases[base_name] = base - result = {"kind": "base", "name": base_name} # type: Union[List[Dict[str, Any]], Dict[str, Any]] + result: Union[List[Dict[str, Any]], Dict[str, Any]] = {"kind": "base", "name": base_name} indirection = (index & 
0xf00) if indirection: pointer_name, pointer_base = indirections[indirection] @@ -639,7 +639,7 @@ def get_size_from_index(self, index: int) -> int: """Returns the size of the structure based on the type index provided.""" result = -1 - name = '' # type: Optional[str] + name: Optional[str] = '' if index < 0x1000: if (index & 0xf00): _, base = indirections[index & 0xf00] @@ -837,7 +837,7 @@ def consume_padding(self, layer_name: str, offset: int) -> int: def convert_fields(self, fields: int) -> Dict[Optional[str], Dict[str, Any]]: """Converts a field list into a list of fields.""" - result = {} # type: Dict[Optional[str], Dict[str, Any]] + result: Dict[Optional[str], Dict[str, Any]] = {} _, _, fields_struct = self.types[fields] if not isinstance(fields_struct, list): vollog.warning("Fields structure did not contain a list of fields") diff --git a/volatility3/schemas/__init__.py b/volatility3/schemas/__init__.py index 9340b29f47..5fbed7e099 100644 --- a/volatility3/schemas/__init__.py +++ b/volatility3/schemas/__init__.py @@ -18,7 +18,7 @@ def load_cached_validations() -> Set[str]: """Loads up the list of successfully cached json objects, so we don't need to revalidate them.""" - validhashes = set() # type: Set + validhashes: Set = set() if os.path.exists(cached_validation_filepath): with open(cached_validation_filepath, "r") as f: validhashes.update(json.load(f)) From 59ab92e99de932554ecdbba4cce57c72641a4a4f Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 18 Jul 2021 23:20:27 +0100 Subject: [PATCH 168/294] Development: Update development scripts with f-strings --- development/compare-vol.py | 12 ++++++------ development/pdbparse-to-json.py | 12 ++++++------ development/schema_validate.py | 10 +++++----- development/stock-linux-json.py | 16 ++++++++-------- 4 files changed, 25 insertions(+), 25 deletions(-) diff --git a/development/compare-vol.py b/development/compare-vol.py index 4e4f01b138..83acc6de9e 100644 --- a/development/compare-vol.py +++ 
b/development/compare-vol.py @@ -46,7 +46,7 @@ def create_results(self, plugin: VolatilityPlugin, image: VolatilityImage, image self.create_prerequisites(plugin, image, image_hash) # Volatility 2 Test - print("[*] Testing {} {} with image {}".format(self.short_name, plugin.name, image.filepath)) + print(f"[*] Testing {self.short_name} {plugin.name} with image {image.filepath}") os.chdir(self.path) cmd = self.plugin_cmd(plugin, image) start_time = time.perf_counter() @@ -56,9 +56,9 @@ def create_results(self, plugin: VolatilityPlugin, image: VolatilityImage, image completed = excp end_time = time.perf_counter() total_time = end_time - start_time - print(" Tested {} {} with image {}: {}".format(self.short_name, plugin.name, image.filepath, total_time)) + print(f" Tested {self.short_name} {plugin.name} with image {image.filepath}: {total_time}") with open( - os.path.join(self.output_directory, '{}_{}_{}_stdout'.format(self.short_name, plugin.name, image_hash)), + os.path.join(self.output_directory, f'{self.short_name}_{plugin.name}_{image_hash}_stdout'), "wb") as f: f.write(completed.stdout) if completed.stderr: @@ -91,15 +91,15 @@ def create_results(self, plugin: VolatilityPlugin, image: VolatilityImage, image def create_prerequisites(self, plugin: VolatilityPlugin, image: VolatilityImage, image_hash): # Volatility 2 image info if not image.vol2_profile: - print("[*] Testing {} imageinfo with image {}".format(self.short_name, image.filepath)) + print(f"[*] Testing {self.short_name} imageinfo with image {image.filepath}") os.chdir(self.path) cmd = ["python2", "-u", "vol.py", "-f", image.filepath, "imageinfo"] start_time = time.perf_counter() vol2_completed = subprocess.run(cmd, cwd = self.path, capture_output = True) end_time = time.perf_counter() image.vol2_imageinfo_time = end_time - start_time - print(" Tested volatility2 imageinfo with image {}: {}".format(image.filepath, end_time - start_time)) - with open(os.path.join(self.output_directory, 
'vol2_imageinfo_{}_stdout'.format(image_hash)), "wb") as f: + print(f" Tested volatility2 imageinfo with image {image.filepath}: {end_time - start_time}") + with open(os.path.join(self.output_directory, f'vol2_imageinfo_{image_hash}_stdout'), "wb") as f: f.write(vol2_completed.stdout) image.vol2_profile = re.search(b"Suggested Profile\(s\) : ([^,]+)", vol2_completed.stdout)[1] diff --git a/development/pdbparse-to-json.py b/development/pdbparse-to-json.py index 88c8427fb0..0e1186ebf7 100644 --- a/development/pdbparse-to-json.py +++ b/development/pdbparse-to-json.py @@ -27,17 +27,17 @@ def retreive_pdb(self, guid: str, file_name: str) -> Optional[str]: logger.info("Download PDB file...") file_name = ".".join(file_name.split(".")[:-1] + ['pdb']) for sym_url in ['http://msdl.microsoft.com/download/symbols']: - url = sym_url + "/{}/{}/".format(file_name, guid) + url = sym_url + f"/{file_name}/{guid}/" result = None for suffix in [file_name[:-1] + '_', file_name]: try: - logger.debug("Attempting to retrieve {}".format(url + suffix)) + logger.debug(f"Attempting to retrieve {url + suffix}") result, _ = request.urlretrieve(url + suffix) except request.HTTPError as excp: - logger.debug("Failed with {}".format(excp)) + logger.debug(f"Failed with {excp}") if result: - logger.debug("Successfully written to {}".format(result)) + logger.debug(f"Successfully written to {result}") break return result @@ -257,7 +257,7 @@ def _determine_size(self, field): if output is None: import pdb pdb.set_trace() - raise ValueError("Unknown size for field: {}".format(field.name)) + raise ValueError(f"Unknown size for field: {field.name}") return output def _format_kind(self, kind): @@ -355,6 +355,6 @@ def read_basetypes(self) -> Dict: json.dump(convertor.read_pdb(), f, indent = 2, sort_keys = True) if args.keep: - print("Temporary PDB file: {}".format(filename)) + print(f"Temporary PDB file: {filename}") elif delfile: os.remove(filename) diff --git a/development/schema_validate.py
b/development/schema_validate.py index ece56cdec8..0908e934f3 100644 --- a/development/schema_validate.py +++ b/development/schema_validate.py @@ -35,7 +35,7 @@ for filename in args.filenames: try: if os.path.exists(filename): - print("[?] Validating file: {}".format(filename)) + print(f"[?] Validating file: {filename}") with open(filename, 'r') as t: test = json.load(t) @@ -45,14 +45,14 @@ result = schemas.validate(test, False) if result: - print("[+] Validation successful: {}".format(filename)) + print(f"[+] Validation successful: {filename}") else: - print("[-] Validation failed: {}".format(filename)) + print(f"[-] Validation failed: {filename}") failures.append(filename) else: - print("[x] File not found: {}".format(filename)) + print(f"[x] File not found: {filename}") except Exception as e: failures.append(filename) - print("[x] Exception occurred: {} ({})".format(filename, repr(e))) + print(f"[x] Exception occurred: {filename} ({repr(e)})") print("Failures", failures) diff --git a/development/stock-linux-json.py b/development/stock-linux-json.py index 51bcc7ecd1..c863d41e49 100644 --- a/development/stock-linux-json.py +++ b/development/stock-linux-json.py @@ -30,7 +30,7 @@ def download_lists(self, keep = False): def download_list(self, urls: List[str]) -> Dict[str, str]: processed_files = {} for url in urls: - print(" - Downloading {}".format(url)) + print(f" - Downloading {url}") data = requests.get(url) with tempfile.NamedTemporaryFile() as archivedata: archivedata.write(data.content) @@ -48,14 +48,14 @@ def process_rpm(self, archivedata) -> Optional[str]: extracted = None for member in rpm.getmembers(): if 'vmlinux' in member.name or 'System.map' in member.name: - print(" - Extracting {}".format(member.name)) + print(f" - Extracting {member.name}") extracted = rpm.extractfile(member) break if not member or not extracted: return None with tempfile.NamedTemporaryFile(delete = False, prefix = 'vmlinux' if 'vmlinux' in member.name else 'System.map') as output:
- print(" - Writing to {}".format(output.name)) + print(f" - Writing to {output.name}") output.write(extracted.read()) return output.name @@ -65,14 +65,14 @@ def process_deb(self, archivedata) -> Optional[str]: extracted = None for member in deb.data.tgz().getmembers(): if member.name.endswith('vmlinux') or 'System.map' in member.name: - print(" - Extracting {}".format(member.name)) + print(f" - Extracting {member.name}") extracted = deb.data.get_file(member.name) break if not member or not extracted: return None with tempfile.NamedTemporaryFile(delete = False, prefix = 'vmlinux' if 'vmlinux' in member.name else 'System.map') as output: - print(" - Writing to {}".format(output.name)) + print(f" - Writing to {output.name}") output.write(extracted.read()) return output.name @@ -81,7 +81,7 @@ def process_files(self, named_files: Dict[str, str]): print("Processing Files...") for i in named_files: if named_files[i] is None: - print("FAILURE: None encountered for {}".format(i)) + print(f"FAILURE: None encountered for {i}") return args = [DWARF2JSON, 'linux'] output_filename = 'unknown-kernel.json' @@ -91,10 +91,10 @@ def process_files(self, named_files: Dict[str, str]): prefix = '--elf' output_filename = './' + '-'.join((named_file.split('/')[-1]).split('-')[2:])[:-4] + '.json.xz' args += [prefix, named_files[named_file]] - print(" - Running {}".format(args)) + print(f" - Running {args}") proc = subprocess.run(args, capture_output = True) - print(" - Writing to {}".format(output_filename)) + print(f" - Writing to {output_filename}") with lzma.open(output_filename, 'w') as f: f.write(proc.stdout) From 767eea614b968406faba25e02bf6d3c192ee6843 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 19 Jul 2021 12:31:13 +0100 Subject: [PATCH 169/294] Windows: Add extra checks to pdbscan --- volatility3/framework/automagic/pdbscan.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py index 
b32010506c..474146ad1e 100644 --- a/volatility3/framework/automagic/pdbscan.py +++ b/volatility3/framework/automagic/pdbscan.py @@ -138,6 +138,10 @@ def method_slow_scan(self, progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: def test_virtual_kernel(physical_layer_name, virtual_layer_name, kernel): + # It seems the kernel is loaded at a fixed mapping (presumably because the memory manager hasn't started yet) + if kernel['mz_offset'] is None or not isinstance(kernel['mz_offset'], int): + # Rule out kernels that couldn't find a suitable MZ header + return None return (virtual_layer_name, kernel['mz_offset'], kernel) vollog.debug("Kernel base determination - slow scan virtual layer") From 787f61d5a7ff53330fd0ba8d75b5a32b9bf7282d Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 21:30:08 +0100 Subject: [PATCH 170/294] Documentation: Update minimum python version. --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index c04188ced7..373d85d418 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ the Volatility Software License (VSL). See the [LICENSE](LICENSE.txt) file for m ## Requirements -- Python 3.5.3 or later. +- Python 3.6.0 or later. - Pefile 2017.8.1 or later. 
## Optional Dependencies From 66c96644ac4d94844083f4c739fb954e64185abd Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 21:36:37 +0100 Subject: [PATCH 171/294] Core: Minor flynt and typing fixes --- development/compare-vol.py | 4 ++-- development/pdbparse-to-json.py | 6 +++--- volatility3/cli/volargparse.py | 2 +- volatility3/framework/automagic/windows.py | 2 +- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/development/compare-vol.py b/development/compare-vol.py index 83acc6de9e..d0d8340389 100644 --- a/development/compare-vol.py +++ b/development/compare-vol.py @@ -63,8 +63,8 @@ def create_results(self, plugin: VolatilityPlugin, image: VolatilityImage, image f.write(completed.stdout) if completed.stderr: with open( - os.path.join(self.output_directory, '{}_{}_{}_stderr'.format(self.short_name, plugin.name, - image_hash)), "wb") as f: + os.path.join(self.output_directory, f'{self.short_name}_{plugin.name}_{image_hash}_stderr'), + "wb") as f: f.write(completed.stderr) return [total_time] diff --git a/development/pdbparse-to-json.py b/development/pdbparse-to-json.py index 0e1186ebf7..819e44e15d 100644 --- a/development/pdbparse-to-json.py +++ b/development/pdbparse-to-json.py @@ -116,7 +116,7 @@ def __init__(self, filename: str): self._filename = filename logger.info("Parsing PDB...") self._pdb = pdbparse.parse(filename) - self._seen_ctypes = set([]) # type: Set[str] + self._seen_ctypes: Set[str] = set([]) def lookup_ctype(self, ctype: str) -> str: self._seen_ctypes.add(ctype) @@ -169,7 +169,7 @@ def generate_metadata(self) -> Dict[str, Any]: def read_enums(self) -> Dict: """Reads the Enumerations from the PDB file""" logger.info("Reading enums...") - output = {} # type: Dict[str, Any] + output: Dict[str, Any] = {} stream = self._pdb.STREAM_TPI for type_index in stream.types: user_type = stream.types[type_index] @@ -231,7 +231,7 @@ def read_usertypes(self) -> Dict: def _format_usertype(self, usertype, kind) -> Dict: """Produces a 
single usertype""" - fields = {} # type: Dict[str, Dict[str, Any]] + fields: Dict[str, Dict[str, Any]] = {} [fields.update(self._format_field(s)) for s in usertype.fieldlist.substructs] return {usertype.name: {'fields': fields, 'kind': kind, 'size': usertype.size}} diff --git a/volatility3/cli/volargparse.py b/volatility3/cli/volargparse.py index 996acc1506..8ba807fee1 100644 --- a/volatility3/cli/volargparse.py +++ b/volatility3/cli/volargparse.py @@ -31,7 +31,7 @@ def __call__(self, option_string: Optional[str] = None) -> None: parser_name = '' - arg_strings = [] # type: List[str] + arg_strings: List[str] = [] if values is not None: for value in values: if not parser_name: diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 9f98bebfe6..147602facf 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -53,7 +53,7 @@ def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_re self.ptr_size = struct.calcsize(ptr_struct) self.ptr_reference = ptr_reference self.mask = mask - self.page_size = layer_type.page_size # type: int + self.page_size: int = layer_type.page_size def _unpack(self, value: bytes) -> int: return struct.unpack("<" + self.ptr_struct, value)[0] From c993c35152a71a58db5d2199813e3cf8a09b5f9c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:23:32 +0100 Subject: [PATCH 172/294] Core: Further python 3.6 fixes --- volatility3/framework/symbols/__init__.py | 2 +- volatility3/framework/symbols/windows/extensions/__init__.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/symbols/__init__.py b/volatility3/framework/symbols/__init__.py index 3d0e01803c..d6ecb5393a 100644 --- a/volatility3/framework/symbols/__init__.py +++ b/volatility3/framework/symbols/__init__.py @@ -65,7 +65,7 @@ def get_symbols_by_type(self, type_name: str) -> Iterable[str]: def
get_symbols_by_location(self, offset: int, size: int = 0, table_name: str = None) -> Iterable[str]: """Returns all symbols that exist at a specific relative address.""" - table_list = self._dict.values() # type: Iterable[interfaces.symbols.BaseSymbolTableInterface] + table_list: Iterable[interfaces.symbols.BaseSymbolTableInterface] = self._dict.values() if table_name is not None: if table_name in self._dict: table_list = [self._dict[table_name]] diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index 13e07cb3b6..25e48f78c2 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -526,7 +526,7 @@ def add_process_layer(self, config_prefix: str = None, preferred_name: str = Non raise TypeError("Parent layer is not a translation layer, unable to construct process layer") # Presumably for 64-bit systems, the DTB is defined as an array, rather than an unsigned long long - dtb = 0 # type: int + dtb: int = 0 if isinstance(self.Pcb.DirectoryTableBase, objects.Array): dtb = self.Pcb.DirectoryTableBase.cast("unsigned long long") else: From d2420899c577e349541e8fbcfcbe486301d8ef63 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:37:45 +0100 Subject: [PATCH 173/294] Layers: Fix minor pointless typo --- volatility3/framework/layers/linear.py | 1 - 1 file changed, 1 deletion(-) diff --git a/volatility3/framework/layers/linear.py b/volatility3/framework/layers/linear.py index d94f7bcc0e..c5cb47bdc2 100644 --- a/volatility3/framework/layers/linear.py +++ b/volatility3/framework/layers/linear.py @@ -34,7 +34,6 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes: length size.""" current_offset = offset output: List[bytes] = [] - output: List[bytes] = [] for (offset, _, mapped_offset, mapped_length, layer) in self.mapping(offset, length, ignore_errors = pad): if not pad and 
offset > current_offset: raise exceptions.InvalidAddressException( From 213eeda26846895246ec29ef904bf5e574e74d2a Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:15:11 +0100 Subject: [PATCH 174/294] Core: Add module collection support --- API_CHANGES.md | 0 .../framework/configuration/requirements.py | 68 +++++++++++ volatility3/framework/constants/__init__.py | 6 +- volatility3/framework/contexts/__init__.py | 73 ++++++++---- .../framework/interfaces/configuration.py | 7 +- volatility3/framework/interfaces/context.py | 109 +++++++++++++++--- volatility3/framework/interfaces/layers.py | 2 + volatility3/framework/interfaces/plugins.py | 3 - volatility3/framework/symbols/intermed.py | 5 + .../framework/symbols/linux/__init__.py | 30 +++-- volatility3/framework/symbols/mac/__init__.py | 13 ++- .../framework/symbols/windows/pdbutil.py | 1 + 12 files changed, 258 insertions(+), 59 deletions(-) create mode 100644 API_CHANGES.md diff --git a/API_CHANGES.md b/API_CHANGES.md new file mode 100644 index 0000000000..e69de29bb2 diff --git a/volatility3/framework/configuration/requirements.py b/volatility3/framework/configuration/requirements.py index e0f8bb2d89..a0fb186aec 100644 --- a/volatility3/framework/configuration/requirements.py +++ b/volatility3/framework/configuration/requirements.py @@ -429,3 +429,71 @@ def __init__(self, optional = optional, component = plugin, version = version) + + +class ModuleRequirement(interfaces.configuration.ConstructableRequirementInterface, + interfaces.configuration.ConfigurableRequirementInterface): + + def __init__(self, name: str, description: str = None, default: bool = False, + architectures: Optional[List[str]] = None, optional: bool = False): + super().__init__(name = name, description = description, default = default, optional = optional) + self.add_requirement(TranslationLayerRequirement(name = 'layer_name', architectures = architectures)) + self.add_requirement(SymbolTableRequirement(name = 
'symbol_table_name')) + + @classmethod + def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: + return [ + IntRequirement(name = 'offset'), + ] + + def unsatisfied(self, context: 'interfaces.context.ContextInterface', + config_path: str) -> Dict[str, interfaces.configuration.RequirementInterface]: + """Validate that the value is a valid module""" + config_path = interfaces.configuration.path_join(config_path, self.name) + value = self.config_value(context, config_path, None) + if isinstance(value, str): + if value not in context.modules: + vollog.log(constants.LOGLEVEL_V, f"IndexError - Module not found in context: {value}") + return {config_path: self} + return {} + + if value is not None: + vollog.log(constants.LOGLEVEL_V, + "TypeError - Module Requirement only accepts string labels: {}".format(repr(value))) + return {config_path: self} + + ### NOTE: This validate method has side effects (the dependencies can change)!!! + + self._validate_class(context, interfaces.configuration.parent_path(config_path)) + vollog.log(constants.LOGLEVEL_V, f"IndexError - No configuration provided: {config_path}") + return {config_path: self} + + def construct(self, context: interfaces.context.ContextInterface, config_path: str) -> None: + """Constructs the appropriate layer and adds it based on the class parameter.""" + config_path = interfaces.configuration.path_join(config_path, self.name) + + # Determine the layer name + name = self.name + counter = 2 + while name in context.modules: + name = self.name + str(counter) + counter += 1 + + args = {"context": context, "config_path": config_path, "name": name} + + if any( + [subreq.unsatisfied(context, config_path) for subreq in self.requirements.values() if not subreq.optional]): + return None + + obj = self._construct_class(context, config_path, args) + if obj is not None and isinstance(obj, interfaces.context.ModuleInterface): + context.add_module(obj) + # This should already be done by the 
_construct_class method + # context.config[config_path] = obj.name + return None + + def build_configuration(self, context: 'interfaces.context.ContextInterface', _: str, + value: Any) -> interfaces.configuration.HierarchicalDict: + """Builds the appropriate configuration for the specified + requirement.""" + return context.modules[value].build_configuration() diff --git a/volatility3/framework/constants/__init__.py b/volatility3/framework/constants/__init__.py index a43a55501b..cfe0356c1c 100644 --- a/volatility3/framework/constants/__init__.py +++ b/volatility3/framework/constants/__init__.py @@ -39,10 +39,12 @@ # We use the SemVer 2.0.0 versioning scheme VERSION_MAJOR = 1 # Number of releases of the library with a breaking change -VERSION_MINOR = 1 # Number of changes that only add to the interface -VERSION_PATCH = 1 # Number of changes that do not change the interface +VERSION_MINOR = 2 # Number of changes that only add to the interface +VERSION_PATCH = 0 # Number of changes that do not change the interface VERSION_SUFFIX = "" +# TODO: At version 2.0.0, remove the symbol_shift feature + PACKAGE_VERSION = ".".join([str(x) for x in [VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH]]) + VERSION_SUFFIX """The canonical version of the volatility3 package""" diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py index 412dfbc508..783a484219 100644 --- a/volatility3/framework/contexts/__init__.py +++ b/volatility3/framework/contexts/__init__.py @@ -10,11 +10,14 @@ """ import functools import hashlib -from typing import Callable, Dict, Iterable, List, Optional, Set, Tuple, Union +import logging +from typing import Callable, Iterable, List, Optional, Set, Tuple, Union from volatility3.framework import constants, interfaces, symbols, exceptions from volatility3.framework.objects import templates +vollog = logging.getLogger(__name__) + class Context(interfaces.context.ContextInterface): """Maintains the context within which to 
construct objects. @@ -33,6 +36,7 @@ def __init__(self) -> None: """Initializes the context.""" super().__init__() self._symbol_space = symbols.SymbolSpace() + self._module_space = ModuleCollection() self._memory = interfaces.layers.LayerContainer() self._config = interfaces.configuration.HierarchicalDict() @@ -50,6 +54,11 @@ def config(self, value: interfaces.configuration.HierarchicalDict) -> None: raise TypeError("Config must be of type HierarchicalDict") self._config = value + @property + def modules(self) -> interfaces.context.ModuleContainer: + """A container for modules loaded in this context""" + return self._module_space + @property def symbol_space(self) -> interfaces.symbols.SymbolSpaceInterface: """The space of all symbols that can be accessed within this @@ -153,7 +162,9 @@ def get_module_wrapper(method: str) -> Callable: def wrapper(self, name: str) -> Callable: if constants.BANG not in name: - name = self._module_name + constants.BANG + name + name = self.symbol_table_name + constants.BANG + name + elif name.startswith(self.symbol_table_name + constants.BANG): + pass else: raise ValueError(f"Cannot reference another module when calling {method}") return getattr(self._context.symbol_space, method)(name) @@ -245,6 +256,20 @@ def object_from_symbol(self, native_layer_name = native_layer_name or self._native_layer_name, **kwargs) + def get_symbols_by_absolute_location(self, offset: int, size: int = 0) -> List[str]: + """Returns the symbols within this module that live at the specified + absolute offset provided.""" + if size < 0: + raise ValueError("Size must be strictly non-negative") + return list( + self._context.symbol_space.get_symbols_by_location(offset = offset - self._offset, + size = size, + table_name = self.symbol_table_name)) + + @property + def symbols(self): + return self.context.symbol_space[self.symbol_table_name].symbols + get_symbol = get_module_wrapper('get_symbol') get_type = get_module_wrapper('get_type') get_enumeration = 
get_module_wrapper('get_enumeration') @@ -294,22 +319,17 @@ def hash(self) -> str: def get_symbols_by_absolute_location(self, offset: int, size: int = 0) -> List[str]: """Returns the symbols within this module that live at the specified absolute offset provided.""" - if size < 0: - raise ValueError("Size must be strictly non-negative") if offset > self._offset + self.size: return [] - return list( - self._context.symbol_space.get_symbols_by_location(offset = offset - self._offset, - size = size, - table_name = self.symbol_table_name)) + return super().get_symbols_by_absolute_location(offset, size) -class ModuleCollection: +class ModuleCollection(interfaces.context.ModuleContainer): """Class to contain a collection of SizedModules and reason about their contents.""" - def __init__(self, modules: List[SizedModule]) -> None: - self._modules = modules + def __init__(self, modules: Optional[List[interfaces.context.ModuleInterface]] = None) -> None: + super().__init__(modules) def deduplicate(self) -> 'ModuleCollection': """Returns a new deduplicated ModuleCollection featuring no repeated @@ -327,19 +347,12 @@ def deduplicate(self) -> 'ModuleCollection': return ModuleCollection(new_modules) @property - def modules(self) -> Dict[str, List[SizedModule]]: + def modules(self) -> 'ModuleCollection': """A name indexed dictionary of modules using that name in this collection.""" - return self._generate_module_dict(self._modules) - - @classmethod - def _generate_module_dict(cls, modules: List[SizedModule]) -> Dict[str, List[SizedModule]]: - result: Dict[str, List[SizedModule]] = {} - for module in modules: - modlist = result.get(module.name, []) - modlist.append(module) - result[module.name] = modlist - return result + vollog.warning( + "This method has been deprecated in favour of the ModuleCollection acting as a dictionary itself") + return self def get_module_symbols_by_absolute_location(self, offset: int, size: int = 0) -> Iterable[Tuple[str, List[str]]]: """Returns a tuple 
of (module_name, list_of_symbol_names) for each @@ -348,5 +361,16 @@ def get_module_symbols_by_absolute_location(self, offset: int, size: int = 0) -> if size < 0: raise ValueError("Size must be strictly non-negative") for module in self._modules: - if (offset <= module.offset + module.size) and (offset + size >= module.offset): - yield (module.name, module.get_symbols_by_absolute_location(offset, size)) + if isinstance(module, SizedModule): + if (offset <= module.offset + module.size) and (offset + size >= module.offset): + yield (module.name, module.get_symbols_by_absolute_location(offset, size)) + + +class ConfigurableModule(Module, interfaces.configuration.ConfigurableInterface): + + def __init__(self, context: interfaces.context.ContextInterface, config_path: str, name: str) -> None: + interfaces.configuration.ConfigurableInterface.__init__(self, context, config_path) + layer_name = self.config['layer_name'] + offset = self.config['offset'] + symbol_table_name = self.config['symbol_table_name'] + Module.__init__(self, context, name, layer_name, offset, symbol_table_name, layer_name) diff --git a/volatility3/framework/interfaces/configuration.py b/volatility3/framework/interfaces/configuration.py index d52d6a1768..6331e66836 100644 --- a/volatility3/framework/interfaces/configuration.py +++ b/volatility3/framework/interfaces/configuration.py @@ -25,7 +25,7 @@ from abc import ABCMeta, abstractmethod from typing import Any, ClassVar, Dict, Generator, Iterator, List, Optional, Type, Union, Tuple, Set -from volatility3 import classproperty +from volatility3 import classproperty, framework from volatility3.framework import constants, interfaces CONFIG_SEPARATOR = "."
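[Editorial aside] The `ModuleCollection` reworked in this patch becomes a read-only mapping keyed by module name. A minimal standalone sketch of that mapping behaviour, assuming nothing beyond the standard library (`MiniModuleContainer` and `FakeModule` are illustrative stand-ins, not the volatility3 API):

```python
import collections.abc


class FakeModule:
    """Illustrative stand-in for interfaces.context.ModuleInterface."""

    def __init__(self, name, symbol_table_name):
        self.name = name
        self.symbol_table_name = symbol_table_name


class MiniModuleContainer(collections.abc.Mapping):
    """Sketch of a dict-like module container keyed by module name."""

    def __init__(self, modules=None):
        self._modules = {}
        for module in modules or []:
            self.add_module(module)

    def add_module(self, module):
        # Adding two modules with the same name is an error
        if module.name in self._modules:
            raise ValueError(f"Module already exists: {module.name}")
        self._modules[module.name] = module

    def __getitem__(self, name):
        return self._modules[name]

    def __len__(self):
        return len(self._modules)

    def __iter__(self):
        return iter(self._modules)

    def get_modules_by_symbol_tables(self, symbol_table):
        """Yields the names of modules that use the given symbol table."""
        for name, module in self._modules.items():
            if module.symbol_table_name == symbol_table:
                yield name


modules = MiniModuleContainer([FakeModule("kernel", "ntkrnlmp")])
assert "kernel" in modules
assert list(modules.get_modules_by_symbol_tables("ntkrnlmp")) == ["kernel"]
```

Because the container subclasses `collections.abc.Mapping`, membership tests, iteration and `len()` come for free from the three abstract methods.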
@@ -730,6 +730,11 @@ class VersionableInterface: All version numbers should use semantic versioning """ _version: Tuple[int, int, int] = (0, 0, 0) + _required_framework_version: Tuple[int, int, int] = (0, 0, 0) + + def __init__(self, *args, **kwargs): + framework.require_interface_version(*self._required_framework_version) + super().__init__(*args, **kwargs) @classproperty def version(cls) -> Tuple[int, int, int]: diff --git a/volatility3/framework/interfaces/context.py b/volatility3/framework/interfaces/context.py index 4e81d8e53a..6b69c9f80c 100644 --- a/volatility3/framework/interfaces/context.py +++ b/volatility3/framework/interfaces/context.py @@ -11,11 +11,12 @@ `object`, which will construct a symbol on a layer at a particular offset. """ +import collections import copy from abc import ABCMeta, abstractmethod -from typing import Optional, Union +from typing import Optional, Union, Dict, List, Iterable -from volatility3.framework import interfaces +from volatility3.framework import interfaces, exceptions class ContextInterface(metaclass = ABCMeta): @@ -44,6 +45,24 @@ def symbol_space(self) -> 'interfaces.symbols.SymbolSpaceInterface': # ## Memory Functions + @property + @abstractmethod + def modules(self) -> 'ModuleContainer': + """Returns the container of modules loaded in the context.""" + raise NotImplementedError("ModuleContainer has not been implemented.") + + def add_module(self, module: 'interfaces.context.ModuleInterface'): + """Adds a named module to the context.
+ + Args: + module: The module to be added to the module object collection + + Raises: + volatility3.framework.exceptions.VolatilityException: if the module is already present, or has + unmet dependencies + """ + self.modules.add_module(module) + @property @abstractmethod def layers(self) -> 'interfaces.layers.LayerContainer': @@ -134,7 +153,7 @@ def __init__(self, Args: context: The context within which this module will exist - module_name: The name of the module + name: The name of the module layer_name: The layer within the context in which the module exists offset: The offset at which the module exists in the layer symbol_table_name: The name of an associated symbol table @@ -143,14 +162,9 @@ def __init__(self, self._context = context self._module_name = module_name self._layer_name = layer_name - if not isinstance(offset, int): - raise TypeError(f"Module offset must be an int not {type(offset)}") self._offset = offset - self._native_layer_name = None - if native_layer_name: - self._native_layer_name = native_layer_name - self.symbol_table_name = symbol_table_name or self._module_name - super().__init__() + self._native_layer_name = native_layer_name or layer_name + self._symbol_table_name = symbol_table_name or self._module_name @property def name(self) -> str: @@ -173,6 +187,11 @@ def context(self) -> ContextInterface: """Context that the module uses.""" return self._context + @property + def symbol_table_name(self) -> str: + """The name of the symbol table associated with this module""" + return self._symbol_table_name + @abstractmethod def object(self, object_type: str, @@ -211,20 +230,78 @@ def object_from_symbol(self, The constructed object """ + def get_absolute_symbol_address(self, name: str) -> int: + """Returns the absolute address of the symbol within this module""" + symbol = self.get_symbol(name) + return self.offset + symbol.address + def get_type(self, name: str) -> 'interfaces.objects.Template': - """Returns a type from the module.""" + 
"""Returns a type from the module's symbol table.""" def get_symbol(self, name: str) -> 'interfaces.symbols.SymbolInterface': - """Returns a symbol from the module.""" + """Returns a symbol object from the module's symbol table.""" def get_enumeration(self, name: str) -> 'interfaces.objects.Template': - """Returns an enumeration from the module.""" + """Returns an enumeration from the module's symbol table.""" def has_type(self, name: str) -> bool: - """Determines whether a type is present in the module.""" + """Determines whether a type is present in the module's symbol table.""" def has_symbol(self, name: str) -> bool: - """Determines whether a symbol is present in the module.""" + """Determines whether a symbol is present in the module's symbol table.""" def has_enumeration(self, name: str) -> bool: - """Determines whether an enumeration is present in the module.""" + """Determines whether an enumeration is present in the module's symbol table.""" + + def symbols(self) -> List: + """Lists the symbols contained in the symbol table for this module""" + + def get_symbols_by_absolute_location(self, offset: int, size: int = 0) -> List[str]: + """Returns the symbols within table_name (or this module if not specified) that live at the specified + absolute offset provided.""" + + +class ModuleContainer(collections.abc.Mapping): + """Container for multiple layers of data.""" + + def __init__(self, modules: Optional[List[ModuleInterface]] = None) -> None: + self._modules: Dict[str, ModuleInterface] = {} + if modules is not None: + for module in modules: + self.add_module(module) + + def __eq__(self, other): + return dict(self) == dict(other) + + def add_module(self, module: ModuleInterface) -> None: + """Adds a module to the module collection + + This will throw an exception if the required dependencies are not met + + Args: + module: the module to add to the list of modules (based on module.name) + """ + if module.name in self._modules: + raise 
exceptions.VolatilityException(f"Module already exists: {module.name}") + self._modules[module.name] = module + + def __delitem__(self, name: str) -> None: + """Removes a module from the module list""" + del self._modules[name] + + def __getitem__(self, name: str) -> ModuleInterface: + """Returns the layer of specified name.""" + return self._modules[name] + + def __len__(self) -> int: + return len(self._modules) + + def __iter__(self): + return iter(self._modules) + + def get_modules_by_symbol_tables(self, symbol_table: str) -> Iterable[str]: + """Returns the modules which use the specified symbol table name""" + for module_name in self._modules: + module = self._modules[module_name] + if module.symbol_table_name == symbol_table: + yield module_name diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index c7c4feb8cd..28c452f900 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -54,6 +54,8 @@ class ScannerInterface(interfaces.configuration.VersionableInterface, metaclass """ thread_safe = False + _required_framework_version = (1, 0, 0) + def __init__(self) -> None: super().__init__() self.chunk_size = 0x1000000 # Default to 16Mb chunks diff --git a/volatility3/framework/interfaces/plugins.py b/volatility3/framework/interfaces/plugins.py index d091f27cb7..983232cf0f 100644 --- a/volatility3/framework/interfaces/plugins.py +++ b/volatility3/framework/interfaces/plugins.py @@ -14,7 +14,6 @@ from abc import ABCMeta, abstractmethod from typing import List, Tuple, Type -from volatility3 import framework from volatility3.framework import exceptions, constants, interfaces vollog = logging.getLogger(__name__) @@ -123,8 +122,6 @@ def __init__(self, self._file_handler: Type[FileHandlerInterface] = FileHandlerInterface - framework.require_interface_version(*self._required_framework_version) - @property def open(self): """Returns a context manager and thus can be called 
like open""" diff --git a/volatility3/framework/symbols/intermed.py b/volatility3/framework/symbols/intermed.py index a6e76ae7a6..a3b5a582a0 100644 --- a/volatility3/framework/symbols/intermed.py +++ b/volatility3/framework/symbols/intermed.py @@ -135,6 +135,11 @@ def __init__(self, # Since we've been created with parameters, ensure our config is populated likewise self.config['isf_url'] = isf_url + + if symbol_shift: + vollog.warning( + "Symbol_shift support has been deprecated and will be removed in the next major release of Volatility 3" + ) self.config['symbol_shift'] = symbol_shift self.config['symbol_mask'] = symbol_mask diff --git a/volatility3/framework/symbols/linux/__init__.py b/volatility3/framework/symbols/linux/__init__.py index 6d09bbe465..4657304111 100644 --- a/volatility3/framework/symbols/linux/__init__.py +++ b/volatility3/framework/symbols/linux/__init__.py @@ -3,11 +3,11 @@ # from typing import List, Tuple, Iterator -from volatility3.framework import exceptions, constants, interfaces, objects, contexts +from volatility3 import framework +from volatility3.framework import exceptions, constants, interfaces, objects from volatility3.framework.objects import utility from volatility3.framework.symbols import intermed from volatility3.framework.symbols.linux import extensions -from volatility3.framework.objects import utility class LinuxKernelIntermedSymbols(intermed.IntermediateSymbolTable): @@ -40,7 +40,10 @@ def __init__(self, *args, **kwargs) -> None: class LinuxUtilities(interfaces.configuration.VersionableInterface): """Class with multiple useful linux functions.""" - _version = (1, 0, 0) + _version = (2, 0, 0) + _required_framework_version = (1, 2, 0) + + framework.require_interface_version(*_required_framework_version) # based on __d_path from the Linux kernel @classmethod @@ -114,7 +117,13 @@ def _get_new_sock_pipe_path(cls, context, task, filp) -> str: if len(symbol_table_arr) == 2: symbol_table = symbol_table_arr[0] - symbs = 
list(context.symbol_space.get_symbols_by_location(sym_addr, table_name = symbol_table)) + for module_name in context.modules.get_modules_by_symbol_tables(symbol_table): + kernel_module = context.modules[module_name] + break + else: + raise ValueError(f"No module using the symbol table {symbol_table}") + + symbs = list(kernel_module.get_symbols_by_absolute_location(sym_addr)) if len(symbs) == 1: sym = symbs[0].split(constants.BANG)[1] @@ -207,15 +216,15 @@ def mask_mods_list(cls, context: interfaces.context.ContextInterface, layer_name @classmethod def generate_kernel_handler_info( - cls, context: interfaces.context.ContextInterface, layer_name: str, kernel_name: str, + cls, context: interfaces.context.ContextInterface, kernel_module_name: str, mods_list: Iterator[interfaces.objects.ObjectInterface]) -> List[Tuple[str, int, int]]: """ A helper function that gets the beginning and end address of the kernel module """ - kernel = contexts.Module(context, kernel_name, layer_name, 0) + kernel = context.modules[kernel_module_name] - mask = context.layers[layer_name].address_mask + mask = context.layers[kernel.layer_name].address_mask start_addr = kernel.object_from_symbol("_text") start_addr = start_addr.vol.offset & mask @@ -224,10 +233,11 @@ def generate_kernel_handler_info( end_addr = end_addr.vol.offset & mask return [(constants.linux.KERNEL_NAME, start_addr, end_addr)] + \ - LinuxUtilities.mask_mods_list(context, layer_name, mods_list) + LinuxUtilities.mask_mods_list(context, kernel.layer_name, mods_list) @classmethod - def lookup_module_address(cls, context: interfaces.context.ContextInterface, handlers: List[Tuple[str, int, int]], + def lookup_module_address(cls, kernel_module: interfaces.context.ModuleInterface, + handlers: List[Tuple[str, int, int]], target_address: int): """ Searches between the start and end address of the kernel module using target_address. 
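[Editorial aside] The lookup changes in this hunk replace global symbol-space queries with module-relative ones: a module loaded at some base `offset` stores symbols at module-relative addresses, so an absolute address must have the base subtracted before the symbol table is consulted. A minimal sketch of that translation, assuming a plain dictionary symbol table (`MiniModule` is a hypothetical stand-in, not the volatility3 `Module` class):

```python
class MiniModule:
    """Illustrative stand-in for a loaded module with a symbol table."""

    def __init__(self, name, offset, symbols):
        self.name = name
        self.offset = offset      # absolute load address of the module
        self._symbols = symbols   # module-relative address -> symbol name

    def get_symbols_by_absolute_location(self, address):
        # Translate the absolute address back into the module-relative
        # address space before querying the symbol table
        relative = address - self.offset
        return [sym for rel, sym in self._symbols.items() if rel == relative]


kernel = MiniModule("kernel", 0xFFFFFFFF81000000, {0x1000: "kernel!_text"})
assert kernel.get_symbols_by_absolute_location(0xFFFFFFFF81001000) == ["kernel!_text"]
assert kernel.get_symbols_by_absolute_location(0x1000) == []
```

Keeping the subtraction inside the module means callers such as `lookup_module_address` no longer need to know each module's load address.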
@@ -241,7 +251,7 @@ def lookup_module_address(cls, context: interfaces.context.ContextInterface, han if start <= target_address <= end: mod_name = name if name == constants.linux.KERNEL_NAME: - symbols = list(context.symbol_space.get_symbols_by_location(target_address)) + symbols = list(kernel_module.get_symbols_by_absolute_location(target_address)) if len(symbols): symbol_name = symbols[0].split(constants.BANG)[1] if constants.BANG in symbols[0] else \ diff --git a/volatility3/framework/symbols/mac/__init__.py b/volatility3/framework/symbols/mac/__init__.py index 152f0bbc23..e84712828f 100644 --- a/volatility3/framework/symbols/mac/__init__.py +++ b/volatility3/framework/symbols/mac/__init__.py @@ -35,8 +35,10 @@ class MacUtilities(interfaces.configuration.VersionableInterface): Version History: 1.1.0 -> added walk_list_head API 1.2.0 -> added walk_slist API + 1.3.0 -> add parameter to lookup_module_address to pass kernel module name """ - _version = (1, 2, 0) + _version = (1, 3, 0) + _required_framework_version = (1, 2, 0) @classmethod def mask_mods_list(cls, context: interfaces.context.ContextInterface, layer_name: str, @@ -77,15 +79,20 @@ def generate_kernel_handler_info( @classmethod def lookup_module_address(cls, context: interfaces.context.ContextInterface, handlers: Iterator[Any], - target_address): + target_address, kernel_module_name: str = None): mod_name = "UNKNOWN" symbol_name = "N/A" + module_shift = 0 + if kernel_module_name: + module = context.modules[kernel_module_name] + module_shift = module.offset + for name, start, end in handlers: if start <= target_address <= end: mod_name = name if name == "__kernel__": - symbols = list(context.symbol_space.get_symbols_by_location(target_address)) + symbols = list(context.symbol_space.get_symbols_by_location(target_address - module_shift)) if len(symbols) > 0: symbol_name = str(symbols[0].split(constants.BANG)[1]) if constants.BANG in symbols[0] else \ diff --git 
a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 19ce551a70..9f2cc14629 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -25,6 +25,7 @@ class PDBUtility(interfaces.configuration.VersionableInterface): """Class to handle and manage all getting symbols based on MZ header""" _version = (1, 0, 0) + _required_framework_version = (1, 0, 0) @classmethod def symbol_table_from_offset( From 31dc6f34c95f4ead75b6e0f9724fc4ae176a7ba2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:35:38 +0100 Subject: [PATCH 175/294] Automagic: Make symbol_shift automagic change --- volatility3/framework/automagic/__init__.py | 8 ++-- volatility3/framework/automagic/linux.py | 3 +- volatility3/framework/automagic/mac.py | 3 +- volatility3/framework/automagic/module.py | 48 +++++++++++++++++++++ 4 files changed, 57 insertions(+), 5 deletions(-) create mode 100644 volatility3/framework/automagic/module.py diff --git a/volatility3/framework/automagic/__init__.py b/volatility3/framework/automagic/__init__.py index a10b526d28..626126c8c5 100644 --- a/volatility3/framework/automagic/__init__.py +++ b/volatility3/framework/automagic/__init__.py @@ -21,11 +21,13 @@ vollog = logging.getLogger(__name__) -windows_automagic = ['ConstructionMagic', 'LayerStacker', 'WintelHelper', 'KernelPDBScanner', 'WinSwapLayers'] +windows_automagic = [ + 'ConstructionMagic', 'LayerStacker', 'WintelHelper', 'KernelPDBScanner', 'WinSwapLayers', 'KernelModule' +] -linux_automagic = ['ConstructionMagic', 'LayerStacker', 'LinuxBannerCache', 'LinuxSymbolFinder'] +linux_automagic = ['ConstructionMagic', 'LayerStacker', 'LinuxBannerCache', 'LinuxSymbolFinder', 'KernelModule'] -mac_automagic = ['ConstructionMagic', 'LayerStacker', 'MacBannerCache', 'MacSymbolFinder'] +mac_automagic = ['ConstructionMagic', 'LayerStacker', 'MacBannerCache', 'MacSymbolFinder', 'KernelModule'] def 
available(context: interfaces.context.ContextInterface) -> List[interfaces.automagic.AutomagicInterface]: diff --git a/volatility3/framework/automagic/linux.py b/volatility3/framework/automagic/linux.py index 767dd34070..f9fa22c071 100644 --- a/volatility3/framework/automagic/linux.py +++ b/volatility3/framework/automagic/linux.py @@ -79,7 +79,8 @@ def stack(cls, layer = layer_class(context, config_path = config_path, name = new_layer_name, - metadata = {'kaslr_value': aslr_shift, 'os': 'Linux'}) + metadata = {'os': 'Linux'}) + layer.config['kernel_virtual_offset'] = aslr_shift if layer and dtb: vollog.debug(f"DTB was found at: 0x{dtb:0x}") diff --git a/volatility3/framework/automagic/mac.py b/volatility3/framework/automagic/mac.py index 07c995b360..fb725a2346 100644 --- a/volatility3/framework/automagic/mac.py +++ b/volatility3/framework/automagic/mac.py @@ -105,7 +105,8 @@ def stack(cls, new_layer = intel.Intel32e(context, config_path = config_path, name = new_layer_name, - metadata = {'kaslr_value': kaslr_shift}) + metadata = {'os': 'mac'}) + new_layer.config['kernel_virtual_offset'] = kaslr_shift if new_layer and dtb: vollog.debug(f"DTB was found at: 0x{dtb:0x}") diff --git a/volatility3/framework/automagic/module.py b/volatility3/framework/automagic/module.py new file mode 100644 index 0000000000..315164ec7d --- /dev/null +++ b/volatility3/framework/automagic/module.py @@ -0,0 +1,48 @@ +from volatility3.framework import interfaces, constants, configuration + + +class KernelModule(interfaces.automagic.AutomagicInterface): + """Finds ModuleRequirements and ensures their layer, symbols and offsets""" + + priority = 100 + + def __call__(self, + context: interfaces.context.ContextInterface, + config_path: str, + requirement: interfaces.configuration.RequirementInterface, + progress_callback: constants.ProgressCallback = None) -> None: + new_config_path = interfaces.configuration.path_join(config_path, requirement.name) + if not isinstance(requirement, 
configuration.requirements.ModuleRequirement): + # Check subrequirements + for req in requirement.requirements: + self(context, new_config_path, requirement.requirements[req], progress_callback) + return + if not requirement.unsatisfied(context, config_path): + return + # The requirement is unfulfilled and is a ModuleRequirement + + context.config[interfaces.configuration.path_join( + new_config_path, 'class')] = 'volatility3.framework.contexts.ConfigurableModule' + + for req in requirement.requirements: + if requirement.requirements[req].unsatisfied(context, new_config_path) and req != 'offset': + return + + # We now just have the offset requirement, but the layer requirement has been fulfilled. + # Unfortunately we don't know the layer name requirement's exact name + + for req in requirement.requirements: + if isinstance(requirement.requirements[req], configuration.requirements.TranslationLayerRequirement): + layer_kvo_config_path = interfaces.configuration.path_join(new_config_path, req, + 'kernel_virtual_offset') + offset_config_path = interfaces.configuration.path_join(new_config_path, 'offset') + offset = context.config[layer_kvo_config_path] + context.config[offset_config_path] = offset + elif isinstance(requirement.requirements[req], configuration.requirements.SymbolTableRequirement): + symbol_shift_config_path = interfaces.configuration.path_join(new_config_path, + req, + 'symbol_shift') + context.config[symbol_shift_config_path] = 0 + + # Now construct the module based on the sub-requirements + requirement.construct(context, config_path) From abbd516f993b3aeec113740f9f33927560f914d0 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:36:33 +0100 Subject: [PATCH 176/294] Layers: Add minimum framework requirements --- volatility3/framework/layers/scanners/__init__.py | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/layers/scanners/__init__.py b/volatility3/framework/layers/scanners/__init__.py 
index 3407b57840..acf7267ffb 100644 --- a/volatility3/framework/layers/scanners/__init__.py +++ b/volatility3/framework/layers/scanners/__init__.py @@ -2,7 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import re -from typing import Generator, List, Tuple, Dict, Union, Optional +from typing import Generator, List, Tuple, Dict, Optional from volatility3.framework.interfaces import layers from volatility3.framework.layers.scanners import multiregexp @@ -11,6 +11,8 @@ class BytesScanner(layers.ScannerInterface): thread_safe = True + _required_framework_version = (1, 0, 0) + def __init__(self, needle: bytes) -> None: super().__init__() self.needle = needle @@ -30,6 +32,8 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]: class RegExScanner(layers.ScannerInterface): thread_safe = True + _required_framework_version = (1, 0, 0) + def __init__(self, pattern: bytes, flags: int = 0) -> None: super().__init__() self.regex = re.compile(pattern, flags) @@ -43,9 +47,12 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]: if offset < self.chunk_size: yield offset + data_offset + class MultiStringScanner(layers.ScannerInterface): thread_safe = True + _required_framework_version = (1, 0, 0) + def __init__(self, patterns: List[bytes]) -> None: super().__init__() self._pattern_trie: Optional[Dict[int, Optional[Dict]]] = {} From 1480aca4538739b45b9d792f8ce7886170f807bd Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:40:09 +0100 Subject: [PATCH 177/294] Linux: Update all plugins to ModuleRequirement --- volatility3/framework/plugins/linux/bash.py | 13 +++---- .../framework/plugins/linux/check_afinfo.py | 12 +++---- .../framework/plugins/linux/check_creds.py | 17 ++++----- .../framework/plugins/linux/check_idt.py | 29 +++++++-------- .../framework/plugins/linux/check_modules.py | 30 ++++++++-------- .../framework/plugins/linux/check_syscall.py | 35 
++++++++++--------- volatility3/framework/plugins/linux/elfs.py | 10 ++---- .../plugins/linux/keyboard_notifiers.py | 24 ++++++------- volatility3/framework/plugins/linux/lsmod.py | 18 ++++------ volatility3/framework/plugins/linux/lsof.py | 14 ++++---- .../framework/plugins/linux/malfind.py | 13 +++---- volatility3/framework/plugins/linux/proc.py | 10 ++---- volatility3/framework/plugins/linux/pslist.py | 21 ++++------- volatility3/framework/plugins/linux/pstree.py | 5 ++- .../framework/plugins/linux/tty_check.py | 26 ++++++-------- 15 files changed, 116 insertions(+), 161 deletions(-) diff --git a/volatility3/framework/plugins/linux/bash.py b/volatility3/framework/plugins/linux/bash.py index 3e8ac58903..471f4cbe77 100644 --- a/volatility3/framework/plugins/linux/bash.py +++ b/volatility3/framework/plugins/linux/bash.py @@ -21,16 +21,13 @@ class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface): """Recovers bash command history from memory.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', element_type = int, description = "Process IDs to include (all other processes are excluded)", @@ -38,7 +35,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] ] def _generator(self, tasks): - is_32bit = not 
symbols.symbol_table_is_64bit(self.context, self.config["vmlinux"]) + is_32bit = not symbols.symbol_table_is_64bit(self.context, self.config["vmlinux.symbol_table_name"]) if is_32bit: pack_format = "I" bash_json_file = "bash32" @@ -93,7 +90,6 @@ def run(self): ("Command", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = filter_func))) @@ -102,7 +98,6 @@ def generate_timeline(self): for row in self._generator( pslist.PsList.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = filter_func)): _depth, row_data = row diff --git a/volatility3/framework/plugins/linux/check_afinfo.py b/volatility3/framework/plugins/linux/check_afinfo.py index 29e6975408..9105247be5 100644 --- a/volatility3/framework/plugins/linux/check_afinfo.py +++ b/volatility3/framework/plugins/linux/check_afinfo.py @@ -6,7 +6,7 @@ import logging from typing import List -from volatility3.framework import exceptions, interfaces, contexts +from volatility3.framework import exceptions, interfaces from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins @@ -18,15 +18,12 @@ class Check_afinfo(plugins.PluginInterface): """Verifies the operation function pointers of network protocols.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols") + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), ] # returns whether the symbol is found within the kernel (system.map) or not @@ -63,7 +60,8 @@ 
def _check_afinfo(self, var_name, var, op_members, seq_members): yield var_name, "show", var.seq_show def _generator(self): - vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) + + vmlinux = self.context.modules[self.config['vmlinux']] op_members = vmlinux.get_type('file_operations').members seq_members = vmlinux.get_type('seq_operations').members diff --git a/volatility3/framework/plugins/linux/check_creds.py b/volatility3/framework/plugins/linux/check_creds.py index 20e3d26fb5..28f3d178b9 100644 --- a/volatility3/framework/plugins/linux/check_creds.py +++ b/volatility3/framework/plugins/linux/check_creds.py @@ -4,7 +4,7 @@ import logging -from volatility3.framework import interfaces, renderers, constants +from volatility3.framework import interfaces, renderers from volatility3.framework.configuration import requirements from volatility3.plugins.linux import pslist @@ -14,22 +14,19 @@ class Check_creds(interfaces.plugins.PluginInterface): """Checks if any processes are sharing credential structures""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)) + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)) ] def _generator(self): - # vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) + vmlinux = self.context.modules[self.config['vmlinux']] - type_task = self.context.symbol_space.get_type(self.config['vmlinux'] + constants.BANG + 
"task_struct") + type_task = vmlinux.get_type("task_struct") if not type_task.has_member("cred"): raise TypeError( @@ -40,7 +37,7 @@ def _generator(self): creds = {} - tasks = pslist.PsList.list_tasks(self.context, self.config['primary'], self.config['vmlinux']) + tasks = pslist.PsList.list_tasks(self.context, vmlinux.name) for task in tasks: diff --git a/volatility3/framework/plugins/linux/check_idt.py b/volatility3/framework/plugins/linux/check_idt.py index f171ab8464..016717841c 100644 --- a/volatility3/framework/plugins/linux/check_idt.py +++ b/volatility3/framework/plugins/linux/check_idt.py @@ -5,7 +5,7 @@ import logging from typing import List -from volatility3.framework import interfaces, renderers, contexts, symbols +from volatility3.framework import interfaces, renderers, symbols from volatility3.framework.configuration import requirements from volatility3.framework.renderers import format_hints from volatility3.framework.symbols import linux @@ -17,32 +17,28 @@ class Check_idt(interfaces.plugins.PluginInterface): """ Checks if the IDT has been altered """ - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, 
version = (2, 0, 0)) ] def _generator(self): - vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) + vmlinux = self.context.modules[self.config['vmlinux']] - modules = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['vmlinux']) + modules = lsmod.Lsmod.list_modules(self.context, vmlinux.name) - handlers = linux.LinuxUtilities.generate_kernel_handler_info(self.context, self.config['primary'], - self.config['vmlinux'], modules) + handlers = linux.LinuxUtilities.generate_kernel_handler_info(self.context, vmlinux.name, modules) - is_32bit = not symbols.symbol_table_is_64bit(self.context, self.config["vmlinux"]) + is_32bit = not symbols.symbol_table_is_64bit(self.context, vmlinux.symbol_table_name) idt_table_size = 256 - address_mask = self.context.layers[self.config['primary']].address_mask + address_mask = self.context.layers[vmlinux.layer_name].address_mask # hw handlers + system call check_idxs = list(range(0, 20)) + [128] @@ -65,7 +61,8 @@ def _generator(self): table = vmlinux.object(object_type = 'array', offset = addrs.vol.offset, subtype = vmlinux.get_type(idt_type), - count = idt_table_size) + count = idt_table_size, + absolute = True) for i in check_idxs: ent = table[i] @@ -88,7 +85,7 @@ def _generator(self): idt_addr = idt_addr & address_mask - module_name, symbol_name = linux.LinuxUtilities.lookup_module_address(self.context, handlers, idt_addr) + module_name, symbol_name = linux.LinuxUtilities.lookup_module_address(vmlinux, handlers, idt_addr) yield (0, [format_hints.Hex(i), format_hints.Hex(idt_addr), module_name, symbol_name]) diff --git a/volatility3/framework/plugins/linux/check_modules.py b/volatility3/framework/plugins/linux/check_modules.py index 449046e866..362dce6923 100644 --- a/volatility3/framework/plugins/linux/check_modules.py +++ b/volatility3/framework/plugins/linux/check_modules.py @@ -5,7 +5,7 @@ import logging from typing import List -from volatility3.framework import 
interfaces, renderers, exceptions, constants, contexts +from volatility3.framework import interfaces, renderers, exceptions, constants from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -18,19 +18,19 @@ class Check_modules(plugins.PluginInterface): """Compares module list to sysfs info, if available""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] - def get_kset_modules(self, vmlinux): + @classmethod + def get_kset_modules(self, context: interfaces.context.ContextInterface, vmlinux_name: str): + + vmlinux = context.modules[vmlinux_name] try: module_kset = vmlinux.object_from_symbol("module_kset") @@ -44,12 +44,12 @@ def get_kset_modules(self, vmlinux): ret = {} - kobj_off = self.context.symbol_space.get_type(self.config['vmlinux'] + constants.BANG + - 'module_kobject').relative_child_offset('kobj') + kobj_off = vmlinux.get_type('module_kobject').relative_child_offset('kobj') - for kobj in module_kset.list.to_list(vmlinux.name + constants.BANG + "kobject", "entry"): + for kobj in module_kset.list.to_list(vmlinux.symbol_table_name + constants.BANG + "kobject", "entry"): - mod_kobj = vmlinux.object(object_type = "module_kobject", offset = kobj.vol.offset - kobj_off) + mod_kobj = 
vmlinux.object(object_type = "module_kobject", offset = kobj.vol.offset - kobj_off, + absolute = True) mod = mod_kobj.mod @@ -60,13 +60,11 @@ def get_kset_modules(self, vmlinux): return ret def _generator(self): - vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) - - kset_modules = self.get_kset_modules(vmlinux) + kset_modules = self.get_kset_modules(self.context, self.config['vmlinux']) lsmod_modules = set( str(utility.array_to_string(modules.name)) - for modules in lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['vmlinux'])) + for modules in lsmod.Lsmod.list_modules(self.context, self.config['vmlinux'])) for mod_name in set(kset_modules.keys()).difference(lsmod_modules): yield (0, (format_hints.Hex(kset_modules[mod_name]), str(mod_name))) diff --git a/volatility3/framework/plugins/linux/check_syscall.py b/volatility3/framework/plugins/linux/check_syscall.py index b845ad4aab..3acd2877af 100644 --- a/volatility3/framework/plugins/linux/check_syscall.py +++ b/volatility3/framework/plugins/linux/check_syscall.py @@ -6,7 +6,7 @@ import logging from typing import List -from volatility3.framework import exceptions, interfaces, contexts +from volatility3.framework import exceptions, interfaces from volatility3.framework import renderers, constants from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins @@ -25,24 +25,26 @@ class Check_syscall(plugins.PluginInterface): """Check system call table for hooks.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols") + 
requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), ] def _get_table_size_next_symbol(self, table_addr, ptr_sz, vmlinux): """Returns the size of the table based on the next symbol.""" ret = 0 - sym_table = self.context.symbol_space[vmlinux.name] - - sorted_symbols = sorted([(sym_table.get_symbol(sn).address, sn) for sn in sym_table.symbols]) + symbol_list = [] + for sn in vmlinux.symbols: + try: + # When requesting the symbol from the module, a full resolve is performed + symbol_list.append((vmlinux.get_symbol(sn).address, sn)) + except exceptions.SymbolError: + pass + sorted_symbols = sorted(symbol_list) sym_address = 0 @@ -62,7 +64,8 @@ def _get_table_size_meta(self, vmlinux): accurate.""" return len( - [sym for sym in self.context.symbol_space[vmlinux.name].symbols if sym.startswith("__syscall_meta__")]) + [sym for sym in self.context.symbol_space[vmlinux.symbol_table_name].symbols if + sym.startswith("__syscall_meta__")]) def _get_table_info_other(self, table_addr, ptr_sz, vmlinux): table_size_meta = self._get_table_size_meta(vmlinux) @@ -93,12 +96,12 @@ def _get_table_info_disassembly(self, ptr_sz, vmlinux): md = capstone.Cs(capstone.CS_ARCH_X86, mode) try: - func_addr = self.context.symbol_space.get_symbol(vmlinux.name + constants.BANG + syscall_entry_func).address + func_addr = vmlinux.get_symbol(syscall_entry_func).address except exceptions.SymbolError as e: # if we can't find the disassemble function then bail and rely on a different method return 0 - data = self.context.layers.read(self.config['primary'], func_addr, 6) + data = self.context.layers.read(self.config['vmlinux.layer_name'], func_addr, 6) for (address, size, mnemonic, op_str) in md.disasm_lite(data, func_addr): if mnemonic == 'CMP': @@ -108,7 +111,7 @@ def _get_table_info_disassembly(self, ptr_sz, vmlinux): return table_size def _get_table_info(self, vmlinux, table_name, ptr_sz): - table_sym = self.context.symbol_space.get_symbol(vmlinux.name + 
constants.BANG + table_name) + table_sym = vmlinux.get_symbol(table_name) table_size = self._get_table_info_disassembly(ptr_sz, vmlinux) @@ -123,7 +126,7 @@ def _get_table_info(self, vmlinux, table_name, ptr_sz): # TODO - add finding and parsing unistd.h once cached file enumeration is added def _generator(self): - vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) + vmlinux = self.context.modules[self.config['vmlinux']] ptr_sz = vmlinux.get_type("pointer").size if ptr_sz == 4: @@ -143,7 +146,7 @@ def _generator(self): # enabled in order to support 32 bit programs and libraries # if the symbol isn't there then the support isn't in the kernel and so we skip it try: - ia32_symbol = self.context.symbol_space.get_symbol(vmlinux.name + constants.BANG + "ia32_sys_call_table") + ia32_symbol = vmlinux.get_symbol("ia32_sys_call_table") except exceptions.SymbolError: ia32_symbol = None @@ -161,7 +164,7 @@ def _generator(self): if not call_addr: continue - symbols = list(self.context.symbol_space.get_symbols_by_location(call_addr)) + symbols = list(vmlinux.get_symbols_by_absolute_location(call_addr)) if len(symbols) > 0: sym_name = str(symbols[0].split(constants.BANG)[1]) if constants.BANG in symbols[0] else \ diff --git a/volatility3/framework/plugins/linux/elfs.py b/volatility3/framework/plugins/linux/elfs.py index 89909f4ff5..3fcb017cd6 100644 --- a/volatility3/framework/plugins/linux/elfs.py +++ b/volatility3/framework/plugins/linux/elfs.py @@ -17,16 +17,13 @@ class Elfs(plugins.PluginInterface): """Lists all memory mapped ELF files for all processes.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = 
"vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -59,6 +56,5 @@ def run(self): ("End", format_hints.Hex), ("File Path", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/keyboard_notifiers.py b/volatility3/framework/plugins/linux/keyboard_notifiers.py index 290c0a180b..012632bb64 100644 --- a/volatility3/framework/plugins/linux/keyboard_notifiers.py +++ b/volatility3/framework/plugins/linux/keyboard_notifiers.py @@ -4,7 +4,7 @@ import logging -from volatility3.framework import interfaces, renderers, contexts, exceptions +from volatility3.framework import interfaces, renderers, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.renderers import format_hints from volatility3.framework.symbols import linux @@ -16,26 +16,22 @@ class Keyboard_notifiers(interfaces.plugins.PluginInterface): """Parses the keyboard notifier call chain""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)), - requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = 
(1, 0, 0)) + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), + requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)) ] def _generator(self): - vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) + vmlinux = self.context.modules[self.config['vmlinux']] - modules = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['vmlinux']) + modules = lsmod.Lsmod.list_modules(self.context, vmlinux.name) - handlers = linux.LinuxUtilities.generate_kernel_handler_info(self.context, self.config['primary'], - self.config['vmlinux'], modules) + handlers = linux.LinuxUtilities.generate_kernel_handler_info(self.context, vmlinux.name, modules) try: knl_addr = vmlinux.object_from_symbol("keyboard_notifier_list") @@ -49,12 +45,12 @@ def _generator(self): "This means you are either analyzing an unsupported kernel version or that your symbol table is corrupt." 
) - knl = vmlinux.object(object_type = "atomic_notifier_head", offset = knl_addr.vol.offset) + knl = vmlinux.object(object_type = "atomic_notifier_head", offset = knl_addr.vol.offset, absolute = True) for call_back in linux.LinuxUtilities.walk_internal_list(vmlinux, "notifier_block", "next", knl.head): call_addr = call_back.notifier_call - module_name, symbol_name = linux.LinuxUtilities.lookup_module_address(self.context, handlers, call_addr) + module_name, symbol_name = linux.LinuxUtilities.lookup_module_address(vmlinux, handlers, call_addr) yield (0, [format_hints.Hex(call_addr), module_name, symbol_name]) diff --git a/volatility3/framework/plugins/linux/lsmod.py b/volatility3/framework/plugins/linux/lsmod.py index 2d390cc824..a871ebed92 100644 --- a/volatility3/framework/plugins/linux/lsmod.py +++ b/volatility3/framework/plugins/linux/lsmod.py @@ -7,7 +7,6 @@ import logging from typing import List, Iterable -from volatility3.framework import contexts from volatility3.framework import exceptions, renderers, constants, interfaces from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins @@ -20,21 +19,18 @@ class Lsmod(plugins.PluginInterface): """Lists loaded kernel modules.""" - _required_framework_version = (1, 0, 0) - _version = (1, 0, 0) + _required_framework_version = (1, 2, 0) + _version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols") + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), ] @classmethod - def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: str, - vmlinux_symbols: str) -> Iterable[interfaces.objects.ObjectInterface]: 
+ def list_modules(cls, context: interfaces.context.ContextInterface, vmlinux_module_name: str) -> Iterable[ + interfaces.objects.ObjectInterface]: """Lists all the modules in the primary layer. Args: @@ -47,7 +43,7 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: This function will throw a SymbolError exception if kernel module support is not enabled. """ - vmlinux = contexts.Module(context, vmlinux_symbols, layer_name, 0) + vmlinux = context.modules[vmlinux_module_name] modules = vmlinux.object_from_symbol(symbol_name = "modules").cast("list_head") @@ -58,7 +54,7 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: def _generator(self): try: - for module in self.list_modules(self.context, self.config['primary'], self.config['vmlinux']): + for module in self.list_modules(self.context, self.config['vmlinux']): mod_size = module.get_init_size() + module.get_core_size() diff --git a/volatility3/framework/plugins/linux/lsof.py b/volatility3/framework/plugins/linux/lsof.py index 153d28f6a1..3b21d96824 100644 --- a/volatility3/framework/plugins/linux/lsof.py +++ b/volatility3/framework/plugins/linux/lsof.py @@ -19,17 +19,14 @@ class Lsof(plugins.PluginInterface): """Lists all memory maps for all processes.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)), - requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'vmlinux', architectures = 
["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -37,6 +34,8 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] ] def _generator(self, tasks): + vmlinux = self.context.modules[self.config['vmlinux']] + symbol_table = None for task in tasks: if symbol_table is None: @@ -57,6 +56,5 @@ def run(self): return renderers.TreeGrid([("PID", int), ("Process", str), ("FD", int), ("Path", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/malfind.py b/volatility3/framework/plugins/linux/malfind.py index 149222f37a..c7fbd9ad1c 100644 --- a/volatility3/framework/plugins/linux/malfind.py +++ b/volatility3/framework/plugins/linux/malfind.py @@ -15,16 +15,13 @@ class Malfind(interfaces.plugins.PluginInterface): """Lists process memory ranges that potentially contain injected code.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name 
= 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -48,7 +45,8 @@ def _list_injections(self, task): def _generator(self, tasks): # determine if we're on a 32 or 64 bit kernel - if self.context.symbol_space.get_type(self.config["vmlinux"] + constants.BANG + "pointer").size == 4: + if self.context.symbol_space.get_type( + self.config["vmlinux.symbol_table_name"] + constants.BANG + "pointer").size == 4: is_32bit_arch = True else: is_32bit_arch = False @@ -75,6 +73,5 @@ def run(self): ("Disasm", interfaces.renderers.Disassembly)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/proc.py b/volatility3/framework/plugins/linux/proc.py index 8bd0db538c..893646d049 100644 --- a/volatility3/framework/plugins/linux/proc.py +++ b/volatility3/framework/plugins/linux/proc.py @@ -15,17 +15,14 @@ class Maps(plugins.PluginInterface): """Lists all memory maps for all processes.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -68,6 +65,5 @@ def run(self): ("File Path", str)], self._generator( 
pslist.PsList.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/pslist.py b/volatility3/framework/plugins/linux/pslist.py index 78fce978be..14295be065 100644 --- a/volatility3/framework/plugins/linux/pslist.py +++ b/volatility3/framework/plugins/linux/pslist.py @@ -1,10 +1,9 @@ # This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # - from typing import Callable, Iterable, List, Any -from volatility3.framework import renderers, interfaces, contexts +from volatility3.framework import renderers, interfaces from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility @@ -12,17 +11,14 @@ class PsList(interfaces.plugins.PluginInterface): """Lists the processes present in a particular linux memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) - _version = (1, 0, 0) + _version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), + requirements.ModuleRequirement(name = 'vmlinux'), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -53,7 +49,6 @@ def filter_func(x): def _generator(self): for task in self.list_tasks(self.context, - self.config['primary'], self.config['vmlinux'], filter_func = self.create_pid_filter(self.config.get('pid', None))): pid = task.pid @@ -67,20 +62,18 @@ def _generator(self): def list_tasks( cls, context: interfaces.context.ContextInterface, - layer_name: 
str, - vmlinux_symbols: str, + vmlinux_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> Iterable[interfaces.objects.ObjectInterface]: """Lists all the tasks in the primary layer. Args: context: The context to retrieve required elements (layers, symbol tables) from - layer_name: The name of the layer on which to operate - vmlinux_symbols: The name of the table containing the kernel symbols + vmlinux_module_name: The name of the kernel module on which to operate Yields: Process objects """ - vmlinux = contexts.Module(context, vmlinux_symbols, layer_name, 0) + vmlinux = context.modules[vmlinux_module_name] init_task = vmlinux.object_from_symbol(symbol_name = "init_task") diff --git a/volatility3/framework/plugins/linux/pstree.py b/volatility3/framework/plugins/linux/pstree.py index a187ea9078..2f11cf5ecb 100644 --- a/volatility3/framework/plugins/linux/pstree.py +++ b/volatility3/framework/plugins/linux/pstree.py @@ -10,8 +10,6 @@ class PsTree(pslist.PsList): """Plugin for listing processes in a tree based on their parent process ID.""" - _required_framework_version = (1, 0, 0) - def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._processes = {} @@ -36,7 +34,8 @@ def find_level(self, pid): def _generator(self): """Generates the.""" - for proc in self.list_tasks(self.context, self.config['primary'], self.config['vmlinux']): + for proc in self.list_tasks(self.context, self.config['vmlinux.layer_name'], + self.config['vmlinux.symbol_table_name']): self._processes[proc.pid] = proc # Build the child/level maps diff --git a/volatility3/framework/plugins/linux/tty_check.py b/volatility3/framework/plugins/linux/tty_check.py index f633b99859..f4a4a2820f 100644 --- a/volatility3/framework/plugins/linux/tty_check.py +++ b/volatility3/framework/plugins/linux/tty_check.py @@ -5,7 +5,7 @@ import logging from typing import List -from volatility3.framework import interfaces, renderers, exceptions, constants, contexts +from 
volatility3.framework import interfaces, renderers, exceptions, constants from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -19,26 +19,22 @@ class tty_check(plugins.PluginInterface): """Checks tty devices for hooks""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)), - requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (1, 0, 0)) + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), + requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)) ] def _generator(self): - vmlinux = contexts.Module(self.context, self.config['vmlinux'], self.config['primary'], 0) + vmlinux = self.context.modules[self.config['vmlinux']] - modules = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['vmlinux']) + modules = lsmod.Lsmod.list_modules(self.context, vmlinux.name) - handlers = linux.LinuxUtilities.generate_kernel_handler_info(self.context, self.config['primary'], - self.config['vmlinux'], modules) + handlers = linux.LinuxUtilities.generate_kernel_handler_info(self.context, vmlinux.name, modules) try: tty_drivers = vmlinux.object_from_symbol("tty_drivers").cast("list_head") @@ -52,12 +48,12 @@ def _generator(self): "This means you are either analyzing 
an unsupported kernel version or that your symbol table is corrupt." ) - for tty in tty_drivers.to_list(vmlinux.name + constants.BANG + "tty_driver", "tty_drivers"): + for tty in tty_drivers.to_list(vmlinux.symbol_table_name + constants.BANG + "tty_driver", "tty_drivers"): try: ttys = utility.array_of_pointers(tty.ttys.dereference(), count = tty.num, - subtype = vmlinux.name + constants.BANG + "tty_struct", + subtype = vmlinux.symbol_table_name + constants.BANG + "tty_struct", context = self.context) except exceptions.PagedInvalidAddressException: continue @@ -71,7 +67,7 @@ def _generator(self): recv_buf = tty_dev.ldisc.ops.receive_buf - module_name, symbol_name = linux.LinuxUtilities.lookup_module_address(self.context, handlers, recv_buf) + module_name, symbol_name = linux.LinuxUtilities.lookup_module_address(vmlinux, handlers, recv_buf) yield (0, (name, format_hints.Hex(recv_buf), module_name, symbol_name)) From cd2f40bc34bdf1bc6e3ad463781263a5bfc5f165 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:42:06 +0100 Subject: [PATCH 178/294] Mac: Update all plugins to ModuleRequirement --- volatility3/framework/plugins/mac/bash.py | 16 +++-- .../framework/plugins/mac/check_syscall.py | 21 +++---- .../framework/plugins/mac/check_sysctl.py | 20 +++--- .../framework/plugins/mac/check_trap_table.py | 20 +++--- volatility3/framework/plugins/mac/ifconfig.py | 11 ++-- .../framework/plugins/mac/kauth_listeners.py | 26 ++++---- .../framework/plugins/mac/kauth_scopes.py | 38 ++++++------ volatility3/framework/plugins/mac/kevents.py | 21 +++---- .../framework/plugins/mac/list_files.py | 22 +++---- volatility3/framework/plugins/mac/lsmod.py | 20 +++--- volatility3/framework/plugins/mac/lsof.py | 14 ++--- volatility3/framework/plugins/mac/malfind.py | 16 ++--- volatility3/framework/plugins/mac/mount.py | 18 +++--- volatility3/framework/plugins/mac/netstat.py | 19 +++--- .../framework/plugins/mac/proc_maps.py | 13 ++-- volatility3/framework/plugins/mac/psaux.py | 
11 ++-- volatility3/framework/plugins/mac/pslist.py | 62 +++++++------------ volatility3/framework/plugins/mac/pstree.py | 12 ++-- .../framework/plugins/mac/socket_filters.py | 18 +++--- volatility3/framework/plugins/mac/timers.py | 27 ++++---- .../framework/plugins/mac/trustedbsd.py | 25 ++++---- .../framework/plugins/mac/vfsevents.py | 13 ++-- 22 files changed, 202 insertions(+), 261 deletions(-) diff --git a/volatility3/framework/plugins/mac/bash.py b/volatility3/framework/plugins/mac/bash.py index c769d405ac..e2e39e20de 100644 --- a/volatility3/framework/plugins/mac/bash.py +++ b/volatility3/framework/plugins/mac/bash.py @@ -20,16 +20,13 @@ class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface): """Recovers bash command history from memory.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -37,7 +34,7 @@ def get_requirements(cls): ] def _generator(self, tasks): - is_32bit = not symbols.symbol_table_is_64bit(self.context, self.config["darwin"]) + is_32bit = not symbols.symbol_table_is_64bit(self.context, self.config["darwin.symbol_table_name"]) if is_32bit: pack_format = "I" bash_json_file = "bash32" @@ -96,7 +93,6 @@ def run(self): ("Command", str)], self._generator( list_tasks(self.context, - self.config['primary'], 
self.config['darwin'], filter_func = filter_func))) @@ -105,7 +101,9 @@ def generate_timeline(self): list_tasks = pslist.PsList.get_list_tasks(self.config.get('pslist_method', pslist.PsList.pslist_methods[0])) for row in self._generator( - list_tasks(self.context, self.config['primary'], self.config['darwin'], filter_func = filter_func)): + list_tasks(self.context, + self.config['darwin'], + filter_func = filter_func)): _depth, row_data = row description = f"{row_data[0]} ({row_data[1]}): \"{row_data[3]}\"" yield (description, timeliner.TimeLinerType.CREATED, row_data[2]) diff --git a/volatility3/framework/plugins/mac/check_syscall.py b/volatility3/framework/plugins/mac/check_syscall.py index 4f367001eb..4d96c17334 100644 --- a/volatility3/framework/plugins/mac/check_syscall.py +++ b/volatility3/framework/plugins/mac/check_syscall.py @@ -5,7 +5,7 @@ from typing import List from volatility3.framework import exceptions, interfaces -from volatility3.framework import renderers, contexts +from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.renderers import format_hints @@ -18,25 +18,23 @@ class Check_syscall(plugins.PluginInterface): """Check system call table for hooks.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - 
requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - kernel = contexts.Module(self._context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) nsysent = kernel.object_from_symbol(symbol_name = "nsysent") table = kernel.object_from_symbol(symbol_name = "sysent") @@ -55,7 +53,8 @@ def _generator(self): if not call_addr or call_addr == 0: continue - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, + call_addr, self.config['darwin']) yield (0, (format_hints.Hex(table.vol.offset), "SysCall", i, format_hints.Hex(call_addr), module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/check_sysctl.py b/volatility3/framework/plugins/mac/check_sysctl.py index 4ea1d8af03..0f755d37dd 100644 --- a/volatility3/framework/plugins/mac/check_sysctl.py +++ b/volatility3/framework/plugins/mac/check_sysctl.py @@ -6,7 +6,7 @@ import volatility3 from volatility3.framework import exceptions, interfaces -from volatility3.framework import renderers, contexts +from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -20,17 +20,14 @@ class Check_sysctl(plugins.PluginInterface): """Check sysctl handlers 
for hooks.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _parse_global_variable_sysctls(self, kernel, name): @@ -115,11 +112,11 @@ def _process_sysctl_list(self, kernel, sysctl_list, recursive = 0): break def _generator(self): - kernel = contexts.Module(self._context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) sysctl_list = kernel.object_from_symbol(symbol_name = "sysctl__children") @@ -129,7 +126,8 @@ def _generator(self): except exceptions.InvalidAddressException: continue - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, check_addr) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, check_addr, + self.config['darwin']) yield (0, (name, sysctl.oid_number, sysctl.get_perms(), 
format_hints.Hex(check_addr), val, module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/check_trap_table.py b/volatility3/framework/plugins/mac/check_trap_table.py index 0584d9ff79..3adb3b5e05 100644 --- a/volatility3/framework/plugins/mac/check_trap_table.py +++ b/volatility3/framework/plugins/mac/check_trap_table.py @@ -6,7 +6,7 @@ from typing import List from volatility3.framework import exceptions, interfaces -from volatility3.framework import renderers, contexts +from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.renderers import format_hints @@ -19,25 +19,22 @@ class Check_trap_table(plugins.PluginInterface): """Check mach trap table for hooks.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), ] def _generator(self): - kernel = contexts.Module(self._context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = 
mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) table = kernel.object_from_symbol(symbol_name = "mach_trap_table") @@ -50,7 +47,8 @@ def _generator(self): if not call_addr or call_addr == 0: continue - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr, + self.config['darwin']) yield (0, (format_hints.Hex(table.vol.offset), "TrapTable", i, format_hints.Hex(call_addr), module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/ifconfig.py b/volatility3/framework/plugins/mac/ifconfig.py index 4aeaf15647..e70ffd3a27 100644 --- a/volatility3/framework/plugins/mac/ifconfig.py +++ b/volatility3/framework/plugins/mac/ifconfig.py @@ -1,7 +1,7 @@ # This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -from volatility3.framework import exceptions, renderers, contexts +from volatility3.framework import exceptions, renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -11,20 +11,17 @@ class Ifconfig(plugins.PluginInterface): """Lists loaded kernel modules""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for 
the OS'), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)) ] def _generator(self): - kernel = contexts.Module(self._context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] try: list_head = kernel.object_from_symbol(symbol_name = "ifnet_head") diff --git a/volatility3/framework/plugins/mac/kauth_listeners.py b/volatility3/framework/plugins/mac/kauth_listeners.py index 930239db9d..8036643504 100644 --- a/volatility3/framework/plugins/mac/kauth_listeners.py +++ b/volatility3/framework/plugins/mac/kauth_listeners.py @@ -2,7 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -from volatility3.framework import renderers, interfaces, contexts +from volatility3.framework import renderers, interfaces from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility from volatility3.framework.renderers import format_hints @@ -13,34 +13,31 @@ class Kauth_listeners(interfaces.plugins.PluginInterface): """ Lists kauth listeners and their status """ - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), requirements.PluginRequirement(name = 'kauth_scopes', plugin = 
kauth_scopes.Kauth_scopes, - version = (1, 0, 0)) + version = (2, 0, 0)) ] def _generator(self): """ Enumerates the listeners for each kauth scope """ - kernel = contexts.Module(self.context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) - for scope in kauth_scopes.Kauth_scopes.list_kauth_scopes(self.context, self.config['primary'], - self.config['darwin']): + for scope in kauth_scopes.Kauth_scopes.list_kauth_scopes(self.context, self.config['darwin']): scope_name = utility.pointer_to_string(scope.ks_identifier, 128) @@ -49,7 +46,8 @@ def _generator(self): if callback == 0: continue - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, callback) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, callback, + self.config['darwin']) yield (0, (scope_name, format_hints.Hex(listener.kll_idata), format_hints.Hex(callback), module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/kauth_scopes.py b/volatility3/framework/plugins/mac/kauth_scopes.py index a5f57d9a13..7f5a20d8fa 100644 --- a/volatility3/framework/plugins/mac/kauth_scopes.py +++ b/volatility3/framework/plugins/mac/kauth_scopes.py @@ -1,39 +1,38 @@ # This file is opyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # - +import logging from typing import Iterable, Callable, Tuple -from volatility3.framework import renderers, interfaces, contexts +from 
volatility3.framework import renderers, interfaces from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility from volatility3.framework.renderers import format_hints from volatility3.framework.symbols import mac from volatility3.plugins.mac import lsmod +vollog = logging.getLogger(__name__) + class Kauth_scopes(interfaces.plugins.PluginInterface): """ Lists kauth scopes and their status """ - _version = (1, 0, 0) - _required_framework_version = (1, 0, 0) + _version = (2, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] @classmethod def list_kauth_scopes(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[Tuple[interfaces.objects.ObjectInterface, interfaces.objects.ObjectInterface, @@ -42,28 +41,29 @@ def list_kauth_scopes(cls, Enumerates the registered kauth scopes and yields each object Uses smear-safe enumeration API """ - - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] scopes = kernel.object_from_symbol("kauth_scopes") for scope in mac.MacUtilities.walk_tailq(scopes, "ks_link"): - yield scope + if not 
filter_func(scope): + yield scope def _generator(self): - kernel = contexts.Module(self.context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) - for scope in self.list_kauth_scopes(self.context, self.config['primary'], self.config['darwin']): + for scope in self.list_kauth_scopes(self.context, self.config['darwin']): callback = scope.ks_callback if callback == 0: continue - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, callback) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, callback, + self.config['darwin']) identifier = utility.pointer_to_string(scope.ks_identifier, 128) diff --git a/volatility3/framework/plugins/mac/kevents.py b/volatility3/framework/plugins/mac/kevents.py index a6d9051021..47087ce388 100644 --- a/volatility3/framework/plugins/mac/kevents.py +++ b/volatility3/framework/plugins/mac/kevents.py @@ -4,7 +4,7 @@ from typing import Iterable, Callable, Tuple -from volatility3.framework import renderers, interfaces, exceptions, contexts +from volatility3.framework import renderers, interfaces, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility from volatility3.framework.symbols import mac @@ -14,7 +14,8 @@ class Kevents(interfaces.plugins.PluginInterface): """ Lists event handlers registered by processes """ - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) + _version = (1, 0, 0) event_types = { 1: "EVFILT_READ", @@ -47,11 +48,9 
@@ class Kevents(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 2, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', @@ -120,8 +119,7 @@ def _get_task_kevents(cls, kernel, task): @classmethod def list_kernel_events(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[Tuple[interfaces.objects.ObjectInterface, interfaces.objects.ObjectInterface, @@ -135,11 +133,11 @@ def list_kernel_events(cls, 2) The process ID of the process that registered the filter 3) The object of the associated kernel event filter """ - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] list_tasks = pslist.PsList.get_list_tasks(pslist.PsList.pslist_methods[0]) - for task in list_tasks(context, layer_name, darwin_symbols, filter_func): + for task in list_tasks(context, kernel_module_name, filter_func): task_name = utility.array_to_string(task.p_comm) pid = task.p_pid @@ -150,7 +148,6 @@ def _generator(self): filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) for task_name, pid, kn in self.list_kernel_events(self.context, - self.config['primary'], 
self.config['darwin'], filter_func = filter_func): diff --git a/volatility3/framework/plugins/mac/list_files.py b/volatility3/framework/plugins/mac/list_files.py index 557e735eda..edc37c2a01 100644 --- a/volatility3/framework/plugins/mac/list_files.py +++ b/volatility3/framework/plugins/mac/list_files.py @@ -18,16 +18,14 @@ class List_Files(plugins.PluginInterface): """Lists all open file descriptors for all processes.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Kernel Address Space', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac Kernel"), - requirements.PluginRequirement(name = 'mount', plugin = mount.Mount, version = (1, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'mount', plugin = mount.Mount, version = (2, 0, 0)), ] @classmethod @@ -114,14 +112,13 @@ def _walk_vnodelist(cls, list_head, loop_vnodes): @classmethod def _walk_mounts(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str) -> \ + kernel_module_name: str) -> \ Iterable[interfaces.objects.ObjectInterface]: loop_vnodes = {} # iterate each vnode source from each mount - list_mounts = mount.Mount.list_mounts(context, layer_name, darwin_symbols) + list_mounts = mount.Mount.list_mounts(context, kernel_module_name) for mnt in list_mounts: cls._walk_vnodelist(mnt.mnt_vnodelist, loop_vnodes) cls._walk_vnodelist(mnt.mnt_workerqueue, loop_vnodes) @@ -157,11 +154,10 @@ def _build_path(cls, vnodes, vnode_name, parent_offset): @classmethod def list_files(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str) -> \ + kernel_module_name: str) -> \ 
Iterable[interfaces.objects.ObjectInterface]: - vnodes = cls._walk_mounts(context, layer_name, darwin_symbols) + vnodes = cls._walk_mounts(context, kernel_module_name) for voff, (vnode_name, parent_offset, vnode) in vnodes.items(): full_path = cls._build_path(vnodes, vnode_name, parent_offset) @@ -169,7 +165,7 @@ def list_files(cls, yield vnode, full_path def _generator(self): - for vnode, full_path in self.list_files(self.context, self.config['primary'], self.config['darwin']): + for vnode, full_path in self.list_files(self.context, self.config['darwin']): yield (0, (format_hints.Hex(vnode), full_path)) diff --git a/volatility3/framework/plugins/mac/lsmod.py b/volatility3/framework/plugins/mac/lsmod.py index 7227c32895..18c9a37f7c 100644 --- a/volatility3/framework/plugins/mac/lsmod.py +++ b/volatility3/framework/plugins/mac/lsmod.py @@ -5,7 +5,7 @@ found in Mac's lsmod command.""" from typing import Set -from volatility3.framework import renderers, interfaces, contexts, exceptions +from volatility3.framework import renderers, interfaces, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -15,21 +15,19 @@ class Lsmod(plugins.PluginInterface): """Lists loaded kernel modules.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) - _version = (1, 0, 0) + _version = (2, 0, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel") + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), ] @classmethod - def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: str, darwin_symbols: str): + 
def list_modules(cls, context: interfaces.context.ContextInterface, darwin_module_name: str): """Lists all the modules in the primary layer. Args: @@ -40,8 +38,8 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: Returns: A list of modules from the `layer_name` layer """ - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) - kernel_layer = context.layers[layer_name] + kernel = context.modules[darwin_module_name] + kernel_layer = context.layers[kernel.layer_name] kmod_ptr = kernel.object_from_symbol(symbol_name = "kmod") @@ -78,7 +76,7 @@ def list_modules(cls, context: interfaces.context.ContextInterface, layer_name: return def _generator(self): - for module in self.list_modules(self.context, self.config['primary'], self.config['darwin']): + for module in self.list_modules(self.context, self.config['darwin']): mod_name = utility.array_to_string(module.name) mod_size = module.size diff --git a/volatility3/framework/plugins/mac/lsof.py b/volatility3/framework/plugins/mac/lsof.py index f4fb725a00..a7f2250cd6 100644 --- a/volatility3/framework/plugins/mac/lsof.py +++ b/volatility3/framework/plugins/mac/lsof.py @@ -16,17 +16,15 @@ class Lsof(plugins.PluginInterface): """Lists all open file descriptors for all processes.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Kernel Address Space', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac Kernel"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + 
requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -37,7 +35,8 @@ def _generator(self, tasks): for task in tasks: pid = task.p_pid - for _, filepath, fd in mac.MacUtilities.files_descriptors_for_process(self.context, self.config['darwin'], + for _, filepath, fd in mac.MacUtilities.files_descriptors_for_process(self.context, self.config[ + 'darwin.symbol_table_name'], task): if filepath and len(filepath) > 0: yield (0, (pid, fd, filepath)) @@ -49,6 +48,5 @@ def run(self): return renderers.TreeGrid([("PID", int), ("File Descriptor", int), ("File Path", str)], self._generator( list_tasks(self.context, - self.config['primary'], self.config['darwin'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/malfind.py b/volatility3/framework/plugins/mac/malfind.py index 20b46528b6..98d876c90c 100644 --- a/volatility3/framework/plugins/mac/malfind.py +++ b/volatility3/framework/plugins/mac/malfind.py @@ -2,7 +2,6 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -from volatility3.framework import constants from volatility3.framework import interfaces from volatility3.framework import renderers from volatility3.framework.configuration import requirements @@ -14,16 +13,14 @@ class Malfind(interfaces.plugins.PluginInterface): """Lists process memory ranges that potentially contain injected code.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + 
requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -41,13 +38,13 @@ def _list_injections(self, task): proc_layer = self.context.layers[proc_layer_name] for vma in task.get_map_iter(): - if not vma.is_suspicious(self.context, self.config['darwin']): + if not vma.is_suspicious(self.context, self.context.modules[self.config['darwin']].symbol_table_name): data = proc_layer.read(vma.links.start, 64, pad = True) yield vma, data def _generator(self, tasks): # determine if we're on a 32 or 64 bit kernel - if self.context.symbol_space.get_type(self.config["darwin"] + constants.BANG + "pointer").size == 4: + if self.context.modules[self.config['darwin']].get_type("pointer").size == 4: is_32bit_arch = True else: is_32bit_arch = False @@ -75,6 +72,5 @@ def run(self): ("Disasm", interfaces.renderers.Disassembly)], self._generator( list_tasks(self.context, - self.config['primary'], self.config['darwin'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/mount.py b/volatility3/framework/plugins/mac/mount.py index 3500e5ff18..8eceb040c5 100644 --- a/volatility3/framework/plugins/mac/mount.py +++ b/volatility3/framework/plugins/mac/mount.py @@ -3,7 +3,7 @@ # """A module containing a collection of plugins that produce data typically found in Mac's mount command.""" -from volatility3.framework import renderers, interfaces, contexts +from volatility3.framework import renderers, interfaces from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -14,22 +14,20 @@ class Mount(plugins.PluginInterface): """A module containing a collection of plugins that produce data 
typically foundin Mac's mount command""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) - _version = (1, 0, 0) + _version = (2, 0, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols") ] @classmethod - def list_mounts(cls, context: interfaces.context.ContextInterface, layer_name: str, darwin_symbols: str): + def list_mounts(cls, context: interfaces.context.ContextInterface, kernel_module_name: str): """Lists all the mount structures in the primary layer. Args: @@ -40,7 +38,7 @@ def list_mounts(cls, context: interfaces.context.ContextInterface, layer_name: s Returns: A list of mount structures from the `layer_name` layer """ - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] list_head = kernel.object_from_symbol(symbol_name = "mountlist") @@ -48,7 +46,7 @@ def list_mounts(cls, context: interfaces.context.ContextInterface, layer_name: s yield mount def _generator(self): - for mount in self.list_mounts(self.context, self.config['primary'], self.config['darwin']): + for mount in self.list_mounts(self.context, self.config['darwin']): vfs = mount.mnt_vfsstat device_name = utility.array_to_string(vfs.f_mntonname) mount_point = utility.array_to_string(vfs.f_mntfromname) diff --git a/volatility3/framework/plugins/mac/netstat.py b/volatility3/framework/plugins/mac/netstat.py index 3b6bba9007..67c9cb217b 100644 --- a/volatility3/framework/plugins/mac/netstat.py +++ 
b/volatility3/framework/plugins/mac/netstat.py @@ -19,16 +19,14 @@ class Netstat(plugins.PluginInterface): """Lists all network connections for all processes.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Kernel Address Space', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac Kernel"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', @@ -39,8 +37,7 @@ def get_requirements(cls): @classmethod def list_sockets(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[Tuple[interfaces.objects.ObjectInterface, interfaces.objects.ObjectInterface, @@ -56,12 +53,13 @@ def list_sockets(cls, """ # This is hardcoded, since a change in the default method would change the expected results list_tasks = pslist.PsList.get_list_tasks(pslist.PsList.pslist_methods[0]) - for task in list_tasks(context, layer_name, darwin_symbols, filter_func): + for task in list_tasks(context, kernel_module_name, filter_func): task_name = utility.array_to_string(task.p_comm) pid = task.p_pid - for filp, _, _ in mac.MacUtilities.files_descriptors_for_process(context, darwin_symbols, task): + for filp, _, _ in mac.MacUtilities.files_descriptors_for_process(context, context.modules[ 
+ kernel_module_name].symbol_table_name, task): try: ftype = filp.f_fglob.get_fg_type() except exceptions.InvalidAddressException: @@ -81,7 +79,6 @@ def _generator(self): filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) for task_name, pid, socket in self.list_sockets(self.context, - self.config['primary'], self.config['darwin'], filter_func = filter_func): diff --git a/volatility3/framework/plugins/mac/proc_maps.py b/volatility3/framework/plugins/mac/proc_maps.py index 2ff80c47e4..e9912797c9 100644 --- a/volatility3/framework/plugins/mac/proc_maps.py +++ b/volatility3/framework/plugins/mac/proc_maps.py @@ -12,16 +12,14 @@ class Maps(interfaces.plugins.PluginInterface): """Lists process memory ranges that potentially contain injected code.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -34,7 +32,7 @@ def _generator(self, tasks): process_pid = task.p_pid for vma in task.get_map_iter(): - path = vma.get_path(self.context, self.config['darwin']) + path = vma.get_path(self.context, self.context.modules[self.config['darwin']].symbol_table_name) if path == "": path = vma.get_special_path() @@ -49,6 +47,5 @@ def run(self): ("End", format_hints.Hex), ("Protection", str), ("Map Name", str)], 
self._generator( list_tasks(self.context, - self.config['primary'], self.config['darwin'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/psaux.py b/volatility3/framework/plugins/mac/psaux.py index 14d765ef81..73d4b16c59 100644 --- a/volatility3/framework/plugins/mac/psaux.py +++ b/volatility3/framework/plugins/mac/psaux.py @@ -14,16 +14,14 @@ class Psaux(plugins.PluginInterface): """Recovers program command line arguments.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -98,6 +96,5 @@ def run(self) -> renderers.TreeGrid: return renderers.TreeGrid([("PID", int), ("Process", str), ("Argc", int), ("Arguments", str)], self._generator( list_tasks(self.context, - self.config['primary'], self.config['darwin'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/pslist.py b/volatility3/framework/plugins/mac/pslist.py index 1ebd89c97c..fb2617be11 100644 --- a/volatility3/framework/plugins/mac/pslist.py +++ b/volatility3/framework/plugins/mac/pslist.py @@ -5,7 +5,7 @@ import logging from typing import Callable, Iterable, List, Dict -from volatility3.framework import renderers, interfaces, contexts, exceptions 
+from volatility3.framework import renderers, interfaces, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility from volatility3.framework.symbols import mac @@ -16,17 +16,14 @@ class PsList(interfaces.plugins.PluginInterface): """Lists the processes present in a particular mac memory image.""" - _required_framework_version = (1, 0, 0) - _version = (2, 0, 0) + _required_framework_version = (1, 2, 0) + _version = (3, 0, 0) pslist_methods = ['tasks', 'allproc', 'process_group', 'sessions', 'pid_hash_table'] @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), requirements.ChoiceRequirement(name = 'pslist_method', description = 'Method to determine for processes', @@ -41,8 +38,8 @@ def get_requirements(cls): @classmethod def get_list_tasks( - cls, method: str - ) -> Callable[[interfaces.context.ContextInterface, str, str, Callable[[int], bool]], + cls, method: str + ) -> Callable[[interfaces.context.ContextInterface, str, Callable[[int], bool]], Iterable[interfaces.objects.ObjectInterface]]: """Returns the list_tasks method based on the selector @@ -91,7 +88,6 @@ def _generator(self): list_tasks = self.get_list_tasks(self.config.get('pslist_method', self.pslist_methods[0])) for task in list_tasks(self.context, - self.config['primary'], self.config['darwin'], filter_func = self.create_pid_filter(self.config.get('pid', None))): pid = task.p_pid @@ -102,25 +98,23 @@ def _generator(self): @classmethod def list_tasks_allproc(cls, context: interfaces.context.ContextInterface, - 
layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all the processes in the primary layer based on the allproc method Args: context: The context to retrieve required elements (layers, symbol tables) from - layer_name: The name of the layer on which to operate - darwin_symbols: The name of the table containing the kernel symbols + kernel_module_name: The name of the kernel module on which to operate filter_func: A function which takes a process object and returns True if the process should be ignored/filtered Returns: The list of process objects from the processes linked list after filtering """ - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] - kernel_layer = context.layers[layer_name] + kernel_layer = context.layers[kernel.layer_name] proc = kernel.object_from_symbol(symbol_name = "allproc").lh_first @@ -143,25 +137,22 @@ def list_tasks_allproc(cls, @classmethod def list_tasks_tasks(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all the tasks in the primary layer based on the tasks queue Args: context: The context to retrieve required elements (layers, symbol tables) from - layer_name: The name of the layer on which to operate - darwin_symbols: The name of the table containing the kernel symbols + kernel_module_name: The name of the kernel module on which to operate filter_func: A function which takes a task object and returns True if the task should be ignored/filtered Returns: The list of task objects from the `layer_name` layer's `tasks` list after filtering """ + kernel = context.modules[kernel_module_name] - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) - - 
kernel_layer = context.layers[layer_name] + kernel_layer = context.layers[kernel.layer_name] queue_entry = kernel.object_from_symbol(symbol_name = "tasks") @@ -184,22 +175,20 @@ def list_tasks_tasks(cls, @classmethod def list_tasks_sessions(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all the tasks in the primary layer using sessions Args: context: The context to retrieve required elements (layers, symbol tables) from - layer_name: The name of the layer on which to operate - darwin_symbols: The name of the table containing the kernel symbols + kernel_module_name: The name of the kernel module on which to operate filter_func: A function which takes a task object and returns True if the task should be ignored/filtered Returns: The list of task objects from the `layer_name` layer's `tasks` list after filtering """ - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] table_size = kernel.object_from_symbol(symbol_name = "sesshash") @@ -218,23 +207,20 @@ def list_tasks_sessions(cls, @classmethod def list_tasks_process_group(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all the tasks in the primary layer using process groups Args: context: The context to retrieve required elements (layers, symbol tables) from - layer_name: The name of the layer on which to operate - darwin_symbols: The name of the table containing the kernel symbols + kernel_module_name: The name of the kernel module on which to operate filter_func: A function which takes a task object and returns True if the task should be ignored/filtered Returns: The list of task 
objects from the `layer_name` layer's `tasks` list after filtering """ - - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] table_size = kernel.object_from_symbol(symbol_name = "pgrphash") @@ -254,23 +240,21 @@ def list_tasks_process_group(cls, @classmethod def list_tasks_pid_hash_table(cls, context: interfaces.context.ContextInterface, - layer_name: str, - darwin_symbols: str, + kernel_module_name: str, filter_func: Callable[[int], bool] = lambda _: False) -> \ Iterable[interfaces.objects.ObjectInterface]: """Lists all the tasks in the primary layer using the pid hash table Args: context: The context to retrieve required elements (layers, symbol tables) from - layer_name: The name of the layer on which to operate - darwin_symbols: The name of the table containing the kernel symbols + kernel_module_name: The name of the kernel module on which to operate filter_func: A function which takes a task object and returns True if the task should be ignored/filtered Returns: The list of task objects from the `layer_name` layer's `tasks` list after filtering """ - kernel = contexts.Module(context, darwin_symbols, layer_name, 0) + kernel = context.modules[kernel_module_name] table_size = kernel.object_from_symbol(symbol_name = "pidhash") diff --git a/volatility3/framework/plugins/mac/pstree.py b/volatility3/framework/plugins/mac/pstree.py index ea70a5d4c6..a9846d0dbd 100644 --- a/volatility3/framework/plugins/mac/pstree.py +++ b/volatility3/framework/plugins/mac/pstree.py @@ -13,7 +13,7 @@ class PsTree(plugins.PluginInterface): """Plugin for listing processes in a tree based on their parent process ID.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -24,11 +24,9 @@ def __init__(self, *args, **kwargs): @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 
'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), - requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)) + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)) ] def _find_level(self, pid): @@ -50,7 +48,7 @@ def _generator(self): """Generates the tree list of processes""" list_tasks = pslist.PsList.get_list_tasks(self.config.get('pslist_method', pslist.PsList.pslist_methods[0])) - for proc in list_tasks(self.context, self.config['primary'], self.config['darwin']): + for proc in list_tasks(self.context, self.config['darwin']): self._processes[proc.p_pid] = proc # Build the child/level maps diff --git a/volatility3/framework/plugins/mac/socket_filters.py b/volatility3/framework/plugins/mac/socket_filters.py index 40fc6ffe0d..be21bc7d1c 100644 --- a/volatility3/framework/plugins/mac/socket_filters.py +++ b/volatility3/framework/plugins/mac/socket_filters.py @@ -5,7 +5,7 @@ from typing import List from volatility3.framework import exceptions, interfaces -from volatility3.framework import renderers, contexts +from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -19,25 +19,23 @@ class Socket_filters(plugins.PluginInterface): """Enumerates kernel socket filters.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - 
architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - kernel = contexts.Module(self._context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) members_to_check = [ "sf_unregistered", "sf_attach", "sf_detach", "sf_notify", "sf_getpeername", "sf_getsockname", "sf_data_in", diff --git a/volatility3/framework/plugins/mac/timers.py b/volatility3/framework/plugins/mac/timers.py index 912388ffb1..5ce973d5c7 100644 --- a/volatility3/framework/plugins/mac/timers.py +++ b/volatility3/framework/plugins/mac/timers.py @@ -5,7 +5,7 @@ from typing import List from volatility3.framework import exceptions, interfaces -from volatility3.framework import renderers, contexts +from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.renderers import format_hints @@ -18,36 +18,36 @@ class Timers(plugins.PluginInterface): """Check for malicious kernel timers.""" - _required_framework_version = (1, 0, 0) + 
_required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), - requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), + requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 3, 0)), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - kernel = contexts.Module(self.context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['primary'], self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) real_ncpus = kernel.object_from_symbol(symbol_name = "real_ncpus") cpu_data_ptrs_ptr = kernel.get_symbol("cpu_data_ptr").address + # Returns a pointer to the absolute address cpu_data_ptrs_addr = kernel.object(object_type = "pointer", offset = cpu_data_ptrs_ptr, subtype = kernel.get_type('long unsigned int')) cpu_data_ptrs = kernel.object(object_type = "array", offset = cpu_data_ptrs_addr, + absolute = True, subtype = kernel.get_type('cpu_data'), count = real_ncpus) @@ -68,9 +68,10 @@ def _generator(self): else: 
entry_time = -1 - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, handler) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, handler, + self.config['darwin']) - yield (0, (format_hints.Hex(handler), format_hints.Hex(timer.param0), format_hints.Hex(timer.param1), \ + yield (0, (format_hints.Hex(handler), format_hints.Hex(timer.param0), format_hints.Hex(timer.param1), timer.deadline, entry_time, module_name, symbol_name)) def run(self): diff --git a/volatility3/framework/plugins/mac/trustedbsd.py b/volatility3/framework/plugins/mac/trustedbsd.py index b03663625b..615e5ea644 100644 --- a/volatility3/framework/plugins/mac/trustedbsd.py +++ b/volatility3/framework/plugins/mac/trustedbsd.py @@ -6,7 +6,7 @@ from typing import List, Iterator, Any from volatility3.framework import exceptions, interfaces -from volatility3.framework import renderers, contexts +from volatility3.framework import renderers from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility @@ -20,29 +20,28 @@ class Trustedbsd(plugins.PluginInterface): """Checks for malicious trustedbsd modules""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel symbols"), - requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), - requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (1, 0, 0)) + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures 
= ["Intel32", "Intel64"]), + requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 3, 0)), + requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self, mods: Iterator[Any]): - kernel = contexts.Module(self._context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] - handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, self.config['primary'], kernel, mods) + handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) policy_list = kernel.object_from_symbol(symbol_name = "mac_policy_list").cast("mac_policy_list") entries = kernel.object(object_type = "array", offset = policy_list.entries.dereference().vol.offset, subtype = kernel.get_type('mac_policy_list_element'), + absolute = True, count = policy_list.staticmax + 1) for i, ent in enumerate(entries): @@ -65,7 +64,8 @@ def _generator(self, mods: Iterator[Any]): if call_addr is None or call_addr == 0: continue - module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr) + module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr, + self.config['darwin']) yield (0, (check, ent_name, format_hints.Hex(call_addr), module_name, symbol_name)) @@ -73,5 +73,4 @@ def run(self): return renderers.TreeGrid([("Member", str), ("Policy Name", str), ("Handler Address", format_hints.Hex), ("Handler Module", str), ("Handler Symbol", str)], self._generator( - lsmod.Lsmod.list_modules(self.context, self.config['primary'], - self.config['darwin']))) + lsmod.Lsmod.list_modules(self.context, self.config['darwin']))) diff --git a/volatility3/framework/plugins/mac/vfsevents.py b/volatility3/framework/plugins/mac/vfsevents.py index 97b23168c2..71915bcad9 100644 --- a/volatility3/framework/plugins/mac/vfsevents.py +++ 
b/volatility3/framework/plugins/mac/vfsevents.py @@ -2,7 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -from volatility3.framework import renderers, interfaces, exceptions, contexts +from volatility3.framework import renderers, interfaces, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.objects import utility @@ -10,7 +10,7 @@ class VFSevents(interfaces.plugins.PluginInterface): """ Lists processes that are filtering file system events """ - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) event_types = [ "CREATE_FILE", "DELETE", "STAT_CHANGED", "RENAME", "CONTENT_MODIFIED", "EXCHANGE", "FINDER_INFO_CHANGED", @@ -20,10 +20,8 @@ class VFSevents(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "darwin", description = "Mac kernel"), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), ] def _generator(self): @@ -32,7 +30,7 @@ def _generator(self): Also lists which event(s) a process is registered for """ - kernel = contexts.Module(self.context, self.config['darwin'], self.config['primary'], 0) + kernel = self.context.modules[self.config['darwin']] watcher_table = kernel.object_from_symbol("watcher_table") @@ -48,6 +46,7 @@ def _generator(self): try: event_array = kernel.object(object_type = "array", offset = watcher.event_list, + absolute = True, count = 13, subtype = kernel.get_type("unsigned char")) From 6fdf6e9e9f14a8b16a125a45b875517947a29add Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 22 Jul 2021 22:46:39 +0100 Subject: [PATCH 179/294] Windows: Convert pslist as an example --- .../framework/plugins/windows/pslist.py | 18 
++++++++---------- .../framework/plugins/windows/strings.py | 3 +-- 2 files changed, 9 insertions(+), 12 deletions(-) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index 25b24153e6..575dfaab42 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -20,17 +20,14 @@ class PsList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Lists the processes present in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) - _version = (2, 0, 1) + _required_framework_version = (1, 2, 0) + _version = (2, 0, 0) PHYSICAL_DEFAULT = False @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel'), requirements.BooleanRequirement(name = 'physical', description = 'Display physical offsets instead of virtual', default = cls.PHYSICAL_DEFAULT, @@ -179,13 +176,13 @@ def _generator(self): "pe", class_types = pe.class_types) - memory = self.context.layers[self.config['primary']] + memory = self.context.layers[self.config['kernel.layer_name']] if not isinstance(memory, layers.intel.Intel): raise TypeError("Primary layer is not an intel layer") for proc in self.list_processes(self.context, - self.config['primary'], - self.config['nt_symbols'], + self.config['kernel.layer_name'], + self.config['kernel.symbol_table_name'], filter_func = self.create_pid_filter(self.config.get('pid', None))): if not self.config.get('physical', self.PHYSICAL_DEFAULT): @@ -197,7 +194,8 @@ def _generator(self): try: if self.config['dump']: - file_handle = self.process_dump(self.context, self.config['nt_symbols'], pe_table_name, 
proc, self.open) + file_handle = self.process_dump(self.context, self.config['kernel.symbol_table_name'], + pe_table_name, proc, self.open) file_output = "Error outputting file" if file_handle: file_handle.close() diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index 775ffc7322..36c68d809c 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -40,9 +40,8 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] def run(self): return renderers.TreeGrid([("String", str), ("Physical Address", format_hints.Hex), ("Result", str)], - self._generator) + self._generator()) - @property def _generator(self) -> Generator[Tuple, None, None]: """Generates results from a strings file.""" string_list: List[Tuple[int,bytes]] = [] From 9358700553da3ae848f76f02fe1717ebf9c5a4e7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 24 Jul 2021 22:39:02 +0100 Subject: [PATCH 180/294] Windows: Refactor PDB code and support VS19 new types --- .../framework/symbols/windows/pdb.json | 50 +++++++ .../framework/symbols/windows/pdbconv.py | 134 ++++++++---------- 2 files changed, 109 insertions(+), 75 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdb.json b/volatility3/framework/symbols/windows/pdb.json index 8c365a72b2..f87b8db9fc 100644 --- a/volatility3/framework/symbols/windows/pdb.json +++ b/volatility3/framework/symbols/windows/pdb.json @@ -1295,6 +1295,54 @@ "kind": "struct", "size": 18 }, + "LF_STRUCTURE_VS19": { + "fields": { + "properties": { + "offset": 0, + "type": { + "kind": "struct", + "name": "Type_Properties" + } + }, + "fields": { + "offset": 4, + "type": { + "kind": "base", + "name": "unsigned long" + } + }, + "derived_from": { + "offset": 8, + "type": { + "kind": "base", + "name": "unsigned long" + } + }, + "vtable_shape": { + "offset": 12, + "type": { + "kind": "base", + "name": "unsigned long" + } + 
}, + "size": { + "offset": 18, + "type": { + "kind": "base", + "name": "unsigned short" + } + }, + "name": { + "offset": 20, + "type": { + "kind": "base", + "name": "string" + } + } + }, + "kind": "struct", + "size": 20 + }, "LF_UDT_SRC_LINE": { "fields": { "udt": { @@ -1626,6 +1674,8 @@ "LF_STRING_ID": 5637, "LF_UDT_SRC_LINE": 5638, "LF_UDT_MOD_SRC_LINE": 5639, + "LF_CLASS_VS19": 5640, + "LF_STRUCTURE_VS19": 5641, "LF_CHAR": 32768, "LF_SHORT": 32769, "LF_USHORT": 32770, diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index b582fc299d..9314a66a5a 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -269,7 +269,8 @@ def __init__(self, if not progress_callback: progress_callback = lambda x, y: None self._progress_callback = progress_callback - self.types: List[Tuple[interfaces.objects.ObjectInterface, Optional[str], interfaces.objects.ObjectInterface]] = [ + self.types: List[ + Tuple[interfaces.objects.ObjectInterface, Optional[str], interfaces.objects.ObjectInterface]] = [ ] self.bases: Dict[str, Any] = {} self.user_types: Dict[str, Any] = {} @@ -362,7 +363,6 @@ def read_ipi_stream(self): except ValueError: return None - def _read_info_stream(self, stream_number, stream_name, info_list): vollog.debug(f"Reading {stream_name}") info_layer = self._context.layers.get(self._layer_name + "_stream" + str(stream_number), None) @@ -650,7 +650,7 @@ def get_size_from_index(self, index: int) -> int: leaf_type, name, value = self.types[index - 0x1000] if leaf_type in [ leaf_type.LF_UNION, leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, - leaf_type.LF_STRUCTURE_ST, leaf_type.LF_INTERFACE + leaf_type.LF_STRUCTURE_ST, leaf_type.LF_INTERFACE, leaf_type.LF_CLASS_VS19, leaf_type.LF_STRUCTURE_VS19 ]: if not value.properties.forward_reference: result = value.size @@ -728,6 +728,36 @@ def process_types(self, type_references: Dict[str, 
int]) -> None: # Re-run through for ForwardSizeReferences self.user_types = self.replace_forward_references(self.user_types, type_references) + type_handlers = { + # Leaf_type: ('Structure', has_name, value_attribute) + 'LF_CLASS': ('LF_STRUCTURE', True, 'size'), + 'LF_CLASS_ST': ('LF_STRUCTURE', True, 'size'), + 'LF_STRUCTURE': ('LF_STRUCTURE', True, 'size'), + 'LF_STRUCTURE_ST': ('LF_STRUCTURE', True, 'size'), + 'LF_INTERFACE': ('LF_STRUCTURE', True, 'size'), + 'LF_CLASS_VS19': ('LF_STRUCTURE_VS19', True, 'size'), + 'LF_STRUCTURE_VS19': ('LF_STRUCTURE_VS19', True, 'size'), + 'LF_MEMBER': ('LF_MEMBER', True, 'offset'), + 'LF_MEMBER_ST': ('LF_MEMBER', True, 'offset'), + 'LF_ARRAY': ('LF_ARRAY', True, 'size'), + 'LF_ARRAY_ST': ('LF_ARRAY', True, 'size'), + 'LF_STRIDED_ARRAY': ('LF_ARRAY', True, 'size'), + 'LF_ENUMERATE': ('LF_ENUMERATE', True, 'value'), + 'LF_ARGLIST': ('LF_ENUM', True, None), + 'LF_ENUM': ('LF_ENUM', True, None), + 'LF_UNION': ('LF_UNION', True, None), + 'LF_STRING_ID': ('LF_STRING_ID', True, None), + 'LF_FUNC_ID': ('LF_FUNC_ID', True, None), + 'LF_MODIFIER': ('LF_MODIFIER', False, None), + 'LF_POINTER': ('LF_POINTER', False, None), + 'LF_PROCEDURE': ('LF_PROCEDURE', False, None), + 'LF_FIELDLIST': ('LF_FIELDLIST', False, None), + 'LF_BITFIELD': ('LF_BITFIELD', False, None), + 'LF_UDT_SRC_LINE': ('LF_UDT_SRC_LINE', False, None), + 'LF_UDT_MOD_SRC_LINE': ('LF_UDT_MOD_SRC_LINE', False, None), + 'LF_BUILDINFO': ('LF_BUILDINFO', False, None) + } + def consume_type( self, module: interfaces.context.ModuleInterface, offset: int, length: int ) -> Tuple[Tuple[Optional[interfaces.objects.ObjectInterface], Optional[str], Union[ @@ -740,61 +770,9 @@ def consume_type( consumed = leaf_type.vol.base_type.size remaining = length - consumed - if leaf_type in [ - leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, leaf_type.LF_STRUCTURE_ST, - leaf_type.LF_INTERFACE - ]: - structure = module.object(object_type = "LF_STRUCTURE", offset = offset + 
consumed) - name_offset = structure.name.vol.offset - structure.vol.offset - name, value, excess = self.determine_extended_value(leaf_type, structure.size, module, - remaining - name_offset) - structure.size = value - structure.name = name - consumed += remaining - result = leaf_type, name, structure - elif leaf_type in [leaf_type.LF_MEMBER, leaf_type.LF_MEMBER_ST]: - member = module.object(object_type = "LF_MEMBER", offset = offset + consumed) - name_offset = member.name.vol.offset - member.vol.offset - name, value, excess = self.determine_extended_value(leaf_type, member.offset, module, - remaining - name_offset) - member.offset = value - member.name = name - result = leaf_type, name, member - consumed += member.vol.size + len(name) + 1 + excess - elif leaf_type in [leaf_type.LF_ARRAY, leaf_type.LF_ARRAY_ST, leaf_type.LF_STRIDED_ARRAY]: - array = module.object(object_type = "LF_ARRAY", offset = offset + consumed) - name_offset = array.name.vol.offset - array.vol.offset - name, value, excess = self.determine_extended_value(leaf_type, array.size, module, remaining - name_offset) - array.size = value - array.name = name - result = leaf_type, name, array - consumed += remaining - elif leaf_type in [leaf_type.LF_ENUMERATE]: - enum = module.object(object_type = 'LF_ENUMERATE', offset = offset + consumed) - name_offset = enum.name.vol.offset - enum.vol.offset - name, value, excess = self.determine_extended_value(leaf_type, enum.value, module, remaining - name_offset) - enum.value = value - enum.name = name - result = leaf_type, name, enum - consumed += enum.vol.size + len(name) + 1 + excess - elif leaf_type in [leaf_type.LF_ARGLIST, leaf_type.LF_ENUM]: - enum = module.object(object_type = "LF_ENUM", offset = offset + consumed) - name_offset = enum.name.vol.offset - enum.vol.offset - name = self.parse_string(enum.name, leaf_type < leaf_type.LF_ST_MAX, size = remaining - name_offset) - enum.name = name - result = leaf_type, name, enum - consumed += remaining - elif 
leaf_type in [leaf_type.LF_UNION]: - union = module.object(object_type = "LF_UNION", offset = offset + consumed) - name_offset = union.name.vol.offset - union.vol.offset - name = self.parse_string(union.name, leaf_type < leaf_type.LF_ST_MAX, size = remaining - name_offset) - result = leaf_type, name, union - consumed += remaining - elif leaf_type in [leaf_type.LF_MODIFIER, leaf_type.LF_POINTER, leaf_type.LF_PROCEDURE]: - obj = module.object(object_type = leaf_type.lookup(), offset = offset + consumed) - result = leaf_type, None, obj - consumed += remaining - elif leaf_type in [leaf_type.LF_FIELDLIST]: + type_handler, has_name, value_attribute = self.type_handlers.get(leaf_type.lookup(), 'LF_UNKNOWN') + + if type_handler in ['LF_FIELDLIST']: sub_length = remaining sub_offset = offset + consumed fields = [] @@ -806,23 +784,29 @@ def consume_type( consumed += sub_consumed fields.append(subfield) result = leaf_type, None, fields - elif leaf_type in [leaf_type.LF_BITFIELD]: - bitfield = module.object(object_type = "LF_BITFIELD", offset = offset + consumed) - result = leaf_type, None, bitfield - consumed += remaining - elif leaf_type in [leaf_type.LF_STRING_ID, leaf_type.LF_FUNC_ID]: - string_id = module.object(object_type = leaf_type.lookup(), offset = offset + consumed) - name_offset = string_id.name.vol.offset - string_id.vol.offset - name = self.parse_string(string_id.name, leaf_type < leaf_type.LF_ST_MAX, size = remaining - name_offset) - result = leaf_type, name, string_id - elif leaf_type in [leaf_type.LF_UDT_SRC_LINE, leaf_type.LF_UDT_MOD_SRC_LINE]: - src_line = module.object(object_type = leaf_type.lookup(), offset = offset + consumed) - result = leaf_type, None, src_line - elif leaf_type in [leaf_type.LF_BUILDINFO]: - buildinfo = module.object(object_type = leaf_type.lookup(), offset = offset + consumed) - buildinfo.arguments.count = buildinfo.count - consumed += buildinfo.arguments.vol.size - result = leaf_type, None, buildinfo + elif type_handler in 
['LF_BUILDINFO']: + parsed_obj = module.object(object_type = type_handler, offset = offset + consumed) + parsed_obj.arguments.count = parsed_obj.count + consumed += parsed_obj.arguments.vol.size + result = leaf_type, None, parsed_obj + elif type_handler in self.type_handlers: + parsed_obj = module.object(object_type = type_handler, offset = offset + consumed) + current_consumed = remaining + if has_name: + name_offset = parsed_obj.name.vol.offset - parsed_obj.vol.offset + if value_attribute: + name, value, excess = self.determine_extended_value(leaf_type, getattr(parsed_obj, value_attribute), + module, remaining - name_offset) + setattr(parsed_obj, value_attribute, value) + current_consumed = parsed_obj.vol.size + len(name) + 1 + excess + else: + name = self.parse_string(parsed_obj.name, leaf_type < leaf_type.LF_ST_MAX, + size = remaining - name_offset) + parsed_obj.name = name + else: + name = None + result = leaf_type, name, parsed_obj + consumed += current_consumed else: raise TypeError(f"Unhandled leaf_type: {leaf_type}") From deaa86f625a55cf5a59ac9e57766bdbd20de6516 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 24 Jul 2021 23:01:11 +0100 Subject: [PATCH 181/294] Windows: Ensure we emit LF_STRUCTURE_VS19 types --- volatility3/framework/symbols/windows/pdbconv.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index 9314a66a5a..fe61aa10df 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -695,7 +695,7 @@ def process_types(self, type_references: Dict[str, int]) -> None: leaf_type, name, value = self.types[index] if leaf_type in [ leaf_type.LF_CLASS, leaf_type.LF_CLASS_ST, leaf_type.LF_STRUCTURE, leaf_type.LF_STRUCTURE_ST, - leaf_type.LF_INTERFACE + leaf_type.LF_INTERFACE, leaf_type.LF_CLASS_VS19, leaf_type.LF_STRUCTURE_VS19 ]: if not value.properties.forward_reference and 
name: self.user_types[name] = { From 63b4c60d3b3143ece92a6cb99abeeee445e85a1b Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Tue, 27 Jul 2021 19:51:55 +0000 Subject: [PATCH 182/294] Cleanup vnode processing and smear protection based on @fgomulka testing --- .../symbols/mac/extensions/__init__.py | 40 ++++++++++++++++--- 1 file changed, 35 insertions(+), 5 deletions(-) diff --git a/volatility3/framework/symbols/mac/extensions/__init__.py b/volatility3/framework/symbols/mac/extensions/__init__.py index b0d75b93b3..1a955a3791 100644 --- a/volatility3/framework/symbols/mac/extensions/__init__.py +++ b/volatility3/framework/symbols/mac/extensions/__init__.py @@ -4,12 +4,15 @@ from typing import Generator, Iterable, Optional, Set, Tuple +import logging + from volatility3.framework import constants, objects, renderers from volatility3.framework import exceptions, interfaces from volatility3.framework.objects import utility from volatility3.framework.renderers import conversion from volatility3.framework.symbols import generic +vollog = logging.getLogger(__name__) class proc(generic.GenericIntelProcess): @@ -51,7 +54,15 @@ def get_map_iter(self) -> Iterable[interfaces.objects.ObjectInterface]: seen: Set[int] = set() for i in range(task.map.hdr.nentries): - if not current_map or current_map.vol.offset in seen: + if (not current_map or + current_map.vol.offset in seen or + not self._context.layers[task.vol.native_layer_name].is_valid(current_map.dereference().vol.offset, current_map.dereference().vol.size)): + + vollog.log(constants.LOGLEVEL_VVV, "Breaking process maps iteration due to invalid state.") + break + + # ZP_POISON value used to catch programming errors + if current_map.links.start == 0xdeadbeefdeadbeef or current_map.links.end == 0xdeadbeefdeadbeef: break yield current_map @@ -210,10 +221,21 @@ def get_path(self, context, config_prefix): ret = node elif node: path = [] - while node: - v_name = utility.pointer_to_string(node.v_name, 255) + seen: Set[int] = 
set() + while node and node.vol.offset not in seen: + try: + v_name = utility.pointer_to_string(node.v_name, 255) + except exceptions.InvalidAddressException: + break + path.append(v_name) + if len(path) > 1024: + break + + seen.add(node.vol.offset) + node = node.v_parent + path.reverse() ret = "/" + "/".join(path) else: @@ -243,9 +265,10 @@ def get_vnode(self, context, config_prefix): # based on find_vnode_object vnode_object = self.get_object().get_map_object() + if vnode_object == 0: + return None found_end = False - while not found_end: try: tmp_vnode_object = vnode_object.shadow.dereference() @@ -257,8 +280,15 @@ def get_vnode(self, context, config_prefix): else: vnode_object = tmp_vnode_object + if vnode_object.vol.offset == 0: + return None + try: - ops = vnode_object.pager.mo_pager_ops.dereference() + pager = vnode_object.pager + if pager == 0: + return None + + ops = pager.mo_pager_ops.dereference() except exceptions.InvalidAddressException: return None From c1bf63db83f7f2cc018ffd712f23f4090e1541a3 Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Tue, 27 Jul 2021 19:59:28 +0000 Subject: [PATCH 183/294] Adding missing change for pointer_to_string exception catching --- volatility3/framework/symbols/mac/extensions/__init__.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/mac/extensions/__init__.py b/volatility3/framework/symbols/mac/extensions/__init__.py index 1a955a3791..94045d2e73 100644 --- a/volatility3/framework/symbols/mac/extensions/__init__.py +++ b/volatility3/framework/symbols/mac/extensions/__init__.py @@ -131,7 +131,10 @@ def _do_calc_path(self, ret, vnodeobj, vname): return if vname: - ret.append(utility.pointer_to_string(vname, 255)) + try: + ret.append(utility.pointer_to_string(vname, 255)) + except exceptions.InvalidAddressException: + return if int(vnodeobj.v_flag) & 0x000001 != 0 and int(vnodeobj.v_mount) != 0: if int(vnodeobj.v_mount.mnt_vnodecovered) != 0: From 
c9232ab7b5e51b54e3417f4532692bd31fbd34b2 Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Tue, 27 Jul 2021 20:04:24 +0000 Subject: [PATCH 184/294] Skip invalid sockets in netstat enumeration, based on @fgomulka testing --- volatility3/framework/plugins/mac/netstat.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/volatility3/framework/plugins/mac/netstat.py b/volatility3/framework/plugins/mac/netstat.py index 3b6bba9007..ff7af79ea7 100644 --- a/volatility3/framework/plugins/mac/netstat.py +++ b/volatility3/framework/plugins/mac/netstat.py @@ -75,6 +75,10 @@ def list_sockets(cls, except exceptions.InvalidAddressException: continue + if not context.layers[task.vol.native_layer_name].is_valid(socket.vol.offset, + socket.vol.size): + continue + yield task_name, pid, socket def _generator(self): From f6c1e0eafc088cfdc8c0bc0791e9281ae266d4ae Mon Sep 17 00:00:00 2001 From: Andrew Case Date: Tue, 27 Jul 2021 21:31:07 +0000 Subject: [PATCH 185/294] Fix several code paths that could infinite loop and were triggered based on @fgomulka testing --- .../framework/plugins/mac/list_files.py | 77 ++++++++++++------- 1 file changed, 51 insertions(+), 26 deletions(-) diff --git a/volatility3/framework/plugins/mac/list_files.py b/volatility3/framework/plugins/mac/list_files.py index 557e735eda..f1e746d28c 100644 --- a/volatility3/framework/plugins/mac/list_files.py +++ b/volatility3/framework/plugins/mac/list_files.py @@ -44,20 +44,22 @@ def _vnode_name(cls, vnode: interfaces.objects.ObjectInterface) -> Optional[str] return v_name @classmethod - def _get_parent(cls, vnode): - parent = None - + def _get_parent(cls, context, vnode): # root entries do not have parents # and parents of normal files can be smeared try: - parent = vnode.v_parent + parent = vnode.v_parent.dereference() except exceptions.InvalidAddressException: - pass + return None + + if parent and not context.layers[vnode.vol.native_layer_name].is_valid(parent.vol.offset, + parent.vol.size): + return None return 
parent @classmethod - def _add_vnode(cls, vnode, loop_vnodes): + def _add_vnode(cls, context, vnode, loop_vnodes): """ Adds the given vnode to loop_vnodes. @@ -65,7 +67,11 @@ def _add_vnode(cls, vnode, loop_vnodes): and holds its name, parent address, and object """ - key = vnode + if not context.layers[vnode.vol.native_layer_name].is_valid(vnode.vol.offset, + vnode.vol.size): + return False + + key = vnode.vol.offset added = False if not key in loop_vnodes: @@ -74,9 +80,9 @@ def _add_vnode(cls, vnode, loop_vnodes): if v_name is None: return added - parent = cls._get_parent(vnode) + parent = cls._get_parent(context, vnode) if parent: - parent_val = parent + parent_val = parent.vol.offset else: parent_val = None @@ -87,29 +93,40 @@ def _add_vnode(cls, vnode, loop_vnodes): return added @classmethod - def _walk_vnode(cls, vnode, loop_vnodes): + def _walk_vnode(cls, context, vnode, loop_vnodes): """ Iterates over the list of vnodes associated with the given one. Also traverses the parent chain for the vnode and adds each one. 
""" + added = False + while vnode: - if not cls._add_vnode(vnode, loop_vnodes): + if vnode in loop_vnodes: + return added + + if not cls._add_vnode(context, vnode, loop_vnodes): break + + added = True - parent = cls._get_parent(vnode) - while parent: - cls._walk_vnode(parent, loop_vnodes) - parent = cls._get_parent(parent) + parent = cls._get_parent(context, vnode) + while parent and not parent in loop_vnodes: + if not cls._walk_vnode(context, parent, loop_vnodes): + break + + parent = cls._get_parent(context, parent) try: - vnode = vnode.v_mntvnodes.tqe_next + vnode = vnode.v_mntvnodes.tqe_next.dereference() except exceptions.InvalidAddressException: break + return added + @classmethod - def _walk_vnodelist(cls, list_head, loop_vnodes): + def _walk_vnodelist(cls, context, list_head, loop_vnodes): for vnode in mac.MacUtilities.walk_tailq(list_head, "v_mntvnodes"): - cls._walk_vnode(vnode, loop_vnodes) + cls._walk_vnode(context, vnode, loop_vnodes) @classmethod def _walk_mounts(cls, @@ -123,25 +140,33 @@ def _walk_mounts(cls, # iterate each vnode source from each mount list_mounts = mount.Mount.list_mounts(context, layer_name, darwin_symbols) for mnt in list_mounts: - cls._walk_vnodelist(mnt.mnt_vnodelist, loop_vnodes) - cls._walk_vnodelist(mnt.mnt_workerqueue, loop_vnodes) - cls._walk_vnodelist(mnt.mnt_newvnodes, loop_vnodes) - - cls._walk_vnode(mnt.mnt_vnodecovered, loop_vnodes) - cls._walk_vnode(mnt.mnt_realrootvp, loop_vnodes) - cls._walk_vnode(mnt.mnt_devvp, loop_vnodes) + cls._walk_vnodelist(context, mnt.mnt_vnodelist, loop_vnodes) + cls._walk_vnodelist(context, mnt.mnt_workerqueue, loop_vnodes) + cls._walk_vnodelist(context, mnt.mnt_newvnodes, loop_vnodes) + cls._walk_vnode(context, mnt.mnt_vnodecovered, loop_vnodes) + cls._walk_vnode(context, mnt.mnt_realrootvp, loop_vnodes) + cls._walk_vnode(context, mnt.mnt_devvp, loop_vnodes) return loop_vnodes @classmethod def _build_path(cls, vnodes, vnode_name, parent_offset): path = [vnode_name] + seen_offsets = set() 
while parent_offset in vnodes: parent_name, parent_offset, _ = vnodes[parent_offset] if parent_offset is None: parent_offset = 0 + # circular references from smear + elif parent_offset in seen_offsets: + path = [] + break + + else: + seen_offsets.add(parent_offset) + path.insert(0, parent_name) if len(path) > 1: @@ -171,7 +196,7 @@ def list_files(cls, def _generator(self): for vnode, full_path in self.list_files(self.context, self.config['primary'], self.config['darwin']): - yield (0, (format_hints.Hex(vnode), full_path)) + yield (0, (format_hints.Hex(vnode.vol.offset), full_path)) def run(self): return renderers.TreeGrid([("Address", format_hints.Hex), ("File Path", str)], self._generator()) From 6f4ff5f2df7390b09fd7aca067ad2db66ed2b7e1 Mon Sep 17 00:00:00 2001 From: Frank Gomulka Date: Thu, 15 Jul 2021 00:16:29 -0400 Subject: [PATCH 186/294] Add patch for driverscan for windows 7 and earlier This patches an issue for some versions of windows that require the bottom-up approach for finding the size of "object body" by recognizing additional structures occupying the "object body" position. The sizes of these additional structures are added to the size of `_DRIVER_OBJECT` to more accurately compute the actual size of "object body". 
--- .../framework/symbols/windows/extensions/pool.py | 12 ++++++++++-- .../{framework => }/plugins/windows/poolscanner.py | 10 ++++++---- 2 files changed, 16 insertions(+), 6 deletions(-) rename volatility3/{framework => }/plugins/windows/poolscanner.py (98%) diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index d470797b5b..667acc7391 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -5,6 +5,7 @@ from volatility3.framework import objects, interfaces, constants, symbols, exceptions, renderers from volatility3.framework.renderers import conversion +from volatility3.plugins.windows.poolscanner import PoolConstraint vollog = logging.getLogger(__name__) @@ -17,9 +18,8 @@ class POOL_HEADER(objects.StructType): """ def get_object(self, - type_name: str, + constraint: PoolConstraint, use_top_down: bool, - executive: bool = False, kernel_symbol_table: Optional[str] = None, native_layer_name: Optional[str] = None) -> Optional[interfaces.objects.ObjectInterface]: """Carve an object or data structure from a kernel pool allocation @@ -34,6 +34,10 @@ def get_object(self, An object as found from a POOL_HEADER """ + # TODO: I wasn't quite sure what to do with these values, so I just set them here for now. 
+ type_name = constraint.type_name + executive = constraint.object_type is not None + symbol_table_name = self.vol.type_name.split(constants.BANG)[0] if constants.BANG in type_name: symbol_table_name, type_name = type_name.split(constants.BANG)[0:2] @@ -150,6 +154,10 @@ def get_object(self, # use the bottom up approach for windows 7 and earlier else: type_size = self._context.symbol_space.get_type(symbol_table_name + constants.BANG + type_name).size + if constraint.additional_structures: + for additional_structure in constraint.additional_structures: + type_size += self._context.symbol_space.get_type(symbol_table_name + constants.BANG + additional_structure).size + rounded_size = conversion.round(type_size, alignment, up = True) mem_object = self._context.object(symbol_table_name + constants.BANG + type_name, diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/plugins/windows/poolscanner.py similarity index 98% rename from volatility3/framework/plugins/windows/poolscanner.py rename to volatility3/plugins/windows/poolscanner.py index a6a6cae0c9..36baec0bfc 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/plugins/windows/poolscanner.py @@ -39,7 +39,8 @@ def __init__(self, size: Optional[Tuple[Optional[int], Optional[int]]] = None, index: Optional[Tuple[Optional[int], Optional[int]]] = None, alignment: Optional[int] = 1, - skip_type_test: bool = False) -> None: + skip_type_test: bool = False, + additional_structures: Optional[List[str]] = None) -> None: self.tag = tag self.type_name = type_name self.object_type = object_type @@ -48,6 +49,7 @@ def __init__(self, self.index = index self.alignment = alignment self.skip_type_test = skip_type_test + self.additional_structures = additional_structures class PoolHeaderScanner(interfaces.layers.ScannerInterface): @@ -212,7 +214,8 @@ def builtin_constraints(symbol_table: str, tags_filter: List[bytes] = None) -> L type_name = symbol_table + constants.BANG + 
"_DRIVER_OBJECT", object_type = "Driver", size = (248, None), - page_type = PoolType.PAGED | PoolType.NONPAGED | PoolType.FREE), + page_type = PoolType.PAGED | PoolType.NONPAGED | PoolType.FREE, + additional_structures = ["_DRIVER_EXTENSION"]), # drivers on windows starting with windows 8 PoolConstraint(b'Driv', type_name = symbol_table + constants.BANG + "_DRIVER_OBJECT", @@ -291,9 +294,8 @@ def generate_pool_scan(cls, for constraint, header in cls.pool_scan(context, scan_layer, symbol_table, constraints, alignment = alignment): - mem_object = header.get_object(type_name = constraint.type_name, + mem_object = header.get_object(constraint = constraint, use_top_down = is_windows_8_or_later, - executive = constraint.object_type is not None, native_layer_name = 'primary', kernel_symbol_table = symbol_table) From 7311f241491a9613866cb070406c25a0b8b30814 Mon Sep 17 00:00:00 2001 From: Frank Gomulka Date: Thu, 15 Jul 2021 15:35:00 -0400 Subject: [PATCH 187/294] Comment and Pydoc changes --- volatility3/framework/symbols/windows/extensions/pool.py | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index 667acc7391..5353fc30b1 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -25,16 +25,15 @@ def get_object(self, """Carve an object or data structure from a kernel pool allocation Args: - type_name: the data structure type name - native_layer_name: the name of the layer where the data originally lived - object_type: the object type (executive kernel objects only) + constraint: a PoolConstraint object used to get the pool allocation header object + use_top_down: for delineating how a windows version finds the size of the object body kernel_symbol_table: in case objects of a different symbol table are scanned for + native_layer_name: the name of the layer 
where the data originally lived Returns: An object as found from a POOL_HEADER """ - # TODO: I wasn't quite sure what to do with these values, so I just set them here for now. type_name = constraint.type_name executive = constraint.object_type is not None From fda15122737ca3d5b8cdbc00131be59e6841ac6e Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 29 Jul 2021 00:05:36 +0100 Subject: [PATCH 188/294] Windows: Fix pdbconv handling of unknown types --- volatility3/framework/symbols/windows/pdbconv.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index fe61aa10df..b48e8268cc 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -770,7 +770,8 @@ def consume_type( consumed = leaf_type.vol.base_type.size remaining = length - consumed - type_handler, has_name, value_attribute = self.type_handlers.get(leaf_type.lookup(), 'LF_UNKNOWN') + type_handler, has_name, value_attribute = self.type_handlers.get(leaf_type.lookup(), + ('LF_UNKNOWN', False, None)) if type_handler in ['LF_FIELDLIST']: sub_length = remaining From e793956e023ab76fea78dfc531f504c9325b8d2b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 29 Jul 2021 00:17:22 +0100 Subject: [PATCH 189/294] Windows: Update netscan offsets for win10-17763-x64 (#478) --- .../symbols/windows/netscan/netscan-win10-17763-x64.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/symbols/windows/netscan/netscan-win10-17763-x64.json b/volatility3/framework/symbols/windows/netscan/netscan-win10-17763-x64.json index b7c8325a39..7f89607e5f 100644 --- a/volatility3/framework/symbols/windows/netscan/netscan-win10-17763-x64.json +++ b/volatility3/framework/symbols/windows/netscan/netscan-win10-17763-x64.json @@ -192,7 +192,7 @@ "_TCP_ENDPOINT": { "fields": { "Owner": { - "offset": 656, + "offset": 712, 
"type": { "kind": "pointer", "subtype": { @@ -202,7 +202,7 @@ } }, "CreateTime": { - "offset": 672, + "offset": 728, "type": { "kind": "union", "name": "_LARGE_INTEGER" From 30eec0cb761b73d3723a7928ec8a1774f75e9b7a Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 30 Jul 2021 00:17:26 +0100 Subject: [PATCH 190/294] Windows: Ensure the kernel contains an actual kvo --- volatility3/framework/automagic/pdbscan.py | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py index 474146ad1e..fe6e463e01 100644 --- a/volatility3/framework/automagic/pdbscan.py +++ b/volatility3/framework/automagic/pdbscan.py @@ -124,10 +124,11 @@ def set_kernel_virtual_offset(self, context: interfaces.context.ContextInterface if valid_kernel: # Set the virtual offset under the TranslationLayer it applies to virtual_layer, kvo, kernel = valid_kernel - kvo_path = interfaces.configuration.path_join(context.layers[virtual_layer].config_path, - 'kernel_virtual_offset') - context.config[kvo_path] = kvo - vollog.debug(f"Setting kernel_virtual_offset to {hex(kvo)}") + if kvo is not None: + kvo_path = interfaces.configuration.path_join(context.layers[virtual_layer].config_path, + 'kernel_virtual_offset') + context.config[kvo_path] = kvo + vollog.debug(f"Setting kernel_virtual_offset to {hex(kvo)}") def get_physical_layer_name(self, context, vlayer): return context.config.get(interfaces.configuration.path_join(vlayer.config_path, 'memory_layer'), None) From 0343c8f81e3aaf0cff5979dcbb686ad91a14bbb3 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 30 Jul 2021 21:20:33 +0100 Subject: [PATCH 191/294] Windows: Pdbsan typing and doc fixes --- volatility3/framework/automagic/pdbscan.py | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/volatility3/framework/automagic/pdbscan.py b/volatility3/framework/automagic/pdbscan.py index fe6e463e01..93d4337da3 100644 --- 
a/volatility3/framework/automagic/pdbscan.py +++ b/volatility3/framework/automagic/pdbscan.py @@ -56,7 +56,6 @@ def find_virtual_layers_from_req(self, context: interfaces.context.ContextInterf context: The context in which the `requirement` lives config_path: The path within the `context` for the `requirement`'s configuration variables requirement: The root of the requirement tree to search for :class:~`volatility3.framework.interfaces.layers.TranslationLayerRequirement` objects to scan - progress_callback: Means of providing the user with feedback during long processes Returns: A list of (layer_name, scan_results) @@ -90,6 +89,7 @@ def recurse_symbol_fulfiller(self, Args: context: Context on which to operate valid_kernel: A list of offsets where valid kernels have been found + progress_callback: Means of providing the user with feedback during long processes """ for sub_config_path, requirement in self._symbol_requirements: # TODO: Potentially think about multiple symbol requirements in both the same and different levels of the requirement tree @@ -138,7 +138,7 @@ def method_slow_scan(self, vlayer: layers.intel.Intel, progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: - def test_virtual_kernel(physical_layer_name, virtual_layer_name, kernel): + def test_virtual_kernel(physical_layer_name, virtual_layer_name: str, kernel: Dict[str, Any]) -> Optional[ValidKernelType]: # It seems the kernel is loaded at a fixed mapping (presumably because the memory manager hasn't started yet) if kernel['mz_offset'] is None or not isinstance(kernel['mz_offset'], int): # Rule out kernels that couldn't find a suitable MZ header @@ -153,7 +153,7 @@ def method_fixed_mapping(self, vlayer: layers.intel.Intel, progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: - def test_physical_kernel(physical_layer_name, virtual_layer_name, kernel): + def test_physical_kernel(physical_layer_name:str , virtual_layer_name: str, kernel: 
Dict[str, Any]) -> Optional[ValidKernelType]: # It seems the kernel is loaded at a fixed mapping (presumably because the memory manager hasn't started yet) if kernel['mz_offset'] is None or not isinstance(kernel['mz_offset'], int): # Rule out kernels that couldn't find a suitable MZ header @@ -239,14 +239,14 @@ def _method_offset(self, def method_module_offset(self, context: interfaces.context.ContextInterface, vlayer: layers.intel.Intel, - progress_callback: constants.ProgressCallback = None) -> ValidKernelType: + progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: return self._method_offset(context, vlayer, b"\\SystemRoot\\system32\\nt", -16 - int(vlayer.bits_per_register / 8), progress_callback) def method_kdbg_offset(self, context: interfaces.context.ContextInterface, vlayer: layers.intel.Intel, - progress_callback: constants.ProgressCallback = None) -> ValidKernelType: + progress_callback: constants.ProgressCallback = None) -> Optional[ValidKernelType]: return self._method_offset(context, vlayer, b"KDBG", 8, progress_callback) def check_kernel_offset(self, @@ -294,7 +294,7 @@ def determine_valid_kernel(self, Args: context: Context on which to operate - potential_kernels: Dictionary containing `GUID`, `age`, `pdb_name` and `mz_offset` keys + potential_layers: List of layer names that the kernel might live at progress_callback: Function taking a percentage and optional description to be called during expensive computations to indicate progress Returns: From 058de2d9a8c51e04873c3bebb6eb53346bde93df Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 4 Aug 2021 22:25:13 +0100 Subject: [PATCH 192/294] Windows: Mute crypto warnings from lgtm --- volatility3/framework/plugins/windows/cachedump.py | 2 +- volatility3/framework/plugins/windows/hashdump.py | 8 ++++---- volatility3/framework/plugins/windows/lsadump.py | 4 ++-- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/plugins/windows/cachedump.py 
b/volatility3/framework/plugins/windows/cachedump.py index 8436b46f65..95f3679076 100644 --- a/volatility3/framework/plugins/windows/cachedump.py +++ b/volatility3/framework/plugins/windows/cachedump.py @@ -46,7 +46,7 @@ def decrypt_hash(edata: bytes, nlkm: bytes, ch, xp: bool): hmac_md5 = HMAC.new(nlkm, ch) rc4key = hmac_md5.digest() rc4 = ARC4.new(rc4key) - data = rc4.encrypt(edata) + data = rc4.encrypt(edata) # lgtm [py/weak-cryptographic-algorithm] else: # based on Based on code from http://lab.mediaservice.net/code/cachedump.rb aes = AES.new(nlkm[16:32], AES.MODE_CBC, ch) diff --git a/volatility3/framework/plugins/windows/hashdump.py b/volatility3/framework/plugins/windows/hashdump.py index 7c3d8777c7..dac7f70733 100644 --- a/volatility3/framework/plugins/windows/hashdump.py +++ b/volatility3/framework/plugins/windows/hashdump.py @@ -134,7 +134,7 @@ def get_hbootkey(cls, samhive: registry.RegistryHive, bootkey: bytes) -> Optiona rc4_key = md5.digest() rc4 = ARC4.new(rc4_key) - hbootkey = rc4.encrypt(sam_data[0x80:0xA0]) + hbootkey = rc4.encrypt(sam_data[0x80:0xA0]) # lgtm [py/weak-cryptographic-algorithm] return hbootkey elif revision == 3: # AES encrypted @@ -153,7 +153,7 @@ def decrypt_single_salted_hash(cls, rid, hbootkey: bytes, enc_hash: bytes, _lmnt des2 = DES.new(des_k2, DES.MODE_ECB) cipher = AES.new(hbootkey[:16], AES.MODE_CBC, salt) obfkey = cipher.decrypt(enc_hash) - return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:16]) + return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:16]) # lgtm [py/weak-cryptographic-algorithm] @classmethod def get_user_hashes(cls, user: registry.CM_KEY_NODE, samhive: registry.RegistryHive, @@ -231,9 +231,9 @@ def decrypt_single_hash(cls, rid: int, hbootkey: bytes, enc_hash: bytes, lmntstr md5.update(hbootkey[:0x10] + pack(" Optional[bytes]: diff --git a/volatility3/framework/plugins/windows/lsadump.py b/volatility3/framework/plugins/windows/lsadump.py index c3d765f83f..1921b0453b 100644 --- 
a/volatility3/framework/plugins/windows/lsadump.py +++ b/volatility3/framework/plugins/windows/lsadump.py @@ -86,7 +86,7 @@ def get_lsa_key(cls, sechive: registry.RegistryHive, bootkey: bytes, vista_or_la rc4key = md5.digest() rc4 = ARC4.new(rc4key) - lsa_key = rc4.decrypt(obf_lsa_key[12:60]) + lsa_key = rc4.decrypt(obf_lsa_key[12:60]) # lgtm [py/weak-cryptographic-algorithm] lsa_key = lsa_key[0x10:0x20] else: lsa_key = cls.decrypt_aes(obf_lsa_key, bootkey) @@ -127,7 +127,7 @@ def decrypt_secret(cls, secret: bytes, key: bytes): des_key = hashdump.Hashdump.sidbytes_to_key(block_key) des = DES.new(des_key, DES.MODE_ECB) enc_block = enc_block + b"\x00" * int(abs(8 - len(enc_block)) % 8) - decrypted_data += des.decrypt(enc_block) + decrypted_data += des.decrypt(enc_block) # lgtm [py/weak-cryptographic-algorithm] j += 7 if len(key[j:j + 7]) < 7: j = len(key[j:j + 7]) From 1611a72eb9ee4811dd2102b7ad662f2316c9c31b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 5 Aug 2021 16:21:35 +0100 Subject: [PATCH 193/294] Windows: Fix memmap not to create empty files --- volatility3/framework/plugins/windows/memmap.py | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py index b0daa080f8..ecc20267c8 100644 --- a/volatility3/framework/plugins/windows/memmap.py +++ b/volatility3/framework/plugins/windows/memmap.py @@ -1,6 +1,7 @@ # This file is Copyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import contextlib import logging from typing import List @@ -48,7 +49,11 @@ def _generator(self, procs): excp.layer_name)) continue - file_handle = self.open(f"pid.{pid}.dmp") + if self.config['dump']: + file_handle = self.open(f"pid.{pid}.dmp") + else: + # Ensure the file isn't actually created if not needed + file_handle = contextlib.ExitStack() with 
file_handle as file_data: file_offset = 0 for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): From 8745931dead2844a6cfed68b9acee611288008bf Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 5 Aug 2021 17:28:36 +0100 Subject: [PATCH 194/294] Windows: Add coalesce support to memmap --- .../framework/plugins/windows/memmap.py | 40 +++++++++++++++++-- 1 file changed, 36 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py index ecc20267c8..e67a6a877b 100644 --- a/volatility3/framework/plugins/windows/memmap.py +++ b/volatility3/framework/plugins/windows/memmap.py @@ -27,6 +27,8 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), + requirements.BooleanRequirement(name = 'coalesce', description = 'Clump output where possible', + default = False, optional = True), requirements.IntRequirement(name = 'pid', description = "Process ID to include (all other processes are excluded)", optional = True), @@ -36,6 +38,29 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] optional = True) ] + @classmethod + def coalesce(cls, mapping_generator): + stashed_offset = stashed_mapped_offset = stashed_size = stashed_mapped_size = stashed_map_layer = None + for offset, size, mapped_offset, mapped_size, map_layer in mapping_generator: + if stashed_offset is None or (stashed_offset + stashed_size != offset) or ( + stashed_mapped_offset + stashed_mapped_size != mapped_offset) or (stashed_map_layer != map_layer): + # The block isn't contiguous + if stashed_offset is not None: + yield stashed_offset, stashed_size, stashed_mapped_offset, stashed_mapped_size, stashed_map_layer
+ # Update all the stashed values after output + stashed_offset = offset + stashed_mapped_offset = mapped_offset + stashed_size = size + stashed_mapped_size = mapped_size + stashed_map_layer = map_layer + else: + # Part of an existing block + stashed_size += size + stashed_mapped_size += mapped_size + # Yield whatever's left + if stashed_offset is not None: + yield stashed_offset, stashed_size, stashed_mapped_offset, stashed_mapped_size, stashed_map_layer + def _generator(self, procs): for proc in procs: pid = "Unknown" @@ -49,6 +74,10 @@ def _generator(self, procs): excp.layer_name)) continue + if self.config['coalesce']: + coalesce = self.coalesce + else: + coalesce = lambda x: x if self.config['dump']: file_handle = self.open(f"pid.{pid}.dmp") else: @@ -56,11 +85,10 @@ def _generator(self, procs): file_handle = contextlib.ExitStack() with file_handle as file_data: file_offset = 0 - for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): + for mapval in coalesce(proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True)): offset, size, mapped_offset, mapped_size, maplayer = mapval file_output = "Disabled" - file_offset += size if self.config['dump']: try: data = proc_layer.read(offset, size, pad = True) @@ -71,15 +99,19 @@ def _generator(self, procs): vollog.debug("Unable to write {}'s address {} to {}".format( proc_layer_name, offset, file_handle.preferred_filename)) - yield (0, (format_hints.Hex(offset), format_hints.Hex(mapped_offset), format_hints.Hex(mapped_size), + yield (0, (format_hints.Hex(offset), format_hints.Hex(mapped_offset), + format_hints.Hex(mapped_size), format_hints.Hex(file_offset), file_output)) + + file_offset += mapped_size offset += mapped_size def run(self): filter_func = pslist.PsList.create_pid_filter([self.config.get('pid', None)]) return renderers.TreeGrid([("Virtual", format_hints.Hex), ("Physical", format_hints.Hex), - ("Size", format_hints.Hex), ("Offset in File", format_hints.Hex), 
("File output", str)], + ("Size", format_hints.Hex), ("Offset in File", format_hints.Hex), + ("File output", str)], self._generator( pslist.PsList.list_processes(context = self.context, layer_name = self.config['primary'], From 2cfb65380a82ecc4638d371bc514238e75a7f8a3 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 11 Aug 2021 21:06:43 +0100 Subject: [PATCH 195/294] Core: Include API changes file --- API_CHANGES.md | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/API_CHANGES.md b/API_CHANGES.md index e69de29bb2..a04b9fc78b 100644 --- a/API_CHANGES.md +++ b/API_CHANGES.md @@ -0,0 +1,19 @@ +API Changes +=========== + +When an addition to the existing API is made, the minor version is bumped. +When an API feature or function is removed or changed, the major version is bumped. + + +1.2.0 +===== +* Added support for module collections +* Added context.modules +* Added ModuleRequirement +* Added get\_symbols\_by\_absolute\_location + +* Remove support for symbol\_shift and symbol\_mask from symbol tables + Symbols should be the data values from the JSON, and if they need modifying, + a module wrappr, or similar, should be used + + From 86771d15d5eb3402b9fd03768bf0bc37476c2a0a Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 11 Aug 2021 21:08:33 +0100 Subject: [PATCH 196/294] Core: Don't get to hasty with the API version --- API_CHANGES.md | 4 ---- 1 file changed, 4 deletions(-) diff --git a/API_CHANGES.md b/API_CHANGES.md index a04b9fc78b..4a65de04be 100644 --- a/API_CHANGES.md +++ b/API_CHANGES.md @@ -12,8 +12,4 @@ When an API feature or function is removed or changed, the major version is bump * Added ModuleRequirement * Added get\_symbols\_by\_absolute\_location -* Remove support for symbol\_shift and symbol\_mask from symbol tables - Symbols should be the data values from the JSON, and if they need modifying, - a module wrappr, or similar, should be used - From 4016b096c2d921572c7171875e889081552ba167 Mon Sep 17 00:00:00 2001 From: Mike 
Auty Date: Sun, 13 Jun 2021 11:53:51 +0100 Subject: [PATCH 197/294] Core: Add in offline constant --- volatility3/cli/__init__.py | 7 +++++++ volatility3/framework/constants/__init__.py | 2 +- volatility3/framework/exceptions.py | 11 +++++++++++ volatility3/framework/layers/resources.py | 15 ++++++++++++++- 4 files changed, 33 insertions(+), 2 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index e6f00728a3..6d75db46c4 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -165,6 +165,10 @@ def run(self): help = f"Change the default path ({constants.CACHE_PATH}) used to store the cache", default = constants.CACHE_PATH, type = str) + parser.add_argument("--offline", + help = "Do not search online for additional JSON files", + default = False, + action = 'store_true') # We have to filter out help, otherwise parse_known_args will trigger the help message before having # processed the plugin choice or had the plugin subparser added. 
@@ -216,6 +220,9 @@ def run(self): if partial_args.clear_cache: framework.clear_cache() + if partial_args.offline: + constants.OFFLINE = partial_args.offline + # Do the initialization ctx = contexts.Context() # Construct a blank context failures = framework.import_files(volatility3.plugins, diff --git a/volatility3/framework/constants/__init__.py b/volatility3/framework/constants/__init__.py index cfe0356c1c..b85c52d164 100644 --- a/volatility3/framework/constants/__init__.py +++ b/volatility3/framework/constants/__init__.py @@ -40,7 +40,7 @@ # We use the SemVer 2.0.0 versioning scheme VERSION_MAJOR = 1 # Number of releases of the library with a breaking change VERSION_MINOR = 2 # Number of changes that only add to the interface -VERSION_PATCH = 0 # Number of changes that do not change the interface +VERSION_PATCH = 1 # Number of changes that do not change the interface VERSION_SUFFIX = "" # TODO: At version 2.0.0, remove the symbol_shift feature diff --git a/volatility3/framework/exceptions.py b/volatility3/framework/exceptions.py index 7f381b1662..a234a353af 100644 --- a/volatility3/framework/exceptions.py +++ b/volatility3/framework/exceptions.py @@ -99,3 +99,14 @@ class MissingModuleException(VolatilityException): def __init__(self, module: str, *args) -> None: super().__init__(*args) self.module = module + + +class OfflineException(VolatilityException): + """Thrown when a remote resource is requested but Volatility is in offline mode""" + + def __init__(self, url: str, *args) -> None: + super().__init__(*args) + self._url = url + + def __str__(self): + return f'Volatility 3 is offline: unable to access {self._url}' diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index fb8bdb7cc5..35182a86be 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -17,7 +17,7 @@ from urllib import error from volatility3 import framework -from volatility3.framework import constants
+from volatility3.framework import constants, exceptions try: import magic @@ -34,6 +34,7 @@ vollog = logging.getLogger(__name__) + # TODO: Type-annotating the ResourceAccessor.open method is difficult because HTTPResponse is not actually an IO[Any] type # fix this @@ -117,6 +118,9 @@ def open(self, url: str, mode: str = "rb") -> Any: raise excp else: raise excp + except exceptions.OfflineException: + vollog.info(f"Not accessing {url} in offline mode") + raise with contextlib.closing(fp) as fp: # Cache the file locally @@ -227,6 +231,7 @@ class JarHandler(VolatilityHandler): Actual reference (found from https://www.w3.org/wiki/UriSchemes/jar) seemed not to return: http://developer.java.sun.com/developer/onlineTraining/protocolhandlers/ """ + @classmethod def non_cached_schemes(cls) -> List[str]: return ['jar'] @@ -249,3 +254,11 @@ def default_open(req: urllib.request.Request) -> Optional[Any]: zippath, filepath = zipsplit return zipfile.ZipFile(zippath).open(filepath) return None + + +class OfflineHandler(VolatilityHandler): + @staticmethod + def default_open(req: urllib.request.Request) -> Optional[Any]: + if constants.OFFLINE and req.type in ['http', 'https']: + raise exceptions.OfflineException(req.full_url) + return None From 79b3be4cf6a2f11602741e7e83269a9bdae07c11 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 13 Jun 2021 21:32:03 +0100 Subject: [PATCH 198/294] Automagic: Add in remote banner cache first attempt --- development/banner_server.py | 77 +++++++++++++++++++ .../framework/automagic/symbol_cache.py | 71 ++++++++++++++--- volatility3/framework/constants/__init__.py | 6 ++ 3 files changed, 142 insertions(+), 12 deletions(-) create mode 100644 development/banner_server.py diff --git a/development/banner_server.py b/development/banner_server.py new file mode 100644 index 0000000000..3449babdd0 --- /dev/null +++ b/development/banner_server.py @@ -0,0 +1,77 @@ +import argparse +import base64 +import json +import logging +import os +import pathlib 
+import urllib + +from volatility3.cli import PrintedProgress +from volatility3.framework import contexts, constants +from volatility3.framework.automagic import linux, mac + +vollog = logging.getLogger(__name__) + + +class BannerCacheGenerator: + + def __init__(self, path: str, url_prefix: str): + self._path = path + self._url_prefix = url_prefix + + def convert_url(self, url): + parsed = urllib.parse.urlparse(url) + + relpath = os.path.relpath(parsed.path, os.path.abspath(self._path)) + + return urllib.parse.urljoin(self._url_prefix, relpath) + + def run(self): + context = contexts.Context() + json_output = {} + + path = self._path + filename = '*' + + for banner_cache in [linux.LinuxBannerCache, mac.MacBannerCache]: + sub_path = banner_cache.os + potentials = [] + for extension in constants.ISF_EXTENSIONS: + # Hopefully these will not be large lists, otherwise this might be slow + try: + for found in pathlib.Path(path).joinpath(sub_path).resolve().rglob(filename + extension): + potentials.append(found.as_uri()) + except FileNotFoundError: + # If there's no linux symbols, don't cry about it + pass + + new_banners = banner_cache.read_new_banners(context, 'BannerServer', potentials, banner_cache.symbol_name, + banner_cache.os, progress_callback = PrintedProgress()) + result_banners = {} + for new_banner in new_banners: + # Only accept file schemes + value = [self.convert_url(url) for url in new_banners[new_banner] if + urllib.parse.urlparse(url).scheme == 'file'] + if value and new_banner: + # Convert files into URLs + result_banners[str(base64.b64encode(new_banner), 'latin-1')] = value + + json_output[banner_cache.os] = result_banners + + output_path = os.path.join(self._path, 'banners.json') + with open(output_path, 'w') as fp: + vollog.warning(f"Banners file written to {output_path}") + json.dump(json_output, fp) + + +if __name__ == '__main__': + + parser = argparse.ArgumentParser() + parser.add_argument('--path', default = os.path.dirname(__file__)) + 
parser.add_argument('--urlprefix', help = 'Web prefix that will eventually serve the ISF files', + default = 'http://localhost/symbols') + + args = parser.parse_args() + + bcg = BannerCacheGenerator(args.path, args.urlprefix) + bcg.run() diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index 627ffac8bc..59999a07ec 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -1,6 +1,7 @@ # This file is Copyright 2019 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import base64 import gc import json import logging @@ -9,9 +10,11 @@ import urllib import urllib.parse import urllib.request +import zipfile from typing import Dict, List, Optional from volatility3.framework import constants, exceptions, interfaces +from volatility3.framework.layers import resources from volatility3.framework.symbols import intermed vollog = logging.getLogger(__name__) @@ -81,20 +84,42 @@ def __call__(self, context, config_path, configurable, progress_callback = None) # We only need to be called once, so no recursion necessary banners = self.load_banners() - cacheables = list(intermed.IntermediateSymbolTable.file_symbol_url(self.os)) + cacheables = self.find_new_banner_files(banners, self.os) - for banner in banners: - for json_file in banners[banner]: - if json_file in cacheables: - cacheables.remove(json_file) + new_banners = self.read_new_banners(context, config_path, cacheables, self.symbol_name, self.os, + progress_callback) + + # Add in any new banners to the existing list + for new_banner in new_banners: + banner_list = banners.get(new_banner, []) + banners[new_banner] = list(set(banner_list + new_banners[new_banner])) + + # Do remote banners *after* the JSON loading, so that it doesn't pull down all the remote JSON + self.remote_banners(banners, self.os) + +
 # Rewrite the cached banners each run, since writing is faster than the banner_cache validation portion + self.save_banners(banners) + + if progress_callback is not None: + progress_callback(100, "Built {} caches".format(self.os)) + + @classmethod + def read_new_banners(cls, context: interfaces.context.ContextInterface, config_path: str, new_urls: List[str], + symbol_name: str, operating_system: str = None, + progress_callback = None) -> Optional[Dict[bytes, List[str]]]: + """Reads any new banners for the OS in question""" + if operating_system is None: + return None + + banners = {} - total = len(cacheables) + total = len(new_urls) if total > 0: vollog.info(f"Building {self.os} caches...") for current in range(total): if progress_callback is not None: progress_callback(current * 100 / total, f"Building {self.os} caches") - isf_url = cacheables[current] + isf_url = new_urls[current] isf = None try: @@ -104,7 +129,7 @@ def __call__(self, context, config_path, configurable, progress_callback = None)
- banner = isf.get_symbol(self.symbol_name).constant_data + banner = isf.get_symbol(symbol_name).constant_data vollog.log(constants.LOGLEVEL_VV, f"Caching banner {banner} for file {isf_url}") bannerlist = banners.get(banner, []) @@ -119,9 +144,31 @@ def __call__(self, context, config_path, configurable, progress_callback = None) if isf: del isf gc.collect() + return banners - # Rewrite the cached banners each run, since writing is faster than the banner_cache validation portion - self.save_banners(banners) + @classmethod + def find_new_banner_files(cls, banners: Dict[bytes, List[str]], operating_system: str) -> List[str]: + """Gathers all files and removes existing banners""" + cacheables = list(intermed.IntermediateSymbolTable.file_symbol_url(operating_system)) + for banner in banners: + for json_file in banners[banner]: + if json_file in cacheables: + cacheables.remove(json_file) + return cacheables - if progress_callback is not None: - progress_callback(100, f"Built {self.os} caches") + @classmethod + def remote_banners(cls, banners: Dict[bytes, List[str]], operating_system = None): + """Adds remote URLs to the banner list""" + if operating_system is None: + return None + + if not constants.OFFLINE: + # TODO: Only download the remote file once per amount of time + with resources.ResourceAccessor().open(url = constants.REMOTE_ISF_URL) as fp: + banner_list = json.load(fp) + if operating_system in banner_list: + for banner in banner_list[operating_system]: + binary_banner = base64.b64decode(banner) + file_list = banners.get(binary_banner, []) + file_list = list(set(file_list + banner_list[operating_system][banner])) + banners[binary_banner] = file_list diff --git a/volatility3/framework/constants/__init__.py b/volatility3/framework/constants/__init__.py index b85c52d164..9ec2fb88ff 100644 --- a/volatility3/framework/constants/__init__.py +++ b/volatility3/framework/constants/__init__.py @@ -94,3 +94,9 @@ class Parallelism(enum.IntEnum): """The minimum supported
version of the Intermediate Symbol Format""" ISF_MINIMUM_DEPRECATED = (3, 9, 9) """The highest version of the ISF that's deprecated (usually higher than supported)""" + +OFFLINE = False +"""Whether to avoid going online to retrieve missing/necessary JSON files""" + +REMOTE_ISF_URL = 'http://localhost:8000/banners.json' +"""Remote URL to query for a list of ISF addresses""" From 3b2aae40d2c7470af8c48bc5cbb2a8b1b79ef6d2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 12 Jul 2021 01:26:13 +0100 Subject: [PATCH 199/294] Automagic: Support remote banner versioning and chaining --- development/banner_server.py | 2 +- .../framework/automagic/symbol_cache.py | 51 +++++++++++++++---- 2 files changed, 43 insertions(+), 10 deletions(-) diff --git a/development/banner_server.py b/development/banner_server.py index 3449babdd0..3aea41c825 100644 --- a/development/banner_server.py +++ b/development/banner_server.py @@ -28,7 +28,7 @@ def convert_url(self, url): def run(self): context = contexts.Context() - json_output = {} + json_output = {'version': 1} path = self._path filename = '*' diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index 59999a07ec..caf186a236 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -157,18 +157,51 @@ def find_new_banner_files(cls, banners: Dict[bytes, List[str]], operating_system return cacheables @classmethod - def remote_banners(cls, banners: Dict[bytes, List[str]], operating_system = None): + def remote_banners(cls, banners: Dict[bytes, List[str]], operating_system = None, banner_location = None): """Adds remote URLs to the banner list""" if operating_system is None: return None + if banner_location is None: + banner_location = constants.REMOTE_ISF_URL + if not constants.OFFLINE: - # TODO: Only download the remote file once per amount of time - with resources.ResourceAccessor().open(url = constants.REMOTE_ISF_URL) as fp: -
banner_list = json.load(fp) - if operating_system in banner_list: - for banner in banner_list[operating_system]: - binary_banner = base64.b64decode(banner) - file_list = banners.get(binary_banner, []) - file_list = list(set(file_list + banner_list[operating_system][banner])) + rbf = RemoteBannerFormat(banner_location) + rbf.process(banners, operating_system) + + +class RemoteBannerFormat: + def __init__(self, location: str): + self._location = location + with resources.ResourceAccessor().open(url = location) as fp: + self._data = json.load(fp) + if not self._verify(): + raise ValueError("Unsupported version for remote banner list format") + + def _verify(self) -> bool: + version = self._data.get('version', 0) + if version in [1]: + setattr(self, 'process', getattr(self, f'process_v{version}')) + return True + return False + + def process(self, banners: Dict[bytes, List[str]], operating_system: Optional[str]): + raise ValueError("Banner List version not verified") + + def process_v1(self, banners: Dict[bytes, List[str]], operating_system: Optional[str]): + if operating_system in self._data: + for banner in self._data[operating_system]: + binary_banner = base64.b64decode(banner) + file_list = banners.get(binary_banner, []) + for value in self._data[operating_system][banner]: + if value not in file_list: + file_list = file_list + [value] banners[binary_banner] = file_list + if 'additional' in self._data: + for location in self._data['additional']: + try: + subrbf = RemoteBannerFormat(location) + subrbf.process(banners, operating_system) + except IOError: + vollog.debug(f"Remote file not found: {location}") + return banners From bb40a8a2071aad763f52252a899a1e9ae358a233 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 12 Jul 2021 02:24:55 +0100 Subject: [PATCH 200/294] Automagic: Catch remote download exceptions --- volatility3/framework/automagic/symbol_cache.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git 
a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index caf186a236..9af5b6defd 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -166,8 +166,11 @@ def remote_banners(cls, banners: Dict[bytes, List[str]], operating_system = None banner_location = constants.REMOTE_ISF_URL if not constants.OFFLINE: - rbf = RemoteBannerFormat(banner_location) - rbf.process(banners, operating_system) + try: + rbf = RemoteBannerFormat(banner_location) + rbf.process(banners, operating_system) + except urllib.error.URLError: + vollog.debug(f"Unable to download remote banner list from {banner_location}") class RemoteBannerFormat: From e7623baf458a75e7ceccbdbad0317707247d1cf7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 11 Aug 2021 21:24:39 +0100 Subject: [PATCH 201/294] Linux: Support None to turn off remote banner locations --- volatility3/framework/automagic/symbol_cache.py | 3 +-- volatility3/framework/constants/__init__.py | 2 +- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index 9af5b6defd..8010364f83 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -10,7 +10,6 @@ import urllib import urllib.parse import urllib.request -import zipfile from typing import Dict, List, Optional from volatility3.framework import constants, exceptions, interfaces @@ -165,7 +164,7 @@ def remote_banners(cls, banners: Dict[bytes, List[str]], operating_system = None if banner_location is None: banner_location = constants.REMOTE_ISF_URL - if not constants.OFFLINE: + if not constants.OFFLINE and banner_location is not None: try: rbf = RemoteBannerFormat(banner_location) rbf.process(banners, operating_system) diff --git a/volatility3/framework/constants/__init__.py b/volatility3/framework/constants/__init__.py 
index 9ec2fb88ff..43193852ab 100644 --- a/volatility3/framework/constants/__init__.py +++ b/volatility3/framework/constants/__init__.py @@ -98,5 +98,5 @@ class Parallelism(enum.IntEnum): OFFLINE = False """Whether to avoid going online to retrieve missing/necessary JSON files""" -REMOTE_ISF_URL = 'http://localhost:8000/banners.json' +REMOTE_ISF_URL = None # 'http://localhost:8000/banners.json' """Remote URL to query for a list of ISF addresses""" From 3ad20d566c943c4ed2cd5017ea0cef1ff193eaae Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 11 Aug 2021 21:51:41 +0100 Subject: [PATCH 202/294] Automagic: Minor fixes for jar files and layers --- .../framework/automagic/symbol_cache.py | 21 ++++++++++--------- .../framework/automagic/symbol_finder.py | 3 ++- 2 files changed, 13 insertions(+), 11 deletions(-) diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index 8010364f83..b2c227228d 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -10,6 +10,7 @@ import urllib import urllib.parse import urllib.request +import zipfile from typing import Dict, List, Optional from volatility3.framework import constants, exceptions, interfaces @@ -53,13 +54,13 @@ def load_banners(cls) -> BannersType: path, str(banner or b'', 'latin-1'))) banners[banner].remove(path) # This is probably excessive, but it's here if we need it - # if url.scheme == 'jar': - # zip_file, zip_path = url.path.split("!") - # zip_file = urllib.parse.urlparse(zip_file).path - # if ((not os.path.exists(zip_file)) or (zip_path not
in zipfile.ZipFile(zip_file).namelist())): + vollog.log(constants.LOGLEVEL_VV, + "Removing cached path {} for banner {}: file does not exist".format(path, banner)) + banners[banner].remove(path) if not banners[banner]: remove_banners.append(banner) @@ -100,7 +101,7 @@ def __call__(self, context, config_path, configurable, progress_callback = None) self.save_banners(banners) if progress_callback is not None: - progress_callback(100, "Built {} caches".format(self.os)) + progress_callback(100, f"Built {self.os} caches") @classmethod def read_new_banners(cls, context: interfaces.context.ContextInterface, config_path: str, new_urls: List[str], @@ -114,10 +115,10 @@ def read_new_banners(cls, context: interfaces.context.ContextInterface, config_p total = len(new_urls) if total > 0: - vollog.info(f"Building {self.os} caches...") + vollog.info(f"Building {operating_system} caches...") for current in range(total): if progress_callback is not None: - progress_callback(current * 100 / total, f"Building {self.os} caches") + progress_callback(current * 100 / total, f"Building {operating_system} caches") isf_url = new_urls[current] isf = None diff --git a/volatility3/framework/automagic/symbol_finder.py b/volatility3/framework/automagic/symbol_finder.py index 03d051c49d..f3a597a3c8 100644 --- a/volatility3/framework/automagic/symbol_finder.py +++ b/volatility3/framework/automagic/symbol_finder.py @@ -94,7 +94,8 @@ def _banner_scan(self, else: # Swap to the physical layer for scanning # TODO: Fix this so it works for layers other than just Intel - layer = context.layers[layer.config['memory_layer']] + if isinstance(layer, layers.intel.Intel): + layer = context.layers[layer.config['memory_layer']] banner_list = layer.scan(context = context, scanner = mss, progress_callback = progress_callback) for _, banner in banner_list: From 19b2a06964b5840e78755d12c36344c17d0b86b2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 11 Aug 2021 22:02:08 +0100 Subject: [PATCH 203/294] Layers: 
Coalesce intel mapping responses --- volatility3/framework/layers/intel.py | 35 +++++++++++++++++-- .../framework/plugins/windows/memmap.py | 31 +--------------- 2 files changed, 34 insertions(+), 32 deletions(-) diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 48fa205bdf..4a582f1fbe 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -193,6 +193,38 @@ def mapping(self, """Returns a sorted iterable of (offset, sublength, mapped_offset, mapped_length, layer) mappings. + This allows translation layers to provide maps of contiguous + regions in one layer + """ + stashed_offset = stashed_mapped_offset = stashed_size = stashed_mapped_size = stashed_map_layer = None + for offset, size, mapped_offset, mapped_size, map_layer in self._mapping(offset, length, ignore_errors): + if stashed_offset is None or (stashed_offset + stashed_size != offset) or ( + stashed_mapped_offset + stashed_mapped_size != mapped_offset) or (stashed_map_layer != map_layer): + # The block isn't contiguous + if stashed_offset is not None: + yield stashed_offset, stashed_size, stashed_mapped_offset, stashed_mapped_size, stashed_map_layer + # Update all the stashed values after output + stashed_offset = offset + stashed_mapped_offset = mapped_offset + stashed_size = size + stashed_mapped_size = mapped_size + stashed_map_layer = map_layer + else: + # Part of an existing block + stashed_size += size + stashed_mapped_size += mapped_size + # Yield whatever's left + if (stashed_offset is not None and stashed_mapped_offset is not None and stashed_size is not None + and stashed_mapped_size is not None and stashed_map_layer is not None): + yield stashed_offset, stashed_size, stashed_mapped_offset, stashed_mapped_size, stashed_map_layer + + def _mapping(self, + offset: int, + length: int, + ignore_errors: bool = False) -> Iterable[Tuple[int, int, int, int, str]]: + """Returns a sorted iterable of (offset, sublength, 
mapped_offset, mapped_length, layer) + mappings. + This allows translation layers to provide maps of contiguous regions in one layer """ @@ -328,12 +360,11 @@ def _translate(self, offset: int) -> Tuple[int, int, str]: class WindowsIntel32e(WindowsMixin, Intel32e): - # TODO: Fix appropriately in a future release. # Currently just a temporary workaround to deal with custom bit flag # in the PFN field for pages in transition state. # See https://github.com/volatilityfoundation/volatility3/pull/475 _maxphyaddr = 45 - + def _translate(self, offset: int) -> Tuple[int, int, str]: return self._translate_swap(self, offset, self._bits_per_register // 2) diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py index e67a6a877b..c303a8e7bf 100644 --- a/volatility3/framework/plugins/windows/memmap.py +++ b/volatility3/framework/plugins/windows/memmap.py @@ -27,8 +27,6 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] architectures = ["Intel32", "Intel64"]), requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), - requirements.BooleanRequirement(name = 'coalesce', description = 'Clump output where possible', - default = False, optional = True), requirements.IntRequirement(name = 'pid', description = "Process ID to include (all other processes are excluded)", optional = True), @@ -38,29 +36,6 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] optional = True) ] - @classmethod - def coalesce(cls, mapping_generator): - stashed_offset = stashed_mapped_offset = stashed_size = stashed_mapped_size = stashed_map_layer = None - for offset, size, mapped_offset, mapped_size, map_layer in mapping_generator: - if stashed_offset is None or (stashed_offset + stashed_size != offset) or ( - stashed_mapped_offset + stashed_mapped_size != 
mapped_offset) or (stashed_map_layer != map_layer): - # The block isn't contiguous - if stashed_offset is not None: - yield stashed_offset, stashed_size, stashed_mapped_offset, stashed_mapped_size, stashed_map_layer - # Update all the stashed values after output - stashed_offset = offset - stashed_mapped_offset = mapped_offset - stashed_size = size - stashed_mapped_size = mapped_size - stashed_map_layer = map_layer - else: - # Part of an existing block - stashed_size += size - stashed_mapped_size += mapped_size - # Yield whatever's left - if stashed_offset is not None: - yield stashed_offset, stashed_size, stashed_mapped_offset, stashed_mapped_size, stashed_map_layer - def _generator(self, procs): for proc in procs: pid = "Unknown" @@ -74,10 +49,6 @@ def _generator(self, procs): excp.layer_name)) continue - if self.config['coalesce']: - coalesce = self.coalesce - else: - coalesce = lambda x: x if self.config['dump']: file_handle = self.open(f"pid.{pid}.dmp") else: @@ -85,7 +56,7 @@ def _generator(self, procs): file_handle = contextlib.ExitStack() with file_handle as file_data: file_offset = 0 - for mapval in coalesce(proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True)): + for mapval in proc_layer.mapping(0x0, proc_layer.maximum_address, ignore_errors = True): offset, size, mapped_offset, mapped_size, maplayer = mapval file_output = "Disabled" From d86b9b34e4033cafafd9b044685cfb67976b258b Mon Sep 17 00:00:00 2001 From: x Date: Thu, 12 Aug 2021 15:47:16 +0000 Subject: [PATCH 204/294] Fix return types, change confusing use of continue, switch to get_absolute_symbol_address --- .../plugins/windows/skeleton_key_check.py | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index 75a561d551..e380388b6e 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ 
b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -13,7 +13,7 @@ import logging, io -from typing import Iterable, Tuple, List +from typing import Iterable, Tuple, List, Optional from volatility3.framework.symbols.windows import pdbutil from volatility3.framework import interfaces, symbols, exceptions @@ -170,9 +170,9 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, """ cryptdll_module = self.context.module(cryptdll_symbols, layer_name = proc_layer_name, offset = cryptdll_base) - rc4HmacInitialize = cryptdll_module.get_symbol("rc4HmacInitialize").address + cryptdll_base + rc4HmacInitialize = cryptdll_module.get_absolute_symbol_address("rc4HmacInitialize") - rc4HmacDecrypt = cryptdll_module.get_symbol("rc4HmacDecrypt").address + cryptdll_base + rc4HmacDecrypt = cryptdll_module.get_absolute_symbol_address("rc4HmacDecrypt") count_address = cryptdll_module.get_symbol("cCSystems").address @@ -183,7 +183,7 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, except exceptions.InvalidAddressException: count = 16 - array_start = cryptdll_module.get_symbol("CSystems").address + cryptdll_base + array_start = cryptdll_module.get_absolute_symbol_address("CSystems") array = self._construct_ecrypt_array(array_start, count, cryptdll_types) @@ -235,12 +235,13 @@ def _find_lsass_proc(self, proc_list: Iterable) -> \ try: proc_id = proc.UniqueProcessId proc_layer_name = proc.add_process_layer() + + return proc, proc_layer_name + except exceptions.InvalidAddressException as excp: vollog.debug("Process {}: invalid address {} in layer {}".format(proc_id, excp.invalid_address, excp.layer_name)) - continue - return proc, proc_layer_name return None, None @@ -335,7 +336,7 @@ def _get_rip_relative_target(self, inst) -> int: def _analyze_cdlocatecsystem(self, function_bytes: bytes, function_start: int, cryptdll_types: interfaces.context.ModuleInterface, - proc_layer_name: str) -> Tuple[int, int]: + proc_layer_name: str) -> 
Optional[interfaces.objects.ObjectInterface]: """ Performs static analysis on CDLocateCSystem to find the instructions that reference CSystems as well as cCsystems @@ -394,7 +395,7 @@ def _analyze_cdlocatecsystem(self, function_bytes: bytes, def _find_csystems_with_export(self, proc_layer_name: str, cryptdll_types: interfaces.context.ModuleInterface, cryptdll_base: int, - _) -> interfaces.context.ModuleInterface: + _) -> Optional[interfaces.objects.ObjectInterface]: """ Uses export table analysis to locate CDLocateCsystem This function references CSystems and cCsystems From 70750d2b922133b533a0b6bc981010aaf599179f Mon Sep 17 00:00:00 2001 From: x Date: Thu, 12 Aug 2021 22:52:08 +0000 Subject: [PATCH 205/294] Remove unused variable from plugin --- volatility3/framework/plugins/windows/skeleton_key_check.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index e380388b6e..9980d1392b 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -562,8 +562,6 @@ def _generator(self, procs): vollog.info("Unable to find CSystems inside of cryptdll.dll. 
Analysis cannot proceed.") return - found_target = False - for csystem in csystems: if not self.context.layers[proc_layer_name].is_valid(csystem.vol.offset, csystem.vol.size): continue From ef27d07183177b611c33428e1c8224619d82dd4e Mon Sep 17 00:00:00 2001 From: x Date: Fri, 13 Aug 2021 15:06:48 +0000 Subject: [PATCH 206/294] Add missing patch that fixes KERB_ECRYPT structure size --- volatility3/framework/symbols/windows/kerb_ecrypt.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/kerb_ecrypt.json b/volatility3/framework/symbols/windows/kerb_ecrypt.json index bcba19b76e..95e1a1d6a4 100644 --- a/volatility3/framework/symbols/windows/kerb_ecrypt.json +++ b/volatility3/framework/symbols/windows/kerb_ecrypt.json @@ -77,7 +77,7 @@ } }, "kind": "struct", - "size": 256 + "size": 128 } }, "base_types": { From db2ec8cfb21e0f3f4808bc55b56d27a115263565 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 16 Aug 2021 16:58:26 +0100 Subject: [PATCH 207/294] Core: Ensure module configuration is recordable --- volatility3/framework/interfaces/context.py | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/volatility3/framework/interfaces/context.py b/volatility3/framework/interfaces/context.py index 6b69c9f80c..e42753363a 100644 --- a/volatility3/framework/interfaces/context.py +++ b/volatility3/framework/interfaces/context.py @@ -163,9 +163,29 @@ def __init__(self, self._module_name = module_name self._layer_name = layer_name self._offset = offset + # TODO: Figure out about storing/requesting the native_layer_name for a module in the configuration + # The current module requirement does not ask for nor act upon this information self._native_layer_name = native_layer_name or layer_name self._symbol_table_name = symbol_table_name or self._module_name + def build_configuration(self) -> 'configuration.HierarchicalDict': + """Builds the configuration dictionary for this specific Module""" + + config = 
super().build_configuration() + + config['offset'] = self.config['offset'] + subconfigs = {'symbol_table_name': self.context.symbol_space[self.symbol_table_name].build_configuration(), + 'layer_name': self.context.layers[self.layer_name].build_configuration()} + + if self.layer_name != self._native_layer_name: + subconfigs['native_layer_name'] = self.context.layers[self._native_layer_name].build_configuration() + + for subconfig in subconfigs: + for req in subconfigs[subconfig]: + config[interfaces.configuration.path_join(subconfig, req)] = subconfigs[subconfig][req] + + return config + @property def name(self) -> str: """The name of the constructed module.""" From 98db5e46d4499f05164e8d945c1ebd51b3e7126f Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 19 Aug 2021 11:12:05 +0100 Subject: [PATCH 208/294] Windows: Remove hardcoded parameter in poolscanner --- volatility3/framework/plugins/windows/poolscanner.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py index a6a6cae0c9..09f384d899 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -294,7 +294,7 @@ def generate_pool_scan(cls, mem_object = header.get_object(type_name = constraint.type_name, use_top_down = is_windows_8_or_later, executive = constraint.object_type is not None, - native_layer_name = 'primary', + native_layer_name = layer_name, kernel_symbol_table = symbol_table) if mem_object is None: From 260f87933406e4646f2cd12db5b76b88214174ae Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 19 Aug 2021 15:32:48 +0100 Subject: [PATCH 209/294] Core: Fix caching of pointer objects --- volatility3/framework/objects/__init__.py | 28 +++++++++++++---------- 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 
15d2ef63e5..cc96326566 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -3,7 +3,6 @@ # import collections -import functools import logging import struct from typing import Any, ClassVar, Dict, List, Iterable, Optional, Tuple, Type, Union as TUnion, overload @@ -292,6 +291,7 @@ def __init__(self, subtype: Optional[templates.ObjectTemplate] = None) -> None: super().__init__(context = context, object_info = object_info, type_name = type_name, data_format = data_format) self._vol['subtype'] = subtype + self._cache = None @classmethod def _unmarshall(cls, context: interfaces.context.ContextInterface, data_format: DataFormatInfo, @@ -311,7 +311,6 @@ def _unmarshall(cls, context: interfaces.context.ContextInterface, data_format: value = int.from_bytes(data, byteorder = endian, signed = signed) return value & mask - @functools.lru_cache(3) def dereference(self, layer_name: Optional[str] = None) -> interfaces.objects.ObjectInterface: """Dereferences the pointer. @@ -320,14 +319,19 @@ def dereference(self, layer_name: Optional[str] = None) -> interfaces.objects.Ob defaults to the same layer that the pointer is currently instantiated in. 
""" - layer_name = layer_name or self.vol.native_layer_name - mask = self._context.layers[layer_name].address_mask - offset = self & mask - return self.vol.subtype(context = self._context, - object_info = interfaces.objects.ObjectInformation(layer_name = layer_name, - offset = offset, - parent = self, - size = self.vol.subtype.size)) + # Do our own caching because lru_cache doesn't seem to memoize correctly across multiple uses + # Cache clearing should be done by a cast (we can add a specific method to reset a pointer, + # but hopefully it's not necessary) + if self._cache is None: + layer_name = layer_name or self.vol.native_layer_name + mask = self._context.layers[layer_name].address_mask + offset = self & mask + self._cache = self.vol.subtype(context = self._context, + object_info = interfaces.objects.ObjectInformation(layer_name = layer_name, + offset = offset, + parent = self, + size = self.vol.subtype.size)) + return self._cache def is_readable(self, layer_name: Optional[str] = None) -> bool: """Determines whether the address of this pointer can be read from @@ -338,7 +342,7 @@ def is_readable(self, layer_name: Optional[str] = None) -> bool: def __getattr__(self, attr: str) -> Any: """Convenience function to access unknown attributes by getting them from the subtype object.""" - if attr in ['vol', '_vol']: + if attr in ['vol', '_vol', '_cache']: raise AttributeError("Pointer not initialized before use") return getattr(self.dereference(), attr) @@ -737,7 +741,7 @@ def __getattr__(self, attr: str) -> Any: raise AttributeError("Object has not been properly initialized") if attr in self._concrete_members: return self._concrete_members[attr] - if attr.startswith("_") and not attr.startswith("__") and "__" in attr: + if attr.startswith("_") and not attr.startswith("__") and "__" in attr: attr = attr[attr.find("__", 1):] # See issue #522 if attr in self.vol.members: mask = self._context.layers[self.vol.layer_name].address_mask From 
0aa59111df23b38da487e4e9f9a22bd457b128a2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 19 Aug 2021 23:06:55 +0100 Subject: [PATCH 210/294] Generic: YaraScanner fail on no rules --- volatility3/framework/plugins/yarascan.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index 3ca4362108..86a7426a5a 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -26,6 +26,8 @@ class YaraScanner(interfaces.layers.ScannerInterface): # yara.Rules isn't exposed, so we can't type this properly def __init__(self, rules) -> None: super().__init__() + if rules is None: + raise ValueError("No rules provided to YaraScanner") self._rules = rules def __call__(self, data: bytes, data_offset: int) -> Iterable[Tuple[int, str, str, bytes]]: From ab075ac667ca95793232d2f30f27b926867ca7fc Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 20 Aug 2021 00:52:43 +0100 Subject: [PATCH 211/294] Windows: Fix pstree physical/virtual offsets --- volatility3/framework/plugins/windows/pstree.py | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/volatility3/framework/plugins/windows/pstree.py b/volatility3/framework/plugins/windows/pstree.py index b8c99688f8..3459d4ed59 100644 --- a/volatility3/framework/plugins/windows/pstree.py +++ b/volatility3/framework/plugins/windows/pstree.py @@ -2,7 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import datetime -from typing import Dict, Set +from typing import Dict, Set, Tuple from volatility3.framework import objects, interfaces, renderers from volatility3.framework.configuration import requirements @@ -18,7 +18,7 @@ class PsTree(interfaces.plugins.PluginInterface): def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) - self._processes: Dict[int, interfaces.objects.ObjectInterface] = {} + self._processes: Dict[int, 
Tuple[interfaces.objects.ObjectInterface, int]] = {} self._levels: Dict[int, int] = {} self._children: Dict[int, Set[int]] = {} @@ -45,12 +45,12 @@ def find_level(self, pid: objects.Pointer) -> None: seen = set([]) seen.add(pid) level = 0 - proc = self._processes.get(pid, None) + proc, _ = self._processes.get(pid, None) while proc is not None and proc.InheritedFromUniqueProcessId not in seen: child_list = self._children.get(proc.InheritedFromUniqueProcessId, set([])) child_list.add(proc.UniqueProcessId) self._children[proc.InheritedFromUniqueProcessId] = child_list - proc = self._processes.get(proc.InheritedFromUniqueProcessId, None) + proc, _ = self._processes.get(proc.InheritedFromUniqueProcessId, (None, None)) level += 1 self._levels[pid] = level @@ -65,14 +65,14 @@ def _generator(self): memory = self.context.layers[layer_name] (_, _, offset, _, _) = list(memory.mapping(offset = proc.vol.offset, length = 0))[0] - self._processes[proc.UniqueProcessId] = proc + self._processes[proc.UniqueProcessId] = proc, offset # Build the child/level maps for pid in self._processes: self.find_level(pid) def yield_processes(pid): - proc = self._processes[pid] + proc, offset = self._processes[pid] row = (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, errors = 'replace'), format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), From 97da37d9683f179b14b6bef555726121d107b3bd Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 19 Aug 2021 11:11:10 +0100 Subject: [PATCH 212/294] Windows: Update to kernel module for ease of config --- .../framework/plugins/windows/bigpools.py | 14 +- .../framework/plugins/windows/cachedump.py | 23 +- .../framework/plugins/windows/callbacks.py | 22 +- .../framework/plugins/windows/cmdline.py | 19 +- .../framework/plugins/windows/dlllist.py | 23 +- .../framework/plugins/windows/driverirp.py | 14 +- 
.../framework/plugins/windows/driverscan.py | 12 +- .../framework/plugins/windows/dumpfiles.py | 47 ++-- .../framework/plugins/windows/envars.py | 18 +- .../framework/plugins/windows/filescan.py | 12 +- .../plugins/windows/getservicesids.py | 14 +- .../framework/plugins/windows/getsids.py | 18 +- .../framework/plugins/windows/handles.py | 43 ++-- .../framework/plugins/windows/hashdump.py | 22 +- volatility3/framework/plugins/windows/info.py | 24 +- .../framework/plugins/windows/lsadump.py | 22 +- .../framework/plugins/windows/malfind.py | 28 +-- .../framework/plugins/windows/memmap.py | 13 +- .../framework/plugins/windows/modscan.py | 14 +- .../framework/plugins/windows/modules.py | 11 +- .../framework/plugins/windows/mutantscan.py | 12 +- .../framework/plugins/windows/netscan.py | 18 +- .../framework/plugins/windows/netstat.py | 24 +- .../framework/plugins/windows/poolscanner.py | 14 +- .../framework/plugins/windows/privileges.py | 13 +- .../framework/plugins/windows/pslist.py | 25 +- .../framework/plugins/windows/psscan.py | 23 +- .../framework/plugins/windows/pstree.py | 15 +- .../plugins/windows/registry/hivelist.py | 18 +- .../plugins/windows/registry/hivescan.py | 15 +- .../plugins/windows/registry/printkey.py | 21 +- .../plugins/windows/registry/userassist.py | 22 +- .../plugins/windows/skeleton_key_check.py | 231 +++++++++--------- volatility3/framework/plugins/windows/ssdt.py | 18 +- .../framework/plugins/windows/strings.py | 18 +- .../framework/plugins/windows/svcscan.py | 19 +- .../framework/plugins/windows/symlinkscan.py | 12 +- .../framework/plugins/windows/vadinfo.py | 18 +- .../framework/plugins/windows/vadyarascan.py | 14 +- .../framework/plugins/windows/verinfo.py | 20 +- .../framework/plugins/windows/virtmap.py | 14 +- 41 files changed, 517 insertions(+), 480 deletions(-) diff --git a/volatility3/framework/plugins/windows/bigpools.py b/volatility3/framework/plugins/windows/bigpools.py index 5ffe8b7c71..1b013e81db 100644 --- 
a/volatility3/framework/plugins/windows/bigpools.py +++ b/volatility3/framework/plugins/windows/bigpools.py @@ -20,17 +20,15 @@ class BigPools(interfaces.plugins.PluginInterface): """List big page pools.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.StringRequirement(name = 'tags', description = "Comma separated list of pool tags to filter pools returned", optional = True, @@ -108,9 +106,11 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[int, str]]]: # , str, int]]]: else: tags = None + kernel = self.context.modules[self.config['kernel']] + for big_pool in self.list_big_pools(context = self.context, - layer_name = self.config["primary"], - symbol_table = self.config["nt_symbols"], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, tags = tags): num_bytes = big_pool.get_number_of_bytes() diff --git a/volatility3/framework/plugins/windows/cachedump.py b/volatility3/framework/plugins/windows/cachedump.py index 95f3679076..7a8c9933d5 100644 --- a/volatility3/framework/plugins/windows/cachedump.py +++ b/volatility3/framework/plugins/windows/cachedump.py @@ -21,16 +21,14 @@ class Cachedump(interfaces.plugins.PluginInterface): """Dumps lsa secrets from memory""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ - 
requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)), requirements.PluginRequirement(name = 'lsadump', plugin = lsadump.Lsadump, version = (1, 0, 0)), requirements.PluginRequirement(name = 'hashdump', plugin = hashdump.Hashdump, version = (1, 1, 0)) @@ -46,7 +44,7 @@ def decrypt_hash(edata: bytes, nlkm: bytes, ch, xp: bool): hmac_md5 = HMAC.new(nlkm, ch) rc4key = hmac_md5.digest() rc4 = ARC4.new(rc4key) - data = rc4.encrypt(edata) # lgtm [py/weak-cryptographic-algorithm] + data = rc4.encrypt(edata) # lgtm [py/weak-cryptographic-algorithm] else: # based on Based on code from http://lab.mediaservice.net/code/cachedump.rb aes = AES.new(nlkm[16:32], AES.MODE_CBC, ch) @@ -90,7 +88,10 @@ def _generator(self, syshive, sechive): vollog.warning('Unable to find bootkey') return - vista_or_later = versions.is_vista_or_later(context = self.context, symbol_table = self.config['nt_symbols']) + kernel = self.context.modules[self.config['kernel']] + + vista_or_later = versions.is_vista_or_later(context = self.context, + symbol_table = kernel.symbol_table_name) lsakey = lsadump.Lsadump.get_lsa_key(sechive, bootkey, vista_or_later) if not lsakey: @@ -129,10 +130,12 @@ def run(self): syshive = sechive = None + kernel = self.context.modules[self.config['kernel']] + for hive in hivelist.HiveList.list_hives(self.context, self.config_path, - self.config['primary'], - self.config['nt_symbols'], + kernel.layer_name, + kernel.symbol_table_name, hive_offsets = None if offset is None else [offset]): if hive.get_name().split('\\')[-1].upper() == 'SYSTEM': @@ -147,5 +150,5 @@ def 
run(self): vollog.warning('Unable to locate SECURITY hive') return - return renderers.TreeGrid([("Username", str), ("Domain", str), ("Domain name", str), ('Hashh', bytes)], + return renderers.TreeGrid([("Username", str), ("Domain", str), ("Domain name", str), ('Hash', bytes)], self._generator(syshive, sechive)) diff --git a/volatility3/framework/plugins/windows/callbacks.py b/volatility3/framework/plugins/windows/callbacks.py index ef9ca98c09..46711d152f 100644 --- a/volatility3/framework/plugins/windows/callbacks.py +++ b/volatility3/framework/plugins/windows/callbacks.py @@ -19,16 +19,14 @@ class Callbacks(interfaces.plugins.PluginInterface): """Lists kernel callbacks and notification routines.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'ssdt', plugin = ssdt.SSDT, version = (1, 0, 0)), requirements.PluginRequirement(name = 'svcscan', plugin = svcscan.SvcScan, version = (1, 0, 0)) ] @@ -191,7 +189,8 @@ def list_bugcheck_reason_callbacks(cls, context: interfaces.context.ContextInter continue try: - component: Union[interfaces.renderers.BaseAbsentValue, interfaces.objects.ObjectInterface] = ntkrnlmp.object( + component: Union[ + interfaces.renderers.BaseAbsentValue, interfaces.objects.ObjectInterface] = ntkrnlmp.object( "string", absolute = True, offset = callback.Component, max_length = 64, errors = "replace" ) except exceptions.InvalidAddressException: @@ -244,17 +243,20 @@ def 
list_bugcheck_callbacks(cls, context: interfaces.context.ContextInterface, l def _generator(self): - callback_table_name = self.create_callback_table(self.context, self.config["nt_symbols"], self.config_path) + kernel = self.context.modules[self.config['kernel']] - collection = ssdt.SSDT.build_module_collection(self.context, self.config['primary'], self.config['nt_symbols']) + callback_table_name = self.create_callback_table(self.context, kernel.symbol_table_name, + self.config_path) + + collection = ssdt.SSDT.build_module_collection(self.context, kernel.layer_name, kernel.symbol_table_name) callback_methods = (self.list_notify_routines, self.list_bugcheck_callbacks, self.list_bugcheck_reason_callbacks, self.list_registry_callbacks) for callback_method in callback_methods: for callback_type, callback_address, callback_detail in callback_method(self.context, - self.config['primary'], - self.config['nt_symbols'], + kernel.layer_name, + kernel.symbol_table_name, callback_table_name): if callback_detail is None: diff --git a/volatility3/framework/plugins/windows/cmdline.py b/volatility3/framework/plugins/windows/cmdline.py index 4b814b9806..a3f418be07 100644 --- a/volatility3/framework/plugins/windows/cmdline.py +++ b/volatility3/framework/plugins/windows/cmdline.py @@ -15,17 +15,15 @@ class CmdLine(interfaces.plugins.PluginInterface): """Lists process command line arguments.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 
'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', element_type = int, @@ -57,13 +55,15 @@ def get_cmdline(cls, context: interfaces.context.ContextInterface, kernel_table_ def _generator(self, procs): + kernel = self.context.modules[self.config['kernel']] + for proc in procs: process_name = utility.array_to_string(proc.ImageFileName) proc_id = "Unknown" try: proc_id = proc.UniqueProcessId - result_text = self.get_cmdline(self.context, self.config["nt_symbols"], proc) + result_text = self.get_cmdline(self.context, kernel.symbol_table_name, proc) except exceptions.SwappedInvalidAddressException as exp: result_text = f"Required memory at {exp.invalid_address:#x} is inaccessible (swapped)" @@ -78,11 +78,14 @@ def _generator(self, procs): yield (0, (proc.UniqueProcessId, process_name, result_text)) def run(self): + + kernel = self.context.modules[self.config['kernel']] + filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) return renderers.TreeGrid([("PID", int), ("Process", str), ("Args", str)], self._generator( pslist.PsList.list_processes(context = self.context, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_func = filter_func))) diff --git a/volatility3/framework/plugins/windows/dlllist.py b/volatility3/framework/plugins/windows/dlllist.py index e02ad6e960..992d9538ef 100644 --- a/volatility3/framework/plugins/windows/dlllist.py +++ b/volatility3/framework/plugins/windows/dlllist.py @@ -20,17 +20,15 @@ class DllList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Lists the loaded modules in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (2, 0, 0) @classmethod def 
get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.VersionRequirement(name = 'pslist', component = pslist.PsList, version = (2, 0, 0)),
             requirements.VersionRequirement(name = 'info', component = info.Info, version = (1, 0, 0)),
             requirements.ListRequirement(name = 'pid',
@@ -94,7 +92,9 @@ def _generator(self, procs):
                                                                 "pe",
                                                                 class_types = pe.class_types)

-        kuser = info.Info.get_kuser_structure(self.context, self.config['primary'], self.config['nt_symbols'])
+        kernel = self.context.modules[self.config['kernel']]
+
+        kuser = info.Info.get_kuser_structure(self.context, kernel.layer_name, kernel.symbol_table_name)
         nt_major_version = int(kuser.NtMajorVersion)
         nt_minor_version = int(kuser.NtMinorVersion)
         # LoadTime only applies to versions higher or equal to Window 7 (6.1 and higher)
@@ -144,10 +144,12 @@ def _generator(self, procs):
                        format_hints.Hex(entry.SizeOfImage), BaseDllName, FullDllName, DllLoadTime, file_output))

     def generate_timeline(self):
+        kernel = self.context.modules[self.config['kernel']]
+
         for row in self._generator(
                 pslist.PsList.list_processes(context = self.context,
-                                             layer_name = self.config['primary'],
-                                             symbol_table = self.config['nt_symbols'])):
+                                             layer_name = kernel.layer_name,
+                                             symbol_table = kernel.symbol_table_name)):
             _depth, row_data = row
             if not isinstance(row_data[6], datetime.datetime):
                 continue
@@ -157,12 +159,13 @@ def generate_timeline(self):

     def run(self):
         filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))
+        kernel = self.context.modules[self.config['kernel']]

         return renderers.TreeGrid([("PID", int), ("Process", str), ("Base", format_hints.Hex),
                                    ("Size", format_hints.Hex), ("Name", str), ("Path", str),
                                    ("LoadTime", datetime.datetime), ("File output", str)],
                                   self._generator(
                                       pslist.PsList.list_processes(context = self.context,
-                                                                   layer_name = self.config['primary'],
-                                                                   symbol_table = self.config['nt_symbols'],
+                                                                   layer_name = kernel.layer_name,
+                                                                   symbol_table = kernel.symbol_table_name,
                                                                    filter_func = filter_func)))
diff --git a/volatility3/framework/plugins/windows/driverirp.py b/volatility3/framework/plugins/windows/driverirp.py
index 64231b9dbd..3ed086c4a5 100644
--- a/volatility3/framework/plugins/windows/driverirp.py
+++ b/volatility3/framework/plugins/windows/driverirp.py
@@ -22,25 +22,23 @@
 class DriverIrp(interfaces.plugins.PluginInterface):
     """List IRPs for drivers in a particular windows memory image."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)

     @classmethod
     def get_requirements(cls):
         return [
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'ssdt', plugin = ssdt.SSDT, version = (1, 0, 0)),
             requirements.PluginRequirement(name = 'driverscan', plugin = driverscan.DriverScan, version = (1, 0, 0)),
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
         ]

     def _generator(self):
+        kernel = self.context.modules[self.config['kernel']]

-        collection = ssdt.SSDT.build_module_collection(self.context, self.config['primary'], self.config['nt_symbols'])
+        collection = ssdt.SSDT.build_module_collection(self.context, kernel.layer_name, kernel.symbol_table_name)

-        for driver in driverscan.DriverScan.scan_drivers(self.context, self.config['primary'],
-                                                         self.config['nt_symbols']):
+        for driver in driverscan.DriverScan.scan_drivers(self.context, kernel.layer_name, kernel.symbol_table_name):

             try:
                 driver_name = driver.get_driver_name()
diff --git a/volatility3/framework/plugins/windows/driverscan.py b/volatility3/framework/plugins/windows/driverscan.py
index 7b44d4ebc8..498ae33387 100644
--- a/volatility3/framework/plugins/windows/driverscan.py
+++ b/volatility3/framework/plugins/windows/driverscan.py
@@ -13,16 +13,14 @@
 class DriverScan(interfaces.plugins.PluginInterface):
     """Scans for drivers present in a particular windows memory image."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)

     @classmethod
     def get_requirements(cls):
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)),
         ]

@@ -51,7 +49,9 @@ def scan_drivers(cls,
             yield mem_object

     def _generator(self):
-        for driver in self.scan_drivers(self.context, self.config['primary'], self.config['nt_symbols']):
+        kernel = self.context.modules[self.config['kernel']]
+
+        for driver in self.scan_drivers(self.context, kernel.layer_name, kernel.symbol_table_name):

             try:
                 driver_name = driver.get_driver_name()
diff --git a/volatility3/framework/plugins/windows/dumpfiles.py b/volatility3/framework/plugins/windows/dumpfiles.py
index 0258fbf21f..ca78696f84 100755
--- a/volatility3/framework/plugins/windows/dumpfiles.py
+++ b/volatility3/framework/plugins/windows/dumpfiles.py
@@ -4,12 +4,13 @@

 import logging
 import ntpath
+from typing import List, Tuple, Type, Optional, Generator
+
 from volatility3.framework import interfaces, renderers, exceptions, constants
-from volatility3.plugins.windows import handles
-from volatility3.plugins.windows import pslist
 from volatility3.framework.configuration import requirements
 from volatility3.framework.renderers import format_hints
-from typing import List, Tuple, Type, Optional, Generator
+from volatility3.plugins.windows import handles
+from volatility3.plugins.windows import pslist

 vollog = logging.getLogger(__name__)

@@ -25,17 +26,15 @@
 class DumpFiles(interfaces.plugins.PluginInterface):
     """Dumps cached file contents from Windows memory samples."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)

     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.IntRequirement(name = 'pid',
                                         description = "Process ID to include (all other processes are excluded)",
                                         optional = True),
@@ -167,6 +166,7 @@ def process_file_object(cls, context: interfaces.context.ContextInterface, prima
                                file_output)

     def _generator(self, procs: List, offsets: List):
+        kernel = self.context.modules[self.config['kernel']]

         if procs:
             # The handles plugin doesn't expose any staticmethod/classmethod, and it also requires stashing
@@ -175,11 +175,11 @@ def _generator(self, procs: List, offsets: List):
             # results instead of just dealing with them as direct objects here.
             handles_plugin = handles.Handles(context = self.context, config_path = self._config_path)
             type_map = handles_plugin.get_type_map(context = self.context,
-                                                   layer_name = self.config["primary"],
-                                                   symbol_table = self.config["nt_symbols"])
+                                                   layer_name = kernel.layer_name,
+                                                   symbol_table = kernel.symbol_table_name)
             cookie = handles_plugin.find_cookie(context = self.context,
-                                                layer_name = self.config["primary"],
-                                                symbol_table = self.config["nt_symbols"])
+                                                layer_name = kernel.layer_name,
+                                                symbol_table = kernel.symbol_table_name)

             for proc in procs:

@@ -195,7 +195,7 @@ def _generator(self, procs: List, offsets: List):
                         obj_type = entry.get_object_type(type_map, cookie)
                         if obj_type == "File":
                             file_obj = entry.Body.cast("_FILE_OBJECT")
-                            for result in self.process_file_object(self.context, self.config["primary"], self.open,
+                            for result in self.process_file_object(self.context, kernel.layer_name, self.open,
                                                                    file_obj):
                                 yield (0, result)
                     except exceptions.InvalidAddressException:
@@ -219,7 +219,7 @@ def _generator(self, procs: List, offsets: List):
                         if not file_obj.is_valid():
                             continue

-                        for result in self.process_file_object(self.context, self.config["primary"], self.open,
+                        for result in self.process_file_object(self.context, kernel.layer_name, self.open,
                                                                file_obj):
                             yield (0, result)
                     except exceptions.InvalidAddressException:
@@ -230,16 +230,17 @@ def _generator(self, procs: List, offsets: List):
         # Now process any offsets explicitly requested by the user.
         for offset, is_virtual in offsets:
             try:
-                layer_name = self.config["primary"]
+                layer_name = kernel.layer_name
                 # switch to a memory layer if the user provided --physaddr instead of --virtaddr
                 if not is_virtual:
                     layer_name = self.context.layers[layer_name].config["memory_layer"]

-                file_obj = self.context.object(self.config["nt_symbols"] + constants.BANG + "_FILE_OBJECT",
-                                               layer_name = layer_name,
-                                               native_layer_name = self.config["primary"],
-                                               offset = offset)
-                for result in self.process_file_object(self.context, self.config["primary"], self.open, file_obj):
+                file_obj = self.context.object(
+                    kernel.symbol_table_name + constants.BANG + "_FILE_OBJECT",
+                    layer_name = layer_name,
+                    native_layer_name = kernel.layer_name,
+                    offset = offset)
+                for result in self.process_file_object(self.context, kernel.layer_name, self.open, file_obj):
                     yield (0, result)
             except exceptions.InvalidAddressException:
                 vollog.log(constants.LOGLEVEL_VVV, f"Cannot extract file at {offset:#x}")
@@ -250,6 +251,8 @@ def run(self):
         # a list of processes matching the pid filter. all files for these process(es) will be dumped.
         procs = []

+        kernel = self.context.modules[self.config['kernel']]
+
        if self.config.get("virtaddr", None) is not None:
             offsets.append((self.config["virtaddr"], True))
         elif self.config.get("physaddr", None) is not None:
@@ -257,8 +260,8 @@ def run(self):
         else:
             filter_func = pslist.PsList.create_pid_filter([self.config.get("pid", None)])
             procs = pslist.PsList.list_processes(self.context,
-                                                 self.config["primary"],
-                                                 self.config["nt_symbols"],
+                                                 kernel.layer_name,
+                                                 kernel.symbol_table_name,
                                                  filter_func = filter_func)

         return renderers.TreeGrid([("Cache", str), ("FileObject", format_hints.Hex), ("FileName", str),
diff --git a/volatility3/framework/plugins/windows/envars.py b/volatility3/framework/plugins/windows/envars.py
index 8d3ee85062..6caf6d2fac 100644
--- a/volatility3/framework/plugins/windows/envars.py
+++ b/volatility3/framework/plugins/windows/envars.py
@@ -15,17 +15,15 @@
 class Envars(interfaces.plugins.PluginInterface):
     "Display process environment variables"

+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)
-    _required_framework_version = (1, 0, 0)

     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.ListRequirement(name = 'pid',
                                          description = 'Filter on specific process IDs',
                                          element_type = int,
@@ -48,11 +46,12 @@ def _get_silent_vars(self) -> List[str]:
         """
         values = []

+        kernel = self.context.modules[self.config['kernel']]

         for hive in hivelist.HiveList.list_hives(context = self.context,
                                                  base_config_path = self.config_path,
-                                                 layer_name = self.config['primary'],
-                                                 symbol_table = self.config['nt_symbols'],
+                                                 layer_name = kernel.layer_name,
+                                                 symbol_table = kernel.symbol_table_name,
                                                  hive_offsets = None):
             sys = False
             ntuser = False
@@ -192,10 +191,11 @@ def _generator(self, data):

     def run(self):
         filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))
+        kernel = self.context.modules[self.config['kernel']]

         return renderers.TreeGrid([("PID", int), ("Process", str), ("Block", str), ("Variable", str), ("Value", str)],
                                   self._generator(
                                       pslist.PsList.list_processes(context = self.context,
-                                                                   layer_name = self.config['primary'],
-                                                                   symbol_table = self.config['nt_symbols'],
+                                                                   layer_name = kernel.layer_name,
+                                                                   symbol_table = kernel.symbol_table_name,
                                                                    filter_func = filter_func)))
diff --git a/volatility3/framework/plugins/windows/filescan.py b/volatility3/framework/plugins/windows/filescan.py
index e1756630db..5ceb57ac4c 100644
--- a/volatility3/framework/plugins/windows/filescan.py
+++ b/volatility3/framework/plugins/windows/filescan.py
@@ -13,15 +13,13 @@
 class FileScan(interfaces.plugins.PluginInterface):
     """Scans for file objects present in a particular windows memory image."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)

     @classmethod
     def get_requirements(cls):
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)),
         ]

@@ -50,7 +48,9 @@ def scan_files(cls,
             yield mem_object

     def _generator(self):
-        for fileobj in self.scan_files(self.context, self.config['primary'], self.config['nt_symbols']):
+        kernel = self.context.modules[self.config['kernel']]
+
+        for fileobj in self.scan_files(self.context, kernel.layer_name, kernel.symbol_table_name):

             try:
                 file_name = fileobj.FileName.String
diff --git a/volatility3/framework/plugins/windows/getservicesids.py b/volatility3/framework/plugins/windows/getservicesids.py
index f8a78dfcdd..9395aadb49 100644
--- a/volatility3/framework/plugins/windows/getservicesids.py
+++ b/volatility3/framework/plugins/windows/getservicesids.py
@@ -30,8 +30,8 @@ def createservicesid(svc) -> str:
 class GetServiceSIDs(interfaces.plugins.PluginInterface):
     """Lists process token sids."""

+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)
-    _required_framework_version = (1, 0, 0)

     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
@@ -53,20 +53,18 @@ def __init__(self, *args, **kwargs):
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0))
         ]

     def _generator(self):
-
+        kernel = self.context.modules[self.config['kernel']]
         # Get the system hive
         for hive in hivelist.HiveList.list_hives(context = self.context,
                                                  base_config_path = self.config_path,
-                                                 layer_name = self.config['primary'],
-                                                 symbol_table = self.config['nt_symbols'],
+                                                 layer_name = kernel.layer_name,
+                                                 symbol_table = kernel.symbol_table_name,
                                                  filter_string = 'machine\\system',
                                                  hive_offsets = None):
             # Get ControlSet\Services.
diff --git a/volatility3/framework/plugins/windows/getsids.py b/volatility3/framework/plugins/windows/getsids.py
index fbb6aafd2c..179ce4737a 100644
--- a/volatility3/framework/plugins/windows/getsids.py
+++ b/volatility3/framework/plugins/windows/getsids.py
@@ -28,8 +28,8 @@ def find_sid_re(sid_string, sid_re_list) -> Union[str, interfaces.renderers.Base
 class GetSIDs(interfaces.plugins.PluginInterface):
     """Print the SIDs owning each process"""

+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)
-    _required_framework_version = (1, 0, 0)

     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
@@ -53,10 +53,8 @@ def __init__(self, *args, **kwargs):
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.ListRequirement(name = 'pid',
                                          description = 'Filter on specific process IDs',
                                          element_type = int,
@@ -75,12 +73,13 @@ def lookup_user_sids(self) -> Dict[str, str]:
         key = "Microsoft\\Windows NT\\CurrentVersion\\ProfileList"
         val = "ProfileImagePath"

+        kernel = self.context.modules[self.config['kernel']]
         sids = {}

         for hive in hivelist.HiveList.list_hives(context = self.context,
                                                  base_config_path = self.config_path,
-                                                 layer_name = self.config['primary'],
-                                                 symbol_table = self.config['nt_symbols'],
+                                                 layer_name = kernel.layer_name,
+                                                 symbol_table = kernel.symbol_table_name,
                                                  filter_string = 'config\\software',
                                                  hive_offsets = None):

@@ -154,10 +153,11 @@ def _generator(self, procs):

     def run(self):
         filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))
+        kernel = self.context.modules[self.config['kernel']]

         return renderers.TreeGrid([("PID", int), ("Process", str), ("SID", str), ("Name", str)],
                                   self._generator(
                                       pslist.PsList.list_processes(context = self.context,
-                                                                   layer_name = self.config['primary'],
-                                                                   symbol_table = self.config['nt_symbols'],
+                                                                   layer_name = kernel.layer_name,
+                                                                   symbol_table = kernel.symbol_table_name,
                                                                    filter_func = filter_func)))
diff --git a/volatility3/framework/plugins/windows/handles.py b/volatility3/framework/plugins/windows/handles.py
index 9b249f83c2..2f02ec6213 100644
--- a/volatility3/framework/plugins/windows/handles.py
+++ b/volatility3/framework/plugins/windows/handles.py
@@ -24,7 +24,7 @@
 class Handles(interfaces.plugins.PluginInterface):
     """Lists process open handles."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)

     def __init__(self, *args, **kwargs):
@@ -38,10 +38,8 @@ def __init__(self, *args, **kwargs):
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.ListRequirement(name = 'pid',
                                          element_type = int,
                                          description = "Process IDs to include (all other processes are excluded)",
@@ -69,7 +67,9 @@ def _get_item(self, handle_table_entry, handle_value):
         process' handle table, determine where the corresponding object's
         _OBJECT_HEADER can be found."""

-        virtual = self.config["primary"]
+        kernel = self.context.modules[self.config['kernel']]
+
+        virtual = kernel.layer_name

         try:
             # before windows 7
@@ -80,7 +80,7 @@ def _get_item(self, handle_table_entry, handle_value):
                 object_header.GrantedAccess = handle_table_entry.GrantedAccess
         except AttributeError:
             # starting with windows 8
-            is_64bit = symbols.symbol_table_is_64bit(self.context, self.config["nt_symbols"])
+            is_64bit = symbols.symbol_table_is_64bit(self.context, kernel.symbol_table_name)

             if is_64bit:
                 if handle_table_entry.LowValue == 0:
@@ -104,8 +104,7 @@ def _get_item(self, handle_table_entry, handle_value):
                 offset = handle_table_entry.InfoTable & ~7

             # print("LowValue: {0:#x} Magic: {1:#x} Offset: {2:#x}".format(handle_table_entry.InfoTable, magic, offset))
-            object_header = self.context.object(self.config["nt_symbols"] + constants.BANG + "_OBJECT_HEADER",
-                                                virtual,
+            object_header = self.context.object(kernel.symbol_table_name + constants.BANG + "_OBJECT_HEADER", virtual,
                                                 offset = offset)
             object_header.GrantedAccess = handle_table_entry.GrantedAccessBits

@@ -124,10 +123,11 @@ def find_sar_value(self):
             if not has_capstone:
                 return None

+            kernel = self.context.modules[self.config['kernel']]
-            virtual_layer_name = self.config['primary']
+            virtual_layer_name = kernel.layer_name
             kvo = self.context.layers[virtual_layer_name].config['kernel_virtual_offset']
-            ntkrnlmp = self.context.module(self.config["nt_symbols"], layer_name = virtual_layer_name, offset = kvo)
+            ntkrnlmp = self.context.module(kernel.symbol_table_name, layer_name = virtual_layer_name, offset = kvo)

             try:
                 func_addr = ntkrnlmp.get_symbol("ObpCaptureHandleInformationEx").address
@@ -227,10 +227,12 @@ def _make_handle_array(self, offset, level, depth = 0):
         """Parse a process' handle table and yield valid handle table entries,
         going as deep into the table "levels" as necessary."""

-        virtual = self.config["primary"]
+        kernel = self.context.modules[self.config['kernel']]
+
+        virtual = kernel.layer_name
         kvo = self.context.layers[virtual].config['kernel_virtual_offset']

-        ntkrnlmp = self.context.module(self.config["nt_symbols"], layer_name = virtual, offset = kvo)
+        ntkrnlmp = self.context.module(kernel.symbol_table_name, layer_name = virtual, offset = kvo)

         if level > 0:
             subtype = ntkrnlmp.get_type("pointer")
@@ -292,13 +294,15 @@ def handles(self, handle_table):

     def _generator(self, procs):

+        kernel = self.context.modules[self.config['kernel']]
+
         type_map = self.get_type_map(context = self.context,
-                                     layer_name = self.config["primary"],
-                                     symbol_table = self.config["nt_symbols"])
+                                     layer_name = kernel.layer_name,
+                                     symbol_table = kernel.symbol_table_name)

         cookie = self.find_cookie(context = self.context,
-                                  layer_name = self.config["primary"],
-                                  symbol_table = self.config["nt_symbols"])
+                                  layer_name = kernel.layer_name,
+                                  symbol_table = kernel.symbol_table_name)

         for proc in procs:
             try:
@@ -345,12 +349,13 @@ def _generator(self, procs):

     def run(self):
         filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))
+        kernel = self.context.modules[self.config['kernel']]

         return renderers.TreeGrid([("PID", int), ("Process", str), ("Offset", format_hints.Hex),
                                    ("HandleValue", format_hints.Hex), ("Type", str),
                                    ("GrantedAccess", format_hints.Hex), ("Name", str)],
                                   self._generator(
                                       pslist.PsList.list_processes(self.context,
-                                                                   self.config['primary'],
-                                                                   self.config['nt_symbols'],
+                                                                   kernel.layer_name,
+                                                                   kernel.symbol_table_name,
                                                                    filter_func = filter_func)))
diff --git a/volatility3/framework/plugins/windows/hashdump.py b/volatility3/framework/plugins/windows/hashdump.py
index dac7f70733..05d0281d91 100644
--- a/volatility3/framework/plugins/windows/hashdump.py
+++ b/volatility3/framework/plugins/windows/hashdump.py
@@ -21,16 +21,14 @@
 class Hashdump(interfaces.plugins.PluginInterface):
     """Dumps user hashes from memory"""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 1, 0)

     @classmethod
     def get_requirements(cls):
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0))
         ]

@@ -134,7 +132,7 @@ def get_hbootkey(cls, samhive: registry.RegistryHive, bootkey: bytes) -> Optiona
             rc4_key = md5.digest()
             rc4 = ARC4.new(rc4_key)
-            hbootkey = rc4.encrypt(sam_data[0x80:0xA0]) # lgtm [py/weak-cryptographic-algorithm]
+            hbootkey = rc4.encrypt(sam_data[0x80:0xA0])  # lgtm [py/weak-cryptographic-algorithm]
             return hbootkey
         elif revision == 3:
             # AES encrypted
@@ -153,7 +151,7 @@ def decrypt_single_salted_hash(cls, rid, hbootkey: bytes, enc_hash: bytes, _lmnt
         des2 = DES.new(des_k2, DES.MODE_ECB)
         cipher = AES.new(hbootkey[:16], AES.MODE_CBC, salt)
         obfkey = cipher.decrypt(enc_hash)
-        return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:16]) # lgtm [py/weak-cryptographic-algorithm]
+        return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:16])  # lgtm [py/weak-cryptographic-algorithm]

     @classmethod
     def get_user_hashes(cls, user: registry.CM_KEY_NODE, samhive: registry.RegistryHive,
@@ -231,9 +229,9 @@ def decrypt_single_hash(cls, rid: int, hbootkey: bytes, enc_hash: bytes, lmntstr
         md5.update(hbootkey[:0x10] + pack(" Optional[bytes]:
@@ -289,10 +287,12 @@ def run(self):
         offset = self.config.get('offset', None)

         syshive = None
         samhive = None
+        kernel = self.context.modules[self.config['kernel']]
+
         for hive in hivelist.HiveList.list_hives(self.context,
                                                  self.config_path,
-                                                 self.config['primary'],
-                                                 self.config['nt_symbols'],
+                                                 kernel.layer_name,
+                                                 kernel.symbol_table_name,
                                                  hive_offsets = None if offset is None else [offset]):

             if hive.get_name().split('\\')[-1].upper() == 'SYSTEM':
diff --git a/volatility3/framework/plugins/windows/info.py b/volatility3/framework/plugins/windows/info.py
index e0de5a8512..c06c69c3af 100644
--- a/volatility3/framework/plugins/windows/info.py
+++ b/volatility3/framework/plugins/windows/info.py
@@ -16,16 +16,14 @@
 class Info(plugins.PluginInterface):
     """Show OS & kernel details of the memory sample being analyzed."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)

     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols")
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
         ]

     @classmethod
@@ -150,18 +148,22 @@ def get_ntheader_structure(cls, context: interfaces.context.ContextInterface, co

     def _generator(self):

-        layer_name = self.config['primary']
-        symbol_table = self.config['nt_symbols']
+        kernel = self.context.modules[self.config['kernel']]
+
+        layer_name = kernel.layer_name
+        symbol_table = kernel.symbol_table_name
+
+        layer = self.context.layers[layer_name]
+        table = self.context.symbol_space[symbol_table]

         kdbg = self.get_kdbg_structure(self.context, self.config_path, layer_name, symbol_table)

-        yield (0, ("Kernel Base", hex(self.config["primary.kernel_virtual_offset"])))
-        yield (0, ("DTB", hex(self.config["primary.page_map_offset"])))
-        yield (0, ("Symbols", self.config["nt_symbols.isf_url"]))
+        yield (0, ("Kernel Base", hex(layer.config["kernel_virtual_offset"])))
+        yield (0, ("DTB", hex(layer.config["page_map_offset"])))
+        yield (0, ("Symbols", table.config["isf_url"]))
         yield (0, ("Is64Bit", str(symbols.symbol_table_is_64bit(self.context, symbol_table))))
         yield (0, ("IsPAE", str(self.context.layers[layer_name].metadata.get("pae", False))))

-        for i, layer in self.get_depends(self.context, "primary"):
+        for i, layer in self.get_depends(self.context, layer_name):
             yield (0, (layer.name, f"{i} {layer.__class__.__name__}"))

         if kdbg.Header.OwnerTag == 0x4742444B:
diff --git a/volatility3/framework/plugins/windows/lsadump.py b/volatility3/framework/plugins/windows/lsadump.py
index 1921b0453b..c0db0b5b1a 100644
--- a/volatility3/framework/plugins/windows/lsadump.py
+++ b/volatility3/framework/plugins/windows/lsadump.py
@@ -21,16 +21,14 @@
 class Lsadump(interfaces.plugins.PluginInterface):
     """Dumps lsa secrets from memory"""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)

     @classmethod
     def get_requirements(cls):
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.VersionRequirement(name = 'hashdump', component = hashdump.Hashdump, version = (1, 1, 0)),
             requirements.VersionRequirement(name = 'hivelist', component = hivelist.HiveList, version = (1, 0, 0))
         ]
@@ -86,7 +84,7 @@ def get_lsa_key(cls, sechive: registry.RegistryHive, bootkey: bytes, vista_or_la
             rc4key = md5.digest()
             rc4 = ARC4.new(rc4key)
-            lsa_key = rc4.decrypt(obf_lsa_key[12:60]) # lgtm [py/weak-cryptographic-algorithm]
+            lsa_key = rc4.decrypt(obf_lsa_key[12:60])  # lgtm [py/weak-cryptographic-algorithm]
             lsa_key = lsa_key[0x10:0x20]
         else:
             lsa_key = cls.decrypt_aes(obf_lsa_key, bootkey)
@@ -127,7 +125,7 @@ def decrypt_secret(cls, secret: bytes, key: bytes):
             des_key = hashdump.Hashdump.sidbytes_to_key(block_key)
             des = DES.new(des_key, DES.MODE_ECB)
             enc_block = enc_block + b"\x00" * int(abs(8 - len(enc_block)) % 8)
-            decrypted_data += des.decrypt(enc_block) # lgtm [py/weak-cryptographic-algorithm]
+            decrypted_data += des.decrypt(enc_block)  # lgtm [py/weak-cryptographic-algorithm]
             j += 7
             if len(key[j:j + 7]) < 7:
                 j = len(key[j:j + 7])
@@ -138,7 +136,10 @@ def decrypt_secret(cls, secret: bytes, key: bytes):

     def _generator(self, syshive: registry.RegistryHive, sechive: registry.RegistryHive):
-        vista_or_later = versions.is_vista_or_later(context = self.context, symbol_table = self.config['nt_symbols'])
+        kernel = self.context.modules[self.config['kernel']]
+
+        vista_or_later = versions.is_vista_or_later(context = self.context,
+                                                    symbol_table = kernel.symbol_table_name)

         bootkey = hashdump.Hashdump.get_bootkey(syshive)
         lsakey = self.get_lsa_key(sechive, bootkey, vista_or_later)
@@ -181,11 +182,12 @@ def run(self):
         offset = self.config.get('offset', None)

         syshive = sechive = None
+        kernel = self.context.modules[self.config['kernel']]
         for hive in hivelist.HiveList.list_hives(self.context,
                                                  self.config_path,
-                                                 self.config['primary'],
-                                                 self.config['nt_symbols'],
+                                                 kernel.layer_name,
+                                                 kernel.symbol_table_name,
                                                  hive_offsets = None if offset is None else [offset]):

             if hive.get_name().split('\\')[-1].upper() == 'SYSTEM':
diff --git a/volatility3/framework/plugins/windows/malfind.py b/volatility3/framework/plugins/windows/malfind.py
index eedd3b715d..864722fbe3 100644
--- a/volatility3/framework/plugins/windows/malfind.py
+++ b/volatility3/framework/plugins/windows/malfind.py
@@ -17,16 +17,14 @@
 class Malfind(interfaces.plugins.PluginInterface):
     """Lists process memory ranges that potentially contain injected code."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)

     @classmethod
     def get_requirements(cls):
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.ListRequirement(name = 'pid',
                                          element_type = int,
                                          description = "Process IDs to include (all other processes are excluded)",
@@ -105,8 +103,8 @@ def list_injections(
                 continue

             if (vad.get_private_memory() == 1
-                    and vad.get_tag() == "VadS") or (vad.get_private_memory() == 0
-                                                     and protection_string != "PAGE_EXECUTE_WRITECOPY"):
+                and vad.get_tag() == "VadS") or (vad.get_private_memory() == 0
+                                                 and protection_string != "PAGE_EXECUTE_WRITECOPY"):
                 if cls.is_vad_empty(proc_layer, vad):
                     continue

@@ -115,13 +113,14 @@ def list_injections(

     def _generator(self, procs):
         # determine if we're on a 32 or 64 bit kernel
-        is_32bit_arch = not symbols.symbol_table_is_64bit(self.context, self.config["nt_symbols"])
+        kernel = self.context.modules[self.config['kernel']]
+
+        is_32bit_arch = not symbols.symbol_table_is_64bit(self.context, kernel.symbol_table_name)

         for proc in procs:
             process_name = utility.array_to_string(proc.ImageFileName)

-            for vad, data in self.list_injections(self.context, self.config["primary"], self.config["nt_symbols"],
-                                                  proc):
+            for vad, data in self.list_injections(self.context, kernel.layer_name, kernel.symbol_table_name, proc):

                 # if we're on a 64 bit kernel, we may still need 32 bit disasm due to wow64
                 if is_32bit_arch or proc.get_is_wow64():
@@ -145,13 +144,14 @@ def _generator(self, procs):
                 yield (0, (proc.UniqueProcessId, process_name, format_hints.Hex(vad.get_start()),
                            format_hints.Hex(vad.get_end()), vad.get_tag(),
                            vad.get_protection(
-                               vadinfo.VadInfo.protect_values(self.context, self.config["primary"],
-                                                              self.config["nt_symbols"]),
+                               vadinfo.VadInfo.protect_values(self.context, kernel.layer_name,
+                                                              kernel.symbol_table_name),
                                vadinfo.winnt_protections), vad.get_commit_charge(), vad.get_private_memory(),
                            file_output, format_hints.HexBytes(data), disasm))

     def run(self):
         filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))
+        kernel = self.context.modules[self.config['kernel']]

         return renderers.TreeGrid([("PID", int), ("Process", str), ("Start VPN", format_hints.Hex),
                                    ("End VPN", format_hints.Hex), ("Tag", str), ("Protection", str),
@@ -159,6 +159,6 @@ def run(self):
                                    ("Hexdump", format_hints.HexBytes), ("Disasm", interfaces.renderers.Disassembly)],
                                   self._generator(
                                       pslist.PsList.list_processes(context = self.context,
-                                                                   layer_name = self.config['primary'],
-                                                                   symbol_table = self.config['nt_symbols'],
+                                                                   layer_name = kernel.layer_name,
+                                                                   symbol_table = kernel.symbol_table_name,
                                                                    filter_func = filter_func)))
diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py
index e67a6a877b..e873b3b7d6 100644
--- a/volatility3/framework/plugins/windows/memmap.py
+++ b/volatility3/framework/plugins/windows/memmap.py
@@ -16,16 +16,14 @@
 class Memmap(interfaces.plugins.PluginInterface):
     """Prints the memory map"""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)

     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)),
             requirements.BooleanRequirement(name = 'coalesce',
                                             description = 'Clump output where possible',
                                             default = False,
                                             optional = True),
@@ -108,12 +106,13 @@ def _generator(self, procs):

     def run(self):
         filter_func = pslist.PsList.create_pid_filter([self.config.get('pid', None)])
+        kernel = self.context.modules[self.config['kernel']]

         return renderers.TreeGrid([("Virtual", format_hints.Hex), ("Physical", format_hints.Hex),
                                    ("Size", format_hints.Hex), ("Offset in File", format_hints.Hex),
                                    ("File output", str)],
                                   self._generator(
                                       pslist.PsList.list_processes(context = self.context,
-                                                                   layer_name = self.config['primary'],
-                                                                   symbol_table = self.config['nt_symbols'],
+                                                                   layer_name = kernel.layer_name,
+                                                                   symbol_table = kernel.symbol_table_name,
                                                                    filter_func = filter_func)))
diff --git a/volatility3/framework/plugins/windows/modscan.py b/volatility3/framework/plugins/windows/modscan.py
index 4820a6fb86..5179d29638 100644
--- a/volatility3/framework/plugins/windows/modscan.py
+++ b/volatility3/framework/plugins/windows/modscan.py
@@ -17,16 +17,14 @@
 class ModScan(interfaces.plugins.PluginInterface):
     """Scans for modules present in a particular windows memory image."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)

     @classmethod
     def get_requirements(cls):
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.VersionRequirement(name = 'poolerscanner',
                                             component = poolscanner.PoolScanner,
                                             version = (1, 0, 0)),
@@ -137,14 +135,16 @@ def find_session_layer(cls, context: interfaces.context.ContextInterface, sessio
         return None

     def _generator(self):
-        session_layers = list(self.get_session_layers(self.context, self.config['primary'], self.config['nt_symbols']))
+        kernel = self.context.modules[self.config['kernel']]
+
+        session_layers = list(self.get_session_layers(self.context, kernel.layer_name, kernel.symbol_table_name))
         pe_table_name = intermed.IntermediateSymbolTable.create(self.context,
                                                                 self.config_path,
                                                                 "windows",
                                                                 "pe",
                                                                 class_types = pe.class_types)

-        for mod in self.scan_modules(self.context, self.config['primary'], self.config['nt_symbols']):
+        for mod in self.scan_modules(self.context, kernel.layer_name, kernel.symbol_table_name):

             try:
                 BaseDllName = mod.BaseDllName.get_string()
diff --git a/volatility3/framework/plugins/windows/modules.py b/volatility3/framework/plugins/windows/modules.py
index ad2faefc96..a3fb87694d 100644
--- a/volatility3/framework/plugins/windows/modules.py
+++ b/volatility3/framework/plugins/windows/modules.py
@@ -19,16 +19,14 @@
 class Modules(interfaces.plugins.PluginInterface):
     """Lists the loaded kernel modules."""

-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (1, 2, 0)
     _version = (1, 1, 0)

     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         return [
-            requirements.TranslationLayerRequirement(name = 'primary',
-                                                     description = 'Memory layer for the kernel',
-                                                     architectures = ["Intel32", "Intel64"]),
-            requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"),
+            requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.VersionRequirement(name = 'pslist', component = pslist.PsList, version = (2, 0, 0)),
             requirements.VersionRequirement(name = 'dlllist', component = dlllist.DllList, version = (2, 0, 0)),
             requirements.BooleanRequirement(name = 'dump',
@@ -38,13 +36,14 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]
         ]

     def _generator(self):
+        kernel = self.context.modules[self.config['kernel']]

         pe_table_name = intermed.IntermediateSymbolTable.create(self.context,
                                                                 self.config_path,
                                                                 "windows",
                                                                 "pe",
                                                                 class_types = pe.class_types)

-        for mod in self.list_modules(self.context, self.config['primary',
self.config['nt_symbols']): + for mod in self.list_modules(self.context, kernel.layer_name, kernel.symbol_table_name): try: BaseDllName = mod.BaseDllName.get_string() diff --git a/volatility3/framework/plugins/windows/mutantscan.py b/volatility3/framework/plugins/windows/mutantscan.py index c0a47d38e1..55a131b4ad 100644 --- a/volatility3/framework/plugins/windows/mutantscan.py +++ b/volatility3/framework/plugins/windows/mutantscan.py @@ -13,15 +13,13 @@ class MutantScan(interfaces.plugins.PluginInterface): """Scans for mutexes present in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)), ] @@ -50,7 +48,9 @@ def scan_mutants(cls, yield mem_object def _generator(self): - for mutant in self.scan_mutants(self.context, self.config['primary'], self.config['nt_symbols']): + kernel = self.context.modules[self.config['kernel']] + + for mutant in self.scan_mutants(self.context, kernel.layer_name, kernel.symbol_table_name): try: name = mutant.get_name() diff --git a/volatility3/framework/plugins/windows/netscan.py b/volatility3/framework/plugins/windows/netscan.py index aa5501e94d..9e72713f79 100644 --- a/volatility3/framework/plugins/windows/netscan.py +++ b/volatility3/framework/plugins/windows/netscan.py @@ -22,16 +22,14 @@ class NetScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Scans for network objects present in a particular windows memory image.""" 
- _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'poolscanner', component = poolscanner.PoolScanner, version = (1, 0, 0)), @@ -279,11 +277,13 @@ def scan(cls, def _generator(self, show_corrupt_results: Optional[bool] = None): """ Generates the network objects for use in rendering. """ - netscan_symbol_table = self.create_netscan_symbol_table(self.context, self.config["primary"], - self.config["nt_symbols"], self.config_path) + kernel = self.context.modules[self.config['kernel']] - for netw_obj in self.scan(self.context, self.config['primary'], self.config['nt_symbols'], - netscan_symbol_table): + netscan_symbol_table = self.create_netscan_symbol_table(self.context, kernel.layer_name, + kernel.symbol_table_name, + self.config_path) + + for netw_obj in self.scan(self.context, kernel.layer_name, kernel.symbol_table_name, netscan_symbol_table): vollog.debug(f"Found netw obj @ 0x{netw_obj.vol.offset:2x} of assumed type {type(netw_obj)}") # objects passed pool header constraints. check for additional constraints if strict flag is set. 
diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index 9739e5dc93..faae80503b 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -20,16 +20,14 @@ class NetStat(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Traverses network tracking structures present in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'netscan', component = netscan.NetScan, version = (1, 0, 0)), requirements.VersionRequirement(name = 'modules', component = modules.Modules, version = (1, 0, 0)), requirements.VersionRequirement(name = 'pdbutil', component = pdbutil.PDBUtility, version = (1, 0, 0)), @@ -419,19 +417,23 @@ def list_sockets(cls, def _generator(self, show_corrupt_results: Optional[bool] = None): """ Generates the network objects for use in rendering. 
""" - netscan_symbol_table = netscan.NetScan.create_netscan_symbol_table(self.context, self.config["primary"], - self.config["nt_symbols"], self.config_path) + kernel = self.context.modules[self.config['kernel']] - tcpip_module = self.get_tcpip_module(self.context, self.config["primary"], self.config["nt_symbols"]) + netscan_symbol_table = netscan.NetScan.create_netscan_symbol_table(self.context, + kernel.layer_name, + kernel.symbol_table_name, + self.config_path) + + tcpip_module = self.get_tcpip_module(self.context, kernel.layer_name, kernel.symbol_table_name) try: tcpip_symbol_table = pdbutil.PDBUtility.symbol_table_from_pdb( - self.context, interfaces.configuration.path_join(self.config_path, 'tcpip'), self.config["primary"], - "tcpip.pdb", tcpip_module.DllBase, tcpip_module.SizeOfImage) + self.context, interfaces.configuration.path_join(self.config_path, 'tcpip'), + kernel.layer_name, "tcpip.pdb", tcpip_module.DllBase, tcpip_module.SizeOfImage) except exceptions.VolatilityException: vollog.warning("Unable to locate symbols for the memory image's tcpip module") - for netw_obj in self.list_sockets(self.context, self.config['primary'], self.config['nt_symbols'], + for netw_obj in self.list_sockets(self.context, kernel.layer_name, kernel.symbol_table_name, netscan_symbol_table, tcpip_module.DllBase, tcpip_symbol_table): # objects passed pool header constraints. check for additional constraints if strict flag is set. 
diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py index 09f384d899..e1abb1b7fa 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -112,25 +112,25 @@ def __call__(self, data: bytes, data_offset: int): class PoolScanner(plugins.PluginInterface): """A generic pool scanner plugin.""" + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) - _required_framework_version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'handles', plugin = handles.Handles, version = (1, 0, 0)), ] def _generator(self): - symbol_table = self.config["nt_symbols"] + kernel = self.context.modules[self.config['kernel']] + + symbol_table = kernel.symbol_table_name constraints = self.builtin_constraints(symbol_table) - for constraint, mem_object, header in self.generate_pool_scan(self.context, self.config["primary"], + for constraint, mem_object, header in self.generate_pool_scan(self.context, kernel.layer_name, symbol_table, constraints): # generate some type-specific info for sanity checking if constraint.object_type == "Process": diff --git a/volatility3/framework/plugins/windows/privileges.py b/volatility3/framework/plugins/windows/privileges.py index ec9653517b..2b6381145e 100644 --- a/volatility3/framework/plugins/windows/privileges.py +++ b/volatility3/framework/plugins/windows/privileges.py @@ -16,7 +16,7 @@ class Privs(interfaces.plugins.PluginInterface): """Lists 
process token privileges""" - _version = (1, 0, 0) + _version = (1, 2, 0) _required_framework_version = (1, 0, 0) def __init__(self, *args, **kwargs): @@ -40,10 +40,8 @@ def __init__(self, *args, **kwargs): def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -89,11 +87,12 @@ def _generator(self, procs): def run(self): filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) + kernel = self.context.modules[self.config['kernel']] return renderers.TreeGrid([("PID", int), ("Process", str), ("Value", int), ("Privilege", str), ("Attributes", str), ("Description", str)], self._generator( pslist.PsList.list_processes(context = self.context, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_func = filter_func))) diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index 575dfaab42..b95160a82a 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -80,7 +80,8 @@ def process_dump( return file_handle @classmethod - def create_pid_filter(cls, pid_list: List[int] = None, exclude: bool = False) -> Callable[[interfaces.objects.ObjectInterface], bool]: + def create_pid_filter(cls, pid_list: List[int] = None, exclude: bool = False) -> 
Callable[ + [interfaces.objects.ObjectInterface], bool]: """A factory for producing filter functions that filter based on a list of process IDs. @@ -103,7 +104,8 @@ def create_pid_filter(cls, pid_list: List[int] = None, exclude: bool = False) -> return filter_func @classmethod - def create_name_filter(cls, name_list: List[str] = None, exclude: bool = False) -> Callable[[interfaces.objects.ObjectInterface], bool]: + def create_name_filter(cls, name_list: List[str] = None, exclude: bool = False) -> Callable[ + [interfaces.objects.ObjectInterface], bool]: """A factory for producing filter functions that filter based on a list of process names. @@ -170,19 +172,21 @@ def list_processes(cls, yield proc def _generator(self): + kernel = self.context.modules[self.config['kernel']] + pe_table_name = intermed.IntermediateSymbolTable.create(self.context, self.config_path, "windows", "pe", class_types = pe.class_types) - memory = self.context.layers[self.config['kernel.layer_name']] + memory = self.context.layers[kernel.layer_name] if not isinstance(memory, layers.intel.Intel): raise TypeError("Primary layer is not an intel layer") for proc in self.list_processes(self.context, - self.config['kernel.layer_name'], - self.config['kernel.symbol_table_name'], + kernel.layer_name, + kernel.symbol_table_name, filter_func = self.create_pid_filter(self.config.get('pid', None))): if not self.config.get('physical', self.PHYSICAL_DEFAULT): @@ -194,7 +198,7 @@ def _generator(self): try: if self.config['dump']: - file_handle = self.process_dump(self.context, self.config['kernel.symbol_table_name'], + file_handle = self.process_dump(self.context, kernel.symbol_table_name, pe_table_name, proc, self.open) file_output = "Error outputting file" if file_handle: @@ -202,12 +206,13 @@ def _generator(self): file_output = str(file_handle.preferred_filename) yield (0, (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, - proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, 
errors = 'replace'), - format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), - proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) + proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, + errors = 'replace'), + format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), + proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) except exceptions.InvalidAddressException: - vollog.info(f"Invalid process found at address: {proc.vol.offset:x}. Skipping") + vollog.info(f"Invalid process found at address: {proc.vol.offset:x}. Skipping") def generate_timeline(self): for row in self._generator(): diff --git a/volatility3/framework/plugins/windows/psscan.py b/volatility3/framework/plugins/windows/psscan.py index 68c9efd4dd..8bbebdc7bf 100644 --- a/volatility3/framework/plugins/windows/psscan.py +++ b/volatility3/framework/plugins/windows/psscan.py @@ -22,16 +22,14 @@ class PsScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Scans for processes present in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 1, 0) @classmethod def get_requirements(cls): return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.VersionRequirement(name = 'info', component = info.Info, version = (1, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -144,26 +142,29 @@ def get_osversion(cls, context: 
interfaces.context.ContextInterface, layer_name: return (nt_major_version, nt_minor_version, build) def _generator(self): + kernel = self.context.modules[self.config['kernel']] + pe_table_name = intermed.IntermediateSymbolTable.create(self.context, self.config_path, "windows", "pe", class_types = pe.class_types) for proc in self.scan_processes(self.context, - self.config['primary'], - self.config['nt_symbols'], + kernel.layer_name, + kernel.symbol_table_name, filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))): file_output = "Disabled" if self.config['dump']: # windows 10 objects (maybe others in the future) are already in virtual memory - if proc.vol.layer_name == self.config['primary']: + if proc.vol.layer_name == kernel.layer_name: vproc = proc else: - vproc = self.virtual_process_from_physical(self.context, self.config['primary'], - self.config['nt_symbols'], proc) + vproc = self.virtual_process_from_physical(self.context, kernel.layer_name, + kernel.symbol_table_name, proc) - file_handle = pslist.PsList.process_dump(self.context, self.config['nt_symbols'], pe_table_name, vproc, + file_handle = pslist.PsList.process_dump(self.context, kernel.symbol_table_name, + pe_table_name, vproc, self.open) file_output = "Error outputting file" if file_handle: diff --git a/volatility3/framework/plugins/windows/pstree.py b/volatility3/framework/plugins/windows/pstree.py index 3459d4ed59..2c40ca55fe 100644 --- a/volatility3/framework/plugins/windows/pstree.py +++ b/volatility3/framework/plugins/windows/pstree.py @@ -14,7 +14,7 @@ class PsTree(interfaces.plugins.PluginInterface): """Plugin for listing processes in a tree based on their parent process ID.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) @@ -25,10 +25,8 @@ def __init__(self, *args, **kwargs) -> None: @classmethod def get_requirements(cls): return [ - 
requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.BooleanRequirement(name = 'physical', description = 'Display physical offsets instead of virtual', default = pslist.PsList.PHYSICAL_DEFAULT, @@ -56,12 +54,15 @@ def find_level(self, pid: objects.Pointer) -> None: def _generator(self): """Generates the Tree of processes.""" - for proc in pslist.PsList.list_processes(self.context, self.config['primary'], self.config['nt_symbols']): + kernel = self.context.modules[self.config['kernel']] + + for proc in pslist.PsList.list_processes(self.context, kernel.layer_name, + kernel.symbol_table_name): if not self.config.get('physical', pslist.PsList.PHYSICAL_DEFAULT): offset = proc.vol.offset else: - layer_name = self.config['primary'] + layer_name = kernel.layer_name memory = self.context.layers[layer_name] (_, _, offset, _, _) = list(memory.mapping(offset = proc.vol.offset, length = 0))[0] diff --git a/volatility3/framework/plugins/windows/registry/hivelist.py b/volatility3/framework/plugins/windows/registry/hivelist.py index e75dc19a6c..0296456188 100644 --- a/volatility3/framework/plugins/windows/registry/hivelist.py +++ b/volatility3/framework/plugins/windows/registry/hivelist.py @@ -39,16 +39,14 @@ def invalid(self) -> Optional[int]: class HiveList(interfaces.plugins.PluginInterface): """Lists the registry hives present in a particular memory image.""" + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) - _required_framework_version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory 
layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.StringRequirement(name = 'filter', description = "String to filter hive names returned", optional = True, @@ -66,9 +64,11 @@ def _sanitize_hive_name(self, name: str) -> str: def _generator(self) -> Iterator[Tuple[int, Tuple[int, str]]]: chunk_size = 0x500000 + kernel = self.context.modules[self.config['kernel']] + for hive_object in self.list_hive_objects(context = self.context, - layer_name = self.config["primary"], - symbol_table = self.config["nt_symbols"], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_string = self.config.get('filter', None)): file_output = "Disabled" @@ -77,8 +77,8 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[int, str]]]: hive = next( self.list_hives(self.context, self.config_path, - layer_name = self.config["primary"], - symbol_table = self.config["nt_symbols"], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, hive_offsets = [hive_object.vol.offset])) maxaddr = hive.hive.Storage[0].Length hive_name = self._sanitize_hive_name(hive.get_name()) diff --git a/volatility3/framework/plugins/windows/registry/hivescan.py b/volatility3/framework/plugins/windows/registry/hivescan.py index ac04fcc025..0a0257d478 100644 --- a/volatility3/framework/plugins/windows/registry/hivescan.py +++ b/volatility3/framework/plugins/windows/registry/hivescan.py @@ -15,16 +15,14 @@ class HiveScan(interfaces.plugins.PluginInterface): """Scans for registry hives present in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ - 
requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)), requirements.PluginRequirement(name = 'bigpools', plugin = bigpools.BigPools, version = (1, 0, 0)), ] @@ -68,9 +66,12 @@ def scan_hives(cls, yield mem_object def _generator(self): - for hive in self.scan_hives(self.context, self.config['primary'], self.config['nt_symbols']): - yield (0, (format_hints.Hex(hive.vol.offset), )) + kernel = self.context.modules[self.config['kernel']] + + for hive in self.scan_hives(self.context, kernel.layer_name, kernel.symbol_table_name): + + yield (0, (format_hints.Hex(hive.vol.offset),)) def run(self): return renderers.TreeGrid([("Offset", format_hints.Hex)], self._generator()) diff --git a/volatility3/framework/plugins/windows/registry/printkey.py b/volatility3/framework/plugins/windows/registry/printkey.py index bad438256b..405df46806 100644 --- a/volatility3/framework/plugins/windows/registry/printkey.py +++ b/volatility3/framework/plugins/windows/registry/printkey.py @@ -19,16 +19,14 @@ class PrintKey(interfaces.plugins.PluginInterface): """Lists the registry keys under a hive or specific key value.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + 
requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)), requirements.IntRequirement(name = 'offset', description = "Hive Offset", default = None, optional = True), requirements.StringRequirement(name = 'key', @@ -43,10 +41,10 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] @classmethod def key_iterator( - cls, - hive: RegistryHive, - node_path: Sequence[objects.StructType] = None, - recurse: bool = False + cls, + hive: RegistryHive, + node_path: Sequence[objects.StructType] = None, + recurse: bool = False ) -> Iterable[Tuple[int, bool, datetime.datetime, str, bool, interfaces.objects.ObjectInterface]]: """Walks through a set of nodes from a given node (last one in node_path). Avoids loops by not traversing into nodes already present @@ -188,12 +186,13 @@ def _registry_walker(self, def run(self): offset = self.config.get('offset', None) + kernel = self.context.modules[self.config['kernel']] return TreeGrid(columns = [('Last Write Time', datetime.datetime), ('Hive Offset', format_hints.Hex), ('Type', str), ('Key', str), ('Name', str), ('Data', format_hints.MultiTypeData), ('Volatile', bool)], - generator = self._registry_walker(self.config['primary'], - self.config['nt_symbols'], + generator = self._registry_walker(kernel.layer_name, + kernel.symbol_table_name, hive_offsets = None if offset is None else [offset], key = self.config.get('key', None), recurse = self.config.get('recurse', None))) diff --git a/volatility3/framework/plugins/windows/registry/userassist.py b/volatility3/framework/plugins/windows/registry/userassist.py index b3a3b8f3ab..2dd0ae0238 100644 --- a/volatility3/framework/plugins/windows/registry/userassist.py +++ b/volatility3/framework/plugins/windows/registry/userassist.py @@ -23,7 +23,7 @@ class UserAssist(interfaces.plugins.PluginInterface): 
"""Print userassist registry keys and information.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -37,10 +37,8 @@ def __init__(self, *args, **kwargs): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.IntRequirement(name = 'offset', description = "Hive Offset", default = None, optional = True), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)) ] @@ -115,13 +113,17 @@ def _determine_userassist_type(self) -> None: def _win7_or_later(self) -> bool: # TODO: change this if there is a better way of determining the OS version # _KUSER_SHARED_DATA.CookiePad is in Windows 6.1 (Win7) and later - return self.context.symbol_space.get_type(self.config['nt_symbols'] + constants.BANG + + kernel = self.context.modules[self.config['kernel']] + + return self.context.symbol_space.get_type(kernel.symbol_table_name + constants.BANG + "_KUSER_SHARED_DATA").has_member('CookiePad') def list_userassist(self, hive: RegistryHive) -> Generator[Tuple[int, Tuple], None, None]: """Generate userassist data for a registry hive.""" - hive_name = hive.hive.cast(self.config["nt_symbols"] + constants.BANG + "_CMHIVE").get_name() + kernel = self.context.modules[self.config['kernel']] + + hive_name = hive.hive.cast(kernel.symbol_table_name + constants.BANG + "_CMHIVE").get_name() if self._win7 is None: try: @@ -216,11 +218,13 @@ def _generator(self): if self.config.get('offset', None) is not None: 
hive_offsets = [self.config.get('offset', None)] + kernel = self.context.modules[self.config['kernel']] + # get all the user hive offsets or use the one specified for hive in hivelist.HiveList.list_hives(context = self.context, base_config_path = self.config_path, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_string = 'ntuser.dat', hive_offsets = hive_offsets): try: diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index 9980d1392b..a1e0bbbd3f 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -6,50 +6,49 @@ # It does this by locating the CSystems array through a variety of methods, # and then validating the entry for RC4 HMAC (0x17 / 23) # -# For a thorough walkthrough on how the R&D was performed to develop this plugin, +# For a thorough walkthrough on how the R&D was performed to develop this plugin, # please see our blogpost here: # # -import logging, io - +import io +import logging from typing import Iterable, Tuple, List, Optional -from volatility3.framework.symbols.windows import pdbutil +import pefile + from volatility3.framework import interfaces, symbols, exceptions from volatility3.framework import renderers, constants -from volatility3.framework.layers import scanners from volatility3.framework.configuration import requirements +from volatility3.framework.layers import scanners from volatility3.framework.objects import utility -from volatility3.framework.symbols import intermed from volatility3.framework.renderers import format_hints -from volatility3.plugins.windows import pslist, vadinfo - +from volatility3.framework.symbols import intermed +from volatility3.framework.symbols.windows import pdbutil from volatility3.framework.symbols.windows.extensions import 
pe - -import pefile +from volatility3.plugins.windows import pslist, vadinfo try: import capstone + has_capstone = True except ImportError: has_capstone = False vollog = logging.getLogger(__name__) + class Skeleton_Key_Check(interfaces.plugins.PluginInterface): """ Looks for signs of Skeleton Key malware """ - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'pslist', component = pslist.PsList, version = (2, 0, 0)), requirements.VersionRequirement(name = 'vadinfo', component = vadinfo.VadInfo, version = (2, 0, 0)), requirements.VersionRequirement(name = 'pdbutil', component = pdbutil.PDBUtility, version = (1, 0, 0)), @@ -71,26 +70,26 @@ def _get_pefile_obj(self, pe_table_name: str, layer_name: str, base_address: int try: dos_header = self.context.object(pe_table_name + constants.BANG + "_IMAGE_DOS_HEADER", - offset = base_address, - layer_name = layer_name) + offset = base_address, + layer_name = layer_name) for offset, data in dos_header.reconstruct(): pe_data.seek(offset) pe_data.write(data) - + pe_ret = pefile.PE(data = pe_data.getvalue(), fast_load = True) - + except exceptions.InvalidAddressException: vollog.debug("Unable to reconstruct cryptdll.dll in memory") pe_ret = None return pe_ret - def _check_for_skeleton_key_vad(self, csystem: interfaces.objects.ObjectInterface, - cryptdll_base: int, - cryptdll_size: int) -> bool: + def _check_for_skeleton_key_vad(self, csystem: 
interfaces.objects.ObjectInterface, + cryptdll_base: int, + cryptdll_size: int) -> bool: """ - Checks if Initialize and/or Decrypt is hooked by determining if + Checks if Initialize and/or Decrypt is hooked by determining if these function pointers reference addresses inside of the cryptdll VAD Args: @@ -101,11 +100,11 @@ def _check_for_skeleton_key_vad(self, csystem: interfaces.objects.ObjectInterfac bool: if a skeleton key hook is present """ return not ((cryptdll_base <= csystem.Initialize <= cryptdll_base + cryptdll_size) and \ - (cryptdll_base <= csystem.Decrypt <= cryptdll_base + cryptdll_size)) + (cryptdll_base <= csystem.Decrypt <= cryptdll_base + cryptdll_size)) - def _check_for_skeleton_key_symbols(self, csystem: interfaces.objects.ObjectInterface, - rc4HmacInitialize: int, - rc4HmacDecrypt: int) -> bool: + def _check_for_skeleton_key_symbols(self, csystem: interfaces.objects.ObjectInterface, + rc4HmacInitialize: int, + rc4HmacDecrypt: int) -> bool: """ Uses the PDB information to specifically check if the csystem for RC4HMAC has an initialization pointer to rc4HmacInitialize and a decryption pointer @@ -113,12 +112,12 @@ def _check_for_skeleton_key_symbols(self, csystem: interfaces.objects.ObjectInte Args: csystem: The RC4HMAC KERB_ECRYPT instance - rc4HmacInitialize: The expected address of csystem Initialization function + rc4HmacInitialize: The expected address of csystem Initialization function rc4HmacDecrypt: The expected address of the csystem Decryption function - + Returns: bool: if a skeleton key hook was found - """ + """ return csystem.Initialize != rc4HmacInitialize or csystem.Decrypt != rc4HmacDecrypt def _construct_ecrypt_array(self, array_start: int, count: int, \ @@ -137,21 +136,21 @@ def _construct_ecrypt_array(self, array_start: int, count: int, \ try: array = cryptdll_types.object(object_type = "array", - offset = array_start, - subtype = cryptdll_types.get_type("_KERB_ECRYPT"), - count = count, - absolute = True) + offset = 
array_start, + subtype = cryptdll_types.get_type("_KERB_ECRYPT"), + count = count, + absolute = True) except exceptions.InvalidAddressException: vollog.debug("Unable to construct cSystems array at given offset: {:x}".format(array_start)) array = None - + return array - def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, - cryptdll_types: interfaces.context.ModuleInterface, - proc_layer_name: str, - cryptdll_base: int) -> Tuple[interfaces.objects.ObjectInterface, int, int, int]: + def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, + cryptdll_types: interfaces.context.ModuleInterface, + proc_layer_name: str, + cryptdll_base: int) -> Tuple[interfaces.objects.ObjectInterface, int, int, int]: """ Finds the CSystems array through use of PDB symbols @@ -177,7 +176,7 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, count_address = cryptdll_module.get_symbol("cCSystems").address # we do not want to fail just because the count is not in memory - # 16 was the size on samples I tested, so I chose it as the default + # 16 was the size on samples I tested, so I chose it as the default try: count = cryptdll_types.object(object_type = "unsigned long", offset = count_address) except exceptions.InvalidAddressException: @@ -186,30 +185,31 @@ def _find_array_with_pdb_symbols(self, cryptdll_symbols: str, array_start = cryptdll_module.get_absolute_symbol_address("CSystems") array = self._construct_ecrypt_array(array_start, count, cryptdll_types) - + if array is None: vollog.debug("The CSystem array is not present in memory. 
Stopping PDB based analysis.") return array, rc4HmacInitialize, rc4HmacDecrypt - def _get_cryptdll_types(self, context: interfaces.context.ContextInterface, - config, - config_path: str, - proc_layer_name: str, - cryptdll_base: int): + def _get_cryptdll_types(self, context: interfaces.context.ContextInterface, + config, + config_path: str, + proc_layer_name: str, + cryptdll_base: int): """ Builds a symbol table from the cryptdll types generated after binary analysis Args: context: the context to operate upon - config: + config: config_path: proc_layer_name: name of the lsass.exe process layer cryptdll_base: base address of cryptdll.dll inside of lsass.exe """ - table_mapping = {"nt_symbols": config["nt_symbols"]} + kernel = self.context.modules[self.config['kernel']] + table_mapping = {"nt_symbols": kernel.symbol_table_name} - cryptdll_symbol_table = intermed.IntermediateSymbolTable.create(context = context, + cryptdll_symbol_table = intermed.IntermediateSymbolTable.create(context = context, config_path = config_path, sub_path = "windows", filename = "kerb_ecrypt", @@ -218,7 +218,7 @@ def _get_cryptdll_types(self, context: interfaces.context.ContextInterface, return context.module(cryptdll_symbol_table, proc_layer_name, offset = cryptdll_base) def _find_lsass_proc(self, proc_list: Iterable) -> \ - Tuple[interfaces.context.ContextInterface, str]: + Tuple[interfaces.context.ContextInterface, str]: """ Walks the process list and returns the first valid lsass instances. 
There should be only one lsass process, but malware will often use the @@ -242,11 +242,10 @@ def _find_lsass_proc(self, proc_list: Iterable) -> \ vollog.debug("Process {}: invalid address {} in layer {}".format(proc_id, excp.invalid_address, excp.layer_name)) - return None, None def _find_cryptdll(self, lsass_proc: interfaces.context.ContextInterface) -> \ - Tuple[int, int]: + Tuple[int, int]: """ Finds the base address of cryptdll.dll inside of lsass.exe @@ -260,18 +259,18 @@ def _find_cryptdll(self, lsass_proc: interfaces.context.ContextInterface) -> \ """ for vad in lsass_proc.get_vad_root().traverse(): filename = vad.get_file_name() - + if isinstance(filename, str) and filename.lower().endswith("cryptdll.dll"): base = vad.get_start() return base, vad.get_end() - base return None, None - def _find_csystems_with_symbols(self, proc_layer_name: str, - cryptdll_types: interfaces.context.ModuleInterface, - cryptdll_base: int, - cryptdll_size: int) -> \ - Tuple[interfaces.objects.ObjectInterface, int, int]: + def _find_csystems_with_symbols(self, proc_layer_name: str, + cryptdll_types: interfaces.context.ModuleInterface, + cryptdll_base: int, + cryptdll_size: int) -> \ + Tuple[interfaces.objects.ObjectInterface, int, int]: """ Attempts to find CSystems and the expected address of the handlers. Relies on downloading and parsing of the cryptdll PDB file. 
@@ -281,27 +280,28 @@ def _find_csystems_with_symbols(self, proc_layer_name: str, cryptdll_types: The types from cryptdll binary analysis cryptdll_base: the base address of cryptdll.dll crytpdll_size: the size of the VAD for cryptdll.dll - + Returns: A tuple of: array: An initialized Volatility array of _KERB_ECRYPT structures - rc4HmacInitialize: The expected address of csystem Initialization function + rc4HmacInitialize: The expected address of csystem Initialization function rc4HmacDecrypt: The expected address of the csystem Decryption function """ try: - cryptdll_symbols = pdbutil.PDBUtility.symbol_table_from_pdb(self.context, - interfaces.configuration.path_join(self.config_path, 'cryptdll'), - proc_layer_name, - "cryptdll.pdb", - cryptdll_base, - cryptdll_size) + cryptdll_symbols = pdbutil.PDBUtility.symbol_table_from_pdb(self.context, + interfaces.configuration.path_join( + self.config_path, 'cryptdll'), + proc_layer_name, + "cryptdll.pdb", + cryptdll_base, + cryptdll_size) except exceptions.VolatilityException: vollog.debug("Unable to use the cryptdll PDB. Stopping PDB symbols based analysis.") return None, None, None array, rc4HmacInitialize, rc4HmacDecrypt = \ - self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) - + self._find_array_with_pdb_symbols(cryptdll_symbols, cryptdll_types, proc_layer_name, cryptdll_base) + if array is None: vollog.debug("The CSystem array is not present in memory. Stopping PDB symbols based analysis.") @@ -313,7 +313,7 @@ def _get_rip_relative_target(self, inst) -> int: These instructions contain the offset of a target address relative to the current instruction pointer. 
- + Args: inst: A capstone instruction instance @@ -322,7 +322,7 @@ def _get_rip_relative_target(self, inst) -> int: """ try: opnd = inst.operands[1] - except capstone.CsError: + except capstone.CsError: return None if opnd.type != capstone.x86.X86_OP_MEM: @@ -334,9 +334,9 @@ def _get_rip_relative_target(self, inst) -> int: return inst.address + inst.size + opnd.mem.disp def _analyze_cdlocatecsystem(self, function_bytes: bytes, - function_start: int, - cryptdll_types: interfaces.context.ModuleInterface, - proc_layer_name: str) -> Optional[interfaces.objects.ObjectInterface]: + function_start: int, + cryptdll_types: interfaces.context.ModuleInterface, + proc_layer_name: str) -> Optional[interfaces.objects.ObjectInterface]: """ Performs static analysis on CDLocateCSystem to find the instructions that reference CSystems as well as cCsystems @@ -380,7 +380,7 @@ def _analyze_cdlocatecsystem(self, function_bytes: bytes, target_address = self._get_rip_relative_target(inst) if target_address: - array_start = target_address + array_start = target_address # we find the count before, so we can terminate the static analysis here break @@ -392,10 +392,10 @@ def _analyze_cdlocatecsystem(self, function_bytes: bytes, return array - def _find_csystems_with_export(self, proc_layer_name: str, - cryptdll_types: interfaces.context.ModuleInterface, - cryptdll_base: int, - _) -> Optional[interfaces.objects.ObjectInterface]: + def _find_csystems_with_export(self, proc_layer_name: str, + cryptdll_types: interfaces.context.ModuleInterface, + cryptdll_base: int, + _) -> Optional[interfaces.objects.ObjectInterface]: """ Uses export table analysis to locate CDLocateCsystem This function references CSystems and cCsystems @@ -420,8 +420,7 @@ def _find_csystems_with_export(self, proc_layer_name: str, "windows", "pe", class_types = pe.class_types) - - + cryptdll = self._get_pefile_obj(pe_table_name, proc_layer_name, cryptdll_base) if not cryptdll: return None @@ -440,7 +439,8 @@ def 
_find_csystems_with_export(self, proc_layer_name: str, try: function_bytes = self.context.layers[proc_layer_name].read(function_start, 0x50) except exceptions.InvalidAddressException: - vollog.debug("The CDLocateCSystem function is not present in the lsass address space. Stopping export based analysis.") + vollog.debug( + "The CDLocateCSystem function is not present in the lsass address space. Stopping export based analysis.") break array = self._analyze_cdlocatecsystem(function_bytes, function_start, cryptdll_types, proc_layer_name) @@ -451,15 +451,15 @@ def _find_csystems_with_export(self, proc_layer_name: str, return None - def _find_csystems_with_scanning(self, proc_layer_name: str, - cryptdll_types: interfaces.context.ModuleInterface, - cryptdll_base: int, - cryptdll_size: int) -> List[interfaces.context.ModuleInterface]: + def _find_csystems_with_scanning(self, proc_layer_name: str, + cryptdll_types: interfaces.context.ModuleInterface, + cryptdll_base: int, + cryptdll_size: int) -> List[interfaces.context.ModuleInterface]: """ Performs scanning to find potential RC4 HMAC csystem instances This function may return several values as it cannot validate which is the active one - + Args: proc_layer_name: the lsass.exe process layer name cryptdll_types: the types from cryptdll binary analysis @@ -468,22 +468,22 @@ def _find_csystems_with_scanning(self, proc_layer_name: str, Returns: A list of csystem instances """ - + csystems = [] - + cryptdll_end = cryptdll_base + cryptdll_size proc_layer = self.context.layers[proc_layer_name] - + ecrypt_size = cryptdll_types.get_type("_KERB_ECRYPT").size # scan for potential instances of RC4 HMAC # the signature is based on the type being 0x17 - # and the block size member being 1 in all test samples + # and the block size member being 1 in all test samples for address in proc_layer.scan(self.context, scanners.BytesScanner(b"\x17\x00\x00\x00\x01\x00\x00\x00"), sections = [(cryptdll_base, cryptdll_size)]): - + # this occurs 
across page boundaries if not proc_layer.is_valid(address, ecrypt_size): continue @@ -491,11 +491,11 @@ def _find_csystems_with_scanning(self, proc_layer_name: str, kerb = cryptdll_types.object("_KERB_ECRYPT", offset = address, absolute = True) - + # ensure the Encrypt and Finish pointers are inside the VAD - # these are not manipulated in the attack + # these are not manipulated in the attack if (cryptdll_base < kerb.Encrypt < cryptdll_end) and \ - (cryptdll_base < kerb.Finish < cryptdll_end): + (cryptdll_base < kerb.Finish < cryptdll_end): csystems.append(kerb) return csystems @@ -509,35 +509,36 @@ def _generator(self, procs): Args: procs: the process list filtered to lsass.exe instances """ - - if not symbols.symbol_table_is_64bit(self.context, self.config["nt_symbols"]): + kernel = self.context.modules[self.config['kernel']] + + if not symbols.symbol_table_is_64bit(self.context, kernel.symbol_table_name): vollog.info("This plugin only supports 64bit Windows memory samples") return lsass_proc, proc_layer_name = self._find_lsass_proc(procs) if not lsass_proc: - vollog.info("Unable to find a valid lsass.exe process in the process list. This should never happen. Analysis cannot proceed.") + vollog.info( + "Unable to find a valid lsass.exe process in the process list. This should never happen. Analysis cannot proceed.") return cryptdll_base, cryptdll_size = self._find_cryptdll(lsass_proc) if not cryptdll_base: vollog.info("Unable to find the location of cryptdll.dll inside of lsass.exe. 
Analysis cannot proceed.") return - + # the custom type information from binary analysis - cryptdll_types = self._get_cryptdll_types(self.context, - self.config, + cryptdll_types = self._get_cryptdll_types(self.context, + self.config, self.config_path, proc_layer_name, cryptdll_base) - # attempt to find the array and symbols directly from the PDB csystems, rc4HmacInitialize, rc4HmacDecrypt = \ - self._find_csystems_with_symbols(proc_layer_name, - cryptdll_types, - cryptdll_base, - cryptdll_size) + self._find_csystems_with_symbols(proc_layer_name, + cryptdll_types, + cryptdll_base, + cryptdll_size) csystems = None @@ -550,7 +551,7 @@ def _generator(self, procs): self._find_csystems_with_scanning] for source in fallback_sources: - csystems = source(proc_layer_name, + csystems = source(proc_layer_name, cryptdll_types, cryptdll_base, cryptdll_size) @@ -587,13 +588,17 @@ def _lsass_proc_filter(self, proc): named processes to blend in or uses lsass.exe as a process hollowing target """ process_name = utility.array_to_string(proc.ImageFileName) - + return process_name != "lsass.exe" def run(self): - return renderers.TreeGrid([("PID", int), ("Process", str), ("Skeleton Key Found", bool), ("rc4HmacInitialize", format_hints.Hex), ("rc4HmacDecrypt", format_hints.Hex)], - self._generator( - pslist.PsList.list_processes(context = self.context, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], - filter_func = self._lsass_proc_filter))) + kernel = self.context.modules[self.config['kernel']] + + return renderers.TreeGrid( + [("PID", int), ("Process", str), ("Skeleton Key Found", bool), ("rc4HmacInitialize", format_hints.Hex), + ("rc4HmacDecrypt", format_hints.Hex)], + self._generator( + pslist.PsList.list_processes(context = self.context, + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, + filter_func = self._lsass_proc_filter))) diff --git a/volatility3/framework/plugins/windows/ssdt.py 
b/volatility3/framework/plugins/windows/ssdt.py index 8092beb2f9..b5b3e40c0f 100644 --- a/volatility3/framework/plugins/windows/ssdt.py +++ b/volatility3/framework/plugins/windows/ssdt.py @@ -18,16 +18,14 @@ class SSDT(plugins.PluginInterface): """Lists the system call table.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'modules', plugin = modules.Modules, version = (1, 0, 0)), ] @@ -75,11 +73,13 @@ def build_module_collection(cls, context: interfaces.context.ContextInterface, l def _generator(self) -> Iterator[Tuple[int, Tuple[int, int, Any, Any]]]: - layer_name = self.config['primary'] - collection = self.build_module_collection(self.context, self.config["primary"], self.config["nt_symbols"]) + kernel = self.context.modules[self.config['kernel']] + + layer_name = kernel.layer_name + collection = self.build_module_collection(self.context, layer_name, kernel.symbol_table_name) kvo = self.context.layers[layer_name].config['kernel_virtual_offset'] - ntkrnlmp = self.context.module(self.config["nt_symbols"], layer_name = layer_name, offset = kvo) + ntkrnlmp = self.context.module(kernel.symbol_table_name, layer_name = layer_name, offset = kvo) # this is just one way to enumerate the native (NT) service table. 
# to do the same thing for the Win32K service table, we would need Win32K.sys symbol support @@ -91,7 +91,7 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[int, int, Any, Any]]]: # on 32-bit systems the table indexes are 32-bits and contain pointers (unsigned) # on 64-bit systems the indexes are also 32-bits but they're offsets from the # base address of the table and can be negative, so we need a signed data type - is_kernel_64 = symbols.symbol_table_is_64bit(self.context, self.config["nt_symbols"]) + is_kernel_64 = symbols.symbol_table_is_64bit(self.context, kernel.symbol_table_name) if is_kernel_64: array_subtype = "long" diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index 36c68d809c..67055e18d8 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -18,18 +18,16 @@ class Strings(interfaces.plugins.PluginInterface): """Reads output from the strings command and indicates which process(es) each string belongs to.""" - _version = (1, 0, 0) + _version = (1, 2, 0) _required_framework_version = (1, 0, 0) strings_pattern = re.compile(rb"^(?:\W*)([0-9]+)(?:\W*)(\w[\w\W]+)\n?") @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.ListRequirement(name = 'pid', element_type = int, description = "Process ID to include (all other processes are excluded)", @@ -44,7 +42,7 @@ def run(self): def _generator(self) -> 
Generator[Tuple, None, None]: """Generates results from a strings file.""" - string_list: List[Tuple[int,bytes]] = [] + string_list: List[Tuple[int, bytes]] = [] # Test strings file format is accurate accessor = resources.ResourceAccessor() @@ -60,14 +58,16 @@ def _generator(self) -> Generator[Tuple, None, None]: vollog.error(f"Line in unrecognized format: line {count}") line = strings_fp.readline() + kernel = self.context.modules[self.config['kernel']] + revmap = self.generate_mapping(self.context, - self.config['primary'], - self.config['nt_symbols'], + kernel.layer_name, + kernel.symbol_table_name, progress_callback = self._progress_callback, pid_list = self.config['pid']) last_prog: float = 0 - line_count: float = 0 + line_count: float = 0 num_strings = len(string_list) for offset, string in string_list: line_count += 1 diff --git a/volatility3/framework/plugins/windows/svcscan.py b/volatility3/framework/plugins/windows/svcscan.py index 24bc271097..140a2cccd3 100644 --- a/volatility3/framework/plugins/windows/svcscan.py +++ b/volatility3/framework/plugins/windows/svcscan.py @@ -21,17 +21,15 @@ class SvcScan(interfaces.plugins.PluginInterface): """Scans for windows services.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.PluginRequirement(name = 
'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)), requirements.PluginRequirement(name = 'vadyarascan', plugin = vadyarascan.VadYaraScan, version = (1, 0, 0)) @@ -94,15 +92,18 @@ def create_service_table(context: interfaces.context.ContextInterface, symbol_ta native_types = native_types) def _generator(self): + kernel = self.context.modules[self.config['kernel']] - service_table_name = self.create_service_table(self.context, self.config["nt_symbols"], self.config_path) + service_table_name = self.create_service_table(self.context, kernel.symbol_table_name, + self.config_path) relative_tag_offset = self.context.symbol_space.get_type(service_table_name + constants.BANG + "_SERVICE_RECORD").relative_child_offset("Tag") filter_func = pslist.PsList.create_name_filter(["services.exe"]) - is_vista_or_later = versions.is_vista_or_later(context = self.context, symbol_table = self.config["nt_symbols"]) + is_vista_or_later = versions.is_vista_or_later(context = self.context, + symbol_table = kernel.symbol_table_name) if is_vista_or_later: service_tag = b"serH" @@ -112,8 +113,8 @@ def _generator(self): seen = [] for task in pslist.PsList.list_processes(context = self.context, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_func = filter_func): proc_id = "Unknown" diff --git a/volatility3/framework/plugins/windows/symlinkscan.py b/volatility3/framework/plugins/windows/symlinkscan.py index 8a699a5d4e..3d7cbf89b9 100644 --- a/volatility3/framework/plugins/windows/symlinkscan.py +++ b/volatility3/framework/plugins/windows/symlinkscan.py @@ -15,15 +15,13 @@ class SymlinkScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Scans for links present in a particular windows memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls): return [ - 
requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), ] @classmethod @@ -51,7 +49,9 @@ def scan_symlinks(cls, yield mem_object def _generator(self): - for link in self.scan_symlinks(self.context, self.config['primary'], self.config['nt_symbols']): + kernel = self.context.modules[self.config['kernel']] + + for link in self.scan_symlinks(self.context, kernel.layer_name, kernel.symbol_table_name): try: from_name = link.get_link_name() diff --git a/volatility3/framework/plugins/windows/vadinfo.py b/volatility3/framework/plugins/windows/vadinfo.py index 996aba1d1c..ee8bc04315 100644 --- a/volatility3/framework/plugins/windows/vadinfo.py +++ b/volatility3/framework/plugins/windows/vadinfo.py @@ -33,7 +33,7 @@ class VadInfo(interfaces.plugins.PluginInterface): """Lists process memory ranges.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (2, 0, 0) MAXSIZE_DEFAULT = 0 @@ -44,10 +44,8 @@ def __init__(self, *args, **kwargs): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements - return [requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + return [requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), # TODO: Convert this to a ListRequirement so that people can filter on sets of ranges requirements.IntRequirement(name = 'address', 
description = "Process virtual memory address to include " \ @@ -169,6 +167,8 @@ def vad_dump(cls, def _generator(self, procs): + kernel = self.context.modules[self.config['kernel']] + def passthrough(_: interfaces.objects.ObjectInterface) -> bool: return False @@ -196,12 +196,14 @@ def filter_function(x: interfaces.objects.ObjectInterface) -> bool: yield (0, (proc.UniqueProcessId, process_name, format_hints.Hex(vad.vol.offset), format_hints.Hex(vad.get_start()), format_hints.Hex(vad.get_end()), vad.get_tag(), vad.get_protection( - self.protect_values(self.context, self.config['primary'], self.config['nt_symbols']), + self.protect_values(self.context, kernel.layer_name, kernel.symbol_table_name), winnt_protections), vad.get_commit_charge(), vad.get_private_memory(), format_hints.Hex(vad.get_parent()), vad.get_file_name(), file_output)) def run(self): + kernel = self.context.modules[self.config['kernel']] + filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) return renderers.TreeGrid([("PID", int), ("Process", str), ("Offset", format_hints.Hex), @@ -210,6 +212,6 @@ def run(self): ("Parent", format_hints.Hex), ("File", str), ("File output", str)], self._generator( pslist.PsList.list_processes(context = self.context, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_func = filter_func))) diff --git a/volatility3/framework/plugins/windows/vadyarascan.py b/volatility3/framework/plugins/windows/vadyarascan.py index 6c4723e88b..756e18be54 100644 --- a/volatility3/framework/plugins/windows/vadyarascan.py +++ b/volatility3/framework/plugins/windows/vadyarascan.py @@ -17,16 +17,14 @@ class VadYaraScan(interfaces.plugins.PluginInterface): """Scans all the Virtual Address Descriptor memory maps using yara.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) @classmethod def 
get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = "Memory layer for the kernel", - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.BooleanRequirement(name = "wide", description = "Match wide (unicode) strings", default = False, @@ -55,13 +53,15 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] def _generator(self): + kernel = self.context.modules[self.config['kernel']] + rules = yarascan.YaraScan.process_yara_options(dict(self.config)) filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) for task in pslist.PsList.list_processes(context = self.context, - layer_name = self.config['primary'], - symbol_table = self.config['nt_symbols'], + layer_name = kernel.layer_name, + symbol_table = kernel.symbol_table_name, filter_func = filter_func): layer_name = task.add_process_layer() layer = self.context.layers[layer_name] diff --git a/volatility3/framework/plugins/windows/verinfo.py b/volatility3/framework/plugins/windows/verinfo.py index 25115136b6..82571b477e 100644 --- a/volatility3/framework/plugins/windows/verinfo.py +++ b/volatility3/framework/plugins/windows/verinfo.py @@ -27,21 +27,19 @@ class VerInfo(interfaces.plugins.PluginInterface): """Lists version information from PE files.""" + _required_framework_version = (1, 2, 0) _version = (1, 0, 0) - _required_framework_version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: ## TODO: we might add a regex option on the name later, but otherwise we're good ## TODO: and we don't want any CLI options from pslist, modules, or moddump return [ + 
requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.PluginRequirement(name = 'modules', plugin = modules.Modules, version = (1, 0, 0)), requirements.VersionRequirement(name = 'dlllist', component = dlllist.DllList, version = (2, 0, 0)), - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols"), requirements.BooleanRequirement(name = "extensive", description = "Search physical layer for version information", optional = True, @@ -123,6 +121,7 @@ def _generator(self, procs: Generator[interfaces.objects.ObjectInterface, None, mods: of modules session_layers: of layers in the session to be checked """ + kernel = self.context.modules[self.config['kernel']] pe_table_name = intermed.IntermediateSymbolTable.create(self.context, self.config_path, @@ -131,7 +130,7 @@ def _generator(self, procs: Generator[interfaces.objects.ObjectInterface, None, class_types = pe.class_types) # TODO: Fix this so it works with more than just intel layers - physical_layer_name = self.context.layers[self.config['primary']].config.get('memory_layer', None) + physical_layer_name = self.context.layers[kernel.layer_name].config.get('memory_layer', None) for mod in mods: try: @@ -191,13 +190,14 @@ def _generator(self, procs: Generator[interfaces.objects.ObjectInterface, None, build)) def run(self): - procs = pslist.PsList.list_processes(self.context, self.config["primary"], self.config["nt_symbols"]) + kernel = self.context.modules[self.config['kernel']] - mods = modules.Modules.list_modules(self.context, self.config["primary"], self.config["nt_symbols"]) + procs = pslist.PsList.list_processes(self.context, kernel.layer_name, 
kernel.symbol_table_name) + + mods = modules.Modules.list_modules(self.context, kernel.layer_name, kernel.symbol_table_name) # populate the session layers for kernel modules - session_layers = modules.Modules.get_session_layers(self.context, self.config['primary'], - self.config['nt_symbols']) + session_layers = modules.Modules.get_session_layers(self.context, kernel.layer_name, kernel.symbol_table_name) return renderers.TreeGrid([("PID", int), ("Process", str), ("Base", format_hints.Hex), ("Name", str), ("Major", int), ("Minor", int), ("Product", int), ("Build", int)], diff --git a/volatility3/framework/plugins/windows/virtmap.py b/volatility3/framework/plugins/windows/virtmap.py index 238b0df197..9a43cb1f64 100644 --- a/volatility3/framework/plugins/windows/virtmap.py +++ b/volatility3/framework/plugins/windows/virtmap.py @@ -16,16 +16,14 @@ class VirtMap(interfaces.plugins.PluginInterface): """Lists virtual mapped sections.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (1, 2, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.TranslationLayerRequirement(name = 'primary', - description = 'Memory layer for the kernel', - architectures = ["Intel32", "Intel64"]), - requirements.SymbolTableRequirement(name = "nt_symbols", description = "Windows kernel symbols") + requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]) ] def _generator(self, map): @@ -112,8 +110,10 @@ def scannable_sections(cls, module: interfaces.context.ModuleInterface) -> Gener yield value def run(self): - layer = self.context.layers[self.config['primary']] - module = self.context.module(self.config['nt_symbols'], + kernel = self.context.modules[self.config['kernel']] + + layer = self.context.layers[kernel.layer_name] + module = 
self.context.module(kernel.symbol_table_name, layer_name = layer.name, offset = layer.config['kernel_virtual_offset']) From 27c5dfdffdc4659286f04daf85ae75d091538db5 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 24 Aug 2021 21:47:43 +0100 Subject: [PATCH 213/294] Plugins: Add in architectures for ModuleRequirement Fixes #553 --- volatility3/framework/plugins/linux/pslist.py | 2 +- volatility3/framework/plugins/mac/bash.py | 3 ++- volatility3/framework/plugins/mac/check_sysctl.py | 3 ++- volatility3/framework/plugins/mac/check_trap_table.py | 3 ++- volatility3/framework/plugins/mac/ifconfig.py | 3 ++- volatility3/framework/plugins/mac/pslist.py | 3 ++- volatility3/framework/plugins/windows/pslist.py | 3 ++- 7 files changed, 13 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/plugins/linux/pslist.py b/volatility3/framework/plugins/linux/pslist.py index 14295be065..02d5118c2b 100644 --- a/volatility3/framework/plugins/linux/pslist.py +++ b/volatility3/framework/plugins/linux/pslist.py @@ -18,7 +18,7 @@ class PsList(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux'), + requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, diff --git a/volatility3/framework/plugins/mac/bash.py b/volatility3/framework/plugins/mac/bash.py index e2e39e20de..d1547dcc36 100644 --- a/volatility3/framework/plugins/mac/bash.py +++ b/volatility3/framework/plugins/mac/bash.py @@ -25,7 +25,8 @@ class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + 
architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', diff --git a/volatility3/framework/plugins/mac/check_sysctl.py b/volatility3/framework/plugins/mac/check_sysctl.py index 0f755d37dd..0468310d68 100644 --- a/volatility3/framework/plugins/mac/check_sysctl.py +++ b/volatility3/framework/plugins/mac/check_sysctl.py @@ -25,7 +25,8 @@ class Check_sysctl(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] diff --git a/volatility3/framework/plugins/mac/check_trap_table.py b/volatility3/framework/plugins/mac/check_trap_table.py index 3adb3b5e05..3703c6f0ca 100644 --- a/volatility3/framework/plugins/mac/check_trap_table.py +++ b/volatility3/framework/plugins/mac/check_trap_table.py @@ -24,7 +24,8 @@ class Check_trap_table(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), ] diff --git 
a/volatility3/framework/plugins/mac/ifconfig.py b/volatility3/framework/plugins/mac/ifconfig.py index e70ffd3a27..826f5761bb 100644 --- a/volatility3/framework/plugins/mac/ifconfig.py +++ b/volatility3/framework/plugins/mac/ifconfig.py @@ -16,7 +16,8 @@ class Ifconfig(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)) ] diff --git a/volatility3/framework/plugins/mac/pslist.py b/volatility3/framework/plugins/mac/pslist.py index fb2617be11..b87822efe1 100644 --- a/volatility3/framework/plugins/mac/pslist.py +++ b/volatility3/framework/plugins/mac/pslist.py @@ -23,7 +23,8 @@ class PsList(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS'), + requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), requirements.ChoiceRequirement(name = 'pslist_method', description = 'Method to determine for processes', diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index b95160a82a..5121b1b595 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -27,7 +27,8 @@ class PsList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel'), + requirements.ModuleRequirement(name = 
'kernel', description = 'Windows kernel', + architectures = ["Intel32", "Intel64"]), requirements.BooleanRequirement(name = 'physical', description = 'Display physical offsets instead of virtual', default = cls.PHYSICAL_DEFAULT, From b90fd6bbb205305ce9a0d49059eb564ea0cc5957 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 24 Aug 2021 21:59:18 +0100 Subject: [PATCH 214/294] Automagic: Only traverse down intel layers --- volatility3/framework/automagic/symbol_finder.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/symbol_finder.py b/volatility3/framework/automagic/symbol_finder.py index 03d051c49d..6f6332a8d4 100644 --- a/volatility3/framework/automagic/symbol_finder.py +++ b/volatility3/framework/automagic/symbol_finder.py @@ -93,8 +93,10 @@ def _banner_scan(self, 'raw_unicode_escape'))] # type: Iterable[Any] else: # Swap to the physical layer for scanning + # Only traverse down a layer if it's an intel layer # TODO: Fix this so it works for layers other than just Intel - layer = context.layers[layer.config['memory_layer']] + if isinstance(layer, layers.intel.Intel): + layer = context.layers[layer.config['memory_layer']] banner_list = layer.scan(context = context, scanner = mss, progress_callback = progress_callback) for _, banner in banner_list: From ad3560e1bc5d613733832a8c650415b8471e236c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 25 Aug 2021 13:00:23 +0100 Subject: [PATCH 215/294] Linux: Shift linux plugins to using kernel as their default module --- volatility3/framework/plugins/linux/bash.py | 10 +-- .../framework/plugins/linux/check_afinfo.py | 5 +- .../framework/plugins/linux/check_creds.py | 5 +- .../framework/plugins/linux/check_idt.py | 5 +- .../framework/plugins/linux/check_modules.py | 7 +- .../framework/plugins/linux/check_syscall.py | 8 ++- volatility3/framework/plugins/linux/elfs.py | 5 +- .../plugins/linux/keyboard_notifiers.py | 5 +- volatility3/framework/plugins/linux/kmsg.py | 64 
+++++++++---------- volatility3/framework/plugins/linux/lsmod.py | 5 +- volatility3/framework/plugins/linux/lsof.py | 7 +- .../framework/plugins/linux/malfind.py | 7 +- volatility3/framework/plugins/linux/proc.py | 5 +- volatility3/framework/plugins/linux/pslist.py | 5 +- volatility3/framework/plugins/linux/pstree.py | 4 +- .../framework/plugins/linux/tty_check.py | 5 +- 16 files changed, 82 insertions(+), 70 deletions(-) diff --git a/volatility3/framework/plugins/linux/bash.py b/volatility3/framework/plugins/linux/bash.py index 471f4cbe77..dd2cb2c0fc 100644 --- a/volatility3/framework/plugins/linux/bash.py +++ b/volatility3/framework/plugins/linux/bash.py @@ -26,7 +26,8 @@ class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', element_type = int, @@ -35,7 +36,8 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] ] def _generator(self, tasks): - is_32bit = not symbols.symbol_table_is_64bit(self.context, self.config["vmlinux.symbol_table_name"]) + vmlinux = self.context.modules[self.config["kernel"]] + is_32bit = not symbols.symbol_table_is_64bit(self.context, vmlinux.symbol_table_name) if is_32bit: pack_format = "I" bash_json_file = "bash32" @@ -90,7 +92,7 @@ def run(self): ("Command", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['vmlinux'], + self.config['kernel'], filter_func = filter_func))) def generate_timeline(self): @@ -98,7 +100,7 @@ def generate_timeline(self): for row in self._generator( pslist.PsList.list_tasks(self.context, - 
self.config['vmlinux'], + self.config['kernel'], filter_func = filter_func)): _depth, row_data = row description = f"{row_data[0]} ({row_data[1]}): \"{row_data[3]}\"" diff --git a/volatility3/framework/plugins/linux/check_afinfo.py b/volatility3/framework/plugins/linux/check_afinfo.py index 9105247be5..f54b89ee31 100644 --- a/volatility3/framework/plugins/linux/check_afinfo.py +++ b/volatility3/framework/plugins/linux/check_afinfo.py @@ -23,7 +23,8 @@ class Check_afinfo(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), ] # returns whether the symbol is found within the kernel (system.map) or not @@ -61,7 +62,7 @@ def _check_afinfo(self, var_name, var, op_members, seq_members): def _generator(self): - vmlinux = self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] op_members = vmlinux.get_type('file_operations').members seq_members = vmlinux.get_type('seq_operations').members diff --git a/volatility3/framework/plugins/linux/check_creds.py b/volatility3/framework/plugins/linux/check_creds.py index 28f3d178b9..06ac392dbc 100644 --- a/volatility3/framework/plugins/linux/check_creds.py +++ b/volatility3/framework/plugins/linux/check_creds.py @@ -19,12 +19,13 @@ class Check_creds(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)) ] def _generator(self): - vmlinux = 
self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] type_task = vmlinux.get_type("task_struct") diff --git a/volatility3/framework/plugins/linux/check_idt.py b/volatility3/framework/plugins/linux/check_idt.py index 016717841c..e612300411 100644 --- a/volatility3/framework/plugins/linux/check_idt.py +++ b/volatility3/framework/plugins/linux/check_idt.py @@ -22,13 +22,14 @@ class Check_idt(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - vmlinux = self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] modules = lsmod.Lsmod.list_modules(self.context, vmlinux.name) diff --git a/volatility3/framework/plugins/linux/check_modules.py b/volatility3/framework/plugins/linux/check_modules.py index 362dce6923..40b4f6e0c9 100644 --- a/volatility3/framework/plugins/linux/check_modules.py +++ b/volatility3/framework/plugins/linux/check_modules.py @@ -23,7 +23,8 @@ class Check_modules(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] @@ -60,11 +61,11 @@ def get_kset_modules(self, 
context: interfaces.context.ContextInterface, vmlinux return ret def _generator(self): - kset_modules = self.get_kset_modules(self.context, self.config['vmlinux']) + kset_modules = self.get_kset_modules(self.context, self.config['kernel']) lsmod_modules = set( str(utility.array_to_string(modules.name)) - for modules in lsmod.Lsmod.list_modules(self.context, self.config['vmlinux'])) + for modules in lsmod.Lsmod.list_modules(self.context, self.config['kernel'])) for mod_name in set(kset_modules.keys()).difference(lsmod_modules): yield (0, (format_hints.Hex(kset_modules[mod_name]), str(mod_name))) diff --git a/volatility3/framework/plugins/linux/check_syscall.py b/volatility3/framework/plugins/linux/check_syscall.py index 3acd2877af..e1ba413615 100644 --- a/volatility3/framework/plugins/linux/check_syscall.py +++ b/volatility3/framework/plugins/linux/check_syscall.py @@ -30,7 +30,8 @@ class Check_syscall(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), ] def _get_table_size_next_symbol(self, table_addr, ptr_sz, vmlinux): @@ -101,7 +102,8 @@ def _get_table_info_disassembly(self, ptr_sz, vmlinux): # if we can't find the disassemble function then bail and rely on a different method return 0 - data = self.context.layers.read(self.config['vmlinux.layer_name'], func_addr, 6) + vmlinux = self.context.modules[self.config['kernel']] + data = self.context.layers.read(vmlinux.layer_name, func_addr, 6) for (address, size, mnemonic, op_str) in md.disasm_lite(data, func_addr): if mnemonic == 'CMP': @@ -126,7 +128,7 @@ def _get_table_info(self, vmlinux, table_name, ptr_sz): # TODO - add finding and parsing unistd.h once cached file enumeration is added def _generator(self): - vmlinux = 
self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] ptr_sz = vmlinux.get_type("pointer").size if ptr_sz == 4: diff --git a/volatility3/framework/plugins/linux/elfs.py b/volatility3/framework/plugins/linux/elfs.py index 3fcb017cd6..5b072ed7db 100644 --- a/volatility3/framework/plugins/linux/elfs.py +++ b/volatility3/framework/plugins/linux/elfs.py @@ -22,7 +22,8 @@ class Elfs(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', @@ -56,5 +57,5 @@ def run(self): ("End", format_hints.Hex), ("File Path", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['vmlinux'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/keyboard_notifiers.py b/volatility3/framework/plugins/linux/keyboard_notifiers.py index 012632bb64..8bb79b6ec8 100644 --- a/volatility3/framework/plugins/linux/keyboard_notifiers.py +++ b/volatility3/framework/plugins/linux/keyboard_notifiers.py @@ -21,13 +21,14 @@ class Keyboard_notifiers(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, 
version = (2, 0, 0)) ] def _generator(self): - vmlinux = self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] modules = lsmod.Lsmod.list_modules(self.context, vmlinux.name) diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py index 67a5540671..27327e97b4 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -2,17 +2,15 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import logging -from typing import List, Iterator, Tuple, Generator - from abc import ABC, abstractmethod from enum import Enum +from typing import List, Iterator, Tuple, Generator from volatility3.framework import renderers, interfaces, constants, contexts, class_subclasses from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import plugins from volatility3.framework.objects import utility - vollog = logging.getLogger(__name__) @@ -53,14 +51,15 @@ class ABCKmsg(ABC): ) def __init__( - self, - context: interfaces.context.ContextInterface, - config: interfaces.configuration.HierarchicalDict + self, + context: interfaces.context.ContextInterface, + config: interfaces.configuration.HierarchicalDict ): self._context = context self._config = config - self.layer_name = self._config['primary'] # type: ignore - symbol_table_name = self._config['vmlinux'] # type: ignore + vmlinux = context.modules[self._config['kernel']] + self.layer_name = vmlinux.layer_name # type: ignore + symbol_table_name = vmlinux.symbol_table_name # type: ignore self.vmlinux = contexts.Module(context, symbol_table_name, self.layer_name, 0) # type: ignore self.long_unsigned_int_size = self.vmlinux.get_type('long unsigned int').size @@ -80,20 +79,17 @@ def run_all( Yields: kmsg records """ -
contexts.Module(context, symbol_table_name, layer_name, 0) # type: ignore + vmlinux = context.modules[config['kernel']] kmsg_inst = None # type: ignore for subclass in class_subclasses(cls): - if not subclass.symtab_checks(vmlinux=vmlinux): + if not subclass.symtab_checks(vmlinux = vmlinux): vollog.log(constants.LOGLEVEL_VVVV, "Kmsg implementation '%s' doesn't match this memory dump", subclass.__name__) continue vollog.log(constants.LOGLEVEL_VVVV, "Kmsg implementation '%s' matches!", subclass.__name__) - kmsg_inst = subclass(context=context, config=config) + kmsg_inst = subclass(context = context, config = config) # More than one class could be executed for an specific kernel # version i.e. Netfilter Ingress hooks # We expect just one implementation to be executed for an specific kernel @@ -120,7 +116,7 @@ def symtab_checks(cls, vmlinux: interfaces.context.ModuleInterface) -> bool: def get_string(self, addr: int, length: int) -> str: txt = self._context.layers[self.layer_name].read(addr, length) # type: ignore - return txt.decode(encoding='utf8', errors='replace') + return txt.decode(encoding = 'utf8', errors = 'replace') def nsec_to_sec_str(self, nsec: int) -> str: # See kernel/printk/printk.c:print_time() @@ -172,6 +168,7 @@ def get_facility_text(cls, facility: int) -> str: vollog.debug(f"Facility {facility} unknown") return str(facility) + class KmsgLegacy(ABCKmsg): """Linux kernels prior to v5.10, the ringbuffer is initially kept in __log_buf, and log_buf is a pointer to the former. __log_buf is declared as @@ -185,6 +182,7 @@ class KmsgLegacy(ABCKmsg): consequently to the new buffer. In that case, the original static buffer in __log_buf is unused. 
""" + @classmethod def symtab_checks(cls, vmlinux) -> bool: return vmlinux.has_type('printk_log') @@ -207,20 +205,20 @@ def get_dict_lines(self, msg) -> Generator[str, None, None]: yield " " + chunk.decode() def run(self) -> Iterator[Tuple[str, str, str, str, str]]: - log_buf_ptr = self.vmlinux.object_from_symbol(symbol_name='log_buf') + log_buf_ptr = self.vmlinux.object_from_symbol(symbol_name = 'log_buf') if log_buf_ptr == 0: # This is weird, let's fallback to check the static ringbuffer. - log_buf_ptr = self.vmlinux.object_from_symbol(symbol_name='__log_buf').vol.offset + log_buf_ptr = self.vmlinux.object_from_symbol(symbol_name = '__log_buf').vol.offset if log_buf_ptr == 0: raise ValueError("Log buffer is not available") - log_first_idx = int(self.vmlinux.object_from_symbol(symbol_name='log_first_idx')) + log_first_idx = int(self.vmlinux.object_from_symbol(symbol_name = 'log_first_idx')) cur_idx = log_first_idx end_idx = None # We don't need log_next_idx here. See below msg.len == 0 while cur_idx != end_idx: end_idx = log_first_idx msg_offset = log_buf_ptr + cur_idx # type: ignore - msg = self.vmlinux.object(object_type='printk_log', offset=msg_offset) + msg = self.vmlinux.object(object_type = 'printk_log', offset = msg_offset) if msg.len == 0: # As per kernel/printk/printk.c: # A length == 0 for the next message indicates a wrap-around to @@ -273,6 +271,7 @@ class KmsgFiveTen(ABCKmsg): See printk.c and printk_ringbuffer.c in kernel/printk/ folder for more details. 
""" + @classmethod def symtab_checks(cls, vmlinux) -> bool: return vmlinux.has_symbol('prb') @@ -318,20 +317,20 @@ def get_dict_lines(self, info) -> Generator[str, None, None]: def run(self) -> Iterator[Tuple[str, str, str, str, str]]: # static struct printk_ringbuffer *prb = &printk_rb_static; - ringbuffers = self.vmlinux.object_from_symbol(symbol_name='prb').dereference() + ringbuffers = self.vmlinux.object_from_symbol(symbol_name = 'prb').dereference() desc_ring = ringbuffers.desc_ring text_data_ring = ringbuffers.text_data_ring desc_count = 1 << desc_ring.count_bits - desc_arr = self.vmlinux.object(object_type="array", - offset=desc_ring.descs, - subtype=self.vmlinux.get_type("prb_desc"), - count=desc_count) - info_arr = self.vmlinux.object(object_type="array", - offset=desc_ring.infos, - subtype=self.vmlinux.get_type("printk_info"), - count=desc_count) + desc_arr = self.vmlinux.object(object_type = "array", + offset = desc_ring.descs, + subtype = self.vmlinux.get_type("prb_desc"), + count = desc_count) + info_arr = self.vmlinux.object(object_type = "array", + offset = desc_ring.infos, + subtype = self.vmlinux.get_type("printk_info"), + count = desc_count) # See kernel/printk/printk_ringbuffer.h desc_state_var_bytes_sz = self.long_unsigned_int_size @@ -371,15 +370,12 @@ class Kmsg(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.TranslationLayerRequirement(name='primary', - description="Memory layer for the kernel", - architectures=['Intel32', 'Intel64']), - requirements.SymbolTableRequirement(name='vmlinux', - description="Linux kernel symbols"), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ['Intel32', 'Intel64']), ] def _generator(self) -> Iterator[Tuple[int, Tuple[str, str, str, str, str]]]: - for values in ABCKmsg.run_all(context=self.context, config=self.config): + for values in ABCKmsg.run_all(context = 
self.context, config = self.config): yield (0, values) def run(self): diff --git a/volatility3/framework/plugins/linux/lsmod.py b/volatility3/framework/plugins/linux/lsmod.py index a871ebed92..7b70db4bae 100644 --- a/volatility3/framework/plugins/linux/lsmod.py +++ b/volatility3/framework/plugins/linux/lsmod.py @@ -25,7 +25,8 @@ class Lsmod(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), ] @classmethod @@ -54,7 +55,7 @@ def list_modules(cls, context: interfaces.context.ContextInterface, vmlinux_modu def _generator(self): try: - for module in self.list_modules(self.context, self.config['vmlinux']): + for module in self.list_modules(self.context, self.config['kernel']): mod_size = module.get_init_size() + module.get_core_size() diff --git a/volatility3/framework/plugins/linux/lsof.py b/volatility3/framework/plugins/linux/lsof.py index 3b21d96824..b452bf0ba7 100644 --- a/volatility3/framework/plugins/linux/lsof.py +++ b/volatility3/framework/plugins/linux/lsof.py @@ -24,7 +24,8 @@ class Lsof(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -34,7 +35,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] ] def 
_generator(self, tasks): - vmlinux = self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] symbol_table = None for task in tasks: @@ -56,5 +57,5 @@ def run(self): return renderers.TreeGrid([("PID", int), ("Process", str), ("FD", int), ("Path", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['vmlinux'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/malfind.py b/volatility3/framework/plugins/linux/malfind.py index c7fbd9ad1c..fcfd68855b 100644 --- a/volatility3/framework/plugins/linux/malfind.py +++ b/volatility3/framework/plugins/linux/malfind.py @@ -20,7 +20,8 @@ class Malfind(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', @@ -45,8 +46,8 @@ def _list_injections(self, task): def _generator(self, tasks): # determine if we're on a 32 or 64 bit kernel - if self.context.symbol_space.get_type( - self.config["vmlinux.symbol_table_name"] + constants.BANG + "pointer").size == 4: + vmlinux = self.context.modules[self.config['kernel']] + if self.context.symbol_space.get_type(vmlinux.symbol_table_name + constants.BANG + "pointer").size == 4: is_32bit_arch = True else: is_32bit_arch = False diff --git a/volatility3/framework/plugins/linux/proc.py b/volatility3/framework/plugins/linux/proc.py index 893646d049..2c4cd2aff2 100644 --- a/volatility3/framework/plugins/linux/proc.py +++ b/volatility3/framework/plugins/linux/proc.py @@ -21,7 +21,8 @@ class 
Maps(plugins.PluginInterface): def get_requirements(cls): # Since we're calling the plugin, make sure we have the plugin's requirements return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', @@ -65,5 +66,5 @@ def run(self): ("File Path", str)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['vmlinux'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/pslist.py b/volatility3/framework/plugins/linux/pslist.py index 02d5118c2b..ed7374f2fd 100644 --- a/volatility3/framework/plugins/linux/pslist.py +++ b/volatility3/framework/plugins/linux/pslist.py @@ -18,7 +18,8 @@ class PsList(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, @@ -49,7 +50,7 @@ def filter_func(x): def _generator(self): for task in self.list_tasks(self.context, - self.config['vmlinux'], + self.config['kernel'], filter_func = self.create_pid_filter(self.config.get('pid', None))): pid = task.pid ppid = 0 diff --git a/volatility3/framework/plugins/linux/pstree.py b/volatility3/framework/plugins/linux/pstree.py index 2f11cf5ecb..9b24c27f72 100644 --- a/volatility3/framework/plugins/linux/pstree.py +++ b/volatility3/framework/plugins/linux/pstree.py @@ -34,8 +34,8 @@ def 
find_level(self, pid): def _generator(self): """Generates the.""" - for proc in self.list_tasks(self.context, self.config['vmlinux.layer_name'], - self.config['vmlinux.symbol_table_name']): + vmlinux = self.context.modules[self.config['kernel']] + for proc in self.list_tasks(self.context, vmlinux.layer_name, vmlinux.symbol_table_name): self._processes[proc.pid] = proc # Build the child/level maps diff --git a/volatility3/framework/plugins/linux/tty_check.py b/volatility3/framework/plugins/linux/tty_check.py index f4a4a2820f..8c0662ca76 100644 --- a/volatility3/framework/plugins/linux/tty_check.py +++ b/volatility3/framework/plugins/linux/tty_check.py @@ -24,13 +24,14 @@ class tty_check(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'vmlinux', architectures = ["Intel32", "Intel64"]), + requirements.ModuleRequirement(name = 'kernel', description = 'Linux kernel', + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), requirements.VersionRequirement(name = 'linuxutils', component = linux.LinuxUtilities, version = (2, 0, 0)) ] def _generator(self): - vmlinux = self.context.modules[self.config['vmlinux']] + vmlinux = self.context.modules[self.config['kernel']] modules = lsmod.Lsmod.list_modules(self.context, vmlinux.name) From ece34139db9da959208fd912aab269d2700bef28 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 25 Aug 2021 16:10:55 +0100 Subject: [PATCH 216/294] Mac: Standardize on kernel config name --- volatility3/framework/plugins/mac/bash.py | 13 +++++++------ .../framework/plugins/mac/check_syscall.py | 8 ++++---- .../framework/plugins/mac/check_sysctl.py | 8 ++++---- .../framework/plugins/mac/check_trap_table.py | 8 ++++---- volatility3/framework/plugins/mac/ifconfig.py | 4 ++-- .../framework/plugins/mac/kauth_listeners.py | 10 +++++-----
.../framework/plugins/mac/kauth_scopes.py | 16 +++++++--------- volatility3/framework/plugins/mac/kevents.py | 4 ++-- volatility3/framework/plugins/mac/list_files.py | 4 ++-- volatility3/framework/plugins/mac/lsmod.py | 4 ++-- volatility3/framework/plugins/mac/lsof.py | 9 +++++---- volatility3/framework/plugins/mac/malfind.py | 8 ++++---- volatility3/framework/plugins/mac/mount.py | 4 ++-- volatility3/framework/plugins/mac/netstat.py | 6 +++--- volatility3/framework/plugins/mac/proc_maps.py | 6 +++--- volatility3/framework/plugins/mac/psaux.py | 4 ++-- volatility3/framework/plugins/mac/pslist.py | 4 ++-- volatility3/framework/plugins/mac/pstree.py | 4 ++-- .../framework/plugins/mac/socket_filters.py | 6 +++--- volatility3/framework/plugins/mac/timers.py | 8 ++++---- volatility3/framework/plugins/mac/trustedbsd.py | 8 ++++---- volatility3/framework/plugins/mac/vfsevents.py | 4 ++-- 22 files changed, 75 insertions(+), 75 deletions(-) diff --git a/volatility3/framework/plugins/mac/bash.py b/volatility3/framework/plugins/mac/bash.py index d1547dcc36..1929e35f73 100644 --- a/volatility3/framework/plugins/mac/bash.py +++ b/volatility3/framework/plugins/mac/bash.py @@ -25,7 +25,7 @@ class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -35,7 +35,8 @@ def get_requirements(cls): ] def _generator(self, tasks): - is_32bit = not symbols.symbol_table_is_64bit(self.context, self.config["darwin.symbol_table_name"]) + darwin = self.context.modules[self.config['kernel']] + is_32bit = not symbols.symbol_table_is_64bit(self.context, darwin.symbol_table_name) if 
is_32bit: pack_format = "I" bash_json_file = "bash32" @@ -65,7 +66,7 @@ def _generator(self, tasks): for address in proc_layer.scan(self.context, scanners.BytesScanner(b"#"), sections = task.get_process_memory_sections(self.context, - self.config['darwin'], + self.config['kernel'], rw_no_file = True)): bang_addrs.append(struct.pack(pack_format, address)) @@ -74,7 +75,7 @@ def _generator(self, tasks): for address, _ in proc_layer.scan(self.context, scanners.MultiStringScanner(bang_addrs), sections = task.get_process_memory_sections(self.context, - self.config['darwin'], + self.config['kernel'], rw_no_file = True)): hist = self.context.object(bash_table_name + constants.BANG + "hist_entry", offset = address - ts_offset, @@ -94,7 +95,7 @@ def run(self): ("Command", str)], self._generator( list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func))) def generate_timeline(self): @@ -103,7 +104,7 @@ def generate_timeline(self): for row in self._generator( list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func)): _depth, row_data = row description = f"{row_data[0]} ({row_data[1]}): \"{row_data[3]}\"" diff --git a/volatility3/framework/plugins/mac/check_syscall.py b/volatility3/framework/plugins/mac/check_syscall.py index 4d96c17334..3a0cd28c95 100644 --- a/volatility3/framework/plugins/mac/check_syscall.py +++ b/volatility3/framework/plugins/mac/check_syscall.py @@ -23,16 +23,16 @@ class Check_syscall(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), 
requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) @@ -54,7 +54,7 @@ def _generator(self): continue module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, - call_addr, self.config['darwin']) + call_addr, self.config['kernel']) yield (0, (format_hints.Hex(table.vol.offset), "SysCall", i, format_hints.Hex(call_addr), module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/check_sysctl.py b/volatility3/framework/plugins/mac/check_sysctl.py index 0468310d68..f7a8973ff7 100644 --- a/volatility3/framework/plugins/mac/check_sysctl.py +++ b/volatility3/framework/plugins/mac/check_sysctl.py @@ -25,7 +25,7 @@ class Check_sysctl(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) @@ -113,9 +113,9 @@ def _process_sysctl_list(self, kernel, sysctl_list, recursive = 0): break def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, 
self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) @@ -128,7 +128,7 @@ def _generator(self): continue module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, check_addr, - self.config['darwin']) + self.config['kernel']) yield (0, (name, sysctl.oid_number, sysctl.get_perms(), format_hints.Hex(check_addr), val, module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/check_trap_table.py b/volatility3/framework/plugins/mac/check_trap_table.py index 3703c6f0ca..0976d6ea04 100644 --- a/volatility3/framework/plugins/mac/check_trap_table.py +++ b/volatility3/framework/plugins/mac/check_trap_table.py @@ -24,16 +24,16 @@ class Check_trap_table(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), ] def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) @@ -49,7 +49,7 @@ def _generator(self): continue module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr, - self.config['darwin']) + self.config['kernel']) yield (0, (format_hints.Hex(table.vol.offset), "TrapTable", i, format_hints.Hex(call_addr), 
module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/ifconfig.py b/volatility3/framework/plugins/mac/ifconfig.py index 826f5761bb..8e72528c5d 100644 --- a/volatility3/framework/plugins/mac/ifconfig.py +++ b/volatility3/framework/plugins/mac/ifconfig.py @@ -16,13 +16,13 @@ class Ifconfig(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)) ] def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] try: list_head = kernel.object_from_symbol(symbol_name = "ifnet_head") diff --git a/volatility3/framework/plugins/mac/kauth_listeners.py b/volatility3/framework/plugins/mac/kauth_listeners.py index 8036643504..42eaa45825 100644 --- a/volatility3/framework/plugins/mac/kauth_listeners.py +++ b/volatility3/framework/plugins/mac/kauth_listeners.py @@ -18,7 +18,7 @@ class Kauth_listeners(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)), @@ -31,13 +31,13 @@ def _generator(self): """ Enumerates the listeners for each kauth scope """ - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, 
self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) - for scope in kauth_scopes.Kauth_scopes.list_kauth_scopes(self.context, self.config['darwin']): + for scope in kauth_scopes.Kauth_scopes.list_kauth_scopes(self.context, self.config['kernel']): scope_name = utility.pointer_to_string(scope.ks_identifier, 128) @@ -47,7 +47,7 @@ def _generator(self): continue module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, callback, - self.config['darwin']) + self.config['kernel']) yield (0, (scope_name, format_hints.Hex(listener.kll_idata), format_hints.Hex(callback), module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/kauth_scopes.py b/volatility3/framework/plugins/mac/kauth_scopes.py index 7f5a20d8fa..f66c5cc0e1 100644 --- a/volatility3/framework/plugins/mac/kauth_scopes.py +++ b/volatility3/framework/plugins/mac/kauth_scopes.py @@ -2,7 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import logging -from typing import Iterable, Callable, Tuple +from typing import Iterable, Callable from volatility3.framework import renderers, interfaces from volatility3.framework.configuration import requirements @@ -23,7 +23,7 @@ class Kauth_scopes(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) @@ -34,9 +34,7 @@ def list_kauth_scopes(cls, context: interfaces.context.ContextInterface, kernel_module_name: 
str, filter_func: Callable[[int], bool] = lambda _: False) -> \ - Iterable[Tuple[interfaces.objects.ObjectInterface, - interfaces.objects.ObjectInterface, - interfaces.objects.ObjectInterface]]: + Iterable[interfaces.objects.ObjectInterface]: """ Enumerates the registered kauth scopes and yields each object Uses smear-safe enumeration API @@ -50,20 +48,20 @@ def list_kauth_scopes(cls, yield scope def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) - for scope in self.list_kauth_scopes(self.context, self.config['darwin']): + for scope in self.list_kauth_scopes(self.context, self.config['kernel']): callback = scope.ks_callback if callback == 0: continue module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, callback, - self.config['darwin']) + self.config['kernel']) identifier = utility.pointer_to_string(scope.ks_identifier, 128) diff --git a/volatility3/framework/plugins/mac/kevents.py b/volatility3/framework/plugins/mac/kevents.py index 47087ce388..6fb8e99aff 100644 --- a/volatility3/framework/plugins/mac/kevents.py +++ b/volatility3/framework/plugins/mac/kevents.py @@ -48,7 +48,7 @@ class Kevents(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 2, 0)), @@ -148,7 
+148,7 @@ def _generator(self): filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) for task_name, pid, kn in self.list_kernel_events(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func): filter_index = kn.kn_kevent.filter * -1 diff --git a/volatility3/framework/plugins/mac/list_files.py b/volatility3/framework/plugins/mac/list_files.py index edc37c2a01..9c6a6bcddf 100644 --- a/volatility3/framework/plugins/mac/list_files.py +++ b/volatility3/framework/plugins/mac/list_files.py @@ -23,7 +23,7 @@ class List_Files(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'mount', plugin = mount.Mount, version = (2, 0, 0)), ] @@ -165,7 +165,7 @@ def list_files(cls, yield vnode, full_path def _generator(self): - for vnode, full_path in self.list_files(self.context, self.config['darwin']): + for vnode, full_path in self.list_files(self.context, self.config['kernel']): yield (0, (format_hints.Hex(vnode), full_path)) diff --git a/volatility3/framework/plugins/mac/lsmod.py b/volatility3/framework/plugins/mac/lsmod.py index 18c9a37f7c..5cb242b19e 100644 --- a/volatility3/framework/plugins/mac/lsmod.py +++ b/volatility3/framework/plugins/mac/lsmod.py @@ -22,7 +22,7 @@ class Lsmod(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), ] @@ -76,7 +76,7 @@ def list_modules(cls, context: interfaces.context.ContextInterface, darwin_modul return def _generator(self): - for module in 
self.list_modules(self.context, self.config['darwin']): + for module in self.list_modules(self.context, self.config['kernel']): mod_name = utility.array_to_string(module.name) mod_size = module.size diff --git a/volatility3/framework/plugins/mac/lsof.py b/volatility3/framework/plugins/mac/lsof.py index a7f2250cd6..6d96f102a3 100644 --- a/volatility3/framework/plugins/mac/lsof.py +++ b/volatility3/framework/plugins/mac/lsof.py @@ -21,7 +21,7 @@ class Lsof(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), @@ -32,11 +32,12 @@ def get_requirements(cls): ] def _generator(self, tasks): + darwin = self.context.modules[self.config['kernel']] for task in tasks: pid = task.p_pid - for _, filepath, fd in mac.MacUtilities.files_descriptors_for_process(self.context, self.config[ - 'darwin.symbol_table_name'], + for _, filepath, fd in mac.MacUtilities.files_descriptors_for_process(self.context, + darwin.symbol_table_name, task): if filepath and len(filepath) > 0: yield (0, (pid, fd, filepath)) @@ -48,5 +49,5 @@ def run(self): return renderers.TreeGrid([("PID", int), ("File Descriptor", int), ("File Path", str)], self._generator( list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/malfind.py b/volatility3/framework/plugins/mac/malfind.py index 98d876c90c..cf0a808666 100644 --- a/volatility3/framework/plugins/mac/malfind.py +++ b/volatility3/framework/plugins/mac/malfind.py @@ -18,7 +18,7 @@ class Malfind(interfaces.plugins.PluginInterface): 
@classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -38,13 +38,13 @@ def _list_injections(self, task): proc_layer = self.context.layers[proc_layer_name] for vma in task.get_map_iter(): - if not vma.is_suspicious(self.context, self.context.modules[self.config['darwin']].symbol_table_name): + if not vma.is_suspicious(self.context, self.context.modules[self.config['kernel']].symbol_table_name): data = proc_layer.read(vma.links.start, 64, pad = True) yield vma, data def _generator(self, tasks): # determine if we're on a 32 or 64 bit kernel - if self.context.modules[self.config['darwin']].get_type("pointer").size == 4: + if self.context.modules[self.config['kernel']].get_type("pointer").size == 4: is_32bit_arch = True else: is_32bit_arch = False @@ -72,5 +72,5 @@ def run(self): ("Disasm", interfaces.renderers.Disassembly)], self._generator( list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/mount.py b/volatility3/framework/plugins/mac/mount.py index 8eceb040c5..bc171127ef 100644 --- a/volatility3/framework/plugins/mac/mount.py +++ b/volatility3/framework/plugins/mac/mount.py @@ -21,7 +21,7 @@ class Mount(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), ] @@ -46,7 +46,7 
@@ def list_mounts(cls, context: interfaces.context.ContextInterface, kernel_module yield mount def _generator(self): - for mount in self.list_mounts(self.context, self.config['darwin']): + for mount in self.list_mounts(self.context, self.config['kernel']): vfs = mount.mnt_vfsstat device_name = utility.array_to_string(vfs.f_mntonname) mount_point = utility.array_to_string(vfs.f_mntfromname) diff --git a/volatility3/framework/plugins/mac/netstat.py b/volatility3/framework/plugins/mac/netstat.py index ee5b773f2d..4453a8e30d 100644 --- a/volatility3/framework/plugins/mac/netstat.py +++ b/volatility3/framework/plugins/mac/netstat.py @@ -24,7 +24,7 @@ class Netstat(plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), @@ -74,7 +74,7 @@ def list_sockets(cls, continue if not context.layers[task.vol.native_layer_name].is_valid(socket.vol.offset, - socket.vol.size): + socket.vol.size): continue yield task_name, pid, socket @@ -83,7 +83,7 @@ def _generator(self): filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) for task_name, pid, socket in self.list_sockets(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func): family = socket.get_family() diff --git a/volatility3/framework/plugins/mac/proc_maps.py b/volatility3/framework/plugins/mac/proc_maps.py index e9912797c9..e150c6a55a 100644 --- a/volatility3/framework/plugins/mac/proc_maps.py +++ b/volatility3/framework/plugins/mac/proc_maps.py @@ -17,7 +17,7 @@ class Maps(interfaces.plugins.PluginInterface): @classmethod def 
get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -32,7 +32,7 @@ def _generator(self, tasks): process_pid = task.p_pid for vma in task.get_map_iter(): - path = vma.get_path(self.context, self.context.modules[self.config['darwin']].symbol_table_name) + path = vma.get_path(self.context, self.context.modules[self.config['kernel']].symbol_table_name) if path == "": path = vma.get_special_path() @@ -47,5 +47,5 @@ def run(self): ("End", format_hints.Hex), ("Protection", str), ("Map Name", str)], self._generator( list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/mac/psaux.py b/volatility3/framework/plugins/mac/psaux.py index 73d4b16c59..5206e5b2bc 100644 --- a/volatility3/framework/plugins/mac/psaux.py +++ b/volatility3/framework/plugins/mac/psaux.py @@ -19,7 +19,7 @@ class Psaux(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -96,5 +96,5 @@ def run(self) -> renderers.TreeGrid: return renderers.TreeGrid([("PID", int), ("Process", str), ("Argc", int), ("Arguments", str)], self._generator( list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = filter_func))) diff 
--git a/volatility3/framework/plugins/mac/pslist.py b/volatility3/framework/plugins/mac/pslist.py index b87822efe1..c094adba80 100644 --- a/volatility3/framework/plugins/mac/pslist.py +++ b/volatility3/framework/plugins/mac/pslist.py @@ -23,7 +23,7 @@ class PsList(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 1, 0)), requirements.ChoiceRequirement(name = 'pslist_method', @@ -89,7 +89,7 @@ def _generator(self): list_tasks = self.get_list_tasks(self.config.get('pslist_method', self.pslist_methods[0])) for task in list_tasks(self.context, - self.config['darwin'], + self.config['kernel'], filter_func = self.create_pid_filter(self.config.get('pid', None))): pid = task.p_pid ppid = task.p_ppid diff --git a/volatility3/framework/plugins/mac/pstree.py b/volatility3/framework/plugins/mac/pstree.py index a9846d0dbd..76219c4571 100644 --- a/volatility3/framework/plugins/mac/pstree.py +++ b/volatility3/framework/plugins/mac/pstree.py @@ -24,7 +24,7 @@ def __init__(self, *args, **kwargs): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (3, 0, 0)) ] @@ -48,7 +48,7 @@ def _generator(self): """Generates the tree list of processes""" list_tasks = pslist.PsList.get_list_tasks(self.config.get('pslist_method', pslist.PsList.pslist_methods[0])) - for proc in list_tasks(self.context, self.config['darwin']): + for proc in 
list_tasks(self.context, self.config['kernel']): self._processes[proc.p_pid] = proc # Build the child/level maps diff --git a/volatility3/framework/plugins/mac/socket_filters.py b/volatility3/framework/plugins/mac/socket_filters.py index be21bc7d1c..ee1b83ed79 100644 --- a/volatility3/framework/plugins/mac/socket_filters.py +++ b/volatility3/framework/plugins/mac/socket_filters.py @@ -24,16 +24,16 @@ class Socket_filters(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 0, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) diff --git a/volatility3/framework/plugins/mac/timers.py b/volatility3/framework/plugins/mac/timers.py index 5ce973d5c7..42b71134a7 100644 --- a/volatility3/framework/plugins/mac/timers.py +++ b/volatility3/framework/plugins/mac/timers.py @@ -23,16 +23,16 @@ class Timers(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), 
requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 3, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] - mods = lsmod.Lsmod.list_modules(self.context, self.config['darwin']) + mods = lsmod.Lsmod.list_modules(self.context, self.config['kernel']) handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) @@ -69,7 +69,7 @@ def _generator(self): entry_time = -1 module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, handler, - self.config['darwin']) + self.config['kernel']) yield (0, (format_hints.Hex(handler), format_hints.Hex(timer.param0), format_hints.Hex(timer.param1), timer.deadline, entry_time, module_name, symbol_name)) diff --git a/volatility3/framework/plugins/mac/trustedbsd.py b/volatility3/framework/plugins/mac/trustedbsd.py index 615e5ea644..5d0eba6695 100644 --- a/volatility3/framework/plugins/mac/trustedbsd.py +++ b/volatility3/framework/plugins/mac/trustedbsd.py @@ -25,14 +25,14 @@ class Trustedbsd(plugins.PluginInterface): @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'macutils', component = mac.MacUtilities, version = (1, 3, 0)), requirements.PluginRequirement(name = 'lsmod', plugin = lsmod.Lsmod, version = (2, 0, 0)) ] def _generator(self, mods: Iterator[Any]): - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] handlers = 
mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) @@ -65,7 +65,7 @@ def _generator(self, mods: Iterator[Any]): continue module_name, symbol_name = mac.MacUtilities.lookup_module_address(self.context, handlers, call_addr, - self.config['darwin']) + self.config['kernel']) yield (0, (check, ent_name, format_hints.Hex(call_addr), module_name, symbol_name)) @@ -73,4 +73,4 @@ def run(self): return renderers.TreeGrid([("Member", str), ("Policy Name", str), ("Handler Address", format_hints.Hex), ("Handler Module", str), ("Handler Symbol", str)], self._generator( - lsmod.Lsmod.list_modules(self.context, self.config['darwin']))) + lsmod.Lsmod.list_modules(self.context, self.config['kernel']))) diff --git a/volatility3/framework/plugins/mac/vfsevents.py b/volatility3/framework/plugins/mac/vfsevents.py index 71915bcad9..9259956e97 100644 --- a/volatility3/framework/plugins/mac/vfsevents.py +++ b/volatility3/framework/plugins/mac/vfsevents.py @@ -20,7 +20,7 @@ class VFSevents(interfaces.plugins.PluginInterface): @classmethod def get_requirements(cls): return [ - requirements.ModuleRequirement(name = 'darwin', description = 'Kernel module for the OS', + requirements.ModuleRequirement(name = 'kernel', description = 'Kernel module for the OS', architectures = ["Intel32", "Intel64"]), ] @@ -30,7 +30,7 @@ def _generator(self): Also lists which event(s) a process is registered for """ - kernel = self.context.modules[self.config['darwin']] + kernel = self.context.modules[self.config['kernel']] watcher_table = kernel.object_from_symbol("watcher_table") From aa1c24e60ab9db03cc2ece4ee873688af3053e3d Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 26 Aug 2021 11:43:32 +0100 Subject: [PATCH 217/294] Core: Make pointer caching more accurate --- volatility3/framework/objects/__init__.py | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/volatility3/framework/objects/__init__.py 
b/volatility3/framework/objects/__init__.py index cc96326566..eec0329d57 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -291,7 +291,7 @@ def __init__(self, subtype: Optional[templates.ObjectTemplate] = None) -> None: super().__init__(context = context, object_info = object_info, type_name = type_name, data_format = data_format) self._vol['subtype'] = subtype - self._cache = None + self._cache: Dict[str, interfaces.objects.ObjectInterface] = {} @classmethod def _unmarshall(cls, context: interfaces.context.ContextInterface, data_format: DataFormatInfo, @@ -322,16 +322,19 @@ def dereference(self, layer_name: Optional[str] = None) -> interfaces.objects.Ob # Do our own caching because lru_cache doesn't seem to memoize correctly across multiple uses # Cache clearing should be done by a cast (we can add a specific method to reset a pointer, # but hopefully it's not necessary) - if self._cache is None: + if layer_name is None: + layer_name = self.vol.layer_name + if self._cache.get(layer_name, None) is None: layer_name = layer_name or self.vol.native_layer_name mask = self._context.layers[layer_name].address_mask offset = self & mask - self._cache = self.vol.subtype(context = self._context, - object_info = interfaces.objects.ObjectInformation(layer_name = layer_name, - offset = offset, - parent = self, - size = self.vol.subtype.size)) - return self._cache + self._cache[layer_name] = self.vol.subtype(context = self._context, + object_info = interfaces.objects.ObjectInformation( + layer_name = layer_name, + offset = offset, + parent = self, + size = self.vol.subtype.size)) + return self._cache[layer_name] def is_readable(self, layer_name: Optional[str] = None) -> bool: """Determines whether the address of this pointer can be read from From 2371c1158d954b706efd877c23526fbd88b06260 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 26 Aug 2021 11:50:04 +0100 Subject: [PATCH 218/294] Windows: Ensure DTBs don't have all 
valid entries We were finding that PAE results were being masked by certain pages with 64-bit results. The DTBs could be distinguished from good DTBs because every entry was valid (which for a 64-bit or even 32-bit DTB is extremely unlikely, except on systems with *vast* quantities of memory). As such, this now tests for all entries being allocated and doesn't return the DTB if this is the case. --- volatility3/framework/automagic/windows.py | 57 +++++----------------- 1 file changed, 12 insertions(+), 45 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 147602facf..38da637074 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -54,6 +54,10 @@ def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_re self.ptr_reference = ptr_reference self.mask = mask self.page_size: int = layer_type.page_size + # This calculates the *wrong* value for PAE systems, + # but they can have all four entries filled, so we'd want this test off anyway + self.num_entries = self.page_size // self.ptr_size + def _unpack(self, value: bytes) -> int: return struct.unpack("<" + self.ptr_struct, value)[0] @@ -109,7 +113,8 @@ def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple # print(hex(dtb), usr_count, sup_count, usr_count + sup_count) # We sometimes find bogus DTBs at 0x16000 with a very low sup_count and 0 usr_count # I have a winxpsp2-x64 image with identical usr/sup counts at 0x16000 and 0x24c00 as well as the actual 0x3c3000 - if usr_count or sup_count > 5: + # We almost never have every single entry allocated + if usr_count or sup_count > 5 and usr_count + sup_count < self.num_entries: return dtb, None return None @@ -120,7 +125,8 @@ def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel, ptr_struct = "I", ptr_reference = [0x300], - mask = 0xFFFFF000) + mask = 0xFFFFF000, + num_entries =
1024) class DtbTest64bit(DtbTest): @@ -128,8 +134,9 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = range(0x1E0, 0x1FF), - mask = 0x3FFFFFFFFFF000) + ptr_reference = range(0x1ff, 0x100, -1), + mask = 0x3FFFFFFFFFF000, + num_entries = 512) # As of Windows-10 RS1+, the ptr_reference is randomized: # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ @@ -172,46 +179,6 @@ def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple return None -class DtbSelfReferential(DtbTest): - """A generic DTB test which looks for a self-referential pointer at *any* - index within the page.""" - - def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: int, mask: int) -> None: - super().__init__(layer_type = layer_type, ptr_struct = ptr_struct, ptr_reference = ptr_reference, mask = mask) - - def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, int]]: - page = data[page_offset:page_offset + self.page_size] - if not page: - return None - ref_pages = set() - for ref in range(0, self.page_size, self.ptr_size): - ptr_data = page[ref:ref + self.ptr_size] - if len(ptr_data) == self.ptr_size: - ptr, = struct.unpack(self.ptr_struct, ptr_data) - if ((ptr & self.mask) == (data_offset + page_offset)) and (data_offset + page_offset > 0): - ref_pages.add(ref) - # The DTB is extremely unlikely to refer back to itself. 
so the number of reference should always be exactly 1 - if len(ref_pages) == 1: - return (data_offset + page_offset), ref_pages.pop() - return None - - -class DtbSelfRef32bit(DtbSelfReferential): - - def __init__(self): - super().__init__(layer_type = layers.intel.WindowsIntel, - ptr_struct = "I", - ptr_reference = 0x300, - mask = 0xFFFFF000) - - -class DtbSelfRef64bit(DtbSelfReferential): - - def __init__(self) -> None: - super().__init__(layer_type = layers.intel.WindowsIntel32e, - ptr_struct = "Q", - ptr_reference = 0x1ED, - mask = 0x3FFFFFFFFFF000) class PageMapScanner(interfaces.layers.ScannerInterface): @@ -359,7 +326,7 @@ def stack(cls, vollog.debug("Self-referential pointer not in well-known location, moving to recent windows heuristic") # There is a very high chance that the DTB will live in this narrow segment, assuming we couldn't find it previously hits = context.layers[layer_name].scan(context, - PageMapScanner([DtbSelfRef64bit()]), + PageMapScanner([DtbTest64bit()]), sections = [(0x1a0000, 0x50000)], progress_callback = progress_callback) # Flatten the generator From 1e1b3fda367183b7d22958bb4d3d87b3b7fb48d2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 30 May 2021 15:28:44 +0100 Subject: [PATCH 219/294] Windows: Fix up typo in num_entries parameter --- volatility3/framework/automagic/windows.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 38da637074..c583a7131c 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -58,7 +58,6 @@ def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_re # but they can have all four entries filled, so we'd want this test off anyway self.num_entries = self.page_size // self.ptr_size - def _unpack(self, value: bytes) -> int: return struct.unpack("<" + self.ptr_struct, value)[0] @@ -76,7 +75,7 @@ def __call__(self, data: 
bytes, data_offset: int, page_offset: int) -> Optional[ """ for ptr_reference in self.ptr_reference: value = data[page_offset + (ptr_reference * self.ptr_size):page_offset + - ((ptr_reference + 1) * self.ptr_size)] + ((ptr_reference + 1) * self.ptr_size)] try: ptr = self._unpack(value) except struct.error: @@ -179,8 +178,6 @@ def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple return None - - class PageMapScanner(interfaces.layers.ScannerInterface): """Scans through all pages using DTB tests to determine a dtb offset and architecture.""" @@ -194,6 +191,7 @@ def __init__(self, tests: List[DtbTest]) -> None: self.tests = tests def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbTest, int], None, None]: + for test in self.tests: for page_offset in range(0, len(data), 0x1000): result = test(data, data_offset, page_offset) From b1156fa0a167dbb8a42ded9f781056103f3dba18 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 26 Aug 2021 16:18:27 +0100 Subject: [PATCH 220/294] Automagic: Remove the superfluous num_entries fields --- volatility3/framework/automagic/windows.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index c583a7131c..fbbbc92224 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -124,8 +124,7 @@ def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel, ptr_struct = "I", ptr_reference = [0x300], - mask = 0xFFFFF000, - num_entries = 1024) + mask = 0xFFFFF000) class DtbTest64bit(DtbTest): @@ -134,8 +133,7 @@ def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", ptr_reference = range(0x1ff, 0x100, -1), - mask = 0x3FFFFFFFFFF000, - num_entries = 512) + mask = 0x3FFFFFFFFFF000) # As of Windows-10 RS1+, the ptr_reference is randomized: # 
https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ From f082c1c8cc9a17b08c0f33f7b3244b8b4e4dc629 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 26 Aug 2021 17:25:03 +0100 Subject: [PATCH 221/294] Objects: Fix up pointer caching to default to native layer --- volatility3/framework/objects/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index eec0329d57..05d594b2f1 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -323,7 +323,7 @@ def dereference(self, layer_name: Optional[str] = None) -> interfaces.objects.Ob # Cache clearing should be done by a cast (we can add a specific method to reset a pointer, # but hopefully it's not necessary) if layer_name is None: - layer_name = self.vol.layer_name + layer_name = self.vol.native_layer_name if self._cache.get(layer_name, None) is None: layer_name = layer_name or self.vol.native_layer_name mask = self._context.layers[layer_name].address_mask From 57e1243932168b7c3c179ac4aba7507d8af483dd Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 27 Aug 2021 22:23:33 +0100 Subject: [PATCH 222/294] CLI: Verify improvement for buildbot --- vol.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/vol.py b/vol.py index 0804135450..0f43da5344 100755 --- a/vol.py +++ b/vol.py @@ -4,7 +4,11 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import sys import volatility3.cli +sys.stdin.reconfigure(encoding='utf-8') +sys.stdout.reconfigure(encoding='utf-8') + if __name__ == '__main__': volatility3.cli.main() From e6547a2d5e34fd6ad33a49ae075bc9a12ecb4047 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 28 Aug 2021 00:34:11 +0100 Subject: [PATCH 223/294] Revert "CLI: Verify improvement for buildbot" This fix for buildbot didn't seem to have any effect This reverts commit 
57e1243932168b7c3c179ac4aba7507d8af483dd. --- vol.py | 4 ---- 1 file changed, 4 deletions(-) diff --git a/vol.py b/vol.py index 0f43da5344..0804135450 100755 --- a/vol.py +++ b/vol.py @@ -4,11 +4,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -import sys import volatility3.cli -sys.stdin.reconfigure(encoding='utf-8') -sys.stdout.reconfigure(encoding='utf-8') - if __name__ == '__main__': volatility3.cli.main() From b43712f9f482c9b66ff26372b7ac134a61b9852c Mon Sep 17 00:00:00 2001 From: shusei tomonaga <8147599+shu-tom@users.noreply.github.com> Date: Mon, 30 Aug 2021 21:24:47 +0900 Subject: [PATCH 224/294] Fixed an issue where the PDB guid value was not zero padded. --- volatility3/framework/symbols/windows/pdbutil.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 9f2cc14629..9d8b2431bf 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -170,9 +170,9 @@ def get_guid_from_mz(cls, context: interfaces.context.ContextInterface, layer_na pdb_name = debug_entry.PdbFileName.decode("utf-8").strip('\x00') age = debug_entry.Age - guid = "{:x}{:x}{:x}{}".format(debug_entry.Signature_Data1, debug_entry.Signature_Data2, - debug_entry.Signature_Data3, - binascii.hexlify(debug_entry.Signature_Data4).decode('utf-8')) + guid = "{:08x}{:04x}{:04x}{}".format(debug_entry.Signature_Data1, debug_entry.Signature_Data2, + debug_entry.Signature_Data3, + binascii.hexlify(debug_entry.Signature_Data4).decode('utf-8')) return guid, age, pdb_name @classmethod From 4634a146c757dd12e34d71933222631b57a9607c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 02:11:32 +0100 Subject: [PATCH 225/294] Windows: Pstree loop protection --- volatility3/framework/plugins/windows/pstree.py | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git 
a/volatility3/framework/plugins/windows/pstree.py b/volatility3/framework/plugins/windows/pstree.py index 2c40ca55fe..b88e4bba3e 100644 --- a/volatility3/framework/plugins/windows/pstree.py +++ b/volatility3/framework/plugins/windows/pstree.py @@ -2,6 +2,7 @@ # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # import datetime +import logging from typing import Dict, Set, Tuple from volatility3.framework import objects, interfaces, renderers @@ -9,6 +10,7 @@ from volatility3.framework.renderers import format_hints from volatility3.plugins.windows import pslist +vollog = logging.getLogger(__name__) class PsTree(interfaces.plugins.PluginInterface): """Plugin for listing processes in a tree based on their parent process @@ -48,6 +50,7 @@ def find_level(self, pid: objects.Pointer) -> None: child_list = self._children.get(proc.InheritedFromUniqueProcessId, set([])) child_list.add(proc.UniqueProcessId) self._children[proc.InheritedFromUniqueProcessId] = child_list + seen.add(proc.InheritedFromUniqueProcessId) proc, _ = self._processes.get(proc.InheritedFromUniqueProcessId, (None, None)) level += 1 self._levels[pid] = level @@ -58,7 +61,6 @@ def _generator(self): for proc in pslist.PsList.list_processes(self.context, kernel.layer_name, kernel.symbol_table_name): - if not self.config.get('physical', pslist.PsList.PHYSICAL_DEFAULT): offset = proc.vol.offset else: @@ -72,7 +74,12 @@ def _generator(self): for pid in self._processes: self.find_level(pid) + process_pids = set([]) def yield_processes(pid): + if pid in process_pids: + vollog.debug(f"Pid cycle: already processed pid {pid}") + return + process_pids.add(pid) proc, offset = self._processes[pid] row = (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, errors = 'replace'), From 3f0f5fb29c6ded93dedb774114b242469bb216ac Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 17:05:35 +0100 Subject: [PATCH 
226/294] Windows: Reduce the full automagic range *again* 5:\ --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index fbbbc92224..c86ba02e9e 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -132,7 +132,7 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = range(0x1ff, 0x100, -1), + ptr_reference = range(0x1ff, 0x1e0, -1), mask = 0x3FFFFFFFFFF000) # As of Windows-10 RS1+, the ptr_reference is randomized: From 5d030263ea3709cf520821c37311b182f7a71beb Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 21:50:24 +0100 Subject: [PATCH 227/294] Windows: Totally revert broken merge issue505-2 --- volatility3/framework/automagic/windows.py | 55 ++++++++++++++++++---- 1 file changed, 46 insertions(+), 9 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index c86ba02e9e..147602facf 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -54,9 +54,6 @@ def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_re self.ptr_reference = ptr_reference self.mask = mask self.page_size: int = layer_type.page_size - # This calculates the *wrong* value for PAE systems, - # but they can have all four entries filled, so we'd want this test off anyway - self.num_entries = self.page_size // self.ptr_size def _unpack(self, value: bytes) -> int: return struct.unpack("<" + self.ptr_struct, value)[0] @@ -75,7 +72,7 @@ def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[ """ for ptr_reference in self.ptr_reference: value = data[page_offset + (ptr_reference * self.ptr_size):page_offset + - ((ptr_reference + 1) * 
self.ptr_size)] + ((ptr_reference + 1) * self.ptr_size)] try: ptr = self._unpack(value) except struct.error: @@ -112,8 +109,7 @@ def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple # print(hex(dtb), usr_count, sup_count, usr_count + sup_count) # We sometimes find bogus DTBs at 0x16000 with a very low sup_count and 0 usr_count # I have a winxpsp2-x64 image with identical usr/sup counts at 0x16000 and 0x24c00 as well as the actual 0x3c3000 - # We almost never have every single entry allocated - if usr_count or sup_count > 5 and usr_count + sup_count < self.num_entries: + if usr_count or sup_count > 5: return dtb, None return None @@ -132,7 +128,7 @@ class DtbTest64bit(DtbTest): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = range(0x1ff, 0x1e0, -1), + ptr_reference = range(0x1E0, 0x1FF), mask = 0x3FFFFFFFFFF000) # As of Windows-10 RS1+, the ptr_reference is randomized: @@ -176,6 +172,48 @@ def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple return None +class DtbSelfReferential(DtbTest): + """A generic DTB test which looks for a self-referential pointer at *any* + index within the page.""" + + def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: int, mask: int) -> None: + super().__init__(layer_type = layer_type, ptr_struct = ptr_struct, ptr_reference = ptr_reference, mask = mask) + + def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, int]]: + page = data[page_offset:page_offset + self.page_size] + if not page: + return None + ref_pages = set() + for ref in range(0, self.page_size, self.ptr_size): + ptr_data = page[ref:ref + self.ptr_size] + if len(ptr_data) == self.ptr_size: + ptr, = struct.unpack(self.ptr_struct, ptr_data) + if ((ptr & self.mask) == (data_offset + page_offset)) and (data_offset + page_offset > 0): + ref_pages.add(ref) + # The DTB is extremely 
unlikely to refer back to itself. so the number of reference should always be exactly 1 + if len(ref_pages) == 1: + return (data_offset + page_offset), ref_pages.pop() + return None + + +class DtbSelfRef32bit(DtbSelfReferential): + + def __init__(self): + super().__init__(layer_type = layers.intel.WindowsIntel, + ptr_struct = "I", + ptr_reference = 0x300, + mask = 0xFFFFF000) + + +class DtbSelfRef64bit(DtbSelfReferential): + + def __init__(self) -> None: + super().__init__(layer_type = layers.intel.WindowsIntel32e, + ptr_struct = "Q", + ptr_reference = 0x1ED, + mask = 0x3FFFFFFFFFF000) + + class PageMapScanner(interfaces.layers.ScannerInterface): """Scans through all pages using DTB tests to determine a dtb offset and architecture.""" @@ -189,7 +227,6 @@ def __init__(self, tests: List[DtbTest]) -> None: self.tests = tests def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbTest, int], None, None]: - for test in self.tests: for page_offset in range(0, len(data), 0x1000): result = test(data, data_offset, page_offset) @@ -322,7 +359,7 @@ def stack(cls, vollog.debug("Self-referential pointer not in well-known location, moving to recent windows heuristic") # There is a very high chance that the DTB will live in this narrow segment, assuming we couldn't find it previously hits = context.layers[layer_name].scan(context, - PageMapScanner([DtbTest64bit()]), + PageMapScanner([DtbSelfRef64bit()]), sections = [(0x1a0000, 0x50000)], progress_callback = progress_callback) # Flatten the generator From 03136d1469d432dccaf880034d2d42b7ddb13393 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 22:51:32 +0100 Subject: [PATCH 228/294] Automagic: Clean up the windows self-referential dtb finder --- volatility3/framework/automagic/__init__.py | 4 +- volatility3/framework/automagic/windows.py | 463 +++++++++++--------- 2 files changed, 246 insertions(+), 221 deletions(-) diff --git a/volatility3/framework/automagic/__init__.py 
b/volatility3/framework/automagic/__init__.py index 626126c8c5..e4d422c995 100644 --- a/volatility3/framework/automagic/__init__.py +++ b/volatility3/framework/automagic/__init__.py @@ -22,7 +22,7 @@ vollog = logging.getLogger(__name__) windows_automagic = [ - 'ConstructionMagic', 'LayerStacker', 'WintelHelper', 'KernelPDBScanner', 'WinSwapLayers', 'KernelModule' + 'ConstructionMagic', 'LayerStacker', 'KernelPDBScanner', 'WinSwapLayers', 'KernelModule' ] linux_automagic = ['ConstructionMagic', 'LayerStacker', 'LinuxBannerCache', 'LinuxSymbolFinder', 'KernelModule'] @@ -46,7 +46,7 @@ def available(context: interfaces.context.ContextInterface) -> List[interfaces.a clazz(context, interfaces.configuration.path_join(config_path, clazz.__name__)) for clazz in class_subclasses(interfaces.automagic.AutomagicInterface) ], - key = lambda x: x.priority) + key = lambda x: x.priority) def choose_automagic( diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 147602facf..eba79ef988 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -28,7 +28,7 @@ """ import logging import struct -from typing import Any, Generator, List, Optional, Tuple, Type +from typing import Generator, List, Optional, Tuple, Type, Iterable from volatility3.framework import interfaces, layers, constants from volatility3.framework.configuration import requirements @@ -37,147 +37,153 @@ vollog = logging.getLogger(__name__) -class DtbTest: - """This class generically contains the tests for a page based on a set of - class parameters. +# class DtbTest: +# """This class generically contains the tests for a page based on a set of +# class parameters. +# +# When constructed it contains all the information necessary to +# extract a specific index from a page and determine whether it points +# back to that page's offset. 
+# """ +# +# def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: List[int], +# mask: int) -> None: +# self.layer_type = layer_type +# self.ptr_struct = ptr_struct +# self.ptr_size = struct.calcsize(ptr_struct) +# self.ptr_reference = ptr_reference +# self.mask = mask +# self.page_size: int = layer_type.page_size +# +# def _unpack(self, value: bytes) -> int: +# return struct.unpack("<" + self.ptr_struct, value)[0] +# +# def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, Any]]: +# """Tests a specific page in a chunk of data to see if it contains a +# self-referential pointer. +# +# Args: +# data: The chunk of data that contains the page to be scanned +# data_offset: Where, within the layer, the chunk of data lives +# page_offset: Where, within the data, the page to be scanned starts +# +# Returns: +# A valid DTB within this page (and an additional parameter for data) +# """ +# for ptr_reference in self.ptr_reference: +# value = data[page_offset + (ptr_reference * self.ptr_size):page_offset + +# ((ptr_reference + 1) * self.ptr_size)] +# try: +# ptr = self._unpack(value) +# except struct.error: +# return None +# # The value *must* be present (bit 0) since it's a mapped page +# # It's almost always writable (bit 1) +# # It's occasionally Super, but not reliably so, haven't checked when/why not +# # The top 3-bits are usually ignore (which in practice means 0 +# # Need to find out why the middle 3-bits are usually 6 (0110) +# if ptr != 0 and (ptr & self.mask == data_offset + page_offset) & (ptr & 0xFF1 == 0x61): +# dtb = (ptr & self.mask) +# return self.second_pass(dtb, data, data_offset) +# return None +# +# def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: +# """Re-reads over the whole page to validate other records based on the +# number of pages marked user vs super. 
+# +# Args: +# dtb: The identified dtb that needs validating +# data: The chunk of data that contains the dtb to be validated +# data_offset: Where, within the layer, the chunk of data lives +# +# Returns: +# A valid DTB within this page +# """ +# page = data[dtb - data_offset:dtb - data_offset + self.page_size] +# usr_count, sup_count = 0, 0 +# for i in range(0, self.page_size, self.ptr_size): +# val = self._unpack(page[i:i + self.ptr_size]) +# if val & 0x1: +# sup_count += 0 if (val & 0x4) else 1 +# usr_count += 1 if (val & 0x4) else 0 +# # print(hex(dtb), usr_count, sup_count, usr_count + sup_count) +# # We sometimes find bogus DTBs at 0x16000 with a very low sup_count and 0 usr_count +# # I have a winxpsp2-x64 image with identical usr/sup counts at 0x16000 and 0x24c00 as well as the actual 0x3c3000 +# if usr_count or sup_count > 5: +# return dtb, None +# return None +# +# +# class DtbTest32bit(DtbTest): +# +# def __init__(self) -> None: +# super().__init__(layer_type = layers.intel.WindowsIntel, +# ptr_struct = "I", +# ptr_reference = [0x300], +# mask = 0xFFFFF000) +# +# +# class DtbTest64bit(DtbTest): +# +# def __init__(self) -> None: +# super().__init__(layer_type = layers.intel.WindowsIntel32e, +# ptr_struct = "Q", +# ptr_reference = range(0x1E0, 0x1FF), +# mask = 0x3FFFFFFFFFF000) +# +# # As of Windows-10 RS1+, the ptr_reference is randomized: +# # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ +# # So far, we've only seen examples between 0x1e0 and 0x1ff +# +# +# class DtbTestPae(DtbTest): +# +# def __init__(self) -> None: +# super().__init__(layer_type = layers.intel.WindowsIntelPAE, +# ptr_struct = "Q", +# ptr_reference = [0x3], +# mask = 0x3FFFFFFFFFF000) +# +# def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: +# """PAE top level directory tables contains four entries and the self- +# referential pointer occurs in the second level of tables (so as not to +# use up a full quarter of 
the space). This is very high in the space, +# and occurs in the fourht (last quarter) second-level table. The +# second-level tables appear always to come sequentially directly after +# the real dtb. The value for the real DTB is therefore four page +# earlier (and the fourth entry should point back to the `dtb` parameter +# this function was originally passed. +# +# Args: +# dtb: The identified self-referential pointer that needs validating +# data: The chunk of data that contains the dtb to be validated +# data_offset: Where, within the layer, the chunk of data lives +# +# Returns: +# Returns the actual DTB of the PAE space +# """ +# dtb -= 0x4000 +# # If we're not in something that the overlap would pick up +# if dtb - data_offset >= 0: +# pointers = data[dtb - data_offset + (3 * self.ptr_size):dtb - data_offset + (4 * self.ptr_size)] +# val = self._unpack(pointers) +# if (val & self.mask == dtb + 0x4000) and (val & 0xFFF == 0x001): +# return dtb, None +# return None +# - When constructed it contains all the information necessary to - extract a specific index from a page and determine whether it points - back to that page's offset. 
- """ +class DtbSelfReferential: + """A generic DTB test which looks for a self-referential pointer at *any* + index within the page.""" - def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: List[int], - mask: int) -> None: + def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, mask: int, + valid_range: Iterable[int]) -> None: self.layer_type = layer_type self.ptr_struct = ptr_struct self.ptr_size = struct.calcsize(ptr_struct) - self.ptr_reference = ptr_reference self.mask = mask self.page_size: int = layer_type.page_size - - def _unpack(self, value: bytes) -> int: - return struct.unpack("<" + self.ptr_struct, value)[0] - - def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, Any]]: - """Tests a specific page in a chunk of data to see if it contains a - self-referential pointer. - - Args: - data: The chunk of data that contains the page to be scanned - data_offset: Where, within the layer, the chunk of data lives - page_offset: Where, within the data, the page to be scanned starts - - Returns: - A valid DTB within this page (and an additional parameter for data) - """ - for ptr_reference in self.ptr_reference: - value = data[page_offset + (ptr_reference * self.ptr_size):page_offset + - ((ptr_reference + 1) * self.ptr_size)] - try: - ptr = self._unpack(value) - except struct.error: - return None - # The value *must* be present (bit 0) since it's a mapped page - # It's almost always writable (bit 1) - # It's occasionally Super, but not reliably so, haven't checked when/why not - # The top 3-bits are usually ignore (which in practice means 0 - # Need to find out why the middle 3-bits are usually 6 (0110) - if ptr != 0 and (ptr & self.mask == data_offset + page_offset) & (ptr & 0xFF1 == 0x61): - dtb = (ptr & self.mask) - return self.second_pass(dtb, data, data_offset) - return None - - def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, 
Any]]: - """Re-reads over the whole page to validate other records based on the - number of pages marked user vs super. - - Args: - dtb: The identified dtb that needs validating - data: The chunk of data that contains the dtb to be validated - data_offset: Where, within the layer, the chunk of data lives - - Returns: - A valid DTB within this page - """ - page = data[dtb - data_offset:dtb - data_offset + self.page_size] - usr_count, sup_count = 0, 0 - for i in range(0, self.page_size, self.ptr_size): - val = self._unpack(page[i:i + self.ptr_size]) - if val & 0x1: - sup_count += 0 if (val & 0x4) else 1 - usr_count += 1 if (val & 0x4) else 0 - # print(hex(dtb), usr_count, sup_count, usr_count + sup_count) - # We sometimes find bogus DTBs at 0x16000 with a very low sup_count and 0 usr_count - # I have a winxpsp2-x64 image with identical usr/sup counts at 0x16000 and 0x24c00 as well as the actual 0x3c3000 - if usr_count or sup_count > 5: - return dtb, None - return None - - -class DtbTest32bit(DtbTest): - - def __init__(self) -> None: - super().__init__(layer_type = layers.intel.WindowsIntel, - ptr_struct = "I", - ptr_reference = [0x300], - mask = 0xFFFFF000) - - -class DtbTest64bit(DtbTest): - - def __init__(self) -> None: - super().__init__(layer_type = layers.intel.WindowsIntel32e, - ptr_struct = "Q", - ptr_reference = range(0x1E0, 0x1FF), - mask = 0x3FFFFFFFFFF000) - - # As of Windows-10 RS1+, the ptr_reference is randomized: - # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ - # So far, we've only seen examples between 0x1e0 and 0x1ff - - -class DtbTestPae(DtbTest): - - def __init__(self) -> None: - super().__init__(layer_type = layers.intel.WindowsIntelPAE, - ptr_struct = "Q", - ptr_reference = [0x3], - mask = 0x3FFFFFFFFFF000) - - def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: - """PAE top level directory tables contains four entries and the self- - referential pointer occurs in the 
second level of tables (so as not to - use up a full quarter of the space). This is very high in the space, - and occurs in the fourth (last quarter) second-level table. The - second-level tables appear always to come sequentially directly after - the real dtb. The value for the real DTB is therefore four pages - earlier (and the fourth entry should point back to the `dtb` parameter - this function was originally passed). - - Args: - dtb: The identified self-referential pointer that needs validating - data: The chunk of data that contains the dtb to be validated - data_offset: Where, within the layer, the chunk of data lives - - Returns: - Returns the actual DTB of the PAE space - """ - dtb -= 0x4000 - # If we're not in something that the overlap would pick up - if dtb - data_offset >= 0: - pointers = data[dtb - data_offset + (3 * self.ptr_size):dtb - data_offset + (4 * self.ptr_size)] - val = self._unpack(pointers) - if (val & self.mask == dtb + 0x4000) and (val & 0xFFF == 0x001): - return dtb, None - return None - - -class DtbSelfReferential(DtbTest): - """A generic DTB test which looks for a self-referential pointer at *any* - index within the page.""" - - def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: int, mask: int) -> None: - super().__init__(layer_type = layer_type, ptr_struct = ptr_struct, ptr_reference = ptr_reference, mask = mask) + self.valid_range = valid_range def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, int]]: page = data[page_offset:page_offset + self.page_size] @@ -192,7 +198,9 @@ def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[ ref_pages.add(ref) # The DTB is extremely unlikely to refer back to itself, 
so the number of reference should always be exactly 1 if len(ref_pages) == 1: - return (data_offset + page_offset), ref_pages.pop() + ref_page = ref_pages.pop() + if (ref_page // self.ptr_size) in self.valid_range: + return (data_offset + page_offset), ref_page return None @@ -201,8 +209,8 @@ class DtbSelfRef32bit(DtbSelfReferential): def __init__(self): super().__init__(layer_type = layers.intel.WindowsIntel, ptr_struct = "I", - ptr_reference = 0x300, - mask = 0xFFFFF000) + mask = 0xFFFFF000, + valid_range = [0x300]) class DtbSelfRef64bit(DtbSelfReferential): @@ -210,23 +218,39 @@ class DtbSelfRef64bit(DtbSelfReferential): def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", - ptr_reference = 0x1ED, + mask = 0x3FFFFFFFFFF000, + valid_range = range(0x100, 0x1ff)) + + +class DtbSelfRefPae(DtbSelfReferential): + + def __init__(self) -> None: + super().__init__(layer_type = layers.intel.WindowsIntelPAE, + ptr_struct = "Q", + valid_range = [0x3], mask = 0x3FFFFFFFFFF000) + def __call__(self, *args, **kwargs): + dtb = super().__call__(*args, **kwargs) + if dtb: + return dtb[0] - 0x4000, dtb[1] + return dtb + class PageMapScanner(interfaces.layers.ScannerInterface): """Scans through all pages using DTB tests to determine a dtb offset and architecture.""" overlap = 0x4000 thread_safe = True - tests = [DtbTest64bit(), DtbTest32bit(), DtbTestPae()] + tests = [DtbSelfRef64bit(), DtbSelfRefPae(), DtbSelfRef32bit()] """The default tests to run when searching for DTBs""" - def __init__(self, tests: List[DtbTest]) -> None: + def __init__(self, tests: Optional[List[DtbSelfReferential]]) -> None: super().__init__() - self.tests = tests + if tests: + self.tests = tests - def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbTest, int], None, None]: + def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbSelfReferential, int], None, None]: for test in self.tests: for page_offset in range(0, 
len(data), 0x1000): result = test(data, data_offset, page_offset) @@ -234,61 +258,61 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbTest, in yield (test, result[0]) -class WintelHelper(interfaces.automagic.AutomagicInterface): - """Windows DTB finder based on self-referential pointers. - - This class adheres to the :class:`~volatility3.framework.interfaces.automagic.AutomagicInterface` interface - and both determines the directory table base of an intel layer if one hasn't been specified, and constructs - the intel layer if necessary (for example when reconstructing a pre-existing configuration). - - It will scan for existing TranslationLayers that do not have a DTB using the :class:`PageMapScanner` - """ - priority = 20 - tests = [DtbTest64bit(), DtbTest32bit(), DtbTestPae()] - - def __call__(self, - context: interfaces.context.ContextInterface, - config_path: str, - requirement: interfaces.configuration.RequirementInterface, - progress_callback: constants.ProgressCallback = None) -> None: - useful = [] - sub_config_path = interfaces.configuration.path_join(config_path, requirement.name) - if (isinstance(requirement, requirements.TranslationLayerRequirement) - and requirement.requirements.get("class", False) and requirement.unsatisfied(context, config_path)): - class_req = requirement.requirements["class"] - - for test in self.tests: - if (test.layer_type.__module__ + "." 
+ test.layer_type.__name__ == class_req.config_value( - context, sub_config_path)): - useful.append(test) - - # Determine if a class has been chosen - # Once an appropriate class has been chosen, attempt to determine the page_map_offset value - if ("memory_layer" in requirement.requirements - and not requirement.requirements["memory_layer"].unsatisfied(context, sub_config_path)): - # Only bother getting the DTB if we don't already have one - page_map_offset_path = interfaces.configuration.path_join(sub_config_path, "page_map_offset") - if not context.config.get(page_map_offset_path, None): - physical_layer_name = requirement.requirements["memory_layer"].config_value( - context, sub_config_path) - if not isinstance(physical_layer_name, str): - raise TypeError(f"Physical layer name is not a string: {sub_config_path}") - physical_layer = context.layers[physical_layer_name] - # Check lower layer metadata first - if physical_layer.metadata.get('page_map_offset', None): - context.config[page_map_offset_path] = physical_layer.metadata['page_map_offset'] - else: - hits = physical_layer.scan(context, PageMapScanner(useful), progress_callback) - for test, dtb in hits: - context.config[page_map_offset_path] = dtb - break - else: - return None - if isinstance(requirement, interfaces.configuration.ConstructableRequirementInterface): - requirement.construct(context, config_path) - else: - for subreq in requirement.requirements.values(): - self(context, sub_config_path, subreq) +# class WintelHelper(interfaces.automagic.AutomagicInterface): +# """Windows DTB finder based on self-referential pointers. +# +# This class adheres to the :class:`~volatility3.framework.interfaces.automagic.AutomagicInterface` interface +# and both determines the directory table base of an intel layer if one hasn't been specified, and constructs +# the intel layer if necessary (for example when reconstructing a pre-existing configuration). 
+# +# It will scan for existing TranslationLayers that do not have a DTB using the :class:`PageMapScanner` +# """ +# priority = 20 +# tests = [DtbTest64bit(), DtbTest32bit(), DtbTestPae()] +# +# def __call__(self, +# context: interfaces.context.ContextInterface, +# config_path: str, +# requirement: interfaces.configuration.RequirementInterface, +# progress_callback: constants.ProgressCallback = None) -> None: +# useful = [] +# sub_config_path = interfaces.configuration.path_join(config_path, requirement.name) +# if (isinstance(requirement, requirements.TranslationLayerRequirement) +# and requirement.requirements.get("class", False) and requirement.unsatisfied(context, config_path)): +# class_req = requirement.requirements["class"] +# +# for test in self.tests: +# if (test.layer_type.__module__ + "." + test.layer_type.__name__ == class_req.config_value( +# context, sub_config_path)): +# useful.append(test) +# +# # Determine if a class has been chosen +# # Once an appropriate class has been chosen, attempt to determine the page_map_offset value +# if ("memory_layer" in requirement.requirements +# and not requirement.requirements["memory_layer"].unsatisfied(context, sub_config_path)): +# # Only bother getting the DTB if we don't already have one +# page_map_offset_path = interfaces.configuration.path_join(sub_config_path, "page_map_offset") +# if not context.config.get(page_map_offset_path, None): +# physical_layer_name = requirement.requirements["memory_layer"].config_value( +# context, sub_config_path) +# if not isinstance(physical_layer_name, str): +# raise TypeError(f"Physical layer name is not a string: {sub_config_path}") +# physical_layer = context.layers[physical_layer_name] +# # Check lower layer metadata first +# if physical_layer.metadata.get('page_map_offset', None): +# context.config[page_map_offset_path] = physical_layer.metadata['page_map_offset'] +# else: +# hits = physical_layer.scan(context, PageMapScanner(useful), progress_callback) +# for test, dtb 
in hits: +# context.config[page_map_offset_path] = dtb +# break +# else: +# return None +# if isinstance(requirement, interfaces.configuration.ConstructableRequirementInterface): +# requirement.construct(context, config_path) +# else: +# for subreq in requirement.requirements.values(): +# self(context, sub_config_path, subreq) class WindowsIntelStacker(interfaces.automagic.StackerLayerInterface): @@ -338,29 +362,30 @@ def stack(cls, config_path, "page_map_offset")] = base_layer.metadata['page_map_offset'] layer = layer_type(context, config_path = config_path, name = new_layer_name, metadata = {'os': 'Windows'}) - # Check for the self-referential pointer - if layer is None: - hits = base_layer.scan(context, PageMapScanner(WintelHelper.tests), progress_callback = progress_callback) - layer = None - config_path = None - for test, dtb in hits: - new_layer_name = context.layers.free_layer_name("IntelLayer") - config_path = interfaces.configuration.path_join("IntelHelper", new_layer_name) - context.config[interfaces.configuration.path_join(config_path, "memory_layer")] = layer_name - context.config[interfaces.configuration.path_join(config_path, "page_map_offset")] = dtb - layer = test.layer_type(context, - config_path = config_path, - name = new_layer_name, - metadata = {'os': 'Windows'}) - break + # # Check for the self-referential pointer + # if layer is None: + # hits = base_layer.scan(context, PageMapScanner(), progress_callback = progress_callback) + # layer = None + # config_path = None + # for test, dtb in hits: + # new_layer_name = context.layers.free_layer_name("IntelLayer") + # config_path = interfaces.configuration.path_join("IntelHelper", new_layer_name) + # context.config[interfaces.configuration.path_join(config_path, "memory_layer")] = layer_name + # context.config[interfaces.configuration.path_join(config_path, "page_map_offset")] = dtb + # layer = test.layer_type(context, + # config_path = config_path, + # name = new_layer_name, + # metadata = {'os': 
'Windows'}) + # break # Fall back to a heuristic for finding the Windows DTB if layer is None: vollog.debug("Self-referential pointer not in well-known location, moving to recent windows heuristic") # There is a very high chance that the DTB will live in this narrow segment, assuming we couldn't find it previously hits = context.layers[layer_name].scan(context, - PageMapScanner([DtbSelfRef64bit()]), - sections = [(0x1a0000, 0x50000)], + PageMapScanner( + [DtbSelfRef64bit(), DtbSelfRefPae(), DtbSelfRef32bit()]), + sections = [(0x1a0000, 0x550000)], progress_callback = progress_callback) # Flatten the generator hits = list(hits) @@ -372,10 +397,10 @@ def stack(cls, context.config[interfaces.configuration.path_join(config_path, "memory_layer")] = layer_name context.config[interfaces.configuration.path_join(config_path, "page_map_offset")] = page_map_offset # TODO: Need to determine the layer type (chances are high it's x64, hence this default) - layer = layers.intel.WindowsIntel32e(context, - config_path = config_path, - name = new_layer_name, - metadata = {'os': 'Windows'}) + layer = test.layer_type(context, + config_path = config_path, + name = new_layer_name, + metadata = {'os': 'Windows'}) if layer is not None and config_path: vollog.debug("DTB was found at: 0x{:0x}".format(context.config[interfaces.configuration.path_join( config_path, "page_map_offset")])) From bb3f411e22c37097e3676330f06a5ab1f1a172f9 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 22:54:25 +0100 Subject: [PATCH 229/294] Automagic: Add a section for where old kernels live --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index eba79ef988..d133b492cb 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -385,7 +385,7 @@ def stack(cls, hits = 
context.layers[layer_name].scan(context, PageMapScanner( [DtbSelfRef64bit(), DtbSelfRefPae(), DtbSelfRef32bit()]), - sections = [(0x1a0000, 0x550000)], + sections = [(0x1a0000, 0x550000), (0x30000, 0x10000)], progress_callback = progress_callback) # Flatten the generator hits = list(hits) From 4b56ee4c731acb81fb0cd57f11458344adcf0ba1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 23:01:05 +0100 Subject: [PATCH 230/294] Automagic: Increase windows self-ref segment for win7 --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index d133b492cb..f05be06597 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -385,7 +385,7 @@ def stack(cls, hits = context.layers[layer_name].scan(context, PageMapScanner( [DtbSelfRef64bit(), DtbSelfRefPae(), DtbSelfRef32bit()]), - sections = [(0x1a0000, 0x550000), (0x30000, 0x10000)], + sections = [(0x180000, 0x580000), (0x30000, 0x10000)], progress_callback = progress_callback) # Flatten the generator hits = list(hits) From d17ad710f2fb0713d0d0127efe99bebcefb170ee Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 23:01:26 +0100 Subject: [PATCH 231/294] Automagic: Remove unused old code --- volatility3/framework/automagic/windows.py | 210 +-------------------- 1 file changed, 1 insertion(+), 209 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index f05be06597..1156e24017 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -37,141 +37,6 @@ vollog = logging.getLogger(__name__) -# class DtbTest: -# """This class generically contains the tests for a page based on a set of -# class parameters. 
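The old `DtbTest` being retired below keys on the low flag bits of an x86/x64 page-table entry (present in bit 0, writable in bit 1, user/supervisor in bit 2, per the standard Intel paging layout). As a rough aid to reading those checks, here is a small decoder; the helper name is hypothetical, and note that the old `(ptr & 0xFF1) == 0x61` test required present (bit 0), accessed (bit 5), and dirty (bit 6) set while leaving the writable and user bits unchecked:

```python
def decode_entry_flags(entry: int) -> dict:
    """Decode the low flag bits of an x86/x64 page-table entry."""
    return {
        "present":  bool(entry & 0x1),   # bit 0: page is mapped
        "writable": bool(entry & 0x2),   # bit 1
        "user":     bool(entry & 0x4),   # bit 2: user vs supervisor access
        "accessed": bool(entry & 0x20),  # bit 5
        "dirty":    bool(entry & 0x40),  # bit 6
    }

# An entry that would satisfy the old (ptr & 0xFF1) == 0x61 check:
flags = decode_entry_flags(0x1ab061)
print(flags["present"], flags["accessed"], flags["dirty"])  # → True True True
```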
-# -# When constructed it contains all the information necessary to -# extract a specific index from a page and determine whether it points -# back to that page's offset. -# """ -# -# def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, ptr_reference: List[int], -# mask: int) -> None: -# self.layer_type = layer_type -# self.ptr_struct = ptr_struct -# self.ptr_size = struct.calcsize(ptr_struct) -# self.ptr_reference = ptr_reference -# self.mask = mask -# self.page_size: int = layer_type.page_size -# -# def _unpack(self, value: bytes) -> int: -# return struct.unpack("<" + self.ptr_struct, value)[0] -# -# def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, Any]]: -# """Tests a specific page in a chunk of data to see if it contains a -# self-referential pointer. -# -# Args: -# data: The chunk of data that contains the page to be scanned -# data_offset: Where, within the layer, the chunk of data lives -# page_offset: Where, within the data, the page to be scanned starts -# -# Returns: -# A valid DTB within this page (and an additional parameter for data) -# """ -# for ptr_reference in self.ptr_reference: -# value = data[page_offset + (ptr_reference * self.ptr_size):page_offset + -# ((ptr_reference + 1) * self.ptr_size)] -# try: -# ptr = self._unpack(value) -# except struct.error: -# return None -# # The value *must* be present (bit 0) since it's a mapped page -# # It's almost always writable (bit 1) -# # It's occasionally Super, but not reliably so, haven't checked when/why not -# # The top 3-bits are usually ignore (which in practice means 0 -# # Need to find out why the middle 3-bits are usually 6 (0110) -# if ptr != 0 and (ptr & self.mask == data_offset + page_offset) & (ptr & 0xFF1 == 0x61): -# dtb = (ptr & self.mask) -# return self.second_pass(dtb, data, data_offset) -# return None -# -# def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: -# """Re-reads over the 
whole page to validate other records based on the -# number of pages marked user vs super. -# -# Args: -# dtb: The identified dtb that needs validating -# data: The chunk of data that contains the dtb to be validated -# data_offset: Where, within the layer, the chunk of data lives -# -# Returns: -# A valid DTB within this page -# """ -# page = data[dtb - data_offset:dtb - data_offset + self.page_size] -# usr_count, sup_count = 0, 0 -# for i in range(0, self.page_size, self.ptr_size): -# val = self._unpack(page[i:i + self.ptr_size]) -# if val & 0x1: -# sup_count += 0 if (val & 0x4) else 1 -# usr_count += 1 if (val & 0x4) else 0 -# # print(hex(dtb), usr_count, sup_count, usr_count + sup_count) -# # We sometimes find bogus DTBs at 0x16000 with a very low sup_count and 0 usr_count -# # I have a winxpsp2-x64 image with identical usr/sup counts at 0x16000 and 0x24c00 as well as the actual 0x3c3000 -# if usr_count or sup_count > 5: -# return dtb, None -# return None -# -# -# class DtbTest32bit(DtbTest): -# -# def __init__(self) -> None: -# super().__init__(layer_type = layers.intel.WindowsIntel, -# ptr_struct = "I", -# ptr_reference = [0x300], -# mask = 0xFFFFF000) -# -# -# class DtbTest64bit(DtbTest): -# -# def __init__(self) -> None: -# super().__init__(layer_type = layers.intel.WindowsIntel32e, -# ptr_struct = "Q", -# ptr_reference = range(0x1E0, 0x1FF), -# mask = 0x3FFFFFFFFFF000) -# -# # As of Windows-10 RS1+, the ptr_reference is randomized: -# # https://blahcat.github.io/2020/06/15/playing_with_self_reference_pml4_entry/ -# # So far, we've only seen examples between 0x1e0 and 0x1ff -# -# -# class DtbTestPae(DtbTest): -# -# def __init__(self) -> None: -# super().__init__(layer_type = layers.intel.WindowsIntelPAE, -# ptr_struct = "Q", -# ptr_reference = [0x3], -# mask = 0x3FFFFFFFFFF000) -# -# def second_pass(self, dtb: int, data: bytes, data_offset: int) -> Optional[Tuple[int, Any]]: -# """PAE top level directory tables contains four entries and the self- -# 
referential pointer occurs in the second level of tables (so as not to -# use up a full quarter of the space). This is very high in the space, -# and occurs in the fourth (last quarter) second-level table. The -# second-level tables appear always to come sequentially directly after -# the real dtb. The value for the real DTB is therefore four pages -# earlier (and the fourth entry should point back to the `dtb` parameter -# this function was originally passed). -# -# Args: -# dtb: The identified self-referential pointer that needs validating -# data: The chunk of data that contains the dtb to be validated -# data_offset: Where, within the layer, the chunk of data lives -# -# Returns: -# Returns the actual DTB of the PAE space -# """ -# dtb -= 0x4000 -# # If we're not in something that the overlap would pick up -# if dtb - data_offset >= 0: -# pointers = data[dtb - data_offset + (3 * self.ptr_size):dtb - data_offset + (4 * self.ptr_size)] -# val = self._unpack(pointers) -# if (val & self.mask == dtb + 0x4000) and (val & 0xFFF == 0x001): -# return dtb, None -# return None -# - class DtbSelfReferential: """A generic DTB test which looks for a self-referential pointer at *any* index within the page.""" @@ -258,63 +123,6 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbSelfRefe yield (test, result[0]) -# class WintelHelper(interfaces.automagic.AutomagicInterface): -# """Windows DTB finder based on self-referential pointers. -# -# This class adheres to the :class:`~volatility3.framework.interfaces.automagic.AutomagicInterface` interface -# and both determines the directory table base of an intel layer if one hasn't been specified, and constructs -# the intel layer if necessary (for example when reconstructing a pre-existing configuration). 
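The PAE second-pass arithmetic removed above (and echoed by the new `DtbSelfRefPae.__call__` override's `dtb[0] - 0x4000`) can be sketched on its own: the four second-level tables trail the real DTB sequentially, the self-referential entry lives in the fourth one, so the real DTB is four pages earlier and its fourth entry should point back at the self-referential page. A minimal sketch with the mask from the patch; the function name and fake memory are hypothetical:

```python
from typing import Callable, Optional

PTR_SIZE = 8
MASK = 0x3FFFFFFFFFF000  # mask used by the PAE test in the patch

def pae_real_dtb(self_ref_page: int, read_qword: Callable[[int], int]) -> Optional[int]:
    """Walk back from the self-referential second-level page to the real PAE DTB."""
    dtb = self_ref_page - 0x4000      # four pages earlier
    fourth_entry = read_qword(dtb + 3 * PTR_SIZE)
    # The fourth top-level entry should point at the self-referential page,
    # with only the present bit set among the low 12 bits
    if (fourth_entry & MASK) == self_ref_page and (fourth_entry & 0xFFF) == 0x001:
        return dtb
    return None

# Fake one-entry "physical memory": the fourth PDPTE at the candidate DTB
# points at the page 0x4000 above it, marked present.
memory = {0x185000 + 3 * PTR_SIZE: (0x185000 + 0x4000) | 0x001}
print(hex(pae_real_dtb(0x189000, memory.__getitem__)))  # → 0x185000
```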
-# -# It will scan for existing TranslationLayers that do not have a DTB using the :class:`PageMapScanner` -# """ -# priority = 20 -# tests = [DtbTest64bit(), DtbTest32bit(), DtbTestPae()] -# -# def __call__(self, -# context: interfaces.context.ContextInterface, -# config_path: str, -# requirement: interfaces.configuration.RequirementInterface, -# progress_callback: constants.ProgressCallback = None) -> None: -# useful = [] -# sub_config_path = interfaces.configuration.path_join(config_path, requirement.name) -# if (isinstance(requirement, requirements.TranslationLayerRequirement) -# and requirement.requirements.get("class", False) and requirement.unsatisfied(context, config_path)): -# class_req = requirement.requirements["class"] -# -# for test in self.tests: -# if (test.layer_type.__module__ + "." + test.layer_type.__name__ == class_req.config_value( -# context, sub_config_path)): -# useful.append(test) -# -# # Determine if a class has been chosen -# # Once an appropriate class has been chosen, attempt to determine the page_map_offset value -# if ("memory_layer" in requirement.requirements -# and not requirement.requirements["memory_layer"].unsatisfied(context, sub_config_path)): -# # Only bother getting the DTB if we don't already have one -# page_map_offset_path = interfaces.configuration.path_join(sub_config_path, "page_map_offset") -# if not context.config.get(page_map_offset_path, None): -# physical_layer_name = requirement.requirements["memory_layer"].config_value( -# context, sub_config_path) -# if not isinstance(physical_layer_name, str): -# raise TypeError(f"Physical layer name is not a string: {sub_config_path}") -# physical_layer = context.layers[physical_layer_name] -# # Check lower layer metadata first -# if physical_layer.metadata.get('page_map_offset', None): -# context.config[page_map_offset_path] = physical_layer.metadata['page_map_offset'] -# else: -# hits = physical_layer.scan(context, PageMapScanner(useful), progress_callback) -# for test, dtb 
in hits: -# context.config[page_map_offset_path] = dtb -# break -# else: -# return None -# if isinstance(requirement, interfaces.configuration.ConstructableRequirementInterface): -# requirement.construct(context, config_path) -# else: -# for subreq in requirement.requirements.values(): -# self(context, sub_config_path, subreq) - - class WindowsIntelStacker(interfaces.automagic.StackerLayerInterface): stack_order = 40 exclusion_list = ['mac', 'linux'] @@ -362,23 +170,7 @@ def stack(cls, config_path, "page_map_offset")] = base_layer.metadata['page_map_offset'] layer = layer_type(context, config_path = config_path, name = new_layer_name, metadata = {'os': 'Windows'}) - # # Check for the self-referential pointer - # if layer is None: - # hits = base_layer.scan(context, PageMapScanner(), progress_callback = progress_callback) - # layer = None - # config_path = None - # for test, dtb in hits: - # new_layer_name = context.layers.free_layer_name("IntelLayer") - # config_path = interfaces.configuration.path_join("IntelHelper", new_layer_name) - # context.config[interfaces.configuration.path_join(config_path, "memory_layer")] = layer_name - # context.config[interfaces.configuration.path_join(config_path, "page_map_offset")] = dtb - # layer = test.layer_type(context, - # config_path = config_path, - # name = new_layer_name, - # metadata = {'os': 'Windows'}) - # break - - # Fall back to a heuristic for finding the Windows DTB + # Self Referential finder if layer is None: vollog.debug("Self-referential pointer not in well-known location, moving to recent windows heuristic") # There is a very high chance that the DTB will live in this narrow segment, assuming we couldn't find it previously From 66ec8ff84141888806c8edd33097d101b011b975 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 2 Sep 2021 23:07:52 +0100 Subject: [PATCH 232/294] Automagic: Further sections for older windows --- volatility3/framework/automagic/windows.py | 29 +++++++++++++++++----- 1 file changed, 23 
insertions(+), 6 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 1156e24017..75a3627582 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -87,6 +87,15 @@ def __init__(self) -> None: valid_range = range(0x100, 0x1ff)) +class DtbSelfRef64bitOldWindows(DtbSelfReferential): + + def __init__(self) -> None: + super().__init__(layer_type = layers.intel.WindowsIntel32e, + ptr_struct = "Q", + mask = 0x3FFFFFFFFFF000, + valid_range = [0x1ed]) + + class DtbSelfRefPae(DtbSelfReferential): def __init__(self) -> None: @@ -170,20 +179,26 @@ def stack(cls, config_path, "page_map_offset")] = base_layer.metadata['page_map_offset'] layer = layer_type(context, config_path = config_path, name = new_layer_name, metadata = {'os': 'Windows'}) + test_sets = [("Detecting Self-referential pointer for recent windows", + [DtbSelfRefPae(), DtbSelfRef64bit()], [(0x1a0000, 0x100000), (0x650000, 0x50000)]), + ("Older windows fixed location self-referential pointers", + [DtbSelfRefPae(), DtbSelfRef32bit(), DtbSelfRef64bitOldWindows()], [(0x30000, 0x1000000)]) + ] + # Self Referential finder - if layer is None: - vollog.debug("Self-referential pointer not in well-known location, moving to recent windows heuristic") - # There is a very high chance that the DTB will live in this narrow segment, assuming we couldn't find it previously + for description, tests, sections in test_sets: + vollog.debug(description) + # There is a very high chance that the DTB will live in these very narrow segments, assuming we couldn't find them previously hits = context.layers[layer_name].scan(context, - PageMapScanner( - [DtbSelfRef64bit(), DtbSelfRefPae(), DtbSelfRef32bit()]), - sections = [(0x180000, 0x580000), (0x30000, 0x10000)], + PageMapScanner(tests), + sections = sections, progress_callback = progress_callback) # Flatten the generator hits = list(hits) if hits: # TODO: Decide which to 
use if there are multiple options test, page_map_offset = hits[0] + vollog.debug(f"{test.__class__.__name__} test succeeded at {hex(page_map_offset)}") new_layer_name = context.layers.free_layer_name("IntelLayer") config_path = interfaces.configuration.path_join("IntelHelper", new_layer_name) context.config[interfaces.configuration.path_join(config_path, "memory_layer")] = layer_name @@ -193,6 +208,8 @@ def stack(cls, config_path = config_path, name = new_layer_name, metadata = {'os': 'Windows'}) + break + if layer is not None and config_path: vollog.debug("DTB was found at: 0x{:0x}".format(context.config[interfaces.configuration.path_join( config_path, "page_map_offset")])) From be54de140e702955ef2ab6e40146e6d497e90f48 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Fri, 3 Sep 2021 15:44:36 +0100 Subject: [PATCH 233/294] Automagic: Better check for bad DTBs --- volatility3/framework/automagic/windows.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 75a3627582..e0ab92de24 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -59,6 +59,10 @@ def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[ ptr_data = page[ref:ref + self.ptr_size] if len(ptr_data) == self.ptr_size: ptr, = struct.unpack(self.ptr_struct, ptr_data) + # For both PAE and Intel-32e, bit 7 is reserved (more are reserved in PAE), so if that's ever set, + # we can move on + if ptr & 0x10: + return None if ((ptr & self.mask) == (data_offset + page_offset)) and (data_offset + page_offset > 0): ref_pages.add(ref) # The DTB is extremely unlikely to refer back to itself. 
so the number of reference should always be exactly 1 @@ -190,7 +194,7 @@ def stack(cls, vollog.debug(description) # There is a very high chance that the DTB will live in these very narrow segments, assuming we couldn't find them previously hits = context.layers[layer_name].scan(context, - PageMapScanner(tests), + PageMapScanner(tests = tests), sections = sections, progress_callback = progress_callback) # Flatten the generator From 6492a458db72fb8bd3e19f007f6b0cf0a1f57a55 Mon Sep 17 00:00:00 2001 From: superponible Date: Fri, 3 Sep 2021 17:32:57 -0500 Subject: [PATCH 234/294] correct usage for RegValueTypes enum --- volatility3/framework/plugins/windows/getsids.py | 4 ++-- volatility3/framework/plugins/windows/registry/printkey.py | 6 +++--- volatility3/plugins/windows/registry/certificates.py | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/volatility3/framework/plugins/windows/getsids.py b/volatility3/framework/plugins/windows/getsids.py index 179ce4737a..0b082f3eae 100644 --- a/volatility3/framework/plugins/windows/getsids.py +++ b/volatility3/framework/plugins/windows/getsids.py @@ -96,9 +96,9 @@ def lookup_user_sids(self) -> Dict[str, str]: value_data = node.decode_data() if isinstance(value_data, int): value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-8') - elif registry.RegValueTypes[node.Type] == registry.RegValueTypes.REG_BINARY: + elif registry.RegValueTypes(node.Type) == registry.RegValueTypes.REG_BINARY: value_data = format_hints.MultiTypeData(value_data, show_hex = True) - elif registry.RegValueTypes[node.Type] == registry.RegValueTypes.REG_MULTI_SZ: + elif registry.RegValueTypes(node.Type) == registry.RegValueTypes.REG_MULTI_SZ: value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-16-le', split_nulls = True) diff --git a/volatility3/framework/plugins/windows/registry/printkey.py b/volatility3/framework/plugins/windows/registry/printkey.py index 405df46806..10409e3ea5 100644 --- 
a/volatility3/framework/plugins/windows/registry/printkey.py +++ b/volatility3/framework/plugins/windows/registry/printkey.py @@ -121,7 +121,7 @@ def _printkey_iterator(self, value_node_name = renderers.UnreadableValue() try: - value_type = RegValueTypes[node.Type].name + value_type = RegValueTypes(node.Type).name except (exceptions.InvalidAddressException, RegistryFormatException) as excp: vollog.debug(excp) value_type = renderers.UnreadableValue() @@ -135,9 +135,9 @@ def _printkey_iterator(self, if isinstance(value_data, int): value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-8') - elif RegValueTypes[node.Type] == RegValueTypes.REG_BINARY: + elif RegValueTypes(node.Type) == RegValueTypes.REG_BINARY: value_data = format_hints.MultiTypeData(value_data, show_hex = True) - elif RegValueTypes[node.Type] == RegValueTypes.REG_MULTI_SZ: + elif RegValueTypes(node.Type) == RegValueTypes.REG_MULTI_SZ: value_data = format_hints.MultiTypeData(value_data, encoding = 'utf-16-le', split_nulls = True) diff --git a/volatility3/plugins/windows/registry/certificates.py b/volatility3/plugins/windows/registry/certificates.py index c4ae0bf371..3ba4266515 100644 --- a/volatility3/plugins/windows/registry/certificates.py +++ b/volatility3/plugins/windows/registry/certificates.py @@ -50,7 +50,7 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[str, str, str, str]]]: node_path = hive.get_key(top_key, return_list = True) for (depth, is_key, last_write_time, key_path, volatility, node) in printkey.PrintKey.key_iterator(hive, node_path, recurse = True): - if not is_key and RegValueTypes[node.Type].name == "REG_BINARY": + if not is_key and RegValueTypes(node.Type).name == "REG_BINARY": name, certificate_data = self.parse_data(node.decode_data()) unique_key_offset = key_path.index(top_key) + len(top_key) + 1 reg_section = key_path[unique_key_offset:key_path.index("\\", unique_key_offset)] From e9e80035ebdfc6a7c2261a514532d84bd572611a Mon Sep 17 00:00:00 2001 From: Mike Auty 
Date: Sat, 4 Sep 2021 15:28:57 +0100 Subject: [PATCH 235/294] Linux: Fixes after the pslist kernel req change --- volatility3/framework/plugins/linux/malfind.py | 2 +- volatility3/framework/plugins/linux/pstree.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/linux/malfind.py b/volatility3/framework/plugins/linux/malfind.py index fcfd68855b..4587dca847 100644 --- a/volatility3/framework/plugins/linux/malfind.py +++ b/volatility3/framework/plugins/linux/malfind.py @@ -74,5 +74,5 @@ def run(self): ("Disasm", interfaces.renderers.Disassembly)], self._generator( pslist.PsList.list_tasks(self.context, - self.config['vmlinux'], + self.config['kernel'], filter_func = filter_func))) diff --git a/volatility3/framework/plugins/linux/pstree.py b/volatility3/framework/plugins/linux/pstree.py index 9b24c27f72..3b95a344c6 100644 --- a/volatility3/framework/plugins/linux/pstree.py +++ b/volatility3/framework/plugins/linux/pstree.py @@ -35,7 +35,7 @@ def find_level(self, pid): def _generator(self): """Generates the.""" vmlinux = self.context.modules[self.config['kernel']] - for proc in self.list_tasks(self.context, vmlinux.layer_name, vmlinux.symbol_table_name): + for proc in self.list_tasks(self.context, vmlinux.name): self._processes[proc.pid] = proc # Build the child/level maps From 17efe7dbdc25657ae5e7084ec22ceb29e2cdc8da Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 5 Sep 2021 12:48:32 +0100 Subject: [PATCH 236/294] Windows: Fix issue #385 --- volatility3/framework/plugins/windows/dlllist.py | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/dlllist.py b/volatility3/framework/plugins/windows/dlllist.py index 992d9538ef..8125862765 100644 --- a/volatility3/framework/plugins/windows/dlllist.py +++ b/volatility3/framework/plugins/windows/dlllist.py @@ -137,11 +137,21 @@ def _generator(self, procs): file_handle.close() file_output = 
file_handle.preferred_filename + try: + dllbase = format_hints.Hex(entry.DllBase) + except exceptions.InvalidAddressException: + dllbase = renderers.NotAvailableValue() + + try: + size_of_image = format_hints.Hex(entry.SizeOfImage) + except exceptions.InvalidAddressException: + size_of_image = renderers.NotAvailableValue() + yield (0, (proc.UniqueProcessId, proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, - errors = 'replace'), format_hints.Hex(entry.DllBase), - format_hints.Hex(entry.SizeOfImage), BaseDllName, FullDllName, DllLoadTime, file_output)) + errors = 'replace'), dllbase, size_of_image, BaseDllName, + FullDllName, DllLoadTime, file_output)) def generate_timeline(self): kernel = self.context.modules[self.config['kernel']] From 7d7a8e4c2e0054e84a65d8ec1059e8ae1361b750 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 5 Sep 2021 15:44:12 +0100 Subject: [PATCH 237/294] Linux: Remove unused code (LGTM) --- volatility3/framework/plugins/linux/lsof.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/volatility3/framework/plugins/linux/lsof.py b/volatility3/framework/plugins/linux/lsof.py index b452bf0ba7..711a7d4e49 100644 --- a/volatility3/framework/plugins/linux/lsof.py +++ b/volatility3/framework/plugins/linux/lsof.py @@ -35,8 +35,6 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] ] def _generator(self, tasks): - vmlinux = self.context.modules[self.config['kernel']] - symbol_table = None for task in tasks: if symbol_table is None: From ced2dbe67d379ffc2ff7a09a67e71dc1d38339c9 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 5 Sep 2021 15:44:30 +0100 Subject: [PATCH 238/294] Mac: Remove unused code (LGTM) --- volatility3/framework/plugins/mac/check_syscall.py | 5 ----- 1 file changed, 5 deletions(-) diff --git a/volatility3/framework/plugins/mac/check_syscall.py b/volatility3/framework/plugins/mac/check_syscall.py index 3a0cd28c95..55ead986f1 100644 --- 
a/volatility3/framework/plugins/mac/check_syscall.py +++ b/volatility3/framework/plugins/mac/check_syscall.py @@ -39,11 +39,6 @@ def _generator(self): nsysent = kernel.object_from_symbol(symbol_name = "nsysent") table = kernel.object_from_symbol(symbol_name = "sysent") - # smear help - num_ents = min(nsysent, table.count) - if num_ents > 1024: - num_ents = 1024 - for (i, ent) in enumerate(table): try: call_addr = ent.sy_call.dereference().vol.offset From 10aa2f76205da3fdd8ca288e2cb109739c118afb Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sat, 14 Nov 2020 01:15:55 +0000 Subject: [PATCH 239/294] Core: Support zip files as plugin paths --- volatility3/framework/__init__.py | 25 ++++++++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/volatility3/framework/__init__.py b/volatility3/framework/__init__.py index ba834f4f14..f07dd72468 100644 --- a/volatility3/framework/__init__.py +++ b/volatility3/framework/__init__.py @@ -5,10 +5,11 @@ # Check the python version to ensure it's suitable import glob import sys +import zipfile required_python_version = (3, 6, 0) if (sys.version_info.major != required_python_version[0] or sys.version_info.minor < required_python_version[1] or - (sys.version_info.minor == required_python_version[1] and sys.version_info.micro < required_python_version[2])): + (sys.version_info.minor == required_python_version[1] and sys.version_info.micro < required_python_version[2])): raise RuntimeError( "Volatility framework requires python version {}.{}.{} or greater".format(*required_python_version)) @@ -20,6 +21,7 @@ from volatility3.framework import constants, interfaces + # ## # # SemVer version scheme @@ -86,7 +88,7 @@ def class_subclasses(cls: Type[T]) -> Generator[Type[T], None, None]: yield return_value -def import_files(base_module, ignore_errors = False) -> List[str]: +def import_files(base_module, ignore_errors: bool = False) -> List[str]: """Imports all plugins present under plugins module namespace.""" 
failures = [] if not isinstance(base_module.__path__, list): @@ -94,7 +96,7 @@ def import_files(base_module, ignore_errors = False) -> List[str]: vollog.log(constants.LOGLEVEL_VVVV, f"Importing from the following paths: {', '.join(base_module.__path__)}") for path in base_module.__path__: - for root, _, files in os.walk(path, followlinks = True): + for root, files in zipwalk(path, followlinks = True): # TODO: Figure out how to import pycache files if root.endswith("__pycache__"): continue @@ -115,6 +117,23 @@ def import_files(base_module, ignore_errors = False) -> List[str]: return failures +def zipwalk(path: str, followlinks: bool = False): + """Walks the contents of a zipfile as well as directory""" + if zipfile.is_zipfile(path): + zip_results = {} + with zipfile.ZipFile(path) as archive: + for file in archive.filelist: + if not file.is_dir(): + dirlist = zip_results.get(os.path.dirname(file.filename), []) + dirlist.append(os.path.basename(file.filename)) + zip_results[os.path.join(path, os.path.dirname(file.filename))] = dirlist + for value in zip_results: + yield value, zip_results[value] + else: + for root, _, files in os.walk(path, followlinks = followlinks): + yield root, files + + def list_plugins() -> Dict[str, Type[interfaces.plugins.PluginInterface]]: plugin_list = {} for plugin in class_subclasses(interfaces.plugins.PluginInterface): From 853b0376ce2c783f73929dd78542c2e631712ab9 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 15 Nov 2020 02:24:56 +0000 Subject: [PATCH 240/294] Core: Change zipimporting code The previous patch in this branch would allow passing zipfiles as part of the plugin path list. Instead, we now traverse any zipfiles in the plugin directories and load them based on where they are. The filenames for the zip files aren't important. 
--- volatility3/framework/__init__.py | 98 +++++++++++++++++++++---------- 1 file changed, 68 insertions(+), 30 deletions(-) diff --git a/volatility3/framework/__init__.py b/volatility3/framework/__init__.py index f07dd72468..2c94441302 100644 --- a/volatility3/framework/__init__.py +++ b/volatility3/framework/__init__.py @@ -96,42 +96,80 @@ def import_files(base_module, ignore_errors: bool = False) -> List[str]: vollog.log(constants.LOGLEVEL_VVVV, f"Importing from the following paths: {', '.join(base_module.__path__)}") for path in base_module.__path__: - for root, files in zipwalk(path, followlinks = True): + for root, _, files in os.walk(path, followlinks = True): # TODO: Figure out how to import pycache files if root.endswith("__pycache__"): continue - for f in files: - if (f.endswith(".py") or f.endswith(".pyc") or f.endswith(".pyo")) and not f.startswith("__"): - modpath = os.path.join(root[len(path) + len(os.path.sep):], f[:f.rfind(".")]) - module = modpath.replace(os.path.sep, ".") - if base_module.__name__ + "." + module not in sys.modules: - try: - importlib.import_module(base_module.__name__ + "." + module) - except ImportError as e: - vollog.debug(str(e)) - vollog.debug("Failed to import module {} based on file: {}".format( - base_module.__name__ + "." + module, modpath)) - failures.append(base_module.__name__ + "." + module) - if not ignore_errors: - raise + for filename in files: + if zipfile.is_zipfile(os.path.join(root, filename)): + # Use the root to add this to the module path, and sub-traverse the files + new_module = base_module + premodules = root[len(path) + len(os.path.sep):].replace(os.path.sep, '.') + for component in premodules.split('.'): + if component: + try: + new_module = getattr(new_module, component) + except AttributeError: + failures += [new_module + '.' 
+ component] + new_module.__path__ = [os.path.join(root, filename)] + new_module.__path__ + for ziproot, zipfiles in _zipwalk(os.path.join(root, filename)): + for zfile in zipfiles: + if _filter_files(zfile): + submodule = zfile[:zfile.rfind('.')].replace(os.path.sep, '.') + failures += import_file(new_module.__name__ + '.' + submodule, + os.path.join(path, ziproot, zfile)) + else: + if _filter_files(filename): + modpath = os.path.join(root[len(path) + len(os.path.sep):], filename[:filename.rfind(".")]) + submodule = modpath.replace(os.path.sep, ".") + failures += import_file(base_module.__name__ + '.' + submodule, + os.path.join(root, filename), + ignore_errors) + + return failures + + +def _filter_files(filename: str): + """Ensures that a filename traversed is an importable python file""" + return (filename.endswith(".py") or filename.endswith(".pyc") or filename.endswith( + ".pyo")) and not filename.startswith("__") + + +def import_file(module: str, path: str, ignore_errors: bool = False) -> List[str]: + """Imports a python file based on an existing module, a submodule and a filepath for error messages + + Args + module: Module name to be imported + path: File to be imported from (used for error messages) + + Returns + List of modules that may have failed to import + + """ + failures = [] + if module not in sys.modules: + try: + importlib.import_module(module) + except ImportError as e: + vollog.debug(str(e)) + vollog.debug("Failed to import module {} based on file: {}".format(module, path)) + failures.append(module) + if not ignore_errors: + raise return failures -def zipwalk(path: str, followlinks: bool = False): - """Walks the contents of a zipfile as well as directory""" - if zipfile.is_zipfile(path): - zip_results = {} - with zipfile.ZipFile(path) as archive: - for file in archive.filelist: - if not file.is_dir(): - dirlist = zip_results.get(os.path.dirname(file.filename), []) - dirlist.append(os.path.basename(file.filename)) - 
zip_results[os.path.join(path, os.path.dirname(file.filename))] = dirlist - for value in zip_results: - yield value, zip_results[value] - else: - for root, _, files in os.walk(path, followlinks = followlinks): - yield root, files +def _zipwalk(path: str): + """Walks the contents of a zipfile just like os.walk""" + zip_results = {} + with zipfile.ZipFile(path) as archive: + for file in archive.filelist: + if not file.is_dir(): + dirlist = zip_results.get(os.path.dirname(file.filename), []) + dirlist.append(os.path.basename(file.filename)) + zip_results[os.path.join(path, os.path.dirname(file.filename))] = dirlist + for value in zip_results: + yield value, zip_results[value] def list_plugins() -> Dict[str, Type[interfaces.plugins.PluginInterface]]: From 4b3fc58c9f741a545ee78e804b1b6b3682e7af82 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 6 Sep 2021 20:21:29 +0100 Subject: [PATCH 241/294] Mac: Finish the previous LGTM clean-up --- volatility3/framework/plugins/mac/check_syscall.py | 1 - 1 file changed, 1 deletion(-) diff --git a/volatility3/framework/plugins/mac/check_syscall.py b/volatility3/framework/plugins/mac/check_syscall.py index 55ead986f1..1608a64a9e 100644 --- a/volatility3/framework/plugins/mac/check_syscall.py +++ b/volatility3/framework/plugins/mac/check_syscall.py @@ -36,7 +36,6 @@ def _generator(self): handlers = mac.MacUtilities.generate_kernel_handler_info(self.context, kernel.layer_name, kernel, mods) - nsysent = kernel.object_from_symbol(symbol_name = "nsysent") table = kernel.object_from_symbol(symbol_name = "sysent") for (i, ent) in enumerate(table): From 02a1ff31b44a6a783f361aa100e7319b73118a00 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Tue, 7 Sep 2021 09:25:39 +0100 Subject: [PATCH 242/294] Automagic: Extended the possible DTB locations for Win 11 --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py 
b/volatility3/framework/automagic/windows.py index e0ab92de24..37b696a9af 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -184,7 +184,7 @@ def stack(cls, layer = layer_type(context, config_path = config_path, name = new_layer_name, metadata = {'os': 'Windows'}) test_sets = [("Detecting Self-referential pointer for recent windows", - [DtbSelfRefPae(), DtbSelfRef64bit()], [(0x1a0000, 0x100000), (0x650000, 0x50000)]), + [DtbSelfRefPae(), DtbSelfRef64bit()], [(0x1a0000, 0x100000), (0x650000, 0xa0000)]), ("Older windows fixed location self-referential pointers", [DtbSelfRefPae(), DtbSelfRef32bit(), DtbSelfRef64bitOldWindows()], [(0x30000, 0x1000000)]) ] From bef1a0f654ee2ce66246dda71b440cca2043673e Mon Sep 17 00:00:00 2001 From: Niklas Beierl Date: Wed, 8 Sep 2021 12:04:28 +0200 Subject: [PATCH 243/294] Doc: list_head.to_list, pslist pid 0 Added Documentation for the params of list_head.to_list Added a comment to linux.pslist explaining why the init task is not yielded. --- volatility3/framework/plugins/linux/pslist.py | 1 + .../framework/symbols/linux/extensions/__init__.py | 14 +++++++++++++- 2 files changed, 14 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/linux/pslist.py b/volatility3/framework/plugins/linux/pslist.py index ed7374f2fd..9b97a56d1f 100644 --- a/volatility3/framework/plugins/linux/pslist.py +++ b/volatility3/framework/plugins/linux/pslist.py @@ -78,6 +78,7 @@ def list_tasks( init_task = vmlinux.object_from_symbol(symbol_name = "init_task") + # Note that the init_task itself is not yielded, since "ps" also never shows it. 
for task in init_task.tasks: if not filter_func(task): yield task diff --git a/volatility3/framework/symbols/linux/extensions/__init__.py b/volatility3/framework/symbols/linux/extensions/__init__.py index 7b14a86749..fbc02399f9 100644 --- a/volatility3/framework/symbols/linux/extensions/__init__.py +++ b/volatility3/framework/symbols/linux/extensions/__init__.py @@ -403,7 +403,19 @@ def to_list(self, forward: bool = True, sentinel: bool = True, layer: Optional[str] = None) -> Iterator[interfaces.objects.ObjectInterface]: - """Returns an iterator of the entries in the list.""" + """Returns an iterator of the entries in the list. + + Args: + symbol_type: Type of the list elements + member: Name of the list_head member in the list elements + forward: Set false to go backwards + sentinel: Whether self is a "sentinel node", meaning it is not embedded in a member of the list + Sentinel nodes are NOT yielded. See https://en.wikipedia.org/wiki/Sentinel_node for further reference + layer: Name of layer to read from + Yields: + Objects of the type specified via the "symbol_type" argument. 
+ + """ layer = layer or self.vol.layer_name relative_offset = self._context.symbol_space.get_type(symbol_type).relative_child_offset(member) From 76d319c83a3e532c5484d2e0d2bb00bdef6a6fd2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 8 Sep 2021 20:50:38 +0100 Subject: [PATCH 244/294] Automagic: Improve reserved bit detection for PAE --- volatility3/framework/automagic/windows.py | 34 +++++++++++++--------- 1 file changed, 21 insertions(+), 13 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 37b696a9af..6a404ae0fd 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -42,29 +42,33 @@ class DtbSelfReferential: index within the page.""" def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, mask: int, - valid_range: Iterable[int]) -> None: + valid_range: Iterable[int], reserved_bits: int) -> None: self.layer_type = layer_type self.ptr_struct = ptr_struct self.ptr_size = struct.calcsize(ptr_struct) self.mask = mask self.page_size: int = layer_type.page_size self.valid_range = valid_range + self.reserved_bits = 0 def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, int]]: page = data[page_offset:page_offset + self.page_size] if not page: return None ref_pages = set() + for ref in range(0, self.page_size, self.ptr_size): ptr_data = page[ref:ref + self.ptr_size] - if len(ptr_data) == self.ptr_size: - ptr, = struct.unpack(self.ptr_struct, ptr_data) - # For both PAE and Intel-32e, bit 7 is reserved (more are reserved in PAE), so if that's ever set, - # we can move on - if ptr & 0x10: - return None - if ((ptr & self.mask) == (data_offset + page_offset)) and (data_offset + page_offset > 0): + ptr, = struct.unpack(self.ptr_struct, ptr_data) + # For both Intel-32e, bit 7 is reserved (more are reserved in PAE), so if that's ever set, + # we can move on + if ptr & self.reserved_bits: + return 
None + if ((ptr & self.mask) == (data_offset + page_offset)) and (data_offset + page_offset > 0): + # Pointer must be valid + if (ptr & 0x01): ref_pages.add(ref) + # The DTB is extremely unlikely to refer back to itself. so the number of reference should always be exactly 1 if len(ref_pages) == 1: ref_page = ref_pages.pop() @@ -79,7 +83,8 @@ def __init__(self): super().__init__(layer_type = layers.intel.WindowsIntel, ptr_struct = "I", mask = 0xFFFFF000, - valid_range = [0x300]) + valid_range = [0x300], + reserved_bits = 0x80) class DtbSelfRef64bit(DtbSelfReferential): @@ -88,7 +93,8 @@ def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", mask = 0x3FFFFFFFFFF000, - valid_range = range(0x100, 0x1ff)) + valid_range = range(0x100, 0x1ff), + reserved_bits = 0x80) class DtbSelfRef64bitOldWindows(DtbSelfReferential): @@ -97,7 +103,8 @@ def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntel32e, ptr_struct = "Q", mask = 0x3FFFFFFFFFF000, - valid_range = [0x1ed]) + valid_range = [0x1ed], + reserved_bits = 0x80) class DtbSelfRefPae(DtbSelfReferential): @@ -106,7 +113,8 @@ def __init__(self) -> None: super().__init__(layer_type = layers.intel.WindowsIntelPAE, ptr_struct = "Q", valid_range = [0x3], - mask = 0x3FFFFFFFFFF000) + mask = 0x3FFFFFFFFFF000, + reserved_bits = 0x0) def __call__(self, *args, **kwargs): dtb = super().__call__(*args, **kwargs) @@ -184,7 +192,7 @@ def stack(cls, layer = layer_type(context, config_path = config_path, name = new_layer_name, metadata = {'os': 'Windows'}) test_sets = [("Detecting Self-referential pointer for recent windows", - [DtbSelfRefPae(), DtbSelfRef64bit()], [(0x1a0000, 0x100000), (0x650000, 0xa0000)]), + [DtbSelfRefPae(), DtbSelfRef64bit()], [(0x150000, 0x150000), (0x650000, 0xa0000)]), ("Older windows fixed location self-referential pointers", [DtbSelfRefPae(), DtbSelfRef32bit(), DtbSelfRef64bitOldWindows()], [(0x30000, 0x1000000)]) ] From 
75feba009918906f99b5a5cc294e56442b6f263d Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 8 Sep 2021 21:31:45 +0100 Subject: [PATCH 245/294] Automagic: Sort DTB results by test --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 6a404ae0fd..8c479bff04 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -206,7 +206,7 @@ def stack(cls, sections = sections, progress_callback = progress_callback) # Flatten the generator - hits = list(hits) + hits = sorted(list(hits), key = lambda x: tests.index(x[0])) if hits: # TODO: Decide which to use if there are multiple options test, page_map_offset = hits[0] From ca047c1fd51f2f58741dadd6727285031718a046 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 8 Sep 2021 21:46:35 +0100 Subject: [PATCH 246/294] Revert "Automagic: Sort DTB results by test" This reverts commit 75feba009918906f99b5a5cc294e56442b6f263d. 
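For context, the ordering being reverted amounted to this small sketch (illustrative test names and offsets only):

```python
# Order scan hits by the position of the test that produced them, so a
# hit from an earlier (higher-priority) test wins regardless of offset;
# ties within a test fall back to the lower offset.
def prioritize(hits, tests):
    return sorted(hits, key=lambda hit: (tests.index(hit[0]), hit[1]))


tests = ["pae", "64bit"]
hits = [("64bit", 0x650000), ("pae", 0x1A0000), ("64bit", 0x150000)]
ordered = prioritize(hits, tests)
```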
--- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 8c479bff04..6a404ae0fd 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -206,7 +206,7 @@ def stack(cls, sections = sections, progress_callback = progress_callback) # Flatten the generator - hits = sorted(list(hits), key = lambda x: tests.index(x[0])) + hits = list(hits) if hits: # TODO: Decide which to use if there are multiple options test, page_map_offset = hits[0] From 86afc8fb0cae6156cfcae09407750396fdf9bd31 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 8 Sep 2021 22:42:58 +0100 Subject: [PATCH 247/294] Automagic: Actually use reserved_bits --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 6a404ae0fd..f3ad05a924 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -49,7 +49,7 @@ def __init__(self, layer_type: Type[layers.intel.Intel], ptr_struct: str, mask: self.mask = mask self.page_size: int = layer_type.page_size self.valid_range = valid_range - self.reserved_bits = 0 + self.reserved_bits = reserved_bits def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[Tuple[int, int]]: page = data[page_offset:page_offset + self.page_size] From ee2a868c97271c92d4eed1850d462476e8cb6c3b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 12 Sep 2021 11:35:44 +0100 Subject: [PATCH 248/294] Automagic: Windows 32-bit bit 7 is not always 0 --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index f3ad05a924..0cda683328 100644 --- 
a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -84,7 +84,7 @@ def __init__(self): ptr_struct = "I", mask = 0xFFFFF000, valid_range = [0x300], - reserved_bits = 0x80) + reserved_bits = 0x0) class DtbSelfRef64bit(DtbSelfReferential): From 807662c41f042702085246dbea8f0a43a26d52c7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 12 Sep 2021 12:36:35 +0100 Subject: [PATCH 249/294] Automagic: Prioritize tests in order --- volatility3/framework/automagic/windows.py | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 0cda683328..57bfe0dd87 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -205,8 +205,13 @@ def stack(cls, PageMapScanner(tests = tests), sections = sections, progress_callback = progress_callback) + # Flatten the generator - hits = list(hits) + def sort_by_tests(x): + return tests.index(x[0]), x[1] + + hits = sorted(list(hits), key = sort_by_tests) + if hits: # TODO: Decide which to use if there are multiple options test, page_map_offset = hits[0] From 48e802e255f3e8b7108b8aee0daf64109dc6a571 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 12 Sep 2021 12:55:42 +0100 Subject: [PATCH 250/294] Automagic: Ensure reserved bits are for valid pages --- volatility3/framework/automagic/windows.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index 57bfe0dd87..e67f20be8e 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -62,7 +62,7 @@ def __call__(self, data: bytes, data_offset: int, page_offset: int) -> Optional[ ptr, = struct.unpack(self.ptr_struct, ptr_data) # For both Intel-32e, bit 7 is reserved (more are reserved in PAE), so if that's ever set, # we can move on - if ptr & 
self.reserved_bits: + if (ptr & self.reserved_bits) and (ptr & 0x01): return None if ((ptr & self.mask) == (data_offset + page_offset)) and (data_offset + page_offset > 0): # Pointer must be valid From cc26494b7b6c4d2328cbfe9fe7b17dcfcb991cae Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 12 Sep 2021 13:21:17 +0100 Subject: [PATCH 251/294] Automagic: Don't churn memory as much --- volatility3/framework/automagic/windows.py | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/volatility3/framework/automagic/windows.py b/volatility3/framework/automagic/windows.py index e67f20be8e..eb63a75e59 100644 --- a/volatility3/framework/automagic/windows.py +++ b/volatility3/framework/automagic/windows.py @@ -137,8 +137,8 @@ def __init__(self, tests: Optional[List[DtbSelfReferential]]) -> None: self.tests = tests def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[DtbSelfReferential, int], None, None]: - for test in self.tests: - for page_offset in range(0, len(data), 0x1000): + for page_offset in range(0, len(data), 0x1000): + for test in self.tests: result = test(data, data_offset, page_offset) if result is not None: yield (test, result[0]) @@ -148,6 +148,13 @@ class WindowsIntelStacker(interfaces.automagic.StackerLayerInterface): stack_order = 40 exclusion_list = ['mac', 'linux'] + # Group these by region so we only run over the data once + test_sets = [("Detecting Self-referential pointer for recent windows", + [DtbSelfRef64bit()], [(0x150000, 0x150000), (0x650000, 0xa0000)]), + ("Older windows fixed location self-referential pointers", + [DtbSelfRefPae(), DtbSelfRef32bit(), DtbSelfRef64bitOldWindows()], [(0x30000, 0x1000000)]) + ] + @classmethod def stack(cls, context: interfaces.context.ContextInterface, @@ -191,14 +198,8 @@ def stack(cls, config_path, "page_map_offset")] = base_layer.metadata['page_map_offset'] layer = layer_type(context, config_path = config_path, name = new_layer_name, metadata = {'os': 
'Windows'}) - test_sets = [("Detecting Self-referential pointer for recent windows", - [DtbSelfRefPae(), DtbSelfRef64bit()], [(0x150000, 0x150000), (0x650000, 0xa0000)]), - ("Older windows fixed location self-referential pointers", - [DtbSelfRefPae(), DtbSelfRef32bit(), DtbSelfRef64bitOldWindows()], [(0x30000, 0x1000000)]) - ] - # Self Referential finder - for description, tests, sections in test_sets: + for description, tests, sections in cls.test_sets: vollog.debug(description) # There is a very high chance that the DTB will live in these very narrow segments, assuming we couldn't find them previously hits = context.layers[layer_name].scan(context, From 9e3d1bdc26aab1a3115b47bd9813940bd5e96227 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 19 Sep 2021 01:05:23 +0100 Subject: [PATCH 252/294] Core: Fix module size finding --- volatility3/framework/contexts/__init__.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py index 783a484219..e39c2d4ec1 100644 --- a/volatility3/framework/contexts/__init__.py +++ b/volatility3/framework/contexts/__init__.py @@ -360,7 +360,8 @@ def get_module_symbols_by_absolute_location(self, offset: int, size: int = 0) -> provided.""" if size < 0: raise ValueError("Size must be strictly non-negative") - for module in self._modules: + for module_name in self._modules: + module = self._modules[module_name] if isinstance(module, SizedModule): if (offset <= module.offset + module.size) and (offset + size >= module.offset): yield (module.name, module.get_symbols_by_absolute_location(offset, size)) From dd35e78dd048232af9e13b036cb1eb02b4b13f4e Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 19 Sep 2021 01:28:14 +0100 Subject: [PATCH 253/294] Linux: Fix kmsg plugin --- volatility3/framework/plugins/linux/kmsg.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/linux/kmsg.py 
b/volatility3/framework/plugins/linux/kmsg.py index 27327e97b4..3ec53cdcf1 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -58,7 +58,7 @@ def __init__( self._context = context self._config = config vmlinux = context.modules[self._config['kernel']] - self.layer_name = kernel.layer_name # type: ignore + self.layer_name = vmlinux.layer_name # type: ignore symbol_table_name = vmlinux.symbol_table_name # type: ignore self.vmlinux = contexts.Module(context, symbol_table_name, self.layer_name, 0) # type: ignore self.long_unsigned_int_size = self.vmlinux.get_type('long unsigned int').size From 95713fc7acdab0ae177b2c63476c59d063e43b2c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 19 Sep 2021 01:30:21 +0100 Subject: [PATCH 254/294] Core: Change Module to ConfigurableInterface --- volatility3/framework/automagic/module.py | 2 +- volatility3/framework/contexts/__init__.py | 75 ++++++++++++------- .../framework/interfaces/configuration.py | 2 +- volatility3/framework/interfaces/context.py | 48 +++++++----- volatility3/framework/plugins/linux/kmsg.py | 2 +- volatility3/framework/plugins/windows/ssdt.py | 12 +-- 6 files changed, 84 insertions(+), 57 deletions(-) diff --git a/volatility3/framework/automagic/module.py b/volatility3/framework/automagic/module.py index 315164ec7d..0c2b58f3c3 100644 --- a/volatility3/framework/automagic/module.py +++ b/volatility3/framework/automagic/module.py @@ -22,7 +22,7 @@ def __call__(self, # The requirement is unfulfilled and is a ModuleRequirement context.config[interfaces.configuration.path_join( - new_config_path, 'class')] = 'volatility3.framework.contexts.ConfigurableModule' + new_config_path, 'class')] = 'volatility3.framework.contexts.Module' for req in requirement.requirements: if requirement.requirements[req].unsatisfied(context, new_config_path) and req != 'offset': diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py 
index e39c2d4ec1..3b15c5d0e1 100644 --- a/volatility3/framework/contexts/__init__.py +++ b/volatility3/framework/contexts/__init__.py @@ -144,17 +144,17 @@ def module(self, size: The size, in bytes, that the module occupys from offset location within the layer named layer_name """ if size: - return SizedModule(self, - module_name = module_name, - layer_name = layer_name, - offset = offset, - size = size, - native_layer_name = native_layer_name) - return Module(self, - module_name = module_name, - layer_name = layer_name, - offset = offset, - native_layer_name = native_layer_name) + return SizedModule.create(self, + module_name = module_name, + layer_name = layer_name, + offset = offset, + size = size, + native_layer_name = native_layer_name) + return Module.create(self, + module_name = module_name, + layer_name = layer_name, + offset = offset, + native_layer_name = native_layer_name) def get_module_wrapper(method: str) -> Callable: @@ -179,6 +179,33 @@ def wrapper(self, name: str) -> Callable: class Module(interfaces.context.ModuleInterface): + @classmethod + def create(cls, + context: interfaces.context.ContextInterface, + module_name: str, + layer_name: str, + offset: int, + **kwargs) -> 'Module': + pathjoin = interfaces.configuration.path_join + # Check if config_path is None + config_path = kwargs.get('config_path', None) + if config_path is None: + config_path = pathjoin('temporary', 'modules') + # Populate the configuration + context.config[pathjoin(config_path, 'layer_name')] = layer_name + context.config[pathjoin(config_path, 'offset')] = offset + # This is important, since the module_name may be changed in case it is already in use + if 'symbol_table_name' not in kwargs: + kwargs['symbol_table_name'] = module_name + for arg in kwargs: + context.config[pathjoin(config_path, arg)] = kwargs.get(arg, None) + # Construct the object + return_val = cls(context, config_path, context.modules.free_module_name(module_name)) + context.add_module(return_val) + 
context.config[config_path] = return_val.name + # Add the module to the context modules collection + return return_val + def object(self, object_type: str, offset: int = None, @@ -280,26 +307,11 @@ def symbols(self): class SizedModule(Module): - def __init__(self, - context: interfaces.context.ContextInterface, - module_name: str, - layer_name: str, - offset: int, - size: int, - symbol_table_name: Optional[str] = None, - native_layer_name: Optional[str] = None) -> None: - super().__init__(context, - module_name = module_name, - layer_name = layer_name, - offset = offset, - native_layer_name = native_layer_name, - symbol_table_name = symbol_table_name) - self._size = size - @property def size(self) -> int: """Returns the size of the module (0 for unknown size)""" - return self._size + size = self.config.get('size', 0) + return size or 0 @property # type: ignore # FIXME: mypy #5107 @functools.lru_cache() @@ -346,6 +358,13 @@ def deduplicate(self) -> 'ModuleCollection': seen.add(mod.hash) # type: ignore # FIXME: mypy #5107 return ModuleCollection(new_modules) + def free_module_name(self, prefix: str = "module") -> str: + """Returns an unused module name""" + count = 1 + while prefix + str(count) in self: + count += 1 + return prefix + str(count) + @property def modules(self) -> 'ModuleCollection': """A name indexed dictionary of modules using that name in this diff --git a/volatility3/framework/interfaces/configuration.py b/volatility3/framework/interfaces/configuration.py index 6331e66836..7dc046a3e8 100644 --- a/volatility3/framework/interfaces/configuration.py +++ b/volatility3/framework/interfaces/configuration.py @@ -192,7 +192,7 @@ def _sanitize_value(self, value: Any) -> ConfigSimpleType: elif value is None: return None else: - raise TypeError("Invalid type stored in configuration") + raise TypeError(f"Invalid type stored in configuration: {type(value)}") def __delitem__(self, key: str) -> None: """Deletes an item from the hierarchical dict.""" diff --git 
a/volatility3/framework/interfaces/context.py b/volatility3/framework/interfaces/context.py index e42753363a..c52e1aaa59 100644 --- a/volatility3/framework/interfaces/context.py +++ b/volatility3/framework/interfaces/context.py @@ -136,7 +136,7 @@ def module(self, """ -class ModuleInterface(metaclass = ABCMeta): +class ModuleInterface(interfaces.configuration.ConfigurableInterface): """Maintains state concerning a particular loaded module in memory. This object is OS-independent. @@ -144,31 +144,35 @@ class ModuleInterface(metaclass = ABCMeta): def __init__(self, context: ContextInterface, - module_name: str, - layer_name: str, - offset: int, - symbol_table_name: Optional[str] = None, - native_layer_name: Optional[str] = None) -> None: + config_path: str, + name: str) -> None: """Constructs a new os-independent module. Args: context: The context within which this module will exist + config_path: The path within the context's configuration tree name: The name of the module - layer_name: The layer within the context in which the module exists - offset: The offset at which the module exists in the layer - symbol_table_name: The name of an associated symbol table - native_layer_name: The default native layer for objects constructed by the module """ - self._context = context - self._module_name = module_name - self._layer_name = layer_name - self._offset = offset - # TODO: Figure out about storing/requesting the native_layer_name for a module in the configuration - # The current module requirement does not ask for nor act upon this information - self._native_layer_name = native_layer_name or layer_name - self._symbol_table_name = symbol_table_name or self._module_name - - def build_configuration(self) -> 'configuration.HierarchicalDict': + super().__init__(context, config_path) + self._module_name = name + + @property + def _layer_name(self) -> str: + return self.config['layer_name'] + + @property + def _offset(self) -> int: + return self.config['offset'] + + @property 
+ def _native_layer_name(self) -> str: + return self.config.get('native_layer_name', self._layer_name) + + @property + def _symbol_table_name(self) -> str: + return self.config.get('symbol_table_name', self._module_name) + + def build_configuration(self) -> 'interfaces.configuration.HierarchicalDict': """Builds the configuration dictionary for this specific Module""" config = super().build_configuration() @@ -319,6 +323,10 @@ def __len__(self) -> int: def __iter__(self): return iter(self._modules) + def free_module_name(self, prefix: str = "module") -> str: + """Returns an unused table name to ensure no collision occurs when + inserting a symbol table.""" + def get_modules_by_symbol_tables(self, symbol_table: str) -> Iterable[str]: """Returns the modules which use the specified symbol table name""" for module_name in self._modules: diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py index 3ec53cdcf1..d982956d09 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -60,7 +60,7 @@ def __init__( vmlinux = context.modules[self._config['kernel']] self.layer_name = vmlinux.layer_name # type: ignore symbol_table_name = vmlinux.symbol_table_name # type: ignore - self.vmlinux = contexts.Module(context, symbol_table_name, self.layer_name, 0) # type: ignore + self.vmlinux = contexts.Module.create(context, symbol_table_name, self.layer_name, 0) # type: ignore self.long_unsigned_int_size = self.vmlinux.get_type('long unsigned int').size @classmethod diff --git a/volatility3/framework/plugins/windows/ssdt.py b/volatility3/framework/plugins/windows/ssdt.py index b5b3e40c0f..7203d415b0 100644 --- a/volatility3/framework/plugins/windows/ssdt.py +++ b/volatility3/framework/plugins/windows/ssdt.py @@ -60,12 +60,12 @@ def build_module_collection(cls, context: interfaces.context.ContextInterface, l if module_name in constants.windows.KERNEL_MODULE_NAMES: symbol_table_name = 
symbol_table - context_module = contexts.SizedModule(context, - module_name, - layer_name, - mod.DllBase, - mod.SizeOfImage, - symbol_table_name = symbol_table_name) + context_module = contexts.SizedModule.create(context = context, + module_name = module_name, + layer_name = layer_name, + offset = mod.DllBase, + size = mod.SizeOfImage, + symbol_table_name = symbol_table_name) context_modules.append(context_module) From 54203e02531915be48be9993ca1b128ba63bb161 Mon Sep 17 00:00:00 2001 From: Frank Gomulka Date: Thu, 15 Jul 2021 00:16:29 -0400 Subject: [PATCH 255/294] Add patch for driverscan for windows 7 and earlier This patches an issue for some versions of windows that require the bottom-up approach for finding the size of "object body" by recognizing additional structures occupying the "object body" position. The sizes of these additional structures are added to the size of `_DRIVER_OBJECT` to more accurately compute the actual size of "object body". --- .../framework/symbols/windows/extensions/pool.py | 12 ++++++++++-- .../{framework => }/plugins/windows/poolscanner.py | 12 +++++++----- 2 files changed, 17 insertions(+), 7 deletions(-) rename volatility3/{framework => }/plugins/windows/poolscanner.py (97%) diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index d470797b5b..667acc7391 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -5,6 +5,7 @@ from volatility3.framework import objects, interfaces, constants, symbols, exceptions, renderers from volatility3.framework.renderers import conversion +from volatility3.plugins.windows.poolscanner import PoolConstraint vollog = logging.getLogger(__name__) @@ -17,9 +18,8 @@ class POOL_HEADER(objects.StructType): """ def get_object(self, - type_name: str, + constraint: PoolConstraint, use_top_down: bool, - executive: bool = False, kernel_symbol_table: 
Optional[str] = None, native_layer_name: Optional[str] = None) -> Optional[interfaces.objects.ObjectInterface]: """Carve an object or data structure from a kernel pool allocation @@ -34,6 +34,10 @@ def get_object(self, An object as found from a POOL_HEADER """ + # TODO: I wasn't quite sure what to do with these values, so I just set them here for now. + type_name = constraint.type_name + executive = constraint.object_type is not None + symbol_table_name = self.vol.type_name.split(constants.BANG)[0] if constants.BANG in type_name: symbol_table_name, type_name = type_name.split(constants.BANG)[0:2] @@ -150,6 +154,10 @@ def get_object(self, # use the bottom up approach for windows 7 and earlier else: type_size = self._context.symbol_space.get_type(symbol_table_name + constants.BANG + type_name).size + if constraint.additional_structures: + for additional_structure in constraint.additional_structures: + type_size += self._context.symbol_space.get_type(symbol_table_name + constants.BANG + additional_structure).size + rounded_size = conversion.round(type_size, alignment, up = True) mem_object = self._context.object(symbol_table_name + constants.BANG + type_name, diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/plugins/windows/poolscanner.py similarity index 97% rename from volatility3/framework/plugins/windows/poolscanner.py rename to volatility3/plugins/windows/poolscanner.py index e1abb1b7fa..951edac664 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/plugins/windows/poolscanner.py @@ -39,7 +39,8 @@ def __init__(self, size: Optional[Tuple[Optional[int], Optional[int]]] = None, index: Optional[Tuple[Optional[int], Optional[int]]] = None, alignment: Optional[int] = 1, - skip_type_test: bool = False) -> None: + skip_type_test: bool = False, + additional_structures: Optional[List[str]] = None) -> None: self.tag = tag self.type_name = type_name self.object_type = object_type @@ -48,6 +49,7 @@ def 
__init__(self, self.index = index self.alignment = alignment self.skip_type_test = skip_type_test + self.additional_structures = additional_structures class PoolHeaderScanner(interfaces.layers.ScannerInterface): @@ -212,7 +214,8 @@ def builtin_constraints(symbol_table: str, tags_filter: List[bytes] = None) -> L type_name = symbol_table + constants.BANG + "_DRIVER_OBJECT", object_type = "Driver", size = (248, None), - page_type = PoolType.PAGED | PoolType.NONPAGED | PoolType.FREE), + page_type = PoolType.PAGED | PoolType.NONPAGED | PoolType.FREE, + additional_structures = ["_DRIVER_EXTENSION"]), # drivers on windows starting with windows 8 PoolConstraint(b'Driv', type_name = symbol_table + constants.BANG + "_DRIVER_OBJECT", @@ -291,10 +294,9 @@ def generate_pool_scan(cls, for constraint, header in cls.pool_scan(context, scan_layer, symbol_table, constraints, alignment = alignment): - mem_object = header.get_object(type_name = constraint.type_name, + mem_object = header.get_object(constraint = constraint, use_top_down = is_windows_8_or_later, - executive = constraint.object_type is not None, - native_layer_name = layer_name, + native_layer_name = 'primary', kernel_symbol_table = symbol_table) if mem_object is None: From 3919c909bc07fae0b40b4ec35b44fbc10081c992 Mon Sep 17 00:00:00 2001 From: Frank Gomulka Date: Thu, 15 Jul 2021 15:35:00 -0400 Subject: [PATCH 256/294] Comment and Pydoc changes --- volatility3/framework/symbols/windows/extensions/pool.py | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index 667acc7391..5353fc30b1 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -25,16 +25,15 @@ def get_object(self, """Carve an object or data structure from a kernel pool allocation Args: - type_name: the data structure type name - 
native_layer_name: the name of the layer where the data originally lived - object_type: the object type (executive kernel objects only) + constraint: a PoolConstraint object used to get the pool allocation header object + use_top_down: for delineating how a windows version finds the size of the object body kernel_symbol_table: in case objects of a different symbol table are scanned for + native_layer_name: the name of the layer where the data originally lived Returns: An object as found from a POOL_HEADER """ - # TODO: I wasn't quite sure what to do with these values, so I just set them here for now. type_name = constraint.type_name executive = constraint.object_type is not None From ce5bc3ccd89a443b50e9c2f55bccf135a41bf827 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 01:33:48 +0100 Subject: [PATCH 257/294] Layers: Improve non-linear segmented layers --- volatility3/framework/interfaces/layers.py | 3 ++- volatility3/framework/layers/qemu.py | 9 ++++++--- volatility3/framework/layers/segmented.py | 5 ++--- 3 files changed, 10 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 28c452f900..79b8902e34 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -441,7 +441,8 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes: unprocessed_data = self._context.layers.read(layer, mapped_offset, mapped_length, pad) processed_data = self._decode_data(unprocessed_data, mapped_offset, layer_offset, sublength) if len(processed_data) != sublength: - raise ValueError("ProcessedData length does not match expected length of chunk") + raise ValueError( + f"ProcessedData length {len(processed_data)} does not match expected length of chunk {sublength}") output += processed_data current_offset += sublength return output + (b"\x00" * (length - len(output))) diff --git a/volatility3/framework/layers/qemu.py 
b/volatility3/framework/layers/qemu.py index ff0644dbc1..b56dbe2724 100644 --- a/volatility3/framework/layers/qemu.py +++ b/volatility3/framework/layers/qemu.py @@ -1,6 +1,7 @@ # This file is Copyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import bisect import functools import json import math @@ -211,9 +212,11 @@ def extract_data(self, index, name, version_id): return index def _decode_data(self, data: bytes, mapped_offset: int, offset: int, output_length: int) -> bytes: - if mapped_offset in self._compressed: - return (data * 0x1000)[:output_length] - return data + start_offset, _, _, _ = self._segments[bisect.bisect_right(self._segments, (offset, 0xffffffffffffff,)) - 1] + if offset in self._compressed: + data = (data * 0x1000) + result = data[offset - start_offset:output_length + offset - start_offset] + return result @functools.lru_cache(maxsize = 512) def read(self, offset: int, length: int, pad: bool = False) -> bytes: diff --git a/volatility3/framework/layers/segmented.py b/volatility3/framework/layers/segmented.py index 80c89723ac..e4a58e9831 100644 --- a/volatility3/framework/layers/segmented.py +++ b/volatility3/framework/layers/segmented.py @@ -35,7 +35,7 @@ def __init__(self, def _load_segments(self) -> None: """Populates the _segments variable. 
- Segments must be (address, mapped address, length) and must be + Segments must be (address, mapped address, length, mapped_length) and must be sorted by address when this method exits """ @@ -85,7 +85,6 @@ def mapping(self, if current_offset > logical_offset: difference = current_offset - logical_offset logical_offset += difference - mapped_offset += difference size -= difference except exceptions.InvalidAddressException: if not ignore_errors: @@ -103,7 +102,7 @@ def mapping(self, return # Crop it to the amount we need left chunk_size = min(size, length + offset - logical_offset) - yield logical_offset, chunk_size, mapped_offset, chunk_size, self._base_layer + yield logical_offset, chunk_size, mapped_offset, mapped_size, self._base_layer current_offset += chunk_size # Terminate if we've gone (or reached) our required limit if current_offset >= offset + length: From 2dd13d6b4b70cf1a6f5d39ab4d9418345a5c7400 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 02:01:07 +0100 Subject: [PATCH 258/294] Revert "Layers: Improve non-linear segmented layers" This reverts commit ce5bc3ccd89a443b50e9c2f55bccf135a41bf827. Seems like it broke standard images for some reason, this needs investigation before it comes back. 
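The bug these segmented-layer patches wrestle with comes down to segment bookkeeping: each segment records where a run of logical addresses lives in the base layer, and a lookup must find the containing segment and apply the correct delta. A rough sketch of the `bisect`-based containing-segment lookup used in the qemu layer change (segment values here are made up for illustration; the real tuples also carry a mapped length, and compressed segments must be decoded before slicing):

```python
import bisect

# Segments: (logical_offset, mapped_offset, length, mapped_length),
# kept sorted by logical_offset — sample values for illustration only.
segments = [
    (0x0000, 0x0200, 0x1000, 0x1000),
    (0x2000, 0x1200, 0x1000, 0x1000),  # note the hole at 0x1000-0x1fff
]

def translate(offset: int) -> int:
    """Map a logical offset to its offset in the base layer."""
    # bisect_right with a large sentinel finds the last segment whose
    # start is <= offset (the same trick the qemu patch uses)
    index = bisect.bisect_right(segments, (offset, 0xFFFFFFFFFFFFFF)) - 1
    if index < 0:
        raise ValueError(f"Invalid address at {offset:#x}")
    start, mapped, length, _mapped_length = segments[index]
    if offset >= start + length:
        raise ValueError(f"Invalid address at {offset:#x} (hole)")
    return mapped + (offset - start)
```

Here `translate(0x10)` gives `0x210`, `translate(0x2008)` gives `0x1208`, and anything in the hole raises. For linear layers the mapped length returned by `mapping` must equal the chunk length, which is the invariant the later `SegmentedLayer.mapping` override re-imposes on top of the non-linear base implementation.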
--- volatility3/framework/interfaces/layers.py | 3 +-- volatility3/framework/layers/qemu.py | 9 +++------ volatility3/framework/layers/segmented.py | 5 +++-- 3 files changed, 7 insertions(+), 10 deletions(-) diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py index 79b8902e34..28c452f900 100644 --- a/volatility3/framework/interfaces/layers.py +++ b/volatility3/framework/interfaces/layers.py @@ -441,8 +441,7 @@ def read(self, offset: int, length: int, pad: bool = False) -> bytes: unprocessed_data = self._context.layers.read(layer, mapped_offset, mapped_length, pad) processed_data = self._decode_data(unprocessed_data, mapped_offset, layer_offset, sublength) if len(processed_data) != sublength: - raise ValueError( - f"ProcessedData length {len(processed_data)} does not match expected length of chunk {sublength}") + raise ValueError("ProcessedData length does not match expected length of chunk") output += processed_data current_offset += sublength return output + (b"\x00" * (length - len(output))) diff --git a/volatility3/framework/layers/qemu.py b/volatility3/framework/layers/qemu.py index b56dbe2724..ff0644dbc1 100644 --- a/volatility3/framework/layers/qemu.py +++ b/volatility3/framework/layers/qemu.py @@ -1,7 +1,6 @@ # This file is Copyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # -import bisect import functools import json import math @@ -212,11 +211,9 @@ def extract_data(self, index, name, version_id): return index def _decode_data(self, data: bytes, mapped_offset: int, offset: int, output_length: int) -> bytes: - start_offset, _, _, _ = self._segments[bisect.bisect_right(self._segments, (offset, 0xffffffffffffff,)) - 1] - if offset in self._compressed: - data = (data * 0x1000) - result = data[offset - start_offset:output_length + offset - start_offset] - return result + if mapped_offset in 
self._compressed: + return (data * 0x1000)[:output_length] + return data @functools.lru_cache(maxsize = 512) def read(self, offset: int, length: int, pad: bool = False) -> bytes: diff --git a/volatility3/framework/layers/segmented.py b/volatility3/framework/layers/segmented.py index e4a58e9831..80c89723ac 100644 --- a/volatility3/framework/layers/segmented.py +++ b/volatility3/framework/layers/segmented.py @@ -35,7 +35,7 @@ def __init__(self, def _load_segments(self) -> None: """Populates the _segments variable. - Segments must be (address, mapped address, length, mapped_length) and must be + Segments must be (address, mapped address, length) and must be sorted by address when this method exits """ @@ -85,6 +85,7 @@ def mapping(self, if current_offset > logical_offset: difference = current_offset - logical_offset logical_offset += difference + mapped_offset += difference size -= difference except exceptions.InvalidAddressException: if not ignore_errors: @@ -102,7 +103,7 @@ def mapping(self, return # Crop it to the amount we need left chunk_size = min(size, length + offset - logical_offset) - yield logical_offset, chunk_size, mapped_offset, mapped_size, self._base_layer + yield logical_offset, chunk_size, mapped_offset, chunk_size, self._base_layer current_offset += chunk_size # Terminate if we've gone (or reached) our required limit if current_offset >= offset + length: From f8ce367fd5d6861114a2ec445037d0a9423f05ac Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 10:02:54 +0100 Subject: [PATCH 259/294] Layers: Fix segmentation for non-linear and linear --- volatility3/framework/layers/qemu.py | 16 +++++++++++++--- volatility3/framework/layers/segmented.py | 21 +++++++++++++++++---- 2 files changed, 30 insertions(+), 7 deletions(-) diff --git a/volatility3/framework/layers/qemu.py b/volatility3/framework/layers/qemu.py index ff0644dbc1..4ce17bb634 100644 --- a/volatility3/framework/layers/qemu.py +++ b/volatility3/framework/layers/qemu.py @@ -1,6 +1,7 
@@ # This file is Copyright 2020 Volatility Foundation and licensed under the Volatility Software License 1.0 # which is available at https://www.volatilityfoundation.org/license/vsl-v1.0 # +import bisect import functools import json import math @@ -211,9 +212,18 @@ def extract_data(self, index, name, version_id): return index def _decode_data(self, data: bytes, mapped_offset: int, offset: int, output_length: int) -> bytes: - if mapped_offset in self._compressed: - return (data * 0x1000)[:output_length] - return data + """Takes the full segment from the base_layer that the data occurs in, checks whether it's compressed + (by locating it in the segment list and verifying if that address is compressed), then reading/expanding the + data, and finally cutting it to the right size. Offset may be the address requested rather than the location + of the starting data. It is the responsibility of the layer to turn the provided data chunk into the right + portion of data necessary. + """ + start_offset, _, start_mapped_offset, _ = self._segments[ + bisect.bisect_right(self._segments, (offset, 0xffffffffffffff,)) - 1] + if start_mapped_offset in self._compressed: + data = (data * 0x1000) + result = data[offset - start_offset:output_length + offset - start_offset] + return result @functools.lru_cache(maxsize = 512) def read(self, offset: int, length: int, pad: bool = False) -> bytes: diff --git a/volatility3/framework/layers/segmented.py b/volatility3/framework/layers/segmented.py index 80c89723ac..05fc01b977 100644 --- a/volatility3/framework/layers/segmented.py +++ b/volatility3/framework/layers/segmented.py @@ -35,7 +35,7 @@ def __init__(self, def _load_segments(self) -> None: """Populates the _segments variable. 
- Segments must be (address, mapped address, length) and must be + Segments must be (address, mapped address, length, mapped_length) and must be sorted by address when this method exits """ @@ -69,6 +69,10 @@ def _find_segment(self, offset: int, next: bool = False) -> Tuple[int, int, int, return self._segments[i] raise exceptions.InvalidAddressException(self.name, offset, f"Invalid address at {offset:0x}") + # Determines whether larger segments are in use and the offsets within them should be tracked linearly + # When no decoding of the data occurs, this should be set to true + _track_offset = False + def mapping(self, offset: int, length: int, @@ -85,7 +89,8 @@ def mapping(self, if current_offset > logical_offset: difference = current_offset - logical_offset logical_offset += difference - mapped_offset += difference + if self._track_offset: + mapped_offset += difference size -= difference except exceptions.InvalidAddressException: if not ignore_errors: @@ -103,7 +108,7 @@ def mapping(self, return # Crop it to the amount we need left chunk_size = min(size, length + offset - logical_offset) - yield logical_offset, chunk_size, mapped_offset, chunk_size, self._base_layer + yield logical_offset, chunk_size, mapped_offset, mapped_size, self._base_layer current_offset += chunk_size # Terminate if we've gone (or reached) our required limit if current_offset >= offset + length: @@ -139,4 +144,12 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] class SegmentedLayer(NonLinearlySegmentedLayer, linear.LinearlyMappedLayer, metaclass = ABCMeta): - pass + _track_offset = True + + def mapping(self, + offset: int, + length: int, + ignore_errors: bool = False) -> Iterable[Tuple[int, int, int, int, str]]: + # Linear mappings must return the same length of segment as that requested + for offset, length, mapped_offset, mapped_length, layer in super().mapping(offset, length, ignore_errors): + yield offset, length, mapped_offset, length, layer From 
ca39b20c6dcf750cf8e8e4d4ab58ae19f346ba9b Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 01:35:52 +0100 Subject: [PATCH 260/294] Layers: Implement AVML file support --- volatility3/framework/layers/avml.py | 282 +++++++++++++++++++++++++++ 1 file changed, 282 insertions(+) create mode 100644 volatility3/framework/layers/avml.py diff --git a/volatility3/framework/layers/avml.py b/volatility3/framework/layers/avml.py new file mode 100644 index 0000000000..aca3b037fc --- /dev/null +++ b/volatility3/framework/layers/avml.py @@ -0,0 +1,282 @@ +"""Functions that read AVML files. + +The user of the file doesn't have to worry about the compression, +but random access is not allowed.""" +import io +import struct +from typing import Tuple, List, Optional + +from volatility3.framework import exceptions, interfaces, constants +from volatility3.framework.layers import segmented + +try: + import snappy + + HAS_SNAPPY = True +except ImportError: + HAS_SNAPPY = False + + +class SnappyFraming: + def __init__(self): + pass + + # crc32-c (Castagnoli) (crc32c_poly=0x1EDC6F41) + crc32c_table = [ + 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, + 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB, + 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, + 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, + 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, + 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384, + 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, + 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B, + 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, + 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, + 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, + 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA, + 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, + 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A, + 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, + 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, + 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, + 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 
0x95B17957, + 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, + 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198, + 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, + 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38, + 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, + 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7, + 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, + 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789, + 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, + 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, + 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, + 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6, + 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, + 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829, + 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, + 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, + 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, + 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C, + 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, + 0x55326B08, 0xA759E80B, 0xB4091BFF, 0x466298FC, + 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, + 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, + 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, + 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D, + 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, + 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982, + 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, + 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, + 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, + 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED, + 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, + 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F, + 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, + 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, + 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, + 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540, + 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, + 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F, + 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, + 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 
0x37FACCF1, + 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, + 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E, + 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, + 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E, + 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, + 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351, + ] + + def masked_crc32c(self, buf: bytes) -> int: + crc = 0xffffffff + for c in buf: + crc = (crc >> 8) ^ self.crc32c_table[(crc ^ c) & 0xFF] + crc = (~crc) & 0xffffffff + # reverse endianness + crc = struct.unpack(">I", struct.pack("> 15) | (crc << 17)) + 0xa282ead8) & 0xffffffff + + def decompress(self, source_data: bytes) -> io.BytesIO: + offset = 0 + data = io.BytesIO() + while offset < len(source_data): + header_structure = " Tuple[bytes, int]: + """Decompresses data up to the size limit provided + + Args: + data: Input data + limit: The maximum size of decompressed data + + Returns: + Tuple of decompressed data, and number of compressed bytes consumed + """ + decompressed = bytearray() + offset = 0 + crc_len = 4 + chunk_header_struct = '> 8 + if chunk_type == 0xff: + if data[offset + chunk_header_len:offset + chunk_header_len + chunk_size] != b'sNaPpY': + raise ValueError(f"Snappy header missing at offset: {offset}") + elif chunk_type in [0x00, 0x01]: + # CRC + (Un)compressed data + start = offset + chunk_header_len + chunk_crc = data[start: start + crc_len] + chunk_data = data[start + crc_len: start + chunk_size] + if chunk_type == 0x00: + # Compressed data + chunk_data = snappy.decompress(chunk_data) + # TODO: Verify CRC + decompressed.extend(chunk_data) + elif chunk_type in range(0x2, 0x80): + # Unskippable + raise ValueError(f"Unskippable chunk of type {chunk_type} found: {offset}") + offset += chunk_header_len + chunk_size + return decompressed, offset + + +class AVMLLayer(segmented.NonLinearlySegmentedLayer): + """A Lime format TranslationLayer. 
+ + Lime is generally used to store physical memory images where there + are large holes in the physical layer + """ + + def __init__(self, *args, **kwargs): + self._compressed = {} + super().__init__(*args, **kwargs) + + @classmethod + def _check_header(cls, layer: interfaces.layers.DataLayerInterface): + header_structure = " None: + base_layer = self.context.layers[self._base_layer] + offset = base_layer.minimum_address + while offset + 4 < base_layer.maximum_address: + avml_header_structure = " Tuple[ + List[Tuple[int, int, int, int, bool]], int]: + """ + Reads a framed-format snappy stream + + Args: + data: The stream to read + expected_length: How big the decompressed stream is expected to be (termination limit) + + Returns: + (offset, mapped_offset, length, mapped_length, compressed) relative to the data chunk (ie, not relative to the file start) + """ + segments = [] + decompressed_len = 0 + offset = 0 + crc_len = 4 + frame_header_struct = '> 8 + if frame_type == 0xff: + if data[offset + frame_header_len:offset + frame_header_len + frame_size] != b'sNaPpY': + raise ValueError(f"Snappy header missing at offset: {offset}") + elif frame_type in [0x00, 0x01]: + # CRC + (Un)compressed data + mapped_start = offset + frame_header_len + frame_crc = data[mapped_start: mapped_start + crc_len] + frame_data = data[mapped_start + crc_len: mapped_start + frame_size] + if frame_type == 0x00: + # Compressed data + frame_data = snappy.decompress(frame_data) + # TODO: Verify CRC + segments.append((decompressed_len, mapped_start + crc_len, len(frame_data), frame_size - crc_len, + frame_type == 0x00)) + decompressed_len += len(frame_data) + elif frame_type in range(0x2, 0x80): + # Unskippable + raise exceptions.LayerException(f"Unskippable chunk of type {frame_type} found: {offset}") + offset += frame_header_len + frame_size + return segments, offset + + def _decode_data(self, data: bytes, mapped_offset: int, offset: int, output_length: int) -> bytes: + start_offset, _, _, _ = 
self._find_segment(offset) + if self._compressed[mapped_offset]: + decoded_data = snappy.decompress(data) + else: + decoded_data = data + decoded_data = decoded_data[offset - start_offset:] + decoded_data = decoded_data[:output_length] + return decoded_data + + +class AVMLStacker(interfaces.automagic.StackerLayerInterface): + stack_order = 10 + + @classmethod + def stack(cls, + context: interfaces.context.ContextInterface, + layer_name: str, + progress_callback: constants.ProgressCallback = None) -> Optional[interfaces.layers.DataLayerInterface]: + try: + AVMLLayer._check_header(context.layers[layer_name]) + except exceptions.LayerException: + return None + new_name = context.layers.free_layer_name("AVMLLayer") + context.config[interfaces.configuration.path_join(new_name, "base_layer")] = layer_name + return AVMLLayer(context, new_name, new_name) + + +if __name__ == '__main__': + import sys + + source_data = open(sys.argv[1], 'br').read() + + sf = SnappyFraming() + with open('outputfile', 'wb') as fp: + fp.write(sf.decompress(source_data).read()) From 16f5abcdf5de5dc32a9d52139936cfaa4e8c3c09 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 01:40:38 +0100 Subject: [PATCH 261/294] Layers: Improve response if snappy not installed --- volatility3/framework/layers/avml.py | 156 +-------------------------- 1 file changed, 4 insertions(+), 152 deletions(-) diff --git a/volatility3/framework/layers/avml.py b/volatility3/framework/layers/avml.py index aca3b037fc..5c1bfe1bb5 100644 --- a/volatility3/framework/layers/avml.py +++ b/volatility3/framework/layers/avml.py @@ -2,13 +2,14 @@ The user of the file doesn't have to worry about the compression, but random access is not allowed.""" -import io import struct from typing import Tuple, List, Optional from volatility3.framework import exceptions, interfaces, constants from volatility3.framework.layers import segmented +vollog = logging.getLogger(__name__) + try: import snappy @@ -17,147 +18,6 @@ HAS_SNAPPY = 
False -class SnappyFraming: - def __init__(self): - pass - - # crc32-c (Castagnoli) (crc32c_poly=0x1EDC6F41) - crc32c_table = [ - 0x00000000, 0xF26B8303, 0xE13B70F7, 0x1350F3F4, - 0xC79A971F, 0x35F1141C, 0x26A1E7E8, 0xD4CA64EB, - 0x8AD958CF, 0x78B2DBCC, 0x6BE22838, 0x9989AB3B, - 0x4D43CFD0, 0xBF284CD3, 0xAC78BF27, 0x5E133C24, - 0x105EC76F, 0xE235446C, 0xF165B798, 0x030E349B, - 0xD7C45070, 0x25AFD373, 0x36FF2087, 0xC494A384, - 0x9A879FA0, 0x68EC1CA3, 0x7BBCEF57, 0x89D76C54, - 0x5D1D08BF, 0xAF768BBC, 0xBC267848, 0x4E4DFB4B, - 0x20BD8EDE, 0xD2D60DDD, 0xC186FE29, 0x33ED7D2A, - 0xE72719C1, 0x154C9AC2, 0x061C6936, 0xF477EA35, - 0xAA64D611, 0x580F5512, 0x4B5FA6E6, 0xB93425E5, - 0x6DFE410E, 0x9F95C20D, 0x8CC531F9, 0x7EAEB2FA, - 0x30E349B1, 0xC288CAB2, 0xD1D83946, 0x23B3BA45, - 0xF779DEAE, 0x05125DAD, 0x1642AE59, 0xE4292D5A, - 0xBA3A117E, 0x4851927D, 0x5B016189, 0xA96AE28A, - 0x7DA08661, 0x8FCB0562, 0x9C9BF696, 0x6EF07595, - 0x417B1DBC, 0xB3109EBF, 0xA0406D4B, 0x522BEE48, - 0x86E18AA3, 0x748A09A0, 0x67DAFA54, 0x95B17957, - 0xCBA24573, 0x39C9C670, 0x2A993584, 0xD8F2B687, - 0x0C38D26C, 0xFE53516F, 0xED03A29B, 0x1F682198, - 0x5125DAD3, 0xA34E59D0, 0xB01EAA24, 0x42752927, - 0x96BF4DCC, 0x64D4CECF, 0x77843D3B, 0x85EFBE38, - 0xDBFC821C, 0x2997011F, 0x3AC7F2EB, 0xC8AC71E8, - 0x1C661503, 0xEE0D9600, 0xFD5D65F4, 0x0F36E6F7, - 0x61C69362, 0x93AD1061, 0x80FDE395, 0x72966096, - 0xA65C047D, 0x5437877E, 0x4767748A, 0xB50CF789, - 0xEB1FCBAD, 0x197448AE, 0x0A24BB5A, 0xF84F3859, - 0x2C855CB2, 0xDEEEDFB1, 0xCDBE2C45, 0x3FD5AF46, - 0x7198540D, 0x83F3D70E, 0x90A324FA, 0x62C8A7F9, - 0xB602C312, 0x44694011, 0x5739B3E5, 0xA55230E6, - 0xFB410CC2, 0x092A8FC1, 0x1A7A7C35, 0xE811FF36, - 0x3CDB9BDD, 0xCEB018DE, 0xDDE0EB2A, 0x2F8B6829, - 0x82F63B78, 0x709DB87B, 0x63CD4B8F, 0x91A6C88C, - 0x456CAC67, 0xB7072F64, 0xA457DC90, 0x563C5F93, - 0x082F63B7, 0xFA44E0B4, 0xE9141340, 0x1B7F9043, - 0xCFB5F4A8, 0x3DDE77AB, 0x2E8E845F, 0xDCE5075C, - 0x92A8FC17, 0x60C37F14, 0x73938CE0, 0x81F80FE3, - 0x55326B08, 
0xA759E80B, 0xB4091BFF, 0x466298FC, - 0x1871A4D8, 0xEA1A27DB, 0xF94AD42F, 0x0B21572C, - 0xDFEB33C7, 0x2D80B0C4, 0x3ED04330, 0xCCBBC033, - 0xA24BB5A6, 0x502036A5, 0x4370C551, 0xB11B4652, - 0x65D122B9, 0x97BAA1BA, 0x84EA524E, 0x7681D14D, - 0x2892ED69, 0xDAF96E6A, 0xC9A99D9E, 0x3BC21E9D, - 0xEF087A76, 0x1D63F975, 0x0E330A81, 0xFC588982, - 0xB21572C9, 0x407EF1CA, 0x532E023E, 0xA145813D, - 0x758FE5D6, 0x87E466D5, 0x94B49521, 0x66DF1622, - 0x38CC2A06, 0xCAA7A905, 0xD9F75AF1, 0x2B9CD9F2, - 0xFF56BD19, 0x0D3D3E1A, 0x1E6DCDEE, 0xEC064EED, - 0xC38D26C4, 0x31E6A5C7, 0x22B65633, 0xD0DDD530, - 0x0417B1DB, 0xF67C32D8, 0xE52CC12C, 0x1747422F, - 0x49547E0B, 0xBB3FFD08, 0xA86F0EFC, 0x5A048DFF, - 0x8ECEE914, 0x7CA56A17, 0x6FF599E3, 0x9D9E1AE0, - 0xD3D3E1AB, 0x21B862A8, 0x32E8915C, 0xC083125F, - 0x144976B4, 0xE622F5B7, 0xF5720643, 0x07198540, - 0x590AB964, 0xAB613A67, 0xB831C993, 0x4A5A4A90, - 0x9E902E7B, 0x6CFBAD78, 0x7FAB5E8C, 0x8DC0DD8F, - 0xE330A81A, 0x115B2B19, 0x020BD8ED, 0xF0605BEE, - 0x24AA3F05, 0xD6C1BC06, 0xC5914FF2, 0x37FACCF1, - 0x69E9F0D5, 0x9B8273D6, 0x88D28022, 0x7AB90321, - 0xAE7367CA, 0x5C18E4C9, 0x4F48173D, 0xBD23943E, - 0xF36E6F75, 0x0105EC76, 0x12551F82, 0xE03E9C81, - 0x34F4F86A, 0xC69F7B69, 0xD5CF889D, 0x27A40B9E, - 0x79B737BA, 0x8BDCB4B9, 0x988C474D, 0x6AE7C44E, - 0xBE2DA0A5, 0x4C4623A6, 0x5F16D052, 0xAD7D5351, - ] - - def masked_crc32c(self, buf: bytes) -> int: - crc = 0xffffffff - for c in buf: - crc = (crc >> 8) ^ self.crc32c_table[(crc ^ c) & 0xFF] - crc = (~crc) & 0xffffffff - # reverse endianness - crc = struct.unpack(">I", struct.pack("> 15) | (crc << 17)) + 0xa282ead8) & 0xffffffff - - def decompress(self, source_data: bytes) -> io.BytesIO: - offset = 0 - data = io.BytesIO() - while offset < len(source_data): - header_structure = " Tuple[bytes, int]: - """Decompresses data up to the size limit provided - - Args: - data: Input data - limit: The maximum size of decompressed data - - Returns: - Tuple of decompressed data, and number of compressed bytes 
consumed - """ - decompressed = bytearray() - offset = 0 - crc_len = 4 - chunk_header_struct = '> 8 - if chunk_type == 0xff: - if data[offset + chunk_header_len:offset + chunk_header_len + chunk_size] != b'sNaPpY': - raise ValueError(f"Snappy header missing at offset: {offset}") - elif chunk_type in [0x00, 0x01]: - # CRC + (Un)compressed data - start = offset + chunk_header_len - chunk_crc = data[start: start + crc_len] - chunk_data = data[start + crc_len: start + chunk_size] - if chunk_type == 0x00: - # Compressed data - chunk_data = snappy.decompress(chunk_data) - # TODO: Verify CRC - decompressed.extend(chunk_data) - elif chunk_type in range(0x2, 0x80): - # Unskippable - raise ValueError(f"Unskippable chunk of type {chunk_type} found: {offset}") - offset += chunk_header_len + chunk_size - return decompressed, offset - - class AVMLLayer(segmented.NonLinearlySegmentedLayer): """A Lime format TranslationLayer. @@ -176,6 +36,8 @@ def _check_header(cls, layer: interfaces.layers.DataLayerInterface): layer.read(layer.minimum_address, struct.calcsize(header_structure))) if magic not in [0x4c4d5641] or version != 2: raise exceptions.LayerException("File not completely in AVML format") + if not HAS_SNAPPY: + vollog.warning('AVML file detected, but snappy python library not installed') def _load_segments(self) -> None: base_layer = self.context.layers[self._base_layer] @@ -270,13 +132,3 @@ def stack(cls, new_name = context.layers.free_layer_name("AVMLLayer") context.config[interfaces.configuration.path_join(new_name, "base_layer")] = layer_name return AVMLLayer(context, new_name, new_name) - - -if __name__ == '__main__': - import sys - - source_data = open(sys.argv[1], 'br').read() - - sf = SnappyFraming() - with open('outputfile', 'wb') as fp: - fp.write(sf.decompress(source_data).read()) From 151170b3dfa4ea939834ec1d2808308dae38be04 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 01:42:31 +0100 Subject: [PATCH 262/294] Core: Add dependency for AVML --- 
setup.py | 3 ++- volatility3/framework/layers/avml.py | 2 ++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/setup.py b/setup.py index 62f5bd2d54..0110492ae3 100644 --- a/setup.py +++ b/setup.py @@ -6,7 +6,7 @@ from volatility3.framework import constants -with open("README.md", "r", encoding="utf-8") as fh: +with open("README.md", "r", encoding = "utf-8") as fh: long_description = fh.read() setuptools.setup(name = "volatility3", @@ -45,4 +45,5 @@ 'crypto': ["pycryptodome>=3"], 'disasm': ["capstone;platform_system=='Linux'", "capstone-windows;platform_system=='Windows'"], 'doc': ["sphinx>=1.8.2", "sphinx_autodoc_typehints>=1.4.0", "sphinx-rtd-theme>=0.4.3"], + 'avml': ["snappy==0.6.0"], }) diff --git a/volatility3/framework/layers/avml.py b/volatility3/framework/layers/avml.py index 5c1bfe1bb5..83c43fcc9e 100644 --- a/volatility3/framework/layers/avml.py +++ b/volatility3/framework/layers/avml.py @@ -2,6 +2,7 @@ The user of the file doesn't have to worry about the compression, but random access is not allowed.""" +import logging import struct from typing import Tuple, List, Optional @@ -38,6 +39,7 @@ def _check_header(cls, layer: interfaces.layers.DataLayerInterface): raise exceptions.LayerException("File not completely in AVML format") if not HAS_SNAPPY: vollog.warning('AVML file detected, but snappy python library not installed') + raise exceptions.LayerException("AVML format dependencies not satisfied (snappy)") def _load_segments(self) -> None: base_layer = self.context.layers[self._base_layer] From 5fb9444d950666c089daa4393cc1f4a4046a49a5 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 01:49:38 +0100 Subject: [PATCH 263/294] Core: Fix AVML dependency --- setup.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/setup.py b/setup.py index 0110492ae3..8e9f882172 100644 --- a/setup.py +++ b/setup.py @@ -45,5 +45,5 @@ 'crypto': ["pycryptodome>=3"], 'disasm': ["capstone;platform_system=='Linux'", 
"capstone-windows;platform_system=='Windows'"], 'doc': ["sphinx>=1.8.2", "sphinx_autodoc_typehints>=1.4.0", "sphinx-rtd-theme>=0.4.3"], - 'avml': ["snappy==0.6.0"], + 'avml': ["python-snappy==0.6.0"], }) From 88ee6bc7fc1adf65feaf9e0572e632d60c0a9ad8 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 26 Sep 2021 12:50:07 +0100 Subject: [PATCH 264/294] Windows: Add better errors for undownloadable pdb --- volatility3/framework/symbols/windows/pdbconv.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index b48e8268cc..f0b97f9eed 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -988,6 +988,8 @@ def __call__(self, progress: Union[int, float], description: str = None): filename = None if args.guid is not None and args.pattern is not None: filename = PdbRetreiver().retreive_pdb(guid = args.guid, file_name = args.pattern, progress_callback = pg_cb) + if filename is None: + parser.error("PDB file could not be retrieved from the internet") if parse.urlparse(filename, 'file').scheme == 'file': delfile = True elif args.file: From 3c578aa605e9f7aa18a4fbc046acc9d11e49e3d0 Mon Sep 17 00:00:00 2001 From: x Date: Thu, 30 Sep 2021 15:15:09 +0000 Subject: [PATCH 265/294] Fix bug in skeleton key check --- volatility3/framework/plugins/windows/skeleton_key_check.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py index a1e0bbbd3f..90379be869 100644 --- a/volatility3/framework/plugins/windows/skeleton_key_check.py +++ b/volatility3/framework/plugins/windows/skeleton_key_check.py @@ -540,8 +540,6 @@ def _generator(self, procs): cryptdll_base, cryptdll_size) - csystems = None - # if we can't find cSystems through the PDB then # we fall back to export analysis and scanning # we keep the address 
of the rc4 functions from the PDB From 8ec52c7178cf55c4ec2df8ed42db39fac05d0fc2 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Wed, 6 Oct 2021 12:47:04 -0500 Subject: [PATCH 266/294] refs #566 refactor dependencies --- README.md | 21 ++++++++++++++++----- doc/requirements.txt | 4 ++++ requirements-minimal.txt | 2 ++ requirements.txt | 24 ++++++++++++++++++++++++ setup.py | 23 ++++++++++++----------- 5 files changed, 58 insertions(+), 16 deletions(-) create mode 100644 doc/requirements.txt create mode 100644 requirements-minimal.txt create mode 100644 requirements.txt diff --git a/README.md b/README.md index 373d85d418..2e01b3b036 100644 --- a/README.md +++ b/README.md @@ -18,13 +18,24 @@ the Volatility Software License (VSL). See the [LICENSE](LICENSE.txt) file for m ## Requirements -- Python 3.6.0 or later. -- Pefile 2017.8.1 or later. +Volatility 3 requires Python 3.6.0 or later. To install the most minimal set of dependencies (some plugins will not work) use a command such as: -## Optional Dependencies +```shell +pip3 install requirements-minimal.txt +``` + +Alternately, the minimal packages can be installed automatically when Volatility 3 is installed. However, as noted in the Quick Start section below, Volatility 3 does not *need* to be installed prior to using it. -- yara-python 3.8.0 or later. -- capstone 3.0.0 or later. +```shell +python3 setup.py build +python3 setup.py install +``` + +To enable the full range of Volatility 3 functionality, use a command like the one below. For partial functionality, comment out any unnecessary packages in [requirements.txt](requirements.txt) prior to running the command. + +```shell +pip3 install requirements.txt +``` ## Downloading Volatility diff --git a/doc/requirements.txt b/doc/requirements.txt new file mode 100644 index 0000000000..d646d22ce4 --- /dev/null +++ b/doc/requirements.txt @@ -0,0 +1,4 @@ +# These packages are required for building the documentation. 
+sphinx>=1.8.2 +sphinx_autodoc_typehints>=1.4.0 +sphinx-rtd-theme>=0.4.3 \ No newline at end of file diff --git a/requirements-minimal.txt b/requirements-minimal.txt new file mode 100644 index 0000000000..31ac028148 --- /dev/null +++ b/requirements-minimal.txt @@ -0,0 +1,2 @@ +# These packages are required for core functionality. +pefile>=2017.8.1 #foo \ No newline at end of file diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000000..f9ec9d3d1f --- /dev/null +++ b/requirements.txt @@ -0,0 +1,24 @@ +# The following packages are required for core functionality. +pefile>=2017.8.1 + +# The following packages are optional. +# If certain packages are not necessary, place a comment (#) at the start of the line. + +# This is required for the yara plugins +yara-python>=3.8.0 + +# This is required for several plugins that perform malware analysis and disassemble code. +# It can also improve accuracy of Windows 8 and later memory samples. +capstone>=3.0.5 + +# This is required by plugins that decrypt passwords, password hashes, etc. +pycryptodome + +# This can improve error messages regarding improperly configured ISF files. +jsonschema>=2.3.0 + +# This is required for memory acquisition via leech. +leechcorepyc>=2.4.0 + +# This is required for analyzing Linux samples acquired with AVML. 
+python-snappy==0.6.0 \ No newline at end of file diff --git a/setup.py b/setup.py index 8e9f882172..f6bb687f24 100644 --- a/setup.py +++ b/setup.py @@ -9,6 +9,16 @@ with open("README.md", "r", encoding = "utf-8") as fh: long_description = fh.read() +def get_install_requires(): + requirements = [] + with open("requirements-minimal.txt", "r", encoding="utf-8") as fh: + for line in fh.readlines(): + stripped_line = line.strip() + if stripped_line == "" or stripped_line.startswith("#"): + continue + requirements.append(stripped_line) + return requirements + setuptools.setup(name = "volatility3", description = "Memory forensics framework", version = constants.PACKAGE_VERSION, @@ -24,7 +34,7 @@ "Documentation": "https://volatility3.readthedocs.io/", "Source Code": "https://github.com/volatilityfoundation/volatility3", }, - python_requires = '>=3.5.3', + python_requires = '>=3.6.0', include_package_data = True, exclude_package_data = { '': ['development', 'development.*'], @@ -37,13 +47,4 @@ 'volshell = volatility3.cli.volshell:main', ], }, - install_requires = ["pefile"], - extras_require = { - 'leechcorepyc': ["leechcorepyc>=2.4.0"], - 'jsonschema': ["jsonschema>=2.3.0"], - 'yara': ["yara-python>=3.8.0"], - 'crypto': ["pycryptodome>=3"], - 'disasm': ["capstone;platform_system=='Linux'", "capstone-windows;platform_system=='Windows'"], - 'doc': ["sphinx>=1.8.2", "sphinx_autodoc_typehints>=1.4.0", "sphinx-rtd-theme>=0.4.3"], - 'avml': ["python-snappy==0.6.0"], - }) + install_requires = get_install_requires()) From f92d83f61974d2cd38432ac3673ef977bf6c7580 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Wed, 6 Oct 2021 12:48:52 -0500 Subject: [PATCH 267/294] add the missing -r --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 2e01b3b036..c51bd060cb 100644 --- a/README.md +++ b/README.md @@ -21,7 +21,7 @@ the Volatility Software License (VSL). 
See the [LICENSE](LICENSE.txt) file for m Volatility 3 requires Python 3.6.0 or later. To install the most minimal set of dependencies (some plugins will not work) use a command such as: ```shell -pip3 install requirements-minimal.txt +pip3 install -r requirements-minimal.txt ``` Alternately, the minimal packages can be installed automatically when Volatility 3 is installed. However, as noted in the Quick Start section below, Volatility 3 does not *need* to be installed prior to using it. @@ -34,7 +34,7 @@ python3 setup.py install To enable the full range of Volatility 3 functionality, use a command like the one below. For partial functionality, comment out any unnecessary packages in [requirements.txt](requirements.txt) prior to running the command. ```shell -pip3 install requirements.txt +pip3 install -r requirements.txt ``` ## Downloading Volatility From 2d2a45751707e0a36b16d15afed676c7574f1329 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Sat, 10 Jul 2021 13:01:59 -0500 Subject: [PATCH 268/294] fix pool scanners on some Windows versions This fixes the size of _POOL_HEADER on 32-bit versions of Windows (was 16, should be 8). Also, get_object() is not doing enough validation and it's returning too early. I turned that into a generator so it yields both valid and invalid objects, since the caller does validation anyway. Another way of fixing this is to pass the type_map and cookie from the caller into get_object() so it can do the proper validation and only return valid objects. 
--- volatility3/framework/symbols/windows/extensions/pool.py | 7 +++---- volatility3/framework/symbols/windows/poolheader-x86.json | 2 +- volatility3/plugins/windows/poolscanner.py | 2 +- 3 files changed, 5 insertions(+), 6 deletions(-) diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index 5353fc30b1..79d03fcc24 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -59,7 +59,7 @@ def get_object(self, layer_name = self.vol.layer_name, offset = self.vol.offset + pool_header_size, native_layer_name = native_layer_name) - return mem_object + yield mem_object # otherwise we have an executive object in the pool else: @@ -145,7 +145,7 @@ def get_object(self, native_layer_name = native_layer_name) if mem_object.is_valid(): - return mem_object + yield mem_object except (TypeError, exceptions.InvalidAddressException): pass @@ -168,8 +168,7 @@ def get_object(self, if mem_object.is_valid(): return mem_object except (TypeError, exceptions.InvalidAddressException): - return None - return None + pass @classmethod @functools.lru_cache() diff --git a/volatility3/framework/symbols/windows/poolheader-x86.json b/volatility3/framework/symbols/windows/poolheader-x86.json index d117a8b2c8..72f2c6cc1c 100644 --- a/volatility3/framework/symbols/windows/poolheader-x86.json +++ b/volatility3/framework/symbols/windows/poolheader-x86.json @@ -55,7 +55,7 @@ } }, "kind": "struct", - "size": 16 + "size": 8 } }, "symbols": { diff --git a/volatility3/plugins/windows/poolscanner.py b/volatility3/plugins/windows/poolscanner.py index 951edac664..4885c88d77 100644 --- a/volatility3/plugins/windows/poolscanner.py +++ b/volatility3/plugins/windows/poolscanner.py @@ -121,7 +121,7 @@ class PoolScanner(plugins.PluginInterface): def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ 
requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'handles', plugin = handles.Handles, version = (1, 0, 0)), ] From ea335b03aeb28c20337aa62c6188cb7ad958cdd4 Mon Sep 17 00:00:00 2001 From: iMHLv2 Date: Mon, 12 Jul 2021 08:36:38 -0500 Subject: [PATCH 269/294] change return to yield after get_object() turned into a generator in 2da510d8 --- volatility3/framework/symbols/windows/extensions/pool.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/symbols/windows/extensions/pool.py b/volatility3/framework/symbols/windows/extensions/pool.py index 79d03fcc24..f75c6c417c 100644 --- a/volatility3/framework/symbols/windows/extensions/pool.py +++ b/volatility3/framework/symbols/windows/extensions/pool.py @@ -166,7 +166,7 @@ def get_object(self, try: if mem_object.is_valid(): - return mem_object + yield mem_object except (TypeError, exceptions.InvalidAddressException): pass From 8f1c5ee55c040e7c748c95ce8fff6b19992c954c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 11 Aug 2021 21:16:51 +0100 Subject: [PATCH 270/294] Core: Bump API to 2.0.0 and remove symbol_shift --- volatility3/cli/__init__.py | 2 +- volatility3/cli/volshell/__init__.py | 2 +- volatility3/cli/volshell/generic.py | 2 +- volatility3/framework/automagic/module.py | 5 --- .../framework/automagic/symbol_finder.py | 21 +-------- volatility3/framework/constants/__init__.py | 7 ++- volatility3/framework/interfaces/layers.py | 2 +- volatility3/framework/interfaces/symbols.py | 8 +--- .../framework/layers/scanners/__init__.py | 6 +-- volatility3/framework/plugins/banners.py | 2 +- volatility3/framework/plugins/configwriter.py | 2 +- .../framework/plugins/frameworkinfo.py | 2 +- volatility3/framework/plugins/isfinfo.py | 2 +- volatility3/framework/plugins/layerwriter.py | 2 +- volatility3/framework/plugins/linux/bash.py | 2 +- 
.../framework/plugins/linux/check_afinfo.py | 2 +- .../framework/plugins/linux/check_creds.py | 2 +- .../framework/plugins/linux/check_idt.py | 2 +- .../framework/plugins/linux/check_modules.py | 2 +- .../framework/plugins/linux/check_syscall.py | 2 +- volatility3/framework/plugins/linux/elfs.py | 2 +- .../plugins/linux/keyboard_notifiers.py | 2 +- volatility3/framework/plugins/linux/kmsg.py | 2 +- volatility3/framework/plugins/linux/lsmod.py | 2 +- volatility3/framework/plugins/linux/lsof.py | 2 +- .../framework/plugins/linux/malfind.py | 2 +- volatility3/framework/plugins/linux/proc.py | 2 +- volatility3/framework/plugins/linux/pslist.py | 2 +- .../framework/plugins/linux/tty_check.py | 2 +- volatility3/framework/plugins/mac/bash.py | 2 +- .../framework/plugins/mac/check_syscall.py | 2 +- .../framework/plugins/mac/check_sysctl.py | 2 +- .../framework/plugins/mac/check_trap_table.py | 2 +- volatility3/framework/plugins/mac/ifconfig.py | 2 +- .../framework/plugins/mac/kauth_listeners.py | 2 +- .../framework/plugins/mac/kauth_scopes.py | 2 +- volatility3/framework/plugins/mac/kevents.py | 2 +- .../framework/plugins/mac/list_files.py | 2 +- volatility3/framework/plugins/mac/lsmod.py | 2 +- volatility3/framework/plugins/mac/lsof.py | 2 +- volatility3/framework/plugins/mac/malfind.py | 2 +- volatility3/framework/plugins/mac/mount.py | 2 +- volatility3/framework/plugins/mac/netstat.py | 2 +- .../framework/plugins/mac/proc_maps.py | 2 +- volatility3/framework/plugins/mac/psaux.py | 2 +- volatility3/framework/plugins/mac/pslist.py | 2 +- volatility3/framework/plugins/mac/pstree.py | 2 +- .../framework/plugins/mac/socket_filters.py | 2 +- volatility3/framework/plugins/mac/timers.py | 2 +- .../framework/plugins/mac/trustedbsd.py | 2 +- .../framework/plugins/mac/vfsevents.py | 2 +- volatility3/framework/plugins/timeliner.py | 2 +- .../framework/plugins/windows/bigpools.py | 5 +-- .../framework/plugins/windows/cachedump.py | 7 ++- .../framework/plugins/windows/callbacks.py | 
4 +- .../framework/plugins/windows/cmdline.py | 7 +-- .../framework/plugins/windows/dlllist.py | 6 +-- .../framework/plugins/windows/driverirp.py | 2 +- .../framework/plugins/windows/driverscan.py | 4 +- .../framework/plugins/windows/dumpfiles.py | 10 ++--- .../framework/plugins/windows/envars.py | 4 +- .../framework/plugins/windows/filescan.py | 4 +- .../plugins/windows/getservicesids.py | 5 ++- .../framework/plugins/windows/getsids.py | 4 +- .../framework/plugins/windows/handles.py | 5 +-- .../framework/plugins/windows/hashdump.py | 13 +++--- volatility3/framework/plugins/windows/info.py | 4 +- .../framework/plugins/windows/lsadump.py | 8 ++-- .../framework/plugins/windows/malfind.py | 8 ++-- .../framework/plugins/windows/memmap.py | 5 ++- .../framework/plugins/windows/modscan.py | 4 +- .../framework/plugins/windows/modules.py | 4 +- .../framework/plugins/windows/mutantscan.py | 4 +- .../framework/plugins/windows/netscan.py | 4 +- .../framework/plugins/windows/netstat.py | 4 +- .../framework/plugins/windows/privileges.py | 4 +- .../framework/plugins/windows/pslist.py | 2 +- .../framework/plugins/windows/psscan.py | 5 +-- .../framework/plugins/windows/pstree.py | 4 +- .../plugins/windows/registry/hivelist.py | 7 ++- .../plugins/windows/registry/hivescan.py | 7 ++- .../plugins/windows/registry/printkey.py | 12 ++--- .../plugins/windows/registry/userassist.py | 5 +-- volatility3/framework/plugins/windows/ssdt.py | 4 +- .../framework/plugins/windows/strings.py | 7 ++- .../framework/plugins/windows/svcscan.py | 4 +- .../framework/plugins/windows/symlinkscan.py | 4 +- .../framework/plugins/windows/vadinfo.py | 6 +-- .../framework/plugins/windows/vadyarascan.py | 5 +-- .../framework/plugins/windows/verinfo.py | 2 +- .../framework/plugins/windows/virtmap.py | 2 +- volatility3/framework/plugins/yarascan.py | 2 +- volatility3/framework/symbols/intermed.py | 44 ++++--------------- .../framework/symbols/linux/__init__.py | 2 +- volatility3/framework/symbols/mac/__init__.py | 
2 +- .../framework/symbols/windows/pdbutil.py | 2 +- volatility3/plugins/windows/poolscanner.py | 2 +- .../plugins/windows/registry/certificates.py | 2 +- volatility3/plugins/windows/statistics.py | 2 +- 99 files changed, 164 insertions(+), 240 deletions(-) diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 6d75db46c4..19c6e4dddb 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -88,7 +88,7 @@ def run(self): """Executes the command line module, taking the system arguments, determining the plugin to run and then running it.""" - volatility3.framework.require_interface_version(1, 0, 0) + volatility3.framework.require_interface_version(2, 0, 0) renderers = dict([(x.name.lower(), x) for x in framework.class_subclasses(text_renderer.CLIRenderer)]) diff --git a/volatility3/cli/volshell/__init__.py b/volatility3/cli/volshell/__init__.py index bc1219e885..f2375c7740 100644 --- a/volatility3/cli/volshell/__init__.py +++ b/volatility3/cli/volshell/__init__.py @@ -43,7 +43,7 @@ def run(self): determining the plugin to run and then running it.""" sys.stdout.write(f"Volshell (Volatility 3 Framework) {constants.PACKAGE_VERSION}\n") - framework.require_interface_version(1, 0, 0) + framework.require_interface_version(2, 0, 0) parser = argparse.ArgumentParser(prog = self.CLI_NAME, description = "A tool for interactivate forensic analysis of memory images") diff --git a/volatility3/cli/volshell/generic.py b/volatility3/cli/volshell/generic.py index 7252c3b691..8f81a04207 100644 --- a/volatility3/cli/volshell/generic.py +++ b/volatility3/cli/volshell/generic.py @@ -26,7 +26,7 @@ class Volshell(interfaces.plugins.PluginInterface): """Shell environment to directly interact with a memory image.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) diff --git a/volatility3/framework/automagic/module.py 
b/volatility3/framework/automagic/module.py
index 315164ec7d..ecd31495a9 100644
--- a/volatility3/framework/automagic/module.py
+++ b/volatility3/framework/automagic/module.py
@@ -38,11 +38,6 @@ def __call__(self,
                 offset_config_path = interfaces.configuration.path_join(new_config_path, 'offset')
                 offset = context.config[layer_kvo_config_path]
                 context.config[offset_config_path] = offset
-            elif isinstance(requirement.requirements[req], configuration.requirements.SymbolTableRequirement):
-                symbol_shift_config_path = interfaces.configuration.path_join(new_config_path,
-                                                                              req,
-                                                                              'symbol_shift')
-                context.config[symbol_shift_config_path] = 0
 
         # Now construct the module based on the sub-requirements
         requirement.construct(context, config_path)
diff --git a/volatility3/framework/automagic/symbol_finder.py b/volatility3/framework/automagic/symbol_finder.py
index 6f6332a8d4..72dc071a5f 100644
--- a/volatility3/framework/automagic/symbol_finder.py
+++ b/volatility3/framework/automagic/symbol_finder.py
@@ -5,7 +5,7 @@
 import logging
 from typing import Any, Iterable, List, Tuple, Type, Optional, Callable
 
-from volatility3.framework import interfaces, constants, layers, exceptions
+from volatility3.framework import interfaces, constants
 from volatility3.framework.automagic import symbol_cache
 from volatility3.framework.configuration import requirements
 from volatility3.framework.layers import scanners
@@ -112,27 +112,8 @@ def _banner_scan(self,
                     context.config[path_join(config_path, requirement.name, "isf_url")] = isf_path
                     context.config[path_join(config_path, requirement.name, "symbol_mask")] = layer.address_mask
 
-                    # Set a default symbol_shift when attempt to determine it,
-                    # so we can create the symbols which are used in finding the aslr_shift anyway
-                    if not context.config.get(path_join(config_path, requirement.name, "symbol_shift"), None):
-                        # Don't overwrite it if it's already been set, it will be manually refound if not present
-                        prefound_kaslr_value = context.layers[layer_name].metadata.get('kaslr_value', 0)
-                        context.config[path_join(config_path, requirement.name, "symbol_shift")] = prefound_kaslr_value
-
                     # Construct the appropriate symbol table
                     requirement.construct(context, config_path)
-
-                    # Apply the ASLR masking (only if we're not already shifted)
-                    if self.find_aslr and not context.config.get(path_join(config_path, requirement.name, "symbol_shift"),
-                                                                 None):
-                        unmasked_symbol_table_name = context.config.get(path_join(config_path, requirement.name), None)
-                        if not unmasked_symbol_table_name:
-                            raise exceptions.SymbolSpaceError("Symbol table could not be constructed")
-                        if not isinstance(layer, layers.intel.Intel):
-                            raise TypeError("Layer name {} is not an intel space")
-                        aslr_shift = self.find_aslr(context, unmasked_symbol_table_name, layer.config['memory_layer'])
-                        context.config[path_join(config_path, requirement.name, "symbol_shift")] = aslr_shift
-                        context.symbol_space.clear_symbol_cache(unmasked_symbol_table_name)
-
                     break
             else:
                 if symbol_files:
diff --git a/volatility3/framework/constants/__init__.py b/volatility3/framework/constants/__init__.py
index 43193852ab..23598837b1 100644
--- a/volatility3/framework/constants/__init__.py
+++ b/volatility3/framework/constants/__init__.py
@@ -38,9 +38,9 @@
 """Constant used to delimit table names from type names when referring to a symbol"""
 
 # We use the SemVer 2.0.0 versioning scheme
-VERSION_MAJOR = 1  # Number of releases of the library with a breaking change
-VERSION_MINOR = 2  # Number of changes that only add to the interface
-VERSION_PATCH = 1  # Number of changes that do not change the interface
+VERSION_MAJOR = 2  # Number of releases of the library with a breaking change
+VERSION_MINOR = 0  # Number of changes that only add to the interface
+VERSION_PATCH = 0  # Number of changes that do not change the interface
 VERSION_SUFFIX = ""
 
 # TODO: At version 2.0.0, remove the symbol_shift feature
@@ -94,7 +94,6 @@ class Parallelism(enum.IntEnum):
 """The minimum supported version of the Intermediate Symbol Format"""
 ISF_MINIMUM_DEPRECATED = (3, 9, 9)
 """The highest version of the ISF that's deprecated (usually higher than supported)"""
-
 OFFLINE = False
 """Whether to go online to retrieve missing/necessary JSON files"""
diff --git a/volatility3/framework/interfaces/layers.py b/volatility3/framework/interfaces/layers.py
index 28c452f900..a42282c394 100644
--- a/volatility3/framework/interfaces/layers.py
+++ b/volatility3/framework/interfaces/layers.py
@@ -54,7 +54,7 @@ class ScannerInterface(interfaces.configuration.VersionableInterface, metaclass
     """
     thread_safe = False
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     def __init__(self) -> None:
         super().__init__()
diff --git a/volatility3/framework/interfaces/symbols.py b/volatility3/framework/interfaces/symbols.py
index 37c2824ebb..bf0ead3b35 100644
--- a/volatility3/framework/interfaces/symbols.py
+++ b/volatility3/framework/interfaces/symbols.py
@@ -8,7 +8,6 @@
 from typing import Any, Dict, Iterable, List, Optional, Tuple, Type, Mapping
 
 from volatility3.framework import constants, exceptions, interfaces
-from volatility3.framework.configuration import requirements
 from volatility3.framework.interfaces import configuration, objects
 from volatility3.framework.interfaces.configuration import RequirementInterface
 
@@ -302,12 +301,7 @@ def build_configuration(self) -> 'configuration.HierarchicalDict':
 
     @classmethod
     def get_requirements(cls) -> List[RequirementInterface]:
-        return super().get_requirements() + [
-            requirements.IntRequirement(
-                name = 'symbol_shift', description = 'Symbol Shift', optional = True, default = 0),
-            requirements.IntRequirement(
-                name = 'symbol_mask', description = 'Address mask for symbols', optional = True, default = 0),
-        ]
+        return super().get_requirements()
 
 
 class NativeTableInterface(BaseSymbolTableInterface):
diff --git a/volatility3/framework/layers/scanners/__init__.py b/volatility3/framework/layers/scanners/__init__.py
index acf7267ffb..a99e80b14a 100644
--- a/volatility3/framework/layers/scanners/__init__.py
+++ b/volatility3/framework/layers/scanners/__init__.py
@@ -11,7 +11,7 @@
 class BytesScanner(layers.ScannerInterface):
     thread_safe = True
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     def __init__(self, needle: bytes) -> None:
         super().__init__()
@@ -32,7 +32,7 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]:
 class RegExScanner(layers.ScannerInterface):
     thread_safe = True
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     def __init__(self, pattern: bytes, flags: int = 0) -> None:
         super().__init__()
@@ -51,7 +51,7 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]:
 class MultiStringScanner(layers.ScannerInterface):
     thread_safe = True
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     def __init__(self, patterns: List[bytes]) -> None:
         super().__init__()
diff --git a/volatility3/framework/plugins/banners.py b/volatility3/framework/plugins/banners.py
index c907497ed8..ac20062074 100644
--- a/volatility3/framework/plugins/banners.py
+++ b/volatility3/framework/plugins/banners.py
@@ -15,7 +15,7 @@
 class Banners(interfaces.plugins.PluginInterface):
     """Attempts to identify potential linux banners in an image"""
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/configwriter.py b/volatility3/framework/plugins/configwriter.py
index 5e96abcc52..f0979eb4de 100644
--- a/volatility3/framework/plugins/configwriter.py
+++ b/volatility3/framework/plugins/configwriter.py
@@ -17,7 +17,7 @@
 class ConfigWriter(plugins.PluginInterface):
     """Runs the automagics and both prints and outputs configuration in the output directory."""
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/frameworkinfo.py b/volatility3/framework/plugins/frameworkinfo.py
index 8d9f3b0137..b7c887d5cf 100644
--- a/volatility3/framework/plugins/frameworkinfo.py
+++ b/volatility3/framework/plugins/frameworkinfo.py
@@ -8,7 +8,7 @@
 class FrameworkInfo(plugins.PluginInterface):
     """Plugin to list the various modular components of Volatility"""
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/isfinfo.py b/volatility3/framework/plugins/isfinfo.py
index e697994b74..575f254269 100644
--- a/volatility3/framework/plugins/isfinfo.py
+++ b/volatility3/framework/plugins/isfinfo.py
@@ -22,7 +22,7 @@
 class IsfInfo(plugins.PluginInterface):
     """Determines information about the currently available ISF files, or a specific one"""
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
diff --git a/volatility3/framework/plugins/layerwriter.py b/volatility3/framework/plugins/layerwriter.py
index cfa83ade26..b2a02116e9 100644
--- a/volatility3/framework/plugins/layerwriter.py
+++ b/volatility3/framework/plugins/layerwriter.py
@@ -17,7 +17,7 @@ class LayerWriter(plugins.PluginInterface):
 
     default_block_size = 0x500000
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (2, 0, 0)
 
     @classmethod
diff --git a/volatility3/framework/plugins/linux/bash.py b/volatility3/framework/plugins/linux/bash.py
index dd2cb2c0fc..7f606115cf 100644
--- a/volatility3/framework/plugins/linux/bash.py
+++ b/volatility3/framework/plugins/linux/bash.py
@@ -21,7 +21,7 @@
 class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface):
     """Recovers bash command history from memory."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/check_afinfo.py b/volatility3/framework/plugins/linux/check_afinfo.py
index f54b89ee31..4cd065c7ea 100644
--- a/volatility3/framework/plugins/linux/check_afinfo.py
+++ b/volatility3/framework/plugins/linux/check_afinfo.py
@@ -18,7 +18,7 @@
 class Check_afinfo(plugins.PluginInterface):
     """Verifies the operation function pointers of network protocols."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/check_creds.py b/volatility3/framework/plugins/linux/check_creds.py
index 06ac392dbc..613469eed1 100644
--- a/volatility3/framework/plugins/linux/check_creds.py
+++ b/volatility3/framework/plugins/linux/check_creds.py
@@ -14,7 +14,7 @@
 class Check_creds(interfaces.plugins.PluginInterface):
     """Checks if any processes are sharing credential structures"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/linux/check_idt.py b/volatility3/framework/plugins/linux/check_idt.py
index e612300411..1764b63645 100644
--- a/volatility3/framework/plugins/linux/check_idt.py
+++ b/volatility3/framework/plugins/linux/check_idt.py
@@ -17,7 +17,7 @@
 class Check_idt(interfaces.plugins.PluginInterface):
     """ Checks if the IDT has been altered """
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/check_modules.py b/volatility3/framework/plugins/linux/check_modules.py
index 40b4f6e0c9..6af8dec968 100644
--- a/volatility3/framework/plugins/linux/check_modules.py
+++ b/volatility3/framework/plugins/linux/check_modules.py
@@ -18,7 +18,7 @@
 class Check_modules(plugins.PluginInterface):
     """Compares module list to sysfs info, if available"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/check_syscall.py b/volatility3/framework/plugins/linux/check_syscall.py
index e1ba413615..729a0bec69 100644
--- a/volatility3/framework/plugins/linux/check_syscall.py
+++ b/volatility3/framework/plugins/linux/check_syscall.py
@@ -25,7 +25,7 @@
 class Check_syscall(plugins.PluginInterface):
     """Check system call table for hooks."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/elfs.py b/volatility3/framework/plugins/linux/elfs.py
index 5b072ed7db..d2380817d4 100644
--- a/volatility3/framework/plugins/linux/elfs.py
+++ b/volatility3/framework/plugins/linux/elfs.py
@@ -17,7 +17,7 @@
 class Elfs(plugins.PluginInterface):
     """Lists all memory mapped ELF files for all processes."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/keyboard_notifiers.py b/volatility3/framework/plugins/linux/keyboard_notifiers.py
index 8bb79b6ec8..51e684e6f4 100644
--- a/volatility3/framework/plugins/linux/keyboard_notifiers.py
+++ b/volatility3/framework/plugins/linux/keyboard_notifiers.py
@@ -16,7 +16,7 @@
 class Keyboard_notifiers(interfaces.plugins.PluginInterface):
     """Parses the keyboard notifier call chain"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py
index 3ec53cdcf1..0ea00d48d0 100644
--- a/volatility3/framework/plugins/linux/kmsg.py
+++ b/volatility3/framework/plugins/linux/kmsg.py
@@ -363,7 +363,7 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]:
 class Kmsg(plugins.PluginInterface):
     """Kernel log buffer reader"""
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     _version = (1, 0, 0)
 
diff --git a/volatility3/framework/plugins/linux/lsmod.py b/volatility3/framework/plugins/linux/lsmod.py
index 7b70db4bae..ecb262d008 100644
--- a/volatility3/framework/plugins/linux/lsmod.py
+++ b/volatility3/framework/plugins/linux/lsmod.py
@@ -19,7 +19,7 @@
 class Lsmod(plugins.PluginInterface):
     """Lists loaded kernel modules."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (2, 0, 0)
 
     @classmethod
diff --git a/volatility3/framework/plugins/linux/lsof.py b/volatility3/framework/plugins/linux/lsof.py
index 711a7d4e49..a074f57446 100644
--- a/volatility3/framework/plugins/linux/lsof.py
+++ b/volatility3/framework/plugins/linux/lsof.py
@@ -19,7 +19,7 @@
 class Lsof(plugins.PluginInterface):
     """Lists all memory maps for all processes."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/malfind.py b/volatility3/framework/plugins/linux/malfind.py
index 4587dca847..abc2cf7d25 100644
--- a/volatility3/framework/plugins/linux/malfind.py
+++ b/volatility3/framework/plugins/linux/malfind.py
@@ -15,7 +15,7 @@
 class Malfind(interfaces.plugins.PluginInterface):
     """Lists process memory ranges that potentially contain injected code."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/linux/proc.py b/volatility3/framework/plugins/linux/proc.py
index 2c4cd2aff2..13fd87f531 100644
--- a/volatility3/framework/plugins/linux/proc.py
+++ b/volatility3/framework/plugins/linux/proc.py
@@ -15,7 +15,7 @@
 class Maps(plugins.PluginInterface):
     """Lists all memory maps for all processes."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/linux/pslist.py b/volatility3/framework/plugins/linux/pslist.py
index 9b97a56d1f..5672bb56e7 100644
--- a/volatility3/framework/plugins/linux/pslist.py
+++ b/volatility3/framework/plugins/linux/pslist.py
@@ -11,7 +11,7 @@
 class PsList(interfaces.plugins.PluginInterface):
     """Lists the processes present in a particular linux memory image."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     _version = (2, 0, 0)
 
diff --git a/volatility3/framework/plugins/linux/tty_check.py b/volatility3/framework/plugins/linux/tty_check.py
index 8c0662ca76..e1d8339e86 100644
--- a/volatility3/framework/plugins/linux/tty_check.py
+++ b/volatility3/framework/plugins/linux/tty_check.py
@@ -19,7 +19,7 @@
 class tty_check(plugins.PluginInterface):
     """Checks tty devices for hooks"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/bash.py b/volatility3/framework/plugins/mac/bash.py
index 1929e35f73..e16a0d79f8 100644
--- a/volatility3/framework/plugins/mac/bash.py
+++ b/volatility3/framework/plugins/mac/bash.py
@@ -20,7 +20,7 @@
 class Bash(plugins.PluginInterface, timeliner.TimeLinerInterface):
     """Recovers bash command history from memory."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/check_syscall.py b/volatility3/framework/plugins/mac/check_syscall.py
index 1608a64a9e..9072e76d1a 100644
--- a/volatility3/framework/plugins/mac/check_syscall.py
+++ b/volatility3/framework/plugins/mac/check_syscall.py
@@ -18,7 +18,7 @@
 class Check_syscall(plugins.PluginInterface):
     """Check system call table for hooks."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/check_sysctl.py b/volatility3/framework/plugins/mac/check_sysctl.py
index f7a8973ff7..fc6ab5c373 100644
--- a/volatility3/framework/plugins/mac/check_sysctl.py
+++ b/volatility3/framework/plugins/mac/check_sysctl.py
@@ -20,7 +20,7 @@
 class Check_sysctl(plugins.PluginInterface):
     """Check sysctl handlers for hooks."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/check_trap_table.py b/volatility3/framework/plugins/mac/check_trap_table.py
index 0976d6ea04..47d0ed57db 100644
--- a/volatility3/framework/plugins/mac/check_trap_table.py
+++ b/volatility3/framework/plugins/mac/check_trap_table.py
@@ -19,7 +19,7 @@
 class Check_trap_table(plugins.PluginInterface):
     """Check mach trap table for hooks."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/ifconfig.py b/volatility3/framework/plugins/mac/ifconfig.py
index 8e72528c5d..c366a19f06 100644
--- a/volatility3/framework/plugins/mac/ifconfig.py
+++ b/volatility3/framework/plugins/mac/ifconfig.py
@@ -11,7 +11,7 @@
 class Ifconfig(plugins.PluginInterface):
     """Lists loaded kernel modules"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/kauth_listeners.py b/volatility3/framework/plugins/mac/kauth_listeners.py
index 42eaa45825..7002d88e29 100644
--- a/volatility3/framework/plugins/mac/kauth_listeners.py
+++ b/volatility3/framework/plugins/mac/kauth_listeners.py
@@ -13,7 +13,7 @@
 class Kauth_listeners(interfaces.plugins.PluginInterface):
     """ Lists kauth listeners and their status """
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/kauth_scopes.py b/volatility3/framework/plugins/mac/kauth_scopes.py
index f66c5cc0e1..910de35fd4 100644
--- a/volatility3/framework/plugins/mac/kauth_scopes.py
+++ b/volatility3/framework/plugins/mac/kauth_scopes.py
@@ -18,7 +18,7 @@
 class Kauth_scopes(interfaces.plugins.PluginInterface):
     """ Lists kauth scopes and their status """
 
     _version = (2, 0, 0)
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/kevents.py b/volatility3/framework/plugins/mac/kevents.py
index 6fb8e99aff..c433a39eea 100644
--- a/volatility3/framework/plugins/mac/kevents.py
+++ b/volatility3/framework/plugins/mac/kevents.py
@@ -14,7 +14,7 @@
 class Kevents(interfaces.plugins.PluginInterface):
     """ Lists event handlers registered by processes """
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     event_types = {
diff --git a/volatility3/framework/plugins/mac/list_files.py b/volatility3/framework/plugins/mac/list_files.py
index 9efe4b3af4..19f28b18f9 100644
--- a/volatility3/framework/plugins/mac/list_files.py
+++ b/volatility3/framework/plugins/mac/list_files.py
@@ -18,7 +18,7 @@
 class List_Files(plugins.PluginInterface):
     """Lists all open file descriptors for all processes."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/lsmod.py b/volatility3/framework/plugins/mac/lsmod.py
index 5cb242b19e..095fbc6630 100644
--- a/volatility3/framework/plugins/mac/lsmod.py
+++ b/volatility3/framework/plugins/mac/lsmod.py
@@ -15,7 +15,7 @@
 class Lsmod(plugins.PluginInterface):
     """Lists loaded kernel modules."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     _version = (2, 0, 0)
 
diff --git a/volatility3/framework/plugins/mac/lsof.py b/volatility3/framework/plugins/mac/lsof.py
index 6d96f102a3..c3941ec277 100644
--- a/volatility3/framework/plugins/mac/lsof.py
+++ b/volatility3/framework/plugins/mac/lsof.py
@@ -16,7 +16,7 @@
 class Lsof(plugins.PluginInterface):
     """Lists all open file descriptors for all processes."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/malfind.py b/volatility3/framework/plugins/mac/malfind.py
index cf0a808666..7a42a0c5f1 100644
--- a/volatility3/framework/plugins/mac/malfind.py
+++ b/volatility3/framework/plugins/mac/malfind.py
@@ -13,7 +13,7 @@
 class Malfind(interfaces.plugins.PluginInterface):
     """Lists process memory ranges that potentially contain injected code."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/mount.py b/volatility3/framework/plugins/mac/mount.py
index bc171127ef..3985594463 100644
--- a/volatility3/framework/plugins/mac/mount.py
+++ b/volatility3/framework/plugins/mac/mount.py
@@ -14,7 +14,7 @@
 class Mount(plugins.PluginInterface):
     """A module containing a collection of plugins that produce data typically
     foundin Mac's mount command"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     _version = (2, 0, 0)
 
diff --git a/volatility3/framework/plugins/mac/netstat.py b/volatility3/framework/plugins/mac/netstat.py
index 4453a8e30d..e231b8082c 100644
--- a/volatility3/framework/plugins/mac/netstat.py
+++ b/volatility3/framework/plugins/mac/netstat.py
@@ -19,7 +19,7 @@
 class Netstat(plugins.PluginInterface):
     """Lists all network connections for all processes."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/proc_maps.py b/volatility3/framework/plugins/mac/proc_maps.py
index e150c6a55a..70c9684cd5 100644
--- a/volatility3/framework/plugins/mac/proc_maps.py
+++ b/volatility3/framework/plugins/mac/proc_maps.py
@@ -12,7 +12,7 @@
 class Maps(interfaces.plugins.PluginInterface):
     """Lists process memory ranges that potentially contain injected code."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/mac/psaux.py b/volatility3/framework/plugins/mac/psaux.py
index 5206e5b2bc..e3fcdc0bbd 100644
--- a/volatility3/framework/plugins/mac/psaux.py
+++ b/volatility3/framework/plugins/mac/psaux.py
@@ -14,7 +14,7 @@
 class Psaux(plugins.PluginInterface):
     """Recovers program command line arguments."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/pslist.py b/volatility3/framework/plugins/mac/pslist.py
index c094adba80..e92609b3aa 100644
--- a/volatility3/framework/plugins/mac/pslist.py
+++ b/volatility3/framework/plugins/mac/pslist.py
@@ -16,7 +16,7 @@
 class PsList(interfaces.plugins.PluginInterface):
     """Lists the processes present in a particular mac memory image."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (3, 0, 0)
 
     pslist_methods = ['tasks', 'allproc', 'process_group', 'sessions', 'pid_hash_table']
diff --git a/volatility3/framework/plugins/mac/pstree.py b/volatility3/framework/plugins/mac/pstree.py
index 76219c4571..d7fb0eab4c 100644
--- a/volatility3/framework/plugins/mac/pstree.py
+++ b/volatility3/framework/plugins/mac/pstree.py
@@ -13,7 +13,7 @@
 class PsTree(plugins.PluginInterface):
     """Plugin for listing processes in a tree based on their parent process ID."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
diff --git a/volatility3/framework/plugins/mac/socket_filters.py b/volatility3/framework/plugins/mac/socket_filters.py
index ee1b83ed79..a6e9d11fd8 100644
--- a/volatility3/framework/plugins/mac/socket_filters.py
+++ b/volatility3/framework/plugins/mac/socket_filters.py
@@ -19,7 +19,7 @@
 class Socket_filters(plugins.PluginInterface):
     """Enumerates kernel socket filters."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/timers.py b/volatility3/framework/plugins/mac/timers.py
index 42b71134a7..7bc5fd5d0b 100644
--- a/volatility3/framework/plugins/mac/timers.py
+++ b/volatility3/framework/plugins/mac/timers.py
@@ -18,7 +18,7 @@
 class Timers(plugins.PluginInterface):
     """Check for malicious kernel timers."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/trustedbsd.py b/volatility3/framework/plugins/mac/trustedbsd.py
index 5d0eba6695..9efb1b9d60 100644
--- a/volatility3/framework/plugins/mac/trustedbsd.py
+++ b/volatility3/framework/plugins/mac/trustedbsd.py
@@ -20,7 +20,7 @@
 class Trustedbsd(plugins.PluginInterface):
     """Checks for malicious trustedbsd modules"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
diff --git a/volatility3/framework/plugins/mac/vfsevents.py b/volatility3/framework/plugins/mac/vfsevents.py
index 9259956e97..38f4172ce7 100644
--- a/volatility3/framework/plugins/mac/vfsevents.py
+++ b/volatility3/framework/plugins/mac/vfsevents.py
@@ -10,7 +10,7 @@
 class VFSevents(interfaces.plugins.PluginInterface):
     """ Lists processes that are filtering file system events """
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     event_types = [
         "CREATE_FILE", "DELETE", "STAT_CHANGED", "RENAME", "CONTENT_MODIFIED", "EXCHANGE", "FINDER_INFO_CHANGED",
diff --git a/volatility3/framework/plugins/timeliner.py b/volatility3/framework/plugins/timeliner.py
index c3fe424b99..6bf5925046 100644
--- a/volatility3/framework/plugins/timeliner.py
+++ b/volatility3/framework/plugins/timeliner.py
@@ -42,7 +42,7 @@
 class Timeliner(interfaces.plugins.PluginInterface):
     """Runs all relevant plugins that provide time related information and
     orders the results by time."""
 
-    _required_framework_version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
diff --git a/volatility3/framework/plugins/windows/bigpools.py b/volatility3/framework/plugins/windows/bigpools.py
index 1b013e81db..c81125f079 100644
--- a/volatility3/framework/plugins/windows/bigpools.py
+++ b/volatility3/framework/plugins/windows/bigpools.py
@@ -20,7 +20,7 @@
 class BigPools(interfaces.plugins.PluginInterface):
     """List big page pools."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
@@ -28,7 +28,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.StringRequirement(name = 'tags',
                                            description = "Comma separated list of pool tags to filter pools returned",
                                            optional = True,
@@ -105,7 +105,6 @@ def _generator(self) -> Iterator[Tuple[int, Tuple[int, str]]]:  # , str, int]]]:
             tags = [tag for tag in self.config["tags"].split(',')]
         else:
             tags = None
-
         kernel = self.context.modules[self.config['kernel']]
 
         for big_pool in self.list_big_pools(context = self.context,
diff --git a/volatility3/framework/plugins/windows/cachedump.py b/volatility3/framework/plugins/windows/cachedump.py
index 7a8c9933d5..ddfa856b9c 100644
--- a/volatility3/framework/plugins/windows/cachedump.py
+++ b/volatility3/framework/plugins/windows/cachedump.py
@@ -21,14 +21,14 @@
 class Cachedump(interfaces.plugins.PluginInterface):
     """Dumps lsa secrets from memory"""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
     def get_requirements(cls):
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)),
             requirements.PluginRequirement(name = 'lsadump', plugin = lsadump.Lsadump, version = (1, 0, 0)),
             requirements.PluginRequirement(name = 'hashdump', plugin = hashdump.Hashdump, version = (1, 1, 0))
@@ -44,7 +44,7 @@ def decrypt_hash(edata: bytes, nlkm: bytes, ch, xp: bool):
         hmac_md5 = HMAC.new(nlkm, ch)
         rc4key = hmac_md5.digest()
         rc4 = ARC4.new(rc4key)
-        data = rc4.encrypt(edata) # lgtm [py/weak-cryptographic-algorithm]
+        data = rc4.encrypt(edata)  # lgtm [py/weak-cryptographic-algorithm]
     else:
         # based on Based on code from http://lab.mediaservice.net/code/cachedump.rb
         aes = AES.new(nlkm[16:32], AES.MODE_CBC, ch)
@@ -129,7 +129,6 @@ def run(self):
         offset = self.config.get('offset', None)
 
         syshive = sechive = None
-
         kernel = self.context.modules[self.config['kernel']]
 
         for hive in hivelist.HiveList.list_hives(self.context,
diff --git a/volatility3/framework/plugins/windows/callbacks.py b/volatility3/framework/plugins/windows/callbacks.py
index 46711d152f..352dba448a 100644
--- a/volatility3/framework/plugins/windows/callbacks.py
+++ b/volatility3/framework/plugins/windows/callbacks.py
@@ -19,14 +19,14 @@
 class Callbacks(interfaces.plugins.PluginInterface):
     """Lists kernel callbacks and notification routines."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'ssdt', plugin = ssdt.SSDT, version = (1, 0, 0)),
             requirements.PluginRequirement(name = 'svcscan', plugin = svcscan.SvcScan, version = (1, 0, 0))
         ]
diff --git a/volatility3/framework/plugins/windows/cmdline.py b/volatility3/framework/plugins/windows/cmdline.py
index a3f418be07..af6035abb3 100644
--- a/volatility3/framework/plugins/windows/cmdline.py
+++ b/volatility3/framework/plugins/windows/cmdline.py
@@ -15,7 +15,7 @@
 class CmdLine(interfaces.plugins.PluginInterface):
     """Lists process command line arguments."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
@@ -23,7 +23,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)),
             requirements.ListRequirement(name = 'pid',
                                          element_type = int,
@@ -54,7 +54,6 @@ def get_cmdline(cls, context: interfaces.context.ContextInterface, kernel_table_
         return result_text
 
     def _generator(self, procs):
-
         kernel = self.context.modules[self.config['kernel']]
 
         for proc in procs:
@@ -78,9 +77,7 @@ def _generator(self, procs):
             yield (0, (proc.UniqueProcessId, process_name, result_text))
 
     def run(self):
-
         kernel = self.context.modules[self.config['kernel']]
-
         filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None))
 
         return renderers.TreeGrid([("PID", int), ("Process", str), ("Args", str)],
diff --git a/volatility3/framework/plugins/windows/dlllist.py b/volatility3/framework/plugins/windows/dlllist.py
index 8125862765..b24f2c0ca7 100644
--- a/volatility3/framework/plugins/windows/dlllist.py
+++ b/volatility3/framework/plugins/windows/dlllist.py
@@ -20,7 +20,7 @@
 class DllList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface):
     """Lists the loaded modules in a particular windows memory image."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (2, 0, 0)
 
     @classmethod
@@ -28,7 +28,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.VersionRequirement(name = 'pslist', component = pslist.PsList, version = (2, 0, 0)),
             requirements.VersionRequirement(name = 'info', component = info.Info, version = (1, 0, 0)),
             requirements.ListRequirement(name = 'pid',
@@ -136,7 +136,6 @@ def _generator(self, procs):
                 if file_handle:
                     file_handle.close()
                     file_output = file_handle.preferred_filename
-
             try:
                 dllbase = format_hints.Hex(entry.DllBase)
             except exceptions.InvalidAddressException:
@@ -155,7 +154,6 @@ def _generator(self, procs):
 
     def generate_timeline(self):
         kernel = self.context.modules[self.config['kernel']]
-
         for row in self._generator(
                 pslist.PsList.list_processes(context = self.context,
                                              layer_name = kernel.layer_name,
diff --git a/volatility3/framework/plugins/windows/driverirp.py b/volatility3/framework/plugins/windows/driverirp.py
index 3ed086c4a5..7f9bc6b088 100644
--- a/volatility3/framework/plugins/windows/driverirp.py
+++ b/volatility3/framework/plugins/windows/driverirp.py
@@ -22,7 +22,7 @@
 class DriverIrp(interfaces.plugins.PluginInterface):
     """List IRPs for drivers in a particular windows memory image."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
diff --git a/volatility3/framework/plugins/windows/driverscan.py b/volatility3/framework/plugins/windows/driverscan.py
index 498ae33387..2cf309014d 100644
--- a/volatility3/framework/plugins/windows/driverscan.py
+++ b/volatility3/framework/plugins/windows/driverscan.py
@@ -13,14 +13,14 @@
 class DriverScan(interfaces.plugins.PluginInterface):
     """Scans for drivers present in a particular windows memory image."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
     def get_requirements(cls):
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)),
         ]
diff --git a/volatility3/framework/plugins/windows/dumpfiles.py b/volatility3/framework/plugins/windows/dumpfiles.py
index ca78696f84..58166ee7f2 100755
--- a/volatility3/framework/plugins/windows/dumpfiles.py
+++ b/volatility3/framework/plugins/windows/dumpfiles.py
@@ -5,7 +5,6 @@
 import logging
 import ntpath
 from typing import List, Tuple, Type, Optional, Generator
-
 from volatility3.framework import interfaces, renderers, exceptions, constants
 from volatility3.framework.configuration import requirements
 from volatility3.framework.renderers import format_hints
@@ -26,7 +25,7 @@
 class DumpFiles(interfaces.plugins.PluginInterface):
     """Dumps cached file contents from Windows memory samples."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
     _version = (1, 0, 0)
 
     @classmethod
@@ -34,7 +33,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.IntRequirement(name = 'pid',
                                         description = "Process ID to include (all other processes are excluded)",
                                         optional = True),
@@ -237,9 +236,9 @@ def _generator(self, procs: List, offsets: List):
                     file_obj = self.context.object(
                         kernel.symbol_table_name + constants.BANG + "_FILE_OBJECT",
-                            layer_name = layer_name,
+                        layer_name = layer_name,
                         native_layer_name = kernel.layer_name,
-                            offset = offset)
+                        offset = offset)
                     for result in self.process_file_object(self.context, kernel.layer_name, self.open, file_obj):
                         yield (0, result)
             except exceptions.InvalidAddressException:
@@ -250,7 +249,6 @@ def run(self):
         offsets = []
         # a list of processes matching the pid filter. all files for these process(es) will be dumped.
         procs = []
-
        kernel = self.context.modules[self.config['kernel']]
 
         if self.config.get("virtaddr", None) is not None:
diff --git a/volatility3/framework/plugins/windows/envars.py b/volatility3/framework/plugins/windows/envars.py
index 6caf6d2fac..9791fa5807 100644
--- a/volatility3/framework/plugins/windows/envars.py
+++ b/volatility3/framework/plugins/windows/envars.py
@@ -15,15 +15,15 @@
 class Envars(interfaces.plugins.PluginInterface):
     "Display process environment variables"
 
-    _required_framework_version = (1, 2, 0)
     _version = (1, 0, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]:
         # Since we're calling the plugin, make sure we have the plugin's requirements
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.ListRequirement(name = 'pid',
                                          description = 'Filter on specific process IDs',
                                          element_type = int,
diff --git a/volatility3/framework/plugins/windows/filescan.py b/volatility3/framework/plugins/windows/filescan.py
index 5ceb57ac4c..79d85eb8d9 100644
--- a/volatility3/framework/plugins/windows/filescan.py
+++ b/volatility3/framework/plugins/windows/filescan.py
@@ -13,13 +13,13 @@
 class FileScan(interfaces.plugins.PluginInterface):
     """Scans for file objects present in a particular windows memory image."""
 
-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)
 
     @classmethod
     def get_requirements(cls):
         return [
             requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel',
-                                                       architectures = ["Intel32", "Intel64"]),
+                                           architectures = ["Intel32", "Intel64"]),
             requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)),
         ]
diff --git a/volatility3/framework/plugins/windows/getservicesids.py
b/volatility3/framework/plugins/windows/getservicesids.py index 9395aadb49..a47b1178a7 100644 --- a/volatility3/framework/plugins/windows/getservicesids.py +++ b/volatility3/framework/plugins/windows/getservicesids.py @@ -30,8 +30,8 @@ def createservicesid(svc) -> str: class GetServiceSIDs(interfaces.plugins.PluginInterface): """Lists process token sids.""" - _required_framework_version = (1, 2, 0) _version = (1, 0, 0) + _required_framework_version = (2, 0, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -54,11 +54,12 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] # Since we're calling the plugin, make sure we have the plugin's requirements return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)) ] def _generator(self): + kernel = self.context.modules[self.config['kernel']] # Get the system hive for hive in hivelist.HiveList.list_hives(context = self.context, diff --git a/volatility3/framework/plugins/windows/getsids.py b/volatility3/framework/plugins/windows/getsids.py index 0b082f3eae..da6fa71c98 100644 --- a/volatility3/framework/plugins/windows/getsids.py +++ b/volatility3/framework/plugins/windows/getsids.py @@ -28,8 +28,8 @@ def find_sid_re(sid_string, sid_re_list) -> Union[str, interfaces.renderers.Base class GetSIDs(interfaces.plugins.PluginInterface): """Print the SIDs owning each process""" - _required_framework_version = (1, 2, 0) _version = (1, 0, 0) + _required_framework_version = (2, 0, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -54,7 +54,7 @@ def __init__(self, *args, **kwargs): def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows 
kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, diff --git a/volatility3/framework/plugins/windows/handles.py b/volatility3/framework/plugins/windows/handles.py index 2f02ec6213..66fd94f08e 100644 --- a/volatility3/framework/plugins/windows/handles.py +++ b/volatility3/framework/plugins/windows/handles.py @@ -24,7 +24,7 @@ class Handles(interfaces.plugins.PluginInterface): """Lists process open handles.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) def __init__(self, *args, **kwargs): @@ -39,7 +39,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] # Since we're calling the plugin, make sure we have the plugin's requirements return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.ListRequirement(name = 'pid', element_type = int, description = "Process IDs to include (all other processes are excluded)", @@ -293,7 +293,6 @@ def handles(self, handle_table): yield handle_table_entry def _generator(self, procs): - kernel = self.context.modules[self.config['kernel']] type_map = self.get_type_map(context = self.context, diff --git a/volatility3/framework/plugins/windows/hashdump.py b/volatility3/framework/plugins/windows/hashdump.py index 05d0281d91..4d0b25b5c3 100644 --- a/volatility3/framework/plugins/windows/hashdump.py +++ b/volatility3/framework/plugins/windows/hashdump.py @@ -21,14 +21,14 @@ class Hashdump(interfaces.plugins.PluginInterface): """Dumps user hashes from memory""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 1, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description 
= 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)) ] @@ -132,7 +132,7 @@ def get_hbootkey(cls, samhive: registry.RegistryHive, bootkey: bytes) -> Optiona rc4_key = md5.digest() rc4 = ARC4.new(rc4_key) - hbootkey = rc4.encrypt(sam_data[0x80:0xA0]) # lgtm [py/weak-cryptographic-algorithm] + hbootkey = rc4.encrypt(sam_data[0x80:0xA0]) # lgtm [py/weak-cryptographic-algorithm] return hbootkey elif revision == 3: # AES encrypted @@ -151,7 +151,7 @@ def decrypt_single_salted_hash(cls, rid, hbootkey: bytes, enc_hash: bytes, _lmnt des2 = DES.new(des_k2, DES.MODE_ECB) cipher = AES.new(hbootkey[:16], AES.MODE_CBC, salt) obfkey = cipher.decrypt(enc_hash) - return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:16]) # lgtm [py/weak-cryptographic-algorithm] + return des1.decrypt(obfkey[:8]) + des2.decrypt(obfkey[8:16]) # lgtm [py/weak-cryptographic-algorithm] @classmethod def get_user_hashes(cls, user: registry.CM_KEY_NODE, samhive: registry.RegistryHive, @@ -229,9 +229,9 @@ def decrypt_single_hash(cls, rid: int, hbootkey: bytes, enc_hash: bytes, lmntstr md5.update(hbootkey[:0x10] + pack(" Optional[bytes]: @@ -288,7 +288,6 @@ def run(self): syshive = None samhive = None kernel = self.context.modules[self.config['kernel']] - for hive in hivelist.HiveList.list_hives(self.context, self.config_path, kernel.layer_name, diff --git a/volatility3/framework/plugins/windows/info.py b/volatility3/framework/plugins/windows/info.py index c06c69c3af..172664aef8 100644 --- a/volatility3/framework/plugins/windows/info.py +++ b/volatility3/framework/plugins/windows/info.py @@ -16,14 +16,14 @@ class Info(plugins.PluginInterface): """Show OS & kernel details of the memory sample being analyzed.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> 
List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), ] @classmethod diff --git a/volatility3/framework/plugins/windows/lsadump.py b/volatility3/framework/plugins/windows/lsadump.py index c0db0b5b1a..edf16416c6 100644 --- a/volatility3/framework/plugins/windows/lsadump.py +++ b/volatility3/framework/plugins/windows/lsadump.py @@ -21,14 +21,14 @@ class Lsadump(interfaces.plugins.PluginInterface): """Dumps lsa secrets from memory""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'hashdump', component = hashdump.Hashdump, version = (1, 1, 0)), requirements.VersionRequirement(name = 'hivelist', component = hivelist.HiveList, version = (1, 0, 0)) ] @@ -84,7 +84,7 @@ def get_lsa_key(cls, sechive: registry.RegistryHive, bootkey: bytes, vista_or_la rc4key = md5.digest() rc4 = ARC4.new(rc4key) - lsa_key = rc4.decrypt(obf_lsa_key[12:60]) # lgtm [py/weak-cryptographic-algorithm] + lsa_key = rc4.decrypt(obf_lsa_key[12:60]) # lgtm [py/weak-cryptographic-algorithm] lsa_key = lsa_key[0x10:0x20] else: lsa_key = cls.decrypt_aes(obf_lsa_key, bootkey) @@ -125,7 +125,7 @@ def decrypt_secret(cls, secret: bytes, key: bytes): des_key = hashdump.Hashdump.sidbytes_to_key(block_key) des = DES.new(des_key, DES.MODE_ECB) enc_block = enc_block + b"\x00" * int(abs(8 - len(enc_block)) % 8) - decrypted_data += des.decrypt(enc_block) # lgtm [py/weak-cryptographic-algorithm] + decrypted_data += des.decrypt(enc_block) # lgtm [py/weak-cryptographic-algorithm] j += 7 if len(key[j:j + 7]) < 7: j = len(key[j:j + 7]) diff --git 
a/volatility3/framework/plugins/windows/malfind.py b/volatility3/framework/plugins/windows/malfind.py index 864722fbe3..bfd29a254f 100644 --- a/volatility3/framework/plugins/windows/malfind.py +++ b/volatility3/framework/plugins/windows/malfind.py @@ -17,14 +17,14 @@ class Malfind(interfaces.plugins.PluginInterface): """Lists process memory ranges that potentially contain injected code.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls): # Since we're calling the plugin, make sure we have the plugin's requirements return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.ListRequirement(name = 'pid', element_type = int, description = "Process IDs to include (all other processes are excluded)", @@ -103,8 +103,8 @@ def list_injections( continue if (vad.get_private_memory() == 1 - and vad.get_tag() == "VadS") or (vad.get_private_memory() == 0 - and protection_string != "PAGE_EXECUTE_WRITECOPY"): + and vad.get_tag() == "VadS") or (vad.get_private_memory() == 0 + and protection_string != "PAGE_EXECUTE_WRITECOPY"): if cls.is_vad_empty(proc_layer, vad): continue diff --git a/volatility3/framework/plugins/windows/memmap.py b/volatility3/framework/plugins/windows/memmap.py index 6253ccb9b3..74e5885da8 100644 --- a/volatility3/framework/plugins/windows/memmap.py +++ b/volatility3/framework/plugins/windows/memmap.py @@ -16,14 +16,14 @@ class Memmap(interfaces.plugins.PluginInterface): """Prints the memory map""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = 
["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.IntRequirement(name = 'pid', description = "Process ID to include (all other processes are excluded)", @@ -34,6 +34,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] optional = True) ] + def _generator(self, procs): for proc in procs: pid = "Unknown" diff --git a/volatility3/framework/plugins/windows/modscan.py b/volatility3/framework/plugins/windows/modscan.py index 5179d29638..b661d71d72 100644 --- a/volatility3/framework/plugins/windows/modscan.py +++ b/volatility3/framework/plugins/windows/modscan.py @@ -17,14 +17,14 @@ class ModScan(interfaces.plugins.PluginInterface): """Scans for modules present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'poolerscanner', component = poolscanner.PoolScanner, version = (1, 0, 0)), diff --git a/volatility3/framework/plugins/windows/modules.py b/volatility3/framework/plugins/windows/modules.py index a3fb87694d..7b488a8eb5 100644 --- a/volatility3/framework/plugins/windows/modules.py +++ b/volatility3/framework/plugins/windows/modules.py @@ -19,14 +19,14 @@ class Modules(interfaces.plugins.PluginInterface): """Lists the loaded kernel modules.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 1, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + 
architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'pslist', component = pslist.PsList, version = (2, 0, 0)), requirements.VersionRequirement(name = 'dlllist', component = dlllist.DllList, version = (2, 0, 0)), requirements.BooleanRequirement(name = 'dump', diff --git a/volatility3/framework/plugins/windows/mutantscan.py b/volatility3/framework/plugins/windows/mutantscan.py index 55a131b4ad..29e27c9b10 100644 --- a/volatility3/framework/plugins/windows/mutantscan.py +++ b/volatility3/framework/plugins/windows/mutantscan.py @@ -13,13 +13,13 @@ class MutantScan(interfaces.plugins.PluginInterface): """Scans for mutexes present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)), ] diff --git a/volatility3/framework/plugins/windows/netscan.py b/volatility3/framework/plugins/windows/netscan.py index 9e72713f79..33b5a7fbc3 100644 --- a/volatility3/framework/plugins/windows/netscan.py +++ b/volatility3/framework/plugins/windows/netscan.py @@ -22,14 +22,14 @@ class NetScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Scans for network objects present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'poolscanner', component = poolscanner.PoolScanner, version = (1, 0, 0)), diff --git 
a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index faae80503b..76480399f0 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -20,14 +20,14 @@ class NetStat(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Traverses network tracking structures present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.VersionRequirement(name = 'netscan', component = netscan.NetScan, version = (1, 0, 0)), requirements.VersionRequirement(name = 'modules', component = modules.Modules, version = (1, 0, 0)), requirements.VersionRequirement(name = 'pdbutil', component = pdbutil.PDBUtility, version = (1, 0, 0)), diff --git a/volatility3/framework/plugins/windows/privileges.py b/volatility3/framework/plugins/windows/privileges.py index 2b6381145e..eaafbeae6e 100644 --- a/volatility3/framework/plugins/windows/privileges.py +++ b/volatility3/framework/plugins/windows/privileges.py @@ -17,7 +17,7 @@ class Privs(interfaces.plugins.PluginInterface): """Lists process token privileges""" _version = (1, 2, 0) - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -41,7 +41,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] # Since we're calling the plugin, make sure we have the plugin's requirements return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), 
requirements.ListRequirement(name = 'pid', description = 'Filter on specific process IDs', element_type = int, diff --git a/volatility3/framework/plugins/windows/pslist.py b/volatility3/framework/plugins/windows/pslist.py index 5121b1b595..cadccc5f1d 100644 --- a/volatility3/framework/plugins/windows/pslist.py +++ b/volatility3/framework/plugins/windows/pslist.py @@ -20,7 +20,7 @@ class PsList(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Lists the processes present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (2, 0, 0) PHYSICAL_DEFAULT = False diff --git a/volatility3/framework/plugins/windows/psscan.py b/volatility3/framework/plugins/windows/psscan.py index 8bbebdc7bf..184ef5103a 100644 --- a/volatility3/framework/plugins/windows/psscan.py +++ b/volatility3/framework/plugins/windows/psscan.py @@ -22,14 +22,14 @@ class PsScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Scans for processes present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 1, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.VersionRequirement(name = 'info', component = info.Info, version = (1, 0, 0)), requirements.ListRequirement(name = 'pid', @@ -143,7 +143,6 @@ def get_osversion(cls, context: interfaces.context.ContextInterface, layer_name: def _generator(self): kernel = self.context.modules[self.config['kernel']] - pe_table_name = intermed.IntermediateSymbolTable.create(self.context, self.config_path, "windows", diff --git a/volatility3/framework/plugins/windows/pstree.py 
b/volatility3/framework/plugins/windows/pstree.py index b88e4bba3e..fbb883839f 100644 --- a/volatility3/framework/plugins/windows/pstree.py +++ b/volatility3/framework/plugins/windows/pstree.py @@ -16,7 +16,7 @@ class PsTree(interfaces.plugins.PluginInterface): """Plugin for listing processes in a tree based on their parent process ID.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) def __init__(self, *args, **kwargs) -> None: super().__init__(*args, **kwargs) @@ -28,7 +28,7 @@ def __init__(self, *args, **kwargs) -> None: def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.BooleanRequirement(name = 'physical', description = 'Display physical offsets instead of virtual', default = pslist.PsList.PHYSICAL_DEFAULT, diff --git a/volatility3/framework/plugins/windows/registry/hivelist.py b/volatility3/framework/plugins/windows/registry/hivelist.py index 0296456188..ac30560c6e 100644 --- a/volatility3/framework/plugins/windows/registry/hivelist.py +++ b/volatility3/framework/plugins/windows/registry/hivelist.py @@ -17,7 +17,7 @@ class HiveGenerator: """Walks the registry HiveList linked list in a given direction and stores an invalid offset if it's unable to fully walk the list""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) def __init__(self, cmhive, forward = True): self._cmhive = cmhive @@ -39,14 +39,14 @@ def invalid(self) -> Optional[int]: class HiveList(interfaces.plugins.PluginInterface): """Lists the registry hives present in a particular memory image.""" - _required_framework_version = (1, 2, 0) _version = (1, 0, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 
'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.StringRequirement(name = 'filter', description = "String to filter hive names returned", optional = True, @@ -63,7 +63,6 @@ def _sanitize_hive_name(self, name: str) -> str: def _generator(self) -> Iterator[Tuple[int, Tuple[int, str]]]: chunk_size = 0x500000 - kernel = self.context.modules[self.config['kernel']] for hive_object in self.list_hive_objects(context = self.context, diff --git a/volatility3/framework/plugins/windows/registry/hivescan.py b/volatility3/framework/plugins/windows/registry/hivescan.py index 0a0257d478..ab15e56acd 100644 --- a/volatility3/framework/plugins/windows/registry/hivescan.py +++ b/volatility3/framework/plugins/windows/registry/hivescan.py @@ -15,14 +15,14 @@ class HiveScan(interfaces.plugins.PluginInterface): """Scans for registry hives present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)), requirements.PluginRequirement(name = 'bigpools', plugin = bigpools.BigPools, version = (1, 0, 0)), ] @@ -66,12 +66,11 @@ def scan_hives(cls, yield mem_object def _generator(self): - kernel = self.context.modules[self.config['kernel']] for hive in self.scan_hives(self.context, kernel.layer_name, kernel.symbol_table_name): - yield (0, (format_hints.Hex(hive.vol.offset),)) + yield (0, (format_hints.Hex(hive.vol.offset), )) def run(self): return renderers.TreeGrid([("Offset", format_hints.Hex)], self._generator()) diff --git a/volatility3/framework/plugins/windows/registry/printkey.py 
b/volatility3/framework/plugins/windows/registry/printkey.py index 10409e3ea5..2082c339e5 100644 --- a/volatility3/framework/plugins/windows/registry/printkey.py +++ b/volatility3/framework/plugins/windows/registry/printkey.py @@ -19,14 +19,14 @@ class PrintKey(interfaces.plugins.PluginInterface): """Lists the registry keys under a hive or specific key value.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)), requirements.IntRequirement(name = 'offset', description = "Hive Offset", default = None, optional = True), requirements.StringRequirement(name = 'key', @@ -41,10 +41,10 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] @classmethod def key_iterator( - cls, - hive: RegistryHive, - node_path: Sequence[objects.StructType] = None, - recurse: bool = False + cls, + hive: RegistryHive, + node_path: Sequence[objects.StructType] = None, + recurse: bool = False ) -> Iterable[Tuple[int, bool, datetime.datetime, str, bool, interfaces.objects.ObjectInterface]]: """Walks through a set of nodes from a given node (last one in node_path). 
Avoids loops by not traversing into nodes already present diff --git a/volatility3/framework/plugins/windows/registry/userassist.py b/volatility3/framework/plugins/windows/registry/userassist.py index 2dd0ae0238..a788f058f5 100644 --- a/volatility3/framework/plugins/windows/registry/userassist.py +++ b/volatility3/framework/plugins/windows/registry/userassist.py @@ -23,7 +23,7 @@ class UserAssist(interfaces.plugins.PluginInterface): """Print userassist registry keys and information.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -38,7 +38,7 @@ def __init__(self, *args, **kwargs): def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.IntRequirement(name = 'offset', description = "Hive Offset", default = None, optional = True), requirements.PluginRequirement(name = 'hivelist', plugin = hivelist.HiveList, version = (1, 0, 0)) ] @@ -217,7 +217,6 @@ def _generator(self): hive_offsets = None if self.config.get('offset', None) is not None: hive_offsets = [self.config.get('offset', None)] - kernel = self.context.modules[self.config['kernel']] # get all the user hive offsets or use the one specified diff --git a/volatility3/framework/plugins/windows/ssdt.py b/volatility3/framework/plugins/windows/ssdt.py index b5b3e40c0f..f596dfe0a8 100644 --- a/volatility3/framework/plugins/windows/ssdt.py +++ b/volatility3/framework/plugins/windows/ssdt.py @@ -18,14 +18,14 @@ class SSDT(plugins.PluginInterface): """Lists the system call table.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ 
requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'modules', plugin = modules.Modules, version = (1, 0, 0)), ] diff --git a/volatility3/framework/plugins/windows/strings.py b/volatility3/framework/plugins/windows/strings.py index 67055e18d8..e79f64741f 100644 --- a/volatility3/framework/plugins/windows/strings.py +++ b/volatility3/framework/plugins/windows/strings.py @@ -19,7 +19,7 @@ class Strings(interfaces.plugins.PluginInterface): """Reads output from the strings command and indicates which process(es) each string belongs to.""" _version = (1, 2, 0) - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) strings_pattern = re.compile(rb"^(?:\W*)([0-9]+)(?:\W*)(\w[\w\W]+)\n?") @classmethod @@ -42,7 +42,7 @@ def run(self): def _generator(self) -> Generator[Tuple, None, None]: """Generates results from a strings file.""" - string_list: List[Tuple[int, bytes]] = [] + string_list: List[Tuple[int,bytes]] = [] # Test strings file format is accurate accessor = resources.ResourceAccessor() @@ -57,7 +57,6 @@ def _generator(self) -> Generator[Tuple, None, None]: except ValueError: vollog.error(f"Line in unrecognized format: line {count}") line = strings_fp.readline() - kernel = self.context.modules[self.config['kernel']] revmap = self.generate_mapping(self.context, @@ -67,7 +66,7 @@ def _generator(self) -> Generator[Tuple, None, None]: pid_list = self.config['pid']) last_prog: float = 0 - line_count: float = 0 + line_count: float = 0 num_strings = len(string_list) for offset, string in string_list: line_count += 1 diff --git a/volatility3/framework/plugins/windows/svcscan.py b/volatility3/framework/plugins/windows/svcscan.py index 140a2cccd3..b6adbb0b09 100644 --- a/volatility3/framework/plugins/windows/svcscan.py +++ b/volatility3/framework/plugins/windows/svcscan.py @@ -21,7 +21,7 @@ class 
SvcScan(interfaces.plugins.PluginInterface): """Scans for windows services.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod @@ -29,7 +29,7 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] # Since we're calling the plugin, make sure we have the plugin's requirements return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)), requirements.PluginRequirement(name = 'poolscanner', plugin = poolscanner.PoolScanner, version = (1, 0, 0)), requirements.PluginRequirement(name = 'vadyarascan', plugin = vadyarascan.VadYaraScan, version = (1, 0, 0)) diff --git a/volatility3/framework/plugins/windows/symlinkscan.py b/volatility3/framework/plugins/windows/symlinkscan.py index 3d7cbf89b9..ef970b2968 100644 --- a/volatility3/framework/plugins/windows/symlinkscan.py +++ b/volatility3/framework/plugins/windows/symlinkscan.py @@ -15,13 +15,13 @@ class SymlinkScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): """Scans for links present in a particular windows memory image.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls): return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), ] @classmethod diff --git a/volatility3/framework/plugins/windows/vadinfo.py b/volatility3/framework/plugins/windows/vadinfo.py index ee8bc04315..51002d0edf 100644 --- a/volatility3/framework/plugins/windows/vadinfo.py +++ b/volatility3/framework/plugins/windows/vadinfo.py @@ -33,7 +33,7 @@ class VadInfo(interfaces.plugins.PluginInterface): """Lists process memory ranges.""" - 
_required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (2, 0, 0) MAXSIZE_DEFAULT = 0 @@ -45,7 +45,7 @@ def __init__(self, *args, **kwargs): def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: # Since we're calling the plugin, make sure we have the plugin's requirements return [requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", "Intel64"]), # TODO: Convert this to a ListRequirement so that people can filter on sets of ranges requirements.IntRequirement(name = 'address', description = "Process virtual memory address to include " \ @@ -166,7 +166,6 @@ def vad_dump(cls, return file_handle def _generator(self, procs): - kernel = self.context.modules[self.config['kernel']] def passthrough(_: interfaces.objects.ObjectInterface) -> bool: @@ -201,7 +200,6 @@ def filter_function(x: interfaces.objects.ObjectInterface) -> bool: format_hints.Hex(vad.get_parent()), vad.get_file_name(), file_output)) def run(self): - kernel = self.context.modules[self.config['kernel']] filter_func = pslist.PsList.create_pid_filter(self.config.get('pid', None)) diff --git a/volatility3/framework/plugins/windows/vadyarascan.py b/volatility3/framework/plugins/windows/vadyarascan.py index 756e18be54..06a87d0033 100644 --- a/volatility3/framework/plugins/windows/vadyarascan.py +++ b/volatility3/framework/plugins/windows/vadyarascan.py @@ -17,14 +17,14 @@ class VadYaraScan(interfaces.plugins.PluginInterface): """Scans all the Virtual Address Descriptor memory maps using yara.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) _version = (1, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [ requirements.ModuleRequirement(name = 'kernel', description = 'Windows kernel', - architectures = ["Intel32", "Intel64"]), + architectures = ["Intel32", 
"Intel64"]), requirements.BooleanRequirement(name = "wide", description = "Match wide (unicode) strings", default = False, @@ -52,7 +52,6 @@ def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface] ] def _generator(self): - kernel = self.context.modules[self.config['kernel']] rules = yarascan.YaraScan.process_yara_options(dict(self.config)) diff --git a/volatility3/framework/plugins/windows/verinfo.py b/volatility3/framework/plugins/windows/verinfo.py index 82571b477e..762832e396 100644 --- a/volatility3/framework/plugins/windows/verinfo.py +++ b/volatility3/framework/plugins/windows/verinfo.py @@ -27,8 +27,8 @@ class VerInfo(interfaces.plugins.PluginInterface): """Lists version information from PE files.""" - _required_framework_version = (1, 2, 0) _version = (1, 0, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: diff --git a/volatility3/framework/plugins/windows/virtmap.py b/volatility3/framework/plugins/windows/virtmap.py index 9a43cb1f64..9a241d3d84 100644 --- a/volatility3/framework/plugins/windows/virtmap.py +++ b/volatility3/framework/plugins/windows/virtmap.py @@ -16,7 +16,7 @@ class VirtMap(interfaces.plugins.PluginInterface): """Lists virtual mapped sections.""" - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: diff --git a/volatility3/framework/plugins/yarascan.py b/volatility3/framework/plugins/yarascan.py index 86a7426a5a..94b3cba452 100644 --- a/volatility3/framework/plugins/yarascan.py +++ b/volatility3/framework/plugins/yarascan.py @@ -39,7 +39,7 @@ def __call__(self, data: bytes, data_offset: int) -> Iterable[Tuple[int, str, st class YaraScan(plugins.PluginInterface): """Scans kernel memory using yara rules (string or file).""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 
0) _version = (1, 0, 0) @classmethod diff --git a/volatility3/framework/symbols/intermed.py b/volatility3/framework/symbols/intermed.py index a3b5a582a0..6e55c4b356 100644 --- a/volatility3/framework/symbols/intermed.py +++ b/volatility3/framework/symbols/intermed.py @@ -13,14 +13,15 @@ from abc import ABCMeta from typing import Any, Dict, Generator, Iterable, List, Optional, Type, Tuple, Mapping -from volatility3.framework.layers import resources from volatility3 import schemas, symbols from volatility3.framework import class_subclasses, constants, exceptions, interfaces, objects from volatility3.framework.configuration import requirements +from volatility3.framework.layers import resources from volatility3.framework.symbols import native, metadata vollog = logging.getLogger(__name__) + # ## TODO # # All symbol tables should take a label to an object template @@ -47,7 +48,6 @@ def _construct_delegate_function(name: str, is_property: bool = False) -> Any: - def _delegate_function(self, *args, **kwargs): if is_property: return getattr(self._delegate, name) @@ -82,9 +82,7 @@ def __init__(self, native_types: interfaces.symbols.NativeTableInterface = None, table_mapping: Optional[Dict[str, str]] = None, validate: bool = True, - class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None, - symbol_shift: int = 0, - symbol_mask: int = 0) -> None: + class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None) -> None: """Instantiates a SymbolTable based on an IntermediateSymbolFormat JSON file. This is validated against the appropriate schema. The validation can be disabled by passing validate = False, but this should almost never be done. 
@@ -98,8 +96,6 @@ def __init__(self, table_mapping: A dictionary linking names referenced in the file with symbol tables in the context validate: Determines whether the ISF file will be validated against the appropriate schema class_types: A dictionary of type names and classes that override StructType when they are instantiated - symbol_shift: An offset by which to alter all returned symbols for this table - symbol_mask: An address mask used for all returned symbol offsets from this table (a mask of 0 disables masking) """ # Check there are no obvious errors # Open the file and test the version @@ -136,13 +132,6 @@ def __init__(self, # Since we've been created with parameters, ensure our config is populated likewise self.config['isf_url'] = isf_url - if symbol_shift: - vollog.warning( - "Symbol_shift support has been deprecated and will be removed in the next major release of Volatility 3" - ) - self.config['symbol_shift'] = symbol_shift - self.config['symbol_mask'] = symbol_mask - @staticmethod def _closest_version(version: str, versions: Dict[Tuple[int, int, int], Type['ISFormatTable']]) \ -> Type['ISFormatTable']: @@ -225,9 +214,7 @@ def create(cls, filename: str, native_types: Optional[interfaces.symbols.NativeTableInterface] = None, table_mapping: Optional[Dict[str, str]] = None, - class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None, - symbol_shift: int = 0, - symbol_mask: int = 0) -> str: + class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None) -> str: """Takes a context and loads an intermediate symbol table based on a filename. 
@@ -238,8 +225,6 @@ def create(cls, filename: Basename of the file to find under the sub_path native_types: Set of native types, defaults to native types read from the intermediate symbol format file table_mapping: a dictionary of table names mentioned within the ISF file, and the tables within the context which they map to - symbol_shift: An offset by which to alter all returned symbols for this table - symbol_mask: An address mask used for all returned symbol offsets from this table (a mask of 0 disables masking) Returns: the name of the added symbol table @@ -254,9 +239,7 @@ def create(cls, isf_url = urls[0], native_types = native_types, table_mapping = table_mapping, - class_types = class_types, - symbol_shift = symbol_shift, - symbol_mask = symbol_mask) + class_types = class_types) context.symbol_space.append(table) return table_name @@ -341,10 +324,7 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: symbol = self._json_object['symbols'].get(name, None) if not symbol: raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") - address = symbol['address'] + self.config.get('symbol_shift', 0) - if self.config.get('symbol_mask', 0): - address = address & self.config['symbol_mask'] - self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, address = address) + self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, address = symbol['address']) return self._symbol_cache[name] @property @@ -546,12 +526,8 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: if 'type' in symbol: symbol_type = self._interdict_to_template(symbol['type']) - # Mask the addresses if necessary - address = symbol['address'] + self.config.get('symbol_shift', 0) - if self.config.get('symbol_mask', 0): - address = address & self.config['symbol_mask'] self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, - address = address, + address = symbol['address'], type = symbol_type) 
return self._symbol_cache[name] @@ -606,12 +582,8 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: if 'constant_data' in symbol: symbol_constant_data = base64.b64decode(symbol.get('constant_data')) - # Mask the addresses if necessary - address = symbol['address'] + self.config.get('symbol_shift', 0) - if self.config.get('symbol_mask', 0): - address = address & self.config['symbol_mask'] self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, - address = address, + address = symbol['address'], type = symbol_type, constant_data = symbol_constant_data) return self._symbol_cache[name] diff --git a/volatility3/framework/symbols/linux/__init__.py b/volatility3/framework/symbols/linux/__init__.py index 4657304111..36e23a35d7 100644 --- a/volatility3/framework/symbols/linux/__init__.py +++ b/volatility3/framework/symbols/linux/__init__.py @@ -41,7 +41,7 @@ class LinuxUtilities(interfaces.configuration.VersionableInterface): """Class with multiple useful linux functions.""" _version = (2, 0, 0) - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) framework.require_interface_version(*_required_framework_version) diff --git a/volatility3/framework/symbols/mac/__init__.py b/volatility3/framework/symbols/mac/__init__.py index e84712828f..7d094ea4f7 100644 --- a/volatility3/framework/symbols/mac/__init__.py +++ b/volatility3/framework/symbols/mac/__init__.py @@ -38,7 +38,7 @@ class MacUtilities(interfaces.configuration.VersionableInterface): 1.3.0 -> add parameter to lookup_module_address to pass kernel module name """ _version = (1, 3, 0) - _required_framework_version = (1, 2, 0) + _required_framework_version = (2, 0, 0) @classmethod def mask_mods_list(cls, context: interfaces.context.ContextInterface, layer_name: str, diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 9d8b2431bf..e73c25d48d 100644 --- 
a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -25,7 +25,7 @@ class PDBUtility(interfaces.configuration.VersionableInterface): """Class to handle and manage all getting symbols based on MZ header""" _version = (1, 0, 0) - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) @classmethod def symbol_table_from_offset( diff --git a/volatility3/plugins/windows/poolscanner.py b/volatility3/plugins/windows/poolscanner.py index 4885c88d77..0c6e989ecb 100644 --- a/volatility3/plugins/windows/poolscanner.py +++ b/volatility3/plugins/windows/poolscanner.py @@ -114,8 +114,8 @@ def __call__(self, data: bytes, data_offset: int): class PoolScanner(plugins.PluginInterface): """A generic pool scanner plugin.""" - _required_framework_version = (1, 2, 0) _version = (1, 0, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: diff --git a/volatility3/plugins/windows/registry/certificates.py b/volatility3/plugins/windows/registry/certificates.py index 3ba4266515..96a7e39775 100644 --- a/volatility3/plugins/windows/registry/certificates.py +++ b/volatility3/plugins/windows/registry/certificates.py @@ -10,7 +10,7 @@ class Certificates(interfaces.plugins.PluginInterface): """Lists the certificates in the registry's Certificate Store.""" - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: diff --git a/volatility3/plugins/windows/statistics.py b/volatility3/plugins/windows/statistics.py index b3bc8044f1..e6f2016ed2 100644 --- a/volatility3/plugins/windows/statistics.py +++ b/volatility3/plugins/windows/statistics.py @@ -13,7 +13,7 @@ class Statistics(plugins.PluginInterface): - _required_framework_version = (1, 0, 0) + _required_framework_version = (2, 0, 0) @classmethod def 
get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: From aedfd0c54a865c3469c562222ca76c7100531096 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 6 Oct 2021 21:38:32 +0100 Subject: [PATCH 271/294] Windows: Fix up some poolscanner rebase issues --- .../plugins/windows/poolscanner.py | 25 ++++++++++--------- 1 file changed, 13 insertions(+), 12 deletions(-) rename volatility3/{ => framework}/plugins/windows/poolscanner.py (95%) diff --git a/volatility3/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py similarity index 95% rename from volatility3/plugins/windows/poolscanner.py rename to volatility3/framework/plugins/windows/poolscanner.py index 0c6e989ecb..d64bb64f12 100644 --- a/volatility3/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -294,25 +294,26 @@ def generate_pool_scan(cls, for constraint, header in cls.pool_scan(context, scan_layer, symbol_table, constraints, alignment = alignment): - mem_object = header.get_object(constraint = constraint, + mem_objects = header.get_object(constraint = constraint, use_top_down = is_windows_8_or_later, native_layer_name = 'primary', kernel_symbol_table = symbol_table) - if mem_object is None: - vollog.log(constants.LOGLEVEL_VVV, f"Cannot create an instance of {constraint.type_name}") - continue + for mem_object in mem_objects: + if mem_object is None: + vollog.log(constants.LOGLEVEL_VVV, f"Cannot create an instance of {constraint.type_name}") + continue - if constraint.object_type is not None and not constraint.skip_type_test: - try: - if mem_object.get_object_header().get_object_type(type_map, cookie) != constraint.object_type: + if constraint.object_type is not None and not constraint.skip_type_test: + try: + if mem_object.get_object_header().get_object_type(type_map, cookie) != constraint.object_type: + continue + except exceptions.InvalidAddressException: + vollog.log(constants.LOGLEVEL_VVV, + f"Cannot test 
instance type check for {constraint.type_name}") continue - except exceptions.InvalidAddressException: - vollog.log(constants.LOGLEVEL_VVV, - f"Cannot test instance type check for {constraint.type_name}") - continue - yield constraint, mem_object, header + yield constraint, mem_object, header @classmethod def pool_scan(cls, From c4c7aa2b13374efdcc64ed7f2d192c8f662a5dc1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 6 Oct 2021 21:44:04 +0100 Subject: [PATCH 272/294] Windows: Remove regression introduced during rebase --- volatility3/framework/plugins/windows/poolscanner.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/poolscanner.py b/volatility3/framework/plugins/windows/poolscanner.py index d64bb64f12..c5d60ce03d 100644 --- a/volatility3/framework/plugins/windows/poolscanner.py +++ b/volatility3/framework/plugins/windows/poolscanner.py @@ -296,7 +296,7 @@ def generate_pool_scan(cls, mem_objects = header.get_object(constraint = constraint, use_top_down = is_windows_8_or_later, - native_layer_name = 'primary', + native_layer_name = layer_name, kernel_symbol_table = symbol_table) for mem_object in mem_objects: From 8d98cb876800f028c7d13674209fef13c1195bf2 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 6 Oct 2021 22:56:43 +0100 Subject: [PATCH 273/294] Symbols: Add symbol_masking back in (but not symbol_shift) --- volatility3/framework/interfaces/symbols.py | 10 +++++-- volatility3/framework/symbols/intermed.py | 32 +++++++++++++++------ 2 files changed, 31 insertions(+), 11 deletions(-) diff --git a/volatility3/framework/interfaces/symbols.py b/volatility3/framework/interfaces/symbols.py index bf0ead3b35..b271de412e 100644 --- a/volatility3/framework/interfaces/symbols.py +++ b/volatility3/framework/interfaces/symbols.py @@ -4,10 +4,11 @@ """Symbols provide structural information about a set of bytes.""" import bisect import collections.abc -from abc import abstractmethod, ABC -from typing import 
Any, Dict, Iterable, List, Optional, Tuple, Type, Mapping +from abc import ABC, abstractmethod +from typing import Any, Dict, Iterable, List, Mapping, Optional, Tuple, Type from volatility3.framework import constants, exceptions, interfaces +from volatility3.framework.configuration import requirements from volatility3.framework.interfaces import configuration, objects from volatility3.framework.interfaces.configuration import RequirementInterface @@ -301,7 +302,10 @@ def build_configuration(self) -> 'configuration.HierarchicalDict': @classmethod def get_requirements(cls) -> List[RequirementInterface]: - return super().get_requirements() + return super().get_requirements() + [ + requirements.IntRequirement(name = 'symbol_mask', description = 'Address mask for symbols', optional = True, + default = 0), + ] class NativeTableInterface(BaseSymbolTableInterface): diff --git a/volatility3/framework/symbols/intermed.py b/volatility3/framework/symbols/intermed.py index 6e55c4b356..c48fcc8a7a 100644 --- a/volatility3/framework/symbols/intermed.py +++ b/volatility3/framework/symbols/intermed.py @@ -82,7 +82,8 @@ def __init__(self, native_types: interfaces.symbols.NativeTableInterface = None, table_mapping: Optional[Dict[str, str]] = None, validate: bool = True, - class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None) -> None: + class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None, + symbol_mask: int = 0) -> None: """Instantiates a SymbolTable based on an IntermediateSymbolFormat JSON file. This is validated against the appropriate schema. The validation can be disabled by passing validate = False, but this should almost never be done. 
@@ -96,6 +97,7 @@ def __init__(self, table_mapping: A dictionary linking names referenced in the file with symbol tables in the context validate: Determines whether the ISF file will be validated against the appropriate schema class_types: A dictionary of type names and classes that override StructType when they are instantiated + symbol_mask: An address mask used for all returned symbol offsets from this table (a mask of 0 disables masking) """ # Check there are no obvious errors # Open the file and test the version @@ -131,6 +133,7 @@ def __init__(self, # Since we've been created with parameters, ensure our config is populated likewise self.config['isf_url'] = isf_url + self.config['symbol_mask'] = symbol_mask @staticmethod def _closest_version(version: str, versions: Dict[Tuple[int, int, int], Type['ISFormatTable']]) \ @@ -214,7 +217,8 @@ def create(cls, filename: str, native_types: Optional[interfaces.symbols.NativeTableInterface] = None, table_mapping: Optional[Dict[str, str]] = None, - class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None) -> str: + class_types: Optional[Mapping[str, Type[interfaces.objects.ObjectInterface]]] = None, + symbol_mask: int = 0) -> str: """Takes a context and loads an intermediate symbol table based on a filename. 
@@ -225,6 +229,7 @@ def create(cls, filename: Basename of the file to find under the sub_path native_types: Set of native types, defaults to native types read from the intermediate symbol format file table_mapping: a dictionary of table names mentioned within the ISF file, and the tables within the context which they map to + symbol_mask: An address mask used for all returned symbol offsets from this table (a mask of 0 disables masking) Returns: the name of the added symbol table @@ -239,7 +244,8 @@ def create(cls, isf_url = urls[0], native_types = native_types, table_mapping = table_mapping, - class_types = class_types) + class_types = class_types, + symbol_mask = symbol_mask) context.symbol_space.append(table) return table_name @@ -324,7 +330,11 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: symbol = self._json_object['symbols'].get(name, None) if not symbol: raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") - self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, address = symbol['address']) + address = symbol['address'] + if self.config.get('symbol_mask', 0): + address = address & self.config['symbol_mask'] + + self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, address = address) return self._symbol_cache[name] @property @@ -522,13 +532,15 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: symbol = self._json_object['symbols'].get(name, None) if not symbol: raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") + address = symbol['address'] + if self.config.get('symbol_mask', 0): + address = address & self.config['symbol_mask'] + symbol_type = None if 'type' in symbol: symbol_type = self._interdict_to_template(symbol['type']) - self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, - address = symbol['address'], - type = symbol_type) + self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, 
address = address, type = symbol_type) return self._symbol_cache[name] @@ -575,6 +587,10 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: symbol = self._json_object['symbols'].get(name, None) if not symbol: raise exceptions.SymbolError(name, self.name, f"Unknown symbol: {name}") + address = symbol['address'] + if self.config.get('symbol_mask', 0): + address = address & self.config['symbol_mask'] + symbol_type = None if 'type' in symbol: symbol_type = self._interdict_to_template(symbol['type']) @@ -583,7 +599,7 @@ def get_symbol(self, name: str) -> interfaces.symbols.SymbolInterface: symbol_constant_data = base64.b64decode(symbol.get('constant_data')) self._symbol_cache[name] = interfaces.symbols.SymbolInterface(name = name, - address = symbol['address'], + address = address, type = symbol_type, constant_data = symbol_constant_data) return self._symbol_cache[name] From 57b2de3166d957c2aa8e33c6ef7deb14ede95da4 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Thu, 7 Oct 2021 00:41:26 +0100 Subject: [PATCH 274/294] Modules: Ensure created modules have distinct config paths --- volatility3/framework/contexts/__init__.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py index 3b15c5d0e1..f7532cc5de 100644 --- a/volatility3/framework/contexts/__init__.py +++ b/volatility3/framework/contexts/__init__.py @@ -188,9 +188,10 @@ def create(cls, **kwargs) -> 'Module': pathjoin = interfaces.configuration.path_join # Check if config_path is None + free_module_name = context.modules.free_module_name(module_name) config_path = kwargs.get('config_path', None) if config_path is None: - config_path = pathjoin('temporary', 'modules') + config_path = pathjoin('temporary', 'modules', free_module_name) # Populate the configuration context.config[pathjoin(config_path, 'layer_name')] = layer_name context.config[pathjoin(config_path, 'offset')] = offset @@ -200,7 
+201,7 @@ def create(cls, for arg in kwargs: context.config[pathjoin(config_path, arg)] = kwargs.get(arg, None) # Construct the object - return_val = cls(context, config_path, context.modules.free_module_name(module_name)) + return_val = cls(context, config_path, free_module_name) context.add_module(return_val) context.config[config_path] = return_val.name # Add the module to the context modules collection From 472547061c04db335fd1f76b9026d0d8539b8fea Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Mon, 11 Oct 2021 11:20:48 +0100 Subject: [PATCH 275/294] Core: Fix up setup.py documentation --- README.md | 2 +- requirements.txt | 7 ++++--- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index c51bd060cb..75c3c1c23c 100644 --- a/README.md +++ b/README.md @@ -24,7 +24,7 @@ Volatility 3 requires Python 3.6.0 or later. To install the most minimal set of pip3 install -r requirements-minimal.txt ``` -Alternately, the minimal packages can be installed automatically when Volatility 3 is installed. However, as noted in the Quick Start section below, Volatility 3 does not *need* to be installed prior to using it. +Alternately, the minimal packages will be installed automatically when Volatility 3 is installed using setup.py. However, as noted in the Quick Start section below, Volatility 3 does not *need* to be installed via setup.py prior to using it. ```shell python3 setup.py build diff --git a/requirements.txt b/requirements.txt index f9ec9d3d1f..290d9ca977 100644 --- a/requirements.txt +++ b/requirements.txt @@ -17,8 +17,9 @@ pycryptodome # This can improve error messages regarding improperly configured ISF files. jsonschema>=2.3.0 -# This is required for memory acquisition via leech. +# This is required for memory acquisition via leechcore/pcileech. leechcorepyc>=2.4.0 -# This is required for analyzing Linux samples acquired with AVML. 
-python-snappy==0.6.0 \ No newline at end of file +# This is required for analyzing Linux samples compressed using AVML's native +# compression format. It is not required for AVML's standard LiME compression. +python-snappy==0.6.0 From daef7df46f5d8cd12bb0ef447310185e691f571a Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 13 Oct 2021 14:18:25 +0100 Subject: [PATCH 276/294] Windows: Improve PDB scanning regex The regular expression for finding PDB signatures didn't have an appropriate flag to treat newline characters like normal characters, meaning that if a newline character occurred between the RSDS header and the name of the pdb file, the pdb signature would be missed. Also updated the filenames so that they're escaped, meaning it must be an actual dot for the extension rather than any character. Fixes #577 --- volatility3/framework/symbols/windows/pdbutil.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index e73c25d48d..4e90e4834b 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ b/volatility3/framework/symbols/windows/pdbutil.py @@ -346,8 +346,9 @@ def __init__(self, pdb_names: List[bytes]) -> None: self._pdb_names = pdb_names def __call__(self, data: bytes, data_offset: int) -> Generator[Tuple[str, Any, bytes, int], None, None]: - pattern = b'RSDS' + (b'.' * self._RSDS_format.size) + b'(' + b'|'.join(self._pdb_names) + b')\x00' - for match in re.finditer(pattern, data): + pattern = b'RSDS' + (b'.' 
* self._RSDS_format.size) + b'(' + b'|'.join( + [re.escape(x) for x in self._pdb_names]) + b')\x00' + for match in re.finditer(pattern, data, flags = re.DOTALL): pdb_name = data[match.start(0) + 4 + self._RSDS_format.size:match.start(0) + len(match.group()) - 1] if pdb_name in self._pdb_names: ## this ordering is intentional due to mixed endianness in the GUID From adfb0de1873078804972804844a4ac4f2edec6b7 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 13 Oct 2021 14:50:42 +0100 Subject: [PATCH 277/294] Windows: Set a 100Gb VAD max size for vadinfo --- volatility3/framework/plugins/windows/vadinfo.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/vadinfo.py b/volatility3/framework/plugins/windows/vadinfo.py index 51002d0edf..2b5bc8701f 100644 --- a/volatility3/framework/plugins/windows/vadinfo.py +++ b/volatility3/framework/plugins/windows/vadinfo.py @@ -35,7 +35,7 @@ class VadInfo(interfaces.plugins.PluginInterface): _required_framework_version = (2, 0, 0) _version = (2, 0, 0) - MAXSIZE_DEFAULT = 0 + MAXSIZE_DEFAULT = 100 * 1024 * 1024 * 1024 # 100 Gb def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) From 5f3e519caec01777a60a286edc4ae1b4df0a73a1 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 13 Oct 2021 15:00:27 +0100 Subject: [PATCH 278/294] Core: Make the regex scanner default to DOTALL This will have no impact on the existing core use of the regex scanner, and therefore does not require a version bump. 
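The two regex changes in the patches above — escaping the literal PDB filenames and scanning with the DOTALL flag — can be illustrated with a small standalone sketch (the sample bytes and filename below are fabricated for illustration, not taken from a real image):

```python
import re

# A fabricated RSDS-style blob: a 4-byte tag, a 4-byte region that happens
# to contain a newline byte (0x0a), then a PDB filename and a NUL terminator.
data = b"RSDS" + b"\x01\x0a\x02\x03" + b"ntkrnlmp.pdb\x00"

names = [b"ntkrnlmp.pdb"]
# re.escape keeps the "." in the filename from matching arbitrary bytes.
pattern = b"RSDS" + b"." * 4 + b"(" + b"|".join(re.escape(n) for n in names) + b")\x00"

# Without DOTALL, "." refuses to match the 0x0a byte, so the signature is missed.
assert re.search(pattern, data) is None
# With DOTALL, newline bytes are treated like any other byte and the hit is found.
assert re.search(pattern, data, flags=re.DOTALL) is not None
```

This is why a PDB signature split across a newline byte was previously missed by the scanner.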
--- volatility3/framework/layers/scanners/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/layers/scanners/__init__.py b/volatility3/framework/layers/scanners/__init__.py index a99e80b14a..2c27c4fea5 100644 --- a/volatility3/framework/layers/scanners/__init__.py +++ b/volatility3/framework/layers/scanners/__init__.py @@ -34,7 +34,7 @@ class RegExScanner(layers.ScannerInterface): _required_framework_version = (2, 0, 0) - def __init__(self, pattern: bytes, flags: int = 0) -> None: + def __init__(self, pattern: bytes, flags: int = re.DOTALL) -> None: super().__init__() self.regex = re.compile(pattern, flags) From 7cac1a98fdf686a1b49716e2bb038bc9a6909e4c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 13 Oct 2021 15:03:17 +0100 Subject: [PATCH 279/294] Core: Update documentation for RegExScanner --- volatility3/framework/layers/scanners/__init__.py | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/volatility3/framework/layers/scanners/__init__.py b/volatility3/framework/layers/scanners/__init__.py index 2c27c4fea5..85c8390d95 100644 --- a/volatility3/framework/layers/scanners/__init__.py +++ b/volatility3/framework/layers/scanners/__init__.py @@ -30,6 +30,11 @@ def __call__(self, data: bytes, data_offset: int) -> Generator[int, None, None]: class RegExScanner(layers.ScannerInterface): + """A scanner that can be provided with a bytes-object regular expression pattern + The scanner will scan all blocks for the regular expression and report the absolute offset of any finds + + The default flags include DOTALL, since the searches are through binary data and the newline character should + have no specific significance in such searches""" thread_safe = True _required_framework_version = (2, 0, 0) From 2a7831466c95bbadb381eb9ed0d5b1068917ed45 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Sun, 17 Oct 2021 13:29:32 +0100 Subject: [PATCH 280/294] Layers: Fix bug where PAT flag is treated as address --- 
volatility3/framework/layers/intel.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 4a582f1fbe..1f997c4101 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -135,6 +135,9 @@ def _translate_entry(self, offset: int) -> Tuple[int, int]: "Page Fault at entry " + hex(entry) + " in table " + name) # Check if we're a large page if large_page and (entry & (1 << 7)): + # Mask off the PAT bit + if entry & (1 << 12): + entry -= (1 << 12) # We're a large page, the rest is finished below # If we want to implement PSE-36, it would need to be done here break From 449c2aaddf8ef9c43d5fc8f8c8c99be02ee79817 Mon Sep 17 00:00:00 2001 From: garanews Date: Mon, 18 Oct 2021 14:38:12 +0200 Subject: [PATCH 281/294] fix some typos --- doc/source/simple-plugin.rst | 2 +- doc/source/vol-cli.rst | 2 +- doc/source/vol2to3.rst | 2 +- volatility3/cli/__init__.py | 2 +- volatility3/framework/automagic/symbol_cache.py | 2 +- volatility3/framework/contexts/__init__.py | 2 +- volatility3/framework/interfaces/automagic.py | 2 +- volatility3/framework/interfaces/objects.py | 2 +- volatility3/framework/interfaces/renderers.py | 2 +- volatility3/framework/layers/intel.py | 2 +- volatility3/framework/layers/resources.py | 2 +- volatility3/framework/objects/__init__.py | 12 ++++++------ volatility3/framework/plugins/linux/kmsg.py | 4 ++-- volatility3/framework/plugins/mac/kevents.py | 2 +- volatility3/framework/plugins/windows/handles.py | 2 +- volatility3/framework/plugins/windows/netstat.py | 2 +- volatility3/framework/renderers/conversion.py | 2 +- volatility3/framework/renderers/format_hints.py | 2 +- .../framework/symbols/windows/extensions/__init__.py | 4 ++-- volatility3/framework/symbols/windows/pdbconv.py | 6 +++--- volatility3/framework/symbols/windows/pdbutil.py | 2 +- volatility3/schemas/__init__.py | 2 +- 22 files changed, 31 insertions(+), 31 deletions(-) 
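The intel-layer fix above clears bit 12 of a large-page table entry before the entry is used as an address: on x86 large pages, that bit is the PAT flag rather than part of the physical address, so leaving it in place skewed translations by 0x1000. A minimal standalone sketch of the masking (the helper name is illustrative, not the framework's API):

```python
LARGE_PAGE_BIT = 1 << 7   # PS flag: this entry maps a large page
PAT_BIT = 1 << 12         # on large pages, bit 12 is PAT, not an address bit

def mask_large_page_entry(entry: int) -> int:
    """Clear the PAT bit from a large-page entry so it is not misread as address bits."""
    if entry & LARGE_PAGE_BIT and entry & PAT_BIT:
        entry -= PAT_BIT
    return entry

# With PAT left in place, the translated physical address would be off by 0x1000.
entry = 0x200000 | LARGE_PAGE_BIT | PAT_BIT
assert mask_large_page_entry(entry) == 0x200000 | LARGE_PAGE_BIT
```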
diff --git a/doc/source/simple-plugin.rst b/doc/source/simple-plugin.rst index 52dabfd4fb..4e499b1869 100644 --- a/doc/source/simple-plugin.rst +++ b/doc/source/simple-plugin.rst @@ -147,7 +147,7 @@ it does not. The :py:func:`~volatility3.plugins.windows.pslist.PsList.create_pi identifiers that are included in the list. If the list is empty, all processes are returned. The next line specifies the columns by their name and type. The types are simple types (int, str, bytes, float, and bool) -but can also provide hints as to how the output should be displayed (such as a hexidecimal number, using +but can also provide hints as to how the output should be displayed (such as a hexadecimal number, using :py:class:`volatility3.framework.renderers.format_hints.Hex`). This indicates to user interfaces that the value should be displayed in a particular way, but does not guarantee that the value will be displayed that way (for example, if it doesn't make sense to do so in a particular interface). diff --git a/doc/source/vol-cli.rst b/doc/source/vol-cli.rst index 0699068730..9db29c8186 100644 --- a/doc/source/vol-cli.rst +++ b/doc/source/vol-cli.rst @@ -116,7 +116,7 @@ Options **** The name of the plugin to execute (these are usually categorized by - the operating system, such as `windows.pslist.PsList`). Any subtring + the operating system, such as `windows.pslist.PsList`). Any substring that uniquely matches the desired plugin name can be used. 
As such `hivescan` would match `windows.registry.hivescan.HiveScan`, but `pslist` is ambiguous because it could match `windows.pslist` or diff --git a/doc/source/vol2to3.rst b/doc/source/vol2to3.rst index bc1733dcf8..eb33b6618f 100644 --- a/doc/source/vol2to3.rst +++ b/doc/source/vol2to3.rst @@ -24,7 +24,7 @@ Object Model changes -------------------- The object model has changed as well, objects now inherit directly from their Python counterparts, meaning an integer -object is actually a Python integer (and has all the associated methods, and can be used whereever a normal int could). +object is actually a Python integer (and has all the associated methods, and can be used wherever a normal int could). In Volatility 2, a complex proxy object was constructed which tried to emulate all the methods of the host object, but ultimately it was a different type and could not be used in the same places (critically, it could make the ordering of operations important, since a + b might not work, but b + a might work fine). 
diff --git a/volatility3/cli/__init__.py b/volatility3/cli/__init__.py index 19c6e4dddb..608fdf79cf 100644 --- a/volatility3/cli/__init__.py +++ b/volatility3/cli/__init__.py @@ -478,7 +478,7 @@ def populate_config(self, context: interfaces.context.ContextInterface, if not scheme or len(scheme) <= 1: if not os.path.exists(value): raise FileNotFoundError( - f"Non-existant file {value} passed to URIRequirement") + f"Non-existent file {value} passed to URIRequirement") value = f"file://{request.pathname2url(os.path.abspath(value))}" if isinstance(requirement, requirements.ListRequirement): if not isinstance(value, list): diff --git a/volatility3/framework/automagic/symbol_cache.py b/volatility3/framework/automagic/symbol_cache.py index b2c227228d..7b6adf9b47 100644 --- a/volatility3/framework/automagic/symbol_cache.py +++ b/volatility3/framework/automagic/symbol_cache.py @@ -94,7 +94,7 @@ def __call__(self, context, config_path, configurable, progress_callback = None) banner_list = banners.get(new_banner, []) banners[new_banner] = list(set(banner_list + new_banners[new_banner])) - # Do remote banners *after* the JSON loading, so that it doen't pull down all the remote JSON + # Do remote banners *after* the JSON loading, so that it doesn't pull down all the remote JSON self.remote_banners(banners, self.os) # Rewrite the cached banners each run, since writing is faster than the banner_cache validation portion diff --git a/volatility3/framework/contexts/__init__.py b/volatility3/framework/contexts/__init__.py index f7532cc5de..518215ab4e 100644 --- a/volatility3/framework/contexts/__init__.py +++ b/volatility3/framework/contexts/__init__.py @@ -6,7 +6,7 @@ This has been made an object to allow quick swapping and changing of contexts, to allow a plugin to act on multiple different contexts -without them interfering eith each other. +without them interfering with each other. 
""" import functools import hashlib diff --git a/volatility3/framework/interfaces/automagic.py b/volatility3/framework/interfaces/automagic.py index 04f4958946..c310f5b4a7 100644 --- a/volatility3/framework/interfaces/automagic.py +++ b/volatility3/framework/interfaces/automagic.py @@ -26,7 +26,7 @@ class AutomagicInterface(interfaces.configuration.ConfigurableInterface, metacla Args: context: The context in which to store configuration data that the automagic might populate config_path: Configuration path where the configurable's data under the context's config lives - configurable: The top level configurable whose requirements may need statisfying + configurable: The top level configurable whose requirements may need satisfying progress_callback: An optional function accepting a percentage and optional description to indicate progress during long calculations diff --git a/volatility3/framework/interfaces/objects.py b/volatility3/framework/interfaces/objects.py index 11040c503a..811327094a 100644 --- a/volatility3/framework/interfaces/objects.py +++ b/volatility3/framework/interfaces/objects.py @@ -212,7 +212,7 @@ class VolTemplateProxy(metaclass = abc.ABCMeta): takes a template since the templates may contain the necessary data about the yet-to-be-constructed object. It allows objects to control how their templates respond without needing to write - new templates for each and every potental object type. + new templates for each and every potential object type. 
""" _methods: List[str] = [] diff --git a/volatility3/framework/interfaces/renderers.py b/volatility3/framework/interfaces/renderers.py index dd53a96b33..7f80425a4c 100644 --- a/volatility3/framework/interfaces/renderers.py +++ b/volatility3/framework/interfaces/renderers.py @@ -215,6 +215,6 @@ def visit(self, Args: node: The initial node to be visited function: The visitor to apply to the nodes under the initial node - initial_accumulator: An accumulator that allows data to be transfered between one visitor call to the next + initial_accumulator: An accumulator that allows data to be transferred between one visitor call to the next sort_key: Information about the sort order of columns in order to determine the ordering of results """ diff --git a/volatility3/framework/layers/intel.py b/volatility3/framework/layers/intel.py index 4a582f1fbe..7569aa0b81 100644 --- a/volatility3/framework/layers/intel.py +++ b/volatility3/framework/layers/intel.py @@ -361,7 +361,7 @@ def _translate(self, offset: int) -> Tuple[int, int, str]: class WindowsIntel32e(WindowsMixin, Intel32e): # TODO: Fix appropriately in a future release. - # Currently just a temprorary workaround to deal with custom bit flag + # Currently just a temporary workaround to deal with custom bit flag # in the PFN field for pages in transition state. 
# See https://github.com/volatilityfoundation/volatility3/pull/475 _maxphyaddr = 45 diff --git a/volatility3/framework/layers/resources.py b/volatility3/framework/layers/resources.py index 35182a86be..b6ef1b6bad 100644 --- a/volatility3/framework/layers/resources.py +++ b/volatility3/framework/layers/resources.py @@ -56,7 +56,7 @@ def close(): class ResourceAccessor(object): - """Object for openning URLs as files (downloading locally first if + """Object for opening URLs as files (downloading locally first if necessary)""" list_handlers = True diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index 05d594b2f1..fb8975ebb7 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -111,7 +111,7 @@ def __new__(cls: Type, """Creates the appropriate class and returns it so that the native type is inherited. - The only reason the kwargs is added, is so that the inherriting types can override __init__ + The only reason the kwargs is added, is so that the inheriting types can override __init__ without needing to override __new__ We also sneak in new_value, so that we don't have to do expensive (read: impossible) context reads @@ -128,7 +128,7 @@ def __new__(cls: Type, return result def __getnewargs_ex__(self): - """Make sure that when pickling, all appropiate parameters for new are + """Make sure that when pickling, all appropriate parameters for new are provided.""" kwargs = {} for k, v in self._vol.maps[-1].items(): @@ -205,7 +205,7 @@ def __new__(cls: Type, is inherritted. The only reason the kwargs is added, is so that the - inherriting types can override __init__ without needing to + inheriting types can override __init__ without needing to override __new__ """ return cls._struct_type.__new__( @@ -255,7 +255,7 @@ def __new__(cls: Type, is inherited. 
The only reason the kwargs is added, is so that the - inherriting types can override __init__ without needing to + inheriting types can override __init__ without needing to override __new__ """ params = {} @@ -634,7 +634,7 @@ def __len__(self) -> int: def write(self, value) -> None: if not isinstance(value, collections.Sequence): - raise TypeError("Only Sequences can be writen to arrays") + raise TypeError("Only Sequences can be written to arrays") self.count = len(value) for index in range(len(value)): self[index].write(value[index]) @@ -769,7 +769,7 @@ def __getattr__(self, attr: str) -> Any: # Disable messing around with setattr until the consequences have been considered properly # For example pdbutil constructs objects and then sets values for them - # Some don't always match the type (for example, the data read is encoded and interpretted) + # Some don't always match the type (for example, the data read is encoded and interpreted) # # def __setattr__(self, name, value): # """Method for writing specific members of a structure""" diff --git a/volatility3/framework/plugins/linux/kmsg.py b/volatility3/framework/plugins/linux/kmsg.py index 4f5fb29249..a729c1695a 100644 --- a/volatility3/framework/plugins/linux/kmsg.py +++ b/volatility3/framework/plugins/linux/kmsg.py @@ -108,7 +108,7 @@ def run(self) -> Iterator[Tuple[str, str, str, str, str]]: def symtab_checks(cls, vmlinux: interfaces.context.ModuleInterface) -> bool: """This method on each sublasss will be called to evaluate if the kernel being analyzed fulfill the type & symbols requirements for the implementation. - The first class returning True will be instanciated and called via the + The first class returning True will be instantiated and called via the run() method. :return: True is the kernel being analysed fulfill the class requirements. @@ -173,7 +173,7 @@ class KmsgLegacy(ABCKmsg): """Linux kernels prior to v5.10, the ringbuffer is initially kept in __log_buf, and log_buf is a pointer to the former. 
__log_buf is declared as a char array but it actually contains an array of printk_log structs. - The lenght of this array is defined in the kernel KConfig configuration via + The length of this array is defined in the kernel KConfig configuration via the CONFIG_LOG_BUF_SHIFT value as a power of 2. This can also be modified by the log_buf_len kernel boot parameter. In SMP systems with more than 64 CPUs this ringbuffer size is dynamically diff --git a/volatility3/framework/plugins/mac/kevents.py b/volatility3/framework/plugins/mac/kevents.py index c433a39eea..16fa51fa44 100644 --- a/volatility3/framework/plugins/mac/kevents.py +++ b/volatility3/framework/plugins/mac/kevents.py @@ -98,7 +98,7 @@ def _get_task_kevents(cls, kernel, task): """ Enumerates event filters per task. Uses smear-safe APIs throughout as these data structures - see a signifcant amount of smear + see a significant amount of smear """ fdp = task.p_fd diff --git a/volatility3/framework/plugins/windows/handles.py b/volatility3/framework/plugins/windows/handles.py index 66fd94f08e..ab11d30d6c 100644 --- a/volatility3/framework/plugins/windows/handles.py +++ b/volatility3/framework/plugins/windows/handles.py @@ -169,7 +169,7 @@ def get_type_map(cls, context: interfaces.context.ContextInterface, layer_name: symbol_table: The name of the table containing the kernel symbols Returns: - A mapping of type indicies to type names + A mapping of type indices to type names """ type_map: Dict[int, str] = {} diff --git a/volatility3/framework/plugins/windows/netstat.py b/volatility3/framework/plugins/windows/netstat.py index 76480399f0..e71683c10e 100644 --- a/volatility3/framework/plugins/windows/netstat.py +++ b/volatility3/framework/plugins/windows/netstat.py @@ -74,7 +74,7 @@ def read_pointer(cls, context: interfaces.context.ContextInterface, layer_name: @classmethod def parse_bitmap(cls, context: interfaces.context.ContextInterface, layer_name: str, bitmap_offset: int, bitmap_size_in_byte: int) -> list: - 
"""Parses a given bitmap and looks for each occurence of a 1. + """Parses a given bitmap and looks for each occurrence of a 1. Args: context: The context to retrieve required elements (layers, symbol tables) from diff --git a/volatility3/framework/renderers/conversion.py b/volatility3/framework/renderers/conversion.py index 5737abef51..996cf03a50 100644 --- a/volatility3/framework/renderers/conversion.py +++ b/volatility3/framework/renderers/conversion.py @@ -96,7 +96,7 @@ def convert_network_four_tuple(family, four_tuple): dest port) into their string equivalents. IP addresses are expected as a tuple - of unsigned shorts Ports are converted to proper endianess as well + of unsigned shorts Ports are converted to proper endianness as well """ if family == socket.AF_INET: diff --git a/volatility3/framework/renderers/format_hints.py b/volatility3/framework/renderers/format_hints.py index 486e164b30..f386d8e9df 100644 --- a/volatility3/framework/renderers/format_hints.py +++ b/volatility3/framework/renderers/format_hints.py @@ -18,7 +18,7 @@ class Bin(int): class Hex(int): """A class to indicate that the integer value should be represented as a - hexidecimal value.""" + hexadecimal value.""" class HexBytes(bytes): diff --git a/volatility3/framework/symbols/windows/extensions/__init__.py b/volatility3/framework/symbols/windows/extensions/__init__.py index 25e48f78c2..ae7c45d04c 100755 --- a/volatility3/framework/symbols/windows/extensions/__init__.py +++ b/volatility3/framework/symbols/windows/extensions/__init__.py @@ -973,7 +973,7 @@ def get_available_pages(self) -> Iterable[Tuple[int, int, int]]: # If the entry is not a valid physical address then see if it is in transition. elif mmpte.u.Trans.Transition == 1: # TODO: Fix appropriately in a future release. - # Currently just a temprorary workaround to deal with custom bit flag + # Currently just a temporary workaround to deal with custom bit flag # in the PFN field for pages in transition state. 
# See https://github.com/volatilityfoundation/volatility3/pull/475 physoffset = (mmpte.u.Trans.PageFrameNumber & (( 1 << 33 ) - 1 ) ) << 12 @@ -1102,7 +1102,7 @@ def get_available_pages(self) -> List: if vacb_obj.SharedCacheMap == self.vol.offset: self.save_vacb(vacb_obj, vacb_list) - # If the file is larger than 1 MB, a seperate VACB index array needs to be allocated. + # If the file is larger than 1 MB, a separate VACB index array needs to be allocated. # This is based on how many 256 KB blocks would be required for the size of the file. # This newly allocated VACB index array is found through the Vacbs member of SHARED_CACHE_MAP. vacb_obj = self.Vacbs diff --git a/volatility3/framework/symbols/windows/pdbconv.py b/volatility3/framework/symbols/windows/pdbconv.py index f0b97f9eed..da8254ffd9 100644 --- a/volatility3/framework/symbols/windows/pdbconv.py +++ b/volatility3/framework/symbols/windows/pdbconv.py @@ -18,7 +18,7 @@ vollog = logging.getLogger(__name__) -primatives = { +primitives = { 0x03: ("void", { "endian": "little", "kind": "void", @@ -584,7 +584,7 @@ def get_json(self): def get_type_from_index(self, index: int) -> Union[List[Any], Dict[str, Any]]: """Takes a type index and returns appropriate dictionary.""" if index < 0x1000: - base_name, base = primatives[index & 0xff] + base_name, base = primitives[index & 0xff] self.bases[base_name] = base result: Union[List[Dict[str, Any]], Dict[str, Any]] = {"kind": "base", "name": base_name} indirection = (index & 0xf00) @@ -644,7 +644,7 @@ def get_size_from_index(self, index: int) -> int: if (index & 0xf00): _, base = indirections[index & 0xf00] else: - _, base = primatives[index & 0xff] + _, base = primitives[index & 0xff] result = base['size'] else: leaf_type, name, value = self.types[index - 0x1000] diff --git a/volatility3/framework/symbols/windows/pdbutil.py b/volatility3/framework/symbols/windows/pdbutil.py index 4e90e4834b..4c3788a567 100644 --- a/volatility3/framework/symbols/windows/pdbutil.py +++ 
b/volatility3/framework/symbols/windows/pdbutil.py @@ -68,7 +68,7 @@ def load_windows_symbol_table(cls, symbol_table_class: str, config_path: str = 'pdbutility', progress_callback: constants.ProgressCallback = None): - """Loads (downlading if necessary) a windows symbol table""" + """Loads (downloading if necessary) a windows symbol table""" filter_string = os.path.join(pdb_name.strip('\x00'), guid.upper() + "-" + str(age)) diff --git a/volatility3/schemas/__init__.py b/volatility3/schemas/__init__.py index 5fbed7e099..65329a4f5f 100644 --- a/volatility3/schemas/__init__.py +++ b/volatility3/schemas/__init__.py @@ -53,7 +53,7 @@ def validate(input: Dict[str, Any], use_cache: bool = True) -> bool: def create_json_hash(input: Dict[str, Any], schema: Dict[str, Any]) -> str: """Constructs the hash of the input and schema to create a unique - indentifier for a particular JSON file.""" + identifier for a particular JSON file.""" return hashlib.sha1(bytes(json.dumps((input, schema), sort_keys = True), 'utf-8')).hexdigest() From 2d3d133c0e53eb8d3d203f4219cc3c0c4c7d19bd Mon Sep 17 00:00:00 2001 From: Stefano Date: Sun, 24 Oct 2021 19:02:14 +0200 Subject: [PATCH 282/294] issue-449 - add --physical flag --- .../framework/plugins/windows/psscan.py | 30 ++++++++++++++++--- 1 file changed, 26 insertions(+), 4 deletions(-) diff --git a/volatility3/framework/plugins/windows/psscan.py b/volatility3/framework/plugins/windows/psscan.py index 184ef5103a..1e0f20be38 100644 --- a/volatility3/framework/plugins/windows/psscan.py +++ b/volatility3/framework/plugins/windows/psscan.py @@ -6,7 +6,7 @@ import logging from typing import Iterable, Callable, Tuple -from volatility3.framework import renderers, interfaces +from volatility3.framework import renderers, interfaces, layers, exceptions from volatility3.framework.configuration import requirements from volatility3.framework.renderers import format_hints from volatility3.framework.symbols import intermed @@ -24,6 +24,7 @@ class 
PsScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): _required_framework_version = (2, 0, 0) _version = (1, 1, 0) + PHYSICAL_DEFAULT = False @classmethod def get_requirements(cls): @@ -39,6 +40,10 @@ def get_requirements(cls): requirements.BooleanRequirement(name = 'dump', description = "Extract listed processes", default = False, + optional = True), + requirements.BooleanRequirement(name = 'physical', + description = "Display physical offset instead of virtual", + default = cls.PHYSICAL_DEFAULT, optional = True) ] @@ -148,6 +153,11 @@ def _generator(self): "windows", "pe", class_types = pe.class_types) + memory = self.context.layers[kernel.layer_name] + + if not isinstance(memory, layers.intel.Intel): + raise TypeError("Primary layer is not an intel layer") + for proc in self.scan_processes(self.context, kernel.layer_name, kernel.symbol_table_name, @@ -169,11 +179,22 @@ def _generator(self): if file_handle: file_output = file_handle.preferred_filename - yield (0, (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, + if not self.config.get('physical', self.PHYSICAL_DEFAULT): + offset = proc.vol.offset + else: + (_, _, offset, _, _) = list(memory.mapping(offset = proc.vol.offset, length = 0))[0] + + try: + + yield (0, (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, - errors = 'replace'), format_hints.Hex(proc.vol.offset), + errors = 'replace'), format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) + + except exceptions.InvalidAddressException: + vollog.info(f"Invalid process found at address: {proc.vol.offset:x}. 
Skipping") + def generate_timeline(self): for row in self._generator(): @@ -183,7 +204,8 @@ def generate_timeline(self): yield (description, timeliner.TimeLinerType.MODIFIED, row_data[9]) def run(self): - return renderers.TreeGrid([("PID", int), ("PPID", int), ("ImageFileName", str), ("Offset", format_hints.Hex), + return renderers.TreeGrid([("PID", int), ("PPID", int), ("ImageFileName", str), ("Offset" + + "(V)" if (not self.config["physical"]) else "(P)", format_hints.Hex), ("Threads", int), ("Handles", int), ("SessionId", int), ("Wow64", bool), ("CreateTime", datetime.datetime), ("ExitTime", datetime.datetime), ("File output", str)], self._generator()) From 6979a6d2eefa75ffda28689ef7630e69161135c8 Mon Sep 17 00:00:00 2001 From: Stefano Date: Mon, 25 Oct 2021 17:56:33 +0200 Subject: [PATCH 283/294] psscan: Remove usage of PHYSICAL_DEFAULT variable --- .../framework/plugins/windows/psscan.py | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/volatility3/framework/plugins/windows/psscan.py b/volatility3/framework/plugins/windows/psscan.py index 1e0f20be38..237c0edcd2 100644 --- a/volatility3/framework/plugins/windows/psscan.py +++ b/volatility3/framework/plugins/windows/psscan.py @@ -24,7 +24,6 @@ class PsScan(interfaces.plugins.PluginInterface, timeliner.TimeLinerInterface): _required_framework_version = (2, 0, 0) _version = (1, 1, 0) - PHYSICAL_DEFAULT = False @classmethod def get_requirements(cls): @@ -43,7 +42,7 @@ def get_requirements(cls): optional = True), requirements.BooleanRequirement(name = 'physical', description = "Display physical offset instead of virtual", - default = cls.PHYSICAL_DEFAULT, + default = False, optional = True) ] @@ -153,8 +152,7 @@ def _generator(self): "windows", "pe", class_types = pe.class_types) - memory = self.context.layers[kernel.layer_name] - + memory = self.context.layers[kernel.layer_name] if not isinstance(memory, layers.intel.Intel): raise TypeError("Primary layer is not an intel layer") 
@@ -179,23 +177,20 @@ def _generator(self): if file_handle: file_output = file_handle.preferred_filename - if not self.config.get('physical', self.PHYSICAL_DEFAULT): + if not self.config['physical']: offset = proc.vol.offset else: (_, _, offset, _, _) = list(memory.mapping(offset = proc.vol.offset, length = 0))[0] try: - yield (0, (proc.UniqueProcessId, proc.InheritedFromUniqueProcessId, proc.ImageFileName.cast("string", max_length = proc.ImageFileName.vol.count, errors = 'replace'), format_hints.Hex(offset), proc.ActiveThreads, proc.get_handle_count(), proc.get_session_id(), proc.get_is_wow64(), proc.get_create_time(), proc.get_exit_time(), file_output)) - except exceptions.InvalidAddressException: vollog.info(f"Invalid process found at address: {proc.vol.offset:x}. Skipping") - def generate_timeline(self): for row in self._generator(): _depth, row_data = row @@ -204,8 +199,9 @@ def generate_timeline(self): yield (description, timeliner.TimeLinerType.MODIFIED, row_data[9]) def run(self): - return renderers.TreeGrid([("PID", int), ("PPID", int), ("ImageFileName", str), ("Offset" + - "(V)" if (not self.config["physical"]) else "(P)", format_hints.Hex), - ("Threads", int), ("Handles", int), ("SessionId", int), ("Wow64", bool), + offsettype = "(V)" if not self.config['physical'] else "(P)" + return renderers.TreeGrid([("PID", int), ("PPID", int), ("ImageFileName", str), + (f"Offset{offsettype}", format_hints.Hex), ("Threads", int), + ("Handles", int), ("SessionId", int), ("Wow64", bool), ("CreateTime", datetime.datetime), ("ExitTime", datetime.datetime), ("File output", str)], self._generator()) From 20ca165d0f1fc49c8d9011c7e3000e07483d6665 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 27 Oct 2021 23:19:30 +0100 Subject: [PATCH 284/294] Windows: Lower the vadlimit on MHL's advice --- volatility3/framework/plugins/windows/vadinfo.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/plugins/windows/vadinfo.py 
b/volatility3/framework/plugins/windows/vadinfo.py index 2b5bc8701f..9fa1458d17 100644 --- a/volatility3/framework/plugins/windows/vadinfo.py +++ b/volatility3/framework/plugins/windows/vadinfo.py @@ -35,7 +35,7 @@ class VadInfo(interfaces.plugins.PluginInterface): _required_framework_version = (2, 0, 0) _version = (2, 0, 0) - MAXSIZE_DEFAULT = 100 * 1024 * 1024 * 1024 # 100 Gb + MAXSIZE_DEFAULT = 1024 * 1024 * 1024 # 1 Gb def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) From 3cb2bf9fac15fabeaebaa7b40d417756d49ee13f Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Nov 2021 13:41:09 +0000 Subject: [PATCH 285/294] Core: Fix up python 3.10 support Fixes #589 --- volatility3/framework/objects/__init__.py | 3 ++- volatility3/framework/renderers/__init__.py | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/volatility3/framework/objects/__init__.py b/volatility3/framework/objects/__init__.py index fb8975ebb7..6ee24407f6 100644 --- a/volatility3/framework/objects/__init__.py +++ b/volatility3/framework/objects/__init__.py @@ -3,6 +3,7 @@ # import collections +import collections.abc import logging import struct from typing import Any, ClassVar, Dict, List, Iterable, Optional, Tuple, Type, Union as TUnion, overload @@ -633,7 +634,7 @@ def __len__(self) -> int: return self.vol.count def write(self, value) -> None: - if not isinstance(value, collections.Sequence): + if not isinstance(value, collections.abc.Sequence): raise TypeError("Only Sequences can be written to arrays") self.count = len(value) for index in range(len(value)): diff --git a/volatility3/framework/renderers/__init__.py b/volatility3/framework/renderers/__init__.py index 23e686a07d..de1b14c915 100644 --- a/volatility3/framework/renderers/__init__.py +++ b/volatility3/framework/renderers/__init__.py @@ -7,6 +7,7 @@ or file or graphical output """ import collections +import collections.abc import datetime import logging from typing import Any, Callable, Iterable, 
List, Optional, Tuple, TypeVar, Union @@ -70,7 +71,7 @@ def __len__(self) -> int: def _validate_values(self, values: List[interfaces.renderers.BaseTypes]) -> None: """A function for raising exceptions if a given set of values is invalid according to the column properties.""" - if not (isinstance(values, collections.Sequence) and len(values) == len(self._treegrid.columns)): + if not (isinstance(values, collections.abc.Sequence) and len(values) == len(self._treegrid.columns)): raise TypeError( "Values must be a list of objects made up of simple types and number the same as the columns") for index in range(len(self._treegrid.columns)): From 8d48faa0d99d537153eb82d7c07c4b172296ea94 Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Nov 2021 13:52:25 +0000 Subject: [PATCH 286/294] Layers: Close file on layer destruction to prevent ResourceWarning --- volatility3/framework/layers/physical.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/volatility3/framework/layers/physical.py b/volatility3/framework/layers/physical.py index 228815fea6..73d46b2119 100644 --- a/volatility3/framework/layers/physical.py +++ b/volatility3/framework/layers/physical.py @@ -189,6 +189,9 @@ def destroy(self) -> None: """Closes the file handle.""" self._file.close() + def __del__(self) -> None: + self.destroy() + @classmethod def get_requirements(cls) -> List[interfaces.configuration.RequirementInterface]: return [requirements.StringRequirement(name = 'location', optional = False)] From f821ac60721047dd7b8832724b28e1383903199c Mon Sep 17 00:00:00 2001 From: Mike Auty Date: Wed, 17 Nov 2021 20:42:11 +0000 Subject: [PATCH 287/294] Layers: Comment out unused code in AVML --- volatility3/framework/layers/avml.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/volatility3/framework/layers/avml.py b/volatility3/framework/layers/avml.py index 83c43fcc9e..acc4493f4d 100644 --- a/volatility3/framework/layers/avml.py +++ b/volatility3/framework/layers/avml.py @@ -93,7 +93,7 @@ 
 def _read_snappy_frames(self, data: bytes, expected_length: int) -> Tuple[
             elif frame_type in [0x00, 0x01]:  # CRC + (Un)compressed data
                 mapped_start = offset + frame_header_len
-                frame_crc = data[mapped_start: mapped_start + crc_len]
+                # frame_crc = data[mapped_start: mapped_start + crc_len]
                 frame_data = data[mapped_start + crc_len: mapped_start + frame_size]
                 if frame_type == 0x00:  # Compressed data

From 680349016e6d2efe420fa3968ec17beb37617298 Mon Sep 17 00:00:00 2001
From: a5hlynx
Date: Sat, 4 Dec 2021 00:29:00 +0900
Subject: [PATCH 288/294] update version to 2.0.0

---
 volatility3/framework/plugins/windows/crashinfo.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/volatility3/framework/plugins/windows/crashinfo.py b/volatility3/framework/plugins/windows/crashinfo.py
index 4c74582d45..d66d86cd13 100644
--- a/volatility3/framework/plugins/windows/crashinfo.py
+++ b/volatility3/framework/plugins/windows/crashinfo.py
@@ -14,7 +14,7 @@ class Crashinfo(interfaces.plugins.PluginInterface):
-    _required_framework_version = (1, 1, 0)
+    _required_framework_version = (2, 0, 0)

     @classmethod
     def get_requirements(cls):

From 96820e280495d6abfc96c6ca6cad981adc9b33d1 Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Fri, 3 Dec 2021 16:01:57 +0000
Subject: [PATCH 289/294] Plugins: Bump skeleton_key_check to 2.0.0

---
 volatility3/framework/plugins/windows/skeleton_key_check.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/volatility3/framework/plugins/windows/skeleton_key_check.py b/volatility3/framework/plugins/windows/skeleton_key_check.py
index 90379be869..cd4a5baecf 100644
--- a/volatility3/framework/plugins/windows/skeleton_key_check.py
+++ b/volatility3/framework/plugins/windows/skeleton_key_check.py
@@ -41,7 +41,7 @@ class Skeleton_Key_Check(interfaces.plugins.PluginInterface):
     """ Looks for signs of Skeleton Key malware """

-    _required_framework_version = (1, 2, 0)
+    _required_framework_version = (2, 0, 0)

     @classmethod
     def get_requirements(cls):

From 832bdc2ab19d795b7c4abe519ef4eea9b909e581 Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Wed, 8 Dec 2021 23:34:23 +0000
Subject: [PATCH 290/294] Linux: Fix long standing typo (thanks to @gcmoreira)

---
 volatility3/framework/symbols/linux/extensions/__init__.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/volatility3/framework/symbols/linux/extensions/__init__.py b/volatility3/framework/symbols/linux/extensions/__init__.py
index fbc02399f9..0edd60608f 100644
--- a/volatility3/framework/symbols/linux/extensions/__init__.py
+++ b/volatility3/framework/symbols/linux/extensions/__init__.py
@@ -108,7 +108,7 @@ def get_symbols(self):
                 "linux",
                 "elf",
                 native_types = None,
-                class_types = extensions.elf.class_types)
+                class_types = elf.class_types)

         syms = self._context.object(
             self.get_symbol_table().name + constants.BANG + "array",

From dd13b427e011a84f94ebc571608a5375f5f9ccea Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Fri, 31 Dec 2021 00:51:21 +0000
Subject: [PATCH 291/294] Volshell: Update the linux pslist requirement

---
 volatility3/cli/volshell/linux.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/volatility3/cli/volshell/linux.py b/volatility3/cli/volshell/linux.py
index 850e3111cf..4338ae06f0 100644
--- a/volatility3/cli/volshell/linux.py
+++ b/volatility3/cli/volshell/linux.py
@@ -17,7 +17,7 @@ class Volshell(generic.Volshell):
     def get_requirements(cls):
         return (super().get_requirements() + [
             requirements.SymbolTableRequirement(name = "vmlinux", description = "Linux kernel symbols"),
-            requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (1, 0, 0)),
+            requirements.PluginRequirement(name = 'pslist', plugin = pslist.PsList, version = (2, 0, 0)),
             requirements.IntRequirement(name = 'pid', description = "Process ID", optional = True)
         ])

From e91e65d64b82968d466c879b1ac29c7f1d701e13 Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Thu, 30 Dec 2021 01:47:45 +0000
Subject: [PATCH 292/294] Renderers: Make appending rows massively more
 efficient

Previously we were generating the list of children (most likely for the
root node) for every single append statement, during which we were
recalculating the length of the list, twice. In plugins that output a
lot of rows this would add an enormous overhead (one that likely grew
with the length of the output).

Without this overhead, the time taken by the TreeGrid._append method
went from 1230.0s to 2.4s.
---
 volatility3/framework/renderers/__init__.py | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/volatility3/framework/renderers/__init__.py b/volatility3/framework/renderers/__init__.py
index de1b14c915..5773861d91 100644
--- a/volatility3/framework/renderers/__init__.py
+++ b/volatility3/framework/renderers/__init__.py
@@ -272,20 +272,26 @@ def values(self, node):
     def _append(self, parent: Optional[interfaces.renderers.TreeNode], values: Any) -> TreeNode:
         """Adds a new node at the top level if parent is None, or under
         the parent node otherwise, after all other children."""
-        children = self.children(parent)
-        return self._insert(parent, len(children), values)
+        return self._insert(parent, None, values)

-    def _insert(self, parent: Optional[interfaces.renderers.TreeNode], position: int, values: Any) -> TreeNode:
+    def _insert(self, parent: Optional[interfaces.renderers.TreeNode], position: Optional[int], values: Any) -> TreeNode:
         """Inserts an element into the tree at a specific position."""
         parent_path = ""
         children = self._find_children(parent)
         if parent is not None:
             parent_path = parent.path + self.path_sep
-        newpath = parent_path + str(position)
+        if position is None:
+            newpath = parent_path + str(len(children))
+        else:
+            newpath = parent_path + str(position)
+            for node, _ in children[position:]:
+                self.visit(node, lambda child, _: child.path_changed(newpath, True), None)
+
         tree_item = TreeNode(newpath, self, parent, values)
-        for node, _ in children[position:]:
-            self.visit(node, lambda child, _: child.path_changed(newpath, True), None)
-        children.insert(position, (tree_item, []))
+        if position is None:
+            children.append((tree_item, []))
+        else:
+            children.insert(position, (tree_item, []))
         return tree_item

     def is_ancestor(self, node, descendant):

From 3294d0a2c26385043af025f2d8d4c173712ef4b5 Mon Sep 17 00:00:00 2001
From: Mike Auty
Date: Wed, 12 Jan 2022 21:06:20 +0000
Subject: [PATCH 293/294] Documentation: Update README.md before release

---
 README.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 75c3c1c23c..f7d326c339 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,9 @@ technical and performance challenges associated with the original
 code base that became apparent over the previous 10 years.

 Another benefit of the rewrite is that Volatility 3 could be released under a
 custom license that was more aligned with the goals of the Volatility community,
-the Volatility Software License (VSL). See the [LICENSE](LICENSE.txt) file for more details.
+the Volatility Software License (VSL). See the
+[LICENSE](https://www.volatilityfoundation.org/license/vsl-v1.0) file for
+more details.

 ## Requirements
@@ -102,7 +104,7 @@ The latest generated copy of the documentation can be found at:

Date: Wed, 12 Jan 2022 21:13:05 +0000
Subject: [PATCH 294/294] Documentation: Update master branch to stable branch

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f7d326c339..9f9c1bbb70 100644
--- a/README.md
+++ b/README.md
@@ -41,7 +41,7 @@ pip3 install -r requirements.txt

 ## Downloading Volatility

-The latest stable version of Volatility will always be the master branch of the GitHub repository. You can get the latest version of the code using the following command:
+The latest stable version of Volatility will always be the stable branch of the GitHub repository. You can get the latest version of the code using the following command:

 ```shell
 git clone https://github.com/volatilityfoundation/volatility3.git
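
The speedup in PATCH 292 above comes down to one observation: when a node is appended (rather than inserted mid-list) there are no following siblings, so the sibling re-pathing walk and the per-call rebuild of the children list can both be skipped. A minimal sketch of that idea, using an invented `MiniTree` class rather than the real `TreeGrid` API:

```python
# Hypothetical illustration of the PATCH 292 optimisation: position=None
# signals "append after all other children", so no siblings need
# renumbering.  Class and method names here are invented for the sketch;
# only the shape of the change mirrors TreeGrid._append/_insert.
from typing import Any, List, Optional, Tuple


class MiniTree:
    def __init__(self) -> None:
        # (path, value) pairs for a single (root) level of the tree
        self._children: List[Tuple[str, Any]] = []

    def _insert(self, position: Optional[int], value: Any) -> str:
        # position = None: append at the end, nothing follows the new node
        if position is None:
            path = str(len(self._children))
            self._children.append((path, value))
            return path
        path = str(position)
        self._children.insert(position, (path, value))
        # Only an explicit mid-list insert pays to renumber the shifted
        # siblings (the real code walks them with a visitor instead).
        for i in range(position + 1, len(self._children)):
            _, shifted = self._children[i]
            self._children[i] = (str(i), shifted)
        return path

    def _append(self, value: Any) -> str:
        return self._insert(None, value)


tree = MiniTree()
paths = [tree._append(v) for v in ("a", "b", "c")]
print(paths)  # -> ['0', '1', '2']
tree._insert(1, "x")
print(tree._children)  # -> [('0', 'a'), ('1', 'x'), ('2', 'b'), ('3', 'c')]
```

In the real commit the win is larger still, because the old `_append` called `self.children(parent)` (which rebuilt the child list) for every row, while the new code appends straight into the list returned by `_find_children`.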