# async_file_appender

## LogEntry & LogStreamBuffer

LogStreamBuffer is an implementation of std::stringbuf whose actual memory is managed on fixed-size memory pages; LogEntry is the bookkeeping structure for those pages themselves.

### Usage example

```c++
#include "babylon/logging/log_entry.h"

using babylon::LogStreamBuffer;
using babylon::LogEntry;

// A PageAllocator must be set before a LogStreamBuffer can be used
PageAllocator& page_allocator = ...
LogStreamBuffer buffer;
buffer.set_page_allocator(page_allocator);

// Afterwards the buffer can be reused over and over
loop:
  buffer.begin();    // each round of use starts with begin to trigger preparation
  buffer.sputn(...); // writes can then be issued; usually this is not called directly,
                     // the buffer instead serves as the underlying layer of a LogStream
  LogEntry& entry = buffer.end(); // a round of writing finishes and returns the assembled result
  ...                // a LogEntry is only one cache line in size and can be copied and moved cheaply

consumer:
  ::std::vector<struct ::iovec> iov;
  // typically a LogEntry is handed to the consumer through an asynchronous queue
  LogEntry& entry = ...
  // dump it by appending iovec structures, mainly to make it easy to feed into writev
  entry.append_to_iovec(page_allocator.page_size(), iov);
```
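
Because LogStreamBuffer behaves as a standard streambuf, formatting normally goes through a stream layered on top of it rather than raw sputn calls. Below is a minimal sketch of that pattern, continuing the snippet above; it uses a plain std::ostream (requires `<ostream>`) in place of babylon's LogStream purely for illustration.

```c++
// Sketch: drive the prepared LogStreamBuffer through a plain std::ostream
buffer.begin();
::std::ostream os {&buffer};
os << "value = " << 42;          // any operator<< based formatting works
LogEntry& entry = buffer.end();  // collect the assembled pages as usual
```
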
## FileObject

FileObject is the abstraction of the target a log is written to. It exposes a usable fd to the caller and, for targets that need rolling, handles the roll-over and the management of old files internally.

```c++
#include "babylon/logging/file_object.h"

using babylon::FileObject;

class CustomFileObject : public FileObject {
  // Core interface function; the upper layer calls it before every write to obtain the file descriptor
  // Internally it performs roll-over detection etc. and returns the descriptor that is ready to use
  // Because a roll-over may have happened, the return value is a (fd, old_fd) tuple
  // fd:
  //   >= 0: current file descriptor, through which the caller issues subsequent writes
  //   <  0: an error occurred and the file could not be opened
  // old_fd:
  //   >= 0: a file switch happened and the previous descriptor is returned
  //         usually caused by file rolling; the caller is responsible for closing it
  //         and may perform final trailing writes before doing so
  //   <  0: no file switch happened
  virtual ::std::tuple<int, int> check_and_get_file_descriptor() noexcept override {
    ...
  }
};
```
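
For reference, here is a caller-side sketch of how that (fd, old_fd) contract is typically consumed. The flush_batch helper and its error-handling policy are hypothetical and only illustrate the comments above, assuming check_and_get_file_descriptor is reachable from the writer.

```c++
#include "babylon/logging/file_object.h"

#include <sys/uio.h>  // writev
#include <unistd.h>   // close

#include <tuple>
#include <vector>

using babylon::FileObject;

// Hypothetical helper that flushes one batch of iovec entries through a FileObject
void flush_batch(FileObject& file_object, ::std::vector<struct ::iovec>& iov) {
  int fd = -1;
  int old_fd = -1;
  ::std::tie(fd, old_fd) = file_object.check_and_get_file_descriptor();
  if (old_fd >= 0) {
    // a roll-over happened: finish any trailing writes if needed, then close the old descriptor
    ::close(old_fd);
  }
  if (fd < 0) {
    // the file could not be opened: drop the batch or retry according to policy
    return;
  }
  ::writev(fd, iov.data(), static_cast<int>(iov.size()));  // write through the current descriptor
  iov.clear();
}
```
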
## RollingFileObject

A FileObject implementation for rolling files; it rolls over at time-based intervals and provides quota-based retention and cleanup.

```c++
#include "babylon/logging/rolling_file_object.h"

using babylon::RollingFileObject;

RollingFileObject object;
object.set_directory("dir");              // directory the log files live in
object.set_file_pattern("name.%Y-%m-%d"); // file name template, strftime syntax is supported;
                                          // the file rolls over whenever the time-driven name changes
object.set_max_file_number(7);            // maximum number of files to keep

// Writes actually go into files with names like
// dir/name.2024-07-18
// dir/name.2024-07-19

// Calling this during startup scans the directory and records the existing files that match
// the pattern, adding them to the tracking list so that quota-based retention keeps working
// correctly across restarts
object.scan_and_tracking_existing_files();

loop:
  // check whether the tracking list exceeds the retention quota and clean up if it does
  object.delete_expire_files();
  // some processes write many log files at the same time; calling this explicitly makes it
  // easy to run expiry for all of them in a single background thread
  ...
  sleep(1);
```
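
As the comments note, a single background thread can drive expiry for every rolling log file of a process. A minimal sketch of such a thread follows; the container of objects and the one-second interval are illustrative choices, not part of the library.

```c++
#include "babylon/logging/rolling_file_object.h"

#include <chrono>
#include <thread>
#include <vector>

using babylon::RollingFileObject;

// Hypothetical background loop that enforces the retention quota of every rolling file
void cleanup_loop(::std::vector<RollingFileObject*>& objects) {
  while (true) {
    for (auto* object : objects) {
      object->delete_expire_files();  // clean up whatever exceeds this file's quota
    }
    ::std::this_thread::sleep_for(::std::chrono::seconds(1));
  }
}
```
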
## AsyncFileAppender & AsyncLogStream

AsyncFileAppender implements the queue-based hand-off of LogEntry and eventually performs the asynchronous writes to a FileObject.
AsyncLogStream ties an AsyncFileAppender, a FileObject and a LogStreamBuffer together and plugs them into the Logger mechanism.

```c++
#include "babylon/logging/async_log_stream.h"

using babylon::AsyncFileAppender;
using babylon::AsyncLogStream;
using babylon::FileObject;
using babylon::LoggerBuilder;
using babylon::LoggerManager;
using babylon::PageAllocator;

// A PageAllocator and a FileObject need to be prepared first
PageAllocator& page_allocator = ...
FileObject& file_object = ...

AsyncFileAppender appender;
appender.set_page_allocator(page_allocator);
// set the queue capacity
appender.set_queue_capacity(65536);
appender.initialize();

// combine the AsyncFileAppender and the FileObject into an AsyncLogStream able to build a Logger
LoggerBuilder builder;
builder.set_log_stream_creator(AsyncLogStream::creator(appender, file_object));
LoggerManager::instance().set_root_builder(::std::move(builder));
LoggerManager::instance().apply();

// from this point on it takes effect behind the logging macros
BABYLON_LOG(INFO) << ...
```
# logging

## Background and rationale

Writing to the page cache involves many unpredictable kernel and device factors, so its completion time cannot be bounded. Typical server programs therefore decouple log assembly from the actual write through an asynchronous hand-off. Most standalone logging frameworks, such as [spdlog](https://github.com/gabime/spdlog) and [boost.log](https://github.com/boostorg/log), ship a built-in asynchronous mode. Another widely used framework, [glog](https://github.com/google/glog), does not include one itself but leaves extension points, and application frameworks built on it, such as [apollo](https://github.com/ApolloAuto/apollo/blob/master/cyber/logger/async_logger.h) and [brpc](https://github.com/apache/brpc/blob/master/src/butil/logging.cc), generally provide built-in asynchronous plugins.

However, today's popular implementations tend to share a few typical performance bottlenecks:
- The hand-off mechanism that decouples assembly from writing is usually built on lock-based synchronization, which degrades noticeably under heavy contention.
- Because log payloads are variable-length, the designs usually involve variable-length dynamic allocation and deallocation, and the allocation and release often happen on different threads, punching through the thread caches of the memory allocator.
- Some implementations pay too little attention to the [global lock behind localtime](../time.md), which likewise causes multi-thread contention.

![](images/logging-classic.png)

A framework worth mentioning is [NanoLog](https://github.com/PlatformLab/NanoLog), which avoids the memory problems above by aggregating thread-local caches, and further reduces the amount of data written to file by recording static format specs separately and restoring the full text on demand. That optimization restricts usage to printf-style logging and is hard to reconcile with streaming serialization (operator<<), so its applicability is limited. Where those constraints are acceptable, though, its thread-cache collection and aggregation scheme resolves the typical contention hot spots very well and delivers excellent performance.

![](images/logging-nano.png)

Setting aside the special static/dynamic separation trick, thread-cache aggregation does solve the lock contention problem, but as the number of threads grows, combined with the occasional longer device stalls seen in production, it requires significantly more thread-cache space to cope. AsyncFileAppender is therefore proposed as a solution built around a single lock-free queue plus a fixed-size lock-free memory pool. On the front end, a streambuf implemented on fixed-size memory pages collects the formatted output into a page-managed LogEntry. The LogEntry is then pushed into a central lock-free queue for asynchronous decoupling, consumed by a unified appender back end that performs the write, and finally its pages are released back to the fixed-size pool for subsequent front-end rounds to reuse.

![](images/logging-async.png)
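
To make that data flow concrete, here is a minimal conceptual sketch of the front-end / back-end split. Everything in it (the Entry type, the mutex-protected deque standing in for the lock-free queue, the produce and consume_loop functions) is illustrative only and not babylon's actual interface; the real implementation keeps the hand-off lock-free and recycles fixed-size pages rather than strings.

```c++
#include <sys/uio.h>

#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>
#include <utility>

// Illustrative stand-in for the page-managed LogEntry
struct Entry {
  ::std::string payload;
};

static ::std::mutex mutex;
static ::std::condition_variable ready;
static ::std::deque<Entry> queue;  // stand-in for the central lock-free queue

// Front end: format a record and hand it off without ever touching the file
void produce(::std::string text) {
  ::std::lock_guard<::std::mutex> lock {mutex};
  queue.push_back(Entry {::std::move(text)});
  ready.notify_one();
}

// Back end: a single appender thread drains the queue and performs all writes;
// afterwards the memory would go back to the pool for the front end to reuse
void consume_loop(int fd) {
  while (true) {
    ::std::unique_lock<::std::mutex> lock {mutex};
    ready.wait(lock, [] { return !queue.empty(); });
    Entry entry = ::std::move(queue.front());
    queue.pop_front();
    lock.unlock();
    struct ::iovec iov {&entry.payload[0], entry.payload.size()};
    ::writev(fd, &iov, 1);  // the actual write happens only on this thread
  }
}
```
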
On top of AsyncFileAppender, a separate Logger layer is designed, mainly for the following reasons:
- The standalone Logger layer adopts a tree-like hierarchy similar to [log4j](https://github.com/apache/logging-log4j2). Comparable management capabilities are relatively rare in the C++ ecosystem, and the goal is to offer a counterpart with a similar philosophy that better fits C++ memory-management practice.
- It decouples the whole AsyncFileAppender machinery from the actual logging interface and keeps that interface as clean as possible, which makes it easier to hook into whatever logging framework a production service already uses. In fact, even inside Baidu, the most common way to use AsyncFileAppender is still to integrate it as the underlying asynchronous capability of the in-house logging interface framework that is already in wide use.
- babylon itself also needs to write logs. A lightweight Logger layer lets users route those logs into their own existing logging system instead of being forced to switch wholesale to the AsyncFileAppender mechanism. For integration into mature systems this is probably the friendlier way to offer the capability, leaving users the choice that matches their own preference.

![](images/logging-logger.png)

## Documentation

- [logger](logger.md)
- [async_file_appender](async_file_appender.md)