function openCheckout(licenseType) {
const priceId = priceIdLookup[licenseType];
- Paddle.Checkout.open({items:[{quantity: licenseType == 'team' ? 5 : 1,priceId}]});
+ Paddle.Checkout.open({discountCode:'BLACKFRIDAY24',items:[{quantity: licenseType == 'team' ? 5 : 1,priceId}]});
window.plausible('Bezel Checkout Opened')
}
diff --git a/recordkit/changelog.md b/recordkit/changelog.md
index b10a3f7..0b7d36b 100644
--- a/recordkit/changelog.md
+++ b/recordkit/changelog.md
@@ -1,5 +1,21 @@
# Changelog
+### 0.20.0
+
+- Added `RKCameraPreview` to preview a camera in SwiftUI.
+- Apple device recordings now also capture audio from the device.
+- Recorder is more robust when prepare/start/stop are called multiple times.
+- The start method on a recorder can now throw an error if something fails.
+- Improved precision of recorder start/stop.
+- Improved window filtering when listing available windows.
+- Improved resilience of all audio recorders when audio gaps occur.
+- Segmented output now has correct video dimensions for Apple device recordings.
+- Improved error messages through the SDK.
+- Improved log messages through the SDK.
+- Swift: Options to exclude windows when recording a display.
+- Swift: Improved device discovery API.
+- Swift: Ability to store/retrieve preferred microphone, camera and display.
+
### 0.19.0
- Electron: Add option to receive RecordKit logs through `recordkit.on('log', (logEvent) => { })`.
diff --git a/root/Content/blog/2024-11-27-handling-audio-capture-gaps-on-macos.md b/root/Content/blog/2024-11-27-handling-audio-capture-gaps-on-macos.md
new file mode 100644
index 0000000..33daa95
--- /dev/null
+++ b/root/Content/blog/2024-11-27-handling-audio-capture-gaps-on-macos.md
@@ -0,0 +1,98 @@
+---
+date: 2024-11-27 12:00
+authors: mathijs, tom
+tags: Engineering, RecordKit
+title: Handling audio capture gaps on macOS
+description: Audio capture on macOS can sometimes contain gaps. This article explains how to detect and handle these gaps to maintain proper audio/video sync when recording to files.
+image: images/blog/victoria-shes-IUk1S6n2s0o-unsplash.jpg
+path: 2024/handling-audio-capture-gaps-on-macos
+featured: true
+---
+
+**tl;dr: Missing audio samples during recording can cause audio/video desynchronization. Detect gaps using presentation timestamps and fill them with silent audio samples to maintain proper sync.**
+
+Audio capture presents unique challenges due to its real-time, continuous nature. While macOS provides robust APIs for capturing and recording audio to files, certain scenarios can lead to gaps in the audio stream, causing synchronization issues with simultaneously recorded video.
+
+## Understanding the Problem
+
+When capturing audio on macOS, the system occasionally fails to deliver a continuous stream of samples to applications. This issue seems to manifest, among other scenarios, during audio device switches, such as connecting AirPods mid-recording. While Core Audio logs errors to the console, the recording continues, but with potentially missing samples.
+
+The issue has been observed with both `AVCaptureSession` and `AVAudioEngine` based capture implementations, suggesting the problem exists at a lower level in the Core Audio stack rather than being specific to a particular capture API.
+
+For real-time playback, these gaps result in momentary audio glitches. However, when recording to a file, the consequences are more severe. The missing samples cause the audio track to become shorter than other recorded media, leading to desynchronization with video tracks.
+
+## Detection and Solution
+
+The key to addressing this issue lies in monitoring the presentation timestamps of incoming `CMSampleBuffer` objects. Audio devices generate samples at a fixed rate, which serves as a clock for generating presentation times and calculating sample durations. The sample timings should form a continuous sequence.
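+As an illustration of that continuity check, here is a minimal sketch using hypothetical numbers (a 48 kHz device delivering 512-frame buffers; not RecordKit's actual configuration):
+
+```swift
+// Each 512-frame buffer at 48 kHz spans 512/48000 seconds, so consecutive
+// presentation times are expected to advance by exactly that amount.
+let sampleRate = 48_000.0
+let framesPerBuffer = 512.0
+let bufferDuration = framesPerBuffer / sampleRate
+
+var expectedNext = 0.0
+// Simulated presentation times: the third buffer arrives one slot late.
+for pts in [0.0, bufferDuration, 3 * bufferDuration] {
+    if pts - expectedNext > 0 {
+        // A positive delta means samples went missing before this buffer.
+        print("Gap of \(pts - expectedNext) seconds before buffer at \(pts)")
+    }
+    expectedNext = pts + bufferDuration
+}
+```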
+
+This can be used to implement a method that fills the gaps in the audio sequence:
+
+```swift
+var nextExpectedAudioTime: CMTime?
+func processSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
+ // TODO: Probably wise to check if the sample buffer times are all valid
+
+ // Ensure to always set the next expected time
+ defer { nextExpectedAudioTime = sampleBuffer.presentationTimeStamp + sampleBuffer.duration }
+
+ guard let nextExpectedAudioTime else {
+ // Handle first sample
+ writeToFile(sampleBuffer)
+ return
+ }
+
+ let delta = sampleBuffer.presentationTimeStamp - nextExpectedAudioTime
+ if delta > .zero {
+ // Fill the gap with silence to keep audio aligned with other tracks
+ let silentAudio = generateSilentAudio(duration: delta)
+ writeToFile(silentAudio)
+ }
+
+ writeToFile(sampleBuffer)
+}
+```
+
+When a gap is detected, generating and inserting silent audio samples maintains the proper timing relationship between audio and other recorded media. While this approach still results in a brief audio dropout, it prevents the more problematic issue of audio/video desynchronization throughout the remainder of the recording.
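+For generating the silence itself, one possible shape is sketched below (hedged: the function name, the format parameter, and the omitted conversion back to `CMSampleBuffer` are illustrative, not RecordKit's actual implementation):
+
+```swift
+import AVFoundation
+
+// Sketch: size a PCM buffer to the gap's duration and leave its samples at
+// zero to produce silence. Wrapping it in a CMSampleBuffer stamped with the
+// gap's presentation time is omitted here.
+func silentPCMBuffer(duration: CMTime, format: AVAudioFormat) -> AVAudioPCMBuffer? {
+    let frameCount = AVAudioFrameCount(duration.seconds * format.sampleRate)
+    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else {
+        return nil
+    }
+    buffer.frameLength = frameCount // samples remain zeroed, i.e. silent
+    return buffer
+}
+```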
+
+## Note on Reliability
+
+This issue doesn't affect all users equally. It has been observed across different hardware setups and macOS versions, with most reports coming from macOS 14. For us it is particularly noticeable during testing when switching the audio output device.
+
+This seems to be an issue inside Core Audio that we hope will be resolved over time. In the meantime, this workaround of inserting silent audio at least maintains proper audio/video sync. It has been used successfully in production for some time in [RecordKit](https://recordkit.dev/), our SDK for building macOS recording applications.
+
+