refactor: restructure audio handling and enhance type definitions #5951

Open · wants to merge 1 commit into base: main
91 changes: 57 additions & 34 deletions app/components/markdown.tsx
@@ -71,6 +71,51 @@ export function Mermaid(props: { code: string }) {
);
}

function CodeActions({ onCopy }: { onCopy: () => void }) {
return <span className="copy-code-button" onClick={onCopy}></span>;
}
Comment on lines +74 to +76
Contributor

🛠️ Refactor suggestion

Enhance accessibility and user feedback for copy functionality

The copy button implementation needs improvements in accessibility and user feedback:

 function CodeActions({ onCopy }: { onCopy: () => void }) {
-  return <span className="copy-code-button" onClick={onCopy}></span>;
+  return (
+    <button
+      className="copy-code-button"
+      onClick={onCopy}
+      aria-label="Copy code to clipboard"
+      title="Copy code"
+      role="button"
+      tabIndex={0}
+    >
+      <span className="sr-only">Copy code</span>
+    </button>
+  );
 }

function CodePreview({
mermaidCode,
htmlCode,
enableArtifacts,
previewRef,
height,
}: {
mermaidCode: string;
htmlCode: string;
enableArtifacts: boolean;
previewRef: RefObject<HTMLPreviewHander>;
height: number;
}) {
return (
<>
{mermaidCode && <Mermaid code={mermaidCode} key={mermaidCode} />}
{htmlCode && enableArtifacts && (
<FullScreen className="no-dark html" right={70}>
<ArtifactsShareButton
style={{ position: "absolute", right: 20, top: 10 }}
getCode={() => htmlCode}
/>
<IconButton
style={{ position: "absolute", right: 120, top: 10 }}
bordered
icon={<ReloadButtonIcon />}
shadow
onClick={() => previewRef.current?.reload()}
/>
<HTMLPreview
ref={previewRef}
code={htmlCode}
autoHeight={!document.fullscreenElement}
height={!document.fullscreenElement ? 600 : height}
/>
</FullScreen>
)}
</>
);
}
Comment on lines +78 to +117
Contributor

🛠️ Refactor suggestion

Add error boundaries and improve component resilience

The component handles complex rendering scenarios but lacks proper error handling and loading states.

1. Wrap Mermaid rendering in an error boundary:
+class MermaidErrorBoundary extends React.Component<{children: React.ReactNode}> {
+  state = { hasError: false };
+  static getDerivedStateFromError() {
+    return { hasError: true };
+  }
+  render() {
+    if (this.state.hasError) {
+      return <div className="error-message">Failed to render diagram</div>;
+    }
+    return this.props.children;
+  }
+}

 function CodePreview({ ... }) {
   return (
     <>
-      {mermaidCode && <Mermaid code={mermaidCode} key={mermaidCode} />}
+      {mermaidCode && (
+        <MermaidErrorBoundary>
+          <Mermaid code={mermaidCode} key={mermaidCode} />
+        </MermaidErrorBoundary>
+      )}
2. Extract magic numbers to constants:
+const PREVIEW_CONSTANTS = {
+  FULLSCREEN_OFFSET: 70,
+  SHARE_BUTTON_OFFSET: 20,
+  RELOAD_BUTTON_OFFSET: 120,
+} as const;

 function CodePreview({ ... }) {
   return (
     <>
       {htmlCode && enableArtifacts && (
-        <FullScreen className="no-dark html" right={70}>
+        <FullScreen className="no-dark html" right={PREVIEW_CONSTANTS.FULLSCREEN_OFFSET}>

Consider splitting this component further into MermaidPreview and HtmlPreview components to better separate concerns and improve maintainability.

Committable suggestion skipped: line range outside the PR's diff.
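
A sketch of what that split could look like (the names MermaidPreview and HtmlPreviewPane are illustrative, not part of this PR; it reuses the components, refs, and props markdown.tsx already imports, and HtmlPreviewPane is so named to avoid colliding with the imported HTMLPreview):

function MermaidPreview({ code }: { code: string }) {
  // Keyed on the source so the diagram re-renders when it changes.
  if (!code) return null;
  return <Mermaid code={code} key={code} />;
}

function HtmlPreviewPane({
  htmlCode,
  enableArtifacts,
  previewRef,
  height,
}: {
  htmlCode: string;
  enableArtifacts: boolean;
  previewRef: RefObject<HTMLPreviewHander>;
  height: number;
}) {
  if (!htmlCode || !enableArtifacts) return null;
  return (
    <FullScreen className="no-dark html" right={70}>
      <ArtifactsShareButton
        style={{ position: "absolute", right: 20, top: 10 }}
        getCode={() => htmlCode}
      />
      <IconButton
        style={{ position: "absolute", right: 120, top: 10 }}
        bordered
        icon={<ReloadButtonIcon />}
        shadow
        onClick={() => previewRef.current?.reload()}
      />
      <HTMLPreview
        ref={previewRef}
        code={htmlCode}
        autoHeight={!document.fullscreenElement}
        height={!document.fullscreenElement ? 600 : height}
      />
    </FullScreen>
  );
}

CodePreview would then reduce to composing the two, which also gives each preview a natural seam for its own error boundary.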


export function PreCode(props: { children: any }) {
const ref = useRef<HTMLPreElement>(null);
const previewRef = useRef<HTMLPreviewHander>(null);
@@ -133,42 +178,20 @@ export function PreCode(props: { children: any }) {
return (
<>
<pre ref={ref}>
-        <span
-          className="copy-code-button"
-          onClick={() => {
-            if (ref.current) {
-              copyToClipboard(
-                ref.current.querySelector("code")?.innerText ?? "",
-              );
-            }
-          }}
-        ></span>
+        <CodeActions
+          onCopy={() =>
+            copyToClipboard(ref.current?.querySelector("code")?.innerText ?? "")
+          }
+        />
{props.children}
</pre>
-      {mermaidCode.length > 0 && (
-        <Mermaid code={mermaidCode} key={mermaidCode} />
-      )}
-      {htmlCode.length > 0 && enableArtifacts && (
-        <FullScreen className="no-dark html" right={70}>
-          <ArtifactsShareButton
-            style={{ position: "absolute", right: 20, top: 10 }}
-            getCode={() => htmlCode}
-          />
-          <IconButton
-            style={{ position: "absolute", right: 120, top: 10 }}
-            bordered
-            icon={<ReloadButtonIcon />}
-            shadow
-            onClick={() => previewRef.current?.reload()}
-          />
-          <HTMLPreview
-            ref={previewRef}
-            code={htmlCode}
-            autoHeight={!document.fullscreenElement}
-            height={!document.fullscreenElement ? 600 : height}
-          />
-        </FullScreen>
-      )}
+      <CodePreview
+        mermaidCode={mermaidCode}
+        htmlCode={htmlCode}
+        enableArtifacts={enableArtifacts}
+        previewRef={previewRef}
+        height={height}
+      />
</>
);
}
58 changes: 29 additions & 29 deletions app/global.d.ts
@@ -10,34 +10,34 @@ declare module "*.scss" {

declare module "*.svg";

declare interface Window {
__TAURI__?: {
writeText(text: string): Promise<void>;
invoke(command: string, payload?: Record<string, unknown>): Promise<any>;
dialog: {
save(options?: Record<string, unknown>): Promise<string | null>;
};
fs: {
writeBinaryFile(path: string, data: Uint8Array): Promise<void>;
writeTextFile(path: string, data: string): Promise<void>;
};
notification: {
requestPermission(): Promise<Permission>;
isPermissionGranted(): Promise<boolean>;
sendNotification(options: string | Options): void;
};
updater: {
checkUpdate(): Promise<UpdateResult>;
installUpdate(): Promise<void>;
onUpdaterEvent(
handler: (status: UpdateStatusResult) => void,
): Promise<UnlistenFn>;
};
http: {
fetch<T>(
url: string,
options?: Record<string, unknown>,
): Promise<Response<T>>;
// Add more specific types
interface TauriCommands {
writeText(text: string): Promise<void>;
invoke(command: string, payload?: Record<string, unknown>): Promise<any>;
}

interface TauriDialog {
save(options?: Record<string, unknown>): Promise<string | null>;
}

interface TauriFS {
writeBinaryFile(path: string, data: Uint8Array): Promise<void>;
writeTextFile(path: string, data: string): Promise<void>;
}

interface TauriNotification {
requestPermission(): Promise<Permission>;
isPermissionGranted(): Promise<boolean>;
sendNotification(options: string | Options): void;
}

declare global {
interface Window {
__TAURI__?: {
dialog: TauriDialog;
fs: TauriFS;
notification: TauriNotification;
// ... other Tauri interfaces
};
}
}
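
For a sense of what the split typings buy, a hypothetical consumer (the function and its guard are illustrative only; it uses just the members declared above):

// Sketch: exercising the refactored Window typing.
async function saveAndNotify(path: string, data: Uint8Array) {
  const tauri = window.__TAURI__;
  if (!tauri) return; // running in a plain browser build

  await tauri.fs.writeBinaryFile(path, data); // typed via TauriFS
  if (await tauri.notification.isPermissionGranted()) {
    tauri.notification.sendNotification(`Saved ${path}`); // typed via TauriNotification
  }
}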
207 changes: 26 additions & 181 deletions app/lib/audio.ts
@@ -1,200 +1,45 @@
export class AudioHandler {
private context: AudioContext;
private mergeNode: ChannelMergerNode;
class AudioAnalyzer {
private analyser: AnalyserNode;
private analyserData: Uint8Array;
public analyser: AnalyserNode;
private workletNode: AudioWorkletNode | null = null;
private stream: MediaStream | null = null;
private source: MediaStreamAudioSourceNode | null = null;
private recordBuffer: Int16Array[] = [];
private readonly sampleRate = 24000;

private nextPlayTime: number = 0;
private isPlaying: boolean = false;
private playbackQueue: AudioBufferSourceNode[] = [];
private playBuffer: Int16Array[] = [];

constructor() {
this.context = new AudioContext({ sampleRate: this.sampleRate });
// using ChannelMergerNode to get merged audio data, and then get analyser data.
this.mergeNode = new ChannelMergerNode(this.context, { numberOfInputs: 2 });
this.analyser = new AnalyserNode(this.context, { fftSize: 256 });
constructor(context: AudioContext) {
this.analyser = new AnalyserNode(context, { fftSize: 256 });
this.analyserData = new Uint8Array(this.analyser.frequencyBinCount);
this.mergeNode.connect(this.analyser);
}

getByteFrequencyData() {
this.analyser.getByteFrequencyData(this.analyserData);
return this.analyserData;
}

async initialize() {
await this.context.audioWorklet.addModule("/audio-processor.js");
}

async startRecording(onChunk: (chunk: Uint8Array) => void) {
try {
if (!this.workletNode) {
await this.initialize();
}

this.stream = await navigator.mediaDevices.getUserMedia({
audio: {
channelCount: 1,
sampleRate: this.sampleRate,
echoCancellation: true,
noiseSuppression: true,
},
});

await this.context.resume();
this.source = this.context.createMediaStreamSource(this.stream);
this.workletNode = new AudioWorkletNode(
this.context,
"audio-recorder-processor",
);

this.workletNode.port.onmessage = (event) => {
if (event.data.eventType === "audio") {
const float32Data = event.data.audioData;
const int16Data = new Int16Array(float32Data.length);

for (let i = 0; i < float32Data.length; i++) {
const s = Math.max(-1, Math.min(1, float32Data[i]));
int16Data[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
}

const uint8Data = new Uint8Array(int16Data.buffer);
onChunk(uint8Data);
// save recordBuffer
// @ts-ignore
this.recordBuffer.push.apply(this.recordBuffer, int16Data);
}
};

this.source.connect(this.workletNode);
this.source.connect(this.mergeNode, 0, 0);
this.workletNode.connect(this.context.destination);

this.workletNode.port.postMessage({ command: "START_RECORDING" });
} catch (error) {
console.error("Error starting recording:", error);
throw error;
}
}

stopRecording() {
if (!this.workletNode || !this.source || !this.stream) {
throw new Error("Recording not started");
}

this.workletNode.port.postMessage({ command: "STOP_RECORDING" });

this.workletNode.disconnect();
this.source.disconnect();
this.stream.getTracks().forEach((track) => track.stop());
}
startStreamingPlayback() {
this.isPlaying = true;
this.nextPlayTime = this.context.currentTime;
}

stopStreamingPlayback() {
this.isPlaying = false;
this.playbackQueue.forEach((source) => source.stop());
this.playbackQueue = [];
this.playBuffer = [];
getNode() {
return this.analyser;
}
}

playChunk(chunk: Uint8Array) {
if (!this.isPlaying) return;

const int16Data = new Int16Array(chunk.buffer);
// @ts-ignore
this.playBuffer.push.apply(this.playBuffer, int16Data); // save playBuffer

const float32Data = new Float32Array(int16Data.length);
for (let i = 0; i < int16Data.length; i++) {
float32Data[i] = int16Data[i] / (int16Data[i] < 0 ? 0x8000 : 0x7fff);
}

const audioBuffer = this.context.createBuffer(
1,
float32Data.length,
this.sampleRate,
);
audioBuffer.getChannelData(0).set(float32Data);

const source = this.context.createBufferSource();
source.buffer = audioBuffer;
source.connect(this.context.destination);
source.connect(this.mergeNode, 0, 1);

const chunkDuration = audioBuffer.length / this.sampleRate;
class AudioPlayback {
private nextPlayTime: number = 0;
private isPlaying: boolean = false;
private playbackQueue: AudioBufferSourceNode[] = [];
private playBuffer: Int16Array[] = [];

source.start(this.nextPlayTime);
// Add playback related methods
}
Comment on lines +20 to +27
Contributor

💡 Codebase verification

Memory leak risk confirmed - AudioBufferSourceNode cleanup implementation missing

The AudioPlayback class only contains the declaration of the playbackQueue array but lacks any implementation of playback-related methods, including cleanup mechanisms. This confirms the original concern about potential memory leaks, as there's no visible handling of AudioBufferSourceNode references after playback.

  • The playbackQueue array is declared but no methods are implemented to manage the lifecycle of the AudioBufferSourceNodes
  • The comment // Add playback related methods suggests this is incomplete implementation
  • No cleanup logic exists to handle the AudioBufferSourceNodes after they finish playing
🔗 Analysis chain

Assess concurrency and memory usage
The playbackQueue array references AudioBufferSourceNodes. Consider cleaning up references to avoid potential memory leaks when playback is complete. Marking them for garbage collection or removing them from the array might be necessary.


Let me search for the implementation of playback-related methods to verify how the AudioBufferSourceNodes are handled.


Let me try a different approach to find the implementation details.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Searching for usage references to 'playbackQueue' to confirm these nodes are handled properly at playback end.
rg -A 5 'playbackQueue'

Length of output: 259


Script:

#!/bin/bash
# Search for methods that handle playback and cleanup in the AudioPlayback class
ast-grep --pattern 'class AudioPlayback {
  $$$
  $_($$) {
    $$$
  }
}'

Length of output: 75


Script:

#!/bin/bash
# Get the complete content of the audio.ts file to analyze the implementation
cat app/lib/audio.ts

Length of output: 1217
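
To make the reviewer's point concrete, a sketch of the missing methods with cleanup wired in (the constructor signature and the 24000 Hz default are assumptions, not code from this PR; the onended bookkeeping mirrors the pre-refactor code shown below):

class AudioPlayback {
  private nextPlayTime = 0;
  private isPlaying = false;
  private playbackQueue: AudioBufferSourceNode[] = [];

  constructor(
    private context: AudioContext,
    private sampleRate = 24000,
  ) {}

  start() {
    this.isPlaying = true;
    this.nextPlayTime = this.context.currentTime;
  }

  playChunk(samples: Float32Array, destination: AudioNode) {
    if (!this.isPlaying) return;

    const buffer = this.context.createBuffer(1, samples.length, this.sampleRate);
    buffer.getChannelData(0).set(samples);

    const source = this.context.createBufferSource();
    source.buffer = buffer;
    source.connect(destination);

    // Drop the reference once the node finishes so it can be collected.
    source.onended = () => {
      const i = this.playbackQueue.indexOf(source);
      if (i > -1) this.playbackQueue.splice(i, 1);
    };
    this.playbackQueue.push(source);

    // Schedule back-to-back chunks for gapless streaming playback.
    source.start(this.nextPlayTime);
    this.nextPlayTime = Math.max(
      this.nextPlayTime + buffer.duration,
      this.context.currentTime,
    );
  }

  stop() {
    this.isPlaying = false;
    this.playbackQueue.forEach((s) => s.stop());
    this.playbackQueue = []; // release all node references at once
  }
}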


this.playbackQueue.push(source);
source.onended = () => {
const index = this.playbackQueue.indexOf(source);
if (index > -1) {
this.playbackQueue.splice(index, 1);
}
};
export class AudioHandler {
private context: AudioContext;
private mergeNode: ChannelMergerNode;
private analyzer: AudioAnalyzer;
private playback: AudioPlayback;

this.nextPlayTime += chunkDuration;
constructor() {
this.context = new AudioContext({ sampleRate: 24000 });
this.mergeNode = new ChannelMergerNode(this.context, { numberOfInputs: 2 });
this.analyzer = new AudioAnalyzer(this.context);
this.playback = new AudioPlayback();

if (this.nextPlayTime < this.context.currentTime) {
this.nextPlayTime = this.context.currentTime;
}
this.mergeNode.connect(this.analyzer.getNode());
Contributor

💡 Codebase verification

Audio chain is incomplete - analyzer node needs destination connection

The analyzer node is receiving input from mergeNode but is not connected to the audio context's destination, which means no audio will be output. You need to connect the analyzer node to this.context.destination to complete the audio processing chain:

this.mergeNode.connect(this.analyzer.getNode());
this.analyzer.getNode().connect(this.context.destination);
🔗 Analysis chain

Verify mergeNode connectivity
You are connecting the mergeNode to the analyzer node. Ensure that either the output node or the next node in the chain is connected to the destination. If not already, it may need to connect to this.context.destination.


Let me gather more specific information about the audio chain connectivity.


Let me check one more thing to see if the analyzer node connects to the destination.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify if mergeNode or analyzerNode eventually connects to this.context.destination for audible playback or further processing
rg -A 3 'connect'

Length of output: 6681


Script:

#!/bin/bash
# Search for analyzer implementation and audio chain connections in the audio files
rg -l 'class.*Analyzer' | xargs cat

# Also check the complete implementation of the audio class
cat app/lib/audio.ts

Length of output: 2450


Script:

#!/bin/bash
# Search for any implementation where analyzer node connects to destination
ast-grep --pattern 'analyzer.getNode().connect($$$)'

# Also search for any direct connection to destination in the audio files
rg 'connect.*destination' app/lib/audio.ts

Length of output: 99
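
Applied to the constructor, the completed chain would look roughly like this (a sketch: an AnalyserNode passes its input through unchanged, so routing output through it keeps the visualization data and restores audibility; the AudioPlayback constructor argument follows the sketch above and is an assumption):

constructor() {
  this.context = new AudioContext({ sampleRate: 24000 });
  this.mergeNode = new ChannelMergerNode(this.context, { numberOfInputs: 2 });
  this.analyzer = new AudioAnalyzer(this.context);
  this.playback = new AudioPlayback(this.context);

  // mic + playback -> merger -> analyser -> speakers
  this.mergeNode.connect(this.analyzer.getNode());
  this.analyzer.getNode().connect(this.context.destination);
}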

}
_saveData(data: Int16Array, bytesPerSample = 16): Blob {
const headerLength = 44;
const numberOfChannels = 1;
const byteLength = data.buffer.byteLength;
const header = new Uint8Array(headerLength);
const view = new DataView(header.buffer);
view.setUint32(0, 1380533830, false); // RIFF identifier 'RIFF'
view.setUint32(4, 36 + byteLength, true); // file length minus RIFF identifier length and file description length
view.setUint32(8, 1463899717, false); // RIFF type 'WAVE'
view.setUint32(12, 1718449184, false); // format chunk identifier 'fmt '
view.setUint32(16, 16, true); // format chunk length
view.setUint16(20, 1, true); // sample format (raw)
view.setUint16(22, numberOfChannels, true); // channel count
view.setUint32(24, this.sampleRate, true); // sample rate
view.setUint32(28, this.sampleRate * 2, true); // byte rate (sample rate * block align; 2 bytes per mono 16-bit sample)
view.setUint16(32, numberOfChannels * 2, true); // block align (channel count * bytes per sample)
view.setUint16(34, bytesPerSample, true); // bits per sample
view.setUint32(36, 1684108385, false); // data chunk identifier 'data'
view.setUint32(40, byteLength, true); // data chunk length

// the PCM samples are appended via data.buffer, so only the 44-byte header goes through the DataView
return new Blob([view, data.buffer], { type: "audio/wav" });
}
savePlayFile() {
// @ts-ignore
return this._saveData(new Int16Array(this.playBuffer));
}
saveRecordFile(
audioStartMillis: number | undefined,
audioEndMillis: number | undefined,
) {
const startIndex = audioStartMillis
? Math.floor((audioStartMillis * this.sampleRate) / 1000)
: 0;
const endIndex = audioEndMillis
? Math.floor((audioEndMillis * this.sampleRate) / 1000)
: this.recordBuffer.length;
return this._saveData(
// @ts-ignore
new Int16Array(this.recordBuffer.slice(startIndex, endIndex)),
);
}
async close() {
this.recordBuffer = [];
this.workletNode?.disconnect();
this.source?.disconnect();
this.stream?.getTracks().forEach((track) => track.stop());
await this.context.close();
}
// ... rest of the implementation
}
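
For orientation, how a caller might drive the refactored class once the elided methods are filled in (a sketch only; the method names are carried over from the pre-refactor API and may not survive the refactor unchanged):

// Sketch: intended call pattern for AudioHandler.
async function runVoiceSession(
  send: (chunk: Uint8Array) => void,
  incoming: Uint8Array,
) {
  const handler = new AudioHandler();

  await handler.startRecording(send); // mic -> PCM16 chunks via the worklet
  handler.startStreamingPlayback();
  handler.playChunk(incoming); // remote PCM16 -> merger -> speakers

  handler.stopRecording();
  handler.stopStreamingPlayback();
  await handler.close(); // disconnect nodes and close the context
}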