How to Build Audio Recording in SwiftUI

Use AVAudioRecorder inside an @Observable class to capture mic input to an .m4a file. On iOS 17+ request permission with the new AVAudioApplication.requestRecordPermission() async API, configure the session category, and call record() / stop().
```swift
import AVFoundation

// 1. Permission (iOS 17+)
let granted = await AVAudioApplication.requestRecordPermission()

// 2. Session
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)

// 3. Record
let url = FileManager.default.temporaryDirectory
    .appendingPathComponent(UUID().uuidString + ".m4a")
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]
let recorder = try AVAudioRecorder(url: url, settings: settings)
recorder.record()   // start
// …later…
recorder.stop()     // finish
```
Full implementation
The implementation wraps AVAudioRecorder inside a Swift @Observable class so SwiftUI's observation system automatically re-renders when recording state changes. A .task modifier on the view handles the async permission request at first appearance, and a pulsing SF Symbol button doubles as both start and stop control. The recording is saved to the system temporary directory as AAC-encoded .m4a, ready to play back or upload.
```swift
import SwiftUI
import AVFoundation

// MARK: - Model

@Observable
final class AudioRecorderModel {
    var isRecording = false
    var recordingURL: URL?
    var hasPermission = false
    var errorMessage: String?

    private var recorder: AVAudioRecorder?

    // iOS 17+ async permission API
    func requestPermission() async {
        let granted = await AVAudioApplication.requestRecordPermission()
        hasPermission = granted
        if !granted { errorMessage = "Microphone access denied." }
    }

    func startRecording() {
        guard hasPermission else { errorMessage = "No microphone permission."; return }

        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord, mode: .default,
                                    options: .defaultToSpeaker)
            try session.setActive(true)
        } catch {
            errorMessage = "Session error: \(error.localizedDescription)"
            return
        }

        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("m4a")
        let settings: [String: Any] = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
        ]

        do {
            recorder = try AVAudioRecorder(url: url, settings: settings)
            recorder?.isMeteringEnabled = true
            recorder?.record()
            recordingURL = url
            isRecording = true
            errorMessage = nil
        } catch {
            errorMessage = "Recorder error: \(error.localizedDescription)"
        }
    }

    func stopRecording() {
        recorder?.stop()
        isRecording = false
        try? AVAudioSession.sharedInstance().setActive(false,
                                                       options: .notifyOthersOnDeactivation)
    }
}

// MARK: - View

struct AudioRecorderView: View {
    @State private var model = AudioRecorderModel()

    var body: some View {
        VStack(spacing: 32) {
            Spacer()

            Text(model.isRecording ? "Recording…" : "Tap to Record")
                .font(.title2.bold())
                .contentTransition(.numericText())

            Button {
                if model.isRecording {
                    model.stopRecording()
                } else {
                    model.startRecording()
                }
            } label: {
                Image(systemName: model.isRecording
                      ? "stop.circle.fill"
                      : "mic.circle.fill")
                    .font(.system(size: 80))
                    .foregroundStyle(model.isRecording ? .red : .accentColor)
                    .symbolEffect(.pulse, isActive: model.isRecording)
            }
            .accessibilityLabel(model.isRecording ? "Stop recording" : "Start recording")
            .disabled(!model.hasPermission)

            if let url = model.recordingURL, !model.isRecording {
                VStack(spacing: 6) {
                    Text("Saved:")
                        .font(.caption)
                        .foregroundStyle(.secondary)
                    Text(url.lastPathComponent)
                        .font(.caption.monospaced())
                        .foregroundStyle(.secondary)
                        .multilineTextAlignment(.center)
                }
                .padding(.horizontal)
            }

            if let error = model.errorMessage {
                Text(error)
                    .font(.caption)
                    .foregroundStyle(.red)
            }

            Spacer()
        }
        .padding()
        .task {
            await model.requestPermission()
        }
    }
}

// MARK: - Preview

#Preview {
    AudioRecorderView()
}
```
How it works
- AVAudioApplication.requestRecordPermission() (iOS 17+) — Replaces the callback-based AVAudioSession.requestRecordPermission from earlier OS versions. It returns a Bool asynchronously and is called inside .task { } so it runs once when the view appears, suspending without blocking the main actor while it waits.
- Session category .playAndRecord — Activating the session with this category enables the microphone while still allowing concurrent audio output. The .defaultToSpeaker option routes playback through the main speaker instead of the earpiece, which matters if you add inline playback later.
- AAC settings dictionary — kAudioFormatMPEG4AAC produces a small, high-quality .m4a at 44.1 kHz mono. Swap AVNumberOfChannelsKey to 2 for stereo, or lower the sample rate to 22_050 for voice-only recordings that need to be even smaller.
- isMeteringEnabled = true — Enables the internal level meter on the recorder so you can call recorder.updateMeters() on a timer and read averagePower(forChannel:) to drive a live waveform without starting a second audio tap.
- symbolEffect(.pulse, isActive:) — An iOS 17 SF Symbols animation that breathes the mic icon while recording is active, giving instant visual feedback without any hand-rolled animation state.
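As a concrete instance of the size trade-off in the AAC bullet, a voice-memo-oriented settings dictionary might look like the sketch below. The specific sample rate, bitrate, and quality values are illustrative choices, not recommendations from the original article; tune them for your own content.

```swift
import AVFoundation

// Smaller files for voice-only content: mono, 22.05 kHz, explicit ~32 kbps AAC.
// All numeric values here are illustrative assumptions.
let voiceSettings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 22_050,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRateKey: 32_000,
    AVEncoderAudioQualityKey: AVAudioQuality.medium.rawValue
]
```

Pass this dictionary to AVAudioRecorder(url:settings:) exactly as in the full implementation; nothing else changes.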
Variants
Live audio level meter
Poll the recorder's metering API on a repeating timer and map the decibel value to a
ProgressView to show a real-time level bar while recording.
```swift
// Inside AudioRecorderModel
var audioLevel: Float = 0   // 0.0 – 1.0
private var levelTimer: Timer?

func startRecording() {
    // … existing setup …
    recorder?.isMeteringEnabled = true
    recorder?.record()
    isRecording = true

    levelTimer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
        guard let self else { return }
        recorder?.updateMeters()
        // averagePower returns dB in range -160…0
        let db = recorder?.averagePower(forChannel: 0) ?? -160
        let normalized = max(0, (db + 60) / 60)   // map -60…0 dB → 0…1
        Task { @MainActor in self.audioLevel = normalized }
    }
}

func stopRecording() {
    levelTimer?.invalidate(); levelTimer = nil
    recorder?.stop()
    isRecording = false; audioLevel = 0
}

// In the View body:
ProgressView(value: model.audioLevel)
    .tint(.red)
    .animation(.linear(duration: 0.05), value: model.audioLevel)
    .accessibilityLabel("Recording level: \(Int(model.audioLevel * 100))%")
```
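The dB-to-0…1 mapping inside the timer callback is plain arithmetic, so it can be pulled out into a small pure function that is easy to unit-test in isolation. This is a sketch with a hypothetical helper name; the -60 dB silence floor is the same assumption the variant above makes, with an added upper clamp for safety.

```swift
import Foundation

/// Maps a metering value in decibels (-160…0, as returned by
/// `averagePower(forChannel:)`) to a 0…1 level,
/// treating anything at or below the floor (-60 dB) as silence.
func normalizedLevel(fromDecibels db: Float, floor: Float = -60) -> Float {
    guard db.isFinite else { return 0 }
    let level = (db - floor) / -floor   // -60 dB → 0, 0 dB → 1
    return min(max(level, 0), 1)        // clamp out-of-range inputs
}

// Silence, half scale, and full scale:
print(normalizedLevel(fromDecibels: -160))  // 0.0
print(normalizedLevel(fromDecibels: -30))   // 0.5
print(normalizedLevel(fromDecibels: 0))     // 1.0
```

Keeping this logic out of the timer closure also makes it trivial to swap in a different curve (for example a perceptual, non-linear mapping) later.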
Time-limited recording
Pass a duration to AVAudioRecorder.record(forDuration:) instead of calling the
bare record() method. The recorder stops automatically and calls your
AVAudioRecorderDelegate.audioRecorderDidFinishRecording(_:successfully:) method.
This is ideal for voice memos with a configurable maximum length — set
recorder.delegate = self on the model (after making it conform to
NSObject & AVAudioRecorderDelegate) and update isRecording = false
inside the delegate callback.
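A sketch of that delegate wiring, under the assumptions stated in the paragraph above (the class name and the 60-second cap are placeholders, not part of the original implementation):

```swift
import AVFoundation
import Observation

// Sketch of the delegate-based, time-limited variant described above.
// NSObject conformance is required for AVAudioRecorderDelegate.
@Observable
final class TimedRecorderModel: NSObject, AVAudioRecorderDelegate {
    var isRecording = false
    private var recorder: AVAudioRecorder?

    func start(url: URL, settings: [String: Any],
               maxDuration: TimeInterval = 60) throws {
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.delegate = self                    // set before recording starts
        recorder.record(forDuration: maxDuration)   // auto-stops at the limit
        self.recorder = recorder
        isRecording = true
    }

    // Called by the system when recording stops or hits its time limit.
    func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder,
                                         successfully flag: Bool) {
        isRecording = false
    }
}
```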
Common pitfalls
- Missing NSMicrophoneUsageDescription in Info.plist. Without this key the system terminates the app the first time it touches the microphone; the reason appears only in the crash log, not as a runtime warning. Add the key with a user-facing explanation string such as "Used to record voice memos."
- Using the old callback-based permission API on iOS 17+. AVAudioSession.requestRecordPermission(_:) is deprecated from iOS 17 onward. Switch to AVAudioApplication.requestRecordPermission(), which is async and integrates cleanly with Swift concurrency — no DispatchQueue.main.async wrapper needed.
- Forgetting to deactivate the session after stopping. Leaving the AVAudioSession active after recording ends prevents the system from returning audio to other apps (music, podcasts). Always call try session.setActive(false, options: .notifyOthersOnDeactivation) in stopRecording().
- Recording to a URL in the Documents directory without excluding it from iCloud backup. Large recordings in Documents are backed up by default. Either write to FileManager.default.temporaryDirectory (cleaned up by the OS) or set the URLResourceKey.isExcludedFromBackupKey attribute on the file URL once the recording has been written.
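The backup-exclusion fix in the last bullet can be sketched as a small helper. The function name is hypothetical; note that the flag can only be applied to a URL that already has a file behind it, so call it after recording finishes.

```swift
import Foundation

/// Marks a file so it is skipped by iCloud backup.
/// Must be called after the file exists; setting resource values
/// on a URL with no file behind it throws.
func excludeFromBackup(_ url: URL) throws {
    var values = URLResourceValues()
    values.isExcludedFromBackup = true
    var mutableURL = url                 // setResourceValues mutates the URL
    try mutableURL.setResourceValues(values)
}
```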
Prompt this with Claude Code
When using Soarias or Claude Code directly to implement this:
Implement audio recording in SwiftUI for iOS 17+. Use AVAudioRecorder with kAudioFormatMPEG4AAC settings. Request permission via AVAudioApplication.requestRecordPermission(). Add a live level meter using isMeteringEnabled + updateMeters(). Make it accessible (VoiceOver labels for start/stop button). Deactivate AVAudioSession properly when recording stops. Add a #Preview with a realistic demo state (isRecording = true).
In Soarias's Build phase, paste this prompt directly into the Claude Code panel after scaffolding your screen — it generates the model, view, and preview in a single pass so you can move straight to wiring the recorded URL into your data layer.
FAQ
Does this work on iOS 16?
Partially. AVAudioRecorder itself is available back to iOS 3, but the @Observable macro requires iOS 17+. If you need iOS 16 support, replace @Observable with ObservableObject + @Published, swap AVAudioApplication.requestRecordPermission() for the callback-based AVAudioSession.requestRecordPermission, and replace .symbolEffect(.pulse) with a manual withAnimation loop.
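Under those substitutions, the observable shell of the model might look like this minimal sketch (the class name is a placeholder; the recording logic itself is unchanged from the iOS 17 version):

```swift
import SwiftUI
import AVFoundation

// iOS 16 fallback: ObservableObject + @Published instead of @Observable,
// with the callback-based permission API bridged into async/await.
final class LegacyAudioRecorderModel: ObservableObject {
    @Published var isRecording = false
    @Published var hasPermission = false

    func requestPermission() async {
        let granted = await withCheckedContinuation {
            (continuation: CheckedContinuation<Bool, Never>) in
            AVAudioSession.sharedInstance().requestRecordPermission { ok in
                continuation.resume(returning: ok)
            }
        }
        await MainActor.run { self.hasPermission = granted }
    }
}

// The view then holds the model as @StateObject instead of @State:
// @StateObject private var model = LegacyAudioRecorderModel()
```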
How do I play back the recorded file?
Pass the saved URL to AVAudioPlayer(contentsOf:) inside a similar @Observable model, activate the session with category .playback, and call player.play(). If you want inline waveform scrubbing in SwiftUI, consider wrapping AVPlayer in a VideoPlayer-style UIViewControllerRepresentable, or use the AudioKit or DSWaveformImage packages for rendered waveform thumbnails.
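A minimal playback counterpart following that description might look like this sketch (error handling trimmed, class name hypothetical):

```swift
import AVFoundation
import Observation

// Minimal playback model mirroring the recorder above (sketch only).
@Observable
final class AudioPlayerModel {
    var isPlaying = false
    private var player: AVAudioPlayer?

    func play(url: URL) throws {
        // Switch the shared session from recording to playback.
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        let player = try AVAudioPlayer(contentsOf: url)
        player.play()
        self.player = player
        isPlaying = true
    }

    func stop() {
        player?.stop()
        isPlaying = false
    }
}
```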
What's the UIKit equivalent?
The underlying AVAudioRecorder API is identical — there's no SwiftUI-specific
wrapper. In UIKit you'd typically hold the recorder on a UIViewController,
implement AVAudioRecorderDelegate directly on it, and update UI in the
delegate callbacks. The SwiftUI approach above is strictly cleaner: the
@Observable model acts as the delegate proxy and publishes state changes that
the view layer observes automatically.
Last reviewed: 2026-05-11 by the Soarias team.