- Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.
- Minimize dependencies to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly useful for applications that require immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for obtaining audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

To read more, see the official AssemblyAI blog.
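As a closing sketch, the snippets above can be combined into a single minimal console program that transcribes a remote file and prints sentiment results in one call. Reading the API key from an environment variable (here a hypothetical ASSEMBLYAI_API_KEY) is just one possible setup, not something prescribed by the SDK; everything else mirrors the calls already shown.

using AssemblyAI;
using AssemblyAI.Transcripts;

// Hypothetical environment variable name; any secure configuration source works.
var apiKey = Environment.GetEnvironmentVariable("ASSEMBLYAI_API_KEY")
             ?? throw new InvalidOperationException("Set ASSEMBLYAI_API_KEY first.");

var client = new AssemblyAIClient(apiKey);

// Transcribe a remote file and request sentiment analysis in a single call,
// combining the separate transcription and sentiment snippets above.
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine($"{result.Sentiment} ({result.Confidence}): {result.Text}");
}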