How to use the microsoft-cognitiveservices-speech-sdk.AudioConfig.fromDefaultMicrophoneInput function in microsoft-cognitiveservices-speech-sdk

To help you get started, we’ve selected a few microsoft-cognitiveservices-speech-sdk examples based on popular ways it is used in public projects.


Example from microsoft/BotFramework-WebChat (packages/directlinespeech/src/createWebSpeechPonyfillFactory.js):
import { AudioConfig } from 'microsoft-cognitiveservices-speech-sdk';

export default function({
  // Default to the device's default microphone when no audio config is supplied.
  audioConfig = AudioConfig.fromDefaultMicrophoneInput(),
  audioContext,
  enableTelemetry,
  ponyfill = {
    AudioContext: window.AudioContext || window.webkitAudioContext
  },
  recognizer,
  textNormalization
}) {
  // Speech requires the Web Audio API; bail out gracefully if the browser lacks it.
  if (!ponyfill.AudioContext) {
    console.warn(
      'botframework-directlinespeech-sdk: This browser does not support Web Audio API. Speech support is disabled.'
    );

    return () => ({});
  }

  // ... remainder of the factory omitted ...
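
Outside of Web Chat, the typical pattern is to pass the audio config produced by AudioConfig.fromDefaultMicrophoneInput into a SpeechRecognizer. The sketch below is a minimal standalone example of that wiring; the subscription key and region are placeholder values (assumptions) that you would replace with your own Speech resource credentials, and in browsers the call requires microphone (getUserMedia) permission.

import { AudioConfig, SpeechConfig, SpeechRecognizer, ResultReason } from 'microsoft-cognitiveservices-speech-sdk';

// Placeholder credentials (assumptions): substitute your own Speech resource key and region.
const speechConfig = SpeechConfig.fromSubscription('YOUR_SUBSCRIPTION_KEY', 'YOUR_REGION');

// Capture audio from the system's default microphone.
const audioConfig = AudioConfig.fromDefaultMicrophoneInput();

const recognizer = new SpeechRecognizer(speechConfig, audioConfig);

// Run a single recognition pass and log the transcript.
recognizer.recognizeOnceAsync(
  result => {
    if (result.reason === ResultReason.RecognizedSpeech) {
      console.log(`Recognized: ${result.text}`);
    } else {
      console.log(`Recognition ended with reason ${result.reason}`);
    }
    recognizer.close();
  },
  error => {
    console.error(error);
    recognizer.close();
  }
);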