<audio> tag not working within SSML
Microsoft.CognitiveServices.Speech C# SDK not working.
I simply want to embed audio as described in the documentation (https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup).
When the following SSML is passed to SpeechSynthesizer.SpeakSsmlAsync, the service returns a cancellation error:
Connection was closed by the remote host. Error code: 1007. Error details: Unsupported voice . USP state: 3. Received audio size: 0bytes.
I am providing a URL to a valid file, and I believe the audio tag is well formed per the documentation:
<speak version='1.0' xml:lang='en-US'>
<audio src="https://www.upsilondynamics.com/Content/TwilioGreetings/notify.wav" />
<voice xml:lang='en-US' xml:gender='male' name='en-US-GuyNeural'>
Hello World, this is a test!</voice>
</speak>
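For reference, here is a minimal C# sketch of the call that produces the error above; the subscription key and region are placeholders, and the synthesizer plays to the default speaker.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        // Placeholder subscription key and region.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourRegion");

        // The first SSML example above, with the audio element outside the voice element.
        string ssml =
            "<speak version='1.0' xml:lang='en-US'>" +
            "<audio src=\"https://www.upsilondynamics.com/Content/TwilioGreetings/notify.wav\" />" +
            "<voice xml:lang='en-US' xml:gender='male' name='en-US-GuyNeural'>" +
            "Hello World, this is a test!</voice>" +
            "</speak>";

        using var synthesizer = new SpeechSynthesizer(config);
        var result = await synthesizer.SpeakSsmlAsync(ssml);

        if (result.Reason == ResultReason.Canceled)
        {
            // This is where the "Connection was closed by the remote host" details are reported.
            var cancellation = SpeechSynthesisCancellationDetails.FromResult(result);
            Console.WriteLine($"Canceled: {cancellation.Reason}, details: {cancellation.ErrorDetails}");
        }
    }
}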
Embedding it within the voice tag is preferable, but that does not work either:
<speak version='1.0' xml:lang='en-US'>
<voice xml:lang='en-US' xml:gender='male' name='en-US-GuyNeural'>
<audio src="https://www.upsilondynamics.com/Content/TwilioGreetings/notify.wav" />
Hello World, this is a test!
</voice>
</speak>
This duplicates an issue filed against the documentation repo; the feature is documented, but I cannot get it to work: https://github.com/MicrosoftDocs/azure-docs/issues/42668
Top GitHub Comments
@nitinthewiz Sure, the word boundary feature works fine with single-voice audio. You don't need to change the file type; binding a WordBoundary event is enough. You can refer to the C# sample in this repo to try this feature (samples in other languages are also available).
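A minimal sketch of binding the WordBoundary event described above, assuming placeholder credentials and the en-US-GuyNeural voice (not taken from the linked sample):

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class WordBoundaryDemo
{
    static async Task Main()
    {
        // Placeholder subscription key and region.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourRegion");
        config.SpeechSynthesisVoiceName = "en-US-GuyNeural";

        using var synthesizer = new SpeechSynthesizer(config);

        // Fires once per synthesized word; AudioOffset is reported in 100-nanosecond ticks.
        synthesizer.WordBoundary += (s, e) =>
        {
            Console.WriteLine($"Word boundary at {e.AudioOffset / 10000} ms, text offset {e.TextOffset}");
        };

        await synthesizer.SpeakTextAsync("Hello World, this is a test!");
    }
}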
I'm also not able to use audio or mstts:backgroundaudio. Any update on this?