Amazon instructions vs. AlexaPi Implementation
Hi all! (Heads up – not an issue, just a question)
I was trying to create my own version of an Alexa interface on an RPi through Python (for learning!), using this implementation as an example. I noticed that in Amazon’s instructions on interfacing with Alexa, they say to open a single HTTP/2 connection, keep it alive for the lifetime of the session, and use it to send events and receive directives.
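For reference, the connection model Amazon's newer AVS docs describe (the v20160207 API) can be sketched conceptually like this: one long-lived HTTP/2 connection carrying a half-open GET "downchannel" on which the cloud pushes directives, plus client-initiated POST streams for events. The host and paths below come from Amazon's AVS documentation; no real network client is shown, since a working implementation would need an HTTP/2 library (e.g. httpx or hyper), which the Python stdlib does not provide.

```python
# Conceptual sketch of the v20160207 AVS connection model, NOT working
# client code: it only builds the request descriptions for the two kinds
# of HTTP/2 streams that share one long-lived connection.

AVS_HOST = "avs-alexa-na.amazon.com"  # per Amazon's AVS docs (NA region)

def downchannel_request(access_token):
    """Half-open GET the client opens once and keeps open; the cloud
    pushes server-initiated directives down this stream."""
    return {
        "method": "GET",
        "path": "/v20160207/directives",
        "headers": {"authorization": f"Bearer {access_token}"},
    }

def recognize_event_request(access_token):
    """POST stream for one SpeechRecognizer.Recognize event, with the
    captured audio attached as a multipart part on the same stream."""
    return {
        "method": "POST",
        "path": "/v20160207/events",
        "headers": {
            "authorization": f"Bearer {access_token}",
            "content-type": "multipart/form-data; boundary=avs-boundary",
        },
    }
```

Both streams are multiplexed over the same connection, which is exactly the "open and maintain" requirement the docs are describing.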
However, in this AlexaPi implementation, that step seems to be omitted: every time a command is given, the audio file is sent directly to the speech recognizer endpoint 'https://access-alexa-na.amazon.com/v1/avs/speechrecognizer/recognize'.
Could someone explain why the persistent connection is skipped? Thanks!
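To illustrate the pattern in question, here is a minimal sketch (not the project's actual code) of the per-request approach AlexaPi appears to use: each utterance becomes one self-contained multipart POST to the old v1 SpeechRecognizer endpoint, and nothing stays open in between. The metadata field names follow the old v1 AVS request format; the builder below only constructs the headers and body, it does not send anything.

```python
# Sketch of a single v1-style recognize request, as AlexaPi sends one per
# utterance. Builds (headers, body) for a multipart POST; sending it is
# left out, since this is just to show the request shape.
import json
import uuid

RECOGNIZE_URL = "https://access-alexa-na.amazon.com/v1/avs/speechrecognizer/recognize"

def build_recognize_request(access_token, audio_bytes):
    """Build headers and multipart body for one v1 recognize POST."""
    boundary = uuid.uuid4().hex
    metadata = {
        "messageHeader": {},
        "messageBody": {
            "profile": "alexa-close-talk",
            "locale": "en-us",
            "format": "audio/L16; rate=16000; channels=1",
        },
    }
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="request"\r\n'
        "Content-Type: application/json; charset=UTF-8\r\n\r\n"
        f"{json.dumps(metadata)}\r\n"
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="audio"\r\n'
        "Content-Type: audio/L16; rate=16000; channels=1\r\n\r\n"
    ).encode() + audio_bytes + f"\r\n--{boundary}--\r\n".encode()
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": f"multipart/form-data; boundary={boundary}",
    }
    return headers, body
```

Under this model a fresh HTTPS request is made per command, which works for voice queries but gives the cloud no downchannel on which to push directives (timers, alarms, etc.) between requests.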
Issue Analytics
- Created 7 years ago
- Comments: 9 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@David Cai, “Works in progress” are rarely public. That’s because some people would try to build it before it’s finished and end up very frustrated.
On Thu, Jun 23, 2016 at 8:17 PM, David Cai notifications@github.com wrote:
@renekliment
I built an Objective-C version as an extension of the work here that takes both voice commands and text inputs (so I can give it commands in coffee shops). I wasn’t sure if it was appropriate to let y’all know about it here in the Python project. It works pretty well; you just need to add your own client_id, token, and secret. Would love help improving it.
https://github.com/flooie/Mecho