Objc - Speech Sample

oError prints an NSError.

The other print says Siri and Dictation are disabled.

I turned Siri on and tried again. oError.localizedDescription now prints "failed to access assets".

It seems like your device does not support “on device recognition” (see macos - SFSpeechRecognizer on Monterrey error 102 - Stack Overflow).

Can you try changing requiresOnDeviceRecognition to false on line 43?
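
For reference, the relevant configuration looks roughly like this in Objective-C (a sketch assuming the sample drives SFSpeechRecognizer directly; falling back based on supportsOnDeviceRecognition is a suggestion, not what line 43 does today):

```objc
#import <Speech/Speech.h>

// Configure the live-audio request. Only require on-device recognition when
// the recognizer actually supports it; otherwise the task can fail with
// errors like the "failed to access assets" one above.
SFSpeechRecognizer *recognizer =
    [[SFSpeechRecognizer alloc] initWithLocale:[NSLocale localeWithLocaleIdentifier:@"en-US"]];
SFSpeechAudioBufferRecognitionRequest *request =
    [[SFSpeechAudioBufferRecognitionRequest alloc] init];
request.shouldReportPartialResults = YES;

if (@available(iOS 13.0, *)) {
    // Fall back to server-based recognition when on-device assets are unavailable.
    request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition;
}
```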

If all this works, we could improve the sample with parameters to control the max recognition time and toggle on-device recognition. We should also display any error reported by the speech recognizer :slight_smile:
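
Something along these lines, for example (a sketch with hypothetical parameter and helper names; it assumes the project keeps the recognizer around as a property and feeds audio-engine buffers into the request elsewhere):

```objc
// Hypothetical options: stop listening after maxRecognitionTime seconds and
// surface any NSError on screen instead of failing silently.
- (void)startListeningFor:(NSTimeInterval)maxRecognitionTime
                 onDevice:(BOOL)useOnDeviceRecognition
{
    SFSpeechAudioBufferRecognitionRequest *request =
        [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    request.shouldReportPartialResults = YES;
    request.requiresOnDeviceRecognition = useOnDeviceRecognition; // iOS 13+

    self.task = [self.recognizer recognitionTaskWithRequest:request
        resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
            if (error != nil) {
                [self showMessage:error.localizedDescription];               // hypothetical helper
                return;
            }
            [self handleCommand:result.bestTranscription.formattedString];   // hypothetical helper
        }];

    // Stop accepting audio after maxRecognitionTime seconds; the task then
    // delivers its final result and ends.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                                 (int64_t)(maxRecognitionTime * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [request endAudio];
    });
}
```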

We’re making progress. It now prints Go Up when I say it, but the sprite doesn’t do anything. It also prints the other Go commands, but the sprite doesn’t respond to them either.

@dave1707 Interesting. If it does write Go Up, and not Go up, you could try updating the commands, as the comparison is likely case-sensitive. Making it case-insensitive would be an even better fix :wink:
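
For instance (a sketch; the sprite helper is hypothetical):

```objc
// Compare the recognized phrase to a command ignoring case (and stray
// whitespace), so "Go Up", "Go up" and "go up" all match.
NSString *phrase = [result.bestTranscription.formattedString
    stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];

if ([phrase caseInsensitiveCompare:@"go up"] == NSOrderedSame) {
    [self moveSpriteUp]; // hypothetical helper
}
```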

I didn’t change anything, but it’s now moving the sprite. The only difference is that I’m pausing a little between Go and the rest of the command. Does this mean that Siri needs to be on? I don’t use Siri, so I never set it up.

Edit: I turned Siri off and the code still worked.

@dave1707 I’m not sure what must be enabled exactly. Normally, the first time you launch this app, it requests the required permissions, but if you deny them, I believe you have to enable them manually in Settings. Hopefully, displaying the error message and providing some options will make it easier for anyone to try this sample.
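
For what it’s worth, both permissions can be requested and checked explicitly, roughly like this (a sketch; startListening and showMessage: are hypothetical helpers, and the Info.plist also needs NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription entries):

```objc
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

// Ask for speech-recognition and microphone permission up front and report
// the outcome, so a denied request doesn't look like a silent failure.
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (status != SFSpeechRecognizerAuthorizationStatusAuthorized) {
            // Can be re-enabled in Settings > Privacy & Security > Speech Recognition.
            [self showMessage:@"Speech recognition is not authorized."]; // hypothetical helper
            return;
        }
        [[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
            dispatch_async(dispatch_get_main_queue(), ^{
                if (granted) {
                    [self startListening];                                // hypothetical helper
                } else {
                    [self showMessage:@"Microphone access was denied."];  // hypothetical helper
                }
            });
        }];
    });
}];
```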

It required the microphone to be turned on. I don’t remember if there was anything else. I guess if you display the relevant error messages, that should help. So far it’s working great.

I’ve updated the project with the changes. However, I feel like there is still an issue if you leave the application running for a while without speaking any command, and this might not be as easy to fix as the other issues. At least the current version of the project should be easier to test and still serves as a good example for objc usage.
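
If anyone wants to experiment with that remaining issue, one approach would be to tear the current task down and start a fresh one from time to time, roughly like this (a sketch; the property and helper names are assumptions about how the project is structured):

```objc
// Restart recognition, e.g. on a timer or whenever a final result arrives,
// since a single live recognition task isn't meant to run indefinitely.
- (void)restartRecognition
{
    [self.request endAudio]; // assumed SFSpeechAudioBufferRecognitionRequest property
    [self.task cancel];      // assumed SFSpeechRecognitionTask property
    self.request = nil;
    self.task = nil;
    [self startListening];   // hypothetical helper that rebuilds the request and task
}
```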

In the current project at the top of this thread, on my iPad, there is output in the output window that says “message: ” followed by an accurate representation of what I said. But the guy on screen still shows “?”. Actually he just shows the question mark once and then stops showing anything. Edit: that only happened once. Now he shows the question mark every time.

Downloaded the project from webrepo. Not working. See screenshot. Oddly, at left you can see that it did recognize my words—that’s exactly what I said—but it had an error and didn’t get the words in the little dude’s speech balloon.

Hi @UberGoober. What happens when you use the supported commands? “Go Left”, “Go Right”, “Say Hello”, etc. For any other phrase, it’s normal that it shows “?”.
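
The idea is just a comparison against the known commands with a fallback; something like this sketch (hypothetical sprite helpers; the actual project may structure it differently):

```objc
// Map the recognized phrase to an action; anything unsupported shows "?".
NSString *normalized = [[phrase lowercaseString]
    stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];

if ([normalized isEqualToString:@"go left"]) {
    [self moveSpriteLeft];        // hypothetical helper
} else if ([normalized isEqualToString:@"go right"]) {
    [self moveSpriteRight];       // hypothetical helper
} else if ([normalized isEqualToString:@"say hello"]) {
    [self showMessage:@"Hello"];  // hypothetical helper
} else {
    [self showMessage:@"?"];      // unsupported phrase
}
```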

I feel dumb—I misunderstood the demo. I thought it was supposed to say my words back to me in the speech balloon to show it understood what I said! Now that I see my error, the program seems to work just as intended.

Ah, I’m glad it works! The demo could definitely be improved; I should have included a legend on the screen with all the supported commands. Sorry if this wasn’t clear!