| `audiostart` | Audio capturing has started | Includes the `uri` if `recordingOptions.persist` is enabled. |
| `audioend` | Audio capturing has ended | Includes the `uri` if `recordingOptions.persist` is enabled. |
| `end` | Speech recognition service has disconnected. | This should always be the last event dispatched, including after errors. |
| `error` | Fired when a speech recognition error occurs. | You'll also receive an `error` event (with the code "aborted") when calling `.abort()`. |
| `nomatch` | Speech recognition service returns a final result with no significant recognition. | You may have non-final results recognized. This may get emitted after cancellation. |
| `result` | Speech recognition service returns a word or phrase that has been positively recognized. | On Android, continuous mode runs as a segmented session, meaning that once a final result is reached, additional partial and final results will cover a new segment separate from the previous final result. On iOS, you should expect only one final result before speech recognition stops. |
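
For illustration, here is a minimal sketch of subscribing to a few of these events. It assumes the `useSpeechRecognitionEvent` hook and the event payload fields (`results`, `isFinal`, `error`) used in recent versions of this package; treat the exact names as assumptions and verify them against the usage examples elsewhere in this README.

```ts
import { useSpeechRecognitionEvent } from "expo-speech-recognition";

function RecognitionLogger() {
  // Partial and final transcriptions arrive through "result".
  useSpeechRecognitionEvent("result", (event) => {
    console.log("transcript:", event.results[0]?.transcript, "final:", event.isFinal);
  });

  // "error" also fires (with the code "aborted") after calling .abort().
  useSpeechRecognitionEvent("error", (event) => {
    console.warn("recognition error:", event.error, event.message);
  });

  // "end" should always be the last event of a session, even after an error.
  useSpeechRecognitionEvent("end", () => {
    console.log("recognition session ended");
  });

  return null;
}
```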
If you would like to persist the recognized audio for later use, enable the `recordingOptions.persist` option when calling `start()`. With this setting enabled, the `audiostart` and `audioend` events include an `{ uri: string }` payload with the local file path of the recording.
> [!IMPORTANT]
> This feature is available on Android 13+ and iOS. Call [`supportsRecording()`](#supportsrecording-boolean) to see if it's available before using this feature.
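
As a sketch of how these pieces fit together (assuming `ExpoSpeechRecognitionModule.addListener`, `supportsRecording()`, and the `start()` options described in this README; the exact recording option shape may differ):

```ts
import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";

// Guard on platform support first (Android 13+ / iOS), per the note above.
if (ExpoSpeechRecognitionModule.supportsRecording()) {
  // "audioend" includes `uri` only when recordingOptions.persist is enabled.
  const subscription = ExpoSpeechRecognitionModule.addListener("audioend", (event) => {
    console.log("recording saved to:", event.uri);
    subscription.remove();
  });

  ExpoSpeechRecognitionModule.start({
    lang: "en-US",
    recordingOptions: {
      persist: true,
    },
  });
}
```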
Default audio output formats:
Get a list of speech recognition services available on the device.
> This only includes services that are listed under `androidSpeechServicePackages` in your app.json as well as the core services listed under `forceQueryable` when running the command: `adb shell dumpsys package queries`
// Usually "com.google.android.googlequicksearchbox" for Google
// or "com.samsung.android.bixby.agent" for Samsung
```
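
As a sketch of putting that list to use (assuming the function is exposed as `ExpoSpeechRecognitionModule.getSpeechRecognitionServices()` and returns an array of package names, as suggested by the output above):

```ts
import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";

// Package names of the queryable recognition services, e.g.
// ["com.google.android.googlequicksearchbox", "com.samsung.android.bixby.agent"]
const services = ExpoSpeechRecognitionModule.getSpeechRecognitionServices();

if (services.length === 0) {
  // No recognizers are queryable; prompt the user to install or enable one
  // (see the androidSpeechServicePackages note above).
  console.warn("No speech recognition services found on this device");
}
```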
### `isRecognitionAvailable(): boolean`
Whether speech recognition is currently available on the device.
If this method returns false, calling `start()` will fail and emit an `error` event with the code `service-not-allowed` or `language-not-supported`. In that case, ask the user to enable speech recognition in the system settings (e.g. on iOS, enable Siri & Dictation). On Android, ask the user to install and enable `com.google.android.tts` (Android 13+) or `com.google.android.googlequicksearchbox` (Android <= 12) as the default voice recognition service.
For web, this method only checks whether the browser has the Web `SpeechRecognition` API available; however, keep in mind that some browsers (like Brave) may expose the API without actually implementing it yet. Refer to the [Platform Compatibility Table](#platform-compatibility-table) for more information. You may want to use a user agent parser to fill in the gaps.
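
A minimal pre-flight check built on this method might look like the following sketch (assuming the method is exposed on `ExpoSpeechRecognitionModule`, matching the rest of this README):

```ts
import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";

if (!ExpoSpeechRecognitionModule.isRecognitionAvailable()) {
  // Calling start() here would fail with "service-not-allowed" or
  // "language-not-supported", so guide the user instead: enable Siri &
  // Dictation on iOS, or install/enable a recognition service on Android
  // (e.g. com.google.android.tts on Android 13+).
  console.warn("Speech recognition is not available on this device");
} else {
  ExpoSpeechRecognitionModule.start({ lang: "en-US" });
}
```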