
Commit 6b6669e

Change package name to expo-speech-recognition from @jamsch/expo-speech-recognition (#11)

* Change package name to expo-speech-recognition
* update lockfile

Co-authored-by: jamsch <jamsch@users.noreply.github.com>
1 parent df2e425 commit 6b6669e

File tree

4 files changed: +39 −38 lines


README.md

+29 −29
@@ -49,7 +49,7 @@ expo-speech-recognition implements the iOS [`SFSpeechRecognizer`](https://develo
 1. Install the package
 
 ```
-npm install @jamsch/expo-speech-recognition
+npm install expo-speech-recognition
 ```
 
 2. Configure the config plugin.
@@ -62,7 +62,7 @@ npm install @jamsch/expo-speech-recognition
   "expo": {
     "plugins": [
       [
-        "@jamsch/expo-speech-recognition",
+        "expo-speech-recognition",
         {
           "microphonePermission": "Allow $(PRODUCT_NAME) to use the microphone.",
           "speechRecognitionPermission": "Allow $(PRODUCT_NAME) to use speech recognition.",
@@ -88,7 +88,7 @@ Using hooks is the easiest way to get started. The `useSpeechRecognitionEvent` h
 import {
   ExpoSpeechRecognitionModule,
   useSpeechRecognitionEvent,
-} from "@jamsch/expo-speech-recognition";
+} from "expo-speech-recognition";
 
 function App() {
   const [recognizing, setRecognizing] = useState(false);
@@ -143,7 +143,7 @@ function App() {
 You should request permissions prior to starting recognition. This library exports two functions: `getPermissionsAsync` and `requestPermissionsAsync` for this purpose. If you do not request permissions or the user has denied permissions after starting, expect an `error` event with the `error` code set to `not-allowed`.
 
 ```ts
-import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
+import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
 
 ExpoSpeechRecognitionModule.getPermissionsAsync().then((result) => {
   console.log("Status:", result.status);
@@ -170,7 +170,7 @@ You can also use the `ExpoSpeechRecognitionModule` to use the native APIs direct
 import {
   ExpoSpeechRecognitionModule,
   addSpeechRecognitionListener,
-} from "@jamsch/expo-speech-recognition";
+} from "expo-speech-recognition";
 
 // Register event listeners
 const startListener = addSpeechRecognitionListener("start", () => {
@@ -294,7 +294,7 @@ import {
   type ExpoSpeechRecognitionErrorCode,
   addSpeechRecognitionListener,
   useSpeechRecognitionEvent,
-} from "@jamsch/expo-speech-recognition";
+} from "expo-speech-recognition";
 
 addSpeechRecognitionListener("error", (event) => {
   console.log("error code:", event.error, "error messsage:", event.message);
@@ -344,7 +344,7 @@ import { Button, View } from "react-native";
 import {
   ExpoSpeechRecognitionModule,
   useSpeechRecognitionEvent,
-} from "@jamsch/expo-speech-recognition";
+} from "expo-speech-recognition";
 
 function RecordAudio() {
   const [recording, setRecording] = useState(false);
@@ -411,7 +411,7 @@ function AudioPlayer(props: { source: string }) {
 
 ## Transcribing audio files
 
-You can use the `audioSource.sourceUri` option to transcribe audio files instead of using the microphone.
+You can use the `audioSource.uri` option to transcribe audio files instead of using the microphone.
 
 > **Important note**: This feature is available on Android 13+ and iOS. If the device does not support the feature, you'll receive an `error` event with the code `audio-capture`.
 
@@ -443,7 +443,7 @@ import {
   ExpoSpeechRecognitionModule,
   useSpeechRecognitionEvent,
   AudioEncodingAndroid,
-} from "@jamsch/expo-speech-recognition";
+} from "expo-speech-recognition";
 
 function TranscribeAudioFile() {
   const [transcription, setTranscription] = useState("");
@@ -500,7 +500,7 @@ Refer to the [SpeechRecognition MDN docs](https://developer.mozilla.org/en-US/do
 // "npm install -D @types/dom-speech-recognition"
 import "dom-speech-recognition";
 
-import { ExpoWebSpeechRecognition } from "@jamsch/expo-speech-recognition";
+import { ExpoWebSpeechRecognition } from "expo-speech-recognition";
 
 // Polyfill the globals for use in external libraries
 webkitSpeechRecognition = ExpoWebSpeechRecognition;
@@ -522,7 +522,7 @@ recognition.contextualStrings = ["Carlsen", "Nepomniachtchi", "Praggnanandhaa"];
 recognition.requiresOnDeviceRecognition = true;
 recognition.addsPunctuation = true;
 recognition.androidIntentOptions = {
-  EXTRA_LANGUAGE_MODEL: "quick_response",
+  EXTRA_LANGUAGE_MODEL: "web_search",
 };
 recognition.androidRecognitionServicePackage = "com.google.android.tts";
 
@@ -571,7 +571,7 @@ recognition.abort();
 On Android, you may notice that there's a beep sound when you start and stop speech recognition. This is due to a hardcoded behavior in the underlying SpeechRecognizer API. However, a workaround you can use is by enabling continuous recognition:
 
 ```ts
-import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
+import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
 
 ExpoSpeechRecognitionModule.start({
   lang: "en-US",
@@ -616,7 +616,7 @@ As of 7 Aug 2024, the following platforms are supported:
 Starts speech recognition.
 
 ```ts
-import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
+import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
 
 ExpoSpeechRecognitionModule.start({
   lang: "en-US",
@@ -628,7 +628,7 @@ ExpoSpeechRecognitionModule.start({
 Stops speech recognition and attempts to return a final result (through the `result` event).
 
 ```ts
-import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
+import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
 
 ExpoSpeechRecognitionModule.stop();
 // Expect the following events to be emitted in order:
@@ -645,7 +645,7 @@ ExpoSpeechRecognitionModule.stop();
 Immediately cancels speech recognition (does not process the final result).
 
 ```ts
-import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
+import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
 
 ExpoSpeechRecognitionModule.abort();
 // Expect an "error" event to be emitted with the code "aborted"
@@ -659,7 +659,7 @@ For iOS, once a user has granted (or denied) location permissions by responding
 the only way that the permissions can be changed is by the user themselves using the device settings app.
 
 ```ts
-import { requestPermissionsAsync } from "@jamsch/expo-speech-recognition";
+import { requestPermissionsAsync } from "expo-speech-recognition";
 
 requestPermissionsAsync().then((result) => {
   console.log("Status:", result.status); // "granted" | "denied" | "not-determined"
@@ -674,7 +674,7 @@ requestPermissionsAsync().then((result) => {
 Returns the current permission status for the microphone and speech recognition.
 
 ```ts
-import { getPermissionsAsync } from "@jamsch/expo-speech-recognition";
+import { getPermissionsAsync } from "expo-speech-recognition";
 
 getPermissionsAsync().then((result) => {
   console.log("Status:", result.status); // "granted" | "denied" | "not-determined"
@@ -689,7 +689,7 @@ getPermissionsAsync().then((result) => {
 Returns the current internal state of the speech recognizer.
 
 ```ts
-import { getStateAsync } from "@jamsch/expo-speech-recognition";
+import { getStateAsync } from "expo-speech-recognition";
 
 // Note: you probably should rather rely on the events emitted by the SpeechRecognition API instead
 getStateAsync().then((state) => {
@@ -701,7 +701,7 @@ getStateAsync().then((state) => {
 ### `addSpeechRecognitionListener(eventName: string, listener: (event: any) => void): { remove: () => void }`
 
 ```ts
-import { addSpeechRecognitionListener } from "@jamsch/expo-speech-recognition";
+import { addSpeechRecognitionListener } from "expo-speech-recognition";
 
 const listener = addSpeechRecognitionListener("result", (event) => {
   console.log("result:", event.results[event.resultIndex][0].transcript);
@@ -716,7 +716,7 @@ listener.remove();
 Get the list of supported locales and the installed locales that can be used for on-device speech recognition.
 
 ```ts
-import { getSupportedLocales } from "@jamsch/expo-speech-recognition";
+import { getSupportedLocales } from "expo-speech-recognition";
 
 getSupportedLocales({
   /**
@@ -751,7 +751,7 @@ Get list of speech recognition services available on the device.
 > Note: this only includes services that are listed under `androidSpeechServicePackages` in your app.json as well as the core services listed under `forceQueryable` when running the command: `adb shell dumpsys package queries`
 
 ```ts
-import { getSpeechRecognitionServices } from "@jamsch/expo-speech-recognition";
+import { getSpeechRecognitionServices } from "expo-speech-recognition";
 
 const packages = ExpoSpeechRecognitionModule.getSpeechRecognitionServices();
 console.log("Speech recognition services:", packages.join(", "));
@@ -763,7 +763,7 @@ console.log("Speech recognition services:", packages.join(", "));
 Returns the default voice recognition service on the device.
 
 ```ts
-import { getDefaultRecognitionService } from "@jamsch/expo-speech-recognition";
+import { getDefaultRecognitionService } from "expo-speech-recognition";
 
 const service = ExpoSpeechRecognitionModule.getDefaultRecognitionService();
 console.log("Default recognition service:", service.packageName);
@@ -775,7 +775,7 @@ console.log("Default recognition service:", service.packageName);
 Returns the default voice assistant service on the device.
 
 ```ts
-import { getAssistantService } from "@jamsch/expo-speech-recognition";
+import { getAssistantService } from "expo-speech-recognition";
 
 const service = ExpoSpeechRecognitionModule.getAssistantService();
 console.log("Default assistant service:", service.packageName);
@@ -788,7 +788,7 @@ console.log("Default assistant service:", service.packageName);
 Whether on-device speech recognition is available on the device.
 
 ```ts
-import { supportsOnDeviceRecognition } from "@jamsch/expo-speech-recognition";
+import { supportsOnDeviceRecognition } from "expo-speech-recognition";
 
 const available = supportsOnDeviceRecognition();
 console.log("OnDevice recognition available:", available);
@@ -799,7 +799,7 @@ console.log("OnDevice recognition available:", available);
 Whether audio recording is supported during speech recognition. This mostly applies to Android devices, to check if it's at least Android 13.
 
 ```ts
-import { supportsRecording } from "@jamsch/expo-speech-recognition";
+import { supportsRecording } from "expo-speech-recognition";
 
 const available = supportsRecording();
 console.log("Recording available:", available);
@@ -814,7 +814,7 @@ You can see which locales are supported and installed on your device by running 
 To download the offline model for a specific locale, use the `androidTriggerOfflineModelDownload` function.
 
 ```ts
-import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
+import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
 
 // Download the offline model for the specified locale
 ExpoSpeechRecognitionModule.androidTriggerOfflineModelDownload({
@@ -856,7 +856,7 @@ import {
   AVAudioSessionCategory,
   AVAudioSessionCategoryOptions,
   AVAudioSessionMode,
-} from "@jamsch/expo-speech-recognition";
+} from "expo-speech-recognition";
 
 setCategoryIOS({
   category: AVAudioSessionCategory.playAndRecord, // or "playAndRecord"
@@ -873,7 +873,7 @@ setCategoryIOS({
 Returns the current audio session category and options. For advanced use cases, you may want to use this function to safely configure the audio session category and mode.
 
 ```ts
-import { getAudioSessionCategoryAndOptionsIOS } from "@jamsch/expo-speech-recognition";
+import { getAudioSessionCategoryAndOptionsIOS } from "expo-speech-recognition";
 
 const values = getAudioSessionCategoryAndOptionsIOS();
 console.log(values);
@@ -885,7 +885,7 @@ console.log(values);
 Sets the audio session active state.
 
 ```ts
-import { setAudioSessionActiveIOS } from "@jamsch/expo-speech-recognition";
+import { setAudioSessionActiveIOS } from "expo-speech-recognition";
 
 setAudioSessionActiveIOS(true, {
   notifyOthersOnDeactivation: true,
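Every README change in this commit is a one-for-one swap of the import specifier, so downstream projects can migrate mechanically. Below is a minimal shell sketch of such a rewrite; the throwaway file path and the GNU `sed -i` flag are illustrative assumptions, not part of the commit:

```shell
# Hypothetical migration sketch: rewrite the old scoped specifier to the new
# unscoped package name. Demonstrated on a throwaway file, not a real project.
mkdir -p /tmp/rename-demo
cat > /tmp/rename-demo/app.ts <<'EOF'
import { ExpoSpeechRecognitionModule } from "@jamsch/expo-speech-recognition";
EOF

# GNU sed in-place rewrite; '|' delimiters avoid escaping the '/' in the name.
sed -i 's|@jamsch/expo-speech-recognition|expo-speech-recognition|g' /tmp/rename-demo/app.ts

cat /tmp/rename-demo/app.ts
# → import { ExpoSpeechRecognitionModule } from "expo-speech-recognition";
```

In a real project you would also swap the dependency itself (for example `npm uninstall @jamsch/expo-speech-recognition && npm install expo-speech-recognition`) and update the plugin entry in app.json, as the README diff shows.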

example/package-lock.json

+5 −4 (generated file; diff not rendered)

package-lock.json

+4 −4 (generated file; diff not rendered)

package.json

+1 −1
@@ -1,5 +1,5 @@
 {
-  "name": "@jamsch/expo-speech-recognition",
+  "name": "expo-speech-recognition",
   "version": "0.2.15",
   "description": "Speech Recognition for React Native Expo projects",
   "main": "build/index.js",
