Vaihi api 2 #8936

Closed
wants to merge 3 commits

24 changes: 23 additions & 1 deletion common/api-review/vertexai.api.md
@@ -344,7 +344,7 @@ export class GenerativeModel extends VertexAIModel {
}

// @public
export function getGenerativeModel(vertexAI: VertexAI, modelParams: ModelParams, requestOptions?: RequestOptions): GenerativeModel;
export function getGenerativeModel(vertexAI: VertexAI, onCloudOrHybridParams: ModelParams | HybridParams, requestOptions?: RequestOptions): GenerativeModel;

// @beta
export function getImagenModel(vertexAI: VertexAI, modelParams: ImagenModelParams, requestOptions?: RequestOptions): ImagenModel;
@@ -416,6 +416,18 @@ export enum HarmSeverity {
HARM_SEVERITY_NEGLIGIBLE = "HARM_SEVERITY_NEGLIGIBLE"
}

// @public
export interface HybridParams {
// (undocumented)
mode?: InferenceMode;
// (undocumented)
onCloudParams?: ModelParams;
// Warning: (ae-forgotten-export) The symbol "LanguageModelCreateOptions" needs to be exported by the entry point index.d.ts
//
// (undocumented)
onDeviceParams?: LanguageModelCreateOptions;
}

// @beta
export enum ImagenAspectRatio {
LANDSCAPE_16x9 = "16:9",
@@ -500,6 +512,16 @@ export interface ImagenSafetySettings {
safetyFilterLevel?: ImagenSafetyFilterLevel;
}

// @public
export enum InferenceMode {
// (undocumented)
ONLY_ON_CLOUD = "ONLY_ON_CLOUD",
// (undocumented)
ONLY_ON_DEVICE = "ONLY_ON_DEVICE",
// (undocumented)
PREFER_ON_DEVICE = "PREFER_ON_DEVICE"
}

// @public
export interface InlineDataPart {
// (undocumented)
2 changes: 2 additions & 0 deletions docs-devsite/_toc.yaml
@@ -536,6 +536,8 @@ toc:
path: /docs/reference/js/vertexai.groundingattribution.md
- title: GroundingMetadata
path: /docs/reference/js/vertexai.groundingmetadata.md
- title: HybridParams
path: /docs/reference/js/vertexai.hybridparams.md
- title: ImagenGCSImage
path: /docs/reference/js/vertexai.imagengcsimage.md
- title: ImagenGenerationConfig
51 changes: 51 additions & 0 deletions docs-devsite/vertexai.hybridparams.md
@@ -0,0 +1,51 @@
Project: /docs/reference/js/_project.yaml
Book: /docs/reference/_book.yaml
page_type: reference

{% comment %}
DO NOT EDIT THIS FILE!
This is generated by the JS SDK team, and any local changes will be
overwritten. Changes should be made in the source code at
https://github.com/firebase/firebase-js-sdk
{% endcomment %}

# HybridParams interface
Configures on-device and on-cloud inference.

<b>Signature:</b>

```typescript
export interface HybridParams
```

## Properties

| Property | Type | Description |
| --- | --- | --- |
| [mode](./vertexai.hybridparams.md#hybridparamsmode) | [InferenceMode](./vertexai.md#inferencemode) | |
| [onCloudParams](./vertexai.hybridparams.md#hybridparamsoncloudparams) | [ModelParams](./vertexai.modelparams.md#modelparams_interface) | |
| [onDeviceParams](./vertexai.hybridparams.md#hybridparamsondeviceparams) | LanguageModelCreateOptions | |

## HybridParams.mode

<b>Signature:</b>

```typescript
mode?: InferenceMode;
```

## HybridParams.onCloudParams

<b>Signature:</b>

```typescript
onCloudParams?: ModelParams;
```

## HybridParams.onDeviceParams

<b>Signature:</b>

```typescript
onDeviceParams?: LanguageModelCreateOptions;
```
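
As a point of reference, here is a minimal sketch of how a `HybridParams` value might be passed to `getGenerativeModel()` under this change. The entry point, model name, and option values are illustrative assumptions (and a default Firebase app is assumed to be initialized), not part of this PR:

```typescript
import { getVertexAI, getGenerativeModel, InferenceMode } from 'firebase/vertexai';

async function run(): Promise<void> {
  // Illustrative HybridParams: request on-device inference where available,
  // and name the on-cloud model to use otherwise. 'gemini-2.0-flash' is a
  // hypothetical model name used only for this example.
  const model = getGenerativeModel(getVertexAI(), {
    mode: InferenceMode.PREFER_ON_DEVICE,
    onCloudParams: { model: 'gemini-2.0-flash' }
  });

  const result = await model.generateContent('Hello!');
  console.log(result.response.text());
}
```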
32 changes: 26 additions & 6 deletions docs-devsite/vertexai.md
@@ -19,7 +19,7 @@ The Vertex AI in Firebase Web SDK.
| <b>function(app, ...)</b> |
| [getVertexAI(app, options)](./vertexai.md#getvertexai_04094cf) | Returns a [VertexAI](./vertexai.vertexai.md#vertexai_interface) instance for the given app. |
| <b>function(vertexAI, ...)</b> |
| [getGenerativeModel(vertexAI, modelParams, requestOptions)](./vertexai.md#getgenerativemodel_e3037c9) | Returns a [GenerativeModel](./vertexai.generativemodel.md#generativemodel_class) class with methods for inference and other functionality. |
| [getGenerativeModel(vertexAI, onCloudOrHybridParams, requestOptions)](./vertexai.md#getgenerativemodel_202434f) | Returns a [GenerativeModel](./vertexai.generativemodel.md#generativemodel_class) class with methods for inference and other functionality. |
| [getImagenModel(vertexAI, modelParams, requestOptions)](./vertexai.md#getimagenmodel_812c375) | <b><i>(Public Preview)</i></b> Returns an [ImagenModel](./vertexai.imagenmodel.md#imagenmodel_class) class with methods for using Imagen.<!-- -->Only Imagen 3 models (named <code>imagen-3.0-*</code>) are supported. |

## Classes
@@ -55,6 +55,7 @@ The Vertex AI in Firebase Web SDK.
| [ImagenAspectRatio](./vertexai.md#imagenaspectratio) | <b><i>(Public Preview)</i></b> Aspect ratios for Imagen images.<!-- -->To specify an aspect ratio for generated images, set the <code>aspectRatio</code> property in your [ImagenGenerationConfig](./vertexai.imagengenerationconfig.md#imagengenerationconfig_interface)<!-- -->.<!-- -->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details and examples of the supported aspect ratios. |
| [ImagenPersonFilterLevel](./vertexai.md#imagenpersonfilterlevel) | <b><i>(Public Preview)</i></b> A filter level controlling whether generation of images containing people or faces is allowed.<!-- -->See the <a href="http://firebase.google.com/docs/vertex-ai/generate-images">personGeneration</a> documentation for more details. |
| [ImagenSafetyFilterLevel](./vertexai.md#imagensafetyfilterlevel) | <b><i>(Public Preview)</i></b> A filter level controlling how aggressively to filter sensitive content.<!-- -->Text prompts provided as inputs and images (generated or uploaded) through Imagen on Vertex AI are assessed against a list of safety filters, which include 'harmful categories' (for example, <code>violence</code>, <code>sexual</code>, <code>derogatory</code>, and <code>toxic</code>). This filter level controls how aggressively to filter out potentially harmful content from responses. See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) and the [Responsible AI and usage guidelines](https://cloud.google.com/vertex-ai/generative-ai/docs/image/responsible-ai-imagen#safety-filters) for more details. |
| [InferenceMode](./vertexai.md#inferencemode) | Determines whether inference happens on-device or on-cloud. |
| [Modality](./vertexai.md#modality) | Content part modality. |
| [SchemaType](./vertexai.md#schematype) | Contains the list of OpenAPI data types as defined by the [OpenAPI specification](https://swagger.io/docs/specification/data-models/data-types/) |
| [VertexAIErrorCode](./vertexai.md#vertexaierrorcode) | Standardized error codes that [VertexAIError](./vertexai.vertexaierror.md#vertexaierror_class) can have. |
@@ -91,6 +92,7 @@ The Vertex AI in Firebase Web SDK.
| [GenerativeContentBlob](./vertexai.generativecontentblob.md#generativecontentblob_interface) | Interface for sending an image. |
| [GroundingAttribution](./vertexai.groundingattribution.md#groundingattribution_interface) | |
| [GroundingMetadata](./vertexai.groundingmetadata.md#groundingmetadata_interface) | Metadata returned to client when grounding is enabled. |
| [HybridParams](./vertexai.hybridparams.md#hybridparams_interface) | Configures on-device and on-cloud inference. |
| [ImagenGCSImage](./vertexai.imagengcsimage.md#imagengcsimage_interface) | An image generated by Imagen, stored in a Cloud Storage for Firebase bucket.<!-- -->This feature is not available yet. |
| [ImagenGenerationConfig](./vertexai.imagengenerationconfig.md#imagengenerationconfig_interface) | <b><i>(Public Preview)</i></b> Configuration options for generating images with Imagen.<!-- -->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images-imagen) for more details. |
| [ImagenGenerationResponse](./vertexai.imagengenerationresponse.md#imagengenerationresponse_interface) | <b><i>(Public Preview)</i></b> The response from a request to generate images with Imagen. |
@@ -99,10 +101,10 @@ The Vertex AI in Firebase Web SDK.
| [ImagenSafetySettings](./vertexai.imagensafetysettings.md#imagensafetysettings_interface) | <b><i>(Public Preview)</i></b> Settings for controlling the aggressiveness of filtering out sensitive content.<!-- -->See the [documentation](http://firebase.google.com/docs/vertex-ai/generate-images) for more details. |
| [InlineDataPart](./vertexai.inlinedatapart.md#inlinedatapart_interface) | Content part interface if the part represents an image. |
| [ModalityTokenCount](./vertexai.modalitytokencount.md#modalitytokencount_interface) | Represents token counting info for a single modality. |
| [ModelParams](./vertexai.modelparams.md#modelparams_interface) | Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_e3037c9)<!-- -->. |
| [ModelParams](./vertexai.modelparams.md#modelparams_interface) | Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_202434f)<!-- -->. |
| [ObjectSchemaInterface](./vertexai.objectschemainterface.md#objectschemainterface_interface) | Interface for [ObjectSchema](./vertexai.objectschema.md#objectschema_class) class. |
| [PromptFeedback](./vertexai.promptfeedback.md#promptfeedback_interface) | If the prompt was blocked, this will be populated with <code>blockReason</code> and the relevant <code>safetyRatings</code>. |
| [RequestOptions](./vertexai.requestoptions.md#requestoptions_interface) | Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_e3037c9)<!-- -->. |
| [RequestOptions](./vertexai.requestoptions.md#requestoptions_interface) | Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_202434f)<!-- -->. |
| [RetrievedContextAttribution](./vertexai.retrievedcontextattribution.md#retrievedcontextattribution_interface) | |
| [SafetyRating](./vertexai.safetyrating.md#safetyrating_interface) | A safety rating associated with a [GenerateContentCandidate](./vertexai.generatecontentcandidate.md#generatecontentcandidate_interface) |
| [SafetySetting](./vertexai.safetysetting.md#safetysetting_interface) | Safety setting that can be sent as part of request parameters. |
@@ -160,22 +162,22 @@ export declare function getVertexAI(app?: FirebaseApp, options?: VertexAIOptions

## function(vertexAI, ...)

### getGenerativeModel(vertexAI, modelParams, requestOptions) {:#getgenerativemodel_e3037c9}
### getGenerativeModel(vertexAI, onCloudOrHybridParams, requestOptions) {:#getgenerativemodel_202434f}

Returns a [GenerativeModel](./vertexai.generativemodel.md#generativemodel_class) class with methods for inference and other functionality.

<b>Signature:</b>

```typescript
export declare function getGenerativeModel(vertexAI: VertexAI, modelParams: ModelParams, requestOptions?: RequestOptions): GenerativeModel;
export declare function getGenerativeModel(vertexAI: VertexAI, onCloudOrHybridParams: ModelParams | HybridParams, requestOptions?: RequestOptions): GenerativeModel;
```

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| vertexAI | [VertexAI](./vertexai.vertexai.md#vertexai_interface) | |
| modelParams | [ModelParams](./vertexai.modelparams.md#modelparams_interface) | |
| onCloudOrHybridParams | [ModelParams](./vertexai.modelparams.md#modelparams_interface) \| [HybridParams](./vertexai.hybridparams.md#hybridparams_interface) | |
| requestOptions | [RequestOptions](./vertexai.requestoptions.md#requestoptions_interface) | |

<b>Returns:</b>
@@ -489,6 +491,24 @@ export declare enum ImagenSafetyFilterLevel
| BLOCK\_NONE | <code>&quot;block_none&quot;</code> | <b><i>(Public Preview)</i></b> The least aggressive filtering level; blocks very few sensitive prompts and responses.<!-- -->Access to this feature is restricted and may require your case to be reviewed and approved by Cloud support. |
| BLOCK\_ONLY\_HIGH | <code>&quot;block_only_high&quot;</code> | <b><i>(Public Preview)</i></b> Blocks few sensitive prompts and responses. |

## InferenceMode

Determines whether inference happens on-device or on-cloud.

<b>Signature:</b>

```typescript
export declare enum InferenceMode
```

## Enumeration Members

| Member | Value | Description |
| --- | --- | --- |
| ONLY\_ON\_CLOUD | <code>&quot;ONLY_ON_CLOUD&quot;</code> | |
| ONLY\_ON\_DEVICE | <code>&quot;ONLY_ON_DEVICE&quot;</code> | |
| PREFER\_ON\_DEVICE | <code>&quot;PREFER_ON_DEVICE&quot;</code> | |

## Modality

Content part modality.
2 changes: 1 addition & 1 deletion docs-devsite/vertexai.modelparams.md
@@ -10,7 +10,7 @@ https://github.com/firebase/firebase-js-sdk
{% endcomment %}

# ModelParams interface
Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_e3037c9)<!-- -->.
Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_202434f)<!-- -->.

<b>Signature:</b>

2 changes: 1 addition & 1 deletion docs-devsite/vertexai.requestoptions.md
@@ -10,7 +10,7 @@ https://github.com/firebase/firebase-js-sdk
{% endcomment %}

# RequestOptions interface
Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_e3037c9)<!-- -->.
Params passed to [getGenerativeModel()](./vertexai.md#getgenerativemodel_202434f)<!-- -->.

<b>Signature:</b>

14 changes: 13 additions & 1 deletion packages/vertexai/src/api.test.ts
@@ -14,7 +14,12 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { ImagenModelParams, ModelParams, VertexAIErrorCode } from './types';
import {
ImagenModelParams,
InferenceMode,
ModelParams,
VertexAIErrorCode
} from './types';
import { VertexAIError } from './errors';
import { ImagenModel, getGenerativeModel, getImagenModel } from './api';
import { expect } from 'chai';
@@ -112,6 +117,13 @@ describe('Top level API', () => {
);
}
});
it('getGenerativeModel with HybridParams sets the model', () => {
const genModel = getGenerativeModel(fakeVertexAI, {
mode: InferenceMode.ONLY_ON_CLOUD,
onCloudParams: { model: 'my-model' }
});
expect(genModel.model).to.equal('publishers/google/models/my-model');
});
it('getImagenModel throws if no apiKey is provided', () => {
const fakeVertexNoApiKey = {
...fakeVertexAI,
18 changes: 15 additions & 3 deletions packages/vertexai/src/api.ts
@@ -23,6 +23,7 @@ import { VertexAIService } from './service';
import { VertexAI, VertexAIOptions } from './public-types';
import {
ImagenModelParams,
HybridParams,
ModelParams,
RequestOptions,
VertexAIErrorCode
@@ -70,16 +71,27 @@ export function getVertexAI(
*/
export function getGenerativeModel(
vertexAI: VertexAI,
modelParams: ModelParams,
onCloudOrHybridParams: ModelParams | HybridParams,
requestOptions?: RequestOptions
): GenerativeModel {
if (!modelParams.model) {
// Disambiguates onCloudOrHybridParams input.
const hybridParams = onCloudOrHybridParams as HybridParams;
let onCloudParams: ModelParams;
if (hybridParams.mode) {
onCloudParams = hybridParams.onCloudParams || {
model: 'gemini-2.0-flash-lite'
};
} else {
onCloudParams = onCloudOrHybridParams as ModelParams;
}

if (!onCloudParams.model) {
throw new VertexAIError(
VertexAIErrorCode.NO_MODEL,
`Must provide a model name. Example: getGenerativeModel({ model: 'my-model-name' })`
);
}
return new GenerativeModel(vertexAI, modelParams, requestOptions);
return new GenerativeModel(vertexAI, onCloudParams, requestOptions);
}

/**
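
To make the branching above concrete, here is a hedged sketch of the three call shapes this disambiguation accepts. The `'firebase/vertexai'` entry point is an assumption; the fallback model name comes from the diff above:

```typescript
import { getVertexAI, getGenerativeModel, InferenceMode } from 'firebase/vertexai';

const vertexAI = getVertexAI();

// 1. Plain ModelParams: no `mode` field, so the argument is used as before.
const cloudOnly = getGenerativeModel(vertexAI, { model: 'my-model' });

// 2. HybridParams with explicit onCloudParams: its `model` is used.
const hybridExplicit = getGenerativeModel(vertexAI, {
  mode: InferenceMode.ONLY_ON_CLOUD,
  onCloudParams: { model: 'my-model' }
});

// 3. HybridParams without onCloudParams: the code above falls back to the
//    'gemini-2.0-flash-lite' default for the on-cloud model.
const hybridDefault = getGenerativeModel(vertexAI, {
  mode: InferenceMode.PREFER_ON_DEVICE
});
```

The check hinges on the presence of `mode`, which is what distinguishes `HybridParams` from `ModelParams` here.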
10 changes: 10 additions & 0 deletions packages/vertexai/src/types/enums.ts
@@ -240,3 +240,13 @@ export enum Modality {
*/
DOCUMENT = 'DOCUMENT'
}

/**
* Determines whether inference happens on-device or on-cloud.
* @public
*/
export enum InferenceMode {
PREFER_ON_DEVICE = 'PREFER_ON_DEVICE',
ONLY_ON_DEVICE = 'ONLY_ON_DEVICE',
ONLY_ON_CLOUD = 'ONLY_ON_CLOUD'
}
84 changes: 84 additions & 0 deletions packages/vertexai/src/types/language-model.ts
@@ -0,0 +1,84 @@
/**
* @license
* Copyright 2025 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

export interface LanguageModel extends EventTarget {
create(options?: LanguageModelCreateOptions): Promise<LanguageModel>;
availability(options?: LanguageModelCreateCoreOptions): Promise<Availability>;
prompt(
input: LanguageModelPrompt,
options?: LanguageModelPromptOptions
): Promise<string>;
promptStreaming(
input: LanguageModelPrompt,
options?: LanguageModelPromptOptions
): ReadableStream;
measureInputUsage(
input: LanguageModelPrompt,
options?: LanguageModelPromptOptions
): Promise<number>;
destroy(): undefined;
}
enum Availability {
'unavailable',
'downloadable',
'downloading',
'available'
}
export interface LanguageModelCreateCoreOptions {
topK?: number;
temperature?: number;
expectedInputs?: LanguageModelExpectedInput[];
}
export interface LanguageModelCreateOptions
extends LanguageModelCreateCoreOptions {
signal?: AbortSignal;
systemPrompt?: string;
initialPrompts?: LanguageModelInitialPrompts;
}
interface LanguageModelPromptOptions {
signal?: AbortSignal;
}
interface LanguageModelExpectedInput {
type: LanguageModelMessageType;
languages?: string[];
}
type LanguageModelPrompt =
| LanguageModelMessage[]
| LanguageModelMessageShorthand[]
| string;
type LanguageModelInitialPrompts =
| LanguageModelMessage[]
| LanguageModelMessageShorthand[];
interface LanguageModelMessage {
role: LanguageModelMessageRole;
content: LanguageModelMessageContent[];
}
interface LanguageModelMessageShorthand {
role: LanguageModelMessageRole;
content: string;
}
interface LanguageModelMessageContent {
type: LanguageModelMessageType;
content: LanguageModelMessageContentValue;
}
type LanguageModelMessageRole = 'system' | 'user' | 'assistant';
type LanguageModelMessageType = 'text' | 'image' | 'audio';
type LanguageModelMessageContentValue =
| ImageBitmapSource
| AudioBuffer
| BufferSource
| string;
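
These declarations appear to mirror the shape of the browser's built-in Prompt API (`LanguageModel`). Below is a minimal sketch of an `onDeviceParams` value shaped like `LanguageModelCreateOptions`, written inline so the literal `role` and `type` values type-check against the unions above; every value is illustrative:

```typescript
import { getVertexAI, getGenerativeModel, InferenceMode } from 'firebase/vertexai';

// Illustrative on-device options matching LanguageModelCreateOptions:
// sampling settings plus a short system prompt and a seed conversation.
const model = getGenerativeModel(getVertexAI(), {
  mode: InferenceMode.PREFER_ON_DEVICE,
  onDeviceParams: {
    temperature: 0.7,
    topK: 3,
    systemPrompt: 'Answer in one short sentence.',
    initialPrompts: [
      { role: 'user', content: 'What is Firebase?' },
      { role: 'assistant', content: 'An app development platform from Google.' }
    ],
    expectedInputs: [{ type: 'text', languages: ['en'] }]
  }
});
```

Note that in this PR `getGenerativeModel()` only resolves the on-cloud params; `onDeviceParams` is carried on the `HybridParams` type but not yet consumed.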