
Tokenizer is already loaded error #95

Open
DarkSorrow opened this issue Feb 10, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@DarkSorrow

Description

Hello,

I can't reproduce this error consistently, and I haven't seen it for a while now, but just in case, this is the code I'm using. It's a component that is loaded on a page; once the LLM is ready, it is stored in a provider context. When the user wants to switch models, the context sets this variable to null to "unload" it. If an error occurs, it also sets the LLM to null, and the component where you choose your model is displayed again.
The modelURI and tokenizerURI are never null when they reach this page.

import { Dispatch, SetStateAction, useEffect } from "react";
import { Image, ImageSourcePropType } from 'react-native';
import { useLLM } from "react-native-executorch";
import { Spinner } from "tamagui";
import Animated, { useSharedValue, useAnimatedStyle, withRepeat, withTiming } from 'react-native-reanimated';
import Toast from "react-native-toast-message";
import { useTranslation } from 'react-i18next';

import { useAssistant } from "@/providers/assistant";
import { AvatarWrapper } from "@/components/atoms/avatar-wrapper";
import { AvatarBackgroundGradient } from "@/components/atoms/avatar-background-gradient";
import { AvatarBackground } from "../atoms/avatar-background";
import { ASSETS_PROFILE, AVATAR_SIZE } from "@/utils/constants";
import { AssistantStatus, ChatMessage, EngineType } from "@/utils/types";
import { AvatarLabel } from "../atoms/typography";
import { AvatarModelImage } from '@/components/atoms/avatar-model-image';

interface LLMLoaderProps {
  modelURI: string;
  tokenizerURI: string;
  systemPrompt?: string;
  contextWindowLength: number;
  setMessages: Dispatch<SetStateAction<ChatMessage[]>>;
  setIsGenerating: Dispatch<SetStateAction<boolean>>;
}

const AVATAR_REAL = AVATAR_SIZE - 18;

export const LLMLoader = ({
  modelURI, tokenizerURI, systemPrompt, contextWindowLength, setMessages, setIsGenerating
}: LLMLoaderProps) => {
  const { t } = useTranslation();
  const { setAgent, model } = useAssistant();
  const llama = useLLM({
    modelSource: modelURI,
    tokenizerSource: tokenizerURI,
    systemPrompt,
    contextWindowLength,
  });
  const bounceAnim = useSharedValue(1);
  
  useEffect(() => {
    if (llama && llama.isGenerating) {
      bounceAnim.value = withRepeat(
        withTiming(1.05, { duration: 1000 }),
        -1,
        true
      );
    } else {
      bounceAnim.value = 1;
    }
  }, [llama?.isGenerating]);

  const animatedStyle = useAnimatedStyle(() => ({
    transform: [{ scale: bounceAnim.value }],
  }));

  useEffect(() => {
    if (llama.isReady) {
      setAgent({
        llm: llama,
        type: EngineType.EXECUTORCH,
      }, AssistantStatus.AI_READY);
    }
  }, [llama?.isReady]);

  // Handle streaming responses
  useEffect(() => {
    if (llama.response && llama.isGenerating) {
      // Update the latest assistant message with the current stream
      setMessages(prev => {
        const newMessages = [...prev];
        if (newMessages[0]?.role === 'assistant') {
          newMessages[0].message = llama.response;
          newMessages[0].isStreaming = true;
        }
        return newMessages;
      });
    } else if (llama.response && !llama.isGenerating) {
      // Finalize the message when generation is complete
      setMessages(prev => {
        const newMessages = [...prev];
        if (newMessages[0]?.role === 'assistant') {
          newMessages[0].message = llama.response;
          newMessages[0].isStreaming = false;
        }
        return newMessages;
      });
      setIsGenerating(false);
    }
  }, [llama.response, llama.isGenerating]);

  useEffect(() => {
    if (llama?.error) {
      Toast.show({
        type: 'error',
        text1: t('error.llamaErr', { msg: llama.error }),
      });
      
      setAgent(null, AssistantStatus.AI_ERROR);
    }
  }, [llama?.error]);

  return (
    <AvatarWrapper>
      <AvatarLabel>{model?.name || t('anon')}</AvatarLabel>
      <Animated.View style={animatedStyle}>
        <AvatarBackgroundGradient status="ready">
          <AvatarBackground opa={llama?.isGenerating}>
            {llama?.isReady === true ? (
              <AvatarModelImage 
                model={model}
                size={AVATAR_REAL}
              />
            ) : (
              <Spinner size="large" color="$primary" />
            )}
          </AvatarBackground>
        </AvatarBackgroundGradient>
      </Animated.View>
    </AvatarWrapper>
  )
}
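For context, here is a minimal sketch of the lifecycle I suspect is at play. This is hypothetical, not react-native-executorch's actual implementation: the `NativeLLMLoader` class and its methods are illustrative stand-ins, assuming (based only on the error message) that the native side keeps a global "loaded" flag that is not cleared when the JS component unmounts, so a remount that loads again fails.

```typescript
// Hypothetical sketch: a native-style loader with a module-level
// "loaded" flag, as the error message suggests. Names are illustrative,
// not react-native-executorch's real API.
class NativeLLMLoader {
  private static loaded = false;

  static load(modelURI: string, tokenizerURI: string): void {
    if (NativeLLMLoader.loaded) {
      // Matches the reported error when the component remounts
      // before the previous instance was released.
      throw new Error("Model and tokenizer already loaded");
    }
    NativeLLMLoader.loaded = true;
  }

  static unload(): void {
    NativeLLMLoader.loaded = false;
  }
}

// First mount succeeds…
NativeLLMLoader.load("model.pte", "tokenizer.bin");

// …but a remount that loads again without an unload in between throws.
let secondLoadError = "";
try {
  NativeLLMLoader.load("model.pte", "tokenizer.bin");
} catch (e) {
  secondLoadError = (e as Error).message;
}
console.log(secondLoadError); // "Model and tokenizer already loaded"
```

If that guess is right, the error would appear whenever the component remounts (e.g. after the agent is reset to null) without the previous load being released first.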

Steps to reproduce

1. The last time it happened was after a new release.
2. Opened the page where the LLM was; I got an error that set the agent to null.
3. Came back to the same page and got the error again, saying the tokenizer is already loaded.

Snack or a link to a repository

https://snack.expo.dev/

React Native Executorch version

0.2.0

React Native version

0.76.3

Platforms

iOS, Android

JavaScript runtime

Hermes

Workflow

Expo Dev Client

Architecture

Fabric (New Architecture)

Build type

Release mode

Device

iOS simulator

Device model

No response

AI model

LLama 1b SpinQuant

Performance logs

No response

Acknowledgements

Yes

@NorbertKlockiewicz NorbertKlockiewicz added the bug Something isn't working label Feb 10, 2025
@DarkSorrow
Author

So I've made a new build again. I just download the tokenizer and model, they stay in the Documents directory, then I call them and I'm getting this error: "Model and tokenizer already loaded"

(NOBRIDGE) LOG  llama {"modelURI": "file:///Users/dark/Library/Developer/CoreSimulator/Devices/A647091C-F23C-46C4-88D7-A1D20454F3AE/data/Containers/Data/Application/E7C85E85-D165-4783-879E-4819CFFA53CA/Documents/models/75b81858-ae9a-430d-babb-008debca7788/llama3_2_spinquant.pte", "tokenizerURI": "file:///Users/dark/Library/Developer/CoreSimulator/Devices/A647091C-F23C-46C4-88D7-A1D20454F3AE/data/Containers/Data/Application/E7C85E85-D165-4783-879E-4819CFFA53CA/Documents/models/75b81858-ae9a-430d-babb-008debca7788/tokenizer.bin"}
[CoreGraphics]
CGBitmapContextCreate: unsupported parameter combination:
NULL colorspace | 0 bits/component, integer | 0 bytes/row.
kCGImageAlphaNone | kCGImageByteOrderDefault | kCGImagePixelFormatPacked
Set CGBITMAP_CONTEXT_LOG_ERRORS environmental variable to see more details.
[CoreGraphics] CGDisplayListDrawInContext: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
[CoreGraphics] CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
(NOBRIDGE) LOG  llama {"downloadProgress": 0, "error": "Model and tokenizer already loaded", "generate": [Function anonymous], "interrupt": [Function interrupt], "isGenerating": false, "isModelGenerating": false, "isModelReady": false, "isReady": false, "response": ""}
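As a possible JS-side workaround while this is investigated, here is a sketch of caching the load per model/tokenizer pair so a remount reuses the existing load instead of asking the native side to load a second time. `loadNative`/`loadOnce` are hypothetical helper names; this does not use react-native-executorch's real API.

```typescript
// Hypothetical workaround sketch: memoize the load promise per
// model/tokenizer pair so a remount reuses the in-flight or completed
// load instead of triggering a second native load.
type Loader = (modelURI: string, tokenizerURI: string) => Promise<string>;

const cache = new Map<string, Promise<string>>();

function loadOnce(load: Loader, modelURI: string, tokenizerURI: string): Promise<string> {
  const key = `${modelURI}|${tokenizerURI}`;
  let pending = cache.get(key);
  if (!pending) {
    pending = load(modelURI, tokenizerURI);
    cache.set(key, pending);
  }
  return pending;
}

// Demo: the underlying loader runs only once even if called twice.
let calls = 0;
const fakeLoad: Loader = async () => {
  calls += 1;
  return "handle";
};

async function demo() {
  const a = await loadOnce(fakeLoad, "model.pte", "tokenizer.bin");
  const b = await loadOnce(fakeLoad, "model.pte", "tokenizer.bin");
  console.log(a === b, calls); // true 1
}
demo();
```

This only helps if the stale state lives on the JS side; if the flag is held natively across reloads, the fix would have to land in the library.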
