
[Web] ONNX Runtime Initialization Fails with irVersion Error #22931

Closed
lstrhsu opened this issue Nov 23, 2024 · 9 comments
Labels
platform:web issues related to ONNX Runtime web; typically submitted using template

Comments

@lstrhsu

lstrhsu commented Nov 23, 2024

Describe the issue

I am encountering an issue with the ONNX Runtime initialization in my userscript. The error message indicates that the irVersion property is being accessed on a null object, which suggests that the ONNX model is not being loaded correctly. Additionally, the ONNX Runtime version is showing as undefined, which implies that the onnxruntime-web library might not be properly initialized or loaded.

To reproduce

Load the userscript in a browser environment

  • Chrome 133.0.6847.2
  • Tampermonkey v5.3.2

Observe the console logs for the following error

ONNX Runtime initialization failed: TypeError: Cannot read properties of null (reading 'irVersion')
Error stack: TypeError: Cannot read properties of null (reading 'irVersion')
Initialization failed: Error: ONNX initialization failed
Error stack: Error: ONNX initialization failed

Console Logs

Script execution started...
Starting initialization...
Initializing ONNX...
ort: object
ONNX Runtime version: undefined
Fetching model from: https://raw.githubusercontent.com/lstrhsu/MyHost/main/model.onnx
Fetching model...
Model data size: 1153332 bytes
Creating ONNX Runtime session...
ONNX Runtime initialization failed: TypeError: Cannot read properties of null (reading 'irVersion')

Related Userscript code

// @require      https://cdn.jsdelivr.net/npm/[email protected]/dist/ort.min.js
// @connect      raw.githubusercontent.com
// @connect      cdn.jsdelivr.net
// @grant        GM_xmlhttpRequest
// @grant        unsafeWindow
// @connect      microsoft.github.io
// @resource     WASM_SIMD https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm-simd.wasm
// @resource     WASM https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm.wasm
// ==/UserScript==
(async function() {
    'use strict';
    
    console.log('Script execution started...');

    // Initialize ONNX session
    let session;

    // Configure ONNX Runtime
    const initONNX = async () => {
        try {
            // Check if ONNX Runtime is loaded correctly
            console.log('ort:', typeof ort);
            if (typeof ort === 'undefined') {
                throw new Error('ONNX Runtime not loaded');
            }
            console.log('ONNX Runtime version:', ort.version);

            // Configuration options
            const options = {
                executionProviders: ['webgl'], // Use WebGL backend
                graphOptimizationLevel: 'all'
            };

            // Use the correct model URL
            const MODEL_URL = 'https://raw.githubusercontent.com/lstrhsu/MyHost/main/model.onnx';
            console.log('Fetching model from:', MODEL_URL);

            // Fetch model data
            console.log('Fetching model...');
            const modelResponse = await new Promise((resolve, reject) => {
                GM_xmlhttpRequest({
                    method: 'GET',
                    url: MODEL_URL,
                    responseType: 'arraybuffer',
                    onload: (response) => {
                        // Validate response
                        if (response.status !== 200) {
                            reject(new Error(`Model download failed: ${response.status}`));
                            return;
                        }
                        resolve(response);
                    },
                    onerror: reject
                });
            });

            // Use ArrayBuffer directly
            const modelBuffer = modelResponse.response;
            console.log('Model data size:', modelBuffer.byteLength, 'bytes');

            // Check if model data is valid
            if (!modelBuffer || modelBuffer.byteLength === 0) {
                throw new Error('Model data is invalid or empty');
            }

            // Create session
            console.log('Creating ONNX Runtime session...');
            session = await ort.InferenceSession.create(modelBuffer, options);
            
            // Validate session
            console.log('Session created successfully');
            console.log('Input nodes:', session.inputNames);
            console.log('Output nodes:', session.outputNames);

            return true;
        } catch (error) {
            console.error('ONNX Runtime initialization failed:', error);
            console.error('Error stack:', error.stack);
            return false;
        }
    };

Model Composition

Environment Information:

  • ONNX Version: 1.13.1
  • ONNX Runtime Version: 1.13.1

Version Compatibility Check:

  • Model Opset Version: 13
  • Maximum Supported Opset Version in Current Environment: 17
  • Compatibility: ✓ Compatible

Model Validation: Passed ✓

  • Warnings: The following operators may need updates: Squeeze & Reshape

Web Deployment Compatibility Check (ONNX.js):

  • IR Version 7: ✓ Compatible
  • Opset Version 13: ✓ Compatible

Metadata Information:

  • ONNX IR Version: 7
  • Producer Name: tf2onnx
  • Producer Version: 1.13.0 2c1db5
  • Model Version: 0
  • Opset Versions: [13, 2]

Urgency

This issue is urgent because it blocks a critical feature in our application that relies on ONNX Runtime for real-time captcha solving. We have a project deadline approaching in two weeks, and resolving this issue is crucial for our deployment schedule.

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

Version: 1.13.1

Execution Provider

'webgl' (WebGL)

@lstrhsu lstrhsu added the platform:web issues related to ONNX Runtime web; typically submitted using template label Nov 23, 2024
@github-actions github-actions bot added the .NET Pull requests that update .net code label Nov 23, 2024
@fs-eire
Contributor

fs-eire commented Nov 24, 2024

1.13.1 is quite old. Here are the options:

  • Keep using 1.13.1 with WebGL. Then you need to downgrade the IR version of the ONNX model to an older version.
  • [recommended] Use the latest version (1.20.1) of onnxruntime-web with webgpu as the execution provider (Chrome 133 supports it). The webgpu EP has better performance than webgl, and we keep improving it.
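
A minimal sketch of the second option, assuming the onnxruntime-web global ort is already loaded (option names follow the snippets later in this thread; this is an illustration, not a verified setup):

// Request the webgpu EP; ORT falls back with a warning if the backend is unavailable.
const options = {
    executionProviders: ['webgpu'],
    graphOptimizationLevel: 'all'
};
const session = await ort.InferenceSession.create(modelBuffer, options);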

@fs-eire fs-eire removed the .NET Pull requests that update .net code label Nov 24, 2024
@lstrhsu
Author

lstrhsu commented Nov 25, 2024

1.13.1 is quite old. Here are the options:

  • Keep using 1.13.1 with WebGL. Then you need to downgrade the IR version of the ONNX model to an older version.
  • [recommended] Use the latest version (1.20.1) of onnxruntime-web with webgpu as the execution provider (Chrome 133 supports it). The webgpu EP has better performance than webgl, and we keep improving it.

Thank you for your reply. Here's what I've tried:

  • Converted the model with different IR versions (6 and 4), but still got the same error:
    ONNX Runtime initialization failed: TypeError: Cannot read properties of null (reading 'irVersion')
    
  • Used the latest version (1.20.1) of onnxruntime-web with WebGPU as the execution provider, but got this error:
    removing requested execution provider "webgpu" from session options because it is not available: backend not found.
    
    {
      message: "Failed to load model as ONNX format: Error: unrecognized input '' for node: LSTM__61\nas ORT format: TypeError: Cannot read properties of null (reading 'irVersion')",
      name: "Error",
      stack: "Error: Failed to load model as ONNX format: Error: unrecognized input '' for node: LSTM__61\nas ORT format: TypeError: Cannot read properties of null (reading 'irVersion')\n    at dn.load (...)"
    }
    

I've tried many approaches, including retraining multiple times. I then found some old posts mentioning that ONNX.js didn't support LSTM. Is that supported now, and how should I adjust my approach? Note that I use a bidirectional LSTM.

Here's my model conversion code:

import tf2onnx
import tensorflow as tf
from tensorflow import keras
import logging
import os
import shutil
import subprocess

# 1. Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# 2. Keep CTCLayer definition consistent with original Python version
class CTCLayer(keras.layers.Layer):
    def __init__(self, name=None, trainable=True, **kwargs):
        super().__init__(name=name, trainable=trainable, **kwargs)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, y_true, y_pred):
        return y_pred

def convert_model_to_onnx(model_path, output_path):
    # Define this up front so the cleanup in the except block cannot raise a NameError
    temp_saved_model = 'temp_saved_model'
    try:
        logger.info("Loading model...")
        # Load model
        model = keras.models.load_model(
            model_path, 
            custom_objects={'CTCLayer': CTCLayer},
            compile=False
        )
        
        # Create prediction model
        logger.info("Creating prediction model...")
        prediction_model = keras.models.Model(
            model.get_layer(name="image").input,
            model.get_layer(name="dense2").output
        )
        
        # Save as SavedModel format with specific input shape
        logger.info("Saving as SavedModel format...")
        prediction_model.save(
            temp_saved_model, 
            save_format='tf',
            signatures={
                'serving_default': tf.function(
                    lambda x: prediction_model(x)
                ).get_concrete_function(
                    tf.TensorSpec(shape=(1, 280, 80, 1), dtype=tf.float32, name="image")
                )
            }
        )
        
        # Convert using command line
        logger.info("Starting ONNX conversion...")
        cmd = f"python -m tf2onnx.convert --saved-model {temp_saved_model} --output {output_path} --opset 13"
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        
        if result.returncode != 0:
            logger.error(f"Conversion failed: {result.stderr}")
            raise Exception(f"Conversion failed: {result.stderr}")
            
        # Clean up temporary files
        if os.path.exists(temp_saved_model):
            shutil.rmtree(temp_saved_model)
        
        logger.info(f"Model successfully converted and saved to: {output_path}")
        return True
        
    except Exception as e:
        logger.error(f"Error during conversion: {str(e)}")
        if os.path.exists(temp_saved_model):
            shutil.rmtree(temp_saved_model)
        raise

if __name__ == "__main__":
    try:
        model_path = "model.h5"
        output_path = "model.onnx"
        
        # Verify input file exists
        if not os.path.exists(model_path):
            raise FileNotFoundError(f"Model file not found: {model_path}")
            
        # Execute conversion
        convert_model_to_onnx(model_path, output_path)
        
        # Verify output file
        if os.path.exists(output_path):
            logger.info("Conversion completed successfully!")
        else:
            raise FileNotFoundError("Conversion failed, output file not generated")
            
    except Exception as e:
        logger.error(f"Program execution failed: {str(e)}")
        raise

Thank you in advance for your response.

@fs-eire
Contributor

fs-eire commented Nov 26, 2024

Used the latest version (1.20.1) of onnxruntime-web with WebGPU as the execution provider, but got this error:
removing requested execution provider "webgpu" from session options because it is not available: backend not found.

Sorry, I forgot to mention: you need to use import "onnxruntime-web/webgpu" instead of import "onnxruntime-web" to use webgpu in 1.20.x.
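
For illustration, here is the difference in bundler-style code (a sketch, not from the thread; in a userscript the equivalent is requiring the dist/ort.webgpu.min.js build, as tried below):

// 1.20.x: 'onnxruntime-web' bundles only the wasm/cpu backends,
// while 'onnxruntime-web/webgpu' also includes the webgpu backend.
import * as ort from 'onnxruntime-web/webgpu';

const session = await ort.InferenceSession.create(modelBuffer, {
    executionProviders: ['webgpu']
});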

@lstrhsu
Author

lstrhsu commented Nov 26, 2024

Sorry, I forgot to mention: you need to use import "onnxruntime-web/webgpu" instead of import "onnxruntime-web" to use webgpu in 1.20.x.

I apologize for any confusion, as I am a complete beginner. I attempted to use the following URLs:

// @require      https://unpkg.com/[email protected]/dist/ort.webgpu.min.js
or
// @require      https://cdn.jsdelivr.net/npm/[email protected]/dist/ort.webgpu.min.js

However, both resulted in the following output:

Checking WebGPU support...
WebGPU supported: true
Creating session with options: {"executionProviders":["webgpu"],"graphOptimizationLevel":"all","logSeverityLevel":0,"executionMode":"sequential"}
Failed to load resource: the server responded with a status of 404 ()

I checked, and both URLs are correct.
Could you please advise on any additional steps I might need to take to resolve this issue?
Thank you for your assistance!

@fs-eire
Contributor

fs-eire commented Nov 26, 2024

Failed to load resource: the server responded with a status of 404 ()

Is the model URL returning 404? A full, reproducible example would be helpful.

@lstrhsu
Author

lstrhsu commented Nov 26, 2024

Is the model URL returning 404? A full, reproducible example would be helpful.

The model URL is right.

Full Example:

// ==UserScript==
// @name         Melon Ticket Captcha Solver (ONNX Version)
// @namespace    http://tampermonkey.net/
// @match        https://tkglobal.melon.com/reservation/popup/onestop.htm*
// @require      https://unpkg.com/[email protected]/dist/ort.webgpu.min.js
// @connect      raw.githubusercontent.com
// @connect      cdn.jsdelivr.net
// @grant        GM_xmlhttpRequest
// @grant        unsafeWindow
// @connect      microsoft.github.io
// @resource     WASM_SIMD https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm-simd.wasm
// @resource     WASM https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm.wasm
// ==/UserScript==
(async function() {
    'use strict';

    console.log('Script execution started...');

    // Initialize ONNX session
    let session;

    // Configure ONNX Runtime
    const initONNX = async () => {
        try {
            // Check if ONNX Runtime is loaded correctly
            console.log('ort:', typeof ort);
            if (typeof ort === 'undefined') {
                throw new Error('ONNX Runtime not loaded');
            }
            console.log('ONNX Runtime version:', ort.version);

            // Add WebGPU support check
            console.log('Checking WebGPU support...');
            const webGPUSupported = 'gpu' in navigator;
            console.log('WebGPU supported:', webGPUSupported);

            // Configuration options
            const options = {
                executionProviders: ['webgpu'],
                graphOptimizationLevel: 'all',
                logSeverityLevel: 0, 
                executionMode: 'sequential' 
            };

            // Use the correct model URL
            const MODEL_URL = 'https://raw.githubusercontent.com/lstrhsu/MyHost/main/model_original.onnx';
            console.log('Fetching model from:', MODEL_URL);

            // Fetch model data
            console.log('Fetching model...');
            const modelResponse = await new Promise((resolve, reject) => {
                GM_xmlhttpRequest({
                    method: 'GET',
                    url: MODEL_URL,
                    responseType: 'arraybuffer',
                    onload: (response) => {
                        // Validate response
                        if (response.status !== 200) {
                            reject(new Error(`Model download failed: ${response.status}`));
                            return;
                        }
                        resolve(response);
                    },
                    onerror: reject
                });
            });

            // Use ArrayBuffer directly
            const modelBuffer = modelResponse.response;
            console.log('Model data size:', modelBuffer.byteLength, 'bytes');

            // Check if model data is valid
            console.log('Validating model buffer...');
            if (!modelBuffer || modelBuffer.byteLength === 0) {
                throw new Error('Model data is invalid or empty');
            }

            // Create session
            console.log('Creating ONNX Runtime session...');
            console.log('Model buffer size:', modelBuffer.byteLength);
            console.log('Creating session with options:', JSON.stringify(options));

            session = await ort.InferenceSession.create(modelBuffer, options);

            // Validate session
            console.log('Session created successfully');
            console.log('Input nodes:', session.inputNames);
            console.log('Output nodes:', session.outputNames);

            // Log selected execution provider
            console.log('Selected execution provider:', session.handler.sessionOptions.executionProviders[0]);

            // Verify session
            if (!session.inputNames || session.inputNames.length === 0) {
                throw new Error('Invalid session: no input names found');
            }

            // Model Details
            console.log('Model inputs:', session.inputNames);
            console.log('Input details:', session.inputNames.map(name => {
                const info = session.inputs.find(input => input.name === name);
                return {
                    name: name,
                    shape: info.dims,
                    type: info.type
                };
            }));

            return true;
        } catch (error) {
            console.error( {
                message: error.message,
                stack: error.stack,
                name: error.name
            });
            return false;
        }
    };

    // Modify initialization function
    async function init() {
        try {
            console.log('Starting initialization...');

            // Initialize ONNX
            console.log('Initializing ONNX...');
            const success = await initONNX();
            if (!success) {
                throw new Error('ONNX initialization failed');
            }

            console.log('ONNX initialized successfully, setting up captcha observer...');
            // Start captcha observer
            setupCaptchaObserver();

            console.log('Initialization complete');
        } catch (error) {
            console.error('Initialization failed:', error);
            console.error('Error stack:', error.stack);
        }
    }

    // Ensure script runs after page load
    if (document.readyState === 'loading') {
        document.addEventListener('DOMContentLoaded', init);
    } else {
        init();
    }

    console.log('Script loaded');  // Add end log
})();

Console Output:

JQMIGRATE: Migrate is installed, version 3.3.2
Script execution started...
Starting initialization...
Initializing ONNX...
ort: object
ONNX Runtime version: undefined
Checking WebGPU support...
WebGPU supported: true
Fetching model from: https://raw.githubusercontent.com/lstrhsu/MyHost/main/model_original.onnx
Fetching model...
Script loaded
Model data size: 1152257 bytes
Validating model buffer...
Creating ONNX Runtime session...
Model buffer size: 1152257
Creating session with options: {"executionProviders":["webgpu"],"graphOptimizationLevel":"all","logSeverityLevel":0,"executionMode":"sequential"}
JQMIGRATE: Migrate is installed, version 3.3.2
Failed to load resource: the server responded with a status of 404 ()
Object
message: "no available backend found. ERR: [webgpu] TypeError: Failed to fetch dynamically imported module: https://tkglobal.melon.com/reservation/popup/ort-wasm-simd-threaded.jsep.mjs"
name: "Error"
stack: "Error: no available backend found. ERR: [webgpu] TypeError: Failed to fetch dynamically imported module: https://tkglobal.melon.com/reservation/popup/ort-wasm-simd-threaded.jsep.mjs\n  
[[Prototype]]: Object
Initialization failed: Error: ONNX initialization failed
Error stack: Error: ONNX initialization failed

It automatically requested https://tkglobal.melon.com/reservation/popup/ort-wasm-simd-threaded.jsep.mjs, which of course does not exist.
GPT says that ONNX Runtime generates incorrect resource load paths by default when wasmPaths is not explicitly specified, and that this can be fixed by explicitly setting wasmPaths or by disabling WebGPU.
I'm sorry, this is difficult for me.

@fs-eire
Contributor

fs-eire commented Nov 27, 2024

I see. It looks like you are importing from https://unpkg.com/[email protected]/dist/ort.webgpu.min.js.
Since you are using

// @resource     WASM_SIMD https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm-simd.wasm
// @resource     WASM https://cdn.jsdelivr.net/npm/[email protected]/dist/ort-wasm.wasm

you can specify the path in code:

ort.env.wasm.wasmPaths = 'https://cdn.jsdelivr.net/npm/[email protected]/dist/';
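
For placement, this assignment just needs to run before the first InferenceSession.create call, e.g. near the top of initONNX (a sketch based on the code above):

// Point ORT at the CDN copies of its runtime files instead of the page origin.
ort.env.wasm.wasmPaths = 'https://cdn.jsdelivr.net/npm/[email protected]/dist/';

// ...then create the session as before:
session = await ort.InferenceSession.create(modelBuffer, options);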

@lstrhsu
Author

lstrhsu commented Nov 27, 2024

ort.env.wasm.wasmPaths = 'https://cdn.jsdelivr.net/npm/[email protected]/dist/';

After that:

WASM paths set to: https://cdn.jsdelivr.net/npm/[email protected]/dist/
Creating ONNX Runtime session...
Creating session with options: {"executionProviders":["webgpu"],"graphOptimizationLevel":"all","logSeverityLevel":0,"executionMode":"sequential"}
(A lot of warnings...)
Session created successfully
Input nodes: Array(1)
	0: "image"
	length: 1
	[[Prototype]]: Array(0)
Output nodes: Array(1)
	0: "output_0"
	length: 1
	[[Prototype]]: Array(0)
Object
	message: "Cannot read properties of undefined (reading 'executionProviders')"
	name: "TypeError"
	stack: "TypeError: Cannot read properties of undefined (reading 'executionProviders')\n    at initONNX 
	[[Prototype]]: Object
Initialization failed: Error: ONNX initialization failed
Error stack: Error: ONNX initialization failed

So I changed the logging to:

console.log('Available execution providers:', ort.env.providers);
or
console.log('Execution provider used by session:', options.executionProviders[0]);

The output (the same with cpu or wasm):

Creating ONNX Runtime session...
Creating session with options: {"executionProviders":["webgpu"],"graphOptimizationLevel":"all","executionMode":"sequential"}
[W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
[W:onnxruntime:, session_state.cc:1170 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
Session created successfully
Input nodes: Array(1)
	0: "image"
	length: 1
	[[Prototype]]: Array(0)
Output nodes: Array(1)
	0: "output_0"
	length: 1
	[[Prototype]]: Array(0)
Available execution providers: undefined
Model inputs: Array(1)
	0: "image"
	length: 1
	[[Prototype]]: Array(0)
Object
	message: "Cannot read properties of undefined (reading 'find')"
	name: "TypeError"
	stack: "TypeError: Cannot read properties of undefined (reading 'find')
	[[Prototype]]: Object
Initialization failed: Error: ONNX initialization failed
Error stack: Error: ONNX initialization failed

@fs-eire
Contributor

fs-eire commented Dec 2, 2024

It seems that the initialization already completed successfully.

The error message should be from this line:

//  "Cannot read properties of undefined (reading 'find')"
const info = session.inputs.find(input => input.name === name);

Which makes sense to me, because onnxruntime-web does not define an inputs property on the session object.
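
For reference, a sketch of session introspection that does work here (only the name arrays are public API; per-input shapes and types are not exposed on the session object in 1.20.x, as far as I know):

// Supported: plain string arrays of input/output names.
console.log('Input nodes:', session.inputNames);
console.log('Output nodes:', session.outputNames);

// Not supported: session.inputs and session.handler internals are undefined
// or private, which is what produced the TypeErrors earlier in this thread.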

@lstrhsu lstrhsu closed this as completed Dec 15, 2024