Tensorflow.RuntimeError: 'Current graph is not default graph. Default Graph Key #4568


Closed
BillMcCrary opened this issue Dec 12, 2019 · 3 comments

@BillMcCrary

We would like to be able to load multiple models from disk on startup, then infer against them without having to reload from disk each pass.

If I try to keep two models in memory at the same time, I get an error unless I call against the last one loaded from disk.

I.e., this works (only the code needed to reproduce the issue is included):

            MLContext context = new MLContext();

            TensorFlowModel model1 = context.Model.LoadTensorFlowModel("Model1");
            TensorFlowEstimator estimator1 = model1.ScoreTensorFlowModel("serving_default_input_1", "StatefulPartitionedCall");

            TensorFlowModel model2 = context.Model.LoadTensorFlowModel("Model2");
            TensorFlowEstimator estimator2 = model2.ScoreTensorFlowModel("serving_default_input_1", "StatefulPartitionedCall");

but this throws an error like "Tensorflow.RuntimeError: 'Current graph is not default graph'":

            MLContext context = new MLContext();

            TensorFlowModel model1 = context.Model.LoadTensorFlowModel("Model1");
            TensorFlowModel model2 = context.Model.LoadTensorFlowModel("Model2");

            TensorFlowEstimator estimator1 = model1.ScoreTensorFlowModel("serving_default_input_1", "StatefulPartitionedCall");
            TensorFlowEstimator estimator2 = model2.ScoreTensorFlowModel("serving_default_input_1", "StatefulPartitionedCall");

I guess we could roll our own process that loads each model from disk into a stream once and then calls context.Model.Load() on each pass, but that still smells wrong: we would still be initializing the model over and over, just avoiding the disk reads. Is this a bug, or are we doing something wrong?
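
Something like this is what I have in mind. It is only a sketch, assuming the TensorFlow model has already been wrapped in a fitted ML.NET pipeline; pipeline, schema, and reloaded are illustrative names, not real code we have:

    // Requires: using System.IO; using Microsoft.ML;
    byte[] modelBytes;
    using (var ms = new MemoryStream())
    {
        // Serialize the fitted pipeline once at startup.
        context.Model.Save(pipeline, schema, ms);
        modelBytes = ms.ToArray();
    }

    // On each inference pass: rehydrate from memory instead of hitting disk.
    using (var ms = new MemoryStream(modelBytes))
    {
        ITransformer reloaded = context.Model.Load(ms, out DataViewSchema inputSchema);
        // ... score against "reloaded" ...
    }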

@frank-dong-ms-zz frank-dong-ms-zz added bug Something isn't working P1 Priority of the issue for triage purpose: Needs to be fixed soon. labels Jan 9, 2020
@BillMcCrary
Author

Bump. I'm seeing this issue again in a new project, but it's somewhat different: this time I cannot load a second model at all. In both of my examples above, estimator 2 now throws the "Current graph is not default graph" error.

For now, we're having to load the model from disk on each inference pass. That works, but it slows things down a bit; we would love to load multiple models from disk once and then infer whenever we want.

@harishsk harishsk added the loadsave Bugs related loading and saving data or models label Apr 29, 2020
@frank-dong-ms-zz frank-dong-ms-zz self-assigned this May 21, 2020
@frank-dong-ms-zz
Contributor

I am able to repro this in my local environment; let me dig in a little and get back to you.

@frank-dong-ms-zz
Contributor

frank-dong-ms-zz commented May 21, 2020

@BillMcCrary The error message "Current graph is not default graph" is thrown directly from TensorFlow.NET (tf.net). You can set the default graph to any graph you want using as_default; see the following for reference:
https://stackoverflow.com/questions/57824600/tensorflow-why-tf-get-default-graph-is-not-current-graph
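
To make this concrete, here is a minimal sketch of the default-graph rule as TensorFlow.NET exposes it (the graph variables are illustrative):

    using Tensorflow;

    var graph1 = new Graph().as_default();  // graph1 is now the default graph
    // ... operations created or looked up here must belong to graph1 ...

    var graph2 = new Graph().as_default();  // graph2 replaces graph1 as the default
    // touching graph1's operations at this point raises
    // "RuntimeError: Current graph is not default graph"

    graph1.as_default();                    // switch back before using graph1 again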

Below is a code sample that works in your case. (Note: we don't provide any public method or field that gives you access to the TensorFlow session, so the code below is a workaround that uses reflection.)

    using System;
    using System.Reflection;
    using Microsoft.ML;
    using Microsoft.ML.Transforms;
    using Tensorflow;

    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                MLContext context = new MLContext();

                TensorFlowModel model1 = context.Model.LoadTensorFlowModel("tensorflow_inception_graph.pb");
                TensorFlowModel model2 = context.Model.LoadTensorFlowModel("model.pb");

                // Grab model1's private Session via reflection and make its graph
                // the default graph before building model1's estimator.
                var session = GetInstanceField(typeof(TensorFlowModel), model1, "Session") as Session;
                session.graph.as_default();

                TensorFlowEstimator estimator1 = model1.ScoreTensorFlowModel(outputColumnNames: new[] { "softmax2" },
                                                inputColumnNames: new[] { "input" }, addBatchDimensionInput: true);

                // Switch the default graph to model2's graph before touching model2.
                var session2 = GetInstanceField(typeof(TensorFlowModel), model2, "Session") as Session;
                session2.graph.as_default();

                TensorFlowEstimator estimator2 = model2.ScoreTensorFlowModel("serving_default_input_1", "StatefulPartitionedCall");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        // Reads a private/internal field via reflection. Contains is used instead of
        // an exact name match so that the compiler-generated backing field of the
        // Session property ("<Session>k__BackingField") is found as well.
        internal static object GetInstanceField(Type type, object instance, string fieldName)
        {
            BindingFlags bindFlags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
            var fields = type.GetFields(bindFlags);

            foreach (var field in fields)
            {
                if (field.Name.Contains(fieldName))
                {
                    return field.GetValue(instance);
                }
            }

            throw new Exception($"Can't find field {fieldName} from object type {type}");
        }
    }
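
Note that as_default changes which graph is current, so the same switch is presumably needed before any later call that touches a given model's graph, such as Fit or prediction. A sketch, where data1 and data2 are placeholder IDataView inputs rather than variables from the code above:

        // Re-select each model's graph immediately before using that model.
        session.graph.as_default();
        ITransformer transformer1 = estimator1.Fit(data1);

        session2.graph.as_default();
        ITransformer transformer2 = estimator2.Fit(data2);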
