How to save time when loading the pre-trained model #82
Building the graph should be moved out of the camera-capture loop:

```python
predictions = self.model(inputs, training_schedule)
pred_flow = predictions['flow']

with tf.Session(config=config) as sess:
    saver.restore(blablabla)
    while ret:
        # Get frames and preprocessing
        flow = sess.run(pred_flow, feed_dict={
            input_a: frame_0,
            input_b: frame_1
        })[0, :, :, :]
```
Hello, thanks for your guidance. I wrote my code like this:
However, it reports that the data type is not correct: Traceback (most recent call last):
You should feed 2 placeholders to self.model(), instead of 2 constants.
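A minimal sketch of the difference, using `tf.compat.v1` and a trivial doubling op in place of `self.model` (the op and the 4x4 frame size are only illustrative, not part of this repo):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# A placeholder is a graph input that can be fed a different ndarray
# on every sess.run call; a constant is baked into the graph once,
# so feeding new frames through it is impossible.
input_a = tf.placeholder(dtype=tf.float32, shape=[1, None, None, 3])
doubled = input_a * 2.0  # stand-in for the model's output tensor

with tf.Session() as sess:
    # The same graph serves every frame; only the fed value changes.
    frame = np.full((1, 4, 4, 3), 1.0, dtype=np.float32)
    out = sess.run(doubled, feed_dict={input_a: frame})
```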
How can I create the 2 placeholders for the inputs, pass them to self.model, and then call sess.run()?
I haven't trained FlowNet2. For inference:

```python
# Build graph
input_a = tf.placeholder(dtype=tf.float32, shape=[1, None, None, 3])
input_b = tf.placeholder(dtype=tf.float32, shape=[1, None, None, 3])
inputs = {
    'input_a': input_a,
    'input_b': input_b,
}
training_schedule = LONG_SCHEDULE
predictions = self.model(inputs, training_schedule)
pred_flow = predictions['flow']

with tf.Session(config=config) as sess:
    saver.restore(blablabla)
    while ret:
        # Get frames and preprocessing
        # Feed the frames directly; there is no need to convert
        # a frame (ndarray) to a tensor.
        flow = sess.run(pred_flow, feed_dict={
            input_a: frame_0,
            input_b: frame_1
        })[0, :, :, :]
```
This worked for me to measure running time; adapt it to handle your frames.
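As a side note, a small pure-Python helper (the name `time_inference` is hypothetical, not from this repo) can wrap the `sess.run` call to measure per-frame inference time, excluding warm-up runs:

```python
import time

def time_inference(run_fn, n_warmup=3, n_iters=10):
    """Average the wall-clock time of run_fn over n_iters calls,
    after n_warmup untimed calls (the first runs are often slower
    due to graph optimization and memory allocation)."""
    for _ in range(n_warmup):
        run_fn()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_fn()
    return (time.perf_counter() - start) / n_iters

# Dummy workload for illustration; in practice run_fn would be
# lambda: sess.run(pred_flow, feed_dict={input_a: frame_0, input_b: frame_1}).
avg = time_inference(lambda: sum(range(1000)))
```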
Hello guys,
Currently, my solution restores the model for every image or frame, which is very time-consuming.
I have tried to load it once; here is the code, in src/net.py:
The API is called in src/flownet2/test.py:

```python
net = FlowNet2(mode=Mode.TEST)
net.test_new_rule(
    checkpoint='./checkpoints/FlowNet2/flownet-2.ckpt-0',
    graphpoint='./checkpoints/FlowNet2/flownet-2.ckpt-0.meta',
    cap=cap
)
```
Here is the error log:

```
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value FlowNet2/FlowNetCSS/FlowNetCS/FlowNetC/conv1/weights_1
	 [[node FlowNet2/FlowNetCSS/FlowNetCS/FlowNetC/conv1/weights_1/read (defined at /root/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py:277) = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
	 [[{{node FlowNet2/ResizeBilinear/_459}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1817_FlowNet2/ResizeBilinear", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
```
Restoring the model every time for image processing is very time-consuming.
Is there a workaround with the variable definition of FlowNet2 that I can transfer to the test file?
Thanks & regards!
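The FailedPreconditionError means some variables were neither restored nor initialized in the session that runs the graph: `saver.restore` must run inside the same session that later executes the graph, and it then acts as the initializer for every variable the checkpoint covers. A sketch of that load-once pattern, reduced to a toy one-variable model via `tf.compat.v1` (the temporary checkpoint path and the `x * w` model are stand-ins for the FlowNet2 checkpoint and network):

```python
import os
import tempfile

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# Save a toy checkpoint once (stand-in for flownet-2.ckpt-0).
with tf.Graph().as_default():
    w = tf.get_variable("w", initializer=tf.constant([2.0]))
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt)

# Build the graph and restore ONCE, then reuse the session per frame.
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, shape=[1])
    w = tf.get_variable("w", shape=[1])
    y = x * w
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # restore() initializes w from the checkpoint; calling it in a
        # different session (or skipping it) leaves w uninitialized and
        # raises FailedPreconditionError on the first sess.run.
        saver.restore(sess, ckpt)
        outs = [float(sess.run(y, feed_dict={x: [float(i)]})[0])
                for i in range(3)]
```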