AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention' #7

Open
blindfish opened this issue Oct 10, 2017 · 12 comments

Comments

@blindfish

This part of the code:

# Create the training and inference logits
train_logits, inference_logits = seq2seq_model(
    tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(answers_vocab_to_int), 
    len(questions_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, 
    questions_vocab_to_int)

Fails with the following error. Has anyone gotten the code to work with TF 1.2.1 or 1.3?

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-42-5eb2c4ab2c25> in <module>()
     15     tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(answers_vocab_to_int),
     16     len(questions_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers,
---> 17     questions_vocab_to_int)
     18 
     19 # Create a tensor for the inference logits, needed if loading a checkpoint version of the model

<ipython-input-39-bbac5bbc5884> in seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, answers_vocab_size, questions_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, questions_vocab_to_int)
     24                                                 questions_vocab_to_int,
     25                                                 keep_prob,
---> 26                                                 batch_size)
     27     return train_logits, infer_logits

<ipython-input-38-4c62787c7f16> in decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, vocab_to_int, keep_prob, batch_size)
     24                                             output_fn,
     25                                             keep_prob,
---> 26                                             batch_size)
     27         decoding_scope.reuse_variables()
     28         infer_logits = decoding_layer_infer(encoder_state, 

<ipython-input-36-c7b11c624372> in decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob, batch_size)
      5     attention_states = tf.zeros([batch_size, 1, dec_cell.output_size])
      6 
----> 7     att_keys, att_vals, att_score_fn, att_construct_fn =             tf.contrib.seq2seq.prepare_attention(attention_states,
      8                                                  attention_option="bahdanau",
      9                                                  num_units=dec_cell.output_size)

AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention'
@shreyneil

Use TensorFlow version 1.0.0; it will only work with that.

@abhibisht89

I am also having the same issue. Do I need to downgrade TensorFlow (I'm on version 1.3.0), or is there some other solution?

@Khanquer17

Same issue here.

@shreyneil

You have to downgrade; downgrading is what worked for me.

@saurabhrathor

Could not find TensorFlow version 1.0:

C:\Users\eratsau>pip install tensorflow==1.0
Collecting tensorflow==1.0
Cache entry deserialization failed, entry ignored
Could not find a version that satisfies the requirement tensorflow==1.0 (from
versions: 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0,
1.4.0rc1, 1.4.0, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.6.0rc0)
No matching distribution found for tensorflow==1.0

@adithyaChander

Use Python 3.5 (upgrade or downgrade your Python version accordingly), then downgrade TensorFlow to 1.0.0.
This should work :)
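
For reference, the pip failure above happens because TF 1.0.0 wheels were not published for newer Python versions, so pip on Python 3.6+ reports "no matching distribution". A minimal sketch of the downgrade with conda (the environment name tf100 is just a placeholder; use "conda activate tf100" on newer conda releases):

conda create -n tf100 python=3.5
activate tf100
pip install tensorflow==1.0.0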

@sumanthd17

What is the alternative to tf.contrib.seq2seq.prepare_attention() in the TensorFlow 1.8 API?
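
prepare_attention was part of the original tf.contrib.seq2seq API and was removed when the module was rewritten around TF 1.1/1.2. From TF 1.2 onward the equivalent is an attention mechanism (e.g. BahdanauAttention) wrapped around the decoder cell with AttentionWrapper. Below is a minimal sketch of the training path, not a drop-in replacement for this notebook: enc_outputs, enc_state, source_lengths, dec_embed_input, target_lengths, rnn_size, batch_size, and vocab_size stand in for the notebook's own tensors and hyperparameters.

import tensorflow as tf

# Attention over the real encoder outputs (replaces the
# att_keys/att_vals/att_score_fn/att_construct_fn bundle).
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=rnn_size,
    memory=enc_outputs,                     # [batch, max_time, rnn_size]
    memory_sequence_length=source_lengths)  # [batch]

dec_cell = tf.contrib.rnn.LSTMCell(rnn_size)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(
    dec_cell, attention_mechanism, attention_layer_size=rnn_size)

# Start decoding from the encoder's final state.
initial_state = attn_cell.zero_state(batch_size, tf.float32).clone(
    cell_state=enc_state)

# Training decoder (replaces attention_decoder_fn_train + dynamic_rnn_decoder).
helper = tf.contrib.seq2seq.TrainingHelper(
    inputs=dec_embed_input, sequence_length=target_lengths)
decoder = tf.contrib.seq2seq.BasicDecoder(
    attn_cell, helper, initial_state,
    output_layer=tf.layers.Dense(vocab_size))
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
train_logits = outputs.rnn_output

For inference, swap TrainingHelper for GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_token) and reuse the same wrapped cell and output layer.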

@rahulv1993

rahulv1993 commented Jul 25, 2018

I am unable to downgrade TensorFlow 1.9 to 1.0 on Anaconda.
I've tried many ways.
Please help me with this, thanks!

@Shanunicorn

Shanunicorn commented Aug 13, 2018

Is there any other way to resolve this issue without downgrading TensorFlow?

@Joshsnailz

Is there any other workaround without having to downgrade my TensorFlow?

@Swaminathan-R

Is there any workaround other than downgrading TensorFlow?

@RamKumar-T-R

What is the alternative to tensorflow.contrib.seq2seq.prepare_attention() in TensorFlow 1.13 or 1.14?
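
The BahdanauAttention / AttentionWrapper sketch earlier in the thread should still apply under tf.contrib.seq2seq in 1.13 and 1.14. Note that tf.contrib was removed entirely in TensorFlow 2.0; the equivalent classes live on in the TensorFlow Addons package as tfa.seq2seq.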
