This repository has been archived by the owner on Dec 29, 2022. It is now read-only.

speeding up inference nmt chatbot nlp #352

Open
b7amine opened this issue Feb 12, 2019 · 0 comments
b7amine commented Feb 12, 2019

Hello, as described in the title, I'm trying to speed up inference on a Raspberry Pi. I reduced num_units from 512 to 256 and reduced the beam width to 1, and noticed a significant difference, but inference still takes about 6 seconds, which will make my humanoid robot sound stupid. Could someone help? I'm trying to build an understanding of the full model, like a map, because it seems complicated with its many levels. I already know how to work with a simple neural network, like the one in the TensorFlow tutorial for classifying images.
I'm studying AI and NLP through this project at the same time, so please keep in mind that I'm still a beginner. Thank you.
GitHub repository: https://github.com/daniel-kukiela/nmt-chatbot#introduction
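When tuning hparams like num_units and beam_width for latency, it helps to measure each change rather than eyeball it. Below is a minimal, hypothetical timing harness; `infer_fn` is a stand-in for the real nmt-chatbot inference call, which is not shown in this issue:

```python
import time

def time_inference(infer_fn, prompt, n_runs=5):
    """Run infer_fn(prompt) n_runs times and return the mean latency in seconds."""
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        infer_fn(prompt)  # the actual chatbot inference call would go here
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Hypothetical stand-in for the real model; replace with the chatbot's inference.
def fake_infer(prompt):
    time.sleep(0.01)
    return "reply"

mean_s = time_inference(fake_infer, "hello")
print(f"mean latency: {mean_s:.3f} s")
```

Comparing this mean before and after each hparams change (e.g. 512 vs 256 num_units) shows exactly how much each tweak buys on the Pi.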
hparams:

```python
hparams = {
    'attention': 'scaled_luong',
    'src': 'from',
    'tgt': 'to',
    'vocab_prefix': os.path.join(train_dir, "vocab"),
    'train_prefix': os.path.join(train_dir, "train"),
    'dev_prefix': os.path.join(train_dir, "tst2012"),
    'test_prefix': os.path.join(train_dir, "tst2013"),
    'out_dir': out_dir,
    'num_train_steps': 500000,
    'num_layers': 2,
    'num_units': 256,
    'override_loaded_hparams': True,
    'learning_rate': 0.001,
    'decay_factor': 0.99998,
    'decay_steps': 1,
    'residual': True,
    'start_decay_step': 1,
    'beam_width': 1,
    'length_penalty_weight': 1.0,
    'optimizer': 'adam',
    'encoder_type': 'bi',
    'num_translations_per_input': 30
}
```
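One detail worth checking in these hparams (an observation, not from the original post): with beam_width set to 1, the decoder performs greedy decoding and produces a single hypothesis, so a num_translations_per_input of 30 has nothing extra to select from. If that is how the nmt-chatbot inference path behaves, lowering it to match avoids any wasted work per query:

```python
# Hypothetical override: with greedy decoding there is only one hypothesis,
# so requesting 30 translations per input cannot yield more than one.
hparams_overrides = {
    'beam_width': 1,                  # greedy decoding: single hypothesis
    'num_translations_per_input': 1,  # match the number of hypotheses available
}
```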
