How to train another dataset like SVT with the SynthText SL model #12
Your problem seems similar to issue #1. TextBoxes requires axis-aligned bounding boxes, but TextBoxes++ and SegLink require oriented bounding boxes. I was simply too lazy to implement the 'polygon' case for datasets containing only axis-aligned bounding boxes. The implementation of the 'polygon' case in the corresponding
If you find the time, pull requests are welcome :)
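The 'polygon' case the maintainer mentions could be sketched as follows — a minimal, hypothetical helper (not part of the repository) that turns an axis-aligned box into the four-corner quadrilateral that SegLink-style encoders expect:

```python
import numpy as np

def box_to_quad(box):
    """Convert an axis-aligned box [xmin, ymin, xmax, ymax] (normalized
    coordinates) into a quadrilateral [x1,y1, x2,y2, x3,y3, x4,y4]
    listed clockwise from the top-left corner.
    """
    xmin, ymin, xmax, ymax = box
    return np.array([xmin, ymin,   # top-left
                     xmax, ymin,   # top-right
                     xmax, ymax,   # bottom-right
                     xmin, ymax])  # bottom-left

# Example: a box covering the horizontal center band of the image
quad = box_to_quad([0.25, 0.4, 0.75, 0.6])
```

The function name and calling convention here are illustrative assumptions; the actual integration point would be the repository's ground-truth utility classes.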
How can I use a custom dataset that does not have oriented bounding boxes? I am using the LabelImg tool to create a custom Pascal VOC format dataset. Thanks for the reply.
@sivatejachinnam Take
I loaded the dataset using the code below,
then I ran the code,
and finally
I got this error:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_impl.py:110: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Epoch 1/100
ValueError Traceback (most recent call last)
in <module>()
44 class_weight=None,
45 max_queue_size=1,
---> 46 workers=1,
47 #use_multiprocessing=False,
48 #initial_epoch=initial_epoch,
/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your `' + object_name + '` call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1416 use_multiprocessing=use_multiprocessing,
1417 shuffle=shuffle,
-> 1418 initial_epoch=initial_epoch)
1419
1420 @interfaces.legacy_generator_methods_support
/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
179 batch_index = 0
180 while steps_done < steps_per_epoch:
--> 181 generator_output = next(output_generator)
182
183 if not hasattr(generator_output, '__len__'):
/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py in get(self)
707 "`use_multiprocessing=False, workers > 1`."
708 "For more information see issue #1638.")
--> 709 six.reraise(*sys.exc_info())
/usr/local/lib/python3.6/dist-packages/six.py in reraise(tp, value, tb)
691 if value.__traceback__ is not tb:
692 raise value.with_traceback(tb)
--> 693 raise value
694 finally:
695 value = None
/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py in get(self)
683 try:
684 while self.is_running():
--> 685 inputs = self.queue.get(block=True).get()
686 self.queue.task_done()
687 if inputs is not None:
/usr/lib/python3.6/multiprocessing/pool.py in get(self, timeout)
668 return self._value
669 else:
--> 670 raise self._value
671
672 def _set(self, i, obj):
/usr/lib/python3.6/multiprocessing/pool.py in worker(inqueue, outqueue, initializer, initargs, maxtasks, wrap_exception)
117 job, i, func, args, kwds = task
118 try:
--> 119 result = (True, func(*args, **kwds))
120 except Exception as e:
121 if wrap_exception and func is not _helper_reraises_exception:
/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py in next_sample(uid)
624 The next value of generator
uid
.625 """
--> 626 return six.next(_SHARED_SEQUENCES[uid])
627
628
/content/drive/My Drive/ssd_detectors_master/ssd_data.py in generate(self, debug, encode, seed)
565 if len(targets) == batch_size:
566 if encode:
--> 567 targets = [self.prior_util.encode(y) for y in targets]
568 targets = np.array(targets, dtype=np.float32)
569 tmp_inputs = np.array(inputs, dtype=np.float32)
/content/drive/My Drive/ssd_detectors_master/ssd_data.py in <listcomp>(.0)
565 if len(targets) == batch_size:
566 if encode:
--> 567 targets = [self.prior_util.encode(y) for y in targets]
568 targets = np.array(targets, dtype=np.float32)
569 tmp_inputs = np.array(inputs, dtype=np.float32)
/content/drive/My Drive/ssd_detectors_master/sl_utils.py in encode(self, gt_data, debug)
138 polygons = []
139 for word in gt_data:
--> 140 xy = np.reshape(word[:8], (-1, 2))
141 xy = np.copy(xy) * (self.image_w, self.image_h)
142 polygons.append(xy)
/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in reshape(a, newshape, order)
290 [5, 6]])
291 """
--> 292 return _wrapfunc(a, 'reshape', newshape, order=order)
293
294
/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
54 def _wrapfunc(obj, method, *args, **kwds):
55 try:
---> 56 return getattr(obj, method)(*args, **kwds)
57
58 # An AttributeError occurs if the object does not have
ValueError: cannot reshape array of size 5 into shape (2)
Also, how can I modify this model for a custom text or object detection dataset? I used LabelImg to create my custom dataset, so how can I use it with your model?
Thank you.
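The traceback points at the cause: `sl_utils.py` line 140 does `np.reshape(word[:8], (-1, 2))`, so the encoder expects each ground-truth row to begin with 8 polygon coordinates, while a Pascal VOC row produced from LabelImg annotations carries only 5 values (4 box coordinates plus a class), hence "cannot reshape array of size 5". A hedged sketch of a pre-training conversion — the function name and the trailing-column layout (a single class value, rather than whatever the repository's GTUtility actually stores) are assumptions:

```python
import numpy as np

def voc_to_seglink(gt_rows):
    """Convert rows of [xmin, ymin, xmax, ymax, class] (normalized,
    Pascal VOC style) into [x1,y1, x2,y2, x3,y3, x4,y4, class], the
    layout the SegLink encoder slices with word[:8].
    """
    converted = []
    for row in gt_rows:
        xmin, ymin, xmax, ymax = row[:4]
        # Corners clockwise from top-left
        quad = [xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax]
        converted.append(quad + list(row[4:]))
    return np.array(converted, dtype=np.float32)

# One Pascal-style row of size 5 becomes a row of size 9,
# so word[:8] now reshapes cleanly to (4, 2)
sample = voc_to_seglink([[0.1, 0.2, 0.5, 0.6, 1.0]])
```

Applying such a conversion inside (or just after) the dataset-loading step, before the generator hands batches to `prior_util.encode`, should avoid the reshape error.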