Multiple publishers writing on the same topic cause segfault #8
Just a thought, but why should each thread instantiate its own `Node`? In ROS, each module calls `ros::init()` only once, so there is a single node per process.
@Tobias-Fischer YARP tries to stay as generic as possible, in order to allow integration with different systems... ROS is just one system. The fact that ROS supports only one node does not mean that other systems will work in the same way; limiting YARP to one single node adds a limitation on the systems that we will be able to integrate in the future.
Sure. If YARP wants to be that generic, though, and at the same time interoperable with ROS, a single node shared by the publishers of a process should still be supported.
Actually I haven't tested/reviewed @randaz81's code, but at a very quick check, this is likely to be a bug:

```cpp
yarp::os::Node* node[num];
yarp::os::Publisher<geometry_msgs_Point> pub2[num];
yarp::os::Time::delay(1.0);
for (int i = 0; i < num; i++)
{
    char name[50];
    sprintf(name, "/testNode%d", i);
    node[i] = new yarp::os::Node(name);
    pub2[i].topic("/pubTest2");
}
```

The …
@Tobias-Fischer our current use case is a wrapper which has to publish data on a specific topic. If we use the same node for all of them, then we would have many publishers with the same node and the same topic, and that's not possible. So we identify different parts using different nodes. The point here is also to discuss what the best practice should be: is this the best we can do, or are there other ideas which may be better?
Hi @barbalberto, my suggestion was to pass a reference of the Node to the Publisher (either at instantiation or when calling `topic()`). But again, I am not very familiar with the topic, so this is just an idea and I might totally miss the point.
I think @Tobias-Fischer's proposal makes a lot of sense; we might actually consider reviewing and deprecating the current API (I haven't completely understood the difference between passing the topic to the constructor and passing it to `topic()`) and adding something like:

```cpp
yarp::os::Publisher<T>::topic(const std::string& topic, const yarp::os::Node& node);
yarp::os::Subscriber<T>::topic(const std::string& topic, const yarp::os::Node& node);
```

I wonder if it is actually possible (and if it makes sense) for a publisher to publish on 2 different nodes.
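A minimal sketch of how the proposed overload might be used, assuming it were added (neither the two-argument `topic()` nor this usage exists in the current API; the node name, topic names, and message header name are illustrative assumptions):

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/Node.h>
#include <yarp/os/Publisher.h>
#include <geometry_msgs_Point.h>  // generated ROS message type used in this thread (header name assumed)

int main()
{
    yarp::os::Network yarp;

    // A single node shared by the whole process.
    yarp::os::Node node("/myModule");

    // Several publishers explicitly attached to that node via the
    // proposed (hypothetical) two-argument overload of topic().
    yarp::os::Publisher<geometry_msgs_Point> pubA;
    yarp::os::Publisher<geometry_msgs_Point> pubB;
    pubA.topic("/pubTestA", node);  // proposed signature, not in current YARP
    pubB.topic("/pubTestB", node);  // proposed signature, not in current YARP

    return 0;
}
```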
I definitely like this solution and the API revision proposed here.
Another idea I had some time ago was to completely reverse the logic. The idea would be to create the `Node` as a sort of singleton, identified by its name: if an executable needs to publish on the same topic from different threads, it can recall the same `Node` by name. Then one can create a topic attached to that node.

This will create a topic and attach it to the node. In this way, if I have many robot wrappers all writing on the same topic, they will all end up using the same node and the same port. Basically we can run those two calls (recall the node, attach the topic) as many times as we want; the result will always be one node and one port. This will reduce resource consumption and also solve, from the root, the problem of having many writers on the same topic.
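A rough sketch of what those two calls could look like, using a hypothetical name-keyed registry; none of these helpers exist in YARP today, and the class and names are purely illustrative:

```cpp
#include <yarp/os/Node.h>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Hypothetical registry: asking for the same name always returns the same
// yarp::os::Node instance, so every caller shares one node (and one port).
class NodeRegistry
{
public:
    static yarp::os::Node& get(const std::string& name)
    {
        static std::mutex mtx;
        static std::map<std::string, std::unique_ptr<yarp::os::Node>> nodes;
        std::lock_guard<std::mutex> lock(mtx);
        auto it = nodes.find(name);
        if (it == nodes.end()) {
            it = nodes.emplace(name, std::make_unique<yarp::os::Node>(name.c_str())).first;
        }
        return *it->second;
    }
};

// The "two calls" from the comment above, repeatable from any wrapper or
// thread: recall (or lazily create) the node, then attach a topic to it.
// yarp::os::Node& node = NodeRegistry::get("/robot/node");
// publisher.topic("/someTopic");  // would end up attached to that shared node
```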
I don't agree with making `Node` a singleton.

But actually, reading the description, I believe you don't actually mean a "singleton", but something that is "unique" depending on the name, is this correct? In this case I totally agree, but I believe that we might have something similar already; see the `Node` constructor. I believe that here, instead of …
Yes, that's what I meant. You are right, singleton is not the right word, sorry. AFAIK there is no strong relationship right now between the `Node` and the `Publisher`s attached to it. I agree on being as generic as possible, however the …
Relevant discussion (in that case the context is ROS2 and Gazebo plugins) about single node vs. multiple nodes in a process: ros-simulation/gazebo_ros_pkgs#797.
Let's consider the following case: multiple publishers, each attached to its own node, all writing on the same topic from different threads.
The process will segfault after a random amount of time, probably due to a race condition inside the write() method.
A gist containing a snippet of code to reproduce the bug is provided below.
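The gist itself is not included in this excerpt; based on the snippet quoted earlier in the thread, a minimal sketch of the scenario (one node and one publisher per thread, all writing on the same topic) could look roughly like this — the thread count, write rate, and message header name are assumptions:

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/Node.h>
#include <yarp/os/Publisher.h>
#include <yarp/os/Time.h>
#include <geometry_msgs_Point.h>  // generated ROS message type (header name assumed)
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    yarp::os::Network yarp;
    const int num = 4;  // illustrative number of concurrent writers

    std::vector<std::thread> writers;
    for (int i = 0; i < num; ++i) {
        writers.emplace_back([i]() {
            // Each thread owns its own node...
            char name[50];
            std::sprintf(name, "/testNode%d", i);
            yarp::os::Node node(name);

            // ...but every publisher writes on the same topic.
            yarp::os::Publisher<geometry_msgs_Point> pub;
            pub.topic("/pubTest2");

            // Keep writing until the reported crash eventually shows up.
            while (true) {
                geometry_msgs_Point& p = pub.prepare();
                p.x = static_cast<double>(i);
                pub.write();
                yarp::os::Time::delay(0.01);
            }
        });
    }

    for (auto& t : writers) {
        t.join();
    }
    return 0;
}
```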