
Stuff Messages to increase throughput #107

Draft · wants to merge 5 commits into base: master

Conversation

@NoahSprenger (Contributor) commented May 11, 2024

Right now we are not being efficient with our use of the radio buffers and will benefit from "stuffing" messages into a single transmitted radio packet.
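For reference, a minimal sketch of the stuffing idea, assuming postcard serialization and a heapless packet buffer; MAX_SIZE, stuff_messages, and the exact Message handling here are placeholders, not the PR's actual implementation:

use heapless::Vec;
use messages::Message;

// Assumed packet size limit for the radio; the real constant lives elsewhere.
const MAX_SIZE: usize = 255;

// Serialize messages back to back into one packet buffer until the next one
// would no longer fit; the caller keeps whatever did not fit for the next packet.
fn stuff_messages(msgs: &[Message]) -> Result<Vec<u8, MAX_SIZE>, postcard::Error> {
    let mut packet: Vec<u8, MAX_SIZE> = Vec::new();
    for m in msgs {
        let bytes: Vec<u8, MAX_SIZE> = postcard::to_vec(m)?;
        if packet.len() + bytes.len() > MAX_SIZE {
            break; // packet full
        }
        packet.extend_from_slice(&bytes).ok(); // cannot fail: length checked above
    }
    Ok(packet)
}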

NoahSprenger marked this pull request as draft May 11, 2024 17:30
pub fn send_message(&mut self, m: Message) -> Result<(), HydraError> {
let payload: Vec<u8, 255> = postcard::to_vec(&m)?;

pub fn send_message(&mut self, payload: Vec<u8, 255>) -> Result<(), HydraError> {
NoahSprenger (Contributor, Author):

Could also create another send function that takes just a Message type and matches it to the needed size defined by the consts in the messages library. We should have a way to send messages that isn't stuffing.
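A hedged sketch of what that could look like, matching the Message variant to per-variant size constants; RadioManager, send_single, m.data, MAX_SENSOR_SIZE, and MAX_COMMAND_SIZE are assumed names here, only MAX_HEALTH_SIZE and HydraError appear in the diff:

use heapless::Vec;
use messages::{Data, Message, MAX_COMMAND_SIZE, MAX_HEALTH_SIZE, MAX_SENSOR_SIZE};

impl RadioManager {
    // Convenience path for callers with a single Message that should bypass stuffing:
    // serialize it and hand the payload to send_message.
    pub fn send_single(&mut self, m: Message) -> Result<(), HydraError> {
        let payload: Vec<u8, 255> = postcard::to_vec(&m)?;
        // Check against the per-variant maximum from the messages library (assumed consts).
        let max = match &m.data {
            Data::Sensor(_) => MAX_SENSOR_SIZE,
            Data::Health(_) => MAX_HEALTH_SIZE,
            Data::Command(_) => MAX_COMMAND_SIZE,
            _ => 255,
        };
        debug_assert!(payload.len() <= max);
        self.send_message(payload)
    }
}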

]
}

pub fn clone_states(&self) -> [Option<StateData>; 1] {
NoahSprenger (Contributor, Author):

We should still be sending state messages, and I think we do, since we don't treat them differently from any other received message (with the exception of commands).

pub gps_pos_2: Option<Message>,
pub gps_pos_acc: Option<Message>,
pub state: Option<StateData>,
pub message_queue: HistoryBuffer<Message, 32>,
NoahSprenger (Contributor, Author):

Why 32? Is this enough?

@seofernando25 (Contributor) commented May 15, 2024:

I suppose the only way to find out is to see how fast they're being filled and emptied.

We could run the program on actual hardware and save metrics about the buffer size (max size, min size, 10th percentile, 90th percentile, etc.).

By "save metrics" I mean we could also send the current message_queue size to the ground station as telemetry data and analyze it :p

I would try 4x the minimum number of messages so the queue isn't filled instantly (maybe 16*4 = 64).
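One way to get those numbers, sketched under the assumption of a hypothetical wrapper around the queue; MeteredQueue and high_water_mark are made-up names:

use heapless::HistoryBuffer;
use messages::Message;

// Tracks how full the queue has ever been so the value can be sent down as telemetry.
pub struct MeteredQueue<const N: usize> {
    queue: HistoryBuffer<Message, N>,
    high_water: usize,
}

impl<const N: usize> MeteredQueue<N> {
    pub fn new() -> Self {
        Self { queue: HistoryBuffer::new(), high_water: 0 }
    }

    pub fn push(&mut self, m: Message) {
        self.queue.write(m);
        // HistoryBuffer overwrites the oldest entry once full, so a high-water mark
        // equal to N means messages are already being overwritten.
        self.high_water = self.high_water.max(self.queue.len());
    }

    // Report this to the ground station to decide whether 32 (or 64) is enough.
    pub fn high_water_mark(&self) -> usize {
        self.high_water
    }
}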


messages::sensor::SensorData::EkfNav2(_) => {
self.ekf_nav_2 = Some(data);
messages::Data::Health(_) => {
if bytes.len() + MAX_HEALTH_SIZE <= MAX_SIZE {
@seofernando25 (Contributor) commented May 15, 2024:

Not about the PR in question, but I noticed that these MAX_????_SIZE constants are hardcoded.

Is there a macro that can infer them so it's done automatically?

Maybe a size_of plus offset_of?

@NoahSprenger (Contributor, Author) commented May 15, 2024:

I really like this idea; my only worry is whether we can keep this in sync with the mavlink definitions. @MarwanMashaly1
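A possible direction, assuming the crate already serializes with postcard: postcard has an experimental MaxSize derive (behind its experimental-derive feature) that computes a worst-case encoded size as an associated const, which could replace the hand-written constants; the Health fields below are made up for illustration:

use postcard::experimental::max_size::MaxSize;
use serde::Serialize;

// Deriving MaxSize yields a compile-time upper bound on the postcard-encoded size,
// so the constant tracks the struct definition automatically.
#[derive(Serialize, MaxSize)]
pub struct Health {
    pub battery_mv: u16,
    pub regulator_mv: u16,
    pub failures: u8,
}

// Replaces the hand-maintained MAX_HEALTH_SIZE.
pub const MAX_HEALTH_SIZE: usize = Health::POSTCARD_MAX_SIZE;

Keeping this aligned with the mavlink definitions would still depend on whatever generation step produces those, so this only removes the Rust-side duplication.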
