Remove intermediate buffering and solve the Read blocking issue #150
base: master
Conversation
Thanks for addressing this!
// tcp_shutdown(pcb, 1, 0) shuts down the receive direction only.
C.tcp_shutdown(tpcb, 1, 0)
return C.ERR_OK
}
// Reject the pbuf when the caller's buffer cannot hold the whole chain.
if len(buf) < int(p.tot_len) {
This will break if the app passes a small buffer. Instead of failing, why don't you copy as much as the buffer can hold and call C.tcp_recved(tpcb, len(buf))?
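For illustration, a minimal cgo sketch of that suggestion (partialRecv and its parameters are hypothetical names, the preamble assumes lwip's headers are on the include path, and the reply below explains why lwip does not actually permit this):

```go
package sketch

/*
#include "lwip/tcp.h"
#include "lwip/pbuf.h"
*/
import "C"
import "unsafe"

// partialRecv copies whatever fits into buf and acknowledges only those
// bytes, so lwip reopens the receive window by the amount consumed.
func partialRecv(tpcb *C.struct_tcp_pcb, p *C.struct_pbuf, buf []byte) C.err_t {
	n := int(p.tot_len)
	if len(buf) < n {
		n = len(buf)
	}
	if n == 0 {
		return C.ERR_OK
	}
	// Flatten up to n bytes of the pbuf chain into buf.
	C.pbuf_copy_partial(p, unsafe.Pointer(&buf[0]), C.u16_t(n), 0)
	C.tcp_recved(tpcb, C.u16_t(n))
	// Flaw: returning ERR_OK tells lwip the whole pbuf was consumed, so
	// any uncopied remainder is lost (and p would still need pbuf_free),
	// which is exactly what the next reply points out.
	return C.ERR_OK
}
```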
Calling tcp_recved only causes lwip to advertise a larger TCP window; there is no way to partially receive a pbuf passed to tcpRecvFn. You either accept the whole pbuf and return ERR_OK, or reject it and return some other error such as ERR_CONN; in the latter case lwip holds the pbuf for the next tcpRecvFn call.
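A hedged sketch of those two outcomes (acceptOrReject is a hypothetical name; same cgo assumptions as the sketch above):

```go
package sketch

/*
#include "lwip/tcp.h"
#include "lwip/pbuf.h"
*/
import "C"
import "unsafe"

// acceptOrReject shows the two outcomes described above: consume the
// entire pbuf and return ERR_OK, or decline it so lwip keeps the pbuf
// and delivers it again on a later tcpRecvFn call. p is assumed non-nil
// and non-empty; lwip signals a closed connection with a nil pbuf.
func acceptOrReject(tpcb *C.struct_tcp_pcb, p *C.struct_pbuf, buf []byte) C.err_t {
	if len(buf) < int(p.tot_len) {
		// Refuse the pbuf: lwip retains ownership and will redeliver
		// it, so it must not be freed here.
		return C.ERR_CONN
	}
	// Accept the pbuf: drain the whole chain, reopen the window by the
	// full amount, then free it.
	C.pbuf_copy_partial(p, unsafe.Pointer(&buf[0]), p.tot_len, 0)
	C.tcp_recved(tpcb, p.tot_len)
	C.pbuf_free(p)
	return C.ERR_OK
}
```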
I worry this new behavior would break any client that passes a buffer smaller than the TCP MSS, which seems to be the size of the pbufs. Actually it's worse: it will break if the buffer is shorter than the pbuf chain, since you're using p.tot_len, not p.len.
I see you have to consume a full pbuf in order to return OK. In that case you could iterate until the full pbuf is read, then return OK, as in the sketch below.
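What that iteration might look like, sketched under the same cgo assumptions (drainChain is hypothetical, and deliver stands in for whatever hands bytes to the application's Read):

```go
package sketch

/*
#include "lwip/tcp.h"
#include "lwip/pbuf.h"
*/
import "C"

// drainChain walks the pbuf chain segment by segment (q.len rather than
// p.tot_len) and only returns ERR_OK once the whole chain is consumed.
func drainChain(tpcb *C.struct_tcp_pcb, p *C.struct_pbuf, deliver func([]byte)) C.err_t {
	for q := p; q != nil; q = q.next {
		// deliver may block until the application's Read keeps up,
		// which is the blocking issue raised in the next reply.
		deliver(C.GoBytes(q.payload, C.int(q.len)))
	}
	C.tcp_recved(tpcb, p.tot_len)
	C.pbuf_free(p)
	return C.ERR_OK
}
```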
Then the blocking issue is re-introduced.
I was playing around with this and noticed that sometimes I'd get a panic due to a send on a closed channel. I'll see if I can reproduce and get you more details.
Here's the stack trace for the panic
@oxtoacart Thanks for the information. This is just a quick implementation trying to make the issues in #149 clearer; I'm not surprised there are bugs, and it's not going to be merged.