
Python Socket Flush

I am trying to make sure that every time I call the socket.send function my buffer is sent (flushed) to my server (which is written in C using Unix sockets).

From my understanding (and from what I see on this board), just disabling the Nagle algorithm should do it, but my server still receives my data in chunks of 4096 bytes (the default size)...

I'm using the following code with Python 2.5.4:

 from socket import socket, AF_INET, SOCK_STREAM, IPPROTO_TCP, TCP_NODELAY

 self.sck = socket( AF_INET, SOCK_STREAM )
 self.sck.setsockopt( IPPROTO_TCP, TCP_NODELAY, 1 )  # That doesn't seem to work...
 self.sck.connect( ( "127.0.0.1", 12345 ) )          # port given as an int

 while True:
      self.sck.send( "test\n" )
      self.sck.send( "" )  # Still trying to flush...

Enabling/disabling TCP_NODELAY seems to have no effect whatsoever... Is this a bug, or am I missing something?

Thanks in advance.



1 Answer


TCP does not provide any kind of guaranteed "packet" sending to the other end. You are sending data as fast as you can, and TCP is helpfully batching up the data into as much as it can send at once. Your server is receiving data 4096 bytes at a time, probably because that's what it asked for (in a recv() call).
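To see why, here is a minimal sketch of the receiving side written in Python as a stand-in for your C server (the address, port, and buffer size mirror your question; the rest is assumed). recv(4096) simply returns whatever bytes of the stream are available, up to 4096, with no memory of how many send() calls produced them:

 import socket

 srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
 srv.bind(("127.0.0.1", 12345))
 srv.listen(1)

 conn, addr = srv.accept()
 while True:
     chunk = conn.recv(4096)   # up to 4096 bytes of the stream,
     if not chunk:             # regardless of the sender's send() boundaries
         break
     print "got %d bytes" % len(chunk)
 conn.close()
 srv.close()

Run your client against this and you will see the chunk sizes vary with timing and buffering, not with your send() calls.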

TCP is a stream protocol and therefore you will have to implement some kind of framing yourself. There are no built-in message boundaries.
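One common way to do that is length-prefix framing: the sender writes a fixed-size header containing the message length, and the receiver reads the header first and then exactly that many bytes. A minimal sketch in Python 2 (the helper names send_msg, recv_msg, and recv_exactly are mine, not from your code; the C side would do the mirror-image recv() of the 4-byte header followed by the payload):

 import struct

 def send_msg(sock, data):
     # Prefix each message with a 4-byte big-endian length header.
     sock.sendall(struct.pack("!I", len(data)) + data)

 def recv_exactly(sock, n):
     # Keep calling recv() until exactly n bytes have been collected.
     buf = ""
     while len(buf) < n:
         chunk = sock.recv(n - len(buf))
         if not chunk:
             raise EOFError("connection closed mid-message")
         buf += chunk
     return buf

 def recv_msg(sock):
     (length,) = struct.unpack("!I", recv_exactly(sock, 4))
     return recv_exactly(sock, length)

With this, the receiver always knows where each message ends, no matter how TCP split or coalesced the data in transit. Newline-delimited framing works just as well if your messages are text, like the "test\n" in your example.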

