
UDP Confusion: C++ sockets

Started by _WeirdCat_ July 26, 2018 02:13 PM
33 comments, last by hplus0603 6 years, 3 months ago

I need to see what's happening: the client sends commands and the server receives them, but when I want the server to send commands back, it writes a few bytes and then I get this error. This happens only after the server attempts to send commands.

/* Edited/Added

BUT if the client keeps sending data, the server can receive it forever and nothing bad happens. Maybe I'm forced to use select() to check whether the fd is ready for writing, because the fd may still be in the middle of being read and I try to send something on it instead of waiting for the read to finish? As I said below, I only use select() to check whether sockets are ready for reading. Maybe that's it??? Tell me if I'm wrong about this so I can search for something else.

*/

Additionally, the log outputs something like this:

Received cmd from client

Received cmd from client

Received cmd from client

Sending cmd to client

Instead of:

Received cmd from client

Sending cmd to client

Received cmd from client

Received cmd from client

And that's unusual; I'm a bit confused now, because it may be an issue with select itself (I only check whether sockets are ready for reading), and after select I execute the write call (and before sending I use accept() to see whether there are any pending connections).

So maybe accept, select, write, or read are asynchronous.

I always thought that once one of these functions returns, I can go and do other things, but it turns out they seem to do something in the background (as if they spawned another thread to do the work).

However, back to the code:

ProcessServerFrame is run in a thread, in a while loop, at an interval of 33 ms.



void Server::ProcessFrame()
{
    // NOTE: readfds must be rebuilt (FD_ZERO + FD_SET for every socket)
    // before each call, because select() overwrites it with only the
    // descriptors that are ready. On Linux, select() may also modify
    // timeoutval, so it must be reset each frame.
    int selr = select(fdmax + 1, &readfds, NULL, NULL, &timeoutval);
    if (selr < 0)
        return; // select failed; check errno

    // The listening socket becomes readable when a connection is pending.
    if (FD_ISSET(sockfd, &readfds))
    {
        int new_client_sock = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
        if (new_client_sock >= 0)
        {
            // accept the new connection here (add it to the client list)
        }
    }

    // Now loop through the clients and check whether they sent anything.
    TCPWindowLayerClient * current_clientfd = clients;
    while (current_clientfd != 0)
    {
        if (FD_ISSET(current_clientfd->sockfd, &readfds))
            read_length = read_from_fd(current_clientfd->sockfd, current_clientfd->pdata,
                                       &current_clientfd->ppos, max_tcp_buff_size);

        current_clientfd = current_clientfd->next;
    }
}

void ProcessServerFrame()
{
    server->ProcessFrame(); // append any pending data to each client's pdata
                            // buffer, accept or disconnect clients, and that's all

    // Loop through the clients to see whether they sent any data to the server
    // (this adds every command from a client to the server's pending command
    // list, so the other clients will receive what that client did).

    // Send any pending commands to all connected clients.
}

So it looks like I'm missing something.

A server should behave the same on non-blocking sockets as on blocking ones, IMO...

 

The code for the client looks the same, except that it only sends commands to the server:

I execute select() and check whether there's anything to read, then I call read() (there's obviously no accept function).

So the base of both client and server is:

select

if (server) accept

if (there's data to be read on an fd) read

send any pending commands to the peers

 

 

That should definitely output something other than what I get in the log :0

 

cheers

TCP is basically UDP, plus the stuff you need to reliably send something from A to B. There are decades of research behind TCP implementations. Systems like the web and FTP sites run on it. OSes use it for updates. Everything real-time from RPC to chat such as IRC uses it worldwide, with a zillion users.

Really, you're not going to make something better by using UDP and extending it yourself to get reliability.

 

9 hours ago, Alberth said:

TCP is basically UDP, plus the stuff you need to reliably send something from A to B. There are decades of research behind TCP implementations. Systems like the web and FTP sites run on it. OSes use it for updates. Everything real-time from RPC to chat such as IRC uses it worldwide, with a zillion users.

Really, you're not going to make something better by using UDP and extending it yourself to get reliability.

 

This is misleading, IMO. TCP is designed, AFAIK, for sending streams of data reliably and in order, such as files. In games, the reliable messages to be sent are often small packets, far smaller than files, and timely delivery is still important.

Quote

TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or re-transmissions of lost messages.

https://en.wikipedia.org/wiki/Transmission_Control_Protocol

Quote

At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or other unpredictable network behaviour, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data and even helps minimize network congestion to reduce the occurrence of the other problems.

These protocols are about tradeoffs: TCP seeks to reliably transmit information in order and to cope with saturated connections, with little regard for timely delivery. Minimizing the bandwidth used on a connection can come at the expense of techniques that improve delivery time.

In games, what is often needed, rather than file transfer, is timely, reliable transmission of small datagrams. For this purpose, things like requesting re-transmission and timeouts on the sender, while optimal for bandwidth, are clearly suboptimal for latency. If the message is small (on the order of a few bytes), it can make far more sense to just send it repeatedly until there is an acknowledgment from the receiver. This minimizes the delay at a small cost in bandwidth.

There are thus 3 tiers of messages:

  1. Time dependent, non-reliable packets (e.g. position updates) (UDP)
  2. Time dependent, reliable packets (e.g. player death) (UDP reliable)
  3. Time independent, files (e.g. level resources) (TCP)

In most real-world cases TCP will do the job; however, it is good practice to run things through a network simulator that can simulate lost, out-of-order, and delayed packets, and to build a system that copes well with this rather than locking up waiting for a packet that was sent 10 minutes ago that no one cares about.

/edit This is a good article on the subject:

https://gafferongames.com/post/udp_vs_tcp/

Anyway, TCP in non-blocking mode looks promising; however, I cannot find the cause of the disconnection.

Now I have found out that whenever something has to be read from an fd, I need to call read() in a loop to receive all the data; then I can actually check whether the fd is ready for writing. However...

read() returns 0 when it reaches end of file, but what the hell is that supposed to mean here? Does it mean there's no data right now and I can safely break the read loop? Or does it indicate something else?

2 hours ago, _WeirdCat_ said:

Now I have found out that whenever something has to be read from an fd, I need to call read() in a loop to receive all the data; then I can actually check whether the fd is ready for writing. However...

Read and write are (mostly) independent. Or at least, they are buffered independently. As long as you are reading fairly continuously, you should be able to write whenever there is space in the buffer (and with TCP_NODELAY enabled, that should be pretty much always at the volume of writes you are talking about).

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


But can this be causing EPIPE? This still doesn't explain losing the connection. And now I have the same behavior in reverse: the server sends commands and the client receives, lol. I guess my best bet would be to rewrite the blocking-socket handling so that in each threaded frame, after finishing all reads, I process any received packets and then send any other data to the peer.

The odd thing is that no one has ever mentioned anything about a strange EPIPE appearing as soon as one of the peers tries to communicate, even when everything has been read and the write flag is marked true, so it should be safe to write.

You know:

select(maxfd + 1, &readfds, &writefds, NULL, &timeout);

if (FD_ISSET(sockfd, &readfds))
    while (read(sockfd, ...) > 0) { /* consume */ }

if (FD_ISSET(sockfd, &writefds))
    send all pending data

should work, but apparently it doesn't.

I've managed to get good results with blocking sockets; I just need to not write too much data through the sockets and it does pretty well.

I just do:

Thread2
{
    if (flag1) continue;

    for each peer:
        read
        write

    flag1 = true;
}

Thread1
{
    if (flag1)
    {
        ProcessPackets();
        flag1 = false;
    }
}

 

Thanks for all

 

I've been writing socket-based networking applications for maybe 20 years or more, including high-throughput servers that process many thousands of connections simultaneously, and I have never witnessed anything but full duplexing (i.e., reading and writing are independent), and I have generally always used non-blocking mode.

In my experience, EPIPE means the connection is no longer writable: either the client end has closed (but not shut down, so the connection is in TIME_WAIT state -- see shutdown(2)), or I've accidentally shut down the ephemeral server socket (a bug in my code). A full write buffer, by contrast, shows up when I've tried to write more bytes than the write buffer will hold because I haven't checked to make sure the socket is available for writing. Yes, you have to check that the socket is available for writing, and you have to track how much was actually written so you know where to start the send from on the next write() call. That's why the API is the way it is; it's not just to fatten the documentation.

When read() returns 0 on a TCP socket, it means the peer has closed the connection; nothing more will ever arrive on that fd. "No data right now" on a non-blocking socket shows up instead as read() returning -1 with errno set to EAGAIN/EWOULDBLOCK; in that case, go back to waiting on select(). Of course, a successful read() does not mean the message was completely received: with TCP you need your own protocol on top of the stream to know when your message is complete (send a byte count, add a termination marker, use hard-coded sizes, whatever).

For UDP you still need to check whether the socket is ready, but sending or receiving part of a message makes no sense: either you get/send the whole datagram or it gets discarded. You probably also want to use recv*()/send*() for UDP instead of read()/write().

Anyway, good luck.

Stephen M. Webb
Professional Free Software Developer

So from what you say, it seems I have a problem with write.

But I do: int wb = write(sockfd, &pdata[count], size_t(len_to_write));

and then in the while loop I do len_to_write = len_to_write - wb; so I shouldn't write more than I have, since I don't even set the max TCP buffer size anywhere else.

 

 

Now:

"I've tried to write more bytes than the write buffer will hold because I haven't checked to make sure the socket is available for writing."

Well, I am checking whether the fd is ready for writing, and then I do a while loop with write() until the whole message (a string of, say, 2000 characters) is written, so there could be a problem there:

 



const int max_write_fails = 180;

// Writes cmdlen bytes from pdata to sockfd, looping until everything is
// sent or too many attempts fail. Returns 1 on success, -1 on failure.
inline int SendStrToFd(unsigned char * pdata, int cmdlen, int sockfd)
{
	bool sentall = false;
	int count = 0;        // bytes written so far
	int iteration = 0;
	int fails = 0;
	int len_to_write = cmdlen;
	while (!sentall)
	{
		iteration++;
		if (count >= cmdlen) break;
		if (iteration > 1500) break;   // safety cap on total attempts
		if (len_to_write <= 0) break;  // shouldn't happen; log an exception
		int wb = write(sockfd, &pdata[count], size_t(len_to_write));
		if (wb < 0)
		{
			// Should distinguish errno here: EAGAIN/EWOULDBLOCK means
			// "buffer full, retry later"; EPIPE means the peer closed.
			fails = fails + 1;
			if (fails >= max_write_fails) break;
		}
		else
		{
			count = count + wb;
			len_to_write = len_to_write - wb;
			// ALOG("WROTE: " + IntToStr(wb) + " bytes");
		}
		if (count >= cmdlen) { sentall = true; break; }
	}
	return sentall ? 1 : -1;
}

 

This topic is closed to new replies.
