In WCF it is well known that you can pass messages of up to multiple gigabytes between a client and a service; however, there are certain configuration settings you need to tweak.
In one of my projects I had a SQL table with around 100,000 records, and a service which simply read all the records and passed them to a client.
I decided to use netTcpBinding for its binary encoding, which enhances performance; of course, I had .NET-based apps on both ends.
I started out with the default configuration and immediately got an error saying the maximum message size quota was exceeded. So I set both:
– maxReceivedMessageSize = 2147483647
– maxBufferSize = 2147483647
However, it was interesting that even with the above configuration, although I had set the allowed message and buffer size to more than 2 GB, I could not pass more than around 8,000 records, a message size of about 3 MB! I was baffled for a moment, because I had granted the communication channel all the room it needed…
I then turned on message logging and tracing, and every time I tried to pass more than 8,000 records (again, about 3 MB) I got the following error:
“The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue.”
Searching around, I found on MSDN a property of the DataContractSerializer called maxItemsInObjectGraph. After I set this property to a high value, I could pass the full 100,000 records; that’s around 50 MB.
See, in my example I was obviously using the DataContractSerializer to serialize data on the wire, but more interestingly, I was filling each of my SQL records into a separate .NET object. So in the end I was trying to pass 100,000 objects from the service to the client. maxItemsInObjectGraph sets the maximum number of objects allowed in a serialized object graph, so setting this property to a high value solved the problem.
The relevant part of the service configuration was:
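The snippet below is a sketch of what that service-side app.config would look like, reconstructed from the settings discussed above; the service name, contract, address, and the names of the binding and behavior configurations are placeholders, not my actual project’s names.

```xml
<system.serviceModel>
  <services>
    <!-- "DataService" / "IDataService" / the address are illustrative placeholders -->
    <service name="DataService" behaviorConfiguration="LargeGraphBehavior">
      <endpoint address="net.tcp://localhost:8000/DataService"
                binding="netTcpBinding"
                bindingConfiguration="LargeMessageBinding"
                contract="IDataService" />
    </service>
  </services>
  <bindings>
    <netTcpBinding>
      <!-- raise the transport quotas so large messages are accepted -->
      <binding name="LargeMessageBinding"
               maxReceivedMessageSize="2147483647"
               maxBufferSize="2147483647" />
    </netTcpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="LargeGraphBehavior">
        <!-- raise the serializer's object-graph quota so 100,000 objects can be serialized -->
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```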
The relevant part of the client configuration was:
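And a matching sketch of the client-side app.config, under the same placeholder names as above. Note that maxItemsInObjectGraph must be raised on the client as well (here via an endpoint behavior), since the client’s DataContractSerializer enforces its own quota when deserializing the response.

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- the client must also accept the large response message -->
      <binding name="LargeMessageBinding"
               maxReceivedMessageSize="2147483647"
               maxBufferSize="2147483647" />
    </netTcpBinding>
  </bindings>
  <client>
    <endpoint address="net.tcp://localhost:8000/DataService"
              binding="netTcpBinding"
              bindingConfiguration="LargeMessageBinding"
              behaviorConfiguration="LargeGraphBehavior"
              contract="IDataService" />
  </client>
  <behaviors>
    <endpointBehaviors>
      <behavior name="LargeGraphBehavior">
        <!-- same object-graph quota on the client side -->
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```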