I don't have an answer for you, but I want to comment on this part of your question:
I suppose that this happens to requests that are still in progress when IIS decides to kill them in order to be able to restart but I was not able to prove that.
Using a simple page:
<%@ Page Language="C#" debug="true" %>
<script runat="server">
    void Page_Load() {
        // Add a custom header so we can tell whether ASP.NET produced the response
        Response.Headers.Add("Stack", "Overflow");
        // Simulate a long-running request
        System.Threading.Thread.Sleep(10000);
    }
</script>
which takes 10 seconds to execute and adds a custom header.
Some response headers for a normal 200:
Content-Type:text/html
Server:Microsoft-IIS/8.5
Stack:Overflow
X-AspNet-Version:4.0.30319
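As a quick way to capture these headers without opening the browser's dev tools, you can request the page from PowerShell. This is just a minimal sketch; the URL http://localhost/sleep.aspx is a placeholder for wherever you saved the test page:
$response = Invoke-WebRequest -Uri 'http://localhost/sleep.aspx' -UseBasicParsing
$response.StatusCode   # 200 while the pool is healthy
$response.Headers      # includes Server, X-AspNet-Version and the custom Stack header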
If you now configure the application pool not to wait that long for running requests before shutting down:
Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.applicationHost/applicationPools/add[@name='DefaultAppPool']/processModel" -name "shutdownTimeLimit" -value "00:00:03"
I'm telling IIS to shut down the DefaultAppPool after 3 seconds, regardless of any running requests.
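If you want to double-check that the value was applied, you can read it back with the matching cmdlet from the WebAdministration module (a sketch using the same path and filter as the command above):
Import-Module WebAdministration
Get-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.applicationHost/applicationPools/add[@name='DefaultAppPool']/processModel" -name "shutdownTimeLimit"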
Now hit the page again in the browser and, while it is still executing, stop the pool or run iisreset.exe.
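Stopping the pool can also be scripted, for example (a small sketch; Stop-WebAppPool is part of the WebAdministration module):
Import-Module WebAdministration
Stop-WebAppPool -Name 'DefaultAppPool'
# or bounce the whole server instead:
# iisreset.exe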
You now get a 503 (not a 500) with response headers like these:
Connection:close
Content-Length:326
Content-Type:text/html; charset=us-ascii
Date:Mon, 06 Jul 2015 04:21:31 GMT
Server:Microsoft-HTTPAPI/2.0
Notice that the Server header is no longer IIS; it is now Microsoft-HTTPAPI, which means the request was answered by http.sys, the kernel-mode part of IIS, rather than by the user-mode worker process.
Also, our custom HTTP header is gone.
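Responses that http.sys answers itself are logged separately from the normal site logs. A quick way to confirm where that 503 came from (a sketch; %SystemRoot%\System32\LogFiles\HTTPERR is the default http.sys error log location and may differ on your machine):
Get-ChildItem "$env:SystemRoot\System32\LogFiles\HTTPERR" |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Select-String " 503 "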
All of this suggests that your assumption above is incorrect and that the 500 responses must come from somewhere else.
Do you see those 500s in the IIS logs? If so, they happen before the application pool is gone.
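To check, you could search the site's W3C logs for 500s. This is a rough sketch: C:\inetpub\logs\LogFiles\W3SVC1 is only the default log directory for the first site, and the plain-text match on " 500 " may produce some false positives depending on your logging fields:
Get-ChildItem 'C:\inetpub\logs\LogFiles\W3SVC1' -Filter *.log |
    Select-String ' 500 ' |
    Select-Object -First 20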
If you can't reproduce the problem locally and can't run Failed Request Tracing on the production box, it may be difficult to find out what is going on.