OHMS BLOG

Saturday, October 31, 2009

musings

The Top Ten Signs You're At a Bad Halloween Pub Crawl

Written by me as an accessory to my David Letterman costume:



  1. Blood is everywhere - and it's spraying out of a bite wound in your neck

  2. When you buy your ticket, the salesman asks, "Are you sure?"

  3. The guy sitting next to you is dressed up as Regis

  4. Bouncer demands proof that your flu shots are up to date

  5. Party games include such classics as "bobbing for assholes"

  6. Best costume award goes to wolfman wearing a slutty nurse outfit

  7. Your friends would rather stay home and discuss Balloon Boy on Twitter

  8. Bloody Mary you ordered comes with real blood

  9. Your bus captain pulls the bus over so he can change his Depends

  10. Some guy gets up and starts reciting a lame top ten list



(As an aside, I'd like to point out that I wrote this before Friday's Late Show. The topic of that show's top ten list? "Top Ten Signs You're At A Lame Halloween Party." There's even mention of Regis, bobbing for apples, H1N1 flu, Balloon Boy, blood, and "best costume." I've been watching way too much late night TV.)

Wednesday, October 07, 2009

code

Leaky Abstractions Redux, Part I

Today at work I was stuck fixing a stubborn deadlock between two multithreaded services on Windows that were communicating with each other over TCP/IP. This problem turned out to be another instance of what Joel Spolsky calls a Leaky Abstraction.

To understand what happened here, I need to sketch out a bit of background. The first thing to note is that the server process uses an M:N threading model, as opposed to a 1:1 threading model: M connections are multiplexed onto N threads, instead of one thread being created per connection. Because N is bounded, those threads can be exhausted under pathological conditions.
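
For the unfamiliar, here's a rough sketch of one common way to build an M:N pool on Windows, using an I/O completion port. (This is just an illustration of the model, not the actual server code; kPoolSize, the sentinel shutdown, and the simulated connection keys are all invented for the example.)

#include <windows.h>
#include <stdio.h>

// N is fixed and does not grow with the number of connections M.
static const int kPoolSize = 2;
static HANDLE gPort;

static DWORD WINAPI PoolThread(LPVOID /*unused*/) {
  DWORD bytes;
  ULONG_PTR key;
  LPOVERLAPPED ov;
  // Each thread services whichever connection's work arrives next; with
  // only N threads parked here, blocking all of them exhausts the pool.
  while (GetQueuedCompletionStatus(gPort, &bytes, &key, &ov, INFINITE)) {
    if (!key) return 0;  // sentinel: shut down this thread
    printf("thread %lu servicing connection %llu\n",
           GetCurrentThreadId(), (unsigned long long)key);
  }
  return 0;
}

int main() {
  gPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, kPoolSize);
  HANDLE threads[kPoolSize];
  for (int i = 0; i < kPoolSize; ++i)
    threads[i] = CreateThread(NULL, 0, &PoolThread, NULL, 0, NULL);

  // Simulate work arriving from M = 5 connections, where M > N.
  for (ULONG_PTR conn = 1; conn <= 5; ++conn)
    PostQueuedCompletionStatus(gPort, 0, conn, NULL);
  for (int i = 0; i < kPoolSize; ++i)
    PostQueuedCompletionStatus(gPort, 0, 0, NULL);  // sentinels

  WaitForMultipleObjects(kPoolSize, threads, TRUE, INFINITE);
  return 0;
}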

The second noteworthy point is that the threads initiating the connections on the client side come from the I/O component of the system thread pool. That is, the TCP/IP client code was queued onto the system thread pool by calling QueueUserWorkItem with the WT_EXECUTEINIOTHREAD flag. The I/O component of the system thread pool uses user-mode asynchronous procedure calls (APCs) as its queuing mechanism: when a pool thread needs another work item, it performs an alertable wait until the next APC arrives. Since there is only one APC queue per thread, APCs from multiple sources all land on that single queue. When the thread goes alertable, there is no guarantee which source's APC gets dispatched; whatever is at the head of the queue is what gets invoked.
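
In code, the queuing side looks something like this minimal sketch. (MyWorkItem here is a simplified stand-in for the real myprog!MyWorkItem, which initiates a connection and performs a handshake.)

#include <windows.h>
#include <stdio.h>

static HANDLE gDone;

// Simplified stand-in for myprog!MyWorkItem; the real one initiates a
// TCP connection and performs a handshake.
static DWORD WINAPI MyWorkItem(LPVOID /*context*/) {
  printf("work item running on pool thread %lu\n", GetCurrentThreadId());
  SetEvent(gDone);
  return 0;
}

int main() {
  gDone = CreateEventW(NULL, TRUE, FALSE, NULL);
  // WT_EXECUTEINIOTHREAD sends the item to the I/O component of the system
  // thread pool. Under the hood it is delivered to a pool thread as a
  // user-mode APC, so it runs whenever that thread next waits alertably.
  QueueUserWorkItem(&MyWorkItem, NULL, WT_EXECUTEINIOTHREAD);
  WaitForSingleObject(gDone, INFINITE);
  CloseHandle(gDone);
  return 0;
}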

That's enough background, so let's take a look at the deadlocked client process. Once I had WinDbg attached, I noticed that the call stack for one of the thread pool's I/O threads looked something like this:

ntdll!KiFastSystemCallRet
ntdll!NtWaitForSingleObject+0xc
kernel32!WaitForSingleObjectEx+0xac
kernel32!WaitForSingleObject+0x12
myprog!MyHandshake+0x86
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25
mswsock!SockDoConnectReal+0x27a
mswsock!SockDoConnect+0x38a
mswsock!WSPConnect+0xbe
WS2_32!connect+0x52
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25
mswsock!SockDoConnectReal+0x27a
mswsock!SockDoConnect+0x38a
mswsock!WSPConnect+0xbe
WS2_32!connect+0x52
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25
mswsock!SockDoConnectReal+0x27a
mswsock!SockDoConnect+0x38a
mswsock!WSPConnect+0xbe
WS2_32!connect+0x52
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25
mswsock!SockDoConnectReal+0x27a
mswsock!SockDoConnect+0x38a
mswsock!WSPConnect+0xbe
WS2_32!connect+0x52
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25
mswsock!SockDoConnectReal+0x27a
mswsock!SockDoConnect+0x38a
mswsock!WSPConnect+0xbe
WS2_32!connect+0x52
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25
mswsock!SockDoConnectReal+0x27a
mswsock!SockDoConnect+0x38a
mswsock!WSPConnect+0xbe
WS2_32!connect+0x52
myprog!MyWorkItem+0x3a
ntdll!RtlpWorkerCallout+0x71
ntdll!RtlpExecuteIOWorkItem+0x29
ntdll!KiUserApcDispatcher+0x25

Notice that there are seven invocations of myprog!MyWorkItem on the call stack, and that the stack contains a pattern that repeats after every invocation of mswsock!SockDoConnectReal. Of course Winsock has no knowledge of my code, so it can't be intentionally invoking my work item. As soon as I saw this I knew what it meant: the internals of the Winsock connect API are implemented using APCs! Since both my code and Winsock were queuing APCs to this thread, whichever procedure was at the head of the APC queue got dispatched when connect went alertable internally. In this case that was my work item instead of the internal Winsock procedure. My work item then attempted another connect, repeating the cycle. Since every connection on the stack had only partially completed, this gobbled up resources on the server side until no threads were left to service the handshake at the top of the call stack. Talk about pathological conditions: there's our deadlock!
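
This re-entrancy is easy to demonstrate without Winsock at all. Here's a minimal, self-contained sketch (ReentrantApc is a made-up stand-in for my work item) showing that an APC routine which itself waits alertably gets further pending APCs dispatched nested on top of it:

#include <windows.h>
#include <stdio.h>

static int gDepth = 0;

// Each invocation queues another APC to its own thread and then waits
// alertably, analogous to MyWorkItem calling connect(), which waits
// alertably inside Winsock. The pending APC is dispatched *inside* that
// wait, nesting a fresh invocation on top of the current one.
static VOID CALLBACK ReentrantApc(ULONG_PTR /*param*/) {
  int depth = ++gDepth;
  printf("entered APC, depth %d\n", depth);
  if (depth < 3) {
    QueueUserAPC(&ReentrantApc, GetCurrentThread(), 0);
    SleepEx(100, TRUE);  // alertable wait: the queued APC runs right here
  }
  printf("leaving APC, depth %d\n", depth);
}

int main() {
  QueueUserAPC(&ReentrantApc, GetCurrentThread(), 0);
  SleepEx(INFINITE, TRUE);  // go alertable; dispatch the first APC
  return 0;
}

The enter/leave lines print in nested order (1, 2, 3, then 3, 2, 1), mirroring the repeating pattern in the stack above.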

I ended up removing the offending client-side code from the system thread pool altogether. You might be wondering why that code was even using the I/O component of the thread pool in the first place. Why didn't it just use WT_EXECUTEDEFAULT? There is a good reason for this, and ironically enough it too is because of a leaky abstraction! That tale will have to wait until Part II.
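
In the meantime, the general idea behind the fix looks something like this hypothetical sketch: run the connect logic on a dedicated thread to which nothing else queues APCs, so that when Winsock's connect waits alertably internally, the only APC that can arrive is Winsock's own.

#include <windows.h>

// Hypothetical sketch of the shape of the fix: a dedicated thread instead
// of an I/O pool thread. Nothing else queues APCs to this thread, so when
// Winsock's connect waits alertably internally, only Winsock's own
// internal APC can be dispatched there.
static DWORD WINAPI ConnectThread(LPVOID /*context*/) {
  // ... socket(), connect(), and the handshake would go here ...
  return 0;
}

int main() {
  HANDLE thread = CreateThread(NULL, 0, &ConnectThread, NULL, 0, NULL);
  WaitForSingleObject(thread, INFINITE);  // non-alertable wait
  CloseHandle(thread);
  return 0;
}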
