Passing postBackElement to endRequest handler

It would be nice if an endRequest() handler for the PageRequestManager had a way to get the element that triggered the post back, like the get_postBackElement() method of the BeginRequestEventArgs class. This usually isn’t necessary when you have only one UpdatePanel, but if you have more than one, you may want to take different actions depending on which panel posted.

An easy way to overcome this is to create a proxy function which forwards the original arguments along with some custom ones, in this case the element that triggered the post back. In the beginRequest() handler, we get the element that triggered the post back and create an inline function that calls the appropriate endRequest handler, like so:

function _OnBeginRequest( sender, args ) {
  var elm = args.get_postBackElement();
  var prm = Sys.WebForms.PageRequestManager.getInstance();
  // Add the correct endRequest handler for this post back
  if ( -1 != elm.id.indexOf('CmdSave') ) {
    prm.add_endRequest(function( s, a ) { _OnCmdSaveEndRequestEx(s, a, elm); });
  } else if ( -1 != elm.id.indexOf('CmdEdit') ) {
    prm.add_endRequest(function( s, a ) { _OnCmdEditEndRequestEx(s, a, elm); });
  }
}

Notice that we’re passing the add_endRequest() method of the PageRequestManager an inline function that acts as the handler. This handler then calls another function with the original handler arguments passed by the PageRequestManager, and an additional one which is the element.

A problem with this is that we’re registering a new handler every time the beginRequest() handler is called, and they’re going to add up if we don’t unregister them. We used an inline function, though, which has no name, so there’s no way to get at it by name. We can, however, use the “caller” property of the JavaScript Function object, which will be the inline function we installed as the handler. So the first thing we do when our custom handler is called is unregister the caller from the PageRequestManager.

function _OnCmdSaveEndRequestEx( sender, args, elm ) {
  // Unhook this handler from the PageRequestManager..
  Sys.WebForms.PageRequestManager.getInstance().remove_endRequest(_OnCmdSaveEndRequestEx.caller);
  // ... Save-specific work using elm ...
}

function _OnCmdEditEndRequestEx( sender, args, elm ) {
  Sys.WebForms.PageRequestManager.getInstance().remove_endRequest(_OnCmdEditEndRequestEx.caller);
  // ... Edit-specific work using elm ...
}

So now we can propagate anything that’s available in the beginRequest() handler to the endRequest() handler.

Posted in .NET, Programming

FltReleaseContext() woes…

Reference counting is a _LOT_ easier when you can actually see the reference counts at some point. COM does this wonderfully: IUnknown::AddRef() and Release() both return a “not known to be accurate but potentially helpful for debugging” result, and the documentation clearly states that the return values are intended solely for diagnostic/testing purposes. Now move on down to the kernel and things get darker. Too dark!
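As a minimal illustration (plain C++ and a made-up class, not real COM), the convention looks something like this, with the returned count being advisory only:

```cpp
#include <cassert>

// Sketch of the IUnknown-style convention: AddRef()/Release() return
// the post-operation reference count, useful only for diagnostics.
class RefCounted {
    long m_cRef;
public:
    RefCounted() : m_cRef(1) {}
    long AddRef() { return ++m_cRef; }
    long Release() {
        long cRef = --m_cRef;
        if (0 == cRef) {
            delete this;   // object is gone; cRef is all we can report
        }
        return cRef;       // diagnostic value only
    }
};
```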

The handle/stream/file/etc contexts provided by FltMgr are damn handy, but they are reference-counted objects and there’s no way to track the number of active references on one. I can understand that hiding this sort of thing is important, because hosing things in kernel mode is hardly pleasant, but returning even a stale count would help track down something that is going to end up hosing the system anyway.

So let’s say your mini-filter won’t unload, most likely because you have outstanding references you introduced by failing to call FltReleaseContext() at some point. With a return value reporting the number of outstanding references at the time of a call to FltReleaseContext(), tracking this down would be _SO_ much easier. For stream contexts on x64 (XP/2K3), the reference count seems to sit 8 bytes behind the opaque pointer the mini-filter gets back from FltAllocateContext(). I’m not sure where it is for other context types, but it’s not difficult to find.

So, this is what I ended up using…

  #define _DumpStreamContextReferenceCount( Context ) \
  { \
    if ( NULL != Context ) { \
      DbgPrint("Context(%p)!ReferenceCount=%u!" __FUNCTION__ "\n", Context, ((ULONG*)Context)[-2]); \
    } \
  }

Now you can litter these little babies all over the damn place, sort the output and end up with something that at least gives you a clue about where the leak is.

Posted in Programming

Service debugging…

Debugging services isn’t all that difficult, but one thing about it which is a pain is attaching a debugger before anything interesting happens. One little trick is to spin in a loop after the service has been created until a debugger is attached.

void _WaitForDebugger( SERVICE_STATUS_HANDLE hServiceStatus, SERVICE_STATUS& ServiceStatus ) {
  while ( !IsDebuggerPresent() ) {
    // Tell the SCM we're still starting so it doesn't time us out
    ServiceStatus.dwCurrentState = SERVICE_START_PENDING;
    ServiceStatus.dwCheckPoint++;
    ServiceStatus.dwWaitHint = 2000;
    SetServiceStatus(hServiceStatus, &ServiceStatus);
    Sleep(1000);
  }
}

The main thing to note is that the SCM should be notified on a regular basis that the service is still starting. If this isn’t done, the SCM may time out waiting for the service to start properly.

If you’re using ATL’s CAtlServiceModuleT class, then a good place to use this is in an overridden Run() method on the custom module class.

HRESULT MyServiceModule::Run( int nShowCmd ) throw() {
#ifdef _DEBUG
  _WaitForDebugger(m_hServiceStatus, m_status);
#endif /* _DEBUG */
  return __super::Run(nShowCmd);
}

Posted in Programming

Lose the DEF file…

Typically you would use a .DEF file to specify exported functions in an EXPORTS section, but you can accomplish the same thing with an embedded /EXPORT directive to the linker. I’ve found it easiest to do this directly from the function being exported.

STDAPI DllRegisterServer( ) {
  #pragma comment(linker, "/EXPORT:" __FUNCTION__ "=" __FUNCDNAME__ ",PRIVATE")
  // ... actual registration work ...
  return S_OK;
}

The __FUNCTION__ and __FUNCDNAME__ macros are built-in and resolve to the undecorated and decorated names of the function they are used in. With this syntax, an /EXPORT directive is embedded in the object file, which ends up as “/EXPORT:DllRegisterServer=_DllRegisterServer@0,PRIVATE” for x86 builds and “/EXPORT:DllRegisterServer=DllRegisterServer,PRIVATE” for x64 builds. Using the decorated name ensures that the linker has enough information to map the exported name to a symbolic one. You can also pass other options supported by the /EXPORT directive in the same way that the PRIVATE option is passed above.

You can use the same trick to pass /DEFAULTLIB, /INCLUDE, /MANIFESTDEPENDENCY, /MERGE and /SECTION commands to the linker as well.
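For illustration (the library and symbol names below are placeholders of my own, not from the original code), those directives ride along in exactly the same way:

```cpp
// Hypothetical examples; these are the only directives that
// #pragma comment(linker, ...) accepts in MSVC.
#pragma comment(linker, "/DEFAULTLIB:ws2_32.lib")      // pull in a default library
#pragma comment(linker, "/INCLUDE:_SomeForcedSymbol")  // force a symbol to be kept
```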

Posted in Programming


So I’ve been trying to get good performance with minimal “resident” resource usage for JoinExt. I started out using buffered, overlapped I/O and a mapped view of the target file, so I could read into the view and let the OS write it back out to disk. Performance was good and it allowed me to use a single thread to drive both the UI and the file I/O, but the system cached the files, it was complicated to maintain, and there is no guarantee that a file system will honor a request for asynchronous I/O, so the UI could still hang.

Solving the potential for hanging the UI is easy: run the I/O in a new thread. So I focused on keeping the file data out of the system cache. That meant using unbuffered I/O, but it seemed overly complicated with all the alignment requirements. I thought maybe a file mapping was smart enough to deal with the alignment issues, and all I would need to do is memcpy from source to target. So I rewrote the code to do just that and it worked, but what appears to happen behind the scenes is that a mapping caches the files just as if they were opened for buffered access, so you lose all the benefits of doing unbuffered I/O in the first place.

The only option, then, was to deal with all the complications of unbuffered I/O. I tried a few different approaches but settled on one that I think is pretty simple to maintain and performs well for the majority of cases JoinExt will be used for. I don’t know what the hell to call it, but it’s something like having two sections in a buffer: one for holding unaligned data, and another which always starts and ends on an alignment required by whatever file is being read. When the alignment changes, the entire buffer is compacted so that everything in the aligned section moves to the unaligned section, and padding is added after the unaligned section to restart the aligned section on its required boundary. When the buffer is full, it is emptied to the target file.

When the files are large this works very well, because many aligned reads can be done to fill up the buffer before the contents must be shifted to fix up a realignment. When the files are small there are a lot of fix-ups, but for this utility small files should generally be the exception. The plus to using a large buffer is that disk I/O stays local until the buffer is filled with reads, then local again as it is emptied back to disk with writes. One other thing I ran into is that there are limits on how much data can be read or written at one time; from a few Usenet posts it seems to be around 128KB, but that works out well for providing responsive feedback on the I/O in progress.
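A rough sketch of that two-section buffer (my own reconstruction from the description above; the names, sizes, and layout details are made up, and the real code obviously uses sector-aligned reads):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Reconstructed sketch of the two-section buffer: the first
// 'unaligned' bytes hold leftover data; the aligned section starts at
// 'alignedOff', a multiple of the current file's required alignment,
// and holds 'alignedLen' bytes read with unbuffered I/O.
struct TwoSectionBuffer {
    unsigned char data[4096];
    size_t unaligned;    // bytes in the unaligned section
    size_t alignedOff;   // start of the aligned section
    size_t alignedLen;   // bytes currently in the aligned section

    // Round 'n' up to the next multiple of 'alignment'.
    static size_t RoundUp(size_t n, size_t alignment) {
        return (n + alignment - 1) / alignment * alignment;
    }

    // When the required alignment changes (e.g. a new source file),
    // fold the aligned section's contents into the unaligned section,
    // then restart the aligned section on the new boundary, leaving
    // padding after the unaligned data as needed.
    void Compact(size_t alignment) {
        memmove(data + unaligned, data + alignedOff, alignedLen);
        unaligned += alignedLen;
        alignedLen = 0;
        alignedOff = RoundUp(unaligned, alignment);
    }
};
```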

Posted in Programming

Bad style…

This is a peculiar piece of code from the MiniSpy sample of the IFS kit…

FLT_POSTOP_CALLBACK_STATUS
SpyPostOperationCallback (
    __inout PFLT_CALLBACK_DATA Data,
    __in PCFLT_RELATED_OBJECTS FltObjects,
    __in PVOID CompletionContext,
    __in FLT_POST_OPERATION_FLAGS Flags
    )
{
    ...
    //
    // Log reparse tag information if specified.
    //

    if (tagData = Data->TagData) {
        ...

It’s not a bug, but it would be much clearer to rewrite it as…

if (NULL != Data->TagData) {
    tagData = Data->TagData;
    ...
}

Posted in Programming

Native NT program debugging…

The WinDbg .kdfiles command is pretty sweet for getting fresh system binaries onto a debugging target, but there is no user-mode equivalent which stinks big time.

This recently became even more frustrating for me as I was developing a small native NT application to compact the hard disk of a Virtual PC guest OS during boot time. For those uninformed, these are programs which the Session Manager (SMSS.EXE) picks up from the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\BootExecute registry value. They are called native because they are restricted to the Native NT API, since they run before the system brings up the Win32 (or any other) subsystem.

Anyway, I started out with a breakpoint and a memory flag I could set to either exit the program without debugging it, or continue into the body of the program which may or may not end cleanly and allow the system to continue booting up. Either way, I would have to completely boot into user-mode and copy the new images over, then reboot and start all over. It was a painful ordeal.

Then suddenly this floppy disk drops out of the sky into my lap and a booming voice says “Hey buddy, why do you think you still have one of these?”.

Yeah, so SMSS works well with DOS paths and floppy drives, which isn’t surprising since the OS, file systems and volumes included, is loaded by the time a user-mode program gets to run. DOH!

The only problem I have run into, and this is with Virtual PC, is that VPC requires exclusive access to the floppy while the guest OS is running. To work around that, you have to shut the guest down whenever the program exits, either programmatically via ZwShutdownSystem(ShutdownPowerOff) or by simply shutting it down yourself.

So long live the completely outdated and entirely useless floppy drive!

Posted in Programming