Posted 3 years ago

Dear Microsoft: It’s About Time

Mr. Sinofsky, I’d like to thank you. I’ve been waiting for this for a long time.

Win32 is an old warhorse. In its heyday, it was dependable, useful, and let us as developers do whatever we needed. But the world has moved on, and while development got easier and better everywhere else, Win32 stayed the same.

For good reason, mind you. Keeping backwards compatibility is hard, especially when you’re trying to release new features at the same time. And it’s not just technically hard — I’m sure people inside Microsoft have been itching to start over for a long time now.

Apple did this 10 years ago. I’m sure there was much wailing and gnashing of teeth when OS X and Cocoa were released. Apple gave their users a great platform, and gave their developers a new and better way of building for it. A hardware-accelerated UI layer. Bundles. A visual style to follow. A programming model designed from the ground up to support a new kind of application.

Now Microsoft is doing it, and many of the themes are the same. I hope it goes as well for them, because I want to write Metro apps, and the only way I can do that is if Windows 8 sells like hotcakes.

Posted 3 years ago

Native Win32 for fun and profit

All the cool kids these days are playing with awesome dynamic languages, or on cool frameworks. I’m stuck with C++ at work, but every now and then I get to do something cool with it.

That’s the Wacom radial menu, which is implemented as a fully alpha-blended window in native Win32. Something like this is dead simple in WPF, but with native code it’s a bit trickier. I used WTL, GDI+, and a handy, little-known Windows feature to get it done, and I’m going to share my secrets with you, dear reader.



WTL

Windowing frameworks are thick on the ground, and I’ve been mostly dissatisfied with the abilities of the Win32-wrapping category. However, they make something like this reusable, so what the heck.

You can grab WTL at the project home on SourceForge. For this project, I’m just taking the files in the include directory and putting them under wtl in my project directory, so I don’t get the Windows SDK versions instead.

I’ve found this to be the best way to include the WTL headers:

#define _SECURE_ATL 1
#define _ATL_NO_AUTOMATIC_NAMESPACE
#define _WTL_NO_AUTOMATIC_NAMESPACE

// These are required to be included first
#include "atlbase.h"
#include "atlwin.h"
#include "wtl/atlapp.h"

#include "wtl/atlgdi.h"   // For WTL::CDC
#include "wtl/atlframe.h" // For WTL::CFrameWindowImpl

Those defines keep the ATL and WTL classes safely ensconced in their own namespaces. This means you have to reference them as WTL::CFrameWindowImpl, but it keeps the global namespace clean, which is a major failing of windows.h.


GDI+

GDI+ is an immediate-mode drawing API that has shipped with Windows since XP, so I can use it without needing to ship yet another redistributable installer. Here’s all you need to do:

#pragma comment(lib, "gdiplus.lib")
#include <gdiplus.h>

While GDI+ is written in C++ and uses classes, its initialization isn’t RAII-friendly, so I wrote a little wrapper class:

class ScopedGdiplusInitializer
{
public:
    ScopedGdiplusInitializer()
    {
        Gdiplus::GdiplusStartupInput gdisi;
        Gdiplus::GdiplusStartup(&mGdiplusToken, &gdisi, NULL);
    }

    ~ScopedGdiplusInitializer()
    {
        Gdiplus::GdiplusShutdown(mGdiplusToken);
    }

private:
    ULONG_PTR mGdiplusToken;
};

Now I can write my main function like this:

int main()
{
    ScopedGdiplusInitializer gdiplusinit;

    // ...
}


The production code for this feature uses boost (specifically shared_ptr), but in the interest of simplicity I’ve left it out. If you use boost, or your compiler supports the std::tr1::shared_ptr introduced with TR1, I highly recommend you use it instead of raw pointers whenever possible.

A window class

Here’s where it all comes together. Meet me after the code, and I’ll explain more fully.

class AlphaWindow 
    : public WTL::CFrameWindowImpl< AlphaWindow, ATL::CWindow,
        ATL::CWinTraits< WS_POPUP, WS_EX_LAYERED > >
{
public:
    DECLARE_FRAME_WND_CLASS(_T("WTLAlphaWindow"), 0);

    virtual ~AlphaWindow()
    {
        if (IsWindow())
            DestroyWindow();
    }

    void UpdateWithBitmap(Gdiplus::Bitmap *bmp_I, POINT *windowLocation_I = NULL)
    {
        // Create a memory DC compatible with the screen
        HDC screenDC = ::GetDC(NULL);
        WTL::CDC memDC;
        memDC.CreateCompatibleDC(screenDC);
        ::ReleaseDC(NULL, screenDC);

        // Copy the input bitmap and select it into the memory DC
        WTL::CBitmap localBmp;
        bmp_I->GetHBITMAP(Gdiplus::Color(0,0,0,0), &localBmp.m_hBitmap);
        HBITMAP oldBmp = memDC.SelectBitmap(localBmp);

        // Update the display
        POINT p = {0};
        SIZE s = {bmp_I->GetWidth(), bmp_I->GetHeight()};
        BLENDFUNCTION bf = {AC_SRC_OVER, 0, 255, AC_SRC_ALPHA};
        ::UpdateLayeredWindow(m_hWnd, NULL, windowLocation_I, &s, memDC, 
            &p, RGB(0,255,255), &bf, ULW_ALPHA);

        // Cleanup
        memDC.SelectBitmap(oldBmp);
    }
};
Layered Windows

The magic ingredients for this class are the WS_* styles and the UpdateLayeredWindow call.

First, the styles. These are specified in the CWinTraits template parameters of the base class. That’s just how you declare your window’s styles in WTL. There are two:

  • WS_POPUP means this is a plain rectangular window with no decorations around the outside. No title bar, no close button, nothing.
  • WS_EX_LAYERED tells Windows that this window is different, and that it can do per-pixel alpha blending with other windows. Layered windows have been available since Windows 2000, but starting with Vista the window’s face can be cached and composited by the GPU, which makes them much more useful.

The call to UpdateLayeredWindow in UpdateWithBitmap is what tells Windows what the contents of the display are. There’s some clunky interop code here, since the GDI+ Bitmap object can’t be used directly with the GDI-oriented layered window API. I’m sure there’s a better way, but in my case the overhead of copying my smallish Bitmap into another smallish HBITMAP wasn’t a problem.

WTL complains rather loudly if a window object is destroyed before the HWND it’s wrapping is closed, so the destructor takes care of that.

Pretty Pictures

That UpdateLayeredWindow call is wrapped in a method that takes a GDI+ bitmap, so now all we need to do is provide it with one. GDI+ makes this pretty easy, especially when compared to GDI code:

using namespace Gdiplus;
Bitmap bmp(400,400);  // Create a bitmap buffer
Graphics g(&bmp);     // Context for drawing on the bitmap
// ...

All together now

Here’s the main function of my little test program.

int main()
{
    ScopedGdiplusInitializer init;

    // Create the display window
    AlphaWindow wnd;
    wnd.Create(NULL);
    wnd.SetWindowPos(NULL, 200,200, 0,0, SWP_NOSIZE | SWP_NOREPOSITION);
    wnd.ShowWindow(SW_SHOW);

    // Create a backbuffer
    Gdiplus::Bitmap bmp(400,400);

    // Clear the background of the buffer to translucent black
    Gdiplus::Graphics g(&bmp);
    g.Clear(Gdiplus::Color(64, 0,0,0));

    // This tells GDI+ to anti-alias when it draws shapes
    g.SetSmoothingMode(Gdiplus::SmoothingModeAntiAlias);

    // Draw two semi-transparent ellipses and a line
    Gdiplus::Pen redPen(Gdiplus::Color(100,255,0,0), 10.);
    Gdiplus::Pen bluePen(Gdiplus::Color(100,0,0,255), 10.);
    g.DrawEllipse(&redPen, 50,50, 200,300);
    g.DrawLine(&bluePen, 175,10, 175,390);
    g.DrawEllipse(&redPen, 100,50, 200,300);

    // Update the window's display
    wnd.UpdateWithBitmap(&bmp);

    // Wait to exit
    getchar();

    return 0;
}
I know, programmer demos of this are always ugly. Maybe one day I’ll write about how to store a PNG as a resource, and load it in for use with this. For now, you get an ugly screenshot:

Posted 4 years ago

Git: Grafting repositories

We recently evaluated replacements for our VSS-workalike source control system at work. We have about 14 years of history in our current database, though, and it seems like a good idea to preserve that.

The problem is that all that history takes time to import, and shutting all development down for a week while we get the data into the new system was just not an option. I knew there had to be a way to get this done right, and it turns out git can do exactly what we want.

Moving to git

The first step is to fetch a clean snapshot of the current source tree and stuff that into a git repo as the root commit. All the engineers can then start working from that repository with no effective downtime.
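That first step is nothing special. A minimal sketch, with made-up paths and a stand-in file in place of the real exported snapshot:

```shell
# Stand-in for the clean snapshot exported from the old system
mkdir -p snapshot && cd snapshot
echo "int main() { return 0; }" > main.c

# Stuff it into a git repo as the root commit
git init
git add .
git -c user.name="Importer" -c user.email="import@example.com" \
    commit -m "Snapshot of current source tree"
```

From here the team clones this repository and keeps working as usual.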

I’m going to handwave the data conversion, but suffice it to say we’d need a fast-import script. The interesting bit is how to get all of this historical data back into git.

Pull it in

So now our “new” repository has some work done on it, and it looks like this:

And we’ve imported all the old history into an “old” repository, which looks like this:

Now what we want to do is change the first commit in the “nuevo” repo (“New commit #1”) so that its parent is the last commit in the “old” repo (“Old #3”). Time for some voodoo:

    git fetch ../old master:ancient_history

Git lets you fetch from any other git repository, whether this repo is related to it or not! Brilliant! This leaves us with this:

Note how we renamed the old master branch to ancient_history. If we hadn’t, git would have tried to merge the two, and probably given up in disgust.
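You can convince yourself of this with two throwaway repos. Everything here (repo names, file contents, commit messages) is a stand-in:

```shell
# Build two completely unrelated repositories
git init old && cd old && git checkout -b master
echo "ancient" > file.txt
git add . && git -c user.name="Dev" -c user.email="dev@example.com" commit -m "Old #1"

cd .. && git init nuevo && cd nuevo && git checkout -b master
echo "modern" > file.txt
git add . && git -c user.name="Dev" -c user.email="dev@example.com" commit -m "New commit #1"

# Fetch the unrelated history into a differently-named branch
git fetch ../old master:ancient_history
```

After the fetch, ancient_history exists locally but shares no common ancestor with master.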

Now we still have a problem. The two trees aren’t connected, and in fact a git pull won’t even get the ancient_history branch at all. We need a way to make a connection between the two.


Grafting

Disclaimer: I know there must be an easier way.

Git has a facility called a graft, which basically fakes up a parent link between two commits. To make one, just insert a line into the .git/info/grafts file in this format:

    [ref] [parent]

Both of these need to be the full hash of the commits in question. So let’s find them:

    $ git rev-list master | tail -n 1
    d7737bffdad86dc05bbade271a9c16f8f912d3c6

    $ git rev-parse ancient_history
    463d0401a3f34bd381c456c6166e514564289ab2

    $ echo d7737bffdad86dc05bbade271a9c16f8f912d3c6 \
           463d0401a3f34bd381c456c6166e514564289ab2 \
           > .git/info/grafts

There. Now our history looks like this:

Perfect! What could go wrong?

What went wrong

Cloning this repo results in this:

Whoops. It turns out that grafts only take effect for the local repository. We can fix this with judicious application of fast-import:

$ git fast-export --all > ../export
$ mkdir ../nuevo-complete
$ cd ../nuevo-complete
$ git init
$ git fast-import < ../export
git-fast-import statistics: [...]

This effectively converts our “fake” history link into a real one. All the engineers will have to re-clone from this new repository, since the hashes will all be different, but that’s a small price to pay for no downtime and a complete history.

Posted 4 years ago

As a nerd, when I see something like this I always try to see the algorithm behind it, but this resists all of that. I can’t imagine the level of talent it takes to make something like this.

Posted 4 years ago

/^1?$|^(11+?)\1+$/ tests for primeness!

Ugly and beautiful at the same time. Beaugliful?
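It works on numbers written in unary: ^1?$ knocks out 0 and 1, and (11+?)\1+ matches any string whose length is some smaller chunk repeated two or more times, i.e. a composite. A quick sketch in Python (only the regex itself comes from the post):

```python
import re

PRIME_RE = re.compile(r'^1?$|^(11+?)\1+$')

def is_prime(n):
    # The regex matches unary strings whose length is NOT prime
    return PRIME_RE.match('1' * n) is None

print([n for n in range(1, 20) if is_prime(n)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```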

Posted 4 years ago

Git for other purposes

Git is usually used to manage source code, though its creator describes it as a stupid content tracker. When my team was looking to rework our CM system, I was able to apply my spare-time dabbling with git.

CM is a software engineering discipline all its own, and a large company can easily have several people whose full-time job it is to manage software configurations. CM basically boils down to taking several independent components and combining them into something consumable.

In our case, we have an executable produced by one team, a set of manuals that comes from another team, some kernel-mode drivers that are on their own timeline because of WHQL, and some other odds and ends. The goal of the new system was to allow things that change together to be kept in the same place, and simply refer to fixed versions of other things. For instance, version 9 of our driver might need to pull in version “2010-06-10” of the manuals.

We currently use a VSS-workalike to manage these things, but that has some problems. Our tool can’t handle file renames or reorganization, so we’re stuck with a directory that contains every historical name for a file, and the directory layout makes no sense. It’s a mess.

The problems we have with our current tool simply don’t exist with git. You can move files around and rename them with impunity, since a git commit is just a bunch of path/object mappings (rather than a listing of files, each with its own history). So I hacked together a set of scripts that would take built binaries, commit, tag, and push them to a central repository for safekeeping. Each component comes with a script that knows how to pull other components in. The whole thing uses git tags to manage component versions.

One thing we had to manage properly was nightly builds. The central repo is only used for actually-released versions of the software; pushing all the nightly builds to it would be a waste of time. However, we wanted the automated nightly to be able to use the CM machinery to create an installable package. This is a bit tricky, and at first glance seems silly: why would you commit something into a version control system if you only want some of the versions to actually be stored?

It’s not so bad. The trick is to git reset --hard origin/master before the script copies in the new versions of the files. Here’s what a fresh clone of a component looks like:

This is a component with three real versions stored on the server. Now we’ll wait four days, and see what happens to our local repository:

Pretty messy. There are four throwaway builds stored in there. We haven’t pushed any of these, so all of those commits only exist on this machine. Now we’ll do a real build:

And now we need to give the server both the commit on master as well as the new tag:

$ git push origin master
  Counting objects: 5, done.
  Writing objects: 100% (3/3), 233 bytes, done.
  Total 3 (delta 0), reused 0 (delta 0)
  Unpacking objects: 100% (3/3), done.
  To <remote>
     ad9c250..725bf06  master -> master
$ git push origin v4
Total 0 (delta 0), reused 0 (delta 0)
To <remote>
 * [new tag]         v4 -> v4

And now our gitk window looks like this:

And a fresh clone looks like this:

Success: none of the nightly builds were stored in the central repository. Git is a pretty nice source control system, and it does a decent job at configuration management as well.
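The whole dance (seed a release, make a throwaway nightly commit, reset, then push a real release) can be sketched end-to-end with stand-in repos. All names, files, and messages here are hypothetical:

```shell
# A bare stand-in for the central repo, and a component working copy
git init --bare central.git
mkdir component && cd component
git init && git checkout -b master
git remote add origin ../central.git

# Release v3 goes to the server
echo "binary v3" > product.exe
git add -A && git -c user.name="CM" -c user.email="cm@example.com" commit -m "Release v3"
git push -u origin master

# Nightly: reset away any local throwaway commit, commit tonight's build, don't push
git reset --hard origin/master
echo "nightly binary" > product.exe
git add -A && git -c user.name="CM" -c user.email="cm@example.com" commit -m "Nightly build"

# Real release: reset again, commit, tag, and push both the commit and the tag
git reset --hard origin/master
echo "binary v4" > product.exe
git add -A && git -c user.name="CM" -c user.email="cm@example.com" commit -m "Release v4"
git tag v4
git push origin master v4
```

After this, the central repo holds exactly two commits (v3 and v4); the nightly commit never left the build machine.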

Posted 4 years ago

This weekend’s food project: strawberry freezer jam!

Posted 4 years ago
This is my daughter.
