Thursday, March 18, 2010

Healthcare Musings Part 4 (From a Friggin Airplane)

I’m thrilled to see the whole healthcare thing coming together. I am very excited that the most important legislation of my generation will be passed soon!

I hope upon hope that the new CBO numbers, predicting some nice deficit cuts ($1.3 trillion over 20 years!), will give some Republicans the excuse they need to get with the program. Imagine how powerful it would be if a bunch of them got on board—it doesn’t have to be a party-line vote, and I think the CBO report gives them an out.

They can even rub it in the Dems’ faces by claiming their pressure and work led to a better, cheaper bill—that might even be true. Just vote for the damn thing!

What’s the worst that could happen? It’s not like it can’t be amended, repealed, etc. later. Yeah, yeah—the biggest takeover of healthcare—ah, the fear! If you’re willing to shut down Medicare, and thereby send millions of old people to join the 30+ million existing uninsured, I’ll hear you out. I respect your principles, but they don’t work both ways.

All along I’ve maintained a much simpler view of this whole healthcare issue:

The insured pay for the freeloaders already through higher premiums, obscene costs, and taxes. Why not make it official and get some benefits ourselves?

BTW: I’m posting this from a friggin airplane.

The Power of Defaults, and: SourceSafe Really is That Bad

Think about the last gadget you bought—a computer, car, GPS, phone—whatever. Chances are that it came with a million settings, each preconfigured to something. Most of us change the simple stuff to suit us (e.g. a ring tone or wallpaper), and ignore the rest.

It’s the default settings of all the non-trivial things we ignore that can be so powerful, and often terrible. Consider Microsoft Visual SourceSafe. Visual SourceSafe was a version control system that shipped with Visual Basic 6, and possibly a rev or two after that. Since everyone already had it, it was the obvious thing to turn to when small VB shops moved up to version control.

Great for VSS. Terrible for humans.

The trouble is that VSS is a terrible product. It is Microsoft’s worst product, ever (they don’t even use it themselves). Here’s why:

It actively fails where it matters most. Version control systems have one mandate above all others—protect your data. If you put code in there, it should be really, really safe. It’s in the product name for crying out loud. Unfortunately, VSS doesn’t do this. Some examples…

It has virtually no mechanism for taking backups. If your database exceeds 2 GB, the standard UI won’t work. It’s actually much worse than that—it’ll claim it worked, but actually produce a corrupted backup. That’s evil.

Then, if you actually get a valid backup, you won’t be able to restore it because the restore mechanism won’t take it. I’m not even kidding. Google it.

It becomes corrupted far too easily. Every 30 days, it’ll suggest that you run a consistency checker to detect and fix problems. Unfortunately, the documentation warns that the consistency checker, while important, can also corrupt your databases. Say whaaaa? The other maintenance utilities are equally dangerous.

It is hostile to developers, the only ones who actually care about it. It supports multiple checkouts and merging so poorly that no sane team would use anything but exclusive checkouts (of course, no sane team would use this product…). It’s so ridiculously slow to “get latest” on a tree of trivial size (minutes, not seconds) that it becomes a dreaded activity.

If I had to guess, I’d say that a bunch of interns cranked this thing out as a brute-force, fun summer project called “let’s make a POS VCS.” They had no intention of shipping an obviously malicious product, and were just as shocked (ashamed?) as the rest of us when it was sold to Microsoft.

If you’re on VSS right now, there are other options. I personally use SVN exclusively. If you have a savvy group, you might consider Mercurial, too (distributed VCSs are all the rage right now). Those are both free, by the way.

For more VSS hate, check out these crazy ads (part of a series) or this PSA.

Why am I ranting about this? Because someone was foolish enough to build a ginormous product on top of VSS, and that’s causing me all kinds of pain right now.

Thursday, March 11, 2010

Easter Eggs in Red-Gate’s SQL Compare

A coworker discovered a neat Easter egg—Oracleoids—in Red-Gate’s Schema Compare for Oracle:

[screenshot: Oracleoids]

I checked the tools I use and discovered this Easter egg in SQL Compare and Data Compare (v7.1):

[screenshot: the Easter egg in SQL Compare/Data Compare v7.1]

Which, after a few seconds, turns into one of those annoying slider puzzles:

[screenshot: the slider puzzle]

Fun stuff!

Watch out for that Distribution Database

I received some pretty serious alerts this morning about our database server running low on disk space and quickly discovered something amiss:

[screenshot: disk space alert showing the distribution database’s size]

That’s a tad higher than usual…just 100x bigger than it should be (yikes)! Allow me to illustrate how I imagine the last few months going for this gargantuan file:

[chart: how I imagine the file’s growth over the last few months]

The problem turned out to be that several of the replication SQL jobs were disabled (since…months ago). The pertinent job is probably this one, the distribution clean up job:

[screenshot: the disabled distribution clean up job]

It’s supposed to run every 10 minutes to tidy things up in the distribution database. I guess not running for three months could lead to some problems (eek!). There’s really no excuse for this—it’s embarrassing. I’m not the DBA for this system but I should have noticed; a lot of people should have noticed.

Since this task was disabled (not failing), it didn’t show up in our usual alert stream. I’m not sure how we solve that problem other than creating an alert that detects disabled jobs—or, better yet, alerts that fire whenever a job hasn’t run in the last n minutes. Something like that.
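As a rough sketch of the first idea, something like this could poll msdb for disabled jobs (the connection string is a placeholder; schedule it however your monitoring runs):

using System;
using System.Data.SqlClient;

class DisabledJobCheck
{
    static void Main()
    {
        // placeholder connection string—point this at your server's msdb
        using (var conn = new SqlConnection("Server=myServer;Database=msdb;Integrated Security=true"))
        {
            conn.Open();

            // sysjobs.enabled = 0 means someone turned the job off
            var cmd = new SqlCommand("SELECT name FROM dbo.sysjobs WHERE enabled = 0", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Disabled job: " + reader.GetString(0));
            }
        }
    }
}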

Of course adding more sensitive alerts to the sizes of certain files or the free space of certain drives will help, too.

So I guess the lesson here is to run regular checkups on your critical infrastructure and leverage monitoring services if you can to make the checkups easier and more effective.

Wednesday, March 10, 2010

Autohotkey: Wrapping the selection with a tag

AutoHotkey is a nice tool to be familiar with—it enables you to create advanced hotkeys. Today I built a very simple script that saved me a bunch of time. Here’s the skinny:

I’ve been blogging about software a lot, and these posts are often heavy with terms or phrases that I wrap in <code> tags. Unfortunately, my editor (Live Writer), as awesome as it is, doesn’t support something like this. AutoHotkey to the rescue!

Here’s the script:

#c::                       ; fire on WIN+c
AutoTrim Off               ; retain any leading and trailing whitespace on the clipboard
ClipSaved := ClipboardAll  ; save the entire clipboard so we can restore it when we're done
Clipboard :=               ; clear the clipboard so ClipWait below actually waits for the cut
SendInput ^x               ; cut the selection to the clipboard
ClipWait                   ; wait for the clipboard to contain something
SendInput <code>%clipboard%</code> ; output what was selected, surrounded by <code> tags
Clipboard := ClipSaved     ; restore the original clipboard
ClipSaved =                ; free the memory in case the clipboard was very large
return

Load this into your AHK script, hit reload, and fire away. Select some text, hit WIN+C, and watch in amazement as it is surrounded by <code> tags.

Building URLs for “SRC” Attributes in ASP.NET MVC

I’ve been told that these programming posts are not interesting or funny. For those that have no interest in programming, I offer the following jokes:

“Chuck Norris can divide by zero”

“Chuck Norris can touch MC Hammer”

“Chuck Norris CAN believe it's not butter.”

Chuck Norris Facts

Now would be a good time for you to stop reading.


Dive into ASP.NET MVC and it won’t be long before you do this in a master page:

    <link type="text/css" rel="Stylesheet" href="~/Content/all-src.min.css" />
    <script type="text/javascript" src="~/Scripts/all-src.min.js"></script>

This of course includes a couple global files—one for styles and one for scripts. Here’s the rub: it doesn’t work at all. It’ll seem like it works at first, because you’ll have nice styles and some of your scripts might even work, but it will be a short-lived experience.

Unfortunately, something funny is going on here. Those URLs are not valid—they’re more than relative (relative URLs are fine); they’re relative to the application root, denoted by the tilde (~). That tilde means nothing to the browser.

Now the funny business is that ASP.NET will automatically rewrite the link tag to include the correct relative URL by replacing the “~” with the appropriate path. It does not do that with script tags. So you try to be clever and use a web-friendly relative URL syntax like this:

    <script type="text/javascript" src="../../Scripts/all-src.min.js"></script>

Sorry, that doesn’t cut it. The “../../” will only work if the content page (which uses the master page) is nested exactly two levels deep, which won’t be true very often.

The trick is to call into Url.Content or ResolveUrl like so:

    <script type="text/javascript" src="<%=Url.Content("~/Scripts/all-src.min.js")%>"></script>

This extra step gives me a correct URL regardless of the page’s depth in my tree. So what’s the difference between Url.Content and ResolveUrl? ResolveUrl has been around forever as part of classic ASP.NET (it hangs off Control, so it’s available on every page). Url.Content, on the other hand, is relatively new and ships as part of ASP.NET MVC’s UrlHelper. Aside from that, I have no idea—if you do, please share.

Note: this trick works pretty much everywhere URLs appear—imgs, links, etc.
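For instance, the same pattern works for image tags (a quick sketch—the logo path is made up):

    <img src="<%=Url.Content("~/Content/logo.png")%>" alt="logo" />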

Monday, March 8, 2010

+/-20 Years of Computing

In less than 140 characters:

Great new things by decade: 90s: make/save data; 00s: find data; 10s: visualize data, extract greater meaning; 20s: democratize data

In detail:

1990s

The 90s were an incredible time. This was the decade where most computing focused on generating data and saving it. The conventional wisdom of the day seemed to be that, through a magical process, massive amounts of data could be used to solve anything. This was the era of the chess-playing supercomputer, Deep Blue. This was the “anything is possible” childhood of the Internet.

2000s

Then came Google. Google made the 2000s the era of search. By then, what it had started in 1998 had reached a seriously huge critical mass. Early in the decade, though, many people and companies struggled to understand the Internet. This was a scary time for me as I watched high-profile companies like Pets.com collapse after learning some hard lessons in business fundamentals (e.g. 1,000,000 views * $0/view = $0).

It’s during this time that businesses based on good foundations of revenue and purpose really grew. This was the decade of search. Entire generations learned that typing a few keywords into a box could lead you to damn near anything you wanted to know. Organizations and aspiring individuals learned that by pushing information to the Internet in a public way, they could capitalize on this traffic. This was a cool decade.

That brings us to today.

2010s

We’re starting to feel a little overloaded by the massive amounts of information available to us. The ability to find a dataset, track it over a period of time and compare it to another dataset is a fairly challenging task today. This is where I expect to see some big “Wows” in 2010—visualization of data.

I’ve seen some absolutely amazing things coming from TED lately (go watch those now) and am excited for what fiscally strong companies and universities can create. Enabling non-PhDs to extract meaning and value out of massive amounts of data has been on the radar for the last 20 years—I think we’re finally to a point where it can happen on a grand scale.

Computing power is no longer a limitation.

Connectivity is no longer a limitation.

We will see some very impressive and innovative ways to make sense and meaning of data very soon.

2020s

I think the success of data visualization will lead to passionate movements to democratize data. Around 2020, it will no longer be acceptable to conceal, hide, or privatize data. There will be a very successful movement to make government data and university data available via extremely accessible means—via APIs or methods that probably don’t exist today. Individuals will adopt the use of standards and contribute—for free—to the pool of data. This long-tail effect will be interesting if not incredible.

Organizations will jump on board and contribute to this stream by dropping the unsuccessful paywalls they constructed in the 2010s. Vague patents, which will be distorted and abused in the 2010s to monetize data, will be invalidated or expire, and the floodgates will open.

In 2028, people will start discussing the merits of a conventional census—a reinvigoration of arguments made leading up to the 2020 census. Doing away with the census—which seemed ridiculous in 2018—will have a lot of support. We’ll do one anyway (at great expense) but it’ll be the last time. Around this time (2030), near-real-time data of greater quality than today’s census numbers will be available to all of us.

In Summary

I’m excited.

Thanks to the Internet, I’ll end up back on some future incarnation of this page to see how completely and utterly wrong I was about everything (I can’t wait).

Creating/Submitting a Patch to a Subversion Repo

I’ve been told that these programming posts are not interesting or funny. For those that have no interest in programming, I offer the following jokes:

“I don't think I could stab somebody, cause I'm really bad at a Capri Sun.”

“I hope God speaks English. If I get up to heaven and have to point at a menu, I'm gonna be pissed.”

“I hope we find a cure for every major disease, because I'm tired of walking 5K. I'm pretty sure I don't have to sweat for cancer. I'll write a check.”

Daniel Tosh (via)

Now would be a good time for you to stop reading.


I use Subversion as my primary version control system. It’s awesome. I have a few users who have read-only rights to this repo and only occasionally make changes themselves. In these cases, I can’t provide commit rights to the repo, so what are we to do? Patches.

A patch is basically a change set wrapped up in a single tidy file. The patch can be created by one dev and sent to another to be applied to the VCS. SVN, like most VCSs, has very good support for patches. This post describes how to create one.

First, you should update your working directory if possible with “SVN Update”:

[screenshot: SVN Update]

Normally you would go to the Commit screen to apply your changes. Since you don’t have commit access, this won’t work, so instead right-click and go to “TortoiseSVN” > “Create Patch”:

[screenshot: TortoiseSVN > Create Patch menu]

A dialog will show you all the changes it has detected; you can double click each file to diff it. Choose the changes you want included in the patch and click “OK”:

[screenshot: Create Patch dialog listing detected changes]

Save the patch somewhere handy:

[screenshot: saving the patch file]

Send the patch file off to your committer and you’re done! Go ahead and open it up in a text editor if you want to see how these work. It’s basically a snippet of each of the pieces of code you changed, all bundled up into a nice text file.
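Here’s a made-up example of what you’ll find inside—just a unified diff of each change, with a little SVN header on top:

    Index: Program.cs
    ===================================================================
    --- Program.cs	(revision 42)
    +++ Program.cs	(working copy)
    @@ -1,4 +1,4 @@
     static void Main(string[] args)
     {
    -    Console.WriteLine("Hello");
    +    Console.WriteLine("Hello, world!");
     }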

Applying a Patch to a Subversion Repo

Of course the process of applying patches is simple, too. Right-click on the patch file and choose “TortoiseSVN” > “Apply Patch”:

[screenshot: TortoiseSVN > Apply Patch menu]

Choose the SVN working directory to which the patch should be applied:

[screenshot: choosing the working directory]

You’ll see a list of the patched files and have the opportunity to review each change:

[screenshot: reviewing the patched files]

Then right click to apply some or all of the changes into the working directory you chose.

The patch has now been applied to your working directory—now would be a good time to commit it via normal means (right-click >  “SVN Commit”):

[screenshot: SVN Commit]

It might seem a little complicated at first, but after you do it once or twice it’ll click as a convenient and effective way to share change sets.

Friday, March 5, 2010

Active Directory Look-Up

I’ve been told that these programming posts are not interesting or funny. For those that have no interest in programming, I offer the following joke:

“I was gonna get a candy bar; the button I was supposed to push was ‘HH’, so I went to the side, I found the ‘H’ button, I pushed it twice. F’in...potato chips came out, man, because they had an ‘HH’ button for Christ's sake! You need to let me know. I'm not familiar with the concept of ‘HH’. I did not learn my AA-BB-CC's. God god, dammit dammit” –Mitch Hedberg (via)

Now would be a good time for you to stop reading.


I’ve been working on an app that defers authentication to the company’s Active Directory. Rather than ask users to fill in profile info like a display name, I decided to pull this info out of the directory.

This turned out to be ridiculously easy after adding a reference to System.DirectoryServices.AccountManagement to the project:

// connect to the current domain (we're running as a domain user)
using (var PC = new PrincipalContext(ContextType.Domain))
{
    // look the user up by NT name (e.g. "domain\user")
    var UserPrincipal = Principal.FindByIdentity(PC, userName);
}

In this case, we’re passing along the user’s NT name, including the domain to help make it unique (e.g. “domain\user”) and getting back an object of type System.DirectoryServices.AccountManagement.Principal, which has some nice properties like DisplayName and Sid.

Since I’m running this app as a domain user, I don’t even have to configure the directory connection (which is nice, because that part’s a pain).
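Here’s a slightly fuller sketch of the lookup (the account name is a placeholder; UserPrincipal.FindByIdentity is the strongly-typed flavor of the same call):

using System;
using System.DirectoryServices.AccountManagement;

using (var PC = new PrincipalContext(ContextType.Domain))
{
    // "mydomain\jdoe" is a made-up account for illustration
    var User = UserPrincipal.FindByIdentity(PC, @"mydomain\jdoe");
    if (User != null)
        Console.WriteLine("{0} ({1})", User.DisplayName, User.Sid);
}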


OK, so I have the user’s name, but I’m rarely a fan of duplicating data. Still, I need a local copy of it to keep things nice and speedy (plus, hitting the domain for a person’s name all the time is a little silly).

My compromise is to update my local copy with the directory’s profile data each time the user logs in. I’m already hitting the domain to authenticate the user anyway, so it’s not any extra work. This should take care of the rare situation where someone’s name or profile info changes, without requiring anyone to do anything.

HTML/JS: Progressive Enhancement

The great thing about a semantic approach to web development is how nice and easy it can be to make progressive enhancements.

For example, suppose I have a “what’s this” help link beside some potentially confusing statement:

[screenshot: a plain “what’s this?” help link]

Nothing fancy here—just a link with a _blank target (source, demo):

<p>Hello World 
  <a href="/help/tips"
     target="_blank" 
     title="Hello World Help"
     class="help-link">(what's this?)</a></p>

It’s not very pretty, but it gets the job done without any JavaScript. Let’s make it sexy:

[screenshot: the help content in a jQuery UI dialog]

Here we’ve augmented the help link with a nice jQuery UI dialog instead of a browser popup (source, demo):

$(function(){
  
  $('.help-link').click(function(){
    
    $('<div></div>')
      .attr('title', this.title)
      .load(this.href)
      .dialog({
        modal: true,
        buttons: {
          Ok: function () {
            $(this).dialog('close');
          }
        },
        width: 600,
        height: 350    
      });
    
    return false;
  });
  
});

This doesn’t require any changes to the HTML/CSS—it uses existing attributes like href and title to wire itself up to the link. And, if JS is disabled or broken, the link will still work.

By applying incremental enhancements in this fashion, we can easily maintain decent support for less-capable browsers while keeping our code clean and elegant.

You might notice, too, that this JS snippet is looking at a class (help-link), not an id. Since it infers everything it needs to show the dialog from the link itself, this snippet will work on any link in the page tagged with the help-link class. Nice, right?

Thursday, March 4, 2010

Generating Super Shiny, Hopefully Secure Tokens

I’ve been told that these programming posts are not interesting or funny. For those that have no interest in programming, I offer the following joke:

“My friend had a burrito. The next day he said, ‘That burrito did not agree with me.’ I was like, ‘Was the disagreement over whether or not you’d have diarrhea? Let me guess who won.’” –Demetri Martin (via)

Now would be a good time for you to stop reading.


I was working on a little security-related code today that required the generation of unique and random tokens. I’m always nervous working with crypto because it’s so easy to fail.

But here I am, ready to fail.

So like I said, I need to create a bunch of tokens—blocks of text or numbers. They can’t be easily guessed and need to be unique. Let’s see if I can’t screw this up.

        /// <summary>
        /// Generate a decently long string of random characters, suitable for tokens
        /// </summary>
        /// <returns>a string of gobbledygook</returns>
        public static string GenerateKey()
        {
            var RandomBytes = new byte[
                6 * 10 // use a multiple of 6 to get a full base64 output http://en.wikipedia.org/wiki/Base64
                - 16 // compensate for the 16-byte guid we're going to add in 
                ];

            // fill the buffer with garbage (this is threadsafe)
            BetterRandom.GetBytes(RandomBytes);

            // get a guid, which will be unique enough for us
            var UniqueBytes = Guid.NewGuid().ToByteArray();

            // encode the garbage as friendly, printable characters
            var AllBytes = new byte[RandomBytes.Length + UniqueBytes.Length];
            UniqueBytes.CopyTo(AllBytes, 0);
            RandomBytes.CopyTo(AllBytes, UniqueBytes.Length);

            return Convert.ToBase64String(AllBytes);
        }
        static RandomNumberGenerator BetterRandom = new RNGCryptoServiceProvider();

Basically I take two components—a 16-byte GUID and a 44-byte chunk of random bits. The GUID would normally be enough to satisfy me, since GUIDs are pretty much unique (and the Win32 algorithm might even guarantee them to be unique on a single machine), but I was afraid they might be predictable, as they aren’t actually all that random.

How’d I come up with 44 bytes (352 bits)? It looks nice. I guessed a few numbers until the encoded output was of reasonable size. Which brings me to the Base64 conversion: it just takes the binary blob of bits and turns it into simple, printable characters so I can pass the tokens around in URLs.
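To sanity-check the sizes, here’s a quick sketch (assuming GenerateKey is in scope):

// 44 random bytes + 16 GUID bytes = 60 bytes total;
// Base64 emits 4 characters for every 3 bytes, so 60 bytes => exactly 80
// printable characters with no '=' padding
string Token = GenerateKey();
Console.WriteLine(Token.Length); // 80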

If you know of any weaknesses with this approach, please share! Something like this will eventually guard something about as valuable as a garden gnome, so I’m not too worried about it yet. It’s certainly more secure than the simple passwords most of us use.

Class Inheritance Throw Back

Another question I answered recently took me way back to my college classes on programming language theory. In those classes we studied the internals of languages in greater detail than I’d care to remember. We also built a Scheme scanner, parser, and printer, which was fun.

Anyway, the question today was: given a base class “Animal” and a derived class “Dog”, in what order are the constructors and destructors called when the child class is instantiated and destroyed?

[diagram: Animal base class with derived Dog class]

I haven’t touched C++ for a long time but after thinking about it for a few seconds, this is what would make sense to me:

new Dog():

  1. Animal()
  2. Dog()

delete Dog():

  1. ~Dog()
  2. ~Animal()

The constructor part is the same in C# (my current language of choice), so that was obvious, but we don’t really have destructors in C# (we have IDisposable), so I had to think about that logically a bit.
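Here’s a quick C# sketch to confirm the constructor half:

using System;

class Animal
{
    public Animal() { Console.WriteLine("Animal()"); }
}

class Dog : Animal
{
    public Dog() { Console.WriteLine("Dog()"); }
}

class Program
{
    static void Main()
    {
        var d = new Dog(); // prints "Animal()" then "Dog()"—the base constructor runs first
    }
}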

The gist is: constructors are executed top down, and destructors are executed bottom up. I like digging into the details and the nuances of these things…

Algorithms Throw Back

I was given a question today that really took me back. Here’s a hint: it had to do with binary search trees, data structures, and pretty printing.

I haven’t touched a BST in six years so it took some priming to get me going.

[diagram: sample binary search tree (BST builder)]

The task was to print this tree level by level. So the output should be 3, 1, 6, 2, 5, 7, 4. If you’re a programmer, I encourage you to solve this problem as an exercise before looking at my solution. It was humbling for me.

After wasting a half hour messing around with recursion, I was given a pretty nice hint to do it iteratively with a queue.

I still failed miserably with my good old paper and pencil, but afterwards set out to do it in a more comfortable environment (C#).

Here’s my basic node class (structs are for sissies):

    public class Node
    {
        public Node(int value, Node left = null, Node right = null)
        {
            Value = value; Left = left; Right = right;
        }
        public int Value { get; set; }
        public Node Left { get; set; }
        public Node Right { get; set; }
    }

And my main program:

    static void Main(string[] args)
    {
        Node n = new Node(3);
        n.Left = new Node(1, null, new Node(2));
        n.Right = new Node(6, new Node(5, new Node(4)), new Node(7));

        PrettyPrintByLevel(n);
        Console.ReadKey();
    }

And the magic:

    static void PrettyPrintByLevel(Node n)
    {
        // breadth-first traversal: a queue naturally visits the tree level by level
        Queue<Node> Nodes = new Queue<Node>();
        Nodes.Enqueue(n);

        do
        {
            // print the node at the front of the queue...
            Node QNode = Nodes.Dequeue();
            Console.WriteLine(QNode.Value);

            // ...and queue up its children for the next pass
            if (QNode.Left != null) Nodes.Enqueue(QNode.Left);
            if (QNode.Right != null) Nodes.Enqueue(QNode.Right);

        } while (Nodes.Count > 0);
    }

A quick test reveals that it works:

3 1 6 2 5 7 4

Yay! So what did I learn today? I’m rusty on the basics and need to do some more Project Euler problems.

I’ve taken this opportunity to brush up on some Java. Here’s the same app in the similar-but-different Java:

    public static void main(String[] args) {
        // build up a tree
        Node n = new Node(3, null, null);
        n.Left = new Node(1, null, new Node(2, null, null));
        n.Right = new Node(6, new Node(5, new Node(4, null, null), null), new Node(7, null, null));

        // print out the tree to the console
        PrettyPrintByLevel(n);
    }

    private static void PrettyPrintByLevel(Node n) {
        Queue<Node> Nodes = new LinkedList<Node>();
        Nodes.add(n);

        do
        {
            Node QNode = Nodes.remove();
            System.out.println(QNode.Value);

            if (QNode.Left != null) Nodes.add(QNode.Left);
            if (QNode.Right != null) Nodes.add(QNode.Right);

        } while (Nodes.peek() != null);
        // process the queue until it's empty;
        // peeking for a null element is at least as fast as
        // asking for the list's length over and over again
    }

It’s pretty much the same thing.

Tuesday, March 2, 2010

Word Document Automation with .NET 4: New Doc From Template

With all the Word automation stuff I’ve been working through, it was nice to find something easy today. I wanted to create a base template to start my docs from, so I created the template in Word as usual and saved it as a .dotx.

Then, to start new docs from this, just include it in the .Add() call:

WordApp = new Application();

// open the template as a new doc
var Doc = WordApp.Documents.Add(PathToTemplateFile);

Easy does it.
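For completeness, here’s the whole round trip the way I’d sketch it (both paths are placeholders):

WordApp = new Application();

// open the template as a new, unsaved document
var Doc = WordApp.Documents.Add(@"C:\Templates\Base.dotx");

// ...add content to Doc here...

// save the new document under its own name
Doc.SaveAs(@"C:\Docs\FromTemplate.docx");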

Leave SQL Server’s Cost Threshold for Parallelism Alone

I’ve been told that these programming posts are not interesting or funny. For those that have no interest in programming, I offer the following joke:

“I like fruit baskets because it gives you the ability to mail someone a piece of fruit without appearing insane. Like, if someone just mailed you an apple you’d be like ‘Huh? What the hell is this?’, but if it’s in a fruit basket you’re like ‘This is nice!.’” –Demetri Martin (via)

Now would be a good time for you to stop reading.


A while back I was performance-testing a new SQL Server cluster. This machine was years-better than the system it was replacing and the perf-test was showing it. Everything I threw at it was flying—this thing was screaming fast.

Then we started load testing. This was basically an integration test where we turned on everything at once and cranked it to 11. Only we didn’t get to 11 because our server fell over at 2, making me a sad panda. The server started throwing strange and never-before-seen (by me) errors about problems with memory, threads, timeouts, etc. It looked like this:

[screenshot: SQL Server errors about memory, threads, and timeouts]

We had barely loaded the machine with concurrency and it was freaking out. It’d run in spurts of blazing glory, then grind to a halt. After a lot of personal freaking out (we had a very, very tight schedule measured in minutes), I discovered the culprit: parallelism.

Normally you would think parallelism would be a good thing—many cores make light work (this machine had 16!). Unfortunately, that’s just not so in all cases. The overhead to split a query into parallel chunks, execute the chunks, and join the results back together is significant. It turns out it’s extremely significant for simple queries, dramatically increasing the cost of executing them.

Fortunately, SQL Server knows all this and has a setting for it:

[screenshot: the cost threshold for parallelism setting]

The cost threshold for parallelism. This value is in seconds: when SQL Server estimates a query will take longer than x seconds to execute, it runs the query in parallel; otherwise, it runs it serially.

Do not set this to a very low value like my DBA apparently did.
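If your DBA did, here’s a sketch of putting it back from code (the value 25 and the connection string are placeholders—the default is 5, and since it’s an advanced option it has to be exposed first):

using System.Data.SqlClient;

using (var conn = new SqlConnection("Server=myServer;Database=master;Integrated Security=true"))
{
    conn.Open();
    var sql = @"
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'cost threshold for parallelism', 25; RECONFIGURE;";
    new SqlCommand(sql, conn).ExecuteNonQuery();
}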

Monday, March 1, 2010

Ruminations: Multiple Births; Congratulations

I am afraid if I ever have twins or triplets that I would try to carry too many at one time and drop one, both, all, etc. Oops!

And that's my segue into my congratulations to B+T on their new double-dose of parenthood. Here's a list of things to keep you occupied during your downtime (infants are boring):

  • N/A

...you won't be having any of that. Sorry. Now let's extrapolate your family:

Extrapolating

At this rate (if my calculations are correct), the trend is slightly alarming:

[chart: extrapolated babies per pregnancy]

Of course the total number of kids you have will be a bit more dramatic:

[chart: extrapolated total number of kids]

The only thing I can conclude from this is that you might want to refocus your house-hunting for something in the country. With lots of room.

In any case, best of luck to you and enjoy these first few weeks as a family of five! Let me know if I can help in any way (I just might).

Moving List Items Between Lists

I often apply a push-pull pattern when working with business/data interfaces. I’m talking about something like this:

[screenshot: a typical push-pull interface]

I don’t much like these, so I came up with something similar that works well for small datasets. Here it is in action:

[screenshot: the two lists in action]

And here’s how to build it. First, the basic layout:

<body>
  <ul id='list1'>
    <li>Item 1 <img class='icon move' src='blank.png'/></li>
    <li>Item 2 <img class='icon move' src='blank.png'/></li>
    <li>Item 3 <img class='icon move' src='blank.png'/></li>
    <li>Item 4 <img class='icon move' src='blank.png'/></li>
  </ul>
  <ul id='list2'>
    <li>Item 5 <img class='icon move' src='blank.png'/></li>
    <li>Item 6 <img class='icon move' src='blank.png'/></li>
    <li>Item 7 <img class='icon move' src='blank.png'/></li>
    <li>Item 8 <img class='icon move' src='blank.png'/></li>
  </ul>
</body>

What I want to do is have each li hop to the opposing list when its move button is clicked. It’s very simple with jQuery’s live event binding (demo, source):

  $(function(){
    $('ul#list1 .move').live('click', function(){
      $(this).closest('li').appendTo('ul#list2');
    });
    
    $('ul#list2 .move').live('click', function(){
      $(this).closest('li').appendTo('ul#list1');
    });
  });

These events aren’t bound to the items themselves. Rather, they sit higher up the DOM and, through some event delegation magic, are handled for any li matching the selector (including elements appended in the future). So when an li’s move icon is clicked, the event handler walks up the DOM until it finds the li element and moves it to the other list via a call to appendTo(). This technique can be combined with jQuery UI’s sortable component for drag/drop and reorder support, too.

It’s also really easy to add animation (demo, source):

  $(function(){
    $('ul#list1 .move').live('click', function(){
      var $li = $(this).closest('li');
      $li.fadeOut('slow', function(){ $li.appendTo('ul#list2').fadeIn(); });
    });
    
    $('ul#list2 .move').live('click', function(){
      var $li = $(this).closest('li');
      $li.fadeOut('slow', function(){ $li.appendTo('ul#list1').fadeIn(); });
    });
  });

Now we’re getting to the point where some refactoring might be appropriate (demo, source):

  $.fn.pushTo = function(toSelector)
  {
    var $this = $(this);
    return $this.fadeOut('slow', function(){ $this.appendTo(toSelector).fadeIn(); });
  };
  
  $(function(){
    $('ul#list1 .move').live('click', function(){
      $(this).closest('li').pushTo('ul#list2');
    });
    
    $('ul#list2 .move').live('click', function(){
      $(this).closest('li').pushTo('ul#list1');
    });
  });

It’s not really any less code, but we’ve moved the messy animation pieces out into a chainable function. I could have moved the .closest() pieces into the function, too, but that would make the pushTo() method a little too specific to this task for my taste. Since we have the animation isolated to one line, we can easily change it to slide the items in and out (demo, source):

  $.fn.pushTo = function(toSelector)
  {
    var $this = $(this);
    $this.slideUp('slow', function(){ $this.appendTo(toSelector).slideDown(); });
    return $this;
  };

Finally, if you use something like this in a real app, use ‘fast’ for the animation speed. I’m using ‘slow’ here to make it obvious. In practice, though, it’d be very annoying.

Word Document Automation with .NET 4: Attach Styles From a Template

I’ve been working with document generation a bit lately. The latest hurdle I’ve had to jump is related to styles. I’ve found that the technique I’m using to merge styles is nice and easy but has one undesired feature: each source doc brings its own styles with it, overwriting any existing styles that have already been imported as it goes. This is nice in a lot of ways, but not what I want at the moment.

After a lot of trial and error, I’ve come up with a super simple way to apply a single set of styles to the finished document:

public static void StyleDocument(Document document, string templateFile)
{
    document.CopyStylesFromTemplate(templateFile);
}

That’s it! This will take all the styles from the given .dotx or .docx file and apply them to the given document object. If you only have a file path for the document that needs to be styled, you’ll need to open/close it too, with this overload (in addition to the above method):

public static void StyleDocument(string file, string templateFile)
{
    Application WordApp = null;

    try
    {
        WordApp = new Application();
        var Document = WordApp.Documents.Open(file);
        StyleDocument(Document, templateFile);
        Document.Save(); // persist the new styles before DisposeApp closes everything
    }
    finally
    {
        DisposeApp(WordApp);
    }
}

Where DisposeApp(…) is just a helper to cleanup my mess:

private static void DisposeApp(Application WordApp)
{
    if (WordApp != null)
    {
        foreach (var Doc in WordApp.Documents)
        {
            (Doc as _Document).Close();
        }
        (WordApp as _Application).Quit();

        System.Runtime.InteropServices.Marshal.FinalReleaseComObject(WordApp);
    }
}

This technique is far, far nicer than working with the styles manually.
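Usage is about as simple as it gets (both paths are made up):

// apply the house template's styles to a finished document
StyleDocument(@"C:\Docs\Report.docx", @"C:\Templates\House.dotx");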