Thursday, December 9, 2010

Dynamically Generating Controls in WPF and Silverlight

Some of the Windows Forms developers I've spoken to have said that one thing they want to learn is how to dynamically create controls in WPF and Silverlight. In this post I'll show you several different ways to create controls at runtime using Silverlight 4 and WPF 4.

First, we'll start with how to create controls in XAML. From there, we'll move to dynamically-loaded XAML before we take a look at using the CLR object equivalents.

 
Creating Controls at Design Time in XAML
Creating controls using the design surface and/or XAML editor is definitely the easiest way to create your UI. You can use Expression Blend or Visual Studio, depending upon how creative you want to be. If you want a more dynamic layout, you can hide and show panels at runtime.

Here's an example layout:

<Grid Margin="10">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="100" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
            
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="*" />
    </Grid.RowDefinitions>
 
    <TextBlock Text="First Name"
                Height="19"
                Margin="0,7,31,4" />           
    <TextBox x:Name="FirstName"
                Margin="3"
                Grid.Row="0"
                Grid.Column="1" />
            
    <TextBlock Text="Last Name"
                Margin="0,7,6,3"
                Grid.Row="1"
                Height="20" />
    <TextBox x:Name="LastName"
                Margin="3"
                Grid.Row="1"
                Grid.Column="1" />
 
 
    <TextBlock Text="Date of Birth"
                Grid.Row="2"
                Margin="0,9,0,0"
                Height="21" />
    <sdk:DatePicker x:Name="DateOfBirth" 
                    Margin="3"
                    Grid.Row="2"
                    Grid.Column="1" />
 
 
    <Button x:Name="SubmitChanges"
            Grid.Row="3"
            Grid.Column="3"
            HorizontalAlignment="Right"
            VerticalAlignment="Top"
            Margin="3"
            Width="80"
            Height="25"
            Content="Save" />
</Grid>


That markup produces a simple two-column data-entry form, in both Silverlight and WPF.

(Note that you'll need to remove or remap the "sdk" prefix when using this XAML in WPF, as the DatePicker control is built in.)

Once you're familiar with working in XAML, you can easily modify the process to load the XAML at runtime to dynamically create controls.


Creating Controls at runtime using XAML Strings

In Silverlight, this block of code in the code-behind creates the same controls at runtime by loading the XAML from a string using the System.Windows.Markup.XamlReader class. This class exposes a Load method which (in Silverlight) takes a well-formed, valid XAML string and returns a visual tree.

public MainPage()
{
    InitializeComponent();
 
    Loaded += new RoutedEventHandler(MainPage_Loaded);
}
 
void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    CreateControls();
}
 
 
private void CreateControls()
{
    string xaml =
    "<Grid Margin='10' " +
        "xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation' " + 
        "xmlns:x='http://schemas.microsoft.com/winfx/2006/xaml' " + 
        "xmlns:sdk='http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk'>" + 
        "<Grid.ColumnDefinitions>" +
            "<ColumnDefinition Width='100' />" +
            "<ColumnDefinition Width='*' />" +
        "</Grid.ColumnDefinitions>" +
 
        "<Grid.RowDefinitions>" +
            "<RowDefinition Height='Auto' />" +
            "<RowDefinition Height='Auto' />" +
            "<RowDefinition Height='Auto' />" +
            "<RowDefinition Height='*' />" +
        "</Grid.RowDefinitions>" +
 
        "<TextBlock Text='First Name' Height='19' Margin='0,7,31,4' />" +
        "<TextBox x:Name='FirstName' Margin='3' Grid.Row='0' Grid.Column='1' />" +
 
        "<TextBlock Text='Last Name' Margin='0,7,6,3' Grid.Row='1' Height='20' />" +
        "<TextBox x:Name='LastName' Margin='3' Grid.Row='1' Grid.Column='1' />" +
 
        "<TextBlock Text='Date of Birth' Grid.Row='2' Margin='0,9,0,0' Height='21' />" +
        "<sdk:DatePicker x:Name='DateOfBirth' Margin='3' Grid.Row='2' Grid.Column='1' />" +
 
        "<Button x:Name='SubmitChanges' Grid.Row='3' Grid.Column='3' " +
            "HorizontalAlignment='Right' VerticalAlignment='Top' " +
            "Margin='3' Width='80' Height='25' Content='Save' />" +
    "</Grid>";
 
 
    UIElement tree = (UIElement)XamlReader.Load(xaml);
 
    LayoutRoot.Children.Add(tree);
}


Note that I needed to add the namespace definitions directly in this XAML. A chunk of XAML loaded via XamlReader.Load must be completely self-contained and syntactically correct.

The WPF XamlReader.Load call is slightly different: it has no overload that takes a string. Instead, one of its overloads takes an XmlReader:

 
StringReader stringReader = new StringReader(xaml);
XmlReader xmlReader = XmlReader.Create(stringReader);
 
UIElement tree = (UIElement)XamlReader.Load(xmlReader);
 
LayoutRoot.Children.Add(tree);

This technique also works for loading chunks of XAML from a file on the local machine, or from the result of a database query. It's also helpful for using constants (such as the extended color set) that the XAML parser in Silverlight recognizes but that aren't accessible from code.
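
For example, here's a minimal sketch of the WPF case, loading a loose XAML file from disk at runtime (the file name is hypothetical; in Silverlight you'd read the file's contents into a string first, since its Load overload only accepts a string):

// Requires: using System.IO; using System.Windows.Markup;
using (FileStream stream = File.OpenRead("DynamicForm.xaml"))
{
    // WPF's XamlReader.Load accepts a Stream directly.
    UIElement tree = (UIElement)XamlReader.Load(stream);

    LayoutRoot.Children.Add(tree);
}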

The more typical approach to dynamically creating controls, however, is to simply use the CLR objects.

 
Creating Controls at runtime using Code and CLR Objects

 
Everything you do in XAML can also be done from code. XAML is a representation of CLR objects, rather than a markup language that abstracts the underlying objects. Creating controls from code tends to be more verbose than doing the same from XAML. However, it is a familiar approach for Windows Forms developers, and a great way to handle dynamic UI.

private void CreateControlsUsingObjects()
{
    // <Grid Margin="10">
    Grid rootGrid = new Grid();
    rootGrid.Margin = new Thickness(10.0);

    // <Grid.ColumnDefinitions>
    //   <ColumnDefinition Width="100" />
    //   <ColumnDefinition Width="*" />
    // </Grid.ColumnDefinitions>

    rootGrid.ColumnDefinitions.Add(
        new ColumnDefinition() { Width = new GridLength(100.0) });
    rootGrid.ColumnDefinitions.Add(
        new ColumnDefinition() { Width = new GridLength(1, GridUnitType.Star) });

    // <Grid.RowDefinitions>
    //   <RowDefinition Height="Auto" />
    //   <RowDefinition Height="Auto" />
    //   <RowDefinition Height="Auto" />
    //   <RowDefinition Height="*" />
    // </Grid.RowDefinitions>

    rootGrid.RowDefinitions.Add(
        new RowDefinition() { Height = GridLength.Auto });
    rootGrid.RowDefinitions.Add(
        new RowDefinition() { Height = GridLength.Auto });
    rootGrid.RowDefinitions.Add(
        new RowDefinition() { Height = GridLength.Auto });
    rootGrid.RowDefinitions.Add(
        new RowDefinition() { Height = new GridLength(1, GridUnitType.Star) });

    // <TextBlock Text="First Name"
    //            Height="19"
    //            Margin="0,7,31,4" />

    var firstNameLabel = CreateTextBlock("First Name", 19, new Thickness(0, 7, 31, 4), 0, 0);
    rootGrid.Children.Add(firstNameLabel);

    // <TextBox x:Name="FirstName"
    //          Margin="3"
    //          Grid.Row="0"
    //          Grid.Column="1" />

    var firstNameField = CreateTextBox(new Thickness(3), 0, 1);
    rootGrid.Children.Add(firstNameField);

    // <TextBlock Text="Last Name"
    //            Margin="0,7,6,3"
    //            Grid.Row="1"
    //            Height="20" />

    var lastNameLabel = CreateTextBlock("Last Name", 20, new Thickness(0, 7, 6, 3), 1, 0);
    rootGrid.Children.Add(lastNameLabel);

    // <TextBox x:Name="LastName"
    //          Margin="3"
    //          Grid.Row="1"
    //          Grid.Column="1" />

    var lastNameField = CreateTextBox(new Thickness(3), 1, 1);
    rootGrid.Children.Add(lastNameField);

    // <TextBlock Text="Date of Birth"
    //            Grid.Row="2"
    //            Margin="0,9,0,0"
    //            Height="21" />

    var dobLabel = CreateTextBlock("Date of Birth", 21, new Thickness(0, 9, 0, 0), 2, 0);
    rootGrid.Children.Add(dobLabel);

    // <DatePicker x:Name="DateOfBirth"
    //             Margin="3"
    //             Grid.Row="2"
    //             Grid.Column="1" />

    DatePicker picker = new DatePicker();
    picker.Margin = new Thickness(3);
    Grid.SetRow(picker, 2);
    Grid.SetColumn(picker, 1);
    rootGrid.Children.Add(picker);

    // <Button x:Name="SubmitChanges"
    //         Grid.Row="3"
    //         Grid.Column="1"
    //         HorizontalAlignment="Right"
    //         VerticalAlignment="Top"
    //         Margin="3"
    //         Width="80"
    //         Height="25"
    //         Content="Save" />

    Button button = new Button();
    button.HorizontalAlignment = HorizontalAlignment.Right;
    button.VerticalAlignment = VerticalAlignment.Top;
    button.Margin = new Thickness(3);
    button.Width = 80;
    button.Height = 25;
    button.Content = "Save";
    Grid.SetRow(button, 3);
    Grid.SetColumn(button, 1);
    rootGrid.Children.Add(button);

    LayoutRoot.Children.Add(rootGrid);
}

private TextBlock CreateTextBlock(string text, double height, Thickness margin, int row, int column)
{
    TextBlock tb = new TextBlock()
        { Text = text, Height = height, Margin = margin };
    Grid.SetColumn(tb, column);
    Grid.SetRow(tb, row);

    return tb;
}

private TextBox CreateTextBox(Thickness margin, int row, int column)
{
    TextBox tb = new TextBox() { Margin = margin };
    Grid.SetColumn(tb, column);
    Grid.SetRow(tb, row);

    return tb;
}


You'll see the code is only slightly more verbose when expanded out. The two helper functions help minimize that. In the code, I create the entire branch of the visual tree before I add it to the root. Doing this helps minimize layout cycles you'd otherwise have if you added each item individually to the root.

I tend to put any UI interaction inside the Loaded event. However, you could place this same code inside the constructor, after the InitializeComponent call. As your code gets more complex, and relies on other UI elements to be initialized and loaded, you'll want to be smart about which function you use.

 
Event Handling

If you want to handle events, like button clicks, you'd do that like any other .NET event handler:

{
    Button button = new Button();
    ...
    button.Click += new RoutedEventHandler(button_Click);
 
    LayoutRoot.Children.Add(rootGrid);
}
 
void button_Click(object sender, RoutedEventArgs e)
{
    ...
}



Creating controls from code doesn't mean you lose the valuable ability to data bind. In some cases, especially where the binding source is hard to reference from XAML, binding is easier in code.

Binding Dynamically Created Controls

We haven't used any binding yet, so we'll need to create a binding source. For that, I created a simple shared project that targets Silverlight 4. It's a Silverlight class library project and is used by both the WPF and Silverlight examples. Remember, to use it from WPF 4 (without any additions), you'll need to use a file reference to the compiled DLL, not a project reference.

Inside that project, I created a single ViewModel class named ExampleViewModel.

 
public class ExampleViewModel : INotifyPropertyChanged
{
    private string _lastName;
    public string LastName
    {
        get { return _lastName; }
        set { _lastName = value; NotifyPropertyChanged("LastName"); }
    }

    private string _firstName;
    public string FirstName
    {
        get { return _firstName; }
        set { _firstName = value; NotifyPropertyChanged("FirstName"); }
    }

    private DateTime _dateOfBirth;
    public DateTime DateOfBirth
    {
        get { return _dateOfBirth; }
        set { _dateOfBirth = value; NotifyPropertyChanged("DateOfBirth"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void NotifyPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}


Inside the code-behind (this is a demo, after all) of the Window (or Page), initialize the viewmodel class:
private ExampleViewModel _vm = new ExampleViewModel();
 
public MainWindow()
{
    _vm.LastName = "Brown";
    _vm.FirstName = "Pete";
    _vm.DateOfBirth = DateTime.Parse("Jan 1, 1910");
 
    InitializeComponent();
 
    ...
}


Once that is done, we can create an example binding. I'm going to use the First Name TextBox and set up two-way binding with the FirstName property of the ExampleViewModel instance.

 
var firstNameField = CreateTextBox(new Thickness(3), 0, 1);
Binding firstNameBinding = new Binding();
firstNameBinding.Source = _vm;
firstNameBinding.Path = new PropertyPath("FirstName");
firstNameBinding.Mode = BindingMode.TwoWay;
firstNameField.SetBinding(TextBox.TextProperty, firstNameBinding);          
 
rootGrid.Children.Add(firstNameField);
 
The same approach to expressing a binding also works from XAML; there, the {Binding} markup extension makes the process easier.

One thing that tripped me up in this example was that I initially passed TextBlock.TextProperty (rather than TextBox.TextProperty) to the SetBinding call. That's a valid dependency property, so it compiles just fine. In WPF, that fails silently, even when you have verbose binding debugging turned on. In Silverlight, it throws a catastrophic error (without any additional information). That catastrophic error made me look more closely at the call, ultimately leading to the fix.

To bind controls added using dynamically loaded XAML, you'll need to give a valid Name to each control you want to reference, then call FindName after loading to get a reference to the control. From there, you can use the Binding object and SetBinding method as above. Of course, you can also embed the binding expressions directly in the XAML if you're willing to do a little string manipulation.
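
For example, a minimal sketch based on the XAML string and _vm field shown earlier:

// Load the XAML string and add the resulting tree to the layout, as before.
FrameworkElement tree = (FrameworkElement)XamlReader.Load(xaml);
LayoutRoot.Children.Add(tree);

// Find the dynamically created TextBox by its x:Name, then bind it in code.
TextBox firstNameField = (TextBox)tree.FindName("FirstName");

Binding firstNameBinding = new Binding();
firstNameBinding.Source = _vm;
firstNameBinding.Path = new PropertyPath("FirstName");
firstNameBinding.Mode = BindingMode.TwoWay;
firstNameField.SetBinding(TextBox.TextProperty, firstNameBinding);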

 
Summary

So, we've seen that there are three different ways you can display controls in Silverlight and WPF.

  • Use the design surface / XAML Editor / Blend and create them prior to compiling
  • Load XAML at runtime
  • Use CLR Objects at runtime

Each approach is useful in different scenarios and has different performance characteristics. XAML parsing is surprisingly efficient, and the XAML can be stored in a large text field in a single database row or as a loose file on the file system, for example.
 


Wednesday, December 8, 2010

Tombstoning on the Windows Phone 7 Platform

Tombstoning is the process of saving your application's data and state when it's terminated, so that it can resume where it left off when it's started back up again. There are many cases where your application may be suddenly terminated on the phone; examples include incoming phone calls or activation of the camera on the device. Tombstoning is an essential feature that almost every application should support to avoid customer frustration with your application.

Tombstoning an application is fairly straightforward. Every application on the phone has its own permanent storage space on the device that is referred to as isolated storage.

Classes for isolated storage can be found under the System.IO.IsolatedStorage namespace.

You can use these classes to store and reload the state and data for your application.
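
For example, here's a minimal sketch (the file name is a placeholder) of writing a value to a file in isolated storage; the code later in this post uses the simpler IsolatedStorageSettings dictionary instead:

// Requires: using System.IO; using System.IO.IsolatedStorage;
using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
using (IsolatedStorageFileStream stream = store.OpenFile("state.txt", FileMode.Create))
using (StreamWriter writer = new StreamWriter(stream))
{
    writer.WriteLine(Consts.CurrentLevel);   // any state you want to restore later
}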

In your App.xaml file, the events you should handle are the following:

  • Application_Launching – Code to execute when the application is launching. This code will not execute when the application is reactivated.
  • Application_Closing – Code to execute when the application is closing (i.e. the user hits the Back button). This code will not execute when the application is deactivated.
  • Application_Activated – Code to execute when the application is activated (brought to the foreground). This code will not execute when the application is first launched.
  • Application_Deactivated – Code to execute when the application is deactivated (sent to the background). This code will not execute when the application is closing.

These should all be there by default when you create a new Silverlight mobile project:

<shell:PhoneApplicationService
    Launching="Application_Launching"
    Closing="Application_Closing"
    Activated="Application_Activated"
    Deactivated="Application_Deactivated"/>


private void Application_Launching(object sender, LaunchingEventArgs e)
{
    LoadApplicationState();
}

private void Application_Activated(object sender, ActivatedEventArgs e)
{
    LoadApplicationState();
}

private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
    SaveApplicationState();
}

private void Application_Closing(object sender, ClosingEventArgs e)
{
    SaveApplicationState();
}
 



I typically keep the data I want stored in static variables so it's easily accessible from the App.xaml.cs class. In the example below, I'm loading and saving the “level” the application is on.


private void LoadApplicationState()
{
    int currentLevel;

    IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;

    // If the setting has never been saved, fall back to level 1.
    if (!settings.TryGetValue<int>("level", out currentLevel))
    {
        currentLevel = 1;
    }

    Consts.CurrentLevel = currentLevel;
}

private void SaveApplicationState()
{
    IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;

    settings["level"] = Consts.CurrentLevel;
}
 



Thanks,


Rizwan Suddle

Monday, November 29, 2010

10 mistakes every programmer makes

Admit it, you've made mistakes like these


When you start programming, you get disillusioned quickly. No longer is the computer the all-infallible, perfect machine – "do as I mean, not as I say" becomes a frequent cry.
At night, when the blasted hobgoblins finally go to bed, you lie there and ruminate on the errors you made that day, and they're worse than any horror movie. So when the editor of PC Plus asked me to write this article, I reacted with both fear and knowing obedience.
I was confident that I could dash this off in a couple of hours and nip down to the pub without the usual resultant night terrors. The problem with such a request is, well, which language are we talking about?
I can't just trot out the top 10 mistakes you could make in C#, Delphi, JavaScript or whatever – somehow my top ten list has to encompass every language. Suddenly, the task seemed more difficult. The hobgoblins started cackling in my head. Nevertheless, here goes…

1. Writing for the compiler, not for people
When they use a compiler to create their applications, people tend to forget that the verbose grammar and syntax required to make programming easier is tossed aside in the process of converting prose to machine code.
A compiler doesn't care if you use a single-letter identifier or a more human-readable one. The compiler doesn't care if you write optimised expressions or whether you envelop sub-expressions with parentheses. It takes your human-readable code, parses it into abstract syntax trees and converts those trees into machine code, or some kind of intermediate language. Your names are by then history.
So why not use more readable or semantically significant identifiers than just i, j or x? These days, the extra time you would spend waiting for the compiler to complete translating longer identifiers is minuscule. However, the much-reduced time it takes you or another programmer to read your source code when the code is expressly written to be self-explanatory, to be more easily understandable, is quite remarkable.
Another similar point: you may have memorised the operator precedence to such a level that you can omit needless parentheses in your expressions, but consider the next programmer to look at your code. Does he? Will he know the precedence of operators in some other language better than this one and thereby misread your code and make invalid assumptions about how it works?
Personally, I assume that everyone knows that multiplication (or division) is done before addition and subtraction, but that's about it. Anything else in an expression and I throw in parentheses to make sure that I'm writing what I intend to write, and that other people will read what I intended to say.
The compiler just doesn't care. Studies have shown that the proportion of some code's lifecycle spent being maintained is easily five times more than was spent initially writing it. It makes sense to write your code for someone else to read and understand.

2. Writing big routines
Back when I was starting out, there was a rule of thumb where I worked that routines should never be longer than one printed page of fan-fold paper – and that included the comment box at the top that was fashionable back then. Since then, and especially in the past few years, methods tend to be much smaller – merely a few lines of code.
In essence, just enough code that you can grasp its significance and understand it in a short time. Long methods are frowned upon and tend to be broken up.
The reason is extremely simple: long methods are hard to understand and therefore hard to maintain. They're also hard to test properly. If you consider that testing is a function of the number of possible paths through a method, the longer the method, the more tests you'll have to write and the more involved those tests will have to be.
There's actually a pretty good measurement you can make of your code that indicates how complex it is, and therefore how probable it is to have bugs – the cyclomatic complexity.
Developed by Thomas J. McCabe Sr in 1976, cyclomatic complexity has a big equation linked to it if you're going to run through it properly, but there's an easy, basic method you can use on the fly. Just count the number of 'if' statements and loops in your code. Add 1 and this is the CC value of the method.
It's a rough count of the number of execution paths through the code. If your method has a value greater than 10, I'd recommend you rewrite it.
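As a rough illustration, the (made-up) C# method below has one loop and two ifs, so the quick count gives a cyclomatic complexity of 3 + 1 = 4:

// 1 loop + 2 ifs + 1 = cyclomatic complexity of 4
static int SumOfValidScores(int[] scores)
{
    int total = 0;
    foreach (int score in scores)      // loop
    {
        if (score < 0)                 // if #1
            continue;
        if (score > 100)               // if #2
            total += 100;
        else
            total += score;
    }
    return total;
}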

3. Premature optimisation
This one's simple. When we write code, sometimes we have a niggling devil on our shoulder pointing out that this clever code would be a bit faster than the code you just wrote. Ignore the fact that the clever code is harder to read or harder to comprehend; you're shaving off milliseconds from this loop. This is known as premature optimisation.
The famous computer scientist Donald Knuth said, "We should forget about small efficiencies, say about 97 per cent of the time: premature optimisation is the root of all evil."
In other words: write your code clearly and cleanly, then profile to find out where the real bottlenecks are and optimise them. Don't try to guess beforehand.

4. Using global variables
Back when I started, lots of languages had no concept of local variables at all and so I was forced to use global variables. Subroutines were available and encouraged but you couldn't declare a variable just for use within that routine – you had to use one that was visible from all your code. Still, they're so enticing, you almost feel as if you're being green and environmentally conscious by using them. You only declare them once, and use them all over the place, so it seems you're saving all that precious memory.
But it's that "using all over the place" that trips you up. The great thing about global variables is that they're visible everywhere. This is also the worst thing about global variables: you have no way of controlling who changes it or when the variable is accessed. Assume a global has a particular value before a call to a routine and it may be different after you get control back and you don't notice.
Of course, once people had worked out that globals were bad, something came along with a different name that was really a global variable in a different guise. This was the singleton, an object that's supposed to represent something of which there can only be one in a given program.
A classic example, perhaps, is an object that contains information about your program's window, its position on the screen, its size, its caption and the like. The main problem with the singleton object is testability. Because they are global objects, they're created when first used, and destroyed only when the program itself terminates. This persistence makes them extremely difficult to test.
Later tests will be written implicitly assuming that previous tests have been run, which set up the internal state of the singleton. Another problem is that a singleton is a complex global object, a reference to which is passed around your program's code. Your code is now dependent on some other class.
Worse than that, it's coupled to that singleton. In testing, you would have to use that singleton. Your tests would then become dependent on its state, much as the problem you had in testing the singleton in the first place. So, don't use globals and avoid singletons.

5. Not making estimates
You're just about to write an application. You're so excited about it that you just go ahead and start designing and writing it. You release and suddenly you're beset with performance issues, or out-of-memory problems.
Further investigations show that, although your design works well with a small number of users, or records, or items, it does not scale – think of the early days of Twitter for a good example. Or it works great on your super-duper developer 3GHz PC with 8GB of RAM and an SSD, but on a run-of-the-mill PC, it's slower than a Greenland glacier in January.
Part of your design process should have included some estimates, some back-of-the-envelope calculations. How many simultaneous users are you going to cater for? How many records? What response time are you targeting?
Try to provide estimates to these types of questions and you'll be able to make further decisions about techniques you can build into your application, such as different algorithms or caching. Don't run pell-mell into development – take some time to estimate your goals.

6. Off by one
This mistake is made by everyone, regularly, all the time. It's writing a loop with an index in such a way that the index is incremented once too often or once too few times. Consequently, the loop is traversed an incorrect number of times.
If the code in the loop is visiting elements of an array one by one, a non-existent element of the array may be accessed – or, worse, written to – or an element may be missed altogether. One reason why you might get an off-by-one error is forgetting whether indexes for array elements are zero-based or one-based.
Some languages even mix the two: some objects are zero-based while others assume one-based indexing. There are so many variants of this kind of error that modern languages or their runtimes have features such as 'foreach loops' to avoid the need to count through elements of an array or list.
Others use functional programming techniques called map, reduce and filter to avoid the need to iterate over collections. Use modern 'functional' loops rather than iterative loops.
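For instance, a minimal C# illustration of the classic slip and the 'functional' loop that avoids it:

int[] values = { 3, 1, 4, 1, 5 };

// Off by one: '<=' runs the loop six times and reads values[5], which doesn't exist.
for (int i = 0; i <= values.Length; i++)
{
    Console.WriteLine(values[i]);
}

// A foreach loop sidesteps the index arithmetic entirely.
foreach (int value in values)
{
    Console.WriteLine(value);
}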

7. Suppressing exceptions
Modern languages use an exception system as an error-reporting technique, rather than the old traditional passing and checking of error numbers. The language incorporates new keywords to dispatch and trap exceptions, using names such as throw, try, finally and catch.
The remarkable thing about exceptions is their ability to unwind the stack, automatically returning from nested routines until the exception is trapped and dealt with. No longer do you have to check for error conditions, making your code into a morass of error tests.
All in all, exceptions make for more robust software, providing that they're used properly. Catch is the interesting one: it allows you to trap an exception that was thrown and perform some kind of action based upon the type of the exception.
The biggest mistakes programmers make with exceptions are twofold. The first is that the programmer is not specific enough in the type of exception they catch. Catching too general an exception type means that they may be inadvertently dealing with particular exceptions that would be best left to other code, higher up the call chain. Those exceptions would, in effect, be suppressed and possibly lost.
The second mistake is more pernicious: the programmer doesn't want any exceptions leaving their code and so catches them all and ignores them. This is known as the empty catch block. They may think, for example, that only certain types of exceptions might be thrown in their code; ones that they could justifiably ignore.
In reality, other deadly runtime exceptions could happen – things such as out-of-memory exceptions, invalid code exceptions and the like, for which the program shouldn't continue running at all. Tune your exception catch blocks to be as specific as possible.
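In C#, the two mistakes look something like this (SaveOrder, ReportTransientFailure and order are hypothetical placeholders):

// Mistake: the empty catch block swallows everything, including out-of-memory
// and programming errors that should bubble up the call chain.
try
{
    SaveOrder(order);
}
catch (Exception)
{
    // ignored
}

// Better: catch only the specific failure this code can genuinely handle,
// and let everything else propagate.
try
{
    SaveOrder(order);
}
catch (IOException ex)
{
    ReportTransientFailure(ex);
}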

8. Storing secrets in plain text
A long time ago, I worked in a bank. We purchased a new computer system for the back office to manage some kind of workflow dealing with bond settlements. Part of my job was to check this system to see whether it worked as described and whether it was foolproof. After all, it dealt with millions of pounds daily and then, as now, a company is more likely to be defrauded by an employee than an outsider.
After 15 minutes with a rudimentary hex editor, I'd found the administrator's password stored in plain text. Data security is one of those topics that deserves more coverage than I can justifiably provide here, but you should never, ever store passwords in plain text.
The standard for passwords is to store the salted hash of the original password, and then do the same salting and hashing of an entered password to see if they match.
Here's a handy hint: if a website promises to email you your original password should you forget it, walk away from the site. This is a huge security issue. One day that site will be hacked. You'll read about how many logins were compromised, and you'll swallow hard and feel the panic rising. Don't be one of the people whose information has been compromised and, equally, don't store passwords or other 'secrets' in plain text in your apps.
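As a hedged sketch of the salt-and-hash approach in C#, using the framework's PBKDF2 implementation (the parameter values here are illustrative, not a recommendation):

// Requires: using System.Security.Cryptography;
static byte[] HashPassword(string password, out byte[] salt)
{
    // 16-byte random salt, 10,000 iterations; store the salt and hash, never the password.
    using (var pbkdf2 = new Rfc2898DeriveBytes(password, 16, 10000))
    {
        salt = pbkdf2.Salt;
        return pbkdf2.GetBytes(32);
    }
}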

9. Not validating user input
In the good old days, our programs were run by individuals, one at a time. We grew complacent about user input: after all, if the program crashed, only one person would be inconvenienced – the one user of the program at that time. Our input validation was limited to number validation, or date checking, or other kinds of verification of input.
Text input tended not to be validated particularly. Then came the web. Suddenly your program is being used all over the world and you've lost that connection with the user. Malicious users could be entering data into your program with the express intent of trying to take over your application or your servers.
A whole crop of devious new attacks were devised that took advantage of the lack of checking of user input. The most famous one is SQL injection, although unsanitised user input could also precipitate an XSS (cross-site scripting) attack through markup injection.
Both types rely on the user providing, as part of normal form input, some text that contains either SQL or HTML fragments. If the application does not validate the user input, it may just use it as is and either cause some hacked SQL to execute, or some hacked HTML/JavaScript to be produced.
This in turn could crash the app or allow it to be taken over by the hacker. So, always assume the user is a hacker trying to crash or take over your application and validate or sanitise user input.
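In C# with ADO.NET, for example, a parameterized query keeps the user's input as data rather than executable SQL (the connection string, table and variable names here are hypothetical):

// Requires: using System.Data.SqlClient;
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT Id, Name FROM Customers WHERE Name = @name", connection))
{
    // The user's text is passed as a parameter value, never concatenated into the SQL.
    command.Parameters.AddWithValue("@name", userSuppliedName);
    connection.Open();

    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // work with the row
        }
    }
}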

10. Not being up to date
All of the previous mistakes have been covered in depth online and in various books. I haven't discovered anything new – they and others have been known for years. These days you have to work pretty hard to avoid coming into contact with various modern design and programming techniques.
I'd say that not spending enough time becoming knowledgeable about programming – and maintaining that expertise – is in fact the biggest mistake that programmers make. They should be learning about techniques such as TDD or BDD, about what SLAP or SOLID means, about various agile techniques.
These skills are of equal or greater importance than understanding how a loop is written in your language of choice. So don't be like them: read McConnell and Beck and Martin and Jeffries and the Gang of Four and Fowler and Hunt & Thomas and so on. Make sure you stay up to date with the art and practice of programming.
And that concludes my top 10 list of mistakes programmers make, no matter what their language stripe. There are others, to be sure, perhaps more disastrous than mine, but I would say that their degree of dread is proportional to the consequences of making them.
All of the above were pretty dire for me the last time I made them. If you have further suggestions or calamities of your own, don't hesitate to contact me and let me know.

Friday, November 26, 2010

The Next Really, Really, Really, Big Thing



Everybody should be excited about the next big thing. And why not? It’s very, extremely big. Even bigger than anything that came before. No, really, it’s that freakin’ HUGE.

If you don’t want to get left behind, you’ve got to hop on this right away. Of course, you will need to be fast and smart and work late nights, but it will be worth it. You can’t go halfway on a thing like this. It’s all or nothing, baby!

I’m here to tell you what this big thing is. But first, let’s take a quick look at past big things so that we can see why this one is so much bigger.

A Short History of Big Things

We live in interesting times. Conventional wisdom says that it takes about 20 years for new technology to take its full effect. These days, innovation cycles are much shorter, so we’re getting new stuff before we really know what to do with the old.

Many economists believe that these time lags account for the productivity paradox (i.e. it’s notoriously difficult to measure what we really get out of all this new stuff). So it is always hard to see the next big thing until it’s already really big and you’ve missed out.

Nevertheless, there are always pundits and gurus to point the way. Unfortunately, they are usually only partly right, which makes the history of big things somewhat muddled:

Digital Media: Sometime back in the 90’s, an extremely confident young man appeared on the TV show, 60 Minutes, and announced that he was going to put their company (CBS) out of business.

I don’t remember what actually happened to the guy, but last year CBS earned about a billion dollars in operating profit (Yahoo made about a tenth as much). 60 Minutes, of course, is still on the air and still gets huge ratings.

E-Commerce: During the dot-com boom, many pointed out that a lot of the web revenues were driven by advertising (which, for some reason, is supposed to be a bad business). However, selling things over the web was infinitely more promising.

Of course, many of those e-commerce start-ups failed, some did okay and some did extremely well. Today, Amazon.com is enormously successful, but really not in Wal-Mart’s league. I was at the mall the other day and it seemed pretty crowded.

Search: After the crash in 2000, Search emerged as the new, new thing. Google has made a bundle on this one (and some regional players, like Yandex in Russia and Baidu in China, have also done well). Yahoo and Microsoft… not so much.

Social Media: This is the most recent big thing (and, of course, has a big movie to prove it). Facebook has 500 million members, but profits remain elusive. Others, such as MySpace, Friendster and Digg… well, we’ll see.

Big Things That Last

Of course, the biggest things get so big that they last for a very long time. Jim Collins profiled a bunch of them in his book Built to Last. He studied firms like Hewlett Packard, Sony and General Electric and found that much of what we hear about really big things is untrue.

For instance, they often don’t start with very good ideas. In fact, sometimes they begin with lousy ones (apparently Sony’s first product was a rice cooker). Nor do they tend to have charismatic, visionary leaders. What they do have is a lot of talented people who work as a team.

It seems to me that this is where a lot of technology-driven companies go wrong. We glamorize the vision and forget that it is people who actually make it happen. Moreover, because our globalized, digitized world is so complex, these people have very diverse skills and perspectives and need to operate in an uncertain environment.

Getting really smart, driven people to work together well is the truly BIG thing.

Winning the Talent War

A while back, I wrote a post about how to win the war for talent, and I made the point that talent isn’t something you acquire, it’s something you build. I think it’s worth summarizing the main points here:

In-House Training: While third party training can sometimes be helpful, having an in-house training program is much more valuable. Companies like GE and McDonald’s have put enormous resources into training campuses, but even small companies can build good programs with a little effort and focus.

An often overlooked benefit of in-house training is the trainers themselves, who are usually mid and senior level employees. They get to refresh basic concepts in their own minds while they teach more junior people. This also helps the old guard get invested in the next generation.

Perhaps most importantly, training helps to bring people together who would ordinarily not meet and improves connectivity throughout the company.

Focus On Intrinsic Motivation: Most people want to do a good job. Of course, money is important, but the best people want to achieve things and to be recognized for doing so. Often, time and effort wasted on designing elaborate compensation schemes could be better spent on getting people recognized for true accomplishments.

A senior executive taking a minute or two to stop and recognize a job well done can often mean more than a monetary reward. That doesn’t mean that people don’t need to be paid what they’re worth, but anybody can sign a check. Paying big salaries is not, and will never be, a long term competitive advantage.

Best Practice Programs: One way for people to shine is to have regular meetings where they can present successful initiatives to their peers. This also helps increase connectivity and gets good ideas spread throughout the company.

Another approach is to build an in-house social network where people can share ideas and rate each other's work (there are plenty of applications similar to slideshare that can be adapted easily and cheaply to a company intranet).

Coaches and Mentors: Getting regular feedback is essential for development. We’re generally pretty bad judges of our own efforts. Some companies have formal mentoring programs that are quite successful. However, what is most important is a realization throughout the company that senior people are responsible for helping to develop junior ones.

Firing Nasty People: A long time ago, I decided that I didn’t want to work with nasty people, so I started firing them regardless of competency. I’ve been amazed at what a positive effect it had and have never looked back. Nasty people invariably destroy more than they create.

A Community of Purpose: Most of all, people need to believe in what they do; that their work has a purpose and makes a positive impact. Nothing motivates better than a common cause that people value above themselves.

So the next big thing is really not much different than the previous ones. There will be an interesting idea that has real value and most of the companies who jump on it will screw it up and lose a lot of money.

The difference, of course, will be made by the people who are working to solve everyday problems, how they are developed and how they treat each other.

Thursday, October 7, 2010

10+ ways to screw up your database design

Database developers are people too — you make mistakes just like everyone else. The problem is, users are the ones who pay for those mistakes when they have to work with slow, clunky applications. If you’re part of your company’s IT team, a mistake could cost you your job. If you’re a freelance developer, a bad database usually means a lost client — and even more lost business, if word gets around. Here are some common design pitfalls to watch out for.

1: The data’s unimportant; it’s the architecture that matters

It doesn’t matter if your code sings, if you don’t know the data. You want the two to work in harmony and that means spending some time with the people who use and manipulate all that data. This is probably the most important rule: Before you do anything, you absolutely must get intimate with the data. Without that firsthand knowledge, you might make errors in judgment that have far-reaching consequences — like dragging the whole application to a complete halt.

A nonchalant attitude that the data isn’t important isn’t a sign of laziness. It’s a mistaken perception that anything that doesn’t work quite right early on can be fixed later. I just don’t agree. Doing it right from the bottom up will produce a foundation that can grow and accommodate change quickly. Without that foundation, any database is just a few Band-Aids away from disaster.

2: I can do anything with a little code

Some developers are so skilled that they can make just about anything happen with a bit of code. But you can take a good thing too far. One of my biggest complaints about the developers’ psyche is that they want to solve everything with code, even when a system feature exists to handle the need. They claim it’s just easier — easier for them, maybe, but not necessarily easier for those maintaining the database. My recommendation is to use the built-in features, when available, unless you don’t know the RDBMS well enough. If that’s the case, see #3.

3: I can use whatever RDBMS you have

This brings me to the next point: Developers who think the system is unimportant because their coding ability is the only magic they need. Wrong! Unless your hands are tied, choose the best system for the job. You’ll save your client time and money and build a reputation as an honest and comprehensive developer. You might not have a choice, of course. If you find yourself stuck with a system you’re not familiar with or that’s just not right for the job, you might want to excuse yourself from the project. You’re going to take the fall for that decision eventually, even if you didn’t make it.

4: That doesn’t need an index

Very little affects performance like failing to apply an index or applying an index incorrectly. It isn’t rocket science, and there are guidelines that help. But many developers still avoid the task altogether. Without proper indexing, your database will eventually slow down and irritate users. Perhaps the only thing that causes as much trouble as no indexing is too much indexing.

5: This database doesn’t require referential integrity

Enforcing referential integrity protects the validity of your data by eliminating orphans (foreign keys that have no related primary key entity). For instance, in a sales database, you might have an ordered item that doesn’t point to a customer — not a good idea. If your RDBMS supports referential integrity, I recommend that you use it.

6: Natural keys are best

Relational database theory relies on keys, primary and foreign. Natural keys are based on data, which of course has meaning within the context of the database’s purpose. Natural keys are obsolete now that we have systems that can generate sequential values, known as surrogate keys. Surrogate keys have no purpose beyond identifying entities (they are usually an auto-incrementing data type).

The superiority of natural versus surrogate keys is a hotly debated topic. Just bring it up in your favorite development list or forum, sit back, and watch the show. Here’s the nitty-gritty though:

* Natural keys can be unwieldy and awkward to maintain. It might take several columns to create a unique key for each record. It’s doable, but do you really want to accommodate that kind of structure if you don’t have to?
* Primary keys are supposed to be stable; nothing about data is stable. It changes all the time. In contrast, there’s no reason to ever change a surrogate key. You might delete one, but if you have to change a surrogate key, something’s wrong with your design.

The biggest argument for natural keys is one of association. Proponents insist that you need to be able to associate the key to the actual record. Why? Keys are used by the RDBMS, not users. The other most commonly heard argument is that surrogate keys allow duplicate records. My response is to apply indexes appropriately to avoid duplicate records.

I recommend surrogate keys — always, which is an invitation to hate mail, but it’s my recommendation just the same. I can think of no circumstance where a natural key would be preferable to a surrogate.

7: Normalization is a waste of time

Just writing that title hurts. Unfortunately, I do run into developers who don’t take normalization seriously enough. Normalization is the process of moving repeating groups and redundant data into related tables. This process supports the RDBMS by theory and design. Without normalization, an RDBMS is doomed. Despite its importance, many developers make a cursory pass through the data and normalize very little, and that’s a mistake you should avoid. Take the time to break down your data, normalizing at least to Second or Third Normal Form.

8: You can’t normalize enough

The previous point may seem to imply that normalization is the panacea of database design. But like code, too much of a good thing can slow things down. The more tables and joins involved in pulling data together into meaningful information, the slower the database will perform. Don’t overdo it — be thorough without being obsessed.

If your normalization scheme requires several tables to generate a common view, you’ve gone too far (probably). In short, if performance slows and there’s nothing wrong with the connection, the query, and so on, excessive normalization might be the culprit.

9: It’ll perform just as well with real data

Failing to test a database for scalability is a huge mistake. During the development stage, it’s acceptable to work with a scant amount of data. On the other hand, a few rows of test data just can’t provide a realistic view of how the database will perform in a production environment. Before going live, be sure to test your database with real data, and lots of it. Doing so will expose bottlenecks and vulnerabilities.

You can blame the database engine for choking on real data — nice work if you can get it (and the client believes you).

10: Only the most elegant code is good enough for my clients


This attitude is another example of how too much of a good thing can be bad. We all want to write the best code possible, but sometimes, good enough is, well, good enough. Time spent optimizing routines that already perform well and accurately can be money down the drain for your client. If the database runs great with a bit of ugly code, so what? Is the trade-off worth the extra time and money you’ll spend to optimize the code to its fullest? I’m betting your client would answer in the negative. I’m not saying write clunky code. Nor am I suggesting that you write code that performs poorly because doing so makes your job easier. I’m saying, don’t put your client’s money into optimizing something that works fine as is. Put that time into good design and a solid foundation — that’s what will support the best performance.

11: You can back it up later


If the data is important enough to store, it’s important enough to protect. Hardware breaks. Mistakes happen. A backup plan should be part of your development process, not an afterthought: “I meant to do that.” How often you should back up the database, where you will store those backups, and so on, are questions to answer up front, not after your client loses important data.

12: You promised that wouldn’t change

The client promised that a specific business rule would never change and you believed it. Never believe them! Don’t take the easy way out on this one; apply the best design and logic so that change is easy. The truth is, once users become accustomed to the database, they’ll want more — and that means change. It’s just about the only part of the whole development process you can depend on.

13: Yes, I can give you the moon

Some developers are so ambitious. Wanting to give users everything they want in the first version is a nice sentiment, but it’s also impractical. Unless the project is small with a specific focus, producing a foundation version that can go into production quickly is preferable. Users won’t get everything they asked for, but they’ll have a production database much sooner. You can add features with subsequent versions. The client gets something working quickly, and you get job security.

Friday, October 1, 2010

You're a bad Programmer. Embrace it!!!

How many developers think they're good programmers? We're not. We're all fairly bad at what we do. We can't remember all the methods our code needs to call, so we use autocompleting IDEs to remind us. And the languages we've spent years, in some cases decades, learning? We can't even type the syntax properly, so we have compilers that check and correct it for us.

Don't even get me started on the correctness of the logic in the code... I'm looking at a chart that shows failure rates for software projects. "Small" projects (about a year for 8 people) have a 46% success rate – less than half. Success rates just plummet for longer projects, down to 2% for projects costing $10 million or more. In other words, our projects nearly always fail.

We constantly write code that breaks existing features. We add features in ways that won't scale. We create features nobody wants.

In short, as an industry, we all stink.

You can fight this. You can summon your "inner lawyer", the voice in your head that always defends you and tells you how great you are. You can take the typical developer's attitude that "it's always someone else's fault".

Or you can embrace it. Admit it... humans aren't good at programming. It's a task that requires extraordinary complexity and detail from brains that don't work that way. It's not how we tick. We might enjoy programming, we might get "in the zone" and lose track of time, we might spend our nights and weekends writing code for fun... but after years of practice, we still need these crutches to prop us up.

Once you've embraced this bitter reality, you can start to move forward. Once you've admitted to yourself that you're a bad programmer, you can stop all the silly posturing and pretending that you're great, and you can look around and find the best possible tools to help you look smarter than you are.

If we know that any time we touch code we'll probably break something, then we can look for whatever catches functional breaks in code as quickly as possible: a good continuous integration system combined with a good automated test suite.

Creating a good test suite is a lot like working out though... we all know we should, but we rarely take the time to create a solid, workable test automation strategy. Something like Defect Driven Testing is a great place to start.
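
As a rough sketch of that idea (NUnit syntax assumed; the InvoiceCalculator and Invoice classes are hypothetical): when a bug is reported, first write a failing test that reproduces it, then fix the code and keep the test as a permanent regression guard.

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorDefectTests
{
    // Written to reproduce a reported bug before fixing it; it stays in the suite forever.
    [Test]
    public void Total_IsZero_WhenInvoiceHasNoLineItems()
    {
        var calculator = new InvoiceCalculator();

        decimal total = calculator.Total(new Invoice());

        Assert.AreEqual(0m, total);
    }
}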

If you recognize that we all write the same fundamentally bad code that everyone else does, then you can look around at static code analysis tools. If you're writing Java, I highly recommend FindBugs.

These tools flag common mistakes... really common mistakes. Things you can't believe you actually do, but somehow we do nearly every day. I've met many development teams who didn't think they needed static code analysis. But I've never run a tool like FindBugs without finding a real (read: not theoretical) problem in production code. They hadn't learned to embrace their badness yet.

If you don't know what code is being exercised by your manual or automated test cases, look at a code coverage tool. Cobertura is my favorite, but I'm a bit biased as I had a hand in getting it started. (Why isn't it listed on that Wikipedia page??)

If your team is constantly getting interrupted and pulled in different directions, try tackling smaller units of work. Time-boxed iterations, especially the one-week iteration, force you to tackle and complete smaller units of work. It's easier to push back on an "emergency" change if you can ask them to wait for only two or three days. If you're on a two-month task, they won't wait.

Admit that you're a sorry sysadmin as well and script your deployments. Tools like rake and capistrano can completely automate your development, testing, and even production deployments. Once it's scripted, it becomes incredibly easy to duplicate. You'll be amazed at how much time this one will save you.

Of course, if you've automated your code deployments, it'd be silly to keep typing in SQL commands by hand. Ruby on Rails' database migrations may have been the early leader in this area, but there are plenty of other tools available today. One prominent tool is Liquibase. Here are two good articles with plenty of information: http://www.infoq.com/news/2007/05/liquibase-database-refactoring and http://java.dzone.com/articles/liquibase-hibernate.

How about customer interaction? Yes, we're bad at that too. (Ever used the phrase "stupid users"?) Try moving your specifications into a more precise, even executable, format. DSLs (Domain Specific Languages) provide a wide variety of ways to engage a customer with a precise, but understandable, language.

There are a great many tools we can use to make ourselves look smarter than we are. Once we stop fighting against the need for the tools, and embrace them, we start looking like we're pretty good developers. I won't tell anyone you stink if you don't tell them about me. ;)

Thursday, September 2, 2010

Agile people still don't get it

I just attended a Test-Driven Development presentation that represents everything that is wrong with the way Agile advocates try to evangelize their practices.  I don’t have anything against the presenter in particular, but it’s really time for Agilists to rethink the way they communicate with the real world.
Here are a few comments on his presentation.
One of the first slides that deeply troubled me claimed the following:
  • Tests are (executable) specs.
  • If it’s not testable, it’s useless.
First of all, tests are not specs.  Not even close.  Somebody in the audience was quick to give a counter-example to this absurd claim by using a numeric example ("how do you specify an exponentiation function with a test?"), but my objection to this claim is much broader than that.  Relying on tests as a design specification is lazy and unprofessional because you are only testing a very small portion of the solution space of your application (and of course, your tests can have bugs).  Tests also fall extremely short of having the expressiveness needed to articulate the subtle shades that a real specification needs to cover to be effective.
This claim is part of a broader and more disturbing general Agilist attitude that is usually articulated like "Your code is your spec", along with some of its ridiculous corollaries such as "Documentation gets out of date, code never does".
Anyone who claims this has never worked on a real-world project.  And I’m setting the bar fairly low for such a project:  more than five developers and more than 50,000 lines of code.  Try to bring on board new developers on this project and see how fast they come up to speed if all they have to understand the code base is… well, just code.  And tests.
I am currently getting acquainted with a brand new project that is not even very big, and while I understand Java fairly well, there is no doubt in my mind that for every ten minutes I spend trying to understand how a certain part of the application works, a five-line comment would have given me the same knowledge in ten seconds.
The second claim, "If it’s not testable, it’s useless" is equally ludicrous and a guarantee that at this point, the audience you are talking to is already looking at you as a crackpot.
Software is shipped with untested parts every day, and just because it’s not entirely tested doesn’t mean it’s bad software or that the untested parts are "useless".
Agilists just don’t understand the meaning of calculated risk.
Early in the development cycle, it’s perfectly acceptable to go for a policy of "zero bugs" and "100% tests".  But as the deadline looms, these choices need to be reconsidered all the time and evaluated while keeping a close eye on the final goal.  Very often, Agilists simply forget that their job is to produce software that satisfies customers, not software that meets some golden software engineering standard.
Anyway, let’s go back to the presentation, which then proceeded with the implementation of a Stack class with TDD.  Before spending thirty minutes on a live demo of the implementation of a Stack class (are you impressed yet?), the presenter warned the increasingly impatient audience that they should "not pay too much attention to the Stack example itself but to the technique".
And that’s exactly the wrong thing to do.
Look, we "get" TDD.  We understand it.  Frankly, it takes all of ten minutes to explain Test-Driven Development to a developer who’s never heard of it:  "Write a test that fails and doesn’t compile.  Make it compile.  Then make it pass.  Repeat".
The hard part is applying it to the real world, and showing the implementation of a Stack will soon have everyone leave the room with the thought "Cute, but useless.  Now let’s go back to work". 
It was even worse than that, actually:  the presenter kept taking suggestions from the crowd, but he declined all those that didn’t fit the neat script he had in hand at all times.  These suggestions were good, by the way:

"What should we test now?"
"How about:  if we pop an empty stack, we get an exception"
To be honest, I am becoming quite suspicious of Agile practices for that reason:  all the presentations I have attended and books that I have read are always using toy implementations as examples.  Stack, List, Money, Bowling…  enough already!  Let’s talk about TDD for code that interacts with clustered databases on laggy connections built on 500,000 lines of code that was never designed to be tested in the first place (and:  yes, I read Michael Feathers’ book, it has some good and some bad, but it’s not germane to Java and TDD so I won’t expand on it here).
And please, avoid smug and useless answers such as:
"A lot of the classes I have to test are hard to isolate, do you have any advice regarding mocks?"
"Well, if you had started with TDD in the first place, you wouldn’t be having this problem today".

Fundamentally, I am disturbed by the Agilists’ dishonesty when it comes to presenting their arguments.  They offer you all these nice ideas such as Test-Driven Development and Pair Programming but they never — ever — disclose the risks and the downsides.  To them, Agility is a silver bullet that is applicable in all cases with no compromises.
The truth is that these practices come at a price, and for a lot of organizations, the price gets high very quickly.  Agile development will never go far if its proponents keep ignoring these organizations and make condescending comments to its members.
I like Test-Driven Development.  I really do, and I’m fortunate enough to work on a project that lets me use TDD most of the time.  But the truth is:  at times, I don’t do TDD because implementing a feature quickly is more important than a fuzzy feeling.  And I’m also aware that TestNG is an open source project with less than five developers, all of them on the bleeding edge and aware of the latest advances in software engineering.
And this is my main beef with Agilists:  I strongly suspect that most of them spend their time on open source projects with like-minded fellows, and that none of them has any experience of what companies whose survival depends on shipping software have to go through to organize huge code bases growing by thousands of lines of code every day under the combined push of hundreds of developers, each with their own background, education and biases.
So here is my advice to Agilists:  get real now, or you will become irrelevant soon.