Agile, scrum

How Scrum helped our team

In 2008, I began work with a client on a new project. The client was an airline and travel agency that needed to rewrite their online travel booking application from scratch. The new website had the following primary requirements:

  • Improve the end-user experience, including performance and security;
  • Offer a new set of products on the website (hotel booking, car hire and insurance purchase); and
  • Connect to a brand-new back-end system.

After learning about their process and project, I suggested that we try a new approach: Scrum. My client did not know much about Scrum; in fact, the only Scrum-like practice they had tried was daily meetings. I insisted that using Scrum could help us build software more quickly and with higher quality. This was not an easy sell. The client had a number of questions, such as:

  • How can you accurately estimate a project with an iterative process?
  • How can you determine the delivery date of a project if you re-estimate it after every sprint?
  • How can your customer agree on an analysis and “sign it” if you do not have an analysis phase?
  • Isn’t Scrum just a cowboy development process since we do not have a detailed design phase?

I answered their questions or worked with them to find answers. At the same time, I did not pretend that Scrum could solve all of their problems. I did point out what they already knew: their existing waterfall methodology, with its detailed estimates and phases, only gave the illusion that it could deliver a high-quality product with all required features on budget and on time. Scrum, on the other hand, could mitigate some of these risks. After much debate, we decided to give Scrum a try.



Organizing classes in C#

I was doing some quick research on C# coding conventions, and I found one rule that appears in almost every coding convention I came across: always put fields (i.e., private members) at the top of the class definition.

I am not sure of the reasons for this. Let me explain why I usually advocate the other way around: order properties and methods from the most accessible to the least accessible.

A class has two types of users:

  1. developers writing the code of the class (usually one or two developers at most),
  2. developers who are 'clients' or users of the class (the rest of the team and whoever maintains the software).

I consider that, over the total lifecycle of a project, there are likely to be far more developers of the second type than of the first.

When you use a class, or try to maintain it (refactor it, or read it to understand what it does), you are not interested in its private members. You focus instead on the public methods and/or properties (the idea of a black box that exposes services). Then, if you need to go further, you will probably look at the internals of the class. For this reason, when I open a class file, I really prefer to see all the interesting information right away, just right there, without having to scroll down.

For me this is a consequence of object-oriented programming and the encapsulation principle.
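As a sketch of the ordering I advocate, here is a hypothetical class with its public surface first and its private details last:

```csharp
public class BankAccount
{
    // Public surface first: what a client of the class reads first.
    public decimal Balance
    {
        get { return _balance; }
    }

    public void Deposit(decimal amount)
    {
        _balance += amount;
    }

    // Private details last: only the maintainer of the class needs them.
    private decimal _balance;
}
```

A reader opening this file sees the services the class exposes immediately, and only scrolls down if the internals matter to them.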

I would really appreciate comments. Am I the only one who advocates this? It is not a big deal, and I have given up on the subject for the moment; however, I generally do not like the argument "This is the standard!" (which is the only one I have got so far). I would rather know the real reason behind the rule, but I could not find any explanation for it, except for a fellow who told me that it was a standard of C++ programmers, who did not really have the choice because otherwise the code would not compile (I have not verified this claim yet).


Combining JavaScript files with the AJAX toolkit library

One of the new features that came with .NET Framework 3.5 SP1 is the ability to combine multiple JS files into one, in order to reduce the number of files downloaded by the browser.

In theory, you simply have to list all the JS files used by the page in a subsection of the ScriptManager. You can get the list of files your page needs by using the Script Reference Profiler (a third-party control that lists all the files). You can find this control on

Once the list is known, you just need to copy/paste it into the ScriptManager control using the new 'CompositeScript' tag. Here is an example from a real-world application using Telerik and AJAX Control Toolkit controls.

<asp:ScriptManager ID="ScriptManager1" runat="server">
  <CompositeScript ScriptMode="Release">
    <Scripts>
      <asp:ScriptReference Name="MicrosoftAjax.js"/>
      <asp:ScriptReference Name="MicrosoftAjaxWebForms.js"/>
      <asp:ScriptReference Name="AjaxControlToolkit.Common.Common.js" Assembly="AjaxControlToolkit, Version=3.0.20229.17016, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
      <asp:ScriptReference Name="AjaxControlToolkit.ExtenderBase.BaseScripts.js" Assembly="AjaxControlToolkit, Version=3.0.20229.17016, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
      <asp:ScriptReference Name="Telerik.Web.UI.Common.Core.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
      <asp:ScriptReference Name="Telerik.Web.UI.Common.Animation.AnimationScripts.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
      <asp:ScriptReference Name="Telerik.Web.UI.Common.Scrolling.ScrollingScripts.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
      <asp:ScriptReference Name="Telerik.Web.UI.Common.Navigation.NavigationScripts.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
      <asp:ScriptReference Name="Telerik.Web.UI.Menu.RadMenuScripts.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
      <asp:ScriptReference Name="Telerik.Web.UI.ComboBox.RadComboBoxScripts.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
      <asp:ScriptReference Name="Telerik.Web.UI.TabStrip.RadTabStripScripts.js" Assembly="Telerik.Web.UI, Version=2008.2.826.35, Culture=neutral, PublicKeyToken=121fae78165ba3d4"/>
    </Scripts>
  </CompositeScript>
</asp:ScriptManager>

Note that you can add references to your own JS files by using the path to the JavaScript file:

e.g.: <asp:ScriptReference Path="~/js/MyJavascript.js" />

However, this example will not work properly. When you run a page that contains it, you get the following error message:

The resource URL cannot be longer than 1024 characters. If using a CompositeScriptReference, reduce the number of ScriptReferences it contains, or combine them into a single static file and set the Path property to the location of it.

To fix this issue, you should split the list into smaller chunks and add them to separate ScriptManagerProxy controls. Bear in mind that each of these controls generates a separate JS file, so you may end up with as many JS files as you have ScriptManager/ScriptManagerProxy controls.

<asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
  <CompositeScript ScriptMode="Release">
    <Scripts>
      <!-- ... script references ... (subset) -->
    </Scripts>
  </CompositeScript>
</asp:ScriptManagerProxy>

<asp:ScriptManagerProxy ID="ScriptManagerProxy2" runat="server">
  <CompositeScript ScriptMode="Release">
    <Scripts>
      <!-- ... script references ... (another subset) -->
    </Scripts>
  </CompositeScript>
</asp:ScriptManagerProxy>

Another issue to pay attention to is the order of your custom JS files. If any of your files contains JavaScript code that executes at load time (not inside a function) and uses some AJAX functionality, make sure it is downloaded after the AJAX files. In other words, make sure all the AJAX library JS files are referenced before yours. I suggest putting all your own files in a separate ScriptManagerProxy to isolate them, and making that section the last one in the list.

By combining files, you create a new resource whose URL contains the references of all the JS files it includes (which is why the URL is limited to 1024 characters). That said, you are better off always using the same sections with the same files in the same order, so that the browser downloads them only once and caches them for future use. I think it is better to download a file somewhat larger than needed than to re-download only the needed part many times.

My suggestion is to put all references to the common AJAX library files in multiple sections of the ScriptManager and subsequent ScriptManagerProxy controls, and to put this whole list in one or more user controls that you add to your pages. Custom scripts should be added separately as needed, or listed in one ScriptManagerProxy and shared across all pages if you do not have that many files.

As it is highly recommended to load JavaScript at the end of the page, I suggest keeping your ScriptManager free of any JS reference and putting all your references in ScriptManagerProxy controls placed at the end of the page.
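A minimal page skeleton along those lines might look like this (the IDs and the single reference shown are illustrative; in practice each proxy would hold one of the subsets described above):

```aspx
<asp:ScriptManager ID="ScriptManager1" runat="server" />

<!-- ... page content ... -->

<asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
  <CompositeScript ScriptMode="Release">
    <Scripts>
      <asp:ScriptReference Name="MicrosoftAjax.js" />
    </Scripts>
  </CompositeScript>
</asp:ScriptManagerProxy>
```

The ScriptManager at the top stays empty; the proxies at the bottom carry the actual references.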


Agile Design Artifacts

I was recently contracted by a company to be the technical lead on a web application development project. My first task was to come up with an architecture document for the application within two to three weeks. I quickly realized that this was not realistic. I could eventually have designed the application, but I was convinced it would not be effective, for the following reasons:

  • I was promised the full requirements a week after I started. I did not get them and, even now, after three months, they are still in progress;

  • The web application was intended to connect to a back end bought from a third-party provider. We did not know much about the API (in fact, the API was expected to change, and is still changing three months later);

  • I have always been convinced that an overly detailed architecture becomes obsolete as soon as developers start implementing the functionality.

For these reasons, I told the team manager that I preferred not to spend my time drawing class diagrams or any other sort of artifact that would be thrown away or, at best, expensively kept up to date during the development process. What I suggested instead was to:

  • Draw a high-level design diagram; this diagram included all the providers and layers of the application.

  • Write a few class stubs to give developers a direction. The classes did not contain any implementation; instead they contained many "TODO" comments for the developers. We also agreed that the comments were there as suggestions: developers were allowed to go in another direction as long as they respected the overall design idea.

  • Focus on only one type of artifact: the code itself. Well-written code is the best artifact, as it expresses not only the static view but also the dynamic view of the system, which is, from my point of view, the most important view for understanding a system, especially when supporting it.

  • Schedule and take the necessary time for a few meetings with all developers to explain the design guidelines; this is far better than writing thick documents, as the developers came up with very interesting questions and suggestions drawn from their past experiences. Once we agreed on the design, we started developing what we called "the infrastructure", the framework that would support the application.

During this phase, we decided to use the Scrum methodology; this was new to my client (they had applied parts of it in the past). I insisted that an iterative development process would help us get things done and deliver high-quality software.

We called this first phase Sprint 0; we did a lot of refactoring as the code was written and the application grew in size. We decided that no user interface would be developed during this sprint; we wanted to focus only on the framework and the design.

We are now in sprint 3. We have delivered some functionality to the business and had very positive comments so far. Even managers who were at first skeptical about the Scrum methodology are now much more comfortable, and confident that an iterative process can effectively help deliver on time and with high quality.

.NET, C#, Extension methods

C# 3.0 Extension Methods

One of the new features in .NET 3.5 and C# 3.0 is extension methods. Extension methods, as their name states, let you extend any existing type by adding new methods without having to inherit from it.

For instance, one of the common issues we encounter when retrieving data from a database is checking whether a value is DBNull before we can use it. I used to write a DbReaderHelper class that implemented 'DBNull-safe' data-retrieval methods. The syntax for declaring an extension method is very simple: it is a static method whose first argument is prefixed with the keyword "this", followed by the type we want to extend.

For instance, if I want to add a new method called "nGetInt32" that returns Int32.MinValue if the field is DBNull, and the field value (an Int32) otherwise, I write it as follows:


public static Int32 nGetInt32(this DbDataReader reader, int index)
{
    if (reader.IsDBNull(index))
        return Int32.MinValue;
    return reader.GetInt32(index);
}

How to use it?

The extension method must be part of a static class. To use it, simply add the namespace the class belongs to in your "using" directives, and that's it; even IntelliSense takes it into account.
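Putting it all together, a complete sketch might look like this (the class and namespace names here are my own):

```csharp
using System;
using System.Data.Common;

namespace MyApp.Data // hypothetical namespace
{
    public static class DbReaderExtensions
    {
        // Returns Int32.MinValue when the column is DBNull,
        // the column value otherwise.
        public static Int32 nGetInt32(this DbDataReader reader, int index)
        {
            if (reader.IsDBNull(index))
                return Int32.MinValue;
            return reader.GetInt32(index);
        }
    }
}
```

With "using MyApp.Data;" in place, any DbDataReader instance exposes the method directly, e.g. reader.nGetInt32(0).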



Why use extension methods?

If we wanted to extend the DataReader class to make it DBNull safe, we would have to inherit from one of its implementations, and therefore could not extend all the derived classes at once. The other advantage is that you can extend any class, even those marked "sealed". On the other hand, extending a class this way has a limitation: the extension method can only use public members, so it has no access to the inner state of the object.

Method resolution

  • Instance methods have priority over extension methods.
  • If the same extension method is in scope more than once, calling it produces a compiler error (ambiguous call).
  • The compiler looks in the current namespace and in all the namespaces included with using directives.
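A quick sketch of the first rule (all the names here are my own):

```csharp
public class Greeter
{
    public string Hello() { return "instance"; }
}

public static class GreeterExtensions
{
    // Same signature as the instance method; it is never chosen
    // when called through a Greeter reference.
    public static string Hello(this Greeter g) { return "extension"; }
}

// new Greeter().Hello() returns "instance": the instance method wins.
```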


.NET, Anonymous types, C#

C# 3.0 Anonymous types

With C# 3.0 you can declare a variable without typing it explicitly, and its type will be inferred from the right-hand expression. This is different from the absence of typing in VB-like languages (ASP, VB): the resulting variable really is typed and can no longer be assigned a value of a different type. Also, right after entering the line that declares the variable, IntelliSense becomes available.
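For example, a declaration along these lines (the property values are illustrative):

```csharp
var x = new { Name = "Doe", EmployeeNumber = "123456" };
```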


We can see that the variable x is of an anonymous type with the two properties Name and EmployeeNumber, plus the properties inherited from the Object class. If we declare another anonymous-type variable with the same structure:

var y = new { Name = "Smith", EmployeeNumber = "199283" };

Then the new variable, y, is compatible with x. In other words, since it has the same properties with the same types (string, string), the C# compiler detects that x and y are of the same type even though we never declared it explicitly. In this case the assignment x = y is valid and copies the value of y into x. However, if we declare a third variable z:

var z = new { Name = "Buddy", Company = "Microsoft" };

and try to assign the value of z to x, for instance, we get a compilation error (and not a runtime error, which is typical of untyped variables). Even though z has two string members, just like x and y, it is not compatible: its second property does not have the same name (Company versus EmployeeNumber).

Some restrictions on anonymous types:

Implicitly typed variables are only permitted as local variables. You cannot declare a class member with the var keyword. A method cannot return a var type, nor can it have a var parameter; none of the following declarations will compile:

public int function(var x, int y)
public var function(int x, int y)

However, returning the value of a var variable is permitted:

public int function(int x, int y)
{
    var t = x + y;
    return t;
}

The reason is simple: at the point of the return statement, the compiler knows that t is an integer, and an integer is exactly what the method must return.

Why use anonymous types?

First, let us calm the spirits of those who think this is a step back to untyped variables. As mentioned in this post, it is really implicit typing: if you look at it closely, the variables are typed, but the type is constructed 'on the fly', so to speak. We can use it without declaring it.

This may open the door to poor programming if it is overused. However, the impact should be limited, since an anonymous type only has the scope of the method where it is used. The main advantage of anonymous types is in LINQ (beyond the scope of this post). I will post something on LINQ pretty soon.

Well, the only other advantage I can see is being able to manipulate temporary results in a collection of an anonymous structure, and then do something with them, without having to declare a very lightweight structure or class just to carry the results within a single method. I can see the usage in a data-access-layer class that has to process the results returned by a DataReader. But until today we lived without it and never really felt a need for it.

In conclusion, I believe that the real advantage is the LINQ query, which I will talk about in my next post.

Team Foundation Server

Legally extend the TFS trial period

I tried TFS for a few months using the trial version and got stuck when it told me the trial period was over, because TFS does not notify you beforehand that your trial is about to end (as most other software does).

I did a quick search on Google and found a TFS trial extender that worked fine for me. It extends TFS for an extra month, which should be enough time to buy your licence. Here's the link:

Note that, according to the author of this utility, you can extend it only once.

.NET, C#, DirectX, WMV

Determine a video size and duration

You can determine the size and duration of a video using DirectX.
After installing DirectX on your development box, add a reference to the Microsoft.DirectX.AudioVideoPlayback assembly.

Then, create an instance of the Video class and pass it the name of the movie file you want to load (I tested it with a WMV file).


Microsoft.DirectX.AudioVideoPlayback.Video video = new Microsoft.DirectX.AudioVideoPlayback.Video(path);
StringBuilder sb = new StringBuilder();
// Size and Duration (in seconds) are read directly from the Video object.
sb.AppendFormat("Size: {0}x{1}, duration: {2} s", video.Size.Width, video.Size.Height, video.Duration);
textBox1.Text = sb.ToString();

Agile, iteration, iterative development, Uncategorized

Iterative development



Iterative development consists of delivering parts of a system or application at regular intervals. These intervals are called iterations. An iteration is thus a sequence of activities covering requirements analysis, the design of parts of the system, their implementation and their testing, which results in the delivery of one or more features that will be part of the final product.

The classic approach (phased, or waterfall) compared with the iterative approach

For example, imagine we have a project to develop an online application offering 20 different features (20 scenarios). In a phased approach:

  • A complete analysis is performed to elaborate and detail all the scenarios;
  • The architect delivers a detailed architecture of all the components of the application;
  • The functional analysis and the architecture document are handed to the developers, who implement all 20 scenarios;
  • Quality-assurance tests are run on the 20 scenarios;
  • The product is delivered to the client for acceptance testing;
  • The changes requested by the client are made;
  • The final product is delivered.

Note the second-to-last point: it is rare for the client to request no changes before the final product. This often causes delivery delays and/or weekends sacrificed to working on the client's latest requests. In an iterative approach, we keep the same steps as above, except that they take place within an iteration of fixed duration, and are therefore repeated as many times as there are iterations. For example, we might decide that scenarios 1, 10 and 15 will be developed in iteration 1. In iteration 2, we will probably have fixes for scenarios 1, 10 and 15, plus a few more scenarios taken from the complete list, and so on.

Advantages of iterative development

The iterative development approach offers the following advantages:

  • It adapts better to change. In fact, this approach treats change as part of the development cycle of an application, not as an untimely event;
  • It lets us detect risks very early in the life of the project;
  • It allows choices, in terms of architecture or graphic design for example, to be adjusted very early in the process, rather than after they have been fully realized (with the hours already spent);
  • Each iteration is an experience that lets us learn more about the challenges of the project. For example, it is common to revisit the estimates made at the start of the project after the first few iterations;
  • The client gets the chance to see the result of each iteration, and therefore the opportunity to request adjustments as the project moves forward, not only at the end during acceptance testing;
  • Quality control takes place at the end of each iteration;
  • Developers stay focused on the subset of features that belong to the current iteration. Any change or fix added to the task list must be planned into subsequent iterations. Since an iteration is relatively short, clients and project managers generally accept this delay;
  • The client is reassured, because they can see the project's progress concretely by manipulating or running real use cases of their product.


Rules for managing iterations

To manage iterations well, it is important to observe a few rules, the most important of which are:

  • Fix the duration of iterations at the start of the project: an iteration should last two to three weeks. It is strongly advised to express the duration in weeks, so that it is easy to remember;
  • At the start of each iteration, all project stakeholders, including the client, should meet to discuss the experience of the previous iteration and determine the content of the next one;
  • The production team must present a product at the end of the iteration. By "product" we mean a set of functions that would be usable as-is even if, in most cases, we would not go to production without the remaining functions. The presentation is done by using the application (not, for example, by showing PowerPoint slides).

Of course, there are exceptions to these rules, especially the last one: for example, a server application with no user interface would be difficult to demonstrate partially. Also, the first ("set-up") and last ("delivery") iterations are typically a bit different from the others. In the first, the number of meetings between the various stakeholders is often high and the deliverables are documentation. In the last iteration, the team works on fixing the final bugs and focuses on deployment procedures (building a deployment application, for example).


The advantages of an iterative approach are obvious, but applying such a methodology requires more discipline than a classic approach, where the production team has a set amount of time to deliver the whole product and, within that time, there is no way to measure the project's progress precisely.

To the project manager's question "Are you on time?", developers will give one of two answers, depending on the time remaining: "Yes" if the delivery date is still far away, or "No" if delivery is only a few days away. At that point the room for manoeuvre is almost nil, and it is too late to act or to negotiate additional time with the client. By setting checkpoints at the end of each iteration, the project manager can assess the project's progress himself, and his room for manoeuvre is all the greater when overruns are detected in the early stages of the project. This approach also makes it easier to integrate change requests and client comments: since they do not all arrive at once at the end of acceptance testing, the project manager can better plan their impact on the delivery date of the final product.

Also read about the impact Scrum had on our team:

Lyra, MP3, MP3 Player, RCA

RCA Lyra MC2602… continued

After spending weeks unsuccessfully trying to find a firmware update for my MP3 player, I emailed support via the RCA web site (faking a US postal code, since the form is only for US residents). I received a reply pretty quickly suggesting that I call 866-449-7112 for any support information (they could simply publish this phone number on their web site, no?).

I called, and now I am going to send them the device. By the way, they confirmed to me that there is no update for this device.

To be continued…