
New tools in App Insights to better understand your customers’ behaviour


I was trying to get some insights into our application and noticed a new section in App Insights which seems to have been added on May 11th (probably announced at the Build conference). This section is in preview mode and looks very promising.

Basically, Microsoft is adding new analytics reports on user behavior, such as reports on sessions and user retention. You can also create cohorts and track users by group (useful for A/B testing, for instance).

Although these reports are very useful, I do not see them replacing Google Analytics and the like. At the same time, I do not believe that is Microsoft's intent (maybe in the longer term). However, by combining these reports with other information, such as errors, performance and dependency failures, one can more easily figure out what is potentially keeping users from converting.

More details
https://azure.microsoft.com/en-us/blog/new-tools-for-understanding-user-behavior-with-application-insights/


10 simple rules to write better ASP.NET applications


When writing ASP.NET applications, we usually put a lot of design effort into the back end: isolating the different application layers, focusing on domain-related design, and so on. However, as soon as we get close to the UI layer, the code becomes a bit messy. My explanation for this phenomenon is that the closer we are to the UI, the less the code is reusable, or at least meant to be reusable. Although that may be true (that the code is not meant to be reused), UI code must still be clean and maintainable.

UI code should use OOP principles such as inheritance and encapsulation, just like the code in any other layer. Moreover, some of the SOLID principles can, and should, be applied when designing our user controls and web pages. Since Web Forms are still widely used, and I believe this will remain the case for a long time, in this article I present some design and programming rules that I use in my ASP.NET projects. Hopefully, following these rules will help you write cleaner code and ease the maintenance of your UI layer.

1. Use a base class for the UI classes

It is a good habit to start your project by creating at least a BasePage and a BaseUserControl class and making all your web forms and user controls inherit from them. The reason is that you will almost certainly need some code that is common to all, or some, of your web forms or user controls.
For instance, all the pages of our site will have a title and possibly some meta tags. Adding a PageID property and a method that retrieves the title and the meta tags from a CMS based on the PageID is easy to achieve using inheritance, and the maintenance of the code will obviously be easier.
Another common scenario is the need to show/hide elements on a web form depending on a certain state. Adding a virtual method called from the OnPreRender event of the base class centralizes the show/hide mechanism in one method, overridden by each web form. This is an implementation of the Open/Closed Principle using the Template Method design pattern.

Example:

using System;
using System.Web.UI;

public class BasePage : System.Web.UI.Page
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        DoEnableDisableControls();
        SetTitleFromCMS();
        SetMetaTagsFromCMS();
    }

    // Set from the @Page directive; identifies the page in the CMS.
    public string PageID
    {
        get;
        set;
    }

    /// <summary>
    /// Override this method to enable or disable controls
    /// depending on the state of the page.
    /// </summary>
    protected virtual void DoEnableDisableControls()
    {
    }

    private void SetTitleFromCMS()
    {
        if (String.IsNullOrEmpty(PageID))
            return;
        // Queries the CMS for the page title
        this.Title = "Whatever we got from the CMS";
    }

    private void SetMetaTagsFromCMS()
    {
        // Queries the CMS for the meta tags
        // and adds them to the page header
    }
}

A page that derives from the base page just needs to set the PageID property, and it will get a title and meta tags (note the PageID property assigned in the page directive).

<%@ Page Language="C#" PageID="DefaultPage" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ExtRefWebApp._Default" %>

Using such a base class lets you inject common code at any point in the development process, benefiting all inheriting pages in a consistent way.

2. Encapsulate session variables

Here are the reasons why I never use session variables directly in the code, but rather encapsulate them into classes:

  • Session variables are widely used in web applications. They are very helpful, but we must not forget that a session variable is nothing more than a global variable from an OOP perspective,
  • When debugging or maintaining a web application, we often come across a piece of code that uses a session variable and need to find where that variable was set; this is not easy to do,
  • Session is a property bag that can hold any type. If we change the type stored for a certain key, we need to check all the code and change the casts (typically using a text search over the code),
  • A session variable either exists or is null. There is no way to give it a default value, for instance, so whenever we use a session variable we must check it for null before using it.

The best way to avoid these issues is to have a set of classes that hold the session variables and expose them through properties. Using ‘Find all references’ on a property setter lists every piece of code that sets the session variable. And if we change the type held in the session variable, the solution will most likely no longer compile, making it very easy to apply all the changes or to estimate the amount of work the change requires; it also makes the code more type safe.
If we have too many session variables, they should be split across several classes which, preferably, should belong to the same namespace and possibly live in the same folder to ease their maintenance.
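
A minimal sketch of this approach (the ShoppingSession class, the key name and the CartItemCount property are all hypothetical) could look like this:

using System.Web;

public static class ShoppingSession
{
    private const string CartItemCountKey = "CartItemCount";

    public static int CartItemCount
    {
        get
        {
            // Default to 0 when the variable has never been set,
            // so callers no longer need a null check.
            object value = HttpContext.Current.Session[CartItemCountKey];
            return value == null ? 0 : (int)value;
        }
        set
        {
            HttpContext.Current.Session[CartItemCountKey] = value;
        }
    }
}

With this in place, ‘Find all references’ on the setter lists every writer of the variable, and changing the stored type breaks the build instead of failing at runtime.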

There is certainly much more to say about session variable usage than what is covered here. For instance, using the session in logical layers other than the UI is common, e.g. a business layer class that reads the session. Needless to say, in that case we violate the encapsulation principle and the isolation of the application layers, since the business layer now depends on the HTTP context (we can no longer reuse it in a non-HTTP application).

3. Do not create deep UI control hierarchies

User controls are a very good means of reusing UI code. However, when we start to embed user controls into other user controls without limit, we end up with an endless tree of nested controls, which makes maintaining the application a real hell.

Usually, it is a good habit to limit the tree to two or three levels maximum: the page contains controls that may contain controls, but the innermost controls must contain only basic .NET Framework controls (or a server control). Even at the same level, we should limit the number of user controls included in a single user control. For instance, creating one big control that contains many inner controls is not always a good idea. Bear in mind that a page is a container for controls as well: I have very often seen developers who put no ASP.NET controls in the page but embed everything in a ‘super’ user control that, in turn, contains child controls. Why should we add this level of indirection and complexity? A page is a perfect container for controls.

4. Not everything should be embedded in a separate user control

If we do not pay attention, we can quickly end up with a huge number of controls in a web application. To avoid this, I usually follow the rules below to determine whether a piece of UI functionality deserves a separate control:

  • Reusability: if we know that this piece of UI is going to be reused somewhere else, we must embed it in a user control,
  • Complex rules: sometimes it is better to separate certain parts of a user interface because we want to encapsulate some of their complexity in a separate class and file. This is nothing more than the Single Responsibility Principle applied to UI components.

For instance, a large form containing multiple sections that may need to communicate with each other is better implemented by isolating the sections in different controls and making them communicate through events, as explained in the next point.

5. Use events to communicate between user controls and their containers

Using a property in a user control to communicate state to its container is a common pattern in web development. In most cases, developers set a session variable when something happens in the user control, and the container (be it the page or another user control) checks the value of that session variable, usually in the ‘OnLoad’ event, to determine whether something happened. You will notice that, pretty quickly, the code in this event handler becomes messy.
First, we must ensure that the code that sets the session variable runs before the code that checks its value. We must also ensure that the value is reset; otherwise, the next time we hit the same page, we may react to the event even though it has not happened yet.
One elegant way to notify a status change from one control to another is to expose an event to which the container subscribes (see the sketch after the list below). There are many advantages to this approach:

  • Clean code that reacts to a specific event from the inner control,
  • The same mechanism can be used to notify as many controls as we want,
  • We are sure to be notified when the event happens, whereas using session variables adds uncertainty since we do not know which code executes first: the code that sets the session variable or the code that checks it,
  • We no longer have to reset a session variable after the event has been handled, since we no longer use one,
  • We do not waste a session variable just to hold a flag (which saves memory and respects basic OOP principles).
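
Here is a minimal sketch of the pattern (OrderSectionControl, CheckoutPage and the control names are hypothetical; the controls would be declared in the markup): the inner control exposes an event and the container subscribes to it.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// The inner user control raises an event instead of setting a session flag.
public partial class OrderSectionControl : UserControl
{
    public event EventHandler SectionCompleted;

    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        // Notify whoever subscribed (the page or another control).
        EventHandler handler = SectionCompleted;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

// The containing page reacts to the notification.
public partial class CheckoutPage : Page
{
    protected OrderSectionControl OrderSection;  // declared in the markup
    protected Panel ConfirmationPanel;           // declared in the markup

    protected void Page_Init(object sender, EventArgs e)
    {
        OrderSection.SectionCompleted += OrderSection_SectionCompleted;
    }

    private void OrderSection_SectionCompleted(object sender, EventArgs e)
    {
        ConfirmationPanel.Visible = true;
    }
}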

6. Avoid embedding code in the ASPX files

The best place to add code to a web form is the code-behind file. Code embedded in ASPX files is messy and very difficult to maintain, as in the old ASP days.

7. Use declarative code as much as possible

If you need to set a property of a user control, or of any server control, do it in the ASPX file instead of the code-behind. There are cases where we do not have the choice (conditional values, for instance), but in general this makes the markup clearer and increases maintainability, since we know much about the control just by looking at one file (no need to check the code-behind).

8. Be careful when using static variables

Static members are shared and live as long as the ASP.NET process runs. This means that whenever you use static members, you are consuming memory that will not be released unless you release it explicitly.

One example I saw in a past project was storing user-specific information in a static dictionary (it could be any type of collection, by the way). Under load testing, the application's memory usage grew to 12 GB for a couple of hundred users. As the dictionary was static, its size just kept adding up as new users came in, and in the end we got an ‘Out of memory’ exception.

In that particular scenario, we could have put in place a timer and a method that periodically cleans up the dictionary to avoid the situation described above. In the end, though, we removed the dictionary and fetched the information from the database each time the code needed it (no caching).
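
For the record, the timer-based cleanup could look like the following minimal sketch (UserInfoCache and all its members are made-up names, and the 10- and 30-minute intervals are arbitrary):

using System;
using System.Collections.Generic;
using System.Threading;

public static class UserInfoCache
{
    // Each entry stores the time it was added together with the value.
    private static readonly Dictionary<string, KeyValuePair<DateTime, string>> _entries =
        new Dictionary<string, KeyValuePair<DateTime, string>>();
    private static readonly object _lock = new object();

    // Runs Cleanup every 10 minutes for the lifetime of the process.
    private static readonly Timer _cleanupTimer =
        new Timer(Cleanup, null, TimeSpan.FromMinutes(10), TimeSpan.FromMinutes(10));

    public static void Set(string userId, string info)
    {
        lock (_lock)
        {
            _entries[userId] = new KeyValuePair<DateTime, string>(DateTime.UtcNow, info);
        }
    }

    public static string Get(string userId)
    {
        lock (_lock)
        {
            KeyValuePair<DateTime, string> entry;
            return _entries.TryGetValue(userId, out entry) ? entry.Value : null;
        }
    }

    private static void Cleanup(object state)
    {
        lock (_lock)
        {
            // Evict entries older than 30 minutes so the dictionary
            // cannot grow without bound.
            var stale = new List<string>();
            foreach (var pair in _entries)
            {
                if (DateTime.UtcNow - pair.Value.Key > TimeSpan.FromMinutes(30))
                    stale.Add(pair.Key);
            }
            foreach (string key in stale)
                _entries.Remove(key);
        }
    }
}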

9. Consider the performance from the client side perspective

In terms of performance, we often focus only on the code that executes on the server and neglect the bandwidth usage and the number of downloads a page triggers. While CPU and memory keep getting cheaper, bandwidth remains expensive and, unless we are implementing an intranet in a controlled environment, we have no control over the bandwidth available to our web site's users.

There are many tools and ways to make the page performance better on the client side (a good starting point would be this article: http://www.codeproject.com/KB/aspnet/PeformanceAspnet.aspx).

On the server side, we should take care not to use too much ViewState. Storing an object tree in the ViewState may look helpful, and more efficient than storing it in session variables since the object will not be held in memory indefinitely. Performance will actually be worse, since your object tree is serialized and sent to the client browser as part of the response; and when the client submits a form, the serialized object is sent back to the server again.
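
If a control does not actually need its state round-tripped, you can simply switch its ViewState off; a minimal sketch (CatalogPage and ProductsGrid are hypothetical names):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class CatalogPage : Page
{
    protected GridView ProductsGrid;  // declared in the markup

    protected void Page_Init(object sender, EventArgs e)
    {
        // The grid is rebound on every request anyway, so serializing its
        // state into every response and post-back would only inflate the page.
        ProductsGrid.EnableViewState = false;
    }
}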

Another issue is the number of JavaScript files loaded in a page. We should combine them using the ScriptManager (see https://bellouti.wordpress.com/2008/09/14/combining-javascript-files-with-ajax-toolkit-library/ for more details).

There are too many client-side performance topics to cover them all in this article; the two links cited above will give you a very good starting point if you have never considered this aspect.

10. Use caching whenever possible

User controls can be cached using the OutputCache directive. User controls that display content grabbed from a CMS, for instance, should consider using it.
We can also cache the content from the CMS in static members or using the Enterprise Library caching objects. If we do so, we must make sure we cache the form of the content that is closest to the UI. For instance, if you grab XML from a CMS and transform it with an XSLT to generate the HTML to be displayed, you are better off caching the resulting HTML rather than the XML: caching the HTML avoids re-transforming the XML the next time we hit the page.
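
As an illustration of that last point, here is a minimal sketch (CmsContentCache and GetHtml are hypothetical names) that caches the transformed HTML in a static member:

using System.IO;
using System.Xml;
using System.Xml.Xsl;

public static class CmsContentCache
{
    private static readonly object _lock = new object();
    private static string _cachedHtml;

    public static string GetHtml(string xmlFromCms, XslCompiledTransform transform)
    {
        if (_cachedHtml == null)
        {
            lock (_lock)
            {
                if (_cachedHtml == null)
                {
                    using (XmlReader reader = XmlReader.Create(new StringReader(xmlFromCms)))
                    using (StringWriter writer = new StringWriter())
                    {
                        // Transform once and keep the final HTML.
                        transform.Transform(reader, null, writer);
                        _cachedHtml = writer.ToString();
                    }
                }
            }
        }
        return _cachedHtml;
    }
}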


Organizing classes in C#


I was doing some quick research on C# coding conventions and found one rule that comes up in almost every coding convention I came across: always put fields (i.e. private members) at the top of the class definition.

I am not sure of the reasons for this rule. Let me explain why I usually advocate the opposite: order properties and methods from the most accessible to the least accessible.

A class has two types of users:

  1. developers writing the code of the class (usually one or two developers at most),
  2. developers who are ‘clients’ or users of the class (the rest of the developers, plus whoever maintains the software).

I consider that over the whole lifecycle of a project, there are likely to be far more developers of the second type than of the first.

When you use a class or try to maintain it (refactor it, or read it to figure out what it does), you are not interested in the private members. You focus on the public methods and/or properties (the concept of a black box that exposes services). Then, if you need to go further, you will probably have a look at the internals of the class. For this reason, when I open a class file, I really prefer to have all the interesting information right away, just right there, without having to scroll down.

For me, this is a consequence of OOP and the encapsulation principle.
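
To make the idea concrete, here is a short, made-up example laid out the way I advocate: public surface first, private details last.

public class OrderProcessor
{
    // Public API first: this is what most readers care about.
    public decimal Total { get; private set; }

    public void Process(decimal amount)
    {
        Total += ApplyTax(amount);
    }

    // Protected members next: they only matter to subclasses.
    protected virtual decimal TaxRate
    {
        get { return 0.15m; }
    }

    // Private details last: only the authors of the class need them.
    private decimal ApplyTax(decimal amount)
    {
        return amount * (1 + TaxRate);
    }
}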

I would really appreciate comments… Am I the only one who advocates this? It is not a big deal, and I have given up on the subject for the moment; however, I generally do not like the argument ‘This is the standard!’ (which is the only one I have received so far). I would rather know the real reason behind the rule, but I could not find any explanation for it, except from a fellow who told me it was a C++ programmers’ habit: they did not really have a choice, because otherwise the compiler would not compile their code (I have not verified this claim yet).


Iterative development


 

Definition

Iterative development consists of delivering parts of a system or application at regular intervals. These intervals are called iterations. An iteration is thus a sequence of activities covering requirements analysis, the design of parts of the system, their implementation and their testing, and it results in the delivery of one or more features that will be part of the final product.

The classic approach (staged, or "Waterfall") compared to the iterative approach

For example, imagine we have a project to develop an online application offering 20 different features (20 scenarios). In a staged approach:

  • We carry out a complete analysis to elaborate and detail all the scenarios,
  • The architect delivers a detailed architecture of all the components of the application,
  • The functional analysis and the architecture document are handed over to the developers, who implement all 20 scenarios,
  • We run quality assurance tests on the 20 scenarios,
  • We deliver the product to the customer for acceptance testing,
  • We make the changes requested by the customer,
  • We deliver the final product.

Note the second-to-last point: it is rare for the customer not to ask for changes before the final product can ship. This often causes delivery delays and/or weekends sacrificed to working on the customer's latest requests. In an iterative approach, we keep the same steps as above, except that they all happen within an iteration of fixed duration and are therefore repeated as many times as there are iterations. For example, we may decide that scenarios 1, 10 and 15 will be developed in iteration 1. In iteration 2, we will probably have fixes for scenarios 1, 10 and 15, plus a few more scenarios taken from the complete list, and so on.

Advantages of iterative development

The iterative development approach offers the following advantages:

  • It adapts better to change. In fact, this approach treats change as part of the development cycle of an application, not as an untimely event,
  • It lets us detect risks very early in the life of the project,
  • It lets us adjust choices, in terms of architecture or graphic design for example, very early in the process rather than after they have been fully implemented (and the hours already spent),
  • Each iteration is an experience that teaches us more about the challenges of the project. For example, it is common to revise the estimates made at the beginning of the project after the first few iterations,
  • The customer gets to see the result of each iteration, and thus the opportunity to request adjustments as the project moves forward rather than only at the end, during acceptance testing,
  • Quality control takes place at the end of each iteration,
  • Developers stay focused on the subset of features belonging to the current iteration. Any change or fix added to the task list must be planned into subsequent iterations. Since an iteration is relatively short, customers and project managers generally accept this delay,
  • The customer is reassured, because they can see the project's progress concretely by manipulating or running real use cases of their product.

  

Rules for managing iterations

To manage iterations well, it is important to observe a few rules, the most important of which are:

  • Fix the iteration length at the start of the project. An iteration should last between 2 and 3 weeks, and it is strongly advised to express the duration in weeks so that it is easy to remember,
  • At the beginning of each iteration, all the project stakeholders, including the customer, should meet to discuss the experience of the previous iteration and decide on the content of the next one,
  • The production team must present a product at the end of the iteration. By "product" we mean a set of features that would be usable as-is, even if, in most cases, we would not go to production without the remaining features. The presentation is done by using the application (it is not about showing PowerPoint slides, for example).

Of course, there are exceptions to these rules, especially the last one: for example, a server application that has no user interface would be difficult to demonstrate partially. Also, the first iteration ("Set Up") and the last one ("Delivery") are typically a bit different from the others. In the first, the number of meetings between the various stakeholders is often high and the deliverables are of the "documentation" kind. In the last iteration, the work consists of fixing the final bugs and focusing on deployment procedures (creating a deployment application, for example).

Conclusion

The advantages of an iterative approach are obvious, but applying such a methodology requires more discipline than a classic approach, where the production team has a set amount of time to deliver the complete product and, within that time, there is no precise way to measure the progress of the project.

To the project manager's question "Are you on schedule?", developers will give one of two answers depending on the time remaining: a "Yes" if the delivery date is still far away, or a "No" if delivery is only a few days away. By then the room for manoeuvre is almost nil, and it is too late to act or to negotiate additional time with the customer. With checkpoints at the end of each iteration, the project manager can assess the progress of the project first-hand, and the earlier slippage is detected, the greater the room for manoeuvre. This approach also makes it easier to integrate change requests and customer feedback: since they do not all arrive at once at the end of acceptance testing, the project manager can better plan their impact on the delivery date of the final product.

Read also about the impact SCRUM had on our team: https://bellouti.wordpress.com/2008/12/04/how-scrum-helped-our-team/


Convert from FAT or FAT32 to NTFS


A few months ago, I bought an external hard disk for emergency backups, and I did not notice that I had formatted it with FAT32 until a week ago, when I tried to install ORCAS Beta 2 on that same disk. I got stuck because the virtual machine files were too big to be held on a FAT32 file system (over 4 GB), and I wondered whether I could convert the disk to NTFS without having to format it and lose its content.

I finally found a solution that worked fine and did not require me to back up all my files, format the disk, and copy everything back.

The solution lies in the CONVERT.EXE utility. To get a list of all its parameters, type “CONVERT /?” in a command shell window (Start –> Run –> type “CMD”, then type “CONVERT /?”).

For instance, to convert a hard disk, type “CONVERT F: /FS:NTFS” (i.e. convert hard disk F: from FAT(32) to the NTFS file system).

Note that you can add the /NoSecurity parameter so that the newly created NTFS partition allows anyone to access the content of the disk (it is the same as giving the Everyone group access to your disk). This is particularly useful when converting an external HD that might be used on another system.

I also suggest running CHKDSK /F first in order to check the disk and correct any errors on it. This will save you time, because CONVERT runs CHKDSK before converting the disk; if any error is found, the operation is aborted and you will have to run CHKDSK manually to correct the errors.
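
Putting it all together, the whole operation (assuming the external disk is drive F:) boils down to two commands:

REM Check and repair the volume first, so CONVERT does not abort halfway.
CHKDSK F: /F
REM Then convert it; /NoSecurity leaves the files accessible to everyone.
CONVERT F: /FS:NTFS /NoSecurity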