Wednesday, August 15, 2012

Fluent Migrations - another way to track database changes

Every software developer is a potential creator aiming to leave a remarkable thumbprint on the project development cycle. Fuzzy type names, discussion-proofed coding conventions, the variety of third-party components used in projects - those are just a few ways to express one's creativity.

Things get even more creative when it comes to database development, as various techniques are used in projects to track database changes. One of the most popular solutions is to maintain a set of SQL change scripts and have some execution engine (batch file, Ant build script, etc.) apply them to the target database incrementally. Another widely mentioned approach is to generate SQL change scripts using a schema compare tool (e.g. Redgate SQL Compare, tablediff).

Fluent migrations could be mentioned as a third option. In this article I am referring to fluent migrations as part of the FluentMigrator API. It is a .NET API that can be used to perform incremental database changes. Each database change is described as a Migration class descendant and basically consists of two methods:
  • Up() - logic to perform incremental change
  • Down() - logic to revert incremental change

Here is a sample of a complete migration class:

   [Migration(201208150020)]
   public class CreateUserTable : Migration
   {
      public override void Up()
      {
         Create.Table("Users");
      }

      public override void Down()
      {
         Delete.Table("Users");
      }
   }

I am not going to dive deep into the API details - some documentation is available here. My goal is to mention several features of FluentMigrator which might bring it into your projects.

  1. Unified API for all supported databases - saying that, I am about 90% right: for example, you cannot have a single syntax for adding an identity column to both an MS SQL and an Oracle table, as there is no identity column type in Oracle. But the general routines - adding/dropping a table, adding/dropping/modifying a column, managing indexes - are supported (see the sketch after this list). On the other hand, methods to execute SQL statements and external SQL files are also provided. The list of supported databases is available here
  2. Migration runners - a set of tools is provided to run migrations on the specified database. Those can be used as stand-alone tools or incorporated into a continuous integration build process. No more batch scripts should be required.
  3. Ability to roll back a migration - it was already mentioned that each migration class consists of two methods: one for performing and another for reverting an incremental change. With that in mind, after performing several migrations you still have the possibility to bring your database back to some previous state. Think of software with pluggable components - migrations sound like a good way to install/remove them. 
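
To give a better feeling of the fluent syntax, here is a slightly richer migration sketch. It sticks to standard FluentMigrator expressions (Alter.Table().AddColumn(), Execute.Sql(), Delete.Column()); the table and column names are made up for illustration only:

   [Migration(201208150030)]
   public class AddUserEmailColumn : Migration
   {
      public override void Up()
      {
         // Extend the Users table created by the previous migration.
         Alter.Table("Users")
            .AddColumn("Email").AsString(255).Nullable();

         // Raw SQL can be mixed in when the fluent API is not enough.
         Execute.Sql("UPDATE Users SET Email = '' WHERE Email IS NULL");
      }

      public override void Down()
      {
         Delete.Column("Email").FromTable("Users");
      }
   }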

Thursday, April 12, 2012

Developing email functionality using smtp4dev

In order to develop and - most probably - to test some email sending functionality you will definitely need an SMTP server - running locally or somewhere outside. Under Windows XP an SMTP server was available as an IIS component, which is now gone in Windows 7 (more information is available here).

The other option is to use third-party SMTP server software. One such application is smtp4dev, available at http://smtp4dev.codeplex.com/. It is nice because:

  • No installation required (standalone executable is available)
  • No setup required
  • Messages are not actually delivered anywhere and can be previewed locally using the smtp4dev GUI

In order to test email sending functionality, run this snippet:


using System.Net.Mail;

namespace Sender
{
    class Program
    {
        static void Main(string[] args)
        {
            // Compose a simple test message.
            MailMessage message = new MailMessage();
            message.To.Add("recipient@test.com");
            message.From = new MailAddress("sender@test.com");
            message.Subject = "Message Subject";
            message.Body = "Message Body";

            // "localhost" points at the locally running smtp4dev instance.
            SmtpClient smtp = new SmtpClient("localhost");
            smtp.Send(message);
        }
    }
}

Sunday, March 25, 2012

Overriding default ASP.NET MVC 3 scaffolding in Visual Studio 10

Scott Hanselman has a nice article about overriding ASP.NET MVC 3 scaffolding in Visual Studio. Putting it short:

  1. Copy the folder [c:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ItemTemplates\CSharp\Web\MVC 3\CodeTemplates\] into your project directory (drag-dropping it from Windows Explorer into Visual Studio will do the trick); 
  2. Clear the Custom Tool property for all copied T4 template files (*.tt) - this stops output files from being generated by those templates when they are saved.

That is it. After that you can modify the existing templates or add new ones, and they will immediately appear in the Add View / Add Controller scaffolding dialogs. Since the templates are included locally in the Visual Studio project, they are available for this particular project only.

Friday, March 16, 2012

Intersecting date intervals - the easy way

I guess this is a common situation where you have a task to filter a list of objects that fall into some date range. "Get all my tasks entered today", "Get all my payments made during the last week" - those could be quite realistic examples.

SELECT * FROM dbo.Tasks 
    WHERE tsk_DateCreated BETWEEN @today_start AND @today_end

  SELECT * FROM dbo.Payments
    WHERE pmt_DateMade BETWEEN @week_start_date AND @week_end_date


Those are quite easy scenarios, as you only have a single date to put into the interval: tsk_DateCreated for tasks and pmt_DateMade for payments. Things get a little more complicated when two date fields come into play.

Examples: "Get all employee vacation list that intersect with a given period", "Get all user accounts that were valid during the given interval".

Solution (the hard way):

SELECT * FROM dbo.EmployeeVacations
    WHERE (empv_DateFrom BETWEEN @date_from AND @date_to) OR 
          (empv_DateTo BETWEEN @date_from AND @date_to) OR 
          (empv_DateFrom <= @date_from AND empv_DateTo >= @date_to) 

  SELECT * FROM dbo.Users
    WHERE (usr_DateFrom BETWEEN @date_from AND @date_to) OR 
          (usr_DateTo BETWEEN @date_from AND @date_to) OR 
          (usr_DateFrom <= @date_from AND usr_DateTo >= @date_to) 


Solution (the easy way):

SELECT * FROM dbo.EmployeeVacations
    WHERE (NOT((empv_DateTo < @date_from) OR (empv_DateFrom > @date_to)))

  SELECT * FROM dbo.Users
    WHERE (NOT((usr_DateTo < @date_from) OR (usr_DateFrom > @date_to)))


The trick is simple: two intervals do not intersect only when one of them ends before the other starts, so negating that condition covers every possible overlap. Give it a try, you will not regret it.
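
The same condition translates directly into application code if you ever need to filter in memory. A minimal sketch; the method and parameter names are made up for illustration:

public static bool Overlaps(DateTime fromA, DateTime toA, DateTime fromB, DateTime toB)
{
    // Two intervals overlap unless one of them ends before the other starts.
    return !(toA < fromB || fromA > toB);
}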

Update: another way to check for date interval intersection:

SELECT * FROM dbo.EmployeeVacations
    WHERE (@date_from BETWEEN empv_DateFrom AND empv_DateTo OR empv_DateFrom BETWEEN @date_from AND @date_to)

  SELECT * FROM dbo.Users
    WHERE (@date_from BETWEEN usr_DateFrom AND usr_DateTo OR usr_DateFrom BETWEEN @date_from AND @date_to)

Friday, March 9, 2012

Solving "Received an unexpected EOF or 0 bytes from the transport stream" issue

As the popular saying goes, most things are not as easy as they seem. Calling a web service with transport security enabled - this is what I'd expect to be easy. And it is... until something like this shows up: 

System.ServiceModel.CommunicationException: An error occurred while making the HTTP request to https://xxx.xxx.xxx. 
This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. 
This could also be caused by a mismatch of the security binding between the client and the server. 
---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. 
---> System.IO.IOException: Received an unexpected EOF or 0 bytes from the transport stream.
   
My first idea was that the error was caused by a certificate validation issue, so I tried to apply the solution mentioned in my earlier post. No luck this time.

After some research the problem was solved by setting:

ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;

A quick note on why this was helpful.
It is all related to the protocols - SSL3 and TLS - used to establish secure communication between two endpoints.

  • SSL3 is a security protocol released in 1996 by Netscape Communications as the successor of SSL2. 
  • TLS (TLS 1.0, sometimes known as SSL 3.1) was introduced by the IETF in 1999 (RFC 2246) and is based on SSL3, although not 100% backward compatible.

More information about the protocols mentioned above could be found here.

.NET applications use TLS for transport security by default. If you look at the static constructor of the ServicePointManager class you will notice the following line:

s_SecurityProtocolType = SecurityProtocolType.Tls | SecurityProtocolType.Ssl3;

My wild guess would be that initially the TLS protocol is used to establish a secure connection with the server. When the server does not support TLS, negotiation continues using the SSL3 protocol. The problem is that some servers that do not support TLS terminate the connection immediately. This is why in such cases SSL3 communication needs to be forced.
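
For completeness, a minimal sketch of where the override fits - the protocol has to be assigned before the first HTTPS request is made by the process. LegacyServiceClient and GetData are hypothetical names used only for illustration:

using System.Net;

// Force SSL3 for the whole AppDomain; do this before any HTTPS call is made.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;

using (var client = new LegacyServiceClient()) // hypothetical generated WCF proxy
{
    client.GetData(); // hypothetical service operation
}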

Thursday, March 8, 2012

ServicePointManager.ServerCertificateValidationCallback - the magic cure

The things I am going to touch on in this article are widely described. It is a common task for a developer to test WCF services in a development environment with transport security enabled for a service. So basically what is done here:

1) IIS is configured to enable SSL for a web site, hosting a WCF service.
    Instructions for IIS 6.0 can be found here.
    Instructions for IIS 7.0 can be found here.
2) A self-issued certificate is used for that purpose.
    Instructions for creating a self-issued certificate can be found here.
3) WCF service and client bindings are configured to have transport security enabled.
    Instructions to enable WCF transport security can be found here. 

After spending a couple of days on the steps mentioned above you might come to the glorious moment when you feel strong enough to test your service. After doing that you might see something like this:

System.ServiceModel.Security.SecurityNegotiationException: Could not establish trust relationship for the SSL/TLS secure channel with authority 'xxx.xxx.xxx'. 
---> System.Net.WebException: The underlying connection was closed: Could not
establish trust relationship for the SSL/TLS secure channel. 
---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.

It is actually saying that the server certificate cannot be validated. You might try solving this problem by adding the server certificate to a trusted certificate authority store on your local machine. But my guess is that 99% of developers would solve the issue like this:

ServicePointManager.ServerCertificateValidationCallback += 
   (sender, certificate, chain, sslPolicyErrors) => true;

What happens then is that you have your client and server talking to each other over a secured channel. Nice, isn't it? Make sure it is for debugging purposes only.
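
One simple way to make sure this shortcut never leaks into a release build is to guard it with a conditional compilation symbol - just a small sketch of the idea:

#if DEBUG
// Trust any server certificate - debug builds only.
ServicePointManager.ServerCertificateValidationCallback += 
   (sender, certificate, chain, sslPolicyErrors) => true;
#endif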

What if you have a client talking to multiple services hosted by different service providers? This could be a real-world scenario, e.g. an integration module talking to multiple B2B systems.

Overriding the ServicePointManager.ServerCertificateValidationCallback behaviour turns off server certificate validation globally for the whole client application. This is a problem if you want certificate validation to be turned off for a single service only.

It is worth taking a quick look at the place where ServicePointManager.ServerCertificateValidationCallback is used in the .NET Framework infrastructure. To be more specific: look for the HandshakeDoneProcedure.CertValidationCallback method. HandshakeDoneProcedure is a private class, so most probably Reflector is the tool to help you here. Or let me share a snippet:

private bool CertValidationCallback(string hostName, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    ...
    bool flag = true;
    ...
    if (ServicePointManager.ServerCertificateValidationCallback != null)
    {
        flag = false;
        return ServicePointManager.ServerCertValidationCallback.Invoke(this.m_Request, certificate, chain, sslPolicyErrors);
    }
    if (flag)
    {
        return (sslPolicyErrors == SslPolicyErrors.None);
    }
    return true;
}

The default certificate validation behaviour (when the validation callback is not assigned) consists of checking the sslPolicyErrors value. You might want to do the same for the services whose server certificates you still need to be validated.

ServicePointManager.ServerCertificateValidationCallback = new System.Net.Security.RemoteCertificateValidationCallback(ValidateCert);

public static bool ValidateCert(Object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    HttpWebRequest request = sender as HttpWebRequest;

    // Skip validation only for the one well-known host (placeholder name below).
    if (request != null && request.Host == "host_name_to_skip_validation")
    {
        return true;
    }

    // Everything else gets the default behaviour: no policy errors allowed.
    return sslPolicyErrors == SslPolicyErrors.None;
}

Tuesday, February 7, 2012

Producing multiple versions of Web.config file

What is the best way of preparing configuration files for different project environments? Is it worth investing resources in creating a more automated and transparent build process? Does it make any sense for a project that is going to be installed into a production environment only once?

From time to time I have a discussion similar to this with somebody - my team lead, my project colleagues - and the most popular answer to the questions mentioned above was Yes for a product and Partially Yes for a project. Partially usually meant the following steps:

  • retrieving source code from the repository
  • building the project

No deployment to the environment locations, no merging of configuration files. Basically the build process is something that comes out of the box with some minor adjustments.

My personal opinion is Yes for both cases (project and product), and let me put down a short note on why.

Usually the production environment is not the first and only target where the project is going to be deployed. In many cases the bits to be pushed to the client flow through a bunch of other environments - development, testing, staging - so having an automated build procedure isn't just helpful, it is a must. It does not matter whether you are deploying a project or a product - the build process should be complete, and any manual intervention by the person performing the build should be avoided. Copying build files to the server using Total Commander, tracking configuration file changes using WinMerge - no, all this should be part of the automatic build process. Any manual intervention is a possible source of human error - something can be skipped, something mistyped. On the other hand, it is often a requirement to have frequent builds that can be deployed on demand (for example, when fixing a critical issue), so managing something for every build by hand would be painful.

One of the major challenges faced in the projects I was involved in was the requirement to prepare different configuration files for different environments during the build process. Usually the changes used to be merged into multiple configuration files manually. Any alternatives? Let me share one of them.

Microsoft Visual Studio 10 allows you to prepare multiple templates for configuration files. Basically you have a single master configuration file and child templates containing only the specific parts of the configuration that have to be applied for a specific environment. What can be achieved with templates? 

  • It is possible to remove/add/replace configuration file elements
  • It is possible to set configuration element attributes to some environment specific values
  • Templates can be transformed into configuration files during the build process 

Possible usage scenarios:

  • setting different appSettings values for different environments
  • setting different connectionStrings values for different environments

and similar.

A sample of the Web.config transformation template:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <connectionStrings>
        <add name="ApplicationServices"
            connectionString="Data Source=DebugSQLServer;Initial Catalog=MyDebugDB;Integrated Security=True"
            xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
    </connectionStrings>
    <system.web>
        <customErrors defaultRedirect="GenericError.htm"
        mode="RemoteOnly" xdt:Transform="Insert">
            <error statusCode="500" redirect="InternalError.htm"/>
        </customErrors>
    </system.web>
</configuration>

The above template serves two purposes:

  • It replaces the ApplicationServices connection string with a value specific to the particular environment
  • It inserts a customErrors section into the transformed configuration file, which is also specific to the particular environment. 

When writing a configuration file template you should only care about two things: how to LOCATE a master configuration file part to be changed and how to actually TRANSFORM it.

Element search conditions can be defined as xdt:Locator attribute values (xdt is the prefix for the transformation syntax namespace http://schemas.microsoft.com/XML-Document-Transform). Conditions can include XPath expressions, comma-separated attribute values, locator functions and more. In the example above, the Match(name) function is used to locate the connection string entry in the master configuration file whose name attribute is set to ApplicationServices.

Element transformation modes can be defined as xdt:Transform attribute values. Some examples: SetAttributes - set the located element's attributes to the specified values; Insert - add the specified element; and others.

More information about template file syntax is available here: Web.config Transformation Syntax for Web Application Project Deployment.

The final thing - how to apply the created templates and produce transformed configuration files during build process?

When you build a project using Visual Studio, you are actually using MSBuild to achieve this. The *.csproj file itself is an MSBuild script. The configuration file transformation is nothing more than an additional build step that needs to be defined in that script.

I am not an MSBuild expert and creating fluent, shiny build scripts is not my strongest suit, so I will just give a short note on how you could modify an existing web site *.csproj file to perform the configuration file transformation and copy the results to the output directory.

Add the following XML fragment to your *.csproj file just before the closing Project element:

<Project>
...
    <UsingTask 
        TaskName="TransformXml" 
        AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.Dll"/>
</Project>

And add the following lines inside the <Target Name="AfterBuild"> element:

<TransformXml Source="Web.config" Transform="Web.Debug.config" Destination="bin\Web.Debug.config" />
<TransformXml Source="Web.config" Transform="Web.Release.config" Destination="bin\Web.Release.config" />

What is going to happen during the build process (when building the project using Visual Studio or running MSBuild directly) is that the Web.config file content is going to be transformed using the Web.Debug.config and Web.Release.config templates, and the output is going to be saved into the bin directory under the same names.

To me all this looks like a painless way to transform configuration files the way I want for any environment. Does it look so to you too?

Monday, January 30, 2012

Nice reading about SQL indexing and SQL tuning for developers

My colleagues shared with me a link to an online SQL indexing and tuning tutorial (a guide to database performance by Markus Winand) which I think is worth reading. "Indexed ORDER BY", "Searching for ranges" - those are few sample topics described in the book. Different DBMS are covered - MS SQL, Oracle and others. You can also check your SQL tuning knowledge by taking a short online test.

Saturday, January 28, 2012

The correct way to download text files

Downloading a file - a common task you have to deal with in many projects. Generally it is quite simple - all you have to do is read a file or string content into memory and dump it into the HttpResponse.

public static void DownloadFile(string content, string fileName, bool addBOM)
{
     DownloadFile(System.Text.Encoding.UTF8.GetBytes(content), fileName, addBOM);
}

public static void DownloadFile(byte[] content, string fileName, bool addBOM)
{
     if (content != null)
     {
          byte[] bom = System.Text.Encoding.UTF8.GetPreamble();
          int contentLength = content.Length + (addBOM ? bom.Length : 0);

          HttpContext.Current.Response.ClearHeaders();
          HttpContext.Current.Response.Clear();
          HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
          HttpContext.Current.Response.AddHeader("Content-Length", contentLength.ToString());
          HttpContext.Current.Response.ContentType = "application/octet-stream";
          HttpContext.Current.Response.Flush();

          if (addBOM)
          {
               HttpContext.Current.Response.BinaryWrite(bom); 
          }
           
          HttpContext.Current.Response.BinaryWrite(content);
          HttpContext.Current.Response.End();
     }
}

The example shows how a UTF8 text file should be pushed for download. I believe this is the most popular encoding for text files containing localized data. A couple of things to be noted here:
  • Some text editors behave differently when opening UTF8 encoded files with a BOM (byte order mark) and without it. This is mostly related to legacy text editors which might handle BOM-prefixed files incorrectly. So you might have a requirement to include the BOM in the file you are presenting for download, or to skip it. 
  • Adding a Content-Length header is a very good practice. It provides the client browser with the information required to display download progress correctly. When you have a requirement to download string content as a file, use the appropriate encoding when calculating the content length value. Relying on the string length in characters is not always a good solution: in UTF8 the localized international characters are encoded using two or more bytes, so counting characters would understate the content length by at least one byte for every international character in the string (see the short example below).
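
A minimal sketch of the difference, using a made-up string containing a non-ASCII character:

using System.Text;

string text = "Šiandien";                          // 8 characters, one of them non-ASCII
int charCount = text.Length;                       // 8
int byteCount = Encoding.UTF8.GetByteCount(text);  // 9 - 'Š' takes 2 bytes in UTF8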

Saturday, January 21, 2012

Replacing page contents completely using javascript

Let me share with you an example of how it is possible to replace the whole page content using javascript. When is this applicable? One of the situations I have experienced was the requirement to call a third-party service returning the whole report content as an HTML page. Report generation was quite a time-consuming operation, and for that reason there was a requirement to call the service using Ajax.

The complete solution:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
    <head>
        <title>Calling a long running page</title>
        <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
        
        <script type="text/javascript">
            $(document).ready(function () {
                $.ajax({
                    url: "LongRunningPage.aspx",
                    type: "POST",
                    context: document,
                    data: {
                        parameter1: "This is parameter1 value"
                    },
                    success: function (data) {
                        document.open();
                        document.write(data); 
                        document.close(); 
                    }
                });
            }
            );
        </script>

    </head>
    <body>
        <p><img src="ajax-loader.gif" alt="In progress" /></p>
    </body>
</html>

By the way, I highly recommend this site if you need a progress info image for ajax calls.

Monday, January 16, 2012

Extracting WCF bindings

A couple of days ago my colleague Rimas and I came across a situation where we needed to add several configuration parameters to a predefined WCF binding. We also wanted to know how those predefined bindings are configured. WCF provides many bindings preconfigured out of the box - starting with BasicHttpBinding and moving all the way down to more complex configurations. All of them are basically Binding instances with a predefined set of configuration parameters, which is what we were interested in.

One of the easiest ways to see what's inside a binding is to add a service reference using Visual Studio. It generates a proxy class and configures the client to use the service, so the binding information gets pushed into the *.config file.

Generating configuration parameters for all WCF bindings at once is a slightly more complex task. We solved it this way:

static void Main(string[] args)
{
    string sectionName;
    string configName;

    // Find all concrete Binding descendants that have a parameterless constructor.
    var types = Assembly.Load("System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089")
        .GetTypes()
        .Where(t => t.IsSubclassOf(typeof(System.ServiceModel.Channels.Binding)))
        .Where(t => !t.IsAbstract)
        .Where(t => t.GetConstructor(Type.EmptyTypes) != null)
        .ToArray<Type>(); 

    // The generator collects binding configuration and writes it into Bindings.config.
    var generator = new ServiceContractGenerator(
        ConfigurationManager.OpenMappedExeConfiguration(
            new ExeConfigurationFileMap() { ExeConfigFilename = "Bindings.config" }, 
            ConfigurationUserLevel.None));

    foreach (var type in types)
    {
        generator.GenerateBinding(Activator.CreateInstance(type) as System.ServiceModel.Channels.Binding, out sectionName, out configName); 
    }

    generator.Configuration.Save(); 
}

Saturday, January 7, 2012

ASP.NET MVC 3 controller action method parameter binding

When I started to play around with ASP.NET MVC, one of the areas I needed more light shed on was controller action parameter binding. At first glance the binding behaviour looked like "you name it - you get it", which seemed kind of suspicious. As it appeared later, there was quite a lot of truth in that rule.

In a very basic manner there are two classes involved in executing a controller action method and binding its parameters. ControllerActionInvoker is the class responsible for finding an appropriate controller action method by matching its signature to the route data (obviously that is not the only task performed by this class). Usually developers do not need to deal with this class directly.

After the appropriate action method is found, the DefaultModelBinder class comes into play by managing method parameter binding. And this is where the magic begins.

Parameter values are bound using the data that arrives within an HTTP request. The following request data sources are used when looking up values to be bound (in the order defined below):

  1. Posted form data
  2. Route data
  3. Querystring data
  4. Submitted file data

Keeping it short for simple action parameter types: action parameters matching request data keys are bound to the corresponding request data values. All non-matching action parameters are set to their default values.

Keeping it short for complex action parameter types: public properties of the parameter type matching request data keys are bound to the corresponding request data values. All non-matching properties are set to their default values. Complex type action parameters are always instantiated.

I guess some examples would be a good place to start explaining the rules mentioned above.

A simple untyped view to display a list of product categories would call an action like this:

public ActionResult Categories()
{
    ViewData["Categories"] = ProductNH.GetProductCategories();
    return View(); 
}

A corresponding view markup:

@using (Html.BeginForm("Categories", "Product", FormMethod.Get))
{ 
    <ul>
    @foreach (var category in (IList<Category>)ViewData["Categories"])
    {
        <li>@category.Name - @category.Descn</li>
    }
    </ul>
    
    <input type="submit" value="Refresh categories" />
}

Most probably you will notice that the html form and the submit button are redundant in this particular situation, and you would be absolutely right. But I would like to keep them as they are to stay in sync with my later samples.

Let's add some filtering functionality and update the action method to accept a filter parameter:

public ActionResult Categories(string namePart)
{
    ViewData["Categories"] = ProductNH.GetProductCategories(namePart);
    return View(); 
}

An updated view markup:

@using (Html.BeginForm("Categories", "Product", FormMethod.Get))
{ 
    <label for="namepart">Category name part: </label>
    <input type="text" name="namepart" />
    
    <ul>
    @foreach (var category in (IList<Category>)ViewData["Categories"])
    {
        <li>@category.Name - @category.Descn</li>
    }
    </ul>
    
    <input type="submit" value="Refresh categories" />
}

After the html form is submitted, the "namepart" value comes to the server as a query string value (notice the form submit mode FormMethod.Get). It is then matched to the action parameter by name (a case-insensitive match is performed).

Let's change the form submit mode to FormMethod.Post. In that case either of the following actions is valid to serve the view request:

[HttpPost]
public ActionResult Categories(string namePart)
{
    ViewData["Categories"] = ProductNH.GetProductCategories(namePart);
    return View();
}

[HttpPost]
public ActionResult Categories(FormCollection formCollection)
{
    ViewData["Categories"] = ProductNH.GetProductCategories(formCollection["namepart"]);
    return View();
}

Next, let's move a little further and define a class for the category filter:

public class CategoryFilter
{
    public string NamePart { get; set; }
}

And let's pass it into the controller action:

[HttpPost]
public ActionResult Categories(CategoryFilter filter)
{
    ViewData["Categories"] = ProductNH.GetProductCategories(filter.NamePart);
    return View();
}

What happens here is that a CategoryFilter instance is constructed using the default parameterless constructor and its NamePart property is mapped to the posted form value identified by the key "namepart". As you might have already noticed, introducing the CategoryFilter class did not require any changes in the view markup. Lovely, isn't it?

Let's make the view a typed view and create a model class for it:

public class CategoryListModel
{
    public CategoryFilter Filter { get; set; }

    public IList<Category> Categories { get; set; }
}

In this case changes to the view markup are necessary as well:

@using (Html.BeginForm("Categories", "Product", FormMethod.Post))
{ 
    <label for="filter.namepart">Category name part: </label>
    <input type="text" name="filter.namepart" />
    
    <ul>
    @foreach (var category in Model.Categories)
    {
        <li>@category.Name - @category.Descn</li>
    }
    </ul>
    
    <input type="submit" value="Refresh categories" />
}

Notice the input name change - it now has a "filter" prefix before "namepart". And now the action:

[HttpPost]
public ActionResult Categories(CategoryListModel model)
{
    model.Categories = ProductNH.GetProductCategories(model.Filter.NamePart);
    return View(model);
}

What happens here is that a CategoryListModel instance is constructed using the default parameterless constructor. The Filter property value is instantiated the same way, and its NamePart property is mapped to the prefixed posted form value. Basically the prefix "filter" is mapped to the Filter property name. The rest of the binding is performed as described in the previous example.

So: you name it - you get it, isn't it so?

Sunday, January 1, 2012

Configure NHibernate SQL logging for ASP.NET web application

Let me share quick instructions for enabling NHibernate SQL logging for an ASP.NET web application (default log4net logging). Two steps need to be taken:

  1. Enable log4net logging for your web application. This can be done by putting the following line into the Global.asax code-behind:
    protected void Application_Start()
    {
        log4net.Config.XmlConfigurator.Configure(); 
        // Other statements suppressed. 
    }
    
  2. Update the web.config file of your web application to enable NHibernate SQL logging to a text file:
    <?xml version="1.0"?>
    <configuration>
        <configSections>
            <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net" />
            ...
        </configSections>
      
        <log4net>
            <appender name="rollingFile" type="log4net.Appender.RollingFileAppender,log4net" >
    
                <param name="File" value="Logs/log.txt" />
                <param name="AppendToFile" value="false" />
                <param name="RollingStyle" value="Date" />
                <param name="DatePattern" value="yyyy.MM.dd" />
                <param name="StaticLogFileName" value="true" />
    
                <layout type="log4net.Layout.PatternLayout,log4net">
                    <param name="ConversionPattern" value="%d [%t] %-5p %c - %m%n" />
                </layout>
            </appender>
    
            <logger name="NHibernate.SQL" additivity="false">
                <level value="DEBUG"/>
                <appender-ref ref="rollingFile"/>
            </logger>
        </log4net>
        
    </configuration>
    
Log file content structure can be customized by changing the ConversionPattern parameter value. More information about possible pattern values is available in the Apache log4net SDK documentation.

Saturday, December 31, 2011

Short comment on UnobtrusiveJavaScriptEnabled app setting

While looking into the web.config file of an ASP.NET MVC 3 application you will notice an application setting called UnobtrusiveJavaScriptEnabled. I am not sure about you, but for me it looked like a mysterious setting for turning some mysterious functionality on and off.

The purpose of that setting is explained by Brad Wilson in his post Unobtrusive Client Validation in ASP.NET MVC 3. Putting it short: with this setting turned off, client-side validation is performed using the Microsoft javascript libraries (the same way it was performed in ASP.NET MVC 1 and 2). Otherwise (with the setting turned on) client-side validation is performed using jQuery Validate.

Friday, December 30, 2011

Implementing Cancel button functionality in ASP.NET MVC 3

Implementing Cancel button functionality - currently I cannot imagine a better topic for my first blog post ever. And no matter how trivial the topic is, I am sure there is plenty of room left for me to slip somewhere.

I am a huge fan of the ASP.NET Web Forms framework. I love the stateful Web model approach and yes – I am not afraid of postbacks, I am using viewstate and I think I could not live without server side controls and code-behind in my web pages.

And now I am trying to fall in love again with ASP.NET MVC. Which is not an easy task for me, but I am moving on.

While playing around with my ASP.NET MVC application I came across a situation where I needed to implement a simple data entry form. It had some text fields on it, a Save button to submit the entered information and a Cancel button which would just redirect to an index page. The problem for me was – how to implement the Cancel button functionality?

One of the possible solutions is presented by Andrey Shchekin (Multiple submit buttons with ASP.NET MVC: final solution). This could work, but I don’t want the complete MVC page life cycle to be triggered just for Cancel button functionality. More than that - I don’t even want to make the Cancel button a submit button, because I don’t want my form validation to be triggered when clicking it.

The solution I found suitable for me is simple: 

    <input type="submit" value="Save" />
    <input type="button" value="Cancel" 
           onclick="javascript:document.location.href='@Url.Action("Index", "Home")'" />