Silverlight Navigation Framework: resolve the pages using an IoC container


Silverlight 3 introduced a good navigation framework: you could combine Frame and UriMapper objects to change portions of your page (or the entire page) on the fly, allowing you to navigate to different sections of your application. There are plenty of resources available on the topic; just search for them.

Silverlight 4 further improved the model, allowing us to easily customize portions of this framework too: for example, we can change the way a new object (page) is loaded into a Frame by implementing our own version of the INavigationContentLoader interface and assigning it to the ContentLoader property of the Frame object.

I won’t hide the fact that I’m a big fan of writing ‘modular’ applications, so I tend to separate everything into components and use interface contracts for each module. An IoC container works very well in this scenario, because you can think of it as your service provider or an application entry point provider.

Being able to combine an IoC container with the Navigation Framework and the UriMapper gives us great flexibility, because we can easily swap parts of the application just by reconfiguring the objects inside the container, making the writing of a modular Silverlight application a (cough, cough) breeze.

What I want to obtain is this:

<navigation:Frame x:Name="ContentFrame" 
                  Source="/Home" 
                  Navigated="ContentFrame_Navigated" 
                  NavigationFailed="ContentFrame_NavigationFailed">
    <navigation:Frame.ContentLoader>
        <helpers:IocNavigationContentLoader />
    </navigation:Frame.ContentLoader>
    <navigation:Frame.UriMapper>
        <uriMapper:UriMapper>
            <uriMapper:UriMapping Uri="" MappedUri="/Views/Home.xaml"/>
            <uriMapper:UriMapping Uri="/Search/{query}" MappedUri="Search?q={query}"/>
            <uriMapper:UriMapping Uri="/Album/{id}" MappedUri="Album?id={id}"/>
            <uriMapper:UriMapping Uri="/Search" MappedUri="Search"/>
            <uriMapper:UriMapping Uri="/Album" MappedUri="Album"/>
            <uriMapper:UriMapping Uri="/{pageName}" MappedUri="/Views/{pageName}.xaml"/>
        </uriMapper:UriMapper>
    </navigation:Frame.UriMapper>
</navigation:Frame>

Here we have a mixed mapping:

  • The first and the last mappings resolve the Uri to normal XAML pages (they have the default .xaml extension and I want to use the default PageResourceContentLoader here).
  • The four mappings in the middle map the Uri to something that the default ContentLoader cannot resolve, so our custom ContentLoader will come into action.
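To make that concrete: navigating to /Search/abba gets rewritten by the UriMapper into Search?q=abba, and the custom loader only needs the part before the ‘?’ to ask the container for a component named “Search”. A minimal sketch of that extraction (pure string logic; the class and method names here are illustrative):

```csharp
using System;

public static class UriMappingDemo
{
	// the component name is everything before the query string,
	// mirroring the GetComponentName helper used by the loader
	public static string GetComponentName(Uri uri)
	{
		return uri.ToString().Split('?')[0];
	}

	public static void Main()
	{
		// "/Search/{query}" is mapped to "Search?q={query}" by the UriMapper,
		// so the loader will ask the container for a component named "Search"
		Console.WriteLine(GetComponentName(new Uri("Search?q=abba", UriKind.Relative)));
	}
}
```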

A first rough implementation is easy to do:

public class IocNavigationContentLoader : INavigationContentLoader
{
	private static readonly PageResourceContentLoader DefaultPageContentLoader = new PageResourceContentLoader();

	public IAsyncResult BeginLoad(Uri targetUri, Uri currentUri, AsyncCallback userCallback, object asyncState)
	{
		if (targetUri.ToString().Contains(".xaml"))
			return DefaultPageContentLoader.BeginLoad(targetUri, currentUri, userCallback, asyncState);

		var iar = new IoCNavigationContentAsyncResult(asyncState);
		//Tries to load type from already loaded assemblies
		iar.Result = App.Container.Resolve(GetComponentName(targetUri));
		userCallback(iar);
		return iar;
	}

	private string GetComponentName(Uri uri)
	{
		return uri.ToString().Split('?')[0];
	}

	public bool CanLoad(Uri targetUri, Uri currentUri)
	{
		if (targetUri.ToString().Contains(".xaml"))
			return DefaultPageContentLoader.CanLoad(targetUri, currentUri);
		// check if the IoC can resolve the object
		return App.Container.Kernel.HasComponent(GetComponentName(targetUri));
	}

	public void CancelLoad(IAsyncResult asyncResult)
	{
		// nothing to cancel: container resolution completes synchronously
	}

	public LoadResult EndLoad(IAsyncResult asyncResult)
	{
		if (asyncResult is IoCNavigationContentAsyncResult)
			return new LoadResult((asyncResult as IoCNavigationContentAsyncResult).Result);

		return DefaultPageContentLoader.EndLoad(asyncResult);
	}
}

public class IoCNavigationContentAsyncResult : IAsyncResult
{
	public IoCNavigationContentAsyncResult(object asyncState)
	{
		this.AsyncState = asyncState;
		this.AsyncWaitHandle = new ManualResetEvent(true);
	}

	public object Result { get; set; }

	public object AsyncState { get; private set; }

	public WaitHandle AsyncWaitHandle { get; private set; }

	public bool CompletedSynchronously
	{
		get { return true; }
	}

	public bool IsCompleted
	{
		get { return true; }
	}
}

The component logic is extremely simple: it holds an instance of the PageResourceContentLoader to be used for every Uri that contains the ‘.xaml’ string; everything else is delegated to the IoC container.

The core functions are:

  • CanLoad: we check whether the target is a plain XAML page; otherwise we ask the container (Castle Windsor in my case) whether the component is registered.
  • BeginLoad: again, if it’s a plain XAML page we pass control to the PageResourceContentLoader; otherwise we get the service name from the mapped Uri and resolve it through the container.

Simple as that, and it works! As a ‘side effect’, your Pages can now have dependencies injected for every service you like (logging, caching, searching, custom application services, etc.), and it will all be handled by the container.

This implementation was just a test; it’s not production code! A lot of improvements can be made. For example, you can take out the static container and use a service factory, or add the ability to resolve objects that reside in assemblies/XAPs downloaded on demand (it’s not that hard to do, believe me; again you can use the power of the UriMapper. There’s a very nice post by Corrado Cavalli on the topic; ok, it’s in Italian, but the code explains itself).

Edit: I just noticed I forgot to post the container configuration for those two page objects, so here it is:

Container.Register(
	Component.For<ISearchViewModel>().ImplementedBy<SearchViewModel>(),
	Component.For<ISearchView>().ImplementedBy<Search>().Named("Search"),
	Component.For<IAlbumViewModel>().ImplementedBy<AlbumViewModel>(),
	Component.For<IAlbumView>().ImplementedBy<Album>().Named("Album")
	);

As you can see, each page is implemented following the MVVM pattern, and it’s all resolved by the container.



Castle windsor, Ioc, Navigation, Silverlight


Ready for the 14th DotNetMarche Workshop? IoC, DI, AOP and related techniques = lot of fun for all


On April 16 the next DotNetMarche workshop will take place at Castelfidardo (Marche), Italy. Based on the requests and the feedback we received on our previous events, we’ve decided to focus on showing some techniques and ‘best practices’ you can use while developing an application using IoC / DI / AOP.

We’ll have four sessions for an afternoon of fun!

  • Stefano Leli - will introduce us to some of the basic concepts of using (and the reasons for using) an IoC/DI container to modularize your applications.
  • Andrea Balducci - will show us some of the most used libraries (Castle Windsor, Unity,...) and other advanced features these frameworks offer.
  • Giorgetti Alessandro (me :D) - will bring everything ‘live’ in a Silverlight 4 application; we’ll talk about the (in)famous service locator pattern and we’ll see some application examples: I took the demo application we’ve developed and used in the past workshops and refactored it to use a modular approach with an MVVM pattern for the UI...everything is configured and resolved through the IoC container.
  • Gian Maria Ricci - will introduce some AOP techniques to improve the application and clean-up the design a bit.

At the end of the workshop we’ll have the usual Q&A session (and the even more usual dinner afterwards :D)...don’t lose your chance to embarrass us!

For registration and more info head to our community website: www.dotnetmarche.org.



Dotnetmarche, Workshop, Ioc, Aop


NHibernate: a custom (parametric) UserType to truncate strings


While developing Dexter I ran once again into a common error you have to deal with when using NHibernate and strings: we tried to persist an entity whose string field exceeded the limit imposed by the database table; NHibernate rightfully complained with:

System.Data.SqlClient.SqlException: String or binary data would be truncated. The statement has been terminated.

Given that having partial data for these fields was acceptable, we decided to truncate the value itself. Instead of using the usual solution (that is, taking care of this in the entity class or somewhere else during validation) I decided to build some custom UserTypes and let NHibernate take care of the truncation when needed (this way we do not have to change anything in the logic of the application).
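Isolated from NHibernate, the truncation rule itself is trivial; here it is as a standalone sketch (the Truncate helper name is mine, not part of NHibernate):

```csharp
using System;

public static class TruncateDemo
{
	// keep at most maxLength characters; shorter (or null) values pass through untouched
	public static string Truncate(string value, int maxLength)
	{
		if (value != null && value.Length > maxLength)
			return value.Substring(0, maxLength);
		return value;
	}

	public static void Main()
	{
		// a value longer than the column limit is silently cut down to size
		Console.WriteLine(Truncate("String or binary data would be truncated", 6)); // prints "String"
	}
}
```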

Why this approach? Well, digging into the NHibernate code you can see that it uses a lot of structures very similar to a UserType to take care of the interaction between the mapped properties and the database (look at the NHibernate.Type namespace in the source code or with Reflector). Adding another one isn’t a big issue, so I decided to follow their approach and inherit directly from AbstractStringType:

public abstract class AbstractTruncatedStringType : AbstractStringType
{
	internal AbstractTruncatedStringType()
		: base(new StringSqlType())
	{
	}

	internal AbstractTruncatedStringType(StringSqlType sqlType)
		: base(sqlType)
	{
	}

	public abstract int Length { get; }

	public override void Set(System.Data.IDbCommand cmd, object value, int index)
	{
		string str = (string)value;
		if (str.Length > Length)
			str = str.Substring(0, Length);
		base.Set(cmd, str, index);
	}
}

and then added some specializations:

public class TruncatedString500 : AbstractTruncatedStringType
{
	public override int Length
	{
		get { return 500; }
	}

	public override string Name
	{
		get { return "TruncatedString500"; }
	}
}

public class TruncatedString100 : AbstractTruncatedStringType
{
	public override int Length
	{
		get { return 100; }
	}

	public override string Name
	{
		get { return "TruncatedString100"; }
	}
}

...

You can use these classes by writing your mapping like this:

...
<class name="Data" table="DATA">
		<id name="Id" column="ID" type="guid" unsaved-value="00000000-0000-0000-0000-000000000000">
			<generator class="guid" />
		</id>
		<property name="Data1" column="DATA1" type="string" length="100" />
		<property name="TruncatedString" column="TruncatedString" length="10" type="Structura.NHibernate.UserTypes.TruncatedString10, Structura.NHibernate" />
		...

Which is good...but not enough for me. You see, we would need to implement a different version of this class for every limit we want to impose on a string; why not make the type parametric, then? We can do it just by implementing the IParameterizedType interface.

The code is quite straightforward to write:

public class TruncatedString : AbstractTruncatedStringType, IParameterizedType
{
	private const int DefaultLimit = 50;

	public override int Length
	{
		get { return _length; }
	}
	private int _length = DefaultLimit;

	public override string Name
	{
		get { return "TruncatedString"; }
	}

	public void SetParameterValues(System.Collections.Generic.IDictionary<string, string> parameters)
	{
		if (false == int.TryParse(parameters["length"], out _length))
			_length = DefaultLimit;
	}
}

And this is how you can use it in your mappings:

...
<property name="TruncatedString" column="TruncatedString" length="10">
	<type name="Structura.NHibernate.UserTypes.TruncatedString, Structura.NHibernate">
		<param name="length">10</param>
	</type>
</property>
...

Not perfect yet, but it’s an improvement. So you have two choices: implement multiple versions of the abstract class (and keep your mappings cleaner), or use the parameterized version (and gain extra flexibility).



Nhibernate, UserType, Truncate, String


Silverlight / Castle Windsor – how to use a logging framework properly


In my last post I showed you how to build a simple logging framework for Silverlight applications and use it with an IoC container through ‘constructor injection’; well, in my opinion that is a bad programming practice. In short, when using a Dependency Injection library you have two types of DI mechanisms:

  • Constructor Injection: DI through constructor parameters; the container tries to use the constructor that best matches the modules it knows.
  • Property Injection: DI through properties; the container tries to resolve each property based on the modules it knows.

Usually I use constructor injection for all the modules I consider mandatory, and property injection for optional modules. A logging system does not add or carry any ‘core level’ feature to the application; it’s merely accessory (even if extremely useful). So a good practice is not to use constructor injection to initialize it; use property injection instead.

Consider something like this:

public class TestLoggingClass
{
 public TestLoggingClass()
 { }

 public ILogger Logger { get; set; }

 public void Operation()
 {
    Logger.Info("Operation started");
    Logger.Debug("Operation started");
 }
}

In this scenario we cannot just call Logger.Info(), because Logger can be null (remember, it’s optional now).

We have two different solutions to this problem:

  • Register a default NullLogger instance that actually does nothing.
  • Another simple solution is to write some extension methods that check the Logger for null before actually making the call; each method will be just a wrapper around your framework call:
public static class LoggingExtensions
{
  #region Debug

  public static void SafeDebug(this ILogger logger, string message)
  {
     if (logger != null)
        logger.Debug(message);
  }

  public static void SafeDebug(this ILogger logger, string message, Exception exception)
  {
     if (logger != null)
        logger.Debug(message, exception);
  }

  public static void SafeDebug(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.Debug(format, args);
  }

  public static void SafeDebugFormat(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.DebugFormat(format, args);
  }

  public static void SafeDebugFormat(this ILogger logger, Exception exception, string format, params object[] args)
  {
     if (logger != null)
        logger.DebugFormat(exception, format, args);
  }

  public static void SafeDebugFormat(this ILogger logger, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.DebugFormat(formatProvider, format, args);
  }

  public static void SafeDebugFormat(this ILogger logger, Exception exception, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.DebugFormat(exception, formatProvider, format, args);
  }

  #endregion

  #region Error

  public static void SafeError(this ILogger logger, string message)
  {
     if (logger != null)
        logger.Error(message);
  }

  public static void SafeError(this ILogger logger, string message, Exception exception)
  {
     if (logger != null)
        logger.Error(message, exception);
  }

  public static void SafeError(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.Error(format, args);
  }

  public static void SafeErrorFormat(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.ErrorFormat(format, args);
  }

  public static void SafeErrorFormat(this ILogger logger, Exception exception, string format, params object[] args)
  {
     if (logger != null)
        logger.ErrorFormat(exception, format, args);
  }

  public static void SafeErrorFormat(this ILogger logger, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.ErrorFormat(formatProvider, format, args);
  }

  public static void SafeErrorFormat(this ILogger logger, Exception exception, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.ErrorFormat(exception, formatProvider, format, args);
  }

  #endregion

  #region Fatal

  public static void SafeFatal(this ILogger logger, string message)
  {
     if (logger != null)
        logger.Fatal(message);
  }

  public static void SafeFatal(this ILogger logger, string message, Exception exception)
  {
     if (logger != null)
        logger.Fatal(message, exception);
  }

  public static void SafeFatal(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.Fatal(format, args);
  }

  public static void SafeFatalFormat(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.FatalFormat(format, args);
  }

  public static void SafeFatalFormat(this ILogger logger, Exception exception, string format, params object[] args)
  {
     if (logger != null)
        logger.FatalFormat(exception, format, args);
  }

  public static void SafeFatalFormat(this ILogger logger, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.FatalFormat(formatProvider, format, args);
  }

  public static void SafeFatalFormat(this ILogger logger, Exception exception, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.FatalFormat(exception, formatProvider, format, args);
  }

  #endregion

  #region Info

  public static void SafeInfo(this ILogger logger, string message)
  {
     if (logger != null)
        logger.Info(message);
  }

  public static void SafeInfo(this ILogger logger, string message, Exception exception)
  {
     if (logger != null)
        logger.Info(message, exception);
  }

  public static void SafeInfo(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.Info(format, args);
  }

  public static void SafeInfoFormat(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.InfoFormat(format, args);
  }

  public static void SafeInfoFormat(this ILogger logger, Exception exception, string format, params object[] args)
  {
     if (logger != null)
        logger.InfoFormat(exception, format, args);
  }

  public static void SafeInfoFormat(this ILogger logger, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.InfoFormat(formatProvider, format, args);
  }

  public static void SafeInfoFormat(this ILogger logger, Exception exception, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.InfoFormat(exception, formatProvider, format, args);
  }

  #endregion

  #region Warn

  public static void SafeWarn(this ILogger logger, string message)
  {
     if (logger != null)
        logger.Warn(message);
  }

  public static void SafeWarn(this ILogger logger, string message, Exception exception)
  {
     if (logger != null)
        logger.Warn(message, exception);
  }

  public static void SafeWarn(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.Warn(format, args);
  }

  public static void SafeWarnFormat(this ILogger logger, string format, params object[] args)
  {
     if (logger != null)
        logger.WarnFormat(format, args);
  }

  public static void SafeWarnFormat(this ILogger logger, Exception exception, string format, params object[] args)
  {
     if (logger != null)
        logger.WarnFormat(exception, format, args);
  }

  public static void SafeWarnFormat(this ILogger logger, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.WarnFormat(formatProvider, format, args);
  }

  public static void SafeWarnFormat(this ILogger logger, Exception exception, IFormatProvider formatProvider, string format, params object[] args)
  {
     if (logger != null)
        logger.WarnFormat(exception, formatProvider, format, args);
  }

  #endregion
}
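The trick that makes these wrappers safe is that an extension method compiles to a plain static call, so invoking one on a null reference does not throw. A minimal self-contained demo (the ILogger here is a tiny stand-in for the Castle interface, not the real one):

```csharp
using System;

public interface ILogger { void Info(string message); }

public static class NullSafeLoggingDemo
{
	// null-safe wrapper: 'logger' is just the first argument of a static call,
	// so it can legally be null when we get here
	public static void SafeInfo(this ILogger logger, string message)
	{
		if (logger != null)
			logger.Info(message);
	}

	public static void Main()
	{
		ILogger logger = null;
		logger.SafeInfo("this is silently ignored"); // no NullReferenceException
		Console.WriteLine("still alive");
	}
}
```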

This way we can have our optional logging component with an ‘elegant’ calling syntax (and we don’t pollute our code with all those null checks).



Castle windsor, Logging, Silverlight


Silverlight / Castle Windsor – implementing a simple logging framework


As your Silverlight projects grow more complex, you’ll soon start to feel the need for a solid logging system. In my WPF and Windows Forms projects I’m now used to having Castle Windsor + Log4Net as my logging system, and I really miss it in my Silverlight applications.

We don’t have a port of Log4Net for Silverlight yet, but we do have Castle Windsor now. Given that I’ll use it in my production environment, I decided to roll my own version of a simple logging framework that mimics some of the features Log4Net has and is based on Castle Windsor’s logging capabilities (at first...then, if I ever need to switch my IoC/DI framework, I can abstract the whole logging system and write the integration facilities).

The goal is to have a simple system that can be configured by adding and removing components from the Windsor container.

The logger infrastructure will be like this:

The Logger class derives directly from Castle’s LevelFilteredLogger (which implements the default ILogger interface). The logger also holds a collection of appenders; each IAppender object simply exposes a Log() function which accepts some parameters and performs the logging operation.

A simple implementation of the IAppender interface is given by the BrowserConsoleAppender class: this class logs to the IE Developer Toolbar console script window, or to the Firebug console window.

To simplify the logging configuration I’ve added a LoggingFacility class that can configure a default application logger, or provide a simple way to configure multiple loggers and appenders if you need a fine-grained configuration.

Let’s start simple: in this first version we’ll reuse interfaces and members directly from the Castle namespaces. The IAppender interface looks like this:

/// <summary>
/// interface for our custom appenders
/// </summary>
public interface IAppender
{
	void Log(LoggerLevel loggerLevel, string loggerName, string message, Exception exception);
}

The BrowserConsoleAppender is quite simple too, with the actual code that logs to the browser’s console stolen from some articles around the web :D:

public class BrowserConsoleAppender : IAppender
{
	public void Log(global::Castle.Core.Logging.LoggerLevel loggerLevel, string loggerName, string message, Exception exception)
	{
		HtmlWindow window = HtmlPage.Window;
		// only log if a console is available (IE and FF)
		var isConsoleAvailable = (bool)window.Eval("typeof(console) != 'undefined' && typeof(console.log) != 'undefined'");
		if (isConsoleAvailable)
		{
			var console = (window.Eval("console.log") as ScriptObject);
			if (console != null)
			{
				DateTime dateTime = DateTime.Now;
				string output;
				if (exception == null)
					output = string.Format("{0} [{1}] '{2}' {3}", dateTime.ToString("u"), loggerLevel, loggerName, message).SanitizeForBrowser();
				else
					output = string.Format("{0} [{1}] '{2}' {3}:\n{4}\n{5}", dateTime.ToString("u"), loggerLevel, loggerName, exception.GetType().FullName,
					                       exception.Message, exception.StackTrace).SanitizeForBrowser();

				console.InvokeSelf(output);
			}
		}
	}
}

The Logger implementation is simple too:

public class Logger : LevelFilteredLogger
{
	public Logger()
	{
	}

	public Logger(string name)
		: base(name)
	{
	}

	public Logger(LoggerLevel loggerLevel)
		: base(loggerLevel)
	{
	}

	public Logger(string loggerName, LoggerLevel loggerLevel)
		: base(loggerName, loggerLevel)
	{
	}

	public Logger(LoggerLevel loggerLevel, IList<IAppender> appenders)
		: base(loggerLevel)
	{
		_appenders = appenders;
	}

	public Logger(string loggerName, LoggerLevel loggerLevel, IList<IAppender> appenders)
		: base(loggerName, loggerLevel)
	{
		_appenders = appenders;
	}

	public override ILogger CreateChildLogger(string loggerName)
	{
		if (loggerName == null)
			throw new ArgumentNullException("loggerName", "To create a child logger you must supply a non null name");

		return new Logger(String.Format(CultureInfo.CurrentCulture, "{0}.{1}", Name, loggerName), Level, Appenders);
	}

	private readonly IList<IAppender> _appenders = new List<IAppender> { new BrowserConsoleAppender() };
	public IList<IAppender> Appenders
	{
		get { return _appenders; }
	}

	protected override void Log(LoggerLevel loggerLevel, string loggerName, string message, Exception exception)
	{
		foreach (var appender in Appenders)
			appender.Log(loggerLevel, loggerName, message, exception);
	}
}

It derives directly from the basic Castle implementation, which gives us some logging methods for free (we just have to override the Log() function); we have a bunch of constructors that allow you to configure the logger, and the most complete one accepts a threshold level, a logger name and a list of appenders.
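The threshold filtering we inherit can be sketched in isolation. Note that this mimics Castle’s LoggerLevel ordering as I remember it (Off < Fatal < Error < Warn < Info < Debug), so treat the enum here as illustrative rather than the real Castle type:

```csharp
using System;

// illustrative copy of the level ordering (not the real Castle enum)
public enum DemoLoggerLevel { Off, Fatal, Error, Warn, Info, Debug }

public static class LevelFilterDemo
{
	// a message is emitted when its level is at or below the configured threshold
	public static bool ShouldLog(DemoLoggerLevel threshold, DemoLoggerLevel messageLevel)
	{
		return threshold != DemoLoggerLevel.Off && messageLevel <= threshold;
	}

	public static void Main()
	{
		var threshold = DemoLoggerLevel.Warn;
		Console.WriteLine(ShouldLog(threshold, DemoLoggerLevel.Error)); // errors pass a Warn threshold
		Console.WriteLine(ShouldLog(threshold, DemoLoggerLevel.Debug)); // debug chatter is filtered out
	}
}
```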

Basically this is all you need to log something in Silverlight; to use this logging framework with Castle you can configure it like this:

Container = new WindsorContainer();
Container.Register(
// register one or more appenders
Component.For<IAppender>().ImplementedBy<BrowserConsoleAppender>().Named("Default"),
// register and configure the loggers
Component.For<ILogger>().ImplementedBy<Logger>().DynamicParameters((k, d) =>
	{
		d["loggerLevel"] = LoggerLevel.Debug;
		// don't forget to pass the resolved appenders to the constructor
		d["appenders"] = k.ResolveAll<IAppender>();
	})
);

And then use it with normal resolution or dependency injection.

To simplify the configuration stage I built a LoggingFacility that, used without any parameters, configures a single unnamed Logger with the default console appender (if you register more appenders they will automatically be used the first time you resolve the Logger service); alternatively, you can pre-configure the Logger services you want by passing an array of LoggerConfig objects to the facility.

Here’s the implementation code:

public class LoggingFacility : AbstractFacility
{
	public LoggingFacility()
	{ }

	public LoggingFacility(LoggerConfig config)
	{
		_configuredLoggers = new[] { config };
	}

	public LoggingFacility(LoggerConfig[] configs)
	{
		_configuredLoggers = configs;
	}

	private const LoggerLevel DefaultLoggerLevel = LoggerLevel.Warn;
	private readonly LoggerConfig[] _configuredLoggers;

	protected override void Init()
	{
		// if we do not have any explicit configuration, we register a default logger
		if ((_configuredLoggers == null) || (_configuredLoggers.Length == 0))
			Kernel.Register(
				Component.For<ILogger>().ImplementedBy<Logger>()
					.DynamicParameters((k, d) =>
					                   	{
					                   		d["loggerLevel"] = DefaultLoggerLevel;
					                   		IAppender[] appenders = k.ResolveAll<IAppender>();
					                   		// if we do not have registered any appender we provide a default console one
					                   		if (appenders.Length == 0)
					                   			appenders = new IAppender[] { new BrowserConsoleAppender() };
					                   		d["appenders"] = appenders;
					                   	})
				);
		else
		{
			// we need to register more than one logger
			foreach (var loggerConfig in _configuredLoggers)
			{
				LoggerConfig config = loggerConfig;
				Kernel.Register(Component.For<ILogger>()
				                	.ImplementedBy<Logger>()
				                	.DynamicParameters((k, d) =>
				                	                   	{
				                	                   		if (!string.IsNullOrEmpty(config.Name))
				                	                   			d["loggerName"] = config.Name;
				                	                   		d["loggerLevel"] = config.Level;
				                	                   		IAppender[] appenders = null;
				                	                   		/* if we have appenders defined..resolve them */
				                	                   		if ((config.AppendersNames != null) && (config.AppendersNames.Length > 0))
				                	                   		{
				                	                   			List<IAppender> aps = new List<IAppender>();
				                	                   			for (int i = 0; i < config.AppendersNames.Length; i++)
				                	                   				aps.Add(k.Resolve<IAppender>(config.AppendersNames[i]));
				                	                   			appenders = aps.ToArray();
				                	                   		}
				                	                   		/* if not..resolve all the available */
				                	                   		if ((appenders == null) || (appenders.Length == 0))
				                	                   			appenders = k.ResolveAll<IAppender>();
				                	                   		/* if we do not have registered any appender we provide a default console one */
				                	                   		if (appenders.Length == 0)
				                	                   			appenders = new IAppender[] { new BrowserConsoleAppender() };
				                	                   		d["appenders"] = appenders;
				                	                   	}));
			}
		}
	}
}

To use the facility:

Container = new WindsorContainer();

// simple configuration: just set the facility (this configures a default logger)
Container.AddFacility<LoggingFacility>();

// advanced configuration: specify some options for the default logger
Container.Kernel.AddFacility("LoggingFacility", new LoggingFacility(new LoggerConfig { Level = LoggerLevel.Debug }));

// also if you specify some more appenders they will be used too
Container.Register(
	// register one or more appenders
	Component.For<IAppender>().ImplementedBy<BrowserConsoleAppender>().Named("Appender1"),
	Component.For<IAppender>().ImplementedBy<BrowserConsoleAppender>().Named("Appender2"),
	Component.For<IAppender>().ImplementedBy<BrowserConsoleAppender>().Named("Appender3")
	);

As usual, it started as a short post and became quite long in the end...let’s see an example of usage. In a Silverlight application you can modify a Page to accept a constructor dependency on the logger (this isn’t a good design solution when it comes to logging services, but this is just an operational demo, so it’s ok):

public partial class MainPage : UserControl
{
	public MainPage(ILogger logger)
	{
		InitializeComponent();
		_logger = logger;

		_logger.Info("MainPage created");
	}

	private ILogger _logger;

	private void button1_Click(object sender, RoutedEventArgs e)
	{
		_logger.Info("Second action taken");
	}

	private void button2_Click(object sender, RoutedEventArgs e)
	{
		_logger.Info("First action taken");
	}
}

You then need to rely on the container to configure the actual instance of the class:

// register the type in the container, so we can resolve it
Container.Register(
	Component.For<MainPage>()
	);
	
// later on: ask the container to build an instance of the page
private void Application_Startup(object sender, StartupEventArgs e)
{
	InitContainer();
	Container.Resolve<ILogger>().Info("Application started");
	this.RootVisual = Container.Resolve<MainPage>();
}

To see the logging service in action I created a simple Silverlight project which configures and uses it; the solution is attached at the end of this article. Here’s a screenshot of the IE8 Developer Toolbar script console window that shows how the actions are logged.

(screenshot: SilverlightLogging)

You can take this code and extend it with any other logging capabilities you like just by implementing your own Appender classes; I have one, for example, that calls a WCF service and logs the exception using Elmah. Further extensions to this system will include XML configuration support for the LoggingFacility and the introduction of Exception and Message formatters.

Here is the complete solution project:



Silverlight, Castle windsor, Logging

Dexter has been updated to Asp.NET MVC2

Due to the recent release of Asp.NET MVC2 we've decided to port Dexter to the new engine; Ugo spent a couple of days reorganizing the repository and doing the porting (you can read more about it on his Italian blog: http://www.tostring.it).

We've just fixed some of the bugs that arose after the porting...one of them will require some further investigation and will be the subject of a future post :)

If you're interested in taking a look at the project, go to our CodePlex page (http://dexterblogengine.codeplex.com/); I recommend performing a fresh checkout to a new directory because many things were relocated.

We've switched the development environment to Visual Studio 2010; if you have trouble installing Asp.NET MVC2 on your development machine, you can follow the guidelines given in this post: Installing ASP.NET MVC 2 RTM on Visual Studio 2010 RC.

To see the new version of Dexter in action you don't have to go far: this blog (and Ugo's blog too) now runs on the current trunk version, which supports MVC2!



Asp.net mvc 2, Dexter

Castle NHibernate Integration Facility: how to get the configuration object

This is a simple tip. I've used the Castle Windsor NHibernate Integration Facility to build the data access services in one of the projects I'm working on, and I also wanted to use the NHibernate SchemaExport utilities to build the database from scratch; this can be very useful when using SQLite in a testing environment.

Since the facility is configured with XML files, I had the problem of how to access the NHibernate.Cfg.Configuration object. Having a look at the source code of the Integration Facility, it turns out that the Configuration objects are registered in the container too, along with the factories (one configuration and one factory for each element declared in the configuration file); an alias is assigned to each configuration object in the form {factory_id}.cfg.

So if you have the following settings:

<facility id="nhibernatefacility" isWeb="true"
          type="Castle.Facilities.NHibernateIntegration.NHibernateFacility, Castle.Facilities.NHibernateIntegration">
	<factory id="MainDatabase" alias="nh.facility.default">
		<settings>
			<item key="connection.provider">NHibernate.Connection.DriverConnectionProvider</item>
			...
		</settings>
		<assemblies>
			<assembly>YourAssembly</assembly>
		</assemblies>
	</factory>
</facility>

You can access and use the configuration objects like this:

[Test, Explicit]
public void CreateDatabaseSchema()
{
	NHibernate.Cfg.Configuration cfg = Container.Resolve<NHibernate.Cfg.Configuration>("MainDatabase.cfg");
	SchemaExport export = new SchemaExport(cfg);
	export.Execute(false, true, false);
}


Castle windsor, Configuration, Facility, Nhibernate

Css and JavaScript file minification

Minifying your custom JavaScript and CSS files is usually a good practice when deploying your website to a production environment, but it also makes 'on the fly', 'live' modifications to those files nearly impossible due to the very compact form they end up in.

A common technique for ASP.NET web sites is to develop an HttpHandler that handles the minification task on the fly.

A very simple approach is to minify on a file-per-file basis: every request for a .js or .css file is mapped to one of our handlers; the file is read, compressed (using an appropriate library), stored in the cache (for any subsequent request) and streamed back to the client.

To perform the minification I decided to use the Yahoo! UI Library: YUI Compressor for .Net, but you can change it to whatever compressor you like.

Here’s the code for the Css minification handler:

using System;
using System.IO;
using System.Web;
using Yahoo.Yui.Compressor;

public class CssYuiCompressorHandler : IHttpHandler
{
    // sliding cache expiration, in hours
    private const int DefaultCacheDuration = 30;

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/css";
        string filePath = context.Request.Url.AbsolutePath;
        filePath = context.Server.MapPath(filePath);
        if (File.Exists(filePath))
        {
            // if the file is already minified (we use the .min.css naming convention)
            // read it as it is and deliver it to the client
            if (filePath.EndsWith(".min.css"))
                context.Response.WriteFile(filePath);
            else
                CompressCssAndWriteToResponseStream(context, filePath);
        }
        else
            context.Response.StatusCode = 404;

        context.Response.Flush();
        context.Response.End();
    }

    static readonly object FileLock = new object();

    private static void CompressCssAndWriteToResponseStream(HttpContext context, string filePath)
    {
        string requestHash = context.Request.Url.AbsolutePath;
        if (context.Cache[requestHash] != null)
        {
            context.Response.Write((string)context.Cache[requestHash]);
            return;
        }
        lock (FileLock)
        {
            // another request may have cached the file while we waited for the lock
            if (context.Cache[requestHash] != null)
            {
                context.Response.Write((string)context.Cache[requestHash]);
                return;
            }
            using (StreamReader sr = new StreamReader(filePath, true))
            {
                string compressed = CssCompressor.Compress(sr.ReadToEnd());
                context.Response.Write(compressed);

                context.Cache.Add(requestHash,
                                  compressed,
                                  null,
                                  System.Web.Caching.Cache.NoAbsoluteExpiration,
                                  new TimeSpan(DefaultCacheDuration, 0, 0),
                                  System.Web.Caching.CacheItemPriority.Normal,
                                  null);
            }
        }
    }
}

The JavaScript minification handler is quite similar:

public class JsYuiCompressorHandler : IHttpHandler
{
    // sliding cache expiration, in hours
    private const int DefaultCacheDuration = 30;

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/x-javascript";
        string filePath = context.Request.Url.AbsolutePath;
        filePath = context.Server.MapPath(filePath);
        if (File.Exists(filePath))
        {
            // if the file is already minified (we use the .min.js naming convention)
            // read it as it is and deliver it to the client
            if (filePath.EndsWith(".min.js"))
                context.Response.WriteFile(filePath);
            else
                CompressJsAndWriteToResponseStream(context, filePath);
        }
        else
            context.Response.StatusCode = 404;

        context.Response.Flush();
        context.Response.End();
    }

    static readonly object FileLock = new object();

    private static void CompressJsAndWriteToResponseStream(HttpContext context, string filePath)
    {
        string requestHash = context.Request.Url.AbsolutePath;
        if (context.Cache[requestHash] != null)
        {
            context.Response.Write((string)context.Cache[requestHash]);
            return;
        }
        lock (FileLock)
        {
            // another request may have cached the file while we waited for the lock
            if (context.Cache[requestHash] != null)
            {
                context.Response.Write((string)context.Cache[requestHash]);
                return;
            }
            using (StreamReader sr = new StreamReader(filePath, true))
            {
                string compressed = JavaScriptCompressor.Compress(sr.ReadToEnd());
                context.Response.Write(compressed);

                context.Cache.Add(requestHash,
                                  compressed,
                                  null,
                                  System.Web.Caching.Cache.NoAbsoluteExpiration,
                                  new TimeSpan(DefaultCacheDuration, 0, 0),
                                  System.Web.Caching.CacheItemPriority.Normal,
                                  null);
            }
        }
    }
}

To activate these two handlers you have to modify the web.config file:

		<httpHandlers>
			...
			<add verb="*" path="*.css" type="Dexter.Web.HttpHandlers.CssYuiCompressorHandler, Dexter.Web, Version=1.0.0.0, Culture=neutral"/>
			<add verb="*" path="*.js" type="Dexter.Web.HttpHandlers.JsYuiCompressorHandler, Dexter.Web, Version=1.0.0.0, Culture=neutral"/>
		</httpHandlers>

This approach allows you to deploy the files as they are and the minification is performed on the server during the first request.

I tested this technique in Dexter (it's actually running on this blog) and I noticed a good reduction in the size of my custom .css and .js files:

(screenshots: transfer sizes before and after minification)

for example:

  • Site.css went from 36.80 KB to 27.53 KB
  • shCore.js went from 19.22 KB to 18.17 KB
  • customFunction.js went from 5.44 KB to 3.77 KB

The total reduction in size was something like 32 KB.

In a more advanced solution you could look for a way not only to minify the single files, but also to merge them into a single file, in order to minimize the number of requests made to the server.

Update: fixed a bug in the cache usage.



Asp net, Css, Javascript, Minification

PrimordialCode is now powered by the open source Dexter Blog Engine

It's time for another new beginning.

Over a month ago I joined the Dexter developers team, because I felt the project was a good one and I liked the idea of participating in developing something I could also use.

When I entered the project it was missing some features I considered vital before I could switch over:

  • The ability to import all my previous data from Wordpress.
  • The support for multiple categories for every post.
  • A better integration with Windows Live Writer (the primary tool I've always used to write my posts) to support hierarchical categories, tags and slugs.

I've worked on all of these in my spare time during the last month, and now that they are all implemented (there's still some work to do to improve the import section and integrate it into the blog engine instead of using an external tool), I see no reason not to switch over.

So let's say good-bye to the Wordpress version of the blog that accompanied me during these almost two years of blogging, and welcome the new PrimordialCode powered by Dexter (the current skin is kindly stolen from Ugo's blog).

There's still room for improvement and we have a lot of new features on the horizon to implement.

If you are curious about Dexter, go check our feature list and download the source code from our project page on CodePlex: http://dexterblogengine.codeplex.com/

A big thank-you to all the guys on the team for making this possible.



Dexter

JQuery, WCF and the JSON DateTime serialization

A few days ago I blogged about how to call a WCF service from a jQuery application to retrieve and send data to the server, building a small interactive chat application. Everything worked fine until it came to formatting the DateTime data passed from the server to the client.

We ended up having a call like this:

var msg3 = { "msg": { "Id": "2", "Sender": "Webpage", "Text": "Sended Text" }, "id": "1" };
 
$(document).ready(function() {
    $.ajax({
        type: "POST",
        url: serviceUrl + "TestMessageModifyWithSuppliedContent",
        // data: "{ \"msg\" : " + JSON.stringify(msg) + ", \"id\" : \"4\"}",
        data: JSON.stringify(msg3),
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function(data) {
            //debugger;
            var text = data.d.Timestamp + " - " + data.d.Sender + " - " + data.d.Text;
            alert(text);
        },
        error: function(XMLHttpRequest, textStatus, errorThrown) {
            debugger;
            alert("Error Occured!");
        }
    });
});

The output of this code is a message box with the following text:

"/Date(1267695086938+0100)/ - Webpage - 3 - Second parameter passed: 1"

Looking at the request and the response in Fiddler we have:

POST http://localhost.:58817/Services/ChatService.svc/TestMessageModifyWithSuppliedContent HTTP/1.1
...Request plumbing goes here...
Pragma: no-cache
 
{"msg":{"Id":"2","Sender":"Webpage","Text":"Sended Text"},"id":"1"}
HTTP/1.1 200 OK
...Response plumbing goes here...
Content-Type: application/json; charset=utf-8
Content-Length: 160
Connection: Close
 
{"d":{"__type":"Message:#LiveAssistance.Entities","Id":2,"Sender":"Webpage","Text":"3 - Second parameter passed: 1","Timestamp":"\/Date(1267694873788+0100)\/"}}

I was surprised at first, but a quick search showed that this is simply the way WCF serializes DateTime objects to JSON. In short, the first number represents the number of milliseconds since midnight, January 1, 1970, in the GMT time zone, regular (non-daylight saving) time; the number may be negative to represent earlier times. The second part is the time zone offset (more info here: Stand-Alone JSON Serialization).
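To make the format concrete, here is a small helper (the function name is mine, not part of WCF or jQuery) that extracts the milliseconds from such a string and builds a Date; note that the "+0100" suffix records the server's time zone but does not change the instant in time:

```javascript
// Parse a WCF-style JSON date string like "/Date(1267695086938+0100)/".
// The captured digits are milliseconds since 1970-01-01 00:00 UTC; the
// optional +hhmm/-hhmm suffix is informational only, so we can ignore it.
function parseWcfDate(value) {
    var match = /\/Date\((-?\d+)(?:[+-]\d{4})?\)\//.exec(value);
    if (!match) {
        return null; // not a WCF date string
    }
    return new Date(parseInt(match[1], 10));
}

var d = parseWcfDate("/Date(1267695086938+0100)/");
// d.getTime() returns exactly 1267695086938
```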

So we have to do something to convert this string representation into a JavaScript Date() object. To do so we can use the ‘dataFilter’ feature of the jQuery ajax() call, which allows us to modify the returned JSON string representation before it is passed on to the parser.

We basically want to change the string “/Date(1267695086938+0100)/” to
“new Date(1267695086938+0100)”; to do so we can rewrite our ajax call like this:

   1: $(document).ready(function() {
   2:         $.ajax({
   3:             type: "POST",
   4:             url: serviceUrl + "TestMessageModifyWithSuppliedContent",
   5:             // data: "{ \"msg\" : " + JSON.stringify(msg) + ", \"id\" : \"4\"}",
   6:             data: JSON.stringify(msg3),
   7:             contentType: "application/json; charset=utf-8",
   8:             dataType: "json",
   9:             dataFilter: function(data, type) {
  10:                 var d = data.replace(/"\\\/(Date\(.*?\))\\\/"/gi, 'new $1');
  11:                 return d;
  12:             },
  13:             success: function(data) {
  14:                 //debugger;
  15:                 var text = data.d.Timestamp.format("yyyy/mm/dd - HH:MM:ss") + " - " + data.d.Sender + " - " + data.d.Text;
  16:                 alert(text);
  17:             },
  18:             error: function(XMLHttpRequest, textStatus, errorThrown) {
  19:                 debugger;
  20:                 alert("Error Occured!");
  21:             }
  22:         });
  23:     });

Lines 9 - 12 show our filtering function with the regex we use to convert the returned JSON string representation as follows:

FROM:
 
"{"d":{"__type":"Message:#Entities","Id":2,"Sender":"Webpage","Text":"3 - Second parameter passed: 1","Timestamp":"\/Date(1267696301237+0100)\/"}}"
 
TO:
 
"{"d":{"__type":"Message:#Entities","Id":2,"Sender":"Webpage","Text":"3 - Second parameter passed: 1","Timestamp":new Date(1267696301237+0100)}}"

Everything is now working and the Timestamp field contains a Date object...if you use jQuery 1.3.x...

If (like me) you use jQuery 1.4.x you will get an error from the JSON serializer with an ‘invalid JSON format’ message...and guess what: it's right, because “new Date(something)” isn't a valid JSON representation...so why did it all work before?

It turned out that jQuery 1.3.x used the JavaScript eval() function to deserialize objects internally (so the method above was really just a trick), while jQuery 1.4.x relies on the browser's capabilities to deserialize JSON streams (in particular it uses the window.JSON object if the browser supports it).

It does it this way mainly for performance and security reasons. So we now have two ways to get a Date object from our string representation:

1- process each deserialized object with a function that (using the regex shown earlier, or by parsing the value directly) converts each date field to the corresponding Date object.

2- change the way the data is parsed client-side and do our own JSON deserialization using eval(); to do that we need to change the ‘dataType’ from ‘json’ to ‘text’ - this disables the automatic deserialization - and then call the eval() function on the returned (and modified) data stream:

   1: $(document).ready(function() {
   2:         $.ajax({
   3:             type: "POST",
   4:             url: serviceUrl + "TestMessageModifyWithSuppliedContent",
   5:             // data: "{ \"msg\" : " + JSON.stringify(msg) + ", \"id\" : \"4\"}",
   6:             data: JSON.stringify(msg3),
   7:             contentType: "application/json; charset=utf-8",
   8:             dataType: "text",
   9:             dataFilter: function(data, type) {
  10:                 var d = data.replace(/"\\\/(Date\(.*?\))\\\/"/gi, 'new $1');
  11:                 return d;
  12:             },
  13:             success: function(data) {
  14:                 //debugger;
  15:                 data = eval('(' + data + ')');
  16:                 var text = data.d.Timestamp.format("yyyy/mm/dd - HH:MM:ss") + " - " + data.d.Sender + " - " + data.d.Text;
  17:                 alert(text);
  18:             },
  19:             error: function(XMLHttpRequest, textStatus, errorThrown) {
  20:                 debugger;
  21:                 alert("Error Occured!");
  22:             }
  23:         });
  24:     });

Lines 8, 15 and 16 show the modifications we made. As you can see in line 16, you can now use the Timestamp field as a Date object and format it using any JavaScript date formatting library you like.
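Option 1 was only described above, so here is a minimal sketch of such a converter (the function name is mine; instead of regex + eval() it parses the milliseconds directly, which avoids executing strings):

```javascript
// Matches a whole WCF date string value, capturing the milliseconds.
var WCF_DATE = /^\/Date\((-?\d+)(?:[+-]\d{4})?\)\/$/;

// Walk a deserialized object graph and replace every "/Date(...)/"
// string field with a real Date built from the captured milliseconds.
function convertDates(obj) {
    for (var key in obj) {
        if (!obj.hasOwnProperty(key)) continue;
        var value = obj[key];
        if (typeof value === "string") {
            var match = WCF_DATE.exec(value);
            if (match) obj[key] = new Date(parseInt(match[1], 10));
        } else if (value && typeof value === "object") {
            convertDates(value); // recurse into nested objects and arrays
        }
    }
    return obj;
}
```

You would call convertDates(data) at the top of the success handler and then use data.d.Timestamp as a regular Date.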

As a note: with this second method you obviously lose performance and security, because eval() is slower than the native JSON deserialization and, even worse, you become subject to code injection attacks.
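As an aside, an option not covered in this post: if the browser exposes a native JSON object you can keep the safe parsing path and still get Date objects, by doing the deserialization yourself with JSON.parse and a reviver function (the helper name is mine):

```javascript
// Deserialize a JSON string, reviving WCF "/Date(ms+zzzz)/" values
// into Date objects; no eval(), so no code injection risk.
function parseWithDates(json) {
    return JSON.parse(json, function (key, value) {
        var match = typeof value === "string" &&
            /^\/Date\((-?\d+)(?:[+-]\d{4})?\)\/$/.exec(value);
        return match ? new Date(parseInt(match[1], 10)) : value;
    });
}
```

With ‘dataType’ set to ‘text’ you would call parseWithDates(data) in the success handler instead of eval().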



Datetime, Jquery, Json, Wcf
