Saturday, November 28, 2009

Setting Assembly Version with Windows PowerShell

I've been using the Build Version Increment add-in for Visual Studio to automatically set the assembly and file versions. It works fine, but it only works from within the Visual Studio IDE and it requires you to set up every single project in your solution. If you need to increment the assembly version on an automated build (MSBuild, NAnt, psake), then a PowerShell script is a better solution.

The following script, SetVersion.ps1, searches for AssemblyInfo.cs files in the current directory and its subdirectories, and then updates the AssemblyVersion and AssemblyFileVersion attributes. You can optionally provide the version number to use, or it will auto-generate one for you. The default schema is 1.0.x.0 where x is (year - 2000) * 1000 + day of year; for example, November 28, 2009 produces 1.0.9332.0. You can customize the Generate-VersionNumber function to use your own version schema. Also, if you are using a source control system that requires checking out files before editing them, such as TFS or Perforce, add the check-out command to the Update-AssemblyInfoFiles function.

#-------------------------------------------------------------------------------
# Displays how to use this script.
#-------------------------------------------------------------------------------
function Help {
    "Sets the AssemblyVersion and AssemblyFileVersion of AssemblyInfo.cs files`n"
    ".\SetVersion.ps1 [VersionNumber]`n"
    "   [VersionNumber]     The version number to set, for example: 1.1.9301.0"
    "                       If not provided, a version number will be generated.`n"
}

#-------------------------------------------------------------------------------
# Generate a version number.
# Note: customize this function to generate it using your version schema.
#-------------------------------------------------------------------------------
function Generate-VersionNumber {
    $today = Get-Date
    return "1.0." + ( ($today.year - 2000) * 1000 + $today.DayOfYear )+ ".0"
}
 
#-------------------------------------------------------------------------------
# Update version numbers of AssemblyInfo.cs
#-------------------------------------------------------------------------------
function Update-AssemblyInfoFiles ([string] $version) {
    $assemblyVersionPattern = 'AssemblyVersion\("[0-9]+(\.([0-9]+|\*)){1,3}"\)'
    $fileVersionPattern = 'AssemblyFileVersion\("[0-9]+(\.([0-9]+|\*)){1,3}"\)'
    $assemblyVersion = 'AssemblyVersion("' + $version + '")';
    $fileVersion = 'AssemblyFileVersion("' + $version + '")';
    
    Get-ChildItem -r -filter AssemblyInfo.cs | ForEach-Object {
        $filename = $_.Directory.ToString() + '\' + $_.Name
        $filename + ' -> ' + $version
        
        # If you are using a source control that requires to check-out files before 
        # modifying them, make sure to check-out the file here.
        # For example, TFS will require the following command:
        # tf checkout $filename
    
        (Get-Content $filename) | ForEach-Object {
            $_ -replace $assemblyVersionPattern, $assemblyVersion -replace $fileVersionPattern, $fileVersion
        } | Set-Content $filename
    }
}

#-------------------------------------------------------------------------------
# Parse arguments.
#-------------------------------------------------------------------------------
if ($args.Count -gt 0) {
    $version = $args[0]
    if (($version -eq '/?') -or ($version -notmatch "^[0-9]+(\.([0-9]+|\*)){1,3}$")) {
        Help
        return
    }
} else {
    $version = Generate-VersionNumber
}

Update-AssemblyInfoFiles $version


And finally, before running this script, or any PowerShell script, make sure that you are allowed to execute scripts by running Get-ExecutionPolicy. If it returns Restricted, you need to run the following command:

Set-ExecutionPolicy RemoteSigned

Download the entire script from here.

Enjoy it!

Tuesday, November 17, 2009

The Channel 9 Learning Center

If you want to learn the latest Microsoft technologies such as Visual Studio 2010 and .NET 4.0, Windows Azure, and SharePoint 2010, then check out the training courses available at the Channel 9 Learning Center:

http://channel9.msdn.com/learn/

Sunday, June 28, 2009

Unit Testing CRM Plug-ins

What is a CRM plug-in?

A plug-in is custom business logic that you can integrate with Microsoft Dynamics CRM 4.0 to modify or augment the standard behavior of the platform. This custom business logic is executed based on a message pipeline execution model called the Event Execution Pipeline. A plug-in can be executed before or after an MS CRM platform event. For example, you can create a plug-in to validate the attributes of an account entity before the create and update operations.

To create plug-ins, you need to create a normal .NET class library and reference the MS CRM SDK libraries. Then add a class that implements the Microsoft.Crm.Sdk.IPlugin interface.
public interface IPlugin
{
    void Execute(IPluginExecutionContext context);
}

Plug-in Unit Testing

In order to write unit tests for your plug-in, you need to create at least a mock of the IPluginExecutionContext interface. Depending on your plug-in implementation, you may also need to mock ICrmService or IMetadataService if you are calling IPluginExecutionContext.CreateCrmService or IPluginExecutionContext.CreateMetadataService.

There is also the MS CRM Plug-in Debugger, a small EXE container that implements a mock of the IPluginExecutionContext interface. You could use this container to unit test your plug-ins. However, IMHO, I do not see any advantage in using it over a unit test framework and a mock framework. I posted a comment on the CRM Team Blog post Testing CRM Plug-in asking about that, but haven't received a response yet.

To unit test a CRM plug-in, you can use your favorite unit test framework (NUnit, MbUnit, Visual Studio Tests) and your favorite mock framework (Rhino Mocks, NMock, Typemock). In this article, I will be using NUnit and Rhino Mocks.

The Plug-in Code

In the following example, adapted from the "Programming Microsoft Dynamics CRM 4.0" book, the plug-in validates the account number attribute before saving the account entity.
public class AccountNumberValidator : IPlugin
{
    public void Execute(IPluginExecutionContext context)
    {
        var target = (DynamicEntity) context.InputParameters[ParameterName.Target];

        if (target.Properties.Contains("accountnumber"))
        {
            var accountNumber = target["accountnumber"].ToString();
            var regex = new Regex("[A-Z]{2}-[0-9]{6}");

            if (!regex.IsMatch(accountNumber))
            {
                throw new InvalidPluginExecutionException("Invalid account number.");
            }
        }
    }
}

The code above checks whether the account number attribute is in the right format. If not, it throws an InvalidPluginExecutionException. Since we will register this plug-in as a pre-event of the create and update operations on the account entity, this exception will be handled by the CRM platform, and the create/update operation will be aborted.

Writing the Plug-in Unit Test

The following code is a simple NUnit test that verifies that an InvalidPluginExecutionException is thrown when the account entity has an invalid account number:

[Test]
[ExpectedException(typeof(InvalidPluginExecutionException))]
public void ShouldHandleInvalidAccountNumber([Values("",
                                                    "AB123456",
                                                    "A123456",
                                                    "ABC123456",
                                                    "AB-12345",
                                                    "AB123456",
                                                    "AB-123",
                                                    "AB-1234",
                                                    "aa-012345",
                                                    "aa-000000",
                                                    "Za-999999",
                                                    "wW-936187")]
                                                    string number)
{
    // Create necessary mocks for the plug-in.
    var mocks = new MockRepository();
    var context = mocks.DynamicMock<IPluginExecutionContext>();

    // Creates a property bag for the plugin execution context mock.
    var target = new DynamicEntity();
    target.Properties["accountnumber"] = number;
    var inputParameters = new PropertyBag();
    inputParameters.Properties[ParameterName.Target] = target;

    // Set expectations of mocks.
    Expect.Call(context.InputParameters).Return(inputParameters).Repeat.Any();
    mocks.ReplayAll();

    // Test the plug-in using the context mock.
    IPlugin plugin = new AccountNumberValidator();
    plugin.Execute(context);

    // Verify all the mocks.
    mocks.VerifyAll();
}

Now, we will go through all the details of this unit test:
  • The ExpectedException attribute defines the type of exception that this test expects to be raised. In our case, it is an InvalidPluginExecutionException.
  • This is a parameterized test that uses the Values attribute to define a set of invalid account numbers. The test will run once for each value we define. The Values attribute is specific to NUnit, but other frameworks have similar mechanisms; MbUnit uses RowTest, for example.
  • We create a mock of the IPluginExecutionContext interface by using the MockRepository.DynamicMock method. We use a DynamicMock because we are only interested in a small piece of the functionality (the InputParameters property of the context object). If we wanted complete control over the mock object's behavior, we would use a StrictMock instead. For more information about the types of mocks that you can create with Rhino Mocks, see here.
  • The InputParameters property of the plug-in context is a property bag that contains the account number attribute. So, we create this property bag and add the account number supplied by the Values attribute parameter.
  • Next, we set the expectations of the mock object. This step is called the Record state. When the InputParameters property is called, we expect it to return the property bag we created in the previous step. Note that we use Repeat.Any(), which means this property can be called any number of times. In our test, we just want to make sure that InputParameters is called, no matter how many times.
  • The Record state is finished by calling ReplayAll(), which moves the mocks to the Replay state.
  • Now, we are ready to instantiate our plug-in object and call its Execute method using the plug-in context mock object.
  • Finally, we call the VerifyAll() method to verify that the mock expectations were satisfied. In our case, it makes sure that the InputParameters property was called during the Replay state.
This test asserts that the plug-in's Execute method throws an InvalidPluginExecutionException for each of the invalid account number values supplied.

We should also write a test to assert that no InvalidPluginExecutionException is thrown when using valid account numbers. I will not include the test from the solution here, but you can see it in the solution source code files.
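As a rough sketch only (this is not the test from the solution, and the sample values are invented), such a test could look like the following, reusing the same mocking setup as the invalid-number test and targeting the format-only version of the plug-in shown above:

[Test]
public void ShouldAcceptValidAccountNumber([Values("AB-123456", "ZZ-000001")] string number)
{
    // Create the plug-in execution context mock.
    var mocks = new MockRepository();
    var context = mocks.DynamicMock<IPluginExecutionContext>();

    // Create a property bag containing a well-formed account number.
    var target = new DynamicEntity();
    target.Properties["accountnumber"] = number;
    var inputParameters = new PropertyBag();
    inputParameters.Properties[ParameterName.Target] = target;

    // Record: the context returns the property bag any number of times.
    Expect.Call(context.InputParameters).Return(inputParameters).Repeat.Any();
    mocks.ReplayAll();

    // Execute should complete without throwing InvalidPluginExecutionException.
    IPlugin plugin = new AccountNumberValidator();
    plugin.Execute(context);

    mocks.VerifyAll();
}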

Mocking the ICrmService Interface

In our previous test, we only needed to mock the plug-in context interface. However, in more complex plug-ins, you might need to mock other interfaces, such as ICrmService. The CreateCrmService method of IPluginExecutionContext creates an ICrmService object, so if you use CreateCrmService in your plug-in, you will need to create a mock of ICrmService.

Our account number validator plug-in has been changed to also detect duplicate account numbers. If an account number already exists, the validation fails by throwing an InvalidPluginExecutionException. To verify whether the account number exists, we query CRM using the ICrmService.Fetch method with a FetchXML query. The following code demonstrates these changes:

public class AccountNumberValidator : IPlugin
{
    /// <summary>
    /// Validates the account number format and uniqueness before the entity is saved.
    /// </summary>
    /// <param name="context">The plug-in execution context provided by the CRM platform.</param>
    public void Execute(IPluginExecutionContext context)
    {
        var target = (DynamicEntity) context.InputParameters[ParameterName.Target];

        if (target.Properties.Contains("accountnumber"))
        {
            var accountNumber = target["accountnumber"].ToString();

            // Validates the account number format.
            var regex = new Regex("[A-Z]{2}-[0-9]{6}");
            if (!regex.IsMatch(accountNumber))
            {
                throw new InvalidPluginExecutionException("Invalid account number.");
            }

            // Validates that the account number is unique.
            using (var service = context.CreateCrmService(true))
            {
                var query = string.Format(@"<fetch mapping='logical'>
                                                <entity name='account'>
                                                    <attribute name='accountnumber' />
                                                    <filter>
                                                        <condition attribute='accountnumber'
                                                            operator='eq' value='{0}' />
                                                    </filter>
                                                </entity>
                                            </fetch>",
                                            accountNumber);

                var results = service.Fetch(query);
                var xdocument = XDocument.Parse(results);
                var existingNumbers = from item in xdocument.Descendants("accountnumber")
                                      select item.Value;

                if (existingNumbers.Count() > 0)
                    throw new InvalidPluginExecutionException("Account number already exists.");
            }
        }
    }
}

Now, we will create a unit test to verify that our plug-in detects duplicate account numbers.

[Test]
[ExpectedException(typeof(InvalidPluginExecutionException))]
public void ShouldRejectDuplicateAccountNumber()
{
    // Create necessary mocks for the plug-in.
    var mocks = new MockRepository();
    var context = mocks.DynamicMock<IPluginExecutionContext>();
    var service = mocks.DynamicMock<ICrmService>();

    // Creates a property bag for the plugin execution context mock.
    var target = new DynamicEntity();
    target.Properties["accountnumber"] = "AB-123456";
    var inputParameters = new PropertyBag();
    inputParameters.Properties[ParameterName.Target] = target;

    // Set expectations of mocks.
    Expect.Call(context.InputParameters).Return(inputParameters).Repeat.Any();
    Expect.Call(context.CreateCrmService(true)).Return(service);
    Expect.Call(service.Fetch(null)).IgnoreArguments()
                .Return(@"<resultset>
                            <result>
                                <accountnumber>AB-123456</accountnumber>
                            </result>
                        </resultset>");
    mocks.ReplayAll();

    // Test the plug-in using the context mock.
    IPlugin plugin = new AccountNumberValidator();
    plugin.Execute(context);

    // Verify all the mocks.
    mocks.VerifyAll();
}

In the test above, we use Rhino Mocks to create a mock of the ICrmService interface. This object is returned by the CreateCrmService method of the plug-in execution context mock. We also record that when the ICrmService.Fetch method is called, it returns an XML result set containing a duplicate account number. This simulates CRM finding an account number that already exists, and lets us assert that our plug-in fails the validation by throwing an exception.

I hope this post helps you unit test your CRM plug-ins. Although I demonstrated it using NUnit and Rhino Mocks, you can use any unit testing framework (NUnit, MbUnit, Visual Studio Tests, etc.) and any mock framework (Rhino Mocks, NMock, Typemock, etc.).

Monday, June 15, 2009

Using Embedded Files for FetchXML Queries

FetchXML is a proprietary query language used in Microsoft Dynamics CRM. All the examples I've seen so far show the FetchXML query hard-coded into the C# file. Instead of keeping queries mixed with the source code, a bad practice IMHO, I prefer placing them in separate XML files that are embedded as resources in the assembly. Keeping the queries in separate files isolates them from the code, making them easier to locate, share, and test.

To embed a query in the assembly, add an XML file with the query to your project and make sure to change its build action to Embedded Resource. Then use the following code to read the embedded XML file. The code below assumes that the file was placed in the Queries subfolder of the project, and it refers to the embedded file by using the assembly name and the relative path to the file. Notice that it uses "." instead of "\" to refer to the embedded file.

// Read the embedded FetchXML query file.
var assembly = Assembly.GetExecutingAssembly();
var stream = assembly.GetManifestResourceStream("MyAssembly.Queries.MyQuery.xml");

if (stream == null)
{
    throw new FileLoadException("Cannot load FetchXML embedded file");
}

// Get the FetchXML query string.
string query;
using (var reader = new StreamReader(stream))
{
    query = reader.ReadToEnd();
}

// Collapse whitespace runs to reduce the size of the XML request.
query = Regex.Replace(query, @"\s+", " ");

// Fetch the results.
string results = crmService.Fetch(query);

The code above uses a static query. If you need a dynamic query, the embedded XML can contain a composite format string (with placeholders such as {0}), and you can use String.Format to supply the parameters needed to build the query at run time.
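For illustration only, here is a minimal sketch of that idea; the placeholder position and the sample account number are invented, and query is the template string read from the embedded resource above:

// The embedded MyQuery.xml would contain a {0} placeholder, for example:
// <condition attribute='accountnumber' operator='eq' value='{0}' />

// Build the dynamic query and fetch the results.
string fetchXml = string.Format(query, "AB-123456");
string results = crmService.Fetch(fetchXml);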

Wednesday, May 20, 2009

Visual Studio 2010 and .NET 4 Beta 1

Visual Studio 2010 and .NET 4 Beta 1 are available today for the general public. Note that the new .NET version is 4 and not 4.0. You can download the beta from here.

Also, check out Jason Zander's post where he highlights the new functionality, and Brad Abrams' post with the .NET 4 poster.

I am currently preparing a virtual machine with Windows 7 RC 1 and VS 2010 for trying out the new features.

Saturday, April 25, 2009

NVidia driver not working after upgrading to Ubuntu 9.04

Ubuntu 9.04 has been released this week and I upgraded my machines to the latest version (download it here).

If you are using dual boot (GRUB), NVidia drivers, and answered yes to keep your existing version of menu.lst, then you might run into the same problem as me.

After I upgraded to Ubuntu 9.04, I got the following error message when restarting the machine:
Ubuntu is running in low-graphics mode

The following error was encountered. You may need
to update your configuration to solve this.

(EE) NVIDIA(0): Failed to load the NVIDIA kernel module!
(EE) NVIDIA(0): *** Aborting ***
(EE) Screen(s) found, but none have a usable configuration.
Although I have restricted drivers enabled, I tried to activate the NVidia driver under System/Administration/Hardware Drivers and nothing happened.



If you open a terminal window, run the commands below, and then try to activate the driver again, you will be able to see the error messages displayed on the terminal window:
sudo killall jockey-backend
sudo /usr/share/jockey/jockey-backend --debug -l /tmp/jockey.log
In my case, I got the following error message:
FATAL: Module nvidia not found.
The NVidia module cannot be found because I am running an old version of the kernel (remember, I chose to keep my existing menu.lst!), while the NVidia driver is compiled against the latest version.

To check the kernel version I am running, I use the command "uname -r", which returns 2.6.27-11-generic, but Ubuntu 9.04 comes with 2.6.28-11. So, I need to manually update my menu.lst file to be able to boot the latest kernel version.

Warning: only update your menu.lst file if you have done this before. If you are not an experienced Linux user, it is better not to proceed with these changes.


First, run the following command to confirm that you have vmlinuz-2.6.28-11-generic:
ls /boot/*2.6.28*
You should see vmlinuz-2.6.28-11-generic and initrd.img-2.6.28-11-generic. Then, back up and edit your menu.lst file:
sudo cp /boot/grub/menu.lst /boot/grub/menu.lst.bak
sudo gedit /boot/grub/menu.lst
The safest way is to duplicate your first boot menu entry and change only the new entry to use the latest kernel version. In my case, I changed it from 2.6.27-11 to 2.6.28-11, and I also changed the title to 9.04. This way you still have your previous entries in case you have any problems rebooting and need to restore your previous menu.lst from the backup copy (menu.lst.bak).

The next time you reboot your machine, you can select the first boot entry, and then you will see the following:
* Running DKMS auto installation service for kernel
* nvidia (173.14.16)...
And then your Ubuntu will be loaded with the proper video resolution!!

If you have any problems rebooting your machine with the new entry, you can reboot it using an existing entry (or Ubuntu CD) and revert the changes you made by restoring the backup copy: menu.lst.bak (manually created) or menu.lst~ (created by gedit).

Monday, April 20, 2009

LIDNUG LinkedIn .NET Users Group

Linked .NET Users Group (LIDNUG) is an official INETA .NET User Group with online presentations through Live Meeting.

These are some upcoming events in the next few weeks:
You can also have their complete schedule from the following calendar links:
I am looking forward to attend their presentations. Enjoy!!

Saturday, April 04, 2009

Error 1327 Invalid Drive when installing VMware Server

When I tried to install VMware Server on Windows 7 (it also happened on Vista and XP), I got the message Error 1327 Invalid Drive S:\ and the installation aborted.

For some reason, the VMware installer does not like it when you change the default location of your shell folders. I have my Windows shell folders (My Documents, My Music, My Video, My Pictures) mapped to a network drive, S:.

The workaround is to temporarily change your shell folders back to the default location. An easy way to do this is by editing the User Shell Folders registry key. Be careful when editing your Windows registry; use the following steps at your own risk.

1. Run regedit.exe

2. Locate the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders

3. Click on File, Export and save this key to your desktop.

4. Change all entries that use your mapped drive (S: in my case) to the default location (%USERPROFILE%).

5. Now, Install VMware Server.

6. After the installation completes, restore your User Shell Folders registry settings by double-clicking the file saved in step 3.

That's it: you can install VMware Server and still keep your shell folders in your custom location. I hope the VMware folks fix this issue in their installer. Other people were also having this same issue when installing VMware Tools.

Saturday, March 28, 2009

VMware is slow or hangs on Ubuntu when running virtual machines from NTFS partitions

I have an external hard drive (WD My Passport, highly recommended!) where I keep all my virtual machines and use it with my desktop (Windows 7) and laptop (Ubuntu 8.10). When I use Ubuntu as the host, VMware hangs when starting virtual machines stored on my external HD (NTFS partition). If I copy these VMs to the host's ext3 partition, then VMware works fine.

In order to run virtual machines stored on an NTFS partition using VMware on a Linux host, you will need to add the following entry to the virtual machine's configuration file (.vmx file):

mainMem.useNamedFile="FALSE"

This entry disables the named memory-backing file, and VMware will work fine, often with much better performance. For more details, see here. After I added this entry to my .vmx files, I was able to successfully run all my virtual machines from the NTFS partition.

Saturday, March 21, 2009

Fixing Windows 7 Beta 64bit Blue Screen of Death (BSoD) tdx.sys

I have been using Windows 7 Beta (Build 7000) on my laptop since it became available. I have been very pleased with this new version and am looking forward to its RTM. Today, I decided to install the Windows 7 Beta 64-bit on my desktop machine as well. Everything was fine until I started getting the following Blue Screen of Death (BSoD):




After some research, I found that the problem is related to my antivirus (AVG Free) and a network shared drive (DNS-323, Linux-based, using Samba).

After multiple BSoDs, Windows 7 finally suggested a solution to my problem:




The problem occurs because of incorrect behavior in a system component when Server Message Block (SMB) connections go over the Transport Driver Interface (TDI). My network shared drive uses Samba, which is a reimplementation of SMB. The hotfix can be found at:
This hotfix has not undergone full testing, which is why it is not available through Windows Update, and it is intended only for systems that are experiencing the exact problem described here. You need to register to receive it by email. After installing this hotfix, my problem was solved, and I can finally enjoy Windows 7 Beta on my desktop machine!!

Sunday, March 08, 2009

Memory Management of Windows SharePoint Services (WSS) Objects: Best Practices and SPDisposeCheck Tool

The Windows SharePoint Services (WSS) 3.0 object model provides a set of classes to work with WSS data. These objects can be used to read and write data to the WSS store. The WSS object model contains many objects that implement the IDisposable interface, whose primary purpose is to release unmanaged resources. The Dispose method of IDisposable is intended to be called by consumers of the object, directly or through the using statement, in order to release those unmanaged resources.

Best Practices for Disposing WSS objects

Two common WSS objects that implement IDisposable are SPSite and SPWeb. SPSite represents a site collection, and SPWeb represents a single site. A common mistake is to use these objects without disposing of their unmanaged resources:

// BAD: SPSite and SPWeb will be leaked.
SPSite site = new SPSite("http://server");
SPWeb web = site.OpenWeb();
string title = web.Title;
string url = web.Url;

A better way to write the code above is:
// GOOD: No leaks.
using (SPSite site = new SPSite("http://server"))
{
    using (SPWeb web = site.OpenWeb())
    {
        string title = web.Title;
        string url = web.Url;
    }
}
The code above is very simple, but you get the idea. Things start getting more complicated when you combine calls in the same line, which makes it harder to catch leaks because you do not see the implicitly referenced object. For example:
// BAD
SPWeb web = new SPSite("http://server").OpenWeb();
The line above might make you think that there is only one leak, and you might attempt to fix it with the following:
// BAD
using (SPWeb web = new SPSite("http://server").OpenWeb())
{
    ...
}
The code above disposes of the SPWeb object returned by the OpenWeb method, but it does not dispose of the SPSite created by the new operator. The solution is the same as in the first example:
// GOOD: No leaks.
using (SPSite site = new SPSite("http://server"))
using (SPWeb web = site.OpenWeb())
{
    ...
}
Working with SharePoint collections also needs some attention. When using SPWebCollection objects, you need to dispose of any SPWeb object that is accessed through the [] indexer or with a foreach statement. The same applies to SPSiteCollection. The following code shows how to avoid leaks when enumerating the SPWeb objects in SPSite.AllWebs:

// GOOD: No leaks.
using (SPSite site = new SPSite("http://server"))
{
    using (SPWeb web = site.OpenWeb())
    {
        foreach (SPWeb innerWeb in site.AllWebs)
        {
            try
            {
                // ...
            }
            finally
            {
                // Must dispose the collection item.
                if (innerWeb != null)
                    innerWeb.Dispose();
            }
        }
    }
}
A similar approach is needed if you access collection items using the [] indexer: you must dispose of each object returned by the indexer.
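For illustration only, here is a minimal sketch of the indexer case (the web name "subsite" is made up):

// The SPWeb returned by the AllWebs indexer must be disposed by the caller.
using (SPSite site = new SPSite("http://server"))
{
    SPWeb web = null;
    try
    {
        web = site.AllWebs["subsite"];
        // ... work with web ...
    }
    finally
    {
        if (web != null)
            web.Dispose();
    }
}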

The examples above just show a few common cases of how to dispose of WSS objects, to give you the idea. The following article provides a detailed explanation of how to detect and properly dispose of your SharePoint objects:
You can also look at Roger Lamb's article, which complements the article above with lots of examples.

SPDisposeCheck Tool

The SPDisposeCheck tool analyzes your assemblies and detects whether you are disposing of WSS objects properly. The tool takes the path to a managed .DLL or .EXE, or the path to a directory containing many managed assemblies. It recursively searches for and analyzes each managed module, attempting to detect coding patterns based on the best-practices article above. The tool does not detect all possible unmanaged resource leaks, so you still have to learn the best practices and review your code.

You can install SPDisposeCheck from here. It is a console application, and you should add C:\Program Files\Microsoft\SharePoint Dispose Check to your path.

The usage is:

SPDisposeCheck <path to assembly or directory> -debug -xml

The output is a list of potential problems. If the PDB symbol files are available, the output includes additional source code information about each error. The -debug option adds more detail to the output. The -xml option writes the errors to an XML file instead of text; however, the SPDisposeCheck site says that this option is unreliable, and they do not recommend using it in this release.

I ran this tool on a Web Part that I recently created, and I got the following result:

ID: SPDisposeCheckID_160
Module: CustomCalendar.dll
Method: CalendarGenerator.SharePointCalendarRepository.#ctor(System.String)
Statement: manager := web.{Microsoft.SharePoint.SPWeb}GetLimitedWebPartManager(pageUrl, 1)
Source: D:\Code\CustomCalendar\CustomCalendar\SharePointCalendarRepository.cs
Line: 40
Notes: Dispose/Close was not called on SPLimitedWebPartManager.Web
More Information: http://blogs.msdn.com/rogerla/archive/2008/02/12/sharepoint-2007-and-wss-3-0-dispose-patterns-by-example.aspx#SPDisposeCheckID_160
----------------------------------------------------------

Total Found: 1

Well, although I am embarrassed that it found a bug in my code, I am very glad that this tool works! In my code, I am calling GetLimitedWebPartManager, which returns an SPLimitedWebPartManager, and the SPWeb referenced by that manager's Web property also needs to be disposed.
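As a hedged sketch only, assuming the manager was obtained from an SPWeb and that pageUrl is a placeholder for the page being edited, the fix amounts to disposing the SPWeb exposed by the manager:

SPLimitedWebPartManager manager =
    web.GetLimitedWebPartManager(pageUrl, PersonalizationScope.Shared);
try
{
    // ... work with manager.WebParts ...
}
finally
{
    // SPDisposeCheckID_160: dispose the SPWeb hanging off the manager.
    if (manager != null && manager.Web != null)
        manager.Web.Dispose();
}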

The following table lists the possible errors, with a summary of the solution for each.

Error Code           Summary of the Solution
SPDisposeCheckID_110 If you create a SharePoint object with the new operator, ensure that the creating application disposes of it.
SPDisposeCheckID_120 You must dispose of SPWeb objects that are created by SharePoint methods that return other SPWeb objects (such as OpenWeb).
SPDisposeCheckID_130 You must dispose of the SPWeb object that is returned each time you access the SPSite.AllWebs [] index operator.
SPDisposeCheckID_140 It is NOT necessary to dispose of the SPWeb object returned by SPWeb.RootWeb. The dispose cleanup is handled automatically by the SharePoint framework.
SPDisposeCheckID_150 You must dispose of the SPWeb object that is returned by calling the SPSite.AllWebs.Add method.
SPDisposeCheckID_160 You must dispose of the SPWeb object that is returned by the SPLimitedWebPartManager.Web property.
SPDisposeCheckID_170 It is NOT necessary to dispose of the SPWeb object returned by SPWeb.ParentWeb. The dispose cleanup is handled automatically by the SharePoint framework.
SPDisposeCheckID_180 You must dispose of all SPWeb objects in the SPWeb.Webs property (SPWebCollection type).
SPDisposeCheckID_190 You must dispose of the SPWeb object that is created and returned by the SPWeb.Webs.Add method.
SPDisposeCheckID_200 You must dispose of the SPWeb object that is created and returned by the SPWebCollection.Add method.
SPDisposeCheckID_210 It is NOT necessary to dispose of the SPSite and SPWeb objects returned by the SPControl.GetContextSite and SPControl.GetContextWeb methods, since they are managed by the SharePoint framework.
SPDisposeCheckID_220 It is NOT necessary to dispose of the SPSite and SPWeb objects obtained from SPContext (for example, SPContext.Current.Web), since they are managed by the SharePoint framework.
SPDisposeCheckID_230 You must dispose of the SPSite object that is returned each time you access the SPSiteCollection [] index operator.
SPDisposeCheckID_240 You must dispose of any SPSite object returned by the SPSiteCollection.Add method.


The tool also allows you to ignore a reported issue. If you have investigated a reported issue and are satisfied that it does not represent a problem, then you can add the following attribute to the calling method:
[SPDisposeCheckIgnore(SPDisposeCheckID.SPDisposeCheckID_100, "Ignoring this error")]
To add this attribute to your method, you will need to grab the source code of the attribute class from the file:

C:\Program Files\Microsoft\SharePoint Dispose Check\SPDisposeExamplesSource.zip

One thing to consider is running the SPDisposeCheck tool as a post-build event of your assembly, failing the build if it reports any issues. Developers can then detect leak problems at build time and troubleshoot them as soon as possible.
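As a rough illustration only (assuming the tool's default install path, and assuming your version of SPDisposeCheck returns a nonzero exit code when it finds issues, which you should verify for your release), the post-build event command line could be:

"C:\Program Files\Microsoft\SharePoint Dispose Check\SPDisposeCheck.exe" "$(TargetPath)"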

Conclusion

Improper memory management of WSS objects can lead to poor system performance, system crashes, or unexpected errors for users (timeouts, pages not available), especially under heavy load. To avoid these problems, developers need to follow the best practices for disposing of WSS objects, and also use tools such as SPDisposeCheck to check whether their assemblies are disposing of WSS objects properly.

References

Monday, February 16, 2009

Free DevExpress controls and IDE productive Tools

DevExpress is a software development company that produces .NET components, controls, and IDE productivity tools. They also offer some of their commercial products for free!! The list below contains some of their FREE products that I am aware of. Just register to get your free license:

.NET Controls:
IDE Productivity Tools:

Comparing Mock frameworks for .NET

Andrew Kazyrevich posted a series of articles comparing the most popular Mock frameworks for .NET development: NMock2, Rhino Mocks, Moq, and Typemock Isolator.

The complete series of articles can be found at:
He also created an open source project that "provides a unified set of tests written against Moq, NMock2, Rhino Mocks and Typemock Isolator, so that you can easily compare the frameworks and make an informed decision when picking one up".

http://code.google.com/p/mocking-frameworks-compare/


Tuesday, February 10, 2009

Microsoft Certification for Developers

Today I was talking to my buddies from work, Ryan and Michael, about Microsoft certifications. We discussed the MCPD and SharePoint certifications, and I decided to post some information here.

MCPD Enterprise Application Developer 3.5

The MCPD Enterprise Application Developer 3.5 certification is the current top developer certification, and it requires you to pass six exams. The following table contains all the exams and links to the official training books. Some of the books have not been published yet, but they are expected sometime during the first quarter of 2009.


70-536 TS: Microsoft .NET Framework 2.0 - Application Development Foundation

70-562 TS: Microsoft .NET Framework 3.5, ASP.NET Application Development

70-505 TS: Microsoft .NET Framework 3.5, Windows Forms Application Development
  • Official Training Book (CAD52.91). Not published yet, expected on Feb 25, 2009. Since there is not much difference between the .NET 3.5 and .NET 2.0 versions, you could use this book instead (I used it for the beta exam).

70-561 TS: Microsoft .NET Framework 3.5, ADO.NET Application Development

70-503 TS: Microsoft .NET Framework 3.5 – Windows Communication Foundation Application Development

70-565 Pro: Designing and Developing Enterprise Applications Using the Microsoft .NET Framework 3.5
  • No current Official Training Book. However, there is the .NET 2.0 version here (CAD48.50).


The exams don't have to be completed in a specific order, but I recommend starting with the 70-536 exam for two reasons:
  • The 70-536 exam covers the foundations of the .NET framework (types, collections, threading, app domains, configuration, serialization, encryption, code access security, reflection, interoperability, globalization, and drawing). This provides a very good foundation before jumping into the specific types of applications.
  • The 70-536 exam is a prerequisite for all the developer MCTS certifications. To acquire your first .NET MCTS certification, you need to pass two exams: the 70-536 and the desired MCTS exam (WCF, Windows Forms, ASP.NET, ADO.NET, WF, WPF). By taking 70-536 first, you will acquire a new MCTS certification with every subsequent MCTS exam you pass.
Another recommendation is to take the 70-565 exam last. This is the professional exam that covers designing .NET enterprise applications and choosing the proper technologies, so it is better to finish all the application-specific exams before taking this one.
For the other exams, start with the one you are most familiar with. This will make your life easier, and you can progressively advance to exams in less familiar areas.

Some tips when preparing for the exams above:
  • One of the most important things is to practice all the material. Try all the code samples you find in the study guide, make modifications, improve the code, and apply real scenarios and examples from your day job. Also, try different method overloads, different constructors, and so on. Sometimes the training book shows one way to use a set of classes, and then on the exam you see them used with different methods and constructors and do not know whether it is right or wrong. So, go beyond the examples in the book.
  • Check out the material on MSDN. If the book shows you how to use a certain class, look up that class on MSDN to see its methods, properties, and usage examples.
  • Although the books are great resources, read all the topics in the preparation guides to make sure you cover all of them. Supplement the book content with the MSDN library.
  • Make sure to get the book corrections from Microsoft Support. When I was studying with the .NET 2.0 training books, I found lots of minor mistakes in them. To get the corrections, go to Microsoft Help and Support and search for the book's ISBN. For more information, see here.
  • These books usually come with practice tests from MeasureUp. It is recommended to take these practice tests to get used to the format of the exam, the type of questions, etc.
  • Take advantage of the Microsoft Second Shot offer. You get an exam voucher to use when registering for your exam; if you fail, you can register for a free retake using this voucher. It is a good investment just in case something bad happens during your first attempt.

Additional Resources:
  • Prometric: this is the exam provider where you can schedule a Microsoft certification exam and choose the test site where you will take it. The .NET 3.5 exams usually cost USD 125. Do not forget to use an exam voucher from Microsoft Second Shot when registering, just in case you need to retake the exam.
  • Gerry O'Brien's Blog: this is the official blog about Microsoft certifications for developers and SQL Server. You will find information about upcoming exams, new certifications, beta exams, etc. Also, pay attention to announcements for Visual Studio 2010 exams, which should bring a new MCPD .NET 4.0. Gerry and his team are still planning the new certifications, but as you can see from his comments, there will be upgrade exams from MCPD 3.5 to 4.0 (two upgrade exams, I think); see his comments on this post.
  • Beta Exams Announcements: this blog contains announcements about beta exams. What is a beta exam? Well, it works like beta software. Before opening a new exam to the general public, Microsoft first releases it as a beta exam (71-### instead of 70-###) to evaluate the exam and gather feedback and error reports from test takers. You can register for these beta exams for free (the normal cost is USD 125), and if you pass the beta exam, the exam credit is added to your transcript and you will not need to take the exam in its released form. I took four .NET 3.5 beta exams and passed them without paying anything!! When taking beta exams, make sure to leave some feedback, because they are expecting our input to improve the exam experience.

SharePoint Certification

I recently started working with SharePoint (WSS 3.0 and MOSS 2007), and I am planning to get certified on it as well. For developers, there are two SharePoint MCTS exams:


70-541 TS: Microsoft Windows SharePoint Services 3.0 – Application Development
  • No official training book, but there is the Microsoft Press book (CAD34.64).
  • There are no practice tests from MeasureUp for this exam.
70-542 TS: Microsoft Office SharePoint Server 2007 – Application Development
  • No official training book, but there is the Microsoft Press book (CAD31.49).
  • You might consider buying the practice tests from MeasureUp.

Monday, January 26, 2009

MCPD Certification and Next

I finally got my MCPD certification by upgrading my MCSD certification!! My last exam was 70-551, which took about 3 1/2 hours to complete. This upgrade exam includes content from three regular exams: 70-536 (.NET Foundation), 70-526 (Windows Apps), and 70-528 (Web Apps). So, the scope of the upgrade exam is very broad, and it was probably one of the hardest I ever had to prepare for (three books).

In the 70-551 exam, the Windows Applications part had many questions about ClickOnce technology, and I regret not spending more time studying it. The Web Applications part also had a topic I should have spent more time on, the Web Part controls. But in the end, I was able to pass. Note that in upgrade exams like this, you need to achieve at least 700 points in each part, and the final exam score is the minimum score across the parts, not the average. That means if you do very well in two parts but get only 690 on the third, you fail the whole upgrade exam and have to take all three parts again when retaking it.

I started my certification path back in 2005, but only completed two exams towards the MCAD. After passing these two exams, Microsoft announced the new certifications for Visual Studio 2005. At the same time, I got a new job in California and did not have enough time to dedicate to .NET certifications. Only last year did I decide to continue my certification path. I thought about starting from scratch, but decided to follow the upgrade path. The results can be seen in the picture with the pile of books I had to study. The Xbox 360 just happens to be on my desk; it is not related to .NET certification. It actually represents a threat to my whole certification path!!

Some developers think that certifications are useless. I agree that certification is not everything; it only shows that you passed the exam, and there are many other things to consider, such as your work experience. On the other hand, I see these certifications as a structured way to learn. The goal is not to pass, but to study and learn the content of the exam. They expose you to a wide view of the .NET framework, rather than a specific one. You will get the detailed knowledge by working on a project on a daily basis, but this wide view is important because it exposes you to areas that you might not touch when working on your own projects. So, I am continuing my certification path, and I highly recommend that my fellow developers work on theirs.

Well, what is next? I got the MCPD EAD (Enterprise Application Developer) for VS 2005 (.NET 2.0), and now I want to upgrade it to MCPD EAD .NET 3.5. There are two upgrade exams, 70-568 and 70-569; they are not available yet, but they should be soon. In the meantime, I participated in the beta exams of the .NET 3.5 certifications. Note that beta exams are free, but they expect you to know the area and provide feedback about it (see the beta exams announcements here). I took three beta exams of the new .NET 3.5 track (70-561, 70-505, and 70-565), and so far I have received the result of only one (which I passed!!). I do not know the results of the other ones yet; beta exam results are only sent 8 weeks (or more!!) after the end of the beta period. If I end up passing the other ones, then I will only have to take two regular exams (70-503 and 70-562) instead of two upgrade exams. The following diagram shows some upgrade paths to MCPD EAD .NET 3.5:


Thursday, January 22, 2009

Multiple output files from a T4 template

When using T4 templates for code generation, I have often needed to generate multiple files from the same template. Unfortunately, Visual Studio does not support this, and I was using a solution based on this MSDN forum discussion.

Damien Guard just posted a nice solution for this problem. See his post at:
I am looking forward to updating my current T4 templates to use this new solution.

Tuesday, January 13, 2009

Guidance Automation Toolkit (GAT) Documentation

One of the hardest things about working on the Repository Factory is the lack of documentation on creating software factories using the Guidance Automation Toolkit (GAT). One of the Repository Factory users asked about the available documentation (see here). At the time we inherited this code from the p&p team, there was no documentation, and we had to learn from the code. Since the Repository Factory is just a GAT software factory like WCSF, WSSF, and SCSF, it should be easy to find information about GAT, right? Yeah, right!?!?

The GAT web site does not have much documentation beyond the general overview. The community web site GuidanceAutomation.net does not work anymore, returning a Service Unavailable error. The only book I know of about creating software factories using GAT is Practical Software Factories in .NET.

The best online documentation I found about GAT is from Jelle Druyts, who created a series of articles about GAT based on the June 2006 CTP: I am just wondering who else is using GAT, and whether Microsoft will ever release a new version of it.

Sunday, January 04, 2009

Chinook Database 1.1 Released

I just released a new version of the Chinook Database. Chinook Database is a sample database that represents a digital media store, like iTunes, including information about artists, albums, tracks, media types, invoices, customers, etc. The media information was imported from my iTunes library, the customer/employee information was created manually, and the sales information is auto-generated for a four-year period.

It supports Oracle, MySQL, SQL Server, and SQL Server Compact. The database can be created by running a single SQL script. It is also provided as an XML file and as a SQL Server Compact database. It is possible to use your own iTunes library information to regenerate these scripts; see more details here.

The SQL scripts and unit tests are auto-generated using T4 templates by reading the table information defined in an XML schema.

The new changes for the release 1.1 are:
  • Support for SQL Server Compact.
  • Additional customers from multiple countries.
  • Added a many-to-many relationship (a Playlist contains many Tracks, a Track belongs to many Playlists).
  • Added a one-to-many relationship between Employee and Customer (support representative).
  • Added Total field to invoice table.
I also created more unit tests to validate the data created in the database after running the SQL scripts.

The new schema is:



You can download this new version from the Chinook Database 1.1 Release page.
