I just came across this article about creating SharePoint lists with parent/child relationship using out-of-the-box functionality:
http://www.endusersharepoint.com/2009/10/02/creating-a-sharepoint-list-parent-child-relationship-out-of-the-box/
Monday, October 12, 2009
Sunday, June 28, 2009
Unit Testing CRM Plug-ins
What is a CRM plug-in?
A plug-in is custom business logic that you can integrate with Microsoft Dynamics CRM 4.0 to modify or augment the standard behavior of the platform. This custom business logic is executed through a message pipeline execution model called the Event Execution Pipeline. A plug-in can run before or after an MS CRM platform event. For example, you can create a plug-in to validate the attributes of an account entity before the create and update operations.
To create plug-ins, you need to create a normal .NET class library and reference the MS CRM SDK libraries. Then add a class that implements the Microsoft.Crm.Sdk.IPlugin interface.
public interface IPlugin
{
    void Execute(IPluginExecutionContext context);
}
Plug-in Unit Testing
In order to write unit tests for your plug-in, you need to create at least a mock of the IPluginExecutionContext. Depending on your plug-in implementation, you will also need to mock ICrmService or IMetadataService if you are calling IPluginExecutionContext.CreateCrmService or IPluginExecutionContext.CreateMetadataService.
There is the MS CRM Plug-in Debugger, which consists of a small EXE container that implements a mock of the IPluginExecutionContext interface. You could use this container to unit test your plug-ins. However, IMHO, I do not see any advantage in using it over a unit test framework combined with a mock framework. I posted a comment on the CRM Team Blog post Testing CRM Plug-ins asking about this, but haven't received a response yet.
To unit test a CRM plug-in, you can use your favorite unit test framework (NUnit, MbUnit, Visual Studio Tests) and your favorite mock framework (Rhino Mocks, NMock, Typemock). In this article, I will be using NUnit and Rhino Mocks.
The Plug-in Code
In the following example, adapted from the "Programming Microsoft Dynamics CRM 4.0" book, the plug-in validates the account number attribute before saving the account entity.
public class AccountNumberValidator : IPlugin
{
    public void Execute(IPluginExecutionContext context)
    {
        var target = (DynamicEntity) context.InputParameters[ParameterName.Target];
        if (target.Properties.Contains("accountnumber"))
        {
            var accountNumber = target["accountnumber"].ToString();
            var regex = new Regex("[A-Z]{2}-[0-9]{6}");
            if (!regex.IsMatch(accountNumber))
            {
                throw new InvalidPluginExecutionException("Invalid account number.");
            }
        }
    }
}
The code above checks whether the account number attribute is in the right format. If not, it throws an InvalidPluginExecutionException. Since we will register this plug-in as a pre-event of the create and update operations on the account entity, the exception is handled by the CRM platform and the create/update operation is aborted.
Writing the Plug-in Unit Test
The following code is a simple test using NUnit to verify that an InvalidPluginExecutionException is thrown when the account entity has an invalid account number:
[Test]
[ExpectedException(typeof(InvalidPluginExecutionException))]
public void ShouldHandleInvalidAccountNumber(
    [Values("", "AB123456", "A123456", "ABC123456", "AB-12345", "AB123456",
            "AB-123", "AB-1234", "aa-012345", "aa-000000", "Za-999999", "wW-936187")] string number)
{
    // Create necessary mocks for the plug-in.
    var mocks = new MockRepository();
    var context = mocks.DynamicMock<IPluginExecutionContext>();

    // Creates a property bag for the plug-in execution context mock.
    var target = new DynamicEntity();
    target.Properties["accountnumber"] = number;
    var inputParameters = new PropertyBag();
    inputParameters.Properties[ParameterName.Target] = target;

    // Set expectations of mocks.
    Expect.Call(context.InputParameters).Return(inputParameters).Repeat.Any();
    mocks.ReplayAll();

    // Test the plug-in using the context mock.
    IPlugin plugin = new AccountNumberValidator();
    plugin.Execute(context);

    // Verify all the mocks.
    mocks.VerifyAll();
}
Now, we will go through all the details of this unit test:
- The ExpectedException attribute defines the type of exception that this test expects to be raised. In our case, it is an InvalidPluginExecutionException.
- This is a parameterized test that uses the Values attribute to define a set of invalid account numbers. The test will run once for each value that we define. The Values attribute is specific to NUnit, but other frameworks have similar mechanisms; MbUnit, for example, uses RowTest.
- We create a mock of the IPluginExecutionContext interface by using the MockRepository.DynamicMock method. We are using a DynamicMock because we are only interested in a small piece of the functionality (the InputParameters property of the context object). If we wanted complete control of the mock object's behavior, we would use a StrictMock instead. For more information about the types of mocks that you can create with Rhino Mocks, see here.
- The InputParameters property of the plug-in context is a property bag that will contain the account number attribute. So, we create this property bag and add the account number defined by the Values attribute parameter.
- Now, we set the expectations of the mock object. This step is called the Record state. When the InputParameters property is called, we expect it to return the property bag we created in the previous step. Note that we use Repeat.Any(), which means the property can be called any number of times. In our test, we just want to make sure that InputParameters is called, no matter how many times.
- The Record state is finished by calling ReplayAll(), which moves the mocks to the Replay state.
- Now, we are ready to instantiate our plug-in object and call its Execute method using the plug-in context mock object.
- Finally, we call the VerifyAll() method to verify that the mock expectations were satisfied. In our case, it makes sure that the InputParameters property was called during the Replay state.
We should also write a test to assert that no InvalidPluginExecutionException is thrown when using valid account numbers. I will not include this test here, but you can see it in the solution source code files.
Mocking the ICrmService Interface
In our previous test, we only needed to mock the plug-in context interface. However, in more complex plug-ins, you might need to mock other interfaces, such as ICrmService. The CreateCrmService method of IPluginExecutionContext creates an ICrmService object, so if you call CreateCrmService in your plug-in, you will need to create a mock of ICrmService as well.
Our validate account number plug-in has been changed to also detect duplicate account numbers. If an account number already exists, the validation will fail by throwing an InvalidPluginExecutionException. To verify that the account number exists, we query CRM using the ICrmService.Fetch method with a FetchXML query. The following code demonstrates these changes:
public class AccountNumberValidator : IPlugin
{
    public void Execute(IPluginExecutionContext context)
    {
        var target = (DynamicEntity) context.InputParameters[ParameterName.Target];
        if (target.Properties.Contains("accountnumber"))
        {
            var accountNumber = target["accountnumber"].ToString();

            // Validates the account number format.
            var regex = new Regex("[A-Z]{2}-[0-9]{6}");
            if (!regex.IsMatch(accountNumber))
            {
                throw new InvalidPluginExecutionException("Invalid account number.");
            }

            // Validates that the account number is unique.
            using (var service = context.CreateCrmService(true))
            {
                var query = string.Format(@"<fetch mapping='logical'>
                    <entity name='account'>
                        <attribute name='accountnumber' />
                        <filter>
                            <condition attribute='accountnumber' operator='eq' value='{0}' />
                        </filter>
                    </entity>
                </fetch>", accountNumber);

                var results = service.Fetch(query);
                var xdocument = XDocument.Parse(results);
                var existingNumbers = from item in xdocument.Descendants("accountnumber")
                                      select item.Value;

                if (existingNumbers.Count() > 0)
                    throw new InvalidPluginExecutionException("Account number already exists.");
            }
        }
    }
}
Now, we will create a unit test to verify that our plug-in detects duplicate account numbers.
[Test]
[ExpectedException(typeof(InvalidPluginExecutionException))]
public void ShouldRejectDuplicateAccountNumber()
{
    // Create necessary mocks for the plug-in.
    var mocks = new MockRepository();
    var context = mocks.DynamicMock<IPluginExecutionContext>();
    var service = mocks.DynamicMock<ICrmService>();

    // Creates a property bag for the plug-in execution context mock.
    var target = new DynamicEntity();
    target.Properties["accountnumber"] = "AB-123456";
    var inputParameters = new PropertyBag();
    inputParameters.Properties[ParameterName.Target] = target;

    // Set expectations of mocks.
    Expect.Call(context.InputParameters).Return(inputParameters).Repeat.Any();
    Expect.Call(context.CreateCrmService(true)).Return(service);
    Expect.Call(service.Fetch(null)).IgnoreArguments()
        .Return(@"<resultset>
                      <result>
                          <accountnumber>AB-123456</accountnumber>
                      </result>
                  </resultset>");
    mocks.ReplayAll();

    // Test the plug-in using the context mock.
    IPlugin plugin = new AccountNumberValidator();
    plugin.Execute(context);

    // Verify all the mocks.
    mocks.VerifyAll();
}
In the test above, we are using Rhino Mocks to create a mock of ICrmService. This object will be returned by the CreateCrmService method of the plug-in execution context. We are also recording that when the ICrmService.Fetch method is called, it will return an XML string containing a duplicate account number. This simulates the CRM behavior of detecting that an account number already exists, and we can assert that our plug-in fails the validation by throwing an exception.
I hope this post helps you to unit test your CRM plug-ins. Although I demonstrated it using NUnit and Rhino Mocks, you can use any unit testing framework (NUnit, MbUnit, Visual Studio Tests, etc.) and any mock framework (Rhino Mocks, NMock, Typemocks, etc.).
Monday, June 15, 2009
Using Embedded Files for FetchXML Queries
FetchXML is a proprietary query language used in Microsoft Dynamics CRM. All the examples I've seen so far show the FetchXML query hard-coded into the C# file. Instead of keeping the queries mixed with the source code, a bad practice IMHO, I prefer placing queries in separate XML files that are embedded as resources of the assembly. Keeping them in separate files isolates them from the code, making them easier to locate, share, and test.

In order to embed your query in the assembly, add an XML file with the query to your project, and make sure to change its build action to Embedded Resource. Then, use the following code to read the embedded XML file. The code below assumes that the file was placed in the Queries subfolder of the project. It refers to the embedded file by using the assembly name and the relative path to the file. Notice that it uses "." instead of "\" to refer to the embedded file.
// Read the embedded FetchXML query file.
var assembly = Assembly.GetExecutingAssembly();
var stream = assembly.GetManifestResourceStream("MyAssembly.Queries.MyQuery.xml");
if (stream == null)
{
    throw new FileLoadException("Cannot load fetchXML embedded file");
}

// Gets the FetchXML query string.
var reader = new StreamReader(stream);
string query = reader.ReadToEnd();

// Remove extra whitespace to reduce the size of the XML request.
query = Regex.Replace(query, @"\s+", @" ");

// Fetches the results.
string results = crmService.Fetch(query);
The code above uses a static query. If you need to use a dynamic query, then the XML can contain the string format of the query, and you can use String.Format to pass the parameters needed to build your dynamic query.
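As an illustration of the dynamic case, here is a minimal sketch of a helper that loads an embedded template and applies String.Format. The resource name MyAssembly.Queries.AccountByNumber.xml and the LoadQuery helper are hypothetical names for this sketch, not from the original post; the embedded file is assumed to contain a placeholder such as value='{0}' in its filter condition.

```csharp
using System.IO;
using System.Reflection;

public static class QueryLoader
{
    // Hypothetical helper: loads an embedded FetchXML template and
    // fills in its {0}, {1}, ... placeholders via String.Format.
    public static string LoadQuery(string resourceName, params object[] args)
    {
        var assembly = Assembly.GetExecutingAssembly();
        using (var stream = assembly.GetManifestResourceStream(resourceName))
        {
            if (stream == null)
            {
                throw new FileLoadException("Cannot load fetchXML embedded file");
            }
            using (var reader = new StreamReader(stream))
            {
                // The embedded XML contains placeholders such as value='{0}'.
                return string.Format(reader.ReadToEnd(), args);
            }
        }
    }
}

// Usage sketch:
// var query = QueryLoader.LoadQuery("MyAssembly.Queries.AccountByNumber.xml", "AB-123456");
// string results = crmService.Fetch(query);
```

This works because FetchXML itself contains no curly braces, so the template is safe to pass through String.Format without escaping.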
Wednesday, May 20, 2009
Visual Studio 2010 and .NET 4 Beta 1
Visual Studio 2010 and .NET 4 Beta 1 are available today for the general public. Note that the new .NET version is 4, not 4.0. You can download the beta from here.
Also, check out Jason Zander's post, where he highlights the new functionality, and Brad Abrams' post with the .NET 4 poster.
I am currently preparing a virtual machine with Windows 7 RC 1 and VS 2010 for trying out the new features.
Saturday, April 25, 2009
NVidia driver not working after upgrading to Ubuntu 9.04
Ubuntu 9.04 has been released this week and I upgraded my machines to the latest version (download it here).
If you are using dual boot (GRUB), NVidia drivers, and answered yes to keeping your existing version of menu.lst during the upgrade, then you might have the same problem as me.
After I upgraded to Ubuntu 9.04, I got the following error message when restarting the machine:
Ubuntu is running in low-graphics mode

The following error was encountered. You may need
to update your configuration to solve this.

(EE) NVIDIA(0): Failed to load the NVIDIA kernel module!
(EE) NVIDIA(0): *** Aborting ***
(EE) Screen(s) found, but none have a usable configuration.

Although I have restricted drivers enabled, I tried to activate the NVidia driver on System/Administration/Hardware Drivers and nothing happened.
If I open a terminal window, run the commands below, and then try to activate the driver again, I can see the error messages displayed on the terminal window:
sudo killall jockey-backend
sudo /usr/share/jockey/jockey-backend --debug -l /tmp/jockey.log

In my case, I got the following error message:

FATAL: Module nvidia not found.

The NVidia module cannot be found because I am running an old version of the kernel (remember that I said to keep my existing menu.lst version!!) and the NVidia driver is compiled for the latest version.
To check the kernel version I am running, I use the command "uname -r"; it returns 2.6.27-11-generic, but Ubuntu 9.04 ships with 2.6.28-11. So, I need to manually update my menu.lst file to be able to boot the latest kernel version.
Warning: Only update your menu.lst file if you have done this before. This is not recommended for users who are not experienced with editing the menu.lst file. If you are not an experienced Linux user, it is better not to proceed with these changes.
First, run the following commands to see that you have vmlinuz-2.6.28-11-generic:
ls /boot/*2.6.28*

You should be able to see vmlinuz-2.6.28-11-generic and initrd.img-2.6.28-11-generic. Then, back up and edit your menu.lst file:

sudo cp /boot/grub/menu.lst /boot/grub/menu.lst.bak
sudo gedit /boot/grub/menu.lst

The safest way is to duplicate your first boot menu entry, then change only this new entry to use the latest kernel version. In my case, I changed it from 2.6.27-11 to 2.6.28-11. I also changed the title to 9.04. This way you still have your previous entries in case you have any problems rebooting and need to restore your previous menu.lst from the backup copy (menu.lst.bak).
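To make the edit concrete, here is a sketch of what the duplicated entry might look like after the change. This is illustrative only: the root device, UUID, and kernel options below are placeholders; copy those lines unchanged from your own existing entry and modify only the title and the kernel/initrd version numbers.

```
title   Ubuntu 9.04, kernel 2.6.28-11-generic
root    (hd0,0)
kernel  /boot/vmlinuz-2.6.28-11-generic root=UUID=<your-root-uuid> ro quiet splash
initrd  /boot/initrd.img-2.6.28-11-generic
```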
Next time you reboot your machine, you can select the first boot entry and then you will see the following:
* Running DKMS auto installation service for kernel
* nvidia (173.14.16)...

And then your Ubuntu will be loaded with the proper video resolution!!
If you have any problems rebooting your machine with the new entry, you can reboot it using an existing entry (or Ubuntu CD) and revert the changes you made by restoring the backup copy: menu.lst.bak (manually created) or menu.lst~ (created by gedit).
Monday, April 20, 2009
LIDNUG LinkedIn .NET Users Group
Linked .NET Users Group (LIDNUG) is an official INETA .NET User Group with online presentations through Live Meeting.
These are some upcoming events in the next few weeks:
- Brian Harry talks about VSTS 2010, April 30, 12:30 PM PST - http://events.linkedin.com/LIDNUG-Brian-Harry-Visual-Studio-Team/pub/54208
- Stephen Toub talks about the .NET Parallel Extensions, May 7, 12:00 PM PST - http://events.linkedin.com/LINDUG-Stephen-Toub-NET-Parallel/pub/60569
- Scott Guthrie talks shop with developers, May 11, 11:30 AM PST - http://events.linkedin.com/LIDNUG-ScottGu-talks-shop-developers/pub/60571

The group's event calendar is also available:
- HTML: http://isaacabraham.calendar.live.com/calendar/Linked+In+.NET+User+Group+Events/index.html
- RSS: http://isaacabraham.calendar.live.com/calendar/Linked+In+.NET+User+Group+Events/calendar.xml
- ICS: webcal://isaacabraham.calendar.live.com/calendar/Linked+In+.NET+User+Group+Events/calendar.ics
Saturday, April 04, 2009
Error 1327 Invalid Drive when installing VMware Server
When I try to install VMware Server on Windows 7 (this also happened on Vista and XP), I get the message Error 1327 Invalid Drive S:\ and the installation aborts.
For some reason, the VMware installer does not like when you change the default location of your shell folders. I have my Windows shell folders (My Documents, My Music, My Video, My Pictures) mapped to a network drive S:.
The workaround is to temporarily change your shell folders back to the default location. An easy way to do this is by changing the User Shell Folders registry key. Be careful when editing your Windows registry; use the following steps at your own risk.
1. Run regedit.exe
2. Locate the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
3. Click on File, Export and save this key to your desktop.
4. Change all entries that use your mapped drive (S: in my case) to the default one (%USERPROFILE%).
5. Now, Install VMware Server.
6. After installation completes, restore your User Shell Folders registry info by double clicking on the file saved on step #3.
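For illustration, this is roughly what the default values for those entries look like in .reg form on Windows 7. The entry names are the standard ones, but your exported file will differ; in particular, the real values are of type REG_EXPAND_SZ, which regedit exports as hex(2) data rather than plain strings, so prefer editing the values inside regedit itself instead of importing this sketch directly.

```
Windows Registry Editor Version 5.00

; Illustrative sketch only -- not a file exported from a real machine.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders]
"Personal"="%USERPROFILE%\\Documents"
"My Music"="%USERPROFILE%\\Music"
"My Pictures"="%USERPROFILE%\\Pictures"
"My Video"="%USERPROFILE%\\Videos"
```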
That's it: you were able to install VMware Server and still keep your shell folders at your custom location. I hope the VMware folks fix this issue in their installer. Other people were also having this same issue when installing VMware Tools.