SQL Server

Scrambling your production data for testing

When testing business applications implemented with Dynamics CRM, you sometimes have to bring production data down to your Development, Testing, and UAT/Pre-Production environments. In such scenarios, what if your company's policies do not allow you to play around with sensitive client data? For instance, while testing certain functionality a tester might trigger several emails to a real client, which will surprise the client, and some may not like it at all.

Therefore, it is always good to replace sensitive data with dummy values. But this is not a simple task if you have large quantities of data. One might argue that testing may not require large quantities of data, but if the testers are doing performance testing, then large volumes of data are required.

The objective of this article is to highlight the options available for replacing/scrambling sensitive data with dummy data.
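As a minimal sketch of the idea, here is how sensitive columns could be overwritten in place with deterministic dummy values. The Contact table and its column names here are hypothetical placeholders, not an actual CRM schema:

    -- Scrambling sketch: overwrite sensitive columns with dummy values.
    -- Table and column names are hypothetical; adjust to your own schema.
    BEGIN TRANSACTION;

    UPDATE dbo.Contact
    SET FirstName    = 'Test',
        LastName     = 'Contact ' + CAST(ContactId AS VARCHAR(10)),
        -- Route every email to a dummy domain so testers cannot reach real clients.
        EMailAddress = 'contact' + CAST(ContactId AS VARCHAR(10)) + '@example.test',
        Telephone    = '000-000-0000';

    COMMIT TRANSACTION;

Reusing the primary key in the generated values keeps each record unique, so the scrambled copy stays realistic enough even for performance testing against large volumes.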

(more…)

Advanced Data Migration techniques for Dynamics 365

Recently I have been writing a little more about data migration techniques and best practices. I have to say, it is an area where we all make mistakes. If you don't map and migrate the data properly, whatever solution you deliver will not work, especially when the client's current work depends on existing data. And if your CRM solution is not working, you will get the blame no matter how good the solution is.

(more…)

Entity Framework Code First with AWS SQL RDS

In this article I'm going to take you through the steps for using Amazon RDS with Entity Framework Code First. I will not be covering any details of Entity Framework itself. First you need to install the AWS Tools for .NET. Then open the AWS Explorer from the Visual Studio View menu; remember, you need to have an account set up with Amazon Web Services. From the explorer you can manage your AWS account.

[Screenshot: AWS Explorer in Visual Studio]

So first I am going to create a database using Amazon RDS. Here I will be selecting Amazon RDS (Relational Database Service).

[Screenshot: selecting Amazon RDS in the AWS Explorer]
(more…)

Installing Dynamics CRM 2011 on Amazon Relational Database Services (RDS) – Part 1

I have been working with Amazon RDS for some time and always wondered: can we install CRM using Amazon RDS? This question had been haunting me from the moment I started exploring the features of RDS.
After setting up my AWS account I wanted to try it out badly, and I did. The main reason for going with RDS is the rich set of features it provides. You can read more about these features at the following link: Amazon Relational Database Services-Features

So I set up an EC2 instance as below, then configured it for installing CRM 2011, i.e. setting up IIS and Active Directory services.
[Screenshot: EC2 instance setup]

Then I set up an RDS instance as below.

[Screenshot: RDS instance setup]

Make sure that the EC2 instance and the RDS instance are in the same security group; otherwise you can't access the RDS instance from the EC2 instance. At present, RDS doesn't support SQL Server Analysis Services or Integration Services either.

[Screenshot: RDS instance configuration]

After setting up the instances I started the deployment of CRM 2011, but it failed because RDS does not support SQL Server Reporting Services.
So I looked for a workaround. After searching for some time I came across the following link to launch an optimized SQL Server instance image published by Microsoft: Microsoft optimized instance for SQL Server 2008 R2

So when designing the CRM 2011 deployment, I assume we could use this particular image as a separate server running SSRS, pointed at the RDS instance as the database. This should enhance the performance of the CRM reports, as SSRS would be running with dedicated resources. Alternatively, we could use this image as the back-end SQL Server for the EC2 instance. I'm going to explore these options in Part 2 of this blog post.

Five common mistakes to watch out for in database design

The following is a TechRepublic article which highlights five common mistakes that we tend to make in some situations.

Five common mistakes in database design

I have seen many cases of teams unintentionally making these mistakes, but the catch is that we can't always go by the rules. We have to consider other factors such as timelines, business value, end-user expectations, performance, cost, quality, etc. What I believe is that we, as part of the development team, need to strike a balance across all aspects of software development. That is, if we tried to follow rules to avoid every mistake mentioned in the above article, we might well have to sacrifice many other things. So knowledge of the business domain, the technology used, and the capability of the team will determine a successful end product.

But as a convention we must always take the necessary measures to avoid mistakes at the design level, so take as much time as needed to get it right the first time.

Hard delete or soft delete?

I have been doing some R&D work and came across the need to delete records from the database. So I've been thinking about whether to do a soft delete or a hard delete. What do you think?
From my point of view, it all depends on the end-user requirements. If the transactions are used for pattern analysis by top management, and especially in decision support systems, I think a soft delete would be appropriate. I'm sure you all know what is meant by a soft delete: in the simplest terms, it just uses a state variable. When the user selects a record and confirms the deletion, the record is not deleted from the database; instead a deletion state is set on the record and on other related data as appropriate.
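As a minimal sketch of the idea (the Orders table and column names here are hypothetical), the soft delete simply replaces the DELETE statement with an UPDATE of a status column:

    -- Soft-delete sketch: add a deletion-state column to the table.
    ALTER TABLE dbo.Orders ADD DeletionState BIT NOT NULL DEFAULT 0;

    -- When the user confirms a delete, flag the record instead of removing it.
    DECLARE @OrderId INT = 42;  -- example key
    UPDATE dbo.Orders
    SET DeletionState = 1
    WHERE OrderId = @OrderId;

    -- Normal queries then filter out the soft-deleted rows.
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE DeletionState = 0;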
But a soft delete can also cost you more space, because you are keeping all the historical data. For that you could use a separate archiving mechanism, for example a flat data structure populated by a trigger on the source entity which writes the important transaction data to the flat table. After that you can perform a hard delete using a service or a daily SQL Server job to clean up the data with the specific deletion status.
As I mentioned earlier, there are other factors affecting this decision, such as business domain requirements, hardware (e.g. storage), software requirements, cost, etc. Based on these conditions you can arrive at a better solution. In my case I decided to go with a soft delete and archive the important data to a flat table. In this solution I added an additional column called "Deletion State" to every table in the database, and then used a daily SQL Server job to clean up the data whose deletion state equals the appropriate value, as sketched below.
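A minimal sketch of that archive-and-clean-up design, again using the hypothetical Orders table: a trigger on the source entity copies the important data to a flat archive table when a row is flagged, and the final DELETE is what the daily SQL Server job would run:

    -- Flat archive table holding only the important columns (hypothetical schema).
    CREATE TABLE dbo.OrdersArchive (
        OrderId    INT,
        OrderDate  DATETIME,
        Amount     DECIMAL(18, 2),
        ArchivedOn DATETIME DEFAULT GETDATE()
    );
    GO

    -- Trigger on the source entity: when a row is soft-deleted,
    -- write its important data to the flat table.
    CREATE TRIGGER trg_Orders_Archive ON dbo.Orders
    AFTER UPDATE
    AS
    BEGIN
        INSERT INTO dbo.OrdersArchive (OrderId, OrderDate, Amount)
        SELECT i.OrderId, i.OrderDate, i.Amount
        FROM inserted i
        JOIN deleted d ON d.OrderId = i.OrderId
        WHERE i.DeletionState = 1 AND d.DeletionState = 0;
    END;
    GO

    -- Daily SQL Server job step: hard-delete the flagged rows.
    DELETE FROM dbo.Orders WHERE DeletionState = 1;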


SSIS Package Control Flow is Corrupted

In one of our ongoing projects we had to create a considerable number of SSIS packages to upload and generate various files. Sometimes we encountered a scenario where the Control Flow view of a package got corrupted. All the packages were in source control with TFS 2010. I couldn't find the exact reason for this, but I found a way to correct the issue. The following is a sample image of the corrupted Control Flow of a package.

Because of this issue we couldn't make further amendments to the control flow, as it stopped responding at this stage. At one point it appeared that we would have to create the package from scratch, which is a nightmare for a complex ETL package. But I found a workaround, and having the source code checked in to source control helped me a lot. The following are the steps to resolve the problem.

(more…)