In this article I’m going to take you through the steps for using Amazon RDS with Entity Framework code first. I will not be covering any details related to Entity Framework itself. First you need to install AWS Tools for .NET. Then you need to open the AWS Explorer from the Visual Studio View menu. Remember, you need to have your account set up with Amazon Web Services. From here you can manage your AWS account.
So first I am going to create a database using AWS RDS. In the AWS Explorer I will be selecting Amazon RDS (Relational Database Service).
When we design distributed systems, logging is one of the most important things we need to consider. Why is it so important? As we all know, a distributed system involves many layers and often many third-party integrations such as Facebook, Twitter, payment gateways, etc. All of these components together may make up your enterprise solution.
So the problem arises when it comes to locating issues within your system. Business needs change, new competitors come up all around the world and no one can rest; sometimes you may get requirements that have to be implemented overnight. These requirements need to be implemented and deployed to the target users as quickly as possible, and we can’t afford bugs in these releases, as they can have catastrophic results for the end users.
Debugging would seem to be the ideal and obvious solution for this, but believe me, if you have several different layers in your solution and some third-party integrations, it is not. You’ll have to spend hours working out where to put the breakpoints, and run the debugger several times to identify the issue. It’s a brain-surgery-hard scenario and you can’t spend time on these. And if there is any asynchronous processing or threading involved, identifying an issue becomes more difficult than anything else.
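One common alternative to stepping through every layer is to make the logs themselves traceable. As a hedged illustration (this is not from the original article, and all names here are hypothetical), the sketch below tags every log line with a correlation ID, so a single request can be followed across layers and third-party calls:

```python
import logging
import uuid

class CorrelationFilter(logging.Filter):
    """Stamp every log record with the same correlation ID so one
    request can be traced through all components."""
    def __init__(self, correlation_id: str):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = self.correlation_id
        return True  # never suppress the record, only enrich it

def get_request_logger(name: str) -> logging.Logger:
    """Create a logger whose lines all carry one correlation ID."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s [%(correlation_id)s] %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    logger.addFilter(CorrelationFilter(str(uuid.uuid4())))
    return logger

log = get_request_logger("payment")
log.info("calling payment gateway")  # every line shows the same ID
log.info("gateway responded")
```

When the same ID is written by every layer (and passed along to third-party calls where possible), grepping the logs for that one ID reconstructs the whole request path without a debugger.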
I have been implementing a TFS 2010 automated build for our company’s product, and initially it was implemented in a workgroup environment. There were some custom PowerShell scripts that had to be executed remotely, and it took me weeks to finally get them working.
I was very close to completing the whole automated process, but due to security requirements we had to bring in a Domain Controller and add the servers to the domain. All our servers are in the Amazon cloud, and during this process the computer names changed and the SharePoint and TFS configurations were somewhat corrupted. So instead of battling with troubleshooting this, I decided to remove TFS and reinstall it.
So I did a basic installation as a standalone server and all the settings were fine. But after setting up the build definition and queuing a new build, I faced a whole new problem: no matter what I did, everything failed at the Get Workspace step with the following error.
Attempt by method 'Microsoft.TeamFoundation.Build.Workflow.Activities.TfUndo+TfUndoCore.RunCommand(Microsoft.TeamFoundation.Build.Workflow.Activities.VersionControlScope, Microsoft.TeamFoundation.VersionControl.Client.Workspace, System.String, System.String)' to access method 'Microsoft.TeamFoundation.VersionControl.Client.Workspace.Undo(Microsoft.TeamFoundation.VersionControl.Client.ItemSpec, Boolean, Boolean, System.String)' failed.
I tried many things and everything failed. In the end my good old friend Prabath helped me out by suggesting that I install TFS 2010 Service Pack 1. But I had already done that.
So I was out of options, and then I decided to reinstall the service packs. First I installed Microsoft Team Foundation Server 2010 Service Pack 1 using the following link: TFS 2010 Service Pack 1
After that I applied Cumulative Update package 2 for Visual Studio Team Foundation Server 2010 Service Pack 1 from the following link: Cumulative Update package 2 for Visual Studio Team Foundation Server 2010 Service Pack 1
This approach solved the issue I had been stuck on for four days, and I have to say: never give up. There is always an answer, and always someone to guide you to get it fixed.
During the last week I have been working on a payment gateway integration for the project I’m currently working on. This particular payment gateway accepts web requests as serialized XML. It also sends the response back to us as XML, which we need to deserialize.
The serialization was already implemented in an XML helper class in the solution; I only had to pass the object in to get it serialized. After implementing this I started to test it, and all the tests were failing. I had to spend painfully long hours to understand what exactly was going on.
I found the issue by comparing the system-generated XML request with the sample request sent by the support team, and also with the sample request in the API documentation. The problem was that there were some unnecessary XML namespaces in the root element, and the payment gateway couldn’t process a root element with namespaces.
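The symptom is easy to reproduce in any language. The original helper was a .NET XmlSerializer class; the Python sketch below, with hypothetical element names, just illustrates the difference between the failing request (namespace-qualified root) and the working one (plain root):

```python
import xml.etree.ElementTree as ET

def serialize_payment(amount: str, with_namespace: bool) -> str:
    """Build a toy payment request, optionally qualifying the root
    element with an XML namespace (the case the gateway rejected)."""
    tag = ("{http://example.com/schema}PaymentRequest"
           if with_namespace else "PaymentRequest")
    root = ET.Element(tag)
    ET.SubElement(root, "Amount").text = amount
    return ET.tostring(root, encoding="unicode")

noisy = serialize_payment("10.00", with_namespace=True)
clean = serialize_payment("10.00", with_namespace=False)

print(noisy)  # root carries an xmlns declaration the gateway rejects
print(clean)  # plain <PaymentRequest> root the gateway accepts
```

In .NET, the usual way to get the clean form out of XmlSerializer is to pass an XmlSerializerNamespaces instance containing a single empty entry (ns.Add("", "")) to Serialize, which suppresses the default namespace declarations on the root element.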
This article is about getting a good understanding of how an IoC container works. In real life you don’t have to create IoC containers yourself, as there are many frameworks that can be used, such as Unity, Castle Windsor, Ninject, etc. But as a developer, understanding the mechanics of this is important.
First we’ll have a look at what an IoC container is and how it can resolve dependencies. As an example I will be demonstrating constructor injection.
In basic terms we can think of an IoC container as a framework for implementing dependency injection. This is the fundamental purpose of an IoC container. Apart from that it can manage the lifetime of objects, but here our focus is on the primary feature: doing the dependency injection automatically once we have configured it. In the configuration you set up the dependencies, so if you have an interface and you ask for it, the container will resolve it to the particular concrete type. That concrete type’s own dependencies are in turn resolved further down the class hierarchy. The key concept here is that you set up the dependencies and the container automatically resolves them for you.
So if you ask for the IPerson interface, the IoC container will resolve it to an Employee or Student concrete type. In neither scenario does the consuming code know about the dependencies; only the IoC container knows about them.
Following is a high-level visualisation of the implementation of the IoC container.
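The mechanics described above can also be sketched in a few lines of code. This is a hedged Python illustration (the article’s context is .NET, and the Container, register, and resolve names are hypothetical): you register an abstraction against a concrete type, and resolve walks the constructor annotations and injects instances recursively.

```python
import inspect

class Container:
    """Minimal IoC container: map abstraction -> concrete type, then
    resolve constructor dependencies recursively (constructor injection)."""
    def __init__(self):
        self._registrations = {}

    def register(self, abstraction, concrete):
        self._registrations[abstraction] = concrete

    def resolve(self, abstraction):
        concrete = self._registrations.get(abstraction, abstraction)
        # Inspect the constructor and resolve each annotated parameter.
        sig = inspect.signature(concrete.__init__)
        kwargs = {}
        for name, param in sig.parameters.items():
            if name == "self" or param.annotation is inspect.Parameter.empty:
                continue
            kwargs[name] = self.resolve(param.annotation)
        return concrete(**kwargs)

# Hypothetical types mirroring the IPerson example in the text.
class IPerson: ...
class Employee(IPerson):
    def __init__(self):
        self.role = "employee"
class Payroll:
    def __init__(self, person: IPerson):
        self.person = person  # injected, Payroll never news it up

container = Container()
container.register(IPerson, Employee)
payroll = container.resolve(Payroll)
print(type(payroll.person).__name__)  # Employee
```

Note that Payroll only declares that it needs an IPerson; the decision that IPerson means Employee lives entirely in the container configuration, which is the point the article makes.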
I have been doing some R&D work and came across the question of deleting records from the database. So I have been wondering whether to do a soft delete or a hard delete. What do you think?
From my point of view, it all depends on the end-user requirements. If the transactions are used in pattern analysis by top management, and especially in decision support systems, I think a soft delete would be appropriate. I’m sure you all know what is meant by a soft delete: in the simplest terms, it just uses a state variable to mark a record as deleted. When the user selects a record and confirms the deletion, the record does not get deleted from the database; instead a deletion state is set on the record and on other related data as appropriate.
But a soft delete can also cost you more space, because you are keeping all the historical data. For that you could use a separate archiving mechanism, for example a flat data structure fed by a trigger on the source entity that writes the important transaction data to the flat table. After that you can perform a hard delete, using a service or a daily SQL Server job to clean up the data in your database with the specific deletion status.
As I mentioned earlier, there are other factors affecting this decision, such as business domain requirements, hardware (storage) and software requirements, cost, etc. Based on these conditions you can arrive at a better solution. In my case I decided to go with a soft delete and archive the important data to a flat table. In this solution I added an additional column called “Deletion State” to all tables in the database, and then used a daily SQL Server job to clean up the data whose deletion state equals the appropriate value.
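To make the idea concrete, here is a hedged sketch using Python and SQLite (the original solution used SQL Server; the Orders table, the DeletionState column name, and the purge function are hypothetical stand-ins, and the archiving trigger is omitted). The soft delete flips a flag, and the “daily job” is simulated by a purge function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders ("
    "Id INTEGER PRIMARY KEY, Amount REAL, DeletionState INTEGER DEFAULT 0)"
)
conn.executemany("INSERT INTO Orders (Amount) VALUES (?)",
                 [(10.0,), (20.0,), (30.0,)])

def soft_delete(order_id: int) -> None:
    # Flip the state flag instead of removing the row.
    conn.execute("UPDATE Orders SET DeletionState = 1 WHERE Id = ?",
                 (order_id,))

def purge_deleted() -> int:
    """Simulates the daily cleanup job: hard-delete flagged rows."""
    cur = conn.execute("DELETE FROM Orders WHERE DeletionState = 1")
    return cur.rowcount

soft_delete(2)
active = conn.execute(
    "SELECT COUNT(*) FROM Orders WHERE DeletionState = 0").fetchone()[0]
print(active)           # 2 - normal queries must filter on DeletionState
print(purge_deleted())  # 1 - one row physically removed by the "job"
```

The trade-off is visible even in this toy: every read path now has to filter on the flag, which is the price you pay for keeping the history around until the archive-and-purge step runs.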
I thought of writing this article because I used to be frightened whenever I heard the words asynchronous programming, because it seemed too complicated. But I faced an issue during my recent development work where I had to write an application to listen to Twitter via the GNIP web API. After getting some sample code and going through the API, I managed to get the application running. The problem came when I was testing it. The requirement was to get the tweets sent to our private stream and display them on our side for further processing.
But when I started the application and sent a tweet, nothing was displayed, although the application was connected and listening to the stream. When a second message was sent to the app, it displayed a tweet, but unfortunately not the current message: the first one.
For example, when I tweeted the message @SomeTweetHandle My test message 10001, the message didn’t appear on my screen. When I sent a second message, @SomeTweetHandle My Test message 10002, then @SomeTweetHandle My test message 10001 appeared. So whenever I sent a message, it always returned the previous message I had sent. This did not meet the requirement I had.
After doing some R&D I finally ended up at the .NET asynchronous programming model, which indeed helped me solve my issue. I used the following article as a resource for solving my problem.
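My fix was in .NET, but the shape of the solution is language-independent: read from the stream asynchronously and hand each message off the moment it arrives, instead of letting it sit in a buffer until the next message pushes it out. A hedged sketch in Python’s asyncio, with a queue standing in for the GNIP connection (all names hypothetical):

```python
import asyncio

async def fake_stream(queue: asyncio.Queue) -> None:
    """Stands in for the GNIP connection: pushes tweets as they arrive."""
    for i in (1, 2, 3):
        await asyncio.sleep(0)  # yield control, like real network I/O
        await queue.put(f"My test message 1000{i}")
    await queue.put(None)       # end-of-stream sentinel

async def listen(queue: asyncio.Queue, seen: list) -> None:
    # Process each message as soon as it is read, rather than holding
    # it until the next read completes (the one-message-lag bug).
    while True:
        message = await queue.get()
        if message is None:
            break
        seen.append(message)

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    seen: list = []
    await asyncio.gather(fake_stream(queue), listen(queue, seen))
    return seen

messages = asyncio.run(main())
print(messages[-1])  # the latest message, not the previous one
```

Because the listener awaits the stream instead of blocking on a synchronous read, the latest message is displayed immediately, which is exactly the behaviour the requirement called for.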
Now my application responds to the latest messages that I send to the stream. I also ran the application continuously for long stretches, once for 3 hours and another time for about 5 hours, and the connection stayed up the whole time, which is great. The application responded properly and everything went well.
The next big challenge, set by my product owner, was to check the connectivity status and keep the stream listener alive. On top of that, the exception handling is not straightforward, as the processes execute on different threads. I’ll write more about it in my next article.