Recent Posts

BitCoin Phishing Attack

Published - Tuesday, January 7, 2014

I received this email about an hour ago:

Hello David…
I just did what you advised me to do but the problem remains the same : importing the private key is not working…. drives me nuts! Last time I checked ( ) there was still 30.28020001BTC ! But no way my bitcoinqt client loads the key so I am stuck with those BTCs.
Thanks for offering your help with this. Here is my wallet.dat with the password (googlelink). If you need anything else let me know. If you can load the key please send the BTCs to 1DxFvJ6up9jXAZ9pkUmWVdiMTWvsjgB5Ea
This would help me so much. Thanks David!

This is interesting, because the link in the email did not actually go where the text suggested; it pointed at some other site entirely. It was most likely a link to a trojan or malware of some sort, and the author knew that people likely to click the link would also tend to be people with a bitcoin wallet on their computer.
I checked on that particular address and it didn't have any bitcoins in the wallet.


Using Google Alerts, IFTTT, and BoxCar to protect your information

Published - Saturday, April 27, 2013

Google Alerts is a service from Google that will send you an email or update an RSS feed when its crawler hits new data matching your search term(s). IFTTT is a service that lets you set up actions in response to triggers. BoxCar is a service that lets you send custom notifications to your phone. I've combined all three of these services to help protect my personal information online.

I've spoken about BoxCar in the past (I wrote my own .NET wrapper for their web API, available on NuGet). I've used both Google Alerts and IFTTT for a while. However, it was only recently that I thought of combining them all in order to get almost instantaneous alerts on my phone whenever information I need to know about right away is posted on the internet. I've already had this system tell me when, apparently, Kevin Kinnett was arrested multiple times recently. I look really bad in that picture ;-)

Maybe 5 years ago I set up some Google Alerts, updating RSS feeds, searching for several numbers that I would never ever want to see appearing anywhere on the internet. These include several of my credit card numbers, my social security number, my driver's license number, etc. Luckily these Google Alerts have never hit.

I then set up an IFTTT 'trigger', which in this case is a new post on each RSS feed. IFTTT then sends a message to my BoxCar account, which I have set up on my phone. I get an almost instant notification and a link to the relevant information, and I can take action if need be.


Blown away by how good Azure has become

Published - Saturday, December 8, 2012

I had looked into Microsoft's Azure (then called AppFabric) maybe two years ago, built some hello-world type things on there, and even hosted our wedding website there.

At that time I was not very impressed by what was being offered. You had to program to a very specific API, which might not be a problem if you were creating a new application; however, shoehorning an existing application onto Azure (then) would have proved extremely difficult, and for what benefit? The only real potential benefit of hosting on a 'cloud service' would be pricing, and I was NOT impressed by the pricing at all. It was not at all clear or intuitive what the pricing model was, and I ended up being charged almost $100 one month just to host a static webpage. Ouch… way to burn me, Microsoft, waiting until the end of the month to deliver that kind of surprise… not cool. And from what I had read, there were few scenarios that would actually justify (from a price standpoint) creating solutions for Azure. So I shelved any interest in it until now.

Fast forward two years to this week, when I attended a talk about a solution that was built on top of Azure and afterward I started playing with it. And WOW….

I have had my personal website down for a few weeks now, because my server had met with an unfortunate end. And since I have been procrastinating fixing it, I figured I might give it a try on Azure.

Let me describe the steps that I followed to get this webpage online.

  • Signed up for a new Azure account
  • Created a new ‘Web Site’ compute type resource
  • Saw link to TFS in the cloud, don’t mind if I do
  • Created TFS in the cloud account
  • Created a new project
  • Opened my existing ASP.NET project
  • Committed it to the new TFS in the cloud account
  • A minute later I checked out the deployments tab in the Azure management portal for my website and saw that it was building!
  • In less than a minute the page was live 

Needless to say I jumped out of my skin. Continuous integration and deployment out of the box? Between signing up for new accounts, configuring, committing the code, and building/deploying, all this took maybe 15 minutes. This is an incredible game changer. Not only that, but so far it has all been absolutely free. How do I know? Well, I can always just click the prominent 'view my bill' link at any time to see if I have accidentally accrued any charges! Well done, Microsoft! So far I am very impressed.

Labels: .net, Azure, Cloud, Microsoft

Simple & Easy Notifications Using BoxCar

Published - Saturday, April 14, 2012

Notifio was an awesome tool I made use of to make up for the lack of on-screen notifications for new mail in older versions of iOS. It was also a great application in general that allowed me to send notifications to my phone without having to write my own application specifically for iOS, or send SMS messages or the like.

However, ever since I found out that Notifio's time was short (it seems the service will be retired soon) I have been looking for something that could replace it. BoxCar is definitely it. BoxCar is easy to set up, there are a lot of applications that work with it out of the box, and they have a fairly straightforward API that allows you to interact with it.

First you have to set yourself up a provider, which is easy enough to do on their website. This will give you access to the keys you will need to call the services, which are described in their documentation.

I set up a very simple WCF webservice that accepts a string, calls the BoxCar API, and sends a message to my phone and/or my iPad:

void NotifyKevin(string message);
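
For context, the contract around that operation is only a few attributes; a minimal sketch, with an illustrative interface name (not necessarily what my service actually uses):

// Attributes come from System.ServiceModel; interface name is illustrative.
[ServiceContract]
public interface INotificationService
{
    [OperationContract]
    void NotifyKevin(string message);
}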

The implementation simply uses some libraries that I set up to call BoxCar's JSON web API:

// Build the request against BoxCar's notifications endpoint
var client = new RestClient(_baseUrl + _apiKey + "/notifications");
var request = new RestRequest(Method.POST);

request.AddParameter("notification[message]", message);
request.AddParameter("email", _email);
request.AddParameter("emails", _emails);
request.AddParameter("notification[from_screen_name]", _sentFrom);
request.AddParameter("notification[from_remote_service_id]", _uniqueId);
request.AddParameter("notification[icon_url]", _iconUrl);

// Send the notification
var response = client.Execute(request);

Simple as that. I set up the webservice in a place that is easily accessible to me, and now I can simply call one method and pass in a string if I would like to programmatically notify myself of anything; for example, if the temperature in my house goes above or below certain critical temperatures, as I described in my previous post.
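
Calling the service from any other .NET code is just a ChannelFactory away; a minimal sketch using the illustrative contract above and a made-up endpoint address:

// Requires System.ServiceModel; the address and contract name are illustrative.
var factory = new ChannelFactory<INotificationService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://my-server/Notifications.svc"));

INotificationService proxy = factory.CreateChannel();
proxy.NotifyKevin("The house temperature just crossed the emergency threshold");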

In fact I spent just a few minutes setting up this page which allows you to send a message to me! Check it out.

Labels: BoxCar, JSON, Notification, Notifio, WCF

Filtrete Touchscreen WiFi-Enabled Programmable Thermostat

Published - Friday, April 13, 2012

Last fall I purchased and installed the Filtrete Touchscreen WiFi-Enabled Programmable Thermostat in our home after reading about it here on Scott Hanselman's blog. It does exactly what it sounds like, plus it comes with a pretty cool iPhone app (I am sure there is an Android version as well).

Now that we have it installed, it would be hard to imagine life without it. It seems really lazy (and it is), but the thought of having to actually get up and interact with a device on the wall seems like a lot of work. And the real benefit is in the middle of the night: when you want to change the temperature, you do not need to actually get out of bed to do it.

The ability to change the temperature at home while you are away is really where it shines. Forgot to turn off the air conditioning in the middle of the summer when you were planning on being out all Saturday? Maybe forgot to set it to 'away mode' before you left for vacation? Neither is a problem that takes over 30 seconds to solve with your phone.

The cool thing about the thermostat is that not only does it have all of these features, but it also hosts a RESTful JSON webservice that allows you to query or change the temperature and settings. Here is my C# code that talks to it. I am using the Newtonsoft JSON framework, which I pulled down from the NuGet repository, to handle the parsing of the JSON response.

// Requires the Newtonsoft.Json package (JObject lives in Newtonsoft.Json.Linq)
var client = new WebClient();
client.Headers.Add("User-Agent", "Nobody");

// Query the thermostat's /tstat resource and parse the JSON response
var response = client.DownloadString(new Uri("insert-thermostat-url-here/tstat"));
dynamic resource = JObject.Parse(response);

RadioThermostatStatus status = new RadioThermostatStatus()
{
    CurrentTemp = resource.temp,
    ThermostatOperationMode = resource.tmode,
    FanOperationMode = resource.fmode,
    TargetTemp_Cool = resource.t_cool,
    TargetTemp_Heat = resource.t_heat,
    HVACOperationState = resource.tstate
};


Here, in a few lines of code, I am able to determine the current conditions inside the house. I have this code running every few minutes inside a Windows service on my server.
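
The settings can be changed through the same endpoint as well. A minimal sketch, assuming the thermostat accepts a JSON POST of the same field names it reports back (the tmode/t_cool values below are assumptions; check the device documentation):

// POST a new target back to the /tstat resource (assumed field names and values)
var writer = new WebClient();
writer.Headers.Add("User-Agent", "Nobody");
writer.Headers.Add("Content-Type", "application/json");

var result = writer.UploadString(
    new Uri("insert-thermostat-url-here/tstat"), "POST", "{\"tmode\":2,\"t_cool\":75}");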

What in the world could you do with this information? For one thing, I can send alerts to myself if things get out of control:

// If the current temp is outside the emergency bounds, send a notification
if (status.CurrentTemp >= MAXINSIDETEMP || status.CurrentTemp <= MININSIDETEMP)
{
    string message = "EMERGENCY: HOUSE INSIDE TEMP IS AT: " + status.CurrentTemp;
    // ...pass the message to the WCF notification service that relays it to BoxCar (more on that below)
}

Here I have min and max temperatures set so that if the inside temperature is outside of a certain threshold (to the point that I should be looking into it), it sends me a notification via BoxCar straight to my iPhone. It does this by calling a simple WCF webservice I set up on my machine that talks to the BoxCar service (more on that later).

Is there more you can do with this webservice? Yes there is, and I will be showing a few cool uses I have come up with, so stay tuned.

Labels: BoxCar, JSON, NuGet, Thermostat, WCF, webservices, Windows-Service

Calling external C# assemblies with overloaded methods using the params keyword in BizTalk 2010

Published - Friday, January 13, 2012

Not long ago I created the following method and called it from a BizTalk mapping functoid.

public string TrimAll(params string[] addressInfo)



I used params because I needed to pass in eight string arguments on which I was performing more or less the same operations. I was curious to see whether BizTalk would resolve the method properly. As you'd expect, it did not, and gave me an exception about the number of arguments needed when I tried to test the map.

No problem! I just overloaded the method in the class with a version that explicitly took eight string parameters. I would not need to change the BizTalk map, as it would resolve to this method, which would in turn call the params method.

public string TrimAll(string address1, string address2,
    string address3, string address4, string address5,
    string address6, string address7, string address8)
{
    // pass an array explicitly so this forwards to the params overload
    return TrimAll(new[] { address1, address2,
        address3, address4, address5,
        address6, address7, address8 });
}


But this time, when I tried to test run the map, Visual Studio crashed ungracefully!! So instead of overloading I changed the name of the method…

private string TrimIt(params string[] addressInfo)

…and I called it like this…

return TrimIt(
    address1, address2,
    address3, address4, address5,
    address6, address7, address8);

This time it ran without error. For kicks I changed it back to an overloaded method and it crashed consistently. The takeaway here is that you cannot expose any method that uses the params keyword to the BizTalk mapper.

Labels: BizTalk, C#, Mapping

Updated BizTalk FTP Port Change Project

Published - Thursday, December 22, 2011

I have updated the BizTalk ChangeFTPPortProperties project: the user interface has been refreshed and the project has been upgraded to Visual Studio 2010. The tool will now work on BizTalk 2009 and BizTalk 2010 installations. There is a known limitation that this tool will NOT work on clustered or grouped installs. However, it will run in 64-bit environments. You can download the source and the executable here.

Labels: BizTalk, C#, FTP

Continuous Integration/Deployment: My Own Example

Published - Friday, May 6, 2011

Previously, I talked about why continuous integration (CI) is the best approach to software development in terms of improving software quality and saving time and effort for developers on a team. Now I want to explain how I maintain this page, among other projects, using continuous integration and continuous deployment, and why this is beneficial even for one-person projects. I should note that I would not exactly recommend using what I am about to describe in a business environment; I'll come back to that later (however, if you don't have any CI environment, this would be a huge improvement).

First, I use the following software:
1) IIS – Microsoft’s Web Server.
2) Subversion (svn) – Source Control
3) TeamCity - an application specifically built for CI and build management.
4) Subversion plug-in for Visual Studio.

I have my own Windows server set up, which runs the svn server, the TeamCity server, and IIS. I have various TeamCity builds monitoring different parts of the source tree. Whenever I make a change and commit it, the build that monitors that part of the tree sees the change and gets kicked off. I've configured the build with the following steps:
1) Pulls down a new copy of the source into a temporary build directory
2) Builds the entire solution
3) If, and only if, the solution compiles then a series of unit tests are run against the compiled project.
4) If, and only if, all of the unit tests pass, an MSBuild script runs which does the following (a rough C# illustration of these steps follows the list):
a. Stops IIS
b. Deploys the artifacts specifically needed to run the application (aspx pages, dlls, etc.) to the directory that IIS is serving the site from.
c. Restarts IIS
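
To make step 4 concrete, here is a minimal sketch of the stop/copy/start sequence expressed in C# for illustration; the actual build uses an MSBuild script, and the service name and paths below are assumptions:

// Uses System.ServiceProcess (ServiceController) and System.IO; service name and paths are illustrative.
var iis = new ServiceController("W3SVC");

// a. Stop IIS
iis.Stop();
iis.WaitForStatus(ServiceControllerStatus.Stopped);

// b. Copy the build output over the live site directory
foreach (var file in Directory.GetFiles(@"C:\Builds\Site\Latest", "*", SearchOption.AllDirectories))
{
    var target = file.Replace(@"C:\Builds\Site\Latest", @"C:\inetpub\wwwroot\MySite");
    Directory.CreateDirectory(Path.GetDirectoryName(target));
    File.Copy(file, target, true);
}

// c. Restart IIS
iis.Start();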

If any of the steps fail, the TeamCity tray notification application gives me a dashboard view of the status of the builds (see the picture). I can easily see if any changes I have made have caused any deployment issues, and if not, it informs me when the changes have been made to the live environment.

You can really see the benefit if you consider another project I have, which is a series of Windows services that monitor various things like RSS feeds and host their own WCF services. The steps in this build are as follows:

1) Pulls down a new copy of the source into a temporary build directory
2) Builds the entire solution
3) If, and only if, the solution compiles then a series of unit tests are run against the compiled project.
4) If, and only if, all of the unit tests pass, an MSBuild script runs which does the following:
a. Stops the correct Windows service
b. Uninstalls the Windows service
c. Deploys the new service
d. Starts the new service

For either the webpage or the Windows services, if I wanted to deploy by hand it would take a fair bit of time and trouble each time. This automates the entire process, freeing me to continue working while the new versions are being pushed out for me. If I make any mistakes, and I have done my due diligence writing tests that cover those cases, then I will get a helpful error message in the tray notification icon soon after I commit (remember, it is easier to track down recent mistakes) and before the change goes out to the live system (fixing bugs before they make it to production).

I think it took me somewhere in the neighborhood of 3 to 4 hours to set up and maintain this process. But how much time did this save me? You can see from the picture that I have done something like 96 deployments of this page alone. It would have taken many times that initial investment to have done all those deployments by hand. What I've been able to do is make small incremental changes whenever I want and deploy them without any pain at all.

Here are some other benefits:
1) I can easily track the changes I have made through the project and revert them if necessary and have the reverted site live within minutes.
2) If I wanted to work on my projects on another machine, I could just pull down the latest version of the project and start working almost immediately.
3) If I wanted someone else to start working on my project tomorrow, I'd simply have to give him access to the svn server, and he could pull down the code and start working immediately. Any time he made changes, I'd have a higher level of assurance that his code would work and not take down the live environment. Plus, he would have the ability to deploy his changes without needing to know how to log into the box and do it himself.

If you were doing this in a business environment then you should definitely maintain a separate build environment from your production environment, plus you should have separate development and quality assurance environments as well.

Here is one potential real-world scenario:
1) Your continuous integration environment builds your project.
2) Runs unit tests against the compiled project.
3) Creates deployment artifacts (such as an MSI).
4) Uses those artifacts to deploy the entire project to a separate test environment.
5) Runs integration tests against that test environment (such as database tests or automated UI tests).
6) Later, if those tests are successful, another build target takes the deployment artifacts of the most recent successful build and test (or one that you specify) and deploys it to the Quality Assurance environment.
7) QA engineers then run manual functional and regression tests against it.
8) If these tests are successful then the same deployment artifacts can be used to make a deployment to production.

If you have a reasonable facsimile of production in your dev and QA environments, then this can help ensure a successful deployment with less potential for bugs being introduced.

I’ve heard of people having a process similar to the above example but instead using the API provided by VMware to create actual running VMs with the product installed on them as a build artifact. It is the actual VM itself that is tested, passed to QA, and deployed to production. This way you are guaranteed that your environments are identical, and what you deploy to production is the same as what you have tested.

Labels: Continuous-Integration, svn, TeamCity, VMware

BitCoin

Published - Saturday, April 30, 2011

BitCoin is a peer-to-peer crypto currency. Peer-to-peer means that no central authority issues new money or tracks transactions. These tasks are managed collectively by the network. BitCoin has garnered quite a bit of attention of late. What is it? Why use it? I’ll try to answer this below.

Why BitCoin? With all money there is the problem of double spending. With electronic representations of money in the modern age, it is easy for a financial institution to say that it has your money and then loan it to someone else, then loan it to someone else, and so on. The only solution to that problem is heavy government regulation of financial institutions, and we have seen how well that works, especially recently (think the Federal Reserve). BitCoin has solved this problem.

How does it work? There is a BitCoin application that you install on your computer. Each computer running this application operates as a peer node in the BitCoin network. All the nodes in the network collectively keep account of all of the transactions that take place in the system. The individuals making the transactions are known only by a long number, but the transactions themselves are 'confirmed' by other clients on the network. New BitCoins are slowly entered into the system in the following way: each client has the option of mining. When a client is mining, it makes attempts to solve the next block in a chain; this is really just a long-running mathematical problem. This chain keeps track of the history of all the BitCoin transactions. When a node solves this problem it is rewarded with a set amount of BitCoins, and over time this reward is gradually decreased. It is extremely rare to get rewarded. Given the current difficulty, a standard desktop would have to mine for something like a year before it might randomly solve one of these problems.
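
To make the "mathematical problem" a little more concrete, here is a toy sketch of the idea in C#: find a nonce so that the hash of the block data plus the nonce starts with enough zero bits. This is only an illustration of the concept; real BitCoin mining uses double SHA-256 over a specific block header format and a vastly higher difficulty.

// Toy proof-of-work: search for a nonce whose hash meets a difficulty target.
// Illustrative only; not the actual BitCoin algorithm or parameters.
using System;
using System.Security.Cryptography;
using System.Text;

class ToyMiner
{
    static void Main()
    {
        string blockData = "previous-block-hash + pending transactions";
        int difficulty = 20;    // required number of leading zero bits

        using (var sha = SHA256.Create())
        {
            for (long nonce = 0; ; nonce++)
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(blockData + nonce));
                if (CountLeadingZeroBits(hash) >= difficulty)
                {
                    Console.WriteLine("Solved the block with nonce " + nonce);
                    break;
                }
            }
        }
    }

    // Counts how many bits at the start of the hash are zero.
    static int CountLeadingZeroBits(byte[] hash)
    {
        int bits = 0;
        foreach (byte b in hash)
        {
            if (b == 0) { bits += 8; continue; }
            for (int mask = 0x80; (b & mask) == 0; mask >>= 1) bits++;
            break;
        }
        return bits;
    }
}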

What are the advantages of this system? With our monetary system, the only value that the dollar or any other fiat currency has is the faith that you have in the issuing institution. BitCoin is completely transparent and open source, so you don't have to trust it; you can look for yourself at the code and the transactions going on. I KNOW that there are only 5 million BitCoins currently in the world, and I know that there will only be 21 million or so total (in about 100 years), because of the mathematics behind it. Right now, you can trade USD, AUD, RUB, or EUR back and forth for BitCoins. There are people accepting BitCoins in exchange for goods and services.

What are the disadvantages of this system? I worry about BitCoin's recent run-up; it has gone from around 30 cents per BitCoin in November of 2010 to almost $4 per BitCoin as I write this at the end of April. On the surface that sounds great, but a huge upswing in value is very likely to be followed by a huge downswing at some point soon. Ideally you would want to see a lot more stability around something like this before more widespread adoption. Another problem is that governments may try to make it illegal. Money and monetary policy are the central means by which governments exert power, and BitCoin would take away much of that power. You could make the argument that by its nature it would be hard to get rid of, and you would be correct. But the vast majority of people would probably not use it if it were to become illegal.

What is most likely going to happen? I think BitCoins are going to be part of our future. I doubt that I will ever be able to use them to fill up my tank at a gas station, or tip a server at a restaurant. But the real promise is the Internet. Newspapers are going out of business because no one will pay for access to a news website when there is so much free news. But consider if there were a way to pay 2 or 3 cents to view a story from a reputable news source. I might be willing to pay that, but how do I get the money to them? I'd have to enter my credit card information, and I am not going to do that. Even if I did, the credit card transaction fee would make it a losing scenario for the seller. But what if there were an easy way, built into browsers, to send cents or even fractions of a cent anywhere without transaction fees? BitCoin has that potential. My hope is that it will become the de facto recognized currency for certain things on the Internet, and solve the micropayment problem once and for all.

And by the way, if you like my website or have found it useful, why not consider sending me some BitCoins at this address: 173mvmf9Cw2AUKCKf35yG7dPNXN8oySZ3X

Labels: Alternative Currency, BitCoin, Cryptocurrency, Finance

Continuous Integration or To Err is Human

Published - Friday, April 8, 2011

I am constantly astounded at the number of shops out there that do not have source control, let alone even the most rudimentary build server or continuous integration environment. Or worse, there are developers that scoff at the idea of being 'held back' by things like this. More commonly, management considers this a waste of time.

My Personal Experience

Fresh out of college I worked for a company that had a full-fledged continuous integration environment. In other words, whenever anyone checked in any code or artifacts for any project, a process somewhere monitoring source control would kick off a build. That project, plus any dependent projects, would get checked out and built. If and only if it built without any errors, that build would then have a series of unit tests run against it. If any of the unit tests did not pass, the build was considered to be broken.

When I first joined the company, I was part of the team that took this one step further. Each night we would automatically take the latest successful build and install it in a clean environment, then load a bunch of test data into it, and run a series of integration tests against it. These tests were mostly automated UI tests, and generally verified that use cases or user scenarios could be accomplished. There was also a series of tests performed against the database.

The results of these tests were automatically compiled together and displayed on a SharePoint site, broken down into various graphs and whatnot, and visible for all the development teams, management, and product/project management to see.

The advantages of a system like this are pretty obvious.

  • Developers would know the minute they broke something, or forgot to check something in. As a result of the fast feedback loop, the developer at fault would generally know exactly what he did wrong because he had just been working on it. Plus, no one else ever wasted time trying to figure out why the code they had just pulled down out of source control would not compile.

  • Not wanting to break the build, developers would run all the unit tests prior to checking in, catching many bugs before they even reached source control.

  • Writing unit tests along with your code is very beneficial for pinning down what exactly it is you are trying to get your code to do. So if I made changes, then ran all the unit tests and found I broke one, reading the unit test that broke would quickly reveal the intent of the code that was broken and tell me when it was fixed.

  • Best of all, the progress on the project was completely transparent. Any kind of problem was obvious from the beginning and could be addressed early and dealt with efficiently. This might mean having someone else who has experience with the problem take a look at it, or the project scope/time might need to be changed.

As this was my first job out of college, I was extremely naïve and thought that this was the norm for software development shops. Now that I have been in a dozen or more software development environments and found none having anything like this, most not even having a build server, and some not even having source control, I can say without a doubt that continuous integration buys you so much in terms of time, money, and quality.


Let's go over each objection to using it.

  • It takes more time to write, develop, and maintain unit tests. We have limited time on this project so we can worry about them later.

    What no one considers is that when I write new code (and most people I know do the same), I write some sort of test harness anyway (an executable, a web page, a win32 app, something), so I can call the code I am writing over and over again while I am writing it. In this way I can have some level of assurance that the interface I am writing is correct.

    So, while it is true that it does take time, in many ways it is just a standardization and capturing of the code that is already being written and thrown away anyway.

  • It will take time to set up and maintain the build server and the machines will cost additional resources we do not have.

    Assuming that you are using source control anyway, it is not complicated to set up a build server. I have used CruiseControl.NET and another product called TeamCity. CruiseControl.NET is free; TeamCity is a commercial product but is free for up to a certain number of developers. I've set both of these up to do a simple build on extremely underpowered machines and they run fine. Depending on the complexity of your build, it should take a competent person less than a day to get a single build initially up and running. With TeamCity's web interface, if you are using svn or TFS you can just point to your source control machine and give it credentials; then it is literally just a drop-down menu for a Visual Studio sln build, and that is about as complicated as it gets.

    If your build is very complex you will most likely need to write a build script. If you are on the Microsoft stack then you will most likely want to use MSBuild. Again, this is probably a good exercise to do anyway. Not only will it save tons of time every time a developer comes onto the project, but it will bring to light a bunch of issues and dependencies that your project has and that you need to know about and document anyway… right?

    Once, there was a bug that one of my colleagues was working on. After about half a day he had made little progress and asked me to help. I looked at it for another 3 or 4 hours and made some progress, but the bug was not resolved. Finally another developer looked at it for another few hours before fixing it. So one bug took three developers a total of 8 or 9 hours to fix. What does one hour of developer time cost a company, maybe $100? So 9 hours times $100 is $900 to fix that one bug. This is an extreme example, but each bug can effectively cost an organization several hundred dollars to fix, and that is only development time. Do you have a QA and deployment process as well? That will cost too. Consider that it might cost you 2 or 3 thousand dollars to set up a continuous integration environment in terms of time and hardware, and developers might spend an extra 10% of their time writing and maintaining tests. If even 25% of bugs are caught up front, then that organization has already saved a ton of money for very little overhead.

    Even consider the savings of simply setting up a simple build server. At one company I was at, I probably wasted 10 to 15 hours a month dealing with, and tracking down, why the source I had just pulled down did not compile. Again, my salary + benefits + the building I was working in + HR costs + the opportunity cost of not working on new features + etc. = $1,000 or $1,500 in wasted time. Having even one developer spend one day on this could have saved so much wasted time, energy, and money.

  • This is complicated and will confuse people.

    Okay, after setting up a continuous integration environment in my own spare time to show a manager what kind of benefits it could bring, this was his objection. My response to this is: what kind of people have you hired that they cannot deal with something like this? Developers are generally pretty smart; I am sure they can handle it. This is not a new technology; it has been around for years and its benefits have been proven. In his defense, he let me go ahead and set it up, but I could only build once a night. Of course he was never on board with any organizational buy-in or with integrating it into the development process (which, even if he had been, was beyond his control).

Here are some objections that a developer might have.

  • I don’t want to be blamed or ridiculed if I break the build. We have a complex product and this is easy to do.

    If everyone in your organization is an adult (I hope most organizations can say this is mostly true), they should be able to handle it. The effect over time of everyone on the team being aware of this should cause more focus to be placed on fixing the underlying problems that are causing broken builds. This will in turn reduce the amount of time spent wading through the complexity and allow individual developers to focus on the problems at hand.

    Software development is complex by nature. It is far better to realize there is a problem now when you break the build or a test, than six months from now when it is deployed into a customer’s environment and is causing major chaos. Everyone makes mistakes: this is human. This process is simply a recognition of that fact and an attempt to control for it.

  • Our product is an enterprise app and I couldn't possibly deploy and test this thing realistically.

    Study up on testing, in particular mocking. You can write your code in such a way, using something like dependency injection and the mocking pattern, that you create a mock or a fake implementation of an interface, so that when you run your tests they are testing your code and your code alone (a short sketch of this pattern follows this list).

    I've seen continuous integration environments that use the VMware API to spin up multiple new VMs, each running various products that talk to each other, so that you can run actual integration testing automatically in your environment.

    Check out Microsoft’s Team Foundation Lab Management. It basically has solved this problem for you.
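
As a small illustration of the dependency injection and mocking idea mentioned above, here is a minimal sketch; the names are hypothetical, and any isolation framework (or a hand-rolled fake like this one) works the same way:

// The code under test depends on an interface rather than the real device or service.
public interface IThermostatClient
{
    double GetCurrentTemp();
}

public class TempMonitor
{
    private readonly IThermostatClient _client;

    public TempMonitor(IThermostatClient client)   // dependency injected via the constructor
    {
        _client = client;
    }

    public bool IsOutOfBounds(double min, double max)
    {
        double temp = _client.GetCurrentTemp();
        return temp <= min || temp >= max;
    }
}

// In a unit test, a fake stands in for the real thermostat, so the test exercises
// TempMonitor's logic and nothing else -- no network, no hardware, no other system.
public class FakeThermostatClient : IThermostatClient
{
    public double TempToReturn { get; set; }
    public double GetCurrentTemp() { return TempToReturn; }
}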

Benefits in Terms of Quality

The software industry, in my opinion, is facing a crisis of quality. No one trusts their software, and with good reason. Most software that is built, even very expensive enterprise-level software, is riddled with bugs and security problems.

Management could make the argument, and many times does, that it isn't worth the extra cost to implement this because they never intended to fix those bugs anyway. This could make sense from a business perspective. If one bug is affecting one customer occasionally, then it may not be worth the $1,000 it might take to fix it, go through QA, and deploy.

There are two points I would make against this.

  • What is your reputation worth?

    What is this bug really costing you? $1,000? Over time, if your customer, be it an outside company or another group within your organization, loses confidence in the quality of your software, they will advise others not to buy it, move on to a competitor, or complain loudly enough to management that you are replaced.

  • You don't know what you don't know.

    If you had a clear picture of the bugs that you had, you could make a better-informed business decision on whether or not to go ahead and fix them. This leads to my next point: technical debt.

Technical Debt

I highly recommend reading Steve McConnell's article on technical debt.

Essentially he argues that it is a legitimate business decision to do bad software engineering now to take advantage of a business opportunity or market conditions, knowing that it will cost you later to maintain and enhance those systems. However, the huge caveat is that management is very rarely aware of the technical debt that they are accruing, and therefore they cannot make informed business decisions.

I argue that continuous integration's relatively small up-front and maintenance cost buys you a huge advantage in terms of productivity, plus it makes you aware of the technical debt; whether you choose to address it or not, at least you can make better-informed decisions.

Did I mention you should read the article? It is absolutely brilliant, I cannot recommend it enough, I cannot do it justice here. Read it!


Continuous integration, like many things, has a diminishing rate of return. Setting up a build server has a huge and immediate benefit. Setting up unit tests to run automatically will give a huge benefit, but the effect may not be felt immediately. As you approach 100% code coverage, the quality of your application might be stellar, but developers might be spending an unacceptable amount of time on the overhead, depending on the type of your application.

Once you have a continuous integration environment set up, you can go down the path of really interesting things like test-driven development, behavior-driven development, or dependency injection with mocking like I mentioned earlier... if you would like to. These things will really push your limits in terms of technical prowess and truly understanding the product that you are working with. But without it, the possibility doesn't exist.

Labels: .net, Build, Continuous-Integration, process, team-city

Converging Data Usage

Published - Friday, April 8, 2011

Check out my data usage for my home internet (in green) and the data usage on my phone (in blue). I am using almost half the amount of data on my iPhone compared to what I am using at home.

Keep in mind that at home we do not have cable, but we probably watch half an hour or more of TV a night, primarily through Hulu and Netflix. Also, some of that data at home is being used by my phone when I am there.

What the heck am I doing to rack up so much data on my phone, you ask? I have no idea. I stream podcasts, and in the last few days, baseball games to my phone on 3G. I am sure that having several push email accounts might cause a little bump in data. But consider: if I listened to, say, half an hour of streaming audio per business day (an overestimation), at something like 40 MB an hour, that is only 20 MB * 5 days a week * 4 weeks a month = 400 MB. Where is the other GB coming from? I want AT&T to give me a breakdown of ports and domains.

Labels: Data, Internet, Wireless

Filtering for Jobs

Published - Sunday, March 27, 2011

This is a method I have used when searching for jobs in previous years. I have also found it a good technique for keeping an eye on the local job market when not looking for a job, which has plenty of advantages as well.

Essentially, the technique is to filter various job posting sites that deliver content via RSS and then subscribe to those filtered RSS feeds. In this way you can have an ongoing view of the job information that you care about.

If you have never used Yahoo Pipes, take a minute to check it out now. I find it pretty easy to use. Pipes is designed to be a model-driven programming interface, meaning you can create and design pipes through a GUI and never actually write any 'code'. Pipes are actually kind of like an extremely simple, cloud-enabled version of BizTalk or Windows Workflow, which is where I spend most of my time these days. The nice thing about Pipes is that you can search and see pipes that other users have created and use them as examples or templates. Since 9 out of 10 things you want to do will probably have been created by someone else already, you can just use theirs if you would like.

Next, spend some time researching various companies and organizations located in your city, particularly ones that are in your industry, or ones that you would be interested in working for. Then do the same for nationwide companies and organizations. You will find that most of these companies have a page where they post jobs, and that for many of them you can subscribe to an RSS feed. For me, there are several very industry-specific blogs and websites that I frequent that also have job posting areas, and I find that jobs posted there are of incredible quality as well. If you have any sites like this, jot them down too.

If the jobs page does not have an RSS feed by default, you can use a service called page2rss, which I have found works okay. Essentially it will publish changes to a page you point it at as an RSS feed.

For each one of the feeds, create a Yahoo pipe that filters the incoming feed. For instance, if you have a feed that posts jobs nationwide, you will want a filter that only lets through items containing the name of your city or the names of several cities in your metro area. Whether the company is national or local, you will probably also want to filter on your job title.
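
If you would rather script the filtering than build it in the Pipes GUI, the same idea fits in a few lines of C#; a rough sketch using the .NET syndication API, with a placeholder feed URL and illustrative search terms:

// Keep only feed items that mention one of the target cities and the job title.
// The feed URL, cities, and title below are placeholders for illustration.
using System;
using System.Linq;
using System.ServiceModel.Syndication;   // requires a reference to System.ServiceModel
using System.Xml;

class JobFeedFilter
{
    static void Main()
    {
        var cities = new[] { "Atlanta", "Duluth" };
        var jobTitle = "BizTalk";

        using (var reader = XmlReader.Create("http://example.com/jobs.rss"))
        {
            var feed = SyndicationFeed.Load(reader);

            var matches = feed.Items.Where(item =>
            {
                var text = item.Title.Text + " " + (item.Summary != null ? item.Summary.Text : "");
                return cities.Any(city => text.Contains(city)) && text.Contains(jobTitle);
            });

            foreach (var item in matches)
                Console.WriteLine(item.Title.Text);
        }
    }
}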

Next step: subscribe to the resulting Yahoo Pipes RSS feeds in your favorite RSS reader. If you don't have one already, I'd suggest Google Reader. Now, as time goes by, you will receive the various postings.

I am not looking for a job currently, but I have an excellent finger on the pulse of the software industry in the Atlanta area. Sometimes recruiters will contact me and make a vague reference to a job they are trying to fill; I can occasionally spot the job they are hiring for, because the company will, in many cases, also publish it on their own site. Or, if you are looking for a job, you can have a live feed of current jobs in your area that you care about, and apply as soon as they are posted.

Labels: Job, Pipes

T4 Transformations for ASP.NET paths

Published - Wednesday, February 16, 2011

T4 is an engine built into Visual Studio for code generation. It has been around since VS 2005 and I have been playing with it for a while. In the past I used it to generate unit tests, just after compile time, that used templates and reflection to automatically test a particularly nasty DB persistence layer for one of our company's products. It ended up uncovering about 4000 inconsistencies and potential bugs.

More recently I have created a template that automatically creates the paths for the pages in your project. I hate to hard-code values, but sometimes it seems you cannot get around hard-coding references. This eliminates that dependency.

It works by iterating over the file structure of your project and finding classes that inherit from Page. It then generates a partial class for each page with a static string containing the full path to that page. It also generates a static class for each type of image resource with a static string for its path as well. In this way you can reference all of the other pages in your project like this: <%=MyPage.Path %>, or your resources: <%=Images.JPG.MyPicture%>. If you ever decide to move the pages or resources around, for whatever reason, you can freely do so without worrying about breaking hard-coded references.
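
To give a feel for the result, here is a rough sketch of the kind of code the template generates; the page and image names are illustrative, not the template's actual output:

// Generated partial class for a page at ~/Admin/MyPage.aspx (illustrative)
public partial class MyPage
{
    public static readonly string Path = "~/Admin/MyPage.aspx";
}

// Generated resource paths, grouped by image type (illustrative)
public static class Images
{
    public static class JPG
    {
        public static readonly string MyPicture = "~/Images/JPG/MyPicture.jpg";
    }
}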

Here is the link to the file that does it.

Labels: C#, code-generation, T4

Notifio - Free and easy mail notifications for your iPhone

Published - Saturday, February 12, 2011

If you are moving from a BlackBerry to the iPhone, or are just a new iPhone user, you may be disappointed to learn that the only real mail notification on the iPhone is the 'ding' that you hear whenever a message comes in. Whereas on the BlackBerry you get a small flashing light on the top of your phone, and on some models you can customize the color based on the person that sent the email.

You would expect the iPhone to at least show a popup notification, so you don't have to unlock your phone and check the unread mail count every time you want to know whether there is a new email. But that is exactly what you have to do, unless you would like to pay for a third-party app that sends you a push notification. In addition, all of the apps I have looked into do not support Exchange email, although there may be some that I am unaware of that do.

I have found a free alternative.

Download a free app called Notifio (iTunes link). Then sign up for their free service here. Notifio is basically a platform that sends you notifications via several services you can sign up for through their site. You can get things like Twitter notifications, etc. However, the part that I am interested in is a feature that lets you send an email to an address that they give you. Whenever an email is received at that address, you get a push notification on your phone with the subject of the email.

So all you have to do at this point is forward all of the email accounts that you want notifications for to that address and wait for a notification on your phone. The time this takes is generally between instantaneous and 2 seconds.

There is one caveat: if your emails contain sensitive or confidential information, you are forwarding them to a third party that you do not have control over.

Labels: Email, iPhone, Notification, Notifio, Push

BizTalk - Importing Bindings with Many Passwords

Published - Sunday, February 6, 2011

If you ever spend much time exporting and importing apps in BizTalk, you will quickly learn that doing so does not copy the passwords of your locations, FTP and otherwise, along with them.

My current client's environment uses copious amounts of FTP locations, too many probably, though that is a different story. However, I find myself spending too much time copying the passwords to the new environment every time I import an app.

So I whipped up a small app this weekend that takes command line parameters for the application you want to change, the FTP server you are targeting, and the password to set. This was basically an excuse for me to play around with the BizTalk object model as well as experiment with WPF.

ChangeFTPPortProperties.exe -application=My.Companies.App -pass=MyNewPass

Within seconds, all of your send and receive ports that point at that server are configured with the new password.
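
Under the hood this is only a handful of calls into the BizTalk object model (ExplorerOM). Here is a rough sketch of the approach, not the actual tool's source; the connection string and application name are illustrative, and a hypothetical SetFtpPassword helper stands in for rewriting the password inside the port's TransportTypeData XML:

// Uses Microsoft.BizTalk.ExplorerOM; illustrative sketch only.
var catalog = new BtsCatalogExplorer();
catalog.ConnectionString = "Server=.;Database=BizTalkMgmtDb;Integrated Security=SSPI;";

Application app = catalog.Applications["My.Companies.App"];
foreach (SendPort port in app.SendPorts)
{
    if (port.PrimaryTransport.TransportType.Name != "FTP")
        continue;

    // The FTP password lives inside the adapter's TransportTypeData XML blob;
    // SetFtpPassword is a hypothetical helper that rewrites it with the new value.
    port.PrimaryTransport.TransportTypeData =
        SetFtpPassword(port.PrimaryTransport.TransportTypeData, "MyNewPass");
}

catalog.SaveChanges();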

You can download the exe, and the code below.


Update: I have updated the project. The new code is here.

Labels: BizTalk, C#, FTP, Passwords


I am Kevin Kinnett, a software developer in Atlanta, GA. More about me.


Innovative Architects

I am a Senior Software Engineer/Consultant working for Innovative Architects.

Innovative Architects is a Gold Certified Microsoft Consulting company located in Duluth, GA, right outside of Atlanta.

I am mainly engaged with BizTalk development projects, although I also specialize in solving business needs using ASP.NET and SharePoint, among others.


In my previous position I was a Software Engineer working for ADAM Inc.

ADAM specializes in software for employers, benefits brokers, and healthcare organizations, providing customizable health solutions that help hospitals, managed care organizations, and consumer web sites become an integral part of the online consumer healthcare experience.

I was responsible for several mission-critical applications, including being on the two-person team charged with the final stages of the rewrite of the company's flagship product.


In a previous position I was a Software Engineer working for Nexidia.

Nexidia has applied years of research and development to deliver a comprehensive range of solutions for audio and video search. Nexidia works with some of the world’s largest contact centers, rich media companies, government agencies, and legal firms to help them realize the amazing possibilities now discoverable in audio and video content.

I was part of several .NET application development projects, including standalone tools, applications, and frameworks used to perform system testing, automated functional testing, load testing, and automated UI testing, as well as applications for continuous integration and automated deployment of complex, configurable data sets into installed products.


While attending Kennesaw State University I majored in Computer Science.

While in the STARS program, I developed a software application to keep track of the audio/visual equipment the department loans out, which is still in use; it is written in Java Swing and uses a MySQL database for persistence.
