DBPro for unit testing has been a PITA for the following reasons:
1. Rolling back the initial test state is difficult
There is no easy way to begin a transaction in the test setup and roll it back in the teardown without modifying the generated code to use TransactionScope. The setup/teardown methods cannot be used for this because they run on a separate connection from the one the test executes on. That separation is important for security testing but not all that applicable to the work we perform. I don't think it's reasonable for the teardown methods to have to reverse all the work of the setup methods; a database rollback is the appropriate approach. Therefore, we have reverted to using nothing but the body of the test (see the sketch after this list).
2. Inability to share T-SQL code to set up common state across tests
This forces us either to write T-SQL setup code that is duplicated across multiple tests or to put all the testing for a given state into a single test. Such a test still targets a single unit (stored proc, trigger, view, etc.) but tests multiple conditions, and if the first condition fails, none of the subsequent ones execute. Also, relying on state modified by a previous test is very problematic and creates interdependencies between tests.
3. The test designer for specifying test conditions is difficult to use
The point-and-click interface for editing test conditions becomes laborious with large numbers of result sets and columns. Managing inline T-SQL with RAISERROR statements is easier and allows a script to be reviewed without the designer surface.
4. The T-SQL test editing designer is not friendly for debugging your tests
Most people find it easier to write and debug unit tests in SQL Server Management Studio. To use SSMS, you must copy and paste your code in and out of the DBPro test designer, a process that is error prone and laborious.
5. DBPro is slow at executing tests
6. Data Generation Plans are not useful
I may be wrong, but I don't find the Data Generation Plan facility very useful for most unit tests. I can see how it would be useful for performance or load testing, but those types of tests are much higher level.
7. Integrating the DBPro tests into the build process is not easy
This has not been attempted yet, but on the surface it does not appear to be easy.
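For point 1, the workaround we settled on was to skip the generated setup/teardown entirely and wrap the test body in a transaction that never commits. A minimal sketch (the class, test name, and steps are hypothetical):

using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TriggerTests
{
    [TestMethod]
    public void Trigger_Audits_Insert()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // arrange: run T-SQL to create the initial test state
            // act: exercise the stored proc, trigger, or view under test
            // assert: verify the expected conditions
        } // scope.Complete() is never called, so all database work rolls back
    }
}

Any connection opened inside the scope enlists automatically, so the rollback covers everything the test body touched.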
Friday, December 14, 2007
Friday, November 30, 2007
ObjectBuilder Documentation is incorrect.
The following are the default ObjectBuilder pipeline stages, with the strategies each stage registers:
PreCreation
- Microsoft.Practices.ObjectBuilder.TypeMappingStrategy
- Microsoft.Practices.ObjectBuilder.SingletonStrategy
- Microsoft.Practices.ObjectBuilder.ConstructorReflectionStrategy
- Microsoft.Practices.ObjectBuilder.PropertyReflectionStrategy
- Microsoft.Practices.ObjectBuilder.MethodReflectionStrategy
Creation
- Microsoft.Practices.ObjectBuilder.CreationStrategy
Initialization
- Microsoft.Practices.ObjectBuilder.PropertySetterStrategy
- Microsoft.Practices.ObjectBuilder.MethodExecutionStrategy
PostInitialization
- Microsoft.Practices.ObjectBuilder.BuilderAwareStrategy
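If memory serves from the ObjectBuilder source, the list above is what the default Builder constructor registers. A sketch of that registration, assuming the StrategyList.AddNew API (worth verifying against the source):

using Microsoft.Practices.ObjectBuilder;

public class DefaultPipeline
{
    public static BuilderBase<BuilderStage> Create()
    {
        // Registration order within each stage matches the list above.
        BuilderBase<BuilderStage> builder = new BuilderBase<BuilderStage>();
        builder.Strategies.AddNew<TypeMappingStrategy>(BuilderStage.PreCreation);
        builder.Strategies.AddNew<SingletonStrategy>(BuilderStage.PreCreation);
        builder.Strategies.AddNew<ConstructorReflectionStrategy>(BuilderStage.PreCreation);
        builder.Strategies.AddNew<PropertyReflectionStrategy>(BuilderStage.PreCreation);
        builder.Strategies.AddNew<MethodReflectionStrategy>(BuilderStage.PreCreation);
        builder.Strategies.AddNew<CreationStrategy>(BuilderStage.Creation);
        builder.Strategies.AddNew<PropertySetterStrategy>(BuilderStage.Initialization);
        builder.Strategies.AddNew<MethodExecutionStrategy>(BuilderStage.Initialization);
        builder.Strategies.AddNew<BuilderAwareStrategy>(BuilderStage.PostInitialization);
        return builder;
    }
}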
Sunday, November 25, 2007
My Digital Picture Frame project (DPF)
I considered purchasing one of these cool toys but had high requirements for it.
- WiFi
- Support for pictures via RSS, with local storage media as a fallback
- Remote Administration
- Rechargeable battery
- Options for wood frames
Of course, the few that actually met most of these requirements ran from over $500 up to $1,000, and only one of them supported RSS.
So I used an old Tecra 8100 laptop and set out this long holiday weekend to build my own. I purchased a frame and shadowbox at Michael's craft store at a total cost of $22.
Three hours later I had produced this:
The laptop already had Windows XP loaded on it, so it was just a matter of:
- Setting Windows to automatically log in
- Loading the Google screen saver, which supports local media as well as RSS feeds
- Loading up TightVNC for remote administration
- Adding "nircmd.exe screensaver" as a startup item to kick off the screen saver at login
I decided I had better leave the back panel off the shadowbox to allow airflow. The laptop does get a little warm.
I hope this inspires you to build your own DPF!
Wednesday, November 21, 2007
Effective Code Reviews... the next steps.
I've been giving a lot of thought to how to move closer toward pair programming within our development organization. Code reviews today are somewhat effective, but they tend to take a back seat when the pressure is on to make schedules and release code; as Stephen Covey put it, the urgent always wins over the important.
So developers and managers under pressure will delay code reviews until after the release. That's just a stall tactic; sometimes the review happens and sometimes it doesn't. Either way, it's POINTLESS! The further in time you are from when the code was written, the more chance you have of making a breaking change (ignoring unit test coverage and unit test quality).
Most code reviews reveal code that can be improved but is not incorrect. These types of changes should never be made on released code! Sometimes it's even worse, in that the code requires design changes that we glossed over during the design phase but that raise obvious design concerns now that the code is complete.
Perhaps the best approach to shortening the time between when the code is written and when it is reviewed is to enforce check-in policies. This would require at least one other developer to review code changes before they could be committed to the source code repository.
I have noticed that TFS contains a field to indicate that the code was reviewed. I imagine a combination of shelving and using this flag with policies in place could begin to move code reviews more into the daily coding work flow rather than waiting until there is no time at the end of the schedule.
Another possibility for getting closer to pair programming is to require all developers to spend one hour each day reviewing (hence pairing on) another developer's work for the day.
I just don't think the culture shift to pair programming is going to be something that is ever dropped into an existing organization. We need to find methods to introduce it in small doses in order to succeed.
As Einstein once said, "Insanity is doing the same thing over and over again and expecting different results."
Sunday, October 28, 2007
Presenter communication with the view - events or direct communication.
I just started reading Jeremy Miller's excellent Build Your Own CAB blog post series. I'm currently up to his 6th post, on "View to Presenter Communication", and found the discussion of using events vs. callbacks on the presenter very interesting and relevant to my current work.
Let me share a recent experience...
I've been using events from my view to communicate with the presenter and ran across a bug caused by inadvertently wiring an event twice. While working one day, I accidentally pressed Ctrl-L and duplicated a line in my presenter constructor. The line I duplicated was a hook of a view event, which caused my presenter's event handler to be called twice.
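The bug, reduced to a sketch (all names are hypothetical):

using System;

public interface ICustomerView
{
    event EventHandler SaveClicked;
}

public class CustomerPresenter
{
    private readonly ICustomerView _view;

    public CustomerPresenter(ICustomerView view)
    {
        _view = view;
        _view.SaveClicked += OnSaveClicked;
        _view.SaveClicked += OnSaveClicked; // the accidental Ctrl-L duplicate: the handler now fires twice

        // A test asserting only "OnSaveClicked is wired" still passes;
        // catching this takes an assertion on the subscription count.
    }

    private void OnSaveClicked(object sender, EventArgs e)
    {
        // presenter reacts to the view here
    }
}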
I found out later that my unit tests for the presenter were correctly checking that I wired all the view events, but they were not checking that the events were wired only once. And all my integration testing had been in debug mode! Then I ran the application in release mode. BOOM!
It turns out the events fired in a different order in release mode, causing a null reference exception during a state change in the model.
If the purpose of events is to allow multiple listeners, when would I ever design a single view event to drive multiple presenter methods?
Perhaps I would, but that appears to be an edge case and not very good design. It would lead me back toward a design where events don't communicate specifics but something more abstract, such as a 'SomethingChanged' event.
Would I ever have multiple presenters for the same view? Never.
So is using events a good design principle to create loose coupling? Yes, but only if you require multiple listeners. If you don't, are you over engineering the solution? Perhaps.
Now when it comes to the model, events are essential! I have multiple presenters listening to model objects!
I've also been fighting memory leaks due to the use of events on my model objects. My model has certain objects that can be retrieved from cache and exist for the life of the process. When my presenter hooked their events, the presenter could not unload when the view was destroyed. This forced me to disconnect from the model events when the presenter was disposed, which in turn required me to dispose the presenter when the code that first created it fell out of scope.
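The fix, again reduced to a sketch with hypothetical names:

using System;

public interface ICachedModel
{
    event EventHandler StateChanged;
}

public class EditPresenter : IDisposable
{
    private readonly ICachedModel _model;

    public EditPresenter(ICachedModel model)
    {
        _model = model;
        _model.StateChanged += OnModelStateChanged;
    }

    public void Dispose()
    {
        // The cached model outlives the presenter; without this unhook,
        // the model's event list roots the presenter (and its view)
        // for the life of the process.
        _model.StateChanged -= OnModelStateChanged;
    }

    private void OnModelStateChanged(object sender, EventArgs e)
    {
        // refresh the view here
    }
}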
On a side note, I found the memory leak using SciTech's excellent memory profiler. If you are creating WinForm applications, I suggest you download it and try it against your application. You might be surprised to find you have memory leaks in your applications. It also has an API so you can build integration or unit tests to check for memory leaks.
Saturday, October 20, 2007
Some more unknown programming quotes
"Any sufficiently advanced bug is indistinguishable from a feature."
"Programming is an art form that fights back."
"There are two ways to write error-free programs; only the third one works."
"Programming is an art form that fights back."
"There are two ways to write error-free programs; only the third one works."
Tuesday, October 16, 2007
IoC is very simple and important.
I attended the CMAP code camp this past weekend and sat in on Michael Pastore's excellent presentation on Dependency Injection and Inversion of Control.
Although I was already familiar with DI/IoC, his presentation managed to sharpen my understanding of why IoC is important.
I don't have Mike's exact definition of IoC, but in my words:
"IoC is where code surrenders control or configuration to external code."
This is very common in most frameworks, including the .NET Framework.
Consider the static TrueForAll method (List&lt;T&gt; has an instance equivalent), as seen in Reflector:
public static bool TrueForAll<T>(T[] array, Predicate<T> match)

This method delegates control to the Predicate, inverting control back to the caller. These delegate/callback methods appear all over the .NET Framework.
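To make the inversion concrete, here's a minimal usage sketch of my own (not from Mike's talk):

using System;

class IocDemo
{
    static void Main()
    {
        // Control is inverted: Array.TrueForAll owns the loop, but the
        // caller's delegate owns the decision about what counts as "true".
        int[] numbers = { 2, 4, 6, 8 };
        bool allEven = Array.TrueForAll(numbers, delegate(int n) { return n % 2 == 0; });
        Console.WriteLine(allEven); // True
    }
}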
So, in my opinion, IoC is a very important concept that every developer should understand in order to effectively implement the Single Responsibility Principle.
Friday, October 12, 2007
My seven-year-old is out-blogging me!
My seven-year-old took up blogging last week.
He has a real fascination with Star Wars (Webkinz is his side hobby).
It's really pathetic that I can't seem to blog as much as he does! Yikes!
Friday, September 28, 2007
A nice quote relative to current work
I'm currently working on a Presenter First design and I stumbled across this quote:
"Doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming." — Alan Kay, Early History of Smalltalk.
"Doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming." — Alan Kay, Early History of Smalltalk.
Sunday, August 26, 2007
WTF Comcast?
I recently moved from Comcast Digital TV and Internet Services to FiOS and I couldn't be happier. My download speeds on Comcast's internet service were horrific at times.
I just followed a Digg link and found this article about how Comcast has drawn an invisible line for acceptable bandwidth use. The interesting thing was that the page served me an advertisement for Comcast Triple Play. LOL!!!
Saturday, July 28, 2007
FizzBuzz c# 3.0
I've been playing with the C# 3.0 Express beta....
(from n in Enumerable.Range(1, 100)
where (n % 3) == 0 || (n % 2) == 0
select n.ToString() + "=" + (((n % 2) == 0) ? "fizz" : "") + (((n % 3) == 0) ? "buzz" : "")).ToList().ForEach(Console.WriteLine);
Complexity
Every problem can be solved by adding another layer of indirection.
Unfortunately, adding another layer usually creates a new problem.
Thursday, July 26, 2007
The inverse of Moore's Law?
Moore’s Law
The power of computers per unit cost doubles every 24 months.
Wirth’s law
Software gets slower faster than hardware gets faster.
Another good law...
Conway’s Law
Any piece of software reflects the organizational structure that produced it.
Put another way...
If you have four groups working on a compiler, you’ll get a 4-pass compiler.
A new one for my wall
Hofstadter’s Law
A task always takes longer than you expect, even when you take into account Hofstadter’s Law.
Tuesday, July 17, 2007
statics are EVIL!!!
I made this statement a year or so ago while on a Microsoft lab engagement trip, and I got some strange looks from some of the Microsoft folks over lunch. So here are my thoughts on why I believe the use of statics is a bad idea:
- When you use a static, you couple to an implementation, not an interface. You cannot change the implementation without changing every dependent (see the sketch after this list).
- Your statics are essentially globals. Globals should be discouraged for the sake of the intellectual manageability of your code; strive to minimize scope.
- Using statics does not follow the tenets of object orientation. Fine for a functional programming style, but not for OO.
- Statics lead to all-or-nothing testing. Testing a single unit becomes much harder, if not impossible.
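A minimal sketch of the first and last points (all names are hypothetical): the static version has no seam, while the injected version can be tested in isolation.

using System;

// Statically coupled: every caller is welded to this one implementation.
public static class StaticLogger
{
    public static void Log(string message) { Console.WriteLine(message); }
}

public class OrderProcessorCoupled
{
    public void Process()
    {
        StaticLogger.Log("order processed"); // cannot be swapped or faked
    }
}

// Inverted: the caller supplies the implementation, so the processor can
// be tested alone with a fake ILogger.
public interface ILogger { void Log(string message); }

public class OrderProcessor
{
    private readonly ILogger _logger;
    public OrderProcessor(ILogger logger) { _logger = logger; }

    public void Process()
    {
        _logger.Log("order processed"); // any ILogger will do
    }
}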
Of course, this crazy talk will eventually lead you to IoC. See you when you get there.
Tuesday, June 12, 2007
Presenter First Pattern
I've been using MVC patterns for 10+ years and have recently been looking at Fowler's Passive View and Supervising Controller, now that he has retired the MVP pattern.
But the Presenter First pattern is really starting to make a lot of sense when it comes to easy testing of the presenter by mocking the model and view. I saw someone on a blog describe it as "UI as a service". I like the idea of the model and view being totally ignorant of the presenter.
I'm still a little fuzzy on the mechanics of wiring MVP triads together at the model level.
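Here's roughly how I picture a minimal triad wiring up (all names are hypothetical; the whitepaper has the real mechanics). The presenter is constructed first and subscribes to both sides, so neither the view nor the model ever learns the presenter exists:

using System;

public interface ICustomerEditView
{
    event EventHandler SaveClicked;
    string CustomerName { get; }
}

public interface ICustomerEditModel
{
    void Save(string name);
}

public class CustomerEditPresenter
{
    private readonly ICustomerEditView _view;
    private readonly ICustomerEditModel _model;

    public CustomerEditPresenter(ICustomerEditView view, ICustomerEditModel model)
    {
        _view = view;
        _model = model;
        // The presenter listens; the view just raises events.
        _view.SaveClicked += delegate { _model.Save(_view.CustomerName); };
    }
}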
Grab the whitepaper and check it out.
Wednesday, June 06, 2007
Getting back to Domain Driven Design
I recently started reading Domain Driven Design by Eric Evans and am gaining some insights into many of the DDD design practices I have unknowingly been using over the past 5 years.
It is interesting to see how, over time, aspects of my company's domain design have been muddled and broken down due to miscommunication of the UBIQUITOUS LANGUAGE we developed in the initial design of the system.
This quote from the FACTORY/REPOSITORY chapters was quite poignant and applicable to our current environment when I read it:
"A client needs a practical means of acquiring reference to pre-existing domain objects. If the infrastructure makes it easy to do so, the developers of the client may add more traversable associations, muddling the model. On the other hand, they may use queries to pull the exact data they need from the database, or to pull a few specific objects rather than navigating the AGGREGATE roots. Domain VALUE OBJECTS become mere data containers. The sheer technical complexity of applying most database access infrastructure quickly swaps the client code, which leads developers to dumb down the domain layer, which makes the model irrelevant."
Also, in discussing the corrosion of the domain model, he mentions how making it easy to convert data into objects with a mapping layer can lead developers astray: they stop thinking about objects and think only of data containers. Is this the DataSets vs. Business Objects debate, a wolf in sheep's clothing?
"As a client code uses the database directly, developers tempted to bypass model features such as the AGGREGATES, or even object encapsulation, instead directly taking and manipulating the data they need. More and more domain rules become embedded in query code or simply lost."
I've been heads-down in design and coding for so long, in an attempt to establish financial stability, that I have been ineffective in my capacity as a System Architect in this regard.
My situation is rather humorous, because most architects tend to spend too much time on architecture-related tasks, whereas I've spent the majority of the last 5 years on design, coding, and support.
Also, people will fall back on the argument that any added abstraction is going to impact performance. Pooh to that! It depends on what you are building. If you are building a typical business app, design it correctly, for God's sake! Unit test it for performance and adjust as required. A good design is agile enough to change with TDD.
I look forward to reading more of DDD. It's stimulating new ideas and reinforcing ones I already knew to be correct.
Sunday, June 03, 2007
Esoteric and entertaining programming languages
If you ever have some time to waste, I encourage you to check out the following programming languages. They are not really of any use, but they are a cool mental exercise.
Whitespace
BrainFu*k (or BF)
Whitespace is interesting because you can embed secret messages, as code, within the whitespace of your web pages.
Sure, go ahead and print out your white space program and destroy the source file. No one is ever going to figure it out now.
Apparently I had some time to waste this weekend.
Sunday, May 13, 2007
My Fizz Buzz interview question
So there has been some "buzz" about the interview question that sets the bar for any entry-level programmer (forgive the pun).
I have my own entry-level programming problem that I have developed over many years of interviewing:
"Write the code in your favorite language (or pseudo-code) to remove all items from a list that a contained in another list."
I give them concrete lists so we can talk about the implementation:
List 1 = A, B, C, D, E, F
List 2 = C, E, F
Sounds easy enough, but candidates almost always start with a loop over List 1 and an inner loop or lookup into List 2. Then any normal programmer realizes the error of their ways and says, "Oh crap, I'll screw up my iterator or index over List 1 if I modify List 1." It's interesting to watch as they try to come to a solution. You would be surprised how many cannot make the leap to one.
I get statements such as "Oh, man I've done this before" and then they never come to a solution.
Better candidates always ask questions: What data structures are the lists stored in? How large could the sets become? Is performance a factor? Do I have memory limitations? That is the true sign of an experienced developer. The poor developers never make it to a solution.
But it's important that they give you a solution that works. I don't take intelligent discussion around the problem as confirmation that someone is a good developer. I want developers on my team who are good thinkers and who can also get things done.
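For the record, here's one working answer, sketched in C# (iterating backward is the simplest way out of the index trap):

using System.Collections.Generic;

class RemoveAllDemo
{
    static void Main()
    {
        List<string> list1 = new List<string>(new string[] { "A", "B", "C", "D", "E", "F" });
        List<string> list2 = new List<string>(new string[] { "C", "E", "F" });

        // Iterate backward so each removal only shifts items we've
        // already visited; a forward loop invalidates the index.
        for (int i = list1.Count - 1; i >= 0; i--)
        {
            if (list2.Contains(list1[i]))
            {
                list1.RemoveAt(i);
            }
        }
        // list1 is now A, B, D

        // Or let the framework invert the loop for you (.NET 2.0):
        // list1.RemoveAll(delegate(string s) { return list2.Contains(s); });
    }
}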
I debated whether I should post my version of the "Fizz Buzz" question here, but traffic on my blog is almost non-existent. I guess that's because either I don't have anything interesting to say or I don't blog enough.
Whatever....
Wednesday, April 25, 2007
Bug-by-bug compatibility - semantic compatibility
I just finished listening to a Hanselminutes episode with Raymond Chen where they discussed a multitude of topics. One of them was how the Microsoft app compat team has to deal with what Raymond called "bug-by-bug compatibility."
I realized that what he was really talking about was a form of semantic compatibility. My best real-world example of semantic compatibility is when a component exposes events through its interface. Although the interface is immutable, the order in which events fire is not. So a component vendor may release a new version for bug fixes or new features that does not modify the interface but does change the order in which its events fire, causing a semantic compatibility problem.
The software development industry is doing itself a great disservice by not documenting event ordering in its standard component documentation. Yes, we have BeforeActivate and AfterActivate through naming standards, but does BeforeActivate come before or after BeforeInitialize? Got me! Let me test that. Oh, OK... now I know what it does. Is that what it's going to do in the next version?
So here's an interesting story I've told many friends:
Years ago I wrote a security component that handled all user password encryption and decryption for a system. The component was written in VB6 and used CryptoAPI calls to encrypt and decrypt the passwords. As an example for calling the CryptoAPI, I used this class from planetsourcecode.com. I created full unit tests for encrypting and decrypting, and everything worked great. The application shipped and the world was happy.
Now, we are migrating the system to .NET and one of the first components I needed to migrate was the security component. Unfortunately .NET 1.1 did not have native managed support for the algorithm I selected (CALG_RC4). No problem, I'll just implement the PInvoke calls as I did in the VB6 code.
I did that, wrote unit tests, and for some reason I was unable to decrypt passwords encrypted with the VB6 component, while passwords encrypted with the .NET component decrypted fine. Hmmmm, very odd. It turns out that one of the hash parameters in the VB6 CryptoAPI declarations was defined "As String" when it should have been defined "As Any". This caused the hash input to go through a Unicode string conversion that the .NET code was not performing.
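Here's a sketch of where the two versions diverged (the C# declaration follows the Win32 CryptHashData signature; the VB6 fragment is approximate):

using System;
using System.Runtime.InteropServices;

class NativeMethods
{
    // Declaring pbData as a byte array hashes the raw bytes, with no
    // string conversion on the way into the call.
    [DllImport("advapi32.dll", SetLastError = true)]
    public static extern bool CryptHashData(IntPtr hHash, byte[] pbData, int dwDataLen, int dwFlags);
}

// The buggy VB6 declaration was roughly:
//   Declare Function CryptHashData Lib "advapi32.dll" _
//       (ByVal hHash As Long, ByVal pbData As String, ...) As Long
// Passing a VB6 String forces a string conversion before the call, so the
// VB6 code hashed converted text while the .NET code hashed the raw
// bytes: same password, different hash.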
So now we have to inject a bug in our .NET code to maintain compatibility with a bug in our VB6 code. ARRRRRRGGGG!
A few lessons here:
- Just because it passes all your tests does not mean it is correct.
- Don't use code directly off the internet (even API declarations)
- Once software is in the field you must maintain bug-for-bug compatibility and semantic compatibility.
Friday, April 20, 2007
ORM and when query plans go bad.
I've been listening to quite a few discussions in the debate over the use of ORMs, and two topics never seem to get discussed:
- How do dynamic-SQL ORMs deal with the fact that your database server (e.g., SQL Server) can decide at any point to use an alternate query plan? A simple index HINT in the join syntax can fix this problem, but how is my ORM going to handle it?
- Why is there no talk about scaling these ORMs? No, I'm not talking about scaling the database; I mean a layer between the ORM and the database execution.
On point #2, I've also been starting to pay more attention to the Microsoft Entity Framework. .NET Rocks did a show with one of the product managers for ADO.vNext (Dan someone), and he talked about the Entity Framework in depth. He talked about how they are getting pressure from the community because they are not supporting all the functionality that ORMs such as NHibernate provide. His response was that MS has a longer-term strategy with the EF to support replication and reporting (essentially a unified model), and that we would have to put up with the limited capabilities to reach that higher level.
The interesting part was that he went on to talk about how he worked on the WinFS team before it was killed. He said the reason it was killed was that they were unable to deliver all the functionality in one release, and no one (especially management) could decide on the base feature set. It was rather ironic that he mentioned this on the heels of talking about how the EF was a stepping stone.
In my opinion, the EF will certainly be a wait-and-see technology. CSLA .NET is still my choice.
Searching for .NET solutions
Dan Appleman did a talk at VSLive Orlando about "Discoverability". I was not able to attend the session but my co-worker did.
I also heard Dan discuss the same topic on the internet talk show .NET Rocks. One interesting link that was served up was a site that uses a Google custom search. Dan has set one up to search only the top .NET sites. The site is www.searchdotnet.com and it currently searches about 300 sites. Some of those sites are MSDN blog sites, so the actual depth of the content being searched is much greater than the 300 sites.
So far it has been returning much more relevant results than straight Google searches on .NET topics.
I've been looking for a simple, private desktop VPN solution for a while, and I finally found it.
http://www.hamachi.cc/
Encrypted over HTTP... NICE! No more opening RTP ports in my firewall.
Thursday, April 19, 2007
JavaScript prototype functions for formatting Date objects.
Some useful JavaScript for formatting Date objects:
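The snippet references gsMonthNames and gsDayNames without defining them; arrays like these are assumed:

var gsMonthNames = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'];
var gsDayNames = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];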
String.prototype.zf = function(l) { return '0'.string(l - this.length) + this; }
String.prototype.string = function(l) { var s = '', i = 0; while (i++ < l) { s += this; } return s; }
Number.prototype.zf = function(l) { return this.toString().zf(l); }
Date.prototype.format = function(f)
{
if (!this.valueOf())
return ' ';
var d = this;
return f.replace(/(yyyy|mmmm|mmm|mm|dddd|ddd|dd|hh|nn|ss|a\/p)/gi,
function($1)
{
switch ($1.toLowerCase())
{
case 'yyyy': return d.getFullYear();
case 'mmmm': return gsMonthNames[d.getMonth()];
case 'mmm': return gsMonthNames[d.getMonth()].substr(0, 3);
case 'mm': return (d.getMonth() + 1).zf(2);
case 'dddd': return gsDayNames[d.getDay()];
case 'ddd': return gsDayNames[d.getDay()].substr(0, 3);
case 'dd': return d.getDate().zf(2);
case 'hh': var h = d.getHours() % 12; return (h ? h : 12).zf(2); // avoid the implicit global 'h'
case 'nn': return d.getMinutes().zf(2);
case 'ss': return d.getSeconds().zf(2);
case 'a/p': return d.getHours() < 12 ? 'a' : 'p';
}
}
);
}
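For example (assuming the name arrays defined above):

// new Date().format('dddd, mmmm dd, yyyy hh:nn a/p')
//   returns something like "Thursday, February 01, 2007 09:30 a"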
Thursday, February 01, 2007
A new Vista security issue?
A new Vista security issue? I don't think so!!! This is the exact reason I can't stand rags like e-Week.
Grrrrr...