Short Stories

// Tales from software development

Archive for February 2010

Conspiracy theory: Why doesn’t C# support unmanaged externs?


While C# supports COM Interop in both directions, from COM to .NET and vice versa, for plain P/Invoke it only supports outgoing calls: there is no way to expose a managed method so that unmanaged code can call it.

What’s odd about this is that unmanaged exports are supported by the IL Assembler and the .NET runtime. So why did the C# team provide support for only three of the four permutations of unmanaged interop?

In all likelihood it was just never a priority, and yet… consider how easy it would have been to add the support: probably just an attribute to decorate the method you wanted to export, with a few options such as the calling convention to use. In fact, rather like Robert Giesecke’s Unmanaged Exports MSBuild task.
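To make the idea concrete, here is a sketch of roughly what that might have looked like. The [DllExport] attribute in the comment mirrors the shape of Giesecke’s tool; it is not part of the C# language or the .NET Framework, so treat it as hypothetical.

// Sketch only: what C#-level unmanaged exports might have looked like.
// The DllExport attribute in the comment below is hypothetical (it mirrors
// Robert Giesecke's Unmanaged Exports); only DllImport is real.
using System;
using System.Runtime.InteropServices;

public static class InteropExamples
{
    // Supported today: P/Invoke, i.e. managed code calling unmanaged code.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    public static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    // The missing permutation: exposing a managed method to unmanaged callers.
    // A compiler-supported attribute might have looked something like this:
    // [DllExport("AddNumbers", CallingConvention = CallingConvention.StdCall)]
    public static int AddNumbers(int a, int b)
    {
        return a + b;
    }
}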

So why didn’t Microsoft do it? There are a number of possible reasons, but my guess is that making it easy for legacy application code to call managed code wasn’t something that aligned with Microsoft’s marketing strategy for .NET. It’s one thing to allow your new development platform to interoperate with legacy code but quite another to have legacy code interoperate with managed code. The first case is a real-world compromise that recognises that legacy server code is going to be around for a while yet; the second just makes it too easy for customers and third parties to continue developing and using (i.e. investing in, which means dollars) systems written in unmanaged code.


Written by Sea Monkey

February 25, 2010 at 9:00 pm

Posted in Comment


Getting over the hump


I’ve been using source code metrics tools for around 12 years and early on I noticed something a little odd about the way that the lines of code (LOC) metric changes over the course of a project. I’d expected that the LOC value plotted against time would look like this:

This shows the weekly LOC metric for a project of around 12 weeks duration that produces an application of around 34,000 LOC.

As expected, the quantity of code being written in the first few weeks is high but it gradually eases off, until perhaps only a few hundred lines of code are being added each week by the end of the project.

In fact, every project I’ve ever worked on and collected metrics for looks like this:

The significant difference is the way that the LOC value starts decreasing at the end of the project. Why?

It’s simple enough: at this stage of the project developers are rationalising and refactoring code. For example, two developers might have each written a library method that performs almost the same function. One of them will identify the duplication of code and refactor the two methods into one. This kind of activity often happens towards the end of the development phase of a project when developers have started fixing bugs identified in testing.

I think it’s useful to know about this because if you plot the LOC value over the course of your project you can expect to see a peak and then a small decrease towards the end of the development phase. If you see this when you expect to, you can congratulate yourself that the project is going to plan. If you don’t see it, something’s wrong…
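If you want to collect the metric yourself, a crude weekly count can be scripted in a few lines. The sketch below is a deliberately simplified illustration (it only looks at .cs files and treats any non-blank, non-comment line as code); real metrics tools are far more sophisticated about what counts as a line of code.

// Minimal sketch: count non-blank, non-comment-only lines in a source tree.
// Deliberately simplified; the definition of a 'line of code' varies by tool.
using System;
using System.IO;
using System.Linq;

static class LocCounter
{
    public static int CountLinesOfCode(string rootDirectory)
    {
        return Directory.GetFiles(rootDirectory, "*.cs", SearchOption.AllDirectories)
            .SelectMany(File.ReadAllLines)
            .Count(line =>
            {
                string trimmed = line.Trim();
                return trimmed.Length > 0 && !trimmed.StartsWith("//");
            });
    }
}

Run something like this against the project’s source tree once a week and plot the results, and you should see the hump described above.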

Written by Sea Monkey

February 23, 2010 at 9:00 pm

Posted in Development


Sometimes, being helpful is the worst thing you can be…



I still occasionally use the following story as an example of why it’s sometimes better not to help out when things get tough on a project.

It was about 16 years ago and I was working for a company that developed system software for IBM mainframes. There was an internally developed issue tracking application called PolyGlut. It was better than nothing, but only just. It had originally been written by a guy in one of the product support teams but the Internal Systems department, ISYS, had taken over responsibility for it. They were doing the best they could, and although new features and functionality were being added all the time, they were simply too under-resourced to deliver the new and enhanced functionality that the support teams needed urgently.

The most significant of these functions was a facility to quickly search through all the logged problems so that, for example, when a customer reported a problem we could quickly determine whether it had previously been reported by another customer. There was a search option but it was unusable as it often took up to 20 minutes to complete a search of a single product’s logged problems. There were times when, out of desperation, we might use the feature, but it was soul-destroying to leave it running for 20 minutes only for the result to be ‘No hits found.’

As it happened, at the time I’d been researching free text indexing in my own time and had put together a couple of utilities for indexing and searching text. PolyGlut stored information in text files so it was easy to run the indexer against these and then use the search utility to quickly find problem reports that contained the specified key words. The performance of the searching tool was very impressive with response times that were always under one second to search all the problem logs for all the products. I called the search utility PGIndex.
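For illustration, the core idea behind a tool like PGIndex is an inverted index: map each keyword to the set of files that contain it, then intersect the sets at search time, so a query never has to scan the files themselves. The sketch below is a modern C# re-imagining of that idea, not the original code (which long predates .NET), and all the names are mine.

// Sketch of an inverted index over text files, in the spirit of PGIndex.
// Illustrative only; the original utility is not reproduced here.
using System;
using System.Collections.Generic;
using System.IO;

class TextIndex
{
    // keyword -> set of files containing that keyword
    private readonly Dictionary<string, HashSet<string>> index =
        new Dictionary<string, HashSet<string>>(StringComparer.OrdinalIgnoreCase);

    public void IndexFile(string path)
    {
        char[] separators = { ' ', '\t', '\r', '\n', '.', ',', ';', ':' };
        foreach (string word in File.ReadAllText(path)
            .Split(separators, StringSplitOptions.RemoveEmptyEntries))
        {
            HashSet<string> files;
            if (!index.TryGetValue(word, out files))
            {
                files = new HashSet<string>();
                index[word] = files;
            }
            files.Add(path);
        }
    }

    // Returns the files that contain all of the specified keywords.
    public IEnumerable<string> Search(params string[] keywords)
    {
        HashSet<string> result = null;
        foreach (string keyword in keywords)
        {
            HashSet<string> files;
            if (!index.TryGetValue(keyword, out files))
                return new string[0];
            if (result == null)
                result = new HashSet<string>(files);
            else
                result.IntersectWith(files);
        }
        return (IEnumerable<string>)result ?? new string[0];
    }
}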

Suddenly, we had a tool that revolutionised the way we used the problem logs. In a matter of a few minutes we could run searches against all the pertinent keywords and identify any and all problem reports that might be relevant to a newly reported problem.

For the first few days I just used it myself but then Mark saw me using it and wanted it. A few days later all the guys in our team were using it. Then the team that we shared an office with wanted it. And soon I had other team leaders coming to my desk to ask if they could use it too.

Then I got a phone call from Robert who had responsibility for, among other things, managing ISYS. He asked me to come to his office and it wasn’t hard to guess that the discussion was going to be about PGIndex.

Twenty minutes later, in his office, I was explaining how PGIndex worked and how fast it was. I was slightly puzzled by the fact that he didn’t seem as happy about it as I was. We had a short discussion about the ownership of the code I’d written and I made it clear that, although it had all been written outside of work hours, I wasn’t making any claim on it. So it would be easy to make it freely available throughout the company, or to hand it over to ISYS so that they could incorporate it into PolyGlut.

That wasn’t the way Robert saw it.

He said that he couldn’t let everyone in the company use it. In fact, he asked me to go to all the team leaders whose teams were using it and tell them that they couldn’t use it. He didn’t really want anyone using it, but conceded that he couldn’t stop me using it and that he would take a ‘lenient’ view of the members of my team using it. I thought he overstepped the mark here: why should he be telling the support teams what tools they could or couldn’t have?

Then he said that he couldn’t allow me to hand the code over to the ISYS department either.

I couldn’t make any sense of this. Why was he blocking any use of a utility that had transformed the way the support teams used PolyGlut?

Finally he explained… ISYS was under-resourced. If I handed my code over to ISYS, this would obscure the fact that they were understaffed. The best way for him to make a case to senior management for additional staff in ISYS was by not delivering the features and functions that were needed in applications like PolyGlut.

This was a bitter pill to swallow at the time, but in the 16 years since I’ve seen similar scenarios and I know that he was right.

Written by Sea Monkey

February 18, 2010 at 9:00 pm

Posted in Development


Requirements management using Volere


I first encountered the Volere methodology when I bought about half a dozen books from Amazon on the subject of requirements gathering and management in 2000. It had become apparent to me that requirements management was the least understood process in the projects I had worked on over the previous few years and, although I didn’t usually have direct responsibility for it, I wanted to understand it better to try to improve it.

Most of the books were helpful to some degree but the one that stood out by offering a straightforward and practical methodology was Mastering the Requirements Process by Suzanne Robertson and James Robertson. I changed jobs around the same time and found that I was much more involved in the processes upstream of design and implementation and got the chance to put the Volere methodology to the test. As might be expected, I needed to get experience on a few projects before I was really using it properly but it quickly proved to be as practical as its proponents claimed.

The method is based on documenting requirements on index cards in a particular format. I never used physical index cards but used Word documents for the first few projects and then wrote a small application that imitated the index card format. This made it possible to implement additional functionality such as automatically checking the dependencies between requirements.
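As an illustration of the kind of thing that application made possible, here is a minimal sketch of a requirement card with an automated dependency check. The field names are my own simplification rather than the full Volere template, and the code is not the application I wrote.

// Sketch: a simplified requirement 'card' and an automated dependency check.
// Field names are illustrative, not the complete Volere card layout.
using System;
using System.Collections.Generic;
using System.Linq;

class RequirementCard
{
    public string Id;                                       // e.g. "REQ-042"
    public string Description;
    public string Rationale;
    public string FitCriterion;                             // how satisfaction will be measured
    public List<string> Dependencies = new List<string>();  // Ids of related cards
}

static class RequirementChecks
{
    // Report every dependency that points at a card that does not exist.
    public static IEnumerable<string> FindMissingDependencies(IList<RequirementCard> cards)
    {
        var knownIds = new HashSet<string>(cards.Select(c => c.Id));
        return cards
            .SelectMany(c => c.Dependencies.Select(d => new { Card = c, Dependency = d }))
            .Where(x => !knownIds.Contains(x.Dependency))
            .Select(x => string.Format("{0} depends on unknown requirement {1}",
                                       x.Card.Id, x.Dependency))
            .ToList();
    }
}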

I also found it extremely useful to create a two-page bullet point list of all the requirements categories documented in the Volere template. Simply working through this list from start to finish is a good way of checking and confirming that you’ve remembered to cover all the requirements for a project: not just the functional requirements of the software solution but also, for example, the environment it will be deployed to, the testing process, and the deployment.

At first it might seem a bit too simple: surely requirements management ought to be more complicated than this? Well, you can add complexity if you like, but once you start using it you begin to understand that the apparent simplicity is in fact a well-designed methodology built on hard-won experience.

If you’re interested in Volere, there’s a second edition of Mastering the Requirements Process (Addison Wesley, March 2006) and a website at http://www.volere.co.uk/.

Written by Sea Monkey

February 16, 2010 at 9:00 pm

Posted in Development


Quick book review: Introducing .NET 4.0


I haven’t downloaded any of the .NET 4.0 / Visual Studio 2010 betas so although I have a vague idea of some of the changes I thought it’d be useful to buy a book that gave a quick summary of what’s new. Alex Mackey’s Introducing .NET 4.0 with Visual Studio 2010 (Apress) is exactly that.

It’s probably unfair to criticize a book that’s been written against early, non-production releases of the new .NET and Visual Studio and against marketing material, but it is worth pointing out that some areas are covered very briefly. Not only do they lack depth, they seemingly lack insight as well. And while references to websites and blogs are useful additional information, all too often in this book they’re used as a substitute for actual content.

Personally, I found the coverage of VB.NET changes to be an annoying irrelevance. I guess it’s not economically viable to publish two books to cover C# and VB.NET separately but it would have been better to cover the language changes as separate chapters for C# and VB.NET.

Some of the changes are given the briefest mention and are easily missed. As an example, at the bottom of page 81 there are two sentences explaining that the 4GB limit has been removed from the System.IO.Compression classes. This is a significant enhancement for anyone who has tried to use them for general-purpose file compression in the past, as the 4GB limit often meant that they were not a practical solution.
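As a sketch of why that matters, general-purpose compression of a file of any size is now straightforward with GZipStream. The example below assumes .NET 4.0 (Stream.CopyTo is also new in 4.0); the file paths are placeholders.

// Sketch: general-purpose file compression with System.IO.Compression.
// Practical for large files now that the 4GB limit has been removed in .NET 4.0.
using System.IO;
using System.IO.Compression;

static class CompressionExample
{
    public static void CompressFile(string sourcePath, string destinationPath)
    {
        using (FileStream input = File.OpenRead(sourcePath))
        using (FileStream output = File.Create(destinationPath))
        using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
        {
            input.CopyTo(gzip); // Stream.CopyTo is itself new in .NET 4.0
        }
    }
}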

Criticisms aside, the book does provide a useful primer but I suspect that most experienced developers will skim through it quickly and then bin it.

Written by Sea Monkey

February 15, 2010 at 8:00 am

Posted in Books


928MB Windows Update


Rob just emailed me this screenshot from Windows Update:

This is on a new laptop with Windows 7 pre-installed.

928MB!

Written by Sea Monkey

February 12, 2010 at 6:42 pm

Posted in Environments


Do what I say, not what I do…


European politicians have criticised Nokia for providing the Iranian government with surveillance technology.

So why is it that the UK government is using the same technology to snoop on its citizens?

This technology is a two-edged sword and it’s very naive for European governments and the EU to use it as part of “the war on terrorism” but then criticise its use by governments that they don’t approve of.

Written by Sea Monkey

February 11, 2010 at 6:00 pm

Posted in Comment
