Short Stories

// Tales from software development

Archive for December 2011

Web Service Mocking


I’ve just finished writing an interface for our medical information system that calls a customer’s web service for patient data. The web service isn’t yet available to use but, as it’s fairly simple, I created some test stubs that simulated the web service calls, although these were actually in-process calls rather than calls to a mocked web service.

Having finished the initial development, I started testing and realised that, while useful during development, my test stubs didn’t provide a realistic testing environment. I considered writing a simple web service to mimic the customer’s but, after a quick web search for web service mocking tools, decided to give soapUI a try, not least because it’s available as a free open-source version.

I was sceptical of one reviewer’s claim that a mocked web service could be configured and running within a few minutes, but it’s true. All I did was run soapUI, load the WSDL for the customer’s web service, and start the mock service.

To have the mock service return data you need to specify a response for each method called, but soapUI provides an XML template for each response that makes this quick and easy. There are also several options to vary the response according to the method’s arguments.
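To give an idea of what filling in a response template involves (the operation name, namespace, and fields below are made up for illustration; the real ones come from the customer’s WSDL), a mock response is just a SOAP envelope with the result data filled in:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:pat="http://example.com/patientservice">
   <soapenv:Header/>
   <soapenv:Body>
      <pat:GetPatientResponse>
         <pat:PatientId>12345</pat:PatientId>
         <pat:Surname>Smith</pat:Surname>
         <pat:DateOfBirth>1970-01-01</pat:DateOfBirth>
      </pat:GetPatientResponse>
   </soapenv:Body>
</soapenv:Envelope>
```

soapUI generates the skeleton of this envelope from the WSDL; you only replace the placeholder values with test data.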

Highly recommended.


Written by Sea Monkey

December 14, 2011 at 8:00 pm

Posted in Development

Downstream impact


It’s a well-documented fact that the longer an issue goes unresolved during the development of a software product, the greater its downstream impact. In the real world it’s sometimes necessary to accept that a known problem won’t be resolved until later due to, for example, resourcing constraints, and to work around it as best we can. Last week an interesting example demonstrated why we shouldn’t allow this to happen if at all possible.

This project has been running for almost two years, but the interface that I’m responsible for was ‘code complete’ in July of this year. During some extended testing with large volumes of input data (around 18,000 HL7 messages) I noticed that a small number of messages were missing an identifier that the interface expects. In this implementation, the OBR-2 field of an ORU R01 message specifies the unique laboratory identifier for the test result.

No problems with the interface itself were found, so the missing OBR-2 values were reported to the customer’s developer responsible for the HL7 message feed. After several email exchanges it emerged that the IT Department Manager would not make anyone available to investigate because he deemed the project a low priority. Perhaps this was understandable, as the project didn’t have a ‘go-live’ date scheduled at the time, but I still considered it a poor decision. At the very least, a few hours spent investigating would have identified the cause and indicated how much effort would be required to fix it.

As far as I was concerned, the interface itself was finished and no further testing would be done until the missing OBR-2 problem was fixed prior to the first ‘dry-run’ of the ‘go-live’ process.

In November a project plan was written that scheduled the first dry-run for early December, which meant that the missing OBR-2 problem had to be resolved by then. Last week a member of the customer’s IT team looked at the problem and identified the cause within 20 minutes. It was not a bug but a deliberate design decision: in this particular laboratory’s data feed, the OBR-2 field contains the requestor’s request id rather than the laboratory’s test identifier, and if the test is unsolicited it’s blank. The ‘fix’ agreed with the customer was to place the laboratory’s test identifier in the OBR-3 field and for the interface to use the OBR-2 value if available, falling back to the OBR-3 value otherwise.
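The fallback rule itself is tiny. A minimal sketch (the field values here are made up; real segments come from the HL7 feed, and a production interface would use a proper HL7 parser rather than splitting on pipes):

```python
def lab_identifier(obr_segment: str) -> str:
    """Return the test identifier from a pipe-delimited HL7 OBR segment.

    Per the agreed fix: use OBR-2 (the requestor's request id) when
    present; for unsolicited results OBR-2 is blank, so fall back to
    OBR-3 (the laboratory's test identifier).
    """
    fields = obr_segment.split("|")
    # fields[0] is the segment name "OBR", fields[1] is OBR-1, etc.
    obr2 = fields[2].strip() if len(fields) > 2 else ""
    obr3 = fields[3].strip() if len(fields) > 3 else ""
    return obr2 or obr3

# Solicited result: OBR-2 carries the request id.
print(lab_identifier("OBR|1|REQ12345|LAB67890|GLU^Glucose"))  # REQ12345
# Unsolicited result: OBR-2 is blank, so fall back to OBR-3.
print(lab_identifier("OBR|1||LAB67890|GLU^Glucose"))          # LAB67890
```

Trivial as it is, this one-line change to the selection logic still forced a full release cycle, which is the point of the story.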

This meant that the interface source code had to be updated, which in turn meant a new release build, installer, testing, deployment, and so on.

So, even though the code was complete at the end of July, with just one small outstanding issue for the customer to resolve, here I was four and a half months later making code changes and going through another release cycle.

And all of this could have been avoided if the customer had allowed one of its developers to work on the problem for 20 minutes back in July.

Written by Sea Monkey

December 12, 2011 at 8:00 pm

Posted in Development
