One From The Vaults: Testing

Reading Time: 3 minutes
Looking for Rhino (real ones, not mocks)

Whilst I was looking for something else I found an explanation I wrote years ago, trying to explain to senior management how testing had changed and become critical in the Agile environment. It’s not the easiest thing to get across: the business wants features that it can sell. I don’t think I did a bad job, so here it is.

Having now spent a few months with this project I’m very conscious that, at the moment, we don’t have much of a testing strategy. It looks as if we’re trying to offload all responsibility for testing onto the testing department.
This rather implies that our primary testing strategy is manual system testing.
Failed system tests are expensive – and if they impact promised delivery dates they can be a major problem. It would be better if we could have a good idea before we entered a system test whether or not it’s going to pass and that is a software quality management problem.

(Technical) testing should start in the software design. The questions the developers should be asking are:

  1. How am I going to achieve this task?
  2. How am I going to test it?

As an example, user interfaces are notoriously difficult to test, but if one of the MVC-style design patterns is used, the view (the actual UI) can be separated from the rest of the code. This means you can write a program to test the logic behind the UI without having to click buttons programmatically.
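
To sketch the idea (a hypothetical MVP-style example – the names are mine, not our actual code), the view is reduced to an interface so that a plain unit test can drive the logic with a fake view and no UI at all:

    // Hypothetical sketch: the logic behind the UI depends on an
    // interface rather than on real buttons and text boxes.
    public interface ILoginView
    {
        string Username { get; }
        void ShowError(string message);
    }

    public class LoginPresenter
    {
        private readonly ILoginView view;
        public LoginPresenter(ILoginView view) { this.view = view; }

        // All the behaviour we want to test lives here, UI-free.
        public bool Submit()
        {
            if (string.IsNullOrEmpty(view.Username))
            {
                view.ShowError("A username is required");
                return false;
            }
            return true;
        }
    }

A test can then supply a stub ILoginView and assert on Submit() directly, without ever clicking a button.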

There are any number of places within the design of software where adding a little thought to how it can be (easily or preferably automatically) tested is of significant advantage.
The more automated testing we can get in the better. If we can get to a position where a substantial portion of the code is automatically built and tested every night that would be great. It means that we, as developers, get continuous feedback about the state of the software.
We can then go into a system testing phase with a much higher level of confidence that it will pass.

This does mean, however, that we need to start investing in testing, which can be a difficult message to get across. If we don’t, then as the software and the functionality grow and the code-base becomes more difficult to maintain, we will be taking ever-increasing risks with the future of the product.

It’s a good idea to get this baked in now.

I would suggest that a feature is not finished unless the issue of how it is going to be tested is solved – and that means the automated and/or manual tests are written. If there’s no automated testing there needs to be an explanation of why.

The more bodges, work-arounds and spaghetti code there is, the harder it becomes to maintain and the longer it takes to develop each new feature. A product which leads the market can very quickly fall behind because every time someone tries to do something they have to try to unravel all the spaghetti, they then inevitably end up piling on more spaghetti just to make it work and making it worse for the next edit. Technical debt snowballs.

As with all things there’s a balance: I’m conscious we need to get features to market fast, but as the product matures the importance of testing will increase. Automated unit testing ensures that each building block of the project actually does what we think it should do. It’s not a silver bullet for solving all technical debt, but it’s a good starting point. Making code testable enforces certain good behaviours that will increase the longevity of the product.

I do not believe that the current development strategy is sustainable. I am therefore intending to phase in a plan that will, in time, see all new development covered by automated testing and will start to retrofit automated testing into the existing code-base. Inevitably this will slow down the speed we can get features to market, by a known and controllable overhead. Our business model is based on repeat business. If we fail to get a grip of technical debt the competition will overtake us in the mid term and it will invalidate that model.

Entity Framework Double PK Overwrite Gotcha

Reading Time: 2 minutes

I was writing some unit tests: largely for completeness I wanted to test that you couldn’t insert two records with the same primary key. The code is simple enough.

    var car = db.Cars.OrderBy(g => Guid.NewGuid()).First();
    var pool1 = db.Pools.OrderBy(g => Guid.NewGuid()).First();
    var driverName1 = Guid.NewGuid().ToString().Trim('{', '}');
    var created1 = PoolAllocatedCar.Create(car, pool1, driverName1);
    db.PoolAllocatedCars.Add(created1);

    var pool2 = db.Pools.OrderBy(g => Guid.NewGuid()).First();
    var driverName2 = Guid.NewGuid().ToString().Trim('{', '}');
    var created2 = PoolAllocatedCar.Create(car, pool2, driverName2);
    db.PoolAllocatedCars.Add(created2);

    Assert.Catch<Exception>(()=> db.SaveChanges());

Car Id is the sole primary key of the PoolAllocatedCars table.
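
For context, the entity looks roughly like this (a sketch – the property names and the body of the Create factory are assumptions on my part):

    public class PoolAllocatedCar
    {
        // using System.ComponentModel.DataAnnotations;
        [Key]                            // Car Id is the sole primary key
        public int CarId { get; set; }
        public int PoolId { get; set; }
        public string Driver { get; set; }

        public static PoolAllocatedCar Create(Car car, Pool pool, string driver) =>
            new PoolAllocatedCar { CarId = car.Id, PoolId = pool.Id, Driver = driver };
    }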

Being a bit lazy, I guessed it was quicker to run the code and find out what exception SaveChanges() threw than to trawl through the docs and work out what it should be.

The problem: it didn’t throw an exception. So I added some debug to find out what had happened; the result is disappointing, to say the least.

---===*** FAILED TO NOTICE PK CLASH ***===---

In Memory:
    Car Id [24] Pool Id [49] Driver [bda1d05c-8dae-4648-ab42-736eb8c44b71]
    Car Id [24] Pool Id [08] Driver [9dae73a0-5b8d-45bc-9d0a-f8b73141aa2c]

In Database:
    Car Id [24] Pool Id [08] Driver [9dae73a0-5b8d-45bc-9d0a-f8b73141aa2c]

It would appear that Entity Framework simply overwrote the first record with the second without giving any indication that there was ever a primary key clash.

Now I’m sure that somewhere in the documentation there’s a warning or a note about this but I haven’t found it yet…

Update: It Gets Worse

I was taken aback by the above, that it could be deemed acceptable to treat the explicit addition of a second object with the same primary key as an implicit update with no warning to the user.

I guess then I shouldn’t have been surprised that it even does this after an explicit call to SaveChanges().

The following test gives exactly the same result as the first. This is a massive gotcha.

    var car = db.Cars.OrderBy(g => Guid.NewGuid()).First();
    var pool1 = db.Pools.OrderBy(g => Guid.NewGuid()).First();
    var driverName1 = Guid.NewGuid().ToString().Trim('{', '}');
    var created1 = PoolAllocatedCar.Create(car, pool1, driverName1);
    db.PoolAllocatedCars.Add(created1);

    db.SaveChanges();

    var pool2 = db.Pools.OrderBy(g => Guid.NewGuid()).First();
    var driverName2 = Guid.NewGuid().ToString().Trim('{', '}');
    var created2 = PoolAllocatedCar.Create(car, pool2, driverName2);
    db.PoolAllocatedCars.Add(created2);

    Assert.Catch<Exception>(()=> db.SaveChanges());

The good news is that if you use a different DbContext it throws an exception – in fact on the Add, not the SaveChanges.
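
For illustration, the different-context variant looks something like this (a sketch – the context class name is assumed, and the exact exception type will depend on your EF version):

    using (var db2 = new CarPoolContext())   // assumed context class name
    {
        var duplicate = PoolAllocatedCar.Create(car, pool2, driverName2);

        // With a separate context the clash surfaces straight away, on the
        // Add rather than on SaveChanges:
        Assert.Catch<Exception>(() => db2.PoolAllocatedCars.Add(duplicate));
    }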

I’m having trouble getting my head around this: I can’t see the logic. If you thought there was a chance that you’d want to update an object after you’d added it to the table then you should keep your own reference to it. If that’s a problem because of scope then your design is probably wrong.

A Tender Subject

Reading Time: 7 minutes

I’d barely got into the corridor before I found myself being bundled into the wall by a rotund sales guy. “What the fuck do you think you’re doing?” he barked in my face. “If you ever pull a stunt like that again I’ll make fucking sure we put you out of business. Consider this a warning.” He stuttered a little and I could see the fear in his eyes: he was not just angry, but terrified. “And I don’t give second warnings. Understand?” he eventually added, and with that marched angrily away.

I’d got maybe 15 metres further before another salesperson came up beside me and sniped, “If that’s how you’re going to behave you can forget ever partnering with us,” before he too shuffled off.

What had I done to deserve such attention? Insulted their mothers? Murdered a puppy? No, just given a sales presentation.

One of the ways the UK government keeps track of technology is through seminars. They take various forms, but this was essentially a day-long event where vendors of a particular type of technology were invited to present their products to a bunch of interested parties within government.

Somehow – despite being a tiny little firm – we’d managed to blag our way onto the list. I’d been to these events before, as the “tech guy” that answers the difficult questions so I knew the format. What I hadn’t done before was actually sit through a whole day of them.

They were dismal. The scripts were poorly thought out, badly structured, omitted important details and contained much that was irrelevant. The presenters were worse: monotonous at best, incoherent and rambling at worst. By the second or third presentation I was genuinely ashamed that this was supposedly the best my industry could offer.

By the second session after lunch half the audience were asleep and I was reasonably convinced that the other half were dead.

When we eventually took to the stage my opening words after introducing myself were “I’m sure you’ll be delighted to hear that this is the only Powerpoint slide I’m going to use.” I then talked for a few minutes about who we are and what the product was whilst my colleague set up a fully working system. We put it up on the projector, handed our various devices to the audience and spent the rest of our time role playing live scenarios that demonstrated the product features.

It was a bold move and quite a risk: there were some very senior civil servants in the room who could have taken umbrage, but they didn’t – they did the exact opposite. At the insistence of one of them our slot was extended to allow for an additional 5 minutes of questions. My colleague and I both ran out of business cards in seconds and took to writing our details on the backs of other people’s. From a sales point of view it could not have gone better for us.

That’s all I’d done. I hadn’t bad-mouthed the competition, said anything that was untrue or in any way behaved unprofessionally. We turned up, we told jokes and we tried to be as engaging as we could whilst demonstrating a live system that could be set up in less than 5 minutes.

That is the kind of behaviour that the competitors needed to stop. For 30 years they’ve been telling government that software systems are big, complex and expensive. That they take years to develop, months to install and configure and that they require huge, expensive racks of equipment.

Then two guys turn up with a laptop and a bag of kit and prove that none of this is true.

The issue is more complex than one of straight deception, though. These organisations were formed at a time when these things were all true. The organisations themselves have grown to provide a heavyweight framework. They have little momentum and colossal inertia.

They also have huge problems with legacy code-bases. The requirements of government tend to evolve slowly, hence re-writes are rare. Instead old code lumbers on and on and velocity nosedives as technical debt increases [more on that in a minute] and new development techniques, tools, frameworks and languages accelerate the pace that can be achieved within the industry in general.

These factors have fostered a framework within government that has grown to expect and indeed facilitate them. The tendering process results in excessively detailed documents that emphasise the minutiae and neglect the actual business requirement for the system. These are often produced by committees and are stitched together from requirements drawn from multiple sources – in other words they’re actually just wish-lists, and can at times be incoherent.

This is a thoroughly unhealthy relationship: it’s not just wasteful, it’s actively counterproductive. It stifles technical innovation and guarantees mediocrity.

More than that, though, it damages the engineering quality of the products, as developers rush to strap on extra features and modify existing ones to meet the letter of the requirement – when in reality the product was designed to solve the overall business need, and does so perfectly adequately.

This is often a panic scenario too. In order to gain a higher score on the tender a business will commit to the absolute limit of what it can achieve – often a little more. The result is that, in order to meet the delivery, development has to cut every corner possible. Inevitably this piles on the technical debt.

As the development team crashes from tender to tender its velocity plummets, the business throws more developers at the problem and you end up where we are today. The products are bloated, resource hungry, hideous to configure, often still include archaic technologies that (now) require special environments and are generally impossible to demonstrate anywhere but a dedicated, static suite specifically for the purpose.

That is why 2 people turning up to a sales seminar with a laptop and a bunch of hand-held devices and doing a fully interactive live demonstration caused such a furore. Yes I do believe that everyone else there on that day was at least one of: incompetent, phoning it in or just plain burnt out. None of them however could have done a live demonstration even if they’d wanted to.
To the competitors watching it must have seemed like we’d taken the bar for a sales presentation, which was set barely higher than the crash-mat, and smashed it through the roof. There was no way they could compete.

The reality however is that we weren’t a real threat to them. If we wanted to win tenders then we had to tick boxes. We could undercut them only a handful of times before the same thing happened to us as happened to them 30 years ago. Thankfully we recognised that and stopped responding to additional tenders.

I have to admit that I don’t know what the answer is. As a business or an individual we can make purchasing decisions however we like. If we see something that doesn’t meet what we thought our requirements were, but is actually a far better solution if we think about the problem in a different way, we can buy it and change the way we work.

Government is different, a democracy must always be accountable to the people. The government must be able to justify why it chose supplier X over supplier Y. A qualitative response is always going to be open to challenges. It is far more robust from a point of view of the accountability of government to take a quantitative approach: supplier X was able to meet more of the requirements than supplier Y. It is also more legally robust, many of the suppliers are large companies that are not afraid of litigation if they believe it might give them a competitive advantage.

One could perhaps say that this adds weight to the argument that the government is trying to do too much and that many of these key areas should be privatised, where a more effective purchasing strategy could be implemented. This however has its own problems: ultimately it’s still public money, and all this does is move the point of accountability higher up. The same end could be achieved by just increasing the threshold at which a tendering process is required.
I’m also not particularly keen to live in a country that has a private Police Force and I rather suspect I’m not alone.

The counter-argument is that all such systems should be developed in-house by the government itself. In an ideal world this would be the answer. Unfortunately the world isn’t ideal and the list of reasons why this is a terrible idea is really rather long.

However we spin the model and try to look at it a different way, it’s difficult to get away from the idea of a tendering system that’s essentially scored quantitatively. There have been schemes to try to improve the situation, such as bringing people from the software industry in ahead of the tenders being written; however, their influence has to be carefully managed in case of any allegations of impropriety, and they are inevitably associated with the very organisations that will be responding to the tender. Thus this has not proved an effective means of improving the situation.

I rather dislike finishing an article without at least suggesting something that I think might improve the situation, but I have to admit that I’m struggling with this one.

—=== END ===—


I’ve had a number of comments since writing this article. I can split them largely into 3 categories.

The saddest is the number of people who’ve also reported being directly threatened by larger firms within the industry.

I hadn’t realised how much technical jargon I’d used. Basically “velocity” is the speed at which you can make changes to a product and “technical debt” is what happens when you keep making changes without the proper procedures:
eventually you end up with a big bird’s nest and it gets harder and harder to make changes and it all gets more and more out of control.

A few people within the industry have commented that I’ve made some generalisations and simplifications. This is true, it’s a trade-off between writing a fastidiously accurate article and writing one that makes the point sufficiently well but is engaging enough to appeal to a wider audience.


Tom Fosdick is a Software Architect specialising in Control Room systems for the Emergency Services. This article refers to an event that happened in a previous role.

Interlocked.Exchange and the Atomic Option

Reading Time: 4 minutes

Developers are now having to deal with concurrency issues far more than we ever have in the past. Our languages are evolving more and more features (like the Parallel class in C#) that allow us to take advantage of this without too much pain.
These are great, but they’re not always the answer and they don’t negate the need to understand the fundamental issues of concurrent programming. This is in fact the very subject that this blog started with several years ago...

Today I’d like to talk about Interlocked.Exchange. Its purpose is to exchange two values in an atomic way. It’s the atomic nature of the exchange that’s the important factor here: it means that nothing can catch the exchange in an incomplete state. It’s either completely one value or completely another value.

Thread Safe Exchange

The significance of this becomes evident when we consider the traditional method of exchanging two variables.

var spare = first;
first = second;
second = spare;

This is not thread safe. The problem is that a second thread could interleave into the first. Consider the following path of execution. Let’s assume the value of first is 5 and the value of second is 10. The table shows the values of the variables as two threads interleave.

Thread  Code                first  second  spare 1  spare 2
1       var spare = first;  05     10      05
1       first = second;     10     10      05
2       var spare = first;  10     10      05       10
1       second = spare;     10     05      05       10
2       first = second;     05     05      05       10
2       second = spare;     05     10      05       10

Oh dear… that didn’t work, did it?

If we’d used Interlocked.Exchange we wouldn’t have the problem: because the exchange is atomic, no interleaving can take place. The first thread will finish, then the second will take over.
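
In code, the atomic version of the swap is a one-liner (a sketch – note that the write back to second is still an ordinary assignment; it’s the read-and-replace of first that is guaranteed atomic):

// using System.Threading;
// Atomically replace 'first' with 'second', capturing the old value
// of 'first' in a single indivisible step:
second = Interlocked.Exchange(ref first, second);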

Thread Safe Changes of Scope

Where it gets really interesting, though, is that Interlocked.Exchange returns the original value.

public static T Exchange<T>(
    ref T location1,
    T value)
    where T : class

The beauty of this is that we can use this to change scope. There’s a wholly unsafe pattern that gets used in Dispose handlers all the time:


class Blah: IDisposable
{
    SomeDisposableType someObject;

    ...

    public void Dispose()
    {
        if(someObject != null)
        {
           someObject.Dispose();
           someObject = null;
        }
    }
}

It’s possible that two threads could interleave between the null check and someObject being assigned to null, resulting in someObject being disposed twice.

Alternatively…

public void Dispose()
{
    var myCopy = Interlocked.Exchange(ref someObject, null);
    if(myCopy != null)
    {
        myCopy.Dispose();
    }
}

What the Interlocked.Exchange does here is to set someObject to null in the object scope and return its (former) value to myCopy which is in the method scope.
If two threads call Dispose at exactly the same moment, then both of them will succeed in setting the value of someObject to null. In the case of the thread that calls Interlocked.Exchange first, it will return the original value of someObject to its myCopy and set someObject to null. When the second thread calls Interlocked.Exchange it will return the value that the first thread set someObject to, that being null. It will then proceed to set someObject to null again.
The effect is that someObject is set to null twice, but only one of the threads gets the original value of someObject, so only one will pass the null check and only one will call Dispose on someObject.

Note that this isn’t a good Dispose pattern full stop, however; Microsoft have written some guidelines.

This scope switching trick can be useful in other places too. Consider if you’re writing a log file of some description. Writing an ever-expanding log file causes problems, it’s good to have a cut-off and write a new one every so often. A common method is to write one file per day of the week.

public class LogWriter : IDisposable
{
    StreamWriter writer;
    public string FilePath {get;set;}
    ...

    public void WriteLog(string s)
    {
        writer.WriteLine(s);
    }

    public void StartNewDaysLog()
    {
        var newWriter = new StreamWriter(FilePath + DateTime.Now.DayOfWeek.ToString() + ".log", false);
        var oldWriter = Interlocked.Exchange(ref writer, newWriter);
        if(null != oldWriter)
            oldWriter.Close();
    }
    ...
}

With this implementation multiple threads can safely call WriteLog continuously. When StartNewDaysLog is called a new StreamWriter is set up ready to go, then the two are switched in an atomic fashion. Nothing can catch this out half way through the switch – as far as anything calling WriteLog is concerned one entry was written to one file and the next to another: it’s seamless.
After the switch, StartNewDaysLog is left with the old StreamWriter which it then has to Close (which in turn calls Dispose).
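
A sketch of how that might be driven (the file path and the scheduling here are assumptions, and simplified):

// Any thread can log at any time:
var log = new LogWriter { FilePath = @"C:\logs\service-" };
log.StartNewDaysLog();    // opens e.g. service-Monday.log
log.WriteLog("started");

// Meanwhile a timer rolls the file once a day, safely interleaved
// with concurrent WriteLog calls:
var timer = new System.Threading.Timer(
    _ => log.StartNewDaysLog(), null,
    TimeSpan.FromDays(1), TimeSpan.FromDays(1));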

Conclusion

Interlocked.Exchange is a surprisingly useful little tool. Its plain usage – to simply exchange a value in a thread-safe way – is useful, but where it really comes in handy is in its ability to replace a value and return the original one into a narrower (thread-safe) scope. This is particularly handy if you need to move or consume something in a simple way that doesn’t justify graduating to a mechanism such as ReaderWriterLockSlim or SemaphoreSlim.

Connectorum Reseatorum

Reading Time: < 1 minute

Compact!
Last time I talked about De Morgan’s law which I learnt in electronics but still use in computer science.

Today I want to talk about a revered piece of arcane wisdom, a ritual handed down from electronics master to apprentice according to the strictest laws of tradition. It’s called “Connectorum Reseatorum” and I just used it to raise my compact camera back from the dead.

What they don’t want you to know is that you don’t actually need any complex electronics expertise to perform the ritual. All you do is take the case off and then one-by-one take each connector apart, clean the contacts (surgical spirit / rubbing alcohol works quite well) and then put the connector back together again.

It’s surprising how many electronics faults are just down to a bad connection and can be fixed simply by cleaning contacts and removing dirt.

Break the Line, Change the Sign

Reading Time: 2 minutes
Ipswich Civic College [(c) EADT]

Shirt and tie, green tank top, brown jacket with leather patches on the elbows and horn-rimmed spectacles. The image of my college maths tutor yelling “break the line, change the sign!” is still burnt into my brain.
Nevertheless De Morgan’s Law is one of those things from my days as an electronics research technician that’s still useful today – so it was worth it.

I needed to change an if statement around. Originally it was like this:

if (null == cert || !cert.HasPrivateKey)
    doMainPart();
else
    doElsePart();

But I wanted to switch the main and else part around, so I wanted to reverse the result of the condition. I could have done this…

if ( !(null == cert || !cert.HasPrivateKey))

But instead I employed De Morgan’s Law. To invert the meaning of the entire condition the first step is to invert the meaning of each individual term of the condition. So:

  • null == cert becomes null != cert
  • !cert.HasPrivateKey becomes cert.HasPrivateKey

The second step is to change the operators that combine the terms, so OR becomes AND and vice-versa.
Thus (null != cert && cert.HasPrivateKey) gives the exact opposite result to (null == cert || !cert.HasPrivateKey).

if (null != cert && cert.HasPrivateKey)
    doElsePart();
else
    doMainPart();

It’s really easy to tie yourself in knots with this kind of stuff, remembering De Morgan’s law can save a lot of heartache.
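
If you’re ever unsure, it’s quick to convince yourself by brute force – a minimal sketch that checks all four combinations:

// using System.Diagnostics;
// Check both forms agree for every combination of
// (cert == null) and cert.HasPrivateKey:
foreach (var certIsNull in new[] { false, true })
foreach (var hasKey in new[] { false, true })
{
    var original = !(certIsNull || !hasKey);  // inverted original condition
    var deMorgan = !certIsNull && hasKey;     // De Morgan'd version
    Debug.Assert(original == deMorgan);
}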


If you’re wondering why it’s “break the line, change the sign”, this is because of the way boolean logic is written in electronics: if you want to invert the meaning of something you don’t put a ! in front of it, you draw a line over the top of it.
So our condition would have started as:

                __________________
null == cert || cert.HasPrivateKey

We then want to invert the entire operation, so we draw a line right over the top…

__________________________________
                __________________
null == cert || cert.HasPrivateKey

Now we apply De Morgan’s Law: we break the line and change the sign.

____________    __________________
                __________________
null == cert && cert.HasPrivateKey

Of course the two inversions on the right-hand side cancel each other out, so we finish with:

____________
null == cert && cert.HasPrivateKey

Or in C#:

null != cert && cert.HasPrivateKey

Social Personality

Reading Time: 2 minutes

Singapore

I made a mistake when I joined The University of Hull back in 2008. I don’t mean that joining university was a mistake, it was one of the best career decisions I’ve made. I made a mistake with the way I used social media.
Part of the role at The University was to be an ambassador, to represent and promote the university and the field of computer science. “No problem,” I thought, “I’ll just use my existing social media accounts.” That was the mistake.

It led to two problems. The first is that my online persona changed. I suddenly became aware that I was using the account for professional purposes. That changed the image of the account: it became my professional persona, and it became very difficult for me to be the person that my friends know outside of the professional environment.
The second problem is that I started adding professional “friends” and followers.  That reinforced the first problem.
I went from someone who’d been very active on social media to someone who only posted the most carefully filtered content.

If you are a professional you have to be somewhat circumspect, once you’ve hit the “send” button you’ve lost an element of control – it’s out there. Even friend-locked posts and old blog articles can resurface at inopportune moments.

At the last interview I attended, one of the interviewers let slip something that they couldn’t have known unless they’d read my blog – and not only that, but it wasn’t in the most prominent article either. As a professional, your online presence matters.

Fortunately the Victorian image of professionalism is now fading. I maintain however that a professional image is important. It’s about giving your clients and peers confidence that when you turn up to work you’re going to do a good job. For instance, if your social media streams contain a disproportionate amount of pictures and stories of you partying with your friends to all hours that’s going to damage your professional credibility.
A balance still needs to be maintained.

For myself however I no longer work for The University, in fact being an ambassador for computer science is no longer an official part of my job at all. So you’d think it would be easy for me to rectify those mistakes I made back when I joined The University and revert my social media personas back to being more like the real me. You’d think that. It appears to be proving more difficult than I thought, however.

Avoid Constructors that Throw Exceptions

Reading Time: 2 minutes

Even ones in the CLR itself…

It’s well known that throwing exceptions in constructors is a bit dodgy (mainly because of possible memory leaks), but things recently got a bit weird.

The code worked fine on my machine; on my colleague’s machine, however, Visual Studio’s unhandled exception dialog kept popping up. After he hit the continue button everything seemed to work OK, but it was still unnerving.

To cut a long story short I’d written something a bit like this[1]

        private async static Task<TcpClient> GetConnectedClient()
        {
            try
            {
                return await Task.Run<TcpClient>(() =>
                {
                    return new TcpClient("127.0.0.1", 80);
                });
            }
            catch(SocketException ex)
            {
                Console.WriteLine(ex.ToString());
                return null;
            }
        }

That particular overload of the TcpClient constructor tries to open a connection and throws a SocketException if it fails. The exception should be marshalled through the await to the catch, but it was all going a bit strange.

So I changed it to something a bit like this[1] and it all started to behave properly.

        private async static Task<TcpClient> GetConnectedClient()
        {
            try
            {
                var client = new TcpClient();
                await client.ConnectAsync("127.0.0.1", 80);
                return client;
            }
            catch(SocketException ex)
            {
                Console.WriteLine(ex.ToString());
                return null;
            }
        }

Underneath the bonnet (or hood, if you’re not British) ConnectAsync starts a Task to manage the older BeginConnect asynchronous mechanism. Nevertheless, exceptions thrown in this scenario are marshalled properly with no weirdness.

Now ultimately both versions of the code seem to work correctly, but what is clear is that there is something different about the exception handling in constructors. So I’d recommend not just avoiding throwing exceptions in your own constructors but also, if you can, avoiding using constructors that throw exceptions.


[1]In order to make the point clear I’ve simplified these examples to the point where, as code, they’re not terribly useful in their own right. I certainly wouldn’t recommend using these as any kind of template.

Because Great Things Grow From Seeds…

Reading Time: 5 minutes

Way Back When…

I’d been working for Seed Software for a few short weeks when the manager announced that he was going snowboarding for a week. “Who’s in charge whilst you’re away?” I inquired, “You are!” came the reply.

Over a year ago I posted the story of how I came to work for Seed Software, but the story didn’t end there. There’s the small matter of what happened between then and me leaving Seed in October 2015. It was quite an experience. I’m not sure if people take me entirely seriously when I say that I learnt as much in Seed as any of the interns or students but it is nevertheless true.

So I’d just about worked out where the stationery cupboard was and suddenly I was being asked to run the business. This was definitely not in the job description for a software developer, but before I made that point I took a moment to think about it. I’d spent the past few years in my previous company trying to convince the senior management that some software developers understood more than just matters technical. I’d had some success, but here was an opportunity to prove it by stepping right into the front line of running a business, if only for a week.

Nothing much happened, it was rather an uneventful week. I don’t know if I was more disappointed or relieved. Nevertheless it cemented my position as being very actively involved in the running of the business of Seed Software.

After that I got down to trying to learn WPF and WCF, neither of which I’d used before, and to building a Command and Control system. I got out on the road a bit too: the C&C was very much developed with the Fire Service, which meant frequent visits to site with the latest developments to make sure we were all heading in the same direction. I’d also been to a few of the other sites because, although the other products were managed by the Seed Manager, there was only one of him and we needed some resilience.

It was that need for resilience that soon bit us, “Tom,” the Seed Manager said, “Erm… I’ve double-booked myself. I don’t suppose you could cover a sales presentation next week? I’ve got the slides and everything.”

This wasn’t entirely unexpected. I’d known before I joined Seed Software that it was just a software development team. There were no sales, marketing or operations staff. A business can’t survive without these functions though which left only one conclusion: the development team were doing them. This is actually one of the things I found exciting about Seed, if the business was going to work and I was going to be successful within it I knew that I was going to have to get involved in these functions to a much greater level than I ever had been before. Sure I’d been to sales presentations, I was actually a bit of a regular, but I’d always been the “technical expert” that answered the questions that the salesperson couldn’t. I’d never actually delivered a sales presentation before.

As it turned out Seed’s presentation was part of a much larger event where several suppliers were pitching their wares at a group of senior fire officers from many services across the country.

This was a great learning opportunity for me – I was on relatively late in the day which meant that I had a lot of time to watch what the others presenters did and tune my own performance. I was expecting swish, professional salespeople to glide in and deliver polished shows that would make mine seem shambolic and amateurish.

That is not what happened. They were all professional enough but there was no performance, no spark, no charisma.

By my slot half the audience had been struck down with a nasty case of…


This is where it struck me just how different Seed Software really was in 2009: there was nobody else like us there. I had some Powerpoint slides, just to get across some of the key information, but most of our sales presentations were live demonstrations of the kit. This, it transpired, was a breath of fresh air. I was able to engage with the audience; sleepy heads popped up and started asking questions. I ran out of business cards and had to start writing my details on the back of potential customers’ ones.

I was beginning to settle in to Seed, I was a lead developer, software architect, product manager, deputy business manager and occasional sales guy. The phrase “can do attitude” crops up in a lot of places and generally it means something it’s not supposed to but in 2009 / 2010 Seed Software embodied it in its true sense. It didn’t seem like there was anything that we couldn’t make work somehow.

2011 was not so kind to us. The business was growing too fast for The University to react and we were all having to put in way too much work just to keep our heads above water. To make matters worse the Seed Manager ran into a spot of bad luck – a couple of serious accidents ruled him out for extended periods of time. I found myself trying to develop the Command and Control, project manage delivery to the first control rooms and the subsequent go-lives and manage the business of Seed itself.

It was insane: one day I looked at my timesheet and I’d accumulated 28 extra days of time-off-in-lieu. I had to offload the management of the business to the department’s Enterprise Director, or I would have burnt out.

Despite the workload Seed was still a hugely positive, exciting place to work. What we’d achieved was pretty amazing too: two industry professionals and a bunch of students had successfully developed and delivered a mobilising system – the single most important computer system in a fire service – into two live control rooms.

The workload however was still out of control. Even with the Seed Manager back full time it was clear that we needed to make big changes. The Seed Manager position was actually a hybrid position, half developer, half manager. It was obvious that managing the business alone had now become a full time job. The role was therefore split into two, a senior developer and a dedicated business manager.

Preferring to retain a technical role, the Seed Manager left. I had also planned to leave – I knew that a chapter in the development of Seed Software was coming to a close. Seed was going to change, it was going to become more established, less dynamic. I also wanted to move back here, to Suffolk.

I did move, but I didn’t leave Seed. The University rather unexpectedly offered me a remote working contract. This threw up a whole myriad of new challenges. When I first became a remote worker I thought I’d be knocking out a steady stream of blog articles on what the problems were and how we were trying to solve them.

That’s a subject for another time however. As for the story, in October 2015 Seed was about 3 times the size it was in 2011, had a lot more products than it did and had begun to offer a support function. Brigid Command and Control was well established as the primary mobilising system in the control rooms of 3 of the UK’s Fire and Rescue Services.

That just about brings my story at Seed to a close. But what about the future of Seed? The changes I predicted have certainly happened: I believe that Seed Software will become a highly successful business.


Because… PostScript!

Reading Time: 2 minutes

Because it’s become somewhat of a tradition for me to do something silly on a Friday lunchtime, I thought I’d take on one of The Department’s basic coding challenges.

The brief was simple:

I want a program that will print out the numbers 1,2,3,4,5,6,7,8,9 in a shuffled order. The order must be different each time the program runs. Note that the sequence must be different each time. It should be possible to extend this to work with 52 numbers, in which case I can make a shuffled deck of cards.
You can use the Random number generator in C#, but you must make sure that the same number never appears twice, as a deck of cards which contains more than 4 aces has been known to raise suspicion.

As it was Friday lunchtime, however, I decided to make the solution anything but simple: firstly I replaced the numbers with the actual card names, and secondly I thought I’d write it in PostScript, because it demonstrates a totally different form of notation from the normal imperative languages like C# or Java.

/Suits [(Clubs)(Diamonds)(Hearts)(Spades)] def
/Cards [(Ace)(Two)(Three)(Four)(Five)(Six)(Seven)(Eight)(Nine)(Ten)(Jack)(Queen)(King)] def
/YCursorMax 720 def % 10 inches from bottom
/YCursor YCursorMax def 
/XCursorMin 72 def % 1 inch from the left
/XCursor XCursorMin def
/XColWidth 113 def % 1/4 of the printable page
/Helvetica findfont
12 scalefont 
setfont
% Build the deck as [0 1 2 ... 51] (the empty procedure leaves each
% loop index on the stack for the array operator to collect)
/Deck [ 0 1 51 {} for ] def
% Fisher-Yates shuffle: swap each position with a randomly chosen
% position at or after it, so no card can ever appear twice
0 1 50 {
    /SwapLeft exch def
    % pick SwapRight at random from SwapLeft..51
    52 SwapLeft sub realtime rand mul exch mod
    SwapLeft add /SwapRight exch def
    % swap Deck[SwapLeft] and Deck[SwapRight]
    Deck SwapLeft get
    Deck SwapRight get
    Deck exch SwapLeft exch put
    Deck exch SwapRight exch put
} for
% Print the shuffled deck in four columns of thirteen cards
0 1 3 {
    dup /Col exch def
    0 1 12 {
        Col 13 mul add Deck exch get % card number 0..51
        dup 13 mod                   % rank index 0..12
        XCursor YCursor moveto
        Cards exch get show
        5 0 rmoveto
        (of) show
        5 0 rmoveto
        13 div cvi                   % suit index 0..3
        Suits exch get show
        /YCursor YCursor 20 sub def  % move down one line
    } for
    /XCursor exch 1 add XColWidth mul XCursorMin add def
    /YCursor YCursorMax def          % back to the top of the next column
} for
showpage
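
For comparison, a straightforward C# take on the same brief might look like this (a sketch – it uses the same swap-with-a-random-later-card shuffle as the PostScript above):

using System;

class DeckShuffle
{
    static void Main()
    {
        string[] suits = { "Clubs", "Diamonds", "Hearts", "Spades" };
        string[] cards = { "Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
                           "Eight", "Nine", "Ten", "Jack", "Queen", "King" };

        // Build the deck as 0..51
        var deck = new int[52];
        for (var i = 0; i < deck.Length; i++) deck[i] = i;

        // Fisher-Yates: swap each position with a random position at or
        // after it, so no card can ever appear twice.
        var rng = new Random();
        for (var i = 0; i < deck.Length - 1; i++)
        {
            var j = rng.Next(i, deck.Length);
            (deck[i], deck[j]) = (deck[j], deck[i]);
        }

        foreach (var card in deck)
            Console.WriteLine($"{cards[card % 13]} of {suits[card / 13]}");
    }
}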