Mange Tout, Rodney

Reading Time: 2 minutes

What’s in a name? What if mispronouncing it were part of a secret language? Decades ago, I encountered exactly that. It didn’t end well.

Tech background: the Postgres database system was built to replace Ingres. Postgres is a portmanteau of “Post-Ingres”. When SQL support was added they did it again, and we got PostgreSQL. Many people call it “Postgress”; some add the SQL part in one way or another.

If there is a pronunciation of PostgreSQL that’s wrong, it’s Postgree. I can understand the confusion, though, so when the management team started saying it, I tried to be nice. As the company’s foremost database geek, I figured that if I dropped “Postgress” into conversation a few times they’d catch on. After all, if the management of a technical business mispronounces technical words, that doesn’t promote confidence in the business – and we were all pulling in the same direction there, right?

We were not. If anything, my “help” made matters worse. I even started to hear the overall technical lead call it Postgree, so I invited him out to the car park for a polite word.

“It’s a management thing,” he told me, “it’s one of the ways they distinguish if you’re one of them.”

Omitting much for the sake of brevity, a them-and-us culture had developed (or perhaps had been deliberately fostered). It included a secret language used by the management team. They were perfectly aware that all the buzzwords they were using were ridiculous and that they were, indeed, mispronouncing PostgreSQL. It was a tool that allowed them to talk openly but still exclude people from the conversation.

It won’t surprise you to know that the business struggled, in many ways.

I met up with someone from there a year or so back. “We’re dead,” he told me, “COVID has killed us.” The company couldn’t recruit technical people, which he firmly blamed on the rise in remote working. What he’d got wrong was the real reason: the company couldn’t afford to pay enough.

It’s true that some people focus only on the wage slip. It was a factor. Most of us are far more complex: we want job security, job satisfaction, camaraderie, things that make the course of the day a more pleasant experience.

That company still had a toxic culture. Nobody wanted to work there, but options in that region – before COVID – were limited. Now that tech people had much more choice, they were choosing not to work there.

In a market where good people are excruciatingly difficult to attract, if you want to build a successful business you have to build a positive and progressive culture and make it somewhere people want to work. It’s a necessary investment that’s good for all of us.

You Are Paying, So Take Control

Reading Time: 4 minutes

There’s no such thing as a free lunch. Everything has to be paid for somewhere, somehow. TikTok, Twitter, Instagram, Facebook, Google, none of these are free. If you’re not paying with money, you’re paying some other how.

“Look,” someone inevitably interjects at this point, “if Facebook wants to analyse my ‘likes’ and show me adverts for camera lenses I’m fine with that.”

I think most people would be OK with that, but that’s not what they’re doing. This isn’t an exposé of Facebook, however; that’s been done. Even if we hadn’t consciously surfaced it, we all knew Facebook was evil.

The others are alright though, surely? I mean Twitter just show you a few sponsored tweets, right?

One of the great advantages of social media advertising is that it’s possible to tell exactly how many times an advert is shown to users. The more the advert is shown, the more money Twitter makes. So, by extension, the more time you spend on Twitter, the more adverts you are shown, the more Twitter makes.

Most of these “free” social media platforms started off just showing you the latest updates from your network, in time order. Have you noticed how hard they’re all trying not to do that now?

It’s because they’re profiling you. They’re recording what you interact with, what you open, what you scroll past and combining that with other tracking data so they can doctor your timeline. They want to show you the stuff that’s most likely to keep you engaged, most likely to get you to view adverts.
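
The shape of this is simple enough to sketch. Here’s a toy engagement-ranked timeline in Python – to be clear, this is not any platform’s actual algorithm; the signals and weights are invented purely for illustration:

```python
# Toy sketch of an engagement-ranked timeline. The weights and fields
# here are invented for illustration -- real platforms use vastly more
# signals -- but the incentive structure is the same.

def engagement_score(post, profile):
    """Estimate how likely this user is to interact with a post."""
    score = 0.0
    score += 2.0 * profile.get(post["topic"], 0)   # topics you've engaged with before
    score += 1.5 * post["controversy"]             # controversy drives replies
    score += 0.5 * post["recency"]                 # freshness still matters, a bit
    return score

def build_timeline(posts, profile):
    """Time order is gone: highest predicted engagement comes first."""
    return sorted(posts, key=lambda p: engagement_score(p, profile), reverse=True)

posts = [
    {"id": 1, "topic": "cats",     "controversy": 0.1, "recency": 1.0},
    {"id": 2, "topic": "politics", "controversy": 0.9, "recency": 0.2},
    {"id": 3, "topic": "cats",     "controversy": 0.2, "recency": 0.5},
]
profile = {"cats": 0.3, "politics": 0.8}  # built from what you've interacted with

timeline = build_timeline(posts, profile)
print([p["id"] for p in timeline])  # the old, controversial post wins
```

Note that the newest post doesn’t come first: the controversial one does, because that’s what the model predicts will keep you scrolling.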

One of the unfortunate side effects of this can be explained by looking at headlines in the media, because newspapers have been doing it for centuries. We love sensation, we love controversy. The result is that the algorithms that manage our timelines are forever searching for more extreme content, more sensation, more controversy.

That’s why Twitter is such a hellscape of political extremism: the most extreme, the most controversial, the most sensational views are getting shoved in our faces by their algorithms. If you mutter “dimwit” to yourself and scroll past something, they’re not making any money. If it makes you angry and you fire off an angry response that makes other people respond angrily, they’re driving engagement and driving up their advertising revenue.

Nobody’s being consciously evil here, it’s just artificial intelligence (AI) working out what presses your buttons and manipulating what you see to do it more, all in the name of keeping you engaged so you view more adverts. It’s just unfortunate that the result is profoundly unhealthy, both for you and for society.

How do you escape? The obvious way is to pay directly for the services and the content you consume.

Until comparatively recently there were limited options here. In 2010, when I started this blog, paying to host a blog was pretty much your only option. Now the field is expanding fast.

The Fediverse might turn out to be significant. It’s a project supported by the software development community and (mainly) paid for by voluntary contributions. The whole idea is to create a system for interconnections that’s robust, but also that nobody can own. Sure, people can and do own individual servers, perhaps even systems, but not the whole ball game. If you’re in the Fediverse, you have to play nicely with other servers and other systems or you’ll not be in the Fediverse any more.

Don’t get me wrong, the Fediverse has problems; it has to evolve, and to keep evolving. If it can do that while maintaining its purpose and its integrity, it might define the next epoch of social media. Maybe. It seems we’ve said that a lot, and very few things ever do break through into the mainstream.

Unless you’ve been living in a cupboard, however, you’ll have heard of the Fediverse app Mastodon, primarily because it’s very much like Twitter. It’s not a drop-in replacement; there are differences, some of which are by design. Every Mastodon server has to be funded somehow. Some are actually free, run by generous people or businesses. Others (e.g. mastodon.org.uk) rely on contributions from the users, usually through Patreon or some other, similar mechanism.

That’s a neat segue. Patreon is one of a bunch of what are essentially subscription management services: they provide a platform for smaller producers to get paid for what they produce. The advantage, other than actually paying them (comparatively) fairly, is that the producers don’t have to game the content manipulation algorithms of advertising-funded services nearly as much. Clearly you have to find them somehow, but it means they only have to compete in the clickbait leagues with a small percentage of their output. They can do deep cuts.

For me, none of this is quite landing where I want it, but I wonder if it ever can. Wherever you sit, there has to be a compromise. Some people will always be happy with advertising-funded social media, no matter how unhealthy that might be for them personally or for society as a whole. Free-to-use social media will also always have a place in carrying the stories of poorer folk and poorer regions, as well as raising awareness and documenting what goes on in war zones and under oppressive regimes.

I’m not sure there’s any single right answer, but if there’s a wrong answer it’s to have an oligopoly of tech giants abusing people’s personal data and rigging their content – at any cost to them or society – to manipulate people into viewing more adverts.

The key, in whatever form it takes, will come from diversity and cooperation.

Government and Personal Accounts Don’t Mix

Reading Time: 6 minutes

The UK Minister for Food recently made a gaffe about using a personal phone. He might not realise how big a gaffe it was, however. His comments were part of a wider debate about the UK’s Home Secretary having admitted to using her personal email account for government business. Whilst government business should be carried out using government (approved) equipment and services, there’s a big difference between making a phone call and sending an email.

The TL;DR: phone calls are pretty secure but email absolutely is not. Read on and I’ll explain, as succinctly as I can.

On the surface of it, using one type of communication device or technology might seem much like another. In reality, the underlying technology and its security vary drastically.

Telephone Calls are Reasonably Safe

Your common or garden telephone in the UK is considered pretty secure. Your landline is connected to an actual cable that goes to a cabinet in the street. That cabinet is connected via real cables (or optical fibre) to the telephone exchange. Your call is then routed from there around a network owned and operated by BT[1], eventually working its way to the destination. It’s pretty difficult for a rogue actor to get access to that pipeline: they either need to tap the wire at one end or they need to get into BT’s secure network.

Mobile phones are a little more vulnerable. It is possible for a snooper who’s physically near either end to listen in to the radio signals between the phone and the mast and hear the call audio. Also, as more organisations get involved in anything, the risk of a compromise within one of them grows. A call routed from Vodafone through BT to EE is more vulnerable simply because there are three organisations involved.

There have been incidents where large telephone networks have been hacked, but it is relatively unlikely that unfriendly foreign organisations are listening in to telephone calls in the UK.

Naturally, government business should be conducted on government (approved) devices. There are many reasons for this, but let me give you just three:

  • People tend not to encrypt their phones or to protect access to them adequately. Both can be enforced by policy on a government phone.
  • Although it’s difficult to intercept an actual telephone call, some smart phones have been hacked to record audio and even video and relay that to rogue actors. Again, organisations can set policies to try to reduce this risk.
  • If a government phone is lost it can be remotely disabled immediately.

Email is Horrendously Insecure, End of Story

The basic protocol the internet uses for email is now more than 40 years old. It was developed when The Internet was a very different animal to what it is today. There have been a number of security updates since then, but there are still some big holes.

One obvious problem is data at rest. Email is a store-and-forward system: when you send an email from your phone or computer it goes to an email server, which then tries to work out what to do with it. That email server stores your email. Because email is not an end-to-end encrypted protocol, the mail server has access to the contents of your email, as does anyone who has sufficient rights (whether legitimate or hacked). I once demonstrated this to an unbelieving manager by changing emails that he sent.

What’s more, when data is written to storage it has a funny habit of hanging around. There are systems to try to make sure that deleted data is really deleted but not everybody uses them, the result being that it’s sometimes possible for a hacker to retrieve emails that passed through the server a long time ago.

Now let me take another angle: if you can receive emails, that means you have an email server somewhere that’s acting on your behalf. That email server is open to The Internet. If you’re foo@bar.com, I could connect to the email server at bar.com, say “Hi, I’ve got a message for foo” and under the original protocol your mail server wouldn’t even check who I was. Almost all do now check, but the checks aren’t 100% foolproof and it’s still possible to send emails that appear to be from people they’re not.
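
You can see how little the message format itself does for you: the From header is just text the sender fills in. A minimal Python sketch using the standard library – the addresses here are entirely made up:

```python
# Nothing in the email message format verifies the sender: the From header
# is whatever the sender chooses to write. (Modern servers layer checks such
# as SPF, DKIM and DMARC on top, but those live outside the message and
# aren't universally enforced.)
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "home.secretary@gov.example"   # entirely invented address
msg["To"] = "foo@bar.com"
msg["Subject"] = "Looks official. Isn't."
msg.set_content("The protocol took my word for who I am.")

print(msg["From"])  # a perfectly well-formed message, from nobody real
```

A receiving server that doesn’t check anything beyond the message itself has no way to know that address is fake.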

As sender and receiver, we also have no control over the path that the email takes. The vast majority are simple: I’ll send my email to the mail server at tomfosdick.com, which will look up your server at bar.com and transfer the email directly. As long as both email servers are uncompromised and the link between them uses up-to-date, strong encryption, that’s relatively secure.

But there’s no guarantee that will be the route that gets taken. It could end up going through an email server in Russia. It could go between two servers that aren’t using strong encryption or even any encryption at all.

There’s a whole library of different techniques and different ways that email can be compromised, intercepted, altered and faked. If it’s done well, as an end user it can be impossible to tell if it’s been compromised. Even experts can’t absolutely tell if a message has been observed by a rogue actor on its journey, or if it’s been left on an insecure server somewhere for a hacker to pick up at a later date.

A final note here: one of the reasons it’s important for government officials (including Ministers) to conduct government business only on government (approved) devices using government accounts is that those are monitored and logged. This is a completely separate reason why a government official using a personal account is a serious issue; it opens the person up to the allegation that they were deliberately avoiding scrutiny. There are times when Ministers need to do secret things, but there are protocols for that. Avoiding scrutiny is a pretty good sign that a government official is working in their own best interests, not ours.

WhatsApp et al are Comparatively Secure

A lot of newer messaging apps are considerably more secure than email. This is because they’re end-to-end encrypted: your phone (or web client) encrypts the message and only your intended recipient has the key to decrypt it. It doesn’t matter how many servers or other pieces of network equipment it passes through; they could all be compromised and it wouldn’t matter, because without the decryption key they can’t view the message contents.
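
The principle is easy to sketch. The XOR “cipher” below is a toy, emphatically not real cryptography (real apps use schemes like the Signal protocol), but it shows the shape of end-to-end encryption: everything in the middle only ever sees ciphertext.

```python
# Toy end-to-end encryption sketch. XOR with a random key is NOT real
# cryptography -- it's here only to show the shape: the key lives at the
# endpoints, so every relay in the middle sees only ciphertext.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor(message, key)   # encrypted on the sender's phone
relayed = ciphertext             # servers pass it along unread
assert relayed != message        # a compromised server sees only gibberish...
print(xor(relayed, key).decode())  # ...but the recipient's key recovers it
```

Swap the toy XOR for a modern authenticated cipher and that is, in outline, why compromising the servers in the middle gets an attacker nothing but ciphertext.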

Having said this, there is information in the metadata; an attacker who did manage to compromise the network might be able to see who the message was from, to, when it was sent, when it arrived, how big it was etc. This kind of information can be extremely useful, but unless a hacker can crack the encryption, they can’t view the message itself.

Of course today’s strong encryption can be cracked by tomorrow’s mobile phone, so just consider that if your data does get stored, someone in the future might be able to crack it.

Do be very aware that not every messaging service is end-to-end encrypted. Twitter direct messages, for instance, are not. Their contents are also stored by Twitter indefinitely. Not only could the Twitter organisation exploit the contents of your direct messages, a data leak could easily expose them. If you’re a government official there’s a reasonable risk your entire Twitter DM history could end up on WikiLeaks.

Wrapping It Up

For the vast majority of business and government needs, the good old telephone is plenty secure enough, but make sure that you comply with your organisation’s usage policies and don’t let your professional communications bleed across into your personal accounts.

For the majority of business, email is fine. The reality is that millions of emails are flying around all the time and only a handful have anything interesting or valuable to a hacker. Emails are also, generally, pretty secure within the organisation itself. If you’re sending an email from your professional account to the professional account of someone else in the same organisation, that should be relatively safe.

Do be aware that if you’re sending information to people outside your organisation there’s a chance that email might be compromised. The risk of anything bad actually happening is small, but it is there nonetheless.

A top tip is to remember that the telephone is comparatively secure. If you receive an email message that you are in any way concerned about, or you suspect anything not entirely straightforward, call the person.

Again, do not bleed professional stuff into your personal accounts. That’s a big no-no. Don’t, for instance, send a document to your personal email because you can read it better on your phone that way.

Newer messaging apps can be more secure, but check that they’re end-to-end encrypted and using an encryption technique that’s currently considered secure. You might be surprised how insecure some of the common platforms are.


[1] Yes, there will be people reading this and the words “well, technically, it’s not that simple…” will be on the tips of their tongues. I know; there is always a balance to strike between being technically accurate and boring the vast majority of readers into a stupor. You might consider T-REC-H.248.1 a little light reading before bed, but you’re a very, very niche minority.

Cancel Culture is Not Real

Reading Time: 5 minutes

John Cleese is a well-read, intelligent and usually eloquent man. He’s made some pertinent observations in the past, ones about which nobody can doubt his good intentions. However, I could say exactly the same about Enoch Powell.

Lately Cleese has swallowed the concept of Cancel Culture and is banging on about it like some old white men have become prone to in the past few years. Actually, I get his point, but the problem is – for the most part – his, not ours. Graham Norton hits the nail on the head: Cleese is finding himself accountable for his words for the first time, and he’s not dealing with that all too well.

Going after someone’s platform because you don’t like what they’re saying is nothing new. The soap box had barely been invented before it was kicked out from beneath a speaker because someone didn’t like what they were saying. It might be underhanded and cowardly, it might be a better world if nobody did it, but it’s commonplace and always has been.

What’s changed, then? Freedom from the consequence of your words is a privilege, but whereas in the past someone in a position such as Cleese would be above the threshold for that, they now find themselves below it. That’s it, pure and simple.

“But”, I hear you ask, “if it’s a matter of privilege, shouldn’t we be trying to extend that out to everyone?”

In an ideal world freedom of speech would be an absolute. But even in that ideal world, all freedom of speech means is freedom from sanction or oppression by the state (or state actors). In theory everything is (or could be) controlled by the government, so it’s paramount to the functioning of a democracy that you must be able to criticise the government without fear of sanction from the government or its agents. That’s the fundamental reason we have a right to freedom of speech.

There are two key points here:

  • You may speak, but nothing about free speech says anyone has to listen or give you a platform.
  • Your only indemnity is against sanction from the government and its agents. The right to free speech doesn’t protect you against any other consequences.

Yes, of course we can argue about the extent of the agents of the government, but if your local pub throws you out for trying to hold Combat 18 meetings there, that isn’t a freedom of speech issue.

Enter The Internet

Let me put a hypothesis to you. The Internet has changed our lives enormously. It’s facilitated (more) direct targeting, but it’s also added a horizontal layer across public channels that wasn’t previously there.

What do I mean? In 1968 you could go to the pub with your similarly minded friends and spout whatever nonsense you liked. You’d be very unlucky if there were any negative consequences – but that’s only because nobody who was interested heard you. Even politicians could get away with making inflammatory speeches to local party groups, because nobody outside the room was listening. Enoch Powell had to actually tell the media that he was going to “send up a rocket” in order to get himself cancelled, otherwise his ill-judged “Rivers of Blood” speech might have slipped by unnoticed.

The Internet (and technology in general) has changed that. You might subscribe to The Telegraph or The Guardian. Think of them as vertical channels, they feed you news based content on a variety of different topics, applying their own particular filters and biases.

In 1968 a lot of people kept newspapers for a few days, so that if something came up they could look back at what was being said. They were staying in vertical channels.

Ed: There was a nice visual link here to a Twitter post in which Rebecca Reid explains some of the above and another pertinent problem with British Journalism, but Space Karen has screwed up Twitter so badly that visual link previews aren’t working any more. You can still follow the old school link, however =>

https://twitter.com/mikegalsworthy/status/1584463739566583809

In 2022 if you want to find out what’s going on, you Google it, and Google doesn’t just give you your favourite news source, it gives you a selection of articles from all the major news sources. You can take a horizontal view, you can easily see what each different channel has to say about a particular topic.

This should be a great advantage, but people don’t do it because, sadly, people don’t like having their opinions challenged. Anyway, they’re not the people we’re talking about…

Expand this vertical versus horizontal concept to Twitter, Facebook, Instagram. Your normal audience on these platforms might be just your friends – the vertical – but they are public and unless you’ve locked your account, your posts can be found in searches and by algorithms covering any topic.

There are numerous groups and interested parties out there working on the horizontal, searching for, picking up on things and amplifying them. When someone with a significant platform says something they agree with, they amplify that. It gets retweeted, copied around Facebook groups, WhatsApp groups, people talk about it on YouTube and TikTok, etc. It can result in the person getting quite a boost, both in exposure but also directly through stuff like Patreon, Paypal, BuyMeACoffee etc.

Exactly the same thing happens when someone says something they disagree with. The signal gets amplified and as a result people start to go after the person’s platform, their employer, start campaigns to boycott the person’s products and businesses etc.

That’s it. That is the primary explanation for the illusion of Cancel Culture. The Internet giveth and The Internet taketh away.

Cleese; an awful lot of white suburbia, rent-a-gobs and bigots do like to stand on the battlements of their castles and yell at the peasants, certain that they are protected. But everyone’s castle is, ultimately, built on sand. Society, culture and technology change. If you don’t adapt to the changing sands, your castle will fall and you’ll end up confused, angry and lashing out at ghosts.

Nobody, it seems, is more resistant to change than old white men.

They Do Have A Point, Though…

At the top I said it was mostly their problem. The fact that something is the way it is doesn’t make it right. Of course it’s right that people should be held to account for their actions, even those who haven’t been in the past, but what happens is not always proportional or just.

Many years ago someone overheard me explaining The Great Replacement (a racist conspiracy theory) and mistakenly assumed I was advocating it. That person then set about what we might call today a campaign to cancel me. It took a lot of effort for me to counter that negative campaign.

Fast-forward that story to today. Imagine how much further, and faster, that negative campaign might have got. We can see this played out on social media time and again.

Sometimes it’s a few words taken out of context and suddenly that person is the enemy.

Other times someone might give a genuinely ill-informed opinion. By that I mean that their opinion was earnest, but it was based on something they believed that was wrong, or they didn’t realise they were missing critical information.

They might get a few responses saying “Hey, I think you should read this…” but the storm starts immediately. The saying “bad news travels fast” is much older than The Internet, but The Internet amplifies it greatly. Conversely, “Highly Knowledgeable Person Expresses Well Reasoned Opinion” never made a headline, so the defence, the full context, the revision of an opinion never has the reach that the initial sensationalism does.

Unjustified damage is done and valid, useful arguments are lost.

It’s a Mixed Bag, Then.

I’m hypothesising here, of course. I don’t know that The Internet and effortless global communication are the primary cause of these changes in our society, but at a kind of amateur sleuth level it seems rather plausible.

What we can say is that anyone who’s ever lived in a deprived area understands what accountability for their words means. As Ice-T so neatly observes, “Talk Shit, Get Shot.” Whilst we clearly want accountability to be fair, just and not involve getting “Sprayed with the ‘K”, we want it to apply equally to everyone. If all we’re seeing is accountability being extended to people who previously weren’t, that’s no bad thing.

Please Just Mute Geoff.

Reading Time: 3 minutes

In 2011 I became a remote worker. I was really surprised how easy it was, but I was working for the Department of Computer Science at the University of Hull so if we couldn’t make it work, that would have been a very bad sign.

COVID-19 has changed the game completely though. For us accustomed remote workers the results have been positive; in many ways it’s making our lives a lot easier. Over the past few months, however, we’ve been watching, and trying politely to advise, the rest of the world as they catch up with many of the social aspects.

By and large, with the occasional nudge, it seems that everyone has now learnt the core lessons. I’ll talk about them at another time, for now I want to talk about something that doesn’t seem to have made it into culture yet: the etiquette about muting in a meeting.

Background noise happens. It’s a fact of life. Whether it’s the builders next door or your partner on another call or a small Yorkshire terrier inexplicably named Fenrir, it happens. What’s more, for a variety of reasons, sometimes background noise can get amplified to unpleasant levels and broadcast to the entire meeting.

Two things we need to establish:

  • Being on mute is not a sign that you’re not contributing, or not intending to contribute. It’s a sign that you’ve learnt the shortcut key your system uses and that you respect the other people in the meeting. It’s very rare that you need to speak instantly and without warning. Get used to CTRL+D – talk – CTRL+D (if you use Google). It’s just basic politeness.
  • It’s not rude to mute other people if you’re getting background noise from them. Most systems allow this. If you’re using one where only the meeting organiser can mute other people, then it’s part of the organiser’s job. Obviously, if it’s convenient, point it out and ask the person to mute themselves, but if Tracy is talking and Geoff’s geese suddenly get spooked, then mute Geoff.
    Believe me, you do not need unsolicited contributions from geese in any meeting.

There are a couple of ancillary points. In the above case Geoff might have no idea how loud the geese are, because he may be using really good noise-cancelling headphones. Their algorithm might be entirely different to the one the meeting software uses, which might think that geese are really important contributors who need to be put front and centre. Background noise doesn’t mean that anyone’s doing anything wrong or that they’re being inconsiderate. It’s not a conflict situation; don’t treat it like one.

Finally, please invest in (at least) basic equipment. Laptop mics are awful, not least because of how far away they are from your mouth. What’s more, if you so much as look at your keyboard whilst using a laptop mic, the whole world will know about it. A good, basic headset is a huge improvement over a laptop mic.

The headset I use is one of these. There are a lot of similar headsets on the market at a similar price. If I worked in a noisier environment I might have paid for the advantage of active noise cancellation, but for me it’s not necessary.

If we bake these things into business culture now, if we make them protocol, it will make our lives just that little bit easier and our workplaces just that little bit more productive.

The Y2K Bug: Was It a Hoax?

Reading Time: 5 minutes

I’ve run into a few people recently who’ve told me that the Y2K problem, aka the Millennium Bug, was a hoax. In some ways it was, but let’s get one thing straight: the bug was very real, and if we hadn’t done a hell of a lot of work to fix it, things would have gone catastrophically wrong.

What was the problem then? In the 1950s every tiny piece of computer storage was critical. Programmers were always looking for ways to store and process information more efficiently. They didn’t think for one moment that their code would ever have to deal with the year 2000, so they decided to lop the “19” off the front of the year and just store the last 2 digits. 1958 was actually stored as “58”. If the user needed to see the full year then many systems simply printed “19” before the 2 digit year.
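
That shortcut is trivial to sketch in Python, along with the assumption that eventually broke it:

```python
# The 1950s-style space-saving shortcut: store only the last two digits
# of the year and glue "19" back on for display.

def store_year(year: int) -> str:
    return f"{year % 100:02d}"          # 1958 -> "58"

def display_year(stored: str) -> str:
    return "19" + stored                # "58" -> "1958"

print(display_year(store_year(1958)))   # fine for decades...
print(display_year(store_year(2000)))   # ...but 2000 comes back as "1900"
```

Two characters of storage saved per date, at the cost of an assumption that every year begins with 19.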

This wouldn’t have been much of a problem if it hadn’t made it out of the 1950s. Unfortunately every new generation of the tech industry builds on previous generations. Not only did the 2 digit year become a kind of industry standard, it also got baked very deeply into the code that actually ran the computers themselves.


By the time the 1990s rolled around there was an awful lot of computer code about and people started to realise that a lot of it was going to have to deal with the year 2000.

Suddenly You Find You’re Not Insured…

Let’s look at an example. Let’s say you renew your car insurance. The new policy starts on January 2nd, 1999. Now, you’ve been lucky, this computer program uses 4 digit years so you correctly see your expiry date as January 1st, 2000.

Unfortunately the database that all the records are stored in only uses 2 digit years, so the system writes a start date of 02/01/99 and an expiry date of 01/01/00 into the database.

The problem is obvious: when that record is read back the system will correctly convert 02/01/99 to January 2nd, 1999, but it will wrongly convert 01/01/00 to January 1st, 1900. Congratulations, as far as that computer system is concerned you’re not insured.
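
Here’s a sketch of that round trip in Python: two-digit years on the way into the database, and the fatal “every year is 19xx” assumption on the way out.

```python
# Sketch of the insurance example: the database keeps 2-digit years and
# the read-back code assumes every year is 19xx.
from datetime import date

def to_db(d: date) -> str:
    return d.strftime("%d/%m/%y")       # 2-digit year on the way in

def from_db(s: str) -> date:
    day, month, yy = s.split("/")
    return date(1900 + int(yy), int(month), int(day))  # the fatal assumption

start = from_db(to_db(date(1999, 1, 2)))
expiry = from_db(to_db(date(2000, 1, 1)))
print(start)            # 1999-01-02: round-trips correctly
print(expiry)           # 1900-01-01: the expiry is now 99 years in the past
print(expiry < start)   # True -- the policy "expired" before it started
```

The data going in looked perfectly sensible; it’s only on the way back out that the policy ends 99 years before it began.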

In that simple example you’d hope that, at some point, a human would see it and realise something had gone deeply wrong. The problem is that, even in 1999, there was an awful lot of processing going on, in financial systems even in safety critical systems, before the results ever got anywhere near a human.

The Ariane 5 rocket explosion was caused by a similar problem: the guidance system was capable of producing a much larger number than the software could deal with. This hadn’t been a problem on Ariane 4 because it couldn’t do anything to cause such a number to be generated. Ariane 5, however, could, and 37 seconds after main engine ignition on June 4, 1996, it did – ultimately causing the rocket to self-destruct.

That’s why we had to fix the Y2K bug, because pretty much everywhere there was a date in computer code there was potential for things to go badly wrong.

It Wasn’t Just Dates…

What’s more, it wasn’t just the obvious cases we had to worry about. There were more subtle implications of the bug. Consider the following output from a little example program I wrote. It gives you the expected arrival time of a plane and its current altitude both in feet and metres.

SIGN   DATE         TIME   ALT(m)  ALT(ft)
Y2K00  1990/11/01   00:00  5000    16384
Y2K01  1991/11/01   00:35  4900    15872
Y2K02  1992/11/01   01:10  4800    15616
Y2K03  1993/11/01   01:45  4700    15360
Y2K04  1994/11/01   02:20  4600    14848
Y2K05  1995/11/01   02:55  4500    14592
Y2K06  1996/11/01   03:30  4400    14336
Y2K07  1997/11/01   04:05  4300    14080
Y2K08  1998/11/01   04:40  4200    13568
Y2K09  1999/11/01   05:15  4100    13312
Y2K10  19100/11/01  05:50  4000    49
Y2K11  19101/11/01  06:25  3900    49
Y2K12  19102/11/01  07:00  3800    49
Y2K13  19103/11/01  07:35  3700    49
Y2K14  19104/11/01  08:10  3600    49

There’s one thing you might expect: when it got to the year 2000 it printed out 19100. The program stores the date as 2 digits and simply prints “19” in front of them. That was a pretty typical Y2K bug: the 2 digit year ticks over from 99 to 100 and gets printed as “19100”.

What might be surprising is that after the year 2000 it completely screws up the calculation of how high the plane is in feet. The calculation before the year 2000 is (approximately) right. Afterwards it just prints “49” however high the plane is.

This is because, when I wrote the program, I only allocated enough storage for 2 figures in the year. When it came to after the year 2000 however, the program wrote 3 figures regardless. What it did was to write the extra “1” to some storage that was being used for something else – in this case to store the height in feet. 49 is the value a computer would send to the screen if it wanted to print the number 1.
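My original was a little program in a language with fixed-size buffers; here’s a rough reconstruction of both failures in Python, using ctypes to fake the same kind of memory layout. The struct itself is an assumption for illustration:

```python
import ctypes

class Record(ctypes.Structure):
    """Two adjacent fields, laid out the way a fixed-format record would be."""
    _fields_ = [("year", ctypes.c_char * 2),   # room for "00".."99" only
                ("alt_ft", ctypes.c_uint16)]   # the neighbouring storage

# Failure one: print "19" in front of the 2-digit year counter.
years_since_1900 = 99 + 1            # the counter ticks over at midnight
print("19%d" % years_since_1900)     # prints 19100, not 2000

# Failure two: the 3-digit year is written into a 2-byte field,
# so one byte spills into the altitude next door.
rec = Record(b"99", 13312)
ctypes.memmove(ctypes.addressof(rec), b"100", 3)   # one byte too many
print(rec.alt_ft)                    # no longer 13312: corrupted by a stray digit
```

In this sketch it happens to be the trailing “0” that lands on the altitude; in my original program it was the “1”, and 49 is the character code for “1”. Which digit spills depends entirely on how the write is laid out, which is rather the point: the corruption is arbitrary.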

Again, in my little program this gets printed to the screen and you’d hope that someone would notice. What it highlights, however, is that the damage can surface somewhere else in the code and affect something other than just the date. That corrupted value could be the radiation dose of a chemotherapy patient, and it might never get seen by a human before it’s delivered…

I hope that makes it abundantly clear that the Y2K bug was very much real and that the consequences could very definitely have been catastrophic. The idea that the bug could have caused planes to fall out of the sky is not and was not scaremongering. It was entirely possible. Indeed, if we had somehow sleep-walked through to the closing minutes of 1999 without realising there was a problem, it was a relatively likely consequence. We did realise, however, and we did a hell of a lot of work to fix the problems.

Now of course it’s true that the press over-hyped the situation. “Renowned industry expert says that thanks to years’ worth of effort it’s now exceedingly unlikely that there will be any critical incident in the aviation sector” doesn’t make much of a headline. “Boffin says planes could fall from sky” is going to sell many more newspapers.

On the back of that hype there was also the predictable bunch of spivs and con-merchants offering to Y2K-proof your toaster. I’m sure you get my point; some people capitalised on the ignorance and panic by spreading more misinformation and making a pretty penny out of fixing things that didn’t need fixing.

None of that, however, lessens the seriousness of the real underlying problem. It was, as they say, “a biggie”.

So It Definitely Wasn’t a Hoax… Or Was It?

There is however a certain thread of logic that says, even considering everything I’ve written, it was still a hoax. It’s a line of argument I actually quite like. For the tech industry it certainly wasn’t a hoax, it was very real indeed. For the government too: it needed to make sure that adequate provisions were being made to fix it, to mitigate any remaining risk and to deal with any problems arising.

As far as the general public were concerned however, they were never actually exposed to any significant level of risk. It was inevitable that we – the tech industry – would fix all of the serious issues well before they came into play. There was nothing that the people on the Clapham omnibus needed to worry about. In fact, being perfectly brutal about it, there wasn’t really any need for them to ever know about the problem at all.

Much as I like it, I don’t entirely subscribe to that school of reasoning. Even as midnight ticked over we couldn’t be sure that we’d fixed every critical bug. There was still a risk of things going badly wrong and the general public needed to be aware of that.
There’s also an argument that it was public awareness that actually made a lot of the tech industry sit up and take notice: that’s when the senior management of these businesses finally realised that what the technical people were saying was right.

Did we need people predicting that planes would fall from the sky and toasters would stop working though? No, we definitely didn’t. What we needed was common sense. What we got was the British Press.

Perl: The Lazy Way to Write WPF

Reading Time: 2 minutes

I hate writing boilerplate. Recently I was writing a test tool where I needed to be able to build messages from a WCF interface. That’s a lot of ViewModels and a lot of views and a lot of tedious typing.

That is, of course, unless it’s Friday lunchtime and you happen to have spent the first half of your career working on *nix systems. Enter Cygwin and some now rather sketchy memories of how to write Perl.

$ perl -ne '
if($next)
{
    $_=~/^\s+void\s+(\w+)\s*\(([^\)]+)\)/;
    $cType=$1;
    print "public class ${cType}Model:DependencyObject\n{\n";
    @props=split /,\s+/,$2;
    foreach $p (@props)
    {
        ($type,$name)=split /\s+/,$p;
        $name=~s/^([a-z])/\U$1/;
        print "public static DependencyProperty ${name}Property = DependencyProperty.Register(\"${name}\",typeof($type),typeof(${cType}Model));\npublic $type $name\n{\nget => ($type)GetValue(${name}Property);\nset => SetValue(${name}Property, value);\n}\n";
    }
    print "}\n";
}
$next = /\[OperationContract\]/;
' < IClient.cs >../../../../../Models.cs

I then just used ReSharper to move the classes into their own separate files.

Yes, I realise that the little snippet of Perl is very poorly written, both in terms of its fragility in processing C# and because these days I really only use Perl for one-time lash-ups like this. It’s been 15 years since I could say that I wrote Perl in any professional sense and I’ve forgotten a lot in that time.

My point however isn’t to provide a shining example of Perl for you to cut and paste. It’s to point out that a few lines of text processing script in whatever language, written in a few minutes, can save you from a whole load of tedious typing.
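If Perl isn’t your thing, the same trick is only slightly longer in Python. This is a sketch of the same idea rather than a faithful translation: the regex makes the same fragile assumptions about the shape of the interface as my one-liner does, and it skips the [OperationContract] bookkeeping:

```python
import re

# Matches lines like "void SetHeading(double heading, string label)".
METHOD = re.compile(r"^\s*void\s+(\w+)\s*\(([^)]*)\)")

def viewmodel_for(line):
    """Generate DependencyObject boilerplate for one interface method."""
    m = METHOD.match(line)
    if not m:
        return None
    cls, params = m.group(1) + "Model", m.group(2)
    out = [f"public class {cls} : DependencyObject", "{"]
    for param in filter(None, (p.strip() for p in params.split(","))):
        ptype, pname = param.split()
        pname = pname[0].upper() + pname[1:]   # camelCase -> PascalCase
        out += [
            f'public static DependencyProperty {pname}Property = '
            f'DependencyProperty.Register("{pname}", typeof({ptype}), typeof({cls}));',
            f"public {ptype} {pname}",
            "{",
            f"    get => ({ptype})GetValue({pname}Property);",
            f"    set => SetValue({pname}Property, value);",
            "}",
        ]
    out.append("}")
    return "\n".join(out)

print(viewmodel_for("    void SetHeading(double heading, string label)"))
```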

I use Cygwin a lot in programming, because utilities like find, xargs, grep, sed, awk, cut, uniq and bash scripting itself can save a heck of a lot of time.

If you’re not an old *nix wonk like me however all is not lost. Perl can do pretty much everything anyway and there are plenty of Perl implementations for Windows.

Of course there is the world of Powershell too, and that’s where I have a confession to make. Aside from learning the basics, just enough to do what I need to do, I haven’t really delved into Powershell. I’m sure Microsoft have put a lot of research into it, but to me it feels really awkward, like you’re always having to jump through hoops to get even the simplest thing done.
I realise the potential hypocrisy here: despite being notoriously counter-intuitive, vi has become second nature to me. It’s only when I try to explain it to others that I remember that unless you’re thinking about operating an editor over a 300 baud serial link, none of it makes any sense at all.

Anyway, I digress. The simple conclusion is that if you’ve got a mountain of boiler-plate to write, have a think about using some sort of script. There’s a lot of power at your fingertips.

Primordial Radio Data Usage

Reading Time: 3 minutes

The question of how much data allowance Primordial Radio uses has been asked a few times.

The simple answer is about 30 megabytes per hour, which means 1GB of data will last you about 33 hours.

If you’re interested, the not so simple answer goes as follows.

Primordial are cunning: they use a 63Kbit AAC stream. The bit rate is, quite literally, how much data per second the stream uses. The higher the bit rate, the higher the quality of audio that can be squeezed in. It’s a trade-off between data and quality. But there’s another factor: the technique used to encode the audio.

If Primordial used a 63Kbit MP3 stream it would sound dismal, because MP3 is actually a pretty old and inefficient audio encoding technique. Because they use AAC, they can get away with a much lower bit rate, which keeps the amount of data you need to use to listen to Primordial low and the quality acceptable.

BBC Radio 3, in comparison, have a 320Kbit AAC stream (amongst others). You can get your Classical Music fix in super-high quality, but it will munch 150 megabytes per hour.

Now, the relationship between the bit rate and the amount of data it uses isn’t entirely straightforward. In data transmission we tend to talk about bits per second and when we talk about data allowances they’re in bytes, or more likely Gigabytes.

Your broadband connection, for instance, is almost certainly specified in Megabits per second. Long story short, the reason is that the bit is the smallest thing that can be sent, so it’s most accurate to talk about the speed of a connection as bits per second.

A byte is almost always 8 bits, but some types of communication use extra bits to regulate the transmission, so it’s not always a straight 8 from bits per second to bytes. It’s close enough for a ready-reckoner though:

63 / 8 = 8 (roughly)

We need 8 kilobytes of data for one second of audio. We can then easily multiply that up.

8 * 60 = 480 Kbytes per minute

480 * 60 = 28800 Kbytes per hour

A megabyte is 1024 kilobytes, so:

28800 / 1024 = 28 Mbytes (per hour)

This, however, is always going to be optimistically low. Firstly there is the problem of the envelope. Data over The Internet is sent in billions of packets. You can think of each packet like a… um… packet. You can’t just lob a bottle of Hendricks in the postbox and expect it to get anything other than drunk by the postie. You need to wrap it up in something, put an address on it and pay postage if you want someone to actually receive it. There are similar overheads on The Internet.

There are various different systems in use, often there are several layers of content and packets. This means that there is a lot more traffic on The Internet than just the useful data.

There is also the problem of packet loss. A small amount of data on The Internet just disappears. This is actually expected: it was designed that way because it’s easier and more resilient. What it does mean, however, is that a small amount of data has to be sent twice.

You can pretty much account for all this by simply adding a fudge factor. 20% is usually considered a safe margin. If we take our theoretical figure from earlier:

28 * 1.2 = 33.6MBytes per hour.
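If you want to try the sums with other bit rates, the whole ready-reckoner fits in a few lines of Python. The 20% overhead is the same rule of thumb as above:

```python
def stream_mb_per_hour(kbits_per_sec, overhead=1.2):
    """Rough data usage of an audio stream, in megabytes per hour."""
    kbytes_per_sec = kbits_per_sec / 8           # 8 bits to the byte
    kbytes_per_hour = kbytes_per_sec * 60 * 60
    mbytes_per_hour = kbytes_per_hour / 1024     # 1024 Kbytes to the Mbyte
    return mbytes_per_hour * overhead            # envelopes and packet loss

print(round(stream_mb_per_hour(63), 1))    # 33.2 - Primordial's AAC stream
```

Keeping the exact 7.875 Kbytes per second rather than rounding to 8 gives 33.2 instead of 33.6; either way it squares nicely with the measured figures.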

This, of course, is an estimate based on a bit of theory and some practical experience. If you don’t trust these kinds of calculations, you could just look at the speed on your router’s data rate table.

If you wanted a bit more accuracy though, you could listen to Primordial for, say, 1/4 hour, record the amount of data every packet contained and the overall length of the packet, then add them all up.

You’d have to be a right geek to do that though.

Naturally, I am exactly that kind of geek. The total data received over that quarter of an hour was 8,150,537 bytes, of which 7,247,617 bytes were useful content. Those can pretty easily be multiplied up to an hour:

Total audio and related data: 27.65 megabytes per hour.

Total data exchanged: 31.1 megabytes per hour.

Naturally I can’t guarantee these figures absolutely. They’re over Wi-Fi rather than a mobile network and there will be differences. There will also be differences between different networks and even different times of day as The Internet itself changes and adapts to the traffic.

What I can say is that they should be somewhere near, within a few percent.

I’m Pulling the Social Media Plug

Reading Time: 4 minutes

In the words of a certain radio station, “Social media can be a force for good, but it can also be a giant pain in the arse.”

When I wrote that First Class Post was the most rapid form of communication of which I approved I was only half joking. I recently spent two weeks in India. Rather than deal with the expense of roaming or hassle of a local SIM I just turned mobile data off. It was a surprisingly liberating experience.

It’s not like I dropped off The Internet completely: there’s free WiFi in most hotels and a few restaurants. What I found though was that having Internet access time-boxed had a far greater effect on the way I lived my life than I could ever have imagined.

I’ve spent pretty much my entire career in communications, most of it trying to improve the connectivity and communication technology used by the emergency services. I’d always kind of assumed that more connectivity and more flexible communications were a good thing.

It’s true – the increased ability to communicate can benefit us very greatly. For instance, we’re now talking about the ability for members of the public to stream video directly into an emergency service control room. That information could be hugely useful to the call-taker, in informing the member of the public, in informing the crews being sent to the scene and also providing an evidence trail for any followup action.

On the flip side however, as I’m sure you can imagine, the ability to stream video from a remote location to another, particularly via an end-to-end encrypted channel, facilitates some of the most appalling people in existence.

To a lesser extent the same is true of social media. It enables us to keep in contact with people that we would otherwise naturally lose touch with, but it also throws up conflicts that we would never otherwise have. On top of this the social media companies themselves aren’t making money unless you’re using them. They make every effort to ensure that their platform invades your life as much as possible.

Over the past year or so I’ve become utterly frustrated with this: I’ve disabled all notifications from every social media app on my phone. What India taught me however is that this isn’t enough. If I really want to take back control from social media, I have to remove myself from the social media environment and only step back into it on my terms.

Social Media is not a new thing, it existed back in the dial-up days. The difference was that to be online you had to make a phone call, and the costs could mount up if you weren’t careful. You had to set limits, for purely financial reasons (especially if you were on a trainee’s wage).

So I’m setting usage limits again.

Just before we left for India, my partner and I were in a restaurant and the couple next to us spent the entire meal on their phones. They barely talked to each other. The first rule therefore is:

No phones at the dinner table, wherever that dinner table is: in the house; in a restaurant; a picnic table in a field; etc.

If I’m out and about doing jobs or visiting people, the chances of me needing to call someone are fairly high, but the chances of me needing a smartphone are fairly low. The second rule:

Unless there is a clear reason to take the smart phone out, take the dumb phone [see above photo].

The penultimate rule I call the “Soap Opera Rule”. In many ways Social Media is like a Soap Opera; the two differences are that it deals with real people and that it’s constant: it doesn’t come in half hour chunks 3 times a week. The former is somewhat the point of Social Media. The latter is something that you have to manage, and it helps if you think of it more like a Soap Opera:

Set clear daily usage limits and don’t exceed them.

Of course there are exceptions to every rule. When we’re talking about usage limits we have to consider what the purpose of usage is. If you’re organising a family meal via WhatsApp that’s not the same as reading your Twitter timeline.

The key here is be sensible and maintain perspective.

The last rule is the simplest of them all:

Talk to people.

Social media is no way to conduct a friendship. Sure, it’s a great way to find people with common interests and to keep in touch with people you would otherwise lose touch with. Those people aren’t your friends (although they may have been or may become so). Ultimately, friends are not people you broadcast status updates to. Friends are the people you have a conversation with when you have news.

So call them, invite them round for tea, go to lunch with them, go watch a film with them but interact with them directly and personally, not via timelines and group chats.

From now on I’m going to be following these rules. In reality you probably won’t notice any difference, but I think it’s going to make a big difference to me and I hope these words make a difference to other people.

Remember that it’s in the interests of the Social Media companies to create a society where it’s socially unacceptable not to be glued to your phone. Don’t sign your life over to them: take back control and always, always be true to yourself.

C# Best Practice: Why a Separate “lock” Object?

Reading Time: 4 minutes

Some time about 1995 I noticed that I was writing a lot more concurrent code than the other programmers. It was almost as if someone was deliberately pushing it in my direction… That theme never really changed.

I was rather surprised then when a developer made a comment on something I’d written a few years back, because I was pretty confident I’d covered all the bases.

public class SomeServer
{
    private readonly Dictionary<KeyType, ValueType> queries = new Dictionary<KeyType, ValueType>();

    //stuff

    public void PerformLookup(string someQueryTerm)
    {
        //some logic...
        lock(queries)
        {
            //some more logic
        }
    }
}

The comment was:

Please use a separate lock object. e.g. private readonly object _queriesLock = new Object();

Eh? What?

OK, hands up I missed this. I learnt to use basic concurrency tools way before C# existed. For me C#’s ‘lock’ construct was great because it allowed me a very clear and concise way to use a monitor. As far as I was concerned there was no downside. Why on earth would I want to use a separate lock object?

There are 2 things you need to be really careful about when using a lock in this way.

You must carefully manage the lifetime of the locked object.

Imagine above if ‘queries’ were not a readonly object created at instance initialisation. Imagine if someone did…

lock(queries)
{
    //stuff
    queries = new Dictionary<KeyType, ValueType>();
}

You have to make sure that the locked object is instantiated before the first lock is taken out and you must make sure that it is not reassigned in any way until after the last lock has been exited. If you don’t, it leads to all different flavours of bad.

If you instantiate a separate lock object at object initialisation, an object that has no purpose other than as a lock, then you know it’s there at the start and the chances of someone messing with it before the end of the last lock are very significantly reduced.

You must be careful not to expose the locked object externally

C# allows an implicit monitor to be created on any object. You can use that monitor by wrapping the object in a ‘lock’ statement.

If you wrote the class to use, say, a mutex explicitly rather than the implicit monitor, there’s no way you’d consider making the mutex externally accessible…

public class SomeServer
{
    private readonly Dictionary<KeyType, ValueType> queries = new Dictionary<KeyType, ValueType>();
    public Mutex _queryLock = new Mutex();

    //stuff

    public void PerformLookup(string someQueryTerm)
    {
        //some logic...
        _queryLock.WaitOne();
        try
        {
            //some more logic
        }
        finally
        {
            _queryLock.ReleaseMutex();
        }
    }
}

That’s complete madness – any other class can mess directly with the lock and cause all sorts of unwanted behaviour. Deadlocks are a particular hazard here and compound deadlocks can be really tough to debug.

If you’re using the implicit monitor via the ‘lock’ construct however and you expose the locked object beyond the scope of the class, you are effectively also sharing the monitor.

public class SomeServer
{
    public Dictionary<KeyType, ValueType> queries {get; private set;} = new Dictionary<KeyType, ValueType>();

    //stuff

    public void PerformLookup(string someQueryTerm)
    {
        //some logic...
        lock(queries)
        {
            //some more logic
        }
    }
}

public class SomeOtherClass
{
    private SomeServer myServer = new SomeServer();

    public void SomeMethod()
    {
        lock(myServer.queries)
        {
            //some logic
        }

        //or worse...
        Monitor.Enter(myServer.queries);
        //and the Monitor.Exit is in another method that might not get called
    }
}

If you use a separate lock object that you know is private and will always be private then you don’t have to worry about this.

Having written this pattern many, many times in many different languages I’m not likely to fall into either trap. That’s not all that being a good developer is about though.

[Photo: a Hwacheon centre lathe, 460x1000]

Now, this might seem like a strange tangent but bear with me. I learnt to use an industrial lathe when I was a kid. The teacher drummed 2 things into us.

  1. Do not wear any loose clothing (e.g. a tie)
  2. Do not leave the chuck key in when starting the lathe

The reason these 2 things in particular were so important was because the lathe I learnt to use had no guard. Either of those 2 mistakes could be fatal.

I’m glad I learnt that, but given the choice would I use the lathe with a guard or the one without? It’s a no-brainer.

We have the same situation here. Unless we’re writing something very specific where memory is absolutely critical, there’s no harm in creating an extra lock object. It provides a useful safeguard against a couple of gotchas that could cause real headaches in production.

Software development purists will be wringing their hands, but purity isn’t what commercial software development is about. It’s about writing code that does the job in a simple, safe and maintainable way. That’s why using a separate lock object is C# best practice and that’s why I fully support it.