Sunday, 15 April 2018

Crash Course in SQL Part 2: SELECT


The SQL SELECT instruction is the most important single thing to master

It is the instruction that actually gives the answers to any questions

It can have up to six sections, as follows

SELECT
FROM
WHERE
GROUP BY
HAVING
ORDER BY

But some of these are used a lot less often than others

SELECT   lists the things we want to see
FROM   specifies which places to get these from
WHERE   specifies which things we do or don't want to see
GROUP BY   allows for some grouping or totalling up
HAVING   allows us to specify some more things we do or don't want if we used GROUP BY
ORDER BY   specifies which order to list the results in
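Putting all six sections together in one instruction looks something like this. This is just a hypothetical sketch - tblSales and its fields are invented for illustration, and we will come back to the grouping sections later:

```sql
SELECT [Region], COUNT(*) AS [NumberOfOrders]
FROM [TESTDB].[dbo].[tblSales]
WHERE [OrderDate] >= '2018-01-01'
GROUP BY [Region]
HAVING COUNT(*) > 10
ORDER BY [NumberOfOrders] DESC
```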



SELECT can be used on its own without any of the other sections. There are relatively few sensible uses for this. One might be to see what the computer thinks the date and time is

SELECT Getdate() 

gives results:






Getdate() is a 'function'. More on these later



To be of much use, we have to SELECT from one or more tables.

The simplest use of SELECT is with the * (star) symbol which means 'Select Everything'

  SELECT *
  FROM [TESTDB].[dbo].[tblExampleTable]

gives results:













This is a little caricature of real data which we can use for illustrating SQL. It is in a table called tblExampleTable. The [TESTDB].[dbo]. bit gives more information on exactly where this table is


So the SELECT * lists all five records in full. The order they come back in is not controlled.

We may want to have the list sorted alphabetically. The ORDER BY section can be used for this

  SELECT *
  FROM [TESTDB].[dbo].[tblExampleTable]
  ORDER BY [Description]

gives













Or we might want it sorted by the (in this case meaningless) number data, with the highest first. So we would add DESC (short for Descending) to the ORDER BY bit as follows

  SELECT *
  FROM [TESTDB].[dbo].[tblExampleTable]
  ORDER BY [SomeNumber] DESC

which gives:












Often we will not want every field in the table shown in the results. So rather than SELECT * we can SELECT specific fieldnames

  SELECT [Description], [SomeOtherThing]
  FROM [TESTDB].[dbo].[tblExampleTable]













More often than not we will be interested in looking at a specific part of the data rather than seeing everything each time. We can use the WHERE section to limit which records are shown. For example

  SELECT [Description], [SomeOtherThing]
  FROM [TESTDB].[dbo].[tblExampleTable]
  WHERE [SomeOtherThing]='Green'

gives









or


  SELECT [Description], [SomeOtherThing]
  FROM [TESTDB].[dbo].[tblExampleTable]
  WHERE [SomeOtherThing]<>'Green'

gives









The <> means 'is not equal to', so it is basically 'everything but'

It is not necessary for the data items used for selecting or sorting to be displayed in the final results
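For example, using the same example table, we can filter on one field and sort on another without showing either of them:

```sql
SELECT [Description]
FROM [TESTDB].[dbo].[tblExampleTable]
WHERE [SomeOtherThing] = 'Green'
ORDER BY [SomeNumber] DESC
```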

That is the end of Part 2
In Part 3 we will look at counting








Saturday, 14 April 2018

Crash Course in SQL Part 1: What is SQL

Most people pronounce 'SQL' as 'sequel' although 'S Q L' is equally valid

It has different meanings:

One is 'Structured Query Language', which is the subject of this 'crash' course.

Another common one  is as a shorthand for Microsoft SQL Server, which is a set of database software products produced by Microsoft.

Microsoft SQL Server has its own version of Structured Query Language which it calls 'Transact SQL' or 'T-SQL' for short

Other software products have their own variations of Structured Query Language. For example Oracle provides 'PL/SQL'

Although there are slight differences between the different versions of SQL, the core is exactly the same. So if you learn SQL on one product you can adapt quickly to others.

Structured Query Language is a way of interacting with relational databases

Relational databases are those which can have many tables. In theory these will be designed in such a way as to avoid duplication of data. And each table will have a unique key which allows it to be related to other tables

In real life the design and content of databases can get a bit more messy, for various reasons.

SQL is essentially an instruction, written out as text, which is 'thrown' at a database. The database will attempt to respond to the instruction

How the instructions are written and thrown - and where the results are displayed - will depend on what tools you have available. Typically there will be a utility (such as SQL Server Management Studio) which allows these things to be done in different panels visible together on the same computer screen

So we would write (or retrieve an existing) SQL script in one panel, hit an Execute button, and see the results displayed in another panel

The beauty of SQL is that the core is relatively simple

There are different types of instruction for different purposes

Computer people give these grand titles such as

  • Data Manipulation Language (DML)
  • Data Definition Language (DDL)
  • Data Control Language (DCL)


These are basically just groups of instructions for either doing things with data or setting things up


SQL Data Manipulation Language (DML) is mainly based around the following four types of instruction
  • SELECT
  • DELETE
  • INSERT
  • UPDATE
These instructions are used for looking at, getting rid of, collecting or changing data
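As a rough sketch of how these four look in practice (borrowing the example table from Part 2 - the values here are invented for illustration):

```sql
SELECT * FROM [tblExampleTable]                                   -- looking at data
DELETE FROM [tblExampleTable] WHERE [SomeNumber] = 0              -- getting rid of data
INSERT INTO [tblExampleTable] ([Description]) VALUES ('New row')  -- collecting (adding) data
UPDATE [tblExampleTable] SET [SomeOtherThing] = 'Blue'
  WHERE [Description] = 'New row'                                 -- changing data
```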

Of these, SELECT is the most often used, as it is the one that actually gives the answers to any questions


SQL Data Definition Language (DDL) is mainly based around the following three types of instruction
  • CREATE 
  • ALTER
  • DROP

These instructions are the ones needed to set things up - and then to change them again later. We will not spend much time looking at these on this crash course
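A minimal sketch of the three (the table and column names are made up for illustration):

```sql
CREATE TABLE [tblNew] ([ID] INT, [Description] VARCHAR(50))   -- set something up
ALTER TABLE [tblNew] ADD [SomeNumber] INT                     -- change it later
DROP TABLE [tblNew]                                           -- get rid of it
```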

SQL Data Control Language (DCL) is mainly based around the following three types of instruction
  • GRANT 
  • REVOKE 
  • DENY

These are the instructions used to set out who can do what. Computer people are obsessed with stopping other people doing things, or looking into the innards of their creations. It's a shame, as otherwise they are good people to have around. We will not spend any time looking at these 'permission' issues on this crash course. Just be ready to hit this nonsense when you try to use anything in the workplace

That is the end of Part 1.
In Part 2 we will look at SELECT  - the main big thing you will need to master

Here are a few links which may help flesh out the content of Part 1

https://www.w3schools.com/sql/sql_intro.asp

https://www.infoworld.com/article/3219795/sql/what-is-sql-structured-query-language-explained.html

https://bytescout.com/blog/what-is-sql-and-what-is-it-used-for.html

https://www.ntchosting.com/encyclopedia/databases/structured-query-language/

https://searchsqlserver.techtarget.com/definition/SQL







Saturday, 31 March 2018

Medians, quartiles, centiles and all that: An introduction to quantiles

Most of us get to hear about 'medians' fairly soon after we start dabbling with data. The median is just one version of a type of measure called 'quantiles'. We regularly bump into quantiles and the rather quaint sounding names they are usually described with.

A lot of these names sound as if they come from Arthurian mythology, Chaucer or somewhere else in the Middle Ages - a kind of Medieval Martian language surviving into modern times, and used mainly to baffle us




The idea behind a quantile is a simple one. It is best visualised as a 'cut point'  dividing a block of ranked data into equal parts.

The median is the single cut point dividing the data into two equal blocks.

The 'terciles' are the two cut points dividing the data into three equal blocks

The 'quartiles' are the three cut points dividing the data into four equal blocks





And so on.

Note that the number of blocks of data is always one more than the number of cuts  (and be careful not to confuse quantile names with the blocks of data created)

Apart from the median, the quantiles which you will probably meet most often are the quartiles and the centiles (also now regularly called 'percentiles', although the 'per-' is not strictly necessary, unless you are a cat)

The quartiles are  used a lot for describing the spread of a block of data. They have an associated visualisation  - the box plot




The illustration above shows how the quartiles are reflected schematically in the box plot (there are several versions of box plot but the reference points are the same. Box plots are more usually set out as columns (vertically) rather than bars (horizontally). Conceptually there is no difference).

The 'max' is sometimes referred to as the 'Fourth Quartile' (Q4) and the 'min' as the 'Zeroth Quartile' (Q0). While these terms probably make perfect conceptual sense to a mathematician, I think they are confusing, and probably pretentious, alternatives to 'max' and 'min' -  so best avoided.

The span between Q1 and Q3 is known as the 'inter-quartile range'  i.e. the two quarter blocks either side of the middle.

The median and inter-quartile range are more 'resistant' measures of spread than the mean and standard deviation (see separate article on these). 'Resistant' here means less prone to being distorted by values at the extremes of the spread

Particular centiles are sometimes used as synonyms for quartiles
  • the 25th centile is the same as the first quartile
  • the 75th centile is the same as the third quartile
  • the 50th centile is the same as the median
It is difficult to decide whether saying '25th centile' and '75th centile' is clearer or less clear than saying 'first quartile' and 'third quartile'. You will have to decide, based on your knowledge of your audience. But I am fairly sure that saying '50th centile' is invariably much less clear than saying 'the median'
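If you want to check these equivalences yourself, Python's built-in statistics module can produce the quartile cut points. This is just a small sketch with made-up data; note that different tools use slightly different interpolation methods, so cut points can vary a little between them:

```python
import statistics

# Nine made-up data values
data = [12, 7, 3, 9, 15, 4, 10, 6, 8]

# The median is the single cut point dividing the ranked data into two equal blocks
median = statistics.median(data)

# n=4 asks for the three cut points (quartiles) dividing the data into four blocks
q1, q2, q3 = statistics.quantiles(data, n=4)

# The second quartile (the 50th centile) is the same as the median
print(median, (q1, q2, q3))
```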

Other centiles are also used. A good example is in describing the expected growth of children, where a series of 'centile' charts are published and an individual child's measurements are plotted on these














Sunday, 25 March 2018

Excel: creating quick frequency distribution graphs

Excel level: Intermediate

This article shows how you can use Excel to create quick frequency distribution summaries. This is mainly intended for exploratory analysis rather than for presentation. You can obviously polish it up a bit if you need to present it too.

There are different methods available in Excel (depending on which version you are using):

  • Pivot Table method
  • Histogram standard chart type (Excel 2016)
  • Analysis Pack Add-In Histogram


Let's start off with some sample data. We have 168 records of a single measure set up on a worksheet in a single column like this (just the top 16 records are illustrated here to save space):



Pivot Table method: Step One
Highlight the entire block of data then use  Insert...Pivot Table to create a Pivot Table to a new worksheet.

Pivot Table method: Step Two
Drag the Data field to both the Rows and the Values areas in the Pivot Table field selection dialogue


This will add a second column to the Pivot Table. But the new column will likely default to Sum of Data and have totals rather than counts in it




Pivot Table method: Step Three
Right click in the Pivot Table Sum of Data column then select the Value Field Settings option. From this select Count. The pivot table should now be transformed to show a count of each value (rather than the sum). Something like this: 




We now need to group the data values into regular bands ('bins'  in histogram speak). 


Pivot Table method: Step Four
Right click in the original column (anywhere will do) and select Group




The default dialogue box will have both detected the range of your data and suggested an interval. Change these to something more suitable (you can change these settings again later).


You should then see the pivot table transformed into a frequency distribution summary, something like this:




Pivot Table method: Step Five
You can turn this table into a graph by using Insert....Pivotchart




This method takes far longer to describe than to actually do.


Histogram standard chart (Excel 2016): Step One
Highlight the entire block of data then use Insert...Chart and pick Histogram. You will be presented with a chart something like:




Histogram standard chart (Excel 2016): Step Two
Adjust the number of 'bins' via formatting the axis. Either by setting the number of bins:


Or by setting the bin widths:




This is very fast. But there are some problems fitting bin boundaries. It is not available in earlier versions of Excel
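The 'binning' idea itself is not unique to Excel. As a rough sketch of what the grouping step is doing behind the scenes (made-up data, and Python used purely for illustration), each value is mapped to the lower bound of its bin and then the bins are counted:

```python
from collections import Counter

# Twelve made-up measurements
data = [112, 118, 123, 127, 131, 135, 139, 142, 147, 151, 156, 158]

bin_width = 10

# Map each value to the lower bound of its bin (110, 120, 130, ...), then count
bins = Counter((x // bin_width) * bin_width for x in data)

for lower in sorted(bins):
    print(f"{lower}-{lower + bin_width - 1}: {bins[lower]}")
```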

Friday, 23 March 2018

Standard Deviation


From time to time, you will hear people using the phrase 'standard deviation' in a variety of contexts. It can appear to give a seal of statistical credibility to whatever it is that they are saying. This article explains what Standard Deviation actually is and how to calculate it.

The simplest way of visualising Standard Deviation is to see it as a measure of the spread of a bunch of data: Whether the data is all tight together, like a bunch of asparagus. Or whether it is spread out flat  like an omelette



The most complicated way of visualising Standard Deviation is to present it as an over-decorated mathematical equation. You will find lots of examples of this. It  can be quite intimidating at first.




Don't be put off though. It's nothing like as complicated as they want you to believe

To show what Standard Deviation is, we need some data to play with. Here is a block of data from my world. It doesn't really matter what it is. Pick a block of your own if you want.
At first glance, it is hard to make any real sense of this data. We need some summary measures to help describe it more compactly.

The most obvious ones to look at are: How many items of data do we have, what is the highest value, what is the lowest value, and what is the 'average' value. You can work these out manually from the illustration above. I'm using Excel functions to speed this up a bit















Average

We could get into a whole separate discussion about what the best type of average would be. But we are mostly talking here about the spread of the data. So we will put medians etc on one side for another day.

The average used here is the 'traditional' average, also known as the 'mean' or the 'arithmetic mean'.

This is simply the total of all the individual bits of data divided by the total number of bits of data

For my example data this is   23,088.1  divided by 168 which equals 137.4

We could make this look much more complicated and impressive as follows:


You will see variations of this around. 

The 'x' with the line over it is called 'x-bar' which is Martian for 'the mean'. 

The capital Greek 'sigma' symbol  ∑ just means 'sum' or 'total of' (you will see this in parts of Excel as a symbol indicating the SUM() function)

n is simply the total number of items of data, in my example 168

x is any actual item of data and the little subscript i  refers to the position (order number) of any individual bit of data in the whole list

So the formula translates into: 

Total up all the individual data items from the first to the 168th item (i.e. all of them). Then divide this by the total number of items  


Spread

Ok so we've got the average, the maximum and the minimum

We can calculate the 'Range' by subtracting the minimum from the maximum:

Range = maximum - minimum

Range = 160.5 - 115.7 = 44.8 

Surely this is enough to describe the extent of spread of our data? Well not really. While we have the width and the positions of the two extreme edges of the spread, we still don't know whether our data is spread evenly across this range (omelette) or whether it is mostly bunched up tightly around the average (asparagus)

What we need to do is to look at how far each individual item of data is away from the mean and then find a way to summarise that




Deviation

We can calculate the Deviation from the Mean (or simply the 'Deviation') for each value by subtracting the Mean from it. This gives a set of positive and negative smaller values. (For no obvious reason, statisticians sometimes refer to these calculated deviation values as 'residuals')

The negative Deviations are all cases where the original value is smaller/lower than the mean. The positive Deviation values are all cases where the original values are larger/higher than the mean





So having calculated all the individual Deviations, surely all we have to do now is to calculate an Average Deviation. And this will provide a measure for the spread of the data?

Unfortunately there is a snag......



If you try to calculate the Average Deviation by totalling up all the individual Deviations and then dividing by the Total number (still 168) something annoying happens: All the positive and negative Deviations cancel themselves out. So the Total Deviation is zero and the Average Deviation becomes zero too.

This is no use. So we need to find a way to discount the effect of the negative signs in half the Deviation values.


Mean Absolute Deviation (MAD)

One method we could use is to simply ignore all the negative signs and average the 'absolute' values of each Deviation. This is relatively easy to do (you can use the Excel ABS() function if you want to follow this step by step) and gives a type of Average Deviation called the Mean Absolute Deviation - also referred to as 'MAD'


We can calculate the MAD value for our sample data. The total of the absolute values of all the Deviations is 1,592.7. Dividing this by 168 gives a MAD value of 9.48

Excel provides a function for calculating MAD values, cleverly disguised under the title AVEDEV()
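For anyone wanting to see the arithmetic outside Excel, here is the MAD calculation as a small Python sketch (using five made-up values rather than the article's 168, so the numbers are easy to check by hand):

```python
# Five made-up data values
data = [10.0, 12.0, 14.0, 8.0, 16.0]

mean = sum(data) / len(data)

# Ignore the negative signs with abs(), then average the absolute deviations
mad = sum(abs(x - mean) for x in data) / len(data)

print(mean, mad)
```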


Square and Un-square

There's another way to get rid of the negative signs. This is based on the fact that the square of any negative number is always a positive number.

So the square of 4 is 4 x 4 = 16 

And the square of -4 is -4 x -4 =16  too

So if you Square a negative number and then 'Un-square' it again you end up with a positive number. Mathematicians prefer to call the 'Un-square' of a number the 'Square Root' or even just the 'Root'

So the Square Root of the Square of 4 is 4

And the Square Root of the Square of -4 is also 4
 

Statisticians find this slightly more complicated way of dealing with negative numbers more attractive (they have plenty of time on their hands). So they use this trick to create another type of Average Deviation from the Mean  - the 'Standard Deviation'

Calculating Standard Deviation

The essence of the calculation is
  1. Work out the Mean
  2. Work out all the Deviations from the Mean
  3. Square all the Deviations
  4. Calculate the Average of the Square of the Deviations
  5. Un-square it again
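The five steps above can be sketched in a few lines of Python (using five made-up values so the arithmetic is easy to check; this is the 'population' version, dividing by the full count, as in this article):

```python
import math

# Five made-up data values
data = [10.0, 12.0, 14.0, 8.0, 16.0]

# Step 1: work out the Mean
mean = sum(data) / len(data)

# Step 2: work out all the Deviations from the Mean (these total zero)
deviations = [x - mean for x in data]

# Step 3: square all the Deviations
squares = [d * d for d in deviations]

# Step 4: the average of the squared Deviations - the 'Variance'
variance = sum(squares) / len(data)

# Step 5: un-square (take the square root) - the Standard Deviation
std_dev = math.sqrt(variance)

print(variance, std_dev)
```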

We had already got as far as completing Step 2. 

Step 3: let's calculate all the Squares of the Deviations from the Mean:









So that's got rid of all the negatives - as we hoped. Notice that some of the numbers are quite big - looking even bigger than the original data - and some are quite small. There are even a few 0.0 values appearing. Don't worry - this is all just the behaviour of numbers when multiplied by themselves. The 0.0 values are not actually zero, they are just very small numbers which do not show up when the number is written to just one decimal place


Step 4 is then to calculate the average of the Squares of the Deviations. So this will be their total divided by the number (still 168)

The Total of the Squares of the Deviations = 23,345.7

The Average of the Squares of the Deviations = 23,345.7 divided by 168

The Average of the Squares of the Deviations = 138.96

The value of the Average of the Square of the Deviations also has an official statistical name - the 'Variance' (not to be confused with this same term as used by either accountants or lawyers)

Note that Excel provides a function DEVSQ() which can calculate the total of the Squares of the Deviation (23,345.7 in our example) in a single step



Step 5 - the final step - is to Un-square the previous number

The Square Root of 138.96  = 11.8

This is the Standard Deviation



The Standard Deviation can also be described as the Square Root of the Variance. And the Variance can also be described as the Square of the Standard Deviation. Try this if you want to baffle people.

If you thought that was a bit long-winded and time consuming, don't worry, you will probably never actually have to do this in full. Excel has a function that calculates this in a single move. And many pocket calculators have had short cut ways of doing this since the 1970s



Ok, so what? 

So we now have two different versions of an Average Deviation from the Mean to go with the summary measures we calculated earlier. And we have a thing called the Variance which sounds as if it should be useful but looks rather big:




Remember that the Variance is the calculation before it is un-squared. So this means that it is a different type of measurement and cannot be directly compared to the max, min, average or range.

But the Variance still gives a measure of the spread - or 'dispersion' - of the data. The smaller the Variance, the more tightly bunched the data is around the Mean  (asparagus).  And the bigger, the more spread out (omelette). A slight issue with the Variance is that because it uses squares, it is distorted by data at the extremes of the range. 


Both the Standard Deviation and the MAD (mean absolute deviation) can be compared directly to the Average, and to the max and min. And their size also gives a direct measure of the spread (dispersion) of the data


There is a 'rule of thumb' called the 'Range Rule'  (or 'Range Rule of Thumb')  which says that the Range is about four times the Standard Deviation. That seems to be the case with our illustration data. 44.8 is roughly four times 11.8 (but not exactly - hence 'rule of thumb' - some days I roughly get to work on time)


Our data can be described as having an Average (Mean) of 137.4 with a Standard Deviation of 11.8. 


These are absolute numbers. If you wanted to do a quick comparison with another different block of data then some kind of relative measure would be helpful

If we divide the Standard Deviation by the Mean we will get a ratio (which can also be expressed as a percentage). In our example


11.8 divided by 137.4 then times 100 =  8.6%


i.e. our Standard Deviation is 8.6% of our Mean


This measurement has an official statistical name too - the 'Coefficient of Variation'
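Using the article's own numbers, the Coefficient of Variation calculation is just one line (sketched here in Python):

```python
mean = 137.4
std_dev = 11.8

# Coefficient of Variation: the standard deviation as a percentage of the mean
cv = std_dev / mean * 100

print(round(cv, 1))
```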



This is often abbreviated to 'CV'. So as with the other measures, it gives an indication of the dispersion (spread) of the data. A high CV indicates dispersed data (omelette) and a low CV indicates compact data (asparagus). Ours seems quite low - so definite asparagus tendencies (but you will need to compare it to other blocks of data before you can come to this conclusion).
So one important use for Standard Deviation is as a stepping stone to calculating the Coefficient of Variation which in turn allows the 'flatness' in two different blocks of data to be compared


Time for a break 

Reward yourself with a coffee  - if you've got this far you have earned it


We have entered the world of the statistician and managed to follow everything without even a Martian-to-English phrasebook. Once translated into plain English their obscure gibberings suddenly make sense.

They have tried to baffle us with jargon such as 
  • Variance 
  • Standard Deviation 
  • Mean Absolute Deviation 
  • Coefficient of Variation 

but we know that these are just Martian phrases for some fairly simple ideas


What next? 

There are a few things still to look at:

Understanding why there are different versions of Standard Deviation and Variance and when to use each

Understanding how Standard Deviation relates to probability and a relatively abnormal thing known as the Normal Distribution

Understanding how to put Standard Deviation to practical use, such as in the special types of run charts used in Statistical Process Control (SPC). These are really useful as they help us decide whether changes are real or imaginary 


























Monday, 19 March 2018

The Manager’s Guide to Surviving Poor Performance Figures



Scenario:

The latest figures are out and it’s not looking good. You have not hit your key performance targets. Your meteoric career trajectory suddenly feels imminently parabolic.

Don’t worry. There are several proven techniques that you can use to survive this situation – or at least buy yourself enough time to find the next job while the fan still spins unimpeded.


 1
Challenge the data
Spread as many rumours as possible about the quality and completeness of the data. This will allow you to say that the figures cannot be relied upon. There may be a bit of a risk if the production of the data is itself your responsibility. Luckily most people won't spot this.

If this fails…. move on to

2
Challenge the method of calculation
Suggest that the figures are being calculated wrong. Experiment with alternative calculations, randomly missing bits out - or including other bits - or changing the time periods used. If you are lucky you will eventually stumble across a method which comes out with the ‘right’ number. It does not matter how irrational this method may appear. Next, insist that this new method should become the standard method to be used from now on.

If anybody suggests that the target should also be recalculated in line with the new method, put your fingers in your ears and wait for them to go away.  

If this fails…. move on to

3
Challenge the target
Don’t worry about the confident assurances you gave the powers that be six months ago.  Stress that the target is clearly over-optimistic.  Don’t worry if it was you who originally came up with the target. Nobody will remember this.  By now it is simply ‘The Target’. Say that ‘The Original Target’ was based on ‘an unadjusted growth factorisation methodology’. Nobody will understand this, so they probably won’t challenge you. If they do, say that the original figures were all provided by the Finance Department. If you are the Finance Department, say the figures were originally provided by the Information Department. Everybody knows that these are two parallel universes and that their figures never agree.

Calculate a new target which is exactly 1.47 % lower than whatever you actually did achieve. Tell everybody that this new target is in line with the corrected growth model.

If this fails…. move on to

4
You should be treated as a special case
Assert that the target is absolutely fine for everything except those areas which are your responsibility. The significant differences in your areas mean that you should not be held accountable to the same target as everybody else. The unique factors particular to you mean that this target never could be achieved (Actually this last sentence may be entirely true, but not in the way you want people to think). To add weight, come up with any three very detailed observations about the areas that you manage. The fact that these features may be present in other areas, or even all other areas, does not matter.

If this fails…. move on to

5
Everybody else is doing badly too
Did you say you should be treated as a special case? Actually, what you meant was that you are exactly the same as everybody else, and everybody is doing badly. Other people doing badly too is good. To a layperson, it may suggest that all your peers are incompetent too. But you can use it to imply that there are massive external forces operating which make it impossible for anybody to meet the target. It does not matter if nobody knows what these forces are. Find all the others doing worse than yourself and lump these together to present your performance against a 'peer group figure'

If this fails…. move on to

6
Find a small part that did reach the target
Ok so you clearly failed to meet the target. But this does not mean that you completely failed to meet it. Break down the area you are responsible for into every possible component part. If you are lucky, at least one small area will have met the target. Focus all attention on this. The performance in this area should be taken as a reflection of your management competence. The other areas which failed were ‘atypical’, or experienced unforeseen circumstances, such as staff taking their Christmas holidays at Christmas.

If this fails…. move on to

7
We are on trajectory
Ok so you clearly failed to meet the target. But it's ok because you have met the level required to be 'on trajectory' to eventually meet the target. If you have missed that too, you may have a problem. But you could say that you are 'on target to converge with the recovery trajectory'. There is a good chance that you can simply change the trajectory to fit the actual performance. If challenged, say that the trajectory 'needed to be reprofiled to take account of seasonal factors'

If this fails…. move on to

8
The latest figures are not typical
It is important to create the impression that last month was completely unusual, leading to a misleading position. The expression ‘perfect storm’ comes in useful here and most of the people you are talking to will assume that your performance has somehow been affected by bad weather

If this fails…. move on to


9
Create a Diversion
With luck there will be some other area in the organisation which is currently sitting slightly below target. Draw attention to this everywhere you go. Suggest that we need a ‘crisis meeting’.

If this does not work, set fire to one of the offices. With luck the routine performance meeting will be cancelled.

If this fails…. move on to

10
Play for time
Send a ‘High Priority’ email to the Information Department at 10:40pm on Sunday evening the day before your meeting. They almost certainly won’t get this in time. When required to give an explanation for the performance figures, you can truthfully say that you are “still waiting to hear back from Information”

If you are really unlucky, the Information Department might drop everything to ensure that the information you asked for is sent back to you before 8:30am the next day. If so, don’t open the email. Later you can say you were having IT problems and didn’t get it in time.

If you need a much longer breathing space, commission somebody to do a study. If their eventual report highlights any concerns about how you have done things, disown and discredit it immediately. Then commission another study.

If this fails…. move on to

11
You haven’t been provided with the right tools
Insist that you need information to 11 decimal places, disaggregated into 30 second time blocks, plotted against National Grid Reference, and standardised against average telephone number. Make it clear that without this information it is impossible for you to monitor your performance. If provided with this information, ask for it to be broken down further.

Ask for the data to be provided in a dashboard updated every minute of the day and night, viewable on your smartphone even when it is switched off.

If this fails…. move on to



12
We have a plan
You can survive almost any performance crisis so long as you can reel off a list of remedies that you can claim will resolve it. It does not matter what these are. It is even better if you can say you have started to implement these already.

You don’t actually need to have a document outlining what you are going to do. You just need to say that you plan to have a plan. If somebody asks when it will be ready say the Communications Plan will be covered in the plan.

Be careful not to create any tangible milestones in the plan.

If this fails…. move on to

13
I am arranging a meeting….
This old technique is still worth a shot as some of your younger colleagues may not yet have encountered it. Here’s how it goes:

Acknowledge that this performance is a significant concern and you are taking immediate action to deal with it. Announce a list of people being invited to an urgent meeting to discuss it. The subliminal message is that the performance failure is really down to this wider group rather than yourself. By appearing as the convenor of the meeting you may also get credit for being proactive.

If you are forced to actually go ahead and have the meeting, try to leave everybody with impossible tasks to complete. It does not matter what these tasks are or whether they are relevant to the performance.

If this fails…. move on to

14
We need more staff
Never miss an opportunity to turn your own failures into a bid for more resources.

If this fails…. move on to


15
We were let down by somebody else
Find any example where somebody else did not manage to do something on time. Portray this as the ‘root cause’ of your performance problem. It does not matter what it is.

Recruitment is a good area to choose. If you are lucky you can blame HR for not getting an advert out on time. It does not matter that they waited for six weeks for you to finish writing the job description.  You can also blame all the potential candidates who chose not to apply to work with you.

If this fails…. move on to

16
I am new
Portray the performance failure to be a continuing consequence of a legacy of deeply embedded problems. These are taking longer to deal with than originally estimated. Imply that your predecessor left things in a real mess. You can get away with this one for anything up to five years. Play down any claims you may have made in your job application to ‘hit the ground running’.  Delete all references to your ‘first hundred days’.

But be  careful to ensure that the credit for anything that has been achieved does not also go to your predecessor.

If this fails…. move on to

17
Blame your staff
Don’t be sentimental. You must be prepared to throw members of your team under the bus. Remember, success is collective, and all stems from your skilful leadership. But blame needs to be delegated as quickly and deeply as possible.

Suggest that you have inherited a group of people who don't have the right skills. If these individuals are all people who you recruited yourself, suggest that they are 'now struggling to adjust to the requirements of the changing environment'.

Most importantly: suggest that the staff may have deliberately concealed the performance problems from you. Stress your disappointment, especially given your past support for them and the blame free culture you are striving to introduce.

If this fails…. move on to


18
Burst into Tears
Ask for a private one to one meeting with your line manager. Just as the conversation moves onto the performance figures, burst into tears. Invent a list of traumatic things that have just happened to you.

This is unpredictable, but it can be surprisingly effective. Be careful not to overuse it though. It can become counter-productive if you try it every day.

If this fails…. move on to

19
Pretend to be Leaving
Announce that you are leaving. This may deflect the urge to fire you. You can probably spin this out for about 18 months before people begin to wonder why you are still here.

It is even possible they will forget all those missed performance targets and they will offer you a promotion to stop you leaving

If this fails…. move on to


20
Leave
No need to worry, they will almost certainly provide you with an excellent reference to help propel you on your way!

If this fails….

Ouch!



Disclaimer
Any resemblance between these observations and any of the various organisations that I have worked in is entirely coincidental.

