5 Oct 2016

My Son

My son, Matthew Oiyan Parkhill, was born on Monday 26th September at 04:18, at the Royal Jubilee Maternity Hospital, Belfast. He weighed in at 5 lbs 9 oz. Mum and baby are doing well.

Mei and I would like to pass on our thanks to everyone at the Royal Jubilee Maternity Hospital for all their help during Mei’s & Matthew’s stay, especially the midwives Patricia, Deirdre and Heidi. We won’t forget their many kindnesses to our little family.

Matthew comes from the Hebrew name Matityahu, meaning ‘Gift of God’, and Oiyan comes from the Cantonese for ‘Love and Grace’.

21 Aug 2016

Forgotten Your Restrictions PIN for Your iPhone?

Recently, I forgot the restrictions PIN on my iPhone (fat thumbs when I initially set it). I eventually recovered it using the very useful pinfinder utility. This console program, written in Go, was kindly developed and open sourced by Gareth Watts. It recovers the restrictions (parental) passcode on iPhones, iPads and iPod touches from an (unencrypted) iTunes backup.

I’m blogging about this to help anyone with the same problem, and also to recommend pinfinder, so people use it rather than one of the rather dubious other commercial products that are available.

Download the latest release of pinfinder from the project’s releases page on GitHub.


6 Jul 2016

What I’ve Been Reading…

I’ve always loved reading. I can remember (aged 6-7?) looking forward to every Thursday afternoon, when my sister and I would be taken to a local newsagent for a comic. My first reading material consisted of The Beezer, The Dandy and The Beano.


When I was 9, my mum left me for an hour in the local library. By the time she returned, I was hooked on books. I can still remember the first library book I took out on my new library card - Gorilla Adventure by Willard Price. I was enthralled by this story of two teenagers who travelled the world capturing wild animals. I ended up reading most of the titles in the excellent Adventure book series (I seem to recall it being called ‘The Adventures of Hal & Roger’ – a UK printing possibly?).

With the recent migration of my data from Shelfari, I wondered – what have I actually been reading?  After importing the data into a local database, I decided to take a look.

Since November 2011 (when I signed up to Shelfari), I’ve read 332 books. 29 of these were second readings, so this brings the total down to 303 unique books. So, how many books do I read per year?

So I vary from 56 to 91 books per year (over the four full years of data, 2012 to 2015), averaging around 74 books. If we look at the type of books (fiction/non-fiction), does this help explain the variability in the number of books read? I would expect the total to decrease if I’m reading more non-fiction in a given year. My impression is that it takes me longer to complete a non-fiction book, and I’m also less inclined to start a new book straight after finishing one.
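Once the data is in a usable form, the yearly figures take only a few lines to compute. This Python sketch uses made-up per-year counts (my real per-year split isn’t shown above; only the minimum, maximum and average here line up with the quoted figures):

```python
# Hypothetical per-year book counts; only the min, max and average
# match the real figures quoted in the text (56, 91 and ~74).
books_per_year = {2012: 91, 2013: 56, 2014: 80, 2015: 69}

counts = list(books_per_year.values())
average = sum(counts) / len(counts)

print(f"Min: {min(counts)}, Max: {max(counts)}, Average: {average:.0f}")
```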

And looking at non-fiction books as a percentage of the total number of books read each year:

There doesn't appear to be a clear relationship between the amount of non-fiction I read and the total number of books. What other trends can we see in my reading? Looking at my fiction reading:

Author Number of Highly Rated Books
Patrick O'Brian 18
J.K. Rowling 8
George R. R. Martin 6
Paul McAuley 6
Stephen Baxter 6
James Clavell 5
Robert Harris 5
Stephen King 5
William Gibson 5
Neil Gaiman 4

Over the past few years, I’ve enjoyed reading Patrick O’Brian’s amazing Aubrey–Maturin series (think Jane Austen on the high seas), the Harry Potter books, A Song of Ice and Fire (the book series on which the TV series Game of Thrones is based), as well as several science fiction series from the talented UK writers Paul McAuley and Stephen Baxter. The Asian Saga from James Clavell is a series I return to time and again.

As you can probably tell from the list above, I like reading complex novels, preferably in a series, with intricate plots and multi-faceted characters. One author who is not featured above is Frank Herbert – I still consider his Dune series the best writing I have ever read. I plan to read it again soon. I especially enjoy science fiction (McAuley, Baxter, Gibson - Neal Stephenson just fell outside the top 10) and fantasy (Martin, Gaiman and Rowling).

In my non-fiction reading, I tend to range more widely – the topic interests me more than the author. There are a few exceptions:

Author Number of Highly Rated Books
Fergal Keane 2
Jared Diamond 2
John Briffa 2
Max Hastings 2
Michael Lewis 2

I’ve particularly enjoyed Fergal Keane’s books, especially his epic Road of Bones, about the battle of Kohima in 1944. I’ve also recently enjoyed Jared Diamond’s books for their unique mix of anthropology, ecology, geography and evolutionary biology.

What about books I didn’t like? Here is a list of the worst from the past 4 years:

Antifragile by Nassim Nicholas Taleb: ‘Vainglorious and pompous nonsense. Read about 15 pages and gave up, not on the ideas being conveyed, but on the arrogance of the author.’
American Sniper by Chris Kyle, Scott McEwen and Jim DeFelice: ‘Very poor, jingoistic fare. Tells a completely different, and much grubbier, story than the film of the same name. I found this book very depressing reading indeed.’
The Bone Clocks by David Mitchell: ‘Very disappointing. After enjoying the wonderful Cloud Atlas so much, I was very let down by this novel. It read like a terrible 1950s pulp sci-fi novel. Yet there was the odd glimpse of some great writing. Such a pity.’
Slow-Tech by Andrew Price: ‘Rubbish. The author has a very poor understanding of sound engineering principles. The book reads more as a left-wing diatribe than a serious discussion of how to improve design choices.’
A Discovery of Witches by Deborah E. Harkness: ‘A truly dreadful book. I threw it away after approx. 30 pages. Dire stuff.’

I regret putting David Mitchell in the above list – he is a fantastic author (Cloud Atlas, Black Swan Green), but The Bone Clocks was a great let-down.

At some point, I hope to spend some time analysing the hashtags I associated with each book in my reading list. I’ll update this post with the results. Until then, happy reading!

21 Jun 2016


Recently, the Shelfari book cataloguing service was shut down by Amazon. Officially, the site was merged with GoodReads, another Amazon purchase, but effectively it was a shutdown. I've used Shelfari to track my reading habits and to maintain a list of 'must read books' for over 4 years, and I was a bit put out by the announcement, but not particularly surprised. The Shelfari site hadn't been updated in years, and was slow, unresponsive (both in performance and in mobile usage) and prone to outages.

I looked at alternative sites that I could migrate my data to (GoodReads, LibraryThing, Open Library), and decided that none of them was what I was after. Given that I only had 2 lists (books I've read, and books I want to read), a simple spreadsheet would do. So I migrated my data out from Shelfari using the data export functionality, and looked at the contents of the export file.

A cursory examination of the file (unusually, a .tsv file rather than the more popular .csv format) showed:
  1. The reading list was missing key data. Primarily, if a book was recorded twice (when I had re-read it), that book's details were completely missing from the list.
  2. The file format was invalid - it quoted string fields inconsistently (i.e. empty/null values had no quotes, while other values in the same column were quoted). This made it extremely difficult to import the data into a database or Excel for manipulation.
In order to import my Shelfari data into an Excel file, I had to write a small command line application to carry out the conversion. While it was an interesting exercise (I came across the excellent EPPlus and CsvHelper libraries for manipulating Excel and .csv files respectively), it took up time I could have used on other projects. And how is a non-programmer meant to deal with invalid data? It is pretty poor of Amazon to ignore a site like Shelfari for years and then, on shutting it down, to hamper users from accessing their own data with an incompetently coded and untested data export. It also stands in stark contrast to the excellent work from both Google and Twitter in letting users export their data in a usable format.
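My converter was a .NET console app using CsvHelper and EPPlus, but the same clean-up can be sketched in a few lines of Python – the function and file names here are purely illustrative. Python's csv reader accepts a mix of quoted and unquoted fields, so re-writing the file with uniform quoting makes it palatable to Excel or a database import:

```python
import csv

def normalize_tsv(in_path: str, out_path: str) -> None:
    """Re-write a .tsv file so that every field is quoted consistently."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        # The reader tolerates inconsistent quoting; QUOTE_ALL makes
        # the output uniform.
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t", quoting=csv.QUOTE_ALL)
        for row in reader:
            writer.writerow(row)
```

For example, normalize_tsv('shelfari_export.tsv', 'clean.tsv') (a hypothetical file name) produces a file in which every field, including the empty ones, is quoted.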

Thankfully, I had taken screenshots of my book history and was able to recreate the missing book data. After getting all of my data into an Excel spreadsheet, I've now generated an updated list of the books I've read since joining Shelfari back in November 2011. You can view it here:

17 Apr 2016

Social Media

This weekend, I’ve deleted my Twitter and LinkedIn accounts. I deleted my Facebook account years ago. It looks like I’m done on social media for good.

LinkedIn is a cesspit of recruiter slime and unsolicited offers to connect about jobs I wouldn’t offer to my cat, so it was no hardship to delete that account. But Twitter?

Death of Twitter

I used to love Twitter. I’ve made some great friends there, been part of some fascinating conversations, and followed some really interesting people. But the fact is, Twitter has become a nasty place to be. I’ve seen too many people dog-piling onto others for not sharing their views, and people being abused for little or no reason. Too often lately, I’ve also found myself getting snarky with others when trying to argue a point in those rare discussions that still occasionally occur. I don’t want to spend my time in a place where people are mean to each other. I don’t want to be one of those being mean.

Apologies to all my friends on Twitter – I’ll miss catching up with you there! Please drop me a line. We can grab a coffee and stay in touch the old-fashioned, analogue way. I’ll still be posting to my blog, hopefully a bit more often than in the past year.

9 Feb 2016

Goals for 2016

As it is now mid-February, it is that time of year when I finally update you with my goals for this year.

Recap of 2015

But before I set out my goals for the year ahead, how did I do with my goals for last year?  Well, I set out 4 goals:
  1. Start running again
  2. Get physically fit again
  3. To develop a non-trivial side project using JavaScript.
  4. Marry my girlfriend Mei.
So, tackling each in turn:

  1. Running – a partial success. I did build up my running, just not as much as I had hoped.  Looking at the sub-goals:
    1. As you can see from the graph of my monthly mileage below, trying to run 80 miles a month was probably a bit too ambitious.  I averaged just over 30 miles a month over 2015, and was hitting over 45 miles per month once I got going.
    2. I ran in one Parkrun, the Ecos run in Ballymena on Saturday 10th January 2015. Frankly, this was just too big a goal, and would have meant giving up every other Saturday morning. I needed to make this goal a lot more manageable.
  2. Get fit – a definite fail.
    1. My end of year score in the Five Minute Fitness Challenge was 182, actually down on my personal best of 194.
    2. My bodyweight at the end of 2015 was 78.4 kg at around 23% body fat – very poor. I started 2015 weighing 80.7 kg.
  3. Develop a side project in JavaScript – Another fail! I did have a couple of side projects on the go, specifically one to export OneNote documents into markdown format, but I never got round to completing or releasing them, due to a lack of time and interest. I’m not too concerned about this, as I have been heavily committed to my new role since April, and have been learning a lot. But I do think this is something I need to look at again this year.
  4. Marry my girlfriend Mei – Yes!! Mei and I married in the summer at St. Patrick’s church in Coleraine. The massive dip in my monthly running mileage was caused by the August wedding (and associated stress) and our honeymoon cruise to Norway.
Getting married to my bride, Mei

So, based on a very mixed bag of results for last year’s goals, what about this year?  The main lessons I learnt from last year were to focus on a single big goal, and to be realistic about what you can achieve. Funnily enough, it is not the first time I’ve learnt the same lessons. So for this year:
  1. Continue to focus on my running
    1. Run 12 Parkruns by end of 2016 – a slightly more realistic target!
    2. To be running 60 miles per month by the end of the year
  2. Focus on my blog
    1. Blog a new article monthly, and start generating more technical content.
    2. Migrate my blog to a new blogging platform. Blogger was a good platform to start on back in March 2010, but it is definitely showing its age. It is time for a blog makeover!
    3. Make more time for blogging by giving up Twitter for the year. I enjoy Twitter and I have some good conversations there – but I also waste a lot of time, and occasionally I get dragged down into exchanging bile and snark. I would rather spend the time on something more meaningful and long term, like my blog and other side projects. I’ll occasionally post links to my own content on Twitter, but otherwise I’ll be avoiding it.
As usual, even if I don’t achieve my goals, I’ll hopefully learn or achieve something worthwhile.

4 Dec 2015

Sharing Code Between SharePoint Solutions

I'm currently working on upgrading a large set of SharePoint Web Part solutions from SharePoint 2010 to SharePoint 2013. As part of this work, I have been identifying code that is common to several of the solutions (logging to the ULS, decoding Claims Based Authenticated usernames) and moving it to a single common utility DLL. I thought it would be useful to document how I have done this, and to share some lessons I've learnt along the way.


"NuGet is the package manager for the Microsoft development platform including .NET." – taken from https://www.nuget.org

NuGet is the obvious choice for sharing a common assembly between multiple solutions. By simply adding this NuGet package (https://www.nuget.org/packages/CreateNewNuGetPackageFromProjectAfterEachBuild/) to the Visual Studio project I created for my common code, a .nupkg file was automatically created following each build.  I created a local (internal) package repository on a network share and added a post-build event to my project to push new .nupkg files to this location. Altogether, this took approximately an hour to add to my existing common utility project, and most of this was learning how to populate the NuGet package manifest.
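For illustration, the post-build event amounted to a single command along these lines (the share path and package name are placeholders; `nuget add` copies a package into a folder-based feed, and `$(TargetDir)` is expanded by Visual Studio when the event runs):

```
nuget add "$(TargetDir)MyCompany.Common.1.0.0.1.nupkg" -Source \\fileserver\NuGetPackages
```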

I then added the local package source as a feed in Visual Studio (see https://docs.nuget.org/create/hosting-your-own-nuget-feeds for details), and installed the package in the various SharePoint solutions I was working on.  To ensure that I could track what version of the common utility DLL was installed, the assembly and resulting .nupkg file were versioned. As the upgrading of the solutions is ongoing, there will be a need to add additional common code to the utility assembly, and to ensure that all SharePoint solutions are updated to the latest version.

Packaging and Deployment

My initial approach to packaging and deploying the common utility assembly was to add it as an additional assembly in the various SharePoint solution packages (see https://msdn.microsoft.com/en-us/library/ee231595%28v=vs.120%29.aspx for details). This is the recommended approach from MSDN. It is also completely wrong.

The problem is that the SharePoint packaging and deployment mechanism does not reference-count shared assemblies used across multiple SharePoint solutions. When a solution referencing the shared assembly is uninstalled, the shared assembly is uninstalled too, breaking the other solutions that still reference it. This isn’t a new problem – it has been around since SharePoint 2010 (see http://lawrencecawood.com/2011/07/13/how-i-solved-the-problem-of-dlls-being-removed-during-wsp-uninstallation/).

In my specific case, the problem occurred when I deployed a SharePoint solution that included a new, updated version of the common assembly. If I had deployed several solutions referencing one version of Common.dll, and then deployed an upgraded solution referencing a newer version, the older Common.dll was uninstalled as part of the deployment and the newer version installed instead. The new solution worked perfectly, but the other deployed solutions that still referenced the older version were broken. This led to a sticky moment following one deployment to Production. Once I twigged what the issue was, I simply redeployed the most recent WSP of a solution containing the now-missing version of the common assembly.

Deployment Redux

It was clear that continuing to deploy the common assembly as an additional assembly in the SharePoint solution package wasn't an option. While it was an intermittent issue, only occurring when deploying an updated version of the common assembly, it could still lead to unnecessary downtime. So, what other options for sharing common code were there?

After a bit of research, I came up with the following options:

  1. Deploy each new version of Common.dll as a separate, independently deployed solution. The WSP for each version would include the version number in its name (i.e. Common.V1.0.0.1.dll) and would need a unique solution GUID for deployment. This would allow the referenced assembly to be removed from the packages of the web parts that reference it, so it would not be removed when deploying newer versions.
  2. Create a separate solution package to deploy the various versions of the common assembly, and several other 3rd party DLLs. This would isolate the deployment of the common assembly from the deployment of the various solutions that reference it. Obviously, if a solution referencing a new (not previously deployed) version of the common assembly is now deployed, the new solution containing the common assemblies must be updated to include the new version and also deployed.
  3. Remove versioning entirely and simply use V1.0.0.0 of the common assembly. Instead of deleting methods or changing method parameters, future changes would require overloading existing methods and using the [Deprecated] and [Obsolete] attributes to mark methods superseded in the latest 'version' of the assembly.
  4. Merge the common assembly into a single assembly for each separate web part solution using ILMerge. This allows the versioned common assembly to be used, and results in a single DLL for each solution, avoiding additional assemblies and versioning in SharePoint altogether.

Clearly, using ILMerge was the best way forward. And it turned out to be very easy to implement with the existing solutions. The steps are:

  1. Install the NuGet package https://www.nuget.org/packages/MSBuild.ILMerge.Task/1.0.3-rc2 (it will also install a NuGet package for ILMerge as a dependency).
  2. Clean and rebuild the solution in Visual Studio.
The resulting DLL takes the name of the parent solution, and when you inspect the assembly with ILSpy, it includes the namespace of the common utility code as well as that of the parent solution. Note: if you are building an assembly for a SharePoint solution, all the child assemblies must be signed.  The additional assemblies to be merged with the primary DLL must also have the Copy Local property set to True (the default when using NuGet packages).
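If you want to see what the build task is doing under the covers, an equivalent manual merge with the ILMerge command line looks roughly like this (the assembly names and key file are illustrative, not from my actual solutions):

```
ilmerge /keyfile:MyKey.snk /targetplatform:v4 /out:Merged\MyWebPart.dll MyWebPart.dll Common.dll
```

The first assembly listed is the primary one; its name and manifest are carried over to the merged output.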

Hopefully, the above is of some use if you're looking for a solution to share code across multiple SharePoint solutions.