Leicester.gov.uk redesign reflections

What were the lessons learned whilst carrying out the re-design of the Leicester City Council website in 2014/15? Here’s a giant brain-dump, mainly so I don’t forget stuff for the next time I’m involved in this kind of thing.

I was involved in virtually every stage of the re-design, from requirements gathering, to researching alternative content management systems, to the actual branding and design, to CMS integration: in fact, pretty much everything (apart from actually populating the site with content).

Lessons learned

My main lessons learned, in a nutshell, have been –

  1. Make sure the site can integrate with third-party applications – maybe not the most critical of points, but when deciding on a CMS or front-end framework, it must be possible for it to integrate (fairly painlessly) with other systems.
  2. Ensure buy-in and engagement from management (but not the wrong kind of buy-in) – we found that although senior management knew a re-design was happening, they either did not realise what was involved or it wasn’t a priority for them.
  3. Get a build process sorted out – as the site grew, lots of stuff changed, and especially once it went live, it was increasingly nerve-wracking to deploy things, since not only did the code-base change, but so did the Umbraco configuration.
  4. Set things up with automation in mind – by using Zurb Foundation in conjunction with Grunt and .NET’s Client Dependency Framework, it was pretty simple and quick to work on un-minified, debuggable code locally and then deploy minified, concatenated and cached assets in production.

Third party and legacy integration

One of the most irritating aspects has been having to cater to legacy third-party applications. You write your shiny, new, responsive, mobile-first, MVC, faster-than-light application and then have to integrate it with something straight out of the 19th century. An example is Capita. We developed leicester.gov.uk using Zurb Foundation 5, which is mobile-first out of the box. Unfortunately, it ditches support for IE8 and also relies on Modernizr (especially for older browsers). If for some reason Modernizr doesn’t or can’t be loaded, the fallback is not pretty.

[Screenshot: a Capita test page rendered in IE8.] Since Modernizr is not loading, none of the legacy browser workarounds are picked up, and the page looks less than inspiring.

Added to this, some of the third-party apps we needed to integrate with just provide a text-area to drop your CSS into. There’s no access to the head of the document, and so no way to conditionally load a desktop fallback stylesheet for IE7, IE8 and so on. What do you do in this situation? The only thing we could do in these cases was to create a cut-down branding kit using bespoke stylesheets, which increases development and testing time, increases the risk of errors creeping in, and makes it harder to keep one unified brand across all applications.
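For context, this is the kind of conditional loading we would normally rely on in the head of the document (a minimal sketch; the stylesheet names are illustrative):

<!-- Mobile-first stylesheet for capable browsers -->
<link rel="stylesheet" href="/css/app.min.css">
<!-- Conditional comment: only IE8 and below download the desktop fallback -->
<!--[if lt IE 9]>
<link rel="stylesheet" href="/css/app-desktop-fallback.min.css">
<![endif]-->

With only a text-area for CSS, none of this is possible, so everything has to be squeezed into a single stylesheet.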

On reflection, we should have thought more about the implications of using this framework. It has many advantages and can make the process of creating a responsive website faster and easier, but it’s designed for standalone websites that don’t need to integrate with legacy applications or work in old browsers. Maybe in future it would be better to use just specific components of the framework (like the grids), which would make the site easier to configure and easier to keep backwards/sideways compatible.

Buy-in

The senior management, the directorate and their web-editing underlings had months and months of notifications, emails, meetings, drop-in sessions, reminders and memos about the re-design project. It wasn’t until launch day, however, that they fully comprehended what was happening, and the barrage of emails complaining about where their content had gone started gushing in.

The reality of this re-design is that the site was re-written from scratch. It had grown organically over the past 10 years into a 10,000-page monster with around 300 (yes, 300) editors, many of whom would only edit one page, maybe once a month. Managing that system and its editing model was virtually impossible. So it was decided that there would be a new design, a new editing model and a drastically reduced number of pages (the new site has around 1,000).

However, actual ownership of web content was never taken on. I think this was mainly because directors and senior management were not made aware of the importance of the website. It just wasn’t on their list of priorities, so no sense of urgency was cascaded down to their staff. In some respects, the web team (my team) was to blame for this. While we knew how important the website was and tried our best to develop a successful solution, the task of getting buy-in wasn’t given the priority it should have been.

To be honest, I’m not sure what we could have done about this (thoughts and comments welcome), but I know this area needs addressing in any future projects. Maybe it’s a public-sector thing, but I suspect it’s an issue wherever you go.

The wrong type of buy-in

What I mean here is that it’s not good to have design input from those who should be working at a strategic level (even if they don’t realise what that means). During the process, we had people like the deputy city mayor giving input on actual design decisions (i.e. colours, button placement, functionality). A nightmare, plain and simple. These people should be outlining their strategy in high-level terms; it’s our job as web designers and developers to align the website to those strategies in a way that’s appropriate for the audience.

One potential bad result of this is that when the decisions made by these so-called ‘designers’ are shown to be a big pile of poo, they’ll just turn round and say ‘why didn’t you tell me this was going to happen? Why didn’t you make your point better understood?’ So it still ends up being your fault. It’s very difficult, and we didn’t handle it well, but a line needs to be drawn between high-level and low-level decisions.

Automated build processes

This is less of a procedure thing and more of a technical thing. Like I said, as the project grew, not only the code-base but also the Umbraco config changed, and especially once the site went live, making changes to the live environment became increasingly nerve-wracking.

I read some stuff about Courier, the Umbraco package that’s meant to make deployment easier, but I was put off by various people saying it didn’t do what it was meant to. Whether that’s true or not I don’t know; we never fully investigated it. The documentation on automated build processes for Umbraco seems to be fairly minimal.

In future, what we need is a process that will deploy the web files and database, and update the Umbraco configuration (e.g. changes to document types, data types, etc.), but leave the content in the live environment’s database unchanged.

Front-end configuration

One of the major advantages of using Zurb Foundation was its out-of-the-box integration with Grunt. Things have moved on even more since we developed the site (for example, the latest version of Foundation ditches mixins for vendor prefixes and uses Autoprefixer instead). However, the version we used still allowed us a great level of control over how our stylesheets and scripts were generated.

So, for development, there’s a ‘dev’ task that generates expanded, un-uglified code, and for production the ‘build’ task minifies, concatenates and uglifies it. By also using the Client Dependency Framework, which comes with Umbraco, generated stylesheets and scripts are cached. They are cache-busted by appending a query string to each stylesheet or script reference, forcing the browser to fetch a fresh version when the source files change.
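As a rough sketch (the task names, file paths and options here are illustrative, not our exact config), the Gruntfile looked something like this:

module.exports = function (grunt) {
  grunt.initConfig({
    // Compile SASS: expanded output for dev, compressed for production
    sass: {
      dev: {
        options: { outputStyle: 'expanded' },
        files: { 'css/app.css': 'scss/app.scss' }
      },
      build: {
        options: { outputStyle: 'compressed' },
        files: { 'css/app.min.css': 'scss/app.scss' }
      }
    },
    // Concatenate and uglify scripts for production only
    uglify: {
      build: {
        files: { 'js/app.min.js': ['js/src/*.js'] }
      }
    }
  });

  grunt.loadNpmTasks('grunt-sass');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('dev', ['sass:dev']);
  grunt.registerTask('build', ['sass:build', 'uglify:build']);
};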

SASS bloat

Once developed, even the minified stylesheets generated from the SASS were creeping up in size. On reflection, I could have written the code in a more efficient way. It seems like common sense now, but nesting rules just for the sake of it adds unnecessary bloat. Here’s an example…

#side-nav-container {
  //styles
  .side-nav {
    //more styles
    ul {
      li {
        //even more styles
        &.current {
          //yet more styles
          .label {
            //...
            @media #{$medium-up} {
              //...
            }
          }
        }
      }
    }
  }
}

Looking at this SASS, it appears logically nested, but it results in a large amount of unnecessary bloat: every single rule in the above will output ‘#side-nav-container .side-nav’ as a selector prefix.
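To illustrate, the compiled selectors come out something like this (a hand-written approximation, not the actual generated file; the media query is roughly what Foundation 5’s $medium-up expands to):

#side-nav-container .side-nav ul li { /* ... */ }
#side-nav-container .side-nav ul li.current { /* ... */ }
#side-nav-container .side-nav ul li.current .label { /* ... */ }
@media only screen and (min-width: 40.0625em) {
  #side-nav-container .side-nav ul li.current .label { /* ... */ }
}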

The resulting CSS is nearly 300 lines long (unminified) and is needlessly specific. Simply do the following instead…

#side-nav-container {
  //styles
}
.side-nav {
  //more styles
  ul {
    //ul-specific styles
  }
  li {
    //li-specific styles
    &.current {
      //yet more styles
      .label {
        //...
        @media #{$medium-up} {
          //...
        }
      }
    }
  }
}

This removes the #side-nav-container element from every rule to do with .side-nav. It also removes the nested ‘ul li’ rule. All of this cuts down the file size, since only top-level selectors are repeated.

Another thing to look at in future is critical-path CSS, but for this project there wasn’t time to implement it.

Other useful stuff

Other stuff that I thought went well, or was useful…

Folder-specific web.config files

Use folder-specific web.config files to allow cross-domain access to certain assets. This means the main areas of the site are not accessible outside the domain, but specific things, like custom fonts and font icons, are. It makes integration with some external third parties easier, since these files can be referenced from a different domain.
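A minimal sketch of the kind of web.config you would drop into a fonts folder (assuming IIS 7+; the wildcard origin is illustrative and could be locked down to specific domains):

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Allow fonts/icons in this folder to be requested cross-domain -->
        <add name="Access-Control-Allow-Origin" value="*" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>

Because the file only applies to its own folder, the rest of the site keeps the default, more restrictive behaviour.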


Automated variants for stylesheets

Since some of the branding kits we needed to produce required absolute references, we set up a few different configurations in the SASS files. Then, when running the Grunt build task, variant CSS files would be produced automatically: one with relative URLs, one with absolute. This was particularly useful for the cabinet pages, since their weird technique was to scrape a public URL holding a template with placeholders, inject their own content into the placeholders, and then serve that back to the user. Since the actual page being served came from a server other than the leicester.gov.uk website, all the URLs needed to be absolute.

So, our main app.scss looked like this…

@import "settings", "components/lccmixins", "app_content";

From within the settings file, I created a variable called ‘$baseUrl’, which by default is set to an empty string. Our ‘app_absolute’ SASS file looks like this…

@import "settings", "components/lccmixins";
$baseUrl: "//www.leicester.gov.uk/Foundation/";
@import "app_content";

So, the settings and other config files are imported first, the $baseUrl is overridden, and then the actual content is loaded. Whenever the $baseUrl variable is encountered, an absolute reference is created instead of a relative one.
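For illustration (a hypothetical rule, not our actual stylesheet, though the real ones used the same pattern), a rule in app_content would reference the variable like this:

// $baseUrl defaults to "" in the settings file
.masthead {
  // Compiles to url("img/logo.png") in the relative build, and to
  // url("//www.leicester.gov.uk/Foundation/img/logo.png") in the absolute one
  background-image: url("#{$baseUrl}img/logo.png");
}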

Automated styleguides

I’ve already written a post about this, but I thought this was a great tool, especially when it came to sending branding kits to various third parties. As well as sending the core files, we could just send a link to the styleguide, which would give them a comprehensive explanation of how the brand was to be used. You can see the leicester.gov.uk styleguide here.

While this styleguide isn’t quite as comprehensive as a GEL, like the BBC’s, it does give a good overview of the components of the brand and how to implement them.

Nice bits

There are quite a lot of things I’m proud of with this project that are a definite improvement over the old site. Here are a few…

Look and feel

The new website’s presentation (while subjective) has been improved, in my opinion. It’s responsive, mobile-first and optimised for speed. Whilst doing the branding exercise, the keywords ‘friendly’ and ‘approachable’ were kept at the forefront. This influenced the choice of colours, the decision to use actual images of people where possible rather than icons, the tone of the language, the decision to use big, thumb-friendly buttons, etc. We wanted people to feel at ease using the site, whatever their computer-literacy level. Here’s an example of an early-stage moodboard…

[Image: an early-stage moodboard.] We listed all the keywords that we wanted the site to reflect, then, through a process of refinement, reduced these to the two or three most important ones for the target audience. Take the health and social care landing page as an example…

[Screenshot: the new health and social care landing page.] The page features (where appropriate) pictures of positive, happy people. That sounds clichéd, but it creates a feeling of inclusiveness and friendliness on the page. Text is kept to a minimum (in Leicester the average reading age is around 7) in favour of vibrant imagery and simple keywords.

Compare this to the old site’s corresponding page…

[Screenshot: the old health and social care page.] Here we have one image of two, frankly, grumpy-looking women, and a text-heavy page that is basically a list of links. While this may appeal to some users and provides more deep links directly into content, it’s not appropriate for the majority. The dark masthead dominates the page, constantly drawing the eye away from the content.

Personally, I think we went a bit too far with the minimal-text approach. To a certain extent it hinders content discovery through deep links into the site: if the top-level task isn’t one of the buttons, you have to rely on the A–Z or search. Provided those are powerful enough, that’s not a major problem, I guess.

Here are a few other bits I thought were good…

Centralised contact information

On the old site, we had a massive amount of duplication of contact emails, phone numbers and postal addresses. If one changed, it was a huge job to update every single page. On the new site, we made one ‘contact page’ document type, containing fields for email, phone and post. Any page that needed to show contact info could then just pick one via the ‘contact details’ field in the Umbraco back office. When rendering, the page would simply consume the email, phone and postal address from that contact page and display them in modal windows. The result is a small set of contact pages consumed by all the other pages.
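A rough sketch of the rendering side (the property aliases and the typed GetPropertyValue calls are illustrative of Umbraco 7’s API, not our exact code):

@* Fetch the picked contact page and consume its details *@
@{
    var contactPage = Model.Content.GetPropertyValue<IPublishedContent>("contactDetails");
}
@if (contactPage != null)
{
    <div class="contact-details">
        <a href="mailto:@(contactPage.GetPropertyValue<string>("email"))">Send us a message</a>
        <span>@(contactPage.GetPropertyValue<string>("phone"))</span>
    </div>
}

Because every page resolves the details at render time, updating one contact page updates every page that picks it.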
[Screenshot: the contact block as it appears on the page.]

[Screenshot: the modal that opens when the user clicks ‘send us a message’.]

[Screenshot: the back office, where the ‘contact details’ field is a simple content picker.]

Autocomplete search

The way this was implemented: in the back office, a tick box indicated whether the page in question should be a promoted page. A Web API controller was then set up to output the list of promoted pages. On the front end, the jQuery autocomplete plugin simply pointed at this feed, retrieved the results as JSON and output them.
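On the front end there is very little to it (a sketch assuming jQuery UI’s autocomplete widget; the endpoint URL and response shape are illustrative):

// Point the autocomplete at the promoted-pages feed; the Web API
// controller returns a JSON array of { label, value } objects
$('#site-search').autocomplete({
    source: '/umbraco/api/promotedpages/getall',
    minLength: 2,
    select: function (event, ui) {
        // Navigate straight to the chosen page
        window.location.href = ui.item.value;
    }
});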

[Screenshot: the autocomplete in action.]

Google promotions auto-generation

Related to this is the list of Google promotions, which was automatically generated. The feeds API was developed to output either a simple list for the autocomplete or a complete XML file that could be imported into Google Site Search. The site itself hooks into Google Site Search, and to create starred links, these need adding to the site search; our feeds API automatically generated this list. Take a look at this URL – extended search promotions. The list is ready to import straight into GSS, all made possible by a simple tick box in the back office.

As a further development, it would be possible to automate insertion of the promotions list into GSS, but we didn’t get round to that.

Responsive images

Umbraco has a very nice package called Slimsy, which assists in the generation of responsive images. It makes it possible to output image references dynamically: JavaScript gets the screen size and requests an image of appropriate dimensions. As a result, a small screen fetches a small image, not a gigantic one. From an Umbraco view, it’s as simple as –

<img src="@promo.Image.GetResponsiveCropUrl("banner")" alt="@imageAlt" />

If JavaScript is not available, the Slimsy package outputs the image in a noscript tag.

Automatic desktop fallback

I wrote a post about this already, but basically, a couple of node packages were used: one that converted rems to ems and one that stripped out media queries above a screen-size threshold. This meant that at the same time as generating the mobile-first stylesheet, the build could also generate a desktop fallback for browsers below IE9. Combining this desktop stylesheet with an ‘oldies’ stylesheet, which contained all the old browser hacks, gave us a fairly nice solution.
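To make that concrete, here is a hand-written illustration (not actual build output) of what the transformation does, assuming the widest desktop breakpoint is the one inlined:

/* Mobile-first source */
.promo { font-size: 1.2rem; }
@media only screen and (min-width: 40.0625em) {
  .promo { font-size: 1.5rem; }
}

/* Generated desktop fallback for IE8 and below:
   the media query is stripped and rems become ems */
.promo { font-size: 1.5em; }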

Mapping

I wrote a post about this too. I wrote a custom property editor for it, allowing a user in the back office to do an address look-up, mark a point on a map, and then choose whether or not that page should consume child-page maps. Where a page did consume them, it gathered the map points of any child pages that were also map pages, plotted them all on one map, and linked through to the child pages using their page titles, summaries and featured images where possible.

[Screenshots: map pages in the back office and on the front end.] The solution used Leaflet.js and OpenStreetMap.
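The front-end part of this is pleasingly small (a sketch with hypothetical coordinates and popup content, not our actual code):

// Create the map and add OpenStreetMap tiles
var map = L.map('map').setView([52.6369, -1.1398], 13); // roughly central Leicester
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Plot one marker per child map page, linking through to the page
L.marker([52.6369, -1.1398])
    .addTo(map)
    .bindPopup('<a href="/libraries/central-library">Central Library</a>');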

Configurable galleries

A widely used feature on the site is galleries, so I developed a flexible macro, linked to a controller and model, which could show galleries in various ways. The user would create a gallery folder in the back office, configure it, then create child gallery items. The gallery folder configuration let users choose the layout method, lightbox functionality and a few other bits.

[Screenshot: gallery configuration in the back office.] So the user had a lot of flexibility in how the gallery would be displayed: as a list, grid or carousel, showing titles/summaries, using lightbox functionality, etc. Further to that, each gallery item could be configured to link to actual pages and consume their featured images.

Site-wide/section-wide alerts

Using a multi-node tree picker on the home page and the eleven or so section home pages, high-level web editors could easily create alerts that would cascade either through the entire site (for alerts added to the home page) or through a particular section. Alerts were ordered by level, so the site-wide ones always appeared first. To enable or disable them, the user simply had to unpublish the alert or remove it from the page.
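A rough sketch of the rendering side (the property alias, traversal and null-handling are illustrative, not our exact code):

@* Walk from the site root down to the current page, collecting any picked alerts;
   ordering by level means site-wide alerts (picked on the home page) come first *@
@{
    var alerts = Model.Content.AncestorsOrSelf()
        .OrderBy(x => x.Level)
        .SelectMany(x => x.GetPropertyValue<IEnumerable<IPublishedContent>>("alerts")
            ?? Enumerable.Empty<IPublishedContent>());
}
@foreach (var alert in alerts)
{
    <div class="alert">@(alert.GetPropertyValue<string>("alertMessage"))</div>
}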

Unit tests

There is also a post about this. The basic procedure is to compile Umbraco from source and grab the Umbraco.Tests DLL, which lets you hook into Umbraco’s test router and other such gubbins. As much as possible, I tried to write code that made testing easier. This included abstracting business logic into services and utilities that weren’t reliant on any .NET MVC magic, and creating interfaces for repository classes that were easy to mock out.
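As an illustration of the general shape (hypothetical names, not our actual classes):

// A simple data-holder with no Umbraco dependencies
public class ContactDetails
{
    public string Email { get; set; }
    public string Phone { get; set; }
}

// A repository interface that tests can replace with a fake
public interface IContactRepository
{
    ContactDetails GetByPageId(int pageId);
}

// Business logic lives in a plain service that only sees the interface
public class ContactService
{
    private readonly IContactRepository _repository;

    public ContactService(IContactRepository repository)
    {
        _repository = repository;
    }

    public bool HasPhoneNumber(int pageId)
    {
        var details = _repository.GetByPageId(pageId);
        return details != null && !string.IsNullOrEmpty(details.Phone);
    }
}

A test can then hand the service a stubbed IContactRepository without spinning up Umbraco at all.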

Some final thoughts

A lot of this is pretty random, but as I said, I was involved in most aspects of the re-design. The general feeling I have about the whole thing is positive. My main annoyances were with people too high up the food chain thinking they could meaningfully input on design. Either this showed they didn’t understand their role, or they didn’t respect us enough to let us do our job. I get the feeling that if we were an external design agency, not an in-house team, things would have been different. There seems to be a general attitude in the public sector that in-house teams don’t really know what they’re doing and that external, private companies are always better, which is not always correct.

Anyway, enough of my rant. Feedback always welcome.
