Our custom Ghost theme is now looking rather neat 😬, so the logical next step is to release it onto the world by running it in a Ghost instance. We'll do this by deploying a Ghost stack locally with Docker Compose, and then to Docker Cloud with HTTP/2, secured by Let's Encrypt.

Part 4 of this series on developing a Ghost theme with Gulp focused on adding some production optimisations, making our theme as snappy and responsive as it can be. We also made it look like an actual blog by introducing Bourbon Neat with some basic Sass styling. If you haven't read the previous four posts (part 1, part 2, part 3 and part 4), it's highly recommended you do so, as this part builds on the work done in them.

Get all the codes

All the code for this series on developing a Ghost theme with Gulp is available in full on GitHub.
Each part in the series has its own branch, with the master branch eventually containing the complete project.

The complete code for Part 5 is available on the part5 branch.

Docking locally

It is assumed that you have installed Docker 1.9+ using either the Docker Toolbox or Docker for X packages. This post was developed and tested with Docker for Mac 1.12.

Unless you've been living under a rock for the past few years, you will have heard of, and maybe used, Docker to containerise your applications. We'll be using Docker to create images for a Ghost instance, preconfigured to use our theme, as well as an Nginx image with optimisations for acting as a reverse proxy for a Ghost blog. To make life a little easier, we'll wrap this in Docker Compose.

Content image

Let's start with how and where we're going to store our blog content.
In this post we will stick with the default sqlite3-backed database, but we'll house it in a data volume container. Technically we don't need to split the content out into its own volume container; we could just leave it in our Ghost Docker image (which we'll see next). The problem with that approach, though, is that if you ever wanted to upgrade your Ghost image, you would blow away all your content. Not great.

I am intentionally ignoring the option of using a dedicated database like Postgres/MySQL for this post. Obviously the volume container would be pointless in that case. If you have a high profile or otherwise important/complex blog, then you should consider this option but for our little theme post the volume container will do nicely.

Following on from the code in part 4, create some new directories starting at the root of our project:

$ mkdir -p docker/content
$ tree -L 2
├── README.md
├── app
│   ├── assets
├── docker
│   ├── content

In the docker/content directory, create a Dockerfile, with the following content (excuse the pun):

# docker/content/Dockerfile
FROM ghost:0.9

VOLUME $GHOST_CONTENT

We use the ghost:0.9 base Docker image from Docker Hub, which contains a fully functional Ghost configuration. This might seem excessive, but the only reason we do it is to make sure we have the directory structures and file permissions spot on. Plus, we're going to use the Ghost image next anyway, so we won't waste much storage space, thanks to the way Docker uses a layered file system.

The next line in the Dockerfile simply exposes the directory referenced by the environment variable $GHOST_CONTENT (which points to /var/lib/ghost) as a volume which we'll use next.

A Ghostly image

The next Dockerfile describes how to build the meat and potatoes of our Ghost stack. As mentioned previously, there is thankfully already an official Docker image for Ghost, so most of the hard work has been done for us.

All we need to do is add our theme and enable it by default. We'll also provide some options to configure things like what mail server to use. Create a docker/ghost directory and add the following Dockerfile:

$ mkdir docker/ghost
# docker/ghost/Dockerfile
FROM ghost:0.9

WORKDIR $GHOST_SOURCE

ADD dist/ content/themes/my-ghost-theme
RUN sed -i.bak s/casper/my-ghost-theme/g "/usr/src/ghost/core/server/data/schema/default-settings.json"

ADD config-prod.js config.example.js

ENV NODE_ENV production

EXPOSE 2368

Using the same ghost:0.9 image as we did for the content image, we cd into $GHOST_SOURCE (/usr/src/ghost) and add the contents of a dist directory into the content/themes/my-ghost-theme directory. content/themes is the location where Ghost looks for themes, so this essentially installs our theme. So where is this dist directory? Not so fast, we'll get to that later...

Our theme is now available to Ghost, but to use it instead of the default Casper theme, we would have to go through the initial setup and then select our theme on the settings page. That feels a little anti-climactic. Wouldn't it be more satisfying to see our theme used by default and, as a side effect, enable the possibility of building customised Ghost Docker images with themes preconfigured? No problem. Remember back in part 2 where we overrode the default theme from Casper to our own? We can use the same concept here: all we need to do is replace casper with my-ghost-theme in /usr/src/ghost/core/server/data/schema/default-settings.json. Ghost uses this file when populating the initial database values, so we've just told Ghost we want to use my-ghost-theme as the default theme. 👍
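If you want to see the substitution in action without building the image, the same sed expression can be run against a stand-in file (the JSON below is a cut-down, illustrative version of default-settings.json, not the real file from the Ghost source):

```shell
# A stand-in for the relevant part of default-settings.json.
cat > default-settings.json <<'EOF'
{
    "blog": {
        "activeTheme": {
            "defaultValue": "casper"
        }
    }
}
EOF

# The same substitution the Dockerfile runs; -i.bak keeps a backup copy.
sed -i.bak s/casper/my-ghost-theme/g default-settings.json

grep defaultValue default-settings.json
```

The `.bak` backup left behind by sed is harmless inside the image, but it does let you diff the change if something looks off.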

Next we add our custom config-prod.js as config.example.js in our image. The reason we do this is that when the ENTRYPOINT script (docker-entrypoint.sh) is invoked, it checks for the existence of config.js in $GHOST_CONTENT and, if it doesn't exist, uses $GHOST_SOURCE/config.example.js as the template to create $GHOST_CONTENT/config.js. The config-prod.js looks as follows:

// docker/ghost/config-prod.js
var path = require('path'),

config = {
  production: {
    url: process.env.BLOG_URL,

    database: {
      client: 'sqlite3',
      connection: {
        filename: path.join(process.env.GHOST_CONTENT, '/data/ghost.db')
      },
      debug: false
    },

    server: {
      host: '0.0.0.0',
      port: '2368'
    },

    paths: {
      contentPath: path.join(process.env.GHOST_CONTENT, '/')
    },

    mail: {
      transport: 'SMTP',
      options: {
        service: 'Mailgun',
        auth: {
          user: process.env.MAILGUN_USER,
          pass: process.env.MAILGUN_PASSWORD
        }
      }
    }
  }
};

// Export config
module.exports = config;
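As a rough shell rendition of the config.js check the entrypoint performs (paths shortened to a local sandbox; the real docker-entrypoint.sh works with $GHOST_SOURCE and $GHOST_CONTENT inside the container):

```shell
# Stand-ins for the image's environment variables.
GHOST_SOURCE=./source
GHOST_CONTENT=./content
mkdir -p "$GHOST_SOURCE" "$GHOST_CONTENT"

# Our config-prod.js was baked into the image as config.example.js.
echo "module.exports = { production: {} };" > "$GHOST_SOURCE/config.example.js"

# First run: no config.js in the content volume yet, so the template is copied.
if [ ! -e "$GHOST_CONTENT/config.js" ]; then
  cp "$GHOST_SOURCE/config.example.js" "$GHOST_CONTENT/config.js"
fi
```

On subsequent runs config.js already exists in the content volume, so the copy is skipped and any edits you've made survive image upgrades.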

After that we add an ENV entry instructing Node to run in production mode. The MAILGUN_USER, MAILGUN_PASSWORD and BLOG_URL environment variables referenced by config-prod.js are supplied at runtime; BLOG_URL, which we'll discuss next, essentially lets you customise what URL your Ghost blog will be available at.

Last but not least, we EXPOSE 2368, which will be key later but for now advertises the fact that we want port 2368 to be exposed from a container of this image.

Start your Nginx

Nginx will serve as a reverse proxy for our Ghost container mainly so we can configure it to optimise assets served from Ghost. Create a docker/nginx directory and add the following to our next Dockerfile:

$ mkdir docker/nginx
# docker/nginx/Dockerfile
FROM nginx:1-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY ghost-blog /etc/nginx/sites-enabled/ghost-blog

RUN rm -rf /etc/nginx/conf.d

Again, pretty straightforward: start out using the latest official Nginx Alpine-based image from Docker Hub. Then we copy two files, nginx.conf and ghost-blog, to customise Nginx slightly as well as set up our ghost-blog "vhost" or server block.

The nginx.conf file looks like this:

# docker/nginx/nginx.conf
user                    nginx;
worker_processes        1;
pid                     /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    gzip                on;
    gzip_proxied        expired no-cache no-store private auth;
    gzip_types          text/plain text/css application/json application/javascript text/xml application/xml
                        application/xml+rss text/javascript image/svg+xml application/vnd.ms-fontobject
                        application/x-woff;

    # Use webp where supported
    map                 $http_accept  $webp_suffix {
                        default "";
                        "~*webp" ".webp";
    }

    include             /etc/nginx/sites-enabled/*;
}

As minimal an Nginx configuration as they come. One interesting bit is the gzip_types directive, which adds some MIME types not covered by default, including some of the font types (application/x-woff, etc.).

The other interesting bit is the WebP-related map block.
What this does is assign the value .webp to the variable $webp_suffix if the $http_accept HTTP header matches the regular expression ~*webp. If we look at an example request below:

We see the request Accept header ($http_accept in Nginx) has a value of image/webp,image/*,*/*;q=0.8 which matches our regex for webp (~*webp) and so $webp_suffix = '.webp'. This variable is used in ghost-blog to enable serving the .webp variant of our images if the browser supports it. Thanks to Eugene Lazutkin for this solution.
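In other words, the map behaves like a case-insensitive substring test on the Accept header. As a plain-shell sketch (webp_suffix below is a hypothetical stand-in for the Nginx variable, not real Nginx machinery):

```shell
# Mimic of the map block: "" by default, ".webp" when the Accept header
# mentions webp (case-insensitively, like the ~* regex modifier).
webp_suffix() {
  case "$(echo "$1" | tr '[:upper:]' '[:lower:]')" in
    *webp*) echo ".webp" ;;
    *)      echo "" ;;
  esac
}

webp_suffix "image/webp,image/*,*/*;q=0.8"   # prints .webp
webp_suffix "image/png,image/*;q=0.8"        # prints an empty line
```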

Next our ghost-blog file contains the configuration for the "vhost" that will proxy through to our Ghost container:

# docker/nginx/ghost-blog
server {
  listen              80;
  server_name         localhost my-ghost-theme.switchbit.local.io;

  location / {
    proxy_set_header  Host                $host;
    proxy_set_header  X-Forwarded-Proto   $scheme;
    proxy_set_header  X-Real-IP           $remote_addr;
    proxy_set_header  X-Forwarded-For     $proxy_add_x_forwarded_for;
    proxy_http_version                    1.1;
    proxy_connect_timeout                 90;
    proxy_send_timeout                    90;
    proxy_read_timeout                    90;
    proxy_buffer_size                     4k;
    proxy_buffers                         4 32k;
    proxy_busy_buffers_size               64k;
    proxy_temp_file_write_size            64k;

    # From: https://ghost.org/forum/bugs-suggestions/469-blog-cover-won-t-upload/11/
    client_max_body_size                  10m;
    client_body_buffer_size               128k;

    proxy_pass                            http://ghost:2368;
  }

  # revisioned/fingerprinted images can be cached forever
  location ~ "/assets/images/(.*)-([a-z0-9]{10})\.(?:png|jpe?g|tiff)(.*)$" {
    expires           max;
    add_header        Cache-Control public;
    add_header        Vary Accept;
    proxy_pass        http://ghost:2368/$uri$webp_suffix;
    access_log        off;
  }

  # revisioned/fingerprinted css and js can be cached forever
  location ~* \.(?:css|js) {
    expires           max;
    add_header        Cache-Control public;
    proxy_pass        http://ghost:2368/$uri;
    access_log        off;
  }

  # non revisioned/fingerprinted images only cache for 1 week
  location ~* \.(?:gif|png|jpe?g)$ {
    expires           1w;
    add_header        Cache-Control public;
    proxy_pass        http://ghost:2368/$uri;
    access_log        off;
  }
}

We'll listen on port 80 for any Host header that matches localhost or my-ghost-theme.switchbit.local.io and proxy those requests to http://ghost:2368. When we run our Ghost container, it will be accessible from the Nginx container via its container name, ghost, as we'll see soon.

Cold Hard Cache

The next three location blocks let us optimise our caching strategy around images and other assets, split between those that are fingerprinted/revisioned and those that are not. We cache any fingerprinted assets forever (see above screenshot: Cache-Control: max-age=315360000 and Expires: Thu, 31 Dec 2037 23:55:55 GMT), while images uploaded when creating Ghost content are cached for 7 days (uploaded images don't really change once a post is published, but we don't want to cache forever, just in case...).
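Whether a request lands in the "forever" bucket comes down to the fingerprint regex in the first asset location block. It can be sanity-checked with grep -E (the filenames are hypothetical, and the (?:...) non-capturing groups are rewritten as plain groups, since grep -E doesn't support them):

```shell
pattern='/assets/images/(.*)-([a-z0-9]{10})\.(png|jpeg|jpg|tiff)'

# Fingerprinted: a 10-character hash between the dash and the extension.
echo "/assets/images/ghost-386a7b237c.png" | grep -E "$pattern"   # matches

# Not fingerprinted: falls through to the 1-week image block instead.
echo "/assets/images/ghost.png" | grep -E "$pattern" || echo "no match"
```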


Also note the proxy_pass http://ghost:2368/$uri$webp_suffix; directive in the first asset location block. This adds support for serving the .webp version of our images if available and supported. How this works is that all images that can be converted to WebP format are converted as part of the webp Gulp task, which appends the .webp extension to the original image filename. See below for an example:

├── docker
│   ├── ghost
│   │   └── dist
│   │       ├── assets
│   │       │   ├── images
│   │       │   │   ├── apple-touch-icon-928a29e513.png
│   │       │   │   ├── apple-touch-icon-928a29e513.png.webp
│   │       │   │   ├── apple-touch-icon.png
│   │       │   │   ├── apple-touch-icon.png.webp
│   │       │   │   ├── ghost-386a7b237c.png
│   │       │   │   ├── ghost-386a7b237c.png.webp
│   │       │   │   ├── ghost-928a29e513.png
│   │       │   │   ├── ghost-928a29e513.png.webp
│   │       │   │   ├── ghost.png
│   │       │   │   └── ghost.png.webp

Notice how ghost-386a7b237c.png has been converted to WebP but suffixed with .webp (ghost-386a7b237c.png.webp). This naming convention allows the WebP image to be served from Ghost (http://ghost:2368/$uri$webp_suffix) if the requesting browser supports it (indicated by $webp_suffix having a value of .webp). Otherwise the original image is served ($webp_suffix will have the default value of "", i.e. a blank string).
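Putting the two pieces together, the upstream URL Nginx requests from Ghost is just $uri plus $webp_suffix; in plain shell terms:

```shell
uri="/assets/images/ghost-386a7b237c.png"

# Browser advertises WebP support: the suffix is appended.
webp_suffix=".webp"
echo "http://ghost:2368${uri}${webp_suffix}"
# -> http://ghost:2368/assets/images/ghost-386a7b237c.png.webp

# No WebP support: the suffix is blank and the original PNG is served.
webp_suffix=""
echo "http://ghost:2368${uri}${webp_suffix}"
# -> http://ghost:2368/assets/images/ghost-386a7b237c.png
```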

Let's compose ourselves

That's it, we're ready to run this little stack. We'll use Docker Compose (version 1) to wire everything up. Create a new file called docker-compose.yml in your docker directory:

# docker/docker-compose.yml
nginx:
  build: nginx
  ports:
    - "80:80"
  links:
    - ghost

ghost:
  build: ghost
  volumes_from:
    - content
  environment:
    - BLOG_URL=http://my-ghost-theme.switchbit.local.io

content:
  build: content
  command: echo "Ghost content volume"

Starting from the top, our nginx image will be built from docker/nginx/Dockerfile and will expose port 80 with a link to the ghost container (i.e. what allows proxy_pass http://ghost:2368; to work in ghost-blog). The ghost container will be built from docker/ghost/Dockerfile, will use the content data volume container, and we'll set an environment variable BLOG_URL to the URL our blog should be served from. Finally, we build the content volume container and run it with a command that simply echoes a message and exits. The volume container only needs to run once to get created and thereby become eligible to be mounted into the ghost container.

Note: there is a docker-compose.v2.yml file that uses the features available in the Docker Compose version 2 format and the Named Volume feature available since Docker 1.9. This removes the need for the content container and the corresponding Dockerfile. The docker-compose.v2.yml can be viewed here.

Gulp, it's Docker time

gulp ghost and gulp ghost:production have proven to be quite useful, wouldn't it be great if we had a corresponding gulp ghost:docker task to launch our Ghost stack?

Simple. Add the following to gulp/tasks/production/ghost.js:

// gulp/tasks/production/ghost.js
var gulp = require('gulp');
var ghost = require('ghost');
var path = require('path');
var runSequence = require('run-sequence');
var env = require('gulp-env');
var shell = require('gulp-shell');

var g;

gulp.task('ghost:production', ['dist'], function () {
  // ... existing ghost:production task from part 4 ...
});

gulp.task('ghost:docker', ['dist:docker'], function () {
  return gulp.src('')
    .pipe(shell([
      'docker-compose -f docker-compose.yml -p ghost up'
    ], {
      cwd: 'docker'
    }));
});

and install gulp-shell with:

$ npm install --save-dev gulp-shell

This will use the gulp-shell package to invoke docker-compose to build and run our Ghost stack.

Before we can test, we need to add two more Gulp tasks, dist:docker in gulp/tasks/production/dist-docker.js:

// gulp/tasks/production/dist-docker.js
var gulp = require('gulp');
var shell = require('gulp-shell');

gulp.task('dist:docker', ['copy:docker'], function () {
  return gulp.src('')
    .pipe(shell([
      'docker-compose -f docker-compose.yml -p ghost build'
    ], {
      cwd: 'docker'
    }));
});

and copy:docker in gulp/tasks/production/copy-docker.js:

// gulp/tasks/production/copy-docker.js
var gulp = require('gulp');

gulp.task('copy:docker', ['dist'], function () {
  return gulp.src('dist/**/*')
    .pipe(gulp.dest('docker/ghost/dist'));
});

copy:docker kicks off a production build of our theme and then copies the built theme in dist to docker/ghost/dist, which is where we get our theme into the Ghost image as alluded to above.

dist:docker uses docker-compose to build the images in our stack.
It's used by ghost:docker to build the theme and Docker images, but it can also be run on its own if you only want to build the Docker images for pushing to a Docker registry. ghost:docker profits from these crisp new images by running our Ghost stack with docker-compose.

Test it out by running gulp ghost:docker.
You should see Gulp building our theme, then the Docker images and then Ghost and Nginx starting up, something like:

We used a BLOG_URL of http://my-ghost-theme.switchbit.local.io, so if we want that URL to resolve, we need to add an entry to our /etc/hosts or the equivalent for your platform. Something like:

# /etc/hosts
# Ghost blog on Docker
127.0.0.1 my-ghost-theme.switchbit.local.io

With that, you should be able to hit http://my-ghost-theme.switchbit.local.io/ and see our Dockerised Ghost theme running.

You could also use http://localhost because of the server_name value defined in docker/nginx/ghost-blog.

Docking in the Cloud

Our local Docker Compose based stack is handy for testing a "production" setup for running our Ghost blog, but it's time to deploy this puppy to the rest of the world. To do this we'll use Docker Cloud.

Docker Cloud is a platform to build, ship and run your Docker containers. The platform as it exists today was called Tutum until it was acquired by Docker and merged into the Docker ecosystem. Essentially, it provides an orchestration layer for you to deploy containers on top of cloud providers like Digital Ocean, AWS, Azure, etc. You declare your desired stack of containers/services and their configuration in Docker Cloud and then let it work out where and how to run the containers based on your configured providers.

Now that we know where and how we'll run our Ghost stack, we need to define what we want to accomplish. As mentioned at the beginning of the post, we'd like our preconfigured custom Ghost theme running wild using the latest and greatest HTTP/2 together with encryption courtesy of Let's Encrypt. Let's go...

Get your head in the clouds

The next few sections assume you have created a Docker Cloud account and signed in successfully. Once you've done that you'll have to link your cloud provider of choice. For this post I'll be using Digital Ocean. I've been using Digital Ocean for a few years now and they are awesome. If you're looking for a provider I highly recommend Digital Ocean.

Use this link: https://m.do.co/c/9063364d02d8, to create a new Digital Ocean account (disclaimer: This is a referral link and I will potentially receive a referral reward, see here)

Adding Docker Cloud Node

Besides letting Docker Cloud spin up a fully configured droplet for you, you also have the option to "bring your own node", which lets you reuse a current, publicly accessible machine as a Docker Cloud node. This is the option I chose, and the setup is pretty basic: download a script that installs prerequisite packages (Docker included) and an "agent" so that Docker Cloud can talk to the node. This post only requires one node; however, there are many options around creating node clusters, etc.

Once you have your infrastructure set up, move on to the next section.

Image is everything

The "Ship" part of the Docker Cloud mantra is dealt with by giving you public/private Docker registries to house your images. From here we can reference the images when we define our stack. If you've followed the previous section on deploying the stack locally with Docker Compose, then we already have our images ready to go.

The registry and repositories are actually hosted on Docker Hub. This was one of the big changes from Tutum, where the registries and repositories were hosted by Tutum themselves.

If you didn't follow along you can build the content and ghost images locally with:

$ gulp dist:docker

When that's done, you should see all three images locally:

$ docker images
REPOSITORY             TAG      IMAGE ID       CREATED         SIZE
myghosttheme_nginx     latest   2703bcd39d0e   4 seconds ago   69.3 MB
myghosttheme_ghost     latest   1890bfa37713   8 seconds ago   372.9 MB
myghosttheme_content   latest   db16a68b26b5   3 seconds ago   372.8 MB

Next create two new repositories, one each for the content and Ghost images. Below is a screenshot of creating a new repository:

Once you have added both, you should see something like:

Now that our registries are open for business, follow the documentation to tag and push each of your images to a repository. For example, using the above images on my local machine:

$ docker login ...

$ docker tag myghosttheme_content donovanmuller/my-ghost-theme-content:0.9
$ docker tag myghosttheme_ghost donovanmuller/my-ghost-theme-ghost:0.9

$ docker push donovanmuller/my-ghost-theme-content:0.9
The push refers to a repository [docker.io/donovanmuller/my-ghost-theme-content]
zf76eaa4b6r6: Pushed 
543bf59013k9: Pushed 
0.9: digest: sha256:67248a3a31aaa2e0011263ca943df5612f15f6f5ab72bcfeda514a1c5502a1b6 size: 2620

$ docker push donovanmuller/my-ghost-theme-ghost:0.9
The push refers to a repository [docker.io/donovanmuller/my-ghost-theme-ghost]
ac95eaa4b6f3: Pushed 
327bf59013f1: Pushed 
0.9: digest: sha256:003c7c734b8bf17bee11f3892f15891bbae196b65d7fb3dbbc1911b14355e337 size: 3243

After some time your images should be nestled in their new home.

You can also configure Docker Cloud to build the image for you by pointing it at your GitHub/BitBucket repositories. However, the complexity is that our theme must be built with Gulp before we can produce our images. If you would like to fully automate the building and pushing of theme and images, then I suggest you look into something like Codeship or similar.

Stack'em and rack'em

Imagine for a second that there was a preconfigured Docker Cloud Stack that, when deployed, allowed you to deploy your own Ghost based blog alongside it and, when this happened, that stack would magically handle the creation and configuration of SSL/TLS encryption using Let's Encrypt and serve it all over HTTP/2.

Imagine no more!
Hit the button below and you'll have a Nginx Ghost Stack, with SSL/TLS encryption using HTTP/2, ready to go.

The stack above basically consists of customised versions of the docker-gen image in conjunction with the docker-letsencrypt-nginx-proxy-companion using the "Separate Containers" configuration. These two images together generate the appropriate Nginx server blocks for the separate Nginx image to act as a reverse proxy to a Ghost based stack in an optimised way (the same as we configured Nginx for our local Docker Compose based stack), over https, using HTTP/2.

Docker Cloud Support

The docker-gen and docker-letsencrypt-nginx-proxy-companion images have been modified to add support for Docker Cloud. By default, the stock standard images will not work correctly in a Docker Cloud context. Both images try to restart containers based on their container names; however, in Docker Cloud those names actually map to the Service name (the actual container names are suffixed with the stack name, etc.) and therefore the containers will not be restarted. See below for an example, which uses the Stack above as well as this blog's Ghost stack:

Note the container names assigned by Docker Cloud. In the example above, the docker-gen and docker-letsencrypt-nginx-proxy-companion would be trying to reference a container with the name nginx-proxy to trigger a reload. However, nginx-proxy is actually the Service name.

The customised images (as used in the Stack file above) with Docker Cloud support are listed below:

Please see the GitHub/Docker Hub pages for more detailed information.

Unleashing the Ghost

Assuming you've clicked the button above to load up the Nginx Ghost Stack in your Docker Cloud account, make sure that you change the ACME_CA_URI value from the Let's Encrypt staging URI to the production URI of https://acme-v01.api.letsencrypt.org/directory. If you want to test the Stack first, leave the staging URI as is. I say this because if you request a certificate too many times, you will be rate limited for a week... which isn't fun.
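For reference, the two endpoints side by side (the staging URI below is the ACME v1 staging endpoint as it was documented at the time; double-check it against the current Let's Encrypt docs before relying on it):

```shell
# Production: real, browser-trusted certificates, but strict rate limits.
ACME_CA_URI="https://acme-v01.api.letsencrypt.org/directory"

# Staging: untrusted test certificates, far more generous limits.
# Use this while testing the Stack, then switch to production.
# ACME_CA_URI="https://acme-staging.api.letsencrypt.org/directory"
```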

Click "Create & Deploy" and your stack will fire up.

Ghost instance, away

Once the Ghost Nginx Stack is running, all we need is a Ghost stack to run alongside this.

Luckily there is a button for that too!

Before we can deploy our Ghost blog with the my-ghost-theme, urm, theme, we need to provide values for the environment variables section in the Stack file. Below is an example of the values that you should adapt to your blog:


ghost:
  image: donovanmuller/my-ghost-theme-ghost:0.9
  volumes_from:
    - ghost-content
  environment:
    - "BLOG_URL=http://my-ghost-theme.switchbit.io"
    - "LETSENCRYPT_EMAIL=...@switchbit.io"
    - "LETSENCRYPT_HOST=my-ghost-theme.switchbit.io"
    - "VIRTUAL_HOST=my-ghost-theme.switchbit.io"


At this point, and before you click the "Create & Deploy" button, it's probably a good idea to add a DNS entry so that whatever value you used for VIRTUAL_HOST resolves to the IP of the node running your containers (if you had multiple nodes you would point to a load balancer instead). For example, I used my-ghost-theme.switchbit.io as my VIRTUAL_HOST, so I need a CNAME record pointing my-ghost-theme to either my node's A record/IP or the Service Endpoint of the nginx-proxy Service.

If your DNS record is added and your VIRTUAL_HOST resolves when you ping it, you can click the "Create & Deploy" button.

As a reminder, we are using a single node as the reference for this post. If you ever decide to use multiple nodes to deploy your Stack to, you would instead point the DNS entries to the Docker Cloud Service Endpoint of a load balancer container (like HAProxy, etc.).

It's aliiive

If your VIRTUAL_HOST is resolving correctly and you've clicked the "Create & Deploy" button you should see your stack fire into life. The ghost-content service will start and then appear as "Stopped". This is perfectly normal as we just wanted the data volume to be created. The ghost service on the other hand should appear as "Running".

Your stack will also appear as "Partly Running" and this is also perfectly normal. It's just the "Stopped" volume containers that cause this.

Once our Ghost blog service fires into life, the nginx-gen and letsencrypt-nginx-proxy combination picks up the container started event, requests a new certificate from Let's Encrypt and adds a new server block for the nginx-proxy service to use once it's restarted. You should see something like this in the logs of the letsencrypt-nginx-proxy service:

If you see something similar to that, you should then be able to open a new browser window and navigate to our deployed Ghost instance. In this example, it would be http://my-ghost-theme.switchbit.io.

Note that we were redirected to the https://... URI and that we have been issued a certificate from Let's Encrypt.

Also note that all of our resources are served using HTTP/2:

You can see the deployed Ghost instance here 👍


We've come a long way in this series, from writing the first Gulp task all the way to being deployed in Docker Cloud. Hopefully you were able to benefit from some of the concepts discussed in the series and use them when developing and running your own Ghost themes/blogs.

If you enjoyed the series or have any feedback, please let me know via Twitter.

Brage Ghost Theme

For a fully implemented Ghost theme built using the same techniques discussed in this and the preceding articles, please see the Brage theme. It contains all the optimisations mentioned above as well as a few more.