Grav on Azure - Grav Code Deployment

Now that the infrastructure is deployed, all that remains is the installation and configuration of Grav itself. This is a fairly straightforward process, but there are some nuances to be aware of and some tweaks to be made for this environment.

The first thing to do is to download Grav itself. Grav is a modular CMS, which starts out as a core on which you add plugins and themes to extend it. In many ways it's like Windows Server Core - start with a minimal product and add on what you need in order to minimise size and attack surface, and at the same time improve overall performance.

Grav is available from here

I would suggest downloading Grav Core + Admin to start with.

Having downloaded Grav, the next thing to do is pop it into your source control provider of choice. I'm using GitHub, but by all means use whatever provider you prefer. There's no built-in way in Grav to sync between our two sites, so we'll use GitHub as the canonical source of truth for our site.
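As a sketch, importing the extracted Grav download into a fresh repository looks something like this — the folder name and repository URL are placeholders for your own:

```shell
# Sketch: turn the extracted Grav download into a Git repo and push it
# to GitHub. The remote URL is a placeholder - substitute your own repo.
cd grav-admin
git init
git add .
git commit -m "Initial Grav core + admin import"
git remote add origin https://github.com/<user>/<repo>.git
git push -u origin master
```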

The benefit of this is that I'll be able to edit the site and create new posts on my local machine, push them to GitHub, and have them automatically deployed to both West Europe and East US sites with no manual intervention. This also allows me to edit on my phone and push to GitHub, or run a local copy of the site on my laptop, and push when ready.

There are a few files we don't want to be synced through our GitHub deployment, because they will be unique in the test, West Europe, and East US environments. To make sure they don't get inadvertently pushed out to the wrong place, we'll add them to a .gitignore file now. These include all cache files, log files, and the system.yaml file that includes the Redis configuration. The Redis config is unique to each of East US and West Europe, so we don't want that file to come from a central source. We also don't want to make use of Redis caching on local or dev/test deployments.
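A minimal .gitignore along those lines might look like this — the paths assume the default Grav folder layout, and the images/* entry is an assumption based on Grav keeping its processed-image cache there:

```gitignore
# Per-environment files that must not sync between regions
cache/*
logs/*
# processed-image cache (assumption: default Grav layout)
images/*
# contains the per-region Redis settings
user/config/system.yaml
```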

In order to deploy the site, we need to configure the App Service in Azure to automatically deploy from GitHub. There are various ways to do this in Azure, including Deployment Center, DevOps Projects, Visual Studio Team Services, or a direct deployment config. In this scenario, we're just going to use the simplest of these, direct deployment, which is configured under the 'Deployment Options' tab in each App Service. This step needs to be configured in both the East US and West Europe web apps.

Select the deployment source you want to pull from, in this case it's GitHub.

Authorise Azure to access your deployment source, then choose the appropriate project and branch. For production purposes, we'll be pulling from the master branch here. There are various options available for pulling alternative branches into deployment slots in the Azure App Service, but we won't go into that here.

Once you hit save, the App Service will automatically pull in, build, and deploy the code, then mark that commit as active.

Unfortunately the site will fail to load with the following error under IIS, which will need to be fixed.

"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."

In order to make Grav run under IIS, there are two things that need to be done. First, we have to install the Composer extension into the Azure App Service.

To do this, go to the 'Extensions' tab in each App Service, and add the Composer extension.

The second is to add a web.config file to the installation. The default download contains a .htaccess file for Apache installations, but IIS requires a web.config file to manage redirects and the like.

There are various ways to convert an existing .htaccess file to an IIS web.config file.

Here's a guide to run through.
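As an example of the end result, a minimal web.config that mirrors the front-controller rewrite in Grav's .htaccess might look like this — a sketch of the key rule, not the full converted file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Send anything that isn't a real file or folder to index.php -->
        <rule name="Grav front controller" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```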

Once the web.config file is created, put it into the root of your site and push the changes to GitHub to have the new file deploy to your two web apps.

Once the new version is deployed with an appropriate web.config file, reload the site again and... success! We have a Grav admin configuration page :)

You can either configure each region's Admin identically through the browser, or change it in text files and push to GitHub. From this point you should get used to editing source and then pushing to GitHub rather than making changes to the site through the admin portal.

Regardless, after configuring the admin module, you can navigate to the site and you will see the default Grav page.

At this stage we have the website deployed and running, with traffic management across two regions, and a deployment pipeline in place through GitHub. The next steps to take are to integrate the Redis cache, and then the CDN.

Because the Redis config is different in each region, this is one case where we'll make config changes via the admin portal rather than pushing from source control. If you recall, we added the file "user/config/system.yaml" to the .gitignore file earlier, so changes we make to it in the app won't be overwritten by source control later.

To do this piece we'll need to bypass Traffic Manager and go directly to the relevant web app in each region. Log in to the Admin portal...

... and navigate to the Configuration page. Find the Caching section, and change Cache driver to Redis, then fill in the Redis server and password with the relevant details.

You can find those details in the Redis app under the Access Keys blade. For the password, use either the Primary or the Secondary key presented.
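For reference, the equivalent settings in user/config/system.yaml end up looking roughly like this. The server name and key below are placeholders, and the port is an assumption (6379 is Azure Redis's non-SSL port; 6380 is the SSL port, if your Grav Redis driver supports it):

```yaml
cache:
  enabled: true
  driver: redis
  redis:
    server: mygravcache.redis.cache.windows.net   # placeholder
    port: 6379                                    # assumption, see note above
    password: '<primary-or-secondary-access-key>' # placeholder
```

Remember this file is in .gitignore, so it stays region-specific and won't be overwritten by a deployment.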

Save the settings, reload, and... crash! It doesn't work. The error is that 'Class Redis is not found', because we haven't enabled the Redis extension in the Azure App Service PHP settings.

To rectify this, we need to upload a couple of files and change an app setting. First, navigate to the Kudu Advanced Tools blade in each web app, and click 'Go'.

Select 'Debug console', then 'CMD', and navigate to the site folder. Create two new folders under site, called ini and ext.

Into the ext folder you need to upload a valid php_redis.dll file. Various versions of this are available for download; to work in Azure, it should be a non-thread-safe (NTS) x86 build.

After uploading the php_redis.dll file into the ext folder, navigate to the Application Settings blade of the web app, and add a new App Setting called 'PHP_INI_SCAN_DIR' with the value 'd:\home\site\ini'.

Return to Kudu and the ini folder created earlier, and create a new file in it called 'extensions.ini'. Add one line to that ini file, 'extension=d:\home\site\ext\php_redis.dll'.
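Putting those pieces together, the ini file's entire contents are the single extension line:

```ini
; d:\home\site\ini\extensions.ini
extension=d:\home\site\ext\php_redis.dll
```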

Return to the web app Overview tab, and restart the app service. These steps will need to be done for both the West Europe and East US web apps.

Once the app is restarted, you should be able to navigate to the site successfully. Congratulations, the site is now being fronted by a geo-diverse and highly available Redis cache.

Next, to add the CDN plugin, we first need to download it. The plugin is available here for download.

Simply extract it into a new folder under user/plugins called cdn, git add, commit, and push it, and it should be pushed out to your live sites.
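Those steps can be sketched as below — the archive name is a placeholder for whatever the plugin download is actually called:

```shell
# Sketch: add the downloaded CDN plugin to the repo and deploy it.
mkdir -p user/plugins/cdn
unzip cdn-plugin.zip -d user/plugins/cdn   # placeholder archive name
git add user/plugins/cdn
git commit -m "Add Grav CDN plugin"
git push
```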

The Deployment options blade will show that as soon as the new folder hit GitHub, the app was redeployed to the app service using the new version of the codebase. Easy peasy!

The plugin will now show up in the admin portal as an installed plugin.

The next thing to do is to enable the CDN and point it at the CDN endpoint we created in the previous blog. Navigate to the CDN profile we created earlier and make a note of the endpoint name. This could also be a custom domain, but as I don't have an SSL cert to cover one, I'm leaving it with the default domain.

Back in source control, open the user/config/plugins/cdn.yaml file (or create it if it doesn't exist) and populate it as below. If editing the existing file, the only settings that need to be changed are enabled, which should be true, and pullzone, which should be the CDN URL. Commit, push, and redeploy the site.
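A sketch of that cdn.yaml, using a placeholder endpoint name in place of your own (any other keys in the file can be left at their defaults):

```yaml
enabled: true
pullzone: mygravcdn.azureedge.net   # placeholder CDN endpoint
```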

Reloading the site should work as expected with no changes noticeable at the front end.

If you view the page source now though, you should see that all static content is being served from the CDN domain. Success!

We can now run a website performance test against the site to see how effective all these steps have been in improving access and response time, and the answer is... extremely effective! Getting all green and particularly all 'A' grade across the board can be difficult to achieve, so this is an excellent result.

As I say, this is a more complex setup than a blog site really warrants, but I wanted to try out some features and integrations that I haven't played with before. Now that I've run through this and proven that it works, I'll most likely scale the deployment back to a single region for production purposes. It's always nice to know what expansion options exist though!