Plone Mephisto Sprint 2016 Report

Plone Mosaic 2.0 RC has now been released for Plone 5. As with previous releases, you can deploy a demo site on Heroku with a single click. Please join the bug hunt and help by reporting and fixing the issues that matter to you the most. Please note that Mosaic 2.0 is a major release, because it no longer supports Plone 4.3 (yet most of the bug fixes could be cherry-picked to the 1.0 branch by those who care). The final release will be made once possible migration problems from 1.0 have been solved.


Yes, there was yet another Plone community sprint. Plone Mephisto Sprint 2016 took place in Leipzig, Germany, from the 5th to the 9th of September, and focused on improving the TTW (through-the-web) experience of customizing Plone sites for better flexibility and hackability. The sprint was organized by Maik Derstappen and sponsored by Derico, e-ventis and the Plone Foundation. Once the sprint was approved as a strategic sprint by the foundation, enhancing Plone Mosaic became its main target.

The participants included Maik Derstappen (derico), Peter Holzer (agitator Weblösungen - BDA), Thomas Massmann (it-spirit), Asko Soukka (University of Jyväskylä), Thomas Lotze, Andreas Jung, Stefania Trabucchi, Jens Klein (Klein & Partner KG - BDA), Christoph Scheid (Uni Marburg), Kristin Kuche (Uni Marburg), Stephan Klinger (der Freitag), Nathan Van Gheem (Wildcard Corp), Veit Schiele, Gil Forcada Codinachs (der Freitag), Ramon Navarro Bosch, Michael Töpfl (e-ventis.de) and Christoph Töpfl (e-ventis.de). During the sprint there was also remote participation by Dylan Jay and Rodrigo Ferreira de Souza (Simples Consultoria).

Let's start with some GitHub activity statistics collected by Nathan Van Gheem during the sprint. At least the following packages were touched, with these great numbers:

  • plone.app.mosaic (23 PRs, 154 commits, 2 outstanding PRs)
  • plone.app.blocks (4 PRs, 18 commits)
  • plone.app.standardtiles (9 PRs, 18 commits, 2 outstanding PRs)
  • plone.app.tiles (4 PRs, 9 commits)
  • plone.tiles (2 PRs, 16 commits)

Behind those numbers, there were a lot of bug fixes, newly reported issues, some refactoring and simplification, a few major changes to the foundations of Plone Mosaic, and the long-awaited initiative for more complete configuration documentation. Thanks to all the bug fixes, the Mosaic Editor and the TinyMCE inside it will break much less often. Also, layout-related permissions now work as expected. To name the other major fixes and changes to Plone Mosaic, we:

  • deprecated the dedicated image and attachment tiles in favor of using the image and file content types and linking them from rich text tiles, just as without Mosaic.
  • removed the complex universal pluggable grid system (tm) implementation in favor of simple CSS grid class names, on top of which any current grid system can be implemented. This finally makes the Mosaic edit and view modes show the same grid by default on Plone 5.
  • removed the Mosaic Editor from content add forms and introduced a simple add form with just title and description for those content types that have Mosaic enabled and its layout view defined as their default view. This removes the confusion of add forms sometimes behaving differently from edit forms, and of not being able to save images directly inside new containerish content.
  • unified the HTML tile implementations and removed the dedicated example tiles for headers, lists, etc., in favor of a single rich text tile (which still allows similar dedicated or templated tiles).
  • introduced an outline mode for the editor (by pressing the Alt modifier key while in the editor) to keep the default editor experience simple, but still allow a more technical look into the layout and make it easier to split the layout into separate rows.
  • fixed a major issue where versioning of content with blobs in its tiles' configuration created empty blobs on the filesystem. This has been a major issue for collective.cover users and is now fixed with the new plone.app.tiles releases (1.1.0 for collective.cover).

In addition:

  • we fixed a few site-layout-related compatibility issues with Plone 5 and added support for enabling site layouts with a single line in buildout. Site layouts are not yet enabled by default.
  • we implemented a new, but transparent, tile configuration and data storage to mostly avoid using annotation objects with shared content layouts (and to be friendlier to the ZODB connection cache).
  • Michael and Christoph (Töpfl) worked on Rob Gietema's ReactJS-based Mosaic Editor experiment, and Ramon started a new Angular 2 based Mosaic Editor experiment. Hopefully at least one of them will succeed in giving us a fresh flexbox-based layout editor during the next year.
  • Maik led work on example content layouts to be shipped with the final Mosaic 2.0 release.
  • Andreas started work on allowing custom views for the existing content tile.
  • Dylan worked on the user interface and TinyMCE support for allowing tiles inside rich text tiles.
  • Rodrigo worked on refactoring the code of the RSS and calendar portlets to be easily reusable from the respective tiles.

We also discussed the path to get Mosaic into Plone core. The current plan is to get the Mosaic dependencies into Plone core first (plone.tiles, plone.app.tiles, plone.app.blocks, plone.app.standardtiles), but to PLIP the user interface package (plone.app.mosaic) only once it works well enough with the other add-ons shipped with Plone (e.g. multilingual). Unfortunately, no PLIPs have been written yet.


A few photos from the sprint are available at Google Photos. In addition to full days of sprinting, we got a city tour around Leipzig and enjoyed Maik's barbecuing in the sprint garden. Maik did a great job organizing the sprint, and I really hope we made it worth the effort. And as always, the work around Plone Mosaic continues after the sprint. There is still a huge effort left but, while Mosaic is not yet ready for Plone core, it can already be customized to give a real return on investment, as seen in Castle CMS. Finally, big thanks to everyone participating in the sprint. It was a pleasure to work with you, and I hope to see you all again!

Creating flexible and responsive tile-based pages using Plone Mosaic

What is Plone Mosaic?

Plone Mosaic is a new layout solution for Plone CMS. Mosaic, Blocks and Tiles provide a simple, yet powerful way to manage the pages on your Plone website. 
In this article I will give examples of how to create and edit flexible and responsive tile-based pages using Plone Mosaic. I'll also discuss some issues I have bumped into so far, and ponder the possibility of using Mosaic to replace Portalview, our current tool for composing customized portals. I'll use Plone 5, the newest version of Plone CMS.
Normally a Plone page consists of a title, a description and a content section. Using Mosaic, one can easily create custom layouts and use them as templates for other pages.

Mosaic editor example

1. How to Create a Mosaic Page

All too easy:
  1. Add a new page
  2. Select Display -> Mosaic layout
  3. Select Basic or Document layout
  4. Save

Basically, you first create a page and then change how the page is viewed: using the normal Plone page layout or the Mosaic layout.

2. How to Edit a Mosaic Page 

Mosaic editor on Plone 5
You can still edit the normal page properties in the top left corner, but for editing the Mosaic layout there are two menus in the top right corner: Format and Insert.

First, I'll insert a couple of tiles into place. Starting with text tiles is the way to go.
Insert menu

It is easy to drag and drop a tile into the desired place. There can be one to four columns side by side, and they are responsive too.

In this example I added two text tiles and an image.

Drag and drop the new tile to a desired place.

Image dropped over a text tile

Embedding media works well too, using the Embed tile (and oEmbed). Just add a media URL.

One thing missing here, though, is the option to change the embed size. Quite often the video size is not optimal.

3. Formatting a Mosaic Page

I can quickly drag and drop new tiles to the page, rearrange or delete tiles.

If I want to edit a text tile, I'll just click it and a TinyMCE editor appears:
TinyMCE on Mosaic text tile

The editor has basic editing features, such as:
  • Formatting (headers, paragraphs etc.)
  • Text alignment
  • Bullet lists
  • Images
  • Links
Occasionally I have missed the ability to edit raw HTML, though.

However, the basic formatting functions on tiles go a long way:
Formatting options

I can change content alignment, add a dark background and add some padding between tiles. Of course this could be done using Plone theming tools too.

Changing the structure of a Mosaic page is really easy and it works well - I haven't come across any issues with drag and drop (using Firefox on OS X).

Replacing Portalview with Mosaic?

One use case for Mosaic when we migrate our websites to Plone 5 could be replacing our multiple portal pages. They are created using our own Portalview composition add-on (sorry, only a really old version is available!). Portalview contains e.g. the following features:
  • Possibility to compose a customised layout (based on folders and other Plone content)
  • Image/text/video/page carousel (Yes I know, carousels are evil :)
  • Accordions
  • Tabs
  • Dropdown menu
  • Custom CSS!
All can be manipulated in the browser.

An example here:

And another one:

Issues and suggestions

In addition to myself, there are a couple of users and content managers working with Mosaic. The first comments have been positive: "This looks shiny and modern!" We will get more experience next autumn, when users begin to create a new site for a new faculty at the University of Jyväskylä.

"This looks shiny and modern!"

Some issues or suggestions for features so far:
  • Selecting between the Basic and Document layouts when activating the Mosaic layout is probably not needed?
  • In addition to the text tile, there are list and subheading tiles; since both can be added through text tiles, perhaps they are not needed as separate tiles at all?
  • No unique IDs on individual tiles, which makes it harder to write customised CSS for certain tiles
  • TinyMCE doesn't offer an HTML view; it could come in handy in some cases (embedding an iframe, for example)
  • Not possible to change the embed size
  • No custom CSS
Overall, Mosaic is technically very robust - I haven't come across any errors or technical issues while using it.

However, Mosaic is still missing some features needed to replace Portalview as it is, but luckily:

Plone Mephisto Sprint 2016, Leipzig, Germany, September 5th-9th 2016

Thanks to the power of the awesome Plone open source community, there will be a sprint to develop Plone Mosaic further!


Plone Barcelona Sprint 2016 Report

For the last week, I was lucky enough to be allowed to participate in a Plone community sprint in Barcelona. The sprint was about polishing the new RESTful API for Plone, and experimenting with new frontend and backend ideas, to prepare Plone for the next decade (as envisioned in its roadmap). And once again, the community proved the power of its deeply rooted sprinting culture (adopted from the Zope community in the early 2000s).

Just think about this: you need some new features for your sophisticated software framework, but you don't have the resources to build them on your own. So, you set up a community sprint: reserve the dates and the venue, choose the topics for the sprint, advertise it or invite the people you want, and get a dozen experienced developers to enthusiastically work on your topics for a full week, mostly at their own cost. It's a crazy bargain. More than too good to be true. Yet, that's just what seems to happen in the Plone community, over and over again.

To summarize, the sprint had three tracks. First, there was the completion of plone.restapi – a high quality and fully documented RESTful hypermedia API for all of the currently supported Plone versions. After this productive sprint, the first official release of it should be out any time now.

Then there was the research and prototyping of a completely new REST-API-based user interface for Plone 5 and 6: an extensible Angular 2 based app, which does all its interaction with the Plone backend through the new RESTful API, and would universally support both server side and browser side rendering for fast response times, SEO and accessibility. These goals were also reached, all the major blockers were resolved, and the chosen technologies were proven to work together. To pick my favorite side product from that track: Albert Casado, the designer of the Plone 5 default theme, appeared to migrate the theme from LESS to SASS.

Finally, there was our small backend moonshot team: Ramon and Aleix from Iskra / Intranetum (Catalonia), Eric from AMP Sport (U.S.), Nathan from Wildcard (U.S.) and yours truly from the University of Jyväskylä (Finland). Our goal was to start an alternative lightweight REST backend for the new experimental frontend, reusing the best parts of the current Plone stack where possible. Eventually, to meet our goals within the given time constraints, we agreed on the following stack: an aiohttp based HTTP server, the Plone Dexterity content-type framework (without any HTML views or forms) built around the Zope Toolkit, and ZODB as our database, all on Python 3.5 or greater. Pyramid remains a possible alternative to ZTK later.


I was responsible for preparing the backend track in advance, and got us started with a simple aiohttp based HTTP backend with an experimental ZODB connection supporting multiple concurrent transactions (when handled with care). Most of my actual sprint time went into upgrading the Plone Dexterity content-type framework (and its tests) to support Python 3.5. That also resulted in backwards compatible fixes and pull requests for Python 3.5 support for all of its dependencies in the plone.* namespace.
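To give an idea of the chosen HTTP layer, here is a minimal aiohttp server sketch (my own illustration, not the actual sprint code; the handler body is made up):

from aiohttp import web

async def hello(request):
    # In the real backend, traversal into ZODB-persisted Dexterity
    # content and security checks would happen around here.
    return web.json_response({'message': 'Hello from aiohttp'})

app = web.Application()
app.router.add_route('GET', '/', hello)
web.run_app(app, port=8080)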

Ramon took the lead in integrating ZTK into the new backend, implemented content-negotiation and content-language aware traversal, and kept us motivated by raising the sprint goals once features started clicking together. Aleix implemented an example docker-compose setup for everything being developed at the sprint, and open-sourced their in-house OAuth server as plone.oauth. Nathan worked originally in the frontend team, but joined us for the last third of the sprint to work on a pytest-based test setup and asyncio-integrated Elasticsearch support. Eric replaced the Zope2 remnants in our Dexterity fork with ZTK equivalents, and researched the available options for integrating the content serialization of plone.restapi into our independent backend, eventually leading to a new package called plone.jsonserializer.

The status of our backend experiment after the sprint? Surprisingly good. We got far enough that it's almost easier to point out the missing and incomplete pieces that still remain on our to-do list:

  • We ported all of the Plone Dexterity content-type framework's dependencies to Python 3.5. We only had to fork the main plone.dexterity package, which still has some details in its ZTK integration to finish and tests to be fixed. Also, special fields (namely files, rich text and maybe relations) are still to be done.
  • Deserialization from JSON to Dexterity was left incomplete, because we were not able to fully reuse the existing plone.restapi code (it depends on z3c.form deserializers, which we cannot depend on).
  • We got a basic aiohttp-based Python 3.5 asyncio server running with ZODB and asynchronous traversal, permissions, REST service mapping and JSON serialization of Dexterity content. Integration with the new plone.oauth and zope.security was also almost done, and Ramon promised to continue working on that to get the server ready for their in-house projects.
  • Workflows and their integration are still to be done. We planned to try repoze.workflow at first, and if that's not a fit, then look again into porting DCWorkflow or other third-party libraries.
  • Optimization for asyncio still needs more work once the basic CRUD features are in place.

So, that was a lot of checkboxes ticked in a single sprint, really something to be proud of. And if that were not enough, an overlapping Plone sprint in Berlin took the Python 3.5 upgrades of our stack even further, my favorite result being a helper tool for migrating Python 2 ZODB databases to Python 3. These two sprints really transformed the nearing end-of-life of Python 2 from a threat into a possibility for our community, and confirmed that Plone has a viable roadmap well beyond 2020.

Personally, I just cannot wait for a suitable project with Dexterity based content types on a modern asyncio based HTTP server, or for the next chance to meet our wonderful Catalan friends! :)

About XPath like tools for JSON

It has bothered me that I have been processing JSON manually, while for XML there are standard tools like XPath and XSLT. There must be better tools available for JSON too. Here are the ones I found for Python, and some observations about them.


ObjectPath

  • Powerful query language
  • Reminds me of write-once regexps
  • The tutorial focuses more on command line usage than on usage as a module
  • Doesn't support Python 3


dpath

  • Simple API. Maybe too simple.
  • Good examples in the README
  • No other dependencies
  • Supports Python 3


jsonpath-rw

  • A rewrite of the older jsonpath library, which is a port of the JavaScript version
  • Some minor dependencies
  • Supports Python 3


If my requirements are simple, I would probably go for dpath or jsonpath-rw (in that order). If I need some heavy lifting, I would go with jq. ObjectPath has the nicest web pages, but the lack of Python 3 support is a show-stopper nowadays. There doesn't seem to be a clear winner at the moment (like Requests for HTTP clients or SQLAlchemy for ORMs). Your mileage may vary.
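To show the flavor of the two libraries I would pick first, here is a minimal sketch (the data and query paths are made up for illustration):

import dpath.util
from jsonpath_rw import parse

data = {'store': {'book': [{'title': 'A', 'price': 8},
                           {'title': 'B', 'price': 23}]}}

# dpath: filesystem-like glob paths over nested dicts and lists
titles = [value for _, value in
          dpath.util.search(data, 'store/book/*/title', yielded=True)]

# jsonpath-rw: XPath-like expressions compiled into reusable objects
prices = [match.value for match in parse('store.book[*].price').find(data)]

print(titles)  # ['A', 'B']
print(prices)  # [8, 23]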

The perfect excuse for acceptance testing

It may surprise some that open source, as revolutionary a phenomenon as it has been, is actually a very conservative way to develop software. But that also makes it such a great fit for stable organisations like established universities. Successfully participating in open source projects requires long-term (and often personal) commitment, but it can also result in pleasant surprises.
Photo by Anni Lähteenmäki
One such surprising result of our open source collaboration has been the ability to generate documentation screenshots as a side effect of acceptance testing. Or, to put it the other way around, we are able to script our documentation screenshots as inline acceptance tests for the end-user features being documented. We are even able to do "documentation driven development": write the acceptance criteria into a documentation skeleton, and see the documentation complete itself with screenshots as the project develops.


For example, once the required build tools and configuration boilerplate are in place, writing our end-user documentation with scripted (and therefore always up-to-date) screenshots may look like this:
Submitting a new application
============================

For submitting a new application, simply click the button and fill the form
as shown in the pictures below:

..  figure:: submit-application-01.png

    Open the form to submit a new application by pressing the button.

..  figure:: submit-application-02.png

    Fill in the required fields and press *Submit* to complete.

..  code:: robotframework

    *** Test Cases ***

    Show how to submit a new application
        Go to  ${APPLICATION_URL}

        Page should contain element
        ...  css=input[value="New application"]
        Capture and crop page screenshot
        ...  submit-application-01.png
        ...  css=#content

        Click button  New application

        Page should contain  Submit new application
        Page should contain element
        ...  css=input[value="Submit"]

        Input text  id=form-widgets-name  Jane Doe
        Input text  id=form-widgets-email  jane.doe@example.com
        Capture and crop page screenshot
        ...  submit-application-02.png
        ...  css=#content

        Click button  Submit
        Page should contain  New application submitted.
This didn't become possible overnight, and it would not have been possible without ideas, contributions and testing from the community. It all started almost by accident: a crucial piece between our then favourite Python testing framework and Robot Framework based cross-browser acceptance testing with Selenium was missing. We needed that piece to enable one of our interns to test their project, so we chose to implement it; many parts clicked together, and a few years later we had this new development model available in our toolbox.
The specific technology stack for writing documentation with acceptance-test-based screenshots is a real mashup in itself:
  • The final documentation is built with Sphinx, a very popular software documentation tool written in Python.
  • The extensibility of Sphinx is based on a plain text formatting syntax called ReStructuredText and its mature compiler implementation, Docutils, which is also written in Python.
  • Together with a Google Summer of Code student I mentored, we implemented a Sphinx plugin to support inline plain text Robot Framework test suites within Sphinx documentation (see the conf.py sketch after this list).
  • In Robot Framework test suites, we can use its Selenium keywords to test the web application in question and capture screenshots to be included in the documentation.
  • We also implemented a library of convenience keywords for annotating and cropping screenshots by the bounding boxes of given HTML elements.
  • For Plone, with a lot of contributions from its friendly developer community, a very sophisticated Robot Framework integration was developed to enable setting up and tearing down a complete Plone server with app-specific test fixtures directly from Robot Framework test suites with a few keywords.
  • Finally, with help from the Robot Framework core developers, the new ReStructuredText support for Robot Framework was implemented, which made it possible to also run the written documentation with scripted screenshots as a real test suite with Robot Framework's primary test runner (pybot).
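To sketch how the Sphinx side is wired together, here is a minimal conf.py fragment (a sketch assuming the plugin's import name follows its PyPI name, sphinxcontrib-robotframework; the option shown is illustrative, so check the plugin's own documentation for exact names):

# conf.py -- a minimal Sphinx configuration sketch for running inline
# Robot Framework test suites (and regenerating their screenshots)
# as part of the documentation build.
extensions = [
    'sphinxcontrib_robotframework',
]

# Hypothetical toggle for whether the inline suites are actually
# executed during the build or only rendered as code listings.
sphinxcontrib_robotframework_enabled = True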
Once you can script both the application configuration and the screenshots, fun things become possible. For example, here's an old short scripted Plone clip presenting all the languages supported by Plone 4 out of the box. Only minimal editing was required to speed up the clip and add the ending logo:

Any cons? Yes. It's more than challenging to integrate this approach into the workflows of real technical writers, who don't usually have a developer background. In practice, the automated acceptance tests must be written by developers, and ReStructuredText is still quite a technical syntax for writing documentation. Therefore, even for us, this toolchain still remains quite underused.

Evolution of a Makefile for building projects with Docker

It's hard to move to GitLab and resist the temptation of its integrated GitLab CI. And with GitLab CI, it's just natural to run all CI jobs in Docker containers. Yet, to avoid vendor lock-in with its integrated Docker support, we chose to keep our .gitlab-ci.yml configurations minimal and do all Docker calls with GNU make instead. This also ensured that all of our CI tasks remain locally reproducible. In addition, we wanted to use official upstream Docker images from the official hub as far as possible.

As always with make, there's a danger that Makefiles themselves become projects of their own. So, let's begin with a completely hypothetical Makefile:

all: test

test:
	karma test

.PHONY: all test

Separation of concerns

At first, we want to keep all Docker related commands separate from the actual project specific commands. This led us to two separate Makefiles: a traditional default one, which expects all the build tools and other dependencies to exist in the running system, and a Docker specific one. We named them Makefile (as already seen above) and Makefile.docker (below):

all: test

test:
	docker run --rm -v $(PWD):/build -w /build node:5 make test

.PHONY: all test

So, we simply run a Docker container from the required upstream language image (here Node 5), mount our project into the container, and run make for the default Makefile inside the container.

$ make -f Makefile.docker

Of course, the logical next step is to abstract that Docker call into a function, to make it trivial to also wrap other make targets to be run in Docker:

make = docker run --rm -v $(PWD):/build -w /build node:5 make $1

all: test

test:
	$(call make,test)

.PHONY: all test

Docker specific steps in the main Makefile

In the beginning, I mentioned that we try to use the official upstream Docker images whenever possible, to keep our Docker dependencies fresh and supported. Yet, what if we need just minor modifications to them, like the installation of a couple of extra packages...

Because our Makefile.docker mostly just wraps the make call for the default Makefile into an auto-removed Docker container run (docker run --rm), we cannot easily install extra packages into the container from Makefile.docker. This is the exception where we add Docker-related commands into the default Makefile.

There are probably many ways to detect that we are running inside a Docker container, but my favourite is testing for the existence of the /.dockerenv file. So, any Docker-container-specific command in the Makefile is wrapped with a test for that file, as in:

all: test

test:
	[ -f /.dockerenv ] && npm -g i karma || true
	karma test

.PHONY: all test

Getting rid of the filesystem side-effects

Unfortunately, one does not simply mount a source directory from the host into a container and run arbitrary commands with arbitrary users with that mount in place. (Unless one wants to play the game of having matching user IDs inside and outside the container.)

To avoid all the issues related to Docker possibly trying (and sometimes succeeding) to create files in the mounted host file system, we can run Docker without a host mount at all, by piping the project sources into the container:

make = git archive HEAD | \
       docker run -i --rm -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make $1"

all: test

test:
	$(call make,test)

.PHONY: all test
  • git archive HEAD writes a tarball of the project git repository's HEAD (latest commit) to stdout.
  • -i in docker run enables stdin in Docker.
  • -v /build in docker run ensures that /build exists in the container (as a temporary volume).
  • bash -c "tar x --warning=all && make $1" is the single command to be run in the container (bash with arguments). It extracts the piped tarball from stdin into the current working directory of the container (/build) and then executes the given make target from the extracted tarball's Makefile.

Caching dependencies

One well known issue with Docker based builds is the amount of language specific dependencies required by your project on top of the official language image. We've solved this by creating a persistent data volume for those dependencies, and sharing that volume from build to build.

For example, defining a persistent NPM cache in our Makefile.docker would look like this:

CACHE_VOLUME = npm-cache

make = git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test:
	@$(INIT_CACHE)
	$(call make,test)

.PHONY: all test

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5
  • The CACHE_VOLUME variable holds the fixed name for the shared volume and for the dummy container that keeps the volume from being garbage collected by docker run --rm.
  • INIT_CACHE ensures that the cache volume is always present (so that it can simply be removed if its state goes bad).
  • -v $(CACHE_VOLUME):/cache in docker run mounts the cache volume into the test container.
  • NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' in docker run sets a make variable NPM_INSTALL_ARGS with arguments configuring the cache location for NPM. That variable, of course, must be explicitly defined and used in the default Makefile:

all: test

test:
	@[ -f /.dockerenv ] && npm -g $(NPM_INSTALL_ARGS) i karma || true
	karma test

.PHONY: all test

The cache volume, of course, adds state between the builds and may cause issues that require resetting the cache volume when that happens. Still, most of the time, this has been working very well for us, significantly reducing the required build time.

Retrieving the build artifacts

The downside of running Docker without mounting anything from the host is that it's a bit harder to get the build artifacts (e.g. test reports) out of the container. We've tried both stdout and docker cp for this. In the end, we settled on a dedicated build data volume and docker cp in Makefile.docker:

CACHE_VOLUME = npm-cache

make = git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build $(DOCKER_RUN_ARGS) node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD)
test:
	@$(INIT_CACHE)
	$(call make,test); \
	  status=$$?; \
	  docker cp $(BUILD):/build .; \
	  docker rm -f -v $(BUILD); \
	  exit $$status

.PHONY: all test

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5

# http://cakoose.com/wiki/gnu_make_thunks
BUILD_GEN = $(shell docker create -v /build node:5)
BUILD = $(eval BUILD := $(BUILD_GEN))$(BUILD)

A few powerful make patterns here:

  • DOCKER_RUN_ARGS = sets a placeholder variable for injecting make target specific options into docker run.
  • test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD) sets a target-local value for DOCKER_RUN_ARGS. Here it adds the volumes from a container whose id is defined in the variable BUILD.
  • BUILD is a lazily evaluated make variable (created with the GNU make thunk pattern). It gets its value when it is used for the first time. Here it is set to the id of a new container with a shareable volume at /build, so that docker run ends up writing all its build artifacts into that volume.
  • Because make would stop its execution after the first failing command, we must wrap the docker run based make test call so that we
    1. capture the original return value with status=$$?
    2. copy the artifacts to host using docker cp
    3. delete the build container
    4. finally return the captured status with exit $$status.

This pattern may look a bit complex at first, but it has been powerful enough to start any number of temporary containers and link or mount them with the actual test container (similarly to docker-compose, but directly in the Makefile). For example, we use this to start and link Selenium web driver containers to be able to run Selenium based acceptance tests in the test container on top of the upstream language base image, and then retrieve the test reports from the build container volume.

Blazingly fast code reload with fork loop

The Plone community has long traditions of community driven development sprints, nowadays also known as "hackathons". For new developers, sprints are the best possible places to meet and learn from the more experienced developers. And, as always, when enough open-minded developers collide, amazing new things get invented.


One such event was the Sauna Sprint 2011 in Tampere, Finland, organized by EESTEC. During that sprint, starting from an idea by Mikko Ohtamaa and with help from the top Zope and Plone developers of the time, we developed a fast code reloading tool, which has significantly sped up our Plone-related development efforts ever since.

So, what was the problem? Plone is implemented in Python, a dynamically interpreted programming language, which already requires no compilation between changing the code and seeing the change after a service restart. Yet, Plone has a huge set of features, leading to a large codebase and a long restart time. And when you are not doing pure TDD, but also want to see the effects of your code changes in the running software (or re-run acceptance tests), the restart time really affects development speed. The Python language did have its own ways of reloading code, but there were corner cases where everything was not really reloaded.

While our tool is strictly specific to Plone, the idea is very generic and language independent (and, to be honest, we did borrow it from the developer community of another programming language). Our tool implemented and automated a way to split the code loading in Plone startup into two parts: the first part loads all the common framework code, and the second part only our custom application code. And by loading, we really mean loading into the process memory. Once the first part is loaded, we fork the process and let the forked child process load the second part. Every time a change is detected in the code, we simply kill the child process and fork a new one with clean memory. What could possibly go wrong?
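To make the idea concrete, here is a minimal, hypothetical sketch of such a fork loop in Python (this is not the actual Plone tool; the module and file names are made up for illustration):

import importlib
import os
import signal
import time

APP_MODULE = 'myapp'   # hypothetical fast-changing application module
WATCHED = 'myapp.py'   # hypothetical file to watch for changes

# 1) Load the heavy, rarely changing framework code once, up front.
import http.server  # stands in for "all the common framework code"

def run_child():
    # 2) In the forked child, load only the application code on top of
    # the framework code already in memory, and serve until killed.
    app = importlib.import_module(APP_MODULE)
    app.main()  # assumed entry point that blocks

def fork_child():
    pid = os.fork()
    if pid == 0:
        run_child()
        os._exit(0)
    return pid

child = fork_child()
last = os.stat(WATCHED).st_mtime
while True:
    time.sleep(0.5)
    mtime = os.stat(WATCHED).st_mtime
    if mtime != last:
        last = mtime
        # 3) On change: kill the child and fork a fresh one, so the
        # application code is re-imported into clean memory.
        os.kill(child, signal.SIGTERM)
        os.waitpid(child, 0)
        child = fork_child()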

That was already almost five years ago, and we are still happily using the tool. Even better, we integrated the reloading approach into a volatile Plone development server with a pre-loaded test fixture: each change to the code or the fixture restarts the server and reloads the fixture, ready for running acceptance tests against it. No more time wasted on restarts to get back to where you left off.