Creating flexible and responsive tile-based pages using Plone Mosaic

What is Plone Mosaic?

Plone Mosaic is a new layout solution for Plone CMS. Mosaic, Blocks and Tiles provide a simple yet powerful way to manage the pages on your Plone website.
In this article I will give examples of how to create and edit flexible and responsive tile-based pages using Plone Mosaic. I'll also discuss some issues I have bumped into so far, and ponder the possibility of using Mosaic to replace Portalview, our current tool for composing customized portals. I'll use Plone 5, the newest version of Plone CMS.
Normally a Plone page consists of a title, a description and a content section. Using Mosaic one can easily create custom layouts and use them as templates for other pages.

Mosaic editor example

1. How to Create a Mosaic Page

All too easy:
  1. Add a new page
  2. Select Display -> Mosaic layout
  3. Select Basic or Document layout
  4. Save

Basically, you first create a page and then change how the page is viewed - using the normal Plone page layout or the Mosaic layout.

2. How to Edit a Mosaic Page 

Mosaic editor on Plone 5
You can still edit normal page properties in the top left corner, but for editing the Mosaic layout there are two menus in the top right corner: Format and Insert.

First, I'll insert a couple of tiles into place. Starting with text tiles is the way to go.
Insert menu

It is easy to drag and drop a tile into the desired place. There can be 1-4 columns side by side. And they are responsive, too.

In this example I added two text tiles and an image.

Drag and drop the new tile to a desired place.

Image dropped over a text tile

Embedding media works well too, using the Embed tile (and oEmbed). Just add a media URL.

One thing missing here, though, is the option to change the embed size. Quite often the default video size is not optimal.

3. Formatting a Mosaic Page

I can quickly drag and drop new tiles to the page, rearrange or delete tiles.

If I want to edit a text tile, I'll just click it and a TinyMCE editor appears:
TinyMCE on Mosaic text tile

The editor has basic editing features, such as:
  • Formatting (headers, paragraphs etc.)
  • Text alignment
  • Bullet lists
  • Images
  • Links
Occasionally I have missed the option to edit the HTML, though.

However, the basic formatting functions on tiles go a long way:
Formatting options

I can change content alignment, add a dark background and add some padding between tiles. Of course this could be done using Plone theming tools too.

Changing the structure of a Mosaic page is really easy and works well - I haven't come across any issues with drag and drop (using Firefox on OS X).

Replacing Portalview with Mosaic?

One use case for Mosaic, when we migrate our websites to Plone 5, could be replacing our multiple portal pages. They are created using our own Portalview composition add-on (sorry, only a really old version is available!). Portalview contains e.g. the following features:
  • Possibility to compose a customised layout (based on folders and other Plone content)
  • Image/text/video/page carousel (Yes I know, carousels are evil :)
  • Accordions
  • Tabs
  • Dropdown menu
  • Custom CSS!
All can be manipulated in the browser.

An example here:

And another one:

Issues and suggestions

In addition to myself, there are a couple of users and content managers working with Mosaic. The first comments have been positive: "This looks shiny and modern!" We will get more experience next autumn, when users begin to create a new site for a new faculty at the University of Jyväskylä.

"This looks shiny and modern!"

Some issues or suggestions for features so far:
  • Selecting between the Basic and Document layout is probably not needed when activating the Mosaic layout?
  • In addition to the text tile there are list and subheading tiles - they can be added through text tiles, so perhaps they are not needed as separate tiles at all?
  • No unique IDs on individual tiles - this makes it harder to write customised CSS for a certain tile
  • TinyMCE doesn't show the HTML view, which could come in handy in some cases (embedding an iframe, for example)
  • Not possible to change the embed size
  • No custom CSS
Overall, Mosaic is technically very robust - I haven't come across any errors or technical issues while using it.

However, Mosaic is still missing some features needed to replace Portalview as it stands, but luckily:

Plone Mephisto Sprint 2016, Leipzig, Germany, September 5th-9th 2016

Thanks to the power of the awesome Plone open source community, there will be a sprint to develop Plone Mosaic further!


Plone Barcelona Sprint 2016 Report

For the last week, I was lucky enough to be allowed to participate in the Plone community sprint in Barcelona. The sprint was about polishing the new RESTful API for Plone, and experimenting with new front end and backend ideas, to prepare Plone for the next decade (as envisioned in its roadmap). And once again, the community proved the power of its deeply rooted sprinting culture (adopted from the Zope community in the early 2000s).

Just think about this: you need new features for your sophisticated software framework, but you don't have the resources to build them on your own. So you set up a community sprint: reserve the dates and the venue, choose the topics, advertise it or invite the people you want, and get a dozen experienced developers to enthusiastically work on your topics for a full week, mostly at their own cost. It's a crazy bargain. More than too good to be true. Yet that's just what seems to happen in the Plone community, over and over again.

To summarize, the sprint had three tracks. First there was the completion of plone.restapi – a high quality, fully documented RESTful hypermedia API for all currently supported Plone versions. After this productive sprint, the first official release should be out any time now.

Then there was research and prototyping of a completely new REST-API-based user interface for Plone 5 and 6: an extensible Angular 2 based app, which does all its interaction with the Plone backend through the new RESTful API, and would universally support both server side and browser side rendering for fast response times, SEO and accessibility. These goals were also reached, all the major blockers were resolved, and the chosen technologies were proven to work together. To pick my favorite side product of that track: Albert Casado, the designer of the Plone 5 default theme in LESS, showed up to migrate the theme to SASS.

Finally, there was our small backend moonshot team: Ramon and Aleix from Iskra / Intranetum (Catalonia), Eric from AMP Sport (U.S.), Nathan from Wildcard (U.S.) and yours truly from University of Jyväskylä (Finland). Our goal was to start with an alternative lightweight REST backend for the new experimental frontend, re-using the best parts of the current Plone stack when possible. Eventually, to meet our goals within the given time constraints, we agreed on the following stack: aiohttp based HTTP server, the Plone Dexterity content-type framework (without any HTML views or forms) built around Zope Toolkit, and ZODB as our database, all on Python 3.5 or greater. Yet, Pyramid remains as a possible alternative for ZTK later.


I was responsible for preparing the backend track in advance, and got us started with a simple aiohttp based HTTP backend with an experimental ZODB connection supporting multiple concurrent transactions (when handled with care). Most of my actual sprint time went into upgrading the Plone Dexterity content-type framework (and its tests) to support Python 3.5. That also resulted in backwards compatible fixes and pull requests for Python 3.5 support for all its dependencies in the plone.* namespace.

Ramon took the lead in integrating ZTK into the new backend, implemented content-negotiation and content-language aware traversal, and kept us motivated by raising the sprint goal once features started clicking together. Aleix implemented an example docker-compose setup for everything being developed at the sprint, and open-sourced their in-house OAuth server as plone.oauth. Nathan worked originally in the frontend team, but joined us for the last third of the sprint for a pytest-based test setup and an asyncio-integrated Elasticsearch connection. Eric replaced the Zope2 remnants in our Dexterity fork with ZTK equivalents, and researched all the available options for integrating the content serialization of plone.restapi into our independent backend, eventually leading to a new package called plone.jsonserializer.

The status of our backend experiment after the sprint? Surprisingly good. We got far enough that it's almost easier to point out the missing and incomplete pieces that still remain on our to-do list:

  • We ported all Plone Dexterity content-type framework dependencies to Python 3.5. We only had to fork the main plone.dexterity package, which still has some details in its ZTK integration to finish and tests to be fixed. Also, special fields (namely files, rich text and maybe relations) are still to be done.
  • Deserialization from JSON to Dexterity was left incomplete, because we were not able to fully re-use the existing plone.restapi code (it depends on z3c.form deserializers, which we cannot depend on).
  • We got a basic aiohttp-based Python 3.5 asyncio server running with ZODB and asynchronous traversal, permissions, REST service mapping and JSON serialization of Dexterity content. Integration with the new plone.oauth and zope.security was also almost done, and Ramon promised to continue working on that to get the server ready for their in-house projects.
  • Workflows and their integration are still to be done. We planned to try repoze.workflow at first, and if that's not a fit, then look again into porting DCWorkflow or other 3rd party libraries.
  • Optimization for asyncio still needs more work, once the basic CRUD-features are in place.

So, that was a lot of checkboxes ticked in a single sprint, really something to be proud of. And as if that wasn't enough, an overlapping Plone sprint in Berlin pushed the Python 3.5 upgrades of our stack even further, my favorite result being a helper tool for migrating Python 2 ZODB databases to Python 3. These two sprints really transformed the nearing end-of-life of Python 2 from a threat into an opportunity for our community, and confirmed that Plone has a viable roadmap well beyond 2020.

Personally, I just cannot wait for a suitable project with Dexterity based content types on a modern asyncio based HTTP server, or the next chance to meet our wonderful Catalan friends! :)

About XPath like tools for JSON

It bothers me that I have been processing JSON manually while XML has standard tools like XPath and XSLT. There must be better tools for JSON too. Here are the ones I found for Python, and some observations about them.


ObjectPath
  • Powerful query language
  • Reminds me of write once regexps
  • Focused more on command line usage than module in the tutorial
  • Doesn't support Python 3


dpath
  • Simple API. Maybe too simple.
  • Good examples in README
  • No other dependencies
  • Supports Python 3


jsonpath-rw
  • Rewrite of the older jsonpath library, which is a port of the JavaScript version
  • Some minor dependencies
  • Supports Python 3


If my requirements are simple, I would probably go for dpath or jsonpath-rw (in that order). If I need some heavy lifting, I would go with jq. ObjectPath has the nicest web pages, but the lack of Python 3 support is a show-stopper nowadays. There doesn't seem to be a clear winner at the moment (like Requests for HTTP clients or SQLAlchemy for ORMs). Your mileage may vary.
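To make the comparison concrete, here is roughly the kind of lookup all of these libraries replace manual traversal with, sketched as a toy helper (get_path and the sample document are my own illustration for this article, not the API of dpath, jsonpath-rw or ObjectPath):

```python
import json

def get_path(data, path, sep="/"):
    """Toy dpath-style lookup: walk nested dicts and lists along a path."""
    node = data
    for key in path.strip(sep).split(sep):
        if isinstance(node, list):
            node = node[int(key)]   # numeric segments index into lists
        else:
            node = node[key]        # other segments key into dicts
    return node

doc = json.loads('{"site": {"pages": [{"title": "Home"}, {"title": "News"}]}}')
print(get_path(doc, "/site/pages/1/title"))  # News
```

The real libraries add wildcards, filtering and recursive search on top of this, which is exactly the part that gets tedious to hand-roll.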

The perfect excuse for acceptance testing

It may surprise some that open source, as revolutionary a phenomenon as it has been, is actually a very conservative way to develop software. But that also makes it a great fit for stable organisations like established universities. Successfully participating in open source projects requires long term (and often personal) commitment, but should also result in pleasant surprises.
Photo by Anni Lähteenmäki
One such surprising result of our open source collaboration has been the ability to generate documentation screenshots as a side effect of acceptance testing. Or, to put it another way, we are able to script our documentation screenshots as inline acceptance tests for the end-user features being documented. We are even able to do "documentation driven development": write acceptance criteria into a documentation skeleton and see the documentation complete itself with screenshots as the project develops.

We are able to script our documentation screenshot as inline acceptance tests for the end-user features being documented. We are even able to do "documentation driven development".

For example, once the required build tools and configuration boilerplate are in place, writing our end-user documentation with scripted (and therefore always up-to-date) screenshots may look like this:
Submitting a new application

For submitting a new application, simply click the button and fill the form
as shown in the pictures below:

..  figure:: submit-application-01.png

    Open the form to submit a new application by pressing the button.

..  figure:: submit-application-02.png

    Fill in the required fields and press *Submit* to complete.

..  code:: robotframework

    *** Test Cases ***

    Show how to submit a new application
        Go to  ${APPLICATION_URL}

        Page should contain element
        ...  css=input[value="New application"]
        Capture and crop page screenshot
        ...  submit-application-01.png
        ...  css=#content

        Click button  New application

        Page should contain  Submit new application
        Page should contain element
        ...  css=input[value="Submit"]

        Input text  id=form-widgets-name  Jane Doe
        Input text  id=form-widgets-email  jane.doe@example.com
        Capture and crop page screenshot
        ...  submit-application-02.png
        ...  css=#content

        Click button  Submit
        Page should contain  New application submitted.
This didn't become possible overnight, and it would not have been possible without ideas, contributions and testing from the community. It all started almost by accident: a crucial piece between our then-favourite Python testing framework and Robot Framework based cross-browser acceptance testing with Selenium was missing. We needed that piece so that one of our interns could test their project, so we chose to implement it; many parts clicked together, and a few years later we had this new development model available in our toolbox.
The specific technology for writing documentation with acceptance-testing-based screenshots is a real mashup in itself:
  • The final documentation is built with Sphinx, a very popular software documentation tool written in Python.
  • The extensibility of Sphinx is based on a plain text formatting syntax called reStructuredText and its mature compiler implementation Docutils, which is also written in Python.
  • Together with a Google Summer of Code student, whom I mentored, we implemented a Sphinx-plugin to support inline plain text Robot Framework test suites within Sphinx-documentation.
  • In Robot Framework test suites, we can use its Selenium-keywords to test the web application in question and capture screenshots to be included in the documentation.
  • We also implemented a library of convenience keywords for annotating and cropping screenshots by bounding box of given HTML elements.
  • For Plone, with a lot of contributions from its friendly developer community, a very sophisticated Robot Framework integration was developed, enabling a complete Plone server with app specific test fixtures to be set up and torn down directly from Robot Framework test suites with a few keywords.
  • Finally, with help from the Robot Framework core developers, new reStructuredText support for Robot Framework was implemented, which made it possible to also run the written documentation with scripted screenshots as a real test suite with Robot Framework's primary test runner (pybot).
Once you can script both the application configuration and the screenshots, fun things become possible. For example, here's an old short scripted Plone clip presenting all the languages supported by Plone 4 out of the box. Only minimal editing was required to speed up the clip and add the ending logo:

Any cons? Yes. It's more than challenging to integrate this approach into the workflows of real technical writers, who don't usually have a developer background. In practice, automated acceptance tests must be written by developers, and reStructuredText is still quite a technical syntax for writing documentation. Therefore, even for us, this toolchain still remains quite underused.

Evolution of a Makefile for building projects with Docker

It's hard to move to GitLab and resist the temptation of its integrated GitLab CI. And with GitLab CI, it's just natural to run all CI jobs in Docker containers. Yet, to avoid vendor lock-in with its integrated Docker support, we chose to keep our .gitlab-ci.yml configurations minimal and do all Docker calls with GNU make instead. This also ensured that all of our CI tasks remain locally reproducible. In addition, we wanted to use official upstream Docker images from the official hub as far as possible.

As always with make, there's a danger that Makefiles themselves become projects of their own. So, let's begin with a completely hypothetical Makefile:

all: test

test:
	karma test

.PHONY: all test

Separation of concerns

At first, we want to keep all Docker related commands separate from the actual project specific commands. This led us to two separate Makefiles: a traditional default one, which expects all the build tools and other dependencies to exist in the running system, and a Docker specific one. We named them Makefile (as already seen above) and Makefile.docker (below):

all: test

test:
	docker run --rm -v $(CURDIR):/build -w /build node:5 make test

.PHONY: all test

So, we simply run a Docker container of the required upstream language image (here Node 5), mount our project into the container and run make for the default Makefile inside the container.

$ make -f Makefile.docker

Of course, the logical next step is to abstract that Docker call into a function, making it trivial to wrap other make targets to run in Docker as well:

make = docker run --rm -v $(CURDIR):/build -w /build node:5 make $1

all: test

test:
	$(call make,test)

.PHONY: all test

Docker specific steps in the main Makefile

In the beginning I mentioned that we try to use the official upstream Docker images whenever possible, to keep our Docker dependencies fresh and supported. Yet, what if we need just minor modifications to them, like installing a couple of extra packages...

Because our Makefile.docker mostly just wraps the make call for the default Makefile into an auto-removed Docker container run (docker run --rm), we cannot easily install extra packages into the container in Makefile.docker. This is the exception where we add Docker related commands to the default Makefile.

There are probably many ways to detect that we are running inside a Docker container, but my favourite is testing for the existence of the file /.dockerenv. So, any Docker container specific command in the Makefile is wrapped with a test for that file, as in:

all: test

test:
	[ -f /.dockerenv ] && npm -g i karma || true
	karma test

.PHONY: all test

Getting rid of the filesystem side-effects

Unfortunately, one does not simply mount a source directory from the host into a container and run arbitrary commands as arbitrary users with that mount in place. (Unless one wants to play the game of having matching user ids inside and outside the container.)

To avoid all issues related to Docker possibly trying to (and sometimes succeeding in) creating files into mounted host file system, we may run Docker without host mount at all, by piping project sources into the container:

make = git archive HEAD | \
       docker run -i --rm -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make $1"

all: test

test: bin/test
	$(call make,test)

.PHONY: all test
  • git archive HEAD writes a tarball of the project git repository's HEAD (latest commit) into stdout.
  • -i in docker run enables stdin in Docker.
  • -v /build in docker run ensures /build to exist in container (as a temporary volume).
  • bash -c "tar x --warning=all && make $1" is the single command to be run in the container (bash with arguments). It extracts the piped tarball from stdin into the current working directory in container (/build) and then executes given make target from the extracted tarball contents' Makefile.

Caching dependencies

One well known issue with Docker based builds is the amount of language specific dependencies your project requires on top of the official language image. We've solved this by creating a persistent data volume for those dependencies and sharing that volume from build to build.

For example, defining a persistent NPM cache in our Makefile.docker would look like this:

CACHE_VOLUME = npm-cache

make = git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test: bin/test
	$(call make,test)

.PHONY: all test

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5
  • The CACHE_VOLUME variable holds the fixed name for both the shared volume and the dummy container that keeps the volume from being garbage collected by docker run --rm.
  • INIT_CACHE ensures that the cache volume is always present (so that it can simply be removed if its state goes bad).
  • -v $(CACHE_VOLUME):/cache in docker run mounts the cache volume into the test container.
  • NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' in docker run sets a make variable NPM_INSTALL_ARGS with arguments to configure cache location for NPM. That variable, of course, should be explicitly defined and used in the default Makefile:

all: test

test:
	@[ -f /.dockerenv ] && npm -g $(NPM_INSTALL_ARGS) i karma || true
	karma test

.PHONY: all test

The cache volume, of course, adds state between builds and may cause issues that require resetting the cache volume when that happens. Still, most of the time, these caches have been working very well for us, significantly reducing the required build time.

Retrieving the build artifacts

The downside of running Docker without mounting anything from the host is that it's a bit harder to get build artifacts (e.g. test reports) out of the container. We've tried both stdout and docker cp for this. In the end we settled on a dedicated build data volume and docker cp in Makefile.docker:

CACHE_VOLUME = npm-cache

make = git archive HEAD | \
       docker run -i --rm -v $(CACHE_VOLUME):/cache \
       -v /build -w /build $(DOCKER_RUN_ARGS) node:5 \
       bash -c "tar x --warning=all && make \
       NPM_INSTALL_ARGS='--cache /cache --cache-min 604800' $1"

all: test

test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD)
test: bin/test
	$(call make,test); \
	status=$$?; \
	docker cp $(BUILD):/build .; \
	docker rm -f -v $(BUILD); \
	exit $$status

.PHONY: all test

INIT_CACHE = \
    docker volume ls | grep $(CACHE_VOLUME) || \
    docker create --name $(CACHE_VOLUME) -v $(CACHE_VOLUME):/cache node:5

# http://cakoose.com/wiki/gnu_make_thunks
BUILD_GEN = $(shell docker create -v /build node:5)
BUILD = $(eval BUILD := $(BUILD_GEN))$(BUILD)

A few powerful make patterns here:

  • DOCKER_RUN_ARGS = sets a placeholder variable for injecting make target specific options into docker run.
  • test: DOCKER_RUN_ARGS = --volumes-from=$(BUILD) sets a make target local value for DOCKER_RUN_ARGS. Here it adds volumes from a container uuid defined in variable BUILD.
  • BUILD is a lazily evaluated make variable (created with the GNU make thunk pattern). It gets its value when it's used for the first time. Here it is set to the id of a new container with a shareable volume at /build, so that docker run ends up writing all its build artifacts into that volume.
  • Because make would stop its execution after the first failing command, we must wrap the docker run call for make test so that we:
    1. capture the original return value with status=$$?
    2. copy the artifacts to host using docker cp
    3. delete the build container
    4. finally return the captured status with exit $$status.

This pattern may look a bit complex at first, but it has been powerful enough to start any number of temporary containers and link or mount them with the actual test container (similarly to docker-compose, but directly in the Makefile). For example, we use this to start and link Selenium web driver containers so we can run Selenium based acceptance tests in the test container on top of the upstream language base image, and then retrieve the test reports from the build container volume.

Blazingly fast code reload with fork loop

The Plone community has long traditions of community driven development sprints, nowadays also known as "hackathons". For new developers, sprints are the best possible places to meet and learn from more experienced developers. And, as always, when enough open-minded developers collide, amazing new things get invented.


One such event was the Sauna Sprint 2011 in Tampere, Finland, organized by EESTEC. During that sprint, starting from an idea by Mikko Ohtamaa and with help from the top Zope and Plone developers of the time, we developed a fast code reloading tool, which has significantly sped up our Plone-related development efforts ever since.

So, what was the problem? Plone is implemented in Python, a dynamically interpreted language, which already requires no compilation step between changing code and seeing the change after a service restart. Yet Plone has a huge set of features, a large codebase, and therefore a long restart time. And when you are not doing only TDD, but also want to see the effects of code changes in the running software (or re-run acceptance tests), the restart time really affects development speed. Python did have its own ways of reloading code, but there were corner cases where everything was not really reloaded.

While our tool is strictly specific to Plone, the idea is very generic and language independent (and, to be honest, we borrowed it from the developer community of another programming language). Our tool implemented and automated a way to split the code loading in Plone startup into two parts: the first part loads all the common framework code, and the second part only our custom application code. And by loading, we really mean loading into process memory. Once the first part is loaded, we fork the process and let the new forked child process load the second part. Every time a change is detected in the code, we simply kill the child process and fork a new one with clean memory. What could possibly go wrong?
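The property that makes this work is that a forked child gets its own copy-on-write view of the parent's memory, so whatever the child loads never leaks back into the preloaded parent. A minimal stdlib sketch of that isolation (POSIX only; the function and dict names are my own illustration, not part of the actual tool):

```python
import os

def load_framework():
    # Stands in for the slow part: the large, shared framework code,
    # loaded into process memory exactly once.
    return {"framework": "loaded"}

def reload_once(state):
    """Fork a child that 'loads' app code on top of the inherited state."""
    pid = os.fork()
    if pid == 0:
        # Child: this mutation only touches the child's copy-on-write memory.
        state["app"] = "loaded"
        os._exit(0)
    os.waitpid(pid, 0)
    # Parent: the child's memory died with it; our preloaded state is
    # pristine, so the next "reload" can fork again from clean memory.
    return "app" in state

if __name__ == "__main__":
    state = load_framework()
    print(reload_once(state))  # False - the parent state is untouched
```

In the real tool the child is a full Plone process that imports the custom add-on code and serves requests until a file change is detected, at which point it is killed and a fresh child is forked.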

That was almost five years ago, and we are still happily using the tool. Even better, we integrated the reloading approach into a volatile Plone development server with a pre-loaded test fixture: each change to the code or the fixture restarts the server, reloading the fixture and the acceptance tests for it. No more time wasted when another developer continues from where you left off.


From Plone 2, 3, or 4 to Plone 5 - Highlighting 5 Major Changes


 Plone at University of Jyväskylä since 2004

We have used the Plone content management system at the University of Jyväskylä for almost 12 years now.
Our main website runs on Plone and serves over 200 000 visitors and 2 000 000 page views per month. In addition, we have over 80 other services built on Plone, using its built-in permission, workflow and content management features. These services include e.g. the Moniviestin video publishing platform, the Koppa study material portal, Opiskelijankompassi and a wide array of different kinds of forms, some combined with our payment services. And if you didn't know, Plone is open source software.

Read more about Plone usage @ JYU

In this article I will discuss how we are migrating from older Plone versions to the newest one, and how the UI of Plone radically changes in version 5.

Plone 2 and 3 and 4 work well - why upgrade to Plone 5?

With such a history with Plone, there are still some sites and services that have been running on older Plone versions (2.1, 2.5 and 3.3) since 2006. After 2010 we have used only Plone 4.1 and up, the current version on most sites being Plone 4.3.x.

For the record, these old sites have been generally rock solid, and there have been security patches even for Plone 2 versions.
However, we now want to migrate these sites to Plone 5. There are several good reasons. 
  1. Easier maintenance: We want to get rid of old versions to make our maintenance job easier. To be able to maintain several services with few people, you need to optimize.
But why didn't we migrate these sites to Plone 4.3? Why wait for Plone 5, which was released in late 2015? Well, to be honest, we had more pressing projects to finish, but that's not all there is to it.

Why go to Plone 5?
  1. Plone 5 is fun. The user interface is modern and easy to use, and the base template is responsive out of the box. It is lighter and faster than previous versions. There is a cool layout system called Mosaic for customized, drag-and-drop layouts.
  2. The customers of the old sites wanted to renew their services - it was a good time to do so with Plone 5.

Enter Plone 5 - Listing 5 Major Changes

There are many, and to get a complete list, check out https://plone.com/5

Here I will highlight 5 important changes compared to previous versions, based on my experience with our new Plone 5 sites.

1. HTML5 Responsive Theme

"Plone 5's new default theme, Barceloneta, is responsive out the box to work with the full range of mobile devices and is written using HTML5 and CSS3."

See the difference with this example:

This is a capture from www.psykonet.fi running Plone 3 (which we haven't touched in 7 years, but are now migrating to Plone 5).

The site works OK, but there's no optimization for mobile devices.
Notice how the page is cut off on the right on smaller screens.

In Plone 5 this has been taken care of. The default theme is clean, responsive and easy to modify. At our university the default theme will go a long way: just change the logo (through your browser), change some colors, and that's it:

Theming Plone 5 is easy.
And as said, the default theme is responsive out-of-the-box, so here's the same page in a smaller screen:
Plone 5 is responsive

2. "Green Bar" Replaced with More Powerful Toolbar

"The new toolbar consolidates the personal menu and Plone's longstanding "green bar." The toolbar can be positioned on the left or top of the browser window. With its optional icon text labels, the toolbar gives editors more screen space to focus on what matters: their content."

Here is an example of the Plone 3 green toolbar. It's located in the content area, and at the top right there's a personal bar.

The toolbar in Plone 3 is in the middle.

And here is the same site on Plone 5, with the new toolbar on the left:

Plone 5 toolbar is on the left. Talk about WYSIWYG!

What you see is what you get in this case. All the Plone features are there, but the editing features are neatly separated from content.

At first (like, for the first 10 seconds!) the new toolbar made me feel a little out of place with its way of adding and editing, but to be honest, out with the old and in with the new!

We presented Plone 5 briefly in our content managers' seminar this April, and the response was generally positive - "When will you upgrade our departmental site to Plone 5? We volunteer!"


3. New TinyMCE 4 web editor

"Plone 5 comes with TinyMCE 4, the gold standard in WYSIWYG web editors. Coupled with new tools to insert links and images, it's easier than ever to craft and customize content."

One of the most important aspects of any CMS is the content editor. Plone 2-3 had Kupu, then along came TinyMCE 1.3.x. Both were quite fine (though different versions of Internet Explorer struggled with them - but then again, when hasn't IE struggled with something...).

Plone 5 uses the new TinyMCE 4 series visual HTML editor: https://www.tinymce.com/

Kupu at Plone 3:

Kupu editor at Plone 3.2.2

TinyMCE 1.3.x at Plone 4.3.3:

TinyMCE 1.3 at Plone 4.3.

TinyMCE 4 in Plone 5 is clean and robust.
TinyMCE 4 at Plone 5

4. "Faster, Harder, Scooter" - Plone 5 is fast 

"Plone 5 sites are 15%-20% faster now with the new Chameleon templating engine, a fully backwards-compatible replacement that works with existing templates."

As I have been working with some new Plone 5 sites, I've noticed they load faster than before, even with thousands of pages of content. Filtering a content view with a thousand objects loads in a split second.

I'm really looking forward to seeing how Plone 5 performs with our main website.

5. Bulk Content Editing

"Perform bulk operations to add multiple files and images at once, assign keywords, apply workflow, rename, cut, copy, and delete. Use query filters to quickly find, sort, reorder, and select specific content."

This is a huge improvement over previous versions. Our websites contain hundreds of thousands of content items, and in some cases finding a certain file is difficult (see the example below).

Over 1000 items in one folder. Plone 4.3.3. No way to filter.

And here is the new Contents view of Plone 5:
Plone 5 folder contents - totally revamped

In Plone 5 there are many, many new or improved features in the folder contents view:
  • Upload multiple files
  • Show and hide columns
  • Change the number of shown items
  • Filter items by name (really fast and useful!)
  • Rearrange columns
  • Bulk rename, copy and paste items
  • Bulk change date or tag properties on items
  • Filter through chosen items


Here I have listed the 5 most visible changes in Plone 5. And I haven't even mentioned the Mosaic layout, which allows content editors to easily create customised layout views for content using just your browser!

But as said, there's much more to Plone 5, so check out https://plone.com/5

Examples of Plone 5 sites

Even though most of these are in Finnish, I will provide links to some Plone 5 sites we now have:

In addition to these, we currently have 3 other migrations underway and one preview site to be published when its content is ready.

Looking forward to more Plone 5 migrations this year!