Sometimes we have folders or documents which should never be deleted.

But our customers/users have deleted them, and that causes errors in the application.

So we decided to create a new package to prevent delete and/or move actions.


How it works

We add a marker interface to objects which cannot be deleted or renamed (= moved).

We subscribe all IItem objects to OFS.interfaces.IObjectWillBeRemovedEvent and OFS.interfaces.IObjectWillBeMovedEvent.

When one of these events is received and the object is marked as not deletable or not movable, we raise an exception and the object is not deleted or moved.
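
As a sketch, such a subscriber looks roughly like this (simplified: the real package defines its own exception and also handles the move/rename case):

from OFS.interfaces import IItem
from OFS.interfaces import IObjectWillBeRemovedEvent
from zope.component import adapter

from collective.preventactions.interfaces import IPreventDelete

# registered via a <subscriber /> directive in configure.zcml
@adapter(IItem, IObjectWillBeRemovedEvent)
def object_will_be_removed(obj, event):
    # refuse the delete as soon as the object carries the marker
    if IPreventDelete.providedBy(obj):
        raise Exception('This object cannot be deleted')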


In the future, we plan to add a dashboard giving a view of all contents carrying these marker interfaces, to make them easier to manage.


You can also mark some contents as not deletable (for example) like this in your setuphandler:

from collective.preventactions.interfaces import IPreventDelete
from plone import api
from zope.interface import alsoProvides

def post_install(context):
    obj = api.content.get('/Plone/content-not-deleteable')
    alsoProvides(obj, IPreventDelete)


Now you can have a look at the source code of the package and try it.



New package: collective.geo.faceted

Why did we create collective.geo.faceted?

We use the collective.geo suite for geolocation in some of our projects. We also use eea.facetednavigation to easily find content in our websites and applications.

So we decided to add a map view for eea.facetednavigation, and we created collective.geo.faceted.

How it works


We prefer Leaflet to OpenLayers, because it seems easier for us to use.

So we decided to use the collective.geo.leaflet machinery for map creation.


We use the GeoJSON standard to add points on the map. It is a well-known standard for describing geolocated content.

The view created for "faceted" simply updates the GeoJSON, and this GeoJSON in turn updates the map. For the generation of the GeoJSON, we extended the collective.geo.json view.
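
For reference, a minimal GeoJSON feature collection for a single point looks like this (shown as a Python dict; the properties keys are illustrative, not necessarily the exact ones the view emits):

geojson = {
    'type': 'FeatureCollection',
    'features': [{
        'type': 'Feature',
        'geometry': {'type': 'Point', 'coordinates': [4.35, 50.85]},
        'properties': {'title': 'My content',
                       'url': 'http://example.com/my-content'},
    }],
}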


The map is added in a viewlet dedicated to the faceted view. We chose to place the viewlet outside the 'content-core' slot (the content-core slot is used by faceted navigation to update the contents automatically). Indeed, each technology (faceted and map) uses its own JavaScript, so it seems better not to mix the two.
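
A sketch of what such a viewlet can look like (the class, the '@@geo-json' view name and the markup are illustrative assumptions, not the package's exact code):

from plone.app.layout.viewlets.common import ViewletBase

class FacetedMapViewlet(ViewletBase):

    def render(self):
        # render only the map container: Leaflet fills it in, and the
        # faceted JavaScript refreshes the GeoJSON layer, never this markup
        return (u'<div id="faceted-map" data-geojson="%s/@@geo-json">'
                u'</div>' % self.context.absolute_url())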

Plone objects are shown on the map, and they are updated thanks to a few lines of code like these (a condensed sketch; the exact binding lives in the package):

jQuery(document).ready(function() {
    jQuery(Faceted.Events).bind(Faceted.Events.QUERY_CHANGED, update_map);
});

This code catches the "faceted" events fired when faceted navigation has modified its criteria, and uses the update_map JavaScript function to load the new GeoJSON on the map.

An image is better than words:


This package is tested on Plone 4 with the default Plone content types.

Maybe in the future we should add a Plone 5 profile (e.g. example.p4p5), or create a plone4 branch on github and make the master branch target Plone 5.


Migrate Archetypes content types to Dexterity

With this migration, I had three goals:

  • make my first step towards a migration to Plone 5
  • use a multilingual site with Plone 4, and
  • use plone.app.event (for recurrence, events without end date, ...).

I also didn't want to make big visual changes for my clients. So I decided not to use plone.app.widgets at this moment; I prefer to use it with Plone 5. So I will pin plone.app.contenttypes to 1.1.x: indeed, newer versions of plone.app.contenttypes use plone.app.widgets.

For this migration, I had to do two 'steps': I made a 'custom migration' and I added a profile with an import step. I will explain both below.

For the migration, I first had to install and pin some packages in my buildout:

  • Add plone.app.contenttypes to my buildout (with the 1.1.x version/branch)
  • Pin plone.app.event to the 1.1.x version/branch
  • Pin plone.outputfilters to 2.1.2 for this problem

Creation of a new profile

I chose to add a new profile called 'migratetodx' to run the migration. I prefer a new profile to an upgrade step, but all of the migration could also be done in an upgrade step.

So I started by registering the new profile in configure.zcml, along these lines:

    <genericsetup:registerProfile
        name="migratetodx"
        title="cpskin.migration: migrate at to dx"
        description="Updates CPSkin to dexterity"
        directory="profiles/migratetodx"
        provides="Products.GenericSetup.interfaces.EXTENSION"
        />

I created the folder profiles/migratetodx and added a metadata.xml file like this:

<?xml version="1.0"?>
<metadata>
  <version>1</version>
</metadata>


Add behaviors

Lead image

For the lead image, all the work is already done in plone.app.contenttypes. I just had to add the behavior to the types you would like to migrate.

So in my profile, I added files like profiles/migratetodx/types/Folder.xml:

<?xml version="1.0"?>
<object name="Folder">
  <property name="behaviors" purge="false">
    <element value="plone.app.contenttypes.behaviors.leadimage.ILeadImage"/>
  </property>
</object>

Other collective packages

I added other packages from collective: collective.geo.*, collective.plonetruegallery and eea.facetednavigation.

So I added the other behaviors for my Folder type in Folder.xml:

<?xml version="1.0"?>
<object name="Folder">
  <property name="behaviors" purge="false">
    <element value="plone.app.contenttypes.behaviors.leadimage.ILeadImage"/>
    <element value="collective.geo.behaviour.interfaces.ICoordinates"/>
    <element value="eea.facetednavigation.subtypes.interfaces.IPossibleFacetedNavigable"/>
    <element value="collective.plonetruegallery.interfaces.IGallery"/>
  </property>
</object>

Add import step

I created a profiles/migratetodx/import_steps.xml:

<?xml version="1.0"?>
<import-steps>
  <!-- the handler module path is an assumption -->
  <import-step id="cpskin.migration.migratetodx"
               handler="cpskin.migration.setuphandlers.migratetodx"
               title="cpskin.migration: import step for migration">
    <dependency step="typeinfo" />
  </import-step>
</import-steps>

And I use this import step to prepare the migration, start it (with the migration view) and fix image scales.

Problems with memoize

We use caching for our sites and applications, and during the migration I saw that I had some problems with the cache and with plone.memoize. We decided to use an empty plone.memoize cache, and to keep this cache empty with the following code in the import step.

In the handler module, I used this code:

from plone import api
from zope.annotation.interfaces import IAnnotations


class EmptyMemoize(dict):
    # a cache that never stores anything, so plone.memoize
    # always recomputes
    def __setitem__(self, key, value):
        pass


def migratetodx(context):
    if context.readDataFile('cpskin.migration-migratetodx.txt') is None:
        return
    portal = api.portal.get()
    request = getattr(portal, 'REQUEST', None)
    annotations = IAnnotations(request)
    annotations['plone.memoize'] = EmptyMemoize()

Fix image scales

We also had to adapt to the new way of getting image scales. I was inspired by this code: it gets all richtext content and checks whether image_[preview] needs to be changed to @@images/image/[preview].

I also use this code to get all static portlets and update them.

import logging

from zope.component import getMultiAdapter
from zope.component import getUtility
from plone.portlets.interfaces import IPortletManager
from plone.portlets.interfaces import IPortletAssignmentMapping

logger = logging.getLogger('cpskin.migration')

# IMAGE_SCALE_MAP maps the old scale names to the new ones
# (defined elsewhere in the module)

def image_scale_fixer(text):
    if text:
        for old, new in IMAGE_SCALE_MAP.items():
            # replace old scale names with new ones
            text = text.replace(
                '@@images/image/{0}'.format(old),
                '@@images/image/{0}'.format(new),
            )
            # replace AT traversing scales
            text = text.replace(
                '/image_{0}'.format(old),
                '/@@images/image/{0}'.format(new),
            )
    return text

def fix_portlets_image_scales(obj):
    managers = [u'plone.leftcolumn', u'plone.rightcolumn']
    for manager in managers:
        column = getUtility(IPortletManager, manager)
        mappings = getMultiAdapter((obj, column), IPortletAssignmentMapping)
        for key, assignment in mappings.items():
            # skip possibly broken portlets here
            if not hasattr(assignment, '__Broken_state__'):
                if getattr(assignment, 'text', None):
                    clean_text = image_scale_fixer(assignment.text)
                    assignment.text = clean_text
            else:
                logger.warn(u'skipping broken portlet assignment {0} '
                            'for manager {1}'.format(key, manager))


Custom migration

Migrate an extended Archetypes field

I migrated an extended Archetypes field named 'hiddentags'. For that, I used an ICustomMigrator adapter. I added this line in my configure.zcml:

<adapter name="mymigrator" factory=".migrate.MyMigrator" />

And I created a MyMigrator class with a "migrate" method in my migrate.py file:

from plone.app.contenttypes.migration.migration import ICustomMigrator
from zope.component import adapter
from zope.interface import implementer
from zope.interface import Interface


@implementer(ICustomMigrator)
@adapter(Interface)
class MyMigrator(object):

    def __init__(self, context):
        self.context = context

    def migrate(self, old, new):
        # carry over the extended 'hiddenTags' AT field
        if getattr(old, 'hiddenTags', None):
            new.hiddenTags = old.hiddenTags

Migrate marker interfaces

I had some marker interfaces on our content. In this snippet, I show how I migrated the eea.facetednavigation markers:

from eea.facetednavigation.settings.interfaces import IDisableSmartFacets
from eea.facetednavigation.settings.interfaces import IHidePloneLeftColumn
from eea.facetednavigation.settings.interfaces import IHidePloneRightColumn
from eea.facetednavigation.subtypes.interfaces import IFacetedNavigable
from eea.facetednavigation.subtypes.interfaces import IFacetedWrapper
from zope.interface import alsoProvides

interfaces = [
    IDisableSmartFacets,
    IHidePloneLeftColumn,
    IHidePloneRightColumn,
    IFacetedNavigable,
    IFacetedWrapper,
]
for interface in interfaces:
    if interface.providedBy(old):
        alsoProvides(new, interface)

Migrate faceted criteria

On our site, we use faceted navigation, and we had to migrate the criteria of all our faceted views. I did it in the custom migration with this code:

if IFacetedNavigable.providedBy(old):
    # Criteria comes from eea.facetednavigation.criteria.handler
    criteria = Criteria(new)
    # then each criterion from Criteria(old) is copied onto the new object

Migrate objects with coordinates

Again in the migrate method, I added this snippet:

from collective.geo.behaviour.behaviour import Coordinates

old_coord = Coordinates(old).coordinates
new_coord = Coordinates(new)
new_coord.coordinates = old_coord

Setting the time zone on new events

When I wrote this code, there was still a bug in the 1.1.0 releases: the time zone was not set on each event during the migration, so I forced it:

from plone.app.event.dx.interfaces import IDXEvent

# 'timezone' holds the site's default time zone (defined earlier)
if IDXEvent.providedBy(new):
    new.timezone = timezone


I learnt a lot about Plone migrations all along this work. Each migration depends on the plugins you use.

And here is the link to my 'import step' script. I hope you can reuse some pieces of it.

Probes into Plone and Zope


In all our teams, we need to check that Plone behaves well. We need some probes to be sure our Plone sites run smoothly!

With Jean-François Roche, we started by having a look at Products.ZNagios. This product allows you to get some probes from Zope; you can ask your (live) instance for:

  • Number of unresolved conflicts on Zope
  • CPU usage
  • DB sizes
  • Memory percent
  • Uptime of Zope
  • ...

You can access the probes through a thread which listens on Zope on port 8888 (in this config). You just have to add zope-conf-additional to your buildout like this:

zope-conf-additional =
    <product-config five.z2monitor>
        bind 0.0.0.0:8888
    </product-config>

If you want more information on this, you can read the documentation of the five.z2monitor package.


I created collective.monitor to add some probes to Plone, in the same spirit as Products.ZNagios. We use a zope interface to register all the probes (zc.z3monitor.interfaces.IZ3MonitorPlugin); a sketch of such a probe follows the list. In this package, I added these probes:

  • count users
  • count valid users (users who logged in during the last 3 months)
  • check if SMTP is set up
  • last login time of a user
  • last time a Plone or Zope object was modified
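
A probe is just a named utility that writes its result back on the monitor connection. Here is a minimal sketch of the pattern (this uptime example and its registration are illustrative, not collective.monitor's actual code):

import time

# registered as a named utility in configure.zcml, e.g.:
# <utility component=".probes.uptime"
#          provides="zc.z3monitor.interfaces.IZ3MonitorPlugin"
#          name="uptime" />

START = time.time()

def uptime(connection):
    # the monitor server calls the utility with the connection;
    # whatever we write back is what netcat/telnet will show
    connection.write('%.2f\n' % (time.time() - START))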

How to use it

Add collective.monitor to your buildout, in the eggs and zcml sections of your instance:

eggs +=
    collective.monitor
zcml +=
    collective.monitor

And also add the zope-conf-additional lines explained above.

After this little config, you can access the probes in different ways:

1. bin/instance

After starting the instance (bin/instance fg), you can access the probes with:

./bin/instance monitor dbinfo main
./bin/instance monitor objectcount
./bin/instance monitor stats
./bin/instance monitor help

2. netcat

After starting the instance (bin/instance fg), you can access the probes with:

echo 'dbinfo main' | nc -i 1 localhost 8888

3. telnet

After starting the instance (bin/instance fg), you can access the probes with:

$ telnet localhost 8888
Connected to localhost.
Escape character is '^]'.
2015/08/11 11:49:48.540729 GMT+2
Connection closed by foreign host.


With this package, you can collect stats about your instances.

We use diamond to collect the information from the probes and push it to graphite.

It's very helpful for knowing the state of our infrastructure.


Geonode, Geoserver, Postgis with Docker


We have some clients who need a framework for map creation. We had a look at the market of open source solutions for this kind of feature, and we became fans of the geonode project.

We are getting familiar with Docker, so we decided to create Docker images for Geonode. We wanted to separate geoserver and geonode: the goal is to be able to move geoserver or geonode to a distinct server if the load increases. So we created different images for Geonode and Geoserver. We also use an Nginx image to create the link between Geonode and Geoserver (and a Postgis image for development).

For this project, we customised geonode. We used the django template for project creation, as explained in the documentation:

$ django-admin startproject imio_geonode --template= -e py,rst



We use 2 different docker-compose.yml files, one for production and one for development.

Differences are:

  • entrypoint and command options are defined in the Dockerfile for prod and in docker-compose.yml for dev (we override the options in dev).
  • image vs build: build is used in dev; for production, we build the images and use the image option.
  • postgis: a postgis image is used in dev; there is no postgis image in prod, where we use a dedicated database cluster.

This is the development docker-compose.yml:
postgis:
  build: Dockerfiles/postgis/
  hostname: postgis
  volumes:
    - ./postgres_data:/var/lib/postgresql

geoserver:
  build: Dockerfiles/geoserver/
  hostname: geoserver
  links:
    - postgis
  ports:
    - "8080:8080"
  volumes:
    - ./geoserver_data:/opt/geoserver/data_dir

geonode:
  build: .
  hostname: geonode
  links:
    - postgis
  ports:
    - "8000:8000"
  volumes:
    - .:/opt/geonode/
  entrypoint:
    - /usr/bin/python
  command: manage.py runserver 0.0.0.0:8000

nginx:
  image: nginx:latest
  ports:
    - "80:80"
  links:
    - geonode
    - geoserver
    - postgis
  volumes:
    - ./nginx-default.conf:/etc/nginx/conf.d/default.conf



We need an nginx image to make the link between geoserver and geonode. With Docker >= 1.8 and docker-compose >= 1.4, a new 'network' option has arrived which seems to deprecate this nginx trick.

The default Nginx image is used with this config:

upstream geonode {
    server geonode:8000;
}
upstream geoserver {
    server geoserver:8080;
}

server {
        listen   80;
        client_max_body_size 128m;

        location / {
            proxy_pass         http://geonode;
            proxy_set_header   Host $http_host;
            proxy_set_header   X-Real-IP       $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /geoserver {
            proxy_pass         http://geoserver;
            proxy_set_header   Host $http_host;
            proxy_set_header   X-Real-IP       $remote_addr;
            proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_log /var/log/nginx/error.log warn;
        access_log /var/log/nginx/access.log combined;
}

Nginx is useful for the login connection from geoserver to geonode, and for uploading layers (etc.) from geonode to geoserver.

For production, we add rules for the static and uploaded folders:

location /static {
    alias /opt/geonode/static;
}
location /uploaded {
    alias /opt/geonode/uploaded;
}



We use the postgis docker image for development only. Indeed, we have a dedicated server for our databases in production. For dev, we use this Dockerfile:

FROM postgres:9.4
RUN apt-get update && apt-get install -y postgresql-9.4-postgis-2.1
RUN mkdir /docker-entrypoint-initdb.d
COPY ./ /docker-entrypoint-initdb.d/

With the postgres images, all .sh files in the docker-entrypoint-initdb.d folder are run during postgres initialisation. The db init script for geonode looks like this (the CREATE USER statement is a reconstruction):

POSTGRES="gosu postgres"

$POSTGRES postgres --single -E <<EOSQL
CREATE USER geonode WITH PASSWORD 'geonode';
EOSQL

$POSTGRES postgres --single -E <<EOSQL
CREATE DATABASE geonode OWNER geonode ;
CREATE DATABASE "geonode-imports" OWNER geonode ;
EOSQL

$POSTGRES pg_ctl -w start
$POSTGRES psql -d geonode-imports -c 'CREATE EXTENSION postgis;'
$POSTGRES psql -d geonode-imports -c 'GRANT ALL ON geometry_columns TO PUBLIC;'
$POSTGRES psql -d geonode-imports -c 'GRANT ALL ON spatial_ref_sys TO PUBLIC;'



I created a Docker image from the 'tomcat:8-jre7' image, and installed geoserver from its war file.

The Dockerfile looks like this:

FROM tomcat:8-jre7

RUN apt-get update && apt-get install -y wget
RUN wget -O /usr/local/tomcat/webapps/geoserver.war
RUN apt-get remove -y wget

ENV GEOSERVER_DATA_DIR /opt/geoserver/data_dir




We use gunicorn for production.


We use 'python manage.py runserver' for development. As you can see in the docker-compose.yml file, the source code is added into the docker image with a volume; thus when you change code on your local computer, it's directly updated in the docker container.

The Dockerfile:

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y build-essential && \
    apt-get install -y libxml2-dev libxslt1-dev libjpeg-dev gettext git python-dev python-pip libgdal1-dev && \
    apt-get install -y python-pillow python-lxml python-psycopg2 python-django python-bs4 \
        python-multipartposthandler transifex-client python-paver python-nose \
        python-django-nose python-gdal python-django-pagination python-django-jsonfield \
        python-django-extensions python-django-taggit python-httplib2

RUN mkdir -p /opt/geonode
WORKDIR /opt/geonode

ADD requirements.txt /opt/geonode/
RUN pip install -r requirements.txt
ADD . /opt/geonode

COPY ./ /

Settings and local_settings

You have to update your local_settings. The settings have to match the settings in the Dockerfiles and in docker-compose.yml. Here is an example of local_settings.py for OGC_SERVER and DATABASES:

SITENAME = 'GeoNode'
SITEURL = 'http://localhost'
GEOSERVER_URL = SITEURL + '/geoserver/'

# OGC (WMS/WFS/WCS) Server Settings
OGC_SERVER = {
    'default': {
        'BACKEND': 'geonode.geoserver',
        'LOCATION': '',  # Docker IP
        'USER': 'admin',
        'PASSWORD': 'admin',
        # Set to name of database in DATABASES dictionary to enable
        'DATASTORE': 'datastore',
    }
}

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'geonode',
        'USER': 'geonode',
        'PASSWORD': 'geonode',
        'HOST': 'postgis',
        'PORT': 5432,
    },
    'datastore': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'geonode-imports',
        'USER': 'geonode',
        'PASSWORD': 'geonode',
        'HOST': 'postgis',
        'PORT': 5432,
    },
}

You also have to set up the link between Geoserver and the Geonode login, using the Docker IP (in geoserver_data/security/auth/geonodeAuthProvider/config.xml).




You can now start your geonode project with a simple:

$ docker-compose up

And run the Django syncdb for your database:

$ docker-compose run --rm --entrypoint='/usr/bin/python' geonode manage.py syncdb

New recipe, collective.recipe.buildoutcache


This recipe generates a buildout-cache archive. We use a pre-generated buildout-cache folder to speed up buildout runs. The archive contains a single buildout-cache folder which holds 2 folders:

  • eggs: contains all the eggs used by your buildout, except the eggs which have to be compiled.
  • downloads: contains the zipped eggs which must be compiled (such as AccessControl, lxml, Pillow, ZODB, ...)

Before starting a buildout, we download and extract the buildout-cache and use it in our buildout. We add the eggs-directory and download-cache parameters to the buildout section like this:


eggs-directory = buildout-cache/eggs
download-cache = buildout-cache/downloads


Use case

In our organization, we have a Jenkins server. We created a Jenkins job which generates buildout-cache.tar.bz2 every night and pushes it to a file server.

We also use Docker; our Dockerfiles download and untar the buildout-cache before starting buildout, so the creation of a docker image becomes much faster!


How it works

Simply add a part with this recipe to your buildout project, like this:


[buildout]
parts =
    ...
    makebuildoutcachetargz

[makebuildoutcachetargz]
recipe = collective.recipe.buildoutcache

You can use some parameters to change the name of your archive, to use another working directory than ./tmp, or to use another buildout file than buildout.cfg for the egg downloads; see the documentation.

For the recipe installation, you can run this command line:

./bin/buildout install makebuildoutcachetargz

And start the recipe script (named after the part):

./bin/makebuildoutcachetargz


Use collective.recipe.buildoutcache and decrease the time lost in your buildouts ;)

How I made my wedding site

I decided to make a website for my wedding, with a gift list, my honeymoon, a presentation of my witnesses and so on.

I was looking for a little CMS with a "gift list" (an online shop) which could be installed on cheap and reliable hosting (and this is where I lost Plone).

Pyramid vs Django

I started by looking at Pyramid (because I'm a Plone/Zope dev). I thought of Kotti, but I didn't find a way to easily make a gift list; the project looks cool, but it was maybe a little young for my kind of requirements. I didn't find a good solution for a wedding list in Pyramid.

But I have some experience in Django, and in my daily work we have started to take an interest in Geonode for GIS projects.

-> I started looking at Django!

Django CMS vs Mezzanine

I first looked at Django CMS and a Django CMS e-commerce plugin, but it seems that project is almost dead; the last commits on github made me sceptical.

With a little searching, I found Mezzanine and Cartridge. I tried them and they seemed perfect for my project, so that is what I chose!


My first choice was OVH, because it's very cheap (5€ / month). But after a little research, it turned out to be almost impossible to host a complex Django site there (by complex, I mean a "Mezzanine" Django site, which is not even very complex). I continued searching... and I found Webfaction. They have local pythons, postgres and 600Go of storage for 10€ / month. It looked perfect for me, except that they do not manage domain names directly. So I host my wedding site on webfaction and my domain name at OVH.

Maybe I could have made a heroku Django website, but I was a little afraid of the complexity.


The next step is to create an online shop with Kotti or with Pyramid!

Plone and Docker


In this post, I will try to explain how we put Plone sites into production in our organization (IMIO, Belgium).

For this process, we use some software such as Puppet and Jenkins, but the process itself should be agnostic of these tools.

Short story: when a push is made on Github, Jenkins builds a new image for Docker, pushes this image into a private Docker registry and updates the Docker image on the server.

Docker images

We create Docker images with packer. We build .deb files with buildout, mr.bob and Jenkins. We create the "debian" folder used for .deb file creation with a mr.bob template. We create 3 deb files:

  • plone-core-${version}.deb: contains eggs
  • plone-zeoserver-${version}.deb: contains zeoserver config files
  • plone-instance-${version}.deb: contains instance config files

After creating the deb files, packer uses (and installs) them to create 2 Docker images: one for the zeoserver and one for the instances.

We think this is a good way to have good isolation.

Both images are based on a "base IMIO image", itself based on the ubuntu image from the docker hub. Each image has a size of +/- 530 MB, because we have a lot of plone eggs in our buildout/plone sites.

You could also create a simple Dockerfile which pulls a github repo and runs buildout to create your Docker image.

Once Packer has built the Docker images, Jenkins pushes them into a private Docker registry.

Private registry

For this post, imagine we have a private docker registry at a given url.
We use this private registry to store our images.
Our images are created with the tags latest and YYYYMMDD-JENKINS_JOB_NUMBER (e.g. 20150127-97).
We use a private registry for each environment (staging, production, …) and we copy images between environments: we automatically update the dev and staging environments and, when we see there is no problem, we copy the images to production.

Update production

We use fig to orchestrate our docker containers (the zeo server must be started before the zeo clients). We use a script to update our docker images. This script checks if the currently running docker containers use the latest image. If not, the script downloads the latest image, stops and removes the running containers, and restarts containers from the new images (we use upstart scripts to start the docker daemons).

# INSTANCE_IMAGE, ZEO_IMAGE, INSTANCE_NAME, LATEST and NAME are
# defined earlier in the script
cd /fig/directory; fig pull > /dev/null 2>&1

LATEST_INSTANCE_IMAGE_ID=$(docker images | grep $INSTANCE_IMAGE | grep latest | awk '{print $3}')
LATEST_ZEO_IMAGE_ID=$(docker images | grep $ZEO_IMAGE | grep latest | awk '{print $3}')

TAG_INSTANCE_IMAGE_ID=$(docker images | grep $LATEST_INSTANCE_IMAGE_ID | grep -v latest | awk '{print $2}')
TAG_ZEO_IMAGE_ID=$(docker images | grep $LATEST_ZEO_IMAGE_ID | grep -v latest | awk '{print $2}')

if [ "$TAG_INSTANCE_IMAGE_ID" != "$TAG_ZEO_IMAGE_ID" ]; then
    echo "Error: instance and zeo images do not have the same tag!" 1>&2
    exit 1
fi

RUNNING=$(docker ps | grep $INSTANCE_NAME | awk '{print $2}')
if [ "$RUNNING" != "$LATEST" ]; then
    echo "restarting $NAME"
    stop $NAME
    start $NAME
else
    echo "$NAME up to date"
fi

Storage and backup

We use Docker data containers (called "storage" in our case) for the filestorage, blobstorage and backup folders. We start docker containers with the --volumes-from option. We have to be careful to NEVER delete a storage container (maybe docker could be improved on that).

We configure our buildouts to back up all data into the var/backups folder, and we launch docker with the --volumes-from and -v options for backup and restore. Thanks to the -v option, backups are stored on the server and not in Docker. Later, the backups are synced to our backup server.

With this zeo docker image, it's easy to backup, pack and restore the zodb. In the future, we envision using relstorage instead of zeoserver, but currently there is no DB admin in the company (hint to our boss?).


Docker runs great in production!

I intend to follow docker machine, docker swarm and docker compose. 

Thank you to my colleagues Cédric de Wilde and Jean-François Roche for working with me on setting up our Plone production in Docker.

How I created my blog with Heroku and Plone


In this post, I explain how I used Heroku and the heroku build pack to create a blog. I had been thinking about creating a blog with plone and Heroku since the ploneconf in Bristol and zupo's talk (thanks for his great job).


So I started... I created a blog because a friend (a non-developer) would like to have one, and I thought it was a good occasion to try plone with heroku. I first tried to make a very simple Plone site with only the "Blog Post" object available (I changed my mind later). I created 2 packages, and I also created a buildout.

The buildout was very easy thanks to the documentation of the heroku build pack; I added a heroku.cfg file to my buildout package. This heroku file extends buildout.cfg, with this part for the instance with relstorage:

[instance]
recipe = plone.recipe.zope2instance
relative-paths = true
eggs +=

rel-storage =
    keep-history false
    blob-dir /tmp/blobcache
    shared-blob-dir false
    type postgresql
    host PG_HOST
    dbname PG_DBNAME
    user PG_USER
    password PG_PASS


I already had a free account on Heroku, so I simply pushed my heroku branch, and the build pack automatically added a postgres plugin and started buildout with heroku.cfg... And it was online.

In my policy package, I used and installed these packages:

  • collective.quickupload
  • collective.contentrules.yearmonth
  • plonetheme.bootstrap
  • plone.formwidget.captcha

I still have a little work to do, such as:

  • change the logo
  • improve comments
  • improve the rss feed view (not only title and description in the rss feed)

Now you have no more excuse: create your personal blog with plone and help improve Plone communication and community!
