Pootle uses a caching system, based on Django's caching framework, to improve performance. Configuring it well is an essential part of optimising your Pootle installation: without a well-functioning cache, Pootle can be slow.
Django supports multiple cache backends (methods of storing cache data). You specify which backend to use by changing the value of CACHE_BACKEND in your settings file. For example, to use memcached:
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
Memcached is the recommended cache backend: it provides the best performance and works well with multiprocessing servers such as Apache. It requires the
python-memcached package and a running memcached server; because of these extra dependencies it is not enabled by default.
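Memcached works with multiprocessing servers because every worker process talks to the same external store over a simple text protocol. The toy client/server below is a pure-Python sketch of that "set"/"get" exchange, for illustration only; a real deployment uses the python-memcached client against an actual memcached server:

```python
import socket
import threading

def toy_memcached(server_sock):
    """Minimal in-memory server speaking a subset of the memcached
    text protocol ("set" and "get"), enough to show the exchange."""
    store = {}
    f = server_sock.makefile("rwb")
    while True:
        line = f.readline()
        if not line:
            break
        parts = line.split()
        if parts[0] == b"set":          # set <key> <flags> <exptime> <bytes>
            key, nbytes = parts[1], int(parts[4])
            store[key] = f.read(nbytes)
            f.read(2)                   # consume trailing \r\n
            f.write(b"STORED\r\n")
        elif parts[0] == b"get":        # get <key>
            key = parts[1]
            if key in store:
                f.write(b"VALUE %s 0 %d\r\n%s\r\n"
                        % (key, len(store[key]), store[key]))
            f.write(b"END\r\n")
        f.flush()

client_sock, server_sock = socket.socketpair()
threading.Thread(target=toy_memcached, args=(server_sock,), daemon=True).start()

f = client_sock.makefile("rwb")
f.write(b"set stats 0 0 5\r\nhello\r\n")
f.flush()
set_reply = f.readline()                # b"STORED\r\n"
f.write(b"get stats\r\n")
f.flush()
f.readline()                            # VALUE header line
value = f.readline().rstrip(b"\r\n")    # b"hello"
```

Because the store lives outside the worker processes, every worker sees the same cached value, which is exactly what the in-process local-memory backend cannot guarantee.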
CACHE_BACKEND = 'memcached:unix:/path/to/memcached.sock'
If you don't want Pootle to access memcached over TCP/IP, you can use Unix sockets instead. This is common in hardened installations that use SELinux.
You will need to ensure that memcached is listening on the socket, for example:
memcached -u nobody -s /path/to/memcached.sock -a 0777
CACHE_BACKEND = 'db://pootlecache?max_entries=65536&cull_frequency=16'
Database caching relies on a table in the main Pootle database to store the cached data. This makes it suitable for multiprocessing servers, with the added benefit that the cached data survives a server reboot (unlike memcached), but it is considerably slower than memcached.
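The max_entries and cull_frequency parameters in the URI above control how the cache is pruned: once the number of entries exceeds max_entries, roughly one out of every cull_frequency entries is discarded to make room. The sketch below is a rough approximation of that culling behaviour, not Pootle or Django code:

```python
def cull(entries, max_entries, cull_frequency):
    """Once the cache holds more than max_entries items, drop
    roughly 1/cull_frequency of the keys to make room."""
    if len(entries) <= max_entries:
        return
    for key in list(entries)[::cull_frequency]:  # every Nth key
        del entries[key]

cache = {n: str(n) for n in range(80000)}        # 80000 > max_entries
cull(cache, max_entries=65536, cull_frequency=16)
# 80000 / 16 = 5000 entries culled, 75000 remain
```

A higher cull_frequency therefore means gentler pruning (fewer entries dropped per cull), at the cost of culling more often.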
As of Pootle 2.1.1 this is the default cache backend. On new installs and upgrades the required database table will be created automatically.
Users of older versions who want to switch to the database cache backend need to create the cache table manually with this manage.py command:
./manage.py createcachetable pootlecache
CACHE_BACKEND = 'locmem:///?max_entries=4096&cull_frequency=5'
Up to Pootle 2.1.0 this simpler but less efficient local-memory cache backend was the default. It is not suitable for multiprocess servers like Apache.
Since it uses in-process memory, the cache cannot be updated across processes. Each process therefore ends up with different translation statistics, and users may see different values on consecutive requests. This problem is easily solved by switching to memcached.
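The divergence is easy to picture: each worker process holds its own private cache, so a value written by one worker is simply absent from the others. A minimal sketch (the class and key names here are illustrative, not Pootle's):

```python
class LocMemCache:
    """A per-process, in-memory cache: nothing is shared."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

worker_a = LocMemCache()  # e.g. Apache child process 1
worker_b = LocMemCache()  # e.g. Apache child process 2

worker_a.set("stats:project", "90% translated")
hit = worker_a.get("stats:project")   # "90% translated"
miss = worker_b.get("stats:project")  # None: this worker never saw the write
```

Which value a user sees then depends on which worker happens to serve the request, exactly the inconsistency described above.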
There is little reason to continue using local memory.