Feature #23200


[Caching framework] Add compress data options to DbBackend

Added by Christian Kuhn over 14 years ago. Updated over 13 years ago.

Status:
Closed
Priority:
Should have
Category:
Caching
Target version:
-
Start date:
2010-07-15
Due date:
% Done:
0%

Estimated time:
PHP Version:
5.2
Tags:
Complexity:
Sprint Focus:

Description

Type: Feature, Performance improvement

Branches: trunk

Problem:
RDBMSs tend to slow down once tables grow too large to fit into memory. This is especially a problem for big cache tables holding large chunks of data.

Solution:
Implement options for the DB backend to compress cache data with zlib (sketched below):
- The content field of the caching framework tables must be changed from text to a blob type, so that binary data can be stored
- Add a "compression" option to DbBackend
- Add a "compressionLevel" option to DbBackend to allow choosing the tradeoff between data size and CPU usage

How to test:
- Apply the patch and change the DB fields to mediumblob (a schema change sketch follows the configuration example below)
- The additional unit tests show that the compression is actually done correctly
- localconf.php:
$TYPO3_CONF_VARS['SYS']['useCachingFramework'] = '1';
$TYPO3_CONF_VARS['SYS']['caching']['cacheConfigurations']['cache_pages'] = array(
    'frontend' => 't3lib_cache_frontend_StringFrontend',
    'backend' => 't3lib_cache_backend_DbBackend',
    'options' => array(
        'cacheTable' => 'cachingframework_cache_pages',
        'tagsTable' => 'cachingframework_cache_pages_tags',
        'compression' => TRUE,
    ),
);
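
The required schema change could be applied like this (a hedged one-off sketch: the table name follows the configuration above, the 'content' column name follows the description; exact schemas may differ between versions):

// Switch the content column to a binary type so compressed data can be stored.
$GLOBALS['TYPO3_DB']->admin_query(
    'ALTER TABLE cachingframework_cache_pages MODIFY content MEDIUMBLOB'
);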

Notes:
This is a real performance improvement for the DbBackend if cache tables like cache_pages grow large (lots of rows with big data chunks). The data table typically shrinks to about 20% of its original size, which is a great benefit when dealing with multiple GB of cache tables.
The patch is adapted from the compressed DB backend that has shipped successfully with enetcache since 4.3. It frees a lot of RAM on the DBMS to be used elsewhere on the server.
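
To get a feeling for the ratio on your own data, a quick check along these lines can help (illustrative only; the file name is a placeholder for any typical cache entry):

// Estimate the compression ratio for a sample cache entry.
$raw = file_get_contents('typical_cached_page.html'); // placeholder input
$compressed = gzcompress($raw, 6); // level 6 as a middle-ground example
printf("compressed to %.1f%% of original size\n",
    100 * strlen($compressed) / strlen($raw));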

Numbers:
- On a production system with 4-5 GB of cache tables we saw a lot of slow insert queries (~8 GB of RAM given to MySQL, together with a thoroughly optimized MySQL InnoDB setup). With the compressed backend the main cache table shrank to < 1 GB, and all slow inserts were immediately gone. The additional CPU overhead is marginal.
Attached are graphs from the enetcacheanalytics performance test suite (MySQL on localhost):
- GetByIdentifier get()'s an increasing number of previously set entries from the cache and measures the time taken: the overhead of uncompressing data is not that big.
- SetKiloBytesOfData set()'s a number of cache entries with growing data size. With small data sizes, timings for the compressed and uncompressed backends are nearly the same, but the compressed backend speeds up a lot with growing data size: the compression overhead is much smaller than the cost of sending big data chunks to MySQL.
- Play with enetcacheanalytics to get more numbers and tests for your system; the compressed backend is always quicker as the number of rows and the data size grow. (A minimal do-it-yourself timing sketch follows this list.)
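
An illustrative micro-benchmark, loosely modeled on SetKiloBytesOfData (not the actual enetcacheanalytics code), assuming the cache_pages configuration shown above is active:

// Time set() calls with growing payload sizes against the configured cache.
$cache = $GLOBALS['typo3CacheManager']->getCache('cache_pages');
foreach (array(10, 100, 1000) as $kiloBytes) {
    // Note: repeated characters compress unrealistically well; use real
    // page content for meaningful numbers.
    $data = str_repeat('x', $kiloBytes * 1024);
    $start = microtime(TRUE);
    for ($i = 0; $i < 50; $i++) {
        $cache->set('bench_' . $kiloBytes . '_' . $i, $data);
    }
    printf("%d KB entries: %.3f s for 50 set() calls\n",
        $kiloBytes, microtime(TRUE) - $start);
}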

(issue imported from #M15141)


Files

15141_01.diff (12.8 KB) 15141_01.diff Administrator Admin, 2010-07-15 23:11
15141_performance_difference.png (69.2 KB) 15141_performance_difference.png Administrator Admin, 2010-07-15 23:12
15141_01_dbtest.diff (2.33 KB) 15141_01_dbtest.diff Administrator Admin, 2010-08-10 13:50
Actions #1

Updated by Christian Kuhn over 14 years ago

Committed to trunk rev. 8551.

Actions #2

Updated by Susanne Moog over 13 years ago

  • Target version deleted (4.5.0)