Disk space issues are common, and they’re often difficult to solve quickly. One way to recover some space is by enabling InnoDB compression.
First, of course, you want to make sure you’ve covered alternative solutions. Can you archive data? Can you partition or shard? These approaches generally involve application changes and can take longer to implement.
You may first need to convert your tables to InnoDB. Compression is available for MyISAM via myisampack, and it can be useful in some cases (for example, if you rotate tables out on a monthly basis), but it makes the tables read-only, so generally you will want to convert MyISAM tables to InnoDB first.
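As a sketch, the conversion itself is a single ALTER per table; note that it rebuilds the table and can take a long time (and significant disk I/O) on large tables. The `orders` table and `mydb` schema below are placeholders:

```sql
-- Convert one table to InnoDB (rebuilds the table)
ALTER TABLE orders ENGINE=InnoDB;

-- List remaining MyISAM tables in a given schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND engine = 'MyISAM';
```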
Things to watch for in the schema: full-text indexes. After working through functional and performance issues with full-text indexes following a conversion to InnoDB, I wouldn’t recommend relying on them. They also require application changes to rewrite queries. Consider outsourcing full-text search to a tool like Solr before converting your dataset to InnoDB.
You should also ensure that each table has a primary key, and that the primary key columns are not repeated in any secondary indexes: InnoDB already appends the primary key columns to every secondary index, so including them explicitly is unnecessary duplication.
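One way to spot tables that lack a primary key is to query information_schema; this is a sketch, with `mydb` as a placeholder schema name:

```sql
-- Tables in 'mydb' with no primary key defined
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema    = t.table_schema
  AND c.table_name      = t.table_name
  AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.table_name IS NULL;
```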
You’ll also want to adjust your memory allocations. Increase innodb_buffer_pool_size and, if you’re doing a wholesale conversion to InnoDB, reduce key_buffer_size. If you need to keep both MyISAM and InnoDB tables in your schema – and hopefully this is only temporary – you’ll need to balance memory between the two.
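In my.cnf, the rebalancing might look like the fragment below; the sizes are illustrative only, not recommendations, and depend entirely on your hardware and working set:

```ini
[mysqld]
# Give InnoDB the bulk of available memory (illustrative value)
innodb_buffer_pool_size = 8G
# Shrink the MyISAM key cache once tables are converted
key_buffer_size = 64M
```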
Also read carefully how transactions behave in a mixed MyISAM/InnoDB environment, particularly around when changes are committed.
Backups may need revisiting for InnoDB; xtrabackup is a good choice.
You’ll need some disk headroom to do this conversion before you can enable compression, because InnoDB takes more space on disk than MyISAM due to its secondary index implementation, transaction support, etc.
Are you ready for InnoDB compression? Maybe.
Note that compression applies at the table level (a table and its indexes); if you want to compress only certain columns, compress them in the application (or see column-level compression in the new Percona Server release here: https://www.percona.com/blog/2016/10/18/percona-server-5-6-33-79-0-is-now-available/).
Review the datatypes of the tables you need to compress. Blob, text, and varchar all compress well. If you have fewer of these kinds of columns, you may see less compression. A quick way to estimate the compression ratio is to run gzip on a copy of an .ibd file (never on a running server!) and compare the before and after sizes.
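A rough sketch of that gzip test is below. It fabricates a stand-in data file so the script is self-contained; in practice you would copy an .ibd file from a backup or a stopped server (never gzip the live file in place):

```shell
#!/bin/sh
# Estimate compressibility by gzipping a copy of a table's data file.
SRC=/tmp/sample.ibd
seq 1 20000 > "$SRC"              # stand-in for a real .ibd copy
gzip -c "$SRC" > "$SRC.gz"        # compress to a new file, original intact
BEFORE=$(wc -c < "$SRC")
AFTER=$(wc -c < "$SRC.gz")
echo "before=${BEFORE} after=${AFTER}"
```

The ratio you see is only an upper bound on what InnoDB page compression will achieve, since InnoDB compresses page by page rather than the whole file at once.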
You’ll need both innodb_file_per_table enabled and the innodb_file_format set to Barracuda before you can enable InnoDB compression.
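Putting those prerequisites together, a minimal sketch looks like this (MySQL 5.5/5.6-era settings, which can also go in my.cnf; `mytable` is a placeholder):

```sql
-- Prerequisites for compressed tables
SET GLOBAL innodb_file_per_table = ON;
SET GLOBAL innodb_file_format    = Barracuda;

-- Compress a table; KEY_BLOCK_SIZE=8 means 8KB compressed pages
ALTER TABLE mytable ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```

The ALTER rebuilds the table, so plan for the same time and disk overhead as the engine conversion above.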
Some kinds of workloads are more performant with compression than others. In general, if you have mostly reads and/or inserts with an auto-increment primary key into a table with few indexes, you will see little overhead. If you have a lot of deletes and updates, you can see some performance drain and should benchmark carefully.
Also, note that compression and decompression are CPU-intensive. You should ensure you have good headroom on CPU (view this via sar or mpstat reports), and keep an eye on this after enabling compression.
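Beyond OS-level tools, you can watch compression behavior from inside MySQL via the INFORMATION_SCHEMA.INNODB_CMP table; a low ratio of compress_ops_ok to compress_ops suggests pages are frequently failing to compress and being split, which is costly:

```sql
-- Per-page-size compression/decompression counters and time spent (seconds)
SELECT page_size,
       compress_ops,
       compress_ops_ok,
       compress_time,
       uncompress_ops,
       uncompress_time
FROM information_schema.INNODB_CMP;
```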
Hopefully, after all the hard work, you’ll have a nice bit of headroom on your disk space!
(Originally published at https://www.pythian.com/blog/enabling-innodb-compression/.)