Posted 6 months ago by Tokuview Blog
Tired of the same old holiday tunes year after year? Join us as we Rock around the Fractal Tree here at Tokutek.
We’ve fiddled with the chorus; won’t you join us?
Rockin’ around the Fractal Tree
(sung to the tune of Rockin’ Around the Christmas Tree)
Rockin’ around the Fractal Tree
As the bits begin to hop
Leaves are full on trees that are not B
Our indexes cannot be stopped
Rockin’ around the Fractal Tree
Let the MySQL spirit ring
Time to compress all into an SSD
And watch out for the Merciless Ming
You will get a sensational feeling when you hear voices singing
“No, not a fable; online changes done with alter table!”
Rockin’ around the Fractal Tree
Have a happy holiday
Everyone’s coding merrily
In a Java-Python way
Happy holidays everyone!
Posted 6 months ago by Andrew Morgan
Oracle have just announced that MySQL Cluster Manager 1.2 is Generally Available. For anyone not familiar with MySQL Cluster Manager – it’s a command-line management tool that makes it simpler and safer to manage your MySQL Cluster deployment – use it to create, configure, start, stop, and upgrade your cluster.
So what has changed since MCM 1.1 was released?
The first thing is that a lot of work has happened under the covers and it’s now faster, more robust and can manage larger clusters. Feature-wise you get the following (note that a couple of these were released early as part of post-GA versions of MCM 1.1):
Automation of on-line backup and restore
Single command to start MCM and a single-host Cluster
Multiple clusters per site
Single command to stop all of the MCM agents in a Cluster
Provide more details in “show status” command
Ability to perform an "initial" restart of the data nodes in order to wipe out the database ahead of a restore
A new version of the MySQL Cluster Manager white paper has been released that explains everything that you can do with it and also includes a tutorial for the key features; you can download it here.
Watch this video for a tutorial on using MySQL Cluster Manager, including the new features:
Using the new features
Single command to run MCM and then create and run a Cluster
A single-host cluster can very easily be created and run – an easy way to start experimenting with MySQL Cluster:
billy@black:~$ mcm/bin/mcmd --bootstrap
MySQL Cluster Manager 1.2.1 started
Connect to MySQL Cluster Manager by running "/home/billy/mcm-1.2.1-cluster-7.2.9_32-linux-rhel5-x86/bin/mcm" -a black.localdomain:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
Connect to the database by running "/home/billy/mcm-1.2.1-cluster-7.2.9_32-linux-rhel5-x86/cluster/bin/mysql" -h black.localdomain -P 3306 -u root
You can then connect to MCM, or access the database itself simply by running the regular mysql client.
Extra status information
When querying the status of the processes in a Cluster, you’re now also shown the package being used for each node:
mcm> show status --process mycluster;
+--------+----------+-------+---------+-----------+---------+
| NodeId | Process  | Host  | Status  | Nodegroup | Package |
+--------+----------+-------+---------+-----------+---------+
| 49     | ndb_mgmd | black | running |           | 7.2.9   |
| 50     | ndb_mgmd | blue  | running |           | 7.2.9   |
| 1      | ndbd     | green | running | 0         | 7.2.9   |
| 2      | ndbd     | brown | running | 0         | 7.2.9   |
| 3      | ndbd     | green | running | 1         | 7.2.9   |
| 4      | ndbd     | brown | running | 1         | 7.2.9   |
| 51     | mysqld   | black | running |           | 7.2.9   |
| 52     | mysqld   | blue  | running |           | 7.2.9   |
+--------+----------+-------+---------+-----------+---------+
Simplified on-line backup & restore
MySQL Cluster supports on-line backups (and the subsequent restore of that data); MySQL Cluster Manager 1.2 simplifies the process.
The database can be backed up with a single command (which in turn makes every data node in the cluster backup their data):
mcm> backup cluster mycluster;
The list command can be used to identify what backups are available in the cluster:
mcm> list backups mycluster;
+----------+--------+--------+----------------------+
| BackupId | NodeId | Host   | Timestamp            |
+----------+--------+--------+----------------------+
| 1        | 1      | green  | 2012-11-31T06:41:36Z |
| 1        | 2      | brown  | 2012-11-31T06:41:36Z |
| 1        | 3      | green  | 2012-11-31T06:41:36Z |
| 1        | 4      | brown  | 2012-11-31T06:41:36Z |
| 1        | 5      | purple | 2012-11-31T06:41:36Z |
| 1        | 6      | red    | 2012-11-31T06:41:36Z |
| 1        | 7      | purple | 2012-11-31T06:41:36Z |
| 1        | 8      | red    | 2012-11-31T06:41:36Z |
+----------+--------+--------+----------------------+
You may then select which of these backups you want to restore by specifying the associated BackupId when invoking the restore command:
mcm> restore cluster -I 1 mycluster;
Note that if you need to empty the database of its existing contents before performing the restore then MCM 1.2 introduces the initial option to the start cluster command which will delete all data from all MySQL Cluster tables.
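Putting the pieces together, a wipe-and-restore sequence might look like the following sketch (the `-I 1` form is taken from the restore example above; check the MCM 1.2 manual for the exact option spellings):

```
mcm> stop cluster mycluster;
mcm> start cluster --initial mycluster;
mcm> restore cluster -I 1 mycluster;
```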
Stopping all MCM agents for a site
A single command will now stop all of the agents for your site:
mcm> stop agents mysite;
Getting started with MySQL Cluster Manager
You can fetch the MCM binaries from edelivery.oracle.com and then see how to use it in the MySQL Cluster Manager white paper.
Please try it out and let us know how you get on!
Posted 6 months ago by Oracle MySQL Group
With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees.
In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure.
This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.
The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes.
Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime.
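As a minimal sketch of such a change (the table and column names here are invented; the ALGORITHM and LOCK clauses are the MySQL 5.6 syntax for requesting an in-place, non-blocking operation):

```sql
-- Hypothetical table: add a column without blocking reads or writes.
ALTER TABLE user_events
  ADD COLUMN device_type VARCHAR(32) NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
```

If the requested algorithm or lock level cannot be honored, the statement fails rather than silently falling back to a blocking copy, which makes the clauses useful as a safety check.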
Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.
Let’s look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.
Memcached Implementation for InnoDB
The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate.
As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.
Figure 1: Memcached API Implementation for InnoDB
With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set.
With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations.
Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second.
Figure 2: Over 9x Faster INSERT Operations
The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
You can check out the latest Memcached / InnoDB developments and benchmarks here
You can learn how to configure the Memcached API for InnoDB here
Memcached Implementation for MySQL Cluster
Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster.
Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities.
All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache.
Figure 3: Memcached API Implementation with MySQL Cluster
Implementation is simple:
1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
2. This invokes the Memcached Driver for NDB (which is part of the same process).
3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes.
The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer.
The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.
Using Memcached for Schema-less Data
By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table.
Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.
Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.
See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs.
Don't hesitate to use the comments section below for any questions you may have.
Posted 6 months ago by Yoshinori Matsunobu
One of my favorite performance improvements in 5.6 is "faster non-persistent connections". I couldn't find this information in the release notes, but non-persistent connections really are faster in 5.6. Here are sysbench micro-benchmark results. I tested with 100 concurrent clients, running remotely. Clients simply connected and disconnected repeatedly.

version                    connections per second
5.1.65                     20712
5.5.28                     24000
5.5.25Maria                28073
5.5.25Maria, ThreadPool    27653
5.6.9                      37800

The whole sysbench command I used is as below.

[remote_client]$ sysbench --test=oltp --oltp-table-size=2000000 \
  --max-requests=1000000 --mysql-table-engine=innodb --db-ps-mode=disable \
  --mysql-engine-trx=yes --oltp-read-only --oltp-skip-trx \
  --oltp-dist-type=special --oltp-reconnect-mode=query \
  --oltp-connect-delay=0 --db-driver=mysql --mysql-user=root \
  --mysql-host=$server_host --mysql-port=$server_port --num-threads=100 run

I tested 1000+ connections and got similar numbers. As you can see, 5.6 is nearly 2x faster than 5.1.
There are many performance improvements in 5.6, so I'm not sure which fix contributed the most. I assume the biggest one is that the THD destructor (THD is quite a large C++ class inside MySQL) is no longer called while holding a global mutex. Prior to 5.6, the THD destructor was called under the global LOCK_thread_count mutex, which is not efficient. I experienced some serious connection/disconnection problems caused by this global mutex in MySQL Cluster a few years ago, and there were some fixes in the MySQL Cluster source tree. Now in 5.6, the THD destructor is called outside of the LOCK_thread_count mutex, as below. This is great news.

5.1/5.5:
  one_thread_per_connection_end()
    -> unlink_thd()
      (void) pthread_mutex_lock(&LOCK_thread_count);
      thread_count--;
      delete thd;
      ...

5.6.9:
  one_thread_per_connection_end()
    ...
    mysql_mutex_unlock(&LOCK_thread_count);
    delete thd;

5.6 also fixed some connection performance issues reported by Domas. Some of the major issues are still ongoing, so once all of them are fixed, we can expect even better performance :)
Persistent connections are, of course, much more efficient than non-persistent ones, but they are not always possible. In addition, 5.6 improves query performance at high concurrency as well. I believe many production engineers will welcome these performance improvements.
Posted 6 months ago by Fernando Ipar
On December 19th at 1 PM Eastern / 10 AM Pacific, I will be giving a
webinar entitled “10 Percona Toolkit Tools Every MySQL DBA Should Know”.
As a Consultant, I make heavy use of Percona Toolkit tools while
solving customers’ problems, and I have used this experience to select
what I consider to be the top 10 tools everyone working with MySQL
should know about. I will present each of the selected tools using
examples based on real-world cases.
You can sign up by clicking the “Register” button here. Hope to see
you next week!
Posted 6 months ago by Shlomi Noach
If you are familiar with both openark kit and common_schema, you'll notice I've incorporated some functionality already working in openark kit into common_schema, essentially rewriting what used to be a Python script into SQL/QueryScript.
What was my reasoning for rewriting good code? I wish to explain that, and provide a couple of examples.
I'm generally interested in pushing as much functionality as possible into the MySQL server. When using an external script, one:
Needs the right dependencies (OS, Perl/Python version, Perl/Python modules).
Needs to provide connection params.
Needs to get acquainted with a lot of command line options.
Is limited by whatever command line options are provided.
Has to invoke that script (duh!) to get the work done.
This last bullet is not so trivial: it means you can't perform some operation with your favorite GUI client, because it has no notion of your Perl script, does not run on the same machine where your Python code resides, and simply can't run those scripts for you.
With server-side code, functionality is accessible via any client. You run your operation via a query (e.g. CALL some_procedure). That can be done from your GUI client, your command line client, your event scheduler, your cronjob, all equally. You only need access to your MySQL server, which is trivial.
Of course, server side scripting is limited. Some stuff simply can't be written solely on the server side. If you want to consult your replicating slave, gracefully take action on a user's Ctrl+C, or send data over the web, you'll have to do it with an external tool. There are actually a lot of surprising limitations to things one would assume are possible on the server side. You may already know how frustrated I am by the fact one can hardly get info from SHOW commands.
But, when it works, it shines
Let's review a couple examples. The first one is nearly trivial. The second less so.
Example: getting AUTO_INCREMENT "free space"
openark kit offers oak-show-limits. It's a tool that tells you if any of your AUTO_INCREMENT columns are running out of space (and so you might want to ALTER that INT to BIGINT).
It's a very simple Python script. It gets your MAX(auto_increment_column) FROM tables_with_auto_increment, and compares that MAX value to the column type. It pre-computes:
max_values['tinyint'] = 2**8
max_values['smallint'] = 2**16
max_values['mediumint'] = 2**24
max_values['int'] = 2**32
max_values['bigint'] = 2**64
takes care of SIGNED/UNSIGNED, and does the math. Why is this tool such a perfect candidate for replacement on server side? For two reasons.
First, it turns out it takes very little effort to build a query that does the same, in which case it is also easy to build a view that provides the same.
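As a sketch of that claim (my own approximation, not the actual common_schema view: it reads the AUTO_INCREMENT counter from information_schema rather than scanning MAX() per table, and the BIGINT maximum is approximate due to floating-point arithmetic):

```sql
-- Sketch: per-column AUTO_INCREMENT capacity, roughly what oak-show-limits computes.
SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME,
       t.AUTO_INCREMENT AS next_value,
       t.AUTO_INCREMENT /
         (POW(2, CASE c.DATA_TYPE
                   WHEN 'tinyint'   THEN 8
                   WHEN 'smallint'  THEN 16
                   WHEN 'mediumint' THEN 24
                   WHEN 'int'       THEN 32
                   WHEN 'bigint'    THEN 64
                 END
                 -- signed columns lose one bit of capacity
                 - IF(c.COLUMN_TYPE LIKE '%unsigned%', 0, 1)) - 1
         ) AS auto_increment_ratio
  FROM information_schema.COLUMNS c
  JOIN information_schema.TABLES t
    ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
 WHERE c.EXTRA = 'auto_increment';
```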
Second, there's this thing with command line arguments. The openark tool provides with --threshold (only output those columns where capacity is larger than x%), --database (only scan given database), --table (only for tables matching name), --column (only for columns matching name).
I don't like this. See, the above is essentially an extra layer for saying:
WHERE auto_increment_ratio >= x
WHERE table_schema = ...
WHERE table_name = ...
WHERE column_name = ...
The command line arguments each take the role of some WHERE/AND condition. Wow, what a 1-1 mapping! How about if I wanted the results sorted in some specific order? I would have to add a command line argument for that! How about only listing the SIGNED columns? I would have to add a command line argument for that, too! How about showing the top 10? Yes, another command line argument!
Some of the above can be solved via shell scripting (sort -k 3 -n, head -n 10, etc.). But, hey, we're OK with SQL, aren't we? Why add these two extra layers now? Get to know all the command line options, then script around them? I love scripting, but this is an abuse.
So it makes much more sense, in my opinion, to SELECT * FROM auto_increment_columns WHERE table_schema='my_db' AND auto_increment_ratio >= 0.8 ORDER BY auto_increment_ratio DESC LIMIT 10. It doesn't require SQL-fu skills, just basic SQL skills which every DBA and DB user is expected to have. And it allows one to work from whatever environment one feels comfortable with. Heck, with your GUI editor you can probably get away with it by right-clicking and left-clicking your mouse buttons, never typing one character.
Example: blocking user accounts
The above mapped very easily to a query, and was just a read-only query. What if we had to modify data? oak-block-accounts is a tool which allows one to block grantees from logging in, then release them later on. common_schema offers sql_accounts and eval().
Let's skip the command line arguments issue, as it is identical to the above. How should we best provide a "taking action" interface? A script has no problem first SELECTing stuff, then issuing UPDATE, SET PASSWORD, DROP, etc. How easy is it to do the same on the server side?
The immediate solution is to write a stored procedure to do that. I reject the idea. Why? Because the procedure would look like this:
PROCEDURE block_account(user VARCHAR(64), host VARCHAR(64), only_if_empty_password BOOL, ...);
Can you see what I'm getting at? Doing the above re-introduces command line options, this time disguised as procedure parameters. We would again have to list all available filtering methods, only this time things are worse: since stored procedures have no such notion as overloading, any change to the params will break compatibility. Once we introduce this routine, we're stuck with it.
common_schema tries to stay away as far as it can from this pitfall. It presents another solution: the view solution. Just as with auto_increment_columns, SELECT your way to get the right rows. But this time, the result is a SQL query:
mysql> SELECT sql_block_account FROM sql_accounts WHERE USER = 'gromit';
+--------------------------------------------------------------------------------------+
| sql_block_account                                                                    |
+--------------------------------------------------------------------------------------+
| SET PASSWORD FOR 'gromit'@'localhost' = '752AA50E562A6B40DE87DF0FA69FACADD908EA32*'  |
+--------------------------------------------------------------------------------------+
Do your own WHERE/AND combination in SQL. But, how to take action? Our view cannot take the actual action for us!
eval() is at the core of many common_schema operations, like this one:
CALL eval("SELECT sql_block_account FROM sql_accounts WHERE USER = 'gromit'");
The SET PASSWORD query just got evaluated, meaning it was executed. eval() is a very powerful solution.
I prefer stuff on the server side. It requires basic SQL skills (or a smart GUI editor), and allows you easy access to a lot of functionality, removing dependency requirements. It is not always possible, and external scripts can do miracles not possible on the server side, but server side scripting has its own miracles.
Posted 6 months ago by Rob Young
The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download. You can get MySQL 5.6.9 here (look under the "Development Releases" tab). This version is the result of feedback we have received since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL, so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready release in early 2013.
MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services, and includes across-the-board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema instrumentation, Security, and developer-enabling NoSQL functionality.
You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article.
On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, Embedded Application and Services with MySQL 5.6. Hurry! Seating is limited!
As always, thanks for your continued support of MySQL!
Posted 6 months ago by Daniel Nichter
Yes, it’s still 4 months away, but some pre-selected sessions for Percona Live MySQL Conference & Expo 2013 have been released, and one is mine: Survey of Percona Toolkit: Command-line Tools for MySQL. If you’ve not heard of Percona Toolkit, or have but have been putting off getting acquainted with it, attend my talk and I’ll get you off to a good start. And if you didn’t know: I’m the lead PT developer, so I know the tools pretty well.
Posted 6 months ago by Ulf Wendel
Do I have to look out for a new job? Some analysts say so: NoSQL is getting stronger. Given that prediction, it is time for me to learn what NoSQL is.
Vote NO for MySQL from Ulf Wendel
http://de.slideshare.net/nixnutz/vote-no-for-mysql (Slides: Vote NO for MySQL)
As a matter of fact, there is no one kind of NoSQL but many. Four different kinds of NoSQL stores are competing with MySQL: Key-Value Stores, Document Databases, Graph Databases, and Big Data/column-oriented stores. Each of them has its specific strengths.
Key-Value Stores are all about performance. MySQL 5.6 introduces Memcached interfaces for both InnoDB and MySQL Cluster. MySQL Cluster is really fast and scales well, both for reads and writes.
Document stores are schema-free. That must appeal to any web developer. But recalling what normalization means, how useful relations can be, and what impact the pure existence of a schema has on the disk footprint may make you think twice. And MySQL 5.6 InnoDB Online DDL (ALTER TABLE) makes schema changes a little less of an issue.
So, things are crystal clear? Not quite… there is some MySQL critique in the slides. However, using some of the core ideas of NoSQL and incorporating them into MySQL setups can’t be wrong if MySQL 5.6 really is a limiting factor for you. For example, use the good old divide-and-conquer approach: identify small sets of related data that you can easily shard. Then, use batch processing for rebuilding aggregates from shards: map and reduce the data to create views that you can query… Possibly, you want to use the new high performance interfaces for this, if your queries are simple.
Graph Databases and Big Data are beyond the scope of the presentation, which I gave today at the PHP user group Hamburg. Please, be gentle with the presentation: there is a difference between the live show and the slides. In case you would like to see me perform live, why not invite me? I’m based in Kiel (Germany). If your company is nearby, ping me…
The post Vote NO for MySQL – Election 2012: NoSQL appeared first on Ulf Wendel.
Posted 6 months ago by Joerg Bruehe
Dear MySQL users,
MySQL Server 5.6.9 (Release Candidate) is a new version of the world's
most popular open source database.
The new features in this release are of Release Candidate quality.
As with any other pre-production release, caution should be taken when
installing on production level systems or systems with critical data.
Note that 5.6.9 includes all features in MySQL 5.5 and previous 5.6
Development Milestone Releases. An overview of what's new in MySQL 5.6
is available online at
For information on installing MySQL 5.6.9 on new servers, please see the
MySQL installation documentation at
For upgrading from previous MySQL releases, please see the important
upgrade considerations at
Please note that **downgrading** from MySQL 5.6.9 RC or other
pre-production releases to a previous release series is not supported.
MySQL Server 5.6.9 is available in source and binary form for a number
of platforms from the "Development Releases" selection of our download
Please note that the list of platforms for MySQL 5.6 has been adapted to
the changes in the field:
- Apple Mac OS X 10.6 and 10.7 on x86 (32 bit) and x86_64
(Binary packages of MySQL 5.6 are not provided for OS X 10.5),
- Debian 6 on x86 (32 bit) and x86_64
(Binary packages of MySQL 5.6 are not provided for Debian 5),
- RedHat Enterprise / Oracle Linux 5 and 6 on x86 (32 bit) and x86_64
(Binary packages of MySQL 5.6 are not provided for RHEL/OL 4),
- SuSE Enterprise Linux 11 on x86_64
(Binary packages of MySQL 5.6 are not provided for SLES 10),
- generic Linux (kernel 2.6) on x86 (32 bit) and x86_64,
using glibc 2.5 (or newer),
- FreeBSD 9 on x86_64
(Binary packages of MySQL 5.6 are not provided for FreeBSD 7 and 8),
- Oracle Solaris 10 and 11 on Sparc (64 bit), x86 (32 bit) and x86_64,
- Windows Vista, 7, and 2008 on x86 (32 bit) and x86_64
(Binary packages of MySQL 5.6 are not provided for Windows XP and
This does not affect the list of supported platforms for 5.1 and 5.5.
For Linux, the dependency on glibc 2.5 is new with 5.6.9, it is
reflected in the names of the binary packages for generic Linux:
"linux-glibc2.5" (former: "linux2.6").
All supported specific platforms (RedHat 5 and newer, SuSE 11, ...) are
using glibc 2.5 or newer.
If you want to check your system, you may simply "run" the library:
| Prompt$ /lib/libc.so.6
| GNU C Library stable release version 2.7, by Roland McGrath et al.
| Copyright (C) 2007 Free Software Foundation, Inc.
The above example was done on Debian 6, it shows glibc 2.7.
Packages for specific Linux distributions are provided in the specific
format (RPM or deb), in addition the generic tar.gz packages will fit
For RedHat-alike distributions like CentOS or Fedora, both the RedHat
and the generic packages should work.
If you are using a newer version of your operating system, its binary
compatibility approach (supporting applications built for older
versions) should ensure you can use MySQL 5.6.
Windows packages are now available via the new Installer for Windows:
Installer or .ZIP (no-install) packages for more advanced needs. It
should be noted that the previous MSI packaging is no longer available,
and the point-and-click configuration wizards and all MySQL products
are now available in the unified Installer for Windows:
We welcome and appreciate your feedback, bug reports, bug fixes,
The list of all "Bugs Fixed" for 5.6.9 may also be viewed online at
If you are running a MySQL production level system, we would like to
direct your attention to MySQL Enterprise Edition, which includes the
most comprehensive set of MySQL production, backup, monitoring,
modeling, development, and administration and migration tools so
businesses can achieve the highest levels of MySQL performance,
security and uptime.
On behalf of the MySQL Build Team at Oracle,
Changes in MySQL 5.6.9 (2012-Dec-11)
* The --random-passwords option for mysql_install_db is now
used for MySQL install operations (not upgrades) which are
done using Solaris PKG packages.
Functionality Added or Changed
* Incompatible Change: Replication: A number of variable and
other names relating to GTID-based replication have been
changed, with a view to making these names more appropriate
and meaningful. The old names are no longer supported.
The features so renamed are shown in the following list:
+ The --disable-gtid-unsafe-statements server option has
been renamed --enforce-gtid-consistency; the
disable_gtid_unsafe_statements system variable has been
renamed enforce_gtid_consistency.
+ The gtid_done server system variable has been renamed
gtid_executed.
+ The gtid_lost server system variable has been renamed
gtid_purged; in addition, this variable is no longer
read-only.
+ The SQL_THREAD_WAIT_AFTER_GTIDS() function has been
renamed WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS().
For more information, see Replication with Global Transaction
IDs, and Global Transaction ID Options and Variables.
(Bug #14775984)
* Windows Vista, Windows Server 2008, and newer support native
symlinking using the mklink command. This makes the MySQL
Server implementation of database symbolic links using .sym
files redundant, so that mechanism is now deprecated and will
be removed in a future MySQL release. See Using Symbolic Links
for Databases on Windows
* Performance: InnoDB: The timing values for low-level InnoDB
read operations were adjusted for better performance with fast
storage devices, such as SSD. This enhancement primarily
affects read operations for BLOB columns in compressed
tables. (Bug #13702112, Bug #64258)
* Incompatible Change: The THREAD_ID column in Performance
Schema tables was widened from INT to BIGINT to accommodate
64-bit values.
If you upgrade to this release of MySQL from an earlier
version, you must run mysql_upgrade (and restart the server)
to incorporate this change to the performance_schema database.
* Incompatible Change: Connection ID (thread ID) values greater
than 32 bits can occur on some systems (such as busy or
long-running 64-bit systems), causing these problems:
+ Connection IDs written to the general query log and slow
query log were incorrect. This was true for logging to
both files and tables.
+ The CONNECTION_ID() function could return a value with a
data type too small for values larger than 32 bits.
+ The mysql_thread_id() and mysql_kill() C API functions
did not handle ID values larger than 32 bits. This could
result in killing the wrong thread; for example, if you
Connection IDs now are permitted to be 64-bit values when the
server supports them (when built with 64-bit data types),
which has these effects:
+ Connection IDs are logged correctly to the general query
log and slow query log.
This change involves a modification to the log tables, so
after upgrading to this release, you must run
mysql_upgrade and restart the server.
+ CONNECTION_ID() returns a data type appropriate for
values larger than 32 bits.
+ mysql_thread_id() is unchanged; the client/server
protocol has only 4 bytes for the ID value. This function
returns an incorrect (truncated) value for connection IDs
larger than 32 bits and should be avoided.
mysql_kill() still cannot handle values larger than 32
bits, but to guard against killing the wrong thread now
returns an error in these cases:
o If given an ID larger than 32 bits, mysql_kill()
returns a CR_INVALID_CONN_HANDLE error.
o After the server's internal thread ID counter
reaches a value larger than 32 bits, it returns an
ER_DATA_OUT_OF_RANGE error for any mysql_kill()
invocation and mysql_kill() fails.
To avoid problems with mysql_thread_id() and
mysql_kill(), do not use them. To get the connection ID,
execute a SELECT CONNECTION_ID() query and retrieve the
result. To kill a thread, execute a KILL statement.
(Bug #19806, Bug #11745768, Bug #65715, Bug #14236124, Bug
#44728, Bug #11753308)
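As a rough illustration of the hazard this item describes, the sketch below models it in Python (the real code is the C client library; all names here are invented for the sketch): two distinct 64-bit connection IDs collide once truncated to 32 bits, and the new guard in mysql_kill() returns an error instead of killing the wrong thread.

```python
# Illustrative model, not the actual C implementation.

UINT32_MAX = 0xFFFFFFFF

def truncate_to_32_bits(conn_id):
    """Old mysql_thread_id() behavior: the wire protocol
    carries only 4 bytes for the ID."""
    return conn_id & UINT32_MAX

def mysql_kill_guard(conn_id, server_counter_past_32_bits=False):
    """New behavior: refuse rather than kill the wrong thread."""
    if conn_id > UINT32_MAX:
        return "CR_INVALID_CONN_HANDLE"
    if server_counter_past_32_bits:
        return "ER_DATA_OUT_OF_RANGE"
    return "OK"

# Two distinct 64-bit IDs collide after truncation --
# the wrong-thread hazard the old functions had.
a, b = 5, (1 << 32) + 5
assert a != b
assert truncate_to_32_bits(a) == truncate_to_32_bits(b)
```

As the note says, the portable workaround is SELECT CONNECTION_ID() plus a KILL statement, which avoids the 4-byte protocol field entirely.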
* Important Change: InnoDB: A DML statement using the index
merge access method could lock many rows from the table, even
when those rows were not part of the final result set. This
fix reduces the excessive locking by releasing the locks of
unmatched rows. This
optimization affects only transactions with isolation level
equal to or less strict than READ COMMITTED; it does not apply
to transactions using REPEATABLE READ or SERIALIZABLE
isolation level. (Bug #14226171)
* Important Change: Replication: Because running the server with
GTIDs enabled prevented changes to nontransactional tables,
programs such as mysql_upgrade and mysql_install_db were
unable to operate on system tables that used the MyISAM
storage engine and therefore could not function correctly.
Now, when running with --enforce-gtid-consistency (required
whenever --gtid-mode=ON), the server allows single statements
on nontransactional tables. (Bug #14722659)
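For reference, a minimal my.cnf sketch for running with GTIDs as this item describes; the log-bin and log-slave-updates lines are additional prerequisites of gtid-mode in this release, and paths and values are placeholders to adjust for your deployment.

```ini
# my.cnf sketch -- option names as referenced in this note
[mysqld]
gtid-mode                = ON
enforce-gtid-consistency = ON   # required whenever gtid-mode=ON
log-bin                  = mysql-bin
log-slave-updates        = ON
```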
* Important Change: Replication: Formerly, the value of the
Seconds_Behind_Master column in the output of SHOW SLAVE
STATUS was always set to NULL whenever the SQL thread or the
I/O thread was stopped. Now, this column is set to NULL only
if the SQL thread is not running, or if the I/O thread is not
running following a check to determine whether or not the SQL
thread has processed all of the relay log. (If the SQL thread
has finished processing and the I/O thread is running,
Seconds_Behind_Master is 0.) (Bug #12946333)
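The revised NULL/0 semantics for Seconds_Behind_Master can be summarized in a small decision function; this is an illustrative model with invented parameter names, not server code.

```python
def seconds_behind_master(sql_thread_running, io_thread_running,
                          relay_log_fully_applied, lag_seconds):
    """Model of the new SHOW SLAVE STATUS behavior."""
    if not sql_thread_running:
        return None          # NULL: SQL thread not running
    if not io_thread_running and relay_log_fully_applied:
        return None          # NULL: I/O thread stopped and
                             # nothing left to apply
    if relay_log_fully_applied:
        return 0             # caught up, I/O thread running
    return lag_seconds       # still applying relay log
```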
* Partitioning: InnoDB: Previously, when attempting to optimize
one or more partitions of a partitioned table that used a
storage engine that does not support partition-level OPTIMIZE,
such as InnoDB, MySQL reported Table does not support
optimize, doing recreate + analyze instead, then re-created
the entire table, but did not actually analyze it. Now in such
cases, the warning message is, Table does not support optimize
on partitions. All partitions will be rebuilt and analyzed. In
addition, the entire table is analyzed after first being
rebuilt. (Bug #11751825)
* InnoDB: An online DDL operation for an InnoDB table
incorrectly reported an empty value ('') instead of the
correct key value when it reported a duplicate key error for a
unique index using an index prefix. (Bug #14729221)
* InnoDB: InnoDB tables with FULLTEXT indexes could allocate
memory for thread handles that was never released, possibly
leading to resource issues on Windows systems. (Bug #14759163)
* InnoDB: During an online DDL operation that copies the
table, the secondary index of the table could become
corrupted. (Bug
* InnoDB: If an ALTER TABLE statement failed while attempting to
create a FULLTEXT index for an InnoDB table, the server could
halt with an assertion error while dropping the incomplete
index. (Bug #14504174)
* Replication: If a table to be replicated had a FULLTEXT index,
this index was not ruled out when selecting the type of scan
to be used in finding the next row, even though it cannot be
used to find the correct one. The row applier subsequently
tried unsuccessfully to employ an index scan, causing
replication to fail. Now in such cases, indexes which do not
provide for sequential access (such as FULLTEXT) are not
considered when determining whether to use a table, index, or
hash scan for this purpose. (Bug #14843764)
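A minimal model of the selection change described in this item: before the row applier chooses a table, index, or hash scan, index kinds that cannot do sequential row lookups are filtered out. The kind strings are illustrative, not server internals.

```python
# Index kinds that do not provide sequential access and so
# must not be picked for finding the next row (per this fix).
NON_SEQUENTIAL = {"FULLTEXT"}

def scan_candidates(indexes):
    """Return index names eligible for an index scan, given
    (name, kind) pairs describing the table's indexes."""
    return [name for name, kind in indexes
            if kind not in NON_SEQUENTIAL]
```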
* Replication: When using the GTID-aware master-slave protocol,
the slave I/O thread used the wrong position. When using
GTIDs, the position is not normally used, but as a special
case, the position was used in addition to the GTID when the
slave reconnected to the same master (even though this was not
necessary). This problem is fixed by making the GTID-aware
master-slave protocol no longer use positions at all.
* Replication: Given a stored routine R in which the
GTID_SUBTRACT() function was invoked: Once GTID_SUBTRACT()
returned NULL when called inside R, it continued to return
NULL every time it was called within R, for the remainder of
the client session. (Bug #14838575)
* Replication: MySQL Enterprise Backup, mysqldump, and
mysqlhotcopy could not be used with a GTID-enabled MySQL
Server, because they were unable to restore the server's GTID
state and so could not restore from any point in the binary
log other than the very beginning.
As part of the fix for this problem, the gtid_purged system
variable (formerly named gtid_lost) is no longer read-only;
now it is possible to add GTIDs to it when gtid_executed
(formerly gtid_done) is empty. (Bug #14787808)
* Replication: Restarting replication after the first binary log
file was purged resulted in the error Got fatal error 1236
from master when reading data from binary log: 'The slave is
connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1,
but the master has purged binary logs containing GTIDs that
the slave requires.' This led GTID-based replication to fail.
* Invalid memory reads could occur for queries that selected
from a zero-length table name. (Bug #14780820)
* With LOCK TABLES in effect, CREATE TABLE IF NOT EXISTS ...
LIKE could raise an assertion. (Bug #14788976)
* An assertion was raised if ALTER TABLE was used to rename a
column to the same name as an existing column while also
reordering the renamed column using AFTER or FIRST. (Bug
* An assertion could be raised if semi-join materialization was
used to evaluate a NOT IN subquery. (Bug #14751858)
* For some continuation handler nestings, continuation could
occur at the wrong location. (Bug #14724836)
* SHOW PROCESSLIST output was not sorted in Id order. (Bug
* For UPDATE statements, EXPLAIN showed the total key length in
the key_len column rather than the length of the used key
parts. (Bug #14682438)
* Starting the server with --bind-address and then setting
host_cache_size to 0 could result in the server stopping for
certain kinds of client connections. (Bug #14689561)
* With index condition pushdown enabled, the optimizer could
produce incorrect results for derived tables. (Bug #14640176)
* The optimizer could incorrectly use a nonspatial index to
optimize spatial operations, causing an assertion to be
raised. (Bug #14600994)
* CHECK TABLE and REPAIR TABLE could crash if a MyISAM table had
a corrupt key (.MYI) file. Now the server produces an error.
(Bug #13556107, Bug #13556000)
* For dumps of the mysql database, mysqldump skipped the event
table unless the --events option was given. To skip this table
if that is desired, use the --ignore-table option instead.
(Bug #55587, Bug #11762933)
* mysqld_safe ignored the value of the UMASK environment
variable, leading to behavior different from mysqld with
respect to the access mode of created files. Now mysqld_safe
(and mysqld_multi) attempt to approximate the same behavior as
mysqld. (Bug #57406, Bug #11764559)
* LAST_INSERT_ID(expr) did not work for expr values greater than
the largest signed BIGINT value. (Bug #20964, Bug #11745891)
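The boundary involved in the LAST_INSERT_ID(expr) fix is easy to see with 64-bit packing; this is illustrative Python, not the server's implementation.

```python
import struct

SIGNED_BIGINT_MAX = 2**63 - 1       # largest signed BIGINT
value = SIGNED_BIGINT_MAX + 1       # formerly mishandled by
                                    # LAST_INSERT_ID(expr)

# The value does not fit a signed 64-bit slot...
try:
    struct.pack("<q", value)
    fits_signed = True
except struct.error:
    fits_signed = False

# ...but packs fine as unsigned 64-bit, which is the range
# expr values must be treated as occupying.
packed = struct.pack("<Q", value)
assert not fits_signed and len(packed) == 8
```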
Joerg Bruehe, MySQL Build Team, email@example.com
ORACLE Deutschland B.V. & Co. KG, Komturstrasse 18a, D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz Amtsgericht Muenchen: HRA 95603
Komplementaerin: ORACLE Deutschland Verwaltung B.V. Utrecht, Niederlande
Geschaeftsfuehrer: Alexander van der Ven, Astrid Kepper, Val Maher