MariaDB Connector/J high availability features testing

Preamble

I have already tested MySQL Connector/J in high availability scenarios using MySQL replication (master/slave). The testing did not go as expected, and I reached the conclusion that an additional product must be put in the loop to achieve what I expected.

The MariaDB Foundation also provides its own Connector/J, obviously called MariaDB Connector/J. Is it different from Oracle's? Does it work better, or worse? One feature MariaDB Connector/J lacks compared to Oracle's is dynamic reconfiguration of the replication group through JMX (when you add or remove a slave or master). It is also not 100% clear (I have not tested it) whether MariaDB Connector/J load balances queries across slaves when you have several of them; MySQL Connector/J does, and this is clearly stated in its documentation…

I have decided to test it in a typical MySQL replication scenario, where the master or my only slave could fail.

For testing I have used two virtual machines running Oracle Linux Server release 7.2 (64-bit) and MariaDB 10.1.13. The virtual servers are:

  • server3.domain.com using non routable IP address 192.168.56.103
  • server4.domain.com using non routable IP address 192.168.56.104

The Java application is running on my desktop under Eclipse with Java 1.8.0_77 and MariaDB Connector/J 1.4.0. The code is almost the same as the one used for MySQL Connector/J, with a few minor changes.

MariaDB replication setup

The first step is to set up MariaDB replication, which is slightly different from MySQL's. Please note that although both use GTIDs, the implementations are not compatible…

On both servers I create a test database, a test table and the replication account:

MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'repadmin'@'%' IDENTIFIED BY 'secure_password';
Query OK, 0 rows affected (0.00 sec)
 
MariaDB [(none)]> CREATE DATABASE replicationdb CHARACTER SET utf32 COLLATE utf32_general_ci;
Query OK, 1 row affected (0.00 sec)
 
MariaDB [(none)]> use replicationdb
Database changed
 
MariaDB [replicationdb]> CREATE TABLE test1(val int, descr varchar(50));
Query OK, 0 rows affected (0.01 sec)
 
MariaDB [replicationdb]> insert into test1 values(1,'One');
Query OK, 1 row affected (0.00 sec)

On the master server I get the current position (so as not to start from the beginning):

MariaDB [replicationdb]> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |     2631 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
 
MariaDB [replicationdb]> SELECT BINLOG_GTID_POS("mysql-bin.000001",2631);
+------------------------------------------+
| BINLOG_GTID_POS("mysql-bin.000001",2631) |
+------------------------------------------+
| 0-1-8                                    |
+------------------------------------------+
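The returned value is a MariaDB GTID in the format domain_id-server_id-sequence_number (here domain 0, server id 1, sequence number 8). A minimal sketch of decoding it — the GtidPos class is my own illustration, not part of any MariaDB API:

```java
// Decode a MariaDB GTID string of the form domain_id-server_id-sequence_number.
// Standalone illustration only; this class is not part of MariaDB Connector/J.
public class GtidPos {
  final long domainId;
  final long serverId;
  final long sequence;

  GtidPos(String gtid) {
    String[] parts = gtid.split("-");
    if (parts.length != 3) {
      throw new IllegalArgumentException("Not a MariaDB GTID: " + gtid);
    }
    domainId = Long.parseLong(parts[0]);
    serverId = Long.parseLong(parts[1]);
    sequence = Long.parseLong(parts[2]);
  }

  public static void main(String[] args) {
    GtidPos pos = new GtidPos("0-1-8");
    // prints: domain=0 server=1 seq=8
    System.out.println("domain=" + pos.domainId + " server=" + pos.serverId + " seq=" + pos.sequence);
  }
}
```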

On the slave server I set up replication with:

MariaDB [(none)]> SET GLOBAL gtid_slave_pos = "0-1-8";
Query OK, 0 rows affected (0.00 sec)
 
MariaDB [(none)]> change master to master_host='server3.domain.com', master_port=3316,master_user='repadmin', 
               -> master_password='secure_password', master_use_gtid=slave_pos;
Query OK, 0 rows affected (0.00 sec)
 
MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.00 sec)
 
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: server3.domain.com
                  Master_User: repadmin
                  Master_Port: 3316
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 2631
               Relay_Log_File: mysql01-relay-bin.000002
                Relay_Log_Pos: 640
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 2631
              Relay_Log_Space: 940
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
               Master_SSL_Crl:
           Master_SSL_Crlpath:
                   Using_Gtid: Slave_Pos
                  Gtid_IO_Pos: 0-1-8
      Replicate_Do_Domain_Ids:
  Replicate_Ignore_Domain_Ids:
                Parallel_Mode: conservative
1 row in set (0.00 sec)

And you are already done!

Let's do a small test by inserting a new row on the master server:

MariaDB [replicationdb]> insert into replicationdb.test1 values(2,'Two');
Query OK, 1 row affected (0.00 sec)
 
MariaDB [replicationdb]> select * from replicationdb.test1;
+------+-------+
| val  | descr |
+------+-------+
|    1 | One   |
|    2 | Two   |
+------+-------+
2 rows in set (0.00 sec)

On the slave server the row is replicated:

MariaDB [(none)]>  select * from replicationdb.test1;
+------+-------+
| val  | descr |
+------+-------+
|    1 | One   |
|    2 | Two   |
+------+-------+
2 rows in set (0.00 sec)

MariaDB Connector/J high availability testing

To test this I am using the same dummy application I have already used, issuing read/write and read/only transactions. Read/write transactions can be served only by the master server, while read/only transactions can fall back to the master server when the slave server is down.
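The routing I expect can be summarized by a small helper. This is purely my own illustration of the expected behavior in a one-master / one-slave setup, not the connector's actual internal logic; chooseServer and its return values are hypothetical:

```java
// Illustration of the routing behavior expected from a replication-aware
// connector with one master and one slave (not MariaDB Connector/J code).
public class RoutingSketch {
  static String chooseServer(boolean readOnly, boolean masterUp, boolean slaveUp) {
    if (!readOnly) {
      // Read/write transactions can only go to the master.
      return masterUp ? "master" : null;   // null: nobody can serve it
    }
    // Read/only transactions prefer the slave but fall back to the master.
    if (slaveUp) {
      return "slave";
    }
    return masterUp ? "master" : null;
  }

  public static void main(String[] args) {
    System.out.println(chooseServer(false, true, true));   // master
    System.out.println(chooseServer(true, true, true));    // slave
    System.out.println(chooseServer(true, true, false));   // master (slave down)
    System.out.println(chooseServer(false, false, true));  // null (master down)
  }
}
```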

To start the application all servers must be up and running, otherwise the connection does not succeed; alternatively you can change the JDBC_URL variable value to list only the surviving servers.

The different scenarios I’m planning to test are:

  • Failure of my unique slave: no impact expected, as the master can serve read/only transactions
  • Failure of the master server: read/write transactions are no longer possible, as only read/only ones can be served by the surviving slave server

The source code of my application is the following; please note I open one single connection and do not reconnect each time. The Java code loops: set the transaction to read/write and run a query, then set it to read/only and run a query. To know on which server the query is executed I simply select the server variable that returns the server name:

package jdbcdemo1;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Properties;

import org.mariadb.jdbc.Driver;

public class jdbcdemo1 {
  public static void main(String[] args) throws Exception {
    Driver driver = new Driver();
    Properties props = new Properties();
    ResultSet rs;
    String variable_value;
    Connection conn = null;
    String JDBC_URL = "jdbc:mysql:replication://address=(host=192.168.56.103)(port=3316)(type=master),"
                    + "address=(host=192.168.56.104)(port=3316)(type=slave)";

    props.put("retriesAllDown", "5");
    props.put("user", "yjaquier");
    props.put("password", "secure_password");

    System.out.println("\n------------ MariaDB Connector/J and MariaDB Replication Testing ------------\n");

    System.out.println("Trying connection...");
    try {
      conn = driver.connect(JDBC_URL, props);
    }
    catch (SQLException e) {
      System.out.println("Connection Failed!");
      System.out.println("Error cause: " + e.getCause());
      System.out.println("Error message: " + e.getMessage());
      return;
    }

    System.out.println("Connection established...");

    for (int i = 1; i <= 1000; i++) {
      System.out.println("\nQuery " + i + ": ");

      // Fake read/write SQL statement that can be performed ONLY on the master
      // server (just to see where it goes)
      System.out.println("Read Write query...");
      try {
        conn.setReadOnly(false);
      }
      catch (SQLException e) {
        System.out.println("Connection read write property set has failed...");
      }

      if (!conn.isReadOnly()) {
        try {
          rs = conn.createStatement().executeQuery("SELECT variable_value FROM information_schema.global_variables " +
                                                   "WHERE variable_name='hostname'");
          while (rs.next()) {
            variable_value = rs.getString("variable_value");
            System.out.println("variable_value : " + variable_value);
          }
        }
        catch (SQLException e) {
          System.out.println("Read/write query has failed...");
        }
      }

      // Read only statement (can also be served by the master server if all
      // slaves are down)
      System.out.println("Read Only query...");
      try {
        conn.setReadOnly(true);
      }
      catch (SQLException e) {
        System.out.println("Connection read only property set has failed...");
      }
      try {
        rs = conn.createStatement().executeQuery("SELECT variable_value FROM information_schema.global_variables " +
                                                 "WHERE variable_name='hostname'");
        while (rs.next()) {
          variable_value = rs.getString("variable_value");
          System.out.println("variable_value : " + variable_value);
        }
      }
      catch (SQLException e) {
        System.out.println("Read only query has failed...");
      }

      Thread.sleep(2000);
    }
    conn.close();
  }
}

The only MariaDB Connector/J parameter I have used is retriesAllDown; otherwise you wait far too long for the setReadOnly(false) call to fail…
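Connection parameters such as retriesAllDown can also be appended to the JDBC URL itself instead of going through the Properties object. A minimal sketch, assuming the same hosts as above (this only builds the string; it does not open a connection):

```java
// Build the replication URL with retriesAllDown passed as a URL parameter
// rather than through java.util.Properties.
public class UrlSketch {
  public static void main(String[] args) {
    String base = "jdbc:mysql:replication://address=(host=192.168.56.103)(port=3316)(type=master),"
                + "address=(host=192.168.56.104)(port=3316)(type=slave)";
    String url = base + "?retriesAllDown=5";
    System.out.println(url);
  }
}
```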

First test is a failure of my unique slave server (if you have multiple slaves that is not even a failure, as the other slaves can handle the work). This case is nicely handled by MariaDB Connector/J: when the slave fails the current query fails, but the following ones are processed by the master server (as expected). Read/write queries are unaffected:

[Screenshot: mariadbconnectorj01]

When you recover your slave server, read/only queries go to it automatically without reconnecting:

[Screenshot: mariadbconnectorj02]

Second test is a master server failure. In this situation we expect the slave server to handle read/only queries, while read/write ones cannot be processed. This is handled in my Java code by forbidding them whenever the read/write property cannot be set:

[Screenshot: mariadbconnectorj03]

Once the master server is recovered, read/write queries are again possible:

[Screenshot: mariadbconnectorj04]

Conclusion

Upfront, MariaDB Connector/J might not seem so appealing versus MySQL Connector/J, as it is (apparently) lacking many advanced features. But what I have tested shows that it does very well the basics you expect from such a Java connector when working with a massively deployed MySQL (or MariaDB) replication environment. This is not what I have seen with MySQL Connector/J on a similar architecture.

One issue I have seen is that they are not exact drop-in replacements, as MySQL Connector/J 5.1.28 does not work with MariaDB 10.1.13. I have no doubt, even if untested, that MariaDB Connector/J works with MySQL 5.6 and above… So once you have chosen which MySQL fork to use as a backend, compatibility issues may decide for you which Connector/J to implement…
