114 posts in 'All'

  1. [MariaDB] How to manage MariaDB with Puppet
  2. [Hello, World] 2014.10 Japan, Chiba, Narita (2)
  3. [Linux] Directory Tree Structure - Tree
  4. [Infra] Making Version Control Efficient - Git Overview
  5. [MySQL] MYSQL QUERY PERFORMANCE STATISTICS IN THE PERFORMANCE SCHEMA
  6. [MySQL] Optimizer Enhancements in MySQL 5.7
  7. [MySQL] XFS and EXT4 Testing Redux
  8. [MySQL] Replace Oracle RAC with MariaDB Galera Cluster?
  9. [MariaDB Install] How to Install MariaDB Server 10 by using yum (2)
  10. [RHEL-based] Install VNC Server on CentOS 7 / RHEL 7





By Anatoliy Dimitrov

Puppet is a powerful automation tool that helps administrators manage complex server setups centrally. You can use Puppet to manage MariaDB — let's see how.

With Puppet, you describe system states that you want the Puppet master server to enforce on the managed nodes. If you don't have Puppet installed and configured yet, please check the official Puppet documentation.

Before you can use Puppet to manage MariaDB, you must install a Puppet module that sets the proper repository corresponding to your operating system and version of MariaDB. For Red Hat-based distros, including CentOS, you can use the Yguenane MariaDB repository Puppet module. On your Puppet master, install the module with the command puppet module install yguenane/mariadbrepo, which puts the module files in the directory /etc/puppet/modules/mariadbrepo/.

The Yguenane module currently supports only Red Hat 5 and 6, CentOS 5 and 6, and any Fedora version that has the MariaDB repository, which, per the MariaDB 10.0 repository, means versions 19 and 20. If you need support for different versions or operating systems, you must edit the module. Its code is simple and straightforward, so you should be able to adapt it even if you don't know Ruby, the programming language behind Puppet. For example, to add support for Red Hat or CentOS 7, edit the file /etc/puppet/modules/mariadbrepo/manifests/init.pp and change the $os_ver variable. Initially, it looks like this:

$os_ver = $::operatingsystemrelease ? {
    /6.[0-9]/  => '6',
    /5.[0-9]+/ => '5',
    default    => $::operatingsystemrelease,
}

Change it to:

$os_ver = $::operatingsystemrelease ? {
    /7.[0-9]/  => '7',
    /6.[0-9]/  => '6',
    /5.[0-9]+/ => '5',
    default    => $::operatingsystemrelease,
}

You can edit other variables in the same file, such as $os, to add support for other operating systems. As long as there is an official MariaDB repository for the OS and version, you should be able to add support for it.
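
For illustration, here is a hypothetical sketch of such an extension. The actual selector in the module's init.pp may use different keys and values, so treat this as a pattern rather than the module's real code:

# Hypothetical: map Facter's operatingsystem fact to the path
# component used in the MariaDB repository URL.
$os = $::operatingsystem ? {
    'CentOS' => 'centos',
    'RedHat' => 'rhel',
    'Fedora' => 'fedora',
    default  => $::operatingsystem,
}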

MariaDB installation on the Puppet nodes

Once you have the necessary Puppet module on the Puppet master, you can install MariaDB on the Puppet nodes. Let's assume your Puppet manifests are found in the default /etc/puppet/manifests/site.pp file. The first thing you should do is distribute the MariaDB repo to the Puppet nodes that should have MariaDB installed. You can do it by adding code similar to this to site.pp, or to a separate file that you include in site.pp:

node 'host1' {
    class {'mariadbrepo':
        version => '10.0',
    }
}
node 'host2' {
    class {'mariadbrepo':
        version => '10.1',
    }
}

This tells Puppet to use the repository for MariaDB version 10.0 for host1, and the one for MariaDB 10.1 for host2. This is just an example to show that you can have different versions on different hosts; in real life it's better to have the same MariaDB version throughout your whole environment to avoid compatibility issues.

The next time the Puppet catalog is applied on the nodes, the repository should be added and the file /etc/yum.repos.d/MariaDB.repo should appear.

Next, you can define the MariaDB installation by adding a new Puppet class (a named block of Puppet code):

class mariadb {
    package { 'MariaDB-server':
        ensure => installed,
    }
    service { 'mysql':
        ensure => running,
        enable => true,
    }
}

This class instructs the nodes to install the package MariaDB-server. On Red Hat or CentOS nodes it will have the same effect as running the command yum install MariaDB-server. Naturally, it will take care of all the dependencies for MariaDB-server.

After the package directive comes the service one. Notice that I use the service name mysql; MariaDB keeps this name for compatibility with MySQL, which it can replace. The service directive ensures that the MariaDB service (mysql) is running, which also means it is started for the first time right after installation. The service is also set to enabled, meaning it will start automatically during the OS boot process.

The only thing left is to include this class in the node's declaration. For example, let's extend an example host1 declaration like this:

node 'host1' {
    class {'mariadbrepo':
        version => '10.0',
    }
    include mariadb
}

After the next Puppet catalog run on the node host1, MariaDB should be installed and started.
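
If you don't want to wait for the regular agent interval, you can trigger a run manually on the node and verify the result. A quick sketch (the systemctl line assumes a systemd-based OS such as CentOS 7):

# Trigger an immediate Puppet run on the node
puppet agent --test

# Verify that the package landed and the service is up
rpm -q MariaDB-server
systemctl status mysql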

MariaDB configuration on the Puppet nodes

To manage the MariaDB environment using Puppet, you will probably need to edit some configuration files. For example, if you want to set custom configuration values for the MariaDB server, you will have to edit the file /etc/my.cnf.d/server.cnf on each of the Puppet nodes. You can use Puppet itself to ensure that the configuration is centrally managed and consistent over time and across multiple nodes.

To send a custom configuration file such as /etc/my.cnf.d/server.cnf to your Puppet nodes, create the following resource declaration as part of your mariadb class in the site.pp file:

file { '/etc/my.cnf.d/server.cnf':
    ensure => file,
    mode   => '0644',
    source => 'puppet:///conf_files/mariadb/server.cnf',
}

This code specifies the permissions of the file (0644) and its source location. Translated, the location puppet:///conf_files/mariadb/server.cnf means /etc/puppet/conf_files/mariadb/server.cnf on the Puppet master.

The above source directive assumes you have configured the Puppet fileserver already. The configuration file /etc/puppet/fileserver.conf should contain the following code:

[conf_files]
   path /etc/puppet/conf_files
   allow *
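
Placing the served file on the master is then a matter of creating that directory. A minimal sketch (the source path is a placeholder):

# On the Puppet master
mkdir -p /etc/puppet/conf_files/mariadb
cp /path/to/your/server.cnf /etc/puppet/conf_files/mariadb/server.cnf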

Furthermore, you should change the service declaration so that it reacts to changes in the custom MariaDB server configuration file. For this purpose you can use the subscribe metaparameter. Here is how the complete mariadb class should look:

class mariadb {
    package {'MariaDB-server':
        ensure => installed,
    }
    service { 'mysql':
        ensure => running,
        enable => true,
        subscribe => File['/etc/my.cnf.d/server.cnf'],
    }
    file { '/etc/my.cnf.d/server.cnf':
        ensure => file,
        mode   => '0644',
        source => 'puppet:///conf_files/mariadb/server.cnf',
    }
}

When you use the subscribe parameter, the MariaDB server will be restarted whenever you make changes to the server configuration file, and thus your changes will take effect immediately.

As you can see, it's easy to install and configure MariaDB with Puppet. You can install different versions and manage configuration files centrally with just a little code and effort.


About the Author

Anatoliy Dimitrov

Anatoliy Dimitrov is an open source enthusiast with substantial professional experience in databases and web/middleware technologies. He is as interested in technical writing and documentation as in practical work on complex IT projects. His favourite databases are MariaDB (sometimes MySQL) and PostgreSQL. He is currently completing his master's degree in IT and aims for a PhD in Bioinformatics at the University of Sofia in his home town.




Fig.01: Narita Airport, Japan


Fig.02: Downtown near Makuhari Station, Chiba


Fig.03: Japan's vending machine culture was visible everywhere we went.


Fig.04: 2014 Japan IT Week, the purpose of the business trip


Fig.05: The spring edition of IT Week is larger in scale, but even this October event offered a great deal to learn.


Fig.06: Our reservation at the Makuhari APA Hotel fell through, so we stayed at the Green Tower Hotel.

View of downtown from the Green Tower Hotel; the Aeon Mall sign is also visible.


Fig.07: Aeon Mall, where we went shopping on the second day

Aeon Mall is the largest shopping center in Chiba Prefecture; the three large buildings visible in the back of the photo are all part of Aeon Mall.


Fig.08: QVC Marine Field, home stadium of the Chiba Lotte Marines, near the hotel


Exhibition Overview (2014 Japan IT Week, Autumn)

 1) Dates: October 29-31, 2014

 2) Venue: Makuhari Messe, Chiba Prefecture

 3) Organizer: Reed Exhibitions Japan


Itinerary

 Day 1: Departure (Incheon -> Narita), hotel check-in and lunch, exhibition visit

 Day 2: Exhibition visit, shopping at Aeon Mall

 Day 3: Exhibition visit, vendor meetings, return flight (Narita -> Incheon)

              

This was my first overseas business trip since joining the company, and in fact my first time abroad at all. I had worried a lot about radiation, but Japan was completely different from my preconceptions: people showed a high level of civic-mindedness and were always kind, which left a very good impression. While attending IT Week we met with Korean companies that had expanded into Japan, as well as with several vendors interested in partnering with our company. Not knowing Japanese, I unfortunately spent most of the time trailing behind my team lead, but it made me resolve to study at least English much harder for occasions like this.



Attachments:
- 마쿠라히주변_영어.pdf (Makuhari area map, English)
- 마쿠하리주변_한국어.pdf (Makuhari area map, Korean)
- 마쿠하리주변_일본어.pdf (Makuhari area map, Japanese)





Tree Package: Viewing a Directory Structure as a Tree

If you want to view a directory hierarchy as a tree, a stock Linux install offers no such command, but once you install the tree package you can inspect any directory in tree form.


Installing and Testing the Tree Package

1) RHEL / CentOS / Fedora Linux

The tree command is not installed by default.

[root@localhost ~]# tree
bash: tree: command not found...

## tree package install
[root@localhost ~]# yum install tree
Loaded plugins: fastestmirror, langpacks
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
base                                                                                                                                                                                                                                     | 3.6 kB  00:00:00     
extras                                                                                                                                                                                                                                   | 3.4 kB  00:00:00     
mariadb                                                                                                                                                                                                                                  | 1.9 kB  00:00:00     
http://yum.puppetlabs.com/el/7/dependencies/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: yum.puppetlabs.com; Name or service not known"
Trying other mirror.
puppetlabs-devel                                                                                                                                                                                                                         | 2.5 kB  00:00:00     
puppetlabs-products                                                                                                                                                                                                                      | 2.5 kB  00:00:00     
updates                                                                                                                                                                                                                                  | 3.4 kB  00:00:00     
(1/3): extras/7/x86_64/primary_db                                                                                                                                                                                                        |  33 kB  00:00:00     
(2/3): updates/7/x86_64/primary_db                                                                                                                                                                                                       | 4.2 MB  00:00:00     
puppetlabs-products/x86_64/pri FAILED                                                                             99% [====================================================================================================== ]  98 kB/s | 4.2 MB  00:00:00 ETA 
http://yum.puppetlabs.com/el/7/products/x86_64/repodata/c16896632f46758106aca64ff9e1c3f6b2e0cc1b-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum======================================================== ]  98 kB/s | 4.2 MB  00:00:00 ETA 
Trying other mirror.
mariadb/primary_db                                                                                                                                                                                                                       |  20 kB  00:00:00     
Determining fastest mirrors
 * base: centos.mirror.cdnetworks.com
 * extras: centos.mirror.cdnetworks.com
 * updates: centos.mirror.cdnetworks.com
Resolving Dependencies
--> Running transaction check
---> Package tree.x86_64 0:1.6.0-10.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================
 Package                                                    Arch                                                         Version                                                               Repository                                                  Size
================================================================================================================================================================================================================================================================
Installing:
 tree                                                       x86_64                                                       1.6.0-10.el7                                                          base                                                        46 k

Transaction Summary
================================================================================================================================================================================================================================================================
Install  1 Package

Total download size: 46 k
Installed size: 87 k
Is this ok [y/d/N]: y
Downloading packages:
tree-1.6.0-10.el7.x86_64.rpm                                                                                                                                                                                                             |  46 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : tree-1.6.0-10.el7.x86_64                                                                                                                                                                                                                     1/1 
  Verifying  : tree-1.6.0-10.el7.x86_64                                                                                                                                                                                                                     1/1 

Installed:
  tree.x86_64 0:1.6.0-10.el7                                                                                                                                                                                                                                    

Complete!

2) Debian / Mint / Ubuntu Linux

[root@localhost ~]# sudo apt-get install tree
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  tree
0 upgraded, 1 newly installed, 0 to remove and 381 not upgraded.
Need to get 36.7 kB of archives.
After this operation, 112 kB of additional disk space will be used.
Get:1 http://kr.archive.ubuntu.com/ubuntu/ trusty/universe tree i386 1.6.0-1 [36.7 kB]
Fetched 36.7 kB in 0s (102 kB/s)
Selecting previously unselected package tree.
(Reading database ... 169637 files and directories currently installed.)
Preparing to unpack .../archives/tree_1.6.0-1_i386.deb ...
Unpacking tree (1.6.0-1) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up tree (1.6.0-1) ...

3) Test

[root@localhost ~]# tree -L 2 /app
/app
├── batch
├── test_demo
│   ├── webapps
│   └── webhome
├── test_mariadb
│   ├── webapps
│   └── webhome
├── test_monitoring
│   ├── webapps
│   └── webhome
└── tomcat
    ├── bin
    ├── conf
    ├── lib
    ├── LICENSE
    ├── logs
    ├── NOTICE
    ├── RELEASE-NOTES
    ├── RUNNING.txt
    ├── temp
    ├── webapps
    └── work

4) Syntax

tree
tree /path/to/directory
tree [options]
tree [options] /path/to/directory
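
Beyond the basic syntax, a few commonly used flags (all standard options of the tree command):

tree -d /app      # list directories only
tree -a /app      # include hidden files
tree -L 2 /app    # limit the depth to 2 levels
tree -f /app      # print the full path prefix for each entry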







DVCS (Distributed Version Control System)

Whenever you modify source code or images and want to preserve each intermediate state, or need to track each meaningful change (feature improvements, bug fixes) for customers, the system that manages this is called configuration management or version control.

A version control system lets you roll back your software or project to a previous state and review the history of changes over time. Representative version control tools include SVN (Subversion), Mercurial, Bazaar, and Git.


Version control approaches can be divided into local, centralized, and distributed systems; Git, the subject of this post, is a distributed version control system (DVCS). In a DVCS, instead of checking out only the latest snapshot of the files, clients mirror the repository in its entirety. If the server fails, any of these local copies can be copied back to another server to restore it. In other words, every checkout (clone) is effectively a full backup of the source.


Because a full repository is created locally, Git is dramatically faster than a traditional version control system like SVN, and you can experiment freely in your local repository.


SVN vs Git

When you first encounter Git, you will notice that it differs from SVN in many ways.


Repository

SVN keeps its repository on an external server, whereas Git also maintains a local repository, so tests can run against the local copy and complete far faster than with SVN. Most version control commands are bound by network speed, but because Git keeps the project's entire history in the local repository, its commands execute very quickly.


Handling File Changes

Where traditional VCS systems store data as a sequence of file-based changes over time, Git stores snapshots: whenever a file changes, Git records a new snapshot of the project at that point in time. If a file has not changed since the previous version, Git stores only a link to the identical file it already has, rather than storing the file again, for better performance.


Fig.01: A traditional delta-based VCS (http://git-scm.com/book/ko/v1)


Fig.02: Git's snapshot-based system


Integrity

Before storing any data, Git computes a checksum (hash) and then uses that checksum to manage the data; no file or directory can change without Git knowing. Git builds these checksums with SHA-1, producing 40-character hexadecimal strings computed from the contents of a file or the structure of a directory.

Git identifies everything by its hash; in fact, Git stores files not under their names but under the hash of their contents.
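
You can see this for yourself with git hash-object, which prints the SHA-1 Git would use for a given piece of content:

$ echo 'test content' | git hash-object --stdin
d670460b4b4aece5915caf5c68d12f560a9fe3e4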


The Three States of Git

Git manages files in three states: Committed, Modified, and Staged.

Committed

The data is safely stored in your local database.

Modified

The file has been changed but not yet committed to the local database.

Staged

The modified file has been marked to go into the next commit.


Fig.03: Working directory, staging area, and Git directory


The Git directory is where Git stores the project's metadata and its object database. It is the core of Git; when you clone a repository from a remote, it is this Git directory that gets copied.

The working directory is a single checkout of one particular version of the project. Its files are pulled out of the compressed database in the Git directory and placed on disk for you to work with.

The staging area lives in the Git directory. It is a simple file that records information about what will go into your next commit.


Staging Area

Git has a staging area, a step between your working directory and the local repository. It is also called the index.

You modify files in the working directory, stage them into the staging area to compose the snapshot you want to commit, and then commit, which stores that snapshot permanently in the Git directory.

In SVN, every changed file unconditionally becomes part of the commit; in Git, you can commit selectively by adding to the staging area only the files you want in the commit.
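
The three states map directly onto everyday commands; a minimal example (the file name is hypothetical):

$ git status                      # see which files are modified or staged
$ git add README.md               # stage just this file
$ git commit -m 'Update README'   # store the staged snapshot in the Git directory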







We’ve recently added the ability to monitor MySQL query performance statistics from MySQL’s PERFORMANCE_SCHEMA, and there were a number of lessons learned. There are definitely right and wrong ways to do it. If you are looking to the P_S tables for monitoring MySQL query performance, this blog post might save you some time and mistakes.

What Is The Performance Schema?

First, a quick introduction. The Performance Schema includes a set of tables that give information on how statements are performing. Most of the P_S tables follow a set of predictable conventions: there’s a set of tables with a limited set of full-granularity current and/or historical data, which is aggregated into tables that accumulate over time. In the case of statements, there’s a table of current statements, which feeds into a statement history, that accumulates into statement summary statistics. The tables are named as follows:

| events_statements_current                          |
| events_statements_history                          |
| events_statements_history_long                     |
| events_statements_summary_by_account_by_event_name |
| events_statements_summary_by_digest                |
| events_statements_summary_by_host_by_event_name    |
| events_statements_summary_by_thread_by_event_name  |
| events_statements_summary_by_user_by_event_name    |
| events_statements_summary_global_by_event_name     |

The tables most people will care about are events_statements_current, which is essentially a replacement for SHOW FULL PROCESSLIST, and events_statements_summary_by_digest, which is statistics about classes of queries over time. The rest of the tables are pretty much what they look like – summaries by user, etc.

These tables were introduced in MySQL 5.6, and not all of them will be enabled by default. There’s a performance overhead to enable them, but this should be small relative to the performance improvements you can gain from using them.

We prefer our technique of decoding network traffic server-side to measure query performance, for several reasons, but the statement digest table is the next-best thing in cases such as Amazon RDS where that’s not possible. It gives us enough data to present a view of Top Queries as shown below.

[Screenshot: Top Queries view]

Now let’s dig into specifics about these tables and how to use them.

Monitoring MySQL Performance - An Overview

The general idea, for most MySQL performance monitoring tools, is to read from events_statements_summary_by_digest at intervals and subtract each sample from the next, to get rates over time. As you can see in the below sample, there are a lot of columns with various statistics about each family of queries in the table:

mysql> select * from events_statements_summary_by_digest  limit 1\G
*************************** 1. row ***************************
                SCHEMA_NAME: customers
                     DIGEST: 4625121e18403967975fa86e817d78bf
                DIGEST_TEXT: SELECT @ @ max_allowed_packet 
                 COUNT_STAR: 36254
             SUM_TIMER_WAIT: 2683789829000
             MIN_TIMER_WAIT: 45079000
             AVG_TIMER_WAIT: 74027000
             MAX_TIMER_WAIT: 1445326000
              SUM_LOCK_TIME: 0
                 SUM_ERRORS: 0
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 0
              SUM_ROWS_SENT: 36254
          SUM_ROWS_EXAMINED: 0
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 0
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 0
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 0
     SUM_NO_GOOD_INDEX_USED: 0
                 FIRST_SEEN: 2014-09-12 16:04:38
                  LAST_SEEN: 2014-10-31 08:26:07

These columns are mostly counters that accumulate over time. The COUNT_STAR column, for example, shows the number of times the statement has been executed. The SUM_ columns are just what they look like.
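
A monitoring agent might therefore fetch the counter columns at each interval and compute per-second rates by subtracting the previous sample in application code. A minimal sketch (the column selection is illustrative):

SELECT SCHEMA_NAME, DIGEST, COUNT_STAR, SUM_TIMER_WAIT, SUM_ROWS_EXAMINED
FROM performance_schema.events_statements_summary_by_digest;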

Enabling And Sizing The Table

The table needs to be enabled as usual with the Performance Schema by using its setup table, setup_consumers. That table contains a row for each P_S consumer to be enabled. Other setup tables and server variables control some of the configuration as well, though the defaults work OK out of the box most of the time.

The table can also be sized, in number of rows. By default it is 10,000 rows (although I think somewhere I saw a documentation page that said 200).
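
A sketch of both steps, using table and variable names as documented for MySQL 5.6 (the size variable is read-only at runtime, so it has to be set in my.cnf before startup):

-- Enable the statement instrumentation consumers
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'events_statements%';

-- Check how many digest rows the table can hold
SHOW GLOBAL VARIABLES LIKE 'performance_schema_digests_size';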

Limitations Of The Table

There are a couple of limitations you should be aware of.

  1. The statement digest table does not record anything about statements that are prepared. It only captures statements that are executed by sending the full SQL to the server as text. If you use prepared statements, the table probably does not capture your server’s performance accurately. The drivers for many programming languages use prepared statements by default, so this could be a real issue. (If this is a problem for you, you might like to know that VividCortex captures prepared statements from network traffic, including samples).
  2. The table is fixed-size, and resizing it requires a server restart.
  3. Some things aren’t captured in full granularity. For example, when we’re capturing MySQL query performance data from network traffic, we can measure specific error codes. There’s a SUM_ERRORS column in the table, but you can’t see what the error codes and messages were.

Resetting The Table (Or Not)

The table can be reset with a TRUNCATE to start afresh, but generally shouldn’t be. Why would you want to do this? There might be a few reasons.

First, the table is fixed-size, and if the table isn’t large enough to hold all of the distinct types of queries your server runs, you’ll get a catch-all row with a NULL digest and schema. This represents statements that aren’t being tracked separately, and might be important for some reason. A TRUNCATE will empty the table if this is the case.

Second, statistics accumulate over time, so columns such as first-seen and last-seen dates may eventually end up being useless to you. The min, max, and average timer waits will not be very helpful over long periods of time, either.

Finally, you might want to reduce the number of rows it contains, so that occasional queries that are never purged don’t introduce performance overhead when reading the table.

There are tools that do this completely wrong, though. Some of them empty out the table every time they read from it. This is the worst behavior because these tables are not session-specific. They are global, and a TRUNCATE will affect everyone who’s looking at them. It might be kind of rude to constantly throw away the data your colleague (or another tool) is looking at.

The other problem with this is that a tool that reads from the table, then truncates it, is subject to race conditions. Statements that complete between these actions will be discarded and never seen. Of course, there’s no way to avoid this, except by just not doing it, or not doing it often.

I would suggest resetting this table only manually and only when needed, or perhaps at infrequent intervals such as once a day or once an hour, from a scheduled task.

Accumulating Statements Correctly

The table’s primary key isn’t defined in the schema, but there’s a unique set of columns. This is not, contrary to what I’ve seen some software assume, the DIGESTcolumn. There is one row per digest, per schema. The combination of schema and digest is unique.

This means that if you’re looking for all information about a single class of queries regardless of schema, you need to aggregate together all of the rows with the sameDIGEST.

One of the implications of the uniqueness being defined by schema and digest together is that servers that have a large number of schemas and a large number of digests will need a really huge number of rows to keep track of all of the statements. At VividCortex, we have customers whose servers have literally millions or tens of millions of distinct families of queries running on a regular basis. Multiply this by a large number of schemas, and you have no hope of keeping track of them with the P_S tables. This is not a problem for our default collection mechanism, though: by default we capture MySQL query performance statistics by decoding the server’s network traffic. This handles high-cardinality scenarios without trouble.

Don’t Run GROUP BY On The Table Frequently

There are several ways you can cause performance impact to the server by reading from the P_S tables.

One is by using complex queries on these tables. They are in-memory, but they’re not indexed. For example, if you run a GROUP BY to aggregate rows together by digest, you’ll cause trouble. You probably shouldn’t do this, at least not frequently.

Recall that VividCortex measures everything at 1-second resolution, giving you highly detailed performance statistics about your entire system. The statement statistics are no different; we have per-second statement (query) statistics. Reading from the P_S table once per second with a GROUP BY clause has too much performance impact on the server. It is less costly to read the entire table and accumulate the statistics in application code, in our tests.

Don’t Re-Fetch Data

Another way to cause problems is to fetch the DIGEST_TEXT column with every query. This column isn’t enormous, because it’s limited to 1kb in length, but it’s still large enough that you should not repeatedly fetch it. Instead, when you see an unknown digest, you should query for it only then. This may introduce a lot of complexity into your application code, but this is what needs to be done.
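
A sketch of the lookup for a previously unseen digest, using the schema and digest values from the sample output earlier:

SELECT DIGEST_TEXT
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME = 'customers'
  AND DIGEST = '4625121e18403967975fa86e817d78bf';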

Handle Results Smartly

We care a lot about performance at VividCortex, obviously. We’ve built our approach to performance management with minimal overhead in mind. This is why we don’t do things like enable your server’s slow query log or poll commands that can block the server. But in addition to avoiding overhead on the server, we have to make our agent’s performance good, too.

This is why we also do things you might consider to be extreme. For example, MySQL returns all of the numbers from the Performance Schema in textual format over the network, and these then have to be converted into numbers by the client. Even ASCII-to-number conversion has a cost we can measure, so we don't do it unless the COUNT_STAR column has changed since the last time we saw a row. This makes a material difference in the agent's CPU consumption.

Capturing Sample Statements

Capturing samples of queries is very helpful. Aggregate statistics about groups of queries aren’t revealing enough; you need to be able to look at specific instances of queries to EXPLAIN them and so on.

To get samples, you’ll need to look at the current or historical statement tables. Not all of these are enabled by default, though.

Some tools LEFT JOIN against the statement history tables to get samples of individual query executions. This obviously should not be done at 1-second frequency. Even if it’s done infrequently, it’s not really a great idea. When you have fine-detailed performance instrumentation you can really see small spikes in server performance, and an occasionally intrusive query can potentially starve high-frequency fast-running queries of resources.

VividCortex’s approach to this, by the way, is to collect samples probabilistically, which is different from the usual practice of trying to find a “worst” sample. Worst-sample is okay in some ways, but it is not representative, so it doesn’t help you find out much about a broad spectrum of queries and their execution. Here’s a screenshot of what our sampling approach yields, which is quite different from the picture you’ll get from worst-sample tactics:

[Screenshot: probabilistically collected query samples]

Conclusions

Although we prefer to be able to capture and decode network traffic to see the full detail about what’s happening inside the server, in cases where that’s not possible, the Performance Schema in MySQL 5.6 and greater is a good alternative. There are just a few things one should take care to do, at least at 1-second resolution as we do at VividCortex. And there are a few common mistakes you can stumble over that will either be bad behavior or might make your results just plain wrong.

If you have suggestions, comments, or questions, please leave them below!






The MySQL optimizer is getting better. MySQL 5.6 introduced:

  • File sort optimizations with small limit
  • Index Condition Pushdown
  • Batched Key Access and Multi Range Read
  • Postponed Materialization
  • Improved Subquery execution
  • EXPLAIN for Insert, Update, and Delete
  • Optimizer Traces
  • Structured EXPLAIN in JSON format

This was in addition to the InnoDB storage engine now offering improved statistics collection, leading to more stable query plans.

In Evgeny Potemkin's session at MySQL Connect titled "MySQL's EXPLAIN Command New Features", two new features for 5.7 were announced. They are both incredibly useful, so I wanted to write a little about them.

EXPLAIN FOR CONNECTION

Normally with EXPLAIN, what you would be doing is finding the execution plan of a query you are intending to run, and then interpreting the output how you see fit.

What MySQL 5.7 will do is give you the ability to see the execution plan of a query running in another connection, i.e.

EXPLAIN FORMAT=JSON FOR CONNECTION 2;
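
The number after FOR CONNECTION is the connection id of the session running the query; you can look it up first with a standard command:

SHOW PROCESSLIST;   -- the Id column is the connection number to use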

Why it's useful:
  • Plans can change depending on input parameters. i.e. WHERE mydate BETWEEN '2013-01-01' and '2013-01-02' may use an index, but WHERE mydate BETWEEN '2001-01-01' and '2013-10-17' may not.
  • Plans can change as data changes.
  • Plans can also change depending on the context of a transaction, with InnoDB offering multi-version concurrency control.
  • Optimizer statistics can change, and it's not impossible that the reason for the executing query being slow has something to do with it. It's great to have conclusive proof and be able to rule this out.

Execution cost in EXPLAIN

MySQL uses cost based optimization to pick the best query execution plan when there are multiple choices available. It is very similar to how a GPS navigator adds up estimated time and picks the best route to a destination.

What this feature does is expose the cost as a numeric value when running EXPLAIN FORMAT=JSON. To take an example using the world sample database:

mysql [localhost] {msandbox} (world) > EXPLAIN FORMAT=JSON SELECT City.* 
FROM City INNER JOIN Country ON City.countrycode=Country.code 
ORDER BY City.NAME ASC LIMIT 100\G
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "4786.00"
    },
    "ordering_operation": {
      "using_temporary_table": true,
      "using_filesort": true,
      "cost_info": {
        "sort_cost": "2151.00"
      },
      "nested_loop": [
        {
          "table": {
            "table_name": "country",
            "access_type": "index",
            "possible_keys": [
              "PRIMARY"
            ],
            "key": "PRIMARY",
            "used_key_parts": [
              "Code"
            ],
            "key_length": "3",
            "rows_examined_per_scan": 239,
            "rows_produced_per_join": 239,
            "filtered": 100,
            "using_index": true,
            "cost_info": {
              "read_cost": "6.00",
              "eval_cost": "47.80",
              "prefix_cost": "53.80",
              "data_read_per_join": "61K"
            },
            "used_columns": [
              "Code"
            ]
          }
        },
        {
          "table": {
            "table_name": "City",
            "access_type": "ref",
            "possible_keys": [
              "CountryCode"
            ],
            "key": "CountryCode",
            "used_key_parts": [
              "CountryCode"
            ],
            "key_length": "3",
            "ref": [
              "world.country.Code"
            ],
            "rows_examined_per_scan": 9,
            "rows_produced_per_join": 2151,
            "filtered": 100,
            "cost_info": {
              "read_cost": "2151.00",
              "eval_cost": "430.20",
              "prefix_cost": "2635.00",
              "data_read_per_join": "151K"
            },
            "used_columns": [
              "ID",
              "Name",
              "CountryCode",
              "District",
              "Population"
            ]
          }
        }
      ]
    }
  }
}

Why it's useful:

  • This exposes more transparency into optimizer decisions. DBAs can better understand what part of a query is considered expensive, and try to optimize. I think this is important, because I have heard a lot of DBAs make blanket recommendations like "joins are bad" or "sorting is bad", but there needs to be context on how much data needs to be sorted. It makes us all speak the same language: estimated cost.
  • Cost refinement is an ongoing effort. Alongside the introduction of new fast SSD storage, MySQL keeps adding new optimizations (such as index condition pushdown). Not all of these optimizations will be the best choice every time, and MySQL should ideally be able to make the right choice in every situation.







In the post concluding my earlier testing, I declared EXT4 the winner over XFS for my scenario. My coworker, @keyurdg, was unwilling to let XFS lose out and made a few observations:

  • XFS wasn’t *really* being formatted optimally for the RAID stripe size (see the sketch after this list).
  • XFS wasn’t being mounted with the inode64 option, which means that all of the inodes are kept in the first 2TB. (Side note: inode64 is the default in newer kernels, but not on CentOS 6’s 2.6.32.)
  • Single-threaded testing isn’t entirely accurate, because although replication is single-threaded, the writes are collected in InnoDB and then flushed to disk by multiple threads governed by innodb_write_io_threads.
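
For reference, a hedged sketch of addressing the first two points; the su/sw values and the device and mount paths below are placeholders that must match your actual RAID chunk size, disk count, and layout:

# Format aligned to the RAID stripe: su = stripe unit (chunk size),
# sw = stripe width (number of data-bearing disks)
mkfs.xfs -d su=256k,sw=10 /dev/sdb1

# Mount with inode64 so inodes are not pinned to the first 2TB
mount -o noatime,nodiratime,nobarrier,inode64 /dev/sdb1 /data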

Armed with new data, I have – for real – the last round of testing.

To keep things a bit simpler, I will be comparing each file system on 2TB and 27TB, with 4 threads, which matches the default value for innodb_write_io_threads in MySQL 5.5.

FS   | RAID | Size | Mount Options                        | Transfer/s   | Requests/s | Avg/Request | 95%/Request
xfs  | 10   | 2T   | noatime,nodiratime,nobarrier,inode64 | 62.588Mb/sec | 4005.66    | 0.88ms      | 0.03ms
ext4 | 10   | 2T   | noatime,nodiratime,nobarrier         | 58.667Mb/sec | 3754.66    | 0.87ms      | 0.19ms

FS   | RAID | Size | Mount Options                        | Transfer/s   | Requests/s | Avg/Request | 95%/Request
xfs  | 10   | 27T  | noatime,nodiratime,nobarrier,inode64 | 64.47Mb/sec  | 4126.06    | 0.84ms      | 0.02ms
ext4 | 10   | 27T  | noatime,nodiratime,nobarrier         | 49.379Mb/sec | 3160.26    | 1.06ms      | 0.24ms

XFS finally wins out clearly over EXT4. XFS being dramatically slower on 27T earlier really shows how much worse inode32 performs than inode64, and explains why XFS looked that much better on 2T. Fixing the formatting options pushed XFS over the top easily.

All that’s left to do is setup multiple instances until replication can’t keep up anymore.






By Erkan Yanar

If you want to avoid downtime in your business, High Availability (HA) is a strong requirement, which, by definition, makes it possible to access your data all the time without losing (any) data. In this blog we compare two alternatives: Oracle RAC and MariaDB Galera Cluster.

There are several options for implementing High Availability. Oracle RAC is a popular and proven HA solution. HA can also be enabled for your data and systems with load balancers that make it possible to always access your data. MariaDB Galera Cluster provides similar functionality using synchronous multi-master Galera replication. It is also easier to build and proves to be more cost-effective. Being open source, you may have to pay for support, but not for running the system.

Next, the designs of Oracle RAC and MariaDB Galera Cluster are going to be compared, so you can make up your mind on your own.

Oracle RAC

With RAC, Oracle instances run on separate nodes, while the data is located on shared storage. All instances access the same files.

To prevent conflicts, the instances must agree on which instance is actually working on a block of data. If a node wants to change a row, it must get exclusive access to that block and store it in its cache. It therefore asks the other nodes whether they have the block. If no other node does, it gets the block from storage.

Even for read access, all the nodes need to communicate in the same way as they do for writes. When the block is modified, the requesting nodes get a consistent-read version of the block (which they are not allowed to modify). This adds latency due to internode communication: there will be read and write traffic every time a node does not have the block.

The need for communication between the nodes on every access to a table adds overhead. On the other hand, because all blocks are acquired in advance, locks that appear local to a node, e.g. for SELECT FOR UPDATE, are in fact cluster-wide locks.

The advantage of RAC is that losing an Oracle node does not harm the service at all. The other nodes will keep providing data (HA of the access). Therefore, you can shut down a node to perform maintenance tasks such as upgrading hardware or software, while reducing unexpected downtime. However, the shared storage - responsible for the data - is a potential single point of failure.

On Oracle RAC distributing read or write access is not optimal because latency is added by additional internode round trips. The best results occur when the application only accesses a fixed part of the data per node, so that no blocks have to be moved around, but it makes the setup more complicated.

MariaDB Galera Cluster

In contrast to Oracle RAC, MariaDB Galera Cluster is a high availability setup with shared-nothing architecture. Instead of having one shared storage (SAN or NAS), every cluster member has its own copy of all the data, thus eliminating the single point of failure.

MariaDB Galera Cluster takes care of syncing data, even to new nodes. This makes managing the cluster easy: adding an empty node to the cluster is sufficient, and MariaDB Galera Cluster will provide all the data to the new node.
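
As a minimal sketch, the Galera-related settings on a joining node look roughly like this; the provider path and node addresses are placeholders, while the option names are the standard wsrep settings:

# server.cnf -- Galera essentials (addresses and paths are examples)
[mysqld]
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_on                 = ON
wsrep_provider           = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name       = "my_cluster"
wsrep_cluster_address    = "gcomm://node1,node2,node3"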

Unlike Oracle RAC, accessing a node to read data does not result in internode communication. Instead, communication (and so latency) happens at the time transactions are committed. This is faster than the Oracle RAC approach of acquiring all blocks in advance, but this also means conflicts are found at the time a transaction is committed.

These conflicts are detected by internode communication at commit time. That is why the same data should not be accessed (at least not at the same time) on different nodes, as this increases the chance of conflicts. Accessing the data on different nodes one after another causes no such problem, whereas in Oracle RAC the blocks would still have to be copied.

This means that a SELECT FOR UPDATE statement can fail at commit time, because it locks the data locally but not cluster-wide; conflicts with transactions on other nodes are only discovered when the transaction commits. This is slightly different from Oracle RAC, where accessing the data on another node at any later time does move the blocks.

While Oracle RAC has a lot of latency moving data blocks into the cache of every node, MariaDB Galera Cluster has an increased likelihood of failing commits.

Like Oracle RAC, single nodes in a MariaDB Galera Cluster can be taken down for maintenance without stopping the cluster. When a node rejoins the cluster, it automatically gets the missing transactions via Incremental State Transfer (IST), or it may sync all data using State Snapshot Transfer (SST). If the missing transactions are still in a node's local (configurable) cache, IST is used; otherwise SST is used.

One drawback of the current Galera version is that Data Definition Language (DDL) statements (CREATE, ALTER, DROP) run synchronously on the cluster, so the entire cluster stalls until a DDL statement finishes. That is why Magento installations running the default configuration do not scale at all on MariaDB Galera Cluster. In general, tools like pt-online-schema-change bypass this limitation, and eliminating it is on the development roadmap.

In comparison

Oracle RAC and MariaDB Galera Cluster provide similar functionality using different designs. Each eliminates maintenance downtime for many tasks and thus gives you more freedom to run your applications.

In general, Oracle RAC incurs much more latency because of internode communication (including moving all requested data blocks) for both read and write access. In MariaDB Galera Cluster, only the changed datasets are sent around, and only at commit time.

Despite the obvious similarities, the two databases have quite different architectures. Oracle RAC uses shared storage, while MariaDB Galera Cluster uses a shared-nothing architecture, which is less expensive. Oracle RAC's shared storage is quite expensive; the author has typically seen EMC or NetApp used, because as the single point of failure it has to be something reliable.

Data in MariaDB Galera Cluster is replicated to all the nodes, which makes it easy to run the cluster spread over different regions. Consequently, your data will be safe even if your datacenter burns down. To get this level of redundancy with Oracle RAC you need matching shared storage, e.g. a NetApp MetroCluster. Besides adding cost, NetApp MetroCluster requires a network with a round-trip latency of less than 10ms, while MariaDB Galera Cluster runs even in cloud environments spread across regions.

With Oracle RAC there are two inherent sources of latency: accessing the shared storage, and internode communication for read and write access. In MariaDB Galera Cluster, latency is incurred on every COMMIT, when internode communication checks and distributes the data to be committed.

Of course MariaDB Galera Cluster is no one-to-one replacement for Oracle RAC. But if your application runs with either Oracle or MySQL/MariaDB, MariaDB Galera Cluster is more than an alternative.


About the Author

Erkan Yanar

Erkan Yanar is an independent consultant with a strong focus on MySQL, Docker/LXC and OpenStack. He loves to give presentations and also writes for trade magazines.









How to Install MariaDB Server 10 by using yum

Server Environment

Virtual Machine

VMware Workstation 10.0.0

OS

CentOS 7.0

Devices

Memory : 1 GB

Processors : 1

Hard Disk(SCSI) : 10 GB

Network Adapter : NAT

DB Link: Download MariaDB


This post covers how to install MariaDB Server 10 on CentOS 7 using the yum package manager.


Create MariaDB yum repo file

Enter the MariaDB repository information appropriate for your OS and database version.

## Create a new /etc/yum.repos.d/MariaDB.repo file.
[root@localhost ~]# vi /etc/yum.repos.d/MariaDB.repo

## Copy the contents below and paste them in.
--
# MariaDB 10.0 CentOS repository list - created 2014-10-16 08:00 UTC
# http://mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
--


Importing MariaDB Signing Key

## Import the MariaDB signing key.
[root@localhost ~]# rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB


Install MariaDB Server and Client

## Install MariaDB Server and Client using yum.
[root@localhost ~]# yum install MariaDB-server MariaDB-client MariaDB-devel MariaDB-shared
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: centos.tt.co.kr
 * extras: centos.tt.co.kr
 * updates: centos.tt.co.kr
Resolving Dependencies
--> Running transaction check
---> Package MariaDB-client.x86_64 0:10.0.14-1.el7.centos will be installed
--> Processing Dependency: MariaDB-common for package: MariaDB-client-10.0.14-1.el7.centos.x86_64
---> Package MariaDB-devel.x86_64 0:10.0.14-1.el7.centos will be installed
---> Package MariaDB-server.x86_64 0:10.0.14-1.el7.centos will be installed
--> Processing Dependency: perl(DBI) for package: MariaDB-server-10.0.14-1.el7.centos.x86_64
---> Package MariaDB-shared.x86_64 0:10.0.14-1.el7.centos will be installed
--> Running transaction check
...
... ## When prompted 'Is this ok [y/N]', answer y
...
Transaction Summary
================================================================================================================================================================================================================================================================
Install  4 Packages (+7 Dependent packages)

Total download size: 69 M
Installed size: 307 M
Is this ok [y/d/N]: y
Downloading packages:
(1/11): MariaDB-10.0.14-centos7_0-x86_64-common.rpm                                                                                                                                                                                      |  23 kB  00:00:01     
(2/11): MariaDB-10.0.14-centos7_0-x86_64-devel.rpm                                                                                                                                                                                       | 6.2 MB  00:00:05     
(3/11): MariaDB-10.0.14-centos7_0-x86_64-client.rpm                                                                                                                                                                                      | 9.9 MB  00:00:19     
(4/11): perl-Compress-Raw-Bzip2-2.061-3.el7.x86_64.rpm                                                                                                                                                                                   |  32 kB  00:00:00     
(5/11): perl-Compress-Raw-Zlib-2.061-4.el7.x86_64.rpm                                                                                                                                                                                    |  57 kB  00:00:00     
(6/11): perl-Net-Daemon-0.48-5.el7.noarch.rpm                                                                                                                                                                                            |  51 kB  00:00:00     
(7/11): perl-IO-Compress-2.061-2.el7.noarch.rpm                                                                                                                                                                                          | 260 kB  00:00:00     
(8/11): perl-PlRPC-0.2020-14.el7.noarch.rpm                                                                                                                                                                                              |  36 kB  00:00:00     
(9/11): MariaDB-10.0.14-centos7_0-x86_64-shared.rpm                                                                                                                                                                                      | 1.2 MB  00:00:01     
(10/11): perl-DBI-1.627-4.el7.x86_64.rpm                                                                                                                                                                                                 | 802 kB  00:00:06     
(11/11): MariaDB-10.0.14-centos7_0-x86_64-server.rpm                                                                                                                                                                                     |  50 MB  00:00:29     
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                                                           1.9 MB/s |  69 MB  00:00:36     
Running transaction check
Running transaction test


Transaction check error:
  file /etc/my.cnf from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/armscii8.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/ascii.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/cp1250.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/cp1256.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/cp1257.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/cp850.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/cp852.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
  file /usr/share/mysql/charsets/cp866.xml from install of MariaDB-common-10.0.14-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-1:5.5.35-3.el7.x86_64
...
...
...
Error Summary
-------------

## A transaction check error occurs during installation. Resolve it as follows.
## The stock mariadb-libs-1:5.5.35-3.el7.x86_64 package conflicts with the MariaDB packages being installed.
## postfix depends on mariadb-libs, so remove postfix first (it can be reinstalled afterwards), then remove mariadb-libs.
[root@localhost ~]# yum remove postfix
Loaded plugins: fastestmirror, langpacks
Resolving Dependencies
--> Running transaction check
---> Package postfix.x86_64 2:2.10.1-6.el7 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================
 Package                                                     Arch                                                       Version                                                             Repository                                                     Size
================================================================================================================================================================================================================================================================
Removing:
 postfix                                                     x86_64                                                     2:2.10.1-6.el7                                                      @anaconda                                                      12 M

Transaction Summary
================================================================================================================================================================================================================================================================
Remove  1 Package

Installed size: 12 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Erasing    : 2:postfix-2.10.1-6.el7.x86_64                                                                                                                                                                                                                1/1 
  Verifying  : 2:postfix-2.10.1-6.el7.x86_64                                                                                                                                                                                                                1/1 

Removed:
  postfix.x86_64 2:2.10.1-6.el7                                                                                                                                                                                                                                 

Complete!

[root@localhost ~]# rpm -ev mariadb-libs-5.5.35-3.el7.x86_64 
Preparing packages...
mariadb-libs-1:5.5.35-3.el7.x86_64


## Now run the MariaDB installation again.
[root@localhost ~]# yum install MariaDB-server MariaDB-client MariaDB-devel MariaDB-shared
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from