Today I needed to clone a schema in MySQL: I upgraded my monitoring software Zabbix, and the upgrade also modifies the DB schema where Zabbix stores its data. Since this is a destructive operation I wanted to clone the zabbix schema first. But how do you clone a schema in MySQL?
The first thing to do is to create the new schema (or database in MySQL world) and grant all privileges to a user (even an existing one):
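The creation and grant can be done like this, and the clone itself is then just a mysqldump piped straight into the new schema (a sketch: zabbix_clone and the zabbix user are example names, adjust them to your setup):

```shell
$ mysql -u root -p -e "CREATE DATABASE zabbix_clone"
$ mysql -u root -p -e "GRANT ALL PRIVILEGES ON zabbix_clone.* TO 'zabbix'@'localhost'"
# dump the source schema and load it straight into the clone
$ mysqldump -u root -p zabbix | mysql -u root -p zabbix_clone
```

Note that mysqldump includes the CREATE TABLE statements, so the target schema can be completely empty.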
ZABBIX is an enterprise-class open source distributed monitoring solution. I use it at work and I like it very much. Today I needed the Zabbix Agent for Solaris10 on Intel and, as I still use version 1.4, I downloaded the precompiled agent binaries from the Zabbix site. I’m writing this post because I found a problem!
The precompiled binaries fail to run with the following message:
Zabbix agent error on console
ld.so.1: zabbix_agentd: fatal: libresolv.so.2: version `SUNW_2.3' not found (required by file zabbix_agentd)
ld.so.1: zabbix_agentd: fatal: libresolv.so.2: open failed: No such file or directory
Killed
Running ldd on zabbix_agentd reveals the problem:
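For example, from the directory where you unpacked the agent:

```shell
# list the shared-library dependencies of the agent binary
$ ldd ./zabbix_agentd
```

Any line reporting a missing library or library version points at the culprit.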
libresolv.so.2 (SUNW_2.3) is not bundled with Solaris10 but with Nevada (a.k.a. Solaris11): so the precompiled binaries were not built on Solaris10 but on OpenSolaris. To overcome this problem I downloaded the Zabbix 1.4.6 sources and compiled the agent with Sun Studio 12.
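For reference, the build looks roughly like this (a sketch: the Sun Studio compiler path is the usual default install location, verify it against your installation):

```shell
# build only the agent, using the Sun Studio C compiler
$ CC=/opt/SUNWspro/bin/cc ./configure --enable-agent
$ make
```

The resulting zabbix_agentd links against the Solaris10 system libraries, so the SUNW_2.3 problem disappears.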
Today I needed to install an Oracle Client on a new Solaris10 (x86 64bit) installation: I found a useful tutorial here, but it covers installing the whole database, not only the client. As you may guess, the client installation is easier than the Oracle DB Server one.
The first thing to do is to check that you have the prerequisite packages installed:
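The authoritative package list is in Oracle's installation guide for your exact release; the check itself can be done with pkginfo, roughly like this (the packages below are the usual 10gR2 suspects, so treat the list as an example):

```shell
# report which of these packages are NOT installed
$ pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of SUNWxwfnt
```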
If you see any errors here you must install the missing packages: if you did the “full installation” all packages should already be present.
Now it's time to create the user, groups, directories and project: as root, type
Preparing for installation
# create user and groups
$ groupadd oinstall
$ groupadd dba
$ useradd -g oinstall -G dba -m -d /export/home/oracle -s /usr/bin/bash oracle
# create the project (this is optional)
$ projadd -U oracle oracle
# create the installation dir and give the right permissions
$ mkdir /opt/oracle
$ chown -R oracle:oinstall /opt/oracle
OK… as root we have temporarily finished!
Now log in as the oracle user and download the Oracle Client 10g for Solaris10 from the Oracle website: once there you need to select the right architecture (Solaris exists for SPARC and x86, both 32 and 64 bit) and get the right archive for the Oracle client installation. In my own situation (Solaris10 on x86 64bit) I downloaded the file called 10201_client_solx86_64.zip and typed:
Installing oracle
$ unzip 10201_client_solx86_64.zip
$ cd client/
$ ./runInstaller
As you can see I used /opt/oracle/oraInventory and I left oinstall as the default group.
As ORACLE_HOME I used /opt/oracle/product/10.2.0/client_1.
Just before the installation finishes it prompts you to execute two more commands as root: in my own installation I ran
Finishing installation
# execute the following commands as root
$ /opt/oracle/oraInventory/orainstRoot.sh
$ /opt/oracle/product/10.2.0/client_1/root.sh
but, as you may guess, your path may be different from mine.
Now, if you want to use the newly installed Oracle binaries, you need to edit the user's .profile, adding at the end:
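A minimal sketch of those .profile additions, assuming the ORACLE_HOME chosen above (adjust the path if you installed elsewhere):

```shell
# Oracle client environment (append to ~oracle/.profile)
export ORACLE_HOME=/opt/oracle/product/10.2.0/client_1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
```

After logging in again, sqlplus and the other client tools will be on the PATH.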
In this tutorial I will show you how to install a MySQL cluster on a single node: obviously you will not gain any hardware redundancy with this setup, but it is useful if you need to create a test installation, as it was for me. You can find many tutorials about this topic, but they are quite old and MySQL Cluster has changed a lot in recent years.
What is a MySQL cluster?
Let’s start explaining the architecture of a MySQL cluster with an image taken from dev.mysql.com:
As you may see the MySQL cluster is an aggregation of many components:
* one management server;
* many MySQL daemons that act as “frontends”;
* many data nodes that store the real data.
This tutorial will guide you in the creation of a cluster with:
* one management node;
* two MySQL daemons;
* two data nodes.
Obviously you can expand this configuration by simply adding the components you need. As stated at the beginning of this tutorial, you can create all of this on a single server (well, you need 3 IPs on the server) or, just as easily, you can split the MySQL cluster components across many servers.
What do I need?
For this setup you need:
* the MySQL Cluster 6.2 archive (6.2.15 was used in this tutorial) compiled for your system/architecture, which you can download from dev.mysql.com (please note that this tutorial was written for a Solaris10/SPARC installation, so your archive name could differ from mine);
* an unprivileged user;
* 3 IPs on the same server;
* at least 2GB of free disk space;
* quite some time to follow this tutorial ;)
Definitions
As this is a complex installation, I define the following variables in the Bash shell:
export MYSQL_HOME=[PUT HERE THE BASE DIRECTORY FOR THE INSTALLATION]
export MGMT_HOME=$MYSQL_HOME/[PUT HERE THE DIRECTORY NAME FOR THE MGMT NODE]
export NODE1_HOME=$MYSQL_HOME/[PUT HERE THE DIRECTORY NAME FOR THE FIRST NODE]
export NODE2_HOME=$MYSQL_HOME/[PUT HERE THE DIRECTORY NAME FOR THE SECOND NODE]
export MGMT_BIN=$MGMT_HOME/bin
export NODE1_BIN=$NODE1_HOME/bin
export NODE2_BIN=$NODE2_HOME/bin
export MGMT_VAR=$MGMT_HOME/var
export NODE1_VAR=$NODE1_HOME/var
export NODE2_VAR=$NODE2_HOME/var
export MGMT_DATADIR=$MGMT_VAR/lib/mysql-cluster
export NODE1_DATADIR=$NODE1_VAR/lib/mysql-cluster
export NODE2_DATADIR=$NODE2_VAR/lib/mysql-cluster
export NODE1_NDBD_DATADIR=$NODE1_VAR/lib/mysql-cluster1
export NODE2_NDBD_DATADIR=$NODE2_VAR/lib/mysql-cluster1
export MGMT_ETC=$MGMT_HOME/etc
export NODE1_ETC=$NODE1_HOME/etc
export NODE2_ETC=$NODE2_HOME/etc
Installing MySQL in proper directories
Let’s start by creating the needed directories and installing MySQL from the downloaded archive:
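A sketch of that step, assuming the variables from the Definitions section are exported; the archive name matches a Solaris/SPARC build and is an example, yours may differ:

```shell
# create the three homes and their support directories
$ mkdir -p $MGMT_ETC $MGMT_DATADIR
$ mkdir -p $NODE1_ETC $NODE1_DATADIR $NODE1_NDBD_DATADIR $NODE1_VAR/run $NODE1_VAR/log
$ mkdir -p $NODE2_ETC $NODE2_DATADIR $NODE2_NDBD_DATADIR $NODE2_VAR/run $NODE2_VAR/log
# unpack the archive once and copy it into each home
$ gunzip -c mysql-cluster-gpl-6.2.15-solaris10-sparc-64bit.tar.gz | tar xf -
$ cp -R mysql-cluster-gpl-6.2.15-solaris10-sparc-64bit/* $MGMT_HOME
$ cp -R mysql-cluster-gpl-6.2.15-solaris10-sparc-64bit/* $NODE1_HOME
$ cp -R mysql-cluster-gpl-6.2.15-solaris10-sparc-64bit/* $NODE2_HOME
```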
Creating the configuration files
In this section we are going to create the configuration files for the three main components of the architecture: the management server and the two MySQLd servers (the first one acts as a replication master).
This is the configuration of the mgmt node
#Put this content in a file called **$MGMT_ETC/config.ini**
[NDB_MGMD]
Id=1
Hostname=[PUT HERE THE IPADDRESS OF MGMT NODE]
PortNumber=1186
Datadir=[PUT HERE THE VALUE OF MGMT_DATADIR]
[NDBD]
Id=2
Hostname=[PUT HERE THE IPADDRESS OF NODE1]
Datadir=[PUT HERE THE VALUE OF NODE1_DATADIR]
[NDBD]
Id=3
Hostname=[PUT HERE THE IPADDRESS OF NODE2]
Datadir=[PUT HERE THE VALUE OF NODE2_DATADIR]
[MYSQLD]
[MYSQLD]
This is the configuration of node1
#Put this content in a file called **my.cnf.master** in **$NODE1_ETC**
[MYSQLD]
user=mysql #the user running MySQL
basedir=[PUT HERE THE VALUE OF NODE1_HOME]
datadir=[PUT HERE THE VALUE OF NODE1_DATADIR]
pid-file = [PUT HERE THE VALUE OF NODE1_VAR]/run/mysqld.pid
socket = [PUT HERE THE VALUE OF NODE1_VAR]/run/mysqld.sock
log-error = [PUT HERE THE VALUE OF NODE1_VAR]/log/mysqld.err
bind-address = [PUT HERE THE IPADDRESS OF NODE 1]
ndb-cluster-connection-pool=1
ndbcluster
ndb-connectstring="[PUT HERE THE IPADDRESS OF MGMT NODE]"
ndb-force-send=1
ndb-use-exact-count=0
ndb-extra-logging=1
ndb-autoincrement-prefetch-sz=256
engine-condition-pushdown=1
#REPLICATION SPECIFIC - GENERAL
#server-id must be unique across all mysql servers participating in replication.
server-id=4
#REPLICATION SPECIFIC - MASTER
log-bin
This is the configuration of node2
#Put this content in a file called **my.cnf** in **$NODE2_ETC**
[MYSQLD]
user=mysql #the user running MySQL
basedir=[PUT HERE THE VALUE OF NODE2_HOME]
datadir=[PUT HERE THE VALUE OF NODE2_DATADIR]
pid-file = [PUT HERE THE VALUE OF NODE2_VAR]/run/mysqld.pid
socket = [PUT HERE THE VALUE OF NODE2_VAR]/run/mysqld.sock
log-error = [PUT HERE THE VALUE OF NODE2_VAR]/log/mysqld.err
bind-address = [PUT HERE THE IPADDRESS OF NODE 2]
ndb-cluster-connection-pool=1
ndbcluster
ndb-connectstring="[PUT HERE THE IPADDRESS OF MGMT NODE]"
ndb-force-send=1
ndb-use-exact-count=0
ndb-extra-logging=1
ndb-autoincrement-prefetch-sz=256
engine-condition-pushdown=1
#server-id must be unique across all mysql servers participating in replication.
server-id=5
MySQLd servers initialization
The two MySQLd servers need to have their databases initialized (they need to know at least which users exist and where they can connect from). For this purpose we use the mysql_install_db script:
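That initialization might look like this (a sketch: in binary tarballs the script usually lives in scripts/ and should be run from each server's home directory; verify the paths in your archive):

```shell
$ cd $NODE1_HOME
$ ./scripts/mysql_install_db --defaults-file=$NODE1_ETC/my.cnf.master --basedir=$NODE1_HOME --datadir=$NODE1_DATADIR
$ cd $NODE2_HOME
$ ./scripts/mysql_install_db --defaults-file=$NODE2_ETC/my.cnf --basedir=$NODE2_HOME --datadir=$NODE2_DATADIR
```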
Please note the --initial parameter used as argument to the ndbd command: it is needed only for the very first run… omit it next time!
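That first start of the management server and of the two data nodes might look like this (a sketch, using the placeholder conventions of this tutorial):

```shell
# start the management server with its configuration file
$ $MGMT_BIN/ndb_mgmd -f $MGMT_ETC/config.ini
# start the two data nodes, pointing them at the management server
$ $NODE1_BIN/ndbd -c [PUT HERE THE IPADDRESS OF MGMT NODE] --initial
$ $NODE2_BIN/ndbd -c [PUT HERE THE IPADDRESS OF MGMT NODE] --initial
```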
Now the first check: we ask the MGMT console for the cluster status:
$ $MGMT_BIN/ndb_mgm -c [MGMT_IP_ADDRESS] -e show
Connected to Management Server at: 10.145.2.3:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.145.2.33 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=3 @10.145.2.34 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.145.2.3 (mysql-5.1.23 ndb-6.2.15)
[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)
You should see something similar, with your IPs: note that we can actually see the two data nodes we just started and the mgmt node.
At the end we need to start the two MySQLd servers that act as frontends:
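A sketch of that step, again assuming the Definitions variables and the configuration files created above:

```shell
# start each MySQLd with its own configuration file, in the background
$ $NODE1_BIN/mysqld_safe --defaults-file=$NODE1_ETC/my.cnf.master &
$ $NODE2_BIN/mysqld_safe --defaults-file=$NODE2_ETC/my.cnf &
```

Rerunning the ndb_mgm show command should now report both mysqld(API) slots as connected.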
If you see something different then you probably have a problem: look at the log files in $NODE1_VAR/log or $NODE2_VAR/log. If you provide as much information as possible I’ll try to help you out.
Trying the toy
Now, if the management console says everything is OK, we need to test our shiny new MySQL cluster:
$ $NODE1_BIN/mysql -u root -h [FIRST_NODE_IP_ADDRESS] test
mysql> show engines; #check you can see ndbcluster!!!
mysql> CREATE TABLE animals (grp ENUM('fish','mammal','bird') NOT NULL, id MEDIUMINT NOT NULL AUTO_INCREMENT, name CHAR(30) NOT NULL, PRIMARY KEY (grp,id)) engine=ndbcluster;
mysql> INSERT INTO animals (grp,name) VALUES ('mammal','dog'),('mammal','cat'), ('bird','penguin'),('fish','lax'), ('mammal','whale'), ('bird','ostrich');
mysql> select * from animals;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| mammal | 2 | cat |
| bird | 6 | ostrich |
| fish | 4 | lax |
| bird | 3 | penguin |
| mammal | 1 | dog |
| mammal | 5 | whale |
+--------+----+---------+
6 rows in set(0.00 sec)
and now check the second MySQLd node:
$ $NODE2_BIN/mysql -u root -h [SECOND_NODE_IP_ADDRESS] test
mysql> show engines; #check you can see ndbcluster!!!
mysql> select * from animals;
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| mammal | 2 | cat |
| bird | 6 | ostrich |
| fish | 4 | lax |
| bird | 3 | penguin |
| mammal | 1 | dog |
| mammal | 5 | whale |
+--------+----+---------+
6 rows in set(0.00 sec)
That’s it… if something does not work on your side, please leave a note here providing as much information as possible and I’ll try to help you!
Today I needed to install the Oracle Client 10g on a Red Hat Enterprise Linux 5 64bit: all around the net you can find many useful tutorials on how to install the Oracle DB Server, but I could not find how to install only the client. As you may guess, the client installation is easier than the Oracle DB Server one.
On the linux machine, as root, you have to run the following commands:
Preparing for installation
# create user and groups
$ groupadd oinstall
$ groupadd dba
$ useradd -g oinstall -G dba oracle
$ passwd oracle
I pointed the oraInventory directory to /opt/oracle in the first screen; in the second one I chose to install the Oracle 10g Client in /opt/oracle/product/10.2.0/client_1/. Just before the installation finishes it prompts you to execute two more commands as root: in my own installation I ran
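Given the oraInventory and ORACLE_HOME chosen above, those two scripts should be (verify the paths the installer actually prints):

```shell
# execute the following commands as root
$ /opt/oracle/oraInventory/orainstRoot.sh
$ /opt/oracle/product/10.2.0/client_1/root.sh
```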
Today I needed to install the Oracle Client 10g on a Red Hat Enterprise Linux 5 32bit: all around the net you can find many useful tutorials on how to install the Oracle DB Server, but I could not find how to install only the client. As you may guess, the client installation is easier than the Oracle DB Server one.
On the linux machine, as root, you have to run the following commands:
$ unzip 10201_client_linux32.zip
$ cd client
$ ./runInstaller
I pointed the oraInventory directory to /opt/oracle in the first screen; in the second one I chose to install the Oracle 10g Client in /opt/oracle/product/10.2.0/client_1/. Just before the installation finishes it prompts you to execute two more commands as root: in my own installation I ran the orainstRoot.sh and root.sh scripts from the oraInventory and ORACLE_HOME directories chosen above.
Gomorra is the first book by Roberto Saviano, a journalist born in Naples in 1979 who has worked with Repubblica and l’Espresso.
To understand the book you cannot ignore Saviano’s origins and profession: this is not a novel or a work of fiction, but a harsh, heartfelt chronicle of the situation currently unfolding in Campania.
As is now well known, the book is about the Camorra, by now commonly called “the System”, since the various “families” regulate the life and economy of all of Campania: “the System” is the economic engine of Campania, one that does not deal exclusively in illegal trafficking but extends all its force and virulence into legitimate activities as well. Through a foundation of illegality, “the System” can provide legal services and activities throughout the country (and the world) at absolutely competitive prices.
When people talk about the Camorra, the image that usually comes to mind is one of violence (murders, kidnappings, extortion): Saviano overturns this commonplace and reveals that the true power of “the System” is economic even before it is military.
Starting exclusively from news reports and judicial investigations (which at times become very long lists of names, places and dates), Saviano breaks “the System” into parts so he can examine it more analytically: from a territorial division (Naples, the outskirts of Naples, Caserta) to a division based on the various specializations of the Camorra families (drugs, waste, construction, etc.). In this way the author manages to show the infinite complexity and the many ramifications “the System” has developed over the years: it is often striking to read that one family has interests in Veneto or Lombardy, or that another disposes of hazardous waste in Molise and no longer only in battered Campania.
Of course, even for Saviano, the Camorra is not just an economic empire: “the System” is grounded in violence and military power. An entire chapter is devoted to the Secondigliano war: between 2004 and 2005 there were dozens of killings caused by a power struggle between the Di Lauro clan and the “Secessionists”. In a system where the order of things is dictated by economic wealth and military power, attempts to seize power are the order of the day: the “Secessionists”, formerly affiliated with the Di Lauro clan, tried to break away and manage on their own the trafficking they had originally run for the Di Lauros. We all remember the news from Naples in that period, and Saviano records all the deaths caused by this war inside the Di Lauro clan: journalist that he is, in strict chronological order he chronicles all the events of the period, from the reasons for the “secession”, through the dozens of deaths, to the peace finally imposed by the other components of “the System”, by then fed up with the spotlight that had been shining on the outskirts of Naples.
“The System” lives and thrives in silence, away from the spotlight: this is the fundamental point of Saviano’s book. Only by talking about the Camorra, only by making public what happens in those areas (and by now all over Italy), can the greatest damage be done to “the System”: publicity is what the System always tries, by every means, to avoid. Talking about “the System” is the first step, the “simplest” but also an effective one, toward eroding these twisted, distorted powers that poison life in Campania and in all of Italy.
Rating: 8/10
P.S.: Gomorra is not a novel. It is the raw reality of a horrible situation we are living through. It is not a book to read for pleasure: if I were a high-school teacher I would make my students read it. Knowing is the first step toward understanding and trying to change things.
Yesterday I finished the installation of a new Ubuntu Linux 7.04 Feisty Fawn and I needed to install the Data Protector Disk Agent to back up its files. I have a Windows 2003 cell manager and an HP-UX install server that should help with the installation of the Data Protector software on UNIX/Linux/Solaris hosts… I’m quite a newbie on the topic but I was unable to install the disk agent using the HP-UX install server, so here you will find instructions for the “always working” manual installation.
The first thing is to get the HP-UX PA-RISC install server CDs from the HP website: if you do not have the original CDs you can get them as a trial from here.
Now it's time to install the Disk Agent on the server you need to back up; you need to install a package that is not installed by default on Ubuntu Linux 7.04: inetd. So, as “administrator”, execute the command:
$ sudo apt-get install netkit-inetd
to get the needed package installed.
Insert the first CD of the HP-UX PA-RISC install server on the optical drive of the server and mount it:
$ mount /dev/hda /mnt
Now it is install time:
$ cd /mnt/LOCAL_INSTALL
$ ./omnisetup.sh
Answer all the questions the installer asks you and finish the installation. In my installation the Data Protector agent was correctly added to the inetd configuration:
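For reference, the entry added to /etc/inetd.conf should look roughly like this (a sketch inferred from the equivalent xinetd service definition Data Protector uses on other distributions; verify it against your own file):

```
omni  stream  tcp  nowait  root  /opt/omni/lbin/inet  inet -log /var/opt/omni/log/inet.log
```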
Yesterday I finished the installation of a new Red Hat Enterprise Linux 5 (RHEL5) and I needed to install the Data Protector Disk Agent to back up its files. I have a Windows 2003 cell manager and an HP-UX install server that should help with the installation of the Data Protector software on UNIX/Linux/Solaris hosts… I’m quite a newbie on the topic but I was unable to install the disk agent using the HP-UX install server, so here you will find instructions for the “always working” manual installation.
The first thing is to get the HP-UX PA-RISC install server CDs from HP website: if you do not have the original CDs you can get them as trial from here.
Now it's time to install the Disk Agent on the server you need to back up; you need to install two packages that are not installed by default on Red Hat Enterprise Linux 5: ncompress and xinetd. So, as root, execute the command:
Install prerequisites
$ yum install ncompress xinetd
to get the needed packages installed.
Insert the first CD of the HP-UX PA-RISC install server on the optical drive of the server and mount it:
Mount CD
$ mount /dev/hda /mnt
Now it is install time:
Install DP
$ cd /mnt/LOCAL_INSTALL
$ ./omnisetup.sh
Answer all the questions the installer asks you and finish the installation. In my installation the Data Protector agent was not correctly added to the xinetd configuration… check whether the omni file exists in /etc/xinetd.d… if you do not find it, create it with the following content:
xinetd service installation
service omni
{
socket_type = stream
protocol = tcp
wait = no
user = root
server = /opt/omni/lbin/inet
server_args = inet -log /var/opt/omni/log/inet.log
disable = no
}
and verify it is owned by root and with 644 permissions.
OK… we are at the end…
Restart xinetd
$ /etc/init.d/xinetd restart
As suggested by fluffy in comment #1, you now have to check whether the firewall (iptables) is active: if it is, you have to permit the traffic from and to port 5555 to flow regularly.
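A minimal sketch of such rules on RHEL5, to be run as root (integrate them with whatever policy you already have before saving):

```shell
# allow Data Protector traffic on TCP port 5555, then persist the rules
$ iptables -A INPUT -p tcp --dport 5555 -j ACCEPT
$ iptables -A OUTPUT -p tcp --sport 5555 -j ACCEPT
$ service iptables save
```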
This morning, in my mailbox, I found a message saying that my monitoring server, based on Zabbix and installed on Ubuntu 7.04, had kicked the bucket at 02:00 last night: the message carried a laconic “ZABBIX database is down.”!
Obviously the first check was on the DB (MySQL), and I found no error message at all… the problem was that /var/log/syslog had actually been stuck for several hours. Driven by an irrational hope I tried a
$ /etc/init.d/mysql restart
only to discover it was useless… the DB would not come back up. Fine… I have backups… wipe everything and restore from backup: no… not even that was possible… the backup directory was no longer a directory, and in place of the familiar drwxr-xr-x it showed frightening question marks.
To sum up: the DB would not come back up and the backups were lost!
At this point the only thing left to do was to check the integrity of the file system (ext3), and I followed these steps:
$ init 1 # go to single-user mode
$ mount # list the mounted filesystems
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
/sys on /sys type sysfs (rw,noexec,nosuid,nodev)
varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755)
varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
udev on /dev type tmpfs (rw,mode=0755)
devshm on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
$ mount -o remount,ro / # remount the / filesystem read-only
$ e2fsck -f -v /dev/sda1 # check the filesystem even if it seems clean (-f) and be verbose (-v)
$ mount -o remount,rw / # remount / read-write once the check is finished
$ init 2 # return to the previous runlevel and full system operation
Luckily, after this treatment, the DB came back up correctly and I even recovered my backups!