Monday, 16 February 2015
php on win
To use a current 64-bit Apache build on Windows we first need to download a binary version (it can also be built from source, but grabbing a prebuilt binary is the more rational way). One of the suggested sites offering such a build (httpd-2.4.12-x64-vc11.zip) is https://www.apachehaus.com. After a trivial unzip and copy to the local disk, setting the SRVROOT variable in httpd.conf is a wise thing to do. In my case:
Define SRVROOT "C:\work\Apache24"
After that, adding Apache as a service - so it gets started on machine boot, and so we can use Apache Monitor to quickly restart the service - is always a good thing.
From the bin folder just run:
httpd.exe -k install
In theory, loading localhost in your browser should now return some sort of landing page.
The next thing is the PHP install; since we are using 64-bit Apache, we go for the 64-bit PHP build as well. At the time of writing, VC11 x64 Non Thread Safe (2015-Jan-22 03:24:53) was the latest version, so I picked that one from http://windows.php.net/download#php-5.6. Again a trivial unzip will do just fine - but going for a c:\php folder might save you some config changes later.
To tie things together we need to download mod_fcgid from Apache Haus - mod_fcgid-2.3.9-2.4.x-x64.zip matches our current installation stack. Just unzip the contents into the Apache root folder - we get the module, a config file and some readme's.
Two things are needed to actually load it. Again open httpd.conf and add these two lines:
LoadModule fcgid_module modules/mod_fcgid.so
Include conf/extra/httpd-fcgid.conf
The first line loads the fcgid module and the second one includes the fcgid configuration into Apache.
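For reference, the included conf/extra/httpd-fcgid.conf typically boils down to a few directives along these lines - a minimal sketch assuming PHP sits in c:\php (adjust the paths to your layout, and make sure ExecCGI is allowed for the docroot):
AddHandler fcgid-script .php
FcgidInitialEnv PHPRC "c:/php"
FcgidWrapper "c:/php/php-cgi.exe" .php
<Directory "${SRVROOT}/htdocs">
    Options Indexes FollowSymLinks ExecCGI
    AllowOverride All
    Require all granted
</Directory>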
At this point we can create a PHP file inside the htdocs folder with a phpinfo() call and see the standard phpinfo output describing our configuration.
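The test file itself can be as simple as this (saved e.g. as htdocs\info.php):
<?php
// dump the full PHP configuration report
phpinfo();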
Now we can try to connect to MSSQL. For this we need a 64-bit library for PHP to connect to MSSQL, which Microsoft does not provide - but that's where Rob jumps in, providing precompiled 64-bit libraries for connecting to MSSQL here: http://robsphp.blogspot.com/2012/06/unofficial-microsoft-sql-server-driver.html. After the download just copy the files from the x64 folder into the ext folder of the PHP installation, add the extension loading to php.ini and restart the Apache service.
extension=ext/php_sqlsrv_55_ts.dll
Now phpinfo() should inform us about the loaded extension.
If SQL Server is installed on another machine, now would be a good time to install the SQL Server Native Client found here: https://msdn.microsoft.com/en-us/sqlserver/aa937733.aspx. If you are installing on the same machine that runs SQL Server, there is a good chance you already have it and can skip this step.
The last step is to add some table/data to SQL Server to test everything (I always try to test UTF-8, since our alphabet includes some characters that only live inside UTF-8) - so adding an extra parameter to the connection string is a good thing ... and success.
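A minimal connection test could look something like this - server, database, credentials and the test table are placeholders; the important bit is the CharacterSet option, which makes sqlsrv hand strings back as UTF-8:
<?php
// placeholders - replace with your own server/database/credentials
$connectionInfo = array(
    "Database"     => "testdb",
    "UID"          => "php_user",
    "PWD"          => "secret",
    "CharacterSet" => "UTF-8",   // so č, š, ž and friends survive the round trip
);
$conn = sqlsrv_connect("localhost\\SQLEXPRESS", $connectionInfo);
if ($conn === false) {
    die(print_r(sqlsrv_errors(), true));
}
$stmt = sqlsrv_query($conn, "SELECT id, name FROM test_table");
while ($row = sqlsrv_fetch_array($stmt, SQLSRV_FETCH_ASSOC)) {
    print_r($row);
}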
Sunday, 13 October 2013
search sphinx within mysql/mariadb (linux edition)
I have used the Sphinx search engine (http://sphinxsearch.com/) a few times before, and back then there was no option to use it from inside MySQL, so the only way to do a search was:
1. define the search index (via a mysql query)
2. build the actual index and set up reindexing via cron (at least that's how I did it - now there seems to be some kind of real-time index update option)
3. query searchd via the PHP API - the response was an object of document ids best matching the search criteria
4. run a MySQL query with the returned ids in a WHERE IN clause
5. sort the results in the order returned by the searchd API call (a rough sketch of steps 3-5 follows below)
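Steps 3-5 looked roughly like this with the bundled PHP API (sphinxapi.php); the MySQL credentials here are placeholders, the index and table names are the ones used later in this post:
<?php
// steps 3-5 of the old workflow, sketched with the bundled sphinxapi.php
require 'sphinxapi.php';

$cl = new SphinxClient();
$cl->SetServer('127.0.0.1', 9312);
$result = $cl->Query('tie fighter', 'test1');          // 3. ask searchd for matching doc ids

if ($result && !empty($result['matches'])) {
    // searchd returns integer document ids, so gluing them together is safe
    $ids = implode(',', array_keys($result['matches']));

    // 4. pull the actual rows from MySQL
    $db   = new mysqli('localhost', 'user', 'pass', 'nfo');
    $rows = array();
    $res  = $db->query("SELECT id, rlsname, grp FROM predb WHERE id IN ($ids)");
    while ($r = $res->fetch_assoc()) {
        $rows[] = $r;
    }

    // 5. re-sort the rows into the order searchd returned them
    $order = array_flip(array_keys($result['matches']));
    usort($rows, function ($a, $b) use ($order) {
        return $order[$a['id']] - $order[$b['id']];
    });
    print_r($rows);
}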
This actually works just fine, but it's a bit complicated and could be done much more simply. So after doing a bit of research for a new project I found this document (https://mariadb.com/kb/en/about-sphinxse/) describing how to query Sphinx results from within MySQL.
Let's assume that we have a working searchd with our index (tasks 1 and 2).
The latest versions of MariaDB apparently include the Sphinx plugin, which can easily be activated by:
MariaDB [(none)]> INSTALL PLUGIN sphinx SONAME 'ha_sphinx.so';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show engines;
+------------+---------+--------------------------------------------+--------------+------+------------+
| Engine | Support | Comment | Transactions | XA | Savepoints |
+------------+---------+--------------------------------------------+--------------+------+------------+
| SPHINX | YES | Sphinx storage engine 0.9.9 | NO | NO | NO |
+------------+---------+--------------------------------------------+--------------+------+------------+
We can see the plugin is successfully loaded and running inside the MySQL/MariaDB server.
The next thing is to define a local table with the searchd connection parameter:
MariaDB [(none)]>CREATE TABLE t1
(
id INTEGER UNSIGNED NOT NULL,
weight INTEGER NOT NULL,
query VARCHAR(3072) NOT NULL,
group_id INTEGER,
INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://127.0.0.1:9312/test1";
So inside our database we now have two tables -- one that actually holds the data* and another "fake" table that connects to searchd and pulls out the results of our search query:
MariaDB [nfo]> show tables;
+---------------+
| Tables_in_nfo |
+---------------+
| predb |
| t1 |
+---------------+
* I use the 27 Years of Warez Scene Release Info Leaked in Giant Database ( http://torrentfreak.com/27-years-of-warez-scene-release-info-leaked-in-giant-database/ ) as my source - 2.6 million records.
So now we can achieve tasks 3, 4 and 5 within MySQL with just one query, like this:
MariaDB [nfo]> SELECT predb.id, rlsname, grp FROM t1 JOIN predb on t1.id = predb.id WHERE query='tie fighter';
+---------+-----------------------------------------------------------+---------+
| id | rlsname | grp |
+---------+-----------------------------------------------------------+---------+
| 2338491 | X-Wing.vs.Tie.Fighter.Balance.of.Power-HBD | HBD |
| 2338559 | TIE.Fighter.Collector.Series-iLLoGiC | iLLoGiC |
| 2338560 | TIE.Fighter.Collector.Series.Complete.Intro.Addon-iLLoGiC | iLLoGiC |
| 2338562 | TIE.Fighter.Collector.Series.Music.Addon-iLLoGiC | iLLoGiC |
| 2338565 | TIE.Fighter.Collector.Series.3-D.Fix-iLLoGiC | iLLoGiC |
| 2340174 | X-Wing.vs.Tie-Fighter-HBD | HBD |
+---------+-----------------------------------------------------------+---------+
6 rows in set (0.00 sec)
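On the PHP side the whole dance now collapses into a single query; a minimal sketch, with the MariaDB credentials as placeholders:
<?php
// one round trip: the SphinxSE table t1 joined straight to the data table
$db  = new mysqli('localhost', 'user', 'pass', 'nfo');   // credentials are placeholders
$sql = "SELECT predb.id, rlsname, grp
        FROM t1 JOIN predb ON t1.id = predb.id
        WHERE query = 'tie fighter'";                      // escape user input in real code
$res = $db->query($sql);
while ($row = $res->fetch_assoc()) {
    print_r($row);
}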
Final words/benchmarks
MariaDB [nfo]> SELECT id,rlsname FROM predb WHERE rlsname LIKE '%zaxxon%';
+---------+---------------------------------------------+
| id | rlsname |
+---------+---------------------------------------------+
| 804051 | Bassplugger-Zaxxon-ILEC006-Advance-2004-RFL |
| 1847329 | Zaxxon-DPS |
+---------+---------------------------------------------+
2 rows in set (1.79 sec)
MariaDB [nfo]> SELECT * FROM t1 WHERE query='zaxxon';
+---------+--------+--------+----------+
| id | weight | query | group_id |
+---------+--------+--------+----------+
| 804051 | 1 | zaxxon | 0 |
| 1847329 | 1 | zaxxon | 0 |
+---------+--------+--------+----------+
2 rows in set (0.01 sec)
MariaDB [nfo]> SELECT predb.id, rlsname, grp FROM t1 JOIN predb on t1.id = predb.id WHERE query='zaxxon';
+---------+---------------------------------------------+-----+
| id | rlsname | grp |
+---------+---------------------------------------------+-----+
| 804051 | Bassplugger-Zaxxon-ILEC006-Advance-2004-RFL | RFL |
| 1847329 | Zaxxon-DPS | DPS |
+---------+---------------------------------------------+-----+
2 rows in set (0.04 sec)
Wednesday, 4 July 2012
mysql easy replication
MySQL offers a quite simple replication setup -- we need at least two servers that can see each other over TCP/IP, plus a bit of configuration.
On the master server (the one we write data to) this is the minimal (well, slightly more than minimal) required configuration:
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
# http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_expire_logs_days
expire_logs_days = 10
# http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#sysvar_max_binlog_size
max_binlog_size = 100M
binlog_do_db = my_database
The last line indicates that we would like to replicate only one database from the master server.
After that we need to grant replication privileges to the slave with:
GRANT REPLICATION SLAVE ON *.* TO 'slave-user'@'%' IDENTIFIED BY 'slave-pass';
FLUSH PRIVILEGES;
We could also specify the exact IP of the slave server in the GRANT, but I prefer putting that into iptables - firehol, to be exact, is the tool for lazy Linux admins:
interface eth0 inet
group with src "ip.of.slave.server"
server "mysql" accept
group end
Next we need to configure the slave server:
[mysqld]
server-id = 2
relay-log-index = slave-relay-bin.index
relay-log = slave-relay-bin
We are almost all set -- we just need to get the current database state over to the slave server and start replication.
1. On the master we need to get the current binlog file and position:
mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.07 sec)
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000034 | 1539 | my_databse | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)
2. We make an export of the master database and import it on the slave (I use good old phpMyAdmin for the job).
3. Then we set up and start replication on the slave:
mysql> CHANGE MASTER TO MASTER_HOST='ip.of.master.server', MASTER_PORT=3306, MASTER_USER='slave-user', MASTER_PASSWORD='slave-pass', MASTER_LOG_FILE='mysql-bin.000034', MASTER_LOG_POS=1539;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: ip.of.master.server
Master_User: master-user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000034
Read_Master_Log_Pos: 2060
Relay_Log_File: slave-relay-bin.000002
Relay_Log_Pos: 774
Relay_Master_Log_File: mysql-bin.000034
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table: mysql.%,information_schema.%,performance_schema.%
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 2060
Relay_Log_Space: 930
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
1 row in set (0.00 sec)
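Since SHOW SLAVE STATUS is just another query, a tiny script can keep an eye on the slave for you. This is not part of the setup above, just a minimal monitoring sketch in PHP; the host, credentials and lag threshold are placeholders:
<?php
// minimal slave health check - run it from cron and alert however you like
$db     = new mysqli('ip.of.slave.server', 'monitor_user', 'monitor_pass');
$res    = $db->query('SHOW SLAVE STATUS');   // the user needs the REPLICATION CLIENT privilege
$status = $res ? $res->fetch_assoc() : null;

if (!$status) {
    echo "no slave status returned - is replication configured?\n";
} elseif ($status['Slave_IO_Running'] !== 'Yes' || $status['Slave_SQL_Running'] !== 'Yes') {
    echo "replication broken: " . $status['Last_Error'] . "\n";
} elseif ((int)$status['Seconds_Behind_Master'] > 60) {
    echo "slave lagging: {$status['Seconds_Behind_Master']}s behind master\n";
} else {
    echo "replication OK\n";
}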
Friday, 25 May 2012
connect to oracle
Let's say we need to connect to an Oracle DB from our Windows (64-bit) PHP server. A Linux fanboy would think that apt-getting a PHP module (on Windows most modules ship with the PHP installation) and uncommenting a line in php.ini would do the trick - but not with Windows and Oracle.
Uncommenting this line:
extension=c:/php54/ext/php_oci8_11g.dll ; Use with Oracle 11gR2 Instant Client
will probably give you some sort of error:
[Fri May 25 09:09:03 2012] [error] [client 192.168.1.194] PHP Warning: PHP Startup: Unable to load dynamic library 'c:/php54/ext/php_oci8_11g.dll' - %1 is not a valid Win32 application.\r
due to the fact that this extension needs the Oracle client to be installed as well. So the next step is installing the proper (32-bit!) Oracle Instant Client from http://www.oracle.com/technetwork/topics/winsoft-085727.html (I usually go for the smallest package, in this case *Instant Client Package - Basic Lite: Smaller version of the Basic, with only English error messages and Unicode, ASCII, and Western European character set support (10.2 only)*).
After that we need to add the Instant Client folder to the PATH environment variable.
After an Apache restart, phpinfo() should show an oci8 section.
The following example tests the connection and runs a trivial query:
<?php
$c = oci_connect('*username*', '*password*', '//192.168.1.113:1521/ORCL');
$sql = "SELECT ID, USERNAME FROM USERS";
$stmt = oci_parse($c, $sql);
oci_execute($stmt, OCI_DEFAULT);
while ($row = oci_fetch_array($stmt, OCI_ASSOC)) {
print_r($row);
}
Wednesday, 16 May 2012
debug PHP the JS way
We use CI (http://codeigniter.com/) in almost all our projects, and since JS got console.log our PHP debug output was lagging behind with its own print_r. Every good thing is eventually dumped for another new good thing, so after looking around for debug solutions in PHP - and given that we almost always share the browser screen with a Firebug window - FirePHP (http://www.firephp.org/) was the reasonable path.
For installation we just need to copy all files from the archive to /ci/application/libraries/firephp_library, and the wrapper class looks something like this:
<?php if (!defined('BASEPATH')) exit('No direct script access allowed');
require "firephp_library/FirePHP.class.php";
class Firephp_library extends FirePHP {
function __construct() {
parent::init();
}
}
After that we can use it in a controller:
//load library
$this->load->library('Firephp_library');
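...and then log straight to the Firebug console. A small sketch using FirePHP's standard log/info/warn/error methods (the messages and data here are just examples):
// log anything - arrays and objects show up nicely expandable in Firebug
$this->firephp_library->log($this->input->post(), 'POST data');
$this->firephp_library->info('cache hit for hotel search');
$this->firephp_library->warn('query took longer than expected');
$this->firephp_library->error('booking engine returned an empty response');
Then I took it for a test drive...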
Sunday, 13 May 2012
Stuck in the moment
Sometimes a php-cgi.exe process gets stuck -- and eventually, over some period of time, when the number of processes gets over a critical point (I have no clue what that would be), the whole Apache-PHP stack freezes.
That happened around once per month, but it was still an unpleasant experience: someone needed to log into the server, stop Apache, kill all php-cgi.exe processes and restart Apache.
After a bit of googling we wrote a tiny script and scheduled it once a day to kill all php-cgi.exe processes so new ones can spawn, and since then no more incidents like this have occurred - not bad (I know that if some process is in the middle of its work it gets killed, but that's a small price to pay). The command, and a scheduling example, follow:
C:\WINDOWS\system32\taskkill.exe /IM php-cgi.exe /F
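Scheduling it is a one-liner as well, for example with the built-in Task Scheduler (the task name and start time are arbitrary examples):
schtasks /Create /TN "restart-php-cgi" /TR "C:\WINDOWS\system32\taskkill.exe /IM php-cgi.exe /F" /SC DAILY /ST 03:00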
Saturday, 28 April 2012
Do the mongo with me
It's been around two years since our project took off; many things have been done (they will be covered in future posts), but somehow I feel the next step in the pursuit of speed will be migrating some of our data tables to noSQL land. I have read lots about this topic and decided to try http://www.mongodb.org/ first. The decision was based on ease of use (in terms of installation) and good support in PHP (remember, PHP on Windows). Installation was very simple, although I had to open port 11371 on the firewall to get the 10gen GPG key into my apt repositories.
After that I was ready to go.
We have a very special table in our DB called server_cache. It's actually a key-value table with some 20k rows and 50 MB of data, holding hashes for hotel/room/date records, filtered and sorted by some 10 or so parameters. We populate this table from user requests if a record is not in the table, and just return the hashes if the desired combination already exists.
So I populated a mongoDB collection with all the records from MySQL and made a little script that fetches 1000 random (predefined in the script) hashes from this table.
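The script itself is nothing fancy; a rough sketch of the mongoDB side, using the old PHP mongo driver of that era - the database, collection and field names here are illustrative:
<?php
// rough sketch of the mongoDB side of the benchmark - names are illustrative
$keys = array(/* ... the 1000 predefined random hash keys ... */);

$mongo      = new Mongo();                       // legacy PHP mongo driver
$collection = $mongo->selectDB('cache')->selectCollection('server_cache');

$start = microtime(true);
foreach ($keys as $key) {
    $doc = $collection->findOne(array('cache_key' => $key));
}
printf("%d lookups in %.3f s\n", count($keys), microtime(true) - $start);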
I started measuring the results with the ApacheBench tool:
I started with 100 requests and one thread, slowly increasing the number of requests. The server's "Time taken for tests" increased linearly, and MySQL was lagging behind, taking roughly twice the time mongoDB needed to complete the test.
Then I started increasing the number of concurrent threads, which is where mongoDB showed all of its potential and left MySQL far behind.
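For reference, the ApacheBench runs were along these lines (the URL and the request/concurrency numbers are examples):
ab -n 100 -c 1 http://localhost/cache_test.php
ab -n 1000 -c 10 http://localhost/cache_test.php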
With such fabulous performance from mongoDB I'm sure we will put it into production soon.
Technical spec of our DB server:
bl4z@ubuntu:~$ uname -a
Linux ubuntu 2.6.32-38-server #83-Ubuntu SMP Wed Jan 4 11:26:59 UTC 2012 x86_64 GNU/Linux
bl4z@ubuntu:~$ mysql --version
mysql Ver 14.14 Distrib 5.1.62, for debian-linux-gnu (x86_64) using readline 6.1
bl4z@ubuntu:~$ mongo --version
MongoDB shell version: 2.0.4