LOB Beers in London
Yesterday I was invited to a very pleasant event and met different Oracle gurus there, such as Doug Burns (Oracle ACE Director, OakTable), Martin Widlake (OakTable), Roger Allen and others. Very good company and beer. What could be better?
Labels:
Events
Updatable cache and transactional order
Recently, I received a very interesting question about the TimesTen cache. A customer would like to use TimesTen as a front-office database and decided to use an AWT cache group for that. The application creates a lot of transactions per second, basically changing client balances. The question is: "How will TimesTen transfer the changes to the Oracle database?" I think it's a very good question, because I know customers who would like to use TimesTen to decrease the load on a production Oracle DB, and in this case the transactional order is quite important.
Let's create a simple test. Oracle DB:
SQL> show user
USER is "ORATT"
SQL> create table awttab ( a number not null primary key, b varchar2(100) );

Table created.

SQL> select count(*) from awttab;

  COUNT(*)
----------
         0

SQL> grant select, insert, update, delete on awttab to cacheadmin;

Grant succeeded.

SQL>

TimesTen:
[oracle@nodett1 ~]$ ttisql "DSN=ds_test;UID=cacheadmin;PWD=oracle"

Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.

connect "DSN=ds_test;UID=cacheadmin;PWD=oracle";
Connection successful: DSN=ds_test;UID=cacheadmin;DataStore=/u01/app/oracle/product/datastore/ds_test;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=US7ASCII;DRIVER=/u01/app/oracle/product/11.2.2.4/TimesTen/tt11224/lib/libtten.so;LogDir=/u01/app/oracle/product/datastore/ds_test/log;PermSize=1024;TempSize=128;TypeMode=0;PLSQL_TIMEOUT=1000;CacheGridEnable=0;OracleNetServiceName=orcl;
(Default setting AutoCommit=1)
Command> call ttCacheUidPwdSet('cacheadmin','oracle');
Command> call ttCacheStart;
Command> CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP awtcache
       > FROM oratt.awttab ( a NUMBER NOT NULL PRIMARY KEY,
       > b VARCHAR2(100));
Command> call ttRepStart;
Command>

[oracle@nodett1 ~]$ ttisql "DSN=ds_test;UID=oratt;PWD=oracle"

Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.

connect "DSN=ds_test;UID=oratt;PWD=oracle";
Connection successful: DSN=ds_test;UID=oratt;DataStore=/u01/app/oracle/product/datastore/ds_test;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=US7ASCII;DRIVER=/u01/app/oracle/product/11.2.2.4/TimesTen/tt11224/lib/libtten.so;LogDir=/u01/app/oracle/product/datastore/ds_test/log;PermSize=1024;TempSize=128;TypeMode=0;PLSQL_TIMEOUT=1000;CacheGridEnable=0;OracleNetServiceName=orcl;
(Default setting AutoCommit=1)
Command> select * from awttab;
0 rows found.
Command>

We created an AWT cache group on the awttab table. Now let's insert a row into the table.
Command> insert into awttab values (1,'test');
1 row inserted.
Command> select * from awttab;
< 1, test >
1 row found.
Command>

The transaction was committed automatically (autocommit is on). Let's check the record in the Oracle DB.
SQL> select * from awttab;

         A B
---------- ------------------
         1 test

SQL>

Let's update this row 50 times using the following code. Before that, I turn on the 10046 event for the cacheadmin Oracle DB user.
Command> begin
       > for i in 1 ..50 loop
       > update awttab set b = 'test'||i where a=1;
       > commit;
       > end loop;
       > end;
       > /

PL/SQL procedure successfully completed.

Command> select * from awttab;
< 1, test50 >
1 row found.
Command>

As you can see, I updated the same row 50 times using the following order:
update1 -> commit
update2 -> commit
...
update50 -> commit

Basically, this type of transaction (each DML statement followed by a commit) is very common in OLTP systems and can cause performance problems (see the log file sync wait event), so let's have a look at the trace files. We can see information about each statement there:
...
PARSING IN CURSOR #6328852 len=67 dep=1 uid=90 oct=6 lid=90 tim=1355414103869999 hv=317616249 ad='48b693a0' sqlid='dk6k6dc9fww3t'
UPDATE "ORATT"."AWTTAB" SET "B" = :"SYS_B_0" WHERE "A" = :"SYS_B_1"
END OF STMT
PARSE #6328852:c=0,e=307,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=1356545142,tim=1355414103869995
BINDS #6328852:
 Bind#0
  oacdty=01 mxl=32(06) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=4b176ea0 bln=32 avl=06 flg=09
  value="test1"
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=4b176e80 bln=22 avl=02 flg=09
  value=1
EXEC #6328852:c=1000,e=494,p=0,cr=1,cu=1,mis=0,r=1,dep=1,og=1,plh=1356545142,tim=1355414103870584
STAT #6328852 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE AWTTAB (cr=1 pr=0 pw=0 time=265 us)'
STAT #6328852 id=2 cnt=1 pid=1 pos=1 obj=75894 op='INDEX UNIQUE SCAN SYS_C0011095 (cr=1 pr=0 pw=0 time=71 us cost=1 size=65 card=1)'
CLOSE #6328852:c=0,e=6,dep=1,type=0,tim=1355414103870744
===================
PARSING IN CURSOR #6328852 len=67 dep=1 uid=90 oct=6 lid=90 tim=1355414103870949 hv=317616249 ad='48b693a0' sqlid='dk6k6dc9fww3t'
UPDATE "ORATT"."AWTTAB" SET "B" = :"SYS_B_0" WHERE "A" = :"SYS_B_1"
END OF STMT
PARSE #6328852:c=0,e=124,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=1356545142,tim=1355414103870944
BINDS #6328852:
 Bind#0
  oacdty=01 mxl=32(06) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=48b7fb28 bln=32 avl=06 flg=09
  value="test2"
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=48b7fb10 bln=22 avl=02 flg=09
  value=1
EXEC #6328852:c=1000,e=240,p=0,cr=1,cu=1,mis=0,r=1,dep=1,og=1,plh=1356545142,tim=1355414103871275
STAT #6328852 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE AWTTAB (cr=1 pr=0 pw=0 time=50 us)'
STAT #6328852 id=2 cnt=1 pid=1 pos=1 obj=75894 op='INDEX UNIQUE SCAN SYS_C0011095 (cr=1 pr=0 pw=0 time=11 us cost=1 size=65 card=1)'
CLOSE #6328852:c=0,e=6,dep=1,type=0,tim=1355414103871426
...

But there is no information about a commit. Only after the last statement do we see:
PARSING IN CURSOR #6328852 len=67 dep=1 uid=90 oct=6 lid=90 tim=1355414103907559 hv=317616249 ad='48b693a0' sqlid='dk6k6dc9fww3t'
UPDATE "ORATT"."AWTTAB" SET "B" = :"SYS_B_0" WHERE "A" = :"SYS_B_1"
END OF STMT
PARSE #6328852:c=0,e=100,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=1356545142,tim=1355414103907555
BINDS #6328852:
 Bind#0
  oacdty=01 mxl=32(07) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=48b5f340 bln=32 avl=07 flg=09
  value="test100"
 Bind#1
  oacdty=02 mxl=22(02) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0100 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=48b5f330 bln=22 avl=02 flg=09
  value=1
EXEC #6328852:c=0,e=181,p=0,cr=1,cu=1,mis=0,r=1,dep=1,og=1,plh=1356545142,tim=1355414103907813
STAT #6328852 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE AWTTAB (cr=1 pr=0 pw=0 time=38 us)'
STAT #6328852 id=2 cnt=1 pid=1 pos=1 obj=75894 op='INDEX UNIQUE SCAN SYS_C0011095 (cr=1 pr=0 pw=0 time=7 us cost=1 size=65 card=1)'
CLOSE #6328852:c=0,e=5,dep=1,type=0,tim=13554141039080
...
XCTEND rlbk=0, rd_only=0, tim=1355414103908136

As you can see, TimesTen combines the transactions while preserving transactional order and applies them in an efficient way. For example, instead of executing the transactions in the original order:
update1 -> commit
update2 -> commit
...
update50 -> commit

TimesTen executed them like this:
update1 -> update2 ... -> update50 -> commit

This approach reduces the load on the Oracle database. So TimesTen not only gives you the opportunity to meet your SLA with very low response times; it can also decrease the load on your production Oracle DB.
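Conceptually, the behaviour seen in the trace can be sketched as a small simulation. This is my own illustration, not TimesTen code; the function and record names are hypothetical:

```python
# Conceptual simulation of how an AWT cache group can coalesce many
# single-statement transactions into one batch on the Oracle side.
# NOT TimesTen code; all names here are hypothetical.

def coalesce_commits(log):
    """Collect committed changes from `log`, a list of ("update", value)
    and ("commit",) records in application order, and return the batched
    updates plus the single commit the replication agent needs."""
    batch = []
    for record in log:
        if record[0] == "update":
            batch.append(record[1])
    return batch, 1  # one commit for the whole batch

# 50 single-row transactions, each followed by its own commit
log = []
for i in range(1, 51):
    log.append(("update", "test%d" % i))
    log.append(("commit",))

updates, commits = coalesce_commits(log)
print(len(updates), commits)  # 50 1
```

The point is that the agent preserves the statement order but needs only one commit (and thus one log file sync) on the Oracle side for the whole batch.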
Setting up Oracle DB to Oracle DB replication using Golden Gate
In this post I'd like to write about Oracle DB replication using Golden Gate. I am going to set up replication between two Oracle databases. Environment (two virtual machines):
- host db (192.168.2.131), OEL 5.2 x86, Oracle DB EE 11.2.0.3.0
- host db2 (192.168.2.132), OEL 5.2 x86, Oracle DB EE 11.2.0.3.0
1. First of all, install Golden Gate software on each Oracle DB host. I've already written about it here.
2. Prepare the source database (host db) for replication. Switch the database to archivelog mode:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

Enable minimal supplemental logging:
SQL> alter database add supplemental log data;

Database altered.

SQL>

Prepare the database for DDL replication. Turn off the recycle bin feature and make sure it is empty.
SQL> alter system set recyclebin=off scope=spfile;
SQL> PURGE DBA_RECYCLEBIN;

DBA Recyclebin purged.

SQL>

Create a schema that will contain the Oracle GoldenGate DDL objects.
SQL> create tablespace gg_tbls datafile '/u01/app/oracle/oradata/orcl/gg_tbls.dbf' size 100m reuse autoextend on;

Tablespace created.

SQL> create user ggate identified by oracle default tablespace gg_tbls quota unlimited on gg_tbls;

User created.

SQL> grant create session, connect, resource to ggate;

Grant succeeded.

SQL> grant dba to ggate; -- just in case

Grant succeeded.

SQL> grant execute on utl_file to ggate;

Grant succeeded.

SQL>

Change to the Golden Gate home directory and run the scripts that create all the objects needed to support DDL replication.
[oracle@db ~]$ cd /u01/app/oracle/product/gg/
[oracle@db gg]$ sqlplus / as sysdba
SQL> @marker_setup.sql
SQL> @ddl_setup.sql
SQL> @role_setup.sql
SQL> grant GGS_GGSUSER_ROLE to ggate;
SQL> @ddl_enable.sql

After that, add information about your Oracle GoldenGate DDL schema to the GLOBALS file: put the line "GGSCHEMA ggate" into the file.
[oracle@db gg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (db.us.oracle.com) 1> EDIT PARAMS ./GLOBALS

GGSCHEMA ggate

GGSCI (db.us.oracle.com) 2>

3. Create the schemas for replication on both hosts (db, db2).
3.1. Source database (db):
SQL> create user source identified by oracle default tablespace users temporary tablespace temp;

User created.

SQL> grant connect,resource,unlimited tablespace to source;

Grant succeeded.

SQL>

3.2. Target database (db2):
SQL> create user target identified by oracle default tablespace users temporary tablespace temp;

User created.

SQL> grant connect,resource,unlimited tablespace to target;

Grant succeeded.

SQL> grant dba to target; -- or particular grants

Grant succeeded.

SQL>

4. Create the directory for trail files on both hosts, and create the directory for the discard file on host db2 only.
[oracle@db ~] mkdir /u01/app/oracle/product/gg/dirdat/tr
[oracle@db2 ~] mkdir /u01/app/oracle/product/gg/dirdat/tr
[oracle@db2 ~] mkdir /u01/app/oracle/product/gg/discard

5. Create a standard reporting configuration (picture 1).
Picture 1
5.1. Configure the extracts on the source database (db).
Run ggsci and configure the manager process.
[oracle@db gg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (db.us.oracle.com) 1> edit params mgr

PORT 7809

GGSCI (db.us.oracle.com) 2> start manager
Manager started.

GGSCI (db.us.oracle.com) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING

GGSCI (db.us.oracle.com) 4>

Log in to the database and add supplemental primary key information to the log files.
GGSCI (db.us.oracle.com) 4> dblogin userid ggate
Password:
Successfully logged into database.

GGSCI (db.us.oracle.com) 5> ADD SCHEMATRANDATA source

2012-12-06 16:23:20  INFO    OGG-01788  SCHEMATRANDATA has been added on schema source.

GGSCI (db.us.oracle.com) 6>

NOTE: This is a very important step; if you skip it, you will not be able to replicate UPDATE statements. You will get errors like the following:
OCI Error ORA-01403: no data found, SQL
Aborting transaction on /u01/app/oracle/product/gg/dirdat/tr beginning at seqno 1 rba 4413
error at seqno 1 rba 4413
Problem replicating SOURCE.T1 to TARGET.T1
Record not found
Mapping problem with compressed update record (target format)...
* ID = NAME = test3 *
...

As you know, when you write an UPDATE statement you usually don't change the primary key, so the Oracle log files contain the changed column values but no information about the primary key. To avoid this situation, you should add this information to the log files using the ADD SCHEMATRANDATA or ADD TRANDATA commands. Now add the extracts (a regular extract and a data pump).
GGSCI (db.us.oracle.com) 6> add extract ext1, tranlog, begin now
EXTRACT added.

GGSCI (db.us.oracle.com) 7> add exttrail /u01/app/oracle/product/gg/dirdat/tr, extract ext1
EXTTRAIL added.

GGSCI (db.us.oracle.com) 8> edit params ext1

extract ext1
userid ggate, password oracle
exttrail /u01/app/oracle/product/gg/dirdat/tr
ddl include mapped objname source.*;
table source.*;

GGSCI (db.us.oracle.com) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     STOPPED     EXT1        00:00:00      00:01:29

GGSCI (db.us.oracle.com) 10> add extract pump1, exttrailsource /u01/app/oracle/product/gg/dirdat/tr , begin now
EXTRACT added.

GGSCI (db.us.oracle.com) 11> add rmttrail /u01/app/oracle/product/gg/dirdat/tr, extract pump1
RMTTRAIL added.

GGSCI (db.us.oracle.com) 12> edit params pump1

EXTRACT pump1
USERID ggate, PASSWORD oracle
RMTHOST db2, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/gg/dirdat/tr
PASSTHRU
table source.*;

GGSCI (db.us.oracle.com) 13> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     STOPPED     EXT1        00:00:00      00:02:33
EXTRACT     STOPPED     PUMP1       00:00:00      00:02:56

GGSCI (db.us.oracle.com) 14>
5.2. Configure the target database (db2).
Configure the manager process
[oracle@db2 gg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (db2) 1> edit params mgr

PORT 7809

GGSCI (db2) 2> start manager
Manager started.

GGSCI (db2) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING

Create the checkpoint table and update the GLOBALS file.
GGSCI (db2) 4> EDIT PARAMS ./GLOBALS

CHECKPOINTTABLE target.checkpoint

GGSCI (db2) 5> dblogin userid target
Password:
Successfully logged into database.

GGSCI (db2) 6> add checkpointtable target.checkpoint
Successfully created checkpoint table target.checkpoint.

GGSCI (db2) 7>

Add the replicat.
GGSCI (db2) 8> add replicat rep1, exttrail /u01/app/oracle/product/gg/dirdat/tr, begin now
REPLICAT added.

GGSCI (db2) 9> edit params rep1

REPLICAT rep1
ASSUMETARGETDEFS
USERID target, PASSWORD oracle
discardfile /u01/app/oracle/product/gg/discard/rep1_discard.txt, append, megabytes 10
DDL
map source.*, target target.*;

GGSCI (db2) 10> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    STOPPED     REP1        00:00:00      00:01:52

GGSCI (db2) 11>

5.3. Start the extracts and the replicat.
GGSCI (db.us.oracle.com) 6> start extract ext1
Sending START request to MANAGER ...
EXTRACT EXT1 starting

GGSCI (db.us.oracle.com) 7> start extract pump1
Sending START request to MANAGER ...
EXTRACT PUMP1 starting

GGSCI (db.us.oracle.com) 8> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:00      00:00:01
EXTRACT     RUNNING     PUMP1       00:00:00      00:01:01

GGSCI (db2) 6> start replicat rep1
Sending START request to MANAGER ...
REPLICAT REP1 starting

GGSCI (db2) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
REPLICAT    RUNNING     REP1        00:00:00      00:00:06

6. Check the replication. Host db:
[oracle@db gg]$ sqlplus source/oracle

SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 7 20:09:17 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning and Real Application Testing options

SQL> select * from t1;
select * from t1
              *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> create table t1 (id number primary key, name varchar2(50));

Table created.

SQL> insert into t1 values (1,'test');
1 row created.
SQL> insert into t1 values (2,'test');
1 row created.
SQL> commit;

Commit complete.

SQL>

Check host db2:
[oracle@db2 gg]$ sqlplus target/oracle

SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 7 20:09:41 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning and Real Application Testing options

SQL> select * from t1;

        ID NAME
---------- --------------------------------------------------
         1 test
         2 test

SQL>

As you can see, everything works. Let's execute some more SQL, including UPDATE statements of course. Host db:
SQL> delete t1 where id =2;
1 row deleted.
SQL> insert into t1 values (3,'test');
1 row created.
SQL> update t1 set name='test3' where id = 3;
1 row updated.
SQL> commit;

Commit complete.

SQL> select * from t1;

        ID NAME
---------- --------------------------------------------------
         1 test
         3 test3

SQL>

Let's check host db2:
SQL> select * from t1;

        ID NAME
---------- --------------------------------------------------
         1 test
         3 test3

SQL>
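The "Mapping problem with compressed update record" error mentioned in step 5.1 comes down to a simple fact: an update record that carries only the changed columns cannot be applied without a key to locate the target row. A tiny Python simulation (my own illustration, not GoldenGate code) shows the failure mode:

```python
# Conceptual simulation (NOT GoldenGate code) of the "compressed update
# record" problem: a trail record that carries only the changed columns
# cannot be applied unless it also carries the key that locates the row.
# ADD TRANDATA / ADD SCHEMATRANDATA make Oracle log that key.

target = {1: {"id": 1, "name": "test"}, 3: {"id": 3, "name": "test"}}

def apply_update(record):
    """Apply a trail record of the form {'key': ..., 'set': {...}}."""
    key = record.get("key")
    if key is None or key not in target:
        raise LookupError("Record not found")  # the replicat abends here
    target[key].update(record["set"])

# Without supplemental key logging the record has no key and fails:
try:
    apply_update({"set": {"name": "test3"}})
except LookupError as e:
    print(e)  # Record not found

# With the key logged, the row can be located and updated:
apply_update({"key": 3, "set": {"name": "test3"}})
print(target[3]["name"])  # test3
```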
Conclusion
Oracle GoldenGate is a software package for replicating data in heterogeneous environments. It is easy to use. Of course, in this post I have not considered questions related to initial load, conflict detection, high availability and so on; I am only starting to work with this product and will probably write about these features in the future. Additionally, I would recommend reading Alexander Ryndin's blog http://www.oraclegis.com/blog/; he is a Golden Gate guru. One more interesting resource is http://gavinsoorma.com/. All the best.
Labels:
Golden Gate,
Replication
Golden Gate installation on Oracle DB
I have already written here about using Golden Gate to transfer changes from Oracle DB to TimesTen. This combination works very well for heavily loaded systems where the regular TimesTen cache mechanism doesn't work properly. In the next posts I am going to describe how to set up this replication.
Let's start from Golden Gate installation. I am going to install Golden Gate on the following system: OEL 5.2 x86 + Oracle DB EE 11.2.0.3.
First of all, find the correct Golden Gate software for your system on edelivery.oracle.com. In this case, I install the "Oracle GoldenGate V11.2.1.0.1 for Oracle 11g on Linux x86" package.
You can use an existing OS user (as I do) or create a new one. If you create a new user, that user must be a member of the group that owns the Oracle instance. In this example I use the oracle user.
Create the Golden Gate directory, copy the archive there, unzip it and extract the tar archive.
[oracle@db odb_11g]$ mkdir /u01/app/oracle/product/gg
[oracle@db odb_11g]$ cp V32409-01.zip /u01/app/oracle/product/gg/
[oracle@db odb_11g]$ cd /u01/app/oracle/product/gg/
[oracle@db gg]$ ls
V32409-01.zip
[oracle@db gg]$ unzip V32409-01.zip
Archive:  V32409-01.zip
  inflating: fbo_ggs_Linux_x86_ora11g_32bit.tar
  inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf
  inflating: Oracle GoldenGate 11.2.1.0.1 README.txt
  inflating: Oracle GoldenGate 11.2.1.0.1 README.doc
[oracle@db gg]$
[oracle@db gg]$ ls
fbo_ggs_Linux_x86_ora11g_32bit.tar    Oracle GoldenGate 11.2.1.0.1 README.doc  V32409-01.zip
OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf  Oracle GoldenGate 11.2.1.0.1 README.txt
[oracle@db gg]$ tar -xf fbo_ggs_Linux_x86_ora11g_32bit.tar
[oracle@db gg]$ ls
bcpfmt.tpl                 defgen                              libxml2.txt
bcrypt.txt                 demo_more_ora_create.sql            logdump
cfg                        demo_more_ora_insert.sql            marker_remove.sql
chkpt_ora_create.sql       demo_ora_create.sql                 marker_setup.sql
cobgen                     demo_ora_insert.sql                 marker_status.sql
convchk                    demo_ora_lob_create.sql             mgr
db2cntl.tpl                demo_ora_misc.sql                   notices.txt
ddl_cleartrace.sql         demo_ora_pk_befores_create.sql      oggerr
ddlcob                     demo_ora_pk_befores_insert.sql      OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf
ddl_ddl2file.sql           demo_ora_pk_befores_updates.sql     Oracle GoldenGate 11.2.1.0.1 README.doc
ddl_disable.sql            dirjar                              Oracle GoldenGate 11.2.1.0.1 README.txt
ddl_enable.sql             dirprm                              params.sql
ddl_filter.sql             emsclnt                             prvtclkm.plb
ddl_nopurgeRecyclebin.sql  extract                             pw_agent_util.sh
ddl_ora10.sql              fbo_ggs_Linux_x86_ora11g_32bit.tar  remove_seq.sql
ddl_ora10upCommon.sql      freeBSD.txt                         replicat
ddl_ora11.sql              ggcmd                               retrace
ddl_ora9.sql               ggMessage.dat                       reverse
ddl_pin.sql                ggsci                               role_setup.sql
ddl_purgeRecyclebin.sql    help.txt                            sequence.sql
ddl_remove.sql             jagent.sh                           server
ddl_session1.sql           keygen                              sqlldr.tpl
ddl_session.sql            libantlr3c.so                       tcperrs
ddl_setup.sql              libdb-5.2.so                        ucharset.h
ddl_status.sql             libgglog.so                         ulg.sql
ddl_staymetadata_off.sql   libggrepo.so                        UserExitExamples
ddl_staymetadata_on.sql    libicudata.so.38                    usrdecs.h
ddl_tracelevel.sql         libicui18n.so.38                    V32409-01.zip
ddl_trace_off.sql          libicuuc.so.38                      zlib.txt
ddl_trace_on.sql           libxerces-c.so.28
[oracle@db gg]$

Set up LD_LIBRARY_PATH.
[oracle@db gg]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/gg

Run ggsci and create the subdirectories.
[oracle@db gg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (db.us.oracle.com) 1> create subdirs

Creating subdirectories under current directory /u01/app/oracle/product/gg

Parameter files              /u01/app/oracle/product/gg/dirprm: already exists
Report files                 /u01/app/oracle/product/gg/dirrpt: created
Checkpoint files             /u01/app/oracle/product/gg/dirchk: created
Process status files         /u01/app/oracle/product/gg/dirpcs: created
SQL script files             /u01/app/oracle/product/gg/dirsql: created
Database definitions files   /u01/app/oracle/product/gg/dirdef: created
Extract data files           /u01/app/oracle/product/gg/dirdat: created
Temporary files              /u01/app/oracle/product/gg/dirtmp: created
Stdout files                 /u01/app/oracle/product/gg/dirout: created

GGSCI (db.us.oracle.com) 2> exit

[oracle@db gg]$

At this point the Golden Gate installation is complete. As you can see, the installation is very easy and takes only a couple of minutes.
Labels:
Golden Gate
One more TimesTen video
One more TimesTen video, a swingbench performance test, by Svetoslav Gyurov.
Labels:
TimesTen
New type of data loading into TimesTen
Let's have a look at a new feature introduced in TimesTen version 11.2.2.4: the ability to load data into a TimesTen table based on the result of a query executed in an Oracle database. If you don't want to use the cache group and cache grid features, and you don't want to create special users and so on, but you would still like to cache some information in TimesTen, this new feature is for you.
Before using this feature you should make some preparations:
- Create a user (or use an existing one) on the Oracle DB side, with the necessary grants.
- The DSN should have the same database character set as the Oracle database.
- Specify the DSN attributes needed for the connection to the Oracle DB.
[oracle@db ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Nov 7 18:55:28 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning and Real Application Testing options

SQL> select d.username from dba_users d where user in ('TIMESTEN', 'CACHEADMIN');

USERNAME
------------------------------
SYS
SYSTEM
ORATT
...
31 rows selected.

SQL> select d.username from dba_users d where user in ('TIMESTEN', 'CACHEADMIN');

no rows selected

SQL>

As you can see, there are no specific TimesTen users either. In TimesTen, we need to specify some DSN attributes (OracleNetServiceName, DatabaseCharacterSet, OraclePWD). In this case I use the following DSN:
[ds_test]
Driver=/u01/app/oracle/product/11.2.2.4/TimesTen/tt11224/lib/libtten.so
DataStore=/u01/app/oracle/product/datastore/ds_test
LogDir=/u01/app/oracle/product/datastore/ds_test/log
PermSize=1024
TempSize=128
PLSQL=1
NLS_LENGTH_SEMANTICS=BYTE
DatabaseCharacterSet=WE8MSWIN1252
PLSQL_TIMEOUT=1000
OracleNetServiceName=orcl
OraclePWD=oracle
I need to create the oratt user in TimesTen as well.
Command> CREATE USER oratt IDENTIFIED BY oracle;

User created.

Command> grant create session to oratt;

That is all; I don't need to do anything else before loading the data. There are two methods of loading information into TimesTen:
- using the createandloadfromoraquery command
- using the ttTableSchemaFromOraQueryGet and ttLoadFromOracle built-in procedures
"The
ttIsql
utility provides the createandloadfromoraquery
command that, once provided the TimesTen table name and the SELECT
statement, will automatically create the TimesTen table, execute the SELECT
statement on Oracle, and load the result set into the TimesTen table."For example:
[oracle@nodett1 ~]$ ttisql "DSN=ds_test;UID=oratt;PWD=oracle"

Copyright (c) 1996-2011, Oracle. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.

connect "DSN=ds_test;UID=oratt;PWD=oracle";
Connection successful: DSN=ds_test;UID=oratt;DataStore=/u01/app/oracle/product/datastore/ds_test;DatabaseCharacterSet=WE8MSWIN1252;ConnectionCharacterSet=US7ASCII;DRIVER=/u01/app/oracle/product/11.2.2.4/TimesTen/tt11224/lib/libtten.so;LogDir=/u01/app/oracle/product/datastore/ds_test/log;PermSize=1024;TempSize=128;TypeMode=0;PLSQL_TIMEOUT=1000;CacheGridEnable=0;OracleNetServiceName=orcl;
(Default setting AutoCommit=1)
Command> tables;
0 tables found.
Command>
Command> createandloadfromoraquery t_sql_load select level id,
       > level val_1,
       > level val_2,
       > level val_3
       > from dual
       > connect by level <= 100;
Mapping query to this table:
CREATE TABLE "ORATT"."T_SQL_LOAD" (
"ID" number,
"VAL_1" number,
"VAL_2" number,
"VAL_3" number
)
Table t_sql_load created
100 rows loaded from oracle.
Command> tables;
  ORATT.T_SQL_LOAD
1 table found.
Command> select count(*) from t_sql_load;
< 100 >
1 row found.
Command>
As you can see, I loaded the result of the SQL query into the t_sql_load table. In this case TimesTen created the t_sql_load table for me, but I can also load the data into an existing table.
Command> truncate table t_sql_load;
Command> select count(*) from t_sql_load;
< 0 >
1 row found.
Command> createandloadfromoraquery t_sql_load select level id,
       > level val_1,
       > level val_2,
       > level val_3
       > from dual
       > connect by level <= 100;
Warning  2207: Table ORATT.T_SQL_LOAD already exists
100 rows loaded from oracle.
Command> select count(*) from t_sql_load;
< 100 >
1 row found.
Command>

In this case I received the 2207 warning. It is very simple :) Let's have a look at the second option. The documentation says: "The ttTableSchemaFromOraQueryGet built-in procedure evaluates the user-provided SELECT statement to generate a CREATE TABLE statement that can be executed to create a table on TimesTen, which would be appropriate to receive the result set from the SELECT statement. The ttLoadFromOracle built-in procedure executes the SELECT statement on Oracle and loads the result set into the TimesTen table." For example:
Command> drop table t_sql_load;
Command> tables;
0 tables found.
Command>
Command> call ttTableSchemaFromOraQueryGet('oratt','t_sql_load','select level id,level val_1, level val_2,level val_3 from dual connect by level <= 100');
< CREATE TABLE "ORATT"."T_SQL_LOAD" (
"ID" number,
"VAL_1" number,
"VAL_2" number,
"VAL_3" number
) >
1 row found.
Command>

The ttTableSchemaFromOraQueryGet procedure tells us which CREATE TABLE statement to execute manually. OK, let's execute it.
Command> CREATE TABLE "ORATT"."T_SQL_LOAD" ( > "ID" number, > "VAL_1" number, > "VAL_2" number, > "VAL_3" number > ); Command> tables; ORATT.T_SQL_LOAD 1 table found. Command> call ttloadfromoracle ('oratt','t_sql_load','select level id,level val_1, level val_2,level val_3 from dual connect by level <= 100'); < 100 > 1 row found. Command> select count(*) from t_sql_load; < 100 > 1 row found. Command>Now, we loaded all necessary information into TimesTen. It is also very easy. The second method is more flexible I think. So, now you can see how easy to load data from Oracle DB into TimesTen using this sql query feature. One more thing. As you now, cache groups in TimesTen have some naming restrictions. For example, you can't create a cache group on a table in Oracle DB which contains symbol "#". This feature helps in this situation as well. For example:
SQL> show user USER is "ORATT" SQL> create table tab# as 2 select level id, 3 level val_1, 4 level val_2, 5 level val_3 6 from dual 7 connect by level <= 100; Table created. SQL>In TimesTen:
Command> drop table t_sql_load;
Command> tables;
0 tables found.
Command> createandloadfromoraquery "oratt.tab#" select * from tab#;
Mapping query to this table:
CREATE TABLE "ORATT"."TAB#" (
"ID" number,
"VAL_1" number,
"VAL_2" number,
"VAL_3" number
)
Table tab# created
100 rows loaded from oracle.
Command> tables;
  ORATT.TAB#
1 table found.
Command> select count(*) from tab#;
< 100 >
1 row found.
Command>
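The schema-generation step performed by ttTableSchemaFromOraQueryGet can be sketched conceptually as follows. This is a simulation in Python, not TimesTen code; the helper works from an explicit (column, type) list rather than a live Oracle cursor, and its name is mine:

```python
# Sketch of the idea behind ttTableSchemaFromOraQueryGet: derive a CREATE
# TABLE statement from the shape of a query result set. Hypothetical
# helper, not a TimesTen API.

def create_table_ddl(owner, table, columns):
    cols = ",\n".join('"%s" %s' % (name.upper(), dtype) for name, dtype in columns)
    return 'CREATE TABLE "%s"."%s" (\n%s\n)' % (owner.upper(), table.upper(), cols)

ddl = create_table_ddl("oratt", "t_sql_load",
                       [("id", "number"), ("val_1", "number"),
                        ("val_2", "number"), ("val_3", "number")])
print(ddl)
```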
Oracle TimesTen 11.2.2.4.0 was released
The new TimesTen version is now available on OTN (http://www.oracle.com/technetwork/products/timesten/downloads/index.html).
Some of the new features:
- This release contains an Index Adviser that can be used to recommend a set of indexes that can improve the performance of a specific workload.
- You can now load a TimesTen table with the result of a query executed on an Oracle database. This feature does not require you to create a cache group.
- The TimesTen Cache Advisor provides recommendations on how to initially configure a cache schema, to identify porting issues and to estimate the performance improvement of a specific workload. The Cache Advisor is available on Linux x86-64.
PS. What will be in the TimesTen 12c version?
Labels:
11.2.2
PLSQL Server Pages
Recently, I received a very interesting task at my job: I have to create a colourful report that uses different colours depending on values (if a value is less than 5, the cell should be green; if it is between 5 and 9, yellow; otherwise red).
There are a lot of ways to create a report like this. The first approach is based on using a BI tool (such as SAS, Oracle BI, BO, etc.). These tools allow you to create a report and export all the data to an Excel file. There is only one disadvantage: they cost money, so in my case I can't use them because I don't have a budget for that.
The second approach is based on exporting a CSV file from the Oracle DB using PL/SQL, but in that case there is no way to control the cell colours, so I decided to use the PL/SQL Server Pages feature.
Let's create a simple example. First of all, create a test tablespace and a user to own the objects.
SQL> create tablespace gena_tbls datafile '/u01/app/oracle/oradata/orcl/gena.dbf' size 100m autoextend on; Tablespace created. SQL> create user gena identified by gena default tablespace gena_tbls; User created. SQL> grant connect, resource to gena; Grant succeeded.Ensure that the database account ANONYMOUS is unlocked.
SQL> alter user anonymous account unlock; User altered.Create a simple table.
SQL> connect gena/gena Connected. SQL> create table t1 ( indicator_name varchar2(50), value number); Table created. SQL> insert into t1 values ('INDIC 1', 1); 1 row created. SQL> insert into t1 values ('INDIC 2', 6); 1 row created. SQL> insert into t1 values ('INDIC 3', 10); 1 row created. SQL> insert into t1 values ('INDIC 4', 4); 1 row created. SQL> insert into t1 values ('INDIC 5', 8); 1 row created. SQL> commit; Commit complete. SQL>Log on to the database as an XML DB administrator (SYS in this case), that is a user with the XDBADMIN role and create the DAD.
SQL> connect / as sysdba Connected. SQL> exec dbms_epg.create_dad('gena_dad', '/gena_report/*'); PL/SQL procedure successfully completed. SQL>Set the DAD attribute database-username to the database user whose privileges must be used by the DAD.
SQL> exec dbms_epg.set_dad_attribute('gena_dad', 'database-username', 'gena'); PL/SQL procedure successfully completed. SQL>Grant EXECUTE privilege to the database user GENA whose privileges must be used by the DAD.
SQL> grant execute on dbms_epg to gena; Grant succeeded. SQL>Log on to the database as the database user whose privileges must be used by the DAD and authorize the embedded PL/SQL gateway to invoke procedures and access document tables through the DAD.
SQL> connect gena/gena Connected. SQL> exec dbms_epg.authorize_dad('gena_dad'); PL/SQL procedure successfully completed. SQL>Create a sample PL/SQL stored procedure. This procedure creates an HTML page that includes the result set of a query of gena.t1.
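The post does not show the body of the print_indicators procedure itself, so here is a minimal sketch of its colour logic in Python; the function names and the HTML layout are my own assumptions, not the author's PL/SQL code:

```python
# Hypothetical sketch of the report's colour rule described above:
# value < 5 -> green, 5..9 -> yellow, otherwise red.
def cell_colour(value):
    if value < 5:
        return "green"
    if value <= 9:
        return "yellow"
    return "red"

def render_row(indicator, value):
    # Emit one HTML table row; the value cell gets a background colour
    # chosen by the threshold rule.
    return ('<tr><td>%s</td><td bgcolor="%s">%s</td></tr>'
            % (indicator, cell_colour(value), value))

if __name__ == "__main__":
    # Sample data mirroring the gena.t1 rows from the post.
    rows = [("INDIC 1", 1), ("INDIC 2", 6), ("INDIC 3", 10),
            ("INDIC 4", 4), ("INDIC 5", 8)]
    print("<table>")
    for name, val in rows:
        print(render_row(name, val))
    print("</table>")
```

In the actual PSP procedure the same rule would drive the bgcolor attribute emitted via the htp package.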
Ensure that the listener is able to handle HTTP requests.
[oracle@db ~]$ lsnrctl status listener LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 31-JUL-2012 12:24:24 Copyright (c) 1991, 2011, Oracle. All rights reserved. Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias listener Version TNSLSNR for Linux: Version 11.2.0.3.0 - Production Start Date 31-JUL-2012 12:24:00 Uptime 0 days 0 hr. 0 min. 24 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora Listener Log File /u01/app/oracle/diag/tnslsnr/db/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=db)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=db)(PORT=8080))(Presentation=HTTP)(Session=RAW)) Services Summary... Service "orcl" has 1 instance(s). Instance "orcl", status READY, has 1 handler(s) for this service... Service "orclXDB" has 1 instance(s). Instance "orcl", status READY, has 1 handler(s) for this service... The command completed successfully [oracle@db ~]$Run a web browser and put the following address: http://db:8080/gena_report/print_indicators
Labels:
PL/SQL
NVL bug in TimesTen 11.2.2.2.0
Recently, I found a message on the TimesTen forum about weird NVL function behaviour and decided to run a test (a funny bug).
[oracle@nodett1 ~]$ ttversion TimesTen Release 11.2.2.2.0 (32 bit Linux/x86) (tt1122:53392) 2011-12-23T09:21:34Z Instance admin: oracle Instance home directory: /u01/app/oracle/product/11.2.2/TimesTen/tt1122 Group owner: oinstall Daemon home directory: /u01/app/oracle/product/11.2.2/TimesTen/tt1122/info PL/SQL enabled. [oracle@nodett1 ~]$
Command> create table test ( id number, name_1 varchar2(20), name_2 varchar2(20)); Command> select nvl(name_1, 123) t1, nvl(name_2, 123) t2 from test; 0 rows found. Command> insert into test values (1,'test',null); 1 row inserted. Command> select * from test; < 1, test, <null> > 1 row found. Command> select nvl(name_1, 123) t1, nvl(name_2, 123) t2 from test; 2922: Invalid number type value 0 rows found. The command failed. Command> Command> select nvl(name_1, 123) from test; 2922: Invalid number type value 0 rows found. The command failed. Command> select nvl(name_2, 123) from test; < 123 > 1 row found. Command> Command> insert into test values (2,null,'test'); 1 row inserted. Command> select * from test; < 1, test, <null> > < 2, <null>, test > 2 rows found. Command> Command> select nvl(name_1, 123) t1, nvl(name_2, 123) t2 from test; 2922: Invalid number type value 0 rows found. The command failed. Command> Command> select nvl(name_1, 123) from test; 2922: Invalid number type value 0 rows found. The command failed. Command> select nvl(name_2, 123) from test; < 123 > 2922: Invalid number type value 1 row found. The command failed. Command>It looks like TimesTen tries to convert the first expression into the second expression's data type. I especially like the last statement :). Be careful with bugs.
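In Oracle, NVL determines the result type from the first argument and converts the second argument to it; the failures above look as if the conversion went the other way. A tiny Python model of the expected, Oracle-style behaviour, with `column_type` standing in for the column's declared type (my own simplification, not database code):

```python
# Oracle-style NVL: the result takes the FIRST argument's (the column's)
# declared type, so the fallback is converted to that type, never the
# other way around.
def nvl(value, default, column_type=str):
    if value is not None:
        return value  # already of the column's type
    return column_type(default)

# nvl(name_1, 123) on a VARCHAR2 column should therefore yield the
# string '123' when name_1 is NULL, and never try to parse 'test'
# as a number when name_1 is not NULL.
print(nvl("test", 123))  # test
print(nvl(None, 123))    # 123 (as a string)
```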
SQL query and columnar compression
I've already written about Columnar Compression in TimesTen, and now I would like to share an interesting point about it.
Environment:
[oracle@nodett1 sql]$ ttversion TimesTen Release 11.2.2.2.0 (32 bit Linux/x86) (tt1122:53392) 2011-12-23T09:21:34Z Instance admin: oracle Instance home directory: /u01/app/oracle/product/11.2.2/TimesTen/tt1122 Group owner: oinstall Daemon home directory: /u01/app/oracle/product/11.2.2/TimesTen/tt1122/info PL/SQL enabled. [oracle@nodett1 sql]$Let's create a compressed table and insert 1M rows.
Command> create table tab_comp (id number not null, > val_1 varchar2(40), > val_2 varchar2(40)) > compress (val_1 by dictionary maxvalues = 255) optimized for read; Command> desc tab_comp; Table GENA.TAB_COMP: Columns: ID NUMBER NOT NULL VAL_1 VARCHAR2 (40) INLINE VAL_2 VARCHAR2 (40) INLINE COMPRESS ( VAL_1 BY DICTIONARY MAXVALUES=255 ) OPTIMIZED FOR READ 1 table found. (primary key columns are indicated with *) Command> alltables; GENA.CD$_1086048_2 GENA.TAB_COMP ... 26 tables found. Command> Command> desc CD$_1086048_2; Table GENA.CD$_1086048_2: Columns: *VAL_1 VARCHAR2 (40) INLINE ##CD_REFCNT TT_INTEGER NOT NULL 1 table found. (primary key columns are indicated with *) Command> Command> begin > for i in 1 .. 1000000 loop > insert into tab_comp > values (i, > '1234567890123456789012345678901234567890', > '1234567890123456789012345678901234567890'); > end loop; > end; > / PL/SQL procedure successfully completed. Command> select count(*) from tab_comp; < 1000000 > 1 row found. Command> select * from CD$_1086048_2; < 1234567890123456789012345678901234567890, 1000000 > 1 row found. Command>As you can see, I inserted the same value into the VAL_1 column one million times. The CD$_1086048_2 system table contains only one row, and for each of its rows the TAB_COMP table stores only a 1-byte pointer in the VAL_1 column to that single row in the CD$_1086048_2 table. After that I executed the following simple query (select count(val_1) from tab_comp;).
Command> autocommit 0; Command> showplan 1; Command> select count(val_1) from tab_comp; Query Optimizer Plan: STEP: 1 LEVEL: 2 OPERATION: TblLkSerialScan TBLNAME: TAB_COMP IXNAME: INDEXED CONDITION: NOT INDEXED: STEP: 2 LEVEL: 1 OPERATION: OneGroupGroupBy TBLNAME: IXNAME: INDEXED CONDITION: NOT INDEXED: < 1000000 > 1 row found. Command> Command> showplan 0; Command> timing 1; Command> select count(val_1) from tab_comp; < 1000000 > 1 row found. Execution time (SQLExecute + Fetch Loop) = 0.050841 seconds. Command> select count(val_1) from tab_comp; < 1000000 > 1 row found. Execution time (SQLExecute + Fetch Loop) = 0.054260 seconds. Command> select count(val_1) from tab_comp; < 1000000 > 1 row found. Execution time (SQLExecute + Fetch Loop) = 0.046867 seconds. Command> select count(val_1) from tab_comp; < 1000000 > 1 row found. Execution time (SQLExecute + Fetch Loop) = 0.049661 seconds. Command>
I was confused!!! The TimesTen optimizer chose a full scan of the TAB_COMP table, and of course it took a long time (~0.05 seconds). In the TimesTen documentation I found these sentences: 'You can compress tables at the column level, which stores data more efficiently. This eliminates redundant storage of duplicate values within columns and improves the performance of SQL queries that perform full table scans.' So I was expecting the optimizer to rewrite the query into something like select count('##CD_REFCNT') from CD$_1086048_2, especially taking into account that all the information the optimizer needs can be found in the CD$_1086048_2 table.
Command> select count('##CD_REFCNT') from CD$_1086048_2; < 1 > 1 row found. Execution time (SQLExecute + Fetch Loop) = 0.000058 seconds. Command> select count('##CD_REFCNT') from CD$_1086048_2; < 1 > 1 row found. Execution time (SQLExecute + Fetch Loop) = 0.000081 seconds.Unfortunately, my assumptions were not confirmed. I hope the TimesTen development team will include this feature in a future release.
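For illustration, dictionary compression of a single column can be sketched in Python. The `dictionary` and `refcount` structures below loosely mirror the CD$ table and its ##CD_REFCNT column; the point is that a count over the compressed column is derivable from the reference counts alone, without scanning the base table. This is my sketch of the idea, not TimesTen internals:

```python
from collections import Counter

# Minimal model of dictionary (columnar) compression for one column:
# distinct values live in a dictionary table (like CD$_1086048_2) together
# with a reference count; the base table stores only small codes.
values = ["1234567890123456789012345678901234567890"] * 1_000_000

dictionary = {}       # value -> code (the "pointer" stored in each row)
refcount = Counter()  # code  -> number of rows referencing that value
column = []           # compressed column: one small code per row

for v in values:
    code = dictionary.setdefault(v, len(dictionary))
    refcount[code] += 1
    column.append(code)

# count(val_1) could, in principle, be answered from the refcounts alone:
print(len(dictionary))         # distinct values in the dictionary
print(sum(refcount.values()))  # total non-NULL rows, no base-table scan
```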
Client-side load balancing in Oracle RAC 11GR2
In this post I would like to write about client-side load balancing in Oracle RAC 11gR2. To begin, let's have a look at how this feature worked in previous versions (before 11gR2).
One interesting point is that the Oracle client only resolves the SCAN name and transforms it into an address list of SCAN VIPs, as shown further below. This problem was described in detail in Igor Melnikov's post.
We have a four-node cluster and use a simple TNS descriptor for the connection:
RACDB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.us.oracle.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.us.oracle.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip.us.oracle.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = rac4-vip.us.oracle.com)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = racdb) ) )In this case, the client will always connect to the first node only (the local listener on rac1, rac1-vip.us.oracle.com), provided it is available of course. To avoid this situation, we had to use the LOAD_BALANCE parameter.
RACDB = (DESCRIPTION = (LOAD_BALANCE=ON) (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.us.oracle.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.us.oracle.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip.us.oracle.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = rac4-vip.us.oracle.com)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = racdb) ) )Now Oracle Net will choose one address from the list in a random sequence, which allows clients to balance the load across all four listeners.
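The effect of LOAD_BALANCE=ON can be modelled in a few lines of Python: for each new connection the client simply picks an address from the ADDRESS list at random. This is an illustrative model of the behaviour, not Oracle Net source code:

```python
import random

# Client-side load balancing model: each new connection picks a random
# address from the descriptor's ADDRESS list.
addresses = [
    "rac1-vip.us.oracle.com",
    "rac2-vip.us.oracle.com",
    "rac3-vip.us.oracle.com",
    "rac4-vip.us.oracle.com",
]

def pick_address(addr_list):
    return random.choice(addr_list)

# Over many connections the load spreads across all four listeners:
hits = {a: 0 for a in addresses}
for _ in range(10_000):
    hits[pick_address(addresses)] += 1
print(hits)
```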
That is how it worked in versions before 11gR2. How does it work in the new version? 11gR2 introduced the SCAN (Single Client Access Name). The SCAN is a new 'layer' between clients and the local listeners in the cluster that allows you to change your cluster configuration (i.e. add or remove nodes) without making configuration changes on the clients. It means that you can now use a single name to connect to the RAC instead of a set of VIPs.
RACDB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.cluster.us.oracle.com)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = racdb) ) )The SCAN is configured during the installation of Oracle Grid Infrastructure. There are 2 options for defining the SCAN:
- Define the SCAN in your corporate DNS (Domain Name Service)
- Use the Grid Naming Service (GNS).
With the first option, your network administrator has to create a single name in the DNS server that resolves to up to three IP addresses using a round-robin algorithm. Three IP addresses are recommended for load balancing and high availability, regardless of the number of servers in the cluster (although using two addresses is also possible). The IP addresses must be on the same subnet as your public network in the cluster. For example:
rac-scan.cluster.us.oracle.com IN A 192.168.2.141 IN A 192.168.2.142 IN A 192.168.2.143That means that if you connect to the RAC using the RACDB1 descriptor and corporate DNS, the load is balanced at the DNS level (using a round-robin algorithm). Each time you resolve the rac-scan.cluster.us.oracle.com name, the DNS will return a different SCAN VIP.
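The DNS round-robin behaviour can be sketched like this (an illustrative model only; a real DNS server rotates the record set it returns, so successive clients start from a different SCAN VIP):

```python
from itertools import cycle

# Round-robin DNS model for the SCAN name: each lookup returns the
# address list rotated by one position.
scan_vips = ["192.168.2.141", "192.168.2.142", "192.168.2.143"]

def make_resolver(vips):
    rotations = cycle(range(len(vips)))
    def resolve(name):
        i = next(rotations)
        return vips[i:] + vips[:i]
    return resolve

resolve = make_resolver(scan_vips)
print(resolve("rac-scan.cluster.us.oracle.com")[0])  # 192.168.2.141
print(resolve("rac-scan.cluster.us.oracle.com")[0])  # 192.168.2.142
print(resolve("rac-scan.cluster.us.oracle.com")[0])  # 192.168.2.143
```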
You can also install Grid Infrastructure without DNS. In this case, you would use a hosts-file entry to resolve the SCAN to one IP address. Since you are using one (and only one) SCAN VIP, there is no opportunity for load balancing at the SCAN level, but you can still use load balancing across the node VIPs as described above. The second option is using GNS. Using GNS assumes you have a DHCP service (which delivers the IPs for the SCAN VIPs, among others) and a corporate DNS available on your public network. During the cluster configuration you define the delegated subdomain and the GNS VIP; before that, you should set up the DNS for subdomain delegation. In this case, three IP addresses will be acquired from the DHCP service to create the SCAN, and name resolution for the SCAN will be provided by the GNS.
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = SCAN-vip1)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = SCAN-vip2)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = SCAN-vip3)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = racdb) ) ), where SCAN-vipN is a SCAN VIP address. As you can see, in this case clients would connect to the first SCAN VIP only (SCAN-vip1 in the example). To avoid this situation, use the LOAD_BALANCE parameter:
RACDB1 = (DESCRIPTION = (LOAD_BALANCE=ON) (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.cluster.us.oracle.com)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = racdb) ) )
According to the Oracle documentation, this parameter defaults to 'ON' only when a DESCRIPTION_LIST is specified. Therefore, for load balancing in this case, you should explicitly set the LOAD_BALANCE parameter to 'ON', even when the SCAN is used.
Conclusion
In conclusion, I would like to emphasize that the client-side load balancing described above is transparent to the application and requires no application modifications for any of the interfaces based on OCI (OCI, ODBC, JDBC, ADO DB, BDE).
Labels:
11GR2,
Clusterware,
RAC
Columnar compression in TimesTen
In this article I would like to write about Columnar Compression in TimesTen.
Oracle TimesTen 11.2.2 introduced several new features. One of them is Columnar Compression: tables can be compressed at the column level, which stores data more efficiently.
The compression unit in TimesTen is a compressed column group, which consists of a set of columns. You can include one or more columns in a compressed column group; however, a column can be included in only one compressed column group. Each compressed column group is limited to a maximum of 16 columns.
TimesTen and Memory allocation
In this post I would like to write about Columnar Compression in TimesTen, but before that I should give a short introduction to the TimesTen storage method. As you know, TimesTen stores data in two different ways:
- IN_LINE - for columns with a fixed length, or columns with a variable length whose declared column length is <= 128 bytes (by default). This method gives the best performance.
- OUT_OF_LINE - for columns whose declared column length is > 128 bytes (by default). Out-of-line columns are not stored contiguously with the row but are allocated separately. Accessing out-of-line columns is slightly slower than accessing in-line columns. LOB data types are stored out-of-line.
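Under the default settings described above, the storage decision can be sketched as follows (a simplification built only on the 128-byte default threshold; the real rules also depend on data type and tuning parameters):

```python
# Default inline threshold for variable-length columns, per the rules above.
INLINE_THRESHOLD = 128

def storage_method(declared_length, fixed_length=False):
    """Return 'INLINE' or 'NOT INLINE' for a column under default rules:
    fixed-length columns and short variable-length columns go inline."""
    if fixed_length or declared_length <= INLINE_THRESHOLD:
        return "INLINE"
    return "NOT INLINE"

print(storage_method(50))   # INLINE      (like VARCHAR2(50))
print(storage_method(129))  # NOT INLINE  (like VARCHAR2(129))
```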
Command> create table t (in_line varchar2 (50), > out_of_line varchar2(129)); Command> desc t; Table GENA.T: Columns: IN_LINE VARCHAR2 (50) INLINE OUT_OF_LINE VARCHAR2 (129) NOT INLINE 1 table found. (primary key columns are indicated with *)
As you can see, the IN_LINE column stores data INLINE and the OUT_OF_LINE column stores data NOT INLINE. You can also explicitly specify the storage method:
Command> create table t2 (in_line varchar2 (50), > out_of_line varchar2(129) inline); Command> desc t2; Table GENA.T2: Columns: IN_LINE VARCHAR2 (50) INLINE OUT_OF_LINE VARCHAR2 (129) INLINE 1 table found. (primary key columns are indicated with *) Command>
Now, let’s create a simple table:
Command> create table emp (id number not null, > creation date, > name_1 varchar2(40), > name_2 varchar2(40), > surname varchar2(40), > dept_id number); Command> desc emp; Table GENA.EMP: Columns: ID NUMBER NOT NULL CREATION DATE NAME_1 VARCHAR2 (40) INLINE NAME_2 VARCHAR2 (40) INLINE SURNAME VARCHAR2 (40) INLINE DEPT_ID NUMBER 1 table found. (primary key columns are indicated with *)
What size does it have? We can use a new TimesTen feature for that. If you want to know the size of a table, you first compute it using the ttComputeTabSizes('table_name') built-in procedure; after that you can see the table size in the SYS.ALL_TAB_SIZES table or via the tablesize table_name command:
Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 0 NUM_USED_ROWS: 0 NUM_FREE_ROWS: 0 AVG_ROW_LEN: Not computed OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 784 LAST_UPDATED: 2012-01-19 19:37:07.000000 1 table found. Command>
As you can see, there is no data in the EMP table yet. We see only the metadata size (784 bytes), but we can get more information from the data dictionary:
Command> VARIABLE TABNAME VARCHAR2(50) := 'EMP'; Command> SET AUTOVARIABLES ON; Command> select COLNAME, > COLLEN, > INLINELEN > from SYS.COLUMNS > where id = ( select TBLID > from sys.tables > where tblname = :TABNAME); < CREATION , 7, 7 > < DEPT_ID , 22, 22 > < ID , 22, 22 > < NAME_1 , 40, 44 > < NAME_2 , 40, 44 > < SURNAME , 40, 44 > 6 rows found. Command> select sum(COLLEN) SUM_COL_LEN, > sum(INLINELEN) SUM_INLINE_LEN > from SYS.COLUMNS > where id = ( select TBLID > from sys.tables > where tblname = :TABNAME); < 171, 183 > 1 row found. Command> select tblname, LENGTH from sys.tables where tblname = :TABNAME; < EMP , 192 > 1 row found. Command>
The data dictionary contains information about the EMP table. The SYS.COLUMNS table contains column information, including data types, the length of each column (SYS.COLUMNS.COLLEN) and how many bytes a given column contributes to the inline width of a row (SYS.COLUMNS.INLINELEN). Additionally, the SYS.TABLES table contains the length of the inline portion of each row (SYS.TABLES.LENGTH). Now, let's insert a row into the table:
Command> insert into emp values (1,sysdate,'gena', 'gena', 'gena', 1); 1 row inserted. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 52080 NUM_USED_ROWS: 1 NUM_FREE_ROWS: 255 AVG_ROW_LEN: 206 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 52864 LAST_UPDATED: 2012-01-19 19:37:54.000000 1 table found. Command>
Let's analyze what we've got. First of all, an INLINE page was allocated for storing 256 rows, and we get information about the rows already stored and the free rows (NUM_USED_ROWS = 1 and NUM_FREE_ROWS = 255). In this example all data is stored inside the INLINE page, which is why OUT_OF_LINE_BYTES = 0. Additionally, we see the average row length (AVG_ROW_LEN = 206) and the total table size (TOTAL_BYTES = 52864).
Let's insert 255 rows into EMP table.
Command> begin > for i in 2 .. 256 loop > insert into emp > values (i, > sysdate, > '1234567890123456789012345678901234567890', > '1234567890123456789012345678901234567890', > '1234567890123456789012345678901234567890', > i); > end loop; > end; > / PL/SQL procedure successfully completed. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 52080 NUM_USED_ROWS: 256 NUM_FREE_ROWS: 0 AVG_ROW_LEN: 206 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 52864 LAST_UPDATED: 2012-01-24 15:49:40.000000 1 table found. Command>
Nothing changes except NUM_USED_ROWS and NUM_FREE_ROWS parameters. Let's insert one more row.
Command> insert into emp values (257,sysdate,'gena', 'gena', 'gena', 1); 1 row inserted. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 104160 NUM_USED_ROWS: 257 NUM_FREE_ROWS: 255 AVG_ROW_LEN: 204 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 104944 LAST_UPDATED: 2012-01-24 15:50:29.000000 1 table found. Command>
One more INLINE page was allocated to store the next 256 rows. The page size is the same (52080*2=104160). The metadata has the same size (784). But the average row size decreased to 204 bytes. What will happen if I delete the last row?
Command> delete from emp where id=257; 1 row deleted. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 52080 NUM_USED_ROWS: 256 NUM_FREE_ROWS: 0 AVG_ROW_LEN: 206 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 52864 LAST_UPDATED: 2012-01-24 18:00:57.000000 1 table found. Command>
I deleted the only row in the new block, which is why the block was deallocated. What if I delete a row from the first block?
Command> insert into emp values (257,sysdate,'gena', 'gena', 'gena', 1); 1 row inserted. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 104160 NUM_USED_ROWS: 257 NUM_FREE_ROWS: 255 AVG_ROW_LEN: 204 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 104944 LAST_UPDATED: 2012-01-24 18:07:47.000000 1 table found. Command> delete from emp where id=1; 1 row deleted. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 104160 NUM_USED_ROWS: 256 NUM_FREE_ROWS: 256 AVG_ROW_LEN: 204 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 104944 LAST_UPDATED: 2012-01-24 18:08:02.000000 1 table found. Command>
The INLINE_ALLOC_BYTES value didn't change (104160) because I deleted the row from first block. Let's delete more rows from first block.
Command> delete from emp where id between 3 and 256; 254 rows deleted. Command> call ttComputeTabSizes('emp'); Command> tablesize emp; Sizes of GENA.EMP: INLINE_ALLOC_BYTES: 104160 NUM_USED_ROWS: 2 NUM_FREE_ROWS: 510 AVG_ROW_LEN: 204 OUT_OF_LINE_BYTES: 0 METADATA_BYTES: 784 TOTAL_BYTES: 104944 LAST_UPDATED: 2012-01-24 18:17:42.000000 1 table found. Command>
TimesTen allocated two pages (104160 bytes) to store only two rows. Of course, subsequent inserts will go into the free space in the first block, which avoids fragmentation, but you should be aware of this behaviour.
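The page-allocation arithmetic seen in the tablesize output above can be sketched as follows. The constants are taken from this post's output (256 rows per page, 52080 bytes per INLINE page, 784 bytes of metadata); note that this models allocation on insert only, since, as shown above, a partially emptied page is not deallocated:

```python
import math

# Constants observed in the tablesize output in this post.
ROWS_PER_PAGE = 256
PAGE_BYTES = 52080
METADATA_BYTES = 784

def inline_alloc_bytes(num_rows):
    # Pages are allocated whole: 1..256 rows -> 1 page, 257..512 -> 2, etc.
    pages = math.ceil(num_rows / ROWS_PER_PAGE)
    return pages * PAGE_BYTES

def total_bytes(num_rows):
    return inline_alloc_bytes(num_rows) + METADATA_BYTES

print(inline_alloc_bytes(256))  # 52080  (one page)
print(inline_alloc_bytes(257))  # 104160 (two pages)
print(total_bytes(257))         # 104944
```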
Conclusion
A lot of new features were introduced in the new TimesTen version. One of them is the ability to find out the actual size of a table, and I think this functionality is very easy to use.
TimesTen 11.2.2 is now available
TimesTen 11.2.2 is now available on the OTN site. I was looking forward to this release because a lot of new functionality was introduced in 11.2.2. I am particularly interested in the functionality related to Exalytics (TimesTen columnar compression, full LOB support, etc.). There is not much information about Exalytics, and there is only one man outside of Oracle who has access to it. That is Mark Rittman
Let's start. The most important new functionality is as follows:
1. In-memory columnar compression of table
Actually, I can write nothing about it, because I don't know how it works :(, but I am going to find out and write about it ASAP.
2. A global query (a query executed on multiple members of an Oracle In-Memory Database Cache grid) can reference more than one table.
I've already written here about the Cache Grid and about the impossibility of reading the same data through different nodes. Additionally, there were some restrictions on SQL (only one table could be used in a SQL statement). In 11.2.2 this restriction was removed.
3. The ability to determine the current space usage of a table using the ttComputeTabSizes built-in procedure
A lot of customers ask: "How to get the actual size of a table in TimesTen?" (link). Now they have a tip for that :)
4. You can configure parallel propagation of changes in AWT cache tables to the corresponding Oracle tables.
One more replication feature was introduced. This functionality should increase the speed of propagating changes to the Oracle database. One partner asked me about this functionality; I hope he is very happy now :)
5. Analytic functions
I think this is the most important feature introduced in this release. When I worked in Oracle CIS, Andrey Pivovarov asked me to deliver a presentation and demo about using TimesTen as a source for Oracle BI (11.1.05) at this event. I got a lot of questions from the audience about support for analytic functions inside TimesTen. Now, as you can see, analytic functions are supported in TimesTen.
6. Implicit data type conversion
It can help developers because they can avoid error 2963 "Inconsistent datatypes" (example).
Resume
In my opinion, a lot of essential features were introduced in the new TimesTen release, and I will write about each of them in detail very soon.