Most developer tools out there have some mechanism to unload a table into a flat file, either as CSV or Excel, and some even allow you to unload the data as INSERT statements. The latter is pretty cool because it gives you a self-contained file that does not need Excel or DataPump or any tool beyond the one you’re probably already using to unload the data.

SQLcl and SQL Developer are perhaps the easiest to utilize for such an extract. You simply add the pseudo-hint /*insert*/ to your query to get the output as insert statements. For example:

SQLcl: Release 18.4 Production on Wed Apr 17 17:05:49 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Wed Apr 17 2019 17:05:49 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

SQL> select /*insert*/ * from t where rownum <= 10;
REM INSERTING into T
SET DEFINE OFF;
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_FILE#_BLOCK#','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_OBJ3','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_TS1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CON1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','IND$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','CDEF$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','C_TS#','CLUSTER');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CCOL2','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_PROXY_DATA$','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CDEF4','INDEX');
...
...
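
As an aside, recent versions of SQLcl also have a dedicated formatter for this, so the pseudo-hint isn’t strictly required. A minimal sketch (check the SET options available in your SQLcl version):

set sqlformat insert
select * from t where rownum <= 10;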

That’s pretty cool, but no matter what tool you use, ultimately you end up with a set of insert statements that will load one row at a time into the database. For a small number of rows, that is no problem. But for larger numbers of rows, you might be sitting around for a long time waiting for that script to finish. There are two things that are going to slow us down here.

1) Parsing

Every insert is a brand new statement since it contains literal values. Even for a simple INSERT, that’s a lot of parsing work for the database. This is easily solved with a couple of ALTER SESSION wrappers at the head and tail of the script.

alter session set cursor_sharing = force;
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_FILE#_BLOCK#','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_OBJ3','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_TS1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CON1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','IND$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','CDEF$','TABLE');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','C_TS#','CLUSTER');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CCOL2','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_PROXY_DATA$','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CDEF4','INDEX');
...
...
alter session set cursor_sharing = exact;

That will replace the literals with bind variable values and hence all of the INSERTs will be sharable.
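
If you want to confirm that the sharing is really happening, a quick sanity check (my own addition, not part of the original script) is to query V$SQL after running a handful of the inserts; with CURSOR_SHARING=FORCE you should see a single statement whose text has the literals replaced with system-generated binds such as :"SYS_B_0".

select sql_text, executions
from   v$sql
where  upper(sql_text) like 'INSERT INTO T (OWNER,OBJECT_NAME,OBJECT_TYPE)%';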

2) Row by Row

A set of INSERT statements is inherently a row by row operation. One INSERT = one row. Of course, we developers have all had it drummed into our programming minds over the years that database operations should be done in sets, not row by row. But how can we get to that utopia with a set of insert statements? We could totally rewrite the script with bind arrays and the like, but that’s a big job. So here is my quick hack to convert single-row inserts into multi-row inserts without too much effort.

A set of individual INSERTS such as

Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_FILE#_BLOCK#','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_OBJ3','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_TS1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','I_CON1','INDEX');
Insert into T (OWNER,OBJECT_NAME,OBJECT_TYPE) values ('SYS','IND$','TABLE');

can also be written as a multi-table INSERT, where all insert targets are the same table and all WHEN conditions are omitted and hence are implicitly true. The above set of INSERTs can be phrased as a single statement like this:

insert all
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_FILE#_BLOCK#}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_OBJ3}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_TS1}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{I_CON1}',q'{INDEX}')
into t(OWNER,OBJECT_NAME,OBJECT_TYPE)  values (q'{SYS}',q'{IND$}',q'{TABLE}')
select * from dual;

You could write some awk/sed etc to take an existing INSERT script and turn it into a multi-table one. I took a slightly different approach and whipped up some PL/SQL to allow pipelining the appropriate multi-table inserts out to a spool file. Here is the function I wrote to do it.

create or replace function as_insert(p_query varchar2, p_batch int default 10) return sys.odcivarchar2list pipelined as
    l_theCursor     integer default dbms_sql.open_cursor;
    l_columnValue   varchar2(4000);
    l_status        integer;
    l_descTbl       dbms_sql.desc_tab;
    l_colCnt        number;
    n number := 0;
    
    l_tname varchar2(200) := substr(p_query,instr(p_query,' ',-1,1)+1);
    l_collist varchar2(32000);
    l_colval varchar2(32000);
    l_dml varchar2(32000);
    
    l_nls sys.odcivarchar2list := sys.odcivarchar2list();
    
begin
   if l_tname is null then l_tname := '@@TABLE@@'; end if;

   select value
   bulk collect into l_nls
   from v$nls_parameters
   where parameter in (   
      'NLS_DATE_FORMAT',
      'NLS_TIMESTAMP_FORMAT',
      'NLS_TIMESTAMP_TZ_FORMAT')
   order by parameter;

    execute immediate 'alter session set nls_date_format=''yyyy-mm-dd hh24:mi:ss'' ';
    execute immediate 'alter session set nls_timestamp_format=''yyyy-mm-dd hh24:mi:ssff'' ';
    execute immediate 'alter session set nls_timestamp_tz_format=''yyyy-mm-dd hh24:mi:ssff tzr'' ';

    dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
    dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );

    for i in 1 .. l_colCnt loop
        dbms_sql.define_column(l_theCursor, i, l_columnValue, 4000);
        l_collist := l_collist || l_descTbl(i).col_name||',';
    end loop;
    l_collist := 'into '||l_tname||'('||rtrim(l_collist,',')||')';

    l_status := dbms_sql.execute(l_theCursor);

    pipe row('alter session set cursor_sharing = force;');
    while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
       n := n + 1;
  
       if mod(n,p_batch) = 1 then
          pipe row('insert all ');
       end if;
       
        for i in 1 .. l_colCnt loop
            dbms_sql.column_value( l_theCursor, i, l_columnValue );
            if l_columnValue is null then
              l_colval := l_colval || 'null,';
            elsif l_descTbl(i).col_type in (1,8,9,96,112) then
              l_colval := l_colval || 'q''{'||l_columnValue ||'}''' || ',';
            elsif l_descTbl(i).col_type in (2,100,101) then
              l_colval := l_colval || l_columnValue || ',';
            elsif l_descTbl(i).col_type in (12) then
              l_colval := l_colval || 'to_date('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ss'')' || ',';
            elsif l_descTbl(i).col_type in (180) then
              l_colval := l_colval || 'to_timestamp('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ssff'')' || ',';
            elsif l_descTbl(i).col_type in (181) then
              l_colval := l_colval ||'to_timestamp_tz('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ssff tzr'')' || ',';
            elsif l_descTbl(i).col_type in (231) then
              l_colval := l_colval || 'to_timestamp('''||l_columnValue||''',''yyyy-mm-dd hh24:mi:ssff'')' || ',';
            elsif l_descTbl(i).col_type in (182) then
              l_colval := l_colval || 'to_yminterval('''||l_columnValue||''')' || ',';
            elsif l_descTbl(i).col_type in (183) then
              l_colval := l_colval ||'to_dsinterval('''||l_columnValue||''')'  || ',';
            end if;
        end loop;
        l_colval := rtrim(l_colval,',')||')';
        pipe row( l_collist  );
        pipe row( '  values ('||l_colval );
        if mod(n,p_batch) = 0 then
          pipe row('select * from dual;');
        end if;
        l_colval := null;
    end loop;
    if n = 0 then
      pipe row( 'No data found ');
    elsif mod(n,p_batch) != 0 then
      pipe row('select * from dual;');
    end if;
    pipe row('alter session set cursor_sharing = exact;');

    execute immediate 'alter session set nls_date_format='''||l_nls(1)||''' ';
    execute immediate 'alter session set nls_timestamp_format='''||l_nls(2)||''' ';
    execute immediate 'alter session set nls_timestamp_tz_format='''||l_nls(3)||''' ';
    -- tidy up the DBMS_SQL cursor before finishing
    dbms_sql.close_cursor(l_theCursor);
    return;
end;
/
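
A typical way to use it is to spool the pipelined rows straight to a script file and then run that file like any other INSERT script. The SET options and the query below are only illustrative:

set pagesize 0 linesize 32767 heading off feedback off
spool c:\tmp\multi_insert.sql
select column_value from table(as_insert('select empno, ename, sal from scott.emp'));
spool off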

And here’s an example of the output you’d expect to see from it:

alter session set cursor_sharing = force;
insert all
into scott.emp(EMPNO,ENAME,SAL)
  values (7369,q'{SMITH}',800)
into scott.emp(EMPNO,ENAME,SAL)
  values (7499,q'{ALLEN}',1600)
into scott.emp(EMPNO,ENAME,SAL)
  values (7521,q'{WARD}',1250)
into scott.emp(EMPNO,ENAME,SAL)
  values (7566,q'{JONES}',2975)
into scott.emp(EMPNO,ENAME,SAL)
  values (7654,q'{MARTIN}',1250)
into scott.emp(EMPNO,ENAME,SAL)
  values (7698,q'{BLAKE}',2850)
into scott.emp(EMPNO,ENAME,SAL)
  values (7782,q'{CLARK}',2450)
into scott.emp(EMPNO,ENAME,SAL)
  values (7788,q'{SCOTT}',3000)
into scott.emp(EMPNO,ENAME,SAL)
  values (7839,q'{KING}',5000)
into scott.emp(EMPNO,ENAME,SAL)
  values (7844,q'{TURNER}',1500)
select * from dual;
insert all
into scott.emp(EMPNO,ENAME,SAL)
  values (7876,q'{ADAMS}',1100)
into scott.emp(EMPNO,ENAME,SAL)
  values (7900,q'{JAMES}',950)
into scott.emp(EMPNO,ENAME,SAL)
  values (7902,q'{FORD}',3000)
into scott.emp(EMPNO,ENAME,SAL)
  values (7934,q'{MILLER}',1300)
select * from dual;
alter session set cursor_sharing = exact;

Let’s do a performance test on a large sample set. In this test, I’m loading a little over one million rows into an empty table, where the performance harness for each INSERT mechanism looks like the following to output the start and end times for the script:

select systimestamp from dual;
alter session set cursor_sharing = force;
set feedback off


[ Set of INSERT, or set of MULTI-TABLE inserts with 50 rows per INSERT]
set feedback on
alter session set cursor_sharing = exact;
select systimestamp from dual;

Running each gave the following results.

SQL> @c:\tmp\normal_insert.sql

SYSTIMESTAMP
---------------------------------------
17-APR-19 03.39.53.189000 PM +08:00


Session altered.


Session altered.


SYSTIMESTAMP
---------------------------------------
17-APR-19 03.43.47.920000 PM +08:00


SQL> @c:\tmp\multi_insert.sql

SYSTIMESTAMP
---------------------------------------
17-APR-19 03.43.40.402000 PM +08:00


Session altered.


Session altered.


SYSTIMESTAMP
---------------------------------------
17-APR-19 03.44.27.319000 PM +08:00

So that’s a nice speed boost of nearly five-fold, down from 3 minutes 54 seconds to less than 50 seconds.

Just to reiterate – I’m not suggesting you need to go to this level for all INSERT scripts. Most of the time I just add the cursor_sharing options to my scripts, and that makes them plenty fast enough for the job. But if you have got a large number of inserts to do, then converting them to a multi-table equivalent might give you a nice boost.

Here’s the video version: Faster Inserts! - YouTube

First thing to note on this post: no tech content in this one, just some nostalgia.

A couple of days ago, I was flying from Perth to Dubai on my way to APEX Connect in Bonn. Because this is an 11-hour hell-in-a-death-tube flight, I settled in to my standard sleepless task of watching movies to pass the time. I try to steer clear of going exclusively with new releases because I know I’ll always be travelling again soon, so I try not to exhaust all my options too early.

I was browsing through the “Movie Classics” section and found a movie from my childhood: The Right Stuff.

If you haven’t seen it, it’s a somewhat romanticized and slightly cheesy recount of the space race, from Chuck Yeager breaking the sound barrier through to the first American in space and subsequent launches, but stopping short of the moon program. In terms of run time, this is not a movie for the faint-hearted. If you think Avengers: Endgame is ridiculous at 3 hours, then The Right Stuff trumps it with an additional 15 minutes on top of that.

My father loved this movie, and the first time I watched it was with him on a Sony Betamax video recorder in my early teens. We sat and watched it together, and he constantly hit pause to explain to me how significant certain milestones were. Clichéd as it might sound, it was one of those great father-son bonding moments.

To bring at least a little bit of technology into this post, it blows me away that the central character of the film, Chuck Yeager, is now on Twitter and has even been gracious enough to answer a couple of my questions over the years. It amazes me – the transition from a figure portrayed on screen to someone I’ve managed to communicate with directly on social media.

This year marks 20 years since my father passed away from early onset Alzheimer’s, and I’ve realised that re-watching The Right Stuff on the plane was perhaps a mistake, because I’ve discovered that it tends to freak out the passenger next to you when a grown man sits there blubbering away at a movie that doesn’t really have any sad content. So nostalgia is a good thing, but perhaps best saved for somewhere less public.

Love you Dad.


Direct mode insert using the APPEND hint is a cool piece of technology that lets you load bulk data into a table very quickly and efficiently. Obviously there are a number of implications of doing so which you can read about here, but the one that catches most people out is that you are locking the table during the load, and once the load is completed, the table is “disabled” until the transaction ends. Here’s a quick example of that in action:

SQL> create table t
  2  as select * from dba_objects d
  3  where 1=0;

Table created.

SQL> insert /*+ APPEND */ into t
  2  select * from dba_objects d;

82417 rows created.

--
-- No further INSERTs are possible
--
SQL> insert /*+ APPEND */ into t
  2  select * from dba_objects d;
insert /*+ APPEND */ into t
                          *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


--
-- No further DML at all is possible
--
SQL> delete from t;
delete from t
            *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


--
-- Not even a SELECT statement is allowed
--
SQL> select count(*) from t;
select count(*) from t
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


--
-- I must end the transaction first with commit or rollback before normal service is resumed
--
SQL> commit;

Commit complete.

SQL> select count(*) from t;

  COUNT(*)
----------
     82417

This makes sense given that a direct mode insert is manipulating the high water mark (HWM) for a table, so it pretty much has to be an all or nothing process for the session issuing the load, because the HWM is in a state of flux until we commit or rollback.

However, what about a partitioned table? Can I do a direct mode insert into a single partition, whilst leaving the other partitions available in the same session? Let’s try it out.

SQL> create table t
  2  partition by list ( x )
  3  ( partition p1 values (1),
  4    partition p2 values (2)
  5  )
  6  as select 1 x, d.* from dba_objects d
  7  where 1=0;

Table created.

--
-- Only load into partition P1
--
SQL> insert /*+ APPEND */ into t partition (p1)
  2  select 1 x, d.* from dba_objects d;

82419 rows created.

--
-- And now see if partition P2 can be loaded as part of the same transaction
--
SQL> insert /*+ APPEND */ into t partition (p2)
  2  select 2 x, d.* from dba_objects d;
insert /*+ APPEND */ into t partition (p2)
                          *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

Unfortunately not. Even though we never touched partition P2 with the first INSERT, and partition P2 is a different physical segment, we still cannot do additional direct loads on it. One easy workaround to this is to place that second load in a separate transaction, for example:

SQL> declare
  2    pragma autonomous_transaction;
  3  begin
  4    insert /*+ APPEND */ into t partition (p2)
  5    select 2 x, d.* from dba_objects d;
  6    commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

You’re probably thinking “That’s a bit silly. If we’re going to commit for each partition, then why wouldn’t we just commit after the load of the first partition P1 anyway?”. That’s a valid point, but what the above example shows is that you can do direct path loads into separate partitions concurrently. This opens up opportunities for (dramatically) increasing the throughput of direct path loads if you can segment the source data into its designated target partitions at load time. I’ll extend the table above to have 10 partitions, and use a little DBMS_JOB code to now load 10 partitions all concurrently.

SQL> create table t
  2  partition by list ( x )
  3  ( partition p1 values (1),
  4    partition p2 values (2),
  5    partition p3 values (3),
  6    partition p4 values (4),
  7    partition p5 values (5),
  8    partition p6 values (6),
  9    partition p7 values (7),
 10    partition p8 values (8),
 11    partition p9 values (9),
 12    partition p10 values (10)
 13  )
 14  as select 1 x, d.* from dba_objects d
 15  where 1=0;

Table created.

SQL>
SQL> declare
  2    j int;
  3    l_sql varchar2(200) :=
  4      'begin
  5         insert /*+ APPEND */ into t partition (p@)
  6         select @ x, d.* from dba_objects d;
  7         commit;
  8       end;';
  9  begin
 10    for i in 1 .. 10 loop
 11      dbms_job.submit(j,replace(l_sql,'@',i));
 12    end loop;
 13    
 14  end;
 15  /

PL/SQL procedure successfully completed.

SQL>
SQL> select job, what from user_jobs
  2  where what like '%dba_objects%';

       JOB WHAT
---------- --------------------------------------------------
       161 begin
           insert /*+ APPEND */ into t partition (p1)
           select 1 x, d.* from dba_objects d;
           commit;
           end;

       162 begin
           insert /*+ APPEND */ into t partition (p2)
           select 2 x, d.* from dba_objects d;
           commit;
           end;

       163 begin
           insert /*+ APPEND */ into t partition (p3)
           select 3 x, d.* from dba_objects d;
           commit;
           end;

       164 begin
           insert /*+ APPEND */ into t partition (p4)
           select 4 x, d.* from dba_objects d;
           commit;
           end;

       165 begin
           insert /*+ APPEND */ into t partition (p5)
           select 5 x, d.* from dba_objects d;
           commit;
           end;

       166 begin
           insert /*+ APPEND */ into t partition (p6)
           select 6 x, d.* from dba_objects d;
           commit;
           end;

       167 begin
           insert /*+ APPEND */ into t partition (p7)
           select 7 x, d.* from dba_objects d;
           commit;
           end;

       168 begin
           insert /*+ APPEND */ into t partition (p8)
           select 8 x, d.* from dba_objects d;
           commit;
           end;

       169 begin
           insert /*+ APPEND */ into t partition (p9)
           select 9 x, d.* from dba_objects d;
           commit;
           end;

       170 begin
           insert /*+ APPEND */ into t partition (p10)
           select 10 x, d.* from dba_objects d;
           commit;
           end;


10 rows selected.

SQL>
SQL> commit;

Commit complete.

SQL> select subobject_name, cnt
  2  from (
  3    select dbms_rowid.rowid_object(rowid) obj, count(*) cnt
  4    from   t
  5    group by dbms_rowid.rowid_object(rowid)
  6    ), user_objects
  7  where obj = data_object_id
  8  order by to_number(substr(subobject_name,2));

SUBOBJECT_        CNT
---------- ----------
P1              82427
P2              82427
P3              82427
P4              82427
P5              82427
P6              82427
P7              82427
P8              82427
P9              82427
P10             82427

And voila! 10 partitions all loaded in direct mode concurrently! There is also a standard means of loading data concurrently using parallel DML, but if I know the segmentation of the data in advance, the “manual” method above can sometimes be a quicker and easier-to-debug option.
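
For reference, the parallel DML route would look something like the sketch below (the degree of parallelism and the MOD expression used to spread rows across the ten partitions are just illustrative); the database then works out the distribution across partition segments for you:

alter session enable parallel dml;

insert /*+ APPEND parallel(t,4) */ into t
select mod(rownum,10)+1 x, d.* from dba_objects d;

commit;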


Here’s a quirky one for you. It can happen when you are dealing with a partitioned table where the partition key is defined as RAW. To be honest, I really can’t think of a reason why you would ever want a table with a raw partition key. (If you have one, please let me know in the comments.) Anyway, here’s the demo. I’ll start with a table using the cool automatic list partition facility and load a single row into it.

C:\>sql18

SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:19 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Mon Apr 29 2019 16:37:30 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0


SQL> create table test
  2  (
  3     id         raw(16),
  4     groupid    raw(16),
  5     chardata   varchar2(20),
  6     numbdata   number,
  7     datedata   date
  8  )
  9    partition by list (groupid) automatic (partition P_ values ('FF'))
 10  ;

Table created.

SQL> alter table test
  2  add constraint test_pk
  3  primary key (id)
  4  using index;

Table altered.

SQL> insert into test
  2  values (sys_guid(), sys_guid(), 'some char data', 123, sysdate - 3);

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

No dramas there. Now I’ll take a DataPump export of that table using the defaults.

C:\>expdp mcdonac/*****@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test

Export: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:19 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Starting "MCDONAC"."SYS_EXPORT_TABLE_01":  mcdonac/********@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "MCDONAC"."TEST":"P_"                           0 KB       0 rows
. . exported "MCDONAC"."TEST":"SYS_P7389"                6.804 KB       1 rows
Master table "MCDONAC"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TABLE_01 is:
  C:\TEMP\TEST_EXP.DMP
Job "MCDONAC"."SYS_EXPORT_TABLE_01" successfully completed at Mon Apr 29 16:38:40 2019 elapsed 0 00:00:16

Everything is still going OK. Having taken a DataPump export, it’s a reasonable assumption that I’ll be wanting to load it into a new database somewhere. To mimic that, I’ll simply drop the table I just created so that I can import the table back into the same schema anew.

C:\>sql18

SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:42 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Last Successful login time: Mon Apr 29 2019 16:38:19 +08:00

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0



SQL> drop table test purge;

Table dropped.

SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.6.0.0.0

And now we’ll run a simple DataPump import to put my table back in place.


C:\>impdp mcdonac/******@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test

Import: Release 18.0.0.0.0 - Production on Mon Apr 29 16:38:43 2019
Version 18.6.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Master table "MCDONAC"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_TABLE_01":  mcdonac/********@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"MCDONAC"."TEST" failed to create with error:
ORA-14308: partition bound element must be one of: string, datetime or interval literal, number, or NULL

Failing sql is:
CREATE TABLE "MCDONAC"."TEST" ("ID" RAW(16), "GROUPID" RAW(16), "CHARDATA" VARCHAR2(20 BYTE) COLLATE "USING_NLS_COMP", "NUMBDATA" NUMB
ER, "DATEDATA" DATE)  DEFAULT COLLATION "USING_NLS_COMP" PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  STORAGE( BUFFER_POOL DEFAULT F
LASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS"  PARTITION BY LIST ("GROUPID") AUTOMATIC  (PARTITION "P_"  VALUES ('FF
') SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING  STORAGE( BUFFER_POOL DEFAULT FLASH_CAC
HE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS"  READ WRITE , PARTITION "SYS_P7389"  VALUES (HEXTORAW('38E50ADF7A7840149B16767
9433196C5')) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING  STORAGE(INITIAL 8388608 NEX
T 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_
FLASH_CACHE DEFAULT) TABLESPACE "USERS"  READ WRITE )

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"MCDONAC"."TEST_PK" skipped, base object type TABLE:"MCDONAC"."TEST" creation failed

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_TABLE_01" completed with 2 error(s) at Mon Apr 29 16:38:46 2019 elapsed 0 00:00:03

Uh oh… You can see from the failing SQL in the import log that DataPump sees the RAW datatype definition and hence tries to map the partition keys to the same datatype using HEXTORAW. It’s an interesting contradiction that you are allowed to partition a table by raw, but if you try to nominate a raw datatype in the actual partition definition values, then the database gets upset with you. Here’s the same DDL in simpler form showing the same error.


SQL> create table test
  2  (
  3     id         raw(16),
  4     groupid    raw(16),
  5     chardata   varchar2(20),
  6     numbdata   number,
  7     datedata   date
  8  )  partition by list (groupid) (partition P_ values (hextoraw('FF')));
)  partition by list (groupid) (partition P_ values (hextoraw('FF')))
                                                                   *
ERROR at line 8:
ORA-14308: partition bound element must be one of: string, datetime or interval literal, number, or NULL

To be honest, I’m not losing any sleep over this, because I can’t think of a reason to partition by raw. But if you do have a valid case for it, then you’ll need to pre-create the table (without the HEXTORAW expressions) and then use DataPump to load the data back into the table.
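
A sketch of that workaround: the DDL is just the original table definition with a plain string for the partition bound, and the TABLE_EXISTS_ACTION parameter tells DataPump to load into the pre-created table rather than trying to create it.

create table test
(
   id         raw(16),
   groupid    raw(16),
   chardata   varchar2(20),
   numbdata   number,
   datedata   date
)
partition by list (groupid) automatic (partition P_ values ('FF'));

impdp mcdonac/*****@db18_pdb1 directory=TEMP dumpfile=test_exp.dmp tables=test table_exists_action=append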


This one came in via an AskTOM question. You need to be careful when disabling constraints that are underpinned by an index, especially when it comes to the storage for that index.

Let’s take a look at a simple example. I’ll explicitly nominate the LARGETS tablespace for my index that will support the primary key.


SQL> create table t as
  2      select 1 x, 1 y from dual;

Table created.

SQL>
SQL> alter table t add constraint
  2    t_pk primary key ( x ) using index tablespace largets;

Table altered.

SQL>
SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

SQL>
SQL> alter table t modify constraint t_pk disable;

Table altered.

SQL> select * from user_indexes where index_name = 'T_PK';

no rows selected

SQL> alter table t modify constraint t_pk enable;

Table altered.

Now let us look at where the index that is needed for that constraint has been re-created.

SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';
TABLESPACE_NAME
--------------------
USERS

The index is quite a distinct entity from the constraint, and unless you nominate a tablespace it will be created in the default tablespace for the schema, which in my case was USERS.

So take care – you might need to pre-create indexes or explicitly rebuild them with a tablespace specification before re-enabling those constraints.

SQL> create table t as
  2      select 1 x, 1 y from dual;

Table created.

SQL>
SQL> create unique index t_pk on t ( x ) tablespace largets;

Index created.

SQL> alter table t add constraint   t_pk primary key ( x ) using index ;

Table altered.

SQL>
SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

SQL>
SQL> alter table t modify constraint t_pk disable;

Table altered.

SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.

SQL> alter table t modify constraint t_pk enable;

Table altered.

SQL>
SQL> select tablespace_name from user_segments
  2  where segment_name = 'T_PK';

TABLESPACE_NAME
--------------------
LARGETS

1 row selected.
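
And if an enable has already put the index back in the wrong tablespace, you don’t have to drop anything; a rebuild with an explicit tablespace clause (a sketch, assuming the same LARGETS tablespace from the demo) relocates it:

alter index t_pk rebuild tablespace largets;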

Here’s a simple little PL/SQL block where we call an inner procedure PARAMETER_TESTER from its parent block. Pay particular attention to the parameter we pass to the procedure, and its value throughout the execution of that procedure.

SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     parameter_tester(100);
 16  end;
 17  /
Param came in as: 100
Param left as   : 100
glob_var is now : 1

PL/SQL procedure successfully completed.

Now I’ll slowly extend the code, and just by eyeballing it, see if you can predict what the output will be before looking past the end of the PL/SQL block.


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     local_var := glob_var;
 16     parameter_tester(local_var);
 17  end;
 18  /
Param came in as: 0
Param left as   : 0
glob_var is now : 1

PL/SQL procedure successfully completed.

So far so good I imagine. The parameter came in as zero, we incremented the global variable which of course had no impact on the parameter. Let’s now up the ante a little.


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     parameter_tester(glob_var);
 16  end;
 17  /
Param came in as: 0
Param left as   : 1
glob_var is now : 1

PL/SQL procedure successfully completed.

This is perhaps the first one that you might find a little unexpected. Notice that the value of the parameter passed to the procedure has changed within the inner procedure, even though it was passed (implicitly) as an IN parameter. People often assume that if you pass anything to a procedure without the IN OUT or OUT specification, then the parameter is “read only” and cannot be touched by code. This is true to the extent that you cannot perform an assignment to that parameter, as you can see below.


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         param := param + 1;
  9     end;
 10
 11  begin
 12     parameter_tester(glob_var);
 13  end;
 14  /
       param := param + 1;
       *
ERROR at line 8:
ORA-06550: line 8, column 8:
PLS-00363: expression 'PARAM' cannot be used as an assignment target
ORA-06550: line 8, column 8:
PL/SQL: Statement ignored

but that is not the same as saying that the parameter is fixed in value throughout the duration of the call. This behaviour is documented in the PL/SQL language manual, in that an IN parameter can be passed by reference rather than by value.

Now I’ll explore some other examples of how you might get caught out by this. I’ll modify the example just slightly now so that I’m passing an expression rather than just “glob_var”.


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     parameter_tester(glob_var+1);
 16  end;
 17  /
Param came in as: 1
Param left as   : 1
glob_var is now : 1

PL/SQL procedure successfully completed.

Now “normal” service has been resumed, in that the expression is evaluated first and hence is passed by value to the procedure leaving the parameter value unchanged throughout the procedure.

So it would appear an expression will disable the “pass by reference” mechanism? Well, let’s try two more examples


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15    parameter_tester(to_char(glob_var));
 16  end;
 17  /
Param came in as: 0
Param left as   : 0
glob_var is now : 1

PL/SQL procedure successfully completed.

SQL>
SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15    parameter_tester(to_number(glob_var));
 16  end;
 17  /
Param came in as: 0
Param left as   : 1
glob_var is now : 1

PL/SQL procedure successfully completed.

Notice the last one in particular. Even though I had a TO_NUMBER expression around “glob_var”, it was still passed by reference and hence the parameter value changed throughout the execution of the inner procedure. This is because the PL/SQL compiler detected that the TO_NUMBER function was redundant and optimized it out during the compilation process. That left just a parameter input of “glob_var”, which could then be passed by reference.
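
If you want to explore the compiler’s role yourself, one experiment (my own suggestion, and the outcome may well vary between versions) is to dial the PL/SQL optimizer down, re-run the TO_NUMBER example above, and compare:

alter session set plsql_optimize_level = 0;
-- re-run the anonymous block containing parameter_tester(to_number(glob_var)) here
alter session set plsql_optimize_level = 2;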

Bottom line – it’s generally considered poor programming practice in any language to mix and match the scope of variables, in this case using glob_var both within and outside the inner procedure. Trying to subvert the behaviour of the PL/SQL engine by using expressions such as to_char(glob_var) is a recipe for disaster. Who knows what additional optimizations the next release of the PL/SQL compiler will have? Perhaps it will optimize out “glob_var+1” or to_char(glob_var) and hence pass them by reference too. If your functionality really demands coding with these blurred scopes, then make sure you perform an assignment to a local variable and pass that, to avoid unexpected side-effects.


SQL> set serverout on
SQL> declare
  2
  3     glob_var  int := 0;
  4     local_var int;
  5
  6     procedure PARAMETER_TESTER(param int) is
  7     begin
  8         dbms_output.put_line('Param came in as: '||param);
  9         glob_var := glob_var + 1;
 10         dbms_output.put_line('Param left as   : '||param);
 11         dbms_output.put_line('glob_var is now : '||glob_var);
 12     end;
 13
 14  begin
 15     local_var := glob_var;
 16     parameter_tester(local_var);
 17  end;
 18  /
Param came in as: 0
Param left as   : 0
glob_var is now : 1

PL/SQL procedure successfully completed.

I had a request from a client a while back regarding masking of data. They had an application with sensitive data in the Production environment (where access and audit were very tightly controlled), but the issue was how to respect that sensitivity in non-Production environments whilst still preserving production-sized data volumes for application testing.

After some conversations about requirements, it turned out that since (even in non-Production environments) all access to application components was logged and audited, the issue was simply protecting against “inadvertent” access to sensitive data. For example, in the application, if I searched for “males with black hair with an interest in technology” I should never see the name “Connor McDonald” on screen in the results, because simply viewing that data could be treated as a breach of privacy.

Addendum: anyone adding a comment to this post with “Shouldn’t that be males with black hair with a hint of grey” will be banned forever.

I suggested Data Redaction as a potential solution, but they wanted the data actually obfuscated within the database. Then I suggested Data Masking, which definitely does the job, but they felt this offered far more than their simple requirement: just obfuscate the data while retaining its distribution, to preserve performance characteristics as much as possible.

So ultimately here is the basis of a little routine I created for them to obfuscate their data, satisfying their requirement of not viewing meaningful data in their non-Production environments. Please read the footnote at the end of this post before you consider using this code. The logic here is straightforward:

  • Take the alphanumeric characters in a logical order as a single string.
  • Use DBMS_RANDOM to come up with 255 variations of that string and store them in an array.
  • Use the TRANSLATE function to perform a simple obfuscation of the input string.

That translation could then be used during the process of copying data from Production, or as a post-copy task.

First I’ll demo the concept of coming up with randomised strings. Here’s a simple anonymous block to create 10 random strings based on a base string of: ABC….Zabc…z0123456789

SQL> set serverout on
SQL> declare
  2    type rand_list is table of varchar2(62) index by pls_integer;
  3    source varchar2(62) := 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  4    perms  rand_list;
  5    source_variant varchar2(62);
  6  begin
  7    for i in 1 .. 10 loop
  8      source_variant := null;
  9      for j in ( select rownum r from dual connect by level <= 52 order by dbms_random.value )
 10      loop
 11        source_variant := source_variant || substr(source,j.r,1);
 12      end loop;
 13      for j in ( select rownum r from dual connect by level <= 10 order by dbms_random.value )
 14      loop
 15        source_variant := source_variant || substr(source,52+j.r,1);
 16      end loop;
 17      perms(i) := source_variant;
 18      dbms_output.put_line(source_variant);
 19    end loop;
 20  end;
 21  /
xrLCuowbNfvXlMJSBKctPyheZWTnQUjYAdizkFVOIHGsmDRpaEqg5637918402
OkVxApdsLqTlyvgYXQGaSeNPMCjZEJhHRtbcIBrowKUunifzDFmW8013742695
gcVxtNeMdBuRJhvYrnkCIWzoyFwfLpqSATabQGUZKmDOEilPHsXj5601892743
vOUuisITyJjbaQLqfnpFVBoCrYtzHeDAmWExPkNwgScXKRlZGMhd3104879256
VLZkXndmytCWRwKeNgYifvTESzGFhHcqaMJPoljxUsAQrubOpBDI7302189564
VfPLKmObZzBMCpgrjUaQYieRNJykDwtnuWXFxdTEIlScqvohAGsH0138924756
ABMkfozebljnPGqHJhpNwxYEUCuQLyDrOasIgtFZKVmciTXSdvRW5702139864
iNjrmVTlyzFSLswfBvaWKUAHIZOcgMPYXCGqeoDJbhpktERdQunx1976045328
hiYGzxMEZFPcmkutbDlrCeynSITKHNJaXAwfpRsdqLjvQUoWVOgB0479632158
uZpPXOHLAvzricdjtqSQBKVhYoFfmUxEDNCwWlIJsbRgkMeGynaT6451328790

So a simple obfuscation could be:

SQL> select
  2    translate(
  3      'Connor McDonald',
  4      'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789',
  5      'ehZAFjmtYwWilTRDnCcLrvQqEpakKGPMxXyudVUNsoJOBfzSIbHg2163095784')
  6  from dual;

TRANSLATE('CONN
---------------
ZsNNsO lKAsNaVG

Armed with this, I can create a package which extends this to choose from 255 random strings and use these as a simple obfuscator for source data.

SQL> create or replace
  2  package masker is
  3    function rand_masks(p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return  sys.odcivarchar2list pipelined;
  4    function standard_source return varchar2;
  5    function idx_entry(p_idx int, p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return varchar2;
  6    function mask(p_input varchar2) return varchar2;
  7  end;
  8  /

Package created.

SQL> create or replace
  2  package body masker is
  3    perms sys.odcivarchar2list := sys.odcivarchar2list();
  4
  5  procedure init(p_size int,p_seed int) is
  6    source varchar2(62) := 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  7    source_variant varchar2(62);
  8  begin
  9    dbms_random.seed(p_seed);
 10    perms.delete;
 11    perms.extend(p_size);
 12    for i in 1 .. p_size loop
 13      source_variant := null;
 14      for j in ( select rownum r from dual connect by level <= 52 order by dbms_random.value )
 15      loop
 16        source_variant := source_variant || substr(source,j.r,1);
 17      end loop;
 18      for j in ( select rownum r from dual connect by level <= 10 order by dbms_random.value )
 19      loop
 20        source_variant := source_variant || substr(source,52+j.r,1);
 21      end loop;
 22      perms(i) := source_variant;
 23  --      dbms_output.put_line(source_variant);
 24    end loop;
 25  end;
 26
 27  function rand_masks(p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return sys.odcivarchar2list pipelined is
 28  begin
 29    if perms.count < p_size or p_refresh in ('Y','y') then
 30      init(p_size,p_seed);
 31    end if;
 32
 33    for i in 1 .. p_size
 34    loop
 35      pipe row ( perms(i));
 36    end loop;
 37
 38    return;
 39  end;
 40
 41  function standard_source return varchar2 is
 42  begin
 43     return 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
 44  end;
 45
 46  function idx_entry(p_idx int, p_size int default 255, p_refresh varchar2 default 'N',p_seed number default 0) return varchar2 is
 47  begin
 48    if perms.count < p_size or p_refresh in ('Y','y') then
 49      init(p_size,p_seed);
 50    end if;
 51    return perms(p_idx);
 52  end;
 53
 54  function mask(p_input varchar2) return varchar2 is
 55  begin
 56    return translate(p_input,masker.standard_source,masker.idx_entry(ascii(p_input)));
 57  end;
 58
 59  end;
 60  /

Package body created.

Now I’ll test this with some sample data

SQL> create table t ( pk int, n varchar2(50));

Table created.

SQL>
SQL> insert into t values (1,'Connor McDonald');

1 row created.

SQL> insert into t values (2,'Chris Saxon');

1 row created.

SQL> insert into t values (3,'Maria Colgan');

1 row created.

SQL> insert into t values (4,'Bobby Curtis');

1 row created.

SQL> insert into t values (5,'Joel Kallman');

1 row created.

SQL> insert into t values (6,'Steven Feuerstein');

1 row created.

SQL> insert into t values (7,'Connor McDonald');

1 row created.

SQL> select pk, n, masker.mask(n) diddled
  2  from t;

        PK N                                                  DIDDLED
---------- -------------------------------------------------- ------------------------------
         1 Connor McDonald                                    sAJJAe CvnAJjWt
         2 Chris Saxon                                        sweOy RjrAJ
         3 Maria Colgan                                       czEJz BhMbzm
         4 Bobby Curtis                                       nkjjI EpzLBS
         5 Joel Kallman                                       oYfi luiiIuj
         6 Steven Feuerstein                                  CyUrUE SUtUWQyUXE
         7 Connor McDonald                                    sAJJAe CvnAJjWt

7 rows selected.

There we go! A simple obfuscator that runs quite efficiently on source data whilst preserving the distribution of the data.
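
As a post-copy task in the non-Production environment, applying the mask is then just a normal UPDATE. The table and column names below are purely illustrative; the predicate simply skips nulls since there is nothing to mask in those rows:

update customer_stage
set    cust_name = masker.mask(cust_name)
where  cust_name is not null;

commit;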

Footnote: Remember that any masking scheme that does not completely randomize the obfuscation of source data is not a complete encryption or security solution. Most mechanisms to deduce source data from a manipulated variant of that data involve distribution analysis of letters, n-grams, words and phrases. The moment you preserve the distribution of source data for (say) performance and/or optimizer plan preservation, you are allowing for source data to be derived if there is enough of it available to perform that analysis. Remember that this is not a substitute for appropriate security and encryption protections.


I posted about my APEX upgrade to 19 yesterday, and someone was quick to point out to me that they believed I hadn’t covered all of the steps.

“What if your APEX instance needs to call web services?” they said. “You need to update your Access Control Lists.”

I hadn’t thought of that, so I logged onto one of my other APEX instances that was still at version 18, and checked the current ACLs


SYS> select acl,
  2         principal,
  3         privilege
  4  from   dba_network_acl_privileges
  5  order by acl, principal, privilege;

ACL                                                PRINCIPAL            PRIVILEGE
-------------------------------------------------- -------------------- ----------
/sys/acls/oracle-sysman-ocm-Resolve-Access.xml     ORACLE_OCM           resolve
NETWORK_ACL_192DBA180AEB40AD98A73ACCD309FF8F       APEX_180200          http
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GGSYS                resolve
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GSMADMIN_INTERNAL    resolve

4 rows selected.

I can see the potential issue here. On upgrade, I’ll have a new schema which would need the same ACLs granted as the existing APEX 18 schema. Then I went ahead and upgraded this instance to version 19, and lo and behold, check out this nice little touch in the upgrade.

SYS> select acl,
  2         principal,
  3         privilege
  4  from   dba_network_acl_privileges
  5  order by acl, principal, privilege;

ACL                                                PRINCIPAL            PRIVILEGE
-------------------------------------------------- -------------------- ----------
/sys/acls/oracle-sysman-ocm-Resolve-Access.xml     ORACLE_OCM           resolve
NETWORK_ACL_0F93A8F653EC43DC9D90457B1151A330       APEX_190100          http
NETWORK_ACL_192DBA180AEB40AD98A73ACCD309FF8F       APEX_180200          http
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GGSYS                resolve
NETWORK_ACL_296C00CF7F2744BAB526D4C4E85FE189       GSMADMIN_INTERNAL    resolve

5 rows selected.

The upgrade took care of the ACLs for me! That’s pretty cool.
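
For the record, had the upgrade not done it, the manual grant is not a huge task. Something along the lines of the sketch below with DBMS_NETWORK_ACL_ADMIN would do it; the wildcard host here is just a placeholder, so tighten it to whatever endpoints your APEX applications actually call:

begin
  dbms_network_acl_admin.append_host_ace(
    host => '*',
    ace  => xs$ace_type(privilege_list => xs$name_list('http'),
                        principal_name => 'APEX_190100',
                        principal_type => xs_acl.ptype_db));
end;
/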


AskTOM moved to Application Express 19.1 without any major issues last weekend. That in itself is a nice endorsement for APEX, given that the AskTOM application dates back nearly 20 years to 2001, and predates even the existence of APEX.

The only fix that we had to make relates to the static CDN files that Joel Kallman blogged about, which AskTOM uses to make it nice and snappy wherever in the world it is used. The reference to those files has a hard-coded version number, so that needed to be updated. For AskTOM, we have a plugin that uses some jQuery elements that went pear-shaped when referencing the old version 18 files, but after a quick fix to that reference all was well.

Given that AskTOM is running on APEX 19, I figured I’d best upgrade my local PC installation as well. I find the standard documentation a little bit cryptic when I want to perform an upgrade, because the docs have to cover all of the possible scenarios (CDB vs non-CDB, web listener vs EPG, etc), and as such when you click on the section on Upgrading, you don’t appear to get a step-by-step guide on what to do.

But the secret is in the second paragraph:

“following any of the installations scenarios in the guide upgrades your Oracle Application Express instance…”

so here’s what I did to upgrade my local installation, which is installed within a pluggable database and running with a standalone ORDS listener. (I stress, always consult the standard documentation set before commencing your upgrade.)

  • Downloaded APEX and unzipped into an appropriate location
  • Connected as SYSDBA in the pluggable database where APEX is installed
  • Ran the standard installation script (which will automatically detect that this is an upgrade)
@apexins.sql SYSAUX SYSAUX TEMP /i/
  • Let ORDS know that an upgrade had been done.
java -jar ords.war validate
  • And then reset the REST services component post-upgrade
@apex_rest_config.sql

and I was done! No dramas encountered, and APEX on my PC is now on 19.1.

Note: I moved my original APEX installation to a backup area, and placed the new version 19 installation in place of the old location, so I did not have to move or copy any images and static files around. Your approach may be different, so just to reiterate – consult the standard docs before diving into your upgrade.

Of course, all the dark mode fan bois will be telling me to switch to Dark Mode, but for me, I think it would be wrong to maintain a classic 2000s application such as AskTOM in anything but brilliant white.
