I’ve spoken about attribute clustering before here, here and here. So from that you can probably glean that I’m a fan.

I recently spoke about an example of this as well during my AskTOM Office Hours session which you can watch below:

How can I speed up a query if an index cannot help? - YouTube

After posting that video, I had a follow-up question which I thought would be worth exploring. My examples so far on attribute clustering were either loading a table from empty, or issuing an “alter table move” on a table that already contained data. In both those situations, it is reasonable to assume that clustering is going to help because it aligns conceptually with what we’d expect to happen, namely

  • read all of the data
  • re-organize all of the data so that it is clustered by the nominated clustering key(s)
  • reload all of the data

But what if we do not have that luxury of moving all of the data around? What if I have a table that is already clustered, but now I need to load more data into it? As we know, we only see clustering in effect if we perform a direct load operation. But a direct load operation only creates new blocks. That would suggest we cannot cluster the data, no?
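Before testing it, it’s worth recalling that the clustering definition itself lets you say when clustering should be applied. Here is a minimal sketch of those options (my own illustration, not the test case used below, and written from memory of the 12c syntax):

-- My own illustrative sketch (not part of the experiments below) of the clustering
-- clause options that control when clustering is applied. To my understanding,
-- YES ON LOAD and YES ON DATA MOVEMENT are the defaults if you omit them.
create table t_demo ( id int, data varchar2(100), cust_id int )
clustering by linear order ( cust_id )
  yes on load            -- apply clustering during direct path loads (e.g. INSERT /*+ APPEND */)
  yes on data movement;  -- apply clustering during ALTER TABLE ... MOVE and similar operations

-- or, for an existing table that does not yet have clustering defined:
-- alter table t_demo add clustering by linear order ( cust_id ) yes on load yes on data movement;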

Let’s explore this scenario with an example.

As a control for this experiment, I’ll perform some index lookups against a “conventional” table, that is, random data without any clustering attributes.

I’ll create a table with the data arranged in random order, and then add the same amount of data again. (I could do this in a single operation, but I’m keeping it aligned with the experiments I’ll perform shortly, to mimic the concept of loading data and then appending new data.)

SQL> create table t as
  2  select rownum id, rpad('x',100) data, mod(rownum,100) cust_id
  3  from dual
  4  connect by level <= 100000
  5  order by dbms_random.value;

Table created.

SQL>
SQL> insert /*+ APPEND */ into t
  2  select rownum+100000 id, rpad('x',100) data, mod(rownum,100) cust_id
  3  from dual
  4  connect by level <= 100000
  5  order by dbms_random.value;

100000 rows created.

SQL>
SQL> commit;

Commit complete.

I’ve got 200,000 randomly distributed rows for 100 distinct CUST_ID values. Now I’ll add an index and perform a simple lookup to examine the cost. (Note: The statistics you see are not from an initial execution, but from subsequent executions to eliminate parsing and other costs)

SQL> create index ix on t ( cust_id );

Index created.

SQL>
SQL>
SQL> set autotrace on stat
SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
----------------------------------------------------------------------------------------------------
x

1 row selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1502  consistent gets
          0  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

So our control figure is approximately 1500 logical I/Os.

In the previous blog posts on this topic, I then added attribute clustering definitions and performed an “alter table move” to reorganize all of the data into a nice clustered arrangement. This time, I’ll repeat the above experiment in a different manner. The sequence of steps will be:

  1. load the data,
  2. add the clustering attributes,
  3. reorganize the data,
  4. add an index,
  5. observe the expected benefits of clustering,
  6. add more data, which we might expect could not be clustered because the table is already clustered now,
  7. perform more index lookups and see what the cost is.

If clustering is only useful for an initial data load, then we’d expect to see a significant degradation when the final index lookups are performed in (7). First of all, just to reinforce the coolness of attribute clustering one more time, here are steps (1) through (5). Remember – the control figure here is 1500 LIOs.

SQL> create table t as
  2  select rownum id, rpad('x',100) data, mod(rownum,100) cust_id
  3  from dual
  4  connect by level <= 100000
  5  order by dbms_random.value;

Table created.

SQL>
SQL> create table t2 as select * from t;

Table created.

SQL>
SQL> alter table t ADD clustering by linear order(cust_id);

Table altered.

SQL> alter table t move;

Table altered.

SQL>
SQL> create index ix on t ( cust_id );

Index created.

SQL>
SQL> set autotrace on stat
SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
----------------------------------------------------------------------------------------------------
x

1 row selected.


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         20  consistent gets
          2  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Ooooh yeahhhhh… 20 LIOs!!! But now the real test comes. You can see I also created a table called T2, being a clone of T in its randomised data state. Now I am going to append the contents of T2 to my (clustered) T table.

SQL> insert /*+ APPEND */ into t select * from t2;

100000 rows created.

SQL> commit;

Commit complete.

One thing that does look promising is the “SORT ORDER BY” step in the execution plan. That suggests that even though this is appending new data to an existing set of data, some clustering might be taking place.

SQL> explain plan for insert /*+ APPEND */ into t select * from t2;

Explained.

SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 528765404

-------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |      |   100K|    10M|       |  2905   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | T    |       |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |      |   100K|    10M|       |  2905   (1)| 00:00:01 |
|   3 |    SORT ORDER BY                 |      |   100K|    10M|    11M|  2905   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL            | T2   |   100K|    10M|       |   442   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------------

11 rows selected.

But the real test comes in the index lookup. Let’s repeat that.

SQL>
SQL> set autotrace on stat
SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
----------------------------------------------------------------------------------------------------
x

1 row selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         43  consistent gets
          0  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

And we’re still awesomely good. The amount of data for CUST_ID = 17 has doubled, and the LIOs approximately doubled as well. So we are still seeing benefits of clustering even though we are appending to a table with previously clustered data.

Just how close to optimal is this? I’ll perform an ‘alter table move’ against the entire table, and re-measure:

SQL> alter table t move online;

Table altered.

SQL> select /*+ index(t (cust_id)) */ max(data)
  2  from t
  3  where cust_id = 17;

MAX(DATA)
---------------------------------------------------------------------------
x

1 row selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         37  consistent gets
          0  physical reads
          0  redo size
        641  bytes sent via SQL*Net to client
        608  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Hence optimal clustering would have yielded 37 LIOs for this data, so 43 is very close to that, and a huge step up in efficiency over the 1500 LIOs from the base experiment. In effect, we have “clumps” of clustered data in the table now – so larger, low frequency dataset loads will give better results than smaller, higher frequency loads. But still…very cool stuff that you should be taking advantage of in your databases.
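A hedged way to gauge how “clumpy” the data is (my own sanity check, not something I ran as part of the test above) is to compare the index clustering factor to the number of table blocks after gathering statistics. The closer the two figures are, the better the rows are ordered with respect to CUST_ID:

-- My own rough sanity check (not part of the test case above). After gathering
-- statistics, a clustering factor close to the number of table blocks indicates
-- well-clustered data; one close to the number of rows indicates scattered data.
exec dbms_stats.gather_table_stats(user, 'T')

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
from   user_indexes i
       join user_tables t on t.table_name = i.table_name
where  i.table_name = 'T';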


I’ve just come back from OracleCode Singapore.  It was a great event – the venue was awesome and the attendees were engaged and interested in the content. But one thing I found amusing (disturbing perhaps?) was the number of times I had people approach me on the topic of scaling.  Conversation would typically run along the lines of:

“What is your recommendation for scaling?”

which almost suggests that scaling is, in and of itself, the end solution here.  Not “Here is function X, and I need it to scale”, or “My business requirement is X, and it needs to scale”, but just “I need to scale”.

So I’d push back and ask more questions:

  • Scale what?
  • What data are you capturing?
  • Where is it coming from?
  • What speed? What volume?
  • What are your plans for the data you are capturing?
  • How do you intend to process the data?
  • Is it transient? Are you planning on storing it forever? Is it sensitive information?

And the scary thing is – more often than not, those were questions they did not have answers to (yet?). I’d ask the questions, and very quickly the conversation would return to:

“Do I need sharding?”
“Should I use a NoSQL solution?”
“What ‘aaS’ option should I be using to achieve my scaling needs?”
“How many nodes do I need?”
“What server configuration is best?”

I’m seeing this more and more – that the technological approach to achieve a business requirement is seen AS the business requirement. I hate to be brutal (well…that’s a lie, I like being brutal), but here’s the thing – Stop being so damn focussed on scaling until you have an idea of what your true performance requirements are!

Don’t get me wrong – there are systems out there that need to be architected from the ground up to deal with scaling challenges that have perhaps never been tackled before.  But read those last few words again: “never been tackled before”.  Do you know what that also means?  It means it applies to an intsy wintsy tiny percentage of IT systems.  If that were not the case, then surprise surprise – those challenges have been tackled before.  Why does everyone in the IT industry think that the system they are about to build will need the same architectural treatment as those 0.00001% of systems in the world that truly do?

Because in almost all of my time in IT, for the other 99.999% of systems out there, the two critical solutions for scaling systems to meet (and well and truly exceed) the performance requirements of the business are pretty simple:

1) don’t write crappy code,

2) don’t store data in a crappy way

That’s it.  When you can definitively demonstrate that

a) your code is well written,

b) your data is being stored in a means to best serve business requirements

and your application still cannot meet performance needs, then yes, it’s time to talk about architectural options for scaling. But more and more I see folks ignoring (a) and (b), or worse, just assuming that they are implicit and guaranteed to happen, and leaping straight into “I need a 10,000 node, geo-dispersed, NoSQL, cached, compressed, mem-optimized, column-based, non-ACID, distributed blah blah blah for my system to work”.

Here’s a reality check – you don’t.  Save yourself a lot of hassles and start simple and focus on quality. You’ll find things will probably scale just fine.

If you’ve made it this far through the post and you think I’m just ranting…well, that’s true, but let me also answer the next obvious question:

“So how do we make sure we write good code? How do we make sure we store our data intelligently?”

That’s why we (developer advocates) are here. We’re here to help you succeed. So check out our resources, reach out to us via social media channels, and we’ll help you every step of the journey.


I had a fun request come in from a colleague the other day.  They had a simple list of fully qualified file names and they needed to present that data in the familiar hierarchical tree layout. 

To demonstrate, I took a little trip down memory lane and grabbed a subset of presentations I’ve done over the years.

SQL> create table t ( fname varchar2(1000));

Table created.

SQL>
SQL> insert into t values ('C:\Users\Connor\Presentations\2002\scene_200205.pdf');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\scene_200205a.pdf');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\254old.ppt');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\254_full.ppt');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2002\257.ppt');

1 row created.
...
...
...

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\london\optimizer.pptx');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\london\optimizer_full.pptx');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\odtug\common_disasters_short.pptx');

1 row created.

SQL> insert into t values ('C:\Users\Connor\Presentations\2018\odtug\common_disasters_short_comp.pptx');

1 row created.

SQL> select count(*) from t;

  COUNT(*)
----------
       634

1 row selected.

So the challenge was to present this list of files in a nice tree format which mirrored my directory structure.

(Truth be told, my preference would be that the query would do a better job than I do in trying to organize my files, but that is a blog post for another day.)

Step 1 was to use a little LATERAL trickery to parse out all of the elements of the file names into one row per “element”, where an “element” is either an unqualified file name or a single-level folder name.

SQL> select id, seq, token
  2  from
  3    ( select rownum id, fname from t ) t_base,
  4    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  5            from dual
  6            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  7            );

        ID        SEQ TOKEN
---------- ---------- ------------------------------------------------------------------------------------
         1          1 C:
         1          2 Users
         1          3 Connor
         1          4 Presentations
         1          5 2002
         1          6 scene_200205.pdf
         2          1 C:
         2          2 Users
         2          3 Connor
         2          4 Presentations
         2          5 2002
         2          6 scene_200205a.pdf
         3          1 C:
         3          2 Users
         3          3 Connor
         3          4 Presentations
         3          5 2002
         3          6 254old.ppt
         4          1 C:
         4          2 Users
         4          3 Connor
         4          4 Presentations
         4          5 2002
         4          6 254_full.ppt
         ...
         ...

We can use that as input to a standard LAG function to associate each file/folder with its parent, using the SEQ column to provide sequencing within each ID:

SQL> with data as
  2  (
  3  select id, seq, token
  4  from
  5    ( select rownum id, fname from t ) t_base,
  6    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  7            from dual
  8            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  9            )
 10  )
 11  select
 12      id,
 13      seq,
 14      token,
 15      lag(token) over ( partition by id order by seq ) as parent
 16  from data
 17  order by id, seq desc;

        ID        SEQ TOKEN                                              PARENT
---------- ---------- -------------------------------------------------- -----------------------------------
         1          6 scene_200205.pdf                                   2002
         1          5 2002                                               Presentations
         1          4 Presentations                                      Connor
         1          3 Connor                                             Users
         1          2 Users                                              C:
         1          1 C:
         2          6 scene_200205a.pdf                                  2002
         2          5 2002                                               Presentations
         2          4 Presentations                                      Connor
         2          3 Connor                                             Users
         2          2 Users                                              C:
         2          1 C:
         3          6 254old.ppt                                         2002
         3          5 2002                                               Presentations
         3          4 Presentations                                      Connor
         3          3 Connor                                             Users
         3          2 Users                                              C:
        ...
        ...

That’s looking good, but we have duplicates in the sense that multiple files will roll up to a single folder (e.g. the “C:” root is repeated). So we DISTINCT it out to leave just the relationships we need:

SQL> with data as
  2  (
  3  select id, seq, token
  4  from
  5    ( select rownum id, fname from t ) t_base,
  6    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  7            from dual
  8            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  9            )
 10  )
 11  select
 12    distinct
 13      token,
 14      lag(token) over ( partition by id order by seq ) as parent
 15  from data
 16  /

TOKEN                                              PARENT
-------------------------------------------------- --------------------------------------------------
C:
254old.ppt                                         2002
257.ppt                                            2002
plus2.ppt                                          2002
254.ppt                                            UKOUG2002_final
error_handling.pdf                                 2003
2004                                               Presentations
mcdonald1.pdf                                      2004
sun_storage_brief.ppt                              sun_storage_presn
ausoug05_memory_x.ppt                              2005
jul05_read_consistency.ppt                         2005
saoug_oracle_home.ppt                              2005
plsql_80_a58236.pdf                                sql_plsql_evo
jun07_tas_moving_data_short.ppt                    misc
ausoug_adel                                        2008
successful_development.pdf                         ausoug_adel
apr08_11g_developers.ppt                           ausoug_apr
apr08_11g_developers_print.ppt                     ausoug_apr
mcdonald_oct08_flashback.pdf                       perth
oct08_11g_developers.ppt                           template
nov08_read_consistency.pdf                         rwwa
perf101b_rwwa.pptx                                 rwwa
sqlectric_melb                                     2009
sql_electric2.pdf                                  sqlectric_melb
sql_electric1.pptx                                 sqlectric_melb
ukoug                                              2009
...
...
...

And we’re pretty much done. At this point, we have the raw data we need in a child/parent form that we can traverse with a standard CONNECT BY hierarchy query – I’ll use LPAD to show the hierarchy relationships:

SQL> with data as
  2  (
  3  select id, seq, token
  4  from
  5    ( select rownum id, fname from t ) t_base,
  6    lateral(select level seq, rtrim(regexp_substr(t_base.fname || '\', '(.*?)\\', 1, level),'\') token
  7            from dual
  8            connect by level <= 1+length(fname)-length(replace(fname,'\'))
  9            )
 10  ),
 11  hier as(
 12    select
 13      distinct
 14        token,
 15        lag(token) over ( partition by id order by seq ) as parent
 16    from data
 17  )
 18  select lpad(' ',level*2)||token hier_list
 19  from hier
 20  start with parent is null
 21  connect by prior token = parent
 22  order siblings by token;

HIER_LIST
--------------------------------------------------------------------------------------------------------
  C:
    Users
      Connor
        Presentations
          2002
            254_full.ppt
            254old.ppt
            257.ppt
            UKOUG2002_final
              254.ppt
              257.ppt
              prelim.ppt
            dec_9i_features2.ppt
            dec_legends.ppt
            dec_legends_prelim.ppt
            jul_sig.ppt
            jul_sig_printout.ppt
            june_sig.ppt
            logo.ppt
            march_sig.ppt
            march_sig2.ppt
            plaus.ppt
            plus2.ppt
            scene_200205.pdf
            scene_200205a.pdf
            sep_unix_sig.ppt
            sep_unix_sig2.ppt
          2003
            254.ppt
            abm_succesful_development.ppt
            apr03_perth.ppt
            dec03_perth_9i_nf.ppt
            dec03_perth_9i_nf_popquiz.pdf
            dec03_perth_9i_nf_popquiz.ppt
            error_handling.pdf
            error_handling.ppt
            ims_performance_examples.pdf
            ims_performance_examples.ppt
            sep03_perth_9i_hidden.pdf
            sep03_perth_9i_hidden.ppt
            sep03_perth_9i_nf.ppt
            sep03_perth_legends.pdf
            sep03_perth_legends.ppt
            sep03_perth_prelim.ppt
            sf_oow_03.pdf
            sf_oow_03.ppt
            sf_oow_03a.ppt
            slide.ppt
            succesful_development.pdf
            succesful_development.ppt
            tools_for_performance.pdf
            tools_for_performance.ppt
          2004
            10046_scene.pdf
            bind_var.pdf
            bind_var.ppt
            dec03_perth_book.ppt
            dec03_perth_book2.ppt
            generating_test_data.pdf
            generating_test_data.ppt
            mar04_hotsos_legends.ppt
            mar04_hotsos_nf_w2k.ppt
            mar04_hotsos_nf_w2k_edit.ppt
            mcdonald1.pdf
            mcdonald2.pdf
            nov04_reorgs_rebuild.ppt
            nov04_succesful_development.ppt
            
            ...
            ...
            ...
            
          2018
            apex_connect
              apex_connect_sql_plsql.pdf
              apex_connect_sql_plsql.pptx
              apex_connect_sql_plsql_v2.pptm
              apex_connect_sql_plsql_v2.pptx
              apex_connect_sql_plsql_v3.pptm
              apex_connect_sql_plsql_v3_subset.pptm
              extensions1.pptx
              extensions2.pptx
            code_china
              better_sql_oracle_code.pptx
              better_sql_oracle_code_v2.pdf
              better_sql_oracle_code_v2.pptx
            code_hyderabad
              better_sql_oracle_code.pptx
            hong_kong
              hong_kong_122.pdf
              hong_kong_122.pptx
              hong_kong_122_no_multi.pptx
              hong_kong_partitioning_ora_template.pptx
              hong_kong_partitioning_ora_template2.pdf
              hong_kong_partitioning_ora_template2.pptx
            london
              london.pdf
              london.pptx
              optimizer.pdf
              optimizer.pptx
              optimizer_full.pptx
            odtug
              common_disasters_short.pptx
              common_disasters_short_comp.pptx      

And there we have it. From a flat file listing to a nice hierarchical layout, just by tackling the problem piece by piece using the trusty WITH clause.


18.2 was released a few days ago, so I thought I’d throw it against my 18c instance and see how things played out.  This was just a single instance database, running with a single pluggable database.

Step 1 for me was to download the latest OPatch.  To be honest, I don’t even check whether I already have the latest version in any installed ORACLE_HOME anymore.  If I’m going to apply a patch, I’m always going to download the latest OPatch unless the README explicitly tells me some reason why I should not.  I figure getting the latest version reduces the chances I’ll hit problems.  For me, I grabbed version “12.2.0.1.13”, but your mileage may vary depending on when you come across this post.

So I then downloaded patch 27676517 (note: this does not include the patches for the JVM, which are done separately).

From there it was just a case of following the README instructions, and I had no dramas at all.  Software patching went through smoothly:

[oracle@host18 27676517]$ export PATH=$PATH:$ORACLE_HOME/OPatch
[oracle@host18 27676517]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.13
Copyright (c) 2018, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/18.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/18.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.13
OUI version       : 12.2.0.4.0
Log file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   27676517  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/18.0.0/dbhome_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '27676517' to OH '/u01/app/oracle/product/18.0.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.assistants.usm, 18.0.0.0.0 ] , [ oracle.has.crs, 18.0.0.0.0 ] , [ oracle.network.cman, 18.0.0.0.0 ] , [ oracle.assistants.asm, 18.0.0.0.0 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.network.rsf, 18.0.0.0.0...

Patching component oracle.rdbms.crs, 18.0.0.0.0...

Patching component oracle.assistants.acf, 18.0.0.0.0...

Patching component oracle.sqlplus.ic, 18.0.0.0.0...

Patching component oracle.rdbms.deconfig, 18.0.0.0.0...

Patching component oracle.sqlplus, 18.0.0.0.0...

Patching component oracle.rdbms.util, 18.0.0.0.0...

Patching component oracle.rdbms, 18.0.0.0.0...

Patching component oracle.rdbms.dbscripts, 18.0.0.0.0...

Patching component oracle.assistants.deconfig, 18.0.0.0.0...

Patching component oracle.assistants.server, 18.0.0.0.0...

Patching component oracle.rdbms.install.plugins, 18.0.0.0.0...

Patching component oracle.rdbms.rsf, 18.0.0.0.0...

Patching component oracle.rdbms.rman, 18.0.0.0.0...

Patching component oracle.javavm.client, 18.0.0.0.0...

Patching component oracle.ldap.owm, 18.0.0.0.0...

Patching component oracle.ldap.security.osdt, 18.0.0.0.0...
Patch 27676517 successfully applied.
Log file location: /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

OPatch succeeded.

And the datapatch stage to update the database(s) also went through without any problems.

[oracle@host18 OPatch]$ ./datapatch -verbose
SQL Patching tool version 18.0.0.0.0 Production on Thu Apr 19 08:37:40 2018
Copyright (c) 2012, 2018, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_10552_2018_04_19_08_37_40/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    18.2.0.0.0 Release_Update 1804041635: Installed
  PDB CDB$ROOT:
    No release update patches installed
  PDB PDB$SEED:
    No release update patches installed
  PDB PDB1:
    No release update patches installed

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    No interim patches need to be rolled back
    Patch 27676517 (Database Release Update : 18.2.0.0.180417 (27676517)):
      Apply from 18.1.0.0.0 Feature Release to 18.2.0.0.0 Release_Update 1804041635
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...done
Patch 27676517 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_CDBROOT_2018Apr19_08_38_45.log (no errors)
Patch 27676517 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDBSEED_2018Apr19_08_39_58.log (no errors)
Patch 27676517 apply (pdb PDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDB1_2018Apr19_08_40_01.log (no errors)
SQL Patching tool complete on Thu Apr 19 08:41:13 2018
[oracle@host18 OPatch]$ 




[oracle@host18 ~]$ sqlplus scott/tiger@pdb1

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Apr 19 08:44:27 2018
Version 18.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0

SQL> 

So all in all, a painless and straightforward experience for me.
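If you like a belt-and-braces confirmation from inside the database as well, here is a hedged sketch of the query I would typically run afterwards (my own habit, not a step from the README):

-- A quick post-patch confirmation (my own habit, not a required step).
-- DBA_REGISTRY_SQLPATCH records what datapatch has applied in the database.
select patch_id, action, status, description, action_time
from   dba_registry_sqlpatch
order  by action_time;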

One minor annoyance for me was the “Bugs Fixed” document.  The README references this as:

but that document is not accessible currently.  However, if you follow a different link in the README

then you will get to the document which contains the list of fixes etc.

Footnote: If you are not familiar with the new nomenclature of “Patch”, “Release Update”, and so on, here is a good place to start:

https://mikedietrichde.com/2017/11/07/ru-rur-recommendations-facts/

and follow the various links within that post for more information.

Happy upgrading!


Yesterday I was caught up in an interesting SNAFU at my local supermarket.  All of the checkout registers shut down, making it impossible to pay for groceries.  Later on Twitter, the company apologized, as we discovered it was actually a nationwide outage!

News of the outage spread like wildfire through the media:

http://www.news.com.au/finance/business/retail/woolworths-checkouts-hit-by-national-outage/news-story/5611943156249d4ecc6427ef0b447c18

https://www.smh.com.au/business/consumer-affairs/woolworths-meltdown-closes-stores-across-australia-20180416-p4z9y4.html

http://www.abc.net.au/news/2018-04-16/woolworths-checkouts-across-australia-down/9663904

https://www.sbs.com.au/news/woolworths-hit-by-nationwide-technical-outage

https://www.9news.com.au/national/2018/04/16/16/29/woolworths-outage-stops-trading-across-australia

https://www.lifehacker.com.au/2018/04/the-woolworths-outage-is-a-lesson-in-business-continuity-planning/

TL;DR – people were forced to abandon their shopping trolleys and had to leave the stores. 

Needless to say, consumers vented loudly and forcefully at Woolworths, and the negative press naturally struck a chord with the general public, because we all feel empathy for the parent in front of the television cameras lamenting their inability to feed their family that night.  (For the record, my boys had meatballs and salad last night, made with the leftovers we had in the fridge.)

In a perfect world, IT system updates should never cause pain for the users of those IT systems, but no matter how careful the testing and planning, I think it is reasonable to assert that we can never totally eliminate the chance of a major problem during an upgrade; our aim is always to shrink the probability to as close to zero as possible.

That brings me to the point of this post – and this perhaps slightly controversial stance.  I don’t think IT outages really matter that much from the perspective of the customer. For example, a while back Amazon had a huge outage here in Australia due to storms in Sydney.  Delta Airlines had a big outage in late 2016.  But last time I checked, people are still flying Delta and still buying stuff they didn’t need from Amazon. Customers will forgive an outage, but only if you prioritize their needs over yours during the crisis.  People are still ripping into Woolworths today because a Twitter apology doesn’t really get consumers any closer to taking groceries home.

So this is what I would have done if I were Woolworths… Make an announcement in each store that the store needs to close unexpectedly, and ask customers to take their trolleys to the nearest checkout (even though I know the checkouts are not working).  At that point, simply let people take what they have accumulated so far in their trolleys for no charge.  The news articles above already mentioned that the stores had security staff on hand to assist with closing the stores – so there is protection against a “looting mentality” being created.  Yes, there will still be some negative press from those customers that could not get into the stores once they closed, but I contend that ultimately this would have turned into a positive result for Woolworths.  Yes, you take a hit on the bottom line for yesterday’s revenue, but the media attention becomes the mums and dads walking out of the stores smiling about the free shop they just got, rather than swearing they’ll never shop at Woolworths again.

Outages don’t matter.  Meeting the customer need is what matters.

Don’t get me wrong – I’m not claiming that any and every company I have ever worked for, or worked with, has a glowing record of understanding how to meet customer needs during times of crisis.  My point is that it should be something to always strive for – when you inflict pain on your customers due to the information technology solutions you build, then do your best to own the problem, and bust a gut trying to make the experience as bearable as possible for your customers, or even a win for them. 

Whether you turn bad into good, or bad into worse, rest assured your customers will remember you for it.


Let’s face it. Concurrency is a good thing when it comes to database applications. After all, if there is only a single user of your application, then chances are it is not a successful application.  Of course there are exceptions to this rule, but by and large, most of the applications we build are going to be used by large populations of users.  And given the recent publicity around users’ data and privacy, we can also be pretty confident that we want the data in our applications to be correct at all times.

As a developer, it is absolutely critical to keep concurrency in mind when building applications.  Here is a simple demonstration to thrust home the point. 

The task assigned to the developer here is simple – move all of the rows satisfying a particular condition from table T1 to table T2.  Let’s create our database objects first:

SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

So table T1 has 10,000 rows and T2 is empty, and our condition to determine which rows to transfer will be X being non-zero.  That happens to be all of the rows, but of course, it could be any filtering rule on the existing data. I’ve chosen all of the rows so I can keep the numbers easy to digest in the upcoming examples.

There is no such thing as a MOVE command in SQL, so we can implement this with two statements: 

  • INSERT-SELECT to copy the rows from T1 to T2
  • DELETE to remove the rows we just copied.

Here is the simple PL/SQL routine to do that, and a first test run of the code.

SQL>
SQL> set serverout on
SQL> begin
  2      insert into t2 select * from t1 where x != 0;
  3      delete from t1 where x != 0;
  4      commit;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

SQL>

Everything looks fine, doesn’t it? But it’s not.  In fact, I’ll do what any good developer should be doing, and add some instrumentation to the code to get more information about its operations.

SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> set serverout on
SQL> begin
  2      insert into t2 select * from t1 where x != 0;
  3      dbms_output.put_line(sql%rowcount);           
  4      delete from t1 where x != 0;
  5      dbms_output.put_line(sql%rowcount);
  6      commit;
  7  end;
  8  /
10000
10100      <=========== !!!!!

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

SQL>
SQL>

Look at the output from the PL/SQL block.  I copied (inserted) 10,000 rows from T1 to T2, but the deletion phase removed more than 10,000 rows. How was that possible?  It is because databases are about concurrency.  In another session, I had a simple script running that just adds new rows to the table continuously.  As a developer, it is vital that you do not assume that you are the only person accessing or manipulating a table.  My code in its existing form has corrupted the database, because 100 rows have been deleted that were never copied to T2.  They’re gone forever!
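For reference, the concurrent workload was nothing fancy. A hedged reconstruction is below (not the exact script I used); any session that keeps inserting rows into T1 and committing will reproduce the effect:

-- A reconstruction of the concurrent session (not the exact script used in the demo).
-- It continually adds new rows to T1 and commits, simulating other application users.
begin
  for i in 1 .. 100000 loop
    insert into t1 values ( 1000000 + i, i, i );
    commit;
  end loop;
end;
/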

So… on to some potential solutions.  Let’s try locking the rows first.

SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> set serverout on
SQL> declare
  2    cursor c is select * from t1 where x != 0 FOR UPDATE;
  3  begin
  4    open c;
  5    insert into t2 select * from t1 where x != 0;
  6    dbms_output.put_line(sql%rowcount);               
  7    delete from t1 where x != 0;
  8    dbms_output.put_line(sql%rowcount);
  9    commit;
 10    close c;
 11  end;
 12  /
10000
10100

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
         0

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

As you can see, that has not yielded any benefit, because locking only applies to rows that already exist.  The new rows arriving from a separate session are unhindered, and hence my DELETE statement still picks up the new rows when it should not. I could work around this by locking the entire table, but what if my filtering condition was only a small percentage of the rows?  It seems overly aggressive to lock everyone out of the table just to copy a small subset of rows.  In this case, I can use PL/SQL to help me out. Using the BULK COLLECT / BULK BIND facilities, I can restrict my DML operations to just the ROWIDs of the rows I am copying.

SQL> create table t1 ( x int, y int, z int );

Table created.

SQL> create table t2 as select * from t1 where 1=0;

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual connect by level <= 10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> set serverout on
SQL> declare
  2    cursor c is select rowid rid, t1.* from t1 where x != 0 for update;
  3    type array is table of c%rowtype index by pls_integer;
  4    l_rows array;
  5    l_idx int;
  6  begin
  7    open c;
  8    loop
  9      fetch c bulk collect into l_rows limit 20000;
 10      exit when l_rows.count = 0;
 11
 12      forall i in 1 .. l_rows.count
 13         insert into t2 values (
 14            l_rows(i).x,
 15            l_rows(i).y,
 16            l_rows(i).z );
 17      dbms_output.put_line(sql%rowcount);       
 18      forall i in 1 .. l_rows.count
 19        delete from t1
 20        where rowid = l_rows(i).rid;
 21      dbms_output.put_line(sql%rowcount);
 22    end loop;
 23  end;
 24  /
10000
10000

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select count(*) from t1;

  COUNT(*)
----------
       147

1 row selected.

SQL> select count(*) from t2;

  COUNT(*)
----------
     10000

1 row selected.

I am collecting the 10,000 ROWIDs of interest into an array, and then deleting only those rows.  So even though new rows are arriving, I will never see them because they will not be in the set of ROWIDs that I am moving to table T2.  We can see that during the operation, 147 new rows arrived in T1 from other sessions.

(Note: Normally you would choose a bulk collect size of perhaps 1000 or similar, and loop around repeatedly to avoid excessive PGA use.  I bulk collected the entire set in one pass to show the rowcounts more intuitively.)

You might be thinking that perhaps we no longer need the “FOR UPDATE” clause on the cursor, but it is still a required element.  Without it, it is possible that a different session could update a row from the candidate data set, and that updated row would still be deleted.  Or a different session could even delete, commit, and insert a fresh row that happens to pick up the same ROWID as one from the original set.

Bottom Line – keep concurrency front and foremost in your mind when you are writing code to manipulate data.


Some members of the Oracle community got well and truly into the April Fools’ Day spirit this year.

There were plenty of very earnest-looking blog posts about a new 18c function – “TO_DOG_YEAR”.  You can read their posts here:

http://www.oralytics.com/2018/04/predicting-ibs-in-dogs-using-oracle-18c.html 
https://blog.dbi-services.com/after-iot-iop-makes-its-way-to-the-database/
http://berxblog.blogspot.ie/2018/04/more-fun-with-ages.html
http://vanpupi.stepi.net/2018/04/01/exploring-18c-exadata-functions/

They even enlisted the help of the team here at AskTOM where they posed a question looking for more details here.

But naturally, it was important to get as many puns and hints into our answer as possible – did you spot them all?


It’s a big day here at AskTOM HQ!

An exciting new launch at AskTOM - YouTube

Just a quick post here so I could consolidate some information about histograms in 12c.

On my last Office Hours session, one of the questions posed asked for an explanation of the new types of histograms in use in 12c.  So I had a few slides and described conceptually, at a high level, how they work and what they represent.  If you’re new to 12c, or new to optimizer histograms in general, then take a look at the video below to get up to speed.  But for those who want to dig in at a deeper level, I also recently saw two absolutely cracking articles which discuss the algorithms and implementation in much more detail.  So I’ve linked them here as well.

Office Hours Highlights - what is a hybrid histogram? - YouTube

http://www.oracle.com/technetwork/articles/database/histogram-construction-oracle-12c-4426845.html

http://www.oracle.com/technetwork/articles/database/maintenance-histograms-db-12c-4426850.html
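And if you just want to see which histogram types have been created on your own columns, here is a hedged sketch of the dictionary query I would reach for (MY_TABLE is a placeholder):

-- A quick look at the histogram types the optimizer has created (MY_TABLE is a
-- placeholder table name). In 12c you may see HYBRID and TOP-FREQUENCY alongside
-- the older FREQUENCY and HEIGHT BALANCED types.
select column_name, num_distinct, num_buckets, histogram
from   user_tab_col_statistics
where  table_name = 'MY_TABLE'
order  by column_name;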

On that note… if you celebrate Easter – have a safe and happy break.  See you on the other side of the weekend!
