First things first: the word autonomous comes from the Greek word autónomos, which means “with laws of one’s own, independent”.
After using the Autonomous Data Warehouse Cloud for a while, I must say I am pleasantly surprised to see something totally new, simple, uncomplicated and effortless, with no additional tuning or re-architecting of the Oracle databases needed. The underlying Oracle Cloud Infrastructure is super fast and highly reliable.
1. You may connect to ADWC either through the web interface, as you can see above, or from a client (I use SQL Developer 17.4); for the client connection type, choose Cloud PDB and not TNS. Your configuration file is a zip file, not the plain text file DBAs are used to.
2. You cannot create indexes on columns, you cannot partition tables, you cannot create materialized views, etc. Not even database links. You will get an error message such as “ORA-00439: feature not enabled: Partitioning” or “ORA-01031: insufficient privileges”.
ADWC lets you create primary key, unique key and foreign key constraints in RELY DISABLE NOVALIDATE mode, which means that they are not enforced. These constraints can also be created in enforced mode, so technically you can create constraints as in a non-autonomous Oracle database.
Note that in execution plans, primary keys and unique keys will only be used by the optimizer for single-table lookups; they will not be used for joins.
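For illustration, here is how such a constraint can be declared (the table and constraint names below are made up):

CREATE TABLE sales_fact (sale_id NUMBER, cust_id NUMBER, amount NUMBER);
-- Not enforced: the optimizer may RELY on it, but rows are not checked
ALTER TABLE sales_fact ADD CONSTRAINT sales_fact_pk
  PRIMARY KEY (sale_id) RELY DISABLE NOVALIDATE;
-- Enforced mode also works, as in a non-autonomous database
ALTER TABLE sales_fact ADD CONSTRAINT sales_fact_uk UNIQUE (cust_id, sale_id);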
But … you can run alter system kill session!
3. The Oracle Autonomous Data Warehouse interface contains all the capabilities a non-professional database user needs to create their own data marts and run analytical reports on the data. You can even run AWR reports.
4. You do not have full DBA control, as Oracle (in my opinion) uses lockdown profiles in order to make the database autonomous. As the ADMIN user, you have 25 roles, including the new DWROLE, which you would normally grant to all ADWC users you create. Among those 25 roles are GATHER_SYSTEM_STATISTICS, SELECT_CATALOG_ROLE, CONSOLE_ADMIN, etc. You have access to most DBA_ and GV_$ views. Not to mention the 211 system privileges.
5. ADWC configures the database initialization parameters based on the compute and storage capacity you provision. ADWC runs on dozens of non-default init.ora parameters. For example:
parallel_degree_policy = AUTO
optimizer_ignore_parallel_hints = TRUE
result_cache_mode = FORCE
inmemory_size = 1G
You are allowed to change almost no init.ora parameters, except a few NLS_ and PLSQL_ parameters.
And the DB block size is 8K!
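As an example of what is still allowed, an NLS parameter can be changed at session level (the format mask here is arbitrary):

alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';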
6. I can see 31 underscore parameters that do not have their default values; here are a few:
_max_io_size = 33554432 (default is 1048576)
_sqlmon_max_plan = 4000 (default is 0)
_enable_parallel_dml = TRUE (default is FALSE)
_optimizer_answering_query_using_stats = TRUE (default is FALSE)
One of the few alter session commands you can run is “alter session disable parallel dml;”
7. Monitoring SQL is easy:
But there is no Oracle Tuning Pack: you did not expect to have that in an autonomous database, did you? There is no RAT (Real Application Testing), Data Masking and Subsetting Pack, Cloud Management Pack, Text, Java in the DB, Oracle XML DB, APEX, Multimedia, etc.
8. Note that this is (for now) a data warehousing platform. However, DML is surprisingly fast too. I managed to insert more than half a billion records in just about 3 minutes:
Do not try to create nested tables, media or spatial types, or use the LONG datatype: they are not supported. Compression is enabled by default: ADWC uses HCC (Hybrid Columnar Compression) for all tables, and changing the compression method is not allowed.
9. The new Machine Learning interface is easy and simple:
You can create Notebooks where you have place for data discovery and analytics. Commands are run in a SQL Query Scratchpad.
10. Users of the Oracle Autonomous Database are allowed to analyze tables and thus influence the Cost-Based Optimizer, and hence performance – I think end users should not be able to influence the laws (“νόμος, nomos”) of the database.
Conclusion: The Autonomous Database is one of the best things Oracle has ever made. And they have quite a portfolio of products…
Finally, check this video: Oracle Autonomous Database: how it works:
“Simply put, an enterprise system consists of an application and the underlying database and infrastructure. Regardless of whether the solution is on-premises or delivered ‘as a service’, the application relies on those two components. Thus, the performance, uptime and security of an application will depend on how well the infrastructure and databases support those attributes.”
Both Figure 1 and Figure 2 show impressive results: the Oracle Cloud Infrastructure allows more than 3000 transactions per second while the leading cloud provider cannot even reach 400. Even the old Oracle Cloud Infrastructure Classic is at 1300 transactions per second.
The Oracle Cloud Infrastructure latency averages 0.962ms, while the leading cloud provider’s latency is about 6 times higher on average.
“Armed with these insights, companies should be ready to consider moving their Oracle mission critical workloads to the Oracle Cloud—and reaping the benefits of greater flexibility and more manageable costs.”
Let us move on to the Java Cloud Service and check the new Forrester Research study.
The costs and benefits for a composite organization with 30 Java developers, based on customer interviews, are:
– Investment costs: $827,384.
– Total benefits: $3,360,871.
– Net cost savings and benefits: $2,533,488.
The composite organization analysis points to benefits of $1,120,290 per year versus investment costs of $275,794, adding up to a net present value (NPV) of $2,533,488 over three years. With Java Cloud Service, developers gained valuable time with near instant development instances and were finally able to provide continuous delivery with applications and functionality for the organization.
For its fiscal Q2 ending Nov. 30, Oracle reported total cloud revenue of $1.5 billion, up 44%, including SaaS revenue of $1.1 billion, up 55%. The combined revenue for cloud and on-premise software was up 9% to $7.8 billion.
Oracle’s Q3 guidance offered growth rates extremely close to those recently posted by salesforce.com. Add in the highly nontrivial fact that the same company with the $6-billion cloud business also has a $33-billion on-premises business, and has rewritten every single bit of that IP for the cloud, with complete compatibility for customers taking the hybrid approach – and the percentage of customers taking the hybrid approach will be somewhere between 98.4% and 100%.
While Salesforce.com’s current SaaS revenue of more than $10 billion is much larger than Oracle’s current SaaS revenue—for the three months ended Aug. 31, Oracle posted SaaS revenue of $1.1 billion—Oracle’s bringing in new SaaS customers and revenue much faster than Salesforce.
The following quote is rather interesting: “Since Larry Ellison has spent the past 40 years competing brashly against and beating rivals large and small, it wasn’t a huge shock to hear him recently rail about how cloud archrival Amazon “has no expertise in database.” But it was a shocker to hear Ellison go on to say that “Amazon runs their entire operation on Oracle [Database]…. They paid us $60 million last year in [database] support and license! And you know who’s not on Amazon? Amazon is not on Amazon.”
And finally, the topic of In-Memory databases is quite hot. Several database brands have their IMDB. A picture is worth a thousand words:
“Artificial intelligence is no match for natural stupidity” ― Albert Einstein
What about introducing Artificial Intelligence into the database to an extent it tunes itself into all possible dimensions?
You have probably either seen the question above or have already asked yourself if that was at all possible. On Ask Tom, John from Guildford wrote the following:
As for Artificial Intelligence, well Artificial Stupidity is more likely to be true. Humanity is not privy to the algorithm for intelligence. Anyone who’s had the pleasure of dealing with machine generated code knows that software is no more capable of writing a cohesive system than it is of becoming self-aware.
Provided you’re not trying to be a cheap alternative to an automaton you just need to think. That one function alone differentiates us from computers, so do more of it. The most sublime software on the planet has an IQ of zero, so outdoing it shouldn’t be all that hard.
So what are the limitations of AI? Jay Liebowitz asks: “if intelligence and stupidity naturally exist, and if AI is said to exist, then is there something that might be called ‘artificial stupidity’?” According to him, three of these limitations are:
Ability to possess and use common sense
Development of deep reasoning systems
Ability to easily acquire and update knowledge
But does artificial intelligence use a database in order to be artificial intelligence? A few very interesting answers to that question are given by Douglas Green, Jordan Miller and Ramon Morales; here is a summary:
Although AI could be built without a database, it would probably be more powerful if a database were added. AI and databases are currently not very well integrated. The database is just a standard tool that the AI uses. However, as AI becomes more advanced, it may become more a part of the database itself.
I don’t believe you can have an effective Artificial Intelligence without a database or memory structure of some kind.
While it is theoretically possible to have an artificial intelligence without using a database, it makes things a LOT easier if you can store what the AI knows somewhere convenient.
As Demystifying Artificial Intelligence explains, AI has been embedded into some of the most fundamental aspects of data management, making those critical data-driven processes more celeritous and manageable.
Bottom line: if AI uses a database, then the intelligent database should at least be autonomous, with most tasks automated, and not rely on artificial stupidity as a DBA limitation of artificial intelligence. Whatever that means… I do not want to curb your enthusiasm, but we first need to fill the skills gap: we need data engineers who understand databases and data warehouses, infrastructure, and tools that span data cleaning, ingestion, security and predictions. In this respect the Cloud is critical and a big differentiator.
“Instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets the taxi driver work with the customer directly.” – Vitalik Buterin
A blockchain database consists of two kinds of records: transactions and blocks. Blocks contain lists of transactions that are hashed and encoded into a Merkle (hash) tree. The linked blocks form a chain, as every block holds the hash pointer to the previous block.
The blockchain can be stored in a flat file or in a database. For example, the Bitcoin core client stores the blockchain metadata using LevelDB (based on Google’s Bigtable database system).
The diagram above can be used to create the schema in PostgreSQL. “As far as what DBMS you should put it in”, says Ali Razeghi, “that’s up to your use case. If you want to analyze the transactions/wallet IDs to see some patterns or do BI work I would recommend a relational DB. If you want to setup a live ingest with multiple cryptocoins I would recommend something that doesn’t need the transaction log so a MongoDB solution would be good.”
If you want to set up a MySQL database: here are 8 easy steps.
But what is the structure of the block, what does it look like?
The block has 4 fields:
1. Block Size: The size of the block in bytes
2. Block Header: Six fields in the block header
3. Transaction Counter: How many transactions follow
4. Transactions: The transactions recorded in this block
The block header has 6 fields:
1. Version: A version number to track software/protocol upgrades
2. Previous Block Hash: A reference to the hash of the previous (parent) block in the chain
3. Merkle Root: A hash of the root of the merkle tree of this block’s transactions
4. Timestamp: The approximate creation time of this block (seconds from Unix Epoch)
5. Difficulty Target: The proof-of-work algorithm difficulty target for this block
6. Nonce: A counter used for the proof-of-work algorithm
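The two lists above map naturally onto relational tables. A minimal sketch (all table and column names here are my own, with the hashes stored simply as hex strings):

CREATE TABLE block_header (
  block_hash        VARCHAR2(64) PRIMARY KEY, -- hash of this header, hex-encoded
  version           NUMBER,       -- software/protocol version
  prev_block_hash   VARCHAR2(64), -- hash pointer to the parent block
  merkle_root       VARCHAR2(64), -- root of the Merkle tree of the transactions
  block_timestamp   NUMBER,       -- seconds from the Unix Epoch
  difficulty_target NUMBER,       -- proof-of-work difficulty target
  nonce             NUMBER        -- proof-of-work counter
);

CREATE TABLE tx (
  tx_hash    VARCHAR2(64) PRIMARY KEY,
  block_hash VARCHAR2(64) REFERENCES block_header (block_hash),
  raw_tx     BLOB          -- the serialized transaction itself
);

Block size and the transaction counter need no columns of their own: both can be derived from the stored data.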
More details, like for example details on block header hash and block height, can be found here.
But how about blockchain vs. relational database: Which is right for your application? As you can see, because the term “blockchain” is not clearly defined, you could argue that almost any IT project could be described as using a blockchain.
It is worth reading Guy Harrison’s article Sealing MongoDB documents on the blockchain. Here is a nice quote: “As a database administrator in the early 1990s, I remember the shock I felt when I realized that the contents of the database files were plain text; I’d just assumed they were encrypted and could only be modified by the database engine acting on behalf of a validated user.”
I particularly like his last question: Is a private blockchain without a token really more efficient than a centralized system? And I would add: private blockchain, really?
But once more, what is blockchain? Rockford Lhotka gives a very good DBA-friendly definition/characteristics of blockchain:
1. A linked list where each node contains data
– Each new node is cryptographically linked to the previous node
– The list and the data in each node are therefore immutable; tampering breaks the cryptography
– New nodes can be added to the list, though existing nodes can’t be altered
– Hence it is a data store – the list and nodes of data are persisted
– Copies of the list exist on many physical devices/servers
– Failure of 1+ physical devices has no impact on the integrity of the data
– The physical devices form a type of networked cluster and work together
– New nodes are only appended to the list if some quorum of physical devices agree with the cryptography and validity of the node via consistent algorithms running on all devices.
Kevin Ford’s reply is a good one to conclude with: “Based on this description (above) it really sounds like your (Rockford Lhotka’s) earlier comparison to the hype around XML is spot on. It sounds like in and of itself it isn’t particularly anything except a low level technology until you structure it to meet a particular problem.”
The nature of blockchain technology makes it difficult to work with high transaction volumes.
But DBAs can have a look at (1) BigchainDB, a database with several blockchain characteristics added: high transaction throughput, decentralization, immutability and native support for assets, and (2) Chainfrog, if interested in connecting legacy databases together. As far as I know, they currently support at least MySQL and SQL Server.
“The interesting thing about cloud computing is that we’ve redefined cloud computing to include everything that we already do. I can’t think of anything that isn’t cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women’s fashion. Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?” – Larry Ellison, CTO, Oracle
DBA 1.0 -> DBA 2.0 -> DBA 3.0: Definitely, the versioning of DBAs is falling behind the database versions of Oracle, Microsoft, IBM, etc. Mainframe, client-server, internet, grid computing, cloud computing…
The topic on the DBA profession and how it changes, how it evolves and how it expands has been of interest among top experts in the industry:
Penny Avril, VP of Oracle Database Product Development, stated that DBAs are being asked to understand what businesses do with data, rather than just the mechanics of keeping the database healthy and running.
Kellyn Pot’Vin-Gorman claims that DBAs with advanced skills will have plenty of work to keep them busy and if Larry is successful with the bid to rid companies of their DBAs for a period of time, they’ll be very busy cleaning up the mess afterwards.
Tim Hall said that for pragmatic DBAs the role has evolved so much over the years, and will continue to do so. Such DBAs have to continue to adapt or die.
Megan Elphingstone concluded that DBA skills would be helpful, but not required in a DBaaS environment.
Jim Donahoe hosted a discussion about the state of the DBA as the cloud continues to increase in popularity.
The first time I heard about DBA 2.0 was about 10 years ago. At Oracle OpenWorld 2017 (in a week or so), I will be listening to what DBA 3.0 is: how the life of a Database Administrator has changed! If you google DBA 3.0, most likely you will find information about how to play De Bellis Antiquitatis DBA 3.0. A different story…
If I can contribute something to the discussion, it is probably the observation that whenever a database vendor automated something in the database, it only generated more work for DBAs down the line. More DBAs are needed now than ever, and the growing size and complexity of IT systems is definitely contributing to that need.
These DBA sessions in San Francisco are quite relevant to the DBA profession (last one on the list will be delivered by me):
– Advance from DBA to Cloud Administrator: Wednesday, Oct 04, 2:00 p.m. – 2:45 p.m. | Moscone West – Room 3022
– Navigating Your DBA Career in the Oracle Cloud: Monday, Oct 02, 1:15 p.m. – 2:00 p.m. | Moscone West – Room 3005
– Security in Oracle Database Cloud Service: Sunday, Oct 01, 3:45 p.m. – 4:30 p.m. | Moscone South – Room 159
– How to Eliminate the Storm When Moving to the Cloud: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Moscone South – Room 160
– War of the Worlds: DBAs Versus Developers: Wednesday, Oct 04, 1:00 p.m. – 1:45 p.m. | Moscone West – Room 3014
– DBA Types: Sunday, Oct 01, 1:45 p.m. – 2:30 p.m. | Marriott Marquis (Yerba Buena Level) – Nob Hill A/B
And finally, a couple of quotes about databases:
– “Database Management System [Origin: Data + Latin basus “low, mean, vile, menial, degrading, counterfeit.”] A complex set of interrelational data structures allowing data to be lost in many convenient sequences while retaining a complete record of the logical relations between the missing items. — From The Devil’s DP Dictionary” ― Stan Kelly-Bootle
– “I’m an oracle of the past. I can accurately predict up to 1 minute in the future, by thoroughly investigating the last 2 years of your life. Also, I look like an old database – flat and full of useless info.” ― Will Advise, Nothing is here…
“A statement is persuasive and credible either because it is directly self-evident or because it appears to be proved from other statements that are so.” Aristotle
In Oracle 12.2, there is a new view called DBA_STATEMENTS. It can help us better understand what SQL we have within our PL/SQL functions, procedures and packages.
There is too little on the Internet and nothing on Metalink about that new view:
PL/Scope was introduced with Oracle 11.1 and covered only PL/SQL. In 12.2, PL/Scope was enhanced by Oracle in order to report on the occurrences of static and dynamic SQL call sites in PL/SQL units.
PL/Scope can help you answer questions such as:
– Where and how is a column x of table y used in the PL/SQL code?
– Is the SQL in my application PL/SQL code compatible with TimesTen?
– What are the constants, variables and exceptions in my application that are declared but never used?
– Is my code at risk for SQL injection and what are the SQL statements with an optimizer hint coded in the application?
– Which SQL has a BULK COLLECT or EXECUTE IMMEDIATE clause?
Here is an example: how do we find all “execute immediate” statements and all hints used in our PL/SQL units? If needed, you can limit the query to RULE hints only (for example).
1. You need to set the PLSCOPE_SETTINGS parameter and ensure SYSAUX has enough space:
SQL> SELECT SPACE_USAGE_KBYTES FROM V$SYSAUX_OCCUPANTS WHERE OCCUPANT_NAME = 'PL/SCOPE';
SQL> show parameter PLSCOPE_SETTINGS
NAME TYPE VALUE
------------------------------------ ----------- -------------------
plscope_settings string IDENTIFIERS:NONE
SQL> alter system set plscope_settings='STATEMENTS:ALL' scope=both;
SQL> show parameter PLSCOPE_SETTINGS
NAME TYPE VALUE
------------------------------------ ----------- -------------------
plscope_settings string STATEMENTS:ALL
2. You must compile the PL/SQL units with the PLSCOPE_SETTINGS=’STATEMENTS:ALL’ to collect the metadata. SQL statement types that PL/Scope collects are: SELECT, UPDATE, INSERT, DELETE, MERGE, EXECUTE IMMEDIATE, SET TRANSACTION, LOCK TABLE, COMMIT, SAVEPOINT, ROLLBACK, OPEN, CLOSE and FETCH.
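For example, recompiling a single unit with metadata collection turned on (the procedure name is the one that appears in the query output below):

SQL> alter procedure LASKE_KAIKKI compile plscope_settings='STATEMENTS:ALL' reuse settings;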
SQL> select TYPE, OBJECT_NAME, OBJECT_TYPE, HAS_HINT,
SUBSTR(TEXT,1,LENGTH(TEXT)-INSTR(REVERSE(TEXT), '/*') +2 ) as "HINT"
from DBA_STATEMENTS
where TYPE='EXECUTE IMMEDIATE' or HAS_HINT='YES';
TYPE OBJECT_NAME OBJECT_TYPE HAS HINT
----------------- ------------- ------------ --- ------------------
EXECUTE IMMEDIATE LASKE_KAIKKI PROCEDURE NO
SELECT LASKE_KAIKKI PROCEDURE YES SELECT /*+ RULE */
And finally, here is a way to regenerate the SQL statements without the hints:
SQL> select TEXT from DBA_STATEMENTS where HAS_HINT='YES';
SELECT /*+ RULE */ NULL FROM DUAL WHERE SYSDATE = SYSDATE
SQL> select 'SELECT '||
TRIM(SUBSTR(TEXT, LENGTH(TEXT) - INSTR(REVERSE(TEXT), '/*') + 2))
as "SQL without HINT"
from DBA_STATEMENTS where HAS_HINT='YES';
SQL without HINT
SELECT NULL FROM DUAL WHERE SYSDATE = SYSDATE
“This policy applies to cloud computing environments from the following vendors: Amazon Web Services – Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS) and Microsoft Azure Platform (collectively, the ‘Authorized Cloud Environments’). This policy applies to these Oracle programs.”
The document that lists “these Oracle Programs” does not include RAC (or Multitenant or In-Memory DB).
An interesting blog post by Brian Peasland entitled Oracle RAC on Third-Party Clouds concludes: “But if I were looking to move my company’s RAC database infrastructure to the cloud, I would seriously investigate the claims in this Oracle white paper before committing to the AWS solution. That last sentence is the entire point of this blog post.”
We should not forget that something works and something being supported are two totally different things. Even for the Oracle Cloud check the Known issues for Oracle Database Cloud Service document: Updating the cloud tooling on a deployment hosting Oracle RAC requires manual update of the Oracle Database Cloud Backup Module.
Conclusion: Oracle RAC can NOT be licensed (and consequently cannot be used) in the above-mentioned cloud environments, although claims to the contrary were published on the internet as recently as yesterday (July 10, 2017).
Thus, one of the main roles of the cyber security DBA is to protect and secure the data.
Here is what the latest Oracle release 12cR2 is offering us:
1. A Fully Encrypted Database
To encrypt an entire database, you must encrypt all the tablespaces within this database, including the Oracle-supplied SYSTEM, SYSAUX, UNDO, and TEMP tablespaces (which is now possible in 12.2). For a temporary tablespace, drop it and then recreate it as encrypted – do not specify an algorithm. Oracle recommends that you encrypt the Oracle-supplied tablespaces by using the default tablespace encryption algorithm, AES128. Here is how you do it:
ALTER TABLESPACE system ENCRYPTION ONLINE ENCRYPT
2. TDE Tablespace Live Conversion
You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion. The feature performs initial cryptographic migration for TDE tablespace encryption on the tablespace data in the background so that the tablespace can continue servicing SQL and DML statements like insert, delete, select, merge, and so on. Ensure that you have enough auxiliary space to complete the encryption and run (for example):
ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES192' ENCRYPT
FILE_NAME_CONVERT = ('users.dbf', 'users_enc.dbf');
3. Support for ARIA, SEED, and GOST algorithms
By default, Transparent Data Encryption (TDE) Column encryption uses the Advanced Encryption Standard with a 192-bit length cipher key (AES192), and tablespace and database encryption use the 128–bit length cipher key (AES128). 12.2 provides advanced security Transparent Data Encryption (TDE) support for these encryption algorithms:
– SEED (Korea Information Security Agency (KISA)) for South Korea
– ARIA (Academia, Research Institute, and Agency) for South Korea
– GOST (GOsudarstvennyy STandart) for Russia
ALTER TABLE clients REKEY USING 'GOST256';
4. TDE Tablespace Offline Conversion
12.2 introduces new SQL commands to encrypt tablespace files in place with no storage overhead. You can do this on multiple instances across multiple cores. Using this feature requires downtime, because you must take the tablespace temporarily offline. With Data Guard configurations, you can either encrypt the physical standby first and switchover, or encrypt the primary database, one tablespace at a time. This feature provides fast offline conversion of existing clear data to TDE encrypted tablespaces. Use the following syntax:
ALTER TABLESPACE users ENCRYPTION OFFLINE ENCRYPT;
5. Setting Future Tablespaces to be Encrypted
ALTER SYSTEM SET ENCRYPT_NEW_TABLESPACES = CLOUD_ONLY;
CLOUD_ONLY transparently encrypts the tablespace in the Cloud using the AES128 algorithm if you do not specify the ENCRYPTION clause of the CREATE TABLESPACE SQL statement: it applies only to an Oracle Cloud environment. ALWAYS automatically encrypts the tablespace using the AES128 algorithm if you omit the ENCRYPTION clause of CREATE TABLESPACE, for both the Cloud and premises scenarios.
6. Role-Based Conditional Auditing
Role-based conditional auditing provides the ability to define unified audit policies that conditionally audit users based on a role in addition to the current capability to audit by users. This feature enables more powerful policy-based conditional auditing by using database roles as the condition for auditing. For example, auditing for new users with the DBA role would begin automatically when they are granted the role:
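A sketch of such a policy (the policy name and the audited actions are just an example):

CREATE AUDIT POLICY table_changes_pol ACTIONS CREATE TABLE, ALTER TABLE, DROP TABLE;
AUDIT POLICY table_changes_pol BY USERS WITH GRANTED ROLES dba;

Any user who holds the DBA role, now or later, will have these actions audited.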
7. Strong Password Verifiers by Default and Minimum Authentication Protocols
The newer verifiers use salted hashes, modern SHA-1 and SHA-2 hashing algorithms, and mixed-case passwords.
The allowed_logon_version_server in the sqlnet.ora file is used to specify the minimum authentication protocol allowed when connecting to Oracle Database instances.
Oracle notes that the term “version” in the allowed_logon_version_server parameter name refers to the version of the authentication protocol. It does NOT refer to the Oracle release version.
– SQLNET.ALLOWED_LOGON_VERSION_SERVER=8 generates all three password versions 10g, 11g, and 12c
– SQLNET.ALLOWED_LOGON_VERSION_SERVER=12 generates both 11g and 12c password versions, and removes the 10g password version
– SQLNET.ALLOWED_LOGON_VERSION_SERVER=12a generates only the 12c password version
8. New init.ora parameter called OUTBOUND_DBLINK_PROTOCOLS
Due to direct SQL*Net Access Over Oracle Cloud, existing applications can now use Oracle Cloud without any code changes. We can easily control the outbound database link options:
– OUTBOUND_DBLINK_PROTOCOLS specifies the allowed network protocols for outbound database link connections: this can be used to restrict database links to use secure protocols
– ALLOW_GLOBAL_DBLINKS allows or disallows global database links, which look up LDAP by default
9. SYSRAC – Separation of Duty in a RAC
SYSRAC is a new role for Oracle Real Application Clusters (Oracle RAC) management. This administrative privilege is the default mode for connecting to the database by the clusterware agent on behalf of the Oracle RAC utilities such as srvctl. For example, we can now create a named administrative account and grant only the administrative privileges needed such as SYSRAC and SYSDG to manage both Oracle RAC and Oracle Data Guard configurations.
10. Automatic Locking of Inactive User Accounts
Within a user profile, the INACTIVE_ACCOUNT_TIME parameter controls the maximum time that an account can remain unused. The account is automatically locked if a login does not occur within the specified number of days. Locking inactive user accounts prevents attackers from using them to gain access to the database. The minimum setting is 15 and the maximum is 24855. The default for INACTIVE_ACCOUNT_TIME is UNLIMITED.
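For example (the profile and user names are hypothetical):

CREATE PROFILE security_profile LIMIT INACTIVE_ACCOUNT_TIME 30;
ALTER USER app_user PROFILE security_profile;

With this profile, app_user is locked automatically after 30 days without a login.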
11. Kerberos-Based Authentication for Direct NFS
Oracle Database now supports Kerberos implementation with Direct NFS communication. This feature solves the problem of authentication, message integrity, and optional encryption over unsecured networks for data exchange between Oracle Database and NFS servers using Direct NFS protocols.
12. Lockdown Profiles
A lockdown profile is a mechanism used to restrict the operations that can be performed by connections to a given PDB, for both cloud and non-cloud deployments.
There are three functionalities that you can disable:
– Feature: lets us enable or disable database features for, say, junior DBAs (or cowboy DBAs)
– Option: for now, the two options we can enable/disable are “DATABASE QUEUING” and “PARTITIONING”
– Statement: we can either enable or disable the statements “ALTER DATABASE”, “ALTER PLUGGABLE DATABASE”, “ALTER SESSION”, and “ALTER SYSTEM”. In addition, we can specify granular options along with these statements. Example:
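Here is a sketch, with a made-up profile name:

CREATE LOCKDOWN PROFILE junior_dba_prof;
ALTER LOCKDOWN PROFILE junior_dba_prof DISABLE OPTION = ('PARTITIONING');
ALTER LOCKDOWN PROFILE junior_dba_prof DISABLE STATEMENT = ('ALTER SYSTEM')
  CLAUSE = ('SET') OPTION = ('NLS_DATE_FORMAT');
-- make the profile effective for the current PDB
ALTER SYSTEM SET PDB_LOCKDOWN = junior_dba_prof;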
The maximum penalty for non-compliance is 4% of annual revenue or €20 million, whichever is higher. Lower fines of up to 2% are possible for administrative breaches, such as not carrying out impact assessments or notifying the authorities or individuals in the event of a data breach. This puts data protection penalties into same category as anti-corruption or competition compliance.
What DBAs should start with now is to account for and identify 100% of the private data located in all their databases!
1. Assess (Article 35 and Recital 84)
2. Prevent (Articles 5,6,29,32 and Recitals 26,28,64,83)
3. Detect (Articles 30,33,34)
4. Maximum protection (Articles 25,32)
Article 25 is about data minimization, user access limits and limit period of storage and accessibility.
Article 32 is about pseudonymization and encryption, ongoing protection and regular testing and verification.
Articles 33 and 34 are about data breach notification: there is a 72-hour notification requirement following discovery of a data breach.
Article 35 is about the data protection impact assessment.
Article 44 treats data transfers to third country or international organizations where the allowed transfers are only to entities in compliance with the regulation.
As you can see, DBA job ads nowadays include GDPR skills and responsibilities:
The main lawful bases for data processing are consent and necessity. Data can be recognized as a necessity if it:
• Relates to the performance of a contract
• Illustrates compliance with a legal obligation
• Protects the vital interests of the data subject or another person
• Relates to a task that’s in the public interest
• Is used for the purposes of legitimate interests pursued by the controller or a third party (except where overridden by the rights of the data subject)
Data subjects’ requests for access should be responded to within a month and without charge. This is new legislation within the GDPR and the same one month time frame applies to rectifying inaccurate data.
Breach notifications should be made within 72 hours of becoming aware. If this time frame isn’t met, a fine of 10M€, or 2% of global turnover, can be issued as a penalty. A breach is any failure of security leading to the destruction, loss, alteration, unauthorized disclosure of/access to personal data. Supervisory authorities must be notified if a breach results in a risk to the rights and freedoms of individuals.
Data held in an encrypted or pseudonymized form isn’t deemed to be personal data and falls outside the scope of these new rules altogether. Despite this, data that’s encrypted and considered secure using today’s technology may become readable in the future. It is therefore worth considering format-preserving encryption/pseudonymization, which renders the data anonymous but still allows selected processing of it.
Here are a few interesting articles meant mostly for DBAs:
A young and extremely smart analyst from my company asked me last week: “Why is the Oracle database better than MySQL or MongoDB?”. Tough question, right? You may ask the same about DB2 or SQL Server. All databases have their pros and cons. And we as people have our preferences, based on experience, knowledge and prejudices.
If you try to find the explanation of the quote at the top, you will very likely end up with this one: “You have to spend most of your life working, so if you’re unhappy at your work you’re likely to always be unhappy”.
So, I have been happy (if that is the right word) working with the Oracle database. Unlike with DB2, you have all the tools, options and automation to tune it. With a couple of hundred MySQL databases at Nokia, we spent more time (thank you, Google!) investigating issues than with more than one thousand Oracle databases. SQL Server: if you prefer using the mouse instead of the keyboard, then this is the right database for you! Teradata compared to Exadata: let me not even start…
As Forrester says, in-memory databases are driving next-generation workloads and use cases. Check out this recent comparison of all vendors.
But back to the Cloud. Have a look at the speed at which Oracle is embedding new features into its Cloud. By far the best Cloud for Oracle workloads! All of these are new additions to Oracle IaaS: