Follow DBA Republic | Oracle database tips, tricks and.. on Feedspot


I was involved in a research project where I needed to provide a result set back to the requester. The file contained 7 million records, which is too big to send over email. Even if I managed to get it to the requester via FTP or a similar tool, the file would be too large for anyone to open in Notepad or another text editor. It is advisable to split a big file into many smaller files that are more manageable to view, edit, or analyse.

If the data is being pulled from a database, you can limit the number of records in your query while spooling to a file. If that is not an option, I would use the Linux SPLIT command to split the file in a way that is more meaningful to the requester. I am going to show how to use the split command to break a large file into smaller files. A file can be split in many ways, and we will go over more than one way to slice a big file.

Split Syntax:
split [OPTION]... [FILE [PREFIX]]
Split based on file size (-b): This option splits a file into many smaller files of the size given with -b
split -b5000000 large_file.zip file_prefix_
This creates small files of 5000000 bytes (roughly 5 MB) each.

Split based on the number of output files (-n): This option splits a file into the number of chunks given with -n (here 50); -e skips creating empty output files
split -n50 -e large_file.zip small_prefix
Creates 50 files out of large_file.zip.

Split based on the number of lines in each file (-l): This option puts the number of lines specified with -l into each output file (best suited to text files)
split -l500 large_file.txt small_prefix
Creates multiple files with 500 lines of records in each file.

Awesome, you can now split a large file into many small files. I prefer to split mine based on the number of lines per file (-l). There are times when you get sick of managing multiple files and want a single big file back. How do you accomplish this? cat is a Linux command, short for concatenate, that lets you create files, view the contents of files, and combine multiple files into a single file. Suppose we have xaa, xab, xac, and xad (split's default output names) that we need to combine into a single file.
cat x* > large.zip
cat xaa xab xac xad > large.zip
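To convince yourself that nothing is lost in the split-and-cat round trip, here is a quick self-contained check (file names are made up for the demo):

```shell
# Create a sample file, split it, recombine, and verify the result matches.
seq 1 1000 > large_file.txt            # sample data: 1000 lines
split -l 300 large_file.txt part_      # part_aa..part_ad (300 lines each, last gets 100)
cat part_* > rebuilt_file.txt          # suffixes sort alphabetically, so x* is safe
cmp large_file.txt rebuilt_file.txt && echo "files match"
rm -f part_* rebuilt_file.txt large_file.txt
```

Because split names its chunks in alphabetical order, the `cat part_*` glob recombines them in the original order.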

Split and cat are the Linux commands to split a file or concatenate multiple files. These commands are very powerful, and I only showed you the most frequently used options. I would encourage you to man these commands and scan all the options they offer. Hey Windows fans, how do you split and cat files? We would like to hear from you, as Linux is not an option for everyone.
In most companies, the System Admin takes care of space issues on servers. They have tools or scripts to monitor disk space usage, plan storage accordingly, and avoid server failure. I use Linux to host my Oracle databases in a sandbox environment, and it is essential to know Linux inside out even if you are a database professional. You will need intermediate skills like those of a System Admin to maintain your Linux server. One critical task of a System Admin is to monitor storage usage and report it to users so they can reclaim/recycle space or ask for additional space.

When reaching out to users about space usage, it is always a good idea to get the list of top offenders and contact them before contacting all users. We will discuss how to find the top files and directories by size and space used. These are Linux commands that can be automated in a shell script to send alerts when usage reaches 80%, so you can proactively monitor servers.
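As a sketch of that 80% alert idea (the threshold and the echo-based alert are placeholders; a real script would call mail or a paging tool):

```shell
#!/bin/sh
# Warn when any mounted filesystem exceeds the usage threshold.
# Replace the echo with `mail` or your alerting tool in a real setup.
THRESHOLD=80
df -P | awk 'NR > 1 {print $5, $6}' | while read pct mount; do
  usage=${pct%\%}                       # strip the trailing % sign
  if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $mount is at ${pct} used"
  fi
done
```

The -P flag keeps df output in a portable one-line-per-filesystem format so the awk column positions are stable.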

How to list file sizes?
I use the -h option of the ls command to display more human-readable output
ls -lh
How to display the biggest files only? The command below displays the top 10 big files
find . -type f -exec du -Sh {} + | sort -rh | head -n 10
How to list top 10 big files and folders in your present working directory (pwd)?
du -a | sort -n -r | head -n 10
Alternative: with the human-readable option
du -sh directory_name
How to list the top 10 files & folders in the home directory?
du -a /home | sort -n -r | head -n 10
How to list top files from all the directories in human readable format?
du -hs * | sort -rh | head -5
How to find the largest files in a directory?
ls -lS
ls -lS | less
ls -lS | head -10
How to find the smallest files in a directory?
ls -lSr
ls -lSr | less
ls -lSr | head -10
By now, you may have figured out that there are just two commands we played with to find what we need: the ls and du Linux commands and their options. These are commands I learned over the years from Linux gurus online and in person, and I would like to thank them for their time and for sharing their knowledge. I hope you find these commands helpful, and please share with our community if you know better commands than the ones discussed here.
Why is there a need to know, learn, and fully understand "The Most Dangerous SQL Statements"? Let us compare these SQL statements with the life decisions we make. For every decision or action in life, we weigh the advantages against the risk. There are SQL commands that are dangerous, so we need to know their impact and prepare for the risk before running them. Recently, I took a poll of the SQL statements that DBAs and developers think are the most dangerous. The list below is from that poll and is in no particular order. I will explain what each SQL statement does and why respondents consider it dangerous.

DROP: This is a powerful SQL statement for any database, not just Oracle. It can drop a table, database, schema, database object, table column, etc. Once the drop is performed, the objects may be gone forever, so always make sure you do not need them before dropping them. If you are not 100% sure, save a backup copy somewhere easily accessible when required. Some databases, like Oracle, may let you recover from the recycle bin, so get familiar with recovery tools like the recycle bin, flashback, etc. on the database you work with.

TRUNCATE: Deletes all the data from a table without removing the structure of the object. Truncate generates almost no undo, so it is faster than a delete, but the transaction cannot be rolled back. Also, TRUNCATE does not invoke delete triggers. Speed comes with a price; save a copy somewhere if you might need the data back.

DELETE: Deletes rows from a table; the change should be committed after completion to make the delete permanent. A delete can be rolled back because it generates undo. TRUNCATE deletes all the rows from a table without generating undo and cannot be rolled back, at least in an Oracle database; truncate is much faster than a delete, and the change is permanent. DROP and TRUNCATE are DDL statements and cannot be rolled back. In some databases, a delete is effectively permanent because the database or the IDE client issues an auto-commit.

COMMIT: This command makes a DML change permanent (DDL commits implicitly in Oracle). In most databases, you have the option to run COMMIT after DML, but in some you don't get the option. SQL Server is one example that issues auto-commit by default: all your DML changes are permanent and you don't have the option to roll back. Use it wisely or lose it all.

GRANT ALL: This lets a user or schema owner have all the privileges that exist on a table or view. Some lazy DBAs or developers use this, but it is not recommended; instead, grant the least privilege the user/schema needs to do the job.

What do you think is the most dangerous SQL command? Please comment below. To me it is SELECT, and here is why: all the SQL statements described above are reversible if you have a good backup and proper disaster recovery in place. Yes, you may have to go through some pain to get the data back, but what is not reversible is the result of a SELECT statement.

Hackers look for information in the database. Recently, Equifax, a credit reporting company, was hacked, and customers' credit card and personal information were stolen. It happened via SELECT statements, and the damage a SELECT can do is not repairable. If the select from the table had been restricted, the problem could have been avoided. There are a few things DBAs and developers can do to restrict SELECT to intended applications or authorized users: using proper grants, using stored procedures to read data back from tables, and designing applications and writing code free of SQL injection are some of the ways SELECT can be denied to unauthorized users at the database level.

In a nutshell, if you ever come across these SQL commands, know what each command does in detail, learn the risk, and prepare for how you can overcome or prevent it. I would start with a rollback query when running these SQL statements. This helps you get the data and objects back to their original state when asked or needed. A good developer or DBA starts with a rollback plan, writes the rollback code, and tests it before implementing the actual change request. Always test your rollback plan; don't just plan your rollback in your head or on paper, because those kinds of rollback plans can surprise you, and later your boss or HR can surprise you. Test your rollback and don't let anyone surprise you.
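As a minimal sketch of that rollback-first habit (the table and column names here are illustrative, not from a real system):

```sql
-- 1. Back up the rows you are about to change (hypothetical table names).
CREATE TABLE emp_bk AS SELECT * FROM emp WHERE deptno = 30;

-- 2. Run the dangerous statement.
DELETE FROM emp WHERE deptno = 30;

-- 3. Verify the result, then COMMIT -- or roll back if something looks wrong.
-- ROLLBACK;                              -- undoes the DELETE (not possible after TRUNCATE)
-- INSERT INTO emp SELECT * FROM emp_bk;  -- restore path if you notice the mistake after a commit
```

The backup table doubles as the tested rollback code: you can prove the restore works before you ever run the destructive statement in production.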

Who remembers the export-import definition from grade school? According to Investopedia, an export is a function of international trade whereby goods produced in one country are shipped to another country for sale or trade, and an import is a good or service brought into one country from another. The concept of export and import in Oracle is similar. Oracle has a built-in Export-Import utility that lets anyone export Oracle objects, schemas, or a whole database, and import them back into the same or a different database.

This utility is frequently used to move or copy tables, data, schemas, databases, and database objects from one database to another, even if the target database is on a different platform or hardware and software configuration. As an application DBA, I use this to move tables and data from DEV to UAT and to the PROD environment.

To use the Export utility, you must have the CREATE SESSION privilege on the Oracle database. To export tables owned by another user, you must have the EXP_FULL_DATABASE role enabled; this role is granted to all DBAs. To use the Import utility, you must have the CREATE SESSION privilege. If the export file was created by a user with the EXP_FULL_DATABASE privilege, then you must have the IMP_FULL_DATABASE privilege to import it, which is typically granted via the DBA role.

During export, the utility writes the dump file to your PC, not the server or host where the database sits. This tool suits database developers who do not have DBA or export-import roles, and it does not require access to the host machine. Oracle has another utility called Data Pump, introduced in Oracle 10g, which is similar to the export-import utility but much faster and more efficient. The drawback is that you must have access to the host where the export dump resides; Data Pump is more for system DBAs or sysadmins and is not designed for developers.

What to check before import? If importing tables and data, verify whether the table already exists. If a table exists, you can skip creating the table and just load the data. Another option is to drop the table and import back both the table structure and the data. You may lose some table privileges with the second option unless you also import the grants from the source, so you need to know all the options the export and import utilities offer.

Table Export/Import on the Same Schema:
EXPORT:
exp USERID=username/password@source_instance TABLES=(table_name) LOG=exp_table_name.log FILE=table_name.dmp
IMPORT:
imp USERID=username/password@target_instance TABLES=(table_name) LOG=imp_table_name.log FILE=table_name.dmp

Example: Single table On the Same Schema
exp USERID=hr/password@DEV TABLES=(employee) LOG=employee_table_DEV.log FILE=employee_table.dmp
imp USERID=hr/password@PROD TABLES=(employee) LOG=employee_table_PROD.log FILE=employee_table.dmp

Example: Multiple tables On Same Schema:
exp USERID=hr/password@DEV TABLES=(employee, dept) LOG=employee_dept_table_DEV.log FILE=employee_dept_table.dmp
imp USERID=hr/password@PROD TABLES=(employee, dept) LOG=employee_dept_table_PROD.log FILE=employee_dept_table.dmp
Table Export/Import on the Different Schema:
EXPORT:
exp USERID=username_1/password@source_instance TABLES=(table_name) LOG=table_name.log FILE=table_name.dmp
IMPORT:
imp USERID=username_2/password@target_instance TABLES=(table_name) fromuser=username_1 touser=username_2 LOG=table_name.log FILE=table_name.dmp

Example: Single table On Different Schema:
exp USERID=hr/password@DEV TABLES=(employee) LOG=exp_employee_table_DEV.log FILE=employee_table.dmp
imp USERID=dept/password@PROD TABLES=(employee) fromuser=hr touser=dept LOG=imp_employee_table_PROD.log FILE=employee_table.dmp

Example: Multiple tables On Different Schema:
exp USERID=hr/password@DEV TABLES=(employee, dept) LOG=exp_employee_dept_table_DEV.log FILE=employee_dept_table.dmp
imp USERID=dept/password@PROD TABLES=(employee, dept) fromuser=hr touser=dept LOG=imp_employee_dept_table_PROD.log FILE=employee_dept_table.dmp
Full Schema Export/Import:
Example: Full Export/Import on the Same Schema:
exp USERID=hr/password@DEV owner=schema_name log=exp_owner_schema.log file=owner_schema.dmp
imp USERID=hr/password@PROD fromuser=schema_name touser=schema_name log=imp_owner_schema.log file=owner_schema.dmp

Example: Full Schema Export/Import on The Different Schema:
exp USERID=hr/password@DEV owner=schema_name log=exp_owner_schema.log file=owner_schema.dmp
imp USERID=hr/password@PROD fromuser=schema_name touser=target_schema log=imp_owner_schema.log file=owner_schema.dmp

Whole Database Export/Import:

Example:Full Database Export/Import:
exp USERID=hr/password@DEV full=y log=exp_entire_db.log file=entire_db.dmp
imp USERID=hr/password@PROD full=y log=imp_entire_db.log file=entire_db.dmp
Help: 
imp help=y (displays all import options)
exp help=y (displays all export options)
Help displays all the options with a short description for export and import. It is a handy reference when writing export and import commands; I refer to it all the time, which saves me from going back to the Oracle documentation for the options I need.
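Since exp and imp are ordinary shell commands, they script well. Here is a hypothetical helper (the function name and argument layout are mine, not part of the utilities) that builds a consistent command line so the log and dump file names always match:

```shell
#!/bin/sh
# Build an exp command for a table copy. Credentials, instance, and table
# names are placeholders; the function echoes the command instead of running it.
build_exp_cmd() {
  user="$1" instance="$2" tables="$3" tag="$4"
  echo "exp USERID=${user}@${instance} TABLES=(${tables}) LOG=exp_${tag}.log FILE=${tag}.dmp"
}

build_exp_cmd hr/password DEV "employee,dept" employee_dept_table
```

Generating the command string first also lets you review (or log) exactly what will run before executing it with eval or a scheduler.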
This article is inspired by a question repeatedly asked of me: why do you use a complete materialized view, and not a table or just a view, when the data is not available during refresh? This is a great question; I once asked it myself and came to learn a lot about materialized views. The more I learned, the more I used them and wrote about them. They have changed my life at work and improved the efficiency of my databases and the applications that use them.

There is no question about data availability during refresh for a fast refreshable materialized view: the data is available at all times, even during refresh, although it may be stale. Now let's ask the same question of a complete refresh materialized view. What do you think, is the data available during refresh? A complete refresh takes a while, so it does not fall into the fast refresh category. The answer is both Yes and No for data availability while the complete refresh is happening. When is it No and when is it Yes? Let's find out.

What is a complete refreshable materialized view? This type of MView refreshes everything from the master tables. The master tables have no MView logs to keep track of DML, so the refresh pulls everything from the master tables into the MView. During refresh, the MView data gets deleted or truncated and then re-inserted with data from the master tables. If the MView data is DELETED and then refreshed with the new set of data, the old data remains available during the refresh. If it is TRUNCATED, the data is not available until the refresh is complete. So what determines whether the refresh uses DELETE or TRUNCATE?

To answer that question, you need to know how a complete materialized view refresh is invoked.

SYNTAX:
DBMS_MVIEW.REFRESH (
{ list IN VARCHAR2,
| tab IN DBMS_UTILITY.UNCL_ARRAY,}
method IN VARCHAR2 := NULL,
rollback_seg IN VARCHAR2 := NULL,
push_deferred_rpc IN BOOLEAN := true,
refresh_after_errors IN BOOLEAN := false,
purge_option IN BINARY_INTEGER := 1,
parallelism IN BINARY_INTEGER := 0,
heap_size IN BINARY_INTEGER := 0,
atomic_refresh IN BOOLEAN := true,
nested IN BOOLEAN := false);
  1. List|Tab: A comma-delimited list of MViews.
  2. Method: Refresh method, where 'F' indicates fast refresh, '?' indicates force refresh, 'C' indicates complete refresh, and 'P' refreshes by recomputing the rows in the MView affected by changed partitions.
  3. Atomic_Refresh: TRUE refreshes the MViews in a single transaction; FALSE refreshes them in separate transactions.
  4. Parallelism: 0 specifies serial propagation; n > 1 specifies parallel propagation.
  5. The rest: I don't worry about them, and neither should you!
Example:  Atomic Transaction:
BEGIN
DBMS_MVIEW.REFRESH(LIST => 'MV_NAME', METHOD => 'C', ATOMIC_REFRESH => TRUE);
END;
/
The materialized view is refreshed in a single, atomic transaction. The refresh deletes from the MView's underlying table and then inserts into it. Delete is a slow process that generates redo/undo. The positive side of this refresh method is that the data stays available during the refresh. By default, the atomic_refresh parameter is set to TRUE. Did this answer your question? Wait, we have more to cover below.
Example:  Non-Atomic Transaction:
BEGIN
DBMS_MVIEW.REFRESH(LIST => 'MV_NAME', METHOD => 'C', ATOMIC_REFRESH => FALSE);
END;
/
The materialized view is refreshed using multiple transactions. The refresh TRUNCATEs the MView and then inserts into it. TRUNCATE is faster than DELETE and generates minimal redo and undo. Since the refresh truncates the MView, the data is NOT available during refresh, but the refresh time is much shorter than in the previous, atomic-transaction example. This is a great way to improve the refresh time of a complete refreshable MView that is taking too long.
Example: Atomic Transaction with Multiple MViews
BEGIN
DBMS_MVIEW.REFRESH(LIST => 'MV_NAME1, MV_NAME_2', METHOD => 'C', ATOMIC_REFRESH => TRUE);
END;
/
In this example, both MViews are refreshed in a single transaction, meaning they refresh at the same time. If an error occurs in either one, both are rolled back to their previous state.
Example: Non-Atomic Transaction with Multiple MViews
BEGIN
DBMS_MVIEW.REFRESH(LIST => 'MV_NAME1, MV_NAME_2', METHOD => 'C', ATOMIC_REFRESH => FALSE);
END;
/
In this example, the refresh happens in two or more transactions, which is like refreshing each MView individually: if one fails, the other can still complete, and vice versa.
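To verify which path a given MView actually took and when it last ran, you can query the data dictionary. This is a sketch I would run as the MView owner (USER_MVIEWS is a standard Oracle dictionary view):

```sql
-- LAST_REFRESH_TYPE shows whether the last refresh was COMPLETE or FAST;
-- LAST_REFRESH_DATE shows when it happened.
SELECT mview_name,
       refresh_method,
       last_refresh_type,
       last_refresh_date
FROM   user_mviews
ORDER BY last_refresh_date DESC;
```

Comparing LAST_REFRESH_DATE before and after a scheduled job is also a quick way to confirm the job is actually refreshing.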

Materialized views are great Oracle objects that are easy to use and manage; they are used mainly to move data (ETL), optimize query performance, etc. They save development time when you know them inside and out. I have covered several tips on and types of materialized views in my blog; if you are interested, just go to the search bar in the upper right corner.
In a busy organization, where applications are pulling, updating, and purging records, we need to be really careful when doing maintenance work or designing a new application or procedure. How do you make sure the data you want to modify is locked for you, so other applications or users can't modify or purge it? This really matters when designing a procedure or application that updates or deletes. You should lock only the records you intend to change, so the rest stay available for other sessions to perform DML.

SELECT ... FOR UPDATE
Locks the rows selected by the SELECT statement; the locks are released only when COMMIT or ROLLBACK is issued.
SYNTAX:
SELECT col1, col2, ...
FROM table
FOR UPDATE [OF col_name] [NOWAIT];
Where col_name is the column you wish to update, and NOWAIT returns control immediately with an error instead of waiting for other sessions to release their locks.
Example:
SELECT empno
FROM emp
WHERE job = 'CLERK'
FOR UPDATE OF sal;
This locks the rows in the emp table where the job is CLERK. The locks are released when the application or user issues COMMIT or ROLLBACK. The returned rows hold row-level exclusive locks: other sessions can still query them but cannot UPDATE, DELETE, or SELECT FOR UPDATE them. This feature lets a developer lock a set of Oracle rows until the transaction is completed.
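When NOWAIT is used and another session already holds a lock on one of the requested rows, Oracle raises ORA-00054 immediately instead of blocking. A sketch of handling that in PL/SQL (this example is mine, not from the original article):

```sql
BEGIN
  -- Try to lock the CLERK rows; fail fast if another session holds them.
  FOR r IN (SELECT empno
            FROM emp
            WHERE job = 'CLERK'
            FOR UPDATE OF sal NOWAIT) LOOP
    NULL;  -- rows are now locked for this transaction
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -54 THEN  -- ORA-00054: resource busy, NOWAIT specified
      DBMS_OUTPUT.PUT_LINE('Rows are locked by another session; try again later.');
    ELSE
      RAISE;
    END IF;
END;
/
```

Failing fast like this lets a batch job retry or alert rather than hang behind an interactive session's uncommitted change.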
 
The WHERE CURRENT OF clause is used in UPDATE and DELETE statements to reference the row most recently fetched from a SELECT FOR UPDATE cursor, so you delete or update exactly the records the cursor locked.
Example:
UPDATE SYNTAX:
UPDATE tbl_name
SET set_clause
WHERE CURRENT OF cursor_name;

DELETE SYNTAX:
DELETE FROM tbl_name
WHERE CURRENT OF cursor_name;
Example: Using SELECT FOR UPDATE & WHERE CURRENT OF
DECLARE
CURSOR emp_update_cur
IS
SELECT empno, sal
FROM emp
FOR UPDATE OF sal;
l_emp_update_cur emp_update_cur%ROWTYPE;
l_new_sal emp_bk.sal%TYPE;
BEGIN
OPEN emp_update_cur;
LOOP
FETCH emp_update_cur INTO l_emp_update_cur;
EXIT WHEN emp_update_cur%NOTFOUND;
SELECT sal INTO l_new_sal
FROM emp_bk
WHERE empno = l_emp_update_cur.empno;
UPDATE emp
SET sal = l_new_sal
WHERE CURRENT OF emp_update_cur;
END LOOP;
CLOSE emp_update_cur;
COMMIT;
END;
/
An alternative to the PL/SQL above is a single SQL UPDATE statement with a subquery.
UPDATE emp e
SET sal = (SELECT sal from emp_bk b
WHERE e.empno = b.empno)
WHERE EXISTS
(SELECT 1 FROM emp_bk b WHERE e.empno = b.empno);
The SQL version is good for small numbers of updates when you know the data in emp_bk won't change underneath you. The PL/SQL version works for any number of rows (small, large, or very large); you control when to commit and can add IF logic anywhere. The cursor locks the selected rows for the given condition and updates emp with the sal from emp_bk.
A REF CURSOR is a powerful, flexible, and scalable way to return query results from an Oracle database to an application. It is a PL/SQL data type whose value is the memory address of a query work area on the database. In other words, a REF CURSOR is a pointer to a result set.

REF CURSOR Properties:
  1. A REF CURSOR refers to a memory address on the database. Therefore, the client must be connected to the database during the lifetime of the REF CURSOR in order to access it.
  2. A REF CURSOR involves an additional database round-trip. While the REF CURSOR is returned to the client, the actual data is not returned until the client opens the REF CURSOR and requests the data. Note that data is not retrieved until the user attempts to read it.
  3. A REF CURSOR is not updatable. The result set represented by the REF CURSOR is read-only. You cannot update the database by using a REF CURSOR.
  4. A REF CURSOR is not backward scrollable. The data represented by the REF CURSOR is accessed in a forward-only, serial manner. You cannot position a record pointer inside the REF CURSOR to point to random records in the result set.
  5. A REF CURSOR is a PL/SQL data type. You create and return a REF CURSOR inside a PL/SQL code block.
Example: Stored Procedures that uses REF Cursors
CREATE TABLE prabin_emp(
empno NUMBER(4,0),
ename VARCHAR2(10),
job VARCHAR2(9),
mgr NUMBER(4,0),
hiredate DATE,
sal NUMBER(7,2),
comm NUMBER(7,2),
deptno NUMBER(2,0),
CONSTRAINT pk_emp PRIMARY KEY (empno));
CREATE OR REPLACE PROCEDURE get_emp_rs (p_deptno IN  prabin_emp.deptno%TYPE, p_recordset OUT SYS_REFCURSOR) AS 
BEGIN
OPEN p_recordset FOR
SELECT ename,
empno,
deptno
FROM prabin_emp
WHERE deptno = p_deptno
ORDER BY ename;
END get_emp_rs;
/
Procedure GET_EMP_RS compiled
Example: Returning result using Function:
CREATE OR REPLACE FUNCTION get_emp_rs_test(
p_deptno IN NUMBER)
RETURN sys_refcursor
AS
p_recordset sys_refcursor;
BEGIN
get_emp_rs(p_deptno,p_recordset);
RETURN p_recordset;
END;
/
Function GET_EMP_RS_TEST compiled
SELECT get_emp_rs_test(30) FROM dual;
Output:
{<ENAME=ALLEN,EMPNO=7499,DEPTNO=30>,
<ENAME=BLAKE,EMPNO=7698,DEPTNO=30>,
<ENAME=JAMES,EMPNO=7900,DEPTNO=30>,
<ENAME=MARTIN,EMPNO=7654,DEPTNO=30>,
<ENAME=TURNER,EMPNO=7844,DEPTNO=30>,
<ENAME=WARD,EMPNO=7521,DEPTNO=30>,}
Example: Returning Result Using PL/SQL
SET SERVEROUTPUT ON 
DECLARE
l_cursor SYS_REFCURSOR;
l_ename prabin_emp.ename%TYPE;
l_empno prabin_emp.empno%TYPE;
l_deptno prabin_emp.deptno%TYPE;
BEGIN
get_emp_rs (p_deptno => 30,
p_recordset => l_cursor);
LOOP
FETCH l_cursor
INTO l_ename, l_empno, l_deptno;
EXIT WHEN l_cursor%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(l_ename || '| ' || l_empno || '| ' || l_deptno);
END LOOP;
CLOSE l_cursor;
END;
/
PL/SQL procedure successfully completed.
Output:
ALLEN | 7499 | 30
BLAKE | 7698 | 30
JAMES | 7900 | 30
MARTIN | 7654 | 30
TURNER | 7844 | 30
WARD | 7521 | 30
This is not about Angry Birds; it is about the ANGRY DBA. Want to make them happy and get your job done? Whenever you are working on an issue that involves the DBA, I recommend you provide as much information as possible. Below, I list the questions a DBA might ask whenever a developer, or anyone else, seeks their help. Check out the questions, answer as many as you can before reaching out to the DBA, and include the answers in your email; sooner or later they will ask you these questions. Everyone says communication is key, but to me, providing correct information is the key, more so than communication without the key information.

Questions DBA ask during troubleshooting:
  1. What is the ORA error code?
  2. What is the name of Database Instance name? 
  3. What is the table name?
  4. What is the schema name?
  5. What is the userid/username used to login?
  6. What arguments are being passed to the procedure/function? What is the expected result? What is the exact ORA error message?
  7. Can you reproduce the error message?
  8. Can you send the data in a flat file (txt), either comma or pipe delimited? Never send data in an Excel document; DBAs prefer pipe over comma.
  9. Can you send the query?
  10. Do you have an Execution Plan? 
  11. Has the data volume changed recently?
  12. Has any DDL change made recently?
  13. When was the last time it worked?
  14. Are there any dependencies?
  15. What is the impact to the business?
  16. How long do we keep the data?
  17. Do you have a purge process?
You may not have answers to all the questions, but it does not hurt to ask the DBA where to check to get the answers they want. It is better to ask than to provide information you are not sure about. If you are a developer, get familiar with these questions and answers and make your life, and your DBA's, better.

I WISH the requester would send the data file in the format we want, so the load would be done in a few minutes. Hey, it does not hurt to ask the requester for the format you need to make your load process easier, but you may or may not get what you requested. The most frequently used file type in most organizations today is the Excel document. There are various issues with loading data from Excel, since most databases do not support it directly and you may need a third-party utility. Most databases load data from a flat file with fields separated by commas. We will discuss converting an Excel file into a file that is database-load-ready, and the problems we may encounter with the CSV file.

Converting Excel into CSV (Comma Separated Values): This is an easy process that anyone can do with a few clicks. You just save your Excel document with "Save as type: CSV (Comma delimited) (*.csv)". A new *.csv file is created, which you can open with your favorite text editor, like Notepad++, to view it as a comma-separated file. You can then run your loader against this flat file to load your data.

(Screenshots of the Excel sheet and the resulting CSV opened in Notepad++ omitted.)
I used SQL*Loader in Oracle to load the data; the load completed without any rejects, and life was GOOD.

Life may not always be good, as we all know. Recently, I was loading a data file from an Excel document and followed exactly the steps above. Only 2855 out of 3000 records loaded successfully with SQL*Loader. What was wrong? To find out, I reviewed the .bad and .log files and investigated the data that didn't load: it had commas inside some fields. When you have a comma within the data and the delimiter is also a comma, the data no longer aligns with the columns, and the load throws errors such as invalid data type or data too long. Let's go over an example of data with embedded commas and how to tackle this problem.

(Screenshots of the Excel sheet and the problem CSV omitted.)
Do you see the problem here? We have 5 columns, but the data row has eight fields for 5 columns. The number of commas per line should always be the number of columns minus one; if not, rows either fail to load or load into the wrong columns.

Resolution: Change the comma-delimited file into a pipe-delimited file. When I tried to replace the commas with pipes in Notepad++, it replaced every comma, including the ones inside the data, which was no good. There are two solutions I came across; use the one you like most. The idea is to convert the .CSV file into a pipe-delimited file, or to use a special character that does not appear within the data.

Solution 1:
 Open the .CSV file and save it as a tab delimited file.
 Open the tab delimited file in Notepad++ and replace tab with pipe.
 Save it.

Now there are 5 columns and 5 fields in each line of the data file. How do you like that?
Solution 2: 
  1. Make sure Excel is closed
  2. Navigate to control panel
  3. Select ‘Region and Language’
  4. Click the ‘Additional Settings’ button
  5. Find the List separator and change it from a comma to your preferred delimiter such as a pipe (|).
  6. Click OK
  7. Click OK
  8. Exit Control panel
  9. Open the Excel file you want to export to a pipe delimited file
  10. Select File, Save As
  11. Change the ‘Save as type’ to ‘CSV (Comma delimited)(*.csv)’
  12. Change the name and file extension if you want; by default it stays .csv even though it uses a different delimiter
  13. Click Save
  14. Click OK
  15. Click Yes

It produces the same result as above. Solution #2 is what I found online when I wasn't convinced by my solution #1 and figured there had to be a better way; I found Barry Stevens' solution, and it works well.
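Whichever delimiter you settle on, it helps to verify the field count per line before loading. A quick sketch with awk (the file name and column count are hypothetical, and this naive check does not understand quoted CSV fields):

```shell
# Print any line of data.csv whose comma-separated field count is not 5,
# i.e. rows where an embedded comma would shift data into the wrong column.
awk -F',' 'NF != 5 {print NR": "$0}' data.csv
```

Running this before SQL*Loader turns a silent mis-load into an explicit list of line numbers to fix.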

Excel documents are not the best file type for loading data; therefore, I always request a flat file, which your loader loves, and failing that I always convert to a pipe-delimited file, NOT comma-separated values. Otherwise you will run into issues like I did, and sometimes you may not even notice: the load succeeds but the data lands in the wrong columns. Always check the data file and ensure it does not contain the delimiter you are going to use in SQL*Loader or whatever loader utility your DBMS supports. A comma is far more likely to appear in data than a pipe, so check and verify before choosing a delimiter. Do not use .csv; use pipe-separated values, or something that isn't used in the file.
A cursor is a work area, a section of memory, in which SQL statements are processed by the Oracle server. A cursor is a pointer to a private SQL area that stores information about the processing of a SELECT statement or a Data Manipulation Language statement such as INSERT, UPDATE, DELETE or MERGE. PL/SQL has two types of cursor: implicit and explicit. This article is about explicit cursors, but it starts with some background on implicit cursors.

Implicit CURSOR:
Implicit cursors are declared automatically for all DML and SELECT statements issued within a PL/SQL block. The Oracle server creates an implicit cursor whenever such a SQL statement is executed. The cursor is a work area in memory where the statement is processed and its result is held. Several cursor attributes (such as SQL%FOUND and SQL%ROWCOUNT) let you check whether the statement affected any rows, and how many. Implicit cursors are used for statements that return only one row. You can learn more about implicit cursors here.
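The same idea shows up in other database interfaces. For comparison only, in Python's DB-API a cursor's rowcount plays the role of SQL%ROWCOUNT after a DML statement; a minimal sqlite3 sketch (the emp table and its rows are made up for illustration, not Oracle's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("KING", 10), ("BLAKE", 30), ("CLARK", 10)])

# Like checking SQL%ROWCOUNT on Oracle's implicit cursor:
# how many rows did the last DML statement affect?
cur = conn.execute("UPDATE emp SET deptno = 20 WHERE deptno = 10")
print(cur.rowcount)  # number of rows updated
```

The engine fills in the count automatically, just as Oracle maintains the implicit cursor's attributes without any declaration from you.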

Explicit CURSOR:
An explicit cursor names the otherwise unnamed work area in which the database stores processing information when it executes a multiple-row query. Once the work area is named, you can access its information and process the rows of the query individually.

PL/SQL Cursor: Example 1
SET serveroutput ON
DECLARE
  CURSOR cur_emp IS
    SELECT ename, empno, deptno FROM prabin_emp;
  c_ename  prabin_emp.ename%TYPE;
  c_empno  prabin_emp.empno%TYPE;
  c_deptno prabin_emp.deptno%TYPE;
BEGIN
  OPEN cur_emp;
  LOOP
    FETCH cur_emp INTO c_ename, c_empno, c_deptno;
    EXIT WHEN cur_emp%NOTFOUND;
    dbms_output.put_line(c_ename||' '||c_empno||' '||c_deptno);
  END LOOP;
  CLOSE cur_emp;
END;
/
OUTPUT:
PL/SQL procedure successfully completed.

KING 7839 10
BLAKE 7698 30
CLARK 7782 10
JONES 7566 20
SCOTT 7788 20
FORD 7902 20
SMITH 7369 20
ALLEN 7499 30
WARD 7521 30
MARTIN 7654 30
TURNER 7844 30
ADAMS 7876 20
JAMES 7900 30
MILLER 7934 10
PL/SQL Cursor: Example 2 is an alternative to Example 1 and is what I prefer in my PL/SQL, since a single %ROWTYPE record replaces the per-column variables.
SET serveroutput ON
DECLARE
  CURSOR cur_emp IS
    SELECT ename, empno, deptno FROM prabin_emp;
  l_cur_emp cur_emp%ROWTYPE;
BEGIN
  OPEN cur_emp;
  LOOP
    FETCH cur_emp INTO l_cur_emp;
    EXIT WHEN cur_emp%NOTFOUND;
    dbms_output.put_line(l_cur_emp.ename||' '||l_cur_emp.empno||' '||l_cur_emp.deptno);
  END LOOP;
  CLOSE cur_emp;
END;
/
OUTPUT:
PL/SQL procedure successfully completed.

KING 7839 10
BLAKE 7698 30
CLARK 7782 10
JONES 7566 20
SCOTT 7788 20
FORD 7902 20
SMITH 7369 20
ALLEN 7499 30
WARD 7521 30
MARTIN 7654 30
TURNER 7844 30
ADAMS 7876 20
JAMES 7900 30
MILLER 7934 10 
A parameterized cursor takes in a value, much like a stored procedure. Cursors become more reusable with cursor parameters. Default values can be assigned to cursor parameters. The scope of a cursor parameter is local to the cursor. Below are some examples of parameterized cursors.

Parameterized Cursor: Example 1
SET serveroutput ON
DECLARE
  CURSOR cur_emp(p_deptno IN NUMBER) IS
    SELECT ename, empno, deptno
      FROM prabin_emp
     WHERE deptno = p_deptno;

  l_cur_emp cur_emp%ROWTYPE;
BEGIN
  OPEN cur_emp(10);
  LOOP
    FETCH cur_emp INTO l_cur_emp;
    EXIT WHEN cur_emp%NOTFOUND;
    dbms_output.put_line(l_cur_emp.ename||' '||l_cur_emp.empno||' '||l_cur_emp.deptno);
  END LOOP;
  CLOSE cur_emp;
END;
/
OUTPUT:
PL/SQL procedure successfully completed.

KING 7839 10
CLARK 7782 10
MILLER 7934 10
Parameterized Cursor: Example 2
SET serveroutput ON
DECLARE
  CURSOR cur_emp(p_deptno IN NUMBER) IS
    SELECT ename, empno, deptno
      FROM prabin_emp
     WHERE deptno = p_deptno;

  l_cur_emp cur_emp%ROWTYPE;
BEGIN
  OPEN cur_emp(10);
  LOOP
    FETCH cur_emp INTO l_cur_emp;
    EXIT WHEN cur_emp%NOTFOUND;
    dbms_output.put_line(l_cur_emp.ename||' '||l_cur_emp.empno||' '||l_cur_emp.deptno);
  END LOOP;
  CLOSE cur_emp;
  OPEN cur_emp(30);
  LOOP
    FETCH cur_emp INTO l_cur_emp;
    EXIT WHEN cur_emp%NOTFOUND;
    dbms_output.put_line(l_cur_emp.ename||' '||l_cur_emp.empno||' '||l_cur_emp.deptno);
  END LOOP;
  CLOSE cur_emp;
END;
/
OUTPUT:
PL/SQL procedure successfully completed.
KING 7839 10
CLARK 7782 10
MILLER 7934 10
BLAKE 7698 30
ALLEN 7499 30
WARD 7521 30
MARTIN 7654 30
TURNER 7844 30
JAMES 7900 30
Parameterized Cursor: Example with Default value:
SET serveroutput ON
DECLARE
  CURSOR cur_emp(p_deptno IN NUMBER := 10) IS
    SELECT ename, empno, deptno
      FROM prabin_emp
     WHERE deptno = p_deptno;

  l_cur_emp cur_emp%ROWTYPE;
BEGIN
  OPEN cur_emp;
  LOOP
    FETCH cur_emp INTO l_cur_emp;
    EXIT WHEN cur_emp%NOTFOUND;
    dbms_output.put_line(l_cur_emp.ename||' '||l_cur_emp.empno||' '||l_cur_emp.deptno);
  END LOOP;
  CLOSE cur_emp;
END;
/
OUTPUT:
PL/SQL procedure successfully completed.

KING 7839 10
CLARK 7782 10
MILLER 7934 10
When no argument is supplied, the cursor takes deptno as 10 and executes. Assigning a default value like this helps prevent failures and is a good habit when writing PL/SQL cursors.