Steve Bobrowski
Transparent Data Encryption: Experience from the Trenches

Introduced as part of the Oracle Advanced Security Option (Release 10.2 and later), Oracle Database Transparent Data Encryption (TDE) allows you to selectively encrypt sensitive database data maintained in the underlying data files of a database, as well as in all downstream file components such as online redo logs, archived redo logs, and database backups. The basic goal of TDE is to protect the sensitive data found in these raw operating system files from prying eyes should a nefarious person gain access to a disk or backup tape and then try to either restore the database or scan the raw operating system files for data such as personally identifiable information or credit card information.

I've implemented TDE several times as part of my consulting practice. But until one of my most recent engagements, I had always used TDE either to encrypt a new column of an existing table or a column that was part of a brand new table. Working with TDE in both of these cases is straightforward because the target columns are empty, so there's not much risk involved due to the lack of data and existing application dependencies.

My most recent experience implementing TDE was different. I was helping a large company encrypt an existing column in a table that already had more than one million rows. There was also a mission-critical application that depended on the column, so, as you can imagine, there were a lot of important things to ponder before diving in. After searching the Internet for people's experiences in similar situations, I found only a few good resources to help me.

This article is a digest of what I learned going through the process of using TDE to encrypt existing data. My hope is that what I learned will help you expedite a similar experience should you try to use TDE with existing column data. Please note that this article is not an introduction to TDE; if you would like to learn the fundamentals of implementing TDE, refer to the documentation (or see this tutorial). 
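
For context only, the ALTER TABLE ... ENCRYPT statements in the listings below assume that a TDE wallet has already been configured and opened. A minimal sketch of that prerequisite, using a hypothetical wallet directory declared in sqlnet.ora (consult the documentation for the full procedure):

-- In sqlnet.ora, point the database at a wallet directory
-- (the path below is a hypothetical example):
--   ENCRYPTION_WALLET_LOCATION =
--     (SOURCE = (METHOD = FILE)
--       (METHOD_DATA = (DIRECTORY = /etc/oracle/wallet)))

-- Create the wallet and the TDE master key; this also leaves the wallet open
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";

-- After a database restart, reopen the wallet before using encrypted columns
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";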

Identifying Possible Restrictions

The first thing I did when researching my client's system was to look for data model characteristics of the target column that would prohibit us from encrypting the column altogether, or that might negatively affect existing operations once it was encrypted. This research included looking for column indexes and integrity constraints.
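
The index side of this check is a quick data dictionary query (the constraint check appears a little later). A sketch, using the demo owner, table, and column names from the listings in this article:

-- List every index that includes the target column
SELECT index_owner, index_name, column_position
  FROM dba_ind_columns
 WHERE table_owner = 'APP_001'
   AND table_name  = 'TRANSACTIONS'
   AND column_name = 'CREDIT_CARD';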

As the Oracle documentation clearly states, there are a number of restrictions to understand when you want to encrypt a column that has indexes. Oracle does not permit the encryption of a column that has a bitmap index, which was not an issue in our situation. However, the target column did have a couple of normal (B-tree) indexes. While Oracle does permit the encryption of a column that has a normal index, it prohibits "salted" encryption of an indexed column. Salting strengthens the security of repetitious data by adding a random string to the data before encryption, making it more difficult for thieves to use pattern-matching techniques to decipher the encrypted data. In summary, after this initial analysis we were left with a situation where we could encrypt the column, but not with salt.
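
To make the restriction concrete, here is a minimal sketch against the demo table used in the listings below; the default form of ENCRYPT applies salt, so Oracle rejects it on an indexed column, while NO SALT succeeds:

-- Fails: ENCRYPT defaults to salted encryption, and CREDIT_CARD is indexed
ALTER TABLE app_001.transactions
  MODIFY (credit_card ENCRYPT);

-- Succeeds: salt is explicitly disabled
ALTER TABLE app_001.transactions
  MODIFY (credit_card ENCRYPT NO SALT);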

I could have stopped here with the analysis of the column's indexes, but the next question I wanted to answer was, "Are the indexes in place being used?" My thought process was this: if the indexes were not useful, I'd get rid of them and relieve the system of the overhead necessary to maintain the index entries, especially considering the extra burden of encryption. To judge whether the indexes were useful, I used Oracle Database's index monitoring feature. I found that, indeed, the indexes were being used, so we would have to maintain them going forward.
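
Index monitoring itself takes just a couple of statements. A rough sketch using the demo index name; note that V$OBJECT_USAGE reports only on indexes owned by the connected user, so check it while connected as the index owner:

-- Start recording whether the optimizer uses the index
ALTER INDEX app_001.transactions_ndx1 MONITORING USAGE;

-- ...run a representative application workload...

-- Connected as APP_001, see whether the index was used
SELECT index_name, monitoring, used, start_monitoring, end_monitoring
  FROM v$object_usage
 WHERE index_name = 'TRANSACTIONS_NDX1';

-- Turn monitoring off when you have your answer
ALTER INDEX app_001.transactions_ndx1 NOMONITORING USAGE;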

Next, I looked to see if the target column was involved in a referential integrity constraint. Because each table has its own encryption key, Oracle does not permit you to use TDE to encrypt columns involved in a foreign key relationship. In our situation, the target column was not involved in a referential integrity constraint.
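
Checking for foreign key involvement is another data dictionary query. A sketch that looks in both directions, again assuming the demo names: the first query finds foreign keys defined on the column, and the second finds foreign keys elsewhere that reference a key defined on the column.

-- Foreign key constraints that include the target column
SELECT c.owner, c.constraint_name
  FROM dba_constraints c
  JOIN dba_cons_columns cc
    ON cc.owner = c.owner
   AND cc.constraint_name = c.constraint_name
 WHERE c.constraint_type = 'R'
   AND cc.owner       = 'APP_001'
   AND cc.table_name  = 'TRANSACTIONS'
   AND cc.column_name = 'CREDIT_CARD';

-- Foreign keys that reference a key defined on the target column
SELECT fk.owner, fk.constraint_name
  FROM dba_constraints fk
  JOIN dba_cons_columns pkc
    ON pkc.owner = fk.r_owner
   AND pkc.constraint_name = fk.r_constraint_name
 WHERE fk.constraint_type = 'R'
   AND pkc.table_name     = 'TRANSACTIONS'
   AND pkc.column_name    = 'CREDIT_CARD';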

Assessing Performance Overhead

One of the first questions that my client asked was, "How will TDE affect the general performance of my application?" The Oracle documentation has a small section that discusses, in general terms, how TDE will affect the performance of dependent applications. But my customer wanted some concrete statistics to give them an idea of how TDE would affect a time-critical data load process that occurred every day.

To address my client's needs, I calculated the average number of rows inserted into the target table every day during the time-critical process. Then, I created a similar test table and indexes in the client's identical sandbox environment, and measured how much time it took to insert the same number of rows both before and after encrypting the target column. The difference between these elapsed times gave us a reasonably good idea of the "performance penalty" of encrypting the column data during the process. Listing 1 is an example of how I did this using SQL*Plus.

SQL> CONNECT system
Enter password:
Connected.
SQL> -- Configure Oracle-Managed (Data) Files
SQL> ALTER SYSTEM
  2    SET db_create_file_dest = '/data01/oracle/'
  3    SCOPE = MEMORY;

System altered.

SQL> -- Create two new tablespaces for the demo,
SQL> -- one for data segments, one for index segments
SQL> CREATE TABLESPACE data_001
  2    DATAFILE SIZE 1G;

Tablespace created.

SQL> CREATE TABLESPACE indx_001
  2    DATAFILE SIZE 500M;

Tablespace created.

SQL> -- Create a user for the demo
SQL> CREATE USER app_001 IDENTIFIED BY app
  2   DEFAULT TABLESPACE data_001
  3   TEMPORARY TABLESPACE temp
  4   QUOTA UNLIMITED ON data_001
  5   QUOTA UNLIMITED ON indx_001;

User created.

SQL> GRANT CREATE SESSION, CREATE TABLE TO app_001;

Grant succeeded.

SQL> -- Work as the demo user
SQL> CONNECT app_001/app;
Connected.
SQL> -- Create the demo table in the default tablespace
SQL> CREATE TABLE app_001.transactions (
  2    trans_id INTEGER
  3      CONSTRAINT transactions_pk PRIMARY KEY
  4        USING INDEX TABLESPACE indx_001,
  5    credit_card INTEGER NOT NULL
  6  );

Table created.

SQL> -- Create an index in the INDX_001 tablespace
SQL> CREATE INDEX app_001.transactions_ndx1
  2   ON app_001.transactions(credit_card)
  3   TABLESPACE indx_001;

Index created.

SQL> -- Time how long it takes to load data in the clear
SQL> SET TIMING ON;
SQL> BEGIN
  2    -- AMEX
  3    FOR i IN 1 .. 100000 LOOP
  4      INSERT INTO app_001.transactions(trans_id, credit_card)
  5      VALUES (
  6        i,
  7        '34' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
  8      );
  9    END LOOP;
 10    COMMIT;
 11    -- VISA
 12    FOR i IN 100001 .. 400000 LOOP
 13      INSERT INTO app_001.transactions(trans_id, credit_card)
 14      VALUES (
 15        i,
 16        '4' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>999999999999999))
 17      );
 18    END LOOP;
 19    COMMIT;
 20    -- MASTERCARD
 21    FOR i IN 400001 .. 500000 LOOP
 22      INSERT INTO app_001.transactions(trans_id, credit_card)
 23      VALUES (
 24        i,
 25        '54' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
 26      );
 27    END LOOP;
 28    COMMIT;
 29  END;
 30  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:56.14
SQL> SET TIMING OFF;
SQL> -- Remove existing synthetic data
SQL> TRUNCATE TABLE app_001.transactions;

Table truncated.

SQL> -- Enable encryption of the credit card column
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card ENCRYPT NO SALT);

Table altered.

SQL> -- Time how long it takes to load encrypted data
SQL> SET TIMING ON;
SQL> BEGIN
  2    -- AMEX
  3    FOR i IN 1 .. 100000 LOOP
  4      INSERT INTO app_001.transactions(trans_id, credit_card)
  5      VALUES (
  6        i,
  7        '34' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
  8      );
  9    END LOOP;
 10    COMMIT;
 11    -- VISA
 12    FOR i IN 100001 .. 400000 LOOP
 13      INSERT INTO app_001.transactions(trans_id, credit_card)
 14      VALUES (
 15        i,
 16        '4' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>999999999999999))
 17      );
 18    END LOOP;
 19    COMMIT;
 20    -- MASTERCARD
 21    FOR i IN 400001 .. 500000 LOOP
 22      INSERT INTO app_001.transactions(trans_id, credit_card)
 23      VALUES (
 24        i,
 25        '54' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
 26      );
 27    END LOOP;
 28    COMMIT;
 29  END;
 30  /

PL/SQL procedure successfully completed.

Elapsed: 00:01:16.31
SQL> SET TIMING OFF;
Listing 1 Using a sandbox environment that is identical to your production environment, a simple comparison of the time it takes to load a representative data set with and without column encryption enabled will give you a good expectation of how column encryption will affect the performance of your production system.

As with all performance tests, I suspect that the performance penalty of column encryption will vary from system to system depending on the usual variables (CPUs, average load, etc.). In Listing 1, you'll notice that the calculated performance penalty is about 36% (((76.31-56.14)/56.14)*100); however, using the experimental evidence that we collected in my client's system, our expectation was about an 11% increase in elapsed time for the data load process, which turned out to be right on the money when we went into production with TDE.

In this example, I've focused on estimating the performance penalty of data encryption for a data load process with indexes in place. If your system has different types of critical processes, such as a demanding report generation cycle, I advise you to use your sandbox environment to compare how long each process takes before and after data encryption. The section "Identifying Potential Query Plan Changes" later in this article discusses unique considerations for queries and data encryption.
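
The mechanics are the same as in Listing 1: run the process with SET TIMING ON before and after encrypting the column, and compare the elapsed times. A minimal sketch with a hypothetical report query against the demo table:

SET TIMING ON
-- Hypothetical representative report query; substitute your own workload
SELECT COUNT(*), MIN(trans_id), MAX(trans_id)
  FROM app_001.transactions
 WHERE credit_card BETWEEN 3400000000000000 AND 3499999999999999;
SET TIMING OFF

-- Encrypt the column, rerun the same query with timing on, and compare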

Dealing with Outages and Maintenance Windows

Another valid concern of my client was what, if any, production application outage would be necessary while encrypting the existing column data in a table of approximately one million rows. My initial thought was that, theoretically, no outage would be necessary. After all, the Oracle documentation states that encrypting an existing column's data is essentially a multi-row update of the entire table, so at first glance I didn't see why concurrent inserts of new rows and updates of existing rows couldn't proceed during the encryption. And as I mumbled that familiar Oracle mantra "readers don't block writers, writers don't block readers," I certainly didn't expect queries to be affected by the column encryption.

But I've been doing this DBA job long enough to know that it's important to test your theories and avoid unexpected problems before you make an actual change to a production system. And lo and behold, I discovered issues when I tested the application itself against the sandbox database during the column encryption. Most importantly, I found that the ongoing encryption slowed the response time of some queries enough that the application experienced response timeouts. These timeouts, in turn, caused disconnects, which then caused subsequent transactions to fail, and things got really messy from there; I'll spare you the details.

Suffice it to say, after the tests, I knew that an outage was certainly justified. But the next question was how long the production application would need to be offline. Would we be able to encrypt the column during the normal two-hour maintenance window scheduled every weekend, or would more downtime be required? To figure this out, I simply measured how long it took to encrypt the column in the sandbox environment, since it had the same server hardware and data set as the production environment. I found that it took a little more than an hour to finish the column encryption. Frankly, I was shocked at how long it took, considering that simulated test encryption runs on a laptop with similar data took less than five minutes. But what mattered was what would happen on the antiquated server hardware that would be in play when we encrypted the column in the production database system.

Knowing that we needed more time to carry out other tasks during the normal maintenance window, I decided to find a way to reduce the hour it took to encrypt the column. My first instinct was to drop the two indexes that included the target column. That way, Oracle would only have to encrypt the column data in the table itself, after which I could efficiently rebuild the indexes without the overhead of logging. After some new tests, I dropped the time necessary to encrypt the column and rebuild the related indexes from 70 minutes (indexes in place during encryption) to just 20 minutes (indexes rebuilt after encrypting the column). Listing 2 is an example of the tests that I used to come to my conclusions (continuing from where Listing 1 left off). Again, note that the timings in the listing are from a test system used to write this article, not the actual system that my client was using.

SQL> -- Remove existing synthetic data
SQL> TRUNCATE TABLE app_001.transactions;

Table truncated.

SQL> -- Disable encryption of the credit card column
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card DECRYPT);

Table altered.

SQL> -- Load new synthetic data
SQL> BEGIN
  2    -- AMEX
  3    FOR i IN 1 .. 100000 LOOP
  4      INSERT INTO app_001.transactions(trans_id, credit_card)
  5      VALUES (
  6        i,
  7        '34' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
  8      );
  9    END LOOP;
 10    COMMIT;
 11    -- VISA
 12    FOR i IN 100001 .. 400000 LOOP
 13      INSERT INTO app_001.transactions(trans_id, credit_card)
 14      VALUES (
 15        i,
 16        '4' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>999999999999999))
 17      );
 18    END LOOP;
 19    COMMIT;
 20    -- MASTERCARD
 21    FOR i IN 400001 .. 500000 LOOP
 22      INSERT INTO app_001.transactions(trans_id, credit_card)
 23      VALUES (
 24        i,
 25        '54' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
 26      );
 27    END LOOP;
 28    COMMIT;
 29  END;
 30  /

PL/SQL procedure successfully completed.

SQL> -- Time how long it takes to encrypt credit card data
SQL> -- with corresponding indexes in place
SQL> SET TIMING ON;
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card ENCRYPT NO SALT);

Table altered.

Elapsed: 00:02:27.18
SQL> SET TIMING OFF;
SQL> -- Remove existing synthetic data
SQL> TRUNCATE TABLE app_001.transactions;

Table truncated.

SQL> -- Drop all indexes that correspond to the credit card column
SQL> DROP INDEX app_001.transactions_ndx1;

Index dropped.

SQL> -- Disable encryption of the credit card column
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card DECRYPT);

Table altered.

SQL> -- Load new synthetic data
SQL> BEGIN
  2    -- AMEX
  3    FOR i IN 1 .. 100000 LOOP
  4      INSERT INTO app_001.transactions(trans_id, credit_card)
  5      VALUES (
  6        i,
  7        '34' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
  8      );
  9    END LOOP;
 10    COMMIT;
 11    -- VISA
 12    FOR i IN 100001 .. 400000 LOOP
 13      INSERT INTO app_001.transactions(trans_id, credit_card)
 14      VALUES (
 15        i,
 16        '4' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>999999999999999))
 17      );
 18    END LOOP;
 19    COMMIT;
 20    -- MASTERCARD
 21    FOR i IN 400001 .. 500000 LOOP
 22      INSERT INTO app_001.transactions(trans_id, credit_card)
 23      VALUES (
 24        i,
 25        '54' || TRUNC(DBMS_RANDOM.VALUE(low=>0, high=>99999999999999))
 26      );
 27    END LOOP;
 28    COMMIT;
 29  END;
 30  /

PL/SQL procedure successfully completed.

SQL> -- Time how long it takes to:
SQL> -- 1. Encrypt credit card data without corresponding indexes in place
SQL> -- 2. Recreate corresponding indexes
SQL> SET TIMING ON;
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card ENCRYPT NO SALT);

Table altered.

Elapsed: 00:01:15.48
SQL> CREATE INDEX app_001.transactions_ndx1
  2   ON app_001.transactions(credit_card)
  3   TABLESPACE indx_001
  4   PARALLEL 2
  5   NOLOGGING;

Index created.

Elapsed: 00:00:02.98
SQL> SET TIMING OFF;
Listing 2 To expedite the process of encrypting existing data, simply drop the underlying indexes of a column before you encrypt it and then rebuild the indexes afterward.

Note: The CREATE INDEX statements are already known in this article's simulated environment. In a real-world setting, consider using Oracle Database's DBMS_METADATA utility package to generate CREATE INDEX statements that you can use to recreate the indexes after data encryption is complete.
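
For example, something like the following captures the DDL for every index on the target column before you drop them; the output can be spooled to a script and run after the encryption completes:

-- DBMS_METADATA returns CLOBs, so widen the SQL*Plus display first
SET LONG 100000
SET PAGESIZE 0

-- Generate CREATE INDEX DDL for every index that includes the column
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, index_owner)
  FROM dba_ind_columns
 WHERE table_owner = 'APP_001'
   AND table_name  = 'TRANSACTIONS'
   AND column_name = 'CREDIT_CARD';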

In summary, the new strategy of rebuilding the indexes after column encryption left much more time to deal with the most challenging problem in the entire process, as the next section explains.

Removing Ghost Copies of Unencrypted Data

Oracle and the underlying host operating systems use optimized algorithms to update data in data blocks, with the goal of minimizing performance-sapping disk I/O. In the specific case of encrypting existing column data, what commonly happens is that Oracle writes the encrypted column data to new data blocks and simply marks the space consumed by the former unencrypted values as unused. In other words, Oracle makes no attempt to clean out the older unencrypted data. As long as the system in question experiences a steady amount of update activity, you can be reasonably sure that Oracle will eventually overwrite the older unencrypted data when reusing block space. But considering that my client was preparing for a regulatory compliance audit, we had to be sure that the unencrypted sensitive data was wiped clean immediately after the encryption process.

After doing some research, I found an FAQ at the Oracle Technology Network Website as well as a blog post that confirmed this specific issue and gave some basic thoughts on how to address it. The general idea is to move all segments with previously unencrypted data to a new tablespace (and data file), and then use an operating system utility to shred the old data files. But this synopsis makes the entire process sound simpler than it really is. In truth, you'll most likely need to move many other segments along with the segments that contain sensitive data before you can safely drop the old tablespace and shred its data files.

To automate this laborious, somewhat error-prone process, I put together some scripts to help me build the DDL statements necessary to get everything done. I should give Tom Kyte a tip of the hat, as some of the work here is a modified version of a query that I found on the AskTom site. Listing 3 shows an example of the entire process that I used.

SQL> CONNECT system
Enter password:
Connected.
SQL> -- Create new tablespaces for data and index segments
SQL> CREATE TABLESPACE data_002 DATAFILE SIZE 1G;

Tablespace created.

SQL> CREATE TABLESPACE indx_002 DATAFILE SIZE 500M;

Tablespace created.

SQL> -- Generate a script to move existing segments to new tablespaces
SQL> COL ORDER_COL1 NOPRINT;
SQL> COL ORDER_COL2 NOPRINT;
SQL> SET HEADING OFF;
SQL> SET VERIFY OFF;
SQL> SET ECHO OFF;
SQL> SELECT DECODE( segment_type, 'TABLE' , segment_name, table_name ) order_col1,
  2         DECODE( segment_type, 'TABLE', 1, 2 ) order_col2,
  3         'ALTER ' || segment_type || ' ' || LOWER(owner) || '.' || LOWER(segment_name) ||
  4         DECODE( segment_type, 'TABLE', ' MOVE ', ' REBUILD ' ) ||
  5         'TABLESPACE ' || LOWER(DECODE( segment_type, 'TABLE' , '&&NEW_DATA_TBS' , '&&NEW_INDX_TBS' )) || ';'
  6     FROM dba_segments,
  7          (SELECT table_name, index_name FROM dba_indexes WHERE tablespace_name = UPPER('&&OLD_INDX_TBS'))
  8    WHERE segment_type in ( 'TABLE', 'INDEX' )
  9      AND segment_name = index_name (+)
 10      AND tablespace_name IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'))
 11      AND owner = UPPER('&&OWNER')
 12    ORDER BY 1, 2;
Enter value for new_data_tbs: data_002
Enter value for new_indx_tbs: indx_002
Enter value for old_indx_tbs: indx_001
Enter value for old_data_tbs: data_001
Enter value for owner: app_001

ALTER TABLE app_001.transactions MOVE TABLESPACE data_002;
ALTER INDEX app_001.transactions_pk REBUILD TABLESPACE indx_002;
ALTER INDEX app_001.transactions_ndx1 REBUILD TABLESPACE indx_002;

SQL> SET HEADING ON;
SQL> SET VERIFY ON;
SQL> SET ECHO ON;
SQL> -- execute script output
SQL> ALTER TABLE app_001.transactions MOVE TABLESPACE data_002;

Table altered.

SQL> ALTER INDEX app_001.transactions_pk REBUILD TABLESPACE indx_002;

Index altered.

SQL> ALTER INDEX app_001.transactions_ndx1 REBUILD TABLESPACE indx_002;

Index altered.

SQL> -- Check for any unusable indexes
SQL> SELECT owner, index_name, tablespace_name
  2    FROM dba_indexes
  3   WHERE STATUS = 'UNUSABLE';

no rows selected

SQL> -- Gather new schema stats
SQL> EXEC dbms_stats.gather_schema_stats('app_001');

PL/SQL procedure successfully completed.

SQL> -- Check for remaining segments in old tablespaces
SQL> SELECT distinct owner
  2    FROM dba_segments
  3   WHERE tablespace_name IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'));
old   3:  WHERE tablespace_name IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'))
new   3:  WHERE tablespace_name IN (UPPER('data_001'), UPPER('indx_001'))

no rows selected

SQL> -- Check for users assigned to old tablespaces
SQL> SELECT username, default_tablespace FROM dba_users
  2   WHERE default_tablespace IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'));
old   2:  WHERE default_tablespace IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'))
new   2:  WHERE default_tablespace IN (UPPER('data_001'), UPPER('indx_001'))

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
APP_001                        DATA_001

SQL> -- Assign new default tablespaces for users, as necessary
SQL> ALTER USER app_001
  2    DEFAULT TABLESPACE data_002;

User altered.

SQL> -- List the data file names of old tablespaces
SQL> COL tablespace_name FOR A15;
SQL> COL file_name FOR A70;
SQL> SET LINES 100;
SQL> SELECT tablespace_name, file_name
  2    FROM dba_data_files
  3   WHERE tablespace_name IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'));
old   3:  WHERE tablespace_name IN (UPPER('&&OLD_DATA_TBS'), UPPER('&&OLD_INDX_TBS'))
new   3:  WHERE tablespace_name IN (UPPER('data_001'), UPPER('indx_001'))

TABLESPACE_NAME FILE_NAME
--------------- ----------------------------------------------------------------------
DATA_001        /data01/oracle/db001/datafile/o1_mf_data_001_4m081w7m_.dbf
INDX_001        /data01/oracle/db001/datafile/o1_mf_indx_001_4m082l4q_.dbf

SQL> -- Drop old tablespaces, but keep data files in place
SQL> DROP TABLESPACE data_001
  2    INCLUDING CONTENTS KEEP DATAFILES;

Tablespace dropped.

SQL> DROP TABLESPACE indx_001
  2    INCLUDING CONTENTS KEEP DATAFILES;

Tablespace dropped.

SQL> -- Shred/remove old data files
SQL> HOST shred -n 200 -z -u /data01/oracle/db001/datafile/o1_mf_data_001_4m081w7m_.dbf
SQL> HOST shred -n 200 -z -u /data01/oracle/db001/datafile/o1_mf_indx_001_4m082l4q_.dbf
Listing 3 Removing leftover unencrypted ghost copies of sensitive data requires a series of steps to address a number of dependencies.

The last step in Listing 3 is operating system specific. In this example, I demonstrate use of the Linux/Unix shred utility. Other utilities that you might want to research are the Linux/Unix wipe, scrub, and srm programs.

Preparing for Potential Rekey Operations

My client also wanted to be prepared for future situations that might justify a rekey operation. Rekeying means re-encrypting the existing data with a new key. You might do this when you suspect that someone has gained access to the existing table keys and could potentially decrypt sensitive data outside of your control.

My steps for completing a rekey operation are similar to the steps for originally encrypting existing data: drop the indexes on the target column, rekey the column's data, and then rebuild the dropped indexes. If you are also concerned about ghost copies of the data encrypted with the previous key, repeat the process of moving segments to a new tablespace, dropping the old tablespace, and scrubbing the old tablespace's data files.
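
The rekey itself is a single DDL statement per table, so the surrounding work is the same drop-indexes/rekey/rebuild sequence described earlier. A sketch against the demo table:

-- Regenerate the table key and re-encrypt the column data
ALTER TABLE app_001.transactions REKEY;

-- Or switch encryption algorithms at the same time, for example:
ALTER TABLE app_001.transactions REKEY USING 'AES256';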

Note: In my experience, PCI auditors are happy with rekeying only the master encryption key, because this does not require any data access, and the PCI standard does not include recommendations for a two-tier key architecture like Oracle's. Rekeying the master encryption key should be sufficient from a PCI compliance perspective, and PCI auditors cannot force companies that have billions of rows to shut their business down for a couple of days just for rekeying.
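
For reference, rekeying the master key in this release is also a one-liner; it re-encrypts only the table keys stored in the data dictionary, not the column data itself (10.2/11g syntax, using whatever wallet password was chosen when TDE was configured):

-- Generate a new TDE master encryption key; existing table keys are
-- re-encrypted under the new master key, but table data is untouched
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";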

Identifying Potential Query Plan Changes

The Oracle documentation, several articles, and some forum posts that I read offer mostly general information, along with a few specifics, about how existing query execution plans can change after you encrypt a column's data. In general, you have to pay attention to what happens to the execution of SQL statements when you encrypt an indexed column as opposed to a non-indexed column. When Oracle encrypts an indexed column, it also encrypts the corresponding index values. Equality predicates targeting indexed data should therefore continue to use the indexes, because an exact value can still be encrypted and matched against the index entries; however, encrypted values are not stored in the same order as their plaintext counterparts, so range scans of an encrypted index are no longer viable and such predicates typically fall back to full table scans. Listing 4 demonstrates these basic, well-documented scenarios.

SQL> CONNECT app_001
Enter password:
Connected.
SQL> -- Create a plan table
SQL> @?/rdbms/admin/utlxplan.sql;

Table created.

SQL> -- Disable encryption of the credit card column
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card DECRYPT);

Table altered.

SQL> -- Ensure schema stats are current
SQL> EXEC dbms_stats.gather_schema_stats('app_001');

PL/SQL procedure successfully completed.

SQL> -- Display some representative data
SQL> COL credit_card FOR 9999999999999999;
SQL> SELECT * FROM app_001.transactions
  2   WHERE rownum < 5;

  TRANS_ID       CREDIT_CARD
---------- -----------------
       389  3469681098409570
       390  3441050723354352
       391  3485598407754404
       392  3485458104610650

SQL> -- Enable tracing and explain plan output
SQL> SET AUTOTRACE ON EXPLAIN;
SQL> -- Demonstrate an equality predicate targeting the
SQL> -- encrypted column
SQL> SELECT * FROM app_001.transactions
  2   WHERE credit_card = 3485458104610650;

  TRANS_ID       CREDIT_CARD
---------- -----------------
       392  3485458104610650


Execution Plan
----------------------------------------------------------
Plan hash value: 32329967

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     1 |    14 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TRANSACTIONS      |     1 |    14 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TRANSACTIONS_NDX1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CREDIT_CARD"=3485458104610650)

SQL> -- Demonstrate a range predicate targeting the
SQL> -- encrypted column
SQL> SELECT * FROM app_001.transactions
  2   WHERE credit_card BETWEEN 3499990000000000 AND 3499999999999999;

  TRANS_ID       CREDIT_CARD
---------- -----------------
      4629  3499990987277941
     18597  3499993250694089
     13364  3499996558049599
     79326  3499996616476145
     60420  3499997873591732
     24392  3499998608513414
     97433  3499999831086288
     72183  3499999977925392

8 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 32329967

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     3 |    42 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TRANSACTIONS      |     3 |    42 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TRANSACTIONS_NDX1 |     3 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CREDIT_CARD">=3499990000000000 AND "CREDIT_CARD"<=3499999999999999)

SQL> -- Disable tracing and explain plan output
SQL> SET AUTOTRACE OFF;
SQL> -- Encrypt the column (and indexes)
SQL> ALTER TABLE app_001.transactions
  2   MODIFY (credit_card ENCRYPT NO SALT);

Table altered.

SQL> -- Ensure schema stats are current
SQL> EXEC dbms_stats.gather_schema_stats('app_001');

PL/SQL procedure successfully completed.

SQL> -- Enable tracing and explain plan output
SQL> SET AUTOTRACE ON EXPLAIN;
SQL> -- Rerun previous queries, compare execution plans
SQL> SELECT * FROM app_001.transactions
  2   WHERE credit_card = 3485458104610650;

  TRANS_ID       CREDIT_CARD
---------- -----------------
       392  3485458104610650


Execution Plan
----------------------------------------------------------
Plan hash value: 32329967

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     1 |    14 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TRANSACTIONS      |     1 |    14 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | TRANSACTIONS_NDX1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CREDIT_CARD"=3485458104610650)

SQL> SELECT * FROM app_001.transactions
  2   WHERE credit_card BETWEEN 3499990000000000 AND 3499999999999999;

  TRANS_ID       CREDIT_CARD
---------- -----------------
     60420  3499997873591732
      4629  3499990987277941
     18597  3499993250694089
     13364  3499996558049599
     24392  3499998608513414
     79326  3499996616476145
     72183  3499999977925392
     97433  3499999831086288

8 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1321366336

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |  1250 | 17500 |   914   (2)| 00:00:11 |
|*  1 |  TABLE ACCESS FULL| TRANSACTIONS |  1250 | 17500 |   914   (2)| 00:00:11 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(INTERNAL_FUNCTION("CREDIT_CARD")>=3499990000000000 AND
              INTERNAL_FUNCTION("CREDIT_CARD")<=3499999999999999)

SQL> -- Disable tracing and explain plan output
SQL> SET AUTOTRACE OFF;
Listing 4 Carefully identify SQL statements that reference encrypted column data, and then compare the execution plans of these statements before and after encryption to see if there are any changes.

I also wanted to know if the overhead of encryption could change plan costs and the optimizer's choices, even in cases where the documentation suggests that it should not. To make sure I knew exactly what would happen to the mission-critical application when we went into production, I did some extra work in the sandbox environment. First, I collected a list of the Top SQL statements (CPU, Gets, and I/O) from various Automatic Workload Repository (AWR) snapshots. Then, I compared the query execution plans of each SQL statement before and after the column was encrypted. My research turned up a complex query that used a predicate with several equality-based conditions, one of which targeted the column that was going to be encrypted. To my surprise, the execution plan for this query changed after encrypting the column. Unfortunately, I wasn't able to duplicate these results for this article in my test lab, and I'm still not completely sure why the query plan changed. But the reason I mention this scenario here is to point out that it is always a good idea to research the execution plans of a production application's key queries in a test environment before making a change to a production system. If I had assumed that none of the top queries would change, we would have made the change to the production system and had to scramble for a resolution with the heat on.
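
If you want to reproduce this kind of check, the AWR history views and DBMS_XPLAN make it straightforward. A sketch of the approach (the snapshot range and SQL_ID below are placeholders):

-- Rank statements by buffer gets across a range of AWR snapshots
SELECT sql_id,
       SUM(buffer_gets_delta) AS gets,
       SUM(cpu_time_delta)    AS cpu_time,
       SUM(disk_reads_delta)  AS reads
  FROM dba_hist_sqlstat
 WHERE snap_id BETWEEN 1001 AND 1010     -- placeholder snapshot range
 GROUP BY sql_id
 ORDER BY gets DESC;

-- Pull the historical plan(s) for a candidate statement, then compare
-- them with the plans you observe after encrypting the column
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('placeholder_sql_id'));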

The lesson here is that you should always test things out before making changes, despite what you read in documentation and other sources.

Conclusion

Using Oracle's TDE feature to encrypt columns in new tables, tables without any data, or new columns in existing tables is trivial because there aren't any existing dependencies that you need to worry about. In contrast, encrypting existing column data requires careful research and testing in your sandbox environment before implementing your plan in a live production system due to the many dependencies that might be affected by encryption.

Source: Oracle Technology Network Website
